arXiv:0704.0849

LRS Bianchi Type-V Viscous Fluid Universe with a Time Dependent Cosmological Term Λ
Anirudh Pradhan a,1, J. P. Shahi b and Chandra Bhan Singh c
aDepartment of Mathematics, Hindu Post-graduate College, Zamania-232 331,
Ghazipur, India
E-Addresses: [email protected], [email protected]
b,c Department of Mathematics, Harish Chandra Post-graduate College,
Varanasi, India
Abstract
An LRS Bianchi type-V cosmological model representing a viscous
fluid distribution with a time dependent cosmological term Λ is investigated.
To get a determinate solution, the viscosity coefficient of bulk
viscous fluid is assumed to be a power function of mass density. It turns
out that the cosmological term Λ(t) is a decreasing function of time, which
is consistent with recent observations of type Ia supernovae. Various phys-
ical and kinematic features of these models have also been explored.
PACS number: 98.80.Es, 98.80.-k
Key words: cosmology, variable cosmological constant, viscous universe
1 Introduction
Cosmological models representing the early stages of the Universe have been
studied by several authors. An LRS (Locally Rotationally Symmetric) Bianchi
type-V spatially homogeneous space-time is of particular interest because its
physical and geometrical structure is richer than that of the standard perfect fluid FRW
models. An LRS Bianchi type-V universe is a simple generalization of the
Robertson-Walker metric with negative curvature. Most cosmological models
assume that the matter in the universe can be described by ’dust’ (a pressure-
less distribution) or at best a perfect fluid. However, bulk viscosity is expected
to play an important role at certain stages of the expanding universe [1]−[3]. It has
been shown that bulk viscosity leads to inflationary-like solutions [4] and acts
like a negative energy field in an expanding universe [5]. Furthermore, there
are several processes which are expected to give rise to viscous effects. These
are the decoupling of neutrinos during the radiation era and the decoupling of
1Corresponding Author
radiation and matter during the recombination era. Bulk viscosity is associ-
ated with the Grand Unification Theories (GUT) phase transition and string
creation. Thus, we should consider the presence of a material distribution other
than a perfect fluid to have realistic cosmological models (see Grøn [6] for a
review on cosmological models with bulk viscosity). A number of authors have
discussed cosmological solutions with bulk viscosity in various contexts [7]−[9].
Models with a relic cosmological constant Λ have received considerable at-
tention recently among researchers for various reasons (see Refs.[10]−[14] and
references therein). Recent discussions of the cosmological constant
“problem” and of its consequences for cosmology with a time-varying cosmological
constant by Ratra and Peebles [15], Dolgov [16]−[18] and Sahni and Starobinsky
[19] have pointed out that in the absence of any interaction with matter or radi-
ation, the cosmological constant remains a “constant”. However, in the presence
of interactions with matter or radiation, a solution of Einstein equations and
the assumed equation of covariant conservation of stress-energy with a time-
varying Λ can be found. For these solutions, conservation of energy requires
decrease in the energy density of the vacuum component to be compensated by
a corresponding increase in the energy density of matter or radiation. Earlier
work on this topic is contained in Zeldovich [20], Weinberg [11] and
Carroll, Press and Turner [21]. Recent observations by Perlmutter et al. [22]
and Riess et al. [23] strongly favour a significant and positive value of Λ. Their
findings arise from the study of more than 50 type Ia supernovae with redshifts
in the range 0.10 ≤ z ≤ 0.83 and suggest Friedmann models with negative
pressure matter such as a cosmological constant (Λ), domain walls or cosmic
strings (Vilenkin [24], Garnavich et al. [25]). Recently, Carmeli and Kuzmenko
[26] have shown that the cosmological relativistic theory (Behar and Carmeli
[27]) predicts the value for cosmological constant Λ = 1.934 × 10−35s−2. This
value of “Λ” is in excellent agreement with the measurements recently obtained
by the High-Z Supernova Team and Supernova Cosmological Project (Garnavich
et al. [25], Perlmutter et al. [22], Riess et al. [23], Schmidt et al. [28]). The
main conclusion of these observations is that the expansion of the universe is
accelerating.
Several ansätze have been proposed in which the Λ term decays with time
(see Refs. Gasperini [29, 30], Berman [31], Freese et al. [14], Özer and Taha [14],
Peebles and Ratra [32], Chen and Wu [33], Abdussattar and Vishwakarma [34],
Gariel and Le Denmat [35], Pradhan et al. [36]). Of special interest is the
ansatz Λ ∝ S^{−2} (where S is the scale factor of the Robertson-Walker metric)
by Chen and Wu [33], which has been considered/modified by several authors
(Abdel-Rahaman [37], Carvalho et al. [14], Waga [38], Silveira and Waga [39],
Vishwakarma [40]).
Recently Bali and Yadav [41] obtained LRS Bianchi type-V viscous fluid
cosmological models in general relativity. Motivated by the situations discussed
above, in this paper, we focus upon the exact solutions of Einstein’s field equa-
tions in the presence of a bulk viscous fluid in an expanding universe. We do this
by extending the work of Bali and Yadav [41] by including a time dependent
cosmological term Λ in the field equations. We have also assumed the coefficient
of bulk viscosity to be a power function of mass density. This paper is organized
as follows. The metric and the field equations are presented in section 2. In
section 3 we deal with the solution of the field equations in the presence of a viscous
fluid. Sections 3.1 and 3.2 contain the two different cases and some physical
aspects of these models, respectively. Section 4 describes two
models under suitable transformations. Finally, concluding remarks are given in
section 5.
2 The Metric and Field Equations
We consider LRS Bianchi type-V metric in the form
ds2 = −dt2 +A2dx2 +B2e2x(dy2 + dz2), (1)
where A and B are functions of t alone.
Einstein's field equations (in gravitational units c = 1, G = 1) read

R^j_i - \frac{1}{2} R g^j_i + \Lambda g^j_i = -8\pi T^j_i,   (2)

where R^j_i is the Ricci tensor, R = g^{ij} R_{ij} is the Ricci scalar, and T^j_i is the
stress-energy tensor in the presence of bulk stress, given by

T^j_i = (\rho + p) v_i v^j + p g^j_i - \eta \left( v^j_{;i} + v_i^{\;;j} + v^j v^\ell v_{i;\ell} + v_i v^\ell v^j_{;\ell} \right) - \left( \xi - \frac{2}{3}\eta \right) v^\ell_{;\ell} \left( g^j_i + v_i v^j \right).   (3)

Here \rho, p, \eta and \xi are the energy density, isotropic pressure, coefficient of shear
viscosity and coefficient of bulk viscosity respectively, and v^i is the flow vector
satisfying the relation

v_i v^i = -1.   (4)

The semicolon (;) indicates covariant differentiation. We choose the coordinates
to be comoving, so that v^i = \delta^i_4.
Einstein's field equations (2) for the line element (1) have been set up as
= −8π
p− 2ηA4
ξ − 2
− Λ, (5)
= −8π
p− 2η
− Λ, (6)
2A4B4
= −8πρ− Λ, (7)
= 0. (8)
The suffix 4 after the symbols A, B denotes ordinary differentiation with respect
to t and
θ = v^ℓ_{;ℓ} is the expansion scalar.
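As an added, hedged illustration (not part of the original derivation), the geometric left-hand sides of the field equations for the metric (1) can be generated symbolically; the sketch below assumes standard conventions, uses only illustrative variable names, and the viscous source terms of Eq. (3) and the Λ term still have to be supplied by hand to reproduce Eqs. (5)–(8).

```python
# Hedged sketch: Einstein tensor G_{ab} for the LRS Bianchi type-V metric (1),
# ds^2 = -dt^2 + A(t)^2 dx^2 + B(t)^2 e^{2x}(dy^2 + dz^2), built with sympy.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = [t, x, y, z]
A = sp.Function('A')(t)
B = sp.Function('B')(t)

g = sp.diag(-1, A**2, B**2*sp.exp(2*x), B**2*sp.exp(2*x))
ginv = g.inv()
n = 4

# Christoffel symbols Gamma^a_{bc} = (1/2) g^{ad}(d_c g_{db} + d_b g_{dc} - d_d g_{bc})
Gamma = [[[sum(ginv[a, d]*(sp.diff(g[d, b], coords[c])
                           + sp.diff(g[d, c], coords[b])
                           - sp.diff(g[b, c], coords[d]))/2
               for d in range(n))
           for c in range(n)]
          for b in range(n)]
         for a in range(n)]

# Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba}
#                      + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
def ricci(b, c):
    expr = 0
    for a in range(n):
        expr += sp.diff(Gamma[a][b][c], coords[a]) - sp.diff(Gamma[a][b][a], coords[c])
        for d in range(n):
            expr += Gamma[a][a][d]*Gamma[d][b][c] - Gamma[a][c][d]*Gamma[d][b][a]
    return sp.simplify(expr)

Ric = sp.Matrix(n, n, ricci)
Rscal = sp.simplify(sum(ginv[a, b]*Ric[a, b] for a in range(n) for b in range(n)))
G = (Ric - Rscal*g/2).applyfunc(sp.simplify)   # Einstein tensor G_{ab}

print(G[0, 1])   # off-diagonal t-x component (the one behind the constraint equation)
print(G[0, 0])   # tt component (the one that leads to the density equation)
```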
3 Solutions of the Field Equations
In this section, we have revisited the solutions obtained by Bali and Yadav [41].
Equations (5) - (8) are four independent equations in seven unknowns A, B, p,
ρ, ξ, η and Λ. For complete determinacy of the system, we need three extra
conditions.
Eq. (8), after integration, reduces to

A = B^k,   (9)
where k is an integrating constant. Equations (5) and (6) lead to
− A44
− A4B4
= −16πη
. (10)
Using Eq. (9) in (10), we obtain
k + 1
f = −16πη, (11)
where B4 = f(B). Eq. (11) leads to
f = − 16πη
(k + 2)
, (12)
where L is an integrating constant. Eq. (12) again leads to
B = (k + 2)
k1 − k2e−16πηt
k+2 , (13)
where
, (14)
, (15)
N being constant of integration. From Eqs. (9) and (13), we obtain
A = (k + 2)
k1 − k2e−16πηt
k+2 . (16)
Hence the metric (1) reduces to the form
ds2 = −dt2 + (k + 2)
k1 − k2e−16πηt
k+2 dx2
+ e2x(k + 2)
k1 − k2e−16πηt
k+2 (dy2 + dz2). (17)
The pressure and density of the model (17) are obtained as
8πp =
(8π)(16πη)k2e
−16πηt
3(k + 2)2(k1 − k2e−16πηt)2
k1(k + 2)
2(4η + 3ξ)− {k2(4η + 3ξ)
+4k(η+3ξ)+2(5η+6ξ)}k2e−16πηt
[(k + 2)(k1 − k2e−16πηt)]
−Λ, (18)
8πρ = − (2k + 1)
(k + 2)2
(16πη)2k22
e−32πηt
(k1 − k2e−16πηt)2
[(k + 2)(k1 − k2e−16πηt)]
+ Λ. (19)
The expansion θ in the model (17) is obtained as
(16πη)k2e
−16πηt
(k1 − k2e−16πηt)
. (20)
For complete determinacy of the system we have to consider three extra condi-
tions. Firstly we assume that the coefficient of shear viscosity is constant, i.e.,
η = η0 (say). For the specification of Λ(t), we secondly assume that the fluid
obeys an equation of state of the form
p = γρ, (21)
where γ(0 ≤ γ ≤ 1) is a constant.
Thirdly bulk viscosity (ξ) is assumed to be a simple power function of the
energy density [42]−[45].
ξ(t) = ξ_0 ρ^n,   (22)
where ξ0 and n are constants. For small density, n may even be equal to unity
as used in Murphy’s work [46] for simplicity. If n = 1, Eq. (22) may correspond
to a radiative fluid [47]. Near the big bang, 0 ≤ n ≤ 1
is a more appropriate
assumption [48] to obtain realistic models.
For simplicity and realistic models of physical importance, we consider the
following two cases (n = 0, 1):
3.1 Model I: Solution for n = 0
When n = 0, Eq. (22) reduces to ξ = ξ0 = constant. Hence, in this case Eqs.
(18) and (19), with the use of (21), lead to
8π(1 + γ)ρ =
k1(k + 2)
2(4η0 + 3ξ0)− {k2(4η0 + 3ξ0)
+ 4k(η0 + 3ξ0) + 2(5η0 + 6ξ0)}k2e−16πη0t
− (2k + 1)M
. (23)
Eliminating ρ(t) between Eqs. (19) and (23), we obtain
(1 + γ)Λ =
k1(k + 2)
2(4η0 + 3ξ0)− {k2(4η0 + 3ξ0)
+ 4k(η0 + 3ξ0) + 2(5η0 + 6ξ0)}k2e−16πη0t
+ (2k + 1)γ
(1− 3γ)
, (24)
where
M = 16πk2η0e
−16πη0t,
N = (k + 2)(k1 − k2e−16πη0t),
P = 2k2 + 2k + 5,
Q = k2 + 4k + 4. (25)
3.2 Model II: Solution for n = 1
When n = 1, Eq. (22) reduces to ξ = ξ_0 ρ. Hence, in this case Eqs. (18) and
(19), with the use of (21), lead to
8πρ =
16πM{2k1(k + 2)2η0 − Pk2η0e−16πη0t}
3 [(1 + γ)N2 −M{k1(k + 2)2ξ0 −Qk2ξ0e−16πη0t}]
k+2 − (2k + 1)M2
[(1 + γ)N2 −M{k1(k + 2)2ξ0 −Qk2ξ0e−16πη0t}]
. (26)
Eliminating ρ(t) between Eqs. (19) and (26), we get
Λ = 16πM [2k1(k + 2)
2η0 − Pk2η0e−16πη0t] +
γ(2k + 1)
(1 + γ)
(1− 3γ)
(1 + γ)N
M [k1(k + 2)
2ξ0 −Qk2ξ0e−16πη0t]{4N
k+2 − (2k + 1)M2}
(1 + γ)N2 [(1 + γ)N2 −M{k1(k + 2)2ξ0 −Qk2ξ0e−16πη0t}]
From Eqs. (23) and (26), we note that ρ(t) is a decreasing function of time
and ρ > 0 for all time in both models. The behaviour of the universe in these
models will be determined by the cosmological term Λ; this term has the same
effect as a uniform mass density ρeff = −Λ/4πG, which is constant in space
and time. A positive value of Λ corresponds to a negative effective mass density
(repulsion). Hence, we expect that in the universe with a positive value of Λ,
the expansion will tend to accelerate; whereas in the universe with negative
value of Λ, the expansion will slow down, stop and reverse. From Eqs. (24) and
(27), we observe that the cosmological term Λ in both models is a decreasing
function of time, approaching a small positive value at late times. This
is in good agreement with recent observations of type Ia supernovae
(Garnavich et al. [25], Perlmutter et al. [22], Riess et al. [23], Schmidt et al.
[28]).
The shear σ in the model (17) is given by

σ = \frac{(k - 1) M}{\sqrt{3}\, N}.   (28)
The non-vanishing components of conformal curvature tensor are given by
C2323 = −C1414 =
(k − 1)M
[kM − 16πη0k1(k + 2)], (29)
C1313 = −C2424 =
(k − 1)M
[16πη0k1(k + 2)− kM ], (30)
C1212 = −C3434 =
(k − 1)M
[16πη0k1(k + 2)− kM ]. (31)
Equations (20) and (28) lead to

\frac{σ}{θ} = \frac{k - 1}{\sqrt{3}\,(k + 2)} = constant.   (32)
The model (17) is expanding, non-rotating and shearing. Since σ/θ = constant,
the model does not approach isotropy. The space-time (17) is Petrov type
D in the presence of viscosity.
4 Other Models
After using the transformation
k1 − k2e−16πηt = sin (16πητ), k + 2 = 1/16πη, (33)
the metric (17) reduces to
ds2 = −
cos (16πητ)
k1 − sin (16πητ)
dτ2 +
sin (16πητ)
]2(1−32πη)
+ e2x
sin (16πητ)
](32πη)
(dy2 + dz2). (34)
The pressure (p), density (ρ) and the expansion (θ) of the model (34) are ob-
tained as
8πp =
(16πη)2{k1 − sin (16πητ)
3 sin2 (16πητ)
2k1−2(1−48πη+1152π2η2){k1−sin (16πητ)}
(16πη)(8πξ){k1 − sin (16πητ)}
sin (16πητ)
sin (16πητ)
]2(1−32πη)
− Λ, (35)
8πρ =
2(24πη − 1)(16πη)3{k1 − sin (16πητ)}2
sin2 (16πητ)
sin (16πητ)
]2(1−32πη)
(16πη){k1 − sin (16πητ)}
sin (16πητ)
. (37)
4.1 Model I: Solution for n = 0
When n = 0, Eq. (22) reduces to ξ = ξ0 = constant. Hence, in this case Eqs.
(35) and (36), with the use of (21), lead to
8π(1 + γ)ρ =
2(16πη0)
3 sin2(16πη0τ)
[k1 − P1M1] +
(16πη0)(8πξ0)M1
sin(16πη0τ)
+ 4N1 +
2(24πη0 − 1)(16πη0)3M21
sin2(16πη0τ)
. (38)
Eliminating ρ(t) between Eqs. (36) and (38), we obtain
(1 + γ)Λ =
2(16πη0)
3 sin2(16πη0τ)
[k1 − P1M1] +
(16πη0)(8πξ0)M1
sin(16πη0τ)
+ (1− 3γ)N1 +
2γ(24πη0 − 1)(16πη0)3M21
sin2(16πη0τ)
. (39)
4.2 Model II: Solution for n = 1
When n = 1, Eq. (22) reduces to ξ = ξ0ρ . Hence, in this case Eqs. (35) and
(36), with the use of (21), lead to
8πρ =
2(16πη0)
2M1[(k1 − P1M1) + 3(24πη0 − 1)(16πη0)M1]
3 sin(16πη0τ)[(1 + γ) sin(16πη0τ) − 16πη0ξ0M1]
4N1 sin(16πη0τ)
[(1 + γ) sin(16πη0τ) − 16πη0ξ0M1]
. (40)
Eliminating ρ(t) between Eqs. (36) and (40), we obtain
2(16πη0)
2M1(k1 − P1M1)
3 sin(16πη0τ)[(1 + γ) sin(16πη0τ)− 16πη0ξ0M1]
[3(16πη0ξ0)M1 + (1− 3γ) sin(16πη0τ)]
[(1 + γ) sin(16πη0τ)− 16πη0ξ0M1]
2(24πη0 − 1)(16πη0)3M21 [γ(1 + γ) sin(16πη0τ) − (1− γ)(16πη0ξ0)M1]
(1 + γ) sin2(16πη0τ)[(1 + γ) sin(16πη0τ)− (16πη0ξ0)M1]
, (41)
where
M1 = k1 − sin(16πη0τ),
16πη0
sin (16πη0τ)
]2(1−32πη)
P1 = 1− 48πη0 + 1152π2η20 . (42)
The shear (σ) in the model (34) is obtained as
(1− 48πη0)(16πη0)[k1 − sin(16πη0τ)√
3 sin(16πη0τ)
. (43)
The models described in cases 4.1 and 4.2 preserve the same properties as in the
cases of 3.1 and 3.2.
5 Conclusions
We have obtained a new class of LRS Bianchi type-V cosmological models of
the universe in the presence of a viscous fluid distribution with a time dependent
cosmological term Λ. We have revisited the solutions obtained by Bali and Ya-
dav [41] and obtained new solutions which also generalize their work.
The cosmological constant is a parameter describing the energy density of
the vacuum (empty space), and a potentially important contribution to the dy-
namical history of the universe. The physical interpretation of the cosmological
constant as vacuum energy is supported by the existence of the “zero point”
energy predicted by quantum mechanics. In quantum mechanics, particle and
antiparticle pairs are constantly being created out of the vacuum. Even though
these particles exist for only a short amount of time before annihilating each
other, they do give the vacuum a non-zero potential energy. In general relativity,
all forms of energy should gravitate, including the energy of vacuum, hence the
cosmological constant. A negative cosmological constant adds to the attractive
gravity of matter, therefore universes with a negative cosmological constant are
invariably doomed to re-collapse [49]. A positive cosmological constant resists
the attractive gravity of matter due to its negative pressure. For most universes,
the positive cosmological constant eventually dominates over the attraction of
matter and drives the universe to expand exponentially [50].
The cosmological term in all models given in Sections 3.1 and 3.2 is a
decreasing function of time and approaches a small, positive value
at late times, which is supported by recent type Ia supernova
observations obtained by the High-z Supernova Team and Supernova
Cosmological Project (Garnavich et al. [25], Perlmutter et al. [22], Riess et al.
[23], Schmidt et al. [28]). Thus, with our approach, we obtain a physically rele-
vant decay law for the cosmological term, unlike other investigations in which ad hoc
laws were used to arrive at mathematical expressions for the decaying vacuum
energy. Our derived models are in good agreement with the observational
results. We have derived the value of the cosmological constant Λ and attempted
to formulate a physical interpretation of it.
Acknowledgements
The authors wish to thank the Harish-Chandra Research Institute, Allahabad,
India, for providing the facility where part of this work was done. We also thank
Professor Raj Bali for his fruitful suggestions and comments on the first draft of
the paper.
References
[1] C. W. Misner, Astrophys. J. 151, 431 (1968).
[2] G. F. R. Ellis, In General Relativity and Cosmology, Enrico Fermi Course,
R. K. Sachs. ed. (Academic Press, New York, 1979).
[3] B. L. Hu, In Advance in Astrophysics, eds. L. J. Fung and R. Ruffini,
(World Scientific, Singapore, 1983).
[4] T. Padmanabhan and S. M. Chitre, Phys. Lett. A 120, 433 (1987).
[5] V. B. Johri and R. Sudarshan, Proc. Int. Conf. on Mathematical Modelling
in Science and Technology, L. S. Srinath et al., eds (World Scientific,
Singapore, 1989).
[6] Ø. Grøn, Astrophys. Space Sci. 173, 191 (1990).
[7] A. Pradhan, V. K. Yadav and I. Chakrabarty, Int. J. Mod. Phys. D 10,
339 (2001).
I. Chakrabarty, A. Pradhan and N. N. Saste, Int. J. Mod. Phys. D 10,
741 (2001).
A. Pradhan and I. Aotemashi, Int. J. Mod. Phys. D 11, 1419 (2002).
A. Pradhan and H. R. Pandey, Int. J. Mod. Phys. D 12 , 941 (2003).
[8] L. P. Chimento, A. S. Jakubi and D. Pavon, Class. Quant. Grav. 16, 1625
(1999).
[9] G. P. Singh, S. G. Ghosh and A. Beesham, Aust. J. Phys. 50, 903 (1997).
[10] S. Weinberg, Rev. Mod. Phys. 61, 1 (1989).
[11] S. Weinberg, Gravitation and Cosmology, (Wiley, New York, 1972).
[12] J. A. Frieman and I. Waga, Phys. Rev. D 57, 4642 (1998).
[13] R. Carlberg, et al., Astrophys. J. 462, 32 (1996).
[14] M. Özer and M. O. Taha, Nucl. Phys. B 287, 776 (1987).
K. Freese, F. C. Adams, J. A. Frieman and E. Motta, ibid. B 287, 1797
(1987).
J. C. Carvalho, J. A. S. Lima and I. Waga, Phys. Rev.D 46, 2404 (1992).
V. Silveira and I. Waga, ibid. D 50, 4890 (1994).
[15] B. Ratra and P. J. E. Peebles, Phys. Rev. D 37, 3406 (1988).
[16] A. D. Dolgov, in The Very Early Universe, eds. G. W. Gibbons, S. W.
Hawking and S. T. C. Siklos (Cambridge University Press, 1983).
[17] A. D. Dolgov, M. V. Sazhin and Ya. B. Zeldovich, Basics of Modern
Cosmology, (Editions Frontiers, 1990).
[18] A. D. Dolgov, Phys. Rev. D 55, 5881 (1997).
[19] V. Sahni and A. Starobinsky, Int. J. Mod. Phys. D 9, 373 (2000).
[20] Ya. B. Zeldovich, Sov. Phys.-Uspekhi 11, 381 (1968).
[21] S. M. Carroll, W. H. Press and E. L. Turner, Ann. Rev. Astron. Astrophys.
30, 499 (1992).
[22] S. Perlmutter et al., Astrophys. J. 483, 565 (1997), Supernova Cosmology
Project Collaboration (astro-ph/9608192);
S. Perlmutter et al., Nature 391, 51 (1998), Supernova Cosmology Project
Collaboration (astro-ph/9712212);
S. Perlmutter et al., Astrophys. J. 517, 565 (1999), Project Collaboration
(astro-ph/9608192).
[23] A. G. Riess et al., Astron. J. 116, 1009 (1998); Hi-Z Supernova Team
Collaboration (astro-ph/9805201).
[24] A. Vilenkin, Phys. Rep. 121, 265 (1985).
[25] P. M. Garnavich et al., Astrophys. J. 493, L53 (1998a), Hi-z Supernova
Team Collaboration (astro-ph/9710123);
P. M. Garnavich et al., Astrophys. J. 509, 74 (1998b); Hi-z Supernova
Team Collaboration (astro-ph/9806396).
[26] M. Carmeli and T. Kuzmenko, Int. J. Theor. Phys. 41, 131 (2002).
[27] S. Behar and M. Carmeli, Int. J. Theor. Phys. 39, 1375 (2002).
[28] B. P. Schmidt et al., Astrophys. J. 507, 46 (1998), Hi-z Supernova Team
Collaboration (astro-ph/9805200).
[29] M. Gasperini, Phys. Lett. B 194, 347 (1987).
[30] M. Gasperini, Class. Quant. Grav. 5, 521 (1988).
[31] M. S. Berman, Int. J. Theor. Phys. 29, 567 (1990);
M. S. Berman, Int. J. Theor. Phys. 29, 1419 (1990);
M. S. Berman, Phys. Rev. D 43, 75 (1991).
M. S. Berman and M. M. Som, Int. J. Theor. Phys. 29, 1411 (1990).
M. S. Berman, M. M. Som and F. M. Gomide, Gen. Rel. Grav. 21, 287
(1989).
M. S. Berman and F. M. Gomide, Gen. Rel. Grav. 22, 625 (1990).
[32] P. J. E. Peebles and B. Ratra, Astrophys. J. 325, L17 (1988).
[33] W. Chen and Y. S. Wu, Phys. Rev. D 41, 695 (1990).
[34] Abdussattar and R. G. Vishwakarma, Pramana J. Phys. 47, 41 (1996).
[35] J. Gariel and G. Le Denmat, Class. Quant. Grav. 16, 149 (1999).
[36] A. Pradhan and A. Kumar, Int. J. Mod. Phys. D 10, 291 (2001).
A. Pradhan and V. K. Yadav, Int J. Mod Phys. D 11, 983 (2002).
A. Pradhan and O. P. Pandey, Int. J. Mod. Phys. D 12, 941 (2003).
A. Pradhan, S. K. Srivastava and K. R. Jotania, Czech. J. Phys. 54, 255
(2004).
A. Pradhan, A. K. Yadav and L. Yadav, Czech. J. Phys. 55, 503 (2005).
A. Pradhan and P. Pandey, Czech. J. Phys. 55, 749 (2005).
A. Pradhan and P. Pandey, Astrophys. Space Sci. 301, 221 (2006).
G. S. Khadekar, A. Pradhan and M. R. Molaei, Int. J. Mod. Phys. D 15,
95 (2006).
A. Pradhan, K. Srivastava and R. P. Singh, Fizika B (Zagreb) 15, 141
(2006).
C. P. Singh, S. Kumar and A. Pradhan, Class. Quantum Grav. 24, 455
(2007).
A. Pradhan, A. K. Singh and S. Otarod, Romanian J. Phys. 52, 415
(2007).
[37] A.-M. M. Abdel-Rahaman, Gen. Rel. Grav. 22, 655 (1990); Phys. Rev. D
45, 3492 (1992).
[38] I. Waga, Astrophys. J. 414, 436 (1993).
[39] V. Silveira and I. Waga, Phys. Rev. D 50, 4890 (1994).
[40] R. G. Vishwakarma, Class. Quant. Grav. 17, 3833 (2000).
[41] R. Bali and M. K. Yadav, J. Raj. Acad. Phys. Sci. 1, 47 (2002).
[42] D. Pavon, J. Bafaluy and D. Jou, Class Quant. Grav. 8, 357 (1991);
“Proc. Hanno Rund Conf. on Relativity and Thermodynamics”, Ed. S.
D. Maharaj, (University of Natal, Durban, 1996, p. 21).
[43] R. Maartens, Class Quant. Grav. 12, 1455 (1995).
[44] W. Zimdahl, Phys. Rev. D 53, 5483 (1996).
[45] N. O. Santos, R. S. Dias and A. Banerjee, J. Math. Phys. 26, 878 (1985).
[46] G. L. Murphy, Phys. Rev. D 8, 4231 (1973).
[47] S. Weinberg, Astrophys. J. 168, 175 (1971).
[48] U. A. Belinskii and I. M. Khalatnikov, Sov. Phys. JETP 42, 205 (1976).
[49] S. M. Carroll, W. H. Press and E. L. Turner, Ann. Rev. Astron. Astrophys. 30, 499 (1992).
[50] C. S. Kochanek, Astrophys. J. 384, 1 (1992).
arXiv:0704.0850

Density matrix elements and entanglement entropy for the spin-1/2 XXZ chain at ∆ = 1/2
Jun Sato 1 ∗, Masahiro Shiroishi 1 †
1 Institute for Solid State Physics, University of Tokyo,
Kashiwanoha 5-1-5, Kashiwa, Chiba 277-8581, Japan
November 9, 2018
Abstract
We have analytically obtained all the density matrix elements up to six lattice
sites for the spin-1/2 Heisenberg XXZ chain at ∆ = 1/2. We use the multiple inte-
gral formula of the correlation function for the massless XXZ chain derived by Jimbo
and Miwa. As for the spin-spin correlation functions, we have newly obtained the
fourth- and fifth-neighbour transverse correlation functions. We have calculated all
the eigenvalues of the density matrix and analyzed the eigenvalue distribution. Using
these results, the exact values of the entanglement entropy for the reduced density ma-
trix up to six lattice sites have been obtained. We observe that our exact results agree
quite well with the asymptotic formula predicted by the conformal field theory.
∗[email protected]
†[email protected]
1 Introduction
The spin-1/2 antiferromagnetic Heisenberg XXZ chain is one of the most fundamental models
for one-dimensional quantum magnetism, which is given by the Hamiltonian
H = \sum_j \left( S^x_j S^x_{j+1} + S^y_j S^y_{j+1} + \Delta\, S^z_j S^z_{j+1} \right),   (1.1)
where S^α_j = σ^α_j/2, with σ^α_j the Pauli matrices acting on the j-th site, and ∆ is the
anisotropy parameter. For ∆ > 1 the system is gapped and the model is called the massive
XXZ model, while for −1 < ∆ ≤ 1 the system is gapless and the model is called the massless
XXZ model. The isotropic case ∆ = 1 is referred to as the XXX model.
The exact eigenvalues and eigenvectors of this model can be obtained by the Bethe Ansatz
method [1, 2]. Many physical quantities in the thermodynamic limit such as specific heat,
magnetic susceptibility, elementary excitations, etc., can be exactly evaluated even at finite
temperature by the Bethe ansatz method [2].
The exact calculation of the correlation functions, however, is still a difficult problem. The
exceptional case is ∆ = 0, where the system reduces to a lattice free-fermion model by the
Jordan-Wigner transformation. In this case, we can calculate arbitrary correlation functions
by means of Wick's theorem [3, 4]. Recently, however, there have been rapid developments in
the exact evaluation of correlation functions for the ∆ ≠ 0 case as well, since the Kyoto Group (Jimbo,
Miki, Miwa, Nakayashiki) derived a multiple integral representation for arbitrary correlation
functions. Using the representation theory of the quantum affine algebra Uq(ŝl2), they first
derived a multiple integral representation for massive XXZ antiferromagnetic chain in 1992
[5, 6], which was before long extended to the XXX case [7, 8] and the massless XXZ case
[9]. Later the same integral representations were reproduced by Kitanine, Maillet, Terras
[10] in the framework of Quantum Inverse Scattering Method. They have also succeeded in
generalizing the integral representations to the XXZ model with an external magnetic field
[10]. More recently the multiple integral formulas were extended to dynamical correlation
functions as well as finite temperature correlation functions [11, 12, 13, 14]. In this way
it has now been established that the correlation functions of the XXZ model are represented by
multiple integrals in general. However, these multiple integrals are difficult to evaluate both
numerically and analytically.
For general anisotropy ∆, it has been shown that the multiple integrals up to four
dimensions can be reduced to one-dimensional integrals [15, 16, 17, 18, 19, 20, 21]. As a
result all the density matrix elements within four lattice sites have been obtained for general
anisotropy [21]. Reducing the multiple integrals to one dimension, however, involves hard
calculation, which makes it difficult to obtain correlation functions on more than four lattice
sites. On the other hand, at the isotropic point ∆ = 1, an algebraic method based on
the qKZ equation has been devised [22] and all the density matrix elements up to six lattice
sites have been obtained [23, 24]. Moreover, as for the spin-spin correlation functions, up to
the seventh-neighbour correlation 〈S^z_1 S^z_8〉 for the XXX chain has been obtained from the generating
functional approach [25, 26]. It is desirable that this algebraic method be generalized to
the case ∆ ≠ 1. Actually, Boos, Jimbo, Miwa, Smirnov and Takeyama have derived an
exponential formula for the density matrix elements of the XXZ model, which does not contain
multiple integrals [27, 28, 29, 30, 31]. It seems, however, still hard to evaluate this formula
for general density matrix elements.
Among general ∆ ≠ 0, there is a special point ∆ = 1/2, where some intriguing prop-
erties have been observed. Let us define a correlation function called the Emptiness Formation
Probability (EFP) [8], which signifies the probability of finding a ferromagnetic string of length n:

P(n) ≡ \left\langle \prod_{j=1}^{n} \left( \frac{1}{2} + S^z_j \right) \right\rangle.   (1.2)
The explicit general formula for P(n) at ∆ = 1/2 was conjectured in [33]:

P(n) = 2^{-n^2} \prod_{k=0}^{n-1} \frac{(3k+1)!}{(n+k)!},   (1.3)

which is proportional to the number of alternating sign matrices of size n × n. Later this
conjecture was proved by the explicit evaluation of the multiple integral representing the
EFP [34]. Remarkably, one can also obtain the exact asymptotic behavior as n → ∞
from this formula, which is the only such example available apart from the free-fermion point
∆ = 0. Note also that the longitudinal two-point correlation functions at ∆ = 1/2 have been
obtained up to the eighth neighbour, 〈S^z_1 S^z_9〉, in [32] by use of the
multiple integral representation for the generating function. Most remarkably, all the
results are represented by single rational numbers. These results motivated us to calculate
other correlation functions at ∆ = 1/2. Actually we have obtained all the density matrix
elements up to six lattice sites by the direct evaluation of the multiple integrals. All the
results can be written by single rational numbers as expected. A direct evaluation of the
multiple integrals is possible due to the particularity of the case for ∆ = 1/2 as is explained
below.
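As a quick added illustration (not part of the original paper), the closed form (1.3) can be evaluated for small n and compared with the well-known alternating-sign-matrix counts 1, 2, 7, 42, ...; a minimal sketch:

```python
# Evaluate the EFP formula (1.3) at Delta = 1/2 and its relation to the number
# A(n) of n x n alternating sign matrices (illustration only).
from fractions import Fraction
from math import factorial

def asm_number(n):
    """A(n) = prod_{k=0}^{n-1} (3k+1)!/(n+k)!  ->  1, 2, 7, 42, ..."""
    a = Fraction(1)
    for k in range(n):
        a *= Fraction(factorial(3*k + 1), factorial(n + k))
    return a

def efp(n):
    """P(n) = 2^{-n^2} * A(n), Eq. (1.3)."""
    return asm_number(n) / 2**(n*n)

for n in range(1, 6):
    print(n, asm_number(n), efp(n), float(efp(n)))
# n = 1: A = 1, P = 1/2;  n = 2: A = 2, P = 1/8;  n = 3: A = 7, P = 7/512; ...
```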
2 Analytical evaluation of multiple integral
Here we shall describe how we analytically obtain the density matrix elements at ∆ = 1/2
from the multiple integral formula. Any correlation function can be expressed as a sum of
density matrix elements P^{ǫ'_1, ..., ǫ'_n}_{ǫ_1, ..., ǫ_n}, which are defined by the ground state expectation value of
the product of elementary matrices:

P^{ǫ'_1, ..., ǫ'_n}_{ǫ_1, ..., ǫ_n} ≡ \left\langle E^{ǫ'_1 ǫ_1}_1 \cdots E^{ǫ'_n ǫ_n}_n \right\rangle,   (2.1)

where the E^{ǫ' ǫ}_j are 2 × 2 elementary matrices acting on the j-th site as
E^{++}_j = \frac{1}{2} + S^z_j,   E^{−−}_j = \frac{1}{2} − S^z_j,
E^{+−}_j = S^+_j = S^x_j + i S^y_j,   E^{−+}_j = S^−_j = S^x_j − i S^y_j.
The multiple integral formula of the density matrix element for the massless XXZ chain
reads [9]
,··· ,ǫ′n
ǫ1,··· ,ǫn =(−ν)−n(n−1)/2
· · ·
sinh(xa − xb)
sinh[(xa − xb − ifabπ)ν]
sinhyk−1 [(xk + iπ/2)ν] sinh
n−yk [(xk − iπ/2)ν]
coshn xk
, (2.2)
where the parameter ν is related to the anisotropy as ∆ = cosπν and fab and yk are
determined as
fab = (1 + sign[(s
′ − a + 1/2)(s′ − b+ 1/2)])/2,
y1 > y2 > · · · > ys′, ǫ′yi = +
ys′+1 > · · · > yn, ǫn+1−yi = −. (2.3)
In the case of ∆ = 1/2, namely ν = 1/3, a significant simplification occurs in the multiple
integrals due to the trigonometric identity
sinh(xa−xb) = 4 sinh[(xa−xb)/3] sinh[(xa−xb+iπ)/3] sinh[(xa−xb−iπ)/3]. (2.4)
Actually if we note that the parameter fab takes the value 0 or 1, the first factor in the
multiple integral at ν = 1/3 can be decomposed as
sinh(xa − xb)
sinh[(xa − xb − iπ)/3]
= 4 sinh
xa − xb
xa − xb + iπ
= −1 + ωe
(xa−xb) + ω−1e−
(xa−xb), (2.5)
sinh(xa − xb)
sinh[(xa − xb)/3]
= 4 sinh
xa − xb + iπ
xa − xb − iπ
= 1 + e
(xa−xb) + e−
(xa−xb), (2.6)
where ω = eiπ/3. Expanding the trigonometoric functions in the second factor into exponen-
tials
sinhy−1 [(x+ iπ/2)/3] sinhn−y [(x− iπ/2)/3]
= 21−n
ω1/2ex/3 − ω−1/2e−x/3
)y−1 (
ω−1/2ex/3 − ω1/2e−x/3
= 21−n
(−1)l+m
y − 1
ωy−l+m−(n+1)/2e
(n−2l−2m−1)x, (2.7)
we can explicitly evaluate the multiple integral by use of the formula

\int_{-\infty}^{\infty} \frac{e^{\alpha x}\, dx}{\cosh^n x} = 2^{n-1} B\!\left( \frac{n+\alpha}{2}, \frac{n-\alpha}{2} \right),   Re(n ± α) > 0,   (2.8)

where B(p, q) is the beta function defined by

B(p, q) = \int_0^1 t^{p-1} (1-t)^{q-1}\, dt,   Re(p), Re(q) > 0.   (2.9)
Table 1: Comparison with the asymptotic formula of the transverse correlation function

              〈S^x_1 S^x_2〉   〈S^x_1 S^x_3〉   〈S^x_1 S^x_4〉   〈S^x_1 S^x_5〉   〈S^x_1 S^x_6〉
Exact         −0.156250      0.0800781      −0.0671234     0.0521997      −0.0467664
Asymptotics   −0.159522      0.0787307      −0.0667821     0.0519121      −0.0466083
In this way we have succeeded in calculating all the density matrix elements up to six lattice
sites. All the results are represented by single rational numbers, which are presented in
Appendix A. As for the spin-spin correlation functions, we have newly obtained the fourth-
and fifth-neighbour transverse two-point correlation functions:

〈S^x_1 S^x_2〉 = −5/32 = −0.15625,
〈S^x_1 S^x_3〉 = 41/512 = 0.080078125,
〈S^x_1 S^x_4〉 = −4399/65536 = −0.0671234130859375,
〈S^x_1 S^x_5〉 = 1751531/33554432 = 0.0521996915340423583984375,
〈S^x_1 S^x_6〉 = −3213760345/68719476736 = −0.046766368104727007448673248291015625.
The asymptotic formula of the transverse two-point correlation function for the massless
XXZ chain is established in [35, 36]
〈Sx1Sx1+n〉 ∼ Ax(η)
(−1)n
− Ãx(η)
+ · · · , η = 1− ν,
Ax(η) =
8(1− η)2
sinh(ηt)
sinh(t) cosh[(1− η)t]
− ηe−2t
Ãx(η) =
2η(1− η)
cosh(2ηt)e−2t − 1
2 sinh(ηt) sinh(t) cosh[(1− η)t]
sinh(ηt)
η2 + 1
, (2.10)
which produces a good numerical value even for small n as is shown in Table 1. Note that the
longitudinal correlation functions were obtained up to the eighth-neighbour correlation 〈S^z_1 S^z_9〉 from
the multiple integral representation for the generating function [32]. Note also that up to
third-neighbour both longitudinal and transverse correlation functions for general anisotropy
∆ were obtained in [21].
3 Reduced density matrix and entanglement entropy
Below let us discuss the reduced density matrix for a sub-chain and the entanglement entropy.
The density matrix for the infinite system at zero temperature has the form
ρT ≡ |GS〉〈GS|, (3.1)
Figure 1: Eigenvalue-distribution of density matrices
Table 2: Entanglement entropy S(n) of a finite sub-chain of length n

S(1) = 1
S(2) = 1.3716407621868583
S(3) = 1.5766810784924767
S(4) = 1.7179079372711414
S(5) = 1.8262818282012363
S(6) = 1.9144714710902746
where |GS〉 denotes the ground state of the total system. We consider a finite sub-chain of
length n, the rest of which is regarded as an environment. We define the reduced density
matrix for this sub-chain by tracing out the environment from the infinite chain
ρ_n ≡ tr_E ρ_T = \sum_{\{ǫ\},\{ǫ'\}} P^{ǫ'_1, ..., ǫ'_n}_{ǫ_1, ..., ǫ_n} \prod_{j=1}^{n} E^{ǫ_j ǫ'_j}_j.   (3.2)
We have numerically evaluated all the eigenvalues ω_α (α = 1, 2, ..., 2^n) of the reduced density
matrix ρ_n up to n = 6. We show the distribution of the eigenvalues in Figure 1. The
distribution is less degenerate compared with the isotropic case ∆ = 1 shown in [24]. In the
odd-n case, all the eigenvalues are two-fold degenerate due to the spin-reversal symmetry.
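As an added, hedged cross-check (not the method of this paper, which works directly with the infinite chain), ρ_n and its spectrum can be approximated by exact diagonalization of a short periodic chain; the chain length and block size below are arbitrary choices, and finite-size effects make the numbers only roughly comparable with Figure 1 and Table 2.

```python
# Finite-size approximation of rho_n for the XXZ chain at Delta = 1/2 by exact
# diagonalization of an N-site periodic chain (rough numerical cross-check only).
import numpy as np

def xxz_ground_state(N, delta=0.5):
    sx = np.array([[0, 0.5], [0.5, 0]])
    sy = np.array([[0, -0.5j], [0.5j, 0]])
    sz = np.array([[0.5, 0], [0, -0.5]])
    eye = np.eye(2)

    def site_op(op, j):
        mats = [eye]*N
        mats[j] = op
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        return out

    H = np.zeros((2**N, 2**N), dtype=complex)
    for j in range(N):                       # periodic boundary conditions
        k = (j + 1) % N
        H += (site_op(sx, j) @ site_op(sx, k)
              + site_op(sy, j) @ site_op(sy, k)
              + delta*site_op(sz, j) @ site_op(sz, k))
    vals, vecs = np.linalg.eigh(H)
    return vecs[:, 0]                        # ground state (lowest eigenvalue)

N, n = 10, 2
psi = xxz_ground_state(N).reshape(2**n, 2**(N - n))
rho_n = psi @ psi.conj().T                   # reduced density matrix of first n sites
w = np.linalg.eigvalsh(rho_n)
w = w[w > 1e-14]
print(sorted(w, reverse=True))               # eigenvalues omega_alpha
print("S(2), finite-size estimate:", -np.sum(w*np.log2(w)))
# compare with Table 2 (infinite chain): S(2) = 1.3716...
```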
Subsequently we exactly evaluate the von Neumann entropy (Entanglement entropy)
defined as
S(n) ≡ −tr\, ρ_n \log_2 ρ_n = −\sum_{α=1}^{2^n} ω_α \log_2 ω_α.   (3.3)
The exact numerical values of S(n) up to n = 6 are shown in Table 2. By analyzing the
behaviour of the entanglement entropy S(n) for large n, we can see how far quantum correlations
reach [37]. In the massive region ∆ > 1, the entanglement entropy saturates as n
grows due to the finite correlation length. This means the ground state is well approximated
by a subsystem of finite length corresponding to the large eigenvalues of the reduced density
matrix. On the other hand, in the massless case −1 < ∆ ≤ 1, conformal field theory
predicts that the entanglement entropy shows a logarithmic divergence [38]

S(n) ∼ \frac{1}{3} \log_2 n + k_∆.   (3.4)
Figure 2: Entanglement entropy S(n) of a finite sub-chain of length n (exact values compared with the asymptotic formula (3.4))
Our exact results up to n = 6 agree quite well with the asymptotic formula as shown in Figure
2. We estimate the numerical value of the constant term k_{∆=1/2} as k_{∆=1/2} ≃ S(6) − (1/3) \log_2 6 =
1.0528. This numerical value is slightly smaller than that of the isotropic case ∆ = 1, where the
constant k_{∆=1} is estimated as k_{∆=1} ≃ 1.0607 from the exact data for S(n) up to n = 6 [24].
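The constant-term estimate quoted above can be reproduced directly from Table 2 (a small check added here for convenience):

```python
# Reproduce k_{Delta=1/2} ~ S(6) - (1/3) log2(6) from the Table 2 value of S(6).
from math import log2

S6 = 1.9144714710902746
print(S6 - log2(6)/3)   # ~ 1.0528
```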
At free fermion point ∆ = 0, the exact asymptotic formula has been obtained in [39]
S(n) ∼ 1
log2 n+ k∆=0,
k∆=0 = 1/3−
t sinh2(t/2)
− cosh(t/2)
2 sinh3(t/2)
/ ln 2. (3.5)
In this case the numerical value for the constant term is given by k∆=0 = 1.0474932144 · · · .
4 Summary and discussion
We have succeeded in obtaining all the density matrix elements on six lattice sites for the XXZ
chain at ∆ = 1/2. In particular, we have newly obtained the fourth- and fifth-neighbour
transverse spin-spin correlation functions. Our exact results for the transverse correlations
show good agreement with the asymptotic formula established in [35, 36]. Subsequently we
have calculated all the eigenvalues of the reduced density matrix ρn up to n = 6. From these
results we have exactly evaluated the entanglement entropy, which shows a good agreement
with the asymptotic formula derived via the conformal field theory. Finally, we remark
that similar procedures for evaluating the multiple integrals are also possible at ν = 1/n for
n = 4, 5, 6, . . ., since there are trigonometric identities similar to (2.4). We will report the
calculation of correlation functions for these cases in subsequent papers.
Acknowledgement
The authors are grateful to K. Sakai for valuable discussions. This work is in part sup-
ported by a Grant-in-Aid for Scientific Research (B) No. 18340112 from the Ministry of
Education, Culture, Sports, Science and Technology, Japan.
Appendix A Density matrix elements up to n = 6
In this appendix we present all the independent density matrix elements defined in eq. (2.1)
up to n = 6. Other elements can be computed from the relations
P^{ǫ'_1, ..., ǫ'_n}_{ǫ_1, ..., ǫ_n} = 0   if \sum_j ǫ_j ≠ \sum_j ǫ'_j,   (A.1)

P^{ǫ'_1, ..., ǫ'_n}_{ǫ_1, ..., ǫ_n} = P^{ǫ_1, ..., ǫ_n}_{ǫ'_1, ..., ǫ'_n} = P^{−ǫ'_1, ..., −ǫ'_n}_{−ǫ_1, ..., −ǫ_n} = P^{ǫ'_n, ..., ǫ'_1}_{ǫ_n, ..., ǫ_1},   (A.2)

P^{+, ǫ'_1, ..., ǫ'_n}_{+, ǫ_1, ..., ǫ_n} + P^{−, ǫ'_1, ..., ǫ'_n}_{−, ǫ_1, ..., ǫ_n} = P^{ǫ'_1, ..., ǫ'_n, +}_{ǫ_1, ..., ǫ_n, +} + P^{ǫ'_1, ..., ǫ'_n, −}_{ǫ_1, ..., ǫ_n, −} = P^{ǫ'_1, ..., ǫ'_n}_{ǫ_1, ..., ǫ_n},   (A.3)

and the formula for the EFP [33, 34]

P(n) = P^{+, ..., +}_{+, ..., +} = 2^{-n^2} \prod_{k=0}^{n-1} \frac{(3k+1)!}{(n+k)!}.   (A.4)
Appendix A.1 n ≤ 4
P−++− = −
= −0.3125, P−++++− =
= 0.0800781,
P−++++−++ = −
= −0.0269775, P−+++++−+ =
65536
= 0.0240936,
P−++++++− = −
32768
= −0.00881958, P+−+++−++ =
16384
= 0.0632935,
P+−++++−+ = −
32768
= −0.0611877, P−−+++−+− = −
65536
= −0.0583038,
P−−++++−− =
65536
= 0.0212555, P−+−++−+− =
32768
= 0.149017,
P−++−+−−+ =
32768
= 0.0943298.
Appendix A.2 n = 5
P−+++++−+++ = −
14721
8388608
= −0.00175488, P−++++++−++ =
37335
16777216
= 0.00222534,
P−+++++++−+ = −
48987
33554432
= −0.00145993, P−++++++++− =
13911
33554432
= 0.00041458,
P+−++++−+++ =
179699
33554432
= 0.00535545, P+−+++++−++ = −
120337
16777216
= −0.00717264,
P+−++++++−+ =
165155
33554432
= 0.004922, P++−++++−++ =
168313
16777216
= 0.0100322,
P−−++++−−++ =
31069
2097152
= 0.0148149, P−−++++−+−+ = −
411583
16777216
= −0.0245323,
P−−++++−++− =
196569
16777216
= 0.0117164, P−−+++++−+− = −
281271
33554432
= −0.00838253,
P−−++++++−− =
79673
33554432
= 0.00237444, P−+−+++−−++ = −
1441787
33554432
= −0.0429686,
P−+−+++−++− = −
1261655
33554432
= −0.0376002, P−+−++++−+− =
59459
2097152
= 0.0283523,
P−++−++−++− =
1575515
33554432
= 0.046954, P−+++−+−−++ = −
696151
33554432
= −0.0207469,
P−+++−+−+−+ =
1366619
33554432
= 0.0407284.
Appendix A.3 n = 6
P−++++++−++++ = −
1546981
34359738368
= −0.0000450231, P−+++++++−+++ =
5095899
68719476736
= 0.0000741551,
P−++++++++−++ = −
2366275
34359738368
= −0.0000688677, P−+++++++++−+ =
2455833
68719476736
= 0.0000357371,
P−++++++++++− = −
284577
34359738368
= −8.28228× 10−6, P+−+++++−++++ =
2927709
17179869184
= 0.000170415,
P+−++++++−+++ = −
20086627
68719476736
= −0.000292299, P+−+++++++−++ =
19268565
68719476736
= 0.000280395,
P+−++++++++−+ = −
10295153
68719476736
= −0.000149814, P++−+++++−+++ =
17781349
34359738368
= 0.000517505,
P++−++++++−++ = −
35087523
68719476736
= −0.000510591, P−−+++++−−+++ =
48421023
34359738368
= 0.00140924,
P−−+++++−+−++ = −
214080091
68719476736
= −0.00311528, P−−+++++−++−+ =
88171589
34359738368
= 0.00256613,
P−−+++++−+++− = −
57522267
68719476736
= −0.000837059, P−−++++++−−++ =
56776545
34359738368
= 0.00165241,
P−−++++++−+−+ = −
154538459
68719476736
= −0.00224883, P−−++++++−++− =
60809571
68719476736
= 0.000884896,
P−−+++++++−−+ =
6708473
8589934592
= 0.000780969, P−−+++++++−+− = −
33366621
68719476736
= −0.000485548,
P−−++++++++−− =
3860673
34359738368
= 0.00011236, P−+−++++−−+++ = −
85706851
17179869184
= −0.0049888,
P−+−++++−+−++ =
12211375
1073741824
= 0.0113727, P−+−++++−++−+ = −
332557469
34359738368
= −0.0096787,
P−+−++++−+++− =
56183761
17179869184
= 0.00327033, P−+−+++++−−++ = −
430452959
68719476736
= −0.00626391,
P−+−+++++−+−+ =
606065059
68719476736
= 0.00881941, P−+−+++++−++− = −
123612511
34359738368
= −0.0035976,
P−+−++++++−−+ = −
108202041
34359738368
= −0.00314909, P−+−++++++−+− =
70061315
34359738368
= 0.00203905,
P−++−+++−−+++ =
7860495
1073741824
= 0.00732066, P−++−+++−+−++ = −
591759525
34359738368
= −0.0172225,
P−++−+++−++−+ =
1044016671
68719476736
= 0.0151924, P−++−+++−+++− = −
367905053
68719476736
= −0.00535372,
P−++−++++−−++ =
676957849
68719476736
= 0.00985103, P−++−++++−+−+ = −
988973861
68719476736
= −0.0143915,
P−++−++++−++− =
6581795
1073741824
= 0.00612977, P−++−+++++−−+ =
363618785
68719476736
= 0.00529135,
P−+++−++−−+++ = −
185522333
34359738368
= −0.00539941, P−+++−++−+−++ =
901633567
68719476736
= 0.0131205,
P−+++−++−++−+ = −
103539423
8589934592
= −0.0120536, P−+++−++−+++− =
38524625
8589934592
= 0.00448486,
P−+++−+++−−++ = −
267901987
34359738368
= −0.00779697, P−+++−+++−+−+ =
12750645
1073741824
= 0.011875,
P−+++−++++−−+ = −
309855965
68719476736
= −0.004509, P−++++−+−−+++ =
29410257
17179869184
= 0.0017119,
P−++++−+−+−++ = −
296882461
68719476736
= −0.00432021, P−++++−+−++−+ =
35985105
8589934592
= 0.00418922,
P−++++−++−−++ =
92176287
34359738368
= 0.00268268, P+−−++++−−+++ =
202646807
34359738368
= 0.0058978,
P+−−++++−+−++ = −
972245985
68719476736
= −0.014148, P+−−++++−++−+ =
217687057
17179869184
= 0.0126711,
P+−−+++++−+−+ = −
211696415
17179869184
= −0.0123224, P+−−++++++−−+ =
78922695
17179869184
= 0.00459391,
P+−+−+++−+−++ =
1196499417
34359738368
= 0.0348227, P+−+−+++−++−+ = −
2209522727
68719476736
= −0.0321528,
P+−+−++++−+−+ =
1108384987
34359738368
= 0.0322582, P+−++−++−++−+ =
530683585
17179869184
= 0.0308899,
P+−++−+++−−++ =
347202525
17179869184
= 0.0202098, P−−−++++−−++− = −
268623007
68719476736
= −0.00390898,
P−−−++++−+−+− =
46285135
8589934592
= 0.0053883, P−−−++++−++−− = −
136974885
68719476736
= −0.00199325,
P−−−+++++−+−− =
19939391
17179869184
= 0.00116063, P−−−++++++−−− = −
18442085
68719476736
= −0.000268368,
P−−+−+++−−++− =
1018463205
68719476736
= 0.0148206, P−−+−+++−+−+− = −
1454513249
68719476736
= −0.021166,
P−−+−+++−++−− =
277721503
34359738368
= 0.00808276, P−−+−++++−+−− = −
335265249
68719476736
= −0.00487875,
P−−++−++−−++− = −
369408975
17179869184
= −0.0215024, P−−++−++−+−+− =
1104236607
34359738368
= 0.0321375,
P−−++−++−++−− = −
880560357
68719476736
= −0.0128138, P−−++−+++−−+− = −
876924641
68719476736
= −0.0127609,
P−−+++−+−−−++ =
113631201
17179869184
= 0.00661421, P−−+++−+−−+−+ = −
292857807
17179869184
= −0.0170466,
P−−+++−+−+−−+ =
548645951
34359738368
= 0.0159677, P−−+++−++−−−+ = −
377925345
68719476736
= −0.00549954,
P−+−+−++−−++− =
1719255909
34359738368
= 0.0500369, P−+−+−++−+−+− = −
5350158879
68719476736
= −0.0778551,
P−+−++−+−−+−+ =
1565770597
34359738368
= 0.0455699, P−+−++−+−+−−+ = −
3059753503
68719476736
= −0.0445253,
P−++−−++−−++− = −
2117554719
68719476736
= −0.0308145.
References
[1] H.A. Bethe, Z. Phys. 71 (1931) 205.
[2] M. Takahashi, Thermodynamics of One-Dimensional Solvable Models, Cambridge Uni-
versity Press, Cambridge, 1999.
[3] E. Lieb, T. Schultz, D. Mattis, Ann. Phys. (N.Y.) 16 (1961) 407.
[4] B.M. McCoy, Phys. Rev. 173 (1968) 531.
[5] M. Jimbo, K. Miki, T. Miwa, A. Nakayashiki, Phys. Lett. A 168 (1992) 256.
[6] M. Jimbo, T. Miwa, Algebraic Analysis of Solvable Lattice Models, CBMS Regional Con-
ference Series in Mathematics vol.85, American Mathematical Society, Providence, 1994.
[7] A. Nakayashiki, Int. J. Mod. Phys. A 9 (1994) 5673.
[8] V.E. Korepin, A. Izergin, F.H.L. Essler, D. Uglov, Phys. Lett. A 190 (1994) 182.
[9] M. Jimbo, T. Miwa, J. Phys. A: Math. Gen. 29 (1996) 2923.
[10] N. Kitanine, J.M. Maillet, V. Terras, Nucl. Phys. B 567 (2000), 554.
[11] N. Kitanine, J.M. Maillet, N.A. Slavnov, V. Terras, Nucl. Phys. B 729 (2005) 558.
[12] F. Göhmann, A. Klümper, A. Seel, J. Phys. A: Math. Gen. 37 (2004) 7625.
[13] F. Göhmann, A. Klümper, A. Seel, J. Phys. A: Math. Gen. 38 (2005) 1833.
[14] K. Sakai, “Dynamical correlation functions of the XXZ model at finite temperature”,
cond-mat/0703319.
[15] H.E. Boos, V.E. Korepin, J. Phys. A: Math. Gen. 34 (2001) 5311.
[16] H.E. Boos, V.E. Korepin, “Evaluation of integrals representing correlators in XXX
Heisenberg spin chain” in. MathPhys Odyssey 2001, Birkhäuser, Basel, (2001) 65.
[17] H.E. Boos, V.E. Korepin, Y. Nishiyama, M. Shiroishi, J. Phys. A: Math. Gen 35 (2002)
4443.
[18] K. Sakai, M. Shiroishi, Y. Nishiyama, M. Takahashi, Phys. Rev. E 67 (2003) 065101.
[19] G. Kato, M. Shiroishi, M. Takahashi, K. Sakai, J. Phys. A: Math. Gen. 36 (2003) L337.
[20] M. Takahashi, G. Kato, M. Shiroishi, J. Phys. Soc. Jpn, 73 (2004) 245.
[21] G. Kato, M. Shiroishi, M. Takahashi, K. Sakai, J. Phys. A: Math. Gen. 37 (2004) 5097.
[22] H.E. Boos, V.E. Korepin, F.A. Smirnov, Nucl. Phys. B 658 (2003) 417.
[23] H.E. Boos, M. Shiroishi, M. Takahashi, Nucl. Phys. B 712 (2005) 573.
[24] J. Sato, M. Shiroishi, M. Takahashi, J. Stat. Mech. 0612 (2006) P017.
[25] J. Sato, M. Shiroishi, J. Phys. A: Math. Gen. 38 (2005) L405.
[26] J. Sato, M. Shiroishi, M. Takahashi, Nucl. Phys. B 729 (2005) 441, hep-th/0507290.
[27] H.E. Boos, M. Jimbo, T. Miwa, F. Smirnov, Y. Takeyama, Algebra Anal. 17 (2005)
[28] H.E. Boos, M. Jimbo, T. Miwa, F. Smirnov, Y. Takeyama, Commun. Math. Phys. 261
(2006) 245.
[29] H.E. Boos, M. Jimbo, T. Miwa, F. Smirnov, Y. Takeyama, J. Phys. A: Math. Gen. 38
(2005) 7629.
[30] H.E. Boos, M. Jimbo, T. Miwa, F. Smirnov, Y. Takeyama, Lett. Math. Phys. 75 (2006)
[31] H.E. Boos, M. Jimbo, T. Miwa, F. Smirnov, Y. Takeyama, Annales Henri Poincare 7
(2006) 1395.
[32] N. Kitanine, J.M. Maillet, N.A. Slavnov, V. Terras, J. Stat. Mech. 0509 (2005) L002.
[33] A.V. Razumov, Yu.G. Stroganov, J. Phys. A: Math. Gen. 34 (2001) 3185.
[34] N. Kitanine, J.M. Maillet, N.A. Slavnov, V. Terras, J. Phys. A: Math. Gen. 35 (2002)
L385.
[35] S. Lukyanov, A. Zamolodchikov, Nucl. Phys. B 493 (1997) 571.
[36] S. Lukyanov, V. Terras, Nucl. Phys. B 654 (2003) 323.
[37] G. Vidal, J.I. Latorre, E. Rico, A. Kitaev, Phys. Rev. Lett. 90 (2003) 227902.
[38] C. Holzhey, F. Larsen, F. Wilczek, Nucl. Phys. B 424 (1994) 443.
[39] B.-Q. Jin, V.E. Korepin, J. Stat. Phys. 116 (2004) 79.
arXiv:0704.0851

Counting on Rectangular Areas
Milan Janjić,
Faculty of Natural Sciences and Mathematics,
Banja Luka, Republic of Srpska, Bosnia and Herzegovina.
Abstract
In the first section of this paper we prove a theorem for the number
of columns of a rectangular area that are identical to a given one. A
special case, concerning (0, 1)-matrices, is also stated.
In the next section we apply this theorem to derive several combina-
torial identities by counting specified subsets of a finite set. This means
that the obtained identities will involve binomial coefficients only. We
start with a simple equation which is, in fact, an immediate consequence
of Binomial theorem, but it is derived independently of it. The second
result concerns sums of binomial coefficients. In a special case we obtain
one of the best known binomial identity dealing with alternating sums.
Klee’s identity is also obtained as a special case as well as some formu-
lae for partial sums of binomial coefficients, that is, for the numbers of
Bernoulli’s triangle.
1 A counting theorem
The set of natural numbers {1, 2, . . . , n} will be denoted by [n], and by |X | will
be denoted the number of elements of the set X.
For the proof of the main theorem we need the following simple result:
\sum_{I \subseteq [n]} (−1)^{|I|} = 0,   (1)
where I runs over all subsets of [n] (empty set included). This may be easily
proved by induction or by using the Binomial theorem, but the proof by induction
makes all further investigations independent even of the Binomial theorem.
Let A be an m× n rectangular matrix filled with elements which belong to
a set Ω.
By the i-column of A we shall mean each column of A that is equal to
[c_1, c_2, . . . , c_m]^T, where c_1, c_2, . . . , c_m ∈ Ω are given. We shall denote the number
of i-columns of A by ν_A(c) or simply by ν(c).
For I = {i_1, i_2, . . . , i_k} ⊂ [m], by A(I) will be denoted the maximal number
of columns j of A such that
a_{ij} ≠ c_i,  (i ∈ I).
We also define
A(∅) = n.
Theorem 1. The number ν(c) of i-columns of A is equal to

ν(c) = \sum_{I \subseteq [m]} (−1)^{|I|} A(I),   (2)

where the summation is taken over all subsets I of [m].
Proof. The theorem may be proved by the standard combinatorial method, by
counting the contribution of each column of A in the sum on the right side of (2).
We give here a proof by induction. First, the formula will be proved in the
cases ν(c) = 0 and ν(c) = n. In the case ν(c) = n it is obvious that for I ≠ ∅ we
have A(I) = 0, which implies

\sum_{I} (−1)^{|I|} A(I) = n + \sum_{I ≠ ∅} (−1)^{|I|} A(I) = n.
In the case ν(c) = 0 we use induction on n. If n = 1 then the matrix
A has only one column, which is not equal to c. It follows that there exists i_0 ∈
{1, 2, . . . , m} such that a_{i_0,1} ≠ c_{i_0}. Denote by I_0 the set of all such numbers.
Then A(I) = 1 if and only if I ⊆ I_0, and A(I) = 0 otherwise. From this and (1) we obtain

\sum_{I} (−1)^{|I|} A(I) = \sum_{I \subseteq I_0} (−1)^{|I|} = 0.
Suppose now that the formula is true for matrices with n columns and that
A has n + 1 columns, with ν_A(c) = 0. Omitting the first column, a matrix B
with n columns remains. If I_0 is defined as in the case n = 1 (with respect to the
first column of A), then

\sum_{I} (−1)^{|I|} A(I) = \sum_{I \not\subseteq I_0} (−1)^{|I|} A(I) + \sum_{I \subseteq I_0} (−1)^{|I|} A(I)
= \sum_{I \not\subseteq I_0} (−1)^{|I|} B(I) + \sum_{I \subseteq I_0} (−1)^{|I|} (B(I) + 1)
= \sum_{I} (−1)^{|I|} B(I) + \sum_{I \subseteq I_0} (−1)^{|I|} = 0,

since the first sum is equal to zero by the induction hypothesis, and the second by (1).
For the rest of the proof we use induction on n again. For n = 1 the matrix
A has only one column, which is either equal to c or not. In both cases the theorem is
true, by the preceding.
Suppose that the theorem holds for n, and that the matrix A has n + 1 columns.
We may suppose that ν(c) ≥ 1. Omitting one of the i-columns we obtain a
matrix B with n columns. By the induction hypothesis the theorem is true for B.
On the other hand it is clear that A(I) = B(I) for each nonempty subset I.
Furthermore A has one i-column more than B, which implies

ν(c) = ν_A(c) = ν_B(c) + 1 = 1 + \sum_{I} (−1)^{|I|} B(I)
= 1 + n + \sum_{I ≠ ∅} (−1)^{|I|} B(I) = 1 + n + \sum_{I ≠ ∅} (−1)^{|I|} A(I).

Since A(∅) = n + 1, this gives

ν(c) = \sum_{I} (−1)^{|I|} A(I),

and the theorem is proved.
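The statement is also easy to test by brute force on random instances (an added sketch; the matrix sizes and the (0, 1) alphabet are arbitrary choices):

```python
# Brute-force check of Theorem 1 on random (0,1)-matrices.
from itertools import combinations
import random

def nu_direct(A, c):
    """Number of columns of A identical to c."""
    m, n = len(A), len(A[0])
    return sum(all(A[i][j] == c[i] for i in range(m)) for j in range(n))

def nu_inclusion_exclusion(A, c):
    """Right-hand side of Eq. (2): sum over all I of (-1)^|I| A(I), where A(I)
    counts the columns differing from c in every row of I (so A(empty) = n)."""
    m, n = len(A), len(A[0])
    total = 0
    for r in range(m + 1):
        for I in combinations(range(m), r):
            A_I = sum(all(A[i][j] != c[i] for i in I) for j in range(n))
            total += (-1)**r * A_I
    return total

random.seed(0)
for _ in range(100):
    m, n = random.randint(1, 4), random.randint(1, 6)
    A = [[random.randint(0, 1) for _ in range(n)] for _ in range(m)]
    c = [random.randint(0, 1) for _ in range(m)]
    assert nu_direct(A, c) == nu_inclusion_exclusion(A, c)
print("Theorem 1 verified on 100 random (0,1)-matrices")
```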
If the number A(I) does not depend on the elements of the set I, but only on
its cardinality |I|, then equation (2) may be written in the form

ν(c) = \sum_{i=0}^{m} (−1)^{i} \binom{m}{i} A(i),   (3)

where A(i) denotes the common value of A(I) for |I| = i.
Our object of investigation will be (0, 1)-matrices. Let c be the i-column of
such a matrix A. Take I_0 ⊆ [m], |I_0| = k, such that

c_i = 1 if i ∈ I_0,   c_i = 0 if i ∉ I_0.   (4)

Then the number A(I) is equal to the number of columns of A having 0's in
the rows labelled by the set I ∩ I_0, and 1's in the rows labelled by the set I \ I_0.
Suppose that the number A(I) depends only on |I ∩ I_0| and |I \ I_0|. If we denote
|I ∩ I_0| = i_1, |I \ I_0| = i_2, A(I) = A(i_1, i_2), then (2) may be written in the form

ν(c) = \sum_{i_1=0}^{k} \sum_{i_2=0}^{m−k} (−1)^{i_1+i_2} \binom{k}{i_1} \binom{m−k}{i_2} A(i_1, i_2).   (5)
2 Counting subsets of a finite set
Suppose that a finite set X = {x_1, x_2, . . . , x_n} is given. Label by 1, 2, . . . , 2^n all
subsets of X arbitrarily and define an n × 2^n matrix A in the following way:

a_{ij} = 1 if x_i lies in the set labelled by j, and a_{ij} = 0 otherwise.   (6)

Take I_0 ⊆ [n], |I_0| = k, and form the submatrix B of A consisting of those
rows of A whose indices belong to I_0. Let c be an arbitrary i-column of B. Define

I'_0 = {i ∈ I_0 : c_i = 1},   I''_0 = {i ∈ I_0 : c_i = 0}.   (7)

The number ν(c) is equal to the number of subsets that contain the set
{x_i : i ∈ I'_0} and do not intersect the set {x_i : i ∈ I''_0}. There are obviously

ν(c) = 2^{n−k}

such sets.
Furthermore, if I ⊆ I_0 then the number B(I) is equal to the number of
subsets that contain the set {x_i : i ∈ I ∩ I''_0} and do not meet the set {x_i : i ∈ I ∩ I'_0}.
It is clear that there are

B(I) = 2^{n−|I|}

such subsets, so that the formula (2) may be applied. It follows that

2^{n−k} = \sum_{i=0}^{k} (−1)^{i} \binom{k}{i} 2^{n−i}.
Thus we have
Proposition 2.1. For each nonnegative integer k,

1 = \sum_{i=0}^{k} (−1)^{i} \binom{k}{i} 2^{k−i}.
Note 2.1. The preceding equation is a trivial consequence of the Binomial theorem,
but here it is obtained independently of that theorem.
The preceding Proposition shows that counting i-columns over all subsets of
X always produces the same result.
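A direct numerical check of the identity in Proposition 2.1 (an added sketch, using the reconstructed form above):

```python
# Check Proposition 2.1: 1 = sum_{i=0}^{k} (-1)^i C(k, i) 2^{k-i} for small k.
from math import comb

for k in range(10):
    assert sum((-1)**i * comb(k, i) * 2**(k - i) for i in range(k + 1)) == 1
print("Proposition 2.1 holds for k = 0, ..., 9")
```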
We shall now make some restrictions on the subsets of X that we count. Take
0 ≤ m_1 ≤ m_2 ≤ n fixed, and consider the submatrix C of A consisting of the rows
whose indices belong to I_0, and the columns corresponding to those subsets of X
that have m (m_1 ≤ m ≤ m_2) elements.
Let c be an i-column of C. Define I'_0 = {i ∈ I_0 : c_i = 1}, |I'_0| = l.
The number ν(c) is equal to the number of sets that contain {x_i : i ∈ I'_0}
and do not intersect the set {x_i : i ∈ I_0 \ I'_0}. We thus have

ν(c) = \sum_{i=m_1−l}^{m_2−l} \binom{n−k}{i}.
On the other hand, for I ⊆ I_0 the number C(I) corresponds to the number
of sets that contain {x_i : i ∈ I \ I'_0} and do not intersect {x_i : i ∈ I ∩ I'_0}. Their
number is equal to

C(I) = \sum_{i_3=m_1−|I \setminus I'_0|}^{m_2−|I \setminus I'_0|} \binom{n−|I|}{i_3}.

It follows that the formula (5) may be applied. We thus have
Proposition 2.2. For 0 ≤ m_1 ≤ m_2 ≤ n and 0 ≤ l ≤ k,

\sum_{i=m_1−l}^{m_2−l} \binom{n−k}{i} = \sum_{i_1=0}^{l} \sum_{i_2=0}^{k−l} \sum_{i_3=m_1−i_2}^{m_2−i_2} (−1)^{i_1+i_2} \binom{l}{i_1} \binom{k−l}{i_2} \binom{n−i_1−i_2}{i_3}.   (8)
In the special case when one takes k = l, m_1 = m_2 = m we obtain

Corollary 2.1. For arbitrary nonnegative integers m, n, k,

\binom{n−k}{m−k} = \sum_{i=0}^{k} (−1)^{i} \binom{k}{i} \binom{n−i}{m}.   (9)
Note 2.2. The preceding is one of the best known binomial identities. It appears
in the book [1] in many different forms.
Taking m_1 = m_2 = m in (8) one gets

Corollary 2.2. For arbitrary nonnegative integers m, n, k, l (l ≤ k),

\binom{n−k}{m−l} = \sum_{i_1=0}^{l} \sum_{i_2=0}^{k−l} (−1)^{i_1+i_2} \binom{l}{i_1} \binom{k−l}{i_2} \binom{n−i_1−i_2}{m−i_2}.   (10)
For l = 0 we obtain

\binom{n−k}{m} = \sum_{i=0}^{k} (−1)^{i} \binom{k}{i} \binom{n−i}{m−i},   (11)

which is only another form of (9).
Taking n = 2k, l = k in (10) we obtain

\binom{k}{m−k} = \sum_{i_1=0}^{k} (−1)^{i_1} \binom{k}{i_1} \binom{2k−i_1}{m}.

Substituting k − i_1 by i we obtain

Corollary 2.3 (Klee's identity, [2], p. 13).

(−1)^{k} \binom{k}{m−k} = \sum_{i=0}^{k} (−1)^{i} \binom{k}{i} \binom{k+i}{m}.
From (8) we may obtain different formulae for partial sums of binomial
coefficients, that is, for the numbers of Bernoulli’s triangle. For instance, taking
l = 0, m_1 = 0, m_2 = m (and replacing n by n + k) we obtain

Corollary 2.4. For any 0 ≤ m ≤ n and arbitrary nonnegative integer k,

\sum_{i=0}^{m} \binom{n}{i} = \sum_{i_1=0}^{k} (−1)^{i_1} \binom{k}{i_1} \sum_{i_2=0}^{m−i_1} \binom{n+k−i_1}{i_2}.   (12)
Note 2.3. The number k in the preceding equation may be considered as a free
variable that takes nonnegative integer values. In particular, for k = 1 the equa-
tion represents the standard recursion formula for the numbers of Bernoulli's
triangle.
Taking k = l = m_1, m_2 = m (again with n replaced by n + k) one obtains

\sum_{i=0}^{m−k} \binom{n}{i} = \sum_{i_1=0}^{k} (−1)^{i_1} \binom{k}{i_1} \sum_{i_2=k}^{m} \binom{n+k−i_1}{i_2}.   (13)
Note 2.4. The formulae (12) and (13) differ in the range of the index i_2.
References
[1] J. Riordan, Combinatorial Identities. New York: Wiley, 1979.
arXiv:0704.0852
International Journal of Modern Physics E
© World Scientific Publishing Company
Bose-Einstein correlations of direct photons in Au+Au collisions at
√s_NN = 200 GeV
D. Peressounko for the PHENIX collaboration∗
RRC ”Kurchatov Institute”, Kurchatov sq.1,
Moscow, 123182, Russia
[email protected]
Received (received date)
Revised (revised date)
The current status of the analysis of direct photon Bose-Einstein correlations in Au+Au
collisions at
sNN = 200 GeV done by the PHENIX collaboration is summarized. All
possible sources of distortion of the two-photon correlation function are discussed and
methods to control them in the PHENIX experiment are presented.
1. Introduction
Photons have an extremely long mean free path length and escape from the hot
matter without rescattering. By measuring their Bose-Einstein (or Hanbury-Brown
Twiss, HBT) correlations one can extract the space-time dimensions of the hottest
central part of the collision1,2,3,4,5 in contrast to hadron HBT correlations which
measure the size of the system at the moment of its freeze-out. Moreover, photons
emitted at different stages of the collision dominate in different ranges of trans-
verse momentum6, therefore measuring photon correlation radii at various average
transverse momenta (KT ) one can scan the space-time dimensions of the system at
various times and thus trace the evolution of the hot matter.
Photons emitted directly by the hot matter – direct photons – constitute only a
small fraction of the total photon yield while the dominant contribution comes from
decays of the final state hadrons, mainly π0 → 2γ and η → 2γ mesons. Fortunately,
the lifetime of these hadrons is extremely large and the width of the Bose-Einstein
correlations between the decay photons is of the order of a few eV and cannot
obscure the direct photon correlations. This feature can be used to extract the direct
photon yield3: assuming that direct photons are emitted incoherently, the photon
correlation strength parameter can be related to the proportion of direct photons as
λ = (1/2)(N^{dir}_γ / N^{total}_γ)^2. This approach is probably the only way to experimentally
measure the direct photon yield at very small p_T. Presently, the only experiment to
∗For the full list of the PHENIX collaboration and acknowledgments, see9.
have measured direct photon Bose-Einstein correlations in ultrarelativistic heavy
ion collisions is WA987. An invariant correlation radius was extracted and the direct
photon yield was measured in Pb+Pb collisions at
sNN = 17 GeV.
Since the strength of the direct photon Bose-Einstein correlation is typically a
few tenths of a percent, it is important to exclude all background contributions
which could distort the photon correlation function. These contributions can be
classified as following: apparatus effects (close clusters interference – attraction of
close clusters in the calorimeter during reconstruction) and correlations caused by
real particles. The latter in turn can be divided into contribution due to ”splitting”
of particles – processes like antineutron annihilation in the calorimeter and photon
conversion on detector material in front of the calorimeter; contamination by corre-
lated hadrons (e.g. Bose-Einstein-correlated π±); background correlations of decay
photons. In this paper we consider all of these contributions in detail and describe
how to control for them in the PHENIX experiment.
2. Analysis
This analysis is based on the data taken by PHENIX in Run3 (d+Au) and Run4
(Au+Au). The total collected statistics is ≈ 3 billion d+Au events and ≈ 900 M
Au+Au events. Details of the PHENIX configuration in these runs can be found in
references 8 and 9, respectively.
2.1. Apparatus effects
Since correlation functions are rapidly rising functions at small relative momenta
any small distortion of the relative momentum for real pairs, because of errors in
reconstruction of close clusters in the calorimeter (”cluster attraction”) for example,
can lead to the appearance of a fake bump in the correlation function.
To explore the influence of cluster interference in the calorimeter EMCAL, we
construct a set of correlation functions by applying different cuts on the minimal
distance between photon clusters in EMCAL. To quantify the difference between
these correlation functions we fit them with a Gaussian and compare the extracted
correlation parameters. We find that for correlation functions that include clusters
with small relative distances there is strong dependence on minimal distance cut,
but for distance cuts above 24 cm (4-5 modules) the correlation parameters are
independent of the relative distance cut. This implies that with this distance cut
the apparatus effects are sufficiently small.
2.2. Photon conversion, n̄ annihilation, and similar backgrounds
The next class of possible backgrounds are processes in which one real particle
produces several clusters in the calorimeter close to each other. These are processes
like n̄ annihilation in the calorimeter producing several separated clusters, or photon
conversion in front of calorimeter, or residual correlations between photons that
November 9, 2018 19:7 WSPC/INSTRUCTION FILE DPeressounko-
ggHBT-T
Bose-Einstein correlations of direct photons in Au+Au collisions 3
(GeV)
0 0.05 0.1 0.15 0.2 0.25 0.3
C Min.Bias, Au+Au
Min.Bias, d+Au, scaled
Fig. 1. Two-photon correlation function measured in d+Au collisions at
sNN = 200 GeV scaled
to reproduce the height of the π0 peak in Au+Au collisions compared to the same correlation
function measured in Au+Au collisions at
sNN = 200 GeV. Absolute vertical scale is omitted
in this technical plot.
belong to different π0 in decays like η → 3π0 → 6γ. The common feature of this
type of process is that their strength is proportional to the number of particles per
event and not to the square of the number of particles, as would be the case for
Bose-Einstein correlations.
To estimate the upper limit on these contributions, we compare two-photon
correlation functions, calculated in d+Au and Au+Au collisions. For the moment
we assume, that all correlations at small relative momenta seen in d+Au collisions
are due to the background effects under consideration. Then we scale the correlation
function obtained in d+Au collisions with the number of π0 (that is we reproduce
the height of the π0 peak in Au+Au collisions):
Cscaled2 = 1−
hAu+Auπ
hd+Auπ
(C2 − 1). (1)
The result of this operation is shown in Fig. 1. We find that the scaled d+Au
correlation function lies well below (close to unity) the correlation function calcu-
lated for Au+Au collisions at small relative momenta. From this we conclude that
the contribution from effects with strength proportional to the first power of the
number of particles is negligible in Au+Au collisions.
2.3. Charged and neutral hadron contamination
Another possible source of distortion of the photon correlation function is a contam-
ination by (correlated) hadrons. Although we use rather strict identification criteria
November 9, 2018 19:7 WSPC/INSTRUCTION FILE DPeressounko-
ggHBT-T
4 D. Peressounko
for photons there still may be some admixture of correlated hadrons contributing
to the region of small relative momenta.
(GeV)
0 0.05 0.1 0.15 0.2 0.25 0.3 0.35
C Converted + EMCAL
EMCAL + EMCAL
Fig. 2. Comparison of two-photon correlation functions measured in Au+Au collisions at
sNN =
200 GeV by two different methods: both photons are registered in the EMCAL (closed) and one
photon is registered in EMCAL while the other is reconstructed through its external conversion
(open). Absolute vertical scale is omitted in this technical plot.
To exclude this possibility, we construct the two-photon correlation function
using one photon registered in the calorimeter EMCAL and reconstructing the sec-
ond photon from its conversion into an e+e− pair on the material of the beam
pipe. The photon sample, constructed using external conversions is completely free
from hadron contamination, so comparison of the standard correlation function
with the pure one allows to estimate the contribution from non-photon contami-
nation. This comparison is shown in Fig. 2. We find that the correlation function
constructed with the more pure photon sample demonstrates a slightly larger cor-
relation strength. This demonstrates that the observed correlation is indeed a pho-
ton correlation, while hadron contamination in the photon sample just increases
combinatorial background and reduces the correlation strength. In addition, this
comparison shows that we have properly excluded the region of cluster interference.
Due to deflection by the magnetic field the electrons of the e+e− conversion pair
hit the calorimeter far from the location of the pair photon used in the correlation
function and thus effects related to the interference of close clusters are absent.
2.4. Photon residual correlations
The last possible source of the distortion of the photon correlation function are
residual correlations between photons. We have already demonstrated that the con-
November 9, 2018 19:7 WSPC/INSTRUCTION FILE DPeressounko-
ggHBT-T
Bose-Einstein correlations of direct photons in Au+Au collisions 5
tributions of residual correlations between photons in decays like η → 3π0 → 6γ,
with strength proportional to Npart and not N
part is negligible in Au+Au collisions.
Below we consider other effects, which may cause photon correlations. These are
collective flow (and jet-like correlations) and correlations between photons, origi-
nated from decays of Bose-Einstein correlated mesons. Collective (elliptic) flow as
well as jet-like correlations are long-range effects, resulting in correlations at relative
angles much larger than under consideration here (for example, the opening angle of
a photon pair with 20 MeV mass and KT = 500 MeV is ∼ 5 degrees). Monte-Carlo
simulations demonstrate that flow and jet-like contribution are indeed negligible.
(GeV)
0 0.05 0.1 0.15 0.2 0.25 0.3
C Min.Bias Au+Au, Data
HBT resid.corr., Sim.0π
Fig. 3. Comparison of two-photon correlation functions measured in Au+Au collisions at
sNN =
200 GeV with Monte-Carlo simulations of the contribution of residual correlations due to decays
of Bose-Einstein-correlated neutral pions. Absolute vertical scale is omitted in this technical plot.
Potentially, the most serious distortion of the photon correlation function are
residual correlations between decay photons of HBT-correlated π0s. Monte-Carlo
simulations show that this contribution is not negligible, but has a rather specific
shape (see Fig. 3), so that it does not distort the photon correlation function at
small Qinv. This result can be explained as follows. Let us consider two π
0s with
zero relative momentum. The distribution of decay photons is isotropic in their rest
frame, and the probability to find a collinear photon pair (Qinv = 0) is suppressed
due to phase space reasons. The photon pair mass distribution has a maximum at
2/3mπ, not at zero. After convoluting with the pion correlation function we find
a step-like two-photon correlation function3. On the other hand, if one artificially
chooses photons with momentum along the direction of the parent π0 (e.g. by
looking at photon pairs at very large KT ), then the shape of the decay photon
correlation function will reproduce the shape of the parent π0 correlation. This
November 9, 2018 19:7 WSPC/INSTRUCTION FILE DPeressounko-
ggHBT-T
6 D. Peressounko
probably explains the different shape of the residual correlations due to decays of
HBT-correlated π0 found in10.
3. Conclusions
We have presented the current status of analysis of direct photon Bose-Einstein
correlations in the PHENIX experiment. We are able to measure the two-photon
correlation function with a precision sufficient to extract the direct photon corre-
lations. Correlation measurements in which one of the photon pair has converted
to an e+e− pair have been used to provide an important cross-check. We have
demonstrated that all known backgrounds are under control. The extraction of the
correlation parameters of direct photon pairs is in progress.
References
1. A.N. Makhlin, JETP Lett. 46:55 (1987); A.N. Makhlin, Sov.J.Nucl.Phys.
49:151,(1989).
2. D.K. Srivastava, J. Kapusta, Phys.Rev. C48:1335 (1993); D.K. Srivastava, C. Gale,
Phys.Lett. B319:407 (1993); D.K. Srivastava, Phys.Rev. D49:4523 (1994); D.K. Srivas-
tava, J. Kapusta, Phys.Rev. C50:505 (1994). D.K. Srivastava, Phys.Rev. C71:034905
(2005); S. Bass, B. Muller, D.K. Srivastava, Phys.Rev. Lett. 93:162301 (2004).
3. D. Peressounko, Phys.Rev. C67:014905 (2003).
4. J. Alam et al., Phys.Rev. C67:054902 (2003); J. Alam et al., Phys.Rev. C70:054901
(2004).
5. T. Renk, Phys.Rev. C71:064905 (2005); hep-ph/0408218.
6. D. d’Enterria and D.Peressounko, Eur.Phys.J.C46:451 (2006).
7. M.M. Aggarwal et al., Phys.Rev.Lett. 93:022301 (2004).
8. S.S.Adler et al., (PHENIX collaboration), Phys.Rev.Lett. 98:012002.
9. S.Bathe et al., (PHENIX collaboration), Nucl.Phys. A774:731 (2006).
10. D.Das et al., nucl-ex/0511055.
http://arxiv.org/abs/hep-ph/0408218
http://arxiv.org/abs/nucl-ex/0511055
Introduction
Analysis
Apparatus effects
Photon conversion, annihilation, and similar backgrounds
Charged and neutral hadron contamination
Photon residual correlations
Conclusions
|
0704.0853 | Normalized Ricci flow on nonparabolic surfaces | NORMALIZED RICCI FLOW ON NONPARABOLIC SURFACES
HAO YIN
Abstract. This paper studies normalized Ricci flow on a nonparabolic sur-
face, whose scalar curvature is asymptotically −1 in an integral sense. By a
method initiated by R. Hamilton, the flow is shown to converge to a metric of
constant scalar curvature −1. A relative estimate of Green’s function is proved
as a tool.
1. Introduction
Let (M, g) be a Riemannian manifold of dimension 2. The normalized ricci flow
= (r −R)gij ,
where R is the scalar curvature and r is some constant. For compact surface, r is
the average of scalar curvature. In this case, Hamilton [4] and Chow [2] proved the
normalized Ricci flow from any initial metric will exist for all time and converge
to a metric of constant curvature. It’s therefore nature to ask if such result holds
for non-compact surfaces. Recently, a preprint of Ji and Sesum [14] generalized
the above result to complete surfaces with logarithmic ends. Such surfaces have
infinities like hyperbolic cusps. In particular, they have finite volume, therefore are
parabolic, in the sense that there exists no positive Green’s function. One of their
result shows that the normalized Ricci flow from such a metric will exist for all time
and converge to hyperbolic metric. In this paper, we study nonparabolic complete
surfaces, i.e. surfaces admitting positive Green’s function. In contrast to [14],
such surfaces have at least one nonparabolic end and have infinite volume. For a
discussion of parabolic and nonparabolic ends and their geometric characterization,
see Li’s survey paper [6].
Here we choose r = −1 because if the flow converges, the limit metric will be
of constant curvature r. Since we are considering noncompact surfaces, r can’t
be positive. If r = 0, the limit will be flat R2 or its quotient. However, it’s
well known that these flat surfaces are parabolic. On the other hand, whether a
surface is parabolic or nonparabolic is invariant under quasi-isometries. Since if
the normalized Ricci flow converges, then the limit metric will be quasi-isometric
to the initial one, we know r can’t be zero.(For the definition of quasi-isometry, see
also [6].) If r < 0, we can always assume r = −1 by a scaling.
The main result of this paper is
Theorem 1.1. Let (M, g) be a nonparabolic surface with bounded curvature. If the
infinity is close to a hyperbolic metric in the sense that
|R+ 1| dV < +∞.
http://arxiv.org/abs/0704.0853v2
2 HAO YIN
Then, the normalized Ricci flow will converge to a metric of constant scalar curva-
ture −1.
As in [14], we try to apply the above result to prove results along the line
of Uniformization theorem. That amounts to prove the existence of a complete
hyperbolic metric within a given conformal class of a noncompact surface. In [14],
the authors proved that there is a uniformization theorem for Riemann surfaces
obtained from compact Riemann surface by removing finitely many points and
remarked that similar result should be true for Riemann surfaces obtained from
compact ones by removing finitely many disjoint disks and points. Our theorem
can be used to prove the same result in the case there is at least one disk removed.
In fact, we will give a unified proof, which includes and simplifies the proof of [14].
Precisely, we will show
Corollary 1.2. Let M be a Riemann surface obtained from compact Riemann sur-
face by removing finitely many disjoint disks and/or points. If no disk is removed,
then we further assume that the Euler number of M is less than zero. Then there
exists on M a complete hyperbolic metric compatible with the conformal structure.
The proof of Theorem 1.1 is along the same line as [14]. The method was initiated
by Hamilton in [4]. There, Hamilton considered only compact case. for the purpose
of generalizing this method to complete case, we need to overcome some analytic
difficulties. Precisely, one need to solve Poisson equations and obtain estimates for
the solutions, for all t. Those growth estimates for the solution are needed to apply
the maximum principle. As for the maximum principle, there are many versions of
maximum principle on complete manifolds. Since we will be working on complete
manifold with a changing metric, the closest version for our need is in [1]. We still
need a little modification.
Theorem 1.3. Suppose g(t) is a smooth family of complete metrics defined on M ,
0 ≤ t ≤ T with Ricci curvature bounded from below and
∣ ≤ C on M × [0, T ].
Suppose f(x, t) is a smooth function defined on M × [0, T ] such that
△tf −
whenever f(x, t) > 0 and
exp(−ar2t (o, x))f2+(x, t)dVt < ∞
for some a > 0. If f(x, 0) ≤ 0 for all x ∈ M , then f ≤ 0 on M × [0, T ].
Although there is no detail in [1], one can prove it using the method of Ecker
and Huisken in [3] and Ni and Tam in [12].
To solve the Poisson equation△u = R+1 for t = 0. We use a result of Ni[10], See
Theorem 3.1. That’s the reason why we assume
|R+ 1| dV < +∞. Moreover,
we prove a growth estimate of the solution under the further assumption that Ricci
curvature bounded from blow. This result is true for all dimensions. For the growth
estimate, an estimate of Green’s function is proved under the assumption that Ricci
curvature bounded from below. This estimate may be of independent interests, see
the discussion in Section 2.
Instead of solving △tu(x, t) = R(x, t) + 1 for later t. We solve an evolution
equation for u. Thanks to the recent preprint of Chau, Tam and Yu [1], we can
RICCI FLOW ON SURFACES 3
solve this evolution equation with a changing metric. Following a method in [11],
we show that u, |∇u| and △u satisfy the growth estimate like in equation (1). With
these preperation, we proceed to show that u(x, t) is indeed the potential functions
we need. Now the Theorem 1.1 follows from the approach of Hamilton and repeated
use of Theorem 1.3.
The paper is organized as follows: In Section 2, we prove the crucial estimate of
Green’s function needed for the growth estimate. In Section 3, we solve the Poisson
equation and prove the relevant growth estimates. In the last section, we prove
Theorem 1.1 and discuss results related to Uniformization theorem.
2. An estimate of Green’s function
In this section we prove that
Theorem 2.1. Let (M, g) be a complete noncompact manifold with Ricci curvature
bounded from below by −K. Assume that M admits a positive Green’s function
G(x, y). Let x0 be a fixed point in M . Then there exists constant A > 0 and B > 0,
which may depend on M and x0, so that
{G(x,y)>eAr(y,x0)}
G(x, y)dx ≤ BeAr(y,x0),
where r(y, x0) is the distance from y to x0.
Remark 2.2. It’s impossible to get an estimate of this kind with constant depending
only on K. Considering a family of nonparabolic manifolds Mi, which are becoming
less and less ’nonparabolic’, i.e. their infinities are closing up. For any A,B > 0,
there exists Mi and some xi ∈ Mi such that
{Gi(x,xi)>A}
Gi(x, xi)dx > B.
See [8].
Remark 2.3. To the best of the author’s knowledge, known estimates on Green’s
function in terms of volume of balls require Ricci curvature to be non-negative,
See [9]. There could be one estimate of such type for Ricci curvature bounded from
below, in light of [1]. If so, our relative estimate should be a corollary. The following
proof is a direct one.
We begin with a lemma,
Lemma 2.4. There is a constant C depending only on K and the dimension, such
that if Ricci curvature on B(x, 1) is bounded from below by −K and G(x, y) is the
Dirichlet Green’s function on B(x, 1), then
B(x,1)
G(x, y)dy < C.
Proof. Let H(x, y, t) be the Dirichlet heat kernel of B(x, 1). It’s easy to see
B(x,1)
H(x, y, t)dy ≤ 1,
for all t > 0.
4 HAO YIN
Now we prove that H(x, y, 2) is bounded from above. The proof is Moser itera-
tion, which has appeared several times. Here we follow computations in [17]. Since
we have Dirichlet boundary condition, we don’t need cut off function of space.
Let 0 < τ < 2 and 0 < δ ≤ 1/2 be some positive constants, σk = (1− (1/2)kδ)τ
and ηi be smooth function on [0,∞) such that 1) ηi = 0 on [0, σi], 2) ηi = 1 on
[σi+1,∞) and 3) η′i ≤ 2i+3(δτ)−1. Let pi = (1 + 2n )
i. Since H is a solution to the
heat equation, it’s easy to know Hp is a subsolution to the heat equation for p > 1.
−△y)Hp(x, y, t) ≤ 0.
Multiply by η2iH
pi and integrate
B(x,1)
−△y)Hpidydt ≤ 0.
Routine computation gives
B(x,1)
|∇yHpi |2 dydt+
B(x,1)
H2pi(x, y, T )dy ≤ 2i+3(τδ)−1
B(x,1)
H2pidydt.
The sobolev inequality in [13] implies
B(x,1)
(Hpi)
n−2 dy
≤ CV −2/n
B(x,1)
|∇yHpi |2 +H2pidy,
where V is the volume of B(x, 1). By Hölder inequality,
B(x,1)
H2pi+1dy ≤
B(x,1)
(Hpi)
n−2 dy
B(x,1)
H2pidy)2/n
≤ (CV −2/n
B(x,1)
|∇yHpi |2 +H2pidy)(
B(x,1)
H2pidy)2/n.
By (2), integrate over time
B(x,1)
H2pi+1dydt ≤ CV −2/nci+30 (στ)−(1+2/n)(
B(x,1)
H2pidydt)1+
where c0 = 2
1+2/n. A standard Moser iteration gives
t∈[τ,2]
y∈B(x,1)
H2(x, y, t) ≤ CV −1(στ)−
(1−δ)τ
B(x,1)
H2(x, y, t)dydt.
An iteration process as given in [7] implies the L1 mean value inequality. In par-
ticular,
y∈B(x,1)
H(x, y, 2) ≤ CV −1
B(x,1)
H(x, y, t)dydt ≤ CV −1.
Hence,
B(x,1)
H2(x, y, 2)dy ≤ CV −1.
RICCI FLOW ON SURFACES 5
Due to a Poincaré inequality in [7],
B(x,1)
H2(x, y, t)dy =
B(x,1)
2H△yHdy
B(x,1)
|∇yH |2 dy
B(x,1)
H2(x, y, t)dy.
This differential inequality implies
B(x,1)
H2(x, y, t)dy ≤
B(x,1)
H2(x, y, 2)dy × e−C(t−2)
≤ CV −1e−C(t−2).
Hölder inequality shows
B(x,1)
H(x, y, t) ≤ V (B(x, 1))
B(x,1)
H2(x, y, t)dy ≤ Ce−C(t−2),
for t ≥ 2. The lemma follows from
B(x,1)
G(x, y)dy =
B(x,1)
H(x, y, t)dydt.
Now let’s turn to the proof of Theorem 2.1.
Proof. The key tool in the proof is Gradient estimate for harmonic function. Recall
that if u is a positive harmonic function on B(x, 2R), then
B(x,R)
|∇ log u(x)|2 ≤ C1K + C2R−2
This is to say outside B(x, 0.1), the Green function as a function of y decays or
increases at most exponentially with a factor
C1K + 100C2.
(1) Consider G(x0, y), Set
p = max
y∈∂B(x0,1)
G(x0, y).
As pointed out in Li and Tam, in the paper constructing Green function, G(x0, y) ≤
p for y /∈ B(x0, 1). Since the Green function is symmetric, for any point y far out
in the infinity, G(y, x0) ≤ p.
(2) If the theorem is not true, then for any big A and B, there is a point y (far
away) so that
{G(x,y)>eAr(y,x0)}
G(x, y) > BeAr(y,x0).
We will derive a contradiction with (1).
Claim: {x|G(x, y) > eAr} ⊂ B(y, 1) is not true.
If true, then consider the Dirichlet Green function G1(z, y) on B(y, 1). It’s well
known that G(z, y) − G1(z, y) is a harmonic function. Notice that this harmonic
function has boundary value less than eAr. Therefore, its integration on B(y, 1) is
less than eAr × V ol(B(y, 1)). Since we assume Ricci lower bound, V ol(B(y, 1)) is
less than a universal constant depending on K.
6 HAO YIN
Therefore,
{G(x,y)>eAr}
G(x, y)dx ≤
B(y,1)
G(x, y)dx
≤ V ol(B(y, 1))× eAr +
B(y,1)
G1(x, y)dx
≤ C(K,n)× eAr,
where we used Lemma 2.4 for the last inequality. If we choose B to be any number
larger than C(K,n) in the above equation, then the choice of y gives an contradic-
tion and implies that the claim is true.
(3) There is a z ∈ {x|G(x, y) > eAr} so that d(z, y) = 1 because the set
{G(x, y) > eAr} is connected. This follows from the maximum principle and the
construction of Green’s function.
(3.1) If |d(y, x0)− d(z, x0)| < 0.3, then
Let σ be the minimal geodesic connecting z and x0.
Claim: the nearest distance from y to σ is no less than 0.1.
If not, let w be the point in σ such that d(y, w) < 0.1. Since d(y, z) > 1, we
d(w, z) > 0.9
Now, w is on the minimal geodesic from z to x0, so
d(w, x0) ≤ d(z, x0)− 0.9
d(y, x0) < d(w, x0) + d(y, w) < d(z, x0)− 0.8
This is a contradiction , so the claim is true.
We can use the gradient estimate along the segment σ. (Notice that d(z, x0) <
r(x, x0) + 1)
G(y, x0) >
G(y, z)
C1K + 100C2(r + 1))
This is a contradiction if we choose A >>
C1K + 100C2.
(3.2) If d(z, x0) ≤ d(y, x0)− 0.3, then
The distance from y to the minimal geodesic connecting z and x0 will be larger
than 0.1. The above argument gives a contradiction.
(3.3) If d(z, x0) ≥ d(y, x0) + 0.3, then
Since G(z, y) > eAr, we move the center to z, by symmetry of Green function.
G(y, z) > eAr(y,x0)) > eA
′r(z,x0). This is case (3.2). We get a contradiction at z.
This finishes the proof of estimate of Green function. �
3. Poisson equations △u = R+ 1
This section is divided into two parts. The first part solves the Poisson equation
for t = 0. The second part solves for t > 0 before the maximum time using an
indirect way.
First, we use Theorem 2.1 to obtain an growth estimate of the solution of the
Poisson equation △u = R + 1 for t = 0. The existence part without curvature
restriction and boundedness of f of the following theorem is due to Lei Ni in [10].
RICCI FLOW ON SURFACES 7
Theorem 3.1. Let M be a complete nonparabolic manifold with Ricci curvature
bounded from below by −K. For non-negative bounded continuous function f the
Poisson equation
△u = −f
has a non-negative solution u ∈ W 2,nloc (M) ∩ C
loc (M)(0 < α < 1) if f ∈ L1(M).
Moreover, for any fixed x0 ∈ M , there exists A > 0 and C > 0 such that
u(x) ≤ CeAr(x,x0).
Proof. Let G(x, y) be the positive Green’s function.
G(x, y)f(y)dy =
{G(x,y)≤eAr(x,x0)}
G(x, y)f(y)dy
{G(x,y)>eAr(x,x0)}
G(x, y)f(y)dy
≤ CeAr(x,x0).
For the first term, we use the assumption that f is integrable, for the second term,
we use the boundedness of f and the Theorem 2.1. The estimate above shows the
Poisson equation is solvable with the required estimate. �
Corollary 3.2. Let M be a surface satisfying the assumptions in Theorem 1.1.
There exists a solution u0 to the equation △u0 = R(x) + 1 satisfying
exp(−ar2(x, x0))u20(x)dV < ∞
exp(−br2(x, x0)) |∇u0|2 (x)dV < ∞
where a and b are some positive constants.
Proof. Solve the Poisson equation for the positive part and the negative part of
R+ 1 respectively. Then subtract the solutions. The first integral estimate follows
from the pointwise growth estimate and volume comparison.
Let R > 1. Choose a cut-off function ϕ such that
ϕ(x) =
1 x ∈ B(x0, R)
0 x /∈ B(x0, 2R)
|∇ϕ|2 ≤ C1ϕ.
Multiply the equation by ϕu0 and integrate over M ,
ϕu0△u0dV =
(R + 1)ϕu0dV,
which implies
ϕ |∇u0|2 dV +
u0∇ϕ · ∇u0dV = −
(R+ 1)ϕu0dV.
8 HAO YIN
Hence
(ϕ− |∇ϕ|
) |∇u0|2 dV ≤ C
B(x0,2R)
u20dV + C
B(x0,2R)
|u0| dV.
B(x0,2R)
u20dV + CV ol(B(x0, 2R)).
From the integration estimate of u0,
B(x0,2R)
u20dV ≤ Ce4aR
By choice of ϕ,
B(x0,R)
|∇u0|2 dV ≤ CeãR
From here, it’s not difficult to see the estimate we need. �
Now let’s look at the case of t > 0. In fact, it’s not difficult to show the above
method can be used for t > 0. This amounts to show that M is still nonparabolic
for t > 0 and
|R+ 1| dV is still finite. The first claim is trivial and the second
follows from the evolution equation and maximum principle. Assume the solutions
are u(t). We have trouble in deriving the evolution equation for u(t), due to the
possible existence of nontrivial harmonic functions. This explains why we use the
following indirect way.
Lemma 3.3. Assume the normalized Ricci flow exists for t ∈ [0, Tmax). The
following equation has a solution u(x, t) (0 ≤ t < Tmax) with initial value u0,
= △u− u,
where △ is the Laplace operator of metric g(t). Moreover, there exists a > 0
depending on T such that for any T < Tmax
exp(−ar2(x, x0))u2(x, t)dVt < ∞.
Similar estimates hold for |∇u| and △u with different constants.
Remark 3.4. Since g(0) and g(t) are equivalent up to a constant depending on T ,
it doesn’t matter whether we estimate ∇u or ∇tu and whether we use r to stand
for distance at g(0) or g(t) if t ∈ [0, T ].
Proof. In [1], the authors considered a class of evolution equation with changing
metric. ∂u
= △u− u with the underling metric evolving by normalized Ricci flow
is in this class. They proved, among other things, that the fundamental solution
Z(x, t; y, s) has a Gaussian upper bound, i.e
Z(x, t; y, x) ≤ C
t− s)
r2(x,y)
D(t−s) .
These constants depends on the solution of normalized Ricci flow and T . See
Corollary 5.2 in [1]. For simplicity, denote Z(x, t; y, 0) by H(x, y, t), then to solve
the equation, it suffices to show the following integral converges,
u(x, t) =
H(x, y, t)u0(y)dy.
RICCI FLOW ON SURFACES 9
Bt(x,1)
H(x, y, t)u0(y)dy
≤ CeAr(x,x0),
because the integral of H on Bt(x, 1) is less than 1 and u0 grows at most exponen-
tially by Theorem 2.1.
M\Bt(x,1)
H(x, y, t)u0(y)dy
M\Bt(x,1)
r2(x,y)
Dt |u0(y)| dy.
By volume comparison,
Vx(1) ≥ C1e−A1r(x,x0)Vx0(1)
t) ≥ C2e−A1r(x,x0) min(1, tn/2).
Therefore
M\Bt(x,1)
H(x, y, t)u0(y)dy
M\Bt(x,1)
CeA1r(x,x0)e−
r2(x,y)
2DT |u0(y)| dy
M\Bt(x,1)
CeA2r(x,x0)eAr(x,y)e−
r2(x,y)
2DT dy
≤ CeA2r(x,x0).
In summary,
|u(x, t)| ≤ CeAr(x,x0),
where A means a different constant. Volume comparison then implies
exp(−ar2(x, x0))u2(x, t)dVt < ∞.
For estimates on derivatives, note first that etu(x, t) is a solution of heat equation
(with evolving metric)
with initial value u0. Since we allow constants depend on T , it’s equivalent to prove
estimates for etu(x, t). Therefore, from now on, to the end of this proof, we assume
u(x, t) is a solution of heat equation. Then
(2) (△− ∂
)u2 = 2 |∇u| .
Assume that ϕ : R+ → R+ satisfies
1) ϕ(x) = 1 for x ≤ 1;
2) ϕ(x) = 0 for x ≥ 2.
Choose the cut-off function ϕ(
r(x,x0)
)(R > 1). Multiplying this to the equation
(2) and integrate,
r(x, x0)
) |∇u|2 dVtdt ≤
r(x, x0)
)u2dVtdt.
△ϕ( r
) = div(ϕ′(
= ϕ′′(
|∇r|2
+ ϕ′(
10 HAO YIN
By definition of ϕ, we know ϕ( r
) vanishes unless R ≤ r(x, x0) ≤ 2R. Laplacian
comparison implies (curvature is bounded from below −k)
△r ≤ (n− 1)
kcoth(
kr) ≤ C.
Therefore,
ϕ△u2dVt ≤ C
B(x0,2R)
u2dVt.
Let dVt = e
FdV0,
(ϕu2)eF dV0dt ≥
(ϕu2eF )dtdV0 −
Cϕu2dVtdt
ϕu2(x, T )dVT −
ϕu2(x, 0)dV0 − C
ϕu2dVtdt
ϕu20(x)dV0 − C
ϕu2dVtdt.
Here we have used the fact that ∂e
is bounded. Combined with equation (3) and
B(x0,R)
|∇u|2 dVtdt ≤ C
B(x0,2R)
u2dVtdt+
B(x0,2R)
u20(x)dV0.
From here it’s easy to see the type of estimate in Theorem 1.3. For △u, it suffices
to consider
∣. The Bochner formula in this case is (remember we have assumed
that u is a solution of the heat equation),
(△− ∂
) |∇u|2 = 2
2 − |∇u|2 .
The same argument as before works for
Lemma 3.5. For t ∈ [0, Tmax),
△tu(x, t) = R(x, t) + 1.
Proof. We know for t = 0 it’s true. Calculation shows
(△tu−R(t)− 1) = (R+ 1)△tu+△t(△tu− u)−△tR−R(R+ 1)
= △t(△tu−R(t)− 1) +R(△tu−R− 1)
By previous lemma, we have growth estimate for △tu−R(t)−1. If △tu−R−1 ≥ 0,
−△t)(△tu−R(t)− 1) ≤ C(△tu−R− 1).
If △tu−R− 1 ≤ 0, then
−△t)(△tu−R(t)− 1) ≥ C(△tu−R− 1).
Apply maximum principle for △tu − R − 1, which is zero at t = 0. We know it’s
zero forever. �
RICCI FLOW ON SURFACES 11
4. Proof of the main theorem and the corollary
Assume we have a surface satisfying the assumptions of Theorem 1.1. Short
time existence is known, see [15]. The long time existence and convergence follows
exactly by an argument of Hamilton in [4]. For completeness, we outline the steps.
Solve Poisson equations△tu(x, t) = R(x, t)+1 as we did. Consider the evolution
equation for H = R+ 1 + |∇u|2,
H = △H − 2 |M |2 −H,
where M = ∇∇u − 1
△f · g. Since we have growth estimate for H , maximum
principle says
R+ 1 ≤ H ≤ Ce−t.
Therefore, after some time R will be negative everywhere. Applying maximum
principle again to the evolution equation of scalar curvature
R = △R+ R(R+ 1)
will prove Theorem 1.1.
Next, we discuss the application of the above theorem to Uniformization theorem.
Let S be a compact Riemann surface. Let p1, · · · , pk be k different points in S
and D1, · · · , Dl be l domains on S such that all of them are disjoint and Di is
diffeomorphic to disk. Denote S \ ∪iDi \ {p1, · · · , pk} by M . The aim is to show
there exists a complete hyperbolic metric on M compatible with the conformal
structure.
The approach is to construct an initial metric g0 on M compatible with the
conformal structure so that the normalized Ricci flow starting from g0 will converge
to a hyperbolic metric. Assume there is metric h in the given conformal class of S.
Note that h is incomplete as a metric on M .
For pi, there is an isothermal coordinate (x, y) around pi. By a conformal change
of h, one can ask g0 to be
(x2 + y2) log2(x2 + y2)
(dx2 + dy2)
in a small neighborhood Ui of pi.
Remark 4.1. This is called hyperbolic cusp metric in [14] and it has scalar curva-
ture −1.
For Dj, let r be the distence to ∂Dj on M with respect to h. Let Vj be a
neighborhood of ∂Dj in M . Let (r, θ) be the Fermi coordinates for ∂Dj so that
h0 = dr
2 +A(r, θ)dθ2.
We will find ρ = ρ(r, θ) such that
1) ρ = 0 on ∂Dj;
2) dρ 6= 0 on ∂Dj;
12 HAO YIN
is asymptoticly hyperbolic in high order. Let K and K0 be the Gaussian curvature
of h and g0 respectively. We have the formula,
K0 = ρ
2(△h log ρ+K).
In order that K0 = −1,
1− |∇ρ|2 + ρ△ρ+ ρ2K = 0.
In terms of r and θ,
|∇ρ|2 = (∂ρ
)2 +A−1(r, θ)(
△ρ = ∂
Here A, B, C and D are smooth functions of r and θ. The equation now becomes
(5) ρ
+ 1− (
)2 −A−1(
)2 + ρ2K = 0.
If equation (5) is true at r = 0, then
(r, θ) = 1.
Here we used that fact that ρ > 0.
Set η(r, θ) = ρ
. Equation (6) implies η(0, θ) = 1. Equation (5) becomes
+Brη +Br2 ∂η
+ Cr2 ∂η
+Dr2 ∂
+ 1−η
)2 −A−1 r
)2 + ηr2K = 0
For the convinience of formal calculation, this equation is rewritten as
(7) (D2 −D − 2)η + F [r, η] = 0,
where D = r ∂
F [r, η] = Brη +Br2
+ Cr2
(1− η)2
)2 −A−1 r
)2 + ηr2K.
Equation (7) is a very typical form of Fuchsian type PDE. Formal solutions of
this kind of equation has been discussed many times. For example, Kichenassamy
[5] and Yin [16]. We will only outline the main steps here, for details see [5] and
[16].
Consider formal solution with the following expansion,
(8) η(r, θ) = 1 +
aij(θ)r
i(log r)j .
We will call the sum
j=0 aijr
i(log r)j the i-level of the expansion. Note that D
maps i-level to i-level. Details on formal calculation could be find in [5] and [16].
A common feature of all terms in F [r, η], which is crutial in obtaining a formal
solution, is that the k-level of F [r, η] could be calculated with knowledge of only
l-level of η with l < k. For example, consider (1 − η)2/η. It’s the multiplication
of three formal series, two 1 − η and 1/η. In order the k-level of η appears in the
RICCI FLOW ON SURFACES 13
k-level of (1 − η)2/η, the only possibility is that two of the three series contribute
zero level and one k-level. However, the zero level of 1− η vanishes.
The only thing we need is that there exists a formal solution and furthermore
due to Borel’s Lemma as in [16], there is an approximate solution so that
(D2 −D − 2)η + F [r, η] = o(rk)
for any k. In terms of ρ,
(9) K0 + 1 = 1 + ρ
2(△h log ρ+K) = o(ρk)
for any k. This metric g0 near ∂Dj has Gaussian curvature -1 asymptotically. By
a scaling, we assume it has scalar curvature -1 asymptotically.
We construct g0 by doing the above to every point Pi and disk Dj. If there is
at least one disk removed, we know M is nonparabolic.
|R+ 1| dV
is finite because of equation (9). Therefore, Theorem 1.1 proves the Uniformization
in this case.
If there is no disk removed, i.e. M = S \ {p1, · · · , pk} and M has negative
Euler number, then it’s proved in [14] that there exists a hyperbolic metric in the
conformal class. A large part of [14] is devoted to solve
(10) △g0u = Rg0 + 1
with |∇u| < ∞.
Observe that the above equation is equivalent to
(11) △hu =
(Rg0 + 1).
Since every end of (M, g0) is a hyperbolic cusp, Gauss-Bonnet theorem says
Rg0dV0 = 2πχ(M) < 0.
There exists a function f of compact support on M such that the volume of
(M, efg0) is −2πχ(M), because (M, g0) has finite volume. Denote efg0 by g0,
since the infinity is not changed, equation (12) is still true. Now, the volume of
(M, g0) is −2πχ(M). This implies
(Rg0 + 1)dV0 = 0.
Therefore
(Rg0 + 1)dVh = 0.
By construction of g0, we know
(Rg0+1) is zero near Pi. So
(Rg0+1) is a smooth
function on S. Therefore, equation (11) is solvable. Since u is a smooth function
on compact surface S, u has bounded gradient with respect to h. The relation of
h and g0 near Pi is explicit. It’s straight forward to check u has bounded gradient
as a function of (M, g0). This simplifies the proof in [14].
14 HAO YIN
Remark 4.2. In the case that there is at least one disk removed, by construction
of g0, Rg0 +1 vanishes at high order near ∂Dj. Then one can extend the definition
(Rg0 + 1) to S so that
(Rg0 + 1)dVh = 0.
The rest is the same as in the previous case.
This method of solving Poisson equation depends on the conformal structure of
M , therefore Theorem 3.1 and Theorem 1.1 are not coverd by the above discussion.
References
[1] Chau, A., Tam, L.-F. and Yu, C., Pseudolocality for the Ricci flow and applications, preprint,
DG/0701153.
[2] Chow, B., The Ricci flow on the 2-sphere, J. Diff. Geom. 33(1991), 325-334.
[3] Ecker, K. and Huisken, G., Interior estimates for hypersurfaces moving by mean curvature,
Invent. Math., 105(1991), 547-569.
[4] Hamilton, R., The Ricci flow on surfaces. Mathematics and General Relativity, Contemporary
Mathematics, 71(1988), 237-261.
[5] Kichenassamy, S., On a conjecture of Fefferman and Graham, Adv. In Math., 184(2004),
268-288.
[6] Li, P., Curvature and function theorey on Riemannian manifolds, Surveys in Differential Ge-
ometry, Vol VII, International Press(2000), 375-432.
[7] Li, P. and Schoen, R., Lp and mean value properties of subharmonic functions on Riemannian
manifolds, Acta Math. , 153(1984), no.3-4, 279-301.
[8] Li, P. and Tam, L.-F., Symmetric Green’s function on complete manifolds, Amer. J. Math.,
109(1987), 1129-1154.
[9] Li,P. and Yau, S.-T., On the parabolic kernel of the Schrödinger operator, Acta. Math.,
156(1986), 139-168.
[10] Ni, L. Poisson equation and Hermitian-Einstein metrics on holomorphic vector bundles over
complete noncompact Kähler manifolds, Indiana Univ. Math. Jour., 51 (2002), 670-703.
[11] Ni, L. and Tam, L.-F., Plurisubharmonic functions and the structure of complete Kähler
manifolds with nonnegative curvture, J. Diff. Geom., 64(2003), 457-524.
[12] Ni, L. and Tam, L.-F., Kähler Ricci flow and Poincaré Lelong equation, Comm. Anal. Geom.,
12(2004), no 1. 111-141.
[13] Saloff-Coste, L. Uniform elliptic operators on Riemannian manifolds, J. Diff. Geom., 36(1992),
417-450.
[14] Ji, L.-Z. and Sesum, N. Uniformization of conformally finite Riemann surfaces by the Ricci
flow, preprint, DG/0703357.
[15] Shi, W.-X., Deforming the metric on complete Riemannian manifolds, J. Diff. Geom.,
30(1989), 223-301.
[16] Yin, H., Boundary regularity of harmonic maps from Hyperbolic space into nonpositively
curved manifolds, to appear in Pacific. J. Math..
[17] Zhang, Q.-S., Some gradient estimates for the heat equation on domains and for an equation
by Perelman, preprint, DG/0605518.
1. Introduction
2. An estimate of Green's function
3. Poisson equations u=R+1
4. Proof of the main theorem and the corollary
References
|
0704.0854 | Polarization properties of subwavelength hole arrays consisting of
rectangular holes | myjournal manuscript No.
(will be inserted by the editor)
Polarization properties of subwavelength hole arrays consisting of
rectangular holes
Xi-Feng Ren, Pei Zhang, Guo-Ping Guo⋆, Yun-Feng Huang, Zhi-Wei Wang, Guang-Can Guo
Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei 230026, People’s Republic of
China
Received: date / Revised version: date
Abstract Influence of hole shape on extraordinary op-
tical transmission was investigated using hole arrays con-
sisting of rectangular holes with different aspect ratio.
It was found that the transmission could be tuned con-
tinuously by rotating the hole array. Further more, a
phase was generated in this process, and linear polar-
ization states could be changed to elliptical polarization
states. This phase was correlated with the aspect ratio of
the holes. An intuitional model was presented to explain
these results.
PACS numbers:78.66.Bz,73.20.MF, 71.36.+c
1 introduction
In metal films perforated with a periodic array of sub-
wavelength apertures, it has long been observed that
there is an unusually high optical transmission[1]. It is
⋆ E-mail: e-mail: :[email protected]
believed that metal surface plays a crucial role and the
phenomenon is mediated by surface plasmon polaritons
(SPPs) and there is a process of transforming photon
to SPP and back to photon[2,3,4]. This phenomenon
can be used in various applications, for example, sen-
sors, optoelectronic device, etc[5,6,7,8,9,10]. Polariza-
tion properties of nanohole arrays have been studied in
many works[11,12,13]. Recently, orbital angular momen-
tum of photons was explored to investigate the spatial
mode properties of surface plasmon assisted transmis-
sion [14,15]. It is also showed that entanglement of pho-
ton pairs can be preserved when they respectively travel
through a hole array [15,16,17]. Therefore, the macro-
scopic surface plasmon polarizations, a collective excita-
tion wave involving typically 1010 free electrons propa-
gating at the surface of conducting matter, have a true
quantum nature. However, the increasing use of EOT
requires further understanding of the phenomenon.
http://arxiv.org/abs/0704.0854v2
2 Xi-Feng Ren, Pei Zhang, Guo-Ping Guo, Yun-Feng Huang, Zhi-Wei Wang, Guang-Can Guo
The polarization of the incident light determines the
mode of excited SPP which is also related to the periodic
structure. For the manipulation of light at a subwave-
length scale with periodic arrays of holes, two ingredi-
ents exist: shape and periodicity[2,3,4,11,18,19,20]. In-
fluence of unsymmetrical periodicity on EOT was dis-
cussed in [21]. Influence of the hole shape on EOT was
also observed recently[18,20], in which the authors mainly
focused on the transmission spectra. In this work, we
used rectangle hole arrays to investigate the influence of
hole shape on the polarization properties of EOT. It is
found that linear polarization states could be changed
to elliptical polarization states and a phase could be
added between two eigenmode directions. The phase was
changed when the aspect ratio of the rectangle holes was
varied. The hole array was also rotated in the plane per-
pendicular to the illuminate beam. The optical transmis-
sion was changed in this process. It strongly depended
on the rotation angle, in other words, the angle between
polarization of incident light and axis of hole array, as
in the case with unsymmetrical hole array structure[21].
2 experimental results and modeling
2.1 Relation between transmission efficiency and
photon polarization
Fig. 1(a) is a scanning electron microscope picture of
part of our hole arrays. The hole arrays are produced
as follows: after subsequently evaporating a 3-nm tita-
nium bonding layer and a 135-nm gold layer onto a 0.5-
mm-thick silica glass substrate, a focused ion beam etch-
ing system is used to produce rectangle holes (100nm×
100nm, 100nm × 150nm, 100nm × 200nm, 100nm ×
300nm respectively) arranged as a square lattice (520nm
period). The area of the hole array is 10µm× 10µm.
Transmission spectra of the hole arrays were recorded
by a silicon avalanche photodiode single photon counter
couple with a spectrograph through a fiber. White light
from a stabilized tungsten-halogen source passed though
a single mode fiber and a polarizer (only vertical polar-
ized light can pass), then illuminated the sample. The
hole arrays were set between two lenses of 35mm focal
length, so that the light was normally incident on the
hole array with a cross sectional diameter about 10µm
and covered hundreds of holes. The light exiting from
the hole array was launched into the spectrograph. The
hole arrays were rotated anti-clockwise in the plane per-
pendicular to the illuminating light, as shown in Fig.
(a) (b)
Fig. 1 (Color online)The rectangle hole arrays. (a) Scanning
electron microscope pictures. (b) Rotation direction. S (L)
is the axis of short (long) edge of rectangle hole; H(V) is
horizontal (vertical) axis.
Polarization properties of subwavelength hole arrays consisting of rectangular holes 3
1(b). Transmission spectra of the hole arrays for rota-
tion angle θ = 0o and 90o were given in Fig. 2. There
were large difference between the two cases, which was
also observed in [18].
Further, the typical hole array(100nm×300nm holes)
was rotated anti-clockwise in the plane perpendicular to
the illuminating light(see Fig.1 (b)). Transmission effi-
ciencies of H and V photons(702nm wavelength) were
measured with rotation angle θ = 0o, 30o, 45o, 60o, and
90o respectively, as shown in Fig. 3. They were varied
with θ. To explain the results, we gave a simple model.
For our sample, photons with 702nm wavelength will
excite the SPP eigenmodes (0,±1) and (±1, 0). Since
the SPPs were excited in the directions of long (L) and
short (S) edges of rectangle holes, we suspected that this
550 600 650 700 750 800 850
550 600 650 700 750 800 850
550 600 650 700 750 800 850
550 600 650 700 750 800 850
Wavelength(nm)
Fig. 2 (Color online)Hole array transmittance as a function
of wavelength for rotation angle θ = 0o(black square dots)
and 90o(red round dots)(holes for a, b, c, and d are 100nm×
100nm, 100nm × 150nm, 100nm × 200nm, and 100nm ×
300nm respectively). The dashed vertical lines indicate the
wavelength of 702nm used in the experiment.
two directions were eigenmode-directions for our sample.
The polarization of illuminating light was projected into
the two eigenmode-directions to excite SPPs. After that,
the two kinds of SPPs transmitted the holes and irritated
light with different transmission efficiencies TL and TS
respectively. For light whose polarization had an angle θ
with the S direction, the transmission efficiency Tθ will
Tθ = TS cos
2(θ) + TL sin
2(θ). (1)
This equation was also given in the works[20,21]. Due
to the unequal values of TL and TS, the whole transmis-
sion efficiency was varied with angle θ. So if we know
the transmission spectra for enginmode-directions (here
L and S), we can calculate out the transmission spectra
(including the heights and locations of peaks) for any
0 15 30 45 60 75 90
10000
20000
30000
40000
50000
60000
70000
Tilt angle (degree)
Fig. 3 (Color online)Transmittance as a function rotation
angle θ for photons in 702nm wavelength(100nm × 300nm
holes). Red round dots and black square dots are the counts
for V and H photons respectively. The lines come from the-
oretical calculation.
4 Xi-Feng Ren, Pei Zhang, Guo-Ping Guo, Yun-Feng Huang, Zhi-Wei Wang, Guang-Can Guo
θ. The theoretical calculations were also given in Fig.
3, which agreed well with the experimental data. The
similar results were also observed when the hole arrays
(100nm× 150nm and 100nm× 200nm) were used. With
this model, the transmission efficiency can be continu-
ously tuned in a certain range.
2.2 Influence of hole shape on photon polarization
To investigate the polarization property of the hole ar-
ray, we used the method of polarization state tomog-
raphy. Experimental setup was shown in Fig. 4. White
light from a stabilized tungsten-halogen source passed
though single mode fiber and 4nm filter (center wave-
length 702 nm) to generate 702nm wavelength photons.
Polarization of input light was controlled by a polarizer,
a HWP (half wave plate, 702nm) and a QWP (quar-
ter wave plate, 702nm). The hole array was set between
two lenses of 35mm focal length. Symmetrically, a QWP,
a HWP and a polarizer were combined to analyze the
polarization of transmitted photons. For arbitrary in-
put states, the output states were measured in the four
bases: H , V , 1/
2(|H〉 + |V 〉), and 1/
2(|H〉 + i|V 〉).
With these experimental data, we could get the density
matrix of output states, which gave the full polarization
characters of transmitted photons. For example, in the
case of θ = 0o, for input state 1/
2(|H〉 + eI∗0.5π|V 〉),
four counts (8943, 31079, 3623 and 21760) were recorded
when we used the four detection bases. The density ma-
trix was calculated as:
0.223 −0.410− 0.043i
−0.410 + 0.043i 0.777
, (2)
which had a fidelity of 0.997 with the pure state 0.472|H〉+
0.882eI∗0.967π|V 〉. Compared this state with the input
state, we found that not only the ratio of |H〉 and |V 〉
was changed, but also a phase ϕ = 0.467π was added
between them. The similar phenomenon was also ob-
served when the input state was 1/
2(|H〉 + |V 〉) and
in this case ϕ = 0.442π. We also considered the cases
for θ = 30o, 45o, 60o, and 90o. The experimental density
matrices had the fidelities all larger than 0.960 with the
theoretical calculations, where ϕ = (0.462 ± 0.053)π. It
can be seen that the phase ϕ was hardly influenced by
the rotation.
To study the dependence of phase ϕ with the hole
shape, we performed the same measurements on other
hole arrays which were shown in Fig. 1. It was found
that ϕ was changed with the aspect ratio of the rectan-
Filter Detector
HA SMF Source SMF
Polarization Controller
Polarization Analyzer
Fig. 4 (Color online)Experimental setup to investigate the
polarization property of our rectangle hole array. Polariza-
tion of input light was controlled by a polarizer, a HWP and
a QWP. The hole array was set between two lenses of 35mm
focal length. Symmetrically, a QWP, a HWP and a polar-
izer were combined to analyze the polarization of transmitted
photons.
Polarization properties of subwavelength hole arrays consisting of rectangular holes 5
gle holes. Fig. 5 gave the relation between ϕ and aspect
ratio. The phases are 0, (0.227±0.032)π, (0.357±0.020)π
and (0.462±0.053)π for aspect ratio 1, 1.5, 2.0 and 3.0 re-
spectively. As mentioned above, period is another impor-
tant parameter in the EOT experiments. Since no similar
result was observed for hole arrays with symmetrical pe-
riods, a special quadrate hole array(see Fig. 1 of [21]) was
also investigated to show the influence of the hole period.
We found that even the periods were different in two di-
rections, there was no birefringent phenomenon(ϕ = 0).
This birefringent phenomenon might be explained
with the propagating of SPPs on the metal surface. As
we know, the interaction of the incident light with sur-
face plasmon is made allowed by coupling through the
grating momentum and obeys conservation of momen-
k sp =
k 0 ± i
Gx ± j
Gy, (3)
1.0 1.5 2.0 2.5 3.0
Aspect ratio
Fig. 5 (Color online)Relation between birefringent phase ϕ
and hole shape aspect ratio. ϕ becomes lager when the aspect
ratio increases.
where
k sp is the surface plasmon wave vector,
k 0 is the
component of the incident wave vector that lies in the
plane of the sample,
Gx and
Gy are the reciprocal lattice
vectors, and i, j are integers. Usually, Gx = Gy = 2π/d
for a square lattice, and relation
k sp ∗ d = mπ was sat-
isfied, where m was the band index[22]. While for our
rectangle hole arrays, the length of holes in L direction
was changed form 150nm to 300nm, which was not as
same as it in S direction. Though Gx = Gy = 2π/d for
our rectangle hole array, the time for surface plasmon
polariton propagating in the L direction must be influ-
enced by the aspect ratio of hole shape, which could not
be same as that in the S direction. A phase difference
ϕ was generated between the two directions, leading the
birefringent phenomenon. Due to the absorption or scat-
tering of the SPPs and scattering at the hole edges, it
is hard to give the accurate value of the phase or the
exact relation between the phase and aspect ratio of
holes. Even so, ϕ could be controlled by changing the
hole shape. As a contrast, there was no birefringent phe-
nomenon observed when the quadrate hole array(see Fig.
1 of [21]) was used. The reason was that phase Gx ∗ dx
always equal to Gy ∗ dy, even Gx 6= Gy for the quadrate
hole array.
3 conclusion
In conclusion, rectangle hole array was explored to study
the influence of hole shape on EOT, especially the prop-
erties of photon polarization. Because of the unsymmet-
6 Xi-Feng Ren, Pei Zhang, Guo-Ping Guo, Yun-Feng Huang, Zhi-Wei Wang, Guang-Can Guo
rical of the hole shape, a birefringent phenomenon was
observed. The phase was determined by the hole shape,
which gave us a potential method to control this bire-
fringent process. It was also found that the transmission
efficiency can be tuned continuously by rotating the hole
array. These results might be explained using an intu-
itional model based on surface plasmon eigenmodes.
This work was funded by the National Fundamental
Research Program, National Natural Science Foundation
of China (10604052), Program for New Century Excel-
lent Talents in University, the Innovation Funds from
Chinese Academy of Sciences, the Program of the Educa-
tion Department of Anhui Province (Grant No.2006kj074A).
Xi-Feng Ren also thanks for the China Postdoctoral Sci-
ence Foundation (20060400205) and the K. C. Wong Ed-
ucation Foundation, Hong Kong.
References
1. T.W. Ebbesen, H. J. Lezec, H. F. Ghaemi, T. Thio, and
P. A. Wolff, Nature 391, 667 (1998).
2. H. Raether, Surface Plasmons on Smooth and Rough Sur-
faces and on Gratings, Vol. 111 of Springer Tracts in Mod-
ern Physics, Springer, Berlin, (1988).
3. D. E. Grupp, H. J. Lezec, T. W. Ebbesen, K. M. Pellerin,
and Tineke Thio, Appl. Phys. Lett. 77 1569 (2000).
4. M. Moreno, F. J. Garca-Vidal, H. J. Lezec, K. M. Pellerin,
T. Thio, J. B. Pendry, and T. W. Ebbesen, Phys. Rev.
Lett. 86, 1114 (2001).
5. S. M. Williams, K. R. Rodriguez, S. Teeters-Kennedy, A.
D. Stafford, S. R. Bishop, U. K. Lincoln, and J. V. Coe,
J. Phys. Chem. B. 108, 11833 (2004).
6. A. G. Brolo, R. Gordon, B. Leathem, and K. L. Kavanagh,
Langmuir. 20, 4813 (2004).
7. A. Nahata, R. A. Linke, T. Ishi, and K. Ohashi, Opt. Lett.
28, 423 (2003).
8. X. Luo and T. Ishihara, Appl. Phys. Lett. 84, 4780 (2004).
9. S. Shinada, J. Hasijume and F. Koyama, Appl. Phys. Lett.
83, 836 (2003).
10. C. Genet and T. W. Ebbeson, Nature, 445, 39 (2007).
11. J. Elliott, I. I. Smolyaninov, N. I. Zheludev, and A. V.
Zayats, Opt. Lett. 29, 1414 (2004).
12. R. Gordon, A. G. Brolo, A. McKinnon, A. Rajora, B.
Leathem, and K. L. Kavanagh, Phys. Rev. Lett. 92,
037401 (2004).
13. E. Altewischer, C. Genet, M. P. van Exter, and J. P.
Woerdman, Opt. Lett. 30, 90 (2005).
14. X. F. Ren, G. P. Guo, Y. F. Huang, Z. W. Wang, and
G. C. Guo, Opt. Lett. 31, 2792, (2006).
15. X. F. Ren, G. P. Guo, Y. F. Huang, C. F. Li, and G. C.
Guo, Europhys. Lett. 76, 753 (2006).
16. E. Altewischer, M. P. van Exter and J. P. Woerdman
Nature 418 304 (2002).
17. S. Fasel, F. Robin, E. Moreno, D. Erni, N. Gisin and H.
Zbinden, Phys. Rev. Lett. 94 110501 (2005).
18. K. J. Klein Koerkamp, S. Enoch, F. B. Segerink, N. F.
van Hulst and L. Kuipers, Phys. Rev. Lett. 92 183901
(2004).
19. Zhichao Ruan and Min Qiu, Phys. Rev. Lett. 96 233901
(2006).
Polarization properties of subwavelength hole arrays consisting of rectangular holes 7
20. M. Sarrazin, J. P. Vigneron, Opt. Commun. 240 89
(2004) .
21. X. F. Ren, G. P. Guo, Y. F. Huang, Z. W. Wang, and
G. C. Guo, Appl. Phys. Lett. 90, 161112 (2007).
22. F. L. Tejeira, S. G. Rodrigo, L. M. Moreno, F. J. G. Vi-
dal, E. Devaux, T. W. Ebbesen, J. R. Krenn, I. P. Radko,
S. I.Bozhevolnyi, M. U. Gonzalez, J. C. Weeber, and A.
Dereux, Nature Physics 3, 324 (2007).
introduction
experimental results and modeling
conclusion
|
0704.0855 | Three dimensional cooling and trapping with a narrow line | EPJ manuscript No.
(will be inserted by the editor)
Three dimensional cooling and trapping with a narrow line
T. Chanelière1, L. He2, R. Kaiser3 and D. Wilkowski3
1 + Now at: Laboratoire Aimé Cotton, CNRS, UPR 3321, Université Paris-Sud, Bat. 505, F-91405 Orsay Cedex, France
2 ∗ Now at: State Key Laboratory of Magnetic Resonance and Atomic and Molecular Wuhan Institute of Physics and Mathe-
matics, Chinese Academy of Sciences, Wuhan 430071, P. R. China
3 Institut Non Linéaire de Nice, CNRS, UMR 6618, Université de Nice Sophia-Antipolis, F-06560 Valbonne, France.
October 21, 2018
Abstract. The intercombination line of Strontium at 689nm is successfully used in laser cooling to reach
the photon recoil limit with Doppler cooling in a magneto-optical traps (MOT). In this paper we present
a systematic study of the loading efficiency of such a MOT. Comparing the experimental results to a
simple model allows us to discuss the actual limitation of our apparatus. We also study in detail the final
MOT regime emphasizing the role of gravity on the position, size and temperature along the vertical and
horizontal directions. At large laser detuning, one finds an unusual situation where cooling and trapping
occur in the presence of a high bias magnetic field.
PACS. 3 9.25.+k
1 Introduction
Cooling and trapping alkaline-earth atoms offer interest-
ing alternatives to alkaline atoms. Indeed, the singlet-
triplet forbidden lines can be used for optical frequency
measurement and related subjects [1]. Moreover, the spin-
less ground state of the most abundant bosonic isotopes
can lead to simpler or at least different cold collisions prob-
lems than with alkaline atoms [2]. Considering fermionic
isotopes, the long-living and isolated nuclear spin can be
controlled by optical means [3] and has been proposed to
implement quantum logic gates [4]. It has also been shown
that the ultimate performance of Doppler cooling can be
greatly improved by using narrow transitions whose pho-
ton recoil frequency shifts ωr are larger than their natural
widths Γ [5]. This is the case for the 1S0 →3 P1 spin-
forbidden line of Magnesium (ωr ≈ 1100Γ ) or Calcium
(ωr ≈ 36Γ ). Unfortunately, both atomic species can not
be hold in a standard magneto-optical trap (MOT) be-
cause the radiation pressure force is not strong enough
to overcome gravity. This imposes the use of an extra
quenching laser as demonstrated for Ca [6]. For Stron-
tium, the natural width of the intercombination transition
(Γ = 2π×7.5 kHz) is slightly broader than the recoil shift
(ωr = 2π×4.7 kHz). The radiation pressure force is higher
than the gravity but at the same time the final tempera-
ture is still in the µK range [7,8]. In parallel, the narrow
transition partially prevents multiple scattering processes
and the related atomic repulsive force [10]. Hence impor-
tant improvements on the spatial density have been re-
ported [7]. However, despite experimental efforts, such as
adding an extra confining optical potential, pure optical
methods have not allowed yet to reach the quantum de-
generacy regime with Strontium atoms [9].
In this paper, we will discuss some performances, es-
sentially in terms of temperatures, sizes and loading rates,
of a Strontium 88 MOT using the 689 nm 1S0 →3P1 in-
tercombination line.
Initially the atoms are precooled in a MOT on the
spin-allowed 461 nm 1S0 →1P1 transition (natural width
Γ = 2π × 32MHz) as discussed in [11]. Then the atoms
are transferred into the 689 nm intercombination MOT.
To achieve a high loading rate, Katori et al. [7] have used
laser spectrum, broadened by frequency modulation. Thus
the velocity capture range of the 689 nm MOT matches
the typical velocity in the 461 nm MOT. They report a
transfer efficiency of 30%. The same value of transfer effi-
ciency is also reported in reference [8]. In our set-up, 50%
of the atoms initially in the blue MOT are transferred into
the red one. In section 3 we present a systematic study
of the transfer efficiency as function of the parameters of
the frequency modulation. In order to discuss the intrin-
sic limitations of the loading efficiency, we compare our
experimental results to a simple model. In particular, we
demonstrated that our transfer efficiency is limited by the
size of the red MOT beams. We show that it could be op-
timized up to 90% with realistic laser power (25mW per
beams).
The minimum temperature achieved in the broadband
MOT is about 2.5µK. In order to reduce the tempera-
ture down to the photon recoil limit (0.5µK), we apply
a second cooling stage, using a single frequency laser and
observe similar temperatures, detuning and intensity de-
pendencies as reported in the literature (see references [7],
http://arxiv.org/abs/0704.0855v2
2 T. Chanelière, L. He, R. Kaiser and D. Wilkowski: Three dimensional cooling and trapping with a narrow line
[8], [12] and [13]). In those publications, the role of gravity
on the cooling and trapping dynamics along the z vertical
direction has been discussed. In this paper we compare
the steady state behaviour along vertical (z) direction to
that along the horizontal plane (x−y) where gravity plays
indirectly a crucial role (section 4).
Details about the dynamics are given in references
[8],[12]. In particular the authors establish three regimes.
In regime (I) the laser detuning |δ| is larger than the
power-broadened linewidth ΓE = Γ
(1 + s). Regime (II)
on the contrary corresponds to ΓE > |δ|. In both regimes
(I) and (II) ΓE ≫ Γ, ωr and the semiclassical limit is a
good approximation. In regime (III) the saturation pa-
rameter is small and a full quantum treatment is required.
We will focus here on the semiclassical regime (I). In this
regime, we confirm that the temperature along the z di-
rection is independent of the detuning δ. Following Loftus
et al. [12], we have also found (see section 4.1) that this
behavior is due to the balance of the gravitational force
and the radiation pressure force produced by the upward
pointing laser (the gravity defining the downward direc-
tion). The center of mass of the atomic cloud is shifted
downward from the magnetic field quadrupole center. As a
consequence, cooling and trapping in the horizontal plane
occur at a strong bias magnetic field mostly perpendicu-
lar to the cooling plane. This unusual situation is studied
in detail (section 4.2). Despite different friction and dif-
fusion coefficients along the horizontal and the vertical
directions, the horizontal temperature is found to be the
same as the vertical one (see section 4.3). In reference [12],
the trapping potential is predicted to have a box shape
whose walls are given by the laser detuning. This is in-
deed the case without a bias magnetic field along the z
axis. It is actually different for the regime (I) described in
this paper. Here we have found that the trapping poten-
tial remain harmonic. This leads to a cloud width in the
horizontal direction which is proportional to
|δ| (section
4.2).
2 Experimental set-up
Our blue MOT setup (on the broad 1S0 →1 P1 transi-
tion at 461 nm) is described in references [14,15]. Briefly,
it is composed by six independent laser beams typically
10mW/cm2 each. The magnetic field gradient is about
70G/cm. The blue MOT is loaded from an atomic beam
extracted from an oven at 550 ◦C and longitudinally slowed
down by a Zeeman slower. The loading rate of our blue
MOT is of 109 atoms/s and we trap about 2.106 in a
0.6mm rms radius cloud when no repumping lasers are
used [16]. To optimize the transfer into the red MOT, the
temperature of the blue MOT should be as small as possi-
ble. As previously observed [11], this temperature depends
strongly on the optical field intensity. We therefore de-
crease the intensity by a factor 5 (see figure 1) 4ms before
switching off the blue MOT. The rms velocity right before
the transfer stage is thus reduced down to σb = 0.6m/s
whereas the rms size remains unchanged. Similar two stage
cooling in a blue MOT is also reported in reference [13].
The 689 nm laser source is an anti-reflection coated
laser diode in a 10 cm long extended cavity, closed by a
diffraction grating. It is locked to an ULE cavity using the
Pound-Drever-Hall technique [17]. The unity gain of the
servo loop is obtained at a frequency of 1MHz. From the
noise spectrum of the error signal, we derive a frequency
noise power. It shows, in the range of interest, namely
1Hz − 100 kHz, an upper limit of 160 Hz2/Hz which is
low enough for our purpose. The transmitted light from
the ULE cavity is injected into a 20mW slave laser diode.
Then the noise components at frequencies higher than the
ULE cavity cut-off (300 kHz) are filtered. It is important
to note that the lateral bands used for the lock-in are
also removed. Those lateral bands, at 20MHz from the
carrier, are generated modulating directly the current of
the master laser diode. A saturated spectroscopy set-up on
the 1S0 →3P1 intercombination line is used to compensate
the long term drift of 10−50Hz/s mainly due to the daily
temperature change of the ULE cavity.
The slave beam is sent through an acousto-optical mod-
ulator mounted in a double pass configuration. The laser
detuning can then be tuned within the range of a few
hundreds of linewidth around the resonance. This acousto-
optical modulator is also used for frequency modulation
(FM) of the laser, as required during the loading phase
(see section 3).
The red MOT is made of three retroreflected beams
with a waist of 0.7 cm. The maximum intensity per beam
is about 4mW/cm2 (the saturation intensity being Is =
3µW/cm2). The magnetic gradient used for the red MOT
is varied from 1 to 10G/cm.
To probe the cloud (number of atoms and tempera-
ture) we use a resonant 40µs pulse of blue light (see fig
1). The total emitted fluorescence is collected onto an
avalanche detector. From this measurement, we deduce
the number of atoms and then evaluate the transfer rate
into the red MOT. At the same time, an image of the cloud
is taken with an intensified CCD camera. The typical spa-
tial resolution of the camera is 30µm. Varying the dark
period (time-of-flight) between the red MOT phase and
the probe, we get the ballistic expansion of the cloud. We
then derive the velocity rms value and the corresponding
temperature.
3 Broadband loading of the red MOT
The loading efficiency of a MOT depends strongly on the
width of the transition. With a broad transition, the max-
imum radiation pressure force is typically am =
104 × g, where vr is the recoil velocity [18]. Hence, on
l ≈ 1 cm (usual MOT beam waist) an atom with a veloc-
ity vc =
2aml ≈ 30m/s can be slowed down to zero and
then be captured. During the deceleration, the atom re-
mains always close to resonance because the Doppler shift
is comparable to the linewidth. Thus MOTs can be di-
rectly loaded from a thermal vapor or a slow atomic beam
using single frequency lasers. Moreover typical magnetic
field gradients of few tens of G/cm usually do not dras-
T. Chanelière, L. He, R. Kaiser and D. Wilkowski: Three dimensional cooling and trapping with a narrow line 3
tically change the loading because the Zeeman shift over
the trapping region is also comparable to the linewidth.
An efficient loading is more complex to achieved with
a narrow transition. For Strontium, the maximum radia-
tion pressure force of a single laser is only am ≈ 15 × g.
Assuming the force is maximum during all the capture
process, one gets vc =
2aml ≈ 1.7m/s. Hence, precool-
ing in the blue MOT is almost mandatory. In that case the
initial Doppler shift will be vcλ
−1 ≈ 2.5MHz, 300 times
larger than the linewidth. In order to keep the laser on
resonance during the capture phase, the red MOT lasers
must thus be spectrally broadened. Because of the low
value of the saturation intensity, the spectral power den-
sity can easily be kept large enough to maintain a maxi-
mum force with a reasonable total power (few milliwatts).
The magnetic field gradient of the MOT may also affect
the velocity capture range. To illustrate this point, let us
consider an atom initially in the blue MOT at the center
of the trap with a velocity vc = 1.7m/s. During the decel-
eration, the Doppler shift decreases whereas the Zeeman
shift increases. However, the magnetic field gradient does
not affect the capture velocity as far as the total shift
(Doppler+Zeeman) is still decreasing. This condition is
fulfilled if the magnetic field gradient is lower than [19]:
λgeµbvc
≈ 0.6G/cm (1)
where ge = 1.5 is the Landé factor of the
3P1 level and
µb = 1.4MHz/G is the Bohr magneton. In practice we use
a magnetic field gradient which is larger than bc. In that
case, it is necessary to increase the width of the laser spec-
trum so that the optimum transfer rate is not limited by
the Zeeman shift (see section 3.2). An alternative solution
may consist of ramping the magnetic field gradient during
the loading [7].
3.1 Transfer rate: experimental results
In this section we will present the experimental results re-
garding the loading efficiency of the red MOT from the
blue MOT. To optimize the transfer rate, the laser spec-
trum is broadened using frequency modulation (FM). Thus
the instantaneous laser detuning is ∆(t) = δ+∆ν. sin νmt.
∆ν and νm are the frequency deviation and modulation
frequency respectively, δ is the carrier detuning. Here, the
modulation index ∆ν/νm is always larger than 1, thus the
so-called wideband limit is well fulfilled. Hence one can
assume the FM spectrum to be mainly enclosed in the
interval [δ −∆ν; δ +∆ν].
As shown in figure 2, the transfer rate increases with
νm up to 15 kHz where we observe a plateau at 45% trans-
fer efficiency. On the one hand when νm is larger than
the linewidth, the atoms are in the non-adiabatic regime
where they interact with all the Fourier components of the
laser spectrum. Moreover, the typical intensity per Fourier
component remains always higher than the saturation in-
tensity Is = 3µW/cm
2. As a consequence, the radiation
pressure force should be close to its maximum value for
any atomic velocity. On the other hand when νm < Γ/2π,
the atoms interact with a chirped intense laser where the
mean radiation pressure force (over a period 2π/νm) is
clearly smaller than in the case νm > Γ/2π. As a conse-
quence, the transfer rate is reduced when νm decreases.
In figure 3, the transfer rate is measured as a func-
tion of ∆ν. The carrier detuning is δ = −1MHz and the
modulation frequency is kept larger than the linewidth
(νm = 25 kHz). Starting from no deviation (∆ν = 0), we
observe (fig. 3) an increase of the transfer rate with ∆ν (in
the range 0 < ∆ν < 500 kHz). After reaching its maximum
value, the transfer rate does not depend on ∆ν anymore.
Thus the capturing process is not limited by the laser
spectrum anymore. If we further increase the frequency
deviation ∆ν, the transfer becomes less efficient and fi-
nally decreases again down to zero. This reduction occurs
as soon as ∆ν > |δ|, i.e. some components of the spectrum
are blue detuned. This frequency configuration obviously
should affect the MOT steady regime adding extra heat-
ing at zero velocity (see section 3.3). We can see that it
is also affecting the transfer rate. To confirm that point,
figure 4 shows the same experiment but with a larger de-
tuning δ = −1.5MHz and δ = −2MHz for the figures
4a and 4b respectively. Again the transfer rate decreases
as soon as ∆ν > |δ|. The transfer rate is also very small
on the other side for small values of ∆ν. In that case the
entire spectrum of the laser is too far red detuned. The
radiation pressure forces are significant only for velocities
larger than the capture velocity and no steady state is
expected. Keeping now the deviation fixed and varying
the detuning as shown in figure 5, we observe a maximum
transfer rate when the detuning is close to the deviation
frequency ∆ν ≃ |δ|. Closer to resonance (∆ν < |δ|), the
blue detuned components prevent an efficient loading of
the MOT.
The magnetic field gradient plays also a crucial role for
the loading. We indeed observe (fig. 6) that the transfer
rate decreases when the magnetic field gradient increases.
At very low magnetic field (b < 1G/cm) the reduction
of the transfer rate is most likely due to a lack of stabil-
ity within the trapping region. In that case we actually
observe a strong displacement of the center of mass of
the cloud. This is induced by imperfections of the set-up
such as non-balanced laser intensities which are critical
at low magnetic gradient. Hence, the optimum magnetic
field gradient is found to be the smallest one which ensure
the stability of the cloud in the MOT.
3.2 Theoretical model and comparison with the
experiments
To clearly understand the limiting processes of the transfer
rate, we compare the experimental data to a simple 1D
theoretical model based on the following assumptions:
- An atom undergoes a radiation pressure force and thus a
deceleration if the modulus of its velocity is between vmax
and vmin with
vmax = λ(|δ|+∆ν), vmin = max{λ(|δ|−∆ν);λ(−|δ|+∆ν)}
4 T. Chanelière, L. He, R. Kaiser and D. Wilkowski: Three dimensional cooling and trapping with a narrow line
am = 0 elsewhere. We simply write that the Doppler shift
is contained within the FM spectrum. We add the condi-
tion vmin = λ(−|δ|+∆ν) when some components are blue
detuned ∆ν > |δ|. In this case, we consider the simple
ideal situation where the two counter-propagating lasers
are assumed perfectly balanced and then compensate each
other in the spectral overlapping region.
- Even in the semiclassical model, it is difficult to calculate
the acceleration as a function of the velocity for a FM spec-
trum. However for all the data presented here, the satura-
tion parameter is larger than one. Hence the deceleration
is set to a constant value − 1
am when vmin < |v| < vmax.
The prefactor 1/3 takes into account the saturation by the
3 counter-propagating laser beam pairs.
- The magnetic field gradient is included by giving a spa-
tial dependence of the detuning δ in the expression (2).
- An atom will be trapped if its velocity changes of sign
within a distance shorter that the beam waist.
In figures 3-6 the results of the model are compared to
the experimental data. The agreement between the model
and the experimental data is correct except at large fre-
quency deviation (figures 3 and 4) or at low detuning (fig-
ure 5). In those cases the spectrum has some blue detuned
components. As mentionned before, this is a complex sit-
uation where the assumptions of the simple model do not
hold anymore. Fortunately those cases do not have any
practical interest because they do not correspond to the
optimum transfer efficiency.
At the optimum, the model suggests that the transfer
is limited by the beam waist (see caption of figures (3-6)).
Moreover for all the situation explored in figures 3-5, the
magnetic field gradient is strong enough (b = 1G/cm) to
have an impact on the capture process, as suggested by
the inequality (1). However it is not the transfer limiting
factor because the Zeeman shift is easily compensated by
a larger frequency excursion or by a larger detuning.
Increasing the beam waist would definitely improve the
transfer efficiency as showed in figure 7. If the saturation
parameter would remain large for all values of beam waist,
more than 90% of the atoms would be transferred for a
2 cm beam waist. 25mW of power per beam should be
sufficient to achieve this goal. In our experimental set-up,
the power is limited to 3mW per beam. So the satura-
tion parameter is necessarily reduced once the waist is
increased. To take this into account and get a more realis-
tic estimation of the efficiency for larger beams, we replace
the previous acceleration by the expression ams/(1 + 3s),
with s = I/Is the saturation parameter per beam. In this
case, the transfer efficiency becomes maximum at 70% for
a beam waist of 1.5 cm.
3.3 Temperature
Cooling with a broadband FM spectrum on the intercom-
biaison line decreases the temperature by three orders of
magnitude in comparison with the blue MOT: from 3mK
(σb = 0.6m/s) to 2.5µK (see figure 8). For small detuning,
the temperature is strongly increasing when the spectrum
has some blue detuned components (∆ν > |δ|). Indeed the
cooling force and heating rate are strongly modified at the
vicinity of zero detuning. This effect is illustrated in figure
8. On the other side at large detuning (δ < −1.5MHz), the
temperature becomes constant. This regime corresponds
to a detuning independent steady state, as also observed
in single frequency cooling (see ref. [12] and section 4).
4 Single frequency cooling
About half of the atoms initially in the 461 nm MOT are
recaptured in the red one using a broadband laser. The
final temperature is 2.5µK i.e. 5 times larger than the
photon recoil temperature Tr = 460 nK. To further de-
crease the temperature one has to switch to single fre-
quency cooling (for time sequences: see figure 1). As we
will see in this section, the minimum temperature is now
about 600 nK close to the expected 0.8Tr in an 1D mo-
lasses [5]. Moreover, one has to note that, under proper
conditions described in reference [12], the transfer between
the broadband and the single frequency red MOT can be
almost lossless.
In the steady state regime of the single frequency red
MOT, one has kσv ≈ ωr ≈ Γ . Thus, there is no net sepa-
ration of different time scales as in MOTs operated with a
broad transition where ωr << kσv << Γ . However, here
the saturation parameter s always remains high. It cor-
responds to the so-called regimes (I) and (II) presented
in reference [12]. Thus ωr << Γ
1 + s and the semiclas-
sical Doppler theory describes properly the encountered
experimental situations.
To insure an efficient trapping, the parameter’s val-
ues of the single frequency red MOT are different from
a usual broad transition MOT: the magnetic field gradi-
ent is higher, typically 1000Γ/cm. Moreover the gravity
is not negligible anymore by comparison with the typical
radiation pressure. Those features lead to an unusual be-
havior of the red MOT as we will explain in this section.
We will first independently analyze the MOT properties
along the vertical dimension (section 4.1) then in the hor-
izontal plane (section 4.2), to finally compare those two
situations (section 4.3).
4.1 Vertical direction
In the regime (I) i.e. at large negative detuning and high
saturation (see examples on figure 9a) the temperature is
indeed constant. As explained in reference [12], this be-
havior is due to the balance between the gravity and the
radiation pressure force of the upward laser. At large neg-
ative detuning, the downward laser is too far detuned to
give a significant contribution. In the semiclassical regime,
an atom undergoes a net force of
Fz = h̄k
1 + sT + 4(δ − geµBbz − kvz)2/Γ 2
−mg (3)
Considering the velocity dependence of the force, the first
order term is:
T. Chanelière, L. He, R. Kaiser and D. Wilkowski: Three dimensional cooling and trapping with a narrow line 5
Fz ≈ −γzvz (4)
γz = −4
h̄k2δeff
(1 + sT + 4δ
/Γ 2)2
where the effective detuning δeff = δ − geµBb < z > is
define such as
1 + sT + 4δ
= mg (6)
sT is the total saturation parameter including all the beams.
< z > is the mean vertical position of the cold cloud.
Hence δeff is independent of the laser detuning δ and the
vertical temperature at larger detuning depends only on
the intensity as shown in figures 9a and 9b.
The spatial properties of the cloud are also related to
the effective detuning δeff which is independent of δ. The
mean vertical position depends linearly on the detuning,
so that one has :
d < z >
geµBb
The predicted vertical displacement is compared to the
experimental data in figure 10a. The agreement is excel-
lent (the only adjustable parameter is the unknown origin
of the vertical axe). Because the radiation pressure force
for an atom at rest does not depend on the laser detuning
δ, the vertical rms size should be also δ-independent. This
point is also verified experimentally (see figure 10b).
4.2 x− y horizontal plane
Let us now study the behavior of the cold cloud in the x−y
plane at large laser detuning. As explained in section 4.1,
the position of the cloud is vertically shifted downward
with respect to the center of the magnetic field quadrupole
(see figure 11). The dynamic in the x−y plane occurs thus
in the presence of a high bias magnetic field. To derive
the expression of the semiclassical force in this unusual
situation one has first to project the circular polarizations
states of the horizontal lasers on the eigenstates. We define
the quantification axis along the magnetic field, one gets:
e+x =
1 + sinα
cosα√
1− sinα
e−x =
1− sinα
cosα√
1+ sinα
where e−i , πi and e
i represent respectively the left-handed,
linear and right-handed polarisations along the i axis. The
angle α between the vertical axis and the local magnetic
field is shown on figure 11. For large detuning, α is al-
ways small (α ≪ 1 ) and we write α ≈ −x/ < z >
considering only the dynamics along the x dimension. For
simplicity the magnetic field gradient b is considered as
spatially isotropic with b > 0 as sketched on figure 11b.
The expression of the radiation pressure force is then:
Fx = h̄k
×(10)
s(1− sinα)2/4
1 + sT + 4(δ − geµBb < z > (1− tanα)− kvx)2/Γ 2
s(1 + sinα)2/4
1 + sT + 4(δ − geµBb < z > (1− tanα) + kvx)2/Γ 2
Note that this expression is not restricted to the small α
values. We expect six terms in the expression (11): three
terms for each laser corresponding to the three e−
and e+
polarisation eigenstates. However only two terms,
corresponding to the e+
state, are close to resonance and
thus have a dominant contribution. As for the vertical
dimension, the off resonant terms are removed from the
expression (11). One has also to note that the effective
detuning δeff = δ − geµBb < z > is actually the same as
the one along the vertical dimension.
The first order expansion of (11) in α and kvx/Γ gives
the expression of the horizontal radiation pressure force:
Fx ≈ −καα− γxvx = −κxx− γxvx (11)
κα = − < z > κx = h̄k
1 + sT + 4δ
= mg (12)
= −2 h̄k
2δeff
(1 + sT + 4δ
/Γ 2)2
As for the vertical dimension (equation (6)), the force
depends on δeff but at the position of the MOT does not
depend on the laser detuning δ. Hence, at large detuning,
the horizontal temperature depends only on the intensity
as observed in figures 9a and 9b.
To understand the trapping mechanisms in the x − y
plane, we now consider an atom at rest located at a po-
sition x 6= 0 (corresponding to α 6= 0), i.e. not in the
center of the MOT. The transition rate of two counter-
propagating laser beam is not balanced anymore. This is
due to the opposite sign in the α dependency of the prefac-
tor in expression (11). This mechanism leads to a restoring
force in the x−y plane at the origin of the spatial confine-
ment (equation 11). Applying the equipartition theorem
one gets the horizontal rms size of the cloud:
x2rms =
= −< z > kBT
Without any free adjusting parameter, the agreement with
experimental data is very good as shown in figure 10b.
On the other hand there’s no displacement of the center
of mass in the x − y plane whatever is the detuning δ as
long as the equilibrium of the counter-propagating beams
intensities is preserved (figure 10a).
6 T. Chanelière, L. He, R. Kaiser and D. Wilkowski: Three dimensional cooling and trapping with a narrow line
4.3 Comparing the temperatures along horizontal and
vertical axes
As seen in sections 4.1 and 4.2, gravity has a dominant
impact on cooling in a MOT operated on the intercom-
bination line not only along the vertical axe but also in
the horizontal plane. Even so we expect different behav-
iors along this directions essentially because the gravity
renders the trapping potential anisotropic. This is indeed
the case for the spatial distribution (figures 10a and 10b)
whereas the temperatures are surprisingly the same (fig-
ures 9a and 9b). We will now give few simple arguments
to physically explain this last point.
In the semiclassical approximation, the temperature is
defined as the ratio between the friction and the diffusion
term:
kBTi =
Dabsi +D
with i = x, y, z (15)
Dabs and Dspo correspond to the diffusion coefficients in-
duced by absorption and spontaneous emission events re-
spectively. The friction coefficients has been already de-
rived (equation 13):
γz = 2γx,y (16)
Indeed cooling along an axe in the x − y plane results in
the action of two counter-propagating beams four times
less coupled than the single upward laser beam. The same
argument holds for the absorption term of the diffusion
coefficient:
Dabsz = 2D
x,y (17)
The spontaneous emission contribution in the diffusion co-
efficient can be derived from the differential cross-section
dσ/dΩ of the emitting dipole [20]. With a strong biased
magnetic field along the vertical direction, this calcula-
tion is particularly simple as e+z is the only quasi resonant
state. Hence
dσ/dΩ ∝ (1 + cosφ2) (18)
φ is the angle between the vertical axe and the direction of
observation. After a straightforward integration, one finds
a contribution again two times larger along the vertical
Dspoz = 2D
x,y (19)
From those considerations, the temperature is expected
to be isotropic as observed experimentally (see figures 9a
and 9b).
In the so-called regime (I), the minimum temperature
is given by the semiclassical Doppler theory:
T = NR
s (20)
Where NR is a numerical factor which should be close
to two [12]. This solution is represented in figure 9 by
a dashed line nicely matching the experimental data for
s > 8 but with NR = 1.2. Similar results, i.e. with unex-
pected low NR values, have been found in [12]. For s ≤ 8
we observed a plateau in the final temperature slightly
higher than the low saturation theoretical prediction [5].
We cannot explain why the temperature does not decrease
further down as reported in [12]. For quantitative compar-
ison with the theory, more detailed studies in a horizontal
1D molasses are required.
4.4 Conclusions
Cooling of Strontium atoms using the intercombination
line is an efficient technique to reach the recoil temper-
ature in three dimensions by optical methods. Unfortu-
nately loading from a thermal beam cannot be done di-
rectly with a single frequency laser because of the nar-
row velocity capture range. We have shown experimentally
that more than 50% of the atoms initially in a blue MOT
on the dipole-allowed transition are recaptured in the red
MOT using a frequency-broadened spectrum. Using a sim-
ple model, we conclude that the transfer is limited by the
size of the laser beam. If the total power of the beams
at 689 nm was higher, transfer rates up to 90% could be
expected by tripling our laser beam size. The final tem-
perature in the broadband regime is found to be as low
as 2.5µK, i.e. only 5 times larger than the photon recoil
temperature. The gain in temperature by comparison to
the blue MOT (1−10mK) is appreciable. So in absence of
strong requirements on the temperature, broadband cool-
ing is very efficient and reasonably fast (less than 100ms).
The requirements for the frequency noise of the laser are
also much less stringent than for single frequency cooling.
Using a subsequent single frequency cooling stage, it
is possible to reduce the temperature down to 600 nK,
slightly above the photon recoil temperature. Analyzing
the large detuning regime, we particularly focus our stud-
ies on the comparison between vertical and horizontal di-
rections. We show how gravity indirectly influences the
horizontal parameters of the steady state MOT and find
that the trapping potential remains harmonic along all
directions, but with an anisotropy.
Gravity has a major impact on the MOT as it coun-
terbalances the laser pressure of the upward laser (making
the steady state independent of the detuning). We show
that gravity thus affects the final temperature, which re-
mains isotropic, despite different cooling dynamics along
the vertical and horizontal directions.
5 Acknowledgments
The authors wish to thank J.-C. Bernard and J.-C. Bery
for valuable technical assistances. This research is finan-
cially supported by the CNRS (Centre National de la
Recherche Scientifique) and the former BNM (Bureau Na-
tional de Métrologie) actually LNE (Laboratoire national
de métrologie et d’essais) contract N◦ 03 3 005.
T. Chanelière, L. He, R. Kaiser and D. Wilkowski: Three dimensional cooling and trapping with a narrow line 7
References
1. F. Ruschewitz, J. L. Peng, H. Hinderthr, N. Schaffrath, K.
Sengstock, and W. Ertmer, Phys. Rev. Lett. 80, 3173 (1998);
G. Ferrari, P. Cancio, R. Drullinger, G. Giusfredi, N. Poli, M.
Prevedelli, C. Toninelli, and G. M. Tino Phys. Rev. Lett. 91,
243002 (2003); M. Yasuda and H. Katori Phys. Rev. Lett. 92,
153004 (2004); T. Ido, T. H. Loftus, M. M. Boyd, A. D. Lud-
low, K. W. Holman, and J. Ye Phys. Rev. Lett. 94, 153001
(2005); R. Le Targat, X. Baillard, M. Fouch, A. Brusch, O.
Tcherbakoff, G. D. Rovera, and P. Lemonde Phys. Rev. Lett.
97, 130801 (2006).
2. J. Weiner, V. Bagnato, S. Zilio, and P. S. Julienne, Rev.
Mod. Phys. 71, 1 (1999); T. Dinneen, K. R. Vogel, E.
Arimondo, J. L. Hall, and A. Gallagher, Phys. Rev. A
59, 1216 (1999). A.R.L.Caires, G.D.Telles, M.W.Mancini,
L.G.Marcassa, V.S.Bagnato, D.Wilkowski, R. Kaiser, Bra.
J. Phys. 34, 1504 (2004).
3. M. M. Boyd, T. Zelevinsky, A. D. Ludlow, S.M. Forman, T.
Ido, and J. Ye Science 314, 1430 (2006).
4. D. Hayes, P. Julienne, I. Deutsch, Arxiv, quant-ph/0609111.
5. Y. Castin, H. Wallis, and J. Dalibard, J. Opt. Soc. Am. B.
6, 2046 (1989).
6. T. Binnewies, G. Wilpers, U. Sterr, F. Riehle, and J. Helm-
cke, T. E. Mehlstubler, E. M. Rasel, and W. Ertmer, Phys.
Rev. Lett. 87, 123002 (2001).
7. H. Katori, T. Ido, Y. Isoya, and M. Kuwata-Gonokami,
Phys. Rev. Lett. 82, 1116 (1999)
8. T. H. Loftus, T. Ido, A. D. Ludlow, M. M. Boyd, and J. Ye,
Phys. Rev. Lett. 93, 073003 (2004).
9. T. Ido, Y. Isoya, and H. Katori, Phys. Rev. A 61, 061403
(2000).
10. D. W. Sesko, T. G. Walker and C. E. Wieman, J. Opt.
Soc. Am. B 8, 946 (1991).
11. T. Chanelière, J.-L. Meunier, R. Kaiser, C. Miniatura, and
D. Wilkowski. J. Opt. Soc. Am. B, 22, 1819 (2005).
12. T. H. Loftus, T. Ido, M. M. Boyd, A. D. Ludlow, and J.
Ye, Phys. Rev. A 70, 063413 (2004).
13. K. R. Vogel, Ph. D. Thesis, University of Colorado, Boul-
der, CO 80309, (1999).
14. Y. Bidel, B. Klappauf, J.C. Bernard, D. Delande, G.
Labeyrie, C. Miniatura, D. Wilkowski, R. Kaiser, Phys. Rev.
Lett. 88, 203902 (2002).
15. B. Klappauf, Y. Bidel, D. Wilkowski, T. Chanelière, R.
Kaiser, Appl.Opt. 43, 2510 (2004).
16. D. Wilkowski, Y. Bidel, T. Chanelière, R. Kaiser, B. Klap-
pauf, C. Miniatura, SPIE Proceeding 5866, 298 (2005).
17. N. Poli, G. Ferrari, M. Prevedelli, F. Sorrentino, R. E.
Drullinger, and G. M. Tino, Spectro. Acta Part A 63, 981
(2006).
18. H.J. Metcalf, P. van der Straten, Laser cooling and trap-
ping, Springer, (1999).
19. C. Dedman, J. Nes, T. Hanna, R. Dall, K. Baldwin, and
A. Truscott, Rev. Mod. Phys., 75, 5136 (2004).
20. J.D. Jackson, Classical Electrodynamics (J. Wiley and
sons, third edition New York, 1999).
Blue MOT Laser
Red MOT Laser
Red MOT Laser
Magnetic field gradient
70 G/cm
1-10 G/cm
Du~k vbD
70 ms40 ms
80 ms
Fig. 1. Time sequence and cooling stages of Strontium with
the dipole-allowed transition and with the intercombination
line.
0 5 10 15 20 25 30 35 40 45
Modulation frequency (kHz)
Fig. 2. Transfer rate as a function of the modulation frequency.
The other parameters are fixed: P = 3mW, δ = −1000 kHz,
b = 1G/cm and ∆ν = 1000 kHz
http://arxiv.org/abs/quant-ph/0609111
8 T. Chanelière, L. He, R. Kaiser and D. Wilkowski: Three dimensional cooling and trapping with a narrow line
0 500 1000 1500 2000 2500 3000
Frequency deviation (kHz)
Fig. 3. Transfer rate as a function of the frequency deviation
(squares). The other parameters are fixed: P = 3mW, δ =
−1000 kHz, b = 1G/cm and νm = 25 kHz. The dash and solid
line correspond to a simple model prediction (see text). The
transfer rate is limited by the frequency deviation of the broad
laser spectrum for the dash line and by the waist of the MOT
beam for the solid line.
0 500 1000 1500 2000 2500 3000
Frequency deviation (kHz)
Frequency deviation (kHz)
0 500 1000 1500 2000 2500 3000
(a) (b)
Fig. 4. Transfer rate as a function of the frequency deviation
(squares). δ = −1500 kHz and δ = −2000 kHz for (a) and (b)
respectively, the other parameters and the definitions are the
same than for figure 3.
0 500 1000 1500 2000 2500
Detuning (kHz)
Fig. 5. Transfer rate as a function of the detuning (squares).
The other parameters are fixed: P = 3mW, ∆ν = 1000 kHz,
b = 1G/cm and νm = 25 kHz. The dashed and solid lines have
the same signification than in figure 3.
0 1 2 3 4 5 6 7 8 9 10
b (G/cm)
Fig. 6. Transfer rate as a function of the magnetic gradient
(squares). The other parameters are fixed: P = 3mW, δ =
−1000 kHz, ∆ν = 1000 kHz and νm = 25 kHz. The transfer
rate is limited by the waist of the MOT beam for all values.
The dotted lines represent the case where the magnetic field
gradient do not affect the deceleration.
0 1 2 3 4 5
Beam waist (cm)
Fig. 7. Transfer rate as a function of the beam waist. The solid
lines correspond to a high saturation parameter where as the
dash line correspond to a constant power of P = 3mW. The
other parameters are fixed: δ = −1000 kHz, ∆ν = 1000 kHz
and b = 0.1G/cm.
-2000 -1500 -1000 -500
δ (kHz)
Fig. 8. Measured temperature as a function of the detuning for
a FM spectrum. The other parameters are fixed: P = 3mW,
b = 1G/cm, ∆ν = 1000 kHz and νm = 25 kHz
T. Chanelière, L. He, R. Kaiser and D. Wilkowski: Three dimensional cooling and trapping with a narrow line 9
10 100 1000
-80 -60 -40 -20 0
Detuning (kHz)
Fig. 9. Measured temperature as a function of the detuning
(a) with I = 4Is or I = 15Is and as a function of the intensity
(b) with δ = −100 kHz of single frequency cooling. The circles
(respectively stars) correspond to temperature along one of
the horizontal (respectively vertical) axis. The magnetic field
gradient is b = 2.5G/cm
-700 -600 -500 -400 -300 -200 -100 0
Detuning (kHz)
-700 -600 -500 -400 -300 -200 -100 0
Detuning (kHz)
Fig. 10. Displacement (a) and rms radius (b) of the cold cloud
in single frequency cooling along the z axis (star) and in the
x−y plane (circle). The intensity per beam is I = 20Is and the
magnetic gradient b = 2.5G/cm along the strong axis in the x−
y plane. The linear displacement prediction correspond to the
plain line (graph a). In graph b, the plain curve correspond to
the rms radius prediction based on the equipartition theorem.
10 T. Chanelière, L. He, R. Kaiser and D. Wilkowski: Three dimensional cooling and trapping with a narrow line
d=-1000kHzd=-100kHz
Cloud
Quantization axe
Fig. 11. (a) Images of the cold cloud in the red MOT. The
cloud position for δ = −100 kHz coincides roughly with the
center of the MOT whereas it is shifted downward for δ =
−1000 kHz. The spatial position of the resonance correspond
dot circle. (b) Sketch representing the large detuning case. The
coupling efficiency of the MOT lasers is encoded in the size of
the empty arrow. The laser form below has maximum efficiency
whereas the one pointing downward is absent because is too
detuned. Along a horizontal axe, the lasers are less coupled
because they do not have the correct polarization. The α angle
is the angular position of an atom M with respect to O, the
center of the MOT.
Introduction
Experimental set-up
Broadband loading of the red MOT
Single frequency cooling
Acknowledgments
|
0704.0856 | Approximate Selection Rule for Orbital Angular Momentum in Atomic
Radiative Transitions | Approximate Selection Rule for Orbital Angular Momentum
in Atomic Radiative Transitions
I.B. Khriplovich and D.V. Matvienko
Budker Institute of Nuclear Physics,
630090 Novosibirsk, Russia,
and Novosibirsk University
Abstract
We demonstrate that radiative transitions with ∆l = −1 are strongly dominating
for all values of n and l, except small region where l ≪ n.
It is well-known that the selection rule for the orbital angular momentum l in electro-
magnetic dipole transitions, dominating in atoms, is ∆l = ±1, i. e. in these transitions
the angular momentum can both increase and decrease by unity. Meanwhile, the classical
radiation of a charge in the Coulomb field is always accompanied by the loss of angular
momentum. Thus, at least in the semiclassical limit, the probability of dipole transitions
with ∆ l = − 1 is higher. Here we discuss the question how strongly and under what
exactly conditions the transitions with ∆l = −1 dominate in atoms. (To simplify the
presentation, we mean always, here and below, the radiation of a photon, i. e. transitions
with ∆n < 0. Obviously, in the case of photon absorption, i. e. for ∆n > 0, the angular
momentum predominantly increases.)
The analysis of numerical values for the transition probabilities in hydrogen presented
in [1] has demonstrated that even for n and l, comparable with unity, i. e. in a nonclassical
situation, radiation with ∆l = −1 can be much more probable than that with ∆l = 1.
Later, the relation between the probabilities of transitions with ∆l = −1 and ∆l = 1
was investigated in [2] by analyzing the corresponding matrix elements in the semiclassical
approximation. The conclusion made therein is also that the transitions with ∆l = −1
dominate, and the dominance is especially strong when l > n2/3.
Here we present a simple solution of the problem using the classical electrodynamics
and, of course, the correspondence principle. Our results describe the situation not only
in the semiclassical situation. Remarkably enough, they agree, at least qualitatively, with
the results of [1], although the latter refer to transitions with |∆n| ∼ n ∼ 1 and l ∼ 1,
which are not classical at all.
We start our analysis with a purely classical problem. Let a particle with a mass
m and charge − e moves in an attractive Coulomb field, created by a charge e, along
an ellipse with large semi-axis a and eccentricity ε. It is known [3] that the radiation
intensity at a given harmonic ν is here
4e2ω4
ξ2ν + η
; (1)
J ′ν(νε), ην =
1− ε2
Jν(νε). (2)
In expressions (2), Jν(νε) is the Bessel function, and J
ν(νε) is its derivative. We use the
Fourier transformation in the following form:
x(t) = a
iνω0t = 2a
ξν cos νω0t,
http://arxiv.org/abs/0704.0856v1
y(t) = a
iνω0t = 2a
ην sin νω0t,
where all dimensionless Fourier components ξν and ην are real, and ξ−ν = ξν , η−ν = − ην .
We note that the Cartesian coordinates x and y are related here to the polar coordinates r
and φ as follows: x = r cos φ, y = r sin φ, where φ increases with time. Thus, the angular
momentum is directed along the z axis (but not in the opposite direction).
We note also that, since 0 ≤ ε ≤ 1, both Jν(νε) and J ′ν(νε) are reasonably well
approximated by the first term of their series expansion in the argument. Therefore, all
the Fourier components ξν and ην are positive.
In the quantum problem (where ν = |∆n|), the probability of transition in the unit
time is
h̄ω0ν
4e2ω3
3c3h̄
ξ2ν + η
, ω0 =
h̄2n3
. (3)
Now, the loss of angular momentum with radiation is [3]
r× r... .
Going over here to the Fourier components, we obtain
Ṁν = −
4e2ω2
rν × ṙν ,
or (with our choice of the direction of coordinate axes, and with the angular momentum
measured in the units of h̄)
Ṁν = −
4e2ω3
3c3h̄
2ξνην . (4)
Obviously, the last expression is nothing but the difference between the probabilities
of transitions with ∆l = 1 and ∆l = −1 in the unit time:
Ṁν = W
ν −W−ν . (5)
Of course, the total probability (3) can be written as
Wν = W
ν . (6)
From explicit expressions (3) and (4) it is clear that inequality W+ν ≪ W−ν holds if
2ξνην ≈ ξ2ν + η2ν , or ην ≈ ξν . The last relation is valid for ε ≪ 1, i. e. for orbits close
to circular ones. (The simplest way to check it, is to use in formulae (2) the explicit
expression for the Bessel function at small argument: Jν(νε) = (νε)
ν/(2νν !).)
This conclusion looks quite natural from the quantum point of view. Indeed, it is
the state with the orbital quantum number l equal to n − 1 (i. e. with the maximum
possible value for given n) which corresponds to the circular orbit. In result of radiation
n decreases, and therefore l should decrease as well.
The surprising fact is, however, that in fact the probabilities W−ν of transitions with
∆l = −1 dominate numerically everywhere, except small vicinity of the maximum possible
eccentricity ε = 1. For instance, if ε ≃ 0.9 (which is much more close to 1 than to 0 !),
then at ν = 1 the discussed probability ratio is very large, it constitutes
≃ 12 .
The change with ε of the ratio of W+ν to W
ν for two values of ν is illustrated in Fig. 1.
The curves therein demonstrate in particular that with the increase of ν, the region
0.2 0.4 0.6 0.8 1.0 Ε
�������� ����
Fig. 1
where W−ν and W
ν are comparable, gets more and more narrow, i. e. when ν grows, the
corresponding curves tend more and more to a right angle.
Let us go over now to the quantum problem. In the semiclassical limit, the classical
expression for the eccentricity
is rewritten with usual relations E = −me4/(2h̄2n2) and M = h̄l as
. (8)
In fact, the exact expression for ε, valid for arbitrary l and n, is [3]:
l(l + 1) + 1
. (9)
Clearly, in the semiclassical approximation the eccentricity is close to unity only under
condition l ≪ n. If this condition does not hold, one may expect that in the semiclas-
sical limit the transitions with ∆l = −1 dominate. In other words, as long as l ≪ n,
the probabilities of transitions with decrease and increase of the angular momentum are
comparable. But if the angular momentum is not small, it is being lost predominantly in
radiation. This situation looks quite natural.
The next point is that with the increase of |∆n| = ν, the region where W−ν and W+ν
are comparable, gets more and more narrow in agreement with the observation made in
However, we do not see any hint at some special role (advocated in [2]) of the condition
l > n2/3 for the dominance of transitions with ∆l = −1.
As mentioned already, the analysis of the numerical values of transition probabilities [1]
demonstrates that even for n and l comparable with unity and |∆n| ≃ n, i. e. in the
absolutely nonclassical regime, the transitions with ∆l = −1 are still much more probable
than those with ∆l = 1. The results of this analysis for the ratio W−/W+ in some
transitions are presented in Table 3.1 (first line). Then we indicate in Table 3.1 (last line)
W4p→3s
W4p→3d
W5p→4s
W5p→4d
W5d→4p
W5d→4f
W6f→5d
W6f→5g
W5p→3s
W5p→3d
W6p→3s
W6p→3d
exact
value 10 3.75 28 72 10.67 13.7
ε̄ 0.87 0.92 0.81 0.75 0.90 0.92
ν = |∆n| 1 1 1 1 2 3
semiclassical
value 17.6 8.7 34 58 17.2 15.7
Table 3.1
the values of these ratios obtained in the näıve (semi)classical approximation. Here for
the eccentricity ε̄ we use the value of expression (9), calculated with l corresponding to
the initial state; as to n, we take its value average for the initial and final states.
The table starts with the smallest possible quantum numbers where the transitions,
which differ by the sign of ∆l, occur, i. e. with the ratio W4p→3s/W4p→3d. This table
demonstrates that the ratio of the classical results to the exact quantum-mechanical ones
remains everywhere within a factor of about two. In fact, if one uses as ε̄ expression (8),
calculated in the analogous way, the numbers in the last line change considerably. It is
clear, however, that the classical approximation describes here, at least qualitatively, the
real situation.
References
[1] H.A. Bethe and E.E. Salpeter, Quantum Mechanics of One- and Two-Electron Atoms,
Springer, 1957; §63.
[2] N.B. Delone and V.P. Krainov, FIAN Preprint No. 18, 1979; J. Phys. B 27, (1994)
4403.
[3] L.D. Landau and E.M. Lifshitz, The Classical Theory of Fields, Nauka, 1973; §70,
problem 2 to §72.
[4] L.D. Landau and E.M. Lifshitz, Quantum Mechanics, Nauka, 1974; §36.
|
0704.0857 | Extrasolar scale change in Newton's Law from 5D `plain' R^2-gravity | Extrasolar scale change in Newton’s Law
from 5D ‘plain’ R2-gravity (on ‘very thick brane’)
I L Zhogin (SSRC, Novosibirsk)∗
Abstract
Galactic rotation curves and lack of direct observations of Dark Matter may indicate that
General Relativity is not valid (on galactic scale) and should be replaced with another theory.
There is the only variant of Absolute Parallelism which solutions are free of arising sin-
gularities, if D=5 (there is no room for changes). This variant does not have a Lagrangian,
nor match GR: an equation of ‘plain’ R2-gravity (ie without R-term) is in sight instead.
Arranging an expanding O4-symmetrical solution as the basis of 5D cosmological model,
and probing a universal function of mass distribution (along very-very long the extra dimen-
sion) to place into bi-Laplace equation (R2 gravity), one can derive the Law of Gravitation:
transforms to 1
with distance (not with acceleration).
1 Introduction
Being a ‘close relative’ of General Relativity (GR), Absolute Parallelism (AP) has many interesting
features: larger symmetry group of equations; field irreducibility with respect to this group; vast
list of compatible second order equations (discovered by Einstein and Mayer [1]) not restricted to
Lagrangian ones.
There is the only variant of Absolute Parallelism which solutions are free of arising singularities,
if D=5 (there is no room for changes; this variant of AP does not have a Lagrangian, nor match
GR); in this case AP has topological features of nonlinear sigma-model.
In order to give clear presentation and full picture of the theory’ scope, many items should be
sketched: instability of trivial solution and expanding O4-symmetrical ones; tensor Tµν (positive
energy, but only three polarizations of 15 carry (and angular) momentum; how to quantize such a
stuff ?) and PN-effects; topological classification of symmetric 5D field configurations (alighting on
evident parallels with Standard Model’ particle combinatorics) and ‘quantum phenomenology on
expanding classical background’ (coexistence); ‘plain’ R2-gravity on very thick brane and change
in the Newton’s Law: 1
goes to 1
with distance (not with acceleration – as it is in MOND [2]).
At last, an experiment with single photon interference is discussed as the other way to observe
very-very long (and very undeveloped) the extra dimension.
2 Unique 5D equation of AP (free of singularities in solutions)
There is one unique variant of AP (non-Lagrangian, with the unique D; D=5) which solutions of
general position seem to be free of arising singularities. The formal integrability test [3] can be
∗E-mail: zhogin at inp.nsk.su; http://zhogin.narod.ru
http://arxiv.org/abs/0704.0857v1
http://zhogin.narod.ru
extended to the cases of degeneration of either co-frame matrix, haµ, (co-singularities) or contra-
variant frame (or contra-frame density of some weight), serving as the local and covariant (no
coordinate choice) test for singularities of solutions. In AP this test singles out the next equation
(and D=5, see [4]; ηab = diag(−1, 1, . . . , 1), then h = det haµ =
Eaµ = Laµν;ν − 13(faµ + LaµνΦν) = 0 , (1)
where (see [4] for more detailed introduction to AP and explanation of notations used)
Laµν = La[µν] = Λaµν − Saµν − 23ha[µΦν],
Λaµν = 2ha[µ,ν], Sµνλ = 3Λ[µνλ], Φµ = Λaaµ, fµν = 2Φ[µ,ν] = 2Φ[µ;ν]. (2)
Coma ”,” and semicolon ”;” denote partial derivative and usual covariant differentiation with
symmetric Levi-Civita connection, respectively.
One should retain the identities [which follow from the definitions (2)]:
Λa[µν;λ] ≡ 0 , haλΛabc;λ ≡ fcb (= fµνhµchνb ), f[µν;λ] ≡ 0. (3)
The equation Eaµ;µ = 0 gives ‘Maxwell-like equation’ (we prefer to omit g
µν (ηab) in contrac-
tions that not to keep redundant information – when covariant differentiation is in use only):
(faµ + LaµνΦν);µ = 0, or fµν;ν = (SµνλΦλ);ν (= −12Sµνλfνλ, see below) . (4)
Actually the Eq. (4) follows from the symmetric part of equation, E(ab), because skewsymmetric
one gives just the identity:
2E[νµ] = Sµνλ;λ = 0, E[µν];ν ≡ 0;
note also that the trace part becomes irregular (the principal derivatives vanish) if D = 4 (this
number of dimension is forbidden, and the next number, D = 5, is the most preferable):
Eµµ = Eaµh
ab = 4−D
Φµ;µ + (Λ
2) = 0.
The system (1) remains compatible under adding fµν = 0, see (4); this is not the case for
another covariant, S,Φ, or (some irreducible part of the) Riemannian curvature, which relates to
Λ as usually:
Raµνλ = 2haµ;[ν;λ]; haµhaν;λ =
Sµνλ − Λλµν .
3 Tensor Tµν (despite Lagrangian absence) and PN-effects
One might rearrange E(µν)=0 that to pick out (into LHS) the Einstein tensor, Gµν =Rµν− 12gµνR,
but the rest terms are not proper energy-momentum tensor: they contain linear terms Φ(µ;ν)
(no positive energy ( !); another presentation of ‘Maxwell equation’ (4) is possible instead – as
divergence of symmetrical tensor).
However, the prolonged equation E(µν);λ;λ = 0 can be written as ‘plain’ (no R-term) R
2-gravity:
(−h−1 δ(hRµνGµν)/δgµν=) Gµν;λ;λ +Gǫτ(2Rǫµτν − 12gµνRǫτ ) = Tµν(Λ
′2, . . .), Tµν;ν = 0; (5)
up to quadratic terms,
Tµν =
2 − fµλfνλ) + Aµǫντ (Λ2);(ǫ;τ) + (Λ2Λ′,Λ4);
tensor A has symmetries of Riemann tensor, so the term A′′ adds nothing to momentum and
angular momentum.
It is worth noting that:
(a) the theory does not match GR, but shows ‘plain’ R2-gravity (sure, (5) does not contain all
the theory);
(b) only f -component (three transverse polarizations in D=5) carries D-momentum and an-
gular momentum (‘powerful’ waves); other 12 polarizations are ‘powerless’, or ‘weightless’ (this is
a very unusual feature – impossible in the Lagrangian tradition; how to quantize ? let us not to
try this, leaving the theory ‘as is’);
(c) f -component feels only metric and S-field (‘contorsion’, not ‘torsion’ Λ – to label somehow),
see (4), but S has effect only on polarization of f : S[µνλ] does not enter eikonal equation, and f
moves along usual Riemannian geodesic (if background has f=0); one may think that all ‘quantum
fields’ (phenomenological quantized fields accounting for topological (quasi)charges and carrying
some ‘power’; see further) inherit this property;
(d) the trace Tµµ =
fµνfµν can be non-zero if f
2 6= 0 and this seemingly depends on S-
component [which enters the current in (4)]; in other words, ‘mass distribution’ is to depend on
distribution of f - and S-component;
(e) it should be stressed and underlined that the f -component is not usual (quantum) EM-
field – just important covariant responsible for energy-momentum (suffice it to say that there is
no gradient invariance for f).
4 Linear domain: instability of trivial solution (with powerless waves)
Another strange feature is the instability of trivial solution: some ‘powerless’ polarizations grow
linearly with time in presence of ‘powerful’ f -polarizations. Really, from the linearized Eq. (1)
and the identity (3) one can write (the following equations should be understood as linearized):
Φa,a = 0 (D 6= 4), 3Λabd,d = Φa,b − 2Φb,a, Λa[bc,d],d ≡ 0 ⇒ 3Λabc,dd = −2fbc,a .
The last ‘D‘Alembert equation’ has the ‘source’ in its right hand side. Some components of Λ
(most symmetrical irreducible parts) do not grow (as well as curvature), because (again, linearized
equations are implied below)
Sabc,dd = 0, Φa,dd = 0, fab,dd = 0, Rabcd,ee = 0,
but the least symmetrical components of the tensor Λ do grow up with time (due to terms ∼ t e−iωt;
three growing polarizations which are ‘imponderable’, or powerless) if the ‘ponderable’ waves (three
f -polarizations) do not vanish (and this should be the case for solutions of ‘general position’).
5 Expanding O4-symmetrical (single wave) solutions and cosmology
The unique symmetry of AP equations gives scope for symmetrical solutions. In contrast to GR,
this variant of AP has non-stationary spherically (O4-) symmetric solutions. The O4-symmetric
frame field can be generally written as follows [4]:
haµ(t, x
a bni
cni eninj + d∆ij
; i, j = (1, 2, 3, 4), ni =
. (6)
Here a, . . . , e are functions of time, t = x0, and radius r, ∆ij = δij −ninj, r2 = xixi. As functions
of radius, b, c are odd, while the others are even; other boundary conditions: e = d at r = 0, and
haµ → δ aµ as r → ∞. Placing in (6) b = 0, e = d (the other interesting choice is b=c=0) and
making integrations one can arrive to the next system (resembling dynamics of Chaplygin gas;
dot and prime denote derivation on time and radius, resp.; A = a/e = e1/2, B = −c/e):
A· = AB′ −BA′ + 3AB/r , B · = AA′ − BB′ − 2B2/r . (7)
This system (does not suffer of gradient catastrophe and) has non-stationary solutions; a single-
wave solution of proper ‘amplitude’ might serve as a suitable cosmological (expanding) background.
The condition fµν=0 is a must for solutions with such a high symmetry (as well as Sµνλ=0); so,
these O4-solutions carry no energy, that is, weight nothing (some lack of gravity ! in this theory
the universe expansion seemingly has little common with gravity, GR and its dark energy [5]).
More realistic cosmological model might look like a single O4-wave (or a sequence of such
waves) moving along the radius and being filled with chaos, or stochastic waves, both powerful
(weak, ∆h≪ 1) and powerless (∆h < 1, but intense enough that to lead to non-linear fluctuations
with ∆h ∼ 1), which form statistical ensemble(s) having a few characteristic parameters (after
‘thermalization’). The development and examination of stability of such a model is an interesting
problem. The metric variation in cosmological O4-wave can serve as a time-dependent ‘shallow
dielectric guide’ for that weak noise waves. The ponderable waves (which slightly ‘decelerate’
the O4-wave) should have wave-vectors almost tangent to the S
3-sphere of wave-front that to be
trapped inside this (‘shallow’) wave-guide; the imponderable waves can grow up, and partly escape
from the wave-guide, and their wave-vectors can be less tangent to the S3-sphere.
The waveguide thickness can be small for an observer in the center of O4-symmetry, but in co-
moving coordinates it can be very large (due to relativistic effect), however still small with respect
to the radius of sphere, L≪ R. It seems that the radial dimension has to be very ‘undeveloped’;
that is, there are no other characteristic scales, smaller than L, along this extra-dimension.
6 Non-linear domain: topological charges and quasi-charges
Let AP-space is of trivial topology: no worm-holes, no compactified space dimensions, no singu-
larities. One can continuously deform frame field h(x) to a field of rotation matrices (metric can
be diagonalized and ‘square-rooted’) haµ(x) → saµ(x) ∈ SO(1, d); m=D−1. Further deformation
can remove boosts too, and so, for any space-like (Cauchy) surface, this gives a (pointed) map,
s : Rm ∪∞ = Sm → SOm; ∞ 7→ 1m ∈ SOm.
The set of such maps consists of homotopy classes forming the group of topological charge, Π(m):
Π(m) = πm(SOm); Π(3) = Z, Π(4) = Z2 + Z2. (8)
Here Z is the infinite cyclic group, and Z2 is the cyclic group of order two.
It is important that deformation to s-field can keep symmetry of field configuration. Definition:
localized field (pointed map) s(x) : Rm → SO(m), s(∞) = 1m, is G-symmetric if, in some
coordinates,
s(σx) = σs(x)σ−1 ∀ σ ∈ G ⊂ O(m) . (9)
The set of such fields C(m)G generally consists of separate, disconnected components – homotopy classes forming the ‘topological quasi-charge group’ denoted here as Π(G; m) ≡ π0(C(m)G). These QC-groups classify symmetric localized configurations of the frame field. Since the field equation does not break the symmetry, quasi-charge is conserved; if the symmetry is not exact (because of distant regions), quasi-charge is not an exactly conserved quantity, and a quasi-particle (of zero topological charge) can annihilate (or be created) when colliding with another quasi-particle.
Another problem: let G1 ⊃ G2, so that there is a mapping (embedding) i : C(m)G1 → C(m)G2, which induces a homomorphism of QC-groups, i∗ : Π(G1; m) → Π(G2; m); one then has to describe this morphism.
Let us consider the simple (discrete) symmetry group P1 with a plane of reflection symmetry: P1 = {1, p(1)}, where p(1) = diag(−1, 1, . . . , 1) = p(1)−1.
It is necessary to specify the field s(x) on the half-space {x1 ≥ 0} ⊂ Rm, with an additional condition imposed on the surface Rm−1 = {x1 = 0} (the stationary points of the P1 group), where s has to commute with the symmetry [see (9)]:

p(1)x = x ⇒ s(x) = p(1)sp(1) ⇒ s ∈ 1 × SOm−1.
Hence, accounting for the localization requirement, we have a diad map (relative spheroid; here Dm is an m-ball and Sm−1 its surface), (Dm; Sm−1) → (SOm; SOm−1), and the topological classification of such maps leads to the relative (or diad) homotopy group ([6]; the last equality below follows from the fibration SOm/SOm−1 = Sm−1):

Π(P1; m) = πm(SOm; SOm−1) = πm(Sm−1).
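For instance (a standard topological check, not spelled out in the text), the classical values π3(S²) = Z and π4(S³) = Z2 give Π(P1; 3) = π3(S²) = Z and Π(P1; 4) = π4(S³) = Z2: P1-symmetric localized configurations are labelled by an integer in the 4D case (m = 3) and only by a parity in the 5D case (m = 4).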
Similar considerations (of group orbits and stationary points) lead to the following result:
Π(Ol; m) = πm−l+1(SOm−l+1; SOm−l) = πm−l+1(Sm−l).
If l > 3, there is the equality: Π(SOl;m) = Π(Ol;m), while for l = 2, 3 one can find:
Π(SO3;m) = πm−2(SO2 × SOm−2;SOm−3) = πm−2(S1 × Sm−3),
Π(SO2;m) = πm−1(SOm;SOm−2 × SO2) = πm−1(RG+(m, 2)).
The set of quaternions with absolute value one, H1 = {f, |f| = 1}, forms a group under
quaternion multiplication, H1 ∼= SU2 = S3, and any s ∈ SO4 can be represented as a pair of such
quaternions [6], (f , g) ∈ S3(l) × S3(r), |f | = |g| = 1:
x∗ = sx ⇔ x∗ = f x g−1 = f x ḡ ; |x| = |x∗|.
The pairs (f, g) and (−f, −g) correspond to the same rotation s, that is, SO4 = S3(l) × S3(r)/±.
Note that the symmetry condition (9) also splits into two parts:
f(axb−1) = af(x)a−1, g(axb−1) = bg(x)b−1 ∀(a,b) ∈ G ⊂ SO4. (10)
7 Example of SO2-symmetric quaternion field
Let us consider an example of an SO2{2, 3}-symmetric f-field configuration (g = 1), which carries both a charge and an SO2-quasi-charge (left, of course), f(x) : H = R4 → H1; f(∞) = 1. The symmetry
condition (10) reads
f(eiφ/2xe−iφ/2) = eiφ/2f(x)e−iφ/2. (11)
We switch to ‘double-axial’ coordinates: x = a e^{iϕ} + b e^{iψ} j. Let us use imaginary quaternions q as stereographic coordinates on H1, and take a symmetric field q(x) consistent with Eq. (11):

q(x) = x i x̄ + i = −q̄ ,   f(x) = −(1 − q)/(1 + q) .   (12)
It is easy to find the ‘center of the quasi-soliton’ (a 1-submanifold, S1),

S1 = f−1(−1) = q−1(0) = {a = 0, b = 1} = {x0(ψ) = e^{iψ} j},

and the ‘vector equipment’ on this circle:

dx|x0 = da e^{iϕ} + (db + i dψ) e^{iψ} j ,   (1/4) df = i db − k e^{i(ϕ+ψ)} da ;
the i-vector always points along the radius b (parallel translation along the circle S1; this is a ‘trivial’, or ‘flavor’, vector). The two other (‘phase’) vectors make a 2π-rotation along the circle.
In fact, the field (12) also has the symmetry SO2{1, 4}, and this feature restricts the possible directions of the ‘flavor’ vector (two ‘flavors’ are possible, ±; the P2{1, 4} symmetry – the π-rotation of x1, x4 – gives the same effect). Another interesting observation is that the equipped circle can also be located at the stationary points of the SO2-symmetry (this increases the number of ‘flavors’).
8 Quasi-charges and their morphisms (in 5D, i.e. m = 4)
If G ⊂ SO4, the QC-group has two isomorphic parts, left and right: Π(G) = Π(l)(G) + Π(r)(G).
The Table below describes the quasi-charge groups for G ⊂ G0 = (O3 × P4) ∩ SO4 (P4 is the spatial inversion; the 4-th coordinate is the extra dimension of the G0-symmetric expanding cosmological background).
Table. QC-groups Π(l)(G) and their morphisms to the preceding group; G ⊂ G0.

  G                              Πl(G) → Πl(G∗)                    ‘label’
  SO{1, 2}                       Z(e) --e--> Z2                    e
  SO{1, 2} × P{3, 4}             Z(ν) + Z(H) --i,m2--> Z(e)        ν0; H0 → e + e
  SO{1, 2} × P{2, 3}             Z(W) --0--> Z(e)                  W → e + ν0
  SO{1, 2} × P{2, 4}             Z(Z) --0--> Z(e)                  Z0 → e + e
  SO{1, 2} × P{3, 4} × P{2, 3}   Z(γ) --0--> Z(H), --0--> Z(W)     γ0 → H0 + H0, → W + W
‘Quasi-particles’ whose symmetry includes P4 seem to be truly neutral (neutrinos, Higgs particles, the photon).
One can further assume that a hadron bag is a specific place where the G0-symmetry does not work, and that the bag’s symmetry is isomorphic to O4. This assumption can lead to another classification of quasi-solitons (somewhat doubling the above scheme), where self-dual and anti-self-dual one-parameter groups take the place of the SO2-group. The total set of quasi-particle parameters (parameters of the equipped 1-manifold (loop) plus parameters of the group) for (anti)self-dual groups, G(4, 2) × RP 2, is larger than the analogous set for groups SO2 ⊂ G0, which is just O3 × G(3, 1) = RP 2. If the number of ‘flavor’ parameters (which are not degenerate and have some preferable particular values; this should be sensitive to the discrete part of G – at least photons have the same flavor) is the same as in the case of ‘white’ quasi-particles, the remaining parameters (degenerate, or ‘phase’) can give room for ‘color’ (in addition to spin). So, perhaps one might think about ‘color neutrinos’ (in the context of the pomeron and the baryon spin puzzle), ‘color W, Z, and Higgs’ (another context – B-mesons), and so on.
Note that in this picture the very notion of quasi-particle depends on the background symmetry
(also to note: there are no ’quanta of torsion’ per se). On the other hand, large clusters of
quasi-particles (matter) can disturb the background, and waves of such small disturbances (with
wavelength larger than the thickness L, perhaps) can be generated as well (but these waves do
not carry (quasi)charges, that is, are not quantized).
9 Coexistence: phenomenological ‘quantum fields’ on classical background
The non-linear, particle-like field configurations with quasi-charges (quasi-particles) should be very elongated along the extra dimension (all of the same size L), while being small-sized along the usual dimensions, λ ≪ L. The motion of such a spaghetti-like quasi-particle should be very complicated and stochastic due to the ‘strong’ imponderable noise, such that different parts of the spaghetti follow their own paths. At the same time, a quasi-particle can acquire ‘its own’ energy–momentum due to scattering of ponderable waves (whose wave-vectors are almost tangent to the usual 3D (sub)space); so, it seems that the scattering amplitudes1 of those parts of the spaghetti which have the same 3D coordinates can be summed up, providing an auxiliary, secondary field.
1 These amplitudes can depend on additional vector-parameters (‘equipment vectors’) relating to the differential of the field mapping at a ‘quasi-particle center’ – where the quasi-charge density is largest (if it has a covariant sense).
So, the imponderable waves provide stochasticity (of the motion of the spaghetti’s parts), while the ponderable waves ensure superposition (with secondary fields). The phenomenology of the secondary fields could be of Lagrangian type, with positive energy acquired by quasi-particles, so as to ensure stability (of the whole waveguide with its infill, with respect to quasi-particle production; the least-action principle is deeply connected with Lyapunov stability and is deducible, in principle, from the path-integral approach).
10 ‘Plain’ R2 gravity on a very thick brane and a change in Newton’s Law of Gravitation
Let us start with the 4d (from 5D) bi-Laplace equation with a δ-source [as the weak-field, non-relativistic (stationary) approximation (it is assumed that ‘mass is possible’) for R2-gravity (5)] and its solution (R is the 4d distance, radius):

Δ²ϕ = −a δ(R) ;   ϕ(R²) = (a/8) ln R² − b/R²  (+ c , but c does not matter) ;   (13)

the attracting force between two point masses is F_point = ∂ϕ/∂R; a, b should be proportional to both masses.
Now let us suppose that all masses are distributed along the extra dimension with a ‘universal function’ µ(p), ∫ µ(p) dp = 1. Then the attracting (gravitation) force takes the following form [see (13); r is the usual 3d distance]:

F(r) = ∂/∂r ∫∫ ϕ(r² + (p − q)²) µ(p)µ(q) dp dq = (a r/4) V − b V′ ,   V(r) = ∫∫ µ(p)µ(q) dp dq / (r² + (p − q)²) .   (14)

Fig. 1. Deviation δF = F − 1/r² for different µ(p), see Eq. (14) and text below.
(Note that V (r) can be restored if F (r) is measured.)
Taking µ1(p) = π⁻¹/(1 + p²) (the typical scale along the extra dimension is taken as the unit, L = 1; it seems that L should be greater than ten AU), one can find rV1(r) = 1/(2 + r) and

F(r) = a/(8 + 4r) + 2b(1 + r)/(r²(2 + r)²) ;   or (now L ≠ 1)   F(r) = 1/r² + r/(2L(2L + r)²) , where a = b = 2/L².
Fig. 1, curve (a) shows δF = F − 1/r² (the deviation from Newton’s Law; a/b is chosen such that δF(0) = 0); two other curves, (b) & (c), correspond to µ2 = 2π⁻¹/(1 + p²)², µ3 = 2π⁻¹p²/(1 + p²)² (also δF(0) = 0; residues help to find rV2 = (10 + 6r + r²)/(2 + r)³, rV3 = (2 + 2r + r²)/(2 + r)³).
We see that in principle this theory can explain galaxy rotation curves, v²(r) ∝ rF → const as r → ∞, without need for Dark Matter (or MOND [2]; on rotation curves and DM see [7]; DM is being searched for in the Solar system too [8]).
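The claim rV1(r) = 1/(2 + r) for the Cauchy profile µ1 can be checked numerically from the definition of V(r) in Eq. (14); the sketch below (illustrative only, not part of the paper) estimates V by Monte Carlo, sampling p and q from the density µ1.

```python
# Monte Carlo check of r*V1(r) = 1/(2+r) for mu1(p) = 1/(pi*(1+p^2)):
# V(r) = E[ 1/(r^2 + (p-q)^2) ] with p, q drawn independently from mu1.
import numpy as np

rng = np.random.default_rng(0)
p = rng.standard_cauchy(2_000_000)   # samples with density mu1
q = rng.standard_cauchy(2_000_000)

for r in (0.5, 1.0, 2.0, 5.0, 10.0):
    V_mc = np.mean(1.0 / (r**2 + (p - q)**2))
    print(r, r * V_mc, 1.0 / (2.0 + r))   # the last two columns should agree
```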
Q: Can the ‘coherence of mass’ along the extra dimension be disturbed ? (the flyby anomaly,
the Pioneer anomaly [9]); can µ(p) be negative in some domains of p ?
11 How to register ‘powerless’ waves
This section is added perhaps for some amusement (or maybe not? who knows). We have learnt that S-waves do not carry momentum and angular momentum, so they cannot perform any work or spin flip.
But let us suppose that these waves can effect a flip-flop of two neighboring spins. A ‘detector’ could then be a medium with two sorts of spins, A and B. Let sA = sB = 1/2 but gA ≠ gB, and let the initial state be prepared as follows: {⟨sAz⟩, ⟨sBz⟩}(0) = {1/2, −1/2}. Then the process of spin relaxation starts; turning on an appropriate magnetic field Hz (and alternating fields of proper frequencies), one can measure the detector’s state and find the spin relaxation time.
The next step: skilled experimenters try to generate S-waves and to register an effect of these waves on the spin relaxation. The generation of intense ‘coherent’ S-waves could perhaps proceed with a similar spin system subjected to alternating polarization.
12 Single photon experiment (that to feel huge extra dimension),
and Conclusion
Today, many laboratories have sources of single (heralded) photons, or entangled bi-photons (say, for Bell-type experiments [10]); some students can perform laboratory work with single photons, convincing themselves by their own experience that light is quantized (the Grangier experiment) [11].
A minor modification of the single (polarized) photon interference experiment is suggested, say, in a Mach-Zehnder fiber interferometer with long enough arms (the fibers may be rolled). The only new element is a fast-acting shutter placed at the beginning of one of the interferometer’s arms (the closing-opening time of the shutter should be smaller than the flight time in the arms). For example, a fast electro-optical modulator in combination with a polarizer (or a number of such combinations) can be used with polarized photons.
Both Quantum mechanics (no particle ontology) and Bohmian mechanics (wave-particle double ontology) [12] exclude any change in the interference pattern as a result of the separating activity of such a fast shutter (while the photon’s ‘halves’ are making their way to the meeting place). However, if a photon has a non-local spaghetti-like ontology (along the extra dimension) and fragments of this spaghetti are moving along both arms at once, then the shutter should tear up this spaghetti (mainly without photon absorption) and tear out some of its fragments (which will dissolve in the ‘zero-point oscillations’). Hence, if the absorption factor of the shutter (the extinction ratio of the polarizer) is large enough, the 50/50 proportion (between the photon’s amplitudes in the arms) will be changed and a significant decrease of the interference visibility should be observed.
QM is everywhere (wherever we can see, of course), and so non-linear 5D-field fluctuations, looking like spaghetti-anti-spaghetti loops, should exist everywhere. (This omnipresence can be related to the universality of a ‘low-level heat death’, restricted by the presence of topological quasi-solitons – somewhat like the 2D computer experiment by Fermi, Pasta, and Ulam, where the process of thermalization was restricted by the existence of solitons. See also Sections 5–8 (and [4]) for arguments in favor of phenomenological (quantized) ‘secondary fields’ accounting for topological (quasi)charges and obeying superposition, the path integral and so on.)
AP, at least at the level of its symmetry, seems to be able to close the gap between the two branches of physics – General Relativity (with coordinate diffeomorphisms) and Quantum Mechanics (with Lorentz invariance).2 Most people grant all rights of fundamentality to quanta, and so they are trying to quantize gravity, and the very space-time (probing loops, and strings, and branes; see also the warning polemic by Schroer [14]). The other possibility is that quanta have a specific phenomenological origin related to topological (quasi)charges.
2 Rovelli writes [13]: “In spite of their empirical success, GR and QM offer a schizophrenic and confused understanding of the physical world.”
References
[1] A. Einstein and W. Mayer, Sitzungsber. preuss. Akad. Wiss. Kl 257–265 (1931).
[2] M. Milgrom, The modified dynamics – a status review, arXiv: astro-ph/9810302.
[3] J. F. Pommaret, Systems of Partial Differential Equations and Lie Pseudogroups (Math.
and its Applications, Vol. 14, New York 1978).
[4] I. L. Zhogin, Topological charges and quasi-charges in AP, arXiv: gr-qc/0610076; spherical
symmetry: gr-qc/0412130; 3-linear equations (contra-singularities): gr-qc/0203008.
[5] S.M. Carroll, Why is the Universe Accelerating ? arXiv: astro-ph/0310342
[6] B.A. Dubrovin, A.T. Fomenko and S.P. Novikov, Modern Geometry – Methods and Applica-
tions, Springer-Verlag, 1984.
[7] M.E. Peskin, Dark Matter: What is it ? Where is it ? Can we make it in the lab ?
http://www.slac.stanford.edu/grp/th/mpeskin/Yale1.pdf; M. Battaglia, M.E. Peskin,
The Role of the ILC in the Study of Cosmic Dark Matter, hep-ph/0509135
[8] L. Iorio, Solar System planetary orbital motions and dark matter, arXiv: gr-qc/0602095;
I.B. Khriplovich, Density of dark matter in Solar system and perihelion precession of planets,
astro-ph/0702260.
[9] C. Lämmerzahl, O. Preuss, and H. Dittus, Is the physics within the Solar system really
understood ? arXiv: gr-qc/0604052; A. Unzicker, Why do we Still Believe in Newton’s Law ?
Facts, Myths and Methods in Gravitational Physics, gr-qc/0702009.
[10] G. Weihs, T. Jennewein, C. Simon, H. Weinfurter, and A. Zeilinger, Phys. Rev. Lett. 81, 5039
(1998); quant-ph/9810080; W. Tittel, G. Weihs, Photonic Entanglement for Fundamental
Tests and Quantum Communication, quant-ph/0107156.
[11] See the next links: departments.colgate.edu/physics/research/Photon/root/ ,
marcus.whitman.edu/ beckmk/QM .
[12] H. Nikolić, Quantum mechanics: Myths and facts, arXiv: quant-ph/0609163 .
[13] C. Rovelli, Unfinished revolution, gr-qc/0604045 .
[14] B. Schroer, String theory and the crisis in particle physics (a Samisdat on particle physics),
arXiv: physics/0603112;
the other sources of contra-string polemic are seemingly the books: P. Woit, Not even wrong;
L. Smolin, The Trouble with Physics (and the blog math.columbia.edu/∼woit/wordpress).
|
0704.0858 | Lessons Learned from the deployment of a high-interaction honeypot |
Lessons learned from the deployment of a high-interaction honeypot
E. Alata1, V. Nicomette1, M. Kaâniche1, M. Dacier2, M. Herrb1
1LAAS-CNRS, University of Toulouse, 7 Avenue du Colonel Roche, 31077 Toulouse Cedex 4, France
2Eurécom, 2229 Route des Crêtes, BP 193, 06904 Sophia Antipolis Cedex, France
{alata, nicomette, kaaniche, herrb}@laas.fr; [email protected]
Abstract
This paper presents an experimental study and the lessons
learned from the observation of the attackers when logged on a
compromised machine. The results are based on a six months
period during which a controlled experiment has been run with
a high interaction honeypot. We correlate our findings with
those obtained with a worldwide distributed system of low-
interaction honeypots.
1. Introduction
During the last decade, several initiatives have been
developed to monitor and collect real world data about
malicious activities on the Internet, e.g., the Internet
Motion Sensor project [1], CAIDA [2] and Dshield [3]. The
CADHo project [4] in which we are involved is
complementary to these initiatives and is aimed at:
• deploying a distributed platform of honeypots [5] that
gathers data suitable to analyze the attack processes
targeting a large number of machines on the Internet;
• validating the usefulness of this platform by carrying out
various analyses, based on the collected data, to
characterize the observed attacks and model their
impact on security.
A honeypot is a machine connected to a network but
that no one is supposed to use. If a connection occurs, it
must be, at best an accidental error or, more likely, an
attempt to attack the machine.
The first stage of the project focused on the
deployment of a data collection environment (called
Leurré.com [6]) based on low-interaction honeypots. As
of today, around 40 honeypot platforms have been
deployed at various sites from academia and industry in
almost 30 different countries over the five continents.
Several analyses and interesting conclusions have been
derived based on the collected data as detailed e.g., in
[4,5,7-9]. Nevertheless, with such honeypots, hackers can
only scan ports and send requests to fake servers without
ever succeeding in taking control over them. The second
stage of our project is aimed at setting up and deploying
high-interaction honeypots to allow us to analyze and
model the behavior of malicious attackers once they have
managed to compromise and get access to a new host,
under strict control and monitoring. We are mainly
interested in observing the progress of real attack
processes and the activities carried out by the attackers in
a controlled environment.
In this paper, we describe the lessons learned from the
development and deployment of such a honeypot. The
main contributions are threefold. First, we do confirm the
findings discussed in [9] showing that different sets of
compromised machines are used to carry out the various
stages of planned attacks. Second, we do outline the fact
that, despite this apparent sophistication, the actors
behind those actions do not seem to be extremely skillful,
to say the least. Last, the geographical location of the
machines involved in the last step of the attacks and the
link with some phishing activities shed a geopolitical and
socio-economical light on the results of our analysis.
The paper is organized as follows. Section 2 presents
the architecture of our high-interaction honeypot and the
design rationales for our solution. The lessons learned
from the attacks observed over a period of almost 4.5
months are discussed in Section 3. Finally, Section 4
concludes and discusses future work. An extended version
of this paper detailing the context of this work and the
related state-of-the art is available in [10].
2. Architecture of our honeypot
In our implementation, we decided to use VMware [11]
and to install a virtual operating system on top of it. Compared
to solutions based on physical machines, virtual
honeypots provide a cost effective and flexible solution
that is well suited for running experiments to observe
attacks.
The objective of our experiment is to analyze the
behavior of the attackers who succeed in breaking into a
machine. The vulnerability that they exploit is not as
crucial as the activity they carry out once they have broken
into the host. That's why we chose to use a simple
vulnerability: weak passwords for ssh user accounts. Our
honeypot is not particularly hardened for two reasons.
First, we are interested in analyzing the behavior of the
attackers even when they exploit a buffer overflow and
become root. So, if we use some kernel patch such as Pax
[12], our system will be more secure but it will be
impossible to observe some behavior. Secondly, if the
system is too hardened, the intruders may suspect
something abnormal and then give up.
In our setup, only ssh connections to the virtual host
are authorized so that the attacker can exploit this
vulnerability. A firewall blocks all connection attempts
from the Internet, but those to port 22 (ssh). Also, any
connection from the virtual host to the outside is blocked
to prevent intruders from attacking remote machines from the honeypot. This does not prevent the intruder from downloading code, using the ssh connection1.
1 We have sometimes authorized http connections for a short time, by checking that the attackers were not trying to attack other remote hosts.
Our honeypot is a standard Gnu/Linux installation,
with kernel 2.6, with the usual binary tools. No additional
software was installed except for the Apache http server.
This kernel was modified as explained in the next
subsection. The real host executing VMware uses the
same Gnu/Linux distribution and is isolated from outside.
In order to log what the intruders do on the honeypot,
we modified some drivers functions (tty_read and
tty_write), as well as the exec system call in the Linux
kernel. The modifications of tty_read and tty_write
enable us to intercept the activity on all the terminals of
the system. The modification of the exec system call
enables us to record the system calls used by the intruder.
These functions are modified in such a way that the
captured information is logged directly into a buffer of the
kernel memory of the honeypot itself.
Moreover, in order to record all the logins and
passwords tried by the attackers to break into the
honeypot we added a new system call into the kernel of
the virtual operating system and we modified the source
code of the ssh server so that it uses this new system call.
The logins and passwords are logged in the kernel
memory, in the same buffer as the information related to
the commands used by the attackers.
The activities of the intruder logged by the honeypot
are preprocessed and then stored into an SQL database.
The raw data are automatically processed to extract
relevant information for further analyses, mainly: i) the IP
address of the attacking machine, ii) the login and the
password tested, iii) the date of the connection, iv) the
terminal associated (tty) to each connection, and v) each
command used by the attacker.
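As a rough illustration of this processing step (not the authors’ actual code; the record layout and values are hypothetical), the extracted fields can be stored in a small SQL table as follows.

```python
# Minimal sketch: storing one preprocessed honeypot record, with the
# fields i)-v) listed above, in an SQLite table. Schema and sample
# values are hypothetical.
import sqlite3

conn = sqlite3.connect("honeypot.db")
conn.execute("""CREATE TABLE IF NOT EXISTS events (
    src_ip   TEXT,   -- i)   IP address of the attacking machine
    login    TEXT,   -- ii)  login tested
    password TEXT,   -- ii)  password tested
    date     TEXT,   -- iii) date of the connection
    tty      TEXT,   -- iv)  terminal (tty) associated with the connection
    command  TEXT    -- v)   command used by the attacker (if any)
)""")
conn.execute(
    "INSERT INTO events VALUES (?, ?, ?, ?, ?, ?)",
    ("198.51.100.7", "admin", "admin123",
     "2006-03-14T02:17:55", "tty1", "wget http://example.org/x.tgz"),
)
conn.commit()
conn.close()
```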
3. Experimental results
This section presents the results of our experiments.
First, we give global statistics in order to give an overview
of the activities observed on the honeypot, then we
characterize the various intrusion processes. Finally, we
analyze in detail the behavior of the attackers once they
break into the honeypot. In this paper, an intrusion
corresponds to the activities carried out by an intruder
who has succeeded in breaking into the system.
3.1. Global statistics
The high-interaction honeypot has been deployed on
the Internet and has been running for 131 days during
which 480 IP addresses have tried to contact its ssh port.
It is worth comparing this value to the number of hits
observed against port 22, considering all the other low-
interaction honeypot platforms we have deployed in the
rest of the world (40 platforms). On average, each platform has received hits on port 22 from approximately 100 different IPs during the same period of
time. Only four platforms have been contacted by more than 300 different IP addresses on that port and only one was hit by more visitors than our high-interaction honeypot. Even better, the low-interaction platform
maintained in the same subnet as the high-interaction
honeypot experienced only 298 visits, i.e., less than two thirds of what the high-interaction one did see. This first and very simple observation confirms what was already described in [9]: some attacks are driven by the fact that attackers know in advance, thanks to scans done by other machines, where potentially vulnerable services are
running. The existence of such a service on a machine will
trigger more attacks against it. This is what we observe
here: the low interaction machines do not have the ssh
service open, as opposed to the high interaction one, and,
therefore get less attacked than the one where some target
has been identified.
The number of ssh connection attempts to the
honeypot we have recorded is 248717 (we do not consider
here the scans on the ssh port). This represents about
1900 connection attempts a day. Among these 248717
connection attempts, only 344 were successful. Table 1 presents the ten most frequently tried user accounts, as well as the number of different passwords that have been tested by the attackers for each of them. It is noteworthy that many user accounts corresponding to common first names have also regularly been tested on our honeypot. The total
number of accounts tested is 41530.
Account      Number of connection attempts   Percentage of connection attempts   Number of passwords tested
root                 34251                          13.77%                              12027
admin                 4007                           1.61%                               1425
test                  3109                           1.25%                                561
user                  1247                           0.50%                                267
guest                 1128                           0.45%                                201
info                   886                           0.36%                                203
mysql                  870                           0.35%                                211
oracle                 857                           0.34%                                226
postgres               834                           0.33%                                194
webmaster              728                           0.29%                                170
Table 1: ssh connection attempts and number of passwords tested
Before the real beginning of the experiment (approximately one and a half months), we had deployed a machine with an ssh server correctly configured, offering
no weak account and password. We have taken advantage
of this observation period to determine which accounts
were most often tried by automated scripts. Using this
acquired knowledge, we have created 17 user accounts and
we have started looking for successful intrusions. Some of
the created accounts were among the most attacked ones
and others not. As we already explained in the paper, we
have deliberately created user accounts with weak
passwords (except for the root account). Then, we have
measured the time between the creation of the account
and the first successful connection to this account, then
the duration between the first successful connection and the first real intrusion (as explained in section 3.2, the first successful connection very seldom corresponds to a real intrusion but rather to an automatic script which tests passwords).
Table 2 summarizes these durations (UAi means User
Account i).
User Account   Duration between creation and     Duration between first successful
               first successful connection       connection and first intrusion
UA1            1 day                             4 days
UA2            Half a day                        4 minutes
UA3            15 days                           1 day
UA4            5 days                            10 days
UA5            5 days                            null
UA6            1 day                             4 days
UA7            5 days                            8 days
UA8            1 day                             9 days
UA9            1 day                             12 days
UA10           3 days                            2 minutes
UA11           7 days                            4 days
UA12           1 day                             8 days
UA13           5 days                            17 days
UA14           5 days                            13 days
UA15           9 days                            7 days
UA16           1 day                             14 days
UA17           1 day                             12 days
Table 2: History of breaking accounts
The third column indicates that there is usually a gap of several days between the time when a weak password is found and the time when someone logs into the system with this password to issue some commands on the now compromised host. This is a somewhat surprising fact and is described in more detail below. The
and is described with some more details here below. The
particular case of the UA5 account is explained as follows:
an intruder succeeded in breaking the UA4 account. This
intruder looked at the contents of the /etc/passwd file in
order to see the list of user accounts for this machine. He
immediately decided to try to break the UA5 account and
he was successful. Thus, for this account, the first
successful connection is also the first intrusion.
3.2. Intrusion process
In this section, we present the conclusions of our analyses regarding the process used to exploit the weak password vulnerability of our honeypot. The observed
attack activities can be grouped into three main
categories: 1) dictionary attacks, 2) interactive intrusions,
3) other activities such as scanning, etc.
Figure 3: Classification of observed IP addresses
As illustrated in figure 3, among the 480 IP addresses
that were seen on the honeypot, 197 performed dictionary
attacks and 35 performed real intrusions on the honeypot
(see below for details). The remaining 248 IP addresses were used for scanning activity or for activity that we could not clearly identify. Among the 197 IP addresses that made
The others (179) did not find the passwords either because
their dictionary did not include the accounts we created or
because the corresponding weak password had already
been changed by a previous intruder. We have also
represented in Figure 3 the corresponding number of IP
addresses that were also seen on the low-interaction
honeypot deployed in the context of the project in the
same network (between brackets). Whereas most of the IP
addresses seen on the high interaction honeypot are also
observed on the low interaction honeypot, none of the 35
IPs used to really log into our machine to launch
commands have ever been observed on any of the low
interaction honeypots that we do control in the whole
world! This striking result is discussed hereafter.
3.2.1. Dictionary attack. The preliminary step of
the intrusion consists in dictionary attacks2. In general, it
takes only a couple of days for newly created accounts to
be compromised. As shown in Figure 3, these attacks have
been launched from 197 IP addresses. By analysing more
precisely the duration between the different ssh
connection attempts from the same attacking machine, we
can say that these dictionary attacks are executed by
automatic scripts. As a matter of fact, we have noted that
these attacking machines try several hundreds, even
several thousands of accounts in a very short time.
We have then made further analyses regarding the machines that succeeded in finding passwords, i.e., the 18 IP addresses. By searching the Leurré.com database
containing information about the activities of these
addresses against the other low interaction honeypots we
found four important elements of information. First, we
note that none of our low-interaction honeypots has an ssh server running: none of them replies to requests sent to port 22. These 18 machines are thus scanning machines without any prior knowledge of their targets' open ports. Second, we found evidence that these IPs were scanning in a
simple sequential way all addresses to be found in a block
of addresses. Moreover, the comparison of the
fingerprints left on our low interaction honeypots
highlights the fact that these machines are running tools
behaving the same way, not to say the same tool. Third,
these machines are only interested in port 22, they have
never been seen connecting to other ports. Fourth, there is
no apparent correlation as far as their geographical
location is concerned: they are located all over the world.
In other words, this analysis shows that these IPs are used to run a well-known program. The detailed analysis of this specific tool is outside the scope of the paper but, nevertheless, it is worth mentioning that the activities linked to that tool, as observed in our Leurré.com database, indicate that it is unlikely to be a worm but rather an easy-to-use and widely spread tool.
2 We consider as “dictionary attack” any attack that tries more than 10 different accounts and passwords.
3.2.2. Interactive attack: intrusion. The second
step of the attack consists in the real intrusion. We have
noted that, several days after the guessing of a weak password, an interactive ssh connection is executed on
our honeypot to issue several commands. We believe that,
in those situations, a real human being, as opposed to an
automated script, is connected to our machine. This is explained and justified in Section 3.3. As shown in Figure
3, these intrusions come from 35 IP addresses never
observed on any of the low-interaction honeypots.
Whereas the geographic localisation of the machines performing dictionary attacks is very unclear, the machines used by a human being for the interactive ssh connections are, most of the time, clearly identified. We have a precise idea of their country, of their geographic address, and of the person responsible for the corresponding domain. Surprisingly, half of these machines come from the same country, a European country not usually seen as one of the most attacking ones as reported, for instance, by the www.leurrecom.org web site.
We then analysed whether these IP addresses had tried to connect to other ports of our honeypot apart from these interactive connections; the answer is no. Furthermore, the machines that make interactive ssh connections on our honeypot make no other kind of connection to it, i.e., no scan or dictionary attack. Further analyses, using the data
collected from the low-interaction honeypots deployed in
the CADHo project, revealed that none of the 35 IP
addresses have ever been observed on any of our
platforms deployed in the word. This is interesting
because it shows that these machines are totally dedicated
to this kind of attack (they only targeted our high-
interaction honeypot and only when they knew at least
one login and password on this machine).
We can conclude from these analyses that we face two groups of attacking machines. The first group is composed of machines that are specifically in charge of making dictionary attacks; the results of these dictionary attacks are then published somewhere. Another group of machines, which has no intersection with the first group, then comes to exploit the weak passwords discovered by the first group. This second group of machines is, as far as we can see, clearly geographically identified, and the commands are executed by a human being. A similar two-step process was already observed in the CADHo project when analyzing the data collected from the low-interaction honeypots (see [9] for more details).
3.3. Behavior of attackers
This section is dedicated to the analysis of the behavior
of the intruders. We first characterize the intruders, i.e.
we try to know if they are humans or programs. Then, we
present in more details the various actions they have
carried out on the honeypot. Finally, we try to figure out
what their skill level seems to be.
We concentrate the analyses on the last three months
of our experiment. During this period, some intruders
have visited our honeypot only once, others have visited it
several times, for a total of 38 ssh intrusions. These
intrusions were initiated from 16 IP addresses and 7
accounts were used. Table 3 presents the number of
intrusions per account, IP addresses and passwords used
for these intrusions. It is of course difficult to be sure that
all the intrusions for a same account are initiated by the
same person. Nevertheless, in our case, we noted that:
• most of the time, after his first login, the attacker changes the weak password into a strong one which, from then on, remains unchanged.
• when two different IP addresses access the same
account (with the same password), they are very close
and belong to the same country or company.
These two remarks lead us to believe that there is in
general only one person associated with the intrusions for a
particular account.
Account     Number of intrusions    Number of passwords    Number of IP addresses
UA2                  1                       1                       1
UA4                 13                       2                       2
UA5                  1                       1                       1
UA8                  1                       1                       1
UA10                 9                       2                       2
UA13                 6                       1                       5
UA16                 5                       1                       3
UA17                 2                       1                       1
Table 3: Number of intrusions per account
3.3.1. Type of the attackers: humans or
programs. Before analyzing what intruders do when
connected, we can try to identify who they are. They can
be of two different natures. Either they are humans, or
they are programs which reproduce simple behaviors. For
all intrusions but 12, intruders have made mistakes when
typing commands. Mistakes are identified when the
intruder uses the backspace to erase a previously entered
character. So, it is very likely that such activities are
carried out by a human, rather than programs.
When an intruder did not make any mistake, we
analyzed how the data were transmitted from the attacker
machine to the honeypot. We can note that, for ssh
communications, data transmission between the client
and the server is asynchronous. Most of the time, the ssh
client implementation uses the function select() to get
user input. So, when the user presses a key, this function
ends and the program sends the corresponding value to
the server. In the case of a copy and a paste into the
terminal running the client, the select() function also
ends, but the program sends all the values contained in
the buffer used for the paste into the server. We can
assume that, when tty_read() returns more than one
character, these values have been sent after a copy and a
paste. If all the activities during a connection are due to a
copy and a paste, we can strongly assume that it is due to
an automatic script. Otherwise, this is quite likely a
human being who uses shortcuts from time to time (such
as CTRL-V to paste commands into its ssh session). For 7
out of the last 12 activities without mistakes, intruders
have entered several commands on a character-by-
character basis. This, once again, seems to indicate that a
human being is entering the commands. For the 5 others,
their activities are not significant enough to conclude:
they have only launched a single command, like w, which
is not long enough to highlight a copy and a paste.
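The heuristic described above can be summarized in a few lines of code; the sketch below is only an illustration of the decision rule (the capture format is hypothetical), not the tool used in the experiment.

```python
# Minimal sketch of the human-vs-script heuristic: backspaces indicate a human
# correcting typing mistakes; sessions consisting only of multi-character
# tty_read() chunks indicate copy-and-paste, i.e. most likely a script.
BACKSPACE = "\x7f"

def classify_session(tty_reads):
    """tty_reads: list of strings, one per captured tty_read() return value."""
    if any(BACKSPACE in chunk for chunk in tty_reads):
        return "human (typing mistakes corrected with backspace)"
    if sum(len(c) for c in tty_reads) < 3:
        return "inconclusive (activity too short, e.g. a single 'w')"
    if all(len(chunk) > 1 for chunk in tty_reads):
        return "likely script (everything arrived as pasted blocks)"
    return "likely human (commands entered character by character)"
```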
3.3.2. Attacker activities. The first significant
remark is that all of the intruders change the password of
the hacked account. The second remark is that most of
them start by downloading some files. In all cases, but
one, the attackers tried to download some malware to the
compromised machines. In a single case, the attacker has
first tried to download an innocuous, yet large, file to the
machine (the binary for a driver coming from a known
web site). This is probably a simple way to assess the
connectivity quality of the compromised host.
The command used by the intruders to download the
software is wget. To be more precise, 21 of the 38 intrusions include the wget command. These 21 intrusions
concern all the hacked accounts. As mentioned in
section 2, outgoing http connections are forbidden by the
firewall. Nevertheless, the intruders still have the
possibility to download files through the ssh connection
using the sftp command (instead of wget). Surprisingly, we
noted that only 30% of the intruders did use this ssh
connection. 70% of the attackers were unable to download
their malware due to the absence of http connectivity!
Three explanations can be envisaged at this stage. First,
they follow some simplistic cookbook and do not even
know the other methods at their disposal to upload a file.
Second, the machines where the malware resides do not
support sftp. Third, the lack of http connectivity made
the attacker suspicious and he decided to leave our
system. Surprisingly, the first explanation seems to be the
right one in our case as we noticed that the attackers leave
after an unsuccessful wget and come back a few hours or
days later, trying the same command again as if they were hoping it would work this time. Some of them have been
seen trying this several times. It can be concluded that:
i) they are apparently unable to understand why the
command fails, ii) they are not afraid to come back to the
machine despite the lack of http connectivity,
iii) such a brute-force repetition reveals that they are not aware of any other method to upload the file.
Once the attackers manage to download their malware
using sftp, they try to install it (by decompressing or
extracting files for example). 75% of the intrusions that
installed software did not install it on the hacked account
but rather on standard directories such as /tmp, /var/tmp
or /dev/shm (which are directories with write access for
everybody). This makes the hacker activity more difficult
to identify because these directories are regularly used by
the operating system itself and shared by all the users.
Additionally, we have identified four main activities of
the intruders. The first one is launching ssh scans on
other networks but these scans have never tested local
machines. Their idea is to use the compromised machine to scan other networks, so that it is more difficult for the administrators of the scanned networks to trace them.
The program used by most intruders, which is easy to find
on the Internet, is pscan.c.
The second type of activity consists in launching irc
clients, e.g., emech [13] and psyBNC. Names of binary files
have regularly been changed by intruders, probably in
order to hide them. For example, the binary files of emech
have been renamed crond or inetd, which are well-known Unix binary file and process names.
The third type of activity is trying to become root.
Surprisingly, such attempts have been observed for 3
intrusions only. Two rootkits were used. The first one
exploits two vulnerabilities: a vulnerability which
concerns the Linux kernel memory management code of
the mremap system call [14] and a vulnerability which
concerns the internal kernel function used to manage
process's memory heap [15]. This exploit could not
succeed because the kernel version of our honeypot does
not correspond to the version of the exploit. The intruder
should have realized this because he checked the version
of the kernel of the honeypot (uname -a). However, he
launched this rootkit anyway and failed. The other rootkit
used by intruders exploits a vulnerability in the program
ld. Thanks to this exploit, three intruders became root
but the buffer overflow succeeded only partially. Even if
they apparently became root, they could not launch all
desired programs (removing files for example caused
access control errors).
The last activity observed in the honeypot is related to
phishing activities. It is difficult to make precise
conclusions because only one intruder has attempted to
launch such an attack. He downloaded a forged email and
tried to send it through the local smtp agent. But, as far as
we could understand, it looked like a preliminary step of
the attack because the list of recipient emails was very
short. It seems that it was just a preliminary test before the real deployment of the attack.
3.3.3. Attackers' skill. Intruders can, roughly speaking, be classified into two main categories. The most important one consists of script kiddies. They are inexperienced hackers who use programs found on the Internet without really understanding how they work. The other category represents intruders who are more dangerous, named “black hats”. They can cause serious damage to systems because they are experts in security and know how to exploit vulnerabilities on various systems.
As already presented in §3.3.2. (use of wget and sftp),
we have observed that intruders are not as clever as
expected. For example, for two hacked accounts, the
intruders don't seem to really understand the Unix file
access rights (it's very obvious for example when they try
to erase some files whereas they don't have the required
privileges). For these two same accounts, the intruders
also try to kill the processes of other users. Many
intruders do not try to delete the file containing the
history of their commands or do not try to deactivate this
history function (this file depends on the login shell used,
it is .bash_history for example for the bash). Among the
38 intrusions, only 14 were cleaned by the intruders (11
have deactivated the history function and 3 have deleted
the .bash_history file). This means that 24 intrusions left
behind them a perfectly readable summary of their
activity within the honeypot.
We have started another honeypot, with a private IP address, on the same network. This second honeypot is not directly accessible from the outside; it is only accessible from the first honeypot. We have modified
the /etc/motd file of the first honeypot (which is
automatically printed on the screen during the login
process) and added the following message: “In order to
use the software XXX, please connect to A.B.C.D”.
In spite of this message, only one intruder has tried to
connect to the second honeypot. We would expect an experienced hacker to try to use this information. More generally, we have very seldom seen an intruder looking for other active machines on the same network.
One important observation concerns fingerprinting activity. No intruder has tried to check for the presence of the VMware software. For three hacked accounts, the intruders have read the contents of the file /proc/cpuinfo, but that's all. None of the methods discussed on the Internet to identify the presence of VMware [16,17] was tested. This probably means that the
intruders are not experienced hackers.
4. Conclusion
In this paper, we have presented the results of an
experiment carried out over a period of 6 months during
which we have observed the various steps that lead an
attacker to successfully break into a vulnerable machine
and his behavior once he has managed to take control
over the machine.
The findings are broadly consistent with the informal know-how shared by security experts. The contributions of the paper reside in performing an experiment and rigorous analyses that confirm some of these informal assumptions. Also, the precise analysis of the observed
attacks reveals several interesting facts. First of all, the
complementarity between high and low interaction
honeypots is highlighted as some explanations can be
found by combining information coming from both setups. Second, it appears that most of the observed attacks
against port 22 were only partially automated and carried
out by script kiddies. This is very different from what can
be observed against other ports, such as 445, 139 and
others, where worms have been designed to completely
carry out the tasks required for the infection and
propagation. Last, honeypot fingerprinting does not seem
to be a high priority for attackers as none of them has
tried the known techniques to check if they were under
observation. It is also worth mentioning a couple of
important missing observations. First, we did not observe
scanners detecting the presence of the open ssh port and
providing this information to other machines in charge of
running the dictionary attack. This is different from
previous observations reported in [9]. Second, as most of
the attacks follow very simple and repetitive patterns, we
did not observe anything that could be used to derive
sophisticated scenarios of attacks that could be analyzed
by intrusion detection correlation engine. Of course, at
this stage it is too early to derive definite conclusions from
this observation.
Therefore, it would be interesting to keep doing this
experiment over a longer period of time to see if things do
change, for instance if a more efficient automation takes
place. We would have to solve the problem of weak
passwords being replaced by strong ones though, in order
to see more people succeeding in breaking into the
system. Also, it would be worth running the same
experiment by opening another vulnerability into the
system and verifying if the identified steps remain the
same, if the types of attackers are similar. Could it be, at
the contrary, that some ports are preferably chosen by
script kiddies while others are reserved to some more elite
attackers? This is something that we are in the process of
assessing.
Acknowledgement. This work has been partially
supported by: 1) CADHo, a research action funded by the French
ACI “Securité & Informatique” (www.cadho.org), 2) the
CRUTIAL IST-027513 project (crutial.cesiricerca.it), and 3) the
ReSIST IST- 026764 project (www.resist-noe.org).
5. References
[1] M. Bailey, E. Cooke, F. Jahanian, J. Nazario, The Internet
motion sensor - a distributed blackhole monitoring
system. Network and Distributed Systems Security Symp.
(NDSS 2005), San Diego, USA, 2005.
[2] CAIDA Project. Home Page of the CAIDA Project,
http://www.caida.org.
[3] http://www.dshield.org. Home page of the DShield.org
Distributed Intrusion Detection System.
[4] E. Alata, M. Dacier, Y. Deswarte, M. Kaaniche, K.
Kortchinsky, V. Nicomette, V. Hau Pham, and F. Pouget,
Collection and analysis of attack data based on honeypots
deployed on the Internet. QOP 2005, 1st Workshop on
Quality of Protection (co-located with ESORICS and
METRICS), Sept. 15, Milan, Italy, 2005.
[5] F. Pouget, M. Dacier, V. Hau Pham. Leurre.com: on the
advantages of deploying a large scale distributed
honeypot platform. In Proc. of ECCE'05, E-Crime and
Computer Conference, Monaco, 2005.
[6] Home page of Leurré.com: http://www.leurre.org.
[7] Project Leurré.com. Publications web page:
http://www.leurrecom.org/paper.htm.
[8] M. Dacier, F. Pouget, H. Debar. Honeypots: practical
means to validate malicious fault assumptions. 10th IEEE
Pacific Rim Int. Symp., pp. 383--388, Tahiti, 2004.
[9] F. Pouget, M. Dacier, V. Hau Pham, “Understanding
threats: a prerequisite to enhance survivability of
computing systems”, Int. Infrastructure Survivability
Workshop IISW'04, (25th IEEE Int. Real-Time Systems
Symp. (RTSS 04)), Lisboa, Portugal, 2004.
[10] E. Alata, V. Nicomette, M. Kaaniche, M. Dacier, M. Herrb,
Lessons learned from the deployment of a high-
interaction honeypot: Extended version. LAAS Report,
July 2006.
[11] Inc. VMware. Available on: http://www.vmware.com
[12] The PaX Team. Available on: http://pax.grsecurity.net.
[13] EnergyMech team. Energymech. Available on:
http://www.energymech.net.
[14] US-CERT. Linux kernel mremap(2) system call does not
properly check return value from do_munmap() function.
Available on: http://www.kb.cert.org/vuls/id/981222.
[15] US-CERT. Linux kernel do_brk() function contains
integer overflow. http://www.kb.cert.org/vuls/id/981222.
[16] J. Corey, Advanced honeypot identification and
exploitation. Phrack, N 63, Available on:
http://www.phrack.org/fakes/p63/p63-0x09.txt.
[17] T. Holz and F. Raynal, Detecting honeypots and other
suspicious environments. In Systems, Man and
Cybernetics (SMC) Information Assurance Workshop.
Proc. from the Sixth Annual IEEE, pages 29--36, 2005.
|
0704.0859 | Transfinite diameter, Chebyshev constant and energy on locally compact
spaces | arXiv:0704.0859v1 [math.CA] 6 Apr 2007
Transfinite diameter, Chebyshev constant and energy
on locally compact spaces
Bálint Farkas∗ ([email protected])
Technische Universität Darmstadt, Fachbereich Mathematik
Department of Applied Analysis
Schloßgartenstraße 7, D-64289, Darmstadt, Germany
Béla Nagy† ([email protected])
Bolyai Institute, University of Szeged
Aradi vértanúk tere 1
H-6720, Szeged, Hungary
∗ This work was started during the 3rd Summerschool on Potential Theory, 2004, hosted by the College of Kecskemét, Faculty of Mechanical Engineering and Automation (GAMF). Both authors would like to express their gratitude for the hospitality and the support received during their stay in Kecskemét.
† The second named author was supported by the Hungarian Scientific Research Fund; OTKA 49448
Abstract. We study the relationship between transfinite diameter, Chebyshev
constant and Wiener energy in the abstract linear potential analytic setting pio-
neered by Choquet, Fuglede and Ohtsuka. It turns out that, whenever the potential
theoretic kernel has the maximum principle, then all these quantities are equal for
all compact sets. For continuous kernels even the converse statement is true: if the
Chebyshev constant of any compact set coincides with its transfinite diameter, the
kernel must satisfy the maximum principle. An abundance of examples is provided
to show the sharpness of the results.
Keywords: Transfinite diameter, Chebyshev constant, energy, potential theoretic
kernel function in the sense of Fuglede, Frostman’s maximum principle, rendezvous
and average distance numbers.
2000 Math. Subj. Class.: 31C15; 28A12, 54D45
Dedicated to the memory of Professor Gustave Choquet (1 March
1915 - 14 November 2006)
1. Introduction
The idea behind abstract (linear) potential theory, as developed by
Choquet [4], Fuglede [9] and Ohtsuka [15], is to replace the Euclidian
space Rd by some locally compact space X and the well-known Newto-
nian kernel by some other kernel function k : X×X → R∪{+∞}, and
∗ This work was started during the 3rd Summerschool on Potential Theory, 2004,
hosted by the College of Kecskemét, Faculty of Mechanical Engineering and Automa-
tion (GAMF). Both authors would like to express their gratitude for the hospitality
and the support received during their stay in Kecskemét.
† The second named author was supported by the Hungarian Scientific Research
Fund; OTKA 49448
http://arxiv.org/abs/0704.0859v1
to look at which “potential theoretic” assertions remain true in this gen-
erality (see the monograph of Landkof [12]). This approach facilitates
general understanding of certain potential theoretic phenomena and
allows also the exploration of fundamental principles like Frostman’s
maximum principle.
Although there is a vast work done considering energy integrals and
different notions of energies, the familiar notions of transfinite diame-
ter and Chebyshev constants in this abstract setting are sporadically
found, sometimes indeed inaccessible, in the literature, see Choquet [4]
or Ohtsuka [17]. In [4] Choquet defines transfinite diameter and proves
its equality with the Wiener energy in a rather general situation, which
of course covers the classical case of the logarithmic kernel on C. We
give a slightly different definition for the transfinite diameter that, for
infinite sets, turns out to be equivalent with the one of Choquet. The
primary aim of this note is to revisit the above mentioned notions and
related results and also to partly complement the theory.
We already remark here that Zaharjuta’s generalisation of transfi-
nite diameter and Chebyshev constant to Cn is completely different in
nature, see [24], whereas some elementary parts of weighted potential
theory (see, e.g., Mhaskar, Saff [13] and Saff, Totik [20]) could fit in
this framework.
The power of the abstract potential analytic tools is well illustrated
by the notion of the average distance number from metric analysis, see
Gross [11], Stadje [21]. The surprising phenomenon noticed by Gross is
the following: If (X, d) is a compact connected metric space, there al-
ways exists a unique number r(X) (called the average distance number
or the rendezvous number of X), with the property that for any finite
point system x1, . . . , xn ∈ X there is another point x ∈ X with average
distance
d(xj , x) = r(X).
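Gross's property can be illustrated numerically (this is only a demonstration, not part of the argument): for X the unit circle with the chord distance, the average-distance intervals swept by different finite point systems all overlap in a common value r(X).

```python
# Sketch: for many random finite point systems on the unit circle (chord metric),
# compute [min_x avg-distance, max_x avg-distance]; the intersection of these
# intervals pins down the rendezvous number r(X).
import numpy as np

rng = np.random.default_rng(1)
grid = np.linspace(0.0, 2 * np.pi, 2000, endpoint=False)   # candidate points x

def avg_dist(points):
    # chord distance between angles t and s is 2*|sin((t - s)/2)|
    d = np.abs(2.0 * np.sin((grid[:, None] - points[None, :]) / 2.0))
    return d.mean(axis=1)

lo, hi = -np.inf, np.inf
for _ in range(200):
    pts = rng.uniform(0.0, 2 * np.pi, size=rng.integers(1, 8))
    a = avg_dist(pts)
    lo, hi = max(lo, a.min()), min(hi, a.max())

print("r(X) lies in [%.4f, %.4f]" % (lo, hi))
```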
Stadje generalised this to arbitrary continuous, symmetric functions
replacing d. Actually, it turned out, see the series of papers [6, 5, 7] and
the references therein, that many of the known results concerning av-
erage distance numbers (existence, uniqueness, various generalisations,
calculation techniques etc.), can be proved in a unified way using the
works of Fuglede and Ohtsuka. We mention for example that Frost-
man’s Equilibrium Theorem is to be accounted for the existence for
certain invariant measures (see Section 5 below). In these investigations
the two variable versions of Chebyshev constants and energies and even
their minimax duals had been needed, and were also partly available
due to the works of Fuglede [10] and Ohtsuka [16, 17], see also [6].
Another occurrence of abstract Chebyshev constants is in the study
of polarisation constants of normed spaces, see Anagnostopoulos, Ré-
vész [1] and Révész, Sarantopoulos [19].
Let us settle now our general framework. A kernel in the sense of
Fuglede is a lower semicontinuous function k : X × X → R ∪ {+∞}
[9, p. 149]. In this paper we will sometimes need that the kernel is
symmetric, i.e., k(x, y) = k(y, x). This is for example essential when
defining potential and Chebyshev constant, otherwise there would be
a left- and right-potential and the like.
Another assumption, however a bit of technical flavour, is the pos-
itivity of the kernel. This we need, because we would like to avoid
technicalities when integrating not necessarily positive functions. This
assumption is nevertheless not very restrictive. Since we usually con-
sider compact sets of X ×X, where by lower semicontinuity k is nec-
essarily bounded from below, we can assume that k ≥ 0. Indeed, as we
will see, energy, nth diameter and nth Chebyshev constant are linear in
constants added to k.
Denote the set of compactly supported Radon measures on X by
M(X), that is
M(X) := {µ : µ is a regular Borel measure on X,
µ has compact support, ‖µ‖ < +∞}.
Further, let M1(X) be the set of positive unit measures from M(X),
M1(X) := {µ ∈ M(X) : µ ≥ 0, µ(X) = 1}.
We say that µ ∈ M1(X) is supported on H if supp µ, which is
a compact subset of X, is in H. The set of (probability) measures
supported on H are denoted by M(H) (M1(H)).
Before recalling the relevant potential theoretic notions from [9] (see
also [15]), let us spend a few words on integrals (see [2, Ch. III-IV.]). Let
µ be a positive Radon measure on X. Then the integral of a compactly
supported continuous function with respect to µ is the usual integral.
The upper integral of a positive l.s.c. function f is defined as
f dµ := sup
0 ≤ h ≤ f
h ∈ Cc(X)
h dµ.
This definition works well, because by standard arguments (see, e.g.,
[2, Ch. IV., Lemma 1]) one has
k(x, y) = sup
0 ≤ h ≤ k
h ∈ Cc(X ×X)
h(x, y),
where, because of the symmetry assumption, it suffices to take only
symmetric functions h in the supremum.
What should be here noted, is that this notion of integral has all
useful properties that we are used to in case of Lebesgue integrals (note
also the necessity of the positivity assumptions).
The usual topology onM is the so-called vague topology which is a lo-
cally convex topology defined by the family {µ 7→
X f dµ : f ∈ Cc(X)}
of seminorms. We will only encounter this topology in connection with
families M of measures supported on subsets of the same compact set
K ⊂ X. In this case, the weak∗-topology (determined by C(K)) and
the vague topology coincide on M, Fuglede [9].
For a potential theoretic kernel k : X ×X → R+ ∪ {0} Fuglede [9]
and Ohtsuka [15] define the potential and the energy of a measure µ
Uµ(x) :=
k(x, y) dµ(y) , W (µ) :=
k(x, y) dµ(y) dµ(x).
The integrals exist in the above sense, although may attain +∞ as
well.
For a given set H ⊂ X its Wiener energy is
w(H) := inf
µ∈M1(H)
W (µ), (1)
see [9, (2) on p. 153].
One also encounters the quantities (see [9, p. 153])
U(µ) := sup
Uµ(x), V (µ) := sup
x∈ supp µ
Uµ(x).
Accordingly one defines the following energy functions
u(H) := inf
µ∈M1(H)
U(µ), v(H) := inf
µ∈M1(H)
V (µ).
In general, one has the relation
w ≤ v ≤ u ≤ +∞,
where in all places strict inequality may occur. Nevertheless, under our
assumptions we have the equality of the energies v and w, being gen-
erally different, see [9, p. 159]. More importantly, our set of conditions
suffices to have a general version of Frostman’s equilibrium theorem,
see Theorem 9.
In fact, at a certain point (in §4), we will also assume Frostman’s
maximum principle, which will trivially guarantee even u = v, that is,
the equivalence of all three energies treated by Fuglede.
Definition. The kernel k satisfies the maximum principle, if for every
measure µ ∈ M1
U(µ) = V (µ).
As our examples show in §5, this is essential also for the equivalence
of the Chebyshev constant and the transfinite diameter. Carleson [3,
Ch. III.] gives a class of examples satisfying the maximum principle:
Let Φ(r), r = |x|, x ∈ Rd be the fundamental solution of the Laplace
equation, i.e., Φ(|x−y|) the Newtonian potential on Rd. For a positive,
continuous, increasing, convex function H assume also that
H(Φ(r))rd−2 dr < +∞.
Then H ◦Φ satisfies the maximum principle; see [3, Ch. III.] and also
Fuglede [9] for further examples.
Let us now turn to the systematic treatment of the Chebyshev
constant and the transfinite diameter. We call a function g : X →
R log-polynomial, if there exist w1, . . . , wn ∈ X such that g(x) =
j=1 k(x,wj) for all x ∈ X. Accordingly, we will call the wjs and
n the zeros and the degree of g(x), respectively. Obviously the sum of
two log-polynomials is a log-polynomial again. The terminology here is
motivated by the case of the logarithmic kernel
k(x, y) = − log |x− y|,
where the log-polynomials correspond to negative logarithms of alge-
braic polynomials.
Log-polynomials give access to the definition of transfinite diameter
and the Chebyshev constant, see Carleson [3], Choquet [4], Fekete [8],
Ohtsuka [17] and Pólya, Szegő [18]. First we start with the “degree n”
versions, whose convergence will be proved later.
Definition. Let H ⊂ X be fixed. We define the nth diameter of H as
Dn(H) := inf
w1,...,wn∈H
(n− 1)n
1≤j 6=l≤n
k(wj , wl)
; (2)
or, if the kernel is symmetric
Dn(H) = inf
w1,...,wn∈H
(n− 1)n
1≤i<j≤n
k(wi, wj)
If H is compact, then due to the fact that k is l.s.c., Dn(H) is
attained for some points w1, . . . , wn ∈ H, which are then called n-Fekete
points. We will also use the term approximate n-Fekete points with the
obvious meaning. Note also that for a finite set H, #H = m and
n > m, there is always a point from the diagonal ∆ = {(x, x) : x ∈ H}
in the definition of Dn(H). This possibility is completely excluded by
Choquet in [4], thus allowing only infinite sets.
Definition. For an arbitrary H ⊂ X the nth Chebyshev constant of
H is defined as
Mn(H) := sup
w1,...,wn∈H
k(x,wk)
We are going to show that both nth diameters and nth Chebyshev
constants converge from below to some number (or +∞), which are
respectively called the transfinite diameter D(H) and the Chebyshev
constant M(H). The aim of this paper is to relate these quantities as
well as the Wiener energy of a set.
2. Chebyshev constant and transfinite diameter
We define the Chebyshev constant and the transfinite diameter of a
set H ⊂ X and proceed analogously to the classical case. It turns out,
though not very surprisingly, that in general the equality of these two
quantities does not hold.
First, we prove the convergence of nth diameters and nth Chebyshev
constants. This is for both cases classical, we give the proof only for
the sake of completeness, see, e.g., Carleson [3], Choquet [4], Fekete [8],
Ohtsuka [17] and Pólya, Szegő [18].
PROPOSITION 1. The sequence of nth diameters is monotonically
increasing.
Proof. Choose x1, . . . , xn ∈ H arbitrarily. If we leave out any index
i = 1, 2, . . . , n, then for the remaining n − 1 points we obtain by the
definition of Dn−1(H) that
(n− 1)(n − 2)
1 ≤ j 6= l ≤ n
j 6= i, l 6= i
k(xj , xl) ≥ Dn−1(H).
After summing up for i = 1, 2, . . . , n this yields
1≤j 6=l≤n
k(xj , xl) ≥ n ·Dn−1(H),
for each term k(xj , xl) occurs exactly n − 2 times. Now taking the
infimum for all possible x1, . . . , xn ∈ H, we obtain n · Dn(H) ≥ n ·
Dn−1(H), hence the assertion.
The limit D(H) := limn→∞Dn(H) is the transfinite diameter of H.
Similarly, the nth Chebyshev constants converge, too.
PROPOSITION 2. For any H ⊂ X, the Chebyshev constants Mn(H)
converge in the extended sense.
Proof. The sum of two log-polynomials, p(z) =
i=1 k(z, xi) with de-
gree n and q(z) =
j=1 k(z, yj) with degree m, is also a log-polynomial
with degree n+m. Therefore
(n+m)Mn+m ≥ nMn +mMm (3)
for all n,m follows at once. Should Mn(H) be infinity for some n,
then all succeeding terms Mn′(H), n
′ ≥ n are infinity as well, hence
the convergence is obvious. We assume now that Mn(H) is a finite
sequence. At this point, for the sake of completeness, we can repeat the
classical argument of Fekete [8].
Namely, let m,n be fixed integers. Then there exist l = l(n,m) and
r = r(n,m), 0 ≤ r < m nonnegative integers such that n = l ·m + r.
Iterating the previous inequality (3) we get
n ·Mn ≥ l
+ rMr = nMm + r(Mr −Mm).
Fixing now the value of m, the possible values of r remain bounded
by m, and the finitely many values of Mr −Mm’s are finite, too. Hence
dividing both sides by n, and taking lim infn→∞, we are led to
lim inf
Mn ≥ lim inf
Mr −Mm
= Mm .
This holds for any fixed m ∈ N, so taking lim supm→∞ on the right
hand side we obtain
lim inf
Mn ≥ lim sup
that is, the limit exists.
M(H) := limn→∞Mn(H) is called the Chebyshev constant of H.
In the following, we investigate the connection between the Chebyshev
constant M(H) and the transfinite diameter D(H).
THEOREM 3. Let k be a positive, symmetric kernel. For any n ∈ N
and H ⊂ X we have Dn(H) ≤ Mn(H), thus also D(H) ≤ M(H).
Proof. If Mn(H) = +∞, then the assertion is trivial. So assume
Mn(H) < +∞. By the quasi-monotonicity (see (3)) we have that for
all m ≤ n also Mm(H) is finite. We use this fact to recursively find
w1, . . . wn ∈ H such that k(wi, wj) < +∞ for all i < j ≤ n. At the
end we arrive at
1≤i<j≤n k(wi, wj) < +∞, hence Dn(H) < +∞. This
was our first aim to show, in the following this choice of the points
w1, . . . , wn will not play any role. Instead, for an arbitrarily fixed ε > 0,
we take, as we may, an “approximate n-Fekete point system” w1, . . . , wn
(n− 1)n
1≤i 6=j≤n
k(wi, wj) < Dn + ε. (4)
For any x ∈ H the points x,w1, . . . , wn form a point system of n + 1
points, so by the definition of Dn+1 we have
k(x,wi) +
1≤i 6=j≤n
k(wi, wj) ≥ n(n+ 1)Dn+1 ≥ n(n+ 1)Dn,
using also the monotonicity of the sequence Dn. This together with
(4) lead to
pn(x) :=
k(x,wi) ≥
n(n+ 1)
n(n− 1)
Dn + ε
Taking infimum of the left hand side for x ∈ H we obtain
pn(x) ≥ nDn −
n(n− 1)ε
By the very definition of the nth Chebyshev constant, n · Mn ≥
infx∈H pn(x) holds, hence Mn ≥ Dn − (n− 1)ε/2 follows. As this holds
for all ε > 0, we conclude Mn ≥ Dn.
Later we will show that, unlike the classical case of C, the strict
inequality D < M is well possible.
3. Transfinite diameter and energy
We study the connection between the energy w and the transfinite
diameter D. Without assuming the maximum principle we can prove
the equivalence of these two quantities for compact sets. This result
can actually be found in a note of Choquet [4]. There is however a
slight difference to the definitions of Choquet in [4]. There the diagonal
was completely excluded from the definition of D, that is the infimum
in (2) is taken over wi 6= wj, i 6= j and not for systems of arbitrary
wj’s . This means, among others, that in [4] the transfinite diameter is
only defined for infinite sets. The other assumption of Choquet is that
the kernel is infinite on the diagonal. This is completely the contrary
to what we assume in Theorem 8. Indeed, with our definitions of the
transfinite diameter one can even prove equality for arbitrary sets if
the kernel is finite-valued.
THEOREM 4. Let k be an arbitrary kernel and H ⊂ X be any set.
Then D(H) ≤ w(H).
Proof. Let µ ∈ M1(H) be arbitrary, and define ν :=
j=1 µ the
product measure on the product space Xn. We can assume that the
kernel is positive because supp µ, and hence supp ν, is compact so we
can add a constant to k such that it will be positive on these supports.
Consider the following lower semicontinuous functions g and h on Xn
g : (x1, . . . , xn) 7→ Dn(H)
:= inf
(w1,...,wn)∈Xn
n(n−1)
1≤i 6=j≤n
k(wi, wj)
h : (x1, . . . , xn) 7→
n(n−1)
1≤i 6=j≤n
k(xi, xj).
Since 0 ≤ g ≤ h, by the definition of the upper integral the following
holds true
Dn(H) ≤
n(n− 1)
1≤i 6=j≤n
k(xi, xj) dν(x1, . . . , xn)
n(n− 1)
1≤i 6=j≤n
k(xi, xj) dµ(xi) dµ(xj) = W (µ).
Taking infimum in µ yields Dn(H) ≤ w(H), hence also D(H) ≤ w(H).
To establish the converse inequality we need a compactness as-
sumption. With the slightly different terminology, Choquet proves the
following for kernels being +∞ on the diagonal ∆. The arguments there
are very similar, except that the diagonal doesn’t have to be taken care
of in [4]. We give a detailed proof.
PROPOSITION 5 (Choquet [4]). For an arbitrary kernel function k
the inequality D(K) ≥ w(K) holds for all K ⊆ X compact sets.
Proof. First of all the l.s.c. function k attains its infimum on the
compact set K × K. So by shifting k up we can assume that it is
positive, and the validity of the desired inequality is not influenced by
this.
If D(K) = +∞, then by Theorem 4 we have w(K) = +∞, thus
the assertion follows. Assume therefore D(K) < +∞, and let n ∈ N,
ε > 0 be fixed. Let us choose a Fekete point system w1, . . . , wn from
K. Put µ := µn := 1/n
i=1 δwi where δwi are the Dirac measures at
the points wi, i = 1, . . . , n. For a continuous function 0 ≤ h ≤ k with
compact support, we have
h dµ dµ =
i,j=1
h(wi, wj)
h(wi, wi) +
i,j=1
h(wi, wj)
h(wi, wi) +
i,j=1
k(wi, wj)
i,j=1
k(wi, wj)
Dn(K) ≤
+D(K)
using, in the last step, also the monotonicity of the sequenceDn (Propo-
sition 1). In fact, we obtain for n ≥ N = N(‖h‖, ε) the inequality
h dµ dµ ≤ D + ε. (5)
It is known, essentially by the Banach-Alaoglu Theorem, that for a
compact set K the measures of M1(K) form a weak
∗-compact subset
of M, hence there is a cluster point ν ∈ M1(K) of the set MN :=
{µn : n ≥ N} ⊂ M1(K). Let {να}α∈I ⊆ MN be a net converging to
ν. Recall that να⊗να weak
∗-converges to ν⊗ν. We give the proof. For
a function g ∈ C(K ×K), g(x, y) = g1(x) · g2(y) it is obvious that
g dνα dνα →
g dν dν. (6)
The set A of such product-decomposable functions g(x, y) = g1(x)g2(y)
is a subalgebra of C(K ×K), which also separates X ×X, since it is
already coordinatewise separating. By the Stone–Weierstraß theorem
A is dense in C(K ×K). From this, using also that the family MN of
measures is norm-bounded, we immediately get the weak∗-convergence
(6). All these imply
h dν dν ≤ D(K) + ε,
w(K) ≤ W (ν) :=
kdνdν = sup
0 ≤ h ≤ k
h ∈ Cc(X ×X)
hdνdν ≤ D(K)+ε,
for all ε > 0. This shows w(K) ≤ D(K).
COROLLARY 6 (Choquet [4]). For arbitrary kernel k and compact set
K ⊂ X, the equality D(K) = w(K) holds.
Proof. By compactness we can shift k up and therefore assume it is
positive. Then we apply Theorem 4 and Proposition 5.
The assumptions of Choquet [4] are the compactness of the set plus the
property that the kernel is +∞ on the diagonal (besides it is continuous
in the extended sense). This ensures, loosely speaking, that for a set K
of finite energy an energy minimising measure µ (i.e., for whichW (µ) =
w(K)) is necessarily non-atomic, moreover µ ⊗ µ is not concentrated
on the diagonal. Therefore to show equality of w with D, one has to
exclude the diagonal completely from the definition of the transfinite
diameter.
We however allow a larger set of choices for the point system in the
definition of D. Indeed, we allow Fekete points to coincide, and this also
makes it possible to define the transfinite diameter of finite sets. With
this setup the inequality D ≤ w is only simpler than in the case handled
by Choquet. Whereas, however surprisingly, the equality D(K) = w(K)
is still true for compact sets K but without the assumption on the
diagonal values of the kernel.
We will see in §5 Example 13 that even assuming the maximum prin-
ciple but lacking the compactness allows the strict inequality D < w.
This phenomena however may exist only in case of unbounded kernels,
as we will see below. In fact, we show that if the kernel is finite on the
diagonal, thenD = w holds for arbitrary sets. For this purpose, we need
the following technical lemma, which shows certain inner regularity
properties of D and is also interesting in itself.
LEMMA 7. Assume that the kernel k is positive and finite on the
diagonal, i.e., k(x, x) < +∞ for all x ∈ X. Then for an arbitrary
H ⊂ X we have
D(H) = inf
K ⊂ H
K compact
D(K) = inf
W ⊂ H
#W < ∞
D(W ). (7)
Proof. The inequality infD(K) ≤ infD(W ) is clear. For H ⊇ K the
inequality D(H) ≤ D(K) is obvious, so we can assume D(H) < +∞.
For ε > 0 let W = {w1, . . . , wn} be an approximate n-Fekete point set
of H satisfying (4). Then
D(W ) = lim
Dmn(W ) ≤ lim
mn(mn− 1)
1≤i′ 6=j′≤mn
k(wi′ , wj′),
where
wi′ :=
. . .
′ = i+ rn, r = 0, . . . ,m− 1
. . .
Set C := max{k(x, x) : x ∈ W}. So we find
D(W ) ≤ lim
mn(mn−1)
1≤i 6=j≤n
k(wi, wj) +
mn(mn−1)
1≤i≤n
k(wi, wi)
1≤i 6=j≤n
k(wi, wj) lim
mn(mn−1)
+ Cn lim
mn(mn−1)
1≤i 6=j≤n
k(wi, wj) ≤
(Dn(H) + ε) ≤ D(H) + ε.
This being true for all ε > 0, taking infimum we finally obtain
W ⊂ H
#W < ∞
D(W ) ≤ D(H).
Clearly, if k(x, x) = +∞ for all x ∈ W with a finite set #W = n,
then for all m > n we have Dm(W ) = +∞. Thus in particular for
kernels with k : ∆ → {+∞}, the above can not hold in general, at least
as regards the last part with finite subsets.
Now, completely contrary to Choquet [4] we assume that the kernel
is finite on the diagonal and prove D = w for any set. Hence an example
of D < w (see §5 Example 13) must assume k(x, x) = +∞ at least for
some point x.
THEOREM 8. Assume that the kernel k is positive and is finite on
the diagonal, that is k(x, x) < +∞ for all x ∈ X. Then for arbitrary
sets H ⊂ X, the equality D(H) = w(H) holds.
Proof. By Theorem 4 we have D(H) ≤ w(H). Hence there is nothing
to prove, if D(H) = +∞. Assume D(H) < +∞, and let ε > 0 be
arbitrary. By Lemma 7 we have for some n ∈ N a finite set W =
{w1, w2 . . . , wn} with D(H) + ε ≥ D(W ). In view of Proposition 5
we have D(W ) ≥ w(W ), and by monotonicity also w(W ) ≥ w(H). It
follows that D(H) + ε ≥ w(H) for all ε > 0, hence also the “≥” part
of the assertion follows.
4. Energy and Chebyshev constant
To investigate the relationship between the energy and the Cheby-
shev constant the following general version of Frostman’s Equilibrium
Theorem [9, Theorem 2.4] is fundamental for us.
THEOREM 9 (Fuglede). Let k be a positive, symmetric kernel and
K ⊂ X be a compact set such that w(K) < +∞. Every µ which
has minimal energy (µ ∈ M1(K),W (µ) = w(K)) satisfy the following
properties
Uµ(x) ≥ w(K) for nearly every1 x ∈ K,
Uµ(x) ≤ w(K) for every x ∈ supp µ,
Uµ(x) = w(K) for µ-almost every x ∈ X.
Moreover, if the kernel is continuous, then
Uµ(x) ≥ w(K) for every x ∈ K.
THEOREM 10. Let H ⊂ X be arbitrary. Assume that the kernel k is
positive, symmetric and satisfies the maximum principle. Then we have
Mn(H) ≤ w(H) for all n ∈ N, whence also M(H) ≤ w(H) holds true.
Proof. Let n ∈ N be arbitrary. First let K be any compact set.
We can assume w(K) < +∞, since otherwise the inequality holds
irrespective of the value of Mn(K). Consider now an energy-minimising
measure νK of K, whose existence is assured by the lower semicontinu-
ity of µ 7→
k dµ dµ and the compactness of M1(K), see [9, Theorem
2.3].
By the Frostman-Fuglede theorem (Theorem 9) we have UνK (x) ≤
w(K) for all x ∈ supp νK , so V (νK) ≤ w(K), and by the maximum
principle even
UνK (x) ≤ w(K) for all x ∈ X.
1 The set A of exceptional points is small in the sense w(A) = +∞.
Then for all w1, . . . , wn ∈ K
k(x,wj) ≤
k(x,wj) dνK(x) ≤ w(K) .
Taking supremum for w1, . . . , wn ∈ K, we obtain
w1,...,wn∈K
k(x,wj) ≤ w(K).
So Mn(K) ≤ w(K) for all n ∈ N.
Next let H ⊂ X be arbitrary. In view of the last form of (1), for all
ε > 0 there exists a measure µ ∈ M1(H), compactly supported in H,
with w(µ) ≤ w(H) + ε. Let W = {w1, . . . , wn} ⊂ H be arbitrary and
define pW (x) :=
i k(x,wi).
Consider the compact set K := W ∪ supp µ ⊂ H. By definition of
the energy, supp µ ⊂ K implies w(K) ≤ w(µ), hence w(K) ≤ w(H) +
ε. Combining this with the above, we come to Mn(K) ≤ w(H) + ε.
Since W ⊂ K, by definition of Mn(K) we also have
pW (x) ≤ Mn(K). (8)
The left hand side does not increase, if we extend the inf over the
whole of H, and the right hand side is already estimated from above
by w(H) + ε. Thus (8) leads to
pW (x) ≤ w(H) + ε.
This holds for all possible choices of W = {w1, . . . , wn} ⊂ H, hence is
true also for the sup of the left hand side. By definition of Mn(H) this
gives exactly Mn(H) ≤ w(H) + ε, which shows even Mn(H) ≤ w(H).
Remark. In [6] it is proved that M(H) = q(H), where
q(H) = inf
µ∈M1(H)
Uµ(x).
The idea behind is a minimax theorem, see also [16, 17]. Trivially
w(H) ≤ q(H) ≤ u(H). So the maximum principle implies M(H) =
w(H) = q(H) = u(H).
5. Summary of the Results. Examples
In this section, we put together the previous results, thus proving the
equality of the three quantities being studied, under the assumption
of the maximum principle for the kernel. Further, via several instruc-
tive examples we investigate the necessity of our assumptions and the
sharpness of the results.
THEOREM 11. Assume that the kernel k is positive, symmetric and
satisfies the maximum principle. Let K ⊂ X be any compact set. Then
the transfinite diameter, the Chebyshev constant and the energy of K
coincide:
D(K) = M(K) = w(K).
Proof. We presented a cyclic proof above, consisting of M ≥ D
(Theorem 3), D ≥ w (Proposition 5) and finally w ≥ M (Theorem 10).
THEOREM 12. Assume that the kernel k is positive, finite and sat-
isfies the maximum principle. For an arbitrary subset H ⊂ X the
transfinite diameter, the Chebyshev constant and the energy of H co-
incide:
D(H) = M(H) = w(H).
Proof. By finiteness D = w, due to Theorem 8. This with D ≤ M
and M ≤ w (Theorems 3 and 10) proves the assertion.
Remark. In the above theorem, logically it would suffice to assume
that the kernel be finite only on the diagonal. But if this was the case,
the maximum principle would then immediately imply the finiteness of
the kernel everywhere.
Let us now discuss how sharp the results of the preceding sections
are. In the first example we show that, if we drop the assumption of
compactness the assertions of Theorem 3, Theorem 4 and Theorem 10
are in general the strongest possible.
Example 13. Let X = N ∪ {0} endowed with discrete topology and
the kernel
k(n,m) :=
+∞ if n = m,
0 if 0 6= n 6= m 6= 0,
1 otherwise.
The kernel is symmetric, l.s.c. and has the maximum principle. This
latter can be seen by noticing that for a probability measure µ ∈ M1(X)
the potential is +∞ on the support of µ. Indeed, since X is countable,
all measures µ ∈ M1(X) are necessarily atomic, and if for some point
ℓ ∈ X we have µ({ℓ}) > 0, then by definition
X k(x, y) dµ(y) = +∞.
We calculate the studied quantities of the set H = X (also as in all
the examples below). Since the kernel is positive, Dn ≥ 0. On the other
hand, choosing w1 := 1, . . . , wn := n, all the values k(wi, wj) will be
exactly 0, so it follows that Dn = 0, n = 1, 2, . . ., and hence D = 0.
The Chebyshev constant can be estimated from below, if we compute
the infimum of a suitably chosen log-polynomial. Consider the log-
polynomial p(x) with all zeros placed at 0, that is with w1 = . . . =
wn = 0. Then the log-polynomial p(x) is
j k(x,wj) = n · k(x, 0). If
x 6= 0, we have p(x) = n, which gives M ≥ 1. The upper estimate of
M is also easy: suppose that in the system w1, . . . wn there are exactly
m points being equal to 0 (say the first m). Then
p(x) =
+∞ x = w1, . . . , wn,
n x = 0, x 6= w1, . . . , wn (if m = 0)
m x 6= 0, x 6= w1, . . . , wn
This shows for the corresponding log-polynomial inf p(x) = m, so
Mn ≤ 1, whence M = 1.
The energy is computed easily. Using the above reasoning on the
maximum principle, we see W (µ) = +∞ for any µ ∈ M1(X), hence
w(X) = +∞.
Thus we have an example of
+∞ = w > M > D = 0.
The above example completes the case of the kernel with maximum
principle. Let us now drop this assumption and look at what can
happen.
Example 14. Let X := {−1, 0, 1} be endowed with the discrete topol-
ogy. We define the kernel by
k(x, y) :=
2 if 0 ≤ |x− y| < 2,
0 if 2 = |x− y|.
Then k is continuous and bounded on X×X. This, in any case, implies
D = w by Theorem 8. Note that k does not satisfy the maximum
principle. To see this, consider, e.g., the measure µ = 1
δ1. Then
for the potential Uµ one has Uµ(1) = Uµ(−1) = 1 and Uµ(0) = 2,
which shows the failure of the maximum principle.
To estimate the nth diameter from above, let us consider the point
system {wi} of n = 2m points with m points falling at −1 and m points
falling at 1, while no points being placed at 0. Then by definition of
Dn := Dn(X) one can write
n(n− 1)
Dn ≤ 2
· 2 +m2 · 0 =
Applying this estimate for all even n = 2m as n → ∞, it follows that
D = lim
Dn ≤ 1. (9)
Next we estimate the Chebyshev constants from below by computing
the infimum of some special log-polynomials. For pn(x) = k(x, 0) one
has pn(x) ≡ 2 = inf pn. We thus find Mn ≥ 2 and M ≥ 2, showing
M > D, as desired.
Example 15. Let X := N with the discrete topology. Then X is
a locally compact Hausdorff space, and all functions are continuous,
hence l.s.c. on X. Let k : X ×X → [0,+∞] be defined as
k(n,m) :=
+∞ if n = m,
2−n−m if n 6= m.
Clearly k is an admissible kernel function. For the energy we have
again w(X) = +∞, see Example 13.
On the other hand let n ∈ N be any fixed number, and compute the
nth diameter Dn(X). Clearly if we choose wj := m+ j, with m a given
(large) number to be chosen, then we get
Dn(H) ≤
(n− 1)n
1≤i 6=j≤n
2−i−j−2m ≤
(n− 1)n
≤ 2−2m ,
hence we find that the nth diameter is Dn(X) = 0, so D(X) = 0,
too. For any log-polynomial p(x) we have inf p(x) = limx→∞ p(x) = 0,
hence M(X) = 0. That is we have D(X) = M(X) = 0 < w(X) = +∞.
The example shows how important the diagonal, excluded in the
definition of D but taken into account in w, may become for particular
cases. We can even modify the above example to get finite energy.
Example 16. Let X := (0, 1], equipped with the usual topology, and
let xn = 1/n. We take now
k(x, y) :=
+∞ if x = y,
2−n−m if x = xn and y = xm (xn 6= xm),
− log |x− y| otherwise
Compared to the l.s.c. logarithmic kernel, this k assumes different,
smaller values at the relatively closed set of points {(xn, xm) : n 6=
m} ⊂ X ×X only, hence it is also l.s.c. and thus admissible as kernel.
If a measure µ ∈ M1(X) has any atom, say if for some point z ∈ X
we have µ({z}) > 0, then by definition
X k(x, y) dµ(y) = +∞, hence
also w(µ) = +∞. Since for all µ ∈ M1(X) with any atomic component
w(µ) = +∞, we find that for the set H := X we have
w(H) := inf
µ∈M1(H)
w(µ) = inf
µ∈M1(H)
µ not atomic
w(µ).
But for measures without atoms, the countable set of the points xn are
just of measure zero, hence the energy equals to the energy with respect
to the logarithmic kernel. Thus we conclude w(H) = e−cap(H) = e−1/4,
as cap((0, 1]) = 1/4 is well-known.
On the other hand if n ∈ N is any fixed number, we can compute
the nth diameter Dn(H) exactly as above in Example 15. Hence it is
easy to see that Dn(H) = 0, whence also D(H) = 0. Similarly, we find
M(H) = 0, too.
This example shows that even in case w(H) < +∞ we can have
w(H) > D(H) = M(H).
6. Average distance number and the maximum principle
In the previous section, we showed the equality of the Chebyshev con-
stant M and the transfinite diameter D, using essentially elementary
inequalities and the only theoretically deeper ingredient, the assump-
tion of the maximum principle. We have also seen examples showing
that the lack of the maximum principle for the kernel allows strict
inequality between M and D. These observations certify to the rel-
evance of this principle in our investigations. Indeed, in this section
we show the necessity of the maximum principle in case of continuous
kernels for having M(K) = D(K) for all compact sets K. We need
some preparation first.
Recall from the introduction the notion of the average distance (or
rendezvous) number. Actuyally, a more general assertion than there can
be stated, see Stadje [21] or [6]. For a compact connected set K and a
continuous, symmetric kernel k, the average distance number r(K) is
the uniquely existing number with the property that for all probability
measures supported in K there is a point x ∈ K with
Uµ(x) =
k(x, y) dµ(y) = r(K).
This can be even further generalised by dropping the connectedness,
see Thomassen [22] and [6]. Even for not necessarily connected but
compact spacesK with symmetric, continuous kernel k there is a unique
number r(K) with the property that whenever a probability measure
on K and a positive ε are given, there are points x1, x2 ∈ K such that
Uµ(x1)− ε ≤ r(K) ≤ U
µ(x2) + ε.
This number is called the (weak) average distance number, and is par-
ticularly easy to calculate, when a probability measure with constant
potential is available. Such a measure µ is called then an invariant
measure. In this case the average distance number r(K) is trivially just
the constant value of the potential Uµ, see Morris, Nicholas [14] or [7].
It was proved in [7] that one always has M(K) = r(K), so once we
have an invariant measure, then the Chebyshev constant is again easy
to determine.
Also the Wiener energy w(K) has connection to invariant measures,
as shown by the following result, which is a simplified version of a more
general statement from [7], see also Wolf [23].
THEOREM 17. Let ∅ 6= K ⊂ X be a compact set and k be a continu-
ous, symmetric kernel. Then we have
r(K) ≥ w(K).
Furthermore, if r(K) = w(K), then there exists an invariant measure
in M1(K).
As mentioned above, we have r(K) = M(K), so the inequality r(K) ≥
w(K) in the first assertion of the above theorem is also the conse-
quence of Theorems 3 and 8. For the proof of the second assertion
one can use the Frostman-Fuglede Equilibrium Theorem 9 with the
obvious observation that “nearly every” in this context means indeed
“every”. Actually any probability measure µ ∈ M1(K) which minimises
ν 7→ supK U
ν is an invariant measure and its potential is constant
M(K), see [7, Thm. 5.2] (such measures undoubtedly exist because of
compactness of M1(K)). Henceforth we will indifferently use the terms
energy minimising or invariant for expressing this property of measures.
THEOREM 18. Suppose that the kernel k is symmetric and continu-
ous. If M(K) = D(K) for all compact sets K ⊆ X, then the kernel has
the maximum principle.
Proof. Recall from Corollary 6 that D(K) = w(K) for all K ⊆ X
compact. So we can use Theorem 17 all over in the following arguments.
We first prove the assertion in the case when X is a finite set. The
proof is by induction on n = #X. For n = 1 the assertion is trivial.
Let now #X = 2, X = {a, b}. Assume without loss of generality that
k(a, a) ≤ k(b, b). Then we only have to prove that for µ = δa the
maximum principle, i.e., the inequality k(a, b) ≤ k(a, a) holds. To see
this we calculate M(X) and D(X). We certainly have D(X) ≤ k(a, a).
On the other hand for an energy minimising probability measure νp :=
pδa + (1 − p)δb on X we know that its potential is constant over X,
hence
pk(a, a) + (1− p)k(b, a) = pk(a, b) + (1− p)k(b, b)
= M(X) = D(X) ≤ k(a, a).
Here if p = 1, then k(a, a) = k(a, b). If p < 1, then we can write
(1− p)k(b, a) ≤ (1− p)k(a, a), hence k(b, a) ≤ k(a, a),
so the maximum principle holds.
Assume now that the assertion is true for all sets with at most n
elements and for all kernels, and let #X = n + 1. For a probability
measure µ on X we have to prove supx∈X U
µ(x) = supx∈ supp µ U
µ(x).
If supp µ = X, then there is nothing to prove. Similarly, if there are
two distinct points x1 6= x2, x1, x2 ∈ X \ supp µ, then by the induction
hypothesis we have
x∈X\{x1}
Uµ(x) = sup
x∈ supp µ
Uµ(x) = sup
x∈X\{x2}
Uµ(x).
So for a probability measure µ defying the maximum principle we
must have # supp µ = n, say supp µ = X \ {xn+1}; let µ be such a
measure. Set K = supp µ and let µ′ be an invariant measure on K.
We claim that all such measures µ′ are also violating the maximum
principle. If µ = µ′, we are done. Assume µ 6= µ′ and consider the
linear combinations µt := tµ+(1− t)µ
′. There is a τ > 1, for which µτ
is still a probability measure and supp µτ ( supp µ. By the inductive
hypothesis (as # supp µτ < n) we have U
µτ (xn+1) ≤ U
µτ (a) for some
a ∈ supp µτ . We also know that U
µ(xn+1) = U
µ1(xn+1) > U
µ1(a).
Hence for the linear function Φ(t) := Uµt(xn+1) − U
µt(a) we have
Φ(1) > 0 and also Φ(τ) ≤ 0 (τ > 1). This yields Φ(0) > 0, i.e.,
(xn+1) = U
µ0(xn+1) > U
µ0(a) = Uµ
(y) for all y ∈ K. We have
therefore shown that all energy minimising (invariant) measures on K
must defy the maximum principle.
Let now ν be an invariant measure on X. We have
M(X) = Uν(y) = sup
Uν(x) = D(X)
≤ D(K) = sup
(x) = Uµ
(z) < Uµ
(xn+1)
for all y ∈ X, z ∈ K. Thus we can conclude Uν(y) ≤ Uµ
(y) for all
y ∈ X and even “<” for y = xn+1. Integrating with respect to ν would
yield
k dν dν = M(X) <
k dµ′ dν =
k dν dµ′ = M(X),
hence a contradiction, unless ν({xn+1}) = 0. If ν({xn+1}) = 0 held,
then ν would be an energy minimising measure on K. This is because
obviously supp ν ⊆ K holds, and the potential of ν is constant M(X)
over K, so
M(X) =
k dν dµ′ =
k dµ′ dν = M(K) holds.
As we saw above, then ν would not satisfy the maximum principle,
a contradiction again, since the potential of ν is constant on X. The
proof of the case of finite X is complete.
We turn now to the general case of X being a locally compact space
with continuous kernel. Let µ be a compactly supported probability
measure on X and y 6∈ supp µ. Set K = supp µ and note that both
M1(K) ∋ ν 7→ supK U
ν and ν 7→ Uν(y) are continuous mappings with
respect to the weak∗-topology on M1(K). If supK U
µ < Uµ(y) were
true, we could therefore find, by a standard approximation argument,
see for example [6, Lemma 3.8], a finitely supported probability measure
µ′ on K for which
x∈ supp µ′
(x) ≤ sup
(x) < Uµ
This is nevertheless impossible by the first part of the proof, thus the
assertion of the theorem follows.
Acknowledgement
The authors are deeply indebted to Szilárd Révész for his insightful
suggestions and for the motivating discussions.
References
1. Anagnostopoulos, V. and Sz. Gy. Révész: 2006, ‘Polarization constants for
products of linear functionals over R2 and C2 and Chebyshev constants of
the unit sphere’. Publ. Math. Debrecen 68(1–2), 75–83.
2. Bourbaki, N.: 1965, Intégration, Éléments de Mathématique XIII., Vol. 1175 of
Actualités Sci. Ind. Paris: Hermann, 2nd edition.
3. Carleson, L.: 1967, Selected Problems on Exceptional Sets, Vol. 13 of Van
Nostrand Mathematical Studies. D. Van Nostrand Co., Inc.
4. Choquet, G.: 1958/59, ‘Diamètre transfini et comparaison de diverses ca-
pacités’. Technical report, Faculté des Sciences de Paris.
5. Farkas, B. and Sz. Gy. Révész: 2005, ‘Rendezvous numbers in normed spaces’.
Bull. Austr. Math. Soc. 72, 423–440.
6. Farkas, B. and Sz. Gy. Révész: 2006a, ‘Potential theoretic approach to
rendezvous numbers’. Monatshefte Math 148, 309–331.
7. Farkas, B. and Sz. Gy. Révész: 2006b, ‘Rendezvous numbers of metric spaces
– a potential theoretic approach’. Arch. Math. (Basel) 86, 268–281.
8. Fekete, M.: 1923, ‘Über die Verteilung der Wurzeln bei gewissen algebraischen
Gleichungen mit ganzahligen Koeffizienten’. Math. Z. 17, 228–249.
9. Fuglede, B.: 1960, ‘On the theory of potentials in locally compact spaces’. Acta
Math. 103, 139–215.
10. Fuglede, B.: 1965, ‘Le théorème du minimax et la théorie fine du potentiel’.
Ann Inst. Fourier 15, 65–87.
11. Gross, O.: 1964, ‘The rendezvous value of a metric space’. Ann. of Math. Stud.
52, 49–53.
12. Landkof, N. S.: 1972, Foundations of modern potential theory, Vol. 180 of
Die Grundlehren der mathematischen Wissenschaften. New York, Heidelberg:
Springer.
13. Mhaskar, H. N. and E. B. Saff: 1992, ‘Weighted analogues of capacity,
transfinite diameter and Chebyshev constants’. Constr. Approx. 8(1), 105–124.
14. Morris, S. A. and P. Nickolas: 1983, ‘On the average distance property of
compact connected metric spaces’. Arch. Math. 40, 459–463.
15. Ohtsuka, M.: 1961, ‘On potentials in locally compact spaces’. J. Sci. Hiroshima
Univ. ser A 1, 135–352.
16. Ohtsuka, M.: 1965, ‘An application of the minimax theorem to the theory of
capacity’. J. Sci. Hiroshima Univ. ser A 29, 217–221.
17. Ohtsuka, M.: 1967, ‘On various definitions of capacity and related notions’.
Nagoya Math. J. 30, 121–127.
18. Pólya, Gy. and G. Szegő: 1931, ‘Über den transfiniten Durchmesser (Ka-
pazitätskonstante) von ebenen und räumlichen Punktmengen’. J. Reine Angew.
Math. 165, 4–49.
19. Révész, Sz. Gy. and Y. Sarantopoulos: 2004, ‘Plank problems, polarization,
and Chebyshev constants’. J. Korean Math. Soc. 41(1), 157–174.
20. Saff, E. B. and V. Totik: 1997, Logarithmic potentials with external fields, Vol.
316 of Grundlehren der Mathematischen Wissenschaften. Springer, Berlin.
21. Stadje, W.: 1981, ‘A property of compact, connected spaces’. Arch. Math. 36,
275–280.
22. Thomassen, C.: 2000, ‘The rendezvous number of a symmetric matrix and a
compact connected metric space’. Amer. Math. Monthly 107(2), 163–166.
23. Wolf, R.: 1997, ‘On the average distance property and certain energy integrals’.
Ark. Mat. 35, 387–400.
24. Zaharjuta, V. P.: 1975, ‘Transfinite diameter, Chebishev constants, and
capacity for compacta in Cn’. Math. USSR Sbornik 25(3), 350–364.
|
0704.0860 | Availability assessment of SunOS/Solaris Unix Systems based on Syslogd
and wtmpx logfiles : a case study | untitled
Availability Assessment of SunOS/Solaris Unix Systems
based on Syslogd and wtmpx log files: A case study
Cristina Simache and Mohamed Kaâniche
LAAS-CNRS — 7 Avenue du Colonel Roche
31077 Toulouse Cedex 4 — France
[email protected]
Abstract
This paper presents a measurement-based availability
assessment study using field data collected during a 4-
year period from 373 SunOS/Solaris Unix workstations
and servers interconnected through a local area
network. We focus on the estimation of machine
uptimes, downtimes and availability based on the
identification of failures that caused total service loss.
Data corresponds to syslogd event logs that contain a
large amount of information about the normal activity
of the studied systems as well as their behavior in the
presence of failures. It is widely recognized that the
information contained in such event logs might be
incomplete or imperfect. The solution investigated in
this paper to address this problem is based on the use
of auxiliary sources of data obtained from wtmpx files
maintained by the SunOS/Solaris Unix operating
system. The results obtained suggest that the combined
use of wtmpx and syslogd log files provides more
complete information on the state of the target systems
that is useful to provide availability estimations that
better reflect reality.
1. Introduction
Event logs have been widely used to analyze the
error/failure behavior of computer-based systems and
to estimate their dependability. Event logs include a
large amount of information about the occurrence of
various types of events that are collected concurrently
with normal system operation, and as such reflect
actual workload and usage. Some of the events are
informational and are issued from the normal activity
of the target systems, whereas others are recorded when
errors and failures affect local or distributed resources,
or are related to system shutdown and start-up. The
latter events are particularly useful for dependability
analysis.
Computer system dependability analysis based on
event logs has been the focus of several published
papers [1, 2, 4, 5, 7, 8, 9]. Various types of systems
have been studied (Tandem, VAX/VMS, Unix,
Windows NT, Windows 2000, etc.) including
mainframes and largely deployed commercial systems.
The issues addressed in these studies cover a large
spectrum, including the development of techniques and
methodologies for the extraction of relevant
information from the event logs, the identification of
error patterns, their causes and their effects, and the
statistical assessment of dependability measures such
as failure and recovery rates, reliability and availability.
It is widely recognized that such event log based
dependability analyses provide useful feedback to
software and system designers. Nevertheless, it is
important to note that the results obtained are
intimately related to the quality and the accuracy of the
data recorded in the logs. The study reported in [1]
points out various problems that might affect the data
included in the event logs and make incorrect
conclusions likely, considering as an example the
VAX/VMS system. Thus, extreme care is needed to
identify deficiencies in the data and to avoid that they
lead to incorrect conclusions.
In this paper, we show that similar problems can be
observed in the event logs maintained by the
SunOS/Solaris Unix operating system, and we present
a novel approach that is aimed to address such
problems and to improve the dependability estimates
based on such event logs. These results are illustrated
using field data collected during a 4-year period from
373 SunOS/Solaris Unix workstations and servers
interconnected through a LAN. The data corresponds to
event logs recorded via the syslog daemon. In
particular, we use var/adm/messages log files. We
focus on the evaluation of machine uptimes, downtimes
and availability based on the identification of failures
that caused a total service interruption of the machine.
In this study, we show that the consideration of the
information recorded in the var/adm/messages log
files only may lead to dependability estimations that do
not faithfully reflect reality due to incomplete or
imperfect data recorded in the corresponding logs. For
the estimation of these measures, we start with the
assumption that machine failures can be identified by
the last events recorded in the event log before the
machine goes down and then is rebooted. This
assumption was considered in the study reported in [3].
However, the validity of this assumption is
questionable in the following situations: 1) the machine
has a real activity between the last event logged and the
reboot without generating events in the logs, 2) the
time when the failure occurs is earlier than the
timestamp of the last event logged on the machine. To
address these problems and to obtain more realistic
estimations, we propose a solution based on utilization
of additional information obtained from wtmpx Unix
files, as well as data characterizing the state of the
machines included in the data collection that are
recorded at a regular basis during the data collection
procedure. The results clearly show that the combined
use of this additional information and syslogd log
files have a significant impact on the estimations.
To our knowledge, the approach discussed in this
paper and the corresponding results have not been
addressed in the previous studies published on the
exploitation of syslogd log files for the dependability
analysis of Unix based systems, including our paper
The rest of the paper is structured into 5 sections.
Section 2 describes the event logging mechanism in
Unix and the data collection procedure that we have
used in our study. Section 3 presents the dependability
measures that we have considered and discusses
different approaches and assumptions to estimate them
from the collected data. Section 4 presents some results
illustrating the benefits of the proposed approach, as
well as various statistics characterizing the
dependability of the Unix systems considered in our
study.
2. Event logging and data collection
For the Unix operating system, the event logging
mechanism is implemented by the syslog daemon
(denoted as syslogd). Running as a background
process, this daemon listens for the events generated by
different sources: kernel, system components (disk,
memory, network interfaces), daemons and
applications that are configured to communicate with
syslogd. These events inform about the normal
activity of the system as well as its behavior under the
occurrence of errors and failures including reboot and
shutdown events. The configuration file
/etc/syslog.conf specifies the destination of each
event received by syslogd, depending on its severity
level and its origin. The destination could be one or
several log files, the administration console or the
operator.
The events that are relevant to our study are
generally stored in the /var/adm/messages log file.
Each message stored in a log file refers to an event that
occurred on the system due to the local activity or its
interaction with other systems on the network. It
contains the following information: the date and time
of the event, the machine name on which the event is
logged and a description of the message. An example
of an event recorded in the log file is given below:
Mar 2 10:45:12 elgar automountd[124]:
server mahler not responding
The SunOS/Solaris Unix operating system limits the
size of the log files. Generally, only the log files
corresponding to the last 5 weeks of activity are kept. It
is necessary to set up a data collection strategy in order
to archive a large amount of data. This is essential to
obtain representative results for the dependability
measures characterizing the monitored systems.
In our study, we have included all the
SunOS/Solaris machines connected through the LAAS
local area network, excluding those used for
experimental testbeds or maintenance activities. We
have developed a data collection strategy to
automatically collect the /var/adm/messages log
files stored on these machines. This strategy takes into
account the frequent evolution of the network
configuration during the observation period in terms of
variation of the number of connected systems, updates
or changes of the operating system versions,
modification of software configurations, etc. A shell
script executed each week via the cron mechanism
implements the strategy and remotely copies the log
files from each system included in the study and
archives them on a dedicated machine. After each data
collection campaign, a text file (named DCSummary)
containing a summary of the data collection campaign
is created. This summary indicates the status of each
machine included in the campaign and how the
collection of the corresponding log file has been done.
For each machine, the status information reported in
the summary is one of the following:
• alive_OK: the machine is alive and the copy of its log
file succeeded;
• alive_KO: the machine is alive but the copy of its log
file failed. For this case, a description of the failure
symptom and cause is also included: shell problem,
connection ended by tiers, etc.
• no_answer: the machine did not answer to a ping
request before expiration of the default timeout
period.
The information included in the DCSummary file is
used to verify each data collection campaign and solve
the problems that may appear during the collection. It
is also useful to improve the accuracy of dependability
measures estimation (see Section 3.2). More detailed
information about the syslogd mechanism and the
data collection strategy are reported in [6].
3. Dependability measures estimation and
assumptions
Various types of dependability analyses can be
carried out based on the information contained in the
log files and several quantitative measures can be
considered to characterize the dependability of the
target machines: machine uptimes and downtimes,
reliability, availability, failure and recovery rates, etc.
In order to evaluate these measures, it is necessary to
identify from the log files the failure occurrences and
the corresponding service degradation durations. Such
task is tedious and requires the development of
heuristics and predefined failure criteria. An example
of such analysis is reported in [7].
In our study, we have focused on the availability
analysis of the individual machines included in the data
collection. In this context, we have considered machine
failures leading to a total interruption of the service
delivered to the users, followed by a reboot. The time
between the failure occurrence and the end of the
reboot corresponds to the total service interruption
period of the system. Apart from these periods, the
system is considered to be in the normal functioning
state where it delivers an appropriate service to the
users.
In order to evaluate the availability of the machines
included in the study, we need to estimate for each
machine the corresponding uptimes (denoted as UTi)
and downtimes (DTi), based on the information
recorded in the event logs. Each downtime value DTi
corresponds to the total service interruption period
associated to the i
failure. It is composed by the
service degradation period due to the failure occurrence
and the reboot period. Each uptime value corresponds
to the period between two successive downtimes.
Using the uptime and downtime estimates for each
machine j, we can evaluate the corresponding
availability (noted Aj) and the unavailability (noted
UAj). These measures are computed with the following
formulas:
UAj =� UTi ⁄ �(UTi +DTi) and UAj = 1 - UAj (1)
3.1. Machine uptimes and downtimes
estimation
The estimation of machine uptimes and downtimes
is carried out in two steps:
1) Identification of machine reboots and their
duration.
2) Identification of failures associated to each reboot
and of the corresponding service interruption
period.
To identify the occurrence of machine reboots and
their duration, we have developed an algorithm based
on the sequential parsing and matching of each event
recorded in the system log files to specific patterns or
sequences of patterns characterizing the occurrence of
reboots. Indeed, whereas some reboots can be explicitly
identified by a “reboot” or a “shutdown” event, many
others can be detected only by identifying the sequence
of the initialization events that are generated by the
system when it is restarted. The algorithm is described
in [4, 6]. It gives, for each reboot i identified in the
event logs and for each machine, the timestamp of the
reboot start (dateSBi), the timestamp of the reboot end
(dateEBi) and the associated service interruption
duration.
The identification of the timestamp of the failure
associated to each reboot and the corresponding service
interruption period is more problematic. In the study
reported in [3], it was assumed that the timestamp of
the last event recorded before the reboot (denoted as
dateEBRi) identifies the failure occurrence time. With
this assumption, each uptime UTi and downtime DTi
can be evaluated as follows:
UTi = dateEBRi – dateEBi-1 and
DTi = dateEBi - dateEBRi (2)
where i is the index of the current reboot, i-1 the index
of the previous reboot.
The consideration of EBR for the estimation of UTi
and DTi parameters may not be realistic in the
following situations (denoted as S1 and S2):
S1) The system could be in a normal functioning state
during a period of time between EBR and the
following reboot although it does not generate any
event into the log files during that period.
S2) The beginning of the service interruption period
for the users could be prior to the timestamp of
the EBR event. This happens for instance when a
critical failure affects the machine in such a way
that it becomes completely unusable to the users,
without preventing the event logging mechanisms
from recording some messages into the log files.
A careful analysis of the data collected during our
study revealed that the above situations are common.
To address this problem and to improve downtime and
uptime estimation accuracy, it is necessary to use
auxiliary data that provides complementary information
on the activity of the target machines.
In this paper, we present a solution based on the
correlation of data collected from the
/var/adm/messages log files, with data issued from
wtmpx files also maintained by the SunOS/Solaris
operating system. We also use the information recorded
in the DCSummary file (see Section 2). The following
section presents the method developed to extract the
data from the wtmpx file and how we used this data to
adjust the estimation of machine uptimes and
downtimes.
3.2. Uptime and downtime estimations
refinement
3.2.1. wtmpx files. The SunOS/Solaris Unix operating
system records into the /var/adm/wtmpx binary file
information identifying the users login/logout. Through
the pseudo-user reboot it also records information on
the system reboots. The wtmpx file is organized into
records (named also entries) with a fixed size. Each
record has the format of a data structure with the
following fields:
• the user login name: “user”;
• the id associated to the current record in the
/etc/inittab file: “init_id”;
• the device name (console, lnxx): “device”;
• the process id: “pid”;
• the record type: “proc_type”;
• the exit status for a process marked as
DEAD_PROCESS: “exit_status” and “term_status”;
• the timestamp of the record: “date”;
• the session id: “session_id”;
• the length of the machine’s name: “length”;
• the machine’s name used by the user to connect, if it
is a remote one: “host”.
We developed a specific algorithm that collects the
wtmpx file of each machine included in the study on a
regular basis and processes the binary file to extract the
information that is relevant to our study. The results of
the algorithm are kept in a separate file for each
machine. Figure 1 presents examples of records
obtained for a machine of our network.
The first two records show that the root user
connected to the local system from the system named
cubitus on November 6, 2001 at 16h 37mn 41s, using
the rlogin command. The next records inform about the
occurrence of a reboot event about 3 minutes later. The
third record shows that this reboot was done via a
shutdown command executed probably by the root
user. The sequence of records corresponding to a
reboot event is much longer than this example. The
whole sequence is not presented in Figure 1, the aim of
the illustration is to show some examples of records as
extracted from wtmpx files by our algorithm.
In the following, we outline the approach that we
developed to use the information extracted from the
wtmpx files together with the information from the
DCSummary files in order to refine the uptime and
downtime estimations, considering situations S1 and
S2 discussed in Section 3.1.
2001 Nov 6 16:37:41 user=.rlogin host=
length=0 init_id=r100
device=/dev/pts/1 pid=25220
proc_type=6 term_status=0
2001 Nov 6 16:37:41 user=root host=cubitus
length=8 init_id=r100
device=/dev/pts/1 pid=25220
proc_type=7 term_status=0
2001 Nov 6 16:40:35 user=shutdown host=
length=0 init_id=
device=~
pid=0 proc_type=0
term_status=0 exit_status=0
2001 Nov 6 16:41:39 user= host=
length=0 init_id=
device=system boot pid=0
proc_type=2 term_status=0
2001 Nov 6 16:42:09 user= host=
length=0 init_id=
device=run–level 3 pid=0
proc_type=1 term_status=0
Figure 1. Examples of records from /var/adm/wtmpx
obtained with our algorithm
3.2.2. Situation S1: an operational activity exists
between EBR and SB events. The detailed analysis of
the collected data from the log files and comparison
with the information extracted from wtmpx files
showed that the situation where a real activity exists
between the last event recorded before a reboot (EBR)
and the event identifying the start of the following
reboot (SB event) recorded in the
/var/adm/messages log files appears quite often.
This situation occurs when the machine functions
normally but its activity doesn’t produce any message
into the log file maintained by the syslogd daemon. The
cause could be that the applications or services run by
the users aren’t configured to communicate with the
syslogd daemon.
To better understand this case, Figure 2 gives an
example of a sequence of events characterizing the
state of the corresponding system, taking into account
the information extracted from the
/var/adm/messages, wtmpx and DCSummary files.
For each event, we indicate the timestamp when it is
logged, a short description and the source file from
which the event is extracted. For wtmpx events, we
present only the fields which are useful to identify the
system activity, the other fields are not significant for
this analysis.
For this example, the events recorded in the
/var/adm/messages log file let us believe that the
system had no activity between December 8 at 18:06
(EBR event) and December 9 at 15:30, the timestamp
of the reboot start. However, the analysis of the
DCSummary and wtmpx files shows that the system
had a real activity between EBR and SB events. In fact,
we see that the data collection campaign was
successfully carried out on December 9 at 6:43.
Event # Event date Event description File where the event is logged
..................
2002 Dec 8 18:06:08
2002 Dec 9 06:43:34
2002 Dec 9 13:18:45
2002 Dec 9 13:35:21
2002 Dec 9 13:47:57
2002 Dec 9 13:48:48
2002 Dec 9 15:18:46
2002 Dec 9 15:29:20
2002 Dec 9 15:29:25
2002 Dec 9 15:29:25
2002 Dec 9 15:29:27
..................
2002 Dec 9 15:29:52
2002 Dec 9 15:30:52
2002 Dec 9 15:30:52
2002 Dec 9 15:30:52
2002 Dec 9 15:30:52
2002 Dec 9 15:30:53
last event before reboot <EBR>
alive_ok
user=UserC; device=pts/0; pid=2362; proc_type=7
user=UserB; device=pts/1; pid=2379; proc_type=7
user=UserB; device=pts/1; pid=2379; proc_type=8
user=UserA; device=pts/1; pid=2434; proc_type=7
user=UserA; device=pts/1; pid=2434; proc_type=8
user=UserB; device=console; pid=2644; proc_type=7
user=UserB; device=console; pid=338; proc_type=8
user=UserB; device=console; pid=2644; proc_type=8
user=LOGIN; device=console; pid=2742; proc_type=6
..................
user=troot; device=console; pid=334; proc_type=7
user=sac; device=; pid=333; proc_type=8
user=troot; device=console; pid=334; proc_type=8
user=; device=run-level 6; pid=0; proc_type=1
user=rc6; device=; pid=2899; proc_type=5
reboot start <SB>
var/adm/messages log file
DCSummary
wtmpx
wtmpx
wtmpx
wtmpx
wtmpx
wtmpx
wtmpx
wtmpx
wtmpx
..................
wtmpx
wtmpx
wtmpx
wtmpx
wtmpx
var/adm/messages log file
Figure 2. Example illustrating situation S1
Moreover, the records from wtmpx file show, for
example, that UserA used the system on December 9
between 13:48 (information given by the proc_type
field value equal to 7, that is the process with pid=2434
started at the time of this record) and 15:18
(proc_type=8, the same process ended at the time of
this record), corresponding to an utilization period of
the system of nearly one hour and a half.
In this situation, the EBR event as defined earlier
doesn’t correspond to the beginning of the total service
interruption period. Thus, the estimated value of the
downtime parameter using the assumption discussed in
Section 3.1, does not faithfully reflect the real value of
the service interruption period. Based on the correlation
of the information provided by the three data source
files, a refined and more accurate estimation of
machine downtimes and uptimes could be obtained.
The refinement consists in associating the failure
occurrence time to the timestamps of the last event
recorded before the reboot based on the information
contained in /var/adm/messages, wtmpx and
DCSummary files.
3.2.3. Situation S2: the service interruption period
starts before the EBR event. This situation occurs
when critical failures affect the system in such a way
that it becomes completely unusable, without
preventing the event logging mechanisms from
recording some messages into the log files. During the
recovery phase, the actions performed by the system
administrators may include several unsuccessful reboot
attempts that are not recorded in the
/var/adm/messages log file, but some events
referring to them are written in the wtmpx file. Using
this information, just like in the previous case, we can
refine the downtime and uptime estimations by
associating the failure occurrence time to the
timestamps of the events recorded in the wtmpx file
that better reflects the start of the service interruption.
An example of a sequence of events illustrating this
case is given in Figure 3.
Event # Event date Event description File where the event is logged
2003 Jan 9 10:18:59
2003 Jan 9 10:21:39
2003 Jan 9 10:21:39
2003 Jan 9 10:21:39
2003 Jan 9 10:21:39
2003 Jan 9 10:21:48
2003 Jan 9 10:21:48
2003 Jan 9 10:22:05
2003 Jan 9 10:22:13
2003 Jan 9 10:22:13
2003 Jan 9 10:22:16
user=root; device=console; pid=2370; proc_type=7
user=sac; device=; pid=425; proc_type=8
user=root; device=console; pid=2370; proc_type=8
user=; device=run-level 5; pid=0; proc_type=1
user=rc5; device=; pid=25952; proc_type=5
user=UserC; device=pts/3; pid=11584; proc_type=8
user=UserC; device=pts/1; pid=11359; proc_type=8
last event before reboot <EBR>
user=rc5; device=; pid=25953; proc_type=8
user=uadmin;device=; pid=26121; proc_type=5
reboot start <SB>
wtmpx
wtmpx
wtmpx
wtmpx
wtmpx
wtmpx
wtmpx
var/adm/messages log file
wtmpx
wtmpx
var/adm/messages log file
Figure 3. Example illustrating situation S2
We can identify the events extracted from the
wtmpx file informing upon the stop of the system:
event # 2 with user field “sac” and proc_type “ 8”
(dead process) followed by events #3, #4, and #5
notifying the system run-level change to run-level 5
(this one is used to properly stop the system).
This example shows that the start of the service
interruption period is prior to the EBR event recorded
in the /var/adm/messages log file. The refinement
of the uptime and downtime estimations
corresponding to such situations consists in
associating the failure occurrence time to the
timestamps of the last event recorded in the wtmpx
file before the start of the reboot sequence.
4. Experimental results
The analyses presented in this Section are based on
/var/adm/messages log file data collected during
45 months (October 1999 – July 2003) from 418
SunOS/Solaris Unix workstations and servers
interconnected through the LAAS local area
computing network. As shown in Figure 4, the data
collection period differed significantly from one
machine to another due to the dynamic evolution of
the network. For more than 70 % of the machines, the
data collection period was longer than 21 months. On
the other hand, it can be noticed that some machines
have a quite short data collection period. In order to
have significant statistical analysis results, we
excluded from the analysis the machines for which
the data collection period was shorter than 2000 hours
(about 3 months). Consequently, the results presented
in the following concern 373 Unix machines. Among
these machines, 17 correspond to major servers for
the entire network or a sub-set of users: WWW,
NIS+, NFS, FTP, SMTP, file servers, printing
servers, etc.
Figure 4. Examples of records from
/var/adm/wtmpx obtained with our algorithm
The application of the reboot identification
algorithm on the collected data allowed us to identify
12805 reboots for the 373 machines, only 476 reboots
concern the 17 servers. Based on the information
provided by the reboot identification algorithm, we
evaluated for each machine the associated uptimes
UTi and downtimes DTi, and the availability measure.
The collection of wtmpx files started later than the
/var/adm/messages log files. For this reason, we
were able to analyze the impact of uptimes and
downtimes estimation refinement algorithms only on
a subset of UTi and DTi values associated with the
reboots identified from the log files. Among the
12805 reboots, this analysis concerned 6163 reboots
(48.13%). For the remaining 6642 reboots, the
corresponding data from the wtmpx files was not
available.
In the following, we first present in Section 4.1 the
results of machine uptimes and downtimes estimation
based on the processing of the set of 6163 reboots
focusing on the impact of the estimation refinement
algorithms. Then, global results taking into account
the whole data collected during our study are
presented in Section 4.2 in order to give an overall
picture on the availability and the rate of occurrence
of reboots characterizing the Unix machines included
in our study.
4.1. Machine uptimes and downtimes
estimation and refinement
The correlation of the information contained in the
/var/adm/messages log files, the wtmpx files, and
the DCSummary files, revealed that both situations S1
and S2 discussed in Section 3.2 are common:
• Situation S1 was observed for 79.35% of the
analyzed reboots;
• Situation S2 was observed for 10.77% of the
analyzed reboots;
For the 9.88% remaining reboots, the assumption
that the EBR recorded in /var/adm/messages file
identifies the last event recorded on the machine
before the reboot was consistent with the information
available in the wtmpx and the DCSummary files.
In order to analyze the impact of the estimation
refinement algorithms on the results, Table 1 gives
the Mean, Median and Standard Deviation of uptime
and downtime values, before and after the application
of our estimation refinement algorithms discussed in
Section 3. Considering the median of the downtime
values, it can be seen that the refinement algorithms
have a significant impact on the results. The median
estimated after the refinement is 66 times lower than
the value obtained without the refinement. The
refinement algorithms have also an impact on the
uptimes estimation, but as expected the improvement
factor is lower than the one observed for the
downtime values (1.8 compared to 66).
Table 1. Machine uptimes and downtimes
estimates before and after refinement
Uptimes UTi Downtimes DTi
before
refinement
after
refinement
before
refinement
after
refinement
Mean 28.3days 1.1 month 5.9 days 1.9 days
Median 6.1 days 10.8 days 8.9 hours 8.1 min
1.7 months 1.8 months 24.1 days 21.1 days
The impact of the estimation refinement
algorithms on availability is summarized in Table 2.
It can be seen the estimated average unavailability
after the refinement is three times lower than the
value estimated based only on the information in the
/var/adm/messages log files. Clearly, the
difference is significant and cannot be ignored.
Table 2. Impact of the estimation refinement
algorithms on Availability and Unavailability
before refinement after refinement
A 89.3% 96.3 %
UA 39.0 days/year 13.7 days/year
4.2. Availability and reboot rates estimated
from the whole data set
This section presents some results concerning the
reboot rates and the availability of the 373
SunOS/Solaris Unix machines included in our study
taking into account the whole set of 12805 reboots
identified from the /var/adm/messages files.
When the wtmpx files were not available (this
concerned 6642 reboots), the estimation of the UTi,
DTi, availability and reboot rates was based only on
the information in the /var/adm/messages files,
using the assumption discussed in Section 3.1. In the
other case (i.e., for the 6163 reboots), we applied the
estimation refinement algorithms presented in
Section 3.2.
Figure 5 plots the reboot rates estimated for each
machine as a function of the data collection period.
The estimated reboot rate for each machine
corresponds to the average number of reboots
recorded during the corresponding observation. It can
be seen that the reboot rates are uniformly distributed
between 10
/hour and 10
/hour.
Figure 5. Unix machines reboot rates as a function
of the data collection period
As indicated in Table 3, the mean value of
machine reboot rates is 1.3 10
/hour, when
considering all Unix machines including workstations
and servers. If we take into account only the servers,
the mean reboot rate is 1.5 times lower (7.7 10
/hour)
corresponding to one reboot every two months.
Table 3. Reboot rate statistics
Mean Median Std. Dev.
SunOS/Solaris
machines
(Workstations
+ Servers)
1.3 10
/h 1.0 10
/h 1.3 10
Servers only 7.7 10
/h 6.4 10
/h 5.6 10
The results illustrating the availability and
unavailability of the Unix machines including
workstations and servers are given in Figure 6 and
Table 4. The mean availability is 97.81 %
corresponding to an average unavailability of 8 days
per year. Detailed analysis shows that only 15 among
the 373 Unix machines included in the study have an
availability lower than 90%.
When considering only the servers, the estimated
availability varies between 99.36% and 99.1% with
an average unavailability of 12 hours per year.
Figure 6. SunOS/Solaris Unix machines
availability distribution
Table 4. Availability and Unavailability statistics
Mean Median Std. Dev.
A 97.81 % 98.79 % 3.07 %
UA 7.99 day/year 4.41 day/year 11.20 day/year
6. Conclusion
Dependability analyses based on event logs
provide useful feedback to software and system
designers. Nevertheless, the results obtained are
intimately related to the quality and the completeness
of the information recorded in the logs. As the
information contained in such event logs could be
incomplete or imperfect, it is important to use
additional sources of information to ensure that the
conclusions derived from such analyses faithfully
reflect reality. The approach investigated in this paper
is aimed to fulfill this objective considering
SunOS/Solaris Unix systems as an example.
In particular, we have shown that the combined us
of the data contained in the syslogd files and the
information recorded in the wtmpx files or through
the monitoring of systems state during the data
collection campaigns provides uptime and downtime
estimations that are closer to reality than the
estimations obtained based on syslogd files only.
This result is illustrated based on a large set of field
data collected from 373 machines during a 45 month
observation period.
In our future work, we will investigate the
applicability of the approach proposed in this paper to
other operating systems such as Linux, Windows 2K
and Mac OS X.
References
[1] M. F. Buckley, D. P. Siewiorek, “VAX/VMS Event
Monitoring and Analysis”, 25th IEEE Int. Symp. on
Fault-Tolerant Computing (FTCS-25), (Pasadena, CA,
USA), pp. 414-423, IEEE Computer Society, 1995.
[2] R. K. Iyer, D. Tang, “Experimental Analysis of
Computer System Dependability”, in Fault-Tolerant
Computer System Design, D. K. Pradhan, Ed., Prentice
Hall PTR, 1996, pp. 282-392.
[3] M. Kalyanakrishnam, Z. Kalbarczyk, R. K. Iyer,
“Failure Data Analysis of a LAN of Windows NT
Based Computers”, 18th IEEE Symp. on Reliable
Distributed Systems (SRDS-18), (Lausanne,
Switzerland), pp. 178-187, 1999.
[4] C. Simache, M. Kaâniche, “Measurement-based
Availability Analysis of Unix Systems in a Distributed
Environment”, The 12th Int. Symp. on Software
Reliability Engineering (ISSRE-2001), (Hong Kong,
China), pp. 346-355, IEEE Computer Society, 2001.
[5] C. Simache, M. Kaâniche, “Event Log based
Dependability Analysis of Windows NT and 2K
Systems”, 2002 Pacific Rim Int. Symposium on
Dependable Computing (PRDC-2002), (Tsukuba,
Japan), pp. 311-315, IEEE Computer Society, 2002.
[6] C. Simache, “Dependability evaluation of Unix and
Windows Systems based on operational data: A
Method and Application”, PhD Thesis, LAAS Report
N°04333, 2004 (in French).
[7] A. Thakur, R. K. Iyer, “Analyze-NOW — An
Environment for Collection & Analysis of Failures in a
Network of Workstations”, IEEE Transactions on
Reliability, vol. 45, pp. 561-570, 1996.
[8] M. Tsao, D. P. Siewiorek, “Trend Analysis on System
Error Files”, 13th IEEE Int. Symp. on Fault-Tolerant
Computing (FTCS-13), (Milano, Italy), pp. 116-119,
IEEE Computer Society, 1983.
[9] J. Xu, Z. Kalbarczyk, R. K. Iyer, “Networked
Windows NT System Field Failure Data Analysis”,
Proc. 1999 IEEE Pacific Rim Int. Symp. on
Dependable Computing (PRDC-1999), (Los Alamitos,
CA), pp. 178-185, 1999
/ASCII85EncodePages false
/AllowTransparency false
/AutoPositionEPSFiles false
/AutoRotatePages /None
/Binding /Left
/CalGrayProfile (None)
/CalRGBProfile (None)
/CalCMYKProfile (None)
/sRGBProfile (sRGB IEC61966-2.1)
/CannotEmbedFontPolicy /Error
/CompatibilityLevel 1.3
/CompressObjects /Off
/CompressPages true
/ConvertImagesToIndexed true
/PassThroughJPEGImages true
/CreateJDFFile false
/CreateJobTicket false
/DefaultRenderingIntent /Default
/DetectBlends true
/DetectCurves 0.1000
/ColorConversionStrategy /LeaveColorUnchanged
/DoThumbnails true
/EmbedAllFonts true
/EmbedOpenType false
/ParseICCProfilesInComments true
/EmbedJobOptions true
/DSCReportingLevel 0
/EmitDSCWarnings false
/EndPage -1
/ImageMemory 1048576
/LockDistillerParams true
/MaxSubsetPct 100
/Optimize true
/OPM 0
/ParseDSCComments false
/ParseDSCCommentsForDocInfo false
/PreserveCopyPage true
/PreserveDICMYKValues true
/PreserveEPSInfo false
/PreserveFlatness true
/PreserveHalftoneInfo true
/PreserveOPIComments false
/PreserveOverprintSettings true
/StartPage 1
/SubsetFonts true
/TransferFunctionInfo /Remove
/UCRandBGInfo /Preserve
/UsePrologue false
/ColorSettingsFile ()
/AlwaysEmbed [ true
/NeverEmbed [ true
/AntiAliasColorImages false
/CropColorImages true
/ColorImageMinResolution 150
/ColorImageMinResolutionPolicy /OK
/DownsampleColorImages true
/ColorImageDownsampleType /Bicubic
/ColorImageResolution 300
/ColorImageDepth -1
/ColorImageMinDownsampleDepth 1
/ColorImageDownsampleThreshold 2.00333
/EncodeColorImages true
/ColorImageFilter /DCTEncode
/AutoFilterColorImages false
/ColorImageAutoFilterStrategy /JPEG
/ColorACSImageDict <<
/QFactor 0.76
/HSamples [2 1 1 2] /VSamples [2 1 1 2]
/ColorImageDict <<
/QFactor 0.76
/HSamples [2 1 1 2] /VSamples [2 1 1 2]
/JPEG2000ColorACSImageDict <<
/TileWidth 256
/TileHeight 256
/Quality 15
/JPEG2000ColorImageDict <<
/TileWidth 256
/TileHeight 256
/Quality 15
/AntiAliasGrayImages false
/CropGrayImages true
/GrayImageMinResolution 150
/GrayImageMinResolutionPolicy /OK
/DownsampleGrayImages true
/GrayImageDownsampleType /Bicubic
/GrayImageResolution 300
/GrayImageDepth -1
/GrayImageMinDownsampleDepth 2
/GrayImageDownsampleThreshold 2.00333
/EncodeGrayImages true
/GrayImageFilter /DCTEncode
/AutoFilterGrayImages false
/GrayImageAutoFilterStrategy /JPEG
/GrayACSImageDict <<
/QFactor 0.76
/HSamples [2 1 1 2] /VSamples [2 1 1 2]
/GrayImageDict <<
/QFactor 0.76
/HSamples [2 1 1 2] /VSamples [2 1 1 2]
/JPEG2000GrayACSImageDict <<
/TileWidth 256
/TileHeight 256
/Quality 15
/JPEG2000GrayImageDict <<
/TileWidth 256
/TileHeight 256
/Quality 15
/AntiAliasMonoImages false
/CropMonoImages true
/MonoImageMinResolution 1200
/MonoImageMinResolutionPolicy /OK
/DownsampleMonoImages true
/MonoImageDownsampleType /Bicubic
/MonoImageResolution 600
/MonoImageDepth -1
/MonoImageDownsampleThreshold 1.00167
/EncodeMonoImages true
/MonoImageFilter /CCITTFaxEncode
/MonoImageDict <<
/K -1
/AllowPSXObjects false
/CheckCompliance [
/None
/PDFX1aCheck false
/PDFX3Check false
/PDFXCompliantPDFOnly false
/PDFXNoTrimBoxError true
/PDFXTrimBoxToMediaBoxOffset [
0.00000
0.00000
0.00000
0.00000
/PDFXSetBleedBoxToMediaBox true
/PDFXBleedBoxToTrimBoxOffset [
0.00000
0.00000
0.00000
0.00000
/PDFXOutputIntentProfile (None)
/PDFXOutputConditionIdentifier ()
/PDFXOutputCondition ()
/PDFXRegistryName (http://www.color.org)
/PDFXTrapped /False
/Description <<
/JPN <FEFF3053306e8a2d5b9a306f300130d330b830cd30b9658766f8306e8868793a304a3088307353705237306b90693057305f00200050004400460020658766f830924f5c62103059308b3068304d306b4f7f75283057307e305930023053306e8a2d5b9a30674f5c62103057305f00200050004400460020658766f8306f0020004100630072006f0062006100740020304a30883073002000520065006100640065007200200035002e003000204ee5964d30678868793a3067304d307e30593002>
/DEU <FEFF00560065007200770065006e00640065006e0020005300690065002000640069006500730065002000450069006e007300740065006c006c0075006e00670065006e0020007a0075006d002000450072007300740065006c006c0065006e00200076006f006e0020005000440046002d0044006f006b0075006d0065006e00740065006e002c00200075006d002000650069006e00650020007a0075007600650072006c00e40073007300690067006500200041006e007a006500690067006500200075006e00640020004100750073006700610062006500200076006f006e00200047006500730063006800e40066007400730064006f006b0075006d0065006e00740065006e0020007a0075002000650072007a00690065006c0065006e002e00200044006900650020005000440046002d0044006f006b0075006d0065006e007400650020006b00f6006e006e0065006e0020006d006900740020004100630072006f0062006100740020006f0064006500720020006d00690074002000640065006d002000520065006100640065007200200035002e003000200075006e00640020006800f600680065007200200067006500f600660066006e00650074002000770065007200640065006e002e>
/FRA <FEFF004f007000740069006f006e00730020007000650072006d0065007400740061006e007400200064006500200063007200e900650072002000640065007300200064006f00630075006d0065006e007400730020005000440046002000700072006f00660065007300730069006f006e006e0065006c007300200066006900610062006c0065007300200070006f007500720020006c0061002000760069007300750061006c00690073006100740069006f006e0020006500740020006c00270069006d007000720065007300730069006f006e002e00200049006c002000650073007400200070006f0073007300690062006c0065002000640027006f00750076007200690072002000630065007300200064006f00630075006d0065006e007400730020005000440046002000640061006e00730020004100630072006f0062006100740020006500740020005200650061006400650072002c002000760065007200730069006f006e002000200035002e00300020006f007500200075006c007400e9007200690065007500720065002e>
/PTB <FEFF005500740069006c0069007a006500200065007300740061007300200063006f006e00660069006700750072006100e700f5006500730020007000610072006100200063007200690061007200200064006f00630075006d0065006e0074006f0073002000500044004600200063006f006d00200075006d0061002000760069007300750061006c0069007a006100e700e3006f0020006500200069006d0070007200650073007300e3006f00200061006400650071007500610064006100730020007000610072006100200064006f00630075006d0065006e0074006f007300200063006f006d0065007200630069006100690073002e0020004f007300200064006f00630075006d0065006e0074006f0073002000500044004600200070006f00640065006d0020007300650072002000610062006500720074006f007300200063006f006d0020006f0020004100630072006f006200610074002c002000520065006100640065007200200035002e00300020006500200070006f00730074006500720069006f0072002e>
/DAN <FEFF004200720075006700200064006900730073006500200069006e0064007300740069006c006c0069006e006700650072002000740069006c0020006100740020006f0070007200650074007400650020005000440046002d0064006f006b0075006d0065006e007400650072002c0020006400650072002000650072002000650067006e006500640065002000740069006c0020007000e5006c006900640065006c006900670020007600690073006e0069006e00670020006f00670020007500640073006b007200690076006e0069006e006700200061006600200066006f0072007200650074006e0069006e006700730064006f006b0075006d0065006e007400650072002e0020005000440046002d0064006f006b0075006d0065006e007400650072006e00650020006b0061006e002000e50062006e006500730020006d006500640020004100630072006f0062006100740020006f0067002000520065006100640065007200200035002e00300020006f00670020006e0079006500720065002e>
/NLD <FEFF004700650062007200750069006b002000640065007a006500200069006e007300740065006c006c0069006e00670065006e0020006f006d0020005000440046002d0064006f00630075006d0065006e00740065006e0020007400650020006d0061006b0065006e00200064006900650020006700650073006300680069006b00740020007a0069006a006e0020006f006d0020007a0061006b0065006c0069006a006b006500200064006f00630075006d0065006e00740065006e00200062006500740072006f0075007700620061006100720020007700650065007200200074006500200067006500760065006e00200065006e0020006100660020007400650020006400720075006b006b0065006e002e0020004400650020005000440046002d0064006f00630075006d0065006e00740065006e0020006b0075006e006e0065006e00200077006f007200640065006e002000670065006f00700065006e00640020006d006500740020004100630072006f00620061007400200065006e002000520065006100640065007200200035002e003000200065006e00200068006f006700650072002e>
/ESP <FEFF0055007300650020006500730074006100730020006f007000630069006f006e006500730020007000610072006100200063007200650061007200200064006f00630075006d0065006e0074006f0073002000500044004600200071007500650020007000650072006d006900740061006e002000760069007300750061006c0069007a006100720020006500200069006d007000720069006d0069007200200063006f007200720065006300740061006d0065006e0074006500200064006f00630075006d0065006e0074006f007300200065006d00700072006500730061007200690061006c00650073002e0020004c006f007300200064006f00630075006d0065006e0074006f00730020005000440046002000730065002000700075006500640065006e00200061006200720069007200200063006f006e0020004100630072006f00620061007400200079002000520065006100640065007200200035002e003000200079002000760065007200730069006f006e0065007300200070006f00730074006500720069006f007200650073002e>
/SUO <FEFF004e00e4006900640065006e002000610073006500740075007300740065006e0020006100760075006c006c006100200076006f006900740020006c0075006f006400610020006a0061002000740075006c006f00730074006100610020005000440046002d0061007300690061006b00690072006a006f006a0061002c0020006a006f006900640065006e0020006500730069006b0061007400730065006c00750020006e00e400790074007400e400e40020006c0075006f00740065007400740061007600610073007400690020006c006f00700070007500740075006c006f006b00730065006e002e0020005000440046002d0061007300690061006b00690072006a0061007400200076006f0069006400610061006e0020006100760061007400610020004100630072006f006200610074002d0020006a0061002000520065006100640065007200200035002e00300020002d006f0068006a0065006c006d0061006c006c0061002000740061006900200075007500640065006d006d0061006c006c0061002000760065007200730069006f006c006c0061002e>
/ITA <FEFF00550073006100720065002000710075006500730074006500200069006d0070006f007300740061007a0069006f006e00690020007000650072002000630072006500610072006500200064006f00630075006d0065006e007400690020005000440046002000610064006100740074006900200070006500720020006c00610020007300740061006d00700061002000650020006c0061002000760069007300750061006c0069007a007a0061007a0069006f006e006500200064006900200064006f00630075006d0065006e0074006900200061007a00690065006e00640061006c0069002e0020004900200064006f00630075006d0065006e00740069002000500044004600200070006f00730073006f006e006f0020006500730073006500720065002000610070006500720074006900200063006f006e0020004100630072006f00620061007400200065002000520065006100640065007200200035002e003000200065002000760065007200730069006f006e006900200073007500630063006500730073006900760065002e>
/NOR <FEFF004200720075006b00200064006900730073006500200069006e006e007300740069006c006c0069006e00670065006e0065002000740069006c002000e50020006f00700070007200650074007400650020005000440046002d0064006f006b0075006d0065006e00740065007200200073006f006d002000700061007300730065007200200066006f00720020007000e5006c006900740065006c006900670020007600690073006e0069006e00670020006f00670020007500740073006b007200690066007400200061007600200066006f0072007200650074006e0069006e006700730064006f006b0075006d0065006e007400650072002e0020005000440046002d0064006f006b0075006d0065006e00740065006e00650020006b0061006e002000e50070006e006500730020006d006500640020004100630072006f0062006100740020006f0067002000520065006100640065007200200035002e00300020006f0067002000730065006e006500720065002e>
/SVE <FEFF0041006e007600e4006e00640020006400650020006800e4007200200069006e0073007400e4006c006c006e0069006e006700610072006e00610020006e00e40072002000640075002000760069006c006c00200073006b0061007000610020005000440046002d0064006f006b0075006d0065006e007400200073006f006d00200070006100730073006100720020006600f600720020007000e5006c00690074006c006900670020007600690073006e0069006e00670020006f006300680020007500740073006b0072006900660074002000610076002000610066006600e4007200730064006f006b0075006d0065006e0074002e0020005000440046002d0064006f006b0075006d0065006e00740065006e0020006b0061006e002000f600700070006e006100730020006d006500640020004100630072006f0062006100740020006f00630068002000520065006100640065007200200035002e003000200065006c006c00650072002000730065006e006100720065002e>
/ENU <FEFF005500730065002000740068006500730065002000730065007400740069006e0067007300200074006f0020006300720065006100740065002000500044004600200064006f00630075006d0065006e007400730020007300750069007400610062006c006500200066006f007200200049004500450045002000580070006c006f00720065002e0020004300720065006100740065006400200031003500200044006500630065006d00620065007200200032003000300033002e>
>> setdistillerparams
/HWResolution [600 600]
/PageSize [612.000 792.000]
>> setpagedevice
|
0704.0861 | Empirical analysis and statistical modeling of attack processes based on
honeypots | Microsoft Word - Kaaniche-WEEDS-DSN06-final.doc
Empirical Analysis and Statistical Modeling
of Attack Processes based on Honeypots
M. Kaâniche1, E. Alata1, V. Nicomette1, Y. Deswarte1, M. Dacier2
1LAAS-CNRS, Université de Toulouse
7 Avenue du Colonel Roche, 31077 Toulouse Cedex 4, France
{kaaniche, ealata, deswarte, nicomett}@laas.fr
2Eurécom
2229 Route des Crêtes, BP 193, 06904 Sophia Antipolis Cedex, France
[email protected]
Abstract
Honeypots are more and more used to collect data on
malicious activities on the Internet and to better
understand the strategies and techniques used by
attackers to compromise target systems. Analysis and
modeling methodologies are needed to support the
characterization of attack processes based on the data
collected from the honeypots. This paper presents some
empirical analyses based on the data collected from the
Leurré.com honeypot platforms deployed on the Internet
and presents some preliminary modeling studies aimed
at fulfilling such objectives.
1. Introduction
Several initiatives have been developed during the
last decade to monitor malicious threats and activities on
the Internet, including viruses, worms, denial of service
attacks, etc. Among them, we can mention the Internet
Motion Sensor project [1], CAIDA [2], DShield [3], and
CADHo [4]. These projects provide valuable
information on security threats and the potential damage
that they might cause to Internet users. Analysis and
modeling methodologies are necessary to extract the
most relevant information from the large set of data
collected from such monitoring activities that can be
useful for system security administrators and designers
to support decision making. The designers are mainly
interested in having representative and realistic
assumptions about the kind of threats and vulnerabilities
that their system will have to cope with once it is used in
operation. Knowing who are the enemies and how they
proceed to defeat the security of target systems is an
important step to be able to build systems that can be
resilient with respect to the corresponding threats. From
the system security administrators’ perspective, the
collected data should be used to support the
development of efficient early warning and intrusion
detection systems that will enable them to better react to
the attacks targeting their systems.
As of today, there is still a lack of methodologies and
significant results to fulfill the objectives described
above, although some progress has been achieved
recently in this field. The CADHo project “Collection
and Analysis of Data from Honeypots” [4], an ongoing
research action started in September 2004, is aimed at
contributing to filling such a gap by carrying out the
following activities:
1) deploying a distributed platform of honeypots [5]
that gathers data suitable to analyze the attack
processes targeting a large number of machines
connected to the Internet;
2) developing analysis methodologies and modeling
approaches to validate the usefulness of this
platform by carrying out various analyses, based on
the collected data, to characterize the observed
attacks and model their impact on security.
A honeypot is a machine connected to a network but
that no one is supposed to use. In theory, no connection
to or from that machine should be observed. If a
connection occurs, it must be, at best an accidental error
or, more likely, an attempt to attack the machine.
The Leurré.com data collection environment [5], set
up in the context of the CADHo project, has deployed,
as of to date, thirty five honeypot platforms at various
locations from academia and industry, in twenty five
countries over the five continents. Several analyses
carried out based on the data collected so far from these
honeypots have revealed that very interesting
observations and conclusions can be derived with
respect to the attack activities observed on the Internet
[4, 6-9]. In addition, several automatic data analyses and
clustering techniques have been developed to facilitate
the extraction of relevant information from the collected
data. A list of papers detailing the methodologies used
and the results of these analyses is available in [6].
This paper focuses on modeling-related activities
based on the data collected from the honeypots. We first
discuss the objectives of such activities and the
challenges that need to be addressed. Then we present
some examples of models obtained from the data.
The paper is organized as follows. Section 2 presents
the data collection environment. Section 3 focuses on
the modeling of attacks based on the data collected from
the honeypots deployed. Modeling examples are
presented in Section 4. Finally, Section 5 discusses
future work.
2. The data collection environment
The data collection environment (called Leurré.com
[5]) deployed in the context of the CADHo project is
based on low-interaction honeypots using the freely
available software called honeyd [10]. Since September
2004, 35 honeypot platforms have been progressively
deployed on the Internet at various geographical
locations. Each platform emulates three computers
running Linux RedHat, Windows 98 and Windows NT,
respectively, and various services such as ftp, web, etc.
A firewall ensures that connections cannot be initiated
from the computers, only replies to external solicitations
are allowed. All the honeypot platforms are centrally
managed to ensure that they have exactly the same
configuration. The data gathered by each platform are
securely uploaded to a centralized database with the
complete content, including payload of all packets sent
to or from these honeypots, and additional information
to facilitate its analysis, such as the IP geographical
localization of packets’ source addresses, the OS of the
attacking machine, the local time of the source, etc.
3. Modeling objectives
Modeling involves three main steps:
1) The definition of the objectives of the modeling
activities and the quantitative measures to be
evaluated.
2) The development of one (or several) models that are
suitable to achieve the specified objectives.
3) The processing of the models and the analysis of the
results to support system design or operation
activities.
The data collected from the honeypots can be
processed in various ways to characterize the attack
processes and perform predictive analyses. In particular,
modeling activities can be used to:
• Identify the probability distributions that best
characterize the occurrence of attacks and their
propagation through the Internet.
• Analyze whether the data collected from different
platforms exhibit similar or different malicious
attack activities.
• Model the time relationships that may exist between
attacks coming from different sources (or to different
destinations).
• Predict the occurrence of new waves of attacks on a
given platform based on the history of attacks
observed on this platform as well as on the other
platforms.
For the sake of illustration, we present in the
following sections simple preliminary models based on
the data collected from our honeypots that are aimed at
fulfilling such objectives.
4. Examples
The examples presented in the following address:
1) The analysis of the time evolution of the number of
attacks taking into account the geographic location
of the attacking machine.
2) The characterization and statistical modeling of the
times between attacks.
3) The analysis of the propagation of attacks throughout
the honeypot platforms.
The data considered for the examples has been
collected from January 1st, 2004 to April 17, 2005,
corresponding to a data collection period of 320 days.
We take into account the attacks observed on 14
honeypot platforms among those deployed so far. The
selected honeypots correspond to those that have been
active for almost the whole considered period. The total
number of attacks observed on these honeypots is
816476. These attacks are not uniformly distributed
among the platforms. In particular, the data collected
from three platforms represent more than fifty percent of
the total attack activity.
4.1 Attack occurrence and geographic distribution
The preliminary models presented in this sub-section
address: i) the time-evolution modeling of the number of
attacks observed on different honeypot platforms, and ii)
the analysis of potential correlations for the attack
processes observed on the different platforms taking into
account the geographic location of the attacking
machines and the proportion of attacks observed on each
platform, wrt. to the global attack activity.
Let us denote by:
− Y(t) the function describing the evolution of the
number of attacks per unit of time observed on all
the honeypots during the observation period,
− Xj(t) the function describing the evolution of the
number of attacks per unit of time observed on all
the honeypots during the observation period for
which the IP address of the attacking machine is
located in country j .
In a first stage, we have plotted, for various time
periods, Y(t) and the curves Xj(t) corresponding to
different countries j. Visual inspection showed
surprising similarities between Y(t) and some Xj(t). To
confirm such empirical observations, we have then
decided to rigorously analyze this phenomenon using
mathematical linear regression models.
Considering a linear regression model, we have
investigated if Y(t) can be estimated from the
combination of the attacks described by Xj(t), taking into
account a limited number of countries j. Let us denote
by Y*(t) the estimated model.
Formally, Y*(t) is defined as follows:
Y*(t) = Σαj Xj(t) + β j= 1, 2, .. k (1)
Constants αj and β correspond to the parameters of
the linear model that provide the best fit with the
observed data, and k is the number of countries
considered in the regression.
The quality of fit of the model is measured by the
statistics R2 defined by:
R2 = Σ(Y*(i) – Yav)
2/ Σ(Y (i) – Yav) 2 (2)
Y (i) and Y*(i) correspond to the observed and estimated
number of attacks for unit of time i, respectively. Yav is
the average number of attacks per unit of time, taking
into account the whole observation period.
Indeed, R is the correlation factor between the
estimated model and the observed values. The closer the
R2 value is to 1, the better the estimated model fits the
collected data.
We have applied this model considering linear
regressions involving one, two or more countries.
Surprisingly, the results reveal that a good fit can be
obtained by considering the attacks from one country
only. For example, the models providing the best fit
taking into account the total number of attacks from all
the platforms are obtained by considering the attacks
issued from either UK, USA, Russia or Germany only.
The corresponding R2 values are of the same order of
magnitude (0.944 for UK, 0.939 for USA, 0.930 for
Russia and 0.920 for Germany), denoting a very good fit
of the estimated models to the collected data. For
example, the estimated model obtained when
considering the attacks from Russia only is defined by
equation (3):
Y*(t) = 44.568 X1(t) + 1555.67 (3)
X1(t) represents the evolution of the number of attacks
from Russia. Figure 1 plots the evolution of the
observed and estimated number of attacks per unit of
time during the data collection period considered in this
example. The unit of time corresponds to 4 days. It is
noteworthy that, similar conclusions are obtained if we
consider another granularity for the unit of time, for
example one day, or one week.
These results are even more surprising that the
attacks from Russia and UK represent only a small
proportion of the total number of attacks (1.9% and
3.7% respectively). Concerning the USA, although the
proportion is higher (about 18%), it is not sufficient to
explain the linear model.
Figure 1- Evolution of the number of attacks per unit of time
observed on all the platforms and estimated model considering
attacks from Russia only
We have applied similar analyses by respectively
considering each honeypot platform in order to
investigate if similar conclusions can be derived by
comparing their attack activities per source country to
their global attack activities. The results are summarized
in Table 1. The second column identifies the source
country that provides the best fit. The corresponding R2
value is given in the third column. Finally, the last three
columns give the R2 values obtained when considering
UK, USA, or Russia in the regression model.
It can be noticed that the quality of the regressions
measured when considering attacks from Russia only is
generally low for all platforms (R2 less than 0.80). This
indicates that the property observed at the global level is
not visible when looking at the local activities observed
on each platform. However, for the majority of the
platforms, the best regression models often involve one
of the three following countries: USA, Germany or UK,
which also provide the best regressions when analyzing
the global attack activity considering all the platforms
together. Two exceptions are found with P6 and P8 for
which the observed attack activities exhibit different
characteristics with respect to the origin of the attacks
(Taiwan, China), compared to the other platforms.
The trends discussed above have been also observed
when considering a different granularity for the unit of
time (e.g., 1 day or 1 week) as well as different data
observation periods.
Platform Country
providing
the best
model
Best
model
Russia
P1 Germany 0.895 0.873 0.858 0.687
P2 USA 0.733 0.464 0.733 0.260
P4 Germany 0.722 0.197 0.373 0.161
P5 Germany 0.874 0.869 0.872 0.608
P6 UK 0.861 0.861 0.699 0.656
P8 Taiwan 0.796 0.249 0.425 0.212
P9 Germany 0.754 0.630 0.624 0.631
P11 China 0.746 0.303 0.664 0.097
P13 Germany 0.738 0.574 0.412 0.389
P14 Germany 0.708 0.510 0.546 0.087
P20 USA 0.912 0.787 0.912 0.774
P21 SPAIN 0.791 0.620 0.727 0.720
P22 USA 0.870 0.176 0.870 0.111
P23 USA 0.874 0.659 0.874 0.517
Global UK 0.944 0.944 0.939 0.930
Table 1 – Estimated models for each platform: correlation
factors for the countries providing the best fit and for UK, USA
and Russia
To summarize, two main findings can be derived
from the results presented above:
1) Some trends exhibited at the global level considering
the attack processes on all the platforms together are
not observed when analyzing each platform
individually (this is the case, for example, of attacks
from Russia). On the other hand, we have observed
the other situation where the trends observed
globally are also visible locally on the majority of
the platforms (this is the case, for example, of attacks
from USA, UK and Germany).
2) The attack processes observed on each platform are
very often highly correlated with the attack processes
originating from a particular country. The country
providing the best regressions locally, does not
necessary exhibit high correlations when considering
other platforms or at the global level. These trends
seem to result from specific factors that govern the
attack processes observed on each platform.
4.2 Distribution of times between attacks
In this example, we focus on the analysis and the
modeling of the times between attacks observed on
different honeypot platforms.
Let us denote by ti, the time separating the
occurrence of attack i and attack (i-1). Each attack is
associated to an IP address, and its occurrence time is
defined by the time when the first packet is received
from the corresponding address at one of the three
virtual machines of the honeypot platform. All the
packets received from the same IP address within 24
hours are supposed to belong to the same attack session.
We have analyzed the distribution of the times
between attacks observed on each honeypot platform.
Our objective was to find analytical models that
faithfully reflect the empirical data collected from each
platform. In the following, we summarize the results
obtained considering 5 platforms for which we have
observed the highest attack activity.
4 .2.1 Empirical analyses
Table 2 gives the number of intervals of times
between attacks observed at each platform considered in
the analysis as well as the corresponding number of IP
addresses. As illustrated by Figure 2, most of these
addresses have been observed only once at a given
platform. Nevertheless, some IP addresses have been
observed several times, the maximum number of visits
per IP address for the five platforms was 57, 96, 148,
183, and 83 (respectively). Indeed, the curves plotting
the number of IP addresses as a function of the number
of attacks for each address follow a heavy-tailed power
law distribution. It is noteworthy that such distributions
have been observed in many performance and
dependability related studies in the context of the
Internet, e.g., transfer and interarrival times, burst sizes,
sizes of files transferred over the web, error rates in web
servers, etc.
P5 P6 P9 P20 P23
Number of ti 85890 148942 46268 224917 51580
Number of IP
addresses
79549 90620 42230 162156 47859
Table 2 - Numbers of intervals of times between attacks (ti) and
of different IP addresses observed at each platform
Figure 2- Number of IP addresses versus the number of attacks
per IP address observed at each platform (log-log scale)
4 .2.2 Modeling
Finding tractable analytical models that faithfully
reflect the observed times between attacks is useful to
characterize the observed attack processes and to find
appropriate indicators that can be used for prediction
purposes. We have investigated several candidate
distributions, including Weibull, Lognormal, Pareto, and
the Exponential distribution, which are traditionally
used in reliability related studies. The best fit for each
platform has been obtained using a mixture model
combining a Pareto and an exponential distribution.
Let us denote by T the random variable
corresponding to the time between the occurrence of two
consecutive attacks at a given platform, and t a
realization of T. Assuming that the probability density
function pdf(t) associated to T is characterized by a
mixture distribution combining a Pareto distribution and
an exponential distribution, then f(t) is defined as
follows.
pdf (t) = P
(t +1)
+ (1" P
k is the index parameter of the Pareto distribution, λ is
the rate associated to the exponential distribution and Pa
is a probability.
We have used the R statistical package [11] to estimate
the parameters that provide the best fit to the collected
data. The quality of fit is assessed by applying the
Kolmogorov-Smirnov statistical test. The results are
presented in Figure 3. It can be noticed that for all the
platforms, the mixed distribution provides a good fit to
the observed data whereas the exponential distribution is
not suitable to describe the observed attack processes.
Thus, the traditional assumption considered in hardware
reliability evaluation studies assuming that failures
occur according to a Poisson process does not seem to
be satisfactory when considering the data observed form
our honeypots. These results have been also confirmed
when considering the data collected during other
observation periods.
1 31 61 91 121 151 181 211 241 271
Time between attacks (seconds)
Pa = 0.0051
k = 0.173
! = 0.121/sec.
p-value = 0.90
Data Mixture (Pareto, Exp.)
Exponential
1 31 61 91 121 151 181 211 241 271
Time between attacks (seconds)
Mixture (Pareto, Exp.)
Exponential
Pa = 0.0115
k = 0.1183
! = 0.1364/sec.
p-value = 0.999
a) P5 b) P6
1 31 61 91 121 151 181 211 241 271
Time between attacks (seconds)
Mixture (Pareto, Exp.)
Exponential
Pa = 0.0019
k = 0.1668
! = 0.276/sec.
p-value = 0.99
1 31 61 91 121 151 181 211 241 271
Time between attacks (seconds)
Mixture (Pareto, Exp.)
Exponential
Pa = 0.0144
k = 0.0183
! = 0.0136/sec.
p-value = 0.90
c) P9 d) P20
1 31 61 91 121 151 181 211 241 271
Time between attacks (seconds)
Mixture (Pareto, Exp.)
Exponential
Pa = 0.0031
k = 0.1240
! = 0.275/sec.
p-value = 0.985
e) P23
Figure 3- Observed and estimated times between attacks probability density functions.
4.3 Propagation of attacks
Besides analyzing the attack activities observed at
each platform in isolation, it is useful to identify
phenomena that reflect propagation of attacks through
different platforms. In this section, we analyze simple
scenarios where a propagation between two platforms is
assumed to occur when the IP address of an attacking
machine observed at a given platform is also observed at
another platform. Such a situation might occur for
example as a result of a scanning activity or might be
resulting from the propagation of worms.
For the sake of illustration, we restrict the analysis to
the five platforms considered in the previous example.
For each attacking IP address in the data collected from
the five platforms during the period of the study, we
identified: 1) all the occurrences with the same source
address, 2) the times of each occurrence and 3) the
platform on which each occurrence has been reported. A
propagation is said to occur for this IP address from
platform Pi to platform Pj when the next occurrence of
this address is observed on Pj after visiting Pi.
Based on this information we build a propagation
graph where each node identifies a platform and a
transition between two nodes identifies a propagation
between the nodes. A probability is associated to each
transition to characterize its likelihood of occurrence.
Figure 4 presents the propagation graph obtained for
the five platforms included in the analysis. Considering
platforms P6 and P20, it can be seen that only a few IP
addresses that attacked these platforms have been
observed on the other platforms. The situation is
different when considering platforms P5, P9, and P23.
In particular, it can be noticed that propagation between
P5 and P9 is highly probable. This is related in
particular to the fact that the addresses of the
corresponding platforms belong to the same /8 network
domain. More thorough and detailed analyses are
currently carried out based on the propagation graph in
order to take into account timing information for the
corresponding transitions and also the types of attacks
observed, in order to better explain the propagation
phenomena illustrated by the graph.
Figure 4- Propagation graph
5. Conclusion
This paper presented simple examples and
preliminary models illustrating various types of
empirical analysis and modeling activities that can be
carried out based on the data collected from honeypots
in order to characterize attack processes. The honeypot
platforms deployed so far in our project belong to the
family of so-called “low interaction honeypots”. Thus,
hackers can only scan ports and send requests to fake
servers without ever succeeding in taking control over
them. In our project, we are also interested in running
experiments with “high interaction” honeypots where
attackers can really compromise the targets. Such
honeypots are suitable to collect data that would enable
us to study the behaviors of attackers once they have
managed to get access to a target and try to progress in
the intrusion process to get additional privileges. Future
work will be focused on the deployment of such
honeypots and the exploitation of the collected data to
better characterize attack scenarios and analyze their
impact on the security of the target systems. The
ultimate objective would be to build representative
stochastic models that will enable us to evaluate the
ability of computing systems to resist to attacks and to
validate them based on real attack data.
Acknowledgement. This work has been carried out in the
context of the CADHo project, an ongoing research action
funded by the French ACI “Securité & Informatique”
(www.cadho.org). It is partially supported by the ReSIST
European Network of Excellence (www .resist-noe.org).
References
[1] M. Bailey, E. Cooke, F. Jahanian, J. Nazario, and D.
Watson, "The Internet Motion Sensor: A Distributed
Blackhole Monitoring System," Proc. 12th Annual
Network and Distributed System Security Symposium
(NDSS), San Diego, CA, Feb. 2005.
[2] Home Page of the CAIDA Project, http://www.caida.org/
[3] DShield Distributed Detection System homepage,
http://www.honeynet.org/
[4] E. Alata, M. Dacier, Y. Deswarte, M. Kaâniche, K.
Kortchinsky, V. Nicomette, V.H. Pham, F. Pouget,
Collection and Analysis of Attack data based on
honeypots deployed on the Internet”, 1st Workshop on
Quality of Protection, Milano, Italy, September 2005.
[5] F. Pouget, M. Dacier, V. H. Pham, “Leurré.com: On the
Advantages of Deploying a Large Scale Distributed
Honeypot Platform”, Proc. E-Crime and Computer
Evidence Conference (ECCE 2005), Monaco, Mars 2005.
[6] L. Spitzner, Honeypots: Tracking Hackers, Addison-
Wesley, ISBN from-321-10895-7, 2002
[7] Project Leurré.com. Publications web page,
http://www.leurrecom.org/paper.htm
[8] M. Dacier, F. Pouget, H. Debar, “Honeypots: Practical
Means to Validate Malicious Fault Assumptions on the
Internet”, Proc. 10th IEEE International Symposium
Pacific Rim Dependable Computing (PRDC10), Tahiti,
March 2004, pages 383-388.
[9] M. Dacier, F. Pouget, H. Debar, “Attack Processes found
on the Internet”, Proc. OTAN Symp. on Adaptive Defense
in Unclassified Networks, Toulouse, France, April 2004.
[10] Honeyd Home page,
http://www.citi.umich.edu/u/provos/honeyd/
[11] R statistical package Home page, http://www.r-project.org
|
0704.0862 | The Low CO Content of the Extremely Metal Poor Galaxy I Zw 18 | Draft version October 24, 2018
Preprint typeset using LATEX style emulateapj v. 08/22/09
THE LOW CO CONTENT OF THE EXTREMELY METAL POOR GALAXY I ZW 18
Adam Leroy
, John Cannon
, Fabian Walter
, Alberto Bolatto
, Axel Weiss
Draft version October 24, 2018
ABSTRACT
We present sensitive molecular line observations of the metal-poor blue compact dwarf I Zw 18
obtained with the IRAM Plateau de Bure interferometer. These data constrain the CO J = 1 → 0
luminosity within our 300 pc (FWHM) beam to be LCO < 1×10
5 K km s−1 pc2 (ICO < 1 K km s
−1), an
order of magnitude lower than previous limits. Although I Zw 18 is starbursting, it has a CO luminosity
similar to or less than nearby low-mass irregulars (e.g. NGC 1569, the SMC, and NGC 6822). There is
less CO in I Zw 18 relative to its B-band luminosity, H I mass, or star formation rate than in spiral or
dwarf starburst galaxies (including the nearby dwarf starburst IC 10). Comparing the star formation
rate to our CO upper limit reveals that unless molecular gas forms stars much more efficiently in
I Zw 18 than in our own galaxy, it must have a very low CO-to-H2 ratio, ∼ 10
−2 times the Galactic
value. We detect 3mm continuum emission, presumably due to thermal dust and free-free emission,
towards the radio peak.
Subject headings: galaxies: individual (I Zw 18); galaxies: ISM; galaxies: dwarf, radio lines: ISM
1. INTRODUCTION
With the lowest nebular metallicity in the nearby uni-
verse (12+ logO/H ≈ 7.2, Skillman & Kennicutt 1993),
the blue compact dwarf I Zw 18 plays an important role
in our understanding of galaxy evolution. Vigorous ongo-
ing star formation implies the presence of molecular gas,
but direct evidence has been elusive. Vidal-Madjar et al.
(2000) showed that there is not significant diffuse H2, but
Cannon et al. (2002) found ∼ 103 M⊙ of dust organized
in clumps with sizes 50 – 100 pc. Vidal-Madjar et al.
(2000) did not rule out compact, dense molecular clouds,
and Cannon et al. (2002) argued that this dust may in-
dicate the presence of molecular gas.
Observations by Arnault et al. (1988) and
Gondhalekar et al. (1998) failed to detect CO J = 1 → 0
emission, the most commonly used tracer of H2. This
is not surprising. The low dust abundance and intense
radiation fields found in I Zw 18 may have a dramatic
impact on the formation of H2 and structure of molecular
clouds. A large fraction of the H2 may exist in extended
envelopes surrounding relatively compact cold cores. In
these envelopes, H2 self-shields while CO is dissociated
(Maloney & Black 1988). The result may be that in
such galaxies [CII] or FIR emission trace H2 better than
CO (Madden et al. 1997; Israel 1997a; Pak et al. 1998).
Further, H2 may simply be underabundant, as there is a
lack of grains on which to form while photodissociation
is enhanced by an intense UV field. Indeed, Bell et al.
(2006) found that at Z = Z⊙/100, a molecular cloud
may take as long as a Gyr to reach chemical equilibrium.
A low CO content in I Zw 18 is then expected, and a
stringent upper limit would lend observational support to
predictions for molecular cloud structure at low metallic-
1 Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117,
Heidelberg, Germany; email: [email protected]
2 Astronomy Department, Wesleyan University, Middletown, CT
06459, [email protected]
3 Radio Astronomy Lab, UC Berkeley, 601 Campbell Hall,
Berkeley, CA, 94720
4 MPIfR, Auf dem Hügel 69, 53121, Bonn, Germany
ity. However, while the existing upper limits are sensitive
in an absolute sense, they do not even show I Zw 18 to
have a lower normalized CO content than a spiral galaxy
(e.g. less CO per B-band luminosity). The low luminos-
ity (MB ≈ −14.7, Gil de Paz et al. 2003) and large dis-
tance (d=14 Mpc, Izotov & Thuan 2004) of this system
require very sensitive observations to set a meaningful
upper limit.
In this letter we present observations, obtained with
the IRAM Plateau de Bure Interferometer (PdBI)5, that
constrain the CO luminosity, LCO, to be equal to or less
than that of nearby CO-poor (non-starbursting) dwarf
irregulars.
2. OBSERVATIONS
I Zw 18 was observed with the IRAM Plateau de
Bure Interferometer on 17, 21, and 27 April and 13
May 2004 for a total of 11 hours. The phase calibrators
were 0836+710 (Fν(115GHz) ≈ 1.1 Jy), and 0954+556
(Fν(115GHz) ≈ 0.35 Jy). One or more calibrators with
known fluxes were also observed during each track. The
data were reduced at the IRAM facility in Grenoble us-
ing the GILDAS software package; maps were prepared
using AIPS. The final CO J = 1 → 0 data cube has beam
size 5.59′′ × 3.42′′, and a velocity (frequency) resolution
of 6.5 km s−1 (2.5 MHz). The velocity coverage stretches
from vLSR ≈ 50 to 1450 km s−1. The data have an RMS
noise of 3.77 mJy beam−1 (18 mK; 1 Jy beam−1 = 4.8 K).
The 44′′ (FWHM) primary beam completely covers
the galaxy. Based on variation of the relative fluxes of
the calibrators, we estimate the gain uncertainty to be
< 15%.
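As a quick consistency check on these numbers, the standard Rayleigh-Jeans conversion for a Gaussian beam reproduces the quoted values; the short Python sketch below assumes only the beam size and observing frequency given above.

```python
def mjy_beam_to_kelvin(s_mjy, nu_ghz, bmaj_arcsec, bmin_arcsec):
    """Rayleigh-Jeans brightness temperature for a Gaussian beam:
    T_B [K] = 1.222e3 * S[mJy] / (nu[GHz]^2 * theta_maj["] * theta_min["])."""
    return 1.222e3 * s_mjy / (nu_ghz**2 * bmaj_arcsec * bmin_arcsec)

nu, bmaj, bmin = 115.271, 5.59, 3.42   # CO J=1-0 frequency (GHz) and synthesized beam (arcsec)
print(mjy_beam_to_kelvin(1000.0, nu, bmaj, bmin))   # ~4.8 K for 1 Jy/beam
print(mjy_beam_to_kelvin(3.77, nu, bmaj, bmin))     # ~0.018 K = 18 mK for the 3.77 mJy/beam rms
```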
3. RESULTS
3.1. Upper Limit on CO Emission
To search for significant CO emission, we smooth the
cube to 20 km s−1 velocity resolution, a typical line width
5 Based on observations carried out with the IRAM Plateau
de Bure Interferometer. IRAM is supported by INSU/CNRS
(France), MPG (Germany) and IGN (Spain).
http://arxiv.org/abs/0704.0862v1
Fig. 1.— CO 1 → 0 spectra of I Zw 18 towards the radio continuum/Hα peak (left) and the highest-significance spectrum (right), which is still too faint to classify as more than marginal. The locations of both spectra are shown in Figure 2. Dashed horizontal lines show the magnitude of the RMS noise.
for CO at our spatial resolution (e.g., Helfer et al. 2003).
The noise per channel map in this smoothed cube is
σ20 ≈ 0.25 K km s−1. Over the H I velocity range (710
– 810 km s−1, van Zee et al. 1998), there are no regions
with ICO,20 > 1 K km s−1 (4σ) within the primary beam.
We pick a slightly conservative upper limit for two rea-
sons. First, if there were CO emission with this intensity
we would be certain of detecting it. Second, the noise in
the cube is slightly non-Gaussian, so that the false posi-
tive rate for ICO,20 > 1 K km s−1 — estimated from the
negatives and the channel maps outside the H I velocity
range — is ∼ 0.2%, very close to that of a 3σ deviate.
For d = 14 Mpc, the synthesized beam has a FWHM
of 300 pc and an area of 1.0 × 10^5 pc2. Our intensity
limit, ICO < 1 K km s−1, therefore translates to a CO
luminosity limit of LCO < 1 × 10^5 K km s−1 pc2.
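The intensity-to-luminosity step is simply the Gaussian-beam area expressed in pc2 at the adopted distance; a minimal sketch, using the beam and distance quoted above:

```python
import numpy as np

d_mpc, bmaj, bmin = 14.0, 5.59, 3.42            # distance (Mpc) and beam FWHM (arcsec)
pc_per_arcsec = d_mpc * 1e6 / 206265.0          # ~68 pc per arcsec at 14 Mpc

fwhm_pc = np.sqrt(bmaj * bmin) * pc_per_arcsec  # geometric-mean FWHM, ~300 pc
area_pc2 = np.pi * bmaj * bmin * pc_per_arcsec**2 / (4.0 * np.log(2.0))  # ~1.0e5 pc^2

L_CO_limit = 1.0 * area_pc2                     # I_CO < 1 K km/s  ->  L_CO < ~1e5 K km/s pc^2
print(fwhm_pc, area_pc2, L_CO_limit)
```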
There is a marginal signal toward the southern knot
of Hα emission (9h34m02s.4, 55◦14′23′′.0). This emission
has the largest |ICO,20| found over the H I velocity range,
corresponding to LCO ∼ 8×10^4 K km s−1 pc2, just below
our limit. This same line of sight also shows |ICO| > 2σ
over three consecutive channels, a feature seen along only
one other line of sight (in negative) over the H I velocity
range. The marginal signal is suggestively located in the
southeast of I Zw 18, where Cannon et al. (2002) identi-
fied several potential sites of molecular gas from regions
of relatively high extinction. While tantalizing, the sig-
nal is not strong enough to be categorized as a detection.
Figure 1 shows CO spectra towards the Hα/radio contin-
uum peak (Cannon et al. 2002, 2005; Hunt et al. 2005a,
see Figure 2) and this marginal signal.
3.2. Continuum Emission
We average the data over all channels and produce a
continuum map with noise σ115GHz = 0.35 mJy beam−1.
The highest value in the map is I115GHz = 1.06 ± 0.35 mJy beam−1
at α2000 = 9h34m02s.1, δ2000 = +55◦14′27′′.0.
This is within a fraction of a beam of the 1.4 GHz peak
identified by Cannon et al. (2005, α2000 = 9h34m02s.1, δ2000 = +55◦14′28′′.06)
and Hunt et al. (2005a, α2000 = 9h34m02s, δ2000 = +55◦14′29′′.06). Figure 2 shows the
radio continuum peak and 115 GHz continuum contours
plotted over Hα emission from I Zw 18 (Cannon et al.
2002). There is only one other region with |I115GHz| >
3σ115GHz within the primary beam and the star-forming
extent of I Zw 18 occupies ≈ 10 % of the primary beam.
Therefore, we estimate the chance of a false positive co-
incident with the galaxy to be only ∼ 10%.
4. DISCUSSION
Here we discuss the implications of our CO upper limit
and continuum detection. We adopt the following prop-
erties for I Zw 18, all scaled to d = 14 Mpc: MB =
−14.7 (Gil de Paz et al. 2003), MHI = 1.4 × 10
(van Zee et al. 1998), Hα luminosity log10 LHα = 39.9 erg s−1 (Cannon et al. 2002; Gil de Paz et al. 2003), and 1.4 GHz
flux F1.4 = 1.79 mJy (Cannon et al. 2005).
4.1. Point Source Luminosity
Our upper limit along each line of sight, LCO <
1 × 10^5 K km s−1 pc2, matches the luminosity of
a fairly massive Galactic giant molecular cloud (Blitz
1993). For a Galactic CO-to-H2 conversion factor, 2 ×
10^20 cm−2 (K km s−1)−1, the corresponding molecular
gas mass is MMol ≈ 4.4 × 10^5 M⊙, similar to the mass of
the Orion-Monoceros complex (e.g. Wilson et al. 2005).
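A sketch of the corresponding arithmetic; the factor of 1.36 for helium is an assumption made here so that the result matches the quoted 4.4 × 10^5 M⊙ (i.e. the quoted mass evidently includes helium):

```python
X_CO = 2.0e20                                     # cm^-2 (K km/s)^-1, Galactic value quoted above
m_H, pc_cm, Msun = 1.6726e-24, 3.086e18, 1.989e33  # cgs constants

alpha_noHe = X_CO * 2.0 * m_H * pc_cm**2 / Msun    # ~3.2 Msun per (K km/s pc^2), H2 only
alpha_CO = 1.36 * alpha_noHe                       # ~4.4 with the assumed helium correction

L_CO = 1.0e5                                       # K km/s pc^2, our per-beam limit
print(alpha_CO * L_CO)                             # ~4.4e5 Msun
```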
4.2. Comparison With More Luminous Galaxies
In galaxies detected by CO surveys, the CO content
per unit B-band luminosity is fairly constant. Figure 3
shows the CO luminosity normalized by B-band lumi-
nosity, LCO/LB, as a function of absolute B-band mag-
nitude (LB is extinction corrected). LCO/LB is nearly
constant over two orders of magnitude in LB, though
with substantial scatter (much of it due to the extrapo-
lation from a single pointing to LCO).
Based on these data and assuming that LCO is not
a function of the metallicity of the galaxy, we may ex-
trapolate to an expected CO luminosity for I Zw 18.
For MB,IZw18 ≈ −14.7 the CO luminosity corresponding
to the median value of LCO/LB (dashed line) in
Figure 3 is LCO,IZw18 ≈ 1.7 × 10^6 K km s−1 pc2.
The Hα, 1.4 GHz, and H I luminosities lead to similar
predictions. Young et al. (1996) found MH2/LHα ≈
10 M⊙/L⊙ for Sd–Irr galaxies, which implies LCO,IZw18 ∼
4 × 10^6 K km s−1 pc2. Murgia et al. (2005) measured
FCO/F1.4 ≈ 10 Jy km s−1 (mJy)−1 for spirals, which would
imply LCO,IZw18 ∼ 10^7 K km s−1 pc2. For Sd/Sm galaxies,
MH2/MHI ≈ 0.2 (Young & Scoville 1991), leading
to LCO,IZw18 ∼ 5 × 10^6 K km s−1 pc2. Both MH2/LHα
and MH2/MHI tend to be even higher in earlier-type
spirals.
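As an illustration, the radio-continuum scaling can be turned into a luminosity with the standard CO line-luminosity relation, L'CO = 3.25 × 10^7 S∆v ν_obs^-2 D_L^2 (1+z)^-3 (in K km s−1 pc2, with S∆v in Jy km s−1, ν_obs in GHz and D_L in Mpc); the sketch below uses only numbers quoted in this section:

```python
F_14 = 1.79                  # mJy, 1.4 GHz flux of I Zw 18 (Cannon et al. 2005)
S_dv = 10.0 * F_14           # Jy km/s, from F_CO/F_1.4 ~ 10 Jy km/s per mJy
nu_obs, D_L = 115.271, 14.0  # GHz, Mpc (z ~ 0, so the (1+z) factors are dropped)

L_CO_pred = 3.25e7 * S_dv * D_L**2 / nu_obs**2
print(f"{L_CO_pred:.1e}")    # ~9e6 K km/s pc^2, i.e. of order 10^7 as quoted
```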
Therefore, surveys would predict LCO,IZw18 ≳ 2 ×
10^6 K km s−1 pc2, very close to the previously established
upper limits of 2 − 3 × 10^6 K km s−1 pc2 (Arnault et al.
1988; Gondhalekar et al. 1998). With the present observations,
we constrain LCO < 1 × 10^5 K km s−1 pc2 and
thus clearly rule out LCO ∼ 10^6 K km s−1 pc2. This
may be seen in Figure 3; even if I Zw 18 has the highest
possible CO content, it will still have a lower LCO/LB
than 97% of the survey galaxies.
4.3. Comparison With Nearby Metal-Poor Dwarfs
The subset of irregular galaxies detected by CO surveys
tend to be CO-rich and actively star-forming, resembling
scaled-down versions of spiral galaxies (Young et al.
1995, 1996; Leroy et al. 2005). Such galaxies may not
be representative of all dwarfs. Because they are nearby,
several of the closest dwarf irregulars have been detected
Fig. 2.— V-band (left) and Hα (right, Cannon et al. 2002) images of I Zw 18. Overlays on the left image show the size of the synthesized beam and the locations of the spectra shown in Figure 1. Contours on the right image show continuum emission in increments of 0.5σ significance and the location of the radio continuum peak. The primary beam is larger than the area shown. Both optical maps are on linear stretches. (V-band data obtained from the MAST Archive, originally observed for GO program 9400, PI: T. Thuan.)
despite very small LCO. With their low masses and
metallicities, they may represent good points of compar-
ison for I Zw 18. Table 1 and Figure 3 show CO lumi-
nosities and LCO/LB for four nearby dwarfs: NGC 1569,
the Small Magellanic Cloud (SMC), NGC 6822, and
IC 10. The SMC, NGC 1569, and NGC 6822 have
LCO ∼ 10^5 K km s−1 pc2, close to our upper limit, and
occupy a region of LCO/LB-LB parameter space similar
to I Zw 18. All four of these galaxies have active star for-
mation but very low CO content relative to their other
properties.
We test whether our observations would have detected
CO in NGC 1569, the SMC, and IC 10 at the plausible
lower limit of 10 Mpc (from H0 = 72 km s−1 Mpc−1) or our
adopted distance of 14 Mpc. We convolve the integrated
intensity maps to resolutions of 210 and 300 pc and mea-
sure the peak integrated intensity. The results appear in
columns 4 and 5 of Table 1. The PdBI observations of
NGC 1569 resolve out most of the flux, so we also apply
this test to a distribution with the size and luminosity
derived by Greve et al. (1996) from single dish observa-
tions. Our observations would detect an analog to IC 10
but not the SMC, with NGC 1569 an intermediate case.
With a factor of ∼ 3 better sensitivity (requiring ∼ 10
times more observing time) we would expect to detect
all three nearby galaxies. However, achieving such sen-
sitivity with present instrumentation will be quite chal-
lenging. ALMA will likely be necessary to place stronger
constraints on CO in galaxies like I Zw 18.
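The time estimate follows from the radiometer equation (noise ∝ t^-1/2); a one-line check, with the 11-hour integration from Section 2 as the reference:

```python
# rms noise scales as t**-0.5, so a factor f in sensitivity costs ~f**2 in time
current_hours, f = 11.0, 3.0
print(f**2, f**2 * current_hours)   # ~9-10x more time, i.e. of order 100 hours on source
```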
IC 10 may be the nearest blue compact dwarf
(Richer et al. 2001), so it may be telling that we would
detect it at the distance of I Zw 18. The blue compact
galaxies that have been detected in CO have LCO/LB
similar to IC 10 (Gondhalekar et al. 1998, the diamonds
in Figure 3). Most searches for CO towards BCDs have
yielded nondetections, so those detected may not be rep-
resentative, but I Zw 18 is clearly not among the “CO-
rich” portion of the BCD population.
4.4. Interpretation of the Continuum
We measure continuum intensity of F115GHz = 1.06±
0.35 mJy towards the radio continuum peak. The
continuum is detected along only one line of sight,
so we refer to it here as a point source and com-
pare it to integrated values for I Zw 18. F115GHz is
expected to be the product of mainly two types of
emission: thermal free-free emission and thermal dust
emission. At long wavelengths, the integrated ther-
mal free-free emission is F1.4GHz(free− free) ≈ 0.52 –
0.75 mJy (Cannon et al. 2005; Hunt et al. 2005a), imply-
ing F115GHz(free− free) = 0.36 – 0.51 mJy at 115 GHz
(Fν ∝ ν^-0.1). The Hα flux predicts a similar value,
F115GHz(free− free) = 0.34 mJy (Cannon et al. 2005,
Equation 1). Hunt et al. (2005b) placed an upper limit
of Fν(850) < 2.5 mJy on dust continuum emission at
850µm; this is consistent with the ∼ 5 × 10^3 M⊙ esti-
mated by Cannon et al. (2002) given almost any reason-
able dust properties. Extrapolating this to 2.6 mm as-
suming a pure blackbody spectrum, the shallowest plau-
sible SED, constrains thermal emission from dust to be
< 0.25 mJy at 115 GHz. Based on these data, we would
predict F115GHz . 0.75 mJy. Thus our measured F115GHz
is consistent with, but somewhat higher than, the ther-
mal free-free plus dust emission expected based on opti-
cal, centimeter, and submillimeter data.
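A sketch of this continuum budget, using only the numbers quoted above; small differences with respect to the quoted ranges reflect rounding and the exact frequencies assumed:

```python
# Free-free: F_nu ~ nu^-0.1 scaled from 1.4 GHz; dust: the 850 um upper limit
# extrapolated with the shallowest plausible slope, a Rayleigh-Jeans blackbody F_nu ~ nu^2.
ff_14ghz = (0.52, 0.75)                                    # mJy at 1.4 GHz
ff_115ghz = [f * (115.0 / 1.4)**-0.1 for f in ff_14ghz]    # ~0.33-0.48 mJy (text: 0.36-0.51)

dust_850um = 2.5                                           # mJy upper limit at 850 um
dust_115ghz = dust_850um * (850.0 / 2600.0)**2             # ~0.27 mJy (text: < 0.25 mJy)

print(ff_115ghz, dust_115ghz, max(ff_115ghz) + dust_115ghz)  # total ~0.75 mJy
```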
4.5. Relation to Star Formation
I Zw 18 has a star formation rate ∼ 0.06 – 0.1 M⊙
yr−1, based on Hα and cm radio continuum measure-
ments (Cannon et al. 2002; Kennicutt 1998a; Hunt et al.
2005a). Our continuum flux suggests a slightly higher
value ≈ 0.15 – 0.2 M⊙ yr−1 (following Hunt et al. 2005a;
Condon 1992), with the exact value depending on the
contribution from thermal dust emission. For any value
in this range, the star formation rate per CO luminosity,
SFR/LCO is much higher in I Zw 18 than in spirals. For
Fig. 3.— CO luminosity normalized by absolute blue magnitude for galaxies with Hubble Type Sb or later (black circles, Young et al. 1995; Elfhag et al. 1996; Böker et al. 2003; Leroy et al. 2005). We also plot nearby dwarfs from Table 1 (crosses) and blue compact galaxies compiled by Gondhalekar et al. (1998, diamonds). The shaded region shows our upper limit for I Zw 18, with the range in MB for distances from 10 to 20 Mpc. The dashed line and light shaded region show the median value and 1σ scatter in LCO/LB for spirals and dwarf starbursts. Methodology: We extrapolate from ICO in central pointings to LCO assuming the CO to have an exponential profile with scale length 0.1 d25 (Young et al. 1995), including only galaxies where the central pointing measures > 20% of LCO. We adopt B magnitudes (corrected for internal and Galactic extinction), distances (Tully-Fisher when available, otherwise Virgocentric-flow corrected Hubble flow), and radii from LEDA (Paturel et al. 2003).
comparison, our upper limit and the molecular “Schmidt
Law” derived by Murgia et al. (2002) predict a star formation
rate ≲ 2 × 10^-4 M⊙ yr−1. Fits by Young et al.
(1996) and Kennicutt (1998b, applied to just the molecular
limit) yield similar values. Again, I Zw 18 is similar
to the SMC and NGC 6822, which have star formation
rates of 0.05 M⊙ yr−1 and 0.04 M⊙ yr−1 (Wilke et al.
2004; Israel 1997b) and LCO ∼ 10^5 K km s−1 pc2.
4.6. Variations in XCO
Several calibrations of the CO-to-H2 conversion factor,
XCO as a function of metallicity exist in the literature.
The topic has been controversial and these calibrations
range from little or no dependence (e.g. Walter 2003;
Rosolowsky et al. 2003) to very steep dependence (e.g.,
XCO ∝ Z^-2.7; Israel 1997a). Comparing the star for-
mation rate to our CO upper limit, we may rule out
that I Zw 18 has a Galactic XCO unless molecular gas
in I Zw 18 forms stars much more efficiently than in the
Galaxy. Either the ratio of CO-to-H2 is low in I Zw 18
or molecular gas in this galaxy forms stars with an effi-
ciency two orders of magnitude higher than that in spiral
galaxies.
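The argument can be compressed into a single ratio using only the numbers quoted in Sections 4.5 and 4.6; a minimal sketch:

```python
sfr_obs = (0.06, 0.2)     # Msun/yr, observed (Halpha, cm and mm continuum estimates)
sfr_schmidt = 2e-4        # Msun/yr, Schmidt-law prediction from L_CO < 1e5 with Galactic X_CO

mismatch = [s / sfr_schmidt for s in sfr_obs]
print(mismatch)           # ~300-1000: either the star formation efficiency of the molecular
                          # gas is ~100x the spiral value, or X_CO is ~100x Galactic
                          # (a CO-to-H2 ratio ~10^-2 of the Galactic value)
```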
5. CONCLUSIONS
We present new, sensitive observations of the metal-
poor dwarf galaxy I Zw 18 at 3 mm using the Plateau de
Bure Interferometer. These data constrain the integrated
CO J = 1 → 0 intensity to be ICO < 1 K km s−1 over our
300 pc (FWHM) beam and the luminosity to be LCO <
1 × 10^5 K km s−1 pc2.
I Zw 18 has less CO relative to its B-band luminosity,
H I mass, or SFR than spiral galaxies or dwarf starbursts,
including more metal-rich blue compact galaxies such as
IC 10 (ZIC 10 ∼ Z⊙/4, Lee et al. 2003). Because of its
small size and large distance, these are the first observa-
tions to impose this constraint.
We show that I Zw 18 should be grouped with several
local analogs — NGC 1569, the SMC, NGC 6822 — as
a galaxy with active star formation but a very low CO
content relative to its other properties. In these galax-
ies, observations suggest that the environment affects the
molecular gas and these data suggest that the same is
true in I Zw 18. A simple comparison of star formation
rate to CO content shows that this must be true at a
basic level: either the ratio of CO to H2 is dramatically
low in I Zw 18 or molecular gas in this galaxy forms stars
with an efficiency two orders of magnitude higher than
that in spiral galaxies.
We detect 3mm continuum with F115 GHz = 1.06 ±
0.35 mJy coincident with the radio peak identified by
Cannon et al. (2005) and Hunt et al. (2005a). This flux
is consistent with but somewhat higher than the thermal
free-free plus dust emission one would predict based on
centimeter, submillimeter, and optical measurements.
Finally, we note that improving on this limit with cur-
rent instrumentation will be quite challenging. The order
of magnitude increase in sensitivity from ALMA will be
needed to place stronger constraints on CO in galaxies
like I Zw 18.
We thank Roberto Neri for his help reducing the data.
We acknowledge the usage of the HyperLeda database
(http://leda.univ-lyon1.fr).
REFERENCES
Arnault, P., Kunth, D., Casoli, F., & Combes, F. 1988, A&A,
205, 41
Bell, T. A., Roueff, E., Viti, S., & Williams, D. A. 2006, MNRAS,
371, 1865
Blitz, L. 1993, Protostars and Planets III, 125
Böker, T., Lisenfeld, U., & Schinnerer, E. 2003, A&A, 406, 87
Cannon, J. M., Skillman, E. D., Garnett, D. R., & Dufour, R. J.
2002, ApJ, 565, 931
Cannon, J. M., Walter, F., Skillman, E. D., & van Zee, L. 2005,
ApJ, 621, L21
Condon, J. J. 1992, ARA&A, 30, 575
Gil de Paz, A., Madore, B. F., & Pevunova, O. 2003, ApJS, 147,
Elfhag, T., Booth, R. S., Hoeglund, B., Johansson, L. E. B., &
Sandqvist, A. 1996, A&AS, 115, 439
Gondhalekar, P. M., Johansson, L. E. B., Brosch, N., Glass, I. S.,
& Brinks, E. 1998, A&A, 335, 152
Greve, A., Becker, R., Johansson, L. E. B., & McKeith, C. D.
1996, A&A, 312, 391
Helfer, T. T., Thornley, M. D., Regan, M. W., Wong, T., Sheth,
K., Vogel, S. N., Blitz, L., & Bock, D. C.-J. 2003, ApJS, 145,
Hunt, L. K., Dyer, K. K., & Thuan, T. X. 2005a, A&A, 436, 837
Hunt, L., Bianchi, S., & Maiolino, R. 2005b, A&A, 434, 849
Israel, F. P. 1997, A&A, 328, 471
Israel, F. P. 1997, A&A, 317, 65
Izotov, Y. I., & Thuan, T. X. 2004, ApJ, 616, 768
TABLE 1
CO in Nearby Low Mass Galaxies

Galaxy      MB (mag)   LCO (K km s−1 pc2)   ICO,210^a (K km s−1)   ICO,300^a (K km s−1)   Reference
NGC 1569    −16.5      1.2 × 10^5           1.1                    0.8                    Greve et al. (1996)
            −16.5      0.2 × 10^5           0.8                    0.5                    Taylor et al. (1999)
SMC         −16        1.5 × 10^5           0.5                    0.4                    Mizuno et al. (2001, 2006)
NGC 6822    −16        1.2 × 10^5           · · ·                  · · ·                  Israel (1997b)
IC 10       −16.5      2.2 × 10^6           3.8                    2.2                    Leroy et al. (2006)
I Zw 18     −14.7      < 2 × 10^6           · · ·                  · · ·                  Arnault et al. (1988); Gondhalekar et al. (1998)
I Zw 18     −14.7      ≲ 1 × 10^5           < 1                    < 1                    this paper

a Peak integrated intensity at 210 and 300 pc, corresponding to our beam size at 10 and 14 Mpc, respectively.
Kennicutt, R. C., Jr. 1998a, ARA&A, 36, 189
Kennicutt, R. C., Jr. 1998b, ApJ, 498, 541
Lee, H., McCall, M. L., & Richer, M. G. 2003, AJ, 125, 2975
Leroy, A., Bolatto, A. D., Simon, J. D., & Blitz, L. 2005, ApJ,
625, 763
Leroy, A., Bolatto, A., Walter, F., & Blitz, L. 2006, ApJ, 643, 825
Madden, S. C., Poglitsch, A., Geis, N., Stacey, G. J., & Townes,
C. H. 1997, ApJ, 483, 200
Maloney, P., & Black, J. H. 1988, ApJ, 325, 389
Mizuno, N., Rubio, M., Mizuno, A., Yamaguchi, R., Onishi, T., &
Fukui, Y. 2001, PASJ, 53, L45
Mizuno, N., et al. 2006, in prep.
Murgia, M., Crapsi, A., Moscadelli, L., & Gregorini, L. 2002,
A&A, 385, 412
Murgia, M., Helfer, T. T., Ekers, R., Blitz, L., Moscadelli, L.,
Wong, T., & Paladino, R. 2005, A&A, 437, 389
Pak, S., Jaffe, D. T., van Dishoeck, E. F., Johansson, L. E. B., &
Booth, R. S. 1998, ApJ, 498, 735
Paturel, G., Petit, C., Prugniel, P., Theureau, G., Rousseau, J.,
Brouty, M., Dubois, P., & Cambrésy, L. 2003, A&A, 412, 45
Richer, M. G., et al. 2001, A&A, 370, 34
Rosolowsky, E., Engargiola, G., Plambeck, R., & Blitz, L. 2003,
ApJ, 599, 258
Skillman, E. D., & Kennicutt, R. C., Jr. 1993, ApJ, 411, 655
Taylor, C. L., Kobulnicky, H. A., & Skillman, E. D. 1998, AJ,
116, 2746
Taylor, C. L., Hüttemeister, S., Klein, U., & Greve, A. 1999,
A&A, 349, 424
van Zee, L., Westpfahl, D., Haynes, M. P., & Salzer, J. J. 1998,
AJ, 115, 1000
Vidal-Madjar, A., et al. 2000, ApJ, 538, L77
Walter, F. 2003, IAU Symposium, 221, 176P
Wilke, K., Klaas, U., Lemke, D., Mattila, K., Stickel, M., & Haas,
M. 2004, A&A, 414, 69
Wilson, B. A., Dame, T. M., Masheder, M. R. W., & Thaddeus,
P. 2005, A&A, 430, 523
Young, J. S., et al. 1995, ApJS, 98, 219
Young, J. S., & Scoville, N. Z. 1991, ARA&A, 29, 581
Young, J. S., Allen, L., Kenney, J. D. P., Lesser, A., & Rownd, B.
1996, AJ, 112, 1903
|
0704.0863 | A binary model for the UV-upturn of elliptical galaxies (MNRAS version) |
A binary model for the UV-upturn of elliptical galaxies
Z. Han1⋆, Ph. Podsiadlowski2, A.E. Lynas-Gray2
1 National Astronomical Observatories / Yunnan Observatory, the Chinese Academy of Sciences, Kunming, 650011, China
2 University of Oxford, Department of Physics, Keble Road, Oxford, OX1 3RH
6 November 2018
ABSTRACT
The discovery of a flux excess in the far-ultraviolet (UV) spectrum of elliptical galaxies was a
major surprise in 1969. While it is now clear that this UV excess is caused by an old population
of hot helium-burning stars without large hydrogen-rich envelopes, rather than young stars,
their origin has remained a mystery. Here we show that these stars most likely lost their
envelopes because of binary interactions, similar to the hot subdwarf population in our own
Galaxy. We have developed an evolutionary population synthesis model for the far-UV excess
of elliptical galaxies based on the binary model developed by Han et al. (2002, 2003) for the
formation of hot subdwarfs in our Galaxy. Despite its simplicity, it successfully reproduces
most of the properties of elliptical galaxies with a UV excess: the range of observed UV
excesses, both in (1550 − V ) and (2000 − V ), and their evolution with redshift. We also
present colour-colour diagrams for use as diagnostic tools in the study of elliptical galaxies.
The model has major implications for understanding the evolution of the UV excess and of
elliptical galaxies in general. In particular, it implies that the UV excess is not a sign of age,
as had been postulated previously, and predicts that it should not be strongly dependent on the
metallicity of the population, but exists universally from dwarf ellipticals to giant ellipticals.
Key words: galaxies: elliptical and lenticular, cD – galaxies : starburst – ultraviolet: galaxies
– stars: binaries: close – stars: subdwarfs
1 INTRODUCTION
A long-standing problem in the study of elliptical galaxies is the
far-ultraviolet (UV) excess in their spectra. Traditionally, elliptical
galaxies were supposed to be passively evolving and not contain
any young stars that radiate in the far-UV. Therefore, the discovery
of an excess of radiation in the far-UV by the Orbiting Astronom-
ical Observatory mission 2 (OAO-2) (Code 1969) came as a com-
plete surprise. Indeed, this was one of the first major discoveries
in UV astronomy and became a basic property of elliptical galax-
ies. This far-UV excess is often referred to as the “UV-upturn”,
since the flux increases in the spectral energy distributions of el-
liptical galaxies as the wavelength decreases from 2000 to 1200Å.
The UV-upturn is also known as UV rising-branch, UV rising flux,
or simply UVX (see the review by O’Connell 1999).
The UV-upturn phenomenon exists in virtually all ellipti-
cal galaxies and is the most variable photometric feature. A fa-
mous correlation between UV-upturn magnitude and metallicity
was found by Burstein et al. (1988) from International Ultravi-
olet Explorer Satellite (IUE) spectra of 24 quiescent early-type
galaxies (hereafter BBBFL relation). The UV-upturn could be
important in many respects: for the formation history and evo-
lutionary properties of stars, the chemical enrichment of galax-
ies, galaxy dynamics, constraints to the stellar age and metal-
⋆ E-mail: [email protected]
licity of galaxies and realistic “K-corrections” (O’Connell 1999;
Yi, Demarque & Oemler 1998; Brown 2004; Yi 2004). In particu-
lar, the UV-upturn has been proposed as a possible age indica-
tor for giant elliptical galaxies (Bressan, Chiosi & Tantalo 1996;
Chiosi, Vallenari & Bressan 1997; Yi et al. 1999). The origin of the
UV-upturn, however, has remained one of the great mysteries
of extragalactic astrophysics for some 30 years (Brown 2004),
and numerous speculations have been put forward to explain it:
non-thermal radiation from an active galactic nucleus (AGN),
young massive stars, the central stars of planetary nebulae (PNe)
or post-asymptotic giant branch (PAGB) stars, horizontal branch
(HB) stars and post-HB stars (including post-early AGB stars
and AGB-manqué stars) and accreting white dwarfs (Code 1969;
Hills 1971; Gunn, Stryker & Tinsley 1981; Nesci & Perola 1985;
Mochkovitch 1986; Kjærgaard 1987; Greggio & Renzini 1990).
Using observations made with the Hopkins Ultraviolet Tele-
scope (HUT) and comparing them to synthetic spectra, Ferguson
et al. (1991) and subsequent studies by Brown, Ferguson & David-
sen (1995), Brown et al. (1997), and Dorman, O’Connell & Rood
(1995) were able to show that the UV upturn is mainly caused by
extreme horizontal branch (EHB) stars. Brown et al. (2000b) de-
tected EHB stars for the first time in an elliptical galaxy (the core
of M 32) and therefore provided direct evidence for the EHB origin
of the UV-upturn.
EHB stars, also known as subdwarf B (sdB) stars, are
core-helium-burning stars with extremely thin hydrogen envelopes
http://arxiv.org/abs/0704.0863v3
(Menv ≤ 0.02M⊙), and most of them are believed to have masses
around 0.5M⊙ (Heber 1986; Saffer et al. 1994), as has recently
been confirmed asteroseismologically in the case of PG 0014+067
(Brassard et al. 2001). They have a typical luminosity of a few
L⊙, a lifetime of ∼ 2 × 10^8 yr, and a characteristic surface
temperature of ∼ 25 000K (Dorman, Rood & O’Connell 1993;
D’Cruz et al. 1996; Han et al. 2002). The origin of those hot,
blue stars, as the major source of the far UV radiation, has
remained an enigma in evolutionary population synthesis (EPS)
studies of elliptical galaxies. Two models, both involving single-
star evolution, have previously been proposed to explain the
UV-upturn: a metal-poor model (Lee 1994; Park & Lee 1997)
and a metal-rich model (Bressan, Chiosi & Fagotto 1994;
Bressan, Chiosi & Tantalo 1996; Tantalo et al. 1996;
Yi et al. 1995; Yi, Demarque & Kim 1997;
Yi, Demarque & Oemler 1997; Yi, Demarque & Oemler 1998).
The metal-poor model ascribes the UV-upturn to an old metal-
poor population of hot subdwarfs, blue core-helium-burning stars,
that originate from the low-metallicity tail of a stellar population
with a wide metallicity distribution (Lee 1994; Park & Lee 1997).
This model explains the BBBFL relation since elliptical galaxies
with high average metallicity tend to be older and therefore have
stronger UV-upturns. The model tends to require a very large age
of the population (larger than the generally accepted age of the
Universe), and it is not clear whether the metal-poor population is
sufficiently blue or not. Moreover, the required low-metallicity ap-
pears inconsistent with the large metallicity inferred for the major-
ity of stars in elliptical galaxies (Zhou, Véron-Cetty & Véron 1992;
Terlevich & Forbes 2002; Thomas et al. 2005).
In the metal-rich model the UV-upturn is caused by metal-
rich stars that lose their envelopes near the tip of the first-
giant branch (FGB). This model (Bressan, Chiosi & Fagotto 1994;
Yi, Demarque & Oemler 1997) assumes a relatively high metallic-
ity – consistent with the metallicity of typical elliptical galaxies
(∼ 1 − 3 times the solar metallicity). In the model, the mass-
loss rate on the red-giant branch, usually scaled with the Reimer’s
rate (Reimers 1975), is assumed to be enhanced, where the co-
efficient ηR in the Reimer’s rate is taken to be ∼ 2 − 3 times
the canonical value. In order to reproduce the HB morphology
of Galactic globular clusters, either a broad distribution of ηR is
postulated (D’Cruz et al. 1996), or, alternatively, a Gaussian mass-
loss distribution is applied that is designed to reproduce the dis-
tribution of horizontal-branch stars of a given age and metallicity
(Yi, Demarque & Oemler 1997). This model also needs a popula-
tion age that is generally larger than 10 Gyr. To explain the BBBFL
UV-upturn – metallicity relation, the Reimer’s coefficient ηR is as-
sumed to increase with metallicity, and the enrichment parameter
for the helium abundance associated with the increase in metallic-
ity, ∆Y/∆Z, needs to be > 2.5.
Both of these models are quite ad hoc: there is neither obser-
vational evidence for a very old, low-metallicity sub-population in
elliptical galaxies, nor is there a physical explanation for the very
high mass loss required for just a small subset of stars. Further-
more, the onset of the formation of the hot subdwarfs is very sud-
den as the stellar population evolves, and both models require a
large age for the production of the hot stars. As a consequence, the
models predict that the UV upturn of elliptical galaxies declines
rapidly with redshift. However, this does not appear to be con-
sistent with recent observations with the Hubble Space Telescope
(HST) (Brown et al. 1998; Brown et al. 2000a; Brown et al. 2003).
The recent survey with the Galaxy Evolution Explorer (GALEX)
satellite (Rich et al. 2005) showed that the intrinsic UV-upturn
seems not to decrease in strength with redshift.
The BBBFL relation shows that the (1550 − V ) colour be-
comes bluer with metallicity (or Lick spectral index Mg2), where
(1550 − V ) is the far-UV magnitude relative to the V magni-
tude. The relation could support the metal-rich model. However,
the correlation is far from being conclusively established. Ohl et al.
(1998) studied the far-UV colour gradients in 8 early-type galax-
ies and found no correlation between the FUV-B colour gradients
and the internal metallicity gradients based on the Mg2 spectral
line index, a result not expected from the BBBFL relation. De-
harveng, Boselli & Donas (2002) studied the far-UV radiation of
82 early-type galaxies, a UV-flux selected sample, and compared
them to the BBBFL sample, investigating individual objects with
a substantial record in the refereed literature spectroscopically1 .
They found no correlation between the (2000 − V ) colour and
the Mg2 spectral index. Rich et al. (2005) also found no correla-
tion in a sample of 172 red quiescent early-type galaxies observed
by GALEX and the Sloan Digital Sky Survey (SDSS). Indeed, if
there is a weak correlation in the data, the correlation is in the
opposite sense to that of BBBFL: metal-rich galaxies have red-
der (FUV − r)AB (far-UV magnitude minus red magnitude). On
the other hand, Boselli et al. (2005), using new GALEX data, re-
ported a mild positive correlation between (FUV − NUV )AB,
which is the far-UV magnitude relative to the near-UV, and metal-
licity in a sample of early-type galaxies in the Virgo Cluster. Donas
et al. (2006) use GALEX photometry to construct colour-colour
relationships for nearby early-type galaxies sorted by morphologi-
cal type. They also found a marginal positive correlation between
(FUV − NUV )AB and metallicity. These correlations, however,
do not necessarily support the BBBFL relation, as neither Boselli
et al. (2005) nor Donas et al. (2006) show that (FUV − r)AB cor-
relates significantly with metallicity. Therefore, this apparent lack
of an observed correlation between the strength of the UV-upturn
and metallicity casts some doubt on the metal-rich scenario.
Both models ignore the effects of binary evolution. On
the other hand, hot subdwarfs have long been studied in our
own Galaxy (Heber 1986; Green, Schmidt & Liebert 1986;
Downes 1986; Saffer et al. 1994), and it is now well established
that the vast majority of (and quite possibly all) Galactic hot
subdwarfs are the results of binary interactions. Observation-
ally, more than half of Galactic hot subdwarfs are found in
binaries (Ferguson, Green & Liebert 1984; Allard et al. 1994;
Thejll, Ulla & MacDonald 1995; Ulla & Thejll 1998;
Aznar Cuadrado & Jeffery 2001; Maxted et al. 2001;
Williams et al. 2001; Reed & Stiening 2004), and orbital
parameters have been determined for a significant sam-
ple (Jeffery & Pollacco 1998; Koen, Orosz & Wade 1998;
Saffer, Livio & Yungelson 1998; Kilkenny et al. 1999;
Moran et al. 1999; Orosz & Wade 1999; Wood & Saffer 1999;
Maxted, Marsh & North 2000; Maxted et al. 2000;
Maxted et al. 2001; Napiwotzki et al. 2001; Heber et al. 2002;
Heber et al. 2004; Morales-Rueda, Maxted & Marsh 2004;
Charpinet et al. 2005; Morales-Rueda et al. 2006).
There has also been substantial theoretical progress
(Mengel, Norris & Gross 1976; Webbink 1984;
Iben & Tutukov 1986; Tutukov & Yungelson 1990;
D’Cruz et al. 1996; Sweigart 1997). Recently, Han et al. (2002;
1 Note, however, that some of the galaxies show hints of recent star forma-
tion (Deharveng, Boselli & Donas 2002).
2003) proposed a binary model (hereafter HPMM model) for the
formation of hot subdwarfs in binaries and single hot subdwarfs.
In the model, there are three formation channels for hot subdwarfs,
involving common-envelope (Paczyński 1976) ejection for hot
subdwarf binaries with short orbital periods, stable Roche lobe
overflow for hot subdwarfs with long orbital periods, and the
merger of helium white dwarfs to form single hot subdwarfs.
The model can explain the main observational characteristics
of hot subdwarfs, in particular, their distributions in the orbital
period–minimum companion mass diagram and in the effective
temperature–surface gravity diagram, their distributions of orbital
period and mass function, their binary fraction and the fraction of
hot subdwarf binaries with white dwarf (WD) companions, their
birth rates and their space density. More recent observations (e.g.
Lisker et al. 2004, 2005) support the HPMM model, and the model
is now widely used in the study of hot subdwarfs (e.g. Heber et al.
2004, Morales-Rueda, Maxted & Marsh 2004, Charpinet et al.
2005, Morales-Rueda et al. 2006).
Hot subdwarfs radiate in the UV, and we can apply the HPMM
scenario without any further assumptions to the UV-upturn problem
of elliptical galaxies. The only assumption we have to make is that
the stellar population in elliptical galaxies, specifically their binary
properties, are similar to those in our own galaxy. Indeed, as we
will show in this paper, the UV flux from hot subdwarfs produced
in binaries is important. This implies that any model for elliptical
galaxies that does not include these binary channels is necessarily
incomplete or has to rely on the apriori implausible assumption that
the binary population in elliptical galaxies is intrinsically different.
The main purpose of this paper is to develop an apriori EPS model
for the UV-upturn of elliptical galaxies, by employing the HPMM
scenario for the formation of hot subdwarfs, and to compare the
model results with observations.
The outline of the paper is as follows. We describe the EPS
model in Section 2 and the simulations in Section 3. In Section 4
we present the results and discuss them, and end the paper with a
summary and conclusions in Section 5.
2 THE MODEL
EPS is a technique for modelling the spectrophotomet-
ric properties of a stellar population using our knowledge
of stellar evolution. The technique was first devised by
Tinsley (Tinsley 1968) and has experienced rapid develop-
ment ever since (Bruzual & Charlot 1993; Worthey 1994;
Bressan, Chiosi & Fagotto 1994; Tantalo et al. 1996;
Zhang et al. 2002; Bruzual & Charlot 2003; Zhang et al. 2004a).
Recently, binary interactions have also been incorporated
into EPS studies (Zhang et al. 2004b; Zhang et al. 2005a;
Zhang, Li & Han 2005b; Zhang & Li 2006) with the rapid
binary-evolution code developed by Hurley, Tout & Pols (2002).
In the present paper we incorporate the HPMM model into EPS by
adopting the binary population synthesis (BPS) code of Han et al.
(2003), which was designed to investigate the formation of many
interesting binary-related objects, including hot subdwarfs.
2.1 The BPS code and the formation of hot subdwarfs
The BPS code of Han et al. was originally devel-
oped in 1994 and has been updated regularly ever
since (Han, Podsiadlowski & Eggleton 1994; Han 1995;
Han, Podsiadlowski & Eggleton 1995; Han et al. 1995; Han 1998;
Han et al. 2002; Han et al. 2003; Han & Podsiadlowski 2004).
With the code, millions of stars (including binaries) can be evolved
simultaneously from the zero-age main sequence (ZAMS) to the
white-dwarf (WD) stage or a supernova explosion. The code can
simulate in a Monte-Carlo way the formation of many types of
stellar objects, such as Type Ia supernovae (SNe Ia), double degen-
erates (DDs), cataclysmic variables (CVs), barium stars, planetary
nebulae (PNe) and hot subdwarfs. Note that “hot subdwarfs” in this
paper is used as a collective term for subdwarf B stars, subdwarf O
stars, and subdwarf OB stars. They are core-helium-burning stars
with thin hydrogen envelopes and radiate mainly in the UV (see
Figure 1 for the formation channels of hot subdwarfs).
The main input into the BPS code is a grid of stellar models.
For the purpose of this paper, we use a Population I (pop I) grid
with a typical metallicity Z = 0.02. The grid, calculated with
Eggleton’s stellar evolution code (Eggleton 1971; Eggleton 1972;
Eggleton 1973; Han, Podsiadlowski & Eggleton 1994;
Pols et al. 1995; Pols et al. 1998), covers the evolution of normal
stars in the range of 0.08 − 126.0M⊙ with hydrogen abundance
X = 0.70 and helium abundance Y = 0.28, helium stars in
the range of 0.32 − 8.0M⊙ and hot subdwarfs in the range of
0.35−0.75M⊙ (see Han et al. 2002, 2003 for details). Single stars
are evolved via interpolations in the model grid. In this paper, we
use tFGB instead of log m as the interpolation variable between
stellar evolution tracks, where tFGB is the time from the ZAMS to
the tip of the FGB for a given stellar mass m and is calculated from
fitting formulae. This is to avoid artefacts in following the time
evolution of hot subdwarfs produced from a stellar population.
The code needs to model the evolution of binary stars as well
as of single stars. The evolution of binaries is more complicated due
to the occurrence of Roche lobe overflow (RLOF). The binaries of
main interest here usually experience two phases of RLOF: the first
when the primary fills its Roche lobe which may produce a WD
binary, and the second when the secondary fills its Roche lobe.
The mass gainer in the first RLOF phase is most likely a
main-sequence (MS) star. If the mass ratio q = M1/M2 at the
onset of RLOF is lower than a critical value, qcrit, RLOF is
stable (Paczyński 1965; Paczyński, Ziółkowski & Żytkow 1969;
Plavec, Ulrich & Polidan 1973; Hjellming & Webbink 1987;
Webbink 1988; Soberman, Phinney & van den Heuvel 1997;
Han et al. 2001). For systems experiencing their first phase of
RLOF in the Hertzsprung gap, we use qcrit = 3.2 as is supported
by detailed binary-evolution calculations of Han et al. (2000).
For the first RLOF phase on the FGB or asymptotic giant branch
(AGB), we mainly use qcrit = 1.5. Full binary calculations
(Han et al. 2002) demonstrate that qcrit ∼ 1.2 is typical for
RLOF in FGB stars. We do not explicitly include tidally enhanced
stellar winds (Tout & Eggleton 1988; Eggleton & Tout 1989;
Han et al. 1995) in our calculation. Using a larger value for qcrit
is equivalent to including a tidally enhanced stellar wind to
some degree while keeping the simulation simple (see HPMM
for details). Alternatively, we also adopt qcrit = 1.2 in order to
examine the consequences of varying this important criterion.
For stable RLOF, we assume that a fraction αRLOF of the
mass lost from the primary is transferred onto the gainer, while
the rest is lost from the system (αRLOF = 1 means that RLOF is
conservative). Note, however, that we assume that mass transfer is
always conservative when the donor is a MS star. The mass lost
from the system also takes away a specific angular momentum α
in units of the specific angular momentum of the system. The unit
is expressed as 2πa^2/P, where a is the separation and P is the or-
bital period of the binary (see Podsiadlowski, Joss & Hsu 1992 for
[Figure 1 flowchart (labels as extracted, grouped by channel):
A. Single hot subdwarfs — envelope loss near the tip of the FGB by a stellar wind (rotation, Z?), MsdB = 0.45−0.49 M⊙, or the merger of two He WDs (1 or 2 CE phases), MsdB = 0.40−0.65 M⊙.
B. Stable RLOF (q < 1.2−1.5) — stable RLOF near the tip of the FGB gives a wide hot subdwarf binary with a MS companion; Porb = 10−500 days, MsdB = 0.30−0.49 M⊙.
Common-Envelope Channels (Porb = 0.1−10 days, MsdB = 0.40−0.49 M⊙):
C. Stable RLOF + CE (q < 1.2−1.5) — stable RLOF gives a wide WD binary with a MS companion; subsequent unstable RLOF leads to dynamical mass transfer, a common-envelope (CE) phase and a short-period hot subdwarf binary with a WD companion.
D. CE only (q > 1.2−1.5) — unstable RLOF leads to dynamical mass transfer, a common-envelope (CE) phase and a short-period hot subdwarf binary with a MS companion.]
Figure 1. Single and binary channels to produce hot subdwarfs, core-helium-burning stars with no or small hydrogen-rich envelopes. (A) Single hot subdwarfs
may result from large mass loss near the tip of the first giant branch (FGB), as in the metal-rich model, or from the merger of two helium white dwarfs. (B)
Stable Roche lobe overflow (RLOF) near the tip of the FGB produces hot subdwarfs in wide binaries. (C + D) Common-envelope evolution leads to hot
subdwarfs in very close binaries, where the companion can either be a white dwarf (C) or a main-sequence star (D). The simulations presented in this paper
include all channels except for the metal-rich single star channel.
details). Stable RLOF usually results in a wide WD binary. Some
of the wide WD binaries may contain hot subdwarf stars and MS
companions if RLOF occurs near the tip of the FGB (1st stable
RLOF channel for the formation of hot subdwarfs). In order to re-
produce Galactic hot subdwarfs, we use αRLOF = 0.5 and α = 1
for the first stable RLOF in all systems except for those on the MS
(see HPMM for details).
If RLOF is dynamically unstable, a common envelope (CE)
may be formed (Paczyński 1976), and if the orbital energy de-
posited in the envelope can overcome its binding energy, the CE
may be ejected. For the CE ejection criterion, we introduced two
model parameters, αCE for the common envelope ejection effi-
ciency and αth for the thermal contribution to the binding energy
of the envelope, which we write as
αCE ∆Eorb > Egr − αth Eth, (1)
where ∆Eorb is the orbital energy that is released, Egr is the grav-
itational binding energy and Eth is the thermal energy of the en-
velope. Both Egr and Eth are obtained from full stellar structure
calculations (see Han, Podsiadlowski & Eggleton 1994 for details;
also see Dewi & Tauris 2000) instead of analytical approximations.
CE ejection leads to the formation of a close WD binary. Some of
the close WD binaries may contain hot subdwarf stars and MS com-
panions if the CE occurs near the tip of the FGB (1st CE channel
for the formation of hot subdwarfs). We adopt αCE = αth = 0.75
in our standard model, and αCE = αth = 1.0 to investigate the
effect of varying the CE ejection parameters.
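A minimal sketch of how the ejection criterion of equation (1) is applied, assuming the orbital-energy release is book-kept in the usual way from the pre- and post-CE separations; in the BPS code itself Egr and Eth come from the full stellar-structure models described above, and the function names here are illustrative only.

```python
G, Msun, Rsun = 6.674e-8, 1.989e33, 6.957e10   # cgs

def delta_e_orb(m_donor, m_core, m_comp, a_i, a_f):
    """Orbital energy released (erg) when the orbit shrinks from a_i to a_f (in R_sun)
    while the donor of mass m_donor is reduced to its core of mass m_core (in M_sun)."""
    e_i = -G * m_donor * m_comp * Msun**2 / (2.0 * a_i * Rsun)
    e_f = -G * m_core * m_comp * Msun**2 / (2.0 * a_f * Rsun)
    return e_i - e_f

def ce_ejected(d_e_orb, e_gr, e_th, alpha_ce=0.75, alpha_th=0.75):
    """Equation (1): the envelope is ejected if alpha_CE * dE_orb > E_gr - alpha_th * E_th."""
    return alpha_ce * d_e_orb > e_gr - alpha_th * e_th
```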
The WD binary formed from the first RLOF phase continues
to evolve, and the secondary may fill its Roche lobe as a red giant.
The system then experiences a second RLOF phase. If the mass
ratio at the onset of RLOF is greater than qcrit given in table 3 of
Han et al. (2002), RLOF is dynamically unstable, leading again to a
CE phase. If the CE is ejected, a hot subdwarf star may be formed.
The hot subdwarf binary has a short orbital period and a WD com-
panion (2nd CE channel for the formation of hot subdwarfs). How-
ever, RLOF may be stable if the mass ratio is sufficiently small. In
this case, we assume that the mass lost from the mass donor is all
lost from the system, carrying away the same specific angular mo-
mentum as pertains to the WD companion. Stable RLOF may then
result in the formation of a hot subdwarf binary with a WD com-
panion and a long orbital period (typically ∼ 1000 d, 2nd stable
RLOF channel for the formation of hot subdwarfs).
If the second RLOF phase results in a CE phase and the CE
is ejected, a double white dwarf system is formed (Webbink 1984;
Iben & Tutukov 1986; Han 1998). Some of the double WD systems
contain two helium WDs. Angular momentum loss due to gravita-
tional wave radiation may then cause the shrinking of the orbital
separation until the less massive white dwarf starts to fill its Roche
lobe. This will lead to its dynamical disruption if
M1/M2 > 0.7 − 0.1 (M2/M⊙) (2)
or M1 >∼ 0.3M⊙, where M1 is the mass of the donor (i.e.
the less massive WD) and M2 is the mass of the gainer
(Han & Webbink 1999). This is expected to always lead to a com-
plete merger of the two white dwarfs. The merger can also produce
a hot subdwarf star, but in this case the hot subdwarf star is a single
object (He WD merger channel for the formation of hot subdwarfs).
If the lighter WD is not disrupted, RLOF is stable and an AM CVn
system is formed.
In this paper, we do not include a tidally enhanced stellar wind
explicitly as was done in Han et al. (1995) and Han (1998). In-
stead we use a standard Reimers wind formula (Reimers 1975) with
η = 1/4 (Renzini 1981; Iben & Renzini 1983; Carraro et al. 1996)
which is included in our stellar models. This is to keep the simu-
lations as simple as possible, although the effects of a tidally en-
hanced wind can to some degree be implicitly included by using
a larger value of qcrit. We also employ a standard magnetic brak-
ing law (Verbunt & Zwaan 1981; Rappaport, Verbunt & Joss 1983)
where appropriate (see Podsiadlowski, Han & Rappaport 2002 for
details and further discussion).
2.2 Monte-Carlo simulation parameters
In order to investigate the UV-upturn phenomenon due to hot sub-
dwarfs, we have performed a series of Monte-Carlo simulations
where we follow the evolution of a sample of a million binaries
(single stars are in effect treated as very wide binaries that do not
interact with each other), including the hot subdwarfs produced in
the simulations, according to our grids of stellar models. The sim-
ulations require as input the star formation rate (SFR), the initial
mass function (IMF) of the primary, the initial mass-ratio distribu-
tion and the distribution of initial orbital separations.
(1) The SFR is taken to be a single starburst in most of our
simulations; all the stars have the same age (tSSP) and the same
metallicity (Z = 0.02), and constitute a simple stellar population
(SSP). In some simulations a composite stellar population (CSP) is
also used (Section 3.3).
(2) A simple approximation to the IMF of Miller & Scalo
(1979) is used; the primary mass is generated with the formula of
Eggleton, Fitchett & Tout (1989),
M1 = 0.19X / [(1 − X)^0.75 + 0.032 (1 − X)^0.25], (3)
where X is a random number uniformly distributed between 0 and
1. The adopted range of primary masses is 0.8 to 100.0M⊙. The
studies by Kroupa, Tout & Gilmore (1993) and Zoccali et al. (2000)
support this IMF.
(3) The mass-ratio distribution is quite uncer-
tain. We mainly take a constant mass-ratio distribution
(Mazeh et al. 1992; Goldberg & Mazeh 1994; Heacox 1995;
Halbwachs, Mayor & Udry 1998; Shatsky & Tokovinin 2002),
n(q′) = 1, 0 ≤ q′ ≤ 1, (4)
where q′ = 1/q = M2/M1. As alternatives we also consider a
rising mass-ratio distribution
n(q′) = 2q′, 0 ≤ q′ ≤ 1, (5)
and the case where both binary components are chosen randomly
and independently from the same IMF.
(4) We assume that all stars are members of binary systems
and that the distribution of separations is constant in log a (where
a is the orbital separation) for wide binaries and falls off smoothly
at close separations:
a n(a) = αsep (a/a0)^m for a ≤ a0, and a n(a) = αsep for a0 < a < a1, (6)
where αsep ≈ 0.070, a0 = 10 R⊙, a1 = 5.75 × 10^6 R⊙ = 0.13 pc, and m ≈ 1.2.
equal number of wide binary systems per logarithmic interval and
that approximately 50 per cent of stellar systems are binary systems
with orbital periods less than 100 yr.
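These distributions are straightforward to sample by inverse transform; the sketch below is illustrative (the function names are not from the BPS code), and the paper additionally restricts primary masses to 0.8-100 M⊙.

```python
import numpy as np
rng = np.random.default_rng(42)

def primary_mass(n):
    """Eq. (3): Eggleton, Fitchett & Tout (1989) sampling formula for the Miller-Scalo IMF."""
    X = rng.random(n)
    return 0.19 * X / ((1.0 - X)**0.75 + 0.032 * (1.0 - X)**0.25)

def mass_ratio(n):
    """Eq. (4): flat mass-ratio distribution, n(q') = 1 for 0 <= q' <= 1."""
    return rng.random(n)

def separation(n, a0=10.0, a1=5.75e6, m=1.2, alpha=0.070):
    """Eq. (6): a*n(a) = alpha*(a/a0)^m for a <= a0, alpha (flat in log a) for a0 < a < a1.
    Separations returned in R_sun; simple two-piece inverse-transform sampling."""
    w_low, w_high = alpha / m, alpha * np.log(a1 / a0)   # integrals of n(a) over each piece
    u = rng.random(n)
    low = u < w_low / (w_low + w_high)
    a = np.empty(n)
    a[low] = a0 * rng.random(low.sum())**(1.0 / m)       # CDF proportional to (a/a0)^m
    a[~low] = a0 * (a1 / a0)**rng.random((~low).sum())   # flat in log a
    return a

m1 = primary_mass(100000)       # keep only 0.8-100 Msun, as adopted in the text
m2 = m1 * mass_ratio(100000)
a = separation(100000)
```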
2.3 Spectral library
In order to obtain the colours and the spectral energy distribu-
tion (SED) of the populations produced by our simulations, we
have calculated spectra for hot subdwarfs using plane-parallel
static model stellar atmospheres computed with the ATLAS9 stel-
lar atmosphere code (Kurucz 1992) with the assumption of local
thermodynamic equilibrium (LTE). Solar metal abundances were
adopted (Anders & Grevesse 1989) and line blanketing is approx-
imated by appropriate opacity distribution functions interpolated
for the chosen helium abundance. The resulting model atmosphere
grid covers a wide range of effective temperatures (10,000 ≤
Teff ≤ 40,000 K with a spacing of ∆T = 1000 K), gravities
(5.0 ≤ log g ≤ 7.0 with ∆ log g = 0.2), and helium abundances
(−3 ≤ [He/H ] ≤ 0), as appropriate for the hot subdwarfs pro-
duced in the BPS code. For the spectrum and colours of other single
stars, we use the latest version of the comprehensive BaSeL library
of theoretical stellar spectra (see Lejeune et al. 1997, 1998 for a de-
scription), which gives the colours and SEDs of stars with a wide
range of Z, log g and Teff .
2.4 Observables from the model
Our model follows the evolution of the integrated spectra of stellar
populations. In order to compare the results with observations, we
calculate the following observables as well as UBV colours in the
Johnson system (Johnson & Morgan 1953).
(i) (1550−V ) is a colour defined by BBBFL. It is a combination
of the short-wavelength IUE flux and the V magnitude and is used
to express the magnitude of the UV-upturn:
(1550− V ) = −2.5 log(fλ,1250−1850/fλ,5055−5945), (7)
where fλ,1250−1850 is the energy flux per unit wavelength averaged
between 1250 and 1850Å and fλ,5055−5945 the flux averaged be-
tween 5055 and 5945Å (for the V band).
(ii) (1550 − 2500) is a colour defined for the IUE flux by Dor-
man, O’Connell & Rood (1995).
(1550− 2500) = −2.5 log(fλ,1250−1850/fλ,2200−2800). (8)
(iii) (2000 − V ) is a colour used by Deharveng, Boselli &
Donas (2002) in their study of UV properties of the early-
type galaxies observed with the balloon-borne FOCA experiment
(Donas et al. 1990; Donas, Milliard & Laget 1995):
(2000− V ) = −2.5 log(fλ,1921−2109/fλ,5055−5945). (9)
(iv) (FUV − NUV )AB, (FUV − r)AB, (NUV − r)AB are
colours from GALEX and SDSS. GALEX, a NASA Small Ex-
plorer mission, has two bands in its ultraviolet (UV) survey:
a far-UV band centered on 1530 Å and a near-UV band cen-
tered on 2310 Å (Martin et al. 2005; Rich et al. 2005), while SDSS
has five passbands, u at 3551 Å, g at 4686 Å, r at 6165 Å, i at
7481 Å, and z at 8931 Å (Fukugita et al. 1996; Gunn et al. 1998;
Smith et al. 2002). The magnitudes are in the AB system of Oke &
Gunn (1983):
(FUV −NUV )AB = −2.5 log(fν,1350−1750/fν,1750−2750), (10)
(FUV − r)AB = −2.5 log(fν,1350−1750/fν,5500−7000), (11)
(NUV − r)AB = −2.5 log(fν,1750−2750/fν,5500−7000), (12)
where fν,1350−1750 , fν,1750−2750 , fν,5500−7000 are the energy
Table 1. Simulation sets (metallicity Z = 0.02)
Set n(q′) qc αCE αth
Standard SSP simulation set with tSSP varying up to 15 Gyr
1 constant 1.5 0.75 0.75
SSP simulation sets with varying model parameters
2 uncorrelated 1.5 0.75 0.75
3 rising 1.5 0.75 0.75
4 constant 1.2 0.75 0.75
5 constant 1.5 1.0 1.0
CSP simulation sets with a fixed tmajor and variable tminor and f
6 constant 1.5 0.75 0.75
Note - n(q′) = initial mass-ratio distribution; qc = the critical mass ratio
above which the first RLOF on the FGB or AGB is unstable; αCE = CE
ejection efficiency; αth = thermal contribution to CE ejection; tSSP = the
age of a SSP; tmajor = the age of the major population in a CSP; tminor =
the age of the minor population in a CSP; f = the ratio of the mass of the
minor population to the total mass in a CSP.
fluxes per unit frequency averaged in the frequency bands corre-
sponding to 1350 and 1750 Å, 1750 and 2750 Å, 5500 and 7000 Å,
respectively.
(v) βFUV is a far-UV spectral index we defined to measure the
SED slope between 1075 and 1750 Å:
fλ ∼ λ^βFUV , 1075 < λ < 1750 Å, (13)
where fλ is the energy flux per unit wavelength. In this paper, we
fit far-UV SEDs with equation (13) to obtain βFUV. In the fitting
we ignored the part between 1175 and 1250Å for the theoretical
SEDs from our model, as this part corresponds to the Lα line. We
also obtained βFUV via a similar fitting for early-type galaxies ob-
served with the HUT (Brown et al. 1997; Brown 2004) and the IUE
(Burstein et al. 1988). We again ignored the part between 1175 and
1250Å for the HUT SEDs, while the IUE SEDs do not have any
data points below 1250Å.
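For reference, the quantities defined in equations (7) and (13) can be evaluated directly from a model SED; a minimal sketch, assuming wavelengths in Å and fλ tabulated on a grid dense enough to resolve each band:

```python
import numpy as np

def band_mean(wl, flam, lo, hi):
    """Mean f_lambda over [lo, hi] Angstrom (grid assumed to sample the band densely)."""
    sel = (wl >= lo) & (wl <= hi)
    return np.trapz(flam[sel], wl[sel]) / (hi - lo)

def colour_1550_V(wl, flam):
    """Eq. (7): (1550 - V) = -2.5 log10(<f_1250-1850>/<f_5055-5945>)."""
    return -2.5 * np.log10(band_mean(wl, flam, 1250, 1850) / band_mean(wl, flam, 5055, 5945))

def beta_fuv(wl, flam):
    """Eq. (13): slope of f_lambda ~ lambda^beta between 1075 and 1750 A,
    masking 1175-1250 A (the Ly-alpha region) as described in the text."""
    sel = (wl >= 1075) & (wl <= 1750) & ~((wl > 1175) & (wl < 1250))
    slope, _ = np.polyfit(np.log10(wl[sel]), np.log10(flam[sel]), 1)
    return slope
```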
3 SIMULATIONS
In order to investigate the UV-upturn systematically, we performed
six sets of simulations for a Population I composition (X = 0.70,
Y = 0.28 and Z = 0.02). The first set is a standard set with
the best-choice model parameters from HPMM. In Sets 2 to 5 we
systematically vary the model parameters, and Set 6 models a com-
posite stellar population (Table 1).
3.1 Standard simulation set
In the HPMM model, hot subdwarfs are produced through binary
interactions by stable RLOF, CE ejection or He WD mergers. The
main parameters in the HPMM model are: n(q′), the initial mass
ratio distribution, qc, the critical mass ratio above which the 1st
RLOF on the FGB or AGB is unstable, αCE, the CE ejection ef-
ficiency parameter, and αth, the contribution of thermal energy to
the binding energy of the CE (see Sections 2.1 and 2.2 for details).
The model that best reproduces the observed properties of hot sub-
dwarfs in our Galaxy has n(q′) = 1, qc = 1.5, αCE = 0.75,
αth = 0.75 (see section 7.4 of Han et al. 2003). These are the
parameters adopted in our standard simulation.
In our standard set, we first construct a SSP containing a mil-
lion binaries (Section 2.2). The binaries are formed simultaneously
(i.e. in a single starburst) and with the same metallicity (Z = 0.02).
The SSP is evolved with the BPS code (Section 2.1), and the results
are convolved with our spectral libraries (Section 2.3) to produce
integrated SEDs and other observables. The SEDs are normalised
to a stellar population of mass 10^10 M⊙ at a distance of 10 Mpc.
3.2 Simulation sets with varying model parameters
In order to investigate the importance of the model parameters, we
also carried out simulation sets in which we systematically varied
the main parameters. Specifically, we adopted three initial mass-
ratio distributions: a constant mass-ratio distribution, a rising mass-
ratio distribution, and one where the component masses are uncor-
related and drawn independently from a Miller-Scalo IMF (Sec-
tion 2.2). We also varied the value of qc in the instability criterion
for the first RLOF phase on the FGB or AGB from 1.5 to 1.2, the
parameter αCE for CE ejection efficiency and the parameter αth
for the thermal contribution to CE ejection from 0.75 to 1.0.
3.3 Simulation sets for composite stellar populations
Many early-type galaxies show evidence for some moderate
amount of recent star formation (Yi et al. 2005; Salim et al. 2005;
Schawinski et al. 2006). Therefore, we also perform simulations in
which we evolve composite stellar populations. Here, a compos-
ite stellar population (CSP) consists of two populations, a major
old one and a minor younger one. The major population has solar
metallicity and an age of tmajor, where all stars formed in a single
burst tmajor ago, while the minor one has solar metallicity and an
age tminor, where all stars formed in a starburst starting tminor ago
and lasting 0.1 Gyr. The minor population fraction f is the ratio of
the mass of the minor population to the total mass of the CSP (for
f = 100% the CSP is actually a SSP with an age tminor).
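The composite populations of Set 6 are obtained by mass-weighting two burst SEDs; a minimal sketch, assuming the two SSP SEDs are normalised to the same total mass:

```python
import numpy as np

def csp_sed(sed_major, sed_minor, f):
    """Composite SED of Set 6: mass-weighted sum of an old and a young SSP SED
    (both per unit stellar mass); f is the mass fraction of the minor, younger
    population, so f = 1 recovers a pure SSP of age t_minor."""
    return (1.0 - f) * np.asarray(sed_major) + f * np.asarray(sed_minor)
```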
4 RESULTS AND DISCUSSION
4.1 Simple stellar populations
4.1.1 Evolution of the integrated SED
In our standard simulation set, we follow the evolution of the in-
tegrated SED of a SSP (including binaries) of 10^10 M⊙ up to
tSSP = 15Gyr. The SSP is assumed to be at a distance of 10 Mpc
and the evolution is shown in Figure 2. Note that hot subdwarfs
originating from binary interactions start to dominate the far-UV
after ∼ 1Gyr.
We have compiled a file containing the spectra of the SSP with
ages from 0.1 Gyr to 15 Gyr and devised a small FORTRAN code to
read the file (and to plot the SEDs with PGPLOT). The file and the
code are available online2. In order to be able to apply the model
directly, we have also provided in the file the spectra of the SSP
without any binary interactions considered. This provides an easy
way to examine the differences in the spectra for simulations with
and without binary interactions.
2 The file and the code are available on the VizieR data base of the astro-
nomical catalogues at the Centre de Données astronomiques de Strasbourg
(CDS) web site (http://cdsarc.u-strasbg.fr/) and on ZH’s personal website
(http://www.zhanwenhan.com/download/uv-upturn.html).
Figure 2. The evolution of the restframe intrinsic spectral energy distri-
bution (SED) for a simulated galaxy in which all stars formed at the same
time, representing a simple stellar population (SSP). The stellar population
(including binaries) has a mass of 10^10 M⊙ and the galaxy is assumed to
be at a distance of 10 Mpc. The figure is for the standard simulation set, and
no offset has been applied to the SEDs.
4.1.2 Evolution of the UV-upturn
The colours of the SSP evolve in time, and Table 2 lists the
colours of a SSP (including binaries) of 10^10 M⊙ at various
ages for the standard simulation set. In order to see how the
colours evolve with redshift, we adopted a ΛCDM cosmology
(Carroll, Press & Turner 1992) with cosmological parameters of
H0 = 72km/s/Mpc, ΩM = 0.3 and ΩΛ = 0.7, and assumed
a star-formation redshift of zf = 5 to obtain Figures 3 and 4.
The figures show the evolution of the restframe intrinsic colours
and the evolution of the far-UV spectral index with redshift (look-
back time). As these figures show, the UV-upturn does not evolve
much with redshift; for an old stellar population (i.e. with a red-
shift z ∼ 0 or an age of ∼ 12Gyr), (1550 − V ) ∼ 3.5,
(UV −V ) ∼ 4.4, (FUV −r)AB ∼ 6.7, (1550−2500) ∼ −0.38,
(FUV −NUV )AB ∼ 0.42 and βFUV ∼ −3.0.
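The redshift-to-age mapping behind Figures 3 and 4 can be reproduced with a
short numerical integration. The sketch below uses the standard flat ΛCDM
lookback-time integral (it is not code from the paper) with the parameters
quoted above and a formation redshift zf = 5:

import numpy as np
from scipy.integrate import quad

H0 = 72.0                     # km/s/Mpc
OMEGA_M, OMEGA_L = 0.3, 0.7
H0_INV_GYR = 1.0 / (H0 * 1.0227e-3)   # 1/H0 in Gyr (1 km/s/Mpc ~ 1.0227e-3/Gyr)

def lookback_time(z):
    # Lookback time in Gyr for a flat Lambda-CDM universe.
    integrand = lambda zp: 1.0 / ((1.0 + zp) *
                                  np.sqrt(OMEGA_M * (1.0 + zp)**3 + OMEGA_L))
    t, _ = quad(integrand, 0.0, z)
    return H0_INV_GYR * t

def population_age(z, z_form=5.0):
    # Age of a population formed at z_form when observed at redshift z.
    return lookback_time(z_form) - lookback_time(z)

For these parameters population_age(0.0) comes out close to the ∼ 12 Gyr
quoted above for a z ∼ 0 population.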
4.1.3 Colour-colour diagrams
Colour-colour diagrams are widely used as a diagnostic tool in the
study of stellar populations of early-type galaxies. We present a
few such diagrams in Figures 5 and 6 for the standard simulation
set. In these figures, most curves have a turning-point at ∼ 1Gyr,
at which hot subdwarfs resulting from binary interactions start to
dominate the far-UV.
Figure 5. Colour-colour diagrams for the standard simulation set. Ages are denoted by open triangles (0.01 Gyr), open squares (0.1 Gyr), open circles (1 Gyr),
filled triangles (2 Gyr), filled squares (5 Gyr), filled circles (10 Gyr) and filled stars (15 Gyr).
Figure 6. Similar to Figure 5, but with (B − V ) as the abscissa.
Figure 7. Integrated restframe intrinsic SEDs for hot subdwarfs for different formation channels. Solid, thin dark grey and thick light grey curves are for the
stable RLOF channel, the CE ejection channel and the merger channel, respectively. The merger channel starts to dominate at an age of tSSP ∼ 3.5Gyr. The
figure is for the standard simulation set with a stellar population mass of 10^10 M⊙ (including binaries), and the population is assumed to be at a distance of
10 Mpc.
Table 2. Colour evolution of a simple stellar population (including binaries) of 10^10 M⊙ for the standard simulation set. This table is also available in
machine-readable form on the VizieR data base of the astronomical catalogues at the Centre de Données astronomiques de Strasbourg (CDS) web site
(http://cdsarc.u-strasbg.fr/) and on ZH’s personal website (http://www.zhanwenhan.com/download/uv-upturn.html).
log(tSSP) MV B − V 15− V 20 − V 25− V 15− 25 FUV − r NUV − r FUV −NUV βFUV
-1.000 -22.416 0.173 -1.274 -0.953 -0.447 -0.827 1.483 1.296 0.188 0.775
-0.975 -22.346 0.165 -1.245 -0.938 -0.432 -0.813 1.503 1.304 0.199 0.941
-0.950 -22.306 0.175 -1.195 -0.901 -0.396 -0.799 1.555 1.346 0.209 1.100
-0.925 -22.289 0.185 -1.106 -0.829 -0.326 -0.780 1.644 1.421 0.223 1.305
-0.900 -22.232 0.194 -1.053 -0.788 -0.286 -0.767 1.699 1.466 0.233 1.479
-0.875 -22.168 0.192 -1.019 -0.766 -0.265 -0.754 1.730 1.487 0.243 1.659
-0.850 -22.135 0.199 -0.959 -0.717 -0.215 -0.744 1.792 1.542 0.250 1.842
-0.825 -22.053 0.199 -0.969 -0.723 -0.216 -0.753 1.780 1.537 0.243 1.968
-0.800 -22.000 0.200 -0.945 -0.701 -0.189 -0.756 1.802 1.561 0.241 2.182
-0.775 -21.949 0.205 -0.892 -0.660 -0.148 -0.744 1.855 1.605 0.250 2.407
-0.750 -21.923 0.207 -0.789 -0.587 -0.086 -0.703 1.955 1.674 0.281 2.656
-0.725 -21.885 0.211 -0.682 -0.514 -0.025 -0.658 2.058 1.743 0.316 2.935
-0.700 -21.839 0.218 -0.569 -0.437 0.041 -0.609 2.169 1.816 0.353 3.314
-0.675 -21.804 0.221 -0.468 -0.367 0.101 -0.569 2.266 1.882 0.384 3.565
-0.650 -21.767 0.223 -0.377 -0.307 0.152 -0.529 2.355 1.939 0.416 3.832
-0.625 -21.732 0.230 -0.272 -0.235 0.215 -0.486 2.459 2.009 0.451 4.087
-0.600 -21.689 0.235 -0.166 -0.167 0.273 -0.439 2.565 2.075 0.490 4.343
-0.575 -21.644 0.242 -0.051 -0.092 0.337 -0.388 2.681 2.148 0.532 4.583
-0.550 -21.606 0.246 0.074 -0.015 0.403 -0.329 2.806 2.222 0.583 4.879
-0.525 -21.546 0.250 0.188 0.048 0.456 -0.268 2.918 2.280 0.638 5.176
-0.500 -21.512 0.263 0.315 0.122 0.522 -0.207 3.055 2.359 0.695 5.384
-0.475 -21.478 0.274 0.444 0.196 0.587 -0.144 3.189 2.433 0.757 5.503
-0.450 -21.424 0.285 0.556 0.254 0.637 -0.081 3.316 2.495 0.820 5.524
-0.425 -21.377 0.292 0.688 0.322 0.694 -0.005 3.458 2.560 0.898 5.603
-0.400 -21.335 0.308 0.872 0.421 0.777 0.095 3.661 2.660 1.001 5.881
-0.375 -21.290 0.319 1.015 0.497 0.840 0.175 3.825 2.736 1.089 5.642
-0.350 -21.253 0.340 1.172 0.591 0.919 0.253 4.011 2.834 1.176 5.545
-0.325 -21.217 0.362 1.374 0.706 1.015 0.360 4.246 2.951 1.295 5.507
-0.300 -21.162 0.375 1.543 0.795 1.084 0.459 4.440 3.036 1.404 5.764
-0.275 -21.121 0.394 1.723 0.903 1.168 0.555 4.651 3.143 1.507 5.827
-0.250 -21.068 0.410 1.897 1.012 1.245 0.653 4.846 3.242 1.604 6.004
-0.225 -21.028 0.434 2.116 1.152 1.344 0.772 5.095 3.374 1.721 5.839
-0.200 -20.994 0.462 2.339 1.305 1.448 0.890 5.343 3.517 1.826 5.968
-0.175 -20.965 0.495 2.597 1.485 1.571 1.026 5.629 3.682 1.947 5.989
-0.150 -20.937 0.524 2.778 1.632 1.671 1.107 5.820 3.816 2.004 6.248
-0.125 -20.930 0.572 2.964 1.814 1.804 1.159 6.018 3.994 2.024 6.113
-0.100 -20.864 0.590 3.123 1.932 1.873 1.250 6.184 4.092 2.091 5.806
-0.075 -20.801 0.603 3.274 2.038 1.933 1.340 6.336 4.174 2.162 5.673
-0.050 -20.730 0.622 3.440 2.142 1.992 1.448 6.522 4.264 2.258 5.434
-0.025 -20.664 0.639 3.655 2.254 2.056 1.599 6.768 4.355 2.413 4.681
0.000 -20.583 0.649 3.771 2.336 2.096 1.675 6.897 4.415 2.482 3.445
0.025 -20.511 0.666 3.822 2.424 2.144 1.677 6.955 4.488 2.467 2.006
0.050 -20.419 0.672 3.737 2.484 2.171 1.566 6.860 4.528 2.332 0.404
0.075 -20.313 0.666 3.569 2.517 2.176 1.393 6.672 4.534 2.137 -0.684
0.100 -20.259 0.685 3.420 2.607 2.241 1.179 6.509 4.612 1.897 -1.360
0.125 -20.201 0.707 3.441 2.738 2.323 1.118 6.536 4.719 1.817 -1.734
0.150 -20.150 0.722 3.547 2.866 2.398 1.149 6.646 4.814 1.832 -1.868
0.175 -20.081 0.731 3.603 2.974 2.455 1.148 6.704 4.888 1.816 -2.019
0.200 -20.028 0.750 3.642 3.094 2.525 1.117 6.748 4.983 1.766 -2.148
0.225 -19.973 0.766 3.731 3.240 2.606 1.125 6.837 5.085 1.752 -2.270
4.1.4 The far-UV contribution for different formation channels
of hot subdwarfs
In our model, there are three channels for the formation of hot sub-
dwarfs. In the stable RLOF channel, the hydrogen-rich envelope of
a star with a helium core is removed by stable mass transfer, and
helium is ignited in the core. The hot subdwarfs from this channel
are in binaries with long orbital periods (typically ∼ 1000 d). In the
CE ejection channel, the envelope is ejected as a consequence of the
spiral-in in a CE phase. The resulting hot subdwarf binaries have
Table 2. continued
log(tSSP) MV B − V 15 − V 20− V 25− V 15− 25 FUV − r NUV − r FUV −NUV βFUV
0.250 -19.906 0.773 3.804 3.359 2.664 1.140 6.902 5.157 1.745 -2.342
0.275 -19.853 0.790 3.886 3.492 2.735 1.151 6.992 5.254 1.739 -2.457
0.300 -19.789 0.799 3.936 3.571 2.781 1.155 7.042 5.311 1.731 -2.466
0.325 -19.746 0.811 4.005 3.684 2.842 1.162 7.115 5.389 1.726 -2.573
0.350 -19.664 0.812 4.010 3.761 2.870 1.140 7.119 5.428 1.691 -2.680
0.375 -19.588 0.815 4.007 3.821 2.895 1.111 7.113 5.462 1.651 -2.706
0.400 -19.533 0.818 3.968 3.879 2.932 1.036 7.067 5.499 1.568 -2.759
0.425 -19.484 0.827 3.922 3.915 2.968 0.954 7.025 5.539 1.486 -2.787
0.450 -19.412 0.830 3.854 3.931 2.987 0.867 6.957 5.557 1.401 -2.828
0.475 -19.374 0.846 3.856 4.009 3.049 0.807 6.965 5.629 1.336 -2.865
0.500 -19.337 0.857 3.817 4.041 3.096 0.720 6.924 5.673 1.251 -2.851
0.525 -19.288 0.869 3.743 4.041 3.127 0.615 6.857 5.703 1.154 -2.857
0.550 -19.201 0.864 3.653 4.026 3.128 0.525 6.763 5.693 1.070 -2.896
0.575 -19.170 0.879 3.666 4.082 3.189 0.477 6.782 5.758 1.024 -2.914
0.600 -19.089 0.876 3.654 4.103 3.203 0.451 6.767 5.770 0.996 -2.930
0.625 -19.055 0.891 3.708 4.188 3.271 0.437 6.828 5.850 0.978 -2.944
0.650 -18.992 0.896 3.666 4.179 3.290 0.376 6.785 5.862 0.923 -2.925
0.675 -18.957 0.904 3.695 4.230 3.340 0.355 6.817 5.914 0.903 -2.933
0.700 -18.880 0.905 3.667 4.239 3.357 0.310 6.788 5.928 0.860 -2.955
0.725 -18.812 0.906 3.635 4.238 3.371 0.264 6.754 5.936 0.818 -2.962
0.750 -18.758 0.913 3.631 4.247 3.402 0.229 6.754 5.966 0.788 -2.944
0.775 -18.710 0.925 3.682 4.310 3.453 0.229 6.814 6.029 0.785 -2.958
0.800 -18.680 0.936 3.676 4.325 3.502 0.174 6.810 6.071 0.739 -2.956
0.825 -18.576 0.929 3.615 4.290 3.492 0.123 6.745 6.050 0.695 -2.964
0.850 -18.556 0.947 3.606 4.314 3.560 0.046 6.744 6.113 0.631 -2.974
0.875 -18.473 0.945 3.629 4.348 3.583 0.046 6.765 6.138 0.627 -2.979
0.900 -18.429 0.959 3.591 4.338 3.626 -0.035 6.735 6.173 0.562 -2.980
0.925 -18.362 0.958 3.552 4.319 3.647 -0.095 6.691 6.176 0.514 -2.988
0.950 -18.339 0.981 3.613 4.391 3.729 -0.116 6.767 6.271 0.496 -2.986
0.975 -18.236 0.972 3.528 4.322 3.711 -0.184 6.675 6.231 0.445 -2.992
1.000 -18.191 0.982 3.570 4.374 3.768 -0.198 6.724 6.292 0.432 -2.989
1.025 -18.100 0.975 3.499 4.320 3.764 -0.265 6.645 6.263 0.382 -2.998
1.050 -18.100 0.995 3.481 4.321 3.843 -0.362 6.634 6.321 0.313 -3.005
1.075 -18.036 1.004 3.514 4.359 3.898 -0.384 6.672 6.376 0.296 -3.002
1.100 -18.044 1.032 3.541 4.396 3.997 -0.456 6.712 6.465 0.247 -2.998
1.125 -17.947 1.029 3.473 4.338 3.998 -0.525 6.641 6.441 0.201 -2.999
1.150 -17.884 1.033 3.473 4.345 4.039 -0.566 6.641 6.468 0.173 -2.995
1.175 -17.800 1.030 3.463 4.338 4.062 -0.600 6.626 6.475 0.151 -2.995
Note - tSSP = population age in Gyr; MV = absolute visual magnitude; B − V = (B − V ); 15 − V = (1550 − V ); 20 − V = (2000 − V ); 25 − V =
(2500 − V ); 15 − 25 = (1550 − 2500); FUV − r = (FUV − r)AB; NUV − r = (NUV − r)AB; FUV − NUV = (FUV − NUV )AB; βFUV =
far-UV spectral index.
very short orbital periods (typically ∼ 1 d). In the merger channel,
a helium WD pair coalesces to produce a single object. Hot sub-
dwarfs from the merger channel are generally more massive than
those from the stable RLOF channel or the CE channel and have much
thinner (or no) hydrogen envelopes. They are therefore expected to
be hotter. See Han et al. (2002; 2003) for further details.
Figure 7 shows the SEDs of the hot subdwarfs produced from
the different formation channels at various ages. It shows that hot
subdwarfs from the RLOF channel are always important, while the
CE channel becomes important at an age of ∼ 1Gyr. The merger
channel, however, catches up with the CE channel at ∼ 2.5Gyr
and the stable RLOF channel at ∼ 3.5Gyr, and dominates the far-
UV flux afterwards.
4.1.5 The effects of the model assumptions
In order to systematically investigate the dependence of the UV-
upturn on the parameters of our model, we now vary the major
model parameters: n(q′) for the initial mass-ratio distribution, qc
for the critical mass ratio for stable RLOF on the FGB or AGB,
αCE for the CE ejection efficiency and αth for the thermal con-
tribution to the CE ejection. We carried out four more simulation
sets (Table 1). Figures 3 and 4 show the UV-upturn evolution
of the various simulation sets. These figures show that the initial
mass-ratio distribution is very important. As an extreme case, the
mass-ratio distribution for uncorrelated component masses (Set 2)
makes the UV-upturn much weaker, by ∼ 1mag in (1550 − V )
or (FUV − r)AB, as compared to the standard simulation set. On
the other hand, a rising distribution (Set 3) makes the UV-upturn
stronger. Binaries with a mass-ratio distribution of uncorrelated
Figure 3. The evolution of restframe intrinsic colours (1550 − V ),
(2000−V ) and (FUV − r)AB with redshift (lookback time) for a simple
stellar population (including binaries). Solid, dashed, dash-dotted, dotted,
dash-dot-dot-dot curves are for simulation sets 1 (standard set), 2, 3, 4, 5,
respectively. Ages are denoted by open triangles (0.01 Gyr), open squares
(0.1 Gyr), open circles (1 Gyr), filled triangles (2 Gyr), filled squares (5 Gyr)
and filled circles (10 Gyr).
component masses tend to have bigger values of q (the ratio of
the mass of the primary to the mass of the secondary), and mass
transfer is more likely to be unstable. As a result, the numbers of
hot subdwarfs from the stable RLOF channel and the merger chan-
nel are greatly reduced, and the UV-upturn is weaker.
Binaries with a rising mass-ratio distribution tend to have smaller
values of q and therefore produce a larger UV-upturn. A lower qc
(Set 4) leads to a weaker UV-upturn, as the numbers of hot sub-
dwarfs from the stable RLOF channel and the merger channel are
reduced. A higher CE ejection efficiency αCE and a higher thermal
contribution factor αth (Set 5) result in an increase in the number
of hot subdwarfs from the CE channel, but a decrease in the number
from the merger channel, and the UV-upturn is not affected much
as a consequence.
4.1.6 The importance of binary interactions
Binaries evolve differently from single stars due to the occurrence
of mass transfer. Mass transfer may prevent mass donors from
evolving to higher luminosity and can produce hotter objects than
expected in a single-star population of a certain age (mainly hot
Figure 4. Similar to Figure 3, but for colours (1550 − 2500), (FUV −
NUV )AB, and far-UV spectral index βFUV.
subdwarfs and blue stragglers3). To demonstrate the importance of
binary interactions for the UV-upturn explicitly, we plotted SEDs of
a population for two cases in Figure 8. Case 1 (solid curves) is for
our standard simulation set, which includes various binary interac-
tions, while case 2 (light grey curves) is for a population of the same
mass without any binary interactions. The figure shows that the hot
subdwarfs produced by binary interactions are the dominant con-
tributors to the far-UV for a population older than ∼ 1Gyr. Note,
however, that blue stragglers resulting from binary interactions are
important contributors to the far-UV between 0.5 Gyr and 1.5 Gyr.
In order to assess the importance of binary interactions for the
UV upturn, we define factors that give the fraction of the flux in a
particular waveband that originates from hot subdwarfs produced
in binaries: bFUV = F^bin_FUV / F^total_FUV, where F^bin_FUV is the
integrated flux between 900 Å and 1800 Å radiated by hot subdwarfs
(and their descendants) produced by binary interactions, and
F^total_FUV is the total integrated flux between 900 Å and 1800 Å.
We also de-
fined other similar factors, b1550, b2000, and b2500, for passbands
of 1250Å to 1850Å , 1921Å to 2109Å, and 2200Å to 2800Å,
respectively. Figure 9 shows the time evolution of those factors. As
3 Blue stragglers are stars located on the main sequence well beyond
the turning-point in the colour-magnitude diagram of globular clusters
(Sandage 1953), which should already have evolved off the main sequence.
Collisions between low-mass stars and mass transfer in close binaries are
believed to be responsible for the production of these hotter objects (e.g.
Pols & Marinus 1994, Chen & Han 2004, Hurley et al. 2005).
Figure 8. Integrated restframe intrinsic SEDs for a stellar population (including binaries) with a mass of 10^10 M⊙ at a distance of 10 Mpc. Solid curves are
for the standard simulation set with binary interactions included, and the light grey curves for the same population, but no binary interactions are considered;
the two components evolve independently.
Figure 9. Time evolution of the fraction of the energy flux in different UV wavebands originating from hot subdwarfs (and their descendants) formed in
binaries for the standard simulation set.
the figure shows, the hot-subdwarf contribution becomes increas-
ingly important in the far- and near-UV as the population ages.
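A minimal sketch of how such a band fraction can be evaluated from the
model SEDs (the array names and the simple trapezoidal integration are
assumptions, not the paper's implementation):

import numpy as np

def band_fraction(wavelength, flux_binary_sdb, flux_total, lo=900.0, hi=1800.0):
    # Fraction of the flux in [lo, hi] Angstrom contributed by hot subdwarfs
    # (and their descendants) formed through binary interactions.
    m = (wavelength >= lo) & (wavelength <= hi)
    f_bin = np.trapz(flux_binary_sdb[m], wavelength[m])
    f_tot = np.trapz(flux_total[m], wavelength[m])
    return f_bin / f_tot

# b_1550, b_2000 and b_2500 follow by passing the passbands quoted above,
# e.g. band_fraction(w, f_sdb, f_tot, 1250.0, 1850.0) for b_1550.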
4.2 The model for composite stellar populations
Early-type galaxies with a recent minor starburst can be modelled
as a composite stellar population (CSP). A CSP contains a ma-
jor population with an age of tmajor and a minor population of
age tminor and mass fraction f (Section 3.3). Figure 10 shows the
colour–colour diagram for CSPs with tmajor = 10Gyr and vary-
ing tminor and f . Note that the curves of different f start to con-
verge to the SSP curve, the curve of f = 100%, for tminor >
1Gyr. This implies that there exists a strong degeneracy between
the age of the minor population and the mass fraction.
4.3 Theory versus observations
4.3.1 The fitting of the far-UV SED
In our binary population synthesis model, we adopted solar metal-
licity. To test the model, we chose NGC 3379, a typical elliptical
galaxy with a metallicity close to solar (Gregg et al. 2004) to fit
the far-UV SED. Figure 11 presents various fits that illustrate the
effects of different sub-populations with different ages and differ-
ent amounts of assumed extinction. As the figure shows, acceptable
fits can be obtained for the various cases. Our best fits require the
presence of a sub-population of relatively young stars with an age
< 0.5Gyr, making up ∼ 0.1% of the total population mass.
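Schematically, such fits can be obtained by a simple grid search over the
CSP parameters; the sketch below is an illustration only (the construction
of the model grid, the extinction treatment and the error estimates are
all placeholders, not the procedure actually coded for the paper):

import numpy as np

def best_fit(obs_flux, obs_err, model_grid):
    # model_grid: iterable of (params, model_flux) pairs, where each
    # model_flux is a CSP model binned onto the observed wavelengths and
    # already includes the assumed extinction and mass scaling.
    best, best_chi2 = None, np.inf
    for params, model_flux in model_grid:
        chi2 = np.sum(((obs_flux - model_flux) / obs_err) ** 2)
        if chi2 < best_chi2:
            best, best_chi2 = params, chi2
    return best, best_chi2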
The existence of a relatively young population could imply the
existence of some core-collapse supernovae in these galaxies. If we
assume that an elliptical galaxy has a stellar mass of 10^11 M⊙ and
0.1% of the mass was formed during the last 0.4 Gyr, then a mean
star formation rate would be 0.25M⊙/yr, about one tenth that of
the Galaxy. Core-collapse supernovae would be possible. Indeed, a
Type Ib supernova, SN 2000ds, was discovered in NGC 2768, an
elliptical galaxy of type E6 (Van Dyk, Li & Filippenko 2003).
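As a quick check of the star-formation rate quoted above: 0.1% of
10^11 M⊙ is 10^8 M⊙, and spreading that mass over 0.4 Gyr gives
10^8 M⊙ / (4 × 10^8 yr) = 0.25 M⊙/yr.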
4.3.2 The UV-upturn magnitudes versus the far-UV spectral
index
There is increasing evidence that many elliptical galaxies had
some recent minor star-formation events (Schawinski et al. 2006;
Kaviraj et al. 2006), which also contribute to the far-UV excess. To
model such secondary minor starbursts, we have constructed CSP
galaxy models, consisting of one old, dominant population with an
assumed age tmajor = 10Gyr and a younger population of vari-
able age, making up a fraction f of the stellar mass of the system.
Our spectral modelling shows that a recent minor starburst mostly
affects the slope in the far-UV SED, and we therefore defined a far-
UV slope index βFUV (Section 2.4). In order to assess the impor-
tance of binary interactions, we also defined a binary contribution
Figure 10. The diagram of (FUV − NUV )AB versus (FUV − r)AB for a composite stellar population (CSP) model of elliptical galaxies with a major
population age of tmajor = 10Gyrs (Set 6). Solid curves are for given minor population fractions f and are plotted in steps of ∆ log(f) = 0.5, as indicated.
Light grey curves are for fixed minor population ages tminor and are plotted in steps of ∆ log(tminor/Gyr) = 0.1, as indicated. The colours are presented
in the restframe. The thick solid curve for f = 100% actually shows the evolution of a simple stellar population with age tminor.
factor bFUV (Section 4.1.6), which is the fraction of far-UV flux
radiated by hot subdwarfs produced by binary interactions.
Figure 12 shows the far-UV slope as a function of UV ex-
cess, a potentially powerful diagnostic diagram which illustrates
how the UV properties of elliptical galaxies evolve with time in a
dominant old population with a young minor sub-population. For
comparison, we also plot observed elliptical galaxies. Some of the
observed galaxies are from Astro-2 observations with an aperture
of 10′′ ×56′′ (Brown et al. 1997), some are from IUE observations
with an aperture of 10′′ × 20′′ (Burstein et al. 1988). The value of
βFUV for NGC 1399, however, is derived from Astro-1 HUT ob-
servation with an aperture of 9′′.4 × 116′′ (Ferguson et al. 1991),
and its (1550 − V ) comes from the IUE observations. As the far-
UV light is more concentrated toward the centre of the galaxy than
the optical light (Ohl et al. 1998), the value of (1550−V ) for NGC
1399 should be considered an upper limit for the galaxy area cov-
ered by the observation. The galaxies plotted are all elliptical galax-
ies except for NGC 5102, which is an S0 galaxy, and the nucleus of
M 31, which is an Sb galaxy. Active galaxies or galaxies with large
errors in βFUV from BBBFL have not been plotted.
Overall, the model covers the observed range of properties
reasonably well. Note in particular that the majority of galaxies
lie in the part of the diagram where the UV contribution from
binaries is expected to dominate (i.e. where bFUV > 0.5). The
location of M 60 and M 89 in this figure implies f ∼ 0.01%
and tminor ∼ 0.11Gyr with bFUV ∼ 0.5. Interestingly, inspec-
tion of the HUT spectrum of M 60 (see the mid-left panel of fig-
ure 3 in Brown et al. (1997)) shows the presence of a marginal
C IV absorption line near 1550Å. Chandra observations show that
M 89 has a low luminosity AGN (Xu et al. 2005). This would make
(1550 − V ) bluer and may also provide indirect evidence for low
levels of star formation.
The galaxy NGC 1399 requires special mention, as it is UV-
bright and the young star-hypothesis was believed to have been
ruled out due to the lack of strong C IV absorption lines in its HUT
spectrum (Ferguson et al. 1991). However, any young star signa-
ture, if it exists, would have been diluted greatly in the HUT spec-
trum, as the aperture of the HUT observation is much larger than
that of the IUE observation covering mainly the galaxy nucleus.
Our model is sensitive to both low levels and high levels of
star formation. It suggests that elliptical galaxies had some star for-
mation activity in the relatively recent past (∼ 1Gyr ago). AGN
and supernova activity may provide qualitative supporting evidence
for this picture, since the former often appears to be accompanied
Figure 11. Far-UV SED fitting of NGC 3379, a standard elliptical galaxy. The grey histogram represents the HUT observations of Brown et al. (1997) with
337 bins, while the other curves are based on our theoretical model with different assumptions about a minor recent population and different amounts of
extinction. The age of the old, dominant population, tmajor , is assumed to be 10 Gyr in all cases. The minor population, making up a total fraction f of the
stellar mass of the galaxy, is assumed to have experienced a starburst tminor ago, as indicated, lasting for 0.1 Gyr. To model the effects of dust extinction,
we applied the internal dust extinction model of Calzetti et al. (2000). For a given f and E(B − V ), we varied the galaxy mass and the age of the minor
population. For each case, the curves give the fits obtained. Note that the best fits require the presence of a minor young population.
by active star formation, while supernovae, both core collapse and
thermonuclear, tend to occur mainly within 1 – 2 Gyr after a star-
burst in the most favoured supernova models. In Figure 12, we have
plotted 13 early-type galaxies altogether. Using the Padova-Asiago
supernova online catalogue4, which lists supernovae recorded ever
since 1885, we found 8 supernovae in six of the galaxies: SN 1885A
(Type I) in M 31, SN 1969Q (type unavailable) in M 49, SN 2004W
(Type Ia) in M 60, SN 1939B (Type I) in M 59, SN 1957B (Type Ia),
SN 1980I (Type Ia), SN 1991bg (Type Ia) in M 84 and SN 1935B
(type unavailable) in NGC 3115. The majority of supernovae in
these galaxies appear to be of Type Ia.
4.3.3 Colour-colour diagrams
As in the previous subsection, we have constructed composite
stellar population models for elliptical galaxies consisting of a ma-
jor old population (tmajor = 10Gyr) and a minor younger popula-
tion, but for colour-colour diagrams.
4 http://web.pd.astro.it/supern/snean.txt
Figure 13 shows the appearance of a galaxy in the colour-
colour diagrams of (1550 − V ) versus (1550 − 2500) (panel (a))
and (2000−V ) versus (1550−2500) (panel (b)) in the CSP model
for different fractions f and different ages of the minor popula-
tion. Solid squares in panel (a) of the figure are for quiescent early-
type galaxies observed with IUE by BBBFL and the data points
are taken from table 2 of Dorman, O’Connell & Wood (1995). The
observed data points are located in a region without recent star for-
mation or with a very low level of recent star formation (f < 0.1%).
Therefore, the observations are naturally explained with our model.
Panel (a) shows the epoch when the effects of the starburst fade
away, leading to a fast evolution of the galaxy colours, and the dia-
gram therefore provides a potentially powerful diagnostic to iden-
tify a minor starburst in an otherwise old elliptical galaxy that oc-
curred up to ∼ 2Gyr ago. For larger ages, the curves tend to con-
verge in the (1550− 2500) versus (1550− V ) diagram. Note that
this is not so much the case in the (1550−2500) versus (2000−V )
diagram, which therefore could provide better diagnostics.
Figure 14 is a diagram of (B−V ) versus (2000−V ) for a CSP
with a major population age tmajor = 10Gyr and variable minor
population age tminor and various minor population mass fractions
Figure 12. Evolution of far-UV properties [the slope of the far-UV spectrum, βFUV, versus (1550 − V )] for a composite stellar population (CSP) model
of elliptical galaxies with a major population age of tmajor = 10Gyrs (Set 6). The mass fraction of the younger population is denoted as f and the time
since the formation as tminor [squares, triangles or dots are plotted in steps of ∆log(t) = 0.025]. Note that the model for f = 100% shows the evolution
of a simple stellar population with age tminor . The legend is for bFUV , which is the fraction of the UV flux that originates from hot subdwarfs resulting from
binary interactions. The effect of internal extinction is indicated in the top-left corner, based on the Calzetti internal extinction model with E(B − V ) = 0.1
(Calzetti et al. , 2000). For comparison, we also plot galaxies with error bars from HUT (Brown et al. , 1997) and IUE observations (BBBFL). The galaxies
with strong signs of recent star formation are denoted with an asterisk (NGC 205, NGC 4742, NGC 5102).
f . Overlayed on this diagram is figure 2 of Deharveng, Boselli &
Donas (2002) for observational data points of early-type galaxies.
NGC 205 and NGC 5102 (the circles with crosses above the line
(2000 − V ) = 1.4) are known to have direct evidence of massive
star formation (Hodge 1973; Pritchet 1979); therefore Deharveng,
Boselli & Donas (2002) individually examined the seven galaxies
with (2000−V ) < 1.4 in their sample for suspected star formation.
CGCG 119053, CGCG 97125, VCC 1499 (the three solid squares
with big stars) showed hints of star formation, NGC 4168 (the solid
square with a big triangle) has a low-luminosity Seyfert nucleus
and CGCG 119030 (the solid square with a big diamond) could be
a spiral galaxy instead of an elliptical galaxy. However, no hint of
star formation has been found for VCC 616 (the solid square on the
far-left above the line (2000 − V ) = 1.4) and CGCG 119086 (the
solid square on the far-right above the line (2000−V ) = 1.4). Our
model can explain the observations satisfactorily except for CGCG
119086, which needs further study.
Figure 15 shows the diagrams of (FUV − r)AB versus
(FUV −NUV )AB for a CSP galaxy model with a major popula-
tion age tmajor = 10Gyr and variable minor population age tminor
and various minor population mass fractions f . In these diagrams,
the colours are not shown in the restframe, but have been redshifted
(i.e. the wavelength is (1 + z) times the restframe wavelength,
where z is the redshift); panel (a) is for a redshift of z = 0.05
and panel (b) for z = 0.15. Overlayed on the two panels are qui-
escent early-type galaxies observed with GALEX by Rich et al.
(2005). The observed galaxies are for a redshift range 0 < z < 0.1
(panel (a)) and 0.1 < z < 0.2 (panel (b)). We note that most of the
quiescent galaxies are located in the region with f < 0.1%.
In Figures 13 to 15, we adopted a major population age of
tmajor = 10Gyr, and the colours are intrinsic. However, adopt-
ing a different age for the major population can change the dia-
grams; for example, a larger age leads to bluer (1550 − 2500) or
(FUV −NUV ) colours. In contrast, internal dust extinction shifts
the curves towards redder colours (Calzetti et al. 2000). Consider-
ing the uncertainties in the modelling, we take our model to be in
reasonable agreement with the observations.
Figure 13. The diagrams of (1550−V ) versus (1550−2500) (a) and (2000−V ) versus (1550−2500) (b) for a composite stellar population (CSP) model
of elliptical galaxies with a major population age of tmajor = 10Gyrs (Set 6). Solid curves are for given minor population fractions f and are plotted, from
left to right, in steps of ∆ log(f) = 0.5, as indicated. Light grey curves are for fixed minor population ages tminor and are plotted, from top to bottom, in
steps of ∆ log(tminor/Gyr) = 0.1 , as indicated. Note that the colours are given in the restframe and intrinsic. The thick solid curve for f = 100% actually
shows the evolution of a simple stellar population with age tminor . Solid squares are for quiescent early-type galaxies observed with IUE by BBBFL and the
data points are taken from Dorman, O’Connell & Wood (1995).
4.3.4 UV-upturn magnitudes and their evolution with redshift
There is an observed spread in the (1550−V ), (1550−2500) and
(2000 − V ) colours of early-type galaxies. As can be seen from
Figures 12, 13, and 14, this spread is satisfactorily explained by
our model.
Brown et al. (2003) showed with HST observations that the
UV-upturn does not evolve much with redshift, a result apparently
confirmed by Rich et al. (2005) with GALEX observation of a large
sample. This is contrary to the prediction of both the metal-poor
and the metal-rich model, as both models require a large age for
the hot subdwarfs and therefore predict that the UV-upturn should
decline rapidly with redshift. Our binary model, however, predicts
that the UV-upturn does not evolve much with redshift (see Fig-
ures 3 and 4), consistent with the recent observations.
Lee et al. (2005) and Ree et al. (2007) studied the look-back
time evolution of the UV-upturn from the brightest elliptical galax-
ies in 12 clusters at redshift z < 0.2 with GALEX. Compared
to local giant elliptical galaxies, they found that the UV-upturn of
the 12 galaxies is redder. However, the local giant elliptical galax-
ies are quite special. NGC 1399 and M 87, with the strongest
UV-upturn, have the largest known specific frequencies of globu-
lar clusters (Ostrov, Geisler & Forte 1993), and M 87 hosts an ac-
tive galactic nuclei (AGN) with the best-known jet (Curtis 1918;
Waters & Zepf 2005). Given a larger sample of elliptical galaxies,
no matter how luminous they are, and a bigger redshift range, the
UV-upturn is not found to decline with redshift.
4.3.5 Implication for star formation history of early-type
galaxies
Boselli et al. (2005) studied the UV properties of 264 early-type
galaxies in the Virgo cluster with GALEX. They showed that
(FUV − NUV )AB ranges from 3 to 0, consistent with the the-
oretical range shown in panel (b) of Figure 4. The colour index
(FUV −NUV )AB of those galaxies becomes bluer with luminos-
ity from dwarfs (LH ∼ 10^8 LH,⊙) to giants (LH ∼ 10^11.5 LH,⊙),
i.e. a luminous galaxy tends to have a bluer (FUV − NUV )AB.
Panel (b) of Figure 4 shows that (FUV −NUV )AB becomes bluer
with population age for tSSP > 1Gyr. Taking the stellar popu-
lations as an “averaged” SSP, we may conclude that a luminous
Figure 14. The diagrams of (B−V ) versus (2000−V ) for a composite stellar population (CSP) model of elliptical galaxies with a major population age of
tmajor = 10Gyrs (Set 6). Solid curves are for given minor population fractions f and are plotted, from left to right, in steps of ∆ log(f) = 0.5, as indicated.
Light grey curves are for fixed minor population ages tminor and are plotted, from top to bottom, in steps of ∆ log(tminor/Gyr) = 0.1, as indicated. The
thick solid curve for f = 100% actually shows the evolution of a simple stellar population with age tminor. Note that the colours are intrinsic and are plotted
in the restframe. Overlayed on this diagram is figure 2 of Deharveng, Boselli & Donas (2002), in which solid squares are for their sample observed with the
FOCA experiment and open circles (including circles with crosses) are for the sample of BBBFL observed with IUE. Crosses denote objects that have been
studied in detail with HUT or HST. For galaxies bluer than (2000−V ) = 1.4, solid squares with big stars and circles with crosses are for galaxies with recent
star formation. NGC 4168 (solid square with a big triangle) has a low-luminosity Seyfert nucleus. CGCG 119030 (solid square with a big diamond) could be
misclassified as an elliptical, as it is classified as a spiral in the NASA/IPAC Extragalactic Database (NED).
early-type galaxy is older, or in other words, the less luminous an
early-type galaxy is, the younger the stellar population is or the
later the population formed.
4.4 The UV-upturn and metallicity
As far as we know, metallicity does not play a significant role
in the mass-transfer process or the envelope ejection process for
the formation of hot subdwarfs, although it may affect the prop-
erties of the binary population in more subtle ways. We there-
fore expect that (FUV − r)AB or (1550 − V ) from our model
is not very sensitive to the metallicity of the population. This is
in agreement with the recent large sample of GALEX observa-
tions by Rich et al. (2005). Boselli et al. (2005) and Donas et al.
(2006) studied nearby early-type galaxies with GALEX. Neither study
shows that (FUV − r)AB correlates significantly with metallicity.
However, both found a positive correlation between
(FUV −NUV )AB and metallicity. This can possibly be explained
with our model. The UV-upturn magnitudes (FUV − r)AB or
(1550−V ) do not evolve much with age for tSSP > 1Gyr, while
(FUV −NUV )AB decreases significantly with age (see Figures 3
and 4). A galaxy of high metallicity may have a larger age and
therefore a stronger (FUV −NUV )AB.
4.5 Comparison with previous models
Both metal-poor and metal-rich models are quite ad hoc and require
a large age for the hot subdwarf population (Section 1), which im-
plies that the UV-upturn declines rapidly with look-back time or
redshift. In our model, hot subdwarfs are produced naturally by en-
velope loss through binary interactions, which do not depend much
on the age of the population older than ∼ 1Gyr, and therefore
our model predicts little if any evolution of the UV-upturn with
redshift. Note, however, that (FUV − NUV )AB declines signifi-
cantly more with redshift than (FUV −r)AB, as the contribution to
the near-UV from blue stragglers resulting from binary interactions
becomes less important for an older population.
The metal-rich model predicts a positive correlation between
the magnitude of the UV-upturn and metallicity; for example,
(1550 − V ) correlates with metallicity. However, such a correla-
Figure 15. The diagrams of (FUV − r)AB versus (FUV −NUV )AB for a composite stellar population (CSP) with a major population age of tmajor =
10Gyrs (Set 6). Solid curves are for given minor population fractions f and are plotted, from bottom to top, in steps of ∆ log(f) = 0.5, as indicated. Light
grey curves are for fixed minor population ages tminor and are plotted, from left to right, in steps of ∆ log(tminor/Gyr) = 0.1, as indicated. The thick solid
curve for f = 100% actually shows the evolution of a simple stellar population with age tminor. Note that the colours are intrinsic but not in the restframe.
Panel (a) assumes a redshift of z = 0.05 and panel (b) z = 0.15. Overlayed on this diagram is figure 3 of Rich et al. (2005), in which circles are for their
sample observed with GALEX for quiescent early-type galaxies. Filled circles (panel (a)) are observational data points for galaxies of 0 < z < 0.1 and open
circles (panel (b)) for 0.1 < z < 0.2.
tion is not expected from our binary model as metallicity does not
play an essential role in the binary interactions. Furthermore, even
though the metal-rich model could in principle account for the UV-
upturn in old, metal-rich giant ellipticals, it cannot produce a UV-
upturn in lower-metallicity dwarf ellipticals. In contrast, in a binary
model, the UV-upturn is universal and can account for UV-upturns
from dwarf ellipticals to giant ellipticals.
5 SUMMARY AND CONCLUSION
By applying the binary scenario of Han et al. (2002; 2003) for the
formation of hot subdwarfs, we have developed an evolutionary
population synthesis model for the UV-upturn of elliptical galaxies
based on a first-principle approach. The model is still quite simple
and does not take into account more complex star-formation histo-
ries, possible contributions to the UV from AGN activity, non-solar
metallicity or a range of metallicities. Moreover, the binary popu-
lation synthesis is sensitive to uncertainties in the binary modelling
itself, in particular the mass-ratio distribution and the condition for
stable and unstable mass transfer (Han et al. 2003). We have varied
these parameters and found these uncertainties do not change the
qualitative picture, but affect some of the quantitative estimates.
Despite its simplicity, our model can successfully reproduce
most of the properties of elliptical galaxies with a UV excess: the
range of observed UV excesses, both in (1550−V ) and (2000−V )
(e.g. Deharveng, Boselli & Donas, 2002), and their evolution with
redshift. The model predicts that the UV excess is not a strong
function of age, and hence is not a good indicator for the age
of the dominant old population, as has been argued previously
(Yi et al. 1999), but is very consistent with recent GALEX findings
(Rich et al. 2005). We typically find that the (1550 − V ) colour
changes rapidly over the first 1 Gyr and only varies slowly there-
after. This also implies that all old galaxies should show a UV
excess at some level. Moreover, we expect that the model is not
very sensitive to the metallicity of the population. The UV-upturn
is therefore expected to be universal.
Our model is sensitive to both low levels and high levels of
star formation. It suggests that elliptical galaxies had some star for-
mation activity in the relatively recent past (∼ 1Gyr ago). AGN
and supernova activity may provide supporting evidence for this
picture.
The modelling of the UV excess presented in this study is only
a starting point: with refinements in the spectral modelling, includ-
ing metallicity effects, and more detailed modelling of the global
evolution of the stellar population in elliptical galaxies, we propose
that this becomes a powerful new tool helping to unravel the com-
plex histories of elliptical galaxies that a long time ago looked so
simple and straightforward.
ACKNOWLEDGEMENTS
We are grateful to an anonymous referee for valuable comments
which helped to improve the presentation, to Kevin Schawinski for
numerous discussions and suggestions, to Thorsten Lisker for in-
sightful comments leading to Section 4.3.5. This work was in part
supported by the Natural Science Foundation of China under Grant
Nos. 10433030 and 10521001, the Chinese Academy of Sciences
under Grant No. KJCX2-SW-T06 (Z.H.), and a Royal Society UK-
China Joint Project Grant (Ph.P and Z.H.).
REFERENCES
Allard F., Wesemael F., Fontaine G., Bergeron P., Lamontagne R., 1994, AJ,
107, 1565
Anders E., Grevesse N., 1989, Geochimica et Cosmochimica Acta, 53, 197
Aznar Cuadrado R., Jeffery C.S., 2001, A&A, 368, 994
Boselli A. et al. , 2005, ApJ, 629, L29
Brassard P., Fontaine G., Billéres M., Charpinet S., Liebert J., Saffer R.A.,
2001, ApJ, 305, 228
Bressan A., Chiosi C., Fagotto F., 1994, ApJS, 94, 63
Bressan A., Chiosi C., Tantalo R., 1996, A&A, 311, 425
Brown T.M., 2004, Ap&SS, 291, 215
Brown T.M., Ferguson H.C., Davidsen A.F., 1995, ApJ, 454, L15
Brown T.M., Ferguson H.C., Davidsen A.F., Dorman B., 1997, ApJ, 482,
Brown T.M., Ferguson H.C., Deharveng J.M., Jedrzejewski R.I., 1998, ApJ,
508, L139
Brown T.M., Bowers C.W., Kimble R.A., Ferguson H.C., 2000a, ApJ, 529,
Brown T.M., Bowers C.W., Kimble R.A., Sweigart A.V., 2000b, ApJ, 532,
Brown T.M., Ferguson H.C., Smith E., Bowers C.B., Kimble R., Renzini
A., Rich R.M., 2003, ApJ, 584, L69
Bruzual A.G., Charlot S., 1993, ApJ, 405, 538
Bruzual A.G., Charlot S., 2003, MNRAS, 344, 1000
Burstein D., Bertola F., Buson L.M., Faber S.M., Lauer T.R., 1988, ApJ,
328, 440 (BBBFL)
Calzetti D., Bohlin R.C., Kinney A.L., Koornneef J., Storchi-Bergmann T.,
2000, ApJ, 533, 682
Carraro G., Girardi L., Bressan A., Chiosi C., 1996, A&A, 305, 849
Carroll M., Press W.H., Turner E.L., 1992, ARA&A, 30, 499
Charpinet S., Fontaine G., Brassard P., Billères M., Green E.M., Chayer P.,
2005, A&A, 443, 251
Chen X., Han Z., 2004, MNRAS, 355, 1182
Chiosi C., Vallenari A., Bressan A., 1997, A&AS, 121, 301
Code A.D., 1969, PASP, 81, 475
Curtis H.D., 1918, Publ. Lick Obs., 13, 31
D’Cruz N.L., Dorman B., Rood R.T., O’Connell R.W., 1996, ApJ, 466, 359
Deharveng J.-M., Boselli A., Donas J., 2002, A&A, 393, 843
Dewi J., Tauris T., 2000, A&A, 360, 1043
Donas J., Milliard B., Laget M., Buat V., 1990, A&A, 235, 60
Donas J., Milliard B., Laget M., 1995, A&A, 303, 661
Donas J. et al. , 2006, astro-ph/0608594
Dorman B., Rood R.T., O’Connell R.W., 1993, ApJ, 419, 596
Dorman B., O’Connell R.W., Rood R.T., 1995, ApJ, 442, 105
Downes R.A., 1986, ApJS, 61, 569
Eggleton P.P., 1971, MNRAS, 151, 351
Eggleton P.P., 1972, MNRAS, 156, 361
Eggleton P.P., 1973, MNRAS, 163, 179
Eggleton P.P., Fitchett M.J., Tout C.A., 1989, ApJ, 347, 998
Eggleton P.P., Tout C.A., 1989, in Batten A.H., ed., Algols, Kluwer, Dor-
drecht, p. 164
Fukugita M., Ichikawa T., Gunn J.E., Doi M., Shimasaku K., Schneider
D.P., 1996, AJ, 111, 1748
Ferguson D.H., Green R.F., Liebert J., 1984, ApJ, 287, 320
Ferguson, H.C. et al. , 1991, ApJ, 382, L69
Goldberg D., Mazeh T., 1994, A&A, 282, 801
Green R.F., Schmidt M., Liebert J., 1986, ApJS, 61, 305
Gregg M.D., Ferguson H.C., Minniti D., Tanvir N., Catchpole R., 2004, AJ,
127, 1441
Greggio L., Renzini A., 1990, ApJ, 364, 35
Gunn J.E., Stryker L.L., Tinsley B.M., 1981, ApJ, 249, 48
Gunn J.E. et al. , 1998, AJ, 116, 3040
Halbwachs J.L., Mayor M., Udry S., 1998, in Rebolo R., Martin E.L., Za-
patero Osorio M.R., eds, Brown Dwarfs and Extrasolar Planets, ASP
Conf. Ser., Vol. 134, p. 308
Han Z., 1995, Ph.D. Thesis (Cambridge)
Han Z., 1998, MNRAS, 296, 1019
Han Z., Podsiadlowski Ph., 2004, MNRAS, 350, 1301
Han Z., Webbink R.F., 1999, A&A, 349, L17
Han Z., Podsiadlowski Ph., Eggleton P.P., 1994, MNRAS, 270, 121
Han Z., Podsiadlowski Ph., Eggleton P.P., 1995, MNRAS, 272, 800
Han Z., Eggleton P.P., Podsiadlowski Ph., Tout C.A., 1995, MNRAS, 277,
Han Z., Tout C.A., Eggleton P.P., 2000, MNRAS, 319, 215
Han Z., Eggleton P.P., Podsiadlowski Ph., Tout C.A., Webbink R.F., 2001,
in Podsiadlowski Ph., Rappaport S., King A.R., D’Antona F., Burderi
L., eds, Evolution of Binary and Multiple Star Systems, ASP Conf. Ser.,
Vol. 229, p. 205
Han Z., Podsiadlowski Ph., Maxted P.F.L., Marsh T.R., Ivanova N., 2002,
MNRAS, 336, 449
Han Z., Podsiadlowski Ph., Maxted P.F.L., Marsh T.R., 2003, MNRAS, 341,
669 (HPMM)
Heacox W.D., 1995, AJ, 109, 2670
Heber U., 1986, A&A, 155, 33
Heber U., Moehler S., Napiwotzki R., Thejll P., Green E.M., 2002, A&A,
383, 938
Heber U. et al. , 2004, A&A, 420, 251
Hills J.G., 1971, A&A, 12, 1
Hjellming M.S., Webbink R.F., 1987, ApJ, 318, 794
Hodge P.W., 1973, ApJ, 182, 671
Hurley J.R., Tout C.A., Pols, O.R., 2002, MNRAS, 329, 897
Hurley J.R., Pols O.R., Aarseth S.J., Tout C.A., 2005, MNRAS, 363, 293
Iben I. Jr., Renzini A., 1983, ARA&A, 21, 271
Iben I. Jr., Tutukov A.V., 1986, ApJ, 311, 753
Jeffery C.S., Pollacco D.L., 1998, MNRAS, 298, 179
Johnson H.L., Morgan W.W., 1953, ApJ, 117, 313
Kaviraj S., et al. 2006, ApJ, in press (available at
http://xxx.lanl.gov/abs/astro-ph/0601029)
Kilkenny D., Koen C., Jeffery J., Hill C.S., O’Donoghue D., 1999, MNRAS,
310, 1119
Kjærgaard P., 1987, A&A, 176, 210
Koen C., Orosz J.A., Wade R.A., 1998, MNRAS, 300, 695
Kroupa P., Tout C.A., Gilmore G., 1993, MNRAS, 262, 545
Kurucz R.L., 1992, in Barbuy B., Renzini A., eds, Proc. IAU Symp. 149,
The Stellar Population of Galaxies, Kluwer, Dordrecht, p.225
Lee Y.W., 1994, ApJ, 430, L113
Lee Y.W. et al. , 2005, ApJ, 619, L103
Lejeune T., Cuisinier F., Buser R., 1997, A&AS, 125, 229
Lejeune T., Cuisinier F., Buser R., 1998, A&AS, 130, 65
Lisker T., Heber U., Napiwotzki R., Christlieb N., Reimers D., Homeier D.,
2004, Ap&SS, 291, 351
Lisker T., Heber U., Napiwotzki R., Christlieb N., Han Z., Homeier D.,
Reimers D., 2005, A&A, 430, 223
Martin D.C. et al. , 2005, ApJ, 619, L1
Maxted P.F.L., Moran C.K.J., Marsh T.R., Gatti A.A., 2000, MNRAS, 311,
Maxted P.R.L., Marsh T.R., North R.C., 2000, MNRAS, 317, L41
Maxted P.F.L., Heber U., Marsh T.R., North R.C., 2001, MNRAS, 326,
Mazeh T., Goldberg D., Duquennoy A., Mayor M., 1992, ApJ, 401, 265
Mengel J.G., Norris J., Gross P.G., 1976, ApJ, 204, 488
Miller G.E., Scalo J.M., 1979, ApJS, 41, 513
Morales-Rueda L.,Maxted P.F.L., Marsh T.R., 2004, Ap&SS, 291, 299
Morales-Rueda L.,Maxted P.F.L., Marsh T.R., Kilkenny D., O’Donoghue
D., 2006, Baltic Astronomy, 15, 187
Moran C., Maxted P., Marsh T.R., Saffer R.A., Livio M., 1999, MNRAS,
304, 535
Mochkovitch R., 1986, A&A, 157, 311
Napiwotzki R., Edelmann H., Heber U., Karl C., Drechsel H., Pauli E.-M.,
Christlieb N., 2001, A&A, 378, L17
Nesci R., Perola G.C., 1985, A&A, 145, 296
Ohl R.G. et al. , 1998, ApJ, 505, L11
O’Connell R.W., 1999, ARA&A, 37, 603
Oke J.B., Gunn J.E., 1983, ApJ, 266, 713
Orosz J.A., Wade R.A., 1999, MNRAS, 310, 773
Ostrov P., Geisler D., Forte J.C., 1993, AJ, 105, 1762
Paczyński B., 1965, Acta Astron., 15, 89
Paczyński B., 1976, in Eggleton P.P., Mitton S., Whelan J., eds, Structure
and Evolution of Close Binaries, Kluwer, Dordrecht, p. 75
Paczyński B., Ziółkowski J., Żytkow A., 1969, in Hack M. ed., Mass Loss
from Stars, Reidel, Dordrecht, P. 237
Plavec M., Ulrich R.K., Polidan R.S., 1973, PASP, 85, 769
Podsiadlowski Ph., Joss P.C, Hsu J.J.L., 1992, ApJ, 391, 246
Podsiadlowski, Ph., Rappaport, S., Pfahl, E., 2002, ApJ, 565, 1107
Pols O.R., Marinus M., 1994, A&A, 288, 475
Pols O.R., Tout C.A., Eggleton P.P., Han Z., 1995, MNRAS, 274, 964
Pols O.R., Schröder K.-P., Hurley J.R., Tout C.A., Eggleton P.P., 1998, MN-
RAS, 298, 525
Park J.H., Lee Y.W., 1997, ApJ, 476, 28
Pritchet C., 1979, ApJ, 231, 354
Rappaport S., Verbunt F., Joss P.C., 1983, ApJ, 275, 713
Ree C.H. et al. , 2007, ApJS, in press (available at
http://xxx.lanl.gov/abs/astro-ph/0703503)
Reed M.D., Stiening R., 2004, PASP, 116, 506
Reimers D., 1975, Mem. R. Soc. Liège, 6ième Serie, 8, 369
Renzini A., 1981, in Effects of Mass Loss on Stellar Evolution, ed. C. Chiosi
& R. Stalio (Dordrecht: Reidel), 319
Rich R.M. et al. , 2005, ApJ, 619, L107
Saffer R.A., Livio M., Yungelson L.R., 1998, ApJ, 502, 394
Saffer R.A., Bergeron P., Koester D., Liebert J., 1994, ApJ, 432, 351
Sandage A.R., 1953, AJ, 58, 61
Salim, S. et al. , 2005, ApJ, 619, L39
Schawinski K., et al. 2006, ApJ, in press (available at
http://xxx.lanl.gov/abs/astro-ph/0601036)
Shatsky N., Tokovinin A., 2002, A&A, 382, 92
Smith J.A. et al. , 2002, AJ, 123, 2121
Soberman G.E., Phinney E.S., van den Heuvel E.P.J., 1997, A&A, 327, 620
Sweigart A.V., 1997, ApJ, 474, L23
Tantalo R., Chiosi C., Bressan A., Fagotto F., 1996, A&A, 311, 361
Terlevich A.I., Forbes D.A., 2002, MNRAS, 330, 547
Thomas D., Maraston C., Bender R., Mendes de Oliveira C., 2005, ApJ,
673, 694
Thejll P., Ulla A., MacDonald J., 1995, A&A, 303, 773
Tinsley B.M., 1968, ApJ, 151, 547
Tout C.A., Eggleton P.P., 1988, MNRAS, 231, 823
Tutukov A.V., Yungelson L.R., 1990, A.Zh., 67, 109
Ulla A., Thejll P., 1998, A&AS, 132, 1
Van Dyk S.D., Li W., Filippenko A.V., 2003, PASP, 115, 1
Verbunt F., Zwaan C., 1981, A&A, 100, L7
Waters C.Z., Zepf S.E., 2005, ApJ, 624, 656
Webbink R.F., 1984, ApJ, 277, 355
Webbink R. F., 1988, in Mikołajewska J., Friedjung M., Kenyon S. J., Viotti
R., eds, The Symbiotic Phenomenon. Kluwer, Dordrecht, p. 311
Williams T., McGraw J.T., Mason P.A., Grashuis R., 2001, PASP, 113, 944
Wood J.H., Saffer R., 1999, MNRAS, 305, 820
Worthey G., 1994, ApJ, 95, 107
Xu Y., Xu H., Zhang Z., Kundu A., Wang Y., Wu. Y., 2005, ApJ, 631, 809
Yi S.K., 2004, Ap&SS, 291, 205
Yi S.K., Afshari E., Demarque P., Oemler Jr. A., 1995, ApJ, 453, L69
Yi S.K., Demarque P., Kim Y.C., 1997, ApJ, 482, 677
Yi S.K., Demarque P., Oemler Jr. A., 1997, ApJ, 486, 201
Yi S.K., Demarque P., Oemler Jr. A., 1998, ApJ, 492, 480
Yi S.K., Lee Y., Woo J., Park J., Demarque P., Oemler Jr. A., 1999, ApJ,
513, 128
Yi, S.K. et al. , 2005, ApJ, 619, L111
Zhang F., Li L., 2006, MNRAS, 370, 1181
Zhang F., Han Z., Li L., Hurley J.R., 2002, MNRAS, 334, 883
Zhang F., Han Z., Li L., Hurley J.R., 2004a, MNRAS, 350, 710
Zhang F., Han Z., Li L., Hurley J.R., 2004b, A&A, 415, 117
Zhang F., Han Z., Li L., Hurley J.R., 2005a, MNRAS, 357, 1088
Zhang F., Li L., Han Z., 2005b, MNRAS, 364, 503
Zhou X., Véron-Cetty M.P., Véron P., 1992, Acta Astrophysica Sinica, 12,
Zoccali M., Cassisi S., Frogel J.A., Gould A., Ortolani S., Renzini A., Rich
R.M., Stephens A.W., 2000, ApJ, 530, 418
|
0704.0864 | Redshifts of the Long Gamma-Ray Bursts | Baltic Astronomy, vol.12, XXX–XXX, 2003.
THE REDSHIFT OF LONG GRBS’
Z. Bagoly1 and I. Csabai2 and A. Mészáros3 and P. Mészáros4
and I. Horváth5 and L. G. Balázs6 and R. Vavrek7
1 Lab. for Information Technology, Eötvös University, H-1117 Budapest,
Pázmány P. s. 1./A, Hungary
2 Dept. of Physics for Complex Systems, Eötvös University, H-1117 Bu-
dapest, Pázmány P. s. 1./A, Hungary
3 Astronomical Institute of the Charles University, V Holešovičkách 2,
CZ-180 00 Prague 8, Czech Republic
4 Dept. of Astronomy & Astrophysics, Pennsylvania State University,
525 Davey Lab., University Park, PA 16802, USA
5 Dept. of Physics, Bolyai Military University, H-1456 Budapest, POB
12, Hungary
6 Konkoly Observatory, H-1505 Budapest, POB 67, Hungary
7 Max-Planck-Institut für Astronomie, D-69117 Heidelberg, 17 Königstuhl,
Germany
Received October 20, 2003
Abstract. The low energy spectra of some gamma-ray bursts show excess
components besides the power-law dependence. This feature makes it
possible to estimate the gamma photometric redshift of the long
gamma-ray bursts in the BATSE Catalog. There is a good correlation
between the measured optical and the estimated gamma photometric
redshifts. The estimated redshift values for the long bright gamma-ray
bursts are up to z = 4, while for the faint long bursts - which should
be up to z = 20 - the redshifts cannot be determined unambiguously with
this method. The redshift distribution of all the gamma-ray bursts with
known optical redshift agrees quite well with the BATSE based gamma
photometric redshift distribution.
Key words: Cosmology - Gamma-ray burst
1. INTRODUCTION
In this article we present a new method, called gamma photometric
redshift (GPZ) estimation, for estimating the redshifts of the
long GRBs. We utilize the fact that broadband fluxes change sys-
tematically, as characteristic spectral features redshift into, or out of
the observational bands. The situation is in some sense similar to
the optical observations of galaxies, where for galaxies and quasars
the photometric redshift estimation (Csabai et al. 2000; Budavári
et al. 2001) achieved great success in estimating redshifts from
photometry only.
We construct our template spectrum that will be used in the GPZ
process in the following manner: let the spectrum be a sum of the
Band function and of a low-energy soft-excess power-law component,
observed in several cases (Preece et al. 2000). The low energy
cross-over is at Ecr = 90 keV, Eo = 500 keV, and the spectral indices
are α = 3.2, β = 0.5 and γ = 3.0.
Let us introduce the peak flux ratio (PFR hereafter) in the fol-
lowing way:
PFR = (l34 − l12) / (l34 + l12),
where lij is the BATSE DISCSC flux in energy channel Ei < E < Ej ,
here E1 = 25 keV, E2 = E3 = 55 keV, E4 = 100 keV.
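As an illustration (the channel flux variables and the tabulated PFR(z)
curve are placeholders; the curve itself has to be computed from the
template spectrum and each burst's detector response matrix), the PFR and
its inversion to a gamma photometric redshift can be written as:

import numpy as np

def peak_flux_ratio(l12, l34):
    # l12: 25-55 keV DISCSC flux, l34: 55-100 keV DISCSC flux
    return (l34 - l12) / (l34 + l12)

def gpz(pfr_measured, z_grid, pfr_curve):
    # Invert a tabulated PFR(z) relation; only meaningful where the
    # relation is single-valued (z below about 4).
    z_grid, pfr_curve = np.asarray(z_grid), np.asarray(pfr_curve)
    order = np.argsort(pfr_curve)
    return np.interp(pfr_measured, pfr_curve[order], z_grid[order])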
Fig. 1. The theoretical PFR curves calculated from the template spectrum
(α = 3.2, β = 0.5, Ecr = 90 keV) using the average detector response
matrix.
The spectra are changing
quite rapidly with time; the typ-
ical timescale for the time vari-
ation is ≃ (0.5 − 2.5) s (Ryde &
Svensson (1999, 2000)). There-
fore, we will consider the spectra
in the 320ms time interval cen-
tered around the peak-flux. If
we redshift the template spec-
trum and use the detector re-
sponse matrix of the given burst,
we can get for any redshift the
observed flux and the PFR value.
In Fig. 1 we plot the theoretical PFR curves calculated from the
above defined template spectrum using the average detector response
matrices for the 8 bursts that have both BATSE data and measured
redshifts (Klose 2000). In the range of z used (i.e. for z ≲ 4) the
relation between z and PFR
is invertible, hence we can use it to estimate the gamma photometric
redshift (GPZ) from a measured PFR. For the 7 considered GRBs
(leaving out the GRB associated with a supernova and the GRB having
only an upper redshift limit) the estimation error between the real z
and the GPZ is ∆z ≈ 0.33.
2. ESTIMATION OF THE REDSHIFTS
Here we restrict ourselves to long and not very faint GRBs with
T90 > 10 s and F256 > 0.65 photon/(cm^2 s) to avoid problems with
the instrumental threshold (Pendleton et al. 1997; Hakkila et al.
2000). Introducing another cut at F256 > 2.00 photon/(cm^2 s) we
can investigate roughly the brighter half of this sample.
As the soft-excess range redshifts out from the BATSE DISCSC
energy channels around z ≈ 4, the theoretical curves converge to a
constant value. For higher z the PFR starts to decrease. This means that
the method is ambiguous: for the given value of PFR one may have
two redshifts - below and above z ≈ 4. Because for the bright GRBs
the values above z ≈ 4 are practically excluded, for them the method
is usable. Using only the 25 − 55 keV and 55 − 100 keV BATSE
energy channels, this method can be used to estimate GPZ only in
the redshift range z ≲ 4.
Fig. 2. The distribution of the GPZ estimators of the long GRBs having
DISCSC data, for the F256 > 0.65 ph/(cm^2 s) and F256 > 2.0 ph/(cm^2 s)
samples.
Let us assume for a moment
that all observed long bursts we have selected above have z < 4.
Then we can simply calculate the zGPZ redshift for any GRB that has
a PFR from the DISCSC data. Fig. 2 shows the distribution of the
estimated red-
shifts under the assumption that all
GRBs are below z ≈ 4. The dis-
tribution has a clear peak value
around PFR ≈ 0.2, which corre-
sponds to z ≈ (1.5− 2.0).
Although there is a problem
with the degeneracy (e.g. two possible redshift values) we think that
the great majority of values of z obtained for the bright half are
correct. This opinion may be supported by the following arguments:
the obtained distribution of GRBs in z for the bright half is very
similar to the distributions of Schmidt (2001) and Schaefer
et al. (2001). Another problem with moving the bright GRBs into the
z ≳ 4 regime is the extremely high GRB luminosities, ≃ 10^53 erg/s,
that this would imply
(Mészáros & Mészáros, 1996).
[Fig. 3. The redshift distribution of the 17 GRBs with known z and the distributions from the GPZ estimators (F256 > 0.65 ph/(cm²s) sample).]
As an additional statistical test we compared the redshift distribution of the 17 GRBs with observed redshifts with our reconstructed GRB z distributions (limited to the z < 4 range). For the F256 > 0.65 photon/(cm²s) group the KS test gives a 38% probability, i.e. the observed N(< z) probability distribution agrees quite well with the GPZ-reconstructed function.
ACKNOWLEDGMENTS
The useful remarks of Drs. T. Budavári, S. Klose, D. Reichart and A.S. Szalay are kindly acknowledged. This research was supported in part through OTKA grants T024027 (L.G.B.), F029461 (I.H.) and T034549, Czech Research Grant J13/98: 113200004 (A.M.), and NASA grant NAG5-9192 (P.M.).
REFERENCES
Budavári, T., Csabai, I., Szalay, A.S., et al., 2001, AJ, 122, 1163
Csabai, I., Connolly, A.J., Szalay, A.S., et al., 2000, AJ, 119, 69
Hakkila, J., Haglin, D. J., Pendleton, G. N., et al., 2000, ApJ, 538
Klose, S. 2000, Reviews in Modern Astronomy 13, Astronomische Gesellschaft, Hamburg, p. 129
Mészáros, A., & Mészáros, P. 1996, ApJ, 466, 29
Preece, R.D., Briggs, M.S., Pendleton, G.N., et al. 1996, ApJ, 473
Preece, R.D., Briggs, M.S., Mallozzi, et al., 2000, ApJS, 126, 19
Ryde, F., & Svensson, R. 1999, ApJ, 512, 693
Ryde, F., & Svensson, R. 2000, ApJ, 529, L13
Schaefer, B. E., Deng, M., & Band, D. L., 2001, ApJ, 563, L123
Schmidt, M. 2001, ApJ, 552, 36
AN ARCHITECTURE-BASED DEPENDABILITY MODELING FRAMEWORK USING AADL
Ana-Elena Rugina, Karama Kanoun and Mohamed Kaâniche
LAAS-CNRS, University of Toulouse
7 avenue Colonel Roche, 31077 Toulouse Cedex 4, France
Phone:+33(0)5 61 33 62 00, Fax: +33(0)5 61 33 64 11
e-mail: {rugina, kanoun, kaaniche}@laas.fr
ABSTRACT
For efficiency reasons, software system designers aim to
use an integrated set of methods and tools to
describe specifications and designs, and also to perform
analyses such as dependability, schedulability and
performance. AADL (Architecture Analysis and Design
Language) has proved to be efficient for software
architecture modeling. In addition, AADL was designed
to accommodate several types of analyses. This paper
presents an iterative dependency-driven approach for
dependability modeling using AADL. It is illustrated on a
small example. This approach is part of a complete
framework that allows the generation of dependability
analysis and evaluation models from AADL models to
support the analysis of software and system architectures,
in critical application domains.
KEYWORDS
Dependability modeling, AADL, evaluation, architecture
1. Introduction
The increasing complexity of software systems raises
major concerns in various critical application domains, in
particular with respect to the validation and analysis of
performance, timing and dependability requirements.
Model-driven engineering approaches based on
architecture description languages (ADLs) aim at
mastering this complexity at the design level. Over the
last decade, considerable research has been devoted to
ADLs leading to a large number of proposals [1]. In
particular, AADL (Architecture Analysis and Design
Language) [2] has received an increasing interest from the
safety-critical industry (e.g., Honeywell, Rockwell Collins,
Lockheed Martin, the European Space Agency, Airbus)
in recent years. It has been standardized under the
auspices of the International Society of Automotive
Engineers (SAE), to support the design and analysis of
complex real-time safety-critical applications. AADL
provides a standardized textual and graphical notation, for
describing architectures with functional interfaces, and for
performing various analyses to determine the behavior
and performance of the system being modeled. AADL has
been designed to be extensible to accommodate analyses
that the core language does not support, such as
dependability and performance.
In critical application domains, one of the challenges
faced by the software engineers concerns: 1) the
description of the software architecture and its dynamic
behavior taking into account the impact of errors and
failures, and 2) the evaluation of quantitative measures of
relevant dependability properties such as reliability,
availability and safety, allowing them to assess the impact
of errors and failures on the service. For pragmatic
reasons, the designers using an AADL-based engineering
approach are interested in using an integrated set of
methods and tools to describe specifications and designs,
and to perform dependability evaluations. The AADL
Error Model Annex [3] has been defined to complement
the description capabilities of the AADL core language
standard by providing features with precise semantics to
be used for describing dependability-related
characteristics in AADL models (faults, failure modes and
repair assumptions, error propagations, etc.). AADL and
the AADL Error Model Annex are supported by the Open
Source AADL Tool Environment (OSATE)1.
At the current stage, there is a lack of methodologies and
guidelines to help the developers, using an AADL based
engineering approach, to use the notations defined in the
standard for describing complex dependability models
reflecting real-life systems with multiple dependencies
between components. The objective of this paper is to
propose a structured method for AADL dependability
model construction. The AADL model is built and
validated iteratively, taking into account progressively the
dependencies between the components.
The approach proposed in this paper is complementary to
other research studies focused on the extension of the
AADL language capabilities to support formal
verifications and analyses (see e.g. [4]). Also, it is
intended to be complementary to other studies focused on
the integration of formal verification, dependability and
performance related activities in the general context of
1 http://www.aadl.info/OpenSourceAADLToolEnvironment.html
model driven engineering approaches based on ADLs and
on UML (see e.g., [5-9]).
The remainder of the paper is organized as follows.
Section 2 presents the AADL concepts that are necessary
for understanding our modeling approach. Section 3 gives
an overview of our framework for system dependability
modeling and evaluation using AADL. Section 4 presents
the iterative approach for building the AADL
dependability model. Section 5 illustrates some of the
concepts of our approach on a small example and
section 6 concludes the paper.
2. AADL concepts
The AADL core language allows analyzing the impact of
different architecture choices (such as scheduling policy
or redundancy scheme) on a system’s properties [10]. An
architecture specification in AADL is a hierarchical
collection of interacting components (software and
compute platform) combined in subsystems. Each AADL
component is modeled at two levels: in the component
type and in one or more component implementations
corresponding to different implementation structures of
the component in terms of subcomponents and
connections. The AADL core language is designed to
describe static architectures with operational modes for
their components. However, it can be extended to
associate additional information to the architecture.
AADL error models are an extension intended to support
(qualitative and quantitative) analyses of dependability
attributes. The AADL Error Model Annex defines a sub-
language to declare reusable error models within an error
model annex library. The AADL architecture model
serves as a skeleton for error model instances. Error
model instances can be associated with components of the
system and with the system itself.
The component error models describe the behavior of
the components with which they are associated, in the
presence of internal faults and recovery events, as well as
in the presence of external propagations from the
component’s environment. Error models have two levels
of description: the error model type and the error model
implementation. The error model type declares a set of
error states, error events (internal to the
component) and error propagations2 (events that
propagate, from one component to other components,
through the connections and bindings between
components of the architecture model). Propagations have
associated directions (in or out or in out). Error
model implementations declare transitions between
states, triggered by events and propagations declared in
the error model type. Both the type and the
implementation can declare Occurrence properties that
2 Error states can also model error free states, error events can also
model repair events and error propagations can model all kinds of
notifications.
specify the arrival rate or the occurrence probability of
events and propagations. An out propagation occurs
according to a specified Occurrence property when it
is named in a transition and the current state is the origin
of the transition. If the source state and the destination
state of a transition triggered by an out propagation are
the same, the propagation is sent out of the component but
does not influence the state of the sender component. An
in propagation occurs as a consequence of an out
propagation from another component. Figure 1 shows an
error model example.
Error Model Type [simple]
error model simple
features
Error_Free: initial error state;
Failed: error state;
Fail: error event
{Occurrence => Poisson λ};
Recover: error event
{Occurrence => Poisson µ};
KO: in out error propagation
{Occurrence => fixed p};
end simple;
Error Model Implementation [simple.general]
error model implementation
simple.general
transitions
Error_Free-[Fail] -> Failed;
Error_Free-[in KO] -> Failed;
Failed-[Recover] -> Error_Free;
Failed-[out KO] -> Failed;
end simple.general;
Figure 1. Simple error model
Error model instances can be customized to fit a particular
component through the definition of Guard properties
that control and filter propagations by means of Boolean
expressions.
The system error model is defined as a composition of a
set of concurrent finite stochastic automata corresponding
to components. In the same way as the entire architecture,
the system error model is described hierarchically. The
state of a system that contains subcomponents can be
specified as a function of its subcomponents’ states (i.e.,
the system has a derived error model).
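To make the notion of composition concrete, the toy sketch below (ours; it does not use OSATE or any AADL tooling) builds the product state space of two instances of the simple error model and evaluates a derived system state. The derived-state rule used is an arbitrary example, not something imposed by the Error Model Annex.

from itertools import product

# Each component error model contributes its set of error states
# (cf. the `simple` error model of Figure 1: Error_Free / Failed).
component_states = {
    "Component1": ["Error_Free", "Failed"],
    "Component2": ["Error_Free", "Failed"],
}

# The system error model is the concurrent composition of the component
# automata: its state space is the Cartesian product of their state spaces.
system_states = list(product(*component_states.values()))

def derived_system_state(c1_state, c2_state):
    """Derived error model: the system state is a function of the
    subcomponents' states.  The rule below (system Failed only when both
    components are Failed) is an illustrative assumption."""
    both_failed = c1_state == "Failed" and c2_state == "Failed"
    return "Failed" if both_failed else "Error_Free"

for state in system_states:
    print(state, "->", derived_system_state(*state))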
3. Overview of the modeling framework
For complex systems, the main difficulty for building a
dependability model arises from dependencies between
the system components. Dependencies can be of several
types, identified in [11]: functional, structural or related to
the recovery and maintenance strategies. Exchange of data
or transfer of intermediate results from one component to
another is an example of functional dependency. The fact
that a thread runs on a processor induces a structural
dependency between the thread and the processor. Sharing
a recovery or maintenance facility between several
components leads to a recovery or maintenance
dependency. Functional and structural dependencies can
be grouped into an architecture-based dependency class,
as they are triggered by physical or logical connections
between the dependent components at architectural level.
Instead, recovery and maintenance dependencies are not
always visible at architectural level.
A structured approach is necessary to model dependencies
in a systematic way, to promote model reusability, to
avoid errors in the resulting model of the system and to
facilitate its validation. In our approach, the AADL
dependability-oriented model is built in a progressive and
iterative way. More concretely, in a first iteration, we
propose to build the model of the system’s components,
representing their behavior in the presence of their own
faults and recovery events only. The components are thus
modeled as if they were isolated from their environment.
In the following iterations, dependencies between basic
error models are introduced progressively.
This approach is part of a complete framework that allows
the generation of dependability analysis and evaluation
models from AADL models. An overview of this
framework is presented in Figure 2.
Figure 2. Modeling framework
The first step is devoted to the modeling of the application
architecture in AADL (in terms of components and
operational modes of these components). The AADL
architecture model may be available if it has been already
built for other purposes.
The second step concerns the specification of the
application behavior in the presence of faults through
AADL error models associated with components of the
architecture model. The error model of the application is a
composition of the set of component error models.
The architecture model and the error model of the
application form the dependability-oriented AADL model,
referred to as the AADL dependability model.
The third step aims at building an analytical dependability
evaluation model, from the AADL dependability model,
based on model transformation rules.
The fourth step is devoted to the dependability evaluation
model processing that aims at evaluating quantitative
measures characterizing dependability attributes. This step
is entirely based on existing processing tools.
The iterative approach can be applied to the second step
of the modeling framework only or to the second and third
steps together. In the latter case, semantic validation based
on the analytical model, after each iteration, is helpful to
identify specification errors in the AADL dependability
model.
Due to space limitations, we focus only on the first and
second steps in this paper. A transformation from AADL
to generalized stochastic Petri nets (GSPN) for
dependability evaluation purposes is presented in [12].
4. AADL dependability model construction
To illustrate the proposed approach, the rest of this section
presents successively guidelines for modeling an
architecture-based dependency (structural or functional)
and a recovery and maintenance dependency. More
general practical aspects for building the AADL
dependability model are given at the end of this section.
Note that we illustrate the principles using the graphical
notation for AADL composite components (system
components). However, they apply to all types of
components and connections.
4.1. Architecture-based dependency
The dependency is modeled in the error models associated
with the dependent components, by specifying
respectively outgoing and incoming propagations and
their impact on the corresponding error model. An
example is shown in Figure 3: Component 1 sends data to
Component 2, thus we assume that, at the error model
level, the behavior of Component 2 depends on that of
Component 1.
Figure 3. Architecture-based dependency
Instances of the same error model, shown in Figure 1, are
associated both with Component 1 and with Component 2.
However, the AADL dependability model is asymmetric
because of the unidirectional connection between
Component 1 and Component 2. Thus, the out
propagation KO declared in the error model instance
associated with Component 2 is inactive (i.e., even if it
occurs, it cannot propagate to Component 1).
The out propagation KO from the error model instance
of Component 1, together with its Occurrence property
and the AADL transition triggered by it form the “sender”
part of the dependency. It means that when Component 1
fails, it sends a propagation through the unidirectional
connection. The in propagation KO from the error model
instance of Component2 together with the AADL
transition triggered by it form the “receiver” part of the
dependency. Thus, an incoming propagation KO causes
the failure of the receiving component.
In real applications, architecture-based dependencies
usually require using more advanced propagation
controlling and filtering through Guard properties. In
particular, Boolean expressions can be defined to specify
the consequences of a set of propagations occurring in a
set of sender components on a receiver component.
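For this two-component example, the kind of analytical evaluation model produced in the third step of the framework can be sketched by hand as a small continuous-time Markov chain (the framework itself generates GSPNs [12]; the sketch below is ours). It uses the rates λ and µ and the propagation probability p of the error model of Figure 1, reads the "Occurrence => fixed p" declaration as "when Component 1 fails, its KO propagation also fails Component 2 with probability p", and uses placeholder numerical values.

import numpy as np

lam, mu, p = 1.0e-3, 0.1, 0.3        # Fail rate, Recover rate, P(KO propagates) -- placeholders
states = ["(EF,EF)", "(F,EF)", "(EF,F)", "(F,F)"]   # EF = Error_Free, F = Failed

Q = np.zeros((4, 4))                 # CTMC generator (rows sum to zero)
Q[0, 1] = lam * (1 - p)              # Component 1 fails, KO not propagated
Q[0, 3] = lam * p                    # Component 1 fails and KO also fails Component 2
Q[0, 2] = lam                        # Component 2 fails on its own (its out KO is inactive)
Q[1, 0] = mu                         # Component 1 recovers
Q[1, 3] = lam                        # Component 2 fails while Component 1 is down
Q[2, 0] = mu                         # Component 2 recovers
Q[2, 3] = lam                        # Component 1 fails (Component 2 already failed)
Q[3, 1] = mu                         # Component 2 recovers
Q[3, 2] = mu                         # Component 1 recovers
np.fill_diagonal(Q, -Q.sum(axis=1))

# Steady-state distribution pi: solve pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Availability of Component 2's service (assuming the system is considered
# up whenever Component 2 is Error_Free).
print("P(Component 2 Error_Free) =", pi[0] + pi[1])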
4.2. Recovery and maintenance dependency
Recovery and maintenance dependencies need to be
described when recovery and maintenance facilities are
shared between components or when the maintenance
activity of some components has to be carried out
according to a given order or a specified strategy (e.g., a
thread can be restarted only if another thread is running).
Components that are not dependent at architectural level
may become dependent due to the recovery and
maintenance strategy. Thus, the AADL dependability
model might need some adjustments to support the
description of dependencies related to the maintenance
strategy. As error models interact only via propagations
through architectural features (i.e., connections, bindings),
the recovery and maintenance dependency between
components’ error models must be supported by the
architecture model. Thus, besides the architecture
components, we may need to model (at architectural
level) a component allowing to describe the recovery and
maintenance strategy. Figure 4-a shows an example of
AADL dependability model. In this architecture,
Component 3 and Component 4 do not interact at the
architecture level. However, if we assume that they share
a recovery and maintenance facility, the recovery and
maintenance strategy has to be taken into account in the
error model of the application. Thus, it is necessary to
represent the recovery and maintenance facility at the
architectural level, as shown in Figure 4-b in order to
model explicitly the dependency between Component 3 and Component 4.
Also, the error models of components that are dependent with
regard to the recovery and maintenance strategy might
need some adjustments. For example, to represent the fact
that Component 3 can only restart if Component 4 is
running, one needs to distinguish between a failed state of
Component 3 and a failed state where Component 3 is
allowed to restart.
- a - - b -
Figure 4. Maintenance dependency
4.3. Practical aspects
The order for modeling dependencies does not impact the
final AADL dependability model. However, it may
impact the reusability of parts of the model. Thus, the
order may be chosen according to the context of the
targeted analysis. For example, if the analysis is meant to
help the user to choose the best-adapted structure for a
system whose functions are completely defined, it may be
convenient to introduce first functional dependencies
between components and then structural dependencies, as
the model corresponding to functional dependencies is to
be reused. Generally, recovery and maintenance
dependencies are modeled at the end, as one important
aim of the dependability evaluation is to find the best-
suited recovery and maintenance strategies for an
application. Recovery and maintenance dependencies may
have an impact on the system’s structure.
Not all the details of the architecture model are necessary
for the AADL dependability model. Only components that
have associated error models and all connections and
bindings between them are necessary. This allows a
designer to evaluate dependability measures at different
stages in the development cycle by moving from a lower
fidelity AADL dependability model to a detailed one. In
some cases, not all components having associated error
models are part of the AADL dependability model. The
AADL Error Model Annex offers two useful abstraction
options for error models of components composed of
subcomponents:
− The first option is to declare an abstract error model
for a system component. In this case, the
corresponding component is seen as a black box (i.e.,
the detailed subcomponents’ error models are not part
of the AADL dependability model). This option is
useful to abstract away modeling details in case an
architecture model with too detailed error models
associated with components does exist for other
purposes. Issues linked to the relationship between
abstract and concrete error models have been
mentioned in [13].
− The second option is to define the state of a system
component as a function of its subcomponents’ states.
This option can be used to specify state classes for
the overall application. These classes are useful in the
evaluation of measures. If the user wishes to evaluate
reliability or availability, it is necessary to specify the
system states that are to be considered as failed states.
If in addition, the user wishes to evaluate safety, it is
necessary to specify the system states that are
considered as catastrophic.
5. Example
In this section we illustrate our modeling approach on a
small software architecture representing a process whose
functional role is to compute a result. The computation is
divided in three sub computations, each of them being
performed by a thread. The thread Compute2 uses the
result obtained by the thread Compute1 and the thread
Compute3 uses the result obtained by the thread
Compute2 to compute the result expected from the
process. The three threads are connected through data
connections according to the pipe and filter architectural
style [14]. Due to space limitations, we only take into
account two dependencies:
− An architecture-based dependency between the
computing threads: a failure in one of the computing
threads may cause the failure of the following thread
(with a probability p). In some cases, cascading
failures can occur.
− A recovery dependency: Compute3 can only recover
if Compute1 and Compute2 are error free. We assume
that Compute2 can recover if Compute1 is not error
free.
The AADL dependability model of this application is
shown in Figure 5 using the AADL graphical notation.
Figure 5. AADL dependability model
The AADL dependability model of this application is
built in three iterations. The computing threads’ behavior
in the presence of their own faults and recovery events is
represented in the first iteration. The propagation KO
together with corresponding transitions are added in a
second iteration to represent the architecture-based
dependency. The thread Compute1 can have an impact on
Compute2 and Compute2 can have an impact on
Compute3. We recall that the opposite is not possible, as
the connections between threads are unidirectional. The
recovery dependency is modeled in the third iteration. It
requires the existence of a Recovery thread in the
architecture model (see light grey part of Figure 5). Its
role is to send (through the out port to3) a
RecoverAuthorize propagation to Compute3 if Compute1
and Compute2 are error free.
Figure 6-a shows the error model Comp.general
associated with threads Compute1 and Compute2. Figure
6-b shows the error model Comp3.general associated with
the threads Compute3. The three iterations are
highlighted. Each line tagged with a (+) sign is added to
the error model corresponding to the previous iteration
while each line tagged with a (-) sign is removed from it
during the current iteration. The first and second iterations
are the same for all three computing threads. In the third
iteration, it is necessary to distinguish between a failed
state and a failed state from which Compute3 is
authorized to restart. This leads to removing a transition
declared in the first iteration, and adding a state
(CanRecover) and two transitions linking it to the state
machine.
Figure 7 shows the Guard_Out property applied to port
to3 of the Recovery thread in the third iteration. This
property specifies that a RecoverAuthorize propagation is
sent to Compute3 through port to3 when OK propagations
are received through ports in1 and in2 (meaning that
Compute1 and Compute2 are error free). The Recovery
thread has an associated error model that is not shown
here. It declares in and out propagations used in the
Guard_Out property.
The main idea of this method is to verify and validate the
model at each iteration. If a problem arises during
iteration i, only the part of the current AADL
dependability model corresponding to iteration i is
questioned. Thus, the validation process is facilitated
especially in the context of complex systems.
6. Conclusion
This paper presented an iterative approach for system
dependability modeling using AADL. This approach is
meant to ease the task of analyzing dependability
characteristics and evaluating dependability measures for
the AADL users community. Our approach assists the
user in the structured construction of the AADL
dependability model (i.e., architecture model and
dependability-related information). To support and trace
model evolution, this approach proposes that the user
builds the model iteratively. Components’ behaviors in
the presence of faults are modeled in the first iteration as
if they were isolated. Then, each iteration introduces a
new dependency between system components. Error
models representing the behavior of several types of
system components and several types of dependencies
may be placed in a library and then instantiated to
minimize the modeling effort and maximize the
reusability of models.
The OSATE toolset is able to support our modeling
approach. It also allows choosing component models and
error models from libraries. For the sake of illustration,
we used simple examples in this paper. We have already
applied the iterative modeling approach to a system with
multiple dependencies in [12] and we plan to validate it
against other complex case studies.
Error Model Type [Comp]
error model Comp
features
-- iteration 1
(+) Error_Free: initial error state;
(+) Failed: error state;
(+) Fail: error event
(+) {Occurrence => Poisson λ};
(+) Recover: error event
(+) {Occurrence => Poisson µ};
-- iteration 2
(+) KO: in out error propagation
(+) {Occurrence => fixed p};
-- iteration 3
(+) OK: out error propagation
(+) {Occurrence => fixed 1};
end Comp;
Error Model Type [Comp3]
error model Comp3
features
-- iteration 1
(+) Error_Free: initial error state;
(+) Failed: error state;
(+) Fail: error event
(+) {Occurrence => Poisson λ};
(+) Recover: error event
(+) {Occurrence => Poisson µ};
-- iteration 2
(+) KO: in out error propagation
(+) {Occurrence => fixed p};
-- iteration 3
(+) CanRecover: error state;
(+) OK: in error propagation;
end Comp3;
Error Model Implementation [Comp.general]
error model implementation Comp.general
transitions
-- iteration 1
(+) Error_Free-[Fail]->Failed;
(+) Failed-[Recover]->Error_Free;
-- iteration 2
(+) Error_Free-[in KO]->Failed;
(+) Failed-[out KO]->Failed;
-- iteration 3
(+) Error_Free-[out OK]->Error_Free;
end Comp.general;
Error Model Implementation [Comp3.general]
error model implementation Comp3.general
transitions
-- iteration 1
(+) Error_Free-[Fail]->Failed;
(+) Failed-[Recover]->Error_Free;
-- iteration 2
(+) Error_Free-[in KO]->Failed;
(+) Failed-[out KO]->Failed;
-- iteration 3
(-) Failed-[Recover]->Error_Free;
(+) Failed-[RecoverAuthorize]->CanRecover;
(+) CanRecover-[Recover]->Error_Free;
end Comp3.general;
a: Error Model for Compute1 and Compute2    b: Error Model for Compute3
Figure 6. Error models for Compute1/Compute2 (a) and for Compute3 (b)
Guard_Out [port Recovery.to3]
-- iteration 3
(+) Guard_Out =>
(+) RecoverAuthorize when
(+) (from1[OK]and from2[OK])
(+) mask when others
(+) applies to to3;
Figure 7. Guard_Out property (port Recovery.to3)
Acknowledgements
This work is partially supported by 1) the European Commission
(European integrated project ASSERT No. IST 004033 and
network of excellence ReSIST No. IST 026764) and 2) the
European Social Fund.
References
[1] N. Medvidovic and R. N. Taylor, A classification and
comparison framework for Software Architecture
Description Languages, IEEE Transactions on Software
Engineering, 26, 2000, 70-93.
[2] SAE-AS5506, Architecture Analysis and Design Language,
Society of Automotive Engineers, 2004.
[3] SAE-AS5506/1, Architecture Analysis and Design
Language (AADL) Annex Volume 1, Annex E: Error
Model Annex, Society of Automotive Engineers, 2006.
[4] J.-M. Farines, et al., The Cotre project: rigorous software
development for real time systems in avionics, 27th
IFAC/IFIP/IEEE Workshop on Real Time Programming,
Zielona Gora, Poland, 2003.
[5] R. Allen and D. Garlan, A Formal Basis for Architectural
Connection, ACM Transactions on Software Engineering
and Methodology, 6, 1997, 213-249.
[6] M. Bernardo, P. Ciancarini, and L. Donatiello, Architecting
Families of Software Systems with Process Algebras, ACM
Transactions on Software Engineering and Methodology,
11, 2002, 386-426.
[7] A. Bondavalli, et al., Dependability Analysis in the Early
Phases of UML Based System Design, Int. Journal of
Computer Systems - Science & Engineering, 16, 2001, 265-
275.
[8] S. Bernardi, S. Donatelli, and J. Merseguer, From UML
Sequence Diagrams and Statecharts to analysable Petri Net
models, 3rd Int. Workshop on Software and Performance
(WOSP 2002), Rome, Italy, 2002, 35-45.
[9] P. King and R. Pooley, Using UML to Derive Stochastic
Petri Net Models, 15th annual UK Performance
Engineering Workshop, 1999, 45-56.
[10] P. H. Feiler, et al., Pattern-Based Analysis of an Embedded
Real-time System Architecture, 18th IFIP World Computer
Congress, ADL Workshop, Toulouse, France, 2004, 83-91.
[11] K. Kanoun and M. Borrel, Fault-tolerant systems
dependability. Explicit modeling of hardware and software
component-interactions, IEEE Transactions on Reliability,
49, 2000, 363-376.
[12] A. E. Rugina, K. Kanoun, and M. Kaâniche, AADL-based
Dependability Modelling, LAAS-CNRS Research Report
n°06209, April 2006, 85p.
[13] P. Binns and S. Vestal, Hierarchical composition and
abstraction in architecture models, 18th IFIP World
Computer Congress, ADL Workshop, Toulouse, France,
2004, 43-52.
[14] M. Shaw and D. Garlan, Software Architecture:
Perspectives on an Emerging Discipline (Prentice-Hall,
1996).
A PRIORI ESTIMATES FOR WEAK SOLUTIONS OF
COMPLEX MONGE-AMPÈRE EQUATIONS
S.BENELKOURCHI & V.GUEDJ & A.ZERIAHI
Abstract. Let X be a compact Kähler manifold and ω a smooth
closed form of bidegree (1, 1) which is nonnegative and big. We study
the classes Eχ(X,ω) of ω-plurisubharmonic functions of finite weighted
Monge-Ampère energy. When the weight χ has fast growth at infinity,
the corresponding functions are close to be bounded.
We show that if a positive Radon measure is suitably dominated by
the Monge-Ampère capacity, then it belongs to the range of the Monge-
Ampère operator on some class Eχ(X,ω). This is done by establishing
a priori estimates on the capacity of sublevel sets of the solutions.
Our result extends U.Cegrell’s and S.Kolodziej’s results and puts
them into a unifying frame. It also gives a simple proof of S.T.Yau’s
celebrated a priori C0-estimate.
2000 Mathematics Subject Classification: 32W20, 32Q25, 32U05.
1. Introduction
Let X be a compact connected Kähler manifold of dimension n ∈ N∗.
Throughout the article ω denotes a smooth closed form of bidegree (1, 1)
which is nonnegative and big, i.e. such that ∫_X ω^n > 0. We continue the
study started in [GZ 2], [EGZ] of the complex Monge-Ampère equation
(MA)_µ   (ω + dd^c ϕ)^n = µ,
where ϕ, the unknown function, is ω-plurisubharmonic: this means that
ϕ ∈ L1(X) is upper semi-continuous and ω+ ddcϕ ≥ 0 is a positive current.
We let PSH(X,ω) denote the set of all such functions (see [GZ 1] for their
basic properties). Here µ is a fixed positive Radon measure of total mass
µ(X) = ∫_X ω^n, and d = ∂ + ∂̄, d^c = (1/2iπ)(∂ − ∂̄).
Following [GZ 2] we say that a ω-plurisubharmonic function ϕ has fi-
nite weighted Monge-Ampère energy, ϕ ∈ E(X,ω), when its Monge-Ampère
measure (ω+ ddcϕ)n is well defined, and there exists an increasing function
χ : R− → R− such that χ(−∞) = −∞ and χ ◦ ϕ ∈ L1((ω + ddcϕ)n).
In general χ has very slow growth at infinity, so that ϕ is far from being
bounded.
The purpose of this article is twofold. First we extend one of the main
results of [GZ 2] by showing
THEOREM A. There exists ϕ ∈ E(X,ω) such that µ = (ω+ddcϕ)n if and
only if µ does not charge pluripolar sets.
This result has been established in [GZ 2] when ω is a Kähler form. It
is important for applications to complex dynamics and Kähler geometry to
consider as well forms ω that are less positive (see [EGZ]).
We then look for conditions on the measure µ which insure that the
solution ϕ is almost bounded. Following the seminal work of S. Kolodziej
[K 2,3], we say that µ is dominated by the Monge-Ampère Capacity Capω
if there exists a function F : R+ → R+ such that limt→0+ F (t) = 0 and
(†) µ(K) ≤ F (Capω(K)), for all Borel subsets K ⊂ X.
Here Capω denotes the global version of the Monge-Ampère capacity intro-
duced by E.Bedford and A.Taylor [BT] (see section 2).
Observe that µ does not charge pluripolar sets since F (0) = 0. When
F(x) ≲ x^α vanishes at order α > 1 and ω is Kähler, S. Kolodziej has
proved [K 2] that the solution ϕ ∈ PSH(X,ω) of (MA)µ is continuous. The
boundedness part of this result was extended in [EGZ] to the case when
ω is merely big and nonnegative. If F(x) ≲ x^α with 0 < α < 1, two of
us have proved in [GZ 2] that the solution ϕ has finite χ−energy, where
χ(t) = −(−t)p, p = p(α) > 0. This result was first established by U. Cegrell
in a local context [Ce].
Another objective of this article is to fill in the gap inbetween Cegrell’s
and Kolodziej’s results, by considering all intermediate dominating functions
F. Write F_ε(x) = x [ε(−ln(x)/n)]^n, where ε : R → [0,∞[ is nonincreasing.
Our second main result is:
THEOREM B. If µ(K) ≤ Fε(Capω(K)) for all Borel subsets K ⊂ X,
then µ = (ω + ddcϕ)n where ϕ ∈ PSH(X,ω) satisfies supX ϕ = 0 and
Cap_ω(ϕ < −s) ≤ exp(−n H^{-1}(s)).
Here H^{-1} is the reciprocal function of H(x) = e ∫_0^x ε(t) dt + s_0, where s_0 = s_0(ε, ω) ≥ 0 only depends on ε and ω.
This general statement has several useful consequences:
• If ∫_0^{+∞} ε(t) dt < +∞, then H^{-1}(s) = +∞ for s ≥ s_∞ := e ∫_0^{+∞} ε(t) dt + s_0, hence Cap_ω(ϕ < −s) = 0. This means that ϕ is bounded from below by −s_∞. This result is due to S. Kolodziej [K 2,3] when ω is Kähler, and [EGZ] when ω ≥ 0 is merely big;
• the condition (†) is easy to check for measures with density in Lp, p >
1. Our result thus gives a simple proof (Corollary 3.2), following the
seminal approach of S. Kolodziej ([K2]), of the C0-a priori estimate
of S.T. Yau [Y], which is crucial for proving the Calabi conjecture
(see [T] for an overview);
• when ∫_0^{+∞} ε(t) dt = +∞, the solution ϕ is generally unbounded. The faster ε(t) decreases towards zero, the faster the growth of H^{-1} at infinity, hence the closer ϕ is to being bounded;
• the special case ε ≡ 1 is of particular interest. Here µ(·) ≤ Capω(·),
and our result shows that Capω(ϕ < −s) decreases exponentially
fast, hence ϕ has “ loglog-singularities”. These are the type of sin-
gularities of the metrics used in Arakelov geometry in relation with
measures µ = fdV whose density has Poincaré-type singularities
(see [Ku], [BKK]).
We prove Theorem B in section 3, after establishing Theorem A in section
2.1 and recalling some useful facts from [GZ 2], [EGZ] in section 2.2. We
then test the sharpness of our estimates in section 4, where we give examples
of measures fulfilling our assumptions: these are absolutely continuous with
respect to ωn, and their density do not belong to Lp, for any p > 1.
2. Weakly singular quasiplurisubharmonic functions
The class E(X,ω) of ω-psh functions with finite weighted Monge-Ampère
energy has been introduced and studied in [GZ 2]. It is the largest subclass
of PSH(X,ω) on which the complex Monge-Ampère operator (ω+ddc·)n is
well-defined and the comparison principle is valid. Recall that ϕ ∈ E(X,ω)
if and only if (ω + dd^c ϕ_j)^n (ϕ ≤ −j) → 0, where ϕ_j := max(ϕ,−j).
2.1. The range of the Monge-Ampère operator. The range of the
operator (ω + ddc·)n acting on E(X,ω) has been characterized in [GZ 2]
when ω is a Kähler form. We extend here this result to the case when ω is
merely nonnegative and big.
Theorem 2.1. Assume ω is a smooth closed nonnegative (1,1) form on X,
and µ is a positive Radon measure such that µ(X) = ∫_X ω^n > 0.
Then there exists ϕ ∈ E(X,ω) such that µ = (ω + ddcϕ)n if and only if µ
does not charge pluripolar sets.
Proof. We can assume without loss of generality that µ and ω are normalized
so that µ(X) = ∫_X ω^n = 1. Consider, for A > 0,
CA(ω) := {ν probability measure / ν(K) ≤ A · Capω(K), for all K ⊂ X},
where Capω denotes the Monge-Ampère capacity introduced by E.Bedford
and A.Taylor in [BT] (see [GZ 1] for this compact setting). Recall that
Cap_ω(K) := sup { ∫_K (ω + dd^c u)^n / u ∈ PSH(X,ω), 0 ≤ u ≤ 1 }.
We first show that a measure ν ∈ CA(ω) is the Monge-Ampère of a func-
tion ψ ∈ E^p(X,ω), for any 0 < p < 1, where
E^p(X,ω) := { ψ ∈ E(X,ω) / ψ ∈ L^p((ω + dd^c ψ)^n) }.
Indeed, fix ν ∈ CA(ω), 0 < p < 1, and ωj := ω + εjΩ, where Ω is
a Kähler form on X, and εj > 0 decreases towards zero. Observe that
PSH(X,ω) ⊂ PSH(X,ωj), hence Capω(.) ≤ Capωj(.), so that ν ∈ CA(ωj).
It follows from Proposition 3.6 and 2.7 in [GZ 1] that there exists C0 > 0
such that for any v ∈ PSH(X,ωj) normalized by supX v = −1, we have
Cap_{ω_j}(v < −t) ≤ C_0 / t, for all t ≥ 1.
This yields E^p(X,ω_j) ⊂ L^p(ν): if v ∈ E^p(X,ω_j) with sup_X v = −1, then
∫_X (−v)^p dν = p · ∫_0^{+∞} t^{p−1} ν(v < −t) dt ≤ pA · ∫_1^{+∞} t^{p−1} Cap_ω(v < −t) dt + C_p ≤ pAC_0/(1−p) + C_p < +∞.
It follows therefore from Theorem 4.2 in [GZ 2] that there exists ϕj ∈
E^p(X,ω_j) with sup_X ϕ_j = −1 and (ω_j + dd^c ϕ_j)^n = c_j · ν, where c_j = ∫_X ω_j^n ≥ 1 decreases towards 1 as ε_j decreases towards zero. We can assume without loss of generality that 1 ≤ c_j ≤ 2. Observe that the ϕ_j's have uniformly bounded energies, namely
∫_X (−ϕ_j)^p (ω_j + dd^c ϕ_j)^n ≤ 2 ∫_X (−ϕ_j)^p dν, which is uniformly bounded by the previous estimate.
Since supX ϕj = −1, we can assume (after extracting a convergent subse-
quence) that ϕ_j → ϕ in L^1(X), where ϕ ∈ PSH(X,ω), sup_X ϕ = −1.
Set φ_j := (sup_{l≥j} ϕ_l)^*. Thus φ_j ∈ PSH(X,ω_j), and φ_j decreases towards
ϕ. Since φj ≥ ϕj , it follows from the “fundamental inequality” (Lemma 2.3
in [GZ 2]) that
∫_X (−φ_j)^p (ω_j + dd^c φ_j)^n ≤ 2^n ∫_X (−ϕ_j)^p (ω_j + dd^c ϕ_j)^n ≤ C′ < +∞.
Hence it follows from stability properties of the class Ep(X,ω) that ϕ ∈
Ep(X,ω) (see Proposition 5.6 in [GZ 2]). Moreover
(ω_j + dd^c φ_j)^n ≥ inf_{l≥j} (ω_l + dd^c ϕ_l)^n ≥ ν,
hence (ω + dd^c ϕ)^n = lim (ω_j + dd^c φ_j)^n ≥ ν. Since ∫_X ω^n = ν(X) = 1, this yields ν = (ω + dd^c ϕ)^n as claimed above.
We can now prove the statement of the theorem. One implication is
obvious: if µ = (ω+ddcϕ)n, ϕ ∈ E(X,ω), then µ does not charge pluripolar
sets, as follows from Theorem 1.3 in [GZ 2].
So we assume now that µ does not charge pluripolar sets. Since C1(ω) is
a compact convex set of probability measures which contains all measures
(ω + ddcu)n, u ∈ PSH(X,ω), 0 ≤ u ≤ 1, we can project µ onto C1(ω) and
get, by a generalization of Radon-Nikodym theorem (see [R], [Ce]),
µ = f · ν, ν ∈ C_1(ω), 0 ≤ f ∈ L^1(ν).
Now ν = (ω + ddcψ)n for some ψ ∈ E1/2(X,ω), ψ ≤ 0, as follows from the
discussion above. Replacing ψ by e^ψ shows that we can actually assume ψ
to be bounded (see Lemma 4.5 in [GZ 2]). We can now apply line by line the
same proof as that of Theorem 4.6 in [GZ 2] to conclude that µ = (ω+ddcϕ)n
for some ϕ ∈ E(X,ω). ∎
2.2. High energy and capacity estimates. Given χ : R− → R− an
increasing function, we consider, following [GZ 2],
E_χ(X,ω) := { ϕ ∈ E(X,ω) / ∫_X (−χ)(−|ϕ|) (ω + dd^c ϕ)^n < +∞ }.
Alternatively a function ϕ ≤ 0 belongs to E_χ(X,ω) if and only if
sup_j ∫_X (−χ) ◦ ϕ_j (ω + dd^c ϕ_j)^n < +∞, where ϕ_j := max(ϕ,−j)
is the canonical approximation of ϕ by bounded ω-psh functions. When
χ(t) = −(−t)^p, E_χ(X,ω) is the class E^p(X,ω) used in the previous section.
The properties of classes Eχ(X,ω) are quite different whether the weight
χ is convex (slow growth at infinity) or concave. In previous works [GZ 2],
two of us were mainly interested in weights χ of moderate growth at infinity
(at most polynomial). Our main objective in the sequel is to construct
solutions ϕ of (MA)µ which are “almost bounded”, i.e. in classes Eχ(X,ω)
for concave weights χ of arbitrarily high growth.
For this purpose it is useful to relate the property ϕ ∈ Eχ(X,ω) to the
speed of decreasing of Capω(ϕ < −t), as t → +∞. We set
Ê_χ(X,ω) := { ϕ ∈ PSH(X,ω) / ∫_0^{+∞} t^n χ′(−t) Cap_ω(ϕ < −t) dt < +∞ }.
An important tool in the study of classes Eχ(X,ω) are the “fundamental
inequalities” (Lemmas 2.3 and 3.5 in [GZ 2]), which allow to compare the
weighted energy of two ω-psh functions ϕ ≤ ψ. These inequalities are only
valid for weights of slow growth (at most polynomial), while they become
immediate for classes Êχ(X,ω). So are the convexity properties of Êχ(X,ω).
We summarize this and compare these classes in the following:
Proposition 2.2. The classes Êχ(X,ω) are convex and stable under maxi-
mum: if Êχ(X,ω) ∋ ϕ ≤ ψ ∈ PSH(X,ω), then ψ ∈ Êχ(X,ω).
One always has Êχ(X,ω) ⊂ Eχ(X,ω), while
E_χ̂(X,ω) ⊂ Ê_χ(X,ω), where χ′(t − 1) = t^n χ̂′(t).
Since we are mainly interested in the sequel in weights with (super)
fast growth at infinity, the previous proposition shows that Êχ(X,ω) and
Eχ(X,ω) are roughly the same: a function ϕ ∈ PSH(X,ω) belongs to one of
these classes if and only if Capω(ϕ < −t) decreases fast enough, as t→ +∞.
Proof. The convexity of Êχ(X,ω) follows from the following simple observa-
tion: if ϕ,ψ ∈ Êχ(X,ω) and 0 ≤ a ≤ 1, then
{aϕ+ (1− a)ψ < −t} ⊂ {ϕ < −t} ∪ {ψ < −t} .
The stability under maximum is obvious.
Assume ϕ ∈ Êχ(X,ω). We can assume without loss of generality ϕ ≤ 0
and χ(0) = 0. Set ϕj := max(ϕ,−j). It follows from Lemma 2.3 below that
∫_X (−χ) ◦ ϕ_j (ω + dd^c ϕ_j)^n ≤ ∫_0^{+∞} χ′(−t) (ω + dd^c ϕ_j)^n(ϕ_j < −t) dt ≤ ∫_0^{+∞} χ′(−t) t^n Cap_ω(ϕ < −t) dt < +∞.
This shows that ϕ ∈ Eχ(X,ω). The other inclusion goes similarly, using
the second inequality in Lemma 2.3 below. ∎
If ϕ ∈ Eχ(X,ω) (or Êχ(X,ω)), then the bigger the growth of χ at −∞,
the smaller Cap_ω(ϕ < −t) when t → +∞, hence the closer ϕ is to being
bounded. Indeed ϕ ∈ PSH(X,ω) is bounded iff it belongs to Eχ(X,ω) for
all weights χ, as was observed in [GZ 2], Proposition 3.1. Similarly
PSH(X,ω) ∩ L^∞(X) = ∩_χ Ê_χ(X,ω),
where the intersection runs over all concave increasing functions χ.
We will make constant use of the following result:
Lemma 2.3. Fix ϕ ∈ E(X,ω). Then for all s > 0 and 0 ≤ t ≤ 1,
t^n Cap_ω(ϕ < −s − t) ≤ ∫_{(ϕ<−s)} (ω + dd^c ϕ)^n ≤ s^n Cap_ω(ϕ < −s),
where the second inequality is true only for s ≥ 1.
The proof is a direct consequence of the comparison principle (see Lemma
2.2 in [EGZ] and [GZ 2]).
3. Measures dominated by capacity
From now on µ denotes a positive Radon measure on X whose total mass
is V olω(X): this is an obvious necessary condition in order to solve (MA)µ.
To simplify numerical computations, we assume in the sequel that µ and ω
have been normalized so that
µ(X) = Vol_ω(X) = ∫_X ω^n = 1.
When µ = e^h ω^n is a smooth volume form and ω is a Kähler form, S.T.Yau
has proved [Y] that (MA)µ admits a unique smooth solution ϕ ∈ PSH(X,ω)
with supX ϕ = 0. Smooth measures are easily seen to be nicely dominated
by the Monge-Ampère capacity (see the proof of Corollary 3.2 below).
Measures dominated by the Monge-Ampère capacity have been exten-
sively studied by S.Kolodziej in [K 2,3,4]. Following S. Kolodziej ([K3], [K4])
with slightly different notations, fix ε : R → [0,∞[ a continuous decreasing
function and set
F_ε(x) := x [ε(−ln x/n)]^n, x > 0.
We will consider probability measures µ satisfying the following condition :
for all Borel subsets K ⊂ X,
µ(K) ≤ Fε(Capω(K)).
The main result achieved in [K 2] can be formulated as follows: if ω is a Kähler form and ∫^{+∞} ε(t) dt < +∞, then µ = (ω + dd^c ϕ)^n for some continuous function ϕ ∈ PSH(X,ω).
The condition ∫^{+∞} ε(t) dt < +∞ means that ε decreases fast enough
towards zero at infinity. This gives a quantitative estimate on how fast
ε(− lnCapω(K)/n), hence µ(K), decreases towards zero as Capω(K) → 0.
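To see how this scale interpolates between the power-type conditions µ(K) ≤ Cap_ω(K)^α, one can note the following elementary computation (ours, not taken from [K 2]):
\[
\varepsilon(t) = e^{-(\alpha-1)t} \ \Longrightarrow\ F_\varepsilon(x) = x\,\big[e^{(\alpha-1)\ln x/n}\big]^n = x\cdot x^{\alpha-1} = x^{\alpha}.
\]
For α > 1 this ε is decreasing with ∫^{+∞} ε(t) dt < +∞ (the case of bounded solutions recalled above); α = 1 gives ε ≡ 1; and 0 < α < 1 corresponds to an increasing ε.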
When ∫^{+∞} ε(t) dt = +∞, it follows from Theorem 2.1 that µ = (ω + dd^c ϕ)^n for some function ϕ ∈ E(X,ω), but ϕ will generally be unbounded.
Our second main result measures how far ϕ is from being bounded:
Theorem 3.1. Assume for all compact subsets K ⊂ X,
(3.1) µ(K) ≤ Fε(Capω(K)).
Then µ = (ω + ddcϕ)n where ϕ ∈ E(X,ω) is such that supX ϕ = 0 and
Cap_ω(ϕ < −s) ≤ exp(−n H^{-1}(s)), for all s > 0.
Here H^{-1} is the reciprocal function of H(x) = e ∫_0^x ε(t) dt + s_0, where s_0 = s_0(ε, ω) ≥ 0 is a constant which only depends on ε and ω.
In particular ϕ ∈ E_χ(X,ω) where −χ(−t) = exp(n H^{-1}(t)/2).
Recall that here, and throughout the article, ω ≥ 0 is merely big.
Before proving this result we make a few observations.
• It is interesting to consider as well the case when ε(t) increases to-
wards +∞. One can then obtain solutions ϕ such that Capω(ϕ <
−t) decreases at a polynomial rate. When e.g. ω is Kähler and
µ(K) ≤ Cap_ω(K)^α, 0 < α < 1, it follows from Proposition 5.3 in [GZ 2] that µ = (ω + dd^c ϕ)^n where ϕ ∈ E^p(X,ω) for some p = p_α > 0. Here E^p(X,ω) denotes the Cegrell type class E_χ(X,ω), with χ(t) = −(−t)^p.
• When ε(t) ≡ 1, F_ε(x) = x and H(x) ≍ e·x. Thus Theorem 3.1 reads
µ ≤ Cap_ω ⇒ µ = (ω + dd^c ϕ)^n, where
Cap_ω(ϕ < −s) ≲ exp(−ns/e)
(a direct check of this rate is given after these remarks).
This is precisely the rate of decreasing corresponding to functions
which look locally like − log(− log ||z||), in some local chart z ∈
U ⊂ Cn. This class of ω-psh functions with “loglog-singularities” is
important for applications (see [Ku], [BKK]).
• If ε(t) decreases towards zero, then Capω(ϕ < −t) decreases at a
superexponential rate. The faster ε(t) decreases towards zero, the
slower the growth of H, hence the faster the growth of H−1 at infin-
ity. When ∫^{+∞} ε(t) dt < +∞, the function ε decreases so fast that Cap_ω(ϕ < −t) = 0 for t ≫ 1, thus ϕ is bounded. This is the case when µ(K) ≤ Cap_ω(K)^α for some α > 1 [K 2], [EGZ].
• When ∫^{+∞} ε(t) dt = +∞, the solution ϕ may well be unbounded (see Examples in section 4). In the critical case where µ ≤ F_ε(Cap_ω) for all functions ε such that ∫^{+∞} ε(t) dt = +∞, we obtain
µ = (ω + dd^c ϕ)^n with ϕ ∈ PSH(X,ω) ∩ L^∞(X),
as follows from Proposition 3.1 in [GZ 2]. This partially explains the
difficulty in describing the range of Monge-Ampère operators on the
set of bounded (quasi-)psh functions.
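The rate claimed above in the case ε ≡ 1 can be checked directly from the statement of Theorem 3.1 (a short verification of ours):
\[
\varepsilon \equiv 1 \ \Longrightarrow\ H(x) = e\int_0^x dt + s_0 = ex + s_0, \qquad H^{-1}(s) = \frac{s-s_0}{e},
\]
\[
\mathrm{Cap}_\omega(\varphi<-s) \le \exp\!\big(-nH^{-1}(s)\big) = e^{ns_0/e}\, e^{-ns/e},
\]
i.e. exponential decay at the rate n/e, as stated.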
Proof. The assumption on µ implies in particular that it vanishes on pluripo-
lar sets. It follows from Theorem 2.1 that there exists a function ϕ ∈ E(X,ω)
such that µ = (ω + ddcϕ)n and supX ϕ = 0. Set
g(s) := −(1/n) log Cap_ω(ϕ < −s), ∀ s > 0.
The function g is increasing on [0,+∞] and g(+∞) = +∞, since Capω
vanishes on pluripolar sets. Observe also that g(s) ≥ 0 for all s ≥ 0, since
g(0) = −(1/n) log Cap_ω(X) = −(1/n) log Vol_ω(X) = 0.
It follows from Lemma 2.3 and (3.1) that for all s > 0 and 0 ≤ t ≤ 1,
tnCapω(ϕ < −s− t) ≤ µ(ϕ < −s) ≤ Fε (Capω(ϕ < −s)) .
Therefore for all s > 0 and 0 ≤ t ≤ 1,
(3.2) log t− log ε ◦ g(s) + g(s) ≤ g(s + t).
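For the reader's convenience, here is how (3.2) follows from the previous line (a step spelled out by us): by definition of g and of F_ε,
\[
t^n e^{-n g(s+t)} = t^n\,\mathrm{Cap}_\omega(\varphi<-s-t) \le F_\varepsilon\big(\mathrm{Cap}_\omega(\varphi<-s)\big) = e^{-n g(s)}\,\big[\varepsilon(g(s))\big]^n ,
\]
and taking −(1/n) log of both sides gives −log t + g(s+t) ≥ g(s) − log ε(g(s)), which is (3.2).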
We define an increasing sequence (sj)j∈N by induction setting
sj+1 = sj + eε ◦ g(sj), for all j ∈ N.
The choice of s0. Recall that (3.2) is only valid for 0 ≤ t ≤ 1. We choose
s0 ≥ 0 large enough so that
(3.3)   e·ε ◦ g(s_0) ≤ 1.
This will allow us to use (3.2) with t = tj = sj+1 − sj ∈ [0, 1], since ε ◦ g is
decreasing, while sj ≥ s0 is increasing, hence
0 ≤ tj = eε ◦ g(sj) ≤ eε ◦ g(s0) ≤ 1.
We must ensure that s_0 = s_0(ε, ω) can be chosen to be independent of ϕ. This
is a consequence of Proposition 2.7 in [GZ 1]: since supX ϕ = 0, there exists
c_1(ω) > 0 so that 0 ≤ ∫_X (−ϕ) ω^n ≤ c_1(ω), hence
g(s) := −(1/n) log Cap_ω(ϕ < −s) ≥ (1/n) log s − (1/n) log(n + c_1(ω)).
Therefore g(s_0) ≥ ε^{-1}(1/e) for s_0 = s_0(ε, ω) := (n + c_1(ω)) exp(n ε^{-1}(1/e)),
which is independent of ϕ. This yields e.ε ◦ g(s0) ≤ 1, as desired.
The growth of sj. We can now apply (3.2) and get g(sj) ≥ j + g(s0) ≥ j.
Thus lim g(sj) = +∞. There are two cases to be considered.
If s_∞ = lim s_j ∈ R^+, then g(s) ≡ +∞ for s > s_∞, i.e. Cap_ω(ϕ < −s) =
0, ∀s > s∞. Therefore ϕ is bounded from below by −s∞, in particular
ϕ ∈ Eχ(X,ω) for all χ.
Assume now (second case) that sj → +∞. For each s > 0, there exists
N = N_s ∈ N such that s_N ≤ s < s_{N+1}. We can estimate s ↦ N_s:
s ≤ s_{N+1} = Σ_{j=0}^{N} (s_{j+1} − s_j) + s_0 = Σ_{j=0}^{N} e ε ◦ g(s_j) + s_0 ≤ e Σ_{j=0}^{N} ε(j) + s_0 ≤ e·ε(0) + e ∫_0^N ε(t) dt + s_0 =: H(N).
Therefore H^{-1}(s) ≤ N ≤ g(s_N) ≤ g(s), hence
Cap_ω(ϕ < −s) ≤ exp(−n H^{-1}(s)).
Set now −χ(−t) = exp(n H^{-1}(t)/2). Then
∫_0^{+∞} t^n χ′(−t) Cap_ω(ϕ < −t) dt ≲ ∫_0^{+∞} [t^n / (ε(H^{-1}(t)) + s̃_0)] exp(−n H^{-1}(t)/2) dt ≲ ∫_0^{+∞} t^n exp(−nt/2) dt < +∞.
This shows that ϕ ∈ E_χ(X,ω) where χ(t) = −exp(n H^{-1}(−t)/2).
It follows from the proof above that when ∫_0^{+∞} ε(t) dt < +∞, the solution ϕ is bounded, since in this case we have
s_∞ := lim_j s_j ≤ s_0(ε, ω) + e ε(0) + e ∫_0^{+∞} ε(t) dt < +∞,
where s_0(ε, ω) is an absolute constant satisfying (3.3) (see above). ∎
Let us emphasize that Theorem 3.1 also yields a slightly simplified proof of
the following result [K 2], [EGZ]: if µ(K) ≤ Fε(Capω(K)) for some decreas-
ing function ε : R → R^+ such that ∫_0^{+∞} ε(t) dt < +∞, then the sequence
(s_j) above is convergent, hence µ = (ω + dd^c ϕ)^n, where ϕ ∈ PSH(X,ω) is
bounded. For the reader’s convenience we indicate a proof of the following
important particular case:
Corollary 3.2. Let µ = fωn be a measure with density 0 ≤ f ∈ Lp(ωn),
where p > 1 and ∫_X f ω^n = ∫_X ω^n. Then there exists a unique bounded function ϕ ∈ PSH(X,ω) such that (ω + dd^c ϕ)^n = µ, sup_X ϕ = 0 and
0 ≤ ‖ϕ‖_{L^∞(X)} ≤ C(p, ω) · ‖f‖_{L^p(ω^n)},
where C(p, ω) > 0 only depends on p and ω.
This a priori bound is a crucial step in the proof by S.T.Yau of the Calabi
conjecture (see [Ca], [Y], [A], [T], [Bl]). The proof presented here follows
Kolodziej’s new and decisive pluripotential approach (see [K2]). Let us stress
that the dependence ω 7−→ C(p, ω) is quite explicit, as we shall see in the
proof. This is important when considering degenerate situations [EGZ].
Proof. We claim that there exists C1(ω) such that
(3.4)   µ(K) ≤ C_1(ω) ‖f‖_{L^p(ω^n)} [Cap_ω(K)]^2, for all Borel sets K ⊂ X.
Assuming this for the moment, we can apply Theorem 3.1 with ε(x) =
C_1(ω) ‖f‖^{1/n}_{L^p(ω^n)} exp(−x), which yields, as observed at the end of the proof of Theorem 3.1,
‖ϕ‖_{L^∞(X)} ≤ M(f, ω),
where M(f, ω) := s_0(ε, ω) + e ε(0) + e ∫_0^{+∞} ε(t) dt = s_0(ε, ω) + 2e C_1(ω) ‖f‖^{1/n}_{L^p(ω^n)},
and s0 = s0(ε, ω) is a large number s0 > 1 satisfying the inequality (3.3).
In order to give the precise dependence of the uniform bound M(f, ω) on
the Lp−norm of the density f , we need to choose s0 more carefully. Observe
that condition (3.3) can be written
Cap_ω({ϕ ≤ −s_0}) ≤ exp(−n ε^{-1}(1/e)).
Since n ε^{-1}(1/e) = log(e^n C_1(ω)^n ‖f‖_{L^p(ω^n)}), we must choose s_0 > 0 so that
(3.5)   Cap_ω({ϕ ≤ −s_0}) ≤ 1 / (e^n C_1(ω)^n ‖f‖_{L^p(ω^n)}).
We claim that for any N ≥ 1 there exists a uniform constant C2(N, p, ω) >
0 such that for any s > 0,
(3.6)   Cap_ω({ϕ ≤ −s}) ≤ C_2(N, p, ω) s^{−N} ‖f‖_{L^p(ω^n)}.
Indeed observe first that by Hölder's inequality,
∫_X (−ϕ)^N ω^n_ϕ = ∫_X (−ϕ)^N f ω^n ≤ ‖f‖_{L^p(ω^n)} ‖ϕ‖^N_{L^{Nq}(ω^n)}.
Since ϕ belongs to the compact family {ψ ∈ PSH(X,ω); sup_X ψ = 0} ([GZ2]), there exists a uniform constant C′_2(N, p, ω) > 0 such that ‖ϕ‖^N_{L^{Nq}(ω^n)} ≤ C′_2(N, p, ω), hence
∫_X (−ϕ)^N ω^n_ϕ ≤ C′_2(N, p, ω) ‖f‖_{L^p(ω^n)}.
Fix u ∈ PSH(X,ω) with −1 ≤ u ≤ 0 and N ≥ 1 to be specified later. It follows from Tchebysheff and energy inequalities ([GZ2]) that
∫_{ϕ≤−s} (ω + dd^c u)^n ≤ s^{−N} ∫_X (−ϕ)^N (ω + dd^c u)^n
  ≤ c_N s^{−N} max( ∫_X (−ϕ)^N ω^n_ϕ , ∫_X (−u)^N ω^n_u )
  ≤ c_N s^{−N} max( C′_2(N, p, ω), 1 ) ‖f‖_{L^p(ω^n)}.
We have used here the fact that ‖f‖Lp(ωn) ≥ 1, which follows from the
normalization: 1 = ∫_X f ω^n ≤ ‖f‖_{L^p(ω^n)}. This proves the claim.
Set N = 2n; it follows from (3.6) that s_0 := C_1(ω)^n e^n C_2(2n, p, ω) ‖f‖_{L^p(ω^n)}
satisfies the required condition (3.5), which implies the estimate of the the-
orem.
We now establish the estimate (3.4). Observe first that Hölder’s inequality
yields
(3.7)   µ(K) ≤ ‖f‖_{L^p(ω^n)} [Vol_ω(K)]^{1/q}, where 1/p + 1/q = 1.
Thus it suffices to estimate the volume V olω(K). Recall the definition of
the Alexander-Taylor capacity, Tω(K) := exp(− supX VK,ω), where
VK,ω(x) := sup{ψ(x) /ψ ∈ PSH(X,ω), ψ ≤ 0 on K}.
This capacity is comparable to the Monge-Ampère capacity, as was observed
by H.Alexander and A.Taylor [AT] (see Proposition 7.1 in [GZ 1] for this
compact setting):
(3.8)   T_ω(K) ≤ e exp(−1 / Cap_ω(K)^{1/n}).
It thus remains to show that V olω(K) is suitably bounded from above by
Tω(K). This follows from Skoda’s uniform integrability result: set
ν(ω) := sup {ν(ψ, x) /ψ ∈ PSH(X,ω), x ∈ X} ,
where ν(ψ, x) denotes the Lelong number of ψ at point x. This actually
only depends on the cohomology class {ω} ∈ H1,1(X,R). It is a standard
fact that goes back to H.Skoda (see [Z]) that there exists C2(ω) > 0 so that
∫_X e^{−ψ/ν(ω)} ω^n ≤ C_2(ω),
for all functions ψ ∈ PSH(X,ω) normalized by supX ψ = 0. We infer
(3.9)   Vol_ω(K) ≤ ∫_X e^{−V*_{K,ω}/ν(ω)} ω^n ≤ C_2(ω) [T_ω(K)]^{1/ν(ω)}.
It now follows from (3.7), (3.8), (3.9), that
µ(K) ≤ ‖f‖_{L^p} [C_2(ω)]^{1/q} e^{1/(q ν(ω))} exp(−1 / (q ν(ω) Cap_ω(K)^{1/n})).
The conclusion follows by observing that exp(−1/x^{1/n}) ≤ C_n x^2 for some explicit constant C_n > 0. ∎
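The elementary inequality used in the last step can be verified as follows (the explicit value of C_n is our choice):
\[
\text{with } u = x^{-1/n}:\quad \exp(-1/x^{1/n}) = e^{-u}, \qquad x^2 = u^{-2n},
\]
so the claim reads u^{2n} e^{-u} ≤ C_n for all u > 0; since u ↦ u^{2n}e^{-u} attains its maximum at u = 2n, one may take C_n = (2n/e)^{2n}.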
4. Examples
4.1. Measures invariant by rotations. In this section we produce exam-
ples of radially invariant functions/measures which show that our previous
results are essentially sharp. The first example is due to S.Kolodziej [K 1].
Example 4.1. We work here on the Riemann sphere X = P1(C), with
ω = ωFS, the Fubini-Study volume form. Consider µ = fω a measure with
density f which is smooth and positive on X \ {p}, and such that
f(z) ≃ c / (|z|^2 (log |z|)^2), c > 0,
in a local chart near p = 0. A simple computation yields µ = ω + ddcϕ,
where ϕ ∈ PSH(P1, ω) is smooth in P1 \ {p} and ϕ(z) ≃ −c′ log(− log |z|)
near p = 0, c′ > 0, hence
log Cap_ω(ϕ < −t) ≃ −t.
Here a ≃ b means that a/b is bounded away from zero and infinity.
This is to be compared to our estimate log Cap_ω(ϕ < −t) ≲ −t/e (Theorem 3.1), which can be applied since it was shown by S. Kolodziej in [K 1] that µ ≲ Cap_ω. Thus Theorem 3.1 is essentially sharp when ε ≡ 1.
We now generalize this example and show that the estimate provided by
Theorem 3.1 is essentially sharp in all cases.
Example 4.2. Fix ε as in Theorem 3.1. Consider µ = fω on X = P1(C),
where ω = ωFS is the Fubini-Study volume form, f ≥ 0 is continuous on
1 \ {p}, and
f(z) ≃ ε(log(−log |z|)) / (|z|^2 (log |z|)^2)
in local coordinates near p = 0. Here ε : R → R+ decreases towards 0 at
+∞. We claim that there exists A > 0 such that
(4.1) µ(K) ≤ ACapω(K)ε(− logCapω(K)), for all K ⊂ X.
This is clear outside a small neighborhood of p = 0 since the measure µ
is there dominated by a smooth volume form. So it suffices to establish this
estimate when K is included in a local chart near p = 0. Consider
K̃ := {r ∈ [0, R] ; K ∩ {|z| = r} ≠ ∅}.
It is a classical fact (see e.g. [Ra]) that the logarithmic capacity c(K) of
K can be estimated from below by the length of K̃, namely
l(K̃)/4 ≤ c(K̃) ≤ c(K).
Using that ε is decreasing, hence 0 ≤ −ε′, we infer
µ(K) ≤ 2π ∫_0^{l(K̃)} f(r) r dr ≲ ∫_0^{l(K̃)} [ε(log(−log r)) − ε′(log(−log r))] / (r (log r)^2) dr ≲ ε(log(−log l(K̃))) / (−log l(K̃)) ≲ ε(log(−log 4c(K))) / (−log 4c(K)).
Recall now that the logarithmic capacity c(K) is equivalent to Alexander-
Taylor’s capacity T∆(K), which in turn is equivalent to the global Alexander-
Taylor capacity Tω(K) (see [GZ 1]): c(K) ≃ T∆(K) ≃ Tω(K). The Alexander-
Taylor comparison theorem [AT] reads
− log 4c(K) ≃ − log Tω(K) ≃ 1/Capω(K),
thus µ(K) ≤ ACapω(K)ε(− logCapω(K)).
We can therefore apply Theorem 3.1. It guarantees that µ = (ω + dd^c ϕ), where ϕ ∈ PSH(P^1, ω) satisfies log Cap_ω(ϕ < −s) ≃ −n H^{-1}(s), with H(s) = eA ∫_0^s ε(t) dt + s_0. On the other hand a simple computation shows
that ϕ is continuous in P1 \ {p} and
ϕ ≃ −H(log(− log |z|)) , near p = 0.
The sublevel set (ϕ < −t) therefore coincides with the ball of radius
exp(−exp(H^{-1}(t))), hence log Cap_ω(ϕ < −s) ≃ −H^{-1}(s).
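The last identification can be spelled out as follows (our computation, using the Alexander-Taylor comparison recalled above):
\[
\{\varphi<-t\} \simeq \{|z| < r_t\}, \qquad \log(-\log r_t) = H^{-1}(t), \quad\text{i.e.}\quad r_t = \exp\!\big(-\exp(H^{-1}(t))\big),
\]
\[
\frac{1}{\mathrm{Cap}_\omega(\varphi<-t)} \simeq -\log 4c\big(\{|z|\le r_t\}\big) \simeq -\log r_t = \exp\!\big(H^{-1}(t)\big),
\]
hence log Cap_ω(ϕ < −t) ≃ −H^{-1}(t), in accordance with the bound of Theorem 3.1 (here n = 1).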
4.2. Measures with density. Here we consider the case when µ = fdV
is absolutely continuous with respect to a volume form.
Proposition 4.3. Assume µ = fωn is a probability measure whose density
satisfies f [log(1 + f)]^n ∈ L^1(ω^n). Then µ ≲ Cap_ω.
More generally if f [log(1 + f)/ε(log(1 + |log f|))]^n ∈ L^1(ω^n) for some
continuous decreasing function ε : R → R+∗ , then for all K ⊂ X,
µ(K) ≤ F_ε(Cap_ω(K)), where F_ε(x) = A x [ε(−ln x/n)]^n, A > 0.
Proof. With slightly different notations, the proof is identical to that of
Lemma 4.2 in [K 4] to which we refer the reader. ∎
We now give examples showing that Proposition 4.3 is almost optimal.
Example 4.4. For simplicity we give local examples. The computations to
follow can also be performed in a global compact setting.
Consider ϕ(z) = −log(−log ‖z‖), where ‖z‖ = (|z_1|^2 + ... + |z_n|^2)^{1/2} de-
notes the Euclidean norm in Cn. One can check that ϕ is plurisubharmonic
in a neighborhood of the origin in Cn, and that there exists cn > 0 so that
µ := (ddcϕ)n = f dVeucl, where f(z) =
||z||2n(− log ||z||)n+1
Observe that f [log(1 + f)]n−α ∈ L1, ∀α > 0 but f [log(1 + f)]n 6∈ L1.
A PRIORI ESTIMATES FOR SOLUTIONS OF MONGE-AMPÈRE EQUATIONS 13
When n = 1 it was observed by S. Kolodziej [K 1] that µ(K) . Capω(K).
Proposition 4.3 yields here
µ(K) . Capω(K)(| logCapω(K)|+ 1).
For n ≥ 1, it follows from Proposition 4.3 and Theorem 3.1 that
logCapω(ϕ < −s) . −nH
−1(s).
On the other hand, one can directly check that logCapω(ϕ < −s) ≃ −nH
−1(s).
One can get further examples by considering ϕ(z) = χ ◦ log ||z||, so that
(ddcϕ)n =
′ ◦ log ||z||)n−1χ′′(log ||z||)
||z||2n
dVeucl.
References
[AT] H.ALEXANDER & B.A.TAYLOR: Comparison of two capacities in Cn.
Math. Zeit, 186 (1984),407-417.
[A] T.AUBIN: Équations du type Monge-Ampère sur les variétés kählériennes
compactes. Bull. Sci. Math. (2) 102 (1978), no. 1, 63–95.
[BT] E.BEDFORD & B.A.TAYLOR: A new capacity for plurisubharmonic func-
tions. Acta Math. 149 (1982), no. 1-2, 1–40.
[Bl] Z.BLOCKI: On uniform estimate in Calabi-Yau theorem. Sci. China Ser. A
48 (2005), suppl., 244–247.
[BKK] G.BURGOS & J.KRAMER & U.KUHN: Arithmetic characteristic classes
of automorphic vector bundles. Doc. Math. 10 (2005), 619–716.
[Ca] E.CALABI: On Kähler manifolds with vanishing canonical class. Algebraic
geometry and topology. A symposium in honor of S. Lefschetz, pp. 78–89.
Princeton Univ. Press, Princeton, N. J. (1957).
[Ce] U.CEGRELL: Pluricomplex energy. Acta Math. 180 (1998), no. 2, 187–217.
[EGZ] P.EYSSIDIEUX & V.GUEDJ & A.ZERIAHI: Singular Kähler-Einstein met-
rics. Preprint arxiv math.AG/0603431.
[GZ 1] V.GUEDJ & A.ZERIAHI: Intrinsic capacities on compact Kähler manifolds.
J. Geom. Anal. 15 (2005), no. 4, 607-639.
[GZ 2] V.GUEDJ & A.ZERIAHI: The weighted Monge-Ampère energy of quasi-
plurisubharmonic functions. J. Funct. Anal. 250 (2007), 442-482.
[K 1] S.KOLODZIEJ: The range of the complex Monge-Ampère operator. Indiana
Univ. Math. J. 43 (1994), no. 4, 1321–1338.
[K 2] S.KOLODZIEJ: The complex Monge-Ampère equation. Acta Math. 180
(1998), no. 1, 69–117.
[K 3] S.KOLODZIEJ: The Monge-Ampère equation on compact Kähler manifolds.
Indiana Univ. Math. J. 52 (2003), no. 3, 667–686
[K 4] S.KOLODZIEJ: The complex Monge-Ampère equation and pluripotential
theory. Mem. Amer. Math. Soc. 178 (2005), no. 840, x+64 pp.
[Ku] U.KUHN: Generalized arithmetic intersection numbers. J. Reine Angew.
Math. 534 (2001), 209–236.
[R] J.RAINWATER: A note on the preceding paper. Duke Math. J. 36 (1969)
799–800.
[Ra] T.RANSFORD: Potential theory in the complex plane. London Mathemati-
cal Society Student Texts, 28. Cambridge University Press, Cambridge, 1995.
x+232 pp.
[T] G.TIAN: Canonical metrics in Kähler geometry. Lectures in Mathematics
ETH Zürich. Birkhäuser Verlag, Basel (2000).
[Y] S.T.YAU: On the Ricci curvature of a compact Kähler manifold and the
complex Monge-Ampère equation. I. Comm. Pure Appl. Math. 31 (1978),
no. 3, 339–411.
[Z] A.ZERIAHI: Volume and capacity of sublevel sets of a Lelong class of psh
functions. Indiana Univ. Math. J. 50 (2001), no. 1, 671–703.
http://arxiv.org/abs/math/0603431
14 S.BENELKOURCHI & V.GUEDJ & A.ZERIAHI
Slimane BENELKOURCHI & Vincent GUEDJ & Ahmed ZERIAHI
Laboratoire Emile Picard
UMR 5580, Université Paul Sabatier
118 route de Narbonne
31062 TOULOUSE Cedex 09 (FRANCE)
[email protected]
[email protected]
[email protected]
1. Introduction
2. Weakly singular quasiplurisubharmonic functions
2.1. The range of the Monge-Ampère operator
2.2. High energy and capacity estimates
3. Measures dominated by capacity
4. Examples
4.1. Measures invariant by rotations
4.2. Measures with density
References
Bibliography
|
0704.0867 | Density oscillation in highly flattened quantum elliptic rings and
tunable strong dipole radiation | Density oscillation in highly flattened quantum elliptic rings and tunable strong
dipole radiation
S.P. Situ, Y.Z. He, and C.G. Bao∗
The State Key Laboratory of Optoelectronic Materials and Technologies,
Zhongshan University, Guangzhou, 510275, P.R. China
A narrow elliptic ring containing an electron threaded by a magnetic field B is studied. When the
ring is highly flattened, the increase of B would lead to a big energy gap between the ground and
excited states, and therefore lead to a strong emission of dipole photons. The photon frequency
can be tuned in a wide range by changing B and/or the shape of the ellipse. The particle density is
found to oscillate from a pattern of distribution to another pattern back and forth against B. This
is a new kind of Aharonov-Bohm oscillation originating from symmetry breaking and is different
from the usual oscillation of persistent current.
∗Corresponding author
It is recognized that micro-devices are important to
micro-techniques. Various kinds of micro-devices, includ-
ing the quantum rings,1 have been extensively studied
theoretically and experimentally in recent years. Quan-
tum rings are different from other devices due to their
special geometry. A distinguished phenomenon of the
ring is the Aharonov-Bohm (A-B) oscillation of the
ground state energy and persistent current2−5. It is be-
lieved that geometry would affect the properties of small
systems. Therefore, in addition to circular rings, ellip-
tic rings or other rings subjected to specific topological
transformations deserve to be studied, because new and
special properties might be found. There have been a
number of literatures devoted to elliptic quantum dots6−9
and rings10−12. It was found that the elliptic rings have
two distinguished features. (i) The avoided crossing of
the levels and the suppression of the A-B oscillation. (ii)
The appearance of localized states which are related to
bound states in infinite wires with bends.13 These feature
would become more explicit if the eccentricity is larger
and the ring is narrower.
On the other hand, as a micro-device, the optical prop-
erty is obviously essential to its application. It is guessed
that very narrow rings with a high eccentricity might
have special optical property, this is a point to be clari-
fied. This paper is dedicated to this topic. It turns out
that the optical properties of a highly flattened narrow
ring is greatly different from a circular ring due to having
a tunable energy gap, which would lead to strong dipole
transitions with wave length tunable in a very broad
range (say, from 0.1 to 0.001cm). Besides, a kind of A-B
density-oscillation originating from symmetry breaking
was found as reported as follows.
We consider an electron with an effective mass m∗ con-
fined on a one-dimensional elliptic ring with a half major
axis rax and an eccentricity ε. Let us introduce an argu-
ment θ so that a point (x, y) at the ring is related to θ as
x = rax cos θ and y = ray sin θ, where ray = rax
1− ε2 is
the half minor axis. A uniform magnetic field B confined
inside a cylinder with radius rin vertical to the plane of
the ring is applied. The associated vector potential reads
A = Br2int/2r, where t is a unit vector normal to the
position vector r. Then, the Hamiltonian reads
H = G/(1− ε2 cos2 θ)[− d
− i2α
1− ε2
(1− ε2 sin2 θ)
1− ε2 cos2 θ
1− ε2 sin2 θ
] (1)
where G = ~2/(2m∗r2ax), α = φ/φo, φ = πr
inB is the
flux, φo = hc/e is the flux quantum.
The eigen-states are expanded as Ψj =
∑kmax
k=kmin
eikθ, where k is an integer ranging
from kmin to kmax, and j = 1, 2, · · · denotes the ground
state, the second state, and so on. The coefficients C
are obtained via the diagonalization of H . In practice, B
takes positive values, kmin = −100 and kmax = 10. This
range of k assures the numerical results having at least
four effective figures. The energy of the j − th state is
Ej = 〈H〉j ≡
dθ(1 − ε2 cos2 θ)Ψ∗jHΨj (2)
where the eigen-state is normalized as
dθ(1− ε2 cos2 θ)Ψ∗jΨj (3)
In the follows the units meV, nm, and Tesla are used,
m∗ = 0.063me (for InGaAs rings), and rin is fixed at
25. When rax = 50, ε = 0 and 0.4, the evolution of
the low-lying spectra with B are given in Fig.1. When
ε = 0.4, the effect of eccentricity is still small, the spec-
trum is changed only slightly from the case ε = 0, but
the avoided crossing of levels can be seen.10,11 In par-
ticular, the A-B oscillation exists and the period of φ
remains to be φo. However, when ε becomes large, three
remarkable changes emerge as shown in Fig.2. (i) The
A-B oscillation of the ground state vanishes gradually.
(ii) The energy of the second state becomes closer and
closer to the ground state. (iii) There is an energy gap
lying between the ground state and the third state, the
http://arxiv.org/abs/0704.0867v1
0 2 4 6 8 10
E(meV)
B(Tesla)
ε =0.4rax=50(b)
ε =0rax=50(a)
FIG. 1: Low-lying spectrum (in meV) of an one-electron sys-
tem on an elliptic ring against B. rax = 50nm and ε = 0 (a)
and 0.4 (b). The period of the flux φo = hc/e is associated
with B = 2.106 Tesla.
0 4 8 12 16 20
B (Tesla)
E(meV)
rax= 50, ε = 0.8
FIG. 2: Similar to Fig.1 but ε = 0.8. The lowest eight
levels are included, where a great energy gap lies between the
ground and the third states.
gap width increases nearly linearly with B. The exis-
tence of the gap is a remarkable feature which has not
yet been found before from the rings with a finite width.
This feature is crucial to the optical properties as shown
later. Fig.3 demonstrates further how the gap varies
with ε, rax, and B , where B is from 0 to 30 (or φ from 0
to 14.24φo). One can see that, when ε is large and rax is
small, the increase of B would lead to a very large gap.
0 5 10 15 20 25 30
0 5 10 15 20 25 30
(b) ε = 0.8
ε =0.8
B (Tesla)
E3-E1
(meV)
(a) r
ε =0.6
ε=0.4
FIG. 3: Evolution of the energy gap E3 − E1 when rax and
ε are given.
The A-B oscillation of the ground state energy is given
in Fig.4. The change of ε does not affect the period
(2.106 Tesla). However, when ε is large, the amplitude of
the oscillation would be rapidly suppressed. Thus, for a
highly flattened elliptic ring, the A-B oscillation appears
only when B is small.
0 2 4 6 8 10
B (Tesla)
ε =0, 0.4, 0.8rax=50
FIG. 4: The A-B oscillation of the ground state energy. The
solid, dash-dot-dot, and dot lines are for ε = 0, 0.4, and 0.8,
respectively.
The persistent current of the j − th state reads14
Jj = G/~[Ψ
1− ε2
(1− ε2 sin2 θ)
)Ψj + c.c.] (4)
The A-B oscillation of Jj is plotted in Fig.5. When ε is
small (≤ 0.4), just as in Fig.4, the effect of ε is small as
shown in 5a. When ε is large there are three noticeable
points: (i) The oscillation of the ground state current
would become weaker and weaker when B increases. (ii)
The current of the second state has a similar amplitude
as the ground state, but in opposite phase. (iii) The
third (and higher) state has a much stronger oscillation
of current.
0 2 4 6 8 10
B (Tesla)
(b) rax=50, ε =0.8
ε=0, 0.4, 0.8(a) r
FIG. 5: The A-B oscillation of the persistent current J . (a)
is for the ground state with ε = 0 (solid line), 0.4 (dash-dot-
dot), and 0.8 (dot). (b) is for the first (ground), second and
third states (marked by 1,2, and 3 by the curves) with ε fixed
at 0.8. The ordinate is 106 times J/c in nm−1.
For elliptic rings, the angular momentum L is not con-
served. However, it is useful to define (L)j = 〈−i ∂∂θ 〉j
(refer to eq.(2)). This quantity would tend to an integer
if ε → 0. It was found that (i) When ε is small (≤ 0.4),
(L)1 of the ground state decreases step by step with B,
each step by one, just as the case of circular rings. How-
ever, when ε is large, (L)1 decreases continuously and
nearly linearly. (ii) When ε is small, |(L)i− (L)1| is close
(not close) to 1 if 2 ≤ i ≤ 3 (otherwise). Since L would
be changed by ±1 under a dipole transition, the ground
state would therefore essentially jump to the second and
third states. Accordingly, the dipole photon has essen-
tially two energies, namely, E2 −E1 and E3 −E1 . How-
ever, this is not exactly true when ε is large.
There is a relation between the dipole photon energies
and the persistent current.15 For ε = 0, the ground state
with L = k1 would have the current J1 = G(k1 + α)/π~,
while the ground state energy E(k1) = G(k1 + α)
2. Ac-
cordingly the second and third states would have L =
k1 ± 1, therefore we have
|E3 − E2| = |E(k1 + 1)− E(k1 − 1)| = 2hJ1 (5)
This relation implies that the current can be accurately
measured simply by measuring the energy difference of
the photons emitted in dipole transitions. For elliptic
rings, this relation holds approximately when ε is small
(≤ 0.4), as shown in Fig.6a. However, the deviation is
quite large when ε is large as shown in 6c.
0 2 4 6 8 10
(c) ε = 0.7
B (Tesla)
(b) ε = 0.5
(a) ε = 0.3
FIG. 6: E3 − E2 and the persistent current of the ground
state. The solid line denotes (E3 − E2)/(2hc)10
6, the dash-
dot-dot line denotes |J |/c·106 . They overlap nearly if ε < 0.3.
The probability of dipole transition from Ψj to Ψj′
reads
(ω/c)3|〈x∓ iy〉j′,j |2 (6)
where ~ω = Ej′ − Ej is the photon energy,
〈x∓ iy〉j′,j = rax
dθ(1 − ε2 cos2 θ)Ψ∗j′
[cos θ ∓ i
1− ε2 sin θ]Ψj (7)
The probability of the transition of the ground state to
the j′ − th state is shown in Fig.7. When ε is small
(≤ 0.4) and B is not very large (≤ 10), the allowed final
states essentially Ψ2 and Ψ3, and the oscillation of the
probability is similar to the case of circular rings with the
same period as shown in 7a and 7b. In particular, P±3,1 is
considerably larger than P±2,1 due to having a larger pho-
ton energy, thus the third state is particularly important
to the optical properties. When ε is large (Fig.7c), the
oscillation disappears gradually with B,while the prob-
ability increases very rapidly due to the factor (ω/c)3.
Since E3 − E1 is nearly proportional to B as shown in
Fig.3, the probability is nearly proportional to B3. This
leads to a very strong emission (absorption). Further-
more, in Fig.7c the black solid curve is much higher than
the dash-dot-dot curve, it implies that the final states
can be higher than Ψ3, this leads to an even larger prob-
ability.
0 2 4 6 8 10
B(Tesla)
c) ε =0.8
b) ε =0.4
a) ε =0
FIG. 7: Evolution of the probability of dipole transition of
the ground state. The green line is for Ψ1 to Ψ2 transition,
red line for Ψ1 to Ψ3, dash-dot-dot line is for the sum of the
above two, solid line in black is for the total probability.
For circular rings, the particle densities ρ of all the
eigen-states are uniform under arbitraryB. However, for
elliptic rings, ρ is no more uniform as shown in Fig.8. For
the ground state (8a), when φ =0, the non-uniformity is
slight and ρ is a little smaller at the two ends of the major
axis (θ = 0, π). When φ increases, the density at the
two ends of the minor axis (θ = π/2, 3π/2) increases as
well. When φ = 4φo the non-uniformity is very strong as
shown by the curve 9, where ρ ≈ 0 when θ ≈ 0 or π. The
second state has a parity opposite to the ground state,
0 20 40 60 80 100 120
0 20 40 60 80 100 120
0 20 40 60 80 100 120
Arc length (nm)
fourth state
third state
8ε =0.8, r
ground state
FIG. 8: Particle densities ρ as functions of the arc length
(the according change of θ is 0 to π). The fluxes are given as
φ = (i− 1)φo/2, where i is an integer from 1 to 9 marked by
the curves. The first group of curves (in violet) have φ/φo =
integer, the second group (in green) have φ/φo = half-integer.
When φ increases, the curve of ρ jumps from the first group
to the second group and jumps back, and repeatedly.
but their densities are similar. For the third state (8b), ρ
is peaked not at the ends of the major and minor axes but
in between. In particular, when B increases, ρ oscillates
from one pattern (say, in violet line) to another pattern
(in green line), and repeatedly. The density oscillation
would become stronger in higher states (8c). The period
of oscillation remains to be φo, thus it is a new type of
A-B oscillation without analogue in circular rings (where
ρ remains uniform). Incidentally, the density oscillation
does not need to be driven by a strong field, instead, a
small change of φ from 0 to φo is sufficient.
Let us evaluate Ej roughly by using (L)j to replace
the operator −i ∂
in eq.(2) Then,
Ej ≈ G
dθ{[(L)j + α
1− ε2
1− ε2 sin2 θ
αε2 sin 2θ
2(1− ε2 sin2 θ)
]2}Ψ∗jΨj (8)
There are two terms at the right each is a square of a
pair of brackets (for circular rings the second term does
not exist). It is reminded that, while α = φ/φo is given
positive, (L)j is negative. Thus there is a cancellation
inside the first term. Therefore, when ε and α are large,
the second term would be more important. It is recalled
that both Ψ1 and Ψ2 are mainly distributed around θ =
π/2 and 3π/2 (refer to Fig.8a), where the second term
is zero due to the factor sin 2θ. Accordingly the energies
of Ψ1 and Ψ2 are lower. On the contrary, both Ψ3 and
Ψ4 are distributed close to the peaks of the second term
(refer to Fig.8b and 8c), this leads to a higher energy.
This effect would be greatly amplified by αε2 , this leads
to the large energy gap shown in Fig.3.
In summary, the optical property of highly flattened
elliptic narrow rings was found to be greatly different
from circular rings. For the latter, both the energy of
the dipole photon and the probability of transition are
low, and they are oscillating in small domains. On the
contrary, for the former, both the energy and the prob-
ability are not limited, the energy (probability) is nearly
proportional to B (B3), they are tunable by changing
ε, rax and/or B. It implies that a strong source of light
with frequency adjustable in a wide domain can be de-
signed by using highly flattened, narrow, and small rings.
Furthermore, a new type of A-B oscillation, namely, the
density oscillation, originating from symmetry breaking,
was found. This is a noticeable point because the density
oscillation might be popular for the systems with broken
symmetry (e.g., with C3 symmetry).
Acknowledgment: The support under the grants
10574163 and 90306016 by NSFC is appreciated.
References
1, S.Viefers, P. Koskinen, P. Singha Deo, M. Manninen,
Physica E 21 , 1 (2004)
2, U.F. Keyser, C. Fühner, S. Borck, R.J. Haug, M.
Bichler, G. Abstreiter, and W. Wegscheider, Phys. Rev.
Lett. 90, 196601 (2003)
3, D. Mailly, C. Chapelier, and A. Benoit, Phys. Rev.
Lett. 70, 2020 (1993)
4, A. Fuhrer, S. Lüscher, T. Ihn, T. Heinzel, K. Ensslin,
W. Wegscheider, and M. Bichler, Nature (London) 413,
822 (2001)
5, A.E. Hansen, A. Kristensen, S. Pedersen, C.B.
Sorensen, and P.E. Lindelof, Physica E (Amsterdam) 12,
770 (2002)
6, M. van den Broek, F.M. Peeters, Physica E,11, 345
(2001)
7, E. Lipparini, L. Serra, A. Puente, European Phys.
J. B 27, 409 (2002)
8, J. Even, S. Loualiche, P. Miska, J. of Phys.: Cond.
Matt., 15, 8737 (2003)
9, C. Yannouleas, U. Landman, Physica Status Solidi
A 203, 1160 (2006)
10, D. Berman, O Entin-Wohlman, and M. Ya. Azbel,
Phys. Rev. B 42, 9299 (1990)
11, D. Gridin, A.T.I. Adamou, and R.V. Craster, Phys.
Rev. B 69, 155317 (2004)
12, A. Bruno-Alfonso, and A. Latgé, Phys. Rev. B 71,
125312 (2005)
13, J. Goldstone and R.L. Jaffe, Phys. Rev. B 45,
14100 (1992)
14, Eq.(4) originates from a 2-dimensional system via
the following steps. (i) the components of the current
along X- and Y-axis are firstly obtained from the conser-
vation of mass as well known. (ii) Then, the component
along the tangent of ellipse jθ can be obtained. (iii) jθ
is integrated along the normal of the ellipse under the
assumption that the wave function is restricted in a very
narrow region along the normal, then it leads to eq.(4).
15, Y.Z. He, C.G. Bao (submitted to PRB)
|
0704.0868 | Effect of electron-electron interaction on the phonon-mediated spin
relaxation in quantum dots | Effect of electron-electron interaction on the phonon-mediated spin relaxation in
quantum dots
Juan I. Climente,1, ∗ Andrea Bertoni,1 Guido Goldoni,1, 2 Massimo Rontani,1 and Elisa Molinari1, 2
1CNR-INFM National Center on nanoStructures and bioSystems at Surfaces (S3), Via Campi 213/A, 41100 Modena, Italy
2Dipartimento di Fisica, Università degli Studi di Modena e Reggio Emilia, Via Campi 213/A, 41100 Modena, Italy
(Dated: October 21, 2018)
We estimate the spin relaxation rate due to spin-orbit coupling and acoustic phonon scattering
in weakly-confined quantum dots with up to five interacting electrons. The Full Configuration
Interaction approach is used to account for the inter-electron repulsion, and Rashba and Dresselhaus
spin-orbit couplings are exactly diagonalized. We show that electron-electron interaction strongly
affects spin-orbit admixture in the sample. Consequently, relaxation rates strongly depend on the
number of carriers confined in the dot. We identify the mechanisms which may lead to improved
spin stability in few electron (> 2) quantum dots as compared to the usual one and two electron
devices. Finally, we discuss recent experiments on triplet-singlet transitions in GaAs dots subject
to external magnetic fields. Our simulations are in good agreement with the experimental findings,
and support the interpretation of the observed spin relaxation as being due to spin-orbit coupling
assisted by acoustic phonon emission.
PACS numbers: 73.21.La,71.70.Ej,72.10.Di,73.22.Lp
I. INTRODUCTION
There is currently interest in manipulating electron
spins in quantum dots (QDs) for quantum information
and quantum computing purposes.1,2,3 A major goal in
this research line is to optimize the spin relaxation time
(T1), which sets the upper limit of the spin coherence
time (T2): T2 ≤ 2T1.4 Therefore, designing two-level
spin systems with long spin relaxation times is an im-
portant step towards the realization of coherent quantum
operations and read-out measuraments. Up to date, spin
relaxation has been investigated almost exclusively in
single-electron4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19 and two-
electron20,21,22,23,24,25,26,27,28,29,30 QDs. Spin relaxation
in QDs with a larger number of electrons has seldom been
considered28,31, even though Coulomb blockade makes
it possible to control the exact number of carriers con-
fined in a QD.32 Yet, recent theoretical works suggest
that Coulomb interaction renders few-electron charge de-
grees of freedom more stable than single-electron ones33,
which leads to the question of whether similar findings
hold for spin degrees of freedom. Moreover, in weakly-
confined QDs, acoustic phonon emission assisted by spin-
orbit (SO) interaction has been identified as the domi-
nant spin relaxation mechanism when cotunneling and
nuclei-mediated relaxation are reduced.6,8,31 The com-
bined effect of Coulomb interaction and SO coupling has
been shown to influence the energy spectrum of few-
electron QDs profoundly,34,35,36 but the consequences on
the spin relaxation remain largely unexplored.37
In Ref. 28 we investigated the effect of a magnetic
field on the triplet-singlet (TS) spin relaxation in two
and four-electron QDs with SO coupling, so as to under-
stand related experimental works. Motivated by the very
different response observed for different number of con-
fined particles, in this work we shall focus on the role of
electron-electron interaction in spin relaxation processes,
extending our analysis to different number of carriers,
highlighting, in particular, the different physics involved
in even and odd number of confined electrons. Further-
more, we will explicitly compare the predictions of our
theoretical model with very recent experiments on spin
relaxation in two-electron GaAs QDs.29
We study theoretically the energy structure and spin
relaxation of N interacting electrons (N = 1 − 5) in
parabolic GaAs QDs with SO coupling, subject to axial
magnetic fields. Both Rashba38 and Dresselhaus39 SO
terms are considered, and the electron-electron repulsion
is accounted for via the Full Configuration Interaction
method.40,41 By focusing on the two lowest spin states,
two different classes of systems are distinguished. For N
odd (1,3,5) and weak magnetic fields, the ground state is
a doublet and then the two-level system is defined by the
Zeeman-split sublevels of the lowest orbital. For N even
(2,4), the two-level system is defined by a singlet and a
triplet. We analyze these two classes of systems sepa-
rately because, as we shall comment below, the physics
involved in the spin transition differs. Thus, we compare
the phonon-induced spin relaxation of N = 1, 3, 5 elec-
trons and that of N = 2, 4 separately. As a general rule,
the larger the number of confined carriers, the stronger
the SO mixing, owing to the increasing density of elec-
tronic states. This would normally yield faster relax-
ation rates. However, we note that this is not necessarily
the case, and few-electron states may display compara-
ble or even slower relaxation than their single-electron
and two-electron counterparts. This is due to charac-
teristic features of the few-particle energy spectra which
tend to weaken the admixture between the initial and
final spin states. In N -odd systems, it is the presence
of low-energy quadruplets for N > 1 that reduces the
admixture between the Zeeman sublevels of the (dou-
blet) ground state, hence inhibiting the spin flipping. In
N -even systems, electronic correlations partially quench
http://arxiv.org/abs/0704.0868v2
phonon emission33, and the relaxation can be further sup-
pressed forN > 2 if one selects initial and final spin states
differing in more than one quantum of angular momen-
tum, which inhibits direct triplet-singlet SO mixing via
linear Rashba and Dresselhaus SO terms.28 Noteworthy,
all these effects are connected with Coulomb interaction
between confined carriers.
The paper is organized as follows. In Section II we give
details about the theoretical model we use. In Section
III we study the energy structure and spin relaxation of
a QD with an odd number of electrons (N = 1, 3, 5). In
Section IV we do the same for QDs with an even number
of electrons (N = 2, 4). In Section V we compare our
numerical simulations with experimental data recently
reported for N = 2 GaAs QDs. Finally, in Section VI we
present the conclusions of this work.
II. THEORY
We consider weakly-confined GaAs/AlGaAs QDs,
which are the kind of samples usually fabricated
by different groups to investigate spin relaxation
processes.7,8,20,22 In these structures, the dot and the sur-
rounding barrier have similar elastic properties, and the
lateral confinement (which we approximate as circular)
is much weaker than the vertical one. A number of use-
ful approximations can be made for such QDs. First,
since the weak lateral confinement gives inter-level spac-
ings within the range of few meV, only acoustic phonons
have significant interaction with bound carriers, while op-
tical phonons can be safely neglected. Second, the elasti-
cally homogeneous materials are not expected to induce
phonon confinement, which allows us to consider three-
dimensional bulk phonons. Finally, the different energy
scales of vertical and lateral electronic confinement allow
us to decouple vertical and lateral motion in the building
of single-electron spin-orbitals. Thus, we take a parabolic
confinement profile in the in-plane (x, y) direction, with
single-particle energy gaps h̄ω0, which yields the Fock-
Darwin states.42 In the vertical direction (z) the confine-
ment is provided by a rectangular quantum well of width
Lz and height determined by the band-offset between
the QD and barrier materials (the zero of energy is then
the bottom of the conduction band). The quantum well
eigenstates are derived numerically. In cylindrical coor-
dinates, the single-electron spin-orbitals can be written
ψµ(ρ, θ, z; sz) =
eimθ Rn,m(ρ) ξ0(z)χsz , (1)
where ξ0 is the lowest eigenstate of the quantum well,
χsz is the spinor eigenvector of the spin z-component
with eigenvalue sz, and Rn,m is the n−th Fock-Darwin
orbital with azimuthal angular momentum m,
Rn,m(ρ) =
(n+ |m|)!
0 L|m|n
In the above expression L|m|n denotes a generalized La-
guerre polynomial and l0 =
h̄/m∗ω0 is the effective
length scale, with m∗ standing for the electron effec-
tive mass. The energy of the single-particle Fock-Darwin
states is given by En,m = (2n + 1 + |m|)h̄Ωc + m2 h̄ωc,
where ωc =
is the cyclotron frequency and Ωc =
+ (ωc/2)2 is the total (spatial plus magnetic) con-
finement frequency.
With regard to Coulomb interaction, we need to go
beyond mean field approximations in order to properly
include electronic correlations, which play an important
role in determining the phonon-induced electron scatter-
ing rate.43 Moreover, since we are interested in the re-
laxation time of excited states, we need to know both
ground and excited states with comparable accuracy.
Our method of choice is the Full Configuration Interac-
tion approach: the few-electron wave functions are writ-
ten as linear combinations |Ψa〉 =
cai|Φi〉, where the
Slater determinants |Φi〉 = Πµic†µi |0〉 are obtained by
filling in the single-electron spin-orbitals µ with the N
electrons in all possible ways consistent with symmetry
requirements; here c†µ creates an electron in the level µ.
The fully interacting Hamiltonian is numerically diago-
nalized, exploiting orbital and spin symmetries.40,41 The
few-electron states can then be labeled by the total az-
imuthal angular momentumM = 0,±1,±2 . . ., total spin
S and its z-projection Sz.
The inclusion of SO terms is done following a similar
scheme to that of Ref. 44, although here we consider
not only Rashba but also linear Dresselhaus terms. For
a quantum well grown along the [001] direction, these
terms read:38,39
HR = α
(kysx − kxsy), (3)
HD = γc 〈k2z〉(kysy − kxsx), (4)
where α and γc are coupling constants, while sj and kj
are the j-th Cartesian projections of the electron spin and
canonical momentum, respectively, along the main crys-
talographic axes (〈k2z〉 = (π/Lz)2 for the lowest eigen-
state of the quantum well). The momentum operator
includes a magnetic field B applied along the vertical di-
rection z. Other SO terms may also be present in the
conduction band of a QD, such as the contribution aris-
ing from the system inversion asymmetry in the lateral
dimension or the cubic Dresselhaus term. However, in
GaAs QDs with strong vertical confinement, HR and HD
account for most of the SO interaction.36
We rewrite Eqs.(3,4) in terms of ladder operators as:
HR = α
(k+s− − k−s+), (5)
HD = β
(k+s+ + k−s−), (6)
where k± and s± change m and sz by one quantum,
respectively, and β = γc (π/Lz)
2 is the Dresselhaus in-
plane coupling constant. It is worth mentioning that
when only Rashba (Dresselhaus) coupling is present, the
total angular momentum j = m + sz (j = m − sz)
is conserved. However, in the general case, when both
coupling terms are present and α 6= β, all symmetries
are broken. Still, SO interaction in a large-gap semi-
conductor such as GaAs is rather weak, and the low-
lying states can be safely labelled by their approximate
quantum numbers (M,S, Sz) except in the vicinity of the
level anticrossings.11,26,45 Since the few-electron M and
Sz quantum numbers are given by the algebraic sum of
the single-particle states m and sz quantum numbers, it
is clear from Eqs. (5,6) that Rashba interaction mixes
(M,Sz) states with (M ± 1, Sz ∓ 1) ones, while Dressel-
haus interaction mixes (M,Sz) with (M ± 1, Sz ± 1).
The SO terms of Eqs. (5,6) can be spanned on a basis of
correlated few-electron states.46 The SO matrix elements
are then given by sums of single-particle contributions of
the form:
〈n′m′ s′z| HR +HD |nmsz〉 =
C∗R O+n′m′ nm δm′ m+1 δs′z sz−1+CR O
n′m′ nm δm′ m−1 δs′z sz+1+
C∗D O+n′m′ nm δm′ m+1 δs′z sz+1+CD O
n′m′ nm
δm′ m−1 δs′
sz−1.
Here CR = α and CD = −iβ are constans for the Rashba
and Dresselhaus interactions respectively, andO± are the
form factors:
Rnm(t),
Rnm(t),
with t = ρ2/l20. The above forms factors have analytical
expressions which depend on the set of quantum num-
bers {n′m′, nm}. The resulting SO-coupled eigenvec-
tors are then linear combinations of the correlated states,
|ΨSOA 〉 =
cAa|Ψa〉.
We assume zero temperature, which suffices to capture
the main features of one-phonon processes.9,16 Indeed, it
is one-phonon processes that account for most of the low-
temperature experimental observations in the SO cou-
pling regime.2,6,8,28,29,31 We evaluate the relaxation rate
between the initial (occupied) and final (empty) states
of the SO-coupled few-electron state, B and A, using the
Fermi Golden Rule:
τ−1B→A =
c∗BbcAa
c∗bicaj〈Φi|Vνq|Φj〉
δ(EB−EA−h̄ωq),
where the electron states |ΨSOK 〉 (K = A,B) have been
written explicitly as linear combinations of Slater deter-
minants, EK stands for the K electron state energy and
h̄ωq represents the phonon energy. Vνq is the interac-
tion operator of an electron with an acoustic phonon of
momentum q via the mechanism ν, which can be either
deformation potential or piezoelectric field interaction.
Details about the electron-phonon interaction matrix el-
ements can be found elsewhere.33
In this work we study a GaAs/Al0.3Ga0.7As QDs, us-
ing the following material parameters:47 electron effective
massm∗ = 0.067, band-offset Vc = 243 meV, crystal den-
sity d = 5310 kg/m3, acoustic deformation potential con-
stant D = 8.6 eV, effective dielectric constant ǫ = 12.9,
and piezoelectric constant h14 = 1.41 · 109 V/m. The
Landé factor is g = −0.44.5 As for GaAs sound speed, we
take cLA = 4.72 · 103 m/s for longitudinal phonon modes
and cTA = 3.34 · 103 m/s for transversal modes.48 Unless
otherwise stated, a lateral confinement of h̄ω0 = 4 meV
and a quantum well width of Lz = 10 nm are assumed
for the QD under study, and a Dressehlaus coupling pa-
rameter γc = 25.5 eV·Å3 is taken49, so that β ≈ 25
meV·Å. The value of the Rashba coupling constant can
be modulated externally e.g. with external electric fields.
Here we will investigate systems both with and without
Rashba interaction. When present, we shall mostly con-
sider α = 50 meV·Å, to represent the case where Rashba
effects prevail over Dresselhaus ones.
Few-body correlated states (M,S, Sz) are obtained us-
ing a basis set composed by the Slater determinants
(SDs) which result from all possible combinations of 42
single-electron spin-orbitals (i.e., from the six lowest en-
ergy shells of the Fock-Darwin spectrum at B = 0) filled
with N electrons. For N = 5, this means that the basis
rank may reach ∼ 2 · 105. The SO Hamiltonian is then
diagonalized in a basis of up to 56 few-electron states,
which grants a spin relaxation convergence error below
2%. Since SO terms break the spin and angular mo-
mentum symmetries, the SO-coupled states |ΨSOK 〉 are
described by a linear combination of SDs coming from
different (M,S, Sz) subspaces. Thus, for N = 5, the
states are described by up to ∼ 8.5 · 105 SDs. To evalu-
ate the electron-phonon interaction matrix elements, we
note that only a small percentage of the huge number
of possible pairs of SDs (∼ 7 · 1011 for N = 5) may
give non-zero matrix elements, owing to spin-orbital or-
thogonalities. We scan all pairs of SDs and filter those
which may give non-zero matrix elements writing the de-
terminants in binary representation and using efficient
bit-per-bit algorithms.40,41 The matrix elements of the
remaining pairs (∼ 2 ·106 for N = 5) are evaluated using
massive parallel computation.
0 1 3
B (T)
FIG. 1: Low-lying energy levels in a QD with N = 1, 3, 5
interacting electrons, as a function of an axial magnetic field.
The SO interaction coefficients are α = 50 meV· Å and β =
25 meV· Å. The dot has h̄ω0 = 4 meV and Lz = 10 nm. Note
the increasing size of the SO-induced anticrossing gaps and
zero-field splittings with increasing N .
III. SPIN RELAXATION IN A QD WITH N ODD
A. Energy structure
When the number of electrons confined in the QD is
odd and the magnetic field is weak enough, the ground
and first excited states are usually the Zeeman sz = 1/2
and sz = −1/2 sublevels of a doublet [Fig. 1]. Since the
initial and final spin states belong to the same orbital,
∆M = 0 and SO mixing (which requires ∆M = ±1)
is only possible with higher-lying states. In addition,
the phonon energy (corresponding to the electron tran-
sition energy) is typically small (in the µeV scale). In
this case, the relaxation rate is determined essentially by
the phonon density, the strength and nature of the SO
interaction, and the proximity of higher-lying states.9,11
In order to gain some insight on the influence of these
factors, in Fig. 1 we compare the energy structure of a
QD with N = 1, 3, 5 vs. an axial magnetic field, in the
presence of Rashba and Dresselhaus interactions.55 One
can see that the increasing number of particles changes
the energy magneto-spectrum drastically. This is be-
cause the quantum numbers of the low-lying energy levels
change, resulting in a different field dependence, and be-
cause Coulomb interaction leads to an increased density
of electron states, as well as to a more complicated spec-
trum.
At first sight, the energy spectra of Fig. 1 closely resem-
ble those in the absence of SO effects. For instance, the
N = 1 spectrum is very similar to the pure Fock-Darwin
spectrum.42 Rashba and Dresselhaus interactions were
expected to split the degenerate |m| > 0 shells at B = 0,
shift the positions of the level crossings and turn them
into anticrossings36,52,53,54, but here such signatures are
hardly visible because SO interaction is weak in GaAs.
In fact, the magnitude of the SO-induced zero-field en-
ergy splittings and that of the anticrossing gaps is of very
few µeV, and SO effects simply add fine features to the
N = 1 spectrum.52
A significantly different picture arises in the N = 3
and N = 5 cases. Here, the increased density of elec-
tronic states enhances SO mixing as compared to the
single-electron case.56 As a result, the anticrossing gaps
can be as large as 30 µeV (N = 3) and 60 µeV (N = 5).
Moreover, unlike in the N = 1 case, where the ground
state orbital has m = 0, here it has |M | = 1. Therefore,
the Zeeman sublevels involved in the fundamental spin
transition are subject to SO-induced zero-field splittings.
To illustrate this point, in Fig. 2 we zoom in on the energy
spectrum of the four lowest states of N = 3 and N = 5
under weak magnetic fields, without (left panels) and
with (right panels) Rashba interaction. Clearly, the four-
fold degeneracy of |M | = 1 spin-orbitals at B = 0 has
been lifted by SO interaction.36 One can also see that the
order of the two lowest sublevels at B ∼ 0 changes when
Rashba interaction is switched on. Thus, for N = 3 and
α = 0, the two lowest sublevels are (M = −1, Sz = 1/2)
and (M = −1, Sz = −1/2), but this order is reversed
when α = 50 meV·Å. The opposite level order as a func-
tion of α is found for N = 5. This behavior constitutes a
qualitative difference with respect to the N = 1 case in
two aspects. First, the phonon energy (i.e., the energy
of the fundamental spin transition) is no longer given by
the bare Zeeman splitting. Instead, it has a more compli-
cated dependence on the magnetic field, and it is greatly
influenced by the particular values of α and β. This is
apparent in the N = 5 panels, where the energy splitting
between the two lowest states strongly differs depending
on the relative value of α and β. Second, it is possible
to find situations where the ground state at B ∼ 0 has
Sz = −1/2 and the first excited state has Sz = 1/2 (e.g.
N = 3 when α > β or N = 5 when α < β). In these
cases, the Zeeman splitting leads to a weak anticrossing
of the two sublevels (highlighted with dashed circles in
Fig. 2) which has no counterpart in single-electron sys-
tems. This kind of B-induced (i.e., not phonon-induced)
ground state spin mixing, also referred to as “intrinsic
spin mixing”, has been previously reported for singlet-
triplet transitions in N = 2 QDs.58 Here we show that
they may also exist in few-electron QDs with N odd.
(−2,1/2)(1,1/2)
(−1,1/2)
(1,1/2)
(1,1/2)
(−4,1/2)
(−2,1/2)
(−4,1/2)
(−1,1/2)
(−1,1/2)
(−1,1/2)
(1,1/2)
α = 50, β = 25α = 0, β = 25
N = 3
N = 5
127.0
127.5
128.0
128.5
0.5 1.0 0.0
236.0
236.5
237.0
B (T)
0.0 0.5 1.0
B (T)
FIG. 2: The four lowest energy levels in a QD with N = 3, 5
interacting electrons, as a function of an axial magnetic field,
without (left column) and with (right column) Rashba SO
interaction. The approximate quantum numbers (M,S) of
the levels are shown, with arrows denoting the spin projection
Sz = 1/2 (↑) and Sz = −1/2 (↓). The dashed circles highlight
the region of intrinsic spin mixing of the ground state.
Figure 1 puts forward yet another qualitative differ-
ence between SO coupling in single- and few-electron
QDs: while in the former low-energy anticrossings are
due to Rashba interaction11,36,52, in few-electron QDs,
when S = 3/2 states come into play, both Rashba and
Dresselhaus terms may induce anticrossings. For exam-
ple, the (M = −1, Sz = 1/2) sublevel couples directly to
both (M = −2, Sz = −1/2) and (M = −2, Sz = 3/2)
sublevels, via the Dresselhaus and Rashba interaction,
respectively. Coupling to S = 3/2 states is a characteris-
tic feature of N > 1 systems, which has important effects
on the spin relaxation rate, as we will discuss below.
B. Spin relaxation between Zeeman sublevels
In Fig. 3 we compare the magnetic field dependence of
the spin relaxation rate between the two lowest Zeeman
sublevels of N = 1, 3, 5. Dashed lines (solid lines) are
used for systems without (with) Rashba interaction.59
While for N = 1 the well-known exponential dependence
with B is found2,6,9, and the main effect of Rashba cou-
pling is to shift the curve upwards (i.e., to accelerate the
relaxation), for N = 3 and N = 5 the relaxation rate ex-
hibits complicated trends which strongly depend on the
values of the SO coupling parameters.
α = 50, β = 25
α = 0, β = 25
10
10
10
10
10
10
10
10
10
10
10
10
10
10
10
B (T)
0 1 2 3
FIG. 3: Spin relaxation rate in a QD with N = 1, 3, 5 inter-
acting electrons as a function of an axial magnetic field. Solid
(dashed) lines stand for the system with (without) Rashba
interaction. Note the strong influence of the SO interaction
in the shape of the relaxation curve for N > 1.
To understand this result, one has to bear in mind that
in spin relaxation processes two well-distinguished and
complementary ingredients are involved, namely SO in-
teraction and phonon emission. Phonon emission grants
the conservation of energy in the electron relaxation,
but phonons have zero spin and therefore cannot cou-
ple states with different spin. It is the SO interaction
that turns pure spin states into mixed ones, thus enabling
the phonon-induced transition. The overall efficiency of
the scattering event is then given by the combination
of the two phenomena: the phonon emission efficiency
modulated by the extent of the SO mixing. The shape
of spin relaxation curves shown in Fig. 3 can be directly
related to the energy dispersion of the phonon, which cor-
responds to the splitting between the two lowest levels of
the electron spectrum. Thus, for N = 1, the phonon
energy is simply proportional to B through the Zeeman
splitting, but for N = 3 and N = 5 it has a non-trivial
dependence on B, as shown in Fig. 2. Actually, the relax-
ation minima in Fig. 3 are connected with the magnetic
field values where the two lowest levels anticross in Fig. 2.
In these magnetic field windows, in spite of the fact that
SO coupling is strong, the phonon density is so small that
the relaxation rate is greatly suppressed.28 Similarly, the
relaxation rate fluctuations of N = 3 at B ∼ 3 T are
signatures of the anticrossings with high-angular momen-
tum states. For larger fields (B > 3 T), the ground state
approaches the maximum density droplet configuration
and high-spin states are possible.44 In this work, how-
ever, we restrict ourselves to the magnetic field regime
where the ground state is a doublet.
eV∆ (µ )
10
10
10
10
10
10
10
10
10
0 20 40 60 80
10
FIG. 4: (Color online). Spin relaxation rate in a QD with
N = 1, 3, 5 interacting electrons as a function of the energy
splitting between the two lowest spin states. Top panel: α =
0, β = 25 meV·Å. Bottom panel: α = 50 meV·Å, β = 25
meV·Å. The relaxation of N = 3 is slower than that of N = 1
for a wide range of ∆12. The irregular data distribution is
due to the irregular relaxation rates vs. magnetic field. For
example, the strongly deviated points of N = 3 come from
the peaks at B ∼ 3 in Fig. 3.
For a more direct comparison between the relaxation
rates of N = 1, 3, 5, in Fig. 4 we replot the data of Fig. 3
as a function of the energy splitting between the two
lowest states, ∆12, without (top panel) and with (bot-
tom panel) Rashba interaction. Since the phonon energy
is identical for all points with the same ∆12, differences
in the relaxation rate arise exclusively from the different
strength of SO interaction. ∆12 is also a relevant pa-
rameter from the experimental point of view, since it is
usually required that it be large enough for the states to
be resolvable. In this sense, it is worth noting that, even
if the inter-level splittings shown in Fig. 4 are fairly small,
a number of experiments have successfully addressed this
regime.5,8,21
A most striking feature observed in the figure is that,
for most values of ∆12, the N = 3 relaxation rate is
clearly slower than the N = 1 one. Likewise, N = 5
shows a similar (or slightly faster) relaxation rate than
N = 1. These are interesting results, for they suggest
that improved spin stability may be achieved using few-
electron QDs instead of the single-electron ones typically
employed up to date.8 At first sight the results are sur-
prising, because the higher density of states in the few-
electron systems implies smaller inter-level spacings, and
hence stronger SO mixing, which should translate into
enhanced relaxation. It then follows that another physi-
cal mechanism must be acting upon the few-electron sys-
tems, which reduces the transition probability between
the initial and final spin states, and may even make it
smaller than for N = 1. Here we propose that such
mechanism is the SO admixture with low-lying quadru-
plet (S = 3/2) states, which become available for N > 1.
By coupling to S = 3/2 levels, the projection of the dou-
blet Sz = 1/2 levels onto Sz = −1/2 ones is reduced,
and this partly inhibitis the transition between the low-
est doublet sublevels.
Let us explain this by comparing the spin transition for
N = 1 and N = 3. For N = 1, the spin configuration of
the initial and final states, in the absence of SO coupling,
is |Sz = −1/2〉 and |Sz = +1/2〉, respectively. The tran-
sition between these states is spin-forbidden. However,
when SO coupling is switched on, the two states become
admixed with higher-lying S = 1/2 states fulfilling the
∆Sz = ±1 condition. The transition between the initial
and final states can then be represented schematically as:
ca |Sz = −1/2〉+cb|Sz = +1/2〉 ⇒ cr |Sz = +1/2〉+cs|Sz = −1/2〉,
where ci are the admixture coefficients (in general ca ≫
cb and cr ≫ cs). Clearly now both spin configurations of
the initial state have a finite overlap with the final state,
and so the transition is possible. Let us next consider
the N = 3 case. In the absence of SO coupling, the
initial and final states are again the Sz = −1/2 and Sz =
+1/2 doublets, respectively, and the transition is spin-
forbidden. When we switch on SO coupling, we note that
the ∆Sz = ±1 condition allows for mixing not only with
Sz = ±1/2 states (either doublets or quadruplets) but
also with Sz = ±3/2 quadruplets, so that the transition
can be represented as:
ca|Sz = −1/2〉+ cb|Sz = +1/2〉+ cc|Sz = −3/2〉 ⇒
cr|Sz = +1/2〉+ cs|Sz = −1/2〉+ ct|Sz = +3/2〉,
where, in general, ca ≫ cb, cc, and cr ≫ cs, ct. In this
case, |Sz = −3/2〉 has no overlap with the final state con-
figurations. Likewise, |Sz = +3/2〉 has no overlap with
the initial state configurations. Therefore, these quadru-
plet configurations are inactive from the point of view
of the transition, and the more important they are (i.e.,
the stronger the SO coupling with quadruplet states), the
less likely the transition is.
To prove this argument quantitatively, in Fig. 5 we il-
lustrate the spin relaxation of N = 3 calculated by diag-
onalization of the SO Hamiltonian including and exclud-
ing the low-lying S = 3/2 states from the basis set. As
expected, when the quadruplets are not considered, the
transition is visibly faster. For N = 5, low-lying S = 3/2
levels are also available, but in this case they barely com-
pensate for the large density of electron states, so that
the overall scattering rate turns out to be comparable to
that of N = 1.
eV∆ (µ )
without S=3/2
with S=3/2
10
10
10
10
10
0 20 40 60 80
FIG. 5: (Color online). Spin relaxation rate in a QD with N =
3 interacting electrons as a function of the energy splitting
between the two lowest spin states. α = 0 and β = 25 meV·Å.
Symbol + (×) stands for SO Hamiltonian diagonalized in a
basis which includes (excludes) S = 3/2 states. Clearly, the
inclusion of S = 3/2 states slows down the relaxation.
To test the robustness of the few-electron spin states
stability predicted above, we also compare the relax-
ation rate of N = 1 and N = 3 in a QD with dif-
ferent confinement, namely h̄ω0 = 6 meV, in Fig. 6.
Since the lateral confinement of the dot is now stronger,
(M = −1, S = 1/2) is the N = 3 ground state up to large
values of the magnetic field (B ∼ 5 T). This allows us
to investigate larger Zeeman splittings (i.e., larger ∆12),
which may be easier to resolve experimentally. As seen
in the figure, the relaxation rate of N = 3 is again slower
than that of N = 1 for a wide range of ∆12, the behavior
being very similar to that of Fig. 4, albeit extended to-
wards larger inter-level spacings. The crossing between
N = 3 and N = 1 relaxation rates at large ∆12 val-
ues, both in Fig. 4 and Fig. 6, is due to the proximity
of high-angular momentum levels coming down in energy
for N = 3 when the magnetic field (and hence the Zee-
man splitting) is large. Such levels bring about strong
SO admixture and thus fast relaxation (see middle panel
of Fig. 3 at B ∼ 3 T).
IV. SPIN RELAXATION IN A QD WITH N
A. Energy structure
When the number of electrons confined in the QD is
even and the magnetic field is not very strong, the ground
and first excited states are usually a singlet (S = 0)
and a triplet (S = 1) with three Zeeman sublevels
eV∆ (µ )
10
10
10
10
10
10
10
10
10
10
0 40 80 120
FIG. 6: (Color online). Spin relaxation rate in a QD with
N = 1, 3 interacting electrons as a function of the energy
splitting between the two lowest spin states. The QD has
h̄ω0 = 6 meV. Top panel: α = 0, β = 25 meV·Å. Bottom
panel: α = 50 meV·Å, β = 25 meV·Å. As for the weaker-
confined dot of Fig. 4, the relaxation of N = 3 is slower than
that of N = 1 for a wide range of ∆12.
(Sz = +1, 0,−1). Unlike in the previous section, here the
initial and final states of the spin transition may have dif-
ferent orbital quantum numbers, and the inter-level split-
ting ∆12 may be significantly larger (in the meV scale).
Under these conditions, the phonon emission efficiency no
longer exhibits a simple proportionality with the phonon
density, but it further depends on the ratio between the
phonon wavelength and the QD dimensions.50,51 More-
over, SO interaction is sensitive to the quantum numbers
of the initial and final electron states.26,28 Therefore, in
this class of spin transitions the details of the energy
structure are also relevant to determine the relaxation
rate.
In Fig. 7 we plot the energy levels vs. magnetic field for
a QD with N = 2, 4 in the presence of Rashba and Dres-
selhaus interactions. The approximate quantum num-
bers (M,S) of the lowest-lying states are written between
parenthesis. For N = 2 and weak fields, the ground state
is the (M = 0, S = 0) singlet, and the first excited state
is the (M = −1, S = 1) triplet. As in the previous sec-
tion, SO interaction introduces small zero-field splittings
and anticrossings in the energy levels with |M | > 0.36
As a consequence, when α > β, the zero-field ordering of
the (M = −1, S = 1) Zeeman sublevels is such that they
anticross in the presence of an external magnetic field.
This anticrossing is highlighted in the figure by a dashed
circle. On the other hand, as B increases the singlet-
triplet energy spacing is gradually reduced, and then the
singlet experiences a series of weak anticrossings with all
(−1,1)
(−2,0)
(−3,1)
(0,1)
(0,0)
0 1 2 3
B (T)
FIG. 7: Low-lying energy levels in a QD with N = 2, 4 in-
teracting electrons as a function of an axial magnetic field.
α = 50 meV· Å and β = 25 meV· Å. The approximate
quantum numbers (M,S) of the lowest states are shown. The
dashed circle in N = 2 highlights the anticrossing between
M = −1 Zeeman sublevels.
three Zeeman sublevels of the triplet. These anticross-
ings are due to the fact that (M = 0, S = 0, Sz = 0)
couples to the (M = −1, S = 1, Sz = −1) sublevel via
Dresselhaus interaction, to the (M = −1, S = 1, Sz =
+1) sublevel via Rashba interaction, and finally to the
(M = −1, S = 1, Sz = 0) sublevel indirectly through
higher-lying states.26,28
For N = 4, the density of electronic states is larger
than for N = 2, which again reflects in a larger magni-
tude of the anticrossings gaps due to the enhanced SO
interaction. The ground state at B = 0 is a triplet,
(M = 0, S = 1), but soon after it anticrosses with a
singlet, (M = −2, S = 0). After this, and before the
formation of Landau levels, two different branches of the
first excited state can be distinguished: when B < 1 T,
the first excited state is (M = 0, S = 1), and when B > 1
T it is (M = −3, S = 1). It is worth pointing out that
the complexity of the N = 4 spectrum, as compared to
the simple N = 2 one, implies a greater flexibility to
select initial and final spin states by means of external
fields. As we shall discuss below, this degree of freedom
has important consequences on the relaxation rate.
B. Triplet-singlet spin relaxation
In a recent work, we have investigated the magnetic
field dependence of the TS relaxation due to SO cou-
pling and phonon emission in N = 2 and N = 4 QDs.28.
Here we study this kind of transition from a different
perspective, namely we compare the spin relaxation of
two- and four-electron systems in order to highlight the
changes introduced by inter-electron repulsion. Increas-
ing the number of electrons confined in the QD has three
important consequences on the TS transition. First, it
increases the density of electronic states (and then the
SO mixing), leading to faster relaxation. Second, as
mentioned in the previous section, it introduces a wider
choice of orbital quantum numbers for the singlet and
triplet states. Third, it increases the strength of elec-
tronic correlations. Since now the initial and final spin
states have different orbital wave functions, the latter
factor effectively reduces phonon scattering, in a similar
fashion to charge relaxation processes33 (this effect has
been recently pointed out in Ref. 30 as well). To find out
the overall combined effect of these three factors, in this
section we analyze quantitative simulations of correlated
We focus on the magnetic field regions where the
ground state is a singlet and the excited state is a triplet.
A complete description of the TS transition should then
include spin relaxation between the Zeeman-split sub-
levels of the triplet. However, for the weak fields we con-
sider this relaxation is orders of magnitude slower than
the TS one (compare Figs. 3 and 8),60 the reason for
this being the small Zeeman energy and the fact that the
Zeeman sublevels are not directly coupled by Rashba and
Dresselhaus terms, as mentioned in Section III. There-
fore, it is a good approximation to assume that all three
triplet Zeeman sublevels are equally populated and they
relax directly to the singlet.26
α = 50, β = 25
α = 0, β = 25
10
10
10
10
10
10
0.5 1 1.5 2 2.5 3
B (T)
FIG. 8: Spin relaxation rate in a QD with N = 2, 4 interact-
ing electrons as a function of an axial magnetic field. Solid
(dashed) lines stand for the system with (without) Rashba
interaction. The relaxation of N = 4 when B < 1 T is slower
than that of N = 2.
Figure 8 represents the TS relaxation rate in a QD with
N = 2, 4, after averaging the relaxation from the three
triplet sublevels. Solid (dashed) lines stand for the case
with (without) Rashba interaction.59 The main effect of
Rashba and Dresselhaus interactions is to accelerate the
spin transition by shifting the relaxation curve upwards.
This is in contrast to the N -odd case, where these terms
may induce drastic changes in the shape of the relaxation
rate curve (see Fig. 3). Figure 8 also reveals a different
behavior of the N = 2 and N = 4 TS relaxation rates.
The former increases gradually with B and then drops
in the vicinity of the TS anticrossing, due to the small
phonon energies.28,29,30 Conversely, for N = 4 an addi-
tional feature is found, namely an abrupt step at B ∼ 1.
This is due to the change of angular momentum of the
excited triplet. For B < 1 T the triplet has M = 0, and
for B > 1 T it has M = −3. Since the ground state
is a singlet with M = −2, the M = 0 triplet does not
fulfill the ∆M = ±1 condition for linear SO coupling.
This inhibits direct spin mixing between initial and final
states and reduces the relaxation rate by about one order
of magnitude.28
[Figure 9 data panels: relaxation-rate curves for N = 4, M = 0 and N = 4, M = −3 versus the splitting ∆ (meV).]
FIG. 9: (Color online). Spin relaxation rate in a QD with
N = 2, 4 interacting electrons as a function of the energy
spacing between the singlet and the triplet. Here M stands
for the angular momentum of the triplet. Top panel: α = 0,
β = 25 meV·Å. Bottom panel: α = 50 meV·Å, β = 25 meV·Å.
The relaxation of N = 4 is comparable to that of N = 2 when
the triplet has M = −3, and it is much smaller when M = 0.
Noteworthy, the choice of states differing in more than
one quantum of angular momentum is only possible for
N > 2 QDs. One may then wonder if it is more conve-
nient to use these systems instead of the N = 2 ones dom-
inating the experimental literature up to date20,21,29, i.e.
if it compensates for the increased density of electronic
states. Interestingly, Fig. 8 predicts slower relaxation for
the N = 4 QD with M = 0 triplet than for N = 2. To
verify that this arises from weakened SO coupling rather
than from different phonon energy values, in Fig. 9 we
replot the spin relaxation rate of N = 2, 4 as a function
of the TS energy splitting. In the figure, the upper and
bottom panels represent the situations without and with
Rashba interaction, respectively. While N = 4 shows
similar relaxation rate to N = 2 when the triplet has
M = −3, the relaxation is slower by about one order
of magnitude when the triplet has M = 0. This result
indicates that the weakening of SO mixing due to the
violation of the ∆M = ±1 condition clearly exceeds the
strengthening due to the higher density of states, con-
firming that N = 4 systems are more attractive than
N = 2 ones to obtain long triplet lifetimes. We also
point out that, in spite of the different density of states,
the relaxation rate of N = 2 and N = 4, M = −3 triplets
is quite similar. This can be ascribed to the phonon scat-
tering reduction by electronic correlations,33 which may
also explain the fact that experimentally resolved TS re-
laxation rates of N = 8 QDs and N = 2 QDs are quite
similar.20,31
V. COMPARISON WITH N = 2 EXPERIMENTS
Whereas, to our knowledge, no experiments have mea-
sured transitions between Zeeman-split sublevels in N >
1 systems yet, a number of works have dealt with TS re-
laxation in QDs with few interacting electrons. In Ref. 28
we showed that our model correctly predicts the trends
observed in experiments with N = 2 and N = 8 QDs
subject to axial magnetic fields.20,21,31 In this section, we
extend the comparison to new experiments available for
N = 2 TS relaxation in QDs,29 which for the first time
provide continuous measurements of the average triplet
lifetime against axial magnetic fields, from B = 0 to the
vicinity of the TS anticrossing. By using a simple model,
the authors of the experimental work showed that the
measurements are in clear agreement with the behavior
expected from SO coupling plus acoustic phonon scatter-
ing. However, in such a model: (i) the TS energy splitting
was taken directly from the experimental data, (ii) the
SO coupling effect was accounted for by parametrizing
the admixture of the lowest singlet and triplet states only,
and (iii) the B-dependence of the SO-induced admix-
ture was neglected. Approximation (ii) may overlook the
correlation-induced reduction of phonon scattering,30,33
that we have shown above to be significant, and which
may have an important contribution from higher excited
states in weakly-confined QDs. In turn, approximation
(iii) may overlook the important influence of SO coupling
in the B-dependence of the triplet lifetime, as we had
anticipated in Ref. 28. Here we compare with the exper-
imental findings using our model, which includes these
effects properly. We assume a QD with an effective well
width Lz = 30 nm, as estimated by the authors of Ref. 29, and
a lateral confinement parabola of h̄ω0 = 2 meV which,
as we shall see next, fits well the position of the TS an-
ticrossing. Yet, the comparison is limited by the lack of
detailed information about the Rashba and Dresselhaus
interaction constants, and because we deal with circular
QDs instead of elliptical ones (the latter effect introduces
simple deviations from the circular case26). In addition,
in the experiment a tilted magnetic field of magnitude
B∗, forming an angle of 68◦ with the vertical direction
was used. Here we consider the vertical component of
the field (B = 0.37B∗), which is mainly responsible
for the changes in the energy structure, and the effect of
the in-plane component enters via the Zeeman splitting
only.
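A quick arithmetic check of this projection of the tilted field onto the vertical axis:

import math
# vertical component of a field tilted by 68 degrees from the vertical
print(math.cos(math.radians(68)))   # ~0.3746, i.e. B = cos(68 deg) * B* ≈ 0.37 B*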
Figure 10 illustrates the average triplet lifetime for
N = 2. The bottom axis shows the vertical magnetic
field B value, while the top axis shows the value to be
compared with the experiment B∗.59 As can be seen, the
triplet lifetime first decreases with the field and then it
abruptly increases in the vicinity of the TS anticross-
ing, due to the small phonon density.28 This behavior
is in clear agreement with the experiment (cf. Fig. 3 of
Ref. 29). The position of the anticrossing (B∗ ∼ 2.9 T) is
also close to the experimental value (B∗ ∼ 2.8 T), which
confirms that h̄ω0 = 2 meV is similar to the mean
confinement frequency of the experimental sample. A
departure from the experimental trend appears at weak
fields (B < 0.5 T), where we observe a continuous in-
crease of T1 with decreasing B, while the experiment re-
ports a plateau. This is most likely due to the ellipticity
of the experimental sample, which renders the electron
states (and consequently the relaxation rate) insensitive
to the field in the B∗ = 0 − 0.5 T region (see Fig. 1a
in Ref. 29). In any case, Fig. 10 clearly confirms the role of
phonon-induced relaxation in the experiments, using a
realistic model for the description of correlated electron
states, SO admixture and phonon scattering.
A comment is in order here on the magnitude of the SO
coupling terms. In Fig. 10, we obtain good agreement
with the experimental relaxation times by using small
values of the SO coupling parameters. In particular, a
close fit is obtained using β = 1, α = 0.5 meV·Å, which
yields a spin-orbit length λSO = 48 µm. This value,
which coincides with the experimental guess (λSO ≈ 50
µm), indicates that SO coupling is several times weaker
than that reported for other GaAs QDs.8 Typical GaAs
parameters are often larger. For instance, measurements
of the Rashba and Dresselhaus constants by analysis of
the weak antilocalization in clean GaAs/AlGaAs two-
dimensional gases revealed α = 4−5 meV·Å, and γc = 28
eV·Å3 (i.e., β = 3 meV·Å for our quantum well of Lz = 30
nm).61 To be sure, the small SO coupling parameters in
the experiment have a major influence on the lifetime
scale. Compare e.g. the β = 1 and β = 5 meV·Å curves
in Fig. 10. Actually, we note that accurate comparison
with the timescale reported for other GaAs samples31 is
also possible within our model, but assuming stronger
SO coupling constants.28 In Ref. 29, it was suspected
that the weak SO coupling inferred from the experimen-
tal data could be the result of the exclusion of higher
orbitals and the magnetic field dependence of SO ad-
[Figure 10 data panel: triplet-lifetime curves for (α = 0.5, β = 1), β = 5, β = 2, and (β = 1, α = 0) meV·Å; bottom axis B (T), top axis B* (T).]
FIG. 10: Average triplet lifetime in a QD with N = 2 elec-
trons as a function of an axial magnetic field. Only the field
region before the TS anticrossing is shown. α and β are in
meV·Å units. B is the applied axial magnetic field, and B∗
is the equivalent tilted magnetic field, for comparison with
Ref. 29 experiment.
mixture in their model (higher states reduce the effective
SO coupling constants by decreasing the phonon-induced
scattering30,33). Here we have considered both these ef-
fects and still small SO coupling constants are needed to
reproduce the experiment. Therefore, understanding the
origin of their small value remains as an open question.
One possibility could be that the particular direction of
the tilted magnetic field used in the experiment corre-
sponded to a reduced degree of SO admixture.30
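A rough cross-check of the β ≈ 3 meV·Å figure quoted above from γc = 28 eV·Å³ and Lz = 30 nm, assuming the usual estimate β ≈ γc⟨k_z²⟩ with ⟨k_z²⟩ ≈ (π/Lz)² for the well (this estimate is an assumption of the illustration, not a statement of the paper):

import math
gamma_c = 28.0          # eV * Angstrom^3, value quoted in the text
L_z = 300.0             # Angstrom (30 nm effective well width)
beta = gamma_c * (math.pi / L_z) ** 2 * 1e3   # meV * Angstrom
print(f"beta ≈ {beta:.1f} meV*Angstrom")       # ≈ 3, as stated in the text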
VI. CONCLUSIONS
We have investigated theoretically the energy structure
and spin relaxation rate of weakly-confined QDs with
N = 1 − 5 interacting electrons, subject to axial mag-
netic fields, in the presence of linear Rashba and Dressel-
haus SO interactions. It has been shown that the num-
ber of electrons confined in the dot introduces changes
in the energy spectrum which significantly influence the
intensity of the SO admixture, and hence the spin re-
laxation. In general, the larger the number of confined
carriers, the higher the density of electronic states. This
decreases the energy splitting between consecutive lev-
els and then enhances SO admixture, which should lead
to faster spin relaxation. However, we find that this is
not necessarily the case, and slower relaxation rate may
be found for few-electron QDs as compared to the usual
single and two-electron QDs used up to date. The physi-
cal mechanisms responsible for this have been identified.
For N -odd systems, when the spin transition takes place
between Zeeman-split sublevels, it is the presence of low-
energy S = 3/2 states for N > 1 that reduces the pro-
jection of the doublet Sz = 1/2 sublevels into Sz = −1/2
ones, thus partly inhibiting the spin transition. For N -
even systems, when the spin transition takes place be-
tween triplet and singlet levels, there are two underlying
mechanisms. On the one hand, electronic correlations
tend to reduce phonon emission efficiency. On the other
hand, for N > 2 a magnetic field can be used to se-
lect a pair of singlet-triplet states which do not fulfill
the ∆M = ±1 condition of direct SO admixture, which
significantly weakens the SO mixing.
Last, we have compared our estimates with recent
experimental data for TS relaxation in N = 2 QDs.29
Our results support the interpretation of the experi-
ment in terms of SO admixture plus acoustic phonon
scattering, even though quantitative agreement with the
experiment requires assuming much weaker SO coupling
than that reported for similar GaAs structures.
Acknowledgments
We acknowledge support from the Italian Ministry
for University and Scientific Research under FIRB
RBIN04EY74, Cineca Calcolo parallelo 2006, and
Marie Curie IEF project NANO-CORR MEIF-CT-2006-
023797.
∗ Electronic address: [email protected];
URL: www.nanoscience.unimore.it
1 I. Zutic, J. Fabian, and S. Das Sarma, Rev. Mod. Phys.
76, 323 (2004).
2 D. Heiss, M. Kroutvar, J.J. Finley, and G. Abstreiter, Solid
State Comm. 135, 519 (2005).
3 D. Loss, and D.P. DiVincenzo, Phys. Rev. A 57, 120
(1998).
4 V.N. Golovach, A. Khaetskii, and D. Loss, Phys. Rev. Lett.
93, 016601 (2004).
5 R. Hanson, B. Witkamp, L.M.K. Vandersypen, L.H.
Willems van Beveren, J.M. Elzerman, and L.P. Kouwen-
hoven, Phys. Rev. Lett. 91, 196802 (2003).
6 M. Kroutvar, Y. Ducommun, D. Heiss, M. Bichler, D.
Schuh, G. Abstreiter, and J.J. Finley, Nature (London)
432, 81 (2004).
7 J.M. Elzerman, R. Hanson, L.H. Willems van Beveren, B.
Witkamp, L.M.K. Vandersypen, and L.P. Kouwenhoven,
Nature (London) 430, 431 (2004).
8 S. Amasha, K. MacLean, I. Radu, D.M. Zumbühl,
M.A. Kastner, M.P. Hanson, and A.C. Gossard,
cond-mat/0607110.
9 A.V. Khaetskii, and Y.V. Nazarov, Phys. Rev. B 64,
125316 (2001).
10 J.L. Cheng, M.W. Wu, and C. Lü, Phys. Rev. B 69, 115318
(2004).
11 D.V. Bulaev, and D. Loss, Phys. Rev. B 71, 205324 (2005).
12 C.F. Destefani, and S.E. Ulloa, Phys. Rev. B 72, 115326
(2005).
13 P. Stano, and J. Fabian, Phys. Rev. B 74, 045320 (2006).
14 Y.Y. Wang, and M.W. Wu, Phys. Rev. B 74, 165312
(2006).
15 E. Ya. Sherman, and D.J. Lockwood, Phys. Rev. B 72,
125340 (2005).
16 L.M. Woods, T.L Reinecke, and Y. Lyanda-Geller, Phys.
Rev. B 66, 161318(R) (2002).
17 I.A. Merkulov, Al. L. Efros, and M. Rosen, Phys. Rev. B
65, 205309 (2002).
18 S.I. Erlingsson, and Y.V. Nazarov, Phys. Rev. B 66,
155327 (2002).
19 P. San-Jose, G. Zarand, A. Shnirman, and G. Schön, Phys.
Rev. Lett. 97, 076803 (2006).
20 T. Fujisawa, D.G. Austing, Y. Tokura, Y. Hirayama, and
S. Tarucha, Nature (London) 419, 278 (2002); T. Fujisawa,
D.G. Austing, Y. Tokura, Y. Hirayama, and S. Tarucha,
J. Phys.: Cond. Matter 15, R1395 (2003).
21 R. Hanson, L.H. Willems van Beveren, I.T. Vink, J.M.
Elzerman, W.J.M. Naber, F.H.L. Koppens, L.P. Kouwen-
hoven, and L.M.K. Vandersypen, Phys. Rev. Lett. 94,
196802 (2005).
22 J.R. Petta, A.C. Johnson, J.M. Taylor, E.A. Laird, A. Ya-
coby, M.D. Lukin, C.M. Marcus, M.P. Hanson, and A.C.
Gossard, Science 309, 2180 (2005).
23 A.C. Johnson, J.R. Petta, J.M. Taylor, A. Yacoby, M.D.
Lukin, C.M. Marcus, M.P. Hanson, and A.C. Gossard, Na-
ture (London) 435, 925 (2005).
24 J.R. Petta, A.C. Johnson, A. Yacoby, C.M. Marcus, M.P.
Hanson, and A.C. Gossard, Phys. Rev. B 72, 161301(R)
(2005).
25 W.A. Coish, and D. Loss, Phys. Rev. B 72, 125337 (2005).
26 M. Florescu, S. Dickman, M. Ciorga, A. Sachrajda, and P.
Hawrylak, Physica E (Amsterdam) 22, 414 (2004); M. Flo-
rescu, and P. Hawrylak, Phys. Rev. B 73, 045304 (2006).
27 D. Chaney and P.A. Maksym, Phys. Rev. B 75, 035323
(2007).
28 J.I. Climente, A. Bertoni, G. Goldoni, M. Rontani, and E.
Molinari, Phys. Rev. B 75, 081303(R) (2007).
29 T. Meunier, I.T. Vink, L.H. Willems van Beveren, K.J.
Tielrooij, R. Hanson, F.H.L. Koppens, H.P. Tranitz, W.
Wegscheider, L.P. Kouwenhoven, and L.M.K. Vander-
sypen, Phys. Rev. Lett. 98, 126601 (2007).
30 V.N. Golovach, A. Khaetskii, and D. Loss,
cond-mat/0703427 (unpublished).
31 S. Sasaki, T. Fujisawa, T. Hayashi, and Y. Hirayama, Phys.
Rev. Lett. 95, 056803 (2005).
32 M. Ciorga, A.S. Sachrajda, P. Hawrylak, C. Gould, P. Za-
wadzki, S. Jullian, Y. Feng, and Z. Wasilewski, Phys. Rev.
B 61, R16315 (2000); H. Drexler, D. Leonard, W. Hansen,
J.P. Kotthaus, P.M. Petroff, Phys. Rev. Lett. 73, 2252
(1994).
33 A. Bertoni, M. Rontani, G. Goldoni, and E. Molinari,
Phys. Rev. Lett. 95, 066806 (2005); J.I. Climente, A.
Bertoni, M. Rontani, G. Goldoni, and E. Molinari, Phys.
Rev. B 74, 125303 (2006).
34 T. Chakraborty, and P. Pietiläinen, Phys. Rev. B 71,
113305 (2005).
35 P. Pietiläinen, and T. Chakraborty, Phys. Rev. B 73,
155315 (2006).
36 C.F. Destefani, S.E. Ulloa, and G.E. Marques, Phys. Rev.
B 70, 205315 (2004).
37 During the finalization of this paper we have learned about
a parallel work investigating the influence of Coulomb in-
teraction in two-electron TS relaxation.30 Many of the find-
ings in such paper are in agreement with our numerical
results.
38 Y.A. Bychkov, and E.I. Rashba, J. Phys. C 17, 6039
(1984).
39 G. Dresselhaus, Phys. Rev. 100, 580 (1955).
40 M. Rontani, C. Cavazzoni, D. Bellucci, and G. Goldoni, J.
Chem. Phys. 124, 124102 (2006).
41 http://www.s3.infm.it/donrodrigo
42 L. Jacak, P. Hawrylak, and A. Wojs, Quantum Dots,
(Springer Verlag, Berlin, 1998).
43 M. Brasken, S. Corni, M. Lindberg, J. Olsen, and D. Sund-
holm, Mol. Phys. 100, 911 (2002).
44 P. Lucignano, B. Jouault, and A. Tagliacozzo, Phys. Rev.
B 69, 045314 (2004).
45 The (M,S, Sz) quantum numbers of few-electron states are
a good approximation for the lowest-lying states only. For
higher-lying states, the energy spectrum becomes denser
and the SO interaction becomes very strong even for GaAs,
which leads to important departures from the SO-free pic-
ture. This does not occur in single-electron parabolic QDs
because the energy levels are equally spaced.
46 The convenience of using exact diagonalization procedures,
instead of perturbational approaches, to account for the SO
coupling in GaAs QDs has been claimed in Ref. 10.
47 C.S. Ting (ed.), Physics of Hot Electron Transport in Semi-
conductors, (World Scientific, 1992).
48 Landolt-Börnstein: Numerical Data and Functional Rela-
tionships in Science and Technology, Vol. 17. Semiconduc-
tors, Group IV Elements and III-V Compounds, edited by
O. Madelung, (Springer-Verlag, 1982).
49 M. Cardona, N.E. Christensen, and G. Fasol, Phys. Rev.
B 38, 1806 (1988).
50 U. Bockelmann, Phys. Rev. B 50, 17271 (1994).
51 J.I. Climente, A. Bertoni, G. Goldoni, and E. Molinari,
Phys. Rev. B 74, 035313 (2006).
52 P. Stano, and J. Fabian, Phys. Rev. B 72, 155410 (2005).
53 O. Voskoboynikov, C.P. Lee, and O. Tretyak, Phys. Rev.
B 63, 165306 (2001).
54 W.H. Kuan, and C.S. Tang, J. Appl. Phys. 95, 6368 (2004).
55 The energy magneto-spectrum of GaAs parabolic QDs
with SO interaction and up to four interacting electrons
was also investigated in Ref. 35, but considering Rashba
interaction only.
56 Coulomb-enhanced SO interaction was previously pre-
dicted for higher-dimensional structures.57 Here we report
it for QDs.
57 G.H. Chen, and M.E. Raikh, Phys. Rev. B 60, 4826 (1999).
58 C.F. Destefani, S.E. Ulloa, and G.E. Marques, Phys. Rev.
B 69, 125302 (2004).
59 For simplicity of the discussion, in Figs. 3, 8 and 10, the
near vicinity of B = 0 T is not shown. In that range one
finds damped phonon-induced relaxation rates due to de-
generacies arising from the time-reversal symmetry and the
circular symmetry of the confinement we have assumed.
We do not expect these features to be observable in exper-
iments, because QDs are not perfectly circular and because
hyperfine interaction is expected to be the dominant spin
relaxation mechanism for very weak fields (see Refs. 18,23).
60 Greatly suppressed TS spin relaxation, comparable to that
of inter-Zeeman sublevels at very weak B, may be achieved
by means of geometrically or field-induced acoustic phonon
emission minima.27,28
61 J.B. Miller, D.M. Zumbühl, C.M. Marcus, Y.B. Lyanda-
Geller, D. Goldhaber-Gordon, K. Campman, and A.C.
Gossard, Phys. Rev. Lett. 90, 076807 (2003).
|
0704.0869 | Connected Operators for the Totally Asymmetric Exclusion Process | Connected Operators for the Totally
Asymmetric Exclusion Process
O. Golinelli, K. Mallick
Service de Physique Théorique, Cea Saclay, 91191 Gif, France
6 April 2007
Abstract
We fully elucidate the structure of the hierarchy of the con-
nected operators that commute with the Markov matrix of the
Totally Asymmetric Exclusion Process (TASEP). We prove for
the connected operators a combinatorial formula that was con-
jectured in a previous work. Our derivation is purely algebraic
and relies on the algebra generated by the local jump operators
involved in the TASEP.
Keywords: Non-Equilibrium Statistical Mechanics, ASEP, Exact
Results, Algebraic Bethe Ansatz.
Pacs numbers: 02.30.Ik, 02.50.-r, 75.10.Pq.
1 Introduction
The Asymmetric Simple Exclusion Process (ASEP) is a lattice model of parti-
cles with hard core interactions. Due to its simplicity, the ASEP appears as a
minimal model in many different contexts such as one-dimensional transport
phenomena, molecular motors and traffic models. From a theoretical point
of view, this model has become a paradigm in the field of non-equilibrium
statistical mechanics; many exact results have been derived using various
methods, such as continuous limits, Bethe Ansatz and matrix Ansatz (for re-
views, see e.g., Spohn 1991, Derrida 1998, Schütz 2001, Golinelli and Mallick
2006).
In a recent work (Golinelli and Mallick 2007), we applied the algebraic
Bethe Ansatz technique to the Totally Asymmetric Exclusion Process (TASEP).
http://arxiv.org/abs/0704.0869v1
This method allowed us to construct a hierarchy of ‘generalized Hamiltonians’
that contain the Markov matrix and commute with each other. Using the
algebraic relations satisfied by the local jump operators, we derived explicit
formulae for the transfer matrix and the generalized Hamiltonians, generated
from the transfer matrix. We showed that the transfer matrix can be inter-
preted as the generator of a discrete time Markov process and we described
the actions of the generalized Hamiltonians. These actions are non-local be-
cause they involve non-connected bonds of the lattice. However, connected
operators are generated by taking the logarithm of the transfer matrix. We
conjectured for the connected operators a combinatorial formula that was
verified for the first ten connected operators by using a symbolic calculation
program.
The aim of the present work is to present an analytical calculation of the
connected operators and to prove the formula that was proposed in (Golinelli
and Mallick 2007). This paper is a sequel of our previous work, however, in
section 2, we briefly review the main definitions and results already obtained
so that this work can be read in a fairly self-contained manner. In section 3,
we derive the general expression of the connected operators.
2 Review of known results
We first recall the dynamical rules that define the TASEP with n particles
on a periodic 1-d ring with L sites labelled i = 1, . . . , L. The particles move
according to the following dynamics: during the time interval [t, t + dt], a
particle on a site i jumps with probability dt to the neighboring site i+ 1, if
this site is empty. This exclusion rule, which forbids more than one
particle per site, mimics a hard-core interaction between particles. Because
the particles can jump only in one direction this process is called totally
asymmetric. The total number n of particles is conserved. The TASEP
being a continuous-time Markov process, its dynamics is entirely encoded in
a 2L × 2L Markov matrix M , that describes the evolution of the probability
distribution of the system at time t. The Markov matrix can be written as
M = \sum_{i=1}^{L} M_i , (1)
where the local jump operator Mi affects only the sites i and i + 1 and
represents the contribution to the dynamics of jumps from the site i to i+1.
2.1 The TASEP algebra
The local jump operators satisfy a set of algebraic equations :
M2i = −Mi, (2)
Mi Mi+1 Mi = Mi+1 Mi Mi+1 = 0, (3)
[Mi,Mj ] = 0 if |i− j| > 1. (4)
These relations can be obtained as a limiting form of the Temperley-Lieb
algebra. On the ring we have periodic boundary conditions : Mi+L = Mi.
The local jumps matrices define an algebra. Any product of the Mi’s will be
called a word. The length of a given word is the minimal number of operators
Mi required to write it. A word, that can not be simplified further by using
the algebraic rules above, will be called a reduced word.
Consider any word W and call I(W ) the set of indices i of the operators
Mi that compose it (indices are enumerated without repetitions). We remark
that, if W is not annihilated by application of rule (3), the simplification
rules (2, 4) do not alter the set I(W ), i.e., these rules do not introduce any
new index or suppress any existing index in I(W ). This crucial property is
not valid for the algebra associated with the partially asymmetric exclusion
process (see Golinelli and Mallick 2006).
Using the relation (2) we observe that for any i and any real number
λ ≠ 1 we have
(1 + λM_i)^{−1} = (1 + αM_i) with α = λ/(λ − 1) . (5)
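As a quick sanity check of relations (2)-(5), one can represent the local jump operators explicitly; the sketch below assumes the standard two-site TASEP generator M_i = |01><10| − |10><10| (an assumption of the illustration, not spelled out in the text) embedded in a small open chain:

import numpy as np
from functools import reduce

L = 4
m_loc = np.zeros((4, 4))        # two-site basis |00>, |01>, |10>, |11>
m_loc[1, 2] = 1.0               # gain: |10> -> |01| (particle hops right)
m_loc[2, 2] = -1.0              # loss of the source configuration |10>

def M(i):
    """Local jump operator acting on sites (i, i+1) of the L-site chain."""
    facs, s = [], 1
    while s <= L:
        if s == i:
            facs.append(m_loc); s += 2   # covers sites i and i+1 at once
        else:
            facs.append(np.eye(2)); s += 1
    return reduce(np.kron, facs)

M1, M2, M3 = M(1), M(2), M(3)
I = np.eye(2 ** L)
lam = 0.3
alpha = lam / (lam - 1.0)

assert np.allclose(M1 @ M1, -M1)                                      # (2)
assert np.allclose(M1 @ M2 @ M1, 0) and np.allclose(M2 @ M1 @ M2, 0)  # (3)
assert np.allclose(M1 @ M3, M3 @ M1)                                  # (4)
assert np.allclose((I + lam * M1) @ (I + alpha * M1), I)              # (5)
print("relations (2)-(5) hold for this explicit representation")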
2.2 Simple words
A simple word of length k is defined as a word Mσ(1)Mσ(2) . . .Mσ(k), where σ
is a permutation on the set {1, 2, . . . , k}. The commutation rule (4) implies
that only the relative position of Mi with respect to Mi±1 matters. A simple
word of length k can therefore be written as Wk(s2, s3, . . . , sk) where the
boolean variable sj for 2 ≤ j ≤ k is defined as follows : sj = 0 if Mj is
on the left of Mj−1 and sj = 1 if Mj is on the right of Mj−1. Equivalently,
Wk(s2, s3, . . . , sk) is uniquely defined by the recursion relation
Wk(s2, s3, . . . , sk−1, 1) = Wk−1(s2, s3, . . . , sk−1) Mk , (6)
Wk(s2, s3, . . . , sk−1, 0) = Mk Wk−1(s2, s3, . . . , sk−1) . (7)
The set of the 2^{k−1} simple words of length k will be called Wk. For a simple
word Wk, we define u(Wk) to be the number of inversions in Wk, i.e., the
number of times that Mj is on the left of Mj−1 :
u(W_k(s_2, s_3, . . . , s_k)) = \sum_{j=2}^{k} (1 − s_j) . (8)
We remark that simple words are connected: they cannot be factorized
into two (or more) commuting words.
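A small sketch of these definitions: the boolean s_j records whether M_j sits to the right (s_j = 1) or to the left (s_j = 0) of M_{j−1}, following the recursion (6)-(7), and u counts the inversions as in (8):

from itertools import product

def simple_word(s):
    """Index sequence of W_k(s_2, ..., s_k), built by the recursion (6)-(7)."""
    word = (1,)                                           # W_1 = M_1
    for j, sj in enumerate(s, start=2):
        word = word + (j,) if sj == 1 else (j,) + word
    return word

def inversions(s):
    return sum(1 - sj for sj in s)                        # eq. (8)

k = 3
for s in product((0, 1), repeat=k - 1):
    print(s, "->", simple_word(s), " u =", inversions(s))
# the 2^{k-1} = 4 simple words of length 3, e.g. (1, 1) -> (1, 2, 3) with u = 0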
2.3 Ring-ordered product
Because of the periodic boundary conditions, products of local jump opera-
tors must be ordered adequately. In the following we shall need to use a ring
ordered product O () which acts on words of the type
W = Mi1Mi2 . . .Mik with 1 ≤ i1 < i2 < . . . < ik ≤ L , (9)
by changing the positions of matrices that appear in W according to the
following rules :
(i) If i1 > 1 or ik < L, we define O (W ) = W . The word W is well-
ordered.
(ii) If i1 = 1 and ik = L, we first write W as a product of two blocks,
W = AB, such that B = MbMb+1 . . .ML is the maximal block of matrices
with consecutive indices that contains ML, and A = M1Mi2 . . .Mia , with
ia < b− 1, contains the remaining terms. We then define
O (W ) = O (AB) = BA = MbMb+1 . . .MLM1Mi2 . . .Mia . (10)
(iii) The previous definition makes sense only for k < L. Indeed, when
k = L, we have W = M1M2 . . .ML and it is not possible to split W in two
different blocks A and B. For this special case, we define
O (M1M2 . . .ML) = |1, 1, . . . , 1〉〈1, 1, . . . , 1| , (11)
which is the projector on the ‘full’ configuration with all sites occupied.
The ring-ordering O () is extended by linearity to the vector space spanned
by words of the type described above.
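The rules (i)-(iii) are algorithmic; a minimal sketch acting on the sorted index list of a word (rule (iii) is only flagged, since its image is a projector rather than a reordered word):

def ring_order(indices, L):
    indices = list(indices)
    if len(indices) == L:
        return "projector on the full configuration"   # rule (iii)
    if indices[0] > 1 or indices[-1] < L:
        return indices                                 # rule (i): well-ordered
    # rule (ii): B is the maximal block of consecutive indices ending at L
    b = L
    while b - 1 in indices:
        b -= 1
    B = [i for i in indices if i >= b]
    A = [i for i in indices if i < b]
    return B + A                                       # O(AB) = BA

print(ring_order([1, 2, 5, 7, 8], L=8))   # -> [7, 8, 1, 2, 5]
print(ring_order([2, 3, 5], L=8))         # unchanged, rule (i)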
2.4 Transfer matrix and generalized Hamiltonians Hk
The algebraic Bethe Ansatz allows to construct a one parameter commuting
family of transfer matrices, t(λ), that contains the translation operator T =
t(1) and the Markov matrix M = t′(0). For 0 ≤ λ ≤ 1, the operator
t(λ) can be interpreted as a discrete time process with non-local jumps :
a hole located on the right of a cluster of p particles can jump a distance
k in the backward direction, with probability λk(1 − λ) for 1 ≤ k < p,
and with probability λp for k = p. The probability that this hole does
not jump at all is 1 − λ. This model is equivalent to the 3-D anisotropic
percolation model of Rajesh and Dhar (1998) and to a 2-D five-vertex model.
It is also an adaptation on a periodic lattice of the ASEP with a backward-
ordered sequential update (Rajewsky et al. 1996, Brankov et al. 2004),
and equivalently of an asymmetric fragmentation process (Rákos and Schütz
2005).
The operator t(λ) is a polynomial in λ of degree L given by
t(λ) = 1 + \sum_{k=1}^{L} λ^k H_k , (12)
where the generalized HamiltoniansHk are non-local operators that act on the
configuration space. [We emphasize that the notation used here is different
from that of our previous work : t(λ) was denoted by tg(λ) in (Golinelli and
Mallick 2007).]
We have H1 = M and more generally, as shown in (Golinelli and Mallick
2007), Hk is a homogeneous sum of words of length k
H_k = \sum_{1≤i_1<i_2<...<i_k≤L} O (M_{i_1} M_{i_2} . . . M_{i_k}) , (13)
where O () represents the ring ordered product that embodies the periodicity
and the translation-invariance constraints.
For a system of size L with N particles only H1, H2, . . . , HN have a non-
trivial action. Because we are interested only in the case N ≤ L− 1 (the full
system as no dynamics) there are at most L − 1 operators Hk that have a
non-trivial action.
3 The connected operators Fk
3.1 Definition
The generalized Hamiltonians Hk and the transfer matrix t(λ) have non-local
actions and couple particles with arbitrary distances between them. Besides
Hk is a highly non-extensive quantity as it involves generically a number of
terms of order Lk. As usual, the local connected and extensive operators
are obtained by taking the logarithm of the transfer matrix. For k ≥ 1, the
connected Hamiltonians Fk are defined as
ln t(λ) = \sum_{k=1}^{∞} (λ^k/k) F_k . (14)
Taking the derivative of this equation with respect to λ and recalling that
t(λ) commutes with t′(λ), we obtain
\sum_{k≥1} λ^k F_k = λ t(λ)^{−1} t′(λ) . (15)
Expanding t(λ)^{−1} with respect to λ, this formula allows one to calculate F_k as a
polynomial function of H_1, . . . , H_k. For example F_1 = H_1, F_2 = 2H_2 − H_1^2,
etc. (see Golinelli and Mallick 2007). By using (13), we observe that F_k is
a priori a linear combination of products of k local operators Mi. However
this expression can be simplified by using the algebraic rules (2, 3, 4); in the
end, F_k will be a linear combination of reduced words of length j ≤ k.
Because of the ring-ordered product that appears in the expression (13)
of the Hk’s, it is difficult to derive an expression of Fk in terms of the local
jump operators. An exact formula for the Fk with k ≤ 10 was obtained
in (Golinelli and Mallick 2007) by using a computer program and a general
expression was conjectured for all k. In the following, the conjectured formula
is derived and proved rigorously.
3.2 Elimination of the ring-ordered product
The expression
λkFk can be written as a linear combination of reduced
words W . We know from formula (13) that at most L− 1 operators Hk are
independent in a system of size L, we shall therefore calculate Fk only for
k ≤ L− 1. Thus, we need to consider reduced words of length j ≤ L − 1.
Let W be such a word, and I(W ) be the set of indices of the operators Mi
that compose W ; our aim is to find the expression of W and to calculate
its prefactor from equation (15). Because the rules (2, 4) do not suppress or
add any new index, the following property is true : if a word W ′ appearing
in λ t(λ)−1 t′(λ) is such that I(W ′) 6= I(W ) then even after simplification,
W ′ will remain different from W . Therefore, the prefactor of W in \sum_{k≥1} λ^k F_k
is the same as the prefactor of W in
λ t_I(λ)^{−1} t′_I(λ) , where t_I(λ) = O ( \prod_{i∈I} (1 + λM_i) ) with I(W) ⊂ I . (16)
Because Fk commutes with the translation operator T , then for any r =
1, . . . , L−1, the prefactor of W = Mi1Mi2 . . .Mij is the same as the prefactor
of T rMT−r = Mr+i1Mr+i2 . . .Mr+ij . Furthermore, any word W of size k ≤
L − 1 is equivalent, by a translation, to a word that contains M1 and not
ML : indeed, there exists at least one index i0 such that i0 /∈ I(W ) and
(i0 + 1) ∈ I(W ) and it is thus sufficient to translate W by r = L− i0.
In conclusion, it is enough to study in expression (15), the reduced words
W with set of indices included in
I∗ = {1, 2, . . . , L− 1} . (17)
Because the index L does not appear in I∗, the ring-ordered product has a
trivial action in equation (16) and we have
tI∗(λ) = (1 + λM1)(1 + λM2) . . . (1 + λML−1) . (18)
We have thus been able to eliminate the ring-ordered product.
3.3 Explicit formula for the connected operators
In equation (18), differentiating tI∗(λ) with respect to λ, we have
t′_{I∗}(λ) = \sum_{i=1}^{L−1} (1 + λM_1) . . . (1 + λM_{i−1}) M_i (1 + λM_{i+1}) . . . (1 + λM_{L−1}) . (19)
Using equation (5) we obtain
t_{I∗}(λ)^{−1} = (1 + αM_{L−1})(1 + αM_{L−2}) . . . (1 + αM_1) , with α = λ/(λ − 1) . (20)
Noticing that λ(1 + αMi)Mi = −αMi, we deduce
λ t_{I∗}(λ)^{−1} t′_{I∗}(λ) = −α \sum_{i=1}^{L−1} (1 + αM_{L−1}) . . . (1 + αM_{i+1}) M_i (1 + λM_{i+1}) . . . (1 + λM_{L−1}) . (21)
The ith term in this sum contains words with indices between i and L − 1.
Because we are looking for the words that contain the operator M1, we must
consider only the first term in this sum, which we denote by Q
Q = −α(1 + αML−1) . . . (1 + αM2)M1(1 + λM2) . . . (1 + λML−1) . (22)
In the appendix, we show that
Q = R1 +R2 + . . .+RL−1 , (23)
where Ri is defined by the recursion :
R1 = −αM1 , (24)
Ri = λRi−1Mi + αMiRi−1 for i ≥ 2 . (25)
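The identity (23) can be checked numerically on a small chain with the explicit two-site representation assumed in the earlier sketch (again an assumption of the illustration):

import numpy as np
from functools import reduce

L, lam = 4, 0.3
alpha = lam / (lam - 1.0)
m_loc = np.zeros((4, 4)); m_loc[1, 2], m_loc[2, 2] = 1.0, -1.0

def M(i):
    facs, s = [], 1
    while s <= L:
        if s == i:
            facs.append(m_loc); s += 2
        else:
            facs.append(np.eye(2)); s += 1
    return reduce(np.kron, facs)

I = np.eye(2 ** L)
Ms = {i: M(i) for i in range(1, L)}

left, right = I, I
for i in range(L - 1, 1, -1):              # (1 + alpha M_{L-1}) ... (1 + alpha M_2)
    left = left @ (I + alpha * Ms[i])
for i in range(2, L):                      # (1 + lambda M_2) ... (1 + lambda M_{L-1})
    right = right @ (I + lam * Ms[i])
Q = -alpha * left @ Ms[1] @ right          # eq. (22)

R = [-alpha * Ms[1]]                       # R_1, eq. (24)
for i in range(2, L):                      # R_i, eq. (25)
    R.append(lam * R[-1] @ Ms[i] + alpha * Ms[i] @ R[-1])

assert np.allclose(Q, sum(R))              # eq. (23)
print("eq. (23) holds numerically for L =", L)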
To summarize, all the words in \sum_{k≥1} λ^k F_k that contain M_1 and not M_L are
given by Q = R_1 + R_2 + . . . + R_{L−1}. From the recursion relation (25) we deduce
that R_i is a linear combination of the 2^{i−1} simple words W_i(s_2, s_3, . . . , s_i)
defined in section 2.2. Furthermore, we observe from (25) that a factor λ
appears if s_i = 1 and a factor α = λ/(λ − 1) appears if s_i = 0. Therefore,
the coefficient f(W) of W = W_i(s_2, s_3, . . . , s_i) in Q is given by
f(W) = (−1)^u \frac{λ^i}{(1 − λ)^{u+1}} = (−1)^u \sum_{j=0}^{∞} \binom{u+j}{j} λ^{i+j} , (26)
where i is the length of W and u = u(W ) is its inversion number, defined in
equation (8). We have thus shown that
Q = \sum_{i≥1} \sum_{W∈W_i} f(W) W = \sum_{i≥1} \sum_{W∈W_i} \sum_{j=0}^{∞} (−1)^{u(W)} \binom{u(W)+j}{j} λ^{i+j} W , (27)
where Wi is the set of simple words of length i.
Finally, we recall that the coefficient in \sum_{k≥1} λ^k F_k of a reduced word W
that contains M_1 and not M_L is the same as its coefficient in Q. Extracting
the term of order λ^k in equation (27) we deduce that any word W in F_k that
contains M_1 and not M_L is a simple word of length i ≤ k and its prefactor
is given by (−1)^{u(W)} \binom{u(W)+k−i}{k−i}.
The full expression of Fk is obtained by applying the translation operator
to the expression (27); indeed any word in Fk can be uniquely obtained by
translating a simple word in Fk that contains M1 and not ML. We conclude
that for k < L,
F_k = T \sum_{i=1}^{k} \sum_{W∈W_i} (−1)^{u(W)} \binom{k−i+u(W)}{k−i} W , (28)
where T is the translation-symmetrizator that acts on any operator A as
follows: T A = \sum_{i=0}^{L−1} T^i A T^{−i} . The presence of T in equation (28) ensures
that Fk is invariant by translation on the periodic system of size L. All simple
words being connected, we finally remark that formula (28) implies that Fk
is a connected operator.
4 Conclusion
By using the algebraic properties of the TASEP algebra (2-4), we have derived
an exact combinatorial expression for the family of connected operators that
commute with the Markov matrix. This calculation allows to fully elucidate
the hierarchical structure obtained from the Algebraic Bethe Ansatz. It
would be of a great interest to extend our result to the partially asymmetric
exclusion process (PASEP), in which a particle can make forward jumps
with probability p and backward jumps with probability q. In particular, we
recall that the symmetric exclusion process is equivalent to the Heisenberg
spin chain : in this case the connected operators have been calculated only
for the lowest orders (Fabricius et al., 1990). This is a challenging and
difficult problem. In our derivation we used a fundamental property of the
TASEP algebra : the rules (2-4) when applied to a word W either cancel
W or conserve the set of indices I(W ). The algebra associated with PASEP
violates this crucial property because there we have Mi Mi+1 Mi = pq Mi.
Therefore the method followed here does not have a straightforward extension
to the PASEP case.
Appendix: Proof of equation (23)
Let us define the following series
Q1 = −αM1 , (29)
Qi = (1 + αMi)Qi−1(1 + λMi) for i ≥ 2 . (30)
We remark that Q defined in equation (22) is given by Q = QL−1. Let us
consider Ri defined by the recursion (25). The indices that appear in the
words of Qi and Ri belong to {1, 2, . . . , i}. Therefore, we have
[Rj ,Mi] = 0 for j ≤ i− 2 , (31)
because the operators M1,M2, . . . ,Mj that compose Rj commute with Mi.
From equations (31) and (5), we obtain
(1 + αMi)Rj(1 + λMi) = Rj for j ≤ i− 2 . (32)
Furthermore, from (25), we obtain
MiRi−1Mi = λMiRi−2Mi−1Mi + αMiMi−1Ri−2Mi . (33)
Because Mi commutes with Ri−2, we can use the relation MiMi−1Mi = 0 to
deduce that
MiRi−1Mi = 0 . (34)
Using equation (34), we find
(1 + αMi)Ri−1(1 + λMi) = Ri−1 + λRi−1Mi + αMiRi−1 = Ri−1 +Ri . (35)
From equations (32) and (35), we prove that the (unique) solution of the
recursion relation (30) is given by equation (23), Qi = R1 +R2 + . . .+Ri.
References
• Brankov J. G., Priezzhev V. B. and Shelest R. V., 2004, Generalized
determinant solution of the discrete-time totally asymmetric exclusion
process and zero-range process, Phys. Rev. E 69 066136.
• Derrida B., 1998, An exactly soluble non-equilibrium system: the asym-
metric simple exclusion process, Phys. Rep. 301 65.
• Fabricius K., Mütter K.-H. and Grosse H., 1990, Hidden symmetries in
the one-dimensional antiferromagnetic Heisenberg model, Phys. Rev.
B 42 4656.
• Golinelli O. and Mallick K., 2006, The asymmetric simple exclusion
process: an integrable model for non-equilibrium statistical mechanics,
J. Phys. A: Math. Gen. 39 12679.
• Golinelli O. and Mallick K., 2007, Family of Commuting Operators for
the Totally Asymmetric Exclusion Process, Submitted to J. Phys. A:
Math. Theor., cond-mat/0612351.
• Rajesh R. and Dhar D., 1998, An exactly solvable anisotropic directed
percolation model in three dimensions, Phys. Rev. Lett. 81 1646.
• Rajewsky N., Schadschneider A. and Schreckenberg M., 1996, The
asymmetric exclusion model with sequential update, J. Phys. A: Math.
Gen. 29 L305.
• Rákos A. and Schütz G. M., 2005, Current distribution and random
matrix ensembles for an integrable asymmetric fragmentation process,
J. Stat. Phys. 118 511.
• Schütz G. M., 2001, Exactly solvable models for many-body systems far
from equilibrium in Phase Transitions and Critical Phenomena, vol.
19, C. Domb and J. L. Lebowitz Ed., Academic Press, San Diego.
• Spohn H., 1991, Large scale dynamics of interacting particles, Springer,
New-York.
|
0704.0870 | Proposal for an Enhanced Optical Cooling System Test in an Electron
Storage Ring | PROPOSAL FOR AN ENHANCED OPTICAL COOLING SYSTEM
TEST IN AN ELECTRON STORAGE RING
E.G.Bessonov, M.V.Gorbunkov, Lebedev Phys. Inst. RAS, Moscow, Russia,
A.A.Mikhailichenko, Cornell University, Ithaca, NY, U.S.A.
Abstract
We are proposing to test experimentally the new idea of Enhanced Optical Cooling (EOC)
in an electron storage ring. This experiment will confirm new fundamental processes in beam
physics and will demonstrate new unique possibilities with this cooling technique. It will
open important applications of EOC in nuclear physics, elementary particle physics and in
Light Sources (LS) based on high brightness electron and ion beams.
1. INTRODUCTION
Emittance and the number of stored particles N in the beam determine the principal
parameter of the beam, its brightness, which can be defined as B = N/(γε_x γε_z γε_s), where each γε_{x,z,s}
stands for the invariant emittance associated with the corresponding coordinate. Beam cooling
reduces the beam emittance (its size and the energy spread) in a storage ring and therefore
improves its quality for experiments. All high-energy colliders and high-brilliance LS’s
require intense cooling to reach extreme parameters. Several methods for the particle beam
cooling are in hand now: (i) radiation cooling, (ii) electron cooling, (iii) stochastic cooling,
(iv) optical stochastic cooling, (v) laser cooling, (vi) ionization cooling, and (vii) radiative
(stimulated radiation) cooling [1-3]. Recently a new method of EOC was suggested [4-7] and
in this proposal we discuss an experiment which might test this method in an existing
electron storage ring having maximal energy ~ 2.5 GeV, and which can also function down to
energies of ~100-200 MeV.
Figure 1: The scheme of the EOC of a particle beam (a) and unwrapped optical scheme (b)
EOC [4] appeared as the symbiosis of enhanced emittance exchange and Optical
Stochastic Cooling (OSC) [8-10]. These ideas have not yet been demonstrated. At the same
time the ordinary Stochastic Cooling (SC) is widely in use in proton and ion colliders. OSC
and EOC extend the potential for fast cooling due to bandwidth. EOC can be successfully
used in the Large Hadron Collider (LHC) as well as in a planned muon collider.
The EOC in the simplest case of two-dimensional cooling in the longitudinal and
transverse x-planes is based on one pickup and one or more kicker undulators located at a
distance determined by the betatron phase advance ψ_x^{bet} = 2π(k_{p,k} + 1/2) for the first kicker
undulator and ψ_x^{bet} = 2π k_{k,k} for the next ones, where k_{i,j} = 0, 1, 2, 3, … are whole numbers.
Other elements of the cooling system are the optical amplifier (typically Optical Parametric
Amplifier i.e. OPA), optical filters, optical lenses, movable screen(s) and optical line with
variable time delay (see Fig.1). An optical delay line can be used together with (or in some
cases without) an isochronous path between the undulators to keep the phases of particles such
that the kicker undulator decelerates the particles during the process of cooling [6], [7].
2. ON THE FOUNDATIONS OF ENHANCED OPTICAL COOLING
The total amount of energy carried out by undulator radiation (UR) emitted by electrons
traversing an undulator, according to classical electrodynamics, is given by
E_tot = (2/3) r_e² B̄² β² γ² L_u , (1)
where r_e = e²/m_e c² is the classical electron radius; e and m_e are the electron charge and mass,
respectively; B̄² is the averaged square of the magnetic field along the undulator period λ_u;
β = v/c is the relative velocity of the electron; γ = E/m_e c² is the relativistic factor; L_u = Mλ_u is
the length of the undulator; and M is the number of undulator periods. For a planar harmonic
undulator B̄² = B_0²/2, where B_0 is the peak of the undulator field. For a helical undulator
B̄² = B_0². The spectral distribution of the first harmonic of the UR for M >> 1 is given by [11]
dE_1^{cl}/dξ = E_1^{cl} f(ξ)   (0 ≤ ξ ≤ 1) , (2)
where E_1^{cl} = E_tot^{cl}/(1 + K²)², f(ξ) = 3ξ(1 − 2ξ + 2ξ²), ξ = λ_{1,min}/λ_1, λ_{1min} = λ_1|_{θ=0}, ∫_0^1 f(ξ) dξ = 1, K = eB̄λ_u/2πm_e c²
is the deflection parameter, λ_1 = λ_u(1 + K² + ϑ²)/2γ² is the wavelength of the first harmonic
of the UR, ϑ = γθ; θ is the azimuthal angle between the vector of electron average velocity
in the undulator and the undulator axis.
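For orientation, these definitions can be evaluated for illustrative parameters (λ_u and B_0 below are assumptions of this sketch; the energy corresponds to the low end mentioned in the Introduction):

import math
e, m_e, c = 1.602e-19, 9.109e-31, 2.998e8       # SI constants
lam_u, B0, E_MeV = 0.05, 0.1, 100.0             # 5 cm period, 0.1 T helical field, 100 MeV
K = e * B0 * lam_u / (2 * math.pi * m_e * c)    # deflection parameter (SI form of the text's formula)
gamma = E_MeV / 0.511
lam_1min = lam_u * (1 + K**2) / (2 * gamma**2)  # on-axis first-harmonic wavelength
print(f"K = {K:.2f}, lambda_1min = {lam_1min*1e6:.2f} um")   # ~0.47 and ~0.8 um (optical)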
Electrons have effective resonant interaction in the field of the kicker undulator only with
that part of their undulator radiation wavelets (URW) emitted in the pickup undulator if the
frequency bands and the angles of the electron average velocities are selected in the ranges
(Δω/ω)_C = 1/2M ,   (Δϑ)_C = √((1 + K²)/2M) (3)
nearby maximal frequency and to the axes of both pickup and kicker undulators. Optical
filters which are tuned up to the maximal frequency of the first harmonic of the UR can be
used for this selection. In this case screens must select the URWs emitted at angles
ϑ_{URW} < (Δϑ)_C to the pickup undulator axis both in the horizontal and vertical directions before
they enter optical amplifier (to do away with the unwanted part of URWs loading OPA). In
this case the angle between the average electron velocity vector in the undulator and the
undulator axis will be small:
(Δϑ)_e < (Δϑ)_C . (4)
Below we suggest that the optical system of EOC selects a portion of URWs, emitted in
this range of angles and frequencies, by filters, diaphragms and/or screens. This condition
limits the precision of the phase advance δψ_{x,z}, determined by the equation
δθ_{x,z} < (Δϑ)_C , (5)
where δθ_{x,z} = (2π A_{x,z,bet}/λ_{x,z,bet}) sin(δψ_{x,z}^{bet}) is the change of the angle between the electron
average velocity and the axis of the kicker undulator owing to an error in the arrangement of
the undulators, A_{x,z,bet} is the amplitude of the betatron oscillations of the electron in the storage
ring, in the smooth approximation δψ_{x,z}^{bet} = 2πΔs/λ_{x,z,bet}, Δs is the displacement of the kicker
undulator from its optimal position, and λ_{x,z,bet} is the length of the period of betatron oscillations.
The number of the photons in the URW emitted by electrons in the suitable cooling frequency
and angular ranges (3) is defined by the following formula (see Appendix 1)
N_ph = ΔE_1^{cl}/ħω_{1max} = πα K²/(1 + K²) , (6)
where ΔE_1^{cl} = (dE_1^{cl}/dω)Δω = 3E_tot^{cl}/2M(1 + K²)², ω_{1max} = 2πc/λ_{1min}, and α = e²/ħc ≅ 1/137 [11].
Filtered URWs must be amplified and directed along the axis of the kicker undulator.
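As a quick numerical cross-check, evaluating Eq. (6) at K = 1 reproduces the value N_ph ≈ 1.15·10⁻² used for electrons later in the text:

import math
alpha_fs = 1 / 137.036         # fine-structure constant e^2 / (hbar c)
K = 1.0
N_ph = math.pi * alpha_fs * K**2 / (1 + K**2)
print(f"N_ph ≈ {N_ph:.2e}")    # ≈ 1.15e-02 << 1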
If the density of energy in the URWs has a Gaussian distribution with a waist size σ_w > σ_{x,z}
and Z_R > L_u/2, the R.M.S. electric field strength E_w^{cl} of the wavelet of length 2Mλ_{1min} in the
kicker undulator is defined by the expression
E_w^{cl} = √(ΔE_1^{cl}/σ_w² M λ_{1min}) , (7)
where σ_{x,z} are the electron beam dimensions and Z_R = 4πσ_w²/λ_{1min} is the Rayleigh length. If
σ_{x,z} < σ_{w,c}, then σ_w = σ_{w,c} and the R.M.S. electric field strength E_w^{cl} of the wavelet becomes
E_w^{cl} = √(ΔE_1^{cl}/σ_{w,c}² M λ_{1min}) , (8)
where σ_{w,c} = √(L_u λ_{1min}/8π) is the waist size corresponding to the Rayleigh length Z_R = L_u/2.
Note that the electric field values (7), (8) do not take into account the quantum nature of emission
of URWs in the pickup undulator. They are valid for N_ph >> 1. Such a case can be realized only
for heavy ions with atomic number Z > 10 and for deflection parameter K > 1. If, according
to classical electrodynamics, N_ph < 1, then it means that in reality, according to quantum
theory, one photon is emitted with the energy ħω_{1,max} and with the probability p_em = N_ph < 1.
In this case the electric field strength is determined by the replacement of the energy ΔE_1^{cl}
by ΔE_1^q = ΔE_1^{cl}·N_ph^{−1} = ħω_{1,max} in (7), and the frequency of the emission of photons is
f_em = f·p_em = f·N_ph < f, where f is the revolution frequency of the electron in the storage ring.
If the number of electrons in the URW sample is N_{e,s}, then the URW emitted by an electron i
in the pickup undulator and amplified in the OPA decreases the amplitudes of betatron and
synchrotron oscillations of this electron in the kicker undulator. The other N_{e,s} − 1 electrons emit
URWs including (N_{e,s} − 1)N_ph non-synchronous (for the electron i) photons, which are
amplified by the OPA and, together with noise photons of the OPA, increase the amplitudes of
oscillations of the electron i. If the number of non-synchronous photons in the sample is
N_{ph,Σ} = (N_{e,s} − 1)N_ph + N_n < 1, where N_n is the number of noise photons in the URW
sample at the amplifier front end [6], [7], then the jumps of the closed orbit and the electric
field strengths are determined by the replacement of the energy ΔE_1^{cl} by ΔE_1^q in (7), (8), and
the frequency of the emission of photons is f_em = f·p_Σ = f·N_{ph,Σ} < f. In the opposite case,
N_{ph,Σ} > 1, the electric field strengths are determined by the replacement of the energy ΔE_1^{cl}
by ΔE_1^Σ = N_{ph,Σ}·ħω_{1,max} in (7), and the frequency of the emission of photons is f.
In our case N_{e,s} = 2Mλ_{1,min}N_{eΣ}/σ_{s,0}, where N_{eΣ} stands for the number of electrons in the
bunch, and σ_{s,0} is the initial length of the electron bunch.
The maximum rate of energy losses for the electron in the fields of the kicker undulators
and amplified URW is
P_loss^max = e E_w^{cl} β_⊥ L_u f Φ(N_ph) N_kick √α_ampl |_{σ_w = σ_{w,c}} , (9)
where β_⊥ = K/γ; N_kick is the number of kicker undulators (it is supposed that electrons are
decelerated in these undulators); and α_ampl is the gain in the optical amplifier. The function
Φ(N_ph)|_{N_ph<1} = p_em = N_ph, Φ(N_ph)|_{N_ph>>1} = 1 takes into account the quantum nature of
the emission (the frequency of emission of photons ~ f·N_ph and the electric field strength
~ 1/√N_ph). It follows that the quantum nature of the photon emission in undulators leads to a
decrease of the maximum average rate of energy losses for electrons in the fields of the
kicker undulator and amplified URW by the factor √N_ph ≃ 1/9.3.
The damping times for the longitudinal and transverse degrees of freedom are
τ_{s,EOC} = 6σ_{E,0}/P_loss^max ,   τ_{x,EOC} = (σ_{x,0}/σ_{xη,0}) τ_{s,EOC} = 6σ_{x,0}βE/η_{x,kick}P_loss^max , (10)
where σ_{E,0} is the initial energy spread of the electron beam, P_loss stands for the power losses
(9), σ_{x,0} is the initial radial beam dimension determined by betatron oscillations, η_{x,kick} ≠ 0 is
the dispersion function in the kicker undulator, and σ_{xη,0} = η_{x,kick}σ_{E,0}/βE. Note that the damping
time for the longitudinal direction does not depend on η_{x,kick} and the one for the transverse
direction is inversely proportional to η_{x,kick}. The factor 6 in (10) takes into account that the energy spread for
cooling is 2σ_{E,0}, that electrons do not interact with their URWs every turn (screening effect), and
that the jumps of the electron energy and closed orbit in the general case lead to lesser jumps of
the amplitudes of synchrotron and betatron oscillations [6].
The equilibrium spread in the positions of the closed orbits ⟨x_η²⟩_eq, the spread of betatron
amplitudes ⟨A_x²⟩_eq and the corresponding beam dimensions σ_{xη}^{EOC}, σ_x^{EOC} determined by EOC are
(σ_{xη}^{EOC})² = (σ_x^{EOC})² = ⟨x_η²⟩_eq/2 = ⟨A_x²⟩_eq/2 = (δx_{η1}²/2)(N_{e,s} − 1 + N_n/N_ph)|_{N_{ph,Σ}>1} , (11)
where δx_{η1} = η_x ΔE_loss^max/β²E is the jump of the electron closed orbit determined by the energy
jump ΔE_loss^max = P_loss^max/f N_ph of the electron in the fields of the kicker undulator and its amplified
URW (corresponds to one photon/mode or one photon/sample at the amplifier front end).
(12)
EOC EOC
E eq x kick x kickE ησ β σ η= ,/
Note that the jumps of closed orbits δx_{η1} ~ 1/√N_ph. That is why the electron bunch dimensions
(11), at the same number of particles in the sample and the same relativistic factor, are much higher
than the ion ones. As a consequence of the small electron charge, the number of photons in the URWs
is N_ph ≈ 1.15·10⁻² < 1, about 87% of URWs are empty of synchronous photons, and every URW has
N_{ph,Σ} > 1 non-synchronous photons. That is why the contribution of noise photons for
electrons is greater (N_n/N_ph ≅ 87 N_n) than for heavy ions.
The power transferred from the optical amplifier to the electron beam is
P_ampl = ε_sample · f · N_{eΣ} + P_n , (13)
where ε_sample = N_ph ħω_{1,max} α_ampl is the average energy in a sample and P_n is the noise power. This is the
maximal limit for the power, corresponding to the case when all electrons are involved in the
cooling process simultaneously (screening is absent and the amplification time interval Δt_ampl of the
amplifier is longer than the time duration Δt_b of the electron bunch).
The initial phases φ_in of electrons in their URWs radiated in the pickup undulator and
transferred to the entrance of the kicker undulator(s) depend on their energies and amplitudes
of betatron oscillations. If we assume that synchronous electrons enter the kicker undulator
together with their URW at the decelerating phase corresponding to the maximum
decelerating effect, then the initial phases for other electrons in their URWs will correspond
to deceleration as well, if the difference of their closed orbit lengths between undulators
remains Δs < λ_{1,min}/2. In this case the amplitudes of betatron oscillations, the transverse
horizontal emittance of the beam in the smooth approximation and the energy spread of the
beam at zero amplitude of betatron oscillations of electrons must not exceed the values
A_x << A_{x,lim} = √(λ_{1min} λ_{x,bet})/π ,   ε_x < 2λ_{1min} ,   σ_E/E < (σ_E/E)_lim = λ_{1min} β²/2η_{c,l}L_{p,k} , (14)
where L_{p,k} is the distance between the pickup and kicker undulators along the synchronous orbit,
η_{c,l} = −d ln T_{p,k}/d ln p is the local slippage factor between the undulators, p is the momentum
of an electron, and T_{p,k} = L_{p,k}/βc is the pass-by time between the pickup and kicker undulators. In
accordance with the betatron phase advance ψ_x = 2π(k_{p,k} + 1/2), the value L_{p,k} =
λ_{x,bet}(k_{p,k} + 1/2), where λ_{x,bet} = C/ν_x is the wavelength of betatron oscillations, C is the
circumference of the ring, and ν_x is the betatron tune.
Below we investigate this case in more detail. The difference in the propagation time
of the URW and the traveling time T_{p,k} of the electron between the pickup and kicker undulators
depends on the initial conditions of the electron's trajectory and can be expressed as
cΔt = cτ_1 − c∫_{s_0}^{s} (x/ρ) dτ = cτ_1 − c x_0 ∫_{s_0}^{s} (C/ρ) dτ − c x'_0 ∫_{s_0}^{s} (S/ρ) dτ − c(ΔE/β²E) ∫_{s_0}^{s} (D/ρ) dτ ,
where x = x_β + x_η, x_0 = x_{0β} + x_{0η}; x_β is the deviation of the electron from its closed orbit,
x_η is the deviation of the closed orbit itself from the synchronous one, and x_{0β}, x_{0η} stand for the
appropriate deviations at the location s = s_0. C(s) and S(s) are the two eigenvectors called the cosine-like
and sine-like trajectories, ρ stands for the local bending radius, and cτ_1 is a constant which is
determined by the optical delay line. Basically the vectors S(s), C(s) describe the trajectories with
initial conditions like
x'(s_0) = 0, x(s,s_0) = x_0·C(s,s_0) and x(s_0) = 0, x(s,s_0) = x'_0·S(s,s_0), where s_0
corresponds to the longitudinal position of the pickup. So the transverse position of the
particle has the form [12]
x(s) = x_0·C(s,s_0) + x'_0·S(s,s_0) + D(s,s_0)·(ΔE/β²E) ,
where ΔE = E − E_d is the deviation of the electron energy from the dedicated energy E_d, and the
dispersion D is defined as
D(s,s_0) = S(s,s_0) ∫_{s_0}^{s} (C(τ,s_0)/ρ(τ)) dτ − C(s,s_0) ∫_{s_0}^{s} (S(τ,s_0)/ρ(τ)) dτ .
The dispersion D(s,s_0) describes the transverse position of a test particle having a relative momentum
deviation from equilibrium as big as Δp/p, while its initial values of the transverse coordinates
at s = s_0 are zero. So the full expression for the transverse position of the particle comes to the form
x(s) = x_{0β}·C(s,s_0) + x'_{0β}·S(s,s_0) + [x_{η,0}·C(s,s_0) + x'_{η,0}·S(s,s_0) + D(s,s_0)]·(ΔE/β²E) ,
where x_η describes the periodic solution for the dispersion in the damping ring (slippage factor),
x_{0β}, x'_{0β} mark the pure betatron part of the transverse coordinate, and x_{η,0}, x'_{η,0} stand for its values at
the location of the pickup undulator. So the time difference becomes
cΔt = cτ_1 − R_{51}(s,s_0)·x_{0β} − R_{52}(s,s_0)·x'_{0β} − R_{56}(s,s_0)·(ΔE/β²E) ≅ cτ_1 + η_{c,l} cT_{p,k}·(ΔE/β²E) , (15)
where we neglected terms responsible for the betatron oscillations (i.e. R51=0, R52=0).
In the general case η_{c,l} ≠ 0 the initial phase of an electron in the field of the amplified URW
propagating through the kicker undulator is, according to (15), φ_in = ω_{1,max}Δt ≠ 0, and the rate of the
energy loss is
P_loss = −|P_loss^max| sin(φ_in) · f(ΔE) , (16)
where f(ΔE) = 1 − |φ_in(ΔE)|/2πM if |φ_in| ≤ 2πM, and f(ΔE) = 0 if |φ_in| > 2πM. The
function f(ΔE) takes into account that an electron with the energy E_d and its URW enter the
kicker undulator simultaneously, at the phase |φ_in| = 0 if cτ_1 = 0 (zero rate of the energy loss),
and pass together along the whole undulator length. Electrons having energies E ≠ E_d enter the
kicker undulator non-simultaneously with their URWs, with different phases, and travel
together a shorter distance in the undulator, under a smaller rate of the energy change.
According to (16), electrons with different initial phases are accelerated or decelerated and
gathered at the phases φ_in^m = π + 2mπ (−M ≤ m ≤ M, m = 0, ±1, . . .) and at the energies
E_m = E_d + (2m + 1)πβE_d/ω_{1,max}η_{c,l}T_{p,k} = E_d + (2m + 1)λ_{1,min}β²E_d/2η_{c,l}L_{p,k} , (17)
if the RF accelerating system is switched off (see Fig. 2).
Figure 2: In the EOC scheme electrons are grouped near the phases φ_in = π + 2mπ (energies E_m)
The energy gaps between equilibrium energy positions have magnitudes given by
δE_gap = E_m − E_{m−1} = λ_{1,min}β²E_d/η_{c,l}L_{p,k} . (18)
Note that the energy gap (18) is 2 times higher than the limiting energy spread of the beam at
zero amplitude of betatron oscillations of electrons (14).
The power loss P_loss is an oscillatory function of the energy deviation |E − E_d|, with the amplitude
linearly decreasing from the maximum value |P_loss^max| at the energy E = E_d to zero at the
energy |E − E_d| ≥ M·δE_gap. If the RF accelerating system is switched off, the electron energy
falls into the energy range |E − E_d| < M·δE_gap, and excitation of synchrotron oscillations by
non-synchronous photons can be neglected, then the electron energy drifts to the nearest
energy value E_m. The variation of the particle's energy looks like aperiodic
motion in one of 2M potential wells located one by one. The depth of the wells decreases
with their number |m|. If the delay time in the optical line is changed, the energies E_m and
the energies of particles in the wells are changed as well. In this case particles stay in their
wells if their maximal power loss satisfies the condition
|P_loss^max|_m > |dE/dt| + |P_loss^ext| , (19)
where |P_loss^ext| stands for the external power losses determined by synchrotron radiation.
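To visualize this drift into the 2M wells, a minimal normalized sketch can be used (the overall sign of the slippage is chosen here, as an assumption, so that the phases π + 2mπ of Eq. (17) are the stable ones):

import math, random

M = 5
def f(phi):
    return max(0.0, 1.0 - abs(phi) / (2 * math.pi * M))   # envelope of Eq. (16)

random.seed(1)
span = 0.95 * 2 * math.pi * M
phis = [random.uniform(-span, span) for _ in range(200)]
dt = 0.05
for _ in range(20000):
    phis = [p + dt * math.sin(p) * f(p) for p in phis]     # normalized drift

wells = sorted({round((p - math.pi) / (2 * math.pi)) for p in phis})
print("occupied wells m:", wells)   # up to 2M = 10 wells, phases near pi + 2*pi*m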
3. VARIANTS OF OPTICAL COOLING
Depending on the local slippage factor and the coefficients R51, R52 and R56 in (15), different
variants of optical cooling can be suggested.
1. The local slippage factor η_{c,l} = 0, betatron oscillations are absent, and the dispersion function
in the pickup undulator η_{x,pickup} ≠ 0. In this case δt = const and the initial phase for all electrons
can be set to φ_in = π/2. It corresponds to electrons arriving at the kicker undulator in
the decelerating phases of their URWs, under the maximum rate of energy loss. In this case
electrons will be gathered near the synchronous electron if a moving screen opens the way
only to URWs emitted by electrons with energies higher than the synchronous one. This is the
case of EOC in the longitudinal plane based on an isochronous bend and the screening technique.
If electrons develop small betatron oscillations (betatron oscillations introduce a phase shift
of less than π/2, see (14)), then the electron beam will be cooled in the transverse and longitudinal
directions simultaneously. If the dispersion function value in the pickup undulator η_x = 0, or if
the synchrotron oscillations of electrons are small (no selection in the longitudinal plane), then
cooling takes place in the transverse direction only (η_{x,kick} ≠ 0).
2. The scheme of OSC can be used at η_{c,l} = 0 [8]. In this scheme the pickup undulator is a quadrupole one and the kicker undulator is an ordinary one. They have the same period. The magnetic field in the quadrupole undulator increases with the radial coordinate as B(x) ≅ G·x and changes sign at x = 0, where G stands for the gradient. The phase of the emitted URWs changes its value by π at x = 0 as well. That is why electrons are grouped around the synchronous orbit in the ring, where they do not emit URWs. The deflection parameter in the quadrupole undulator increases with the radial coordinate (K ∝ |B(x)|) and so does the emitted wavelength, λ_{1,min} ≅ λ_u·(1 + K²(x))/2γ². As the resonance interaction of a URW and the electron that emitted it is possible in the kicker undulator only if the deflection parameters of the undulators are nearly the same, this opens a possibility for an initial selection of amplitudes in the pickup undulator. So the cooling can be arranged for some specific amplitude of synchrotron oscillations only. The continuous resonance interaction and cooling are possible if the magnetic field of the kicker undulator is decreased in time during the cooling process. Electrons having a synchrotron amplitude other than the resonance one do not interact with the cooling system. Betatron oscillations in this scheme must introduce a phase shift of less than π/2 as well. This can be arranged by proper zeroing of the cos- and sin-like trajectory integrals [13].
The scheme with two quadrupole undulators (the pickup one and the kicker one) is described in [13]. In this case the second quadrupole undulator decreases the amplitudes of synchrotron oscillations for positive deviations x_{η,pickup} > 0 (we choose the conditions when electrons are decelerated in their URWs if x_{η,pickup} > 0 and the betatron amplitudes are neglected, x_{β,pickup} = x_{β,kick} = 0) and increases them for negative deviations x_{η,pickup} < 0, as such an electron experiences deceleration again (the phases of the URWs change their value by π at x = 0 and simultaneously the electron passes the kicker undulator at the opposite magnetic field). In this case, to cool the electron beam, an additional selection of URWs can be made by the screen (cutting off URWs emitted by electrons at negative deviations). Another scheme which can be used is based on a truncated undulator with a magnetic field of the form B(x)|_{x>0} = G·x and B(x)|_{x<0} = 0. Such an undulator can be a linearly polarized one with only the upper or lower array of magnetic poles. It was used in the undulator radiation experiments in circular accelerators [14].
3. If η_{c,l} ≠ 0, the betatron oscillations of electrons introduce a phase shift << π/2 and the energy gaps have the magnitude δE_gap = (3÷5)·σ_{E,0}, then the transit-time method of OSC based on two identical undulators can be used [9]. In this case the main part of the electrons, including tail electrons, will be gathered at the energy E_s if the energy E_m|_{m=0} = E_0 was chosen equal to E_s. Decreasing of the beam dimensions leads to a decrease of the rate of cooling. In this case a time-dependent local slippage factor η_{c,l}(t) can be used to decrease the energy gap during the cooling process and to increase the rate of cooling.
4. If η_{c,l} ≠ 0, the energy gaps between the equilibrium energy positions have the magnitudes δE_gap ≅ (3÷5)·σ_{E,0}/M and the RF accelerating system is switched off, then electrons are gathered at the phases φ_in and energies E_m independently of the amplitudes of betatron oscillations. If the screen overlaps the URWs emitted by electrons at negative deviation from the one having minimum energy, and the optical system changes the delay time of the remaining URWs so as to move the energies E > E_min + (3÷5)·σ_{E,0} towards the minimum energy, then electrons lose their energy and the amplitudes of betatron oscillations until their energy reaches the minimum one. Cooling takes place according to the scheme of the EOC.
5. For this variant η_{c,l} ≠ 0, the energy gaps between the equilibrium energy positions have the magnitudes δE_gap << σ_{E,0}/M, the RF accelerating system of the storage ring is switched off, the screen absorbs the URWs emitted by electrons at a negative deviation of their position from the synchronous one in the radial direction, the energy layers are located at positive deviations from the synchronous one outside the energy spread of the beam, and the optical system changes the delay time of the URWs so as to move the energy layers to the synchronous energy. Then the energy layers capture a small part of the electrons of the beam first, electrons with smaller energy are captured increasingly, and they lose their energy and betatron amplitudes until reaching the minimum energy allowed in the beam. So the cooling process takes place. This process can be repeated. In this case the energy jump of the electron in the kicker undulator, δE_loss^max = |P_loss^max|/(f·N_ph), must be less than the energy gap δE_gap determined by the synchronous photons (18); the same holds for the non-synchronous photons in the URWs, which produce the higher energy jumps δE_non-syn^max = δE_loss^max·√N_{ph,Σ} at N_{ph,Σ} > 1. That is why the next condition must be fulfilled:

δE_loss^max·√N_{ph,Σ} < δE_gap = λ_{1,min}·β·E_d/(η_{c,l}·L_{p,k}).   (20)

Otherwise electrons can jump over the energy layers and cooling will not be effective.
If the RF accelerating system of the storage ring is switched on and the average energy loss per turn ΔE_loss^turn = |P_loss^max|/f is higher than the energy loss eV_0·|sin φ − sin φ_s| of the electron in the RF accelerating system of the storage ring, or

eV_0·|sin φ − sin φ_s| < ΔE_loss^turn,   (21)

then the energy of the electrons drifts to the nearest energy value and EOC takes place. Here V_0 is the amplitude of the RF accelerating voltage, φ_s is the synchronous phase determined by the equation sin φ_s = ΔE_{SR,s}/eV_0 = V_s/V_0, ΔE_{SR,s} = P_{SR,s}/f = eV_s is the energy loss per turn, and P_{SR,s} = (2/3)·r_e²·c·γ²·⟨B²⟩ ≅ 2.77·10^3·γ^4/(R_s·R̄_s) eV/sec is the average power of the synchrotron radiation emitted by the electron in the ring (⟨B²⟩ is the squared bending field averaged over the ring circumference). The value P_{SR,s}/f ≅ 5.8·10^{-7}·γ^4/R_s eV/turn.
To keep the condition (21) satisfied, the range of RF phases 2|φ − φ_s| of the particles interacting with their URWs must be limited by the value 2|φ_{c,1} − φ_s| determined by the equality in (21). This can be done by using an OPA with a short amplification time interval Δt_{ampl,1} corresponding to the range of phases ω_RF·Δt_{ampl,1} < 2|φ_{c,1} − φ_s|, and by overlapping the center of this time interval with the synchronous particle, where ω_RF = 2πf_RF and f_RF is the frequency of the RF accelerating system of the ring. The last condition is equivalent to

l_ampl^laser < l_{ampl,1} = 2c·|φ_{c,1} − φ_s|/ω_RF,   (22)

where l_ampl^laser is the length of the amplified laser bunch. In this case 2M + 1 electron ellipses appear around the synchronous phase in the longitudinal plane. The amplitudes of synchrotron oscillations are determined by the energies which are moved to the synchronous energy when the optical system changes the delay time of the URWs. The condition (20) must be fulfilled if the RF accelerating system of the storage ring is switched on as well.
The electrons will be gathered effectively on elliptic trajectories having the maximum energies E_m (17) if the conditions (20), (21) are fulfilled and the deviations δE = E − E_m of the electron energies from E_m in the process of interaction are small. The last condition can be met by limiting the amplification time to the interval Δt_{ampl,2} and the corresponding length to l_{ampl,2} = c·Δt_{ampl,2},

l_ampl^laser < l_{ampl,2} = σ_{s,0}·δE_gap/σ_{E,0},   (23)

where σ_{s,0} is the initial length of the electron bunch. Above we assumed that electrons move along the elliptical trajectories ΔE = ±σ_E·√(1 − s²/σ_s²) and interact with the URWs at the top energies ~E_m in the region of the energy deviations δE = E_m − E = δE_gap/8.
Multiple processes of excitation of synchrotron oscillations by non-synchronous and noise photons will increase the widths of the electron ellipses and transfer electrons from one ellipse to another. They can be neglected if the equilibrium energy spread (12) of the beam is less than the energy gap (18), or

σ_E^{eq,EOC} < δE_gap.   (24)

The damping time (10) in variant 5 will be increased 2σ_{s,0}/l_ampl times:

τ_E^{EOC}|_{var.5} = τ_E^{EOC}·2σ_{s,0}/l_ampl,   τ_x^{EOC}|_{var.5} = τ_x^{EOC}·2σ_{s,0}/l_ampl,   (25)

where τ_E^{EOC} and τ_x^{EOC} are given by (10).
Variant 5 permits avoiding any changes in the existing lattice of the ring (isochronous bend, bypass). It is easier to apply to existing ion storage rings (see Appendix 2).
The screen permits us to select in the pickup undulator electrons with positive deviations of both betatron and synchrotron oscillations, and in such a way to produce effective cooling both in the transverse and in the longitudinal direction (we assumed η_x ≠ 0 in the pickup and kicker undulators in this case). Using a number of kicker undulators N_kick > 1 permits cooling the beam either in two directions, or in the transverse or longitudinal direction only, by selecting the corresponding distances between the kicker undulators [4], [6].
4. OPTICAL SYSTEM FOR THE EOC SCHEME
The UR of an electron acquires its well-known properties only after the electron has passed the undulator and the UR is considered in the far zone. A lens located near the pickup undulator can strongly influence the UR properties if its focus is inside the undulator [15].
For effective cooling of an electron beam in a storage ring, parameters of the beam under
cooling and the optics of EOC system must fulfill certain requirements.
1. The URW, emitted in a pickup undulator must be filtered and passed through the laser
amplifier.
2. In variant 1 each electron in the beam should enter the kicker undulator simultaneously with its amplified URW emitted in the pickup undulator and should move in the decelerating phase of this URW. For a test electron of the beam (for example, for the synchronous electron with zero amplitude of betatron oscillations) this requirement is satisfied by equating the propagation time of the URW with the traveling time of the electron between the undulators. Conditions (14) are necessary for the other electrons of the beam to get the decelerating phases of their URWs in this case.
3. Each electron in the beam should enter the kicker undulator with its URW, emitted in the pickup undulator, near the center of this wavelet in the transverse direction. This requirement will be satisfied if the transverse sizes of all URWs in the kicker undulator overlap.
The rms transverse size of one URW at the distance l from the pickup undulator is equal to σ_w(l) ≅ Δθ_c·(l + Mλ_u/2) (assuming that the radiation is emitted from the center of the undulator). At the distance l from the undulator the rms transverse size of the beam of emitted URWs is equal to σ_{w,b}(l) ≅ d/2 + Δθ_c·(l + Mλ_u/2), where d is the transverse size of the electron beam. If the optical screen opens the way to radiation from only a part of the electron beam, so that only small angles Δϑ < Δϑ_c to the undulator axis pass through, then at the distance l = l_c from the end of the undulator, where

l_c ≅ d/(2Δθ_c),   (26)

the URWs will be overlapped, and the transverse size of the beam of the selected wave packets will be equal to 2σ_{w,b} ≅ d + Δθ_c·(Mλ_u + 2l_c) (see Fig. 3). If the beam of URWs is passed through the optical lenses, the movable screen, the optical amplifier and the optical delay and is injected into the kicker undulator, then the electrons under cooling will hit their URWs in the transverse plane if the transverse dimensions of the electron beam in the undulator are less than 2σ_{w,b}.
4. Generally, the angular resolution of an electron bunch by an optical system is

δφ_res ≅ 1.22·λ_{1,min}/D,   (27)

where D is the diameter of the first lens. This formula is valid if the elements of the source emit radiation distributed uniformly over a large solid angle. In our case only a fraction of the lens is affected by the radiation, as D > σ_{w,b}. That is why the effective diameter D_eff = σ_{w,b} must be used in (27). At the distances l > l_c the size σ_{w,b}(l) ≅ Δθ_c·l, so the spatial resolution of the optical system is δx_res ≅ δφ_res·l, or

δx_res ≅ 1.22·λ_{1,min}/Δθ_c = 0.86·√(λ_{1,min}·L_u),   (28)

where Δθ_c = Δϑ_c/γ = (1/γ)·√((1+K²)/M) is the observation angle and L_u = Mλ_u is the undulator length. Note that at closer distances l < l_c the spatial resolution is better. A more complicated optical system can be used to increase the spatial resolution in this case.
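A minimal numerical check of the resolution estimate (28), using the pickup-undulator and wavelength parameters of Tables 3-4 as assumed inputs:

```python
import math

# Assumed undulator / wavelength parameters (Tables 3 and 4)
lam_1min = 2e-4    # cm, first-harmonic wavelength
lam_u    = 8.0     # cm, undulator period
M        = 30      # number of periods
K        = 1.0     # deflection parameter
gamma    = 200.0   # relativistic factor

L_u = M * lam_u                                        # undulator length, cm
dtheta_c = math.sqrt((1 + K**2) / M) / gamma           # observation angle, rad

dx_res_1 = 1.22 * lam_1min / dtheta_c                  # Eq. (27) with D_eff ~ sigma_w,b
dx_res_2 = 0.86 * math.sqrt(lam_1min * L_u)            # Eq. (28)

print(f"observation angle   : {dtheta_c*1e3:.2f} mrad")
print(f"resolution, Eq.(27) : {dx_res_1*10:.2f} mm")   # ~1.9 mm
print(f"resolution, Eq.(28) : {dx_res_2*10:.2f} mm")   # ~1.9 mm
```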
Figure 3: Scheme of URW’s propagation.
5. URWs must be focused on the crystal of OPA. For the URW beam, the dedicated
optical system with focusing lenses can be used to make the Rayleigh length equal to the
length of the crystal (typically ~1 cm) for small diameters of the focused URW beam in the
crystal (typically ~0.1 mm).
6. The electron bunch spacing in storage rings is much bigger than the bunch length. The time structure of the OPA must take advantage of this circumstance.
5. USEFUL EXPRESSIONS FROM THE THEORY OF CYCLIC ACCELERATORS
The equilibrium value of the relative energy spread of the electron beam in the isomagnetic lattice of the storage ring determined by synchrotron radiation (SR) is described by the expression

σ_E/E|_{eq,SR} = √(55·Λ_c/(32√3))·γ/√(ℑ_s·R_s) = 6.2·10^{-6}·γ/√(ℑ_s·R_s),   (29)

where Λ_c = ħ/mc = 3.86·10^{-11} cm, R_s is the equilibrium bending radius of the synchronous electron (in cm), ℑ_s = 2 + α_c·R̄_s/R_s in the smooth approximation is a coefficient determined by the magnetic lattice (0 < ℑ_s < 3), α_c is the momentum compaction factor, and R̄_s is the equilibrium averaged radius of the storage ring [16].
For small synchrotron oscillations the equilibrium length of the electron bunch is

σ_s^{eq,SR} = (c·α_c/Ω)·σ_E^{eq,SR}/E,   (30)

where Ω = ω_rev·√(h·eV_0·α_c·cos φ_s/(2π·E_s)) is the synchrotron frequency, ω_rev = 2πf, and h is the harmonic order of the accelerating RF voltage.
The equilibrium radial synchrotron (dispersion) and betatron beam dimensions are

σ_{x,η}^{eq,SR} = η_x·σ_E^{eq,SR}/E,   σ_{x,β}^{eq,SR} = 6.2·10^{-6}·γ·√(F·β_x/(ℑ_x·R_s)),   (31)

where in the smooth approximation ℑ_x = 1 − α_c·R̄_s/R_s is a coefficient determined by the magnetic lattice (ℑ_x + ℑ_s = 3), and F ~ 1 is a coefficient [16], [17].
The damping times in the storage ring determined by the synchrotron radiation in the bending magnets come to the following values (ℑ_z = 1):

τ_{x,z,s}^{SR} = 3·R_s·R̄_s/(r_e·c·γ³·ℑ_{x,z,s}) = 355·R_s·R̄_s/(γ³·ℑ_{x,z,s}) [sec],   (32)

with R_s and R̄_s in cm.
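As an illustration, the Python sketch below evaluates the equilibrium quantities (29), (30) and (32), as reconstructed above, for the Siberia-2-like parameters of Table 1 (all numbers are assumed inputs taken from Section 6):

```python
import math

# Assumed ring parameters at the experiment energy (Table 1)
gamma   = 195.7          # 100 MeV / 0.511 MeV
E       = 100e6          # eV
R_s     = 490.54         # cm, bending radius
R_bar   = 1976.0         # cm, average radius
alpha_c = 7.6e-3         # momentum compaction factor
J_s, J_x = 2.03, 0.97    # partition coefficients
V0, h   = 73.0, 75       # RF voltage (V) and harmonic number
phi_s   = 0.026          # synchronous phase, rad
f_rev   = 2.42e6         # Hz
c       = 3e10           # cm/s

# Eq. (29): equilibrium relative energy spread
sigE_E = 6.2e-6 * gamma / math.sqrt(J_s * R_s)

# Eq. (30): synchrotron frequency and equilibrium bunch length
Omega = 2*math.pi*f_rev * math.sqrt(h * V0 * alpha_c * math.cos(phi_s) / (2*math.pi*E))
sig_s = c * alpha_c / Omega * sigE_E

# Eq. (32): SR damping times (classical electron radius r_e = 2.82e-13 cm)
tau = lambda J: 3 * R_s * R_bar / (2.82e-13 * c * gamma**3 * J)

print(f"sigma_E/E = {sigE_E:.2e}")       # ~3.9e-5
print(f"sigma_s   = {sig_s:.2f} cm")     # ~2.3 cm
print(f"tau_s     = {tau(J_s):.1f} s")   # ~21 s
print(f"tau_x     = {tau(J_x):.1f} s")   # ~45 s
```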
The maximum deviation of the energy from its synchronous value (the RF bucket half-height) is

ΔE_sep = ±√( (e·V_0·E_s·β²/(π·h·η_c))·[2·cos φ_s − (π − 2φ_s)·sin φ_s] ),   (33)

where η_c = α_c − 1/γ² is the slippage factor of the ring.
For the ordinary storage ring lattice (without a local isochronous bend or bypass between the undulators) the natural local slippage factor is

η_{c,l} = η_c·L_{p,k}/C.   (34)
6. EXAMPLE
Below we consider an example of one dimensional EOC of an electron beam in the
transverse x-plane in a strong-focusing storage ring like Siberia-2 (Kurchatov Institute of Atomic Energy, Moscow) having a maximal energy of 2.5 GeV [18]. The magnetic system of the ring is designed with so-called separated functions. The lattice consists of six mirror-symmetrical superperiods, each containing an achromatic bend and two 3 m long straight sections. Functionally, half of the superperiod is arranged in two sections. The first one, comprising the quadrupoles F, D, F and two bending magnets, is responsible for the achromatic bend and the high β_x, β_y functions in the undulator straight section. The second part, comprising the quadrupoles D, F, D and a dispersion-free straight section, allows changing the
betatron tunes, without disturbing the achromatic bend. Main parameters of the ring, the
electron beam in the ring, pickup and kicker undulators and Optical Parametric Amplifier are
presented in Tables 1 - 4.
After a single-bunch injection into the storage ring, the energy of 100 MeV is established for the experiment and the beam is cooled by synchrotron radiation damping (see Section 5). In this case the energy spread and the beam size acquire equilibrium values in ~40 seconds (see Table 1). The equilibrium energy spread is equal to σ_E^{eq,SR}/E = 3.94·10^{-5}, the length of the bunch is σ_s^{eq,SR} = 2.32 cm at the amplitude of the accelerating voltage V_0 = 73 V and the synchronous voltage V_s = 1.89 V, the radial emittance is ε_x^{eq,SR} = 1.25·10^{-6}·[E/GeV]² cm = 1.25·10^{-8} cm·rad, and the radial betatron beam dimension at the pickup undulator is σ_x^{eq,SR} = 4.61·10^{-2} mm.
Following the synchrotron radiation damping, the amplitudes of radial betatron oscillations σ_{x,0} are artificially excited to be suitable for the resolution of the electron beam in the experiment with EOC (see Table 2). The amplitudes of synchrotron oscillations must stay damped in order to work with short electron bunches and a short duration of the OPA amplification.
In the variants of the example considered below, the optical-system resolution of the electron beam, according to (28), is δx_res = 1.9 mm at λ_{1,min} = 2·10^{-4} cm and Mλ_u = 240 cm. It follows that effective EOC in this case is possible if the beam under cooling has a total size in the pickup undulator σ_{x,tot} > 2.0 mm. We accepted the initial energy spread σ_{E,0} = σ_E^{eq,SR} = 3.94·10^{-5}·E, the dispersion beam size σ_{x,η,0} = 3.15·10^{-2} mm, the length of the electron bunch σ_{s,0} = 2σ_s^{eq,SR} = 4.64 cm, its transverse size at the pickup undulator σ_{x,0} = 4 mm, the laser amplification length l_ampl^laser = 1.5 mm (duration 5 ps), the radial betatron beam size in the kicker undulator σ_{x,0} = 1 mm, and the URW beam size σ_{w,b} = 2 mm. We took the number of electrons at the orbit N_{e,Σ} = 5·10^4. In this case the number of electrons in the URW sample is N_{e,s} = 129.5, and the number of non-synchronous photons in the sample is N_{ph,Σ} = 2.5 for the case of one noise photon at the OPA front end. In this storage ring the natural local slippage factor (34) is η_{c,l} = η_c·L_{p,k}/C, with α_c·L_{p,k}/C = 4.45·10^{-3}, and the energy gap (18) is δE_gap = 0.62 keV.
We consider EOC in the transverse plane. In this case the dispersion beam size σ_{x,η,0} << δx_res, and that is why there is no selection of electrons in the longitudinal plane. Therefore, in order to prevent heating in the longitudinal plane by the energy jumps determined by both synchronous and non-synchronous photons in the URWs, two kicker undulators are used which produce zero total energy jump [4], [6]. Note that the purpose of this experiment is to check the physics of optical cooling. At the same time cooling in the transverse plane is important for heavy ions in RHIC and LHC.
We accept the distance between the pickup and the first kicker undulator along the synchronous orbit L_{p,k} = 72.27 m (ψ_{x,p,k} = 9π, k = 4). It corresponds to the installation of the undulators in the first and seventh straight sections, which are located at a distance of 72.38 m (counted off from the pickup undulator). The second kicker undulator is located at the same distance from the first one. The optical line is tuned in such a way that electrons are decelerated in the first kicker undulator and accelerated in the second one.
The URWs have the number of photons emitted in the pickup undulator (see Table 3) per electron N_ph = 1.15·10^{-2} in the frequency and angular ranges (3) suitable for cooling. The limiting amplitude of betatron oscillations (14) is A_{x,lim} = 3.2 mm. The energy spread of the beam limited by the separatrix is ΔE_sep/E = 3.3·10^{-4}. The electric field strength at the first harmonic of the amplified URW in the kicker undulator is E_w ≅ 2.06·10^{-3} V/cm. The power loss for the electron passing through one kicker undulator together with its amplified URW comes to P_loss^max = 2.03·10^6 eV/sec if the amplification gain of the OPA is α_ampl = 10^7 (see Tables 2, 4). This power loss corresponds to the maximal energy jumps ΔE_loss^max = 73 eV and the average energy loss per turn ΔE_loss^turn = 0.84 eV/turn. The jump of the closed orbits is δx_{η1} = 5.8·10^{-5} cm.
Below we will consider two variants of EOC.
1. Variant 1 (Section 3). For the parameters presented above, the cooling time for the transverse coordinate, according to (10), comes to τ_{x,EOC} = 18.5 msec. The SR damping time of ~40 sec is much bigger (see Table 1). The average power transferred from the optical amplifier to the electron beam (13) is P_ampl = 0.061 mW. It is determined by the power of the URWs (0.036 mW) and the average noise power (41) equal to 0.025 mW. We adopted one photon/mode (one photon/sample) at the amplifier front end, corresponding to a pulsed noise power at the amplifier front end P_{n,0} = ħω·c/(M·λ_{1,min}) ≅ 4·10^{-7} W and P_n = 2 W at the gain G_0 = 10^7 (ħω ≅ 0.5 eV was used). We took into account that the amplification time interval Δt_ampl of the amplifier is less than the revolution period by a factor of C/(c·Δt_ampl) = C/l_ampl^laser = 8.27·10^4 times.
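A short arithmetic cross-check of the duty factor and of the average power budget quoted above (all inputs are the assumed example numbers of this section and Table 4):

```python
# Assumed example numbers from this section / Table 4
C        = 12413.0    # cm, ring circumference
l_ampl   = 0.15       # cm, amplified laser bunch length (1.5 mm)
P_noise  = 2.0        # W, pulsed noise power after the gain G0 = 1e7
P_urw    = 0.036e-3   # W, average power of the amplified URWs

duty_inv = C / l_ampl                 # revolution period / amplification time
P_noise_avg = P_noise / duty_inv      # average noise power, W

print(f"T_rev / dt_ampl = {duty_inv:.3g}")                        # ~8.3e4
print(f"<P_noise>       = {P_noise_avg*1e3:.3f} mW")              # ~0.024 mW
print(f"<P_ampl>        = {(P_urw + P_noise_avg)*1e3:.3f} mW")    # ~0.06 mW
```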
Necessary conditions for the selection of electrons must be created: a high beta function in the pickup undulator (to increase the transverse dimensions of the bunch for the selection of electrons with positive deviations from the closed orbit) and the isochronous bend between the undulators. We believe that the lattice of the ring is flexible enough to be changed within the necessary limits by analogy with those presented in [19].
The number of electrons in the bunch is enough to detect them in the experiment and to neglect intrabeam scattering.
Note that if one kicker undulator is used in the scheme of two-dimensional EOC and the beam resolution is high, δx_res ~ 10^{-2} mm, then the equilibrium relative energy spread, the spread of the closed orbits, and the longitudinal, dispersion and radial betatron beam dimensions determined by EOC, according to (11), are equal to σ_E^{eq,EOC}/E = 5.56·10^{-5}, σ_{s,0}^{eq,EOC} = 2.63 cm, σ_{x,η}^{eq,EOC} = σ_x^{eq,EOC} = 4.46·10^{-2} mm. It follows that if the number of electrons in the bunch is N_{e,Σ} < 5·10^4, then their influence on the equilibrium dimensions of the bunch can be neglected, the longitudinal dimensions and the energy spread stay small, and the radial betatron beam dimensions determined by EOC are decreased to a high degree. In reality the equilibrium synchrotron and betatron bunch dimensions will be much higher. This is the consequence of the finite beam resolution in the pickup undulator. That is why we use two kicker undulators, to keep the longitudinal bunch dimensions small and to exclude the excitation of longitudinal oscillations by multiple energy jumps. The situation could be better if we had an effective OPA at a wavelength λ_{1,min} ~ 3·10^{-5} cm, i.e. about one order of magnitude less. A shorter undulator can be used as well.
2. Variant 5 (Section 3). Variant 5 requires easier tuning of the lattice for the arrangement of the small local slippage factor between the undulators. In the case of one-dimensional EOC using two kicker undulators, the multiple processes of excitation are not essential, because the excitation of synchrotron oscillations in this case is absent or unessential, and that is why there is no need for a small local slippage factor. In this case the initial phase φ_in(E, A_x) of the electron in the field of the amplified URW propagating through the kicker undulator is, according to (15), a function of both the energy (which is a constant in this variant of EOC) and the amplitude of betatron oscillations. The amplitudes of betatron oscillations will increase or decrease depending on their initial phases until they reach the equilibrium amplitudes determined, in the smooth approximation, by the expression A_{x,m} = √(m·λ_{1,min}·λ_{x,bet}/2π) (the generalised expression (14)) corresponding to the phases φ_m = 2πm. A variable-in-time optical delay line can be used to change in situ the length of the light path between the undulators during the cooling cycle, in order to move the initial phases to φ_in = 2πm + π/2 for the production of the optimal rate of decrease of the amplitudes of betatron oscillations of electrons in the fields of the amplified URW and the kicker undulator. The damping time for the radial betatron oscillations, according to (25), is τ_{x,EOC} = 0.57 sec.
Note that in the case of two-dimensional EOC using one kicker undulator, according to (18), the energy gaps between the equilibrium energy positions have the magnitudes δE_gap = 0.62 keV. They are higher than the energy jumps of electrons in the kicker undulator ΔE_loss^max = 73 eV and the energy jumps ΔE_loss^max·√N_{ph,Σ} = 115 eV determined by the non-synchronous photons in the URWs (see condition (20)). The conditions (22), (23), or l_ampl^laser < l_ampl, limit the length of the laser URWs to the values l_{ampl,1} = 1.69 cm, l_{ampl,2} = 0.32 cm. The accepted laser amplification length l_ampl^laser = 1.5 mm is enough to satisfy these conditions. The damping time for the radial betatron oscillations, according to (25), is τ_{x,EOC} = 1.14 sec. This damping time is less than the one for synchrotron radiation damping (see Table 1). The equilibrium energy spread determined by EOC is about 6-10 times higher than the energy gap. It follows that the local slippage factor between the undulators, according to (24), must be decreased by a factor higher than 10. Unfortunately, the resolution of the electron beam will not permit reaching the equilibrium energy spread, and cooling in the transverse plane in this case will be less than the heating in the longitudinal one.
7. CONCLUSIONS
We have shown in this paper that a test of EOC is possible in the 2.5 GeV electron strong-focusing storage ring tuned down to the energy ~100 MeV. The electron beam can be cooled in the transverse direction. The damping time is much less than the one determined by synchrotron radiation. So the EOC can be identified by the change of the damping rate of the electron beam. A variant of cooling is found which permits avoiding any changes in the existing lattice of the ring (for the production of an isochronous bend or bypass). It can work for existing ion storage rings as well (see Appendix 2). The three short undulators installed in the storage ring in this variant have rather long periods and weak fields. They can be manufactured at low cost.
The cooling of a relatively small number of electrons (one bunch, N_{e,Σ} = 5·10^4) is considered in this proposal in an attempt to avoid a strong influence of the non-synchronous photons on the equilibrium energy spread of the beam. The intrabeam scattering effects could be overcome as well. The optical amplifier suitable for the EOC - the so-called Optical Parametric Amplifier - suggested as a baseline of the experiment must have moderate gain and power. We have chosen the wavelength of the OPA equal to 2 µm, as the OPA technique is more developed for these wavelengths. At the present time, OPAs having an amplification gain ~10^8 and a power >1 W fully satisfy the requirements for this experiment (see Appendix 3). The usage of OPAs with a shorter wavelength will permit increasing the spatial resolution of the optical system and the degree of cooling of the beam.
We have predicted that the maximum rate of energy loss for electrons in the fields of the kicker undulator and the amplified URW calculated in the framework of classical electrodynamics is 1/√N_ph ≈ 9.3 times less than the one calculated taking into account the quantum nature of the photon emission in undulators. Quantum aspects of the beam physics will be checked in the proposed test experiment. It is suggested that the scheme based on an optical line with a variable delay time will be tested as well.
Authors thank A.V.Vinogradov and Yu.Ja.Maslova for useful discussions.
Supported by RFBR under grant No 05-02-17162a, 05-02-17448a, by the Program
of Fundamental Research of RAS, subprogram “Laser systems” and by NSF.
Table 1. Parameters of the ring:
The maximal energy of the storage ring E_max = 2.5 GeV
The energy for the experiment E_exp = 100 MeV
Relativistic factor for the experiment γ ≅ 200
Circumference C = 124.13 m
Bending radius R_s = 490.54 cm
Average radius R̄_s = 1976 cm
Frequency of revolution f = 2.42·10^6 Hz
Harmonic number h = 75
RF frequency f_RF = 181.14 MHz
Energy loss determined by SR P_{SR,γ}/f = 1.89 eV/turn
The amplitude of the accelerating voltage at E_exp V_0 = 73 V
The synchronous phase φ_s ≅ 0.026
Radial tune ν_x = 7.731
Vertical tune ν_z = 7.745
Momentum compaction factor α_c = 7.6·10^{-3}
Dispersion function at the pickup and kicker undulators η_x = 80 cm
Radial beta function in the pickup undulator β_x = 17 m
Radial beta function in the kicker undulator β_x = 1.7 m
Vertical beta function β_z = 6 m
Partition coefficients ℑ_z, ℑ_x, ℑ_s = 1, 0.97, 2.03
Damping times by SR at E_exp τ_z, τ_x, τ_s = 43.1; 44.4; 21.23 sec
The length of the period of betatron oscillations λ_{x,bet} = 16.06 m
Slippage factor of the ring η_c ≅ α_c
Local slippage factor of the ring η_{c,l} = 0.58·α_c
Frequency of synchrotron oscillations at E_exp = 100 MeV Ω = 1.6·10^{-3}·f.
Table 2. Initial parameters of the electron beam in the ring:
Number of electrons at the orbit N_{e,Σ} = 5·10^4
Number of electron bunches being cooled 1
Relative energy spread σ_{E,0}/E = 3.94·10^{-5}
Betatron beam size at the pickup undulator (β_x = 32 m) σ_{x,0} = 4 mm
Betatron beam size at the kicker undulator (β_x = 2 m) σ_{x,0} = 1 mm
Dispersion beam size σ_{x,η,0} = 3.15·10^{-2} mm
Total beam size at the pickup undulator σ_{x,tot} = √(σ_{x,0}² + σ_{x,η,0}²) = 4 mm
The length of the electron bunch σ_{s,0} = 2.32 cm
Table 3. Parameters of the pickup and kicker undulators:
Magnetic field strength B = 1338 Gs
Undulator period λ_u = 8 cm
Number of periods M = 30
Deflection parameter K = 1.
Table 4. Optical Parametric Amplifier:
Number of Optical Parametric Amplifiers 2
Total gain α_ampl = 10^7
The wavelength of the amplified URWs λ_{1,min} = 2·10^{-4} cm
The characteristic URW waist size σ_{w,c} = 0.77 mm
The URW beam waist size σ_w = 2 mm
The duration of the amplification time of the OPA 5 psec (l_ampl = 1.5 mm)
The frequency of the amplified cycles f_ampl = f = 2.42·10^6 Hz
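The undulator and wavelength entries of Tables 3-4 are mutually consistent; a two-line check (the 1338 Gs field value is the reconstructed one given above, and γ ≅ 200 is assumed):

```python
# Consistency check of Tables 3-4 (B = 1338 Gs as reconstructed)
B_kG, lam_u, gamma = 1.338, 8.0, 200.0           # kGs, cm, relativistic factor
K = 0.0934 * B_kG * lam_u                        # deflection parameter
lam_1min = lam_u * (1 + K**2) / (2 * gamma**2)   # cm, on-axis first-harmonic wavelength
print(f"K = {K:.2f}, lambda_1min = {lam_1min:.1e} cm")   # K ~ 1, ~2e-4 cm
```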
Appendix 1
The spectral-angular distribution of the UR energy emitted by a relativistic electron in the pickup undulator at the harmonic n is

∂²E/(∂ω·∂o) = (M/ω_n)·(∂E_n/∂o)·sinc²σ_n,   (35)

where ∂E_n/∂o is the angular distribution of the energy of the UR emitted into the unit solid angle do at the angle ϑ to its axis, sinc σ_n = sin σ_n/σ_n, σ_n = πnM(ω − ω_n)/ω_n, and ω_n = nω_1 is the frequency of the n-th harmonic of the UR [20]-[22]. For the helical undulator

∂E_n/∂o = (8·e²·M·n²·Ω·K²·γ⁴)/(c·(1 + K² + γ²ϑ²)³)·F_n(K, ϑ) = (6·E_tot·n²·γ²)/(π·(1 + K² + γ²ϑ²)³)·F_n(K, ϑ),

F_n(K, ϑ) = J'_n²(nχ) + ((1 + K² − γ²ϑ²)/(2Kγϑ))²·J_n²(nχ),   E_tot = 4π·e²·M·K²·γ²·Ω/(3c),   Ω = 2πc/λ_u,

where J_n, J'_n are the Bessel function and its derivative, χ = 2Kγϑ/(1 + K² + γ²ϑ²) < 1, and β_⊥ = K/γ. The number of
photons emitted in the undulator at the harmonic n into the solid angle do = 2πϑdϑ and the frequency band dω is determined by the relation

N_{ph,n} = ∫∫ (∂²E/(∂ω·∂o))·dω·do/(ħω) = 4α·M²·n·K²·γ²·∫∫ sinc²σ_n·F_n(K, ϑ)/(1 + K² + γ²ϑ²)²·(dω/ω_n)·do,   (36)

where α = e²/ħc. If the considered frequency band Δω/ω << 1/M, then we can neglect the angular dependence of the first multiplier sinc²σ_n and the frequency dependence of the value F_n in (36). In this case the value sinc²σ_n determines the range of angles of the UR (3). Increasing the frequency band will lead to an increase of the angular range. Taking the frequency band Δω out of the integral and taking into account that dϑ² = (2Ω/(πMω))·dσ_n, we can transform (36) into (6) for sinc²σ_n = πδ(σ_n), Δω/ω = 1/2M, ϑ = 0 and n = 1.
Appendix 2
Below we investigate the possibility of lead ion cooling (Z = 82) in the storage ring LHC based on using version 5 (Section 3) of the EOC. We take the example 2 considered in [7]. In this case the energy is E = 1.85·10^{14} eV (γ = 953), the slippage factor of the ring η_c ≅ α_c = 3.23·10^{-4}, the amplitude of the RF accelerating voltage V_0 = 16 MV, the RMS bunch length 8 cm, the RF frequency f_RF = 400 MHz, the synchrotron frequency Ω = 23 Hz, the circumference C = 2665888.3 cm, the harmonic order h = 35640, the RMS relative energy spread σ_{E,0}/E = 1.1·10^{-4}, σ_E^{in} = 2.04·10^{10} eV, λ_{x,bet} = 415 m, the RF bucket half-height ΔE_sep/E = 4.43·10^{-4}, ΔE_sep = 8.19·10^{10} eV. We take the distance between the pickup and kicker undulators L_{p,k} = 1453 m (k = 3), and the synchrotron radiation energy loss per ion per turn P_{SR,s}/f = 1.2·10^4 eV.
For the parameters of the undulators [7] the energy loss per turn is ΔE_loss^max = 3·10^5 eV, σ_{E,0}/M = 1.7·10^9 eV (M = 12), the gap between the equilibrium energy positions is δE_gap = 1.97·10^8 eV, and λ_{1,min} = 5·10^{-5} cm. It follows that ΔE_loss^max << δE_gap, that is, the condition (20) is satisfied. In the case of ions the equation (21) must include eZ·V_0 instead of eV_0. In this example δE_loss^max/(eZ·V_0) = 2.2·10^{-4}. It follows that the laser amplification length is l_{ampl,2} = 2.78 mm. By this choice the condition (21) is satisfied as well. To cool the ion beam in the transverse plane and to keep the magnetic lattice unchanged, one pickup and two kicker undulators must be used.
Appendix 3
The principle of OPG is quite simple: in a suitable nonlinear crystal, a high-frequency, high-intensity beam (the pump beam, at frequency ω_p) amplifies a lower-frequency, lower-intensity beam (the signal beam, at frequency ω_s); in addition a third beam (the idler beam, at frequency ω_i, with ω_i < ω_s < ω_p) is generated. (In the OPG process, signal and idler beams play an interchangeable role; we assume that the signal is at the higher frequency, i.e., ω_s > ω_i.)
In the interaction, the energy conservation

ω_p = ω_s + ω_i

is satisfied; for the interaction to be efficient, the momentum conservation (or phase matching) condition

k_p = k_s + k_i,

where k_p, k_s, and k_i are the wave vectors of the pump, signal, and idler, respectively, must also be fulfilled. The signal frequency to be amplified can in principle vary from ω_p/2 (the so-called degeneracy condition) to ω_p, and correspondingly the idler varies from ω_p/2 to 0; at degeneracy, signal and idler have the same frequency. In summary, the OPG process transfers energy from a high-power, fixed-frequency pump beam to a low-power, variable-frequency signal beam, thereby also generating a third idler beam.
The signal and idler group velocities ν_gs and ν_gi (GVM - group velocity mismatch) determine the phase matching bandwidth for the parametric amplification process. Let us assume that perfect phase matching is achieved for a given signal frequency ω_s (and for the corresponding idler frequency ω_i = ω_p − ω_s). If the signal frequency increases to ω_s + Δω, by energy conservation the idler frequency decreases to ω_i − Δω. The wave vector mismatch can then be approximated to first order as

Δk ≅ −(∂k_s/∂ω_s)·Δω + (∂k_i/∂ω_i)·Δω = (1/ν_gi − 1/ν_gs)·Δω.
The gain bandwidth of an OPA can be estimated using the analytical solution of the coupled wave equations in the slowly varying envelope approximation, assuming flat-top spatial and temporal profiles and no pump depletion. The intensity gain (G) and phase (φ) of the amplified signal beam are given in [23] by

G = 1 + (γL/B)²·sinh²B,   tan φ = (B·sin A·cosh B − A·cos A·sinh B)/(B·cos A·cosh B + A·sin A·sinh B),   (37)

where A = ΔkL/2, B = √((γL)² − (ΔkL/2)²), the gain coefficient γ = 4π·d_eff·√(I_p/(2ε_0·n_p·n_s·n_i·c·λ_s·λ_i)), the phase mismatch Δk = k_p − k_s − k_i, L is the length of the amplifier, d_eff is the effective nonlinear coefficient, and I_p is the pump intensity.
The full width at half maximum (FWHM) phase matching bandwidth can then be calculated within the large-gain approximation as

Δν ≅ (2·√(ln 2)/π)·√(γ/L)·|1/ν_gs − 1/ν_gi|^{-1}.   (38)

A large GVM between the signal and idler waves dramatically decreases the phase matching bandwidth; a large gain bandwidth can be expected when the OPA approaches degeneracy (ω_s → ω_i) in type I phase matching, or in the case of group velocity matching between signal and idler (ν_gs = ν_gi). Obviously, in this case the first-order expansion above loses validity and the phase mismatch Δk must be expanded to second order, giving

Δν ≅ (2·(ln 2)^{1/4}/π)·(γ/L)^{1/4}·|∂²k_s/∂ω_s² + ∂²k_i/∂ω_i²|^{-1/2}.
For the case of perfect phase matching (Δk = 0, B = γL) and in the large-gain approximation (γL >> 1), Eqs. (37) simplify to

I_s(L) ≅ (1/4)·I_{s0}·exp(2γL),   I_i(L) ≅ (ω_i/ω_s)·(1/4)·I_{s0}·exp(2γL).   (39)

Note that the ratio of signal and idler intensities is such that an equal number of signal and idler photons are generated. Equations (39) allow defining a parametric gain

G(L) ≅ (1/4)·exp(2γL),

growing exponentially with the crystal length L and with the nonlinear coefficient γ:

G(L) ≅ (1/4)·exp(8πL·d_eff·√(I_p/(2ε_0·n_p·n_s·n_i·c·λ_s·λ_i))) ≅ (1/4)·exp((8πL·d_eff/λ_s)·√(I_p·Z_0/(2·n_s²·n_p)))   (40)

for n_i ≈ n_s and λ_i ≈ λ_s, where Z_0 = 1/(ε_0·c) = 377 Ohm.
The noise (amplified spontaneous emission) power of the optical amplifier is determined by the expression

P_n = P_{n,0}·G_0,   (41)

where P_{n,0} is the noise power at the amplifier front end and G_0 is the gain of the amplifier. If the noise power corresponds to one photon/mode at the amplifier front end, then P_{n,0} = ħω_{1,max}/τ_coh [27], [28], where in our case τ_coh = 2MT is the coherence time and T = λ_{1,min}/c.
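A minimal estimate of the one-photon/mode noise power (41) for the wavelength and undulator of Tables 3-4 (assumed inputs; the resulting numbers are order-of-magnitude only):

```python
import math

# Assumed parameters (Tables 3 and 4)
lam_1min = 2e-6        # m, amplified wavelength (2 um)
M        = 30          # number of undulator periods
G0       = 1e7         # OPA gain

h_planck = 6.626e-34   # J s
c        = 3e8         # m/s

omega   = 2 * math.pi * c / lam_1min             # rad/s
tau_coh = 2 * M * lam_1min / c                   # s, coherence time ~ URW duration
P_n0    = (h_planck / (2 * math.pi)) * omega / tau_coh   # W, one photon per coherence time

print(f"P_n0 (front end) = {P_n0:.2e} W")    # a few 1e-7 W
print(f"P_n  = P_n0 * G0 = {P_n0*G0:.1f} W") # a few W (the text quotes ~2 W)
```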
Example: MgO Periodically Poled Lithium Niobate
In recent years the development of periodically poled nonlinear materials has enhanced the flexibility and performance of OPAs. In the case of the well-studied Periodically Poled Lithium Niobate (PPLN), one can get access to the material's highest effective nonlinearity as well as retain generous flexibility in phase-matching parameters and nonlinear interaction lengths.
Operation near the degeneracy wavelength of 2.128 µm reduces the thermal-lens effect because the signal and the idler wavelengths fall within the highest transparency range of Lithium Niobate. For wideband optical parametric amplification, we will use MgO:PPLN with a poling period of 31.1 (31.2) µm, which has a high damage threshold and a high nonlinear coefficient of 16 pm/V [25] (17 pm/V [26]). To avoid photorefractive damage, the thick (~1-2 mm) PPLN crystal was suggested to be kept at an elevated temperature of 150 °C [24].
For the signal wavelength λ_s = 2 µm, n_s ≈ n_p = 2.1, and the gain, according to formula (40), comes to

G ≅ (1/4)·exp(3·√(I_p/[GW/cm²])·(l/[mm])).

For G = 10^7 and I_p = 1 GW/cm² this gives l = 5.8 mm. We propose a two-stage crystal amplifier (l_1 = l_2 = 3.5 mm). The OPA amplifies linearly polarized radiation. That is why the circular polarization of the URWs (if a helical undulator is used) must in our case be transformed into linear polarization before injection into the kicker undulator. A usual quarter-wave plate can be used for this purpose in the simplest case. Planar undulators can be used as well. Reflective optics can be used for dispersion-free propagation of the undulator light.
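Under the PPLN parameters quoted above (d_eff ≈ 16 pm/V, n ≈ 2.1, λ_s = 2 µm), the sketch below evaluates the gain coefficient of Eq. (40) and the crystal length needed for G = 10^7; treat it as an order-of-magnitude check rather than a design calculation.

```python
import math

# Assumed OPA parameters for MgO:PPLN near degeneracy
d_eff = 16e-12        # m/V, effective nonlinear coefficient
n     = 2.1           # refractive index (n_p ~ n_s ~ n_i)
lam_s = 2e-6          # m, signal wavelength (idler ~ signal near degeneracy)
I_p   = 1e13          # W/m^2, pump intensity (1 GW/cm^2)
eps0  = 8.854e-12     # F/m
c     = 3e8           # m/s

# Gain coefficient of Eq. (40): gamma = 4*pi*d_eff*sqrt(I_p / (2*eps0*n^3*c*lam_s^2))
gamma = 4 * math.pi * d_eff * math.sqrt(I_p / (2 * eps0 * n**3 * c * lam_s**2))
print(f"2*gamma = {2*gamma/1e3:.2f} /mm")      # ~3 /mm

# Length needed for G = (1/4) * exp(2*gamma*L) = 1e7
G = 1e7
L = math.log(4 * G) / (2 * gamma)
print(f"L for G = 1e7: {L*1e3:.1f} mm")        # ~6 mm (the text quotes 5.8 mm using the rounded 3/mm)
```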
REFERENCES
[1] A.M.Sessler, “Methods of beam cooling”, LBL-38278 UC-427, February 1996; 31st
Workshop: “Crystalline Beams and Related Issues”, November 11-21, 1995.
[2] D.Mohl, A.M.Sessler, “Beam cooling: Principles and Achievements”, NIMA, 532(2004),
p.1-10.
[3] D. Mohl, “Stochastic cooling”, CERN Accelerator School Report, CERN No 87-03, 1987,
pp.453- 533.
[4]. E.G.Bessonov, “On Violation of the Robinson’s Damping Criterion and Enhanced
Cooling of Ion, Electron and Muon Beams in Storage Rings”,
http://arxive.org/abs/physics/0404142 .
[5] E.G.Bessonov, A.A.Mikhailichenko, “Enhanced Optical Cooling of Particle Beams in
Storage Rings”, Published in Proc. 2005 Particle Accelerator Conference, May 16-20,
2005, Knoxville, Tennessee, USA, (http://www.sns.gov/PAC05),
http://accelconf.web.cern.ch/accelconf/p05/PAPERS/TPAT086.PDF.
[6] E.G.Bessonov, A.A.Mikhailichenko, A.V.Poseryaev, “Physics of the Enhanced optical
cooling of particle beams in storage rings”, http://arxiv.org/abs/physics/0509196.
[7] E.G.Bessonov, M.V. Gorbunkov, A.A.Mikhailichenko, “Enhanced optical cooling of ion
beams for LHC”, Proc. 2006 European Particle accelerator Conference, June 26-30 2006.
Edinburgh, Scotland,
http://accelconf.web.cern.ch/accelconf/e06/PAPERS/TUPLS001.PDF;
Electronic Journal of Instrumentation – 2006 JINST 1 P08005, http://jinst.sissa.it/,
http://ej.iop.org/links/r5pyfrsWE/sl44atI92xGGY2iAav5vpA/jinst6_08_p08005.pdf.
[8] A.A.Mikhailichenko, M.S.Zolotorev,” Optical Stochastic Cooling”, SLAC-PUB-6272,
Jul1993, 11pp. Published in Phys.Rev.Lett.71:4146-4149,1993.
[9] M.S.Zolotorev, A.A.Zholents, “Transit-Time Method of Optical Stochastic Cooling”,
Phys. Rev. E v.50, No 4, 1994, p. 3087.
[10] M.Babzich, I.Ben-Zvi, I.Pavlishin et al., “Optical Stochastic Cooling for RHIC Using
Optical Parametric Amplification”, PRSTAB, v.7, 012801 (2004).
[11] E.G.Bessonov, “Some Aspects of the Theory and Technology of the Conversion
Systems of Linear Colliders”, Proc. 15th Int. Accelerator Conf. on High Energy
Accelerators, (Int. J. Mod. Phys Proc. Suppl.2A), V.1, pp. 138-141, 1992, Hamburg,
Germany.
[12] K. Steffen, “High Energy Beam optics”, Interscience Pub., 1964.
[13] A.A. Mikhailichenko, “Optical Stochastic Cooling and Requirements for a Laser
Amplifier”, CLNS-98-1539, Dec 1997, Cornell U., 14pp.; Int. Conf. on Lasers '97, New
Orleans, LA, 15-19 Dec. 1997, Proceedings, ed. J.J.Carrol, T.A.Goldman, pp. 890-897.
[14] D.F.Alferov, Yu.A.Bashmakov, K.A.Belovontsev, E.G.Bessonov, P.A.Cherenkov,
“Observation of undulating radiation with the "Pakhra" synchrotron”, Phys. - JETP
Lett., 1977, v.26, N7, p.385.
[15] V.I.Alexeev, E.G.Bessonov, “Experiments on Generation of Long Wavelength Edge
Radiation along the Directions nearly Coincident with the Axis of a Straight Section of
the “Pakhra” synchrotron”, Nucl. Instr. Meth. B 173 (2001), p. 51-60.
[16] H.Bruck, “Circular Particle Accelerators”, Press Universities of France, 1966.
[17] A.A.Kolomensky and A.N.Lebedev, “Theory of Cyclic Accelerators”, North Holland
Publ., 1966.
[18] V.V.Anashin, A.G.Valentinov, V.G. Veshcherevich et al., The dedicated synchrotron
radiation source SIBERIA-2, NIM A282 (1989), p.369-374.
[19] H.Hama, S.Takano, G.Isoyama, Control of bunch length on the UVSOR Storage ring,
Workshop on Fourth Generation Light Sources, Chairmen M. Cornacchia and H. Winick,
1992, SSRL 92/02, USA, p.208-216.
[20] D.F.Alferov, Yu.A.Bashmakov, E.G.Bessonov. “To the theory of the undulator
radiation”. Sov. Phys.-Tech. Phys., 18, (1974),1336.
[21] D.F.Alferov, Yu.A.Bashmakov, K.A.Belovintsev, E.G.Bessonov, P.A.Cherenkov. "The
Undulator as a Source of Electromagnetic Radiation”, Particle accelerators, (1979), v.9,
No 4, p. 223-235.
[22] E.G.Bessonov, Undulators, Undulator Radiation, Free-Electron Lasers, Proc. Lebedev
Phys. Inst., Ser.214, 1993, p.3-119 (Undulator Radiation and Free-Electron Lasers),
Chief ed. N.G.Basov, Editor-in-chief P.A.Cherenkov (in Russian).
[23] J.A. Armstrong, N. Bloembergen, J. Ducuing, P.S. Pershan, Phys. Rev. 127 (1962) 1918.
[24] C.W. Hoyt, M. Sheik-Bahae, M. Ebrahimzadeh, High-power picosecond optical
parametric oscillator based on periodically poled lithium niobate, Opt. Lett. Vol. 27, No.
17, 2002.
[25] K.W. Chang, A.C. Chiang, T.C. Lin, B.C. Wong, Y.H. Chen, Y.C. Huang Simultaneous
wavelength conversion and amplitude modulation in a monolithic periodically-poled
lithium niobate. Opt. Comms, 203 (2002) 163-168.
[26] L. E. Myers, G. D. Miller, E. C. Eckardt, M. M. Fejer, and R. L. Byer, Opt. Let. 20, 52
(1995).
[27] A.Maitland and M.H.Dunn, Laser physics, North-Holland Publishing Company,
Amsterdam - London, 1969.
[28] I.N.Ross, P.Matousek, M.Towrie, et al., Optics communications 144 (1997), p. 125- 133.
This paper is published in: http://arxive.org/abs/0704.0870
|
0704.0873 | Predicting the frequencies of diverse exo-planetary systems | Mon. Not. R. Astron. Soc. 000, 1–6 (2006) Printed 25 October 2018 (MN LATEX style file v2.2)
Predicting the frequencies of diverse exo-planetary systems
J.S. Greaves1⋆, D.A. Fischer2, M.C. Wyatt3, C.A. Beichman4,5 and G. Bryden4
1Scottish Universities Physics Alliance, Physics and Astronomy, University of St. Andrews, North Haugh, St Andrews, Fife KY16, UK
2Department of Astronomy, University of California at Berkeley, 601 Campbell Hall, Berkeley, CA 94720, USA
3Institute of Astronomy, Madingley Rd, Cambridge CB3 0HA, UK
4Michelson Science Center, California Institute of Technology, M/S 100-22, Pasadena, CA 91125, USA
5Jet Propulson Laboratory, 4800 Oak Grove Drive, Pasadena, CA 91109, USA.
Accepted 2007. Received 2007; in original form 2006
ABSTRACT
Extrasolar planetary systems range from hot Jupiters out to icy comet belts more
distant than Pluto. We explain this diversity in a model where the mass of solids in
the primordial circumstellar disk dictates the outcome. The star retains measures of
the initial heavy-element (metal) abundance that can be used to map solid masses
onto outcomes, and the frequencies of all classes are correctly predicted. The differing
dependences on metallicity for forming massive planets and low-mass cometary bodies
are also explained. By extrapolation, around two-thirds of stars have enough solids
to form Earth-like planets, and a high rate is supported by the first detections of
low-mass exo-planets.
Key words: circumstellar matter – planetary systems: protoplanetary discs – plan-
etary systems: formation
1 INTRODUCTION
Extrasolar planetary systems have largely been identified by
a change in the line-of-sight velocity in spectra of the host
star, the ‘Doppler wobble’ method (Cumming et al. 1999,
e.g.). This technique detects inner-system gas-giant plan-
ets out to a few astronomical units. Contrasting larger-scale
systems are those with ‘debris’ disks, rings of dust parti-
cles produced in comet collisions, whose presence indicates
that parent bodies exist at least up to a few kilometres in
size (Wyatt & Dent 2002). Most images of debris disks show
central cavities similar to that cleared by Jupiter and Sat-
urn in our own Solar System (Liou & Zook 1999), and also
sub-structure attributed to dust and planetesimals piled up
in positions in mean motion resonance with a distant giant
planet (Wyatt 2003). Beichman et al. (2005) have discov-
ered a few systems with both debris disks and inner giants,
linking these divergent outcomes.
These various planetary systems could reflect differ-
ent initial states, in particular the quantity of planet-
forming materials in the circumstellar disk around the
young host star. In core-growth models (Pollack et al. 1996;
Hubickyj et al. 2005, e.g.), a large supply of refractory el-
ements (carbon, iron, etc.) should promote rapid growth
of planetesimals that can then amalgamate into planetary
cores. If gas still persists in the disk, the core can attract
a thick atmosphere and form into a gas giant planet; the
⋆ E-mail: [email protected] (JSG)
disk is also viscous so that the planet tends to migrate in-
wards (Nelson et al. 2000). For sparse refractories, however,
only small comets up to Neptune-like ‘ice giants’ may have
formed when the gas disperses; Greaves et al. (2004a) sug-
gested that such systems will be characterised by planetesi-
mal collisions and hence debris emission.
Here we classify these outcomes from most to least suc-
cessful, and postulate that the dominant agent is the initial
mass in refractories. We aim to test whether one underlying
property has a highly significant effect on the outcome, and
so intentionally ignore other properties that could affect the
planetary system formed around an individual star. Such
properties include the disk size, geometry, composition and
lifetime, as well as the stellar accretion rate, emission spec-
trum and environment. Stochastic effects are also neglected,
such as inwards migration caused by inter-planet encoun-
ters (Marzari & Weidenschilling 2002) and debris brighten-
ing after collisions of major planetesimals (Dominik & Decin
2003). Such factors are beyond the scope of our simple
model, but are important for detailed understanding of the
formation of particular kinds of planetary system.
2 HYPOTHESIS
The hypothesis explored here is that the mass of solid ele-
ments in a primordial circumstellar disk can be quantified,
and linked to an outcome such as a detectable debris disk or
radial velocity planet. The only piece of relevant ‘relic’ in-
Figure 1. Cumulative functions of [Fe/H] for the host stars of
hot Jupiters (at 6 0.1 AU), cool Jupiters (> 0.1 AU), debris-and-
planet systems, and debris disks only. One cool Doppler compan-
ion with [Fe/H] of –0.65 is not shown, as it is suspected to be
above planetary mass (Fischer & Valenti 2005).
formation for a particular star is the metallicity, quantified
by [Fe/H], the logarithmic abundance of iron with respect to
hydrogen and normalised to the Solar value, and this quan-
tity is taken here to track refractories in general. In com-
bination with a distribution of total (gas-plus-dust) masses
of primordial disks, masses of solids can then be inferred
statistically. Our basic hypothesis is that the metallicity is
a relic quantity originally common to the young star and
its disk, and that higher values of the trace refractory iron
should correlate with more effective planet growth.
It is well-known that gas giant detection rates rise
at higher [Fe/H] (Fischer & Valenti 2005, e.g), while
Greaves et al. (2006) have noted that debris detection is es-
sentially independent of metallicity. Robinson et al. (2006)
have shown that the growth of gas giant planets can be re-
produced in simulations taking disk mass and metal content
into account. Here we extend such ideas to include systems
with Doppler planets, with debris disks, and with both phe-
nomena.
Metal-based trends are now identified (Figure 1) for
four outcomes1 ranked from most to least successful: a) hot
Jupiters, b) more distant Doppler planets, with semi-major
axis beyond 0.1 AU, c) systems with both giant planets and
debris, and d) stars with debris only. Here we base ‘success’
(implicitly, via core-accretion models) on a large supply of
refractories and so fast evolutionary timescales, with hot
Jupiters rapidly building a core, adding an atmosphere and
migrating substantially towards the star. In systems with
progressively less success: planets form more slowly so they
migrate less over the remaining disk lifetime (cool Jupiters);
only some of the material is formed into planets (planet-and-
debris systems); and mainly planetesimals are formed, per-
haps with planets up to ice-giant size (debris). The 0.1 AU
1 The separation of hot and cool Jupiters in [Fe/H] has been pre-
viously noted as having modest significance (Santos et al. 2003;
Sozzetti 2004); a K-S test here gives P=0.35 for the null hypothe-
sis using the Fig. 1 data (errors in [Fe/H] of typically 0.025 dex).
The overlap of the two distributions is in fact an integral part of
our model, where two factors contribute to the outcome.
divide of hot/cool Jupiters is not necessarily physical, and is
simply intended to give reasonable source counts. Notably,
in systems with debris plus Jupiters, the planets orbit at >
0.5 AU, consistent with even lower success in migration.
The plotted ranges of [Fe/H] shift globally to higher
values with more success, as expected – however, the ranges
also overlap indicating that the fraction of metallic elements
is not the only relevant quantity. The hypothesis to be tested
here is that the underlying dominant factor is the total mass
of metals in the primordial disk. This mass is the product
of the total mass M of the disk (largely in H2 gas) and the
mass-fraction Z of refractory elements. The range of Z for
each outcome is broad because disks of different M were
initially present; thus, a massive low-metal disk and a low-
mass metal-rich disk could both lead to the same outcome.
3 MODEL
The distribution of the product MS ∝ M Z was constructed
from a mass distribution measured for primordial disks
(Andrews & Williams 2005) and our metal abundances mea-
sured in nearby Sun-like stars (Valenti & Fischer 2005).
With the canonical gas-to-dust mass ratio of 100 for present-
day disks, the mass of solid material is then
MS = 0.01M × 10
[Fe/H]
, (1)
with the modifying Z-factor assumed to be traced by iron.
Padgett (1996) and James et al. (2006) find that the metal-
licities of present-day young star clusters are close to Solar
(perhaps lower by ∼ 0.1 dex), so the reference-point values
of 100 for the gas-to-dust ratio and 0 for [Fe/H] are self-
consistent to good accuracy.
Our model has the following assumptions.
• There are contiguous bands of the MS distribution, cor-
responding to different planetary system outcomes. Any one
disk has an outcome solely predicted by its value of MS . The
ranges of M and Z can overlap, as only the MS bands are
required to be contiguous.
• A particular outcome will arise from a set of M Z
products, with the observable quantity being the metallicity
ranging from Z(low) to Z(high). This Z-range sets the MS
bounds, as described below.
• For the stellar ensemble, all possible products are as-
sumed to occur, and to produce some outcome that is actu-
ally observed. There are no unknown outcomes (novel kinds
of planetary system not yet observed).
• M and Z are taken to be independent, as the disk mass
is presumably a dynamical property of the young star-disk
system and the mass in solids is always a small component.
The MS bounds were determined using the minimum
number of free parameters. For a particular outcome, the
observed Z-range is assumed to trace the bounds of MS , i.e.
MS(high)/MS(low) = f × Z(high)/Z(low) (2)
where f is a constant, and this relation sets solid-mass
bounds for each outcome. Results presented here are mainly
for f = 1, but more complex relationships could exist, and
other simple assumptions without further variables could
also have been made – alternative forms of Eq. 2 are ex-
plored in section 4.1
Table 1. Results of the planetary system frequency calculations. The ranges of solid mass MS(low) to MS(high) were determined based
on the input data of the observed [Fe/H] range for each outcome. The total disk masses given are consistent with these two boundary
conditions. The ranges of observed frequency include different surveys and/or statistical errors as discussed in the text. The predicted
total is less than 100 % because of a few ‘missing outcomes’ (see text). The null set represents stars searched for both planets and debris
with no detections.
outcome [Fe/H] range solid mass total disk mass predicted observed
(MJup [M⊕]) (MJup) frequency frequency
hot Jupiter –0.08 to +0.39 1.7–5 [500–1600] 70–200 1 % 2 ± 1 %
cool Jupiter –0.44 to +0.40 0.24–1.7 [75–500] 10–200 8 % 9 (5–11) %
planet & debris –0.13 to +0.27 0.10–0.24 [30–75] 5–30 5 % 3 ± 1 %
debris –0.52 to +0.18 0.02–0.10 [5–30] 1–30 16 % 15 ± 3 %
null –0.44 to +0.34 < 0.02 [< 5] < 15 62 % ∼ 75 %
Absolute values for MS were derived iteratively, work-
ing downwards from the most successful outcome. The upper
bound for the hot Jupiter band was derived from the high-
est disk mass observed and highest metallicity seen for this
outcome (under the assumption that all M Z products are
observed). Other outcomes were then derived in turn assum-
ing f = 1, and working down in order of success, with each
lower bound MS(low) setting the upper bound MS(high)
for the next most successful outcome. Both M and Z have
log-normal distributions, and 10^6 outcomes were calculated,
based on 1000 equally-likely values of log(M) combined with
1000 equally-probable values of log(Z), i.e. [Fe/H].
3.1 Input data
The M-distribution is taken from a deep millimetre-
wavelength survey for dust in disks in the Tau-
rus star-forming region by Andrews & Williams (2005).
Nürnberger et al. (1998) have found similar results from
(less deep) surveys of other regions, so the Taurus results
are taken here to be generic, under the simplification that
local environment is neglected. The total disk masses M were
found from the dust masses multiplied by a standard gas-
to-dust mass ratio of 100. The mean of the log-normal total
disk masses is 1 Jupiter mass (i.e. log M = 0 in MJup units)
with a standard deviation of 1.15 dex, and detections were
actually made down to −0.6σ. The Z-distribution is from
the volume-limited sample from Fischer & Valenti (2005) of
main sequence Sun-like stars (F, G, K dwarfs) within 18 pc.
These authors also list 850 similar stars out to larger dis-
tances that are being actively searched for Doppler planets.
In the 18 pc sample, the mean in logarithmic [Fe/H] is –0.06
and the standard deviation is 0.25 dex.
Both M and Z distributions have an upper cutoff at
approximately the +2σ bound: for M this is because disks
are less massive than ∼ 20 % of the star’s mass (presumably
for dynamical stability), and for Z because the Galaxy has a
metal threshold determined by nucleosynthetic enrichment
by previous generations of stars. The upper cutoff for M is
200 MJup for ‘classical T Tauri’ stars, and for Z the adopted
cutoff in [Fe/H] is +0.40 from the 18 pc sample (with a few
planet-hosts of [Fe/H] up to 0.56 found in larger volumes).
3.2 Observed frequencies
The Doppler-detection frequencies quoted in Table 1 are
mostly from the set of 850 uniformly-searched stars with
[Fe/H] values. The statistics for hot Jupiters range from
16/1330 = 1.2 % in a single-team search (Marcy et al.
2005) up to 22/850 = 2.6 % in the uniform dataset
(Fischer & Valenti 2005). For cool Jupiters (outside 0.1
AU semi-major axis), the counts are 76/850 = 8.9 %
(Fischer & Valenti 2005), with Marcy et al. (2005) finding
a range from 72/1330 = 5.4 % within 5 AU up to an extrap-
olation of 11 % within 20 AU. The upper limit is based on
long-term trends in the radial velocity data, and these as-yet
unconfirmed planetary systems could have [Fe/H] bounds
beyond those quoted here.
The debris counts are based on our surveys with
Spitzer. The debris-only statistics (Beichman et al. 2006;
Bryden et al. 2006) comprise 25 Spitzer detections out of
169 target stars in unbiased surveys, i.e. 15 ± 3 % with Pois-
sonian errors2. For planet-plus-debris systems, the 3 % fre-
quency is estimated from 6/25 detections (24± 10 %) among
stars known to have Doppler planets (Beichman et al. 2005),
multiplied by the 12 % total extrapolated planet frequency
(Marcy et al. 2005). In Table 1, the planet-only rates should
strictly sum to only up to 9 % if this estimate of 3 % of
planet-and-debris systems is subtracted.
The null set quoted has an [Fe/H] range derived from
our Spitzer targets where no debris or planet has been
detected. The MS(low) bound for the null set has been
set to zero rather than the formal limit derived from
Z(low)/Z(high), as lower values of MS(low) presumably also
give no presently observable outcome.
4 RESULTS
The predicted frequencies (Table 1) agree closely with the
observed rates in all outcome categories. This is remarkable, given that very different planetary systems are observed by independent methods and range in scale from under a tenth of an AU to tens of AU. The good agreement suggests that solid mass may indeed be the dominant predictor of outcome.
The model is also rather robust. In particular, because
the M-distribution is an independent datum, obtaining a
good match of predicted and observed frequencies is not
inevitable. For example, artificially halving the standard
deviation of the M distribution would yield far too many
2 Figure 1 also includes a few prior-candidate systems
(www.roe.ac.uk/ukatc/research/topics/dust/identification.html)
confirmed by Spitzer programs.
stars with detectable systems (75 %), and many more cool
Jupiters than hot Jupiters (around 20:1 instead of 5:1).
The model is also reasonably independent of outlier data
points. For example, adding in the low-metallicity system
neglected in Figure 1 would raise the prediction for cool
Jupiters from 8 % to 12 %, or extending the upper Z-cutoff
to the [Fe/H] of the distant Doppler systems would raise
this prediction to 11 %. However, the resulting effects on the
less-successful system probabilities are less than 1 %. This suggests that although small-number statistics may affect the predictions, the results would not differ greatly if large populations were available, which could then be described by statistical bounds rather than by minimum-to-maximum [Fe/H] ranges.
The model also accounts for nearly all outcomes, as re-
quired if each MS is to result in only one planetary system
architecture. For a few MS products, one of the outcomes
would be expected except that the Z value involved lies out-
side the observed ranges. These anomalous systems sum to
9 % (hence the Table 1 predictions add to < 100 %). These
‘missing’ systems are predominantly debris and debris-plus-
planet outcomes. The latter class has the smallest number
of detections (Figure 1), and more examples are likely to be
discovered with publication of more distant Doppler planets.
A further check is that the disk masses are realistic, in
terms of producing the observed bodies. Doppler systems are
predicted here to form from 5-200 Jupiter masses of gas in
the disk, enough to make gas giant planets. Also, the MS val-
ues of 30-1600 Earth-masses could readily supply the cores
of Jupiter and Saturn (quoted by Saumon & Guillot (2004)
as ≈ 0− 10 and ≈ 10− 20 M⊕ respectively) or the more ex-
treme ∼ 70 M⊕ core deduced for the transiting hot Jupiter
around HD 149026 (Sato et al. 2005). Similarly, populations
of colliding bodies generating debris have been estimated at
∼ 1–30 Earth-masses (Wyatt & Dent 2002; Greaves et al.
2004b), a quantity that could reasonably be produced from
5–75 Earth-masses in primordial solids (Table 1).
To make Jupiter analogues within realistic timescales,
core growth models (e.g. Hubickyj et al. 2005) need a few times the Minimum Mass Solar Nebula, which comprised approximately 20 MJup (Davis 2005). The Solar System itself
would thus have contained a few times 0.2 MJup in solids,
of which 0.15 MJup (50 M⊕) has been incorporated in plan-
etary cores. These primordial disk masses would place the
Solar System in the cool Jupiter category, and this is in fact
how it would appear externally (the dust belt being more
tenuous than in detected exo-systems).
4.1 Alternative models
Equation (2) was further investigated with f ≠ 1. Reason-
able agreement with the observations was obtained only for
f in a narrow range, ≈ 0.8− 1.1. For higher f , systems with
debris are over-produced, while for lower f there are too few
Doppler detections. This suggests that the hypothesis that
the Z-range directly traces the MS range for the outcome
to occur is close to correct, although not well understood.
Qualitatively, there is a locus of points in M, Z parameter
space that lie inside the appropriate MS bounds for an out-
come, and at some mid-range value of M it is likely that
all the Z(low) to Z(high) values are appropriate, and so this
traces the product MS(high) to MS(low) (provided no more
extreme values of Z are suitable at the extrema of M). Z
seems to have the most constrictive effect on outcome be-
cause the distribution is much narrower than that of M, and
thus if M changes by a large amount, there is no correspond-
ing value of Z that can preserve a similar MS . Simulations
of planetesimal growth as a function of mass of solids in the
disk are needed to further explore why the ranges of Z and
MS match so closely.
One even simpler model was tested, with a constant
range of MS(high)/MS(low) for each observable outcome.
Assuming that ∼ 1 M⊕ of solids is needed for the minimum
detectable system (planetesimals creating debris), and that
the most massive disks contain 1600 M⊕ of solids (from the
maximum M and Z), then this mass range can be divided
into equal parts for the four observable outcomes, with a
range of log-M = 0.8 for each. This simple model fails, in
particular greatly over-producing debris-and-planet systems
at 12 %. Thus, it seems that the Z-range does in fact contain
information on the outcomes.
4.2 Distributions within outcomes
The dependences on metallicity of planet and debris detec-
tions arise naturally in the model. Doppler detections are
strongly affected by metallicity: Fischer & Valenti (2005)
find that the probability is proportional to the square of the
number of iron atoms. Our model predicts P(hot Jupiter) ∝
N(Fe)^2.7 and P(cool Jupiter) ∝ N(Fe)^0.9, above 80 %- and
30 %-Solar iron content respectively. The observed exponent
of 2 for all planets is intermediate between the two model
values, and as predicted by Robinson et al. (2006), a steeper
trend for short-period planets is perhaps seen, at least at
high metallicities (Figure 1). These dependences arise be-
cause large solid masses are needed for fast gas giant forma-
tion with time for subsequent migration, and so when [Fe/H]
is low a large total disk mass is required, with rapidly de-
creasing probability in the upper tail of the M-distribution.
In contrast, the model predicts a weak relation of
P(debris) ∝ N(Fe)^0.2 (for 30-160 %-Solar iron), agreeing
with the lack of correlation seen by Bryden et al. (2006).
As fewer solids are needed to make comets, a wide variety
of parent disks will contain enough material, so little metal-
dependence is expected. For example (Table 1), MS(low) can
result from a 6 MJup disk with [Fe/H] of –0.5 or a 1.2 MJup
disk with [Fe/H] of +0.2. As these masses are both central
in the broad M-distribution, they have similar probability,
and so the two [Fe/H] values occur with similar likelihood.
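In outline, these exponents can be recovered by marginalising the M-distribution over a fixed solid-mass band at each metallicity; the short sketch below (our addition, reusing the placeholder band edges and normalisation from the earlier snippet) fits the resulting slope in log N(Fe).

import numpy as np
from scipy.stats import norm

# P(outcome | Z): fraction of the log-normal M-distribution whose solids land
# inside a given band when the metallicity is fixed.  Band edges (Earth masses)
# and the 150 M_Earth zero point are placeholders, as before.
def p_outcome(feh, lo, hi, mu=0.0, sigma=1.15, zero_point=150.0):
    lo_logM = np.log10(lo / zero_point) - feh   # required log10(M) range so that
    hi_logM = np.log10(hi / zero_point) - feh   # the product M * Z is in [lo, hi)
    return norm.cdf(hi_logM, mu, sigma) - norm.cdf(lo_logM, mu, sigma)

feh = np.linspace(-0.1, 0.4, 6)                  # roughly above 80%-solar iron
p_hot = p_outcome(feh, 500.0, 1600.0)
slope = np.polyfit(feh, np.log10(p_hot), 1)[0]   # since [Fe/H] = log10 N(Fe)/N(Fe)_sun
print(f"P(hot Jupiter) scales roughly as N(Fe)^{slope:.1f}")   # steep, as quoted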
5 DISCUSSION
The excellent reproduction of observed frequencies and de-
pendencies on metallicity both suggest that the method is
robust. Hence, rather surprisingly, the original search for a
single parameter that has a dominant effect on outcome has
succeeded. Under the assumption that the metallicity ranges
track the different outcomes, the primordial solid mass of the
circumstellar disk is identified as this parameter.
One implication is that for an ensemble of stars of
known metallicity, the proportions of different kinds of plan-
etary system may be predicted, without the need for detailed
models of individual disks. As main-sequence stars retain
no relic information on their primordial disk masses, this
Figure 2. Distribution of solid masses. The shaded areas rep-
resent (from right to left) systems producing hot Jupiters, cool
Jupiters, gas giants plus debris disks, debris only, and disks with
1 Earth-mass or more of solids. Disks further to the left have no
presently predicted observable outcome.
is a very useful result, leading to estimates of the frequency
and variety of planetary systems, for example among nearby
Sun-analogues of interest to planet-detection missions.
The model may also be used to examine other plan-
etary system regimes that have just opened up to experi-
ment. For example, transiting hot Jupiters have not been
detected in globular clusters, although the stellar density
allows searches of many stars. In 47 Tuc, no transits were
found amongst ∼ 20, 000 stars, although 7 detections would
be expected based on the typical hot Jupiter occurrence
rate (Weldrake et al. 2005). Assuming these old stars formed
with disks following the standard M-distribution, but with
metallicities of only one-fifth Solar in a narrow range with
σ ≈0.05 dex (Caretta et al. 2004), then the model predicts
that less than 1 in 10^6 disks can form a hot Jupiter – the
solid masses are too small for fast planet growth and subse-
quent migration.
Finally, an upper limit can be estimated for the number
of young disks that could form an Earth-mass planet. The
maximum fraction (Figure 2) is set by disks of MS > 1 M⊕,
summing to two-thirds of stars. Thus one in three stars
would not be expected to host any Earth-analogue, and this holds irrespective of metallicity, since the occurrence of this minimum mass is rather flat, with P ∝ N(Fe)^0.25. However, if the upper two-thirds of disks are able to form terrestrial plan-
ets, this would agree with the large numbers predicted by
the simulations of Ida & Lin (2004), and also with the first
planetary detections by the microlensing method. Two bod-
ies of only around 5 and 13 Earth-masses have been detected
around low-mass stars, and Gould et al. (2006) estimate a
frequency of 0.37 (–0.21, +0.30) in this regime of icy planets
orbiting at ∼ 3 AU. Our model finds that 40 % of stars have
disks with MS greater than 5 M⊕, so the minimum materi-
als to form such planets are present at about the observed
frequency. While very preliminary, this result supports the
prediction that many stars could have low-mass planets.
6 CONCLUSIONS
We tested the hypothesis that a single underlying parameter
could have the dominant effect on the outcome of planetary
system formation from primordial circumstellar disks. An
empirical model with the mass of solids as this parameter
produces a very good match to the observed frequencies and
to the dependence on host-star metallicity, when the metal-
licity range within each outcome is used to estimate the
solid-mass range. This model may be very useful for making
statistical predictions of planetary system architectures for
ensembles of stars of known metallicity, such as nearby Solar
analogues of interest to exo-Earth detection missions.
ACKNOWLEDGMENTS
JSG thanks PPARC and SUPA for support of this work.
We thank the referee, Philippe Thebault, for comments that
greatly helped the paper.
REFERENCES
Andrews S. M., Williams J. P., 2005, ApJ 631, 1134
Beichman C. A. et al., 2006, ApJ 652, 1674
Beichman C. A. et al., 2005, ApJ 622, 1160
Bryden G. et al., 2006, ApJ 636, 1098
Carretta E., Gratton R. G., Bragaglia A., Bonifacio P.,
Pasquini L., 2004, A&A 416, 925
Cumming A., Marcy G. W., Butler R. P., 1999, ApJ 526,
Davis S. S., 2005, ApJ 627, L153
Dominik C., Decin G., 2003, ApJ 598, 626
Fischer D.A., Valenti J.A., 2005, ApJ 622, 1102
Gould A. et al., 2006, ApJ 644, L37
Greaves J. S., Fischer D. A., Wyatt M. C., 2006, MNRAS
366, 283
Greaves J. S., Wyatt M. C., Holland W. S., Dent W. R.
F., 2004b, MNRAS 351, L54
Greaves J.S., Holland W.S., Jayawardhana R., Wyatt
M.C., Dent W.R.F., 2004a, MNRAS, 348, 1097
Hubickyj O., Bodenheimer P., Lissauer J. J., 2005, Icarus
179, 415
Ida S., Lin D. N. C., 2004, ApJ 604, 388
James D. J., Melo C., Santos N. C., Bouvier J., 2006, A&A
446, 971
Liou J.-C., Zook H. A., 1999, Astron. J. 118, 580
Marcy G. et al., 2005, Prog. Th. Phys. Supp. 158, 24
Marzari F., Weidenschilling S. J., 2002, Icarus 156, 570
Nelson R. P., Papaloizou J. C. B., Masset F., Kley W.,
2000, MNRAS 318, 18
Nürnberger D., Brandner W., Yorke H. W., Zinnecker H.,
1998, A&A 330, 549
Padgett D. L., 1996, ApJ 471, 847
Pollack J. B. et al., 1996, Icarus 124, 62
Robinson S. E., Laughlin G., Bodenheimer P., Fischer D.,
2006, ApJ 643, 484
Santos N. C., Israelian G., Mayor M., Rebolo R., Udry S.,
2003, A&A, 398, 363
Sato B. et al., 2005, ApJ 633, 465
Saumon D., Guillot T., 2004, ApJ 609, 1170
Sozzetti A., 2004, MNRAS 354, 1194
Valenti J. A., Fischer D. A., 2005, ApJS 159, 141
Weldrake D. T. F., Sackett P. D., Bridges T. J., Freeman
K. C., 2005, ApJ 620, 1043
Wyatt M. C., 2003, ApJ 598, 1321
Wyatt M.C., Dent W.R.F., 2002, MNRAS 334, 589
This paper has been typeset from a TEX/LATEX file prepared by the author.
|
0704.0875 | The concrete theory of numbers: initial numbers and wonderful properties
of numbers repunit | arXiv:0704.0875v2 [math.GM] 7 Apr 2007
The concrete theory of numbers: initial numbers
and wonderful properties of numbers repunit
Tarasov, B.V.∗
November 15, 2021
Abstract
In this work initial numbers and repunit numbers are studied. All numbers are considered in decimal notation. The problem of the primality of initial numbers is studied. Interesting properties of repunit numbers are proved: gcd(Ra, Rb) = Rgcd(a,b); Rab/(RaRb) is an integer if and only if gcd(a, b) = 1, where a ≥ 1, b ≥ 1 are integers. Divisors of repunit numbers that are powers of a prime are also investigated.
Devoted to the tercentenary from the date of birth (4/15/1707)
of Leonhard Euler
1 Introduction
Let x ≥ 0, n ≥ 0 be integers. An integer N , which record consists from n
records of number x, we shall designate by
N = {x}n = x . . . x, n > 0. (1)
For n = 0 it is received {x}0 = ∅ an empty record. For example, {10}31 =
1010101, {10}01 = 1, etc.
Palindromic numbers of a kind
En,k = {1{0}k}n1, (2)
where n ≥ 0, k ≥ 0 we will name initial numbers. We will notice thatE0,k = 1
at any k ≥ 0.
Numbers repunit(see[2, 3, 4]) are natural numbers, which records consist of
units only, i.e. by definition
Rn = En−1,0, (3)
where n ≥ 1.
In decimal notation the general formula for numbers repunit is
Rn = (10
n − 1)/9, (4)
∗Tarasov, B.V. The concrete theory of numbers: initial numbers and wonderful properties
of numbers repunit. MSC 11A67+11B99. c©2007 Tarasov, B.V., independent researcher.
where n = 1, 2, 3, . . . .
Only five prime repunits are known, for n = 2, 19, 23, 317, 1031.
Known problem (Prime repunit numbers [3]). Do there exist infinitely many prime repunit numbers?
We will use the following notation throughout:
(a, b) = gcd(a, b) denotes the greatest common divisor of the integers a > 0, b > 0;
p, q denote odd prime numbers.
Unless stated otherwise, all numbers considered are positive integers.
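As an added aside, the definitions above are straightforward to check numerically, for example:

def repunit(n: int) -> int:
    """R_n = 111...1 with n ones, i.e. (10**n - 1) // 9."""
    return (10**n - 1) // 9

def initial_number(n: int, k: int) -> int:
    """E_{n,k} = {1{0}_k}_n 1: n blocks of '1' followed by k zeros, then a final '1'."""
    return int(("1" + "0" * k) * n + "1")

assert repunit(5) == 11111
assert initial_number(3, 1) == 1010101                     # the example {10}_3 1
assert all(repunit(n) == initial_number(n - 1, 0) for n in range(1, 20))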
2 Initial numbers
Let’s consider the trivial properties of initial numbers.
Theorem 1. The following trivial statements hold:
(1) The general formula for initial numbers is
E_{n,k} = R_{(k+1)(n+1)} / R_{k+1} = (10^{(k+1)(n+1)} − 1) / (10^{k+1} − 1). (5)
(2) For k ≥ 0, n ≥ m ≥ 1: if n + 1 ≡ 0 (mod (m + 1)), then (E_{n,k}, E_{m,k}) = E_{m,k}.
(3) For k ≥ 0, n > m ≥ 1: if there exists an integer s ≥ 1 such that n + 1 ≡ 0 (mod (s + 1)) and m + 1 ≡ 0 (mod (s + 1)), then (E_{n,k}, E_{m,k}) ≥ E_{s,k} > 1.
(4) For k ≥ 0, n > m ≥ 1: (E_{n,k}, E_{m,k}) = 1 if and only if (n + 1, m + 1) = 1.
Proof. 1) Properties (1)–(3) are obvious.
2) Proof of property (4). Necessity. Suppose that (E_{n,k}, E_{m,k}) = 1 and (n + 1, m + 1) = s > 1, s − 1 ≥ 1. From property (3) of the theorem it follows that (E_{n,k}, E_{m,k}) ≥ E_{s−1,k} = {1{0}_k}_{s−1} 1 > 1. This is a contradiction.
Sufficiency of property (4). Let (n + 1, m + 1) = 1. Then there exist integers a > 0, b > 0 such that either a(n + 1) = b(m + 1) + 1 or b(m + 1) = a(n + 1) + 1. Assume that (E_{n,k}, E_{m,k}) = d > 1.
a) Let a(n + 1) = b(m + 1) + 1. Then E_{b(m+1),k} = E_{a(n+1)−1,k} = (10^{a(n+1)(k+1)} − 1)/(10^{k+1} − 1) ≡ 0 (mod E_{n,k}) ≡ 0 (mod d). On the other hand, E_{b(m+1),k} = (10^{(k+1)(b(m+1)+1)} − 1)/(10^{k+1} − 1) = ((10^{b(m+1)(k+1)} − 1)/(10^{k+1} − 1)) · 10^{k+1} + 1 ≡ 1 (mod E_{m,k}) ≡ 1 (mod d). This is a contradiction.
b) Let b(m + 1) = a(n + 1) + 1. Then E_{a(n+1),k} = E_{b(m+1)−1,k} = (10^{b(m+1)(k+1)} − 1)/(10^{k+1} − 1) ≡ 0 (mod E_{m,k}) ≡ 0 (mod d). On the other hand, E_{a(n+1),k} = (10^{(k+1)(a(n+1)+1)} − 1)/(10^{k+1} − 1) = ((10^{a(n+1)(k+1)} − 1)/(10^{k+1} − 1)) · 10^{k+1} + 1 ≡ 1 (mod E_{n,k}) ≡ 1 (mod d). This is a contradiction.
3 Numbers repunit
Let’s consider trivial properties of numbers repunit.
Theorem 2. The following trivial statements hold:
(1) The number R_n can be prime only if n is prime.
(2) If p > 3, every prime divisor of R_p is of the form 1 + 2px, where x ≥ 1 is an integer.
(3) (R_a, R_b) = 1 if and only if (a, b) = 1.
Proof. Property (1) is proved in [2, 3]; property (2) is proved in [1] as an exercise. Property (3) is a corollary of Theorem 1.
Theorem 3. (Ra, Rb) = R(a,b), where a ≥ 1, b ≥ 1 are integers.
Proof. The validity of the theorem for (a, b) = 1 follows from property (3) of Theorem 2. Let (a, b) = d > 1, with a = a_1 d, b = b_1 d, (a_1, b_1) = 1. Consider the equalities
R_a = R_d · {10^{d(a_1−1)} + . . . + 10^d + 1},
R_b = R_d · {10^{d(b_1−1)} + . . . + 10^d + 1},
and set
A = 10^{d(a_1−1)} + . . . + 10^d + 1,
B = 10^{d(b_1−1)} + . . . + 10^d + 1.
Assume that (A, B) > 1, and let q be an odd prime number such that
A ≡ 0 (mod q), B ≡ 0 (mod q). (6)
If q = 3, then 10^t ≡ 1 (mod q) for any integer t ≥ 1, and from (6) it follows that a_1 ≡ b_1 ≡ 0 (mod q). This is a contradiction.
Thus q > 3. Then there exists an index d_min to which the number 10^d belongs modulo q:
(10^d)^{d_min} ≡ 1 (mod q),
where d_min ≥ 1. If d_min = 1, then it follows from (6) that a_1 ≡ b_1 ≡ 0 (mod q). This is a contradiction. Hence d_min > 1. Since R_a ≡ R_b ≡ 0 (mod q), we have (10^d)^{a_1} ≡ 1 (mod q) and (10^d)^{b_1} ≡ 1 (mod q). Then a_1 ≡ b_1 ≡ 0 (mod d_min). This is a contradiction.
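As a supplementary numerical sanity check (not part of the original argument), Theorem 3 is easy to test for small indices:

from math import gcd

def repunit(n: int) -> int:
    return (10**n - 1) // 9

for a in range(1, 40):
    for b in range(1, 40):
        assert gcd(repunit(a), repunit(b)) == repunit(gcd(a, b))
print("gcd(R_a, R_b) = R_gcd(a,b) verified for 1 <= a, b < 40")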
Theorem 4. Let p > 3 be a prime number and let k ≥ t ≥ 1, t ≥ s ≥ 1 be integers. Then
gcd(R_{p^k}/R_{p^t}, R_{p^s}) = 1. (7)
Proof. Consider the expression
A = R_{p^k}/R_{p^t} = (10^{p^t})^{p^{k−t}−1} + (10^{p^t})^{p^{k−t}−2} + . . . + 10^{p^t} + 1.
If (A, R_{p^s}) > 1, then there exists a prime number q such that A ≡ 0 (mod q) and R_{p^s} ≡ 0 (mod q). Hence 10^{p^s} ≡ 1 (mod q), and therefore 10^{p^t} ≡ 1 (mod q), so that A ≡ p^{k−t} ≡ 0 (mod q); this forces q = p and then p = q = 3. This is a contradiction, because p > 3.
Theorem 5. Let a ≥ 1, b ≥ 1 be integers. Then the following statements are true:
(1) If (a, b) = 1, then
gcd(R_{ab}, R_a R_b) = R_a R_b. (8)
(2) If (a, b) > 1, then
R_a R_b / R_{(a,b)} ≤ gcd(R_{ab}, R_a R_b) < R_a R_b. (9)
Proof. 1) Let (a, b) = 1. Then (R_a, R_b) = R_{(a,b)} = 1 and R_{ab} = R_a X = R_b Y; hence X = c R_b with an integer c ≥ 1, and R_{ab} = c R_a R_b.
2) Let (a, b) = d > 1, a = a_1 d, b = b_1 d, (a_1, b_1) = 1, a_1 ≥ 1, b_1 ≥ 1. Since gcd(R_a, R_b) = R_{(a,b)}, we obtain the equalities
R_a = R_{(a,b)} X, R_b = R_{(a,b)} Y, (10)
where (X, Y) = 1.
Further, R_{ab} = R_a A = R_b B = X A R_{(a,b)} = Y B R_{(a,b)}, so X A = Y B, A = Y z, B = X z with an integer z ≥ 1. Then R_{ab} = X Y R_{(a,b)} z, i.e. R_{ab} = z R_a R_b / R_{(a,b)}. This proves that
R_a R_b / R_{(a,b)} ≤ gcd(R_{ab}, R_a R_b).
Assume now that gcd(R_{ab}, R_a R_b) = R_a R_b; then R_{ab} = z R_a R_b, where z ≥ 1 is an integer. Consider the equalities
R_{ab} = R_a A = R_b B,
where
A = 10^{a(b−1)} + 10^{a(b−2)} + . . . + 10^a + 1,
B = 10^{b(a−1)} + 10^{b(a−2)} + . . . + 10^b + 1.
Since A = R_b z, B = R_a z, 10^a ≡ 1 (mod R_{(a,b)}) and 10^b ≡ 1 (mod R_{(a,b)}), we get A ≡ B ≡ 0 (mod R_{(a,b)}); since also A ≡ b and B ≡ a (mod R_{(a,b)}), it follows that a ≡ b ≡ 0 (mod R_{(a,b)}).
Thus the congruence (a, b) ≡ 0 (mod R_{(a,b)}), i.e. d ≡ 0 (mod R_d), would hold, which contradicts the obvious inequality
(10^x − 1)/9 > x, (11)
valid for real x > 1.
({⋆} An important corollary of Theorem 5).
The number R_{ab}/(R_a R_b) is an integer if and only if (a, b) = 1, where a ≥ 1, b ≥ 1 are integers.
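Again as an added check, the corollary can be confirmed numerically for small index pairs:

from math import gcd

def repunit(n: int) -> int:
    return (10**n - 1) // 9

for a in range(1, 25):
    for b in range(1, 25):
        divisible = repunit(a * b) % (repunit(a) * repunit(b)) == 0
        assert divisible == (gcd(a, b) == 1)
print("R_ab / (R_a R_b) is an integer exactly when gcd(a, b) = 1 (checked for a, b < 25)")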
Let us quote some trivial statements about repunit numbers.
Lemma 1. If a = 3^n b, (b, 3) = 1, then
R_a ≡ 0 (mod 3^n), but R_a ≢ 0 (mod 3^{n+1}). (12)
Proof. If n = 1, then R_a = R_3 B, where B = 10^{3(b−1)} + . . . + 10^3 + 1, R_3 ≡ 0 (mod 3), and B ≡ b ≢ 0 (mod 3). Thus R_a ≡ 0 (mod 3), but R_a ≢ 0 (mod 3^2).
Let the congruences (12) be proved for n ≤ k − 1. We consider a = 3^k b, (b, 3) = 1. Then R_a = R_{3^{k−1} b} A, where A = 10^{3^{k−1} b · 2} + 10^{3^{k−1} b} + 1. Here R_{3^{k−1} b} ≡ 0 (mod 3^{k−1}) but R_{3^{k−1} b} ≢ 0 (mod 3^k), while A ≡ 0 (mod 3) but A ≢ 0 (mod 3^2).
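As an added check of Lemma 1, the statement says that the exponent of 3 in R_a equals the exponent of 3 in a, which is easy to verify numerically:

def repunit(n: int) -> int:
    return (10**n - 1) // 9

def v3(m: int) -> int:
    """Exponent of 3 in m."""
    v = 0
    while m % 3 == 0:
        m //= 3
        v += 1
    return v

assert all(v3(repunit(a)) == v3(a) for a in range(1, 200))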
Lemma 2. If n ≥ 0 is an integer, then
r_n = 10^{11^n} + 1 ≡ 0 (mod 11^{n+1}), but r_n ≢ 0 (mod 11^{n+2}). (13)
Proof. r_0 = 11 ≡ 0 (mod 11), but r_0 = 11 ≢ 0 (mod 11^2);
r_1 = 10^{11} + 1 ≡ 0 (mod 11^2), but r_1 ≢ 0 (mod 11^3).
Let us make the inductive assumption that formulas (13) are proved for n ≤ k − 1, where k − 1 ≥ 1, k ≥ 2. Let n = k. Then
r_k = 10^{11^k} + 1 = (10^{11^{k−1}})^{11} + 1 = r_{k−1} A, where
A = 10^{11^{k−1}·10} − 10^{11^{k−1}·9} + 10^{11^{k−1}·8} − 10^{11^{k−1}·7} + 10^{11^{k−1}·6} − 10^{11^{k−1}·5} + 10^{11^{k−1}·4} − 10^{11^{k−1}·3} + 10^{11^{k−1}·2} − 10^{11^{k−1}} + 1. (14)
Since, by the inductive assumption, 10^{11^{k−1}} ≡ −1 (mod 11^k) for k ≥ 2, we get A ≡ 11 (mod 11^k). Then A ≡ 0 (mod 11), but A ≢ 0 (mod 11^2). Thus we obtain r_k ≡ 0 (mod 11^{k+1}), but r_k ≢ 0 (mod 11^{k+2}).
Lemma 3. For an integer a ≥ 1 the following statements are true:
(1) If a is odd, then R_a ≢ 0 (mod 11).
(2) If a = 2 · 11^n · b, (b, 11) = 1, then
R_a ≡ 0 (mod 11^{n+1}), but R_a ≢ 0 (mod 11^{n+2}). (15)
Proof. If a is odd, then R_a ≡ 1 (mod 11). If a = 2 · 11^n · b, (b, 11) = 1, then R_a = (10^{2·11^n·b} − 1)/9 = R_{11^n} · r_n · A, where r_n = 10^{11^n} + 1 and A = 10^{2·11^n (b−1)} + . . . + 10^{2·11^n} + 1. Here R_{11^n} ≢ 0 (mod 11) and A ≡ b ≢ 0 (mod 11). The validity of statement (2) of Lemma 3 then follows from Lemma 2.
({⋆} The assumption: a general formula for gcd(R_{ab}, R_a R_b)).
If a ≥ 1, b ≥ 1 are integers and d = (a, b) = 3^L · 11^S · c with (c, 3) = 1, (c, 11) = 1, L ≥ 0, S ≥ 0, then the following equalities hold:
— if c is an odd number, then
gcd(R_{ab}, R_a R_b) = ((R_a R_b)/R_{(a,b)}) · 3^L, (16)
— if c is an even number, then
gcd(R_{ab}, R_a R_b) = ((R_a R_b)/R_{(a,b)}) · 3^L · 11^S. (17)
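This conjectured formula can be probed numerically (an addition of ours); the snippet below compares both sides for small a, b and reports any mismatch rather than assuming the formula holds:

from math import gcd

def repunit(n: int) -> int:
    return (10**n - 1) // 9

def valuation(n: int, p: int) -> int:
    """Exponent of the prime p in n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

for a in range(1, 20):
    for b in range(1, 20):
        d = gcd(a, b)
        L, S = valuation(d, 3), valuation(d, 11)
        c = d // (3**L * 11**S)
        predicted = repunit(a) * repunit(b) // repunit(d) * 3**L
        if c % 2 == 0:
            predicted *= 11**S
        actual = gcd(repunit(a * b), repunit(a) * repunit(b))
        if predicted != actual:
            print("mismatch at a =", a, ", b =", b)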
Let us give two more simple statements in which divisors of repunit numbers that are powers of a prime are studied.
Lemma 4. If p, q are prime numbers and R_p ≡ 0 (mod q), but R_p ≢ 0 (mod q^2), then the following statements hold:
(1) For any integer r with 0 < r < q, R_{pr} ≢ 0 (mod q^2).
(2) For any integer n ≥ 1, R_{p^n} ≢ 0 (mod q^2).
Proof. 1) R_{pr} = R_p · R̂_{pr}, where R̂_{pr} = 10^{p(r−1)} + 10^{p(r−2)} + . . . + 10^p + 1. If R_{pr} ≡ 0 (mod q^2), then R̂_{pr} ≡ 0 (mod q), hence r ≡ 0 (mod q). This is a contradiction.
2) If some n > 1 were found such that R_{p^n} ≡ 0 (mod q^2), then from (7) it follows that (R_{p^n}/R_p, R_p) = 1. This is a contradiction.
Lemma 5. If p, q are prime numbers and R_p ≡ 0 (mod q), then
R_{pq^n} ≡ 0 (mod q^{n+1}).
Proof. Since R_{pq} = R_p · R̂_{pq}, where R̂_{pq} = 10^{p(q−1)} + 10^{p(q−2)} + . . . + 10^p + 1, we have R̂_{pq} ≡ 0 (mod q) and therefore R_{pq} ≡ 0 (mod q^2).
Assume that R_{pq^{n−1}} ≡ 0 (mod q^n). Then
R_{pq^n} = R_{pq^{n−1}·q} = R_{pq^{n−1}} · R̂_{pq^{n−1}·q}, where
R̂_{pq^{n−1}·q} = 10^{pq^{n−1}(q−1)} + 10^{pq^{n−1}(q−2)} + . . . + 10^{pq^{n−1}} + 1 ≡ 0 (mod q),
and hence R_{pq^n} ≡ 0 (mod q^{n+1}).
4 The problem of primality of initial numbers
Let us consider the problem of the primality of the initial numbers E_{n,k}, where k ≥ 0, n ≥ 0.
If k = 0, then E_{n,0} = R_{n+1}. Thus the primality of the numbers E_{n,0} is the known problem of prime repunit numbers R_p, where p is a prime number.
If n = 1, then E_{1,k} = 1{0}_k 1 = 10^{k+1} + 1. Since the number E_{1,k} can be prime only when k + 1 = 2^m, with m ≥ 0 an integer, we arrive at the known problem of the primality of the generalized Fermat numbers f_m(a) = a^{2^m} + 1 for a = 10. Generalized Fermat numbers were defined by Ribenboim [5] in 1996 as numbers of the form f_n(a) = a^{2^n} + 1, where a > 2 is even.
The generalized Fermat numbers f_n(10) = 10^{2^n} + 1 for n ≤ 14 are prime only for n = 0, 1: f_0(10) = 11, f_1(10) = 101.
Theorem 6. Let n > 1, k > 0. If any of the conditions
(1) n is odd,
(2) k is odd,
(3) n + 1 ≡ 0 (mod 3),
(4) (n + 1, k + 1) = 1,
holds, then the number E_{n,k} is composite.
Proof. 1) Let n be odd, n + 1 = 2t, t > 1. Then E_{n,k} = E_{t−1,k} · (10^{t(k+1)} + 1), where t > 1, t − 1 ≥ 1. Since E_{t−1,k} > 1, the number E_{n,k} is composite.
2) Let k be an odd number. In view of the already proved case (1) we may assume that n + 1 is odd. Write k + 1 = 2t ≥ 2, t ≥ 1. Then
E_{n,k} = E_{n,t−1} · ((10^{(n+1)t} + 1)/(10^t + 1)),
where n > 1, t − 1 ≥ 0, E_{n,t−1} > 1, and the number (10^{(n+1)t} + 1)/(10^t + 1) > 1 is an integer.
3) If n + 1 ≡ 0 (mod 3), then E_{n,k} ≡ 0 (mod 3) and E_{n,k} > 11.
4) Let n > 1, k ≥ 1, (n + 1, k + 1) = 1. Then
E_{n,k} = R_{(n+1)(k+1)}/R_{k+1} = R_{n+1} · (R_{(n+1)(k+1)}/(R_{k+1} · R_{n+1})).
By Theorem 5 the number z = R_{(n+1)(k+1)}/(R_{k+1} · R_{n+1}) is an integer. Further,
z > (10^{(n+1)(k+1)} − 1)/10^{n+k+2} = 10^{nk−1} − 1/10^{n+k+2}, with nk − 1 ≥ 1,
thus z > 1.
The question of the primality of initial numbers under the conditions (n + 1, k + 1) > 1, with (n + 1) odd, (k + 1) odd and n + 1 ≢ 0 (mod 3), remains open.
In particular, it is interesting to consider the numbers E_{p−1,p−1} = R_{p^2}/R_p, where p is a prime number. For p < 100 the numbers E_{p−1,p−1} are composite.
5 Open problems on repunit numbers
The known problem on repunit numbers remains open.
Problem 1 (Prime repunit numbers [3]). Do there exist infinitely many prime numbers R_p, with p a prime number?
Problem 2. Are all numbers R_p, with p a prime number, squarefree?
The author has checked, for p < 97, that the numbers R_p are squarefree. The following further open questions are also of interest:
Problem 3. If the number R_p is squarefree, where p > 3 is a prime number, can a number n be found such that R_{p^n} contains a square?
Problem 4. For p a prime number, are there prime numbers of the form E_{p−1,p−1} = R_{p^2}/R_p?
The author has checked, for p ≤ 127, that the numbers E_{p−1,p−1} are composite. It is known that R_p is divisible by 2p + 1 for the prime numbers p = 41, 53, and that R_p is divisible by 4p + 1 for the prime numbers p = 13, 43, 79.
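As an added verification of these divisibility facts:

def repunit(n: int) -> int:
    return (10**n - 1) // 9

print(all(repunit(p) % (2 * p + 1) == 0 for p in (41, 53)))       # True
print(all(repunit(p) % (4 * p + 1) == 0 for p in (13, 43, 79)))   # True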
This raises the question:
Problem 5. Are there infinitely many prime numbers p such that R_p is divisible by 2p + 1 or by 4p + 1?
(Remark.) If p > 5 is a Sophie Germain prime (i.e. the number 2p + 1 is prime too), then either R_p or (10^p + 1)/11 is divisible by 2p + 1.
6 Conclusion
Leonhard Euler, professor of the Russian Academy of Sciences since 1731, belongs to mathematics forever! Euler's invisible hand has directed the development of concrete mathematics for more than 200 years.
Euler's titanic work, which opened a way to freedom for the mathematical community, commands admiration. The pleasure given by Euler's works warms the heart.
References
[1] Vinogradov I.M. Osnovy teorii chisel. -M. :Nauka, 1981.
[2] Ronald L. Graham, Donald E. Knuth, Oren Patashnik, Concrete Mathematics: A Foundation for Computer Science, 2nd edition (Reading, Massachusetts: Addison-Wesley), 1994.
[3] Weisstein, Eric W. ”Repunit.” From MathWorld–A Wolfram Web Resource.
—http://mathworld.wolfram.com/Repunit.html/.
c©1999—2007 Wolfram Research, Inc.
[4] The Prime Glossary: repunit.
—http://primes.utm.edu/glossary/page.php?sort=Repunit/.
[5] Ribenboim, P. ”Fermat Numbers” and ”Numbers k × 2n ± 1.” 2.6 and 5.7
in The New Book of Prime Number Records. New York: Springer-Verlag,
pp. 83-90 and 355-360, 1996.
———————————————————————
Institute of Thermophysics, Siberian Branch of RAS
Lavrentyev Ave., 1, Novosibirsk, 630090, Russia
E-mail: [email protected]
———————————————————————
Independent researcher,
E-mail: [email protected]
———————————————————————
|
0704.0876 | Non-monotone convergence in the quadratic Wasserstein distance | NON-MONOTONE CONVERGENCE IN THE QUADRATIC
WASSERSTEIN DISTANCE
WALTER SCHACHERMAYER, UWE SCHMOCK, AND JOSEF TEICHMANN
Abstract. We give an easy counter-example to Problem 7.20 from C. Vil-
lani’s book on mass transport: in general, the quadratic Wasserstein distance
between n-fold normalized convolutions of two given measures fails to decrease
monotonically.
We use the terminology and notation from [5]. For Borel measures µ, ν on Rd
we define the quadratic Wasserstein distance
T(µ, ν) := inf_{(X,Y)} E ‖X − Y‖²,
where ‖ · ‖ is the Euclidean distance on Rd and the pairs (X,Y ) run through all
random vectors defined on some common probabilistic space (Ω,F ,P), such that X
has distribution µ and Y has distribution ν. By a slight abuse of notation we define
T (U, V ) := T (µ, ν) for two random vectors U , V , such that U has distribution µ
and V has distribution ν. The following theorem (see [5, Proposition 7.17]) is due
to Tanaka [4].
Theorem 1. For a, b ∈ R and square integrable random vectors X, Y , X ′, Y ′
such that X is independent of Y , and X ′ is independent of Y ′, and E[X ] = E[X ′]
or E[Y ] = E[Y ′], we have
T(aX + bY, aX′ + bY′) ≤ a²T(X, X′) + b²T(Y, Y′).
For a sequence of i.i.d. random vectors (Xi)i∈N we define the normalized partial sums
S_m := (1/√m) Σ_{i=1}^{m} X_i, m ∈ N.
If µ denotes the law of X_1, we write µ^(m) for the law of S_m. Clearly µ^(m) equals, up to the scaling factor √m, the m-fold convolution µ ∗ µ ∗ · · · ∗ µ of µ.
We shall always deal with measures µ, ν with vanishing barycenter. Given two measures µ and ν on Rd with finite second moments, we let (Xi)i∈N and (X′i)i∈N be i.i.d. sequences with law µ and ν, respectively, and denote by S_m and S′_m the corresponding normalized partial sums. From Theorem 1 we obtain
T(µ^(2m), ν^(2m)) ≤ T(µ^(m), ν^(m)), m ∈ N,
Date: October 4, 2006.
Financial support from the Austrian Science Fund (FWF) under grant P 15889, from the
Vienna Science and Technology Fund (WWTF) under grant MA13, from the European Union
under grant HPRN-CT-2002-00281 is gratefully acknowledged. Furthermore this work was finan-
cially supported by the Christian Doppler Research Association (CDG) via PRisMa Lab. The
authors gratefully acknowledge a fruitful collaboration and continued support by Bank Austria
Creditanstalt (BA-CA) and the Austrian Federal Financing Agency (ÖBFA) through CDG.
from which one may quickly deduce a proof of the Central Limit Theorem (compare
[5, Ch. 7.4] and the references given there).
However, we can not deduce from Theorem 1 that the inequality
(1) T
µ(m+1), ν(m+1)
µ(m), ν(m)
holds true for all m ∈ N. Specializing to the case m = 2, an estimate, which we
can obtain from Tanaka’s Theorem, is
µ(3), ν(3)
µ(2), ν(2)
+ T (µ, ν)
≤ T (µ, ν).
This contains some valid information, but does not imply (1). It was posed as
Problem 7.20 of [5], whether inequality (1) holds true for all probability measures
µ, ν on Rd and all m ∈ N.
The subsequent easy example shows that the answer is no, even for d = 1 and
symmetric measures. We can choose µ = µn and ν = νn for sufficiently large n ≥ 2,
as the proposition (see also Remark 1) shows.
Proposition 1. Denote by µn the distribution of
∑2n−1
i=1 Zi, and by νn the distri-
bution of
i=1 Zi with (Zi)i∈N i.i.d. and P(Z1 = 1) = P(Z1 = −1) =
. Then
(2) lim
n T (µn ∗ µn, νn ∗ νn) =
while T (µn ∗ µn ∗ µn, νn ∗ νn ∗ νn) ≥ 1 for all n ∈ N.
Remark 1. If one only wants to find a counter-example to Problem 7.20 of [5],
one does not really need the full strength of Proposition 1, i.e. the estimate that
T (µn ∗ µn, νn ∗ νn) = O(1/
n). In fact, it is sufficient to consider the case n = 2
in order to contradict the monotonicity of inequality (1). Indeed, a direct calculation
reveals that
T (µ2 ∗ µ2, ν2 ∗ ν2) = 0.625 <
T (µ2 ∗ µ2 ∗ µ2, ν2 ∗ ν2 ∗ ν2).
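Since both measures in Remark 1 are finitely supported on the integers, the quadratic transportation cost can be evaluated exactly from the monotone (quantile) coupling, which is optimal in one dimension. The following short script, added here purely for illustration, reproduces the value 0.625:

from fractions import Fraction
from math import comb

def signed_sum_pmf(m):
    """Law of Z_1 + ... + Z_m for i.i.d. fair signs; support {-m, -m+2, ..., m}."""
    return {2 * k - m: Fraction(comb(m, k), 2**m) for k in range(m + 1)}

def w2_squared(p, q):
    """Quadratic transportation cost inf E|X - Y|^2 between pmfs on Z,
    computed from the monotone coupling (optimal in one dimension)."""
    xs, ys = sorted(p), sorted(q)
    i = j = 0
    cp, cq = p[xs[0]], q[ys[0]]
    cost, remaining = Fraction(0), Fraction(1)
    while remaining > 0:
        mass = min(cp, cq)
        cost += mass * (xs[i] - ys[j])**2
        cp -= mass; cq -= mass; remaining -= mass
        if cp == 0 and i + 1 < len(xs):
            i += 1; cp = p[xs[i]]
        if cq == 0 and j + 1 < len(ys):
            j += 1; cq = q[ys[j]]
    return cost

sigma = signed_sum_pmf(6)        # mu_2 * mu_2: sum of 2*(2*2-1) = 6 signs
tau = signed_sum_pmf(8)          # nu_2 * nu_2: sum of 2*(2*2) = 8 signs
print(w2_squared(sigma, tau))    # 5/8 = 0.625, as quoted in Remark 1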
Proof of Proposition 1. We start with the final assertion, which is easy to show.
The 3-fold convolutions of the measures µn and νn, respectively, are supported
on odd and even numbers, respectively. Hence they have disjoint supports with
distance 1 and so the quadratic transportation costs are bounded from below by 1.
For the proof of (2), fix n ∈ N, define σn = µn ∗ µn and τn = νn ∗ νn, and note
that σn and τn are supported by the even numbers. For k = −(2n−1), . . . , (2n−1)
we denote by pn,k the probability of the point 2k under σn, i.e.
pn,k =
4n− 2
k + 2n− 1
24n−2
We define pn,k = 0 for |k| ≥ 2n. We have τn = σn ∗ ρ, where ρ is the distribution
giving probability 1
to −2, 0, 2, respectively. We deduce that for 0 ≤ k ≤
2n− 2,
τn(2k + 2) =
pn,k +
pn,k+2 +
pn,k+1
(pn,k − pn,k+1) +
(pn,k+2 − pn,k+1) + σn(2k + 2)
1− pn,k+1
pn,k+1
(pn,k+2
pn,k+1
+ σn(2k + 2).
Notice that pn,k ≥ pn,k+1 for 0 ≤ k ≤ 2n− 1. The term in the first parentheses is
therefore non-negative. It can easily be calculated and estimated via
0 ≤ 1− pn,k+1
k+2n−1
) = 1− 2n− k − 1
k + 2n
2k + 1
2n+ k
≤ 2k + 1
for 0 ≤ k ≤ 2n− 1.
Following [5] we know that the quadratic Wasserstein distance T can be given
by a cyclically monotone transport plan π = πn. We define the transport plan π
via an intuitive transport map T . It is sufficient to define T for 0 ≤ k ≤ 2n − 1,
since it acts symmetrically on the negative side. T moves mass 1
from the
point 2k to 2k+ 2 for k ≥ 1. At k = 0 the transport T moves 1
pn,0 to every side,
which is possible, since there is enough mass concentrated at 0.
By equation (3) we see that the transport T moves σn to τn, since, for 1 ≤
k ≤ 2n − 2, the first terms corresponds to the mass, which arrives from the left
and is added to σn, and the second term to the mass, which is transported away:
summing up one obtains τn. For k = 2n − 1, mass only arrives from the left. At
k = 0 mass is only transported away. By the symmetry of the problem around 0
and by the quadratic nature of the cost function (the distance of the transport is 2,
hence cost 22), we finally have
T (σn, τn) ≤ 2
2k + 1
2n+ k
2k + 1
By the Central Limit Theorem and uniform integrability of the function x 7→ x+ :=
max(0, x) with respect to the binomial approximations, we obtain
(2k)pn,k =
2/2 dx.
Hence
lim sup
n T (σn, τn) ≤
≈ 0.79788.
In order to obtain equality we start from the local monotonicity of the respective
transport maps on non-positive and non-negative numbers. It easily follows that
the given transport plan is cyclically monotone and hence optimal (see [5, Ch. 2]).
The subsequent equality allows also to consider estimates from below. Rewriting
(3) yields
τn(2k + 2) =
pn,k+1
( pn,k
pn,k+1
pn,k+2
1− pn,k+1
pn,k+2
+ σn(2k + 2)
for 0 ≤ k ≤ 2n− 3, and
τn(2k + 2) =
pn,k+1
( pn,k
pn,k+1
+ σn(2k + 2)
for k = 2n− 2. Furthermore,
pn,k+1
− 1 =
k+2n−1
) − 1 = k + 2n
2n− k − 1
− 1 = 2k + 1
2n− k − 1
≥ 2k + 1
for 0 ≤ k ≤ 2n− 2. This yields by a reasoning similar to the above that
T (σn, τn) ≥
pn,k+1
2k + 1
hence
lim inf
n T (σn, τn) ≥
Remark 2. Let p ≥ 2 be an integer. By slight modifications of the proof of Propo-
sition 1 we can construct sequences of measures (µn)n∈N and (νn)n∈N, such that
the quadratic Wasserstein distances of k-fold convolutions are bounded from below
by 1 for all k which are not multiples of p, while
T (µ(p)n , ν(p)n ) = 0.
Remark 3. Assume the notations of [5]. In the previous considerations we can
replace the quadratic cost function by any other lower semi-continuous cost function
c : R2 → [0,+∞], which is bounded on parallels to the diagonal and vanishes on
the diagonal. For example, if we choose c(x, y) = |x− y|r for 0 < r < ∞, then we
obtain the same asymptotics as in Proposition 1 (with a different constant).
Remark 4. We have used in the above proof that τn is obtained from σn by con-
volving with the measure ρ. In fact, this theme goes back (at least) as far as L.
Bachelier’s famous thesis from 1900 on option pricing [2, p. 45]. Strictly speaking,
L. Bachelier deals with the measure assigning mass 1
to −1, 1 and considers con-
secutive convolutions, instead of the above ρ. Hence convolutions with ρ correspond
to Bachelier’s result after two time steps. Bachelier makes the crucial observation
that this convolution leads to a radiation of probabilities: Each stock price x radi-
ates during a time unit to its neighboring price a quantity of probability proportional
to the difference of their probabilities. This was essentially the argument which al-
lowed us to prove (1). Let us mention that Bachelier uses this argument to derive
the fundamental relation between Brownian motion (which he was the first to define
and analyse in his thesis) and the heat equation (compare e.g. [3] for more on this
topic).
Remark 5. Having established the above counterexample, it becomes clear how
to modify Problem 7.20 from [5] to give it a chance to hold true. This possible
modification was also pointed out to us by C. Villani.
Problem 1. Let µ be a probability measure on Rd with finite second moment
and vanishing barycenter, and γ the Gaussian measure with same first and second
moments. Does (T (µ(n), γ))n≥1 decrease monotonically to zero?
When entropy is considered instead of the quadratic Wasserstein distance the
corresponding question on monotonicity was answered affirmatively in the recent
paper [1].
One may also formulate a variant of Problem 7.20 as given in (1) by replacing
the measure ν through a log-concave probability distribution. This would again
generalize problem 1.
References
[1] S. Artstein, K. M. Ball, F. Barthe and A. Naor, Solution of Shannon’s Problem on the Mono-
tonicity of Entropy, Journal of the AMS 17(4), 2004, pp. 975–982.
[2] L. Bachelier, Theorie de la Speculation, Paris, 1900, see also: http://www.numdam.org/en/.
[3] W. Schachermayer, Introduction to the Mathematics of Financial Markets, LNM 1816 - Lec-
tures on Probability Theory and Statistics, Saint-Flour summer school 2000 (Pierre Bernard,
editor), Springer Verlag, Heidelberg (2003), pp. 111–177.
[4] H. Tanaka, An inequality for a functional of probability distributions and its applications to
Kac’s one-dimensional model of a Maxwell gas, Zeitschrift fr Wahrscheinlichkeitstheorie und
verwandte Gebiete 27, 47–52, 1973
[5] C. Villani, Topics in Optimal Transportation, Graduate Studies in Mathematics 58, American
Mathematical Society, Providence Rhode Island, 2003.
Financial and Actuarial Mathematics, Technical University Vienna, Wiedner Haupt-
strasse 8–10, A-1040 Vienna, Austria.
|
0704.0877 | The bimodality of type Ia Supernovae | The bimodality of type Ia Supernovae
F. Mannucci∗, N. Panagia† and M. Della Valle∗∗
∗INAF - IRA, Firenze, Italia
†STScI, USA; INAF - OAC - Catania, Italia; SN Ltd - Virgin Gorda, BVI
∗∗INAF - OAA - Firenze, Italia
Abstract. We comment on the presence of a bimodality in the distribution of delay time between
the formation of the progenitors and their explosion as type Ia SNe. Two "flavors" of such bimodality
are present in the literature: a weak bimodality, in which type Ia SNe must explode from both young
and old progenitors, and a strong bimodality, in which about half of the systems explode within
10^8 years from formation. The weak bimodality is observationally based on the dependence of the
rates with the host galaxy Star Formation Rate (SFR), while the strong one on the different rates
in radio-loud and radio-quiet early-type galaxies. We review the evidence for these bimodalities.
Finally, we estimate the fraction of SNe which are missed by optical and near-IR searches because
of dust extinction in massive starbursts.
Keywords: Supernova rates
PACS: 97.60.Bw
INTRODUCTION
The supernova (SN) rates in different types of galaxies give strong information about
the progenitors. For example, soon after the introduction of the distinction between “type
I” and “type II” SNe [1], van den Bergh [2] pointed out that type IIs are frequent in
late type galaxies “which suggest their affiliation with Baade’s population I”. On the
contrary, type Is are the only type observed in elliptical galaxies and this fact "suggests
that they occur among old stars". This conclusion is still often accepted, even if it is
now known not to be generally valid: first, SN Ib/c were included in the broad class of
“type I” SNe, and, second, also a significant fraction of SNe Ia are known to have young
progenitors.
THE WEAK BIMODALITY IN TYPE IA SNE
In 1983, Greggio & Renzini [3] showed that the canonical binary star models for type
Ia SNe naturally predict that these systems explode from progenitors of very different
ages, from a few 107 to 1010 years. The strongest observational evidence that this is the
case was provided by Mannucci et al. [4] who analyzed the SN rate per unit stellar
mass in galaxies of all types. They found that the bluest galaxies, hosting the highest
Star Formation Rates (SFRs), have SN Ia rates about 30 times larger than those in the
reddest, quiescent galaxies. The higher rates in actively star-forming galaxies imply that
a significant fraction of SNe must be due to young stars, while SNe from old stellar
populations are also needed to reproduce the SN rate in quiescent galaxies. This lead
http://arxiv.org/abs/0704.0877v2
FIGURE 1. SN rate per unit stellar mass as a function of the B–K color of the parent galaxy (from
Mannucci et al. [4]) showing the strong increase of all the rates toward blue galaxies
Mannucci et al. [4] to introduce the simplified two component model for the SN Ia rate
(a part proportional to the stellar mass and another part to the SFR). These results were
later confirmed by Sullivan et al. [5], while Scannapieco & Bildsten [6], Matteucci et
al. [7] and Calura et al. [8] successfully applied this model to explain the chemical
evolution of galaxies and galaxy clusters. A more accurate description is based on the
Delay Time Distribution (DTD), which is found to span a wide range of delay time
between a few 10^7 and a few 10^10 years (Mannucci et al. [9]). The presence of a strong
observational result and the agreement with the predictions of several models (see also
Greggio [10]) make this conclusion very robust.
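Schematically, the two-component parametrisation is just a linear model for the SN Ia rate of a galaxy; a toy version (our illustration, with placeholder coefficients of roughly the right order of magnitude rather than the fitted values of the papers cited above) is:

def snia_rate(stellar_mass, sfr, A=5e-14, B=2.6e-3):
    """Two-component SN Ia rate: one term scaling with the stellar mass (the
    'tardy' population) and one with the star-formation rate (the 'prompt'
    population).  A is in SNe/yr per solar mass, B in SNe per solar mass of
    stars formed; both numbers are placeholders, not fitted values."""
    return A * stellar_mass + B * sfr

# a passive elliptical vs. a star-forming disk of the same stellar mass
print(snia_rate(1e11, 0.0))    # mass-dominated ('tardy') rate
print(snia_rate(1e11, 5.0))    # enhanced by the 'prompt' component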
THE STRONG BIMODALITY IN TYPE IA SNE
Della Valle et al. [11] studied the dependence of the SN Ia rate in early-type galaxies
on the radio power of the host galaxies, and concluded that the higher rate observed
in radio-loud galaxies is due to minor episodes of accretion of gas or capture of small
galaxies. Such events result in both fueling the central black hole, producing the radio
activity, and in creating a new generation of stars, producing the increase in the SN rate.
This effect can be used to derive information on the DTD of type Ia SNe once a model
of galaxy stellar population is introduced.
The difference between radio-loud and radio-quiet galaxies can be reproduced by
the model of early-type galaxy shown in the right panel of figure 2: most of the stars
are formed in a remote past, about 10^10 years ago, while a small minority of stars are
created in a number of subsequent bursts. A galaxy appears radio-loud when is observed
during the burst, radio-faint soon after, and radio-quiet during the quiescent inter-burst
period. The abundance ratio between radio-quiet and radio-loud galaxies, about 0.1 in
our sample, means that the duty cycle of the burst events is about 10%. As the duration
FIGURE 2. Left: (B–K) color distribution of early-type radio-loud (solid line) and radio-quiet galaxies
(dashed line) in three stellar mass ranges. The two groups of galaxies have practically indistinguishable
color distributions, meaning that the stellar populations are similar. Right: Model of early-type galaxies
reproducing both the dichotomy radio-loud/radio-faint and the similar (B–K) colors.
of the radio-loud phase is about 10^8 years, in 10^10 years the early-type galaxies are
expected to have experienced 10 small bursts, i.e., 1 every 10^9 years and lasting for
about 10^8 years.
This model naturally explains the fact that radio-loud and radio-quiet early-type
galaxies have very similar (B–K) color, a sensitive indicator of star formation and stellar
age. This is shown in the left panel of Fig. 2, where the two color distributions are
compared. Only a small difference in the median of the two distributions might be
present at any mass, i.e., the radio-loud galaxies appear to be 0.03-0.06 mag bluer, and
this could be the effect of last on-going burst of star formation.
The amount of mass in younger stars can be estimated from the (B–K) color, that
is consistent with the value of (B–K)∼4.1 typical of old stellar populations. By using
the Bruzual & Charlot [12] model, we obtain that no more than 3% of stellar mass
can be created in the 10 bursts (0.3% of mass each) if we assume negligible extinction,
otherwise the predicted color would be too blue. The maximum mass in new stars can
reach 5% assuming an average extinction of the new component of AV = 1. More details
will be given in a forthcoming paper.
This model predicts that traces of small amounts of recent star formation should be
present in most of the local early-type galaxies. This is actually the case: most of them
show very faint emission lines (Sarzi et al. [13]), tidal tails (van Dokkum [14]), dust
lanes (Colbert et al. [15]), HI gas (Morganti et al. [16]), molecular gas (Welch & Sage
[17]), and very blue UV colors (Schawinski et al. [18]).
Using this model with a total fraction of new stars of 3%, we derive the results shown
in figure 3. We see that the theoretical models by Greggio & Renzini [3] and Matteucci
& Recchi [19], while giving a good description of the rates displayed in figure 1, predicts
too few SNe in the first 108 years (about 11%) to accurately fit figure 3. The observed
rates can be reproduced only by adding a “prompt” component (in this case modeled
FIGURE 3. Left: The two DTD studied here, from Greggio & Renzini [3] (GR83) and Mannucci et
al. [9] (MDP06). The latter is the sum of two exponentially declining distributions with 3 and 0.03 Gyr
of decay time, respectively, each one containing 50% of the events. Right: the solid dots with error bars
show the type Ia SN rate as a function of the radio power of the parent galaxy. The dashed line shows the
results of the GR83 model, the solid one those of MDP06.
in terms of an exponentially declining distribution with τ =0.03 Gyr) to a “tardy”
component (another declining exponential with τ = 3 Gyr), each one comprising 50%
of the total number of events.
It should be noted that this strong bimodality is based on a small number of SNe (21)
in early-type galaxies, and the results of oncoming larger SN searches are needed to
confirm (or discard) this result.
EVOLUTION OF THE SN RATE WITH REDSHIFT
A related issue is how the rates measured in the local universe and discussed above
are expected to evolve with redshift. The usual approach is to start from the integrated
cosmic star formation history and obtain the rates by using some assumptions on pro-
genitors (for core-collapse SNe) and on explosion efficiency and DTD (for SN Ia, see
Mannucci et al. [4] for a discussion). Near-infrared and radio searches for core-collapse
supernovae in the local universe (Maiolino et al. [20], Mannucci et al. [21], Lonsdale et
al. [22]) have shown that the vast majority of the events occurring in massive starbursts
are missed by current optical searches because they explode in very dusty environments.
Recent mid- and far-infrared observations (see Pérez-González et al. [23] and references
therein) have shown that the fraction of star-formation activity that takes place in very
luminous dusty starbursts sharply increases with redshift and becomes the dominant star
formation component at z≥0.5. As a consequence, an increasing fraction of SNe are
expected to be missed by high-redshift optical searches. By making reasonable assump-
tions on the number of SNe that can be observed by optical and near-infrared searches
in the different types of galaxies (see Mannucci et al. [24] for details) we obtain the re-
sults shown in figure 4. We estimate that 5–10% of the local core-collapse (CC) SNe are
out of reach of the optical searches. The fraction of missing events rises sharply toward
FIGURE 4. Evolution of the rates of type Ia (two left-most panels) and core-collapse SNe (two right-
most panels), from Mannucci et al. [24]. In the first and third panels, the dashed line shows the total
rate expected from the cosmic star formation history, the light grey area the rate of SNe that can be
recovered by the optical and near-IR searches, and the dark grey area the rate of SNe exploding inside
dusty starbursts and which will be missed by the searches. The second and forth panels show the fraction
of missed SNe.
z=1, where about 30% of the CC SNe will be undetected. At z=2 the missing fraction
will be about 60%. Correspondingly, for type Ia SNe, our computations provide missing
fractions of 15% at z=1 and 35% at z=2. Such large corrections are crucially important
to compare the observed SN rate with the expectations from the evolution of the cosmic
star formation history, and to design the future SN searches at high redshifts.
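The rate predictions discussed above amount, schematically, to convolving the cosmic star-formation history with a delay-time distribution; a toy version of that bookkeeping (our illustration, with made-up functional forms, not the actual inputs of the paper) is:

import numpy as np

t = np.linspace(0.05, 13.5, 1000)          # cosmic time in Gyr
dt = t[1] - t[0]

sfh = np.exp(-((t - 3.5) / 2.5)**2)        # toy cosmic SFR history (arbitrary units)

# toy DTD: 50% 'prompt' (tau = 0.03 Gyr) + 50% 'tardy' (tau = 3 Gyr) exponentials
tau = t
dtd = 0.5 * np.exp(-tau / 0.03) / 0.03 + 0.5 * np.exp(-tau / 3.0) / 3.0

# SN Ia rate(t) = integral of SFR(t - tau) * DTD(tau) d tau  (discrete convolution)
rate = np.convolve(sfh, dtd, mode="full")[:len(t)] * dt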
REFERENCES
1. R. Minkowski, 1941, PASP, 53, 224
2. S. van den Bergh, 1959, AnAp, 22, 123
3. L. Greggio & A. Renzini, 1983, ApJ, 118, 217
4. F. Mannucci, et al., 2005, A&A, 433, 807
5. M. Sullivan et al., 2006, ApJ, 648, 868
6. E. Scannapieco & L. Bildsten, 2005, ApJ, 629, L85
7. F. Matteucci et al., 2006, MNRAS, 372, 265
8. F. Calura, F. Matteucci, & P. Tozzi, 2007, MNRAS, in press (astro-ph/0702714)
9. F. Mannucci, M. Della Valle & N. Panagia, 2006, MNRAS, 370, 773
10. L. Greggio, 2005, A&A, 441, 1055
11. M. Della Valle et al., 2005, ApJ, 629, 750
12. G. Bruzual & S. Charlot, 2003, MNRAS, 341, 33
13. M. Sarzi et al., 2006, MNRAS, 366, 1151
14. P. van Dokkum, 2005, AJ, 130, 264
15. J. W. Colbert et al., 2001, AJ, 121, 808
16. R. Morganti et al., 2006, MNRAS, 371, 157
17. G. A. Welch & L. J. Sage, 2003, ApJ, 584, 260
18. K. Schawinski et al., 2007, ApJ, in press (astro-ph/0601036)
19. F. Matteucci & S. Recchi, 2001, ApJ, 558, 351
20. R. Maiolino et al., 2002, A&A, 389, 84
21. F. Mannucci et al., 2003, A&A, 401, 519
22. C. J. Lonsdale et al., 2006, ApJ, 647, 185
23. P. G. Pérez-González et al., 2005, ApJ, 630, 82
24. F. Mannucci, M. Della Valle & N. Panagia, 2007, MNRAS, in press (astro-ph/0702355)
|
0704.0878 | Structural relaxation around substitutional Cr3+ in MgAl2O4 | Structural relaxation around substitutional Cr3+ in MgAl2O4
Amélie Juhin,∗ Georges Calas, Delphine Cabaret, and Laurence Galoisy
Institut de Minéralogie et de Physique des Milieux Condensés,
UMR CNRS 7590
Université Pierre et Marie Curie, Paris 6
140 rue de Lourmel, F-75015 Paris, France
Jean-Louis Hazemann†
Laboratoire de Cristallographie, CNRS, 25 avenue des Martyrs, BP 166, 38042 Grenoble cedex 9, France
(Dated: October 26, 2018)
The structural environment of substitutional Cr3+ ion in MgAl2O4 spinel has been investigated
by Cr K-edge Extended X-ray Absorption Fine Structure (EXAFS) and X-ray Absorption Near
Edge Structure (XANES) spectroscopies. First-principles computations of the structural relaxation
and of the XANES spectrum have been performed, with a good agreement to the experiment. The
Cr-O distance is close to that in MgCr2O4, indicating a full relaxation of the first neighbors, and the
second shell of Al atoms relaxes partially. These observations demonstrate that Vegard’s law is not
obeyed in the MgAl2O4-MgCr2O4 solid solution. Despite some angular site distortion, the local D3d
symmetry of the B-site of the spinel structure is retained during the substitution of Cr for Al. Here,
we show that the relaxation is accomodated by strain-induced bond buckling, with angular tilts of
the Mg-centred tetrahedra around the Cr-centred octahedron. By contrast, there is no significant
alteration of the angles between the edge-sharing octahedra, which build chains aligned along the
three four-fold axes of the cubic structure.
PACS numbers: 61.72.Bb, 82.33.Pt, 78.70.Dm, 71.15.Mb
I. INTRODUCTION
Most multicomponent materials belong to complete or
partial solid solutions. The presence of chemical sub-
stitutions gives rise to important modifications of the
physical and chemical properties of the pure phases. For
instance, the addition of a minor component can im-
prove significantly the electric, magnetic or mechanical
behaviour of a material.1,2,3 Another evidence for the
presence of impurities in crystals comes from the modifi-
cation of optical properties such as coloration. Transition
metal ions like Cr3+ cause the coloration of wide band
gap solids, because of the splitting of 3d-levels under the
action of crystal field.4 Despite the ubiquitous presence
of substitutional elements in solids, their accommoda-
tion processes and their structural environment are still
discussed,5 since they have important implications. For
example, the interpretation of the color differences be-
tween Cr-containing minerals (e.g. ruby, emerald, red
spinel) requires to know the structural environment of
the coloring impurity.4,6,7,8 The ionic radius of a sub-
stitutional impurity being usually different from that of
the substituted ion, the accommodation of the mismatch
imposes a structural relaxation of the crystal structure.
Vegard’s law states that there is a linear relationship
between the concentration of a substitutional impurity
and the lattice parameters, provided that the substi-
tuted cation and impurity have similar bonding proper-
ties. Chemically selective spectroscopies, like Extended
X-ray Absorption Fine Structure (EXAFS), have pro-
vided evidence that diffraction studies of solid solutions
give only an average vision of the microscopic states and
that Vegard’s law is limited.9,10,11 Indeed, a major result
concerns the existence of a structural relaxation of the
host lattice around the substitutional cation. This im-
plies the absence of modification of the site occupied by
a doping cation, when decreasing its amount in a solid
solution. This important result has been observed in var-
ious materials, including III-V semi-conductors or mixed
salts:12,13 e. g., in mixed alkali halides, some important
angular buckling deviations have been observed.13 Re-
cently, the use of computational tools, as a complement
of EXAFS experiments, has been revealed successful for
the study of oxide/metal epilayers.14 In oxides contain-
ing dilute impurities, this combined approach is manda-
tory. It has been recently applied to the investigation of
the relaxation process around Cr dopant in corundum:
in the α-Al2O3 - α-Cr2O3 system, the radial relaxation
was found to be limited to the first neighbors around Cr,
while the angular relaxation is weak.8,15
In this work, we investigate the relaxation caused by
the substitution of Al3+ by Cr3+ in spinel MgAl2O4,
which gives rise to a solid solution, as observed for corun-
dum α-Al2O3. The spinel MgAl2O4 belongs to an impor-
tant range of ceramic compounds, which has attracted
considerable interest among researchers for a variety of
applications, owing to its electrical, mechanical, magnetic and
optical properties.16 The spinel structure is based on an
fcc close-packing, with Fd3̄m space group symmetry.
Its chemical composition is expressed as AB2X4, where
A and B are tetrahedral and octahedral cations, respec-
tively, and X is an anion. These two types of cations
define two different cationic sublattices, which may in-
duce a very different relaxation process than in corun-
dum. In the normal spinel structure, the octahedra host
trivalent cations and exhibit D3d site symmetry. This
corresponds to a small distortion along the [111] direc-
tion, arising from a departure of the position of oxy-
gen ligands from a cubic arrangement. Small amounts
of chromium oxide improve the thermal and mechanical
properties of spinel.1 A color change from red to green
is also observed with increasing Cr-content. In this arti-
cle, we report new results on the local geometry around
Cr3+ in spinel MgAl2O4, using a combination of EXAFS
and X-ray Absorption Near Edge Structure (XANES).
The experimental data are compared to those obtained
by theoretical calculations, based on the Density Func-
tional Theory in the Local Spin Density Approximation
(DFT-LSDA): this has enabled us to confirm the local
structure around substitutional Cr3+ and investigate in
detail the radial and angular aspects of the relaxation.
The paper is organized as follows. Section II is dedi-
cated to the methods, including the sample description
(Sec. II A), the X-ray absorption measurements and
analysis (Sec. II B), and the computational details (Sec.
II C). Section III is devoted to the results and discussion.
Conclusions are given in Sec. IV.
II. MATERIALS AND METHODS
A. Sample description
Two natural gem-quality red spinel single crystals from
Mogok, Burma (Cr-1, Cr-2) were investigated. They
contain 70.0 and 71.4 wt% Al2O3, 0.70 and 1.03
wt% Cr2O3, and 26.4 and 25.3 wt% MgO, respectively. These composi-
tions were analyzed using the Cameca SX50 electron mi-
croprobe at the CAMPARIS analytical facility of the Uni-
versities of Paris 6/7, France. A 15 kV voltage with a 40
nA beam current was used. X-ray intensities were cor-
rected for dead-time, background, and matrix effects us-
ing the Cameca ZAF routine. The standards used were
α-Al2O3, α-Cr2O3 and MgO.
B. X-ray Absorption Spectroscopy measurements
and analysis
Cr K-edge (5989 eV) X-ray Absorption Spectroscopy
(XAS) spectra were collected at room temperature at
beamline BM30b (FAME), at the European Synchrotron
Radiation Facility (Grenoble, France) operated at 6 GeV.
The data were recorded using the fluorescence mode with
a Si (111) double crystal and a Canberra 30-element Ge
detector.17 We used steps of 0.1 eV and 0.05 Å−1
in the XANES and EXAFS regions, respectively. Data
treatment was performed using ATHENA following the
usual procedure and the EXAFS data were analyzed us-
ing IFEFFIT, with the support of ARTEMIS.18 The de-
tails of the fitting procedure can be found elsewhere.19
An uvarovite garnet, Ca3Cr2Si3O12, was used as model
compound to derive the value of the amplitude reduction
factor S0² (0.81) needed for fitting. For each sample, a
multiple-shell fit was performed in the q-space, including
the first four single scattering paths: the photoelectron is
backscattered either by the first (O), second (Al or Cr),
third (O) or fourth (Mg) neighbors. Treating identically
the third and fourth paths, we used a single energy shift
∆e0 for all paths, three different path lengths R and three
independent values of the Debye-Waller factor σ2. In a
first step, the number of neighbors N was fixed to the
path degeneracy. In a second step, a single amplitude
parameter was fitted for the last three shells, assuming
a proportional variation of the number of atoms on each
shell.
C. Computations
1. Structural relaxation
In order to complement the structural information
from EXAFS, a simulation of the structural relaxation
was performed to quantify the geometric surrounding
around an isolated Cr3+. The calculations were done in a
neutral supercell of MgAl2O4, using a first-principles to-
tal energy code based on DFT-LSDA.20 We used a plane-wave
basis set and norm-conserving pseudopotentials21
in the Kleinman-Bylander form.22 For Mg, we considered
3s, 3p, 3d as valence states (core radii of 1.05 a.u, ℓ=2
taken as local part) and those of Ref.15 for Al, Cr, O.
We first determined the structure of bulk MgAl2O4. We
used a unit cell, which was relaxed with a 2×2×2 k-point
grid for electronic integration in the Brillouin Zone and a
cut-off energy of 90 Ry. We obtained a lattice constant of
7.953 Å and an internal parameter of 0.263 (respectively
-1.6 % and +0.3 % relative to experiment),23 which
are consistent with previous calculations.16 In order to
simulate the Cr defect, we used a 2×2×2 supercell, built
using the relaxed positions of the pure phase. It con-
tains 1 neutral Cr, 31 Al, 16 Mg and 64 O atoms. It was
chosen large enough to minimize the interaction between
two paramagnetic ions, with a minimal Cr-Cr distance
of 11.43 Å. While the size of the supercell is kept fixed,
all atomic positions are relaxed in order to investigate
long-range relaxation. We used the same cut-off energy
and a single k-point sampling. The convergence of the
calculation was verified by comparing it to a computa-
tion with a 2×2×2 k-point grid, and discrepancies in the
atomic forces were lower than 0.3 mRy/a.u. In order to
compare directly the theoretical bond distances to those
obtained by EXAFS spectroscopy, the initial slight un-
derestimation of the lattice constant (systematic within
the LDA)24 was removed by rescaling the lattice param-
eter by -1.6 %. This rescaling is homothetic and does not
affect the relative atomic positions.
2. XANES simulations
As the analysis of the experimental XANES data is
not straightforward, ab initio XANES simulations are re-
quired to relate the experimental spectral features to the
local structure around the absorbing atom. The method
used for the XANES calculations is described in Refs. 25 and 26.
The all-electron wave-functions are reconstructed within
the projector augmented wave framework.27 In order to
allow the treatment of large systems, the scheme uses a
recursion method to construct a Lanczos basis and then
compute the cross section as a continued fraction.28,29
The XANES spectrum is calculated in the electric dipole
approximation, using the same first-principles total en-
ergy code as the one used for the structural relaxation. It
was carried out in the relaxed 2×2×2 supercell (i.e 112
atoms), which contains one Cr atom and results from
ab initio energy minimization mentioned in the previ-
ous subsection. The pseudopotentials used are the same
as those used for structural relaxation, except for Cr.
Indeed, in order to take into account the core-hole ef-
fects, the Cr pseudopotential is generated with only one
1s electron. Convergence of the XANES calculation is
reached for the following parameters: a 70 Ry energy
cut-off for the plane-wave expansion, one k-point for the
self-consistent spin-polarized charge density calculation,
and a Monkhorst-Pack grid of 3×3×3 k-points in the
Brillouin Zone for the absorption cross-section calcula-
tion. The continued fraction is computed with a con-
stant broadening γ=1.1 eV, which takes into account the
core-hole lifetime.30
III. RESULTS AND DISCUSSION
Figure 1 shows the k3-weighted experimental EXAFS
signals for Cr-1 and Cr-2 samples and the Fourier Trans-
forms (FT) for the k-range 3.7-11.9 Å−1. The similari-
ties observed suggest a close environment for Cr in the
two samples (0.70 and 1.03 wt%-Cr2O3), which is con-
firmed by fitting the FT in the R-range 1.0-3.1 Å (see
Table I). The averaged Cr-O distance derived from EX-
TABLE I: Structural parameters obtained from the EXAFS analysis in the R range [1.0-3.1 Å] for the Cr-1 and Cr-2 samples. The energy shifts ∆e0 were found equal to 1.3 ± 1.5 eV. The obtained RF factors were 0.0049 and 0.0045.

Pair    Sample   R (Å)   N     σ² (Å²)
Cr-O    Cr-1     1.98    6.0   0.0031
Cr-O    Cr-2     1.98    6.0   0.0026
Cr-Al   Cr-1     2.91    5.3   0.0032
Cr-Al   Cr-2     2.91    5.4   0.0033
Cr-O    Cr-1     3.39    1.8   0.0079
Cr-O    Cr-2     3.37    1.8   0.0077
Cr-Mg   Cr-1     3.39    5.3   0.0079
Cr-Mg   Cr-2     3.39    5.4   0.0077
FIG. 1: Fourier-transform of k3-weighted EXAFS function for
Cr-1 and Cr-2 samples (dashed and solid lines respectively).
Inset: background-subtracted data
FIG. 2: (a) Inverse-FT of EXAFS data (dots) and fitted signal
(solid line) for R=1.0-3.1 Å. (b) Inverse-FT of EXAFS data
(dots) for R=2.0-3.1 Å, multi-shell fit with Cr-Al pairs (solid
line) and theoretical function with Cr-Cr pairs (dashed line)
in the same structural model.
TABLE II: First, second and third neighbor mean distances (in Å) from central M3+ in the different structures considered in this work.

         MgAl2O4:Cr3+ exp   MgAl2O4:Cr3+ calc   MgAl2O4 exp(a)   MgCr2O4 exp(b)
Cr-O          1.98               1.99                 —               1.99
Al-O           —                  —                  1.93              —
Cr-Al         2.91               2.88                 —                —
Cr-Cr          —                  —                   —               2.95
Al-Al          —                  —                  2.86              —
Cr-O          3.37               3.34                 —               3.45
Al-O           —                  —                  3.34              —
Cr-Mg         3.39               3.36                 —               3.45
Al-Mg          —                  —                  3.35              —

(a) from Ref. 23
(b) from Ref. 34
AFS data is equal to 1.98 Å (± 0.01 Å), with six oxygen
first neighbors. The second shell is composed of six Al
atoms, located at 2.91 Å (± 0.01 Å). Two oxygen and
six magnesium atoms compose the further shells, at dis-
tances of 3.38 Å and 3.39 Å (± 0.03 Å). We investigated
in detail the chemical nature of these second neighbors,
by fitting the second peak on the FT (2.0-3.1 Å) with ei-
ther a Cr or an Al contribution, the latter corresponding
to a statistical Cr-distribution (Cr/Al ∼ 0.01). The only
satisfactory fits were obtained in the latter case (Fig. 2).
Calculated and experimental interatomic distances are
in good agreement (Table II), a confirmation of the
EXAFS-derived radial relaxation around Cr3+ after sub-
stitution. The symmetry of the relaxed Cr-site is re-
tained from the Al-site in MgAl2O4 and is similar to
the Cr-site in MgCr2O4. It belongs to the D3d point
group, with an inversion center, three binary axes and
a C3 axis (Fig. 3a). This result is consistent with opti-
cal absorption31 and Electron-Nuclear Double Resonance
experiments32 performed on MgAl2O4:Cr3+. Our first-
principles calculations also agree with a previous inves-
tigation of the first shell relaxation, using Hartree-Fock
formalism on an isolated cluster.33 As it has been men-
tioned previously, the simulation can provide comple-
mentary distances (Fig. 3b): the Al1-O distances, equal
to 1.91 Å, are slightly smaller than Al-O distances in
MgAl2O4. The Al1-Al2 distances are equal to 2.85 Å,
which is close to the Al-Al distances in MgAl2O4.
Apart from the radial structural modifications around
Cr, significant angular deviations are observed in the
doped structure. Indeed, the Cr-centred octahedron is
slightly more distorted in MgAl2O4:Cr3+, with six O-
Cr-O angles of 82.1◦ (and six supplementary angles of
97.9◦): O-Cr-O is more acute than O-Cr-O in MgCr2O4
(84.5◦, derived from refined structure)34 and than O-Al-
O in MgAl2O4 (either calculated in the present work,
83.5◦, or derived from refined structure, 83.9◦) (Fig. 3a).
At a local scale around the dopant, the sequence of edge-
sharing octahedra is hardly modified by the substitution
(Fig. 3b): the Cr-O-Al1 angles (95.1◦) are similar to
Cr-O-Cr in MgCr2O4 (95.2◦) and Al-O-Al in MgAl2O4
FIG. 3: (color online) (a) Cr-centred octahedron before re-
laxation (green) and after (red). (b) Model of structural dis-
tortions around Cr (red) in MgAl2O4:Cr3+. The O first
neighbors (black) and the Al1 (green) second neighbors are
displaced outward from the Cr dopant in the direction of the arrows.
FIG. 4: Cr K-edge XANES spectra in MgAl2O4:Cr3+. The
experimental signal (thick line) is compared with the theo-
retical spectra calculated in the relaxed structure (solid line)
and in the non-relaxed structure (dotted line)
(95.8◦). However, the six Al-centred octahedra connected
to the Cr-octahedron are slightly distorted (with six O-
Al1-O angles of 86.7◦), compared to O-Cr-O angles in
MgCr2O4 (84.5◦) and O-Al-O angles in MgAl2O4 (83.9◦).
This modification affects in a similar way the three types
of chains composed of edge-sharing octahedra, in agree-
ment with the conservation of the C3 axis. On the
contrary, the relative tilt angle between the Mg-centred
tetrahedra and the Cr-centred octahedron is very different
in MgAl2O4:Cr3+ (with a Cr-O-Mg angle of 117.4◦)
from that in MgCr2O4 and MgAl2O4 (with Cr-O-Mg and
Al-O-Mg angles of 124.5◦ and 121.0◦, respectively).
The experimental XANES spectrum of natural
MgAl2O4:Cr3+ is shown in Fig. 4. It is similar to that of
a synthetic Cr-bearing spinel.35 A good agreement with
the one calculated from the ab initio relaxed structure is
obtained, particularly in the edge region: the position,
intensity and shape of the strong absorption peak (peak
c) are well reproduced by the calculation. The small fea-
tures (peaks a and b) exhibited at lower energy are also
in good agreement with the experimental ones. In our
calculation, the pre-edge features (visible at 5985 eV on
the experimental data) cannot be reproduced, since we
only considered the electric dipole contribution to the X-
ray absorption cross-section: indeed, as it has been said
previously, the Cr-site is centrosymmetric in the relaxed
structure, which implies that the pre-edge features are
due to pure electric quadrupole transitions. The sen-
sitivity of the XANES calculation to the relaxation is
evaluated by computing the XANES spectrum for the
non-relaxed supercell, in which one Cr atom substitutes for
an Al atom at its exact position. The result is plotted in
Fig. 4: the edge region (peaks a, b and c) is clearly not
as well reproduced as in the relaxed model, and peak e
is not visible at all. Therefore, we can conclude that the
structural model obtained from our ab initio relaxation
is reliable.
The Cr-O distance is larger than the Al-O distance
in MgAl2O4, but is similar to the Cr-O distance in
MgCr2O4 (Table II). This demonstrates the existence of
an important structural relaxation around the substitu-
tional Cr3+ ion, which is expected since Cr3+ has a larger
ionic radius than Al3+ (0.615 Å vs 0.535 Å).36 The size
mismatch generates indeed a local strain, which locally
expands the host structure. As a result, the O atoms
relax outward from the Cr defect. This radial relaxation is
accompanied by a slight angular deviation of the O first
neighbors, as compared to the host structure. The mag-
nitude of the radial relaxation may be quantified by a
relaxation parameter ζ, defined by the relation:10

ζ = [R_Cr-O(MgAl2O4:Cr3+) − R_Al-O(MgAl2O4)] / [R_Cr-O(MgCr2O4) − R_Al-O(MgAl2O4)]
We find ζ = 0.83 (taking the Cr-O experimental dis-
tance), close to the full relaxation limit (ζ = 1), which
is more than in ruby α-Al2O3:Cr3+ (ζ = 0.76).8 Veg-
ard’s law, which corresponds to ζ = 0, is thus not obeyed
at the atomic scale. The Cr-Al distance is intermedi-
ate between the Al-Al and Cr-Cr distances in MgAl2O4
and MgCr2O4, which accounts for a partial relaxation
of the second neighbors, but the third and fourth shells
(O, Mg) do not relax, within the experimental and com-
putational uncertainties. The chains of Al-centred octa-
hedra are radially affected only at a local scale around
Cr: the Al second neighbors relax partially outward Cr,
with an Al1-O bond slightly shortened. The angular devi-
ations are also moderate (below 1◦), since the sequence
of octahedra is not modified, but these Al-centred oc-
tahedra are slightly distorted. Indeed, these octahedra
being edge-shared, the number of degrees of freedom is
reduced, and the polyhedra can either distort or tilt a
little, one around another. It is interesting to point out
that the three chains of octahedra are orientated along
the three four-fold axes of the cubic structure, which are
highly symmetric directions. On the contrary, an angular
relaxation (3.5◦) is observed for the Mg atoms, but without
radial modifications. This must be con-
nected to the fact that the tetrahedra share a vertex with
the Cr-centred octahedron, a configuration which allows
more flexibility for relative rotation of the polyhedra.
The extension of the relaxation process up to the sec-
ond shell is not observed in the corundum solid solution,
in which it is limited to the first coordination shell.15
Such a difference between these two solid solutions can
be related to the lattice rigidity: the bulk modulus B
is smaller in MgAl2O4 than in α-Al2O3, 200 GPa and
251 GPa, respectively.37 This difference directly arises
from the peculiarity of the structure of these two crystals:
in the spinel structure, one octahedron is edge-shared
to 6 Al octahedra and corner-shared to 6 Mg-centred
tetrahedra (Fig. 3b). In corundum, each octahedron is
face-shared with another, in addition to corner and edge-
sharing bonds: this is at the origin of the rigidity of the
corundum structure, which is less able to relax around a
substitutional impurity such as Cr3+, and relaxation is
thus limited to the first neighbors.
IV. CONCLUSIONS
This study provides direct evidence of the structural
relaxation during the substitution of Cr for Al in
MgAl2O4 spinel. The local structure determined by X-
ray Absorption Spectroscopy and first-principles calcu-
lations shows similar Cr-O distances and local symmetry
in dilute and concentrated spinels. This demonstrates
that, at the atomic scale, Vegard’s law is not obeyed in
the MgAl2O4-MgCr2O4 solid solution. Though this re-
sult has been obtained in other types of materials (semi-
conductors, mixed salts), it is particularly relevant for
oxides like spinel and corundum: indeed, the application
of Vegard’s law has long been a structural tool to in-
terpret, within the so-called “point charge model”,4 the
color of minerals containing transition metal ions. In
spinel, the full relaxation of the first shell is partially ac-
commodated by strain-induced bond buckling, which was
found to be weak in corundum: important angular tilts
of the Mg-centred tetrahedra around the Cr-centred oc-
tahedron have been calculated, while the angles between
Cr- and Al-bearing edge-sharing octahedra are hardly af-
fected. The improved thermal and mechanical properties
of Cr-doped spinel may be explained by remanent local
strain fields induced by the full relaxation of the structure
around chromium, as it has been observed in other solid
solutions.2 Another important consequence of relaxation
concerns the origin of the partitioning of elements between
minerals and liquids in geochemical systems.5 Finally, the
data obtained in this study will provide a structural basis
for discussing the origin of color in red spinel and its vari-
ation at high Cr-contents. Indeed, the origin of the color
differences between Cr-containing minerals (ruby, emer-
ald, red spinel, alexandrite) is still actively debated.6,8,38
Acknowledgments
The authors are very grateful to O. Proux (FAME
beamline) for help during the experiments. The theoret-
ical part of this work was supported by the French
CNRS computational Institut of Orsay (Institut du
Développement et de Recherche en Informatique Scien-
tifique) under project 62015. This work has been greatly
improved through fruitful discussions with E. Balan, F.
Mauri, M. Lazzeri and Ph. Sainctavit. This is IPGP
Contribution n◦XXXX.
∗ Electronic address: [email protected]
† Electronic address: [email protected]
1 D. Levy, G. Artioli, A. Gualtieri, S. Quartieri, and M.
Valle, Mater. Res. Bull. 34, 711 (1999)
2 C. Laulhé, F. Hippert, J. Kreisel, M. Maglione, A. Simon,
J. L. Hazemann, and V. Nassif, Phys. Rev. B 74, 014106
(2006)
3 A.I. Frenkel, D. M. Pease, J. I. Budnick, P. Metcalf, E. A.
Stern, P. Shanthakumar, and T. Huang, Phys. Rev. Lett.
97, 195502 (2006)
4 R. G. Burns, Mineralogical Applications of Crystal Field
Theory (Cambridge University Press, Cambridge, 1993)
5 J. Blundy, and B. Wood, Nature 372, 452 (1994)
6 J. M. Garcia-Lastra, M. T. Barriuso, J. A. Aramburu, and
M. Moreno, Phys. Rev. B 72, 113104 (2005)
7 M. Moreno, M. T. Barriuso, J. M. Garcia-Lastra, J. A.
Aramburu, P. Garcia-Fernandez, and M. Moreno, J. Phys.:
Condens. Matter 18, R315 (2006)
8 E. Gaudry, Ph. Sainctavit, F. Juillot, F. Bondioli, Ph.
Ohresser, and I. Letard, Phys. Chem. Minerals 32, 710
(2006)
9 K. Langer, Z. Kristallogr. 216, 87 (2001)
10 J. L. Martins, and A. Zunger, Phys. Rev. B 30, 6217 (1984)
11 L. Galoisy, Phys. Chem. Minerals 23, 217 (1996)
12 J. C. Mikkelsen, Jr., and J. B. Boyce, Phys. Rev. B 28,
7130 (1983)
13 A. I. Frenkel, E. A. Stern, A. Voronel, M. Qian, and M.
Newville, Phys. Rev. B 49, 11662 (1993)
14 C. Lamberti, E. Groppo, C. Prestipino, S. Casassa, A. M.
Ferrari, C. Pisani, C. Giovanardi, P. Luches, S. Valeri, and
F. Boscherini, Phys. Rev. Lett. 91, 046101 (2003)
15 E. Gaudry, A. Kiratisin, P. Sainctavit, C. Brouder, F.
Mauri, A. Ramos, A. Rogalev, and J. Goulon, Phys. Rev.
B 67, 094108 (2003)
16 P. Thibaudeau, and F. Gervais, J. Phys.: Condens. Matter
14, 3543 (2002)
17 O. Proux, X. Biquard, E. Lahera, J-J Menthonnex, A.
Prat, O. Ulrich, Y. Soldo, P. Trévisson, G. Kapoujyan, G.
Perroux, P. Taunier, D. Grand, P. Jeantet, M. Deleglise, J-
P. Roux, and J-L. Hazemann, Phys. Scr. T115, 970 (2005)
18 M. Newville, J. Synch. Radiation 8, 322 (2001)
19 B. Ravel and M. Newville, J. Synch. Radiation 12, 537 (2005)
20 Calculations were performed with PARATEC (PARAllel
Total Energy Code) by B. Pfrommer, D. Raczkowski, A.
Canning, S. G. Louie, Lawrence Berkeley National Lab-
oratory (with contributions from F. Mauri, M. Cote, Y.
Yoon, Ch. Pickard and P. Haynes). For more information
see www.nersc.gov/projects/paratec
21 N. Troullier, and J. L. Martins, Phys. Rev. B 43, 1993
(1991)
22 L. Kleinman, and D. M. Bylander, Phys. Rev. Lett. 48,
1425 (1982)
23 T.Yamanaka, and Y. Takeuchi, Z. Kristallogr. 165, 65
(1983)
24 S. G. Louie, S. Froyen, and M. L. Cohen , Phys. Rev. B
26, 1738 (1982)
25 M. Taillefumier, D. Cabaret, A.-M. Flank, and F. Mauri,
Phys. Rev. B 66, 195107 (2002)
26 D. Cabaret, E. Gaudry, M. Taillefumier, P. Sainctavit,
and F. Mauri, Physica Scripta, Proc. XAFS-12 conference
T115, 131 (2005)
27 P. E. Blöchl, Phys. Rev. B 50, 17953 (1994)
28 R. Haydock, V. Heine, and M. J. Kelly, J. Phys. C: Solid
State Phys. 5, 2845 (1972)
29 R. Haydock, V. Heine, and M. J. Kelly, J. Phys. C: Solid
State Phys. 8, 2591 (1975)
30 M. O. Krause, and J. H. Oliver, J. Phys. Chem. Ref. Data
8, 329 (1979)
31 D. L. Wood, and G.F. Imbush, J. Chem. Phys. 48, 5255
(1968)
32 D. Bravo, and R. Böttcher, J. Phys.: Condens. Matter 4,
7295 (1992)
33 S. L. Votyakov, A. V. Porotnikov, Y. V. Shchapova, E. I.
Yuryeava, and A. L. Ivanovskii, Int. J. Quant. Chem. 100,
567 (2004)
34 R. J. Hill, J. R. Craig, and G. V. Gibbs, Phys. Chem.
Minerals 4, 317 (1979)
35 D. Levy, G. Artioli, A. Gualtieri, S. Quartieri, and M.
Valle, Mat. Res. Bull. 34, 711 (1999)
36 R. D. Shannon, Acta Crystallogr, Sect. A 32, 751 (1976)
37 O. L. Anderson, and J. E. Nafe, J. Geophys. Res. 70, 3951
(1965)
38 J. M. Garcia-Lastra, J. A. Aramburu, M. T. Barriuso, and
M. Moreno, Phys. Rev. B 74, 115118 (2006)
|
0704.0879 | A Hierarchical Approach for Dependability Analysis of a Commercial
Cache-Based RAID Storage Architecture |
28th IEEE International Symposium on Fault-Tolerant Computing (FTCS-28), Munich, Germany,
IEEE Computer Society, June 1998, pp. 6-15
A Hierarchical Approach for Dependability Analysis
of a Commercial Cache-Based RAID Storage Architecture
M. Kaâniche1*, L. Romano2†, Z. Kalbarczyk2, R. Iyer2 and R. Karcich3
2Center for Reliable and High-Performance
Computing
University of Illinois at Urbana-Champaign
1308 W. Main St., Urbana, IL 61801, USA
[email protected]; {kalbar, iyer}@crhc.uiuc.edu
1LAAS-CNRS,
7, Av. du Colonel Roche
31077 Toulouse Cedex 4
France
[email protected]
3Storage Technology
2270 S 88th St. MS 2220
Louisville, CO 80028,
[email protected]
Abstract
We present a hierarchical simulation approach for the
dependability analysis and evaluation of a highly available
commercial cache-based RAID storage system. The archi-
tecture is complex and includes several layers of overlap-
ping error detection and recovery mechanisms. Three ab-
straction levels have been developed to model the cache
architecture, cache operations, and error detection and
recovery mechanism. The impact of faults and errors oc-
curring in the cache and in the disks is analyzed at each
level of the hierarchy. A simulation submodel is associated
with each abstraction level. The models have been devel-
oped using DEPEND, a simulation-based environment for
system-level dependability analysis, which provides facili-
ties to inject faults into a functional behavior model, to
simulate error detection and recovery mechanisms, and to
evaluate quantitative measures. Several fault models are
defined for each submodel to simulate cache component
failures, disk failures, transmission errors, and data errors
in the cache memory and in the disks. Some of the parame-
ters characterizing fault injection in a given submodel cor-
respond to probabilities evaluated from the simulation of
the lower-level submodel. Based on the proposed method-
ology, we evaluate and analyze 1) the system behavior un-
der a real workload and high error rate (focusing on error
bursts), 2) the coverage of the error detection mechanisms
implemented in the system and the error latency distribu-
tions, and 3) the accumulation of errors in the cache and
in the disks.
1 Introduction
A RAID (Redundant Array of Inexpensive Disks) is a
set of disks (and associated controller) that can automati-
cally recover data when one or more disks fail [4, 13].
Storage architectures using a large cache and RAID disks
* Was a Visiting Research Assistant Professor at CRHC, on leave from LAAS-
CNRS, when this work was performed.
† Was a Visiting Research Scholar at CRHC, on leave from Dipartimento di
Informatica e Sistemistica, University of Naples, Italy
are becoming a popular solution for providing high per-
formance at low cost without compromising much data re-
liability [5, 10]. The analysis of these systems has mainly focused
on performance (see, e.g., [9, 11]). The cache is assumed to
be error free, and only the impact of errors in the disks is
investigated. The impact of errors in the cache is addressed
(to a limited extent) from a design point of view in [12],
where the architecture of a fault-tolerant, cache-based
RAID controller is presented. Papers studying the impact
of errors in caches can be found in other applications not
related to RAID systems (e.g., [3]).
In this paper, unlike previous work, which mainly ex-
plored the impact of caching on the performance of disk
arrays, we focus on dependability analysis of a cache-
based RAID controller. Errors in the cache might have a
significant impact on the performance and dependability of
the overall system. Therefore, in addition to the fault toler-
ance capabilities provided by the disk array, it is necessary
to implement error detection and recovery mechanisms in
the cache. This prevents error propagation from the cache
to the disks and users, and it reduces error latency (i.e.,
time between the occurrence of an error and its detection
or removal). The analysis of the error detection coverage
of these mechanisms, and of error latency distributions,
early in the design process provides valuable information.
System manufacturers can understand, early on, the fault
tolerance capabilities of the overall design and the impact
of errors on performance and dependability.
In our case study, we employ hierarchical simulation,
[6], to model and evaluate the dependability of a commer-
cial cache-based RAID architecture. The system is decom-
posed into several abstraction levels, and the impact of
faults occurring in the cache and the disk array is evaluated
at each level of the hierarchy. To analyze the system under
realistic operational conditions, we use real input traces to
drive the simulation. The system model is based on the
specification of the RAID architecture, i.e., we do not
evaluate a prototype system. Simulation experiments are
conducted using the DEPEND environment [7].
The cache architecture is complex and consists of sev-
eral layers of overlapping error detection and recovery
mechanisms. Our three main objectives are 1) to analyze
how the system responds to various fault and error scenar-
ios, 2) to analyze error latency distributions taking into ac-
count the origin of errors, and 3) to evaluate the coverage
of error detection mechanisms. These analyses require a
detailed evaluation of the system’s behavior in the pres-
ence of faults. In general, two complementary approaches
can be used to make these determinations: analytical mod-
eling and simulation. Analytical modeling is not appropri-
ate here, due to the complexity of the RAID architecture.
Hierarchical simulation offers an efficient method to con-
duct a detailed analysis and evaluation of error latency and
error detection coverage using real workloads and realistic
fault scenarios. Moreover, the analysis can be completed
within a reasonable simulation time.
To best reproduce the characteristics of the input load, a
real trace file, collected in the field, is used to drive the
simulation. The input trace exhibits the well-known track
skew phenomenon, i.e., a few tracks among the address-
able tracks account for most of the I/O requests. Since
highly reliable commercial systems commonly tolerate iso-
lated errors, our study focuses on the impact of multiple
near-coincident errors occurring during a short period of
time (error bursts), a phenomenon which has seldom been
explored. We show that due to the high frequency of sys-
tem operation, a transient fault in a single system compo-
nent can result in a burst of errors that propagate to other
components. In other words, what is seen at a given ab-
straction level as a single error becomes a burst of errors at
a higher level of abstraction. Also, we analyze how bursts
of errors affect the coverage of error detection mechanisms
implemented in the cache and how they affect the error la-
tency distributions, (taking into account where and when
the errors are generated). In particular, we demonstrate that
the overlapping of error detection and recovery mecha-
nisms provides high error detection coverage for the over-
all system, despite the occurrence of long error bursts. Fi-
nally, analysis of the evolution of the number of faulty
tracks in the cache memory and in the disks shows an in-
creasing trend for the disks but an almost constant number
for cache memory.
This paper contains five sections. Section 2 describes
the system architecture and cache operations, focusing on
error detection and recovery mechanisms. Section 3 out-
lines the hierarchical modeling approach and describes the
hierarchical model developed for the system analyzed in
this paper. Section 4 presents the results of the simulation
experiments. Section 5 summarizes the main results of the
study and concludes the paper.
2 System presentation
The storage architecture analyzed in this paper (Figure
1) is designed to support a large amount of disk storage
and to provide high performance and high availability. The
storage system supports a RAID architecture composed of
a set of disk drives storing data, parity, and Reed-Solomon
coding information, which are striped across the disks [4].
This architecture tolerates the failure of up to two disks. If
a disk fails, the data from the failed disk is reconstructed
on-the-fly using the valid disks; the reconstructed data is
stored on a hot spare disk without interrupting the service.
Data transfer between the hosts and the disks is supervised
by the array controller. The array controller is composed of
a set of control units. The control units process user re-
quests received from the channels and direct these requests
to the cache subsystem. Data received from the hosts is as-
sembled into tracks in the cache. The number of tracks cor-
responding to a single request is application dependent.
Data transfers between the channels and the disks are per-
formed by the cache subsystem via reliable and high-speed
control and data busses. The cache subsystem consists of
1) a cache controller organized into cache controller inter-
faces to the channels and the disks and cache controller in-
terfaces to the cache memory (these interfaces are made of
redundant components to ensure a high level of availabil-
ity) and 2) cache volatile and nonvolatile memory. Com-
munication between the cache controller interfaces and the
cache memory is provided by redundant and multidirec-
tional busses (denoted as Bus 1 and Bus 2 in Figure 1).
The cache volatile memory is used as a data staging area
for read and write operations. The battery-backed nonvola-
tile memory is used to protect critical data against failures
(e.g., data modified in the cache and not yet modified in
the disks, information on the file system that is necessary
to map the data processed by the array controller to physi-
cal locations on the disks).
2.1 Cache subsystem operations
The cache subsystem caches read and write requests.
A track is always staged in the cache memory as a
whole, even in the event of a write request involving only a
few blocks of the track. In the following, we describe the
main cache operations assuming that the unit of data trans-
fer is an entire track.
Read operation. First, the cache controller checks for
Figure 1: Array controller architecture, interfaces and data flow (hosts and channel interfaces, control units, and the cache subsystem: cache controller interfaces to the channels/disks and to the cache memory, volatile and nonvolatile cache memory, the redundant Busses 1 and 2, and the interfaces to the RAID disks).
the requested track in the cache memory. If the track is al-
ready there («cache hit»), it is read from the cache and the
data is sent back to the channels. If not («cache miss»), a
request is issued to read the track from the disks and swap
it to the cache memory. Then, the track is read from the
cache.
Write operation. In the case of a cache hit, the track is
modified in the cache and flagged as «dirty.» In the case of
a cache miss, a memory location is allocated to the track and the
track is written into that memory location. Two write
strategies can be distinguished: 1) write-through and 2) fast
write. In the write-through strategy, the track is first writ-
ten to the volatile memory. The write operation completion
is signaled to the channels after the track is written to the
disks. In the fast-write strategy, the track is written to the
volatile memory and to nonvolatile memory. The write op-
eration completion is signaled immediately. The modified
track is later written to the disks according to a write-back
strategy, which consists of transferring the dirty tracks to
the disks, either periodically or when the amount of dirty
tracks in the cache exceeds a predefined threshold. Finally,
when space for a new track is needed in the cache, the
track-replacement algorithm based on the Least-Recently-
Used (LRU) strategy is applied to swap out a track from
the cache memory.
Track transfer inside the cache. The transfer of a track
between the cache memory, the cache controller, and the
channel interfaces is composed of several elementary data
transfers. The track is broken down into several data
blocks to accommodate the parallelism of the different de-
vices involved in the transfer. This also makes it possible
to overlap several track transfer operations over the data
busses inside the cache subsystem. Arbitration algorithms
are implemented to synchronize these transfers and avoid
bus hogging by a single transfer.
2.2 Error detection mechanisms
The cache is designed to detect errors in the data, ad-
dress, and control paths by using, among other techniques,
parity, error detection and correction codes (EDAC), and
cyclic redundancy checking (CRC). These mechanisms are
applied to detect errors in the data path in the following
ways:
Parity. Data transfers over Bus 1 (see Figure 1) are
covered by parity. For each data symbol (i.e., data word)
transferred on the bus, parity bits are appended and passed
over separate wires. Parity is generated and checked in
both directions. It is not stored in the cache memory but is
stripped after being checked.
EDAC. Data transfers over Bus 2 and the data stored in
the cache memory are protected by an error detection and
correction code. This code is capable of correcting on-the-
fly all single and double bit errors per data symbol and de-
tecting all triple bit data errors.
CRC. Several kinds of CRC are implemented in the ar-
ray controller. Only two of these are checked or generated
within the cache subsystem: the frontend CRC (FE-CRC)
and the physical sector CRC (PS-CRC). FE-CRC is ap-
pended, by the channel interfaces, to the data sent to the
cache during a write request. It is checked by the cache
controller. If FE-CRC is valid, it is stored with the data in
the cache memory. Otherwise, the operation is interrupted
and a CRC error is recorded. FE-CRC is checked again
when a read request is received from the channels. There-
fore, extra-detection is provided to recover from errors that
may have occurred while the data was in the cache or in
the disks, errors that escaped the error detection mecha-
nisms implemented in the cache subsystem and the disk ar-
ray. PS-CRC is appended by the cache controller to each
data block to be stored in a disk sector. The PS-CRC is
stored with the data until a read from disk operation oc-
curs. At this time, it is checked and stripped before the data
is stored in the cache. The same algorithm is implemented
to compute FE-CRC and PS-CRC. This algorithm guarantees
error detection as long as three or fewer data symbols of a
data record are in error.
Table 1 summarizes the error detection conditions for
each mechanism presented above, taking into account the
component in which the errors occur and the number of
noncorrected errors occurring between the computation of
the code and its being checked. The (x) symbol means that
errors affecting the corresponding component can be de-
tected by the mechanism indicated in the column. It is
noteworthy that the number of check bits and the size of
the data symbol (ds) mentioned in the error detection con-
dition are different for parity, EDAC, and CRC.
2.3 Error recovery and track reconstruction
Besides EDAC, which is able to automatically correct
some errors by hardware, software recovery procedures are
invoked when errors are detected by the cache subsystem.
Recovery actions mainly consist of retries, memory fenc-
ing, and track-reconstruction operations. When errors are
detected during a read operation from the cache volatile
memory and the error persists after retries, an attempt is
made to read the data from nonvolatile memory. If this op-
Table 1. Error detection efficiency with respect to the location and the number of errors

Error Location               FE-CRC   Parity   EDAC   PS-CRC
Transfer: channel to cache     x
CCI to channels/disks          x
Bus 1                          x        x
CCI to cache memory            x
Bus 2                          x                 x
Cache memory                   x                 x
Transfer: cache to disk        x                         x
Disks                          x                         x

Error detection conditions: FE-CRC: < 4 ds with errors; Parity: odd # of errors per ds; EDAC: < 4 bit errors per ds; PS-CRC: < 4 ds with errors.
ds = data symbol, CCI = Cache Controller Interface
eration fails, the data is read from the disk array. This op-
eration succeeds if the data on the disks is still valid or it
can be reconstructed (otherwise it fails). Figure 2 describes
a simplified disk array composed of n data disks (D1 to
Dn) and two redundancy disks (P and Q). Each row of the
redundancy disks is computed based on the corresponding
data tracks. For example, the first rows in disks P (P[1;n])
and Q (Q[1;n]) are obtained based on the data tracks T1 to
Tn stored in the disks D1 to Dn. This architecture tolerates
the loss of up to two tracks in each row; this condition will be re-
ferred to as the track reconstruction condition. The tracks
that are lost due to disk failures or corrupted due to bit-
errors can be reconstructed using the valid tracks in the
row, provided that the track reconstruction condition is sat-
isfied; otherwise data is lost. More information about disk
reconstruction strategies can be found in [8].
3 Hierarchical modeling methodology
We propose a hierarchical simulation approach to en-
able an efficient, detailed dependability analysis of the
RAID storage system described in the previous section.
Establishing the proper number of hierarchical levels
and their boundaries is not trivial. Several factors must be
considered in determining an optimal hierarchical decom-
position that provides a significant simulation speed-up
with minimal loss of accuracy: 1) system complexity,
2) the level of detail of the analysis and the dependability
measures to be evaluated, and 3) the strength of system
component interactions (weak interactions favor hierarchi-
cal decomposition).
In our study, we define three hierarchical levels (sum-
marized in Figure 3) to model the cache-based storage sys-
tem. At each level, the behavior of the shaded components
is detailed in the lower-level model. Each model is built in
a modular fashion and is characterized by:
• the components to be modeled and their behavior,
• a workload generator specifying the input distribution,
• a fault dictionary specifying the set of faults to be in-
jected in the model, the distribution characterizing the
occurrence of faults, and the consequences of the fault
with the corresponding probability of occurrence, and
• the outputs derived from the submodel simulation.
For each level, the workload can be a real I/O access
trace or generated from a synthetic distribution (in this
study we use a real trace of user I/O requests). The effects
of faults injected at a given level are characterized by sta-
tistical distributions (e.g., probability and number of errors
occurring during data transfer inside the cache). Such dis-
tributions are used as inputs for fault injection at the next
higher level. This mechanism allows the propagation of
fault effects from lower-level models to higher-level mod-
els.
In the model described in Figure 3, the system behavior,
the granularity of the data transfer unit, and the quantita-
tive measures evaluated are refined from one level to an-
other. In the Level 1 model, the unit of data transfer is a set
of tracks to be read or written from a user file. In Level 2,
it is a single track. In Level 3, the track is decomposed into
a set of data blocks, each of which is composed of a set of
data symbols. In the following subsections, we describe the
three levels. In this study, we address Level 2 and Level 3
models, which describe the internal behavior of the cache
and RAID subsystems in the presence of faults. Level 1 is
included to illustrate the flexibility of our approach. Using
the hierarchical methodology, additional models can be
built on top of Level 2 and Level 3 models to study the be-
havior of other systems relying on the cache and RAID
subsystems.
3.1 Level 1 model
Level 1 model translates user requests to read/write a
specified file into requests to the storage system to
read/write the corresponding set of tracks. It then propa-
gates the replies from the storage system back to the users,
taking into account the presence of faults in the cache and
RAID subsystems. A file request (read, write) results in a
sequence of track requests (read, fast-write, write-through).
Concurrent requests involving the same file may arrive
from different users. Consequently, a failure in a track op-
eration can affect multiple file requests. In the Level 1
model, the cache subsystem and the disk array are modeled
as a single entity—a black box. A fault dictionary specify-
ing the results of track operations is defined to characterize
the external behavior of the black box in the presence of
faults. There are four possible results for a track operation
(from the perspective of occurrence, detection, and correc-
tion of errors): 1) successful read/write track operation
(i.e., absence of errors, or errors detected and corrected), 2)
errors detected but not corrected, 3) errors not detected,
and 4) service unavailable. Parameter values representing
the probability of the occurrence of these events are pro-
vided by the simulation of the Level 2 model. Two types of
outputs are derived from the simulation of the Level 1
model: 1) quantitative measures characterizing the prob-
ability of user requests failure and 2) the workload distri-
bution of read or write track requests received by the cache
subsystem. This workload is used to feed the Level 2
model.
3.2 Level 2 model
The Level 2 model describes the behavior in the pres-
ence of faults of the cache subsystem and the disk array.
Cache operations and the data flow between the cache con-
troller, the cache memory, and the disk array are described
Figure 2: A simplified RAID: the data disks D1, ..., Dn hold the data tracks (T1, ..., Tn in row 1, Tn+1, ... in row 2), and the redundancy disks P and Q hold the corresponding redundancy tracks P[1;n], Q[1;n] and P[n+1;2n], Q[n+1;2n].
to identify scenarios leading to the outputs described in the
Level 1 model and to evaluate their probability of occur-
rence. At Level 2, the data stored in the cache memory and
the disks is explicitly modeled and structured into a set of
tracks. Volatile memory and nonvolatile memory are mod-
eled as separate entities. A track transfer operation is seen
at a high level of abstraction. A track is seen as an atomic
piece of data, traveling between different subparts of the
system (from user to cache, from cache to user, from disk
to cache, from cache to disk), while errors are injected to
the track and to the different components of the system.
Accordingly, when a track is to be transferred between two
communication partners, for example, from the disk to the
cache memory, none of the two needs to be aware of the
disassembling, buffering, and reassembling procedures that
occur during the transfer. This results in a significant simu-
lation speedup, since the number of events needing to be
processed is reduced dramatically.
3.2.1 Workload distribution. Level 2 model inputs
correspond to requests to read or write tracks from the
cache. Each request specifies the type of the access (read,
write-through, fast-write) and the track to be accessed. The
distribution specifying these requests and their interarrival
times can be derived from the simulation of the Level 1
model, from real measurements (i.e., real trace), or by gen-
erating distributions characterizing various types of work-
loads.
3.2.2 Fault models. Specification of adequate fault
models is essential to recreate realistic failure scenarios.
To this end we distinguished three primary fault models,
used to exercise and analyze error detection and recovery
mechanisms of the target system. These fault models in-
clude 1) permanent faults leading to cache controller com-
ponent failures, cache memory component failures, or disk
failures, 2) transient faults leading to track errors affecting
single or multiple bits of the tracks while they are stored in
the cache memory or in the disks, and 3) transient faults
leading to track errors affecting single or multiple bits dur-
ing the transfer of the tracks by the cache controller to the
cache memory or to the disks.
Component failures. When a permanent fault is in-
jected into a cache controller component, the requests
processed by this component are allocated to the other
Figure 3: Hierarchical modeling of the cache-based storage system. Level 1 models {cache + RAID} as a black box driven by user requests to read or write Si tracks from/to a file Fi; its outputs are 1) the probability of failure of user requests and 2) the distribution of track read/write requests sent to {cache + RAID}. Level 2 models the interactions between the cache controller, the RAID and the cache memory, with the tracks in the cache memory and in the disks explicitly modeled (data unit = track); its outputs are the probability of successful track operations, the probability of failed track operations (errors detected and not corrected, errors not detected, cache or RAID unavailable), the coverage of CRC, parity and EDAC, the error latency distributions and the frequency of track reconstructions. Level 3 details the transfer of a track inside the cache controller, each track being decomposed into data blocks Bi = {di1, ..., dik} of data symbols; its outputs are the probability and number of errors occurring during transfer in the CC interfaces to channels/disks, over Bus 1, in the CC interfaces to cache memory and over Bus 2; these probabilities feed the fault injection of the Level 2 model.
components of the cache controller that are still available.
The reintegration of the failed component after repair does
not interrupt the cache operations in progress. Permanent
faults injected into a cache memory card or a single disk
lead to the loss of all tracks stored in these components.
When a read request involving tracks stored on a faulty
component is received by the cache, an attempt is made to
read these tracks from the nonvolatile memory or from the
disks. If the tracks are still valid in the nonvolatile memory
or in the disks, or if they can be reconstructed from the
valid disks, then the read operation is successful, otherwise
the data is lost. Note that when a disk fails, a hot spare is
used to reconstruct the data and the failed disk is sent for
repair.
Track errors in the cache memory and the disks. These
correspond to the occurrence of single or multiple bit-
errors in a track due to transient faults. Two fault injection
strategies are distinguished: time dependent and load de-
pendent. The time-dependent strategy simulates faults occurring
randomly. The time of injection is sampled from a
predefined distribution, and the injected track, in the mem-
ory or in the disks, is chosen uniformly from the set of ad-
dressable tracks. The load-dependent strategy aims at simulating
the occurrence of faults due to stress. The fault injec-
tion rate depends on the number of accesses to the memory
or to the disks (instead of the time), and errors are injected
in the activated tracks. Using this strategy, frequently ac-
cessed tracks are injected more frequently than other
tracks. For both strategies, errors are injected randomly
into one or more bytes of a track. The fault injection rate is
tuned to allow a single fault injection or multiple near-
coincident fault injections (i.e., the fault rate is increased
during a short period of time). This enables us to analyze
the impact of isolated and bursty fault patterns.
Track errors during transfer inside the cache. Track er-
rors can occur:
• in the cache controller interfaces with channels/disks be-
fore transmission over Bus 1 (see Figure 1), i.e., before
parity or CRC computation or checking,
• during transfer over Bus 1, i.e., after parity computation,
• in the cache controller interfaces to cache memory before
transmission over Bus 2, i.e., before EDAC computation,
• during transfer over Bus 2, i.e., after EDAC computation.
To be able to evaluate the probability of occurrence and
the number of errors affecting the track during the transfer,
a detailed simulation of cache operations during this trans-
fer is required. Including this detailed behavior in the
Level 2 model would be far too costly in terms of compu-
tation time and memory occupation. For that reason, this
simulation is performed in the Level 3 model. In the Level
2 model, a distribution is associated with each event de-
scribed above, specifying the probability and the number
of errors occurring during the track transfer. The track er-
ror probabilities are evaluated at Level 3.
3.2.3 Modeling of error detection mechanisms. Per-
fect coverage is assumed for cache component and disk
failures due to permanent faults. The detection of track er-
rors occurring when the data is in the cache memory or in
the disks, or during the data transfer depends on (1) the
number of errors affecting each data symbol to which the
error detection code is appended and (2) when and where
these errors occurred (see Table 1). The error detection
modeling is done using a behavioral approach. The number
of errors in each track is recorded and updated during the
simulation. Each time a new error is injected into the track,
the number of errors is incremented. When a request is
sent to the cache controller to read a track, the number of
errors affecting the track is checked and compared with the
error detection conditions summarized in Table 1. During a
write operation, the track errors that have been accumu-
lated during the previous operations are overwritten, and
the number of errors associated with the track is reset to zero.
3.2.4 Quantitative measures. Level 2 simulation en-
ables us to reproduce several error scenarios and analyze
the likelihood that errors will remain undetected by the
cache or will cross the boundaries of several error detec-
tion and recovery mechanisms before being detected.
Moreover, using the fault injection functions implemented
in the model, we analyze (a) how the system responds to
different error rates (especially burst errors) and input dis-
tributions and (b) how the accumulation of errors in the
cache or in the disks and the error latency affect overall
system behavior. Statistics are recorded to evaluate the fol-
lowing: coverage factors for each error detection mecha-
nism, error latency distributions, and the frequency of track
reconstruction operations. Other quantitative measures,
such as the availability of the system and the mean time to
data loss, can also be recorded.
3.3 Level 3 model
The Level 3 model details cache operations during the
transfer of tracks from user to cache, from cache to user,
from disk to cache, and from cache to disk. This allows us
to evaluate the probabilities and number of errors occur-
ring during data transfers (these probabilities are used to
feed the Level 2 model, as discussed in Section 3.2). Un-
like Level 2, which models a track transfer at a high level
of abstraction as an atomic operation, in Level 3, each
track is decomposed into a set of data blocks, which are in
turn broken down into data symbols (each one correspond-
ing to a predefined number of bytes). The transfer of a
track is performed in several steps and spans several cy-
cles. CRC, parity or EDAC bits are appended to the data
transferred inside the cache or over the busses (Bus 1 and
Bus 2). Errors during the transfer may affect the data bits
as well as the check bits. At this level, we assume that the
data stored in the cache memory and in the disk array is er-
ror free, as the impact of such errors is considered in the
Level 2 model. Therefore, we need to model only the
cache controller interfaces to the channels/disks and to the
cache memory and the data transfer busses. The Level 3
model input distribution defines the tracks to be accessed
and the interarrival times between track requests. This dis-
tribution is derived from the Level 2 model.
Cache controller interfaces include a set of buffers in
which the data to be transmitted to or received from the
busses is temporarily stored (data is decomposed or as-
sembled into data symbols and redundancy bits are ap-
pended or checked). In the Level 3 model, only transient
faults are injected to the cache components (buffers and
busses). During each operation, it is assumed that a healthy
component will perform its task correctly, i.e., it will exe-
cute the operation without increasing the number of errors
in the data it is currently handling. For example, the cache
controller interfaces will successfully load their own buff-
ers, unless they are affected by errors while performing the
load operation. Similarly, Bus 1 and Bus 2 will transfer a
data symbol and the associated information without errors,
unless they are faulty while doing so. On the other hand,
when a transient fault occurs, single or multiple bit-flips
are continuously injected (during the transient) into the
data symbols being processed. Since a single track transfer
is a sequence of operations spanning several cycles, single
errors due to transients in the cache components may lead
to a burst of errors in the track currently being transferred.
Due to the high operational speed of the components, even
a short transient (a few microseconds) may result in an er-
ror burst, which affects a large number of bits.
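A minimal sketch of this burst mechanism is given below; the symbol width, cycle time and flips per symbol are assumed placeholder values, not parameters of the modeled system.

```python
import random

def transfer_with_transient(symbols, cycle_ns, transient_start_ns,
                            transient_len_ns, flips_per_faulty_symbol=1):
    """Flip bits in every data symbol processed while a transient is active.

    `symbols` is a list of integers (one per data symbol); the timing values
    are in nanoseconds and purely illustrative.
    """
    corrupted = []
    flipped_bits = 0
    for i, sym in enumerate(symbols):
        t = i * cycle_ns  # time at which this symbol crosses the faulty component
        if transient_start_ns <= t < transient_start_ns + transient_len_ns:
            for _ in range(flips_per_faulty_symbol):
                sym ^= 1 << random.randrange(16)  # assume 16-bit symbols
                flipped_bits += 1
        corrupted.append(sym)
    return corrupted, flipped_bits

# A transient of a few microseconds over symbols moved every few nanoseconds
# corrupts thousands of consecutive symbols, i.e., produces an error burst.
```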
4 Simulation experiments and results
In this section, we present the simulation results ob-
tained from Level 2 and Level 3 to highlight the advan-
tages of using a hierarchical approach for system depend-
ability analysis. We focus on the behavior of the cache and
the disks when the system is stressed with error bursts. Er-
ror bursts might occur during data transmission over bus-
ses, in the memory and the disks as observed, e.g., in [2]. It
is well known that the CRC and EDAC error detection
mechanisms provide high error detection coverage of sin-
gle bit errors. However, the impact of error bursts has not previously
been extensively explored. In this section, we analyze the
coverage of the error detection mechanisms, the distribu-
tion of error detection latency and error accumulation in
the cache memory and the disks, and finally the evolution
of the frequency of track reconstruction in the disks.
4.1 Experiment set-up
Input distribution. Real traces of user I/O requests were
used to derive inputs for the simulation. Information pro-
vided by the traces included tracks processed by the cache
subsystem, the type of the request (read, fast-write, write-
through), and the interarrival times between the requests.
Using a real trace gave us the opportunity to analyze the
system under a real workload. The input trace described
accesses to more than 127,000 tracks, out of 480,000 ad-
dressable tracks. As illustrated by Figure 4, the distribution
of the number of accesses per track is not uniform. Rather
a few tracks are generally more frequently accessed than
the rest—the well-known track skew phenomenon. For in-
stance, the first 100 most frequently accessed tracks ac-
count for 80% of the accesses in the trace; the leading
track of the input trace is accessed 26,224 times, whereas
only 200 accesses are counted for the rank-100 track. The in-
terarrival time between track accesses is about a few milli-
seconds, leading to high activity in the cache subsystem.
Figure 5 plots the probability density function of the inter-
arrival times between track requests. Regarding the type of
the requests, the distribution is: 86% reads, 11.4% fast-
writes and 2.6% write-through operations.
Simulation parameters. We simulated a large disk ar-
ray composed of 13 data disks and 2 redundancy disks.
The RAID data capacity is 480,000 data tracks. The capac-
ity of the simulated cache memory is 5% of the capacity of
the RAID. The rate of occurrence of permanent faults is
10^-4 per hour for cache components (as is generally ob-
served for hardware components) and 10^-6 per hour for the
disks [4]. The mean time for the repair of cache subsystem
components is 72 hours (a value provided by the system
manufacturer). Note that when a disk fails, a hot spare is
used for the online reconstruction of the failed disk.
Transient faults leading to track errors occur more fre-
quently than permanent faults. Our objective is to analyze
how the system responds to high fault rates and bursts of
errors. Consequently, high transient fault rates are assumed
in the simulation experiment: 100 transients per hour over
the busses, and 1 transient per hour in the cache controller
interfaces, the cache memory and the disks. Errors occur
more frequently over the busses than in the other compo-
nents. Regarding the load-dependent fault injection strat-
egy, the injection rate in the disk corresponds to one error
per 10^14 bits accessed, as observed in [4]. The same injec-
tion rate is assumed for the cache memory. Finally, the
length of the error burst in the cache memory and in the
disks is sampled from a normal distribution with a mean of
100 and a standard deviation of 10, whereas the length of
the error burst during the track transfer inside the cache is
evaluated from the Level 3 model as discussed in Section
3.3. The results presented in the following subsections cor-
respond to the simulation of 24 hours of system operation.
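The fault-injection parameters listed above can be sampled along the following lines; this is an illustrative sketch, and the function names and the use of exponential inter-arrival times are our assumptions.

```python
import random

# Transient fault rates quoted above (per hour) and the burst-length
# distribution used for the cache memory and the disks.
RATE_PER_HOUR = {"bus": 100.0, "cci": 1.0, "cache_memory": 1.0, "disk": 1.0}

def next_fault_time_h(component):
    """Inter-arrival time (hours) of transient faults, assumed exponential."""
    return random.expovariate(RATE_PER_HOUR[component])

def burst_length_bits():
    """Burst length in cache memory / disks: normal, mean 100, sd 10."""
    return max(1, int(round(random.gauss(100, 10))))

def disk_access_errors(bits_accessed):
    """Load-dependent injection: on average one error per 10**14 bits accessed."""
    expected = bits_accessed / 1e14
    errors = int(expected)
    if random.random() < expected - errors:  # Bernoulli draw for the fraction
        errors += 1
    return errors
```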
Figure 4: Track skew (log-log; number of accesses per track vs. rank-ordered tracks). Figure 5: Interarrival time (ms) between track requests.
4.2 Level 3 model simulation results
As discussed in Section 3.3, the Level 3 model aims at
evaluating the number of errors occurring during the trans-
fer of the tracks inside the cache due to transient faults
over the busses and in the cache controller interfaces. We
assumed that the duration of a transient fault is 5 micro-
seconds. During the duration of the transient, single or
multiple bit flips are continuously injected in the track data
symbols processed during that time. The cache operational
cycle for the transfer of a single data symbol is of the order
of a few nanoseconds. Therefore, the occur-
rence of a transient fault might affect a large number of
bits in a track. This is illustrated by Figure 6, which plots
the conditional probability density function of the number
of errors (i.e., number of bit-flips) occurring during the
transfer over Bus 1 and Bus 2 (Figure 6-a) and inside the
cache controller interfaces (Figure 6-b), given that a tran-
sient fault occurred. The distribution is the same for Bus 1
and Bus 2 due to the fact that these busses have the same
speed. The mean length of the error burst measured from
the simulation is around 100 bits during transfer over the
busses, 800 bits when the track is temporarily stored in the
cache controller interfaces to cache memory, and 1000 bits
when the track is temporarily stored in the cache controller
interfaces to channels/disks. The difference between the
results is related to the difference between the track trans-
fer time over the busses and the track loading time inside
the cache controller interfaces.
4.3 Level 2 model simulation results
We used the burst error distributions obtained from the
Level 3 model simulation to feed Level 2 model as ex-
plained in Section 3.2. In the following subsections we present
and discuss the results obtained from the simulation of
Level 2, specifically: 1) the coverage of the cache error de-
tection mechanisms, 2) the error latency distribution, and
3) the error accumulation in the cache memory and disks
and the evolution of the frequency of track reconstruction.
4.3.1 Error detection coverage. For all simulation
experiments that we performed, the coverage factor meas-
ured for the frontend CRC and the physical sector CRC
was 100%. This is due to the very high probability of de-
tecting error patterns by the CRC algorithm implemented
in the system (see Section 2.2). Regarding EDAC and par-
ity, the coverage factors tend to stabilize as the simulation
time increases (see Figures 7-a and 7-b, respectively). Each
unit of time in Figures 7-a and 7-b corresponds to 15 min-
utes of system operation. Note that EDAC coverage re-
mains high even though the system is stressed with long
bursts occurring at a high rate, and more than 98% of the
errors detected by EDAC are automatically corrected on-
the-fly. This is due to the fact that errors are injected ran-
domly in the track and the probability of having more than
three errors in a single data symbol is low. (The size of a
data symbol is around 10^-3 times the size of the track.) All the er-
rors that escaped EDAC or parity have been detected by
the frontend CRC upon a read request from the hosts. This
result illustrates the advantages of storing the CRC with
the data in the cache memory to provide extra detection of
errors escaping EDAC and parity and to compensate for
the relatively low parity coverage.
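A coverage factor here is the fraction of errors reaching a mechanism that the mechanism detects; a minimal way to accumulate such statistics during the simulation (illustrative only, with names of our choosing) is:

```python
from collections import Counter

detected = Counter()   # errors detected, keyed by mechanism ("EDAC", "parity", ...)
injected = Counter()   # errors seen by each mechanism

def record(mechanism, was_detected):
    injected[mechanism] += 1
    if was_detected:
        detected[mechanism] += 1

def coverage(mechanism):
    """Coverage factor of one error detection mechanism."""
    if injected[mechanism] == 0:
        return None
    return detected[mechanism] / injected[mechanism]
```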
Figure 6: Pdf of the number of errors during track transfer, given that a transient fault is injected: a) Bus 1 and Bus 2; b) cache controller interfaces (CCI to channels/disks and CCI to cache memory).
Figure 7: EDAC and parity coverage during simulation time: a) EDAC detection and correction coverage; b) parity coverage.
4.3.2 Error latency and error propagation. When
an error is injected in a track, the time of occurrence of the
error and a code identifying which component caused the
error are recorded. This allows us to monitor the error
propagation in the system. Six error codes are defined: two
CCI codes (error occurred while data is stored in the cache
controller interfaces to the channels/disks or to the cache
memory, respectively), CM (error in the cache memory), D
(error in the disk), B1 (error during transmission over Bus 1), and B2 (error dur-
ing transmission over Bus 2). The time for an error to be
overwritten (during a write operation) or detected (upon a
read operation) is called error latency. Since a track is
considered faulty as soon as an error is injected, we record
the latency associated with the first error injected in the
track. This means that the error latency that we measure
corresponds to the time between when the track becomes
faulty and when the errors are overwritten or detected.
Therefore, the error latency measured for each track is the
maximum latency for errors present in the track. Figure 8
plots the error latency probability density function for er-
rors, as categorized above, and error latency for all sam-
ples without taking into account the origin of errors (the
unit of time is 0.1 ms). The latter distribution is bimodal.
The first mode corresponds to a very short latency that re-
sults mainly from errors occurring over Bus 1 and detected
by parity. The second mode corresponds to longer laten-
cies due to errors occurring in the cache memory or the
disks, or to the propagation of errors occurring during data
transfer inside the cache. Note that most of the errors es-
caping parity (error code B1) remain latent for a longer pe-
riod of time (as discussed in Section 3.2.3).
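Error latency as defined above can be recorded by timestamping the first error injected into a track; the sketch below (attribute and function names are ours, building on the hypothetical per-track counter shown earlier) illustrates the idea.

```python
latencies = []      # one (origin_code, latency) entry per faulty track
error_origin = {}   # track_id -> code of the component that caused the first error

def on_first_error(track, origin_code, now):
    # A track is considered faulty as soon as the first error is injected.
    if track.first_error_time is None:
        track.first_error_time = now
        error_origin[track.track_id] = origin_code  # e.g. "CM", "D", "B1", "B2"

def on_detection_or_overwrite(track, now):
    # Latency = time between the track becoming faulty and the errors being
    # detected (read) or overwritten (write); this is the maximum latency of
    # the errors present in the track.
    if track.first_error_time is not None:
        latencies.append((error_origin.pop(track.track_id),
                          now - track.first_error_time))
        track.first_error_time = None
```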
The value of the latency depends on the input distribu-
tion. If the track is not frequently accessed, then errors pre-
sent in the track might remain latent for a long period of
time. Figure 8-b shows that the latency of errors injected in
the cache memory is slightly lower than the latency of er-
rors injected in the disk. This is because the disks are less
frequently accessed than the cache memory. Finally, it is
important to notice that the difference between the error la-
tency distribution for error codes B1 and B2 (Figure 8-a) is
due to the fact that data transfers over Bus 1 (during read
and write operations) are covered by parity, whereas errors
occurring during write operations over Bus 2 are detected
later by EDAC or by FE-CRC when the data is read from
the cache. Consequently, it would be useful to check
EDAC before data is written to the cache memory in order
to reduce the latency of errors due to Bus 2.
4.3.3 Error distribution in the cache memory and
in the disks. Analysis of error accumulation in the cache
memory and disks provides valuable feedback, especially
for scrubbing policy. Figure 9 plots the evolution in time
of the percentage of faulty tracks in the cache memory and
in disks (the unit of time is 15 minutes). An increasing
trend is observed for the disks, whereas in the cache mem-
ory we observe a periodic behavior. In the latter case, the
percentage of faulty tracks first increases and then de-
creases when either errors are detected upon read opera-
tions or are overwritten when tracks become dirty. Since
the cache memory is accessed very frequently (every 5
milliseconds on average) and the cache hit rate is high
(more than 60%), errors are more frequently detected and
overwritten in the cache memory than in the disks. The in-
crease of the number of faulty tracks in the cache affects
the track reconstruction rate (number of reconstructions
per unit of time), as illustrated in Figure 10. The average
track reconstruction rate is approximately 8.7 × 10^-5 per mil-
lisecond. It is noteworthy that the detection of errors in the
cache memory does not necessarily lead to the reconstruc-
tion of a track (the track might still be valid in the disks).
Nevertheless, the detection of errors in the cache has an
impact on performance due to the increase in the number
of accesses to the disk. Figure 9 indicates that different
strategies should be considered for disk and cache memory
scrubbing. The disk should be scrubbed more frequently
than the cache memory; this prevents error accumulation,
which can lead to inability to reconstruct a faulty track.
5 Summary, discussion, and conclusions
The dependability of a complex and sophisticated
cache-based storage architecture is modeled and simulated.
To ensure reliable operation and to prevent data loss, the
system employs a number of error detection mechanisms
and recovery strategies, including parity, EDAC, CRC
checking, and support of redundant disks for data recon-
struction. Due to the complex interactions among these
mechanisms, it is not a trivial task to accurately capture the
behavior of the overall system in the presence of faults. To
enable an efficient and detailed dependability analysis, we
proposed a hierarchical, behavioral simulation-based ap-
proach in which the system is decomposed into several ab-
straction levels and a corresponding simulation model is
associated with each level. In this approach, the impact of
low-level faults is used in a higher level analysis. Using an
appropriate hierarchical system decomposition, the com-
plexity of individual models can be significantly reduced
while preserving the model’s ability to capture detailed
system behavior. Moreover, additional details can be in-
corporated by introducing new abstraction levels and asso-
ciated simulation models.
Figure 8: Error latency distribution: a) all error codes; b) Bus 1, cache memory and disks.
To demonstrate the capabilities of the methodology, we
have conducted an extensive analysis of the design of a
real, commercial cache RAID storage system. To our
knowledge, this kind of analysis of a cache-based RAID
system has not been accomplished either in academia or in
the industry. The dependability measures used to charac-
terize the system include coverage of the different error de-
tection mechanisms employed in the system, error latency
distribution classified according to the origin of an error,
error accumulation in the cache memory and disks, and
frequency of data reconstruction in the cache memory. To
analyze the system under realistic operational conditions,
we used real input traces to drive the simulations. It is im-
portant to emphasize that an analytical modeling of the
system is not appropriate in this context due to the com-
plexity of the architecture, the overlapping of error detec-
tion and recovery mechanisms, and the necessity of captur-
ing the latent errors in the cache and the disks. Hierarchical
simulation offers an efficient method to accomplish the
above task and allows detailed analysis of the system to be
performed using real input traces.
The specific results of the study are presented in the
previous sections. It is, however, important to summarize
the key points that demonstrate the usefulness of the pro-
posed methodology. First, we focused on the analysis of
the system behavior when it is stressed with high fault
rates. In particular, we demonstrated that transient faults
during a few microseconds—during data transfer over the
busses or while the data is in the cache controller inter-
faces—may lead to bursts of errors affecting a large num-
ber of bits of the track. Moreover, despite the high fault in-
jection rate, high EDAC and CRC error detection coverage
was observed, and the relatively low parity coverage was
compensated for by the extra detection provided by CRC,
which is stored with the data in the cache memory.
The hierarchical simulation approach allowed us to per-
form a detailed analysis of error latency with respect to the
origin of an error. The error latency distribution measured
from the simulation, regardless of the origin of the errors, is
bimodal (similar behavior was observed in other studies,
e.g., [1, 14]): short latencies are mainly related to errors oc-
curring and detected during data transfer over the bus pro-
tected by parity, and the highest error latency was observed
for errors injected into the disks. The analysis of the evolu-
tion during the simulation of the percentage of faulty
tracks in the cache memory and the disks showed that, in
spite of a high rate of injected faults, there is no error ac-
cumulation in the cache memory, i.e., the percentage of
faulty tracks in the cache varies within a small range (0.5%
to 2.5%, see Section 4.3.3), whereas an increasing trend
was observed for the disks (see Figure 9). This is related to
the fact that the cache memory is accessed very frequently,
and errors are more frequently detected and overwritten in
the cache memory than in the disks. The primary implica-
tion of this result, together with the results of the error la-
tency analysis, is the need for a carefully designed scrub-
bing policy capable of reducing the error latency with ac-
ceptable performance overhead. Simulation results suggest
that the disks should be scrubbed more frequently than the
cache memory in order to prevent error accumulation,
which may lead to an inability to reconstruct faulty tracks.
We should emphasize that the results presented in this
paper are derived from the simulation of the system using a
single, real trace to generate the input patterns for the
simulation. Additional experiments with different input
traces and longer simulation times should be performed to
confirm these results. Moreover, the results presented in
this paper are preliminary, as we addressed only the impact
of errors affecting the data. Continuation of this work will
include modeling of errors affecting the control flow in
cache operations. The proposed approach is flexible
enough to incorporate these aspects of system behavior.
Including control flow will obviously increase the com-
plexity of the model, as more details about system behav-
ior must be described in order to simulate realistic error
scenarios and provide useful feedback for the designers. It
is clear that this kind of detailed analysis cannot be done
without the support of a hierarchical modeling approach.
Acknowledgments
The authors are grateful to the anonymous reviewers
whose comments helped improve the presentation of the
paper and to Fran Baker for her insightful editing of our
manuscript. This work was supported by the National
Aeronautics and Space Administration (NASA) under
grant NAG-1-613, in cooperation with the Illinois Com-
puter Laboratory for Aerospace Systems and Software
(ICLASS), and by the Advanced Research Projects
Agency under grant DABT63-94-C-0045. The findings,
opinions, and recommendations expressed herein are those
of the authors and do not necessarily reflect the position or
policy of the United States Government or the University
of Illinois, and no official endorsement should be inferred.
References
[1] J. Arlat, M. Aguera, Y. Crouzet, et al., «Experimental
Evaluation of the Fault Tolerance of an Atomic Multicast
System,» IEEE Transactions on Reliability, vol. 39, pp. 455-
467, 1990.
[2] A. Campbell, P. McDonald, and K. Ray, «Single Event Upset
Rates in Space,» IEEE Transactions on Nuclear Science, vol.
39, pp. 1828-1835, 1992.
Figure 9: Percentage of faulty tracks in cache and disks. Figure 10: Frequency of track reconstruction.
[3] C.-H. Chen and A. K. Somani, «A Cache Protocol for Error
Detection and Recovery in Fault-Tolerant Computing Sys-
tems,» 24th IEEE International Symposium on Fault-
Tolerant Computing (FTCS-24), Austin, Texas, USA, 1994,
pp. 278-287.
[4] P. M. Chen, E. K. Lee, G. A. Gibson, et al., «RAID: High-
Performance, Reliable Secondary Storage,» ACM Computing
Surveys, vol. 26, pp. 145-185, 1994.
[5] M. B. Friedman, «The Performance and Tuning of a Stora-
geTek RAID 6 Disk Subsystem,» CMG Transactions, vol.
87, pp. 77-88, 1995.
[6] K. K. Goswami, «Design for Dependability: A simulation-
Based Approach,» PhD., University of Illinois at Urbana-
Champaign, UILU-ENG-94-2204, CRHC-94-03, February
1994.
[7] K. K. Goswami, R. K. Iyer, and L. Young, «DEPEND: A
simulation Based Environment for System level Dependabil-
ity Analysis,» IEEE Transactions on Computers, vol. 46, pp.
60-74, 1997.
[8] M. Holland, G. Gibson, A., and D. P. Siewiorek, «Fast, On-
Line Failure Recovery in Redundant Disk Arrays,» 23rd In-
ternational Symposium on Fault-Tolerant Computing (FTCS-
23), Toulouse, France, 1993, pp. 422-431.
[9] R. Y. Hou and Y. N. Patt, «Using Non-Volatile Storage to
improve the Reliability of RAID5 Disk Arrays,» 27th Int.
Symposium on Fault-Tolerant Computing (FTCS-27), WA,
Seattle, 1997, pp. 206-215.
[10] G. E. Houtekamer, «RAID System: The Berkeley and
MVS Perspectives,» 21st Int. Conf. for the Resource Man-
agement & Performance Evaluation of Enterprise Computing
Systems (CMG'95), Nashville, Tennessee, USA, 1995, pp. 46-
[11] J. Menon, «Performance of RAID5 Disk Arrays with
Read and Write Caching,» International Journal on Distrib-
uted and Parallel Databases, vol. 2, pp. 261-293, 1994.
[12] J. Menon and J. Cortney, «The Architecture of a Fault-
Tolerant Cached RAID Controller,» 20th Annual Interna-
tional Symposium on Computer Architecture, San Diego, CA,
USA, 1993, pp. 76-86.
[13] D. A. Patterson, G. A. Gibson, and R. H. Katz, «A Case
for Redundant Arrays of Inexpensive Disks (RAID),» ACM
International Conference on Management of Data
(SIGMOD), New York, 1988, pp. 109-116.
[14] J. G. Silva, J. Carreira, H. Madeira, et al., «Experimental
Assessment of Parallel Systems,» 26th International Sympo-
sium on Fault-Tolerant Computing (FTCS-26), Sendai, Ja-
pan, 1996, pp. 415-424.
|
0704.0880 | Stochastic action principle and maximum entropy |
Stochastic action principle and maximum entropy
Q. A. Wang, F. Tsobnang, S. Bangoup, F. Dzangue, A. Jeatsa and A. Le Méhauté
Institut Supérieur des Matériaux et Mécaniques Avancées du Mans, 44 Av. Bartholdi,
72000 Le Mans, France
Abstract
A stochastic action principle for stochastic dynamics is revisited. We first present
numerical diffusion experiments showing that the diffusion path probability
depends exponentially on the average Lagrangian action $A = \int L\,dt$. This result is then
used to derive an uncertainty measure defined in a way mimicking the heat or
entropy in the first law of thermodynamics. It is shown that the path uncertainty
(or path entropy) can be measured by the Shannon information and that the
maximum entropy principle and the least action principle of classical mechanics
can be unified into a concise form $\overline{\delta A} = 0$, averaged over all possible paths of
stochastic motion. It is argued that this action principle, hence the maximum
entropy principle, is simply a consequence of the mechanical equilibrium
condition extended to the case of stochastic dynamics.
PACS numbers : 05.45.-a,05.70.Ln,02.50.-r,89.70.+c
1) Introduction
It is a long-standing conviction of scientists that all systems in nature optimize certain
mathematical measures in their motion. The search for such quantities has always been a
major objective in the efforts to understand the laws of nature. One of these measures is the
Lagrangian action considered as a most fundamental quantity in physics. The least action
principle1 [1] has been used to derive almost all the physical laws for regular dynamics
(classical mechanics, optics, electricity, relativity, electromagnetism, wave motion, etc.[2]).
This achievement explain the efforts to extend the principle to irregular dynamics such as
equilibrium thermodynamics[3], irreversible process [4], random dynamics[5][6], stochastic
mechanics[7][8], quantum theory[9] and quantum gravity theory[10]. We notice that in most
of these approaches, the randomness or the uncertainty (often measured by information or
entropy) of the irregular dynamics is not considered in the optimization methods. For
example, we often see expressions such as $\delta \overline{R} = \overline{\delta R}$ concerning the variation of a random
variable R with an expectation $\overline{R}$. This is incorrect because the variation of the uncertainty
aroused by the variation of R may play an important role in the dynamics.
Another most fundamental measure, called entropy, is frequently used in variational
methods of thermodynamics and statistics. The word "entropy" has a well known definition
given by Clausius in the equilibrium thermodynamics. But it is also used as a measure of
uncertainty in stochastic dynamics. In this sense, it is also referred to as "information" or
"informational entropy". In contrast to the action principle, entropy and its optimization have
always been a source of controversies. It has been used in different, even opposite, variational
methods based on different physical understanding of the optimization. For instance, there is
the principle of maximum thermodynamic entropy in statistical thermodynamics[11][12], the
maximum information-entropy[13][14] in information theory, the principle of minimum
entropy production [15] for certain nonequilibrium dynamics, and the principle of maximum
entropy production for others[16][17]. Certain interpretation of entropy and of its evolution
was even thought to be in conflict with the mechanical laws[18]. Notice that these laws can be
derived from least action principle. In fact, the definition of entropy is itself a great matter of
investigation for both equilibrium and nonequilibrium systems since the proposition of
Boltzmann and Gibbs entropy. Concerning the maximum entropy calculus, few people still
1 We continue to use this term "least action principle" here considering its popularity in the scientific community,
although we know nowadays that the term "optimal action" is more suitable because the action of a mechanical
system can have a maximum, or a minimum, or a stationary for real paths[19].
contest the fact that the maximization of Shannon entropy yields the correct exponential
distribution. But curiously enough, few people are completely satisfied by the arguments of
Jaynes and others [12][13][14] supporting the maximum entropy principle by considering
entropy as an anthropomorphic quantity and the principle as only an inference method. This
question will be revisited at the end of the present paper.
In view of the fundamental character of entropy in stochastic dynamics, it seems that the
associated variation approaches must be considered as first principles and cannot be derived
from other ones (such as least action principle) for regular dynamics where uncertainty does
not exist at all. However, a question we asked is whether we can formulate a more general
variation principle covering both the optimization of action for regular dynamics and the
optimization of information-entropy for stochastic dynamics. We can imagine a mechanical
system originally obeying least action principle and then subject to a random perturbation
which makes the movement stochastic. For this kind of systems, we have proposed a
stochastic action principle [20][21][22] which was originally a combination of maximum
entropy principle (MEP) and least action principle on the basis of the following assumptions :
1) A random Hamiltonian system can have different paths between two points in both
configuration space and phase space.
2) The paths are characterized uniquely by their action.
3) The path information is measured by Shannon entropy.
4) The path information is maximum for real process.
This is in fact maximization of path entropy under the constraint associated with average
action over paths (we assume the existence of this average measure). As expected, this
variational principle leads to a path probability depending exponentially on the Lagrangian
action of the paths and satisfying the Fokker-Planck equation of normal diffusion[21]. Some
diffusion laws such as Fick's laws, Ohm's law, and Fourier's law can be derived from this
probability distribution. We noticed that the above combination of two variation principles
could be written in a concise form $\overline{\delta A} = 0$ [22], i.e., the variation of action averaged over all
possible paths must vanish.
However, many disadvantages exist in the above formalism. The first one is that not all
the above physical assumptions are obvious and convincing. For example, concerning the
path probability, another point of view[23] says that the probability should depend on the
average energy on the paths instead of their action. The second disadvantage of that
formalism is that we used the Shannon entropy as a starting hypothesis, which limits the validity
of the formalism. One may think that the principle is probably no longer valid if the path
uncertainty cannot be measured by the Shannon formula. The third disadvantage is that MEP
is already a starting hypothesis, while it was expected that the work might help to understand
why entropy goes to maximum.
In this work, the reasoning is totally different even opposite. The only physical
assumption we make is a stochastic action principle (SAP), i.e., $\overline{\delta A} = 0$. The first and second
assumptions mentioned above are not necessary because these properties will be extracted
from experimental results. The third and fourth assumptions become purely the consequences
of SAP. This work is limited to the classical mechanics of Hamiltonian systems for which the
least action principle is well formulated. Neither relativistic nor quantum effects are
considered.
2) Stochastic dynamics of particle diffusion
We consider a classical Hamiltonian system moving, maybe randomly, in the
configuration space between two points a and b. Its Hamiltonian is given by H=T+V and its
Lagrangian by $L = T - V$, where $T$ is the kinetic energy and $V$ the potential one. The
Lagrangian action on a given path is $A = \int L\,dt$, as defined in Lagrangian mechanics. These
definitions need sufficiently smooth dynamics at the smallest time scales of observation. In
addition, if there are random noises perturbing the motion, the energy variation due to the
external perturbation or internal fluctuation is negligible at a time scale $\tau$ which is
nevertheless small with respect to the observation period. Hence $\overline{L} = \overline{T} - \overline{V}$ and $\overline{H} = \overline{T} + \overline{V}$
can exist, where $\overline{T}$ and $\overline{V}$ are the kinetic and potential energies averaged over $\tau$ such as
$\overline{T} = \frac{1}{\tau}\int_0^{\tau} T\,dt$.
It is known that if there are no random forces and if the duration of motion $t_{ab} = t_b - t_a$ from a
to b is given, there is only one possible path between a and b. However, this uniqueness of
transport path disappears if the motion is perturbed by random forces. An example is the case
of particle diffusion in random media, where many paths between two given points are
possible. This effect of noise can be easily demonstrated by a thought experiment in Figure 1.
See the caption for a detailed description. In this experiment, it is expected that the more a path
differs from the least action path (the straight line in the figure) between a and b, the fewer
particles travel on that path, i.e., the smaller the probability that the path is taken by the
particles.
Figure 1
A thought experiment for the random diffusion of the dust particles falling in the
air. At time ta, the particles fall out of the hole at point a. At time tb, certain
particles arrive at point b. The existence of more than one path of the particles
from a to b can be proved by the following operations. Let us open only one hole
h1 on a wall between a and b, we will observe dust particles at point b at time tb.
Then close the hole h1 and open another hole h2, we can still observe particles at
point b at time tb, as illustrated by the two curves passing respectively through h1
and h2. Another observation of this experiment is that the more a path differs
from the vertical straight line between a and b, the fewer particles travel on
that path, i.e., the smaller the probability that the path is taken by the particles. This
observation can be easily verified by the numerical experiment in the following
section.
Now let us suppose there are W discrete paths from a to b. Among a very large number N of particles leaving
the point a, we observe $N_k$ arriving at point b by the path k. Then the probability for the
particles to take the path k is defined by $p_{ab}(k) = N_k/N$. The normalization is given by
$\sum_k p_{ab}(k) = 1$ or, in the case of continuous space, by the path integral $\int \mathcal{D}(r)\, p_{ab} = 1$, where r
denotes the continuous coordinates of the paths.
3) A numerical experiment of particle diffusion and path probability
Does the probability $p_{ab}(k) = N_k/N$ really exist for each path? If it exists, how does it
change from path to path? What are the quantities associated with the paths which determines
the change in path probability? To answer these questions, we have carried out numerical
experiments (Figure 2) in which dust particles fall from a small hole a on the top of a two-
dimensional experimental box to the bottom of the box. A noise is introduced to perturb
the falling particles symmetrically in the direction of x. We have used three kinds of noises:
Gaussian noise, uniform noise (with amplitudes uniformly distributed between -1 and 1) and
truncated uniform noises (uniform noise with a cutoff of magnitude between -z and z where
z<1, i.e., the probability is zero for the magnitude between –z and z).
Figure 2
2a: Model of the numerical experiment showing the dust particles fall from
a small hole a onto the bottom of the experimental box. The distribution of
particles on the bottom (represented by the vertical bars) is caused by the
random noise (air for example) in the direction of x. 2b: An example of
experimental results in which the falling particles are perturbed by a noise
whose magnitude is uniformly distributed between -1 and 1 in x. The
vertical bars are experimental result and the curve is a Gaussian distribution
$dp(x) = \frac{dN(x)}{N} = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left[-\frac{(x-x_0)^2}{2\sigma^2}\right]dx$, where dN(x) is the particle number in
the interval x—x+dx, N is the total number of falling particles and σ is the
standard deviation (sd). The experiments show that the dp(x) is always
Gaussian whatever the noise (uniform, Gaussian or other truncated uniform
noises).
The observed distributions of particles are Gaussian for the three noises. The standard
deviation of the distributions is uniquely determined by the nature of the noise (type, maximal
magnitude, frequency etc.). This result was expected because of the finite variances of the
used noises and of the central limit theorem saying that the attractor distribution is a normal
(Gaussian) one if the noises (random variables) have finite variance.
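A stripped-down version of this falling-particle experiment can be reproduced in a few lines; the particle number, step count and noise amplitude below are arbitrary choices, not the values used in the actual experiments.

```python
import random
import statistics

def fall_experiment(n_particles=20000, n_steps=200, noise_amplitude=1.0):
    """Drop particles from x = 0 and kick them with symmetric uniform noise in x."""
    final_x = []
    for _ in range(n_particles):
        x = 0.0
        for _ in range(n_steps):
            x += noise_amplitude * random.uniform(-1.0, 1.0)  # uniform noise in x
        final_x.append(x)
    return final_x

x = fall_experiment()
print("mean =", statistics.fmean(x), "sd =", statistics.pstdev(x))
# By the central limit theorem, the histogram of x is close to a Gaussian for
# any finite-variance noise, as observed in Figure 2b.
```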
What can we conclude from this experiment of falling particles which seems to be trivial?
First, let us suppose that the falling distance h is small so that the path y between a and
any position x on the bottom can be considered as a straight line and the average velocity on y
can be given by $\bar{v} = y/\tau$, where $\tau$ is the motion time from a to x (see Figure 2a). In this case, it is
easy to show that the action $A_x$ from a to x is proportional to $(x-x_0)^2$, i.e.,
$$A_x = (\overline{T} - \overline{V})\,\tau = \frac{m y^2}{2\tau} - \frac{mgh\,\tau}{2} = \frac{m\left[h^2 + (x-x_0)^2\right]}{2\tau} - \frac{mgh\,\tau}{2} = \frac{m(x-x_0)^2}{2\tau} + \frac{mh^2}{2\tau} - \frac{mgh\,\tau}{2}\,,$$
where $\overline{T} = \frac{m y^2}{2\tau^2}$ and $\overline{V} = \frac{mgh}{2}$ are the average kinetic and potential energies, respectively.
This analysis applies to any smooth motion provided h is small. Considering the observed
Gaussian distribution of the falling particles in Figure 2, we can write, for small h,
$dN(x) \propto \exp(-\eta A_x)$, where $\eta$ is a constant. The probability that a particle takes the small
straight path from a to x is proportional to the exponential of action Ax.
Now let us consider large h. In this case the paths may not be straight lines. But a curved
path from a to x can be cut into small intervals at x1, x2, .... The above analysis is still valid for
each small segment. The probability that a particle takes the path to x is then equal to the
product of the probabilities on every segment of that path from a to x and should be
proportional to the exponential of the total action from a to x, i.e.,
$p_{ax} \propto \prod_i \exp(-\eta A_i) = \exp\!\left(-\eta \sum_i A_i\right) = \exp(-\eta A_{ax})$   (1)
where Ai is the action on the segment xi and Aax is the total action on a given path from a to x.
The constant η is a characteristic of the noise and should be the same for every segment. The
conclusion of this section is that the path probability depends exponentially on action as long as
the particle distribution on the bottom of the box is Gaussian.
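To make the connection with Eq.(1) concrete, the discretized Lagrangian action of a sampled path and its exponential weight can be evaluated as in the sketch below; the discretization, units and the value of eta are illustrative assumptions.

```python
from math import exp

def path_action(xs, ys, dt, m=1.0, g=9.8):
    """Discretized Lagrangian action A = sum_i (T_i - V_i) * dt along a sampled path.

    xs: horizontal positions; ys: heights above the bottom of the box,
    both sampled at successive times separated by dt.
    """
    A = 0.0
    for i in range(len(xs) - 1):
        vx = (xs[i + 1] - xs[i]) / dt
        vy = (ys[i + 1] - ys[i]) / dt
        T = 0.5 * m * (vx * vx + vy * vy)        # kinetic energy on the segment
        V = m * g * 0.5 * (ys[i] + ys[i + 1])    # potential energy (zero at the bottom)
        A += (T - V) * dt
    return A

def path_weight(action, eta=1.0):
    """Relative path probability, proportional to exp(-eta * A), cf. Eq.(1)."""
    return exp(-eta * action)
```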
Concerning the exponential form of path probability, there is another proposal [23]
$p_{ab}(k) \propto \exp(-\gamma \overline{H}_k)$, i.e., the path probability depends exponentially on the negative
average energy. According to this probability, the most probable path has minimum average
energy, so that for vanishing noise (regular dynamics), this minimum energy path would be
the unique one which must also follow the least action principle. Here we have a paradox
because the real path given by least action principle is in general not the path of minimum
average energy.
4) An action principle for stochastic dynamics
Recently, the following stochastic action principle (SAP) was postulated[20][22] :
$$\overline{\delta A} = 0 \qquad (2)$$
where $\overline{\delta A} = \int \mathcal{D}(r)\, p_{ab}\, \delta A$ is the average of the variation $\delta A$ over all the paths. It can be
written as follows
$$\overline{\delta A} = \delta\!\int \mathcal{D}(r)\, p_{ab}\, A - \int \mathcal{D}(r)\, A\, \delta p_{ab} = \delta \overline{A} - \frac{1}{\eta}\,\delta S_{ab} \qquad (3)$$
where $\overline{A} = \int \mathcal{D}(r)\, p_{ab}\, A$ is the ensemble average of the action A, and $\delta S_{ab}$ is defined by
$$\delta S_{ab} = \eta\left(\delta \overline{A} - \overline{\delta A}\right) = \eta \int \mathcal{D}(r)\, A\, \delta p_{ab}\,. \qquad (4)$$
Eq.(4) makes it possible to derive $S_{ab}$ directly from the probability distribution if the latter is
known. Let us consider the dynamics of Section 3 that has the exponential path probability
$$p_{ab} = \frac{1}{Z}\exp(-\eta A) \qquad (5)$$
where $Z = \int \mathcal{D}(r)\,\exp(-\eta A)$ is the partition function of the distribution. A trivial calculation
tells us that $\delta S_{ab}$ is a variation of the path entropy $S_{ab}$ given by the Shannon formula
$$S_{ab} = -\int \mathcal{D}(r)\, p_{ab} \ln p_{ab}\,. \qquad (6)$$
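The "trivial calculation" can be spelled out as follows (our own short derivation, using only Eq.(4), Eq.(5) and the normalization $\int \mathcal{D}(r)\,\delta p_{ab} = 0$):
$$\delta S_{ab} = \eta\int \mathcal{D}(r)\, A\,\delta p_{ab} = \int \mathcal{D}(r)\,\left(-\ln p_{ab} - \ln Z\right)\delta p_{ab} = -\int \mathcal{D}(r)\,\ln p_{ab}\,\delta p_{ab} = \delta\left(-\int \mathcal{D}(r)\, p_{ab}\ln p_{ab}\right),$$
where the second equality uses $\eta A = -\ln p_{ab} - \ln Z$ from Eq.(5), the third uses the normalization, and the last follows from $\delta\left(-\int \mathcal{D}(r)\, p_{ab}\ln p_{ab}\right) = -\int \mathcal{D}(r)\,(\ln p_{ab}+1)\,\delta p_{ab}$.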
Eq.(4) is a definition of entropy or information as a measure of uncertainty of random
variable (action in the present case)[26]. It mimics the first law of thermodynamics
$dQ = dU + dW$, where $U = \overline{E} = \sum_i p_i E_i$ is the average energy, $E_i$ is the energy of the state i with
probability $p_i$, $dW$ is the work of the forces $-\sum_i p_i\left(\frac{\partial E_i}{\partial q_j}\right)$, and the $q_j$ are some extensive
variables such as volume, surface, magnetic moment, etc. The work can be written as
$dW = -\sum_i p_i\left(\sum_j \frac{\partial E_i}{\partial q_j}\, dq_j\right) = -\sum_i p_i\, dE_i = -\overline{dE}$. So the first law becomes $dQ = d\overline{E} - \overline{dE}$. We see that by
Eq.(4) a “heat” Q is defined as a measure of the randomness of action (or of any other random
variable in general [26]). In Eq.(6), this “heat” is related to the Shannon entropy since the
probability is exponential. If the probability is not exponential, the functional of the entropy is
probably different from the Shannon one, as discussed in [26].
With the help of Eqs.(2) and (5), it is easy to verify that
$$\delta p_{ab} = -\eta\, p_{ab}\, \delta A \qquad (7)$$
and
$$\delta^2 p_{ab} = -\eta\, p_{ab}\, \delta^2 A\,. \qquad (8)$$
From Eqs.(7) and (8), the maximum condition of $p_{ab}$, i.e., $\delta p_{ab} = 0$ and $\delta^2 p_{ab} < 0$, is
transformed into $\delta A = 0$ and $\delta^2 A > 0$ if the constant $\eta$ is positive, that is, the least action path
is the most probable path. On the contrary, if $\eta$ is negative, we get $\delta A = 0$ and $\delta^2 A < 0$, and the
most probable path is a maximum action one.
In our previous work, we have proved that the probability distribution of Eq.(5) satisfies
the Fokker-Planck equation in configuration space. It is easy to see [20] that, in the case of a
free particle, Eq.(5) gives us the transition probability of Brownian motion with $\eta = 1/(2mD) > 0$,
where m is the mass and D the diffusion constant of the Brownian particle [25].
5) Return to the regular least action principle
The stochastic action principle Eq.(2) should recover the usual least action principle $\delta A = 0$
when the stochastic dynamics tends to regular dynamics with vanishing noise. To show this,
let us put the probability Eq.(5) into Eq.(6); a straightforward calculation leads to
$$S_{ab} = \ln Z + \eta \overline{A}\,. \qquad (9)$$
In regular dynamics, $p_{ab} = 1$ for the path of optimal (maximal or minimal or stationary)
action $A_0$ and $p_{ab} = 0$ for other paths having different actions, so that $S_{ab} = 0$ from Eq.(6). We
have only one path, and the integral in the partition function gives $Z = \int \mathcal{D}(q)\exp(-\eta A) = \exp(-\eta A_0)$.
Eq.(9) yields $\overline{A} = A_0$. On the other hand, we have $\delta S_{ab} = \eta(\delta\overline{A} - \overline{\delta A}) = 0$. Thus our principle $\overline{\delta A} = 0$
implies $\delta\overline{A} = \delta A_0 = 0$ or, more generally, $\delta A = 0$. This is the usual action principle.
6) Stochastic action principle and maximum entropy
Eq.(3) tells us that the SAP given by Eq.(2) implies
$$\delta\left(S_{ab} - \eta \overline{A}\right) = 0\,. \qquad (10)$$
meaning that the quantity $(S_{ab} - \eta \overline{A})$ should be optimized. If we add the normalization
condition, the SAP becomes:
$$\delta\left[S_{ab} - \eta \overline{A} + \alpha\left(\int \mathcal{D}(q)\, p_{ab} - 1\right)\right] = 0 \qquad (11)$$
which is just the usual Jaynes principle of maximum entropy. Hence Eq.(2) is equivalent to
the Jaynes principle applied to path entropy.
Is Eq.(2) simply a concise mathematical form of the Jaynes principle associated with the average
action? Or is there something more fundamental which may help us to understand why entropy
goes to a maximum for stable or stationary distributions?
From Section 4, we understand that, in the case of an equilibrium system, the variation $\overline{dE_i}$ is
a work dW. However, in the case of regular mechanics, dW = 0 is the condition of equilibrium,
meaning that the sum of all the forces acting on the system should be zero and the net torque
taken with respect to any axis must also vanish. So it seems reasonable to take $\overline{dE_i} = 0$ as an
equilibrium condition for stochastic equilibrium. In other words, when a random motion is in
(global) equilibrium, the total work $dW = -\sum_i p_i\left(\sum_j \frac{\partial E_i}{\partial q_j}\, dq_j\right)$ by all the random forces
on all the virtual increments $dq_j$ of a state variable (e.g., volume) must vanish. As a
consequence of the first law, $\overline{dE_i} = 0$ naturally leads to $\delta[S - \eta U] = 0$, i.e., the Jaynes maximum
entropy principle associated with the average energy, $\delta\left[S - \eta U + \alpha\left(\sum_i p_i - 1\right)\right] = 0$, where S is the
thermodynamic entropy. This analysis seems to say that the maximum entropy (maximum
randomness) is required by the mechanical equilibrium condition in a stochastic situation.
Remember that $\overline{dE_i}$ can also be written as a variation of the free energy $F = U - TS$, i.e.,
$\overline{dE_i} = dF$. The stochastic equilibrium condition can be put into the form $dF = 0$.
Coming back to our SAP in Eq.(2), the system is in nonequilibrium motion. If there is no
noise, the true path satisfies $\delta A = 0$ and $\frac{\partial L}{\partial r_j} - \frac{d}{dt}\frac{\partial L}{\partial \dot{r}_j} = 0$. When there is noise perturbation,
we have [22]
$$\overline{dA} = \int \mathcal{D}(r)\, p_{ab} \int_{t_a}^{t_b} \sum_j \left[\frac{\partial L}{\partial r_j} - \frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot{r}_j}\right)\right] dr_j\, dt = 0 \qquad (12)$$
where $f_j = \frac{\partial L}{\partial r_j} - \frac{d}{dt}\frac{\partial L}{\partial \dot{r}_j} \neq 0$ is the random force on $dr_j$. Let $\overline{f_j} = \frac{1}{t_{ab}}\int_{t_a}^{t_b} f_j\, dt$ be the time average of
the random force $f_j$; we obtain
$$\overline{dA} = t_{ab}\int \mathcal{D}(r)\, p_{ab}\Big[\sum_j \overline{f_j\, dr_j}\Big] = t_{ab}\,\overline{dW} = 0 \qquad (13)$$
where $\overline{dW} = \int \mathcal{D}(r)\, p_{ab} \sum_j \overline{f_j\, dr_j}$ is the ensemble average (over all paths) of the
time average $\frac{1}{t_{ab}}\int_{t_a}^{t_b} \sum_j f_j\, dr_j = \frac{1}{t_{ab}}\int_{t_a}^{t_b} dW$, and $dW_j = f_j\, dr_j$ is the work of the random force
over the variation (deformation) $dr_j$ of a given path. Eq.(13) means
$$\overline{dW} = 0 \qquad (14)$$
since tab is arbitrary. Eq.(14) implies that the average work of the random forces at any
moment over any time interval and over arbitrary path deformation must vanish. This
condition can be satisfied only when the motion is totally random, a state at which the system
does not have privileged degrees of freedom without constraints. Indeed, it is easy to show
that the maximum entropy with only the normalization as constraint yields totally
equiprobable paths. This argument also holds for equilibrium systems. The vanishing work
0==dWdEi needs that, if there is no other constraint than the normalization, no degree of
freedom is privileged, i.e., all microstates of the equilibrium state should be equiprobable.
This is the state which has the maximum randomness and uncertainty.
To summarize this section, the optimization of both equilibrium entropy and
nonequilibrium path entropy is simply the requirement of the mechanical equilibrium
conditions in the case of stochastic motion. There is no mystery in that. Entropy or dynamical
randomness (uncertainty) must take the largest value for the system to reach a state where the
total virtual work of the random forces should vanish. Entropy is not necessarily an
anthropomorphic quantity, as claimed by Jaynes [14], that takes its maximum merely for the sake of correct
inference. Entropy is nothing but a measure of the physical uncertainty of a stochastic situation.
Hence maximum entropy is not merely an inference principle. It is a law of physics. This is a
major result of the present work.
7) Concluding remarks
We have presented numerical experiments showing that the path probability distribution of
some stochastic dynamics depends exponentially on the Lagrangian action. On this basis, a
stochastic action principle (SAP) formulated for Hamiltonian system perturbed by random
forces is revisited. By using a new definition of statistical uncertainty measure which mimics
the heat in the first law of equilibrium thermodynamics, it is shown that, if the path
probability is exponential of action, the measure of path uncertainty we defined is just
Shannon information entropy. It is also shown that the SAP yields both the Jaynes principle of
maximum entropy and the conventional least action principle for vanishing noise. It is argued
that the maximum entropy is the requirement of the conventional mechanical equilibrium
condition for the motion of random systems to be stabilized, which means the total virtual
work of random forces should vanish at any moment within any arbitrary time interval. This
implies, in equilibrium case, 0=dEi , and in nonequilibrium case, 0== dWdA . In both cases,
the randomness of the motion must be at maximum in order that all degrees of freedom are
equally probable if there is no constraint. By these arguments, we try to give the maximum
entropy principle, considered by many as only an inference principle, the status of a
fundamental physical law.
References
[1] P.L.M. de Maupertuis, Essai de cosmologie (Amsterdam, 1750)
[2] S. Bangoup, F. Dzangue, A. Jeatsa, Etude du principe de Maupertuis dans tous ses
états, Research Communication of ISMANS, June 2006
[3] L. De Broglie, La thermodynamique de la particule isolée, Gauthier-Villars éditeur,
Paris, 1964
[4] L. Onsager and S. Machlup, Fluctuations and irreversible processes, Phys. Rev.,
91,1505(1953); L. Onsager, Reciprocal relations in irreversible processes I., Phys.
Rev. 37, 405(1931)
[5] M.I. Freidlin and A.D. Wentzell, Random perturbation of dynamical systems,
Springer-Verlag, New York, 1984
[6] G.L. Eyink, Action principle in nonequilibrium statistical dynamics, Phys. Rev. E,
54,3419(1996)
[7] F. Guerra and L. M. Morato, Quantization of dynamical systems and stochastic
control theory, Phys. Rev. D, 27, 1774(1983)
[8] F. M. Pavon, Hamilton's principle in stochastic mechanics, J. Math. Phys., 36,
6774(1995);
[9] R.P. Feynman and A.R. Hibbs, Quantum mechanics and path integrals,
McGraw-Hill Publishing Company, New York, 1965
[10] S.W. Hawking, T. Hertog, Phys. Rev. D, 66(2002)123509;
S.W. Hawking, Gary.T. Horowitz, Class.Quant.Grav., 13(1996)1487;
S. Weinberg, Quantum field theory, vol.II, Cambridge University Press,
Cambridge, 1996 (chapter 23: extended field configurations in particle
physics and treatments of instantons)
[11] J. Willard Gibbs, Principes élémentaires de mécanique statistique (Paris, Hermann,
1998)
[12] E.T. Jaynes, The evolution of Carnot's principle, The opening talk at the EMBO
Workshop on Maximum Entropy Methods in x-ray crystallographic and biological
macromolecule structure determination, Orsay, France, April 24-28, 1984;
[13] M. Tribus, Décisions Rationelles dans l'incertain (Paris, Masson et Cie, 1972) P14-
26; or Rational, descriptions, decisions and designs (Pergamon Press Inc., 1969)
[14] E.T. Jaynes, Gibbs vs Boltzmann entropies, American Journal of Physics,
33,391(1965) ; Where do we go from here? in Maximum entropy and Bayesian
methods in inverse problems, pp.21-58, eddited by C. Ray Smith and W.T. Grandy
Jr., D. Reidel, Publishing Company (1985)
[15] I. Prigogine, Bull. Roy. Belg. Cl. Sci., 31,600(1945)
[16] L.M. Martyushev and V.D. Seleznev, Maximum entropy production principle in
physics, chemistry and biology, Physics Reports, 426, 1-45 (2006)
[17] G. Paltridge, Quart. J. Roy. Meteor. Soc., 101,475(1975)
[18] J.R. Dorfmann, An introduction to Chaos in nonequilibrium statistical mechanics,
Cambridge University Press, 1999
[19] C.G.Gray, G.Karl, V.A.Novikov, Progress in Classical and Quantum Variational
Principles, Reports on Progress in Physics (2004), arXiv: physics/0312071
[20] Q.A. Wang, Maximum path information and the principle of least action for
chaotic system, Chaos, Solitons & Fractals, (2004), in press; cond-mat/0405373
and ccsd-00001549
[21] Q.A. Wang, Maximum entropy change and least action principle for
nonequilibrium systems, Astrophysics and Space Sciences, 305 (2006)273
[22] Q.A. Wang, Non quantum uncertainty relations of stochastic dynamics, Chaos,
Solitons & Fractals, 26,1045(2005), cond-mat/0412360
[23] R. M. L. Evans, Detailed balance has a counterpart in non-equilibrium steady
states, J. Phys. A: Math. Gen. 38 293-313(2005)
[24] V.I. Arnold, Mathematical methods of classical mechanics, second edition,
Springer-Verlag, New York, 1989, p243
[25] R. Kubo, M. Toda, N. Hashitsume, Statistical physics II, Nonequilibrium
statistical mechanics, Springer, Berlin, 1995
[26] Q.A. Wang, Some invariant probability and entropy as a measure of uncertainty,
cond-mat/0612076
|
0704.0881 | Constraining the Dark Energy Equation of State with Cosmic Voids | CONSTRAINING THE DARK ENERGY EQUATION OF
STATE WITH COSMIC VOIDS
Jounghun Lee and Daeseong Park
Department of Physics and Astronomy, FPRD, Seoul National University, Seoul 151-747,
Korea
[email protected]
ABSTRACT
Our universe is observed to be accelerating due to the dominant dark en-
ergy with negative pressure. The dark energy equation of state (w) holds a key
to understanding the ultimate fate of the universe. The cosmic voids behave
like bubbles in the universe so that their shapes must be quite sensitive to the
background cosmology. Assuming a flat universe and using the priors on the
matter density parameter (Ωm) and the dimensionless Hubble parameter (h),
we demonstrate analytically that the ellipticity evolution of cosmic voids may
be a sensitive probe of the dark energy equation of state. We also discuss the
parameter degeneracy between w and Ωm.
Subject headings: cosmology:theory — large-scale structure of universe
Recent observations have revealed that our universe is flat and in a phase of acceler-
ation (Riess et al. 1998; Perlmutter et al. 1999; Spergel et al. 2003). It implies that some
mysterious dark energy fills dominantly the universe at present epoch, exerting anti-gravity.
The nature of this mysterious dark energy which holds a key to understanding the ultimate
fate of the universe is often specified by its equation of state, i.e., the ratio of its pressure to
density: w ≡ Pde/ρde. The anti-gravity of the dark energy corresponds to the negative value
of w. The simplest candidate for the dark energy is the vacuum energy (Λ) with w = −1
that is constant at all times (Einstein 1917). Although all current data are consistent with
the vacuum energy model (e.g., Wang & Tegmark 2004; Jassal et al. 2004; Percival 2005;
Guzzo et al. 2008), the notorious failure of the theoretical estimate of the vacuum energy
density (see Caroll et al. 1992, for a review) has led a dynamic dark energy model to emerge
as an alternative. In this dynamic dark energy models which is often collectively called
quintessence, the dark energy is described as a slowly rolling scalar field with time-varying
equation of state in the range of −1 < w < 0 (Caldwell et al. 1998).
http://arxiv.org/abs/0704.0881v2
The following observables have so far been suggested to discriminate the dark en-
ergy models: the luminosity-distance measure of type Ia supernova (Riess et al. 2004, 2007;
Davis et al. 2007; Kowalski et al. 2008); the abundance of galaxy clusters as a function of
mass (Wang & Steinhardt 1998; Haiman et al. 2001; Weller et al. 2002), the baryonic acous-
tic oscillations in the galaxy power spectrum (Blake & Glazebrook 2003; Hu & Haiman 2003;
Cooray 2004; Seo & Eisentein 2005), and the weak gravitational lensing effect (Hu 1999;
Huterer 2001; Takada & Jain 2004; Song & Knox 2004). True as it is that these observables
can constrain powerfully the value of w, it is still quite necessary and important to find out
as many different observables as possible for consistency tests.
Another possible observable as a dark energy constraint may be the shapes of the cos-
mic voids. As the voids behave like bubbles due to their extremely low densities, their
shapes determined by the spatial distribution of the void galaxies tend to change sensitively
according to the competition between the tidal distortion and the gravitational rarefaction
effect. Therefore, the shape evolution of the voids must depend sensitively on the background
cosmology. In this Letter we study the ellipticity evolution of cosmic voids in the QCDM
(quintessence + cold dark matter) model with the help of the analytic formalism developed
by Park & Lee (2007) and explore the possibility of using it as a complementary probe of
the dark energy equation of state.
According to Park & Lee (2007), the shape of a void region is related to the eigenvalues
of the local tidal shear tensor as
$$\lambda_1(\mu,\nu) = \frac{1 + (\delta_v - 2)\nu^2 + \mu^2}{\mu^2 + \nu^2 + 1}\,, \qquad (1)$$
$$\lambda_2(\mu,\nu) = \frac{1 + (\delta_v - 2)\mu^2 + \nu^2}{\mu^2 + \nu^2 + 1}\,, \qquad (2)$$
where $\{\lambda_i\}_{i=1}^{3}$ (with $\lambda_1 > \lambda_2 > \lambda_3$) are the three eigenvalues of the local tidal field smoothed
on the void scale, $\delta_v$ is the density contrast threshold for the formation of a void: $\delta_v = \sum_{i=1}^{3}\lambda_i$,
and {µ, ν} (with ν < µ) represents a set of the two parameters that quantify the anisotropic
distribution of the void galaxies. They defined the void ellipticity as ε ≡ 1−ν and evaluated
its probability density distribution as
p(1− ε; z) = p(ν; z, RL) =
p[µ, ν|δ = δv; σ(z, RL)]dµ
10πσ5(z, RL)
2σ2(z, RL)
15δv(λ1 + λ2)
2σ2(z, RL)
× exp
15(λ21 + λ1λ2 + λ
2σ2(z, RL)
(2λ1 + λ2 − δv)
×(λ1 − λ2)(λ1 + 2λ2 − δv)
4(δv − 3)2µν
(µ2 + ν2 + 1)3
. (3)
Here, $\sigma(z, R_L)$, defined by $\sigma^2(z, R_L) \equiv D^2(z)\int \Delta^2(k)\, W^2(kR_L)\, d\ln k$, is the linear rms fluctuation of the matter
density field smoothed on a Lagrangian void scale of $R_L$ at redshift z, where D(z) is the
linear growth factor, $W(kR_L)$ is a top-hat window function, and $\Delta^2(k)$ is the dimensionless
linear power spectrum. Throughout this study, we adopt the linear power spectrum of the
cold dark matter cosmology (CDM) that does not depend explicitly on w (Bardeen et al.
1986).
Equation (3) was originally derived under the assumption of a ΛCDM model (w = −1).
We propose here that it also holds good for the case of a QCDM (quintessence+CDM) model
where the dark energy equation of state changes with time as w(z) = w0 + waz/(1 + z)
(Chevallier & Polarski 2001; Linder 2003) where w0 is the value of w at present epoch and
wa quantifies how the dark energy equation of state changes with time. Then, we employ
the following approximation formula for the linear growth factor, D(z), for a QCDM model
(Basilakos 2003; Percival 2005):
$$D(z) = \frac{5\,\Omega_m}{2(z+1)}\left[\Omega_m^{\alpha} - \Omega_Q + \left(1 + \frac{\Omega_m}{2}\right)\left(1 + A\,\Omega_Q\right)\right]^{-1}. \qquad (4)$$
where
$$E^2(z) = \Omega_m(1+z)^3 + \Omega_Q(1+z)^{-f(z)}, \qquad (5)$$
$$f(z) = -3(1+w_0) - 3w_a\left[1 - \frac{z}{(1+z)\ln(1+z)}\right], \qquad (6)$$
$$\alpha = \frac{3}{5 - w/(1-w)} + \frac{3}{125}\,\frac{(1-w)(1-3w/2)}{(1-6w/5)^3}\,\left[1 - \Omega_m\right], \qquad (7)$$
$$A = -\frac{0.28}{w + 0.08} - 0.3. \qquad (8)$$
The CDM density parameter $\Omega_m$ and the dark energy density parameter $\Omega_Q$ evolve with z
respectively as
$$\Omega_m(z) = \frac{\Omega_{m0}(1+z)^3}{E^2(z)}\,, \qquad \Omega_Q(z) = \frac{\Omega_{Q0}}{E^2(z)\,(1+z)^{f(z)}}\,, \qquad (9)$$
where Ωm0 and ΩQ0 represent the present values. Equation (3) implies that the mean elliptic-
ity of voids decreases with z. A key question is how the rate of the decrease changes with the
dark energy equation of state. Since most of the recent observations indicate that the dark
energy equation of state at present epoch is consistent with w = −1 (e.g., see Guzzo et al.
2008, and references therein) we focus on how the mean void ellipticity depends on the value
of wa. Even in case that w0 = −1, if wa is found to deviate from zero, it would imply the
dynamic dark energy, disproving the simple ΛCDM model.
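A direct transcription of the fitting formulae (4)-(9), as reconstructed above, is sketched below in Python; the function names are ours, the overall normalization of D(z) is ignored, and the exact coefficients of α(w) and f(z) should be checked against Basilakos (2003) and Percival (2005).

```python
import math

def f_of_z(z, w0, wa):
    # Effective dark-energy exponent for w(z) = w0 + wa*z/(1+z), cf. Eq. (6).
    if z == 0.0:
        return -3.0 * (1.0 + w0)
    return -3.0 * (1.0 + w0) - 3.0 * wa * (1.0 - z / ((1.0 + z) * math.log(1.0 + z)))

def E2(z, om0, oq0, w0, wa):
    # Normalized Hubble rate squared, cf. Eq. (5); flat universe: om0 + oq0 = 1.
    return om0 * (1.0 + z) ** 3 + oq0 * (1.0 + z) ** (-f_of_z(z, w0, wa))

def omega_m(z, om0, oq0, w0, wa):
    return om0 * (1.0 + z) ** 3 / E2(z, om0, oq0, w0, wa)                    # Eq. (9)

def omega_q(z, om0, oq0, w0, wa):
    return oq0 / (E2(z, om0, oq0, w0, wa) * (1.0 + z) ** f_of_z(z, w0, wa))  # Eq. (9)

def growth_D(z, om0=0.25, oq0=0.75, w0=-1.0, wa=0.0):
    # Approximate linear growth factor for a QCDM model, cf. Eqs. (4), (7), (8).
    w = w0 + wa * z / (1.0 + z)
    om = omega_m(z, om0, oq0, w0, wa)
    oq = omega_q(z, om0, oq0, w0, wa)
    alpha = 3.0 / (5.0 - w / (1.0 - w)) \
        + 3.0 / 125.0 * (1.0 - w) * (1.0 - 1.5 * w) / (1.0 - 1.2 * w) ** 3 * (1.0 - om)
    A = -0.28 / (w + 0.08) - 0.3
    return 2.5 * om / (z + 1.0) / (om ** alpha - oq + (1.0 + om / 2.0) * (1.0 + A * oq))
```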
To explore how the void ellipticity evolution depends on wa, we evaluate the mean
ellipticity of voids as ε̄(z) =
ε p(ε;RL, z)dǫ for different values of wa through equations
– 4 –
(3)- (9). The other key cosmological parameters are set at Ωm = 0.75,ΩQ = 0.75, h = 0.73,
σ8 = 0.9 and w0 = −1. When the abundance of evolution of galaxy clusters is used to
constrain the dark energy equation of state, the cluster mass is usually set at a certain
threshold, MR, defined as the mass within a certain comoving radius (Wang & Steinhardt
1998). Likewise, we set the Lagrangian scale of a void, RL at 4h
−1Mpc, which is related
to the mean effective radius of a void as R̄E = (1 + δv)
−1/3R̄L/(1 + z). The Lagrangian
scale RL = 4h
−1Mpc corresponds to the mean effective size of a void at present epoch,
RE ∼ 8.5h−1Mpc.
Figure 1 plots ε̄(z) for the four different cases: wa = −1/3, 0, 1/3 and 2/3 (long-dashed,
solid, dashed, and dotted line, respectively). As can be seen, the higher the value of wa is,
the more rapidly ε̄(z) decreases. It also suggests that ε̄(z) is well approximated as a linear
function of z in recent epochs (0 < z < 0.2). Therefore, we fit ε(z) to a straight line as
ε̄(z) ≈ Avz + Bv. Varying the value of wa in the range of [0, 2/3], we compute the best-fit
slope Av. The range, 0 ≤ wa ≤ 2/3, corresponds to the dark energy equation of state range,
−1 ≤ w ≤ −0.9. The result is plotted in Fig. 2. As can be seen, the void ellipticity evolves
more rapidly as the value of wa increases. That is, the void ellipticity undergoes a stronger
evolution when the anti-gravitational effect is less strong in recent epochs. Note that Av
shows a noticeable 30% difference as the dark energy equation of state changes w from −1
to −0.9.
We have so far neglected the parameter degeneracy between w and the other key pa-
rameters. However, as the dependence of the void ellipticity distribution on the dark energy
equation of state comes from its dependence on ∆2(k; Ωm0, σ8, h, w), it is naturally expected
that there should be a strong parameter degeneracy. Here, we focus on the degeneracy be-
tween Ωm0 and w. First, we recompute Av, varying the values of Ωm0 and w0 with setting
wa = 1/3. The left panel of Fig. 3 plots a family of the degeneracy curves in the Ωm0-w0
plane for the three different values of Av. As can be seen, there is a strong degeneracy
between the two parameters. For a given value of Av, the value of w0 increases as the value
of Ωm0 decreases. A similar trend is also found in the Ωm0-wa degeneracy curves that are
plotted in the right panel of Fig. 3 for which the value of w0 is set at −1. It is worth noting
that this degeneracy trend is orthogonal to that found from the cluster abundance evolution
(see Fig.3 in Wang & Steinhardt 1998). Thus, when combined with the cluster analysis, the
void ellipticity analysis may be useful to break the degeneracy between Ωm0 and w.
We have shown that the void ellipticity evolution is in principle a useful constraint of the
dark energy equation of state. We have also shown that it provides a new degeneracy curve
for Ωm0 and w. When combined with the cluster abundance analysis, it should be useful
to break the degeneracy. Furthermore, unlike the mass measurement of high-z clusters
– 5 –
which suffers from considerable scatters, the void ellipticities are readily measured from the
positions of the void galaxies without requiring any additional information.
To use our analytic tool in practice to constrain the dark energy equation of state,
however, it will require to account for the redshift distortion effect since the positions of the
void galaxies are measured in redshift space. In our companion paper (Park & Lee 2009 in
preparation), we have analyzed the Millennium Run Redshift-Space catalog (Springel et al.
2005) and determined the ellipticity distribution of the galaxy voids. From this analysis, it
is somewhat unexpectedly found that the void ellipticity distribution measured in redshift
space is hardly changed from the one in real space. In fact, this result is consistent with
the recent claims of Hoyle & Vogeley (2007) and that of van de Weygaert (2008, private
communication) who have already pointed out that the redshift distortion effect has only
negligible, if any, effect on the shapes of voids. We hope to constrain the dark energy
equation of state by applying our theoretical tool to real observational data and report the
result elsewhere in the near future.
We thank an anonymous referee for a constructive report. J.L. am very grateful to
S.Basilakos for very helpful discussion and comments. This work is financially supported
by the Korea Science and Engineering Foundation (KOSEF) grant funded by the Korean
Government (MOST, NO. R01-2007-000-10246-0).
– 6 –
REFERENCES
Bardeen, J. M., Bond, J. R., Kaiser, N., & Szalay, A. S. 1986, ApJ, 304, 15
Basilakos, S. 2003, ApJ, 590, 636
Blake, C., & Glazebrook, K. 2003, ApJ, 594, 665
Caldwell, R. R., Dave, R. & Steinhardt, P.J. 1998, Phys. Rev. Lett., 80, 1582
Carroll, S., Press, W.H. & Turner, E.C. 1992, Ann. Rev. , 30, 499
Chevallier, M. & Polarski, D. 2001, Int. J. Mod. Phys. D, 10, 213
Cooray, A. 2004, MNRAS, 348, 250
Davis, T. M. 2007, ApJ, 666, 716
Einstein, A. 1917, Sitz. Preuss. Akad. Wiss., 142
Guzzo, L., et al. 2008, Nature, 451, 31
Haiman, Z., Mohr, J.J. & Holder, G.P. 2001, ApJ, 553, 545
Hoyle, F., & Vogeley, M. S. 2002, ApJ, 566, 641
Hu, W. 1999, ApJ, 522, L21
Hu, W. & Haiman, Z. 2003, Phys. Rev. D, 68, 063004
Huterer, D. 2001, Phys. Rev. D, 65, 63001
Jassal, H.K., Bagla, J.S., & Padmanabhan, T. 2004, MNRAS, 356, L11
Kowalski, M. et al. 2008, ApJ, 686, 749
Linder, E. 2003, Phys. Rev. Lett., 90, 091301
Park, D., & Lee, J. 2007, Phys. Rev. Lett., 98, 081301
Perlmutter, S. et al. 1999, ApJ, 517, 565
Percival, W. J. 2005, A&A, 819, 830
Riess, A. G. et al. 1998, ApJ, 116, 1009
Riess, A. G. et al. 2004, ApJ, 607, 665
– 7 –
Riess, A. G. et al. 2007, ApJ, 659, 98
Seo, H. & Eisenstein, D. J. 2005, ApJ, 633, 575
Song, Y. S. & Knox, L. 20034 Phys. Rev. D, 70, 063510
Spergel, D. N. et al. 2003, ApJS, 148, 175
Springel, V. et al. 2005, Nature, 435, 629
Strauss, M. et al. 2002, ApJ, 124, 1810
Takada, M. & Jain, B. 2004, MNRAS, 348, 897
Wang, L. & Steinhardt, P. J. 1998, ApJ, 508, 483
Wang, Y. & Tegmark, M. 2004, Phys. Rev. Lett., 92, 241302
Weller, J., Battye, R. A., & Kneissl, R. 2002, Phys. Rev. Lett., 88, 231301
This preprint was prepared with the AAS LATEX macros v5.2.
– 8 –
Fig. 1.— Mean ellipticity of the voids with RL = 4h
−1Mpc as a function of z.
– 9 –
Fig. 2.— Slope of the void ellipticity as a function of wa.
– 10 –
Fig. 3.— Contours of Av in the Ωm0-w0 (left) and in the Ωm-wa (right) plane.
|
0704.0885 | Uniform measures and countably additive measures | Uniform measures and countably additive measures
Jan Pachl
Toronto, Ontario, Canada
April 6, 2007
Abstract
Uniform measures are defined as the functionals on the space of bounded uniformly
continuous functions that are continuous on bounded uniformly equicontinuous sets. If
every cardinal has measure zero then every countably additive measure is a uniform mea-
sure. The functionals sequentially continuous on bounded uniformly equicontinuous sets
are exactly uniform measures on the separable modification of the underlying uniform
space.
1 Introduction
The functionals that we now call uniform measures were originally studied by Berezanskǐı [1],
Csiszár [2], Fedorova [3] and LeCam [17]. The theory was later developed in several directions
by a number of other authors; see the references in [19] and [21].
Uniform measures need not be countably additive, but they have a number of properties that
have traditionally been formulated and proved for countably additive measures, or countably
additive functionals on function spaces. The main result in this paper, in section 3, is that
countably additive measures are uniform measures on a large class of uniform spaces (on all
uniform spaces, if every cardinal has measure zero).
Section 4 deals with the functionals that behave like uniform measures on sequences of func-
tions; or, equivalently, like countably additive measures on bounded uniformly equicontinuous
sets. In the case of a topological group with its right uniformity, these functionals were defined
by Ferri and Neufang [6] and used in their study of topological centres in convolution algebras.
2 Notation
In the whole paper, linear spaces are assumed to be over the field R of reals. Uniform spaces are
assumed to be Hausdorff. Uniform spaces are described by uniformly continuous pseudometrics
([11], Chap. 15), abbreviated u.c.p.
When d is a pseudometric on a set X , define
Lip(d) = {f : X → R | |f(x)| ≤ 1 and |f(x)− f(x′)| ≤ d(x, x′) for all x, x′ ∈ X} .
http://arxiv.org/abs/0704.0885v1
Then Lip(d) is compact in the topology of pointwise convergence onX , as a topological subspace
of the product space RX .
When X is a uniform space, denote by Ub(X) the space of bounded uniformly continuous
functions f : X → R with the norm ‖f‖ = sup{ |f(x)| | x ∈ X}. Let Coz(X) be the set
of all cozero sets in X ; that is, sets of the form {x ∈ X | f(x) 6= 0} where f ∈ Ub(X). Let
σ(Coz(X)) be the sigma-algebra of subsets of X generated by Coz(X).
When d is a pseudometric on a set X , denote by O(d) the collection of open sets in the (not
necessarily Hausdorff) topology defined by d. Note that if d is a u.c.p. on a uniform space X
then O(d) ⊆ Coz(X).
Denote by M(X) the norm dual of Ub(X), and consider three subspaces of M(X):
1. Mu(X) is the space of those µ ∈ M(X) that are continuous on Lip(d) for every u.c.p. d
on X , where Lip(d) is considered with the topology of pointwise convergence on X . The
elements of Mu(X) are called uniform measures on X .
2. Mσ(X) is the space of µ ∈ M(X) for which there is a bounded (signed) countably additive
measure m on the sigma-algebra σ(Coz(X)) such that
µ(f) =
fdm for f ∈ Ub(X) .
3. Muσ(X) is the space of those µ ∈ M(X) that are sequentially continuous on Lip(d) for
each u.c.p. d. That is, limn µ(fn) = 0 whenever d is a u.c.p. on X , fn ∈ Lip(d) for
n = 1, 2, . . ., and limn fn(x) = 0 for each x ∈ X .
When X is a topological group G with its right uniformity, Muσ(X) is the space Leb
in the notation of [6].
Clearly Mu(X) ⊆ Muσ(X) for every uniform space X . By Lebesgue’s dominated conver-
gence theorem ([7], 123C), Mσ(X) ⊆ Muσ(X) for every X .
For any uniform space X , let cX be the set X with the weak uniformity induced by all
uniformly continuous functions fromX to R ([13], p. 129). Let eX be the cardinal reflection Xℵ1
([13], p. 52 and 129), also known as the separable modification of X . Thus eX is a uniform space
on the same set as X , and a pseudometric on X is a u.c.p. on eX if and only if it is a separable
u.c.p. on X . Note that Ub(X) = Ub(eX) = Ub(cX) and M(X) = M(eX) = M(cX).
Let ℵ be a cardinal number, and let A be a set of cardinality ℵ. As in [12], say that ℵ
has measure zero if m(A) = 0 for every non-negative countably additive measure m defined
on the sigma-algebra of all subsets of A and such that m({a}) = 0 for all a ∈ A. A related
notion, not used in this paper, is that of a nonmeasurable cardinal as defined by Isbell [13],
using two-valued measures m in the preceding definition.
It is not known whether every cardinal has measure zero. The statement that every cardinal
has measure zero is consistent with the usual axioms of set theory. A detailed discussion of this
and related properties of cardinal numbers can be found in [9] and [14].
Let d be a pseudometric on a set X . A collection W of nonempty subsets of X is uniformly
d-discrete if there exists ε > 0 such that d(x, x′) ≥ ε whenever x∈V, x′∈V ′, V, V ′∈W , V 6= V ′.
A set Y ⊆ X is uniformly d-discrete if the collection of singletons {{y} | y ∈ Y } is uniformly
d-discrete.
Let X be a uniform space. A set Y ⊆ X is uniformly discrete if there exists a u.c.p. d on X
such that Y is uniformly d-discrete. Say that X is a (uniform) D-space [18] if the cardinality
of every uniformly discrete subset of X has measure zero.
This generalizes the notion of a topological D-space as defined by Granirer [12] and further
discussed by Kirk [16] in the context of topological measure theory. A topological space T is a
D-space in the sense of [12] if and only if T with its fine uniformity ([13], I.20) is a uniform D-
space. If X is a uniform space and Y ⊆ X is uniformly discrete in X then Y is also uniformly
discrete in X with its fine uniformity. Therefore, if X is a topological D-space in the sense
of [12] then it is also a uniform D-space.
Since the countable infinite cardinal ℵ0 has measure zero, every uniform space X such that
X = eX is a D-space. Thus every uniform subspace of a product of separable metric spaces is
a D-space. Moreover, the statement that every uniform space is a D-space is consistent with
the usual axioms of set theory.
3 Measures on uniform D-spaces
The uniform spaces X for which Mu(X) ⊆ Mσ(X) were investigated by several authors [1] [3]
[4] [10] [17]. The opposite inclusion Mσ(X) ⊆ Mu(X) has not attracted as much attention.
Theorem 2 in this section characterizes the uniform spaces X for which Mσ(X) ⊆ Mu(X).
Lemma 1 Let d be a pseudometric on a set X, and ε > 0. Then there exist sets Wn of
nonempty subsets of X, n = 1, 2, . . ., such that
n=1 Wn is a cover of X;
2. for each n, Wn ⊆ O(d);
3. for each n, the d-diameter of each V ∈ Wn is at most ε;
4. each Wn is uniformly d-discrete.
The lemma is essentially the theorem of A.H. Stone about σ-discrete covers in metric spaces.
For the proof, see the proof of 4.21 in [15].
The next theorem is the main result of this paper. It generalizes a known result about
separable measures on completely regular topological spaces — Proposition 3.4 in [16].
Theorem 2 For any uniform space X, the following statements are equivalent:
(i) X is a uniform D-space.
(ii) Mσ(X) ⊆ Mu(X).
In view of Theorem 2 and the remarks in section 2, the statement that Mσ(X) ⊆ Mu(X)
for every uniform space X is consistent with the usual axioms of set theory.
Proof. This proof is adapted from the author’s unpublished manuscript [18].
To prove that (i) implies (ii), let X be a D-space. To show that Mσ(X) ⊆ Mu(X), it is
enough to show that µ ∈ Mu(X) for every non-negative µ ∈ Mσ(X), in view of the Jordan
decomposition of countably additive measures ([8], 231F). Take any µ ∈ Mσ(X), µ ≥ 0 and
any ε > 0. Let m be the non-negative countably additive measure on σ(Coz(X)) such that
µ(f) =
fdm for f ∈ Ub(X).
Let d be a u.c.p. on X , and {fα}α a net of functions fα ∈ Lip(d) such that limα fα(x) = 0
for every x ∈ X . Our goal is to prove that limα µ(fα) = 0.
For the given X , d and ε, let Wn be as in Lemma 1. If V ∈ Wn for some n then choose a
point xV ∈ V . Let Tn = {xV | V ∈ Wn} for n = 1, 2, . . ..
Fix n for a moment. For each subset W ′ ⊆ Wn we have
W ′ ∈ O(d) ⊆ Coz(X). Thus for
each S ⊆ Tn we may define m̃(S) = m(
{V ∈ Wn | xV ∈ S} ), and m̃ is a countably additive
measure defined on all subsets of Tn. Since the set Tn is uniformly discrete and X is a D-space,
it follows that the cardinality of Tn is of measure zero, and there exists a countable set Sn ⊆ Tn
such that
{V ∈ Wn | xV ∈ Tn \ Sn} ) = m̃(Tn \ Sn) = 0.
Denote P =
n=1 Sn and Y = {x ∈ X | d(x, P ) ≤ ε}. If V ∈ Wn for some n and xV ∈ P
then V ⊆ Y , by property 3 in Lemma 1. Therefore
X \ Y ⊆
{V ∈ Wn | xV ∈ Tn \ Sn}
and m(X \ Y ) = 0.
Define gα(x) = supβ≥α |fβ(x)| for x ∈ X . Then gα ∈ Lip(d), gα ≥ gβ for α ≤ β, and
limα gα(x) = 0 for every x ∈ X .
Since the set P is countable, there is an increasing sequence of indices α(n), n = 1, 2, . . .,
such that limn gα(n)(x) = 0 for every x ∈ P , hence limn gα(n)(x) ≤ ε for every x ∈ Y . Thus
|µ(fα)| ≤ lim
µ(gα) ≤ lim
µ(gα(n)) = lim
gα(n)dm+
gα(n)dm
≤ εm(X)
which proves that limα µ(fα) = 0.
To prove that (ii) implies (i), assume that X is not a D-space. Thus there is a u.c.p. d on
X , a subset P ⊆ X and a non-negative countably additive measure m defined on all subsets of
P such that
• d(x, y) ≥ 1 for x, y ∈ P , x 6= y;
• m(x) = 0 for each x ∈ P ;
• m(P ) = 1.
Define µ(f) =
fdm for f ∈ Ub(X). Clearly µ ∈ Mσ(X).
For any set S ⊆ P , define the function fS ∈ Lip(d) by fS(x) = min(1, d(x, S)) for x ∈ X .
Then fS(x) = 0 for x∈S and fS(x) = 1 for x∈P \ S. Let F be the directed set of all finite
subsets of P ordered by inclusion. We have limS∈F fS(x) = fP (x) for each x ∈ X , µ(fS) = 1
for every S ∈ F , and µ(fP ) = 0. Thus µ 6∈ Mu(X). �
The inclusion Mσ(X) ⊆ Mu(cX) in the following corollary is Theorem 2.1 in [5].
Corollary 3 If X is any uniform space then Mσ(X) ⊆ Mu(eX) ⊆ Mu(cX).
Proof. As is noted above, eX is a D-space for any X . Thus Mσ(eX) ⊆ Mu(eX) by
Theorem 2. From the definitions of Mσ(X), eX and cX we get Mσ(X) = Mσ(eX) and
Mu(eX) ⊆ Mu(cX). �
Corollary 3 follows also from Theorem 4 in the next section: Mσ(X) ⊆ Muσ(X) = Mu(eX).
4 Countably uniform measures
In this section we compare the spaces Muσ(X) and Mu(X).
Theorem 4 If X is any uniform space then Muσ(X) = Mu(eX).
Proof. To prove that Muσ(X) ⊆ Mu(eX), note that if a pseudometric d is separable then
Lip(d) with the topology of pointwise convergence is metrizable, and therefore sequential con-
tinuity on Lip(d) implies continuity.
To prove that Mu(eX) ⊆ Muσ(X), take any µ ∈ Mu(eX). Let d be a u.c.p. on X ,
fn ∈ Lip(d) for n = 1, 2, . . ., and limn fn(x) = 0 for each x ∈ X . Define a pseudometric d̃ on X
d̃(x, y) = sup
|fn(x) − fn(y)| for x, y ∈ X .
Then d̃ is a separable u.c.p. on X , hence a u.c.p. on eX , and fn ∈ Lip(d̃ ) for n = 1, 2, . . ..
Therefore limn µ(fn) = 0. �
In view of Theorem 4, spaces Muσ(X) have all the properties of general Mu(X) spaces.
For example, every Mu(X) is weak∗ sequentially complete [19], and the positive part µ
every µ ∈ Mu(X) is in Mu(X) [1] [3] [17]. Therefore the same is true for Muσ(X).
By Theorem 4, if X = eX then Mu(X) = Muσ(X) (cf. [6], 2.5(iii)). To see that the
equality Mu(X) = Muσ(X) does not hold in general, first consider a uniform space X that is
not a uniform D-space. Since Mσ(X) ⊆ Muσ(X), from Theorem 2 we get Mu(X) 6= Muσ(X).
However, that furnishes an actual counterexample only if there exists a cardinal that is not of
measure zero. Next we shall see that, even without assuming the existence of such a cardinal,
there is a space X such that Mu(X) 6= Muσ(X).
Let X̂ denote the completion of a uniform space X . Pelant [20] constructed a complete
uniform space X for which eX is not complete. For such X , there exists an element x ∈ êX \X .
Every f ∈Ub(X) = Ub(eX) uniquely extends to f̂ ∈Ub(êX). Let δx ∈M(X) be the Dirac
measure at x; that is, δx(f) = f̂(x) for f ∈Ub(X). Then δx∈Mu(eX), therefore δx∈Muσ(X)
by Theorem 4. On the other hand, δx 6∈ Mu(X), since δx is a multiplicative functional on
Ub(X) and x 6∈ X̂ ([19], section 6). Thus Mu(X) 6= Muσ(X).
References
[1] I.A. Berezanskǐı. Measures on uniform spaces and molecular measures. (In Russian)
Trudy Moskov. Mat. Obšč. 19 (1968) 3-40.
English translation: Trans. Moscow Math. Soc. 19 (1968) 1-40.
[2] I. Csiszár. On the weak∗ continuity of convolution in a convolution algebra over an arbi-
trary topological group. Studia Sci. Math. Hungarica 6 (1971) 27-40.
[3] V.P. Fedorova. Linear functionals and the Daniell integral on spaces of uniformly con-
tinuous functions. (In Russian) Mat. Sb. 74 (116) (1967) 191-201.
English translation: Math. USSR – Sbornik 3 (1967) 177-185.
[4] V.P. Fedorova. Concerning Daniell integrals on an ultracomplete uniform space. (In Rus-
sian) Mat. Zametki 16, 4 (1974) 601-610.
English translation: Math. Notes AN USSR 16 (1974) 950-955.
[5] V.P. Fedorova. Integral representation of functionals on spaces of uniformly continuous
functions. (In Russian) Sibirsk. Mat. Ž. 23, 5 (1982) 205-218.
English translation: Siber. Math. J. 23 (1982) 753-762.
[6] S. Ferri and M. Neufang. On the topological centre of the algebra LUC(G)∗ for general
topological groups. J. Funct. Anal. 244 (2007) 154-171.
[7] D.H. Fremlin. Measure Theory. Volume 1 (Third Printing). Torres Fremlin (2004).
http://www.essex.ac.uk/maths/staff/fremlin/mt.htm
[8] D.H. Fremlin. Measure Theory. Volume 2 (Second Printing). Torres Fremlin (2003).
http://www.essex.ac.uk/maths/staff/fremlin/mt.htm
[9] D.H. Fremlin. Measure Theory. Volume 5. Torres Fremlin (to appear in 2008).
http://www.essex.ac.uk/maths/staff/fremlin/mt.htm
[10] Z. Froĺık. Measure-fine uniform spaces I. Springer-Verlag Lecture Notes in Mathematics
541 (1975) 403-413.
[11] L. Gillman and M. Jerison. Rings of Continuous Functions. Van Nostrand (1960).
[12] E.E. Granirer. On Baire measures on D-topological spaces. Fund. Math. 60 (1967) 1-22.
http://matwbn.icm.edu.pl/ksiazki/fm/fm60/fm6001.pdf
http://www.essex.ac.uk/maths/staff/fremlin/mt.htm
http://www.essex.ac.uk/maths/staff/fremlin/mt.htm
http://www.essex.ac.uk/maths/staff/fremlin/mt.htm
http://matwbn.icm.edu.pl/ksiazki/fm/fm60/fm6001.pdf
[13] J.R. Isbell. Uniform Spaces. American Mathematical Society (1960).
[14] T. Jech. Set Theory (Second Edition). Springer-Verlag (1997).
[15] J.L. Kelley. General Topology. Van Nostrand (1967).
[16] R.B. Kirk. Complete topologies on spaces of Baire measure. Trans. Amer. Math. Soc.
184 (1973) 1-29.
[17] L. LeCam. Note on a certain class of measures. Unpublished manuscript (1970).
http://www.stat.berkeley.edu/users/rice/LeCam/papers/classmeasures.pdf
[18] J. Pachl. Katětov-Shirota theorem in uniform spaces. Unpublished manuscript (1976).
[19] J. Pachl. Uniform measures and convolution on topological groups.
arXiv:math.FA/0608139v2 (2006)
http://arxiv.org/abs/math.FA/0608139v2
[20] J. Pelant. Reflections not preserving completeness. Seminar Uniform Spaces 1973-74
(Directed by Z. Froĺık) 235-240 (presented in April 1975) MÚ ČSAV (Prague).
[21] M.D. Rice. Uniform ideas in analysis. Real Analysis Exchange 6 (1981) 139-185.
http://www.stat.berkeley.edu/users/rice/LeCam/papers/classmeasures.pdf
http://arxiv.org/abs/math/0608139
http://arxiv.org/abs/math.FA/0608139v2
Introduction
Notation
Measures on uniform D-spaces
Countably uniform measures
|
0704.0886 | Lower bounds for the conductivities of correlated quantum systems | Lower bounds for the conductivities of correlated quantum systems
Peter Jung and Achim Rosch
Institute for Theoretical Physics, University of Cologne, 50937 Cologne, Germany.
(Dated: October 30, 2018)
We show how one can obtain a lower bound for the electrical, spin or heat conductivity of cor-
related quantum systems described by Hamiltonians of the form H = H0 + gH1. Here H0 is an
interacting Hamiltonian characterized by conservation laws which lead to an infinite conductivity
for g = 0. The small perturbation gH1, however, renders the conductivity finite at finite temper-
atures. For example, H0 could be a continuum field theory, where momentum is conserved, or an
integrable one-dimensional model while H1 might describe the effects of weak disorder. In the limit
g → 0, we derive lower bounds for the relevant conductivities and show how they can be improved
systematically using the memory matrix formalism. Furthermore, we discuss various applications
and investigate under what conditions our lower bound may become exact.
PACS numbers: 72.10.Bg, 05.60.Gg, 75.40.Gb, 71.10.Pm
I. INTRODUCTION
Transport properties of complex materials are not only
important for many applications but are also of funda-
mental interest as their study can give insight into the
nature of the relevant quasi particles and their interac-
tions.
Compared to thermodynamic quantities, the transport
properties of interacting quantum systems are notori-
ously difficult to calculate even in situations where in-
teractions are weak. The reason is that conductivities
of non-interacting systems are usually infinite even at fi-
nite temperature, implying that even to lowest order in
perturbation theory an infinite resummation of a per-
turbative series is mandatory. To lowest order this im-
plies that one usually has to solve an integral equation,
often written in terms of (quantum-) Boltzmann equa-
tions or – within the Kubo formalism – in terms of vertex
equations. The situation becomes even more difficult if
the interactions are so strong that an expansion around
a non-interacting system is not possible. Also numeri-
cally, the calculation of zero-frequency conductivities of
strongly interacting clean systems is a serious challenge
and even for one-dimensional systems reliable calcula-
tions are available for high temperatures only1,2,3,4,5,6.
Variational estimates, e.g. for the ground state energy,
are powerful theoretical techniques to obtain rigorous
bounds on physical quantities. They can be used to
guide approximation schemes to obtain simple analytic
estimates and are sometimes the basis of sophisticated
numerical methods like the density matrix renormaliza-
tion group7.
Taking into account both the importance of transport
quantities and the difficulties involved in their calculation
it would be very useful to have general variational bounds
for transport coefficients.
A well known example where a bound for transport
quantities has been derived is the variational solution
of the Boltzmann equation, discussed extensively by
Ziman8. The linearized Boltzmann equation in the pres-
ence of a static electric field can be written in the form
Wk,k′Φk′ (1)
where Wk,k′ is the integral kernel describing the scatter-
ing of quasiparticles and we have linearized the Boltz-
mann equation around the Fermi (or Bose) distribution
= f0(ǫk) using fk = f
Φk. Therefore, the
current is given by I = −e
Φk and the dc con-
ductivity is determined from the inverse of the scattering
matrix W using σ = −e2
k,k′v
. It is
easy to see that this result can be obtained by maximiz-
ing a functional8,9,10,11 F [Φ] with
σ = e2max
F [Φ] ≥ e2 max
F [Φ] =
k,k′(Φk − Φk′)2Wk,k′
where we used that
Wk,k′ = 0 reflecting the conser-
vation of probability. The variational formula (2) is ac-
tually closely related8 to the famous H-theorem of Boltz-
mann which states that entropy always increases upon
scattering.
A lower bound for the conductivity can be obtained
by varying Φ only in a subspace of all possible func-
tions. This allows for example to obtain analytically
good estimates for conductivities without inverting an
infinite dimensional matrix or, euqivalently, solving an
integral equation, see Ziman’s book for a large number
of examples8.
The applicability of Eq. (2) is restricted to situations
where the Boltzmann equation is valid and bounds for
the conductivity in more general setups are not known.
However, for ballistic systems with infinite conductiv-
ity it is possible to get a lower bound for the so-called
Drude weight. Mazur12 and later Suzuki13 considered
situations where the presence of conservation laws pro-
hibits the decay of certain correlation functions in the
http://arxiv.org/abs/0704.0886v2
Re σ(ω) Re σ(ω)
σreg(ω)
πDδ(ω)
g != 0g = 0
FIG. 1: For g = 0 a Drude peak shows up in the conductivity,
resulting from exact conservation laws. For g 6= 0 the Drude
peak broadens and the dc conductivity becomes finite.
long time limit. In the context of transport theory their
result can be applied to systems (see Appendix A for de-
tails) where the finite-temperature conductivity σ(ω, T )
is infinite for ω = 0 and characterized by a finite Drude
weight D(T ) > 0 with
Re σ(ω, T ) = πD(T )δ(ω) + σreg(ω, T ). (3)
Such a Drude weight can arise only in the presence of
exact conservation laws Cj with [H,Cj ] = 0. Suzuki
showed that the Drude weight can be expressed as a sum
over all Cj
〈CjJ〉2
〈C2j 〉
〈CjJ〉2
〈C2j 〉
. (4)
where J is the current associated with σ. For conve-
nience a basis in the space of Ci has been chosen such
that 〈CiCj〉 = 0 for i 6= j. More useful than the equal-
ity in Eq. (4) is often the inequality12 which is obtained
when the sum is restricted to a finite subset of conser-
vation laws. Such a finite sum over simple expectation
values can often be calculated rather easily using either
analytical or numerical methods. The Mazur inequality
has recently been used heavily4,14,15,16,17 to discuss the
transport properties of one-dimensional systems.
Model systems, due to their simplicity, often exhibit
symmetries not shared by real materials. For exam-
ple, the heat conductivity of idealized one-dimensional
Heisenberg chains is infinite at arbitrary temperature as
the heat current is conserved. However, any additional
coupling (next-nearest neighbor, inter-chain, disorder,
phonon,...) renders the conductivity finite1,4,5,6,18,19,20.
If these perturbations are weak, the heat conductivity
is, however, large as observed experimentally21,22. For a
more general example, consider an arbitrary translation-
ally invariant continuum field theory. Here momentum
is conserved which usually implies that the conductivity
is infinite for this model. In real materials momentum
decays by Umklapp scattering or disorder rendering the
conductivity finite. It is obviously desirable to have a
reliable method to calculate transport in such situations.
In this work we consider systems with the Hamiltonian
H = H0 + gH1, (5)
where for g = 0 the relevant heat-, charge- or spin- con-
ductivity is infinite and characterized by a finite Drude
weight given by Eq. (4). As discussed above,H0 might be
an integrable one-dimensional model, a continuum field
theory, or just a non-interacting system. The term gH1
describes a (weak) perturbation which renders the con-
ductivity finite, e.g. due to Umklapp scattering or dis-
order, see Fig. 1. Our goal is to find a variational lower
bound for conductivities in the spirit of Eq. (2) for this
very general situation, without any requirement on the
existence of quasi particles. For technical reasons (see
below) we restrict our analysis to situations where H is
time reversal invariant.
In the following, we first describe the general setup and
introduce the memory matrix formalism, which allows
us to formulate an inequality for transport coefficients
for weakly perturbed systems. We will argue that the
inequality is valid under the conditions which we specify.
Finally, we investigate under which conditions the lower
bounds become exact and briefly discuss applications of
our formula.
II. SETUP
Consider the local density ρ(x) of an arbitrary phys-
ical quantity which is locally conserved, thus obeying a
continuity equation
∂tρ+∇j = 0.
Transport of that quantity is described by the dc con-
ductivity σ which is the response of the current to some
external field E coupling to the current,
〈J〉 = σE,
where J =
j(x) is the total current and 〈J〉 its expec-
tation value. Note that J can be an electrical-, spin-, or
heat current and E the corresponding conjugate field de-
pending on the context. The dynamic conductivity σ(z)
is given by Kubo’s formula, see Eq. (A1). We are inter-
ested in the dc conductivity σ = limω→0 σ(z = ω + i0).
Starting from the Hamiltonian (5) we consider a sys-
tem where H0 posesses a set of exact conservation laws
{Ci} of which at least one correlates with the current,
〈JC1〉 6= 0. Without loss of generality we assume
〈CiCj〉 = 0 for i 6= j. For g = 0 the Drude weight D
defined by Eq. (3) is given by Eq. (4). We can split up
the current under consideration into a part which is par-
allel to the Ci and one that is orthogonal,
J = J‖ + J⊥,
with J‖ =
〈CiJ〉
Ci, which results in a separation of
the conductivity,
σ(z) = σ‖(z) + σ⊥(z). (6)
Since the conductivity σ(z) is given by a current-current
correlation function and the current J‖ (J⊥) is diago-
nal (off-diagonal) in energy, cross-correlation functions
〈〈J‖; J⊥〉〉 vanish in Eq. (6).
According to Eq. (4), the Drude peak of the unper-
turbed system, g = 0, arises solely from J‖:
Re σ‖(ω) = πDδ(ω), (7)
while σ⊥(z) appears in Eq. (3) as the regular part,
Re σ⊥(ω) = σreg(ω).
In this work we will focus on σ‖(ω), since the small
perturbation is not going to affect σ⊥(ω) much (which is
assumed to be free of singularities here, see section IV)
while σ‖(ω = 0) diverges for g → 0, see Fig. 1. As we
are interested in the small g asymptotics only, we may
neglect the contribution σ⊥(0) to the dc conductivity.
Hence we set J = J‖ and σ(ω) = σ‖(ω) in the following.
III. MEMORY MATRIX FORMALISM
We have seen that certain conservation laws ofH0 play
a crucial role in determining the conductivity of both the
unperturbed and the perturbed system. In the presence
of a small perturbation gH1, these modes are not con-
served anymore but at least some of them decay slowly.
Typically, the conductivity of the perturbed system will
be determined by the dynamics of these slow modes. To
separate the dynamics of the slow modes from the rest, it
is convenient to use a hydrodynamic approach based on
the projection of the dynamics onto these slow modes. In
this section we will therefore review the so called memory
matrix formalism23, introduced by Mori and Zwanzig24,25
for this purpose. In the next section we will show that
this approach can be used to obtain a lower bound for
the dc conductivity for small g.
We start by defining a scalar product in the space of
quantum mechanical operators,
(A|B) =
dλ〈A†B(iλ)〉 − β〈A†〉〈B〉 (8)
As the next step we choose a – for the moment – arbi-
trary set of operators {Ci}. In most applications, the Ci
are the relevant slow modes of the system. For notational
convenience, we assume that the {Ci} are orthonormal-
ized,
(Ci|Cj) = δij . (9)
In terms of these we may define the projector P onto (and
Q away from) the space spanned by these ‘slow’ modes
|Ci)(Ci| = 1−Q.
We assume that C1 is the current we are interested in,
|J) ≡ |C1).
The time evolution is given by the Liouville-
(super)operator
L = [H, .] = L0 + gL1
with (LA|B) = (A|LB) = (A|L|B), and the time evo-
lution of an operator may be expressed as |A(t)) =
|eiHtAe−iHt) = eiLt|A). With these notions, one obtains
the following simple, yet formal expression for the con-
ductivity:
σ(ω) =
ω − L
ω − L
Using a number of simple manipulations, one can
show23,24,25 that the conductivity can be expressed as
the (1, 1)-component of the inverse of a matrix
σ(ω) = (M(ω) + iK − iω)−1
, (10)
where
Mij(ω) =
ω − LQ
is the so-called memory matrix and
Kij =
Ċi|Cj
a frequency independent matrix. The formal expression
(10) for the conductivity is exact, and completely gen-
eral, i.e. valid for an arbitrary choice of the modes Ci
(they do not even have to be ‘slow’). Only C1 = J is re-
quired. However, due to the projection operators Q, the
memory matrix (11) is in general difficult to evaluate. It
is when one uses approximations to M that the choice of
the projectors becomes crucial (see below).
Obviously, the dc conductivity is given by the (1, 1)-
component of
(M(0) +K)−1. (13)
More generally, the (m,n)-component of Eq. (13) de-
scribes the response of the ‘current’ Cm to an external
field coupling solely to Cn. We note, that since a matrix
of transport coefficients has to be positive (semi)definite,
this also holds for the matrix M(0) +K.
To avoid technical complications associated with the
presence of K we restrict our analysis in the following to
time reversal invariant systems and choose the Ci such,
that they have either signature +1 or −1 under time
reversal34 Θ. In the dc limit, ω = 0, components of
Eq.(13) connecting modes of different signatures vanish.
Thus, M(0) + K is block-diagonal with respect to the
time reversal signature, and consequently we can restrict
our analysis to the subspace of slow modes with the same
signature as C1. However, if Cm and Cn have the same
signature, then (Cm|Ċn) = 0, and thus K vanishes on
this restricted space. The dc conductivity therefore takes
the form
σ = (M(0)−1)11. (14)
IV. CENTRAL CONJECTURE
To obtain a controlled approximation to the memory
matrix in the limit of small g, it is important to identify
the relevant slow modes of the system. For the Ci we
choose quantities which are conserved by H0, [H0, Ci] =
0, such that Ċi = ig[H1, Ci] is linear in the small coupling
g. As argued below, we require that the singularities
of correlation functions of the unperturbed system are
exclusively due to exact conservation laws Ci, i.e. that
the Drude peak appearing in Eq. (3) is the only singular
contribution. Furthermore, we choose J = J‖ = C1 and
consider only Ci with the same time reversal signature
as J , as discussed in the previous section.
To formulate our central conjecture we introduce the
following notions. We define Mn(ω) as the (exact) n× n
memory matrix obtained by setting up the memory ma-
trix formalism for the first n slow modes {Ci, i = 1, .., n}.
Note that the definitions of the relevant projectors P and
Q also depend on this choice, and that for any choice of
n one gets σ = (M−1n )11. We now introduce the approxi-
mate memory matrix M̃n motivated by the following ar-
guments: Ċi is already linear in g, therefore in Eq. (11)
we approximate L by L0 and replace (.|.) by (.|.)0 as
we evaluate the scalar product with respect to H0. As
L0|Ci) = 0 and (Cj |Ċi) = 0 due to time reversal symme-
try, one has L0Q = 1 and Q|Ċi) = |Ċi) and therefore the
projector Q does not contribute within this approxima-
tion. We thus define the n× n matrix M̃n by
M̃n,ij = lim
ω − L0
Note that M̃n is a submatrix of M̃m for m > n and
therefore the approximate expression for the conductiv-
ity σ ≈ (M̃−1n )11 does depend on n while (M−1n )11 is
independent of n. A much simpler, alternative deriva-
tion for M̃1 is given in Appendix B, where the validity of
this formula is also discussed.
The central conjecture of our paper is, that for small g
(M̃−1n )11 gives a lower bound to the dc conductivity, or,
more precisely,
1/g2 = (M̃
∞ )11 ≥ · · · ≥ (M̃−1n )11 ≥ · · · ≥ M̃−11 . (16)
Here σ|
1/g2 = (1/g
2) limg→0 g
2σ denotes the leading
term ∝ 1/g2 in the small-g expansion of σ. Note that
M̃n ∝ g2 by construction. M̃∞ is the approximate mem-
ory matrix where all35 conservation laws have been in-
cluded. In some special situations, discussed in Ref. 6,
one has σ ∼ 1/g4 and therefore σ|
1/g2 = ∞.
A special case of the inequality above is Eq. (B4) in
appedix B, as the scattering rate Γ̃/χ may be expressed
as Γ̃/χ2 = M̃1.
Two steps are necessary to prove Eq. (16). The simple
part is actually the inequalities in Eq. (16). They are
a consequence of the fact that the matrices M̃n are all
positive definite and that M̃n is a submatrix of M̃m for
m ≥ n. More difficult to prove is that the first equality
in (16) holds. To show this we will need an additional
assumption, namely, that the regular part of all correla-
tion functions (to be defined below) remains finite in the
limit g → 0, ω → 0. In this case, the perturbative ex-
pansion around M̃∞ in powers of g is free of singularities
at finite temperature (which is not the case for M̃n<∞).
This in turn implies that limg→0 M∞/g
2 = M̃∞/g
2 and
therefore σ|
1/g2 = (M̃
∞ )11.
Next, we present the two parts of the proof.
A. Inequalities
We start by investigating the (1,1)-component of the
inverse of the positive definite symmetric matrix M̃∞. It
is convenient to write the inverse as
(M̃−1∞ )11 = max
(ϕTe1)
ϕT M̃∞ϕ
where e1 is the first unit vector. The same method is
used to derive Eq. (2) in the context of the Boltzmann
equation. The maximum is obtained for ϕ = M̃−1∞ e1.
By restricting the variational space in (17) to the first n
components of ϕ we reproduce the submatrix M̃n of M̃∞
and obtain
(M̃−1∞ )11 ≥ max
(ϕTe1)
ϕT M̃∞ϕ
= (M̃−1m )11
≥ max
(ϕT e1)
ϕT M̃∞ϕ
= (M̃−1n<m)11
By choosing different values form and n < m, this proves
all inequalities appearing in (16).
B. Expansion of the memory matrix
We proceed by expanding the exact memory matrix
Mn, where Pn = 1 − Qn is a projector on the first n
conservation laws, in powers of g. Using that LQn =
L0 + gL1Qn, we obtain the geometric series
Mn,ij(ω) =
ω − L0
ω − L0
Note that this is not a full expansion in g, as the scalar
product (8) is defined with respect to the full Hamilto-
nian H = H0 + gH1. We will turn to the discussion of
the remaining g-dependence later.
In general, one can expand
λmn|Am)(An|
in terms of some basis Am in the space of operators.
Therefore Eq. (18) can be written as a sum over products
of terms with the general structure
ω − L0
. (19)
In the following we would like to argue that such an ex-
pansion is regular for n = ∞ if all conservation laws
have been included in the definition of Q. As argued in
Appendix B, we have to investigate whether the series co-
efficients in Eq. (18) diverge for ω → 0. The basis of our
argument is the following: as Q∞ projects the dynam-
ics to the space perpendicular to all of the conservation
laws, the associated singularities are absent in Eq. (19)
and therefore the expansion of M∞ is regular.
To show this more formally, we split up B = B‖ +B⊥
in (19) into a component parallel and one perpendicular
to the space of all conserved quantities, |B‖) = P∞|B).
With this notation, the action of L0 becomes more trans-
parent:
ω − L0
|B) = 1
|B‖) +
ω − L0
|B⊥). (20)
As we assume that all divergencies can be traced back
to the conservation laws, we take the second term to be
regular. It is only the first term which leads in Eq. (19)
to a divergence for ω → 0, provided that (A|Qn|B‖) is fi-
nite. If we consider the perturbative expansion ofMn<∞,
where Pn = 1 − Qn projects only to a subset of con-
served quantities, then finite contributions of the form
(A|Qn|B‖) exist and the perturbative series in g will be
singular (see also Appendix B). Considering M∞, how-
ever, Q∞ projects out all conservation laws and therefore
by construction Q∞|B‖) = Q∞P∞|B) = 0. Thus the
first term in (20) does not contribute in (19) for n = ∞
and the expansion (18) of M∞ is therefore regular.
The only remaining part of our argument is to show
that in the limit g → 0 one can safely replace (.|.) by
(.|.)0. Here it is useful to realize that (A|B) can be in-
terpreted as a (generalized) static susceptibility. In the
absence of a phase transition and at finite temperatures,
susceptibilities are smooth, non-singular functions of the
coupling constants and therefore we do not expect any
further singularities from this step. If we define a phase
transition by a singularity in some generalized suscepti-
bility, then the statement that susceptibilities are regular
in the absence of phase transitions even becomes a mere
tautology.
Combining all arguments, the expansion (18) of
M∞(ω → 0) is regular, and using (Ċi|Q∞ = (Ċi| [see
discussion before Eq. (15)] its leading term, k = 0 is
given by M̃∞. We therefore have shown the missing first
equality of our central conjecture (16).
V. DISCUSSION
In this paper we have established that in the limit of
small perturbations, H = H0 + gH1, lower bounds to dc
conductivities may be calculated for situations where the
conductivity is infinite for g = 0. In the opposite case,
when the conductivity is finite for g = 0, one can use
naive perturbation theory to calculate small corrections
to σ without further complications.
The relevant lower bounds are directly obtained from
the memory matrix formalism. Typically26,27,28 one has
to evaluate a small number of correlation functions and
to invert small matrices. The quality of the lower bounds
depends decisively on whether one has been able to iden-
tify the ‘slowest’ modes in the system.
There are many possible applications for the results
presented in this paper. The mostly considered situ-
ation is the case where H0 describes a non-interacting
system26. For situations where the Boltzmann equation
can be applied, it has been pointed out a long time ago by
Belitz29 that there is a one-to-one relation of the memory
matrix calculation to a certain variational Ansatz to the
Boltzmann equation, see Eq. (2). In this paper we were
able to generalize this result to cases where a Boltzmann
description is not possible. For example, if H0 is the
Hamiltonian of a Luttinger liquid, i.e. a non-interacting
bosonic system, then typical perturbations are of the
form cosφ for which a simple transport theory in the
spirit of a Boltzmann or vertex equation does not exist
to our knowledge.
Another class of applications are systems where H0
describes an interacting system, e.g. an integrable one-
dimensional model6 or some non-trivial quantum-field
theory30. In these cases it can become difficult to cal-
culate the memory matrix and one has to resort to use
either numerical6 or field-theoretical methods30 to obtain
the relevant correlation functions.
An important special case are situations where H0
is characterized by a single conserved current with the
proper symmetries, i.e. with overlap to the (heat-, spin-
or charge-) current J . For example, in a non-trivial con-
tinuum field theory H0, interactions lead to the decay of
all modes with exception of the momentum P . In this
case the momentum relaxation and therefore the con-
ductivity at finite T is determined by small perturba-
tions gH1 like disorder or Umklapp scattering which are
present in almost any realistic system. As M̃∞ = M̃1 in
this case, our results suggest that for small g the conduc-
tivity is exactly determined by the momentum relaxation
rate M̃PP = limω→0 i(Ṗ |(ω − L0)−1|Ṗ ),
for g → 0. (21)
Here we used that J‖ = P (P |J)/(P |P ) with χPJ = (P |J)
and we have restored all factors which arise if the nor-
malization condition (9) is not used. In Appendix C, we
check numerically that this statement is valid for a real-
istic example within the Boltzmann equation approach.
A number of assumptions entered our arguments. The
strongest one is the restriction that all relevant singu-
larities arise from exact conservation laws of H0. We
assumed that the regular parts of correlation functions
are finite for ω = 0. There are two distinct scenarios
in which this assumption does not hold. First, in the
limit T → 0, often scattering rates vanish which can lead
to diverencies of the nominally regular parts of correla-
tion functions. Furthermore, at T = 0 even infinitesi-
mally small perturbations can induce phase transitions
– again a situation where our arguments fail. Therefore
our results are not applicable at T = 0. Second, finite
temperature transport may be plagued by additional di-
vergencies for ω → 0 not captured by the Drude weight.
In some special models, for instance, transport is singu-
lar even in the absence of exactly conserved quantities
(e.g. non-interacting phonons in a disordered crystal8).
In all cases known to us, these divergencies can be traced
back to the presence of some slow modes in the system
(e.g. phonons with very low momentum). While we have
not kept track of such divergencies in our arguments, we
nevertheless believe that they do not invalidate our main
inequality (16) as further slow modes not captured by ex-
act conservation laws will only increase the conductivity.
It is, however, likely that the equality (21) is not valid
for such situations. In Appendix C we analyze in some
detail within the Boltzmann equation formalism under
which conditions (21) holds. As an aside, we note that
the singular heat transport of non-interacting disordered
phonons, mentioned above, is well described within our
formalism if we model the clean system by H0 and the
disorder by H1, see the extensive discussion by Ziman
within the variational approach which can be directly
translated to the memory matrix language, see Ref. [29].
It would be interesting to generalize our results to cases
where time reversal symmetry is broken, e.g. by an exter-
nal magnetic field. As time reversal invariance entered
nontrivially in our arguments, this seems not to be sim-
ple. We nevertheless do not see any physical reason why
the inequality should not be valid in this case, too. One
example where no problems arise are spin chains in a
uniform magnetic field31 where one can map the field to
a chemical potential using a Jordan-Wigner transforma-
tion. Then one can directly apply our results to the time
reversal invariant system of Jordan-Wigner fermions.
Acknowledgments
We thank N. Andrei, E. Shimshoni, P. Wölfle and
X. Zotos for useful discussions. This work was partly sup-
ported by the Deutsche Forschungsgemeinschaft through
SFB 608 and the German Israeli Foundation.
APPENDIX A: DRUDE WEIGHT AND MAZUR
INEQUALITY
In this appendix we clarify the connection between the
Drude weight and the Mazur inequality, mentioned in the
introduction.
The Drude weight D is the singular part of the con-
ductivity at zero frequency, Re σ(ω) = πDδ(ω)+σreg(ω).
It can be calculated from the relation
D = lim
ω Im σ(ω).
It has been introduced by Kohn32 as a measure of ballistic
transport, indicated by D > 0.
Using Kubo formulas, conductivities can be expressed
in terms of the dynamic current susceptibilities33 Π(z)
using
σ(z) = − 1
ΠT −Π(z)
, (A1)
where Π(z) is the current response function
Π(z) =
dteizt〈[J(t), J(0)]〉 (A2)
Π′′(ω)
. (A3)
and ΠT is a current susceptibility. The conductivity may
be calculated by setting σ(ω) = σ(z = ω + i0). Relation
(A3) is a well known sum rule and for all regular corre-
lation functions one has ΠT = Π(0). In the presence of
a singular contribution to σ(ω), one easily identifies the
Drude weight with the expression ΠT−Π(0). For this dif-
ference Mazur12,13 derived a lower bound. Furthermore,
Suzuki13 has shown, that ΠT − Π(0) may be expressed
as a sum over all constants of the motion Ci present in
the system36,
D = ΠT −Π(0) =
〈CjJ〉2
〈C2j 〉
. (A4)
Thus, the Drude weight is intimately connected to the
presence of conservation laws: only components of the
current perpendicular to all conservation laws decay and
any conservation law with a component parallel to the
current (i.e. with a finite cross-correlation 〈CjJ〉) leads
to a finite Drude weight and thus ballistic transport. The
relation between the Drude weight and Mazur’s inequal-
ity has been first pointed out by Zotos14.
APPENDIX B: PERTURBATION THEORY FOR
Let us give an example of a naive perturbative deriva-
tion (see also Ref. [6]) to gain some insight about what
problems can turn up in a perturbative derivation as the
one presented in this work. According to our assump-
tions, the conductivity is diverging for g → 0 and there-
fore it is useful to consider the scattering rate Γ(ω)/χ
(with the current susceptibility χ) defined by
σ(ω) =
Γ(ω)/χ− iω
. (B1)
If J is conserved for g = 0 (i.e. for J = J‖, see above),
the scattering rate vanishes, Γ(ω) = 0, for g = 0, which
results in a finite Drude weight. A perturbation around
this singular point results in a finite Γ(ω). In the limit
g → 0 we can expand (B1) for any finite frequency ω in
Γ to obtain
ω2Re σ(ω) = Re Γ(ω) +O(Γ2/ω). (B2)
We can read this as an equation for the leading order
contribution to Γ(ω), which now is expressed through
the Kubo formula for the conductivity. By partially in-
tegrating twice in time we can write Γ(ω) = Γ̃(ω)+O(g3)
Re Γ̃(ω) = Re
dteizt〈[J̇(t), J̇(0)]〉0
z=ω+i0
where J̇ = i[H, J ] = ig[H1, J ] is linear in g and therefore
the expectation value 〈...〉0 can be evaluated with respect
to H0 (which may describe an interacting system). Thus
we have expressed the scattering rate via a simple corre-
lation function of the time derivative of the current.
To determine the dc conductivity one is interested in
the limit ω → 0 and it is tempting to set ω = 0 in
Eq. (B3). We have, however, derived Eq. (B3) in the
limit g → 0 at finite ω and not in the limit ω → 0 at
finite g. The series Eq. (B2) is well defined for finite
ω 6= 0 only and in the limit ω → 0 the series shows
singularities to arbitrarily high orders in 1/ω.
At first sight this makes Eq. (B3) useless for calculating
the dc conductivity. One of the main results of this paper
is that, nevertheless, Γ̃(ω = 0) can be used to obtain a
lower bound to the dc conductivity
σ(ω = 0) ≥ χ
Γ̃(0)
for g → 0. (B4)
APPENDIX C: SINGLE SLOW MODE
In this appendix we check whether in the presence of
a single conservation law with finite cross correlations
with the current the inequality (16) can be replaced by
the equality (21). This requires us to compare the true
conductivity, which in general is hard to determine, to
the result given by M̃1. Thus we restrict ourselves to
the discussion of models for which a Boltzmann equation
can be formulated and the expression for the conductivity
can be calculated at least numerically. In the following
we first show numerically that the equality (21) holds for
a realistic model. In a second step we discuss the precise
regularity requirement of the scattering matrix such that
Eq. (21) holds.
To simplify numerics, we consider a simple one-
dimensional Boltzmann equation of interacting and
weakly disordered Fermions. Clearly, the Boltzmann ap-
proach breaks down close to the Fermi surface due to
singularities associated with the formation of a Luttinger
liquid, but in the present context we are not interested
in this physics as we only want to investigate properties
of the Boltzmann equation. To avoid the restrictions as-
sociated with momentum and energy conservation in one
dimension we consider a dispersion with two minima and
four Fermi points,
ǫk = −
. (C1)
The Boltzmann equation reads
k′qq′
fkfk′(1− fq)(1 − fq′)
− fqfq′(1 − fk)(1 − fk′)
δ(ǫk − ǫk′)
fk(1 − fk′)− fk′(1 − fk)
Wkk′Φk′ (C2)
where the inelastic scattering term S
kk′ = δ(ǫk + ǫk′ −
ǫq − ǫq′)δ(k+ k′ − q− q′) conserves both energy and mo-
mentum. In the last line we have linearized the right
hand side using the definitions of the introductory chap-
ter. The velocity vk is given by vk =
ǫk. The scat-
tering matrix splits up into an interaction component
and a disorder component, Wkk′ = W
kk′ + g
2W 1kk′ . As
we do not consider Umklapp scattering, W 0kk′ conserves
momentum,
′ = 0, and one expects that mo-
mentum relaxation will determine the conductivity for
small g.
For the numerical calculation we discretize momentum
in the interval [−π/2, π/2], kn = nδk = nπ/N with inte-
ger n. (At the boundaries the energy is already too high
to play any role in transport.) The delta function aris-
ing from energy conservation is replaced by a gaussian of
width δ. The proper thermodynamic limit can for exam-
ple be obtained by choosing δ = 0.3/
N . The numerics
shows small finite size effects.
In Fig. 2 we compare the numerical solution of the
Boltzmann equation to the single mode memory matrix
calculation or, equivalently29, to the variational bound
obtained by setting Φk = k in Eq. (2)
k,k′ kWkk′k
k,k′ kW
. (C3)
As can be seen from the inset, in the limit of small g
one obtains the exact value for the conductivity, which is
what we intended to demonstrate.
0 0.05 0.1 0.15 0.2 0.25 0.3
0 0.1 0.2 0.3
FIG. 2: Comparison of the result of a single mode memory
matrix calculation (solid line), Eq. (C3), to the full numerical
solution of the Boltzmann equation (dotted line) for T = 0.05
and N = 500. The memory matrix is always a lower bound to
the Boltzmann result and converges towards it as the disorder
strength g is reduced, as shown in the inset (ratio of the single
mode approximation to the Boltzmann result).
Next we turn to an analysis of regularity conditions
which have to be met in general by the scattering matrix
Wkk′ such that convergence is guaranteed in the limit
g → 0. According to the assumptions of this appendix,
for g = 0 the variational form of the Boltzmann equa-
tion (2) has a unique solution Φ̄k (up to a multiplica-
tive constant), with F (Φ̄k) = ∞,
kk′ Φ̄k′ = 0 and
k vkΦ̄kdf
0/dǫk > 0.
In the presence of a finite, but small g we write the solu-
tion of the Boltzmann equation as Φ = Φ̄+Φ⊥, where Φ⊥
has no component parallel to Φ̄ (i.e.
k Φ̄kΦ
0/dǫk =
0 ). On the basis of the two inequalities
F [Φ̄] ≤ F [Φ] (C4)
ΦWΦ = Φ̄g2W 1Φ̄ + Φ⊥WΦ⊥ ≥ Φ̄g2W 1Φ̄ (C5)
one concludes that Eq. (21) is valid, i.e. that
F [Φ̄]
F [Φ]
under the condition that
vkΦ̄k
or, equivalently,
vkΦ⊥,k
= 0. (C6)
We therefore have to check whether Φ⊥ becomes small
in the limit of small g.
Expanding the saddlepoint equation for (2) we obtain
W 0kk′Φ
k′ = vk
k′k′′ Φ̄k′g
2W 1k′k′′ Φ̄k′′
k′ vk′
g2W 1kk′ Φ̄k′ +O(g2W1Φ⊥,Φ⊥W0Φ⊥)
As by definition Φ⊥ has no component parallel to Φ̄, we
can insert the projector Q which projects out the con-
servation law in front of Φ⊥k on the left hand side. We
therefore conclude that if the inverse of W 0Q exists, then
Φ⊥ is of order g
2, Eq. (C6) is valid and therefore also
Eq. (21). In our numerical examples these conditions are
all met.
Under what conditions can one expect that Eq. (C6) is
not valid? Within the assumptions of this appendix we
have excluded the presence of other zero modes of W 0
(i.e. conservation laws) with finite overlap with the cur-
rent. But it may happen that W 0 has many eigenvalues
which are arbitrarily small such that the sum in Eq. (C6)
diverges. In such a situation the presence of slow modes
which cannot be identified with conservation laws of the
unperturbed system invalidates Eq. (21).
1 X. Zotos and P. Prelovsek, Phys. Rev. B 53, 983 (1996).
2 K. Fabricius and B. M. McCoy, Phys. Rev. B 57, 8340
(1998).
3 B. N. Narozhny, A. J. Millis, and N. Andrei, Phys. Rev. B
58, R2921 (1998).
4 X. Zotos and P. Prelovsek, e-print
arXiv:cond-mat/0304630 (2003).
5 F. Heidrich-Meisner, A. Honecker, D. C. Cabra, and
W. Brenig, Physica B 359, 1394 (2005).
6 P. Jung, R. W. Helmes, and A. Rosch, Phys. Rev. Lett.
96, 067202 (2006).
7 U. Schollwöck, Rev. Mod. Phys. 77, 259 (2005).
8 J. Ziman, Electrons and Phonons: The theory of transport
phenomena in solids (Oxford University Press, 1960).
9 M. Kohler, Z. Phys. 124, 772 (1948).
10 M. Kohler, Z. Phys. 125, 679 (1949).
11 E. H. Sondheimer, Proc. R. Soc. London, Ser. A 203, 75
(1950).
12 P. Mazur, Physica (Amsterdam) 43, 533 (1969).
13 M. Suzuki, Physica (Amsterdam) 51, 277 (1971).
14 X. Zotos, F. Naef, and P. Prelovsek, Phys. Rev. B 55,
11029 (1997).
http://arxiv.org/abs/cond-mat/0304630
15 S. Fujimoto and N. Kawakami, Phys. Rev. Lett. 90,
197202 (2003).
16 F. Heidrich-Meisner, A. Honecker, and W. Brenig, Phys.
Rev. B 71, 184415 (2005).
17 K. Sakai, Physica E (Amsterdam) 29, 664 (2005).
18 E. Shimshoni, N. Andrei, and A. Rosch, Phys. Rev. B 68,
104401 (2003).
19 F. Heidrich-Meisner, A. Honecker, D. C. Cabra, and
W. Brenig, Phys. Rev. B 66, 140406(R) (2002).
20 A. V. Rozhkov and A. L. Chernyshev, Phys. Rev. Lett.
94, 087201 (2005).
21 A. V. Sologubenko, E. Felder, K. Giannò, H. R. Ott, A. Vi-
etkine, and A. Revcolevschi, Phys. Rev. B 62, R6108
(2000).
22 C. Hess, H. El Haes, A. Waske, B. Buchner, C. Sekar,
G. Krabbes, F. Heidrich-Meisner, and W. Brenig, Phys.
Rev. Lett. 98, 027201 (2007).
Hydrodynamic Fluctuations, Broken Symmetry, and Cor-
relation Functions, edited by D. Forster (Perseus, New
York, 1975).
24 H. Mori, Prog. Theor. Phys. 33, 423 (1965).
25 R. Zwanzig, in Lectures in Theoretical Physics, edited by
W. E. Brittin, B. W. Downs and J. Downs (Interscience,
New York, 1961), Vol. III; J. Chem. Phys. 33, 1338 (1960).
26 W. Götze and P. Wölfle, Phys. Rev. B 6, 1226 (1972).
27 T. Giamarchi, Phys. Rev. B 44, 2905 (1991).
28 A. Rosch and N. Andrei, Phys. Rev. Lett. 85, 1092 (2000).
29 D. Belitz, J. Phys. C 17, 2735 (1984).
30 E. Boulat, P. Mehta, N. Andrei, E. Shimshoni, and
A. Rosch, e-print arXiv:cond-mat/0607837 (2006).
31 A. V. Sologubenko, K. Berggold, T. Lorenz, A. Rosch,
E. Shimshoni, M. D. Phillips, and M. M. Turnbull, Phys.
Rev. Lett. 98, 107201 (2007).
32 W. Kohn, Physical Review 133, 171 (1964).
33 L. P. Kadanoff and P. C. Martin, Ann. Phys. (N.Y.) 24,
419 (1963).
34 As Θ2 = ±1 for states with integer or half-integer spin,
the combinations A±ΘAΘ−1 have signatures ±1 provided
the operator A does not change the total spin by half an
integer, which is the case for all operators with finite cross-
correlation functions with the physical currents.
35 The Ci span the space of all conservation laws, including
those which do not commute with each other.
36 More precisely, {Ci} is taken to be a basis of the space of
operators with energy-diagonal entries only, chosen to be
orthogonal in the sense that 〈CiCj〉 ∝ δij .
http://arxiv.org/abs/cond-mat/0607837
|
0704.0887 | Non-extensive thermodynamics of 1D systems with long-range interaction | Non-extensive thermodynamics of 1D systems with long-range interaction
S. S. Apostolov1,2, Z. A. Mayzelis1,2, O. V. Usatenko1 ∗, V. A. Yampol’skii1
A. Ya. Usikov Institute for Radiophysics and Electronics
Ukrainian Academy of Science, 12 Proskura Street, 61085 Kharkov, Ukraine
V. N. Karazin Kharkov National University, 4 Svoboda Sq., 61077 Kharkov, Ukraine
A new approach to non-extensive thermodynamical systems with non-additive energy and entropy
is proposed. The main idea of the paper is based on the statistical matching of the thermodynamical
systems with the additive multi-step Markov chains. This general approach is applied to the Ising
spin chain with long-range interaction between its elements. The asymptotical expressions for the
energy and entropy of the system are derived for the limiting case of weak interaction. These
thermodynamical quantities are found to be non-proportional to the length of the system (number
of its particle).
PACS numbers: 05.40.-a, 02.50.Ga, 05.50.+q
One of the basic postulates of the classical statistical
physics is an assumption about the particle’s interaction
range considered to be small as compared with the sys-
tem size. If this condition does not hold the internal and
free energies, entropy, etc. are no more additive physical
quantities. Due to this fact the definitions of the tem-
perature, entropy, etc. are not evident. The distribution
function of the non-extensive system is not the Gibbs
function, the Boltzman relationship between the entropy
and the statistical weight is not any longer valid.
The non-extensive systems are common in physics [1].
They are gravitational forces [2], Coulomb forces in glob-
ally charged systems [3], wave-particle interactions, mag-
nets with dipolar interactions [4], etc. Long-range corre-
lated systems are intensively studied not only in physics,
but also in the theory of dynamical systems and the the-
ory of probability. The numerous non-Gibbs distribu-
tions are found in various sciences, e.g., Zipf distribution
in linguistic [5, 6], the distribution of nucleotides in DNA
sequences and computer codes [7, 8, 9], the distributions
in financial markets [10, 11, 12], in sociology, physiology,
seismology, and many other sciences [13, 14].
The important model of the non-extensive systems in
physics is the Ising spin chain with elements interacting
at great distances [15, 16]. Unfortunately, the general-
ization of the standard thermodynamic methods for the
case of arbitrary number of interacting particles is im-
possible. There exist solutions for some particular cases
of particles interaction only [16, 17]. The other results
obtained for the chains with long-range interaction are
either the general theorems about the existence of the
phase transitions in the system or the calculations of the
critical indexes [18, 19].
Several algorithms for calculating the thermodynamic
quantities of the long-range correlated systems have been
proposed. Unfortunately, they are not well grounded and
∗[email protected]
need additional justifications. One of them is the Tsalis
thermodynamics (see [20, 21]) based on axiomatically in-
troduced entropy: S(q)(W ) = (W 1−q − 1)/(1− q). Here
W is the statistical weight and q is the parameter reflect-
ing the non-extensiveness of the system. However, this
expression does not correspond to the entropy of Ising
chain with a long but finite range of interaction. Indeed,
it does not contain the size of the system and remains
non-additive when the length of the system tends to in-
finity. Meanwhile, the entropy has to be additive if the
system is much greater than the characteristic range of
interaction. Thus, the problem of finding the thermody-
namic quantities for the non-extensive systems is still of
great importance.
Recently a new vision on the long-range correlated sys-
tems, based on the association of the latter with the
Markov chains, has been proposed [22]. The binary
Markov chains are the sequences of symbols taking on
two values, for example, ±1. These sequences are de-
termined by the conditional probability of some symbol
to occur after the definite previous N symbols and this
conditional probability does not depend on the values of
symbols at a distance, greater than N , P (si = s|T
i,∞) =
P (si = s|T
). Here T±
are the sets of L sequen-
tial symbols (si±1, si±2, . . . , si±L). The unbiased addi-
tive Markov chain is defined by the additive conditional
probability function,
P (si = 1 | T
i,∞) =
F (r)si−r . (1)
The function F (r) is referred to as the memory func-
tion [23]. It describes the direct interaction between the
elements of the chain contrary to the binary correlation
function K(r) = sisi+r that also takes into account the
indirect interactions. Here . . . means statistical aver-
aging over the chain. As was shown in [24], the corre-
lation function can be found from the recurrence rela-
tion K(r) =
r′=1 2F (r
′)K(r − r′), that establishes the
one-to-one correspondence between these two functions.
http://arxiv.org/abs/0704.0887v1
In the limiting case of small F (r), this equation yields
K(r) ≈ δr,0 + 2F (r).
We consider the chain of classical spins with hamilto-
H = −
j−i<N
ε(j − i)sisj , (2)
where si is the spin variable taking on two values, −1
and 1, and ε(r) > 0 is the exchange integral of the ferro-
magnetic coupling. The range N of spin interaction may
be arbitrary but finite.
We find thermodynamical quantities, specifically, en-
ergy and entropy, of such a chain being in thermody-
namical equilibrium with a Gibbs thermostat of temper-
ature T . The hamiltonian (2) does not include the terms,
corresponding to the interaction of the system with the
thermostat. Nevertheless, this interaction leads to the
relaxation of the temperature in different parts of the
chain, and the thermostat fixes its temperature of the
whole spin chain. A great many numerical procedures,
for example, the Metropolis algorithm, were elaborated
for achieving the equilibrium state. The algorithm con-
sists in the sequential trials to change the value of a ran-
domly chosen spin. The probability of spin-flip is deter-
mined by the values of spins on both sides of the chain
within the interaction range. Thus, the Ising spin chain
in the equilibrium state can be characterized by the con-
ditional probability, P (si = s | T
i,∞, T
i,∞), of some spin
si to be equal to s under the condition of definite val-
ues of all other spins on both sides from si. In Ref. [25],
a chain with the two-sided conditional probability func-
tion, which is independent of symbols at a distance of
more than N , was considered. These chains were shown
to be equivalent to the N -step Markov chain. So, to
calculate the statistical properties of the physical object
with long-range interaction, it is sufficient to find the
corresponding Markov chain.
In this work, we demonstrate that the Ising spin chains
with long-range interaction being in the equilibrium state
are statistically equivalent to the Markov chains with
some conditional probability function. Then, using the
known statistical properties of the Markov chains, we
calculate the thermodynamical quantities of the corre-
sponding spin chains.
First, we present a method of defining a thermodynam-
ical quantity Q (e.g., energy, entropy) for a subsystem of
arbitrary length L (a set of L sequential particles) of any
non-extensive system. This subsystem, denoted by S,
interacts with the thermostat and with the rest of the
system. The interaction with the latter makes it impos-
sible to use the standard methods of statistical physics.
The internal energy of the entire system is not equal to
the sum of the internal energies of S and the rest of the
system.
In order to find the condition of the system equilibrium
we divide the ensemble of the system into the subensem-
bles with fixed 2N closest particles from both sides of S.
We denote these two ”border” subsystems of length N
by letter B. All the other particles, except for S and B,
are denoted by R:
. . . si−N
︸ ︷︷ ︸
si−N+1 . . . si
︸ ︷︷ ︸
si+1 . . . si+L
︸ ︷︷ ︸
si+L+1 . . . si+L+N
︸ ︷︷ ︸
si+L+N+1 . . .
︸ ︷︷ ︸
Within every subensemble, the subsystem B is fixed and
plays the role of a partition wall conducting the energy
and keeps its internal energy unchanged. The energy of
system S + R equals to the sum of energies of S and
R and their energy of interaction with B. Thus, within
every subensemble we can use the equilibrium condition
between S and R, analogous to that for extensive ther-
modynamics:
∂ lnWS(ES |B)
∂ lnWR(ER|B)
, (3)
where WS(ES |B) and WR(ER|B) are the statistical
weights of systems, in which the energy of S is ES , those
of R is ER, and B is fixed. We refer to these statistical
weights as the conditional statistical weights.
In a similar way, we introduce any conditional ther-
modynamical quantity Q(·|B). The real quantity Q(·) is
the conditional one, averaged over the subensembles with
different environments B:
Q(·) =
Q(·|B)
Using this way of calculating the thermodynamical quan-
tities, one does not need to find the distribution function
of the system S. Nevertheless, it is quite appropriate
for the non-extensive systems and yields the thermody-
namical values that could be measured experimentally.
If the system is in the thermal contact with the Gibbs
thermostat, within every subensemble with a fixed B,
the equilibrium condition between thermostat and sub-
system S yields the temperature of S equal to T . Thus,
the averaged temperature of the subsystem S is also T .
The conditional entropy can be introduced as the log-
arithm of the conditional statistical weight: S(ES |B) =
lnW (ES , S|B). Equality dS(ES |B) = dES/T is fulfilled
for the conditional quantities S and E. Meanwhile, such
a relation is not valid for the averaged entropy and en-
ergy.
Note that the presented algorithm for calculating the
thermodynamical quantities is rather general and can be
applied to other quantities, e.g., to the probability for
some spin to take on the definite value under the condi-
tion of the particular environment.
Now we proceed to the application of this algorithm
for the analysis of Ising spin chain with long-range inter-
action. The theory of Markov chains is built around the
expression for the conditional probability function. So, in
an effort to find a Markov chain that corresponds to a
given Ising system, we have to find its conditional prob-
ability function. To this end, we consider one spin si
as a system S and the conditional probability P as a
quantity Q in the above-mentioned algorithm. In the ex-
tensive thermodynamics, this probability is determined
by the Gibbs distribution function. According to our al-
gorithm, one can find that the probability of event si = 1
under the condition of the fixed spins from B is given as
p(si = 1|B) =
1 + exp
2ε(r)
(si−r + si+r)
It should be pointed out that this expression is sim-
ilar to the Glauber formula [26] written in some other
terms. Our result has been that the conditional proba-
bility of the si occurring is determined by two subsys-
tems of length N on both sides of si only and does not
depend on the remoter spins. As mentioned above, this
is a property of the N -step Markov sequences.
To arrive at an explicit expressions for the energy and
entropy of the subsystem S we consider the case of weak
interaction,
ε(r) ≪ T . In this case, Eq. (5) corre-
sponds approximately to the additive Markov chain with
the memory function F (r) ≈ ε(r)/2T and, thus, its bi-
nary correlation function is
K(r) ≈ δr,0 + ε(r)/T. (6)
Figure 1 shows that the Ising spin chain being in equi-
librium state is statistically equivalent to the Markov
chain. The solid circles describe the binary correlation
function of the additive Markov chain with the memory
function F (r) = ε(r)/2T . The open circles correspond
to the correlation function of the spin chain being equi-
librated by the Metropolis scheme.
It can be shown that the Metropolis scheme also yields
the same expression (5) for the conditional probability.
Due to this fact one does not need to use the Metropolis
algorithm to find an equilibrium state of the spin chain
but can generate a Markov chain with a corresponding
conditional probability function.
0 20 40 60 80 100
0.000
0.001
0.002
0.003
0 20 40 60 80
0.000
0.002
FIG. 1: (Color online) The correlation functions of the ad-
ditive Markov chain determined by memory function F (r) =
ε(r)/2T (solid circles) and the spin chain being equilibrated
by the Metropolis scheme (open circles). The inset represents
the function ε(r)/T .
Now we examine binary spin chain determined by the
Hamiltonian (2) with si = ±1 and find the non-extensive
energy and entropy of subsystem S containing L par-
ticles. Since the explicit expressions for the correlation
function were derived for the additive Markov chains with
the small memory function only, we suppose the energy of
interaction to be small as compared to the temperature,
ε(r) ≪ T .
The proposed algorithm of finding thermodynamic
quantities is considerably simpler while finding the en-
ergy of the system S of length L. The energy is the
averaged sum of products of pairs of spin values,
E(T ) =
ε(|k − j|)sksj −
j,k=i
ε(|k − j|)sksj .
Formally, this expression depends on the index i of the
first spin in the system S. However, we study the ho-
mogeneous systems, so function E(T ) in Eq. (7) does
not contain i as an argument. Energy in Eq. (7) can be
calculated via the binary correlation function only with
the arguments, less than N , without finding conditional
energies. Using Eq. (6), we arrive at
E(T ) = −
ǫ2NL+
ε2(i)min{i, L}
/T, (8)
with ǫ2N =
ε2(i). If the system length is much greater
than memory length N this expression yields extensive
energy. Indeed, in this case subsystems S and R interact
nearly extensively. In the opposite limiting case we get:
E(T ) ≈ −(4ǫ2NL− ε
2(1)L2)/2T, L ≪ N. (9)
If one regards the whole chain of the length M , forming
the circle, as the system S in the above-mentioned sense,
the additive energy is
E(T ) = −ǫ2NM/T. (10)
This result is very natural because the extensive thermo-
dynamics is valid.
It is seen, that the non-extensive energy is expressed
in terms of binary correlation functions only. This is not
correct for other thermodynamical quantities, e.g. the
entropy of the system S. Formally, to find the entropy,
we have to calculate all conditional entropies by integra-
tion of dS(E|B) = dE(S|B)/T , and then to average the
result over all realizations of the borders. However, at
high temperatures, to a first approximation in the small
parameter
ε(r)/T we can change the order of these
operations and calculate the averaged entropy by inte-
grating this formula taken with the averaged energy. A
constant of integration is such that the chain is com-
pletely randomized at T → ∞, and its entropy is equal
to ln 2L. Thus, we obtain
S(T ) = L ln 2 + E(T )/2T. (11)
Here E(T ) is determined by one of Eqs. (8)-(10). This
expression describes the non-extensive entropy of the sys-
tem S. However, while the energy is non-extensive in the
main approximation, the entropy is non-extensive as a
first approximation in the parameter
ε(r)/T . The
dependences of the non-extensive energy and entropy on
size L of the system S are given in Fig. 2 for step-wise
interaction ε(r). The solid line corresponds to additive
quantities, it is the asymptotic of these quantities for the
large system length.
It should be emphasized, that knowing the energy and
entropy we can find some other thermodynamic quanti-
ties. For example, at high temperatures the heat capac-
ity can be determined in a similar way as for the entropy
calculating. One can use the classical formula CV (T ) =
TdS(T )/dT with averaged entropy from Eq. (11) and
obtain the heat capacity value as CV = −E(T )/T .
This simple procedure of its finding is valid for the
case of high temperatures as the main approximation
only. In the general case, CV (T ) is not determined by
CV (T ) = TdS(T )/dT . This relation holds for the condi-
tional quantities only.
At high temperatures, we calculated the averaged non-
additive energy and entropy without using the condi-
tional ones. In the opposite limiting case of low temper-
atures, the calculation of conditional quantities proves to
be necessary.
Thus, we have suggested the algorithm of evaluat-
ing thermodynamic characteristics of non-extensive sys-
tems. The value of certain thermodynamical quantity
0 500 1000 1500
N=500
N=100
FIG. 2: (Color online) The specific non-extensive energy
E × 103/L and entropy (S/L − ln 2) × 105 vs. the size L
of system for step-wise interaction ε(r). The memory length
N is indicated near the curves. The constant limiting value
of energy is −4ǫ2N/2T ≈ −5× 10
can be obtained by averaging the corresponding condi-
tional quantity. This method is applied to the Ising spin
chain. The explicit expressions for the non-additive en-
ergy and entropy are deduced in the limiting case of high
temperatures as compared to the energy of spins interac-
tion. At high temperatures, the equilibrium Ising chain
of spin turns out to be equivalent to the additive multi-
step Markov chain.
[1] Dynamics and Thermodynamics of Systems with Long-
Range Interactions, T. Dauxois, S. Ruffo, E. Arimondo,
and M. Wilkens, Lecture Notes in Physics Vol. 602
(Springer-Verlag, New York, 2002).
[2] T. Padmanabhan, Phys. Rep. 188, 285 (1990).
[3] D. R. Nicholson, Introduction to Plasma Theory (John
Wiley, New York, 1983).
[4] J. Barre, T. Dauxois, G. De Ninno, D. Fanelli, and
S. Ruffo, Phys. Rev. E 69, 045501(R) (2004).
[5] G. K. Zipf, Human Behavior and the Principle of Least
Effort (Addison-Wesley, New York, 1949).
[6] I. Kanter and D. A. Kessler, Phys. Rev. Lett. 74,
4559 (1995); K. E. Kechedzhy, O. V. Usatenko, and
V. A. Yampol’skii, Phys. Rev. E. 72, 046138 (2005).
[7] R. N. Mantegna, S. V. Buldyrev, A. L. Goldberger,
S. Havlin, C.-K. Peng, M. Simons, and H. E. Stanley,
Phys. Rev. E 52, 2939 (1995).
[8] R. F. Voss, Phys. Rev. Lett. 68, 3805 (1992).
[9] Z. Ouyang, C. Wang, and Z.-S. She, Phys. Rev. Lett.,
93, 078103 (2004).
[10] H. E. Stanley et. al., Physica A 224,302 (1996).
[11] V. Pareto, Le Cour d’Economie Politique (Macmillan,
London, 1896).
[12] R. N. Mantegna, H. E. Stanley, Nature (London) 376,
46 (1995).
[13] D. Sornette, L. Knopoff, Y. Y. Kagan, and C. Vanneste,
J. Geophys. Res. 101, 13883 (1996).
[14] http://myhome.hanafos.com/philoint/phd-data/Zipf’s-Law-2.htm.
[15] L. Casetti, M. Kastner, Phys. Rev. Lett. 97, 100602
(2006).
[16] F. Baldovin and E. Orlandini, Phys. Rev. Lett. 97,
100601 (2006).
[17] N. Theodorakopoulos, Physica D, 216, 185 (2006).
[18] L. P. Kadanoff et al., Rev. Mod. Phys., 39, 395 (1967).
[19] P. C. Hohenberg, B. I. Halperin, Rev. Mod. Phys., 49,
435 (1977).
[20] C. Tsalis, J. Stat. Phys. 52, 479 (1988).
[21] Nonextensive Statistical Mechanics and Its Applications,
eds. S. Abe and Yu. Okamoto (Springer, Berlin, 2001).
[22] O. V. Usatenko and V. A. Yampol’skii, Phys. Rev. Lett.
90, 110601 (2003); O. V. Usatenko, V. A. Yampol’skii,
K. E. Kechedzhy and S. S. Mel’nyk, Phys. Rev. E, 68,
061107 (2003).
[23] S. S. Melnyk, O. V. Usatenko, V. A. Yampolskii, and
V. A. Golick, Phys. Rev. E 72, 026140 (2005).
[24] S. S. Melnyk, O. V. Usatenko, V. A. Yampol’skii, Phys-
ica A 361, 405 (2005); F. M. Izrailev, A. A. Krokhin,
N. M. Makarov, S. S. Melnyk, O. V. Usatenko,
V. A. Yampol’skii, Physica A, 372, 279 (2006).
[25] S. S. Apostolov, Z. A. Mayzelis, O. V. Usatenko, and
V. A. Yampol’skii, Europhys. Lett. 76 (6), 1015 (2006).
[26] R. J. Glauber, J. Math. Phys. 4, 294 (1963).
http://myhome.hanafos.com/philoint/phd-data/Zipf's-Law-2.htm
|
0704.0888 | NMR evidence for a strong modulation of the Bose-Einstein Condensate in
BaCuSi$_2$O$_6$ | NMR evidence for a strong modulation of the Bose-Einstein Condensate in BaCuSi2O6
S. Krämer,1 R. Stern,2 M. Horvatić,1 C. Berthier,1 T. Kimura,3 and I. R. Fisher4
1Grenoble High Magnetic Field Laboratory (GHMFL) - CNRS, BP 166, 38042 Grenoble Cedex 09, France
2National Institute of Chemical Physics and Biophysics, 12618,Tallinn, Estonia
3Los Alamos National Laboratory, Los Alamos NM 87545, USA
4Geballe Laboratory for Advanced Materials and Department of Applied Physics, Stanford University, Stanford CA 94305, USA
(Dated: October 30, 2018)
We present a 63,65Cu and 29Si NMR study of the quasi-2D coupled spin 1/2 dimer compound
BaCuSi2O6 in the magnetic field range 13-26 T and at temperatures as low as 50 mK. NMR data in
the gapped phase reveal that below 90 K different intra-dimer exchange couplings and different gaps
(∆B/∆A = 1.16) exist in every second plane along the c-axis, in addition to a planar incommensurate
(IC) modulation. 29Si spectra in the field induced magnetic ordered phase reveal that close to the
quantum critical point at Hc1 = 23.35 T the average boson density n of the Bose-Einstein condensate
is strongly modulated along the c-axis with a density ratio for every second plane nA/nB ≃ 5. An
IC modulation of the local density is also present in each plane. This adds new constraints for the
understanding of the 2D value φ = 1 of the critical exponent describing the phase boundary.
PACS numbers: 75.10.Jm,75.40.Cx,75.30.Gw
The interest in Bose-Einstein condensation (BEC) has
been considerably renewed since it was shown to occur
in cold atomic gases [1]. In condensed matter, a formal
analog of the BEC can also be obtained in antiferromag-
netic (AF) quantum spin systems [2, 3, 4, 5] under an
applied magnetic field. Many of these systems have a
collective singlet ground state, separated by an energy
gap ∆ from a band of triplet excitations. Applying a
magnetic field (H) lowers the energy of the Mz = −1
sub-band and leads to a quantum phase transition be-
tween a gapped non magnetic phase and a field induced
magnetic ordered (FIMO) phase at the critical field Hc1
corresponding to ∆min-gµBHc1 = 0, where ∆min is the
minimum gap value corresponding to some q vector qmin
[2, 3, 4, 5]. This phase transition can be described as a
BEC of hard core bosons for which the field plays the role
of the chemical potential, provided the U(1) symmetry is
conserved. Quite often, however, anisotropic interactions
can change the universality class of the transition and
open a gap [6, 7, 8]. From that point of view, BaCuSi2O6
[9] seems at the moment the most promising candidate
for the observation of a true BEC quantum critical point
(QCP) [10]. In addition, this system exhibits an un-
usual dimensionality reduction at the QCP, which was
attributed to frustration between adjacent planes in the
nominally body-centered tetragonal structure [11]. The
material also exhibits a weak orthorhombic distortion at
≃90 K which is accompanied by an in-plane IC lattice
modulation [12]. This structural phase transition affects
the triplon dispersion, and the possibility of a modula-
tion of the amplitude of the BEC along the c-axis has
been speculated based on low field inelastic neutron data
[13].
In order to get a microscopic insight of this system, we
performed 29Si and 63,65Cu NMR in BaCuSi2O6 single
crystals. Our data in the gapped phase reveal that the
structural phase transition which occurs around 90 K not
only introduces an IC distortion within the planes, but
also leads to the existence of two types of planes alter-
nating along the c-axis. From one plane to the other, the
intra-dimer exchange coupling and the energy gap for the
triplet states differs by 16 %. Exploring the vicinity of
the QCP in the temperature (T ) range 50-720 mK, we
confirm the linear dependence of TBEC with H −Hc1 as
expected for a 2D BEC. Our main finding is that the av-
erage boson density n in the BEC is strongly modulated
along the c-axis in a ratio of the order of 1:5 for every sec-
ond plane, whereas its local value n(R) is IC modulated
within each plane.
NMR measurements have been obtained on ∼10 mg
single crystals of BaCuSi2O6 using a home-made spec-
trometer and applying an external magnetic fieldH along
the c axis. The gapped phase was studied using a su-
perconducting magnet in the field range 13-15 T and
the temperature range 3-100 K. The investigation of the
FIMO phase was conducted in a 20 MW resistive magnet
at the GHMFL in the field range 22-25 T and the tem-
perature range 50-720 mK. Except for a few field sweeps
in the gapped phase, the spectra were obtained at fixed
fields by sweeping the frequency in regular steps and sum-
ming the Fourier transforms of the recorded echoes.
Before discussing the microscopic nature of the QCP,
let us first consider the NMR data in the gapped phase.
The system consists of S = 1/2 Cu spin dimers parallel
to the c axis and arranged (at room temperature) on a
square lattice in the ab plane. Each Cu dimer is sur-
rounded by four Si atoms, lying approximately in the
equatorial plane. For Cu nuclei, the interaction with the
electronic spins is dominated by the on-site hyperfine in-
teraction. For 29Si nuclei both the transferred hyperfine
interaction through oxygen atoms with a single dimer and
the direct dipolar interaction are important. According
http://arxiv.org/abs/0704.0888v1
125.0 125.2 125.4 125.6 125.8
0 20 40 60 80 100
[ MHz]
T [ K ]
29Si NMR
H = 14.79 T
H || c-axis
I x 0.5
I x 0.25
T [ K ]
FIG. 1: (Color online) Evolution of the normalized 29Si NMR
spectra as a function of T in the gapped phase. Below 90 K
the line splits into two components, each of them correspond-
ing to an IC pattern. Inset: T dependence of the 1st moment
(i.e., the average position) for i) the total spectra (squares)
and ii) the individual components before they overlap (up
and down triangles). The solid and dashed lines are fits for
non-interacting dimers.
to the room temperature structure I41/acd [14], there
should be only one single Cu and two nearly equivalent
Si sites for NMR when H‖c. As far as 29Si is concerned,
one actually observes a single line above 90 K, as can be
seen in Fig. 1. However, below 90 K, the line splits into
two components, each of them corresponding to an IC
pattern, that is an infinite number of inequivalent sites.
This corresponds to the IC structural phase transition
discovered by X-ray measurements [12]. At 3 K, when
T is much smaller than the gap, the spin polarization is
zero and one observes again a single unshifted line, at the
frequency ν = ν0 =
29γH defined by the Si gyromagnetic
ratio 29γ.
On the 63,65Cu NMR spectra recorded at 3 K and
13.2 T (Fig. 2), however, one can distinguish two dif-
ferent Cu sites, denoted A and B. That is, each of the
6 lines of Cu spectrum (for 2 copper isotopes × 3 tran-
sitions of a spin 3/2 nucleus) is split into two, which
is particularly obvious on the lowest frequency “satel-
lite” 63Cu line. The whole spectra can be nicely fitted
with the following parameters: 63ν
Q = 14.85 (14.14)
MHz, η = 0, and K
zz = 1.80 (1.93) %, where νQ is the
quadrupolar frequency and η the asymmetry parameter.
The Kzz is the hyperfine shift, expected to be purely
orbital since the susceptibility has fully vanished. On in-
creasing T the highest frequency 65Cu “satellite” lines of
sites A and B become well separated and both exhibit
a line shape typical of an IC modulation of the nuclear
spin-Hamiltonian. Although the apparent intensities of
lines A and B look different, they correspond to the same
number of nuclei after corrections due to different spin-
spin relaxation rate 1/T2. Since the satellite NMR lines
135 140 145 150 155 160 165 170 175
= 3.7 meV
= 4.3 meV
63,65Cu NMR
[ MHz ]
H = 13.205 T
15.0 14.5 14.0 13.5 13.0
upper sat.
T = 8.9 K
167 MHz
H [ T ]
FIG. 2: (Color online) 63,65Cu NMR spectra of BaCuSi2O6
in the gapped phase, well below the critical field. The T de-
pendence of the high-frequency “satellite” line clearly reveals
two different copper sites. From their shifts, the two corre-
sponding gap values have been determined. Inset: field sweep
spectrum that reveals the IC nature of the line shape for each
of the two sites. Shading separates the contribution of the
65Cu high-frequency satellite from the rest of the spectrum.
The analysis of the latter part confirms that the observed line
shape has a pure magnetic origin.
at 3 K (the lowest temperature) are narrow, the modu-
lation of νQ is negligible, meaning that the IC lineshapes
visible at higher temperature are purely magnetic. This
is confirmed by the analysis of the spectrum shown in the
inset of Fig. 2, which shows that at 8.9 K the broadening
of the “central” line is the same as that observed on the
“satellites”. Such a broadening results from a distribu-
tion of local hyperfine fields: δhz(R) = Azz(R)mz(R) in
which A(R) is the hyperfine coupling tensor and mz(R)
the longitudinal magnetization at site R. Since νQ(R) is
not modulated by the distortion, one expects that the
modulation of A(R) is negligible too, A(R) =A. This
means that the NMR lineshape directly reflects the IC
modulation of mz in the plane.
Keeping constant the νQ parameters obtained at 3 K,
one can analyze the T dependence of the shift Kαzz(T ) of
each component α = A or B according to the formula
Kαzz(T )−Kαzz(0) = Aαzzmdz(∆α, H, T )/H, (1)
where mdz is the magnetization of a non-interacting
dimer, mdz = gcµB/(e
(∆α−gcµBH)/kBT + 1) in the given
T range, gc = 2.3 [15], and K
zz is determined from the
average line position, i.e., the first moment. The best fit
was obtained for ∆A(B) = 3.7 (4.3) meV and A
-16.4 T/µB. We assumed that A
cc = A
cc, but the values
of ∆ depend only weakly with this quantity. The values
are slightly higher than those determined by neutron in-
elastic scattering for Qmin = [π, π] [13], which is normal
considering our approximate description. However, the
ratio ∆B/∆A = 1.16 is in excellent agreement with the
neutron result 1.15. Considering the fact that there is
FIG. 3: (Color online) Evolution of the normalized 29Si spec-
trum as a function of H at fixed T . The colored spectra
correspond to the BEC. a) T = 50 mK: Instead of a simple
splitting of the line as expected for a standard BEC, a com-
plex pattern appears, typical of an IC distribution of the local
hyperfine field. Inset: H dependence of the 1st (M1, squares)
and 2nd (M2, circles) moment of the spectra. M1 is propor-
tional to mz and M2 to the square of the order parameter. b)
T = 720 mK: The non zero magnetization outside the BEC
leads to an IC pattern for fields H ≤ Hc1(T ), where Hc1(T )
is determined from the H dependence of M2, as shown in the
inset. Lower inset: Tc is linear in H −Hc1, as expected for a
2D BEC QCP.
no disorder in the system (as Cu lines at low T are nar-
row), and that X-rays did not detect any commensurate
peak corresponding to a doubling of the unit cell in the
ab plane, our NMR data can only be explained if there
are two types of planes with different gap values. Look-
ing back at the 29Si spectra in Fig. 1, one also observes
just below 90 K two well separated components, both of
them exhibiting an IC pattern. They indeed correspond
to the two types of planes, as the T dependence of their
positions can be well fit using values close to ∆A and
∆B determined from Cu NMR (inset to Fig. 1). This
means that the 90 K structural phase transition not only
corresponds to the onset of an IC distortion in the ab
plane, but also leads simultaneously to an alternation of
different planes along the c-axis [16], with intra-dimer
exchange in the ratio JB/JA ∼= ∆B/∆A = 1.16
Let us now recall what is expected from a microscopic
point of view in the vicinity of the QCP corresponding to
the onset of a homogeneous BEC for coupled dimer sys-
tems. As soon as a finite density of bosons n is present
(H > Hc1 = ∆min/gµB), a transverse staggered magne-
tization m⊥ (⊥ to H) appears. Its amplitude and direc-
tion correspond respectively to the amplitude and phase
of the order parameter. At the same time, the longitu-
dinal magnetization mz is proportional to the number of
bosons at a given temperature and field, this latter play-
ing the role of the chemical potential. Due to the appear-
ance of a static m⊥, the degeneracy between sites which
were equivalent outside the condensate will be lifted and
their corresponding NMR lines will be split into two. To
be more specific, we consider a pair of Si sites situated
in the ab plane on opposite sides of a Cu dimer. Outside
the condensate, and in the absence of the IC modula-
tion, they should give a single line for H ‖ c. Inside
the condensate the NMR lines of this pair of Si sites will
split by ±29γ|Az⊥|m⊥ because their Az⊥ couplings are
of opposite sign. Obviously, observing a splitting of lines
requires the existence of off-diagonal terms in the hyper-
fine tensor. Such terms are always present due to the
direct dipole interaction between an electronic and the
nuclear spin, which can be easily calculated.
Instead of this expected simple line splitting, the spec-
tra of Fig. 3a reveal a quite complex modification of the
line-shape when entering the condensate. The narrow
single line, observed at 23.41 T at the frequency ν0,
which corresponds to a negligible boson density, sud-
denly changes into a composite line-shape including a
narrow and a broad component. The spread-out of
the broad component increases very quickly with the
field. The width of the narrow component also in-
creases, but at a much lower rate. Both peculiar broad-
enings are related to the IC modulation of the boson
density n(R) due to the structural modulation. To be
more precise, a copper dimer at position R has in to-
tal 4 Si atoms (denoted by k = 0,1,2,3) situated around
in a nearly symmetrical square coordination. The ab-
solute values of the corresponding hyperfine couplings
will thus be nearly identical, and we will also neglect
their dependence on R. These 4 Si sites will give rise
to four NMR lines at the frequencies 29νk(R) = ν0 +
ν1(R) + ν2,k(R), where ν1(R) =
29γAzzgµBn(R) and
ν2,k(R) =
29γAz⊥m⊥(R) cos(φ − kπ/2). Note that ν2,k
only exists when the bosons are condensed, that is when
there is a transverse magnetizationm⊥ pointing in the di-
rection φ. In a uniform condensate m⊥ is proportional to√
n near the QCP, since the mean field behavior is valid
in both, 2D and 3D. We assume that only the amplitude
of the order parameter is spatially modulated, and that
m⊥(R) ∝
n(R). The line shape is the histogram of the
distribution of 29νk(R), convoluted by some broadening
due to nuclei – nuclei interaction.
Three quantities can be derived from the analysis of
NMR lines at fixed T values and variable H : the average
boson density n(H,T ), the field Hc1(T ) corresponding to
to the BEC phase boundary, and the field dependence of
the BEC order parameter (for T close to zero). The aver-
age number of bosons n per dimer is directly proportional
to the first moment M1 (i.e., the average position) of the
line: M1 =
(ν − ν0)f(ν)dν = 29γAzzgµBn(H,T ),
where the line shape f(ν) is supposed to be normalized.
The second moment (i.e., the square of the width) of the
line M2 =
(ν − ν0 − M1)2f(ν)dν has two origins:
the broadening due to the IC distribution of (n(R)-n),
and that due to the onset of m⊥ ∝
n(R) in the con-
densate. When increasing H at T ≃ 0, the condensation
occurs as soon as bosons populate the dimer plane. This
is observed in the inset of Fig. 3a at T = 50 mK. Both
M1 (n) and M2 (m⊥) vary linearly with the field and the
extrapolation of M2 to zero allows the determination of
Hc1 at 50 mK. For higher temperatures a thermal pop-
ulation of bosons n exists and increases with H before
entering the BEC phase. As a result both M1 and M2
increase non-linearly with H , as shown in the upper in-
set of Fig. 3b. However, the increase of M2(H) shows
two clearly separated regimes and allows the determina-
tion of Hc1(T ) as the point where the rate of change of
M2(H) strongly increases due to the appearance of m⊥.
Applying this criterion to all temperatures, we were able
to determine the field dependence of TBEC (lower inset of
Fig. 3b) and define precisely the QCP at Hc1 = 23.35 T.
In agreement with the torque measurements [11], we find
a linear field dependence. This is the signature of a 2D
BEC QCP, where Tc ∝ (H − Hc1)φ with φ = 2/d and
d = 2 [17].
This analysis, however, does not take into account the
specificity of the line shapes, which are related to the ex-
istence of two types of planes with different energy gaps.
A careful examination of the spectra clearly reveals that
they correspond to the superposition of two lines exhibit-
ing different field dependence at fixed T value. For sake
of simplicity, we have made a decomposition only for the
spectra at 50 mK, as shown in the inset to Fig. 4. Clearly,
one of the components remains relatively narrow with-
out any splitting, whereas the other immediately heav-
ily broadens in some sort of triangular line shape. The
field dependence of M1 of the two components, shown in
Fig. 4, reveals that they differ by a factor of 5. This is
attributed to the difference by a factor of 5 in the cor-
responding average populations of bosons. If there were
no hopping of bosons between A and B planes, the B
planes should be empty for the range of field such that
∆A < gµBH < ∆B. Although the observed density of
boson is finite in the B planes, it is strongly reduced,
giving rise to a strong commensurate modulation of n
along the c-axis. According to [11], the hopping along
the c-axis of bosons in the condensate is forbidden by
the frustration, and can only occur as a correlated jump
of a pair. However, this argument does not take into
23.5 24.0 24.5 25.0 25.5
T = 50 mK
total line
H [ T ]
0.0 0.1 0.2
20H = 23.59 T
( - 29 H ) [ MHz ]
FIG. 4: (Color online) Using a simple decomposition of the
spectra into two components as shown in the inset, we deter-
mined the 1st moments of the 29Si lines corresponding to the
different types of planes A and B. From the slopes of their
field dependence, the ratio of the average boson density is
found equal to nA/nB ≃ 5.
account the IC modulation of the boson density.
In conclusion, this NMR study of the 2D weakly cou-
pled dimers BaCuSi2O6 reveals that the microscopic na-
ture of the BEC in this system is much more complicated
than first expected. Two types of planes are clearly evi-
denced, with different intra-dimer J couplings and a gap
ratio of 1.16. Close to the QCP we observed that the
density of bosons, which is IC modulated within each
plane, is reduced in every second plane along the c-axis
by a factor of ≃ 5. This provides new constraints for
the understanding of the quasi-2D character of the BEC
close to the QCP.
We thank S.E. Sebastian, C.D. Batista and T. Gia-
marchi for discussions. Part of this work has been sup-
ported by the European Commission through the Euro-
MagNET network (contract RII3-CT-2004-506239), the
Transnational Access - Specific Support Action (contract
RITA-CT-2003-505474), the Estonian Science Founda-
tion (grant 6852) and the NSF (grant DMR-0134613).
[1] M.H. Anderson et al., Science 269, 198 (1995).
[2] I. Afflek, Phys. Rev. B 43, 3215 (1991).
[3] T. Giamarchi and A.M. Tsvelik, Phys. Rev. B 59, 11398
(1999).
[4] T. Nikuni et al., Phys. Rev. Lett. 84, 5868 (2000).
[5] H. Tanaka et al., J. Phys. Soc. Jpn. 70, 939 (2001).
[6] J. Sirker, A. Weiße, O.P. Sushkov, Europhys. Lett. 68,
275 (2004).
[7] M. Clémancey et al., Phys. Rev. Lett. 97, 167204 (2006).
[8] S. Miyahara et al., cond-mat/0610861.
[9] M. Jaime et al., Phys. Rev. Lett. 93, 087203 (2004).
[10] S.E. Sebastian et al., Phys. Rev. B 74, 180401(R) (2006).
[11] S.E. Sebastian et al., Nature. 441, 617 (2006).
[12] E. Samulon et al., Phys. Rev. B 73, 100407(R) (2006).
[13] Ch. Rüegg et al., Phys. Rev. Lett. 98, 017202 (2007).
[14] K.M. Sparta and G. Roth, Act. Cryst. B. 60, 491 (2004).
[15] S.A. Zvyagin et al., Phys. Rev. B 73, 094446 (2006).
[16] This does not introduce any superstructure peak along
(0,0,l) in X-ray experiments, since the unit cell already
http://arxiv.org/abs/cond-mat/0610861
contains four planes along the c-axis. Only the form fac-
tor, which has not been studied in details below 90 K,
should be slightly affected.
[17] C.D. Batista et al., cond-mat/0608703.
http://arxiv.org/abs/cond-mat/0608703
|
0704.0889 | Bibliometric statistical properties of the 100 largest European
universities: prevalent scaling rules in the science system | Microsoft Word - Stat4ArXiv.doc
Bibliometric statistical properties of the 100 largest European
universities: prevalent scaling rules in the science system
Anthony F. J. van Raan
Centre for Science and Technology Studies
Leiden University
Wassenaarseweg 52
P.O. Box 9555
2300 RB Leiden, the Netherlands
Abstract
For the 100 largest European universities we studied the statistical properties of
bibliometric indicators related to research performance, field citation density and journal
impact. We find a size-dependent cumulative advantage for the impact of universities in
terms of total number of citations. In previous work a similar scaling rule was found at
the level of research groups. Therefore we conjecture that this scaling rule is a prevalent
property of the science system. We observe that lower performance universities have a
larger size-dependent cumulative advantage for receiving citations than top-performance
universities. We also find that for the lower-performance universities the fraction of not-
cited publications decreases considerably with size. Generally, the higher the average
journal impact of the publications of a university, the lower the number of not-cited
publications. We find that the average research performance does not ‘dilute’ with size.
Evidently large universities, particularly top-performance universities are characterized
by ‘big and beautiful’. They succeed in keeping a high performance over a broad range of
activities. This most probably is an indication of their overall scientific and intellectual
attractive power. Next we find that particularly for the lower-performance universities the
field citation density provides a strong cumulative advantage in citations per publication.
The relation between number of citations and field citation density found in this study can
be considered as a second basic scaling rule of the science system. Top-performance
universities publish in journals with significantly higher journal impact as compared to
the lower performance universities. We find a significant decrease of the fraction of self-
citations with increasing research performance, average field citation density, and
average journal impact.
1. Introduction
In previous articles (van Raan 2006a, 2006b, 2007) we presented an empirical approach
to the study of the statistical properties of bibliometric indicators of research groups. Now
we focus on a two orders of magnitude larger aggregation level within the science
system: the university. Our target group consists of the 100 largest European
universities. We will distinguish between different ‘dimensions’: top- and lower-
performance universities, higher and lower field citation densities, and higher and lower
journal impact. In particular, we are interested in the phenomenon of size-dependent
(size of a university in terms of number of publications) cumulative advantage1 of impact
1 By ‘cumulative advantage’ we mean that the dependent variable (for instance, number of citations of a
university, C) increases in a disproportional, nonlinear (in this case: power law) way as a function of the
independent variable (for instance, in the present study the size of a research university, in terms of number of
publications, P). Thus, larger universities (in terms of P) do not just receive more citations (as can be
expected), but they do so increasingly more advantageously: universities that are twice as large as other
universities receive, on average, about 2.5 more citations.
(in terms of numbers of citations), for different levels of research performance, field
citation density and journal impact.
Katz (1999) discussed scaling relationships between number of citations and number of
publications across research fields and countries. He concluded that the science system is
characterized by cumulative advantage, more particularly a size-dependent ‘Matthew
effect’ (Merton 1968, 1988). As explained in footnote 1, this implies a nonlinear increase
of impact with increasing size, demonstrated by the finding that the number of citations
as a function of number of publications (in Katz’ study for 152 fields of science) exhibits a
power law dependence with an exponent larger than 1. In our previous articles (van
Raan 2006a, 2006b, 2007) we demonstrated a size-dependent cumulative advantage of
the correlation between number of citations and number of publications also at the level
of research groups. In this study we extent our observations to the level of entire
universities.
We focus on performance-related differences of bibliometric properties of universities.
Particularly important are the citation characteristics of the research fields in which a
university is active (the field citation densities) and the impact level of the journals used
by a university. Seglen (1992, 1994) found a poor correlation between the impact of
publications and journal impact at the level of individual publications. However, grouping
publications in classes of journal impact yielded a high correlation between publication
and journal impact. This grouping is determined by journal impact classes, and not by a
‘natural’ grouping such as research groups and universities. In our previous study we
showed a significant correlation between the average number of citations per publication
of research groups, and the average journal impact of these groups. In this study we
investigate whether this finding also holds at the level of entire universities.
The structure of this study is as follows. Within a set of the 100 largest universities in
Europe we distinguish in our analysis between performance, field citation densities and
journal impact. In Section 2 we discuss the data material of the universities, the
application of the method, and the calculation of the indicators. In Section 3 we analyse
the data of the 100 largest European universities in the framework of size-dependent
cumulative advantage and classify the results of the analysis in main observations. Our
analysis of performance- and field density-related differences of bibliometric properties of
universities reveals further interesting results, particularly on the role of journal impact.
These observations are discussed in the last part of Section 3. Finally, in Section 4 we
summarize the main outcomes of this study.
2. Basic data and indicators derived from these data
We studied the statistics of bibliometric indicators on the basis of all publications (as far
as published in journals covered by the Citation Index, ‘CI publications’2) of the 100
largest European universities for the period 1997-20043. This material is quite unique. To
our knowledge no such compilations of very accurately collected publication sets on a
large scale are used for statistical analysis of the characteristics of indicators at the
university level. Obtaining data at the university level is not a trivial matter. The
delineation of universities through externally available data such as the address
information in the CI database is very problematic. For a thorough discussion of this
problem, see Van Raan (2005a). The (CI-) publications were collected as part of a large
2 Thomson Scientific, the former Institute for Scientific Information (ISI) in Philadelphia, is the producer and
publisher of the Citation Index system covered by the Web of Science. Throughout this article we use the
acronym CI (Citation Index) to refer to this data system.
3 We included Israel. We have left out Lomonosov University of Moscow. As far as number of publications
concerns, this university is one of the largest in Europe (about 24,000 publications in the covered 8-year
period) but the impact is so low (CPP/FCSm about 0.3) that it would have a very outlying position in the
ranking.
EC study on the scientific strengths of the European Union and its member states4. For a
detailed discussion of methodological and technical issues we refer to Moed (2006). From
a listing of more than 250 European universities we selected the 100 largest. The period
covered is 1997-2004 for both publications and citations received by these publications.
In total, the analysis involves the work of many thousands of senior researchers in 100
large universities and covers around 1,5 million publications and 11 million citations
(excluding self-citations), about 15% of the worldwide scientific output and impact.
The indicators are calculated on the basis of a total time-period analysis. This means that
publications are counted for the entire period (1997-2004) and citations are counted up
to and including 2004 (e.g., for publications from 1997, citations are counted in the
period 1997-2004, and for publications from 2004, citations are counted only in 2004).
We are currently updating our data system with the 2005 and 2006 publication and
citation data.
We apply the CWTS5 standard bibliometric indicators. Only ‘external’ citations, i.e.,
citations corrected for self-citations, are taken into account. An overview of these
indicators is given in the text box here below. For a detailed discussion we refer to Van
Raan (1996, 2004, 2005b).
Standard Bibliometric Indicators:
• Number of publications P in CI-covered journals of a university in the specified period;
• Number of citations C received by P during the specified period, without self-citations; including self-
citations: Ci, i.e., number of self-citations Sc = Ci – C, relative amount of self-citations Sc/Ci;
• Average number of citations per publication, without self-citations (CPP);
• Percentage of publications not cited (in the specified period) Pnc;
• Journal-based worldwide average impact as an international reference level for a university (JCS, journal
citation score, which is our journal impact indicator), without self-citations (on this world-wide scale!); in
the case of more than one journal we use the average JCSm; for the calculation of JCSm the same
publication and citation counting procedure, time windows, and article types are used as in the case of
CPP;
• Field-based6 worldwide average impact as an international reference level for a university (FCS, field
citation score), without self-citations (on this world-wide scale!); in the case of more than one field (as
almost always) we use the average FCSm; for the calculation of FCSm the same publication and citation
counting procedure, time windows, and article types are used as in the case of CPP; we refer in this article
to the FCSm indicator as the ‘field citation density’;
• Comparison of the CPP of a university with the world-wide average based on JCSm as a standard, without
self-citations, indicator CPP/JCSm;
• Comparison of the CPP of a university with the world-wide average based on FCSm as a standard, without
self-citations, indicator CPP/FCSm;
• Ratio JCSm/FCSm is the relative, field-normalized journal impact indicator.
In Table 1 we show as an example the results of our bibliometric analysis for the first 30
universities within the European 100 largest. This table makes clear that our indicator
calculations allow an extensive statistical analysis of these indicators for our set of
universities. Of the above indicators, we regard the internationally standardized (field-
normalized) impact indicator CPP/FCSm as our ‘crown’ indicator. This indicator enables
us to observe immediately whether the performance of a university is significantly far
below (indicator value < 0.5), below (0.5 - 0.8), around (0.8 - 1.2), above (1.2 – 1.5), or
far above (>1.5) the international (Western world dominated) impact standard averaged
over all fields (van Raan 2004).
4 The ASSIST (Analysis and Studies of Statistics and Indicators on Science and Technology) project.
5 Centre for Science and Technology Studies, Leiden University.
6 We here use the definition of fields based on a classification of scientific journals into categories developed by
Thomson Scientific/ISI. Although this classification is not perfect, it provides a clear and ‘fixed’ consistent field
definition suitable for automated procedures within our data-system.
Table 1: Largest 30 European universities
University P C CPP Pnc CPP/
1 UNIV CAMBRIDGE UK 36.349 361.681 9,95 29,1 1,63
2 UNIV COLL LONDON UK 34.407 346.028 10,06 26,9 1,46
3 UNIV OXFORD UK 33.780 355.856 10,53 29,5 1,67
4 IMPERIAL COLL LONDON UK 27.017 222.713 8,24 30,7 1,45
5 LUDWIG MAXIMILIANS UNIV MUNCHEN DE 23.519 177.317 7,54 30,8 1,14
6 UNIV PARIS VI PIERRE & MARIE CURIE FR 23.468 146.483 6,24 32,8 1,09
7 UNIV MILANO IT 23.006 175.181 7,61 30,0 1,11
8 UNIV UTRECHT NL 22.668 189.671 8,37 28,3 1,37
9 KATHOLIEKE UNIV LEUVEN BE 22.521 153.851 6,83 34,9 1,22
10 UNIV MANCHESTER UK 22.470 137.812 6,13 34,4 1,16
11 UNIV WIEN AT 21.940 137.251 6,26 32,9 1,01
12 UNIV ROMA SAPIENZA IT 21.778 119.076 5,47 37,7 0,95
13 TEL AVIV UNIV IL 21.447 112.337 5,24 35,9 0,94
14 UNIV HELSINKI FI 21.034 179.662 8,54 28,5 1,38
15 LUNDS UNIV SE 20.631 157.944 7,66 27,9 1,21
16 KAROLINSKA INST STOCKHOLM SE 20.525 213.629 10,41 23,2 1,30
17 KOBENHAVNS UNIV DK 19.555 153.583 7,85 27,4 1,18
18 UNIV AMSTERDAM NL 19.333 163.417 8,45 28,9 1,35
19 UPPSALA UNIV SE 18.998 140.518 7,40 28,6 1,17
20 RUPRECHT KARLS UNIV HEIDELBERG DE 18.735 155.451 8,30 30,1 1,22
21 ETH ZURICH CH 18.611 148.078 7,96 29,8 1,52
22 KINGS COLL UNIV LONDON UK 18.601 161.460 8,68 28,7 1,32
23 HEBREW UNIV JERUSALEM IL 18.389 127.263 6,92 33,2 1,16
24 UNIV PARIS XI SUD FR 18.183 115.157 6,33 32,8 1,13
25 UNIV EDINBURGH UK 17.786 164.380 9,24 29,7 1,48
26 HUMBOLDT UNIV BERLIN DE 17.780 127.381 7,16 31,6 1,13
27 LEIDEN UNIV NL 16.832 147.821 8,78 26,9 1,26
28 UNIV ZURICH CH 16.783 154.154 9,19 29,2 1,33
29 UNIV BARCELONA ES 16.783 103.628 6,17 32,4 1,03
30 UNIV BRISTOL UK 16.387 119.960 7,32 29,7 1,31
3. Results and Discussion
3.1 Impact scaling and research performance
In our previous study (van Raan 2006a, 2006b, 2007) we showed how a set of research
groups is characterized in terms of the correlation between size (the total number of
publications P of a specific research group7) and the total number of citations C received
by a group. Now we calculated the same correlation for all 100 largest European
universities. Fig. 3.1.1 shows that this correlation is described with a strong significance
(coefficient of determination of the fitted regression is R² = 0.79) by a power law:
C(P) = 0.36 · P^1.31 .
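The exponent and the coefficient of determination quoted here (and in the figures below) follow from an ordinary least-squares fit in log-log space; the sketch below (Python with numpy) illustrates the procedure on invented (P, C) pairs, not on the actual university data.

    import numpy as np

    # Invented (P, C) pairs, for illustration only; the real analysis uses the
    # publication and citation counts of the 100 largest European universities.
    P = np.array([3_000, 8_000, 15_000, 22_000, 30_000, 36_000])
    C = np.array([25_000, 90_000, 190_000, 300_000, 420_000, 520_000])

    # Fit log10(C) = log10(a) + alpha * log10(P), i.e. the power law C = a * P**alpha
    alpha, log_a = np.polyfit(np.log10(P), np.log10(C), 1)

    # Coefficient of determination R^2 of the fit in log-log space
    residuals = np.log10(C) - (log_a + alpha * np.log10(P))
    r2 = 1.0 - residuals.var() / np.log10(C).var()

    print(f"C(P) ~ {10**log_a:.2f} * P^{alpha:.2f}, R^2 = {r2:.2f}")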
At the lower side of P (and C) we observe a few ‘outliers’. These are universities with a
considerably lower number of citations as compared to the other larger universities
(among them Charles University of Prague and the University of Athens). We observe
that the size of universities leads to a cumulative advantage (with exponent α=+1.31)
for the number of citations received by these universities. Thus, the Matthew effect also
works at the aggregation level of entire universities. The intriguing question is how the
7 The number of publications is a measure of size in the statistical context described in this article. It is,
however, a proxy for the real size of a research group or a university, for instance in terms of the number of staff full-time equivalents (fte) available for research.
research performance of the universities (measured by the indicator CPP/FCSm) relates
to size-dependency. Gradual differentiation between top- and lower-performance
(top/bottom 10%, 25%, and 50% of the CPP/FCSm distribution) enables us to study
the correlation of C with P and possible scale effects (size-dependent cumulative
advantage) in more detail. The results are presented in Figs. 3.1.2 - 3.1.4 and a
summary of the findings in Table 3.1.
[Plot of Fig. 3.1.1 (log-log axes): Correlation of C (per university) with P (per university) for the 100 largest European universities; fitted power law y = 0.3566·x^1.308, R² = 0.7929.]
Fig. 3.1.1: Correlation of the number of citations (C) received per university with the
number of publications (P) of these universities for all 100 largest European universities.
The group of highest-performance universities (top-10%) does not have a cumulative
advantage (i.e., an exponent significantly8 larger than 1). The bottom-10% exponent is heavily
determined by the outliers. The broader top-25% shows a slight (α=+1.16) and the
bottom-25% a stronger cumulative advantage (α=+1.33). If we divide the entire set of
universities in a top- and bottom-50% we see that both subsets have more or less equal
exponents. Thus, the most intriguing finding is that the lowest performance universities
have a larger size-dependent cumulative advantage than top-performance universities.
This phenomenon was already observed at the level of research groups (van Raan
2006a, 2006b, 2007). It is fascinating that within the science system this scaling rule
covers at least two orders of magnitude in size of entities. Furthermore, the top-
performance universities are generally the larger ones, i.e., they are found on the right-hand side of the
correlation function.
8 To estimate the influence of these noisy data, we randomly removed five universities. We found that the error
in the exponent α is about ± 0.05. Thus, the noisiness of data remains within acceptable limits and does not
substantially affect our findings.
[Plot of Fig. 3.1.2 (log-log axes): top-10% and bottom-10% of CPP/FCSm; fits: top-10% y = 18.455·x^0.9355 (R² = 0.9556), bottom-10% y = 0.0833·x^1.4287 (R² = 0.6539).]
Fig. 3.1.2: Correlation of the number of citations (C) received per university with the
number of publications (P) for the top-10% (of CPP/FCSm) universities (diamonds) and
the bottom-10% universities (squares) within the 100 largest European universities.
[Plot of Fig. 3.1.3 (log-log axes): top-25% and bottom-25% of CPP/FCSm; fits: top-25% y = 1.7776·x^1.1608 (R² = 0.8436), bottom-25% y = 0.2328·x^1.3293 (R² = 0.8291).]
Fig. 3.1.3: Correlation of the number of citations (C) received per university with the
number of publications (P) for the top-25% (of CPP/FCSm) universities (diamonds) and
the bottom-25% universities (squares) within the 100 largest European universities.
[Plot of Fig. 3.1.4 (log-log axes): top-50% and bottom-50% of CPP/FCSm; fits: y = 1.4839·x^1.171 (R² = 0.8197) and y = 1.2745·x^1.1626 (R² = 0.7202).]
Fig. 3.1.4: Correlation of the number of citations (C) received per university with the
number of publications (P) for the top-50% (of CPP/FCSm) universities (diamonds) and
the bottom-50% universities (squares) within the 100 largest European universities.
Table 3.1: Power law exponent α of the correlation of C with P for the 100 largest
European universities in the indicated modalities. The differences in α between top and
bottom modalities are indicated by ∆α(b,t).
All 100 1.31
top 10% 0.94
bottom 10% 1.43
∆α(b,t) 0.49
top 25% 1.16
bottom 25% 1.33
∆α(b,t) 0.17
top 50% 1.17
bottom 50% 1.16
∆α(b,t) -0.01
An important feature of research impact is the number of not-cited publications. We
analysed the correlation of the fraction (percentage) of not-cited-publications Pnc of the
100 largest European universities with size (P) of a university. The results are shown in
Fig. 3.1.5. We observe that the fraction of not-cited publications decreases only weakly as a function of size; the significance of the correlation is too low for clear conclusions. Thus, as a further step we investigate this correlation with a distinction between
top- and lower-performance universities. Fig. 3.1.6 shows the results for the top- and
bottom-25%, and Fig. 3.1.7 for the top-50% and bottom-50% of the CPP/FCSm
distribution of the 100 largest universities.
[Plot of Fig. 3.1.5 (log-log axes): Correlation of Pnc (per university) with P (total per university); fit y = 126.14·x^-0.1425, R² = 0.1239.]
Fig. 3.1.5: Correlation of the percentage of not cited publications (Pnc) with the number
of publications (P) for the entire set of the 100 largest European universities.
[Plot of Fig. 3.1.6 (log-log axes): top-25% and bottom-25% of CPP/FCSm; fits: y = 58.445·x^-0.0705 (R² = 0.0535) and y = 196.33·x^-0.1773 (R² = 0.2131).]
Fig. 3.1.6: Correlation of the relative number of not cited publications (Pnc) with the
number of publications (P) for the top-25% (of CPP/FCSm) universities (diamonds),
and the bottom-25% universities (squares).
[Plot of Fig. 3.1.7 (log-log axes): top-50% and bottom-50% of CPP/FCSm; fits: y = 56.73·x^-0.0649 (R² = 0.0342) and y = 80.722·x^-0.0902 (R² = 0.0466).]
Fig. 3.1.7: Correlation of the relative number of not cited publications (Pnc) with the
number of publications (P) for the top-50% (of CPP/FCSm) universities (diamonds),
and for the bottom-50% universities (squares).
The observations suggest that the fraction of non-cited publications decreases with size,
particularly for the lower performance universities. This phenomenon was also found at
the level of research groups (van Raan 2006a, 2006b, 2007) which means that we
discovered another scaling rule in the science system covering at least two orders of
magnitude. We notice, however, that this scaling rule for non-cited publications is less
strong at the level of entire universities as compared to groups. Advantage by size works
by a mechanism in which the number of not-cited publications is diminished. This
mechanism works at the level of research groups as follows. The larger the number of
publications in a group, the more those publications are ‘promoted’ which otherwise
would have remained uncited. Thus, size reinforces an internal promotion mechanism,
namely initial citation of these ‘stay behind’ publications in other more cited publications
of the group. Then authors in other groups are stimulated to take notice of these stay
behind publications and eventually decide to cite them. Consequently, the mechanism
starts with within-group citation (which is not necessarily the same as self-citation), and
subsequently spreads. It is obvious that particularly the lower performance groups will
benefit from this mechanism. Top-performance groups do not ‘need’ the internal
promotion mechanism to the same extent as low performance groups. This explains, at
least in a qualitative sense, why top-performance groups show less, or even no
cumulative advantage by size. Since an entire university is the sum of a large number of
research groups, the above mechanism will also be visible at the university level.
We also investigated the relation between research performance, as measured by the
indicator CPP/FCSm, and size in terms of P. We find a very slight positive correlation, as
shown in Fig. 3.1.8 for all 100 universities and in Fig. 3.1.9 for the top- and bottom-25%
of the CPP/FCSm distribution. This, however, is certainly not a cumulative
advantage; the exponent of the correlation is very small, around 0.2. Probably the most
interesting aspect of this measurement is that performance does not decrease, does not
'dilute', with increasing size.
[Plot of Fig. 3.1.8 (log-log axes): Correlation of CPP/FCSm (per university) with P (total per university); fit y = 0.1117·x^0.2427, R² = 0.2164.]
Fig. 3.1.8: Correlation of CPP/FCSm with the number of publications (P) for the entire
set of all 100 largest European universities.
[Plot of Fig. 3.1.9 (log-log axes): top-25% and bottom-25% of CPP/FCSm; fits: y = 0.1285·x^0.209 (R² = 0.2101) and y = 0.6862·x^0.0727 (R² = 0.1138).]
Fig. 3.1.9: Correlation of CPP/FCSm with the number of publications (P) for the top-
25% (diamonds) and the bottom-25% (squares) of CPP/FCSm distribution of the 100
largest European universities.
3.2 Impact scaling, field citation density and journal impact
In Fig. 3.2.1 we present the correlation of the number of citations with size for those
universities among the 100 largest European universities that have high and low field
citation densities, i.e., top-25% and bottom-25%, respectively, of the FCSm distribution.
We observe that the high field citation density universities hardly have a cumulative
advantage (exponent α = 1.09). The low field citation density universities have a considerable
size-dependent cumulative advantage (exponent α = 1.50).
[Plot of Fig. 3.2.1 (log-log axes): Correlation of C (total per university) with P (total per university), top-25% and bottom-25% of FCSm; fits: top-25% y = 3.6829·x^1.0853 (R² = 0.8523), bottom-25% y = 0.0458·x^1.503 (R² = 0.8399).]
Figure 3.2.1: Correlation of the number of citations (C) with the number of publications
(P) for the universities within the top- (diamonds) and the bottom-25% (squares) of the
field citation density (FCSm) distribution.
[Plot of Fig. 3.2.2 (log-log axes): Correlation of C (total per university) with P (total per university), top-25% and bottom-25% of JCSm; fits: y = 4.6851·x^1.0657 (R² = 0.9112) and y = 0.0709·x^1.458 (R² = 0.7474).]
Figure 3.2.2: Correlation of the number of citations (C) with the number of publications
(P) for the universities within the top- (diamonds) and the bottom-25% (squares) of the
average journal impact (JCSm) distribution.
In Fig. 3.2.2 we present a similar correlation for the top- and bottom-25% of the JCSm,
the average journal impact of a university. We see that these results are practically the
same as in Fig. 3.2.1. Given the strong correlation of JCSm and FCSm at the level of
universities, as illustrated in Fig. 3.2.3, this similarity can be expected. We remark,
however, that the correlation of JCSm and FCSm has a power exponent 1.22 which
means that the JCSm values increase in a nonlinear way (‘cumulatively’) with FCSm.
[Plot of Fig. 3.2.3 (log-log axes): Correlation of JCSm (per university) with FCSm (per university); fit y = 0.7327·x^1.2191, R² = 0.7033.]
Figure 3.2.3: Correlation of the average journal impact (JCSm) with the average field
citation density (FCSm) for all 100 largest European universities.
We now investigate the relation between citation impact of a university in terms of
average number of citations per publication (CPP) on the one hand, and field citation
density (FCSm) and journal impact (JCSm) on the other. Seglen (1994) showed that the
citedness of individual publications CPP is not significantly affected by journal impact9.
However, grouping publications in classes of journal impact yielded a high correlation
between publication citedness and journal impact. We found that a ‘natural’ grouping of publications, such as the work of a research group, also leads to a high correlation of CPP and JCSm (van Raan 2006b, 2007).
In this study we find that this is also the case at the aggregation level of entire
universities. We find a significant correlation between the average number of citations
per publication for the 100 largest European universities (CPP), and both the field
citation density (FCSm) as well as the average journal impact of these universities
(JCSm). We applied again the distinction between top- and lower-performance
universities in order to find performance-related aspects in the above relation. The
results are shown for the correlation of CPP with FCSm for the entire set of all 100
largest European universities in Fig. 3.2.4, and for the top-performance (top-25% of
CPP/FCSm) and lower performance (bottom-25% of CPP/FCSm) universities in Fig.
3.2.5. The correlation of CPP with JCSm for the entire set of all 100 largest European
universities is presented in Fig. 3.2.6 and for the top-performance and lower
performance universities in Fig. 3.2.7. We see that these correlations are very significant.
9 In Seglen’s work journal impact was defined with the ISI (Web of Science) journal impact factor; he did not
consider the more sophisticated journal impact indicators such as the JCSm used in this study.
[Plot of Fig. 3.2.4 (log-log axes): Correlation of CPP (per university) with FCSm (per university); fit y = 0.5928·x^1.3654, R² = 0.5357.]
Fig. 3.2.4: Correlation of CPP with FCSm for all 100 largest European universities.
[Plot of Fig. 3.2.5 (log-log axes): top-25% and bottom-25% of CPP/FCSm; fits: bottom-25% y = 0.1712·x^1.9746 (R² = 0.7401), top-25% y = 1.3407·x^1.0219 (R² = 0.8316).]
Fig. 3.2.5: Correlation of CPP with FCSm for the top-25% (diamonds) and the bottom-
25% (squares) of CPP/FCSm distribution of the 100 largest European universities.
[Plot of Fig. 3.2.6 (log-log axes): Correlation of CPP (per university) with JCSm (per university); fit y = 0.6942·x^1.2222, R² = 0.907.]
Fig. 3.2.6: Correlation of CPP with JCSm for all 100 largest European universities.
[Plot of Fig. 3.2.7 (log-log axes): top-25% and bottom-25% of CPP/FCSm; fits: bottom-25% y = 0.6384·x^1.238 (R² = 0.9224), top-25% y = 1.228·x^0.9641 (R² = 0.9497).]
Fig. 3.2.7: Correlation of CPP with JCSm for the top-25% (diamonds) and the bottom-
25% (squares) of CPP/FCSm distribution of the 100 largest European universities.
For both the top- and the lower-performance universities, the number of citations per publication
(CPP) increases with field citation density (FCSm, Fig. 3.2.5) as well as with average
journal impact (JCSm, Fig. 3.2.7). Clearly, the top universities generally have higher
CPP values. We find that particularly for the lower-performance universities the field
citation density (FCSm) provides a strong cumulative advantage in citations per
publication (CPP) with exponent α = 1.97. The correlation of CPP with the average
journal impact (JCSm) shows a less strong cumulative advantage for the lower-
performance universities, α = 1.24. We also observe clearly (Fig. 3.2.7) that most top-
performance universities publish in journals with significantly higher journal impact as
compared to the lower performance universities. Moreover, in journals with the same average impact, the top-25% universities perform about a factor of 1.3 better in terms of citations per publication (CPP) than the bottom-25% universities. An overview of
the exponents of the correlation functions is given in Table 3.2.
Table 3.2: Power law exponent α of the correlation of CPP with FCSm and with JCSm
for the 100 largest European universities. The differences in α between top- and bottom-
modalities are given by ∆α(b,t).
              FCSm    JCSm
all           1.37    1.22
top 25%       1.02    0.96
bottom 25%    1.97    1.24
∆α(b,t)       0.95    0.28
Next to the impact measure CPP we also investigated the correlation of the field-
normalized research performance indicator (CPP/FCSm) of the 100 largest European
universities with field citation density and with journal impact. The results are shown for
the correlation of CPP/FCSm with FCSm for the entire set of all 100 largest European
universities in Fig. 3.2.8, and for the top-performance (top-25% of CPP/FCSm) and
lower performance universities in Fig. 3.2.9. The correlation of CPP/FCSm with JCSm
for the entire set of all 100 largest European universities is presented in Fig. 3.2.10 and
for the top-performance and lower performance universities in Fig. 3.2.11.
[Plot of Fig. 3.2.8 (log-log axes): Correlation of CPP/FCSm with FCSm; fit y = 0.5928·x^0.3654, R² = 0.0763.]
Fig. 3.2.8: Correlation of CPP/FCSm with FCSm for the entire set of the 100 largest
European universities.
[Plot of Fig. 3.2.9 (log-log axes): top-25% and bottom-25% of CPP/FCSm; fits: bottom-25% y = 0.1712·x^0.9746 (R² = 0.4095), top-25% y = 1.3407·x^0.0219 (R² = 0.0023).]
Fig. 3.2.9: Correlation of CPP/FCSm with FCSm for the top-25% (diamonds) and the
bottom-25% (squares) of CPP/FCSm distribution of the 100 largest European
universities.
[Plot of Fig. 3.2.10 (log-log axes): Correlation of CPP/FCSm with JCSm; fit y = 0.3417·x^0.6452, R² = 0.503.]
Fig. 3.2.10: Correlation of CPP/FCSm with JCSm for the entire set of the 100 largest
European universities.
[Plot of Fig. 3.2.11 (log-log axes): top-25% and bottom-25% of CPP/FCSm; fits: y = 0.2535·x^0.7627 (R² = 0.7952) and y = 1.1046·x^0.116 (R² = 0.0815).]
Fig. 3.2.11: Correlation of CPP/FCSm with JCSm for the top-25% (diamonds) and the
bottom-25% (squares) of CPP/FCSm distribution of the 100 largest European
universities.
We observe that the research performance of the top universities is independent of field
citation density (FCSm). For the lower-performance universities there is a slight increase
of performance as a function of FCSm. The results for the average journal impact
(JCSm) are similar but more pronounced. Again we notice that top-performance
universities have a strong preference for the higher-impact journals.
Finally, we analysed the correlation between the number of not-cited publications (Pnc)
of a university and its average journal impact level (JCSm). The results are shown in Fig.
3.2.12 for the entire set of 100 universities and in Fig. 3.2.13 for the top- and lower-
performance universities. We see a quite significant correlation between these two
variables. Very clearly the top universities have the lowest Pnc. Given the strong
correlation between CPP and JCSm (see Fig. 3.2.6) we can also expect a significant
correlation between Pnc and CPP, as confirmed nicely by Fig. 3.2.14 for the entire set of
100 universities and in Fig. 3.2.15 for the top- and lower-performance universities. Thus,
we find that the higher the average journal impact of the publications of a university, the
lower the number of not-cited publications. Also, the higher the average number of
citations per publication in a university, the lower the number of not-cited publications. In
other words, universities that are cited more per paper also have more cited papers.
These findings underline the generally good correlation at the university level between
the average number of citations per publication in a university, and its average journal
impact.
We also find that the relation between the relative number of not-cited publications
(Pnc) and the mean number of citations per publication (CPP) can be written in good
approximation as
Pnc = 1/√(CPP).
This expression reflects the characteristics of the citation-distribution function as it is the
relation between the number of publications with zero citations and the average number
of citations per publication.
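As a rough numerical check of this approximation, one can compare 1/√CPP (and the regression of Fig. 3.2.14, Pnc ≈ 85·CPP^-0.51) with the observed percentages of not-cited publications of a few universities taken from Table 1; a minimal sketch in Python:

    import math

    # (CPP, observed Pnc in %) for a few universities, copied from Table 1
    sample = {
        "Univ Cambridge":  (9.95, 29.1),
        "Univ Oxford":     (10.53, 29.5),
        "Univ Manchester": (6.13, 34.4),
        "Univ Wien":       (6.26, 32.9),
    }

    for name, (cpp, pnc_obs) in sample.items():
        approx = 100.0 / math.sqrt(cpp)      # Pnc ~ 1/sqrt(CPP), expressed in %
        fitted = 85.039 * cpp ** -0.5058     # regression shown in Fig. 3.2.14
        print(f"{name:16s} observed {pnc_obs:4.1f}%  "
              f"1/sqrt(CPP): {approx:4.1f}%  fit: {fitted:4.1f}%")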
[Plot of Fig. 3.2.12 (log-log axes): Correlation of Pnc (per university) with JCSm (per university); fit y = 105.22·x^-0.6333, R² = 0.8054.]
Fig. 3.2.12: Correlation of the relative number of not cited publications (Pnc) with the
mean journal impact (JCSm) of the 100 largest European universities.
[Plot of Fig. 3.2.13 (log-log axes): top-25% and bottom-25% of CPP/FCSm; fits: y = 84.574·x^-0.5261 (R² = 0.8363) and y = 111.24·x^-0.6516 (R² = 0.8182).]
Fig. 3.2.13: Correlation of the relative number of not cited publications (Pnc) with the
mean journal impact (JCSm) for the top-25% (of CPP/FCSm) universities (diamonds),
and the bottom-25% universities (squares).
[Plot of Fig. 3.2.14 (log-log axes): Correlation of Pnc (per university) with CPP (per university); fit y = 85.039·x^-0.5058, R² = 0.8459.]
Fig. 3.2.14: Correlation of the relative number of not cited publications (Pnc) with the
mean number of citations per publication (CPP) of the 100 largest European universities.
[Plot of Fig. 3.2.15 (log-log axes): top-25% and bottom-25% of CPP/FCSm; fits: y = 86.762·x^-0.5053 (R² = 0.7551) and y = 88.425·x^-0.5303 (R² = 0.9008).]
Fig. 3.2.15: Correlation of the relative number of not cited publications (Pnc) with the
mean number of citations per publication (CPP) for the top-25% (of CPP/FCSm)
universities (diamonds), and the bottom-25% universities (squares).
3.3 Characteristics of self-citations
In this section we present a first analysis of a specific feature of the science system, the
statistical properties of self-citations. We calculated the correlation between size (the
total number of publications P) and the total number of self-citations Sc for all 100 largest
European universities. Fig. 3.3.1 shows that this correlation is described with high
significance by a power law:
Sc(P) = 0.53 · P^1.15 .
[Plot of Fig. 3.3.1 (log-log axes): Correlation of Sc (per university) with P (per university); fit y = 0.5289·x^1.148, R² = 0.882.]
Fig. 3.3.1: Correlation of the number of self-citations (Sc) received per university with
the number of publications (P) of these universities, for all 100 largest European
universities.
At the lower side of P (and Sc) we again observe the ‘outliers’ as in the case of the
(external) citations (Fig. 3.1.1). We find that the size of universities leads to a cumulative
advantage (with exponent α=+1.15) for the number of self-citations given by these
universities. Gradual differentiation between top- and lower-performance (top/bottom
10%, 25%, and 50%) enables us to study the correlation of Sc with P in more detail as
presented in Figs. 3.3.2 - 3.3.4. We see that the group of highest performance
universities (top-10%) does not have a cumulative advantage (exponent around 1),
whereas the bottom-10% exponent is heavily determined by the outliers. The broader
top-25% and the bottom-25% show a slight cumulative advantage (α= 1.11 and 1.15,
respectively). If we divide the entire set of universities in a top- and bottom-50% we see
that both subsets have more or less equal exponents (around 1.11).
In Fig. 3.3.5 we show that the fraction (percentage) of self-citations (%Sc) decreases
slightly with size (P), but this correlation is not very significant. More significant is the
decrease of the fraction of self-citations as a function of research performance
CPP/FCSm, as shown in Fig. 3.3.6. We also observe a clear decrease of the fraction of self-citations for
the 100 largest universities in Europe as a function of average field citation density
FCSm, Fig. 3.3.7, average journal impact JCSm, Fig. 3.3.8, and of field-normalized
journal impact JCSm/FCSm, see Fig. 3.3.9.
[Plot of Fig. 3.3.2 (log-log axes): top-10% and bottom-10% of CPP/FCSm; fits: top-10% y = 3.7993·x^0.9599 (R² = 0.9755), bottom-10% y = 0.0659·x^1.3587 (R² = 0.6437).]
Fig. 3.3.2: Correlation of the number of self-citations (Sc) received per university with
the number of publications (P), for the top-10% (of CPP/FCSm) universities
(diamonds), and the bottom-10% universities (squares) within the 100 largest European
universities.
[Plot of Fig. 3.3.3 (log-log axes): top-25% and bottom-25% of CPP/FCSm; fits: top-25% y = 0.8272·x^1.1067 (R² = 0.8943), bottom-25% y = 0.4661·x^1.1531 (R² = 0.8385).]
Fig. 3.3.3: Correlation of the number of self-citations (Sc) received per university with
the number of publications (P), for the top-25% (of CPP/FCSm) universities
(diamonds), and the bottom-25% universities (squares) within the 100 largest European
universities.
[Plot of Fig. 3.3.4 (log-log axes): top-50% and bottom-50% of CPP/FCSm; fits: y = 0.7546·x^1.1137 (R² = 0.8759) and y = 0.7·x^1.1158 (R² = 0.8415).]
Fig. 3.3.4: Correlation of the number of self-citations (Sc) received per university with
the number of publications (P), for the top-50% (of CPP/FCSm) universities
(diamonds), and the bottom-50% universities (squares) within the 100 largest European
universities.
[Plot of Fig. 3.3.5 (log-log axes): Correlation of %Sc (per university) with P (per university); fit y = 76.132·x^-0.1197, R² = 0.1076.]
Fig. 3.3.5: Correlation of the relative number of self-citations (%Sc) per university with
the number of publications (P) of these universities, for all 100 largest European
universities.
[Plot of Fig. 3.3.6 (log-log axes): Correlation of %Sc with CPP/FCSm; fit y = 25.98·x^-0.5349, R² = 0.5853.]
Fig. 3.3.6: Correlation of the relative number of self-citations (%Sc) per university with
the performance (CPP/FCSm) of these universities, for all 100 largest European
universities.
[Plot of Fig. 3.3.7 (log-log axes): Correlation of %Sc (per university) with FCSm; fit y = 55.092·x^-0.4599, R² = 0.2474.]
Fig. 3.3.7: Correlation of the relative number of self-citations (%Sc) per university with
the field citation density (FCSm) of these universities, for all 100 largest European
universities.
[Plot of Fig. 3.3.8 (log-log axes): Correlation of %Sc (per university) with JCSm; fit y = 59.794·x^-0.4841, R² = 0.5793.]
Fig. 3.3.8: Correlation of the relative number of self-citations (%Sc) per university with
the average journal impact (JCSm) of these universities, for all 100 largest European
universities.
[Plot of Fig. 3.3.9 (log-log axes): Correlation of %Sc (per university) with JCSm/FCSm; fit y = 25.94·x^-0.8393, R² = 0.5499.]
Fig. 3.3.9: Correlation of the relative number of self-citations (%Sc) per university with
the field-normalized journal impact (JCSm/FCSm) of these universities, for all 100
largest European universities.
4. Summary of the main findings and concluding remarks
For the 100 largest European universities we studied statistical properties of bibliometric
characteristics related to research performance, field citation density and journal impact.
Our five main observations are as follows.
First, we find a size-dependent cumulative advantage for the impact of universities in
terms of total number of citations. Quite remarkably, lower performance universities
have a larger size-dependent cumulative advantage for receiving citations than top-
performance universities. We found in previous work a similar scaling rule at the level of
research groups and therefore we conjecture that this scaling rule is a prevalent property
of the science system. We also observe that the top universities are about twice as
efficient in receiving citations (C) as compared to the bottom-performance universities.
Our criterion of top- or low performance is based on the field-normalized indicator
CPP/FCSm. We hypothesize that in network terms this indicator represents the ‘fitness’
of a university as a node in the science system. It brings a university into a better position
to acquire additional links (in terms of citations) on the basis of quality (high
performance).
Second, we find that for the lower-performance universities the fraction of not-cited
publications decreases with size. We explain this phenomenon with a model in which size
is advantageous in an ‘internal promotion mechanism’ to get more publications cited.
Thus, in this model size is a distinctive parameter which acts as a bridge between the
macro-picture (characteristics of the entire set of universities) and the micro-picture
(characteristics within a university). We find that the higher the average journal impact
of a university, the lower the number of not-cited publications. Also, the higher the
average number of citations per publication in a university, the lower the number of not-
cited publications. In other words, universities that are cited more per paper also have
more cited papers.
Third, we find that the average research performance of a university measured by our
crown indicator CPP/FCSm does not ‘dilute’ with increasing size. Apparently large
universities, particularly the top-performance universities, are characterized by ‘big and
beautiful’. In other words, they succeed in keeping a high performance over a broad
range of activities. This most probably is an indication of their overall scientific and
intellectual attractive power.
Fourth, we observe that particularly the low field citation density and the low journal
impact universities have a considerable size-dependent cumulative advantage for the
total number of citations. We find that particularly for the lower-performance universities
the field citation density (FCSm) provides a strong cumulative advantage in citations per
publication (CPP). We also observe clearly that most top-performance universities
publish in journals with significantly higher journal impact as compared to the lower
performance universities. Moreover, the top universities perform in terms of citations per
publications (CPP) with a factor of about 1.3 better than the bottom universities in
journals with the same average impact. The relation between number of citations and
field citation density found in this study can be considered as a second basic scaling rule
of the science system.
Fifth, we find a significant decrease of the fraction of self-citations as a function of
research performance CPP/FCSm, of the average field citation density FCSm, of the
average journal impact JCSm, and of the field-normalized journal impact JCSm/FCSm.
Acknowledgements
The author would like to thank his CWTS colleagues Henk Moed and Clara Calero for the
work to define and to delineate the universities, and for the data collection, data analysis
and calculation of the bibliometric indicators.
References
Katz, J.S. (1999). The Self-Similar Science System. Research Policy 28, 501-517.
Merton, R.K. (1968). The Matthew effect in science. Science 159, 56-63.
Merton, R.K. (1988). The Matthew Effect in Science, II: Cumulative advantage and the
symbolism of intellectual property. Isis 79, 606-623.
Moed, H.F. (2006) Bibliometric Rankings of World Universities,
http://www.cwts.nl/hm/bibl_rnk_wrld_univ_full.pdf
van Raan, A.F.J. (1996). Advanced Bibliometric Methods as Quantitative Core of Peer
Review Based Evaluation and Foresight Exercises. Scientometrics 36, 397-420.
van Raan, A.F.J. (2004). Measuring Science. Capita Selecta of Current Main Issues. In:
H.F. Moed, W. Glänzel, and U. Schmoch (eds.). Handbook of Quantitative Science and
Technology Research. Dordrecht: Kluwer Academic Publishers, p. 19-50.
van Raan, A.F.J. (2005a). Fatal Attraction: Conceptual and methodological problems in
the ranking of universities by bibliometric methods. Scientometrics 62(1), 133-143.
van Raan, A.F.J. (2005b). Measurement of central aspects of scientific research:
performance, interdisciplinarity, structure. Measurement 3(1), 1-19.
van Raan, A.F.J. (2006a). Statistical Properties of Bibliometric Indicators: Research
Group Indicator Distributions and Correlations. Journal of the American Society for
Information Science and Technology (JASIST) 57(3), 408-430.
van Raan, A.F.J. (2006b). Performance-related differences of bibliometric statistical
properties of research groups: cumulative advantages and hierarchically layered
networks. Journal of the American Society for Information Science and Technology
(JASIST) 57(14), 1919-1935.
van Raan, A.F.J. (2007). Influence of field and journal citation characteristics in size
dependent cumulative advantage of research group impact. To be published.
Seglen, P.O. (1992). The skewness of science. Journal of the American Society for
Information Science, 43, 628-638.
Seglen, P.O. (1994). Causal relationship between article citedness and journal impact.
Journal of the American Society for Information Science, 45, 1-11.
|
0704.0890 | On the Origin of Asymmetries in Bilateral Supernova Remnants | Astronomy & Astrophysics manuscript no. sorlando_6045 © ESO 2018
November 21, 2018
On the origin of asymmetries in bilateral supernova remnants
S. Orlando1,2, F. Bocchino1,2, F. Reale3,1,2, G. Peres3,1,2 and O. Petruk4,5
1 INAF - Osservatorio Astronomico di Palermo “G.S. Vaiana”, Piazza del Parlamento 1, 90134 Palermo, Italy
2 Consorzio COMETA, via Santa Sofia 64, 95123 Catania, Italy
3 Dip. di Scienze Fisiche & Astronomiche, Univ. di Palermo, Piazza del Parlamento 1, 90134 Palermo, Italy
4 Institute for Applied Problems in Mechanics and Mathematics, Naukova St. 3-b Lviv 79060, Ukraine
5 Astronomical Observatory, National University, Kyryla and Methodia St. 8 Lviv 79008, Ukraine
Received ; accepted
Abstract
Aims. We investigate whether the morphology of bilateral supernova remnants (BSNRs) observed in the radio band is determined mainly either
by a non-uniform interstellar medium (ISM) or by a non-uniform ambient magnetic field.
Methods. We perform 3-D MHD simulations of a spherical SNR shock propagating through a magnetized ISM. Two cases of shock propagation
are considered: 1) through a gradient of ambient density with a uniform ambient magnetic field; 2) through a homogeneous medium with a
gradient of ambient magnetic field strength. From the simulations, we synthesize the synchrotron radio emission, making different assumptions
about the details of acceleration and injection of relativistic electrons.
Results. We find that asymmetric BSNRs are produced if the line-of-sight is not aligned with the gradient of ambient plasma density or with
the gradient of ambient magnetic field strength. We derive useful parameters to quantify the degree of asymmetry of the remnants that may
provide a powerful diagnostic of the microphysics of strong shock waves through the comparison between models and observations.
Conclusions. BSNRs with two radio limbs of different brightness can be explained if a gradient of ambient density or, most likely, of ambient
magnetic field strength is perpendicular to the radio limbs. BSNRs with converging similar radio arcs can be explained if the gradient runs
between the two arcs.
Key words. Magnetohydrodynamics (MHD) – Shock waves – ISM: supernova remnants – ISM: magnetic fields – Radio continuum: ISM
1. Introduction
It is widely accepted that the structure and the chemical abun-
dances of the interstellar medium (ISM) are strongly influ-
enced by supernova (SN) explosions and by their remnants
(SNRs). However, the details of the interaction between SNR
shock fronts and ISM depend, in principle, on many factors,
among which the multiple-phase structure of the medium, its
density and temperature, the intensity and direction of the am-
bient magnetic fields. These factors are not easily determined
and this somewhat hampers our detailed understanding of the
complex ISM.
The bilateral supernova remnants (BSNRs, Gaensler 1998;
also called "barrel-shaped", Kesteven & Caswell 1987, or
"bipolar", Fulbright & Reynolds 1990) are considered a bench-
mark for the study of large scale SNR-ISM interactions, since
no small scale effect like encounters with ISM clouds seems
to be relevant. The BSNRs are characterized by two opposed
radio-bright limbs separated by a region of low surface bright-
ness. In general, the remnants appear asymmetric, distorted and
Send offprint requests to: S. Orlando,
e-mail: [email protected]
elongated with respect to the shape and surface brightness of
the two opposed limbs. In most (but not all) of the BSNRs
the symmetry axis is parallel to the galactic plane, and this has
been interpreted as a difficulty for “intrinsic” models, e.g. mod-
els based on SN jets, rather than for “extrinsic” models, e.g.
models based on properties of the surrounding galactic medium
(Gaensler 1998).
In spite of the interest around BSNRs, a satisfactory and
complete model which explains the observed morphology and
the origin of the asymmetries does not exist. The galactic
medium is supposed to be stratified along the lines of con-
stant galactic latitude, and characterized by a large-scale am-
bient magnetic field with field lines probably mostly aligned
with the galactic plane. The magnetic field plays a three-fold
role: first, a magnetic tension and a gradient of the magnetic
field strength is present where the field is perpendicular to the
shock velocity leading to a compression of the plasma; sec-
ond, cosmic ray acceleration is most rapid where the field lines
are perpendicular to the shock speed (Jokipii 1987, Ostrowski
1988); third, the electron injection could be favored where
the magnetic field is parallel to the shock speed (Ellison et al.
1995). Gaensler (1998) notes that magnetic models (i.e. those
considering uniform ISM and ordered magnetic field) cannot
explain the asymmetric morphology of most BSNRs, and in-
vokes a dynamical model based on pre-existing ISM inhomo-
geneities, e.g. large-scale density gradients, tunnels, cavities.
Unfortunately, the predictions of these ad-hoc models have
consisted so far of a qualitative estimate of the BSNRs mor-
phology, with no real estimates of the ISM density interacting
with the shock. Moreover, non-uniform ambient magnetic fields may most likely also cause asymmetries in BSNRs, without
the need to assume ad-hoc density ISM structures. Two main
aspects of the nature of BSNRs, therefore, remain unexplored:
how and under which physical conditions do the asymmetries
originate in BSNRs? What is more effective in determining the
morphology and the asymmetries of this class of SNRs, the
ambient magnetic field or the non-uniform ISM?
Answering such questions at an adequate level requires
detailed physical modeling, high-level numerical implementa-
tions and extensive simulations. Our purpose here is to inves-
tigate whether the morphology of BSNR observed in the ra-
dio band could be mainly determined by the propagation of
the shock through a non-uniform ISM or, rather, across a non-
uniform ambient magnetic field. To this end, we model the
propagation of a shock generated by an SN explosion in the
magnetized non-uniform ISM with detailed numerical MHD
simulations, considering two complementary cases of shock
propagation: 1) through a gradient of ambient density with a
uniform ambient magnetic field; 2) through a homogeneous
isothermal medium with a gradient of ambient magnetic field
strength.
In Sect. 2 we describe the MHD model, the numerical
setup, and the synthesis of synchrotron emission; in Sect. 3 we
analyze the effects the environment has on the radio emission
of the remnant; finally in Sect. 4 and 5 we discuss the results
and draw our conclusions.
2. Model
2.1. Magnetohydrodynamic modeling
We model the propagation of an SN shock front through a
magnetized ambient medium. The model includes no radia-
tive cooling, no thermal conduction, no magnetic field amplification, and no effects on shock dynamics due to back-
reaction of accelerated cosmic rays. The shock propagation
is modeled by solving numerically the time-dependent ideal
MHD equations of mass, momentum, and energy conservation
in a 3-D cartesian coordinate system (x, y, z):
∂ρ/∂t + ∇ · (ρu) = 0 , (1)
∂(ρu)/∂t + ∇ · (ρuu − BB) + ∇P∗ = 0 , (2)
∂(ρE)/∂t + ∇ · [u(ρE + P∗) − B(u · B)] = 0 , (3)
∂B/∂t + ∇ · (uB − Bu) = 0 , (4)
where
P∗ = P + B²/2 ,   E = ǫ + |u|²/2 + B²/(2ρ)
are the total pressure (thermal pressure, P, and magnetic pres-
sure) and the total gas energy (internal energy, ǫ, kinetic energy,
and magnetic energy) respectively, t is the time, ρ = µmHnH is
the mass density, µ = 1.3 is the mean atomic mass (assuming
cosmic abundances), mH is the mass of the hydrogen atom, nH
is the hydrogen number density, u is the gas velocity, T is the
temperature, and B is the magnetic field. We use the ideal gas
law, P = (γ − 1)ρǫ, where γ = 5/3 is the adiabatic index. The
simulations are performed using the flash code (Fryxell et al.
2000), an adaptive mesh refinement multiphysics code for as-
trophysical plasmas.
As initial conditions, we adopted the model profiles of
Truelove & McKee (1999), assuming a spherical remnant with
radius r0snr = 4 pc and with total energy E0 = 1.5 × 10⁵¹ erg,
originating from a progenitor star with mass of 15 Msun,
and propagating through an unperturbed magnetohydrostatic
medium. The initial total energy is partitioned so that 1/4 of
the SN energy is contained in thermal energy, and the other
3/4 in kinetic energy. The explosion is at the center (x, y, z) =
(0, 0, 0) of the computational domain which extends between
−30 and 30 pc in all directions. At the coarsest resolution,
the adaptive mesh algorithm used in the flash code (paramesh;
MacNeice et al. 2000) uniformly covers the 3-D computational
domain with a mesh of 8³ blocks, each with 8³ cells. We al-
low for 3 levels of refinement, with resolution increasing twice
at each refinement level. The refinement criterion adopted
(Löhner 1987) follows the changes in density and temperature.
This grid configuration yields an effective resolution of ≈ 0.1
pc at the finest level, corresponding to an equivalent uniform
mesh of 512³ grid points. We assume zero-gradient conditions
at all boundaries.
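As a quick arithmetic check (not part of the authors' setup, and assuming the coarsest mesh is counted as level zero), the base mesh of 8³ blocks with 8³ cells each gives 64 cells per side, and three refinement levels that each double the resolution yield the quoted equivalent of 512³ cells, i.e. about 0.1 pc per cell over the 60 pc domain:

    # Effective resolution of the AMR grid described above
    base_cells_per_side = 8 * 8            # 8 blocks of 8 cells along each axis
    refinement_levels = 3                  # resolution doubles at each level
    effective_cells = base_cells_per_side * 2 ** refinement_levels
    domain_size_pc = 60.0                  # the box extends from -30 to 30 pc

    print(effective_cells)                     # 512
    print(domain_size_pc / effective_cells)    # ~0.117 pc, i.e. ~0.1 pc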
We follow the expansion of the remnant for 22 kyrs, consid-
ering two sets of simulations: 1) through a gradient of ambient
density with a uniform ambient magnetic field; or 2) through
a homogeneous isothermal medium with a gradient of ambi-
ent magnetic field strength. Table 1 summarizes the physical
parameters characterizing the simulations.
In the first set of simulations, the ambient magnetic field
is assumed uniform with strength B = 1 µG and oriented
parallel to the x axis. The ambient medium is modeled with
an exponential density stratification along the x or the z direc-
tion (i.e. parallel or perpendicular to the B field) of the form:
n(ξ) = n0 + ni exp(−ξ/h) (where ξ is, respectively, x or z) with
n0 = 0.05 cm⁻³ and ni = 0.2 cm⁻³, and where h (set either to
25 pc or to 10 pc) is the density scale length. This configuration
has been used by e.g. Hnatyk & Petruk (1999) to describe the
SNR expansion in an environment with a molecular cloud. Our
choice leads to a density variation of a factor ∼ 6 or ∼ 60, re-
spectively, along the x or the z direction over the spatial domain
considered (60 pc in total). The temperature of the unperturbed
ISM is T = 10⁴ K at ξ = 0 and is determined by pressure balance elsewhere. The adopted values of T = 10⁴ K, n = 0.25 cm⁻³ and B = 1 µG at ξ = 0, outside the remnant, lead to
Table 1. Relevant initial parameters of the simulations: n0 and
ni are particle number densities of the stratified unperturbed
ISM (see text), h is the density scale length, and (x, y, z) are
the coordinates of the magnetic dipole moment. The ambient
medium is either uniform or with an exponential density strat-
ification along the x or the z direction (x−strat. and z−strat.,
respectively); the ambient magnetic field is uniform or dipolar
with the dipole oriented along the x axis and located at (x, y, z).
Model ISM n0 [cm⁻³] ni [cm⁻³] h [pc] B (x, y, z) [pc]
GZ1 z−strat. 0.05 0.2 25 uniform -
GZ2 z−strat. 0.05 0.2 10 uniform -
GX1 x−strat. 0.05 0.2 25 uniform -
GX2 x−strat. 0.05 0.2 10 uniform -
DZ1 uniform 0.25 - - z−strat. (0, 0,−100)
DZ2 uniform 0.25 - - z−strat. (0, 0,−50)
DX1 uniform 0.25 - - x−strat. (−100, 0, 0)
DX2 uniform 0.25 - - x−strat. (−50, 0, 0)
β ∼ 17 (where β = P/(B²/8π) is the ratio of thermal to magnetic pressure), a typical order of magnitude of β in the diffuse
regions of the ISM (Mac Low & Klessen 2004).
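A minimal sketch (in Python, not part of the FLASH setup) that evaluates the stratification n(ξ) = n0 + ni exp(−ξ/h) at the domain boundaries reproduces the density contrasts quoted above; the plasma β estimate assumes, as a rough approximation of our own, a fully ionized hydrogen gas with total particle density ≈ 2 nH:

    import math

    n0, ni = 0.05, 0.2            # cm^-3, parameters of the stratification
    xi_min, xi_max = -30.0, 30.0  # pc, extent of the computational box

    def n(xi, h):
        """Ambient hydrogen number density n(xi) = n0 + ni*exp(-xi/h); xi, h in pc."""
        return n0 + ni * math.exp(-xi / h)

    for h in (25.0, 10.0):        # density scale lengths used in the simulations
        contrast = n(xi_min, h) / n(xi_max, h)
        print(f"h = {h:4.1f} pc: density contrast over 60 pc ~ {contrast:.0f}")
        # -> ~6.5 for h = 25 pc and ~68 for h = 10 pc, i.e. the factors of ~6
        #    and ~60 quoted in the text

    # Rough plasma beta at xi = 0 (assuming total particle density ~ 2 n_H)
    k_B = 1.38e-16                       # erg/K
    n_H, T, B = 0.25, 1.0e4, 1.0e-6      # cm^-3, K, Gauss
    beta = (2.0 * n_H * k_B * T) / (B**2 / (8.0 * math.pi))
    print(f"beta ~ {beta:.0f}")          # ~17, as stated above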
In the second set of simulations, the unperturbed ambient
medium is uniform with temperature T = 10⁴ K and particle number density n = 0.25 cm⁻³. The ambient magnetic
field, B , is assumed to be dipolar. This idealized situation is
adopted here mainly to ensure magnetostaticity of the non-
uniform field. The dipole is oriented parallel to the x axis and
located on the z axis (x = y = 0) either at z = −100 pc or at
z = −50 pc; alternatively the dipole is located on the x axis
(y = z = 0) either at x = −100 pc or at x = −50 pc (as shown
in Fig. 1). In both configurations, the field strength varies by a
factor ∼ 6 (z or x = −100 pc) or ∼ 60 (z or x = −50 pc) over
60 pc: in the first case in the direction perpendicular to the av-
erage ambient field 〈B〉, whereas in the second case parallel to
〈B〉. In all the cases, the initial magnetic field strength is set to
B = 1 µG at the center of the SN explosion (x = y = z = 0).
Note that the transition time from adiabatic to radiative
phase for a SNR is (e.g. Blondin et al. 1998; Petruk 2005)
ttr = 2.84 × 10⁴ E51^(4/17) nism^(−9/17) yr , (5)
where E51 = E0/(10⁵¹ erg) and nism is the particle number den-
sity of the ISM. In our set of simulations, runs GZ2 and GX2
present the lowest values of the transition time, namely ttr ≈ 25
kyr. Since we follow the expansion of the remnant for 22 kyrs,
our modeled SNRs are in the adiabatic phase.
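Evaluating Eq. (5) for the explosion energy adopted here (E51 = 1.5) shows how the transition time depends on the ambient density; the sketch below (Python) only evaluates the printed formula for a few representative densities and is not taken from the simulation outputs.

    def t_transition_kyr(E51, n_ism):
        """Adiabatic-to-radiative transition time of Eq. (5), returned in kyr."""
        return 2.84e4 * E51 ** (4.0 / 17.0) * n_ism ** (-9.0 / 17.0) / 1.0e3

    E51 = 1.5                               # explosion energy in units of 1e51 erg
    for n in (0.25, 0.5, 1.0, 1.5):         # cm^-3, representative ambient densities
        print(f"n = {n:4.2f} cm^-3 -> t_tr ~ {t_transition_kyr(E51, n):5.1f} kyr")
    # For these densities the transition time stays above the 22 kyr followed in
    # the simulations, down to the ~25 kyr minimum quoted for runs GZ2 and GX2.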
2.2. Nonthermal electrons and synchrotron emission
We synthesize the radio emission from the remnant, assum-
ing that it is only due to synchrotron radiation from relativistic
electrons distributed with a power law spectrum N(E) = K E^(−ζ) ,
where E is the electron energy, N(E) is the number of elec-
trons per unit volume with arbitrary directions of motion and
with energies in the interval [E, E + dE], K is the normaliza-
tion of the electron distribution, and ζ is the power law index.
Figure 1. 2-D sections in the (x, z) plane of the initial mass
density distribution and initial configuration of the unperturbed
dipolar ambient magnetic field in two cases: dipole moment lo-
cated on the z axis (DZ1, left panel), or on the x axis (DX1,
right panel). The initial remnant is at the center of the domain.
Black lines are magnetic field lines.
Following Ginzburg & Syrovatskii (1965), the radio emissivity
can be expressed as:
i(ν) = C1 K B⊥^(α+1) ν^(−α) , (6)
where C1 is a constant, B⊥ is the component of the magnetic
field perpendicular to the line-of-sight (LoS), ν is the frequency
of the radiation, α = (ζ − 1)/2 is the synchrotron spectral index
(assumed to be uniform everywhere and taken as 0.5 as ob-
served in many BSNRs). To compute the total radio intensity
(Stokes parameter I) at a given frequency ν0, we integrate the
emissivity i(ν0) along the LoS:
I(ν0) = ∫ i(ν0) dl , (7)
where dl is the increment along the LoS.
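The synthesis of Eqs. (6)-(7) amounts to weighting each cell by K B⊥^(α+1) ν^(−α) and summing the result along the chosen line of sight; a minimal sketch on a toy Cartesian grid (Python with numpy, uniform cell size, arbitrary units, C1 set to 1) is given below. It only illustrates the procedure and is not the post-processing code actually used for the maps in this paper.

    import numpy as np

    def radio_map(K, B, los_axis=0, alpha=0.5, nu=1.0, dl=1.0):
        """Total-intensity map from Eqs. (6)-(7), in arbitrary units.

        K        : (nx, ny, nz) normalization of the electron distribution
        B        : (3, nx, ny, nz) magnetic field components (Bx, By, Bz)
        los_axis : 0, 1 or 2 -> line of sight along x, y or z
        """
        # B perpendicular to the LoS: keep only the two sky-plane components
        sky = [c for c in range(3) if c != los_axis]
        B_perp = np.sqrt(B[sky[0]] ** 2 + B[sky[1]] ** 2)
        emissivity = K * B_perp ** (alpha + 1.0) * nu ** (-alpha)   # Eq. (6), C1 = 1
        return emissivity.sum(axis=los_axis) * dl                   # Eq. (7)

    # Toy example: random K and B on a 32^3 grid, LoS along z
    rng = np.random.default_rng(0)
    K = rng.random((32, 32, 32))
    B = rng.normal(size=(3, 32, 32, 32))
    I = radio_map(K, B, los_axis=2)
    print(I.shape, float(I.max()))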
The normalization of the electron distribution Ks in Eq.
6 (index “s” refers to the immediately post-shock values) de-
pends on the injection efficiency (the fraction of electrons
that move into the cosmic-ray pool). Unfortunately, it is un-
known how the injection efficiency evolves in time. On theo-
retical grounds, Ks is expected to vary with the shock velocity
Vsh(t) and, in case of inhomogeneous ISM, with the immedi-
ately post-shock value of mass density, ρs; let us assume that
approximately Ks ∝ ρs Vsh(t)^(−b). Reynolds (1998) considered
three empirical alternatives for b as a free parameter, namely,
b = 0,−1,−2. Petruk & Bandiera (2006) showed that one can
expect b > 0 and its value could be b ≈ 4. Stronger shocks
are more successful in accelerating particles. To be accelerated
effectively, a particle should obtain in each Fermi cycle larger
increase in momentum, which is proportional to the shock ve-
locity. Negative b reflects an expectation that injection effi-
ciency may behave in a way similar to acceleration efficiency:
stronger shocks might inject particles more effectively. In con-
trast, positive b represents a different point of view: efficien-
cies of injection and acceleration may have opposite depen-
dencies on the shock velocity. A stronger shock produces higher
turbulence, which is expected to prevent more thermal particles
from recrossing the shock from downstream to upstream and from being,
therefore, injected. Since the picture of injection is quite un-
clear from both theoretical and observational points of view,
we do not pay attention to the physical motivations of the value
of b. Instead, our goal is to see how different trends in evolu-
tion of injection efficiency may affect the visible morphology
of SNRs. Such understanding could be useful for future obser-
vational tests on the value of b.
We found, in agreement with Reynolds (1998), that the
value of b does not affect the main features of the sur-
face brightness distribution if SNR evolves in uniform ISM.
Therefore we use the value b = 0 to produce the SNR images in
models with uniform ISM (models DZ1, DZ2, DX1, and DX2).
In cases where non-uniformity of ISM causes variation of the
shock velocity in SNR (models GZ1, GZ2, GX1, and GX2),
we calculate images for b = −2, 0, 2. We follow the model of
Reynolds (1998) in description of the post-shock evolution of
relativistic electrons. Adopting this approach and considering
that ζ = 2 (being α = 0.5, see above), one obtains that (see
Appendix A)
K(a, t) / Ks(R, t) = [P(a, t) / P(R, t)]^(−b/2) · [ρo(a) / ρo(R)]^(−(b+1)/3) · [ρ(a, t) / ρ(R, t)]^(5b/6 + 4/3)
where a is the lagrangian coordinate, R is the shock radius, ρ is
the gas density, P is the gas pressure, and the index “o” refers
to the pre-shock values. It is important to note that this formula
accounts for variation of injection efficiency caused by the non-
uniformity of ISM.
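For reference, the ratio above can be evaluated directly once the three pressure and density ratios are known; the small sketch below (Python) simply transcribes the formula as reconstructed here, with the ratios supplied by the caller.

    def K_ratio(P_ratio, rho_o_ratio, rho_ratio, b):
        """K(a,t)/Ks(R,t) for zeta = 2 (alpha = 0.5), following the expression above.

        P_ratio     : P(a,t)/P(R,t)      (gas pressure ratio)
        rho_o_ratio : rho_o(a)/rho_o(R)  (pre-shock density ratio)
        rho_ratio   : rho(a,t)/rho(R,t)  (current density ratio)
        b           : injection exponent in Ks ~ rho_s * Vsh**(-b)
        """
        return (P_ratio ** (-b / 2.0)
                * rho_o_ratio ** (-(b + 1.0) / 3.0)
                * rho_ratio ** (5.0 * b / 6.0 + 4.0 / 3.0))

    # Example: uniform pre-shock medium, a fluid element with half the shock
    # pressure and a quarter of the post-shock density, for b = -2, 0 and 2
    for b in (-2, 0, 2):
        print(b, round(K_ratio(0.5, 1.0, 0.25, b), 4))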
The electron injection efficiency may also vary with the
obliquity angle between the external magnetic field and the
shock normal, φBn. The numerical simulations suggest that in-
jection efficiency is larger for parallel shocks, i.e. where the
magnetic field is parallel to the shock speed (obliquity an-
gle close to zero; Ellison et al. 1995). However, it has been
shown (Fulbright & Reynolds 1990) that models with injec-
tion strongly favoring parallel shocks produce SNR maps that
do not resemble any known objects (it is also claimed that
injection is more efficient where the magnetic field is per-
pendicular to the shock speed; Jokipii 1987). On the other
hand, comparison of known SNRs morphologies with model
SNR images calculated for different strengths of the injec-
tion efficiency dependence on obliquity suggests that the in-
jection efficiency in real SNRs could not depend on obliquity
(Petruk, in preparation). In such an unclear situation, we con-
sider the three cases: quasi-parallel, quasi-perpendicular, and
isotropic injection models. Following Fulbright & Reynolds
(1990), we model quasi-parallel injection by multiplying the
normalization of the electron distribution K by cos²φBn2 (see
also Leckband et al. 1989), where φBn2 is the angle between the
shock normal and the post-shock magnetic field1. By analogy
with the quasi-parallel case, we model quasi-perpendicular in-
jection by multiplying K by sin²φBn2.
1 For a shock compression ratio of 4 (the shock Mach number is ≫ 10 in all directions during the whole evolution in each of our simulations), the obliquity angle between the external magnetic field and the shock normal, φBn, is related to φBn2 by sin²φBn2 = (cot²φBn/16 + 1)^(−1) (e.g. Fulbright & Reynolds 1990).
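A small sketch (Python) of the obliquity weighting described above (quasi-parallel, isotropic and quasi-perpendicular recipes), using the pre- to post-shock obliquity relation of the footnote for a compression ratio of 4; again a transcription for illustration, not the code used for the maps in this paper.

    import math

    def sin2_phi_Bn2(phi_Bn):
        """Post-shock obliquity from the pre-shock one (compression ratio 4):
        sin^2(phi_Bn2) = 1 / (cot^2(phi_Bn)/16 + 1), phi_Bn in radians (> 0)."""
        cot2 = 1.0 / math.tan(phi_Bn) ** 2
        return 1.0 / (cot2 / 16.0 + 1.0)

    def injection_weight(phi_Bn, mode="isotropic"):
        """Multiplicative factor applied to K for the three injection recipes."""
        s2 = sin2_phi_Bn2(phi_Bn)
        if mode == "quasi-parallel":
            return 1.0 - s2              # cos^2(phi_Bn2)
        if mode == "quasi-perpendicular":
            return s2                    # sin^2(phi_Bn2)
        return 1.0                       # isotropic injection

    for deg in (10, 45, 80):
        phi = math.radians(deg)
        print(deg,
              round(injection_weight(phi, "quasi-parallel"), 3),
              round(injection_weight(phi, "quasi-perpendicular"), 3))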
An important point is the degree of ordering of magnetic
field downstream of the shock. Radio polarization observations
of a number of SNRs (e.g. Tycho, Dickel et al. 1991; SN1006,
Reynolds & Gilmore 1993) show a low degree of polariza-
tion, 10-15% (in the case of an ordered magnetic field the value ex-
pected is about 70%; Fulbright & Reynolds 1990), indicating
highly disordered magnetic field. Thus we calculate the syn-
chrotron images of SNR for two opposite cases. First, since our
MHD code gives us the three components of magnetic field,
we are able to calculate images with ordered magnetic field.
Second, we introduce the procedure of the magnetic field dis-
ordering (with randomly oriented magnetic field vector with
the same magnitude in each point) and then synthesize the ra-
dio maps. In models which have a disordered magnetic field,
we use the post-shock magnetic field before disordering to cal-
culate the angle φBn2; as discussed by Fulbright & Reynolds
(1990), this corresponds to assume that the disordering process
takes place over a longer time-scale than the electron injection,
occurring in the close proximity of the shock. Since we found
that the asymmetries induced by gradients either of ambient
plasma density or of ambient magnetic field strength are not
significantly affected by the degree of ordering of the magnetic
field downstream of the shock, in the following we will focus
on the models with disordered magnetic field.
The goal of this paper is to investigate whether a non-uniform ISM
or a non-uniform magnetic field can produce asymmetries in
BSNRs. In order to clearly see the role of these
two factors in determining the morphology of BSNRs, we use
some simplifying assumptions about the electron kinetics and the
behavior of the magnetic field in the vicinity of the shock front. Our
calculations are performed in the test-particle limit, i.e. they
ignore the energy in cosmic rays. In particular, we do not con-
sider possible amplification of magnetic field by the cosmic-ray
streaming instability (Lucek & Bell 2000, Bell & Lucek 2001).
We expect that the main features of the modeled SNR morphol-
ogy will not change if this process is independent of obliquity
angle. If future investigations show undoubtedly that magnetic
field amplification varies strongly with obliquity, the role of
this effect in producing BSNRs will have to be studied further.
3. Results
In all the models examined, we found the typical evolution
of adiabatic SNRs expanding through an organized ambient
magnetic field (see Balsara et al. 2001 and references therein):
the fast expansion of the shock front with temperatures of few
millions degrees, and the development of Richtmyer-Meshkov
(R-M) instability, as the forward and reverse shocks progress
through the ISM and ejecta, respectively (see Kane et al. 1999).
As examples, Fig. 2 shows 2-D sections in the (x, z) plane of
the distributions of mass density and of magnetic field strength
for the models GZ2, DZ2, and DX2 at t = 18 kyrs. The in-
ner shell is dominated by the R-M instability that causes the
plasma mixing and the magnetic field amplification. In the in-
ner shell, the magnetic field shows a turbulent structure with
preferentially radial components around the R-M fingers (see
Fig. 3). Note that some authors have invoked the R-M insta-
bilities to explain the dominant radial magnetic field observed
Figure 2. 2-D sections in the (x, z) plane of the mass density
distribution (left panels), in log scale, and of the distribution
of the magnetic-field strength (right panels), in log scale, in
the simulations GZ2 (upper panels), DZ2 (middle panels), and
DX2 (lower panels) at t = 18 kyrs. The box in the upper left
panel marks the region shown in Fig. 3.
in the inner shell of SNRs (e.g. Jun & Norman 1996); however,
in our simulations, the radial tendency is observed well inside
the remnant and not immediately behind the shock as inferred
from observations.
We found that, throughout the expansion, the shape of the
remnant is not appreciably distorted by the ambient magnetic
field because, for the values of explosion energy and ambi-
ent field strength (typical of SNRs) used in our simulations,
the kinetic energy of the shock is many orders of magnitude
larger than the energy density in the ambient B field (see also
Mineshige & Shibata 1990). The shape of the remnant does
not differ visually from a sphere even in the cases with density
stratification of the ambient medium2, as shown by
Hnatyk & Petruk (1999).
The radio emission of the evolved remnants is character-
ized by an incomplete shell morphology when the viewing an-
gle is not aligned with the direction of the average ambient
2 In these cases, the remnant appears shifted toward the low den-
sity region; see upper panels in Fig. 2 (see also Dohm-Palmer & Jones
1996).
Figure 3. Close-up view of the region marked with a box in
Fig. 2. The dark fingers mark the R-M instability. The mag-
netic field is described by the superimposed arrows, whose length
is proportional to the magnitude of the field vector.
magnetic field (cf. Fulbright & Reynolds 1990); in general, the
radio emission shows an axis of symmetry with low levels of
emission along it, and two bright limbs (arcs) on either side
(see also Gaensler 1998). This morphology is very similar to
that observed in BSNRs.
3.1. Obliquity angle dependence
For each of the models listed in Table 1, we synthesized the
synchrotron radio emission, considering each of the three cases
of variation of electron injection efficiency with shock obliq-
uity: quasi-parallel, quasi-perpendicular, and isotropic particle
injection. As an example, Fig. 4 shows the synchrotron radio
emission synthesized from the uniform ISM model DZ1 with
randomized internal magnetic field at t = 18 kyrs in each of
the three cases. We recall that for these uniform density cases,
we have adopted an injection efficiency independent of the
shock speed (b = 0, Sect. 2.2). All images are maps of total in-
tensity normalized to the maximum intensity of each map and
have a resolution of 400 beams per remnant diameter (DSNR).
The images are derived when the LoS is parallel to the average
direction of the unperturbed ambient magnetic field 〈B〉 (LoS
aligned with the x axis), or perpendicular both to 〈B〉 and to
the gradient of field strength (LoS along y), or parallel to the
gradient of field strength (LoS along z).
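To make the three injection prescriptions more concrete, the short Python sketch below weights a nominal injection efficiency by the shock-obliquity angle Θ_Bn and by the shock speed; the specific cos²/sin² forms, the reference speed v_ref, and the V_sh^(−b) scaling are illustrative assumptions, not necessarily the exact prescriptions adopted in the simulations.

    import numpy as np

    def injection_efficiency(theta_Bn, v_shock, case="quasi-perpendicular",
                             b=0.0, v_ref=1.0e3):
        """Relative electron injection efficiency as a function of the
        shock obliquity theta_Bn (rad) and shock speed; all functional
        forms here are illustrative assumptions."""
        if case == "quasi-parallel":
            obliquity = np.cos(theta_Bn) ** 2
        elif case == "quasi-perpendicular":
            obliquity = np.sin(theta_Bn) ** 2
        else:                                     # isotropic injection
            obliquity = np.ones_like(np.asarray(theta_Bn, dtype=float))
        return obliquity * (v_shock / v_ref) ** (-b)

    # efficiency around the rim for a uniform shock speed and b = 0
    theta = np.linspace(0.0, np.pi, 181)
    eff = injection_efficiency(theta, v_shock=1.0e3, case="quasi-parallel")

Only the relative weighting matters for this comparison, since all synthesized maps are normalized to their maximum intensity.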
The different particle injection models produce images that
can differ considerably in appearance. In particular, the quasi-
parallel case leads to morphologies of the remnant not repro-
duced by the other two cases: a center-brightened SNR when
the LoS is aligned with x (top left panel in Fig. 4), a BSNR
with two bright arcs slanted and converging on the side where
B field strength is higher when the LoS is along y (top center
Figure 4. Synchrotron radio emission (normalized to the maximum of each panel), at t = 18 kyrs, synthesized from model DZ1
assuming b = 0 (see text) and randomized internal magnetic field, when the LoS is aligned with the x (left), y (center), or z (right)
axis. The figure shows the quasi-parallel (top), isotropic (middle), and quasi-perpendicular (bottom) particle injection cases. The
color scale is linear and is given by the bar on the right. The directions of the average unperturbed ambient magnetic field, 〈B〉,
and of the magnetic field strength gradient, ∇|B |, are shown in the upper left and lower right corners of each panel, respectively.
panel), and a remnant with two symmetric bright spots located
between the center and the border of the remnant when the LoS
is along z (top right panel). Neither the center-brightened rem-
nant nor the double-peak structure, with no feature de-
scribable as a shell, seems to be observed in SNRs³. We found
analogous morphologies in all the models listed in Table 1, con-
sidering the quasi-parallel case. As extensively discussed by
Fulbright & Reynolds (1990) for models with uniform ambient
magnetic field and b = −2, we also conclude that the quasi-
parallel case leads to radio images unlike any observed SNR
(see also Kesteven & Caswell 1987).
3 Excluding filled center and composite SNRs, but these are due to
energy input from a central pulsar.
The isotropic case leads to remnant morphologies simi-
lar to those produced in the quasi-perpendicular case, although
the latter case shows deeper minima in the radio emission than
the former. When the LoS is aligned with x (middle left and
bottom left panels in Fig. 4) or with y (middle center and bot-
tom center panels), the remnants have one bright arc on the side
where the B strength is higher. When the LoS is aligned with z
(middle right and bottom right panels), the remnants have two
opposed arcs that appear perfectly symmetric. We found that
the isotropic and quasi-perpendicular cases lead to morpholo-
gies of the remnants similar to those observed.
Figure 5. Presentation as in Fig. 4 for model GZ1 with randomized internal magnetic field, assuming quasi-perpendicular particle
injection and b = −2 (top panels), b = 0 (middle) and b = 2 (bottom). The directions of the average unperturbed ambient magnetic
field, 〈B〉, and of the ambient plasma density gradient, ∇ρ, are shown in the upper left and lower right corners of each panel,
respectively.
3.2. Non-uniform ISM: dependence on parameter b
For models describing the SNR expansion through a non-
uniform ISM (models GZ1, GZ2, GX1, GX2), we derived the
synthetic radio maps considering three alternatives for the de-
pendence of the injection efficiency on the shock speed, namely
b = −2, 0, 2 (see Sect. 2.2). As an example, Fig. 5 shows the
synthetic maps derived from model GZ1 with randomized in-
ternal magnetic field, assuming quasi-perpendicular particle in-
jection, and considering b = −2 (top panels), b = 0 (middle)
and b = 2 (bottom).
When the LoS is not aligned with the density gradient, the
radio images show asymmetric morphologies of the remnants.
In this case, the main effect of varying b is to change the de-
gree of asymmetry observed in the radio maps. In the example
shown in Fig. 5, the density gradient is aligned with the z axis
and asymmetric morphologies are produced when the LoS is
aligned with x (left panels) or with y (center panels). In all the
cases, the remnant is brighter where the mass density is higher.
Moreover, the degree of asymmetry increases with in-
creasing value of b.
The reason for this behavior lies in the balance be-
tween the roles of the shock velocity and of the density in chang-
ing the injection efficiency. Consider, as an example, the top
left panel in Fig. 5: the increase of the shock velocity in the
north (due to the fall of the ambient density) leads to an increase of
the brightness there (due to the rise of the injection efficiency) that
partially balances the increase of the brightness in the south
due to the higher density of the ISM. On the other hand, for the model
shown in the bottom left panel in Fig. 5, the fraction of accel-
Figure 6. Synchrotron radio emission (normalized to the maximum of each panel), at t = 18 kyrs, synthesized from models
assuming a gradient of ambient plasma density (panels A and D; with b = 2) or of ambient magnetic field strength (panels B
and E; with b = 0) when the LoS is aligned with the y axis. All the models assume quasi-perpendicular particle injection. The
directions of the average unperturbed ambient magnetic field, 〈B〉, and of the plasma density or magnetic field strength gradient,
are shown in the upper left and lower right corners of each panel, respectively. The right panels show two examples of radio maps
(data adapted from Whiteoak & Green 1996 and Gaensler 1998; the arrows point in the north direction) collected for the SNRs
G338.1+0.4 (panel C) and G296.5+10.0 (panel F). The color scale is linear and is given by the bar on the right.
erated electrons increases in the south due to both the rise of
density and the decrease of the shock velocity.
When the LoS is aligned with the density gradient, the radio
images are symmetric. In the example shown in Fig. 5, this
corresponds to the maps derived when the LoS is along z (right
panels); the remnants are characterized by two opposed arcs
with identical surface brightness.
3.3. Morphology
Fig. 6 shows the radio emission maps, at a time of 18 kyrs, syn-
thesized from models with a gradient of ambient plasma den-
sity (panels A and D; assuming b = 2) and of ambient B field
strength (panels B and E; assuming b = 0). All the models as-
sume quasi-perpendicular particle injection (the isotropic case
produces radio maps with similar morphologies and the quasi-
parallel case is discussed later) and randomized internal mag-
netic field. The viewing angle is perpendicular both to the av-
erage direction of the unperturbed ambient magnetic field, 〈B〉,
(directed along the x axis) and to the gradients of density or field
strength (directed either along z, panels A and B, or x, panels D
and E). The right panels show, as examples, the radio maps of
the SNRs G338.1+0.4 (panel C, data from Whiteoak & Green
1996) and G296.5+10.0 (panel F, from Gaensler 1998).
In the quasi-perpendicular case discussed here, the max-
imum synchrotron emissivity is reached where the magnetic
field is strongly compressed. This configuration has been re-
ferred as “equatorial belt” (e.g. Rothenflug et al. 2004); 〈B〉
runs between the two opposed arcs (along the x axis). We found
that, when the density or the magnetic field strength gradient is
perpendicular to the field itself, the morphology of the radio
map strongly depends on the viewing angle. In these cases, the
two opposed arcs appear perfectly symmetric when the LoS
is aligned with the gradient (see, for instance, the right pan-
els in Fig. 5), otherwise the two arcs can have very different
radio brightness, leading to strongly asymmetric BSNRs (see
panels A and B in Fig. 6). In the former case (LoS aligned
with the gradient), the remnant is characterized by two axes of
symmetry: one between the two symmetric arcs and the other
perpendicular to the two. In models with strong magnetic field
strength gradients (DZ2; B varies by a factor ∼ 60 over 60 pc),
we found that the radio images are center-brightened when the
LoS is aligned with the gradient (figure not reported). The fact
that center-brightened remnants are not observed suggests that
the external B varies moderately in the neighborhood of the
remnants.
In the case of asymmetry, the gradient is always perpendicular
to the arcs, and the brightest arc is located where either mag-
netic field strength or plasma density is higher (see panels A
and B in Fig. 6), since the synchrotron emission depends on
the plasma density, on the pressure, and on the field strength
(see Eqs. 6 and 8); in this case, there is only one axis of sym-
metry oriented along the density or B gradient. When the LoS
is parallel to 〈B〉 (along x in our models), the radio maps show
a shell structure with a maximum intensity located where mag-
netic field strength or plasma density is higher (see left pan-
els in Fig. 4 for isotropic and quasi-perpendicular cases and
left panels in Fig. 5). Our simulations show that, when the
density or the magnetic field strength gradient is perpendicu-
lar to the field itself, remnants with a monopolar morphology
can be observed at LoS not aligned with the gradient (see also
Reynolds & Fulbright 1990). Examples of observed monopolar
remnants are G338.1+0.4 (see panel C in Fig. 6) or G327.4+1.0
or G341.9-0.3.
When the density or B field strength gradient is parallel to
〈B〉 (panels D and E in Fig. 6) and the LoS lies in the plane per-
pendicular to 〈B〉, the morphology of the radio map does not
depend on the viewing angle and the two opposed arcs have
the same radio brightness. In these cases, however, there is only
one axis of symmetry and the two arcs appear slanted and con-
verging on the side where field strength or plasma density is
higher; again, the symmetry axis is aligned with the density
or B strength gradient. Examples of this kind of objects are
G296.5+10.0 (see panel F in Fig. 6) or G332.4-0.4 or SN1006
(which is, however, much younger than the simulated SNRs).
When the external magnetic field is parallel to the LoS, because
the system is symmetric about the magnetic field, the remnant
is axially symmetric and the radio maps show a complete radio
shell at constant intensity.
In the quasi-parallel case, 〈B〉 runs across the arcs. This
configuration has been referred to as “polar caps” and it has been
invoked for the SN1006 remnant (Rothenflug et al. 2004). The
quasi-parallel case, apart from the center-brightened morphol-
ogy discussed in Sect. 3.1, can also produce remnant morpholo-
gies similar to those shown in Fig. 6. As examples, Fig. 7
shows the radio emission maps obtained in the cases dis-
cussed in Fig. 6 but assuming quasi-parallel instead of quasi-
perpendicular particle injection. Again, the viewing angle is
perpendicular both to 〈B〉 (directed along the x axis) and to the
gradients of density or field strength (directed either along z, pan-
els A and B, or x, panels C and D). In the quasi-parallel case,
remnants with a bright radio limb are produced if the gradi-
ent of ambient density or of ambient B field strength is par-
allel to 〈B〉 (instead of perpendicular to 〈B〉 as in the quasi-
perpendicular case), whereas slanting similar radio arcs are ob-
tained if the gradient is perpendicular to 〈B〉 (instead of parallel
as in the quasi-perpendicular case).
4. Discussion
Our simulations show that asymmetric BSNRs are explained
if the ambient medium is characterized by gradients either of
density or of ambient magnetic field strength: the two opposed
arcs have different surface brightness if the gradient runs across
the arcs (see panels A and B in Fig. 6, and panels C and D in
Fig. 7), whereas the two arcs appear slanted and converging
on one side if the gradient runs between them (see panels D
and E in Fig. 6 and panels A and B in Fig. 7). In all the cases
(including the three alternatives for the particle injection), the
symmetry axis of the remnant is always aligned with the gradient.
From the radio maps, we derived the azimuthal intensity
profiles: we first find the point on the map where the intensity
is maximum; then the contour of points at the same distance
from the center of the remnant as the point of maximum in-
tensity defines the azimuthal radio intensity profile. Following
Fulbright & Reynolds (1990), we quantify the degree of “bipo-
larity” of the remnants by using the so-called azimuthal inten-
sity ratio A, i.e. the ratio of maximum to minimum intensity
derived from the azimuthal intensity profiles. In addition, we
quantify the degree of asymmetry of the BSNRs by using a
measure we call the azimuthal intensity ratio Rmax ≥ 1, i.e.
the ratio of the maxima of intensity of the two limbs as de-
rived from the azimuthal intensity profiles, and the azimuthal
distance θD, i.e. the distance in deg of the two maxima. In the
case of symmetric BSNRs, Rmax = 1 and θD = 180°. As al-
ready noted by Fulbright & Reynolds (1990), the parameter A
depends on the spatial resolution of the radio maps and on the
aspect angle (i.e. the angle between the LoS and the unper-
turbed magnetic field); moreover we note that, in real observa-
tions, the measure of A gives a lower limit to its real value if the
background is not accurately taken into account. On the other
hand, the parameters Rmax and θD have a much less critical de-
pendency on these factors and, therefore, they may provide a
more robust diagnostic in the comparison between models and
observations.
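As an illustration of how A, Rmax, and θD can be extracted in practice, the following Python sketch samples the azimuthal intensity profile of a 2-D total-intensity map along the circle through the brightest pixel and derives the three parameters; the simple splitting of the profile into two lobes at its global minimum is an assumption of this sketch, not the exact procedure used for Fig. 8.

    import numpy as np

    def azimuthal_diagnostics(image, center, n_phi=360):
        """A, Rmax and thetaD from a 2-D total-intensity map.
        The profile is sampled on the circle through the brightest pixel;
        the two limbs are separated at the profile minimum (an assumption)."""
        ny, nx = image.shape
        yc, xc = center
        iy, ix = np.unravel_index(np.argmax(image), image.shape)
        r0 = np.hypot(ix - xc, iy - yc)          # radius of maximum intensity
        phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
        xs = np.clip(np.round(xc + r0 * np.cos(phi)).astype(int), 0, nx - 1)
        ys = np.clip(np.round(yc + r0 * np.sin(phi)).astype(int), 0, ny - 1)
        profile = image[ys, xs]                  # azimuthal intensity profile
        A = profile.max() / max(profile.min(), 1e-12)
        rolled = np.roll(profile, -np.argmin(profile))
        i1 = np.argmax(rolled[: n_phi // 2])
        i2 = np.argmax(rolled[n_phi // 2 :]) + n_phi // 2
        hi_peak, lo_peak = sorted([rolled[i1], rolled[i2]], reverse=True)
        Rmax = hi_peak / max(lo_peak, 1e-12)
        thetaD = abs(i2 - i1) * 360.0 / n_phi
        return A, Rmax, min(thetaD, 360.0 - thetaD)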
Fig. 8 shows the values of A, Rmax, and θD derived for all
the cases examined in this paper, considering the LoS aligned
with the y axis, and radio maps with a resolution of 25 beams
per remnant diameter⁴ (DSNR). Note that our choice of the
LoS aligned with y (aspect angle φ = 90°) implies that the
values of A in Fig. 8 are upper limits, A being maximum at
φ = 90° and minimum at φ = 0° (see Fulbright & Reynolds
1990). The three models of particle injection (isotropic, quasi-
perpendicular and quasi-parallel) lead to different values of A.
In the isotropic and quasi-perpendicular cases, most of the val-
ues of A range between 5 and 20 (for model DX2, A is even
larger than 100); in the quasi-parallel case, the values of A are
larger than 500.
We found that, in general, a gradient of the ambient mag-
netic field strength leads to remnant morphologies similar to
those induced by a gradient of plasma density (compare, for
instance, panel A with B and panel D with E in Fig. 6). On the
other hand, if b < 0 in GX and GZ models, ambient B field
4 After the radio maps are calculated, they are convolved with a
Gaussian function with σ corresponding to the required resolution.
Figure 7. Presentation as in Fig. 6, as-
suming quasi-parallel instead of quasi-
perpendicular particle injection.
gradients are more effective in determining the morphology of
asymmetric BSNRs. This is seen in a more quantitative form
in Fig. 8. DX and DZ models give Rmax values higher and θD
values lower than GX and GZ models with b < 0: a modest
gradient of the magnetic field (models DX1 and DZ1) gives
a value of Rmax higher or θD lower than the two models with
strong density gradients (models GX2 and GZ2) and b < 0.
Fig. 8 also shows that, in models with a density gradient,
the degree of asymmetry of the remnant increases with increas-
ing value of b; the GX and GZ models with b > 0 give values
of Rmax and θD comparable with (or, in the case of Rmax, even
larger than) those derived from DX and DZ models. In the case of
quasi-parallel particle injection, for remnants with converging
similar arcs a strong gradient of density perpen-
dicular to B and b ≥ 0 are necessary (compare models GZ1 and GZ2 in the
lower panel in Fig. 8) to give values of θD comparable to those
obtained with a moderate gradient of ambient B field strength
perpendicular to B (see model DZ1 in Fig. 8).
In order to compare our model predictions with observa-
tions of real BSNRs, we have selected 11 SNR shells which
show one or two clear lobes of emission in archive total in-
tensity radio images, separated by a region of minima. We
have discarded all those cases in which several point-like or
extended sources appear superimposed on the bright limbs, or
other cases in which the location of maximum or minimum
emission around the shell is difficult to derive. Unlike other lists
of BSNRs published in the literature (e.g. Kesteven & Caswell
1987; Fulbright & Reynolds 1990; Gaensler 1998), here we fo-
cus on a reliable measure of the parameters A, Rmax and θD; we
avoid, therefore, patchy and irregular limbs, as in the case of
G320.4-01.2 of Gaensler (1998). Moreover, we are obviously
not discarding remnants which have constraints on A, Rmax or
θD (e.g. Fulbright & Reynolds 1990 considered only cases with
Rmax < 2), and we are considering remnants observed with a
resolution greater than 10 beams per remnant diameter. Since
in our models we follow the remnant evolution during the adi-
abatic phase, we also need to discard objects that are clearly
in the radiative phase. Unfortunately, for most of the objects
selected, there is no indication of their evolutionary stage in the lit-
erature. Assuming that the remnant expands in a medium with
particle number density nism ≲ 0.3 cm⁻³, the shock radius de-
rived from the Sedov solution at time ttr (i.e. at the transition
time from the adiabatic to the radiative phase; see Eq. 5) is

rtr = 19 E51^(5/17) nism^(−7/17) pc ≳ 35 pc ,   (9)
where we have assumed that E51 = 1.5. Therefore, we only
considered remnants with radius rsnr < 35 pc (i.e. with size <
70 pc) that are, most likely, in the adiabatic phase. Our list does
not pretend to be complete or representative of the class, and
it is compiled to derive the observed values of the parameters
A, Rmax and θD with the lowest uncertainties. For this reason,
we have considered remnants for which a total intensity radio
image in digital format is available. Actually, in most of the
cases, we have used the 843 MHz data of the MOST supernova
remnant catalogue (Whiteoak & Green 1996).
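As a quick numerical check of this selection criterion, the short Python snippet below evaluates Eq. 9 for the adopted E51 and a few ambient densities; the 5/17 exponent on E51 follows the standard Sedov scaling and is assumed here.

    # Eq. 9: r_tr = 19 * E_51^(5/17) * n_ism^(-7/17) pc, with E_51 = 1.5
    E_51 = 1.5
    for n_ism in (0.1, 0.3, 1.0):                # ambient density in cm^-3
        r_tr = 19.0 * E_51 ** (5.0 / 17.0) * n_ism ** (-7.0 / 17.0)
        print(f"n_ism = {n_ism:4.1f} cm^-3  ->  r_tr = {r_tr:5.1f} pc")
    # for n_ism <= 0.3 cm^-3 this gives r_tr >= 35 pc, so shells smaller
    # than ~35 pc in radius are most likely still in the adiabatic phase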
Figure 8. Azimuthal intensity ratio A (i.e.
the ratio of maximum to minimum inten-
sity around the shell of emission - see text;
upper panel), azimuthal intensity ratio Rmax
(i.e. the ratio of the maxima of intensity
of the two limbs around the shell; middle
panel), and azimuthal distance θD (i.e. the
distance in deg of the two maxima of in-
tensity around the shell; lower panel) for
all the cases examined, considering the LoS
aligned with the y axis and a spatial res-
olution of 25 beams per remnant diame-
ter, DSNR. Black crosses: isotropic; red tri-
angles: quasi-perpendicular; blue diamonds:
quasi-parallel.
Our list is reported in Table 2. We have separated evolved
and young SNRs. While the young SNRs listed in Table 2 have
very reliable measurements of A, Rmax and θD and a good record
in the literature, making them very good candidates to test the diag-
nostic power of our model, we stress that the models we are
considering in this paper are focused on evolved SNRs; we
leave the discussion about young SNRs to a separate work. For
each object in Table 2, we show the apparent size, the distance
(from dedicated studies where possible, otherwise from the re-
vised Σ − D relation of Case & Bhattacharya 1998; see their
paper for caveats on usage of the Σ − D relation to derive SNR
distance), the real size, the resolution of the observation, and
the parameters A, Rmax, and θD we have introduced here.
Table 2 shows that most of the 11 remnants have A ≤ 10,
i.e. values consistent with those derived in Fig. 8 for the three
alternatives for the particle injection (recall that the values
shown in the figure have to be considered as upper limits).
Four remnants show high values of A (10 < A < 100) that
are difficult to explain in terms of the isotropic or the quasi-
perpendicular injection models with b < 0 unless the remnant
expands through a non-uniform ambient magnetic field (see
models DX2, and DZ2 in Fig. 8). In the light of these consid-
erations, we cannot exclude a priori any of the three alternative
models for the particle injection.
Four of the 11 objects in Table 2 show values of Rmax ≥ 2,
pointing out that, in these objects, the bipolar morphology is
asymmetric with the two radio limbs differing significantly in
intensity. An example of this kind of remnants is G338.1+0.4
(see panel C in Fig. 6). In the light of our results, its morphol-
ogy can be explained if a gradient of ambient density or of
ambient magnetic field strength is either perpendicular to the
average ambient magnetic field, 〈B〉, in the isotropic and quasi-
perpendicular cases or parallel to 〈B〉 in the quasi-parallel case.
It is worth noting that revealing such a gradient from the obser-
vations may be a powerful diagnostic to discriminate among
the alternative particle injection models, producing real ad-
vances in the understanding of the nonthermal physics of strong
shock waves.
An extreme example of a monopolar remnant with a bright
radio limb is G327.4+1.0 whose value of Rmax is larger than
10. Fig. 8 shows that high values of Rmax can be easily ex-
plained as due to non-uniform ambient magnetic field strength
or to non-uniform ambient density if b > 0. We suggest that the
morphology of G327.4+1.0 may give some hints on the value
of b (and, therefore, on the dependence of the injection effi-
ciency on the shock velocity) if the observations show that the
asymmetry is due to a non-uniform ambient medium through
which the remnant expands.
Table 2. List of barrel-shaped SNR shells for which a measurement of A, Rmax, and θD is presented for comparison with our
models.
Remnantᵃ   Flux (Jy)   size (arcmin)   d (kpc)   size (pc)   Res.ᵇ (beams/DSNR)   A   Rmax   θD (deg)   Ref./notes
Evolved Remnants
G296.5+10.0 48 90 × 65 2.1 55 × 40 108 > 11 1.2 85 1
G299.6-0.5 1.1 13 × 13 18.1 68 18 6 2 160 2
G304.6+0.1 18 8 × 8 7.9 18 11 20 1.5 120 3
G327.4+1.0 2.1 14 × 13 13.9 56 19 > 10 > 10 ND 2,4
G332.0+0.2 8.9 12 × 12 < 20 < 70 17 5 1 145 2,7
G338.1+0.4 3.8 16 × 14 9.9 46 × 40 21 3 2 > 120 2
G341.9-0.3 2.7 7 × 7 14.0 28 10 8 3 170 2
G346.6-0.2 8.7 11 × 10 8.2 26 × 23 15 2 1.1 110 2,7
G351.7+0.8 11 18 × 14 6.7 35 × 27 22 2 1.6 130 2
Young Remnants
G327.6+14.6 19 30 × 30 2.2 19 × 19 42 22 1 127 5
G332.4-0.4 34 11 × 10 3.1 10 × 9 15 7 1.6 98 6
References and notes. - (1) A.k.a. PKS 1209-51/52. Age: 3–20 kyrs, Roger et al. (1988). Distance from Giacani et al. (2000). (2) Distance
derived by Case & Bhattacharya (1998) using a revised Σ−D relation. (3) Distance from Caswell et al. (1975). (4) This shell has only one limb
(“monopolar” according to the definition of Fulbright & Reynolds 1990). A and Rmax are lower limits and no θD is derived. (5) A.k.a. SN1006.
Distance from Winkler et al. (2003). (6) A.k.a. RCW103. Distance from Reynoso et al. (2004). (7) Two maxima have been found in one lobe.
θD is the average of the two.
a All the data are from the MOST supernova remnant catalogue (Whiteoak & Green 1996), except where noted.
b Spatial resolution of the observation in beams per remnant diameter.
In Table 2, six of the 11 remnants (including the two young
remnants SN1006 and RCW103) have values of θD < 140°,
pointing out that, in these objects, the two bright radio limbs
appear slanted and converging on one side. An example of this
class of objects is G296.5+10.0 (a.k.a PKS 1209-51/52) shown
in panel F in Fig. 6. In this case, the value of θD ∼ 85° de-
rived from the observations may be easily explained as due
to a gradient of magnetic field strength either parallel to 〈B〉
in the isotropic and quasi-perpendicular cases or perpendicu-
lar to 〈B〉 in the quasi-parallel case. Models with a gradient
of ambient density cannot explain the low values of θD found
for G296.5+10.0 unless the gradients are strong (the density
should change by a factor 60 over 60 pc) and the dependence
of the injection efficiency on the shock velocity gives⁵ b ≥ 2.
5. Conclusions
Our findings have significant implications for the diagnostics
and lead to several useful conclusions:
1. The three different particle injection models (namely,
quasi-parallel, quasi-perpendicular and isotropic dependence
of injection efficiency on shock obliquity) can produce con-
siderably different images (see Fig. 4). The isotropic and quasi-
perpendicular cases lead to radio images similar to those ob-
5 Large positive values of b do not necessarily mean an increas-
ing fraction of shock energy going into relativistic particles as the
shock slows down, because a decelerating shock accelerates particles to
a smaller Emax, namely the maximum energy at which the electrons are
accelerated.
served. The quasi-parallel case may produce radio images unlike any
observed SNR (center-brightened or with a double-peak struc-
ture not describable as a shell). This is in agreement with the
findings of Fulbright & Reynolds (1990).
2. In models with gradients of the ambient density, the
dependence of the injection efficiency on the shock velocity
(through the parameter b defined in Sect. 2.2) affects the degree
of asymmetry of the radio images: the asymmetry increases
with increasing value of b.
3. Small variations of the ambient magnetic field lead to
significant asymmetries in the morphology of BSNRs (see
Figs. 6 and 7). Therefore, we conclude that the close similar-
ity of the radio brightness of the opposed limbs of a BSNR
(i.e. Rmax ≈ 1 and θD ≈ 180°) is evidence of uniform ambient
B field where the remnant expands.
4. Variations of the ambient density lead to asymmetries
of the remnant with extent comparable to that caused by non-
uniform ambient magnetic field if b = 2.
5. Strongly asymmetric BSNRs (i.e. Rmax ≫ 1 or θD ≪
180°) imply either moderate variations of B or strong (moder-
ate) variations of the ISM density if b < 2 (b ≥ 2) as in the
case, for instance, of interaction with a giant molecular cloud.
6. BSNRs with different intensities of the emission of the
radio arcs (i.e. Rmax > 1) can be produced by models with a
gradient of density or of magnetic field strength perpendicular
to the arc (upper panels in Fig. 6 and lower panels in Fig. 7),
and the brightest arc is in the region of higher plasma density
or higher magnetic field strength.
7. Remnants with two slanting similar arcs (i.e. θD < 180°)
can be produced by models with a gradient of density or of
magnetic field strength running centered between the two arcs
(lower panels in Fig. 6 and upper panels in Fig. 7), and the
region of convergence is where either the plasma density or the
magnetic field strength is higher.
8. In all the cases examined, the symmetry axis of the rem-
nant is always aligned with the gradient of density or of mag-
netic field.
We found that the degree of ordering of the magnetic field
downstream of the shock does not affect significantly the asym-
metries induced by gradients either of ambient plasma density
or of ambient magnetic field strength; thus our conclusions, de-
rived in the case of disordered magnetic field, do not change in
the case of ordered magnetic field.
We defined useful model parameters to quantify the degree
of asymmetry of the remnants. These parameters may provide a
powerful diagnostic in the comparison between models and ob-
servations, as we have shown in a few cases drawn from a ran-
domly selected sample of BSNRs presented in Table 2. For in-
stance, if the density of the external medium is known by other
means (e.g. thermal X-rays, HI and CO maps, etc.), BSNRs can
be very useful to investigate the variation of the efficiency of
electron injection with the angle between the shock normal and
the ambient magnetic field or to investigate the dependence of
the injection efficiency on the shock velocity. Alternatively,
BSNRs can be used as probes to trace the local configuration
of the galactic magnetic field if the dependence of the injection
efficiency on the obliquity is known.
It is worth emphasizing that our model follows the evolu-
tion of the remnant during the adiabatic phase and, therefore,
its applicability is limited to this evolutionary stage. In the
radiative phase, the high degree of compression produced by
radiative shocks leads to an increase of the radio brightness due
to compression of the ambient magnetic field and electrons. Since
our model neglects radiative cooling, it is limited to rela-
tively small compression ratios and, therefore, it is not able to
simulate this mechanism of limb brightening.
It will be interesting to expand the present study, consider-
ing the detailed comparison of model results with observations.
This may lead to a major advance in the study of interactions
between the magnetized ISM and the whole galactic SNR pop-
ulation (not only BSNRs), since the mechanisms at work in the
BSNRs are also valid for SNRs of more complex morphology.
Acknowledgements. We thank the referee for constructive and help-
ful criticism. The software used in this work was in part devel-
oped by the DOE-supported ASC/Alliance Center for Astrophysical
Thermonuclear Flashes at the University of Chicago. The simula-
tions have been executed at CINECA (Bologna, Italy) in the frame-
work of the INAF-CINECA agreement. This work was supported by
Ministero dell’Istruzione, dell’Università e della Ricerca, by Istituto
Nazionale di Astrofisica, and by Agenzia Spaziale Italiana (ASI/INAF
I/023/05/0).
Appendix A: Derivation of Eq. (8)
We follow Reynolds (1998) in the description of the evolution of the elec-
tron distribution. His approach is extended here to the possibility to
deal with a non-uniform ISM (cf. Petruk 2006). The fluid element a ≡ R(ti)
was shocked at time ti, where R is the radius of the shock, and a is the
Lagrangian coordinate. At that time the electron distribution at the
shock was
N(Ei, a, ti) = Ks(a, ti) Ei^(−ζ) ,   (A.1)
where Ei is the electron energy at time ti, Ks is the normalization of
the electron distribution immediately after the shock (in the following,
index “s” refers to the immediately post-shock values), and ζ is the
power law index. Since we are interested in radio emission, we have
to account only for energy losses of electrons due to the adiabatic
expansion (Reynolds 1998):

(1/E) dE/dt = (1/3ρ) dρ/dt ,   (A.2)

where ρ is the mass density, so the energy varies as

E = Ei [ρ(a, t)/ρs(a, ti)]^(1/3) .   (A.3)
The conservation law for the number of particles per unit volume per
unit energy interval,

N(E, a, t) = N(Ei, a, ti) a² da dEi / (σ r² dr dE) ,   (A.4)

where σ is the shock compression ratio and r is the Eulerian coor-
dinate, together with the continuity equation ρo(a) a² da = ρ(a, t) r² dr
(index “o” refers to the pre-shock values) and the derivative

dEi/dE = [ρ(a, t)/ρs(a, ti)]^(−1/3) ,   (A.5)

implies that downstream

N(E, a, t) = Ks(a, ti) E^(−ζ) [ρ(a, t)/ρs(a, ti)]^((ζ+2)/3) .   (A.6)
If Ks ∝ ρs Vsh(t)^(−b), where Vsh(t) is the shock velocity and ρs is the
immediately post-shock value of density, then

Ks(a, ti) = Ks(R, t) [ρo(a)/ρo(R)] [Vsh(t)/Vsh(ti)]^b .   (A.7)
Therefore, the distribution of relativistic electrons follows

K(a, t)/Ks(R, t) = N(E, a, t)/N(E, R, t)
  = [ρo(a)/ρo(R)] [Vsh(t)/Vsh(ti)]^b [ρ(a, t)/ρs(a, ti)]^((ζ+2)/3) .   (A.8)
Now we can substitute into Eq. A.8 the ratio of the shock velocities,
which comes from the expression (Hnatyk & Petruk 1999)

P(a, t)/Ps(R, t) = [ρo(a)/ρo(R)]^(−2/3) [Vsh(ti)/Vsh(t)]² [ρ(a, t)/ρs(R, t)]^(5/3) .   (A.9)
Thus, finally

K(a, t)/Ks(R, t) = [P(a, t)/Ps(R, t)]^(−b/2) [ρo(a)/ρo(R)]^(−(b+ζ−1)/3) [ρ(a, t)/ρs(R, t)]^(5b/6+(ζ+2)/3) .   (A.10)
This formula may easily be used to calculate the profile of K(a) for
known P(a) and ρ(a) in the case of the radial flow of fluid. In the case
when mixing is allowed, the position R should correspond to the same
part of the shock which was at a at time ti.
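As a minimal illustration of how Eq. A.10 can be applied, the Python sketch below evaluates K(a, t)/Ks(R, t) from radial profiles of the normalized pressure and densities; the profile arrays used in the example are placeholders, to be replaced by the actual hydrodynamic solution.

    import numpy as np

    def K_ratio(P_over_Ps, rho_o_ratio, rho_over_rhos, b, zeta):
        """Eq. A.10: normalization K(a,t)/Ks(R,t) of the downstream
        electron distribution.
        P_over_Ps     : P(a,t)/Ps(R,t)
        rho_o_ratio   : rho_o(a)/rho_o(R)
        rho_over_rhos : rho(a,t)/rho_s(R,t)
        b, zeta       : injection and power-law indices"""
        return (P_over_Ps ** (-b / 2.0)
                * rho_o_ratio ** (-(b + zeta - 1.0) / 3.0)
                * rho_over_rhos ** (5.0 * b / 6.0 + (zeta + 2.0) / 3.0))

    # placeholder radial profiles (not an actual simulation output)
    a = np.linspace(0.6, 1.0, 50)                # Lagrangian coordinate / R
    K = K_ratio(P_over_Ps=a ** 2, rho_o_ratio=np.ones_like(a),
                rho_over_rhos=a ** 6, b=0.0, zeta=2.0)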
References
Balsara, D., Benjamin, R. A., & Cox, D. P. 2001, ApJ, 563, 800
Bell, A. R. & Lucek, S. G. 2001, MNRAS, 321, 433
Blondin, J. M., Wright, E. B., Borkowski, K. J., & Reynolds, S. P.
1998, ApJ, 500, 342
Case, G. L. & Bhattacharya, D. 1998, ApJ, 504, 761
Caswell, J. L., Murray, J. D., Roger, R. S., Cole, D. J., & Cooke, D. J.
1975, A&A, 45, 239
Dickel, J. R., van Breugel, W. J. M., & Strom, R. G. 1991, AJ, 101,
Dohm-Palmer, R. C. & Jones, T. W. 1996, ApJ, 471, 279
Ellison, D. C., Baring, M. G., & Jones, F. C. 1995, ApJ, 453, 873
Fryxell, B., Olson, K., Ricker, P., et al. 2000, ApJS, 131, 273
Fulbright, M. S. & Reynolds, S. P. 1990, ApJ, 357, 591
Gaensler, B. M. 1998, ApJ, 493, 781
Giacani, E. B., Dubner, G. M., Green, A. J., Goss, W. M., & Gaensler,
B. M. 2000, AJ, 119, 281
Ginzburg, V. L. & Syrovatskii, S. I. 1965, ARA&A, 3, 297
Hnatyk, B. & Petruk, O. 1999, A&A, 344, 295
Jokipii, J. R. 1987, ApJ, 313, 842
Jun, B.-I. & Norman, M. L. 1996, ApJ, 472, 245
Kane, J., Drake, R. P., & Remington, B. A. 1999, ApJ, 511, 335
Kesteven, M. J. & Caswell, J. L. 1987, A&A, 183, 118
Leckband, J. A., Spangler, S. R., & Cairns, I. H. 1989, ApJ, 338, 963
Löhner, R. 1987, Comp. Meth. Appl. Mech. Eng., 61, 323
Lucek, S. G. & Bell, A. R. 2000, MNRAS, 314, 65
Mac Low, M.-M. & Klessen, R. S. 2004, Reviews of Modern Physics,
76, 125
MacNeice, P., Olson, K. M., Mobarry, C., de Fainchtein, R., & Packer,
C. 2000, Comp. Phys. Comm., 126, 330
Mineshige, S. & Shibata, K. 1990, ApJ, 355, L47
Ostrowski, M. 1988, MNRAS, 233, 257
Petruk, O. 2005, J. Phys. Studies, 9, 364
Petruk, O. 2006, A&A, 460, 375
Petruk, O. & Bandiera, R. 2006, J. Phys. Studies, 10, 66
Reynolds, P. S. & Fulbright, S. M. 1990, in International Cosmic Ray
Conference, 72
Reynolds, S. P. 1998, ApJ, 493, 375
Reynolds, S. P. & Gilmore, D. M. 1993, AJ, 106, 272
Reynoso, E. M., Green, A. J., Johnston, S., et al. 2004, Publications of
the Astronomical Society of Australia, 21, 82
Roger, R. S., Milne, D. K., Kesteven, M. J., Wellington, K. J., &
Haynes, R. F. 1988, ApJ, 332, 940
Rothenflug, R., Ballet, J., Dubner, G., et al. 2004, A&A, 425, 121
Truelove, J. K. & McKee, C. F. 1999, ApJS, 120, 299
Whiteoak, J. B. Z. & Green, A. J. 1996, A&AS, 118, 329
Winkler, P. F., Gupta, G., & Long, K. S. 2003, ApJ, 585, 324
Nonstationary pattern in unsynchronizable complex networks (arXiv:0704.0892)
Xingang Wang,1, 2 Meng Zhan,3 Shuguang Guan,1, 2 and Choy Heng Lai4, 2
1Temasek Laboratories, National University of Singapore, 117508 Singapore
2Beijing-Hong Kong-Singapore Joint Centre for Nonlinear & Complex Systems (Singapore),
National University of Singapore, Kent Ridge, 119260 Singapore
3Wuhan Institute of Physics and Mathematics, Chinese Academy of Sciences, Wuhan 430071, China
4Department of Physics, National University of Singapore, 117542 Singapore
(Dated: November 12, 2018)
Pattern formation and evolution in unsynchronizable complex networks are investigated. Due to the asym-
metric topology, the synchronous patterns formed in complex networks are irregular and nonstationary. For
coupling strengths immediately outside the synchronizable region, the typical phenomenon is on-off inter-
mittency of the system dynamics. The patterns appearing in this process are characterized by the coexistence of a
giant cluster, which comprises most of the nodes, and a small number of small clusters. The pattern evolution
is characterized by the giant cluster irregularly absorbing or emitting the small clusters. As the coupling strength
moves away from the synchronization bifurcation point, the giant cluster is gradually dissolved into a number of
small clusters, and the system dynamics is characterized by the integration and separation of the small clusters.
Dynamical mechanisms and statistical properties of the nonstationary pattern evolution are analyzed, and some
new scalings are revealed. Remarkably, it is found that the few active nodes, which escape
from the giant cluster with a high frequency, are independent of the coupling strength but are sensitive to the
bifurcation type. We hope our findings about nonstationary patterns give additional understanding of
the dynamics of complex systems and have implications for real problems where systems maintain their
normal functions only in the unsynchronizable state.
PACS numbers: 89.75.-k, 05.45.Xt
I. INTRODUCTION
Synchronization of complex networks has aroused much in-
terest in nonlinear science since the discoveries of the small-
world and scale-free properties in many real and man-made
systems [1, 2]. In this context, one important issue is to ex-
plore the inter-dependent relationship between the collective
behaviors of complex systems and their underlying topolo-
gies. In particular, much effort has been devoted to the con-
struction of optimal networks, and a number of factors which
have important effects on the synchronizability of complex
networks have been gradually disclosed. It is now known that
random networks, due to their small average distances, are
generally more synchronizable than regular networks [3, 4];
and scale-free networks, with weighted and asymmetric cou-
plings, can be more synchronizable than homogeneous net-
works [5, 6]. In these studies, the standard method employed
for synchronization analysis is the master stability function
(MSF), where the network synchronizability is estimated by
an eigenratio calculated from the coupling matrix, and a system
with a smaller eigenratio is believed to be more synchro-
nizable than one with a larger eigenratio [7]. Inspired by this, to
improve the network synchronizability, the only task seems to
be modifying the coupling matrix so as to decrease the eigen-
ratio, either by changing the network topology [4] or by ad-
justing the coupling scheme [5, 6].
The MSF method, while bringing great convenience to the
analysis, overlooks the temporal, local properties of the sys-
tem and reflects only partial information about the system
dynamics. Specifically, from the MSF we only know ultimately
whether the network is globally synchronizable or unsynchro-
nizable, but we do not know how the global synchronization is
reached if the network is synchronizable, or what the pat-
tern is and how it evolves if the network is unsynchronizable.
These evolution details, or the transient behavior in system
development, contain rich information about the system dy-
namics and may give additional insights into the organization
of complex systems. For instance, recent studies of
synchronization transitions have shown that, in the unsynchro-
nizable states, heterogeneous networks are more synchroniz-
able (have a higher degree of coherence) than homogeneous
networks at small couplings, while at larger couplings the op-
posite happens [8]. This crossover phenomenon of network
synchronizability is difficult to understand if we only look
at the final state of the system, but is straightforward if we
look at the transient behaviors of the evolution [8]. Besides
revealing the synchronization mechanisms, the transient be-
havior of network synchronization can also be used to detect
the topological scales and hierarchical structures in real
systems, e.g., the detection of cluster structures in social and
biological networks [9, 10]. However, despite its theoreti-
cal and practical significance, the study of transient dynamics
of complex networks is still in its infancy and many questions
remain open, for example, the pattern evolution of unsyn-
chronizable complex networks.
chronizable complex networks.
Pattern formation in unsynchronizable but near-
synchronization networks has been an important issue
in studying the collective behavior of regular networks
[11, 12]. By setting the coupling strength nearby the
synchronization bifurcation point, the system state shares
both the dynamical properties of the synchronizable and
unsynchronizable states: a state of high coherence but is not
synchronized. The bifacial dynamical property makes this
state a natural choice in investigating the transition process of
networks synchronization. Previously studies about regular
networks, say for example the lattices [11], have shown that,
when the coupling strength is slightly outside the synchroniz-
able region, although global synchronization is unreachable,
nodes are still synchronized in a partial sense. That is, nodes
self-organize into a number of synchronous clusters.
The distribution of these clusters, also called the synchronous
pattern, is determined by a set of factors such as the coupling
strength, the system size and the coupling scheme. As the
coupling strength moves away from the bifurcation point, the
pattern structure becomes more and more complicated, the
system coherence decreases, and the system finally reaches the
turbulent state. It is worth noting that the patterns arising
in regular networks have two common properties: they are spatially
symmetric and temporally stationary. More specifically, the
contents of each cluster are fixed and the clusters are of
translational symmetry in space. For this reason, we say that
the patterns formed in regular networks are symmetric and
stationary. These two properties, as has been discussed in
previous studies [11, 12], are rooted in the symmetric
topology of the regular networks. This makes it interesting
to ask the following question: what about the patterns in
unsynchronizable complex networks?
Different from regular networks, complex networks
do not display any symmetry in their topologies. The
asymmetric topology, according to the pattern analysis devel-
oped in studying regular networks [11], will induce two sig-
nificant changes to the patterns: 1) the synchronous clusters, if
they exist, will be asymmetric; and 2) all the possible patterns,
including the one of global synchronization, are linearly un-
stable under small perturbations. In other words, the patterns
formed in complex networks are expected to be asymmetric
and nonstationary. Our aim in this paper is to under-
stand and characterize the nonstationary patterns arising in the
development of complex networks. Specifically, we try
to investigate the following questions: 1) does any pattern
arise during the system evolution? 2) is the pattern stationary
or nonstationary? If nonstationary, how does it evolve and how
is it reflected in the system dynamics? 3) What happens
to the pattern properties during the transition of network syn-
chronization? and 4) How do the coupling strength and bifurca-
tion type affect the pattern properties? By investigating these
dynamical and statistical properties, we wish to gain a global
understanding of the dynamics of unsynchronizable complex
networks.
Our main findings are: 1) for coupling strengths immedi-
ately outside the synchronizable region, the system dynam-
ics undergoes a process of on-off intermittency. That is,
most of the time the system stays in the global synchroniza-
tion state (the "off" state) but, once in a while, it develops
into a breaking state (the "on" state) which is composed of
a giant cluster and a small number of small clusters (hereafter
we call it the giant-cluster state). As the system develops,
the giant cluster changes its shape by absorbing or emitting
the small clusters, leading to the "off" or "on" states, respec-
tively; 2) the few active nodes which escape from the giant
cluster with high frequency are coupling-strength inde-
pendent but are bifurcation-type dependent. That is, in the
neighboring region of a fixed bifurcation point, the locations
of these active nodes do not change with the coupling strength;
if we instead set the coupling strength near the other bi-
furcation point (the two bifurcation points will be explained
later), their locations are totally changed; 3) as the coupling
strength moves away from the bifurcation point, the giant clus-
ter is gradually dissolved and more small clusters are gener-
ated from it. Eventually, the giant cluster disappears and the
pattern is composed of only small clusters (hereafter we
call it the scattering-cluster state). During the course of sys-
tem evolution, each small cluster may either increase its size
by integrating with other small clusters or decrease its size
by breaking into even smaller clusters, but the system can never reach
the global synchronization state; 4) besides the giant cluster,
the giant- and scattering-cluster states are also distinct in their
small clusters. For the giant-cluster state the size of the small
clusters follows a power-law distribution, while for the scattering-
cluster state it follows a Gaussian distribution.
The rest of the paper is arranged as follows. In
Sec. II we will give our model of a coupled map network and,
based on the MSF method, point out the two bifurcation
points and the transition areas that we are going to study.
In Sec. III we will employ the method of the finite-time Lyapunov
exponent to predict and describe the intermittent system dy-
namics in the bifurcation regions. Direct simulations of
on-off intermittency will be presented in Sec. IV. By intro-
ducing the method of temporal phase synchronization, in Sec.
V we will investigate in detail the dynamical and statistical
properties of the nonstationary pattern. Meanwhile, proper-
ties of the giant- and scattering-cluster states will be compared and
the transition between the two states will be examined. In
Sec. VI we will discuss the phenomenon of active nodes and
investigate their dependence on the network properties. Dis-
cussions and conclusions about pattern evolution in complex
networks will be presented in Sec. VII.
II. COUPLED MAP NETWORKS AND THE TWO
BIFURCATION POINTS
Our model of a coupled map network is of the following form:

xi(t+1) = F(xi(t)) − ε Σ_{j=1}^{N} Gi,j H[F(xj(t))] ,   (1)
where xi(t + 1) = F(xi(t)) is a d-dimensional map repre-
senting the local dynamics on node i, ε is a global coupling
parameter, G is the Laplacian matrix representing the couplings,
and H is a coupling function. To facilitate our analysis, we
adopt the following coupling scheme [13]:

Gi,j = − Ai,j kj^β / Σ_{l=1}^{N} Ai,l kl^β ,   (2)
for j ≠ i and Gi,i = 1, with ki the degree of node i and
A the adjacency matrix of the network: Ai,j = 1 if nodes i and
j are connected and Ai,j = 0 otherwise. In comparison with
the traditional coupling schemes, one important advantage of
this coupling scheme is that the synchronizabil-
ity of the network, i.e. the eigenratio of the coupling matrix
described in Eq. [2], can be easily adjusted by the parameter
β, while the network topology is kept unchanged. This advan-
tage brings much convenience in network selection, since
a given network topology, even if it is unsynchronizable
under the traditional schemes, can now be made synchronizable by
adjusting β in Eq. [2]. This convenience is of particular im-
portance when our studies of network dynamics are focused
on the bifurcation regions, where the network synchronizabil-
ity should be deliberately arranged in order to demonstrate
both types of bifurcations. We note that the adop-
tion of Eq. [2] is only for the purpose of convenient analy-
sis; the findings we are going to report are general and can
also be observed with other coupling schemes provided the net-
work is properly prepared. In practice, we use the logistic map
F(x) = 4x(1−x) as the local dynamics and adopt H(x) = x
as the coupling function.
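A minimal numerical sketch of this setup is given below (network size, β, and the local map follow the text; the use of networkx's BA generator and all other implementation details are our own assumptions):

    import numpy as np
    import networkx as nx

    def coupling_matrix(graph, beta):
        """Coupling matrix of Eq. [2]: G_ii = 1 and, for j != i,
        G_ij = -A_ij * k_j**beta / sum_l A_il * k_l**beta."""
        A = nx.to_numpy_array(graph)
        k = A.sum(axis=1)                        # node degrees
        W = A * k[np.newaxis, :] ** beta         # A_ij * k_j^beta
        G = -W / W.sum(axis=1, keepdims=True)    # row-normalized weights
        np.fill_diagonal(G, 1.0)
        return G

    def iterate(G, eps, steps, x0=None, rng=None):
        """Iterate the coupled logistic maps of Eq. [1] with H(x) = x."""
        rng = np.random.default_rng(0) if rng is None else rng
        N = G.shape[0]
        x = rng.random(N) if x0 is None else np.asarray(x0, dtype=float)
        F = lambda y: 4.0 * y * (1.0 - y)
        traj = np.empty((steps, N))
        for t in range(steps):
            fx = F(x)
            x = fx - eps * (G @ fx)              # Eq. [1]
            traj[t] = x
        return traj

    # scale-free network with N = 1000 and <k> = 8 (BA model with m = 4)
    net = nx.barabasi_albert_graph(1000, 4, seed=1)
    G = coupling_matrix(net, beta=2.5)
    traj = iterate(G, eps=0.83, steps=2000)

Because G has unit diagonal and non-positive off-diagonal entries summing to −1 in each row, every iterate of Eq. [1] remains a convex combination of logistic-map values for ε ∈ (0, 1) and therefore stays in [0, 1].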
We first locate the two bifurcation points of global syn-
chronization. The linear stability of the global synchroniza-
tion state {xi(t) = s(t), ∀i} is determined by the correspond-
ing variational equations, which can be diagonalized into N
blocks of form
y(t+ 1) = [DF(s) + σDH(s)] y(t), (3)
with DF(s) and DH(s) the Jacobian matrices of the corre-
sponding vector functions evaluated at s(t), and y represents
the different modes that are transverse to the synchronous
manifold s(t). We have σ(i) = ελi for the ith block, i =
1, 2, ..., N , and λ1 = 0 ≤ λ2 ≤ ... ≤ λN are the eigenvalues
of matrix G. The largest Lyapunov exponent Λ(σ) of Eq. [3],
known as the master stability function (MSF) [7], determines
the linear stability of the synchronous manifold s(t). In par-
ticular, the synchronous manifold is stable only if Λ(ελi) < 0
for each i = 2, ..., N. The set of Lyapunov exponents Λ(ελi)
governs the stability of the synchronous manifold in the trans-
verse spaces, and a positive value of Λ(ελi) represents the
loss of stability in the transverse space of mode i. It was
found that for a large class of chaotic systems, Λ(σ) < 0
is only fulfilled within a limited range of the parameter space,
σ ∈ (σ1, σ2). This indicates that, to make the global synchro-
nization state linearly stable, all the scaled eigenvalues ελi should be
contained within the range (σ1, σ2), i.e., λN/λ2 < σ2/σ1. For
the logistic map employed here, it is not difficult to prove that
σ1 = 0.5 and σ2 = 1.5. Therefore, to achieve global syn-
chronization, the coupling matrix G should be designed with
eigenratio R ≡ λN/λ2 < σ2/σ1 = 3 = Rc.
Besides the condition R < Rc, to guarantee the synchro-
nization we also need to set the coupling strength in a proper
way: either too small or too large couplings may degrade the syn-
chronization. If ε < ε1 = σ1/λ2, the couplings are too weak
to restrict the node trajectories to the synchronous manifold;
while if ε > ε2 = σ2/λN , the couplings will be too strong
and actually act as large perturbations to the synchronization
manifold. Therefore, to achieve the global synchronization,
we also require ε1 < ε < ε2. The two critical couplings
ε1 and ε2, which are named the long-wave (LW) [14] and
short-wave (SW) bifurcations [15], respectively, in studies
of regular networks, thus stand as the boundaries of the syn-
chronizable region. Our studies about network synchroniza-
FIG. 1: For a scale-free network of N = 1000 nodes and average
degree 〈k〉 = 8, a schematic plot of the location of the two
bifurcation points as a function of the coupling strength. The long-
wave bifurcation occurs at about ε1 ≈ 0.83488, determined
by the condition ελ2 = σ1 = 0.5 (the lower line). The short-wave
bifurcation occurs at about ε2 ≈ 0.95127, determined by
the condition ελN = σ2 = 1.5 (the upper line).
tion will be focused on the neighboring regions of the two
bifurcation points, i.e., the regions ε ≲ ε1 and ε ≳ ε2.
By the standard BA growth model [1], we construct a scale-
free network of 10³ nodes and of average degree 〈k〉 = 8. By
setting β = 2.5 in Eq. [2], we have λ2 ≈ 0.6 and λN ≈ 1.58.
Since R = λN/λ2 ≈ 2.6 < Rc, the network is glob-
ally synchronizable. Also, because of λ2 > σ1 and λN > σ2,
both bifurcations can be realized by adjusting the cou-
pling strength within the range ε ∈ (0, 1). Specifically, when
ε < ε1 ≈ 0.835, we have ελ2 < σ1 and ελN < σ2, so the
synchronous manifold loses its stability at the lower boundary
of the synchronizable region and the LW bifurcation occurs; and
when ε > ε2 ≈ 0.95, we have ελ2 > σ1 and ελN > σ2, so the
synchronous manifold loses its stability at the upper bound-
ary of the synchronizable region and the SW bifurcation occurs
[Fig. 1]. In the following we will fix the network topology
and the parameter β, while generating the various patterns by
changing the coupling strength ε near the two bifurcation
points.
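The two bifurcation couplings can be located numerically from the spectrum of G. The sketch below (reusing the matrix G constructed in the previous sketch, which is an assumption of this illustration) computes λ2, λN, the eigenratio R, and the boundaries ε1 = σ1/λ2 and ε2 = σ2/λN for the logistic map, for which (σ1, σ2) = (0.5, 1.5):

    import numpy as np

    # spectrum of the coupling matrix G built above; for this weighted
    # Laplacian the eigenvalues are real, so the imaginary parts are
    # numerical noise and can be discarded
    lam = np.sort(np.linalg.eigvals(G).real)
    lam2, lamN = lam[1], lam[-1]

    sigma1, sigma2 = 0.5, 1.5        # stable MSF window of the logistic map
    R = lamN / lam2                  # eigenratio; synchronizable if R < 3
    eps1 = sigma1 / lam2             # long-wave (LW) bifurcation point
    eps2 = sigma2 / lamN             # short-wave (SW) bifurcation point
    print(f"R = {R:.2f}, eps1 = {eps1:.3f}, eps2 = {eps2:.3f}")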
III. FINITE-TIME LYAPUNOV EXPONENT
Before direct simulations, we first give a qualitative de-
scription (prediction) of the possible system dynamics in the bi-
furcation regions. To make our analysis concrete, in the following
we will only discuss the situation of the LW bifurcation (ε ≲ ε1),
while noting that the same phenomena can be found at the
SW bifurcation as well (ε ≳ ε2). In preparing the unsynchro-
nizable states, we only let Λ(λ2) protrude slightly into
the unstable region, while keeping all the other exponents
in the stable region, i.e., Λ(λ2) ≳ 0 and Λ(λi) < 0
for i = 3, ..., N. With this setting, the synchronous manifold
is only desynchronized in the transverse space of mode 2. As
such, the system possesses only two positive Lyapunov expo-
nents: one is Λ(λ1), which is associated with the synchronous
manifold itself, and the other is Λ(λ2). Note that the Λ(λ)
are asymptotic averages and, as such, they account only for the
global stability properties, but do not capture the possible co-
herent sets arising in the system evolution. These coherent
sets, for regular networks, refer to the stationary, symmetric
patterns to which the system finally evolves, while for com-
plex networks these sets can be the temporary, irregular clus-
ters emerging in the process of system evolution.
In the region ε ≲ ε1, although global synchronization is
unreachable, the system may still keep a high degree of coher-
ence due to the existence of synchronous clusters. In particular,
there can be moments at which all the trajecto-
ries are confined to a small region of the phase space, very
close to the situation of global synchronization. This vary-
ing system coherence, however, cannot be captured by the
asymptotic value Λ(λ). To characterize the variation, we need
to employ new quantities which are able to capture the
temporal behavior of the system. One such quantity is the
finite-time Lyapunov exponent (FLE), a technique developed
in studying chaotic transitions in nonlinear science [16]. Instead
of an asymptotic average, the FLE measures the divergence rate of
nearby trajectories only over a finite time interval T:
Λk,i = (1/T) Σ_{t=(i−1)T}^{iT} ln ‖DF(s(t)) + σ(k) DH(s(t))‖ .   (4)
As our studies are focused on the situation of one-mode
desynchronization, the stability of the synchronous manifold
and the temporal behavior that it displays are therefore ex-
pected to be reflected mainly in the variation of Λ2,i, the
FLE associated with mode 2. With ε = 0.83 and
T = 100, we plot in Fig. 2 the time evolution of Λ2,i. It
is found that, although the asymptotic value is positive, about
〈Λ2,i〉 ≈ 6 × 10⁻³, the instantaneous value of Λ2,i penetrates into the
negative region at a high frequency. According to the different
negative region at a high frequency. According to the different
signs of Λ2,i, the system evolution is divided into two types of
intervals: the divergent interval and the contractive interval.
In the divergent intervals we have Λ2,i > 0 and the system
dynamics is temporarily dominated by the divergence of the
node trajectories from the synchronous manifold; while in the
contractive intervals we have Λ2,i < 0 and the system dynam-
ics is temporarily dominated by the convergence of the node
trajectories to the synchronous manifold.
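A minimal sketch of how Λ2,i can be estimated numerically is given below; for F(x) = 4x(1 − x) and H(x) = x, the variational dynamics of transverse mode 2 takes the explicit form y(t+1) = (1 − ελ2) F′(s(t)) y(t), which is what the sketch averages over windows of length T (the implementation itself is our own):

    import numpy as np

    def finite_time_lyapunov(eps, lam2, T=100, n_windows=2000, s0=0.3):
        """Finite-time exponent Lambda_{2,i}: window averages of
        ln|(1 - eps*lam2) * F'(s(t))| along the synchronous orbit s(t)."""
        s = s0
        fle = np.empty(n_windows)
        for i in range(n_windows):
            acc = 0.0
            for _ in range(T):
                jac = (1.0 - eps * lam2) * (4.0 - 8.0 * s)   # transverse Jacobian
                acc += np.log(max(abs(jac), 1e-300))         # guard against log(0)
                s = 4.0 * s * (1.0 - s)                      # synchronous manifold
            fle[i] = acc / T
        return fle

    fle = finite_time_lyapunov(eps=0.83, lam2=0.6)
    print("mean FLE:", fle.mean(),
          "fraction of contractive windows:", (fle < 0).mean())

With ε = 0.83 and λ2 ≈ 0.6, the window averages should fluctuate around a small positive value while frequently dipping below zero, consistent with Fig. 2.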
The variation of Λ2,i, reflected in the process of pattern
evolution, characterizes the travelling of the sys-
tem dynamics between the neighborhoods of two different
kinds of states: the desynchronization state and the synchro-
nization state. In Fig. 2, the minimum value of Λ2,i is about
−0.07; during this contractive interval the node trajectories
will converge to the synchronous manifold by an amount of
e^(min Λ2,i · T) ≈ e^(−7) ≈ 10⁻³ on average. Assuming that before
entering this interval the average distance between the node
trajectories is ∆ (for the logistic map we always have ∆ < 1),
then at the end of this interval the average distance is de-
creased to ∆ × 10⁻³, a small value which is usually over-
shadowed by noise in practice. Due to this small distance,
FIG. 2: For ε = 0.83 in Fig. 1, the time evolution of the finite-
time Lyapunov exponent Λ2,i calculated on intervals of length T =
100. It is observed that, while having a positive asymptotic value
〈Λ2,i〉 > 0, the temporal value of Λ2,i frequently penetrates into the
negative region.
the system can be practically regarded as having already reached the
synchronization state. On the other hand, if the system enters
a divergent interval, the node trajectories will diverge from
each other and, at the end of this interval, their average dis-
tance will be increased by a factor of about 10³. This large distance
will deteriorate the ordered trajectories (or the high coherence
of the system dynamics) achieved during the contractive
intervals, leading to an incoherent, breaking state. The
pattern of the breaking state, however, is not unique. Depend-
ing on the initial conditions and the divergent intervals, the
pattern may assume different configurations. Therefore,
based on the observations of Λ2,i [Fig. 2], the dynamics of
unsynchronizable networks can be intuitively understood as
an intermittent travelling between the synchronization state and
the different kinds of desynchronization states.
IV. ON-OFF INTERMITTENCY DESCRIBED BY
COMPLETE SYNCHRONIZATION
We now investigate the system dynamics by direct simulations. To implement this, we first prepare the system in the synchronization state. This can be achieved by adopting a large coupling strength from the synchronizable region, i.e. ε1 < ε < ε2. After synchronization is achieved, we then decrease ε to a value slightly below the bifurcation point ε1 and, at the same time, a small instantaneous perturbation is added to each node. In practice, we take i.i.d. (independent, identically distributed) noise of strength 1 × 10^−5 as the perturbations. After this, we release the system and let it develop according to Eq. (1). Since ε < ε1, the synchronization state is unstable and, triggered by the noise, the node trajectories begin to diverge from each other. The divergent trajectories, however, frequently visit the neighborhood of the synchronous manifold, especially during the contractive intervals of small Λ2,i [Fig. 2]. The intermittent system dynamics is plotted in Fig. 3(a), where the average trajectory distance ∆X = (1/N) Σ_{i=1}^{N} |xi − x̄| is plotted as a function of time. As predicted from the FLE analysis, the system dynamics indeed undergoes an intermittent process. To characterize the intermittency, we plot in Fig. 3(b) the laminar-phase distribution of the ∆X sequence plotted in Fig. 3(a). It is found that the laminar length τ (the time interval between two adjacent bursts of amplitude ∆X(t) > 10^−3) and the probability p(τ) of its appearance follow a power-law scaling p(τ) ∼ τ^−γ. The fitted exponent is about γ ≈ −1.5 ± 0.05, with a fat tail at large τ due to the finite simulation time.
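The post-processing behind Fig. 3(b) can be sketched as follows: detect the bursts of ∆X above the 10^−3 threshold, measure the laminar lengths τ between adjacent bursts, and estimate the power-law exponent from a log-binned histogram. To keep the sketch runnable without the full network simulation, a surrogate ∆X(t) is generated here from a reflected multiplicative random walk, a standard toy model of on-off intermittency; applied to the actual ∆X(t) series it should recover the γ ≈ −1.5 quoted above.

import numpy as np

def laminar_lengths(dX, threshold=1e-3):
    # time intervals between two adjacent bursts of amplitude dX > threshold
    bursts = np.flatnonzero(np.asarray(dX) > threshold)
    return np.diff(bursts)

def powerlaw_slope(tau, nbins=20):
    # slope of log10 p(tau) versus log10 tau from a logarithmically binned histogram
    tau = np.asarray(tau, dtype=float)
    edges = np.logspace(0.0, np.log10(tau.max() + 1.0), nbins)
    hist, edges = np.histogram(tau, bins=edges, density=True)
    centers = np.sqrt(edges[1:] * edges[:-1])
    good = hist > 0
    slope, _ = np.polyfit(np.log10(centers[good]), np.log10(hist[good]), 1)
    return slope

# surrogate Delta X(t): log-amplitude random walk reflected at 0 (so that x <= 1)
rng = np.random.default_rng(1)
logx, z = np.empty(500000), -5.0
for t, step in enumerate(rng.normal(0.0, 0.5, size=logx.size)):
    z = min(0.0, z + step)
    logx[t] = z
dX = np.exp(logx)

tau = laminar_lengths(dX)
print(powerlaw_slope(tau))   # should come out near -1.5 for an unbiased walk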
In chaos theory, an intermittent process with laminar-phase exponent −3/2 is classified as "on-off" intermittency, a typical phenomenon observed in dynamical systems with a symmetric invariant set [17]. On-off intermittency has also been reported in chaos synchronization of regular networks, where the invariant set refers to the synchronous manifold, the "off" state refers to the long stretches during which the system dynamics stays near the synchronous manifold, and the "on" state refers to the short bursts during which the system dynamics stays away from the synchronous manifold. Therefore, in terms of the laminar-phase distribution, the intermittency we have found in complex networks [Fig. 3] is no different from that of regular networks, despite the drastic difference between their topologies. We have also investigated the transition behavior of the averaged distance 〈∆X(t)〉 near the bifurcation points. As shown in Fig. 3(c), a linear relation between 〈∆X(t)〉 and ε is found in the region of ε ≲ ε1. This linear transition of the system performance, again, is consistent with the transition found in regular networks [18]. Therefore, in terms of complete synchronization, the on-off intermittency we have found in complex networks is no different from that of regular networks.
V. PATTERN EVOLUTION IN COMPLEX NETWORKS
To reveal the unique properties of the system dynamics that are induced by the complex topology, we go on to investigate
the pattern formation of unsynchronizable networks by the
method of temporal phase synchronization (TPS).
A. Temporal phase synchronization
TPS is defined as follows. Let xi(t) be the time sequence recorded on node i; we first transform it into a symbolic sequence θi(t) according to

θi(t) = 0, if xi(t) < 0.5;   θi(t) = 1, if xi(t) ≥ 0.5.

Then we divide θi(t) into short segments of equal length n. Regarding each segment as a new element, we thereby transform the long, variable sequence xi(t) into a short, symbolic sequence Θi(t′). If at moment t′ we have Θi(t′) = Θj(t′), then we say that TPS is achieved between
FIG. 3: (Color online) The on-off intermittency of the system dynamics near the LW bifurcation at ε = 0.83. (a) The time evolution of the average trajectory distance ∆X. (b) The laminar-phase distribution of ∆X, which follows a power-law scaling with a fitted exponent of about 3/2. (c) The transition behavior of the average distance 〈∆X〉 near the LW bifurcation point ε1, where a linear relation is found between the two quantities.
the nodes i and j. The collection of nodes which have the same value of Θ at moment t′ is defined as a temporarily synchronous cluster, and all the synchronous clusters constitute the temporary pattern of the system. During the course of the system evolution, the clusters change their shapes and contents and the pattern changes its configuration.
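The TPS construction translates directly into code. The sketch below (fed with random data in place of the coupled-map output of Eq. (1)) symbolizes the node time series with the 0.5 threshold, cuts the symbol sequences into words of length n, and groups nodes sharing the same word into temporary clusters; the quantities nc and Lmax used in the next subsection follow immediately from the resulting patterns.

import numpy as np

def tps_patterns(X, n=10):
    # X: array of shape (N, L) holding the node time series x_i(t).
    # Returns, for each segment t', the clusters of nodes sharing the same word Theta_i(t').
    theta = (np.asarray(X) >= 0.5).astype(np.int8)       # theta_i(t) symbolization
    N, L = theta.shape
    words = theta[:, : (L // n) * n].reshape(N, -1, n)    # (N, number of segments, n)
    patterns = []
    for tp in range(words.shape[1]):
        clusters = {}
        for i, w in enumerate(map(tuple, words[:, tp, :])):
            clusters.setdefault(w, []).append(i)
        patterns.append(list(clusters.values()))
    return patterns

# toy example in place of the network simulation
rng = np.random.default_rng(0)
X = rng.random((100, 5000))
patterns = tps_patterns(X, n=10)
nc = np.array([len(p) for p in patterns])                      # number of clusters
Lmax = np.array([max(len(c) for c in p) for p in patterns])    # size of the largest cluster
print(nc[:5], Lmax[:5])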
In comparison with the method of complete synchronization, the advantage of TPS is obvious: it makes the synchronous pattern detectable. With complete synchronization, it is almost impossible for two nodes to have exactly the same variable at the same time. Even though at some moments the system has already reached high-coherence states (formed during the contractive intervals in Fig. 3(a)), with complete synchronization we are not able to distinguish these states quantitatively from the low-coherence ones (formed during the divergent intervals in Fig. 3(a)). (A remedy to this difficulty would seem to be to define the clusters by threshold truncation, i.e., nodes are regarded as synchronized if the distance between their trajectories is smaller than some small value. However, this definition of synchronization induces the problem of cluster identification, as the same state may generate different patterns if we choose different reference nodes.) In contrast, TPS focuses on a loose match (phase synchronization) between the node variables over a period of time. By requiring an exact match of the discrete variable Θ, the synchronous pattern is uniquely defined, while by requiring a match of the long sequences of θ, the "synchronous" nodes are guaranteed a strong coherence.
B. Pattern evolution of the giant-cluster state
With the same set of parameters as in Fig. 3(a), by the method of TPS we plot in Fig. 4 the time evolutions of two basic quantities of pattern evolution: the number of synchronous clusters nc and the size of the largest cluster Lmax. It is found that, similar to the phenomenon in complete synchronization [Fig. 3(a)], on-off intermittency also appears in the TPS quantities nc and Lmax. Figure 4(a) shows that most of the time the system is broken into only a small number of clusters, nc = 2 or 3, while occasionally it is broken into quite a large number of clusters, 10 < nc < 50, or unites into the synchronization state, nc = 1. The intermittent pattern evolution is also reflected in the sequence of Lmax [Fig. 4(b)], where most of the time the size of the giant cluster is about Lmax ≈ N, while occasionally it decreases to small values of Lmax < N/2. As we have discussed previously, the main advantage of TPS is in identifying the clusters. This advantage is clearly shown in Figs. 4(a) and (b), where for any time instant the two quantities nc and Li are uniquely defined. Besides cluster identification, we also benefit from TPS in quantifying the degree of synchronization. Specifically, the different coherence states shown in Fig. 3(a) can now be clearly quantified: high-coherence states are those of smaller nc or larger Lmax. In particular, the synchronization state is now unambiguously defined as the moments of nc = 1 in Fig. 4(a) or, equivalently, the moments of Lmax = N in Fig. 4(b).
We go on to investigate the pattern evolution by statistical analysis. The first statistic we are interested in is the laminar-phase distribution of the synchronization state, i.e. the time intervals during which nc = 1 in Fig. 4(a) or Lmax = N in Fig. 4(b). In its original definition, a laminar phase refers to a time interval τ during which all node trajectories stay within a small distance from the synchronous manifold; the actual value of τ therefore varies with the predefined threshold distance. This uncertainty is overcome in TPS. As shown in Fig. 4(a), in TPS
FIG. 4: For the same set of parameters as in Fig. 3(a). The time
evolutions of the TPS quantities. (a) The number of the synchronous
clusters nc and (b) the size of the giant cluster Lmax. The synchro-
nization state is defined as the moments nc = 1 in (a) or Lmax = N
in (b).
the "off" state refers specifically to the moments of nc = 1. The laminar-phase distribution of nc is plotted in Fig. 5(a). Consistent with the distribution found for complete synchronization [Fig. 3], the laminar-phase distribution of nc also follows a power-law scaling with the same exponent γ ≈ −1.5 ± 0.1. Therefore the use of TPS, while bringing convenience to the pattern analysis, still captures the basic properties of the on-off intermittency. The second statistic we are interested in is the size distribution of the largest cluster, an important indicator of the coherence degree of the system. For the Lmax sequence plotted in Fig. 4(b), we plot in Fig. 5(b) the size distribution of Lmax. It is seen that the probability of finding a large cluster Lmax ≈ N is much higher than that of finding a small cluster of Lmax < 500. In particular, the probability of finding clusters of Lmax > 990 is about 20 percent and for Lmax > 990 it is about 70 percent. Therefore, in the region of ε ≲ ε1, the distinct feature of the system patterns is the existence of a giant cluster. Due to this special feature, we call these states the giant-cluster state.
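Both statistics reduce to simple operations on the nc and Lmax sequences, as sketched below: the laminar phases are the run lengths of nc = 1, and the giant-cluster criterion is a threshold on Lmax. The toy sequences stand in for the TPS output of the previous sketch; the thresholds (N = 1000, Lmax > 990) follow the values quoted in the text.

import numpy as np

def run_lengths(condition):
    # lengths of consecutive runs where the boolean condition holds
    m = np.asarray(condition, dtype=int)
    d = np.diff(np.concatenate(([0], m, [0])))
    return np.flatnonzero(d == -1) - np.flatnonzero(d == 1)

# toy stand-ins for the nc and Lmax sequences obtained from TPS
rng = np.random.default_rng(2)
nc = rng.choice([1, 1, 1, 2, 3, 8], size=100000)
Lmax = np.where(nc == 1, 1000, rng.integers(400, 1000, size=nc.size))

tau_sync = run_lengths(nc == 1)               # laminar phases of the synchronization state
print(tau_sync.mean(), np.mean(Lmax > 990))   # mean laminar length, fraction with a near-complete giant cluster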
Besides the giant cluster, we are also interested in the prop-
erties of the small clusters. We plot in Fig. 5(c) the distribu-
tion of nc and in Fig. 5(d) the size distribution of the small
clusters Li that surround the giant cluster in the pattern. As
shown in Fig. 5(c), the distribution of nc follows a power-law
FIG. 5: (Color online) Statistical properties of the on-off intermit-
tency plotted in Fig. 4. (a) The power-law scaling of the laminar-
phase distribution of nc. The fitted slope is about −2.3 ± 0.05. (b)
The distribution of the size of the giant cluster. (c) The power-law distribution of the number of small clusters nc. The fitted slope is about −3 ± 0.1. (d) The power-law scaling of the size distribution of the small clusters. The fitted slope is about −1.2 ± 0.01.
scaling with a fitted exponent of about γ ≈ −3 ± 0.05. The heterogeneous distribution of nc indicates that, in the giant-cluster state, the system is usually broken into only a small number of clusters. An interesting finding lies in the size distribution of the small clusters. As shown in Fig. 5(d), in the range Li ∈ [1, N/2] a power-law scaling is found between PLi and Li, with a fitted exponent of about γ ≈ −1.1 ± 0.05. The distribution of Li confirms the finding of Fig. 5(c) that the small clusters which join or separate from the giant cluster are usually of small size.
Combining the findings of Fig. 4 and Fig. 5, the picture of pattern evolution in the bifurcation region ε ≲ ε1 now becomes clear. Generally speaking, the evolution can be divided into two opposite dynamical processes happening around the giant cluster: the separation and the integration of the small clusters. During the separation process, small clusters escape from the giant cluster, which weakens the dominant role of the giant cluster and makes the pattern more complicated. However, since the separated clusters occupy only a small proportion of the nodes [Fig. 5(c)], the majority of nodes remain attached to the giant cluster, which sustains the synchronization skeleton and keeps the system in the high-coherence states. At some rare moments the giant cluster may disappear, and the pattern is then composed only of small clusters of Li < N/2. At these moments the synchronization skeleton is broken, the pattern becomes even more complicated, and the system coherence reaches its minimum. In contrast, during the process of cluster integration, the giant cluster increases its size by attracting the small clusters and gradually approaches the state of global synchronization. It should be noticed that the separation and integration processes are uneven and typically occur at the same time. For instance, during the separation process, while the system evolution is dominated by the separation of new small clusters from the giant cluster, some small clusters may rejoin the giant cluster.
C. Pattern evolution of the scattering-cluster state
As we further decrease the coupling strength from ε1, the picture of pattern evolution changes completely. With ε = 0.79, we plot in Fig. 6 the same statistics as in Fig. 5. The first observation is the loss of the global synchronization state, as can be found from the time variation of nc plotted in Fig. 6(a). The loss of global synchronization becomes even clearer if we compare Fig. 6(a) with Fig. 4(a): in Fig. 6(a), except for the moment t = 0, the system never reaches the synchronization state nc = 1, and very often it is broken into a large number of small clusters with nc ∼ 10^2. The fact that the pattern is decomposed into a large number of small clusters is also manifested in the distribution of nc, as plotted in Fig. 6(b). Instead of the power-law distribution found in the giant-cluster state, in the scattering-cluster state nc follows a Gaussian distribution [Fig. 6(b)]. As ε decreases further from ε1, the mean value of nc shifts to larger values, as indicated by the ε = 0.78 curve plotted in Fig. 6(b). The second observation is the disappearance of the giant cluster. As shown in Fig. 6(c), the size distribution of the largest cluster also follows a Gaussian distribution, with its mean value located at 〈Lmax〉 < N/2. The distribution of Fig. 6(c) is very different from that of Fig. 5(b), where the largest (giant) cluster has size Lmax ≈ N most of the time. As ε decreases, the mean value of the largest cluster 〈Lmax〉 shifts to smaller values and the variance of Lmax decreases, as indicated by the ε = 0.78 curve plotted in Fig. 6(c). Similar to the plot of Fig. 5(d), we have also investigated the distribution of Li, the sizes of all the small clusters appearing in the system evolution [Fig. 6(d)]. It is found that the distribution of Li follows a power law for Li < N/2, while having an exponential tail for Li > N/2. Numerically we find that the exponent of the power-law section, i.e. in the range Li ∈ [1, 200), is about −2 ± 0.05, while the fitted exponent of the exponential section is about −4.5 × 10^−3 ± 2 × 10^−5. These two exponents, however, change with ε. As ε decreases, the two exponents shift to smaller values.
Combining Fig. 5 and Fig. 6, we are able to outline the transition process of network synchronization near the bifurcation points, i.e., the transition from the giant-cluster state to the scattering-cluster state as ε moves away from ε1. In the region of ε ≲ ε1, the pattern is composed of a giant cluster and a small number of small clusters, i.e. the giant-cluster state. As ε decreases gradually from ε1, more and more small clusters are emitted from the giant cluster and, as a consequence, both the size of the giant cluster and the fraction of time spent in the synchronization state decrease. Then, at about εc ≈ 0.832, the giant cluster disappears and the pattern of the system is composed of several larger clusters, of size Lmax ≲ N/2, together with many small clusters with a heterogeneous size distribution, i.e. the scattering-cluster state. After that, as ε decreases from εc, the clusters shrink by breaking into even smaller clusters, and the pattern
FIG. 6: (Color online) The dynamical and statistical properties of pattern evolution for ε = 0.79. (a) The time evolution of nc. (b) The Gaussian distribution of the number of small clusters nc. (c) The Gaussian distribution of the size of the largest cluster Lmax. (d) The two-segment distribution of the size of the small clusters Li. In the region Li < 200, Li follows a power-law distribution with a fitted exponent of about −2 ± 0.05, while for Li > 200 the distribution is exponential with a fitted exponent of about −4.5 × 10^−3 ± 2 × 10^−5. As ε decreases further from ε1, the largest cluster becomes even smaller and more small clusters are emitted from it, as illustrated by the ε = 0.78 curves plotted in (b), (c) and (d).
becomes even more complicated. The detailed transition from the giant-cluster state to the scattering-cluster state is presented in Fig. 7, where the average number of clusters into which the system is broken, 〈nc〉 [Fig. 7(a)], and the average size of the largest cluster, 〈Lmax〉 [Fig. 7(b)], are plotted as functions of the coupling strength in the LW bifurcation region. The transition is found to be smooth and steady, just as we expected. Besides the giant cluster, another difference between the giant-cluster and scattering-cluster states lies in their pattern evolutions. In the giant-cluster state, while the configuration of the giant cluster is continuously updated by emitting or absorbing small clusters, its main contents are stable and do not change with time. In contrast, in the scattering-cluster state the small clusters integrate with or separate from each other in a random fashion. Although occasionally some large clusters may show up in the pattern of the scattering-cluster state [Fig. 6(d)], these "large" clusters are very fragile and break into small clusters again within a short time. This quick-dissolving property prevents the scattering-cluster state from attaining a high coherence.
VI. CHARACTERIZING THE ACTIVE NODES
In the giant-cluster state, most of the nodes are organized into the giant cluster, while a few nodes, either in the form of small groups or isolated nodes, separate from or join the giant cluster with a high frequency. These active nodes, al-
FIG. 7: The transition process of the network synchronization near the LW bifurcation point ε1. (a) The average number of clusters into which the system is broken as a function of the coupling strength. (b) The average size of the largest cluster as a function of the coupling strength. Each data point is an average over 10^8 time steps.
though few in number, play an important role in network synchronization. Clearly, a proper characterization of these nodes will deepen our understanding of the system dynamics and give indications for improving network performance. For instance, to improve the synchronizability of the system, we may either remove the few most active nodes from the network or update their coupling strengths specifically. In characterizing the active nodes, the following properties are of general interest: 1) What is the dependence of the node activity on the network topology? Can we characterize these nodes by known network properties such as node degree or betweenness? 2) Are their locations sensitive to the coupling strength? 3) What is the effect of the bifurcation type on their locations? In the following we explore these questions by numerical simulations.
We first try to characterize the active nodes by their topological properties. For the giant-cluster state described in Fig. 4, we plot in Fig. 8(a) the probability pu1 that each node stays in the giant cluster. While the majority of nodes stay in the giant cluster with a high probability, pu1 ≈ 1, a few nodes have unusually small probabilities: 1 percent of the nodes have pu1 < 0.8. One important observation from Fig. 8(a) is that the locations of the active nodes are entangled with those of
FIG. 8: (Color online) The properties of the active nodes. (a) For the giant-cluster state shown in Fig. 4, the probability that a node stays in the giant cluster versus the node index. (b) A segment of (a) but with different coupling strengths near the LW bifurcation point ε1. (c) For the giant-cluster state (ε = 0.952) near the SW bifurcation point, the probability that a node stays in the giant cluster versus the node index. (d) A segment of (c) under different coupling strengths near the SW bifurcation point ε2.
the stable nodes. Noticing that in the BA growth model nodes of higher index in general have smaller degree, the observation of Fig. 8(a) indicates the independence of node stability from node degree, or the inaccuracy of using degree to characterize the node activity. Specifically, in Fig. 8(a) the 5 most unstable nodes, in descending order of pu1, are those of degrees k = 47, 36, 26, 10, 4, respectively. Except for the one with k = 4, all the other nodes have higher degrees. Another well-known topological property of complex networks is the node betweenness, which counts the number of shortest paths passing through each node and thereby evaluates the node importance from a global-network point of view. This global-network property, however, is also incapable of characterizing the active nodes. In Tab. 1 we list the detailed information about the 5 most active nodes in Fig. 8(a), where the inaccuracy of node degree and node betweenness in characterizing the active nodes is summarized.
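This comparison can be reproduced along the following lines. The sketch builds an illustrative BA network with networkx (N = 1000; the construction parameters of the paper's model are defined in its Sec. II and are not repeated here, so m = 2 is only a placeholder), computes each node's attaching probability pu1 from a sequence of TPS patterns (a toy sequence is generated so the code runs stand-alone), and then inspects the degree and betweenness of the least-attached nodes, as well as the degree-averaged pu1 discussed later in connection with Fig. 9.

import numpy as np
import networkx as nx

def attaching_probability(patterns, N):
    # fraction of snapshots in which each node belongs to the largest (giant) cluster
    in_giant = np.zeros(N)
    for clusters in patterns:
        giant = max(clusters, key=len)
        in_giant[giant] += 1          # 'giant' is a list of node indices
    return in_giant / len(patterns)

N = 1000
G = nx.barabasi_albert_graph(N, 2, seed=0)            # illustrative BA network
deg = np.array([d for _, d in G.degree()])
btw = np.array(list(nx.betweenness_centrality(G).values()))

# toy patterns standing in for the TPS output of the giant-cluster state
rng = np.random.default_rng(3)
patterns = []
for _ in range(500):
    out = rng.choice(N, size=rng.integers(0, 15), replace=False)
    giant = list(set(range(N)) - set(out.tolist()))
    patterns.append([giant] + [[i] for i in out])

pu1 = attaching_probability(patterns, N)
active = np.argsort(pu1)[:5]                          # the 5 least attached (most active) nodes
print(active, deg[active], btw[active])
for k in np.unique(deg):                              # degree-averaged attaching probability, cf. Fig. 9
    print(k, pu1[deg == k].mean())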
TABLE I: For the attaching probability pi plotted in Fig. 8(a), we
list the 5 most unstable nodes and try to characterize them by a set of
topological quantities including the node index i, the attaching prob-
ability pi, the stability rank pi rank, the node degree ki, the degree
rank ki rank, the node betweenness Bi, and the betweenness rank
Bi rank.
Node index i pi pi rank ki ki rank Bi Bi rank
615 0.72797 1 5 39→537 1301 280
762 0.74424 2 5 39→537 1375 356
680 0.75416 3 4 1→338 1254 680
372 0.7591 4 6 538→645 1440 406
938 0.75972 5 4 1→338 1215 159
We go on to investigate the affection of the coupling
strength on the locations of the active nodes. In Fig. 8(b) we
fix the network topology and compare the node activities un-
der different coupling strengths nearby the bifurcation point
ε1. It is found that, despite of the changes in pu1, the lo-
cations of the active nodes are kept unchanged. That is, the
active nodes are always the first ones to escape from the gi-
ant cluster whenever the network is unsynchronizable. We
have also investigated the affection of the bifurcation type on
the locations of the active nodes. By choosing the coupling
strength nearby the SW bifurcation ε = 0.952 & ε2, we plot
in Fig. 8(c) the node attaching probability pu2 as a function
of the node index i. An interesting finding is that, comparing
to the situation of LW bifurcation [Fig. 8(a)], the locations
of the active nodes have been totally changed in Fig. (c). In
Tab. 2 we list the detail information about the 5 most active
nodes in Fig. 8(c), again their locations can not be predicted
by the node degree or betweenness. Similar to the LW bifur-
cation, the locations of the active nodes are also independent
to the coupling strength at the SW bifurcation, as shown in
Fig. 8(d).
TABLE II: Similar to Tab. I but for the attaching probability pi plot-
ted in Fig. 8(c). Compared to Tab. I, one important observation is
the changed locations of the active nodes due to the changed bifurca-
tion type.
Node index i pi pi rank ki ki rank Bi Bi rank
43 0.78196 1 9 779→813 2847 813
35 0.78969 2 18 936→940 8513 953
714 0.795 3 4 1→338 1215 158
130 0.79652 4 13 886→901 4200 886
154 0.79944 5 10 814→846 2898 815
Previous studies of network synchronization have shown that, while it is difficult to predict the dynamical behavior of each individual node, the average performance of an ensemble of nodes with the same network properties does have some reliable characteristics. For instance, it has been shown that in complex networks the high-degree nodes are on average more synchronizable than the low-degree ones [19]. Regarding the problem of node activity, it is natural to ask a similar question: are the high-degree nodes more synchronized than the low-degree nodes? In Fig. 9 we plot the average attaching probability 〈pu1〉k as a function of degree k. Still, we cannot find a clear dependence of 〈pu1〉k on k.
VII. DISCUSSIONS AND CONCLUSION
It is worth noting that our study of active nodes focuses only on the giant-cluster state, and the purpose is to understand their dynamics and reveal their properties. By ensemble
FIG. 9: (Color online) The average attaching probability 〈pu1〉k as
a function of node degree k. On average, the 5 most unstable nodes
are those of degrees k = 47, 36, 26, 10, 4. Still, we cannot find a clear dependence between 〈pu1〉k and k.
average, we may be able to improve our prediction of the active nodes; for example, the dependence of 〈pu1〉k on k in Fig. 9 may be smoothed if we average the results over a large number of network realizations. Such an improvement, however, comes at the cost of decreased prediction accuracy due to the increased number of candidates. Taking Fig. 9 as an example, although nodes of k = 4 are in general more active than those of other degrees, only one of them is listed among the 5 most unstable nodes [Tab. 1]. Specifically, of the 338 nodes which have degree k = 4, most are tightly attached to the giant cluster (90 percent of them have attaching probabilities pu1 > 0.95). Therefore, in terms of precise prediction, the averaging method is infeasible in practice.
Besides node degree and betweenness, we have also checked the dependence of the node activity on some other well-known network properties such as the clustering coefficient, the modularity, and the assortativity. However, none of them is suitable for characterizing the active nodes; their performance is very similar to that of the node degree described in Tab. 1 and Tab. 2. Our study thus suggests that, to give a precise prediction of the active nodes, we may need to develop some new quantities.
Despite the number of studies carried out on network synchronization, to the best of our knowledge, we are the first to study the nonstationary pattern in unsynchronizable complex networks. In Ref. [9] the authors have discussed the transient process of global synchronization in complex networks, but their study concentrates on the synchronizable state in which, during the course of the system evolution, small clusters integrate into larger clusters monotonically and finally reach the synchronization state. After that, the system always stays in the synchronization state. Our work is also different from the studies of Refs. [10, 20]. Similar to our work, in these studies the authors also consider the problem of pattern formation in unsynchronizable networks, but their interest is focused on the stationary pattern of the system. That is, the size and contents of the clusters do not change with time. In contrast, in our study both the size and contents of the clusters are variable.
In summary, we have reported and investigated a new kind of phenomenon in network synchronization: the nonstationary pattern. That is, the network settles neither to the synchronization state nor to any stationary state of fixed pattern; instead, the system travels among all the possible patterns in an intermittent fashion (the pattern can be of any configuration, but its probability of showing up is pattern-dependent). We attribute this nonstationarity to the asymmetric topology of the complex networks, and its dynamical origin can be understood from the properties of the finite-time Lyapunov exponent associated with the desynchronized mode. Two types of synchronization, complete synchronization and temporal phase synchronization, have been employed to detect the nonstationary dynamics. For coupling strengths immediately outside the stable region, the pattern evolution is characterized by on-off intermittency and the existence of a giant cluster, while if the coupling strength is far away from the bifurcation points, the pattern evolution is signalled by random interactions among a number of small clusters. A remarkable finding is that in the giant-cluster state the locations of the active nodes are independent of the coupling strength but are sensitive to the bifurcation type. The active nodes, however, cannot be characterized by the currently known network properties, and further investigations of their identification are necessary. While we hope that our study of the nonstationary pattern can give some new understanding of the dynamics of coupled complex systems, we also hope that our findings about unsynchronizable networks can be applied to practical problems where systems maintain their normal functions only in unsynchronizable states, for example the problem of epileptic seizures [21].
[1] D.J. Watts and S.H. Strogatz, Nature 393, 440 (1998); A.-L.
Barabási and R. Albert, Science 286, 509 (1999); R. Albert and
A.-L. Barabási, Rev. Mod. Phys. 74, 47 (2002).
[2] S. Boccaletti and L.M. Pecora, Chaos 16, 015101 (2006); A.
E. Motter, M. A. Matı́as, J. Kurths, E. Ott, Physica D 224, 7
(2006); S. Boccaletti, V. Latora, Y. Moreno, M. Chavez, and
D.-U. Hwang, Phys. Rep. 424, 175 (2006).
[3] X.F. Wang and G. Chen, Int. J. Bifurcation Chaos Appl. Sci.
Eng. 12, 187 (2002).
[4] T. Nishikawa, A. E. Motter, Y.-C. Lai, and F. C. Hoppensteadt,
Phys. Rev. Lett. 91, 014101 (2003).
[5] A.E. Motter, C. Zhou, and J. Kurths, Europohys. Lett. 69, 334
(2005); Phys. Rev. E 71, 016116 (2005); AIP Conf. Proc. 776,
201 (2005); C. Zhou, A.E. Motter, and J. Kurths, Phys. Rev.
Lett. 96, 034101 (2006).
[6] M. Chavez, D.-U. Hwang, A. Amann, H.G.E. Hentschel, and S.
Boccaletti, Phys. Rev. Lett. 94, 218701 (2005); D.-U. Hwang,
M. Chavez, A.Amann, and S. Boccaletti, Phys. Rev. Lett. 94,
138701 (2005).
[7] L.M. Pecora and T.L. Carroll, Phys. Rev. Lett. 80, 2109 (1998);
M. Barahona and L. M. Pecora, ibid, 89, 054101 (2002).
[8] D.-S. Lee, Phys. Rev. E 72, 026208 (2005); J. Gómez-
Gardeñes, Y. Moreno, and A. Arenas, Phys. Rev. Lett. 98,
034101 (2007).
[9] A. Arenas, A. Dı́az-Guilera, and C. J. Pérez-Vicente, Phys. Rev.
Lett. 96, 114102 (2006); C. Zhou, L. Zemanová, G. Zamora, C.
C. Hilgetag, and J. Kurths, ibid, 97, 238103 (2006)
[10] S. Boccaletti, M. Ivancheko, V. Latora, A. Pluchino, and A.
Rapisarda, Preprint physics/0607179 (2006).
[11] M. Zhan, Z.G. Zheng, G. Hu, and X.H. Peng, Phys. Rev. E
62, 3552 (2000); Y. Zhang, G. Hu, H.A. Cerdeira, S. Chen, T.
Braun, and Y. Yao, Phys. Rev. E 63, 026211 (2001); B. Ao and
Z. Zheng, Europhys. Lett. 74, 229 (2006); X. Zhang, M. Fu, J.
Xiao, and G. Hu, Phys. Rev. E 74, 015202 (2006).
[12] E. Ott and J.C. Sommerer, Phys. Lett. A 188, 39 (1994); M.
Ding and W. Yang, Phys. Rev. E 56, 4009 (1997); S.H. Wang,
J. Xiao, X.G. Wang, B. Hu, and G. Hu, Eur. Phys. J. B 30, 571
(2002); J.G. Restrepo, E. Ott, and B. Hunt, Phys. Rev. Lett. 93,
114101 (2004).
[13] X.G. Wang, Y.-C. Lai, and C.-H. Lai, Preprint nlin.CD/0608035
(2006).
[14] L. A. Bunimovich, A. Lambert, and R. Lima, J. Stat. Phys. 65,
253 (1990); J.F. Heagy, T.L. Carroll, and L.M. Pecora, Phys.
Rev. Lett. 73, 3528 (1994); ibid, 74, 4185 (1995).
[15] M.A. Matias, V.P. Munuzuri, M.N. Lorenzo, I.P. Marino, and
V.P. Villar, Phys. Rev. Lett. 78, 219 (1997); G. Hu, J. Yang, and
W. Liu, Phys. Rev. E 58, 4440 (1998).
[16] A. Pikovsky and U. Feudel, Chaos 5, 253 (1995); X. Wang, M.
Zhan, C. H. Lai, and Y.-C. Lai, Phys. Rev. Lett. 92, 074102
(2004).
[17] N. Platt, E.A. Spiegel, and C. Tresser, Phys. Rev. Lett. 70,
279 (1993); E. Ott and J. C. Sommerer, Phys. Lett. A 188, 39
(1994); Y. Nagai and Y.-C. Lai, Phys. Rev. E 55, 1251 (1997).
[18] A.S. Pikovsky, M.G. Rosenblum, and J. Kurths, Synchroniza-
tion: A Universal Concept in Nonlinear Science (Cambridge
University Press, Cambridge, UK, 2001); S. Strogatz, Sync:
The Emerging Science of Spontaneous Order (Hyperion, New
York, 2003).
[19] C. Zhou and J. Kurths, Chaos 16, 015104 (2006).
[20] S. Jalan, R.E. Amritkar, and C.-K. Hu, Phys. Rev. E 72, 016211
(2005); ibid, 72, 016212 (2005).
[21] Y.-C. Lai, M. G. Frei, I. Osorio, and L. Huang, Phys. Rev. Lett.
92, 108102 (2007).
|
0704.0893 | Topological phase for spin-orbit transformations on a laser beam | Topological phase for spin-orbit transformations on a laser beam
C. E. R. Souza†, J. A. O. Huguenin†, P. Milman††, and A.Z. Khoury†
Instituto de F́ısica, Universidade Federal Fluminense, 24210-340 Niterói - RJ, Brasil. and
Laboratoire Matériaux et Phenomènes quantiques CNRS UMR 7162,
Université Denis Diderot, 2 Place Jussieu 75005 Paris cedex.
We investigate the topological phase associated with the double connectedness of the SO(3)
representation in terms of maximally entangled states. An experimental demonstration is provided
in the context of polarization and spatial mode transformations of a laser beam carrying orbital
angular momentum. The topological phase is evidenced through interferometric measurements and
a quantitative relationship between the concurrence and the fringes visibility is derived. Both the
quantum and the classical regimes were investigated.
PACS numbers: PACS: 03.65.Vf, 03.67.Mn, 07.60.Ly, 42.50.Dv
The seminal work by S. Pancharatnam [1] introduced
for the first time the notion of a geometric phase acquired
by an optical beam passing through a cyclic sequence
of polarization transformations. A quantum mechanical
parallel for this phase was later provided by M. Berry
[2]. Recently, interest in geometric phases was renewed by their potential applications to quantum computation. The experimental demonstration of a conditional phase gate was recently provided both in nuclear magnetic resonance [3] and in trapped ions [4]. Another optical manifestation of the geometric phase is the one acquired by cyclic spatial mode conversions of optical vortices. This kind of geometric phase was first proposed by van Enk [5] and recently found a beautiful demonstration by E. J. Galvez et al. [6].
The Hilbert space of a single qubit admits a useful geometric representation of pure states on the surface of a sphere. This is the Bloch sphere for spin-1/2 particles or the Poincaré sphere for polarization states of an optical beam. A Poincaré sphere representation can also be constructed for the first-order subspace of the spatial mode structure of an optical beam [7]. Therefore, in the quantum domain, we can attribute two qubits to a single photon, one related to its polarization state and another to its spatial structure. Geometric phases of a cyclic evolution of the mentioned states can be beautifully interpreted in such representations as being related to the solid angle of a closed trajectory. However, in order to compute the total phase gained in a cyclic evolution, one should also consider the dynamical phase. When added to the geometric phase, it leads to a total phase gain of π after a cyclic trajectory. This phase was first put into evidence using neutron interference [8]. The appearance of this π phase is due to the double connectedness of the three-dimensional rotation group SO(3). However, in the neutron experiment, only two-dimensional rotations were used, and this topological property of SO(3) was not unambiguously put into evidence, as explained in detail in [9, 10].
As discussed by P. Milman and R. Mosseri [9, 11], when
the quantum state of two qubits is considered, the mathe-
matical structure of the Hilbert space becomes richer and
the phase acquired through cyclic evolutions demands a
more careful inspection. The naive sum of independent
phases, one for each qubit, is applicable only for prod-
uct states. In this case, the two qubits are geometrically
represented by two independent Bloch spheres. When
a more general partially entangled pure state is consid-
ered, the phase acquired through a cyclic evolution has
a more complex structure and can be separated in three
contributions: dynamical, geometrical and topological.
Maximally entangled states are represented solely on the volume of the SO(3) sphere, which has radius π and its diametrically opposite points identified. This construction
reveals two kinds of cyclic evolutions, each one mapped
to a different homotopy class of closed trajectories in the
SO(3) sphere. One kind is mapped to closed trajecto-
ries that do not cross the surface of the sphere (0−type)
and the other one is mapped to trajectories that cross
the surface (π−type). The phase acquired by a maxi-
mally entangled state is 0 for the first kind and π for the
second one.
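The distinction between the two homotopy classes can be checked numerically by lifting a closed SO(3) path to SU(2) and composing the incremental rotations along it. The sketch below is a generic illustration (not tied to the optical implementation discussed later): a loop whose rotation angle sweeps a full 2π returns minus the identity, whereas a loop that never reaches the surface of the sphere returns the identity.

import numpy as np

I2 = np.eye(2, dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def su2_step(dphi, n_dot_sigma):
    # incremental rotation by dphi about a fixed axis, lifted to SU(2)
    return np.cos(dphi / 2) * I2 - 1j * np.sin(dphi / 2) * n_dot_sigma

# pi-type loop: the rotation angle about z sweeps 0 -> 2*pi (crosses the sphere surface)
U = I2.copy()
for dphi in np.full(1000, 2 * np.pi / 1000):
    U = su2_step(dphi, sigma_z) @ U
print(np.round(U, 6))      # approximately minus the identity: the pi phase

# 0-type loop: the angle goes out to pi/2 and back, never reaching the surface
V = I2.copy()
for dphi in np.concatenate([np.full(500, np.pi / 1000), np.full(500, -np.pi / 1000)]):
    V = su2_step(dphi, sigma_z) @ V
print(np.round(V, 6))      # the identity: no extra phase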
In the present work we demonstrate the topological phase associated with polarization and spatial mode transformations of an optical vortex. This phase appears first in the classical description of a paraxial beam with an arbitrary polarization state and has its quantum mechanical counterpart in the spin-orbit entanglement of a single photon, which constitutes one possible realization of a two-qubit system and of the topological phase discussed in Ref. [9]. However, it is interesting to observe that, like the Pancharatnam phase, the two-qubit topological phase also admits a classical manifestation, since it can be implemented on the classical amplitude of the optical field. This is also the first experiment unambiguously showing the double connectedness of the rotation group SO(3). The optical modes used in our experiment have a mathematical structure analogous to that of entangled states, so that the geometrical representation developed in [10] also applies and the results of Refs. [9, 11] can be experimentally demonstrated. When excited with single photons, these modes give rise to single-particle entangled
states and provide a more direct relationship with the
ideas put forward in Refs.[9, 10, 11]. This regime is also
investigated in the present work. There are a number of
quantum computing protocols that can be implemented
with single particle entanglement and will certainly ben-
efit from our results.
Let us now combine the spin and orbital degrees of
freedom in the framework of the classical theory in order
to build the same geometric representation applicable to
a two-qubit quantum state. Consider a general first order
spatial mode with arbitrary polarization state:
E(r) = αψ+(r)êH + βψ+(r)êV + γψ−(r)êH
+ δψ−(r)êV , (1)
where êH(V ) are two linear polarization unit vectors
along two orthogonal directions H and V , and ψ±(r)
are the normalized first order Laguerre-Gaussian pro-
files which are orthogonal solutions of the paraxial wave
equation [12]. We may now define two classes of spatial-
polarization modes: the separable (S) and the nonsepa-
rable (NS) ones. The S modes are of the form
E(r) = (α+ψ+(r) + α−ψ−(r)) (βH êH + βV êV ) . (2)
For these modes, a single polarization state can be attributed to the whole wavefront of the paraxial beam.
They play the role of separable two-qubit quantum
states.
For nonseparable (NS) paraxial modes, the polariza-
tion state varies across the wavefront. As for entangle-
ment in two-qubit quantum states, the separability of a
paraxial mode can be quantified by the analogous defi-
nition of concurrence. For the spin-orbit mode described
by Eq.(1), it is given by:
C = 2 | αδ − βγ | . (3)
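As a quick numerical check of Eq. (3), the short sketch below evaluates the concurrence of a spin-orbit mode directly from its four coefficients, normalized as in Eq. (1); a separable mode gives C = 0 and a maximally nonseparable one gives C = 1.

import numpy as np

def concurrence(alpha, beta, gamma, delta):
    # C = 2 |alpha*delta - beta*gamma| for the normalized mode of Eq. (1)
    c = np.array([alpha, beta, gamma, delta], dtype=complex)
    c = c / np.linalg.norm(c)                     # enforce normalization
    return 2.0 * abs(c[0] * c[3] - c[1] * c[2])

print(concurrence(1, 0, 0, 0))                    # separable mode:              C = 0
print(concurrence(1, 0, 0, 1))                    # maximally nonseparable mode: C = 1
eps = 0.3
print(concurrence(np.sqrt(eps), 0, 0, np.sqrt(1 - eps)))   # partially nonseparable mode of Eq. (7): 2*sqrt(eps*(1-eps))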
Let us first consider the maximally nonseparable
modes (MNS) of the form
E(r) = αψ+(r)êH + βψ+(r)êV − β∗ψ−(r)êH
+ α∗ψ−(r)êV . (4)
For these modes C = 1. It is important to mention that
the concept of entanglement does not apply to the MNS modes, since the object described by Eq.(4) is not a quan-
tum state, but a classical amplitude. However, we can
build an SO(3) representation of the MNS modes as it
was done in Refs.[11, 13]. Let us define the following
normalized MNS modes:
E1(r) = (1/√2) [ψ+(r)êH + ψ−(r)êV ] ,
E2(r) = (1/√2) [ψ+(r)êH − ψ−(r)êV ] ,   (5)
E3(r) = (1/√2) [ψ+(r)êV + ψ−(r)êH ] ,
E4(r) = (1/√2) [ψ+(r)êV − ψ−(r)êH ] .
FIG. 1: Experimental setup.
The SO(3) sphere is then constructed in the following
way: mode E1 is represented by the center of the sphere,
while modes E2, E3, and E4 are represented by three
points on the surface, connected to the center by three
mutually orthogonal segments. Each point of the SO(3)
sphere corresponds to a MNS mode. Following the recipe
given in Ref.[13], the coefficients α and β of Eq.(4) are
parametrized as:
α = cos(a/2) − i kz sin(a/2) ,
β = −(ky + i kx) sin(a/2) ,   (6)
where (kx, ky , kz) = k is a unit vector, and a is an angle
between 0 and π. With this parametrization, each MNS
mode is represented by the vector ak in the sphere.
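A brief consistency check of this construction is sketched below. It is based on the form of Eq. (6) as reconstructed here (the half-angle a/2 is assumed): the coefficients obtained for a = π along the three orthogonal axes reproduce, up to a global phase, the surface modes E2, E3 and E4 of Eq. (5), while a = 0 gives E1 at the center.

import numpy as np

def mns_coefficients(a, k):
    # (alpha, beta) of Eq. (4) for the point a*k of the SO(3) sphere, using Eq. (6) with half-angle a/2
    kx, ky, kz = np.asarray(k, dtype=float) / np.linalg.norm(k)
    alpha = np.cos(a / 2) - 1j * kz * np.sin(a / 2)
    beta = -(ky + 1j * kx) * np.sin(a / 2)
    return alpha, beta

def mode_vector(alpha, beta):
    # coefficients of (psi+ eH, psi+ eV, psi- eH, psi- eV) in Eq. (4), up to normalization
    return np.array([alpha, beta, -np.conj(beta), np.conj(alpha)])

E1 = np.array([1, 0, 0, 1])    # coefficient vectors of the modes in Eq. (5)
E2 = np.array([1, 0, 0, -1])
E3 = np.array([0, 1, 1, 0])
E4 = np.array([0, 1, -1, 0])

for target, (a, k) in zip([E1, E2, E3, E4],
                          [(0.0, (0, 0, 1)), (np.pi, (0, 0, 1)), (np.pi, (1, 0, 0)), (np.pi, (0, 1, 0))]):
    v = mode_vector(*mns_coefficients(a, k))
    overlap = abs(np.vdot(target, v)) / (np.linalg.norm(target) * np.linalg.norm(v))
    print(a, k, round(overlap, 6))      # 1.0 means equal up to a global phase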
In order to evidence the topological phase for cyclic
transformations, we must follow two different closed
paths, each one belonging to a different homotopy class,
and compare their phases. The experimental setup is
sketched in Fig.(1). First, a linearly polarized TEM00
laser mode is diffracted on a forked grating used to gen-
erate Laguerre-Gaussian beams [14]. The two side orders
carrying the ψ+(r) and ψ−(r) spatial modes are trans-
mitted through half waveplates HWP-A and HWP-B, fol-
lowed by two orthogonal polarizers Pol-V and Pol-H, and
finally recombined at a beam splitter (BS-1). Half wave-
plates HWP-A and HWP-B are oriented so that their
fast axis are paralell. This allows us to adjust the mode
separability at the output of BS-1 without changing the
corresponding output power, what prevents normaliza-
tion issues.
Experimentally, an MNS mode is produced when both
HWP-A and HWP-B are oriented at 22.5o , so that the
FIG. 2: Interference patterns for a) a maximally nonseparable and b) a separable mode. From left to right, the images were obtained with QWP-2 oriented at −45o, 0o, and 45o, respectively.
setup prepares mode E1, located at the center of the
sphere. Other MNS modes can then be obtained by uni-
tary transformations in only one degree of freedom. Since polarization is far easier to manipulate than spatial modes, we choose to implement the cyclic transformations in the
SO(3) sphere using waveplates. The MNS mode E1 is
first transmitted through three waveplates. The first one
(HWP-1) is oriented at 0o and makes the transformation
E1 → E2, the second one (HWP-2) is oriented at −45o
and makes the transformation E2 → E3, and the third
one (HWP-3) is oriented at 90o and makes the transfor-
mation E3 → E4. Finally, two alternative closures of
the path are performed in a Michelson interferometer.
In one arm a π−type closure is implemented by dou-
ble pass through a quarter-waveplate (QWP-1) fixed at
−45o. In the other arm, either a 0−type or a π−type
closure is performed by a double pass through another
quarter-waveplate (QWP-2) oriented at a variable angle
between −45o (π−type) and 45o (0−type). These tra-
jectories are analogous to spin rotations around different
directions of space [13]. They evidence the topological
properties of the three dimensional rotation group.
In order to provide spatial interference fringes, the in-
terferometer was slightly misaligned. The interference
patterns were registered with either a charge coupled
device (CCD) camera or a photocounter (PC), depend-
ing on the working power. First, we registered the
interference patterns obtained when an intense beam
is sent through the apparatus. The images shown in
Fig.(2a) demonstrate clearly the π topological phase
shift. The phase singularity characteristic of Laguerre-
Gaussian beams can be easily identified in the images
and is very useful to evidence the phase shift. When
both arms perform the same kind of trajectory in the
SO(3) sphere (QWP-1 and QWP-2 oriented at −45o), a
bright fringe falls on the phase singularity. When QWP-
2 is oriented at 45o, the trajectory performed in each arm
belongs to a different homotopy class and a dark fringe
falls on the singularity, which clearly demonstrates the π
topological phase shift.
In order to discuss the role played by mode separa-
bility, it is interesting to observe the pattern obtained
when QWP-2 is oriented at intermediate angles, which
correspond to open trajectories in the SO(3) sphere. We observed that during the phase-shift transition the interference fringes are deformed and finally return to their initial topology with the π phase shift. This is clearly il-
lustrated by the intermediate image displayed in Fig.(2a),
which corresponds to QWP-2 oriented at 0o . Notice that,
despite the deformation, the interference fringes display
high visibility.
As we mentioned above, the mode preparation settings
can be adjusted in order to provide a separable mode. For
example, when we set HWP-A and B both at 45o , the
output of BS-1 is the separable mode ψ+(r)êH , which
can be represented in the Poincaré spheres for spatial
and polarization modes. The same π phase shift can
be observed when QWP-2 is rotated, but the transition
is essentially different. The interference pattern is not topologically deformed, but its visibility decreases until it completely vanishes at 0o, and then reappears with
the π phase shift. This transition is clearly illustrated
by the three patterns displayed in Fig.(2b). In this case,
the π phase shift is of purely geometric nature, since the
spatial mode is kept fixed while the polarization mode is
turned around the equator of the corresponding Poincaré
sphere.
The relationship between mode separability and
fringes visibility can be clarified by a straightforward cal-
culation of the interference pattern. Therefore, let us
consider that HWP-A and B are oriented so that the
output of BS-1 is described by
Eǫ(r) = √ǫ ψ+(r)êH + √(1 − ǫ) ψ−(r)êV , (7)
where ǫ is the fraction of the ψ+(r)êH mode in the out-
put power. Now, let us consider that QWP-2 is oriented
at 0o and suppose that the two arms of the Michelson
interferometer are slightly misaligned so that the wave-vector difference between the two outputs is δk = δk x̂,
orthogonal to the propagation axis. Taking into account
the passage through the three half waveplates, and the
transformation performed in each arm of the Michelson
interferometer, we arrive at the following expression for
the interference pattern:
I(r) = 2 |ψ(r)|² [ 1 + 2√(ǫ(1 − ǫ)) sin 2φ sin(δk x) ] , (8)
where φ = arg(x + iy) is the angular coordinate in
the transverse plane of the laser beam, and |ψ(r)|2 is
the doughnut profile of the intensity distribution of a
Laguerre-Gaussian beam. It is clear from Eq.(8) that the
visibility of the interference pattern is 2√(ǫ(1 − ǫ)), which
is precisely the concurrence of Eǫ(r) as given by Eq.(3).
Therefore, the fringes visibility is quantitatively related
to the separability of the mode sent through the setup.
However, the numerical coincidence with the concurrence
FIG. 3: Interference patterns measured in the photocount-
ing regime for ǫ = 1/2 . Empty and full circles correspond
to QWP-2 oriented at −45o and 45o, respectively. Solid and
dashed lines are theoretical fits with sinusoidal functions mod-
ulated by a Laguerre-Gaussian envelope. The phase shift
given by the fits is 3.14 rad .
is restricted to modes of the form given by Eq.(7). In fact,
it is important to stress that the fringes visibility can-
not be regarded as a measure of the concurrence for any
nonseparable mode, but for our purposes it evidences the
topological nature of the phase shift implemented by the
experimental setup. A detailed discussion on the mea-
surement of the concurrence is available in Ref.[15].
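To illustrate this statement numerically, the sketch below samples the bracketed fringe term of Eq. (8) at the fixed angular coordinate φ = π/4 (where sin 2φ = 1), computes the visibility (Imax − Imin)/(Imax + Imin), and compares it with the concurrence 2√(ǫ(1 − ǫ)); the common envelope |ψ(r)|² drops out of this ratio.

import numpy as np

def fringe_visibility(eps, phi=np.pi / 4, npts=1000):
    # visibility of the fringe term of Eq. (8) sampled along the misalignment direction at fixed phi
    delta_kx = np.linspace(0.0, 2.0 * np.pi, npts)
    I = 1.0 + 2.0 * np.sqrt(eps * (1.0 - eps)) * np.sin(2.0 * phi) * np.sin(delta_kx)
    return (I.max() - I.min()) / (I.max() + I.min())

for eps in [0.0, 0.1, 0.3, 0.5]:
    print(eps, fringe_visibility(eps), 2.0 * np.sqrt(eps * (1.0 - eps)))   # visibility equals the concurrence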
Next, we briefly discuss the quantum domain. When
a partially nonseparable mode like Eǫ(r) is occupied by
a single photon, this leads to partially entangled single
particle quantum states of the kind
|ϕǫ〉 = √ǫ |+H〉 + √(1 − ǫ) |−V 〉 . (9)
Experimentally, we attenuated the laser beam down to
the single photon regime, and scanned a photocounting
module across the interference pattern. First, HWP-A
and B were set at 22.5o (ǫ = 1/2) in order to evidence
the topological phase in this regime. Fig.(3) displays the
interference patterns obtained with QWP-2 oriented at
−45o and 45o. The π phase shift is again clear.
The relationship between the fringes visibility and the
state separability was evidenced by fixing QWP-2 at
0o and rotating HWP-A and B by an angle θ so that
ǫ = cos² 2θ. Fig.(4) shows the experimental results for
the fringes visibility for several values of ǫ . The solid
line corresponds to the analytical expression of the con-
currence, showing a very good agreement with the exper-
imental values.
As a conclusion, we demonstrated the double connectedness of the SO(3) rotation group and the topo-
logical phase acquired by a laser beam passing through
a cycle of spin-orbit transformations. We investigated
both the classical and the quantum regimes and com-
FIG. 4: Fringes visibility as a function of ǫ. The solid line is
a theoretical fit with C = 2√(ǫ(1 − ǫ)).
pared the separability of the mode travelling through the
apparatus with the visibility of the interference fringes.
Our results may constitute a useful tool for quantum
computing and quantum information protocols.
The authors are deeply grateful to S.P. Walborn and
P.H. Souto Ribeiro for their precious help with the photo-
counting system and for fruitful discussions. Funding was
provided by Coordenação de Aperfeiçoamento de Pes-
soal de Nı́vel Superior (CAPES), Fundação de Amparo
à Pesquisa do Estado do Rio de Janeiro (FAPERJ-BR),
and Conselho Nacional de Desenvolvimento Cient́ıfico e
Tecnológico (CNPq).
[1] S. Pancharatnam, Proc. Ind. Acad. Sci. 44, 247 (1956).
[2] M. V. Berry, Proc. R. Soc. London A 392, 45 (1984).
[3] J. A. Jones, V. Vedral, A. Ekert, and G. Castagnoli, Na-
ture (London) 403, 869 (2000).
[4] L.-M. Duan, J. I. Cirac, and P. Zoller, Science 292, 1695
(2001).
[5] S.J. van Enk, Opt. Comm. 102, 59 (1993).
[6] E. J. Galvez et al, Phys. Rev. Lett. 90, 203901 (2003).
[7] M. J. Padgett and J. Courtial, Opt. Lett. 24, 430 (1999).
[8] S. A. Werner et al, Phys. Rev. Lett. 35, 1053 (1975).
[9] P. Milman, and R. Mosseri, Phys. Rev. Lett. 90, 230403
(2003).
[10] R. Mosseri, and R. Dandoloff, J. Phys. A 34, 10243
(2003).
[11] P. Milman, Phys. Rev. A 73, 062118 (2006).
[12] A. Yariv, "Quantum Electronics", John Wiley & Sons,
third ed. (1988).
[13] W. LiMing, Z. L. Tang, and C. J. Liao, Phys. Rev. A 69,
064301 (2004).
[14] N. R. Heckenberg, R. McDuff, C. P. Smith, and A. G.
White, Opt. Lett. 17, 221 (1992); G.F. Brand, Am. J. of
Phys. 67, 55 (1999).
[15] S. Walborn, P. H. Souto Ribeiro, L. Davidovich, F.
Mintert, and A. Buchleitner, Nature 440, 1022 (2006).
|
0704.0894 | The Solar Neighborhood. XIX. Discovery and Characterization of 33 New
Nearby White Dwarf Systems | to appear in the Astronomical Journal
The Solar Neighborhood XIX:
Discovery and Characterization of 33 New Nearby White Dwarf
Systems
John P. Subasavage, Todd J. Henry
Georgia State University, Atlanta, GA 30302-4106
[email protected], [email protected]
P. Bergeron, P. Dufour
Département de Physique, Université de Montréal, C.P. 6128, Succ. Centre-Ville,
Montréal, Québec H3C 3J7, Canada
[email protected], [email protected]
Nigel C. Hambly
Scottish Universities Physics Alliance (SUPA), Institute for Astronomy, University of
Edinburgh Royal Observatory, Blackford Hill, Edinburgh EH9 3HJ, Scotland, UK
[email protected]
Thomas D. Beaulieu
Georgia State University, Atlanta, GA 30302-4106
[email protected]
ABSTRACT
We present spectra for 33 previously unclassified white dwarf systems brighter
than V = 17 primarily in the southern hemisphere. Of these new systems, 26 are
DA, 4 are DC, 2 are DZ, and 1 is DQ. We suspect three of these systems are unre-
solved double degenerates. We obtained V RI photometry for these 33 objects as
well as for 23 known white dwarf systems without trigonometric parallaxes, also
primarily in the southern hemisphere. For the 56 objects, we converted the pho-
tometry values to fluxes and fit them to a spectral energy distribution using the
spectroscopy to determine which model to use (i.e. pure hydrogen, pure helium,
or metal-rich helium), resulting in estimates of Teff and distance. Eight of the
new and 12 known systems are estimated to be within the NStars and Catalogue
of Nearby Stars (CNS) horizons of 25 pc, constituting a potential 18% increase
in the nearby white dwarf sample. Trigonometric parallax determinations are
underway via CTIOPI for these 20 systems.
One of the DCs is sufficiently cool that it displays absorption in the near infrared.
Using the distance determined via trigonometric parallax, we are able to constrain
the model-dependent physical parameters and find that this object is most likely a
mixed H/He atmosphere white dwarf similar to other cool white dwarfs identified
in recent years with significant absorption in the infrared due to collision-induced
absorption by molecular hydrogen.
Subject headings: solar neighborhood — white dwarfs — stars: evolution —
stars: distances — stars: statistics
1. Introduction
The study of white dwarfs (WDs) provides insight into WD formation rates, evolution, and space density. Cool WDs, in particular, provide limits on the age of the
Galactic disk and could represent some unknown fraction of the Galactic halo dark matter.
Individually, nearby WDs are excellent candidates for astrometric planetary searches because the astrometric signature is greater than for an identical WD system that is more distant. As a population, a complete volume-limited sample is necessary to provide unbiased statistics; however, their intrinsic faintness has allowed some to escape detection.
Of the 18 WDs with trigonometric parallaxes placing them within 10 pc of the Sun (the
RECONS sample), all but one have proper motions greater than 1.′′0 yr−1 (94%). By com-
parison, of the 230 main sequence systems (as of 01 January 2007) in the RECONS sample,
50% have proper motions greater than 1.′′0 yr−1. We have begun an effort to reduce this
apparent selection bias against slower-moving WDs to complete the census of nearby WDs.
This effort includes spectroscopic, photometric, and astrometric initiatives to characterize
newly discovered as well as known WDs without trigonometric parallaxes. Utilizing the Su-
perCOSMOS Sky Survey (SSS) for plate magnitude and proper motion information coupled
with data from other recently published proper motion surveys (primarily in the southern
hemisphere), we have identified relatively bright WD candidates via reduced proper motion
diagrams.
In this paper, we present spectra for 33 newly discovered WD systems brighter than
V = 17.0. Once an object is spectroscopically confirmed to be a WD (in this paper for
the first time or elsewhere in the literature), we obtain CCD photometry to derive Teff and
estimate its distance using a spectral energy distribution (SED) fit and a model atmosphere
analysis. If an object’s distance estimate is within the NStars (Henry et al. 2003) and CNS
(Gliese & Jahreiß 1991) horizons of 25 pc, it is then added to CTIOPI (Cerro Tololo Inter-
American Observatory Parallax Investigation) to determine its true distance (e.g. Jao et al.
2005, Henry et al. 2006).
2. Candidate Selection
We used recent high proper motion (HPM) surveys (Pokorny et al. 2004; Subasavage et al.
2005a,b; Finch et al. 2007) in the southern hemisphere for this work because our long-term
astrometric observing program, CTIOPI, is based in Chile. To select good WD candidates for spectroscopic observations, plate magnitudes via SSS and 2MASS JHKS are extracted for HPM objects. Each object's (R59F − J) color and reduced proper motion (RPM) are then plotted. RPM uses proper motion as a proxy for proximity, which is certainly not always valid; however, it is effective at separating WDs from subdwarfs and main sequence stars.
Figure 1 displays an RPM diagram for the 33 new WDs presented here. To serve as examples
for the locations of subdwarfs and main sequence stars, recent HPM discoveries from the
SuperCOSMOS-RECONS (SCR) proper motion survey are also plotted (Subasavage et al.
2005a,b). The solid line represents a somewhat arbitrary cutoff separating subdwarfs and
WDs. Targets are selected from the region below the solid line. Note there are four stars
below this line that are not represented with asterisks. Three have recently been spectro-
scopically confirmed as WDs (Subasavage et al., in preparation) and one as a subdwarf (SCR
1227−4541, denoted by “sd”) that fell just below the line at (R59F − J) = 1.4 and HR59F =
19.8 (Subasavage et al. 2005b).
Completeness limits (S/N > 10) for 2MASS are J = 15.8, H = 15.1, and KS = 14.3
for uncontaminated point sources (Skrutskie et al. 2006). The use of J provides a more
reliable RPM diagram color for objects more than a magnitude fainter than the KS limit,
which is particularly important for the WDs (with (J − KS) < 0.4) discussed here. Only
objects bright enough to have 2MASS magnitudes are included in Figure 1. Consequently,
all WD candidates are brighter than V ∼ 17, and are therefore likely to be nearby. Objects
that fall in the WD region of the RPM diagram were cross-referenced with SIMBAD and
McCook & Sion (1999)1 to determine those that were previously classified as WDs. The
remainder were targeted for spectroscopic confirmation.
The remaining 33 candidates comprise the “new sample” whose spectra are presented
in this work, while the “known sample” constitutes the 23 previously identified WD systems
without trigonometric parallaxes for which we have complete V RIJHKS data.
3. Data and Observations
3.1. Astrometry and Nomenclature
The traditional naming convention for WDs uses the object’s epoch 1950 equinox 1950
coordinates. Coordinates for the new sample were extracted from 2MASS along with the
Julian date of observation. These coordinates were adjusted to account for proper motion
from the epoch of 2MASS observation to epoch 2000 (hence epoch 2000 equinox 2000).
The coordinates were then transformed to equinox 1950 coordinates using the IRAF proce-
dure precess. Finally, the coordinates were again adjusted (opposite the direction of proper
motion) to obtain epoch 1950 equinox 1950 coordinates.
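For reference, the same sliding and precession can be approximated without IRAF using astropy, as in the sketch below. It assumes the proper-motion components (rather than the total motion and position angle tabulated in Table 3) are supplied, and it ignores the small change of the proper-motion components themselves under the FK5-to-FK4 transformation.

import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord, FK4, FK5
from astropy.time import Time

def epoch_equinox_1950(ra_deg, dec_deg, jd_2mass, mu_ra_cosdec, mu_dec):
    """ra_deg, dec_deg: 2MASS position; jd_2mass: Julian date of the 2MASS scan;
    proper-motion components in arcsec/yr (mu_ra_cosdec includes the cos(dec) factor)."""
    dt = (Time(2000.0, format='jyear') - Time(jd_2mass, format='jd')).to(u.yr)

    # slide from the 2MASS epoch to epoch 2000.0
    dec = dec_deg * u.deg + mu_dec * u.arcsec / u.yr * dt
    ra = ra_deg * u.deg + mu_ra_cosdec * u.arcsec / u.yr * dt / np.cos(dec)

    # precess from equinox J2000 to equinox B1950 (the role played here by IRAF's precess)
    c1950 = SkyCoord(ra=ra, dec=dec, frame=FK5(equinox='J2000')).transform_to(
        FK4(equinox='B1950', obstime='B1950'))

    # slide back 50 yr, opposite the direction of proper motion, to epoch 1950
    dec50 = c1950.dec.deg * u.deg - mu_dec * u.arcsec / u.yr * (50 * u.yr)
    ra50 = c1950.ra.deg * u.deg - mu_ra_cosdec * u.arcsec / u.yr * (50 * u.yr) / np.cos(dec50)
    return ra50.to(u.deg), dec50.to(u.deg)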
Proper motions were taken from various proper motion surveys in addition to unpub-
lished values obtained via the SCR proper motion survey while recovering previously known
HPM objects. Appendix A contains the proper motions used for coordinate sliding as well
as J2000 coordinates and alternate names.
3.2. Spectroscopy
Spectroscopic observations were taken on five separate observing runs in 2003 Octo-
ber and December, 2004 March and September, and 2006 May at the Cerro Tololo Inter-
American Observatory (CTIO) 1.5m telescope as part of the SMARTS Consortium. The
Ritchey-Chrétien Spectrograph and Loral 1200×800 CCD detector were used with grating
09, providing 8.6 Å resolution and wavelength coverage from 3500 to 6900 Å. Observations
consisted of two exposures (typically 20 - 30 minutes each) to permit cosmic ray rejection,
followed by a comparison HeAr lamp exposure to calibrate wavelength for each object. Bias
subtraction, dome/sky flat-fielding, and extraction of spectra were performed using standard
IRAF packages.
1The current web based catalog can be found at http://heasarc.nasa.gov/W3Browse/all/mcksion.html
A slit width of 2′′ was used for the 2003 and 2004 observing runs. Some of these data have
flux calibration problems because the slit was not rotated to be aligned along the direction
of atmospheric refraction. In conjunction with telescope “jitter”, light was sometimes lost
preferentially at the red end or the blue end for these data.
A slit width of 6′′, used for the 2006 May run, eliminated most of the flux calibration
problems even though the slit was not rotated. All observations were taken at an airmass
of less than 2.0. Within our wavelength window, the maximum atmospheric differential
refraction is less than 3′′ (Filippenko 1982). A test was performed to verify that no resolution
was lost by taking spectra of an F dwarf with sharp absorption lines using slit widths from 2′′ to
10′′ in 2′′ increments. Indeed, no resolution was lost.
Spectra for the new DA WDs with Teff ≥ 10000 K are plotted in Figure 2 while spectra
for the new DA WDs with Teff < 10000 K are plotted in Figure 3. Featureless DC spectra
are plotted in Figure 4. Spectral plots as well as model fits for unusual objects are described
in § 4.2.
3.3. Photometry
Optical V RI (Johnson V , Kron-Cousins RI) for the new and known samples was ob-
tained using the CTIO 0.9 m telescope during several observing runs from 2003 through
2006 as part of the Small and Moderate Aperture Research Telescope System (SMARTS)
Consortium. The 2048×2046 Tektronix CCD camera was used with the Tek 2 V RI filter
set2. Standard stars from Graham (1982), Bessel (1990), and Landolt (1992) were observed
each night through a range of airmasses to calibrate fluxes to the Johnson-Kron-Cousins
system and to calculate extinction corrections.
Bias subtraction and dome flat-fielding (using calibration frames taken at the beginning
of each night) were performed using standard IRAF packages. When possible, an aperture
14′′ in diameter was used to determine the stellar flux, which is consistent with the aperture
used by Landolt (1992) for the standard stars. If cosmic rays fell within this aperture, they
were removed before flux extraction. In cases of crowded fields, aperture corrections were
applied and ranged from 4′′ to 12′′ in diameter using the largest aperture possible without
including contamination from neighboring sources. Uncertainties in the optical photometry
were derived by estimating the internal night-to-night variations as well as the external errors
(i.e. fits to the standard stars). A complete discussion of the error analysis can be found in
2The central wavelengths for V , R, and I are 5475, 6425, and 8075Å respectively.
Henry et al. (2004). We adopt a total error of ±0.03 mag in each band. The final optical
magnitudes are listed in Table 1 as well as the number of nights each object was observed.
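A bare-bones version of the aperture measurement described above, using photutils, is sketched below. The pixel scale is an assumed illustrative value, and the cosmic-ray cleaning, aperture corrections, extinction corrections, and standard-star calibration discussed in the text are not reproduced.

import numpy as np
from photutils.aperture import CircularAperture, CircularAnnulus, aperture_photometry

PIXEL_SCALE = 0.40   # arcsec per pixel (assumed value, for illustration only)

def instrumental_mag(image, x, y, exptime, ap_diameter_arcsec=14.0):
    # 14 arcsec diameter aperture, matching the Landolt (1992) standard-star apertures
    r = 0.5 * ap_diameter_arcsec / PIXEL_SCALE
    aperture = CircularAperture((x, y), r=r)
    annulus = CircularAnnulus((x, y), r_in=r + 10.0, r_out=r + 20.0)

    # crude local-sky estimate from the mean level in the annulus
    sky_per_pixel = aperture_photometry(image, annulus)['aperture_sum'][0] / annulus.area
    flux = aperture_photometry(image, aperture)['aperture_sum'][0] - sky_per_pixel * aperture.area
    return -2.5 * np.log10(flux / exptime)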
Infrared JHKS magnitudes and errors were extracted via Aladin from 2MASS and are
also listed in Table 1. JHKS magnitude errors are, in most cases, significantly larger than for
V RI, and the errors listed give a measure of the total photometric uncertainty (i.e. include
both global and systematic components). In cases when the magnitude error is null, the star
is near the magnitude limit of 2MASS and the photometry is not reliable.
4. Analysis
4.1. Modeling of Physical Parameters
The pure hydrogen, pure helium, and mixed hydrogen and helium model atmospheres
used to model the WDs are described at length in Bergeron et al. (2001) and references
therein, while the helium-rich models appropriate for DQ and DZ stars are described in
Dufour et al. (2005, 2007), respectively. The atmospheric parameters for each star are ob-
tained by converting the optical V RI and infrared JHKS magnitudes into observed fluxes,
and by comparing the resulting SEDs with those predicted from our model atmosphere cal-
culations. The first step is accomplished by transforming the magnitudes into average stellar
fluxes f_m received at Earth using the calibration of Holberg et al. (2006) for photon counting
devices. The observed and model fluxes, which depend on Teff , log g, and atmospheric
composition, are related by the equation

f_m = 4π (R/D)^2 H_m ,    (1)

where R/D is the ratio of the radius of the star to its distance from Earth, and H_m is
the Eddington flux, properly averaged over the corresponding filter bandpass. Our fitting
technique relies on the nonlinear least-squares method of Levenberg-Marquardt (Press et al.
1992), which is based on a steepest descent method. The value of χ2 is taken as the sum over
all bandpasses of the difference between both sides of eq. (1), weighted by the corresponding
photometric uncertainties. We consider only Teff and the solid angle to be free parameters,
and the uncertainties of both parameters are obtained directly from the covariance matrix
of the fit. In this study, we simply assume a value of log g = 8.0 for each star.
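To make this step concrete, the sketch below carries out the same χ2 minimization with Teff and the overall scale 4π(R/D)^2 as the only free parameters. A blackbody Eddington flux stands in for the real model-atmosphere grids; the V RI central wavelengths are taken from the footnote above, while the JHKS wavelengths are nominal 2MASS values assumed for illustration.

import numpy as np
from scipy.optimize import least_squares

# central wavelengths in cm (VRI from the footnote above; JHKs nominal 2MASS values)
WAVELENGTHS = {'V': 5475e-8, 'R': 6425e-8, 'I': 8075e-8,
               'J': 1.24e-4, 'H': 1.66e-4, 'Ks': 2.16e-4}
H_PLANCK, C_LIGHT, K_B = 6.626e-27, 2.998e10, 1.381e-16   # cgs

def eddington_flux_stub(teff, wav_cm):
    # blackbody stand-in for the model Eddington flux H_m; a real analysis would
    # interpolate in the pure-H / pure-He model grids instead
    b_lambda = 2.0 * H_PLANCK * C_LIGHT**2 / wav_cm**5 / \
               np.expm1(H_PLANCK * C_LIGHT / (wav_cm * K_B * teff))
    return 0.25 * b_lambda

def fit_sed(bands, f_obs, f_err):
    wav = np.array([WAVELENGTHS[b] for b in bands])

    def residuals(p):
        teff, scale = p                      # scale plays the role of 4*pi*(R/D)**2
        return (f_obs - scale * eddington_flux_stub(teff, wav)) / f_err

    fit = least_squares(residuals, x0=[8000.0, 1e-20], x_scale=[1e3, 1e-20], method='lm')
    return fit.x                             # best-fit Teff and scale factor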
As discussed in Bergeron et al. (1997, 2001), the main atmospheric constituent — hy-
drogen or helium — is determined by comparing the fits obtained with both compositions,
or by the presence of Hα in the optical spectra. For DQ and DZ stars, we rely on the
procedure outlined in Dufour et al. (2005, 2007), respectively: we obtain a first estimate
of the atmospheric parameters by fitting the energy distribution with an assumed value of
the metal abundances. We then fit the optical spectrum to measure the metal abundances,
and use these values to improve our atmospheric parameters from the energy distribution.
This procedure is iterated until a self-consistent photometric and spectroscopic solution is
achieved.
The derived values for Teff for each object are listed in Table 1. Also listed are the
spectral types for each object determined based on their spectral features. The DAs have
been assigned a half-integer temperature index as defined by McCook & Sion (1999), where
the temperature index equals 50,400/Teff. As an external check, we compare in Figure 5
the photometric effective temperatures for the DA stars in Table 1 with those obtained by
fitting the observed Balmer line profiles (Figs. 2 and 3) using the spectroscopic technique
developed by Bergeron et al. (1992b), and recently improved by Liebert et al. (2003). Our
grid of pure hydrogen, NLTE, and convective model atmospheres is also described in Liebert
et al. The uncertainties of the spectroscopic technique are typically 0.038 dex in log g
and 1.2% in Teff according to that study. We adopt a slightly larger uncertainty of 1.5%
in Teff (Spec) because of the problematic flux calibrations of the pre−2006 data (see § 3.2).
The agreement shown in Figure 5 is excellent, except perhaps at high temperatures where
the photometric determinations become more uncertain. It is possible that the significantly
elevated point in Figure 5, WD 0310−624 (labeled), is an unresolved double degenerate (see
§ 4.2). We refrain here from using the log g determinations in our analysis because these
are available only for the DA stars in our sample, and also because the spectra are not flux
calibrated accurately enough for that purpose.
Once the effective temperature and the atmospheric composition are determined, we
calculate the absolute visual magnitude of each star by combining the new calibration of
Holberg et al. (2006) with evolutionary models similar to those described in Fontaine et al.
(2001) but with C/O cores, q(He) ≡ log MHe/M⋆ = 10^−2 and q(H) = 10^−4 (representative
of hydrogen-atmosphere WDs), and q(He) = 10^−2 and q(H) = 10^−10 (representative of
helium-atmosphere WDs)3. By combining the absolute visual magnitude with the Johnson
V magnitude, we derive a first estimate of the distance of each star (reported in Table 1).
Errors on the distance estimates incorporate the errors of the photometry values as well as
an error of 0.25 dex in log g, which is the measured dispersion of the observed distribution
using spectroscopic determinations (see Figure 9 of Bergeron et al. 1992b).
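The distance estimate itself follows from the standard distance modulus. The short function below reproduces that step with a simple quadrature error propagation, in which sigma_mv is only a placeholder for the MV uncertainty implied by the photometric errors and the 0.25 dex scatter in log g.

import math

def photometric_distance(v_mag, m_v, sigma_v=0.03, sigma_mv=0.3):
    # d in pc from V - M_V = 5 log10(d) - 5; sigma_mv is an illustrative placeholder
    d_pc = 10.0 ** ((v_mag - m_v + 5.0) / 5.0)
    sigma_d = d_pc * (math.log(10.0) / 5.0) * math.hypot(sigma_v, sigma_mv)
    return d_pc, sigma_d

# e.g. WD 0141-675 (Table 1): V = 13.82 and d ~ 9.7 pc together imply M_V ~ 13.9
print(photometric_distance(13.82, 13.89))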
Of the 33 new systems presented here, 5 have distance estimates within 25 pc. Four
3see http://www.astro.umontreal.ca/˜bergeron/CoolingModels/
more systems require additional attention because distance estimates are derived via other
means. Three of these are likely within 25 pc. All four are further discussed in the next
section. In total, 20 WD systems (8 new and 12 known) are estimated (or determined) to be
within 25 pc and one additional common proper motion binary system possibly lies within
25 pc.
4.2. Comments on Individual Systems
Here we address unusual and interesting objects.
WD 0121−429 is a DA WD that exhibits Zeeman splitting of Hα and Hβ, thereby
making its formal classification DAH. The SED fit to the photometry is superb, yielding
a Teff of 6,369 ± 137 K. When we compare the strength of the absorption line trio with
that predicted using the Teff from the SED fit, the observed absorption appears too
shallow. Using the magnetic line fitting procedure outlined in Bergeron et al. (1992a), we
must include a 50% dilution factor to match the observed central line of Hα. In light of
this, we utilized the trigonometric parallax distance determined via CTIOPI of 17.7 ± 0.7
pc (Subasavage et al., in preparation) to further constrain this system. The resulting SED
fit, with distance (hence luminosity) as a constraint rather than a variable, implies a mass of
0.43 ± 0.03 M⊙. Given the age of our Galaxy, the lowest mass WD that could have formed
is ∼0.47 M⊙ (Iben & Renzini 1984). It is extremely unlikely that this WD formed through
single star evolution. The most likely scenario is that this is a double degenerate binary
with a magnetic DA component and a featureless DC component (necessary to dilute the
absorption at Hα), similar to G62-46 (Bergeron et al. 1993) and LHS 2273 (see Figure 33
of Bergeron et al. 1997). If this interpretation is correct, any number of component masses
and luminosities can reproduce the SED fit.
The spectrum and corresponding magnetic fit to the Hα lines (including the dilution)
is shown in Figure 6. The viewing angle, i = 65◦, is defined as the angle between the dipole
axis and the line of sight (i = 0 corresponds to a pole-on view). The best fit produces a
dipole field strength, Bd = 9.5 MG, and a dipole offset, az = 0.06 (in units of stellar radius).
The positive value of az implies that the offset is toward the observer. Only Bd is moderately
constrained; both i and az can vary significantly yet still produce a reasonable fit to the data
(Bergeron et al. 1992a).
WD 0310−624 is a DA WD that is one of the hottest in the new sample. Because of its
elevation significantly above the equal temperature line (solid) in Figure 5, it is possible that
it is an unresolved double degenerate with very different component effective temperatures.
In fact, this method has been used to identify unresolved double degenerate candidates (e.g.
Bergeron et al. 2001).
WD 0511−415 is a DA WD (spectrum is plotted in Figure 2) whose spectral fit
produces a Teff = 10,813 ± 219 K and a log g = 8.21 ± 0.10 using the spectral fitting
procedure of Liebert et al. (2003). This object lies near the red edge of the ZZ Ceti instability
strip as defined by Gianninas et al. (2006). If variable, this object would help to constrain
the cool edge of the instability strip in Teff , log g parameter space. Follow-up high speed
photometry is necessary to confirm variability.
WD 0622−329 is a DAB WD displaying the Balmer lines as well as weaker He I at
4472 and 5876 Å. The spectrum, shown in Figure 7, is reproduced best with a model having
Teff ∼43,700 K. However, the predicted He II absorption line at 4686 Å for a WD of this
Teff is not present in the spectrum. In contrast, the SED fit to the photometry implies a
Teff of ∼10,500 K (using either pure H or pure He models). Because the Teff values are
vastly discrepant, we explored the possibility that this spectrum is not characterized by a
single temperature. We modeled the spectrum assuming the object was an unresolved double
degenerate. The best fit implies one component is a DB with Teff = 14,170 ± 1,228 K and
the other component is a DA with Teff = 9,640 ± 303 K, similar to the unresolved DA +
DB degenerate binary PG 1115+166 analyzed by Bergeron & Liebert (2002). One can see
from Figure 7 that the spectrum is well modeled under this assumption. We conclude this
object is likely a distant (well beyond 25 pc) unresolved double degenerate.
WD 0840−136 is a DZ WD whose spectrum shows both Ca II (H & K) and Ca I (4226
Å) lines as shown in Figure 8. Fits to the photometric data for different atmospheric com-
positions indicate temperatures of about 4800-5000 K. However, fits to the optical spectrum
using the models of Dufour et al. (2007) cannot reproduce simultaneously all three calcium
lines. This problem is similar to that encountered by Dufour et al. (2007) where the atmo-
spheric parameters for the coolest DZ WDs were considered uncertain because of possible
high atmospheric pressure effects. We therefore utilize a photometric relation, valid for WDs of any
atmospheric composition, that links MV to (V − I) (Salim et al. 2004) to obtain a distance
estimate of 19.3 ± 3.9 pc.
WD 1054−226 was observed spectroscopically as part of the Edinburgh-Cape (EC)
blue object survey and assigned a spectral type of sdB+ (Kilkenny et al. 1997). As is evident
in Figure 3, the spectrum of this object is the noisiest of all the spectra presented here and
perhaps a bit ambiguous. As an additional check, this object was recently observed using
the ESO 3.6 m telescope and has been confirmed to be a cool DA WD (Bergeron, private
communication).
WD 1105−340 is a DA WD (spectrum is plotted in Figure 2) with a common proper
motion companion at a separation of 30.′′6 and position angle 107.1◦. The companion's spectral
type is M4Ve with VJ = 15.04, RKC = 13.68, IKC = 11.96, J = 10.26, H = 9.70, and KS
= 9.41. In addition to the SED derived distance estimate for the WD, we utilize the main
sequence distance relations of Henry et al. (2004) to estimate a distance to the red dwarf
companion. We obtain a distance estimate of 19.1 ± 3.0 pc for the companion, leaving
open the possibility that this system may lie just within 25 pc. A trigonometric parallax
determination is currently underway for confirmation.
WD 1149−272 is the only DQ WD discovered in the new sample. This object was
observed spectroscopically as part of the Edinburgh-Cape (EC) blue object survey for which
no features deeper than 5% were detected and was labeled a possible DC (Kilkenny et al.
1997). It is identified as having weak C2 Swan band absorption at 4737 and 5165 Å and is
otherwise featureless. The DQ model reproduces the spectrum reliably and is overplotted in
Figure 9. This object is characterized as having Teff = 6188 ± 194 K and a log (C/He) =
−7.20 ± 0.16.
WD 2008−600 is a DC WD (spectrum is plotted in Figure 4) that is flux deficient in
the near infrared, as indicated by the 2MASS magnitudes. The SED fit to the photometry
is a poor match to either the pure hydrogen or the pure helium models. A pure hydrogen
model provides a slightly better match than a pure helium model, and yields a Teff of ∼3100
K, thereby placing it in the relatively small sample of ultracool WDs. In order to discern the
true nature of this object, we have constrained the model using the distance obtained from
the CTIOPI trigonometric parallax of 17.1 ± 0.4 pc (Subasavage et al., in preparation). This
object is then best modeled as having mostly helium with trace amounts of hydrogen (log
(He/H) = 2.61) in its atmosphere and has a Teff = 5078 ± 221 K (see Figure 10). A mixed
hydrogen and helium composition is required to produce sufficient absorption in the infrared
as a result of the collision-induced absorption by molecular hydrogen due to collisions with
helium. Such mixed atmospheric compositions have also been invoked to explain the infrared
flux deficiency in LHS 1126 (Bergeron et al. 1994) as well as SDSS 1337+00 and LHS 3250
(Bergeron & Leggett 2002). While WD 2008−600 is likely not an ultracool WD, it is one of
the brightest and nearest cool WDs known. Because the 2MASS magnitudes are not very
reliable, we intend to obtain additional near-infrared photometry to better constrain the fit.
WD 2138−332 is a DZ WD for which a calcium rich model reproduces the spectrum
reliably. The spectrum and the overplotted fit are shown in the bottom panel of Figure
8. Clearly evident in the spectrum is the strong Ca II absorption at 3933 and 3968 Å. A
weaker Ca I line is seen at 4226Å. Also seen are Mg I absorption lines at 3829, 3832, and
3838 Å (blended) as well as Mg I at 5167, 5173, and 5184 Å (also blended). Several weak Fe I
lines from 4000Å to 4500Å and again from 5200Å to 5500Å are also present. The divergence
of the spectrum from the fit toward the red end is likely due to an imperfect flux calibration
of the spectrum. This object is characterized as having Teff = 7188 ± 291 K and a log
(Ca/He) = −8.64 ± 0.16. The metallicity ratios are, at first, assumed to be solar (as defined
by Grevesse & Sauval 1998) and, in this case, the quality of the fit was sufficient without
deviation. The corresponding log (Mg/He) = −7.42 ± 0.16 and log (Fe/He) = −7.50 ± 0.16
for this object.
WD 2157−574 is a DA WD (spectrum is plotted in Figure 3) unique to the new sample
in that it displays weak Ca II absorption at 3933 and 3968 Å (H and K) thereby making its
formal classification a DAZ. Possible scenarios that enrich the atmospheres of DAZs include
accretion via (1) debris disks, (2) ISM, and (3) cometary impacts (see Kilic et al. 2006 and
references therein). The 2MASS KS magnitude is near the faint limit and is unreliable, but
even considering the J and H magnitudes, there appears to be no appreciable near-infrared
excess. While this may tentatively rule out the possibility of a debris disk, this object would
be an excellent candidate for far-infrared space-based studies to ascertain the origin of the
enrichment.
5. Discussion
WDs represent the end state for stars less massive than ∼8 M⊙ and are therefore rel-
atively numerous. Because of their intrinsic faintness, only the nearby WD population can
be easily characterized and provides the benchmark upon which WD stellar astrophysics is
based. It is clear from this work and others (e.g. Holberg et al. 2002; Kawka & Vennes 2006)
that the WD sample is complete, at best, to only 13 pc. Spectroscopic confirmation of new
WDs as well as trigonometric parallax determinations for both new and known WDs will
lead to a more complete sample and will push the boundary of completeness outward. We
estimate that 8 new WDs and an additional 12 known WDs without trigonometric parallaxes
are nearer than 25 pc, including one within 10 pc (WD 0141−675). Parallax measurements
via CTIOPI are underway for these 20 objects to confirm proximity. This total of 20 WDs
within 25 pc constitutes an 18% increase to the 109 WDs with trigonometric parallaxes ≥
40 mas.
Evaluating the proper motions of the new and known samples within 25 pc indicates
that almost twice as many systems have been found with µ < 1.′′0 yr−1 as with
µ ≥ 1.′′0 yr−1 (13 vs 7, see Table 2). The only WD estimated to be within 10 pc has µ >
1.′′0 yr−1, although WD 1202−232 is estimated to lie at 10.2 ± 1.7 pc and its proper motion is
small (µ = 0.′′227 yr−1).
Because this effort focuses mainly on the southern hemisphere, it is likely that there
is a significant fraction of nearby WDs in the northern hemisphere that have also gone
undetected. With the recent release of the LSPM-North Catalog (Lépine & Shara 2005),
these objects are identifiable by employing the same techniques used in this work. The
challenge is the need for a large scale parallax survey focusing on WDs to confirm proximity.
Since the HIPPARCOS mission, only six WD trigonometric parallaxes have been published
(Hambly et al. 1999; Smart et al. 2003), and of those, only two are within 25 pc. The USNO
parallax program is in the process of publishing trigonometric parallaxes for ∼130 WDs,
mostly in the northern hemisphere, although proximity was not a primary motivation for
target selection (Dahn, private communication).
In addition to further completing the nearby WD census, the wealth of observational
data available from this effort provides reliable constraints on their physical parameters
(i.e. Teff , log g, mass, and radius). Unusual objects are then revealed, such as those dis-
cussed in § 4.2. In particular, trigonometric parallaxes help identify WDs that are overlu-
minous, as is the case for WD 0121−429. This object, and others similar to it, are excellent
candidates to provide insight into binary evolution. If they can be resolved using high res-
olution astrometric techniques (i.e. speckle, adaptive optics, or interferometry via Hubble
Space Telescope’s Fine Guidance Sensors), they may provide astrometric masses, which are
fundamental calibrators for stellar structure theory and for the reliability of the theoretical
WD mass-radius and initial-to-final-mass relationships. To date, only four WD astrometric
masses are known to better than ∼ 5% (Provencal et al. 1998).
One avenue that is completely unexplored to date is a careful high resolution search
for planets around WDs. Theory dictates that the Sun will become a WD, and when it
does, the outer planets will remain in orbit (not without transformations of their own, of
course). In this scenario, the Sun will have lost more than half of its mass, thereby amplifying
the signature induced by the planets. Presumably, this has already occurred in the Milky
Way and systems such as these merely await detection. Because of the faintness and spectral
signatures of WDs (i.e. few, if any, broad absorption lines), current radial velocity techniques
are inadequate for planet detection, leaving astrometric techniques as the only viable option.
For a given system, the astrometric signature is inversely related to distance (i.e. the nearer
the system, the larger the astrometric signature). This effort aims to provide a complete
census of nearby WDs that can be probed for these astrometric signatures using future
astrometric efforts.
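To put numbers on that scaling: the astrometric signature of a planet of mass Mp orbiting a star of mass M⋆ at semimajor axis a is roughly α ≈ (Mp/M⋆) a/d, which comes out in arcseconds for a in AU and d in pc. The masses and separation in the example below are illustrative choices, not values from this work.

M_JUP_IN_MSUN = 9.54e-4

def astrometric_signature_mas(m_planet_mjup, m_star_msun, a_au, d_pc):
    # alpha [arcsec] = (M_p / M_*) * a [AU] / d [pc]; returned in milliarcseconds
    return 1e3 * (m_planet_mjup * M_JUP_IN_MSUN / m_star_msun) * a_au / d_pc

# a Jupiter-mass planet at 5.2 AU around a 0.6 Msun WD:
print(astrometric_signature_mas(1.0, 0.6, 5.2, 10.0))   # ~0.8 mas at 10 pc
print(astrometric_signature_mas(1.0, 0.6, 5.2, 25.0))   # ~0.3 mas at 25 pc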
6. Acknowledgments
The RECONS team at Georgia State University wishes to thank the NSF (grant AST
05-07711), NASA’s Space Interferometry Mission, and GSU for their continued support of
our study of nearby stars. We also thank the continuing support of the members of the
SMARTS consortium, who enable the operations of the small telescopes at CTIO where all
of the data in this work were collected. J. P. S. is indebted to Wei-Chun Jao for the use of his
photometry reduction pipeline. P. B. is a Cottrell Scholar of Research Corporation and would
like to thank the NSERC Canada for its support. N. C. H. would like to thank colleagues in
the Wide Field Astronomy Unit at Edinburgh for their efforts contributing to the existence
of the SSS; particular thanks go to Mike Read, Sue Tritton, and Harvey MacGillivray. This
work has made use of the SIMBAD, VizieR, and Aladin databases, operated at the CDS in
Strasbourg, France. We have also used data products from the Two Micron All Sky Survey,
which is a joint project of the University of Massachusetts and the Infrared Processing and
Analysis Center, funded by NASA and NSF.
A. Appendix
In order to ensure correct cross-referencing of names for the new and known WD systems
presented here, Table 3 lists additional names found in the literature. Objects for which there
is an NLTT designation will also have the corresponding L or LP designations found in the
NLTT catalog. This is necessary because the NLTT designations were not published in the
original catalog, but rather are the record numbers in the electronic version of the catalog
and have been adopted out of necessity.
REFERENCES
Bergeron, P., Ruiz, M.-T., & Leggett, S. K. 1992, ApJ, 400, 315
Bergeron, P., Saffer, R. A., & Liebert, J. 1992, ApJ, 394, 228
Bergeron, P., Ruiz, M.-T., & Leggett, S. K. 1993, ApJ, 407, 733
Bergeron, P., Ruiz, M.-T., Leggett, S. K., Saumon, D., & Wesemael, F. 1994, ApJ, 423, 456
Bergeron, P., Ruiz, M. T., & Leggett, S. K. 1997, ApJS, 108, 339
Bergeron, P., Leggett, S. K., & Ruiz, M. T. 2001, ApJS, 133, 413
Bergeron, P., & Leggett, S. K. 2002, ApJ, 580, 1070
Bergeron, P., & Liebert, J. 2002, ApJ, 566, 1091
Bessel, M. S. 1990, A&AS, 83, 357
Carpenter, J. M. 2001, AJ, 121, 2851
Dufour, P., Bergeron, P., & Fontaine, G. 2005, ApJ, 627, 404
Dufour, P., Bergeron, P., Liebert, J., Harris, H. C., Knapp, G. R., Anderson, S. F., Hall,
P. B., Strauss, M. A., Collinge, M. J., & Edwards, M. C. 2007, submitted to ApJ
Filippenko, A. V. 1982, PASP, 94, 715
Finch, C. T., Henry, T. J., Subasavage, J. P., Jao, W.-C., Hambly, N. C. 2007, AJ, submitted
Fontaine, G., Brassard, P., & Bergeron, P. 2001, PASP, 113, 409
Gianninas, A., Bergeron, P., & Fontaine, G. 2006, AJ, 132, 831
Gliese, W., & Jahreiß, H. 1991, On: The Astronomical Data Center CD-ROM: Selected As-
tronomical Catalogs, Vol. I; L.E. Brotzmann, S.E. Gesser (eds.), NASA/Astronomical
Data Center, Goddard Space Flight Center, Greenbelt, MD
Graham, J. A. 1982, PASP, 94, 244
Grevesse, N., & Sauval, A. J. 1998, Space Science Reviews, 85, 161
Hambly, N. C., Smartt, S. J., Hodgkin, S. T., Jameson, R. F., Kemp, S. N., Rolleston,
W. R. J., & Steele, I. A. 1999, MNRAS, 309, L33
Henry, T. J., Walkowicz, L. M., Barto, T. C., & Golimowski, D. A. 2002, AJ, 123, 2002
Henry, T. J., Backman, D. E., Blackwell, J., Okimura, T., & Jue, S. 2003, The Future of
Small Telescopes In The New Millennium. Volume III - Science in the Shadow of
Giants, 111
Henry, T. J., Subasavage, J. P., Brown, M. A., Beaulieu, T. D., Jao, W., & Hambly, N. C.
2004, AJ, 128, 2460
Henry, T. J., Jao, W.-C., Subasavage, J. P., Beaulieu, T. D., Ianna, P. A., Costa, E., &
Méndez, R. A. 2006, AJ, 132, 2360
Holberg, J. B., Oswalt, T. D., & Sion, E. M. 2002, ApJ, 571, 512
Holberg, J. B., & Bergeron, P. 2006, AJ, 132, 1223
Iben, I., & Renzini, A. 1984, Phys. Rep., 105, 329
Jao, W.-C., Henry, T. J., Subasavage, J. P., Brown, M. A., Ianna, P. A., Bartlett, J. L.,
Costa, E., & Méndez, R. A. 2005, AJ, 129, 1954
Kawka, A., & Vennes, S. 2006, ApJ, 643, 402
Kilic, M., von Hippel, T., Leggett, S. K., & Winget, D. E. 2006, ApJ, 646, 474
Kilkenny, D., O’Donoghue, D., Koen, C., Stobie, R. S., & Chen, A. 1997, MNRAS, 287, 867
Kleinman, S. J., et al. 2004, ApJ, 607, 426
Landolt, A. U. 1992, AJ, 104, 340
Lépine, S., Shara, M. M., & Rich, R. M. 2003, AJ, 126, 921
Lépine, S., & Shara, M. M. 2005, AJ, 129, 1483
Lépine, S., Rich, R. M., & Shara, M. M. 2005, ApJ, 633, L121
Liebert, J., Bergeron, P., & Holberg, J. B. 2003, AJ, 125, 348
Liebert, J., Bergeron, P., & Holberg, J. B. 2005, ApJS, 156, 47
Luyten, W. J. 1949, ApJ, 109, 528
Luyten, W. J. 1979, LHS Catalogue (2nd ed.; Minneapolis: Univ. of Minnesota Press)
Luyten, W. J. 1979, New Luyten Catalogue of Stars with Proper Motions Larger than Two
Tenths of an Arcsecond (Minneapolis: Univ. of Minnesota Press)
McCook, G. P., & Sion, E. M. 1999, ApJS, 121, 1
Oppenheimer, B. R., Hambly, N. C., Digby, A. P., Hodgkin, S. T., & Saumon, D. 2001,
Science, 292, 698
Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. 1992, Numerical Recipes
in FORTRAN, 2nd edition (Cambridge: Cambridge University Press), 644
Provencal, J. L., Shipman, H. L., Hog, E., & Thejll, P. 1998, ApJ, 494, 759
Pokorny, R. S., Jones, H. R. A., Hambly, N. C., & Pinfield, D. J. 2004, A&A, 421, 763
Salim, S., Rich, R. M., Hansen, B. M., Koopmans, L. V. E., Oppenheimer, B. R., & Bland-
ford, R. D. 2004, ApJ, 601, 1075
Scholz, R.-D., Szokoly, G. P., Andersen, M., Ibata, R., & Irwin, M. J. 2002, ApJ, 565, 539
Skrutskie, M. F., et al. 2006, AJ, 131, 1163
Smart, R. L., et al. 2003, A&A, 404, 317
Subasavage, J. P., Henry, T. J., Hambly, N. C., Brown, M. A., & Jao, W. 2005, AJ, 129, 413
Subasavage, J. P., Henry, T. J., Hambly, N. C., Brown, M. A., Jao, W.-C., & Finch, C. T.
2005, AJ, 130, 1658
This preprint was prepared with the AAS LATEX macros v5.2.
Table 1. Optical and Infrared Photometry, and Derived Parameters for New and Known White Dwarfs.
WD Name    VJ    RC    IC    # Obs    J    σJ    H    σH    KS    σK    Teff (K)    Comp    Dist (pc)    SpT    Notes
New Spectroscopically Confirmed White Dwarfs
0034−602............. 14.08 14.19 14.20 3 14.37 0.04 14.55 0.06 14.52 0.09 14655±1413 H 35.8±5.7 DA3.5
0121−429............. 14.83 14.52 14.19 4 13.85 0.02 13.63 0.04 13.53 0.04 6369± 137 H · · · ± · · · DAH a
0216−398............. 15.75 15.55 15.29 3 15.09 0.04 14.83 0.06 14.89 0.14 7364± 241 H 29.9±4.7 DA7.0
0253−755............. 16.70 16.39 16.08 2 15.77 0.07 15.76 0.15 15.34 null 6235± 253 He 34.7±5.5 DC
0310−624............. 15.92 15.99 16.03 2 16.13 0.10 16.31 0.27 16.50 null 13906±1876 H · · · ± · · · DA3.5 b
0344+014............. 16.52 16.00 15.54 2 15.00 0.04 14.87 0.09 14.70 0.12 5084± 91 He 19.9±3.1 DC
0404−510............. 15.81 15.76 15.70 2 15.74 0.06 15.55 0.13 15.59 null 10052± 461 H 53.5±8.5 DA5.0
0501−555............. 16.35 16.17 15.98 2 15.91 0.08 15.72 0.15 15.82 0.26 7851± 452 He 44.8±6.9 DC
0511−415............. 16.00 15.99 15.93 2 15.96 0.08 15.97 0.15 15.20 null 10393± 560 H 61.8±10.8 DA5.0
0525−311............. 15.94 16.03 16.03 2 16.20 0.12 16.21 0.25 14.98 null 12941±1505 H 76.3±13.6 DA4.0
0607−530............. 15.99 15.92 15.78 3 15.82 0.07 15.66 0.14 15.56 0.21 9395± 426 H 51.7±9.0 DA5.5
0622−329............. 15.47 15.41 15.36 2 15.44 0.06 15.35 0.11 15.53 0.25 · · · ± · · · · · · · · · ± · · · DAB c
0821−669............. 15.34 14.82 14.32 3 13.79 0.03 13.57 0.03 13.34 0.04 5160± 95 H 11.5±1.9 DA10.0
0840−136............. 15.72 15.36 15.02 3 14.62 0.03 14.42 0.05 14.54 0.09 · · · ± · · · · · · · · · ± · · · DZ d
1016−308............. 14.67 14.75 14.81 2 15.05 0.04 15.12 0.08 15.41 0.21 16167±1598 H 50.6±9.2 DA3.0
1054−226............. 16.02 15.82 15.62 2 15.52 0.05 15.40 0.11 15.94 0.26 8266± 324 H 41.0±7.0 DA6.0 e
1105−340............. 13.66 13.72 13.79 2 13.95 0.03 13.98 0.04 14.05 0.07 13926± 988 H 28.2±4.8 DA3.5 f
1149−272............. 15.87 15.59 15.37 4 15.17 0.05 14.92 0.06 14.77 0.11 6188± 194 He (+C) 24.0±3.8 DQ
1243−123............. 15.57 15.61 15.64 2 15.74 0.07 15.73 0.11 16.13 null 12608±1267 H 62.6±10.7 DA4.0
1316−215............. 16.67 16.33 15.99 2 15.56 0.05 15.33 0.08 15.09 0.14 6083± 201 H 31.6±5.3 DA8.5
1436−781............. 16.11 15.82 15.49 2 15.04 0.04 14.88 0.08 14.76 0.14 6246± 200 H 26.0±4.3 DA8.0
1452−310............. 15.85 15.77 15.63 2 15.58 0.06 15.54 0.09 15.50 0.22 9206± 375 H 46.8±8.1 DA5.5
1647−327............. 16.21 15.85 15.49 3 15.15 0.05 14.82 0.08 14.76 0.11 6092± 193 H 25.5±4.2 DA8.5
1742−722............. 15.53 15.62 15.70 2 15.85 0.08 15.99 0.18 15.65 null 15102±2451 H 71.7±12.9 DA3.5
1946−273............. 14.19 14.31 14.47 2 14.72 0.04 14.77 0.09 14.90 0.13 21788±3304 H 52.0±9.9 DA2.5
2008−600............. 15.84 15.40 14.99 4 14.93 0.05 15.23 0.11 15.41 null 5078± 221 He · · · ± · · · DC g
2008−799............. 16.35 15.96 15.57 3 15.11 0.04 15.03 0.08 14.64 0.09 5807± 161 H 24.5±4.1 DA8.5
2035−369............. 14.94 14.85 14.72 2 14.75 0.04 14.72 0.06 14.84 0.09 9640± 298 H 33.1±5.7 DA5.0
2103−397............. 15.31 15.15 14.91 2 14.79 0.03 14.63 0.04 14.64 0.08 7986± 210 H 28.2±4.8 DA6.5
2138−332............. 14.47 14.30 14.16 3 14.17 0.03 14.08 0.04 13.95 0.06 7188± 291 He (+Ca) 17.3±2.7 DZ
2157−574............. 15.96 15.73 15.49 3 15.18 0.04 15.05 0.07 15.28 0.17 7220± 246 H 32.0±5.4 DAZ
2218−416............. 15.36 15.35 15.24 2 15.38 0.04 15.14 0.09 15.39 0.15 10357± 414 H 45.6±8.0 DA5.0
2231−387............. 16.02 15.88 15.62 2 15.57 0.06 15.51 0.11 15.11 0.15 8155± 336 H 40.6±6.9 DA6.0
Known White Dwarfs without a Trigonometric Parallax Estimated to be Within 25 pc
0141−675 ............ 13.82 13.52 13.23 3 12.87 0.02 12.66 0.03 12.58 0.03 6484± 128 H 9.7±1.6 DA8.0
0806−661 ............ 13.73 13.66 13.61 3 13.70 0.02 13.74 0.03 13.78 0.04 10753± 406 He 21.1±3.5 DQ
1009−184 ............ 15.44 15.18 14.91 3 14.68 0.04 14.52 0.05 14.31 0.07 6449± 194 He 20.9±3.2 DZ h
1036−204 ............ 16.24 15.54 15.34 3 14.63 0.03 14.35 0.04 14.03 0.07 4948± 70 He 16.2±2.5 DQ i
1202−232 ............ 12.80 12.66 12.52 3 12.40 0.02 12.30 0.03 12.34 0.03 8623± 168 H 10.2±1.7 DA6.0
1315−781 ............ 16.16 15.73 15.35 2 14.89 0.04 14.67 0.08 14.58 0.12 5720± 162 H 21.6±3.6 DC j
1339−340 ............ 16.43 16.00 15.56 2 15.00 0.04 14.75 0.06 14.65 0.10 5361± 138 H 21.2±3.5 DA9.5
1756+143 ............ 16.30 16.12 15.69 1 14.93 0.04 14.66 0.06 14.66 0.08 5466± 151 H 22.4±3.4 DA9.0 k
Table 1—Continued
WD Name    VJ    RC    IC    # Obs    J    σJ    H    σH    KS    σK    Teff (K)    Comp    Dist (pc)    SpT    Notes
1814+134 ............ 15.85 15.34 14.86 2 14.38 0.04 14.10 0.06 14.07 0.06 5313± 115 H 15.6±2.5 DA9.5
2040−392 ............ 13.74 13.77 13.68 2 13.77 0.02 13.82 0.03 13.81 0.05 10811± 325 H 23.1±4.0 DA4.5
2211−392 ............ 15.91 15.61 15.24 2 14.89 0.03 14.64 0.05 14.56 0.08 6243± 167 H 23.5±4.0 DA8.0
2226−754A........... 16.57 15.93 15.33 2 14.66 0.04 14.66 0.06 14.44 0.08 4230± 104 H 12.8±2.0 DC l
2226−754B........... 16.88 16.17 15.51 2 14.86 0.04 14.82 0.06 14.72 0.12 4177± 112 H 14.0±2.2 DC l
Known White Dwarfs without a Trigonometric Parallax Estimated to be Beyond 25 pc
0024−556............. 15.17 15.15 15.07 2 15.01 0.04 15.23 0.10 15.09 0.14 10007± 378 H 39.8±6.8 DA5.0
0150+256............. 15.70 15.52 15.33 2 15.07 0.04 15.07 0.09 15.15 0.14 7880± 280 H 33.0±5.6 DA6.5
0255−705 ............ 14.08 14.03 14.00 2 14.04 0.03 14.12 0.04 13.99 0.06 10541± 326 H 25.8±4.5 DA5.0
0442−304............. 16.03 15.93 15.86 2 15.94 0.09 15.81 null 15.21 null 9949± 782 He 55.1±9.1 DQ
0928−713 ............ 15.11 14.97 14.83 3 14.77 0.03 14.69 0.06 14.68 0.09 8836± 255 H 30.7±5.3 DA5.5
1143−013............. 16.39 16.08 15.79 1 15.54 0.06 15.38 0.08 15.18 0.16 6824± 250 H 34.4±5.8 DA7.5
1237−230 ............ 16.53 16.13 15.74 2 15.35 0.05 15.08 0.08 14.94 0.11 5841± 173 H 26.9±4.5 DA8.5
1314−153............. 14.82 14.89 14.97 2 15.17 0.05 15.26 0.09 15.32 0.21 15604±2225 H 52.7±9.5 DA3.0
1418−088 ............ 15.39 15.21 15.01 2 14.76 0.04 14.73 0.06 14.76 0.10 7872± 243 H 28.5±4.8 DA6.5
1447−190............. 15.80 15.59 15.32 2 15.06 0.04 14.87 0.07 14.78 0.11 7153± 235 H 29.1±4.9 DA7.0
1607−250............. 15.19 15.12 15.09 2 15.08 0.08 15.08 0.08 15.22 0.15 10241± 457 H 41.2±7.2 DA5.0
aDistance via SED fit (not listed) is underestimated because object is likely an unresolved double degenerate with one magnetic component (see § 4.2). Instead, we
adopt the trigonometric parallax distance of 17.7 ± 0.7 pc derived via CTIOPI.
bDistance via SED fit (not listed) is underestimated because object is likely a distant (well beyond 25 pc) unresolved double degenerate (see § 4.2).
cDistance via SED fit (not listed) is underestimated because object is likely a distant (well beyond 25 pc) unresolved double degenerate with components of type DA
and DB (see § 4.2). Temperatures derived from the spectroscopic fit yield 9,640 ± 303 K and 14,170 ± 1,228 K for the DA and DB respectively.
dObject is likely cooler than Teff ∼5000 K and the theoretical models do not provide an accurate treatment at these temperatures (see § 4.2). Instead, we use the
linear photometric distance relation of Salim et al. (2004) and obtain a distance estimate of 19.3 ± 3.9 pc.
eThis object was observed as part of the Edinburgh-Cape survey and was classified as a sdB+ (Kilkenny et al. 1997).
fDistance of 19.1 ± 3.0 pc is estimated using V RIJHKS for the common proper motion companion M dwarf and the relations of Henry et al. (2004). System is
possibly within 25 pc. (see § 4.2).
gDistance estimate is undetermined. Instead, we adopt the distance measured via trigonometric parallax of 17.1 ± 0.4 pc (see § 4.2).
hNot listed in McCook & Sion (1999) but identified as a DC/DQ WD by Henry et al. (2002). We obtained blue spectra that show Ca II H & K absorption and classify
this object as a DZ.
iThe SED fit to the photometry is marginal. This object displays deep swan band absorption that significantly affects its measured magnitudes.
jNot listed in McCook & Sion (1999) but identified as a WD by Luyten (1949). Spectral type is derived from our spectra.
kAs of mid-2004, object has moved onto a background source. Photometry is probably contaminated, which is consistent with the poor SED fit for this object.
lSpectral type was determined using spectra published by Scholz et al. (2002).
Table 2. Distance Estimate Statistics for New and Known White Dwarfs.
Proper motion d ≤ 10 pc 10 pc < d ≤ 25 pc d > 25 pc
µ ≥ 1.′′0 yr−1......................... 1 6 1
1.′′0 yr−1 > µ ≥ 0.′′8 yr−1...... 0 0 0
0.′′8 yr−1 > µ ≥ 0.′′6 yr−1...... 0 2 2
0.′′6 yr−1 > µ ≥ 0.′′4 yr−1...... 0 6 11
0.′′4 yr−1 > µ ≥ 0.′′18 yr−1.... 0 5 22
Total.................................... 1 19 36
Table 3. Astrometry and Alternate Designations for New and Known White Dwarfs.
WD Name RA Dec PM PA Ref Alternate Names
(J2000.0) (J2000.0) (arcsec yr−1) (deg)
New Spectroscopically Confirmed White Dwarfs
0034−602......... 00 36 22.31 −59 55 27.5 0.280 069.0 L NLTT 1993 = LP 122-4 = · · ·
0121−429......... 01 24 03.98 −42 40 38.5 0.538 155.2 L LHS 1243 = NLTT 4684 = LP 991-16
0216−398......... 02 18 31.51 −39 36 33.2 0.500 078.6 L LHS 1385 = NLTT 7640 = LP 992-99
0253−755......... 02 52 45.64 −75 22 44.5 0.496 063.5 S SCR 0252-7522 = · · · = · · ·
0310−624......... 03 11 21.34 −62 15 15.7 0.416 083.3 S SCR 0311-6215 = · · · = · · ·
0344+014......... 03 47 06.82 +01 38 47.5 0.473 150.4 S LHS 5084 = NLTT 11839 = LP 593-56
0404−510......... 04 05 32.86 −50 55 57.8 0.320 090.7 P LEHPM 1-3634 = · · · = · · ·
0501−555......... 05 02 43.43 −55 26 35.2 0.280 191.9 P LEHPM 1-3865 = · · · = · · ·
0511−415......... 05 13 27.80 −41 27 51.7 0.292 004.4 P LEHPM 2-1180 = · · · = · · ·
0525−311......... 05 27 24.33 −31 06 55.7 0.379 200.7 P NLTT 15117 = LP 892-45 = LEHPM 2-521
0607−530......... 06 08 43.81 −53 01 34.1 0.246 327.6 P LEHPM 2-2008 = · · · = · · ·
0622−329......... 06 24 25.78 −32 57 27.4 0.187 177.7 P LEHPM 2-5035 = · · · = · · ·
0821−669......... 08 21 26.70 −67 03 20.1 0.758 327.6 S SCR 0821-6703 = · · · = · · ·
0840−136......... 08 42 48.45 −13 47 13.1 0.272 263.0 S NLTT 20107 = LP 726-1 = · · ·
1016−308......... 10 18 39.84 −31 08 02.0 0.212 304.0 L NLTT 23992 = LP 904-3 = LEHPM 2-5779
1054−226......... 10 56 38.64 −22 52 55.9 0.277 349.7 P NLTT 25792 = LP 849-31 = LEHPM 2-1372
1105−340......... 11 07 47.89 −34 20 51.4 0.287 168.0 S SCR 1107-3420A = · · · = · · ·
1149−272......... 11 51 36.10 −27 32 21.0 0.199 278.3 P LEHPM 2-4051 = · · · = · · ·
1243−123......... 12 46 00.69 −12 36 19.9 0.406 305.4 S SCR 1246-1236 = · · · = · · ·
1316−215......... 13 19 24.72 −21 47 55.0 0.467 179.2 S NLTT 33669 = LP 854-50 = WT 2034
1436−781......... 14 42 51.54 −78 23 53.6 0.409 272.0 S NLTT 38003 = LP 40-109 = LTT 5814
1452−310......... 14 55 23.47 −31 17 06.4 0.199 174.2 P LEHPM 2-4029 = · · · = · · ·
1647−327......... 16 50 44.32 −32 49 23.2 0.526 193.8 L LHS 3245 = NLTT 43628 = LP 919-1
1742−722......... 17 48 31.21 −72 17 18.5 0.294 228.2 P LEHPM 2-1166 = · · · = · · ·
1946−273......... 19 49 19.78 −27 12 25.7 0.213 162.0 L NLTT 48270 = LP 925-53 = · · ·
2008−600......... 20 12 31.75 −59 56 51.5 1.440 165.6 S SCR 2012-5956 = · · · = · · ·
2008−799......... 20 16 49.66 −79 45 53.0 0.434 128.4 S SCR 2016-7945 = · · · = · · ·
2035−369......... 20 38 41.42 −36 49 13.5 0.230 104.0 L NLTT 49589 = L 495-42 = LEHPM 2-3290
2103−397......... 21 06 32.01 −39 35 56.7 0.266 151.7 P LEHPM 2-1571 = · · · = · · ·
2138−332......... 21 41 57.56 −33 00 29.8 0.210 228.5 P NLTT 51844 = L 570-26 = LEHPM 2-3327
2157−574......... 22 00 45.37 −57 11 23.4 0.233 252.0 P LEHPM 1-4327 = · · · = · · ·
2218−416......... 22 21 25.37 −41 25 27.0 0.210 143.4 P LEHPM 1-4598 = · · · = · · ·
2231−387......... 22 33 54.47 −38 32 36.9 0.370 220.5 P NLTT 54169 = LP 1033-28 = LEHPM 1-4859
Known White Dwarfs without a Trigonometric Parallax Estimated to be Within 25 pc
0141−675 ........ 01 43 00.98 −67 18 30.3 1.048 197.8 L LHS 145 = NLTT 5777 = L 88-59
0806−661 ........ 08 06 53.76 −66 18 16.6 0.454 131.4 S NLTT 19008 = L 97-3 = · · ·
1009−184 ........ 10 12 01.88 −18 43 33.2 0.519 268.2 S WT 1759 = LEHPM 2-220 = · · ·
1036−204 ........ 10 38 55.57 −20 40 56.7 0.628 330.3 L LHS 2293 = NLTT 24944 = LP 790-29
1202−232 ........ 12 05 26.66 −23 33 12.1 0.227 002.0 L NLTT 29555 = LP 852-7 = LEHPM 2-1894
1315−781 ........ 13 19 25.63 −78 23 28.3 0.477 139.2 S NLTT 33551 = L 40-116 = · · ·
1339−340 ........ 13 42 02.88 −34 15 19.4 2.547 296.7 Le PM J13420-3415 = · · · = · · ·
1756+143 ........ 17 58 22.90 +14 17 37.8 1.014 235.4 Le LSR 1758+1417 = · · · = · · ·
1814+134 ........ 18 17 06.48 +13 28 25.0 1.207 201.5 Le LSR 1817+1328 = · · · = · · ·
2040−392 ........ 20 43 49.21 −39 03 18.0 0.306 179.0 L NLTT 49752 = L 495-82 = · · ·
2211−392 ........ 22 14 34.75 −38 59 07.3 1.056 110.1 O WD J2214-390 = LEHPM 1-4466 = · · ·
2226−754A........ 22 30 40.00 −75 13 55.3 1.868 167.5 S SSSPM J2231-7514 = · · · = · · ·
2226−754B........ 22 30 33.55 −75 15 24.2 1.868 167.5 S SSSPM J2231-7515 = · · · = · · ·
Known White Dwarfs without a Trigonometric Parallax Estimated to be Beyond 25 pc
0024−556......... 00 26 40.69 −55 24 44.1 0.580 211.8 L LHS 1076 = NLTT 1415 = L 170-27
0150+256......... 01 52 51.93 +25 53 40.7 0.220 076.0 L NLTT 6275 = G 94-21 = · · ·
0255−705......... 02 56 17.22 −70 22 10.8 0.682 097.9 L LHS 1474 = NLTT 9485 = L 54-5
0442−304......... 04 44 29.38 −30 21 14.2 0.196 199.5 P NLTT 13882 = LP 891-65 = HE 0442-3027
0928−713......... 09 29 07.97 −71 33 58.8 0.439 320.2 S NLTT 21957 = L 64-40 = · · ·
1143−013......... 11 46 25.77 −01 36 36.8 0.563 140.2 S LHS 2455 = NLTT 28493 = · · ·
Table 3—Continued
WD Name RA Dec PM PA Ref Alternate Names
(J2000.0) (J2000.0) (arcsec yr−1) (deg)
1237−230......... 12 40 24.18 −23 17 43.8 1.102 219.9 L LHS 339 = NLTT 31473 = LP 853-15
1314−153......... 13 16 43.59 −15 35 58.3 0.708 196.7 L LHS 2712 = NLTT 33503 = LP 737-47
1418−088......... 14 20 54.93 −09 05 08.7 0.480 266.8 S LHS 5270 = NLTT 37026 = · · ·
1447−190......... 14 50 11.93 −19 14 08.7 0.253 285.4 P NLTT 38499 = LP 801-14 = LEHPM 2-1835
1607−250......... 16 10 50.21 −25 13 16.0 0.209 314.0 L NLTT 42153 = LP 861-31 = · · ·
References. — (L) Luyten 1979a,b, (Le) Lépine et al. 2003, Lépine et al. 2005, (O) Oppenheimer et al. 2001, (P) Pokorny et al.
2004, (S) Subasavage et al. 2005a,b, this work
Fig. 1.— Reduced proper motion diagram used to select WD candidates for spectroscopic
follow-up. Plotted are the new high proper motion objects from Subasavage et al. (2005a,b).
The line is a somewhat arbitrary boundary between the WDs (below) and the subdwarfs
(just above). Main sequence dwarfs fall above and to the right of the subdwarfs, although
there is significant overlap. Asterisks indicate the 33 new WDs reported here. Three dots
in the WD region are deferred to a future paper. The point labeled “sd” is a confirmed
subdwarf contaminant of the WD sample.
Fig. 2.— Spectral plots of the hot (Teff ≥ 10000 K) DA WDs from the new sample, plotted
in descending Teff as derived from the SED fits to the photometry. Note that some of the
flux calibrations are not perfect, in particular, at the blue end.
Fig. 3.— Spectral plots of cool (Teff < 10000 K) DA WDs from the new sample, plotted in
descending Teff as derived from the SED fits to the photometry. Note that some of the flux
calibrations are not perfect, in particular, at the blue end.
Fig. 4.— Spectral plots of the four featureless DC white dwarfs from the new sample, plotted
in descending Teff as derived from the SED fits to the photometry. Note that some of the
flux calibrations are not perfect, in particular, at the blue end.
Fig. 5.— Comparison plot of the values of Teff derived from photometric SED fitting vs
those derived from spectral fitting for 25 of the DA WDs in the new sample. The solid line
represents equal temperatures. The elevated point, 0310−624, is discussed in § 4.2.
Fig. 6.— Spectral plot of WD 0121−429. The inset plot displays the spectrum (light line)
in the Hα region to which a magnetic fit (heavy line), as outlined in Bergeron et al. (1992a),
was performed using the Teff obtained from the SED fit to the photometry. The resulting
magnetic parameters are listed below the fit.
Fig. 7.— Spectral plot of WD 0622−329. The inset plot displays the spectrum (light line)
in the region to which the model (heavy line) was fit assuming the spectrum is a convolution
of a DB component and a slightly cooler DA component. Best fit physical parameters are
listed below the fit for each component.
Fig. 8.— (top panel) Spectral plot of WD 0840−136. The DZ model failed to reproduce
the spectrum presumably because this object is cooler than Teff ∼ 5000 K where additional
pressure effects, not included in the model, become important. (bottom panel) Spectral plot
of WD 2138−332. The inset plot displays the spectrum (light line) in the region to which
the model (heavy line) was fit.
Fig. 9.— Spectral plot of WD 1149−272. The inset plot displays the spectrum (light line)
in the region to which the model (heavy line) was fit.
Fig. 10.— Spectral energy distribution plot of WD 2008−600 with the distance constrained
by the trigonometric distance of 17.1 ± 0.4 pc. Best fit physical parameters are listed below
the fit. Points are fit values; error bars are derived from the uncertainties in the magnitudes
and the parallax.
|
0704.0895 | Gorenstein locus of minuscule Schubert varieties | arXiv:0704.0895v1 [math.AG] 6 Apr 2007
Gorenstein locus of minuscule Schubert varieties
Nicolas Perrin
Abstract
In this article, we describe explicitly the Gorenstein locus of all minuscule Schubert varieties.
This proves a special case of a conjecture of A. Woo and A. Yong [WY06b] on the Gorenstein
locus of Schubert varieties.
Introduction
The description of the singular locus and of the types of singularities appearing in Schubert
varieties is a hard problem. A first step in this direction was the proof by V. Lakshmibai and B.
Sandhya [LS90] of a pattern avoidance criterion for a Schubert variety in type A to be smooth. There
exist some other results in this direction, for a detailed account see [BL00]. Another important
result was a complete combinatorial description, still in type A, of the irreducible components of
the singular locus of a Schubert variety (this has been realised, almost at the same time, by L.
Manivel [Ma01a] and [Ma01b], S. Billey and G. Warrington [BW03], C. Kassel, A. Lascoux and C.
Reutenauer [KLR03] and A. Cortez [Co03]). The singularity at a generic point of such a component
is also given in [Ma01b] and [Co03]. However, as far as I know, this problem is still open for other
types. Another partial result in this direction is the description of the irreducible components of
the singular locus and of the generic singularity of minuscule and cominuscule Schubert varieties
(see Definition 1.2) by M. Brion and P. Polo [BP99].
In the same vein as [LS90], A. Woo and A. Yong gave in [WY06a] and [WY06b] a generalised
pattern avoidance criterion, in type A, to decide if a Schubert variety is Gorenstein. They do not
describe the irreducible components of the Gorenstein locus but give the following conjecture (see
Conjecture 6.7 in [WY06b]):
CONJECTURE 0.1. — Let X be a Schubert variety, a point x in X is in the Gorenstein locus of
X if and only if the generic point of any irreducible component of the singular locus of X containing
x is in the Gorenstein locus of X.
The interest of this conjecture lies in the fact that, at least in type A, the irreducible compo-
nents of the singular locus and the singularity at a generic point of each component are well known.
The conjecture would imply that one only needs to know the information on the irreducible com-
ponents of the singular locus to get all the information on the Gorenstein locus.
In this paper we prove this conjecture for all minuscule Schubert varieties thanks to a combi-
natorial description of the Gorenstein locus of minuscule Schubert varieties. To do this we use the
combinatorial tool introduced in [Pe07] associating to any minuscule Schubert variety a reduced
quiver generalising Young diagrams. First, we translate the results of M. Brion and P. Polo [BP99]
in terms of the quiver. We define the holes, the virtual holes and the essential holes in the quiver
(see Definitions 2.3 and 3.1) and prove the following:
THEOREM 0.2. — (ı) A minuscule Schubert variety is smooth if and only if its associated quiver
has no nonvirtual hole.
(ıı) The irreducible components of the singular locus of a minuscule Schubert variety are indexed
by essential holes.
Furthermore we explicitly describe in terms of the quiver and the essential holes these irre-
ducible components and the singularity at a generic point of a component (for more details see
Theorem 3.2). In particular, with this description it is easy to say if the singularity at a generic
point of an irreducible component of the singular locus is Gorenstein or not. The essential holes
corresponding to irreducible components having a Gorenstein generic point are called Gorenstein
holes (see also Definition 3.8). We give the following complete description of the Gorenstein locus:
THEOREM 0.3. — The generic point of a Schubert subvariety X(w′) of a minuscule Schubert
variety X(w) is in the Gorenstein locus if and only if the quiver of X(w′) contains all the
non-Gorenstein holes of the quiver of X(w).
COROLLARY 0.4. — Conjecture 0.1 is true for all minuscule Schubert varieties.
Example 0.5. — Let G(4, 7) be the Grassmannian variety of 4-dimensional subspaces in a 7-
dimensional vector space. Consider the Schubert variety
X(w) = {V4 ∈ G(4, 7) | dim(V4 ∩W3) ≥ 2 and dim(V4 ∩W5) ≥ 3}
where W3 and W5 are fixed subspaces of dimension 3 and 5 respectively. The minimal length
representative w is the permutation (2357146). Its quiver is the following one (all the arrows are
going down):
We have circled the two holes on this quiver. The left hole is not a Gorenstein hole (this can be
easily seen because the two peaks above this hole do not have the same height, see Definition 2.3)
but the right one is Gorenstein (the two peaks have the same height). Let X(w′) be an irreducible
component of the singular locus of X(w). The possible quivers of such a variety X(w′) are the
following (for each hole we remove all the vertices above that hole):
These Schubert varieties correspond to the permutations: (1237456) and (2341567). Let X(w′) be
a Schubert subvariety in X(w) whose generic point is not in the Gorenstein locus. Then X(w′) has
to be contained in X(1237456).
Acknowledgements: I thank Frank Sottile and Jim Carrel for their invitation to the BIRS
workshop Contemporary Schubert calculus during which the major part of this work has been
done.
1 Minuscule Schubert varieties
Let us fix some notations and recall the definitions of minuscule homogeneous spaces and minuscule
Schubert varieties. A basic reference is [LMS79].
In this paper G will be a semi-simple algebraic group; we fix a Borel subgroup B and a
maximal torus T in B. We denote by R the set of roots, by R+ and R− the sets of positive and
negative roots. We denote by S the set of simple roots. We will denote by W the Weyl group of G.
We also fix P a parabolic subgroup containing B. We denote by W_P the Weyl group of P
and by W^P the set of minimal length representatives in W of the cosets W/W_P. Recall that the
Schubert varieties in G/P (that is to say the B-orbit closures in G/P) are parametrised by W^P.
DEFINITION 1.1. — A fundamental weight ̟ is said to be minuscule if, for all positive roots
α ∈ R+, we have 〈α∨,̟〉 ≤ 1.
With the notation of N. Bourbaki [Bo68], the minuscule weights are:
Type minuscule
An ̟1 · · ·̟n
Bn ̟n
Cn ̟1
Dn ̟1, ̟n−1 and ̟n
E6 ̟1 and ̟6
E7 ̟7
E8 none
F4 none
G2 none
DEFINITION 1.2. — Let ̟ be a minuscule weight and let P̟ be the associated parabolic subgroup.
The homogeneous space G/P̟ is then said to be minuscule. The Schubert varieties of a minuscule
homogeneous space are called minuscule Schubert varieties.
Remark 1.3. — It is a classical fact that to study minuscule homogeneous spaces and their
Schubert varieties, it is sufficient to restrict ourselves to simply-laced groups.
In the rest of the paper, the group G will be simply-laced and the subgroup P will be a maximal
parabolic subgroup associated to a minuscule fundamental weight ̟. The minuscule homogeneous
space G/P will be denoted by X and the Schubert variety associated to w ∈ W^P will be denoted
by X(w) with the convention that the dimension of X(w) is the length of w.
2 Minuscule quivers
In [Pe07], we associated to any minuscule Schubert variety X(w) a unique quiver Qw. The definition
a priori depends on the choice of a reduced expression but is unchanged under commuting
relations. In the minuscule setting this implies that the following definitions do not depend on the
chosen reduced expression. Fix a reduced expression w = sβ1 · · · sβr of w (recall that w is in W^P,
the set of minimal length representatives of W/W_P) where for all i ∈ [1, r], we have βi ∈ S.
DEFINITION 2.1. — (ı) The successor s(i) and the predecessor p(i) of an element i ∈ [1, r] are the
elements s(i) = min{j ∈ [1, r] / j > i and βj = βi} and p(i) = max{j ∈ [1, r] / j < i and βj = βi}.
(ıı) Denote by Qw the quiver whose set of vertices is the set [1, r] and whose arrows are given
in the following way: there is an arrow from i to j if and only if ⟨β∨j, βi⟩ ≠ 0 and i < j < s(i) (or
only i < j if s(i) does not exist).
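As an illustration of this construction, the following Python sketch computes the successor map and the arrow set of Qw from a reduced word. The encoding of the simple roots as integers and the Cartan-pairing helper `pairing` are assumptions made for the example only; they are not notation from this paper.

# Sketch: build the quiver Q_w of Definition 2.1 from a reduced word.
# Vertices are 0-based here, while the paper uses [1, r].
# `word` is the list of simple roots (beta_1, ..., beta_r) encoded as integers,
# and `pairing(a, b)` is a hypothetical helper returning <a^vee, b> for the
# (simply-laced) Cartan matrix of G.

def successor(word, i):
    """s(i): the smallest j > i with beta_j = beta_i, or None if it does not exist."""
    for j in range(i + 1, len(word)):
        if word[j] == word[i]:
            return j
    return None

def quiver_arrows(word, pairing):
    """Arrows i -> j with <beta_j^vee, beta_i> != 0 and i < j < s(i) (or only i < j if s(i) is None)."""
    arrows = []
    for i in range(len(word)):
        s_i = successor(word, i)
        upper = s_i if s_i is not None else len(word)
        for j in range(i + 1, upper):
            if pairing(word[j], word[i]) != 0:
                arrows.append((i, j))
    return arrows

For the quiver of Example 0.5 one would feed a reduced word of the minimal length representative of (2357146) together with the type A6 Cartan pairing.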
Remark 2.2. — (ı) This quiver comes with a coloration of its vertices by simple roots via the
map β : [1, r] → S such that β(i) = βi.
(ıı) There is a natural order on the quiver Qw given by i ⪯ j if there is an oriented path from j
to i. Beware that this order is the reverse of the one defined in [Pe07].
(ııı) Note that if we denote by Q̟ the quiver obtained from the longest element in W^P, then
the quiver Qw is a subquiver of Q̟. The quivers of Schubert subvarieties are exactly the order
ideals in the quiver Q̟. We will call such a quiver reduced (meaning that it corresponds to a reduced
expression of an element in W^P; see [Pe07] for more details on the shape of reduced quivers).
Recall also that we defined in [Pe07] some combinatorial objects associated to the quiver Qw.
DEFINITION 2.3. — (ı) We call peak any vertex of Qw maximal for the partial order 4. We
denote by Peaks(Qw) the set of peaks of Qw.
(ıı) We call hole of the quiver Qw any vertex i of Q̟ satisfying one of the following properties:
• the vertex i is in Qw but p(i) ∉ Qw and there are exactly two vertices j1 < i and j2 < i in Qw
with ⟨β∨i, βjk⟩ ≠ 0 for k = 1, 2;
• the vertex i is not in Qw, s(i) does not exist in Q̟ and there exists j ∈ Qw with ⟨β∨i, βj⟩ ≠ 0.
Because the vertex of the second type of holes is not a vertex in Qw we call such a hole a virtual
hole of Qw. We denote by Holes(Qw) the set of holes of Qw.
(ııı) The height h(i) of a vertex i is the largest positive integer n such that there exists a sequence
(ik)k∈[1,n] of vertices with i1 = i, in = r and such that there is an arrow from ik to ik+1 for all
k ∈ [1, n − 1].
Many geometric properties of the Schubert variety X(w) can be read off its quiver. In particular
we proved in [Pe07, Corollary 4.12]:
PROPOSITION 2.4. — A Schubert subvariety X(w′) in X(w) is stable under Stab(X(w)) if and
only if β(Holes(Qw′)) ⊂ β(Holes(Qw)).
An easy consequence of this fact and the result by M. Brion and P. Polo that the smooth locus
of X(w) is the dense Stab(X(w))-orbit is the following:
PROPOSITION 2.5. — A Schubert variety X(w) is smooth if and only if all the holes of its
quiver Qw are virtual.
We will be more precise in Theorem 3.2 and we will describe the irreducible components of
the singular locus and the generic singularity of this component in terms of the quiver. The
Gorenstein property of the variety is also easy to detect on the quiver, as we proved in [Pe07, Corollary
4.19]:
PROPOSITION 2.6. — A Schubert variety X(w) is Gorenstein if and only if all the peaks of its
quiver Qw have the same height.
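To make Propositions 2.4–2.6 concrete, here is a small sketch that computes peaks and heights for a quiver given as a list of arrows (in the conventions of Definition 2.1) and tests the criterion of Proposition 2.6. It is only an illustration under stated assumptions: holes, which also require the ambient quiver Q̟, are not treated, and the height is computed as the longest directed path down to the minimal vertex.

# Sketch: peaks, heights and the Gorenstein test of Proposition 2.6.
# `vertices` is a list of vertex labels and `arrows` a list of pairs (i, j)
# as produced by the construction of Definition 2.1 (arrows go "down").
# We assume every vertex has a directed path to the unique sink m(w).

def peaks(vertices, arrows):
    """Peaks = vertices maximal for the order of Remark 2.2, i.e. with no incoming arrow."""
    targets = {j for (_, j) in arrows}
    return [v for v in vertices if v not in targets]

def heights(vertices, arrows, sink):
    """h(i) = number of vertices on a longest directed path from i down to the sink."""
    out = {v: [] for v in vertices}
    for i, j in arrows:
        out[i].append(j)
    memo = {}
    def h(v):
        if v not in memo:
            memo[v] = 1 if v == sink else 1 + max(h(j) for j in out[v])
        return memo[v]
    return {v: h(v) for v in vertices}

def is_gorenstein(vertices, arrows, sink):
    """Proposition 2.6: X(w) is Gorenstein iff all peaks of Q_w have the same height."""
    hs = heights(vertices, arrows, sink)
    return len({hs[p] for p in peaks(vertices, arrows)}) <= 1

On the quiver of Example 0.5 such a test reports the two different peak heights that distinguish the Gorenstein hole from the non-Gorenstein one.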
3 Generic singularities of minuscule Schubert varieties
In this section, we go one step further in the direction of reading on the quiver Qw the geometric
properties of X(w). We will translate the results of M. Brion and P. Polo [BP99] on the irreducible
components of the singular locus of X(w) and the singularity at a generic point of such a component
in terms of the quiver Qw. We will need the following notations:
DEFINITION 3.1. — (ı) Let i be a vertex of Qw; we define the subquiver Q^i_w of Qw as the full
subquiver on the set of vertices {j ∈ Qw : j < i}. We denote by Qw,i the full subquiver of Qw on
the vertices of Qw \ Q^i_w. We denote by w^i (resp. wi) the elements in W^P associated to the
quivers Q^i_w (resp. Qw,i).
(ıı) A hole i of the quiver Qw is said to be essential if it is not virtual and if there is no hole in
the subquiver Q^i_w.
(ııı) Following M. Brion and P. Polo, denote by J the set β(Holes(Qw)).
We then prove the following:
THEOREM 3.2. — (ı) The set of irreducible components of the singular locus of X(w) is in one
to one correspondence with the set of essential holes of the quiver Qw. In particular, if i is an
essential hole of Qw, the corresponding irreducible component is the Schubert subvariety X(wi) of
X(w) whose quiver is Qw,i.
(ıı) Furthermore, the singularity of X(w) at a generic point of X(wi) is the same as the singularity
at the B-fixed point of the Schubert variety X(w^i) whose quiver is Q^i_w.
Remark 3.3. — The singularity of the B-fixed point in X(w^i) is described in [BP99].
Proof — This result is a reformulation of the main results of M. Brion and P. Polo [BP99].
Proposition 2.4 shows that the essential holes are in one to one correspondence with maximal
Schubert subvarieties in X(w) stable under Stab(X(w)) and that if i is an essential hole, then the
corresponding Schubert subvariety X(wi) is associated to the quiver Qw,i. According to [BP99],
these are the irreducible components of the singular locus.
To describe the singularity of X(wi), M. Brion and P. Polo define two subsets I and I′ of the
set of simple roots as follows:
• the set I is the union of the connected components of J ∩ wi(RP) adjacent to β(i);
• the set I′ is the union I ∪ {β(i)}.
We describe these sets thanks to the quiver.
PROPOSITION 3.4. — The set I′ is β(Q^i_w).
Proof — The elements in J ∩ wi(RP) are the simple roots γ ∈ J such that wi^{-1}(γ) ∈ RP.
Thanks to Lemma 3.5, these elements are the simple roots in J neither in β(Holes(Qw,i)) nor in
β(Peaks(Qw,i)).
An easy (but tedious for types E6 and E7) inspection of the quivers shows that I′ = β(Q^i_w). A
uniform proof of this statement is possible but needs an involved case analysis on the quivers. □
LEMMA 3.5. — Let β be a simple root, then we have
1. w^{-1}(β) ∈ R− \ R−_P if β ∈ β(Peaks(Qw)),
2. w^{-1}(β) ∈ R+ \ R+_P if β ∈ β(Holes(Qw)) = J,
3. w^{-1}(β) ∈ R+_P otherwise.
Proof — Let w = sβ1 · · · sβr be a reduced expression for w; we want to compute w^{-1}(β) =
sβr · · · sβ1(β). We proceed by induction and deal with the three cases at the same time.
1. Take first β ∈ β(Peaks(Qw)); we may assume that β1 = β, so that w^{-1}(β) = sβr · · · sβ2(−β). Let
i ∈ Peaks(Qw) be such that β(i) = β; the quiver obtained by removing i has s(i) as a hole (possibly
virtual). We may apply induction and the result of case 2.
2.a. Let β ∈ J. Assume first that there is no k ∈ Qw with β(k) = β. Then there exists an
i ∈ Qw such that ⟨β∨, βi⟩ ≠ 0. Let us prove that such a vertex i is unique. Indeed, the support
of w is contained in a subdiagram D of the Dynkin diagram not containing β. The diagram D
contains the simple root α corresponding to P (except if X(w) is a point, in which case w = Id
and the lemma is easy). The quiver Qw is in particular contained in the quiver of the minuscule
homogeneous variety associated to α ∈ D. It is easy to check on these quivers (see [Pe07] for
the shape of these quivers) that there is a unique such vertex i.
Now consider the quivers Q^i_w and Qw,i. Recall that we denote by w^i and wi the associated
elements in W. We have w = w^i wi. We compute (w^i)^{-1}(β): because all simple roots β(x)
for x ∈ Q^i_w with x ≠ i are orthogonal to β, we have (w^i)^{-1}(β) = sβi(β) = β + βi. We then
have w^{-1}(β) = wi^{-1}(β + βi). Because i was the only vertex such that ⟨β∨, βi⟩ ≠ 0, we have
wi^{-1}(β) = β ∈ R+, and by induction (note that i is now a hole of Qw,i) we have wi^{-1}(βi) ∈ R+ \ R+_P,
and we have the result.
2.b. Now assume that there exists k ∈ Holes(Qw) with β(k) = β and let i be a vertex maximal for
the property ⟨β∨, βi⟩ ≠ 0. Remark that we have k < i. Consider one more time the quivers Q^i_w
and Qw,i and the elements w^i and wi. We have w^{-1}(β) = wi^{-1}(βi + β). But as before we have by
induction wi^{-1}(βi) ∈ R+ \ R+_P, so that we can conclude by induction as soon as k is not a peak of
Qw,i. But because k is a hole, there exists a vertex j ∈ Qw with j ≠ i and such that there is an
arrow j → k in Qw. Because i was taken maximal, j is a vertex of Qw,i and k is not a peak of this
quiver.
3. If β is not in the support of w and is not in β(Holes(Qw)), then w^{-1}(β) = β ∈ R+_P.
Let β be in β(Qw) but neither in β(Holes(Qw)) nor in β(Peaks(Qw)), and let k be the highest vertex
such that β(k) = β. There exists a unique vertex i ∈ Qw such that i ≻ k and ⟨β∨, β(i)⟩ ≠ 0. We
have w^{-1}(β) = wi^{-1}(βi + β) and the vertex k is a peak of Qw,i, so that wi = sβ(k)wk = sβwk and
w^{-1}(β) = wk^{-1}(βi). Now it is easy to see that either s(i) does not exist, in which case it is not a
virtual hole, or it exists but is neither a peak nor a hole of Qw,k. We conclude by induction on the
third case. □
The Theorem is now a corollary of the description of the singularities thanks to I and I′ done
by M. Brion and P. Polo. □
Remark 3.6. — In their article M. Brion and P. Polo also deal with the cominuscule Schubert
varieties. We believe that, in that case, Theorem 0.3 should hold true as well as Corollary 0.4.
It is now easy to decide which generic singularity is Gorenstein:
COROLLARY 3.7. — Let i be an essential hole of the quiver Qw. The generic point of the
irreducible component X(wi) of the singular locus is Gorenstein if and only if all the peaks of Q^i_w
are of the same height.
We describe the Schubert subvarieties X(w′) in X(w) that are expected to be Gorenstein at
their generic point by the conjecture of A. Woo and A. Yong. Let us give the following
DEFINITION 3.8. — (ı) An essential hole is said to be Gorenstein if the generic point of the
associated irreducible component of the singular locus is in the Gorenstein locus.
(ıı) A Schubert subvariety X(w′) in X(w) is said to have the property (WY) if the generic point
of any irreducible component of the singular locus of X(w) containing X(w′) is in the Gorenstein
locus of X(w).
We have the following:
PROPOSITION 3.9. — Let X(w′) be a Schubert subvariety of the Schubert variety X(w). If the
generic point of X(w′) is Gorenstein in X(w), then X(w′) has the property (WY).
Proof — Let X(v) be an irreducible component of the singular locus of X(w) containing X(w′).
Because the property of being non-Gorenstein is stable under closure, this implies that the generic
point of X(v) is Gorenstein in X(w). □
Remark that, because all the irreducible components of the singular locus of X(w) are stable
under Stab(X(w)), the property (WY) needs only to be checked on Stab(X(w))-stable Schubert
subvarieties.
PROPOSITION 3.10. — (ı) The Schubert subvarieties X(w′) in X(w) stable under Stab(X(w))
are exactly those such that the associated quiver Qw′ satisfies
Qw′ = ⋂_{i ∈ Holes(Qw)} Qw,s^{ki}(i)
where the (ki)i∈Holes(Qw) are integers greater than or equal to −1 (if ki = −1, the quiver Qw,s^{ki}(i) is Qw
by definition).
(ıı) A Stab(X(w))-stable Schubert subvariety X(w′) of X(w) has the property (WY) if and only
if the only essential holes in the difference Qw \ Qw′ are Gorenstein. Equivalently, writing
Qw′ = ⋂_{i ∈ Holes(Qw)} Qw,s^{ki}(i),
this holds if and only if the only holes of the quivers (Q^{s^{ki}(i)}_w)_{i∈Holes(Qw)} are Gorenstein holes. Another
equivalent formulation is that Qw′ contains all the non-Gorenstein essential holes of Qw.
Proof — (ı) Consider the subquiver Qw′ in Qw and for each hole i of Qw define the integer
ki = min{k ≥ 0 : s^k(i) ∈ Qw′} − 1. Because of the fact (see for example [LMS79]) that the
strong and weak Bruhat orders coincide for minuscule Schubert varieties, the quiver Qw′ has to be
contained in the intersection
Q′ = ⋂_{i ∈ Holes(Qw)} Qw,s^{ki}(i).
We would therefore need to remove some vertices from Q′ to get Qw′. But removing a vertex j of the
quiver Q′ (it has to be a peak of Q′) creates a hole in s(j) (or a virtual hole in j if s(j) does not exist).
Because X(w′) is Stab(X(w))-stable, the last removed vertex j is such that β(j) ∈ β(Holes(Qw)).
This implies that no more vertices can be removed from Q′ to get Qw′, and in particular Qw′ = Q′.
(ıı) The Schubert subvariety has the property (WY) if and only if all the irreducible components
X(wi) of the singular locus of X(w) containing X(w′) are such that i is a Gorenstein hole. But
X(w′) is contained in X(wi) if and only if Qw′ is contained in Qw,i. This is equivalent to the fact
that Q^i_w is contained in Qw \ Qw′ and the proof follows. □
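Stated with sets of vertices, the last formulation above (and hence the criterion of Theorem 0.3) is immediate to check by machine. The sketch below assumes that the essential holes of Qw and their Gorenstein status have already been determined (for instance via Corollary 3.7); the input names are ours and the data in the usage lines are toy values.

# Sketch: the criterion of Proposition 3.10 (ii) / Theorem 0.3.
# `essential_holes` maps each essential hole of Q_w to True if it is Gorenstein
# and to False otherwise; `subquiver_vertices` is the vertex set of Q_{w'}.

def has_property_WY(essential_holes, subquiver_vertices):
    """X(w') has (WY) iff Q_{w'} contains every non-Gorenstein essential hole of Q_w."""
    non_gorenstein = {i for i, gor in essential_holes.items() if not gor}
    return non_gorenstein <= set(subquiver_vertices)

# Toy usage: hole h1 is non-Gorenstein, hole h2 is Gorenstein.
holes = {"h1": False, "h2": True}
print(has_property_WY(holes, {"h1", "h2"}))  # True: the non-Gorenstein hole h1 is kept
print(has_property_WY(holes, {"h2"}))        # False: h1 has been removed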
4 Relative canonical model and Gorenstein locus
In this section, we recall the explicit construction given in [Pe07] of the relative canonical model
of X(w). Recall that we described in [Pe07] the Bott-Samelson resolution π : X̃(w) → X(w) as a
configuration variety à la Magyar [Ma98]:
X̃(w) ⊂ ∏_{i ∈ Qw} G/Pβi
where Pβi is the maximal parabolic associated to the simple root βi. The map π : X̃(w) → X(w)
is given by the projection ∏_{i ∈ Qw} G/Pβi → G/Pβm(w), where m(w) is the smallest element in Qw.
We define a partition on the peaks of the quiver Qw and a partition of the quiver itself:
DEFINITION 4.1. — (ı) Define a partition (Ai)i∈[1,n] of Peaks(Qw) by induction: A1 is the set of
peaks with minimal height and Ai+1 is the set of peaks in Peaks(Qw) \ ⋃_{k=1}^{i} Ak with minimal height
(the integer n is the number of different values the height function takes on the set Peaks(Qw)).
(ıı) Define a partition (Qw(i))i∈[1,n] of Qw by induction:
Qw(i) = {x ∈ Qw : ∃ j ∈ Ai with x ⪯ j, and x ⋠ k for all k ∈ ⋃_{j>i} Aj}.
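The first part of this definition is a simple grouping of the peaks by height; a minimal sketch, assuming the heights of the peaks have already been computed (for instance as in the sketch after Proposition 2.6), is the following.

# Sketch: the partition (A_1, ..., A_n) of Peaks(Q_w) from Definition 4.1 (i).
# `peak_heights` maps each peak to its height; A_1 collects the peaks of minimal
# height, A_2 those of the next height value, and so on.

def peak_partition(peak_heights):
    groups = {}
    for peak, h in peak_heights.items():
        groups.setdefault(h, set()).add(peak)
    return [groups[h] for h in sorted(groups)]   # n = number of distinct heights

# Toy usage: three peaks with two distinct heights.
print(peak_partition({"p1": 2, "p2": 2, "p3": 5}))   # [{'p1', 'p2'}, {'p3'}]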
We proved in [Pe07] that these quivers Qw(i) are quivers of minuscule Schubert varieties and
in particular have a minimal element mw(i). We defined the variety X̂(w) as the image of the
Bott-Samelson resolution X̃(w) (seen as a configuration variety) in the product ∏_{i=1}^{n} G/Pβmw(i).
Because mw(n) = m(w) we have a map π̂ : X̂(w) → X(w) and a factorisation of π as
X̃(w) → X̂(w) → X(w),
where the map X̃(w) → X̂(w) is denoted by π̃.
We proved the following result in [Pe07]:
THEOREM 4.2. — (ı) The variety X̂(w) together with the map π̂ realise X̂(w) as the relative
canonical model of X(w).
(ıı) The variety X̂(w) is a tower of locally trivial fibrations with fibers the Schubert varieties
associated to the quivers Qw(i). In particular X̂(w) is Gorenstein.
We will use this resolution to prove our main result. Indeed, we will prove that the generic fibre
of the map π̂ : X̂(w) → X(w) above a (WY) Schubert subvariety X(w′) is a point. In other words,
the map π̂ is an isomorphism on an open subset of X(w′). As a consequence, the generic point of
X(w′) will be in the Gorenstein locus.
Let us recall some facts on X̃(w) and X̂(w) (see [Pe07]):
FACT 4.3. — (ı) To each vertex i of Qw one can associate a divisor Di on X̃(w) and all these
divisors intersect transversally.
(ıı) For K a subset of the vertices of Qw, we denote by ZK the transverse intersection of the
Di for i ∈ K.
(ııı) The image of the closed subset ZK by the map π is the Schubert variety X(wK) whose
quiver QwK is the biggest reduced subquiver of Qw not containing the vertices in K.
The quiver Qw(i) defines an element w(i) in W and the fact that these quivers realise a partition
of Qw implies that we have an expression w = w(1) · · · w(n) with l(w) = ∑_{i=1}^{n} l(w(i)). We prove the
following generalisation of this fact:
PROPOSITION 4.4. — Let K be a subset of the vertices of Qw. The image of the closed subset
ZK by the map π̃ is a tower of locally trivial fibrations with fibers the Schubert varieties X(wK(i))
whose quiver QwK(i) is the biggest reduced subquiver of Qw(i) not containing the vertices of K ∩ Qw(i).
This variety is the image by π̃ of Z_{∪_{i=1}^{n} QK(i)}.
Proof — As we explained in [Pe07, Proposition 5.9], the Bott-Samelson resolution is the quotient
of the product ∏_{i ∈ Qw} Ri, where the Ri are certain minimal parabolic subgroups, by a product of
Borel subgroups ∏_{i=1}^{r} Bi. The variety X̂(w) is the quotient of a product ∏_{i=1}^{n} Ni of parabolic
subgroups, such that the multiplication in G maps ∏_{k ∈ Qw(i)} Rk to Ni, by a product ∏_{i=1}^{n} Mi of
parabolic subgroups. The map π̃ is induced by the multiplication from ∏_{i ∈ Qw} Ri to ∏_{i=1}^{n} Ni. In
particular, this means that for i ∈ [1, n] fixed, the map ∏_{k ∈ Qw(i)} Rk → Ni induces the map from the
Bott-Samelson resolution X̃(w(i)) to X(w(i)). We may now apply part (ııı) of the preceding fact
because the quiver Qw(i) is minuscule. □
We now remark that the quivers Qw′ associated to Schubert subvarieties X(w′) in the Schubert
variety X(w) having the property (WY) have a nice behaviour with respect to the partition
(Qw(i))i∈[1,n] of Qw.
PROPOSITION 4.5. — Let X(w′) be a Stab(X(w))-stable Schubert subvariety of X(w) having
the property (WY). Let us denote by (Cj)j∈[1,k] the connected components of the subquiver Qw \Qw′
of Qw. Then for each j, there exists a unique ij ∈ [1, n] such that Cj ⊂ Qw(ij).
Proof — Recall from Proposition 3.10 that, denoting by GorHol(Qw) the set of Gorenstein holes
in Qw, we may write
Qw \ Qw′ = ⋃_{i ∈ GorHol(Qw)} Q^{s^{ki}(i)}_w
with ki an integer greater than or equal to −1 and with the additional condition that Q^{s^{ki}(i)}_w contains
only Gorenstein holes. Because the quivers Q^{s^{ki}(i)}_w are connected, any connected component of
Qw \ Qw′ is a union of such quivers. But we have the following:
LEMMA 4.6. — Let i ∈ Holes(Qw) and assume that Q^{s^k(i)}_w meets at least two subquivers of the
partition (Qw(i))i∈[1,n]; then Q^{s^k(i)}_w contains a non-Gorenstein hole.
Proof — The quiver Q^{s^k(i)}_w meets two subquivers of the partition (Qw(i))i∈[1,n]; in particular it
contains two peaks of Qw of different heights. By connectedness of Q^{s^k(i)}_w, we may assume that
these two peaks are adjacent. In particular there is a hole between these two peaks and this hole is
not Gorenstein and is contained in Q^{s^k(i)}_w. □
The proposition follows. □
We describe the inverse image by π̂ of a Stab(X(w))-stable Schubert subvariety of X(w) having
the property (WY). To do this, first remark that the map π is B-equivariant and that the inverse
image π^{-1}(X(w′)) has to be a union of closed subsets ZK for some subsets K of Qw. Let ZK ⊂
π^{-1}(X(w′)) be such that π : ZK → X(w′) is dominant. We will denote by Qw′(i) the intersection
Qw′ ∩ Qw(i) and by w′(i) the associated element in W.
PROPOSITION 4.7. — The image of ZK in X̂(w) by π̃ is the same as the image of ZQw\Qw′ .
Proof — Thanks to Proposition 4.4 we only need to compute the quivers QwK(i). Consider the
decomposition into connected components Qw \ Qw′ = ∪_{j=1}^{k} Cj. We may decompose K accordingly
as K = ∪_{j=1}^{k} Kj where Kj = K ∩ Cj. But because each connected component of Qw \ Qw′ is
contained in one of the quivers (Qw(i))i∈[1,n], this implies that QwK(i) is exactly QwK ∩ Qw(i),
where QwK is the biggest reduced quiver in Qw not containing the vertices in K (see Fact 4.3).
We get QwK = Qw′ (because ZK is sent onto X(w′)) and the result follows. □
THEOREM 4.8. — Let X(w′) be a Schubert subvariety in X(w). Then X(w′) has the property
(WY) if and only if its generic point is in the Gorenstein locus of X(w).
Proof — We have already seen in Proposition 3.9 that if the generic point of X(w′) is in the
Gorenstein locus of X(w) then X(w′) has the property (WY).
Conversely, let X(w′) be a Schubert subvariety having the property (WY). The previous proposition
implies that its inverse image π̂^{-1}(X(w′)) is the variety π̃(Z_{Qw\Qw′}). But this implies that
the map π̂ : π̃(Z_{Qw\Qw′}) = π̂^{-1}(X(w′)) → X(w′) is birational (because the varieties have the
same dimension, given by the number of vertices in the quiver). In particular, the map π̂ is an
isomorphism on an open subset of X(w) meeting X(w′) non-trivially. Therefore, because X̂(w) is
Gorenstein, the generic point of X(w′), viewed as a point of X(w), lies in the Gorenstein locus. □
References
[BL00] Sara Billey and Venkatramani Lakshmibai : Singular loci of Schubert varieties. Progress in Math-
ematics, 182. Birkhäuser Boston, Inc., Boston, MA, 2000.
[BW03] Sara Billey and Gregory Warrington: Maximal singular loci of Schubert varieties in SL(n)/B.
Trans. Amer. Math. Soc. 355 (2003), no. 10, 3915–3945.
[Bo68] Nicolas Bourbaki : Éléments de mathématique. Fasc. XXXIV. Groupes et algèbres de Lie.
Chapitre IV : Groupes de Coxeter et systèmes de Tits. Chapitre V: Groupes engendrés par des
réflexions. Chapitre VI: systèmes de racines. Actualités Scientifiques et Industrielles, No. 1337
Hermann, Paris 1968.
[BP99] Michel Brion and Patrick Polo: Generic singularities of certain Schubert varieties. Math. Z. 231
(1999), no. 2, 301–324.
[Co03] Aurélie Cortez : Singularités génériques et quasi-résolutions des variétés de Schubert pour le
groupe linéaire. Adv. Math. 178 (2003), no. 2, 396–445.
[KLR03] Christian Kassel, Alain Lascoux and Christophe Reutenauer : The singular locus of a Schubert
variety. J. Algebra 269 (2003), no. 1, 74–108.
[LMS79] Venkatramani Lakshmibai, Chitikila Musili and Conjeerveram S. Seshadri : Geometry of G/P .
III. Standard monomial theory for a quasi-minuscule P . Proc. Indian Acad. Sci. Sect. A Math.
Sci. 88 (1979), no. 3, 93–177.
[LS90] Venkatramani Lakshmibai and B. Sandhya: Criterion for smoothness of Schubert varieties in
Sl(n)/B. Proc. Indian Acad. Sci. Math. Sci. 100 (1990), no. 1, 45–52.
[Ma98] Peter Magyar : Borel-Weil theorem for configuration varieties and Schur modules. Adv. Math.
134 (1998), no. 2, 328–366.
[Ma01a] Laurent Manivel : Le lieu singulier des variétés de Schubert. Internat. Math. Res. Notices 2001,
no. 16, 849–871.
[Ma01b] Laurent Manivel : Generic singularities of Schubert Varieties. math.AG/0105239 (2001).
[Pe07] Nicolas Perrin: Small-resolutions of minuscule Schubert varieties. Preprint math.AG/0601117 to
appear in Compositio Mathematica (2007).
[WY06a] Alexander Woo and Alexander Yong: When is a Schubert variety Gorenstein? Adv. Math. 207
(2006), no. 1, 205–220.
[WY06b] Alexander Woo and Alexander Yong: Governing singularities of Schubert varieties. Preprint
math.AG/0603273 (2006).
Université Pierre et Marie Curie - Paris 6
UMR 7586 — Institut de Mathématiques de Jussieu
175 rue du Chevaleret
75013 Paris, France.
email : [email protected]
|
0704.0896 | Model C critical dynamics of random anisotropy magnets | Model C critical dynamics of random anisotropy
magnets
M. Dudka1,2, R. Folk2, Yu. Holovatch1,2 and G. Moser3
1 Institute for Condensed Matter Physics, National Acad. Sci. of Ukraine, UA-79011
Lviv, Ukraine
2 Institut für Theoretische Physik, Johannes Kepler Universität Linz, A–4040 Linz,
Austria
3 Institut für Physik und Biophysik, Universität Salzburg, A–5020 Salzburg, Austria
Abstract. We study the relaxational critical dynamics of the three-dimensional
random anisotropy magnets with the non-conserved n-component order parameter
coupled to a conserved scalar density. In random anisotropy magnets the structural disorder is
present in the form of local quenched anisotropy axes of random orientation. When the anisotropy
axes are randomly distributed along the edges of the n-dimensional hypercube, the asymptotic
dynamical critical properties coincide with those of the random-site Ising model. However,
structural disorder gives rise to considerable effects in the non-asymptotic critical dynamics. We
investigate this
phenomenon by a field-theoretical renormalization group analysis in the two-loop order.
We study critical slowing down and obtain quantitative estimates for the effective and
asymptotic critical exponents of the order parameter and scalar density. The results
predict complex scenarios for the effective critical exponent approaching an asymptotic
regime.
PACS numbers: 05.50.+q, 05.70.Jk, 61.43.-j, 64.60.Ak, 64.60.Ht
E-mail: [email protected], [email protected], [email protected]
http://arxiv.org/abs/0704.0896v1
1. Introduction
In this paper, we address the peculiarities of criticality under an influence of the random
anisotropy of structure. To be more specific, given a reference system is a 3d magnet with
n-component order parameter which below the second order phase transition point Tc
characterizes a ferromagnetic state, what will be the impact of random anisotropy [1–3]
on the critical dynamics [4,5] of this transition? It appears, that contrary to the general
believe that even weak random anisotropy destroys ferromagnetic long-range order at
d = 3, this is true only for the isotropic random axis distribution [6]. Therefore, we will
study a particular case, when the second order phase transition survives and, moreover,
it remains in the random-Ising universality class [7, 8] for any n. A particular feature
of 3d systems which belong to the random-Ising universality class is that their heat
capacity does not diverge at Tc (it is the isothermal magnetic susceptibility which
manifests singularity) [9]. Again, general arguments state [10,11] that for such systems
the relaxational critical dynamics of the non-conserved order parameter coupled to a
conserved density, model C dynamics, degenerates to purely relaxation model without
any couplings to conserved densities (model A). Nevertheless, this statement is true only
in the asymptotics [12,13] (i.e. at Tc, which in fact is never reached in experiments or in
simulations). As we will show in the paper, the combined influence of two different factors, namely
randomness of structure and coupling of dynamical modes, leads to a rich effective critical
behavior which possesses many new and unexpected features.
Dynamical properties of a system near the critical point are determined by the
behavior of its slow densities. In addition to the order parameter density ϕ these are
the conserved densities. Here, we consider the case of one conserved density m. For the
description of critical dynamics the characteristic time scales for the order parameter,
tϕ, and for the conserved density, tm, are introduced. Approaching the critical point,
where the correlation length ξ diverges, they grow according to the scaling laws
tϕ ∼ ξ^z, (1)
tm ∼ ξ^{zm}. (2)
These power laws define the dynamical critical exponents of the order parameter, z,
and of the conserved densities, zm. The conserved-density dynamical exponent may be
different from that of the order parameter.
The simplest dynamical model taking into account conserved densities is model
C, [4, 14] which contains a static coupling between non-conserved n-dimensional order
parameter ϕ and scalar conserved density m. Being quite simple, the model can be
applied to the description of different physical systems. In particular, in a lattice
model of intermetallic alloys [15] the non-conserved order parameter corresponds to
differences in the concentration of atoms of certain kind between the odd and even
sublattices. It is coupled to a conserved quantity – the concentration of atoms of
this kind in the full system. In the supercooled liquids the fraction of locally favored
Model C critical dynamics of random anisotropy magnets 3
structures is non-conserved “bond order parameter”, coupled to the conserved density
of a liquid [17]. Systems containing annealed impurities with long relaxational times [18]
manifest certain similarity with the model C as well.
Dynamical properties of a model with coupling to a conserved density have been studied numerically
less than those of a model without any coupling to secondary densities.
This may be a consequence of the complexity of the numerical algorithms, which turn
out to be much slower than for the simpler model. Simulations were performed for an
Ising antiferromagnet with conserved full magnetization and non-conserved staggered
magnetization (i.e. the order parameter) [19] and also for an Ising magnet with conserved
energy [20].
Theoretical analysis of model C critical dynamics was performed by means of
the field-theoretical renormalization group. The critical dynamical behavior of model C in
different regions of the d − n plane was analyzed by an ε = 4 − d expansion to first order in
ε [14]. The results led to speculations about the existence of an anomalous region
for 2 < n < 4, where the order parameter is much faster than the conserved density
and dynamic scaling is questionable. A recent two-loop calculation [21, 22] corrected the
results of Ref. [23] and showed the absence of the anomalous region 2 < n < 4.
For the 3d model C with order parameter dimension n = 1, the conserved density
leads to "strong" scaling: [21,22] the dynamical exponents z and zm coincide and are
equal to 2 + α/ν, where α and ν are the specific heat and the correlation length critical
exponents, respectively. For the Ising system (n = 1) the specific heat diverges
and α > 0. For a system with α < 0, that is for the physically interesting
cases n = 2, 3, the scalar density decouples from the order parameter density in the
asymptotic region. It means that for such values of n the order parameter scales with
the same dynamical critical exponent z as in the model A and the dynamical exponent
of the scalar density is equal to zm = 2. The importance of the sign of α was already
mentioned in Ref. [14].
A rich critical dynamical behavior has already been observed in systems with
structural disorder [18, 24–27]. Interest in this case is increased by the fact that real
materials are always characterized by some imperfection of their structure. Obviously,
models describing their properties should contain terms connected with structural
disorder of a certain type. For the static behavior of a system with quenched energy-coupled
disorder (e.g. dilution), the Harris criterion [28] states that disorder does not
lead to a new static universality class if the heat capacity of the pure system does not
diverge, that is if α < 0. It appears that in diluted systems α < 0 is always the case
(see Ref. [9]). The conclusion about the influence of the coupling between the order parameter and
the secondary density also applies in this case. The presence of a secondary density does
not affect the dynamical critical properties in the asymptotics [10]: order parameter
dynamics is the same as in an appropriate model A, and zm = 2. Nevertheless, as we
noted at the beginning, the coupling between the order parameter and the secondary
density considerably influences the non-asymptotic critical behavior [12, 13].
We are interested in the critical dynamics of systems with structural disorder of
another type, namely random anisotropy magnets. Their properties are described by
the random anisotropy model (RAM) introduced in Ref. [1]. In this spin lattice model
each spin is subjected to a local anisotropy of random orientation, which essentially is
described by a vector and therefore is defined only for n > 1. The Hamiltonian reads: [1]
H = −∑_{R,R′} J_{R,R′} ~S_R · ~S_{R′} − D̄ ∑_R (x̂_R · ~S_R)², (3)
where ~S_R = (S^1, ..., S^n) are n-component vectors located on the sites R of a d-dimensional
cubic lattice, D̄ > 0 is an anisotropy constant, and x̂_R is a random unit vector pointing in the
direction of the local anisotropy axis. The short-range interaction J_{R,R′} is assumed to be
ferromagnetic.
The static critical behavior of RAM was analyzed by many theoretical and
numerical investigations which could be compared with the critical properties of random
anisotropy magnets found in experiments (for a recent review see Ref. [3]). The results
of this analysis indicate that random anisotropy magnets do not show a second
order phase transition for an isotropic random axis distribution. However they possibly
undergo a second-order phase transition for an anisotropic distribution (for references
see reviews Refs. [2,3]). Renormalization group studies of the asymptotic [7,8,29–31] and
non-asymptotic properties [3] of RAM corroborated such a conclusion. For example, the
RAM with random axes distributed according to the so-called cubic distribution was shown
within the two-loop approximation to undergo a second-order phase transition governed by
the random Ising critical exponents, [3, 8] as first suggested in Ref. [7]. Recently this
result found its confirmation in a five-loop RG study [31]. The cubic distribution allows
x̂ to point only along one of the 2n directions of the axes k̂i of a (hyper)cubic lattice: [29]
p(x̂) = (1/2n) ∑_{i=1}^{n} [δ^{(n)}(x̂ − k̂_i) + δ^{(n)}(x̂ + k̂_i)], (4)
where δ^{(n)}(y) denotes the n-dimensional Dirac δ-function.
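To fix ideas, the following sketch draws local easy axes from the cubic distribution (4) and evaluates the Hamiltonian (3) for a small spin configuration on a chain. The nearest-neighbour coupling J, the anisotropy strength D, the chain length and all variable names are toy assumptions made only for this illustration.

import numpy as np

# Sketch: sample the cubic random-axis distribution (4) and evaluate the RAM
# Hamiltonian (3) for n-component spins on a 1d chain of L sites.

rng = np.random.default_rng(0)

def cubic_axes(L, n):
    """Each local axis points along +/- one of the n coordinate directions,
    all 2n choices being equally likely, as in Eq. (4)."""
    axes = np.zeros((L, n))
    axes[np.arange(L), rng.integers(0, n, size=L)] = rng.choice([-1.0, 1.0], size=L)
    return axes

def ram_energy(spins, axes, J=1.0, D=0.5):
    """H = -J sum_R S_R . S_{R+1}  -  D sum_R (x_R . S_R)^2, cf. Eq. (3)."""
    exchange = -J * np.sum(np.einsum('ij,ij->i', spins[:-1], spins[1:]))
    anisotropy = -D * np.sum(np.einsum('ij,ij->i', axes, spins) ** 2)
    return exchange + anisotropy

L, n = 8, 3
spins = rng.normal(size=(L, n))
spins /= np.linalg.norm(spins, axis=1, keepdims=True)   # unit-length classical spins
print(ram_energy(spins, cubic_axes(L, n)))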
Contrary to the static critical behavior of random anisotropy magnets their
dynamics was less investigated. Only dynamical models for systems with isotropic
distribution were briefly discussed in Refs. [32,33]. The critical dynamics was discussed
within model A, Ref. [34], and the dynamical exponents were calculated. However, it
does not give a comprehensive quantitative description since it is (i) restricted to the
isotropic distribution of the random axis and (ii) it is performed only within the first
non-trivial order of ε = 4− d expansion.
The model A critical dynamics of RAM with cubic random axis distribution was
analyzed within the two-loop approximation in Ref. [35]. Although the asymptotic dynamical
properties found coincide with those of the random-site Ising model, the non-asymptotic
behavior is strongly influenced by the presence of random anisotropy [35].
Besides the slow order parameter, additional slow conserved densities might be
present, for instance the energy density. Therefore, when considering the non-asymptotic
dynamical behavior of the RAM, an extension to model C is of interest. Indeed, there
exist magnets where the distribution of the local random axes is anisotropic (e.g. the
rare earth compounds, see Ref. [3]).
The structure of the paper is as follows: Section 2 presents the equations defining
the dynamical model and its Lagrangian, the renormalization is performed in Section 3,
where the asymptotic and effective dynamical critical exponents are defined. In Section
4 we give the expressions for the field-theoretic functions in two-loop order and the
resulting non-asymptotic behavior is discussed. Section 5 summarizes our study. Details
of the perturbation expansion are presented in the appendix.
2. Model equations
Here we consider the dynamical model for random anisotropy systems described by
(3) with random axis distribution (4). The structure of the equations of motion for
n-component order parameter ~ϕ0 and secondary density [4, 14] m0 is not changed by the
presence of random anisotropy:
∂ϕi,0/∂t = −Γ̊ ∂H/∂ϕi,0 + θϕi , i = 1 . . . n, (5)
∂m0/∂t = λ̊∇² ∂H/∂m0 + θm . (6)
The order parameter relaxes and the conserved density diffuses with the kinetic coefficients
Γ̊ and λ̊, respectively. The stochastic forces θϕi, θm obey the Einstein relations:
⟨θϕi(x, t) θϕj(x′, t′)⟩ = 2Γ̊ δ(x − x′) δ(t − t′) δij , (7)
⟨θm(x, t) θm(x′, t′)⟩ = −2λ̊∇² δ(x − x′) δ(t − t′) . (8)
The disorder-dependent effective static functional H describing the behavior of the system in
equilibrium reads:
H = ∫ d^d x { (1/2)[ |∇~ϕ0|² + ˚̃r |~ϕ0|² ] + (˚̃v/4!) |~ϕ0|⁴ − D0 (x̂ ~ϕ0)² + (1/2) m0² + (γ̊/2) m0 |~ϕ0|² − h̊ m0 }, (9)
where D0 is an anisotropy constant proportional to D̄ of Eq. (3), ˚̃r and ˚̃v depend on D̄
and the coupling of the usual φ4 model.
Integrating out the secondary density one reduces (9) to the usual Ginzburg-Landau-Wilson
model with a random anisotropy term and new parameters v̊ and r̊ connected to the model
parameters ˚̃r, ˚̃v, γ̊ and h̊ via the relations:
r̊ = ˚̃r + γ̊h̊ , v̊ = ˚̃v − 3γ̊² . (10)
We study the critical dynamics by applying the Bausch-Janssen-Wagner approach
[36] of dynamical field-theoretical renormalization group (RG). In this approach, the
critical behavior is studied on the basis of long-distance and long-time properties of
the Lagrangian incorporating features of dynamical equations of the model. The model
defined by expressions (5)-(9) within Bausch-Janssen-Wagner formulation [36] turns out
to be described by an unrenormalized Lagrangian:
ϕ̃0,iϕ̃0,i+
ϕ̃0,i
+Γ̊(̊µ̃−∇2)
ϕ0,i +
λ̊m̃0∇2m̃0 + m̃0
− λ̊∇2
Γ̊̊ṽϕ̃0,iϕ0,i
ϕ0,jϕ0,j +
2Γ̊D0(x̂ϕ̃0,i(t))(x̂ϕ0,i(t)) + Γ̊γ̊m0ϕ̃0,iϕ0,i−
λ̊̊γm̃0∇2ϕ0,iϕ0,i
,(11)
with auxiliary response fields ϕ̃i(t). There are two ways to average over the disorder
configurations for dynamics. The first way originates from statics and consists in using
the replica trick, [37] where N replicas of the system are introduced in order to facilitate
configurational averaging of the corresponding generating functional. Finally the limit
N → 0 has to be taken.
However we follow the second way proposed in Ref. [33]. There it was shown that
the replica trick is not necessary if one takes just the average of the Lagrangian with
respect to the distribution of random variables. The Lagrangian obtained in this way is
described by the following expression:
ϕ̃i,0
+Γ̊(̊µ̃−∇2)
ϕi,0−Γ̊ϕ̃i+
Γ̊̊ṽ
ϕj,0ϕj,0+
ϕ3i,0
+λ̊m̃0∇2m̃0 + m̃0
− λ̊∇2
m0 + Γ̊γ̊m0ϕ̃i,0ϕi,0−
λ̊γ̊m̃0∇2ϕi,0ϕi,0+
ϕ̃i,0(t)ϕi,0(t)
Γ̊2ů
ϕ̃j,0(t
′)ϕj,0(t
Γ̊2ẘ
ϕ̃i,0(t
′)ϕi,0(t
. (12)
In Eq. (12), the bare mass is ˚̃µ = ˚̃r − D/n, and bare couplings are ů > 0, ˚̃v > 0,
ẘ < 0. Terms with couplings ů and ẘ are generated by averaging over configurations
and the values of ů and ẘ are connected to the moments of distribution (4). Therefore
the ratio of the two couplings has to be ẘ/ů = −n. The ẙ-term in (12) does not
result from the averaging procedure but has to be included since it is generated in the
perturbational treatment. It can be of either sign.
3. RG functions
We perform the renormalization within the minimal subtraction scheme, introducing renormalization
factors Zai , ai = {{α}, {δ}}, leading to the renormalized parameters {α} =
{u, v, w, y, γ, Γ, λ} and renormalized densities {δ} = {ϕ, ϕ̃, m, m̃}. For the specific heat
we also need an additive renormalization Aϕ2 which leads to the function
Bϕ2(u,∆) = µ^ε Z²ϕ2 µ ∂/∂µ [µ^{−ε} Aϕ2] , (13)
with the scale parameter µ and the factor Zϕ2 that renormalizes the vertex with ϕ² insertion.
From the Z-factors one obtains the ζ-functions describing the critical properties
ζa({α}) = − d ln Za / d ln µ , (14)
Relations between the renormalization factors lead to corresponding relations between
the ζ-functions. In consequence for the description of the critical dynamics one needs
only ζ-functions of the couplings, ζui (ui = {u, v, w, y} for i = 1, 2, 3, 4), the order
parameter ζϕ, the auxiliary field ζϕ̃, ϕ
2-insertion ζϕ2 and also function Bϕ2 . In particular,
the ζ-function of the time scale ratio
introduced for the description of dynamic properties is related to the above ζ-functions:
ζϕ̃ − γ2Bϕ2 . (16)
The behavior of the model parameters under renormalization is described by the
flow equations
ℓ d{α}/dℓ = β{α} . (17)
The β-functions for the static model parameters have the following explicit form:
βui = ui(−ε + ζϕ + ζui), (18)
βγ = γ(−ε/2 + ζϕ2 + (γ²/2) Bϕ2). (19)
The dynamic β-function for the time scale ratio W reads
βW = WζW = W (
ζϕ̃ − γ2Bϕ2). (20)
The asymptotic critical behavior of the system is obtained from the knowledge of
the fixed points (FPs) of the flow equations (17). A FP {α∗} = {u∗, v∗, w∗, y∗, γ∗, W∗}
is defined as a simultaneous zero of the β-functions. The set of equations for the static
fourth-order couplings decouples from the other β-functions. Thus for each of the FPs
of the static fourth-order couplings {u∗i} one obtains two FP values of the static coupling γ
between the order parameter and the conserved density:
γ∗₁ = 0 and γ∗₂² = (ε − 2ζϕ2({u⋆i})) / Bϕ2({u⋆i}) = α / (ν Bϕ2({u⋆i})) , (21)
where α and ν are the heat capacity and correlation length critical exponents calculated
at the corresponding FP {u∗}. Inserting the obtained values for the static FPs into the
β-function (20) one finds the corresponding FP values of the time scale ratio W .
The stable FP accessible from the initial conditions corresponds to the critical
point of the system. A FP is stable if all eigenvalues ωi of the stability matrix ∂βαi/∂αj
calculated at this FP have positive real parts. The values of ωi also indicate how fast
the renormalized model parameters reach their fixed-point values.
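This stability criterion is easy to check numerically once the β-functions are coded. The sketch below uses schematic placeholder β-functions (not the two-loop functions (32)-(37) of this paper) and estimates the eigenvalues ωi of the stability matrix by finite differences; all names and numbers are assumptions for the illustration only.

import numpy as np

# Sketch: stability exponents omega_i as eigenvalues of d beta_{alpha_i}/d alpha_j
# evaluated at a fixed point.  beta() below is a schematic toy, not Eqs. (32)-(37).

def beta(alpha, eps=1.0):
    u, g = alpha                          # toy couplings: u and a gamma^2-like g
    return np.array([
        -eps * u + 3.0 * u**2,            # schematic beta_u
        g * (-0.5 * eps + u + 0.5 * g),   # schematic beta_gamma-like function
    ])

def stability_matrix(alpha_star, h=1e-6):
    """Finite-difference Jacobian of the beta functions at alpha_star."""
    d = len(alpha_star)
    jac = np.zeros((d, d))
    for j in range(d):
        step = np.zeros(d)
        step[j] = h
        jac[:, j] = (beta(alpha_star + step) - beta(alpha_star - step)) / (2.0 * h)
    return jac

alpha_star = np.array([1.0 / 3.0, 0.0])   # a zero of the toy beta functions
print(np.linalg.eigvals(stability_matrix(alpha_star)))   # FP stable if all real parts > 0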
From the structure of the β-functions we conclude that the stability of any FP with
respect to the parameters γ and W is determined solely by the derivatives of the
corresponding β-functions:
ωγ = ∂βγ/∂γ , ωW = ∂βW/∂W . (22)
Moreover, using (19) we can write:
ωγ = −(1/2)(ε − 2ζϕ2({ui})) + (3/2) γ² Bϕ2(u,∆) , (23)
which at the FP {α∗} leads to:
ωγ|{α}={α∗} = −α/(2ν) for γ∗₁ = 0 , (24)
ωγ|{α}={α∗} = α/ν for γ∗₂ ≠ 0 . (25)
Therefore, the stability with respect to the parameter γ is determined by the sign of the
specific heat exponent α. For a system with a non-diverging heat capacity (α < 0) at the
critical point, γ∗ = 0 is the stable FP. Static results show that the stable and accessible
FP is of random-site Ising type. In this case α < 0. This leads to the conclusion that
in the asymptotic region the secondary density decouples from the order parameter.
The critical exponents are defined by the FP values of the ζ-functions. For instance,
the asymptotic dynamical critical exponent z is expressed at the stable FP by:
z = 2 + ζΓ({α∗}), (26)
ζΓ({α}) =
ζϕ({ui})−
ζϕ̃({α}). (27)
In a similar way the dynamical critical exponent zm for the secondary density is defined as
zm = 2 + ζm({u∗i}, γ∗), (28)
where
ζm({ui}, γ) = γ² Bϕ2({ui}). (29)
Their effective counterparts in the non-asymptotic region are defined via the
solution of the flow equations (17) as
z^eff = 2 + ζΓ({ui(ℓ)}, γ(ℓ), W(ℓ)), (30)
z^eff_m = 2 + ζm({ui(ℓ)}, γ²(ℓ)). (31)
In the limit ℓ → 0 the effective exponents reach their asymptotic values. In the next
section we analyze the possible scenarios of effective dynamical behavior as well as the
approach to the asymptotic regime.
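In practice the effective exponents are obtained by integrating the flow equations (17) numerically and evaluating the ζ-functions along the trajectory. The sketch below illustrates this step with schematic β- and ζ-functions standing in for the actual two-loop expressions of the paper; the variable x = ln ℓ, the initial values and all function names are assumptions for the example.

import numpy as np
from scipy.integrate import solve_ivp

# Sketch: solve the RG flow  l d(alpha)/dl = beta(alpha)  in the variable x = ln(l)
# (so d(alpha)/dx = beta) and record an effective exponent
# z_eff(l) = 2 + zeta_Gamma(alpha(l)).  beta_toy and zeta_toy are placeholders,
# NOT the two-loop functions of this paper.

def beta_toy(x, alpha, eps=1.0):
    u, g = alpha
    return [-eps * u + 3.0 * u**2,
            g * (-0.5 * eps + u + 0.5 * g)]

def zeta_toy(alpha):
    u, g = alpha
    return 0.1 * u + g                # stands in for zeta_Gamma({u_i}, gamma, W)

x_grid = np.linspace(0.0, -100.0, 400)                     # x = ln(l), flow towards l -> 0
sol = solve_ivp(beta_toy, (0.0, -100.0), [0.05, 0.3], t_eval=x_grid, rtol=1e-8)

z_eff = 2.0 + np.array([zeta_toy(a) for a in sol.y.T])
print(z_eff[0], z_eff[-1])            # initial versus near-asymptotic effective exponent

Curves such as those shown in Figs. 2-4 correspond to this kind of calculation, with the resummed two-loop functions in place of the toy ones.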
4. Results
4.1. Asymptotic properties
The static two-loop RG functions of RAM with cubic random axis distribution in the
minimal substraction scheme agree with the results obtained in Ref. [3] using the replica
trick and read:
βu= − εu+
n + 2
uw+yu+
11 (n+ 2)
−5 (n + 2)
v2u−11
u2w− 5
uw2−11
u2y − 5
vuw −
vuy −
v2w −
w2v, (32)
βv= − εv+
v2+2vu+
wv+yv−
3n+ 14
11n+58
wvu, (33)
βw= − εw+
w2+2wu+
wv+yw−7
w3−29
w2u−41
wu2−31
v2w−11
w2y− 5
y2w−17
wuy−5n+ 34
wvu−5
vwy, (34)
βy = − εy +
y2 + 2yu+ 2yv + 2wy +
wv−17
y3 − 41
y2u−23
y2v−23
y2w−5n+ 82
v2y−41
w2y−2w2v−
n + 18
5n+ 82
vuw, (35)
, (36)
u2− 5
n + 2
12v2−5
wu−5yu
. (37)
Here, u, v, w, y stand for the renormalized couplings.
Given the expression for the function ζϕ2 , Eq. (37), the function βγ can be
constructed via Eq. (19) and the two-loop expression Bϕ2 = n/2.
In order to discuss the dynamical FPs it turns out to be useful to introduce the
parameter ρ = W/(1 + W ) which maps W and its FPs into a finite region of the
parameter space ρ. Then instead of the flow equation for W the flow equation for ρ
arises in (17):
ℓ dρ/dℓ = βρ({ui}, γ, ρ), (38)
where according to (20)
βρ({ui}, γ, ρ) = ρ(ρ− 1)(ζΓ({ui}, γ, ρ)− γ2Bϕ2({ui})). (39)
The function ζΓ in the above expression is obtained from Eq. (27) using the static
function ζϕ, Eq. (36), and the two-loop result for the dynamic function ζϕ̃ (calculated from
Eq. (A.8)). We get the following two-loop expression for ζΓ:
ζΓ = −
+ γ2ρ+
(6 ln (4/3)− 1)
(n + 2)
vy + y2
5u2 + (n + 2)uv + 10uw + 3uy + 3vw + 5w2 + 3wy
(n+ 2)
v + y
(1− 3 ln (4/3)) +
3 (n+ 2)
ln (4/3) + (1 + ρ) ln
1− ρ2
+ u+ w
ρ2 ln (ρ)
+ (3 + ρ) ln (1− ρ)
. (40)
The two-loop result [22] for the pure model C is recovered by setting in (40) the couplings
u, w, y equal to zero. Setting γ = 0 in (40), the result for model A with random
anisotropy [35] is recovered. The γ²u, γ²w, γ²y terms represent the intrinsic contribution
of model C for random anisotropy magnets.
There are two different ways to proceed with the numerical analysis of the
perturbative expansions for the RG functions (32) - (35), (40). The first one is an
ε-expansion [38] whereas the second one is the so-called fixed-dimension approach [39].
Within the latter approach, one fixes ε and solves the non-linear FP equations
directly at the space dimension of interest (i.e. at ε = 1 in our d = 3 case). Whilst
in many problems these two ways serve as complementary ones, it appears that for
certain cases only one of them, namely the fixed-d approach, leads to a quantitative
description. Indeed, as is well known by now, the ε-expansion turns into a √ε-expansion
for the random-site Ising model and no reliable numerical estimates can be
obtained on its basis (see [9] and references therein). As one will see below, the random-
site Ising model behavior emerges in our problem as well, therefore we proceed within
the fixed-d approach.
The series for RG functions are known to diverge. Therefore to obtain reliable
results on their basis we apply the Padé-Borel resummation procedure [40] to the static
functions. It is performed in the following way: we construct the Borel image of the resolvent
series [41] of the initial RG function f:
f = ∑_{0≤i+j+k+l≤2} a_{i,j,k,l} (ut)^i (vt)^j (wt)^k (yt)^l → ∑_{0≤i+j+k+l≤2} a_{i,j,k,l} u^i v^j w^k y^l t^{i+j+k+l} / Γ(i+j+k+l+1),
where f stands for one of the static RG functions βui, βγ/γ − γ²n/4, the a_{i,j,k,l} are the
corresponding expansion coefficients given by Eqs. (32)–(37), and Γ denotes Euler's
gamma function. Then the Borel image is extrapolated by a rational Padé approximant
[42] [K/L](t). Within the two-loop approximation we use the diagonal approximant with
linear denominator, [1/1]. As is known from Padé analysis, the diagonal approximants
ensure the best convergence of the results [42]. The resummed function is then calculated
by an inverse Borel transform of this approximant:
f^res = ∫_0^∞ dt exp(−t) [1/1](t). (41)
Since the above procedure enables one to restore the correct static RG flow (as sketched
below), we do not further resum the dynamic RG function βW.
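For readers who wish to reproduce this step, the sketch below performs a Padé-Borel resummation of a generic second-order series in a single effective variable: it builds the Borel image, replaces it by the diagonal [1/1] Padé approximant and evaluates the inverse Borel integral of Eq. (41) numerically. The sample coefficients are arbitrary and serve only to illustrate the procedure; the multivariable resolvent-series version used in the paper works analogously.

import numpy as np
from scipy.integrate import quad

# Sketch: Pade-Borel resummation of a truncated series f(x) = c0 + c1*x + c2*x^2
# (a one-variable analogue of the resolvent-series construction in the text).
# Assumes c1 != 0 and that the [1/1] denominator has no pole on the positive t axis.

def pade_borel_resum(c, x):
    c0, c1, c2 = c
    b0, b1, b2 = c0, c1, c2 / 2.0            # Borel image: B(t) = c0 + c1*t + c2*t^2/2!
    q1 = -b2 / b1                            # [1/1] Pade of B: (p0 + p1*t)/(1 + q1*t),
    p0 = b0                                  # matched to the series up to order t^2
    p1 = b1 + b0 * q1
    integrand = lambda t: np.exp(-t) * (p0 + p1 * x * t) / (1.0 + q1 * x * t)
    value, _ = quad(integrand, 0.0, np.inf)  # inverse Borel transform, Eq. (41)
    return value

coeffs = (0.0, 1.0, -3.0)                    # arbitrary two-loop-like coefficients
print(pade_borel_resum(coeffs, 0.2))         # resummed value at x = 0.2
print(np.polyval(coeffs[::-1], 0.2))         # naive truncated sum, for comparison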
The analysis of the static functions β{ui} at fixed dimension d = 3 reveals the
existence of 16 FPs [3, 31]. Only ten of these FPs are situated in the region of physical
interest u > 0, v > 0, w < 0. The corresponding values of the FP coordinates can be found in
Ref. [3].
For each static FP {u∗i} we obtain a set of dynamical FPs with different γ∗ and
ρ∗. The FPs obtained for n = 2 and n = 3 are listed in Table 1 and Table 2, respectively.
The stability exponents ωγ and ωρ are given in the tables as well. Here we keep the numbering
of FPs already used in Refs. [3, 8, 29, 31, 35]. It is known from the statics that FP
XV governs the critical behavior of the RAM with cubic distribution. This FP is of the
same origin as the FP of the random-site Ising model; therefore all static critical exponents
coincide with those of the random-site Ising model. Since the specific heat exponent in
this case is negative, the asymptotic critical dynamics is described by model A. However,
the non-asymptotic critical properties of random anisotropy magnets differ from those of
the random-site Ising magnets in statics [3] as well as in dynamics [35]. Moreover, for
the model C considered here, the non-asymptotic critical behavior differs considerably
from that of the corresponding model A, as we will see below.
4.2. Non-asymptotic properties
The existence of such a large number of dynamical FPs makes the non-asymptotic critical
behavior more complex than in model A. We present here results for n = 3. For n = 2
the behavior is qualitatively similar. Solving the flow equations for different initial
conditions we obtain different flows in the space of model parameters. The projection
of the most characteristic flows onto the subspace w − y − ρ is presented in Fig. 1. The open
circles indicate genuine model C unstable FPs whereas filled circles represent model A
unstable FPs. The filled square denotes the stable FP.
The initial conditions for the couplings u(0), v(0), w(0), y(0) for the flows shown
are the same as those in Refs. [3, 35]. We choose γ(0) = 0.1 and ρ(0) = 0.6. Many
flows are affected by the two Ising FPs VC and XC . Inserting the solutions of the flow
Figure 1. Projections of flows for n = 3 in the subspace of couplings w− y− ρ. Open
circles represent projections of unstable FPs with non-zero γ∗. Filled circles denote
unstable FPs with γ∗ = 0. The filled square shows the stable FP. See Section 4.2 for
a more detailed description.
Figure 2. Dependence of the order parameter effective dynamical critical exponent in
the model C dynamics on the logarithm of flow parameter. See text for full description.
Figure 3. Dependence of the conserved density effective dynamical critical exponent in
the model C dynamics on the logarithm of flow parameter. See text for full description.
Table 1. Two-loop values for the dynamical FPs of random anisotropy magnets with
n = 2 (model C).
FP γ∗ ρ∗ ωγ ωρ z
I 0 0 ≤ ρ∗ ≤ 1 0 0 2
I′ 1 0 1 -1 2
IC 1 0.6106 1 0.745 3
I′1 1 1 1 -∞ ∞
II 0 0 0.0387 0.0526 2.0526
II1 0 1 0.0387 -0.0526 2.0526
III 0 0 -0.1686 -0.1850 1.8150
III1 0 1 -0.1686 0.1850 1.8150
IIIC .5806 0 0.3371 -0.5222 1.8150
III′1 .5806 1 0.3371 ∞ −∞
V 0 0 -0.0525 0.0523 2.0523
V1 0 1 -0.0525 -0.0523 2.0523
V′ .3240 0 0.1050 -0.0527 2.0523
VC .3240 0.5241 0.1050 0.0277 2.1050
V′1 .3240 1 0.1050 -∞ ∞
VI 0 0 -0.0049 -0.0417 2.0107
VI1 0 1 -0.0049 0.0417 2.0107
VI′ 0.0986 0 0.0097 0.00095 2.0107
VI′1 0.0986 1 0.0097 ∞ -∞
VIII 0 0 -0.0525 0.1569 2.1569
VIII1 0 1 -0.0525 -0.1569 2.1569
VIII′ 0.3240 0 0.1050 0.0519 2.1569
VIII′1 0.3240 1 0.1050 -∞ ∞
X 0 0 -0.0525 0.0523 2.0523
X1 0 1 -0.0525 -0.0523 2.0523
X′ .3240 0 0.1050 -0.0527 2.0523
XC .3240 0.5241 0.1050 0.0277 2.1050
X′1 .3240 1 0.1050 -∞ ∞
XV 0 0 0.0018 0.1388 2.1388
XV1 0 1 0.0018 -0.1388 2.1388
equations into the expressions for dynamical exponents we obtain the effective exponents
z^eff and z^eff_m. The dependence of z^eff on the flow parameter ℓ corresponding to flows 1-7
is shown in Fig. 2. Similarly, Fig. 3 shows this dependence for the effective exponent of
the conserved density zeffm . Flow 3 is affected by both FPs VC and XC . Therefore the
effective exponents demonstrate a region with values which are close to those for model
C in the case of the Ising magnet (see curves 3 in Figs. 2 and 3). The asymptotic values
corresponding to the FPs VC and XC are indicated by the dashed line. They correspond
Table 2. Two-loop values for the dynamical FPs of random anisotropy magnets with
n = 3 (model C).
FP γ∗ ρ∗ ωγ ωρ z
I 0 0 ≤ ρ∗ ≤ 1 0 0 2
I′ 0.8165 0 1 -1 2
IC 0.8165 0.7993 1 0.5218 3
I′1 0.8165 1 1 -∞ ∞
II 0 0 0.1109 0.0506 2.0506
II1 0 1 0.1109 -0.0506 2.0506
III 0 0 -0.1686 -0.1850 1.8150
III1 0 1 -0.1686 0.1850 1.8150
III′ 0.4741 0 .3371 -0.5222 1.8150
III′1 0.4741 1 0.3371 ∞ −∞
V 0 0 -0.0525 0.0523 2.0523
VI1 0 1 -0.0525 -0.0523 2.0523
VI′ 0.2646 0 0.1050 -0.0527 2.0523
VIC 0.2646 0.7617 0.1050 0.0157 2.1050
VI′1 0.2646 1 0.1050 -∞ ∞
VI 0 0 -0.0162 -0.0401 1.9599
VI1 0 1 -0.0162 0.0401 1.9599
VI′ 0.1467 0 0.0323 -0.0724 1.9599
VI′1 0.1467 1 0.0323 ∞ -∞
VIII 0 0 0.1051 0.0425 2.0425
VIII1 0 1 0.1051 -0.0425 2.0425
IX 0 0 -0.0161 -0.0384 1.9616
IX1 0 1 -0.0161 0.0384 1.9616
IX′ 0.1466 0 0.0322 -0.0707 1.9616
IX′1 0.1466 1 0.0322 ∞ -∞
X 0 0 -0.0525 0.0523 2.0523
X1 0 1 -0.0525 -0.0523 2.0523
X′ 0.2646 0 0.1050 -0.0527 2.0523
XC 0.2646 0.7617 0.1050 0.0157 2.1050
X′1 0.2646 1 0.1050 -∞ ∞
XV 0 0 0.0018 0.1388 2.1388
XV1 0 1 0.0018 -0.1388 2.1388
to the values asymptotically obtained in the pure model C with n = 1, since the FPs
VC and XC are of the same origin as the FP of the pure model C. Curves 6 correspond to
flows near the pure FP II, whereas curve 7 corresponds to the flow near the cubic FP
VIII.
The main difference in the behavior of the effective dynamical exponent z^eff in
model C from that in model A is the appearance of curves with several peaks. The
value of the peak appearing on the right-hand side depends on the initial conditions γ(0)
and ρ(0). This is demonstrated in Fig. 4.
Figure 4. Dependence of z^eff on the logarithm of the flow parameter for different initial
values of γ and ρ (γ = 0.01, 0.1, 0.6 combined with ρ = 0.1, 0.6).
Figure 5. Normalized effective dynamical critical exponents of the order parameter and the
conserved density, z̄^eff (solid line) and z̄^eff_m (dashed line), corresponding to flow 2.
The effective behavior of the two dynamical critical exponents for the order
parameter and the conserved density might be quite different as one sees comparing
Figs. 2 and 3. However, one may ask if both exponents reach the asymptotic values
in the same way. For this purpose we introduce a normalization of the values of the
effective exponents by their values in the asymptotics. In particular, we introduce the
notations z̄^eff = z^eff/z and z̄^eff_m = z^eff_m/zm for the order parameter and conserved
density exponents, respectively. Figs. 5 and 6 show the behavior of the normalized exponents for
the order parameter and the conserved density for flows 2 and 4, respectively. They illustrate
that the approach to the asymptotics of the order parameter and conserved density
exponents occurs in a different way for different flows, that is, for different initial
conditions. For a system with a small degree of disorder (small u(0) and w(0), flow 4) the
approach of the order parameter dynamical exponent to the asymptotic regime is faster than
that of the conserved density, while for a system with a larger amount of disorder (flow 2) the
approach of both quantities is almost simultaneous.
Figure 6. Dependence of the normalized effective dynamical critical exponents of the
order parameter and the conserved density corresponding to flow 4. Notations as in Fig. 5.
5. Conclusion
In this paper, we have studied the model C dynamics of random anisotropy magnets
with a cubic distribution of the local anisotropy axes. For this purpose the two-loop dynamical
RG function ζΓ has been obtained. On the basis of the static results [3] the dependences of the
effective critical exponents of the order parameter, z^eff, and the conserved density, z^eff_m, on the
flow parameter were calculated.
The two-loop approximation adopted in our paper may be considered as a certain
compromise between what is feasible in static calculations on the one hand and in dynamic
ones on the other. As a matter of fact, the state-of-the-art expansions of the static
RG functions in the minimal subtraction scheme are currently available for many models
with five-loop accuracy [9, 44], but this is not the case for the dynamic functions.
The complexity of dynamical calculations is reflected in the fact that results
beyond two loops have been obtained for model A only. Model C, even with
no structural disorder, seems to be beyond what is presently manageable (see the recent
review [5]). However, there are examples which demonstrate that even in two loops
highly accurate results for dynamical characteristics can be obtained. One of them is
given by the critical dynamics of 4He at the superfluid phase transition [45]. Besides,
the analysis of two-loop static RG functions refined by resummation also yields
sufficiently accurate quantitative characteristics of the static critical behavior in disordered
systems [9, 46].
In the asymptotics the conserved density is decoupled from the order parameter
and the dynamical critical behavior of the random anisotropy model with cubic random axis
distribution is the same as that of the random-site Ising model. The crossover occurring
between the different FPs present in the random anisotropy model considerably influences
the non-asymptotic critical properties. Different scenarios of dynamical critical behavior
are observed depending on the initial values of the model parameters. The main feature
is the presence of additional peaks in the curves for the effective dynamical critical
exponents in comparison with the effective model A critical dynamics.
Since the approach to the asymptotics is very slow, the effective exponents may
be observed in experiments and in numerical simulations. The effective exponent for the
order parameter may take a value far away from the asymptotic one (the asymptotic
value in our two-loop calculation is z = 2.139). The same holds for the conserved
density effective critical exponent, which may be far from its van Hove asymptotic value
zm = 2. For example, one can observe values of z^eff and z^eff_m close to those of the pure Ising
model with model C dynamics.
This work was supported by Fonds zur Förderung der wissenschaftlichen Forschung
under Project No. P16574
Appendix A. Perturbation expansion
We perform our calculations on the basis of the Lagrangian defined by (12) using
the Feynman graph technique. The propagators for this Lagrangian are shown in the
Fig. A1.
[Figure A1: diagrammatic representation of the four propagator lines G(k, ω), C(k, ω), H(k, ω)
and D(k, ω), each carrying a factor δ(k + k′)δ(ω + ω′).]
Figure A1. Propagators for constructing Feynman graphs. G(k, ω) and H(k, ω) are
response propagators while C(k, ω) and D(k, ω) are correlation propagators.
Response propagators G(k, ω) and H(k, ω) are equal to
G(k, ω) = 1/(−iω + Γ̊(µ̃̊ + k²)) and H(k, ω) = 1/(−iω + λ̊k²),   (A.1)
while the correlation propagators C(k, ω) and D(k, ω) are equal to
C(k, ω) = 2Γ̊/|−iω + Γ̊(µ̃̊ + k²)|² and D(k, ω) = 2λ̊k²/|−iω + λ̊k²|².   (A.2)
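As a quick numerical aid, the sketch below evaluates the four propagators (A.1)–(A.2) for given values of the bare parameters (passed as plain floats Gamma0, mu0, lam0); the function names and the fluctuation–dissipation consistency check are ours and purely illustrative, not part of the original calculation.

    # Minimal numerical sketch of the propagators (A.1)-(A.2); Gamma0, mu0 and
    # lam0 stand for the bare parameters written with ring accents in the text.
    import numpy as np

    def G(k, w, Gamma0, mu0):
        """Order-parameter response propagator, Eq. (A.1)."""
        return 1.0 / (-1j * w + Gamma0 * (mu0 + k**2))

    def H(k, w, lam0):
        """Conserved-density response propagator, Eq. (A.1)."""
        return 1.0 / (-1j * w + lam0 * k**2)

    def C(k, w, Gamma0, mu0):
        """Order-parameter correlation propagator, Eq. (A.2)."""
        return 2.0 * Gamma0 / np.abs(-1j * w + Gamma0 * (mu0 + k**2))**2

    def D(k, w, lam0):
        """Conserved-density correlation propagator, Eq. (A.2)."""
        return 2.0 * lam0 * k**2 / np.abs(-1j * w + lam0 * k**2)**2

    if __name__ == "__main__":
        k, w, Gamma0, mu0, lam0 = 0.5, 1.2, 1.0, 0.3, 0.8
        # consistency of (A.1) and (A.2): C = (2*Gamma0/w) Im G, D = (2*lam0*k^2/w) Im H
        assert np.isclose(C(k, w, Gamma0, mu0), 2*Gamma0/w * G(k, w, Gamma0, mu0).imag)
        assert np.isclose(D(k, w, lam0), 2*lam0*k**2/w * H(k, w, lam0).imag)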
The vertices defined by Lagrangian are shown in Fig. A2.
We obtain an expression for the two-point vertex function Γ̊_{φ̃φ} by keeping the
diagrams up to two-loop order. The result of the calculation can be expressed in the form:
Γ̊_{φ̃φ}(ξ, k, ω) = −iω Ω̊_{φ̃φ}(ξ, k, ω) + Γ̊^{st}_{φφ}(ξ, k) Γ̊ .   (A.3)
Here we introduce the correlation length ξ(µ̊ = ˚̃µ+ γ̊h̊, ů, v̊, ẘ, ẙ), which is defined by
∂ ln Γ̊stϕϕ
. (A.4)
[Figure A2 (graphical): four vertices carrying the factors ΓA δ(k + k′ + k″ + k‴) δ(ω + ω′ + ω″ + ω‴), Γ²B δ(k + k′) δ(k″ + k‴) δ(ω + ω′) δ(ω″ + ω‴), Γ̊γ̊ δ(k + k′ + k″) δ(ω + ω′ + ω″) δ_{i,j}, and λ̊γ̊k² δ(k + k′ + k″) δ(ω + ω′ + ω″) δ_{i,j}.]
Figure A2. Vertices for our model. In vertex a, A stands for
v0/3! (δi,jδl,m + δi,lδj,m + δi,mδj,l)/3 or y0/3! δi,jδj,lδl,m. In vertex b, B stands for
u0/3! δi,jδl,m or w0/3! δi,jδj,lδl,m. Vertices c and d originate from the coupling to the
conserved density.
The function Γ̊ϕϕ is the static two-loop vertex function of the disordered magnet. The
structure (A.3) of the dynamic vertex function of pure model C was obtained in Ref. [43]
up to two-loop order.
We can express the two-loop dynamical function Ω̊_{φ̃φ} in the following form:
Ω̊_{φ̃φ}(ξ, k, ω) = 1 + Ω̊¹_{φ̃φ}(ξ, k, ω) + Ω̊²_{φ̃φ}(ξ, k, ω),   (A.5)
where the one-loop contribution reads:
Ω̊¹_{φ̃φ}(ξ, k, ω) = −(ů + ẘ) ∫_{k′} [(−iω + Γ̊(ξ^{−2} + k′²))(ξ^{−2} + k′²)]^{−1} + γ̊ Γ̊ I_C(ξ, k, ω),   (A.6)
while the two-loop contribution Ω̊²_{φ̃φ}(ξ, k, ω) collects the two-loop integrals W̊^{(A)}_{φ̃φ}, C̊^{(T3)}_{φ̃φ}, S̊_{φ̃φ} and W̊^{(CDi)}_{φ̃φ}, i = 2, . . . , 7, each multiplied by combinations of the couplings ů, v̊, ẘ, ẙ and γ̊ (equation (A.7)).
In (A.6) and (A.7) the expressions for the integrals I_C, W̊^{(A)}, C̊^{(T3)} and S̊ of the
pure model C are given in Appendix A.1 of Ref. [22], while the contributions for
W̊^{(CDi)} are presented in the Appendix of Ref. [13].
Following the renormalization procedure for Γ̊_{φ̃φ} we obtain the two-loop
renormalizing factor Z_φ̃ = 1 + . . . ; its explicit two-loop expression (A.8) involves the
couplings u, v, w, y, the coupling γ² and the parameter W through combinations such as
(v + y)/(n + 2), ln(1 + W) and (1 + W)^{−2}.
[1] R. Harris, M. Plischke, and M. J. Zuckermann, Phys. Rev. Lett. 31, 160 (1973).
[2] R. W. Cochrane, R. Harris, and M. J. Zuckermann, Phys. Reports, 48, 1 (1978).
[3] M. Dudka, R. Folk, and Yu. Holovatch, J. Magn. Magn. Mater., 294, 305 (2005).
[4] B. I. Halperin and P. C. Hohenberg, Rev. Mod. Phys. 49, 436 (1977).
[5] R. Folk and G. Moser, J. Phys. A: Math. Gen. 39, R207 (2006).
[6] As shown in: R.A. Pelcovits, E. Pytte, and J. Rudnick, Phys. Rev. Lett., 40, 476 (1978); S.-k.
Ma and J. Rudnick, Phys. Rev. Lett., 40, 589 (1978), an absence of the ferromagnetic ordering
for an isotropic random axis distribution at d ≤ 4 follows from the Imry-Ma arguments first
formulated in the context of the random field Ising model in: Y. Imry and S.-k. Ma, Phys. Rev.
Lett. 35, 1399 (1975). Later this fact has been proven by several other methods (see Ref. [3] for
a more detailed account).
[7] D. Mukamel and G. Grinstein, Phys. Rev. B 25, 381 (1982).
[8] M. Dudka, R, Folk, and Yu. Holovatch, in: W. Janke, A. Pelster, H.-J. Schmidt and M. Bachmann
(Eds)., Fluctuating Paths and Fields, Singapore, World Scientific, 2001, p. 457; M. Dudka, R.
Folk, and Yu. Holovatch, Condens. Matter Phys. 4, 459 (2001).
[9] For the recent reviews see e.g.: R. Folk, Yu. Holovatch, and T. Yavors’kii, Phys. Rev. B 61, 15114
(2000); R. Folk, Yu. Holovatch, and T. Yavors’kii, Physics - Uspekhi 46 169 (2003) [Uspekhi
Fizicheskikh Nauk 173, 175 (2003)]; A. Pelissetto and E. Vicari, Phys. Rep. 368, 549 (2002).
[10] U. Krey, Phys. Lett. A 57, 215 (1976).
[11] I. D. Lawrie and V. V. Prudnikov, J. Phys. C 17, 1655 (1984).
[12] M. Dudka, R. Folk, Yu. Holovatch, and G. Moser, Phys. Rev. E 72 036107 (2005).
[13] M. Dudka, R. Folk, Yu. Holovatch, and G. Moser, J. Phys. A 39 7943 (2006).
[14] B. I. Halperin, P. C. Hohenberg, and S.-k. Ma, Phys. Rev. B 10, 139 (1974).
[15] V. I. Corentsveig, P. Fraztl, and J. L. Lebowitz, Phys. Rev. B 55 2912 (1997).
[16] K. Binder, W. Kinzel, and D. P. Landau, Surf. Sci. 117 232 (1982).
[17] H. Tanaka, J. Phys.:Condens. Matter 11 L159 (1999).
[18] G. Grinstein, S.-k. Ma, and G. F. Mazenko, Phys. Rev. B 15 258 (1977).
[19] P. Sen, S. Dasgupta, and D. Stauffer, Eur. Phys. J. B 1 107 (1998).
[20] D. Stauffer, Int. J. Mod. Phys. C 8 1263 (1997)
[21] R. Folk and G. Moser, Phys. Rev. Lett. 91, 030601 (2003).
[22] R. Folk and G. Moser, Phys. Rev. E 69 036101 (2004).
[23] E. Brezin and C. De Dominicis, Phys. Rev. B 12 4954 (1975).
[24] V.V. Prudnikov and A. N. Vakilov, Sov. Phys. JETP 74, 900 (1992).
[25] K. Oerding and H. K. Janssen, J. Phys. A: Math. Gen. 28, 4271 (1995).
[26] H. K. Janssen, K. Oerding, and E. Sengenspeick, J. Phys. A: Math. Gen. 28, 6073 (1995).
[27] V. Blavats’ka, M. Dudka, R. Folk, and Yu. Holovatch, Phys. Rev. B, 72 064417 (2005)
[28] A. B. Harris, J. Phys. C: Solid State Phys. 7, 1671 (1974).
[29] A. Aharony, Phys. Rev. B 12 1038 (1975).
[30] M. Dudka, R. Folk, and Yu. Holovatch, Condens. Matter Phys., 4, 77 (2001); Yu. Holovatch, V.
Blavats’ka, M. Dudka, C. von Ferber, R. Folk, and T. Yavors’kii, Int. J. Mod. Phys. B 16, 4027
(2002).
[31] P. Calabrese, A. Pelissetto, and E. Vicari, Phys. Rev. E 70, 036104 (2004).
[32] S.-k. Ma and J. Rudnick, Phys. Rev. Lett. 40, 587 (1978).
[33] C. De Dominicis, Phys. Rev. B 18, 4913 (1978).
[34] U. Krey, Z. Phys. B 26, 355 (1977).
[35] M. Dudka, R. Folk, Yu. Holovatch, and G. Moser, Condens. Matter Phys. 8, 737 (2005)
[36] R. Bausch, H. K. Janssen, and H. Wagner, Z. Phys. B 24, 113 (1976).
[37] V. J. Emery, Phys. Rev. B 11, 239 (1975).
[38] K. G. Wilson and M. E. Fisher, Phys. Rev. Lett. 28, 240 (1972).
[39] R. Schloms and V. Dohm, Europhys. Lett. 3, 413 (1987); R. Schloms and V. Dohm, Nucl. Phys.
B 328, 639 (1989).
[40] G. A. Baker, Jr., B. G. Nickel, and D. I. Meiron, Phys. Rev. B 17, 1365 (1978).
[41] P. J. S. Watson, J. Phys. A 7, L167 (1974).
[42] G. A. Baker, Jr. and P. Graves-Morris, Padé Approximants (Addison-Wesley: Reading, MA, 1981).
[43] R. Folk and G. Moser, Acta Physica Slovaca 52, 285 (2002).
[44] H. Kleinert, J. Neu, V. Schulte-Frohlinde, K. G. Chetyrkin, and S. A. Larin, Phys. Lett. B 272,
39 (1991); Erratum: Phys. Lett. B 319, 545 (1993); H. Kleinert and V. Schulte-Frohlinde, Phys.
Lett. B 342, 284 (1995).
[45] V. Dohm, Phys. Rev. B 44, 2697; (1991), Phys. Rev. B 73, 09990(E) (2006)
[46] J. Jug, Phys. Rev. B 27, 609 (1983).
0704.0897 | A unified approach to the theory of separately holomorphic mappings | A UNIFIED APPROACH TO THE THEORY OF SEPARATELY
HOLOMORPHIC MAPPINGS
VIÊT-ANH NGUYÊN
Abstract. We extend the theory of separately holomorphic mappings between
complex analytic spaces. Our method is based on Poletsky theory of discs, Rosay
Theorem on holomorphic discs and our recent joint work with Pflug on boundary
cross theorems in dimension 1. It also relies on our new technique of conformal
mappings and a generalization of Siciak's relative extremal function. Our approach
illustrates the unified character: "From local information to global extensions".
Moreover, it systematically avoids the use of the classical method of doubly orthogonal
bases of Bergman type.
1. Introduction
In this article all complex manifolds are supposed to be of finite dimension and
countable at infinity, and all complex analytic spaces are supposed to be reduced,
irreducible, of finite dimension and countable at infinity. For a subset S of a topological
space M, S̄ denotes the closure of S in M, and the set ∂S := S̄ ∩ \overline{M \ S}
denotes, as usual, the boundary of S in M.
The main purpose of this work is to investigate the following
PROBLEM. Let X, Y be two complex manifolds, let D (resp. G) be an open subset
of X (resp. Y), let A (resp. B) be a subset of D̄ (resp. Ḡ) and let Z be a complex
analytic space. Define the cross
W := (A × (G ∪ B)) ∪ ((D ∪ A) × B).
We want to determine the “envelope of holomorphy” of the cross W, that is, an
“optimal” open subset of X×Y, denoted by
W, which is characterized by the following
properties:
Let f : W −→ Z be a mapping that satisfies, in essence, the following two
conditions:
• f(a, ·) is holomorphic on G for all a ∈ A, f(·, b) is holomorphic on D for all
b ∈ B;
• f(a, ·) is continuous on G ∪ B for all a ∈ A, f(·, b) is continuous on D ∪ A
for all b ∈ B.
2000 Mathematics Subject Classification. Primary 32D15, 32D10.
Key words and phrases. Hartogs’ theorem, holomorphic extension, Poletsky theory of discs,
Rosay Theorem on holomorphic discs.
Then there is a holomorphic mapping f̂ defined on
W such that for every (ζ, η) ∈ W,
f̂(z, w) tends to f(ζ, η) as (z, w) ∈
W tends, in some sense, to (ζ, η).
Now we recall briefly the main developments around this problem. All the results
obtained so far may be divided into two directions. The first direction investigates
the results in the “interior” context: A ⊂ D and B ⊂ G, while the second one
explores the “boundary” context: A ⊂ ∂D and B ⊂ ∂G.
The first fundamental result in the field of separate holomorphy is the well-known
Hartogs extension theorem for separately holomorphic functions (see [14]). In the
language of the PROBLEM the following case X = Cn, Y = Cm, A = D, B =
G, Z = C has been solved and the result is
W = D×G. In particular, this theorem
may be considered as the first main result in the first direction. In his famous
article [8] Bernstein obtained some positive results for the PROBLEM in certain
cases where A ⊂ D, B ⊂ G, X = Y = C and Z = C.
More than 60 years later, a next important impetus was made by Siciak (see
[43, 44]) in 1969–1970, where he established some significant generalizations of the
Hartogs extension theorem. In fact, Siciak’s formulation of these generalizations
gives rise to the above PROBLEM: to determine the envelope of holomorphy for sep-
arately holomorphic functions defined on some cross sets W. The theorems obtained
under this formulation are often called cross theorems. Using the so-called relative
extremal function, Siciak completed the PROBLEM for the case where A ⊂ D,
B ⊂ G, X = Y = C and Z = C.
The next deep steps were initiated by Zahariuta in 1976 (see [45]) when he started
to use the method of common bases of Hilbert spaces. This original approach per-
mitted him to obtain new cross theorems for some cases where A ⊂ D, B ⊂ G and
D = X, G = Y are Stein manifolds. As a consequence, he was able to generalize
the result of Siciak in higher dimensions.
Later, Nguyên Thanh Vân and Zeriahi (see [25, 26, 27]) developed the method
of doubly orthogonal bases of Bergman type in order to generalize the result of Za-
hariuta. This is a significantly simpler and more constructive version of Zahariuta’s
original method. Nguyên Thanh Vân and Zeriahi have recently achieved an elegant
improvement of their method (see [24], [47]).
Using Siciak’s method, Shiffman (see [41]) was the first to generalize some Siciak’s
results to separately holomorphic mappings with values in a complex analytic space
Z. Shiffman’s result (see [42]) shows that the natural “target spaces” for obtaining
satisfactory generalizations of cross theorems are the ones which possess the Hartogs
extension property (see Subsection 2.4 below for more explanations).
In 2001 Alehyane and Zeriahi solved the PROBLEM for the case where A ⊂ D,
B ⊂ G and X, Y are Stein manifolds, and Z is a complex analytic space which
possesses the Hartogs extension property (see Theorem 2.2.4 in [5]).
In a recent work (see [28]) we complete, in some sense, the PROBLEM for the
case where A ⊂ D, B ⊂ G and X, Y are arbitrary complex manifolds. The main
ingredients in our approach are Poletsky theory of discs developed in [37, 38], Rosay’s
Theorem on holomorphic discs (see [40]), the above mentioned result of Alehyane–
Zeriahi and the technique of level sets of the plurisubharmonic measure which was
previously introduced in our joint-work with Pflug (see [33]).
To conclude the first direction of research we mention the survey articles by
Nguyên Thanh Vân [23] and Peter Pflug [32] which give nice accounts on this sub-
ject.
The first result in the second direction (i.e. “boundary context”) was established
in the work of Malgrange–Zerner [46] in the 1960s. Further results in this direction
were obtained by Komatsu [21] and Drużkowski [9], but only for some special cases.
Recently, Gonchar [12, 13] has proved a more general result where the following
case has been solved: X = Y = C, D and G are Jordan domains, A (resp. B) is
an open boundary subset of ∂D (resp. ∂G), and Z = C. It should be noted that
Airapetyan and Henkin published a general version of the edge-of-the-wedge theorem
for CR manifolds (see [1] for a brief version and [2] for a complete proof). Gonchar’s
result could be deduced from the latter works. In our joint-articles with Pflug (see
[33, 34, 35]), Gonchar’s result has been generalized considerably. More precisely, the
work in [35] treats the case where the “source spaces” X, Y are arbitrary complex
manifolds, A (resp. B) is an open boundary subset of ∂D (resp. ∂G), and Z =
C. The work in [34] solves the case where the “source spaces” X, Y are Riemann
surfaces, A (resp. B) is a measurable (boundary) subset of ∂D (resp. ∂G), and
Z = C.
The main purpose of this article is to give a new version of the Hartogs extension
theorem which unifies all results up to now. Namely, we are able to give a reason-
able solution to the PROBLEM when the “target space” Z possesses the Hartogs
extension property. Our method is based on a systematic application of Poletsky
theory of discs, Rosay Theorem on holomorphic discs and our joint-work with Pflug
on boundary cross theorems in dimension 1 (see [34]). It also relies on our new
technique of conformal mappings and a generalization of Siciak’s relative extremal
function. The approach illustrates the unified character in the theory of extension
of holomorphic mappings:
One can deduce the global extension from local information.
Moreover, the novelty of this new approach is that one does not use the classical
method of doubly orthogonal bases of Bergman type.
We close the introduction with a brief outline of the paper to follow.
In Section 2 we formulate the main results.
The tools which are needed for the proof of the main results are developed in
Sections 3, 4, 5 and 7.
The proof of the main results is divided into three parts, which correspond to
Sections 6, 8 and 9. Section 10 concludes the article with various applications of our
results.
Acknowledgment. The paper was written while the author was visiting the
Abdus Salam International Centre for Theoretical Physics in Trieste. He wishes to
express his gratitude to this organization.
2. Preliminaries and statement of the main result
First we develop some new notions such as system of approach regions for an open
set in a complex manifold, and the corresponding plurisubharmonic measure. These
will provide the framework for an exact formulation of the PROBLEM and for our
solution.
2.1. Approach regions, local pluripolarity and plurisubharmonic measure.
Definition 2.1. Let X be a complex manifold and let D ⊂ X be an open subset.
A system of approach regions for D is a collection A = (A_α(ζ))_{ζ∈D̄, α∈I_ζ} of open
subsets of D with the following properties:
(i) For all ζ ∈ D, the system (A_α(ζ))_{α∈I_ζ} forms a basis of open neighborhoods
of ζ (i.e., for any open neighborhood U of a point ζ ∈ D, there is α ∈ I_ζ
such that ζ ∈ A_α(ζ) ⊂ U).
(ii) For all ζ ∈ ∂D and α ∈ I_ζ, ζ belongs to the closure of A_α(ζ).
A_α(ζ) is often called an approach region at ζ.
A is said to be canonical if it satisfies (i) and the following property (which is
stronger than (ii)):
(ii’) For every point ζ ∈ ∂D, there is a basis of open neighborhoods (Uα)α∈Iζ of ζ
in X such that Aα(ζ) = Uα ∩D, α ∈ Iζ.
It is possible that Iζ = ∅ for some ζ ∈ ∂D.
Various systems of approach regions which one often encounters in Complex Anal-
ysis will be described in the next subsection. Systems of approach regions for
D are used to deal with the limit at points in D of mappings defined on some
open subsets of D. Consequently, we deduce from Definition 2.1 that the subfamily
(A_α(ζ))_{ζ∈D, α∈I_ζ} is, in a certain sense, independent of the choice of a system of ap-
proach regions A. In addition, any two canonical systems of approach regions are,
in some sense, equivalent. These observations lead us to use, throughout the paper,
the following convention:
We fix, for every open set D ⊂ X, a canonical system of approach regions.
When we want to define a system of approach regions A for an open set D ⊂ X, we
only need to specify the subfamily (A_α(ζ))_{ζ∈∂D, α∈I_ζ}.
In what follows we fix an open subset D ⊂ X and a system of approach regions
(A_α(ζ))_{ζ∈D̄, α∈I_ζ} for D.
For every function u : D −→ [−∞,∞), let
(A − lim sup u)(z) := lim sup_{w∈A_α(z), w→z} u(w) for z ∈ D̄ with I_z ≠ ∅, and
(A − lim sup u)(z) := lim sup_{w∈D, w→z} u(w) for z ∈ ∂D with I_z = ∅.
By Definition 2.1 (i), (A−lim sup u)|D coincides with the usual upper semicontinuous
regularization of u.
For a set A ⊂ D put
hA,D := sup {u : u ∈ PSH(D), u ≤ 1 on D, A− lim sup u ≤ 0 on A} ,
where PSH(D) denotes the cone of all functions plurisubharmonic on D.
A is said to be pluripolar in D if there is u ∈ PSH(D) such that u is not identically
−∞ on every connected component of D and A ⊂ {z ∈ D : u(z) = −∞} . A is said
to be locally pluripolar in D if for any z ∈ A, there is an open neighborhood V ⊂ D
of z such that A ∩ V is pluripolar in V. A is said to be nonpluripolar (resp. non
locally pluripolar) if it is not pluripolar (resp. not locally pluripolar). According to
a classical result of Josefson and Bedford (see [16], [6]), if D is a Riemann domain
over a Stein manifold, then A ⊂ D is locally pluripolar if and only if it is pluripolar.
Definition 2.2. The relative extremal function of A relative to D is the function
ω(·, A,D) defined by
ω(z, A,D) = ωA(z, A,D) := (A− lim sup hA,D)(z), z ∈ D.
Note that when A ⊂ D, Definition 2.2 coincides with the classical definition of
Siciak’s relative extremal function.
Next, we say that a set A ⊂ D is locally pluriregular at a point a ∈ A if ω(a, A ∩
U,D ∩ U) = 0 for all open neighborhoods U of a. Moreover, A is said to be locally
pluriregular if it is locally pluriregular at all points a ∈ A. It should be noted from
Definition 2.1 that if a ∈ A ∩D then the property of local pluriregularity of A at a
does not depend on any particular choices of a system of approach regions A, while
the situation is different when a ∈ A ∩ ∂D : the property does depend on A.
We denote by A∗ the following set:
A∗ := (A ∩ ∂D) ∪ { a ∈ A ∩ D : A is locally pluriregular at a }.
If A ⊂ D is non locally pluripolar, then a classical result of Bedford and Taylor (see
[6, 7]) says that A∗ is locally pluriregular and A\A∗ is locally pluripolar. Moreover,
A∗ is locally of type Gδ, that is, for every a ∈ A∗ there is an open neighborhood
U ⊂ D of a such that A∗ ∩ U is a countable intersection of open sets.
Now we are in the position to formulate the following version of the plurisubhar-
monic measure.
Definition 2.3. For a set A ⊂ D̄, let à = Ã(A) := ⋃_{P∈E(A)} P, where
E(A) = E(A, A) := { P ⊂ D̄ : P is locally pluriregular, P ⊂ A∗ }.
The plurisubharmonic measure of A relative to D is the function ω̃(·, A, D) defined by
ω̃(z, A, D) := ω(z, Ã, D),  z ∈ D.
It is worthy to remark that ω̃(·, A,D) ∈ PSH(D) and 0 ≤ ω̃(z, A,D) ≤ 1, z ∈ D.
Moreover,
(2.1)  (A − lim sup ω̃(·, A, D))(z) = 0,  z ∈ Ã.
An example in [3] shows that in general, ω(·, A, D) ≠ ω̃(·, A, D) on D. Section 10
below is devoted to the study of ω̃(·, A,D) in some important cases.
1Observe that this function depends on the system of approach regions.
Now we compare the plurisubharmonic measure ω̃(·, A,D) with Siciak’s relative
extremal function ω(·, A,D). We only consider two important special cases: A ⊂ D
and A ⊂ ∂D. For the moment, we only focus on the case where A ⊂ D. The latter
one will be discussed in Section 10 below.
If A is an open subset of an arbitrary complex manifold D, then it is easy to see
ω̃(z, A,D) = ω(z, A,D), z ∈ D.
If A is a (not necessarily open) subset of an arbitrary complex manifold D, then we
will prove in Proposition 7.1 below that
ω̃(z, A,D) = ω(z, A∗, D), z ∈ D.
On the other hand, if, morever, D is a bounded open subset of Cn then we have (see,
for example, Lemma 3.5.3 in [18]) ω(z, A,D) = ω(z, A∗, D), z ∈ D. Consequently,
under the last assumption,
ω̃(z, A,D) = ω(z, A,D), z ∈ D.
Our discussion shows that at least in the case where A ⊂ D, the notion of the
plurisubharmonic measure is a good candidate for generalizing Siciak’s relative ex-
tremal function to the manifold context in the theory of separate holomorphy.
For a good background of the pluripotential theory, see the books [18] or [20].
2.2. Examples of systems of approach regions. There are many systems of
approach regions which are very useful in Complex Analysis. In this subsection we
present some of them.
1. Canonical system of approach regions. It has been given by Definition 2.1
(i)–(ii’).
2. System of angular (or Stolz) approach regions for the open unit disc.
Let E be the open unit disc of C. Put
A_α(ζ) := { t ∈ E : |arg((ζ − t)/ζ)| < α },  ζ ∈ ∂E,  0 < α < π/2,
where arg : C −→ (−π, π] is, as usual, the argument function. A = (A_α(ζ))_{ζ∈∂E, 0<α<π/2}
is referred to as the system of angular (or Stolz) approach regions
for E. In this context A − lim is also called the angular limit.
3. System of angular approach regions for certain “good” open subsets
of Riemann surfaces. Now we generalize the previous construction (for the open
unit disc) to a global situation. More precisely, we will use as the local model the
system of angular approach regions for E. Let X be a complex manifold of dimension
1, in other words, X is a Riemann surface, and D ⊂ X an open set. Then D is said
to be good at a point ζ ∈ ∂D2 if there is a Jordan domain U ⊂ X such that ζ ∈ U
and U ∩ ∂D is the interior of a Jordan curve.
Suppose that D is good at ζ. This point is said to be of type 1 if there is a
neighborhood V of ζ such that V0 = V ∩D is a Jordan domain. Otherwise, ζ is said to
be of type 2. We see easily that if ζ is of type 2, then there are an open neighborhood
2 In the work [34] we use the more appealing word Jordan-curve-like for this notion.
V of ζ and two disjoint Jordan domains V1, V2 such that V ∩D = V1∪V2. Moreover,
D is said to be good on a subset A of ∂D if D is good at all points of A.
Here is a simple example which may clarify the above definitions. Let G be the
open square in C with vertices 1 + i, −1 + i, −1− i, and 1− i. Define the domain
D := G \
Then D is good on ∂G ∪
. All points of ∂G are of type 1 and all points of(
are of type 2.
Suppose now that D is good on a nonempty subset A of ∂D.We define the system
of angular approach regions supported on A: A =
Aα(ζ)
ζ∈D, α∈Iζ
as follows:
• If ζ ∈ D \A, then
Aα(ζ)
coincide with the canonical approach regions.
• If ζ ∈ A, then by using a conformal mapping Φ from V0 (resp. V1 and V2)
onto E when ζ is of type 1 (resp. 2), we can “transfer” the angular approach
regions at the point Φ(ζ) ∈ ∂E : (Aα(Φ(ζ)))0<α<π
to those at the point
ζ ∈ ∂D (see [34] for more detailed explanations).
Making use of conformal mappings in a local way, we can transfer, in the same way,
many notions which exist on E (resp. ∂E) to those on D (resp. ∂D).
4. System of conical approach regions.
Let D ⊂ Cn be a domain and A ⊂ ∂D. Suppose in addition that for every point
ζ ∈ A there exists the (real) tangent space Tζ to ∂D at ζ. We define the system of
conical approach regions supported on A: A =
Aα(ζ)
ζ∈D, α∈Iζ
as follows:
• If ζ ∈ D \A, then
Aα(ζ)
coincide with the canonical approach regions.
• If ζ ∈ A, then
Aα(ζ) := {z ∈ D : |z − ζ | < α · dist(z, Tζ)} ,
where Iζ := (1,∞) and dist(z, Tζ) denotes the Euclidean distance from the
point z to Tζ .
We can also generalize the previous construction to a global situation:
X is an arbitrary complex manifold, D ⊂ X is an open set and A ⊂ ∂D is a
subset with the property that at every point ζ ∈ A there exists the (real) tangent
space Tζ to ∂D.
We can also formulate the notion of points of type 1 or 2 in this general context
in the same way as we have already done in Paragraph 3 above.
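As a concrete illustration of two of the model cases above (the Stolz regions of Paragraph 2 and the conical regions of Paragraph 4), the following sketch tests membership of a point in each type of approach region. The function names, the reading |arg((ζ − t)/ζ)| < α of the Stolz angle, and the toy half-plane example are ours and meant only as an illustration.

    import cmath
    import math

    def in_stolz_region(t, zeta, alpha):
        """True if t lies in the angular (Stolz) region A_alpha(zeta) of the unit disc,
        taking A_alpha(zeta) = { t in E : |arg((zeta - t)/zeta)| < alpha }."""
        if abs(t) >= 1 or t == zeta:      # t must belong to the open unit disc E
            return False
        return abs(cmath.phase((zeta - t) / zeta)) < alpha

    def in_conical_region(z, zeta, normal, alpha, in_D):
        """True if z lies in A_alpha(zeta) = { z in D : |z - zeta| < alpha*dist(z, T_zeta) };
        `normal` is a unit complex number orthogonal (in the real sense) to T_zeta,
        so that dist(z, T_zeta) = |Re((z - zeta) * conj(normal))|."""
        if not in_D(z):
            return False
        dist = abs(((z - zeta) * normal.conjugate()).real)
        return abs(z - zeta) < alpha * dist

    # Stolz angle at zeta = 1: radial approach stays inside, a nearly tangential point does not.
    zeta, alpha = 1 + 0j, math.pi / 4
    print(in_stolz_region(0.99, zeta, alpha))                      # True
    print(in_stolz_region(0.999 * cmath.exp(0.1j), zeta, alpha))   # False

    # Conical region (toy example): D = upper half-plane, zeta = 0, T_zeta = real axis.
    in_D = lambda z: z.imag > 0
    print(in_conical_region(0.1 + 0.2j, 0j, 1j, 3.0, in_D))        # True
    print(in_conical_region(0.5 + 0.01j, 0j, 1j, 3.0, in_D))       # False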
2.3. Cross and separate holomorphicity and A-limit. Let X, Y be two com-
plex manifolds, let D ⊂ X, G ⊂ Y be two nonempty open sets, let A ⊂ D̄ and
B ⊂ Ḡ. Moreover, D (resp. G) is equipped with a system of approach regions
A(D) = (A_α(ζ))_{ζ∈D̄, α∈I_ζ} (resp. A(G) = (A_α(η))_{η∈Ḡ, α∈I_η}). We define a 2-fold
cross W, its interior W o and its regular part W̃ (with respect to A(D) and A(G))
W = X(A, B; D, G) := ((D ∪ A) × B) ∪ (A × (B ∪ G)),
W° = X°(A, B; D, G) := (A × G) ∪ (D × B),
W̃ = X̃(A, B; D, G) := ((D ∪ Ã) × B̃) ∪ (Ã × (G ∪ B̃)).
Moreover, put
ω(z, w) := ω(z, A,D) + ω(w,B,G), (z, w) ∈ D ×G,
ω̃(z, w) := ω̃(z, A,D) + ω̃(w,B,G), (z, w) ∈ D ×G.
For a 2-fold cross W := X(A,B;D,G) let
Ŵ := X̂(A,B;D,G) = {(z, w) ∈ D ×G : ω(z, w) < 1} ,
W := X̂(Ã, B̃;D,G) = {(z, w) ∈ D ×G : ω̃(z, w) < 1} .
Let Z be a complex analytic space. We say that a mapping f : W o −→ Z is
separately holomorphic and write f ∈ Os(W
o, Z), if, for any a ∈ A (resp. b ∈ B)
the restricted mapping f(a, ·) (resp. f(·, b)) is holomorphic on G (resp. on D).
We say that a mapping f : W −→ Z is separately continuous and write f ∈ Cs(W, Z),
if, for any a ∈ A (resp. b ∈ B) the restricted mapping f(a, ·) (resp.
f(·, b)) is continuous on G ∪ B (resp. on D ∪ A).
In virtue of (2.1), for every (ζ, η) ∈ W̃ and every α ∈ Iζ , β ∈ Iη, there are open
neighborhoods U of ζ and V of η such that
U ∩ Aα(ζ)
V ∩Aβ(η)
Then a mapping f :
W −→ Z is said to admit A-limit λ at (ζ, η) ∈ W̃ , and one
writes
(A− lim f)(ζ, η) = λ, 3
if, for all α ∈ Iζ, β ∈ Iη,
cfW∋(z,w)→(ζ,η), z∈Aα(ζ), w∈Aβ(η)
f(z, w) = λ.
Throughout the paper, for a topological space M, C(M, Z) denotes the set of all
continuous mappings f : M −→ Z. If, moreover, Z = C, then C(M,C) is equipped
with the “sup-norm” |f |M := supM |f | ∈ [0,∞]. A mapping f : M −→ Z is said to
be bounded if there exist an open neighborhood U of f(M) in Z and a holomorphic
embedding φ of U into a polydisc of Ck such that φ(U) is an analytic set in this
polydisc. f is said to be locally bounded along N ⊂ M if for every point z ∈ N ,
there is an open neighborhood U of z (in M) such that f |U : U −→ Z is bounded.
f is said to be locally bounded if it is so for N = M. It is clear that if Z = C then
the above notions of boundedness coincide with the usual ones.
3Note that here A = A(D)×A(G).
2.4. Hartogs extension property. The following example (see Shiffman [42])
shows that an additional hypothesis on the “target space” Z is necessary in or-
der that the PROBLEM makes sense. Consider the mapping f : C² −→ P¹ given by
f(z, w) := [(z + w)² : (z − w)²] for (z, w) ≠ (0, 0),  and  f(0, 0) := [1 : 1].
Then f ∈ Os(X°(C, C; C, C), P¹), but f is not continuous at (0, 0).
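For the record, here is a quick numerical way to see the discontinuity, evaluating f along the lines w = t·z in the affine chart where the ratio (z + w)²/(z − w)² is finite; the helper function is ours and purely illustrative.

    # Shiffman's example along the lines w = t*z: the value ((1+t)/(1-t))^2 is
    # independent of z but depends on the direction t, so f has no limit at (0,0).
    def f_ratio(z, w):
        """Affine coordinate of f(z, w) = [(z+w)^2 : (z-w)^2] (valid when z != w)."""
        return (z + w)**2 / (z - w)**2

    for t in (0.0, 0.5, -0.5):
        print(t, [f_ratio(z, t * z) for z in (1e-1, 1e-3, 1e-6)])
    # prints 1.0, 9.0 and ~0.111 respectively, whatever the size of z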
We recall here the following notion (see, for example, Shiffman [41]). Let p ≥ 2
be an integer. For 0 < r < 1, the Hartogs figure in dimension p, denoted by Hp(r),
is given by
H_p(r) := { (z′, z_p) ∈ E^p : ‖z′‖ < r or |z_p| > 1 − r },
where E is the open unit disc of C, z′ = (z_1, . . . , z_{p−1}) and ‖z′‖ := max_{1≤j≤p−1} |z_j|.
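A small membership test may help to visualize the Hartogs figure; the function below follows the definition of H_p(r) literally, and its name and the sample points are ours.

    def in_hartogs_figure(z, r):
        """True if z = (z_1, ..., z_p) lies in H_p(r), p >= 2, 0 < r < 1."""
        if any(abs(zj) >= 1 for zj in z):          # must lie in the open polydisc E^p
            return False
        z_prime, z_p = z[:-1], z[-1]
        return max(abs(zj) for zj in z_prime) < r or abs(z_p) > 1 - r

    print(in_hartogs_figure((0.05 + 0j, 0.5 + 0j), 0.1))   # True : ||z'|| < r
    print(in_hartogs_figure((0.5 + 0j, 0.95 + 0j), 0.1))   # True : |z_p| > 1 - r
    print(in_hartogs_figure((0.5 + 0j, 0.5 + 0j), 0.1))    # False: in E^2, outside H_2(r)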
Definition 2.4. A complex analytic space Z is said to possess the Hartogs extension
property in dimension p if every mapping f ∈ O(Hp(r), Z) extends to a mapping
f̂ ∈ O(Ep, Z). Moreover, Z is said to possess the Hartogs extension property if it
does in any dimension p ≥ 2.
It is a classical result of Ivashkovich (see [17]) that if Z possesses the Hartogs
extension property in dimension 2, then it does in all dimensions p ≥ 2. Some typical
examples of complex analytic spaces possessing the Hartogs extension property are
the complex Lie groups (see [4]), the taut spaces (see [48]), the Hermitian manifolds
with negative holomorphic sectional curvature (see [41]), and the holomorphically convex
Kähler manifolds without rational curves (see [17]).
Here we mention an important characterization due to Shiffman (see [41]).
Theorem 2.5. A complex analytic space Z possesses the Hartogs extension property
if and only if for every domain D of any Stein manifold M, every mapping f ∈
O(D,Z) extends to a mapping f̂ ∈ O(D̂, Z), where D̂ is the envelope of holomorphy
of D.
In the light of Definition 2.4 and Shiffman’s Theorem, the natural “target spaces”
Z for obtaining satisfactory answers to the PROBLEM are the complex analytic
spaces which possess the Hartogs extension property.
2.5. Statement of the main results. We are now ready to state the main results.
Theorem A. Let X, Y be two complex manifolds, let D ⊂ X, G ⊂ Y be two open
sets, let A (resp. B) be a subset of D̄ (resp. Ḡ). D (resp. G) is equipped with a
system of approach regions (A_α(ζ))_{ζ∈D̄, α∈I_ζ} (resp. (A_β(η))_{η∈Ḡ, β∈I_η}). Let Z be a
complex analytic space possessing the Hartogs extension property. Then, for every
mapping f : W −→ Z which satisfies the following conditions:
• f ∈ Cs(W,Z) ∩ Os(W
o, Z);
• f is locally bounded along X(A ∩ ∂D, B ∩ ∂G; D, G);
• f |A×B is continuous at all points of (A ∩ ∂D)× (B ∩ ∂G),
there exists a unique mapping f̂ ∈ O(
W,Z) which admits A-limit f(ζ, η) at every
point (ζ, η) ∈ W ∩ W̃ .
If, moreover, Z = C and |f|_W < ∞, then
|f̂(z, w)| ≤ |f|_{A×B}^{1−ω̃(z,w)} · |f|_W^{ω̃(z,w)}  for all (z, w) in the domain of f̂.
Theorem A has an important corollary. Before stating this, we need to introduce
a terminology. A complex manifold M is said to be a Liouville manifold if PSH(M)
does not contain any non-constant function which is bounded from above. We see clearly that
the class of Liouville manifolds contains the class of connected compact manifolds.
Corollary B. We keep the hypothesis and the notation in Theorem A. Suppose in
addition that G is a Liouville manifold and that Ã, B̃ ≠ ∅. Then, for every mapping
f : W −→ Z which satisfies the following conditions:
• f ∈ Cs(W,Z) ∩ Os(W
o, Z);
• f is locally bounded along X(A ∩ ∂D, B ∩ ∂G; D, G);
• f |A×B is continuous at all points of (A ∩ ∂D)× (B ∩ ∂G),
there is a unique mapping f̂ ∈ O(D × G,Z) which admits A-limit f(ζ, η) at every
point (ζ, η) ∈ W ∩ W̃ .
Corollary B follows immediately from Theorem A since ω̃(·, B,G) ≡ 0.
We will see in Section 10 below that Theorem A and Corollary B generalize all
the results discussed in Section 1 above. Moreover, they also give many new results.
Although our main results have been stated only for the case of a 2-fold cross, they
can be formulated for the general case of an N -fold cross with N ≥ 2 (see also
[28, 33]).
3. Holomorphic discs and a Two-Constant Theorem
We recall here some elements of Poletsky theory of discs, some background of the
pluripotential theory and auxiliary results needed for the proof of Theorem A.
3.1. Poletsky theory of discs and Rosay Theorem on holomorphic discs.
Let E denote as usual the open unit disc in C. For a complex manifold M, let
O(E,M) denote the set of all holomorphic mappings φ : E −→ M which extend
holomorphically to a neighborhood of E. Such a mapping φ is called a holomorphic
disc on M. Moreover, for a subset A of M, let
1_{A,M}(z) := 1 if z ∈ A, and 1_{A,M}(z) := 0 if z ∈ M \ A.
4 It follows from Subsection 2.3 that
X(A ∩ ∂D, B ∩ ∂G; D, G) = ((A ∩ ∂D) × (G ∪ B)) ∪ ((D ∪ A) × (B ∩ ∂G)).
In the work [40] Rosay proved the following remarkable result.
Theorem 3.1. Let u be an upper semicontinuous function on a complex manifold
M. Then the Poisson functional of u defined by
P[u](z) := inf { (1/2π) ∫₀^{2π} u(φ(e^{iθ})) dθ : φ ∈ O(E, M), φ(0) = z }
is plurisubharmonic on M.
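The following sketch gives a crude numerical feeling for P[u]: it minimizes the boundary average only over affine discs φ(t) = z + ct whose image stays in M, so it merely produces an upper bound for the infimum over all holomorphic discs. The sampling choices, the toy data and all names are ours.

    import numpy as np

    def disc_average(u, z, c, n_theta=720):
        """(1/2pi) * integral of u(phi(e^{i theta})) d theta for the affine disc phi(t) = z + c*t."""
        theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
        return float(np.mean([u(z + c * np.exp(1j * t)) for t in theta]))

    def poisson_functional_upper_bound(u, z, in_M, radii, n_dir=16):
        """Upper bound for P[u](z): minimize over a finite family of affine discs."""
        best = u(z)                        # the constant disc phi = z is always admissible
        for r in radii:
            for k in range(n_dir):
                c = r * np.exp(2j * np.pi * k / n_dir)
                # crude admissibility check: sample the boundary circle of the disc
                # (adequate for the convex toy M used below)
                if all(in_M(z + c * np.exp(1j * t)) for t in np.linspace(0, 2*np.pi, 64)):
                    best = min(best, disc_average(u, z, c))
        return best

    # Toy data: M = unit disc, u = indicator of M \ A with A = { |w - 0.5| < 0.3 };
    # by Proposition 3.4 below, P[u](0) is then the value omega(0, A, M).
    in_M = lambda w: abs(w) < 1
    u = lambda w: 0.0 if abs(w - 0.5) < 0.3 else 1.0
    print(poisson_functional_upper_bound(u, 0.0 + 0j, in_M, radii=[0.2, 0.5, 0.8, 0.95]))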
Rosay Theorem may be viewed as an important development in Poletsky theory
of discs. Observe that special cases of Theorem 3.1 have been considered by Poletsky
(see [37, 38]), Lárusson–Sigurdsson (see [22]) and Edigarian (see [10]).
The following Rosay type result gives the connections between the Poisson func-
tional of the characteristic function 1M\A,M and holomorphic discs.
Lemma 3.2. Let M be a complex manifold and let A be a nonempty open subset of
M. Then for any ǫ > 0 and any z0 ∈ M, there are an open neighborhood U of z0,
an open subset T of C, and a family of holomorphic discs (φz)z∈U ⊂ O(E,M) with
the following properties:
(i) Φ ∈ O(U ×E,M), where Φ(z, t) := φz(t), (z, t) ∈ U × E;
(ii) φz(0) = z, z ∈ U ;
(iii) φz(t) ∈ A, t ∈ T ∩ E, z ∈ U ;
(iv) (1/2π) ∫₀^{2π} 1_{∂E\T,∂E}(e^{iθ}) dθ < P[1_{M\A,M}](z0) + ǫ.
Proof. See Lemma 3.2 in [28]. �
The next result describes the situation in dimension 1. It will be very useful later
Lemma 3.3. Let T be an open subset of E. Then
ω(0, T ∩ E, E) ≤ (1/2π) ∫₀^{2π} 1_{∂E\T,∂E}(e^{iθ}) dθ.
Proof. See, for example, Lemma 3.3 in [28]. �
The last result, which is an important consequence of Rosay’s Theorem, gives the
connection between the Poisson functional and the plurisubharmonic measure.
Proposition 3.4. Let M be a complex manifold and A a nonempty open subset of
M. Then ω(z, A,M) = P[1M\A,M](z), z ∈ M.
Proof. See, for example, the proof of Proposition 3.4 in [28]. �
3.2. Level sets of the relative extremal functions and a Two-Constant
Theorem. Let X be a complex manifold and D ⊂ X an open set. Suppose that
D is equipped with a system of approach regions A =
Aα(ζ)
ζ∈D, α∈Iζ
. For every
open subset G of D, there is a natural system of approach regions for G which is
called the induced system of approach regions A
ζ∈G, α∈I
of A onto G.
It is given by
α(ζ) := Aα(ζ) ∩G, ζ ∈ G, α ∈ I
where I
α ∈ Iζ : ζ ∈ Aα(ζ) ∩G
Proposition 3.5. Under the above hypothesis and notation, let A ⊂ D be a locally
pluriregular set (relative to A). For 0 < δ < 1, define the δ-level set of D relative
to A as follows
Dδ,A := {z ∈ D : ω(z, A,D) < 1− δ} .
We equip Dδ,A with the induced system of approach regions A
of A onto Dδ,A (see
Subsection 2.1 above). Then A ⊂ Dδ,A and
(3.1)  ω(z, A, D_{δ,A}) = ω(z, A, D)/(1 − δ),  z ∈ D_{δ,A}.
Moreover, A is locally pluriregular relative to A
Proof. Since A is locally pluriregular, we see that
(3.2)
A− lim supω(·, A,D)
(z) = 0, z ∈ A.
Therefore, for every z ∈ A and α ∈ Iz, there is an open neighborhood U of z such
that ∅ 6= Aα(z) ∩ U ⊂ Dδ,A. Hence, A ⊂ Dδ,A.
Next, we turn to the proof of identity (3.1). Observe that 0 ≤ ω(·, A, D)/(1 − δ) ≤ 1 on
D_{δ,A} by definition. This, combined with (3.2), implies that
(3.3)  ω(z, A, D)/(1 − δ) ≤ ω(z, A, D_{δ,A}),  z ∈ D_{δ,A}.
To prove the converse inequality of (3.3), let u ∈ PSH(Dδ,A) be such that u ≤ 1 on
Dδ,A and A
− lim sup u ≤ 0 on A. Consider the following function
(3.4) û(z) :=
max {(1− δ)u(z), ω(z, A,D)} , z ∈ Dδ,A,
ω(z, A,D), z ∈ D \Dδ,A.
It can be checked that û ∈ PSH(D) and 0 ≤ û ≤ 1. Moreover, in virtue of the
assumption on u and (3.2) and (3.4), we have that
(A−lim sup û)(a) ≤ max
(1− δ)(A
− lim sup u)(a),
A− lim supω(·, A,D)
for all a ∈ A. Consequently, û ≤ ω(·, A,D). In particular, one gets from (3.4) that
u(z) ≤ ω(z, A, D)/(1 − δ),  z ∈ D_{δ,A}.
Since u is arbitrary, we deduce from the latter estimate that the converse inequality
of (3.3) also holds. This, combined with (3.3), completes the proof of (3.1).
To prove the last conclusion of the proposition, fix a point a ∈ A and an open
neighborhood U of a. Then we have
A− lim supω(·, A∩U,Dδ,A∩U)
(a) ≤
A− lim supω(·, A∩U, (D∩U)δ,A∩U )
A− lim supω(·, A ∩ U,D ∩ U)
(a) = 0,
where the first equality follows from identity (3.1) and the second one from the
hypothesis that A is locally pluriregular. �
The following Two-Constant Theorem for plurisubharmonic functions will play
an important role in the proof of the estimate in Theorem A.
Theorem 3.6. Let X be a complex manifold and D ⊂ X an open subset. Suppose
that D is equipped with a system of approach regions
Aα(ζ)
ζ∈D, α∈Iζ
. Let A ⊂ D
be a locally pluriregular set. Let m,M ∈ R and u ∈ PSH(D) such that u(z) ≤ M
for z ∈ D, and (A− lim sup u)(z) ≤ m for z ∈ A. Then
u(z) ≤ m(1− ω(z, A,D)) +M · ω(z, A,D), z ∈ D.
Proof. It follows immediately from Definition 2.2. �
Theorem 3.7. We keep the hypotheses and notation of Theorem 3.6. Let f be a
bounded function in O(D,C) such that (A − lim f)(ζ) = 0, ζ ∈ A. Then f(z) = 0
for all z ∈ D such that ω(z, A, D) ≠ 1.
Proof. Fix a finite positive constant M such that |f |D < M. Consequently, the
desired conclusion follows from applying Theorem 3.6 to the function u := log |f |.
3.3. Construction of discs. In this subsection we present the construction of discs
à la Poletsky (see [38]). This is one of the main ingredients in the proof of Theorem
Let mes denote the Lebesgue measure on the unit circle ∂E. For a bounded
mapping φ ∈ O(E,Cn) and ζ ∈ ∂E, f(ζ) denotes the angular limit value of f at
ζ if it exists. A classical theorem of Fatou says that mes ({ζ ∈ ∂E : ∃f(ζ)}) = 2π.
For z ∈ Cn and r > 0, let B(z, r) denote the open ball centered at z with radius r.
Theorem 3.8. Let D be a bounded open set in Cn, A ⊂ D, z0 ∈ D and ǫ > 0.
Let A be a system of approach regions for D. Suppose in addition that A is locally
pluriregular (relative to A). Then there exist a bounded mapping φ ∈ O(E,Cn) and
a measurable subset Γ0 ⊂ ∂E with the following properties:
1) Γ0 is pluriregular (with respect to the system of angular approach regions),
φ(0) = z0, φ(E) ⊂ D, Γ0 ⊂ { ζ ∈ ∂E : φ(ζ) ∈ A }, and
1 − (1/2π)·mes(Γ0) < ω(z0, A, D) + ǫ.
2) Let f ∈ C(D ∪ A,C) ∩ O(D,C) be such that f(D) is bounded. Then there
exist a bounded function g ∈ O(E,C) such that g = f ◦ φ in a neighborhood
of 0 ∈ E and5 g(ζ) = (f ◦ φ)(ζ) for all ζ ∈ Γ0. Moreover, g|Γ0 ∈ C(Γ0,C).
This theorem motivates the following
Definition 3.9. We keep the hypothesis and notation of Theorem 3.8. Then ev-
ery pair (φ,Γ0) satisfying the conclusions 1)–2) of this theorem is said to be an
ǫ-candidate for the triplet (z0, A,D).
Theorem 3.8 says that there always exist ǫ-candidates for all triplets (z, A,D).
Proof. First we will construct φ. To do this we will construct by induction a sequence
k=1 ⊂ O(E,D) which approximates φ as k ր ∞. This will allow to define the
desired mapping as φ := lim
φk. The construction of such a sequence is divided into
three steps.
For 0 < δ, r < 1 let
Da,r := D ∩ B(a, r), a ∈ A.
Aa,r,δ := {z ∈ Da,r : ω(z, A ∩ B(a, r), Da,r) < δ} , a ∈ A,
Ar,δ :=
Aa,r,δ,
(3.5)
where in the second “:=” Da,r is equipped with the induced system of approach
regions of A onto Da,r (see Subsection 3.2 above).
Suppose without loss of generality that D ⊂ B(0, 1).
Step 1: Construction of φ1.
Let δ0 :=
and r0 := 1. Fix 0 < δ1 <
and 0 < r1 <
. Applying Proposition
3.4, we obtain φ1 ∈ O(E,D) such that φ1(0) = z0 and
∂E ∩ φ−11 (Ar1,δ1)
≤ ω(z0, Ar1,δ1, D) + δ0.
On the other hand, using (3.5) and Definition 2.2 and the hypothesis that A is
locally pluriregular, we obtain
ω(z0, Ar1,δ1 , D) ≤ ω(z0, A,D).
Consequently, we may choose a subset Γ1 of Γ0 := ∂E ∩ φ
1 (Ar1,δ1) which consists
of finite disjoint closed arcs (Γ1j)j∈J1 so that
(3.6) 1−
·mes(Γ1) < ω(z0, Ar1,δ1, D) + 2δ0 ≤ ω(z0, A,D) + 2δ0,
t,τ∈Γ1j
|t− τ | < 2δ1, sup
t,τ∈Γ1j
|φ1(t)− φ1(τ)| < 2r1, j ∈ J1.
Step 2: Construction of φk+1 from φk for all k ≥ 1.
By the inductive construction we have 0 < δk <
and 0 < rk <
φk ∈ O(E,D) such that φk(0) = z0 and there exists a closed subset Γk of ∂E ∩
5 Note here that by Part 1), (f ◦ φ)(ζ) exists for all ζ ∈ Γ0.
(Ark,δk) ∩ Γk−1 which consists of finite closed arcs (Γk,j)j∈Jk such that Γk is
relatively compact in the interior of Γk−1, and
(3.7) 1−
·mes(Γk) < 1−
·mes(Γk−1) + 2δk−1,
t,τ∈Γk,j
|t− τ | < 2δk, sup
t,τ∈Γk,j
|φk(t)− φk(τ)| < 2rk, j ∈ Jk,
|φk − φk−1|Γk < 2rk−1.
Here we make the convention that the last inequality is empty when k = 1.
In particular, we have that φk(Γk) ⊂ Ark,δk . Therefore, by (3.5), for every ζ ∈
φk(Γk) there is a ∈ A such that ζ ∈ Aa,rk,δk , that is,
ω(ζ, A∩ B(a, rk), Da,rk) < δk.
Using the hypothesis that A is locally pluriregular and (3.5) we see that
ω(z, Ar,δ ∩Da,rk , Da,rk) ≤ ω(z, A ∩ B(a, rk), Da,rk), 0 < δ, r < 1.
Consequently, for every ζ ∈ φk(Γk) there is a ∈ A such that
ω(ζ, Ar,δ ∩Da,rk , Da,rk) < δk, 0 < δ, r < 1.
Using the last estimate and arguing as in [38, p. 120–121] (see also the proof of
Theorem 1.10.7 in [19] for a nice presentation), we can choose 0 < δk+1 <
0 < rk+1 <
and φk+1 ∈ O(E,D) such that φk+1(0) = z0, and there exists a
closed subset Γk+1 of ∂E ∩ φ
k+1(Ark+1,δk+1) ∩ Γk which consists of finite closed arcs
(Γk+1,j)j∈Jk+1 such that Γk+1 is relatively compact in the interior of Γk, and
(3.8) 1−
·mes(Γk+1) < 1−
·mes(Γk) + 2δk,
t,τ∈Γk+1,j
|t− τ | < 2δk+1, sup
t,τ∈Γk+1,j
|φk+1(t)− φk+1(τ)| < 2rk+1, j ∈ Jk+1,
|φk+1 − φk|Γk+1 < 2rk.
Step 3: Construction of φ from the sequence (φk)
In summary, we have constructed a decreasing sequence (Γk)
k=1 of closed subsets
of ∂E. Consider the new closed set
By (3.7)–(3.8),
·mes(Γ) =
mes(Γ1)− 2
mes(Γ1)− 3δ1.
This, combined with (3.6), implies the following property
·mes(Γ) < 1−
·mes(Γ1)+3δ1 ≤ ω(z0, A,D)+2δ0+3δ1 < ω(z0, A,D)+ ǫ.
On the other hand, we recall from the above construction the following properties:
(ii) φk(Γ) ⊂ φk(Γk) ⊂ Ark,δk .
(iii) δ0 =
, r0 = 1, 0 < δk+1 <
, 0 < rk+1 <
and |φk+1−φk|Γ ≤ |φk+1−φk|Γk+1 <
(iv) sup
t,τ∈Γkj
|t− τ | < 2δk and sup
t,τ∈Γk,j
|φk(t)− φk(τ)| < 2rk, j ∈ Jk.
(v) For every ζ ∈ Γ there exists a sequence (jk)k≥1 such that jk ∈ Jk, and ζ is an
interior point of Γk,jk , and Γk+1,jk+1 ⋐ Γk,jk , and ζ =
Γk,jk .
Therefore, we are able to apply the Khinchin–Ostrowski Theorem (see [11, The-
orem 4, p. 397]) to the sequence (φk)
k=1. Consequently, this sequence converges
uniformly on compact subsets of E to a mapping φ ∈ O(E,D). Moreover, φ admits
(angular) boundary values at all points of Γ and φ(Γ) ⊂
Ark,δk ⊂ A.
Observe that since φk(0) = φ(0) = z0 ∈ D and f ∈ C(D ∪ A,C) ∩ O(D,C), the
sequence (f ◦ φk)
k=1 converges to f ◦ φ uniformly on a neighborhood of 0 ∈ E. On
the other hand, f(D) is bounded by the hypothesis. Thus by Montel Theorem, the
family (f ◦ φk)
k=1 ⊂ O(E,C) is normal. Consequently, the sequence (f ◦ φk)
converges uniformly on compact subsets of E. Let g be the limit mapping. Then
g ∈ O(E,C) and g = f ◦ φ in a neighborhood of 0 ∈ E. Moreover, it follows
from (i)–(iii) above and the hypothesis f ∈ C(D ∪ A,C) that g(ζ) = (f ◦ φ)(ζ)
for all ζ ∈ Γ. We deduce from (iii)–(v) above that g|Γ ∈ C(Γ, C). Finally, applying
Lemma 4.1 below we may choose a locally pluriregular subset Γ0 ⊂ Γ (relative to
the system of angular approach regions) such that mes(Γ0) = mes(Γ). Hence, the
proof is finished. �
It is worth remarking that φ(E) ⊂ D̄; but in general, φ(E) ⊄ D!
The last result of this section sharpens Theorem 3.8.
Theorem 3.10. Let D be a bounded open set in Cn, A ⊂ D, and ǫ > 0. Let A be a
system of approach regions for D. Suppose in addition that A is locally pluriregular
(relative to A). Then there exists a Borel mapping Φ : D × E −→ Cn with the
following property: for every z ∈ D, there is a measurable subset Γz of ∂E such that
(Φ(z, ·),Γz) is an ǫ-candidate for the triplet (z, A,D).
Roughly speaking, this result says that one can construct ǫ-candidates for (z, A,D)
so that they depend in a Borel-measurable way on z ∈ D.
Proof. Observe that in Proposition 3.4 we can construct ǫ-candidates for (z, A,M)
so that they depend in a Borel-measurable way on z ∈ M. Here an ǫ-candidate
for (z, A,M) is a holomorphic disc φ ∈ O(E,M) such that φ(0) = z and
1∂E\φ−1(A),∂E(e
iθ)dθ < P[1M\A,M](z) + ǫ.
Using this we can adapt the proof of Theorem 3.8 in order to obtain the desired
result. �
4. A mixed cross theorem
Let E be as usual the open unit disc in C. Let B be a measurable subset of ∂E
and ω(·, B, E) the relative extremal function of B relative to E (with respect to the
canonical system of approach regions). Then it is well-known (see [39]) that
(4.1)  ω(z, B, E) = (1/2π) ∫₀^{2π} [(1 − |z|²)/|e^{iθ} − z|²] · 1_{∂E\B,∂E}(e^{iθ}) dθ.
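A quick numerical check of (4.1), for B an open boundary arc { e^{iθ} : |θ| < θ0 }; the quadrature, the sample points and the names are ours.

    import numpy as np

    def omega_disc(z, theta0, n=20000):
        """Formula (4.1) for B = { e^{i theta}: |theta| < theta0 }: Poisson integral over the complement of B."""
        theta = np.linspace(-np.pi, np.pi, n, endpoint=False)
        poisson = (1 - abs(z)**2) / np.abs(np.exp(1j * theta) - z)**2
        indicator = (np.abs(theta) >= theta0).astype(float)      # 1 on the complement of B
        return float(np.mean(poisson * indicator))

    theta0 = np.pi / 3
    print(omega_disc(0.0, theta0))      # ~ 1 - theta0/pi = 2/3 at the centre
    print(omega_disc(0.9, theta0))      # small: z approaches the arc B radially
    print(omega_disc(-0.9, theta0))     # close to 1: z approaches the complementary arc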
The following elementary lemma will be very useful.
Lemma 4.1. We keep the above hypotheses and notation.
1) Let u be a subharmonic function defined on E with u ≤ 1 and let α ∈ (0, π/2)
be such that
lim sup_{z→ζ, z∈A_α(ζ)} u(z) ≤ 0 for a.e. ζ ∈ B,
where A = (Aα(ζ)) is the system of angular approach regions defined in
Subsection 2.2. Then u ≤ ω(·, B, E) on E.
2) ω(·, B, E) is also the relative extremal function of B relative to E (with
respect to the system of angular approach regions).
3) For all subsets N ⊂ ∂E with mes(N ) = 0, ω(·, B, E) = ω(·, B ∪N , E).
4) Let B
be the set of all density points of B. Then
z→ζ, z∈Aα(ζ)
ω(z, B, E) = 0, ζ ∈ B
, 0 < α <
In particular, B
is locally pluriregular (with respect to the system of angular
approach regions).
5) ω(·, B, E) = ω̃c(·, B, E) = ω̃a(·, B, E) on E, where ω̃c(·, B, E) (resp.
ω̃a(·, B, E)) is given by Definition 2.3 relative to the system of canonical
approach regions (resp. angular approach regions).
Proof. It follows immediately from the explicit formula (4.1). �
The main ingredient in the proof of Theorem A is the following mixed cross
theorem.
Theorem 4.2. Let D be a complex manifold and E as usual the open unit disc in
C. D (resp. E) is equipped with the canonical system of approach regions (resp.
the system of angular approach regions). Let A be an open subset of D and B a
measurable subset of ∂E such that B is locally pluriregular (relative to the system of
angular approach regions). For 0 ≤ δ < 1 put G := {w ∈ E : ω(w,B,E) < 1− δ} .
Let W := X(A,B;D,G), W o := Xo(A,B;D,G), and6
Ŵ = X̂(A, B; D, G) := { (z, w) ∈ D × G : ω(z, A, D) + ω(w, B, E)/(1 − δ) < 1 }.
Let f : W −→ C be such that
(i) f ∈ Os(W
o,C);
(ii) f is locally bounded on W, f |A×B is a Borel function;
(iii) for all z ∈ A,  lim_{w→η, w∈A_α(η)} f(z, w) = f(z, η),  η ∈ B,  0 < α < π/2.
Then there is a unique function f̂ ∈ O(Ŵ ,C) such that f̂ = f on A×G. Moreover,
|f|_W = |f̂|_{Ŵ}.
The proof of this theorem will occupy the present and the next sections. Our
approach here avoid completely the classical method of doubly orthogonal bases
of Bergman type. For the proof we need the following “measurable” version of
Gonchar’s Theorem.
Theorem 4.3. Let D = G := E be equipped with the system of angular approach
regions. Let A (resp. B) be a Borel measurable subset of ∂D (resp. ∂G) such
that A and B are locally pluriregular and that mes(A), mes(B) > 0. Put W :=
X(A,B;D,G) and define W o, Ŵ , ω(z, w) as in Subsection 2.3. Let f : W −→ C
be such that:
(i) f is locally bounded on W and f ∈ Os(W
o,C);
(ii) f |A×B is a Borel function;
(iii) for all a ∈ A (resp. b ∈ B), f(a, ·)|G (resp. f(·, b)|D) admits A-limit
7 f(a, b)
at all b ∈ B (resp. at all a ∈ A).
Then there exists a unique function f̂ ∈ O(Ŵ ,C) which admits A-limit f(ζ, η) at
all points (ζ, η) ∈ W o. If, moreover, |f |W <∞, then
|f̂(z, w)| ≤ |f|_{A×B}^{1−ω(z,w)} · |f|_W^{ω(z,w)},  (z, w) ∈ Ŵ.
Proof. It follows from Steps 1–3 of Section 6 in [34]. �
The above theorem is also true in the context of an N -fold cross W (N ≥ 2). We
give here a version of a special 3-fold cross which is needed for the proof of Theorem
Theorem 4.4. Let D = G := E be equipped with the system of angular approach
regions. Let A (resp. B) be a Borel measurable subset of ∂D (resp. ∂G) such that
6 In fact, Theorem 4.10 in [34] says that ω(·, B, G) = ω(·, B, E)/(1 − δ) on G, where ω(·, B, G) is the
relative extremal function with respect to the system of angular approach regions induced onto G.
7 that is, the angular limit
A and B are locally pluriregular and that mes(A), mes(B) > 0. Define W, W o, Ŵ
as follows:
W = X(A, ∂E, B; D, E, G) := (A × ∂E × (G ∪ B)) ∪ (A × E × B) ∪ ((D ∪ A) × ∂E × B),
W° = X°(A, ∂E, B; D, E, G) := (A × ∂E × G) ∪ (A × E × B) ∪ (D × ∂E × B),
Ŵ = X̂(A, ∂E, B; D, E, G) := { (z, t, w) ∈ D × E × G : ω(z, A, D) + ω(w, B, G) < 1 }.
Let f : W −→ C be such that:
(i) f is locally bounded on W and f ∈ Os(W
o,C)8;
(ii) f |A×∂E×B is a Borel function;
(iii) for all (a, λ) ∈ A × ∂E (resp. (a, b) ∈ A × B) (resp. (λ, b) ∈ ∂E × B),
f(a, λ, ·)|G (resp. f(a, ·, b)|E) (resp. f(·, λ, b)|D) admits the angular limit
f(a, λ, b) at all b ∈ B (resp. at all λ ∈ ∂E) (resp. at all a ∈ A).
Then there exists a unique function f̂ ∈ O(Ŵ ,C) such that
cW∋(z,t,w)→(ζ,τ,η),w∈Aα(η)
f̂(z, t, w) =
f(ζ, λ, η)
(ζ, τ, η) ∈ D × E × B, 0 < α <
If, moreover, |f |W <∞, then
|f̂(z, t, w)| ≤ |f|_{A×∂E×B}^{1−ω(z,A,D)−ω(w,B,G)} · |f|_W^{ω(z,A,D)+ω(w,B,G)},  (z, t, w) ∈ Ŵ.
Proof. We refer the reader to Subsections 5.2 and 5.3 in [34].
Let ω̂(·, A,D) (resp. ω̂(·, B,G)) be the conjugate harmonic function of ω(·, A,D)
(resp. ω(·, B,G) ) such that ω̂(z0, A,D) = 0 (resp. ω̂(w0, B,G) = 0) for a certain
fixed point z0 ∈ D (resp. w0 ∈ G). Thus we define the holomorphic functions
g1(z) := ω(z, A,D) + iω̂(z, A,D), g2(w) := ω(w,B,G) + iω̂(w,B,G), and
g(z, w) := g1(z) + g2(w), (z, w) ∈ D ×G.
Each function e−g1 (resp. e−g2) is bounded on D (resp. on G). Therefore, in
virtue of [11, p. 439], we may define e−g1(a) (resp. e−g2(b)) for a.e. a ∈ A (resp. for
a.e. b ∈ B) to be the angular limit of e−g1 at a (resp. e−g2 at b).
In virtue of (i), for each positive integer N, we define, as in [12, 13] (see also [34]),
the Gonchar–Carleman operator as follows
(4.2)  K_N(z, t, w) = K_N[f](z, t, w) := (1/(2πi)²) ∬ e^{−N(g(a,b)−g(z,w))} f(a, t, b) da db / ((a − z)(b − w))
for (z, t, w) ∈ D × ∂E × G. Reasoning as in [13] and using (i)–(iii) above, we see
that the following limit
(4.3) K(z, t, w) = K[f ](z, t, w) := lim
KN(z, t, w)
8 This notation means that for all (a, λ) ∈ A×∂E (resp. (a, b) ∈ A×B) (resp. (λ, b) ∈ ∂E×B),
the function f(a, λ, ·)|G (resp. f(a, ·, b)|E) (resp. f(·, λ, b)|D) is holomorphic.
exists for all points in the set
(z, t, w) : t ∈ ∂E, (z, w) ∈ X̂(A,B;D,G)
, and its
limit is uniform on compact subsets of the latter set.
Observe that for n = 0, 1, 2, . . . , and N = 1, 2, . . . ,
KN(z, t, w)dt =
(2πi)2
f(a, t, b)dt
e−N(g(a,b)−g(z,w))dadb
(a− z)(b− w)
where the first equality follows from (4.2), the second one from the equality∫
tnf(a, t, b)dt = 0 which itself is an immediate consequence of (i). Therefore,
we deduce from (4.3) that
tnK(z, t, w)dt = 0, (z, w) ∈ X̂(A,B;D,G), n = 0, 1, 2, . . . .
On the other hand,
(z, t, w) : t ∈ E, (z, w) ∈ X̂(A,B;D,G)
Hence, we are able to define the desired extension function
f̂(z, t, w) := (1/(2πi)) ∫_{∂E} K(z, λ, w)/(λ − t) dλ,  (z, t, w) ∈ Ŵ.
Recall from Steps 1–3 of Section 6 in [34] that
cW∋(z,w)→(ζ,η),w∈Aα(η)
K(z, t, w) = f(ζ, t, η), (ζ, t, η) ∈ D × ∂E × B, 0 < α <
Inserting this into the above formula of f̂ , the desired conclusion of the theorem
follows. �
We break the proof of Theorem 4.2 into two cases.
CASE 1: δ = 0 (that is G = E).
We follow essentially the arguments presented in Section 4 of [28]. For the sake
of clarity and completeness we give here the most basic arguments.
We begin the proof with the following lemma.
Lemma 4.5. We keep the hypothesis of Theorem 4.2. For j ∈ {1, 2}, let φj ∈
O(E,D) be a holomorphic disc, and let tj ∈ E such that φ1(t1) = φ2(t2) and
(1/2π) ∫₀^{2π} 1_{D\A,D}(φ_j(e^{iθ})) dθ < 1. Then:
1) For j ∈ {1, 2}, the function (t, w) 7→ f(φ(t), w) defined on X(φ−1j (A) ∩
∂E,B;E,G) satisfies the hypothesis of Theorem 4.3, where φ−1j (A) := {t ∈
E : φj(t) ∈ A}.
2) For j ∈ {1, 2}, in virtue of Part 1), let f̂j be the unique function in
X̂(φ−1j (A) ∩ ∂E,B;E,G),C
given by Theorem 4.3. Then
f̂1(t1, w) = f̂2(t2, w),
for all w ∈ G such that (tj, w) ∈ X̂
φ−1j (A) ∩ ∂E,B;E,G
, j ∈ {1, 2}.
Proof of Lemma 4.5. Part 1) follows immediately from the hypothesis. There-
fore, it remains to prove Part 2). To do this fix w0 ∈ G such that (tj, w0) ∈
φ−1j (A) ∩ E,B;E,G
for j ∈ {1, 2}.We need to show that f̂1(t1, w0) = f̂2(t2, w0).
Observe that both functions w ∈ G 7→ f̂1(t1, w) and w ∈ G 7→ f̂2(t2, w) belong to
O(G,C), where G is the connected component which contains w0 of the following
open set
w ∈ G : ω(w,B,G) < 1− max
j∈{1,2}
ω(tj, φ
j (A) ∩ ∂E,E)
Since φ1(t1) = φ2(t2), it follows from Theorem 4.3 and the hypothesis of Part 2)
(A− lim f̂1)(t1, η) = f(φ1(t1), η) = f(φ2(t2), η) = (A− lim f̂2)(t2, η), η ∈ B.
Therefore, by Theorem 3.7, f̂1(t1, w) = f̂2(t2, w), w ∈ G. Hence, f̂1(t1, w0) =
f̂2(t2, w0), which completes the proof of the lemma. �
Now we return to the proof of the theorem in CASE 1 which is divided into two
steps.
Step 1: Construction of the extension function f̂ on Ŵ and its uniqueness.
Proof of Step 1. We define f̂ as follows: Let W be the set of all pairs (z, w) ∈ D×G
with the property that there are a holomorphic disc φ ∈ O(E,D) and t ∈ E such
that φ(t) = z and (t, w) ∈ X̂ (φ−1(A) ∩ ∂E,B;E,G) . By Part 1) of Lemma 4.5 and
Theorem 4.3, let f̂φ be the unique function in O
X̂(φ−1(A) ∩ ∂E,B;E,G),C
(4.4) (A− lim f̂φ)(t, w) = f(φ(t), w), (t, w) ∈ X
−1(A) ∩ ∂E,B;E,G
Then the desired extension function f̂ is given by
(4.5) f̂(z, w) := f̂φ(t, w).
In virtue of Part 2) of Lemma 4.5, f̂ is well-defined on W. We next prove that
(4.6) W = Ŵ .
Taking (4.6) for granted, then f̂ is well-defined on Ŵ .
Now we return to (4.6). To prove the inclusion W ⊂ Ŵ , let (z, w) ∈ W. By the
above definition of W, one may find a holomorphic disc φ ∈ O(E,D), a point t ∈ E
such that φ(t) = z and (t, w) ∈ X̂ (φ−1(A) ∩ ∂E,B;E,G) . Since ω(φ(t), A,D) ≤
ω(t, φ−1(A) ∩ ∂E,E), it follows that
ω(z, A,D) + ω(w,B,G) ≤ ω(t, φ−1(A) ∩ ∂E,E) + ω(w,B,G) < 1,
Hence (z, w) ∈ Ŵ . This proves the above mentioned inclusion.
To finish the proof of (4.6), it suffices to show that Ŵ ⊂ W. To do this, let
(z, w) ∈ Ŵ and fix any ǫ > 0 such that
(4.7) ǫ < 1− ω(z, A,D)− ω(w,B,G).
Applying Theorem 3.1 and Proposition 3.4, there is a holomorphic disc φ ∈ O(E,D)
such that φ(0) = z and
(4.8)  (1/2π) ∫₀^{2π} 1_{D\A,D}(φ(e^{iθ})) dθ < ω(z, A, D) + ǫ.
Observe that
ω(0, φ^{−1}(A) ∩ ∂E, E) + ω(w, B, G) = (1/2π) ∫₀^{2π} 1_{D\A,D}(φ(e^{iθ})) dθ + ω(w, B, G)
< ω(z, A, D) + ω(w, B, G) + ǫ < 1,
where the equality follows from (4.1), the first inequality holds by (4.8), and the
last one by (4.7). Hence, (0, w) ∈ X̂ (φ−1(A) ∩ ∂E,B;E,G) , which implies that
(z, w) ∈ W. This completes the proof of (4.6). Hence, the construction of f̂ on Ŵ
has been completed.
Next we show that f̂ = f on A×G. To this end let (z0, w0) be an arbitrary point
of A × G. Choose the holomorphic disc φ ∈ O(E,D) given by φ(t) := z0, t ∈ E.
Then by formula (4.5),
f̂(z0, w0) = f̂φ(0, w0) = f(φ(0), w0) = f(z0, w0).
If g ∈ O(Ŵ ,C) satisfies g = f on A × G, then we deduce from (4.4)–(4.5) that
g = f̂ . This proves the uniqueness of f̂ . �
Finally, we conclude the proof of CASE 1 by the following
Step 2: Proof of the fact that f̂ ∈ O(Ŵ ,C).
Proof of Step 2. Fix an arbitrary point (z0, w0) ∈ Ŵ and let ǫ > 0 be so small such
(4.9) 2ǫ < 1− ω(z0, A,D)− ω(w0, B,G).
Since ω(·, B,G) ∈ PSH(G), one may find an open neighborhood V of w0 such that
(4.10) ω(w,B,D) < ω(w0, B,G) + ǫ, w ∈ V.
Let n be the dimension of D at the point z0. Applying Lemma 3.2 and Proposition
3.4, we obtain an open set T in C, an open neighborhood U of z0, and a family of
holomorphic discs (φz)z∈U ⊂ O(E,D) with the following properties:
the mapping (z, t) ∈ U × E 7→ φz(t) is holomorphic;(4.11)
φz(0) = z, z ∈ U ;(4.12)
φz(t) ∈ A, t ∈ T ∩ E, z ∈ U ;(4.13)
1∂E\T,∂E(e
iθ)dθ < ω(z0, A,D) + ǫ.(4.14)
By shrinking U (if necessary), we may assume without loss of generality that in a
chart, z0 = 0 ∈ C
n and
(4.15) U =
z = (z1, . . . , zn) = (z
, zn) ∈ Cn : z
∈ S, |zn| < 2
where S ⊂ Cn−1 is an open set.
Consider the 3-fold cross (compared with the notation in Theorem 4.4)
X (T ∩ ∂E, U,B;E,U,G) := (T ∩ ∂E)× U × (G ∪ B)
(T ∩ ∂E)× U × B
(E ∪ (T ∩ ∂E))× U ×B,
and the function g : X (T ∩ ∂E, U,B;E,U,G) −→ C given by
(4.16) g(t, z, w) := f(φz(t), w), (t, z, w) ∈ X (T ∩ ∂E, U,B;E,U,G) .
We make the following observations:
Let t ∈ T ∩ ∂E. Then, in virtue of (4.13) we have φz(t) ∈ A for z ∈ U. Con-
sequently, in virtue of (4.11), (4.16) and the hypothesis f ∈ Os(W
o,C), we con-
clude that g(t, z, ·)|G ∈ O(G,C)
resp. g(t, ·, w)|U ∈ O(U,C)
for any z ∈ U (resp.
w ∈ B). Analogously, for any z ∈ U, w ∈ B, we can show that g(·, z, w)|E ∈ O(E,C).
In summary, we have shown that g is separately holomorphic. In addition,
it follows from hypothesis (ii) and (4.11)–(4.13) that g is locally bounded and
g|(T∩∂E)×U×B is a Borel function.
For z
∈ S write Ez′ :=
z = (z
, zn) ∈ C
n : |zn| < 1
. Then by (4.15),⋃
Ez′ ⊂ U. Consequently, for all z
∈ S, using hypothesis (iii) we are
able to apply Theorem 4.4 to g in order to obtain a unique function ĝ ∈
X̂ (T ∩ ∂E, ∂Ez′ , B;E,Ez′ , G) ,C
9 such that
(t,z,w)→(τ,ζ,η), w∈Aα(η)
ĝ(t, z, w) =
g(τ, ζ
, λ, η)
λ− ζn
(τ, ζ, η) ∈ E ×Ez′ × B, z
∈ S, 0 < α <
9 In fact, we identify Ez′ with E in an obvious way.
Using (4.11) and (4.15)–(4.16) and the Cauchy’s formula, we see that the right hand
side is equal to g(τ, ζ, η). Hence, we have shown that
(4.17)
(t,z,w)→(τ,ζ,η), w∈Aα(η)
ĝ(t, z, w) = g(τ, ζ, η), (τ, ζ, η) ∈ E×Ez′×B, z
∈ S, 0 < α <
Observe that
X̂ (T ∩ ∂E, ∂Ez′ , B;E,Ez′ , G) = {(t, z, w) ∈ E × Ez′ ×G : ω(t, T ∩ ∂E,E) + ω(w,B,G) < 1} .
On the other hand, for any w ∈ V,
ω(0, T ∩ ∂E,E) + ω(w,B,G) ≤
1∂E\T,∂E(e
iθ)dθ + ω(w0, B,G) + ǫ
< ω(z0, A,D) + ω(w0, B,G) + 2ǫ < 1,
(4.18)
where the first inequality follows from (4.1) and (4.10), the second one from (4.14),
and the last one from (4.9). Consequently,
(4.19) (0, z, w) ∈ X̂ (T ∩ ∂E, ∂Ez′ , B;E,Ez′ , G) , (z, w) ∈ Ez′ × V, z
It follows from (4.5), (4.12), (4.13) and (4.18) that, for z
∈ S and z ∈ Ez′ , f̂φz is
well-defined and holomorphic on X̂(T ∩ ∂E,B;E,G), and
(4.20) f̂(z, w) = f̂φz(0, w), w ∈ V.
On the other hand, it follows from (4.4), (4.16) and (4.17) that
(t,w)→(τ,η), w∈Aα(η)
f̂φz(t, w) = lim
(t,w)→(τ,η), w∈Aα(η)
ĝ(t, z, w),
(τ, η) ∈ E ×B, z ∈ Ez′ , z
∈ S, 0 < α <
Since, for fixed z ∈ Ez′ , the restricted functions (t, w) 7→ ĝ(t, z, w) and f̂φz are
holomorphic on X̂(T ∩ ∂E,B;E,G), we deduce from the latter equality and the
uniqueness of Theorem 4.3 that
ĝ(t, z, w) = f̂φz(t, w), (t, w) ∈ X̂ (T ∩ ∂E,B;E,G) , z ∈ Ez′ , z
In particular, using (4.5), (4.19) and (4.20),
ĝ(0, z, w) = f̂φz(0, w) = f̂(z, w), (z, w) ∈ Ez′ × V, z
Since we know from (4.19) that ĝ is holomorphic in the variables zn and w on a
neighborhood of (0, z0, w0), it follows that f̂ is holomorphic in the variables z
n and
w on a neighborhood of (z0, w0). Exchanging the role of z
n and any other variable
zj , j = 1, . . . , n − 1, we see that f̂ is separetely holomorphic on a neighborhood
of (z0, w0). In addition, f̂ is locally bounded. Consequently, we conclude, by the
classical Hartogs extension Theorem, that f̂ is holomorphic on a neighborhood of
(z0, w0). Since (z0, w0) ∈ Ŵ is arbitrary, it follows that f̂ ∈ O(Ŵ ,C). �
Combining Steps 1–2, CASE 1 follows. �
A UNIFIED APPROACH 25
5. Completion of the proof of Theorem 4.2
In this section we introduce the new technique of conformal mappings. This
technique will allow us to pass from CASE 1 to the general case. We recall a notion
from Definition 4.8 in [34] which will be relevant for our further study.
Definition 5.1. Let A be the system of angular approach regions for E, let Ω be an
open subset of the unit disc E and ζ a point in ∂E. Then the point ζ is said to be
an end-point of Ω if, for every 0 < α < π
, there is an open neighborhood U = Uα of
ζ such that U ∩Aα(ζ) ⊂ Ω. The set of all end-points of Ω is denoted by End(Ω).
The main idea of the technique of conformal mappings is described below.
Proposition 5.2. Let B be a measurable subset of ∂E with mes(B) > 0. For 0 ≤
δ < 1 put G := {w ∈ E : ω(w,B,E) < 1− δ} . Let Ω be an arbitrary connected
component of G. Then
1) End(Ω) is a measurable subset of ∂E and mes(End(Ω)) > 0. Moreover, Ω is
a simply connected domain.
In virtue of Part 1) and the Riemann mapping theorem, let Φ be a confor-
mal mapping of Ω onto E.
2) For every ζ ∈ End(Ω), there is η ∈ ∂E such that
z→ζ, z∈Ω∩Aα(ζ)
Φ(z) = η, 0 < α <
η is called the limit of Φ at the end-point ζ and it is denoted by Φ(ζ).
Moreover, Φ|End(Ω) is one-to-one.
3) Let f be a bounded holomorphic function on Ω, ζ ∈ End(Ω) and λ ∈ C such
that lim
z→ζ, z∈Ω∩Aα(ζ)
f(z) = λ for some 0 < α < π
. Then f ◦ Φ−1 ∈ O(E,C)
admits the angular limit λ at Φ(ζ).
4) Let ∆ be a subset of End(Ω) such that mes(∆) = mes(End(Ω)). Put Φ(∆) :=
{Φ(ζ), ζ ∈ ∆}, where Φ(ζ) is given by Part 2). Then Φ(∆) is a measurable
subset of of ∂E with mes
> 0. and
ω(Φ(z),Φ(∆), E) =
ω(z, B, E)
, z ∈ Ω.
Proof. The first assertions of Part 1) follows from Theorem 4.9 in [34]. To show
that Ω is simply connected, take an arbitrary Jordan domain D such that ∂D ⊂ Ω.
We need to prove that D ⊂ Ω. Observe that D ⊂ E and ω(z, B, E) < 1 − δ for all
z ∈ ∂D ⊂ Ω ⊂ G. By the Maximum Principle, we deduce that ω(z, B, E) < 1 − δ
for all z ∈ D. Hence, D ⊂ G, which, in turn, implies that D ⊂ Ω. This completes
Part 1).
Part 2) follows from the “end-point” version of Theorem 4.4.13 in [39] (that is,
we replace the hypothesis “accessible point” therein by end-point).
Applying the classical Lindelöf’s Theorem to f ◦ Φ−1 ∈ O(E,C), Part 3) follows.
26 VIÊT-ANH NGUYÊN
It remains to prove Part 4). A straightforward argument shows that Φ(∆) is a
measurable subset of ∂E. Next, we show that
(5.1) ω(Φ(z),Φ(∆), E) ≤
ω(z, B, E)
, z ∈ Ω.
To do this pick any u ∈ PSH(E) such that u ≤ 1 and
lim sup
u(w) ≤ 0, η ∈ Φ(∆).
Consequently, Part 2) gives that
(5.2) lim sup
z→ζ, z∈Ω∩Aα(ζ)
u ◦ Φ(z) = 0, ζ ∈ ∆, 0 < α <
Next, consider the following function
(5.3) ũ(z) :=
max{(1− δ) · (u ◦ Φ)(z), ω(z, B, E)}, z ∈ Ω,
ω(z, B, E), z ∈ E \ Ω.
Then it can be checked that ũ is subharmonic and ũ ≤ 1 in E. In addition, for
every density point ζ of B such that ζ 6∈ End(Ω), we know from Theorem 4.9 in [34]
that there is a connected component Ωζ of G other than Ω such that ζ ∈ End(Ωζ).
Consequently, Part 4) of Lemma 4.1 gives, for such a point ζ, that
lim sup
z→ζ, z∈Aα(ζ)
ũ(z) = lim sup
z→ζ, z∈Aα(ζ)
ω(z, B, E) = 0, 0 < α <
This, combined with (5.2), implies that
lim sup
z→ζ, z∈Aα(ζ)
ũ(z) = 0, 0 < α <
, for a.e. ζ ∈ B.
Consequently, applying Part 1) of Lemma 4.1 yields that ũ ≤ ω(·, B, E) on E.
Hence, by (5.3), (u ◦Φ)(z) ≤
ω(z,B,E)
, z ∈ Ω, which completes the proof of (5.1). In
particular, we obtain that mes (Φ(∆)) > 0.
To prove the opposite inequality of (5.1), let u be an arbitrary function in PSH(E)
such that u ≤ 1 and
lim sup
u(z) ≤ 0, ζ ∈ B.
Applying Part 3) to the function f(z) := z, we obtain that
lim sup
w→η, w∈Aα(η)
(u ◦ Φ−1) (w)
≤ 0, η ∈ Φ(∆), 0 < α <
On the other hand, since u ≤ ω(·, B, E) on E, one gets that
(u◦Φ−1)(w)
≤ 1, w ∈ E.
Therefore, applying Part 1) of Lemma 4.1 yields that
(u ◦ Φ−1) (w)
≤ ω(w,Φ(∆), E), w ∈ E,
which, in turn, implies the converse inequality of (5.1). Hence, the proof of Part 4)
is complete. �
A UNIFIED APPROACH 27
Now we are in the position to complete the proof of Theorem 4.2:
CASE 2: 0 < δ < 1.
Let (Gk)k∈K be the family of all connected components of G, where K is an (at
most) countable index set. By Proposition 5.2, we may fix a conformal mapping Φk
from Gk onto E for every k ∈ K. Put
Bk :=
End(Gk) ∩ B
, Wk := X(A,B
k;D,E),
W ok := X
o(A,B
k;D,E), Ŵk := X̂(A,B
k;D,E), k ∈ K.
(5.4)
where [T ]
(or simply T
) for T ⊂ ∂E is, following the notation of Lemma 4.1, the
set of all density points of T.
Recall from the hypotheses of Theorem 4.2 that for every fixed z ∈ A, the holo-
morphic function f(z, ·)|G is bounded and that for every η ∈ B,
w→η, w∈Ω∩Aα(η)
f(z, w) = f(z, η), 0 < α <
Consequently, Part 3) of Proposition 5.2, applied to f(z, ·)|Gk with k ∈ K, implies
that for every fixed z ∈ D, f(z,Φ−1k (·)) ∈ O(E,C) admits the angular limit f(z, η)
at Φk(η) for all η ∈ B ∩ End(Gk). By Part 1) of that proposition, we know that
B ∩ End(Gk)
> 0. This discussion and the hypothesis allow us to apply the
result of CASE 1 to the function gk : Wk −→ C defined by
(5.5) gk(z, w) :=
f(z,Φ−1
(w)), (z, w) ∈ D ×Gk,
f(z,Φ−1k (w)) (z, w) ∈ D × B
where in the second line we have used the definition of Φk|End(Gk) and its one-to-one
property proved by Part 2) of Proposition 5.2.
Consequently, we obtain an extension function ĝk ∈ O(Ŵk,C) such that
(5.6) ĝk(z, w) = gk(z, w), (z, w) ∈ A× E.
Ŵk :=
(z,Φ−1
(w)), (z, w) ∈ Ŵk
, k ∈ K.
Observe that the open sets (Ŵk)k∈K are pairwise disjoint. Moreover, by (5.4),
Ŵk = {(z, w) ∈ D ×E : w ∈ Gk and
ω(z, A,D) + ω
Φk(w),Φk(End(Gk)), E
< 1 for some k ∈ K
(z, w) ∈ D × E : w ∈ Gk and ω(z, A,D) +
ω(w,B,E)
< 1 for some k ∈ K
= Ŵ ,
28 VIÊT-ANH NGUYÊN
where the second equality follows from Part 4) of Proposition 5.2. Therefore, we
can define the desired extension function f̂ ∈ O(Ŵ ,C) by the formula
f̂(z, w) := ĝk(z,Φk(w)), (z, w) ∈ Ŵk, k ∈ K.
This, combined with (5.4)–(5.6), implies that f̂ = f on A×G. The uniqueness of f̂
follows from that of ĝk, k ∈ K. Hence, the proof of the theorem is complete. �
6. A local version of Theorem A
The main purpose of the section is to prove the following result.
Theorem 6.1. Let D ⊂ Cn, G ⊂ Cm be bounded open sets. D (resp. G) is equipped
with a system of approach regions
Aα(ζ)
ζ∈D, α∈Iζ
(resp.
Aα(η)
η∈G, α∈Iη
). Let
A (resp. B) be a subset of D (resp. G) such that A and B are locally pluriregular.
W := X(A,B;D,G), W := X(A,B;D,G),
:= Xo(A,B;D,G), Ŵ := X̂(A,B;D,G).
Then, for every bounded function f : W −→ C such that f ∈ Cs(W,C)∩Os(W
and that f |A×B is continuous at all points of (A ∩ ∂D) × (B ∩ ∂G), there exists a
unique bounded function f̂ ∈ O(Ŵ ,C) which admits A-limit f(ζ, η) at all points
(ζ, η) ∈ W. Moreover,
(6.1) |f̂(z, w)| ≤ |f |
1−ω(z,w)
A×B |f |
ω(z,w)
W , (z, w) ∈ Ŵ .
The core of our unified approach will be presented in the proof below. Our idea is
to use Theorem 3.8 in order to reduce Theorem 6.1 to the case of bidisk, that is, the
case of Theorem 4.3. This reduction is based on Theorem 4.2 and on the technique
of level sets.
Proof. It is divided into four steps.
Step 1: Construction of the desired function f̂ ∈ O(Ŵ ,C) and proof of the estimate
|f̂ |cW ≤ |f |W .
Proof of Step 1. We define f̂ at an arbitrary point (z, w) ∈ Ŵ as follows: Let ǫ > 0
be such that
(6.2) ω(z, A,D) + ω(w,B,G) + 2ǫ < 1.
By Theorem 3.8 and Definition 3.9, there is an ǫ-candidate (φ,Γ) (resp. (ψ,∆))
for (z, A,D) (resp. (w,B,G)). Moreover, using the hypotheses, we see that the
function fφ,ψ, defined by
(6.3) fφ,ψ(t, τ) := f(φ(t), ψ(τ)), (t, τ) ∈ X (Γ,∆;E,E) ,
satisfies the hypotheses of Theorem 4.3. By this theorem, let f̂φ,ψ be the unique
function in X̂ (Γ,∆;E,E) such that
(6.4) (A− lim f̂φ,ψ)(t, τ) = fφ,ψ(t, τ), (t, τ) ∈ X
o (Γ,∆;E,E) .
A UNIFIED APPROACH 29
In virtue of (6.2) and Theorem 3.8 and Lemma 3.3, (0, 0) ∈ X̂ (Γ,∆;E,E) . Then
we can define the value of the desired extension function f̂ at (z, w) as follows
(6.5) f̂(z, w) := f̂φ,ψ(0, 0).
The remaining part of this step is devoted to showing that f̂ is well-defined and
holomorphic on Ŵ .
To this end we fix an arbitrary point w0 ∈ G, a number ǫ0 : 0 < ǫ0 < 1 −
ω(w0, B,G), and an arbitrary ǫ0-candidate (ψ0,∆0) for (w0, B,G).
(6.6) Ŵ0 := {(z, τ) ∈ D × E : ω(z, A,D) + ω(τ,∆0, E) < 1} .
Inspired by formula (6.5) we define a function f̂0 : Ŵ0 −→ C as follows
(6.7) f̂0(z, τ) := f̂φ,ψ0(0, τ).
Here we have used an ǫ-candidate (φ,Γ) for (z, A,D), where ǫ is arbitrarily chosen
so that 0 < ǫ < 1− ω(z, A,D)− ω(τ,∆0, E).
Using (6.3)–(6.4) and (6.7) and arguing as in Part 2) of Lemma 4.5, one can show
that f̂0 is well-defined on Ŵ0.
For all 0 < δ < 1 let
(6.8) Aδ := {z ∈ D : ω(z, A,D) < δ} and Eδ := {w ∈ E : ω(w,∆0, E) < 1− δ} .
Then by the construction in (6.7), we remark that f̂0(z, ·) is holomorphic on Eδ for
every fixed z ∈ Aδ. We are able to define a new function f̃δ on X (Aδ, B;D,Eδ) as
follows
(6.9) f̃δ(z, τ) :=
f̂0(z, τ) (z, τ) ∈ Aδ × Eδ,
f(z, ψ0(τ)) (z, τ) ∈ D ×∆0.
Using the hypotheses on f and the previous remark, we see that f̃δ ∈
o (Aδ, B;D,Eδ) ,C
Observe that Aδ is an open set in D. Consequently, f̃δ satisfies the hypothe-
ses of Theorem 4.2. Applying this theorem yields a unique function f̂δ ∈
X̂ (Aδ, B;D,Eδ) ,C
such that
f̂δ(z, w) = f̃δ(z, w), (z, w) ∈ Aδ × Eδ.
This, combined with (6.9), implies that f̂0 is holomorphic on Aδ ×Gδ. On the other
hand, it follows from (6.6) and (6.8) that
Ŵ0 = X̂ (A,∆0;D,E) =
0<δ<1
Aδ ×Gδ.
Hence, f̂0 ∈ O(Ŵ0,C).
In summary, we have shown that f̂0, given by (6.7), is well-defined and holomor-
phic on Ŵ0.
30 VIÊT-ANH NGUYÊN
Now we are able to prove that f̂ , given by (6.5), is well-defined. To this end we fix
an arbitrary point (z0, w0) ∈ Ŵ , an ǫ0 : 0 < ǫ0 < 1− ω(z0, D,G), and two arbitrary
ǫ0-candidates (ψ1,∆1) and (ψ2,∆2) for (w0, B,G). Let
Ŵj := {(z, τ) ∈ D ×E : ω(z, A,D) + ω(τ,∆j, E) < 1} , j ∈ {1, 2}.
Using formula (6.7) define, for j ∈ {1, 2}, a function f̂j : Ŵj −→ C as follows
(6.10) f̂j(z, τ) := f̂φ,ψj(0, τ).
Here we have used any ǫ-candidate (φ,Γ) for (z, A,D) with a suitable ǫ > 0. Let
τj ∈ E be such that ψj(τj) = w0, j ∈ {1, 2}. Then, in virtue of (6.5) and (6.10) and
the result of the previous paragraph on the well-definedness of f̂0, the well-defined
property of f̂ is reduced to showing that
(6.11) f̂1(φ(t), τ1) = f̂2(φ(t), τ2)
for all t ∈ E and all ǫ-candidates (φ,Γ) for (φ(t), A,D), such that
ω(t,Γ, A) < ǫ := 1− max
j∈{1,2}
{ω(τ1,∆1, E), ω(τ2,∆2, E)} .
Observe that (6.11) follows from an argument based on Part 2) of Lemma 4.5. Hence,
f̂ is well-defined on Ŵ .
As in (6.8), for all 0 < δ < 1 let
Aδ := {z ∈ D : ω(z, A,D) < δ} , Bδ := {w ∈ G : ω(w,B,G) < δ} ,
Dδ := {z ∈ D : ω(z, A,D) < 1− δ} , Gδ := {w ∈ G : ω(w,B,G) < 1− δ} .
(6.12)
Now we combine (6.8) and (6.12) and the result that f̂0, given by (6.7), is well-defined
and holomorphic on Ŵ0, and the result that f̂ is well-defined on Ŵ . Consequently,
we obtain that
f̂(·, w) ∈ O(Dδ,C), w ∈ Bδ, 0 < δ < 1.
Since the formula (6.5) for f̂ is symmetric in two variables (z, w), one also gets that
f̂(z, ·) ∈ O(Gδ,C), z ∈ Aδ, 0 < δ < 1.
Since by (6.12),
0<δ<1
Aδ ×Gδ =
0<δ<1
Dδ × Bδ,
it follows from the previous conclusions that, for all points (z, w) ∈ Ŵ , there is an
open neighborhood U of z (resp. V of w) such that f ∈ Os(X
o(U, V ;U, V ),C). By
the classical Hartogs extension theorem, f ∈ O(U × V,C). Hence, f̂ ∈ O(Ŵ ,C).
On the other hand, it follows from (6.5) and the estimate in Theorem 4.3 that
(6.13) |f̂ |cW ≤ |f |W .
This completes Step 1. �
Step 2: f |A×B ∈ C(A× B,C).
A UNIFIED APPROACH 31
Proof of Step 2. Using the hypotheses we only need to check the continuity of f |A×B
at every point (a0, w0) ∈ A × (G ∩ B) and at every point (z0, b0) ∈ (D ∩ A) × B.
We will verify the first assertion. To do this let (ak)
k=1 ⊂ A and (wk)
k=1 ⊂ (G∩B)
such that lim
ak = a0 and lim
wk = w0. We need to show that
(6.14) lim
f(ak, wk) = f(a0, w0).
Since f |W is locally bounded, we may choose an open connected neighborhood V
of w0 such that sup
|f(ak, ·)|V <∞. Consequently, by Montel’s Theorem, there is a
sequence (kp)
p=1 such that (f(akp, ·)) converges uniformly on compact subsets of V
to a function g ∈ O(V ). Equality (6.14) is reduced to showing that g = f(a0, ·) on
V. Since f ∈ Cs(W,C), we deduce that f(a0, ·) = g on B ∩ V. On the other hand,
B ∩V is non locally pluripolar because B is locally pluriregular and w0 ∈ B. Hence,
we conclude by the uniqueness principle that g = f(a0, ·) on V. �
Step 3: f̂ admits A-limit f(ζ, η) at all points (ζ, η) ∈ W.
Proof of Step 3. To this end we only need to prove that
(6.15)
A− lim sup |f̂ − f(ζ0, η0)|
(ζ0, η0) < ǫ0
for an arbitrary fixed point (ζ0, η0) ∈ W and an arbitrary fixed 0 < ǫ0 < 1. Suppose
without loss of generality that
(6.16) |f |W ≤
First consider (ζ0, η0) ∈ A × B. Since f ∈ C(A × B,C), one may find an open
neighborhood U of ζ0 in C
n (resp. V of η0 in C
m) so that
(6.17) |f − f(ζ0, η0)|(A∩U)×(B∩V ) <
Consider the open sets
(6.18)
z ∈ D : ω(z, A ∩ U,D) <
and G
w ∈ G : ω(w,B ∩ V,G) <
In virtue of (6.16)–(6.18), an application of Theorem 3.6 gives that
|f(ζ, w)− f(ζ, η0)| ≤ (
)1−ω(w,B∩V,G) ≤
, ζ ∈ A ∩ U, w ∈ G
Hence,
(6.19) |f − f(ζ0, η0)|X(A∩U,B∩V ;D′ ,G′ ) ≤
Consider the function g : X(A ∩ U,B ∩ V ;D
) −→ C, given by
(6.20) g(z, w) := f(z, w)− f(ζ0, η0).
32 VIÊT-ANH NGUYÊN
Applying the result of Step 1, we can construct a function ĝ ∈ O(X̂(A ∩ U,B ∩
),C) from g in exactly the same way as we obtain f̂ ∈ O(Ŵ ,C) from f.
Moreover, combining (6.5) and (6.20), we see that
(6.21) ĝ = f̂ − f(ζ0, η0) on X̂(A ∩ U,B ∩ V ;D
On the other hand, it follows from formula (6.20), estimate (6.19), and estimate
(6.13) that
|ĝ|bX(A∩U,B∩V ;D′ ,G′) ≤
This, combined with (6.21) and (6.18), implies that
A− lim sup |f̂(z, w)− f(ζ0, η0)|
(ζ0, η0) ≤
Hence, (6.15) follows. In summary, we have shown that A− lim f̂ = f on A× B.
Now it remains to consider (ζ0, η0) ∈ A ×G. Using the last limit and arguing as
in Step 2, one can show that A− lim f̂(ζ0, η0) = f(ζ0, η0). �
Step 4: Proof of the uniqueness of f̂ and (6.1).
Proof of Step 4. To prove the uniqueness of f̂ suppose that ĝ ∈ O(Ŵ ,C) is a
bounded function which admits A-limit f(ζ, η) at all points (ζ, η) ∈ W. Fix an
arbitrary point (z0, w0) ∈ Ŵ , it suffices to show that f̂(z0, w0) = ĝ(z0, w0). Observe
that both functions f̂(z0, ·) and ĝ(z0, ·) are bounded and holomorphic on the δ-level
set of G relative to B :
Gδ,B := {w ∈ G : ω(w,B,G) < 1− ω(z0, A,D)} ,
where δ := ω(z0, A,D). On the other hand, they admit A-limit f(z0, η) at all
points η ∈ B. Consequently, applying Proposition 3.5 and Theorem 3.7 yields that
f̂(z0, ·) = ĝ(z0, ·) on Gδ,B. Hence, f̂(z0, w0) = ĝ(z0, w0).
To prove (6.1) fix an arbitrary point (z0, w0) ∈ Ŵ . For every η ∈ B, applying
Theorem 3.6 to log |f(·, η)| defined on D, we obtain that
(6.22) |f(z0, η)| ≤ |f |
1−ω(z0,A,D)
A×B |f |
ω(z0,A,D)
Applying Theorem 3.6 again to log |f̂(z0, ·)| defined on Gδ,B of the preceeding para-
graph, one gets that
|f̂(z0, w0)| ≤ |f(z0, ·)|
1−ω(w0,B,G)
B |f̂ |
ω(w0,B,G)
Inserting (6.13) and (6.22) into the right hand side of the latter estimate, (6.1)
follows. Hence Step 4 is finished. �
This completes the proof. �
In the sequel we will need the following refined version of Theorem 6.1.
Theorem 6.2. Let D ⊂ Cn, G ⊂ Cm be bounded open sets. D (resp. G) is equipped
with a system of approach regions
Aα(ζ)
ζ∈D, α∈Iζ
(resp.
Aα(η)
η∈G, α∈Iη
). Let
A UNIFIED APPROACH 33
A, A0 (resp. B, B0) be subsets of D (resp. G) such that A0 and B0 are locally
pluriregular and that A0 ⊂ A
∗ and B0 ⊂ B
∗. Put
W := X(A,B;D,G) and W0 := X(A0, B0;D,G).
Then, for every bounded function f : W −→ C which satisfies the following condi-
tions:
• f ∈ Cs(W,C) ∩ Os(W
o,C);
• f |A×B is continuous at all points of (A ∩ ∂D)× (B ∩ ∂G),
there exists a unique bounded function f̂ ∈ O(Ŵ0,C) which admits A-limit f(ζ, η)
at all points (ζ, η) ∈ W0. Moreover,
(6.23) |f̂(z, w)| ≤ |f |
1−ω(z,A0,D)−ω(w,B0,G)
A0×B0
ω(z,A0,D)+ω(w,B0,G)
W , (z, w) ∈ Ŵ0.
Proof. Using the hypotheses and applying Part 1) of Theorem 7.2 below we can ex-
tend f to a locally bounded function (still denoted by) f defined on X(A∗, B∗, D,G)
such that f ∈ Os
o(A∗, B∗, D,G),C
and that f |X(A∗∩D,B∗∩G;D,G) is continuous.
Therefore, the newly defined function f satisfies
(6.24) f(a, b) := lim
f(ak, b),
where (a, b) is an arbitrary point of A∗ × (G∪B∗) and (ak)
k=1 ⊂ A
∗ is an arbitrary
sequence with lim
ak = a. Since f |W is bounded, it follows that the newly defined
function f is also bounded. In virtue of the definition of A∗ and B∗ we have
(6.25) ∂D ∩ A = ∂D ∩A∗ and ∂G ∩ B = ∂G ∩ B∗.
Using the second • in the hypotheses and formula (6.24) we see that f |A∗×B∗ is
continuous at all points all (∂D ∩ A) × (∂G ∩ B). Consequently, arguing as in the
proof of Step 2 of Theorem 6.1 and using (6.25), we can show that f ∈ C
. In summary, the newly defined function f which is defined and bounded on
X(A∗, B∗, D,G) satisfies
(6.26) f ∈ Os
o(A∗, B∗, D,G),C
and f ∈ C
A∗ × B∗,C
Observe that f is only separately continuous on X(A,B;D,G), but it is not nec-
essarily so on the cross X
A∗, B∗, D,G
. However, we will show that one can adapt
the argument of Theorem 6.1 in order to prove Theorem 6.2.
We define f̂ at an arbitrary point (z0, w0) ∈ Ŵ0 as follows: Let ǫ > 0 be such that
ω(z0, A0, D) + ω(w0, B0, G) + 2ǫ < 1.
By Theorem 3.8 and Definition 3.9, there is an ǫ-candidate (φ,Γ) (resp. (ψ,∆)) for
(z0, A0, D) (resp. (w0, B,G)). To conclude the proof we only need to prove that the
function fφ,ψ, defined by
fφ,ψ(t, τ) := f(φ(t), ψ(τ)), (t, τ) ∈ X (Γ,∆;E,E) ,
satisfies the hypotheses of Theorem 4.3. Indeed, having proved this assertion, the
proof will follow along the same lines as those given in Theorem 6.1. This assertion
is again reduced to showing that for each fixed t ∈ Γ, the function fφ,ψ(t, ·) admits
the angular limit f(φ(t), ψ(τ)) for every point τ ∈ ∆. We will prove the last claim.
34 VIÊT-ANH NGUYÊN
Using the first • and Theorem 3.8, we see that for every a ∈ A, the function
f(a, ψ(·)) ∈ O(E,C) admits the angular limit f(a, ψ(τ)) for every point τ ∈ ∆.
Next, using the hypothesis A0 ⊂ A
∗ we may choose a sequence (ak)
k=1 ⊂ A ∩ A
such that lim
ak = φ(t) ∈ A0. Observe from (6.26) that for every k the uniformly
bounded function f(ak, ψ(·)) ∈ O(E,C) admits the angular limit f(ak, ψ(τ)) and
that lim
f(ak, ψ(τ)) = f(φ(t), ψ(τ)) for every point τ ∈ ∆. Consequently, by the
Khinchin–Ostrowski Theorem (see [11, Theorem 4, p. 397]), the above claim follows.
7. Preparatory results
The first result of this section shows that the two definitions of plurisubharmonic
measure ω̃(·, A,D), given respectively in Definition 2.3 and in Subsection 2.1 of [28],
coincide in the case when A ⊂ D.
Proposition 7.1. Let X be a complex manifold and D ⊂ X an open set. D is
equipped with the canonical system A of approach regions. Let A be a subset of D.
Then ω̃(z, A,D) = ω(z, A∗, D).
Proof. Let P ∈ E(A). Then by Definition 2.3, P ⊂ A∗ and P is locally pluriregular.
Hence, P ⊂ (A∗)∗ = A∗. Since P ∈ E(A) is arbitrary, it follows from Definition 2.3
that à is locally pluriregular and à ⊂ A∗. In particular, (Ã)∗ ⊂ A∗ and
(7.1) ω̃(z, A,D) = ω(z, Ã, D) ≥ ω(z, A∗, D).
In the sequel we will show that
(7.2) A∗ ⊂ (Ã)∗.
Taking (7.2) for granted, we have that A∗ = (Ã)∗. Consequently,
ω̃(z, A,D) = ω(z, Ã, D) ≤ ω(z, A∗, D).
This, coupled with (7.1), completes the proof.
To prove (7.2) fix an arbitrary point a ∈ A∗ and an arbitrary but sufficiently small
neighborhood U ⊂ X of a such that U is biholomorphic to a bounded open set in
n, where n is the dimension of X at a. Since A∗ is a Borel subset of D, Theorem
8.5 in [7] provides a subset P ⊂ A∗ ∩ U of type Fσ
10 such that
(7.3) ω(z, P, U) = ω(z, A∗ ∩ U, U), z ∈ U.
Write P =
Pn, where Pn is closed. Observe that Pn ∩ P
n is locally pluriregular,
Pn \ (Pn ∩ P
n) is locally pluripolar and Pn ∩ P
n ⊂ Pn ⊂ A
∗ ∩ P. Consequently,⋃
(Pn ∩ P
n) ⊂ Ã ∩ P and P \
(Pn ∩ P
n) is locally pluripolar. This implies that
ω(z, Ã ∩ U, U) ≤ ω
(Pn ∩ P
n), U
= ω(z, P, U),
10 This means that P is a countable (or finite) union of relatively closed subsets of U.
A UNIFIED APPROACH 35
where the equality holds by applying Lemma 3.5.3 in [18] and by using the fact that
U is biholomorphic to a bounded open set in Cn. This, combined with (7.3) and the
assumption a ∈ A∗, implies that ω(a, Ã ∩ U, U) = 0. Thus (7.2) follows. �
The main purpose of this and the next sections is to generalize Theorem 6.1 to the
case where the “target space” Z is an arbitrary complex analytic space possessing
the Hartogs extension property.
Theorem 7.2. Let D ⊂ Cn, G ⊂ Cm be two bounded open sets. D (resp. G) is
equipped with the canonical system of approach regions. Let Z be a complex analytic
space possessing the Hartogs extension property. Let A (resp. B) be a subset of D
(resp. G). Put W := X(A,B;D,G) and Ŵ := X̂(A,B;D,G). Let f ∈ Os(W
o, Z).
1) Then f extends to a mapping (still denoted by) f defined on Xo(A∪A∗, B ∪
B∗;D,G) such that f is separately holomorphic on Xo(A∪A∗, B∪B∗;D,G)
and that f |Xo(A∗,B∗;D,G) is continuous.
2) Suppose in addition that A and B are locally pluriregular. Then f extends
to a unique mapping f̂ ∈ O(Ŵ , Z) such that f̂ = f on W.
Proof. This result has already been proved in Théorème 2.2.4 in [5] starting from
Proposition 3.2.1 therein. In the latter proposition Alehyane and Zeriahi make use
of the method of doubly orthogonal bases of Bergman type. We can avoid this
method by simply replacing every application of this proposition by Theorem 6.1.
Keeping this change in mind and using Proposition 7.1, the remaining part of the
proof follows along the same lines as that of Théorème 2.2.4 in [5]. �
Theorem 7.3. Let D, G be complex manifolds, and let A ⊂ D, B ⊂ G be open
subsets. Let Z be a complex analytic space possessing the Hartogs extension property.
Put W := X(A,B;D,G) and Ŵ := X̂(A,B;D,G). Then for any mapping f ∈
Os(W,Z), there is a unique mapping f̂ ∈ O(Ŵ , Z) such that f̂ = f on W.
Proof. It has already been proved in Theorem 5.1 of [28]. The only places where the
method of doubly orthogonal bases of Bergman type is involved is the applications
of Théorème 2.2.4 in [5]. As we already pointed out in Theorem 7.2, one can avoid
this method by using Theorem 6.1 instead. �
We are ready to formulate a slight generalization of Theorems 6.2 and 7.2.
Theorem 7.4. Let D ⊂ Cn, G ⊂ Cm be bounded open sets. D (resp. G) is equipped
with a system of approach regions
Aα(ζ)
ζ∈D, α∈Iζ
(resp.
Aβ(η)
η∈G, β∈Iη
). Let A
and A0 (resp. B and B0) be two subsets of D (resp. G) such that A0 and B0 are
locally pluriregular and that A0 ⊂ A
∗ and B0 ⊂ B
∗. Let Z be a complex analytic
space possessing the Hartogs extension property. Put
W := X(A,B;D,G) and W0 := X(A0, B0;D,G).
Then, for every bounded mapping f : W −→ Z which satisfies the following condi-
tions:
• f ∈ Cs(W,Z) ∩ Os(W
o, Z);
36 VIÊT-ANH NGUYÊN
• f |A×B is continuous at all points of (A ∩ ∂D)× (B ∩ ∂G),
there exists a unique bounded mapping f̂ ∈ O(Ŵ0,C) which admits A-limit f(ζ, η)
at all points (ζ, η) ∈ W0.
Proof. Since f is bounded, one may find an open neighborhood U of f(W ) in Z and
a holomorphic embedding φ of U into the polydisc Ek of Ck such that φ(U) is an
analytic set in Ek. Now we are able to apply Theorem 6.2 to the mapping φ ◦ f :
W −→ Ck. Consequently, one obtains a unique bounded mapping F ∈ O(Ŵ ,Ck)
which admits A-limit (φ ◦ f)(ζ, η) at all points (ζ, η) ∈ W. Using estimate (6.23)
one can show that F ∈ O(Ŵ , Ek). Now using Theorem 3.7 it is not difficult to see
that F (Ŵ ) ⊂ φ(U). Consequently, one can define the desired extension mapping f̂
as follows:
f̂(z, w) := (φ−1 ◦ F )(z, w), (z, w) ∈ Ŵ .
The following Uniqueness Theorem for holomorphic mappings generalizes Theo-
rem 3.7.
Theorem 7.5. Let X be a complex manifold, D ⊂ X an open subset and Z a
complex analytic space. Suppose that D is equipped with a system of approach regions(
Aα(ζ)
ζ∈D, α∈Iζ
. Let A ⊂ D be a locally pluriregular set. Let f1, f2 : D∪A −→ Z
be locally bounded mappings such that f1|D, f2|D ∈ O(D,Z) and A− lim f1 = A −
lim f2 on A. Then f1(z) = f2(z) for all z ∈ D such that ω(z, A,D) 6= 1.
We leave the proof to the interested reader. Finally, we conclude this section with
the following Gluing Lemma.
Lemma 7.6. Let D and G be open subsets of some complex manifolds and Z a com-
plex analytic space. Suppose that D (resp. G) is equipped with a system of approach
regions
Aα(ζ)
ζ∈D, α∈Iζ
(resp.
Aβ(η)
η∈G, β∈Iη
). Let (Dk)
(resp. (Gk)
a family of open subsets of D (resp. G) equipped with the induced system of approach
regions. Let (Pk)
(resp. (Qk)
) be a family of locally pluriregular subsets of
D (resp. G). Suppose, in addition, that
(i) Pk ⊂ Pk0, Dk0 ⊂ Dk, and Pk is locally pluriregular relative to Dk0 . Similarly,
Qk ⊂ Qk0, Gk0 ⊂ Gk, and Qk is locally pluriregular relative to Gk0 .
(ii) There are a family of locally bounded mappings (fk)
such that fk :
o (Pk,Qk;Dk,Gk) −→ Z verifies fk = fk0 on X
o (Pk,Qk;Dk0,Gk0) ,
and a family of holomorphic mappings (f̂k)
such that f̂k ∈
X̂ (Pk,Qk;Dk,Gk) , Z
, and
(A− lim f̂k)(z, w) = fk(z, w), (z, w) ∈ X
o (Pk,Qk;Dk0,Gk0) .
(iii) There are open subsets U of D and V of G such that ω̃(z,Pk,Dk0) +
ω̃(w,Qk,Gk0) < 1 for all (z, w) ∈ U × V and k ≥ k0.
Then f̂k(z, w) = f̂k0(z, w) for all (z, w) ∈ U × V and k ≥ k0.
A UNIFIED APPROACH 37
Proof. By (iii), we have that
(7.4) U × V ⊂ H := X̂ (Pk,Qk;Dk0,Gk0) .
On the other hand, using (i) we see that
(7.5) H ⊂ X̂ (Pk,Qk;Dk,Gk) ∩ X̂ (Pk0 ,Qk0;Dk0,Gk0) .
Fix arbitrary (z0, w0) ∈ H and k ≥ k0. Observe that both mappings f̂k(·, w0) and
f̂k0(·, w0) are defined on {z ∈ Dk0 : ω(z,Pk,Dk0) < 1− ω(w0,Qk,Gk0)} . Using (ii)
and Proposition 3.5, we may apply Theorem 7.5 to these mappings and conclude
that f̂k(z0, w0) = f̂k0(z0, w0). �
8. Local and semi-local versions of Theorem A
The aim of this section is to generalize Theorem 6.2 to some cases where the
“target space” Z is a complex analytic space possessing the Hartogs extension prop-
erty. Our philosophy is the following: we first apply Theorem 6.2 locally in order to
obtain various local extension mappings, then we glue them together. The gluing
process needs the following
Definition 8.1. Let M be a complex manifold and Z a complex space. Let (Uj)j∈J
be a family of open subsets of M, and (fj)j∈J a family of mappings such that fj ∈
O(Uj , Z). We say that the family (fj)j∈J is collective if, for any j, k ∈ J, fj = fk
on Uj ∩ Uk. The unique holomorphic mapping f :
Uj −→ Z, defined by f := fj
on Uj, j ∈ J, is called the collected mapping of (fj)j∈J .
We arrive at the following local version of Theorem A.
Theorem 8.2. Let D ⊂ Cp, G ⊂ Cq be bounded open sets and Z a complex analytic
space possessing the Hartogs extension property. D (resp. G) is equipped with a
system of approach regions
Aα(ζ)
ζ∈D, α∈Iζ
(resp.
Aβ(η)
η∈G, β∈Iη
). Let A, A0
(resp. B, B0) be subsets of D (resp. G) such that A0 and B0 are locally pluriregular
and that A0 ⊂ A
∗ and B0 ⊂ B
∗. Put
W := X(A,B;D,G) and W0 := X(A0, B0;D,G).
Then, for every mapping f : W −→ Z which satisfies the following conditions:
• f ∈ Cs(W,Z) ∩ Os(W
o, Z);
• f is locally bounded along X
A ∩ ∂D,B ∩ ∂G;D,G
• f |A×B is continuous at all points of (A ∩ ∂D)× (B ∩ ∂G),
there exists a unique mapping f̂ ∈ O(Ŵ0, Z) which admits A-limit f(ζ, η) at all
points (ζ, η) ∈ W0.
Theorem 8.2 generalizes Theorem 6.2 to the case where the “target space” Z is an
arbitrary complex analytic space possessing the Hartogs extension property. Since
the proof is somewhat technical, the reader may skip it at the first reading.
38 VIÊT-ANH NGUYÊN
Proof. Recall that for a ∈ Ck and r > 0, B(a, r) denotes the open ball centered at a
with radius r. For 0 < δ < 1 and 0 < r put
Da,δ,r := {z ∈ D ∩ B(a, r) : ω(A0 ∩ B(a, r), D ∩ B(a, r)) < δ} , a ∈ A0,
Gb,δ,r := {w ∈ G ∩ B(b, r) : ω(B0 ∩ B(b, r), G ∩ B(b, r)) < δ} , b ∈ B0.
(8.1)
Applying Part 1) of Theorem 7.2 and using the hypotheses on f, we see that f
extends to a mapping defined on X(A ∪A∗, B ∪B∗;D,G) such that f is separately
holomorphic on Xo(A∪A∗, B ∪B∗;D,G) and that f |X(A∗,B∗;D,G) is locally bounded.
Therefore, using the compactness of A0 and B0, one may find a real number r0 > 0
such that
(8.2) fa,b := f |X(A0∩B(a,r),B0∩B(b,r);D∩B(a,r),G∩B(b,r))
is bounded for all 0 < r ≤ r0 and a ∈ A0, b ∈ B0. Applying Theorem 7.4 to fa,b ,
one obtains a mapping
(8.3) f̂a,b ∈ O
A0 ∩ B(a, r), B0 ∩ B(b, r);D ∩ B(a, r), G ∩ B(b, r)
which admits A-limit f on X
A0 ∩ B(a, r), B0 ∩ B(b, r);D ∩ B(a, r), G ∩ B(b, r)
Fix 0 < δ0 <
. Then it follows from (8.1) that for 0 < r ≤ r0, a ∈ A0, b ∈ B0.
Da,δ0,r ×Gb,δ0,r ⊂ X̂
A0 ∩ B(a, r), B0 ∩ B(b, r);D ∩ B(a, r), G ∩ B(b, r)
This, combined with (8.3), implies that
(8.4) f̂a,b ∈ O (Da,δ0,r ×Gb,δ0,r, Z) , 0 < r ≤ r0, a ∈ A0, b ∈ B0.
Next we fix a finite covering (A0 ∩ B(am, r))
m=1 of A0 and (B0 ∩ B(bn, r))
n=1 of B0,
where (am)
m=1 ⊂ A0 and (bn)
n=1 ⊂ B0.
We divide the proof into two steps.
Step 1: Fix an open set G
⋐ G. Then there exists r1: 0 < r1 < r0 with the following
property: for every a ∈ A0 there exist an open subset Aa of D and a mapping
f̂ = f̂a ∈ O
Gbn,δ0,r0
such that
f̂(z, w) = f̂a,bn(z, w), (z, w) ∈ (Aa ∩Da,δ0,r0)×Gbn,δ0,r0, n = 1, . . . , N ;
and that Aa is of the form {z ∈ D∩B(a, r1) : ω(z, A0∩B(a, r1), D∩B(a, r1)) < δa}
for some 0 < δa < δ0.
Proof of Step 1. Fix an arbitrary point a0 ∈ A0. First we claim that there are a
sufficiently small number r1 : 0 < r1 < r0 and a finite number of open subsets
n=1 of G with the following properties:
(a) V1 = Gb1,δ0,r0 and (Gbn,δ0,r0)
⊂ (Vn)
n=1 (see the notation in (8.1));
(b) f |(A0∩B(a,r1))×Vn is bounded, n = 1, . . . , N0;
(c) G
A UNIFIED APPROACH 39
(d) Vn ∩ Vn+1 6= ∅, n = 1, . . . , N0 − 1.
Indeed, we first start with the test r1 := r0 and N0 := N and (Vn)
n=1 :=
(Gbn,δ0)
. In virtue of (8.2) we see that our choice satisfies (a)–(b). If (c)–(d)
are satisfied then we are done. Otherwise, we will make the following procedure.
Fix a point w0 ∈ G
. For n = 1, . . . , N, let γn : [0, 1] → G be a continuous
one-to-one map such that
γn(0) = w0 and γn(1) ∈ Gbn,δ0,r0.
Since f is locally bounded, there exist sufficiently small numbers r1, s : 0 < r1 ≤
r0 and 0 < s such that f |(A0∩B(a,r1))×B(w,s) is bounded for all a ∈ A0 and w ∈
γn([0, 1]). Therefore, we may add to the starting collection (Vn)
n=1 some balls
of the form B(w, s), where w ∈ G
γn([0, 1]), and the new collection (Vn)
still satisfies (a)–(b). Now it remains to show that by adding a finite number of
suitable balls B(w, s), (c)–(d) are also satisfied. But this assertion follows from an
almost obvious geometric argument. In fact, we may renumber the collection (Vn)
if necessary. Hence, the above claim has been shown.
Using (c)–(d) above we may fix open sets Un ⋐ Vn for n = 1, . . . , N0, such that
(8.5) G
Un and Un ∩ Un−1 6= ∅, 1 < n ≤ N0.
In what follows we will find the desired set Aa0 and the desired holomorphic mapping
f̂ after N0 steps. Namely, after the n-th step (1 ≤ n ≤ N0), we construct an
open subset An of D in the form Da0,δn,r1 for a suitable δn > 0, and a mapping
f̂n ∈ O
. Finally, we obtain Aa0 := AN0 and f̂ := f̂N0. Now we
carry out this construction.
In the first step, using (8.1), (8.3), (8.4) and (a), we define
δ1 := δ0, A1 := Da0,δ1,r1 and f̂1(z, w) := f̂a0,b1(z, w), (z, w) ∈ A1 × U1.
Suppose that we have constructed an open subset An−1 of D and a mapping f̂n−1 ∈
An−1 ×
( n−1⋃
for some n : 2 ≤ n ≤ N0. We wish to construct an open
subset An of D and a mapping f̂n ∈ O
. There are two cases to
consider.
Case Vn = Gbm,δ0 for some 1 ≤ m ≤ N.
In this case let δn := δn−1 and An := An−1 = Da0,δn−1,r1, and
f̂n :=
f̂n−1, on An ×
( n−1⋃
f̂a0,bm, on An × Un.
40 VIÊT-ANH NGUYÊN
Case Vn 6∈
Gbm,δ0
By (8.5) fix a nonempty open set K ⋐ Un ∩ Un−1. Then by the induction, f̂n−1 ∈
O (An−1 ×K,Z) . Recall from (b) that f : (A0 ∩ B(a0, r1))× Vn −→ Z is bounded.
Since f is locally bounded, by decreasing r1 > 0 (if necessary) we may assume that
g := f |
X(A0∩B(a0,r1),K;D∩B(a0,r1),Vn)
is bounded. Applying Theorem 7.4 to g, we obtain
ĝ ∈ O
X̂(A0 ∩ B(a0, r1), K;D ∩ B(a0, r1), Vn), Z
which extends g. Since Un ⋐ Vn, we may choose δn such that 0 < δn < 1 −
ω(w,K, Vn). Using this and (8.1), it follows that
Da0,δn,r1 × Un ⊂ X̂(A0 ∩ B(a0, r1), K;D ∩ B(a0, r1), Vn).
Therefore, let An := Da0,δn,r1 and define
f̂n :=
f̂n−1, on An ×
( n−1⋃
ĝ, on An × Un.
This completes our construction in the n-step. Finally, we put Aa0 := AN0 and
f̂a0 := f̂N0 . Using this and (8.3) and (8.5) and (a), the desired conclusion of Step 1
follows. �
Step 2: Completion of the proof.
Proof of Step 2. Fix a sequence of relatively compact open subsets (D
k=1 of D
(resp. (G
k=1 of G) such that D
k ր D and G
k ր G as k ր ∞. Put
(8.6) Dk := D
Dam,δ0,r0, Gk := G
Gbn,δ0,r0, k ≥ 1.
Using the result of Step 1, we may find, for every k, a number 0 < rk < r0 with the
following properties:
• for every a ∈ A0, there is 0 < δa,k < δ0 such that by considering the open set
Aa,k := {z ∈ D ∩ B(a, rk) : ω (z, A0 ∩ B(a, rk), D ∩ B(a, rk)) < δa,k}
one can find a mapping f̂a,k ∈ O (Aa,k ×Gk, Z) satisfying
(8.7) f̂a,k = f̂a,bn on (Aa,k ∩Da,δ0,rk)×Gbn,δ0,rk , n = 1, . . . , N ;
• for every b ∈ B, there is 0 < δb,k < δ0 such that by considering the open set
Bb,k := {w ∈ G ∩ B(b, rk) : ω (z, B0 ∩ B(b, rk), G ∩ B(b, rk)) < δb,k}
one can find a mapping f̂b,k ∈ O (Dk ×Bb,k, Z) satisfying
(8.8) f̂b,k = f̂am,b on Dam,δ0,rk × (Bb,k ∩Gb,δ0,rk), m = 1, . . . ,M.
A UNIFIED APPROACH 41
Next using the compactness of A0 and B0, one may find, for every k, two fi-
nite coverings (A0 ∩ B(a
m, rk))
of A0 and (B0 ∩ B(bn′ , rk))
of B0, where
(am′ )
⊂ A0 and (bn′ )
⊂ B0. Put
(8.9) Ak :=
′ ,k and Bk :=
′ ,k, k ≥ 1.
In virtue of (8.6)–(8.9) and (8.2)–(8.4), the family (f̂a
′ ,k)
is col-
lective for every k ≥ 1. Let
(8.10) f̂k ∈ O
X(Ak, Bk;Dk, Gk), Z
denote the collected mapping of this family.
Next, we show that
(8.11)
ω(z, A0, Dk) = ω(z, A0, D) and lim
ω(w,B0, Gk) = ω(z, B0, G), z ∈ D, w ∈ G.
It is sufficient to prove the first identity in (8.11) since the proof of the second one
is similar. Observe that there is u ∈ PSH(D) such that ω(·, A0, Dk) ց u as k ր ∞
and u ≥ ω(·, A0, D) on D. Therefore, the proof of (8.11) will be complete if one can
show that u ≤ ω(·, A0, D) on D.
To this end observe that for every a ∈ A0 there is 1 ≤ m ≤ M such that
a ∈ B(am, r0). Consequently, using (8.6),
(A− lim sup u)(a) ≤
A− lim supω(·, A0 ∩ B(am, r0), Dam,δ0,r0)
(a) = 0,
where the equality follows from an application of Proposition 3.5. This, combined
with the obvious inequality u ≤ 1, implies that u ≤ ω(·, A0, D). Hence, (8.11)
follows.
We are now in the position to define the desired extension mapping f̂ . Indeed,
one glues
given in (8.10) together to obtain f̂ in the following way
f̂ := lim
f̂k on Ŵ0.
One needs to check that the last limit exists and possesses all the required properties.
In virtue of (8.7)–(8.11), and the Gluing Lemma 7.6, the proof will be complete if
we can show the following
Claim. For every (z0, w0) ∈ Ŵ0, there are an open neighborhood U × V of (z0, w0)
and δ0 > 0 such that the hypotheses of Lemma 7.6 is fulfilled with
D := D, G := G, Pk := Ak, Qk := Bk, Dk := Dk, Gk := Gk, k ≥ 1.
To this end let
δ0 :=
1− ω(z0, A0, D)− ω(w0, B0, G)
and let U × V be an open neighborhood of (z0, w0) such that
ω(z, A0, D) + ω(w,B0, G) < ω(z0, A0, D) + ω(w0, B0, G) + δ0.
42 VIÊT-ANH NGUYÊN
Then using these inequalities and (8.11), we see that there is a sufficiently big q0 ∈ N
such that for q0 ≤ q ≤ p and (z, w) ∈ U × V,
ω(z, Ap, Dq) + ω(w,Bp, Dq) ≤ ω(z, A0, Dq) + ω(w,B0, Gq)
≤ ω(z, A0, D) + ω(w,B0, G) + δ0 < 1.
This proves the above claim. Hence, the proof of the theorem is finished. �
Now we are able to formulate the following semi-local result.
Theorem 8.3. Let D be an open subset of a complex manifold and G ⊂ Cm a
bounded open set and Z a complex analytic space possessing the Hartogs extension
property. D (resp. G) is equipped with the canonical system of approach regions
(resp. the system of approach regions
Aβ(η)
η∈G, α∈Iη
). Let A be an open subset of
D and let B, B0 be subsets of G such that B0 is locally pluriregular and B0 ⊂ B
W := X(A,B;D,G) and W0 := X(A,B0;D,G).
Then, for every mapping f : W −→ Z which satisfies the following conditions:
• f ∈ Cs(W,Z) ∩ Os(W
o, Z);
• f is locally bounded along D × (B ∩ ∂G),
there exists a unique mapping f̂ ∈ O(Ŵ0, Z) which admits A-limit f(ζ, η) at all
points (ζ, η) ∈ W0.
Proof. First, applying Part 1) of Theorem 7.2 and using the hypotheses on f, we
see that f extends to a mapping (still denoted by) f defined on X(A,B ∪B∗;D,G)
such that f is separately holomorphic on Xo(A,B ∪B∗;D,G) and that f |X(A,B∗;D,G)
is locally bounded.
We define f̂ at a point (z0, w0) ∈ Ŵ0 as follows: Let ǫ > 0 be such that
(8.12) ω(z0, A,D) + ω(w0, B0, G) + ǫ < 1.
By Theorem 3.1 and Proposition 3.4, there is a holomorphic disc φ ∈ O(E,D) such
that φ(0) = z0 and
(8.13) 1−
·mes(φ−1(A) ∩ ∂E) < ω(z0, A,D) + ǫ.
Moreover, using the hypotheses, we see that the mapping fφ, defined by
(8.14) fφ(t, w) := f(φ(t), w), (t, w) ∈ X
φ−1(A) ∩ ∂E,B;E,G
satisfies the hypotheses of Theorem 8.2. By this theorem, let f̂φ be the unique
mapping in X̂ (φ−1(A) ∩ ∂E,B0;E,G) such that
(8.15) (A− lim f̂φ)(t, w) = fφ(t, w), (t, w) ∈ X
φ−1(A) ∩ ∂E,B0;E,G
In virtue of (8.12)–(8.13), (0, w0) ∈ X̂ (φ
−1(A) ∩ ∂E,B0;E,G) . Then the value at
(z0, w0) of the desired extension mapping f̂ is given by
f̂(z0, w0) := f̂φ(0, w0).
A UNIFIED APPROACH 43
Using this and (8.14)–(8.15), and arguing as in Part 2) of Lemma 4.5, one can show
that f̂ is well-defined on Ŵ0.
To show that f̂ is holomorphic, one argues as in Step 1 of the proof of Theorem
6.1. To show that f̂ admits A-limit f(ζ, η) at all points (ζ, η) ∈ W0 and that it is
uniquely defined, one proceeds as in Step 2–4 of the proof of Theorem 6.1 making
the obviously necessary changes and adaptations. Hence, the proof is finished. �
9. The proof of Theorem A
First we need a variant of Definition 2.3. For a set A ⊂ D, Let Ẽ(A) be the set of
all elements P ∈ E(A) with the property that there is an open neighborhood U ⊂ X
of P such that U is biholomorphic to a domain in some Cn. Then it can be checked
(9.1) Ã :=
P∈eE(A)
This identity will allow us to pass from “local informations” to “global extensions”.
For the proof we need to develop some preparatory results.
In virtue of (9.1), for any P ∈ Ẽ(A) (resp. Q ∈ Ẽ(B)) fix an open neighborhood
UP of P (resp. VQ of Q) such that UP (resp. VQ) is biholomorphic to a domain in
dP (resp. in CdQ), where dP (resp. dQ) is the dimension of D (resp. G) at points
of P (resp. Q). For any 0 < δ ≤ 1
define
UP,δ := {z ∈ UP : ω(z, P, UP ) < δ} , P ∈ Ẽ(A),
VQ,δ := {w ∈ VQ : ω(w,Q, VQ) < δ} , Q ∈ Ẽ(B),
Aδ :=
P∈eE(A)
UP,δ, Bδ :=
Q∈eE(B)
VQ,δ,
Dδ := {z ∈ D : ω̃(z, A,D) < 1− δ} , Gδ := {w ∈ G : ω̃(w,B,G) < 1− δ} .
(9.2)
Lemma 9.1. We keep the above notation. Then:
(1) For every ζ ∈ Ã and α ∈ Iζ, there is an open neighborhood U of ζ such that
U ∩ Aα(ζ) ⊂ Aδ.
(2) Aδ is an open subset of D and Aδ ⊂ D1−δ ⊂ Dδ.
(3) ω̃(z, A,D)− δ ≤ ω(z, Aδ, D) ≤ ω̃(z, A,D), z ∈ D.
Proof of Lemma 9.1. To prove Part (1) fix, in view of (9.1)–(9.2), P ∈ Ẽ(A),
ζ ∈ P and α ∈ Iζ . Using the definition of local pluriregularity, we see that
lim sup
z→ζ, z∈Aα(ζ)
ω(z, P, UP ) = 0. Hence, Part (1) follows.
The assertion that Aδ is open follows immediately from (9.2). Since 0 < δ ≤
the second inclusion in Part (2) is clear. To prove the first inclusion let z be an
arbitrary point of Aδ. Then there is P ∈ Ẽ(A) such that z ∈ UP,δ. Using (9.2) and
Definition 2.3 we obtain
(9.3) ω̃(z, A,D) = ω(z, Ã, D) ≤ ω(z, P, UP ) < δ.
44 VIÊT-ANH NGUYÊN
Hence, z ∈ D1−δ, which in turn implies that Aδ ⊂ D1−δ.
It follows from Part (1) that
ω(z, Aδ, D) ≤ ω(z, Ã, D) = ω̃(z, A,D), z ∈ D,
which proves the second estimate in Part (3). To complete the proof let P ∈ Ẽ(A)
and 0 < δ ≤ 1
. We deduce from (9.3) that ω̃(z, A,D) − δ ≤ 0 for z ∈ UP,δ. Hence,
by (9.2),
ω̃(z, A,D)− δ ≤ 0, z ∈ Aδ.
On the other hand, ω̃(z, A,D) − δ < 1, z ∈ D. Recall from Part (2) that Aδ is an
open subset of Dδ. Consequently, the first estimate of Part (3) follows. �
Now we are able to to prove Theorem A in the following special case.
Proposition 9.2. Let D be an open subset of a complex manifold and G a bounded
open subset of Cm and Z a complex analytic space possessing the Hartogs extension
property. D (resp. G) is equipped with a system of approach regions
Aα(ζ)
ζ∈D, α∈Iζ
(resp.
Aβ(η)
η∈G, β∈Iη
). Let A be a subset of D, let B, B0 be subsets of G such
that B0 is locally pluriregular and B0 ⊂ B
∗. Put
W := X(A,B;D,G), W0 := X(A,B0;D,G), W̃
(D ∪ Ã)× B0
Ã× (G ∪ B0)
Ŵ o := {(z, w) ∈ D ×G : ω̃(z, A,D) + ω(w,B0, G) < 1} .
Then, for every mapping f : W −→ Z which satisfies the following conditions:
• f ∈ Cs(W,Z) ∩ Os(W
o, Z);
• f is locally bounded along X
A ∩ ∂D,B ∩ ∂G;D,G
• f |A×B is continuous at all points of (A ∩ ∂D)× (B ∩ ∂G),
there exists a unique mapping f̂ ∈ O(Ŵ o, Z) which admits A-limit f(ζ, η) at all
points (ζ, η) ∈ W̃ o.
Proof of Proposition 9.2. First, applying Part 1) of Theorem 7.2 and using the
hypotheses on f, we see that f extends to a mapping (still denoted by f) defined on
X(A ∪ A∗, B ∪ B∗;D,G) such that f is separately holomorphic on Xo(A ∪ A∗, B ∪
B∗;D,G) and that f |X(A∗,B∗;D,G) is locally bounded.
For each P ∈ Ẽ(A), UP (resp. G) is biholomorphic to an open set in C
dP (resp.
in Cm). Consequently, the mapping fP := f |X(P ,B;UP ,G) satisfies the hypotheses of
Theorem 8.2. Hence, we obtain a unique mapping f̂P ∈ O
X̂ (P,B0;UP , G) , Z
(9.4) (A− lim f̂P )(z, w) = fP (z, w) = f(z, w), (z, w) ∈ X (P,B0;UP , G) .
Let 0 < δ ≤ 1
and G
δ := {w ∈ G : ω(w,B0, G) < 1 − δ}. We will show that the
family
f̂P |UP,δ×G
P∈eE(A)
is collective in the sense of Definition 8.1, where UP,δ is
given in (9.2).
A UNIFIED APPROACH 45
To prove this assertion let P1, P2 be arbitrary elements of Ẽ(A). By (9.4), we have
(9.5)
(A− lim f̂P1)(z, w) = f(z, w) = (A− lim f̂P2)(z, w), (z, w) ∈ (UP1 ∩ UP2)× B0.
The assertion is reduced to showing that
(9.6) f̂P1(z, w) = f̂P2(z, w), (z, w) ∈ X̂ (P1, B0;UP1, G) ∩ X̂ (P2, B0;UP2, G) .
To this end fix (z0, w0) ∈ X̂ (P1, B0;UP1, G) ∩ X̂ (P2, B0;UP2 , G) . Observe that both
mappings w 7→ f̂P1(z0, w) and w 7→ f̂P2(z0, w) belong to O(G, Z), where G is the
connected component which contains w0 of the following open set{
w ∈ G : ω(w,B0, G) < 1− max
j∈{1,2}
ω(z0, Pj, Uj)
Applying Theorem 7.5 to these mappings using (9.5), Proposition 3.5 and (9.6), the
above assertion follows.
In virtue of (9.2) let
(9.7)
fδ ∈ O(Aδ ×G
δ, Z)
denote the collected mapping of the family
f̂P |UP,δ×G
P∈eE(A)
. In virtue of (9.4)
and (9.7), we are able to define a new mapping f̃δ on X
Aδ, B;D,G
as follows
f̃δ :=
fδ, on Aδ ×G
f, on D × B.
Using this and (9.4)–(9.7), we see that
(9.8) A− lim f̃δ = f on X(A ∩ Ã, B0;D,G
Since Aδ is an open subset of X and G
δ is a bounded open set in C
m, we are able to
apply Theorem 8.3 to f̃δ in order to obtain a mapping f̂δ ∈ O
Aδ, B0;D,G
such that
(9.9) A− lim f̂δ = f̃δ on X(Aδ, B0;D,G
We are now in a position to define the desired extension mapping f̂ . Indeed, one
glues
0<δ≤ 1
together to obtain f̂ in the following way
f̂ := lim
on Ŵ o.
One needs to check that the last limit exists and possesses all the required properties.
In virtue of (9.8)–(9.9) and Lemma 7.6, the proof will be complete if one can show
that for every (z0, w0) ∈ Ŵ
o, there are an open neighborhood U × V of (z0, w0) and
δ0 > 0 such that hypothesis (iii) of Lemma 7.6 is fulfilled with
D := D, G := G, Pk := A 1
, Qk := B0, Dk := D, Gk := G
, k > 2.
To this end let
δ0 :=
1− ω̃(z0, A,D)− ω(w0, B0, G)
46 VIÊT-ANH NGUYÊN
and let U × V be an open neighborhood of (z0, w0) such that
ω̃(z, A,D) + ω(w,B0, G) < ω̃(z0, A,D) + ω(w0, B0, G) + δ0.
Then for k > 1
and for (z, w) ∈ U ×V, using the last inequality, and applying Part
(3) of Lemma 9.1 and Proposition 3.5, we see that
ω̃(z, A 1
, D) + ω(w,B0, G
) ≤ ω̃(z, A,D) +
ω(w,B0, G)
1− δ0
ω̃(z, A,D) + ω(w,B0, G)
1− δ0
This proves the above assertion. Hence, the proof of the proposition is finished. �
We now arrive at
Proof of Theorem A. First, applying Part 1) of Theorem 7.2 and using the
hypotheses on f, we see that f extends to a mapping (still denoted by) f defined on
X(A ∪ A∗, B ∪ B∗;D,G) such that f is separately holomorphic on Xo(A ∪ A∗, B ∪
B∗;D,G) and that f |X(A∗,B∗;D,G) is locally bounded.
For each P ∈ Ẽ(A), UP is biholomorphic to an open set in C
dP . Consequently, the
mapping fP := f |X(P,B;UP ,G) satisfies the hypotheses of Proposition 9.2. Hence, we
obtain a unique mapping f̂P ∈ O
o (P,B;UP , G) , Z
11 such that
(9.10) (A− lim f̂P )(z, w) = f(z, w), (z, w) ∈ X
P, B̃ ∩B;UP , G
Let 0 < δ ≤ 1
. Using (9.10) and arguing as in the proof of Proposition 9.2, we
may collect the family
f̂P |UP,δ×Gδ
P∈eE(A)
in order to obtain the collected mapping
f̃Aδ ∈ O(Aδ ×Gδ, Z).
Similarly, for each Q ∈ Ẽ(B), one obtains a unique mapping f̂Q ∈
o (A,Q;D, VQ) , Z
12 such that
(9.11) (A− lim f̂Q)(z, w) = f(z, w), (z, w) ∈ X
A ∩ Ã, Q;D, VQ
Moreover, one can collect the family
f̂Q|Dδ×VQ,δ
Q∈eE(B)
in order to obtain the col-
lected mapping f̃Bδ ∈ O(Dδ × Bδ, Z).
Next, we prove that
(9.12) f̃Aδ = f̃
δ on Aδ × Bδ.
Indeed, in virtue of (9.10)–(9.11) it suffices to show that for any P ∈ Ẽ(A) and
Q ∈ Ẽ(B) and any 0 < δ ≤ 1
(9.13) f̂P (z, w) = f̂Q(z, w), (z, w) ∈ UP,δ × VQ,δ.
Observe that in virtue of (9.10)–(9.11) one has that
(A− lim f̂P )(z, w) = (A− lim f̂Q)(z, w) = f(z, w), (z, w) ∈ X (P,Q;UP , VQ) .
11 Here X̂o (P,B;UP , G) := {(z, w) ∈ UP ×G : ω(z, P, UP ) + ω̃(w,B,G) < 1} .
12 Here X̂o (A,Q;D,VQ) := {(z, w) ∈ D × VQ : ω̃(z, A,D) + ω(w,Q, VQ) < 1} .
A UNIFIED APPROACH 47
Recall that UP (resp. VQ) is biholomorphic to a domain in C
dP (resp. CdQ). Con-
sequently, applying the uniqueness of Theorem 8.2 yields that
f̂P (z, w) = f̂Q(z, w), (z, w) ∈ X̂ (P,Q;UP , VQ) .
Hence, the proof of (9.13) and then the proof of (9.12) are finished.
In virtue of (9.12), we are able to define a new mapping f̃δ :
o (Aδ, Bδ;Dδ, Gδ) −→ Z as follows
(9.14) f̃δ :=
f̃Aδ , on Aδ ×Gδ,
f̃Bδ , on Dδ ×Bδ.
Using formula (9.14) it can be readily checked that f̃δ ∈ Os
o (Aδ, Bδ;Dδ, Gδ) , Z
Since we know from Part (2) of Lemma 9.1 that Aδ (resp. Bδ) is an open subset
of Dδ (resp. Gδ), we are able to apply Theorem 7.3 to f̃δ for every 0 < δ ≤
Consequently, one obtains a unique mapping f̂δ ∈ O
X̂ (Aδ, Bδ;Dδ, Gδ) , Z
(9.15) f̂δ = f̃δ on X
o (Aδ, Bδ;Dδ, Gδ) .
It follows from (9.10)–(9.11) and (9.14)–(9.15) that
(9.16) A− lim f̂δ = f on X
A ∩ Ã, B ∩ B̃;Dδ, Gδ
In addition, for any 0 < δ ≤ δ0 ≤
, and any (z, w) ∈ Aδ × Bδ, there is P ∈ Ẽ(A)
such that z ∈ UP,δ0. Therefore, it follows from the construction of f̃
δ , (9.14) and
(9.15) that
f̂δ(z, w) = f̂P (z, w) = f̂δ0(z, w).
This proves that f̂δ = f̂δ0 on Aδ × Bδ for 0 < δ ≤ δ0 ≤
. Hence,
(9.17) f̂δ = f̂δ0 on X(Aδ, Bδ;Dδ0, Gδ0), 0 < δ ≤ δ0 ≤
We are now in a position to define the desired extension mapping f̂ .
f̂ := lim
To prove that f̂ satisfies the desired conclusion of the theorem one proceeds as in
the end of the proof of Proposition 9.2. In virtue of (9.16)–(9.17) and Lemma 7.6,
the proof will be complete if we can verify that for every (z0, w0) ∈ Ŵ , there are an
open neighborhood U×V of (z0, w0) and δ0 > 0 such that hypothesis (iii) of Lemma
7.6 is fulfilled with
D := D, G := G, Pk := A 1
, Qk := B 1
, Dk := D 1
, Gk := G 1
, k > 2.
Since the verification follows along almost the same lines as that of Proposition 9.2,
it is, therefore, left to the interested reader.
Hence, the proof of Theorem A is finished. �
48 VIÊT-ANH NGUYÊN
10. Applications
In this section we give various applications of Theorem A using different systems
of approach regions defined in Subsection 2.2.
10.1. Canonical system of approach regions. For every open subset U ⊂ R2n−1
and every continuous function h : U −→ R, the graph
z = (z
, zn) = (z
, xn + iyn) ∈ C
n : (z
, xn) ∈ U and yn = h(z
, xn)
is called a topological hypersurface in Cn.
Let X be a complex manifold of dimension n. A subset A ⊂ X is said to be a
topological hypersurface if, for every point a ∈ A, there is a local chart (U, φ : U →
n) around a such that φ(A ∩ U) is a topological hypersurface in Cn
Now let D ⊂ X be an open subset and let A ⊂ ∂D be an open subset (with
respect to the topology induced on ∂D). Suppose in addition that A is a topological
hypersurface. A point a ∈ A is said to be of type 1 (with respect to D) if, for every
neighborhood U of a there is an open neighborhood V of a such that V ⊂ U and
V ∩D is a domain. Otherwise, a is said to be of type 2. We see easily that if a is of
type 2, then for every neighborhood U of a, there are an open neighborhood V of a
and two domains V1, V2 such that V ⊂ U, V ∩D = V1 ∪ V2 and all points in A ∩ V
are of type 1 with respect to V1 and V2.
In virtue of Proposition 3.7 in [35] we have the following
Proposition 10.1. Let X be a complex manifold and D an open subset of X. D
is equipped with the canonical system of approach regions. Suppose that A ⊂ ∂D is
an open boundary subset which is also a topological hypersurface. Then A is locally
pluriregular and A ⊂ Ã.
This, combined with Theorem A, implies the following result.
Theorem 10.2. Let X, Y be two complex manifolds, and D ⊂ X, G ⊂ Y two
nonempty open sets. D (resp. G) is equipped with the canonical system of approach
regions. Let A (resp. B) be a nonempty open subset of ∂D (resp. ∂G) which is also
a topological hypersurface. Let Z be a complex analytic space possessing the Hartogs
extension property. Define
W := X(A,B;D,G),
Ŵ := {(z, w) ∈ D ×G : ω(z, A,D) + ω(w,B,G) < 1} .
Let f : W −→ Z be such that:
(i) f ∈ Cs(W,Z) ∩ Os(W
o, Z);
(ii) f is locally bounded on W ;
(iii) f |A×B is continuous.
Then there exists a unique mapping f̂ ∈ O(Ŵ ) such that
cW∋(z,w)→(ζ,η)
f̂(z, w) = f(ζ, η), (ζ, η) ∈ W.
A UNIFIED APPROACH 49
If, moreover, Z = C and |f |W <∞, then
|f̂(z, w)| ≤ |f |
1−ω(z,w)
A×B |f |
ω(z,w)
W , (z, w) ∈ Ŵ .
The special case where Z = C has been proved in [35].
10.2. System of angular approach regions. We will use the terminology and
the notation in Paragraph 3 of Subsection 2.2. More precisely, if D is an open set
of a Riemann surface such that D is good on a nonempty part of ∂D, we equip D
with the system of angular approach regions supported on this part. Moreover, the
notions such as set of positive length, set of zero length, locally pluriregular point
which exist on ∂E can be transferred to ∂D using conformal mappings in a local
way (see [34] for more details).
Theorem 10.3. Let X, Y be Riemann surfaces and D ⊂ X, G ⊂ Y open subsets
and A (resp. B) a subset of ∂D (resp. ∂G) such that D (resp. G) is good on A
(resp. B) and that both A and B are of positive length. Let Z be a complex analytic
space possessing the Hartogs extension property. Define
W := X(A,B;D,G), W
:= X(A
;D,G),
Ŵ := {(z, w) ∈ D ×G : ω(z, A,D) + ω(w,B,G) < 1} ,
(z, w) ∈ D ×G : ω(z, A
, D) + ω(w,B
, G) < 1
where A
(resp. B
) is the set of points at which A (resp. B) is locally pluriregular
with respect to the system of angular approach regions supported on A (resp. B),
and ω(·, A,D), ω(·, A
, D) (resp. ω(·, B,G), ω(·, B
, G)) are calculated using the
canonical system of approach regions.
Then for every mapping f : W −→ Z which satisfies the following conditions:
(i) f ∈ Cs(W,Z) ∩ Os(W
o, Z);
(ii) f is locally bounded;
(iii) f |A×B is continuous,
there exists a unique mapping f̂ ∈ O(Ŵ
, Z) which admits the angular limit f at all
points of W ∩W
If A and B are Borel sets or if X = Y = C then Ŵ = Ŵ
If Z = C and |f |W <∞, then
|f̂(z, w)| ≤ |f |
1−ω(z,A
,D)−ω(w,B
A×B |f |
ω(z,A
,D)+ω(w,B
W , (z, w) ∈ Ŵ
Theorem 10.3 generalizes, in some sense, the result of [34].
In the above theorem we have used the equality
W = Ŵ
when either A and B
are Borel sets or X = Y = C. This follows from the identity ω(·, A,D) = ω̃(·, A,D)
when either A is a Borel set or D ⊂ C (see Theorem 4.6 in [34]). On the other hand,
we can sharpen Theorem 10.3 further, namely, hypothesis (i) can be replaced by a
weaker hypothesis (i’) as follows:
50 VIÊT-ANH NGUYÊN
(i’) for any a ∈ A the mapping f(a, ·)|G is holomorphic and has angular limit
f(a, b) at all points b ∈ B, and for any b ∈ B the mapping f(·, b)|D is
holomorphic and has angular limit f(a, b) at all points a ∈ A.
To see this it suffices to observe that the hypotheses of Theorem 3.8 and Theorem
6.1 can be weakened considerably when the bounded open set D therein is just
one-dimensional.
10.3. System of conical approach regions. The remaining part of this section
is devoted to two important applications of Theorem A: a boundary cross theorem
and a mixed cross theorem. In order to formulate them, we need to introduce some
terminology and notation.
Let X be an arbitrary complex manifold and D ⊂ X an open subset. We say
that a set A ⊂ ∂D is locally contained in a generating manifold if there exist an
(at most countable) index set J 6= ∅, a family of open subsets (Uj)j∈J of X and
a family of generating manifolds13 (Mj)j∈J such that A ∩ Uj ⊂ Mj, j ∈ J, and
that A ⊂
j∈J Uj . The dimensions of Mj may vary according to j ∈ J. Given a
set A ⊂ ∂D which is locally contained in a generating manifold, we say that A is
of positive size if under the above notation
j∈J mesMj (A∩ Uj) > 0, where mesMj
denotes the Lebesgue measure on Mj. A point a ∈ A is said to be a density point
of A if it is a density point of A ∩ Uj on Mj for some j ∈ J. Denote by A
the set
of density points of A.
Suppose now that A ⊂ ∂D is of positive size. We equip D with the system
of conical approach regions supported on A. Using the work of B. Jöricke (see,
for example, Theorem 3, pages 44–45 in [15]), one can show that14 A is locally
pluriregular at all density points of A. Observe that mesMj
(A \ A
) ∩ Uj
= 0 for
j ∈ J. Therefore, it is not difficult to show that A
is locally pluriregular. Choose
an increasing sequence (An)
n=1 of subsets of A such that An ∩ Uj is closed and
mesMj
An) ∩ Uj
= 0 for j ∈ J. Observe that A
n is locally pluriregular,
n ∩ Uj ⊂ A for j ∈ J and that  :=
n is locally pluriregular and that  is
locally pluriregular at all points of A
. Consequently, it follows from Definition 2.3
ω̃(z, A,D) ≤ ω(z, A
, D), z ∈ D.
This estimate, combined with Theorem A, implies the following result which is a
generalization in higher dimensions of Theorem 10.3.
Theorem 10.4. Let X, Y be two complex manifolds, let D ⊂ X, G ⊂ Y be two
open sets, and let A (resp. B) be a subset of ∂D (resp. ∂G). D (resp. G) is equipped
with a system of conical approach regions
Aα(ζ)
ζ∈D, α∈Iζ
(resp.
Aβ(η)
η∈G, β∈Iη
13 A differentiable submanifold M of a complex manifold X is said to be a generating manifold
if for all ζ ∈ M, every complex vector subspace of TζX containing TζM coincides with TζX.
14 A complete proof will be available in [29].
A UNIFIED APPROACH 51
supported on A (resp. on B). Suppose in addition that A and B are of positive size.
Let Z be a complex analytic space possessing the Hartogs extension property. Define
W := X(A′, B′; D,G) ,    Ŵ := { (z, w) ∈ D × G : ω(z, A′, D) + ω(w,B′, G) < 1 } ,

where A′ (resp. B′) is the set of density points of A (resp. B).
Then, for every mapping f : W −→ Z which satisfies the following conditions:
• f ∈ Cs(W,Z) ∩ Os(W^o, Z);
• f is locally bounded;
• f |A×B is continuous,
there exists a unique mapping f̂ ∈ O(Ŵ, Z) which admits A-limit f(ζ, η) at every point (ζ, η) ∈ W ∩ Ŵ.
If, moreover, Z = C and |f |W < ∞, then

|f̂(z, w)| ≤ |f|_{A×B}^{1−ω(z,A′,D)−ω(w,B′,G)} |f|_{W}^{ω(z,A′,D)+ω(w,B′,G)} ,    (z, w) ∈ Ŵ.
The second application is a very general mixed cross theorem.
Theorem 10.5. Let X, Y be two complex manifolds, let D ⊂ X, G ⊂ Y be open sets, let A be a subset of ∂D, and let B be a subset of G. D is equipped with the system of conical approach regions (Aα(ζ))ζ∈D, α∈Iζ supported on A and G is equipped with the canonical system of approach regions (Aβ(η))η∈G, β∈Iη. Suppose
in addition that A is of positive size. Let Z be a complex analytic space possessing
the Hartogs extension property. Define

W := X(A′, B∗; D,G) ,    Ŵ := { (z, w) ∈ D × G : ω(z, A′, D) + ω(w,B∗, G) < 1 } ,

where A′ is the set of density points of A and B∗ denotes, as usual (see Subsection 2.1 above), the set of points in B ∩ G at which B is locally pluriregular.
Then, for every mapping f : W −→ Z which satisfies the following conditions:
• f ∈ Cs(W,Z) ∩ Os(W^o, Z);
• f is locally bounded along A × G,
there exists a unique mapping f̂ ∈ O(Ŵ, Z) which admits A-limit f(ζ, η) at every point (ζ, η) ∈ W ∩ Ŵ.
If, moreover, Z = C and |f |W < ∞, then

|f̂(z, w)| ≤ |f|_{A×B}^{1−ω(z,A′,D)−ω(w,B∗,G)} |f|_{W}^{ω(z,A′,D)+ω(w,B∗,G)} ,    (z, w) ∈ Ŵ.
Concluding remarks. In ongoing joint works with Pflug [30, 31] we develop new cross theorems with singularities. On the other hand, in [36] the problem of optimality of the envelope of holomorphy Ŵ in Theorem A has been investigated.
References
[1] R. A. Airapetyan, G. M. Henkin, Analytic continuation of CR-functions across the “edge of
the wedge”, Dokl. Akad. Nauk SSSR, 259 (1981), 777-781 (Russian). English transl.: Soviet
Math. Dokl., 24 (1981), 128–132.
[2] R. A. Airapetyan, G. M. Henkin, Integral representations of differential forms on Cauchy-
Riemann manifolds and the theory of CR-functions. II, Mat. Sb., 127(169), (1985), 92–112,
(Russian). English transl.: Math. USSR-Sb. 55 (1986), 99–111.
[3] O. Alehyane et J. M. Hecart, Propriété de stabilité de la fonction extrémale relative, Potential
Anal., 21, (2004), no. 4, 363–373.
[4] K. Adachi, M. Suzuki and M. Yoshida, Continuation of holomorphic mappings with values
in a complex Lie group, Pacific J. Math., 47, (1973), 1–4.
[5] O. Alehyane et A. Zeriahi, Une nouvelle version du théorème d’extension de Hartogs pour
les applications séparément holomorphes entre espaces analytiques, Ann. Polon. Math., 76,
(2001), 245–278.
[6] E. Bedford, The operator (dd^c)^n on complex spaces, Semin. P. Lelong - H. Skoda, Analyse,
Années 1980/81, Lect. Notes Math., 919, (1982), 294–323.
[7] E. Bedford, B. A. Taylor, A new capacity for plurisubharmonic functions, Acta Math., 149,
(1982), 1–40.
[8] S. Bernstein, Sur l’ordre de la meilleure approximation des fonctions continues par des
polynômes de degré donné, Bruxelles 1912.
[9] L. M. Drużkowski, A generalization of the Malgrange–Zerner theorem, Ann. Polon. Math.,
38, (1980), 181–186.
[10] A. Edigarian, Analytic discs method in complex analysis, Diss. Math. 402, (2002), 56 pages.
[11] G. M. Goluzin, Geometric theory of functions of a complex variable, (English), Providence,
R. I.:American Mathematical Society (AMS). VI, (1969), 676 pages.
[12] A. A. Gonchar, On analytic continuation from the “edge of the wedge” theorem, Ann. Acad.
Sci. Fenn. Ser. A.I: Mathematica, 10, (1985), 221–225.
[13] A. A. Gonchar, On Bogolyubov’s “edge-of-the-wedge” theorem, Proc. Steklov Inst. Math.,
228, (2000), 18–24.
[14] F. Hartogs, Zur Theorie der analytischen Funktionen mehrer unabhängiger Veränderlichen,
insbesondere über die Darstellung derselben durch Reihen, welche nach Potenzen einer
Veränderlichen fortschreiten, Math. Ann., 62, (1906), 1–88.
[15] B. Jöricke, The two-constants theorem for functions of several complex variables, (Russian),
Math. Nachr. 107 (1982), 17–52.
[16] B. Josefson, On the equivalence between polar and globally polar sets for plurisubharmonic
functions on Cn, Ark. Mat., 16, (1978), 109–115.
[17] S. M. Ivashkovich, The Hartogs phenomenon for holomorphically convex Kähler manifolds,
Math. USSR-Izv., 29, (1997), 225–232.
[18] M. Jarnicki, P. Pflug, Extension of Holomorphic Functions, de Gruyter Expositions in Math-
ematics 34, Walter de Gruyter, 2000.
[19] M. Jarnicki, P. Pflug, Invariant distances and metrics in complex analysis—revisited, Disser-
tationes Math. (Rozprawy Mat.), 430, (2005), 192 pages.
[20] M. Klimek, Pluripotential theory, London Mathematical society monographs, Oxford Univ.
Press., 6, (1991).
[21] H. Komatsu, A local version of Bochner’s tube theorem, J. Fac. Sci., Univ. Tokyo, Sect. I A
19, (1972), 201–214.
[22] F. Lárusson, R. Sigurdsson, Plurisubharmonic functions and analytic discs on manifolds, J.
Reine Angew. Math., 501, (1998), 1–39.
[23] Nguyên Thanh Vân, Separate analyticity and related subjects, Vietnam J. Math., 25, (1997),
81–90.
[24] Nguyên Thanh Vân, Note on doubly orthogonal system of Bergman, Linear Topological
Spaces and Complex Analysis, 3, (1997), 157–159.
[25] Nguyên Thanh Vân et A. Zeriahi, Familles de polynômes presque partout bornées, Bull. Sci.
Math., 107, (1983), 81–89.
[26] Nguyên Thanh Vân et A. Zeriahi, Une extension du théorème de Hartogs sur les fonctions
séparément analytiques, Analyse Complexe Multivariable, Récents Développements, A. Meril
(ed.), EditEl, Rende, (1991), 183–194.
[27] Nguyên Thanh Vân et A. Zeriahi, Systèmes doublement orthogonaux de fonctions holomor-
phes et applications, Banach Center Publ. 31, Inst. Math., Polish Acad. Sci., (1995), 281–297.
[28] V.-A. Nguyên, A general version of the Hartogs extension theorem for separately holomorphic
mappings between complex analytic spaces, Ann. Scuola Norm. Sup. Pisa Cl. Sci., (2005), serie
V, Vol. IV(2), 219–254.
[29] V.-A. Nguyên, Conical plurisubharmonic measure and new cross theorems, in preparation.
[30] V.-A. Nguyên and P. Pflug, Boundary cross theorem in dimension 1 with singularities,
arXiv:0705.4649v1, preprint of the ICTP, Trieste-Italy, (2007), 19 pages.
[31] V.-A. Nguyên and P. Pflug, Cross theorems with singularities, in preparation, 22 pages.
[32] P. Pflug, Extension of separately holomorphic functions–a survey 1899–2001, Ann. Polon.
Math., 80, (2003), 21–36.
[33] P. Pflug and V.-A. Nguyên, A boundary cross theorem for separately holomorphic functions,
Ann. Polon. Math., 84, (2004), 237–271.
[34] P. Pflug and V.-A. Nguyên, Boundary cross theorem in dimension 1, Ann. Polon. Math.,
90(2), (2007), 149-192.
[35] P. Pflug and V.-A. Nguyên, Generalization of a theorem of Gonchar, Ark. Mat., 45, (2007),
105–122.
[36] P. Pflug and V.-A. Nguyên, Envelope of holomorphy for boundary cross sets, Arch. Math.
(Basel), 89, (2007), 326–338.
[37] E. A. Poletsky, Plurisubharmonic functions as solutions of variational problems, Several
complex variables and complex geometry, Proc. Summer Res. Inst., Santa Cruz/CA (USA)
1989, Proc. Symp. Pure Math. 52, Part 1, (1991), 163–171.
[38] E. A. Poletsky, Holomorphic currents, Indiana Univ. Math. J., 42, No.1, (1993), 85–144.
[39] T. Ransford, Potential theory in the complex plane, London Mathematical Society Student
Texts, 28, Cambridge: Univ. Press., (1995).
[40] J. P. Rosay, Poletsky theory of disks on holomorphic manifolds, Indiana Univ. Math. J., 52,
No.1, (2003), 157–169.
[41] B. Shiffman, Extension of holomorphic maps into Hermitian manifolds, Math. Ann., 194,
(1971), 249–258.
[42] B. Shiffman, Hartogs theorems for separately holomorphic mappings into complex spaces, C.
R. Acad. Sci. Paris Sér. I Math., 310 (3), (1990), 89–94.
[43] J. Siciak, Analyticity and separate analyticity of functions defined on lower dimensional sub-
sets of Cn, Zeszyty Nauk. Univ. Jagiellon. Prace Mat., 13, (1969), 53–70.
[44] J. Siciak, Separately analytic functions and envelopes of holomorphy of some lower dimen-
sional subsets of Cn, Ann. Polon. Math., 22, (1970), 145–171.
[45] V. P. Zahariuta, Separately analytic functions, generalizations of the Hartogs theorem and
envelopes of holomorphy, Math. USSR-Sb., 30, (1976), 51–67.
[46] M. Zerner, Quelques résultats sur le prolongement analytique des fonctions de variables com-
plexes, Séminaire de Physique Mathématique.
[47] A. Zeriahi, Comportement asymptotique des systèmes doublement orthogonaux de Bergman:
Une approche élémentaire, Vietnam J. Math., 30, No.2, (2002), 177–188.
[48] H. Wu, Normal families of holomorphic mappings, Acta Math., 119, (1967), 193–233.
Viêt-Anh Nguyên, Mathematics Section, The Abdus Salam international centre
for theoretical physics, Strada costiera, 11, 34014 Trieste, Italy
E-mail address : [email protected]
|
0704.0898 | Higher spin algebras as higher symmetries | Higher spin algebras as higher symmetries
Xavier Bekaert
Laboratoire de Mathématiques et Physique Théorique
Unité Mixte de Recherche 6083 du CNRS, Fédération Denis Poisson
Université François Rabelais, Parc de Grandmount
37200 Tours, France
E-mail address: [email protected]
Abstract
The exhaustive study of the rigid symmetries of arbitrary free field
theories is motivated, along several lines, as a preliminary step in the
completion of the higher-spin interaction problem in full generality. Some
results for the simplest example (a scalar field) are reviewed and com-
mented along these lines.
Expanded version of the lectures presented at the “5th international school
and workshop on QFT & Hamiltonian systems” (Calimanesti, May 2006).
1 Higher-spin interaction problem
Whereas covariant gauge theories describing arbitrary free massless fields
on constant-curvature spacetimes of dimension n are firmly established by
means of the unitary representation theory of their isometry groups, it is
still open to question whether non-trivial consistent self-couplings and/or
cross-couplings among those fields may exist for n > 2 , such that the
deformed gauge algebra is non-Abelian. The goal of the present paper is
to advocate that a lot of information on the interactions can be extracted
from the symmetries of the free field theory.
The conventional local free field theories corresponding to unitary irre-
ducible representations of the helicity group SO(n− 2) , that are spanned
by completely symmetric tensors, have been constructed a while ago (for
some introductory reviews, see [1]). In order to have Lorentz invariance
manifest and second order local field equations with minimal field content,
the theory is expressed in terms of completely symmetric double-traceless
tensor gauge fields hµ1... µs of rank s > 0, the gauge transformation of
which reads
δξ hµ1µ2...µs = ∇µ1 ξµ2...µs + cyclic , (1)

where ∇ is the covariant derivative with respect to the background Levi–Civita connection and “cyclic” stands for the sum of terms necessary to
have symmetry of the right-hand-side under permutations of the indices.
The gauge parameter ξ is a completely symmetric traceless tensor field
of rank s − 1. In this relativistic field theory, the “spin” is equal to the
rank s. For spin s = 1 the gauge field hµ represents the photon with U(1)
gauge symmetry while for spin s = 2 the gauge field hµν represents the
graviton with linearized diffeomorphism invariance. The gauge algebra of
field independent gauge transformations such as (1) is of course Abelian.
Non-Abelian gauge theories for “lower spin” s ≤ 2 are well known and
essentially correspond to Yang-Mills (s = 1) and Einstein (s = 2) theories
for which the underlying geometries (principal bundles and Riemannian
manifolds) were familiar to mathematicians before the construction of the
physical theory. In contrast, the situation is rather different for “higher
spin” s > 2 for which the underlying geometry (if any!) remains obscure.
Due to this lack of information, it is natural to look for inspiration in
the perturbative “reconstruction” of Einstein gravity as the non-Abelian
gauge theory of a spin-two particle propagating on a constant-curvature
spacetime (see e.g. [2] for a comprehensive review).
Let us denote by S_0[hµ1...µs] the Poincaré-invariant, local, second-order, quadratic, ghost-free, gauge-invariant action of a spin-s symmetric tensor gauge field. In order to perform a perturbative analysis via the Noether method [3], the non-Abelian interaction problem for a collection of higher (and possibly lower) spin gauge fields is formulated as a deformation problem.

Higher-spin interaction problem: List all Poincaré-invariant local deformations

S[h] = S_0[h] + ε S_1[h] + O(ε²)

of a positive sum, with at least one s > 2,

S_0[h] = Σ_s S_0[hµ1...µs]
of quadratic actions such that the deformed local gauge symmetries
δ_ξ h = δ^{(0)}_ξ h + ε δ^{(1)}_ξ h + O(ε²)

are already non-Abelian at first order in the deformation parameters ε, and do not arise from local redefinitions

h → h + ε φ(h) + O(ε²) ,    ξ → ξ + ε ζ(h, ξ) + O(ε²)

of the gauge fields and parameters.
This well-posed mathematical problem is expected to possess non-
trivial solutions including higher-spin fields, as strongly indicated by Vasiliev’s
works (for some reviews, see [4] and references therein) and deserves to
be investigated further along systematic lines.
2 The Noether method
The assumption that the deformations are formal power series in some
deformation parameters ε makes it possible to investigate the problem order by
order. The crucial observation of any perturbation theory is that the first
order deformations are constrained by the symmetries of the undeformed
system. In the present case, the Noether method scrutinizes the gauge
symmetry of the action, δξS = 0 . At zeroth order, the latter equation is
satisfied by hypothesis. At first order, it reads

δ^{(1)}_ξ S_0 + δ^{(0)}_ξ S_1 = 0 . (2)
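For completeness, here is the order-by-order bookkeeping behind this condition, spelled out with the notation introduced above (a routine expansion, not additional input from the original text):

\delta_{\xi} S
  = \bigl(\delta^{(0)}_{\xi} + \varepsilon\,\delta^{(1)}_{\xi} + \dots\bigr)
    \bigl(S_{0} + \varepsilon\,S_{1} + \dots\bigr)
  = \underbrace{\delta^{(0)}_{\xi} S_{0}}_{=\,0}
    + \varepsilon\,\bigl(\delta^{(1)}_{\xi} S_{0} + \delta^{(0)}_{\xi} S_{1}\bigr)
    + O(\varepsilon^{2}) ,

so requiring δ_ξ S = 0 order by order is trivially satisfied at order ε⁰ and yields precisely (2) at order ε.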
This equation may be used to constrain the possible deformations by
reinterpreting them as familiar objects of the undeformed gauge theory.
By definition, an observable of a gauge theory is a functional which is
gauge-invariant on-shell, while a reducibility parameter of a gauge theory
is a gauge parameter such that the corresponding gauge variation vanishes
off-shell.
First-order deformations in terms of the undeformed theory:
• First-order deformations of the action are observables of the undeformed
theory.
• First-order deformations of the gauge symmetries evaluated at reducibil-
ity parameters of the undeformed gauge theory define symmetries of the
undeformed theory.
Proof: In (2) the infinitesimal variation δ^{(1)}_ξ S_0 of the undeformed action is proportional to the undeformed Euler–Lagrange equations. This proves the first part of the theorem. Reducibility parameters ξ̄ of the undeformed gauge theory verify δ^{(0)}_{ξ̄} h = 0 by definition. Inserting this fact into (2) with ξ = ξ̄ gives δ^{(1)}_{ξ̄} S_0 = 0, which is precisely the translation of the second part of the theorem.
In the mathematical litterature, a (conformal) Killing tensor of a
pseudo-Riemannian manifold is a symmetric tensor field ξ such that its
symmetrized covariant derivative with respect to the Levi–Civita connec-
tion, ∇µ1 ξµ2...µs + cyclic, vanishes (modulo a term proportional to the
metric for conformal Killing tensors). Therefore, any reducibility parame-
ter ξ of the spin-s symmetric gauge field theory on the constant-curvature
spacetime M is identified with a Killing tensor of rank s− 1 of the mani-
fold M. The space of Killing tensors on any constant-curvature spacetime
is known to be finite-dimensional [5], thus the linear gauge symmetries (1)
are irreducible.
These results suggest two strategies for addressing the higher-spin in-
teraction problem. The most ambitious one is the computation of all lo-
cal observables of the free gauge theory associated to deformations of the
gauge algebra. This result would provide the exhaustive list of algebra-
deforming first order vertices, but this computation is technically demand-
ing and seems out of reach in the completely general case. Nevertheless,
the BRST reformulation of the problem [6] allowed the complete classi-
fication of non-Abelian deformations in various particular cases (see e.g.
the review [7] and references therein). Actually, a more humble strategy
is the computation of all rigid symmetries of the free irreducible gauge
theory. It is of interest because the knowledge of these rigid symmetries
would strongly constrain the candidates for gauge symmetry deforma-
tions. Indeed, the constant tensors appearing in the rigid symmetries
could be compared with the complete list [5] of constant-curvature space-
time Killing tensors.
3 Free theory symmetries
Bosonic fields are usually described in terms of their components living in
some subspace V of the space ⊗(Rn) of tensors on Rn (e.g. V = ⊙(Rn)
for symmetric tensor fields). The background metric of the constant-
curvature spacetime induces some non-degenerate bilinear form on V .
This defines a non-degenerate sesquilinear form 〈 | 〉 on the space L2(Rn)⊗
V of square-integrable fields taking values in the countable space V (the
components). Let † stands for the adjoint with respect to the sesquilinear
form 〈 | 〉 .
Any quadratic action for bosonic fields ψ can be expressed as a quadratic form

S[ψ] = 〈ψ | K | ψ〉 , (3)
where the kinetic operator K is self-adjoint, K† = K. Because the sesquilin-
ear form 〈 | 〉 is non-degenerate, the Euler-Lagrange equation extremizing
the quadratic action is the linear equation
δ〈ψ |
= K|ψ 〉 = 0 . (4)
Moreover, the quadratic form 〈ψ | K | ψ 〉 is degenerate if and only if the
kinetic operator K is degenerate. This happens if and only if there exists
a linear operator P (on L2(Rn) ⊗ V ) such that KP = 0. Infinitesimal
gauge symmetries then read
δχ | ψ 〉 = P | χ 〉 ,
with gauge parameters χ . The Noether identity is P†K = (KP)† = 0 .
A symmetry of the quadratic action (3) is an invertible linear pseudo-
differential operator U preserving the quadratic form 〈 | K | 〉. In other
words,
U†KU = K .
The group of off-shell symmetries is the group of symmetries of the quadratic
action endowed with the composition ◦ as product. A symmetry genera-
tor of the quadratic action (3) is a linear differential operator T which is
self-adjoint with respect to the quadratic form 〈 | K | 〉. More concretely,
KT = T†K .
Any symmetry generator T defines a symmetry U = eiT of the quadratic
action (3). If T = T† then the linear operator T is a symmetry generator
of the quadratic action if and only if it commutes with K. The real Lie
algebra of off-shell symmetries is the algebra of symmetry generators of
the quadratic action endowed with i times the commutator as Lie bracket,
{ , } := i [ , ].
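To see explicitly why a symmetry generator exponentiates to an off-shell symmetry (a short check spelled out here for convenience, not quoted from the text): iterating KT = T†K gives K T^n = (T†)^n K for every n, hence

U^{\dagger} K\, U
  = e^{-i T^{\dagger}}\, K\, e^{i T}
  = e^{-i T^{\dagger}}\, e^{i T^{\dagger}}\, K
  = K ,

so U = e^{iT} indeed preserves the quadratic form 〈 | K | 〉.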
A symmetry of the linear equation (4) is a linear differential operator
T obeying
KT = SK , (5)
for some linear operator S. Such a symmetry T preserves the space KerK
of solutions to the equations of motion. Any symmetry generator T of
the action (3) is always a symmetry of the equation of motion (4) with
S = T† in (5). A symmetry T is trivial on-shell if T = RK for some
linear operator R. Such an on-shell-trivial symmetry is always a sym-
metry of the field equation (4), since it obeys (5) with S = KR. The
algebra of on-shell-trivial symmetries obviously forms a left ideal in the
algebra of linear differential operators endowed with the composition ◦
as multiplication. Furthermore, it is also a right ideal in the algebra of
symmetries of the linear equation (4). The complex associative algebra of
on-shell symmetries is the associative algebra of symmetries of the linear
equation quotiented by the two-sided ideal of on-shell-trivial symmetries.
The complex Lie algebra of on-shell symmetries is the algebra of on-shell
symmetries endowed with the commutator as Lie bracket.
Notice that when K is non-degenerate, a linear operator T = RK is
a symmetry generator of the quadratic action (3) if and only if R is self-
adjoint. Moreover, the Lie subalgebra of such on-shell-trivial symmetry
generators is an ideal in the Lie algebra of off-shell symmetries.
4 Higher-spin algebras
Let g be the Lie algebra corresponding to the finite-dimensional (confor-
mal) isometry group G of the constant-curvature spacetime of dimension
n > 2. For n = 2 , the spacetime may be arbitrary and the conformal
algebra is of course infinite-dimensional. If the free field theory is rela-
tivistic, then g is linearly realized on the space L2(Rn)⊗ V (respectively,
KerK) of off-shell (resp. on-shell) fields. This induces a linear realiza-
tion of the universal enveloping algebra U(g) over C. The real form of
this realization corresponding to the self-adjoint operators, endowed with
i times the commutator as Lie bracket, is nowadays referred to as (confor-
mal) on/off-shell higher-spin algebra of the constant-curvature spacetime
(see e.g. [8] for an elementary introduction to such algebraic structures).
The name comes from the fact that its generators are in “higher-spin”
representations of the Lorentz group, and the algebra is said to be “on”
or “off” shell whether the algebra is realized on the space of solutions of
the Euler-Lagrange equations or not.
The isometry algebra g of a constant-curvature spacetime is a module
of the Lorentz subalgebra o(n − 1, 1) ⊂ g for the adjoint representation.
This module decomposes as the sum of two irreducible o(n−1, 1)-modules:
the “translations” are in the vector module ∼= Rn while the boosts and rotations are in the antisymmetric module ∼= ∧2(Rn). These representations
are labelled by one-column Young diagrams of, respectively, one and two
cells. The number of columns is associated with the spin. The fact that
the generators of U(g) are in higher-spin representations is summarized in
the following result.
Universal enveloping algebra of isometries: The universal envelop-
ing algebra U(g) of the isometry algebra g of an n-dimensional constant-
curvature spacetime is an infinite-dimensional module of the general linear
Lie algebra gl(n), decomposing as an infinite sum of finite-dimensional ir-
reducible gl(n)-modules labelled by the set of all Young diagrams, with
multiplicity one, the first column of which has length ≤ n.
Proof: The Poincaré-Birkhoff-Witt theorem states that the universal en-
veloping algebra U(g) is isomorphic to the symmetric algebra ⊙(g) as a
vector space. As a gl(n)-module, the vector space g is isomorphic to the
sum Rn ⊕ ∧2(Rn) of irreducible modules. This leads to the following
isomorphism of modules:
⊙(g) ∼= ⊙(Rn) ⊗ ⊙(∧2(Rn)) . (6)
The idea is to evaluate the right-hand-side of (6) using the available tech-
nology on Kronecker products of irreducible representations [9]. The mod-
ule ⊙(Rn) decomposes as the infinite sum of irreducible modules labelled
by all one-row Young diagrams with multiplicity one. A formula of Lit-
tlewood for symmetric plethysms implies that the module ⊙(∧2(Rn))
decomposes as the infinite sum of irreducible modules, with multiplic-
ity one, labelled by all Young diagrams with columns of even lengths.
The Kronecker product in (6) decomposes as the infinite sum of all the
Kronecker products between a one-row Young diagram and a Young dia-
gram with columns of even lengths, each with multiplicity one. Using the
Littlewood–Richardson rule, one may show that the result of this compu-
tation is the infinite sum of irreducible modules labelled with all possible
Young diagrams, each with multiplicity one. The Young diagrams whose
first column has length greater than n lead to vanishing modules, hence
they do not appear in the series.
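As a low-order illustration of this counting (an explicit check using the standard decompositions quoted in the proof, not a statement from the original text), the degree-two part of ⊙(g) already produces five distinct Young diagrams, each exactly once:

\odot^{2}\bigl(\mathbb{R}^{n}\oplus\wedge^{2}\mathbb{R}^{n}\bigr)
 \cong \odot^{2}(\mathbb{R}^{n})
 \oplus \bigl(\mathbb{R}^{n}\otimes\wedge^{2}\mathbb{R}^{n}\bigr)
 \oplus \odot^{2}\bigl(\wedge^{2}\mathbb{R}^{n}\bigr)
 \cong S_{(2)} \oplus \bigl(S_{(2,1)}\oplus S_{(1,1,1)}\bigr)
 \oplus \bigl(S_{(2,2)}\oplus S_{(1,1,1,1)}\bigr),

where S_λ denotes the irreducible gl(n)-module labelled by the Young diagram λ (diagrams whose first column is longer than n give vanishing modules).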
The higher-spin algebras are important in relativistic field theories
because they always appear as spacetime symmetry algebras in the free
limit.
Spacetime symmetries of relativistic free field theories: If the
Lie algebra of off/on-shell symmetries contains the (conformal) isometry
algebra g of some constant-curvature spacetime M, then it also contains
the (conformal) off/on-shell higher-spin algebra of M.
Proof: The Poincaré-Birkhoff-Witt theorem states that one can realize
the universal enveloping U(g) as Weyl-ordered polynomials in the elements
of the Lie algebra g. The above theorem is proved by observing that
any Weyl-ordered polynomial in on-shell symmetries is itself an on-shell
symmetry. As observed in [10], the same is true for symmetry generators.
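In the simplest non-trivial instance this observation reads as follows (again a spelled-out check rather than a quotation): if T_1 and T_2 are self-adjoint and commute with K, then so does their Weyl-symmetrized product,

[T_1,K]=[T_2,K]=0,\quad T_i^{\dagger}=T_i
\;\Longrightarrow\;
\Bigl[\tfrac12\bigl(T_1T_2+T_2T_1\bigr),K\Bigr]=0,
\qquad
\Bigl(\tfrac12\bigl(T_1T_2+T_2T_1\bigr)\Bigr)^{\dagger}
=\tfrac12\bigl(T_1T_2+T_2T_1\bigr),

so the symmetrized quadratic monomials in the isometry generators are again symmetry generators.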
As an important corollary, the theorem implies that any relativistic
free field theory has an infinite number of rigid symmetries, and therefore
it possesses an infinite number of conserved currents via the Noether the-
orem, as it is well known. Notice that relativistic integrable models are
precisely such that they possess an infinite set of commuting rigid symme-
tries corresponding to an infinite set of conserved charges in involution.
The infinite-dimensional subalgebra of symmetries of the free field theory
generated by the translations only is, of course, Abelian. Actually, the fac-
torization property is deeply related to the preservation of this subalgebra
of symmetries at the interacting level [11]. Thus the relationship between
higher-spin algebras and integrable models appears to be very intimate
(see also [12] and references therein). The strong form of the Maldacena
conjecture (in the large N limit) and the integrability properties recently
enlightened in this context are further indications of such a relationship.
Symmetries may be characterized by their action on the spacetime co-
ordinates. A smooth change of coordinates is generated by a first-order
linear differential operator. Therefore, a higher-order linear differential
operator does not generate coordinate transformations. For instance, an
isometry generator is a first-order linear differential operator correspond-
ing to a Killing vector field, but the spacetime higher-symmetries are
powers of such isometry generators, hence they are higher-order linear
differential operators. They do not generate coordinate transformations
and this explains why spacetime higher-symmetries are usually not con-
sidered in textbooks.
Let us focus on the first non-trivial example of free field theory: the
quadratic action of a complex scalar field on an n-dimensional spacetime
M. In such case, the space V = C and the kinetic operator K can be
taken to be a constant mass term plus the Laplacian on M. A scalar field is said to be conformal if its kinetic operator is the conformal Laplacian

∇² − (n − 2)/(4 (n − 1)) R , (7)
where R denotes the scalar curvature. The quadratic action and the linear
equation are symmetric under the full conformal algebra o(n, 2) if and only
if the scalar field is conformal and has conformal weight 1− n/2.
Higher symmetries of the conformal scalar field: For the quadratic
action of a complex conformal scalar field on a constant-curvature space-
time M of dimension n > 2, the following spaces over R are isomorphic:
• The Lie algebra of off-shell symmetries quotiented by the ideal of on-
shell-trivial symmetry generators,
• A real form of the associative algebra of on-shell symmetries.
• The conformal on-shell higher-spin algebra,
• The real algebra of Weyl-ordered polynomials in the conformal Killing
vector fields quotiented by the ideal generated by the conformal Laplacian,
endowed with i times the commutator as Lie bracket. The symbols of these
differential operators,
T = (−i)^r ξ^{µ1...µr} ∇µ1 · · · ∇µr + lower + on-shell-trivial ,
may be represented by real traceless symmetric tensor fields ξ which are
conformal Killing tensors.
Moreover, in n = 2 dimensions the theorem is valid for an arbitrary space-
time manifold.
Proof: The theorem can be extracted from the results of [13] on flat
spacetime of dimension n > 2 by taking into account that any constant-
curvature spacetime M can be seen as a conic in the projective null cone
of the ambient space Rn,2 . The two-dimensional case is addressed by
using the left/right-moving coordinates.
Notice that the on-shell higher-spin algebra of a non-conformal scalar
field on a constant-curvature spacetime is a proper subalgebra of the uni-
versal enveloping algebra of the isometry algebra g: it decomposes as
the infinite sum of irreducible o(n− 1, 1)-modules labelled by all two-row
Young diagrams with multiplicity one, as reviewed in [4, 7]. This algebra
is in one-to-one correspondence with the space of reducibility parameters
of the infinite tower of symmetric tensor gauge fields where each field ap-
pears once and only once for each given spin s > 0. Moreover, notice that
the AdSn+1/CFTn correspondence for n > 2 in the weak tension/coupling
limit also makes use of the isomorphism between the on-shell higher-spin
algebra of a non-conformal scalar field on AdSn+1 and the on-shell sym-
metry algebra of a conformal scalar field on Rn−1,1 (see [14] for the cor-
respondence at the level of conserved currents). Remark also that the
conformal on-shell higher-spin algebra of a two-dimensional spacetime for
a massless scalar field is isomorphic to the direct sum of u(1) and the two
Lie algebras of differential operators for the left and right moving sectors
respectively. Each of such algebras of differential operators is isomorphic
to the algebra W∞ with zero central charge [15].
The deep connection between higher-spin algebras and integrable mod-
els is exhibited by the following example in n = 2 dimensions.
Higher symmetries of the interacting scalar field: A non-linear
action of a real scalar field on the two-dimensional Minkowski spacetime,
without derivative interaction term, of the form
S[φ] = 〈φ | □ | φ〉 + ∫ d²x V(φ) ,    V(φ) = O(φ²) ,

is invariant under an infinite number of local infinitesimal rigid symmetry transformations, independent of the coordinate xµ, if and only if

V(φ) = ± (m²/α²) ( cos(αφ) − 1 ) ,    m ∈ R ,

where the parameter α is either purely real or purely imaginary. In such case, the field φ either corresponds to a free massless scalar field (m = 0), a free massive scalar field (m ≠ 0, α = 0) or sine-Gordon theory (m ≠ 0, α ≠ 0).
Moreover, via linearisation, there is a one-to-one correspondence be-
tween:
• The set of on-shell non-trivial, polynomial in the field derivatives, coordinate-
independent, symmetry transformations of the sine-Gordon Lagrangian
• The Lie algebra of coordinate-independent off-shell symmetries of a free
real scalar field quotiented by the ideal of on-shell-trivial symmetry gener-
ators,
• A proper Abelian Lie subalgebra of the on-shell higher-spin algebra of
the Minkowski plane,
• The space of harmonic odd polynomials in the momenta Pµ = −i∂µ .
These differential operators T may be represented by real traceless sym-
metric constant tensors λ:
T = i λ^{µ1...µ2q+1} ∂µ1 · · · ∂µ2q+1 + on-shell-trivial .
Proof: The first part of the theorem is a straightforward consequence
of the results of [16] in the case when V (φ) is at least quadratic in φ
(by hypothesis). The second part is proven by selecting all coordinate-
independent symmetries of a free real scalar field and comparing them
with the conserved currents of [16]. In both cases, the Noether correspon-
dence between non-trivial conserved currents and non-trivial symmetries
(see e.g. [17] for a precise statement of this isomorphism) is performed
via the Hamiltonian formulation of a two-dimensional scalar field where
one of the light-cone coordinates plays the role of “time.”
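As a quick consistency check on the potential quoted in the theorem (my own remark, using the normalization as reconstructed above), the free massive case is recovered in the limit of vanishing α:

\pm\frac{m^{2}}{\alpha^{2}}\bigl(\cos(\alpha\phi)-1\bigr)
 = \mp\frac{m^{2}\phi^{2}}{2}\pm\frac{m^{2}\alpha^{2}\phi^{4}}{24}+O(\alpha^{4})
 \;\xrightarrow{\;\alpha\to 0\;}\;\mp\frac{m^{2}\phi^{2}}{2}.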
5 A gauge principle for higher-spins ?
The analogy with lower-spins suggests to guess the full non-Abelian gauge
theory by making use of the “gauge principle.” Moreover, this point of
view actually provides a concrete motivation for using the higher-spin
algebras in the interaction problem.
The idea is to consider some “matter” system described by a quadratic
action (3) with some algebra of rigid symmetries. The rigid symmetries U
of this system are by definition in the “fundamental” representation of the
algebra of off-shell symmetries of the action (3). Connections are usually
introduced in order to “gauge” these rigid symmetries by allowing U to be
a smooth function on Rn taking values in the group of off-shell symmetries
of the action (3). In order to construct a covariant derivative D = ∂ + Γ,
one introduces a connection defined as a covariant vector field Γµ taking
values in the Lie algebra of off-shell symmetries and transforming as
| ψ 〉 −→ U | ψ 〉 ,    Γ −→ U D U^{-1} − ∂ , (8)
in such a way that
D | ψ 〉 −→ UD | ψ 〉 .
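Writing Γ′ for the transformed connection, the covariance of D follows in one line (assuming the transformation law (8) in the form reconstructed above):

D'\,\bigl(U|\psi\rangle\bigr)
 = \bigl(\partial+\Gamma'\bigr)U|\psi\rangle
 = \bigl(\partial+U D U^{-1}-\partial\bigr)U|\psi\rangle
 = U D U^{-1}U|\psi\rangle
 = U\,D|\psi\rangle .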
The minimal coupling is the replacement of all partial derivatives ∂ in the
kinetic operator K(∂) by covariant derivatives D which should ensure that
the quadratic action 〈ψ | K(D) | ψ 〉 is preserved by gauge symmetries
(8). The connection transforms in the “adjoint” representation of the
rigid symmetries while the matter field transforms in the “fundamental.”
(More precisely, the covariant derivative transforms in the adjoint while
the matter field belongs to a module of the gauge algebra.)
The introduction of a connection requires the introduction of some new
dynamical fields: the “gauge” sector. In Yang-Mills gauge theories, the
rigid symmetry is internal and the connection is itself made of spin-1 gauge
fields. For spacetime symmetries, the relation between the connection
and the gauge field is more complicated. For instance, in Einstein gravity
the Levi-Civita connection is expressed in terms of the first derivative of
the metric via the torsionlessness and metricity constraints. In general,
the spin-s tensor field propagating on a constant-curvature spacetime is
expected to be the perturbation of some background field
gµ1...µs = ḡµ1...µs + ε hµ1...µs ,
so that the deformed gauge symmetries would be of the form
δξgµ1µ2... µs = ε (Dξ)µ1µ2... µs , (9)
where the covariant derivative D = ∇ + O(ε) starts as the covariant
derivative with respect to the Levi–Civita connection for the spacetime
metric plus non-minimal corrections. Thus the background connection
is identified with the Levi-Civita connection for the background metric,
and the linearization of (9) reproduces (1). Furthermore, the reducibility
parameters of (1) exactly correspond to the gauge symmetries (9) leaving
the background geometry invariant. In the present case, this group of
rigid symmetries contains the isometry group g of the constant-curvature
spacetime. The classical theory of (in)homogeneous pseudo-orthogonal
groups tells us that completely symmetric tensor fields which are invariant
under g are constructed from products of the background metric:
ḡ(µ1µ2 · · · ḡµ2m−1µ2m) .
Thus, along these lines, only even-spin symmetric tensor fields can be
perturbations of a non-vanishing higher-spin background in a constant-
curvature spacetime. The first-order deformation of the gauge symmetries
(1) following from (9) would be of the schematic form
δ^{(1)}_ξ hµ1µ2...µs = ( Γ^{(1)} · ξ )µ1µ2...µs , (10)

where Γ^{(1)} stands for the linearized connection (including the linearized
Levi-Civita connection) and the dot stands for the action on the gauge
parameter ξ. The transformations (10) evaluated on Killing tensors ξ
of the background spacetime would be rigid symmetry transformations
of the free gauge theory. This property highly constrains the possible
expressions for the linearized connection.
Let us now consider the expansion of the minimally coupled action for
the “matter” sector in power series of ε :
〈ψ | K(D) | ψ〉 = 〈ψ | K(∂) | ψ〉 + ε 〈h | J〉 + O(ε²) ,
where J denotes a set of symmetric tensors which are bilinear in ψ and
their derivatives. Assuming that the “matter” sector is strictly distinct
from the “gauge” sector, the gauge invariance of the complete action at
first order in ε requires the symmetric tensors Jµ1µ2... µs to be conserved
up to terms proportional to the “matter” free field equations (and deriva-
tives thereof) corresponding to first-order deformations
δ^{(1)}_ξ | ψ 〉 = U^{(1)} | ψ 〉 (11)

of the gauge transformations of the “matter” sector, where U^{(1)} is a lin-
ear differential operator depending linearly on ξ and its derivatives. At
zeroth order in ε, the “gauge” group does not act on the matter. There-
fore, at leading order, the transformation law (8) reads as (11). Via the
Noether correspondence, the space of all rigid symmetries of the “matter”
quadratic action determines the space of all on-shell-conserved currents
bilinear in the “matter” fields. The latter ones determine, at first order,
the “fundamental” representation of the “gauge” group. The transforma-
tions (11) evaluated on Killing tensors ξ must define off-shell symmetries
of the “matter” quadratic action. Their algebra is non-Abelian,
hence the “gauge” algebra is already non-Abelian at first order.
As a suggestive example, one may consider a “matter” sector contain-
ing only a single scalar field.
Noether cubic couplings of a scalar field: The minimally coupled
action of a complex scalar field on flat spacetime, given by
S[φ] = 〈φ | □ − m² | φ〉 − ε ∫ dⁿx hµ1µ2...µs J^{µ1µ2...µs} + O(ε²) ,

is invariant at first order in ε, for any symmetric tensor field ξµ1µ2...µs−1, under infinitesimal symmetry transformations

δ_ξ hµ1µ2...µs = δ^{(0)}_ξ hµ1µ2...µs + O(ε) ,    δ_ξ | φ 〉 = ε T | φ 〉 + O(ε²) , (12)
where the symbol of the differential operator T is represented by ξ and the
lower order terms depend on derivatives of ξ,
T = (−i)^{s−1} ξ^{µ1...µs−1} ∂µ1 · · · ∂µs−1 + lower + on-shell-trivial ,
if and only if the on-shell-conserved current J is equivalent to a Noether
current associated to the coordinate-independent off-shell symmetries of
the free scalar field. This defines a one-to-one correspondence between
equivalence classes of such symmetric Noether currents J , bilinear in φ
and its derivatives, and equivalence classes of such deformations δξφ at
first order.
Proof: The explicit equation expressing the gauge invariance of the min-
imally coupled action for any symmetric tensor field ξ(x) of rank s − 1
precisely states that the symmetric tensor J of rank s is conserved mod-
ulo terms proportional to field equation of the scalar field φ. The one-
to-one correspondence, precisely explained in [17], between equivalence
classes of on-shell conserved currents and equivalence classes of off-shell
symmetry transformations shows explicitly that J is necessarily related to
a coordinate-independent transformation of the form (12). In turn, these
transformations are obtained by evaluating the transformation (12), at
lowest order in ε and on gauge parameters ξ equal to constant Killing
tensors. The sufficiency is proven by making use of the symmetric con-
served currents of [18]. The second part of the theorem follows from the
fact that trivial currents define trivial deformations and conversely, as it
can be seen explicitly.
In the lower-spin case, one recovers the standard minimal coupling
procedure. For s = 1 , the minimal coupling stops at second order in ε
since Jµ is the U(1) current and hµ is the Abelian vector gauge field. For
s = 2 , the minimal coupling at first order is the usual coupling between
a spin-two gauge field and the energy-momentum tensor Jµν leading to
the coordinate transformations of the scalar field, generated by the vector
fields T = −i ξµ(x) ∂µ . The commutators of such infinitesimal transfor-
mations close and define the Lie bracket of vector fields, so the underlying
gauge symmetry algebra may already be guessed at first order for gravity:
it is the Lie agebra of smooth vector fields, i.e. the Lie algebra for the
group of diffeomorphisms. The minimally coupled action is obtained to
all orders by introducing the Levi-Civita connection.
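For the s = 1 case just described, the relevant current can be displayed explicitly (the standard textbook expression, quoted here for illustration; normalization conventions may differ from those of the author):

J^{\mu}= i\bigl(\phi^{*}\partial^{\mu}\phi-(\partial^{\mu}\phi^{*})\phi\bigr),
\qquad
\partial_{\mu}J^{\mu}
 = i\bigl(\phi^{*}\Box\phi-(\Box\phi^{*})\phi\bigr)\approx 0 ,

where ≈ denotes equality modulo the free field equations (□ − m²)φ = 0, the mass terms cancelling between the two pieces.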
In the higher-spin case, it should be stressed that the trace condi-
tions on the gauge field and parameter have not been stated in the former
proposition because they may indeed be relaxed in order to simplify its
formulation. (Nevertheless, these constraints may be included by consis-
tently imposing weaker conservation laws on double-traceless currents.)
Moreover, it is convenient to remove the trace constraints when searching for a non-Abelian higher-spin gauge symmetry algebra. Actually, the trace
constraints may be removed for free field theories in several ways (see [19]
for some reviews, and [20] for the latest developments). The Lie algebra of
gauge transformations (12) for the infinite tower of all gauge parameters
(1 ≤ s < ∞) is a real form of the algebra of linear differential opera-
tors on Rn endowed with i times the commutator as Lie bracket. Notice
also that the unital associative algebra of linear differential operators on
Rn is isomorphic to the universal enveloping algebra of vector fields on Rn. (Strictly speaking, this is true only for polynomial vector fields and
differential operators, more sophisticated mathematical statements may
be required for smooth functions, but this point is only technical.) More
concretely, the symbol of a differential operator of order r is represented
by a symmetric tensor field of rank r. In the light of these remarks, it is
tempting to conjecture that, for higher-spin gauge theories, the algebra of
Hermitian differential operators,
ξ^{µ1...µr}(x) ∂µ1 · · · ∂µr + Hermitian conjugate ,
generalizes the algebra of infinitesimal diffeomorphisms for gravity. An-
other argument in favour of this conjecture may be presented in the
“gauge” sector by looking at the metric-like formulation of higher-spins
arising from the frame-like formulation of Vasiliev, at first order in the
coupling constant [21].
6 Conclusion
The conclusion is that there are two complementary but distinct ways
of using rigid symmetries of the free theory in order to guess the proper
gauge symmetry principle of higher-spin gauge theories.
On the one hand, the infinite set of rigid symmetries of the free (or,
maybe, even integrable) “matter” sector, might be gauged by the intro-
duction of a connection via a minimal coupling prescription. The idea of
using a massive scalar field as free matter sector and an infinite tower of
massless symmetric tensor fields as interacting gauge sector is in agree-
ment with the isomorphism between the off-shell higher-spin algebra and
the space of reducibility parameters. (If tensor fields are used as free
“matter” sector, then the symmetry algebra could be larger. Following
the lines of the Vasiliev construction in such case, the structure of the uni-
versal enveloping algebra points towards a larger infinite tower of gauge
fields including mixed-symmetry tensors.)
On the other hand, in the free “gauge” sector, rigid symmetries linked
to reducibility parameters may arise from the linearization of the gauge
symmetries of some non-linear action. Thus the complete knowledge of the
rigid symmetries of free higher-spin gauge theories would indicate what
can be the linearized connection.
Acknowledgments
I. Bakas, G. Barnich, N. Boulanger, T. Damour and J. Remmel are
thanked for very useful exchanges. The author is grateful to the orga-
nizers for their invitation to this enjoyable meeting and the opportunity
to present his lecture. The Institut des Hautes Études Scientifiques de
Bures-sur-Yvette is acknowledged for its hospitality.
References
[1] D. Sorokin, AIP Conf. Proc. 767 (2005) 172 [hep-th/0405069];
N. Bouatta, G. Compere and A. Sagnotti, in the proceedings of the
“First Solvay Workshop on Higher-Spin Gauge Theories” (Brussels,
Belgium; May 2004) [hep-th/0409068].
[2] T. Ortin, Gravity and strings (Cambridge, 2004).
[3] F. A. Berends, G. J. H. Burgers and H. van Dam, Nucl. Phys. B 260
(1985) 295.
[4] M. A. Vasiliev, Comptes Rendus Physique 5 (2004) 1101
[hep-th/0409260];
X. Bekaert, S. Cnockaert, C. Iazeolla and M. A. Vasiliev, in the
proceedings of the “First Solvay Workshop on Higher-Spin Gauge
Theories” (Brussels, Belgium; May 2004) [hep-th/0503128].
[5] G. Thompson, J. Math. Phys. 27 (1986) 2693;
R. G. McLenaghan, R. Milson and R. G. Smirnov, C. R. Acad. Sci.
Paris, Ser. I 339 (2004) 621.
[6] G. Barnich and M. Henneaux, Phys. Lett. B 311 (1993) 123
[hep-th/9304057];
M. Henneaux, Contemp. Math. 219 (1998) 93 [hep-th/9712226].
[7] X. Bekaert, N. Boulanger, S. Cnockaert and S. Leclercq, Fortsch.
Phys. 54 (2006) 282 [hep-th/0602092].
[8] X. Bekaert, in the proceedings of the “First Modave Summer School
in Mathematical Physics” (Modave, Belgium; June 2005).
[9] D.E. Littlewood, The theory of group characters and matrix repre-
sentations of groups (Clarendon, 1958);
G. R. E. Black, R. C. King and B. G. Wybourne, J. Phys. A: Math.
Gen. 16 (1983) 1555.
[10] A. Mikhailov, “Notes on higher spin symmetries,” hep-th/0201019.
[11] A. B. Zamolodchikov and A. B. Zamolodchikov, Annals Phys. 120
(1979) 253.
[12] M. A. Vasiliev, Int. J. Mod. Phys. D 5 (1996) 763 [hep-th/9611024].
[13] R. Geroch, J. Math. Phys. 11 (1970) 1955;
M. G. Eastwood, “Higher symmetries of the Laplacian,”
hep-th/0206233.
[14] S. E. Konstein, M. A. Vasiliev and V. N. Zaikin, JHEP 0012 (2000)
018 [hep-th/0010239].
[15] I. Bakas, B. Khesin and E. Kiritsis, Commun. Math. Phys. 151
(1993) 233.
[16] R. K. Dodd and R. K. Bullough, Proc. Roy. Soc. Lond. A 352 (1977)
[17] G. Barnich and F. Brandt, Nucl. Phys. B 633 (2002) 3
[hep-th/0111246].
[18] D. Anselmi, Class. Quant. Grav. 17 (2000) 1383 [hep-th/9906167];
M. A. Vasiliev, in M. Shifman ed., The many faces of the superworld
(World Scientific, 2000) [hep-th/9910096].
[19] D. Francia and A. Sagnotti, Class. Quant. Grav. 20 (2003)
S473 [hep-th/0212185]; J. Phys. Conf. Ser. 33 (2006) 57
[hep-th/0601199].
[20] D. Francia, J. Mourad and A. Sagnotti, Nucl. Phys. B 773 (2007)
203 [hep-th/0701163];
I. L. Buchbinder, A. V. Galajinsky and V. A. Krykhtin, Nucl. Phys.
B 779 (2007) 155 [hep-th/0702161].
[21] X. Bekaert, work in progress.
|
0704.0899 | CalFUSE v3: A Data-Reduction Pipeline for the Far Ultraviolet
Spectroscopic Explorer | To Appear in Publications of the Astronomical Society of the Pacific
Preprint typeset using LATEX style emulateapj v. 08/22/09
CALFUSE v3: A DATA-REDUCTION PIPELINE FOR THE FAR ULTRAVIOLET SPECTROSCOPIC
EXPLORER1
W. V. Dixon, D. J. Sahnow, P. E. Barrett, T. Civeit, J. Dupuis, A. W. Fullerton, B. Godard, J.-C. Hsu, M. E. Kaiser, J. W. Kruk, S. Lacour, D. J. Lindler, D. Massa, R. D. Robinson, M. L. Romelfanger, and P. Sonnentrucker
To Appear in Publications of the Astronomical Society of the Pacific
ABSTRACT
Since its launch in 1999, the Far Ultraviolet Spectroscopic Explorer (FUSE) has made over 4600
observations of some 2500 individual targets. The data are reduced by the Principal Investigator
team at the Johns Hopkins University and archived at the Multimission Archive at Space Telescope
(MAST). The data-reduction software package, called CalFUSE, has evolved considerably over the
lifetime of the mission. The entire FUSE data set has recently been reprocessed with CalFUSE v3.2,
the latest version of this software. This paper describes CalFUSE v3.2, the instrument calibrations
upon which it is based, and the format of the resulting calibrated data files.
Subject headings: instrumentation: spectrographs — methods: data analysis — space vehicles: in-
struments — ultraviolet: general — white dwarfs
1. INTRODUCTION
The Far Ultraviolet Spectroscopic Explorer (FUSE) is a
high-resolution, far-ultraviolet spectrometer operating in
the 905–1187 Å wavelength range. FUSE was launched
in 1999 on a Delta II rocket into a nearly circular, low-
earth orbit with an inclination of 25◦ to the equator and
an approximately 100-minute orbital period. Data ob-
tained with the instrument are reduced by the principal
investigator team at the Johns Hopkins University using
a suite of computer programs called CalFUSE. Both raw
and processed data files are deposited in the Multimis-
sion Archive at Space Telescope (MAST).
CalFUSE evolved considerably in the years following
launch as our increasing knowledge of the spectrograph’s
performance allowed us to correct the data for more and
more instrumental effects. The program eventually be-
came unwieldy, and in 2002 we began a project to re-
write the code, incorporating our new understanding of
the instrument and best practices for data reduction.
1 Based on observations made with the NASA-CNES-CSA Far
Ultraviolet Spectroscopic Explorer. FUSE is operated for NASA by
the Johns Hopkins University under NASA contract NAS5-32985.
2 Department of Physics and Astronomy, Johns Hopkins Univer-
sity, 3400 N. Charles Street, Baltimore, MD 21218
3 Space Telescope Science Institute, ESS/SSG, 3700 San Martin
Drive, Baltimore, MD 21218
4 Current address: Earth Orientation Department, U.S. Naval
Observatory, 3450 Massachusetts Avenue NW, Washington, DC
20392
5 Primary affiliation: Centre National d’Études Spatiales, 2 place
Maurice Quentin, 75039 Paris Cedex 1, France
6 Current address: Canadian Space Agency, 6767 route de
l’Aéroport, Longueuil, QC, Canada, J3Y 8Y9
7 Primary affiliation: Department of Physics and Astronomy,
University of Victoria, P. O. Box 3055, Victoria, BC V8W 3P6,
Canada
8 Current address: Institut d’Astrophysique de Paris, 98 bis,
boulevard Arago, 75014 Paris, France
9 Retired
10 Current address: Sydney University, NSW 2006, Australia
11 Sigma Space Corporation, 4801 Forbes Boulevard, Lanham,
MD 20706
12 SGT, Inc., NASA Goddard Space Flight Center, Code 665.0,
Greenbelt, MD 20771
The result is CalFUSE v3, which produces a higher qual-
ity of calibrated data while running ten times faster than
previous versions. The entire FUSE archive has recently
been reprocessed with CalFUSE v3.2; we expect this to
be the final calibration of these data.
In this paper, we describe CalFUSE v3.2.0 and its cal-
ibrated data products. Because this document is meant
to serve as a resource for researchers analyzing archival
FUSE spectra, we emphasize the interpretation of pro-
cessed data files obtained from MAST rather than the de-
tails of designing or running the pipeline. An overview of
the FUSE instrument is provided in § 2, and an overview
of the pipeline in § 3. Section 4 presents a detailed de-
scription of the pipeline modules and their subroutines.
The FUSE wavelength and flux calibration are discussed
in § 5, and a few additional topics are considered in § 6. A
detailed description of the various file formats employed
by CalFUSE is presented in the Appendix.
Additional documentation available from MAST in-
cludes the CalFUSE Homepage,13 The CalFUSE
Pipeline Reference Guide,14 The FUSE Instrument and
Data Handbook,15 and The FUSE Data Analysis Cook-
book.16
2. THE FUSE INSTRUMENT
FUSE consists of four co-aligned prime-focus tele-
scopes, each with its own Rowland spectrograph (Fig. 1).
Two of the four channels employ Al+LiF optical coatings
and record spectra over the wavelength range ∼990–1187
Å, while the other two use SiC coatings, which provide
reflectivity to wavelengths below the Lyman limit. The
four channels overlap between 990 and 1070 Å. Spectral
resolution is roughly 20,000 (λ/∆λ) for point sources.
For a complete description of FUSE, see Moos et al.
(2000) and Sahnow et al. (2000a).
At the prime focus of each mirror lies a focal-plane as-
13 http://archive.stsci.edu/fuse/calfuse.html
14 http://archive.stsci.edu/fuse/pipeline.html
15 http://archive.stsci.edu/fuse/dhbook.html
16 http://archive.stsci.edu/fuse/cookbook.html
http://arxiv.org/abs/0704.0899v1
http://archive.stsci.edu/fuse/calfuse.html
http://archive.stsci.edu/fuse/pipeline.html
http://archive.stsci.edu/fuse/dhbook.html
http://archive.stsci.edu/fuse/cookbook.html
sembly (or FPA, shown in Fig. 2) containing three spec-
trograph entrance apertures: the low-resolution aperture
(LWRS; 30′′ × 30′′), used for most observations, the
medium-resolution aperture (MDRS; 4′′ × 20′′), and the
high-resolution aperture (HIRS; 1.25′′ × 20′′). The ref-
erence point (RFPT) is not an aperture; when a target
is placed at this location, the three apertures sample the
background sky. For a particular exposure, the FITS
file header keywords RA_TARG and DEC_TARG con-
tain the J2000 coordinates of the aperture (or RFPT)
listed in the APERTURE keyword, while the keyword
APER_PA contains the position angle of the −Y axis (in
the FPA coordinate system; see Fig. 2), corresponding to
a counter-clockwise rotation of the spacecraft about the
target (and thus about the center of the target aperture).
The spectra from the four instrument channels are
imaged onto two photon-counting microchannel-plate
(MCP) detectors, labeled 1 and 2, with a LiF spectrum
and a SiC spectrum on each (Fig. 1). Each detector is
comprised of two MCP segments, labeled A and B. Raw
science data from each detector segment are stored in
a separate data file; an exposure thus yields four raw
data files, labeled 1A, 1B, 2A, and 2B. Because the three
apertures are open to the sky at all times, the LiF and
SiC channels each generate three spectra, one from each
aperture. In most cases, the non-target apertures are
empty and sample the background sky. Figure 3 presents
a fully-corrected image of detector 1A obtained during
a bright-earth observation. The emission features in all
three apertures are geocoronal. Note that the LiF1 wave-
length scale increases to the right, while the SiC1 scale
increases to the left. The Lyman β λ1026 airglow feature
is prominent in each aperture.
Two observing modes are available: In photon-address
mode, also known as time-tag or TTAG mode, the X and
Y coordinates and pulse height (§ 4.3.7) of each detected
photon are stored in a photon-event list. A time stamp
is inserted into the data stream, typically once per sec-
ond. Data from the entire active area of the detector are
recorded. Observing bright targets in time-tag mode can
rapidly fill the spacecraft recorder. Consequently, when
a target is expected to generate more than ∼ 2500 counts
s−1 across all four detector segments, the data are stored
in spectral-image mode, also called histogram or HIST
mode. To conserve memory, histogram data are (usu-
ally) binned by eight pixels in Y (the spatial dimension),
but unbinned in X (the dispersion dimension). Only data
obtained through the target aperture are recorded. Indi-
vidual photon arrival time and pulse height information
is lost. The orbital velocity of the FUSE spacecraft is 7.5
km s−1. Since Doppler compensation is not performed
by the detector electronics, histogram exposures must be
kept short to preserve spectral resolution; a typical his-
togram exposure is about 500 s in length.
The front surfaces of the FPAs are reflective in visible
light. On the two LiF channels, light not passing through
the apertures is reflected into a visible-light CCD camera.
Images of stars in the field of view around the apertures
are used for acquisition and guiding by this camera sys-
tem, called the Fine Error Sensor (FES). FUSE carries
two redundant FES cameras, which were provided by
the Canadian Space Agency. FES A views the FPA on
the LiF1 channel, and FES B views the LiF2 FPA. Dur-
ing initial checkout, FES A was designated the default
camera and was used for all science observations until it
began to malfunction in 2005. In July of that year, FES
B was made the default guide camera. Implications of
the switch from FES A to FES B are discussed in § 6.1.
3. OVERVIEW OF CALFUSE
The new CalFUSE pipeline was designed with three
principles in mind: the first was that, to the extent possi-
ble, we follow the path of a photon backwards through
the instrument, correcting for the instrumental effects in-
troduced in each step. The principal steps in this path,
together with the effects imparted by each, are listed be-
low. Most of the optical and electronic components in
this list are labeled in Fig. 1.
1. Satellite motion imparts a Doppler shift.
2. Satellite pointing instabilities shift the target image
within (or out of) the aperture.
3. Thermally-induced mirror motions shift the target im-
age within (or out of) the aperture.
4. FPA offsets shift the spectrum on the detector.
5. Thermally-induced motions of the spectrograph grat-
ings shift the target image within (or out of) the aper-
ture.
6. Ion-repelling wire grids can cast shadows called
“worms.”
7. Detector effects include quantum efficiency, flat field,
dead spots, and background.
8. The spectra are distorted by temperature-, count-rate,
time-, and pulse-height-dependent errors in the photons’
measured X and Y coordinates, as well as smaller-scale
geometric distortions in the detector image.
9. Count-rate limitations in the detector electronics and
the IDS data bus are sources of dead time.
To correct for these effects, we begin at the bottom
of the list and (to the extent possible) work backwards.
First, we adjust the photon weights to account for data
lost to dead time (9) and correct the photons’ X and
Y coordinates for a variety of detector distortions (8).
Second, we identify periods of unreliable, contaminated,
or missing data. Third, we correct the photons’ X and
Y coordinates for grating (5), FPA (4), mirror (3), and
spacecraft (2) motions. Fourth, we assign a wavelength
to each photon based on its corrected X and Y coor-
dinates (5), then convert to a heliocentric wavelength
scale (1). Finally, we correct for detector dead spots
(7); model and subtract the detector and scattered-light
backgrounds (7); and extract (using optimal extraction,
if possible), flux calibrate (7) and write to separate FITS
files the target’s LiF and SiC spectra. Note that we can-
not correct for the effects of worms (6) or the detector
flat field (7).
Our second principle was to make the pipeline as mod-
ular as possible. CalFUSE is written in the C program-
ming language and runs on the Solaris, Linux, and Mac
OS X (versions 10.2 and higher) operating systems. The
pipeline consists of a series of modules called by a shell
script. Individual modules may be executed from the
command line. Each performs a set of related correc-
tions (screen data, remove motions, etc.) by calling a
series of subroutines.
Our third principle was to maintain the data as a pho-
ton list (called an intermediate data file, or IDF) un-
til the final module of the pipeline. Input arrays are
read from the IDF at the beginning of each module, and
output arrays are written at the end. Bad photons are
flagged but not discarded, so the user can examine, fil-
ter, and combine processed data files without re-running
the pipeline. Like all FUSE data, IDFs are stored as
FITS files (Hanisch et al. 2001); the various file formats
employed by CalFUSE are described in the Appendix.
A FUSE observation consists of a set of exposures ob-
tained with a particular target in a particular aperture
on a particular date. Each exposure generates four raw
data files, one per detector segment, and each raw data
file yields a pair of calibrated spectra (LiF and SiC), for
a total of 8 calibrated spectral files per exposure. Each
raw data file is processed individually by the pipeline.
Error and status messages are written to a trailer file
(described in § 4.10). Spectra are extracted only for
the target aperture and are binned in wavelength. Bin-
ning can be set by the user, but the default is 0.013 Å,
which corresponds to about two detector pixels or one
fourth of a point-source resolution element. After pro-
cessing, additional software is used to generate a set of
observation-level spectral files, the ALL, ANO, and NVO
files described in § 4.11. A complete list of FUSE data
files and file-naming conventions may be found in The
FUSE Instrument and Data Handbook. All of the expo-
sures that constitute an observation are processed and
archived together.
Investigators who wish to re-process their data may
retrieve the CalFUSE source code and all associated cal-
ibration files from the CalFUSE Homepage. Instructions
for running the pipeline and detailed descriptions of the
calibration files are provided in The CalFUSE Pipeline
Reference Guide. Note that, within the CalFUSE soft-
ware distribution, all of the calibration files, including
the FUSE.TLE file (§ 4.2), are stored in the directory
v3.2/calfiles, while all of the parameter files, including
master calib file.dat and the screening and parameter
files (SCRN CAL and PARM CAL; § 4.2), are stored
in the directory v3.2/parmfiles.
4. STEP BY STEP
In this section, we discuss the pipeline subroutine
by subroutine. Our goal is to describe the algorithms
employed by each subroutine and any shortcomings or
caveats of which the user should be aware.
4.1. OPUS
The Operations Pipeline Unified System (OPUS) is
the data-processing system used by the Space Telescope
Science Institute to reduce science data from the Hub-
ble Space Telescope (HST). We use a FUSE-specific ver-
sion of OPUS to manage our data processing (Rose et al.
1998). OPUS ingests the data downlinked by the space-
craft and produces the data files that serve as input to
the CalFUSE pipeline. OPUS then manages the execu-
tion of the pipeline and the files produced by CalFUSE
and calls the additional routines that combine spectra
from each channel and exposure into a set of observation-
level spectral files. OPUS reads the FUSE Mission Plan-
ning Database (which contains target information from
the individual observing proposals and instrument con-
figuration and scheduling information from the mission
timeline) to populate raw file header keywords and to
verify that all of the data expected from an observation
were obtained.
OPUS generates six data files for each exposure. Four
are raw data files (identified by the suffix “fraw.fit”), one
for each detector segment. One is a housekeeping file
(“hskpf.fit”) containing time-dependent spacecraft engi-
neering data. Included in this file are detector volt-
ages, count rates, and spacecraft-pointing information.
The housekeeping file is used to generate a jitter file
(“jitrf.fit”), which contains information needed to cor-
rect the data for spacecraft motion during an exposure.
Detailed information on the format and contents of each
file is provided in the Appendix.
4.2. Generate the Intermediate Data File
The first task of the pipeline is to convert the raw data
file into an intermediate data file (IDF), which maintains
the data in the form of a photon-event list. (The format
and contents of the IDF are described in § A-3.) For data
obtained in time-tag mode, the module cf ttag init
merely copies the arrival time, X and Y detector coor-
dinates, and pulse-height of each photon event from the
raw file to the TIME, XRAW, YRAW, and PHA arrays of
the IDF. A fifth array, the photon weight, is initially set
to unity. Photons whose X and Y coordinates place them
outside of the active region of the detector are flagged as
described in § 4.3.8. Raw histogram data are stored by
OPUS as an image; the module cf hist init converts
each non-zero pixel of that image into a single entry in
the IDF, with X and Y equal to the pixel coordinates
(mapped to their location on the unbinned detector),
arrival time set to the mid-point of the exposure, and
pulse height set to 20 (possible values range from 0 to
31). The arrival time and pulse height are modified later
in the pipeline. The photon weight is set to the number
of accumulated counts on the pixel, i.e., the number of
photons detected on that region of the detector.
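The conversion performed by cf hist init amounts to a loop over the non-zero pixels of the histogram image. The following is a minimal sketch in C; the structure and function names are invented for illustration, the Y mapping to the unbinned detector is assumed to place events at bin centers, and the real IDF arrays (§ A-3) differ in detail.

```c
#include <stdlib.h>

/* Hypothetical photon-event record; the real IDF arrays differ. */
typedef struct {
    float time;          /* arrival time (s from exposure start)     */
    float x, y;          /* unbinned detector coordinates            */
    unsigned char pha;   /* pulse height (0-31)                      */
    float weight;        /* counts accumulated on the pixel          */
} Photon;

/* Convert a binned histogram image (nx by ny, binned by ybin in Y)
 * into a photon-event list.  Returns the number of events created. */
long hist_to_events(const int *image, int nx, int ny, int ybin,
                    float exptime, Photon **events)
{
    long n = 0;
    *events = malloc((size_t)nx * (size_t)ny * sizeof(Photon));
    if (*events == NULL) return 0;

    for (int j = 0; j < ny; j++) {
        for (int i = 0; i < nx; i++) {
            int counts = image[j * nx + i];
            if (counts == 0) continue;            /* skip empty pixels        */
            Photon *p = &(*events)[n++];
            p->x = (float)i;                      /* X is unbinned            */
            p->y = (float)(j * ybin + ybin / 2);  /* map to unbinned Y        */
            p->time = 0.5f * exptime;             /* mid-point of exposure    */
            p->pha = 20;                          /* placeholder pulse height */
            p->weight = (float)counts;            /* counts on this pixel     */
        }
    }
    return n;
}
```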
The IDF has two additional extensions. The first con-
tains the good-time intervals (GTIs), a series of start
and stop times (in seconds from the exposure start time
recorded in the file header) computed by OPUS, when
the data are thought to be valid. For time-tag data, this
extension is copied directly from the raw data file. For
histogram data, a single GTI is generated with START =
0 and STOP = EXPTIME (the exposure time computed
by OPUS). The final extension is called the timeline ta-
ble and consists of 16 arrays containing status flags and
spacecraft-position, detector high-voltage, and count-
rate parameters tabulated once per second throughout
the exposure. Only the day/night and OPUS bits of the
time-dependent status flags are populated (§ A-3); the
others are initialized to zero. The spacecraft-position pa-
rameters are computed as described below. The detector
voltages and the values of various counters are read from
the housekeeping data file.
A critical step in the initialization of the IDF is pop-
ulating the file-header keywords that describe the space-
craft’s orbit and control the subsequent actions of the
pipeline. The names of all calibration files to be used by
the pipeline are read from the file master calib file.dat
and written to file-header keywords. (Keywords for each
calibration file are included in the discussion that fol-
lows.) Three sets of calibration files are time-dependent:
the effective area is interpolated from the two files
with effective dates immediately preceding and follow-
ing the exposure start time (these file names are stored
in the header keywords AEFF1CAL and AEFF2CAL);
the scattered-light model is taken from the file with an
effective date immediately preceding the exposure start
time (keyword BKGD CAL); and the orbital elements
are read from the FUSE.TLE file, an ASCII file contain-
ing NORAD two-line elements for each day of the mis-
sion. These two-line elements are used to populate both
the orbital ephemeris keywords in the IDF file header and
the various spacecraft-position arrays in the timeline ta-
ble. Finally, a series of data-processing keywords is set
to either PERFORM or OMIT the subsequent steps of
the pipeline. Once a step is performed, the correspond-
ing keyword is set to COMPLETE. Some user control of
the pipeline is provided by the screening and parameter
files (SCRN CAL and PARM CAL), which allow one,
for example, to select only night-time data or to turn
off background subtraction. An annotated list of file-
header keywords, including the calibration files used by
the pipeline, is provided in the FUSE Instrument and
Data Handbook.
Caveats: Occasionally, photon arrival times in raw
time-tag data files are corrupted. When this happens,
some fraction of the photon events have identical, enor-
mous TIME values, and the good-time intervals contain
an entry with START and STOP set to the same large
value. The longest valid exposure spans 55 ks (though
most are ∼ 2 ks long). If an entry in the GTI table ex-
ceeds this value, the corresponding entry in the timeline
table is flagged as bad (using the “photon arrival time
unknown” flag; § A-3). Bad TIME values less than 55 ks
will not be detected by the pipeline.
Raw histogram files may also be corrupted. OPUS fills
missing pixels in a histogram image with the value 21865.
The pipeline sets the WEIGHT of such pixels to zero and
flags them as bad (by setting the photon’s “fill-data bit”;
§ A-3). Occasionally, a single bit in a histogram image
pixel is flipped, producing (for high-order bits) a “hot
pixel” in the image. The pipeline searches for pixels with
values greater than 8 times the average of their neighbors,
identifies the flipped bit, and resets it.
One or more image extensions may be missing from a
raw histogram file (§ A-2). If no extensions are present,
the keyword EXP STAT in the IDF header is set to −1.
Exposures with non-zero values of EXP STAT are pro-
cessed normally by the pipeline, but are not included in
the observation-level spectral files ultimately delivered to
MAST (§ 4.11). Though the file contains no data, the
header keyword EXPTIME is not set to zero.
Early versions of the CalFUSE pipeline did not make
use of the housekeeping files, but instead employed engi-
neering information downloaded every five minutes in a
special “engineering snapshot” file. That information is
used by OPUS to populate a variety of header keywords
in the raw data file. If a housekeeping file is not avail-
able, CalFUSE v3 uses these keywords to generate the
detector high-voltage and count-rate arrays in the time-
line table. Should these header keywords be corrupted,
the pipeline issues a warning and attempts to estimate
the corrupted values. In such cases, it is wise to compare
the resulting dead-time corrections (§ 4.3.2) with those
of other, uncorrupted exposures of the same target.
4.3. Convert to FARF
The pipeline module cf convert to farf is designed
to remove detector artifacts. Our goal is to construct
the data set that would be obtained with an ideal detec-
tor. The corrections can be grouped into two categories:
dead-time effects, which are system limitations that re-
sult in the loss of photon events recorded by the detec-
tor, and positional inaccuracies, i.e., errors in the raw
X and Y pixel coordinates of individual photon events.
The coordinate system defined by these corrections is
called the flight alignment reference frame, or FARF.
Corrected coordinates for each photon event are written
to the XFARF and YFARF arrays of the IDF.
4.3.1. Digitizer Keywords
The first subroutine of this module,
cf check digitizer, merely compares a set of 16
IDF file header keywords, which record various detector
settings, with reference values stored in the calibration
file DIGI CAL. Significant differences result in warning
messages being written to both the file header and the
exposure trailer file. Such warning messages should be
taken seriously, as data obtained when the detectors are
not properly configured are likely to be unusable. Be-
sides issuing a warning, the program sets the EXP STAT
keyword in the IDF header to −2.
4.3.2. Detector Dead Time
The term “dead time” refers specifically to the finite
time interval required by the detector electronics to pro-
cess a photon event. During this interval, the detector is
“dead” to incoming photons. The term is more generally
applied to any loss of data that is count-rate dependent.
There are three major contributions to the effective de-
tector dead time on FUSE. The first is due to limitations
in the detector electronics, which at high count rates may
not be able to process photon events as fast as they ar-
rive. The correction for this effect is computed separately
for each segment from the count rate measured at the
detector anode by the Fast Event Counter (FEC) and
recorded to the engineering data stream, typically once
every 16 seconds. The functional form of the correction
was provided by the detector development group at the
University of California, Berkeley, and its numerical con-
stants were determined from in-flight calibration data. It
is applied by the subroutine cf electronics dead time.
A second contribution to the dead time comes from
the way that the Instrument Data System (IDS) pro-
cesses counts coming from the detector. The IDS can
accept at most 8000 counts per second in time-tag mode
and 32000 counts per second in histogram mode from
the four detector segments (combined). At higher count
rates, photon events are lost. To correct for such losses,
the subroutine cf ids dead time compares the Active
Image Counter (AIC) count rate, measured at the back
end of the detector electronics, with the maximum al-
lowed rate. The IDS dead-time correction is the ratio of
these two numbers (or unity, whichever is greater).
A third contribution occurs when time-tag data are
bundled into 64 kB data blocks in the IDS bulk memory.
This memory is organized as a software FIFO (first-in,
first-out) memory buffer, and the maximum data transfer
rate from it to the spacecraft recorder (the FIFO drain
rate) is approximately 3500 events per second. At higher
count rates, the FIFO will eventually fill, resulting in the
loss of one or more data blocks. The effect appears as
a series of data drop-outs, each a few seconds in length,
in the raw data files. The correction, computed by the
subroutine cf fifo dead time, is simply the ratio of the
AIC count rate to the FIFO drain rate. When triggered,
this correction incorporates (and replaces) the IDS cor-
rection discussed above.
The total dead-time correction (always ≥ 1.0) is
simply the product of the detector electronics and
IDS corrections. It is computed (by the subroutine
cf apply dead time) once each second and applied to
the data by scaling the WEIGHT associated with each
photon event. The mean values of the detector electron-
ics, IDS, and total dead-time corrections are stored in
the DET DEAD, IDS DEAD, and TOT DEAD header
keywords, respectively. Other possible sources of dead
time, such as losses due to the finite response time of the
MCPs, have a much smaller effect and are ignored.
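As a concrete illustration, the sketch below combines per-second electronics and IDS corrections and scales the photon weights. The function names, array layout, and the placeholder electronics formula are assumptions; the FIFO case, which replaces the IDS correction when triggered, is omitted.

```c
/* Placeholder for the electronics dead-time correction derived from the
 * FEC count rate; the true functional form is not reproduced here. */
static double electronics_correction(double fec_rate)
{
    return 1.0 + 1.0e-6 * fec_rate;        /* illustrative only */
}

/* IDS correction: ratio of the AIC rate to the maximum allowed rate,
 * never less than unity. */
static double ids_correction(double aic_rate, double max_rate)
{
    double r = aic_rate / max_rate;
    return (r > 1.0) ? r : 1.0;
}

/* Scale each photon's WEIGHT by the total correction for the second in
 * which it arrived; fec_rate and aic_rate are sampled once per second. */
void apply_dead_time(float *weight, const float *time, long nevents,
                     const double *fec_rate, const double *aic_rate,
                     long nsec, double max_rate)
{
    for (long i = 0; i < nevents; i++) {
        long s = (long)time[i];
        if (s < 0) s = 0;
        if (s >= nsec) s = nsec - 1;
        double corr = electronics_correction(fec_rate[s]) *
                      ids_correction(aic_rate[s], max_rate);
        weight[i] *= (float)corr;          /* total correction is always >= 1 */
    }
}
```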
Caveats: Our dead-time correction algorithms are in-
appropriate for very bright targets. If the header key-
word TOT DEAD > 1.5, then the exposure should not
be considered photometric. If the housekeeping file for
a particular exposure is missing, the file header key-
words from which the count rates are calculated appear
to be corrupted, and either DET DEAD or IDS DEAD
is > 1.5, then the dead-time correction is assumed to be
meaningless and is set to unity. In both of these cases,
warning messages are written to the file header and the
trailer file.
4.3.3. Temperature-Dependent Changes in Detector
Coordinates
The X and Y coordinates of a photon event do not
correspond to a physical pixel on the detector, but
are calculated from timing and voltage measurements
of the incoming charge cloud (Siegmund et al. 1997;
Sahnow et al. 2000b). As a result, the detector coor-
dinate system is subject to drifts in the detector elec-
tronics caused by temperature changes and other effects.
To track these drifts, two signals are periodically injected
into the detector electronics. These “stim pulses” appear
near the upper left and upper right corners of each de-
tector, outside of the active spectral region. The stim
pulses are well placed for tracking changes in the scale
and offset of the X coordinate, but they are not well
enough separated in Y to track scale changes along that
axis. The subroutine cf thermal distort determines
the X and Y centroids of the stim pulses, computes the
linear transformation necessary to move them to their
reference positions, and applies that transformation to
the X and Y coordinates of each photon event in the re-
gions of the stim pulses and in the active region of the
detector. Events falling within 64 pixels (in X and Y)
of the expected stim-pulse positions are flagged by set-
ting the stim-pulse bit in the LOC FLGS array (§ A-3).
In raw histogram files, the stim pulses are stored in a
pair of image extensions. If either of these extensions is
missing, the pipeline reads the expected positions of the
stim pulses from the calibration file STIM CAL and ap-
plies the corresponding correction. This works (to first
order) because the stim pulses drift slowly with time,
though short-timescale variations cannot be corrected if
the stim pulses are absent.
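The transformation applied by cf thermal distort can be summarized as a scale-and-offset fit in X and a mean offset in Y. The sketch below assumes the two stim-pulse centroids have already been measured; all names and the structure layout are illustrative.

```c
/* Linear transformation mapping the measured stim-pulse centroids onto
 * their reference positions.  Names and layout are illustrative. */
typedef struct {
    double xscale, xoffset;    /* X' = xscale * X + xoffset             */
    double yoffset;            /* Y' = Y + yoffset (no reliable Y scale) */
} StimTransform;

StimTransform stim_transform(double xl_meas, double xr_meas,  /* measured centroids  */
                             double yl_meas, double yr_meas,
                             double xl_ref,  double xr_ref,   /* reference positions */
                             double yl_ref,  double yr_ref)
{
    StimTransform t;
    t.xscale  = (xr_ref - xl_ref) / (xr_meas - xl_meas);
    t.xoffset = xl_ref - t.xscale * xl_meas;
    /* The stim pulses are too close together in Y to constrain a scale,
     * so only a mean offset is applied in that dimension. */
    t.yoffset = 0.5 * ((yl_ref - yl_meas) + (yr_ref - yr_meas));
    return t;
}

void apply_stim_transform(float *x, float *y, long n, StimTransform t)
{
    for (long i = 0; i < n; i++) {
        x[i] = (float)(t.xscale * x[i] + t.xoffset);
        y[i] = (float)(y[i] + t.yoffset);
    }
}
```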
4.3.4. Count-Rate Dependent Changes in Detector Y Scale
For reasons not yet understood, the detector Y scale
varies with the count rate, in the sense that the detector
image for a high count-rate exposure is expanded in Y. To
measure this effect, we tabulated the positions of individ-
ual detector features (particularly bad-pixel regions) as a
function of the FEC count rate (§ 4.3.2) and determined
the Y corrections necessary to shift each detector feature
to its observed position in a low count-rate exposure.
From this information, we derived the calibration file
RATE CAL for each detector segment. The correction
is stored as a two-dimensional image: the first dimension
represents the count rate and the second is the observed
Y pixel value. The value of each image pixel is the Y shift
(in pixels) necessary to move a photon to its corrected
position. The subroutine cf count rate y distort ap-
plies this correction to each photon event in the active
region of the detector. For time-tag data, the FEC count
rate is used to compute a time- and Y-dependent correc-
tion; for histogram data, the weighted mean of the FEC
count rate is used to derive a set of shifts that depends
only on Y.
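Conceptually, the correction is a table lookup indexed by count rate and observed Y. The sketch below assumes a particular array layout and uses a nearest-rate lookup; the pipeline's actual interpolation scheme is not reproduced.

```c
#include <math.h>

/* Look up the Y shift for a photon from a RATE_CAL-style image.
 * "rates" gives the count rate corresponding to each row of the image;
 * layout and names are assumptions for illustration. */
float count_rate_y_shift(const float *shift_image, const float *rates,
                         int nrate, int ny, double fec_rate, int yraw)
{
    /* Find the nearest tabulated count rate (a real implementation
     * might interpolate between rows instead). */
    int best = 0;
    double dmin = 1e30;
    for (int r = 0; r < nrate; r++) {
        double d = fabs(rates[r] - fec_rate);
        if (d < dmin) { dmin = d; best = r; }
    }
    if (yraw < 0) yraw = 0;
    if (yraw >= ny) yraw = ny - 1;
    return shift_image[best * ny + yraw];   /* shift (pixels) to add to Y */
}
```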
4.3.5. Time-Dependent Changes in Detector Coordinates
As the detector and its electronics age, their proper-
ties change, resulting in small drifts in the computed
coordinates of photon events. These changes are most
apparent in the Lyman β airglow features observed in
each of the three apertures of the LiF and SiC channels
(Fig. 3), which drift slowly apart in Y as the mission pro-
gresses, indicating a time-dependent stretch in the detec-
tor Y scale. To correct for this stretch, the subroutine
cf time xy distort applies a time-dependent correction
(stored in the calibration file TMXY CAL) to the Y co-
ordinate of each photon event in the active region of the
detector.
Caveats: Although there is likely to be a similar change
to the X coordinate, no measurement of time-dependent
drifts in that dimension is available, so no correction is
applied.
4.3.6. Geometric Distortion
In an image of the detector generated from raw X and
Y coordinates, the spectrum is not straight, but wig-
gles in the Y dimension (Fig. 4). To map these geo-
metric distortions, we made use of two wire grids (the
so-called “quantum efficiency” and “plasma” grids) that
lie in front of each detector segment. Both grids are
regularly spaced and cover the entire active area of the
detectors. Although designed to be invisible in the spec-
tra, they cast sharp shadows on the detector when il-
luminated directly by on-board stimulation (or “stim”)
lamps. We determined the shifts necessary to straighten
these shadows. Their spacing is approximately 1 mm, too
great to measure fine-scale structure in the X dimension,
but sufficient for the Y distortion. Geometric distortions
in the X dimension have the effect of compressing and
expanding the spectrum in the dispersion direction, so
the X distortion correction is derived in parallel with the
wavelength calibration as described in § 5.1. The geomet-
ric distortion corrections are stored in a set of calibration
files (GEOM CAL) as pairs of 16384 × 1024 images, one
each for the X and Y corrections. The value of each im-
age pixel is the shift necessary to move a photon to its
corrected position. This shift is applied by the subrou-
tine cf geometric distort.
Caveats: Though designed to be invisible, the wire
grids can cast shadows that are visible in the spectra of
astrophysical targets. These shadows are the “worms”
discussed in § 6.3.
4.3.7. Pulse-Height Variations in Detector X Scale
The FUSE detectors convert each ultraviolet photon
into a shower of electrons, for which the detector elec-
tronics calculate the X and Y coordinates and the to-
tal charge, or pulse height. Prolonged exposure to pho-
tons causes the detectors to become less efficient at
this photon-to-electron conversion (a phenomenon called
“gain sag”), and the mean pulse height slowly decreases.
Unfortunately, the X coordinate of low-pulse-height pho-
ton events is systematically miscalculated by the detec-
tor electronics. As the pulse height decreases with time,
spectral features appear to “walk” across the detector.
The strength of the effect depends on the cumulative
photon exposure experienced by each pixel and therefore
varies with location on the detector.
To measure the error in X as a function of pulse height,
we used data from long stim lamp exposures to construct
a series of 32 detector images, each containing events
with a single pulse height (allowed values range from
0 to 31). We stepped through each image in X, com-
puting the shift (∆X) necessary to align the shadow of
each grid wire with the corresponding shadow in a stan-
dard image constructed from photon events with pulse
heights between 16 and 20. The shifts were smoothed
to eliminate discontinuities and stored in calibration files
(PHAX CAL) as a two-dimensional image: the first di-
mension represents the observed X coordinate, and the
second is the pulse height. The value of each image
pixel is the walk correction (∆X) to be added to the
observed value of X. This correction, assumed to be in-
dependent of detector Y position, is applied by the sub-
routine cf pha x distort.
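The walk correction thus reduces to a lookup in a two-dimensional table indexed by observed X and pulse height. A minimal sketch, with assumed array layout and names:

```c
/* Apply a PHAX_CAL-style walk correction: the calibration image is
 * indexed by observed X and pulse height, and its value is the shift
 * to add to X.  Array layout and names are assumptions. */
void apply_walk_correction(float *x, const unsigned char *pha, long nevents,
                           const float *dx_image, int nx, int npha)
{
    for (long i = 0; i < nevents; i++) {
        int xi = (int)x[i];
        int p  = pha[i];
        if (xi < 0 || xi >= nx || p >= npha) continue;
        x[i] += dx_image[p * nx + xi];   /* independent of detector Y */
    }
}
```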
Caveats: For time-tag data, the walk correction is
straightforward and reasonably accurate. For histogram
data, pulse-height information is unavailable, so the
subroutine cf modify hist pha assigns to each photon
event the mean pulse height for that aperture, derived
from contemporaneous time-tag observations and stored
in the calibration file PHAH CAL. While this trick places
time-tag and histogram data on the same overall wave-
length scale, small-scale coordinate errors due to local-
ized regions of gain sag (e.g., around bright airglow lines,
particularly Lyman β) remain uncorrected in histogram
spectra.
4.3.8. Detector Active Region
When the IDF is first created, photon events with co-
ordinates outside the active region of the detector are
flagged as bad (§ 4.2). Once their coordinates are con-
verted to the FARF, the subroutine cf active region
flags as bad any photons that have been repositioned
beyond the active region of the detector. These limits
are read from the electronics calibration file (stored un-
der the header keyword ELEC CAL). Allowed values are
800 ≤ XFARF ≤ 15583, 0 ≤ YFARF ≤ 1023. The
active-area bit is written to the LOC FLGS array.
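A sketch of the bounds check, using the limits quoted above; the flag-bit value and names are assumptions.

```c
#include <stdint.h>

#define ACTIVE_AREA_BIT 0x01   /* illustrative bit position; see § A-3 */

/* Flag photons repositioned outside the active region of the detector. */
void flag_active_region(const float *xfarf, const float *yfarf,
                        uint8_t *loc_flags, long nevents)
{
    for (long i = 0; i < nevents; i++) {
        if (xfarf[i] < 800.0f || xfarf[i] > 15583.0f ||
            yfarf[i] < 0.0f   || yfarf[i] > 1023.0f)
            loc_flags[i] |= ACTIVE_AREA_BIT;
    }
}
```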
4.3.9. Uncorrected Detector Effects
CalFUSE does not perform any sort of flat-field correc-
tion. Pre-launch flat-field observations differ sufficiently
from in-flight data to make them unsuitable for this pur-
pose, and in-flight flat-field data are unavailable. (Even
if such data were available, any flat-field correction would
be only approximate, because MCPs do not have physical
pixels for which pixel-to-pixel variations can be clearly
delineated; § 4.3.3). As a result, detector fixed-pattern
noise limits the signal-to-noise ratio achievable in obser-
vations of bright targets. To the extent that grating,
mirror, and spacecraft motions shift the spectrum on the
detector during an exposure, these fixed-pattern features
may be averaged out. For some targets, we deliberately
move the FPAs between exposures to place the spectrum
on different regions of the detector. Combining the ex-
tracted spectra from these exposures can significantly im-
prove the resulting signal-to-noise ratio (§ 4.5.5). Other
detector effects (including the moiré pattern discussed in
§ 6.4) are described in the FUSE Instrument and Data
Handbook.
4.4. Screen Photons
The module cf screen photons calls subroutines de-
signed to identify periods of potentially bad data, such
as Earth limb-angle violations, SAA passages, and de-
tector bursts. A distinct advantage of CalFUSE v3 over
earlier versions of the pipeline is that bad data are not
discarded, but merely flagged, allowing users to mod-
ify their selection criteria without having to re-process
the data. To speed up processing, the pipeline calcu-
lates the various screening parameters once per second
throughout the exposure, sets the corresponding flags in
the STATUS FLAGS array of the timeline table, then
copies the flags from the appropriate entry of the time-
line table into the TIMEFLGS array for each photon
event (§ A-3). Many of the screening parameters applied
by the pipeline are set in the screening parameter file
(SCRN CAL). Other parameters are stored in various
calibration files as described below.
4.4.1. Airglow Events
Numerous geocoronal emission features lie within the
FUSE waveband (Feldman et al. 2001). While the
pipeline processes airglow photons in the same manner
as all other photon events in the target aperture, it is
occasionally helpful to exclude from consideration re-
gions of the detector likely to be contaminated by geo-
coronal or scattered solar emission. These regions are
listed in the calibration file AIRG CAL; the subroutine
cf screen airglow flags as airglow (by setting the air-
glow bit of the LOC FLGS array in the photon-event list)
all photon events falling within the tabulated regions –
even for data obtained during orbital night, when many
airglow features are absent.
4.4.2. Limb Angle
Spectra obtained when a target lies near the earth’s
limb are contaminated by scattered light from strong
geocoronal Lyman α and O I emission. To minimize this
effect, the subroutine cf screen limb angle reads the
LIMB ANGLE array of the timeline table, identifies pe-
riods when the target violates the limb-angle constraint,
and sets the corresponding flag in the STATUS FLAGS
array of the timeline table. Minimum limb angles for day
and night observations are read from the BRITLIMB
and DARKLIMB keywords of the screening parameter
file and copied to the IDF file header. The default limits
are 15◦ during orbital day and 10◦ during orbital night.
4.4.3. SAA Passages
The South Atlantic Anomaly (SAA) marks a depres-
sion in the earth’s magnetic field that allows particles
trapped in the Van Allen belts to reach low altitudes.
The high particle flux in this region raises the background
count rate of the FUSE detectors to unacceptable levels.
The subroutine cf screen saa compares the spacecraft’s
ground track, recorded in the LONGITUDE and LAT-
ITUDE arrays of the timeline table, with the limits of
the SAA (stored in the calibration file SAAC CAL as
a binary table of latitude-longitude pairs) and flags as
bad periods when data were taken within the SAA. Our
SAA model was derived from orbital information and
on-board counter data from the first three years of the
FUSE mission.
Caveats: Because the SAA particle flux is great enough
to damage the FUSE detectors, we end most exposures
before entering the SAA and lower the detector high volt-
age during each SAA pass. As a result, very little data
is actually flagged by this screening step.
4.4.4. Detector High Voltage
The detector high voltage is set independently for each
detector segment (1A, 1B, 2A, 2B). During normal op-
erations, the voltage on each segment alternates between
its nominal full-voltage and a reduced SAA level. The
SAA level is low enough that the detectors are not dam-
aged by the high count rates that result from SAA passes,
and it is often used between science exposures to mini-
mize detector exposure to bright airglow emission. The
full-voltage level is the normal operating voltage used
during science exposures. It is raised regularly to com-
pensate for the effects of detector gain sag. Without
this compensation, the mean pulse height of real photon
events would gradually fall below our detection thresh-
old. Unfortunately, there is a limit above which the
full-voltage level cannot be raised. Detector segment
2A reached this limit in 2003, and its voltage has not been
raised since; it will gradually become less sensitive as the frac-
tion of low-pulse-height events increases. The subroutine
cf screen high voltage reads the instantaneous value
of the detector high voltage from the HIGH VOLTAGE
array of the timeline table, compares it with the nominal
full-voltage level (stored as a function of time in the cali-
bration file VOLT CAL), and flags periods of low voltage
as bad.
For any number of reasons, an exposure may be ob-
tained with the detector high voltage at less than the
full-voltage level. To preserve as much of this data as pos-
sible, we examined all of the low-voltage exposures taken
during the first four years of the mission and found that,
for detector segments 1A, 1B, and 2B, the data quality is
good whenever the detector high voltage is greater than
85% of the nominal (time-dependent) full-voltage level.
For segment 2A, data obtained with the high voltage
greater than 90% of full are good, lower than 80% are
bad, and between 80 and 90% are of variable quality. In
this regime, the pipeline flags the affected data as good,
but writes warning messages to both the IDF header and
the trailer file. When this warning is present in time-tag
data, the user should examine the distribution of pulse
heights in the target aperture to ensure that the photon
events are well separated from the background (§ 4.4.12).
For histogram data, the spectral photometry and wave-
length scale are most likely to be affected.
Caveats: If the header keywords indicate that the de-
tector voltage was high, low, or changed during an ex-
posure, the IDF initialization routines (§ 4.2) write a
warning message to the trailer file. If a valid housekeep-
ing file is available for the exposure, this warning may
be safely ignored, because the pipeline uses housekeep-
ing information to populate the HIGH VOLTAGE array
in the timeline table and properly excludes time inter-
vals when the voltage was low. If the housekeeping file
is not present, each entry of the HIGH VOLTAGE array
is set to the “HV bias maximum setting” reported in the
IDF header. In this case, the pipeline has no information
about time-dependent changes in the detector high volt-
age, and warnings about voltage-level changes should be
investigated by the user.
4.4.5. Event Bursts
Occasionally, the FUSE detectors register large count
rates for short periods of time. These event bursts can
occur on one or more detectors and often have a complex
distribution across the detector, including scalloping and
sharp edges (Fig. 5). CalFUSE includes a module that
screens the data to identify and exclude bursts. The sub-
routine cf screen burst computes the time-dependent
count rate using data from background regions of the de-
tector (excluding airglow features) and applies a median
filter to reject time intervals whose count rates differ by
more than 5 standard deviations (the value may be set
by the user) from the mean. The algorithm rejects any
time interval in which the background rate rises rapidly,
as when an exposure extends into an SAA or the tar-
get nears the earth limb. The background rate com-
puted by the burst-rejection algorithm is stored in the
BKGD CNT RATE array of the timeline table and in-
cluded on the count-rate plots generated for each expo-
sure (§ 4.10). Burst rejection is possible only for data
obtained in time-tag mode.
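The screening logic can be sketched as follows; the window size, statistics, and names are assumptions and do not reproduce the pipeline's exact filter.

```c
#include <stdlib.h>
#include <math.h>

static int cmp_float(const void *a, const void *b)
{
    float fa = *(const float *)a, fb = *(const float *)b;
    return (fa > fb) - (fa < fb);
}

/* Median of a small window (at most 64 samples). */
static float median(const float *v, int n)
{
    float tmp[64];
    if (n > 64) n = 64;
    for (int i = 0; i < n; i++) tmp[i] = v[i];
    qsort(tmp, n, sizeof(float), cmp_float);
    return (n % 2) ? tmp[n / 2] : 0.5f * (tmp[n / 2 - 1] + tmp[n / 2]);
}

/* Flag seconds whose median-filtered background rate deviates from the
 * mean by more than nsigma standard deviations; flags[t] = 1 marks a
 * rejected second.  Window size and names are assumptions. */
void screen_bursts(const float *rate, int nsec, double nsigma, char *flags)
{
    double sum = 0.0, sum2 = 0.0;
    for (int t = 0; t < nsec; t++) { sum += rate[t]; sum2 += rate[t] * rate[t]; }
    double mean  = sum / nsec;
    double sigma = sqrt(sum2 / nsec - mean * mean);

    const int half = 15;                   /* 31-s median window (illustrative) */
    for (int t = 0; t < nsec; t++) {
        int lo = (t - half < 0) ? 0 : t - half;
        int hi = (t + half >= nsec) ? nsec - 1 : t + half;
        float m = median(rate + lo, hi - lo + 1);
        flags[t] = (fabs(m - mean) > nsigma * sigma) ? 1 : 0;
    }
}
```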
Caveats: Careful examination of long background ob-
servations reveals that many are contaminated by emis-
sion from bursts too faint to trigger the burst-detection
algorithm. Observers studying, for example, diffuse
emission from the interstellar medium should be alert
to the possibility of such contamination.
4.4.6. Spacecraft Drift
Pointing of the FUSE spacecraft was originally con-
trolled with four reaction wheels, which typically main-
tained a pointing accuracy of 0.2–0.3 arc seconds. In late
2001, two of the reaction wheels failed, and it became
necessary to control the spacecraft orientation along one
axis with magnetic torquer bars. The torquer bars can
exert only about 10% of the force produced by the re-
action wheels, and the available force depends on the
strength and relative orientation of the earth’s magnetic
field. Thus, spacecraft drift increased dramatically along
this axis, termed the antisymmetric or A axis. Drifts
about the A axis shift the spectra of point-source tar-
gets in a direction 45◦ from the dispersion direction (i.e.,
∆X = ∆Y ). These motions can substantially degrade
the resolution of the spectra, so procedures have been
implemented to correct the data for spacecraft motion
during an exposure. For time-tag observations of point
sources, we reposition individual photon events. For his-
togram observations, we correct only for the exposure
time lost to large excursions of the spacecraft. The abil-
ity to correct for spacecraft drift became even more im-
portant when a third reaction wheel failed in 2004 De-
cember.
The correction of photon-event coordinates for space-
craft motion is discussed in § 4.5.7. During screening, the
subroutine cf screen jitter merely flags times when
the target is out of the aperture, defined as those for
which either ∆X or ∆Y , the pointing error in the disper-
sion or cross-dispersion direction, respectively, is greater
than 30′′, the width of the LWRS aperture. These lim-
its, set by the keywords DX MAX and DY MAX in the
CalFUSE parameter file (PARM CAL), underestimate
the time lost to pointing excursions, but smaller limits
can lead to the rejection of good data for some chan-
nels. Also flagged as bad are times when the jitter track-
ing flag TRKFLG = −1, indicating that the spacecraft
is not tracking properly. If TRKFLG = 0, no track-
ing information is available and no times are flagged as
bad. Pointing information is read from the jitter file
(JITR CAL; § A-2). If the jitter file is not present or the
header keyword JIT STAT = 1 (indicating that the jit-
ter file is corrupted), cf screen jitter issues a warning
and exits; again, no times are flagged as bad.
4.4.7. User-Defined Good-Time Intervals
One bit of the status array is reserved for user-defined
GTIs. For example, to extract data corresponding to a
particular phase of a binary star orbit, one would flag
data from all other phases as bad. A number of tools
exist to set this flag, including cf edit (available from
MAST). CalFUSE users may specify good-time inter-
vals by setting the appropriate keywords (NUSERGTI,
GTIBEG01, GTIEND01, etc.) in the screening pa-
rameter file. (Times are in seconds from the exposure
start time.) If these keywords are set, the subroutine
cf set user gtis flags times outside of these good-time
intervals as bad.
4.4.8. Time-Dependent Status Flags
Once the status flags in the timeline table are popu-
lated, the subroutine cf set photon flags copies them
to the corresponding entries in the photon event list. For
time-tag data, this process is straightforward: match the
times and copy the flags. Header keywords in the IDF
record the number of photon events falling in bad time in-
tervals or outside of the detector active area; the number
of seconds lost to bursts, SAAs, etc.; and the remaining
night exposure time. If more than 90% of the exposure
is lost to a single cause, an explanatory note is written
to the trailer file.
The task is more difficult for histogram data, for which
photon-arrival information is unavailable. We distin-
guish between time flags that represent periods of lost
exposure time (low detector voltage or target out of aper-
ture) and those that represent periods of data contami-
nation (limb angle violations or SAAs). For the former,
we need only modify the exposure time; for the latter,
we must flag the exposure as being contaminated. Our
goal is to set the individual photon flags and header key-
words so that the pipeline behaves in the following way:
When processing a single exposure, it treats all photon
events as good. When combining data from multiple
exposures, it excludes contaminated exposures (defined
below). To this end, we generate an 8-bit status word
containing only day/night information: if the exposure
is more than 10% day, the day bit is set. This status
word is copied onto the time-dependent status flag of
each photon event. We generate a second 8-bit status
word containing information about limb-angle violations
and SAAs: if a single second is lost to one of these events,
the corresponding flag is set and a message is written to
the trailer file. (To avoid rejecting an exposure that, for
example, abuts an SAA, we ignore its initial and final 20
seconds in this analysis.) The status word is stored in
the file header keyword EXP STAT (unless that keyword
has already been set; see § 4.2 and § 4.3.1). The routines
used by the pipeline to combine data from multiple ex-
posures into a single spectrum (§ 4.11) reject data files
in which this keyword is non-zero. The number of bad
events, the exposure time lost to periods of low detector
voltage or spacecraft jitter, and the exposure time dur-
ing orbital night are written to the file header, just as for
time-tag data.
Only in this subroutine is the DAYNIGHT keyword
read from the screening parameter file and written to the
IDF file header. Allowed values are DAY, NIGHT, and
BOTH. The default is BOTH. For most flags, if the bit
is set to 1, the photon event is rejected. The day/night
flag is different: it is always 1 for day and 0 for night.
The pipeline must read and interpret the DAYNIGHT
keyword before accepting or rejecting an event based on
the value of its day/night flag.
4.4.9. Good-Time Intervals
Once the time-dependent screening is complete, the
subroutine cf set good time intervals calculates a
new set of good-time intervals from information in the
timeline table and writes them to the second extension
of the IDF (§ 4.2). For time-tag data, all of the TIME-
FLGS bits are used and the DAYNIGHT filter is applied.
For histogram data, the bits corresponding to limb-angle
violations and SAAs are ignored, since data arriving dur-
ing these events cannot be excluded. The DAYNIGHT
filter is applied (assuming that all are day photons if the
exposure is more than 10% day). The exposure time,
EXPTIME = Σ (STOP−START), summed over all en-
tries in the GTI table, is then written to the IDF file
header.
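The exposure-time sum is straightforward; a sketch with an assumed GTI structure:

```c
/* Sum the good-time intervals to obtain the exposure time written to
 * the IDF header: EXPTIME = sum(STOP - START). */
typedef struct { double start, stop; } GTI;

double exposure_time(const GTI *gti, int ngti)
{
    double exptime = 0.0;
    for (int i = 0; i < ngti; i++)
        exptime += gti[i].stop - gti[i].start;
    return exptime;
}
```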
4.4.10. Histogram Arrival Times
For histogram data, all of the photon events in an
IDF are initially assigned an arrival time equal to the
midpoint of the exposure. Should this instant fall in
a bad-time interval, the data may be rejected by a
subsequent step of the pipeline or one of our post-
processing tools. To avoid this possibility, the subroutine
cf modify hist times resets all photon-arrival times to
the midpoint of the exposure’s longest good-time inter-
val. This subroutine is not called for time-tag data.
4.4.11. Bad-Pixel Regions
Images of the FUSE detectors reveal a number of dead
spots that may be surrounded by a bright ring (see the
FUSE Instrument and Data Handbook for examples).
The subroutine cf screen bad pixels reads a list of
bad-pixel regions from a calibration file (QUAL CAL)
and flags as bad all photon events whose XFARF and
YFARF coordinates fall within the tabulated limits. A
bad-pixel map, constructed later in the pipeline (§ 4.8),
is used by the optimal-extraction algorithm to correct for
flux lost to dead spots.
4.4.12. Pulse Height Limits
For time-tag data, the pulse height of each photon
event is recorded in the IDF. Values range from 0 to
31 in arbitrary units. A typical pulse-height distribution
has a peak at low values due to the intrinsic detector
background, a Gaussian-like peak near the middle of the
range due to “real” photons, and a tail of high pulse-
height events, which likely represent the superposition of
two photons and therefore are not reliable. In addition,
the detector electronics selectively discard high pulse-
height events near the top and bottom of the detectors
(i.e., with large or small values of Y). We can thus im-
prove the signal-to-noise ratio of faint targets by rejecting
photon events with extreme pulse-height values. Pulse-
height limits (roughly 2–24) are defined for each detector
segment in the screening parameter file (SCRN CAL).
The subroutine cf screen pulse height flags photon
events with pulse heights outside of this range (by set-
ting the appropriate bit in the LOC FLGS array; § A-3)
and writes the pulse-height limits used and the number
of photon events rejected to the IDF file header. This
procedure is not performed on histogram data.
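A sketch of the screening test, with an assumed flag-bit value and invented names:

```c
#include <stdint.h>

#define PHA_LIMIT_BIT 0x02   /* illustrative bit position; see § A-3 */

/* Flag time-tag photons whose pulse heights fall outside the limits
 * read from the screening parameter file.  Returns the number rejected. */
long screen_pulse_height(const unsigned char *pha, uint8_t *loc_flags,
                         long nevents, int pha_min, int pha_max)
{
    long nrejected = 0;
    for (long i = 0; i < nevents; i++) {
        if (pha[i] < pha_min || pha[i] > pha_max) {
            loc_flags[i] |= PHA_LIMIT_BIT;
            nrejected++;
        }
    }
    return nrejected;
}
```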
Caveats: We do not recommend the use of narrow
pulse-height ranges to reduce the detector background
in FUSE data. Careful analysis has shown that limits
more stringent than the default values can result in sig-
nificant flux losses across small regions of the detector,
particularly in the LiF1B channel, resulting in apparent
absorption features that are not real.
4.5. Remove Motions
Having corrected the data for various detector ef-
fects and identified periods of bad data, we continue
to work backwards through the instrument, correcting
for spectral motions on the detector due to the move-
ments of various optical components – and even of the
spacecraft itself. This task is performed by the module
cf remove motions. It begins by reading the XFARF
and YFARF coordinates of each photon event from the
IDF. It concludes by writing the motion-corrected coor-
dinates to the X and Y arrays of the same file.
4.5.1. Locate Spectra on the Detector
The LiF and SiC channels each produce three spec-
tra, one from each aperture, for a total of six spectra
per detector segment (Fig. 3). Because motions of the
optical components can shift these spectra on the detec-
tor, the first step is to determine the Y centroid of each.
To do this, we use the following algorithm: First, we
project the airglow photons onto the Y axis (summing
all values of X for each value of Y) and search the result-
ing histogram for peaks within 70 pixels of the expected
Y position of the LWRS spectrum. If the airglow fea-
ture is sufficiently bright (33 counts in 141 Y pixels), we
adopt its centroid as the airglow centroid for the LWRS
aperture and compute its offset from the expected value
stored in the CHID CAL calibration file. If the airglow
feature is too faint, we adopt the expected centroid and
assume an offset of zero. We apply the offset to the ex-
pected centroids of the MDRS and HIRS apertures to
obtain their airglow centroids. Second, we project the
non-airglow photons onto the Y axis and subtract a uni-
form background. Airglow features fill the aperture, but
point-source spectra are considerably narrower in Y and
need not be centered in the aperture. For each aperture,
we search for a 5σ peak within 40 pixels of the airglow
centroid. If we find it, we use its centroid; otherwise,
we use the airglow centroid. This scheme, implemented
in the subroutine cf find spectra, allows for the pos-
sibility that an astrophysical spectrum may appear in a
non-target aperture.
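The centroid step can be sketched as a weighted projection onto Y within a window about the expected position; the peak search and brightness thresholds described above are not reproduced, and all names are illustrative.

```c
#include <math.h>

/* Weighted Y centroid of photons within +/- half_window of an expected
 * Y position; falls back to the expected value if no counts are found. */
double y_centroid(const float *y, const float *weight, long nevents,
                  double y_expected, int half_window)
{
    double sum = 0.0, wsum = 0.0;
    for (long i = 0; i < nevents; i++) {
        if (fabs(y[i] - y_expected) > half_window) continue;
        sum  += weight[i] * y[i];
        wsum += weight[i];
    }
    return (wsum > 0.0) ? sum / wsum : y_expected;
}
```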
For each of the six spectra, two keywords are written
to the IDF file header: YCENT contains the computed
Y centroid, and YQUAL contains a quality flag. The
flag is HIGH if the centroid was computed from a point-
source spectrum, MEDIUM if computed from an airglow
spectrum, and LOW if the tabulated centroid was used.
It is possible for the user to specify the target centroid
by setting the SPEX SIC and SPEC LIF keywords in
the CalFUSE parameter file (PARM CAL). Two other
keywords, EMAX SIC and EMAX LIF, limit the offset
between the expected and calculated centroids: if the cal-
culated centroid differs from the predicted value by more
than this limit, the pipeline uses the default centroid.
Caveats: On detector 1, the SiC LWRS spectrum falls
near the bottom edge of the detector (Fig. 3). Because
the background level rises steeply near this edge, the cal-
culated centroid can be pulled (incorrectly) to lower val-
ues of Y, especially for faint targets.
4.5.2. Assign Photons to Channels
The subroutine cf identify channel assigns each
photon to a channel, where “channel” now refers to one
of the six spectra on each detector (Fig. 3). For each
channel, extraction windows for both point-source and
extended targets are tabulated in the calibration file
CHID CAL along with their corresponding spectral Y
centroids. These extraction limits encompass at least
99.5% of the target flux. For the target channels, iden-
tified in the APERTURE header keyword, we use either
the point-source or extended extraction windows, as indi-
cated by the SRC TYPE keyword; for the other (presum-
ably airglow) channels, we use the extended extraction
windows. The offset between the calculated and tabu-
lated spectral Y centroids (§ 4.5.1) is used to shift each
extraction window to match the data.
To ensure that, should two extraction windows overlap,
photon events falling in the overlap region are assigned
to the more likely channel, photon coordinates (XFARF
and YFARF) are compared with the extraction limits
of the six spectral channels in the following order: first
the target channels (LiF and SiC); then the airglow chan-
nels (LiF and SiC) corresponding to the larger non-target
aperture; and finally the airglow channels (LiF and SiC)
corresponding to the smaller non-target aperture. For
example, if the target were in the MDRS aperture, the
search order would be MDRS LiF, MDRS SiC, LWRS
LiF, LWRS SiC, HIRS LiF, and HIRS SiC. The process
stops when a match is made. The channel assignment
of each photon event is stored in the CHANNEL array
(§ A-3); photon events that do not fall in an extraction
window are assigned a CHANNEL value of 0.
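Because the windows are tested in a fixed order and the search stops at the first match, the assignment can be sketched as a simple ordered loop; the window structure and channel codes below are assumptions.

```c
/* Assign a photon to the first channel whose extraction window contains
 * it.  "win" must already be ordered: target LiF, target SiC, then the
 * airglow channels of the larger and smaller non-target apertures. */
typedef struct { float xmin, xmax, ymin, ymax; int channel; } Window;

int assign_channel(float x, float y, const Window *win, int nwin)
{
    for (int i = 0; i < nwin; i++) {
        if (x >= win[i].xmin && x <= win[i].xmax &&
            y >= win[i].ymin && y <= win[i].ymax)
            return win[i].channel;     /* first match wins */
    }
    return 0;                          /* not in any extraction window */
}
```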
Channel assignment is performed twice, once before
the motion corrections and once after. The first time, all
extraction windows are padded by ±10 Y pixels to ac-
commodate errors in the channel centroids. The second
time, no padding is applied to time-tag data. Histogram
data, which are generally binned by 8 pixels in Y, present
a special challenge: The geometric correction (§ 4.3.6)
can move a row of data out of the extraction window,
producing a significant loss of flux. To prevent this, his-
togram extraction windows are padded by an additional
±8 Y pixels (or an amount equal to the binning factor
in Y, if other than 8).
4.5.3. Track Y Centroids with Time
For the LiF and SiC target apertures, the subroutine
cf calculate ycent motion computes the spectral Y
centroid as a function of time throughout the exposure.
The algorithm requires 500 photon events to compute
an average, so the centroid is updated more often for
bright targets than for faint ones. Photon events flagged
as airglow are ignored. The results are stored in the
YCENT LIF and YCENT SIC arrays of the timeline ta-
ble, but are not currently used by the pipeline. This cal-
culation is not performed for data obtained in histogram
mode.
4.5.4. Correct for Grating Motion
The FUSE spectrograph gratings are subject to small,
thermally-induced motions on orbital, diurnal, and pre-
cessional (60-day) timescales; an additional long-term,
non-periodic drift is also apparent. These motions can
shift the target and airglow spectra by as much as 15
pixels (peak to peak) in both the X and Y dimensions.
Measurements of the Lyman β airglow line in thousands
of exposures obtained throughout the mission reveal that
the gratings’ orbital motion depends on three parame-
ters: beta angle (the angle between the target and the
anti-sun vector), pole angle (the angle between the tar-
get and the orbit pole), and spacecraft roll angle (east of
north, stored in the file-header keyword APER PA). The
subroutine cf grating motion compares the beta, pole,
and roll angles of the spacecraft with a grid of values
in the calibration file GRAT CAL, reads the appropri-
ate correction, and computes the X and Y photon shifts.
The grating-motion correction is applied to all photon
events with CHANNEL > 0; photon events not assigned
to a channel are not moved.
Caveats: Some combinations of beta and pole angle
are too poorly sampled for us to derive a grating-motion
correction; for these regions, no correction is applied. At
present, only corrections for the orbital and long-term
grating motions are available. Because all photon events
in histogram data are assigned the same arrival time (the
midpoint of the longest good-time interval), they receive
the same grating-motion correction.
4.5.5. Correct for FPA Motion
The four focal-plane assemblies (shown in Fig. 1) can
be moved independently in either the X or Z direction.
FPA motions in the X direction are used to correct
for mirror misalignments and to perform FP splits (de-
scribed below). FPA motions in the Z direction are used
to place the apertures in the focal plane of the spectro-
graph. (Strictly speaking, an FPA moves along the tan-
gent to or the radius of the spectrograph Rowland circle,
not the X and Z axes shown in Fig. 1.) Both motions
change the spectrograph entrance angle, shifting the tar-
get spectrum on the detector. The FUSE wavelength
calibration is derived from a single stellar observation
obtained at a particular FPA position. The subroutine
cf fpa position computes the shift in pixels (∆X) nec-
essary to move each spectrum from its observed X posi-
tion on the detector to that of the wavelength-calibration
target. The X and Z positions of the LiF and SiC FPAs
are stored in file header keywords, various spectrograph
parameters are stored in a calibration file (SPEC CAL),
and the wavelength calibration and the FPA position
of the wavelength-calibration target are stored in the
WAVE CAL file. Shifts are computed for both the LiF
and SiC channels; the appropriate shift is applied to all
photon events with CHANNEL > 0; photon events not
assigned to a channel are ignored.
The FUSE detectors suffer from fixed-pattern noise.
Astigmatism in the instrument spreads a typical resolu-
tion element over several hundred detector pixels (pre-
dominantly in the cross-dispersion dimension), mitigat-
ing this effect, but to achieve a signal-to-noise ratio
greater than ∼ 30, one must remove the remaining fixed-
pattern noise. A useful technique is the focal-plane split.
FP splits are performed by obtaining a series of MDRS
or HIRS exposures at several FPA X positions. Moving
the FPA in the X dimension (and moving the satellite
to center the target in the aperture) between exposures
shifts both target and airglow spectra in the dispersion
direction on the detector. CalFUSE shifts each spectrum
to the standard X position expected by our wavelength
calibration routines. If the signal-to-noise ratio in the
spectra obtained at each FPA position is high enough, it
is possible for the user to disentangle the source spectrum
from the detector fixed-pattern noise; however, simply
combining extracted spectra obtained at different FPA
positions will average out most of the small-scale detec-
tor features.
4.5.6. Correct for Mirror Motion
The spectrograph mirrors are subject to thermal mo-
tions that shift the target’s image within the spectro-
graph aperture and thus its spectrum in both X and Y
on the detector. A source in either of the SiC channels
may move as much as 6′′ in a period of 2 ks. This motion
has two effects on the data: first, flux will be lost if the
source drifts (partially or completely) out of the aperture;
second, spectral resolution will be degraded (for LWRS
observations) as the spectrum shifts on the detector. Dif-
fuse sources, such as airglow emission, fill the aperture,
so their spectra are unaffected by mirror motion.
When the LiF1 channel is used for guiding, motions
of the LiF1 mirror are corrected by the spacecraft it-
self. Only the LiF2 and SiC spectra must be corrected
by CalFUSE. In theory, the switch from LiF1 to LiF2
as the primary channel for guiding the spacecraft (§ 2)
should require another set of calibration files. In practice,
the LiF2 mirror motion in the dispersion direction tracks
that of the LiF1 mirror. The mirror-motion correction is
stored as a function of time since orbital sunset (via the
TIME SUNSET array in the timeline table) in the cali-
bration file MIRR CAL. The correction (∆X) is applied
by the subroutine cf mirror motion to all photon events
within the target aperture; photon events in other aper-
tures and those not assigned to a channel are ignored.
This correction is not applied to extended sources. Be-
cause all photon events in histogram data are assigned
the same arrival time (generally the midpoint of the ex-
posure), they receive the same mirror-motion correction.
Caveats: We correct only the relative mirror motion
during an orbit, not the absolute mirror offset based on
longer-term trends. We do not correct for mirror motions
in the Y dimension. Finally, because the shifts for the
SiC1 and SiC2 mirrors are similar, we adopt a single
correction for both channels.
4.5.7. Correct for Spacecraft Motion
Spacecraft motions during an exposure shift the tar-
get spectrum on the detector and thus degrade spectral
resolution. The subroutine cf satellite jitter uses
pointing information stored in the jitter file (JITR CAL;
§ A-2) to correct the observed coordinates of photon
events for these motions. Pointing errors in arc sec-
onds are converted to X and Y pixel shifts and applied
to all photon events within the target aperture; events
in other apertures and those not assigned to a channel
are ignored. The correction is applied only if the jitter
tracking flag TRKFLG > 0, indicating that valid track-
ing information is available. TRKFLG values rise from
1 to 5 as the reliability of the pointing information in-
creases. The minimum acceptable value of the TRKFLG
may be adjusted by modifying the TRKFLG keyword in
the CalFUSE parameter file (PARM CAL).
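A hedged sketch of the jitter step follows. It is not the cf satellite jitter code: the plate-scale constants and the sign of the shift are placeholders, and the per-sample use of TRKFLG is an assumption consistent with the description above.

import numpy as np

PIX_PER_ARCSEC_X = 10.0   # placeholder plate scales; the real values are
PIX_PER_ARCSEC_Y = 10.0   # channel dependent and come from calibration files

def apply_jitter(x, y, t, channel, jit_time, dx_arcsec, dy_arcsec, trkflg,
                 target_channels, min_trkflg=1):
    good = trkflg >= min_trkflg          # keep only valid tracking samples
    if not np.any(good):
        return x, y                      # no valid tracking: leave data untouched
    ex = np.interp(t, jit_time[good], dx_arcsec[good]) * PIX_PER_ARCSEC_X
    ey = np.interp(t, jit_time[good], dy_arcsec[good]) * PIX_PER_ARCSEC_Y
    in_target = np.isin(channel, target_channels)
    x = np.where(in_target, x - ex, x)   # shift events back toward the nominal
    y = np.where(in_target, y - ey, y)   # pointing; other apertures are ignored
    return x, y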
4.5.8. Recompute Spectral Centroids
Once all spectral motions are removed from the data,
the subroutine cf calculate y centroid recomputes
the spectral Y centroids. Separate source and airglow
centroids are determined for each aperture in turn, from
largest to smallest. (The former is meaningless if the
aperture does not contain a source.) The offset be-
tween the measured airglow centroid in the LWRS aper-
ture and the tabulated centroid (from the calibration file
CHID CAL) is used to compute the airglow centroids for
the MDRS and HIRS apertures; the computed MDRS
and HIRS airglow centroids are ignored. The YCENT
value written to the IDF file header is determined by
the quality flag previously set by cf find spectra (§
4.5.1): if YQUAL = HIGH, the source centroid is used;
if YQUAL = MEDIUM, the airglow centroid is used; and
if YQUAL = LOW, the tabulated centroid is used. The
SPEX SIC, SPEC LIF, EMAX SIC, and EMAX LIF
keywords in the CalFUSE parameter file (PARM CAL)
have the effects discussed in § 4.5.1.
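The selection of YCENT reduces to a three-way conditional; a minimal sketch, assuming the three candidate centroids have already been computed:

def choose_ycent(yqual, source_centroid, airglow_centroid, tabulated_centroid):
    # Mirrors the rule described above; names are illustrative.
    if yqual == "HIGH":
        return source_centroid      # trust the measured source centroid
    if yqual == "MEDIUM":
        return airglow_centroid     # fall back on the airglow centroid
    return tabulated_centroid       # LOW: use the tabulated value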
4.5.9. Final Assignment of Photons to Channels
The final assignment of each photon event to a chan-
nel is performed by cf identify channel, just as in §
4.5.2, but with two modifications: First, we consider
only photon events with CHANNEL > 0; unassigned
events (which are not motion corrected) remain unas-
signed. Second, we do not pad the extraction windows
by an additional ±10 pixels in Y, though the extraction
windows for histogram data are padded by ±8 Y pixels
(or an amount equal to the binning factor in Y, if other
than 8), as before.
4.5.10. Compute Count Rates
For time-tag data, cf target count rate computes
the count rate in the target aperture for the LiF and
SiC channels. To account for dead-time effects, the con-
tents of the WEIGHT array are used. Events in airglow
regions are excluded, but no other filters are applied to
the data. Results are written to the LIF CNT RATE
and SIC CNT RATE arrays of the timeline table. For
histogram data, the initial values of these arrays, taken
from the housekeeping file (§ A-3), are scaled by the value
of the header keyword DET DEAD.
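For time-tag data, this computation is essentially a dead-time-weighted histogram of photon arrival times. A sketch under the assumption of one-second bins and boolean per-event flags (not the cf target count rate code):

import numpy as np

def target_count_rate(t, weight, in_target, is_airglow, exptime):
    keep = in_target & ~is_airglow                   # exclude airglow regions
    edges = np.arange(0.0, np.ceil(exptime) + 1.0)   # one-second bins
    rate, _ = np.histogram(t[keep], bins=edges, weights=weight[keep])
    return rate                                      # weighted counts per second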
4.6. Wavelength Calibration
Once converted to the FARF and corrected for optical
and spacecraft drifts, the data can be wavelength cal-
ibrated. The module cf assign wavelength performs
three tasks: first, it corrects for astigmatism in the spec-
trograph optics; second, it applies a wavelength calibra-
tion to each photon event; and third, it shifts the wave-
lengths to a heliocentric reference frame. The derivation
of the FUSE wavelength scale is discussed in § 5.1.
4.6.1. Astigmatism Correction
The astigmatic height of FUSE spectra perpendicular
to the dispersion axis is significant and varies as a func-
tion of wavelength (Fig. 3). Moreover, spectral features
show considerable curvature, especially near the ends of
the detectors where the astigmatism is greatest. The
subroutine cf astigmatism shifts each photon event in
X to correct for the spectral-line curvature introduced by
the FUSE optics, providing a noticeable improvement in
spectral resolution for point sources (Fig. 6).
The astigmatism correction is derived from observa-
tions of GCRV 12336, the central star of the Dumbbell
Nebula (M 27), which exhibits H2 absorption features
across the FUSE waveband. We cross-correlate and com-
bine the absorption features from a small range in X, fit
a parabola to each set of combined features, compute the
shift required to straighten each parabola, and interpo-
late the shifts across the waveband. Because an astigma-
tism correction has been derived only for point sources,
no correction is performed on the spectra of extended
sources, airglow spectra, or observations with APER-
TURE = RFPT.
The correction is stored in the calibration file
ASTG CAL as a two-dimensional image representing the
region of the detector containing the target spectrum. A
separate image is provided for each aperture. The value
of each image pixel is the astigmatism correction (∆X
in pixels) to be applied to that pixel. The entire image
is shifted in Y to match the centroid of the target spec-
trum, and the appropriate correction is applied to each
photon event in the target aperture. The corrected X co-
ordinates are not written to the IDF, but are passed im-
mediately to the wavelength-assignment subroutine. In
effect, we apply a two-dimensional wavelength calibra-
tion, which depends upon both the X and Y coordinates
of each photon event.
4.6.2. Assign Wavelengths
The wavelength calibration is stored as a binary table
extension in the calibration file WAVE CAL (§ 5.1); a
separate extension is provided for each aperture. Wave-
lengths are tabulated for integral values of X, assumed
to be in motion-corrected FARF coordinates. Given
the astigmatism-corrected X and CHANNEL arrays, the
subroutine cf dispersion considers each aperture in
turn and reads the corresponding calibration table. It
interpolates between tabulated values of X to derive the
wavelength of each photon event. Photon events not as-
signed to an aperture (CHANNEL = 0) are not wave-
length calibrated.
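Functionally, the wavelength assignment is a one-dimensional interpolation of the tabulated scale. A sketch for a single aperture (illustrative only, not the cf dispersion code):

import numpy as np

def assign_wavelengths(x_corrected, channel, cal_x, cal_lambda, aperture_code):
    lam = np.zeros_like(x_corrected, dtype=float)
    in_aperture = channel == aperture_code     # CHANNEL = 0 events keep lambda = 0
    lam[in_aperture] = np.interp(x_corrected[in_aperture], cal_x, cal_lambda)
    return lam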
4.6.3. Doppler Correction
The component of the spacecraft’s orbital velocity
in the direction of the target is stored in the OR-
BITAL VEL array of the timeline table. The component
of the earth’s orbital velocity in the direction of the tar-
get is stored in the IDF file header keyword V HELIO.
Their sum is used to compute a time-dependent Doppler
correction, which is applied to each photon event by the
subroutine cf doppler and heliocentric. The result-
ing wavelength scale is heliocentric. Because histogram
data are assigned identical arrival times, their Doppler
correction is not time dependent. Histogram exposures
are kept short (approximately 500 seconds) to minimize
the resulting loss of spectral resolution. The final wave-
length assigned to each photon event is stored in the
LAMBDA array of the IDF.
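The heliocentric correction amounts to a non-relativistic Doppler shift using the summed velocity. In the sketch below, the sign convention (velocity positive toward the target) is an assumption, and the function name is illustrative.

import numpy as np

C_KM_S = 2.99792458e5   # speed of light (km/s)

def heliocentric_wavelength(lam, t, vel_time, orbital_vel, v_helio):
    # orbital_vel : spacecraft velocity component toward the target vs. time (km/s)
    # v_helio     : Earth's orbital velocity component toward the target (km/s)
    v = np.interp(t, vel_time, orbital_vel) + v_helio
    return lam * (1.0 + v / C_KM_S)   # shift observed wavelengths to heliocentric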
4.7. Flux Calibration
Because the instrument sensitivity varies through the
mission, we employ a set of time-dependent effective-
area files (AEFF CAL). We interpolate between the two
files whose dates bracket the exposure start time but
do not extrapolate beyond the most recent effective-area
file. Within each calibration file, the instrumental ef-
fective area is stored as a binary table extension, with
a separate extension provided for each aperture. The
module cf flux calibrate invokes a single subroutine,
cf convert to ergs. Considering each aperture in turn,
the program reads the appropriate calibration files, inter-
polates between them if appropriate, and computes the
“flux density” of each photon (in units of erg cm−2) ac-
cording to the formula
ERGCM2 = WEIGHT × hc / LAMBDA / Aeff(λ),   (1)
where ERGCM2, WEIGHT, and LAMBDA are read
from the photon-event list (§ A-3), h is Planck’s con-
stant, c the speed of light, and Aeff(λ) the effective area
at the wavelength of interest. Only photon events as-
signed to an aperture are flux calibrated; events with
CHANNEL = 0 are ignored. The flux density computed
for each photon event is stored in the ERGCM2 array of
the IDF. This array is not used by the pipeline, but is
employed by some of our interactive IDF manipulation
tools.
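In CGS units, equation (1) can be evaluated directly. The sketch below is illustrative rather than the cf convert to ergs code, and it assumes the effective-area curve is supplied as a simple wavelength table.

import numpy as np

H_ERG_S = 6.62607015e-27    # Planck's constant (erg s)
C_CM_S  = 2.99792458e10     # speed of light (cm/s)

def flux_density_per_photon(weight, lam_angstrom, aeff_wave, aeff_cm2):
    lam_cm = lam_angstrom * 1.0e-8                       # Angstrom -> cm
    aeff = np.interp(lam_angstrom, aeff_wave, aeff_cm2)  # effective area at lambda
    return weight * H_ERG_S * C_CM_S / lam_cm / aeff     # erg cm^-2 per event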
4.8. Create Bad-Pixel Map
When possible, spectra are extracted using an optimal-
extraction algorithm (§ 4.9.4) that employs a bad-pixel
map (BPM) to correct for flux lost to dead spots and
other detector blemishes. Because motions of the space-
craft and its optical components cause FUSE spectra
to wander on the detector, a particular spectral feature
may be affected by a dead spot for only a fraction of an
exposure. We thus generate a bad-pixel map for each
exposure. The module cf bad pixels reads a list of
dead spots from a calibration file (QUAL CAL), deter-
mines which of them overlap the target aperture, tracks
the motion corrections applied to each, and converts the
motion-corrected coordinates to wavelengths. The re-
sulting bad-pixel map (identified by the suffix “bpm.fit”)
has a format similar to that of an IDF (§ A-4), but the
WEIGHT column, whose values range from 0 to 1, rep-
resents the fraction of the exposure that each pixel was
affected by a dead spot. No BPM file is created if the
(screened) exposure time is less than 1 second. BPM files
are not archived, but can be generated from an IDF and
its associated jitter file using software distributed with
CalFUSE (available from MAST).
4.9. Extract Spectra
Through all previous steps of the pipeline, we resist
the temptation to convert the photon-event list into an
image. In the module cf extract spectra, we relent.
Indeed, we generate four sets of images: a background
model, a bad-pixel mask, a two-dimensional probabil-
ity distribution of the target flux, and a spectral image
for each extracted spectrum (LiF and SiC). Only pho-
ton events that pass all of the requested screening steps
(§ 4.4) are considered. If the (screened) exposure time
is less than 1 second or no photon events survive the
screening routines, then the program generates a null-
valued spectral file.
4.9.1. Background Model
Microchannel plates contribute to the detector back-
ground via beta decay of 40K in the MCP glass. On
orbit, cosmic rays add to this intrinsic background to
yield a total rate of ∼ 0.5 counts cm−2 s−1. Scattered
light, primarily geocoronal Lyman α, contributes a well-
defined illumination pattern (Fig. 7) that varies in inten-
sity during the orbit, with detector-averaged count rates
as small as 20% of the intrinsic background during the
night and 1-3 times the intrinsic rate during the day. We
assume that the observed background consists of three
independent components, a spatially uniform dark count
and spatially-varying day- and nighttime scattered-light
components. Properly scaling them to the data is thus a
problem with three unknown parameters. We attempt to
fit as many of these parameters as possible directly from
the data. When such a fit is not possible, we estimate
one or more components and fit the remainder.
Background events due to the detector generally have
pulse-height values lower than those of real photons (§
4.4.12). The observed dark count thus depends on the
pulse-height limits imposed on the data. An initial esti-
mate of the dark count, as a function of the lower pulse-
height threshold, is read from the background characteri-
zation file (BCHR CAL). The day and night components
of the scattered-light model are read from separate exten-
sions of the appropriate time-dependent background cal-
ibration file (BKGD CAL). The background models are
scaled to match the counts observed on unilluminated
regions of the detector. The Y limits of these regions
(selected according to the target aperture) are read from
header keywords in the IDF. Airglow photons in these
regions are excluded from the analysis. The day and
night counts in the background regions of the detector
are summed and recorded.
In its default mode, the subroutine cf scale bkgd es-
timates the uniform background as follows: The back-
ground regions of the day and night scattered-light mod-
els are scaled by their relative exposure times, summed,
and projected onto the X axis (to produce a histogram
in X). A similar histogram (called the “empirical back-
ground spectrum”) is constructed from the data. An it-
erative process is used to determine the uniform compo-
nent and scattered-light scale factor that best reproduce
the observed X distribution of background counts. The
uniform component is then subtracted from the day and
night totals computed above, and the day and night com-
ponents of the scattered-light model are scaled to match
the remaining observed counts.
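The default fit described above can be approximated by a two-parameter linear model, in which the empirical background spectrum is a constant (uniform) term plus a scaled scattered-light profile. The sketch below solves this model by linear least squares rather than the iterative scheme actually used by cf scale bkgd; the function name is illustrative.

import numpy as np

def fit_background(empirical_x, scattered_x):
    # empirical_x : observed background counts projected onto the X axis
    # scattered_x : exposure-weighted day+night scattered-light model, same projection
    A = np.column_stack([np.ones_like(scattered_x), scattered_x])
    (uniform, scale), *_ = np.linalg.lstsq(A, empirical_x, rcond=None)
    return uniform, scale   # per-bin uniform level and scattered-light scale factor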
If the empirical background spectrum is too faint, we
do not attempt to fit it, but assume that the uniform
component of the background is equal to the tabulated
dark-count rate scaled by the exposure time. The day
and night components of the scattered-light model are
calculated as above. Users who require a more accurate
background model may wish to combine data from mul-
tiple exposures before extracting a spectrum (see § 6.6).
If the empirical background spectrum is very bright –
as, for example, when nebular emission or a background
star contaminates one of the other apertures (and thus
the background-sample region) – no fit is performed. In-
stead, the day and night components of the scattered-
light model are scaled by the day and night exposure
times, the tabulated dark-count rate is scaled by the to-
tal exposure time, and the three components are summed
to produce a background image. This scheme is also used
for histogram data. Because histogram files contain data
only from the region about the desired extraction win-
dow, the background cannot be estimated from other
regions of the detector. Fortunately, histogram observa-
tions typically consist of short exposures of bright tar-
gets, so the background is comparatively faint.
Our day and night scattered-light models were de-
rived from the sum of many deep background observa-
tions spanning hundreds of kiloseconds. Individual ex-
posures that differ markedly from the mean were ex-
cluded from the sum. The data were processed only
through the FARF-conversion and data-screening steps
of the pipeline. Airglow features were replaced with a
mean background interpolated along the dispersion (X)
axis of the detector. An estimate of the uniform back-
ground component was subtracted from the final image,
and the data were binned by 16 detector pixels in X. This
process was performed on both day- and night-only data
sets. We produce a new set of background images every 6
to 12 months, as the effects of gain sag and adjustments
of the detector high voltage slowly alter the relative sen-
sitivity of the illuminated and background regions of the
detectors.
Caveats: While early versions of the pipeline (through
v2.4) assume a 10% uncertainty in the background flux,
propagating it through to the final extracted spectrum,
CalFUSE v3 treats uncertainties in the background as
systematic errors and does not include them in the
(purely statistical) error bars of the extracted spectra.
The algorithm assumes that the intensity of the uni-
form background is constant throughout an exposure.
This would be the case if it were due only to the detec-
tor dark count, but in fact the uniform background in-
cludes a substantial contribution from the scattered light
and is thus brighter during day-time portions of an expo-
sure. The assumption of a constant uniform background
can lead the algorithm to over-estimate the scale factors
for both the uniform and spatially-varying components
of the background model. A better scheme would be to
fit the day and night components of the uniform back-
ground separately. Similarly, when the empirical back-
ground spectrum is very faint, adopting the tabulated
value of the uniform background is not the best solu-
tion. It is likely that the scattered-light component of
the uniform background would be better estimated from
the observed day and night backgrounds. The difference
between the tabulated and observed levels of the uniform
background will become greater as the mission extends
into solar minimum and the intensities of both individ-
ual airglow features and the scattered-light component
of the uniform background continue to weaken.
Grating scattering of point-source photons along the
dispersion direction is potentially significant and is not
corrected by the CalFUSE pipeline. Typical values are
1–1.5% of the continuum flux in the SiC channels and 10
times less in the LiF channels.
4.9.2. Probability Array
The optimal-extraction algorithm (§ 4.9.4) requires as
input a two-dimensional probability array representing
the distribution of flux on the detector. Separate prob-
ability arrays, derived from high signal-to-noise stellar
observations, have been computed for each channel and
stored as image extensions in the weights calibration file
(WGTS CAL).
By construction, the Y dimension of the probability
array represents the maximum extent of the extraction
window for a particular aperture. For simplicity, all ar-
rays employed by the optimal-extraction algorithm are
trimmed to match the probability array in Y. The cen-
troids of the probability distribution and the target spec-
trum (recorded in the corresponding file headers) are
used to determine the offset between detector and prob-
ability coordinates. In the X dimension, all arrays are
binned to the output wavelength scale requested by the
user. Default wavelength parameters for each aperture
are specified in the header of the wavelength calibration
file (WAVE CAL); the default binning for all channels is
0.013 Å per output spectral bin, which corresponds to
approximately two detector pixels. The background ar-
ray, originally binned by 16 pixels in X, is rescaled to the
width of each output wavelength bin by the subroutine
cf rebin background. The probability array is rescaled
by the subroutine cf rebin probability array to have
a sum of unity in the Y dimension for each wavelength bin.
4.9.3. Bad-Pixel Mask
A bad-pixel mask with the same wavelength scale and
Y dimensions as the probability array is constructed from
the BPM file (§ 4.8) by the subroutine cf make mask.
The array is initialized to zero. For each entry in the
BPM file, the value of the WEIGHT array is added to
the corresponding pixel of the bad-pixel mask. The mask
is then normalized and inverted, so that the center of the
deepest dead spot has a value of 0 and the regions outside
are set to 1. The conversion from pixel to wavelength
coordinates can open gaps in the mask, which appear as
values of unity surrounded by pixels with lower values.
We search for array elements that are larger than their
neighbors and replace them with the mean value of the
adjoining pixels. If the BPM file is absent or a particular
aperture is free of bad pixels, all elements of the bad-pixel
mask are set to 1.
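A simplified sketch of this mask construction follows. It is not the cf make mask code; the gap-filling step assumes a one-dimensional treatment along the wavelength axis, and the function name is illustrative.

import numpy as np

def make_bad_pixel_mask(shape, rows, cols, weights):
    mask = np.zeros(shape)
    np.add.at(mask, (rows, cols), weights)   # accumulate dead-spot weights
    if mask.max() > 0:
        mask /= mask.max()                   # normalize ...
    mask = 1.0 - mask                        # ... and invert: 1 = clean, 0 = dead
    # Close gaps opened by the pixel-to-wavelength conversion: replace elements
    # that exceed both neighbors with the mean of the adjoining pixels.
    left, mid, right = mask[:, :-2], mask[:, 1:-1], mask[:, 2:]
    gap = (mid > left) & (mid > right)
    mid[gap] = 0.5 * (left[gap] + right[gap])
    return mask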
4.9.4. Optimal (Weighted) Spectral Extraction
The extraction subroutine, cf optimal extraction,
is called separately for each of the two target spectra
(LiF and SiC). Inputs include the photon-event list and
the indices of events that pass through the various screen-
ings, as well as the 2-D background, probability, and bad-
pixel arrays described above. For numerical simplicity,
extraction is performed using the WEIGHT of each pho-
ton event, rather than its ERGCM2 value. A pair of 2-D
data and variance arrays with the same dimensions as the
probability array are constructed from the good photons
whose CHANNEL values correspond to the target aper-
ture. For time-tag data, this process is straightforward:
the LAMBDA and Y values of each photon event corre-
spond to a particular cell in the data and variance arrays.
That cell in the data array is incremented by the photon
weight, while the corresponding cell in the variance array
is incremented by the square of the weight. A 1-D raw-
counts spectrum (useful for the statistical analysis of low
count-rate data) is constructed simultaneously: for each
photon event added to the data array, the appropriate
bin of the counts spectrum is incremented by one.
For histogram data, the process is more complex, be-
cause the original detector image is generally binned by
8 pixels in Y and because each entry in the photon-event
list represents the sum of many individual photons. In
the Y dimension, an event’s WEIGHT is divided among
8 pixels (or the actual Y binning for that exposure, if
different) according to the distribution predicted by the
probability array. In the X dimension, each event is as-
sumed to have a width in wavelength space equal to the
mean dispersion per pixel for the channel (read from the
DISPAPIX keyword of the WAVE CAL file), and the
WEIGHT of an event that spans the boundary between
two output wavelength bins is divided between them.
This smoothing in X helps to mitigate the “beating” that
would otherwise occur between detector pixels and out-
put wavelength bins.
One-dimensional background, weights, and variance
spectra are then extracted from the two-dimensional
background, data, and variance arrays. To insure that
the three spectra sample the same region of the detector,
only cells in the 2-D arrays for which the correspond-
ing cell in the probability array has a value greater than
10−4 are included in the sum. These limits differ slightly
from those defined in the aperture (CHID CAL) calibra-
tion files. As a result, the ratio of the final weights and
counts spectra may not be a constant. (Ideally, their ra-
tio would equal the mean dead-time correction for the
exposure.) An initial flux spectrum, equal to the differ-
ence of the weights and background spectra, is used as
input to the optimal-extraction algorithm.
CalFUSE employs the optimal-extraction algorithm
described by Horne (1986), which requires as input the
2-D data, background, probability, and bad-pixel arrays
and the 1-D initial flux spectrum. Originally designed for
CCD spectroscopy, the algorithm has been modified for
the FUSE detectors. Specifically, instead of constructing
a 2-D spatial profile from each data set, we use a tabu-
lated probability array; the 2-D cosmic-ray mask, an in-
teger array in the original algorithm, is replaced with the
bad-pixel mask, which is a floating-point array; and the
2-D variance estimate is scaled by the bad-pixel mask.
Extraction is iterative: in the original version, iteration
is performed until the cosmic-ray mask stops changing.
In our version, iteration continues until the output flux
spectrum changes by less than 0.01 counts in all pixels. If
the loop repeats 50 times, the algorithm fails. The num-
ber of iterations performed is written to the OPT EXTR
keyword of the output file header.
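The structure of the iteration can be illustrated with a much-simplified, Horne-style sketch. It is not the CalFUSE implementation: the Poisson-like variance model is a generic assumption, and the handling of the floating-point bad-pixel mask is only one plausible reading of the scaling described above.

import numpy as np

def optimal_extract(data, bkgd, prob, mask, flux0, tol=0.01, max_iter=50):
    # data, bkgd, prob, mask : 2-D arrays (Y x wavelength); flux0 : 1-D initial flux
    flux = np.asarray(flux0, dtype=float).copy()
    for _ in range(max_iter):
        # Assumed Poisson-like variance: background plus the current flux
        # estimate spread over the spatial profile (cf. Horne 1986); the small
        # floor simply avoids division by zero.
        var = np.abs(bkgd) + np.abs(flux[np.newaxis, :] * prob) + 1e-12
        w = mask * prob / var                       # bad pixels carry zero weight
        num = np.sum(w * (data - bkgd), axis=0)
        den = np.sum(w * prob, axis=0)
        new_flux = np.divide(num, den, out=flux.copy(), where=den > 0)
        if np.all(np.abs(new_flux - flux) < tol):
            return new_flux, True                   # converged
        flux = new_flux
    return flux, False                              # failed after max_iter iterations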
If optimal extraction is successful, the variance of
the optimal spectrum is computed using the recipe of
Horne (1986). We have adapted this recipe to pro-
duce weights and background spectra such that FLUX
= WEIGHTS − BKGD. The resulting background spec-
trum is not smooth. Optimal extraction is not per-
formed on the spectra of extended sources or on those
for which the quality of the computed spectral centroid
is not HIGH. (Both the centroid and its quality flag are
stored in file header keywords.) In these cases, or if the
optimal-extraction algorithm fails, the initial flux, vari-
ance, weights, and background spectra are adopted.
However they are constructed, the final FLUX, ER-
ROR (equal to the square root of the variance),
WEIGHTS, and BKGD arrays (all in units of counts)
are returned to the calling routine. For time-tag data,
the COUNTS array as described above is returned. For
histogram data, the COUNTS array is computed by di-
viding the final WEIGHTS array by the mean dead-
time correction, which is stored in a file-header keyword
TOT DEAD. Also returned is the QUALITY array. It
is the product of the probability array and the bad-pixel
map, projected onto the wavelength axis and expressed
as an integer between 0 and 100. Its value is 0 if all the
flux in a wavelength bin is lost to a detector dead spot,
100 if no flux is lost.
Caveats: The optimal-extraction algorithm is designed
to improve the signal-to-noise ratio of the spectra of faint
point sources. Unfortunately, it is in precisely these cases
that the spectral centroid is most likely to be uncertain.
Because proper positioning of the probability array is es-
sential to the weighting scheme, observers of faint targets
may wish to combine the IDFs from multiple exposures
and re-compute their spectral centroids before attempt-
ing optimal extraction.
4.9.5. Extracted Spectral Files
Because the optimal-extraction routine returns the fi-
nal FLUX and ERROR arrays in units of counts, the
spectral-extraction module applies a flux calibration to
both arrays using the subroutine cf convert to ergs
described in § 4.7. Dividing each array element by (EX-
PTIME × WPC), where EXPTIME is the length in sec-
onds of the (screened) exposure and WPC the width in
Ångstroms of each output wavelength bin, completes the
conversion to units of erg cm−2 s−1 Å−1. The format of
the extracted spectral files is described in § A-5.
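This final step is a simple normalization; a one-function sketch of the conversion described above (the name is illustrative):

def to_physical_units(flux_erg_cm2, exptime_s, wpc_angstrom):
    # Divide the flux-calibrated array (erg cm^-2 per bin) by exposure time
    # and bin width to obtain erg cm^-2 s^-1 A^-1.
    return flux_erg_cm2 / (exptime_s * wpc_angstrom)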
4.10. Trailer and Image Files
A number of supplementary files are generated by Cal-
FUSE and archived with the data. For each exposure
and detector segment, the pipeline generates a trailer file
and a pair of image files in Graphics Interchange Format
(GIF). The trailer file (suffix “.trl”) contains timing in-
formation for all pipeline modules and any warning or
error messages that they may have generated. The first
image file contains an image of the detector overlaid by a
wavelength scale and extraction windows for each aper-
ture (suffix “ext.gif”). Only photon events flagged as
good are included in the plot, unless there are none, in
which case all events are plotted. The second image file
presents count-rate plots for both the LiF and SiC target
apertures (suffix “rat.gif”). These arrays come from the
timeline table in the IDF and exclude photons flagged as
airglow. These image files are powerful tools for diagnos-
ing problems in the data, revealing, for example, when
high background levels cause the SiC1 LWRS extraction
window to be misplaced.
4.11. Observation-Level Files
For each exposure, CalFUSE produces LiF and SiC
spectra from each of four detector segments, for a total
of eight extracted spectral files. OPUS combines them
into a set of three observation-level files for submission
to MAST. Observation-level files are distinguished from
exposure-level files by having an exposure number of 000.
Depending on the target and the scientific questions at
hand, these files may be of sufficient fidelity for scientific
investigation. Here is a brief description of their contents:
ALL: For each combination of detector segment and
channel (LiF1A, SiC1A, etc.), we combine data from all
exposures in the observation into a single spectrum. If
the individual spectra are bright enough, we cross corre-
late and shift them before combining. (For each channel,
the shift calculated for the detector segment spanning
1000–1100 Å is applied to the other segment as well.) If
the spectra are too faint for cross correlation, we com-
bine the individual IDFs and extract a single spectrum
to optimize the background model. Combined spectra
(WAVE, FLUX, and ERROR arrays) for each of the eight
channels are stored in separate binary table extensions
in the following order: 1ALIF, 1BLIF, 2BLIF, 2ALIF,
1ASIC, 1BSIC, 2BSIC, and 2ASIC.
ANO (all, night-only): With the same format as the
ALL files, these spectra are constructed using only data
obtained during the night-time portion of each exposure.
They are generated only for time-tag data, and only if
EXPNIGHT > 0. The shifts calculated for the ALL files
are applied to the night-only data; they are not recom-
puted.
NVO (National Virtual Observatory): These files con-
tain a single spectrum spanning the entire FUSE wave-
length range. The spectrum is assembled by cutting and
pasting segments from the most sensitive channel at each
wavelength. Segments are shifted to match the guide
channel (either LiF1 or LiF2) between 1045 and 1070 Å.
Columns are WAVE, FLUX, and ERROR and are stored
in a single binary table extension.
The ALL file is used to generate a “quick-look” spec-
tral plot for each observation. When available, combined
spectra from channels spanning the FUSE waveband are
plotted in a single GIF image file (suffix “specttagf.gif”
or “spechistf.gif”). This plot appears on the MAST pre-
view page of each observation. Four additional GIF files
contain the combined LiF1, LiF2, SiC1, and SiC2 spectra
for each observation.
Caveats: Cross-correlation may fail, even for the spec-
tra of bright stars, if they lack strong spectral features.
Examples are nearby white dwarfs with weak interstellar
absorption lines. If cross-correlation fails for a given ex-
posure, that exposure is excluded from the sum. Thus,
the exposure time for a particular segment in an ALL
file may be less than the total exposure time for that
observation.
The cataloging software used by MAST requires the
presence of an ALL file for each exposure, not just for
the entire observation. We generate exposure-level ALL
files, but they contain no data, only a FITS file header.
The observation-level ALL files discussed above can be
distinguished by the string “00000all” in their names.
4.12. Quality Control and Archiving
Before the reduced data are archived, they undergo
a two-step quality-control process: First, a set of auto-
mated checks is performed on each exposure. The soft-
ware compares the flux observed in the guide channel
(LiF1 or LiF2) with that expected for the target and
with that observed in the other three channels. If an
anomaly is detected, a flag is set requesting manual in-
vestigation. The software works well for bright contin-
uum sources, but often flags faint or emission-line targets
as unsuccessful observations. Second, a member of the
FUSE operations team investigates any warnings gener-
ated by the software. If it is determined that less than
50% of the requested data were obtained, the target is
re-observed.
The philosophy of the FUSE project is to archive data
whenever possible, even if it does not satisfy the require-
ments of the original investigator. As a result, the MAST
archive contains a number of FUSE data sets that are in
some way flawed (e.g., misaligned channels, partial loss
of guiding, or no good observing time). Users should be
aware of this possibility.
If the pipeline detects an error in the data or its as-
sociated housekeeping or jitter files, it writes a warning
to both the trailer file and the headers of the IDF and
extracted spectral files. Users are advised to scan trailer
files for the “WARNING” string and spectral files for
“COMMENT” records. Occasionally, the FUSE opera-
tions team inserts comments directly into the headers of
raw data files. Such comments may warn of an unusual
instrument configuration, errors in the reported target
coordinates, or data obtained during slews.
Observation names beginning with the letter “S” are
science-verification observations and may have been ob-
tained with an unusual instrument configuration. For
example, the program S523 was designed to test pro-
cedures for observing a bright object by defocusing the
SiC mirrors. The LiF mirrors were moved for some S523
exposures. As a result, data from this program should
be used with caution. Abstracts for all FUSE observing
programs are available from MAST.
5. CALIBRATION FILES
5.1. Wavelength Calibration
5.1.1. Derivation of the FUSE Wavelength Calibration
Our principal wavelength-calibration target is
GCRV 12336, central star of the planetary nebula M 27,
whose spectrum exhibits a myriad of molecular-hydrogen
absorption features (McCandliss et al. 2007). For the
SiC1B channel, these data are supplemented at the
shortest wavelengths by spectra of the hot white dwarf
G 191-B2B (Lemoine et al. 2002). Spectra obtained
through each of the three FUSE apertures were fully
reduced, corrected for astigmatism, and used to derive
an empirical mapping of pixel to wavelength. For each
channel, standard optical expressions were used to
derive a theoretical dispersion solution, which was fit
to the empirical data with only its constant term (the
zero-point of the wavelength scale) as a free parameter.
The shifted theoretical dispersion solution was used to
generate the wavelength-calibration file.
Early versions of the pipeline relied on the wavelength
calibration to correct for non-linearities in the detector X
scale (the geometric distortion discussed in § 4.3.6). Cor-
recting for this effect separately has greatly improved the
accuracy of the FUSE wavelength scale. To determine
the geometric distortion in the X dimension, a spline
was fit to the residuals from each aperture (expected mi-
nus observed X coordinate of each absorption feature;
Fig. 8). In practice, residuals from all six apertures (both
LiF and SiC channels) were included in the fit, but data
from the other five apertures were weighted 100 times
less than those for the aperture being fitted. The addi-
tional data points help to constrain the fit in wavelength
regions where the data are sparse or missing. The spline
fits from all six apertures were then used to construct the
two-dimensional map of detector distortions in the X di-
mension that is used by the geometric-distortion routine
described in § 4.3.6. The process is iterative, with the
residuals (ideally) becoming smaller with each iteration.
The scatter of individual measurements about the
spline fit is caused in some cases by blended absorption
lines and in others by localized distortions induced by
the fiber-bundle structure of the MCPs. This scatter is
thus a fair estimate of the inaccuracies that the user may
expect in the relative measurement of the wavelength of
any given feature. The wavelength inaccuracies caused
by localized distortions are 3–4 detector pixels (0.025 Å
or 7 km s−1) at most wavelengths, but may be as large
as 6–8 pixels. They occur in tiny windows about 1 to 3 Å
wide, depending on the channel and segment. Some data
sets show larger residuals. These distortions are inherent
in the FUSE data set and represent the ultimate limit
to the accuracy of the FUSE wavelength calibration.
5.1.2. Zero-Point Uncertainties
The FUSE wavelength calibration assumes that the
motion-corrected spectrum falls at a precise location on
the detector. If it is shifted in X, then the wavelength
scale of the extracted spectrum will suffer a zero-point
offset. For the guide channel (either LiF1 or LiF2),
the dominant source of wavelength errors is thermally-
induced rotations of the spectrograph gratings, which de-
pend on the satellite attitude. For the other channels,
additional wavelength errors come from mirror misalign-
ments that shift the target away from the center of the
aperture. Such misalignments may produce zero-point
offsets of up to ±0.15 Å for point sources in the LWRS
aperture. Offsets are less than ±0.02 Å for the MDRS
aperture and are negligible for the HIRS aperture.
We define the zero point of our wavelength scale by
requiring that the Lyman β airglow feature, observed
through the HIRS aperture and processed as if it were a
point source, be at rest in spacecraft coordinates when all
Doppler corrections are turned off. The use of an airglow
feature eliminates errors due to mirror motions in the
non-guide channels, but not errors in the grating-motion
correction, so we measure the Lyman β line in some 200
background exposures and shift their mean velocity to
0 km s−1. For each channel, all three apertures (HIRS,
MDRS, and LWRS) use this HIRS-derived wavelength
scale (WAVE CAL version 022 and greater).
The grating-motion correction (§ 4.5.4) is designed to
place the centroid of each Lyman β airglow feature at
a fixed location in FARF coordinates. On average, it
achieves that goal: for our sample of 200 background ex-
posures, the measured velocity of the Lyman β line has a
standard deviation of between 2 and 3 km s−1, depend-
ing on the channel. Unfortunately, some combinations
of pole and beta angle are not well corrected, leading
to velocity offsets of 10 km s−1 or more, and additional
motions – of either the gratings or some other optical
component – can shift the extracted spectra by several
km s−1 from one exposure to the next.
Figure 9 presents the measured wavelength of the in-
terstellar O I λ1039 absorption feature in 47 exposures of
the hot white dwarf KPD 0005+5106 obtained through
the HIRS aperture. The 2001 and 2002 data show little
scatter and yield a mean velocity of −10.7 ± 1.9 km s−1.
(Holberg, Barstow, and Sion 1998 report a heliocentric
velocity of −7.50 ± 0.76 km s−1 for the interstellar fea-
tures along this line of sight.) The 2003 data (all from
a single observation) span nearly 10 km s−1. The 2004
data (again from a single observation) are tightly corre-
lated but offset by ∼ 7 km s−1. These data were obtained
at a spacecraft orientation (beta and pole angles) that is
generally well corrected by our grating-motion algorithm;
apparently, some other effect is at work. The 2006 data
come from the LiF2 channel, which became the default
guide channel in 2005 (§ 6.1).
We do not recommend the general use of airglow lines
to fix the absolute wavelength scale of point-source spec-
tra for several reasons: First, airglow emission fills the
aperture, so the resulting airglow lines provide no in-
formation about the position of the target relative to
the aperture center. Second, the jitter correction (for all
channels) and the mirror-motion correction (for the SiC
channels) are inappropriate for airglow emission. Third,
the Doppler correction for the spacecraft’s orbital motion
can degrade their resolution.
TABLE 1
Stellar Parameters Adopted for FUSE Flux Standards

Name         Teff (K)   log g (cm s−2)   Vrad (km s−1)
GD 71        32,843     7.783             80.0
GD 659       35,326     7.923             33.0
GD 153       39,158     7.770             50.0
HZ 43        50,515     7.964             20.6
GD 246       53,000     7.865            −13.2
G 191-B2B    61,200     7.5               · · ·
Note. — For G 191-B2B, we use the model employed by Kruk et al. (1999) for the final Astro-2 calibration of HUT.
5.1.3. Diffuse Emission
The FUSE wavelength scale is derived from
astigmatism-corrected, point-source spectra. Extended-
source (diffuse) spectra are not corrected for astig-
matism. If point-source data are processed with the
astigmatism correction turned off, the resulting wave-
length errors are less than about 4 detector pixels,
consistent with the uncertainties in the wavelength
scale. Therefore, the present FUSE wavelength calibra-
tion should be adequate for extended-source spectra.
Airglow lines are useful for determining the zero-point
for extended sources that fill the aperture.
5.2. Flux Calibration
5.2.1. Derivation of the Effective-Area Curve
The FUSE flux calibration is based on in-flight ob-
servations of the well-studied DA (pure-hydrogen) white
dwarfs listed in Table 1, which have been observed at reg-
ular intervals throughout the mission. For each channel,
data from multiple stars are combined to track changes
in the instrument sensitivity using a technique similar
to that developed by Massa and Fitzpatrick (2000) for
the International Ultraviolet Explorer (IUE) satellite.
The algorithm yields a series of time- and wavelength-
dependent sensitivity curves as well as the spectrum of
each star, in units of raw counts, as it would have ap-
peared on a date early in the mission, which we choose
to be T0 = 1999 December 31. (We refer to the latter as
“T0 spectra.”)
For each star, we generated a synthetic spectrum using
the programs TLUSTY (version 200) and SYNSPEC
(version 48) of Hubeny and Lanz (1995). The non-LTE
pure-hydrogen model atmospheres were computed ac-
cording to a prescription by Hubeny (private commu-
nication) using 200 atmospheric layers to ensure an opti-
mal absolute flux accuracy. The atmospheric parameters
listed in Table 1, consistent with HST, IUE, and opti-
cal observations, were used to compute the models (Hol-
berg, private communication). For G 191-B2B, we used
the model employed by Kruk et al. (1999) for the final
Astro-2 calibration of HUT. Observations of these stars
with the Faint Object Spectrograph aboard HST have
shown that the models, including parameter uncertain-
ties, are consistent to within 2% at wavelengths longer
than Lyman α (Bohlin, Colina, and Finley 1995; Bohlin
1996). Uncertainties in the far-ultraviolet waveband are
slightly higher, as discussed by Kruk et al. (1999).
For each channel, the effective area in units of cm2 is
computed by dividing one or more T0 spectra in units of
counts s−1 Å−1 by a synthetic white-dwarf spectrum in
units of photons cm−2 s−1 Å−1. We find excellent agree-
ment between the effective areas derived from the differ-
ent standard stars. Sensitivity curves for the LiF1A and
SiC1A channels are presented in Fig. 10. (Effective-area
curves for all FUSE channels are available from MAST.)
The sensitivity of the LiF1A channel decreased by ∼ 15%
over the first three years of the mission, but appears to
have stabilized; that of the SiC1 channel has declined by
∼ 45% since launch and is falling still (though slowly).
Effective-area curves (AEFF CAL) for each channel and
detector segment were generated at three-month inter-
vals until the loss of the third reaction wheel in 2004
December; we plan to generate them at six-month inter-
vals for the duration of the mission.
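In outline, the effective-area computation is a ratio of the observed T0 count-rate spectrum to the synthetic model. A sketch, in which interpolating the model onto the observed wavelength grid is an assumed convenience and the function name is illustrative:

import numpy as np

def effective_area(wave, t0_count_rate, model_wave, model_photon_flux):
    # t0_count_rate     : T0 spectrum in counts s^-1 A^-1
    # model_photon_flux : synthetic spectrum in photons cm^-2 s^-1 A^-1
    model = np.interp(wave, model_wave, model_photon_flux)
    return t0_count_rate / model   # (counts s^-1 A^-1)/(photons cm^-2 s^-1 A^-1) = cm^2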
Caveats: We do not attempt to correct spectra ob-
tained through the MDRS and HIRS apertures for
changes in instrument sensitivity, but employ a single
effective-area curve for each. The low throughput of
these apertures, combined with the likelihood that their
spectra are non-photometric, makes tracking changes in
their sensitivity both more difficult and less useful than
for the LWRS aperture.
5.2.2. Systematic Uncertainties
The greatest uncertainties in a line or continuum flux
derived from a FUSE spectrum are due to systematic
effects. An estimate of the uncertainty in our flux cali-
bration can be obtained by comparing the effective-area
curves derived from different white-dwarf stars. Differ-
ences among the curves reflect errors in both the model
atmospheres and the stellar parameters upon which they
are based. In most channels, the scatter in the derived
effective areas is between 2 and 4%.
The photometric accuracy of FUSE spectra is subject
to numerous effects that cannot be fully corrected by the
CalFUSE pipeline. A target centered in an aperture of
the guide channel (LiF1 or LiF2) may not be centered
in the corresponding apertures of the other three chan-
nels. Since the loss of the first two reaction wheels in
2001, spacecraft drifts may move the target out of even
the guide-channel aperture. While the pipeline does at-
tempt to flag times when the target is out of the aper-
ture, the algorithm used is conservative in that it un-
derestimates the time lost to pointing errors (§ 4.4.6).
The user is advised to consult the count-rate plots gen-
erated by the pipeline (suffix “rat.gif”; § 4.10) and the
LIF CNT RATE and SIC CNT RATE arrays of the IDF
timeline table to determine the photometric quality of an
exposure. Using tools available from MAST or the user-
defined good-time intervals discussed in § 4.4.7, users can
reject time periods when the count rate is low or re-scale
the flux of low-count-rate exposures.
When a point-source target falls near the top or bottom
edge of an aperture, vignetting in the spectrograph may
attenuate the target flux in a wavelength-dependent way.
Astigmatism gives FUSE spectra the shape of a bow tie
(Fig. 3). If vignetting is important, then the spectrum
will lie below the center of the aperture on one side of
the bow tie and above it on the other. Significant flux
loss is possible in wavelength regions far from the center
of the bow tie.
Other systematic uncertainties are imposed by various
detector flat-field effects; their relative importance de-
pends upon one’s scientific goals. For narrow emission
lines, flux uncertainties are dominated by the moiré pat-
tern (high-frequency ripples due to beating among the
arrays of microchannel pores in the MCP stack; § 6.4),
unless the observation was obtained using an FP split or
the equivalent was achieved via grating and mirror mo-
tions. For broad features, the moiré is not important,
but larger-scale flat-field features are. These effects are
discussed in The FUSE Instrument and Data Handbook.
Finally, when fitting a spectral energy distribution, the
greatest uncertainty is caused by worms (§ 6.3), which
may depress the observed flux over tens of Ångstroms by
50% or more.
5.2.3. Extended Sources
The FUSE flux calibration is derived from point-
source targets. Because the distribution of flux in the
cross-dispersion direction differs for point and extended
sources, it is possible that the instrumental sensitivity
may also differ; this question has not been explored in
detail. Extended spectra are less affected by worms (§
6.3) than are point-source spectra. Moreover, because
the spectrum of a diffuse emitter is spread over a larger
region of the detector, it will suffer less from local flat-
field effects.
6. DISCUSSION
6.1. Spacecraft Guiding on the LiF2 Channel
The switch from FES A to FES B as the default guide
camera in 2005 July has two principal effects on the qual-
ity of FUSE data. First, tracking with FES A ensured
that targets remained in the center of the LiF1 aperture,
which is the most sensitive channel in the astrophysically-
important 1000–1100 Å waveband. Tracking with FES
B will keep targets centered in the LiF2 aperture, in-
creasing the likelihood of data loss in the LiF1 channel.
Second, in order to optimize the optical focus of FES B,
the LiF2 FPA was moved out of the focal plane of the
LiF2 primary mirror. Observations of point sources with
the LWRS aperture are unaffected, and the point-source
spectral resolution of this channel is unchanged, but the
throughput of the narrow LiF2 apertures is reduced. The
effective transmission of the apertures has not been char-
acterized in detail, but is approximately 70% for LiF2
MDRS and 15% for LiF2 HIRS, versus 98% and 60% for
their LiF1 counterparts. The spectral resolution for dif-
fuse sources is expected to be slightly lower in LiF2 than
in LiF1.
6.2. Scattered Solar Emission
In addition to airglow lines, scattered solar emission
features are present in the SiC channels when observ-
ing at high beta angles during the sunlit portion of the
orbit. Emission from C III λ977.0, Lyman β λ1025.7,
and O VI λλ1031.9, 1037.6 has been positively identified.
Emission from N III λ991.6 and N II λ1085.7 may also
be present. It is believed that sunlight is scattered by
reflective, silver-coated Teflon blankets lying above the
SiC baffles. At low beta angles, scattered solar emission
is less apparent, because the blankets are shaded by the
SiC baffles and the open baffle doors and because the ra-
diation strikes the blankets at a high angle of incidence.
It is unknown at which beta angle, if any, the solar emis-
sion completely disappears. Because the LiF channels
lie on the shadowed side of the spacecraft, solar emission
lines are not seen in LiF spectra. C III and O VI emission
observed in the SiC channels during orbital day should
always be compared with the emission observed either
with the LiF channel or during the nighttime portion of
an orbit.
Since the failure of the third reaction wheel in 2004
December, FUSE mission controllers have experimented
with the use of non-standard roll angles to improve space-
craft stability. These roll angles can place the spacecraft
in a configuration that greatly increases the sunlight scat-
tered into one of the SiC channels. The scattered light,
mostly Lyman continuum emission, appears as an in-
crease in the background at wavelengths shorter than
about 920 Å; strong, resolved Lyman lines are present at
longer wavelengths. When present, it is generally seen
in only one of the two SiC channels. We have no way to
model or subtract this emission.
6.3. The Worm
The spectra of point-source targets occasionally exhibit
a depression in flux that may span as much as 50 Å
(Fig. 11). These depressions appear in detector images
as narrow stripes roughly parallel to the dispersion axis
(Fig. 12). The stripes, known as worms, can attenuate
as much as 50% of the incident light in affected portions
of the spectrum. Worms shift in the dispersion direction
when the target moves in the aperture. They are due
to an unfortunate interaction between the horizontal fo-
cus of the spectrograph and the innermost wire grid (the
quantum-efficiency grid; § 4.3.6). Since the location of
this focus point is a function of wavelength, the strength
of a worm is exquisitely sensitive to the exact position of
the spectrum on the detector. We cannot determine this
position with sufficient precision to correct reliably for
flux lost to worms. Though most prominent in LiF1B
LWRS spectra, worms can appear in all channels and
apertures. Observers who require absolute spectropho-
tometry should carefully examine FUSE spectral image
files for the presence of worms. The redundant wave-
length coverage of the various FUSE channels can be
used to mitigate their effects.
6.4. The Moiré Pattern in Histogram Data
Since the release of CalFUSE v3.0, users have reported
strong, non-Gaussian noise in the spectra of some bright
stars observed in histogram mode. An example is shown
in Fig. 13. The high-frequency ripples have a period of
approximately 9 detector pixels, or about 0.06 Å. These
ripples are a moiré pattern due to beating among the
arrays of microchannel pores in the three layers of the
MCP stack (Tremsin et al. 1999). The moiré fringes are
strongest on segment 2B, but are also visible on segments
1A and 1B. The motion corrections applied to time-tag
data tend to smooth out this effect, but it can be quite
strong in histogram data. Where it is present, users are
advised to smooth or bin their spectra by at least one
resolution element to reduce its effects. This and other
detector artifacts are described in the FUSE Instrument
and Data Handbook.
6.5. A Note about Time
The FUSE spacecraft uses Coordinated Universal
Time (UTC). The spacecraft clock is updated periodi-
cally from the ground using a procedure that corrects
for the signal transit time from the ground station to the
spacecraft. The ground station time comes from GPS
satellites. The Instrument Data System receives a 1 Hz
signal from the spacecraft that is used to align the IDS
clock with the spacecraft clock to an accuracy of ±5 ms.
In time-tag mode, the IDS typically inserts a time stamp
into the data stream once per second, but can insert time
stamps as frequently as 125 times per second. Unfortu-
nately, the binary format of the time stamp rounds the
time value to the nearest 1/128 of a second. The two pe-
riods beat against one another, causing the loss of three
time stamps each second. Additional timing uncertain-
ties due to delays in the detector electronics have not
been measured, but are assumed to be on the order of
a few milliseconds. For most time-tag observations, for
which time stamps are recorded only once per second,
these effects can safely be ignored.
Raw time-tag files are constructed by assigning the
value of the most recent time stamp, in units of seconds
from the exposure start time, to each subsequent photon
event. The frequency of these time markers determines
the temporal resolution of the data. Photon-arrival times
are not modified by the pipeline: values are UTC as as-
signed by the IDS. In particular, photon-arrival times are
not converted to a heliocentric scale.
6.6. Combining Data from Multiple Exposures
For each FUSE observation, OPUS combines data
from individual exposures into a set of observation-level
spectra, as described in § 4.11. While these files are suffi-
cient for many projects, other projects may benefit from
specialized data processing. Here are some points to keep
in mind when combining FUSE data from multiple expo-
sures: For bright targets, the goal is to maximize spectral
resolution, so it is important to align precisely the spectra
from individual exposures before combining them. The
wavelength zero points of segments A and B are consis-
tent across each of the FUSE detectors (§ 5.1), so shifts
measured for one detector segment can safely be applied
to the other. For observations made before 2005 July, the
LiF1 spectrum is likely to have the most accurate wave-
length scale, so it serves as the standard for the other
three channels. For later observations, the LiF2 spectra
are likely to be the most accurate. A procedure to cross-
correlate and shift spectra by hand is described in The
FUSE Data Analysis Cookbook. When cross-correlating
the spectra of point-source targets, it is important to ex-
clude regions contaminated by airglow features, as their
motions are unlikely to track those of the target. For
faint targets, the goal is to optimize the fidelity of the
background model by maximizing the signal-to-noise ra-
tio on background regions of the detector, a goal achieved
by combining the IDFs from multiple exposures before
extracting the spectra. A variety of C- and IDL17-based tools to perform these and other data-analysis tasks has been generated by the FUSE project. Software and documentation are available from MAST.

17 IDL is a registered trademark of ITT Corporation for their Interactive Data Language software.

TABLE 2
Format of Raw Time-Tag Files

Array Name   Format   Description
Primary Header-Data Unit (HDU 1)
  Header only. Keywords contain exposure-specific information.
HDU 2: Photon Event List
  TIME    FLOAT    Photon arrival time (seconds)
  X       SHORT    Raw X position (0–16383)
  Y       SHORT    Raw Y position (0–1023)
  PHA     BYTE     Pulse height (0–31)
HDU 3: Good-Time Intervals
  START   DOUBLE   GTI start time (seconds)
  STOP    DOUBLE   GTI stop time (seconds)
Note. — Times are relative to the exposure start time, stored in the header keyword EXPSTART.
We acknowledge with gratitude the efforts of those who
contributed to the design and implementation of initial
versions of the CalFUSE pipeline and its associated cal-
ibration files: G. A. Kriss, E. M. Murphy, J. Murthy,
W. R. Oegerle, and K. C. Roth. This research has made
use of the Multimission Archive at the Space Telescope
Science Institute (MAST). STScI is operated by the As-
sociation of Universities for Research in Astronomy, Inc.,
under NASA contract NAS5-26555. Support for MAST
for non-HST data is provided by the NASA Office of
Space Science via grant NAG5-7584 and by other grants
and contracts. This work is supported by NASA contract
NAS5-32985.
Facility: FUSE
APPENDIX
A. FILE FORMATS
All FUSE data are stored as FITS files (Hanisch et al.
2001) containing one or more Header + Data Units
(HDUs). The first is called the primary HDU (or
HDU 1); it consists of a header and an optional N-
dimensional image array. The primary HDU may be fol-
lowed by any number of additional HDUs, called “exten-
sions.” Each extension has its own header and data unit.
FUSE employs two types of extensions, image extensions
(a 2-dimensional array of pixels) and binary table exten-
sions (rows and columns of data in binary representa-
tion). CalFUSE uses the CFITSIO subroutine library
(Pence 1999) to read and write FITS files.
A-1. Raw Time-Tag and Histogram Files
FUSE raw data files are generated by OPUS using
both data downlinked by the telescope and information
from the FUSE Mission Planning Database (§ 4.1). In-
formation regarding the target, exposure times, instru-
ment configuration, and engineering parameters is stored
in a series of header keywords in the primary HDU. All
header keywords are described in the FUSE Instrument
and Data Handbook. In raw time-tag files (Table 2), the
primary HDU consists of a header only, with no asso-
ciated image array. HDU 2 contains the photon-event
list, with arrival time (in seconds from the exposure start
time), raw detector coordinates, and pulse height for each
event in turn. HDU 3 lists good-time intervals (GTIs)
calculated by OPUS. Raw time-tag file names end with
the suffix “ttagfraw.fit.” They can be as large as 10–20
MB for the brightest targets.
The data in raw histogram files (suffix “histfraw.fit”)
are stored as a series of image extensions (Table 3).
The primary HDU contains the same header keywords
as time-tag files, along with a small (8 × 64 pixel) im-
age called the Spectral Image Allocation (SIA) table.
The SIA table is used to map regions of the detector
to on-board memory. Each element in the SIA table
corresponds to a 2048 × 16 pixel region on a detector
segment. If the element is set to 1, the photons from
the corresponding region are saved; if 0, they are dis-
carded. Additional image extensions follow, each con-
taining the binned image of some region of the detector;
these regions may overlap. In general, science data are
binned by 8 pixels in Y and unbinned in X; binning fac-
tors for each exposure are stored in the header keywords
SPECBINX and SPECBINY. While the format given in
Table 3 is standard, any number of image extensions may
be present in a histogram file. Raw histogram data files
are 1–1.5 MB in size.
A-2. Housekeeping and Jitter Files
For each exposure, a single housekeeping file is gen-
erated by OPUS from engineering data supplied by the
spacecraft (§ 4.1). Housekeeping files (suffix “hskpf.fit”)
contain 62 arrays, including spacecraft pointing informa-
tion, detector voltage levels, and various counter values,
in a single binary table extension. Arrays are tabulated
once per second, though most parameters are updated
only once every 16 seconds. Only a few of the house-
keeping arrays are employed by the pipeline. The detec-
tor high voltage and LiF, SiC, FEC, and AIC counter
arrays are used to populate the corresponding arrays in
the IDF timeline table (§ A-3).
From pointing information in the housekeeping file,
OPUS derives a jitter file (suffix “jitrf.fit”) consisting of
a single binary table extension with 4 columns: TIME,
DX, DY and TRKFLG. The time refers to the elapsed
time (in seconds) from the start of the exposure. Since
the engineering data commonly begin up to a minute be-
fore the exposure, the first few entries of this array are
negative. DX and DY are the offsets along the X (disper-
sion) and Y (cross-dispersion) directions in arc seconds.
These offsets are defined relative to the commanded posi-
tion of the telescope (presumably the target coordinates).
Finally, TRKFLG is the tracking quality flag. Its value
is −1 if the spacecraft is not tracking properly and 0 if
tracking information is unavailable. Values between 1
and 5 represent increasing levels of fidelity for DX and DY.
Additional details regarding the contents and format
of the housekeeping and jitter files are provided in The
FUSE Instrument and Data Handbook.
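For readers who want to examine the pointing history directly, the sketch below reads a jitter file with astropy (a generic FITS reader, not one of the CalFUSE tools) and keeps only seconds with usable tracking; the file name is hypothetical.

from astropy.io import fits
import numpy as np

with fits.open("example_jitrf.fit") as hdul:
    jit = hdul[1].data                       # the single binary table extension

ok = (jit["TIME"] >= 0) & (jit["TRKFLG"] >= 1)   # within the exposure, tracking trusted
print("median |DX|, |DY| (arcsec):",
      np.median(np.abs(jit["DX"][ok])),
      np.median(np.abs(jit["DY"][ok])))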
A-3. Intermediate Data File (IDF)
The IDF (suffix “idf.fit”) contains three FITS binary
table extensions; their contents are listed in Table 4. The
file’s primary header-data unit (HDU 1) is copied directly
from the raw data file. (For histogram data, the SIA ta-
ble is discarded.) Various keywords are populated by the
initialization routine (§ 4.2) and by subsequent pipeline
modules. The first binary-table extension (HDU 2) con-
tains the photon events themselves. For time-tag data,
the TIME, XRAW, YRAW, and PHA arrays are copied
from the raw data file, and the WEIGHT array is ini-
tialized to 1.0. For histogram data, each image pixel
is mapped back to its coordinates on the full detector,
which are recorded in the XRAW and YRAW arrays.
The WEIGHT array is initialized to the number of pho-
ton events in the pixel. Zero-valued pixels are ignored.
Histogram data are not “unbinned.” Each entry of the
TIME array is set to the midpoint of the exposure and
each entry of the PHA array to 20. (Both arrays are
subsequently modified.)
The first pipeline module (§ 4.3) corrects for various
detector effects; it scales the WEIGHT array to correct
for detector dead time and populates the XFARF and
YFARF arrays. (The flight alignment reference frame
represents the output of an ideal detector.) Each pho-
ton event is assigned to one of six aperture-channel com-
binations or to the background (Table 5) and a corre-
sponding code is written to the CHANNEL array (§ 4.5).
After corrections for mirror, grating, and spacecraft mo-
tions, the photon’s final coordinates are recorded in the
X and Y arrays. Though floating-point arrays, XFARF,
YFARF, X, and Y are written to the IDF as arrays of
16-bit integers using the FITS TZERO and TSCALE key-
words. This process effectively rounds each element of
XFARF and X to the nearest 0.25 of a detector pixel
and each element of YFARF and Y to the nearest 0.1 of
a detector pixel.
The screening routines (§ 4.4) use information from the
timeline table (described below) to identify photons that
violate pulse-height limits, limb-angle constraints, etc.
“Bad” photons are not deleted from the IDF, but merely
flagged. Flags are stored as single bits in an 8-bit byte.
We use two sets of flags, TIMEFLGS for time-dependent
and LOC FLGS for location-dependent effects (Table 6).
For each bit, a value of 0 indicates that the photon is
“good,” except for the day/night flag, for which 0 = night
and 1 = day. It is possible to modify these flags without
re-running the pipeline. For example, one could exclude
day-time photons or include data taken close to the earth
limb.
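As a concrete illustration of this bit-level screening, the sketch below selects photons from an IDF with simple masks. It assumes, following Table 6, that the day/night flag is the least-significant bit of TIMEFLGS, that all other bits must be zero for a "good" photon, and that the location-flag column is stored under the name printed in Table 4; none of this replaces the pipeline's own screening logic.

import numpy as np
from astropy.io import fits

DAY_BIT = 0x01          # assumed position of the day/night flag (Table 6)
OTHER_TIME_BITS = 0xFE  # every TIMEFLGS bit except the day/night flag

with fits.open("example_idf.fit") as hdul:
    events = hdul[1].data                         # HDU 2: one row whose cells are arrays
    timeflgs = np.asarray(events["TIMEFLGS"][0])
    loc_flgs = np.asarray(events["LOC FLGS"][0])  # column name as printed; may be LOC_FLGS

night_only = (timeflgs == 0) & (loc_flgs == 0)                        # night, nothing flagged
day_or_night = ((timeflgs & OTHER_TIME_BITS) == 0) & (loc_flgs == 0)  # accept day photons too
print(night_only.sum(), "night photons;", day_or_night.sum(), "day+night photons")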
The LAMBDA array contains the heliocentric wave-
length assigned to each photon (§ 4.6), and the ERGCM2
array records its “energy density” in units of erg cm−2 (§
4.7). To convert an extracted spectrum to units of flux,
one must divide by the exposure time and the width of
an output spectral bin.
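A minimal sketch of that conversion, assuming already-screened photon arrays and an EXPTIME header keyword holding the exposure duration in seconds (both assumptions made for illustration):

import numpy as np
from astropy.io import fits

BIN = 0.013   # output bin width in Angstroms (the default quoted in Sec. A-5)

with fits.open("example_idf.fit") as hdul:
    events = hdul[1].data
    lam = np.asarray(events["LAMBDA"][0])      # heliocentric wavelength per photon (A)
    ergcm2 = np.asarray(events["ERGCM2"][0])   # "energy density" per photon (erg cm^-2)
    exptime = hdul[0].header["EXPTIME"]        # exposure duration (s); keyword name assumed

edges = np.arange(lam.min(), lam.max() + BIN, BIN)
per_bin, _ = np.histogram(lam, bins=edges, weights=ergcm2)
flux = per_bin / (exptime * BIN)               # erg cm^-2 s^-1 A^-1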
The second extension (HDU 3) is a list of good-time
intervals (GTIs). The initial values are copied from the
raw data file, but they are modified by the pipeline once
the various screening routines have been run. By con-
vention, the START value of each GTI corresponds to
the arrival time of the first photon in that interval. The
STOP value is one second later than the arrival time of
the last photon in that interval. The length of the GTI
is thus STOP−START.
The third extension (HDU 4) is called the timeline ta-
ble. It contains status flags and spacecraft and detector
parameters used by the pipeline. An entry in the time-
line table is created for each second of the exposure. For
time-tag data, the first entry corresponds to the time of
the first photon event, and the final entry to the time of
the final photon event plus one second. (Should an expo-
sure’s photon-arrival times purport to exceed 55 ks, we
create timeline entries only for each second in the good-
time intervals.) For histogram data, the first element of
the TIME array is set to zero and the final element to
EXPTIME+1 (where EXPTIME is the exposure dura-
tion computed by OPUS). Because we require that EXP-
TIME equal both Σ (STOP−START), summed over all
entries in the GTI table, and the number of good times
in the timeline table, we must flag the final second of
each GTI as bad. No photons are associated with the
STOP time of a GTI.
Only the day/night and OPUS flags of the STA-
TUS FLAGS array are populated when the IDF is cre-
ated; the other flags are set by the various screening
routines (§ 4.4). The elements of the TIME SUNSET,
TIME SUNRISE, LIMB ANGLE, LONGITUDE, LAT-
ITUDE, and ORBITAL VEL arrays are computed from
the orbital elements in the FUSE.TLE file. The
HIGH VOLTAGE array is populated with values from
the housekeeping file. The LIF CNT RATE and
SIC CNT RATE arrays are initially populated with val-
ues derived from the LiF and SiC counter arrays in the
housekeeping file. For time-tag data, these arrays are
eventually updated with the actual count rates within
the target aperture, excluding regions contaminated by
airglow. The FEC CNT RATE and AIC CNT RATE,
described in § 4.3.2, are also derived from counter ar-
rays in the housekeeping file. For time-tag data, the
BKGD CNT RATE array is populated by the burst-
rejection routine (§ 4.4.5) and represents the count rate
in pre-defined background regions of the detector, ex-
cluding airglow features. The array is not populated for
histogram data. The YCENT LIF and YCENT SIC ar-
rays trace the centroid of the target spectra with time
before motion corrections are applied. These two arrays
are not used by the pipeline.
Raw time-tag files (§ A-1) employ the standard FITS
binary table format, listing TIME, X, Y, PHA for each
photon event in turn. The intermediate data files have a
slightly different format, listing all of the photon arrival
times, then the X coordinates, then the Y coordinates.
Formally, the table has only one row, and each element of
the table is an array. (To use the STSDAS terminology,
IDFs are written as 3-D tables.) The MRDFITS func-
tion from the IDL Astronomy User’s Library (Landsman
1993) can read both file formats; some older FITS read-
ers cannot. Note that, because HDUs 2 and 4 of the
IDFs contain floating-point arrays stored as shorts (using
the TZERO and TSCALE keywords), calls to MRDFITS
must include the keyword parameter FSCALE.
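In Python, a generic reader such as astropy.io.fits applies the TZERO and TSCALE column scaling automatically, so the equivalent of the FSCALE behaviour comes for free (a minimal sketch; the file name is hypothetical):

from astropy.io import fits

with fits.open("example_idf.fit") as hdul:
    events = hdul[1].data      # HDU 2: a single row whose cells are arrays
    x = events["X"][0]         # returned as floats, scaling already applied
    timeline = hdul[3].data    # HDU 4: timeline table, one entry per second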
A-4. Bad-Pixel Maps (BPM Files)
The BPM files (suffix “bpm.fit”; § 4.8) consist of a
single binary table extension. Its format is similar to
that of the IDF, but it contains only five columns: X, Y,
CHANNEL, WEIGHT, and LAMBDA. The WEIGHT
column, whose values range from 0 to 1, represents the
fraction of the exposure that each pixel was affected by
a dead spot. The BPM files are not archived, but can
be generated from the IDF and jitter file using pipeline
software available from MAST.
A-5. Extracted Spectral Files
Extracted spectra (suffix “fcal.fit”; § 4.9) are stored
in a single binary table extension. Its contents are pre-
sented in Table 7. Note that the spectra are binned in
wavelength. The bin size can be set by the user, but the
default is 0.013 Å, which corresponds to about 2 detec-
tor pixels or about one-fourth of a spectral resolution el-
ement. The WAVE array records the central wavelength
of each spectral bin. For time-tag data, the COUNTS
array represents the total of all (raw) photon events as-
signed to the target aperture. For histogram data, the
COUNTS array is simply the WEIGHTS array divided
by the mean dead-time correction for the exposure. If op-
timal extraction is performed, the values of the FLUX,
ERROR, WEIGHTS, and BKGD arrays are determined
by that algorithm. As a result, the ratio of WEIGHTS
to COUNTS is constant only for histogram data. The
QUALITY array records the percentage of the extrac-
tion window containing valid data. It is 100 if no bad
pixels fell within the wavelength bin, 0 if the entire bin
was lost to bad pixels.
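As a usage sketch (not part of CalFUSE), the extracted arrays can be read and filtered on QUALITY with any FITS library; np.ravel below makes the snippet indifferent to whether the columns come back as plain vectors or as single-row array cells.

import numpy as np
from astropy.io import fits

with fits.open("example_fcal.fit") as hdul:
    spec = hdul[1].data

wave = np.ravel(spec["WAVE"])
flux = np.ravel(spec["FLUX"])
err = np.ravel(spec["ERROR"])
quality = np.ravel(spec["QUALITY"])

good = quality > 50          # illustrative cut: keep bins with >50% usable window
snr = flux[good] / np.maximum(err[good], 1e-30)
print("median S/N per 0.013 A bin:", np.median(snr))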
REFERENCES
Bohlin, R. C. 1996, AJ, 111, 1743
Bohlin, R. C., Colina, L., and Finley, D. S. 1995, AJ, 110, 1316
Feldman, P. D., Sahnow, D. J., Kruk, J. W., Murphy, E. M., and
Moos, H. W. 2001, J. Geophys. Res., 106, 8119
Hanisch, R. J., Farris, A., Greisen, E. W., Pence, W. D.,
Schlesinger, B. M., Teuben, P. J., Thompson, R. W., and
Warnock, A. 2001, A&A, 376, 359
Holberg, J. B., Barstow, M. A., and Sion, E. M. 1998, ApJS, 119,
Horne, K. 1986, PASP, 98, 609
Hubeny, I. and Lanz, T. 1995, ApJ, 439, 875
Kruk, J. W., Brown, T. M., Davidsen, A. F., Espey, B. R.,
Finley, D. S., and Kriss, G. A. 1999, ApJS, 122, 299
Landsman, W. B. 1993, in ASP Conf. Ser. 52: Astronomical Data
Analysis Software and Systems II, ed. R. J. Hanisch, R. J. V.
Brissenden, and J. Barnes (San Francisco: ASP), p. 246.
Lemoine, M. et al. 2002, ApJS, 140, 67
Massa, D. and Fitzpatrick, E. L. 2000, ApJS, 126, 517
McCandliss, S. R., France, K., Lupu, R. E., Burgh, E. B.,
Sembach, K., Kruk, J., Andersson, B-G, and Feldman, P. D.
2007, ApJ, in press (astro-ph/0701439)
Moos, H. W. et al. 2000, ApJ, 538, L1
Pence, W. 1999, in ASP Conf. Ser. 172: Astronomical Data
Analysis Software and Systems VIII, ed. D. M. Mehringer,
R. L. Plante, and D. A. Roberts (San Francisco: ASP), p. 487
Rose, J. F., Heller-Boyer, C., Rose, M. A., Swam, M., Miller, W.,
Kriss, G. A., and Oegerle, W. R. 1998, in Proc. SPIE 3349,
Observatory Operations to Optimize Scientific Return, p. 410
Sahnow, D. J. et al. 2000a, ApJ, 538, L7
Sahnow, D. J., Gummin, M. A., Gaines, G. A., Fullerton, A. W.,
Kaiser, M. E., and Siegmund, O. H. 2000b, in Proc. SPIE 4139,
Instrumentation for UV/EUV Astronomy and Solar Missions,
ed. S. Fineschi, C. M. Korendyke, O. H. Siegmund, and B. E.
Woodgate, p. 149
Siegmund, O. H. et al. 1997, in Proc. SPIE 3114, EUV, X-Ray,
and Gamma-Ray Instrumentation for Astronomy VIII, ed.
O. H. Siegmund and M. A. Gummin, p. 283
Tremsin, A. S., Siegmund, O. H., Gummin, M. A., Jelinsky,
P. N., and Stock, J. M. 1999, Appl. Opt., 38, 2240
[Figure labels: Al+LiF coated mirrors #1 and #2, SiC coated mirrors #1 and #2, Al+LiF coated gratings #1 and #2, SiC coated grating #2, focal plane assemblies (4), detectors (2), Rowland circles]
Fig. 1.— Schematic of the FUSE instrument optical system. The telescope focal lengths are 2245 mm, and the Rowland circle diameters
are 1652 mm. (Figure from Moos et al. 2000.)
[Figure axis: X Coordinate (arcsec)]
Fig. 2.— The FUSE apertures projected onto the sky. In the FPA coordinate system, the LWRS, HIRS, and MDRS apertures are centered
at Y = −118.′′07, −10.′′27, and +90.′′18, respectively. The reference point (RFPT) at X = +55.′′18 is not an aperture; when a target is
placed at this location, the three apertures sample the background sky. With north on top and east on the left, this diagram corresponds
to an aperture position angle of 0◦. Positive aperture position angles correspond to a counter-clockwise rotation of the spacecraft about
the target aperture. This diagram represents only a portion of the FPA; its active area is 19′×19′.
[Figure axes: detector X coordinate (pixels) and wavelength (Å)]
Fig. 3.— Image of detector segment 1A during a bright-earth observation. All lines are geocoronal. Note the strong Lyman β (1026 Å)
feature in each spectrum. The data have been fully corrected for detector and other distortions. Extended-source extraction windows for all
three apertures in both the LiF and SiC channels are marked; point-source extraction windows are somewhat narrower in Y. Instrumental
astigmatism is responsible for the bow-tie shape of each spectrum. The region shown corresponds to detector pixels 900 to 15,300 in X and
0 to 915 in Y.
Fig. 4.— Image of detector 1A in raw X and Y coordinates showing geometric distortion. The image shows only a portion of the detector.
It was constructed from 3 separate exposures with stars in the HIRS, MDRS, and LWRS apertures of the LiF1A detector.
Fig. 5.— Segments of detector 1A images showing filamentary (top) and checkerboard (bottom) bursts. Checkerboard bursts typically
fill the detector, save for the region around the LiF Lyman β lines on detector 1A.
Fig. 6.— Segment of LiF1A spectrum before (bottom) and after (top) astigmatism correction. Note the reduction of curvature in the
absorption features. A detector dead spot is present on the left side of the figure.
Fig. 7.— Night-time scattered-light image for detector 1A. Note the vertical scattered-light stripe to the right of the image center.
[Figure axis: X Coordinate (pixels)]
Fig. 8.— Geometric distortion in the X coordinate of the LiF1A LWRS channel. Data represent the difference between the measured
locations of H2 lines in the spectrum of GCRV 12336 and those predicted by a theoretical dispersion relation. The solid line is a spline fit
to the residuals.
[Figure axes: exposure number and observation date, 2001/09 to 2006/12]
Fig. 9.— Measured heliocentric velocity of the interstellar O I λ1039.23 absorption feature in each of 47 exposures of the white dwarf
KPD 0005+5106 through the high-resolution (HIRS) aperture. Holberg et al. 1998 report a heliocentric velocity of −7.50 ± 0.76 km s−1
for the interstellar features along this line of sight.
[Figure axes: date (year) and wavelength (Å); curves labeled LiF1A and SiC1A]
Fig. 10.— FUSE sensitivity as a function of time. Upper panel: Effective area of the LiF1A and SiC1A channels, averaged over the
wavelength region 1030–1040 Å. The gap between 2004 October and 2006 May represents the period after the loss of the third reaction
wheel, when few calibration targets were observed. Lower panel: Effective-area curves for the LiF1A and SiC1A channels, dated 1999 and
2006. (For both channels, the 1999 curve has the higher effective area.)
[Figure axis: wavelength (Å)]
Fig. 11.— Point-source spectra showing the effects of the worm. Spectra A and B, obtained with the LiF1B channel, show deep depressions
near 1145 and 1160 Å, respectively. The wavelength of maximum attenuation varies with the Y position of the target within the aperture.
Spectrum C, obtained with the LiF2A channel, is unattenuated.
Fig. 12.— Detector images showing the effects of the worm. In these negative images, worms appear as bright stripes parallel to the
dispersion axis. The data shown correspond to spectra A and B in Fig. 11 and span wavelengths between 1134 and 1187 Å.
[Figure axis: wavelength (Å)]
Fig. 13.— Moiré pattern in the LiF2B spectrum of the star HD 209339, obtained in histogram mode. The associated error array is
overplotted. The moiré ripples are strongest on this detector segment, but are also seen on segments 1A and 1B.
TABLE 3
Format of Raw Histogram Files
HDU   Contents   Image Sizea (binned pixels)
1b SIA Tablec 8× 64
2 SiC Spectral Image (12–20) × 16384
3 LiF Spectral Image (12–20) × 16384
4 Left Stim Pulse 2 × 2048
5 Right Stim Pulse 2 × 2048
Note. — While this table describes the format of a typical
raw histogram file, any number of HDUs are allowed.
a Quoted image sizes assume the standard histogram binning: by
8 pixels in Y, unbinned in X. Actual binning factors are given in
the primary file header.
b Header keywords of HDU 1 contain exposure-specific informa-
tion.
c The SIA table describes which regions of the detector are in-
cluded in the file.
TABLE 4
Format of Intermediate Data Files
Array Name Format Description
Primary Header-Data Unit (HDU 1)
Header only. Keywords contain exposure-specific information.
HDU 2: Photon Event List
TIME FLOAT Photon arrival time (seconds)
XRAW SHORT Raw X coordinate (0–16383)
YRAW SHORT Raw Y coordinate (0–1023)
PHA BYTE Pulse height (0–31)
WEIGHT FLOAT Photons per binned pixel for HIST data,
initially 1.0 for TTAG data
XFARF FLOAT X coordinate in geometrically-corrected frame
YFARF FLOAT Y coordinate in geometrically-corrected frame
X FLOAT X coordinate after motion corrections
Y FLOAT Y coordinate after motion corrections
CHANNEL BYTE Aperture+channel ID for the photon (Table 5)
TIMEFLGS BYTE Time flags (Table 6)
LOC FLGS BYTE Location flags (Table 6)
LAMBDA FLOAT Wavelength of photon (Å)
ERGCM2 FLOAT Energy density of photon (erg cm−2)
HDU 3: Good-Time Intervals
START DOUBLE GTI start time (seconds)
STOP DOUBLE GTI stop time (seconds)
HDU 4: Timeline Table
TIME FLOAT Seconds from exposure start time
STATUS FLAGS BYTE Status flags
TIME SUNRISE SHORT Seconds since sunrise
TIME SUNSET SHORT Seconds since sunset
LIMB ANGLE FLOAT Limb angle (degrees)
LONGITUDE FLOAT Spacecraft longitude (degrees)
LATITUDE FLOAT Spacecraft latitude (degrees)
ORBITAL VEL FLOAT Component of spacecraft velocity
in direction of target (km/s)
HIGH VOLTAGE SHORT Detector high voltage (unitless)
LIF CNT RATE SHORT LiF count rate (counts/s)
SIC CNT RATE SHORT SiC count rate (counts/s)
FEC CNT RATE FLOAT FEC count rate (counts/s)
AIC CNT RATE FLOAT AIC count rate (counts/s)
BKGD CNT RATE SHORT Background count rate (counts/s)
YCENT LIF FLOAT Y centroid of LiF target spectrum (pixels)
YCENT SIC FLOAT Y centroid of SiC target spectrum (pixels)
Note. — Times are relative to the exposure start time, stored in the header
keyword EXPSTART. To conserve memory, floating-point values are stored as shorts
(using the FITS TZERO and TSCALE keywords) except for TIME, WEIGHT,
LAMBDA and ERGCM2, which remain floats.
TABLE 5
Aperture Codes for IDF
CHANNEL Array
Aperture LiF SiC
HIRS 1 5
MDRS 2 6
LWRS 3 7
Not in an aperture 0
TABLE 6
Bit Codes for IDF Time and
Location Flags
Bit Value
Time Flags
8 User-defined bad-time interval
7 Jitter (target out of aperture)
6 Not in an OPUS-defined GTI or
Photon arrival time unknown
5 Burst
4 High voltage reduced
3 SAA
2 Limb angle
1 Day/Night flag (N = 0, D = 1)
Location Flags
8 Not used
7 Fill data (histogram mode only)
6 Photon in bad-pixel region
5 Photon pulse height out of range
4 Right stim pulse
3 Left stim pulse
2 Airglow feature
1 Not in detector active area
Note. — Flags are listed in order from
most- to least-significant bit.
TABLE 7
Format of Extracted Spectral Files
Array Name Format Description
Primary Header-Data Unit (HDU 1)
Header only. Keywords contain exposure-specific information.
HDU 2: Extracted Spectrum
WAVE FLOAT Wavelength (Å)
FLUX FLOAT Flux (erg cm−2 s−1 Å−1)
ERROR FLOAT Gaussian error (erg cm−2 s−1 Å−1)
COUNTS INT Raw counts in extraction window
WEIGHTS FLOAT Raw counts corrected for dead time
BKGD FLOAT Estimated background in extraction window (counts)
QUALITY SHORT Percentage of window used for extraction (0–100)
|
0704.0900 | Voltage-Current curves for small Josephson junction arrays | Voltage-Current curves for small Josephson junction arrays
B. Douçot1 and L.B. Ioffe2
Laboratoire de Physique Théorique et Hautes Énergies, CNRS UMR 7589,
Universités Paris 6 et 7, 4, place Jussieu, 75252 Paris Cedex 05 France
Center for Materials Theory, Department of Physics and Astronomy,
Rutgers University 136 Frelinghuysen Rd, Piscataway NJ 08854 USA
We compute the current voltage characteristic of a chain of identical Josephson circuits charac-
terized by a large ratio of Josephson to charging energy that are envisioned as the implementation
of topologically protected qubits. We show that in the limit of small coupling to the environment
it exhibits a non-monotonous behavior with a maximum voltage followed by a parametrically large
region where V ∝ 1/I . We argue that its experimental measurement provides a direct probe of the
amplitude of the quantum transitions in constituting Josephson circuits and thus allows their full
characterization.
I. INTRODUCTION
In the past years, the dramatic experimental progress
in the design and fabrication of quantum two level sys-
tems in various superconducting circuits1 has raised a
hope that such solid state devices could eventually serve
as basic logical units in a quantum computer (qubits).
However, a very serious obstacle on this path is the ubiq-
uitous decoherence, which in practice limits the typical
life-time of quantum superpositions of two distinct log-
ical states of a qubit to microseconds. This is far from
being sufficient to satisfy the requirements for implement-
ing quantum algorithms and providing systematic error
correction.2
This has motivated us to propose some alternative
ways to design Solid-State qubits, that would be much
less sensitive to decoherence than those presently avail-
able. These protected qubits are finite size Josephson
junction arrays in which interactions induce a degener-
ate ground-state space characterized by the remarkable
property that all the local operators induced by couplings
to the environment act in the same way as the identity
operator. These models fall in two classes. The first
class is directly inspired by Kitaev’s program of topo-
logical quantum computation,3 and amounts to simulat-
ing lattice gauge theories with small finite gauge groups
by a large Josephson junction lattice.4,5,6 The second
class is composed of smaller arrays with sufficiently large
and non-Abelian symmetry groups allowing for a persis-
tent ground-state degeneracy even in the presence of a
noisy environment.7,8 All these systems share the prop-
erty that in the classical limit for the local superconduct-
ing phase variables (i.e. when the Josephson coupling is
much larger than the charging energy), the ground-state
is highly degenerate. The residual quantum processes
within this low energy subspace lift the classical degen-
eracy in favor of macroscopic coherent superpositions of
classical ground-states. The simplest example of such
system is based on chains of rhombi (Fig. 1) frustrated
by magnetic field flux Φ = Φ0/2 that ensures that in the
classical limit each rhombus has two degenerate states.8
Practically, it is important to be able to test these ar-
rays and optimize their parameters in relatively simple
experiments. In particular one needs the means to verify
the degeneracy of the classical ground states, the pres-
ence of the quantum coherent processes between them
and measure their amplitude. Another important pa-
rameter is the effective superconducting stiffness of the
fluctuating rhombi chain. The classical degeneracy and
chain stiffness can be probed by the experiments dis-
cussed in9; they are currently being performed10. The
idea is that in a chain of rhombi threaded individually by
half a superconducting flux quantum, the non-dissipative
current is carried by charge 4e objects,11,12 so that the
basic flux quantum for a large closed chain of rhombi
becomes h/(4e) instead of h/(2e) which can be directly
observed by measuring the critical current of the loop
made from such chain and a large Josephson junction.
The main goal of the present paper is to discuss a prac-
tical way to probe directly the quantum coherence associ-
ated with these tunneling processes between macroscop-
ically distinct classical ground-states. In principle, it is
relatively simple to implement, since it amounts to mea-
suring the average dc voltage generated across a finite
Josephson junction array in the presence of a small cur-
rent bias (i. e. this bias current has to be smaller than
the critical current of the global system). The physi-
cal mechanism leading to this small dissipation is very
interesting by itself; it was originally discussed in a sem-
inal paper by Likharev and Zorin16 in the context of a
single Josephson junction. Consider one element (single
junction or a rhombus) of the chain, and denote by φ
the phase difference across this element. When it is dis-
connected from the outside world, its wave-function Ψ
is 2πζ-periodic in φ where ζ = 1 for a single junction
and ζ = 1/2 for a rhombus. This reflects the quanti-
zation of the charge on the island between the elements
which can change by integer multiples of 2e/ζ. If φ is
totally classical, the element’s energy is not sensitive to
the choice of a quasi-periodic boundary condition of the
form Ψ(φ+ 2πζ) = exp(i2πζq)Ψ(φ), where q represents
the charge difference induced across the rhombus. In the
presence of coherent quantum tunneling processes for φ,
the energy of the element ǫ(q) will acquire q-dependence,
with a bandwidth directly related to the basic tunnel-
ing amplitude. Whereas q is constrained to be integer
for an isolated system, it is promoted to a genuine con-
tinuous degree of freedom when the array is coupled to
leads and therefore to a macroscopic dissipative environ-
ment. So, as emphasized by Likharev and Zorin16, the
situation becomes perfectly analogous to the Bloch the-
ory of a quantum particle in a one-dimensional periodic
potential, where the phase φ plays the role of the posi-
tion, and q of the Bloch momentum. A finite bias cur-
rent tilts the periodic potential for the phase variable, so
that in the absence of dissipation, the dynamics of the
phase exhibits Bloch oscillations, very similar to those
which have been predicted17 and observed18,19 for elec-
trons in semi-conductor super-lattices. If the driving cur-
rent is not too large, it is legitimate to neglect inter-band
transitions induced by the driving field, and one obtains
the usual spectrum of equally spaced localized levels of-
ten called a Wannier-Stark ladder. In the presence of
dissipation, these Wannier-Stark levels acquire a finite
life-time, and therefore the time-evolution of the phase
variable is characterized by a slow and uniform drift su-
perimposed on the faster Bloch oscillations. This drift is
translated into a finite dc voltage by the Josephson re-
lation 2eV = ~(dφ/dt). This voltage decreases with cur-
rent until one reaches the current bias high enough to
induce the interband transition. At this point the phase
starts to slide down fast and the junction switches into
a normal state. In the context of Josephson junctions
these effects were first observed in the experiments on
Josephson contacts with large charging energy20,21,22,23
and more recently24,25 in the semiclassical (phase) regime
of interest to us here. Bloch oscillations in the quantron-
ium circuit driven by a time-dependent gate voltage have
also been recently observed.26
This picture holds as long as the dissipation affecting
the phase dynamics is not too strong, so that the radia-
tive width of the Wannier-Stark levels is smaller than the
nearest-level spacing (corresponding to phase translation
by 2πζ) that is proportional to the bias current. This
provides a lower bound for the bias current which has
to be compatible with the upper bound coming from the
condition of no inter-band transitions. As we shall see,
this requires a large real part of the external impedance
Zω ≫ RQ as seen by the element at the frequency of the
Bloch oscillation, where the quantum resistance scale is
RQ = h/(4e²). This condition is the most stringent in or-
der to access experimentally the phenomenon described
here. Note that this physical requirement is not lim-
ited to this particular experimental situation, because
any circuit exploiting the quantum coherence of phase
variables, for instance for quantum information process-
ing, has to be imbedded in an environment with a very
large impedance in order to limit the additional quantum
fluctuations of the phase induced by the bath. The intrin-
sic dissipation of Josephson elements will of course add to
the dissipation produced by external circuitry, but we ex-
pect that in the quantum regime (i.e. with sizable phase
fluctuations) considered here, this additional impedance
will be of order of RQ at the superconducting transition
temperature, and will grow exponentially below. Thus,
the success of the proposed measurements is also a test of
the quality of the environment for the circuits intended
to serve as protected qubits.
In many physical realizations Zω has a significant fre-
quency dependence and the condition Zω ≫ RQ is sat-
isfied only in a finite frequency range ωmax > ω > ωmin.
This situation is realized, for example, when the Joseph-
son element is decoupled from the environment by a
long chain of larger Josephson junctions (Section V).
In this case the superconducting phase fluctuations are
suppressed at low frequencies implying that a phase co-
herence and thus Josephson current reappears at these
scales. The magnitude of the critical current is however
strongly suppressed by the fluctuations at high frequen-
cies. This behavior is reminiscent of the reappearance
of the Josephson coupling induced by the dissipative en-
vironment observed in27. At higher energy scales fluc-
tuations become relevant, the phase exhibits Bloch os-
cillations resulting in the insulating behavior described
above. Thus, in this setup one expects a large hierarchy
of scales: at very low currents one observes a very small
Josephson current, at larger currents an almost insulat-
ing behavior and finally a switching into the normal state
at largest currents.
In the case of a chain of identical elements, the total
dc voltage is additive, but Bloch oscillation of different
elements might happen either in phase or in antiphase.
In the former case the ac voltages add increasing the
dissipation in the external circuitry; while in the latter
case the dissipation is low and the individual elements
get more decoupled from the environment. As we show in
Section III a small intrinsic dissipation of the individual
elements is crucial to ensure the antiphase scenario.
This paper is organized as follows. In section II, we
present a semi-classical treatment of the voltage versus
bias current curves for a single Josephson element. We
show that this gives an accurate way to measure the effec-
tive dispersion relation ǫ(q) of this element, which fully
characterizes its quantum transition amplitude. Further,
we show that application of the ac voltage provides a
direct probe of the periodicity (2π versus π) of each ele-
ment. In Section III we consider the chain of these ele-
ments and show that under realistic assumptions about
the dynamics of individual elements, it provides much
more efficient decoupling from the environment. Sec-
tion IV focusses on the dispersion relation expected in
a practically important case of a fully frustrated rhom-
bus which is the building block for the protected arrays
considered before.5,8 In this case, the band structure has
been determined by numerical diagonalizations of the
quantum Hamiltonian. An important result of this anal-
ysis is that even in the presence of relatively large quan-
tum fluctuations, the effective band structure is always
well approximated by a simple cosine expression. Finally,
in section V we discuss the conditions for the experimen-
tal implementation of this measurement procedure and
the full V (I) characteristics expected in realistic setup.
After a Conclusion section, an Appendix presents a full
quantum mechanical derivation of the dc voltage when
the bias current is small enough so that inter-band tran-
sitions can be neglected, and large enough so that the
level decay rate can simply be estimated from Fermi’s
golden rule.
II. SEMI-CLASSICAL EQUATIONS FOR A
SINGLE JOSEPHSON ELEMENT
Let us consider the system depicted on Fig. (1). In
the absence of the current source, the energy of the one
dimensional chain of N Josephson elements is a 2πζ pe-
riodic function of the phase difference φ across the chain.
The current source is destroying this periodicity by in-
troducing the additional term −~(I/2e)φ in the system’s
Hamiltonian. Because φ is equal to the sum of phase
differences across all the individual elements, it seems
that the voltage generated by the chain is N times the
voltage of a chain reduced to a single element. This is,
however, not the case: the individual elements are cou-
pled by the common load, and furthermore, as we show
in the next section, their collective behavior is sensitive
to the details of the single element dynamics. In this sec-
tion, we consider the case of a single Josephson element
(N = 1), rederive the results of Likharev and Zorin16 for
single Josephson contact and generalize them for more
complicated structures such as rhombus and give ana-
lytic equations convenient for data comparison.
The dynamics of a single Josephson contact is analo-
gous to the motion of a quantum particle (with a charge
e) in a one-dimensional periodic potential (with period
a) in the presence of a static and uniform force F , the
phase-difference φ playing the role of the spatial coordi-
nate x of the particle.16 In the limit of a weak external
force, it is natural to start by computing the band struc-
ture ǫn(k) for k in the first Brillouin zone [−π/a, π/a],
n being the band label. A first natural approximation is
to neglect interband transitions induced by the driving
field. This is possible provided the Wannier-Stark energy
gap ∆B = Fa is smaller than the typical band gap ∆ in
zero external field. As long as ∆B is also smaller than
the typical bandwidth W , the stationary states of the
Schrödinger equation spread over many (roughly W/∆B)
periods, so we may ignore the discretization (i.e. one
quantum state per energy band per spatial period) im-
posed by the projection onto a given band. We may
therefore construct wave-packets whose spatial extension
∆x satisfies a≪ ∆x≪ aW/∆B, and the center of such a
wave-packet evolves according to the semi-classical equa-
tions:
dx/dt = (1/ħ) dǫn(k)/dk, (1)
ħ dk/dt = F. (2)
In the presence of dissipation, the second equation is
modified according to:
ħ dk/dt = F − (m∗/τ)(dx/dt), (3)
where m∗ is the effective mass of the particle in the n-th
band and τ is the momentum relaxation time introduced
by the dissipation.
FIG. 1: The experimental setup discussed in this paper: a
chain of identical building blocks represented by shaded rect-
angle that are biased by the external current source charac-
terized by the impedance Z(ω). The internal structure of the
block that is considered in more detail in the following sec-
tions is either a rhombus (4 junction loop) frustrated by half
flux quantum, or a single Josephson junction, but the re-
sults of the section II can be applied to any circuit of this
form provided that the junctions in the elementary building
blocks are in the phase regime, i.e. EJ ≫ EC .
In the context of a Josephson circuit, we have to diago-
nalize the Hamiltonian describing the array as a function
of the pseudo-charge q associated with the 2πζ periodic
phase variable φ. The quantity q controls the periodic
boundary condition imposed on φ, namely the system’s
wave-function is multiplied by exp(i2πq) when φ is in-
creased by 2πζ. From this phase-factor, we see that
the corresponding Brillouin zone for q is the interval
[−1/2, 1/2]. For a simple Josephson contact (ζ = 1), the
fixed value of q means that the total number of Cooper
pairs on the site carrying the phase φ is equal to q plus an
arbitrary integer. For a doubly periodic element, such as
rhombus (ζ = 1/2), charge is counted in the units of 4e.
To simplify the notations we assume usual 2π periodicity
(ζ = 1) in this and the following Sections and restore
the ζ-factors in Sections IV, V. From the band structure
ǫn(q), we may write the semi-classical equations of mo-
tion in the presence of the bias current I and the outer
impedance Z as:
dφ/dt = (1/ħ) dǫn(q)/dq, (4)
dq/dt = I/(2e) − (ZQ/Z) dφ/dt, (5)
where we used the Josephson relation for the voltage drop
V across the Josephson element, V = (ħ/2e)(dφ/dt),
and defined ZQ = ħ/(4e²).
This semi-classical model exhibits two different
regimes. Let us denote by ωmax the maximum value of
the “group velocity” |dǫn(q)/(~dq)|. If the driving cur-
rent is small (I < Ic = 2eωmaxZQ/Z), it is easy to see
that after a short transient, the system reaches a station-
ary state where q is constant and:
(1/ħ) dǫn(q)/dq = IZ/(2e ZQ), (6)
that is: V = ZI. Thus, at I < Ic the current flows en-
tirely through the external impedance, i.e. the Joseph-
son elements become effectively insulating due to quan-
tum phase fluctuations. Indeed, a Bloch state writ-
ten in the phase reprentation corresponds to a fixed
value of the pseudo-charge q and non-zero dc voltage
(1/2e)(dǫn/dq). Note that the measurement of the maxi-
mal value Vc of the voltage on this linear branch directly
probes the spectrum of an individual Josephson block,
because Vc = ħωmax/(2e).
At stronger driving (I > Ic), it is no longer possible
to find a stationary solution for q. The system enters
therefore a regime of Bloch oscillations. In the absence of
dissipation (Z/ZQ → ∞), the motion is periodic in time
for both φ and q. A small but finite dissipation preserves
the periodicity in q, but induces an average drift in φ or
equivalently a finite dc voltage. To see this, we first note
that the above equations of motion imply:
dq/dt = I/(2e) − (ZQ/(Z ħ)) dǫn(q)/dq. (7)
Since the right-hand side is a periodic function of q with
period 1, q(t) is periodic with the period T(I) given by:
T(I) = ∫_{−1/2}^{1/2} f(q) dq, (8)
f(q) = [ I/(2e) − (ZQ/(Z ħ)) dǫn(q)/dq ]^{−1}. (9)
On the other hand, the instantaneous dissipated power
reads:
d/dt [ ǫn(q) − (ħI/2e) φ ] = −ħ (ZQ/Z) (dφ/dt)². (10)
Because q(t) is periodic, averaging this expression over
one period gives:
I 〈dφ/dt〉 = 2e (ZQ/Z) 〈(dφ/dt)²〉, (11)
or, equivalently:
〈V〉 = ħ (ZQ/(Z I)) 〈(dφ/dt)²〉. (12)
Using the equations of motion, we get more explicitly:
〈V〉 = (1/(4e² Zω I)) [ ∫_{−1/2}^{1/2} (dǫn/dq)² f(q) dq ] / [ ∫_{−1/2}^{1/2} f(q) dq ]. (13)
Here we emphasized by the subscript that Zω might have
some frequency dependence. As we show in Appendix,
the dissipation actually occurs at the frequency of Bloch
oscillations that becomes ωB = 2πI/2e in the limit of
large currents. In the limit of large currents, I ≫ Ic,
(that can be achieved for large impedances) we may ap-
proximate f(q) by a constant, so the voltage is given by
the simpler expression:
〈V (I ≫ Ic)〉 = (1/(4e² Zω I)) ∫_{−1/2}^{1/2} (dǫn/dq)² dq. (14)
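As a consistency check for the harmonic band ǫ(q) = 2w cos 2πq introduced below (for which Vc = 4πw/(2e)), the integral in Eq. (14) can be evaluated explicitly:

\[
\int_{-1/2}^{1/2}\Big(\frac{d\epsilon}{dq}\Big)^{2}dq=(4\pi w)^{2}\int_{-1/2}^{1/2}\sin^{2}(2\pi q)\,dq=8\pi^{2}w^{2},
\qquad
\langle V\rangle\simeq\frac{2\pi^{2}w^{2}}{e^{2}Z_{\omega}I}=\frac{V_{c}^{2}}{2Z_{\omega}I},
\]

in agreement with the large-current limit of Eq. (17) quoted in the text below.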
On the other hand, when I approaches Ic from above,
Bloch oscillations become very slow and f(q) is strongly
peaked in the vicinity of the maximum of the group ve-
locity. Since this velocity is in general a smooth function
of q, we get in this limit for the maximal dc voltage:
Vc = ħ²ω²max/(4e² Z0 Ic) = Z0 Ic. (15)
FIG. 2: Typical I − V curve of a single Josephson element
measured by the circuit shown in Fig. 1.
In the simplest case of a purely harmonic dispersion,
ǫ(q) = 2w cos 2πq, the maximal voltage Vc = 4πw/(2e).
If one can further neglect the frequency dependency of
Z, the V (I) can be computed analytically:
〈V〉 = ZI,   I < Ic, (16)
〈V〉 = Z Ic² / ( I + √(I² − Ic²) ),   I > Ic. (17)
We show this dependence in Fig. 2. This expression (16),
(17) is related to the known result for Z ≪ ZQ13,14 by
the duality15 transformation:
V → I,   I → V,   Z → 1/Z.
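As a quick numerical illustration, the short sketch below evaluates Eqs. (16)-(17) for a purely harmonic band; Z and Ic are in reduced, purely illustrative units.

import numpy as np

Z, Ic = 1.0, 1.0
I = np.linspace(0.01, 5.0, 500)

V = np.where(I < Ic,
             Z * I,                                                     # ohmic branch, Eq. (16)
             Z * Ic**2 / (I + np.sqrt(np.maximum(I**2 - Ic**2, 0.0))))  # Bloch branch, Eq. (17)

imax = np.argmax(V)
print(f"maximum voltage {V[imax]:.3f} (= Z*Ic) reached at I = {I[imax]:.2f}")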
The semi-classical approximation is valid when the os-
cillation amplitude of the superconducting phase is much
larger than 2π, which allows the formation of the semi-
classical wave-packets. When I is much larger than Ic,
this oscillation amplitude is equal to 2eW/(ħI), where W is
the total band-width of ǫn(q). This condition also ensures
that the work done by the current source when the phase
increases by 2π is much smaller than the band-width. In
order to observe the region of negative differential resis-
tance, corresponding to the regime of Bloch oscillations,
we require therefore that:
2πħIc/(2e) ≪ W ≃ 2eVc/π, (18)
where the last equality becomes exact in the case of a
purely harmonic dispersion. This translates into:
Z ≫ RQ. (19)
For large currents one can compute dc voltage di-
rectly by using the golden rule (without semiclassics);
we present the results in Appendix A. The result is con-
sistent with the large I limit of Eq. (17), 〈V 〉 = V 2c /2ZI.
Deep in the classical regime (EJ ≫ EC), the bandwidth
and the generated voltage become exponentially small.
In this regime the bandwidth is much smaller than the
energy gaps, so these formulas are applicable (assuming
(19) is satisfied) until the splitting between Wannier-
Stark levels becomes equal to the first energy gap given
by the Josephson plasma frequency, i.e. for I < eωJ/π.
Upon a further increase of the driving current in this
regime the generating voltage experiences resonant in-
crease for each splitting that is equal to the energy gap:
Ik = e(Ek−E0)/π. Physically, at these currents the phase
slips are rare events that lead to the excitation of the
higher levels at a new phase value that are followed by
their fast relaxation. At very large energies, the band-
width of these levels becomes larger than their decay rate
due to relaxation, (RQ/Z)EC . At these driving currents,
the system starts to generate large voltage and switches
to a normal state. At a very large EJ this happens at
the driving currents very close to the Josephson criti-
cal current 2eEJ , but in a numerically wide regime of
100 ≳ EJ/EC ≳ 10 the generated voltage at low currents
is exponentially small but switching to the normal state
occurs at significantly smaller currents than 2eEJ .
In the intermediate regime where EJ and EC are com-
parable, we expect a band-width comparable to energy
gaps so that the range of application of the quantum
derivation is not much larger than the one for the semi-
classical approach.
Negative differential resistance associated to Bloch os-
cillations has been predicted long ago,28 and observed
experimentally29 in the context of semi-conductor su-
perlattices. For Josephson junctions in the cross-over
regime (EJ/EC ≃ 1), a negative differential resistance
has been observed in a very high impedance environ-
ment,24 in good agreement with earlier theoretical pre-
dictions.30 More recently, the I − V curve of the type
shown on Fig 2 have been reported on a junction with
a ratio EJ/EC = 4.5.25 These experiments show good
agreement with a calculation which takes into account
the noise due to residual thermal fluctuations in the
resistor.31
Although the above results allows the extraction of the
band structure of an individual Josephson block from the
measurement of dc I−V curves, the interpretation of ac-
tual data may be complicated by frequency dependence
of the external impedance Zω. Additional information
independent on Zω can be obtained from measuring the
dc V (I) characteristics in the circuit driven by an ad-
ditional ac current. In this situation, the semi-classical
equations of motion become:
dφ/dt = (1/ħ) dǫn(q)/dq, (20)
dq/dt = [ I + I′ cos(ωt) ]/(2e) − (ZQ/Z) dφ/dt. (21)
A small ac driving amplitude I ′ strongly affects the V (I)
curve only in the vicinity of resonances where nωB(IR) =
mω, with m and n integers. The largest deviation occurs
for m = n = 1. Furthermore, for I ′ ≪ I the terms
with m > 1 are parametrically small in I ′/I while for
I ≫ Ic the terms with n > 1 are parametrically small
in Ic/I. Experimental determination of the resonance
current, IR, would allow a direct measurement of the
Bloch oscillation frequency and thus the periodicity of the
phase potential (see next Section). Observation of these
mode locking properties have in fact provided the first
experimental evidence of Bloch oscillations in a single
Josephson junction.20,21
We now calculate the shape of V (I) curve in the vicin-
ity of the m = n = 1 point when both I′ ≪ I and I ≫ Ic. We
denote by φ0(t) and q0(t) the time-dependent solutions
of the equations at I = IR in the absence of ac driving
current. We shall look for solutions which remain close
to φ0(t) and q0(t) at all times and expand them in small
deviations φ1 = φ − φ0, q1 = q − q0. We can always
assume that q1 has no Fourier component at zero fre-
quency because such component can be eliminated by a
time translation applied to q0. The equations for φ1, q1
become
dφ1/dt = (1/ħ) ǫ′′n(q0) q1, (22)
dq1/dt = [ I − IR + I′ cos(ωt) ]/(2e) − (ZQ/Z) dφ1/dt. (23)
Because the main component of d²ǫn(q0)/dq² oscillates with
frequency ω and q1 has no dc component, the average
value of the voltage 〈dφ1/dt〉 is due to the part of q1 that os-
cillates with the same frequency, q1ω = [I′/(2eω)] sin(ωt).
Because q0 = ω(t−t0)+χ(ω(t−t0)), where χ(t) is a small
periodic function, the first equation implies that
〈dφ1/dt〉 = [I′/(2eωħ)] 〈ǫ′′n(ω(t− t0)) sin(ωt)〉 ∝ ∫ ǫ′′n(q) cos(2πq) dq.
The deviation q1 remains small only if the constant parts
cancel each other in the right hand side of the equation
(23). This implies
|I − IR| ≲ (I′ ZQ/(ħω Z)) |∫_{−1/2}^{1/2} ǫn(q) cos(2πq) dq|. (24)
We conclude that in the near vicinity of the reso-
nances the increase of the current does not lead to addi-
tional current through the Josephson circuit, so the re-
lation between current and voltage becomes linear again
δV = ZδI. In other words, the Josephson circuit be-
comes insulating with respect to current increments. The
width of this region (in voltage) is directly related to the
first moment of the energy spectrum of the Josephson
block providing one with the direct experimental probe
of this quantity. In particular, a Josephson element such
as rhombus in a magnetic flux somewhat different from
Φ0/2 displays a phase periodicity 2π but a very strong
deviations from a simple cos 2πq spectrum that will man-
ifest themselves in first moment of the spectrum. Note
finally, that the discussion above assumes that the ex-
ternal impedance Zω has no resonances in the important
frequency range. The presence of such resonances will
modify significantly the observed V (I) curves because it
would provide an efficient mechanism for the dissipation
of Bloch (or Josephson) oscillations at this frequency.
III. CHAIN OF JOSEPHSON ELEMENTS
We shall first consider the simplest example of a two-
element chain, because it captures the essential physics.
This chain is characterized by two phase differences (φ1
and φ2) and two pseudo-charges (q1 and q2). The equa-
tions of motion for the pseudo-charges (5) imply that
the charge difference q1− q2 is constant, because the cur-
rents flowing through these elements are equal, and thus
the right-hand sides of the evolution equations (5) are
identical. Because of this conservation law, even the long-
term physical properties depend on the initial conditions.
Similar problems have already been discussed in the con-
text of a chain of Josephson junctions driven by a current
larger than the critical current.32,33,34,35 This unphysical
behavior disappears if we take into account the dissipa-
tion associated with individual elements. Physically, it
might be due to stray charges, two-level systems, quasi-
particles, phonon emission, etc.36,37 A convenient model
for this dissipation is to consider an additional resistor
in parallel with each junction. For the sake of simplicity,
we assume that each element has a low energy band with
a simple cosine form. This physics is summarized by the
equations:
φ̇j = 4πw sin 2πqj (25)
q̇j = (1/2e) [ I − (1/(2eZ)) Σi φ̇i ] − (1/((2e)² Rj)) φ̇j (26)
Eliminating the phases gives:
q̇1 + Ω1 sin 2πq1 = ν − (ν0/2)(sin 2πq1 + sin 2πq2),
q̇2 + Ω2 sin 2πq2 = ν − (ν0/2)(sin 2πq1 + sin 2πq2),
where
Ωi = 4πw/((2e)² Ri),   ν0 = 8πw/((2e)² Z),   ν = I/(2e).
Here we allowed for different effective resistances asso-
ciated with each element because this has an important
effect on their dynamics. Indeed the difference between
the currents flowing through the resistors changes the
charge accumulated at the middle island and therefore
violates the conservation law mentioned before. Using
the notations δΩ = (Ω2 − Ω1)/2 and q± = (q2 ± q1)/2,
we have:
q̇− + Ω sin 2πq− cos 2πq+ + δΩ cos 2πq− sin 2πq+ = 0, (27)
q̇+ + (ν0 + Ω) sin 2πq+ cos 2πq− − δΩ sin 2πq− cos 2πq+ = ν. (28)
Significant quantum fluctuations imply that internal re-
sistance of the element R ∼ ZQ for individual elements
at T ≲ TC; at lower temperatures it grows exponen-
tially. Thus, in a realistic case R ≫ Z, which implies that
Ωi ≪ ν0. In the insulating regime the equations (27-28)
have stable stationary solution (ν0 +Ω) sin 2πq+ = ν,
q− = 0. This solution exists for ν < (ν0 + Ω), i.e. if
the voltage drop across both junctions does not exceed
Vc = 8πw/(2e). The conducting regime occurs when
ν > (ν0 +Ω); to simplify the analytic calculations we
assume that ν ≫ ν0. This allows us to solve the equations
(27-28) by iterations in all non-linear terms. In the ab-
sence of non-linearity q+ = νt , q− = const; the first
iteration gives periodic corrections ∝ cos 2πνt. Averag-
ing the result of the second order iteration over the period
we get
〈q̇−〉 = −(δΩ/2ν) ( ν0 cos² 2πq− + 2Ω ). (29)
The second term in the right hand side of this equation
is much smaller than the first if Ω ≪ ν0. In its absence
the dynamics of q− has fixed points at cos 2πq− = 0. At
these fixed points the periodic potentials generated by in-
dividual elements cancel each other and the dissipation in
external circuitry (which is proportional to cos2(2πq−))
is strictly zero. In a general case the equation (29) has
solution
cos²(2πq−) = 1 / [ 1 + ((ν0+2Ω)/(2Ω)) tan²(2π νb t) ],
which corresponds to short bursts of dissipation in the
external circuitry that occur with the low frequency νb =
(δΩ/2ν) √(2Ω(ν0 + 2Ω)). The average value of cos²(2πq−),
〈cos²(2πq−)〉 = 1 / [ 1 + √((ν0+2Ω)/(2Ω)) ] ≈ √(2Ω/(ν0 + 2Ω)),
is small implying that the effective dissipation introduced
by the external circuitry is strongly suppressed because
the pseudocharge oscillations on different elements al-
most cancel each other. The effective impedance of the
load seen by individual junction is strongly increased:
Zeff = √( (ν0 + 2Ω)/(2Ω) ) Z. (30)
Similar to a single element case discussed in the previous
Section, an additional dissipation in the external circuit
implies a dc voltage across the Josephson chain,
V = Vc²/(2 Zeff I),   I ≫ Ic = Vc/Zeff.
We conclude that a chain of Josephson elements has a
current-voltage characteristics similar to the one of the
single element with one important difference: the effec-
tive impedance of the external circuitry is strongly en-
hanced by the antiphase locking of the individual Joseph-
son elements. In particular, it means that the condition
Z ≫ RQ is much easier to satisfy for the chain of the
elements than for a single element. The analytical equa-
tions derived here describe the chain of two elements but
it seems likely that similar suppression of the dissipation
should occur in longer chains.
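To illustrate the antiphase locking numerically, the sketch below integrates the two-element equations in the form written above, with purely illustrative parameters, and estimates the late-time average of cos² 2πq−:

import numpy as np
from scipy.integrate import solve_ivp

nu, nu0 = 8.0, 1.0        # drive nu = I/2e and load coupling (reduced units)
Om1, Om2 = 0.08, 0.02     # slightly different intrinsic dissipation rates

def rhs(t, q):
    q1, q2 = q
    load = 0.5 * nu0 * (np.sin(2 * np.pi * q1) + np.sin(2 * np.pi * q2))
    return [nu - Om1 * np.sin(2 * np.pi * q1) - load,
            nu - Om2 * np.sin(2 * np.pi * q2) - load]

sol = solve_ivp(rhs, (0.0, 800.0), [0.0, 0.2], max_step=0.02)
q_minus = 0.5 * (sol.y[1] - sol.y[0])
late = sol.t > 400.0
print("<cos^2 2 pi q_-> at late times:",
      np.mean(np.cos(2 * np.pi * q_minus[late]) ** 2))   # a small value signals antiphase locking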
To substantiate this claim, lets us generalize the av-
eraging method which led to Eq. (29) for N = 2. The
coupled equations of motion read:
q̇j + Ωj sin 2πqj = ν − (ν0/2) Σk sin 2πqk. (31)
To second order in Ωj and ν0, the averaged equations of
motion are:
〈q̇j〉 = −(ν0²/8ν) Σk,l cos(2π(qk − ql)) − (ν0 Ωj/4ν) Σk cos(2π(qj − qk)). (32)
This set of coupled equations is similar to the Kuramoto
model for coupled rotors38 defined as:
q̇j = ωj − (K/N) Σk sin(2π(qj − qk) + α). (33)
The equation of motion (33) exhibits synchronisation of
a finite fraction of the rotors only for K > Kc(α).39,40
The last term in Eq. (32) is equivalent to the interac-
tion term of Kuramoto model with α = π/2. The ad-
ditional (third) term in the model (32) is the same for
all oscillators, it is thus not correlated with individual
qj and thus can not directly lead to their synchroni-
sation. Remarkably, it turns out that for model (33)
Kc(α = π/2) = 0,39,40 suggesting that in our case, syn-
chronization never occurs on a macroscopic scale. Note
that the coupling K arising from Eq. (32) is not only j-
dependent, but it is also proportional to N . This could
present a problem in the infinite N limit, but should not
present a problem in a finite system. It is striking to see
that α = π/2 is the value for which synchronization is
the most difficult.
IV. ENERGY BANDS FOR A FULLY
FRUSTRATED JOSEPHSON RHOMBUS
In order to apply general results of the previous sec-
tion to the physical chains made of Josephson junctions
or more complicated Josephson circuits we need to com-
pute the spectrum of these systems as a function of the
pseudocharge q conjugated to the phase across these ele-
ments. In all cases the superconducting phase in Joseph-
son devices fluctuates weakly near some classical value
φ0 where the Josephson energy has a minimum in the
limit EJ/EC ≫ 1. In the vicinity of the minimum, the
phase Hamiltonian is H = −4EC d
E′′(φ0)(φ−φ0)2,
so a higher energy state of the individual element (at a
fixed q) can be approximated by one of the oscillator
En = (n +
)ωJ where the Josephson plasma frequency
8E′′(φ0)EC ≈
8EJEC . The Josephson en-
ergy is periodic in the phase with the period 2π but the
amplitude of the transitions between these minima is ex-
ponentially small:
w = a~ωJ(EJ/EC)
1/4 exp(−c
EJ/EC)
where a, c ∼ 1. In this limit one can neglect the contribu-
tion of the excited states (separated by a large gap ωJ )
to the lower band, so the low energy spectrum acquires
a simple form ǫ(q) = 2w cos 2πq. The numerical coeffi-
cients c, a in the formulae for the transition amplitude
depend on the element construction. For a single junc-
tion as = 8 2
π , cs =
8 while for the rhombus
ar ≈ 4.0 , cr ≈ 1.61. In case of the rhombus in mag-
netic field with flux Φ0/2 the Hamiltonian is periodic in
phase with period π provided that the rhombus is sym-
metric along its horizontal axis: indeed in this case the
combination of the time reversal symmetry and reflec-
tion ensures that the Josephson energy has a minimum
for φ0 = ±π/2. Thus, in this case the period in q dou-
bles and the low energy band becomes ǫ(q) = 2w cosπq.
The maximal voltage generated by the chain of N such
elements at I = Ic = (8πζew/~)(ZQ/Z) is
Vc = N (2πζw)/e.
The voltage generated at larger currents depends on the
collective behaviour of the elements in the chain. For a
single element it is simply
〈V (I)〉 = (2πζw/e)² / [ Zω ( I + √(I² − Ic²) ) ]. (34)
For more than one element the total voltage is significantly
reduced due to the antiphase correlations. Generally, one
expects that
〈V (I)〉 = N (2πζw/e)² / [ Zeffω ( I + √(I² − Ic²) ) ], (35)
where Zeffω is the effective impedance of the environment
affecting each Josephson element which is generally much
larger than its ’bare’ impedance Zω. For two elements the
exact solution (see previous Section) gives Zeffω ≈ √(R/Z) Z,
which shows the increase of the effective impedance by a
large factor √(R/Z). We expect that a similar enhance-
ment factor appears for all N ≳ 2. Finally, for I < Ic,
the system is ohmic with:
〈V (I)〉 = Z0I (36)
As discussed in Section II, application of a small addi-
tional ac voltage produces features on the current-voltage
characteristics for the currents that produce Bloch oscil-
lation with frequencies commensurate with the frequency
of the applied ac field ωB = 2πζI/2e = (m/n)ω. At these
currents the system becomes insulating with respect to
current increments, the largest such feature appears at
m = n = 1 that allows a direct measurement of the
Josephson element periodicity.
For smaller EJ/EC ∼ 1 the quasiclassical formulas
for the transition amplitudes do not work and one has
to perform the numerical diagonalization of the quan-
tum system in order to find its actual spectrum. As
EJ/EC → 1 the higher energy band approaches the
low energy band and the dispersion of the latter de-
viates from the simple cosine form shown in Figure 3.
These deviations, lead to higher harmonics in the dis-
persion: ǫ(q) = 2w cos 2πζq + 2w′ cos 4πζq and change
the equations (34,35). Our numerical diagonalization of
a single rhombus shows, however, that even at relatively
small EJ/EC ∼ 1 the second harmonics w′ does not ex-
ceed 0.15w, so its additional contribution to the voltage
current characteristic (∝ w′2) can always be neglected.
Thus, in the whole range of EJ/EC > 1 the voltage cur-
rent characteristic is given by Eqs. (34,35) where the ef-
fective value of the transition amplitude w can be found from
the band width W = E1 − E0 = 4w plotted in Fig. 3.
For comparison we show the variation of the lower band
width for a single junction in Fig. 4.
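The single-junction band width quoted here is straightforward to reproduce by diagonalizing the junction in the charge basis; the sketch below is an independent illustration (not the code used for the rhombus) and assumes the standard single-junction Hamiltonian H = 4EC(n − q)² − EJ cos φ with EC = e²/2C.

```python
import numpy as np

def lowest_band(EJ_over_EC, q, ncut=30):
    """Lowest band energy eps_0(q) of H = 4*EC*(n - q)^2 - (EJ/2)*(|n><n+1| + h.c.),
    in units of EC, truncated to charge states |n| <= ncut."""
    n = np.arange(-ncut, ncut + 1)
    dim = 2 * ncut + 1
    H = np.diag(4.0 * (n - q) ** 2) \
        - 0.5 * EJ_over_EC * (np.eye(dim, k=1) + np.eye(dim, k=-1))
    return np.linalg.eigvalsh(H)[0]

for r in [1.0, 2.0, 4.0, 8.0, 15.0]:
    qs = np.linspace(0.0, 0.5, 26)
    band = np.array([lowest_band(r, q) for q in qs])
    W = band.max() - band.min()      # band width W = 4w for eps(q) = 2w*cos(2*pi*q)
    print(f"EJ/EC = {r:5.1f}   W/EC = {W:.4f}")
```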
FIG. 3: Spectrum of a single rhombus biased by magnetic flux
Φ = Φ0/2. The upper panel shows the bands of the rhombus
characterized by Josephson energy EJ/EC = 4 as a function
of bias charge, q. The two lower levels are fitted by the first
two harmonics (dashed line), the coefficient w′ of the second
harmonics is w′ = 0.1w. One observes period doubling of the
first two states that reflects the symmetries of the rhombus
frustrated by a half flux quantum. The second excited level is
doubly degenerate that makes its period doubling difficult to
observe. Physically, these states correspond to an excitation
localized on the upper or lower arms of the rhombus. The
lower panel shows the dependence of the gaps for q = 0 as a
function of EJ/EC . Because higher order harmonics are very
small for all EJ/EC > 1, the gap E1 − E0 coincides with 4w
where w is the tunneling amplitude between the two classical
ground states.
V. PHYSICAL IMPLEMENTATIONS
Generally, the effects described in the previous sections
can be observed if the environment does not affect much
the quantum fluctuations of individual elements and the
resulting quasiclassical equations of motion. These physi-
cal requirements translate into different conditions on the
impedance of the environment at different frequencies.
We begin with the quantum dynamics of the individual
elements. The effect of the leads impedance on it can
be taken into account by adding the appropriate current
term to the phase equation of motion before projecting
on a low energy band and requiring that their effect on
the phase dynamics is small at the relevant frequencies.
For instance, for a single junction
= E′J (φ) +
FIG. 4: Band width W = 4w of a single Josephson junction as a function of EJ/EC.
The characteristic frequency of the quantum fluctuations
responsible for the tunneling of a single element is the
Josephson plasma frequency, ωJ = √(8EJEC), so the first
condition implies that
|Z(ωJ)| ≫ √(EC/EJ) ZQ . (37)
For a typical ωJ/2π ∼ 10GHz, the impedance of a sim-
ple superconducting lead of the length ∼ 1cm is smaller
than ZQ and the condition (37) is not satisfied. The
situation is changed if the Josephson elements are decou-
pled from the leads by a large resistance or by a chain
of M ≫ 1 large junctions with ẼJ/Ẽc ≫ 1 that has
no quantum tunneling transitions of their own (the amplitude
of such transitions is ∝ exp(−√(8ẼJ/Ẽc))). Assuming
that elements of this chain have no direct capacitive
coupling to the ground (M²C0 ≪ C), the chain
has an impedance Z = √(8Ẽc/ẼJ) M ZQ at the relevant
frequencies, so a realistic chain with M ∼ 50 junctions
and √(8ẼJ/Ẽc) ∼ 10 provides the contribution to the
impedance needed to satisfy (37).
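The estimate above amounts to simple arithmetic; taking the quoted values M ∼ 50 and √(8ẼJ/Ẽc) ∼ 10:

```python
M = 50                                 # number of large junctions in the decoupling chain
sqrt8_ratio = 10.0                     # sqrt(8*EJ~/EC~), as quoted in the text
Z_over_ZQ = (8.0 / sqrt8_ratio) * M    # Z/ZQ = sqrt(8*EC~/EJ~) * M
print(f"Z ≈ {Z_over_ZQ:.0f} ZQ")       # ~40 ZQ, well above sqrt(EC/EJ)*ZQ for EJ > EC
```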
Similar decoupling from the leads of the individual el-
ements can be achieved by a sufficiently long chain of
similar Josephson elements, e.g. rhombi. Consider a
long (N ≫ 1) chain of similar elements connected to
the leads characterized by a large but finite capacitance
Cg ≫ C. For a short chain the tunneling of a single
element changes the phase on the leads resulting in a
huge action of the tunneling process. However, in a long
chain of junctions, a tunneling of individual rhombus may
be compensated by a simultaneous change of the phases
δφ/N of the remaining rhombi, and subsequent relax-
ation of δφ from its initial value π towards the equilib-
rium value which is zero. For N ≫ 1, this later process
can be treated within the Gaussian approximation, with
the Lagrangian (in imaginary time):
where Eg = e²/(2Cg). So the total action involved in the
relaxation is: S = π
If this action S is less
than unity this relaxation has strictly no effect on the
tunneling amplitude of the individual rhombus.
We now turn to the constraints imposed by the qua-
siclassical equations of motion. The solution of these
equations shows oscillation at the Bloch frequency that
is ωB = 2πζI/(2e) for large currents and approaches
zero near the Ic. Thus, for a single Josephson ele-
ment the quasiclassical equations of motion are valid if
Re(RQ/Z(ωB)) ≪ 1 . A realistic energy band for a
Josephson element, W ∼ 0.3K and Z/ZQ ∼ 100 cor-
respond to Bloch frequency ωB/2π ∼ 0.1GHz . In this
frequency range a typical lead gives a capacitive contri-
bution to the dynamics. The condition that it does not
affect significantly the equations of motion implies that
the lead capacitance C ≲ 10 fF. As discussed in Section
(III) the individual elements in a short chain oscillate in
antiphase decreasing the effective coupling to the leads
by a factor √(R/Z), where R is the intrinsic resistance
of the contact. This factor can easily reach 102 at suf-
ficiently low temperatures making much less restrictive
the condition on the lead capacitance.
Large but finite impedance of the environment
Re(RQ/Z(ωB)) ≲ 1 modifies the observed current-voltage
characteristics qualitatively, especially in the limit of very
small driving current. When I vanishes, and with infi-
nite external impedance, the wave function of the phase
variable is completely extended, with the form of a Bloch
state, and the pseudo-charge q is a good quantum num-
ber. As discussed at the end of Sec. II, the system be-
haves as a capacitor. But when the external impedance
is finite, charge fluctuations appear, which in the dual
description means that quantum phase fluctuations are
no longer unbounded. To be specific, consider a realistic
example of N rhombi chain (or two ordinary junctions)
attached to the leads with Z(ω) = Z0 in a broad but fi-
nite frequency interval ωmin < ω < ωmax and decreases as
Z(ω) = Z0(ωmax/ω) for ω > ωmax, Z(ω) = Z0(ω/ωmin)
for ω < ωmin. Such Z(ω) is realized in a long chain
of M Josephson junctions between islands with a finite
capacitive coupling to the ground C0: ωmax = ωJ and
ωmin = (√(C/C0)/M) ωJ. The effective action describ-
ing the phase dynamics across the chain has contribu-
tions from the tunneling of individual rhombi and from
impedance of the chain
Ltot =
8π2ζ2Nw
Here the first term describes the effect of the tunneling of
the Josephson element between its quasiclassical minima
which we approximate by a single tunneling amplitude
w resulting in a spectrum ǫ(q) = −2w cos 2πζq that in a
Gaussian approximation becomes ǫ(q) = 4π2ζ2wq2. This
approximation is justified by the fact that, as we show
below, the main effect of the phase fluctuations comes
from the broad frequency range where the action is dom-
inated by the second term while the first serves only as
a cutoff of the logarithmical divergence. Its precise form
is therefore largely irrelevant.
This action leads to a large but finite phase fluctuations
8π2ζ2Nw
min(ωmax, ω
where ω′max = 8π²ζ²Nw (ZQ/Z0). These fluctuations
are only logarithmically large, so they result in a finite
renormalization of the Josephson energy of the rhombi
chain and the corresponding critical current. In the ab-
sence of such renormalization the Josephson energy of a
finite chain of elements can be approximated by the lead-
ing harmonics E(φ) = −E0 cos(φ/ζ) with E0 ∼ EJ for
N ∼ 1 and EJ ≳ EC. Renormalization by fluctuations
replaces E0 by
ER = exp(−
)E0 =
min(ωmax, ω
]− Z0
In the limit of ωmin → 0 or Z0 → ∞ the phase fluc-
tuations renormalize Josephson energy to zero. But for
realistic parameters this suppression of Josephson energy
is finite which thus results in a small but non-zero value of
the critical current. In this situation the current-voltage
characteristics sketched in Fig. 1 are modified for very
small values of currents and voltages: instead of insulat-
ing regime at very low currents and voltages one should
observe a very small supercurrent (ER/2e) followed by a
small voltage step as shown in Fig. 2 by a dashed line.
As is clear from the above discussion the value of the
resulting critical current is controlled by the phase fluc-
tuations at low ω ≪ ωmax; these frequencies are much
smaller than the typical internal frequencies of a chain of
Josephson elements which can be thus lumped together
into an effective object characterized by the bare Joseph-
son energy E(φ) and transition amplitude between its
minima w. We thus expect the same qualitative behav-
ior for a small chain of Josephson elements as for a single
element at low currents.
VI. CONCLUSION
The main results of the present work are the expres-
sions (34), (35) for the I-V curves of a chain of N identi-
cal basic Josephson circuits. They are derived within the
assumption that the Josephson coupling is much larger
than the charging energy, but in fact, the numerical cal-
culations show that they remain very accurate even if
EJ ≈ EC . These equations predict a maximum dc volt-
age when I = Ic and V (I) ∝ 1/I for I ≫ Ic. The anoma-
lous V versus I dependence exhibited by these equations
is a signature of underdamped quantum phase dynamics.
It occurs only if the impedance of the external circuitry
is sufficiently large both at the frequency of Bloch oscil-
lations and at the Josephson frequency of the individual
elements. The precise conditions are given in Section V.
Observation of this dependence and the measurement of
the maximal voltage would provide the proof of the quan-
tum dynamics and the measurements of the tunneling
amplitude which is the most important characteristics of
these systems. It would also provide a crucial test of the
quality of the decoupling from the environment.
As a deeply quantum mechanical system, the chain
of Josephson devices is very sensitive to an additional
ac driving. It exhibits resonances when the driving
frequency is commensurate with the frequency ωB =
2πζI/2e of the Bloch oscillations. This would provide
additional ways to characterize the quantum dynamics
of these circuits and confirm the period doubling of the
rhombi frustrated by exactly half flux quantum.
Acknowledgments
LI is thankful to LPTMS Orsay, and LPTHE Jussieu
for their hospitality through a financial support from
CNRS while BD has enjoyed the hospitality of the
Physics Department at Rutgers University. This work
was made possible by support from NSF DMR-0210575,
ECS-0608842 and ARO W911NF-06-1-0208.
APPENDIX A: QUANTUM-MECHANICAL
CALCULATION OF THE DC VOLTAGE
In the large current regime where I ≥ Imax, the energy
drop ∆B = hI/2e induced by the driving current when
φ increases by 2π becomes comparable to or larger than
the bandwidth W . In this regime, the semi-classical ap-
proach is no longer reliable. But as long as ∆B remains
small compared to the typical gap ∆ between nearby
bands, we may still construct the system wave functions
in the presence of the driving field from Wannier orbitals
belonging to a single band. In such quantum-mechanical
approach, dissipation is described as the result of cou-
pling the single degree of freedom (φ, q) to a continuum
of oscillator modes (qα, pα). The corresponding Hamil-
tonian has the form:
Hn = ǫn(q − Σ_α gαqα) − (I/2e)φ + Σ_α (ωα/2)(qα² + pα²) (A1)
where we have chosen the following commutation rela-
tions:
[φ, q] = i, [qα, pβ ] = iδαβ (A2)
and all other commutators between these operators van-
ish. The form of (A1) is plausible on the physical ground
because when the superconducting islands are coupled
to macroscopic leads, the charge q undergoes quantum
fluctuations, so that it has to be replaced by a “dressed
charge” q − Σ_α gαqα in the effective Hamiltonian. A
more explicit justification is that the corresponding semi-
classical equations of motion have the same form as
Eqs. (4), (5) which simply mean that the effective cur-
rent going through the superconducting circuit is the bias
current minus the current going through the external
impedance. The semi-classical equations deduced from (A1)
read:
dφ/dt = (dǫn/dq)(q − Σ_α gαqα) ,
dq/dt = I/(2e) ,
dqα/dt = ωαpα ,
dpα/dt = −ωαqα + gα (dǫn/dq)(q − Σ_α gαqα) .
It is then natural to introduce q′ = q − Σ_α gαqα, so that:
dφ/dt = (dǫn/dq)(q′) , (A3)
dq′/dt = I/(2e) − Σ_α gαωαpα . (A4)
To show that (A4) has the same form as (5), we notice
that the driving term for the bath oscillators is directly
proportional to dφ/dt. Specifically:
d²qα/dt² + ωα²qα = ωαgα dφ/dt (A5)
Going to Fourier space, we see that after averaging over
initial conditions for the bath oscillators, Eq. (A4) takes
the form:
−iωq̃′(ω) = (I/2e) 2πδ(ω) − i Σ_α [gα²ωαω/(ω² − ωα²)] (−iωφ̃(ω)) (A6)
This is exactly the frequency space version of Eq. (5),
where as usual, the dissipation is related to the spectral
density of the bath by:
ZQ/Z(ω) = i Σ_α gα²ωαω/(ω² − ωα²) (A7)
Here we emphasize that Z is typically frequency depen-
dent, in which case the term (ZQ/Z)(dφ/dt) in Eq. (5)
becomes a convolution with ZQ/Z replaced by a non-local
kernel in time.
Now we turn to the solution of the quantum prob-
lem (A1) in the large driving current regime. Let us first
consider the Josephson array without dissipation. It is
straightforward to express its eigenstates in the q repre-
sentation41 because the Schrödinger equation reads then:
Eψ(q) = ǫn(q)ψ(q) − i (I/2e) dψ/dq (A8)
so that:
ψ(q) = ψ(0) exp[ −i (2e/I) ∫_0^q (ǫn(q′) − E) dq′ ] (A9)
The energy spectrum is determined via the boundary
condition ψ(q + 1) = ψ(q) so that:
E = ∫_{−1/2}^{1/2} ǫn(q′) dq′ + ∆WS ν, ν integer (A10)
This yields a Wannier-Stark ladder of spatially localized
states, with a constant level spacing equal to ∆B. Note
that increasing ν by one unit multiplies the wave-function
ψ(q) by exp(i2πq). In the phase representation, this is
equivalent to a translation by −2π. Of course, in the
absence of dissipation, these levels have infinite life-time,
and therefore, no dc voltage is generated.
Let us now consider the limit of a weak coupling to
the dissipative bath. This means that the decay rate Γ
of the Wannier-Stark levels is much smaller than the level
spacing ∆B. Assuming that transitions take place mostly
between two adjacent levels, we get an average voltage:
〈V〉 = (ℏ/2e) 〈dφ/dt〉 = hΓ/(2e) (A11)
The rate Γ is estimated via Fermi’s golden rule which we
prefer to use in the correlation function formulation :
Γν→ν′ = (1/ℏ²) |〈ν| dǫn/dq |ν′〉|² C̃AA((ν − ν′)ωB) (A12)
where ωB = ∆B/~ = 2πI/(2e). In this expression,
C̃AA is the Fourier transform of the correlation function,
CAA(t − t′) = 〈A(t)A(t′)〉 of the Heisenberg operators
A(t) =
α gαqα(t), taken in the equilibrium state of the
dissipative bath.
We evaluate now the matrix element of the velocity
operator dǫn/dq between Wannier-Stark states:
〈ν| dǫn/dq |ν′〉 = ∫_{−1/2}^{1/2} (dǫn/dq)(q) exp(i2π(ν′ − ν)q) dq (A13)
As we have seen, in most physically interesting situations,
we can approximate the periodic function ǫ(q) by a single
harmonic 2w cos(2πq). In this case:
|ν′〉 = 2πwδ|ν′−ν|,1 (A14)
In the zero temperature limit, the bath correlation func-
tion is:
C̃AA(ω) = Σ_α πgα²δ(ω − ωα) = [2ZQ/(Z(ω)ω)] θ(ω) (A15)
where θ(ω) is the Heaviside step function. Putting all
these elements together gives:
〈V(I ≫ Ic)〉 = (2πw)²/(2e²ZωI) (A16)
where Zω denotes the external impedance taken for
ω = ωB, and this result is in perfect agreement with the
large current limit of the semi-classical treatment, shown
as Eq. (35).
When the band structure is replaced by 2w cos(2πζq)
as in the case of a rhombus (for which ζ = 1/2), we have
〈V〉 = ζ hΓ/(2e) (A17)
The frequency of Bloch oscillations becomes ωB(ζ) =
ζωB(ζ = 1), and the matrix element is multiplied by
ζ, so that the voltage is multiplied by ζ2. Again, this is
compatible with the semi-classical result (35).
1 For a review on superconducting qubits, see for in-
stance: M. H. Devoret, A. Wallraff, and J. M. Martinis,
arXiv:cond-mat/0411174
2 E. Knill, Nature, 434, 39, (2005)
3 A. Y. Kitaev, Ann. Phys. 303, 2, (2003)
4 L. B. Ioffe, and M. V. Feigel’man, Phys. Rev. B 66, 224503,
(2002)
5 B. Douçot, M. V. Feigel’man, and L. B. Ioffe, Phys. Rev.
Lett. 90, 107003, (2003)
6 B. Douçot, L. B. Ioffe and J. Vidal, Phys. Rev. B 69,
214501, (2003)
7 M. V. Feigel’man, L. B. Ioffe, V. B. Geshkenbein, P. Dayal,
and G. Blatter Phys. Rev. B 70, 224524 (2004)
8 B. Douçot, M. V. Feigel’man, L. B. Ioffe, and A. S. Iosele-
vich Phys. Rev. B 71, 024505, (2005)
9 I. V. Protopopov, and M. V. Feigel’man, Phys. Rev. B 70,
184519 (2004)
10 B. Pannetier, private communication.
11 B. Douçot and J. Vidal, Phys. Rev. Lett. 88, 227005,
(2002)
12 M. Rizzi, V. Cataudella, and R. Fazio, Phys. Rev. B 73,
100502(R), (2006)
13 Yu. M. Ivanchenko and L. A. Zil’berman, Zh. Eksp. Teor.
Fiz. 55, 2395 (1968); [Sov. Phys. JETP 28, 1272 (1969)].
14 G.-L. Ingold, and H. Grabert, Phys. Rev. Lett.83, 3721,
(1999)
15 A. Schmid, Phys. Rev. Lett. 51, 1506 (1983)
16 K. K. Likharev and A. B. Zorin, J. Low Temp. Phys. 59,
347, (1985)
17 G. H. Wannier, Rev. Mod. Phys. 34, 645, (1962)
18 E. E. Mendez, F. Agullo-Rueda, and J. M. Hong, Phys.
Rev. Lett. 60, 2426, (1988)
19 P. Voisin, J. Bleuse, C. Bouche, S. Gaillard, C. Alibert,
and A. Regreny, Phys. Rev. Lett. 61, 1639, (1988)
20 L. S. Kuzmin and D. B. Haviland, Phys. Rev. Lett. 67,
2890 (1991); Physica Scripta T42, 171 (1992).
21 L. S. Kuzmin, Yu. A. Pashkin and T. Claeson, Supercond.
Science and Technology, 7, 324 (1994).
22 L. S. Kuzmin, Yu. A. Pashkin, A. Zorin and T. Claeson,
Physica B 203, 376, (1994)
23 L. S. Kuzmin, Yu. A. Pashkin, D. S. Golubev, and A. D.
Zaikin, Phys. Rev. B 54, 10074, (1996)
24 M. Watanabe, and D. B. Haviland, Phys. Rev. B 67,
094505 (2003)
25 S. Corlevi, W. Guichard, F. W. J. Hekking, and D. B.
Haviland, Phys. Rev. Lett. 97 096802, (2006)
26 N. Boulant, G. Ithier, F. Nguyen, P. Bertet, H. Pothier, D.
Vion, C. Urbina, and D. Esteve, arXiv:cond-mat/0605061
27 A. Steinbach, P. Joyez, A. Cottet, D. Esteve, M. H. De-
voret, M. E. Huber, and John M. Martinis, Phys. Rev. Lett
87, 137003 (2001).
28 L. Esaki and R. Tsu, IBM J. Res. Develop. 14, 61 (1970)
29 A. Sibille, J. F. Palmier, H. Wang, and F. Mollot, Phys.
Rev. Lett. 64, 52 (1990)
30 U. Geigenmüller and G. Schön, Physica B 152, 186 (1988)
31 I. S. Beloborodov, F. W. J. Hekking, and F. Pis-
tolesi, in New Directions in Mesoscopic Physics (Towards
Nanoscience), edited by R. Fazio, V. F. Gantmakher, and
Y. Imry (Kluwer Academic Publisher, Dordrecht, 2002),
p. 339.
32 K. Wiesenfeld, and P. Hadley, Phys. Rev. Lett. 62, 1335,
(1989)
33 K. Y. Tsang, S. H. Strogatz, and K. Wiesenfeld, Phys.
Rev. Lett. 66, 1094, (1991)
34 S. Nichols, and K. Wiesenfeld, Phys. Rev. A 45, 8430,
(1992)
35 S. H. Strogatz, and R. E. Mirollo, Phys. Rev. E 47, 220,
(1993)
36 L. Faoro, and L. B. Ioffe, Phys. Rev. Lett. 96, 047001,
(2006)
37 L. B. Ioffe, V. B. Geshkenbein, Ch. Helm, and G. Blatter,
Phys. Rev. Lett. 93, 057001, (2004)
38 Y. Kuramoto, Progr. Theoret. Phys. Suppl. 79, 223, (1984)
39 H. Sakaguchi and Y. Kuramoto, Progr. Theoret. Phys. 76,
576, (1986)
40 H. Daido, Progr. Theoret. Phys. 88, 1213, (1992)
41 See for instance P. W. Anderson, Concepts in Solids, W. A.
Benjamin (1963), reedited by Addison-Wesley Publishing
Co. (1992), chapter 2, section C.
|
0704.0901 | The density of critical percolation clusters touching the boundaries of
strips and squares | The density of critical percolation clusters touching the boundaries of strips and
squares
Jacob J. H. Simmons∗ and Peter Kleban†
LASST and Department of Physics & Astronomy, University of Maine, Orono, ME 04469, USA
Kevin Dahlberg‡ and Robert M. Ziff§
MCTP and Department of Chemical Engineering,
University of Michigan, Ann Arbor, MI 48109-2136 USA
(Dated: November 10, 2018)
We consider the density of two-dimensional critical percolation clusters, constrained to touch one
or both boundaries, in infinite strips, half-infinite strips, and squares, as well as several related
quantities for the infinite strip. Our theoretical results follow from conformal field theory, and
are compared with high-precision numerical simulation. For example, we show that the density
of clusters touching both boundaries of an infinite strip of unit width (i.e. crossing clusters) is
proportional to (sin πy)−5/48{[cos(πy/2)]1/3 + [sin(πy/2)]1/3 − 1}. We also determine numerically
contours for the density of clusters crossing squares and long rectangles with open boundaries on
the sides, and compare with theory for the density along an edge.
Keywords: percolation, cluster density, crossing
I. INTRODUCTION
Percolation in two-dimensional systems is an area that has a long history, but remains under very active current
study. A number of very different methods have been applied to critical 2-D percolation, including conformal field
theory (CFT) [1], other field-theoretic methods [2], modular forms [3], computer simulation [4], Stochastic Löwner
Evolution (SLE) processes [5] and other rigorous methods [6]. (Because the literature is so very extensive we have
cited only a few representative works.)
More specifically, there is a great deal of recent work studying universal properties of crossing problems
in critical percolation in two dimensions (i.e., [1, 3, 5, 7, 8, 9, 10, 11, 12]). Another interesting and also practically
important universal feature of percolation at the critical point is the density, defined as the number of times a
site belongs to clusters satisfying some specified boundary condition (such as clusters touching certain parts of the
boundary) divided by the total number of samples N , in the limit that N goes to infinity. This problem has been
addressed for clusters touching any part of the boundary of a system in various geometries, including rectangles, strips,
and disks, via conformal field theory [13] and by solving the problem for a Gaussian free field and then transforming
to other statistical mechanical models, including percolation [14]. In a recent Letter [30], we considered the problem
of clusters simultaneously touching one or two intervals on the boundary of a system, and considered cases where
those intervals shrink to points (anchor points). These results exhibit interesting factorization, are related to two-
dimensional electrostatics, and highlight the universality of percolation densities. Note that the density at a point
z of clusters which touch specified parts of the boundary is proportional to the probability of finding a cluster that
connects those parts of the boundary with a small region around the point z.
In this paper we consider the problem of the density ρb of critical percolation clusters in various geometries where
the clusters simultaneously touch both of the boundaries (i.e. crossing clusters), and several related quantities.
The first case we consider is an infinite strip, with boundaries parallel to the x-axis at y = 0 and y = 1, so the
crossing is in the vertical direction. (All our models are defined so that the crossing is vertical. Fig. 4 below illustrates
the geometries that we consider.) For the infinite strip we find (leaving out an arbitrary normalization constant here
and elsewhere) that
ρb(y) = (sin πy)^{−5/48} { [cos(πy/2)]^{1/3} + [sin(πy/2)]^{1/3} − 1 } . (1)
∗Electronic address: [email protected]
†Electronic address: [email protected]
‡Electronic address: [email protected]
§Electronic address: [email protected]
This may be compared to a previous result [13, 14] for clusters touching either the upper or lower boundary (or both)
which is simply given by
ρe(y) = (sin πy)^{−5/48} . (2)
We also show that the density of clusters touching one boundary irrespective of touching the other is given by
ρ0(y) = (sin πy)^{−5/48} [cos(πy/2)]^{1/3} , (3)
ρ1(y) = (sin πy)^{−5/48} [sin(πy/2)]^{1/3} , (4)
where ρ0 corresponds to those clusters touching the lower boundary at y = 0, and ρ1 corresponds to those clusters
touching the upper boundary at y = 1. (Note that ρ0 is the analog of the order parameter profile 〈σ〉+,f for the Ising
case (see Eq. (16) in [19]).) We also find expressions for clusters touching one boundary and not the other, which are
combinations of the above results.
Perhaps the main new theoretical result in the above is (3) (or equivalently (4)), which follows straightforwardly
from the results in [30]. The derivation is given in section II.
A second type of theoretical prediction gives the density variation along a boundary (the general expression is in
(15)). This is used to predict the density along the edge in several geometries (see (17), (18), and (19) below).
The above theoretical results are found to be consistent with numerical simulations to a high degree of accuracy.
We include the results of numerical simulations of the density contours of clusters crossing square and rectangular
systems vertically, with open boundaries on the sides, and compare with theory along the boundaries.
Our theoretical treatment is related to previous use of conformal field theory to study order parameter profiles in
various 2-D critical models with edges and similar research [13, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28].
(Note that the density which we consider is the expectation value of the spin operator in percolation, which is the
order parameter in this setting.) Many of these prior results make use of the original Fisher-de Gennes proposal [29]
for the behavior of the order parameter at criticality near a straight edge. In this paper, we limit ourselves to critical
percolation. We also include the results of high-precision computer simulations. In addition, the formula (15) for the
density along the edge of a system is new, to our knowledge.
Note that, because of the fractal nature of critical percolation clusters, the density of clusters is, strictly speaking,
zero everywhere in the system. However, if we properly renormalize the density as the lattice mesh size goes to zero,
the density can remain finite. At the boundaries, for some quantities, this results in the density diverging, as for
ρe(y) at y = 0 and y = L (but remaining integrable). For ρb(y) of equation (1), on the other hand, the renormalized
density remains finite everywhere. When comparing to numerical simulations, one has to normalize the data so
that the densities coincide with the theoretical prediction using whatever normalization convention is chosen for the
theoretical results. The resulting normalization constant, which must be applied to the numerical data, is specific for
each system and is non-universal.
In the following sections, we first give the theoretical derivation of our infinite strip formulas above. Then we
present the numerical results. This is followed by numerical results on square and (long) rectangular systems. These
are compared with theory for the density along the edges of these systems. We end with a few concluding remarks.
II. THEORY FOR THE INFINITE STRIP
We first consider the density of critical percolation clusters which span the sides of an infinite 2-D strip. We can
find the density predicted by conformal field theory [31] using the results of [30]. In that article we showed that in
the upper half plane the density ρ of clusters connected to an interval (xa, xb) is
ρ(z, xa, xb) ∝ (z − z̄)^{−5/48} F( [(xb − xa)(z̄ − z)] / [(z̄ − xa)(xb − z)] ) , (5)
where the function F (η) was determined by conformal field theory and takes on one of two forms,
F±(η) =
. (6)
If we parameterize z as z = reiθ and let xa → 0 and xb → ∞, then η = 1− e2iθ and using (6) we can rewrite (5) as
ρ+(r, θ, xa → 0, xb → ∞) ∝ (r sin θ)−5/48[cos(θ/2)]1/3 (7)
ρ−(r, θ, xa → 0, xb → ∞) ∝ (r sin θ)−5/48[sin(θ/2)]1/3 . (8)
For the positive real axis θ → 0 and ρ+ ∼ θ−5/48 while for the negative real axis θ → π and ρ+ ∼ (π− θ)11/48. The
powers here arise from the fixed and free boundary exponents, respectively, in the bulk-boundary operator product
expansion of the magnetization operator ψ [30]. (More specifically, as it approaches the boundary, ψ ∼ 1 or ψ ∼ φ1,3,
which have conformal dimensions 0 and 1/3, respectively.) This shows that ρ+ is the density of clusters attached to
the positive axis. Because ρ−(r, θ) = ρ+(r, π− θ), it follows that ρ− is the density of clusters attached to the negative
real axis.
The final density that we need is that of clusters connected arbitrarily to the axis. This is given by 〈ψ(z, z̄)〉fixed ∝
(z − z̄)−5/48 [13, 14, 30] which may also be written
ρa(r, θ) ∝ (r sin θ)−5/48 . (9)
These densities are unnormalized. However for points that are short distances above the positive (negative) axis,
but very far from the origin, the relation ρ+(−) ≈ ρa holds. This condition holds since the points are far from the free
boundary, and thus dominated by the fixed boundary. It is satisfied by our expressions for ρ+, ρ−, and ρa, so they
are properly normalized relative to one another.
We next map these densities into the infinite strip w ∈{x+ iy| x ∈ (−∞,∞), y ∈ (0, 1)} using
w(z) = (1/π) log(z) . (10)
This leads to the expressions for ρ0(y), ρ1(y), and ρe(y) given by equations (3), (4), and (2), respectively.
Using these functions we can also determine the densities of clusters that touch one side but not the other,
ρ01̄(y) = ρe(y) − ρ1(y) = (sin πy)^{−5/48} { 1 − [sin(πy/2)]^{1/3} } , (11)
ρ10̄(y) = ρe(y) − ρ0(y) = (sin πy)^{−5/48} { 1 − [cos(πy/2)]^{1/3} } . (12)
In a similar manner we can find the density of clusters touching both sides, ρb(y). Adding ρ0 and ρ1 includes all
configurations that touch either side, but double counts the clusters that touch both sides. Subtracting ρe leaves only
those clusters that touch both sides of the strip. Thus
ρb(y) = ρ0(y) + ρ1(y)− ρe(y) , (13)
which gives equation (1).
III. SIMULATIONS FOR THE INFINITE STRIP
To approximate the infinite strip, we considered rectangular systems with periodic boundary conditions in the
horizontal direction, for both site and bond percolation on square lattices. Here we report our results for site
percolation on the square lattice, for a system of 511 (vertical) × 2048 (horizontal) sites, at p = pc = 0.5927462
[32]. We generated 500,000 samples to compute the average densities, using a Leath-type of algorithm to find all
clusters touching the upper and lower boundaries.
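A minimal way to reproduce the qualitative behaviour of this measurement on a small lattice is sketched below. It is only an illustration, not the Leath-type code used for the data in this paper: it uses open side boundaries instead of periodic ones, plain union–find cluster labelling, and an arbitrary (small) lattice size and sample count.

```python
import numpy as np

rng = np.random.default_rng(1)

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path halving
        i = parent[i]
    return i

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

def crossing_density(L=64, p=0.5927462, samples=200):
    """Per-sample fraction with which site (x, y) belongs to a cluster touching
    both the bottom (y = 0) and top (y = L-1) rows; site percolation, open sides."""
    counts = np.zeros((L, L))
    for _ in range(samples):
        occ = rng.random((L, L)) < p
        parent = np.arange(L * L)
        for y in range(L):
            for x in range(L):
                if not occ[y, x]:
                    continue
                i = y * L + x
                if x + 1 < L and occ[y, x + 1]:
                    union(parent, i, i + 1)
                if y + 1 < L and occ[y + 1, x]:
                    union(parent, i, i + L)
        touches_bottom, touches_top = set(), set()
        for x in range(L):
            if occ[0, x]:
                touches_bottom.add(find(parent, x))
            if occ[L - 1, x]:
                touches_top.add(find(parent, (L - 1) * L + x))
        spanning = touches_bottom & touches_top
        if not spanning:
            continue                     # no vertically crossing cluster in this sample
        for y in range(L):
            for x in range(L):
                if occ[y, x] and find(parent, y * L + x) in spanning:
                    counts[y, x] += 1
    return counts / samples

rho = crossing_density()
print(rho.mean(axis=1))   # row-averaged density profile, to be compared with rho_b(y)
```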
The aspect ratio of the rectangle used in our simulations was 2048/511 = 4.008 . . ., which might seem a bit small.
However, since the correlation length of the system is governed by the width of the rectangle (511), the effect of the
finite ratio drops off exponentially with the length of the rectangle, so our results should be very close to those of
an infinite strip, an expectation which is borne out by the results described below. (Furthermore, the probability of
finding a horizontally wrapping cluster drops exponentially with the aspect ratio, so if a longer rectangle were used,
very few wrapping clusters would be found and the data would have poor statistics.)
The agreement of predicted and simulated values is excellent. In Fig. 1 we show the results for ρb(y) with no
adjustments, other than an overall normalization (in particular, the extrapolation length a, described below, is set to
zero). Fig. 2 (upper set of points) shows the ratio of simulation to theoretical results, equation (13). Most points fall
within 1%, except those right near the boundary. However, we can do better.
When comparing simulations on a (necessarily) finite lattice with the results of a continuum theory, there is the
question, due to finite-size and edge effects, of what value of the continuous variable y should correspond to the lattice
variable Y — specifically, where the boundary of the system should be placed. On the lattice, the density goes to zero
at Y = 0 and Y = 512. A naïve assignment of the continuum coordinate would therefore be y = Y/512. However,
we can get a better fit to the data near the boundaries by assuming that the effective boundary is a distance a (in
units of the lattice spacing) beyond the lattice boundaries — that is, at Y = −a and Y = 512 + a. The distance a is
FIG. 1: (Color online).
ρb(y) vs. y. Open violet circles, theoretical values from equation (13) (here normalized to 1 at the center point), using (14)
with a = 0. Blue dots: results from simulations.
an effective “extrapolation length” where the continuum density far from the wall extrapolates to the value zero [33].
This is accomplished by defining the continuum variable y by
y = (Y + a)/(512 + 2a) . (14)
Then, Y = −a corresponds to y = 0 and Y = 512 + a corresponds to y = 1. Note, Y = 0 corresponds to
y = a/(512 + 2a) and Y = 512 corresponds to y = (512 + a)/(512 + 2a) = 1 − a/(512 + 2a), and so the theoretical
extrapolated density ρb(y) will be greater than zero at these points on the actual boundary. The spacing between all
points is stretched by a small amount because of the denominator in equation (14), but this stretching does not have
much effect on the behavior of ρb near the center. The main effect of a on the shape of the theoretical curves of ρb is
near the boundaries.
By choosing an extrapolation length of a = 0.26, we can get a much better fit of the data, as can be seen in Fig. 2
(lower set of points). A plot of the data analogous to Fig. 1 puts most of the data points right in the center of the
theoretical circles, but would barely be visible when plotted on this scale. With a = 0.26, the error is now reduced to
less than 0.1%, except right near the boundaries, as can be seen in Fig. 2. A more thorough study of the extrapolation
coefficient a would require the study of different sized lattices, and the demonstration that a is independent of the
lattice size. We have not carried this out. Note, however, that a constant a implies that if one keeps the physical size
of the lattice fixed (so that the increasing number of lattice points makes the mesh size go to zero), the extrapolation
length, measured in physical units, will also go to zero.
Note also that the distance in the y directions was 511 rather than 512 because we used one row at Y = 0, in
conjunction with the periodic boundary conditions, to represent both horizontal boundaries of the system on the
lattice. That is, Y = 1 and Y = 511 are the lowest and highest rows where we occupy sites in the system, where Y
represents the lattice coordinate in the vertical direction.
We note that these simulations were carried out before the theoretical predictions were made.
We have also carried out simulations measuring the density of clusters touching one edge, ρ0(y). The results are
shown in Fig. 3. We also plot the results of the theoretical prediction, equation (3), on the same plot, and find
agreement within 0.5% without using an extrapolation length a.
FIG. 2: (Color online). Ratio of simulation to theoretical results for ρb(y) with a = 0 (upper set of points) and a = 0.26 (lower
set of points).
FIG. 3: (Color online). Density of clusters touching lower boundary, ρ0(y), as a function of y, both simulation (dots) and
theory (open circles), equation (3).
FIG. 4: Sketches of the cases considered. Solid (dashed) boundary lines represent fixed (open) boundary conditions. Curved
lines indicate crossing clusters; the density ρb is evaluated along the lines with arrowheads. (a) is the infinite strip, cf. e.g.
equation (1); (b) the square (Fig. 6), (c) and (d) vertical half-infinite strips ((17) and (18), respectively); (e) horizontal
half-infinite strip (19).
IV. PERCOLATION ON A SQUARE AND SEMI-INFINITE STRIP
In this section, we consider the density of crossing clusters on a square with open boundaries and also on a (long)
rectangle. We compare the numerical results with the predictions of conformal field theory for the density along an
edge. The various cases considered are illustrated in Fig. 4.
Note that, as mentioned, the crossing is always in the vertical direction. In a slight abuse of notation, we use ρb for
the density of clusters that touch both anchoring intervals in all cases. The different situations may be distinguished
by the arguments of ρb, e.g. ρb(y) for the infinite strip, where there is no x dependence, and ρb(x, y = 0) along the
bottom of the semi-infinite strip as in (17) below.
Our simulations of percolation densities on an open square examined clusters that cross in the vertical direction,
FIG. 5: (Color online) Contours of constant densities 0.625, 0.75, 0.875, 15/16=0.9375, 31/32=0.96875, 63/64 = 0.984375,
127/128 = 0.9921875, and 1 (outside to center) of clusters touching both the top and bottom edges, with open b.c. on the sides,
for a system of 511× 511 sites.
with open boundary conditions on the left- and right-hand sides. We considered site percolation on a square lattice
of 511× 511, with 2,000,000 samples generated. The resulting contours are shown in Fig. 5. As in the periodic case,
the density goes to zero at the upper and lower boundaries because, compared to an infinite system, these boundaries
intersect many possible crossing paths, leading to large holes in the clusters near the boundaries. As a consequence,
relatively few points on the boundary will be part of the crossing clusters, and in the limit that the mesh size goes to
zero, their density evidently goes to zero. The density also goes to zero on the sides because of the open conditions
there. Interestingly, the contour curves are almost symmetric in the horizontal and vertical directions, indicating that
the anchoring and open boundaries have a similar effect on the density.
We have not carried out a field-theory calculation to find the density inside the square or rectangular systems. To
do so requires a six-point function whose calculation would be unwieldy.
It is however relatively easy to calculate the variation of the density along the bottom edge of the square, now
normalized so that the density remains non-zero as the mesh size goes to zero. To do this, we consider the density of
crossings from one of the anchoring intervals to a single point on the other interval, using the boundary spin operator.
Now crossing from the top edge to one point x on the bottom edge (which is given by a three-point function, depending
only on x and λ) automatically implies crossing from the top to bottom (which is given by a four-point function,
depending only on λ). Therefore the density at x is proportional to the ratio of the former to the latter. It follows
generally that, up to a (λ-dependent) multiplicative constant, one has [34]
ρ(x) = [ |z′(x)| / (1 − λz(x)) ]^{1/3} . (15)
Here z(w) maps the region of interest (w) onto the 1/2-plane (z), x is the w-coordinate along the anchoring interval
of interest, and λ is the conformally invariant cross-ratio for the anchoring points. For a rectangle, this depends on
the aspect ratio r = K(√(1 − λ))/K(√λ) [1, 35].
The mapping for the square is
z(w) = 1 − ℘( (iw + 1)K(2); 4, 0 ) , (16)
with ℘(u; g2, g3) the Weierstrass elliptic function and K(z) the elliptic integral function. This mapping takes a unit
square x, y ∈ (0, 1) into the upper half plane. For the square λ = 1/2, and we can take x ∈ (0, 1). In Fig. 6 we
compare the measurement and theory. Clearly the agreement is excellent.
In the case of a half-infinite strip x ∈ (0, 1), y ∈ (0,∞), the density of sites along the x-axis belonging to clusters
crossing vertically is found from (15) using z(w) = sin2(πw/2), and λ = 0. This gives
ρb(x, y = 0) = (sin πx)^{1/3} . (17)
Of course, for an infinite strip, the probability of crossing (in the long direction) is zero, so one must consider the
limit of a large system, and calculate the density given that crossing takes place, and take the limit that the length of
the rectangle goes to infinity. It turns out that numerically, the density at the edge for the square, equations (15,16),
differs only very slightly from that of the half-infinite strip, given by equation (17). From the point of view of the
density along an anchoring edge, the square is not much different from the half-infinite strip.
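The x-dependence in Eq. (17) can be checked symbolically: for z(w) = sin²(πw/2) the derivative factor in Eq. (15) is |z′(x)| = (π/2) sin πx, and at λ = 0 the remaining λ-dependent factor reduces to a constant, so ρb(x, 0) ∝ (sin πx)^{1/3}. The short sympy check below verifies the derivative only (the 1/3 power comes from Eq. (15) and is not re-derived here).

```python
import sympy as sp

x = sp.symbols('x', positive=True)
z = sp.sin(sp.pi * x / 2) ** 2              # half-plane map for the half-infinite strip
dz = sp.simplify(sp.diff(z, x))             # z'(x)
print(dz)                                   # pi*sin(pi*x)/2
print(sp.simplify(dz / sp.sin(sp.pi * x)))  # pi/2, i.e. |z'(x)| is proportional to sin(pi*x)
```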
For y ≫ 0 in the above half-infinite strip (or equivalently for any y for a fully infinite strip in the vertical direction)
one can also find the density along the x-direction of the vertically crossing clusters in closed form
ρb(x, y ≫ 0) = (sin πx)^{11/48} . (18)
This function may be found by transforming the density of clusters connecting two boundary points, derived in [30],
into the infinite strip. We then take the limit as the two anchoring points move infinitely far away in opposite
directions, while normalizing the density so that it remains finite. (Related order-parameter profiles for the Ising
case are studied in [22].) Interestingly, a plot of this density profile (written as a function of y rather than x) is
very similar in appearance to that of the vertically crossing clusters ρb(y) given in equation (1) and plotted in Fig. 1.
When normalized so that they have the same value at y = 1/2, the maximum difference between the two is at y = 0
and y = 1, where (18) is only 1.5 % below (1). This small difference indicates that open boundaries and anchoring
boundaries have similar effects on the density of the crossing percolation clusters, and is consistent with the near
symmetry seen in the contours in Fig. 5.
We can also find the density along the left and right (open) edges. For the case of a half-infinite strip rotated by
90◦ with respect to the one above, (y ∈ (0, 1), x ∈ (0,∞)), we find
ρb(x = 0, y) = (sinπy)
, (19)
which is similar in form to ρb(y) of equation (1) (which in fact corresponds to x≫ 0 for the geometry considered here)
but with different exponents. This similarity arises because the derivations of (19) and (13) are virtually identical,
except that for (19) we leave a free interval between the two anchoring intervals (where the boundary spin operator
sits) which is mapped to the end of the half-infinite strip using sine functions. Comparison with numerical data
(not shown) for the density at the short edge of a rectangle of aspect ratio 8 to approximate the infinite strip shows
excellent agreement with (19).
V. FURTHER COMMENTS AND CONCLUSIONS
If we consider densities raised to the sixth power (compare Ref. [30]), we find a Pythagorean-like relation involving
ρe(y)^3, ρ0(y)^3, and ρ1(y)^3 (which we present without interpretation),
ρ0(y)^6 + ρ1(y)^6 = ρe(y)^6 . (20)
There seems to be no simple relation involving ρb(y) other than its basic definition given in equation (13). For the
corresponding quantities at the edge of the strip as in (19), the power in equation (20) is 3 rather than 6.
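Relation (20) follows directly from Eqs. (2)–(4), since the sixth powers turn the cube-root factors into cos² + sin² = 1; a numerical spot check:

```python
import numpy as np

y = np.linspace(1e-3, 1 - 1e-3, 9)
rho_e = np.sin(np.pi * y) ** (-5.0 / 48.0)
rho_0 = rho_e * np.cos(np.pi * y / 2) ** (1.0 / 3.0)
rho_1 = rho_e * np.sin(np.pi * y / 2) ** (1.0 / 3.0)
# residual of Eq. (20); vanishes up to floating-point rounding
print(np.max(np.abs(rho_0 ** 6 + rho_1 ** 6 - rho_e ** 6)))
```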
Although the overall normalization of a density such as ρb(y) (for the infinite strip, see (1)) is arbitrary, we can fix
its value by requiring that
∫_0^1 ρb(y) dy = 1 . (21)
That is, we define ρb(y) = B (sin πy)^{−5/48} ( [cos(πy/2)]^{1/3} + [sin(πy/2)]^{1/3} − 1 ), where B is a constant. Then equation (21)
yields
B = 1.46408902 . . . . (23)
FIG. 6: (Color online) Density of vertically crossing clusters ρb(x, 0) along the lower boundary, in a square system with open
boundaries on the horizontal sides. Red line: theory, equation (15), black dots: simulation results.
Another choice of B is to make ρb(1/2) = 1, which yields B = (2^{5/6} − 1)^{−1} = 1.27910371 . . .
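Both values of B quoted above are easy to reproduce numerically from Eq. (1):

```python
import numpy as np
from scipy.integrate import quad

def rho_b(y):
    return np.sin(np.pi * y) ** (-5.0 / 48.0) * (
        np.cos(np.pi * y / 2) ** (1.0 / 3.0)
        + np.sin(np.pi * y / 2) ** (1.0 / 3.0) - 1.0)

integral, _ = quad(rho_b, 0.0, 1.0)
print("B from the unit-integral normalization:", 1.0 / integral)   # ~1.46408902
print("B from rho_b(1/2) = 1:", 1.0 / rho_b(0.5))                  # ~1.27910371
```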
In many problems of percolation density profiles, the density goes to infinity at a boundary point, such as occurs for
ρ0(y) when y → 0. Interestingly, in the case considered here of clusters touching both boundaries, ρb(y), the density
goes to zero at those boundaries and remains finite everywhere.
To highlight the difference between densities of all clusters touching a boundary vs. the densities of crossing clusters
touching one boundary, we consider the limit that the strip width becomes infinite, so that the system becomes a
half-plane. Because we have written our results for a strip of fixed (unit) width, this density is given by the behavior
of ρ for small y. The density of all clusters touching the x-axis is thus found from equation (3) (or [13, 14]) to be
ρ0 ∼ y^{−5/48} , (24)
where we have left off the coefficient because the normalization is arbitrary. This density diverges at y = 0 because
it is much more likely to find sites belonging to these clusters near the x-axis. On the other hand, the density of
crossing clusters touching the x-axis is found from equation (1) in the limit y → 0,
ρb ∼ y^{11/48} . (25)
In this case, the density increases as y increases, in contrast to (24), and goes to zero at the anchoring boundary, for
the reason mentioned above.
Behavior identical to (25) also follows from the small-x expansion of ρb(x, y ≫ 0) given by (18), which represents
the behavior of the density of the vertically spanning clusters at the open boundary. Thus, near the boundaries (but
away from the corners), the open and anchoring boundary conditions have identical effects on the density of the
crossing clusters.
In conclusion, we have studied the density of vertically percolating clusters in a square system, as well as for half-
infinite and infinite strips extending in either the horizontal or vertical direction. The various cases considered are
illustrated in Fig. 4.
For the infinite strip, Fig. 4(a), the density for crossing clusters is given by (1) and compared with numerical
simulations in Figs. 1 and 2. For the square, Fig. 4(b), we find theoretical results for the anchoring edge densities
(see (15) and (16) and Fig. 6), as well as numerical results for the density in the interior (Fig. 5), which exhibit an
interesting near symmetry. For the half-infinite strip in the horizontal direction, Fig. 4(e), the density ρb(x = 0, y) at
the left open boundary is given by (19), and at x ≫ 0 (or equivalently, for an infinite strip), the density is given by
(1). For a half-infinite strip in the vertical direction, the density along the lower anchoring boundary (Fig. 4(c)) is
given by (17) while for y ≫ 0 (Fig. 4(d)) the density is given by (18). For the half-infinite systems, the densities near
a wall are given by the same power-law (25) regardless of whether it anchors the crossing clusters or is open. Note
that all of our theoretical predictions were confirmed by computer simulation.
For the future, it would be interesting to study analogous properties for Fortuin-Kasteleyn (FK) clusters of the
critical Ising and Potts models.
VI. ACKNOWLEDGMENTS
This work was supported in part by the National Science Foundation under grants numbers DMS-0553487 (RMZ)
and DMR-0536927 (PK).
[1] J. L. Cardy, Critical percolation in finite geometries, J. Phys. A 25 L201-206 (1992) [arXiv: hep-th/9111026].
[2] B. Duplantier, Higher conformal multifractality, J. Stat. Phys. 110 691-738 (2003) [arXiv: cond-mat/0207743]; Conformal
fractal geometry and boundary quantum gravity, preprint [arXiv: math-ph/0303034].
[3] P. Kleban and Don Zagier, Crossing probabilities and modular forms, J. Stat. Phys. 113 431-454 (2003) [arXiv: math-
ph/0209023].
[4] P. Kleban and R. M. Ziff, Exact results at the two-dimensional percolation point, Phys. Rev. B 57 R8075-R8078 (1998)
[arXiv: cond-mat/9709285].
[5] J. Dubédat, Excursion decompositions for SLE and Watts’ crossing formula, Probab. Theory Related Fields, no. 3, 453-488
(2006) [arXiv: math.PR/0405074].
[6] M. Aizenman, Scaling Limit for the Incipient Spanning Clusters, in Mathematics of Multiscale Materials; the IMA Volumes
in Mathematics and its Applications (K. Golden, G. Grimmett, R. James, G. Milton, and P. Sen, eds.), Springer (1998)
[arXiv: cond-mat/9611040].
[7] R. P. Langlands, C. Pichet, Ph. Pouliot and Y. Saint-Aubin, On the universality of crossing probabilities in two-dimensional
percolation, J. Stat. Phys. 67 553-574 (1992).
[8] S. Smirnov, Critical percolation in the plane, C. R. Acad. Sci. Paris Sr. I Math. 333 no. 3, 239-244 (2001).
[9] Bertrand Berche, Jean-Marc Debierre and Hans-Peter Eckle, Surface shape and local critical behavior: The percolation
problem in two dimensions, Phys. Rev. E 50 4542-4550 (1994).
[10] G. M. T. Watts, A crossing probability for critical percolation in two dimensions, J. Phys. A 29 L363-L368 (1996) [arXiv:
cond-mat/9603167].
[11] Robert M. Ziff, Spanning probability in 2D percolation, Phys. Rev. Lett. 69 2670-2673 (1992).
[12] Oleg A. Vasilyev, Universality of the crossing probability for the Potts model for q = 1, 2, 3, 4, Phys. Rev. E 68 026125
(2003).
[13] Theodore W. Burkhardt and Erich Eisenriegler, Universal order-parameter profiles in confined critical systems with bound-
ary fields, J. Phys. A: Math. Gen. 18 L83-L88 (1985).
[14] Ivica Reš and Joseph P. Straley, Order parameter for two-dimensional critical systems with boundaries, Phys. Rev. B 61,
14425-14433 (2000) [arXiv: cond-mat/9910467].
[15] T. W. Burkhardt and J. L. Cardy Surface critical behaviour and local operators with boundary-induced critical profiles, J.
Phys. A: Math. Gen. 20 L233-L238 (1987).
[16] T. W. Burkhardt and I. Guim Bulk, surface, and interface properties of the Ising model and conformal invariance, Phys.
Rev. B 36 2080-2083 (1987).
[17] J. L. Cardy, in Phase transitions and critical phenomena, C. Domb and J. L. Lebowitz, Vol. 11, Academic, New York
(1987).
[18] J. L. Cardy Universal critical-point amplitudes in parallel-plate geometries, Phys. Rev. Lett. 65 1443-1445 (1990).
[19] T. W. Burkhardt and T. Xue Density profiles in confined critical systems and conformal invariance, Phys. Rev. Lett. 66
895-898 (1991).
[20] T. W. Burkhardt and T. Xue Conformal invariance and critical systems with mixed boundary conditions, Nucl. Phys. B
354 653-665 (1991).
[21] T. W. Burkhardt and E. Eisenriegler Conformal theory of the two-dimensional 0(N) model with ordinary, extraordinary,
and special boundary conditions, Nucl. Phys. B 424 487-504 (1994).
[22] Loïc Turban and Ferenc Iglói, Off-diagonal density profiles and conformal invariance, J. Phys. A: Math. Gen. 30 L105-L111
(1997) [arXiv:cond-mat/9612128].
[23] Ferenc Iglói and Heiko Rieger Density Profiles in Random Quantum Spin Chains, Phys. Rev. Lett. 78 2473-2476 (1997)
[arXiv:cond-mat/9609263].
[24] E. Carlon and F. Iglói Density profiles, Casimir amplitudes, and critical exponents in the two-dimensional Potts model: A
density-matrix renormalization study, Phys. Rev. B 57 7877-7886 (1998) [arXiv:cond-mat/9710144].
[25] D. Karevski, L. Turban and F. Iglói Conformal profiles in the Hilhorst-van Leeuwen model, J. Phys. A: Math. Gen. 33
2663-2673 (2000) [arXiv:cond-mat/0003310].
[26] U. Bilstein The XX-model with boundaries: III. Magnetization profiles and boundary bound states, J. Phys. A: Math. Gen.
33 7661-7686 (2000) [arXiv:cond-mat/0004484].
[27] M. Krech Surface scaling behavior of isotropic Heisenberg systems: Critical exponents, structure factor, and profiles, Phys.
Rev. B 62 6360-6371 (2000) [arXiv:cond-mat/0006448].
[28] L. Turban Conformal off-diagonal boundary density profiles on a semi-infinite strip, J. Phys. A: Math. Gen. 34 L519-L523
(2001) [arXiv:cond-mat/0107235].
[29] M. E. Fisher and P. G. de Gennes, Phénomènes aux parois dans un mélange binaire critique: physique des colloïdes, C.
R. Acad. Sci., Paris B 287 207 (1978).
[30] Peter Kleban, Jacob J. H. Simmons, and Robert M. Ziff, Anchored critical percolation clusters and 2D electrostatics, Phys.
Rev. Lett. 97 115702 (2006) [arXiv: cond-mat/0605120].
[31] A. A. Belavin, A. M. Polyakov, and A. B. Zamolodchikov, Infinite conformal symmetry in two-dimensional quantum field
theory, Nucl. Phys. B241, 333-380 (1984).
[32] M. E. J. Newman and R. M. Ziff, Efficient Monte Carlo algorithm and high-precision results for percolation, Phys. Rev.
Lett. 85 4104-4107 (2000) [arXiv: cond-mat/0005264].
[33] R. M. Ziff, Effective boundary extrapolation length to account for finite-size effects in the percolation crossing function,
Phys. Rev. E 54 2547-2554 (1996).
[34] Peter Kleban, Jacob J. H. Simmons, and Robert M. Ziff, Crossing, connection and cluster density in critical percolation
on trapezoids, Preprint (2007).
[35] R. M. Ziff, On Cardy’s formula for the critical crossing probability in 2d percolation, J. Phys. A: Math. Gen. 28, 1249-1255
(1995).
|
0704.0902 | Effective band-structure in the insulating phase versus strong dynamical
correlations in metallic VO2 | s_Im_U3.eps
Effective band-structure in the insulating phase versus
strong dynamical correlations in metallic VO2
Jan M. Tomczak,1 Ferdi Aryasetiawan,2 and Silke Biermann1
Centre de Physique Théorique, Ecole Polytechnique, CNRS, 91128 Palaiseau Cedex, France
Research Institute for Computational Sciences, AIST,
Umezono 1-1-1, Tsukuba Central 2, Tsukuba Ibaraki 305-8568, Japan
Using a general analytical continuation scheme for cluster dynamical mean field calculations,
we analyze real-frequency self-energies, momentum-resolved spectral functions, and one-particle
excitations of the metallic and insulating phases of VO2. While for the former dynamical correlations
and lifetime effects prevent a description in terms of quasi-particles, the excitations of the latter
allow for an effective band-structure. We construct an orbital-dependent, but static one-particle
potential that reproduces the full many-body spectrum. Yet, the ground state is well beyond a
static one-particle description. The emerging picture gives a non-trivial answer to the decade-old
question of the nature of the insulator, which we characterize as a “many-body Peierls” state.
PACS numbers: 71.27.+a, 71.30.+h, 71.15.Ap
Describing electronic correlations is a challenge for
modern condensed matter physics. While weak corre-
lations slightly modify quasi-particle states, by broad-
ening them with lifetime effects and shifting their ener-
gies, strong enough correlations can entirely invalidate
the band picture by inducing a Mott insulating state.
In a half-filled one-band model, an insulator is re-
alized above a critical ratio of interaction to band-
width. Though more complex scenarios exist in realis-
tic multi-band cases, a common feature of compounds
that undergo a metal-insulator transition (MIT) upon
the change of an external parameter, such as temper-
ature or pressure, is that the respective insulator feels
stronger correlations than the metal, since it is precisely
their enhancement that drives the system insulating.
In this paper we discuss a material where this rule of
thumb is inverted : We argue that in VO2 it is the insu-
lator that is less correlated, in the sense that band-like
excitations are better defined and have longer lifetimes
than in the metal. Albeit, neither phase is well described
by standard band-structure techniques. Using an an-
alytical continuation scheme for quantum Monte Carlo
solutions to Dynamical Mean Field Theory (DMFT) [1],
we discuss quasi-particle lifetimes, k-resolved spectra (for
comparison with future angle resolved photoemission ex-
periments) and effective band-structures. While dynam-
ical effects are crucial in the metal, the excitations of
the insulator are well described within a static picture :
For the insulator we devise an effective one-particle po-
tential that captures the interacting excitation spectrum.
Still, the corresponding ground state is far from a Slater
determinant, leading us to introduce the concept of a
“many-body Peierls” insulator.
The MIT of VO2 has intrigued solid state physicists
for decades [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]. A
high temperature metallic rutile (R) phase transforms at
Tc=340 K into an insulating monoclinic structure (M1),
in which vanadium atoms pair up to form tilted dimers
along the c-axis. The resistivity jumps up by two orders
of magnitude, yet no local moments form. Despite exten-
sive efforts, the mechanism of the transition is still under
debate [6, 7, 8, 9, 10, 11, 12]. Two scenarios compete : In
the Peierls picture the structural aspect (unit-cell dou-
bling) causes the MIT, while in the Mott picture local
correlations predominate.
VO2 has a d^1 configuration and the crystal field splits
the 3d-manifold into t2g and empty eσg components. The
former further split into eπg and a1g orbitals, which
overlap in R-VO2, accounting for the metallic charac-
ter. Still, the quasi-particle peak seen in photoemission
(PES) [9, 10, 11] is much narrower than the Kohn-Sham
spectrum of density functional theory (DFT) in the local
density approximation (LDA) [7], and eminent satellite
features evidenced in PES are absent. In M1-VO2, the
a1g form bonding/antibonding orbitals, due to the dimer-
ization. As discussed by Goodenough [3], this also pushes
up the eπg relative to the a1g. Yet, the LDA [7] yields a
metal. Non-local correlations beyond LDA were shown
to be essential [15, 16, 17]. Indeed, recent Cluster DMFT
(CDMFT) calculations [15], in which a two-site vanadium
dimer constituted the DMFT impurity, opened a gap,
agreeing well with PES and x-ray experiments [11, 12].
Starting from these LDA + CDMFT results [15] for
the Matsubara t2g Green’s function G(iωn) we deduce the
real frequency Green’s function G(ω) by the maximum
entropy method [18] and a Kramers-Kronig transform.
The self-energy matrix Σ(ω) we obtain by numerical in-
version of G(ω) = Σ_k [ω + µ − Hk − Σ(ω)]^{−1} [1], with
the LDA Hamiltonian H, and the chemical potential µ.
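The inversion step can be illustrated on a one-band toy problem; the sketch below is not the multi-orbital VO2 calculation (the dispersion, chemical potential and test self-energy are invented), but it shows how Σ(ω) can be obtained at each real frequency from a given G(ω) by Newton iteration in the variable z = ω + µ − Σ(ω).

```python
import numpy as np

eps_k = np.linspace(-1.0, 1.0, 2001)   # toy dispersion (illustrative only)
mu = 0.0

def hilbert(z):                        # (1/N) sum_k 1/(z - eps_k)
    return np.mean(1.0 / (z - eps_k))

def d_hilbert(z):
    return -np.mean(1.0 / (z - eps_k) ** 2)

def sigma_of_w(w, G_target, n_iter=40):
    """Solve G_target = hilbert(w + mu - Sigma) for Sigma by Newton iteration."""
    z = w + mu + 0.5j                  # starting guess in the upper half plane
    for _ in range(n_iter):
        z -= (hilbert(z) - G_target) / d_hilbert(z)
    return w + mu - z

# self-test: build G from a known self-energy and recover it
w = 0.3
sigma_true = 0.2 * w - 0.3j
G = hilbert(w + mu - sigma_true)
print(sigma_of_w(w, G), " expected ", sigma_true)
```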
Fig. 1 shows (a) the diagonal elements of the R-
VO2 self-energy, and (b) the resulting k-resolved spec-
trum. Notwithstanding minor details, the a1g and eπg self-energies exhibit a similar dynamical behavior. The
real-parts at zero energy, ℜΣ(0), entailing relative shifts
of quasi-particle bands, are almost equal, congruent with
the low changes in their occupations vis-à-vis LDA [15],
[Fig. 1 panels: curves labeled a1g, M1 a1g and M1 a1g-a1g versus ω [eV]; k-path Γ-Z-C-Y-Γ.]
FIG. 1: (color online) Rutile VO2 : (a) self-energy (Σ − µ).
Real (imaginary) parts are solid (dashed). As comparison M1
ℑΣa1g , ℑΣa1g−a1g are shown. (b) spectral function A(k,ω)
and solutions of the QPE (blue). The LHB is the (yellow)
region at -1.7 eV, the broad UHB appears (yellow) at ∼2.5 eV.
and with the isotropy evidenced in experiment [19].
Neglecting lifetime effects (i.e. ℑΣ≈0), one-particle ex-
citations are given by the poles of G(ω) : det[ωk + µ −
Hk − ℜΣ(ωk)] = 0. We shall refer to this as the quasi-
particle equation (QPE) [23]. For static or absent ℜΣ this
reduces to a simple eigenvalue problem. In regions of low
ℑΣ, the QPE solutions will give an accurate description
of the position of spectral weight and constitute an effec-
tive band-structure of the interacting system. Yet, due
to the frequency dependence, the number of solutions is
no longer bounded to the number of orbitals.
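For concreteness, a minimal sketch of how the QPE can be solved numerically at one k point: scan a frequency grid, watch for sign changes of the determinant, and refine linearly. The Hamiltonian, self-energy and grid below are toy placeholders, not the VO2 matrices.

import numpy as np

def qpe_roots(H_k, re_sigma, mu, w_grid):
    """Find solutions w of det[(w + mu) I - H_k - Re Sigma(w)] = 0
    by locating sign changes of the determinant on a frequency grid.
    H_k      : (n_orb, n_orb) Hamiltonian at one k point
    re_sigma : callable w -> (n_orb, n_orb) real part of the self-energy"""
    n_orb = H_k.shape[0]
    eye = np.eye(n_orb)
    dets = np.array([np.linalg.det((w + mu) * eye - H_k - re_sigma(w))
                     for w in w_grid])
    roots = []
    for i in range(len(w_grid) - 1):
        if dets[i] == 0.0:
            roots.append(w_grid[i])
        elif dets[i] * dets[i + 1] < 0:      # sign change -> linear interpolation
            w0, w1, d0, d1 = w_grid[i], w_grid[i + 1], dets[i], dets[i + 1]
            roots.append(w0 - d0 * (w1 - w0) / (d1 - d0))
    return roots

# Toy usage: one band with a static shift reproduces w = H + ReSigma - mu
roots = qpe_roots(np.array([[0.3]]), lambda w: np.array([[0.1]]),
                  mu=0.0, w_grid=np.linspace(-2, 2, 2001))
print(roots)   # ~[0.4]; a frequency-dependent ReSigma can yield several roots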
Below (above) -0.5 (0.2) eV, the imaginary parts of
the self-energy – the inverse lifetime – of R-VO2 are considerable. Due to our limited precision for ℑΣ(0), we
have not attempted a temperature dependent study to
assess the experimental bad metal behavior, but the re-
sistivity exceeding the Ioffe-Regel-Mott limit [20] indi-
cates that even close to the Fermi level, coherence is not
fully reached. At low energy, the QPE solutions (dots
in Fig. 1b) closely follow the spectral weight. Above
0.2 eV, regions of high intensity appear; however, the
larger ℑΣ broadens the excitations, and no coherent fea-
tures emerge, though the positions of some eπg derived
excitations are discernible. At high energies, positive and
negative, distinctive features appear in ℑΣ(ω) that are
responsible for lower (upper) Hubbard bands (L/UHB),
seen in the spectrum at around -1.7 (2.5) eV. The UHB
exhibits a pole-structure that reminds of the low-energy
quasi-particle band-structure. Hence, an effective band
picture is limited to the close vicinity of the Fermi level,
and R-VO2 has to be considered as a strongly correlated
metal (the weight of the quasi-particle peak is of the or-
der of 0.6). This is experimentally corroborated by the
fact that an increase in the lattice spacing by Nb-doping
results in a Mott insulator of rutile structure [4].
The imaginary parts of the M1 a1g on-site, and a1g–
a1g intra-dimer self-energies, Fig. 1a, are larger than
in R-VO2, usually a hallmark of increased correla-
tions. However, we shall argue that correlations are in
fact weaker than in the metal. Indeed, the dimeriza-
tion in M1 leads to strong inter-site fluctuations, evi-
denced by the significant intra-dimer a1g–a1g self-energy.
Fig. 2 displays the M1-VO2 self-energy in the a1g bond-
ing/antibonding (bab) basis, Σb/ab = Σa1g ± Σa1g−a1g .
The a1g (anti)bonding imaginary part is low and varies
FIG. 2: (color online) Self-energy (Σ−µ) of M1-VO2 in the a1g
bab–basis : (a) real parts. The black stripes delimit the a1g
LDA bandwidths, dashed horizontal lines indicate the values
of the static potential ∆. (b) imaginary parts. Self-energy
elements are dotted in regions irrelevant for the spectrum.
little with frequency in the (un)occupied part of the
spectrum, thus allowing for coherent weight. In the
opposite regions, the imaginary parts reach huge val-
ues. The eπg elements are flat, and their imaginary parts
tiny. This is a direct consequence of the drastically re-
duced eπg occupancy which drops to merely 0.14. These
almost empty orbitals feel only weak correlations, and
sharp bands are expected at all energies. A first idea
for the a1g excitations is obtained from the intersections
ω+µ−ǫb/ab(k)=ℜΣb/ab(ω) as depicted in Fig. 2a, where
the black stripes delimit the LDA a1g bandwidths. The
(anti)bonding band appears as the crossing of the (blue)
red solid line with the stripe at (positive) negative en-
ergy. Hence, the (anti)bonding band emerges at (2.5)
−0.75 eV. Still, the antibonding band is much broadened
since ℑΣab reaches -1 eV. To confirm this, we solved the
QPE and calculated the k-resolved spectrum (Fig. 3a).
As expected, reasonably coherent weight appears over
nearly the entire spectrum from -1 to +2 eV, whose po-
sition coincides with the QPE poles : The filled bands
correspond to the a1g bonding orbitals, while above the
gap, the eπg bands give rise to sharp features. The anti-
bonding a1g is not clearly distinguished since eπg weight
prevails in this range. The L/UHB have faded : a mere
shoulder at -1.5 eV reminds of the LHB. Finally, con-
trary to R-VO2, the number of poles equals the orbital
dimension. Since, moreover, the real-parts of the M1-
VO2 self-energy are almost constant for relevant ener-
gies [24], we construct a static potential, ∆, by evaluat-
ing the dynamical self-energy at the LDA band centers
(pole energies) for the eπg (a1g), see Fig. 2a [25]. Fig. 3b
shows the band-structure of Hk+∆ : The agreement with
the DMFT poles is excellent. Our one-particle potential,
albeit static, depends on the orbital, and is thus non-
local. We emphasize the conceptual difference to the
Kohn-Sham (KS) potential of DFT : The latter gener-
ates an effective one-particle problem with the ground
state density of the true system. The KS energies and
states are auxiliary quantities. Our one-particle poten-
tial, ∆, on the contrary, was designed to reproduce the
interacting excitations. The eigenvalues of Hk+∆ are
thus not artificial. Still, like in DFT, the eigenstates are
SDs by construction, although the true states are not (see
below). The crucial point for M1-VO2 is that spectral
properties are capturable with this effective one-particle
description. It is in this sense that M1-VO2 exhibits only
weak correlation effects. The weight of the bonding ex-
citation is Z = (1 − ∂ωℜΣb(ω))^(−1)|ω=−0.7 eV ≈ 0.75, and thus
larger than the rutile quasi-particle weight (see above).
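A small, purely illustrative sketch of how such a weight can be read off a tabulated ℜΣb(ω); the table below is a made-up linear self-energy, chosen only so that the toy result lands near 0.75.

import numpy as np

def qp_weight(w, re_sigma_b, w0):
    """Z = [1 - dReSigma_b/dw]^(-1) evaluated at w0, from tabulated data."""
    dsig = np.gradient(re_sigma_b, w)      # numerical derivative on the grid
    slope = np.interp(w0, w, dsig)
    return 1.0 / (1.0 - slope)

# toy table: a linear ReSigma with slope -1/3 gives Z = 1/(1 + 1/3) = 0.75
w = np.linspace(-2.0, 2.0, 401)
print(qp_weight(w, -w / 3.0, w0=-0.7))     # ~0.75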
What is at the origin of this overall surprising coher-
ence? For the eπg orbitals, this simply owes to their deple-
tion. For the nearly half-filled a1g orbitals the situation is
more intricate. It is a joint effect of charge transfer into
the a1g bands, and the bonding/antibonding–splitting.
Indeed, the filled bonding band experiences only weak
fluctuations, due to its separation of several eV from the
antibonding one. To substantiate these qualitative argu-
ments, we resort to the following model, which treats the
solid as a collection of Hubbard dimers :
H = −t Σ_{l,σ} (c†_{l1σ} c_{l2σ} + h.c.) − t⊥ Σ_{i=1,2} Σ_{〈l,l′〉,σ} c†_{liσ} c_{l′iσ} + U Σ_{l,i} n_{li↑} n_{li↓}   (1)
Here, c†_{liσ} (c_{liσ}) creates (destroys) an electron with
spin σ on site i of the lth dimer. t is the intra-dimer,
t⊥ the inter-dimer hopping, U the on-site Coulomb
repulsion, and we assume half-filling. First, we discuss
[Fig. 3 panels (a) and (b): k-path Γ-Y-C-Z-Γ; panel (b) annotated "scissors".]
FIG. 3: (color online) M1-VO2 : (a) spectral function A(k,ω).
(blue) dots ((a) & (b)) are solutions of the QPE. (b) The (red)
dots are the eigenvalues of Hk+∆. See text for discussion.
the t⊥→ 0 limit, which is an isolated dimer : the
Hubbard molecule. We choose t=0.7 eV, the LDA
intra-dimer a1g–a1g hopping, and U=4.0 eV [15] for
all evaluations. The bonding/antibonding–splitting,
∆bab = −2t + √(16t² + U²) = 3.48 eV, gets enhanced with
respect to the U=0 case. In M1-VO2, the embedding
into the solid, and the hybridization with the eπg reduce
the splitting to ∼3 eV, as can be inferred from the one-
particle poles (Fig. 3), consistent with experiment [11].
The ground state of the dimer is given by |ψ0〉 =
{4t/ (c− U) (| ↓ ↑〉 − | ↑ ↓〉) + (| ↑↓ 0〉+ |0 ↑↓〉)} /a [26]
which is intermediate to the Slater determinant (SD)
(the four states having equal weight), and the Heitler-
London (HL) limit (double occupancies projected out).
With the VO2 parameters, the model dimer is close
to the HL limit [5]. The inset of Fig. 4b shows the
projections of the ground state onto the SD and the
HL state. The former, |〈SD|ψ0〉|2, equals the weight
of the band-derived features in the spectrum (for U>0
satellites appear), while the other measures the double
occupancy Σi〈ni↑ni↓〉 = 1 − |〈HL|ψ0〉|². For U=4.0 eV
the latter is largely suppressed, as a consequence of the
interaction : The N-particle state is clearly not a SD.
Still, the overlap with the SD, and thus the coherent
weight, remains significant, i.e. one-particle excitations
survive and lifetimes are large. To do justice to the
seemingly opposing tendencies of correlation driven
non-SD-behavior, coexisting with a band-like spectrum,
we introduce the notion of a “many-body Peierls” state.
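To make the dimer statements concrete, the sketch below diagonalizes the two-site Hubbard molecule in its Sz = 0 sector and evaluates the bonding/antibonding splitting, the projections onto the SD and HL states, and the double occupancy. Only t = 0.7 eV and U = 4.0 eV are taken from the text; the basis ordering and sign convention of the matrix are our own.

import numpy as np

t, U = 0.7, 4.0   # intra-dimer hopping and on-site repulsion from the text (eV)

def h_dimer(t, U):
    # Basis: |up,dn>, |dn,up>, |updn,0>, |0,updn> (one sign convention)
    return np.array([[0, 0, -t, -t],
                     [0, 0,  t,  t],
                     [-t, t,  U,  0],
                     [-t, t,  0,  U]], dtype=float)

def ground_state(t, U):
    vals, vecs = np.linalg.eigh(h_dimer(t, U))
    return vals[0], vecs[:, 0]

e0, psi0 = ground_state(t, U)
_, sd = ground_state(t, 0.0)                 # U=0 ground state IS the Slater determinant
hl = np.array([1, -1, 0, 0]) / np.sqrt(2)    # Heitler-London singlet

print("E0               :", e0)                               # (U - sqrt(U^2 + 16 t^2))/2
print("Delta_bab        :", -2*t + np.sqrt(16*t**2 + U**2))   # ~3.48 eV
print("|<SD|psi0>|^2    :", np.dot(sd, psi0)**2)
print("|<HL|psi0>|^2    :", np.dot(hl, psi0)**2)
print("double occupancy :", psi0[2]**2 + psi0[3]**2)          # = 1 - |<HL|psi0>|^2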
The charge transfer from the eπg into the then almost
half-filled a1g orbitals finds its origin in the effective re-
duction of the local interaction in the bab–configuration :
While for U=4 eV, 〈SD|H |SD〉 = 2.0 eV in the SD limit,
it reduces to merely 〈ψ0|H |ψ0〉 = 0.91 eV in the ground
state. In fact, inter-site fluctuations are an efficient way
to avoid the on-site Coulomb repulsion. In M1-VO2, this
effect manifests itself in a close cancellation of the local
and inter-site self-energies in the (un-)occupied parts of
the spectrum for the (anti)bonding a1g orbitals.
The gap-opening in VO2 thus owes to two effects : The
self-energy enhancement of the a1g bab–splitting, and
a charge transfer from the eπg orbitals. The difference
in ℜΣ corresponds to this depopulation, seen in exper-
iments [19] and theoretical studies [8, 15], and leads to
the separation of the a1g and eπg at the Fermi level. The
local interactions thus amplify Goodenough’s scenario.
To show that the embedding of the dimer into the
solid does not qualitatively alter our picture of the M1
phase, we solve the model, Eq. (1), using CDMFT. This
moreover allows us to study the essentials of the rutile to
M1 MIT by scanning through the degree of dimeriza-
tion t at constant interaction strength U and embedding,
or inter-dimer hopping, t⊥. For the latter we assume
a semi-circular density of states D⊥(ω) of bandwidth
W=4t⊥. In M1-VO2, the t⊥ for direct a1g-a1g hopping
is rather small, yet eπg -hybridizations lead to an effective
D⊥-bandwidth of about 1 eV. We choose U=4t⊥, and an
inverse temperature β=10/t⊥. Fig. 4a displays the or-
bital traced local spectral function A(ω)=Ab(ω)+Aab(ω)
(b,ab denoting again the bonding/antibonding combi-
nations) and the bonding self-energy Σb(ω) for differ-
ent intra-dimer hoppings t : In the absence of t, the
result equals by construction the single site DMFT so-
lution (Σb=Σab), which, for our parameters, is a corre-
lated metal, analog to R-VO2. The spectral weight at the
Fermi level is given by Ab/ab(0) = D⊥(±t − ℜΣb/ab(0)),
with ℜΣb/ab(0)=∓ℜΣab(0). Thus a MIT occurs at
t + ℜΣab(0) = 2t⊥, when all spectral weight has been
shifted out of the bandwidth : Above t/t⊥=0.5 we find a
many-body Peierls phase corresponding to M1-VO2. In
Fig. 4a we have indicated again the graphical QPE ap-
proach : The system evolves from three solutions per or-
bital (Kondo resonance, L/UHB) at t=0 to a single one at
t/t⊥=0.6. Hence the peaks in the insulator are not Hub-
bard satellites, but just shifted bands. The embedding,
t⊥, broadens the excitations and washes out the satellites
of the isolated dimer, like for M1-VO2. Still, as a function
of t, the coherence of the spectrum increases, since the
imaginary part of the (anti-)bonding self-energy subsides
at the renormalized (anti-)bonding excitation energies.
Our model thus captures the essence of the rutile to M1
transition, reproducing both the dimerization-induced increase in coherence and the shifting of excitations.
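As a small illustration of the gap criterion quoted above, the sketch below evaluates A_{b/ab}(0) = D⊥(±t − ℜΣ_{b/ab}(0)) for a semicircular D⊥ of bandwidth W = 4t⊥; the values of t and ℜΣab(0) scanned here are arbitrary placeholders, not CDMFT results.

import numpy as np

def dos_semicircle(e, t_perp=1.0):
    """Semicircular density of states of half-bandwidth 2*t_perp (W = 4*t_perp)."""
    half = 2.0 * t_perp
    e = np.asarray(e, dtype=float)
    return np.where(np.abs(e) < half,
                    np.sqrt(np.clip(half**2 - e**2, 0, None)) / (2.0 * np.pi * t_perp**2),
                    0.0)

def a_fermi(t, re_sigma_ab0, t_perp=1.0):
    """A_b(0) + A_ab(0) with ReSigma_b(0) = -ReSigma_ab(0):
    both terms vanish once |t + ReSigma_ab(0)| reaches 2*t_perp (the MIT criterion)."""
    return (dos_semicircle(t + re_sigma_ab0, t_perp)       # bonding:  +t - ReSigma_b(0)
            + dos_semicircle(-t - re_sigma_ab0, t_perp))   # antibonding

for s in (0.0, 1.0, 1.6, 2.1):          # illustrative ReSigma_ab(0) values, t = 0.5 t_perp
    print(s, a_fermi(t=0.5, re_sigma_ab0=s))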
[Fig. 4 panels: curves for t/t⊥ = 0.0, 0.2, 0.4, 0.6, 0.8 versus ω/t⊥ and ıω/t⊥; inset: |<SD|Ψ0>|² and |<HL|Ψ0>|² versus U [eV] at t = 0.7 eV.]
FIG. 4: (color online) (a) spectral function (top), real
(middle), imaginary (bottom) bonding self-energy Σb of the
CDMFT solution to Eq. (1) for U=4.0t⊥, β=10/t⊥, and
varying intra-dimer hopping t/t⊥. ℜΣb(ω)=−ℜΣab(−ω),
ℑΣb(ω)=ℑΣab(−ω) by symmetry. (b) Imaginary Matsuba-
ra self-energy, ℑΣb(ıω)=ℑΣab(ıω), for U=6t⊥, β=10/t⊥ and
varying t. Inset: Projection of the SD and HL limit on the
Hubbard molecule ground state (t=0.7 eV, t⊥=0) versus U.
Under uni-axial pressure or Cr-doping, VO2 develops
the insulating M2 phase [4] in which every second vana-
dium chain along the c-axis consists of untilted dimers,
whereas in the others only the tilting occurs. We may
now speculate that the dimerized pairs in M2 form a1g
Peierls singlets as in M1, while the tilted pairs are in a
Mott state. Hence, we interpret the seminal work of [4]
as the observation of a Mott to many-body Peierls tran-
sition taking place on the tilted chains when going from
M2 to M1. To illustrate this, we solve again Eq. (1) for
appropriate parameters. The tilted M2 chains are akin
to the rutile phase, yet with a reduced a1g bandwidth [7].
Thus we now choose U=6t⊥, β=10/t⊥, and vary t. All
solutions shown in Fig. 4b are insulating, however, the
diverging self-energy at vanishing intra-dimer coupling
(t=0, tilted “M2” chains) becomes regularized with the
bond enhancement (t>0, “M1”). The imaginary part of
the self-energy gets flatter and the system thus more co-
herent. The above is consistent with the finding of (S=0)
S=1/2 for the (dimerized) tilted pairs in M2-VO2 [4].
While our results do not exclude surprises in the direct
vicinity of Tc [22], the nature of insulating VO2 is shown
to be rather “band-like” in the above sense. Our analyti-
cal continuation scheme allowed us to explicitly calculate
this band-structure. The latter can also be derived from
a static one-particle potential. Yet, this does not im-
ply a one-particle picture for quantities other than the
spectrum. Above all, the ground state is not a Slater de-
terminant. Hence, we qualify M1-VO2 as a “many-body
Peierls” phase. We argue that the weakness of lifetime
effects results from strong inter-site fluctuations that cir-
cumvent local interactions in an otherwise strongly cor-
related solid. This is in striking contrast to the strong
dynamical correlations in the metal, which is dominated
by important lifetime effects and incoherent features.
We thank H. T. Kim, J. P. Pouget, M. M. Qazil-
bash, and A. Tanaka for valuable discussions and A. I.
Poteryaev, A. Georges and A. I. Lichtenstein for discus-
sions and the collaboration [15] that was our starting
point. We thank AIST, Tsukuba, for hospitality. JMT
was supported by a JSPS fellowship. Computer time was
provided by IDRIS, Orsay (project No. 071393).
[1] J. M. Tomczak and S. Biermann, J. Phys.: Condens.
Matter (2007), in press.
[2] A. Zylbersztejn, N. F. Mott, Phys. Rev. B 11, 4383
(1975).
[3] J. B. Goodenough, J. Solid State Chem. 3, 490 (1971).
[4] J. P. Pouget, H. Launois, J.Phys. France 37, C4 (1976).
[5] C. Sommers, S. Doniach, Solid State Commun. 28, 133
(1978).
[6] R. M. Wentzcovitch et al., Phys. Rev. Lett. 72, 3389
(1994).
[7] V. Eyert, Ann. Phys. (Leipzig) 11, 650 (2002).
[8] A. Tanaka, J. Phys. Soc. Jpn. 72, 2433 (2003).
[9] R. Eguchi et al., cond-mat/0607712 (2006).
[10] S. Shin et al., Phys. Rev. B 41, 4993 (1990).
[11] T. C. Koethe et al., Phys. Rev. Lett. 97, 116402 (2006).
[12] G. A. Sawatzky, D. Post, Phys. Rev. B 20, 1546 (1979).
[13] A. Continenza et al., Phys. Rev. B 60, 15699 (1999).
[14] M. A. Korotin et al., Phys. Met. Metallogr. 94, 17 (2002).
[15] S. Biermann et al., Phys. Rev. Lett. 94, 026404 (2005).
[16] A. Liebsch et al., Phys. Rev. B 71, 085109 (2005).
[17] M. S. Laad et al., Phys. Rev. B 73, 195120 (2006).
[18] M. Jarrell, J. E. Gubernatis, Phys. Rep. 269, 133 (1996).
[19] M. Haverkort et al., Phys. Rev. Lett. 95, 196404 (2005).
[20] M. M. Qazilbash et al., Phys. Rev. B 74, 205118 (2006).
[21] A. Georges et al., Rev. Mod. Phys. 68, 13 (1996).
[22] H.-T. Kim et al., Phys. Rev. Lett. 97, 266401 (2006).
[23] We solve the equation numerically by iterating until self-
consistency within an accuracy of 0.05 eV.
[24] This explains why LDA+U opens a gap [14, 16], while missing the correct bonding/antibonding splitting.
[25] ∆eπg = 0.48 eV and 0.54 eV (for the two eπg orbitals), ∆b = −0.32 eV, ∆ab = 1.2 eV
[26] a = √(2 (16t²/(c − U)² + 1)), c = √(16t² + U²)
|
0704.0903 | New possible properties of atomic nuclei investigated by non linear
methods: Fractal and recurrence quantification analysis | New Possible Properties of Atomic Nuclei Investigated by Non Linear Methods,
Fractal and Recurrence Quantification Analysis.
Elio Conte(*)
(*) Department of Pharmacology and Human Physiology – Tires – Center for Innovative Technologies for
Signal Detection and Processing, University of Bari , Italy;
International Center for Studies on Radioactivity, Bari, Italy.
Andrei Yu. Khrennikov (+)
(+) International Center for Mathematical Modeling in Physics and Cognitive Sciences, University of
Växjo, S-35195 Sweden
and
Joseph P. Zbilut (°)
(°) Department of Molecular Biophysics and Physiology, Rush University,1653 W. Congress, Chicago,
IL60612, USA.
Abstract: For the first time we apply the methodologies of non linear analysis to investigate atomic matter, using such methods in the analysis of the Atomic Weights and the Mass Numbers of atomic nuclei. Using the AutoCorrelation Function and Mutual Information we establish the presence of non linear effects in the mechanism of increasing mass of atomic nuclei considered as a function of the atomic number Z. We also perform a reconstruction in phase space and obtain values for the Lyapunov spectrum and the correlation dimension D2. We find that such mechanism of increasing mass is divergent, possibly chaotic, and non integer values of D2 are found. Following previous studies of V. Paar et al. [5] we also investigate the possible existence of a Power Law for atomic nuclei and, using also the technique of the variogram, we conclude that a fractal regime could govern the mechanism of increasing mass for nuclei. Finally, using the Hurst exponent, evidence is obtained that the mechanism of increasing mass in atomic nuclei follows a fractional Brownian regime with long range correlations. The most interesting results are obtained by using Recurrence Quantification Analysis (RQA). We estimate % Recurrences, % Determinism, Entropy and Max Line, once in an embedded phase space with dimension D=2 and once in embedding dimension D=1. New recurrences, pseudoperiodicities, self-resemblance and classes of self-similarity are identified, with oscillating values of determinism indicating the presence of more or less stability during the process of increasing mass of atomic nuclei. All the data were analyzed using shuffled data for comparison.
In brief, new regimes of regularity are identified for atomic nuclei that deserve to be deepened by future research. In particular, an accurate analysis of binding energy values by non linear methods is further required.
1. Introduction
It is well known that the mass represents one of the most basic properties of an atomic nucleus.
It is also a complex and non trivial quantity whose basic properties still must be investigated deeply and
properly understood.
The celebrated Einstein mass-energy law reads

m = E/c²   (1)
On this basis some different contributions of energy are stored inside a nucleus, and contribute to its
mass.
During nucleus formation in its ground state, a certain amount of energy B will be released in the process
so that
Mc² = Σj mj c² − B .   (2)
There are different sources of such energy B. The strong attractive interaction of nucleons contributes to it.
However, despite the immense amount of data about nuclear properties, a basic understanding of the nuclear strong interaction, for example, is still lacking. We have a basic model of meson exchange that works at a qualitative level but does not provide a satisfactory description of such a basic interaction. The Coulomb repulsion between protons also contributes, and in addition we have surface effects and many other contributions that, in a phenomenological picture, are tentatively taken into account by invoking models such as the liquid drop model of von Weizsäcker [1].
It is known that other nuclear mass models may be considered and, despite the numerous parameters contained in these different models and the intrinsic conceptual differences adopted in their formulation, some common features arise from these calculations. All such models [2] give similar results for the known masses; their calculations yield a typical accuracy of about 5 × 10⁻⁴ for a medium-heavy nucleus having a binding energy of the order of 1000 MeV, but the predictions of such different mass models diverge strongly when applied to unknown regions.
One consequence of these two indications seems rather evident. According to [2], there is the possibility that a basic underlying mechanism oversees the process of mass formation of atomic nuclei, one that is not presently incorporated in the nuclear models of traditional nuclear physics.
In fact, some astonishing results are not lacking as far as this problem is concerned.
Owing to the presence of Pauli’s exclusion principle, when nucleons are put together to form a bound
state, they are not at rest and thus their kinetic energy also contributes to B given in (2) and thus to the mass of the nucleus. Still according to [2], a part of this energy, namely the one that varies
smoothly with the number of the nucleons, is taken into account in the liquid drop model but the
remaining part of this energy fluctuates with the number of nucleons.
The proper nature of such fluctuations deserves further investigation.
P. Leboeuf [2] has extensively analyzed this problem and his conclusion is that the motion of the
nucleons inside the nucleus has a regular plus a chaotic component. We will not enter into details here
[2] but only recall that traditionally in nuclear physics dynamical effects in the structure of
nuclei have been referred to as shell effects with the pioneer studies of A. Bohr and B.R. Mottelson [3]
and V.M. Strutinsky [4]. The experience here derives from atomic physics where the symmetries of the
Hamiltonian generate strong degeneracy of the electronic levels and such degeneracy produces
oscillations in the electronic binding energy. Shell effects should be due to deviations of the single
particle levels with respect to their average properties.
According to the different approaches that have been introduced to reproduce the systematics of the
observed nuclear masses, which are in part inspired by liquid drop models or Thomas-Fermi approximations, the total energy may be expressed as the sum of two contributions:

U(N, Z, x) = Ū(N, Z, x) + Û(N, Z, x)   (3)

with x a parameter set defining the shape of the nucleus. Here Ū describes the bulk or macroscopic properties of the nucleus while Û describes shell effects. The latter term could be split into two components [2], the first representing the regular component and the second representing the chaotic contribution. The same decomposition should hold for the mass, Mc² = Σj mj c² − B (cf. (2)), with

B(E, x) = B̄(E, x) − B̂(E, x) .   (4)
There is now another important but independent contribution that deserves to be mentioned here.
Rather recently V. Paar et al [5] introduced a power law for description of the line of stability of atomic
nuclei, and in particular for the description of atomic weights. They compared the found power law with
the semi-empirical formula of the liquid drop model, and showed that the power law corresponds to a
reduction of neutron excess in superheavy nuclei with respect to the semi-empirical formula. Some fractal
features of the nuclear valley of stability were analyzed and the value of fractal dimension was
determined.
It is well known that a power law may be often connected with an underlying fractal geometry of the
considered system. If confirmed for atomic nuclei, according to [5], a new approach could be proposed
to the problem of stability of atomic nuclei. In this case the aim should be to identify the basic features in
underlying dynamics giving rise to the structure of the atomic nuclei. Of course, the role of fractal geometry in quantum physics and quark dynamics was pointed out in [6], where in particular the self-similarity of the paths of the Feynman path integral was analyzed.
Finally, M. Pitkanen repeatedly outlined that his TGD model predicts that the universe is a 4-D spin glass and that this kind of fractal energy landscape might be present in some geometric degrees of freedom, such as the shape of the nuclear outer surface or, if the nuclear string picture is accepted, in the folding dynamics of the nuclear string [7].
Still examining the problem under a different point of view, we must outline here the results that recently
were obtained in [8].
These authors found non linear dynamical properties of giant monopole resonances in atomic nuclei
investigated within the time-dependent relativistic mean field model.
Finally, in ref [9], the statistics of the radioactive decay of heavy nuclei was the subject of experimental
interest. It was considered that, owing to the intrinsic fluctuations of the decay rate, the counting statistics
could depart from the simple Poissonian behaviour. Several experiments carried out with alpha and beta sources have found that the counting variance for long counting periods is higher than the Poissonian
value by more than one order of magnitude. This anomalous large variance has been taken as an
experimental indication that the power spectrum of the decay rate fluctuations has a contribution that
grows as the inverse of the frequency f at low frequencies.
In conclusion, having considered the problem from several different viewpoints, there is growing evidence that it deserves to be analyzed by the methods of non linear dynamics in order to obtain some detailed results.
This was precisely the aim of the present paper. We analyzed the Atomic Weights Wa(Z) and the Mass Number A(Z) as functions of the atomic number Z for stable atomic nuclei and we applied to such data our
non linear test methods, Fractal and Recurrence Quantification Analysis. The results are reported and
discussed in detail in the following section.
2. Preparation of the Experimental Data.
It is known that the trends of nuclear stability may be represented in a well known Z,N chart of nuclides
where each nucleus with Z protons and N neutrons has Mass Number A = Z + N. A line of stability may
be realized by taking for each atomic number Z , the stable nucleus of the isotope having the largest
relative abundance.
The atomic weights of a naturally occurring element are given by averaging the corresponding isotope
weights, weighted so to take into account the relative isotopic abundances. In this paper the data for
stable nuclei with Z values up to Z = 83 were considered. The data were taken using the IUPAC 1997 standard atomic weights, at www.webelements.com. Wa(Z) and A(Z) are given in Fig. 1.
Fig. 1: Atomic Weights Wa(Z) and Mass Number A(Z).
3. Tests by Using Mutual Information.
Autocorrelation Function. The autocorrelation ρ(τ) is given by the correlation of a time series with itself, using x(t) and x(t + τ) as the two series in the correlation formula. For a time series it measures how well correlated the values of the series are under different delays. A common choice for the delay time to use when embedding time series data is the first time the autocorrelation crosses zero; it represents the lowest value of the delay for which the values are uncorrelated. The important point here is that the autocorrelation function is a linear measure and thus it may not provide accurate results in situations where important non linear contributions are expected.
In the present case we examine two series, Wa(Z) and A(Z). Here we do not have a time variable with respect to which the delay must be characterized; instead it is the atomic number Z that takes the place of the time t. Therefore we will speak here of a Z-shift instead of time-lags in our embedding procedure.
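A minimal sketch of how the ACF over Z-shifts and its first zero crossing can be computed; the linear ramp below is only a stand-in for the actual Wa(Z) or A(Z) series.

import numpy as np

def acf(x, max_shift):
    """Sample autocorrelation of the series x for shifts 1..max_shift."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.sum(x * x)
    return np.array([np.sum(x[:-s] * x[s:]) / denom for s in range(1, max_shift + 1)])

def first_zero_crossing(rho):
    """Return the first shift (1-based) at which the ACF becomes <= 0, or None."""
    for s, r in enumerate(rho, start=1):
        if r <= 0:
            return s
    return None

wa = np.linspace(1.0, 209.0, 83)          # hypothetical stand-in for Wa(Z), Z = 1..83
rho = acf(wa, max_shift=80)
print(first_zero_crossing(rho))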
In Fig. 2 we report the results of our calculations of the autocorrelation function (ACF) in the cases of the Atomic Weights and the Mass Number, respectively. Both the ACF for Wa(Z) and that for A(Z) were calculated for Z-shifts ranging from 1 to 80. The first value of Z at which the ACF crosses zero was obtained for Wa(Z) and A(Z), and it resulted in Z-shift = 30. A typical behaviour was obtained for the ACF in the cases of Wa(Z) and A(Z), with progressively decreasing but positive values of the ACF up to Z-shift = 30, and a subsequent negative half-wave for Z-shift values greater than 30. This seems an interesting result that deserves a careful interpretation.
Fig. 2: ACF for Z-shift values ranging from 1 to 80 in the case of Wa(Z) (left) and of A(Z) (right).
The Mutual Information. It is usually used to determine a useful time delay for attractor reconstruction of a given time series. Generally speaking, we may observe only one variable of a system, x(t), and we wish to reconstruct a higher dimensional attractor. We have to consider [x(t), x(t + τ), x(t + 2τ), ..., x(t + nτ)] to produce an (n + 1)-dimensional representation. Consequently, the problem is to choose a proper value for the delay τ. If the delay is chosen too short, then x(t) is very similar to x(t + τ). Of course, for too large a delay, the corresponding coordinates become essentially independent and no information can be gained. The method of Mutual Information [10] involves the idea that a good choice for τ is one for which, given x(t), the measurement x(t + τ) provides new information. In other terms, given a measurement of x(t), how many bits on average can be predicted about x(t + τ)? In the general case, as τ is increased, the Mutual Information decreases and then usually rises again. The first minimum of the Mutual Information is used to select a proper τ. The important point here is that the Mutual Information takes non linear correlations into account.
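A minimal sketch of a histogram-based M.I. estimate over Z-shifts, in the spirit of [10]; the bin number, the random-walk stand-in data and the first-local-minimum rule are illustrative choices of ours.

import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate of the mutual information I(X;Y) in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def mi_vs_shift(series, max_shift, bins=8):
    """M.I. between the series and its Z-shifted copy, for shifts 1..max_shift."""
    s = np.asarray(series, dtype=float)
    return [mutual_information(s[:len(s) - k], s[k:], bins) for k in range(1, max_shift + 1)]

# the first local minimum of the M.I. curve selects the Z-shift used for embedding
mi = mi_vs_shift(np.random.default_rng(0).normal(size=83).cumsum(), 20)
shift = next((k + 1 for k in range(1, len(mi) - 1)
              if mi[k] <= mi[k - 1] and mi[k] <= mi[k + 1]), None)
print(mi, shift)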
Before considering the results that we have obtained, it is important to note that they change in some manner the traditional way of approaching the discussion of atomic weights and mass numbers of atomic nuclei. In fact, we do not consider here values obtained for a single atomic weight or for a single mass number. Instead, using M.I., we evaluate M.I. values computed for pairs of Atomic Weights, i.e. Wa(Z) and Wa(Z + Z-shift), for any possible Z and for each considered Z-shift. The same thing happens for pairs of atomic nuclei with Mass Numbers A(Z) and A(Z + Z-shift).
In Fig. 3 we give our results for the analysis of Wa(Z). The calculated Z-shift resulted in Z = 3. In Fig. 4 we give instead the results for A(Z); in this case the calculated Z-shift resulted to be Z = 2. To complete our results, in Fig. 5 we also give the results of M.I. computed for N(Z), N being this time the number of neutrons considered as a function N(Z). Finally, Fig. 6 compares the Mutual Information of Wa(Z), A(Z), N(Z).
Fig. 3: Mutual Information
Z-shift M. I.
0 2.75572
1 2.29477
2 2.12415
3 2.11128
4 2.29507
5 2.30601
6 2.32549
7 2.15915
8 2.11995
9 2.19359
10 2.34003
11 2.21110
12 2.05087
13 2.09827
14 2.16302
15 2.19951
16 2.09898
17 2.03026
18 2.07486
19 2.15473
20 2.07256
Fig. 4: Mutual Information
Z-shift M. I.
0 2.75025
1 2.24680
2 2.12359
3 2.12379
4 2.24418
5 2.25447
6 2.21247
7 2.15885
8 2.11560
9 2.19567
10 2.25399
11 2.09334
12 2.02933
13 2.08003
14 2.13427
15 2.07279
16 2.09985
17 1.98171
18 2.08486
19 2.11095
20 2.09009
[Plots: Mutual Information versus Z-lags for the Atomic Weights and for the Mass Number.]
Fig. 5: Mutual Information
Z-shift M.I.
0 2.746829
1 2.247466
2 2.117802
3 2.108233
4 2.248289
5 2.251926
6 2.288034
7 2.144352
8 2.131585
9 2.133403
10 2.219288
11 2.086516
12 2.082935
13 2.129288
14 2.226088
15 2.092718
16 2.034579
17 1.983242
18 2.053253
19 2.038391
20 2.018196
Fig. 6: Mutual Information versus Z-shift for the Atomic Weights, the Mass Number and the Neutron Number.
We are now in a position to summarize some results.
Using the autocorrelation function, ACF (Linear Analysis), a Z-shift value of Z = 30 is obtained for both Wa(Z) and A(Z).
Using Mutual Information (Non Linear Analysis), one obtains instead Z-shift = 3 for Wa(Z) and Z-shift = 2 for A(Z). Also N(Z) gave Z-shift = 3.
We have a preliminary indication that the mechanism of increasing mass in atomic nuclei is a non linear
mechanism. Of course, this could be an important indication for understanding the basic features of nuclear matter. Therefore it becomes important to attempt to confirm such a conclusion on the
basis of a deeper check. The test that one uses in such cases in the analysis of the non linear dynamics of time series data is that of surrogate data. Here we used shuffled data. The results are given in Fig. 7 for Wa(Z) and in Fig. 8 for A(Z).
Fig.7 : Surrogate Data Analysis
Z-lags M.I.-Surrogate Data
0 2.76156
1 1.48621
2 1.44105
3 1.44755
4 1.38917
5 1.36923
6 1.54505
7 1.38976
8 1.38138
9 1.48849
10 1.36547
11 1.34689
12 1.37347
13 1.42643
14 1.34143
15 1.29609
16 1.25763
17 1.40470
18 1.43727
19 1.41390
20 1.36288
Fig. 8 : Surrogate Data Analysis
Z-lags M.I.-Surrogate Data
0 2.73606
1 1.61453
2 1.39896
3 1.37041
4 1.33673
5 1.30566
6 1.41145
7 1.34616
8 1.35618
9 1.34365
10 1.37382
11 1.25994
12 1.30270
13 1.41078
14 1.47545
15 1.21417
16 1.31047
17 1.27582
18 1.38925
19 1.35851
20 1.41464
The results obtained by using shuffled data clearly confirm that we are in the presence of a non linear mechanism in the process of increasing mass of atomic nuclei.
We also tested statistically the obtained differences between the M.I. for original and surrogate data, for the case of the Atomic Weights as well as for the case of the Mass Numbers. In the case of M.I. for Atomic Weights vs M.I. for Atomic Weights – Surrogate Data, using an unpaired t test we obtained a P value P < 0.0001, and the same value was found in the case of M.I. for Mass Number vs M.I. for Mass Number – Surrogate Data.
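A sketch of the shuffle-surrogate comparison described above, using an unpaired t test from scipy.stats; the stand-in series, the single surrogate and the bin number are placeholders.

import numpy as np
from scipy.stats import ttest_ind

def hist_mi(a, b, bins=8):
    """Histogram mutual-information estimate (bits), as in the earlier sketch."""
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def mi_curve(series, max_shift):
    s = np.asarray(series, dtype=float)
    return [hist_mi(s[:len(s) - k], s[k:]) for k in range(1, max_shift + 1)]

rng = np.random.default_rng(1)
series = np.linspace(1.0, 209.0, 83)                  # stand-in for Wa(Z)
mi_orig = mi_curve(series, 20)
mi_surr = mi_curve(rng.permutation(series), 20)       # shuffled surrogate
print(ttest_ind(mi_orig, mi_surr, equal_var=False))   # unpaired t test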
[Plots: Mutual Information versus Z-lags, Atomic Weights vs Surrogate Data and Mass Number vs Surrogate Data.]
In conclusion, by accepting the presence of non linearity, we have reached the first relevant conclusion of
the present paper.
Looking now at Figures 3, 4, 5, 6 one may identify new properties of atomic nuclei. Remember that we are each time considering pairs of Atomic Weights, or pairs of Mass Numbers, or pairs of Neutron Numbers, for atomic nuclei with Z-shift values ranging from 1 to 20. What one would expect in this case is a minimum of the Mutual Information followed soon after by a rather constant behaviour of the M.I. Examination of the results reveals instead some definite maxima and some definite minima at given values of the Z-shift, which are quite different in the three cases that we have examined.
In detail the maxima for Atomic Weights are given at Z-shift values = 6,10,15,19,…. . Minima instead
are given at Z-shift values =3,8,12,17,…… .
The maxima for Mass Numbers are given at Z-shift values=5,10,14,16(19) while the minima are given at
Z-shift values =2,8,12,15,17. For Neutrons we have Z-shift values = 3,9,12,17 for the minima and
6,10,14,18 for the maxima. In conclusion: repeating that each time we are exploring the M.I. value for pairs of atomic weights, or of mass numbers, or of neutron numbers, shifted in the Z value by some given amount ranging between 1 and 20, we find that some pairs of nuclei show maximal M.I. values while other pairs of nuclei show minimal M.I. values. Therefore we have new and interesting properties identified in atomic nuclei when they are analyzed in pairs as in the present methodology. We may call such newly identified regularities of atomic nuclei pseudo periodicities in pairs of atomic nuclei.
For Mass Numbers we may write, as an example,

ΔA = ΔN + Z-shift .

For a fixed value of Z we have A1 = Z + N1. For an atomic nucleus with mass number A2 and Z-shifted, we will have A2 = Z + Z-shift + N2. Consequently, ΔA = A2 − A1 = ΔN + Z-shift with ΔN = N2 − N1. For Z-shift values = 5, 10, 14, 16 (19), the considered pairs of atomic nuclei will show maxima of M.I.; instead for Z-shift values = 2, 8, 12, 15, 17, such M.I. values will reach a minimum.
Let us go into more detail in the analysis.
First of all we have to note that the values of M.I., calculated for each Z-shift ranging from 1 to 20, turn out to be quite different for Wa(Z) with respect to A(Z) and N(Z). In addition, as previously said, Mutual Information measures how much, given two random variables and knowing one of them, our uncertainty about the other is reduced. Mutual Information must thus be intended essentially as an estimate of the mutual dependence of two variables. In our case we find that the pairs of atomic weights, or of mass numbers, or of neutron numbers of atomic nuclei that are shifted by some definite values of the atomic number Z show strong dependence (maximal values of M.I.) or, respectively, very low dependence (minimal values of M.I.). We have some new pseudoperiodicities that in some manner recall a new kind of pseudo isotopies. All of that seems to be realized in a fully non linear regime.
4. Phase Space Reconstruction of Wa(Z) and A(Z).
We may now attempt to obtain for the first time a phase space reconstruction of the Atomic Weights and the Mass Number of atomic nuclei.
To reach this objective one must estimate the Embedding Dimension using the False Nearest Neighbors criterion (FNN). We applied it using a Z-shift = 3 for Wa(Z) and a Z-shift = 2 for A(Z) as previously found. A false-neighbor criterion distance of 4.42 was used in both cases of the analysis. The results are reported in Fig. 9 for the atomic weights Wa(Z) and in Fig. 10 for the Mass Number A(Z).
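A compact sketch of the false nearest neighbors criterion in a common formulation (Kennel-type distance-ratio test); the 4.42 threshold is taken from the text, while the stand-in series is a placeholder and the exact algorithmic variant behind Figs. 9 and 10 may differ.

import numpy as np

def delay_embed(x, dim, tau):
    """Delay-embedding of a 1-D series into `dim` dimensions with delay tau."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def fnn_fraction(x, dim, tau, r_tol=4.42):
    """Fraction of false nearest neighbors when going from `dim` to `dim`+1."""
    x = np.asarray(x, dtype=float)
    emb = delay_embed(x, dim, tau)
    emb1 = delay_embed(x, dim + 1, tau)
    n = len(emb1)                        # points present in both embeddings
    false = 0
    for i in range(n):
        d = np.linalg.norm(emb[:n] - emb[i], axis=1)
        d[i] = np.inf                    # exclude the point itself
        j = int(np.argmin(d))            # nearest neighbor in dimension `dim`
        d_low = d[j]
        d_high = abs(emb1[i, -1] - emb1[j, -1])   # distance in the added coordinate
        if d_low > 0 and d_high / d_low > r_tol:
            false += 1
    return false / n

series = np.linspace(1.0, 209.0, 83)     # stand-in for Wa(Z)
for dim in (1, 2, 3):                    # real data would show the fraction dropping with dim
    print(dim, fnn_fraction(series, dim, tau=3))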
Fig. 9: False Nearest Neighbors for Atomic Weights. Fig. 10: False Nearest Neighbors for Mass Number.
The evaluation of the results given in Figures 9 and 10 enables us to conclude that the phase space reconstruction for atomic weights and mass number requires an estimated embedding dimension that lies between 1 and 2. We may assume D = 2. Atomic weights and mass number of atomic nuclei may thus be approximately represented in a two dimensional phase space. Consequently, according to the general framework of the theory of non linear dynamics of systems, we may conclude that only a very small number of independent variables is required in order to describe the mechanism of increasing mass of atomic nuclei. We may consider that they are two variables which, with the greatest prudence, we may identify as the proton and the neutron numbers, respectively. The phase space description of the atomic weights Wa(Z) and of the Mass Number A(Z) approximately requires the use of such two variables.
Since this result has been obtained in a closed form, we may now attempt to analyze whether the two given series Wa(Z) and A(Z) exhibit properties of divergence or not.
To this purpose we may calculate the Lyapunov spectrum in the embedded phase space. The results that we obtained are reported in Fig. 11 and in Fig. 12 for Wa(Z) and A(Z), respectively.
Fig.11 : Lyapunov spectrum of atomic weights
iteration, exponents
1 -0.139132 -1.767928
2 -0.054642 -1.852418
3 -0.032002 -1.875058
4 -0.021497 -1.885562
iteration, exponents
5 -0.015293 -1.891767
6 -0.011169 -1.895891
7 -0.008224 -1.898835
8 -0.006016 -1.901043
iteration, exponents
9 -0.004299 -1.902761
10 -0.002925 -1.904134
11 -0.001801 -1.905258
12 -0.000864 -1.906195
13 -0.000946 -1.854474
14 -0.000219 -1.810264
15 0.000026 -1.779827
16 0.000273 -1.753227
17 0.000425 -1.705629
18 0.000914 -1.667297
19 0.001349 -1.632997
20 0.001738 -1.602124
21 0.002186 -1.585933
22 0.002567 -1.570617
23 0.002785 -1.552573
24 0.003007 -1.536054
25 0.003233 -1.508474
26 0.003433 -1.483006
27 0.003304 -1.450959
28 0.003100 -1.421118
29 0.003155 -1.394447
30 0.002734 -1.365179
31 0.002481 -1.336786
32 0.002367 -1.309490
33 0.002297 -1.283733
34 0.002104 -1.261030
35 0.002071 -1.235585
36 0.002229 -1.208038
37 0.002491 -1.182091
38 0.002558 -1.161598
39 0.002648 -1.143582
40 0.002781 -1.125369
41 0.002912 -1.108048
42 0.003353 -1.094097
43 0.003464 -1.078832
44 0.003659 -1.064133
iteration, exponents
45 0.003833 -1.043353
46 0.003724 -1.022886
47 0.003713 -1.008188
48 0.003649 -0.995043
49 0.003576 -0.982424
50 0.003419 -0.970124
51 0.003116 -0.958012
52 0.002781 -0.948094
53 0.002434 -0.938526
54 0.002281 -0.929092
55 0.002133 -0.920000
56 0.001982 -0.911222
57 0.001851 -0.902767
58 0.001885 -0.897098
59 0.001945 -0.891648
60 0.002004 -0.886751
61 0.002097 -0.882050
62 0.002113 -0.877724
63 0.002311 -0.872891
64 0.002398 -0.865438
65 0.002469 -0.858201
66 0.002539 -0.854135
67 0.002586 -0.850170
68 0.002372 -0.850370
69 0.002156 -0.850555
70 0.001942 -0.850731
71 0.001733 -0.850900
72 0.001529 -0.851065
73 0.001330 -0.851224
74 0.001136 -0.851379
75 0.000947 -0.851530
76 0.000763 -0.851676
77 0.000585 -0.851819
78 0.000410 -0.851958
Fig.12 : Lyapunov spectrum of mass number
iteration, exponents iteration, exponents
1 -0.063750 -1.815136
2 -0.013655 -1.865231
3 -0.004496 -1.874390
4 -0.000940 -1.877946
5 0.001067 -1.879953
6 0.002390 -1.881276
iteration, exponents
7 0.003333 -1.882218
8 0.004039 -1.882925
9 0.004589 -1.883475
10 0.005029 -1.883914
11 0.005907 -2.030549
12 0.005902 -2.152007
13 0.007184 -2.344940
14 0.007900 -2.508600
15 0.008538 -2.585360
16 0.008446 -2.593648
17 0.008353 -2.600949
18 0.007751 -2.585976
19 0.008750 -2.554168
20 0.008150 -2.531907
21 0.008186 -2.482481
22 0.007894 -2.450329
23 0.008806 -2.420931
24 0.009633 -2.393973
25 0.009816 -2.350399
26 0.009954 -2.310147
27 0.009880 -2.258690
28 0.009662 -2.210759
29 0.009018 -2.163469
30 0.008250 -2.119600
31 0.007832 -2.082773
32 0.007498 -2.052309
33 0.007109 -2.023616
34 0.006523 -2.013469
35 0.005691 -2.003623
36 0.005143 -1.995782
37 0.004718 -1.988460
38 0.003966 -1.963859
39 0.004116 -1.946858
40 0.004339 -1.930786
41 0.004649 -1.922343
42 0.005367 -1.917894
43 0.007274 -1.903393
iteration, exponents
44 0.008790 -1.889480
45 0.010447 -1.876392
46 0.010388 -1.871786
47 0.009970 -1.869670
48 0.009374 -1.855670
49 0.008541 -1.841986
50 0.007498 -1.829730
51 0.006726 -1.821934
52 0.005940 -1.808215
53 0.005215 -1.795046
54 0.004497 -1.783516
55 0.004030 -1.771905
56 0.003596 -1.760725
57 0.002953 -1.750916
58 0.002916 -1.743006
59 0.002961 -1.735446
60 0.003004 -1.728808
61 0.003032 -1.722373
62 0.003108 -1.713120
63 0.003229 -1.704206
64 0.003291 -1.693598
65 0.003299 -1.689223
66 0.003127 -1.691107
67 0.002824 -1.702828
68 0.002620 -1.700615
69 0.002292 -1.708755
70 0.002307 -1.709571
71 0.002341 -1.710383
72 0.002445 -1.714966
73 0.002536 -1.719414
74 0.002212 -1.719409
75 0.001877 -1.719385
76 0.001547 -1.719358
77 0.001225 -1.719332
78 0.000912 -1.719306
79 0.000606 -1.719280
80 0.000308 -1.719255
To calculate the Lyapunov spectrum in the case of the Atomic Weights, Wa(Z), we used 22 fitted points in the embedded phase space.
The following results were obtained for the calculated Lyapunov exponents:
λ1 = 0.000410 and λ2 = −0.851958. It is seen that we have λ1 > 0 and λ2 < 0 with λ1 + λ2 < 0, as required.
In conclusion we are in the presence of a divergent system, and such divergence may be indicative of a pure chaotic regime for the Atomic Weights.
In the case of the Mass Number, A(Z), we utilized 17 fitted points in the embedded phase space. The results obtained for the calculated Lyapunov exponents are λ1 = 0.000308 and λ2 = −1.719255, with λ1 > 0, λ2 < 0 and λ1 + λ2 < 0. Also in the case of the Mass Number, A(Z), we are in the presence of a divergent system, and this could be indicative of a pure chaotic regime.
In brief, we have reached the following conclusions:
1) In the process of increasing mass of atomic nuclei we are in the presence of a non linear mechanism. Remember that the presence of non linear contributions in the dynamics of a system often gives origin to chaotic regimes.
2) The mechanism of increasing mass in the Atomic Weights and in the Mass Number for pairs of atomic nuclei also exhibits some pseudo periodicities at some definite Z-shifts of atomic nuclei. Therefore, we could be in the presence of an ordered regime of increasing mass, but within a whole structure that is divergent and possibly chaotic.
3) A phase space reconstruction has been realized for the Atomic Weights and the Mass Number of atomic nuclei, respectively. In our opinion this is a relevant result obtained here. In fact, from the analysis performed by using FNN, it does not emerge in a completely clear manner that the reconstructed phase space has dimension D=2; we have FNN values that leave the embedding dimension suspended between 1 and 2. In such a case the analysis adopts the largest value. In conclusion we may accept an embedding dimension D=2, and only under this condition do we have that only a few, namely two, variables are required in order to describe the mechanism of increasing mass of atomic nuclei in phase space with respect to atomic weights and mass number. The first variable should be the atomic number Z, and the other variable should be the Neutron Number N = A − Z.
4) The analysis of Wa(Z) and A(Z) reveals a new important feature when we analyze such two systems by calculation of the Lyapunov spectrum. It results that we are in the presence of divergent systems in both cases of stable nuclei analyzed by Wa(Z) and A(Z). Such divergent behaviour, linked to the previously found non linearity, could be indicative that we are in the presence of a chaotic regime in the mechanism of the increasing mass of atomic nuclei when seen as a function of Z.
We may go a step further by calculating the Correlation Dimension in the reconstructed phase space for both Wa(Z) and A(Z). In Fig. 13 we report the results for Wa(Z); in Fig. 14 we give instead the results for A(Z). Finally, in Figures 15 and 16 we show the results obtained using surrogate (shuffled) data.
Fig.13 : Atomic Weights.
Fig.14 : Mass Number.
For the atomic weights the value obtained for the Correlation Dimension is D2 = 1.955 ± 0.296. For the Mass Number it results instead in D2 = 2.120 ± 0.084.
It is important to observe that in both cases we obtain non integer values of this dimension in the phase space reconstruction.
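A minimal Grassberger-Procaccia style sketch of how D2 can be estimated from the embedded series; the radii range and the stand-in data are illustrative, and this is not necessarily the estimator used for Figs. 13-16.

import numpy as np

def delay_embed(x, dim, tau):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def correlation_dimension(points, radii):
    """Grassberger-Procaccia: slope of log C(r) vs log r over a range of radii."""
    n = len(points)
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    pair_d = dists[np.triu_indices(n, k=1)]
    c = np.array([np.mean(pair_d < r) for r in radii])
    mask = c > 0
    slope, _ = np.polyfit(np.log(radii[mask]), np.log(c[mask]), 1)
    return slope

emb = delay_embed(np.linspace(1.0, 209.0, 83), dim=2, tau=3)   # stand-in series, D=2, Z-shift=3
radii = np.logspace(0.7, 2.2, 20)
print(correlation_dimension(emb, radii))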
We may now consider the results for surrogate data.
Fig.15 : Results on Correlation Dimension, Atomic Weights (surrogate data).
Fig.16 : Results on Correlation Dimension, Mass Number (surrogate data).
In the case of Wa(Z) we obtain D2 = 5.130 ± 0.624, while in the case of A(Z) we have D2 = 5.193 ± 0.810.
As seen from the results, a net difference is obtained in the comparison of the original with the surrogate data. It may be quantified in the following manner:

Atomic Weights, original: 3.678 (Z-lag 2);  Atomic Weights, surrogate: 5.088 (Z-lag 4);  Mass Number, surrogate: 3.793 (Z-lag 2).

The null hypothesis may be rejected. In conclusion we have a non integer dimension for both Wa(Z) and A(Z).
Therefore it makes sense to attempt to ascertain whether we are in the presence of a fractal behaviour for both of the data sets under examination.
5. On a Possible Existing Power Law to Represent Increasing Mass in Atomic Weights and Mass
Number of Atomic Nuclei.
As we indicated in the introduction of the present paper, rather recently V. Paar et al. [5] introduced a
power law for description of the line of stability of atomic nuclei, and in particular for the description of
atomic weights. They compared the found power law with the semi-empirical formula of the liquid drop
model, and showed that the power law corresponds to a reduction of neutron excess in superheavy nuclei
with respect to the semi-empirical formula. Some fractal features of the nuclear valley of stability were
analyzed and the value of fractal dimension was determined.
It is well known that a power law may be often connected with an underlying fractal geometry of the
considered system. If confirmed for atomic nuclei, according to [5], it could be proposed a new approach
to the problem of stability of atomic nuclei. In this case the aim should be to identify the basic features in
underlying dynamics giving rise to the structure of the atomic nuclei.
The aim here is to perform this kind of analysis for Wa(Z) and A(Z).
For the atomic weights let us introduce the following Power Law:

Wa(Z) = a Z^β   (5)

while for the Mass Number let us introduce the following power law

A(Z) = c Z^γ .   (6)

The problem is now to estimate (a, β) and (c, γ) by a fitting procedure.
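A minimal fitting sketch with scipy.optimize.curve_fit; the synthetic data below merely stand in for the IUPAC atomic weights.

import numpy as np
from scipy.optimize import curve_fit

def power_law(z, a, beta):
    return a * z**beta

z = np.arange(1, 84, dtype=float)                          # Z = 1..83
rng = np.random.default_rng(0)
wa = 1.47 * z**1.12 + rng.normal(scale=0.5, size=z.size)   # hypothetical stand-in for Wa(Z)

params, cov = curve_fit(power_law, z, wa, p0=(1.0, 1.0))
errors = np.sqrt(np.diag(cov))                             # asymptotic standard errors
print(params, errors)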
We give here the obtained results for the Atomic Weights.
Curve Fit Report
Y Variable: C2. X Variable: C1. Model Fit: A*X^B
Parameter Estimates for All Groups
Groups   Count   Iter's   R2        A         B
All 83 24 0.99954 1.47335 1.12133
Combined Plot Section
Fig. 17: Fitted curve Y = A*X^B.
Model Estimation Section
Parameter Parameter Asymptotic Lower Upper
Name Estimate Standard Error 95% C.L. 95% C.L.
A 1.47335 0.02448 1.42464 1.52206
B 1.12133 0.00399 1.11338 1.12927
Iterations 24 Rows Read 83
R-Squared 0.999538 Rows Used 83
Random Seed 9839 Total Count 83
Estimated Model
Curve Fit Report
Y Variable: C2. X Variable: C1.
Plot Section
Fig. 18: Residuals vs C1.
In conclusion, for (5) we obtained a = 1.47335 and β = 1.12133.
V. Paar et al. [5] obtained instead a = 1.44 ± 0.02 and β = 1.120 ± 0.004, and β = 1.19 ± 0.01 by using the Box Counting method. There is excellent agreement.
As may be seen, the obtained exponent differs significantly from the linear case. In addition, the obtained values give strong evidence for a possible fractal regime.
Let us see now the results that we obtained for the (6) concerning Mass Number.
Curve Fit Report
Y Variable: C2. X Variable: C1.
Model Fit: A*X^B
Parameter Estimates for All Groups
Groups   Count   Iter's   R2        A         B
All 83 23 0.99929 1.46185 1.12389
Combined Plot Section
Fig. 19: Fitted curve Y = A*X^B.
Model Estimation Section
Parameter Parameter Asymptotic Lower Upper
Name Estimate Standard Error 95% C.L. 95% C.L.
A 1.46185 0.03010 1.40195 1.52174
B 1.12389 0.00495 1.11404 1.13373
Iterations 23 Rows Read 83
R-Squared 0.999294 Rows Used 83
Random Seed 10007 Total Count 83
Estimated Model
Curve Fit Report
Y Variable: C2. X Variable: C1.
Plot Section
Fig. 20: Residuals vs C1.
In conclusion, for (6) we obtained c = 1.46185 and γ = 1.12389.
V. Paar et al. [5] obtained c = 1.47 ± 0.02 and γ = 1.123 ± 0.005, in excellent agreement.
Again the values differ significantly from the linear case. In addition, the obtained values give strong evidence for a possible fractal regime.
The possible existence of a fractal regime in the mechanism of increasing mass in atomic weights and mass number of atomic nuclei changes radically our traditional manner of conceiving nuclear matter.
Consequently, it becomes important to attempt to deepen such a result so as to reach the highest possible level of certainty about it. A way to deepen this kind of analysis is to use the variogram method. Variograms usually give powerful indications of the variability of the examined data and of their self-similarity and self-affine behaviour. In particular, they enable us to calculate the Generalized Fractal Dimension [for details see ref. 11].
The semivariogram is given in the following manner

γ(h) = (1/2) E[(R(x + h) − R(x))²]   (7)

For the Atomic Weights it is

γ(h) = (1/2) E[(Wa(Z + h) − Wa(Z))²]   (8)

and for the Mass Number it is

γ(h) = (1/2) E[(M(Z + h) − M(Z))²] .   (9)

The Z-shift is indicated here by h = 1, 2, ..... Still, in the general case we may write

γ(h) = 1/(2N(h)) Σi (R(xi + h) − R(xi))²   (10)

For a self-affine series the semivariogram scales according to

γ(h) = C h^D   (11)

being D the Generalized Fractal Dimension. It is linked to Ha by D = 2Ha, Ha being the Hausdorff dimension.
We may also estimate the corresponding Probability Density Function, which is given in the following manner

P(h) = k h^(1−a)   (12)

being D = a − 1 and k a scale parameter.
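A minimal sketch of the semivariogram of Eq. (10) and of the log-log estimate of D and Ha from Eq. (11); the linear ramp is again only a stand-in for the actual series.

import numpy as np

def semivariogram(x, max_lag):
    """Empirical semivariogram gamma(h) = 1/(2 N(h)) sum_i (x_{i+h} - x_i)^2."""
    x = np.asarray(x, dtype=float)
    return np.array([0.5 * np.mean((x[h:] - x[:-h])**2) for h in range(1, max_lag + 1)])

def fractal_exponents(gamma):
    """Fit log gamma(h) = log C + D log h; return (D, Hausdorff exponent D/2)."""
    h = np.arange(1, len(gamma) + 1)
    D, _ = np.polyfit(np.log(h), np.log(gamma), 1)
    return D, D / 2.0

series = np.linspace(1.0, 209.0, 83)       # stand-in for Wa(Z)
gamma = semivariogram(series, max_lag=80)
print(fractal_exponents(gamma))            # a smooth ramp gives D close to 2, Ha close to 1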
Let us introduce now the results that we obtained for Atomic Weights.
RESULTS
Fig. 21: Variogram of atomic weights
Z-shift value Z-shift value Z-shift value
1 4.1038233 28 2583.4415 55 9936.2738
2 13.890723 29 2774.3962 56 10263.722
3 30.73476 30 2974.6827 57 10614.884
4 53.281165 31 3177.0212 58 10974.746
5 82.706811 32 3391.3496 59 11360.41
6 118.39056 33 3607.1587 60 11745.812
7 161.00905 34 3826.8629 61 12160.26
8 209.83439 35 4053.0824 62 12559.606
9 265.1562 36 4283.8682 63 12969.082
10 326.91086 37 4517.609 64 13342.27
11 395.38036 38 4760.8117 65 13738.976
12 470.35989 39 5010.0734 66 14163.507
13 551.85195 40 5268.4938 67 14573.838
14 640.44287 41 5533.4834 68 14979.658
15 735.22246 42 5805.6797 69 15415.604
16 837.44107 43 6083.8624 70 15830.968
17 946.67776 44 6370.1678 71 16277.052
18 1060.8571 45 6665.5819 72 16709.305
19 1183.535 46 6969.6876 73 17165.872
20 1314.6237 47 7285.3773 74 17613.299
21 1448.8492 48 7605.9824 75 18096.431
22 1590.1856 49 7928.8907 76 18533.473
23 1734.6391 50 8261.1897 77 18994.677
24 1888.6115 51 8592.0056 78 19457.937
25 2047.654 52 8917.3226 79 20009.547
26 2217.1471 53 9259.8625 80 20578.44
27 2394.576 54 9591.9184
Using the (11) in a Ln-Ln representation, we may now estimate the Generalized Fractal Dimension. We
obtained the following results.
Curve Fit Report
Y Variable: C2. X Variable: C1.
Model Fit: C2=A+B*(C1) Simple Linear
Parameter Estimates for All Groups
Groups Count Iter's R2 A B
All 81 6 0.99986 1.25542 1.98086
Combined Plot Section: Ln-Ln variogram fitting
Fig. 22: Ln-Ln variogram fit (Y = Simple Linear).
Model Estimation Section
Parameter Parameter Asymptotic Lower Upper
Name Estimate Standard Error 95% C.L. 95% C.L.
A 1.25542 0.00939 1.23674 1.27410
B 1.98086 0.00264 1.97560 1.98612
Iterations 6 Rows Read 81
R-Squared 0.999859 Rows Used 81
Random Seed 7364 Total Count 81
Estimated Model
(1.25542232430747)+(1.98086225009967)*(C1)
Curve Fit Report
Y Variable: C2. X Variable: C1.
Model Fit: C2=A+B*(C1) Simple Linear
Plot Section
Fig. 23: Residuals vs C1.
In conclusion, we obtain for Atomic Weights the following results:
Generalized Fractal Dimension D = 1.98086
Hausdorff dimension Ha = 0.99043.
By using (12) we may now calculate the Probability Density Function. For the atomic weights it results that

P(Z) = 5.673 × 10⁶ Z^(−1.98086)   (13)

which is shown in Fig. 24.
Fig. 24
In order to deepen our analysis we may also employ a modified version of the standard variogram analysis, using this time a slight modification of its usual form:

γ2N(h) = 1/(2N) Σi (R(xi + h) − R(xi))²   (14)

where we now normalize by 1/(2N) instead of 1/(2N(h)) = 1/(2(N − h)), 2(N − 1) being the number of degrees of freedom of the whole system under consideration.
We show the results for the variogram γ2N for the atomic weights in Figures 25, 26, 27.
Fig. 25
Fig.26
Fig.27
As seen, passing from the variogram in Fig. 25 to that in Fig. 26 and, finally, to that in Fig. 27, we have used each time a different scale factor and, in spite of such different scale factors, the behaviour of the corresponding variograms remains unchanged. This result may be taken as a further indication that we are in the presence of a fractal regime.
In addition, from the γ2N variogram we may now re-calculate the Generalized Fractal Dimension using a Ln-Ln scale.
The results are given in the following scheme.
Curve Fit Report
Y Variable: C2. X Variable: C1.
Model Fit: C2=A+B*(C1) Simple Linear
Parameter Estimates for All Groups
Groups Count Iter's R2 A B
All 52 6 0.99288 1.65610 1.70683
Combined Plot Section
Fig. 28: Ln-Ln fit (Y = Simple Linear).
Model Estimation Section
Parameter Parameter Asymptotic Lower Upper
Name Estimate Standard Error 95% C.L. 95% C.L.
A 1.65610 0.06408 1.52739 1.78481
B 1.70683 0.02045 1.66576 1.74790
Iterations 6 Rows Read 52
R-Squared 0.992875 Rows Used 52
Random Seed 10882 Total Count 52
Estimated Model
(1.65609637745411)+(1.70683194557246)*(C1)
Curve Fit Report
Y Variable: C2. X Variable: C1.
Model Fit: C2=A+B*(C1) Simple Linear
Plot Section
Fig.29
0.0 1.0 2.0 3.0 4.0
Residual vs C1
In conclusion, also in this case a non integer value of the Generalized Fractal Dimension is obtained. It
results D=1.70683 with Hausdorff dimension 853415.0=aH . Such values result in satisfactory accord
with those previously had in the case of the standard variogram.
We may now consider the results that we obtained in the corresponding analysis for Mass Number.
Fig. 30:Variogram of Mass Number
Variogram values:
Z-lags Variogram- value Z-lags Variogram- value
1 5.243902
2 14.7284
3 32.275
4 54.56329
5 84.44231
6 119.9221
7 163.0592
8 212.2067
9 268.1824
10 329.637
11 399.0625
12 474.9789
13 557.2214
14 646.1667
15 741.9779
16 843.4851
17 953.9318
18 1068.915
19 1192.695
20 1324.532
21 1460.766
22 1604.451
23 1749.508
24 1903.856
25 2063.957
26 2234.096
27 2412.473
28 2603.482
29 2796.472
30 3000.292
31 3205.394
32 3423.01
33 3639.03
34 3860.969
35 4091.052
36 4326.67
37 4561.033
38 4804.122
39 5055.284
40 5320.965
41 5591.036
42 5867.11
43 6145.663
44 6428.603
45 6726.934
46 7036.689
47 7356.458
48 7675.757
49 7997.926
50 8333.5
51 8671.016
52 8996.565
53 9339.117
54 9664.879
55 10009.8
56 10334.06
57 10688.52
58 11053.98
59 11444.02
60 11836.33
61 12256.52
62 12651.02
63 13058.83
64 13431.24
65 13832.53
66 14249.85
67 14660.66
68 15086.9
69 15531.39
70 15943.19
71 16399.63
72 16814.68
73 17282.35
74 17737.39
75 18219.06
76 18626.5
77 19078.67
78 19562.9
79 20150.38
80 20672.67
81 21218.5
We may now give the estimation of the Generalized Fractal Dimension.
Curve Fit Report (Ln-Ln plot)
Y Variable: C2. X Variable: C1.
Model Fit: C2=A+B*(C1) Simple Linear
Parameter Estimates for All Groups
Groups Count Iter's R2 A B
All 81 4 0.99943 1.32691 1.96370
Combined Plot Section
Fig.31
0.0 1.3 2.5 3.8 5.0
Y = Simple Linear
Model Estimation Section
Parameter Parameter Asymptotic Lower Upper
Name Estimate Standard Error 95% C.L. 95% C.L.
A 1.32691 0.01877 1.28955 1.36427
B 1.96370 0.00528 1.95318 1.97422
Iterations 4 Rows Read 81
R-Squared 0.999428 Rows Used 81
Random Seed 2960 Total Count 81
Estimated Model
(1.32690928509202)+(1.96369892532774)*(C1)
Curve Fit Report
Y Variable: C2. X Variable: C1.
Model Fit: C2=A+B*(C1) Simple Linear
Plot Section
Fig.32
0.0 1.3 2.5 3.8 5.0
Residual vs C1
The analysis enables us to give the following results:
Generalized Fractal Dimension D = 1.96370
Hausdorff dimension Ha = 0.98185
We may now calculate the Probability Density Function. It assumes the following form
P(Z) = 9637.1610786.6 Z−−−−××××
Fig. 33
Let us proceed estimating N2γ at different scale factors.
Fig. 34
Fig. 35a
Fig. 35b
By the N2γ variogram we may now re-calculate the Generalized Fractal Dimension using a Ln-Ln scale. The
results are given in the following scheme.
Curve Fit Report
Y Variable: C2. X Variable: C1.
Model Fit: C2=A+B*(C1) Simple Linear
Parameter Estimates for All Groups
Groups Count Iter's R2 A B
All 49 4 0.99538 6.81261 1.70270
Combined Plot Section
Fig.36
0.0 1.0 2.0 3.0 4.0
Y = Simple Linear
Model Estimation Section
Parameter Parameter Asymptotic Lower Upper
Name Estimate Standard Error 95% C.L. 95% C.L.
A 6.81261 0.05209 6.70783 6.91739
B 1.70270 0.01692 1.66867 1.73674
Iterations 4 Rows Read 49
R-Squared 0.995381 Rows Used 49
Random Seed 11153 Total Count 49
Estimated Model
(6.81260680697216)+(1.70270493930648)*(C1)
Curve Fit Report
Y Variable: C2. X Variable: C1.
Model Fit: C2=A+B*(C1) Simple Linear
Plot Section
Fig.37
0.0 1.0 2.0 3.0 4.0
Residual vs C1
Conclusion: also in this case a non integer value of the Generalized Fractal Dimension is obtained. It
results D=1.70270 with Hausdorff dimension 85135.0=aH . Such values result in satisfactory accord with
those previously obtained in the case of the standard variogram.
In conclusion, until here we have used the standard methodologies that generally one utilizes with the
aim to ascertain the presence of non linear contributions in the investigated dynamics as well as to
reconstruct phase space dynamics and to evaluate the possible presence of divergent features in the
system, possibly of chaotic nature, and still the probable presence of a fractal regime in such dynamics.
On the basis of the results that we have obtained, it seems very difficult to escape the conclusion that the
process of increasing mass, regarding Atomic Weighs and Mass Number in atomic nuclei, concerns all the
basic features of non linearity, divergence, possible chaoticity and fractality that we have only just
indicated for systems with non linear dynamics. This is a conclusion that in some manner overthrows our
traditional manner to approach nuclear matter. For this reason it requires still more detailed deepening. In
the following sections we will support our conclusion by other detailed results.
6. Calculation of Hurst Exponent and Possible Presence of Fractional Brownian Behaviour In
Atomic Weights and Mass Number of Atomic Nuclei
It is known that time series arise often from a random walk usually called Brownian motion. The Hurst
exponent [12] in such cases is calculated to be 0.5.
This concept may be generalized introducing the Fractional Brownian Motion (fBM) which arises from
integrating correlated –coloured noise.
The value of Hurst exponent helps us to identify the nature of the regime we have under examination. In
detail, if the H exponent results greater than 0.5, we are in presence of persistence, that is to say, past
trends also persist into the future. On the other hand, in presence of H exponent values less than 0.5 we
conclude for anti persistence, indicating it in this case that past trends tend to reverse in the future.
In the present case the analysis is not performed having a time series but instead we consider the atomic
number Z in )(ZWa , the atomic weights, and in )(ZA , the Mass Number of atomic nuclei.
Our analysis gave the following results.
For atomic weights, )(ZWa , we obtained the subsequent value:
Hurst exponent H = 0.9485604 ; SDH = 0.00625887 ; r2 = 0.999645
Instead for Mass Number , )(ZA , we had the next value:
Hurst exponent H = 0.8953571 ; SDH = 0.0057648 ; r2 = 0.999753.
Both the results obtained respectively for Atomic Weights and for Mass Number, enable us to conclude
that:
1) we are in presence of a Fractional Brownian Regime in both the cases;
2) in both )(ZWa and )(ZA the tendency is for the persistence that results more marked in )(ZWa respect
to )(ZA ;
3) in the case of the Atomic Weights, )(ZWa , the value of Fractal Dimension results to be
0514396.12 =−= HD
while in the case of Mass Number, )(ZA , the value of Fractal Dimension is
1046429.12 =−= HD .
7. Recurrence Quantification Analysis – RQA
Further important information on the nature of the processes presiding over the mechanism of increasing
mass in Atomic Weights and Mass Number of atomic nuclei may be obtained by using RQA, the
Recurrence Quantification Analysis.
This is a kind of analysis that, as it is well known, was introduced by J.P Zbilut and C.L. Webber [13].
Such investigation offers a new opportunity to us. By it we may give a look to the process of increasing
mass of atomic nuclei analyzing in detail the kind of dynamics that governs such mechanism. Therefore,
the results of such investigation must be considered with particular attention owing to their relevance.
The features that we may investigate in detail are the following: first of all we may evaluate the level of
recurrence, that is to say of “periodicity”, that such process exhibits. This is obtained by estimating the %
Rec in RQA. Soon after we may also calculate the Determinism that is involved in such process. This is to
say that we evaluate the level of predictability that it has. We estimate such features by %Det. in RQA. As
third RQA variable we may also estimate the entropy and than the Max Line that is a measure linked
proportionally to the inverse of the Lyapunov exponent. In brief, such measure enables us to evaluate still
again the possible divergence involved in such mechanism.
Usually, when using RQA, one starts with an embedding procedure of the given time series and thus
providing with a given reconstruction of the given time series in phase space. In our case such
reconstruction was previously performed in previous sections and we obtained that we should use an
embedding dimension 2=D with a 3=− shiftZ in the case of Atomic Weights and a 2=− shiftZ in the
case of Mass Numbers. However, in the present analysis our purpose is slightly different in the sense that
we aim to preserve the embedded dimension 2=D but we yearn for analyzing the behaviour of the basic
RQA variables as %Rec., %Det., ENT., and Max Line shifting step by step the value of the atomic
number Z so to explore the mechanism as well as Z increases step by step. In order to perform such kind
of analysis a value of the distance R should be correctly selected. Usually, the distance R in RQA may
be fixed rather empirically selecting a proper value so that %Rec. remain about 1%. However Zbilut and
Webber [13] in their RQA software package introduced RQS that estimates recurrences at various
distances and the cut off that one has at a particular distance respect to a flat behaviour. In this manner one
selects the best optimized distance R to use in the analysis. We applied RQS software to select the proper
distance and it was obtained that such value should be taken 4=R
We also ascertained that such selected value remained rather constant when increasing Z step by step. In
Fig. 38 we give some of the results that were obtained.
Fig. 38
Estimation of Recurrences at various Distances
(Ln-Ln Plot)
0 1 2 3 4 5
Distance R
Optimized distance R = 4
Atomic Weights
Estimation of Recurrences at various Distances
(Ln-Ln Plot)
0 0.5 1 1.5 2 2.5 3 3.5
Distance R
Optimized distance R = 4
Mass Number
In conclusion we selected R=4 for the distance to use in RQA. The embedding dimension was chosen to
be D=2 as it resulted by using FNN criterion and verifying this choice also for different Z values.
Finally, we decided to use the value L=3 for the Line Length.
We have obtained the following results.
Recurrence Quantification Analysis applied to Atomic Weights, )(ZWa , for increasing values of shiftZ − .
The results obtained for %Rec., %Det., ENT., and MaxLine are reported in the following Table 1
Table 1
Z-shift %Rec. %Det. Entropy Max-Line
1 1.36 73.33 2.00 14
2 1.39 44.44 1.50 7
3 1.33 52.38 1.59 13
4 1.36 28.57 1.00 7
5 1.30 38.46 1.00 11
6 1.37 27.50 0.92 5
7 1.16 24.24 0.00 8
8 1.33 13.51 0.00 5
9 1.04 32.14 1.00 5
10 1.33 17.14 0.00 3
11 0.94 25.00 0.00 6
12 1.33 12.12 0.00 4
13 1.04 24.00 0.00 6
14 1.28 20.00 0.00 3
15 0.92 14.29 0.00 3
16 1.31 20.69 0.00 3
17 0.89 15.79 0.00 3
18 1.06 13.64 0.00 3
19 0.84 17.65 0.00 3
20 1.13 13.69 0.00 3
21 0.95 0.00 - -
There are some results that deserve to be outlined .
%Rec. remains rather constant in correspondence of the different shiftZ − values with some fluctuations
taking minima values mainly at shiftZ − = 11, 15, 19,21.
A graph is given in Fig.39.
Fig. 39
Atomic Weight (%Recurrences)
0 2 4 6 8 10 12 14 16 18 20 22 Z-shift
%Det. assumes rather low values also with a length Line L=3. It oscillates among maxima and minima
for increasing values of shiftZ − as it is pictured in Fig.40 (a, b, c). Significantly, %Det. goes definitively
to zero starting with shiftZ − =21.
Fig. 40a
Atomic Weight (%Determinism)
0 2 4 6 8 10 12 14 16 18 20 22 Z-shift
Rather interesting appear also the value we obtain for Entropy and Max Line as reported in the following
figures.
Fig. 40b
Atomic Weight (Entropy)
0 2 4 6 8 10 12 14 16 18 20 22 Z-shift
Fig. 40c
Atomic Weight (Max-Line)
0 2 4 6 8 10 12 14 16 18 20 22 Z-shift
We may now pass to consider Recurrence Quantification Analysis in the case of Mass Numbers, )(ZA .
The results for %Rec., %Det., ENT., and Max Line are given in Table2.
Table 2
Z-shift %Rec. %Det. Entropy Max-Line
1 1.20 77.50 1.37 14
2 1.33 30.23 0.92 7
3 1.23 41.03 1.00 13
4 1.43 36.36 0.81 7
5 1.23 37.84 1.00 11
6 1.30 21.05 1.00 5
7 1.09 38.71 1.00 8
8 1.44 20.00 1.00 5
9 1.19 31.25 1.00 6
10 1.41 8.11 0.00 3
11 1.25 18.75 0.00 6
12 1.33 21.21 1.00 4
13 1.28 41.94 1.59 6
14 1.58 27.03 0.92 4
15 1.14 46.15 1.59 6
16 1.45 9.38 0.00 3
17 1.31 21.43 0.00 3
18 1.59 21.21 1.00 4
19 1.19 25.00 0.00 3
20 0.92 16.67 0.00 3
21 0.79 0.00 - -
%Rec. remains rather constant in correspondence of the different shiftZ − values with some fluctuations
taking minima values mainly at shiftZ − = 7,9,11,13,15,..,19.
A graph is given in Fig.41 (a, b, c, d)
Fig.41a
Mass Number (%Recurrences)
0 2 4 6 8 10 12 14 16 18 20 22 Z-shift
Fig. 41b
Mass Number (%Determinism)
0 2 4 6 8 10 12 14 16 18 20 22 Z-shift
Fig. 41c
Mass Number (Entropy)
0 2 4 6 8 10 12 14 16 18 20 22 Z-shift
Fig. 41d
Mass Number (Max-Line)
0 2 4 6 8 10 12 14 16 18 20 22 Z-shift
%Det. assumes rather low values also with a length Line L=3. It oscillates among maxima and minima
for increasing values of shiftZ − as it is pictured in Fig.41b. Significantly, %Det. goes definitively to zero
starting with shiftZ − =21.
In Fig.42 we have the comparison of %Det of Atomic Weights respect to %Det of Mass Numbers.
Fig.42
Atomic Weights - Mass Number (% Determinism)
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
Z, atomic number
atomic weigths
mass number
Looking at the results given in Tables 1 and 2 and linked figures, we deduce that for different values of
shiftZ − , the corresponding values of %Rec tend to show fluctuations. As it is well known, %Rec
indicates in some manner presence of pseudoperiodicities. Therefore the rather small fluctuations of
%Rec indicate that we are in presence of a mechanism of increasing mass that tends to preserve some
kind of periodicity and self-resemblance with rather modest fluctuations The more interesting datum is
given by %Det. In this case we have more marked oscillations showing that in the process of increasing
mass of stable atomic nuclei we have phase of increasing stability as opposed to phases of decrease
stability. Here the law is the mechanism of addition of nucleons that is realized at each step between the
given nucleus and its subsequent as considered in our phase space representation. %Det oscillations
indicate that the process of progressively addition of nucleons in nuclei happens on the basis of a complex
non linear mechanism in which the determinism and thus the same predictability of subsequent Mass
Number and /or Atomic Weights is very complex and so distant from a simple and linear regime of
addition of matter that we have expected to hold for a very long time. Looking at the values of Entropy,
expressed in bits, one finds that also in this case oscillations are dominant for increasing values of Z-shift.
The same happens for MaxLine whose inverse gives estimation of the divergence of the system in
consideration giving direct indication of a possible chaotic regime.
In conclusion, by using RQA we conclude that the mechanism of increasing mass in atomic nuclei is
rather periodical and self-resemblance. We have obtained marked oscillations for the values of RQA
variables. The important thing to remember here is that we are operating in a reconstructed phase space
that takes into account o more an isolated nucleus as in the classical nuclear physics discussions, but each
time pairs of nuclei in the embedded space with dimension D=2.The deriving behaviour of the
mechanism of increasing mass of atomic nuclei evidences in this case all its complexity. We have now set
of nuclei that evidence their oscillatory behaviour for % Rec, %Det, Entropy and Max Line.Such
oscillatory behaviours of classes of nuclei result obviously connected to “periodicities” and mainly to
classes of similarities that also stable nuclei seem to exhibit. The marked variations in the values of
determinism indicate that the whole process results rather complex and it is regulated from phases of
more stability and subsequent phases of increased instability.
In order to conclude such kind of research, and to confirm the new results that we have here indicated, we
have performed the last kind analysis. In this last case we have in some manner overturned the scheme of
the previous analysis in the sense that we have selected an embedding dimension D=1. The reader will
remember that results by FFN gave same uncertainty in selecting the values D=1 or D=2. Our previous
RQA was performed by using D=2 . In this final exploration we use D=1. In this condition of analysis a
given value of delay and thus , in our case of shiftZ − , has no more sense . Each point in phase space is
given by a value of )(ZWa or of )(ZA . To use RQA we have to select a distance , that is to say a Radius R.
Using Euclidean distance , R will result to be the difference A∆ between two values of Mass Numbers in
the case one utilizes )(ZA for the analysis. In conclusion we have
NZA += , 222 NZA += .
The distance , R, to use in RQA will result to be given
NZA ∆+∆=∆
We decided to use RQA considering L=3 as Line Length and R increasing step by step from 1 to209. In
this manner we calculated %Rec, %Det, Entropy and Max Line, for increasing values of ,.....3,2,1=Z .
Note that in such kind of analysis we used also shuffled data in order to ascertain the validity of the
obtained results. In addition , on the obtained data, we used also a Wald-Wolfowitz run test that we
executed on %Det and on %Det / %Rec, and the probability that the results were obtained by chance,
was found to be <0.001.
The results are now given in Tables 3, 4 and in Figures 43, 44, 45, 46, 47.
Table 3: Results of Recurrences quantification analysis of Mass Number with embedding D=1
and distance R ranging from 1 to 209
distance
Rec.
Det. %Det./%Rec.
distance
Rec.
Det. %Det./%Rec.
distance
Rec.
Det. %Det./%Rec.
1 0.029 0.000 0.000 71 42.051 98.742 2.348 141 71.907 99.55 1.384
2 1.029 8.571 8.329 72 42.051 98.742 2.348 142 72.436 99.513 1.374
3 1.381 23.404 16.947 73 42.786 98.283 2.297 143 72.436 99.513 1.374
4 1.381 23.404 16.947 74 43.432 98.241 2.262 144 72.877 99.395 1.364
5 2.204 66.667 30.248 75 43.432 98.241 2.262 145 73.288 99.399 1.356
6 3.115 75.472 24.229 76 44.167 99.069 2.243 146 73.288 99.399 1.356
7 4.29 82.192 19.159 77 44.902 98.822 2.201 147 73.817 99.602 1.349
8 4.29 82.192 19.159 78 45.519 98.773 2.170 148 74.376 99.526 1.338
9 5.172 86.364 16.698 79 45.519 98.773 2.170 149 74.904 99.451 1.328
10 6.024 89.268 14.819 80 46.195 98.885 2.141 150 74.904 99.451 1.328
11 6.024 89.269 14.819 81 46.812 98.87 2.112 151 75.316 99.532 1.322
12 7.082 90.45 12.772 82 46.812 98.87 2.112 152 75.727 99.728 1.317
13 7.875 88.806 11.277 83 47.458 98.885 2.084 153 75.725 99.728 1.317
14 8.639 91.156 10.552 84 48.193 98.963 2.053 154 76.197 99.691 1.308
15 8.639 91.156 10.552 85 48.78 98.675 2.023 155 76.609 99.501 1.299
16 9.58 94.785 9.894 86 48.78 98.675 2.023 156 77.05 99.657 1.293
17 10.638 94.475 8.881 87 49.398 98.989 2.004 157 77.05 99.657 1.293
18 10.638 94.475 8.881 88 50.132 99.062 1.976 158 77.667 99.659 1.283
19 11.666 93.703 8.032 89 50.132 99.062 1.976 159 78.078 99.511 1.275
20 12.401 95.261 7.682 90 50.779 99.016 1.950 160 78.078 99.511 1.275
21 13.282 95.575 7.196 91 51.455 99.029 1.925 161 78.343 99.4 1.269
22 13.282 95.575 7.196 92 52.16 99.155 1.901 162 78.754 99.664 1.266
23 14.252 94.433 6.626 93 52.16 99.155 1.901 163 79.224 99.703 1.258
24 15.046 95.508 6.348 94 52.806 98.998 1.875 164 79.224 99.703 1.258
25 15.046 95.508 6.348 95 53.425 99.065 1.854 165 79.783 99.595 1.248
26 16.927 97.232 5.744 96 53.425 99.065 1.854 166 80.194 99.45 1.240
27 18.513 96.508 5.213 97 54.041 99.402 1.839 167 80.635 99.745 1.237
28 17.602 95.993 5.454 98 54.687 99.087 1.812 168 80.635 99.745 1.237
29 17.602 95.993 5.454 99 55.245 98.83 1.789 169 81.105 99.565 1.228
30 18.513 96.508 5.213 100 55.245 98.83 1.789 170 81.399 99.458 1.222
31 19.16 96.626 5.043 101 55.804 99.052 1.775 171 81.399 99.458 1.222
32 19.16 96.625 5.043 102 56.303 98.904 1.757 172 81.781 99.748 1.220
33 19.982 96.176 4.813 103 56.92 99.019 1.740 173 82.222 99.607 1.211
34 21.04 97.067 4.613 104 56.92 99.019 1.740 174 82.662 99.523 1.204
35 21.863 97.312 4.451 105 57.743 99.237 1.719 175 82.662 99.573 1.205
36 21.863 97.312 4.451 106 58.419 98.994 1.695 176 83.133 99.611 1.198
37 22.656 97.017 4.282 107 58.419 98.994 1.695 177 83.368 99.612 1.195
38 23.45 97.243 4.147 108 58.919 99.202 1.684 178 83.368 99.612 1.195
39 24.42 96.51 3.952 109 59.536 99.112 1.665 179 83.75 99.684 1.190
40 24.42 96.51 3.952 110 60.182 99.121 1.647 180 84.337 99.617 1.181
41 25.272 96.86 3.833 111 60.182 99.121 1.647 181 84.69 99.722 1.177
42 25.918 96.485 3.723 112 60.711 99.177 1.634 182 84.69 99.722 1.177
43 25.918 96.485 3.723 113 61.387 99.234 1.617 183 84.984 99.654 1.173
44 26.8 96.82 3.613 114 61.387 99.234 1.617 184 85.307 99.724 1.169
45 27.593 97.551 3.535 115 62.033 99.337 1.601 185 85.307 99.724 1.169
46 28.299 98.027 3.464 116 62.504 99.436 1.591 186 85.688 99.863 1.165
47 28.299 98.027 3.464 117 63.033 99.441 1.578 187 86.13 99.693 1.157
48 29.151 97.984 3.361 118 63.033 99.441 1.578 188 86.483 99.728 1.153
49 29.944 97.544 3.258 119 63.562 99.353 1.563 189 86.483 99.728 1.153
50 29.944 97.547 3.258 120 64.091 99.358 1.550 190 86.835 99.763 1.149
51 30.767 98.376 3.197 121 64.091 99.358 1.550 191 87.129 99.865 1.146
52 31.472 98.039 3.115 122 64.678 99.5 1.538 192 87.129 99.865 1.146
53 32.119 98.079 3.054 123 65.207 99.459 1.525 193 87.423 99.832 1.142
54 32.119 98.079 3.054 124 65.795 99.285 1.509 194 87.775 99.766 1.137
55 30.059 98.489 3.277 125 65.795 99.285 1.509 195 88.099 99.666 1.131
56 33.911 98.44 2.903 126 66.441 99.513 1.498 196 88.099 99.666 1.131
57 33.911 98.44 2.903 127 66.941 99.517 1.487 197 88.481 99.801 1.128
58 34.558 98.469 2.849 128 66.941 99.517 1.487 198 88.804 99.735 1.123
59 35.292 98.751 2.798 129 67.382 99.477 1.476 199 89.039 99.736 1.120
60 36.086 98.616 2.733 130 67.852 99.524 1.467 200 89.039 99.736 1.120
61 36.086 98.616 2.733 131 68.44 99.614 1.455 201 89.362 99.704 1.116
62 36.938 98.329 2.662 132 68.44 99.614 1.455 202 89.686 99.803 1.113
63 37.702 98.051 2.601 133 69.486 99.49 1.432 203 89.686 99.803 1.113
64 37.702 98.051 2.601 134 69.556 99.493 1.430 204 90.009 99.739 1.108
65 38.29 98.388 2.570 135 69.909 99.496 1.423 205 90.332 99.707 1.104
66 39.024 98.494 2.524 136 69.906 99.496 1.423 206 90.538 99.838 1.103
67 39.788 98.523 2.476 137 70.291 99.373 1.414 207 90.538 99.838 1.103
68 39.788 98.523 2.476 138 70.79 99.377 1.404 208 90.831 99.871 1.100
69 40.406 98.545 2.439 139 70.79 99.377 1.404 209 91.155 99.742 1.094
70 41.17 98.787 2.399 140 71.378 99.506 1.394
Fig. 43
Fig. 44
Fig. 45
Fig. 46
Fig. 47
The obtained results may be considered of valuable interest since they indicate possible new properties
for Mass Number of atomic nuclei.
At increasing values of Radius R, % Rec and % Det increase, as it is trivially expected in some general
case, but the interesting new thing is that, after some regular increasing values of %Rec and %Det,
occurring every two or three step, soon after the values of RQA variables reach values of stability that so
remain for two steps in the increasing values of R. In other terms, in presence of increasing R, we have
corresponding increasing values of % Rec, %Det, Entropy, followed by a phase in which, still for
increasing R, the values of RQA variables remain instead constant.
This is certainly a new mechanism of increasing mass of atomic nuclei that deserves to be carefully
explained.
Table 4
d=2 Z N Z N ∆∆∆∆ Z ∆∆∆∆ N d=3 Z N Z N ∆∆∆∆ Z ∆∆∆∆ N d=4 Z N Z N ∆∆∆∆ Z ∆∆∆∆ N
3 4 4 5 1 1 1 0 2 2 1 2 3 4 5 6 2 2
4 5 5 6 1 1 2 2 3 4 1 2 6 6 8 8 2 2
6 6 7 7 1 1 4 5 6 6 2 1 8 8 10 10 2 2
7 7 8 8 1 1 5 6 7 7 2 1 9 10 11 12 2 2
26 30 28 30 2 0 8 8 9 10 1 2 10 10 12 12 2 2
38 50 40 50 2 0 10 10 11 12 1 2 11 12 13 14 2 2
52 78 54 78 2 0 12 12 13 14 1 2 12 12 14 14 2 2
56 82 58 82 2 0 14 14 15 16 1 2 13 14 15 16 2 2
57 82 59 82 2 0 16 16 17 18 1 2 14 14 16 16 2 2
66 98 68 98 2 0 21 24 22 26 1 2 15 16 17 18 2 2
77 116 78 117 1 1 22 26 23 28 1 2 17 18 19 20 2 2
78 117 79 118 1 1 24 28 25 30 1 2 22 26 24 28 2 2
25 30 28 30 3 0 23 28 25 30 2 2
26 30 27 32 1 2 24 28 26 30 2 2
37 48 38 50 1 2 25 30 27 32 2 2
40 50 41 52 1 2 27 32 29 34 2 2
43 56 44 58 1 2 33 42 35 44 2 2
45 58 46 60 1 2 34 46 36 48 2 2
52 78 55 78 3 0 36 48 38 50 2 2
56 82 59 82 3 0 37 48 39 50 2 2
59 82 60 84 1 2 39 50 41 52 2 2
68 98 69 100 1 2 42 56 44 58 2 2
73 108 74 110 1 2 43 56 45 58 2 2
74 110 75 112 1 2 44 58 46 60 2 2
76 116 78 117 2 1 45 58 47 60 2 2
80 122 81 124 1 2 58 82 60 84 2 2
81 124 82 126 1 2 59 82 61 84 2 2
67 98 69 100 2 2
72 108 74 110 2 2
77 116 79 118 2 2
81 124 83 126 2 2
d=6 Z N Z N ∆∆∆∆ Z ∆∆∆∆ N d=7 Z N Z N ∆∆∆∆ Z ∆∆∆∆ N d=8 Z N Z N ∆∆∆∆ Z ∆∆∆∆ N
1 0 3 4 2 4 2 2 5 6 3 4 1 0 4 5 3 5
7 7 10 10 3 3 3 4 7 7 4 3 2 2 6 6 4 4
19 20 21 24 2 4 4 5 8 8 4 3 5 6 9 10 4 4
21 24 23 28 2 4 6 6 9 10 3 4 6 6 10 10 4 4
24 28 28 30 4 2 8 8 11 12 3 4 8 8 12 12 4 4
28 30 30 34 2 4 10 10 13 14 3 4 9 10 13 14 4 4
29 34 31 38 2 4 12 12 15 16 3 4 10 10 14 14 4 4
31 38 33 42 2 4 14 14 17 18 3 4 11 12 15 16 4 4
32 42 34 46 2 4 16 16 19 20 3 4 12 12 16 16 4 4
35 44 37 48 2 4 21 24 24 28 3 4 13 14 17 18 4 4
36 48 40 50 4 2 22 26 25 30 3 4 15 16 19 20 4 4
41 52 43 56 2 4 23 28 28 30 5 2 16 16 18 22 2 6
48 66 50 70 2 4 24 28 27 32 3 4 16 16 20 20 4 4
49 66 51 70 2 4 26 30 29 34 3 4 18 22 22 26 4 4
51 70 53 74 2 4 43 56 46 60 3 4 20 20 22 26 2 6
53 74 55 78 2 4 47 60 48 66 1 6 22 26 26 30 4 4
54 78 56 82 2 4 48 66 51 70 3 4 23 28 27 32 4 4
55 78 57 82 2 4 50 70 53 74 3 4 25 30 29 34 4 4
56 82 60 84 4 2 54 78 57 82 3 4 26 30 30 34 4 4
57 82 61 84 4 2 55 78 58 82 3 4 34 46 38 50 4 4
62 90 64 94 2 4 56 82 61 84 5 2 37 48 41 52 4 4
63 90 65 94 2 4 61 84 62 90 1 6 40 50 42 56 2 6
64 94 66 98 2 4 62 90 65 94 3 4 42 56 46 60 4 4
65 94 67 98 2 4 64 94 67 98 3 4 43 56 47 60 4 4
69 100 71 104 2 4 65 94 68 98 3 4 46 60 48 66 2 6
70 104 72 108 2 4 70 104 73 108 3 4 47 60 49 66 2 6
71 104 73 108 2 4 72 108 75 112 3 4 52 78 56 82 4 4
73 108 75 112 2 4 78 117 80 122 2 5 54 78 58 82 4 4
75 112 77 116 2 4 80 122 83 126 3 4 55 78 59 82 4 4
80 122 82 126 2 4 60 84 62 90 2 6
61 84 63 90 2 6
64 94 68 98 4 4
68 98 70 104 2 6
74 110 76 116 2 6
75 112 78 117 3 5
79 118 81 124 2 6
d=9 Z N Z N ∆∆∆∆ Z ∆∆∆∆ N d=10 Z N Z N ∆∆∆∆ Z ∆∆∆∆ N d=11 Z N Z N ∆∆∆∆ Z ∆∆∆∆ N
3 4 8 8 5 4 1 0 5 6 4 6 1 0 6 6 5 6
5 6 10 10 5 4 2 2 7 7 5 5 4 5 10 10 6 5
7 7 11 12 4 5 4 5 9 10 5 5 6 6 11 12 5 6
9 10 14 14 5 4 7 7 12 12 5 5 8 8 13 14 5 6
11 12 16 16 5 4 17 18 21 24 4 6 10 10 15 16 5 6
15 16 18 22 3 6 21 24 25 30 4 6 12 12 17 18 5 6
15 16 20 20 5 4 22 26 28 30 6 4 14 14 19 20 5 6
19 20 22 26 3 6 27 32 31 38 4 6 18 22 23 28 5 6
25 30 30 34 5 4 30 34 32 42 2 8 20 20 23 28 3 8
33 42 36 48 3 6 31 38 35 44 4 6 21 24 26 30 5 6
34 46 39 50 5 4 32 42 36 48 4 6 22 26 27 32 5 6
35 44 38 50 3 6 33 42 37 48 4 6 24 28 29 34 5 6
36 48 41 52 5 4 34 46 40 50 6 4 28 30 31 38 3 8
39 50 42 56 3 6 35 44 39 50 4 6 29 34 32 42 3 8
40 50 43 56 3 6 38 50 42 56 4 6 30 34 33 42 3 8
41 52 44 58 3 6 39 50 43 56 4 6 31 38 34 46 3 8
42 56 47 60 5 4 41 52 45 58 4 6 32 42 37 48 5 6
46 60 49 66 3 6 50 70 52 78 2 8 35 44 40 50 5 6
51 70 52 78 1 8 52 78 58 82 6 4 38 50 43 56 5 6
52 78 57 82 5 4 65 94 69 100 4 6 45 58 48 66 3 8
54 78 59 82 5 4 66 98 70 104 4 6 51 70 54 78 3 8
60 84 63 90 3 6 67 98 71 104 4 6 52 78 59 82 7 4
67 98 70 104 3 6 70 104 74 110 4 6 53 74 56 82 3 8
68 98 71 104 3 6 75 112 79 118 4 6 55 78 60 84 5 6
71 104 74 110 3 6 76 116 80 122 4 6 59 82 62 90 3 8
74 110 77 116 3 6 78 117 81 124 3 7 63 90 66 98 3 8
77 116 80 122 3 6 64 94 69 100 5 6
66 98 71 104 5 6
69 100 72 108 3 8
73 108 76 116 3 8
74 110 78 117 4 7
79 118 82 126 3 8
d=13 Z N Z N ∆∆∆∆ Z ∆∆∆∆ N d=14 Z N Z N ∆∆∆∆ Z ∆∆∆∆ N d=15 Z N Z N ∆∆∆∆ Z ∆∆∆∆ N
1 0 7 7 6 7 4 5 11 12 7 7 1 0 8 8 7 8
3 4 10 10 7 6 7 7 14 14 7 7 2 2 9 10 7 8
5 6 12 12 7 6 15 16 21 24 6 8 4 5 12 12 8 7
7 7 13 14 6 7 21 24 27 32 6 8 6 6 13 14 7 8
9 10 16 16 7 6 25 30 31 38 6 8 8 8 15 16 7 8
13 14 18 22 5 8 32 42 38 50 6 8 10 10 17 18 7 8
13 14 20 20 7 6 33 42 39 50 6 8 12 12 19 20 7 8
16 16 21 24 5 8 35 44 41 52 6 8 18 22 25 30 7 8
17 18 22 26 5 8 36 48 42 56 6 8 20 20 25 30 5 10
19 20 24 28 5 8 37 48 43 56 6 8 22 26 29 34 7 8
21 24 28 30 7 6 38 50 44 58 6 8 27 32 32 42 5 10
23 28 30 34 7 6 39 50 45 58 6 8 30 34 35 44 5 10
26 30 31 38 5 8 41 52 47 60 6 8 31 38 36 48 5 10
33 42 38 50 5 8 46 60 50 70 4 10 32 42 39 50 7 8
34 46 41 52 7 6 47 60 51 70 4 10 33 42 40 50 7 8
37 48 42 56 5 8 52 78 60 84 8 6 36 48 43 56 7 8
39 50 44 58 5 8 53 74 59 82 6 8 38 50 45 58 7 8
40 50 45 58 5 8 56 82 62 90 6 8 43 56 48 66 5 10
41 52 46 60 5 8 57 82 63 90 6 8 46 60 51 70 5 10
44 58 49 66 5 8 60 84 64 94 4 10 49 66 52 78 3 12
47 60 50 70 3 10 61 84 65 94 4 10 52 78 61 84 9 6
48 66 53 74 5 8 62 90 68 98 6 8 56 82 63 90 7 8
50 70 55 78 5 8 68 98 72 108 4 10 60 84 65 94 5 10
53 74 58 82 5 8 73 108 78 117 5 9 65 94 70 104 5 10
54 78 61 84 7 6 78 117 83 126 5 9 67 98 72 108 5 10
57 82 62 90 5 8 68 98 73 108 5 10
58 82 63 90 5 8 69 100 74 110 5 10
61 84 64 94 3 10 72 108 78 117 6 9
62 90 67 98 5 8 75 112 80 122 5 10
63 90 68 98 5 8 77 116 82 126 5 10
70 104 75 112 5 8
72 108 77 116 5 8
74 110 79 118 5 8
76 116 81 124 5 8
78 117 82 126 4 9
In Table 4 we give the scheme of increasing R corresponding to A∆ and the corresponding variations in
the number of nucleons as they are induced step by step. Obviously this table 4 cannot be complete.
However, the exposition of the process, also limited to few cases of interest, will contribute to elucidate
the mechanism under consideration. In brief for 2=∆A we have oscillation in the values of RQA
variables but they soon after return to be stable for 3=∆A and 4=∆A . After we pass to 6=∆A where again
RQA variables are unstable but they return to be stable for 7=∆A and 8=∆A . The next step is 9=∆A with
instability, followed from stable values for 10=∆A and 11=∆A . We may continue with 13=∆A that is
unstable but followed from stable 14=∆A and 15=∆A .The same thing happens for
...........180.............119........80........38343027232016 ororororororororororororA =∆ . To each given
unstable A∆ value , will correspond two subsequent stable values that respectively will be given at
17=∆A and 18=∆A ; at 21=∆A and 22=∆A , ……….. at 120=∆A and 121=∆A , ….. at 181=∆A and
182=∆A .
Instabilities are present every three or four increasing values of A∆ . Systematically, each of them is
followed by stable values at the two subsequent increments of A∆ .
In conclusion the law seems as it follows: for each pair of nuclei , fixed the value of A∆ with unstable
value of the RQA variables, the addition of one nucleon by two subsequent steps stabilizes the values of
such variables. Obviously, for each selected value of A∆ we have a class of pair of nuclei as indicated as
example in Table 4.
In conclusion, the use of RQA variables has cleared that we are in presence of new features for atomic
nuclei that deserve to be properly explained. We intend to say that the next step of the present research
should be now to link the different results that have been obtained with concrete evidences expressible in
terms of basic concepts of nuclear physics. If on one hand some of such new findings are just evident by
itself on the other hand we cannot ignore that in this paper we have moved more on the line of the
notions as they are contained in the methods that we have used. More concretely : referring as example to
the basic results that we have obtained by using RQA, and, in particular, to the last results as given by
using embedding dimension D=1 and reported in Tables 3 and 4 and in Figures 43-47, we cannot ignore
that we have to consider now pairs of nuclei with given A∆ and thus to identify pairs of subsequent stable
nuclei and, following this way, to find some new regularities in Z, N and to give new classifications of
nuclei to different groups using such regularities. In short, the results that we have obtained should reveal
new regularities about ground states of nuclei not found so easily by other methods. Consequently, this
new approach might be very useful and important. The aim is to pursue such research work in our future
investigations.
8. Conclusions
In the present paper we have introduced a preliminary but complete analysis of Atomic Weights and Mass
Number using the methods of non linear analysis.
We have obtained some results that appear to be of some interest in understanding the basic foundations
of nuclear matter. As methodology, we have applied the tests of autocorrelation function and of Mutual
Information. We have also provided to a reconstruction of the experimental data in phase space giving
results on Lyapunov spectrum and Correlation Dimension. We have performed an analysis to establish
the presence of a power law in data on Atomic Weights and Mass Number and such kind of analysis has
been completed by using the technique of the variogram. The results seem to confirm the presence of a
fractal regime in the process of increasing mass of atomic nuclei. The estimation of Husrt exponent has
enabled us to indicate that we would be in presence of a fractional Brownian regime with long range
correlations.
To summarize: Some preliminary results have been obtained. The mechanism of increasing mass in
atomic nuclei reveals itself to be a nonlinear mechanism marked by a non integer value of Correlation
dimension in phase space reconstruction. The presence of positive Lyapunov exponents indicate that the
system of mass increasing is divergent and thus possibly chaotic. By using an identified Power Law and
the variogram technique we may conclude that we are in presence of a fractal regime, a fractional
Brownian regime.
The most relevant results have been obtained by using RQA. The process under our investigation results
to be not fully deterministic when considering an embedding dimension D=2. We are in presence of self-
resemblance and pseudo periodicities that show small fluctuations at increasing value of shiftZ − while
instead Determinism shows consistent variations at increasing values of such parameter. Also Entropy
and Max Line reveal the same tendency. Therefore, in the same framework of stable nuclei we have phase
of increasing stability or increasing instability, depending on the mechanism of composition of the
considered atomic nuclei and on the differences that they exhibit in the values of their Atomic Weights
and of Mass Number. A final important result is obtained by using RQA in phase space reconstruction
using embedding dimension D=1 and increasing Radius R corresponding to net differences in Mass
number of the considered atomic nuclei. In this case, in phase space reconstruction, RQA involves pairs
of nuclei in our analysis. New properties are identified at the increasing values of A∆ . In particular,
determinism oscillates but at some regular distances it also shows definite constant values as well as the
other RQA variables . This confirms that we are in presence of a mechanism of increasing mass of atomic
nuclei in which phases of stability result subsequent to phases of instability possibly marked from
conditions of order-disorder like transitions. We have to consider pairs of nuclei with fixed A∆ and to
identify pairs of subsequent stable nuclei that indicate new regularities in Z, N that we need to indicate in
detail . We have to classify nuclei pertaining to different groups using these new regularities. This
approach might be of valuable interest and it will constitute the object of our future work.
In this framework, the next step of the present investigation will be also to analyze data corresponding to
values of binding energies for atomic nuclei. Possibly the complex of such results will give the possibility
to indicate new perspectives in the elaboration of more accurate nuclear models of nuclear matter.
Acknowledgement
Many thanks are due to M. Pitkanen for his continuous and stimulating interest, suggestions and
encouragement through this work.
Software NDT by J. Reiss and VRA by E. Kononov were also used for general non linear analysis.
REFERENCES
[1] C.F. von Weizsacher, Z. Phys., 96, 431-458, 1935
[2] P. Leboeuf, Regularity and Chaos in the nuclear masses, arXiv:nucl-th/0406064; see also H.Olofsson,
S. Alberg, O. Bohigas, P. Leboeuf, Correlations in Nuclear Masses, arXiv:nucl-th (0602041 v1 13
Feb.2006 and O. Bohigas, P. Leboeuf, Nuclear Masses: evidence of order-chaos coexistence,
arXiv:nucl-th/0110025v2 28 Nov. 2001 and references therein.
[3] A. Bohr and B.R. Mottelson , Nuclear Structure vol. I, Benjamin Reading ,1969.
[4] V.M. Strutinsky, Nucl. Phys. A95, 420-442,1967 ).
[5]V. Paar, N. Parvin, A. Rubcic, J. Rubcic, Chaos, Solitons and Fractals, 14, 901-916, 2002
[6]H. Kroger, Fractal geometry in quantum mechanics, Phys. Rep.323, 81-181, 2000.
[7] M. Pitkanen, TGD and Nuclear Physics in book p-adic length scale hypothesis and dark matter
hierarchy,www.helsinki.fi/∼matpitka/paddark/paddark.html≠padnucl.
[8]G.A. Lalazissis, D. Vrtenar, N. Paar, P. Ring, Chaos, Solitons and Fractals,17, 585-590, 2003.
[9]M.A.Azar . K. Gopala, Phys.Rev.A39, 5311-5318, 1989 and Phys. Rev. A37, 2173-2180, 1988.
[10] A.M. Fraser, H.L. Swinney, Independent Coordinates for strange attractors from mutual Information,
Phys. Rev. A33,1134-1137, 1986
[11] For details see as example: BB. Mandelbrot et al. SIAM Review,10, 422-437,10,1968; SHEN Wei,
Zhao Pengda, Multidimensional self-affine distribution with application in geochemistry, Math.
Geology, 34, 2,109-123, 2002, E. Conte, J.P Zbilut et al., Chaos, Solitons and Fractals, 29, 701-730,
2006;
[12] J. Feder, Fractals, Plenum, New York,1988.
[13] C.L Webber. Jr, J.P. Zbilut, Dynamical Assessment of physiological systems and states using
recurrence plot strategies, J. Appl. Physiol. 76, 965-973, 1994. The package of RQA software may be
free downloaded at http://homepages.luc.edu/∼CWebber/.
|
0704.0905 | Dielectronic Recombination of Fe XV forming Fe XIV: Laboratory
Measurements and Theoretical Calculations | Dielectronic Recombination of Fe XV forming Fe XIV: Laboratory
Measurements and Theoretical Calculations
D. V. Lukić1, M. Schnell2, and D. W. Savin
Columbia Astrophysics Laboratory, Columbia University,
New York, NY 10027, USA
[email protected]
C. Brandau3, E. W. Schmidt, S. Böhm4, A. Müller, and S. Schippers
Institut für Atom- und Molekülphysik, Justus-Liebig-Universität, D-35392 Giessen,
Germany
M. Lestinsky, F. Sprenger, and A. Wolf
Max-Planck-Institut für Kernphysik, D-69117 Heidelberg, Germany
Z. Altun
Department of Physics, Marmara University, Istanbul 81040, Turkey
N. R. Badnell
Department of Physics, University of Strathclyde, G4 0NG Scotland, UK
ABSTRACT
We have measured resonance strengths and energies for dielectronic recombi-
nation (DR) of Mg-like Fe XV forming Al-like Fe XIV via N = 3 → N ′ = 3 core
excitations in the electron-ion collision energy range 0–45 eV. All measurements
were carried out using the heavy-ion Test Storage Ring at the Max Planck Insti-
tute for Nuclear Physics in Heidelberg, Germany. We have also carried out new
1On leave from the Institute of Physics, 10001 Belgrade, Serbia
2Present address, Carl Zeiss NTS GmbH, Oberkochen D-73447, Germany
3Present address, Gesellschaft für Schwerionenforschung (GSI), Darmstadt, D-64291, Germany
4Present address, Department of Atomic Physics, Stockholm University, S-106 91 Stockholm, Sweden
http://arxiv.org/abs/0704.0905v1
multiconfiguration Breit-Pauli (MCBP) calculations using the AUTOSTRUC-
TURE code. For electron-ion collision energies . 25 eV we find poor agreement
between our experimental and theoretical resonance energies and strengths. From
25 to 42 eV we find good agreement between the two for resonance energies.
But in this energy range the theoretical resonance strengths are ≈ 31% larger
than the experimental results. This is larger than our estimated total experi-
mental uncertainty in this energy range of ±26% (at a 90% confidence level).
Above 42 eV the difference in the shape between the calculated and measured
3s3p(1P1)nl DR series limit we attribute partly to the nl dependence of the de-
tection probabilities of high Rydberg states in the experiment. We have used our
measurements, supplemented by our AUTOSTRUCTURE calculations, to pro-
duce a Maxwellian-averaged 3 → 3 DR rate coefficient for Fe XV forming Fe XIV.
The resulting rate coefficient is estimated to be accurate to better than ±29% (at
a 90% confidence level) for kBTe ≥ 1 eV. At temperatures of kBTe ≈ 2.5− 15 eV,
where Fe XV is predicted to form in photoionized plasmas, significant discrepan-
cies are found between our experimentally-derived rate coefficient and previously
published theoretical results. Our new MCBP plasma rate coefficient is 19−28%
smaller than our experimental results over this temperature range.
Subject headings: atomic data – atomic processes – plasmas – galaxies: active –
galaxies: nuclei – X-rays: galaxies
1. Introduction
Recent Chandra and XMM Newton X-ray observations of active galactic nuclei (AGNs)
have detected a new absorption feature in the 15-17 Å wavelength range. This has been iden-
tified as an unresolved transition array (UTA) due mainly to 2p− 3d inner shell absorption
in iron ions with an open M-shell (Fe I - Fe XVI). UTAs have been observed in IRAS
13349+2438 (Sako et al. 2001), Mrk-509 (Pounds et al. 2001), NGC 3783 (Blustin et al.
2002; Kaspi et al. 2002; Behar et al. 2003), NGC 5548 (Steenbrugge et al. 2003), MR 2251-
178 (Kaspi et al. 2004), I Zw 1 (Gallo et al. 2004), NGC 4051 (Pounds et al. 2004), and
NGC 985 (Krongold et al. 2005).
Based on atomic structure calculations and photoabsorbtion modeling, Behar et al.
(2001) have shown that the shape, central wavelength, and equivalent width of the UTA
can be used to diagnose the properties of AGN warm absorbers. However, models which fit
well absorption features from second and third row elements cannot reproduce correctly the
observed UTAs due to the fourth row element iron. The models appear to predict too high
an ionization level for iron. Netzer et al. (2003) attributed this discrepancy to an underesti-
mate of the low temperature dielectronic recombination (DR) rate coefficients for Fe M-shell
ions. To investigate this possibility Netzer (2004) and Kraemer et al. (2004) arbitrarily in-
creased the low temperature Fe M-shell DR rate coefficients. Their model results obtained
with the modified DR rate coefficients support the hypothesis of Netzer et al. (2003). New
calculations by Badnell (2006a) using a state-of-the-art theoretical method disscused in § 5
further support the hypotesis of Netzer et al. (2003).
Astrophysical models currently use the DR data for Fe M-shell ions recommended
by Arnaud & Raymond (1992). These data are based on theoretical DR calculations by
Jacobs et al. (1977) and Hahn (1989). The emphasis of this early theoretical work was
on producing data for modeling collisional ionization equilibrium (sometimes also called
coronal equilibrium). Under these conditions an ion forms at a temperature about an
order of magnitude higher than the temperature where it forms in photoionized plasmas
(Kallman & Bautista 2001). The use of the Arnaud & Raymond (1992) recommended DR
data for modeling photoionized plasmas is thus questionable. Benchmarking by experiment
is highly desirable.
Reliable experimentally-derived low temperature DR rate coefficients of M-shell iron
ions are just now becoming available. Until recently, the only published Fe M-shell DR mea-
surements were for Na-like Fe XVI (Linkemann et al. 1995; Müller 1999; here and throughout
we use the convention of identifying the recombination process by the initial charge state
of the ion). The Na-like measurements were followed up with modern theoretical calcula-
tions (Gorczyca & Badnell 1996; Gu 2004; Altun et al. 2007). Additional M-shell experi-
mental work also exists for Na-like Ni XVIII (Fogle et al. 2003) and Ar-like Sc IV and Ti V
(Schippers et al. 1998, 2002). We have undertaken to measure low temperature DR for other
Fe M-shell ions. Our results for Al-like Fe XIV are presented in Schmidt et al. (2006) and
Badnell (2006b). The present paper is a continuation of this research.
DR is a two-step recombination process that begins when a free electron approaches
an ion, collisionally excites a bound electron of the ion and is simultaneously captured into
a Rydberg level n. The electron excitation can be labeled Nlj → N
′l′j′ where N is the
principal quantum number of the core electron, l its orbital angular momentum, and j its
total angular momentum. The intermediate state, formed by simultaneous excitation and
capture, may autoionize. The DR process is complete when the intermediate state emits a
photon which reduces the total energy of the recombined ion to below its ionization limit.
In this paper we present experimental and theoretical results for ∆N=N ′ −N = 0 DR
of Mg-like Fe XV forming Al-like Fe XIV. In specific we have studied 3 → 3 DR via the
resonances:
Fe14+(3s2[1S0]) + e
Fe13+(3s3p[3P o
0,1,2;
1P1]nl)
Fe13+(3s3d[3D1,2,3;
1D2]nl)
Fe13+(3p2[3P0,1,2;
1S0]nl)
Fe13+(3p3d[3Do
1,2,3;
2,3,4;
0,1,2;
; 1Do
; 1F o
Fe13+(3d2[3P0,1,2;
3F2,3,4;
1S0]nl)
Possible contributions due to 3s3p 3P metastable parent ions will be discussed below. Table 1
lists the excitation energies for the relevant Fe XV levels, relative to the ground state, that
have been considered in our theoretical calculations. In our studies we have carried out
measurements for electron-ion center-of-mass collision energies Ecm between 0 and 45 eV.
Our work is motivated by the “formation zone” of Fe M-shell ions in photoionized gas.
This zone may be defined as the temperature range where the fractional abundance of a given
ion is greater than 10% of its peak value (Schippers et al. 2004). We adopt this definition
for this paper. Savin et al. (1997, 1999, 2002a,b, 2006) defined this zone as the temperature
range where the fractional abundance is greater than 10% of the total elemental abundance.
This is narrower than the Schippers et al. (2004) definition. For Fe XV the wider definition
corresponds to a kBTe ≈ 2.5-15 eV (Kallman & Bautista 2001). It should be kept in mind
that this temperature range depends on the accuracy of the underlying atomic data used to
calculate the ionization balance.
The paper is organized as follows: The experimental arrangement for our measure-
ments is described in § 2. Possible contamination of our parent ion beam by metastable
ions is discussed in § 3. Our laboratory results are presented in § 4. In this section the
experimentally-derived DR rate coefficient for a Maxwellian plasma is provided as well.
Theoretical calculations which have been carried out for comparison with our experimental
results are discussed in § 5. Comparison between the experimental and theoretical results is
presented in § 6. A summary of our results is given in § 7.
2. Experimental Technique
DR measurements were carried out at the heavy-ion test storage ring (TSR) of the
Max-Planck Institute for Nuclear Physics (MPI-K) in Heidelberg, Germany. A merged
beams technique was used. A beam of 56Fe14+ with an energy of 156 MeV was provided
by the MPI-K accelerator facility. Ions were injected into the ring and their energy spread
reduced using electron cooling (Kilgus et al. 1990). Typical waiting times after injection
and before measurement were ≈ 1 s. Mean stored ion currents were ≈ 10 µA. Details of
the experimental setup have been given elsewhere (Kilgus et al. 1992; Lampert et al. 1996;
Schippers et al. 1998, 2000, 2001).
Recently a second electron beam has been installed at the TSR (Sprenger et al. 2004;
Kreckel et al. 2005). This allows one to use the first electron beam for continuous cooling
of the stored ions and to use the second electron beam as a target for the stored ions. In
this way a low velocity and spatial spread of the ions can be maintained throughout the
course of a DR measurement. The combination of an electron cooler and an electron target
can be used to scan energy-dependent electron-ion collision cross sections with exceptional
energy resolution. In comparison to the electron cooler, the electron source and the electron
beam are considerably smaller and additional procedures, such as the stabilization of the
beam positions during energy scans and electron beam profile measurements, are required
to control the absolute luminosity product between the ion and electron beam on the same
precise level as reached at the cooler. The target electron beam current was ≈ 3 mA. The
beam was adiabatically expanded from a diameter of 1.6 mm at the cathode to 7.5 mm
in the interaction region using an expansion factor of 22. This was achieved by lowering
the guiding magnetic field from 1.28 T at the cathode to 0.058 T in the interaction region
thus reducing the transverse temperature to approximately 6 meV. The relative electron-
ion collision energy can be precisely controlled and the recombination signal measured as a
function of this energy. We estimate that the uncertainty of our scale for Ecm is . 0.5%.
The electrons are merged and demerged with the ion beam using toroidal magnets.
After demerging, the primary and recombined ion beams pass through two correction dipole
magnets and continue into a bending dipole magnet. Recombined ions are bent less strongly
than the primary ion beam and they are directed onto a particle detector used in single
particle counting mode. Some of the recombined ions can be field-ionized by motional
electric fields between the electron target and the detector and thus are not detected. Here
we assumed a sharp field ionization cutoff and estimated for Fe XV that only electrons
captured into nmax . 80 are detected by our experimental arrangement.
The experimental energy distribution can be described as a flattened Maxwellian dis-
tribution. It is characterized by the transversal and longitudinal temperatures T⊥ and T‖,
respectively. The experimental energy spread depends on the electron-ion collision energy
and can be approximated according to the formula ∆E = ([ln(2)kBT⊥]
2+16 ln(2)EcmkBT‖)
(Pastuszka et al. 1996). For the comparison of our theoretical calculations with our experi-
mental data we convolute the theoretical results described in § 5 with the velocity distribution
function given by Dittner et al. (1986) to simulate the experimental energy spread.
With the new combination of an electron target and an electron cooler we obtain in the
present experiment electron temperatures of kBT⊥ ≈ 6 meV and kBT‖ ≈ 0.05 meV. In order
to verify the absolute calibration of the absolute rate coefficient scale we also performed
a measurement with the electron cooler using the previous standard method (Kilgus et al.
1992, Lampert et al. 1996). We find consistent rate coefficients and spectral shapes, while
the electron temperatures were larger by a factor of about 2 with the electron cooler alone.
Moreover, because of the large density of resonances found in certain regions of the Fe XV
DR spectrum the determination of the background level for the DR signal was considerably
more reliable in the higher resolution electron target data than in the lower resolution cooler
data. Hence, we performed the detailed analysis presented below on the electron target data
only.
Details of the experimental and data reduction procedure are given in Schippers et al.
(2001, 2004) and Savin et al. (2003) and reference therein. The baseline experimental un-
certainty (systematic and statistical) of the DR measurements is estimated to be ±25% at
a 90% confidence level (Lampert et al. 1996). The major sources of uncertainties include
the electron beam density determination, ion current measurements, and corrections for the
merging and demerging of the two beams. Additional uncertainties discussed below result in
a higher total experimental uncertainty as is explained in §§ 3 and 4. Unless stated otherwise
all uncertainties in this paper are cited at an estimated 90% confidence level.
3. Metastable Ions
For Mg-like ions with zero nuclear spin (such as 56Fe), the 1s22s22p63s3p 3P0 level is
forbidden to decay to the ground state via a one-photon transition and the multiphoton
transition rate is negligible. Hence this level can be considered as having a nearly infinite
lifetime (Marques, Parente, & Indelicato 1993; Brage et al. 1998). It is possible that these
metastables are present in the ion beam used for the present measurements.
We estimate that the largest possible metastable 3P0 fraction in our stored beam is 11%.
This assumes that 100% of the initial Fe14+ ions are in 3PJ levels and that the levels are
statistically populated. We expect that the J = 1 and 2 levels will radiatively decay to the
ground state during the ∼ 1 s between injection and measurement. The lifetimes of the 3P1
and 3P2 levels are ∼ 1.4 × 10
−10 s (Marques et al. 1993) and ∼ 0.3 s (Brage et al. 1998),
respectively. These decays leave 1/9th or 11% of the stored ions in the 3P0 level.
Our estimate is only slightly higher than the inferred metastable fraction for the ion
beam used for DR measurements of the analogous Be-like Fe22+ (Savin et al. 2006). The
Be-like system has a metastable 1s22s2p 3P0 state and following the above logic the stored
Be-like ion beam had an estimated maximum 11% 3P0 fraction. Fortunately, for the Be-like
measurements we were able to identify DR resonances due to the 3P0 parent ion and use the
ratio of the experimental to theoretical resonance strengths to infer the 3P0 fraction. There
we determined a metastable fraction of 7% ± 2%. A similar fraction was inferred for DR
measurements with Be-like Ti18+ ions (Schippers et al. 2007).
Using theory as a guide, we have searched our Mg-like data fruitlessly for clearly iden-
tifiable DR resonances due to metastable 3P0 parent ions. First, following our work in the
analogous Be-like Fe22+ with its 2s2p 3P0 → 2p
2 core excitation channel (Savin et al. 2006),
we searched for Fe14+ resonances associated with the relevant 3s3p 3P0 → 3p
2 core exci-
tations. However, most of these yield only very small DR cross sections as they strongly
autoionize into the 3s3p 3PJ=1,2 continuum channels. These are energetically open at Ecm
greater than 0.713 eV and 2.468 eV, respectively (Table 1). Hence, above Ecm ≈ 0.713 eV
there are no predicted significant DR resonances for metastable Fe14+ via 3s3p 3P0 → 3p
core excitations. Below this energy the agreement between theory and experiment is ex-
tremely poor (as can be seen in Fig. 1) and we are unable to assign unambiguously any DR
resonance to either the ground state or metastable parent ion. Second, we searched for reso-
nances associated with 3s3p 3P0 → 3s3p
1P1, 3s3p
3P0 → 3s3p
3P1, and 3s3p
3P0 → 3s3p
core excitation which are energetically possible for capture into the n ≥ 14, 62, and 33 levels,
respectively, and which may contribute to the observed resonance structures. The analogous
2s2p 3P0 → 2s2p
1P1 and 2s2p
3P0 → 2s2p
3P2 core excitations were seen for Be-like Ti
(Schippers et al. 2007). However, again the complexity of the Fe XV DR resonance spec-
trum (cf., Fig. 1) prevented unambiguous identification for DR via any of these three core
excitations. Hence despite these two approaches, we have been unable to directly determine
the metastable fraction of our Fe14+ beam.
Clearly our assumption that the 3PJ levels are statistically populated is questionable.
Ion beam generation using beam foil techniques are known to produce excited levels. The
subsequent cascade relaxation could potentially populated the J levels non-statistically
(Martinson & Gaupp 1974; Quinet et al. 1999). Additionally the magnetic sublevels mJ
can be populated non-statistically (Martinson & Gaupp 1974) which may affect the J lev-
els. However, our argument in the above paragraphs that the 3PJ levels are statistically
populated yields 3P0 fractions of the analgous Be-like Ti
18+ and Fe22+ of 11% while our
measurements found metastable fractions of ∼ 7% for those two beams. From this we con-
clude either (a) that if 100% of the initial ions are in the 3PJ levels, then the J = 0 level is
statistically under-populated or (b) that the fraction of initial ions in the 3PJ levels is less
than 100% by a quantity large enough that any non-statistical populating of the various J
levels still yields only a 7% 3P0 metastable fraction of the ion beam. Thus we believe that
our assumption provides a reasonable upper limit to the metastable fraction of the Fe14+
beam.
Based on our estimates above and the Be-like results we have assumed that 6%±6% of
the Fe14+ ions are in the 3s3p 3P0 metastable state and the remaining fraction in the 3s
2 1S0
ground state. Here, we treat this possible 6% systematic error as a stochastic uncertainty
and add it in quadrature with the 25% uncertainty discussed above.
4. Experimental Results
Our measured 3 → 3 DR resonance spectrum for Fe XV is shown in Figs. 1 - 8. The
data 〈σv〉 represent the summed DR and radiative recombination (RR) cross sections times
the relative velocity convolved with the energy spread of the experiment, i.e., a merged beam
recombination rate coefficient (MBRRC).
The strongest DR resonance series corresponds to 3s2 1S0 → 3s3p
1P1 core excitations.
Other observed features in the DR resonance spectrum are possibly due to double core
excitations discussed in § 1. Trielectronic recombination (TR), as this has been named, has
been observed in Be-like ions (Schnell et al. 2003a,b; Fogle et al. 2005). These ions are the
second row analog to third row Mg-like ions. However in our data unambiguous assignment
of possible candidates for the TR resonances could not be made.
Extracted resonance energies Ei and resonance strengths Si for Ecm ≤ 0.95 eV are listed
in Table 2 along with their fitting errors. These data were derived following the method
outlined in Kilgus et al. (1992). Most of these resonances were not seen in any of the
theoretical calculations for either ground state or metastable Fe14+. Hence their parentage
is uncertain. The implications of this are discussed below.
Difficulties in determining the non-resonant background level of the data contributed
an uncertainty to the extracted DR resonance strengths. For the strongest peaks this was
on the order of ≈ 10% for Ecm . 5 eV and ≈ 3% for Ecm & 5 eV. Taking into account the
25% and 6% uncertainties discussed in §§ 2 and 3, respectively, this results in an estimated
total experimental uncertainty for extracted DR resonance strengths of ±28% below ≈ 5 eV
and ±26% above.
Due to the energy spread of the electron beam, resonances below Ecm . kBT⊥ cannot be
resolved from the near 0 eV RR signal. Here this limit corresponds to ≈ 6 meV. But we can
infer the absence of resonances lying below the lowest resolved resonance at 6.74 meV. For
Ecm . kBT‖, a factor of up to ∼ 2− 3 enhanced MBRRC is observed in merged electron-ion
beam experiments (see e.g., Gwinner et al. 2000; Heerlein et al. 2002). Here this temperature
limit corresponds to Ecm . 0.05 meV. As shown in Fig. 9, at an energy 0.005 meV our
MBRRC is a factor of 2.5 times larger than the fit to our data using the RR cross section
from semi-classical RR theory with quantum mechanical corrections (Schippers et al. 2001)
and the extracted DR resonance strengths and energies. This enhancement is comparable
to that found for systems with no unresolved DR resonances near 0 eV (e.g., Savin et
al. 2003 and Schippers at al. 2004). Hence, we infer that there are no additional significant
unresolved DR resonances below 6.74 meV. Recent possible explanations for the cause of the
enhancement near 0 eV have been given by Hörndl et al. (2005, 2006) and reference therein.
We have generated an experimentally-derived rate coefficient for 3 → 3 DR of Fe XV
forming Fe XIV in a plasma with a Maxwellian electron energy distribution (Fig. 10). For
Ecm ≤ 0.95 eV we have used our extracted resonance strengths listed in Table 2. For
energies Ecm ≥ 0.95 eV we have numerically integrated our MBRRC data after subtracting
out the non-resonant background. The rate coefficient was calculated using the methodology
outlined in Savin (1999) for resonance strengths and in Schippers et al. (2001) for numerical
integration.
In the present experiment only DR involving capture into Rydberg levels with quantum
numbers nmax . 80 contribute to the measured MBRRC. In order to generate a total ∆N=0
plasma rate coefficient we have used AUTOSTRUCTURE calculations (see § 5) to account
for DR into higher n levels. As is discussed in more detail in § 6, between 25-42 eV we find
good agreement between the experimental and AUTOSTRUCTURE resonance energies.
However, the theoretical results lie a factor of 1.31 above the measurement. To account
for DR into n ≥ nmax = 80, above 42 eV we replaced the experimental data with the
AUTOSTRUCTURE results (nmax =1000) reduced by a factor of 1.31. Our resulting rate
coefficient is shown in Fig. 10.
Including the DR contribution due to capture into n > 80 increases our experimentally-
derived DR plasma rate coefficient by < 1% for kBTe < 7 eV, by < 2.5% at 10 eV and by
< 7% at 15 eV. This contribution increases to 20% at 40 eV, rises to 27% at 100 eV and
saturates at ≈ 35% at 1000 eV. Thus we see that accounting for DR into n > nmax = 80
levels has only a small effect at temperatures of kBTe ≈ 2.5-15 eV where Fe XV is predicted
to form in photoionized gas (Kallman & Bautista 2001). Also, any uncertainties in this
theoretical addition, even if relatively large, would still have a rather small effect at these
temperatures on our derived DR total rate coefficient. Hence, we have not included this in
our determination below of the total experimental uncertainty for the experimentally-derived
plasma rate coefficient at kBTe ≥ 1 eV.
The two lowest-energy resonances in the experimental spectrum occur at energies of
6.74 meV and 9.80 meV with resonance strengths of 1.89 × 10−16 cm2 eV and 1.01 ×
10−17 cm2 eV, respectively (see Table 1 and Fig. 9). As already mentioned, the parent-
age for the two lowest energy resonances is uncertain. These resonances dominate the DR
rate coefficient for kBTe < 0.24 eV. The contribution is 50% at 0.24 eV, 16% at 0.5 eV,
6.5% at 1 eV, 2.4% at 2.5 eV, and < 0.31% above 15 eV. At temperatures where Fe XV is
predicted to form in photoionized plasmas, contributions due to these two resonances are
insignificant. Because of this, we do not include the effects of these two resonances when
calculating below the total experimental uncertainty for the experimentally-derived plasma
rate coefficient at kBTe ≥ 1 eV.
An additional source of uncertainty in our results is due to possible contamination of
the Fe XV beam by metastable 3P0 ions. Because we cannot unambiguously identify DR
resonances due to metastable parent ions, we cannot directly subtract out any contributions
they may make to our experimentally-derived rate coefficient. Instead we have used our
AUTOSTRUCTURE calculations for the metastable parent ion as a guide, multiplied them
by 0.06 on the basis of the estimated (6 ± 6)% metastable content. We then integrated
them to produce a Maxwellian rate coefficient and compared the results to our experimental
results, leaving out the two lowest measured resonances at 6.74 and 9.80 meV. As discussed
in the paragraph above, these two resonaces were left out because of the uncertainty in their
parentage and their small to insignificant effects above 1 eV. The metastable theoretical
results are 9.5% of this experimentally-derived rate coefficient at kBTe = 1 eV, 4.9% at
2.5 eV, 2.2% at 5 eV, 1% at 10 eV and < 0.77% above 15 eV.
In reality these are probably lower limits for the unsubtracted metastable contributions
to our experimentally-derived rate coefficient. However, these limits appear to be reasonable
estimates even taking into account the uncertainty in the exact value of the contributions
due to metastable ions. For example, if we assume that we have the estimated maximum
metastable fraction of 11%, then our experimentally-derived rate coefficients would have to
reduced by only 9.0% at 2.5 eV, 4.0% at 5 eV, 1.8% at 10 eV, and less than 1.4% above
15 eV. Alternatively, it is likely that theory underestimates the resonance strength for the
metastable parent ions similar to the case for ground state parent ions (cf., Fig. 1). However,
if the metastable fraction is 6% and the resonance contributions are a factor of 2 higher, then
our experimentally-derived rate coefficients would have to reduced by only 9.8% at 2.5 eV,
4.4% at 5 eV, 2.0% at 10 eV, and less than 1.5% above 15 eV. These are small and not very
significant corrections. We consider it extremely unlikely that we have underestimated by a
factor of nearly 2 both the metastable fraction and the metastable resonance contribution.
Thus we expect contamination due to metastable 3P0 ions to have a small to insignificant
effect on our derived rate coefficient at temperatures where Fe XV is predicted to form in
photoinoized gas.
Taking into account the baseline experimental uncertainty of 25%, the metastable frac-
tion uncertainty of 6%, and the nonresonant background uncertainty of 10%/3%, all dis-
cussed above, as well as the uncertainty due to the possible unsubtracted metastable res-
onances, the estimated uncertainty in the absolute magnitude of our total experimentally-
derived Maxwellian rate coefficient ranges between 26% and 29% for kBTe ≥ 1 eV. Here
we conservatively take the total experimental uncertainty to be ±29%. This uncertainty
increases rapidly below 1 eV due to the ambiguity of the parentage for the two lowest energy
resonances and possible resonance contributions from metastable Fe XV which we have not
been able to subtract out.
We have fitted our experimentally-derived rate coefficient plus the theoretical estimate
for capture into n > 80 using the simple fitting formula
αDR(Te) = T
−Ei/kBTe (2)
where ci is the resonance strength for the ith fitting component and Ei the corresponding
energy parameter. Table 3 lists the best-fit values for the fit parameters. All fits to the total
experimentally-derived Maxwellian-averaged DR rate coefficient show deviations of less than
1.5% for the temperature range 0.001 ≤ kBTe ≤ 10000 eV.
In Table 3, the Experiment (I) column gives a detailed set of fitting parameters where
the first 30 values of ci and their corresponding Ei values are for all the resolved resonances
for Ecm ≤ 0.95 eV given in Table 2. The parentage for these resonances are uncertain, though
the majority are most likely due to ground state and not metastable Fe14+. It is our hope that
future theoretical advances will allow one to determine which resonances are due to ground
state ions and which are due to metastables. Listing the resonances as we have will allow
future researchers to readily exclude those resonances which have been determined to be due
to the metastable parent. The remaining 6 fitting parameters yield the rate coefficient due
to all resonances for Ecm between 0.95 and the 3s3p(
1P1)nl series limit at 43.63 eV. In the
Experiment (II) column of Table 3, the first six sets of ci and Ei give the fitting parameters
for the first six resonances. The remaining sets of fit parameters are due to all resonances
between 0.1 eV and the series limit.
5. Theory
The only published theoretical DR rate coefficient for Fe XV which we are aware of is
the work of Jacobs et al. (1977). Using the work of Hahn (1989), Arnaud and Raymond
(1992) modified the results of Jacobs et al. (1977) to take into account contributions from
2p−3d inner-shell transitions. The resulting rate coefficient of Arnaud and Raymond (1992)
is widely used throughout the astrophysics community.
We have carried out new calculations using a state-of-the-art multiconfiguration Breit-
Pauli (MCBP) theoretical method. Details of the MCBP calculations have been reported in
Badnell et al. (2003). Briefly, the AUTOSTRUCTURE code was used to calculate energy
levels as well as radiative and autoionization rates in the intermediate-coupling approxi-
mation. These must be post-processed to obtain the final state level-resolved and total
dielectronic recombination data. The resonances are calculated in the independent process
and isolated resonance approximation (Seaton & Storey 1976).
The ionic thresholds were shifted to known spectroscopic values for the 3 → 3 transi-
tions. Radiative transitions between autoionizing states were accounted for in the calculation.
The DR cross section was approximated by the sum of Lorentzian profiles for all included
resonances. The AUTOSTRUCTURE calculations were performed with explicit n values up
to 80 in order to compare closely with experiment. The resulting MBRRC is presented for
3 → 3 core excitations in Figs. 1-8.
The theoretical 3 → 3 DR plasma rate coefficient was obtained by convolving calculated
DR cross section times the relative electron-ion velocity with a Maxwellian electron energy
distribution. Cross section calculations were carried out up to nmax = 1000. The resulting
Maxwellian plasma rate coefficient is given in Fig. 10.
We have fit our theoretical 3 → 3 MCBP Maxwellian DR rate coefficients using Eq. 2.
The resulting fit parameters are presented in Table 3. The accuracy of the MCBP fit is
better than 0.5% for the temperature range 0.1 ≤ kBTe ≤ 10000 eV. This lower limit
represents the range over which rate coefficient data were calculated. Data are not presented
below (101z2)/11605 eV, which is estimated to be the lower limit of the reliability for the
calculations (Badnell 2007). Here z = 14 and this limit is 0.17 eV.
6. Discussion
6.1. Resonance Structure
As we have already noted, we find poor agreement between our experimental and the-
oretical resonance energies and strengths for electron-ion collision energies below 25 eV.
Theory does not correctly predict the strength of many DR resonances which are seen in
the measurement. A similar extensive degree of disparity between the theoretical and the
measured resonances was also seen in our recent Fe13+ results (Schmidt et al. 2006; Badnell
2006b).
Some of the weaker peaks in our data below 1 eV may be due to the possible presence
of metastable Fe14+ in our beam. But the estimated small metastable contamination seems
unlikely to be able to account in this range for many of the strong resonances which are not
seen in the present theory. Above ≈ 1 eV, we expect no significant DR resonances due to
metastable Fe14+ (as is discussed in § 3).
In the energy range from 1− 25 eV, the differences between experiment and theory are
extensive. The reader can readily see from Figs. 1-8 that theory does not correctly predict
the strength of many resonances which are observed in the experiment. This conclusion
takes into account the by-eye shifting of the theoretical resonances energies to try to match
up theory with the measured resonances.
Between 25 − 42 eV we find good agreement between the experiment and theory for
resonance energies. The AUTOSTRUCTURE code reproduces well the more regular res-
onance energy structure of high-n Rydberg resonances approaching the 3s3p(1P1)nl series
limit. However the AUTOSTRUCTURE cross section lies ≈ 31% above the measurements.
This discrepancy is larger than the estimated ±26% total experimental uncertainty in this
energy range. A similar discrepancy with theory was found for Fe13+ (Badnell 2006b).
Theory and experiment diverge above 42 eV and approaching the 3s3p(1P1)nl se-
ries limit. We attribute the difference in the shape between the calculated and measured
3s3p(1P1)nl series limit partly to the nl dependence of the field-ionization process in the
experiment. Here we assumed a sharp n cutoff. Schippers et al. (2001) discuss the effects
of a more correct treatment of the field-ionization process in TSR. Their formalism uses the
hydrogenic approximation to take into account the radiative lifetime of the Rydberg level n
into which the initially free electron is captured.
Our theoretical calculations indicate there are no DR resonances due to 2 → 3 or 3 → 4
core excitations below 44 eV, significant or insignificant. The two weak peaks above the
3s3p(1P1)nl series limit at 43.63 eV are attributed to ∆N=1 resonances.
6.2. Rate Coefficients
The recommended rate coefficient of Arnaud & Raymond (1992) is in mixed agreement
with our experimental results (Fig. 10). For temperatures below 90 eV, their rate coefficient
is in poor agreement. At temperatures where Fe XV is predicted to form in photoionzed gas,
their data are a factor of 3 to orders of magnitude smaller than our experimental results. At
temperatures above 90 eV, the Arnaud & Raymond (1992) data are in good agreement with
our combined experimental and theoretical rate coefficient.
As already implied by the work of Netzer et al. (2003) and Kraemer et al. (2004), the
present result shows that the previously available theoretical DR rate coefficients for Fe XV
are much too low at temperatures relevant for photoionized plasmas. Other storage ring
measurements show similar difference with published recommended low temperature DR
rate coefficients for Fe M-shell ions (Müller 1999; Schmidt et al. 2006). The reason for this
discrepancy is primarily because the earlier theoretical calculations were for high temperature
plasmas and did not include the DR channels important for low temperatures plasmas.
At temperatures relevant for the formation of Fe XV in photoionized gas, we find that
the modified Fe XV rate coefficient of Netzer (2004) is up to an order of magnitude smaller
than our experimental results. The modified rate coefficient of Kraemer et al. (2004) is a
factor of over 3 times smaller. These rate coefficients were guesses meant to investigate the
possibility that larger low temperature DR rate coefficients could explain the discrepancy
between AGN observations and models. The initial results were suggestive that this is the
case. Our work confirms that the previously recommended DR data are indeed too low but
additionally shows that the estimates of Netzer et al. (2003) and Kraemer et al. (2004) are
also still too low. A similar conclusion was reached by Schmidt et al. (2006) based on their
measurement for Fe13+. Clearly new AGN modeling studies need to be carried out using our
more accurate DR data (Badnell 2006a).
Our state-of-the-art MCBP calculations are 37% lower than our experimental results at
a temperature of 1 eV. This difference decreases roughly linearly with increasing temperature
to ≈ 25% at 2.5 eV. It is basically constant at ≈ 23% up to 7 eV and then again nearly
monotonically decreases to 19% at 15 eV. As discussed in § 4, a small part of these difference
may be attributed to unsubtracted metastable 3P0 contributions. But these contributions are
< 10% at 2.5 eV, < 5% at 5 eV, < 2.0% at 10 eV, and < 1.4% above 15 eV (hence basically
insignificant). Above 15 eV the difference decreases and at 23 eV and up the agreement is
within . 10% with theory initially smaller than experiment but later greater. Part of the
good agreement at these higher temperatures is due to our use of theory for the unmeasured
DR contribution due to states with n > 80.
7. Summary
We have measured resonance strengths and energies for ∆N=0 DR of Mg-like Fe XV
forming Al-like Fe XIV for center-of-mass collision energies Ecm from 0 to 45 eV and compared
our results with new MCBP calculations. We have generated an experimentally-derived
plasma rate coefficient by convolving the measured MBRRC with a Maxwell-Boltzmann
electron energy distribution. We have supplemented our measured MBRRC with MCBP cal-
culations to account for unmeasured DR into states which are field-ionized before detection.
The resulting plasma recombination rate coefficient has been compared to the recommended
rate coefficient of Arnaud & Raymond (1992) and new calculations using a state-of-the-art
MCBP theoretical method. We have considered the issues of metastable ions in our stored
ion beam, enhanced recombination for collision energies near 0 eV, and field-ionization of
high Rydberg states in the storage ring bending magnets.
As suggested by Netzer et al. (2003) and Kraemer et al. (2004), the present result shows
that the previously available theoretical DR rate coefficients for Fe XV are much too low.
Other storage ring measurements show similar differences with published recommended low
temperature DR rate coefficients for M-shell iron ions (Müller 1999; Schmidt et al. 2006).
We are now in the process of carrying out DR measurements for additional Fe M-shell ions.
As these data become available we recommend that these experimentally-derived DR rate
coefficients be incorporated into AGN spectral models in order to produce more reliable
results.
We gratefully acknowledge the excellent support by the MPI-K accelerator and TSR
crews. CB, DVL, MS, and DWS were supported in part by the NASA Space Astrophysics
Research Analysis program, the NASA Astronomy and Astrophysics Research and Analy-
sis program, and the NASA Solar and Heliosperic Physics program. This work was also
supported in part by the German research-funding agency DFG under contract no. Schi
378/5.
REFERENCES
Altun, Z., Yumak, A., Yavuz, I., Badnell, N. R., Loch, S. D., & Pindzola, M. S. 2007, in
preparation
Arnaud, M., & Raymond, J. 1992, ApJ, 398, 394
Badnell, N. R., et al. 2003 A&A, 406, 1151
Badnell, N. R. 2006a, ApJ, 651, L73
Badnell, N. R. 2006b, J. Phys. B, 39, 4285
Badnell, N. R. 2007, http://amdpp.phys.strath.ac.uk/tamoc/DATA/DR/
Behar, E., Sako, M., & Kahn S. M. 2001, ApJ, 563, 497
Behar, E., et al. 2003, ApJ, 598, 232
http://amdpp.phys.strath.ac.uk/tamoc/DATA/DR/
Blustin, A. J., et al. 2002, A&A, 442, 757
Brage, T., Judge, P. G., Aboussaied, A., Godefroid, M. R., Joensson, P., Ynnerman, A.,
Fischer, C. F., & Leckrone, D. S. 1998, ApJ, 500, 507
Churilov, S. S., Levashov, V. E., & Wyart, J. F. 1989, Phys. Scr., 40, 625
Dittner, P. F., Datz, S., Miller, P. D., Pepmiller, P. L., & Fou, C. M. 1986, Phys. Rev. A,
33, 124
Fogle, M., Badnell, N. R., Eklöw, N., Mohamed, T., & Schuch, R. 2003, A&A, 409, 781
Fogle, M., Badnell, N. R., Glans, P., Loch, S. D., Madzunkov, S., Abdel-Naby, Sh. A.,
Pindzola, M. S., & Schuch, R. 2005, A&A, 442, 757
Gallo, L. C., Boller, T., Brandt, W. N., Fabian, A. C., & Vaughan, S. 2004, A&A, 417, 29
Gorczyca T. W., & Badnell, N. R. 1996, Phys. Rev. A, 54, 4113
Gu, M. F. 2004, ApJ, 589, 389
Gwinner, G., et al. 2000, Phys. Rev. Lett., 84, 4822
Hahn, Y. 1989, J. Quant. Spectrosc. Radiat. Transfer, 41, 315
Heerlein, C., Zwicknagel, G., & Toepffer, C. 2002, Phys. Rev. Lett., 89, 083202
Hörndl, M., Yoshida, S., Wolf, A., Gwinner, G., & Burgdörfer J. 2005, Phys. Rev. Lett., 95,
243201
Hörndl, M., Yoshida, S., Wolf, A., Gwinner, G., Seliger, M., & Burgdörfer J. , Phys. Rev. A
74, 052712
Jacobs, V. L., Davis, J., Kepple, P. C., & Blaha, M. 1977, ApJ, 211, 605
Kallman, T. R., & Bautista M. 2001, ApJS, 133, 221
Kaspi, S., et al. 2002, ApJ, 574, 643
Kaspi, S., Netzer, H., Chelouche, D., George, I. M., Nandra, K., & Turner, T. J. 2004, ApJ,
611, 68
Kilgus, G., et al. 1990, Phys. Rev. Lett., 64, 737
Kilgus, G., Habs, D., Schwalm, D., Wolf, A., Badnell, N. R., & Müller, A. 1992, Phys. Rev. A,
46, 5730
Kraemer, S. B., Ferland, G. J., & Gabel, J. R. 2004, ApJ, 604, 561
Kreckel, H.. et al. 2005, Phys. Rev. Lett., 95, 263201
Krongold, Y., Nicastro, F. M., Brickhouse, N. S., Mathura, S., & Zezas, A. 2005, ApJ, 620,
Lampert, A., Wolf, A., Habs, D., Kilgus, G., Schwalm, D., Pindzola, M. S., & Badnell, N.
R. 1996, Phys. Rev. A, 53, 1413
Linkemann, J., et al. 1995, Nucl. Instrum. Methods Phys. Res. B, 98, 154
Marques, J. P., Parent, F., & Indelicato, P. 1993, At. Data. Nuc. Data Tab., 55, 157
Martinson, I., & Gaupp, A. 1974, Phys. Rep. 15, 113
Müller, A. 1999, Int. J. Mass Spectrom., 192, 9
Netzer, H., et al. 2003, ApJ, 599, 933
Netzer, H. 2004, ApJ, 604, 551
Nikolić, D., et al. 2004, Phys. Rev. A, 70, 062723
Pastuszka, S., et al. 1996, Nucl. Instrum. Methods Phys. Res. A, 369, 11
Pounds, K. A., Reeves, J. N., O’Brien, P. T., Page, K. A., Turner, M. J. L., & Nayakshin S.
2001, ApJ, 559, 181
Pounds, K. A., Reeves, J., King, A. R, & Page, K. L. 2004, MNRAS, 350, 10
Quinet, P., Palmeri, P., Bimont, E., McCurdy. M. M., Rieger, G., Pinnington, E. H., Wick-
liffe, M. E., & Lawler, J. E. 1999, Mon. Not. R. Astron. Soc. 307, 934
Ralchenko, Yu., et al. 2006, NIST Atomic Spectra Database (version 3.1.0), [Online]. Avail-
able: http://physics.nist.gov/asd3. National Institute of Standards and Technology,
Gaithersburg, MD.
Sako, M., et al. 2001, A&A, 365, L168
Savin, D. W. 1999, ApJ, 523, 855
Savin, D. W. 2000, ApJ, 533, 106
Savin, D. W., et al. 1997, ApJ, 489, L115
http://physics.nist.gov/asd3
Savin, D. W., et al. 1999, ApJS, 123, 687
Savin, D. W., et al. 2002a, ApJS, 138, 337
Savin, D. W., et al. 2002b, ApJ, 576, 1098
Savin, D. W., et al. 2003, ApJS, 147, 421
Savin, D. W., et al. 2006, ApJ, 642, 1275
Schippers, S., Bartsch, T., Brandau, C., Gwinner, G., Linkemann, J., Müller, A., Saghiri,
A. A., & Wolf, A. 1998, J. Phys. B, 31, 4873
Schippers, S., et al. 2000, Phys. Rev. A, 62, 022708
Schippers, S., Müller, A., Gwinner, G., Linkemann, J., Saghiri, A. A., & Wolf, A. 2001, ApJ,
555, 1027
Schippers, S., et al. 2002, Phys. Rev. A, 65, 042723
Schippers, S., Schnell, M., Brandau, C., Kieslich, S., Müller, A., & Wolf, A. 2004, A&A,
421, 1185
Schippers, S., et al. 2007, Phys. Rev. Lett., 98, 033001
Schmidt, E. W., et al. 2006, ApJ, 641, L157
Schnell, M., et al. 2003, Phys. Rev. Lett. 91, 043001
Schnell M., et al. 2003, Nucl. Instrum. Methods Phys. Res. B, 205, 367
Seaton, M. J., & Storey, P. J. 1976, in Atomic Processes and Applications, ed. P. G. Burke
& B. L. Moisewitch (North-Holland, Amsterdam), 133
Sprenger, F., Lestinsky, M., Orlov, D. A., Schwalm, D., & Wolf, A. 2004, Nucl. Instrum.
Methods Phys. Res. A, 532, 298
Steenbrugge, K. C., Kaastra, J. S., de Vries, C. P., & Edelson, R. 2003 A&A, 402, 477
This preprint was prepared with the AAS LATEX macros v5.2.
Table 1. Energy levels for the n = 3 shell of Fe XV relative to the ground state.
Level Energy (eV)a
3s3p(3P o
) 28.9927
3s3p(3P o
) 29.7141
3s3p(3P o
) 31.4697
3s3p(1P o
) 43.6314
3p2(3P0) 68.7522
3p2(1D2) 69.3816
3p2(3P1) 70.0017
3p2(3P2) 72.1344
3p2(1S0) 81.7833
3s3d(3D1) 84.1570
3s3d(3D2) 84.2826
3s3d(3D3) 84.4848
3s3d(1D2) 94.4875
3p3d(3F o
) 115.087
3p3d(3F o
) 116.313
3p3d(3F o
) 117.743
3p3d(1Do
) 117.601
3p3d(3Do
) 121.860
3p3d(3Do
) 123.346
3p3d(3Do
) 123.565
3p3d(3P o
) 121.940
3p3d(3P o
) 123.474
3p3d(3P o
) 123.518
3p3d(1F o
) 131.7351
3p3d(1P o
) 133.2690
3d2(3F2) 169.8994
3d2(3F3) 170.1106
3d2(3F4) 170.3612
3d2(1D2) 173.8992
3d2(1G4) 174.4529
3d2(3P0) 174.2613
Table 1—Continued
Level Energy (eV)a
3d2(3P1) 174.3433
3d2(3P2) 174.5416
3d2(1S0) 184.3712
aRalchenko et al. (2006)
unless otherwise noted.
bChurilov et al. (1989)
Table 2. Measured resonance energies Ei and strengths Si for Fe XV forming Fe XIV via
N = 3 → N ′ = 3 DR for Ecm ≤ 0.95. Fitting errors are presented at a 90% confidence level.
Peak Number Ei (eV) Si (10
−21 cm2 eV)
1 (6.74 ± 0.05)E-3 189430.0 ± 20635.3
2 0.0098 ± 0.0008 10078.0 ± 483.1
3 0.0196 ± 0.0008 613.1 ± 56.8
4 0.0254 ± 0.0003 743.9 ± 51.8
5 0.0444 ± 0.0002 686.3 ± 37.9
6 0.0610 ± 0.0002 2949.3 ± 39.0
7 0.1098 ± 0.0002 805.5 ± 699.5
8 0.1674 ± 0.0014 2424.3 ± 954.1
9 0.1943 ± 0.0018 4408.5 ± 1213.1
10 0.2143 ± 0.0022 4735.5 ± 750.9
11 0.2436 ± 0.0003 4257.6 ± 132.6
12 0.2660 ± 0.0006 4169.1 ± 339.0
13 0.2895 ± 0.0122 213.9 ± 218.4
14 0.3102 ± 0.0074 292.5 ± 188.6
15 0.3346 ± 0.0008 1158.1 ± 118.6
16 0.3596 ± 0.0010 943.5 ± 100.3
17 0.4154 ± 0.0149 193.3 ± 230.2
18 0.4536 ± 0.0005 8013.6 ± 328.0
19 0.4781 ± 0.0072 706.9 ± 310.2
20 0.4988 ± 0.0072 781.3 ± 303.5
21 0.5199 ± 0.0266 216.7 ± 285.6
22 0.5433 ± 0.0290 121.8 ± 270.4
23 0.6164 ± 0.0078 136.2 ± 106.9
24 0.6599 ± 0.0006 1269.1 ± 97.8
25 0.6992 ± 0.0010 3090.3 ± 99.5
26 0.7385 ± 0.0010 2068.5 ± 113.4
27 0.7943 ± 0.0006 1594.4 ± 83.7
28 0.8406 ± 0.0006 1740.6 ± 83.6
29 0.8830 ± 0.0006 2164.2 ± 89.9
30 0.9232 ± 0.0013 1420.7 ± 86.9
Table 3. Fit parameters for the total experimentally-derived DR rate coefficient for Fe XV
forming Fe XIV via N = 3 → N ′ = 3 core excitation channels and including the theoretical
estimate for capture into n > 80 (nmax = 1000). See § 4 for an explanation of the columns labeled
“Experiment (I)” and “Experiment (II)”. Also given are the fit parameters for our calculated
MCBP results (nmax = 1000). The units below are cm
3 s−1 K1.5 for ci and eV for Ei.
Parameter Experiment (I) Experiment (II) MCBP
c1 1.07E-4 1.07E-4 7.07E-4
c2 8.26E-6 8.26E-6 7.18E-3
c3 1.00E-6 1.00E-6 2.67E-2
c4 1.46E-6 1.46E-5 3.15E-2
c5 2.77E-6 2.77E-6 1.62E-1
c6 1.51E-5 1.51E-6 5.37E-4
c7 2.90E-6 3.29E-6 -
c8 2.66E-5 1.63E-4 -
c9 5.62E-5 4.14E-4 -
c10 6.66E-5 2.17E-3 -
c11 6.81E-5 6.40E-3 -
c12 7.28E-5 4.93E-2 -
c13 4.07E-6 1.51E-1 -
c14 5.96E-6 - -
c15 2.54E-5 - -
c16 2.23E-5 - -
c17 5.27E-6 - -
c18 2.40E-4 - -
c19 2.22E-5 - -
c20 2.56E-5 - -
c21 7.40E-6 - -
c23 4.35E-6 - -
c23 5.51E-6 - -
c24 5.50E-5 - -
c25 1.42E-4 - -
c26 1.00E-4 - -
c27 8.32E-5 - -
c28 9.61E-5 - -
c29 1.25E-4 - -
c30 8.61E-5 - -
c31 1.02E-4 - -
Table 3—Continued
Parameter Experiment (I) Experiment (II) MCBP
c32 5.46E-1 - -
c33 2.91E-3 - -
c34 4.83E-3 - -
c35 4.86E-2 - -
c36 1.51E-1 - -
E1 6.74E-3 6.74E-3 4.12E-1
E2 9.80E-3 9.80E-3 2.06E+0
E3 1.97E-2 1.97E-2 1.03E+1
E4 2.54E-2 2.54E-2 2.20E+1
E5 4.45E-2 4.45E-2 4.22E+1
E6 6.10E-2 6.10E-2 3.41E+3
E7 1.10E-1 1.10E-1 -
E8 1.67E-1 1.91E-1 -
E9 1.94E-1 3.33E-1 -
E10 2.14E-1 9.63E-1 -
E11 2.44E-1 2.47E+0 -
E12 2.66E-1 1.08E+1 -
E13 2.90E-1 3.83E+1 -
E14 3.10E-1 - -
E15 3.35E-1 - -
E16 3.60E-1 - -
E17 4.15E-1 - -
E18 4.54E-1 - -
E19 4.78E-1 - -
E20 4.99E-1 - -
E21 5.20E-1 - -
E22 5.43E-1 - -
E23 6.16E-1 - -
E24 6.60E-1 - -
E25 6.99E-1 - -
Table 3—Continued
Parameter Experiment (I) Experiment (II) MCBP
E26 7.39E-1 - -
E27 7.94E-1 - -
E28 8.41E-1 - -
E29 8.83E-1 - -
E30 9.23E-1 - -
E31 1.00E+0 - -
E32 1.16E+0 - -
E33 1.62E+0 - -
E34 3.14E+0 - -
E35 1.08E+1 - -
E36 3.82E+1 - -
Fig. 1.— Fe XV to Fe XIV 3 → 3 DR resonance structure versus center-of-mass energy Ecm
from 0 to 1 eV. The solid curve represents the measured rate coefficient 〈σv〉 which is the summed
DR plus radiative recombination (RR) cross sections times the relative velocity convolved with the
experimental energy spread, i.e., a merged beam recombination rate coefficient (MBRRC). The
dotted curve shows our calculated multiconfiguration Breit-Pauli (MCBP) results (nmax = 80) for
ground state Fe XV (top plot) and 3P0 metastable state Fe XV multiplied by a factor of 0.06
to account for the estimated 6% population in our ion beam (bottom plot). To these results we
have added the convolved, non-resonant RR contribution obtained from semi-classical calculations
(Schippers et al. 2001). The inset shows our results for Ecm from 5× 10
−6 to 1× 10−1 eV.
0.8 1.0 1.2 1.4 1.6 1.8 2.0 2.2 2.4 2.6
Center of Mass Energy (eV)
Experiment
MCBP Theory
2.5 3.0 3.5 4.0 4.5
Center of Mass Energy (eV)
Experiment
MCBP Theory
4.0 4.5 5.0 5.5 6.0 6.5 7.0 7.5 8.0 8.5 9.0
Center of Mass Energy (eV)
Experiment
MCBP Theory
15 16 17 18 19 20 21 22 23 24
Center of Mass Energy (eV)
Experiment
MCBP Theory
15 16 17 18 19 20 21 22 23 24
Center of Mass Energy (eV)
Experiment
MCBP Theory
Fig. 7.— Same as Fig. 2 but for Ecm from 23 to 36 eV. The dotted curve shows our calculated
MCBP results and the thin solid curve shows our calculated MCBP results reduced by a
factor of 1.31.
Fig. 8.— Same as Fig. 7 but for Ecm from 35 to 45 eV. The weak resonances above 44 eV
are attributed to ∆N=1 DR. These are not included in either our experimentally-derived or
theoretical Maxwellian rate coefficients.
Fig. 9.— Measured and fitted Fe XV to Fe XIV 3 → 3 resonance structure below 0.07 eV.
The experimental MBRRC results are shown by the filled circles. The vertical error bars
show the statistical uncertainty of the data points. The solid curve is the fit to the data
using our calculated RR rate coefficient (dashed curve) and taking into account all resolved
DR resonances. The dotted curves show the fitted DR resonances. At Ecm = 0.005 meV the
difference between the model spectrum α0 and the data is 1 + (∆α/α0) = 2.5.
10-1 100 101 102 103
10-11
10-10
Electron Temperature (eV)
Photoionized
Zone
Fig. 10.— Maxwellian-averaged 3 → 3 DR rate coefficients for Fe XV forming Fe XIV. The
solid curve represent our experimentally-derived rate coefficient plus the theoretical estimate
for unmeasured contributions due to capture into states with n > 80. The error bars show
our estimated total experimental uncertainty of ±29% (at a 90% confidence level). No error
bars are shown below 1 eV for reasons discussed in § 4. The thin solid curve represents our
experimentally-derived rate coefficient without the two lowest energy resonances included.
The dash-dotted curve represents our experimentally-derived rate coefficient alone (nmax =
80). Also shown is the recommended DR rate coefficient of Arnaud & Raymond (1992; thick
dash-dot-dotted curve) and its modification by Netzer (2004; thin dash-dot-dotted curve).
The filled pentagon at 5.2 eV represents the estimated rate coefficient from Kraemer et al.
(2004). The dashed curve shows our MCBP calculations for nmax = 1000. As a reference
we show the recommended RR rate coefficient of Arnaud & Raymond (1992; dotted curve).
Neither the experimental nor theoretical DR rate coefficients include RR. The horizontal
line shows the temperature range over which Fe XV is predicted to form in photoionized gas
(Kallman & Bautista 2001).
Introduction
Experimental Technique
Metastable Ions
Experimental Results
Theory
Discussion
Resonance Structure
Rate Coefficients
Summary
|
0704.0906 | Metropolis algorithm and equienergy sampling for two mean field spin
systems | METROPOLIS ALGORITHM AND EQUIENERGY SAMPLING
FOR TWO MEAN FIELD SPIN SYSTEMS
FEDERICO BASSETTI AND FABRIZIO LEISEN
Abstract. In this paper we study the Metropolis algorithm in connection
with two mean–field spin systems, the so called mean–field Ising model and
the Blume–Emery–Griffiths model. In both this examples the naive choice of
proposal chain gives rise, for some parameters, to a slowly mixing Metropo-
lis chain, that is a chain whose spectral gap decreases exponentially fast (in
the dimension N of the problem). Here we show how a slight variant in the
proposal chain can avoid this problem, keeping the mean computational cost
similar to the cost of the usual Metropolis. More precisely we prove that,
with a suitable variant in the proposal, the Metropolis chain has a spectral
gap which decreases polynomially in 1/N . Using some symmetry structure of
the energy, the method rests on allowing appropriate jumps within the energy
level of the starting state, and it is strictly connected to both the small world
Markov chains of [15, 16] and to the equi-energy sampling of [22] and [26].
1. Introduction.
The Metropolis algorithm, introduced in [29] and later generalized in [18], is
currently (together with other Monte Carlo Markov Chain methods) one of the
most used simulation techniques both in statistics and in physics. See, among
others, [33, 32, 39, 17, 35, 34, 25, 6].
In a finite setting the Metropolis algorithm can be described as follows. Suppose
that, given a probability π(x) on a finite set X , want to approximate
(1.1) µ =
f(x)π(x),
for f : X → R. As a first step, take a reversible Markov chain K(x, y) (the proposal
chain) on X and change its output in order to have a new chain with stationary
distribution π. This can be achieved by constructing a new (π–reversible) chain
(1.2) M(x, y) =
K(x, y)A(x, y) x 6= y
K(x, x) +
z 6=xK(x, z)(1−A(x, z)) x = y
where A(x, y) := min(
π(y)K(y,x)
π(x)K(x,y)
, 1). Then, the metropolis estimate of µ is given by
(1.3) µ̂n =
f(Yi),
where Y0 is generated from some initial distribution π0 and Y1, . . . , Yn fromM(x, y).
It is clear that, from a computational point of view, the speed of convergence to
the stationary distribution and the (asymptotic) variance of the estimate are two
very important features of the Markov chain M .
It is well-known that in some situation a Markov chain can converge very slowly
to its stationary distribution and, moreover, that the asymptotic variance of the es-
timate (1.3) can be much bigger than the variance of f , i.e. V arπ(f) :=
x(f(x)−
Key words and phrases. asymptotic variance, Chain decomposition theorem, fast/slowly mix-
ing chain, mean-field Ising model, Metropolis, spectral gap analysis.
http://arxiv.org/abs/0704.0906v2
2 FEDERICO BASSETTI AND FABRIZIO LEISEN
µ)2π(x), which is equal to the asymptotic variance of the crude Montecarlo estima-
tor. In these cases (1.3) turns out to be a very inefficient estimate of µ.
For the Metropolis chain a classical situation in which the convergence is slow
(and the variance big) is when the target distribution π has many peaks and K is
somehow too “local”.
This is well known in statistical physics, where, typically, a distribution of a
system with energy function h and in thermal equilibrium at temperature T is
described by the Gibbs distribution
πh,T (x) = exp{−h(x)/T }Z−1T
with ZT =
x exp{−h(x)/T }. In point of fact, the Metropolis algorithm has been
proposed in [29] to compute average with respect to such distributions. Indeed,
if h is nice, the Metropolis algorithm is very efficient, but it can perform very
poorly if the energy has many local minima separated by high barriers that cannot
be crossed by the proposal moves K. This problem can be bypassed, for specific
energy, designing appropriate moves that have higher chance to cut across the
energy barrier (see, e.g, [4, 5]), or constructing clever alternative approaches to the
problem, for instance using a reparametrization of the problem (see, e.g., [12, 13])
or using auxiliary variables (see, e.g., [40, 9, 1, 30]). A different kind of solution has
been proposed in [14] and in [28] by introducing the so called simulated tempering,
which essentially means that T is changed (stochastically or not) to flatten h. A
remarkable variant of these methods is the parallel tempering, see, for instance, [19].
More recently new algorithms based on the so called equi–energy levels sampling
have been proposed (see [26] and [22]). In particular, the algorithm proposed in [22]
relies on the so–called equi-energy jump, which enables the chain to reach regions
of the sample space with energy close to the one of the starting state, but that may
be separated by steep energy barriers. In point of fact, even if, according to some
simulations, the method seems to be efficient nothing has been formally proved.
Finally, let us mention a recent algorithm, called small world Markov chains (see
[15, 16]), that combine a local chain with long jumps. In these papers, it has
been shown that a simple modification of the proposal mechanism results in faster
convergence of the chain. That mechanism, which is based on an idea from the
field of small-world networks, amounts to adding occasional wild proposals to any
local proposal scheme.
In the present paper we study two simple examples: the so called mean field Ising
model and the mean field Blume–Emery–Griffiths model. As for the former, it is
well-known that the usual choice of K gives rise, for low temperature, to a slowly
mixing Metropolis chain (see, e.g., [26]). Here we show that a slight variant in the
proposal chain can completely solve this problem, keeping the mean computational
cost similar to the cost of the usual Metropolis. The idea again rests on allowing
appropriate jumps in the same energy level of the starting state. As for the Blume–
Emery–Griffyths mean–field model, we first show that there is a critical region of
the parameters space for which the naive Metropolis chain is slowly mixing. Then
we show how one can modify the proposal chain in order to obtain a better mixing
for the Metropolis chain. The present paper should be intended as a further step in
the direction of a better mathematical understanding of both small world Markov
chains and equi-energy sampling.
The rest of the paper is organized as follows. In Section 2 some general consid-
erations are given. In Section 3 some basic tools concerning Markov chain, which
will be used in the paper, are reviewed. Section 4 contains a warming up example.
In Section 5 the mean field Ising model is treated, while Section 6 deals with the
more complex case of the mean field Blume-Emery-Griffiths model. All the proofs
are deferred to the Appendix.
METROPOLIS ALGORITHM AND EQUIENERGY SAMPLING 3
2. A general strategy
In an abstract setting, what we shall do in the next examples can be summarized
as follows. Let G be a group acting on X for which
(2.1) π(x) = π(g(x)) ∀ x ∈ X , ∀ g ∈ G.
For every x in X let Ox := {y = g(x) : g ∈ G} be the orbit of x (of course if y
belongs to Ox then Ox = Oy).
Assume now that we have a reversible Markov chain KE(x, y) (the proposal) on
X and suppose that the Metropolis chain ME with proposal KE is slowly mixing
(see next section for more details). To speed up the mixing one can try to exploit
(2.1) by taking a proposal of the following form:
(2.2) Kǫ(x, y) = ǫKE(x, y) + (1 − ǫ)KG(x, y)
where
KG(x, y) =
qx(z)Iz(y),
0 < qx(z) < 1 and
qx(z) = 1.
In point of fact, usually KE is “local”; for instance frequently
KE(x, y) = 0
whenever y 6= x belongs to Ox, hence with KG we are adding “long” jumps to the
chain. Moreover, note that if KE is such that KE(x, g(x)) = KE(g(x), x), for every
x in X and g in G, then the Metropolis always accepts the move x→ g(x) and
M(x, g(x)) = ǫKE(x, g(x)) + (1− ǫ)qx(g(x)).
In particular this holds when KE is symmetric.
The heuristics under (2.2) is to combining small world Markov chains and equi-
energy sampling.
Before presenting some examples in which one can actually improve the perfor-
mances of the Metropolis chain using this idea, we collect in the next section some
useful facts concerning Markov chains.
3. Preliminaries
Let P (x, y) be a reversible and ergodic Markov chain on the finite set X with
(unique) stationary distribution p(x). Thus, p(x)P (x, y) = p(y)P (y, x). Let L2(p) =
{f : X → R} with < f, g >p= Ep(fg) =
x f(x)g(x)p(x). Reversibility is equiva-
lent to P : L2 → L2 being self–adjoint. Here Pf(x) =
y f(y)P (x, y). The spec-
tral theorem implies that P has real eigenvalues 1 = λ0(P ) > λ1(P ) ≥ λ2(P ) ≥
· · · ≥ λ|X |−1(P ) > −1 with orthonormal basis of eigen–functions ψi : X → R
(Pψi(x) = λiψi(x), < ψi, ψj >p= δij).
3.1. Spectral gap, variance and speed of convergence. A very important
quantity related to the eigenvalues is the spectral gap, defined by
Gap(P ) = 1−max{λ1, |λ|X |−1|}.
It turns out that the spectral gap is a good index to measure the mixing of a
chain. To better understand this point, assume that f belongs to L2(p) and write
f(x) =
i≥0 aiψi(x) (with ai =< f, ψi >p). Now let Y0 be chosen form some
distribution p0 and Y1, . . . , Yn be a realization of the P (x, y) chain, then
µ̂n =
f(Yi)
4 FEDERICO BASSETTI AND FABRIZIO LEISEN
has asymptotic variance given by
AV ar(f, p, P ) := lim
n · V ar(µ̂n) =
|ak|2
1 + λk
1− λk
See, for instance, Theorem 6.5 in Chapter 6 of [3]. From the last expression, the
classical inequality
(3.1) AV ar(f, p, P ) ≤ 2
1− λ1
V arp(f),
follows easily. The last inequality is the usual way of relating spectral gap to
asymptotic variance and, hence, to the efficency of a chain.
The spectral gap is very important also to give bounds on the speed of con-
vergence to the stationary distribution. For example, if ‖ · ‖TV denotes the total
variation norm, one has
‖δxP k − p‖2TV =
|P k(x,A) − p(x)|
≤ 1− p(x)
4p(x)
(max{λ1, |λ|X |−1|})2k
See, e.g., Proposition 3 in [7]. Another classical bound is
‖p0P k/p− 1‖2,p ≤ Gap(P k)‖p0/p− 1‖2,p
valid for every probability p0. See, for instance, [39].
Roughly speaking one can say that a sequence of Markov chains defined on a
sequence of state space XN is slowly mixing (in the dimension of the problem N)
if the spectral gap decreases exponentially fast in N .
3.2. Cheeger’s inequality. As already recalled, problems of slowly mixing typ-
ically occur when π has two or more peaks and the chain K can only move in
a neighborhood of the starting peak. Usually this phenomenon is called bottle-
neck. A powerful tool to detect the presence of a bottleneck is the conductance and
the related Cheeger’s inequality. Recall that the conductance of a chain P with
stationary distribution p is defined by
h = h(p, P ) := inf
A :p(A)≤ 1
x∈A,y∈Ac
p(x)P (x, y),
and the well-known Cheeger’s inequality is
(3.2) 1− 2h ≤ λ1(P ) ≤ 1−
See, for instance, [3, 37, 7]. Note that, since P is reversible,
(3.3) h ≤ 1
p(x)P (x, y) =
p(y)P (y, x)
for every A such that p(A) ≤ 1/2.
3.3. Chain decomposition theorem. In this subsection we briefly describe a
useful technique to obtain bounds on the spectral gap: the so called chain decom-
position technique. Following [16] assume that A1, . . . , Am is a partition of X .
Moreover, for each i = 1, . . . ,m, define a new Markov chain on Ai by setting
PAi(x, y) := P (x, y) + Ix(y)
P (x, z)
(x, y ∈ Ai).
PAi is a reversible chain on the state space Ai with respect to the probability
measure
pi(x) := p(x)/p(Ai).
METROPOLIS ALGORITHM AND EQUIENERGY SAMPLING 5
The movement of the original chain among the “pieces” A1, . . . , Am can be de-
scribed by a Markov chain with state space {1, . . . ,m} and transition probabilities
PH(i, j) :=
2p(Ai)
x∈Ai,y∈Aj
P (x, y)p(x)
for i 6= j and
PH(i, i) := 1−
j 6=i
PH(i, j),
which is reversible with stationary distribution
p̄(i) := p(Ai).
A variant of a result of Caracciolo, Pelisetto and Sokal (published in [27]), states
(3.4) Gap(P ) ≥ 1
Gap(PH)
i=1,...,m
Gap(PAi)
holds true, see Theorem 2.2 in [16]. Other results about chain decompositions can
be found, for instance, in [20].
In the next very simple example we shall show how this technique can be used,
starting from a slowly mixing chain, to suggest how to modify the proposal chain
in order to obtain a fast mixing chain.
4. Warming up example
Set X = {−N,−N + 1, . . . , 0, 1, . . . , N} and define a probability measure on X
π(x) =
(θ − 1)θ|x|
2θN+1 + 1− θ
θ being a given parameter bigger than 1. Here we can consider G = {+1,−1} (with
group operation given by the usual product) acting on X by g(x) = gx, hence
Ox = {x,−x}.
Now let KE be a chain defined by
KE(x, x + 1) = 1/2 x 6= N
KE(x, x − 1) = 1/2 x 6= −N
KE(N,N) = KE(−N,−N) = 1/2
KE(x, y) = 0 otherwise
and denote by ME the Metropolis chain with stationary distribution π derived
by KE. It is clear that in this case KE(x, y) = 0 whenever y belongs to Ox.
In this example it is very easy to bound the conductance on ME , indeed, taking
A = {−N, . . . ,−1}, by (3.3), it follows that
h(π,ME) ≤
1− π(0)
Hence,
h(π,ME) ≤ Cθ−N ,
and then (3.2) yields
1− λ1 ≤ 2Cθ−N .
This means that, if f is such that a1 6= 0 and θ > 1, then the asymptotic variance
of f blows up exponentially fast, indeed
AV ar(f, π,ME) ≥ 2Celog(θ)N .
6 FEDERICO BASSETTI AND FABRIZIO LEISEN
Now, instead of KE consider
Kǫ(x, y) = (1− ǫ)KE(x, y) + ǫI{−x}(y)
and let M (ǫ) be the Metropolis chain derived by Kǫ. Decompose X as follows
X = A1 ∪A2 · · · ∪ AN
with A1 = {−1, 0, 1} and Ai = {x ∈ X : |x| = i}, for i > 1. Moreover let
π̄(i) = π(Ai) =
(2θ + 1)/Z for i = 1
2θi/Z for i > 1
where
2θN+1 + 1− θ
(θ − 1)
and set
H (i, j) =
2π(Ai)
l∈Ai,m∈Aj
M (ǫ)(l,m)π(l), M
H (i, i) = 1−
j 6=i
H (i, j).
For i 6= 1, N , one has
H (i, i+ 1) =
2π(Ai)
[M (ǫ)(i, i+ 1)π(i) +M (ǫ)(−i,−i− 1)π(−i)]
and, since π(i) = π(−i) and π(i + 1) ≥ π(i)
H (i, i+ 1) =
In the same way it is easy to see that
H (i, i− 1) =
, i 6= 1, N
H (i, i) = 1−
(1 + θ−1) i 6= 1, N
H (N,N − 1) =
H (N,N) = 1−
H (1, 2) =
4(1 + 1/(2θ))
H (1, 1) = 1−
4(1 + 1/(2θ))
Moreover, for every i 6= 1, M (ǫ)Ai in matrix form is given by
1− ǫ ǫ
ǫ 1− ǫ
and hence
Gap(M
) = 1− |1− 2ǫ|.
While M
is given by
(2θ − 1)(1− ǫ)/(2θ) (1− ǫ)/(2θ) ǫ
(1− ǫ)/2 ǫ (1− ǫ)/2
ǫ (1− ǫ)/(2θ) (2θ − 1)(1− ǫ)/(2θ)
and hence
Gap(M
) = k(θ, ǫ) > −1.
Moreover, since
i6=1,N
H (i, i± 1)),M
H (1, 2),M
H (N,N − 1)
≥ min
(1− ǫ)/(4θ), 1− ǫ
4(1 + 1/(2θ))
=: m(ǫ, θ) > 0
METROPOLIS ALGORITHM AND EQUIENERGY SAMPLING 7
and π̄(i) ≤ 3π̄(j) for every i < j, Lemma A.1 in the appendix yields that
1− λ1(M (ǫ)H ) ≥
m(ǫ, θ)
In the same way, since M
H (i, i+1)+M
H (i, i− 1) ≤ (1− ǫ)M(θ)/4, with M(θ) =
max(1 + θ−1, 2θ/(2θ + 1)) ≤ 2, inequality (A.1) in the Appendix yields that
λN−1(M
H ) ≥ 1−
≥ 1 + ǫ
Hence
Gap(M
H ) ≥
m(ǫ, θ)
and (3.4) yield
Gap(M (ǫ)) ≥
h(θ, ǫ)
for a suitable h. This shows that M (ǫ) is fast mixing for every ǫ > 0 and for every
θ > 1 while ME is slowly mixing for every θ > 1.
5. The mean field Ising model
Let X = {−1, 1}N , N being an even integer. For every β > 0 let π = πβ,N be a
probability on X defined by
π(x) = πβ,N (x) := exp
S2N (x)
Z−1N (β) (x ∈ X )
where
ZN (β) = ZN :=
S2N (x)
is the normalization constant (“partition function”) and
SN (x) :=
xi x = (x1, . . . , xN ).
This is the so called mean field Ising model, or Curie-Weiss model, in which every
particle i, with spin xi, interacts equally with every other particle. It is probably
the most simple but also the most studied example of spin system on a complete
graph. The usual Metropolis algorithm uses as proposal chain
KE(x, y) =
I{x(j)}(y)
where x(j) denotes the vector (x1, . . . ,−xj , . . . , xN ). It has been proved in [26] that,
whenever β > 1,
1− λ1 ≤ Ce−D
where λ1 is the first eigenvalues smaller than 1 of the Metropolis chain ME derived
KE . This yields that the variance of an estimator obtained from this Metropolis
algorithm can blow up exponentially fast in N .
The aim of this section is to show how one can construct a different Metropolis
chain avoiding this problem. In the notation of Section 2, we consider
G = SN × {+1,−1}
(SN being the symmetric group of order N) and we define the action of G on
X = {−1, 1}N by
g(x) = (e · xσ(1), . . . , e · xσ(N)) g = (σ, e).
8 FEDERICO BASSETTI AND FABRIZIO LEISEN
In order to introduce a new proposal, it is useful to write X as the union of its
“energy sets”, that is
X = X0 ∪ X2 ∪ X4 ∪ · · · ∪ XN
where
Xi := {x ∈ X : |SN (x)| = i} (i = 0, 2, . . . , N).
Note that energy takes only even values and that Ox = X|SN (x)|. Moreover, for
i 6= 0, set
X+i := {x ∈ X : SN (x) = i} and X
i := {x ∈ X : SN(x) = −i}.
The new proposal chain will be
K(x, y) = p1KE(x, y) + (1 − p1)K0(x, y) if x ∈ X0
K(x, y) = p1KE(x, y) + p2I{−x}(y) + (1− p1 − p2)Ki(x, y)
if x ∈ Xi, i 6= 0
(5.1)
where p1, p2 belong to (0, 1), p1 + p2 < 1, and
Ki(x, y) = IX+
{x}K+i (x, y) + IX−
{x}K−i (x, y) (i 6= 0).
We shall assume that K±i (K0, respectively) are irreducible, symmetric and aperi-
odic chains on X±i ( X0, respectively).
As a leading example we shall take
K0(x, y) =
) y ∈ X0
K±i (x, y) =
(N−i)/2
) y ∈ X±i ,
(5.2)
that is: a realization of a chain K±i (K0, respectively) is simply a sequence of
independent uniform random sampling from X±i (X0, respectively).
Remark 1. Note that (5.2) is the (n, k)-Bose-Einstein distribution with n = (N +
i)/2 and k = (N − i)/2 + 1 and recall that there is a very easy way to directly
generate Bose-Einstein configurations. One may place n balls sequentially into k
boxes, each time choosing a box with probability proportional to its current content
plus one. Starting from the empty configuration this results in a Bose-Einstein
distribution for every stage.
Now let M be the Metropolis chain defined by the transition kernel (1.2) with
K as in (5.1), i.e. for every x in X±i (i 6= 0)
M(x, y) =
if y = x(j), j = 1...N
p2 if y = −x
(1− p1 − p2)K±i (x, y) if y ∈ X
i , y 6= x
z 6=xM(x, z) if y = x
while for x in X0
M(x, y) =
if y = x(j), j = 1...N
(1− p1)K0(x, y) if y ∈ X0, y 6= x
z 6=xM(x, z) if y = x.
METROPOLIS ALGORITHM AND EQUIENERGY SAMPLING 9
By construction M is an aperiodic, irreducible and reversible chain with stationary
distribution π. Then, when (5.2) holds true,
M(x, y) =
if y = x(j), j = 1...N
p2 if y = −x
(1− p1 − p2) 1( N(N−i)/2)
if y ∈ X±i , y 6= x
z 6=xM(x, z) if y = x
for x in X±i (i 6= 0), while if x belongs to X0
M(x, y) =
if y = x(j), j = 1...N
(1− p1) 1( NN/2)
if y ∈ X0, y 6= x
z 6=xM(x, z) if y = x.
In order to bound the spectral gap ofM we shall use the decomposition theorem
described in Subsection 3.3. To this end, for every i = 0, 2, . . . , N and every j 6= i
P̄ (i, j) :=
2π(Xi)
M(x, y)π(x)
P̄ (i, i) := 1−
j 6=i
P̄ (i, j).
As already noted, P̄ is a reversible chain on {0, 2, . . . , N} with stationary distribu-
π̄(i) := π(Xi).
Moreover define for every i = 0, 2, . . . , N a chain on Xi setting
PXi(x, y) :=M(x, y) + Ix(y)
z∈X c
M(x, z)
where both x and y belong to Xi. In the same way, define chains on X+i and X
for i = 2, . . . , N setting
(x, y) := PXi(x, y) (y 6= x, x, y ∈ X±i )
(x, x) := 1−
y 6=x
PXi(x, y).
These chains are reversible on Xi (X±i , respectively) and have as stationary distri-
butions
πXi(x) :=
π(Xi)
and πX±
(x) :=
πXi(x)
πXi(X±i )
|X±i |
10 FEDERICO BASSETTI AND FABRIZIO LEISEN
respectively. Finally, for every i = 2, 4, . . . , N , define a chain on {+,−} setting
Pi(+,−) :=
2πXi(X+i )
PXi(x, y)πXi(x)
Pi(−,+) :=
2πXi(X−i )
PXi(x, y)πXi(x).
Now the lower bound (3.4), applied two times yields
Gap(M) ≥ 1
Gap(P̄ ) min
i=0,2,...,N
{Gap(PXi)}
Gap(P̄ )min
Gap(PX0),
i=2,...,N
Gap(Pi)min{Gap(PX+
), Gap(PX−
(5.3)
Hence, to get a lower bound on Gap(M) it is enough to obtain bounds on the gaps
of the chains P̄ , PX0 , Pi, PX±
The most important of these bounds is given by the following
Proposition 5.1. P̄ is a birth and death chain on {0, 2, . . . , N}, more precisely
(5.4)
P̄ (0, 2) = p1
P̄ (i, i+ 2) = p1
i 6= N, 0
P̄ (i, i− 2) = p1
exp{2β(1− i)/N} i 6= 0.
Moreover
λ1(P̄ ) ≤ 1−
(N/2 + 1)3
λN/2(P̄ ) ≥ 1− p1.
The proof of the previous proposition is based on a bound for a birth and death
chain, given in the Appendix, which can be of its own interest.
As for the others chains, we have the following
Lemma 5.2. For every i = 2, 4, . . . , N
Gap(PX±
) ≥ (1− p1 − p2)Gap(K±i )
Gap(Pi) = p2,
moreover
Gap(PX0) ≥ (1− p1)Gap(K0).
In this way, using (5.3), we can prove the main result of this section.
Proposition 5.3. Let M be the Metropolis chain derived by the chain K defined
as in (5.1) then
Gap(M) ≥ p1p2
(N/2 + 1)3
[ (1− p1)
Gap(K0),
(1− p1 − p2)
min{Gap(K+i ), Gap(K
i, )}
If K±i and K0 are defined as in (5.2) then
Gap(M) ≥
(N/2 + 1)3
[ (1 − p1 − p2)
(1− p1)
for every β > 0 and N ≥ N0.
METROPOLIS ALGORITHM AND EQUIENERGY SAMPLING 11
Proposition 5.3 shows that the gap is polynomial in 1/N independently of β.
Hence, even when β > 1, the variance of the metropolis estimate obtained with this
proposal can not grow up faster than a polynomial in N .
Note that if in Proposition 5.3 we choose
(5.5) p1 = 1− a/(2N), p2 = a/N
we get
Gap(M) ≥ C
Hence, even with this choice, the Metropolis algorithm is still fast mixing for every
β. It is worth noticing that the mean computational cost of this Metropolis does
not change with respect to the Metropolis which uses the proposal KE. Indeed,
in the case of the usual Metropolis, the computational cost needed to go from Xn
to Xn+1 is O(N), since it is essentially due to a sample of one number among N
numbers (we need to decide which coordinate to flip). In the case of the ”modified”
proposal, things are slight more complex. In this case, at the beginning, we have
an extra “toss”. If with this fist toss we decide to flip at random a coordinate the
cost is still O(N) but if we need to sample from K±i the cost is O(N
2) (in this last
case we need to pick a sample from a Bose-Einstein distribution). Hence, although
our algorithm is ”sometime” more expensive, if we take p1 and p2 as in (5.5), we
get that the mean cost of our algorithm is still O(N).
6. The mean–field Blume-Emery-Griffiths model
The Blume-Emery-Griffiths (BEG) model (see [2]) is an important lattice–spin
model in statistical mechanics, it has been studied extensively as a model of many
diverse systems, including He3 − He4 mixtures as well solid–liquid–gas systems,
microemulsions, semiconductor alloys and electronic conduction models. See, for
instance, [2, 38, 23, 24, 31, 36, 21]. We will focus our attention on a simplified
mean–field version of the BEG model. For a mathematical treatment of this mean–
field model see [10]. In what follows let X := {−1, 0, 1}N , N being an even integer,
and for every β > 0 and K > 0 let πβ,K,N be the probability defined by
π(x) = πβ,K,N(x) = exp{−βRN(x) +
S2N(x)}Z−1N (β,K) (x ∈ X )
where
ZN(β,K) = ZN :=
−βRN (x) +
S2N (x)
is the normalization constant,
SN (x) :=
xi and RN (x) :=
x2i x = (x1, x2, ..., xN ).
A natural Metropolis algorithm can be derived by using the proposal chain
(6.1) KE(x, y) =
[I{x(+j)}(y) + I{x(−j)}(y)]
where x(±j) denotes the vector (x1, . . . , xj ± 1, . . . , xN ), with the convention that
2 = −1 and −2 = 1.
The next proposition shows that there exists a critical region of the parameters
space in which the Metropolis chain is slowly mixing. More precisely, using some
results of [10] it is quite straightforward to proove the following
12 FEDERICO BASSETTI AND FABRIZIO LEISEN
Proposition 6.1. Let ME be the Metropolis chain (with stationary distribution π)
with proposal chain KE defined in (6.1). Then, there exists a non decreasing function
Γ : (0,+∞) → (0,+∞) with limx→0 Γ(x) = +∞ and limx→∞ Γ(x) = γc ≃ 1.082
such that for every couple of positive parameters (β,K) with K > Γ(β)
Gap(ME) ≤ C e^{−∆N}
for suitable constants C = C(β,K) > 0 and ∆ = ∆(β,K) > 0.
As in the case of the mean–field Ising model, we intend to bypass the slow
mixing of this Metropolis chain by choosing a different proposal. To un-
derstand which kind of proposal is reasonable, here we choose
G = SN × {+1,−1}
with G acting on X = {−1, 0, 1}N by
g(x) = (e · xσ(1), . . . , e · xσ(N)),   g = (σ, e).
At this stage, decompose X as the union of its "energy sets", that is
X = X0,0 ∪ X1,1 ∪ X0,2 ∪ X2,2 ∪ X1,3 ∪ X3,3 ∪ ... ∪ X0,N ∪ X2,N ∪ ... ∪ XN,N
where
Xs,r := {x ∈ X : |SN(x)| = s and RN(x) = r},
r = 0, 1, 2, ..., N and s = 1, 3, ..., r if r is odd and s = 0, 2, ..., r if r is even.
Moreover, for s = 1, 2, ..., N , set
X+s,r := {x ∈ X : SN = s and RN (x) = r}
X−s,r := {x ∈ X : SN = −s and RN (x) = r}.
Note again that Ox = Xs,r with s = SN (x) and r = RN (x). The new proposal
chain will be
K(x, y) = p1KE(x, y) + (1 − p1)K0,r(x, y)   if x ∈ X0,r, r = 0, 2, ..., N,
K(x, y) = p1KE(x, y) + p2I{−x}(y) + (1 − p1 − p2)Ks,r(x, y)   if x ∈ Xs,r, s ≠ 0,
(6.2)
where p1, p2 belong to (0, 1), p1 + p2 < 1, and
Ks,r(x, y) = I_{X+s,r}(x)K+s,r(x, y) + I_{X−s,r}(x)K−s,r(x, y)   (s ≠ 0),
K0,r(x, y) = 1/[C(N, r) C(r, r/2)]   for y ∈ X0,r,
K±s,r(x, y) = 1/[C(N, r) C(r, (r−s)/2)]   for y ∈ X±s,r,
(6.3)
where C(n, k) denotes the binomial coefficient.
Now let M be the Metropolis chain defined by the transition kernel (1.2) with K
as in (6.2), i.e. for every x in X±s,r (s ≠ 0)
M(x, y) =
  (p1/(2N)) min{1, π(y)/π(x)}   if y = x(±j), j = 1, ..., N,
  p2   if y = −x,
  (1 − p1 − p2)/[C(N, r) C(r, (r−s)/2)]   if y ∈ X±s,r, y ≠ x,
  1 − Σ_{z≠x} M(x, z)   if y = x,
while if x belongs to X0,r
M(x, y) =
  (p1/(2N)) min{1, π(y)/π(x)}   if y = x(±j), j = 1, ..., N,
  (1 − p1)/[C(N, r) C(r, r/2)]   if y ∈ X0,r, y ≠ x,
  1 − Σ_{z≠x} M(x, z)   if y = x.
By construction M is an aperiodic, irreducible and reversible chain with stationary
distribution π.
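A minimal sketch of one draw from the proposal (6.2) (illustrative function names; the Metropolis accept/reject step of kernel (1.2) is omitted, and the uniform draw from an energy level is done here by direct combinatorial sampling rather than the Bose–Einstein sampling mentioned in Section 5):

```python
import random

def single_spin_move(x):
    """K_E: pick a coordinate and shift it by +1 or -1 on {-1,0,1},
    with the convention 2 = -1 and -2 = 1 used above."""
    y = list(x)
    j = random.randrange(len(x))
    v = y[j] + random.choice((+1, -1))
    y[j] = {2: -1, -2: 1}.get(v, v)
    return y

def sample_level(N, s, r):
    """Uniform draw from the states with S_N = s and R_N = r: choose which r
    coordinates are nonzero, then which (r + s)//2 of them are +1."""
    support = random.sample(range(N), r)
    plus = set(random.sample(support, (r + s) // 2))
    x = [0] * N
    for i in support:
        x[i] = 1 if i in plus else -1
    return x

def propose_beg(x, p1, p2):
    """One draw from the mixed proposal (6.2)."""
    N = len(x)
    s, r = sum(x), sum(v * v for v in x)
    u = random.random()
    if u < p1:
        return single_spin_move(x)
    if s == 0:                       # x in X_{0,r}: no global-flip component
        return sample_level(N, 0, r)
    if u < p1 + p2:
        return [-v for v in x]       # global flip, energy preserved
    return sample_level(N, s, r)     # uniform on the current (|S_N|, R_N) level
```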
Also in this case, to bound the spectral gap of M , we shall use the chain decom-
position tools. Let
DN = {(0, 0), (1, 1), (0, 2), (2, 2), (1, 3), (3, 3), (0, 4), (2, 4), (4, 4), ..., (0, N), (2, N), ..., (N,N)}
and, for every couple (s, r), (s̃, r̃) in DN , with (s, r) ≠ (s̃, r̃), let
P̄((s, r), (s̃, r̃)) := [1/(2π(Xs,r))] Σ_{x∈Xs,r} Σ_{y∈Xs̃,r̃} M(x, y)π(x),
P̄((s, r), (s, r)) := 1 − Σ_{(s̃,r̃)≠(s,r)} P̄((s, r), (s̃, r̃)).
Once again, note that P̄ is a reversible chain on DN with stationary distribution
π̄(s, r) := π(Xs,r).
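As a numerical illustration of this aggregation step (illustrative function names; M is a transition matrix as a NumPy array, pi its stationary vector, and blocks a list of integer index arrays forming a partition; the 1/(2π(·)) normalization mirrors the definition above), one can build P̄ and a crude spectral gap as follows:

```python
import numpy as np

def project_chain(M, pi, blocks):
    """Aggregated chain P-bar: off-diagonal entries collect the flow between
    blocks, normalized by 2*pi(block); the diagonal takes the leftover mass."""
    m = len(blocks)
    Pbar = np.zeros((m, m))
    w = np.array([pi[b].sum() for b in blocks])
    for a in range(m):
        for b in range(m):
            if a != b:
                flow = (pi[blocks[a]][:, None] * M[np.ix_(blocks[a], blocks[b])]).sum()
                Pbar[a, b] = flow / (2.0 * w[a])
        Pbar[a, a] = 1.0 - Pbar[a].sum()
    return Pbar, w

def spectral_gap(P):
    """Crude gap: 1 minus the second-largest eigenvalue modulus."""
    lam = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return 1.0 - lam[1]
```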
Moreover, for every (s, r) in DN , define a chain on Xs,r setting
PXs,r(x, y) := M(x, y) + Ix(y) Σ_{z∈X^c_{s,r}} M(x, z),
where both x and y belong to Xs,r. In the same way, define chains on X+s,r and X−s,r
for (s, r) in DN , s ≠ 0, setting
PX±s,r(x, y) := PXs,r(x, y)   (y ≠ x, x, y ∈ X±s,r),
PX±s,r(x, x) := 1 − Σ_{y∈X±s,r, y≠x} PXs,r(x, y).
These chains are reversible on Xs,r (X±s,r, respectively) and have as stationary dis-
tributions
πXs,r(x) := π(x)/π(Xs,r) = 1/|Xs,r|   and   πX±s,r(x) := πXs,r(x)/πXs,r(X±s,r) = 1/|X±s,r|,
respectively. Finally, for every (s, r) in DN , s ≠ 0, define a chain on {+,−} setting
Ps,r(+,−) := [1/(2πXs,r(X+s,r))] Σ_{x∈X+s,r} Σ_{y∈X−s,r} PXs,r(x, y)πXs,r(x),
Ps,r(−,+) := [1/(2πXs,r(X−s,r))] Σ_{x∈X−s,r} Σ_{y∈X+s,r} PXs,r(x, y)πXs,r(x).
At this stage, the lower bound (3.4), applied two times, yields
Gap(M) ≥ (1/2) Gap(P̄) min_{(s,r)∈DN} Gap(PXs,r)
       ≥ (1/2) Gap(P̄) min{ min_{r=0,2,...,N} Gap(PX0,r), min_{(s,r)∈DN, s≠0} Gap(Ps,r) min{Gap(PX+s,r), Gap(PX−s,r)} }.
(6.4)
To derive from the last bound a more explicit bound we need some preliminary
work. The first result we need is exactly the analogue of Lemma 5.2.
Lemma 6.2. For every r = 1, . . . , N
Gap(PX0,r ) ≥ (1− p1)Gap(K0,r) = (1− p1),
moreover, for every (s, r) in DN with s 6= 0,
Gap(PX±s,r) ≥ (1 − p1 − p2)Gap(K±s,r) = (1 − p1 − p2).
Finally, for every (s, r) in DN ,
Gap(Ps,r) = p2.
Hence, (6.4) can be rewritten as
(6.5) Gap(M) ≥ Gap(P̄) p2 min{(1 − p1)/2, (1 − p1 − p2)/2}.
It remains to bound Gap(P̄). Unfortunately the analogue of Proposition 5.1
is not so simple, hence we shall require an additional hypothesis. In what follows
q|[N ]|(r) :=
(r−2i)2
if r is even
q|[N ]|(r) : =
(r−2i)2
if r is odd
r = 0, 1, . . . , N and set
A = {β > 0,K > 0 : ∃N0 such that ∀N ≥ N0, q|[N ]| is unimodal}.
Lemma 6.3. For every (β,K) in A
Gap(P̄ ) ≥ Cp
for a suitable constant C = C(β,K).
Under the same assumptions of the previous Lemma we can state the main
results of this section.
Proposition 6.4. For every (β,K) in A
Gap(M) ≥ C̃p
for a suitable constant C̃ = C̃(β,K).
Figure 1. The function q|[N]| for N = 15 and a few values of β and K (panels:
β = 2.3 with K = 0.5, 0.6, 0.7, 0.8; K = 0.5 with β = 2.3, 2.4, 2.5, 2.6;
β = 2.3 with K = 1.2, 1.3, 1.4, 1.5; K = 1.2 with β = 2.3, 2.4, 2.5, 2.6).
We conjecture that Gap(P̄) is polynomial in 1/N for every (β,K) such that β ≠
Γ(K) (where Γ is the function of Proposition 6.1), but we are not able to prove this
conjecture. In point of fact we conjecture that R+ × R+ \ {(β,K) : Γ(K) = β} ⊂ A.
We plotted q|[N]| for different N , β and K, and these plots seem, at least, to confirm
that R+ × R+ \ {(β,K) : |Γ(K) − β| ≤ ǫ} ⊂ A for a suitable small ǫ. In Figure 1
we show the graph of q|[N]| for a few different N , β and K.
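In practice, checking membership in A for given β, K and N reduces to a unimodality test of a finite sequence; a minimal sketch (illustrative name, with the already-computed sequence q|[N]|(0), . . . , q|[N]|(N) taken as input) is:

```python
def is_unimodal(q):
    """Return True if the sequence first (weakly) increases and then (weakly)
    decreases, i.e. it is unimodal in the sense used above."""
    peak = max(range(len(q)), key=lambda i: q[i])
    rising = all(q[i] <= q[i + 1] for i in range(peak))
    falling = all(q[i] >= q[i + 1] for i in range(peak, len(q) - 1))
    return rising and falling

print(is_unimodal([1, 3, 7, 6, 2]))   # True
print(is_unimodal([1, 3, 2, 4, 1]))   # False
```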
Appendix A. The Spectral Gap of a Birth and Death Chain
We derive here some bounds on the eigenvalues of a birth and death chain that we
shall use later. These bounds are obtained using the so called geometric techniques,
see [7]. Let Pn be a birth and death chain on Ωn = {1, . . . , n}. Assume that Pn
is reversible with respect to a probability pn, that is pn(i)Pn(i, j) = pn(j)Pn(j, i).
Moreover let
1 > λ1 ≥ λ2 ≥ . . . ≥ λn−1 ≥ −1
be the eigenvalues of Pn.
We can now prove the following variant of Proposition 6.3 in [6].
Lemma A.1. If there exist positive constants A, q, B and an integer k such that
Pn(i, i ± 1) ≥ A n^{−q}   (i ≠ 1, n),
Pn(1, 2) ≥ A n^{−q},
Pn(n, n − 1) ≥ A n^{−q},
pn(i) ≤ B pn(j)   for i ≤ j ≤ k,
pn(j) ≤ B pn(i)   for k ≤ i ≤ j,
then
λ1 ≤ 1 − A/(B n^{q+2}).
Proof. We use the notation and the techniques of [7], see also [3] and [6]. Choose
the set of paths
Γ = {γij = (i, i+ 1, ..., j); i ≤ j; i, j ∈ Ωn}
and for e = (i, i+ 1) (i < n) let
ψ(e) = [1/(pn(i)Pn(i, i + 1))] Σ_{γl,m∈Γ : γl,m∋e} |γl,m| pn(l) pn(m),
where |γ| is the length of the path γ. Setting K := sup_e ψ(e) one has
λ1 ≤ 1 − 1/K
(see Proposition 1’ in [7], or Exercise 6.4 page 248 in [3]). So, for our purposes, it
suffices to give an upper bound on K. Assume first that e = (i, i + 1) with
i < k ≤ n; since |γl,m| ≤ n, it follows that
ψ(e) ≤ [n^{q+1}/A] Σ_{r≤i} Σ_{s≥i+1} pn(r)pn(s)/pn(i)
     = [n^{q+1}/A] ( Σ_{r≤i} pn(r)/pn(i) ) ( Σ_{s≥i+1} pn(s) )
     ≤ n^{q+2} B/A.
All the other cases can be treated in the same way. Hence,
ψ(e) ≤ B n^{q+2}/A,
and then
λ1 ≤ 1 − A/(B n^{q+2}).
As for the smaller eigenvalues, Gershgorin theorem yields that
λn−1 ≥ −1 + 2 min_i Pn(i, i).
See, for instance, Corollary 2.1 in the Appendix of [3]. Hence, if there exists a
positive constant D such that
Pn(i, i+ 1) + Pn(i, i− 1) ≤ D/2
for every i, then
(A.1) λn−1 ≥ 1−D.
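A small numerical check of the Gershgorin-type bound (A.1) on a toy birth and death chain (the rates below are arbitrary illustrative choices, and the path argument of Lemma A.1 is not reproduced here):

```python
import numpy as np

def birth_death_chain(p_up, p_down):
    """Tridiagonal transition matrix on {1,...,n} with the given up/down
    probabilities; the remaining mass stays on the diagonal."""
    n = len(p_up)
    P = np.zeros((n, n))
    for i in range(n):
        if i < n - 1:
            P[i, i + 1] = p_up[i]
        if i > 0:
            P[i, i - 1] = p_down[i]
        P[i, i] = 1.0 - P[i].sum()
    return P

def eigenvalue_bounds_check(P):
    """Return (second largest eigenvalue, smallest eigenvalue,
    Gershgorin bound -1 + 2*min_i P(i,i)) as in (A.1)."""
    lam = np.sort(np.linalg.eigvals(P).real)[::-1]
    return lam[1], lam[-1], -1.0 + 2.0 * P.diagonal().min()

n = 20
P = birth_death_chain(p_up=[0.3] * (n - 1) + [0.0],
                      p_down=[0.0] + [0.2] * (n - 1))
print(eigenvalue_bounds_check(P))   # smallest eigenvalue exceeds the bound
```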
Appendix B. Proofs
To prove Proposition 5.1 we need first to show that π̄ is essentially unimodal.
Lemma B.1. Let
qN(i) = C(N, (N + i)/2) exp{β i²/(2N)},   i = 0, 2, 4, . . . , N.
For every β < 1 there exists an integer N0 such that for every N ≥ N0
qN (i) ≤ qN (j)
whenever j ≤ i. For every β ≥ 1 there exists an integer N0 such that for every
N ≥ N0
qN (i) ≤ qN (j)
whenever i ≤ j ≤ kN and
qN (i) ≥ qN (j)
whenever kN ≤ i ≤ j, kN being a suitable integer.
Proof. Let ∆N(i) be the ratio
∆N(i) = qN(i + 2)/qN(i),   i = 0, 2, 4, ..., N − 2,
so that
∆N(i) = [(N − i)/(N + 2 + i)] exp{(2β/N)(1 + i)}.
Setting ∆N(x) = [(N − x)/(N + 2 + x)] exp{(2β/N)(1 + x)}, x in [0, N − 2], it is enough to prove that
x ↦ ∆N(x) takes the value 1 at most once in [0, N − 2], for sufficiently large N.
To prove this last claim first note that
∆N(0) = [N/(N + 2)] exp{2β/N} = 1 − (2/N)(1 − β) + O(1/N²).
Hence, there exists N0 in N such that for N ≥ N0:
β ≥ 1 ⇒ ∆N (0) > 1
β < 1 ⇒ ∆N (0) < 1.
As for the first derivative note that
∆′N (x) =
−2(N + 1) + 2β(N + 2)− 2β
(x2 + 2x)
(N + x+ 2)2
(1 + x)
hence ∆′N (x) = 0 if and only if
−2(N + 1) + 2β(N + 2)− 2β
(x2 + 2x) = 0.
18 FEDERICO BASSETTI AND FABRIZIO LEISEN
Rearranging the last equation as
x2 − 4β
+ 2[(β − 1)N + 2β − 1] = 0
one sees that the roots are
x1,2 = 1±
2β − 1
β − 1
Hence, after setting
r := 1 +
2β − 1
β − 1
N2 and r := 1 +
one has
β < 1 ⇒ ∆′N (x) < 0 ∀x ∈ [0, N − 2]
β > 1 ⇒ ∆′N (x) > 0 for x ∈ [0, r)
∆′N (x) < 0 for x ∈ (r,N − 2]
β = 1 ⇒ ∆′N (x) < 0 for x ∈ [0, r)
∆′N (x) < 0 for x ∈ (r,N − 2]
and this concludes the proof. �
Proof of Proposition 5.1. By direct computations it is easy to prove (5.4). Hence
P̄(i, i ± 2) ≥ p1/(4(N + 2))   and   P̄(i, i + 2) + P̄(i, i − 2) ≤ p1.
Now observe that
π̄(0) = qN(0)/ZN(β),   π̄(i) = 2 qN(i)/ZN(β),   i ≠ 0.
Hence, by Lemma B.1, if β < 1
π̄(i) ≤ 2π̄(j)
whenever j ≤ i and N is large enough. While for β > 1
π̄(i) ≤ π̄(j)
whenever i ≤ j ≤ kN and
π̄(i) ≥ π̄(j)
whenever kN ≤ i ≤ j. The thesis follows now by Lemma A.1 and by (A.1). �
In order to prove Lemma 5.2 we recall that by Rayleigh’s theorem
(B.1)   1 − λ1(P) = inf { Ep(f, f)/Varp(f) : f nonconstant },
where
Ep(f, f) := ⟨(I − P)f, f⟩p = (1/2) Σ_{x,y} (f(x) − f(y))² P(x, y) p(x),
P being a reversible chain w.r.t. p, moreover
(B.2)   1 − |λN−1(P)| = inf { (1/2) Σ_{x,y} (f(x) + f(y))² P(x, y) p(x) / Varp(f) : f nonconstant }
(see, for instance, Theorem 2.3 in Chapter 6 of [3] and Section 2.1 of [8]). At this
stage set
Pǫ(x, y) := (1 − ǫ)P (x, y) + ǫIx(y).
Hence, (B.1) yields
1 − λ1(Pǫ) = inf_{f∈L²(p), f nonconstant} [ (1/2) Σ_{x,y}(f(x) − f(y))² Pǫ(x, y) p(x) / Varp(f) ]
           = inf_{f∈L²(p), f nonconstant} [ (1 − ǫ) (1/2) Σ_{x≠y}(f(x) − f(y))² P(x, y) p(x) / Varp(f) ]
           = (1 − ǫ)(1 − λ1(P)).
Arguing in the same way and using (B.2) we get
1− |λ|X |−1(Pǫ)| ≥ (1 − ǫ)(1− |λ|X |−1(P )|).
Hence,
(B.3) Gap(Pǫ) ≥ (1− ǫ)Gap(P ).
Proof of Lemma 5.2. Note that
PX±i(x, y) = (1 − p1 − p2)K±i(x, y) + (p1 + p2)Ix(y)
and, analogously,
PX0(x, y) = (1 − p1)K0(x, y) + p1Ix(y).
Hence, by (B.3),
Gap(PX±i) ≥ (1 − p1 − p2)Gap(K±i)
as well as
Gap(PX0) ≥ (1 − p1)Gap(K0).
Finally note that Pi is given by
Pi = ( 1 − p2/2   p2/2 ; p2/2   1 − p2/2 )
for every i, hence Gap(Pi) = p2. □
Proof of Proposition 5.3. To prove the first part of the proposition it is enough to
combine Lemma 5.2, Proposition 5.1 and (5.3). To complete the proof observe that
Gap(K±i) = Gap(K0) = 1, when K±i and K0 are given by (5.2). □
In order to prove Proposition 6.1 we need some results obtained in [10].
Theorem B.2 (Ellis-Otto-Touchette). Let ρN be the distribution of SN (x)/N un-
der πβ,K,N , then ρN satisfies a large deviation principle on [−1, 1] with rate function
Ĩβ,K(z) = Jβ(z) − βKz² − inf_t {Jβ(t) − βKt²},
where
Jβ(z) = sup_t { tz − log[ (1 + e^{−β}(e^t + e^{−t})) / (1 + 2e^{−β}) ] }.
Moreover, if Ẽβ,K := argmin Ĩβ,K, then there exists a non decreasing function Γ :
(0,+∞) → (0,+∞) with limx→0 Γ(x) = +∞ and limx→∞ Γ(x) = γc ≃ 1.082 such
that for every (β,K) with K > Γ(β)
Ẽβ,K = {±z(β,K)}, with z(β,K) ≠ 0.
In particular, for such (β,K) and for every 0 < ǫ < |z(β,K)| there exists a constant
C1 = C1(ǫ, β,K) such that
(B.4)   ρN([0, ǫ]) ≤ C1 exp{−N γǫ,β,K},
where
(B.5)   γǫ,β,K = inf_{z∈[0,ǫ]} Ĩβ,K(z) > 0.
Proof. For the first part see Theorems 3.3, 3.6 and 3.8 in [10]. As for (B.4)-(B.5),
they are standard consequences of the theory of the large deviations and of the first
part of the proposition, see, e.g., Proposition 6.4 of [11]. �
Proof of Proposition 6.1. We intend to use Cheeger’s inequality. To do this, let
A := {x : SN(x) < 0}, B := {x : SN(x) > 0}, C := {x : SN(x) = 0}. First of all note
that, by symmetry, π(A) = π(B) = (1 − π(C))/2 ≤ 1/2. The main task is to bound
φ(A) = Σ_{x∈A} Σ_{y∈Ac} π(x)ME(x, y) = Σ_{x∈A} Σ_{y∈Ac} π(y)ME(y, x).
Now, observe that if SN(y) > 1 then ME(y, x) = 0 for every x in A, hence
φ(A) = Σ_{y:SN(y)=0} Σ_{x∈A} π(y)ME(y, x) + Σ_{y:SN(y)=1} Σ_{x∈A} π(y)ME(y, x)
     ≤ π {y : SN(y) ∈ {0, 1}} .
This yields a bound on the conductance:
h = h(π,ME) ≤ φ(A)/π(A) ≤ 2π{y : SN(y) ∈ {0, 1}} / (1 − π{y : SN(y) = 0}).
Now by Theorem B.2 we get
h(π,ME) ≤ C2 e^{−∆N}
for suitable constants C2 and ∆ > 0. The thesis follows by Cheeger’s inequality
(3.2). □
Proof of Lemma 6.2. The proof is exactly the same as the proof of Lemma 5.2. �
In order to prove Lemma 6.3 it is convenient to fix some simple properties of the
chain P̄ .
Lemma B.3. P̄ is a random walk on DN . If P̄((s, r), (s̃, r̃)) ≠ 0, then
P̄((s, r), (s̃, r̃)) ≥ p1C3/N
for a suitable constant C3 = C3(β,K); moreover
P̄((s, r), (s̃, r̃)) ≤ p1
for every ((s, r), (s̃, r̃)) ≠ ((0, 0), (1, 1)).
Proof of Lemma B.3. Easy but tedious computations show that
P̄ ((0, 0), (1, 1)) =
1, exp{Kβ
P̄ ((0, N), (1, N − 1)) = p1
P̄ ((0, N), (2, N)) =
P̄ ((0, r), (2, r)) =
r = 0, 2, 4, ..., N − 2
P̄ ((0, r), (1, r − 1)) = p1
r = 0, 2, 4, ..., N − 2
P̄ ((0, r), (1, r + 1)) =
1, exp{Kβ
r = 0, 2, 4, ..., N − 2
P̄ ((s, r), (s + 2, r)) =
(r − s)
(s, r) ∈ DN , 0 < s ≤ N − 2, r ≤ N
P̄ ((s, r), (s − 2, r)) = p1
(r + s) exp{4Kβ
(1 − s)}
(s, r) ∈ DN , 0 < s ≤ N, r ≤ N
P̄ ((s, r), (s + 1, r + 1)) =
(N − r)min
1, exp{Kβ
(2s+ 1)− β}
(s, r) ∈ DN , 0 < s, r ≤ N − 1,
P̄ ((s, r), (s − 1, r + 1)) = p1
(N − r) exp{Kβ
(−2s+ 1)− β}
(s, r) ∈ DN , 0 < s, r ≤ N − 1,
P̄ ((s, r), (s + 1, r − 1)) = p1
(r − s)
(s, r) ∈ DN , 0 < r ≤ N, 0 < s ≤ N − 2
P̄ ((s, r), (s − 1, r − 1)) = p1
(r + s)min
1, exp{Kβ
(2s+ 1)− β}
(s, r) ∈ DN , 0 < r ≤ N, 0 < s ≤ r.
At this stage the statement follows easily. �
Proof of Lemma 6.3. In order to obtain a bound on the gap of P̄ we shall apply
the decomposition technique once more. Write
DN = X̄1 ∪ X̄2 ∪ X̄3 ∪ ... ∪ X̄N ,
where
X̄1 = {(0, 0), (1, 1)} X̄r = {(u1, u2) ∈ Dn : u2 = r}.
On |[N ]| := {1, ..., N} define a chain P|[N ]| setting
P|[N ]|(i, j) :=
2π̄(X̄i)
a∈X̄i
b∈X̄j
P̄ (a, b)π̄(a)
P|[N ]|(i, i) := 1−
j 6=i
P|[N ]|(i, j).
Again P|[N ]| is a reversible chain on |[N ]| with stationary distribution
π̄|[N ]|(i) := π̄(X̄i).
Finally for every r = 1, 2, . . . , N we define a chain on X̄r by setting
PX̄r(a, b) := P̄ (a, b) + Ia(b)
z∈X̄ cr
P̄ (a, z)
where both a and b belong to X̄r. Now note that for every r = 2, 3, . . . , N PX̄r is
a birth and death chain on the state space {(1, r), (3, r), . . . , (r, r)} for r odd and
{(0, r), (2, r), . . . , (r, r)} for r even. Let
qr(s) :=
(r − s)/2
and, for r even,
qr(0) := 2
Now observe that PX̄r has stationary distribution
πr(s) ∝ qr(s)
with s = 0, 2, . . . , r if r is even and s = 1, 3, . . . , r if r is odd. First of all let r 6= 1, by
Lemma B.3 and Lemma B.1, it is easy to check that (PX̄r , πr) meets the condition
of Lemma A.1 with
B = 2, n = [(r + 2)/2], A = C3p1[(r + 2)/2]N
([x] being the integer part of x) and then
1− λ1(PX̄r ) ≥
C3p1[(r + 2)/2]
2N [(r + 2)/2]3
Finally, Lemma B.3 with (A.1) yields
λ|X̄r|−1(PX̄r ) ≥ 1− p1.
Hence, for every r 6= 1, we have proved that
(B.6) Gap(PX̄r) ≥ C3/2p1N
For r = 1
PX̄1 =
1− α1/2 α1/2
α2/2 1− α2/2
where
α1 :=
1, exp{
α2 := p1 min
1, exp{Kβ
Gap(PX̄1) ≥ 1− |
2− α1 − α2
| = α1 + α2
where the last equality follows from the fact that α1
and α2
. Hence, for
sufficiently large N , it’s easy to see that
(B.7) Gap(PX̄1) ≥ C4p1N
with C4 = C4(β,K). At this stage (B.6) with (B.7) gives
(B.8) Gap(PX̄r) ≥ C5p1N
for all r ∈ |[N ]|. As for the gap of P|[N ]|, first of all note that P|[N ]| is a birth and
death chain on |[N ]|. From Lemma B.3
P|[N ]|(i, i+1) :=
2π̄(X̄i)
a∈X̄i
b∈X̄i+1
P̄ (a, b)π̄(a) ≥ p1C3
2π̄(X̄i)
a∈X̄i
b∈X̄i+1
π̄(a) ≥ p1C3
and analogously,
P|[N ]|(i, i− 1) ≥
Now, for r 6= 1
π̄|[N ]|(r) = q|[N ]|(r)/(
q|[N ]|(i))
while
π̄|[N ]|(1) = (q|[N ]|(1) + q|[N ]|(0))/(
q|[N ]|(i)).
So, using the unimodality of q|[N ]|, we can apply Lemma B.3 with
which gives
λ1(P|[N ]|) ≤ 1−
≤ 1− p1C3
Using Lemma B.3 once more, together with (A.1), we get
λN (P|[N ]|) ≥ 1− p1.
Combining this two bounds we have
(B.9) Gap(P|[N ]|) ≥
and so from (3.4)
Gap(P̄ ) ≥
C being a suitable constant that depends on β, K, C3, C4, C5. □
Proof of Proposition 6.4. Combine Lemma 6.3 with (6.5). □
Acknowledgments
We should like to thank Persi Diaconis for useful discussions and for having
encouraged us during this work, Antonietta Mira for suggesting some interesting
references and Claudio Giberti for helping to improve an earlier version of the paper.
References
[1] J. Besag and P. J. Green. Spatial statistics and Bayesian computation. J. Roy. Statist. Soc.
Ser. B, 55(1):25–37, 1993.
[2] M. Blume, V. J. Emery, and R. B. Griffiths. Ising model for the λ transition and phase
separation in he3-he4 mixtures. Phys. Rev. A, 4:1071–1077, 1971.
[3] P. Bremaud. Markov Chains. Springer-Verlag, New York, 1998.
[4] S. Caracciolo, A. Pelissetto, and A. D. Sokal. Nonlocal Monte Carlo algorithm for self-avoiding
walks with fixed endpoints. J. Statist. Phys., 60(1-2):1–53, 1990.
[5] S. Caracciolo, A. Pelissetto, and A. D. Sokal. Dynamic critical exponent of the BFACF
algorithm for self-avoiding walks. J. Statist. Phys., 63(5-6):857–865, 1991.
[6] P. Diaconis and L. Saloff-Coste. What do we know about the Metropolis algorithm? J.
Comput. System Sci., 57(1):20–36, 1998. 27th Annual ACM Symposium on the Theory of
Computing (STOC’95) (Las Vegas, NV).
[7] P. Diaconis and D. Stroock. Geometric bounds for eigenvalues of Markov chains. Ann. Appl.
Probab., 1(1):36–61, 1991.
[8] M. Dyer, L. A. Goldberg, M. Jerrum, and R. Martin. Markov chain comparison. Probab.
Surv., 3:89–111 (electronic), 2006.
[9] R. G. Edwards and A. D. Sokal. Generalization of the Fortuin-Kasteleyn-Swendsen-Wang
representation and Monte Carlo algorithm. Phys. Rev. D (3), 38(6):2009–2012, 1988.
[10] R. S. Ellis, P. T. Otto, and H. Touchette. Analysis of phase transitions in the mean-field
Blume-Emery-Griffiths model. Ann. Appl. Probab., 15(3):2203–2254, 2005.
[11] R.S. Ellis. The Theory of Large Deviation and Applications to Statistical Mechanics.
2006. Lectures for the international seminar on Extreme Events in Complex Dynamics.
http://www.math.umass.edu/ ˜ rsellis/pdf-files/Dresden-lectures.pdf Max-Planck-Institut.
[12] A. E. Gelfand, S. K. Sahu, and B. P. Carlin. Efficient parameterisations for normal linear
mixed models. Biometrika, 82(3):479–488, 1995.
[13] A. Gelman, G. O. Roberts, and W. R. Gilks. Efficient Metropolis jumping rules. In Bayesian
statistics, 5 (Alicante, 1994), Oxford Sci. Publ., pages 599–607. Oxford Univ. Press, New
York, 1996.
[14] C. J. Geyer and E. A. Thompson. Annealing Markov chain Monte Carlo with applications to
ancestral inference. J. Amer. Statist. Assoc., 90:909–920, 1995.
[15] Y. Guan, R. Fleißner, P. Joyce, and S. M. Krone. Markov chain Monte Carlo in small worlds.
Stat. Comput., 16(2):193–202, 2006.
[16] Y. Guan and S. M. Krone. Small-world mcmc and convergence to multi-modal distributions:
From slow mixing to fast mixing. Ann. Appl. Probab., 17(1):284–304, 2007.
[17] J. M. Hammersley and D. C. Handscomb. Monte Carlo methods. Methuen & Co. Ltd., Lon-
don, 1965.
[18] W.K. Hastings. Monte carlo sampling methods using markov chains and their application.
Biometrika, 57:97–109, 1970.
[19] K. Hukushima and K. Nemoto. Exchange monte carlo method and application to spin glass
simulations. J.Phys.Soc.Jpn., 65:1604–1608, 1996.
[20] M. Jerrum, J. Son, P. Tetali, and E. Vigoda. Elementary bounds on Poincaré and log-Sobolev
constants for decomposable Markov chains. Ann. Appl. Probab., 14(4):1741–1765, 2004.
[21] S. A. Kivelson, V. J. Emery, and H. Q. Lin. Doped antiferromagnets in the weak-hopping
limit. Phys. Rev. B, 42:6523–6530, 1990.
[22] S. C. Kou, Qing Zhou, and Wing Hung Wong. Equi-energy sampler with applications in
statistical inference and statistical mechanics. Ann. Statist., 34(4):1581–1619, 2006.
[23] J. Lajzerowicz and J. Sivardiére. Spin 1 lattice gas model. ii. condensation and phase sepa-
ration in a binary fluid. Phys. Rev. A, 11:2090–2100, 1975.
[24] J. Lajzerowicz and J. Sivardiére. Spin 1 lattice gas model. iii. tricritical points in binary and
ternary fluids. Phys. Rev. A, 11:2101–2110, 1975.
[25] J. S. Liu. Monte Carlo strategies in scientific computing. Springer Series in Statistics.
Springer-Verlag, New York, 2001.
[26] N. Madras and M. Piccioni. Importance sampling for families of distributions. Ann. Appl.
Probab., 9(4):1202–1225, 1999.
[27] N. Madras and D. Randall. Markov chain decomposition for convergence rate analysis. Ann.
Appl. Probab., 12(2):581–606, 2002.
[28] E. Marinari and G. Parisi. Simulated tempering: a new Monte Carlo scheme. Europhys. Lett.,
19:451–458, 1992.
[29] N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller, and E. Teller. Equation of state
calculations by fast computing machines. J. Chem. Phys., 21:1087–1092, 1953.
[30] A. Mira and L. Tierney. Efficiency and convergence properties of slice samplers. Scand. J.
Statist., 29(1):1–12, 2002.
[31] K.E. Newman and J.D. Dow. Zinc blende diamond order disorder transition in metastable
crystalline (gaas)1-xge2x alloys. Phys. Rev. B, 27:7495–7508, 1983.
[32] P. H. Peskun. Optimum Monte-Carlo sampling using Markov chains. Biometrika, 60:607–612,
1973.
[33] P. H. Peskun. Guidelines for choosing the transition matrix in Monte Carlo methods using
Markov chains. J. Comput. Phys., 40(2):327–344, 1981.
[34] C. P. Robert and G. Casella. Monte Carlo statistical methods. Springer Texts in Statistics.
Springer-Verlag, New York, second edition, 2004.
[35] R. Y. Rubinstein. Simulation and the Monte Carlo method. John Wiley & Sons Inc., New
York, 1981. Wiley Series in Probability and Mathematical Statistics.
[36] M. Schick and Wei-Heng Shih. Spin 1 model of a microemulsion. Phys. Rev. B, 34:1797–1801,
1986.
[37] A. Sinclair. Algorithms for random generation and counting. Progress in Theoretical Com-
puter Science. Birkhäuser Boston Inc., Boston, MA, 1993. A Markov chain approach.
[38] J. Sivardiére and J. Lajzerowicz. Spin 1 lattice gas model. i. condensation and solidification
of a simple fluid. Phys. Rev. A, 11:2090–2100, 1975.
[39] A. Sokal. Monte Carlo methods in statistical mechanics: foundations and new algorithms. In
Functional integration (Cargèse, 1996), volume 361 of NATO Adv. Sci. Inst. Ser. B Phys.,
pages 131–192. Plenum, New York, 1997.
[40] R. H. Swendsen and J. S. Wang. Non-universal critical dynamics in Monte Carlo simulations.
Phys. Rev. Lett., 58:86–88, 1987.
Università degli Studi di Pavia, Dipartimento di Matematica, via Ferrata 1, 27100
Pavia, Italy
Università dell’Insubria, Dipartimento di economia, via monte generoso 71, 21100
Varese , Italy
E-mail address: [email protected]
E-mail address: [email protected]
|
0704.0907 | Experimental Test of the High-Frequency Quantum Shot Noise Theory in a
Quantum Point Contact | Experimental test of the high frequency quantum shot noise theory in a Quantum
Point Contact
E. Zakka-Bajjani, J. Ségala, F. Portier,∗ P. Roche, and D. C. Glattli†
Nanoelectronic group, Service de Physique de l’Etat Condensé,
CEA Saclay, F-91191 Gif-Sur-Yvette, France
A. Cavanna and Y. Jin
CNRS, Laboratoire de Photonique et Nanostructures ,
Route de Nozay, F-91460 Marcoussis, France
(Dated: November 9, 2018)
We report on direct measurements of the electronic shot noise of a Quantum Point Contact
(QPC) at frequencies ν in the range 4-8 GHz. The very small energy scale used ensures energy
independent transmissions of the few transmitted electronic modes and their accurate knowledge.
Both the thermal energy and the QPC drain-source voltage Vds are comparable to the photon
energy hν leading to observation of the shot noise suppression when Vds < hν/e. Our measurements
provide the first complete test of the finite frequency shot noise scattering theory without adjustable
parameters.
PACS numbers: 73.23.-b,73.50.Td,42.50.-p,42.50.Ar
Pauli’s exclusion principle has striking consequences on
the properties of quantum electrical conductors. In an
ideal quantum wire, it is responsible for the quantization
of the conductance by requiring that at most one electron
(or two for spin degeneracy) occupies the regularly time-
spaced wave-packets emitted by the contacts and propa-
gating in the wire [1]. Concurrently, at zero temperature,
the electron flow is noiseless [2, 3] as can be observed in
ballistic conductors [4, 5, 6]. In more general quantum
conductors, static impurities diffract the noiseless elec-
trons emitted by the contacts. This results in a partition
of the electrons between transmitted or reflected states,
generating quantum shot noise [1, 2, 3, 7, 8]. However,
Pauli’s principle possesses more twists to silence elec-
trons. At finite frequency ν, detection of current fluctu-
ations in an external circuit at zero temperature requires
emission of photons corresponding to a finite energy cost
hν [9]. For drain-source contacts biased at voltage Vds,
a sharp suppression is expected to occur when the pho-
ton energy hν is larger than eVds as an electron emit-
ted by the source can not find an empty state in the
drain to emit such a photon [9, 10, 11]. Another striking
consequence of Pauli’s principle is the prediction of non-
classical photon emission for a conductor transmitting
only one or few electronic modes. It has been shown that
in the frequency range eVds/2h < ν < eVds/h, the popu-
lation of a photon mode obeys a sub-Poissonian statistics
inherited from the electrons [12]. Investigating quantum
shot noise in this high frequency regime using a Quan-
tum Point Contact (QPC) to transmit few modes is thus
highly desirable.
The first step is to check the validity of the above pre-
diction based on a non-interacting picture of electrons.
For 3D or 2D wide conductors with many quantum chan-
nels which are good Fermi liquids, one expects this non-
interacting picture to work well. Indeed, the eVds/h sin-
gularity has been observed in a 3D diffusive wire in the
shot noise derivative with respect to bias voltage [13].
However, for low dimensional systems like 1D wires or
conductors transmitting one or few channels, electron in-
teractions give non-trivial effects. Long 1D wires defined
in 2D electron gas or Single Wall Carbon Nanotubes be-
come Luttinger liquids. Long QPCs exhibit a 0.7 con-
ductance anomaly [14], and a low frequency shot noise
[15] compatible with Kondo physics [16]. Consequently,
new characteristic frequencies may appear in shot noise
reflecting electron correlations. Another possible failure
of the non-interacting finite frequency shot noise model
could be the back-action of the external circuit. For high
impedance circuits, current fluctuations implies potential
fluctuations at the contacts [17]. Also, the finite time re-
quired to eliminate the sudden drain-source charge build-
up after an electron have passed through the conductor
leads to a dynamical Coulomb blockade for the next elec-
tron to tunnel. A peak in the shot noise spectrum at the
electron correlation frequency I/e is predicted for a tun-
nel junction connected to a capacitive circuit [18]. Other
timescales may also be expected which affect both con-
ductance [19] and noise [20] due to long range Coulomb
interaction or electron transit time. These effects have
been recently observed for the conductance [21].
The present work aims at giving a clear-cut test of
the non-interacting scattering theory of finite frequency
shot noise using a Quantum Point Contact transmitting
only one or two modes in a weak interaction regime. It
provides the missing reference mark to which further ex-
periments in strong interaction regime can be compared
in the future. We find the expected shot noise suppres-
sion for voltages ≤ hν/e in the whole 4-8 GHz frequency
range. The data taken for various transmissions perfectly
http://arxiv.org/abs/0704.0907v4
agree with the finite temperature, non-interacting model
with no adjustable parameter. In addition to provide
a stringent test of the theory, the technique developed
is the first step toward the generation of non-classical
photons with QPCs in the microwave range [12]. The
detection technique uses cryogenic linear amplification
followed by room temperature detection. The electron
temperature much lower than hν/kB, the small energy
scale used (eVds ≪ 0.02EF ) ensuring energy independent
transmissions, the high detection sensitivity, and the ab-
solute calibration allow for direct comparison with the-
ory without adjustable parameters. Our technique differs
from the recent QPC high frequency shot noise measure-
ments using on-chip Quantum Dot detection in the 10-
150 GHz frequency range [22]. Although most QPC shot
noise features were qualitatively observed validating this
promising method, the lack of independent determina-
tion of the QPC-Quantum Dot coupling, and the large
voltage used from 0.05 to 0.5EF making QPC transmis-
sions energy dependent, prevent quantitative comparison
with shot noise predictions. However, Quantum Dot de-
tectors can probe the vacuum fluctuations via the stim-
ulated noise while the excess noise detected here only
probes the emission noise [9, 10].
The experimental set-up is represented in fig. 1. A
two-terminal conductor made of a QPC realized in a
2DEG in GaAs/GaAlAs heterojunction is cooled at 65
mK by a dilution refrigerator and inserted between two
transmission lines. The sample characteristics are a 35
nm deep 2DEG with 36.7 m² V⁻¹ s⁻¹ mobility and 4.4 × 10¹⁵ m⁻² electron density. Interaction effects have been
minimized by using a very short QPC showing no sign
of 0.7 conductance anomaly. In order to increase the
sensitivity, we use the microwave analog of an optical re-
flective coating. The contacts are separately connected
to 50 Ω coaxial transmission lines via two quarter wave
length impedance adapters, raising the effective input
impedance of the detection lines to 200 Ω over a one
octave bandwidth centered on 6 GHz. The 200 Ω elec-
tromagnetic impedance is low enough to prevent dynam-
ical Coulomb blockade but large enough for good cur-
rent noise sensitivity. The transmitted signals are then
amplified by two cryogenic Low Noise Amplifiers (LNA)
with Tnoise ≃ 5K. Two rf-circulators, thermalized at
mixing chamber temperature protect the sample from
the current noise of the LNA and ensure a circuit en-
vironment at base temperature. After further amplifi-
cation and eventually narrow bandpass filtering at room
temperature, current fluctuations are detected using two
calibrated quadratic detectors whose output voltage is
proportional to noise power. Up to a calculable gain
factor, the detected noise power contains the weak sam-
ple noise on top of a large additional noise generated by
the cryogenic amplifiers. In order to remove this back-
ground, we measure the excess noise ∆SI(ν, T, Vds) =
SI(ν, T, Vds) − SI(ν, T, 0). Practically, this is done by
FIG. 1: Schematic diagram of the measurement set-up. See
text for details.
applying a 93 Hz 0-Vds square-wave bias voltage on the
sample through the DC input of a bias-T, and detecting
the first harmonic of the square-wave noise response of
the detectors using lock-in techniques. In terms of noise
temperature referred to the 50 Ω input impedance, an
excess noise ∆SI(ν, T, Vds) gives rise to an excess noise
temperature
∆T^{50Ω}_n(ν, T, Vds) = Zeff Z²sample ∆SI(ν, T, Vds) / [kB (2Zeff + Zsample)²].   (1)
Eq. 1 demonstrates the advantage of impedance match-
ing : in the high source impedance limit Zsample ≫ Zeff ,
the increase in noise temperature due to shot noise is
proportional to Zeff . Our set up (Zeff = 200Ω) is thus
four times more efficient than a direct connection of the
sample to standard 50 Ω transmission lines. Finally, the
QPC differential conductance G is simultaneously mea-
sured through the DC input of the bias-Tee using low
frequency lock-in technique.
The very first step in the experiment is to characterize
the QPC. The inset of fig. 4 shows the differential con-
ductance versus gate voltage when the first two modes
are transmitted. As the experiment is performed at zero
magnetic field, the conductance exhibits plateaus quan-
tized in units of G0 = 2e²/h. The short QPC length
(80 nm) leads to a conductance very linear with the
low bias voltage used (δG/G ≤ 6% for Vds ≤ 80µV for
G ≃ 0.5G0). It is also responsible for a slight smooth-
ing of the plateaus. Each mode transmission is extracted
from the measured conductance (open circles) by fitting
with the saddle point model (solid line) [23].
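A sketch of the mode transmissions in the saddle-point picture used for this fit, assuming Büttiker's form D_n(E) = 1/(1 + exp(−2π(E − E_n)/ħω_x)) with equally spaced mode thresholds; the mapping from gate voltage to the energy E, and all parameter names, are assumptions of this sketch rather than values from the experiment:

```python
import numpy as np

def saddle_point_transmissions(E, E1, hw_x, hw_y, n_modes=2):
    """Transmission of the first few QPC modes in the saddle-point model:
    D_n(E) = 1 / (1 + exp(-2*pi*(E - E_n)/hw_x)), with thresholds
    E_n = E1 + n*hw_y.  Energies are in arbitrary common units."""
    return [1.0 / (1.0 + np.exp(-2.0 * np.pi * (E - (E1 + n * hw_y)) / hw_x))
            for n in range(n_modes)]

def conductance_in_G0(E, E1, hw_x, hw_y, n_modes=2):
    """Conductance in units of G0 = 2e^2/h as the sum of the transmissions."""
    return sum(saddle_point_transmissions(E, E1, hw_x, hw_y, n_modes))
```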
We then set the gate voltage to obtain a single mode
at half transmission corresponding to maximum electron
partition (G ≃ 0.5G0). Fig. 2 shows typical excess noise
measured at frequencies 4.22 GHz and 7.63 GHz and
bandwidth 90 MHz and 180 MHz. We note a striking
suppression of shot noise variation at low bias voltage,
and that the onset of noise increases with the measure-
ment frequency. This is in agreement with the photon
suppression of shot noise in a non-interacting system.
FIG. 2: Color Online. Excess noise temperature as a func-
tion of bias voltage, measured at 4.22 GHz (open circles) and
7.63 GHz (open triangles). The dashed lines represent the
linear fits to the data, from which the threshold V0 is de-
duced. The solid lines represent the expected excess noise
SI(ν, Te(Vds), Vds) − SI(ν, Te(0), 0), using Te(Vds) obtained
from eq. 5. The frequency dependent coupling is the only
fitting parameter.
The expected excess noise reads
∆SI(ν, T, Vds) = 2G0 Σi Di(1 − Di) [ (hν − eVds)/(e^{(hν−eVds)/kBT} − 1)
    + (hν + eVds)/(e^{(hν+eVds)/kBT} − 1) − 2hν/(e^{hν/kBT} − 1) ].   (2)
It shows a zero temperature singularity at eVds = hν :
∆SI(ν, T, Vds) = 2G0 Σi Di(1 − Di)(eVds − hν) if eVds > hν and 0 otherwise. At finite temperature, the singular-
ity is thermally rounded. At high bias (eVds ≫ hν, kBT),
equation 2 gives an excess noise
∆SI(ν, T, Vds) = 2G0 Σi Di(1 − Di) (eVds − eV0)   (3)
with eV0 = hν coth(hν/2kBT).   (4)
In the low frequency limit, the threshold V0 charac-
terizes the transition between thermal noise and shot
noise (eV0 = 2kBT ), whereas in the low temperature
limit, it marks the onset of photon suppressed shot noise
(eV0 = hν). As shown on fig. 2, V0 is determined by the
intersection of the high bias linear regression of the mea-
sured excess noise and the zero excess noise axis. Fig. 3
shows V0 for eight frequencies spanning in the 4-8 GHz
range for G ≃ 0.5G0 . Eq. 4 gives a very good fit to
the experimental data. The only fitting parameter is the
electronic temperature Te = 72 mK, very close to the
fridge temperature Tfridge = 65 mK. We will show that
electron heating can account for this small discrepancy.
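A minimal numerical sketch of Eqs. (2) and (4) (SI constants; the helper names are illustrative and the mode transmissions are supplied by the user):

```python
import numpy as np

h, e, kB = 6.62607015e-34, 1.602176634e-19, 1.380649e-23  # SI units

def bose(E, T):
    """E / (exp(E/kB T) - 1), evaluated with the limit kB*T as E -> 0."""
    x = E / (kB * T)
    return np.where(np.abs(x) < 1e-9, kB * T, E / np.expm1(x))

def excess_noise(nu, T, Vds, transmissions, G0=2 * e**2 / h):
    """Finite-frequency excess emission noise of Eq. (2)."""
    F = sum(D * (1 - D) for D in transmissions)   # partition factor
    return 2 * G0 * F * (bose(h * nu - e * Vds, T)
                         + bose(h * nu + e * Vds, T)
                         - 2 * bose(h * nu, T))

def onset_voltage(nu, T):
    """High-bias onset eV0 = h*nu*coth(h*nu / 2 kB T) of Eq. (4), in volts."""
    return h * nu / np.tanh(h * nu / (2 * kB * T)) / e

# Example: 7.63 GHz at an electron temperature of 72 mK, single mode D = 0.5
print(onset_voltage(7.63e9, 0.072) * 1e6, "uV")   # ~32 uV, cf. Fig. 3
print(excess_noise(7.63e9, 0.072, 80e-6, [0.5]))
```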
To get a full comparison with theory, we now inves-
tigate the influence of the transmissions of the first two
electronic modes of the QPC. To do so, we repeat the
same experiment at fixed frequency (here we used a 5.4-
5.9 GHz filter) for different sample conductances. The
FIG. 3: Onset V0 as a function of the observation frequency.
The experimental uncertainty corresponds to the size of the
symbols. The dashed lines correspond to the low (eV0 =
2kBT ) and high (eV0 = hν) frequency limits, and the solid
line is a fit to theory, with the electronic temperature as only
fitting parameter.
FIG. 4: Open circles: d∆SI/d(eVds) deduced from ∆T
Full line : theoretical prediction. The only fitting parameter
is the microwave attenuation. The experimental uncertainty
corresponds to the size of the symbols. Inset : Open circles :
conductance of the QPC as a function of gate voltage. Solid
Line : fit with the saddle point model [23].
noise suppression at Vds ≤ hν/e is the only singularity
we observe, independently of the QPC conductance G.
Fig. 4 shows the derivative with respect to eVds of the
excess noise d∆SI/d(eVds) deduced from the excess noise
temperature measured between 50 µV and 80 µV. This
energy range is chosen so that eVds is greater than hν
by at least 5kBTfridge over the entire frequency range.
The data agree qualitatively with the expected D(1−D)
dependence of pure shot noise, showing maxima at con-
ductances G = 0.5G0, and G = 1.5G0, and minima at
conductances G = G0 and G = 2G0. The short QPC
is responsible for the non zero minima as, when the sec-
ond mode starts to transmit electrons, the first one has
not reached unit transmission (inset of fig. 4). How-
ever, eq. 2 is not compatible with a second maximum
higher than the first one, which is due to electron heating.
The dimensions of the 2-DEG being much larger than
the electron-electron energy relaxation length, but much
smaller than electron-phonon energy relaxation length,
there is a gradient of electronic temperature from the
QPC to the ohmic metallic contacts assumed at the fridge
temperature. Combining the dissipated power IVds with
the Wiedemann-Franz law, one gets [5, 24]
T²e = T²fridge + … ,   (5)
where Gm stands for the total conductance of the 2D
leads, estimated from measurements to be 12 mS ±20%.
The increased noise temperature is then due to both
shot noise and to the increased thermal noise. For a
fridge temperature of 65 mK and G = G0/2, the elec-
tronic temperature will increase from 69 mK to 77 mK
as Vds increases from 50 µV to 80 µV. This accounts
for the small discrepancy between the fridge temper-
ature and the electron temperature deduced from the
variation of V0 with frequency. As G increases, the ef-
fect is more important, as can be seen both in fig. 4
and eq. 5. The solid line in figure 4 gives the av-
erage derivative with respect to eVds of the total ex-
pected excess noise SI(ν, Te(Vds), Vds) − SI(ν, Te(0), 0),
using the attenuation of the signal as a free parame-
ter. The agreement is quite satisfactory, given the ac-
curacy of the saddle point model description of the QPC
transmission. We find a 4.7 dB attenuation, which is
in good agreement with the expected 4 ±1 dB deduced
from calibration of the various elements of the detection
chain. Moreover, the voltage dependent electron temper-
ature obtained from eq. 5 can also be used to evaluate
SI(ν, Te(Vds), Vds) − SI(ν, Te(0), 0) as a function of Vds
at fixed sample conductance G = 0.5G0. The result, as
shown by the solid lines of fig. 2, is in excellent agreement
with experimental observations.
In conclusion, we performed the first direct measure-
ment of the finite frequency shot noise of the simplest
mesoscopic system, a QPC. Accurate comparison of the
data with non-interacting shot noise predictions have
been done showing perfect quantitative agreement. Even
when a single mode is transmitted, no sign of devia-
tion related to interaction was found, as expected for
the experimental parameters chosen for this work. We
have also shown that accurate and reliable high frequency
shot noise measurements are now possible for conductors
with impedance comparable to the conductance quan-
tum. This opens the way to high frequency shot noise
characterization of Carbon Nanotubes, Quantum Dots
or Quantum Hall samples in a regime where microscopic
frequencies are important and will encourage further the-
oretical work in this direction. Our set-up will also allow
to probe the statistics of photons emitted by a phase co-
herent single mode conductor.
It is a pleasure to thank D. Darson, C. Ulysse, P.
Jacques and C. Chaleil for valuable help in the construc-
tion of the experiments, P. Roulleau for technical help,
and X. Waintal for useful discussions.
∗ Electronic address: [email protected]
† Also at LPA, Ecole Normale Supérieure, Paris.
[1] T. Martin and R. Landauer, Phys. Rev. B 45, 1742
(1992)
[2] V. A. Khlus, Zh. Eksp. Teor. Fiz. 93 (1987) 2179 [Sov.
Phys. JETP 66 (1987) 1243].
[3] G. B. Lesovik, Pis’ma Zh. Eksp. Teor. Fiz. 49 (1989) 513
[JETP Lett. 49 (1989) 592].
[4] M. Reznikov, et al., Phys. Rev. Lett. 75, 3340 (1995);
[5] A. Kumar et al., Phys. Rev. Lett. 76, 2778 (1996).
[6] L. Hermann et al., arXiv:cond-mat/0703123v1.
[7] M. Büttiker, Phys. Rev. Lett. 65, 2901 (1990)
[8] Y. M. Blanter and M. Büttiker, Phys. Rep. 336, 1 (2000).
[9] G.B. Lesovik, R. Loosen, JETP Lett. 65, 295 (1997).
Here is made the distinction between emission noise
SI(ν) =
〈I(0)I(τ )〉ei2πντdτ and stimulated noise
SI(−ν). While observation of the later requires excitation
of the sample by external sources, for a zero temperature
external circuit, only SI(ν) should be observed. For an
earlier high frequency shot noise derivation not making
the distinction between SI(ν) and SI(−ν), see Ref.[2, 3].
[10] R. Aguado and L. P. Kouwenhoven, Phys. Rev. Lett. 84,
1986 (2000);
[11] U. Gavish, Y. Levinson, Y. Imry, Phys. Rev. B 62,
R10637 (2000); M. Creux, A. Crepieux, Th. Martin,
Phys. Rev. B 74 115323 (2006).
[12] C. W. J Beenakker and H. Schomerus, Phys. Rev. Lett.
86, 700 (2001); J. Gabelli, et al., Phys. Rev. Lett. 93,
056801 (2004); C. W. J. Beenakker and H. Schomerus
Phys. Rev. Lett. 93, 096801 (2004).
[13] R. J. Schoelkopf et al., Phys. Rev. Lett. 78, 3370 (1997).
[14] K. J. Thomas et al., Phys. Rev. Lett. 77, 135 (1996); K.
J. Thomas et al., Phys. Rev. B 58, 4846 (1998).
[15] P. Roche et al., Phys. Rev. Lett. 93, 116602 (2004); L.
DiCarlo et al., Phys. Rev. Lett. 97, 036810 (2006).
[16] A. Golub, T. Aono, and Y. Meir Phys. Rev. Lett. 97,
186801 (2006)
[17] B. Reulet, J. Senzier, and D. E. Prober, Phys. Rev. Lett.
91, 196601 (2003); M. Kindermann, Yu. V. Nazarov, and
C. W. J. Beenakker Phys. Rev. B 69, 035336 (2004).
[18] D.V. Averin and K.K. Likharev, J. Low Temp.Phys. 62
345 (1986).
[19] M. Büttiker, H. Thomas, and A. Prêtre, Phys. Lett.
A180, 364 (1993); M. Büttiker, A. Prêtre, H. Thomas,
Phys. Rev. Lett. 70, 4114 (1993)
[20] M. H. Pedersen, S. A. van Langen, and M. Buttiker,
Phys. Rev. B 57 (1998) 1838.
[21] J. Gabelli et al., Science 313, 499 (2006). J. Gabelli et
al., Phys. Rev. Lett. 98, 166806 (2007)
[22] E. Onac et al. Phys. Rev. Lett. 96, 176601 (2006).
The experimental onset in Vds for the emission of high
frequency shot noise was larger than expected (Vds ≃
mailto:[email protected]
http://arxiv.org/abs/cond-mat/0703123
5 × hν/e). After submission of this work, Gustavson et
al. reported on a double quantum dot on-chip detector,
yielding to a more quantitative agreement with theory
(arXiv:0705.3166v1).
[23] M. Büttiker Phys. Rev. B 41, 7906-7909 (1990).
[24] A. H. Steinbach, J. M. Martinis, and M. H. Devoret Phys.
Rev. Lett. 76, 3806 (1996)
|
0704.0908 | Extragalactic Radio Sources and the WMAP Cold Spot | Extragalactic Radio Sources and the WMAP Cold spot
Lawrence Rudnick 1, Shea Brown2, Liliya R. Williams3
Department of Astronomy, University of Minnesota
116 Church St. SE, Minneapolis, MN 55455
ABSTRACT
We detect a dip of 20-45% in the surface brightness and number counts of
NVSS sources smoothed to a few degrees at the location of the WMAP cold spot.
The dip has structure on scales of ∼ 1◦ to 10◦. Together with independent all-sky
wavelet analyses, our results suggest that the dip in extragalactic brightness and
number counts and the WMAP cold spot are physically related, i.e., that the
coincidence is neither a statistical anomaly nor a WMAP foreground correction
problem. If the cold spot does originate from structures at modest redshifts, as
we suggest, then there is no remaining need for non-Gaussian processes at the
last scattering surface of the CMB to explain the cold spot. The late integrated
Sachs-Wolfe effect, already seen statistically for NVSS source counts, can now
be seen to operate on a single region. To create the magnitude and angular size
of the WMAP cold spot requires a ∼ 140 Mpc radius completely empty void at
z≤1 along this line of sight. This is far outside the current expectations of the
concordance cosmology, and adds to the anomalies seen in the CMB.
Subject headings: large-scale structure of the universe – cosmic microwave back-
ground – radio continuum: galaxies
1. Introduction
The detection of an extreme “cold spot” (Vielva et al. 2004) in the foreground-corrected
WMAP images was an exciting but unexpected finding. At 4◦ resolution, Cruz et al. (2005)
determine an amplitude of -73 µK, which reduces to -20 µK at ∼10◦ scales (Cruz et al.
[email protected]
[email protected]
[email protected]
http://arxiv.org/abs/0704.0908v2
2007). The non-gaussianity of this extreme region has been scrutinized, (Cayon, Jun &
Treaster 2005; Cruz et al. 2005, 2006, 2007) concluding that it cannot be explained by either
foreground correction problems or the normal Gaussian fluctuations of the CMB. Thus, the
cold spot seems to require a distinct origin – either primordial or local. Across the whole sky,
local mass tracers such as the optical Sloan Digital Sky Survey (SDSS, York et al. 2000) and
the radio NRAO VLA Sky Survey (NVSS, Condon et al. 1998) are seen to correlate with
the WMAP images of the CMB (Pietrobon, Balbi & Marinucci 2006; Cabre et al. 2006),
probably through the late integrated Sachs-Wolfe effect (ISW, Crittenden & Turok 1996).
McEwen et al. (2007) extended the study of radio source/CMB correlations by per-
forming a steerable wavelet analysis of NVSS source counts and WMAP images. They
isolated 18 regions that, as a group, contributed a significant fraction of the total NVSS-
ISW signal. Three of those 18 regions were additionally robust to the choice of wavelet form.
The centroid of one of those three robust correlated regions, (#16), is inside the 10◦ cold
spot derived from WMAP data alone (Cruz et al. 2007), although McEwen et al. (2007)
did not point out this association.
The investigations reported here were conducted independently and originally without
knowledge of the McEwen et al. (2007) analysis. However, our work is a posteriori in
nature, because we were specifically looking for the properties in the direction of the cold
spot. These results thus support and quantify the NVSS properties in the specific direction
of the cold spot, but should be considered along with the McEwen et al. (2007) analysis for
the purposes of an unbiased proof of a WMAP association.
2. Analysis and Characterization of the NVSS “dip”
We examined both the number counts of NVSS sources in the direction of the WMAP
cold spot and their smoothed brightness distribution. The NVSS 21 cm survey covers the
sky above a declination of -40◦ at a resolution of 45”. It has an rms noise of 0.45 mJy/beam
and is accompanied by a catalog of sources stronger than 2.5 mJy/beam. Because of the
short interferometric observations that went into its construction, the survey is insensitive to
diffuse sources greater than ≈ 15’ in extent. Convolution of the NVSS images to larger beam
sizes, as done here, shows the integrated surface brightness of small extragalactic sources;
this is very different than what would be observed by a single dish of equivalent resolution. In
the latter case, the diffuse structure of the Milky Way Galaxy dominates (e.g, Haslam et al.
(1981)), although it is largely invisible to the interferometer.
To explore the extragalactic radio source population in the direction of the WMAP
cold spot, we first show in Figure 1 the 50◦×50◦ region around the cold spot convolved to a
resolution of 3.4◦. Here, the region of the cold spot is seen to be the faintest region on the
image (minimum at lII ,bII = 207.8
◦, -56.3◦). At minimum, its brightness is 14 mK below
the mean, with an extent of ≈5◦. The WMAP cold spot thus picks out a special region in
the NVSS – at least within this 2500 square degree region.
We examined the smoothed brightness distribution across the whole NVSS survey using
another averaging technique that reduces the confusion from the brightest sources. We first
pre-convolved the images to 800”, which fills all the gaps between neighboring sources, and
then calculated the median brightness in sliding boxes 3.4◦ on a side. The resulting image
is shown in Figure 2 which is in galactic coordinates centered at lII=180
◦. The dark regions
near the galactic plane are regions of the NVSS survey that were perturbed by the presence
of very strong sources. Note that the galactic plane itself, which dominates single dish maps,
is only detectable here between -20◦ < lII < 90
◦, where there is a local increase in the number
of small sources detectable by the interferometer.
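A schematic version of this two-step smoothing on a pixelized image (the Gaussian pre-convolution stands in for the 800'' smoothing and the box size for the 3.4° boxes; pixel scales and function names are assumptions of this sketch, and the real NVSS maps are interferometric images rather than simple arrays):

```python
import numpy as np
from scipy import ndimage

def smoothed_median_map(image, beam_fwhm_pix, box_pix):
    """Convolve the map with a Gaussian of the chosen FWHM (in pixels), then
    take the median in a sliding square box of side box_pix pixels."""
    sigma = beam_fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    pre = ndimage.gaussian_filter(image, sigma)
    return ndimage.median_filter(pre, size=box_pix)
```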
To evaluate the NVSS brightness properties of the cold spot, we compared it with the
distribution of median brightnesses in two strips from this all sky map. The first strip was
in the north, taking everything above a nominal galactic latitude of 30◦ (more precisely,
we used the horizontal line in the Aitoff projection tangent to the 30◦ line at lII=180◦).
The second strip was in the south, taking everything below a nominal galactic latitude of
-30◦, but only from 10◦ < lII < 180◦, to avoid regions near the survey limit of δ=-40◦. The
minimum brightness in the cold spot region (≈ 20 mK) is equal to the lowest values seen in
the 16,800 square degree area of the two strips, and is ≈ 30% below the mean (Figure 3).
Formally, the probability of finding this weakest NVSS spot within the ≈10◦ (diameter)
region of the WMAP cold spot is 0.6%. This a posteriori analysis thus is in agreement with
the statistical conclusions of McEwen et al. (2007) that the NVSS properties in this region
are linked to those of WMAP. We also note that the magnitude of the NVSS dip is at the
extreme, but not an outlier of the overall brightness distribution. We thus expect that less
extreme NVSS dips would also individually correlate with WMAP cold regions, although it
may be more difficult to separate those from the primordial fluctuations.
The NVSS brightness dip can be seen at a number of resolutions, and there is probably
more than one scale size present. At resolutions of 1◦, 3.4◦, and 10◦, we find that the dip is
≈ 60%, 30% and 10% of the respective mean brightness. At 10◦ resolution, the NVSS deficit
overlaps with another faint region about 10◦ to the west, while the average dip in brightness
then decreases from -14 mK (at 3.4◦) to -4 mK.
The dip in NVSS brightness in the WMAP cold spot region is not due to some peculiarity
of the NVSS survey itself. In Figure 4, we compare the 1◦ convolved NVSS image with the
similar resolution, single dish 408 MHz image of Haslam et al. (1981). This 408 MHz all
sky map is dominated in most places by galactic emission, and was used by Bennett et al.
(2003) as a template for estimating the synchrotron contribution in CMB observations.
On scales of 1◦, the fluctuations in brightness are a combination of galactic (diffuse) and
extragalactic (smeared small source) contributions. In the region of the cold spot, we can
see the extragalactic contribution at 408 MHz by comparison with the smoothed NVSS
1.4 GHz image. Note that although there is flux everywhere in the NVSS image, this is
the “confusion” from the smoothed contribution of multiple small extragalactic sources in
each beam, whereas the 408 MHz map has strong diffuse galactic emission as well. Strong
brightness dips are seen in both images in the region of the WMAP cold spot - with the
brightness dropping by as much as 62% in the smoothed NVSS; in the 408 MHz map, this
is diluted by galactic emission.
To look more quantitatively at the source density in the cold spot region, we measured
the density of NVSS sources (independent of their fluxes) as a function of distance from
the cold spot. Figure 5 shows the counts in equal area annuli around the WMAP cold spot
down to two different flux limits. With a limit of 5 mJy, there is a 45±8% decrease in counts
in the 1◦ radius circle around the WMAP cold spot centroid. At the survey flux limit of
2.5 mJy, the decrease is 23±3%. This reduction in number counts is what is measured by
McEwen et al. (2007) in their statistical all-sky analysis.
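A minimal sketch of such annulus counting, using a plain haversine separation and equal-solid-angle annuli in the small-angle approximation (function names are illustrative; ra, dec and flux are catalog columns as NumPy arrays):

```python
import numpy as np

def angular_separation(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (inputs in degrees)."""
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    s = np.sin((dec2 - dec1) / 2) ** 2 + \
        np.cos(dec1) * np.cos(dec2) * np.sin((ra2 - ra1) / 2) ** 2
    return np.degrees(2 * np.arcsin(np.sqrt(s)))

def counts_in_equal_area_annuli(ra, dec, flux, center, flux_limit, n_annuli, r_max):
    """Counts of sources above flux_limit in annuli of equal solid angle around
    center = (ra0, dec0); equal areas follow from edges scaling as sqrt(k)."""
    sep = angular_separation(ra, dec, center[0], center[1])
    keep = flux >= flux_limit
    edges = r_max * np.sqrt(np.arange(n_annuli + 1) / n_annuli)
    counts, _ = np.histogram(sep[keep], bins=edges)
    return edges, counts
```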
3. Foreground Corrections
Several studies have claimed that the properties of the cold spot are most likely an effect
of incorrect foreground subtraction (Chiang & Naselsky 2004; Coles 2005; Liu & Zhang
2005; Tojeiro et al. 2006). This possibility has been investigated in detail for both the
first year (Vielva et al. 2004; Cruz et al. 2005, 2006) and third (Cruz et al. 2007) year
WMAP data. The arguments against foreground subtraction errors can be summarized in
three main points – 1) The region of the spot shows no spectral dependence in the WMAP
data. This is consistent with the CMB and inconsistent with the known spectral behavior
of galactic emission (as well as the SZ effect). The flat (CMB-like) spectrum is found both
in temperature and kurtosis, as well as in real and wavelet space. 2) Foreground emission
is found to be low in the region of the spot, making it unlikely that an over-subtraction
could produce an apparent non-Gaussianity. 3) Similar results are found when using totally
independent methods to model and subtract out the foreground emission (Cruz et al. 2006),
namely the combined and foreground cleaned Q-V-W map (Bennett et al. 2003) and the
weighted internal linear combination analysis (Tegmark et al. 2003).
Now that we know that there is a reduction in the extragalactic radio source contribution
in the direction of the cold spot, we can re-examine this issue. We ask whether a 20-30%
decrement in the local brightness of the extragalactic synchrotron emission could translate
into a foreground subtraction problem that could generate the WMAP cold spot. We are
not re-examining the foreground question ab initio, simply examining the plausibility that
the deficit of NVSS sources could complicate the foreground calculations in this location.
The characteristic brightness in the 3.4◦ convolved NVSS image around the cold spot is ∼
51 mK at 1.4 GHz; the brightness of the cold spot is ∼ 37 mK. This difference of 14 mK
in brightness (4 mK at 10◦ resolution) represents the extragalactic population contribution
only, as the NVSS is not sensitive to the large scale galactic synchrotron emission. By
contrast, the single dish 1.4 GHz brightness within a few degrees of the cold spot is ∼ 3.4 K,
as measured using the Bonn Stockert 25 m telescope (Reich and Reich 1986). Therefore the
total synchrotron contribution at 1.4 GHz is ∼ 0.7 K above the CMB, 50 times larger than
the localized extragalactic deficit.
One way to create the cold spot would be if the universal spectral index used for the
normal galactic (plus small extragalactic) subtraction was incorrect for the extra brightness
temperature contribution of the NVSS dip, δT (-14 mK at L band, 1.4 GHz). We make an
order of magnitude estimate of this potential error. Following the first year data analysis
(Bennett et al. 2003), and the similar exercise performed by Cruz et al. (2006), we consider
fitting a synchrotron template map at some reference frequency νref , and extrapolating the
model, (F(ν)model), with a spectral index of β = −3 to the Q, V, and W bands. This
spectral index is consistent with those of Cruz et al. (2006) and the average spectral index
observed in the WMAP images (Bennett et al. 2003; Hinshaw et al. 2006). Under the null
hypothesis that the spectral index of δT is the same as that of the mean brightness T0, we
then calculate in the region of the deficit,
F(ν)model = [T0(νref) + δT(νref)] (ν/νref)^{−3}.   (1)
However, if the actual spectrum of δT is -α (from L band through W band) instead of
-3, then the true foreground subtraction, F(ν)true, should have been
F(ν)true = T0(νref) (ν/νref)^{−3} + δT(νref) (ν/νref)^{−α}.   (2)
The foreground subtraction would then be in error as a function of frequency as follows,
expressed in terms of the L band temperatures :
δF(ν) ≡ F(ν)true − F(ν)model = δT(νL) (νL/ν)^{α} [ 1 − (νref/ν)^{3−α} ].   (3)
Three different reference frequencies have been used for synchrotron extrapolation – the
Haslam et al. (1981) 408 MHz map (Bennett et al. 2003), the Jonas, Baart & Nicolson
(1998) 2326 MHz Rhodes/HartRAO survey (Cruz et al. 2006), and the internal K and Ka
band WMAP images (Hinshaw et al. 2006). Since the foreground subtraction errors would
be worst extrapolating from the lowest frequency template, we start at 408 MHz and look
at the problems caused by a spectral index for δT that is different than the assumed -3.
We obtain a rough measure of the spectral index of the dip by comparing the 1◦
resolution maps (Figure 4) at 408 MHz and 1400 MHz. At lII ,bII = 207
◦,-55◦ , we find
δT = 2.6±0.75 K (30±12 mK) at 408 (1400) MHz, yielding a spectral index of -3.6±0.5.
Using Equation (3), this would actually lead to a WMAP “hot spot” if a spectral index of
-3.0 had been assumed for the extrapolation. Within the errors, the worst foreground ex-
trapolation mistakes would then be hot spots that range from +0.5 to +4.6 µK at Q band,
1◦ resolution, (≈ +0.25 to +2.3 µK at 4◦ resolution). Our derived spectral index for the dip
is steeper than expected for extragalactic sources, so we also do the calculation assuming
the flattest reasonable extragalactic spectrum of -2.5 . This would lead to a spurious cold
spot of -2.9 µK at 4◦ resolution . In either case, this is far below the -73 µK observed at this
resolution in WMAP, and we thus conclude that the deficit of NVSS sources does not lead
to a significant foreground subtraction error of either sign.
4. Discussion
The WMAP cold spot could have three origins: a) at the last scattering surface (z ∼
1000), b) cosmologically local (z < 1), or c) galactic. Because the spot corresponds to a
significant deficit of flux (and source number counts) in the NVSS, we have argued here that
the spot is cosmologically local and hence, a localized manifestation of the late ISW effect.
Cruz et al. (2005, 2007) derived a temperature deviation for the cold spot of ∼ −20µK
and a diameter of 10◦ using the WMAP 3 year data; on scales of 4◦ the average temperature is
lower, ∼ −73 µK. Using these two data points we derive an approximate relation between the
temperature deviation and the corresponding size of the cold spot: θ(∆T/T ) ≈ 4.5× 10−5,
where θ is the radius of the cold spot in degrees. We now perform an order of magnitude
calculation to see if the late ISW can produce such a spot, assuming that the entire effect
comes from the ISW.
The contribution of the late ISW along a given line of sight is given by (∆ T/T )|ISW =
– 7 –
Φ̇ dη, where the dot represents differentiation with respect to the conformal time η,
dη = dt/a(t), and a is the scale factor. The integrand will be non-zero only at late times
(z<1) when the cosmological constant becomes dynamically dominant.
We start with the Newtonian potential given by
Φ = GM/r ≈
r2 ρb δ. (4)
The proper size r and the background density ρb scale as a and a
−3, respectively. The growth
of the fractional density excess, δ(a) in the linear regime is given by D(a) = δ(a)/δ(a0), and
D(a) is the linear growth factor. For redshifts below ∼ 1 in ΛCDM, this factor can be
approximated as δ(a) ≈ aδ(a0)(3−a)/2. Assuming that the region is spherical, its comoving
radius is rc = 0.5∆z(c/H), and ∆z is the line of sight diameter of the region. The change
in the potential over dη can be approximated by
r2c ρc δ (∆z), (5)
where subscript c refers to the average comoving size of the void and the comoving back-
ground density. In ΛCDM the Hubble parameter is roughly given by H2 = H2
(1 + 2z),
for redshifts below ∼ 1. Incorporating these approximations we get the following relation
between the size of the region, its redshift and the temperature deviation from the late ISW:
∆Φ ≈ −
(1 + 2z)1/2(1 + z)−2 δ ≈
We now ask under what conditions this expression is consistent with the observed rela-
tion between the size and temperature of the cold spot derived earlier, θ(∆T/T ) ≈ 4.5×10−5.
For Ωm ∼ 0.3 and δ = −1 (i.e. a completely empty region) this leads to the simplified re-
lation, θ(1 + z) ≈ 6, where θ is in degrees, as before. Since the spot’s association with the
NVSS places it at z ∼ 0.5− 1, this leads to a self-consistent value of the radius of ∼ 3− 4◦
for the observed spot. For c/H0 = 4000 Mpc, the comoving radius of the void region is
120-160 Mpc.
How likely is such a large underdense region in a concordance cosmology? Suppose there
is only one such large underdense region in the whole volume up to z=1. The correspond-
ing void frequency is then the ratio of the comoving volume of the void to the comoving
volume of the Universe to z=1, which is roughly 3 × 10−5. Is this consistent with ΛCDM?
Void statistics have been done for a number of optical galaxy surveys, as well as numerical
structure formation simulations. Taking the most optimistic void statistics (filled dots in
Fig. 9 of Hoyle & Vogeley, 2004) which can be approximated by logP = −(r/Mpc)/15, a
– 8 –
140 Mpc void would occur with a probability of 5× 10−10, considerably more rare than our
estimate for our Universe (3× 10−5) based on the existence of the cold spot. One must keep
in mind, however, that observational and numerical void probability studies are limited to
rc ∼ 30 Mpc; it is not yet clear how these should be extrapolated to rc > 100 Mpc.
We note that Inoue & Silk (2006a,b) had already suggested that anomalous tempera-
ture anisotropies in the CMB, such as the cold spot, may be explained by the ISW effect. In
contrast to our calculation described above, their analysis considers the linear ISW plus the
second order effects due to an expanding compensated void, partially filled with pressureless
dust, embedded in a standard CDM (Inoue and Silk 2006a) or ΛCDM (Inoue and Silk 2006b)
background. It is reassuring that the size of the void indicated by their analysis—about 200
Mpc if located at z ∼ 1—is roughly the same as what we get here using linear ISW.
The need for an extraordinarily large void to explain the cold spot would add to the
list of anomalies associated with the CMB. (See Holdman, Mersini-Houghton & Takahashi
(2006a,b) for a theory that predicts such large voids based on a particular landscape model.)
These include the systematically higher strength of the late ISW correlation measured for a
variety of mass tracers, compared to theWMAP predictions (see Fig. 11 of Giannantonio et al.
2006), and the alignment and planarity of the quadrupole and the octopole (de Oliveira-Costa et al.
2004; Land & Magueijo 2005). We can, however, conclude that models linking the cold spot
with the larger scale anomalies, such as the anisotropic Bianchi Type VIIh model of Jaffe et
al. (2005), are no longer necessary. While we suggest that the cold spot is a local effect, low
order global anisotropic models (e.g., Gumrukcuoglu, Contaldi & Peloso 2006) may still be
needed for the low−ℓ anomalies.
5. Concluding Remarks
We have detected a significant dip in the average surface brightness and number counts of
radio sources from the NVSS survey at 1.4 GHz in the direction of the WMAP cold spot. The
deficit of extragalactic sources is also seen in a single dish image at 408 MHz. Together with
previous work, we rule out instrumental artifacts in WMAP due to foreground subtraction.
A fuller examination of the statistical uncertainties associated with our combination of the
McEwen et al. (2007) wavelet results and our own a posteriori analysis should be performed.
With this caveat, we conclude that the cold spot arises from effects along the line of sight,
and not at the last scattering surface itself. Any non-gaussianity of the WMAP cold spot
therefore would then have a local origin.
A 140 Mpc radius, completely empty void at z≤1 is sufficient to create the magnitude
– 9 –
and angular size of the cold spot through the late integrated Sachs-Wolfe effect. Voids this
large currently seem improbable in the concordance cosmology, adding to the anomalies
associated with the CMB.
We suggest that a closer investigation of all mass tracers would be useful to search
for significant contributions from isolated regions. Also, if our interpretation of the cold
spot is correct, it might be possible to detect it indirectly using Planck, through the lack of
lensing-induced polarization B modes (Zaldarriaga & Seljak 1997).
ACKNOWLEDGMENTS We thank Eric Greisen, NRAO, for improvements in the
AIPS FLATN routine, which allows us to easily stitch together many fields in a flexible
coordinate system. The 408 MHz maps were obtained through SkyView, operated under
the auspices of NASA’s Goddard Space Flight Center. We appreciate discussions with M.
Peloso, T. J. Jones and E. Greisen regarding this work, and useful criticisms from the
anonymous referee. LR acknowledges the inspiration from his thesis adviser, the late David
T. Wilkinson, who would have appreciated the notion of deriving information from a hole.
At the University of Minnesota, this work is supported in part, through National Science
Foundation grants AST 03-07604 and AST 06-07674 and STScI grant AR-10985.
REFERENCES
Bennett, C.L., et al. 2003, ApJS 148, 1
Cabre, A., Gaztanaga, E., Manera, M., Fosalba, P. Castander, F. 2006, MNRAS 372, 23
Cayon, L., Jin J., Treaster, A. 2005, MNRAS 362, 826
Chiang, L.-Y., Naselsky, P.D. 2006, IJMPD 15,1283C
Coles, P. 2005, Nature 433, 248
Condon, J. J., Cotton, W. D., Greisen, E. W., Yin, Q. F., Perley, R. A., Taylor, G. B.,
Broderick, J. J. 1998, AJ 115, 1693
Crittenden, R. G., Turok, N. 1996, PRL 76, 575
Cruz, M., Martinez-Gonzalez, E., Vielva, P., Cayon, L. 2005, MNRAS 356, 29
Cruz, M., Tucci, M., Martinez-Gonzalez, E., Vielva, P. 2006, MNRAS 369, 57
Cruz, M., Cayon, L., Martinez-Gonzalez, E., Vielva, P., Jin, J. 2007, ApJ 655, 11
– 10 –
de Oliveira-Costa A., et al. 2004, PhRevD 69, 3516
Giannantonio, T. et al. 2006, PhRvD 74, 352
Gumrukcuoglu, A. E., Contaldi, C. R. & Peloso, M. 2006, astro-ph/0608405
Haslam, C. G. T., Klein, U., Salter, C. J., Stoffel, H., Wilson, W.E., Cleary, M.N., Cooke,
D.J., & Thomasson, P. 1981, A&A, 100,209
Hinshaw, G., et al. 2006, ApJ, submitted (astro-ph/0603451)
Holman, R.; Mersini-Houghton, L.; Takahashi, Tomo, 2006, hep-th/0611223
Holman, R.; Mersini-Houghton, L.; Takahashi, Tomo, 2006, hep-th/0612142
Hoyle, F., Vogeley, M. S. 2004, ApJ 607, 751
Inoue, K. T., Silk, J. 2006, ApJ, 648, 23
Inoue, K. T., Silk, J. 2006, astro-ph/0612347
Jaffe, T. R., Banday A. J., Eriksen, H. K., Forski, K. M, Hansen, F. K. 2005, ApJ 629, L1
Jonas, J., Baart, E. E., Necolson, G. D. 1998, MNRAS, 297, 997
Land, K., Magueijo, J.,2005, PRL 95,071301
Liu, X., & Zhang, S. N. 2005, ApJ, 633,542
McEwen, J. D., Vielva, P., Hobson, M. P., Martinez-Gonzalez, E., & Lasenby, A. N. 2007,
MNRAS, in press
Pietrobon, D., Balbi, A., Marinucci, D. 2006, PhysRevD 74, 352
Reich, W. and Reich, P. 1986, A&A Suppl. 63, 205
Tegmark, M., de Oliveira-Costa, A., Hamilton A. 2003, Phys.Rev. D68, 123523.
Tojeiro, R., Castro, P.G., Heavens, A.G., Gupta, S. 2006, MNRAS,365,265
Vielva, P., Martinez-Gonzalez, E., Varreiro, R. B., Sanz, J. L., Cayon, L. 2004, ApJ 609, 22
York, D. G. et al. 2000, AJ 120, 1579
Zaldarriaga, M. & Seljak, U. 1997, PhRvD 55, 1830
This preprint was prepared with the AAS LATEX macros v5.2.
http://arxiv.org/abs/astro-ph/0608405
http://arxiv.org/abs/astro-ph/0603451
http://arxiv.org/abs/hep-th/0611223
http://arxiv.org/abs/hep-th/0612142
http://arxiv.org/abs/astro-ph/0612347
– 11 –
Fig. 1.— 50◦ field from smoothed NVSS survey at 3.4◦ resolution, centered at lII , bII
= 209◦, -57◦. Values range from black: 9.3 mJy/beam to white: 21.5 mJy/beam. A 10◦
diameter circle indicates the position and size of the WMAP cold spot.
– 12 –
Fig. 2.— Aitoff projection of NVSS survey, centered at lII , bII = 180
◦, 0◦, showing the
median brightness in sliding boxes of 3.4◦. The WMAP cold spot is indicated by the black
box. Closer to the plane, large dark patches arise from sidelobes around strong NVSS sources.
Fig. 3.— The cumulative distribution, normalized to 1000, of median brightness levels (mK)
in 3.4◦ sliding boxes of the NVSS images in two strips above |bII | > 30
◦ (see text). The
minimum brightness (which is from the cold spot region) is indicated by a vertical line.
– 13 –
Fig. 4.— 18◦ fields, with 1◦ resolution, centered at lII , bII = 209
◦, -57◦. Left: 408 MHz
(Haslam et al. 1981). Right: 1.4 GHz (Condon et al. 1998). A 10◦ diameter circle indicates
the position and size of the WMAP cold spot.
– 14 –
Fig. 5.— Number of NVSS sources in 3.14 square degree annuli as a function of distance
from the cold spot. The counts axis refers to the results for counts of sources with S>5mJy;
the grey line refers to counts for S>2.5mJy with those counts multiplied by 0.56. Each bin
is independent.
Introduction
Analysis and Characterization of the NVSS ``dip''
Foreground Corrections
Discussion
Concluding Remarks
|
0704.0909 | L^2 rho form for normal coverings of fibre bundles | L2-RHO FORM FOR NORMAL COVERINGS OF FIBRE BUNDLES
SARA AZZALI
Abstract. We define the secondary invariants L2-eta and -rho forms for families
of generalized Dirac operators on normal coverings of fibre bundles. On the cover-
ing family we assume transversally smooth spectral projections and Novikov–Shubin
invariants bigger than 3(dimB + 1) to treat the large time asymptotic for general
operators. In the case of a bundle of spin manifolds, we study the L2-rho class in
relation to the space R+(M/B) of positive scalar curvature vertical metrics.
1. Introduction
Secondary invariants of Dirac operators are a distinctive issue of the heat equation
approach to index theory. The eta invariant of a Dirac operator first appeared as the
boundary term in the Atiyah–Patodi–Singer index theorem [2]: this spectral invariant,
highly nonlocal and therefore unstable, became a major object of investigation, because
of its subtle relation to geometry. With the introduction of superconnnections in index
theory by Quillen and Bismut, it became possible to employ heat equation techniques
in higher geometric situations, where the primary invariant, the index, is no longer a
number, but a class in a K-theory group [44, 10, 35]. This led to so called local index
theorems, which are refinements of the cohomological index theorems at the level of
differential forms, and gave as new fundamental byproduct the eta forms, coming from
the transgression of the index class [11, 12, 13], which are the higher analogue of eta
invariants [41, 38, 34].
Rho invariants are differences (or, more generally, delocalized parts) of eta invariants,
so they naturally possess stability properties when computed for geometrically relevant
operators, mainly the spin Dirac operator and the signature operator [3, 31, 42]. Fur-
thermore, they can be employed to detect geometric structures: the Cheeger–Gromov
L2-rho invariant, for example, has major applications in distinguishing positive scalar
curvature metrics on spin manifolds [15, 43], and can show the existence of infinitely
many manifolds that are homotopy equivalent but not diffeomorphic to a fixed one [16].
As secondary invariants always accompany primary ones, it is very natural to ask what
are the L2-eta and L2-rho forms in the case of a families, and what are their properties.
We consider the easiest L2-setting one could think of, namely a normal covering of a fibre
bundle. This interesting model contains yet all the features and problems offered by the
presence of continuos spectrum. Since the fibres of the covering family are noncompact,
the large time asymptotic of the superconnection Chern character is in general not
Date: September 4, 2010.
http://arxiv.org/abs/0704.0909v2
2 SARA AZZALI
converging to a differential form representative of the index class, and the same problem
is reflected when trying to integrate on [1,∞) the transgression term involved in the
definition of the L2-eta form.
The major result in this sense is by Heitsch and Lazarov, who gave the first families index
theorem for foliations with Hausdorff graph [30]. They computed the large time limit
of the superconnection Chern character as Haefliger form, assuming smooth spectral
projections and Novikov–Shubin invariants bigger than 3 times the codimension of the
foliation. Their result implies an index theorem in Haefliger cohomology (not a local
one, because they do not deal with the transgression term), which in particular applies
to the easier L2-setting under consideration.
We use the techniques of Heitsch–Lazarov to investigate the integrability on [1,∞) of
the transgression term, in order to define the L2-eta form for families D of generalised
Dirac operators on normal coverings of fibre bundles. Our main result, Theorem 3.4,
implies that the L2-eta form η̂(2)(D) is well defined as a continuos differential form on
the base B if the spectral projections of the family D are smooth, and the families
Novikov–Shubin invariants {αK}K⊂B are greater than 3(dimB + 1).
We define then naturally the L2-rho form ρ̂(2)(D) as the difference between the L2-eta
form for the covering family and the eta form of the family of compact manifolds. When
the fibre is odd dimensional, the zero degree term of ρ̂(2)(D) is the Cheeger–Gromov
L2-rho invariant of the induced covering of the fibre. We prove that the L2-form is
(weakly) closed when the fibres are odd dimensional (Prop. 4.3).
The strong assumptions of Theorem 3.4 are required because we want to define η̂(2) for
a family of generalised Dirac operators. In the particular case of de Rham and signature
operators one can put weaker assumptions: this is showed by Gong–Rothenberg’s result
for the L2-Bismut–Lott index theorem (proved under positivity of the Novikov–Shubin
invariants) [24], and from results in [4], where we develop a new approach to large time
estimate exclusive to the families of de Rham and signature operators. On the contrary,
a family of signature operators twisted by a fibrewise flat bundle has to be treated as a
general Dirac operator [7].
Next we investigate the L2-rho form in relation to the space R+(M/B) of positive scalar
curvature vertical metrics for a fibre bundle of spin manifolds. For this purpose, the
Dirac families D/ involved are uniformly invertible by Lichnerowicz formula, so that the
definition of the L2-rho form does not require Theorem 3.4, but follows from classical
estimates. Here the L2-rho form is always closed, and we prove the first step in order
to use this invariant for the study of R+(M/B), namely that the class [ρ̂(2)(D/)] is the
same for metrics in the same concordance classes of R+(M/B) (Prop.5.1). The action
of a fibrewise diffeomorphism is also taken into account.
Along the lines of [42] we can expect that if Γ is torsion-free and satisfies the Baum–
Connes conjecture, then the L2-rho class of a family of odd signature operators is an
oriented Γ- fibrewise homotopy invariant, and that [ρ̂(2)(D̃/ĝ)] vanishes correspondingly
to a vertical metric ĝ of positive scalar curvature.
L2-RHO FORM FOR NORMAL COVERINGS OF FIBRE BUNDLES 3
Acknowledgements This work was part of my researches for the doctoral thesis. I
would like to thank Paolo Piazza for having suggested the subject, for many interesting
discussions and for the help and encouragement. I wish to express my gratitude to
Moulay-Tahar Benameur for many interesting discussions.
2. Geometric families in the L2-setting
We recall local index theory’s machine, here adapted to the following L2-setting for
families.
Definition 2.1. Let π̃ : M̃ → B be a smooth fibre bundle, with typical fibre Z̃ con-
nected, and let Γ be a discrete group acting fibrewise freely and properly discontinuosly
on M , such that the quotient M = M̃/Γ is a fibration π : M → B with compact fibre Z.
Let p : M̃ → M̃/Γ = M denote the covering map. This setting will be called a normal
covering of the fibre bundle π and will be denoted with the pair (p : M̃ →M,π : M → B).
Let π : M → B be endowed with the structure of a geometric family (π : M →
B, gM/B ,V, E), meaning by definition:
• gM/B is a given metric on the vertical tangent bundle T (M/B)
• V the choice of a smooth projection V : TM → T (M/B) (equivalently, the choice
of a horizontal complement THM = KerV)
• E → M is a Dirac bundle, i.e. an Hermitian vector bundle of vertical Clifford
modules, with unitary action c : Cl(T ∗(M/B), gM/B) → End(E), and Clifford
connection ∇E.
To a gemetric family it is associated a family D = (Db)b∈B of Dirac operators along
the fibres of π, Db = cb ◦ ∇Eb : C∞(Mb, Eb) → C∞(Mb, Eb), where Mb = π−1(b), and
Eb := E|Mb .
If we have a normal Γ-covering p : M̃ → M of the fibre bundle π, the pull back of the
geometric family via p gives a Γ-invariant geometric family which we denote (π̃ : M̃ →
B, p∗gM/B , Ṽ , Ẽ).
2.0.1. The Bismut superconnection. The structure of a geometric family gives a distin-
guished metric connection ∇M/B on T (M/B), defined as follows: fix any metric gB on
the base and endow TM with the metric g = π∗gB ⊕ gM/B ; let ∇g the Levi-Civita con-
nection on M with respect to g; the connection ∇M/B := V∇gV on the vertical tangent
does not depend on gB ([9, Prop. 10.2]).
When X ∈ C∞(B,TB), let XH denote the unique section of THM s.t. π∗XH = X. For
any ξ1, ξ2 ∈ C∞(B,TB) let T (ξ1, ξ2) := [ξH1 , ξH2 ]− [ξ1, ξ2]H and let δ ∈ C∞(M, (THM)∗)
measuring the change of the volume of the fibres LξH vol =: δ(ξH ) vol. Following the
notation of [9], in formulas in local expression we denote as e1, . . . , en a local orthonormal
base of the vertical tangent bundle; f1, . . . fm will be a base of TyB and dy
1, . . . , dym will
denote the dual base. The indices i, j, k.. will be used for vertical vectors, while α, β, . . .
will be for the horizontal ones. The 2-form c(T ) =
α<β(T (fα, fβ), ei)eidy
αdyβ has
4 SARA AZZALI
values vertical vectors. Using the vertical metric, c(T )(fα, fβ) can be seen as a cotangent
vertical vector, hence it acts on E via Clifford multiplication.
Let H → B be the infinite dimensional bundle with fibres Hb = C∞(Mb, Eb). Its space of
sections is given by C∞(B,H) = C∞(M,E). We denote Ω(B,H) := C∞(M,π∗(ΛT ∗B)⊗
E). Let ∇H be the connection on H → B defined by ∇HU ξ = ∇EUHξ +
δ(ξH) where ξ is
on the right hand side is regarded as a section of E. ∇H is compatible with the inner
product < s, s′ >b:=
hE(s, s′) vol b , with s, s
′ ∈ C∞(B,H), and hE the fixed metric
on E.
Even dimensional fibre. When dimF = 2l the bundle E is naturally Z2-graded by chi-
raliry, E = E+ ⊕ E−, and D is odd. Correspondingly, the infinite dimensional bundle
is also Z2-graded: H = H+ ⊕ H−. The Bismut superconnection adapted to D is the
superconnection B = ∇H +D − c(T )
on H.
The corresponding bundle for the covering family π̃ is denoted H̃ → B where the same
construction for the family M̃ → B gives the Bismut superconnection B̃ = ∇H̃ + D̃ −
c(T̃ )
, adapted to D̃. It is Γ-invariant by construction, being the pull-back via p of B.
Odd dimensional fibre. When dimZ = 2l − 1, the appropriate notion is the one of
Cl(1)-superconnection, as introduced by Quillen in [44, sec. 5]. Let Cl(1) the Clifford
algebra Cl(1) = C⊕Cσ, where σ2 = 1, and consider EndE⊗Cl(1), adding therefore the
extra Clifford variable σ. On End(Eb) ⊗ Cl(1) = Endσ(Eb ⊕ Eb) define the supertrace
tr σ(A + Bσ) := trB, extended then to tr σ : C∞(M,π∗Λ∗B ⊗ EndE) → Ω(B) as usual
by tr σ(ω ⊗ (a+ bσ)) = ω tr b, for ω ∈ C∞(B,ΛT ∗B), ∀a, b ∈ C∞(B,EndE).
The family D, as well as c(T ) are even degree elements of the algebra C∞(B,EndH ⊗
Cl(1)⊗̂ΛT ∗B). On the other hand, ∇H is odd. By definition, the Bismut Cl(1)-
superconnection adapted to the family D is the operator of odd total degree Bσ :=
Dσ + ∇̃u − c(T )
Notation. In the odd case we will distinguish between the Cl(1)-superconnection de-
fined above Bσ acting on Ω(B,H) ⊗̂Cl(1), and the differential operator B : Ω(B,H) →
Ω(B,H) given by B := D +∇H − c(T )
, which is not a superconnection but is needed in
the computations.
2.1. The heat operator for the covering family. In this section we briefly discuss
the construction of the heat operator e−B̃
, which can be easily performed combining
the usual construction for compact fibres families in [9, Appendix of Chapter 9], with
Donnelly’s construction for the case of a covering of a compact manifolds [20]. We
integrate notations of [9, Ch. 9-10] with the ones of our appendix A. We refer to the
latter for the definitions of the spaces of operators used the rest of this section.
Let C∞(B,DiffΓ(Ẽ)) the algebra of smooth maps D : B → DiffΓ(Ẽ) satisfying that
∀z ∈ B, Dz is a Γ-invariant differential operator on M̃z, with coefficients depending
L2-RHO FORM FOR NORMAL COVERINGS OF FIBRE BUNDLES 5
smoothly on the variables of B. In the same way, let N = C∞(B,ΛT ∗B⊗Op−∞Γ (Ẽ)) =
Ω(B,Op−∞Γ (Ẽ)) the space of smooth maps A : B → ΛT ∗B ⊗ Op
Γ (Ẽ). N contains
families of Γ-invariant operators of order −∞ with coefficients differential forms, hence
N is filtered by Ni = C∞(B,
j≥i Λ
jT ∗B ⊗Op−∞Γ (Ẽ)). The curvature of B̃ is a family
2 ∈ Ω(B,Diff2Γ(Ẽ)) and can be written as B̃2 = D̃2 − C̃, with C̃ ∈ Ω≥1(B,Diff1Γ(Ẽ)).
2.1.1. Definition and construction. For each point z ∈ B the operator e−tB̃2z is by defi-
nition an the one whose Schwartz kernel p̃zt (x, y) ∈ Ẽx⊗ Ẽ∗y ⊗ΛT ∗zB is the fundamental
solution of the heat equation, i.e.
• p̃zt (x, y) is C1 in t, C2 in x, y;
p̃zt (x, y) + B̃
z,II p̃
t (x, y) = 0 where B̃z,II means it acts on the second variable;
• lim
p̃zt (x, y) = δ(x, y)
• ∀T > 0 ∀t ≤ T ∃ c(T ) :
∥∂it∂
ypt(x, y)
∥ ≤ ct−
−i−j−ke−
2(x,y)
2 , 0 ≤ i, j, k ≤ 1.
Its construction is as follows: pose
e−tB̃
z := e−tD̃
tk e−σ0tD̃
z C̃e−σ1tD̃
z . . . C̃e−σktD̃
︸ ︷︷ ︸
dσ1 . . . dσk (2.1)
Since ∀σ = (σ0, . . . , σk) there exists σi > 1k+1 , then each term Ik ∈ ΛT
zB ⊗Op−∞(Ẽz)
and so does e−tB̃
z . Let p̃zt (x, y) = [e
−B̃2t,z ](x, y) be the Schwartz kernel of the operator
(2.1). Using arguments of [9, theorems 9.50 and 9.51], one proves that p̃zt (x, y) is smooth
in z ∈ B so that one can conclude e−B̃ ∈ Ω(B,Op−∞Γ ).
The next property, proved in [20] and [21], is needed in the t→ 0 asymptotic. For t < T0
−tB̃2 ](x̃, ỹ)
∣ ≤ c1t−
2 e−c2
2(x̃,ỹ)
t (2.2)
2.2. Transgression formulæ, eta integrands. For t > 0 let δt : Ω(B,H) → Ω(B,H)
the operator which on Ωi(B,H) is multiplication by t− i2 . Then consider the rescaled
superconnection Bt = t
2 δtBδ
t = ∇H +
tD − c(T ) 1
2.2.1. Even dimensional fibre. From (A.1) we have
Str Γe
−B̃2t = −dStr Γ
which on a finite interval (t, T ) gives the transgression formula
Str Γ
− Str Γ
Str Γ
ds (2.3)
6 SARA AZZALI
2.2.2. Odd dimensional fibre. Here it is convenient to use that tr σΓe
−(B̃σt )2 = tr oddΓ e
−B̃2t ,
(from [44] and (A.1)), where trodd means we take the odd degree part of the resulting
form. Then taking the odd part of the formula
tr Γe
−B2t = −d tr Γ
Tr oddΓ
− Tr oddΓ
Tr evenΓ
ds (2.4)
Remarks and notation 2.2. Since we wish now to look at the limits as t → 0 and
t → ∞ in (2.3) and 2.4, let us make precise what the convergences on the spaces of
forms are, and for families of operators. On Ω(B) we consider the topology of conver-
gence on compact sets. We say a family of forms ωt
C0→ ωt0 as t → t0 if ∀K
supz∈K ‖ωt(z)− ωt0(z)‖ΛT ∗z B → 0. We say ωt
C1→ ωt0 if the convergence also hold for
first derivatives of ωt with respect to the base variables. We say ωt = O(tδ) as t→ ∞ if
∃ a constant C = C(K) : supz∈K ‖ωt(z) − ωt0(z)‖ΛT ∗z B ≤ Ct
δ. We say ωt
= O(tδ) if
also the first derivatives with respect to base directions are O(tδ).
For a family Tt ∈ UC∞(B,Op−∞(Ẽ)) we say Tt
Ck→ Tt0 as t → t0 if ∀K
⊆ B, ∀r, s ∈ Z
supz∈K ‖Tt(z)− Tt0(z)‖r,s → 0 together with derivatives up to order k with respect to
the base variables.
On the space of kernels UC∞(M̃ ×B M̃, Ẽ4Ẽ∗ ⊗ π∗ΛT ∗B), we say kt → kt0 if ∀ϕ ∈
C∞c (B) ‖(π∗ϕ(x))(kt(x, y) − kt0(x, y))‖k → 0.
We stress that from (A.3) the map Ω(B,Op−∞Γ (Ẽ)) → UC∞(M̃ ×B M̃ , Ẽ4Ẽ∗ ⊗
π∗ΛT ∗B), T 7→ [T ] is continuos.
2.3. The t→ 0 asymptotic.
Proposition 2.3.
Str Γ
Â(M/B) chE/S if dim Z̃ = even
tr oddΓ
Â(M/B) chE/S if dim Z̃ = odd
The result is proved exactly as in the classic case of compact fibres, together with the
following argument of [33, Lemma 4, pag. 4]:
Lemma 2.4. [33] ∃A > 0, c > 0 s.t.
−B2t ](π(x̃), π(x̃))− [e−B̃2t ](x̃, x̃)
∣ = O(t−ce−
For the proof of the lemma see [32], or also [5], [24]. With the same technique we deduce
Proposition 2.5. The differential forms StrΓ
and tr σΓ
dB̃σt
e−(B̃
integrable on [0, 1], uniformly on compact subsets.
L2-RHO FORM FOR NORMAL COVERINGS OF FIBRE BUNDLES 7
Proof. The proof is as in [9, Ch.10, pag. 340]. We reason for example in the even case.
Consider the rescaled superconnection B̃s as a one-parameter family of superconnections,
s ∈ R+, and construct the new family M̆ = M̃ ×R+ → B ×R+ =: B̆. On Ĕ = Ẽ ×R+
there is a naturally induced family of Dirac operators whose Bismut superconnection
is B̆ = B̃s + dR+ − n4sds, and its rescaling is B̆t = B̃st + dR+ −
ds. Its curvature is
t = B̃
st + t
∧ ds, so that
t = e−B̃
e−uB̃
e−(1−u)B
st ∧ ds = e−F̃st − ∂B̃st
e−B̃st ∧ ds.
Str Γ
= Str Γ(e
−B̃2st)− Str Γ
∂B̃st
e−B̃st
ds (2.5)
At t = 0 we have the asymptotic expansion Str Γ(e
−B̆t) ∼
j=0 t
2 (Φ j
− α j
without singular terms. Computing (2.5) in s = 1, since
∂B̃st
, one has
Str Γ
e−F̃t
, and therefore Str Γ
e−F̃t
j=0 t
−1α j
. Let’s
compute α0. From the local formula
Φ0 − α0ds = lim
Str Γ
e−F̆t
M̆/B̆
Â(M̆/B̆) (2.6)
since M̆(z,s) = M̃z×{s} and the differential forms are pulled back from those on M̃ → B,
then the right hand side of (2.6) does not contain ds so that α0 = 0. This implies that
Str Γ(
t ) ∼
−1α j
3. The L2-eta form
We prove in Theorem 3.4 the well definiteness of the L2-eta form η̂(2)(D̃) under opportune
regularity assumptions. We make use of the techniques of [30].
3.1. The family Novikov–Shubin invariants. The t → ∞ asymptotic of the heat
kernel is controlled by the behaviour of the spectrum near zero. Let P̃ = (P̃ z)z∈B
the family of projections onto ker D̃ and let P̃ǫ = χ(0,ǫ)(D̃) be the family of spectral
projections relative to the interval (0, ǫ); denote Q̃ǫ = 1− P̃ǫ − P̃ .
For any z ∈ B the operator D̃z is a Γ-invariant unbounded operator: let D̃2z =
λdEz(λ)
be the spectral decomposition of D̃2z , andN
z(λ) = trΓE
z(λ) its spectral density function
[27]. Denote bz = trΓ P̃
z. Then N z(ǫ) = bz + trΓ P̃
ǫ and from [22] the behaviour of
θz(t) = trΓ(exp(−tD̃z)) at ∞ is governed by
αz = sup{a : θz(t) = bz +O(t−a)} = sup{a : N z(ǫ) = bz +O(ǫa)} (3.1)
8 SARA AZZALI
where αz is called the Novikov–Shubin invariant of D̃z.
We shall later impose conditions on αz uniformly on compact subset of B, so we intro-
duce the following definition from [24]: let K ⊂ B be a compact, define αK := infz∈K αz.
We call {αK}K⊂B the family Novikov–Shubin invariants of the fibre bundle M̃ → B.
By results of Gromov and Shubin [27], when D̃2z is the Laplacian, αz is a Γ-homotopy
invariant of M̃z [27], in particular it does not depend on z. In that case αz is locally
constant on B. For a general Dirac type operator this is not true and we need to use
the αK ’s.
Definition 3.1. [30] We say the family D̃ has regular spectral projections if P̃ and P̃ǫ are
smooth with respect to z ∈ B, for ǫ small, and ∇H̃P̃ ,∇H̃P̃ǫ are in N and are bounded
independently of ǫ. We say that the family D̃ has regularity A, if ∀K
⊆ B it holds
αK ≥ A.
Remark 3.2. To have regular projections is a strong condition, difficult to be verified
in general. The family of signature operators verifies the smoothness of P̃ [24, Theorem
2.2] but the smoothness of P̃ǫ is not clear even in that case.
The large time limit of the superconnection-Chern character StrΓ e
−B̃2t is computed in
[30, Theorem 5]. Specializing to our L2-setting it says the following.
Theorem 3.3. [30] Let ∇̃0 = P̃∇H̃P̃ . If D̃ has regular projections and regularity
> 3 dimB,
Str Γ(e
−B̃2t ) = Str Γe
3.2. The L2-eta form. We now use the same techniques of [30] to analyse the trans-
gression term in (2.3) and define the secondary invariant L2 eta form. We prove
Theorem 3.4. If D̃ has regular spectral projections and regularity > 3(dimB+1), then
= O(t−δ−1), for δ > 0. The same holds for trevenΓ
We start with some remarks and lemmas. In particular we shall repeatedly use the
following.
Remark 3.5. Let T ∈ N . From lemma A.6, ∀z ∈ B its Schwartz kernel [Tz] satis-
fies that for sufficiently large l, ∃ czl such that ∀x, y ∈ M̃z | [Tz](x, y) | ≤ czl ‖Tz‖−l,l
Therefore an estimate of ‖Tz‖−l,l produces directly via an estimate of TrΓ Tz.
Notation. Since in this section we are dealing only with the family of operators on the
covering, to simplify the notations let’s call D̃ = D, removing all tildes. Pose
Bǫ := (P +Qǫ)B(P +Qǫ) + PǫBPǫ
Aǫ = B− Bǫ
L2-RHO FORM FOR NORMAL COVERINGS OF FIBRE BUNDLES 9
and write the rescaled operators as
Bǫ,t = (P +Qǫ)(Bt −
tD)(P +Qǫ) +
tD + Pǫ(Bt −
tD)Pǫ (3.2)
Aǫ,t = (P +Qǫ)(Bt −
tD)Pǫ + Pǫ(Bt −
tD)(P +Qǫ)
Denote also Tǫ = QǫBQǫ and Tǫ,t = QǫBtQǫ as in [30].
We will need the following two lemmas from [30]. The first is the “diagonalization” of
ǫ with respect to the spectral splitting of H.
Lemma 3.6. [30, Prop.6] Let M be the space of all maps f : B → ΛTB⊗End H̃. There
exists a measurable section gǫ ∈ M, with gǫ ∈ 1 +N1 such that
∇20 0 0
0 T 2ǫ 0
0 0 (PǫBPǫ)
N3 0 0
0 N2 0
0 0 0
The diagonalization procedure acts on (P ⊕Qǫ)H, in fact gǫ has the form gǫ = ĝǫ ⊕ 1,
with ĝǫ acting on (P ⊕Qǫ)H. From this lemma we get B2ǫ,t = tδtB2ǫδ−1t =
= tδtg
∇20 0 0
0 T 2ǫ 0
0 0 (PǫBtPǫ)
N3 0 0
0 N2 0
0 0 0
gǫδt =
= δtg
tδt(∇20 +N3)δ−1t 0 0
0 tδt(T
ǫ +N2)δ−1t 0
0 0 PǫBtPǫ
δtgǫδ
The next lemma gives an estimate of the terms which are modded out.
Lemma 3.7. [30, lemma 9] If A ∈ Nk is a residual term in the diagonalization lemma
or is a term in gǫ − 1 or g−1ǫ − 1, then, posing ǫ = t−
a , At := δtAδ
t verifies: ∀r, s
‖At‖r,s = O(t
a ) as t→ ∞.
The lemma implies that at place (1,1) in the diagonalized matrix above we get ∇20 +
O(t− 32+ 3a+1) = O(t− 12+ 3a ). To have −1
< 0 we take a > 6. The term at place (2,2)
gives T 2ǫ,t +O(t
a ). Then
ǫ,t = δtg
∇20 +O(t−γ) 0 0
0 T 2ǫ +O(t
a ) 0
0 0 (PǫBPǫ)
δtgǫδ
t , with γ > 0
Now since gǫ = ĝǫ ⊕ 1
ǫ,t =
∇20 +O(t−γ) 0
0 T 2ǫ,t +O(t
δtĝǫδ
0 PǫBPǫ
10 SARA AZZALI
Observe that since gǫ − 1, g−1ǫ − 1 ∈ N1, we have δtĝ−1ǫ δ−1t = Id+
O(t− 12+ 1a ).
Denote w := O(t− 12+ 1a ). Then
∇20 +O(t−γ) 0
0 T 2ǫ,t +O(t
δtĝǫδ
1 + w w
w 1 + w
∇20 +O(t−γ) 0
0 T 2ǫ,t +O(t
1 + w w
w 1 + w
Since e−∇
0+O(t−γ ) = e−∇
0 +O(t−γ), then leaving (P +Qǫ) out of the notation
ǫ,t =
1 + w w
w 1 + w
0 +O(t−γ) 0
0 e−T
1 + θ w
w 1 + w
+ e−(PǫBPǫ)
= e−(PǫBPǫ)
where
(1 + w)2e−∇
0 w(1 + w)e−∇
w(1 + w)e−∇
0 w2e−∇
O(t−1+ 2a ) O(t− 12+ 1a )
O(t− 12+ 1a ) O(t−1+ 2a )
(1 + w)2O(t−γ) w(1 + w)[O(t−γ) + e−T ]
w(1 + w)[O(t−γ) + e−T ] w2O(t−γ) + (1 + w)2e−T
Proof of theorem 3.4. To fix notation, say Z is even dimensional. In the odd case use
trevenΓ instead of Str Γ.
Let K ⊆ B be a compact, and denote as β = αK the Novikov–Shubin invariant on it.
Write Bt = Bǫ,t + Aǫ,t as in (3.2), and define Bt(z) = Bt,ǫ + zAt,ǫ, z ∈ [0, 1], so that by
Duhamel’s principle (for example [30, eq. (3.10)])
t − e−B2t,ǫ =
e−Bt(z)
dz = −
e−(s−1)B
t (z)
dB2t (z)
t (z)dsdz =: Fǫ,t
Write then
Str Γ(
t ) = Str Γ(
dBt,ǫ
︸ ︷︷ ︸
+Str Γ(
Fǫ,t)
︸ ︷︷ ︸
(3.3)
For the family
we shall use that
D + c(T )
D+O(t−
2 ), as in
Remark 2.2.
L2-RHO FORM FOR NORMAL COVERINGS OF FIBRE BUNDLES 11
3.2.1. The term I.
t,ǫ =
0 0 0
2QǫDQǫ 0
0 0 t−
2PǫDPǫ
+O(t−
e−(PǫBPǫ)
0 0 0
2QǫDQǫ 0
0 0 t−
2PǫDPǫ
0 0 0
0 0 0
0 0 0
O(t−1+ 2a ) O(t− 12+ 1a ) 0
O(t− 12+ 1a ) O(t−1+ 2a ) 0
0 0 0
0 0 0
2QǫDQǫ 0
0 0 t−
2PǫDPǫ
(1 + w)2O(t−γ) w(1 +w)2(O(t−γ) + e−T ) 0
w(1 + w)2(O(t−γ) + e−T ) w2O(t−γ) + (1 + w)2e−T 0
0 0 0
0 0 0
2QǫDQǫ 0
0 0 t−
2PǫDPǫ
e−(PǫBPǫ)
2PǫDPǫe
−(PǫBPǫ)2 +
0 0 0
2QǫDQǫO(t−
a ) QǫDQǫO(t−
a ) 0
0 0 0
0 0 0
2QǫDQǫw(1 + w)(O(t−γ) + e−T ) t−
2QǫDQǫ(w
2O(t−γ) + (1 + w)2e−T ) 0
0 0 0
The choice of a > 6 implies 2
. Moreover only diagonal blocks contribute1 to the
StrΓ, therefore we only have to guarantee the integrability of StrΓ(t
2PǫDPǫe
−PǫB2tPǫ),
because from [30, Prop.11] Str Γ e
−T = O(t−δ), ∀δ > 0.
We reason as follows: Str Γ(t
2PǫDPǫe
−PǫB2tPǫ) = t−
2 tr Γ(UPǫ), where U =
τPǫDPǫe
−PǫB2tPǫ , and τ is the chirality grading.
Next we evaluate trΓ(UPǫ) = trΓ(UP
ǫ ) = trΓ(PǫUPǫ). To do this, since our trace has
values differential forms, let ω1, . . . , ωJ a base of ΛT
zB, for z fixed on K. U is a family
of operators and Uz acts on C∞(M̃z, Ẽz)⊗ ΛT ∗zB. Write Uz =
j Uj ⊗ ωj.
tr Γ(PǫUPǫ) =
tr Γ(PǫUjPǫ)⊗ ωj =
tr(χFPǫUjPǫχF )⊗ ωj.
Now tr(χFPǫUjPǫχF ) =
i < χFPǫUjPǫχFδvi , δvi >=
i < UjPǫχFδvi , PǫχFδvi >,
where {δvi} is a base of L2(M̃z |F , Ẽz |F). Therefore
| < UjPǫχFδvi , PǫχFδvi > | ≤ ‖UjPǫχFδvi‖ · ‖PǫχFδvi‖ ≤
≤ ‖Uj‖ ‖PǫχFδvi‖
2 ≤ ‖Uz‖ ‖PǫχFδvi‖
1In fact if Pi are orthogonal projections s.t.
Pi = 1, then for a fibrewise operator A we have
StrA = tr ηA = tr(
PiηAPi) + tr(
PiηAPj) = tr(
PiηAPi).
12 SARA AZZALI
i ‖PǫχFδvi‖ =
i < PǫχFδvi , PǫχFδvi >=
i < χFPǫχFδvi , δvi >= tr Γ(Pǫ) =
O(ǫβ) where β = αK . Hence
tr Γ(PǫUPǫ) ≤ ‖U‖O(ǫβ) = ‖U‖O(t−
a ) , with ǫ = t−
Claim ([30, Lemma 13]):
∥ is bounded independently of t, for t large. This follows
because (PǫBPǫ)
2 = PǫD
2Pǫ − C̄t, with C̄t is a fibrewise differential operator of order
at most one with uniformly bounded coefficients. Therefore
2 C̄t
l,l−1
is bounded
independently of t, for t large. Now writing the Volterra series for e−t(PǫD
2+C̄t ,
we have U = τPǫ
e−tσ0PǫD
2PǫC̄te
−tσ1PǫD2Pǫ . . . C̄te
−tσkPǫD2Pǫdσ, then estimating
each addend as
−tσ0PǫD2PǫC̄te
−tσ1PǫD2Pǫ
∥τPǫDe
−tσ0PǫD2Pǫ
l,l+1
l+1,l
−tσ1PǫD2Pǫ
l,l+1
·· · ··
l+1,l
−tσkPǫD2Pǫ
l,l+1
we get the Claim.
Thus t−
2 trΓ(UPǫ) ≤ c ‖U‖ t−
2 , and Str Γ(
t,ǫ) ≤ ct
2 . We require then
< −1 to have integrability hence we need finally a < 2β
. Because a was also
required to be a > 6 (see lines after Lemma 3.7), the hypothesis
β > 3(q + 1) (3.4)
is a sufficient condition to have the first term in (3.3) equal O(t−1−δ), with δ > 0.
3.2.2. The term II. Now let’s consider the second term in (3.3). As in [30, pag.197-
198], write Bt =
tD + B1 +
B2, and locally B1 = d + Φ. We have
dB2t (z)
Bt(z)Aǫ,t + Aǫ,tBt(z) =
tDA1 + A2
tD + A3, where Ai = Ci,1PǫCi,2, and Ci,j ∈ M1
are sums of words in Φ, d(Φ), t−
2B[2], t
2 d(B[2]). This implies that Ci,j are differential
operators with coefficients uniformly bounded in t.
Str Γ
= tr Γτ(t
2D − t−
2B[2])
e−(s−1)B
t (z)(
tDC1,1PǫC1,2+
+ C2,1PǫC2,2
tD + C3,1PǫC3,2)e
−sB2t (z)dsdz =
= tr Γ
C1,2e
−sB2t (z)τ
e−(s−1)B
t (z)
tDC1,1Pǫ +
+ C2,2
tDe−sB
t (z)τ
e−(s−1)B
t (z)C2,1Pǫ+
+C3,2e
−sB2t (z)τ
e−(s−1)B
t (z)C3,1Pǫ
dsdz = tr Γ(PǫWPǫ)
with W the term in square brackets.
L2-RHO FORM FOR NORMAL COVERINGS OF FIBRE BUNDLES 13
With a similar argument as in the Claim above and as in [30, p. 199], we have that
2 e−sB
t (z)τe−(s−1)B
t (z)
∥ is bounded independently of t as t→ ∞ so that the condition
(3.4) on the Novikov–Shubin exponent guaranties that the term II. is O(t−1−δ) as t → ∞
as well. �
Theorem 3.4 and Proposition 2.5 taken together imply
Corollary 3.8. If D̃ has regular spectral projections and regularity > 3(dimB + 1)
η̂(2)(D̃) =
dt if dim Z̃ = even
trevenΓ
dt if dim Z̃ = odd
is well defined as a continuos differential form on B.
Remark 3.9. Theorem 3.4 gives η̂(2) as a continuos form on B. Therefore η̂(2) fits
into a weak L2-local index theorem (see [24, 4]). To get a strong local index theorem
one should prove estimates for Str Γ(
t ) in C1-norm, assuming more regularity
on αK .
Remark 3.10. If Z odd dimensional, ρ̂(2) is an even degree differential form, whose
zero degree term is a continuos function on B with values the Cheeger–Gromov L2-eta
invariant of the fibre, η̂
(b) = η(2)(Db, M̃b →Mb).
3.3. Case of uniform invertibility. Suppose the two families D and D̃ are both
uniformly invertible, i.e.
∃µ > 0 such that ∀b ∈ B
spec(Db) ∩ (−µ, µ) = ∅
spec(D̃b) ∩ (−µ, µ) = ∅
(3.5)
In this case the t → ∞ asymptotic is easy and in particular StrΓ(
t ) = O(t−δ),
∀δ > 0 [5]. With the same estimates (see [30, p. 194]) one can look at ∂
StrΓ(
and obtain that StrΓ(
= O(t−δ), ∀δ > 0.
4. The L2 rho form
Definition 4.1. Let (π : M → B, gM/B ,V, E) be a geometric family, p : M̃ → M a
normal covering of it. Assume that kerD forms a vector bundle, and that the family D̃
has regular projections with family Novikov–Shubin invariants αK > 3(dimB + 1). We
define the L2-rho form to be the difference
ρ̂(2)(M,M̃,D) := η̂(2)(D̃)− η̂(D) ∈ C0(B,ΛT ∗B).
14 SARA AZZALI
Remark 4.2. When the fibres are odd dimensional, ρ̂(2) is an even degree differential
form, whose zero degree term is a continuos function on B with values the Cheeger–
Gromov L2-rho invariant of the fibre, ρ̂
(b) = ρ(2)(Db, M̃b →Mb).
We say a continuos k-form ϕ on B has weak exterior derivative ψ (a (k+1)-form) if, for
each smooth chain c : ∆k+1 → B, it holds
ϕ, and we write dϕ = ψ.
Proposition 4.3. If π : M → B has odd dimensional fibres, ρ̂(2)(D) is weakly closed.
Proof. From (2.4),
odde−B̃
odde−B̃
dt. Tak-
ing the limits t → 0, T → ∞ we get
Â(M/B) ch(E/S) =
η̂(2)(D̃)
because limT→∞ tr
odde−B
T = tr(e−∇
0)odd = 0 because tr(e−∇
0) is a form of even degree.
The same happens for the family D̃ where
Â(M/B) ch(E/S) = dη̂(D̃) (strongly).
ρ̂(2)(D) = 0, which gives the result. �
Corollary 4.4. Under uniform invertibility hypothesis (3.5) the form ρ̂(2)(D) is always
(strongly) closed.
Proof. The argument is standard: from transgression formulæ (2.3) (2.4), asymptotic
behaviour, and Remark 3.9, we have dη̂(D) =
Â(M/B) ch(E/S) = dη̂(2)(D̃). �
5. ρ̂(2) and positive scalar curvature for spin vertical bundle
Let π : M → B be a smooth fibre bundle with compact base B. If ĝ denotes a metric
on the vertical tangent bundle T (M/B), and b ∈ B, denote with ĝb the metric induced
on the fibre Mb, and write ĝ = (ĝb)b∈B . Define
R+(M/B) := {ĝ metric on T (M/B) | scal ĝb > 0 ∀b ∈ B}
to be the space of positive scalar curvature vertical metrics (= PSC).
Assume that T (M/B) is spin and let ĝ ∈ R+(M/B) 6= ∅. By Lichnerowicz formula
the family of Dirac operators D/ĝ is uniformly invertible. Let p : M̃ → M be a normal
Γ-covering of π, with M̃ → B having connected fibres, and denote with r : M → BΓ the
map classifying it. The same holds for D̃/ĝ, so that we are in the situation of (3.3).
On the space R+(M/B) we can define natural relations, following [43]. We say ĝ0,
ĝ1 ∈ R+(M/B) are path-connected if there exists a continuos path ĝt ∈ R+(M/B)
between them.
L2-RHO FORM FOR NORMAL COVERINGS OF FIBRE BUNDLES 15
We say ĝ0 and ĝ1 are concordant if on the bundle of the cylinders Π: M × I → B,
Π(m, t) = π(m), there exists a vertical metric Ĝ such that: ∀b ∈ B Ĝb is of product-
type near the boundary, scal(Ĝb) > 0, and onM×{i} → B it coincides with ĝi, i = 0, 1.
Proposition 5.1. Let π : M → B be a smooth fibre bundle with T (M/B) spin and
B compact. Let p : M̃ → M be a normal Γ-covering of the fibre bundle, such that
M̃ → B has connected fibres. Then the rho class [ρ̂(2)(D/)] ∈ H∗dR(B) is constant on the
concordance classes of R+(M/B).
Proof. Let ĝ0 and ĝ1 be concordant, and Ĝ the PSC vertical metric on the family of
cylinders. The family of Dirac operators D/
M×I/B,Ĝ has as boundary the two families
D/0 = (Dz , ĝ0,z)z∈B and D/1 = (Dz , ĝ1,z)z∈B , both invertible. Then the Bismut–Cheeger
theorem in [11] can be applied
M×I/B
Â(M × I/B)− 1
η̂(D/ĝ0) +
η̂(D/ĝ1) in H∗dR(B)
where Ch(IndDM×I,h) = 0 ∈ H∗dR(B).
On the family of coverings we reason as before and apply the index theorem in [36,
Theorem 4] to get
M×I/B
Â(M × I/B)− 1
η̂(2)(D̃/ĝ0) +
η̂(2)(D̃/ĝ1) in H
dR(B)
Subtracting we get [ρ̂(2)(D/g0)] = [ρ̂(2)(D/g1)] ∈ H∗dR(B). �
5.1. ρ̂(2) and the action of a fibre bundle diffeomorphism on R+(M/B). Let
(p, π) be as in Definition 2.1 and assume further that p is the universal covering of M .
If one wants to use [ρ̂(2)(D/)] for the study of R+(M/B) it is important to check how this
invariant changes when ĝ ∈ R+(M/B) is acted on by a fibre bundle diffeomorphism f
preserving the spin structure.
Proposition 5.2. Let f : M →M be a fibre bundle diffeomorphism preserving the spin
structure. Then [ρ̂(2)(D/ĝ)] = [ρ̂(2)(D/f∗ ĝ)]
Proof. We follow the proof [43, Prop. 2.10] for the Cheeger–Gromov rho invariant. Let
ĝ be a vertical metric and denote S = PSpin(M/B) a fixed spin structure, i.e. a 2-fold
covering2 of PSOĝ(T (M/B)) →M .
The eta form downstairs of D/ depends in fact on ĝ, on the spin structure, and on the
horizontal connection THM , so we write here explicitly η̂(D/ĝ) = η̂(D/ĝ,S , THM).
First of all η̂(D/ĝ,S , THM) = η̂(D/f∗ ĝ,f∗S , f∗THM), because f induces a unitary equiva-
lence between the superconnections constructed with the two geometric structures.
2or, equivalently, a 2-fold covering of PGL+(T (M/B)) which is not trivial along the fibres of
PGL+(T (M/B)) → M , [43, p. 8].
16 SARA AZZALI
Because f spin structure preserving, it induces an isomorphism βGL+ between the orig-
inal spin structure S and the pulled back one df∗S. Then βGL+ gives a unitary equiv-
alence between the operator obtained via the pulled back structures, and the Dirac
operator for f∗ĝ and the chosen fixed spin structure, so that η̂(D/f∗ĝ,f∗S , f∗THM) =
η̂(D/f∗ĝ,S , f∗THM). Taken together
η̂(D/ĝ,S , THM) = η̂(D/f∗ĝ,S , f∗THM)
Let p : M̃ →M be the universal covering. Now we look at η̂(2)(D̃/) = η̂(2)(D̃/ĝ,S , THM,p),
where on M̃ the metric, spin structure and connection are the lift via p as by defini-
tion. Again, if we construct the L2 eta form for the entirely pulled back structure, we
get η̂(2)(D̃/ĝ,S , THM,p) = η̂(2)(D̃/f∗ĝ,f∗S , f∗THM,f∗p). Proceeding as above on the spin
structure, η̂(2)(D̃/f∗ĝ,f∗S , f∗THM,f∗p) = η̂(2)(D̃/f∗ĝ,S , f∗THM,f∗p). Since M̃ is the uni-
versal covering we have a covering isomorphism between f∗M̃ and M̃ , which becomes
an isometry when M̃ is endowed of the lift of the pulled back metric f∗ĝ, therefore
η̂(2)(D̃/f∗ ĝ,S , f
∗THM,f∗p) = η̂(2)(D̃/f∗ ĝ,S , f
∗THM,p)
It remains to observe how η̂ and η̂(2) depends on the connection T
HM . We remove for
the moment the hat ˆ to simplify the notation. Let TH0 M,T
1 M two connections, say
given by ω0, ω1 ∈ Ω1(M,T (M/B)) and pose ωt = (1− t)ω0 + tω1. Construct the family
M̆ =M × [0, 1] π̆→ B× [0, 1] =: B̆ as in the proof of Prop. 2.5. On this fibre bundle put
the connection one form ω̆ + dt. Since d̆η̆ = dη̆(·, t)− ∂
η(t)dt we have
η0 − η1 =
d̆η̆ −
M̆/B̆
Â(M × I/B × I)− d
which is the sum of a local contribution plus an exact form. Writing the same for η(2)
we get that for the L2-rho form ρ̂(2)(D/, TH0 M) = ρ̂(2)(D/, TH1 M) ∈ Ω(B)/dΩ(B) and
therefore we get the result. �
5.2. Conjectures. Along the lines of [31, 42] we can state the following conjectures.
Conjecture 5.1. If Γ is torsion-free and satisfies the Baum-Connes conjecture for the
maximal C∗-algebra, then [ρ̂(2)(D/ĝ)] vanishes if ĝ ∈ R+(M/B).
Definition 5.3. Let π : M → B and θ : N → B be two smooth fibre bundles of compact
manifolds over the same base B. A continuos map h : N → M is called a fibrewise
homotopy equivalence if π ◦ h = θ, and there exists g : N →M such that θ ◦ g = π and
such that h ◦ g, g ◦ h are homotopic to the identity by homotopies that take each fibre
into itself.
We work in the following with smooth fibrewise homotopy equivalences.
Definition 5.4. Let Γ be a discrete group and (π : M → M,p : M̃ → M), (θ : N →
B, q : Ñ → N) be two normal Γ-coverings of the fibre bundles π and θ. Denote as
r : M → BΓ, s : N → BΓ the two classifying maps. We say (π, p) and (θ, q) are Γ-
fibrewise homotopy equivalent if there exists a fibrewise homotopy equivalence h : N →
M such that s ◦ h is homotopic to r.
L2-RHO FORM FOR NORMAL COVERINGS OF FIBRE BUNDLES 17
Let Dsign denote the family of signature operators.
Conjecture 5.2. Assume Γ is a torsion-free group that satisfies the Baum-Connes
conjecture for the maximal C∗-algebra. Let h be a orientation preserving Γ-fibrewise
homotopy equivalence between (π, p) and (θ, q) and suppose D̃sign
and D̃sign
have smooth
spectral projections and Novikov–Shubin invariants > 3(dimB + 1).
Then [ρ̂(2)(D̃signM/B)] = [ρ̂(2)(D̃
)] ∈ H∗dR(B).
Appendix A. Analysis on normal coverings
We summarize the analytic tools we use to investigate L2 spectral invariants, namely
NΓ-Hilbert spaces and Sobolev spaces on manifolds of bounded geometry, following the
nice exposition in [46].
A.1. NΓ-Hilbert spaces and von Neumann dimension. Let Γ be a discrete count-
able group and l2(Γ) the Hilbert space of complex valued, square integrable functions
on Γ. Denote with δγ ∈ CΓ the function with value 1 on γ, and zero elsewhere. The
convolution law on CΓ is δγ ∗ δβ = δγβ. Let L be the action of Γ on l2(Γ) by left con-
volution L : Γ → U(l2(Γ)), Lγ(f) = (δγ ∗ f)(x) = f(γ−1x). Right convolution action is
denoted by R.
Definition A.1. The group von Neumann algebra NΓ is defined to be the weak closure
NΓ := L(CΓ)weak in B(l2(Γ)). By the double commutant theorem NΓ = R(CΓ)′, so
that NΓ is the algebra of operators commuting with the right action of Γ. An important
feature of the group von Neumann algebra is its standard trace trΓ : NΓ −→ C defined
as trΓA =< Aδe, δe >l2(Γ). In particular for A =
aγLγ ∈ NΓ, then trΓ(A) = ae.
Definition A.2. A free NΓ-Hilbert space is a Hilbert space of the form W ⊗ l2(Γ),
where W is a Hilbert space and Γ acts on l2(Γ) on the right.
A NΓ-Hilbert space H is a Hilbert space with a unitary right-action of Γ such
that there exists a Γ-equivariant immersion H → V ⊗ l2(Γ) in some free NΓ-
Hilbert space. For H1,H2 NΓ-Hilbert spaces, define BΓ(H1H2) : = {T : H1 →
H2 bounded and Γ-equivariant}.
Let H = V ⊗ l2(Γ) be a free NΓ-Hilbert space. Then BΓ(V ⊗ l2(Γ)) ≃ B(V) ⊗ NΓ.
There exist a trace on the positive elements of this von Neumann algebra, with values
in [0,∞]: let (ψj)j∈N is a orthonormal base of V; if f ∈ B(H)+, its trace is given by
trΓ(f) =
j∈N < f(ψj ⊗ δe), ψj ⊗ δe >. A Γ-trace can be defined also on any NΓ-
Hilbert-space H using the immersion j : H →֒ V ⊗ l2Γ and proveing that the trace does
not depend on the choice of j (see [17] or [39, pag. 17]).
Definition A.3. Let H be a NΓ-Hilbert space. Its von Neumann dimension is defined
as dim Γ(H) = trΓ(id : H → H) ∈ [0,+∞).
Definition A.4. Let H1 and H2 be NΓ-Hilbert spaces. Define
18 SARA AZZALI
• BfΓ(H1,H2) := {A ∈ BΓ(H1,H2)′|dim Γ
<∞} are the Γ-finite rank oper-
ators
• B∞Γ (H1,H2) := B
Γ(H1,H2)
, are the Γ-compact operators
• B2Γ(H) := {A ∈ BΓ(H)s.t. trΓ (AA∗) <∞}, are the Γ-Hilbert-Schmidt operators
• B1Γ(H) := B2Γ(H)B2Γ(H)∗ the Γ-trace class operators.
Their main properties are:
1) Bf (H),B∞(H),B2(H),B1(H) are ideals and Bf ⊂ B1 ⊂ B2 ⊂ B∞;
2) A ∈ Bi(H) if and only if |A| ∈ Bi(H) for i = 1, 2, f,∞.
A.2. Covering spaces, bounded geometry techniques. Let p : Z̃ → Z a normal
Γ-covering of a compact Riemannian manifold Z. Let I ⊂ Z̃ be a fundamental domain
for the (right) action of Γ on Z̃ (I is an open subset s.t. I · γ ∩ I and Z̃ \
I · γ have
zero measure ∀γ 6= e).
Let E → Z a Hermitian vector bundle, and Ẽ = p∗E the pull-back. The sections
C∞c (Z̃, Ẽ) form a CΓ-right module for the action (ξ · f)(m̃) =
(R∗gξ)(m̃)f(g
−1) where
(R∗gξ)(m̃) := ξ(m̃g). Its Hilbert space completion L
2(Z̃, Ẽ) is a Γ-free Hilbert space
in the sense of definition A.2, in fact the map ψ : L2(Z̃, Ẽ) −→ L2(I, Ẽ|I) ⊗ l2(Γ),
|I ⊗ δγ is an isomorphism.
The Γ-trace class operators are characterized as follows: let A ∈ BΓ(L2(Z̃, Ẽ))
A ∈ B1Γ(L2(Z̃, Ẽ)) if and only if χI |A|χI ∈ B1(L2(I, E|I))
If A ∈ B1Γ(L2(Z̃, Ẽ)) then trΓ(A) = tr(χIAχI). If A ∈ B1Γ(L2(Z̃, Ẽ)) has Schwartz
kernel [A] continuos, then
trA =
([A](x, x)) dx =
π∗ tr Ẽx ([A](x, x)) dx . (A.1)
The covering of a compact manifold and the pulled back bundle Ẽ above are the most
simple examples of manifolds of bounded geometry3.
The analysis on manifolds of bounded geometry was developped in [45]. We specialize
here to the case of a normal covering Z̃.
The Sobolev spaces of sections are defined, for k ≥ 0, as the completion Hk(Z̃, Ẽ) :=
C∞c (Z̃, Ẽ)
where ‖f‖k :=
L2(Z̃,Ẽ⊗jT ∗Z̃); for k < 0 H
k(Z̃, Ẽ) is defined
as the dual of H−k(Z̃, Ẽ).
3Let (N, g) be a Riemannian manifold. N is of bounded geometry if
(1) it has positive injectivity radius i(N, g);
(2) the curvature RN and all its covariant derivatives are bounded.
A hermitian vector bundle E → N is of bounded geometry if the curvature RE and all its covariant
derivatives are bounded. This can be characterized in normal coordinates with conditions on g, coordi-
nate transformations and ∇ (see for example in [45] and [46]).
L2-RHO FORM FOR NORMAL COVERINGS OF FIBRE BUNDLES 19
The spaces of uniform Ck sections are defined as follows: UCk(M̃) = {f : M̃ → C | f ∈
Ck and ‖f‖k ≤ c(k) ∀k}, where ‖f‖k = supm̃∈M̃,Xi{|∇X1 . . .∇Xkf(m̃)|}, and analo-
gously for sections UCk(M̃, Ẽ). UC∞(M̃, Ẽ) is the Fréchet space :=
k UCk(M̃ , Ẽ).
The following Sobolev embedding property holds [45]: if dim M̃ = n, then for j > n
there is a continuos inclusion Hj(M̃ , Ẽ) →֒ UCk(M̃ , Ẽ).
The algebra UDiff(M̃, Ẽ) of uniform differential operators is the algebra generated by
operators in UC∞(M̃ ,End Ẽ) and derivatives {∇ẼX}X∈UC∞(M̃ ,TM̃) with respect to uni-
form vector fields. P ∈ UDiff(M̃ , Ẽ) extends to a continuos operator Hj(M̃, Ẽ) →
Hj−k(M̃ , Ẽ) ∀j ∈ Z.
P ∈ UDiff(M̃ , Ẽ) is called uniformly elliptic if its principal symbol σpr ∈
UC∞(T ∗M̃, π∗ End Ẽ) is invertible out of an ǫ-neighborhood of 0 ∈ T ∗M̃ , with inverse
section which can be uniformly estimated.
For a uniformly elliptic operator T the G̊arding inequality holds:
Hs+k(M̃,Ẽ)
≤ c(s, k) (‖ϕ‖Hs + ‖Tϕ‖Hs) ∀s ∈ R (A.2)
If T is a continuos operator T : C∞c (N,E) → (C∞c (M̃ , Ẽ))′ we will denote its Schwartz
kernel with [T ] ∈ C∞(M̃ × M̃, Ẽ4Ẽ∗).
Definition A.5. We say that T : C∞c (N,E) → (C∞c (M̃ , Ẽ))′ has order k ∈ Z if ∀s ∈
Z it admits a bounded extension Hs(M̃, Ẽ) → Hs−k(M̃, Ẽ). Hence it is closable as
unbounded operator on L2(M̃ , Ẽ).
The space of order k operators is denoted Opk(M̃, Ẽ), and comes with the seminorms
on B(Hs(M̃ , Ẽ),Hs−k(M̃ , Ẽ)). The space Op−∞(M̃, Ẽ) =
k(M̃, Ẽ) is a Fréchet
space.
Finally, an operator T ∈ Opk(M̃ , Ẽ) is called elliptic if it satisfies G̊arding inequality.
We will denote as OpkΓ(M̃, Ẽ) the subspace of Γ-invariant operators in Op
k(M̃, Ẽ).
Consider the Fréchet space of continuos rapidly decreasing functions
RB(R) = {f : R → C : f continuos, and
∣(1 + x
2 f(x)
∣ <∞ ∀k}
Let T ∈ Opk(M̃ , Ẽ) , k ≥ 1 an elliptic, formally self-adjoint operator. Denote again by T
its closure, with domain Dom T = Hk(M̃ , Ẽ). From G̊arding inequality (A.2) the map
RC(R) −→ B(Hj(M̃ , Ẽ),H l(M̃ , Ẽ)), f 7→ f(T ) is continuos ∀j, l ∈ Z, so that
RC(R) −→ Op−∞(Z̃, Ẽ) , f 7→ f(T )
is continuos. One can prove that the Schwartz kernel of such operator is smooth: by
(A.2) and Sobolev embedding, for L = [n
+ 1] the map
Op−2L−l(Z̃, Ẽ) −→ UC l(Z̃ × M̃,E4E∗) , T 7→ [T ] (A.3)
is continuos ∀l ∈ N; then in particular for f ∈ RC(R), the kernel [f(T )] ∈ UC∞(Z̃ ×
Z̃, Ẽ4Ẽ∗) and the map RB(R) −→ UC∞(Z̃ × Z̃, Ẽ4Ẽ∗) , f 7→ [f(T )] is continuos.
20 SARA AZZALI
Lemma A.6. Since elements in Op−∞Γ (M̃, Ẽ) are Γ-trace class, then for T ∈
Op kΓ(M̃, Ẽ) elliptic and selfadjoint, the map RC(R) → B1Γ(M̃ , Ẽ) , f 7→ f(T ) is contin-
uos. As a consequence ∀m ∃l such that
| trΓ f(T )|m ≤ C ‖f(T )‖−l,l . (A.4)
Mathematisches Institut, Georg-August Universität Göttingen
E-mail address: [email protected]
|
0704.0910 | On the Nonexistence of Nontrivial Involutive n-Homomorphisms of
C*-algebras |
ON THE NONEXISTENCE OF NONTRIVIAL INVOLUTIVE
n-HOMOMORPHISMS OF C⋆-ALGEBRAS
EFTON PARK AND JODY TROUT
Abstract. An n-homomorphism between algebras is a linear map φ : A → B
such that φ(a1 · · · an) = φ(a1) · · ·φ(an) for all elements a1, . . . , an ∈ A. Every
homomorphism is an n-homomorphism, for all n ≥ 2, but the converse is false,
in general. Hejazian et al. [7] ask: Is every ∗-preserving n-homomorphism
between C⋆-algebras continuous? We answer their question in the affirmative,
but the even and odd n arguments are surprisingly disjoint. We then use these
results to prove stronger ones: If n > 2 is even, then φ is just an ordinary
∗-homomorphism. If n ≥ 3 is odd, then φ is a difference of two orthogonal
∗-homomorphisms. Thus, there are no nontrivial ∗-linear n-homomorphisms
between C⋆-algebras.
1. Introduction
Let A and B be algebras and n ≥ 2 an integer. A linear map φ : A → B is an
n-homomorphism if for all a1, a2, . . . , an ∈ A,
φ(a1a2 · · · an) = φ(a1)φ(a2) · · ·φ(an).
A 2-homomorphism is then just a homomorphism, in the usual sense, between
algebras. Furthermore, every homomorphism is clearly also an n-homomorphism
for all n ≥ 2, but the converse is false, in general. The concept of n-homomorphism
was studied for complex algebras by Hejazian, Mirzavaziri, and Moslehian [7]. This
concept also makes sense for rings and (semi)groups. For example, an AEn-ring is a
ring R such that every additive endomorphism φ : R → R is an n-homomorphism;
Feigelstock [4, 5] classified all unital AEn-rings.
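As an illustrative aside (not part of the original paper), the following small numpy sketch checks this dichotomy numerically: the negative of a ∗-homomorphism (here the identity map on 2 × 2 complex matrices) is a 3-homomorphism but is not an ordinary homomorphism. The random test matrices and the tolerance of allclose are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)

def rand_matrix(n=2):
    # a random complex n x n matrix used only as test data
    return rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def phi(a):
    # negative of the identity *-homomorphism on M_2(C)
    return -a

a, b, c = rand_matrix(), rand_matrix(), rand_matrix()

# 3-homomorphism property: phi(abc) = phi(a) phi(b) phi(c), since (-1)^3 = -1
print(np.allclose(phi(a @ b @ c), phi(a) @ phi(b) @ phi(c)))   # True

# but phi is not multiplicative: phi(ab) = -ab while phi(a) phi(b) = ab
print(np.allclose(phi(a @ b), phi(a) @ phi(b)))                # False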
In [7], Hejazian et al. ask: Is every ∗-preserving n-homomorphism between C⋆-
algebras continuous? We answer in the affirmative by proving that every involutive
n-homomorphism φ : A → B between C⋆-algebras is in fact norm contractive:
‖φ‖ ≤ 1. Surprisingly, the arguments for the even and odd n cases are disjoint
and, thus, are discussed in different sections. When n = 3, automatic continuity is
reported by Bračič and Moslehian [2], but note that the proof of their Theorem 2.1
does not extend to the nonunital case since the unitization of a 3-homomorphism
is not a 3-homomorphism, in general.
Using these automatic continuity results, we prove the following stronger results:
If n > 2 is even, every ∗-linear n-homomorphism φ : A → B between C⋆-algebras
is in fact a ∗-homomorphism. If n ≥ 3 is odd, every ∗-linear n-homomorphism
φ : A→ B is a difference φ(a) = ψ1(a)−ψ2(a) of two orthogonal ∗-homomorphisms
ψ1 ⊥ ψ2. Regardless, for all integers n ≥ 3, every positive linear n-homomorphism
MSC 2000 Classification: Primary 46L05; Secondary 47B99, 47L30.
http://arxiv.org/abs/0704.0910v3
is a ∗-homomorphism. Note that if ψ is a ∗-homomorphism, then −ψ = 0− ψ is a
norm contractive ∗-preserving 3-homomorphism that is not positive linear.
There is also a dichotomy between the unital and nonunital cases. When the
domain algebra A is unital, there is a simple representation of an n-homomorphism
as a certain n-potent multiple of a homomorphism (discussed in the Appendix.)
The nonunital case is more subtle. For example, if A and B are nonunital (Banach)
algebras such that A^n = B^n = {0}, then every linear map L : A → B (bounded or
unbounded) is, trivially, an n-homomorphism (see Examples 2.5 and 4.3 of [7]).
The outline of the paper is as follows: In Section 2, we prove automatic continuity
for the even case and in Section 3 for the odd case. In Section 4, we prove our
nonexistence results. A key fact in many of our proofs is the Cohen Factorization
Theorem [3] of C⋆-algebras. (See Proposition 2.33 [8] for an elementary proof of this
important result.) Finally, in Appendix A, we collect some facts about n-potents
that we need.
The authors would like to thank Dana Williams and Tom Shemanske for their
helpful comments and suggestions.
2. Automatic Continuity: The Even Case
In this section, we prove that when n > 2 is even, every involutive (i.e., ∗-linear)
n-homomorphism between C⋆-algebras is completely positive and norm contractive,
which generalizes the well-known result for ∗-homomorphisms (n = 2). Recall that
a linear map θ : A→ B between C⋆-algebras is positive if a ≥ 0 implies θ(a) ≥ 0 or,
equivalently, for every a ∈ A there is a b ∈ B such that θ(a∗a) = b∗b. We say that
θ is completely positive if, for all k ≥ 1, the induced map θk : Mk(A) → Mk(B),
θk((aij)) = (θ(aij)), on k × k matrices is positive.
Theorem 2.1. Let H be a Hilbert space. If n ≥ 2 is even, then every involutive
n-homomorphism from a C*-algebra A into B(H) is completely positive.
Proof. Let φ : A → B(H) be an involutive n-homomorphism. We may assume
n = 2k > 2. Let 〈·, ·〉 denote the inner product on H. By Stinespring’s Theorem
[9] (see Prop. II.6.6 [1]), φ is completely positive if and only if for any m > 1 and elements a_1, . . . , a_m ∈ A and vectors v_1, . . . , v_m ∈ H we have
∑_{i,j=1}^{m} 〈φ(a_i^∗ a_j) v_j, v_i〉 ≥ 0.
We proceed as follows: for each 1 ≤ i ≤ m use the Cohen Factorization Theorem
[3] to factor ai = ai1 · · · aik into a product of k elements. Thus, their adjoints factor
as a_i^∗ = a_{ik}^∗ · · · a_{i1}^∗. Since n = 2k, we compute
∑_{i,j=1}^{m} 〈φ(a_i^∗ a_j) v_j, v_i〉 = ∑_{i,j=1}^{m} 〈φ(a_{ik}^∗ · · · a_{i1}^∗ a_{j1} · · · a_{jk}) v_j, v_i〉
= ∑_{i,j=1}^{m} 〈φ(a_{ik})^∗ · · · φ(a_{i1})^∗ φ(a_{j1}) · · · φ(a_{jk}) v_j, v_i〉
= 〈∑_{j=1}^{m} φ(a_{j1}) · · · φ(a_{jk}) v_j, ∑_{i=1}^{m} φ(a_{i1}) · · · φ(a_{ik}) v_i〉
= 〈x, x〉 ≥ 0,
where x = ∑_{i=1}^{m} φ(a_{i1}) · · · φ(a_{ik}) v_i ∈ H. The result now follows. □
Even though the previous result is a corollary of the more general theorem below,
we have included it because the proof technique is different.
Lemma 2.2. Let φ : A → B be an n-homomorphism. Then, for all k ≥ 1, the
induced maps φk : Mk(A) → Mk(B) on k × k matrices are n-homomorphisms.
Moreover, if φ is involutive (φ(a∗) = φ(a)∗), then each φk is also involutive.
Proof. Given n matrices a^1 = (a^1_{ij}), . . . , a^n = (a^n_{ij}) in M_k(A), we can express their product a^1 a^2 · · · a^n = (a_{ij}), where the (i, j)-th entry a_{ij} is given by the formula
a_{ij} = ∑_{m_1,...,m_{n−1}=1}^{k} a^1_{i m_1} a^2_{m_1 m_2} · · · a^n_{m_{n−1} j}.
Since φ_k(a^1 a^2 · · · a^n) = (φ(a_{ij})) by definition and
φ(a_{ij}) = ∑_{m_1,...,m_{n−1}=1}^{k} φ(a^1_{i m_1} a^2_{m_1 m_2} · · · a^n_{m_{n−1} j})
= ∑_{m_1,...,m_{n−1}=1}^{k} φ(a^1_{i m_1}) φ(a^2_{m_1 m_2}) · · · φ(a^n_{m_{n−1} j})
= [φ_k(a^1) φ_k(a^2) · · · φ_k(a^n)]_{ij},
it follows that φk : Mk(A) → Mk(B) is an n-homomorphism. Now suppose that φ
is involutive. We compute, for all a = (a_{ij}) ∈ M_k(A),
φ_k(a^∗) = φ_k((a^∗_{ji})) = (φ(a^∗_{ji})) = (φ(a_{ji})^∗) = φ_k(a)^∗,
and hence each φ_k : M_k(A) → M_k(B) is involutive. □
Theorem 2.3. Let φ : A → B be an involutive n-homomorphism between C*-
algebras. If n ≥ 2 is even, then φ is completely positive. Thus, φ is bounded.
Proof. We may assume n = 2k > 2. Since φ is linear, we want to show that for
every a ∈ A we have φ(a∗a) ≥ 0. By the Cohen Factorization Theorem, for any
a ∈ A we can find a1, ..., ak ∈ A such that the factorization a = a1 · · · ak holds.
Thus, the adjoint factors as a^∗ = a_k^∗ · · · a_1^∗. Since n = 2k and φ is n-multiplicative and ∗-preserving,
φ(a^∗a) = φ(a_k^∗ · · · a_1^∗ a_1 · · · a_k)
= φ(a_k)^∗ · · · φ(a_1)^∗ φ(a_1) · · · φ(a_k)
= (φ(a_1) · · · φ(a_k))^∗ (φ(a_1) · · · φ(a_k))
= b^∗ b ≥ 0,
where b = φ(a1) · · ·φ(ak) ∈ B. Thus, φ is a positive linear map. By the previous
lemma, all of the induced maps φk : Mk(A) → Mk(B) on k × k matrices are
involutive n-homomorphisms and are positive. Hence, φ is completely positive and
therefore bounded [1]. □
We now wish to show that if n ≥ 2 is even, then an involutive n-homomorphism
is actually norm-contractive. First, we will need generalizations of the familiar
C⋆-identity appropriate for n-homomorphisms.
Lemma 2.4. Let A be a C⋆-algebra. For all k ≥ 1, we have that
‖x‖^{2k} = ‖(x^∗x)^k‖ and ‖x‖^{2k+1} = ‖x(x^∗x)^k‖
for all x ∈ A.
Proof. In the even case, we have easily that
‖x‖^{2k} = (‖x‖^2)^k = ‖x^∗x‖^k = ‖(x^∗x)^k‖
by the functional calculus, since x^∗x ≥ 0. In the odd case, we compute again using the C⋆-identity and functional calculus:
‖x(x^∗x)^k‖^2 = ‖(x(x^∗x)^k)^∗(x(x^∗x)^k)‖
= ‖(x^∗x)^k x^∗x (x^∗x)^k‖
= ‖(x^∗x)^{2k+1}‖ = ‖x^∗x‖^{2k+1}
= (‖x‖^2)^{2k+1} = (‖x‖^{2k+1})^2;
the result follows by taking square roots. □
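As an aside (not from the paper), both identities can be spot-checked numerically with the operator (spectral) norm on matrices; the matrix size, the exponent k, and the tolerance below are arbitrary.

import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
k = 3

def op_norm(m):
    # operator norm = largest singular value
    return float(np.linalg.norm(m, 2))

# even case: ||x||^(2k) = ||(x* x)^k||
print(np.isclose(op_norm(x) ** (2 * k),
                 op_norm(np.linalg.matrix_power(x.conj().T @ x, k))))        # True

# odd case: ||x||^(2k+1) = ||x (x* x)^k||
print(np.isclose(op_norm(x) ** (2 * k + 1),
                 op_norm(x @ np.linalg.matrix_power(x.conj().T @ x, k))))    # True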
Theorem 2.5. Let φ : A → B be an involutive n-homomorphism of C⋆-algebras.
If φ is bounded, then φ is norm contractive (‖φ‖ ≤ 1).
Proof. Suppose n = 2k is even. Then for all x ∈ A we have
φ((x^∗x)^k) = φ(x^∗x · · · x^∗x) = (φ(x^∗)φ(x))^k = (φ(x)^∗φ(x))^k.
Thus, by the previous lemma,
‖φ(x)‖^n = ‖φ(x)‖^{2k} = ‖(φ(x)^∗φ(x))^k‖ = ‖φ((x^∗x)^k)‖
≤ ‖φ‖ ‖(x^∗x)^k‖ = ‖φ‖ ‖x‖^{2k} = ‖φ‖ ‖x‖^n,
which implies that ‖φ‖ ≤ 1 by taking n-th roots. The proof for the odd case n = 2k + 1 is similar. □
3. Automatic Continuity: The Odd Case
The positivity methods above do not work when n is odd, since the negation
of a ∗-homomorphism defines an involutive 3-homomorphism that is (completely)
bounded, but not positive. We need the following slight generalization of Lemma
3.5 of Harris [6].
Lemma 3.1. Let A be a C⋆-algebra and let λ ≠ 0 and k ≥ 1. If a ∈ A, then λ ∈ σ((a^∗a)^k) if and only if there does not exist an element c ∈ A with
(1) c (λ − (a^∗a)^k) = a.
Proof. If λ ∉ σ((a^∗a)^k), then c = a(λ − (a^∗a)^k)^{−1} ∈ A satisfies
c (λ − (a^∗a)^k) = a(λ − (a^∗a)^k)^{−1}(λ − (a^∗a)^k) = a.
and so (1) holds.
On the other hand, if λ ∈ σ((a^∗a)^k) then, by the commutative functional calculus, there is a sequence {b_m}_{m=1}^{∞} in the unitization A^+ with b_m ↛ 0 but d_m := (λ − (a^∗a)^k) b_m → 0. Since λ ≠ 0, we must have
a^∗(aa^∗)^{k−1}(a b_m) = (a^∗a)^k b_m = λ b_m − d_m ↛ 0,
which implies a b_m ↛ 0. Hence, there does not exist an element c ∈ A that can satisfy equation (1), since this would imply that
a b_m = c (λ − (a^∗a)^k) b_m → 0,
which is a contradiction. This proves the lemma. □
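As an illustrative numerical aside (not from the paper), the first direction can be checked directly for matrices: when λ lies outside the spectrum of (a^∗a)^k, the element c = a(λ − (a^∗a)^k)^{−1} solves equation (1). The matrix, exponent, and offset below are arbitrary choices.

import numpy as np

rng = np.random.default_rng(3)
a = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
k = 2

p = np.linalg.matrix_power(a.conj().T @ a, k)     # (a* a)^k, a positive matrix
lam = np.max(np.linalg.eigvalsh(p)) + 1.0         # a value outside the spectrum of p
c = a @ np.linalg.inv(lam * np.eye(3) - p)

# equation (1): c (lam - (a* a)^k) = a
print(np.allclose(c @ (lam * np.eye(3) - p), a))  # True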
We now prove automatic continuity for involutive n-homomorphisms of C⋆-
algebras for all odd values of n. Note that we do not assume that A is unital,
nor do we appeal to the unitization φ+ : A+ → B+ of φ, which is not an n-
homomorphism, in general.
Theorem 3.2. Let φ : A → B be an involutive n-homomorphism between C⋆-
algebras. If n ≥ 3 is odd, then ‖φ‖ ≤ 1, i.e., φ is norm contractive.
Proof. Let n = 2k + 1, where k ≥ 1. Given any a ∈ A and λ > 0 such that λ ∉ σ((a^∗a)^k), there is, by the previous lemma, an element c ∈ A such that
a = c (λ − (a^∗a)^k) = λc − c(a^∗a)^k.
Noting that c(a^∗a)^k is a product of 2k + 1 = n elements in A, and φ is a ∗-linear n-homomorphism, we compute:
φ(a) = φ(λc − c(a^∗a)^k) = λφ(c) − φ(c(a^∗a)^k)
= λφ(c) − φ(c)(φ(a)^∗φ(a))^k = φ(c)(λ − (φ(a)^∗φ(a))^k),
which yields that there is an element φ(c) ∈ B with
φ(c)(λ − (φ(a)^∗φ(a))^k) = φ(a).
By the previous lemma, we conclude that λ ∉ σ((φ(a)^∗φ(a))^k). Thus, we have shown the following inclusion of spectra:
σ((φ(a)^∗φ(a))^k) ⊆ σ((a^∗a)^k) ∪ {0}.
Therefore, by the spectral radius formula [1, II.1.6.3] and the generalization of the C⋆-identity in Lemma 2.4, we must deduce that
‖φ(a)‖^{2k} = ‖(φ(a)^∗φ(a))^k‖ = r((φ(a)^∗φ(a))^k) ≤ r((a^∗a)^k) = ‖(a^∗a)^k‖ = ‖a‖^{2k},
which implies that ‖φ(a)‖ ≤ ‖a‖ for all a ∈ A, as desired. □
Note that the argument in the previous proof does not work for n = 2k even,
since we would need to employ (a^∗a)^{k−1}a, which is a product of 2k − 1 = n − 1
elements as needed, but not self-adjoint, in general. Thus, we could not appeal
to the spectral radius formula for self-adjoint elements and Lemma 3.1 would not
apply. Hence, the even and odd n arguments are essentially disjoint.
4. Nonexistence of Nontrivial Involutive n-homomorphisms of
C⋆-algebras
Our first main result is the nonexistence of nontrivial n-homomorphisms on
unital C⋆-algebras for all n ≥ 3. We do the unital case first since it is much simpler
to prove and helps to frame the argument for the nonunital case.
Theorem 4.1. Let φ : A→ B be an involutive n-homomorphism between the C⋆-
algebras A and B, where A is unital. If n ≥ 2 is even, then φ is a ∗-homomorphism.
If n ≥ 3 is odd, then φ is the difference φ(a) = ψ1(a) − ψ2(a) of two orthogonal
∗-homomorphisms ψ1 ⊥ ψ2 : A→ B.
Proof. In either case, by Proposition A.1, the element e = φ(1) ∈ B is an
n-potent (e^n = e) and is self-adjoint, because
e = φ(1) = φ(1^∗) = φ(1)^∗ = e^∗.
Also, there is an associated algebra homomorphism ψ : A → B, defined for all a ∈ A by the formula
ψ(a) = e^{n−2} φ(a) = φ(a) e^{n−2},
such that φ(a) = eψ(a) = ψ(a)e. In either case, ψ is ∗-linear, since φ is ∗-linear and e is self-adjoint and commutes with the range of φ:
ψ(a^∗) = e^{n−2} φ(a^∗) = e^{n−2} φ(a)^∗ = (e^{n−2} φ(a))^∗ = ψ(a)^∗.
Now, if n = 2k is even, e = e^n = (e^k)^∗ e^k ≥ 0, and so e = p is a projection. Thus, φ(a) = pψ(a) = ψ(a)p = pψ(a)p is a ∗-homomorphism. If n ≥ 3 is odd, then by Lemma A.8, e is the difference of two orthogonal projections, e = p_1 − p_2, which must commute with both ψ and φ by the functional calculus. Define ψ_1, ψ_2 : A → B by ψ_i(a) = p_i ψ(a) p_i for all a ∈ A and i = 1, 2. Then ψ_1 ⊥ ψ_2 are orthogonal ∗-homomorphisms, and
ψ_1(a) − ψ_2(a) = p_1 ψ(a) − p_2 ψ(a) = eψ(a) = φ(a)
for all a ∈ A, from which the desired result follows. □
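To make the odd-case conclusion concrete (our own illustration, not an example from the paper), take ψ to be the identity ∗-homomorphism on the 2 × 2 complex matrices and let p_1, p_2 be the two diagonal blocks of the 4 × 4 matrices. Then φ(a) = diag(a, −a) = ψ_1(a) − ψ_2(a) is an involutive 3-homomorphism that is not a homomorphism:

import numpy as np

def phi(a):
    # phi(a) = diag(a, -a) = psi1(a) - psi2(a), where psi1(a) = diag(a, 0)
    # and psi2(a) = diag(0, a) are orthogonal *-homomorphisms
    z = np.zeros_like(a)
    return np.block([[a, z], [z, -a]])

rng = np.random.default_rng(2)
a, b, c = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2)) for _ in range(3))

print(np.allclose(phi(a @ b @ c), phi(a) @ phi(b) @ phi(c)))   # True: 3-homomorphism
print(np.allclose(phi(a @ b), phi(a) @ phi(b)))                # False: not a homomorphism
print(np.allclose(phi(a.conj().T), phi(a).conj().T))           # True: *-linear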
Corollary 4.2. Let φ : A→ B be a linear map between C⋆-algebras. If A is unital,
the following are equivalent for all integers n ≥ 2:
a.) φ is a ∗-homomorphism.
b.) φ is a positive n-homomorphism.
c.) φ is an involutive n-homomorphism and φ(1) ≥ 0.
Proof. Clearly (a) =⇒ (b) =⇒ (c). If n ≥ 2 is even, then (c) =⇒ (a) by the
previous result. If n ≥ 3 is odd, then by the previous result, we only need to show
that φ is positive. Let n = 2k + 1. Given any a ∈ A, by the Cohen Factorization
Theorem, we can write a = a1 · · · ak. Since φ(1) ≥ 0, by hypothesis, and n = 2k+1,
we compute:
φ(a^∗a) = φ(a^∗ 1 a) = φ(a_k^∗ · · · a_1^∗ 1 a_1 · · · a_k)
= φ(a_k)^∗ · · · φ(a_1)^∗ φ(1) φ(a_1) · · · φ(a_k)
= (φ(a_1) · · · φ(a_k))^∗ φ(1) (φ(a_1) · · · φ(a_k))
= b^∗ φ(1) b ≥ 0,
where b = φ(a_1) · · · φ(a_k) ∈ B. Thus, φ is positive linear and therefore a ∗-homomorphism. □
Next, we extend our nonexistence results to the nonunital case, by appealing to
approximate unit arguments (which require continuity!) and the following impor-
tant factorization property of ∗-preserving n-homomorphisms.
Lemma 4.3 (Coherent Factorization Lemma). Let φ : A → B be an involutive
n-homomorphism of C⋆-algebras. For any 1 ≤ k ≤ n and any a ∈ A, if a =
a1 · · ·ak = b1 · · · bk in A, then
φ(a1) · · ·φ(ak) = φ(b1) · · ·φ(bk) ∈ B.
Note that, in general, φ(a) ≠ φ(a_1) · · · φ(a_k) when 1 < k < n.
Proof. Clearly, we may assume 1 < k < n. Since φ is ∗-linear, the range
φ(A) ⊂ B is a self-adjoint linear subspace of B (but not necessarily a subalgebra,
in general). Given any d = φ(c) ∈ φ(A), using the Cohen Factorization Theorem,
write d = d1 · · · dn = φ(c1) · · ·φ(cn) where di = φ(ci) for 1 ≤ i ≤ n. Consider the
following computation:
φ(a1) · · ·φ(ak)d = φ(a1) · · ·φ(ak)φ(c1) · · ·φ(cn)
= φ(a1 · · · akc1 · · · cn−k)φ(cn−k+1) · · ·φ(cn)
= φ(b1 · · · bkc1 · · · cn−k)φ(cn−k+1) · · ·φ(cn)
= φ(b1) · · ·φ(bk)φ(c1) · · ·φ(cn)
= φ(b1) · · ·φ(bk)d.
Let f = φ(a1) · · ·φ(ak) − φ(b1) · · ·φ(bk). Then fd = 0 for all d ∈ φ(A) ⊂ B, and
thus fd = 0 for all d in the ∗-subalgebra Aφ of B generated by φ(A). In particular,
for the element
d_a = φ(a_k^∗) · · · φ(a_1^∗) − φ(b_k^∗) · · · φ(b_1^∗) = f^∗ ∈ A_φ.
Hence, ff^∗ = f d_a = 0, and so ‖f‖^2 = ‖ff^∗‖ = 0 by the C⋆-identity. Therefore,
φ(a_1) · · · φ(a_k) − φ(b_1) · · · φ(b_k) = f = 0,
and the result is proven. □
Definition 4.4. An approximate unit for a (nonunital) C⋆-algebra A is a net
{e_λ}_{λ∈Λ} of elements in A indexed by a directed set Λ such that
a.) 0 ≤ eλ and ‖eλ‖ ≤ 1 for all λ ∈ Λ;
b.) eλ ≤ eµ if λ ≤ µ in Λ;
c.) For all a ∈ A, lim_λ ‖a e_λ − a‖ = lim_λ ‖e_λ a − a‖ = 0.
Every C⋆-algebra has an approximate unit, which is countable (Λ = N) if A is
separable (see Section II.4 of Blackadar [1].)
Theorem 4.5. Suppose φ : A → B is an involutive n-homomorphism of C⋆-
algebras, where A is nonunital. Then, for all a ∈ A, the limit
ψ(a) = lim_λ φ(e_λ)^{n−2} φ(a) = lim_λ φ(a) φ(e_λ)^{n−2}
exists, independently of the choice of the approximate unit {e_λ} of A, and defines
a ∗-homomorphism ψ : A→ B such that
φ(a) = lim_λ φ(e_λ) ψ(a)
for all a ∈ A.
Proof. We may assume n ≥ 3. Given a ∈ A, use the Cohen Factorization
Theorem to factor a = a1a2 · · · an. Define a map ψ : A→ B by
ψ(a) = φ(a1a2)φ(a3) · · ·φ(an) = φ(a1) · · ·φ(an−2)φ(an−1an),
which is well-defined by the Coherent Factorization Lemma. The continuity of φ
implies that
lim_λ φ(e_λ)^{n−2} φ(a) = lim_λ φ(e_λ)^{n−2} φ(a_1) · · · φ(a_n)
= lim_λ φ(e_λ^{n−2} a_1 a_2) φ(a_3) · · · φ(a_n)
= φ(a_1 a_2) φ(a_3) · · · φ(a_n) = ψ(a) ∈ B.
It follows that we can write:
ψ(a) = lim_λ φ(e_λ)^{n−2} φ(a) = lim_λ φ(a) φ(e_λ)^{n−2},
and so ψ : A→ B is linear since φ is linear. Moreover, since φ is ∗-linear, it follows
that ψ is also ∗-linear:
ψ(a)^∗ = (φ(a_1 a_2) φ(a_3) · · · φ(a_n))^∗
= φ(a_n)^∗ · · · φ(a_3)^∗ φ(a_1 a_2)^∗
= φ(a_n^∗) · · · φ(a_3^∗) φ(a_{12}^∗)
= φ(a_{n1}^∗ a_{n2}^∗) φ(a_{n−1}^∗) · · · φ(a_{12}^∗)
= ψ((a_{n1}^∗ a_{n2}^∗)(a_{n−1}^∗) · · · (a_{12}^∗))
= ψ(a_n^∗ · · · a_1^∗) = ψ(a^∗).
In the computation above, we factored a_n = a_{n2} a_{n1} and set a_{12} = a_1 a_2 to obtain the factorization a^∗ = a_n^∗ · · · a_1^∗ = (a_{n1}^∗ a_{n2}^∗) a_{n−1}^∗ · · · a_{12}^∗ into n elements. Given
a, b ∈ A with factorizations a = a1 · · · an and b = b1 · · · bn, the fact that φ is an
n-homomorphism implies:
ψ(a)ψ(b) = (φ(a_1 a_2) φ(a_3) · · · φ(a_n)) (φ(b_1 b_2) φ(b_3) · · · φ(b_n))
= φ((a_1 a_2) a_3 · · · a_n (b_1 b_2)) φ(b_3) · · · φ(b_n)
= φ((a b_1) b_2) φ(b_3) · · · φ(b_n)
= ψ(ab);
note that ab = (ab1)b2b3 · · · bn is a factorization of ab into n elements. A second
proof of multiplicativity goes as follows:
ψ(ab) = lim_λ φ(e_λ)^{n−2} φ(ab) = lim_λ φ(e_λ)^{n−2} φ(lim_µ a e_µ^{n−2} b)
= lim_λ φ(e_λ)^{n−2} lim_µ φ(a e_µ^{n−2} b)
= lim_λ φ(e_λ)^{n−2} lim_µ φ(a) φ(e_µ)^{n−2} φ(b)
= lim_λ φ(e_λ)^{n−2} φ(a) lim_µ φ(e_µ)^{n−2} φ(b)
= ψ(a)ψ(b).
Thus, ψ is a well-defined ∗-homomorphism. Finally, we compute:
lim_λ φ(e_λ) ψ(a) = lim_λ φ(e_λ) φ(a_1 a_2) φ(a_3) · · · φ(a_n)
= lim_λ φ(e_λ (a_1 a_2) a_3 · · · a_n) = lim_λ φ(e_λ a)
= φ(a).
Using similar factorizations, the fact that {e_λ^n} is also an approximate unit for
A, and the fact that the strict completion of the C⋆-algebra C⋆(φ(A)) generated
by the range φ(A) is the multiplier algebra M(C⋆(ψ(A))), we obtain the nonunital
version of Proposition A.1.
Corollary 4.6. Suppose that A and B are C⋆-algebras with A nonunital, and let
φ : A → B be an involutive n-homomorphism with associated ∗-homomorphism
ψ : A → B. Then there is a self-adjoint n-potent e = e^∗ = e^n ∈ M(C^∗(φ(A))) such that φ(e_λ) → e strictly for any approximate unit {e_λ} of A, and with the property
φ(a) = eψ(a) = ψ(a)e
ψ(a) = e^{n−2} φ(a)
for all a ∈ A.
Proof. By the previous proof, we can define e ∈ M(C⋆(φ(A))) on generators
φ(a) by
eφ(a) = lim_λ φ(e_λ) φ(a) = φ(a_1 a_2 · · · a_{n−1}) φ(a_n) ∈ C⋆(φ(A))
for any a = a_1 · · · a_n ∈ A. It follows that
e^n φ(a) = lim_λ φ(e_λ)^n φ(a)
= lim_λ φ(e_λ^n) φ(a_1) φ(a_2) · · · φ(a_n)
= lim_λ φ(e_λ^n a_1 a_2 · · · a_{n−1}) φ(a_n)
= φ(a_1 · · · a_{n−1}) φ(a_n) = eφ(a),
which implies e ∈ M(C⋆(φ(A))) is n-potent. The fact that e = e^∗ follows from φ(e_λ)^∗ = φ(e_λ^∗) = φ(e_λ). The other statements follow from the previous proof. □
The dichotomy between the unital and nonunital cases is now clear. If A is unital,
then C⋆(φ(A)) ⊂ B is a unital C⋆-subalgebra of B with unit ψ(1) = φ(1)^{n−1} ∈ B
(which is a projection!) and so
M(C⋆(ψ(A))) = C⋆(φ(A)) ⊂ B.
However, for A nonunital, we cannot identify the multiplier algebra M(C⋆(φ(A)))
as a subalgebra of B, or even M(B), unless φ is surjective. In general, we only have
inclusions ψ(A) ⊂ C⋆(φ(A)) ⊂ B.
Now that we know, as in the unital case, every involutive n-homomorphism is an
n-potent multiple of a ∗-homomorphism, we can prove the following general version
of Theorem 4.1 and its corollary in a similar manner using Lemma A.8.
Theorem 4.7. Let φ : A→ B be an involutive n-homomorphism of C⋆-algebras. If
n ≥ 2 is even, then φ is a ∗-homomorphism. If n ≥ 3 is odd, then φ is the difference
φ(a) = ψ1(a)− ψ2(a) of two orthogonal ∗-homomorphisms ψ1 ⊥ ψ2 : A→ B.
Corollary 4.8. For all n ≥ 2 and C⋆-algebras A and B, φ : A → B is a positive
n-homomorphism if and only if φ is a ∗-homomorphism.
Appendix A. On n-homomorphisms and n-potents
An element x ∈ A is called an n-potent if x^n = x. Note that if φ : A → B is an n-homomorphism, then φ(x) = φ(x^n) = φ(x)^n ∈ B is also an n-potent. The following
important result is Proposition 2.2 [7], whose proof is included for completeness.
Proposition A.1. If A is a unital algebra (or ring) and φ : A → B is an n-
homomorphism, then there is a homomorphism ψ : A → B and an n-potent e = e^n ∈ B such that φ(a) = eψ(a) = ψ(a)e for all a ∈ A. Also, e commutes with the range^1 of φ, i.e., eφ(a) = φ(a)e for all a ∈ A.
Proof. Note that e = φ(1) = φ(1^n) = φ(1)^n = e^n ∈ B is an n-potent. Define a linear map ψ : A → B by ψ(a) = e^{n−2} φ(a) for all a ∈ A. For all a, b ∈ A,
ψ(ab) = e^{n−2} φ(ab) = e^{n−2} φ(a 1^{n−2} b) = (e^{n−2} φ(a)) (φ(1)^{n−2} φ(b)) = ψ(a)ψ(b),
and so ψ is an algebra homomorphism. Furthermore,
eψ(a) = φ(1)(φ(1)^{n−2} φ(a)) = φ(1)^{n−1} φ(a) = φ(1^{n−1} a) = φ(a).
Similarly, ψ(a)e = φ(a) for all a ∈ A. The final statement is a consequence of the
fact that for all a ∈ A,
eφ(a) = φ(1) φ(a 1^{n−1}) = (φ(1) φ(a) φ(1)^{n−2}) φ(1) = φ(1 a 1^{n−2}) e = φ(a) e.
The following computation will be more significant when we consider the nonuni-
tal case (see the proof of Theorem 4.5.)
Corollary A.2. Let φ and ψ be as in Proposition A.1 and n ≥ 3. Then for all
a ∈ A, if a = a1a2 · · ·an with a1, . . . , an ∈ A,
ψ(a) = φ(a1a2)φ(a3) · · ·φ(an).
1Note that the range φ(A) is not a subalgebra of B in general.
Proof. We compute as follows:
ψ(a) := e^{n−2} φ(a) = φ(1)^{n−2} φ(a_1 · · · a_n)
= φ(1)^{n−2} φ(a_1) · · · φ(a_n)
= (φ(1)^{n−2} φ(a_1) φ(a_2)) φ(a_3) · · · φ(a_n)
= φ(1^{n−2} a_1 a_2) φ(a_3) · · · φ(a_n)
= φ(a_1 a_2) φ(a_3) · · · φ(a_n). □
Definition A.3. Let A be a unital algebra. An n-partition of unity is an ordered
n-tuple (e_0, e_1, . . . , e_{n−1}) of idempotents (e_k^2 = e_k) that sum to the identity e_0 + e_1 + · · · + e_{n−1} = 1 and are pairwise mutually orthogonal, i.e., e_j e_k = δ_{jk} e_k for all 0 ≤ j, k ≤ n − 1, where δ_{jk} is the Kronecker delta.
Note that e0 = 1− (e1+ · · ·+ en−1) is completely determined by e1, e2, . . . , en−1
and is thus redundant in the notation for an n-partition of unity.
Definition A.4. Let ω_0 = 0 and ω_k = e^{2πi(k−1)/(n−1)} for 1 ≤ k ≤ n − 1. Note that ω_1 = 1 and ω_1, . . . , ω_{n−1} are the (n − 1)-th roots of unity, and Σ_n = {ω_0, ω_1, . . . , ω_{n−1}} are the n roots of the polynomial equation x^n − x = x(x^{n−1} − 1) = 0.
If A is a complex algebra, we let à denote A, if A is unital, or the unitization
A+ = A⊕ C, if A is nonunital.
Theorem A.5. Let A be a complex algebra. If e ∈ A is an n-potent, there is a
unique n-partition of unity (e0, e1, . . . , en−1) in à such that
e = ∑_{k=1}^{n−1} ω_k e_k.
If A is nonunital, then e1, . . . , en−1 ∈ A.
Proof. Define the n polynomials p0, p1, . . . , pn−1 by
p_k(x) = ∏_{j≠k}(x − ω_j) / ∏_{j≠k}(ω_k − ω_j).
In particular, p_0(x) = 1 − x^{n−1}. Each polynomial p_k has degree n − 1 and satisfies
p_k(ω_k) = 1 and p_k(ω_j) = 0 for all j ≠ k. It follows that p_j(x) p_k(x) = 0 for j ≠ k and all x ∈ Σ_n. We also claim that, for all x ∈ C,
(2) ∑_{k=0}^{n−1} p_k(x) = p_0(x) + · · · + p_{n−1}(x) = 1
and
(3) x = ∑_{k=0}^{n−1} ω_k p_k(x).
Indeed, these identities follow from the fact that these polynomial equations have
degree n− 1 but are satisfied by the n distinct points in Σn.
Now, given any x^n = x in C, it follows that p_k(x)^2 = p_k(x). Hence, for any n-potent e ∈ A, if we define e_k = p_k(e), then (e_0, e_1, . . . , e_{n−1}) consists of idempotents, e_k^2 = p_k(e)^2 = p_k(e) = e_k, and they satisfy, by (2),
∑_{k=0}^{n−1} p_k(e) = 1_Ã.
They are pairwise orthogonal, because e_j e_k = p_j(e) p_k(e) = 0 for j ≠ k. Moreover,
e = ∑_{k=1}^{n−1} ω_k p_k(e) = ∑_{k=1}^{n−1} ω_k e_k
by Equation (3). For 1 ≤ k ≤ n − 1, note that p_k(x) = x q_k(x) for some polynomial
q_k(x). Hence, if A is nonunital and 1 ≤ k ≤ n − 1, we have e_k = p_k(e) = e q_k(e) ∈ A, since A is an ideal in Ã. □
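As a small numerical illustration of this construction (our own sketch, not part of the paper), take n = 3, so that Σ_3 = {0, 1, −1}, and a diagonal self-adjoint tripotent e. Evaluating the Lagrange polynomials p_k at e recovers idempotents e_k = p_k(e) that sum to the identity, are pairwise orthogonal, and reassemble e as ∑ ω_k e_k.

import numpy as np

n = 3
omega = [0.0] + [np.exp(2j * np.pi * (k - 1) / (n - 1)) for k in range(1, n)]  # [0, 1, -1]

def p(k, x):
    # Lagrange polynomial p_k evaluated at the square matrix x
    dim = x.shape[0]
    val = np.eye(dim, dtype=complex)
    for j in range(n):
        if j != k:
            val = val @ (x - omega[j] * np.eye(dim)) / (omega[k] - omega[j])
    return val

e = np.diag([1.0, -1.0, 0.0, 1.0]).astype(complex)   # a self-adjoint 3-potent: e^3 = e
ek = [p(k, e) for k in range(n)]

print(np.allclose(sum(ek), np.eye(4)))                                  # partition of unity
print(all(np.allclose(E @ E, E) for E in ek))                           # idempotents
print(all(np.allclose(ek[i] @ ek[j], 0) for i in range(n)
          for j in range(n) if i != j))                                 # pairwise orthogonal
print(np.allclose(sum(omega[k] * ek[k] for k in range(n)), e))          # e = sum omega_k e_k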
The following result is the n-homomorphism version of the previous n-potent
result. Recall that two linear maps ψ_i, ψ_j : A → B are orthogonal (ψ_i ⊥ ψ_j) if
ψi(a)ψj(b) = ψj(b)ψi(a) = 0
for all a, b ∈ A.2
Proposition A.6. Let A and B be complex algebras. If A is unital then a linear
map φ : A → B is an n-homomorphism if and only if there exist n − 1 mutually
orthogonal homomorphisms ψ1, . . . , ψn−1 : A→ B such that for all a ∈ A,
φ(a) = ∑_{k=1}^{n−1} ω_k ψ_k(a).
Proof. (⇒) Let φ : A → B be an n-homomorphism. By Proposition A.1, there
is an n-potent e ∈ B and a homomorphism ψ : A → B such that φ(a) = eψ(a) =
ψ(a)e. Using the previous result, write e = ∑_{k=1}^{n−1} ω_k e_k, where (e_0, e_1, . . . , e_{n−1}) is the associated n-partition of unity in Ã defined by the polynomials p_k. Since e_k = p_k(e), we have that e_k ψ(a) = ψ(a) e_k for 1 ≤ k ≤ n − 1. Define ψ_k : A → B by
ψ_k(a) := e_k ψ(a) = e_k^2 ψ(a) = e_k ψ(a) e_k.
Then ψ_1, . . . , ψ_{n−1} are orthogonal homomorphisms and, for all a ∈ A,
φ(a) = eψ(a) = ∑_{k=1}^{n−1} ω_k e_k ψ(a) = ∑_{k=1}^{n−1} ω_k ψ_k(a).
(⇐) Follows from the fact that ω_k^n = ω_k for all k = 1, . . . , n − 1. □
Remark A.7. If A is nonunital, the above result does not hold. One reason is
that the unitization φ+ : A+ → B+ of an n-homomorphism is not, in general, an
n-homomorphism. Also, if A^n = B^n = {0}, then every linear map L : A → B is
an n-homomorphism (See Examples 2.5 and 4.3 of Hejazian et al [7]).
Let Σ_n be the n roots of the polynomial equation x = x^n from Definition A.4. If A is a C⋆-algebra, it follows that a normal n-potent e = e^n must have spectrum σ(e) ⊆ Σ_n. Recall that a projection is an element p = p^∗ = p^2 ∈ A. Two projections p_1 and p_2 are orthogonal if p_1 p_2 = 0. A tripotent is a 3-potent element e^3 = e ∈ A.
The following characterization of self-adjoint n-potents in C⋆-algebras is impor-
tant for our nonexistence results on n-homomorphisms.
2Note that the zero homomorphism is orthogonal to every homomorphism.
Lemma A.8. Let A be a C⋆-algebra.
a.) If n ≥ 2 is an even integer, the following are equivalent:
i.) e is a projection.
ii.) e is a positive n-potent.
iii.) e is a self-adjoint n-potent.
b.) If n ≥ 3 is an odd integer, the following are equivalent:
i.) e is a self-adjoint tripotent.
ii.) e = p1 − p2 is a difference of two orthogonal projections.
iii.) e is a self-adjoint n-potent.
Proof. In both the even and odd cases, (i) =⇒ (ii) =⇒ (iii) (see Theorem A.5). Suppose (iii) holds. If n = 2k is even,
e = e^∗ = e^n = e^{2k} = (e^k)^∗(e^k) ≥ 0,
and so the spectrum of e satisfies σ(e) ⊂ Σ_n ∩ [0, ∞) = {0, 1}. Thus, e is a projection. If n ≥ 3 is odd, then since e = e^∗ we must have σ(e) ⊂ Σ_n ∩ R = {−1, 0, 1}. Thus, λ = λ^3 for all λ ∈ σ(e), which implies e = e^3 is tripotent. □
References
[1] B. Blackadar, Theory of C∗-algebras and von Neumann algebras, Encyclopaedia of Math-
ematical Sciences, 122. Operator Algebras and Non-commutative Geometry, III. Springer-
Verlag, Berlin, 2006.
[2] J. Bračič and S. Moslehian, On Automatic Continuity of 3-Homomorphisms on Banach
Algebras, to appear in Bull. Malays. Math. Sci. Soc. arXiv: math.FA/0611287.
[3] P. Cohen, Factorization in group algebras, Duke Math. J. 26 (1959) 199–205.
[4] S. Feigelstock, Rings whose additive endomorphisms are N-multiplicative, Bull. Austral.
Math. Soc. 39 (1989), no. 1, 11–14.
[5] S. Feigelstock, Rings whose additive endomorphisms are n-multiplicative. II, Period. Math.
Hungar. 25 (1992), no. 1, 21–26.
[6] L. Harris, A Generalization of C⋆-algebras, Proc. London Math. Soc. 42 (1981) no. 3, 331–
[7] M. Hejazian, M. Mirzavaziri, and M.S. Moslehian, n-homomorphisms, Bull. Iranian Math.
Soc. 31 (2005), no. 1, 13-23.
[8] I. Raeburn and D. P. Williams, Morita Equivalence and Continuous-Trace C⋆-Algebras,
Mathematical Surveys and Monographs, vol. 60, American Mathematical Society, 1998.
[9] W. Stinespring, Positive functions on C⋆-algebras, Proc. Amer. Math. Soc. 6 (1955), 211–216.
Box 298900, Texas Christian University, Fort Worth, TX 76129
E-mail address: [email protected]
6188 Kemeny Hall, Dartmouth College, Hanover, NH 03755
E-mail address: [email protected]
|
0704.0911 | Smooth and Starburst Tidal Tails in the GEMS and GOODS Fields | Smooth and Starburst Tidal Tails in the GEMS and GOODS
Fields
Debra Meloy Elmegreen
Vassar College, Dept. of Physics & Astronomy, Box 745, Poughkeepsie, NY 12604;
[email protected]
Bruce G. Elmegreen
IBM Research Division, T.J. Watson Research Center, P.O. Box 218, Yorktown Heights,
NY 10598, [email protected]
Thomas Ferguson
Vassar College, Dept. of Physics & Astronomy, Box 745, Poughkeepsie, NY 12604;
[email protected]
Brendan Mullan
Vassar College, Dept. of Physics & Astronomy, Box 745, Poughkeepsie, NY 12604 and
Colgate University, Dept. of Astronomy, Hamilton, NY; [email protected]
ABSTRACT
GEMS and GOODS fields were examined to z ∼1.4 for galaxy interactions
and mergers. The basic morphologies are familiar: antennae with long tidal tails,
tidal dwarfs, and merged cores; M51-type galaxies with disk spirals and tidal
arm companions; early-type galaxies with diffuse plumes; equal-mass grazing-
collisions; and thick J-shaped tails beaded with star formation and double cores.
One type is not common locally and is apparently a loose assemblage of smaller
galaxies. Photometric measurements were made of the tails and clumps, and
physical sizes were determined assuming photometric redshifts. Antennae tails
are a factor of ∼ 3 smaller in GEMS and GOODS systems compared to local
antennae; their disks are a factor of ∼ 2 smaller than locally. Collisions among
early type galaxies generally show no fine structure in their tails, indicating that
stellar debris is usually not unstable. One exception has a 5×10^9 M⊙ smooth red
clump that could be a pure stellar condensation. Most tidal dwarfs are blue and
probably form by gravitational instabilities in the gas. One tidal dwarf looks like
it existed previously and was incorporated into the arm tip by tidal forces. The
star-forming regions in tidal arms are 10 to 1000 times more massive than star
complexes in local galaxies, although their separations are about the same. If
http://arxiv.org/abs/0704.0911v1
they all form by gravitational instabilities, then the gaseous velocity dispersions
in interacting galaxies have to be larger than in local galaxies by a factor of ∼ 5
or more; the gas column densities have to be larger by the square of this factor.
Subject headings: galaxies: formation — galaxies: merger — galaxies: high-
redshift
1. Introduction
Galaxy interactions and mergers are observed at all redshifts and play a key role in
galaxy evolution. Two percent of local galaxies are interacting or merging (Athanassoula &
Bosma 1985; Patton et al. 1997), and this fraction is larger at high redshift (e.g., Abraham
et al. 1996b; Neuschaefer et al. 1997 ; Conselice et al. 2003; Lavery et al. 2004; Straughn
et al. 2006; Lotz et al. 2006, and others). Conselice (2006a) estimates that massive galaxies
have undergone about 4 major mergers by redshift 1. Toomre (1977) described a sequence
of merger activity ranging from separated galaxies with tails and a bridge between them,
to double nuclei in a common envelope with tails, to merged nuclei with tails. Ground-
based (Hibbard & van Gorkom 1996) and space-based (Laine et al. 2003; Smith et al. 2007)
observations of this sequence show optical, infrared, and radio activity in the tails and nuclei.
High resolution images and numerical simulations of nearby interactions demonstrate
how star formation and morphology are affected. General reviews of interaction simulations
are given by Barnes & Hernquist (1992) and Struck (1999). The initial galaxy properties,
such as mass, rotational velocity, gas content and dark matter content, and their initial sep-
arations and velocity vectors, all play a role in generating structure. The viewing angle also
affects the morphology. Early-type galaxies with little gas are expected to display smooth
plumes and shells, while spiral interactions and mergers should exhibit clumpy star forma-
tion along tidal tails, and condensations of material at the tail ends. Equal mass companions
may show bridges between them. A prominent example of a tidal interaction is the Antennae
(NGC4038/9), a merging pair of disk galaxies with rampant star formation in the central
regions, including young globular clusters (Whitmore et al. 2005). Its interaction was first
modeled by Toomre & Toomre (1972). The Cartwheel galaxy is a collisional ring system
rimmed with star formation from a head-on collision (Struck et al. 1996). Sometimes polar-
ring or spindle galaxies are the result of perpendicular collisions (Struck 1999). The Mice
(NGC 4676) has a long narrow straight tail and a curved tidal arm (Vorontsov-Velyaminov
1957; Burbidge & Burbidge 1959); numerical simulations reproduce both features well in a
model with a halo:(disk+bulge) mass ratio of 5 (Barnes 2004). The Superantennae (IRAS
19254-7245) is a pair of infrared-luminous merging giant galaxies having Seyfert and star-
burst nuclei and ∼ 200 kpc tails with a tidal tail dwarf (Mirabel, Lutz, & Maza, 1991). The
Leo Triplet includes NGC 3628 with an 80 kpc stellar tail containing star-forming complexes
with masses up to 10^6 M⊙ (Chromey et al. 1998). The Tadpole galaxy UGC10214 (Tran et
al. 2003; de Grijs et al. 2003; Jarrett et al. 2006), the IC2163/NGC2207 pair (Elmegreen
et al. 2001, 2006), and Arp 107 (Smith et al. 2005) are all interacting systems observed
with HST and SST and modeled in simulations. Many local mergers have intense nuclear
activity, such as the Seyfert galaxy NGC 5548, which also has an 80 kpc long, low surface
brightness (V=27-28 mag arcsec^{−2}) tidal tail and a 1-arm diffuse spiral (Tyson et al. 1998).
The GEMS (Galaxy Evolution from Morphology and SEDs; Rix et al. 2004), GOODS
(Great Observatories Origins Deep Survey; Giavalisco et al. 2004), and UDF (Ultra Deep
Field; Beckwith et al. 2006) surveys done with the HST ACS (Hubble Space Telescope
Advanced Camera for Surveys) have enabled high resolution studies of the morphology of
intermediate and high redshift galaxies. Light distribution parameters such as the Gini co-
efficient (Lotz et al. 2006) and concentration index, asymmetry, and clumpiness (CAS;
Conselice 2006) have been applied to galaxies in these fields to study possible merger
systems. For GEMS and GOODS, John Caldwell of the GEMS team has posted images
(archive.stsci.edu/prepds/gems/datalist.html) of several galaxies from each field, including
peculiar and interacting systems with tails and bridges. Here we examine the entire GEMS
and GOODS fields systematically for such galaxies and study their tails, bridges, and star-
forming regions. Their properties are useful for understanding interactions and interaction-
triggered star formation, and for probing the relative dark matter content (e.g., Dubinski,
Mihos, & Hernquist 1999).
2. The Sample of Interactions and Mergers
The GOODS and GEMS images from the public archive were used for this study. They
include exposures in 4 filters for GOODS: F435W (B435), F606W (V606), F775W (i775), and
F850LP (z850); and 2 filters (V606 and z850) for GEMS. The public images were drizzled to
produce final archival images with a scale of 0.03 arcsec per px. GEMS, which incorporates
the southern GOODS survey (Chandra Deep Field South, CDF-S) in the central quarter of
its field, covers 28 arcmin x 28 arcmin; there are 63 GEMS and 18 GOODS images that
make up the whole field. The GOODS images have a limiting AB mag of V606= 27.5 for
an extended object, or about two mags fainter than the GEMS images. There are over
25,000 galaxies catalogued in the COMBO-17 survey (Classifying Objects by Medium-Band
Observations, a spectrophotometric 17-filter survey; Wolf et al. 2003), and 8565 that are
cross-correlated with the GEMS survey (Caldwell et al. 2005).
Interacting galaxies with tails, bridges, diffuse plumes and other features were identified
by eye on the online Skywalker images and examined on high resolution V606 fits images.
The lower limit to the length of detectable tails is about 20 pixels. Snapshots of several
different morphologies for interacting galaxies are shown in Figures 1-6. Out of an initial
list of about 300 galaxies, a total of 100 best cases are included in our sample: 14 diffuse
types, 18 antennae types, 22 M51 types, 19 shrimp types, 15 equal mass interactions, and
12 assemblies, as we describe below.
GEMS and GOODS galaxy redshifts were obtained from the COMBO-17 list (Wolf et
al. 2003). Our sample ranges from redshift z = 0.1 to 1.4 in an area of 2.8×10^6 square arcsec.
The linear diameters of the central objects were determined from their angular diameters
and redshifts using the appropriate conversion for a ΛCDM cosmology (Carroll et al., 1992;
Spergel et al., 2003). The range is ∼ 3 to 33 kpc. Projected tail lengths were measured in
a straight line from the galaxy center to the 2σ noise limit (25.0 mag arcsec^{−2}) in the outer
tail.
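As an illustrative aside (not part of the original paper), the angular-to-physical conversion described above can be reproduced with standard tools. The sketch below assumes a flat ΛCDM cosmology with H0 = 70 km s^{-1} Mpc^{-1} and Ωm = 0.3 (representative values, not necessarily the exact parameters adopted here), and the redshift and angular size are made-up example numbers.

from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# assumed cosmological parameters (illustrative)
cosmo = FlatLambdaCDM(H0=70.0 * u.km / u.s / u.Mpc, Om0=0.3)

z = 0.7                      # a typical redshift in the sample
theta = 2.0 * u.arcsec       # example measured angular tail length

# proper transverse scale at redshift z
kpc_per_arcsec = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)
projected_length = (theta * kpc_per_arcsec).to(u.kpc)
print(kpc_per_arcsec, projected_length)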
Photometry was done on the whole galaxies, on each prominent star-forming clump, and
on the tails using the IRAF task imexam. A box of variable size was defined around each
feature; the outer limits of the boxes were chosen to be where the clump brightness is about
3 times the surrounding region. Sky subtraction was not done because the background
is negligible. The photometric errors are ∼0.1-0.2 mag for individual clumps. The V606
surface brightnesses of the tidal tails were determined using imagej (Rasband 1997) to trace
freehand contours around the tails, so that they could be better defined than with rectangular
or circular apertures.
Figure 1 shows galaxies with diffuse plumes and either no blue star formation patches
or only a few tiny patches (e.g., galaxies number 5 and 6); we refer to these interactions as
diffuse types. The colors of the plumes match the colors of the outer parts of the central
galaxies, indicating the plumes are tidally shorn stars with little gas. There is structure in
most of the plumes consisting of arcs or sharp edges. This is presumably tidal debris from
early type galaxies with little or no gas (e.g. Larson & Tinsley 1978; Malin & Carter 1980;
Schombert et al. 1990). This type of interaction is relatively rare in the GEMS and GOODS
images, perhaps because the tidal debris is faint. The best cases are shown here and they
all have relatively small redshifts compared to the other interaction types (the average z is
0.23 and the maximum z is 0.69).
The image in the top left panel of Figure 1 (galaxy 1) has a giant diffuse clump in the
upper right corner. This could be a condensation in the tidal arm, or it could be another
galaxy. In either case, it has the same color as the rest of the tidal arm nearby. That is,
V606 − z850 = 0.90 ± 0.5 for the clump and also in six places along the tail; the color is
essentially the same, 0.94 ± 0.05, in the core of the galaxy. The absolute magnitude of the
clump is M_V = −18.41 for redshift z = 0.15. The mass is ∼ 5 × 10^9 M⊙ (Sect. 3.2). If
this clump is a condensation in the tail, then it could be a rare case where a pure stellar arc
has collapsed gravitationally into a gas-free tidal dwarf. The final result could be a dwarf
elliptical. Usually tidal dwarfs form by gaseous condensations in tidal arms (Wetzstein,
Naab, & Burkert 2007).
Figure 2 shows interactions that resemble the local Antennae pair, so we refer to them as
antennae types. These types have long tidal tails and double nuclei or highly distorted centers
that appear to be mergers of disk galaxies. Note that antennae are not the same as “tadpole”
galaxies (Elmegreen et al. 2005a; de Mello et al. 2006; Straughn et al. 2006), which have one
main clump and a sometimes wiggly tail that may contain smaller clumps. Some antennae
have giant clumps near the ends of the tails which could have formed there (galaxies 16 and
17) and are analogous to the clump at the end of the Superantennae (Mirabel et al. 1991).
Galaxy 18 is in a crowded field with at least two long tidal arms; here we consider only the
tail system in the north, which is in the upper part of the figure. These long-tail systems are
relatively rare and all the best cases are shown in the figure; their average redshift, 0.70, is
typical for GEMS and GOODS fields. Galaxy 24 is somewhat like a tadpole galaxy, but its
very narrow tail and protrusion on the anti-tail side of the main clump are unlike structures
seen in tadpoles of the Ultra Deep Field.
For the antennae galaxies in Figure 2, the tails have an average (V-z) color that is
negligibly bluer, 0.10±0.25 mag, than the central disks. In a study of tidal features in local
Arp atlas galaxies, Schombert et al. (1990) also found that the tail colors are uniform and
similar to those of the outer disks. They noted that the most sharply-defined tails are with
spiral systems and the diffuses plumes are with ellipticals. This correlation may be true
here also, but it is difficult to tell from Figure 1 whether the smooth distorted systems are
intrinsically disk-like.
Galaxy 20 in Figure 2 is an interesting case. It has an elliptical clump at the end of its
tail that could be one of the collision partners. There are two central galaxy cores, however,
and their interaction may have formed the tidal arms without this companion. Furthermore,
the clump at the tip is aligned perpendicular to the tail, which is unusual for a tidal dwarf.
Thus it is possible that the clump was a pre-existing galaxy lying in the orbital plane of one
of the larger galaxies now at the center. Presumably this former host is the galaxy currently
connected to the dwarf by the tidal arm. The interaction could have swung it around to
its current position at the tip. A similar case occurs for the local IC 2163/NGC 2207 pair,
which has a spheroidal dwarf galaxy at the tip of its tidal arm (Elmegreen et al. 2001). Such
swing-around dwarfs should have the same dynamical origin as the large pools of gas and
star formation that are at the tips of superantenna-type galaxies; i.e. the whole outer disk
moves to this position during the interaction (Elmegreen et al. 1993; Duc et al. 1997).
Figure 3 shows examples of interactions that we refer to as M51-type galaxies, where
the tidal arms can be bridges that connect the main disk galaxy to the companion (galaxy
33), or tails on the opposite side of the companion (e.g., galaxies 34 and 35), or both (galaxy
36). In galaxy 44, the tidal arm looks like the debris path of a pre-existing galaxy that lies at
the right; the orbit path apparently curves around on the left. The M51-types usually have
strong spirals in the main disk. In the top row, the tails and bridges are thin and diffuse.
The galaxy on the left in the lower row (galaxy 42) has a thick, fan-shaped tail opposite
the companion. Some bridges have star formation clumps (galaxy 40) and others appear
smooth (galaxy 33). Interactions like this, especially those with small companions, are more
common than the previous two types and only a few best cases are shown in Figure 3 and
discussed in the rest of this paper.
Figure 4 shows examples of galaxies dominated by one highly curved, dominant arm and
large, regularly-spaced clumps of star formation. We call these “shrimp” galaxies because
of their resemblance to the tail of a shrimp. Although their star formation indicates they
contain gas and therefore are disk systems, there are no well-defined spirals (except for the
prominent arm), merging cores, or obvious central nuclei. The clumps resemble the beads-
on-a-string star formation in spiral density waves and probably have the same origin, a
gravitational instability (Elmegreen & Elmegreen 1983; Kim & Ostriker 2006; Bournaud,
Duc, & Masset 2003). The J-shaped morphology is reminiscent of the 90 kpc gas tail of M51
(Rots et al. 1990) and the 48 kpc gas tail observed in NGC 2535 (Kaufman et al. 1997).
Rots el al. point out that the M51 gas tail is much broader (10 kpc) than the narrow tails
seen in merging systems like the Antennae. The broad tail in galaxy 42 (Fig. 3) is similar
to the M51 tail. Sometimes there is a bright tail with no obvious companion (galaxies 56,
57, and 60); one of these, galaxy 56, was in our ring galaxy study (Elmegreen & Elmegreen
2006). Asymmetric, strong arm galaxies like this are not common in GEMS and GOODS;
this figure shows the best cases.
Figure 5 has a selection of irregular galaxies that appear to be interactions. Most of
them suggest an assembly of small pieces, so we refer to them as assembly types. If they were
slightly more round in overall shape, with more obvious interclump emission, then we would
classify them as clump-clusters, as we did in the UDF (Elmegreen et al. 2005a). The galaxy
in the lower left (galaxy 83) is like this. The resemblance of these types to clump-clusters
suggests that some of the clumps are accreted from outside the disk and others form from
gravitational instabilities in a pre-existing gas disk, as suggested previously (Elmegreen &
Elmegreen 2005). The system in the lower right (galaxy 85) could be interacting spirals, or a
triple system, or a bent chain (as studied in Elmegreen & Elmegreen 2006). There are many
examples of highly irregular galaxies like these in the GEMS and GOODS fields; indeed most
galaxies at z > 1.5 are peculiar in this sense (Conselice 2005). In what follows, we discuss
only these 12 galaxies.
Figure 6 has samples of grazing or close interactions, with spirals at the top of the
page (numbers 86-93), ellipticals lower down (numbers 95-97) and two polar-ring galaxies
(numbers 99 and 100) in the lowest row at the middle and right bottom. We refer to these
paired systems as “equals” because their distinguishing feature is that the two galaxies have
comparable size. The pair number 89 has a bright oval in the smaller galaxy, which is char-
acteristic of recent tidal forces for an in-plane, prograde encounter such as IC2163/NGC2207
(Sundin 1993; Elmegreen et al. 1995 ). There is a spiral-elliptical pair on the right in the
middle row (galaxy 94). Double ellipticals in the UDF were studied previously (Elmegreen
et al. 2005a, Coe et al. 2006). Near neighbors like this have been studied previously in the
GEMS field; 6 double systems out of 379 red sequence galaxies were identified as being dry
merger candidates, as reproduced in simulations (Bell et al. 2006). The models of mergers
of early-type systems by Naab et al. (2006) apparently account for kinematic and isophotal
properties of ellipticals better than the formation of ellipticals through late-type mergers
alone. For the pairs in our figure, both components have the same COMBO17 redshift.
There are many other examples of close galaxy groups and near interactions in the GEMS
and GOODS surveys. In what follows we discuss only the properties of those shown in Figure 6.
The interacting types shown in the figures are meant to be as distinct as possible. These
and other good cases are listed in Table 1 by running number, along with their COMBO-
17 catalog number, redshift, and R magnitude. There is occasionally some ambiguity and
overlap in the interaction types, particularly between M51-types and shrimps when the M51-
types have small or uncertain companions at the ends of their prominent tails. Projection
effects can lead to uncertainties in the classifications as well, particularly for antennae whose
tails may be foreshortened. Nevertheless, these divisions serve as a useful attempt to sort out
the most prominent features among interacting galaxies. There are numerous other galaxies
in GEMS and GOODS that are apparently interacting, but most of them are too highly
distorted to indicate the particular physical properties of interest here, namely, disk-to-halo
mass ratio and star formation scale.
3. Photometric Results
3.1. Global galaxy properties
The integrated Johnson restframe (U-B) and (B-V) colors from COMBO-17 for the
observed galaxies with measured redshifts are shown in a color-color diagram in Figure 7.
The crosses in the diagram are Johnson colors for standard Hubble types (Fukugita et al.
1995). Our sample of galaxies spans the range of colors from early to late Hubble types,
although the bluest are bluer than standard irregular galaxies (a typical Im has U-B= −0.35,
B-V= 0.27). The reddest galaxies tend to be the diffuse types, thought to originate with
ellipticals involved in interactions. The two reddest galaxies in our sample are the diffuse
types number 1 and 2 in Figure 1. The bluest tend to be the assemblies, consistent with
their having formed recently.
Figure 8 shows a restframe color-magnitude diagram. Early and late type galaxies
usually separate into a “red sequence” and a “blue cloud” on such a diagram (Baldry et
al. 2004; Faber et al. 2005). The solid line indicates the boundary between these two
regions from a study of 22,000 nearby galaxies (Conselice 2006b). The short-dashed lines
are the limits of the Conselice (2006) survey; local galaxies are brighter than the vertical
short-dashed line and their colors lie between the horizontal short-dashed lines. The long-
dashed lines approximately outline the bright limit for the local blue cloud galaxies. Our
galaxies fall in both the red sequence and the blue cloud. The restframe colors in Figure 8
are consistent with their morphological appearances. The red sequence galaxies in the figure
usually appear smooth (the diffuse types) or lack obvious huge star formation clumps (the
equal mass mergers), while the blue cloud galaxies usually have patches of star formation
(the M51-types, shrimps, assemblies, and many antennae). We see now why the redshifts of
the diffuse galaxies (z < 0.3) are much lower than the others: this is a selection effect for the
ACS camera. These tails comprise old stellar populations without star-forming clumps, and
their intrinsic redness makes them difficult to see at high redshifts. Also, they tend to have
intrinsically low surface brightnesses because of a lack of star formation, and cosmological
dimming makes them too faint to see at high redshift. Hibbard & Vacca (1997) note that it
is difficult to detect tidal arms beyond z ∼ 1.5.
3.2. Clump properties
Prominent star-forming clumps are apparent in many of the interacting galaxies. Their
sizes and magnitudes were measured using rectangular apertures. The observed magnitudes
were converted to restframe B magnitudes whenever possible, using linear interpolations
between the ACS bands. For example, GEMS observations are at two filters, V606 and z850.
GEMS galaxies with redshifts z between 0.39 (= 606/435 − 1) and 0.95 (= 850/435 − 1)
were assumed to have restframe blue luminosities given by LB,rest = LV,obs(0.95− z)/(0.95−
0.39) + Lz,obs(z − 0.39)/(0.95 − 0.39). The restframe B magnitude is then −2.5 logLB,rest.
For GOODS galaxies, the conversions were divided into 3 redshift bins to make use of
the 4 available filters, and a linear interpolation was again applied to get restframe clump
magnitudes. For the GOODS galaxies, the restframe magnitudes determined by interpolation
between the nearest 2 filters among the 4 filters are within ±0.2 mag of the restframe
magnitudes determined from only the V and z filters. Thus, the GEMS interpolations are
accurate to this level. (We do not include corrections for intergalactic absorption in these
colors, because we are comparing them directly with their parent galaxy properties. Below,
when we convert the colors and magnitudes to masses and ages, absorption corrections are
taken into account.)
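The interpolation described above amounts to a redshift-weighted mean of the two observed band luminosities. A minimal sketch (our own, with invented magnitudes) of the GEMS case, assuming the quoted pivot redshifts 0.39 and 0.95:

import numpy as np

def restframe_B_mag(m_V606, m_z850, z, z_lo=0.39, z_hi=0.95):
    # L_B,rest = L_V (z_hi - z)/(z_hi - z_lo) + L_z (z - z_lo)/(z_hi - z_lo),
    # with luminosities taken as 10**(-0.4 m) on a common arbitrary zero point
    L_V = 10.0 ** (-0.4 * m_V606)
    L_z = 10.0 ** (-0.4 * m_z850)
    w = (z - z_lo) / (z_hi - z_lo)
    L_B = (1.0 - w) * L_V + w * L_z
    return -2.5 * np.log10(L_B)

# example clump with V606 = 25.3 mag, z850 = 24.8 mag at z = 0.7 (invented values)
print(restframe_B_mag(25.3, 24.8, 0.7))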
The apparent restframe B magnitudes of the clumps were converted to absolute rest-
frame B magnitudes using photometric redshifts and the distance modulus for a ΛCDM
cosmology. These absolute clump magnitudes are shown as a function of absolute galaxy
magnitude in Figure 9. The clump absolute B magnitudes scale linearly with the galaxy
magnitudes. The clumps are typically a kpc in size (∼ 3 to 8 pixels across), comparable
to star-forming complexes in local galaxies (Efremov 1995), which also scale with galaxy
magnitude (Elmegreen et al. 1996; Elmegreen & Salzer 1999).
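The conversion to absolute magnitudes uses the distance modulus for the assumed cosmology; a short hedged sketch with astropy (again with assumed H0 = 70 km s^{-1} Mpc^{-1}, Ωm = 0.3, and an invented apparent magnitude and redshift):

from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70.0 * u.km / u.s / u.Mpc, Om0=0.3)

m_B_rest = 24.6     # apparent restframe B magnitude of a clump (example value)
z = 0.7             # photometric redshift

mu = cosmo.distmod(z)          # distance modulus m - M
M_B = m_B_rest - mu.value
print(mu, M_B)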
Clump ages and masses were estimated by comparing observed clump colors, magni-
tudes, and redshifts with evolutionary models that account for bandshifting and intergalac-
tic absorption and that assume an exponential star formation rate decay (see Elmegreen &
Elmegreen 2005). Internal dust extinction as a function of redshift is taken from Rowan-
Robinson (2003). The GEMS galaxy clumps only have (V606-z850) colors, so the ages are not
well constrained. For the GOODS galaxies, the additional B and I filters help place better
limits on the ages, although there is still a wide range of possible fits.
Figure 10 shows sample model results for redshift z = 1. The different lines in each
panel correspond to different decay times for the star formation rate, in years: 10^7, 3×10^7, 10^8, 3×10^8, and 10^9, and the sixth line represents a constant rate. Generally the shorter the
decay time, the redder the color and higher the mass for a given duration of star formation.
This correspondence between color and mass gives a degeneracy to plots of mass versus color
at a fixed apparent magnitude (top left) and apparent magnitude versus color at a fixed mass
(top right). Thus the masses of clumps can be derived approximately from their colors and
magnitudes, without needing to know their ages or star formation histories.
Figure 11 shows observations and models in the color-magnitude plane for 6 redshift in-
tervals spanning our galaxies. Each curve represents a wide range of star formation durations
that vary along the curve as in the top right panel of Fig. 10; each curve in a set of curves is
a different decay time. The different sets of curves, shifted vertically in the plots, correspond
to different clump masses, as indicated by the adjacent numbers, which are in M⊙. Each
different point is a different clump; many galaxies have several points. Only clumps with
both V606 and z850 magnitudes above the 2σ noise limit are plotted in Figure 11. The clump
(V606 − z850) colors range from 0 to 1.5. The magnitudes tend to be about constant for each
redshift because of a selection effect (brighter magnitudes are rare and fainter magnitudes
are not observed).
Figure 11 indicates that the masses of the observable clumps are between 10⁶ and 10⁹
M⊙ for all redshifts, with higher masses selected for the higher redshifts. The masses for all
of the clumps are plotted in Figure 12 versus the galaxy type (types 1 through 6 are in order
of Figs. 1 through 6 above). The masses are obtained from the observed values of V606 and
V606 − z850 using the method indicated in Figure 11. The different mass evaluations for the
six decay times are averaged together in the log to give the log of the mass plotted as a dot
in Figure 12. The rms values of log-mass among these six evaluations are shown in Figure
12 as plus-symbols, using the right-hand axes. These rms deviations are less than 0.2, so
the uncertainties in star formation decay times and clump ages do not lead to significant
uncertainties in the clump mass. (Systematic uncertainties involving extinctions, stellar
evolution models, photometric redshifts, and so on, would be larger.)
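A short sketch of how the six mass evaluations are combined, with made-up numbers standing in for the per-decay-time estimates:

import numpy as np

mass_estimates = np.array([2.1e7, 2.5e7, 3.0e7, 3.3e7, 3.8e7, 4.2e7])  # Msun, hypothetical
log_m = np.log10(mass_estimates)
print(log_m.mean())   # log-mass plotted as the dot in Fig. 12
print(log_m.std())    # rms of the log-masses, plotted as the plus symbol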
The clump ages cannot be determined independently from the star formation decay
times with only the few passbands available at high angular resolution. Figure 13 shows
model results that help estimate the clump ages. As in the other figures, each line is a
different exponential decay time for the star formation rate. If we consider the two extreme
decay times in this figure (continuous star formation for the bottom lines in each panel and
107 years for the top lines), then we can estimate the age range for each decay time from
the observed color range. For V606 − z850 colors in the range from 0 to 0.5 at low z (cf. Fig
11), the clump ages range from 10⁷ to 10¹⁰ yr with continuous star formation and from 10⁷ to 3×10⁸ yr with a decay time of 10⁷ yr. For colors in the range from 0 to 1.5 at higher
redshifts, the age ranges are about the same in each case. For intermediate decay times, the
typical clump ages are between ∼ 10⁷ years for the bluest clumps and ∼ 10⁹ years for the
reddest clumps. These are reasonable ages for star formation regions, and consistent with
model tail lifetimes.
The star-forming complexes in the GEMS and GOODS interacting galaxies are 10 to
1000 times more massive than the local analogs seen in non-interacting late-type galaxies
(Elmegreen & Salzer 1999), but the low mass end in the present sample is similar to the
high mass end of the complexes measured in local interacting galaxies. For example, the Tadpole galaxy, UGC 10214, contains 10⁶ M⊙ complexes along the tidal arm (Tran et al. 2003; Jarrett et al. 2006). The interacting galaxy NGC 6872 has tidal tails with 10⁹ M⊙ HI condensations (Horellou & Koribalski 2007), but the star clusters have masses only up to 10⁶ M⊙ (Bastian et al. 2005). The most massive complexes in the tidal tail of NGC 3628 in the Leo Triplet are also ∼ 10⁶ M⊙ (Chromey et al. 1998). The NGC 6872 clusters differ qualitatively from those in our sample in being spread out along a narrow arm; ours are big round clumps spaced somewhat evenly along the arm. Small star clusters are also scattered along the tidal arms of the Tadpole and Mice systems; they typically contain less than 10⁶ M⊙ (de Grijs et al. 2003). The NGC 3628 clusters are also faint, with surface brightnesses less than 27 mag arcsec⁻²; they would not stand out at high redshift.
It is reasonable to consider whether the observed increase of complex mass with increas-
ing redshift is a selection effect. Our clumps are several pixels in size, corresponding to a
scale of ∼ 1 kpc. Individual clusters are not resolved and we only sample the most massive
conglomerates. These kpc sizes are comparable to the complex sizes in local galaxies, but the
high redshift complexes are much brighter and more massive. They would be observed easily
in local galaxies. The massive complexes in our sample are more similar to those measured
generally in UDF galaxies (Elmegreen & Elmegreen 2005).
Clump separations were measured for clumps along the long arms in the shrimp galaxies
of Figure 4. They average 2.20±0.94 kpc for 49 separations. This is about the same separa-
tion as that for the largest complexes in the spiral arms of local spiral galaxies (Elmegreen
& Elmegreen 1983, 1987), and comparable to the spacing between groups of dust-feathers
studied by La Vigne et al. (2006). Yet the clumps in shrimp galaxies and others studied here
are much more massive than the complexes in local spiral arms, which are typically < 10⁶ M⊙ in stars and ∼ 10⁷ M⊙ in gas. This elevated mass can be explained by a heightened turbulent speed for the gas, combined with an elevated gas density. Considering that the separation is about equal to the two-dimensional Jeans length, λ ∼ 2a²/(GΣ) for velocity dispersion a and mass column density Σ, and that the mass is the Jeans mass, λ²Σ, the mass scales with the square of the velocity dispersion, M = M₀(a/a₀)², for fixed length λ₀ = 2a₀²/(GΣ₀) and M₀ = λ₀²Σ₀. The mass column density also scales with the square of the dispersion, Σ = Σ₀(a/a₀)², to keep λ constant. Thus the interacting tidal arm clumps
are massive because the velocity dispersions and column densities are high. Another way
to derive this is to note that for regular spiral arm instabilities, 2Gµ/a² is about unity at the instability threshold, where µ is the mass/length along the arm (Elmegreen 1994). Thus cloud mass scales with a² for constant cloud separation. High velocity dispersions for neutral hydrogen, ∼ 50 km s⁻¹, are also observed in local interacting galaxies (Elmegreen et al. 1993;
Irwin 1994; Elmegreen et al. 1995; Kaufman et al. 1997; Kaufman et al. 1999; Kaufman
et al. 2002). Presumably the interaction agitates the interstellar medium to make the large
velocity dispersions. The orbital motions are forced to be non-circular and then the gaseous
orbits cross, converting orbital energy into turbulent energy and shocks. Similar evidence
for high velocity dispersions was found in the masses and spacings of star forming complexes
in clump cluster galaxies (Elmegreen & Elmegreen 2005) and in spectral line widths (Genzel
et al. 2006; Weiner et al. 2006).
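The scaling argument can be checked numerically; in the sketch below G is in pc (km/s)²/M⊙, and the local-arm dispersion and surface density (6 km/s, 5 M⊙ pc⁻²) are assumed values chosen only to illustrate how the Jeans mass grows as a² when Σ is also scaled as a² to hold the spacing fixed:

G = 4.30e-3  # pc (km/s)^2 / Msun

def jeans_2d(a_kms, sigma):
    lam = 2.0 * a_kms ** 2 / (G * sigma)   # two-dimensional Jeans length, pc
    return lam, lam ** 2 * sigma           # Jeans length and Jeans mass (Msun)

for a, sigma in [(6.0, 5.0), (30.0, 5.0 * (30.0 / 6.0) ** 2)]:
    lam, m = jeans_2d(a, sigma)
    print(a, sigma, lam / 1e3, m)   # same ~3.3 kpc spacing, mass up by (30/6)^2 = 25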
3.3. Tail Properties
Figure 14 shows the average tail surface brightness as a function of (1+z)⁴ for galaxies in Figures 1-4. Some systems have more than one tail. Cosmological dimming causes a fixed surface brightness to get fainter as (1+z)⁻⁴, so there should be an inverse correlation in this
diagram. Clearly, the tails are brighter for the more nearby galaxies, and they decrease out
to z ∼ 1, where they are fairly constant. This constant limit is at the 2σ detection limit of
25 mag arcsec⁻². Antennae galaxies with average tail surface brightnesses fainter than this
limit have patchy tails with no apparent emission between the patches. Only the brightest
high redshift tails can be observed in this survey.
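In magnitudes, the (1+z)⁴ dimming is Δµ = 2.5 log₁₀[(1+z)⁴] = 10 log₁₀(1+z), so a fixed restframe surface brightness fades by about 3 mag arcsec⁻² by z = 1; a one-line sketch:

import numpy as np
for z in (0.2, 0.5, 1.0, 1.4):
    print(z, 10.0 * np.log10(1.0 + z))   # dimming in mag arcsec^-2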
Simulations by Mihos (1995) suggested that tidal tails are observable for a brief time in
the early stages of a merger, corresponding to ∼ 150 Myr at a redshift z = 1 and 350 Myr
at z = 0.4. The difference is the result of surface brightness dimming as tails disperse. A
nearby galaxy merger, Arp 299, has a 180 kpc long tail encompassing 2 to 4% of the total
galaxy luminosity, with an interaction age of 750 Myr, but its low surface brightness of 28.5
mag arcsec⁻² (Hibbard & Yun 1999) would be below the GOODS/GEMS detection limit.
The ratio of the luminosity of the combined tails and bridges to the luminosity of the
disk (the luminosity fraction) is shown in Figure 15. The luminosity fraction in the tidal
debris ranges from 10% to 80%, averaging about 30% regardless of redshift. This range is
consistent with that of local galaxies in the Arp atlas and Toomre sequence (e.g., Schombert,
Wallin, & Struck-Marcell 1990; Hibbard & van Gorkom 1996).
Interaction models with curled tails, as in our shrimp galaxies, were made by Bournaud
et al. (2003). Their models had dark matter halos with masses ∼ 10 times the disk mass
and extents less than 12 disk scale lengths. Some of our shrimp galaxies have one prominent
curved arm that is pulled out from the main disk but not very far, resulting in a lopsided
galaxy. Simulations indicate that such lopsidedness may be the result of a recent minor
merger (Bournaud et al. 2005). In some of our cases, a nearby companion is obvious.
The linear sizes of the tidal tails in our sample are shown in Figure 16. They range from
2 to 60 kpc, and are typically a few times the disk diameter, as shown in Figure 17, which
plots this ratio versus redshift. The average tail to diameter ratio is 2.9±1.7 for diffuse tails,
2.5± 1.3 for antennae, 2.5± 1.1 for M51-types and 1.5± 1.4 for shrimps, so the shrimps are
about 60% as extended as the antennae types. There is no apparent dependence of these
ratios on redshift in Figure 17. Projection effects make these apparent ratios smaller than
the intrinsic ratios.
For comparison, the ratio of tail length to disk diameter versus the tail length for local
galaxies is shown in Figure 18 based on measurements of antennae-type systems in the Arp
atlas (1966) and the Vorontsov-Velyaminov atlas (1959). Our galaxies are also shown. The
average tail length for the local galaxies in this figure is 72 ± 48 kpc, while the average tail
length for the GEMS and GOODS antennae is 37% as much, 27±16 kpc. The diameters for
these two groups are 20±12 kpc and 11±5 kpc, and the ratios of tail length to diameter are
4.5± 3.7 and 2.5± 1.3, respectively. Thus the local antennae mergers are larger in diameter
by a factor of 2 than the GEMS and GOODS antennae, and the tails for the locals are larger
by a factor of 2.7. These results for the diameters are consistent with other indicators that
galaxies are smaller at higher redshift, although usually this change does not show up until
z > 1 (see observations and literature review in Elmegreen et al. 2007).
3.4. Tidal dwarf galaxy candidates
Three antennae galaxies at the top of Figure 2, numbers 15, 16, and 18, have long
straight tidal arms with large star-forming regions at the ends. These clumps are possibly
tidal dwarf galaxies. The clump diameters and restframe B magnitudes are listed in Table
2, along with the clump in diffuse galaxy number 1 discussed in Sect. 2. Listed are their
V606 and V606 − z850 magnitudes and associated masses, calculated as in Sect. 3.2. The
masses range from 0.2×10⁸ to 4.6×10⁸ M⊙ for the star-forming dwarfs, but for the stellar condensation in the diffuse-tail galaxy 1 (Fig. 1), the mass is 50×10⁸ M⊙. The star-forming
dwarf masses are similar to or larger than those found for the tidal object at the end of the
Superantennae (Mirabel et al. 1991) as well as the tidal object at the end of the tidal arm in
the IC 2163/NGC 2207 interaction (Elmegreen et al. 2001) and at the end of the Antennae
tail (Mirabel et al. 1992). The HI dynamical masses for these local tidal dwarfs are ∼ 109
Simulations of interacting galaxies that form tidal dwarf galaxies require long tails and
a dark matter halo that extends a factor of 10 beyond the optical disk (Bournaud et al.
2003). If one or both galaxies contain an extended gas disk before the interaction, then more
massive, 109 M⊙ stellar objects can form at the tips of the tidal arms from the accumulated
pool of outer disk material (Elmegreen et al. 1993; Bournaud et al. 2003). Observations of
nearby interactions show clumpy regions of tidal condensations with masses of ∼ 10⁸ − 10⁹
M⊙ (Bournaud et al. 2004; Weilbacher et al. 2002, 2003; Knierman et al. 2003; Iglesias-
Paramo & Vilchez 2001), like what is observed in our high redshift tidal dwarfs.
No well-resolved models have yet formed tidal dwarfs from stellar debris. Wetzstein,
Naab, & Burkert (2007) considered this possibility and found collapsing gas more likely. Yet
the condensed object in the tail of galaxy 1 could have formed there and it is interesting
to consider whether the Jeans mass in such an environment is comparable to the observed
mass. If, for example, the tidal arm surface density corresponds to a value typical for the
outer parts of disks, ∼ 10 M⊙ pc⁻², and the stellar velocity dispersion is comparable to that required in Sect. 3.2 for the gas to give the giant star forming regions, ∼ 40 km s⁻¹, then the Jeans mass is M ∼ a⁴/(G²Σ) ∼ 10¹⁰ M⊙. This is not far from the value we observe, 5 × 10⁹ M⊙, so the diffuse clump could have formed by self-gravitational collapse of tidal tail stars.
The timescale for the collapse would be a/ (πGΣ) ∼ 300 Myr, which is not unreasonable
considering that the orbit time at this galactocentric radius is at least this large.
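A quick numerical check of this estimate, using the dispersion and surface density quoted above (G again in pc (km/s)²/M⊙, and 1 pc/(km/s) ≈ 0.978 Myr):

import numpy as np

G = 4.30e-3             # pc (km/s)^2 / Msun
a, sigma = 40.0, 10.0   # km/s and Msun/pc^2, as quoted in the text

M_jeans = a ** 4 / (G ** 2 * sigma)           # ~1.4e10 Msun
t_collapse = a / (np.pi * G * sigma) * 0.978  # ~290 Myr
print(M_jeans, t_collapse)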
4. Dark Matter Halo Constraints
Models of interacting galaxies have been used to place constraints on dark halo poten-
tials. Springel & White (1999) and Dubinski, Mihos, & Hernquist (1999) found that tidal
tail lengths can be long compared to the disk if the ratio of escape speed to rotation speed
at 2 disk scale lengths is small, ve/Vr < 2.5, and the rotation curve is falling in the outer
disk. In a series of models, Dubinski et al. showed that this condition may result from either
disk-dominated rotation curves where the halo is extended and has a low concentration, or
halo-dominated rotation curves where the halo is compact and low mass. Dubinski et al.
point out that the latter possibility is inconsistent with observed flat or rising disk rotation
curves, but the first is compatible if the disk is massive and dominant in the inner regions.
The first case also gives prominent bridges. In addition, Springel & White (1999) found that
CDM halo models with embedded disks allow long tidal tails, but Dubinski et al. noted that
most of those which do are essentially low surface brightness disks in massive halos, and not
normal bright galaxies. Galaxies without dark matter halos are not capable of generating
long tidal tails (Barnes 1988). In all cases, longer tails develop in prograde interactions.
The smooth diffuse types and antenna types in Figures 1 and 2 have relatively long tails,
so the progenitors were presumably disks of early and late types, respectively, with falling
rotation curves in their outer parts. These long-tail cases are relatively rare, comprising
only about 8% and 9%, respectively, of our original (300 galaxy) interacting sample from
GEMS and GOODS. The more compact M51 types and shrimps represent 9% and 12% of
the sample. Short tail interactions could be younger, less favorably projected, or have a
more steeply rising rotation curve than long tail interactions. The M51 types have clear
companions, so the prominent features are bridges. According to Dubinski et al. (1999),
bridging requires a prograde interaction with a maximum-disk galaxy, that is, one with a
low-mass, extended halo.
5. Conclusions
Mergers and interactions out to redshift z = 1.4 have tails, bridges, and plumes that are
analogous to features in local interacting galaxies. Some interactions have only smooth and
red features, indicative of gas-free progenitors, while others have giant blue star-formation
clumps. The tail luminosity fraction has a wide range, comparable to that found locally.
A striking difference arises regarding the tail lengths, however. The tails in our antenna
sample, at an average redshift of 0.7, are only one-third as long as the tails in local antenna
mergers, and the disk diameters are about half the local merger diameters. This difference is
consistent with the observations that high redshift galaxies are smaller than local galaxies,
although such a drop in size has not yet been seen for galaxies at redshifts this low. The
implication is that dark matter halos have not built up to their full sizes for typical galaxies
in GEMS and GOODS.
Star formation is strongly triggered by the interactions observed here, as it is locally.
The star-forming clumps tend to be much more massive than their local analogs, however,
with masses between ∼ 10⁶ M⊙ and a few ×10⁸ M⊙, increasing with redshift. This is not
merely a selection effect, since the massive clumps seen at high redshift would show up at
lower redshift, although of course smaller clumps would not be resolved at high redshift. The
clump spacings were measured along the tidal arms of the most prominent one-arm type of
interaction, the shrimp-type, and found to be 2.20 ± 0.94 kpc, which is typical for the spacing
between beads on a string of star formation in local spiral arms. If both types of arms form
clumps by gravitational instabilities, then the turbulent speed of the interstellar medium in
the GEMS and GOODS sample has to be larger than it is locally by a factor of ∼ 5 or more;
the gas mass column density has to be larger by this factor squared.
Some interactions have tidal dwarf galaxies at the ends of their tidal arms, similar to
those found in the Superantennae galaxy and other local mergers. One diffuse interaction
with red stellar tidal debris has a large stellar clump that may have formed by gravitational
collapse in a stellar tidal arm; the clump mass is 5 × 10⁹ M⊙. Long-arm interactions are
relatively rare, comprising only ∼ 17% of our total sample of ∼ 300 interacting systems (only
a fraction of which were discussed here). For those with long arms, numerical models suggest
the dark matter halos must be extended, so that the rotation curves are falling in the outer
disks. Most interactions are not like this, however, so the rotation curves are probably still
rising in their outer disks, like most galaxies locally.
We gratefully acknowledge summer student support for B.M. and T.F. through an REU
grant for the Keck Northeast Astronomy Consortium from the National Science Foundation
(AST-0353997) and from the Vassar URSI (Undergraduate Research Summer Institute) pro-
gram. D.M.E. thanks Vassar for publication support through a Research Grant. We thank
the referee for useful comments. This research has made use of the NASA/IPAC Extragalac-
tic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute
of Technology, under contract with the National Aeronautics and Space Administration.
REFERENCES
Abraham, R., Tanvir, N., Santiago, B., Ellis, R., Glazebrook, K., & van den Bergh, S. 1996,
MNRAS, 279, L47
Arp, H.J. 1966, Atlas of Peculiar Galaxies (Pasadena: CalTech)
Athanassoula, E., & Bosma, A. 1985, ARAA, 23, 147
Baldry, I. K. et al. 2004, ApJ, 600, 681
Barnes, J.E. 1988, ApJ, 331, 699
Barnes, J.E. 2004, MNRAS, 350, 798
Barnes, J., & Hernquist, L. 1992, ARAA, 30, 705
Bastian, N., Hempel, M., Kissler-Patig, M., Homeier, N., & Trancho, G. 2005, A&A, 435,
Beckwith, S.V.W., et al. 2006, AJ, 132, 1729
Bell, E.F., et al. 2006, ApJ, 640, 241
Bournaud, F., Combes, F., Jog, C., & Puerari, I. 2005, A&A, 438, 507
Bournaud, F., Duc, P.-A., Amram, P., Combes, F., & Gach, J.-L. 2004, A&A, 425, 813
Bournaud, F., Duc, P.-A., & Masset, F. 2003, A&A, 411, L469
Burbidge, E.M., & Burbidge, G.R. 1959, ApJ, 130, 23
Caldwell, J. et al., 2005, astro-ph/0510782
Carroll, S. M., Press, W. H., & Turner, E. L. 1992, ARAA, 30, 499
Chromey, F. R., Elmegreen, D. M., Mandell, A., & McDermott, J. 1998, AJ, 115, 2331
Conselice, C.J. 2005, in Multiwavelength mapping of galaxy formation and evolution, ESO
Workshop, ed. A. Renzini & R. Bender (Berlin: Springer), 163
Conselice, C.J. 2006b, MNRAS, 373, 1389
Conselice, C.J. 2006a, ApJ, 638, 686
Conselice, C., Bershady, M., Dickinson, M., & Papovich, C. 2003, AJ, 126, 1183
Coe, D., Benitez, N., Sanchez, S., Jee, M., Bouwens, R., & Ford, H. 2006, AJ, 132, 926
de Grijs, R., Lee, J., Mora Herrera, M., Fritze-v. Alvensleben, U., & Anders, P. 2003, NewA,
8, 155
de Mello, D., Wadadekar, Y., Dahlen, T., Casertano, S., & Gardner, J.P. 2006, AJ, 131, 216
Dubinski, J., Mihos, J., & Hernquist, L. 1999, ApJ, 526, 607
Duc, P.-A., et al., 1997, A&A, 326, 537
Efremov, Y.N. 1995, AJ, 110, 2757
Elmegreen, B.G. 1994, ApJ, 433, 39
Elmegreen, B.G., & Elmegreen, D.M. 1983, MNRAS, 203, 31
Elmegreen, B.G., & Elmegreen, D.M. 1987, ApJ, 320, 182
Elmegreen, B.G., & Elmegreen, D.M. 2005, ApJ, 627, 632
Elmegreen, B., Elmegreen, D., Salzer, J., & Mann, H. 1996, ApJ, 467, 579
Elmegreen, B., Kaufman, M., & Thomasson, M. 1993, ApJ, 412, 90
Elmegreen, D.M., & Elmegreen, B.G. 2006, ApJ, 651, 676
Elmegreen, D.M., Kaufman, M., Elmegreen, B.G., Brinks, E., Struck, C., Klaric, M., &
Thomasson, M. 2001, AJ, 121, 182
Elmegreen, D.M., Elmegreen, B.G., Ravindranath, S., & Coe, D. 2007, astro-ph/0701121
Elmegreen, D.M., Elmegreen, B.G., Rubin, D.S., & Schaffer, M.A. 2005, ApJ, 631, 85
Elmegreen, D., Kaufman, M., Brinks, E., Elmegreen, B., & Sundin, M. 1995, ApJ, 453, 100
Elmegreen, D., & Salzer, J. 1999, AJ, 117, 764
Faber, S. et al. 2005, astro-ph/0506044
Fukugita, M., Shimasaku, K., & Ichikawa, T. 1995, PASP, 107, 945
Genzel, R. et al. 2006, Nature, 442, 786
Giavalisco, M., et al. 2004, ApJ, 600, L103
Hibbard, J.E., & Vacca, W. D. 1997, AJ, 114, 1741
Hibbard, J., & van Gorkom, J. 1996, AJ, 111, 655
Hibbard, J.E., & Yun, M.S. 1999, AJ, 118, 162
Horellou, C., & Koribalski, B. 2007, A&A, 464, 155
Iglesias-Paramo, J., & Vilchez, J.M. 2001, ApJ, 550, 204
Irwin, J.A. 1994, ApJ, 429, 618
Jarrett, T.H., et al. 2006, AJ, 131, 261
Kaufman, M., Brinks, E., Elmegreen, D., Thomasson, M., Elmegreen, B., Struck, C., &
Klaric, M. 1997, AJ, 114, 2323
Kaufman, M., Brinks, E., Elmegreen, B.G., Elmegreen, D.M., Klaric, M., Struck, C.,
Thomasson, M., & Vogel, S., 1999, AJ, 118, 1577
Kaufman, M., Sheth, K., Struck, C., Elmegreen, B G., Thomasson, M., Elmegreen, D.M.,
Brinks, E. 2002, AJ, 123, 702
Kim, W.-T. & Ostriker, E.C. 2006, ApJ, 646, 213
Knierman, K., et al. 2003, AJ, 126, 1227
Laine, S., van der Marel, R., Rossa, J., Hibbard, J., Mihos, J., Boker, T., & Zabludoff, A.
2003, AJ, 126, 2717
Larson, R. B., & Tinsley, B. M. 1978, ApJ, 219, 46
Lavery, R., Remijan, A., Charmandaris, V., Hayes, R., & Ring, A. 2004, ApJ, 612, 679
La Vigne, M.A., Vogel, S.N., & Ostriker, E.C. 2006, ApJ, 650, 818
Lotz, J. M., Madau, P., Giavalisco, M., Primack, J., & Ferguson, H. 2006, ApJ, 636, 592
Malin, D.F., & Carter, D. 1980, Nature, 285, 643
Mihos, C. 1995, ApJ, 438, L75
Mirabel, I., Dottori, H., & Lutz, D. 1992, A&A, 256, L19
Mirabel, I., Lutz, D., & Maza, J. 1991, A&A, 243, 367
Naab, T., Khochfar, S., & Burkert, A. 2006, ApJ, 636, L81
Neuschaefer, L., Im, M., Ratnatunga, U., Griffiths, R., & Casertano, S. 1997, ApJ, 480, 59
Patton, D., Pritchet, C., Yee, H., Ellingson, E., & Carlberg, R. 1997, ApJ, 475, 29
Rasband, W.S., 1997, ImageJ, U.S. National Institutes of Health, Bethesda, MD,
http://rsb.info.nih.gov/ij/
Rix, H.W., et al. 2004, ApJS, 152, 163
Rots, A., Bosma, A., van der Hulst, J., Athanassoula, E., & Crane, P. 1990, AJ, 100, 387
Rowan-Robinson, M. 2003, MNRAS, 345, 819
Schombert, J., Wallin, J., & Struck-Marcell, C. 1990, AJ, 99, 497
Smith, B.J., Struck, C., Appleton, P.N., Charmandaris, V., Reach, W., & Eitter, J.J. 2005,
AJ, 130, 2117
Smith, B.J., Struck, C., Hancock, M., Appleton, P.N., Charmandaris, V., & Reach, W.T.
2007, AJ, 133, 791
Spergel, D. N., et al. 2003, ApJS, 148, 175
Springel, V. & White, S. 1999, MNRAS, 307, 162
Sundin, M. 1993, Ph.D. thesis, Chalmers Univ. of Technology
Straughn, A. N., Cohen, S. H., Ryan, R. E., Hathi, N. P., Windhorst, R. A., & Jansen, R.
A. 2006, ApJ, 639, 724
Struck, C. 1999, Physics Reports, 321, 1
Struck, C., Appleton, P., Borne, K., & Lucas, R. 1996, AJ, 112, 1868
Toomre, A. 1977, in “The Evolution of Galaxies and Stellar Populations,” eds. W. Becker
and G. Contopoulos (Reidel: Dordrecht), 401
Toomre, A., & Toomre, J. 1972, ApJ, 178, 623
Tran, H.D., et al. 2003, ApJ, 585, 750
Trujillo, I., et al. 2006, MNRAS, 373, L36
Tyson, J. A., Fischer, P., Guhathakurta, P., McIlroy, P., Wenk, R., Huchra, J., Macri, L.,
Neuschaefer, L., Sarajedini, V., Glazebrook, K., Ratnatunga, K., & Griffiths, R. 1998,
AJ, 116, 102
Vorontsov-Velyaminov, B.A. 1957, Astr. Circ., USSR, 178, 19
Vorontsov-Velyaminov, B.A. 1959, Atlas and Catalog of Interacting Galaxies, (Moscow:
Sternberg Institute)
Weilbacher, P., Duc, P.-A., & Fritze-v. Alvensleben, U. 2003, A&A, 397, 545
Weilbacher, P., Fritze-v. Alvensleben, U., Duc, P.-A., & Fricke, K.J. 2002, ApJ, 579, L79
Weiner, B.J, Willmer, C. N. A., Faber, S. M., Melbourne, J., Kassin, S.A., Phillips, A.C.,
Harker, J., Metevier, A. J., Vogt, N. P., & Koo, D. C. 2006, ApJ, 653, 1027
Wetzstein, M., Naab, T., & Burkert, A. 2007, MNRAS, 375, 805
Whitmore, B., et al. 2005, AJ, 130, 2104
Wolf, C., Meisenheimer, K., Rix, H.-W., Borch, A., Dye, S., & Kleinheinrich, M. 2003, A&A,
401, 73
This preprint was prepared with the AAS LATEX macros v5.2.
Table 1. Interacting Galaxies in GEMS and GOODS
Type, Figure Number COMBO 17 z R mag.
Diffuse (Fig. 1) 1 6423 0.15 16.572
2 12639 0.154 16.678
3 11538 0.134 17.713
4 53129 0.171 16.968
5 57881 0.118 17.552
6 28509 0.093 18.79
7 17207 0.69 19.742
8 30824 0.341 19.755
9 25874 0.262 19.757
Diffuse (other) 10 22588 0.684 21.263
11 21990 0.429 21.243
12 46898 0.617 20.794
13 49709 0.302 20.23
14 15233 0.304 18.857
Antennae (Fig. 2) 15 61546 0.552 20.41
16 45115 0.579 21.275
17 20280 0.555 21.653
18 41907 0.702 22.66
19 35611 1.256 22.655
20 10548 0.698 22.43
21 33650 0.169 18.86
22 42890 0.421 20.68
23 49860 1.169 23.632
24 34926 0.779 -19.69
Antennae (other) 25 14829 0.219 21.429
26 18588 0.814 22.748
27 46738 1.204 20.65
28 7551 1.162 25.926
29 20034 1.326 21.932
30 33267 0.067 23.112
31 38651 0.988 23.89
32 55495 1.00 24.261
M51-type (Fig. 3) 33 5640 0.204 19.477
34 9415 0.523 21.16
35 40901 0.193 19.751
36 17522 0.82 23.103
37 6209 1.187 22.723
38 23667 1.151 23.514
39 37293 0.274 20.533
40 39805 0.557 20.089
41 53243 0.698 21.683
42 15599 0.56 21.381
43 25783 0.663 20.732
44 39228 0.117 18.031
M51-type (other) 45 1984 0.762 22.855
Table 1—Continued
Type, Figure Number COMBO 17 z R mag.
46 2760 1.281 23.202
47 15040 0.667 22.392
48 18502 0.228 21.942
49 14959 0.306 19.581
50 16023 0.668 21.887
51 30226 0.509 22.689
52 40744 0.292 21.119
53 45102 0.857 22.514
54 60582 0.946 22.54
Shrimp (Fig. 4) 55 40198 0.201 20.55
56 14373 0.795 23.183
57 12222 1.004 22.417
58 28344 0.257 19.509
59 56284 0.657 21.667
60 2385 0.283 21.334
61 54335 0.892 22.824
62 28841 0.673 20.971
63 6955 0.983 22.24
Shrimp (other) 64 34244 0.999 22.504
65 48298 0.429 21.663
66 37809 0.357 20.667
67 25316 0.985 23.717
68 49595 0.663 21.939
69 59467 0.487 21.568
70 9062 0.854 23.82
71 30076 0.832 22.672
72 2760 1.281 23.202
73 54335 0.892 22.824
Assembly (Fig. 5) 74 28751 0.093 23.506
75 4728 0.702 22.799
76 23187 1.183 23.565
77 45309 1.061 22.916
78 41835 0.098 19.134
79 61945 1.309 21.813
80 62605 1.011 23.143
81 44956 0.506 22.494
82 4546 0.809 22.163
83 23000 0.132 22.951
84 63112 0.499 22.273
85 43975 1.059 22.878
Equal (Fig. 6) 86 40813 0.182 19.983
87 8496 0.354 22.415
88 13836 0.661 21.054
89 11164 0.464 19.351
90 39877 0.493 22.142
Table 1—Continued
Type, Figure Number COMBO 17 z R mag.
91 40598 0.263 20.128
92 51021 0.743 20.96
93 35317 0.671 20.755
94 56256 0.502 20.309
95 47568 0.649 20.206
96 40766 0.46 19.997
97 24927 0.524 19.647
98 15233 0.304 18.857
99 18663 1.048 24.011
100 43242 0.657 21.177
Table 2: Tidal Dwarf Galaxy Candidates
Galaxy (COMBO17 #)   z   Galaxy MB,rest (mag)   Diam. (kpc)   Dwarf MB,rest (mag)   V606 (mag)   V606 − z850 (mag)   Clump Mass (×10⁸ M⊙)
1 (6423) 0.15 -20.64 13.9 -17.55 20.83 0.90 50
15 (61546) 0.552 -20.77 5.5 -16.67 26.12 0.78 1.2
16 (45115) 0.579 -20.17 4.7 -17.72 25.42 1.1 4.6
18 (41907) 0.702 -19.21 1.9 -16.03 27.45 0.55 0.24
17 (20280) 0.555 -19.56 6.2 -17.06 25.77 0.73 1.4
Fig. 1.— Color images of galaxies in the GEMS and GOODS fields with smooth diffuse tidal
debris. The galaxy at the top right, number 3 in Table 1, is only partially covered by the
GEMS field; the right-hand portion of the image is from ground-based observations. The
smooth debris is presumably from old stars that were spread out during the interaction. A
few small star-formation patches are evident in some cases. The clump in the upper right
corner of the galaxy 1 image could be a rare example of a gravitationally driven condensation
in a pure-stellar arm. The smooth arcs and spirals in this and other images are probably
a combination of orbital debris and flung-out tidal tails. The galaxy numbers, as listed in
Table 1, are 1 through 9, as plotted from left to right and top to bottom. (Image quality
degraded for astroph.)
Fig. 2.— Color images of interacting antennae galaxies with long and structured tidal arms.
Galaxy numbers, in order, are 15 through 24. Several have dwarf galaxy-like condensations
at the arm tips or broad condensations midway out in the arms. The dwarf elliptical at the
tip of the tidal arm in galaxy 20 might have existed before the interaction and been placed
there by tidal forces; the main body of this system has a double nucleus from the main
interaction. (Image quality degraded for astroph.)
Fig. 3.— M51-type galaxies are shown as logarithmic grayscale V-band images. In order,
the galaxy numbers are 33 through 44. The linear streak in galaxy 44 could be orbital debris
from the small companion on the right. (Image quality degraded for astroph.)
Fig. 4.— Shrimp galaxies, named because of their curved tails, are shown as logarithmic
V-band images. In order, they are numbers 55 through 63. (Image quality degraded for
astroph.)
Fig. 5.— Assembly galaxies look like they are being assembled through mergers. In order:
galaxy 74 through 85.
Fig. 6.— Galaxies with approximately equal-mass grazing companions, in order, are 86
through 100.
Fig. 7.— Restframe (U-B) and (B-V) integrated colors for interacting galaxies in the GEMS
and GOODS fields, from COMBO-17. The reddest tend to be the diffuse types, which
are presumably dry mergers, and the bluest are the assembly types, which could be young
proto-galaxies. Crosses indicate standard Hubble types, measured by Fukugita et al. (1995).
Fig. 8.— Restframe Johnson U-B integrated color versus absolute restframe MB, from
COMBO-17. The solid line separates the red sequence and blue cloud (Conselice 2006b).
Color limits for local galaxies are indicated by the horizontal short-dashed lines; local galaxies
are brighter than the vertical line. The local blue cloud galaxies are approximately delimited
on the left side of the diagram by the long-dashed lines. Thus, most of our observed galaxies
fall near the local galaxy colors and magnitudes.
Fig. 9.— Restframe B absolute magnitudes of star-forming clumps versus integrated galaxy
restframe magnitudes. The correlation is also found for local galaxies.
Fig. 10.— Models at z = 1 for clump color (bottom left) and clump mass at an apparent
V606 magnitude of 27 (lower right) are shown in the bottom panels versus the duration of star
formation in 6 models with exponentially decaying star formation. Five lines are for decay
times of 10⁷, 3×10⁷, 10⁸, 3×10⁸, and 10⁹ years, and the sixth line represents a constant rate.
Shorter decay times correspond to redder color (upper lines) and higher masses (upper lines).
In the top panels, the clump mass at V606 = 27 (top left) and the clump apparent magnitude
at 10⁸ M⊙ masses (top right) are shown versus the clump color. The correspondence between
color and mass gives a degeneracy to plots of mass versus color at a fixed apparent magnitude
(top left) and apparent magnitude versus color at a fixed mass (top right). Thus the masses
of clumps can be derived approximately from their V606 − z850 colors and V606 magnitudes
for each redshift.
[Fig. 11 panels: apparent V606 magnitude versus V606 − z850 color in six redshift bins, z = 0–0.125, 0.125–0.375, 0.375–0.625, 0.625–0.875, 0.875–1.125, and 1.125–1.375, with symbols for the Diffuse, Antenna, M51-type, Shrimp, Assembly, Equal, and Tidal Dwarf classes.]
Fig. 11.— The masses of the clumps can be estimated from this figure. Each curve in a
cluster of curves is a different model for color-magnitude evolution of a star-forming region,
with the age of the region changing along the curve and the exponential decay rate of the
star formation changing from curve to curve. The different clusters of curves correspond to
different total masses for the star-forming regions (mass in M⊙ is indicated to the right of each
curve). The symbols represent observations of apparent magnitude and color. Bandshifting
and absorption are considered by plotting the observations and models in redshift bins.
The mass scales shift slightly with redshift. The mass of each star-forming region can be
determined by interpolation between the curves. Typical masses are 10⁶ M⊙ for low z and 10⁸ M⊙ for high z. The circle near the 10¹⁰ M⊙ curves in the z = 0.125 − 0.375 interval
corresponds to the diffuse clump in the tidal debris of galaxy 1 in Fig. 1.
Fig. 12.— Clump masses (left axis) are plotted versus galaxy type in order of Figs. 1-6:
Diffuse, Antenna, M51-type, Shrimp, Assembly, and Equal, with T representing the tidal
dwarfs. The method of Fig. 11 is used. The rms deviations among the six star formation
decay times are shown as plus-symbols using the right-hand axes.
Fig. 13.— The apparent color of a star forming region is shown versus the duration of star
formation for an exponentially decaying star formation law. The decay times are as in Fig.
10, with short decay times the upper lines and continuous star formation the lower lines.
Using the observed clump colors, the durations of star formation are found to range between
10⁷ and 3×10⁸ yrs for short decay times.
Fig. 14.— V-band surface brightness of tidal tails for galaxies in Figures 1-4 plotted as a
function of (1 + z)⁴ for redshift z. Some systems have more than one tail. Cosmological dimming causes a decrease with redshift equal to 2.5 magnitudes for each factor of 10 in (1 + z)⁴; this decrease is consistent with the dimming seen here. The observable 2σ limit for these fields is ∼ 25 mag arcsec⁻². Some antenna galaxies have patchy tails with fainter
average surface brightnesses.
Fig. 15.— Fraction of V-band luminosity in antennae tidal tails relative to their integrated
galaxy luminosity, as a function of redshift.
Fig. 16.— Tail length versus disk diameter from Figs. 1-4, based on the V-band images.
Conversions to linear size assumed a standard ΛCDM cosmology applied to the photometric
redshifts.
Fig. 17.— Tail length/disk diameter as a function of redshift for shrimps and antennae,
measured from the V-band images. There is no obvious trend.
[Fig. 18 point labels: local antennae shown include the Superantennae, Arp 299, NGC 3628, Arp 241, VV 109, NGC 3256, the Antennae, Arp 243, Arp 242, Arp 226, Arp 157, Arp 75, Arp 35, NGC 5548, Arp 33, and Arp 102; the x-axis is tail length in kpc.]
Fig. 18.— Tail length/disk diameter versus the tail length for antenna galaxies in our sample
as well as for local antennae, whose names are indicated. The GEMS and GOODS systems
are significantly smaller than the local antenna galaxies, even if the two extreme local cases,
the Superantennae and Arp 299, are excluded.
|
0704.0912 | Nuclear Spin Effects in Optical Lattice Clocks | Nuclear Spin Effects in Optical Lattice Clocks
Martin M. Boyd, Tanya Zelevinsky, Andrew D. Ludlow, Sebastian
Blatt, Thomas Zanon-Willette, Seth M. Foreman, and Jun Ye
JILA, National Institute of Standards and Technology and University of Colorado,
Department of Physics, University of Colorado, Boulder, CO 80309-0440
(Dated: August 28, 2018)
We present a detailed experimental and theoretical study of the effect of nuclear spin on the
performance of optical lattice clocks. With a state-mixing theory including spin-orbit and hyperfine
interactions, we describe the origin of the 1S0-
3P0 clock transition and the differential g-factor be-
tween the two clock states for alkaline-earth(-like) atoms, using 87Sr as an example. Clock frequency
shifts due to magnetic and optical fields are discussed with an emphasis on those relating to nuclear
structure. An experimental determination of the differential g-factor in 87Sr is performed and is
in good agreement with theory. The magnitude of the tensor light shift on the clock states is also
explored experimentally. State specific measurements with controlled nuclear spin polarization are
discussed as a method to reduce the nuclear spin-related systematic effects to below 10−17 in lattice
clocks.
Optical clocks [1] based on alkaline-earth atoms con-
fined in an optical lattice [2] are being intensively ex-
plored as a route to improve state of the art clock accu-
racy and precision. Pursuit of such clocks is motivated
mainly by the benefits of Lamb-Dicke confinement which
allows high spectral resolution [3, 4], and high accuracy
[5, 6, 7, 8] with the suppression of motional effects, while
the impact of the lattice potential can be eliminated using
the Stark cancelation technique [9, 10, 11, 12]. Lattice
clocks have the potential to reach the impressive accu-
racy level of trapped ion systems, such as the Hg+ opti-
cal clock [13], while having an improved stability due to
the large number of atoms involved in the measurement.
Most of the work performed thus far for lattice clocks has
been focused on the nuclear-spin induced 1S0-
3P0 tran-
sition in 87Sr. Recent experimental results are promis-
ing for development of lattice clocks as high performance
optical frequency standards. These include the confir-
mation that hyperpolarizability effects will not limit the
clock accuracy at the 10−17 level [12], observation of tran-
sition resonances as narrow as 1.8 Hz [3], and the excel-
lent agreement between high accuracy frequency mea-
surements performed by three independent laboratories
[5, 6, 7, 8] with clock systematics associated with the lat-
tice technique now controlled below 10−15 [6]. A main
effort of the recent accuracy evaluations has been to min-
imize the effect that nuclear spin (I = 9/2 for 87Sr) has
on the performance of the clock. Specifically, a linear
Zeeman shift is present due to the same hyperfine inter-
action which provides the clock transition, and magnetic
sublevel-dependent light shifts exist, which can compli-
cate the Stark cancelation techniques. To reach accuracy
levels below 10−17, these effects need to be characterized
and controlled.
The long coherence time of the clock states in alkaline
earth atoms also makes the lattice clock an intriguing
system for quantum information processing. The closed
electronic shell should allow independent control of elec-
tronic and nuclear angular momenta, as well as protec-
tion of the nuclear spin from environmental perturbation,
providing a robust system for coherent manipulation[14].
Recently, protocols have been presented for entangling
nuclear spins in these systems using cold collisions [15]
and performing coherent nuclear spin operations while
cooling the system via the electronic transition [16].
Precise characterization of the effects of electronic and
nuclear angular-momentum-interactions and the resul-
tant state mixing is essential to lattice clocks and po-
tential quantum information experiments, and therefore
is the central focus of this work. The organization of
this paper is as follows. First, state mixing is discussed
in terms of the origin of the clock transition as well as
a basis for evaluating external field sensitivities on the
clock transition. In the next two sections, nuclear-spin
related shifts of the clock states due to both magnetic
fields and the lattice trapping potential are discussed.
The theoretical development is presented for a general
alkaline-earth type structure, using 87Sr only as an ex-
ample (Fig. 1), so that the results can be applied to other
species with similar level structure, such as Mg, Ca, Yb,
Hg, Zn, Cd, Al+, and In+. Following the theoretical dis-
cussion is a detailed experimental investigation of these
nuclear spin related effects in 87Sr, and a comparison to
the theory sections. Finally, the results are discussed in
the context of the performance of optical lattice clocks,
including a comparison with recent proposals to induce
the clock transition using external fields in order to elim-
inate nuclear spin effects [17, 18, 19, 20, 21, 22]. The
appendix contains additional details on the state mixing
and magnetic sensitivity calculations.
I. STATE MIXING IN THE nsnp
CONFIGURATION
To describe the two-electron system in intermediate
coupling, we follow the method of Breit and Wills [23]
and Lurio [24] and write the four real states of the ns np
configuration as expansions of pure spin-orbit (LS) cou-
http://arxiv.org/abs/0704.0912v2
[Fig. 1 level-diagram inset: hyperfine A and Q coupling constants (MHz): −260/−35, −212/67, −3.4/39.]
FIG. 1: (color online) Simplified 87Sr energy level diagram
(not to scale). Relevant optical transitions discussed in the
text are shown as solid arrows, with corresponding wave-
lengths given in nanometers. Hyperfine structure sublevels
are labeled by total angular momentum F , and the magnetic
dipole (A) and electric quadrupole (Q, equivalent to the hy-
perfine B coefficient) coupling constants are listed in the inset.
State mixing of the 1P1 and
3P1 states due to the spin-orbit
interaction is shown as a dashed arrow. Dotted arrows repre-
sent the hyperfine induced state mixing of the 3P0 state with
the other F = 9/2 states in the 5s5p manifold.
pling states,
|³P₀〉 = |³P₀⁰〉,
|³P₁〉 = α |³P₁⁰〉 + β |¹P₁⁰〉,
|³P₂〉 = |³P₂⁰〉,
|¹P₁〉 = −β |³P₁⁰〉 + α |¹P₁⁰〉. (1)
Here the intermediate coupling coefficients α and β
(0.9996 and -0.0286 respectively for Sr) represent the
strength of the spin-orbit induced state mixing between
singlet and triplet levels, and can be determined from
experimentally measured lifetimes of 1P1 and
3P1 (see
Eq. 15 in the appendix). This mixing process results in
a weakly allowed 1S0-
3P1 transition (which would other-
wise be spin-forbidden), and has been used for a variety
of experiments spanning different fields of atomic physics.
In recent years, these intercombination transitions have
provided a unique testing ground for studies of narrow-
line cooling in Sr [25, 26, 27, 28, 29] and Ca [30, 31], as
well as the previously unexplored regime of photoassocia-
tion using long lived states [32, 33, 34]. These transitions
have also received considerable attention as potential op-
tical frequency standards [35, 36, 37], owing mainly to
the high line quality factors and insensitivity to external
fields. Fundamental symmetry measurements, relevant
to searches of physics beyond the standard model, have
also made use of this transition in Hg [38]. Furthermore,
the lack of hyperfine structure in the bosonic isotopes
(I = 0) can simplify comparison between experiment and
theory.
The hyperfine interaction (HFI) in fermionic isotopes
provides an additional state mixing mechanism between
states having the same total spin F , mixing the pure 3P0
state with the 3P1,
3P2 and
1P1 states.
|³P₀〉 = |³P₀⁰〉 + α₀ |³P₁〉 + β₀ |¹P₁〉 + γ₀ |³P₂〉. (2)
The HFI mixing coefficients α₀, β₀, and γ₀ (2×10⁻⁴, −4×10⁻⁶, and 4×10⁻⁶ respectively for 87Sr) are defined in
Eq. 16 of the appendix and can be related to the hyperfine
splitting in the P states, the fine structure splitting in the
3P states, and the coupling coefficients α and β [23, 24].
The 3P0 state can also be written as a combination of
pure states using Eq. 1,
|³P₀〉 = |³P₀⁰〉 + (α₀α − β₀β) |³P₁⁰〉 + (α₀β + β₀α) |¹P₁⁰〉 + γ₀ |³P₂⁰〉. (3)
The HFI mixing enables a non-zero electric-dipole tran-
sition via the pure ¹P₁⁰ state, with a lifetime which can
be calculated given the spin-orbit and HFI mixing coef-
ficients, the 3P1 lifetime, and the wavelengths (λ) of the
3P0 and
3P1 transitions from the ground state [39].
τ(³P₀) = (λ(³P₀−¹S₀)/λ(³P₁−¹S₀))³ × β²/(α₀β + β₀α)² × τ(³P₁). (4)
In the case of Sr, the result is a natural lifetime on the
order of 100 seconds [9, 40, 41], compared to that of a
bosonic isotope where the lifetime approaches 1000 years
[41]. Although the 100 second coherence time of the
excited state exceeds other practical limitations in cur-
rent experiments, such as laser stability or lattice life-
time, coherence times approaching one second have been
achieved [3]. The high spectral resolution has allowed a
study of nuclear-spin related effects in the lattice clock
system discussed below.
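A rough numerical check of Eq. 4 can be made with the mixing coefficients quoted above; the ³P₁ lifetime and the two transition wavelengths are not given in this section, so the values below (≈21.4 µs, 689 nm, and 698 nm) are assumed standard numbers for Sr used only for illustration:

alpha, beta = 0.9996, -0.0286      # spin-orbit mixing coefficients (quoted in the text)
alpha0, beta0 = 2e-4, -4e-6        # HFI mixing coefficients (quoted in the text)
tau_3P1 = 21.4e-6                  # s, assumed
lam_3P0, lam_3P1 = 698e-9, 689e-9  # m, assumed

tau_3P0 = (lam_3P0 / lam_3P1) ** 3 * beta ** 2 / (alpha0 * beta + beta0 * alpha) ** 2 * tau_3P1
print(tau_3P0)   # a few hundred seconds, i.e. on the order of 100 s as stated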
The level structure and state mixing discussed here
are summarized in a simplified energy diagram, shown
in Fig. 1, which gives the relevant atomic structure and
optical transitions for the 5s5p configuration in 87Sr.
II. THE EFFECT OF EXTERNAL MAGNETIC
FIELDS
With the obvious advantages in spectroscopic precision
of the 1S0-
3P0 transition in an optical lattice, the sensi-
tivity of the clock transition to external field shifts is a
central issue in developing the lattice clock as an atomic
frequency standard. To evaluate the magnetic sensitivity
of the clock states, we follow the treatment of Ref. [24] for
the intermediate coupling regime described by Eqns. 1-3
in the presence of a weak magnetic field. A more general
treatment for the case of intermediate fields is provided
in the appendix. The Hamiltonian for the Zeeman inter-
action in the presence of a weak magnetic field B along
the z-axis is given as
HZ = (gsSz + glLz − gIIz)µ0B. (5)
Here gs ≃ 2 and gl = 1 are the spin and orbital an-
gular momentum g-factors, and Sz, Lz, and Iz are the
z-components of the electron spin, orbital, and nuclear
spin angular momentum respectively. The nuclear g-
factor, gI, is given by gI = µI(1−σd)/(µ0|I|), where µI is the nuclear magnetic moment, σd is the diamagnetic correction, and µ0 = µB/h. Here, µB is the Bohr magneton, and h is Planck's
constant. For 87Sr, the nuclear magnetic moment and
diamagnetic correction are µI = −1.0924(7)µN [42] and
σd = 0.00345 [43] respectively, where µN is the nuclear
magneton. In the absence of state mixing, the 3P0 g-
factor would be identical to the 1S0 g-factor (assuming
the diamagnetic effect differs by a negligible amount for
different electronic states), equal to gI . However since
the HFI modifies the 3P0 wavefunction, a differential g-
factor, δg, exists between the two states. This can be
interpreted as a paramagnetic shift arising due to the
distortion of the electronic orbitals in the triplet state,
and hence the magnetic moment [44]. δg is given by
δg = −[〈³P₀|HZ|³P₀〉 − 〈³P₀⁰|HZ|³P₀⁰〉] / (mF µ0 B)
   = −2(α₀α − β₀β) 〈³P₀⁰, mF|HZ|³P₁⁰, F = I, mF〉 / (mF µ0 B) + O(α₀², γ₀², …). (6)
Using the matrix element given in the appendix for
87Sr (I = 9/2), we find 〈³P₀⁰, mF|HZ|³P₁⁰, F = 9/2, mF〉 ∝ mF µ0 B, corresponding to a modification of the
g-factor by ∼60%. Note that the sign in Eq. 6 differs
from that reported in [39, 44] due to our choice of sign
for the nuclear term in the Zeeman Hamiltonian (oppo-
site of that found in Ref. [24]). The resulting linear
Zeeman shift ∆B^(1) = −δg mF µ0 B of the 3P0 transition is on the order of ∼110×mF Hz/G (1 G = 10⁻⁴ Tesla).
This is an important effect for the development of lattice
clocks, as stray magnetic fields can broaden the clock
transition (deteriorate the stability) if multiple sublevels
are used. Furthermore, imbalanced population among
the sublevels or mixed probe polarizations can cause fre-
quency errors due to line shape asymmetries or shifts.
It has been demonstrated that if a narrow resonance is
achieved (10 Hz in the case of Ref. [6]), these systematics
can be controlled at 5×10−16 for stray fields of less than
5 mG. To reduce this effect, one could employ narrower
resonances or magnetic shielding.
An alternative measurement scheme is to measure
the average transition frequency between mF and −mF
states to cancel the frequency shifts. This requires
application of a bias field to resolve the sublevels, and
therefore the second order Zeeman shift ∆B^(2) must be considered. The two clock states are both J = 0, so the shift ∆B^(2) arises from levels separated in energy by the
fine-structure splitting, as opposed to the more tradi-
tional case of alkali(-like) atoms where the second order
shift arises from nearby hyperfine levels. The shift of
the clock transition is dominated by the interaction of
FIG. 2: (color online) A Breit-Rabi diagram for the 1S0-
3P0 clock transition using Eq. 22 with δgµ0 = −109 Hz/G. Inset
shows the linear nature of the clock shifts at the fields relevant
for the measurement described in the text.
the 3P0 and
3P1 states since the ground state is sepa-
rated from all other energy levels by optical frequencies.
Therefore, the total Zeeman shift of the clock transition
∆B is given by
∆B = ∆B^(1) + ∆B^(2), where ∆B^(2) = −Σ_F′ |〈³P₀, F, mF|HZ|³P₁, F′, mF〉|² / [ν(³P₁, F′) − ν(³P₀)]. (7)
The frequency difference in the denominator is mainly
due to the fine-structure splitting and is nearly indepen-
dent of F ′, and can therefore be pulled out of the sum-
mation. In terms of the pure states, and ignoring terms
of order α₀, β₀, β², and smaller, we have

∆B^(2) ≃ −α² Σ_F′ |〈³P₀⁰, F, mF|HZ|³P₁⁰, F′, mF〉|² / [ν(³P₁) − ν(³P₀)]
       = −2α²(gl − gs)² µ0² B² / [3(ν(³P₁) − ν(³P₀))], (8)
where we have used the matrix elements given in the
appendix for the case F = 9/2. From Eq. 8 the sec-
ond order Zeeman shift (given in Hz for a magnetic field
given in Gauss) for 87Sr is ∆B^(2) = −0.233 B². This is con-
sistent with the results obtained in Ref. [20] and [45] for
the bosonic isotope. Inclusion of the hyperfine splitting
into the frequency difference in the denominator of Eq. 7
yields an additional term in the second order shift pro-
portional to mF² which is more than 10⁻⁶ times smaller
than the main effect, and therefore negligible. Notably,
the fractional frequency shift due to the second order
Zeeman effect of 5×10⁻¹⁶ G⁻² is nearly 10⁸ times smaller than that of the Cs [46, 47] clock transition, and more than an order of magnitude smaller than that present in Hg+ [13], Sr+ [48, 49], and Yb+ [50, 51] ion optical clocks.
A Breit-Rabi like diagram is shown in Fig. 2, giving
the shift of the 1S0-
3P0 transition frequency for different
mF sublevels (assuming ∆m = 0 for π transitions), as a
function of magnetic field. The calculation is performed
using an analytical Breit-Rabi formula (Eq. 22) provided
in the appendix. The result is indistinguishable from the
perturbative derivation in this section, even for fields as
large as 10⁴ G.
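The field sensitivities quoted above can be combined in a small numerical sketch; the linear coefficient δgµ0 ≈ −109 Hz/G is taken from the Fig. 2 caption, the quadratic coefficient −0.233 Hz/G² from Eq. 8, and the bias field below is the one used for the measurements of Sect. IV:

def pi_shift_hz(mF, B, dg_mu0=-109.0, c2=-0.233):
    # shift of a pi transition: -delta_g*mu0*mF*B plus the second-order term
    return -dg_mu0 * mF * B + c2 * B ** 2

B = 0.58  # Gauss
print(pi_shift_hz(+4.5, B), pi_shift_hz(-4.5, B))   # roughly +/-285 Hz for mF = +/-9/2
print(pi_shift_hz(+4.5, B) - pi_shift_hz(+3.5, B))  # ~63 Hz between adjacent pi lines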
III. THE EFFECT OF THE OPTICAL LATTICE
POTENTIAL
In this section we consider the effect of the confining
potential on the energy shifts of the nuclear sublevels. In
the presence of a lattice potential of depth UT , formed by
a laser linearly polarized along the axis of quantization
defined by an external magnetic field B, the level shift of
a clock state (h∆_{g/e}) from its bare energy is given by

∆e = −mF (gI + δg) µ0 B − [κS_e + κV_e ξ mF + κT_e (3mF² − F(F+1))] UT/ER,
∆g = −mF gI µ0 B − [κS_g + κV_g ξ mF + κT_g (3mF² − F(F+1))] UT/ER. (9)
Here, κS , κV , and κT are shift coefficients proportional
to the scalar, vector (or axial), and tensor polarizabil-
ities, and subscripts e and g refer to the excited (3P0)
and ground (1S0) states respectively. ER is the energy
of a lattice photon recoil and UT /ER characterizes the
lattice intensity. The vector (∝ mF) and tensor (∝ mF²) light shift terms arise solely from the nuclear structure
and depend on the orientation of the light polarization
and the bias magnetic field. The tensor shift coefficient
includes a geometric scaling factor which varies with the
relative angle φ of the laser polarization axis and the axis
of quantization, as 3cos²φ − 1. The vector shift, which
can be described as a pseudo-magnetic field along the
propagation axis of the trapping laser, depends on the
trapping geometry in two ways. First, the size of the
effect is scaled by the degree of elliptical polarization ξ,
where ξ = 0 (ξ = ±1) represents perfect linear (circular)
polarization. Second, for the situation described here,
the effect of the vector light shift is expected to be orders
of magnitude smaller than the Zeeman effect, justifying
the use of the bias magnetic field direction as the quan-
tization axis for all of the mF terms in Eq. 9. Hence
the shift coefficient depends on the relative angle be-
tween the pseudo-magnetic and the bias magnetic fields,
vanishing in the case of orthogonal orientation [52]. A
more general description of the tensor and vector effects
in alkaline-earth systems for the case of arbitrary ellipti-
cal polarization can be found in Ref. [10]. Calculations of
the scalar, vector, and tensor shift coefficients have been
performed elsewhere for Sr, Yb, and Hg [9, 10, 11, 52]
and will not be discussed here. Hyperpolarizability ef-
fects (∝ UT²) [9, 10, 11, 12] are ignored in Eq. 9 as they are negligible in 87Sr at the level of 10⁻¹⁷ for the range of
lattice intensities used in current experiments [12]. The
second order Zeeman term has been omitted but is also
present.
Using Eq. 9 we can write the frequency of a π-
transition (∆mF = 0) from a ground state mF as
ν^π_mF = νc − ∆κS (UT/ER) − [∆κV ξ mF + ∆κT (3mF² − F(F+1))] (UT/ER) − δg mF µ0 B, (10)
where the shift coefficients due to the differential polar-
izabilities are represented as ∆κ, and νc is the bare clock
frequency. The basic principle of the lattice clock tech-
nique is to tune the lattice wavelength (and hence the
polarizabilities) such that the intensity-dependent fre-
quency shift terms are reduced to zero. Due to the mF -
dependence of the third term of Eq. 10, the Stark shifts
cannot be completely compensated for all of the sublevels
simultaneously. Or equivalently, the magic wavelength
will be different depending on the sublevel used. The
significance of this effect depends on the magnitude of
the tensor and vector terms. Fortunately, in the case of
the 1S0-
3P0 transition the clock states are nearly scalar,
and hence these effects are expected to be quite small.
While theoretical estimates for the polarizabilities have
been made, experimental measurements are unavailable
for the vector and tensor terms. The frequencies of σ±
(∆mF = ±1) transitions from a ground mF state are
similar to the π-transitions, given by
ν^σ±_mF = νc − ∆κS (UT/ER) − {[κV_e (mF ± 1) − κV_g mF] ξ + κT_e [3(mF ± 1)² − F(F+1)] − κT_g [3mF² − F(F+1)]} (UT/ER) − [±gI + δg(mF ± 1)] µ0 B. (11)
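A minimal symbolic sketch using Eq. 10 (with U standing for UT/ER) shows the ±mF averaging scheme described in Sect. II: the terms odd in mF (the linear Zeeman shift and the vector light shift) cancel in the average of the +mF and −mF π transitions, leaving the scalar and tensor Stark terms.

import sympy as sp

mF, xi, U, dkS, dkV, dkT, F, dg, mu0, B, nuc = sp.symbols(
    'm_F xi U dkappa_S dkappa_V dkappa_T F delta_g mu_0 B nu_c')

def nu_pi(m):
    # pi-transition frequency from sublevel m, per Eq. 10
    return nuc - dkS*U - (dkV*xi*m + dkT*(3*m**2 - F*(F + 1)))*U - dg*m*mu0*B

avg = sp.expand((nu_pi(mF) + nu_pi(-mF)) / 2)
print(avg.has(dkV), avg.has(dg))   # False False: the mF-odd terms have cancelled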
IV. EXPERIMENTAL DETERMINATION OF
FIELD SENSITIVITIES
To explore the magnitude of the various mF-dependent
shifts in Eq. 10, a differential measurement scheme can
be used to eliminate the large shifts common to all levels.
Using resolved sublevels one can extract mF sensitivities
by measuring the splitting of neighboring states. This is
the approach taken here. A diagram of our spectroscopic
setup is shown in Fig. 3(a). 87Sr atoms are captured
from a thermal beam into a magneto-optical trap (MOT),
based on the 1S0-
1P1 cycling transition. The atoms are
then transferred to a second stage MOT for narrow line
cooling using a dual frequency technique [26]. Full de-
tails of the cooling and trapping system used in this
work are discussed elsewhere [5, 28]. During the cooling
process, a vertical one-dimensional lattice is overlapped
FIG. 3: (color online) (a) Schematic of the experimental ap-
paratus used here. Atoms are confined in a nearly vertical
optical lattice formed by a retro-reflected 813 nm laser. A
698 nm probe laser is co-aligned with the lattice. The probe
polarization EP can be varied by an angle θ relative to that of
the linear lattice polarization EL. A pair of Helmholtz coils
(blue) is used to apply a magnetic field along the lattice po-
larization axis. (b) Nuclear structure of the 1S0 and
3P0 clock
states. The large nuclear spin (I = 9/2) results in 28 total
transitions, and the labels π, σ+, and σ− represent transi-
tions where mF changes by 0, +1, and −1 respectively. (c)
Observation of the clock transition without a bias magnetic
field. The 3P0 population (in arbitrary units) is plotted (blue
dots) versus the probe laser frequency for θ = 0, and a fit to
a sinc-squared lineshape yields a Fourier-limited linewidth of
10.7(3) Hz. Linewidths as narrow as 5 Hz have been observed
under similar conditions and when the probe time is extended
to 500 ms.
with the atom cloud. We typically load ∼104 atoms into
the lattice at a temperature of ∼1.5µK. The lattice is
operated at the Stark cancelation wavelength [6, 12] of
813.4280(5) nm with a trap depth of U0 = 35ER. A
Helmholtz coil pair provides a field along the lattice po-
larization axis for resolved sub-level spectroscopy. Two
other coil pairs are used along the other axes to zero the
orthogonal fields. The spectroscopy sequence for the 1S0-
3P0 clock transition begins with an 80 ms Rabi pulse from
a highly stabilized diode laser [53] that is co-propagated
with the lattice laser. The polarization of the probe laser
is linear at an angle θ relative to that of the lattice. A
shelved detection scheme is used, where the ground state
population is measured using the 1S0-
1P1 transition. The
3P0 population is then measured by pumping the atoms
through intermediate states using 3P0-
3S1, and
the natural decay of 3P1 , before applying a second
-300 -200 -100 0 100 200 300
+1/2-1/2
Laser Detuning (Hz)
FIG. 4: (color online) Observation of the 1S0-
3P0 π-
transitions (θ = 0) in the presence of a 0.58 G magnetic field.
Data is shown in grey and a fit to the eight observable line-
shapes is shown as a blue curve. The peaks are labeled by
the ground state mF -sublevel of the transition. The relative
transition amplitudes for the different sublevels are strongly
influenced by the Clebsch-Gordan coefficients. Here, transi-
tion linewidths of 10 Hz are used. Spectra as narrow as 1.8
Hz can be achieved under similar conditions if the probe time
is extended to 500 ms.
1P1 pulse. The 461 nm pulse is destructive, so for each
frequency step of the probe laser the ∼800 ms loading
and cooling cycle is repeated.
When π polarization is used for spectroscopy (θ = 0),
the large nuclear spin provides ten possible transitions,
as shown schematically in Fig. 3(b). Figure 3(c) shows
a spectroscopic measurement of these states in the ab-
sence of a bias magnetic field. The suppression of mo-
tional effects provided by the lattice confinement allows
observation of extremely narrow lines [3, 4, 19], in this
case having Fourier-limited full width at half maximum
(FWHM) of ∼10 Hz (quality factor of 4 × 1013). In our
current apparatus the linewidth limitation is 5 Hz with
degenerate sublevels and 1.8 Hz when the degeneracy is
removed [3]. The high spectral resolution allows for the
study of nuclear spin effects at small bias fields, as the
ten sublevels can easily be resolved with a few hundred
mG. An example of this is shown in Fig. 4, where the ten
transitions are observed in the presence of a 0.58 G bias
field. This is important for achieving a high accuracy
measurement of δg as the contribution from magnetic-
field-induced state mixing is negligible. To extract the
desired shift coefficients we note that for the π transi-
tions we have a frequency gap between neighboring lines
fπ,mF = νπmF − νπmF −1
= −δgµ0B − ∆κ
3(2mF − 1)
From Eq. 12, we see that by measuring the differences in
-500 -250 0 250 500
-9/2 ( +)
-7/2 (
Laser Detuning (Hz)
+7/2 ( +)
+9/2 ( )
FIG. 5: (color online) Observation of the 18 σ transitions
when the probe laser polarization is orthogonal to that of the
lattice (θ = π
). Here, a field of 0.69 G is used. The spectro-
scopic data is shown in grey and a fit to the data is shown
as a blue curve. Peak labels give the ground state sublevel of
the transition, as well as the excitation polarization.
frequency of two spectroscopic features, the three terms
of interest (δg, ∆κV , and ∆κT ) can be determined inde-
pendently. The differential g factor can be determined
by varying the magnetic field. The contribution of the
last two terms can be extracted by varying the inten-
sity of the standing wave trap, and can be independently
determined since only the tensor shift depends on mF .
While the π transitions allow a simple determination
of δg, the measurement requires a careful calibration of
the magnetic field and a precise control of the probe
laser frequency over the ∼500 seconds required to pro-
duce a scan such as in Fig. 4. Any linear laser drift
will appear in the form of a smaller or larger δg, de-
pending on the laser scan direction. Furthermore, the
measurement can not be used to determine the sign of
δg as an opposite sign would yield an identical spectral
pattern. In an alternative measurement scheme, we in-
stead polarize the probe laser perpendicular to the lattice
polarization (θ = π
) to excite both σ+ and σ− tran-
sitions. In this configuration, 18 spectral features are
observed and easily identified (Fig. 5). Ignoring small
shifts due to the lattice potential, δg is given by extract-
ing the frequency splitting between adjacent transitions
of a given polarization (all σ+ or all σ− transitions) as
fσ±,mF =νσ±mF
mF −1
=−δgµ0B . If we also measure the
frequency difference between σ+ and σ− transitions from
the same sublevel, fd,mF =νσ+mF
=−2(gI + δg)µ0B,
we find that the differential g-factor can be determined
from the ratio of these frequencies as
fd,mF
σ±,mF
. (13)
-600 -300 0 300 600
Laser Detuning (Hz)
FIG. 6: (color online) Calculation of the 18 σ transition fre-
quencies in the presence of a 1 G bias field, including the influ-
ence of Clebsch-Gordan coefficients. The green (red) curves
show the σ+ (σ−) transitions. (a) Spectral pattern for g-
factors gIµ0 = −185 Hz/G and δgµ0 = −109 Hz/G. (b) Same
pattern as in (a) but with δgµ0 = +109 Hz/G. The qualita-
tive difference in the relative positions of the transitions allows
determination of the sign of δg compared to that of gI .
In this case, prior knowledge of the magnetic field is not
required for the evaluation, nor is a series of measure-
ment at different fields, as δg is instead directly deter-
mined from the line splitting and the known 1S0 g factor
gI . The field calibration and the δg measurement are in
fact done simultaneously, making the method immune to
some systematics which could mimic a false field, such as
linear laser drift during a spectroscopic scan or slow mag-
netic field variations. Using the σ transitions also elim-
inates the sign ambiguity which persists when using the
π transitions for measuring δg. While we can not extract
the absolute sign, the recovered spectrum is sensitive to
the relative sign between gI and δg. This is shown explic-
itly in Fig. 6 where the positions of the transitions have
been calculated in the presence of a ∼1 G magnetic field.
Figure 6(a) shows the spectrum when the signs of gI and
δg are the same while in Fig. 6(b) the signs are oppo-
site. The two plots show a qualitative difference between
the two possible cases. Comparing Fig. 5 and Fig. 6 it is
obvious that the hyperfine interaction increases the mag-
nitude of the 3P0 g-factor (δg has the same sign as gI).
We state this point explicitly because of recent inconsis-
tencies in theoretical estimates of the relative sign of δg
and gI in the
87Sr literature [7, 8].
To extract the magnitude of δg, data such as in Fig. 5
are fit with eighteen Lorentzian lines, and the relevant
splitting frequencies fd,mF and fσ± are extracted. Due
to the large number of spectral features, each experimen-
tal spectrum yields 16 measurements of δg. A total of
31 full spectra was taken, resulting in an average value
of δgµ0 = −108.4(4) Hz/G where the uncertainty is the
0.0 0.5 1.0 1.5 2.0
Lattice Depth (U/U
FIG. 7: (color online) Summary of δg-measurements for dif-
ferent lattice intensities. Each data point (and uncertainty)
represents the δg value extracted from a full σ± spectrum
such as in Fig. 5. Linear extrapolation (red line) to zero lat-
tice intensity yields a value −108.4(1) Hz/G.
standard deviation of the measured value. To check for
sources of systematic error, the magnetic field was varied
to confirm the field independence of the measurement.
We also varied the clock laser intensity by an order of
magnitude to check for Stark and line pulling effects. It
is also necessary to consider potential measurement er-
rors due to the optical lattice since in general the splitting
frequencies fd,mF and fσ± will depend on the vector and
tensor light shifts. For fixed fields, the vector shift is in-
distinguishable from the linear Zeeman shift (see Eqs. 10-
12) and can lead to errors in calibrating the field for a δg
measurement. In this work, a high quality linear polar-
izer (10−4) is used which would in principle eliminate the
vector shift. The nearly orthogonal orientation should
further reduce the shift. However, any birefringence of
the vacuum windows or misalignment between the lattice
polarization axis and the magnetic field axis can lead to
a non-zero value of the vector shift. To measure this ef-
fect in our system, we varied the trapping depth over a
range of ∼ (0.6 − 1.7)U0 and extrapolated δg to zero in-
tensity, as shown in Fig. 7. Note that this measurement
also checks for possible errors due to scalar and tensor
polarizabilites as their effects also scale linearly with the
trap intensity. We found that the δg-measurement was
affected by the lattice potential by less then 0.1%, well
below the uncertainty quoted above.
Unlike the vector shift, the tensor contribution to the
sublevel splitting is distinguishable from the magnetic
contribution even for fixed fields. Adjacent σ transitions
can be used to measure ∆κT and κTe due to the m
F de-
pendence of the tensor shift. An appropriate choice of
transition comparisons results in a measurement of the
tensor shift without any contributions from magnetic or
vector terms. To enhance the sensitivity of our measure-
0 5 10 15 20 25
Measurement
UT=1.7U0 UT=0.85U0 UT=1.3U0
FIG. 8: (color online) Measurement of the tensor shift coef-
ficients ∆κT (blue triangles), and κTe (green circles), using σ
spectra and Eq. 14. The measured coefficients show no sta-
tistically significant trap depth dependence while varying the
depth from 0.85–1.7 U0.
ment we focus mainly on the transitions originating from
states with large mF ; for example, we find that
fσ+,mF=7/2 − fσ+,mF =−7/2
e = −
fd,mF =7/2 − fd,mF =−7/2
while similar combinations can be used to isolate the dif-
ferential tensor shift from the σ− data as well as the
tensor shift coefficient of the 1S0 state. From the σ split-
ting data we find ∆κT = 0.03(8) Hz/U0 and |κ
e |=0.02(4)
Hz/U0. The data for these measurements is shown in
Fig. 8. Similarly, we extracted the tensor shift coeffi-
cient from π spectra, exploiting the mF -dependent term
in Eq. 12, yielding ∆κT = 0.02(7) Hz/U0. The measure-
ments here are consistent with zero and were not found
to depend on the trapping depth used for a range of 0.85–
1.7 U0, and hence are interpreted as conservative upper
limits to the shift coefficients. The error bars represent
the standard deviation of many measurements, with the
scatter in the data due mainly to laser frequency noise
and slight under sampling of the peaks. It is worth noting
that the tensor shift of the clock transition is expected to
be dominated by the 3P0 shift, and therefore, the limit
on κTe can be used as an additional estimate for the up-
per limit on ∆κT . Improvements on these limits can be
made by going to larger trap intensities to enhance sen-
sitivity, as well as by directly stabilizing the clock laser
to components of interest for improved averaging.
Table I summarizes the measured sensitivities to mag-
netic fields and the lattice potential. The Stark shift
coefficients for linear polarization at 813.4280(5) nm are
given in units of Hz/(UT /ER). For completeness, a recent
measurement of the second order Zeeman shift using 88Sr
has been included [45], as well as the measured shift coef-
ficient ∆γ for the hyperpolarizability [12] and the upper
TABLE I: Measured Field Sensitivities for 87Sr
Sensitivity Value Units Ref.
B /mFB -108.4(4) Hz/G This work
2 -0.233(5) Hz/G2 [45]a
∆κT 6(20) ×10−4 Hz/(UT /ER) This work
∆κT 9(23)×10−4 Hz/(UT /ER) This work
κTe 5(10)×10
−4 Hz/(UT /ER) This work
κ -3(7)×10−3 Hz/(UT /ER) [6]
∆γ 7(6)×10−6 Hz/(UT /ER)
2 [12]d
a Measured for 88Sr
b Measured with π spectra
c Measured with σ± spectra
d Measured with degenerate sublevels
limit for the overall linear lattice shift coefficient κ from
our recent clock measurement [6]. While we were able
to confirm that the vector shift effect is small and con-
sistent with zero in our system, we do not report a limit
for the vector shift coefficient ∆κV due to uncertainty
in the lattice polarization purity and orientation relative
to the quantization axis. In future measurements, use of
circular trap polarization can enhance the measurement
precision of ∆κV by at least two orders of magnitude.
Although only upper limits are reported here, the re-
sult can be used to estimate accuracy and linewidth lim-
itations for lattice clocks. For example, in the absence
of magnetic fields, the tensor shift can cause line broad-
ening of the transition for unpolarized samples. Given
the transition amplitudes in Fig. 4, the upper limit for
line broadening, derived from the tensor shift coefficients
discussed above, is 5 Hz at U0. The tensor shift also
results in a different magic wavelength for different mF
sublevels, which is constrained here to the few picometer
level.
V. COMPARISON OF THE δg MEASUREMENT
WITH THEORY AND
3P0 LIFETIME ESTIMATE
The precise measurement of δg provides an opportu-
nity to compare various atomic hyperfine interaction the-
ories to the experiment. To calculate the mixing param-
eters α0 and β0 (defined in Eq. 16 of the Appendix),
we first try the simplest approach using the standard
Breit-Wills (BW) theory [23, 24] to relate the mixing
parameters to the measured triplet hyperfine splitting
(hfs). The parameters α (0.9996) and β (−0.0286(3))
are calculated from recent determinations of the 3P1 [32]
and 1P1 [54] lifetimes. The relevant singlet and triplet
single-electron hyperfine coefficients are taken from Ref.
[55]. From this calculation we find α0 = 2.37(1) × 10
β0 = −4.12(1) × 10
−6, and γ0 = 4.72(1) × 10
−6, resulting
in δgµ0 = −109.1(1) Hz/G . Using the mixing values in
conjunction with Eq. 4 we find that the 3P0 lifetime is
152(2) s. The agreement with the measured g-factor is
excellent, however the BW-theory is known to have prob-
lems predicting the 1P1 characteristics based on those of
the triplet states. In this case, the BW-theory frame-
work predicts a magnetic dipole A coefficient for the 1P1
state of -32.7(2) MHz, whereas the experimental value is
-3.4(4) MHz [55]. Since δg is determined mainly by the
properties of the 3P1 state, it is not surprising that the
theoretical and experimental values are in good agree-
ment. Conversely, the lifetime of the 3P0 state depends
nearly equally on the 1P1 and
3P1 characteristics, so the
lifetime prediction deserves further investigation.
A modified BW (MBW) theory [44, 55, 56] was at-
tempted to incorporate the singlet data and eliminate
such discrepancies. In this case 1P1,
3P1, and
3P2 hfs are
all used in the calculation, and two scaling factors are
introduced to account for differences between singlet and
triplet radial wavefunctions when determining the HFI
mixing coefficients (note that γ0 is not affected by this
modification). This method has been shown to be suc-
cessful in the case of heavier systems such as neutral Hg
[44]. We find α0 = 2.56(1)× 10
−4 and β0 = −5.5(1)× 10
resulting in δgµ0 = −117.9(5) Hz/G and τ
3P0 = 110(1) s.
Here, the agreement with experiment is fair, but the un-
certainties in experimental parameters used for the the-
ory are too small to explain the discrepancy.
Alternatively, we note that in Eq. 6, δg depends
strongly on α0α and only weakly (< 1%) on β0β, such
that our measurement can be used to tightly constrain
α0 = 2.35(1)×10
−4, and then use only the triplet hfs data
to calculate β0 in the MBW theory framework. In this
way we find β0 = −3.2(1) × 10
−6, yielding τ
3P0 = 182(5)s.
The resulting 1P1 hfs A coefficient is −15.9(5) MHz,
which is an improvement compared to the standard BW
calculation. The inability of the BW and MBW theory to
simultaneously predict the singlet and triplet properties
seems to suggest that the theory is inadequate for 87Sr.
A second possibility is a measurement error of some of
the hfs coefficients, or the ground state g-factor. The
triplet hfs is well resolved and has been confirmed with
high accuracy in a number of measurements. An error in
the ground state g-factor measurement at the 10% level
is unlikely, but it can be tested in future measurements
TABLE II: Theoretical estimates of δg and τ
3P0 for 87Sr
Values used in Calculation
α = 0.9996 β = −0.0286(3)
Calc. α0 β0 τ
3P0 δgµ0 A
×104 ×106 (s) mF (Hz/G) (MHz)
BW 2.37(1) -4.12(1) 152(2) -109.1(1) -32.7(2)
MBW I 2.56(1) -5.5(1) 110(1) -117.9(5) -3.4(4)a
MBW II 2.35(1) -3.2(1) 182(5) -108.4(4)b -15.9(5)
Ref [40] — — 132 — —
Ref [41, 59] 2.9(3) -4.7(7) 110(30) -130(15) c —
Ref [8, 9] — — 159 106d —
a Experimental value [55]
b Experimental value from this work
c Calculated using Eq. 6
d Sign inferred from Figure 1 in Ref. [8]
by calibrating the field in an independent way so that
both gI and δg can be measured. On the other hand,
the 1P1 hfs measurement has only been performed once
using level crossing techniques, and is complicated by the
fact that the structure is not resolved, and that the 88Sr
transition dominates the spectrum for naturally abun-
dant samples. Present 87Sr cooling experiments could be
used to provide an improved measurement of the 1P1 data
to check whether this is the origin of the discrepancy.
Although one can presumably predict the lifetime with
a few percent accuracy (based on uncertainties in the
experimental data), the large model-dependent spread
in values introduces significant additional uncertainty.
Based on the calculations above (and many other similar
ones) and our experimental data, the predicted lifetime is
145(40) s. A direct measurement of the natural lifetime
would be ideal, as has been done in similar studies with
trapped ion systems such as In+ [39] and Al+ [57] or neu-
tral atoms where the lifetime is shorter, but for Sr this
type of experiment is difficult due to trap lifetime limi-
tations, and the measurement accuracy would be limited
by blackbody quenching of the 3P0 state [58].
Table II summarizes the calculations of δg and τ
discussed here including the HFI mixing parameters α0
and β0. Other recent calculations based on the BW the-
ory [8, 9], ab initio relativistic many body calculations
[40], and an effective core calculation [41] are given for
comparison, with error bars shown when available.
VI. IMPLICATIONS FOR THE
SR LATTICE
CLOCK
In the previous sections, the magnitude of relevant
magnetic and Stark shifts has been discussed. Briefly, we
will discuss straightforward methods to reduce or elim-
inate the effects of the field sensitivities. To eliminate
linear Zeeman and vector light shifts the obvious path is
to use resolved sublevels and average out the effects by al-
ternating between measurements of levels with the same
|mF |. Figure 9 shows an example of a spin-polarized mea-
surement using the mF = ±9/2 states for cancelation of
the Zeeman and vector shifts. To polarize the sample,
we optically pump the atoms using a weak beam reso-
nant with the 1S0-
3P1 (F = 7/2) transition. The beam
is co-aligned with the lattice and clock laser and linearly
polarized along the lattice polarization axis (θ = 0), re-
sulting in optical pumping to the stretched (mF = 9/2)
states. Spectroscopy with (blue) and without (red) the
polarizing step shows the efficiency of the optical pump-
ing as the population in the stretched states is dramati-
cally increased while excitations from other sublevels are
not visible. Alternate schemes have been demonstrated
elsewhere [8, 26] where the population is pumped into a
single mF = ±9/2 state using the
3P1 (F = 9/2)
transition. In our system, we have found the method
shown here to be more efficient in terms of atom number
in the final state and state purity. The highly efficient
-150 -100 -50 0 50 100 150
Laser Detuning (Hz)
FIG. 9: (color online) The effect of optical pumping via the
3P1 (F = 7/2) state is shown via direct spectroscopy with θ =
0. The red data shows the spectrum without the polarizing
light for a field of 0.27 G. With the polarizing step added
to the spectroscopy sequence the blue spectum is observed.
Even with the loss of ∼ 15% of the total atom number due to
the polarizing laser, the signal size of the mF = ±9/2 states
is increased by more than a factor of 4.
optical pumping and high spectral resolution should al-
low clock operation with a bias field of less than 300 mG
for a 10 Hz feature while keeping line pulling effects due
to the presence of the other sublevels below 10−17. The
corresponding second order Zeeman shift for such a field
is only ∼21 mHz, and hence knowledge of the magnetic
field at the 10% level is sufficient to control the effect
below 10−17. With the high accuracy δg-measurement
reported here, real time magnetic field calibration at the
level of a few percent is trivial. For spin-polarized sam-
ples, a new magic wavelength can be determined for the
mF -pair, and the effect of the tensor shift will only be to
modify the cancelation wavelength by at most a few pi-
cometers if a different set of sublevels are employed. With
spin-polarized samples, the sensitivity to both magnetic
and optical fields (including hyperpolarizability effects)
should not prevent the clock accuracy from reaching be-
low 10−17.
Initial concerns that nuclear spin effects would limit
the obtainable accuracy of a lattice clock have prompted
a number of recent proposals to use bosonic isotopes
in combination with external field induced state mixing
[17, 18, 20, 21, 22] to replace the mixing provided natu-
rally by the nuclear spin. In these schemes, however, the
simplicity of a hyperfine-free system comes at the cost
of additional accuracy concerns as the mixing fields also
shift the clock states. The magnitudes of the shifts de-
pend on the species, mixing mechanism, and achievable
spectral resolution in a given system. As an example,
we discuss the magnetic field induced mixing scheme [20]
which was the first to be experimentally demonstrated
for Yb [19] and Sr [45]. For a 10 Hz 88Sr resonance (i.e.
the linewidth used in this work), the required magnetic
and optical fields (set to minimize the total frequency
shift) result in a second order Zeeman shift of −30 Hz
and an ac Stark shift from the probe laser of −36 Hz.
For the same transition width, using spin-polarized 87Sr,
the second order Zeeman shift is less than −20 mHz for
the situation in Fig. 9, and the ac Stark shift is less than
1 mHz. Although the nuclear-spin-induced case requires
a short spin-polarizing stage and averaging between two
sublevels, this is preferable to the bosonic isotope, where
the mixing fields must be calibrated and monitored at the
10−5 level to reach below 10−17. Other practical concerns
may make the external mixing schemes favorable, if for
example isotopes with nuclear spin are not readily avail-
able for the species of interest. In a lattice clock with
atom-shot noise limited performance, the stability could
be improved, at the cost of accuracy, by switching to a
bosonic isotope with larger natural abundance.
In conclusion we have presented a detailed experimen-
tal and theoretical study of the nuclear spin effects in op-
tical lattice clocks. A perturbative approach for describ-
ing the state mixing and magnetic sensitivity of the clock
states was given for a general alkaline-earth(-like) system,
with 87Sr used as an example. Relevant Stark shifts from
the optical lattice were also discussed. We described in
detail our sign-sensitive measurement of the differential
g-factor of the 1S0-
3P0 clock transition in
87Sr, yield-
ing µ0δg = −108.4(4)mF Hz/G, as well as upper limit for
the differential and exited state tensor shift coefficients
∆κT = 0.02 Hz/(UT /ER) and κ
e = 0.01 Hz/(UT /ER).
We have demonstrated a polarizing scheme which should
allow control of the nuclear spin related effects in the 87Sr
lattice clock to well below 10−17.
We thank T. Ido for help during the early stages of
the g-factor measurement, and G. K. Campbell and A.
Pe’er for careful reading of the manuscript. This work
was supported by ONR, NIST, and NSF. Andrew Lud-
low acknowledges support from NSF-IGERT through the
OSEP program at the University of Colorado.
[1] S. A. Diddams et al., Science 306, 1318 (2004).
[2] M. Takamoto et al., Nature 435, 321 (2005).
[3] M. M. Boyd et al., Science 314, 1430 (2006).
[4] C. W. Hoyt et al., in Proceedings of the 20th European
Frequency and Time Forum, Braunschweig, Germany,
March 27-30, p. 324-328 (2006).
[5] A. D. Ludlow et al., Phys. Rev. Lett. 96, 033003 (2006).
[6] M. M. Boyd et al., Phys. Rev. Lett. 98, 083002 (2007).
[7] R. Le Targat et al., Phys. Rev. Lett. 97, 130801 (2006).
[8] M. Takamoto et al., J. Phys. Soc. Japan 75, 10 (2006).
[9] H. Katori et al., Phys. Rev. Lett. 91, 173005 (2003).
[10] V. Ovsiannikov et al., Quantum Electron. 36, 3 (2006).
[11] S. G. Porsev et al., Phys. Rev. A 69, 021403(R) (2004).
[12] A. Brusch et al., Phys. Rev. Lett. 96, 103003 (2006).
[13] W. H. Oskay et al., Phys. Rev. Lett. 97, 020801 (2006).
[14] L. Childress et al., Science 314, 281 (2006).
[15] D. Hayes, P. S. Julienne, and I. H. Deutsch, Phys. Rev.
Lett. 98, 070501 (2007)
[16] I. Reichenbach and I. Deutsch, quant-ph/0702120.
[17] R. Santra et al., Phys. Rev. Lett. 94, 173002 (2005).
[18] T. Hong et al., Phys. Rev. Lett. 94, 050801 (2005).
[19] Z. W. Barber et al., Phys. Rev. Lett. 96, 083002 (2006).
[20] A. V. Taichenachev et al., Phys. Rev. Lett. 96, 083001
(2006).
[21] T. Zanon-Willette et al., Phys. Rev. Lett. 97, 233001
(2006).
[22] V. D. Ovsiannikov et al., Phys. Rev. A 75, 020501 (2007).
[23] G. Breit and L. A. Wills, Phys. Rev. 44, 470 (1933).
[24] A. Lurio, M. Mandel, and R. Novick, Phys. Rev. 126,
1758 (1962).
[25] K. R. Vogel et al., IEEE Trans. on Inst. and Meas. 48,
618 (1999).
[26] T. Mukaiyama et al., Phys. Rev. Lett. 90, 113002 (2003).
[27] T. H. Loftus et al., Phys. Rev. Lett. 93, 073003 (2004).
[28] T. H. Loftus et al., Phys. Rev. A 70, 063413 (2004).
[29] N. Poli et al., Phys. Rev. A 71, 061403 (2005).
[30] E. A. Curtis, C. W. Oates, and L. Hollberg, Phys. Rev.
A 64, 031403 (2001).
[31] T. Binnewies et al., Phys. Rev. Lett. 87, 123002 (2001)
[32] T. Zelevinsky et al., Phys. Rev. Lett. 96, 203201 (2006).
[33] S. Tojo et al., Phys. Rev. Lett. 96, 153201 (2006).
[34] R. Ciury lo et al., Phys. Rev. A 70, 062710 (2004).
[35] C. Degenhardt et al., Phys. Rev. A 72, 062111 (2005).
[36] G. Wilpers et al., Appl. Phys. B 85, 31 (2006).
[37] T. Ido et al., Phys. Rev. Lett. 94, 153001 (2005).
[38] M. V. Romalis et al., Phys. Rev. Lett. 86, 2505 (2001).
[39] T. Becker et al., Phys. Rev. A 63, 051802 (2001).
[40] S. G. Porsev and A. Derevianko, Phys. Rev. A 69, 042506
(2004).
[41] R. Santra et al., Phys. Rev. A 69, 042510 (2004).
[42] L. Olschewski, Z. Phys. 249, 205 (1972).
[43] H. Kopfermann, Nuclear Moments, New York, (1963).
[44] B. Lahaye and J. Margerie, J. Physique 36, 943 (1975).
[45] X. Baillard et al., physics/0703148.
[46] S. Bize et al., J. Phys. B 38, S449 (2005).
[47] T. P. Heavner et al., Metrologia 42, 411 (2005).
[48] H. S. Margolis et al., Science 306, 1355 (2004).
[49] P. Dubé et al., Phys. Rev. Lett. 95, 033001 (2005).
[50] T. Schneider et al., Phys. Rev. Lett. 94, 230801 (2005).
[51] P. J. Blythe et al., Phys. Rev. A 67, 020501 (2003).
[52] M. V. Romalis and E. N. Fortson, Phys. Rev. A 59, 4547
(1999).
[53] A. D. Ludlow et al., Opt. Lett. 32, 641 (2007).
[54] P. Mickelson et al., Phys. Rev. Lett. 95, 223002 (2005).
[55] H. J. Kluge and H. Sauter, Z. Physik 270, 295 (1974).
[56] A. Lurio, Phys. Rev. 142, 46 (1966).
[57] T. Rosenband et al., Phys. Rev. Lett. 98, 220801 (2007).
[58] X. Xu et al., J. Opt. Soc. Am. B 20, 5 (2003).
[59] Unpublished HFI coefficients extracted from Ref. [41], R.
Santra private communication.
[60] S. G. Porsev and A. Derevianko, Phys. Rev. A 74, 020502
(2006).
[61] G. Breit and I. I. Rabi, Phys. Rev. 38, 2082 (1932).
[62] S. M. Heider and G. O. Brink, Phys. Rev. A. 16, 1371
(1977).
[63] G. zu Putlitz, Z. Phys. 175, 543 (1963).
http://arxiv.org/abs/quant-ph/0702120
http://arxiv.org/abs/physics/0703148
VII. APPENDIX
The appendix is organized as follows, in the first sec-
tion we briefly describe calculation of the mixing coeffi-
cients needed to estimate the effects discussed in the main
text. We also include relevant Zeeman matrix elements.
In the second section we describe a perturbative treat-
ment of the magnetic field on the hyperfine-mixed 3P0
state, resulting in a Breit-Rabi like formula for the clock
transition. In the final section we solve the more general
case and treat the magnetic field and hyperfine interac-
tion simultaneously, which is necessary to calculate the
sensitivity of the 1P1,
3P1 and
3P2 states.
A. State mixing coefficients and Zeeman elements
The intermediate coupling coefficients α and β are typ-
ically calculated from measured lifetimes and transition
frequencies of the 1P1 and
3P1 states and a normalization
constraint, resulting in
= 1. (15)
The HFI mixing coefficients α0, β0, and γ0 are due to
the interaction between the pure 3P0 state and the spin-
orbit mixed states in Eq. 1 having the same total angular
momentum F . They are defined as
〈3P1, F = I |HA|
3P 00 , F = I〉
ν3P0 − ν3P1
〈1P1, F = I |HA|
3P 00 , F = I〉
ν3P0 − ν1P1
〈3P2, F = I |HQ|
3P 00 , F = I〉
ν3P0 − ν3P2
Where HA and HQ are the magnetic dipole and electric
quadrupole contributions of the hyperfine Hamiltonian.
A standard technique for calculating the matrix elements
is to relate unknown radial contributions of the wavefunc-
tions to the measured hyperfine magnetic dipole (A) and
electric quadrupole (Q) coefficients. Calculation of the
matrix elements using BW theory [23, 24, 39, 44, 55] can
be performed using the measured hyperfine splitting of
the triplet state along with matrix elements provided in
[24]. Inclusion of the 1P1 data (and an accurate predic-
tion of β0) requires a modified BW theory [44, 55, 56]
where the relation between the measured hyperfine split-
ting and the radial components is more complex but man-
ageable if the splitting data for all of the states in the
nsnp manifold are available. A thorough discussion of
the two theories is provided in Refs. [44, 55].
Zeeman matrix elements for singlet and triplet states in
the nsnp configuration have been calculated in Ref. [24].
Table III summarizes those elements relevant to the work
here, where the results have been simplified by using the
electronic quantum numbers for the alkaline-earth case,
but leaving the nuclear spin quantum number general
for simple application to different species. Note that the
results include the application of our sign convention in
Eq. 5 which differs from that in Ref. [24].
B. Magnetic field as a perturbation
To determine the magnetic sensitivity of the 3P0 state
due to the hyperfine interaction with the 3P1 and
states, we first use a perturbative approach to add the
Zeeman interaction as a correction to the |3P0〉 state in
Eq. 3. The resulting matrix elements depend on spin-
orbit and hyperfine mixing coefficients α, β, α0, β0, and
γ0. For the
3P0 state, diagonal elements to first order in
α0 and β0 are relevant, while for
1P1 and
3P1, the contri-
bution of the hyperfine mixing to the diagonal elements
can be ignored. All off-diagonal terms of order β2, α0α,
α0β, α
, and smaller can be neglected. Due to the selec-
tion rules for pure (LS) states, the only contributions of
the 3P2 hyperfine mixing are of order α0γ0, γ
, and β0γ0.
Thus the state can be ignored and the Zeeman interac-
tion matrixMz between atomic P states can be described
in the
|1P1, F,mF 〉, |
3P0, F,mF 〉, |
3P1, F,mF 〉
basis as
ν1P1 M
ν3P0 M
, (17)
where we define diagonal elements as
ν3P0 = ν
0 |HZ |
+ 2(αα0 − ββ0)〈
1 , F = I |HZ |
ν3P1 = ν
1 , F
|HZ |
1 , F
1 , F
|HZ |
1 , F
ν1P1 = ν
1 , F
|HZ |
1 , F
1 , F
|HZ |
1 , F
Off diagonal elements are given by
|〈3P 01 , F
′|HZ |3P 00 , F 〉|
|〈3P 00 , F |HZ |
3P 01 , F
′〉|2.
TABLE III: Zeeman Matrix Elements for Pure (2S+1L0J ) States
Relevant Elements for the
3P0 State:
〈3P 00 , F = I |HZ|
3P 00 , F = I〉= −gImFµ0B
〈3P 00 , F = I |HZ|
3P 01 , F
′ = I〉 =(gs − gl)mFµ0B
3I(I+1)
〈3P 00 , F = I |HZ|
3P 01 , F
′ = I + 1〉 =(gs − gl)µ0B
((I+1)2−m2
)(4I+6)
3(I+1)(4(I2+1)−1)
〈3P 00 , F = I |HZ|
3P 01 , F
′ = I − 1〉 =(gs − gl)µ0B
(I2−m2
)(4I−2)
3I(4I2−1)
Relevant Diagonal Elements within
3P1 Manifold:
〈3P 01 , F = I |HZ|
3P 01 , F = I〉=
gl+gs−gI (2I(I+1)−2)
2I(I+1)
mFµ0B
〈3P 01 , F = I + 1|HZ |
3P 01 , F = I + 1〉=
gl+gs−2gII
2(I+1)
mFµ0B
〈3P 01 , F = I − 1|HZ |
3P 01 , F = I − 1〉=
gl+gs+2gI (I+1)
mFµ0B
Relevant Diagonal Elements within
1P1 Manifold:
〈1P 01 , F = I |HZ|
1P 01 , F = I〉=
gl−gI (I(I+1)−1)
I(I+1)
mFµ0B
〈1P 01 , F = I + 1|HZ |
1P 01 , F = I + 1〉=
gl−gII
(I+1)
mFµ0B
〈1P 01 , F = I − 1|HZ |
1P 01 , F = I − 1〉=
gl+gI(I+1)
mFµ0B
The eigenvalues of Eq. 17 can be written analytically as
three distinct cubic roots
ν20 + 3ν
arccos
2ν30 + 9ν0ν
1 + 27ν
2(ν20 + 3ν
νmF ≡ν3P0,mF =
ν20 + 3ν
arccos
2ν30 + 9ν0ν
1 + 27ν
2(ν20 + 3ν
where we have
ν0 =ν3P0 + ν3P1 + ν1P1
−ν3P0ν3P1 − ν3P1ν1P1 − ν3P0ν1P1 + (M
ν3P0ν3P1ν1P1 − ν3P1(M
− ν1P1(M
Since the main goal is a description of the 3P0 state sen-
sitivity, the solution can be simplified when one considers
the relative energy spacing of the three states, and that
elements having terms β, αβ, and smaller are negligible
compared to those proportional to only α. Therefore we
can ignore M
terms and find simplified eigenvalues
arising only from the interaction between 3P1 and
that can be expressed as a Breit-Rabi like expression for
the 3P0 state given by
ν3P0,mF =
ν3P0 + ν3P1
ν3P0 − ν3P1
1 + 4
α2|〈3P 00 , F |HZ|
3P 01 , F
(ν3P0 − ν3P1)
For magnetic fields where the Zeeman effect is small com-
pared to the fine-structure splitting, the result is identi-
cal to that from Eq. 8 of the main text. The magnetic
0 10 20 30 40 50 60 70 80
F=9/2
F=11/2
F=7/2
Magnetic Field(G)
FIG. 10: (color online) Magnetic sensitivity of the 1P1 state
calculated with the expression in Eq. 24 using A = −3.4 MHz
and Q = 39 MHz [55]. Note the inverted level structure.
sensitivity of the clock transition (plotted in Fig. 2) is de-
termined by simply subtracting the 〈3P 00 |HZ |
3P 00 〉 term
which is common to both states.
C. Full treatment of the HFI and magnetic field
For a more complete treatment of the Zeeman effect
we can relax the constraint of small fields and treat the
hyperfine and Zeeman interactions simultaneously using
the spin-orbit mixed states in Eq. 1 as a basis. The total
Hamiltonian is written Htotal = HZ+HA+HQ including
hyperfine HA and quadrupole HQ effects in addition to
the Zeeman interaction HZ defined in Eq. 5 of the main
0 500 1000 1500 2000 2500 3000
Magnetic Field (G)
F=11/2
F=9/2
F=7/2
FIG. 11: (color online) Magnetic sensitivity of the 3P1 state
calculated with the expression in Eq. 24 using A = −260 MHz
and Q = −35 MHz [63].
text. The Hamiltonian Htotal can be written as
Htotal =HZ + A~I · ~J
~I · ~J(2~I · ~J + 1) − IJ(I + 1)(J + 1)
2IJ(2I − 1)(2J − 1)
Diagonalization of the full space using Eq. 23 does not
change the 3P0 result discussed above, even for fields
as large as 104 G. This is not surprising since the 3P0
state has only one F level, and is therefore only af-
fected by the hyperfine interaction through state mix-
ing which was already accounted for in the previous cal-
culation. Alternatively, for an accurate description of
the 1P1,
3P1 and
3P2 states, Eq. 23 must be used. For
an alkaline-earth 2S+1L1 state in the |I, J, F,mF 〉 basis
we find an analytical expression for the field dependence
of the F = I, I ± 1 states and sublevels. The solution
is identical to Eq. 20 except we replace the frequencies
in Eq. 21 with those in Eq. 24. We define the relative
strengths of magnetic, hyperfine, and quadrupole inter-
actions with respect to an effective hyperfine-quadrupole
coupling constant WAQ = A +
4I(1−2I)
as XBR =
, and XQ =
I(1−2I)WAQ
, respectively. The so-
lution is a generalization of the Breit-Rabi formula [61]
for the 2S+1L1 state in the two electron system with nu-
clear spin I. The frequencies are expanded in powers of
XBR as
ν0 = −2WAQ
mFXBR
ν1 = WAQ
2(geff − gI)XA + 3geffXQ
mFXBR +
(geff + gI)
3m2F g
(geff+gI)
ν2 = WAQ
I(I + 1)X
I(I+1)
3(1−2I)(3+2I)
I(I+1)
−XAXQ
geff(2 −
2I(I+1)
) + gI
mFXBR
2gIgeff
I(I+1)
(geff+gI )
3m2F g
I(I+1)(geff+gI)
gI((geff+gI )
2−(gImF )
I(I+1)
with abbreviations
=I(I + 1)
− I(I + 1)XQ(XA − 1)
=Xeff
XQXeff +
X2A −X
3(3 + 2I)(1 − 2I)
Xeff =XA +XQ
(3 + 2I)(1 − 2I)
geff =
− gs)
(L(L+ 1)− S(S + 1)) .
The resulting Zeeman splitting of the 5s5p1P1 and
5s5p3P1 hyperfine states in
87Sr is shown in Fig. 10 and
Fig. 11. For the more complex structure of 3P2, we
have solved Eq. 23 numerically, with the results shown in
Fig. 12. The solution for the 1P1 state depends strongly
on the quadrupole (Q) term in the Hamiltonian, while
for the 3P1 and
3P2 states the magnetic dipole (A) term
is dominant.
0 500 1000 1500 2000 2500 3000
10 F=5/2
F=7/2
F=11/2
F=13/2
F=9/2
Magnetic Field (G)
FIG. 12: (color online) Magnetic sensitivity of the 3P1 state
calculated numerically with Eq. 23 using A=-212 MHz and
Q=67 MHz [62].
|
0704.0914 | Electromagnetic wormholes via handlebody constructions | Electromagnetic wormholes via
handlebody constructions
Allan Greenleaf, Yaroslav Kurylev,
Matti Lassas and Gunther Uhlmann ∗
Abstract
Cloaking devices are prescriptions of electrostatic, optical or elec-
tromagnetic parameter fields (conductivity σ(x), index of refraction
n(x), or electric permittivity ǫ(x) and magnetic permeability µ(x))
which are piecewise smooth on R3 and singular on a hypersurface Σ,
and such that objects in the region enclosed by Σ are not detectable to
external observation by waves. Here, we give related constructions of
invisible tunnels, which allow electromagnetic waves to pass between
possibly distant points, but with only the ends of the tunnels visible
to electromagnetic imaging. Effectively, these change the topology of
space with respect to solutions of Maxwell’s equations, corresponding
to attaching a handlebody to R3. The resulting devices thus function
as electromagnetic wormholes.
∗AG and GU are supported by US NSF, ML by CoE-programm 213476 of Academy of
Finland.
http://arxiv.org/abs/0704.0914v1
1 Introduction
There has recently been considerable interest, both theoretical [16, 18, 19, 3,
13] and experimental [20], in invisibility (or “cloaking”) from observation by
electromagnetic (EM) waves. (See also [17] for a treatment of cloaking in the
context of elasticity.) Theoretically, cloaking devices are given by specifying
the conductivity σ(x) (in the case of electrostatics), the index of refraction
n(x) (for optics in the absence of polarization, where one uses the Helmholtz
equation), or the electric permittivity ǫ(x) and magnetic permeability µ(x)
(for the full system of Maxwell’s equations.) In the constructions to date,
the EM parameter fields ( σ;n; ǫ and µ ) have been piecewise smooth and
anisotropic. (See, however, [5, Sec.4] for an example that can be interpreted
as cloaking with respect to Helmholtz by an isotropic negative index of re-
fraction material.) Furthermore, the EM parameters have singularities, with
one or more eigenvalues of the tensors going to zero or infinity as one ap-
proaches from on or both sides the cloaking surface Σ, which encloses the
region within which objects may be hidden from external observation. Such
constructions might have remained theoretical curiosities, but the advent of
metamaterials[1] allows one, within the constraints of current technology, to
construct media with fairly arbitrary ǫ(x) and µ(x).
It thus becomes an interesting mathematical problem with practical signif-
icance to understand what other new phenomena of wave propagation can
be produced by prescribing other arrangements of ǫ and µ. Geometrically,
cloaking can be viewed as arising from a singular transformation of R3. In-
tuitively, for a spherical cloak [6, 7, 18], it is as if an infinitesimally small
hole in space has been stretched to a ball D; an object can be inserted in-
side the hole so created and is then invisible to external observations. On
the level of the EM parameters, homogeneous, isotropic parameters ǫ, µ are
pushed forward to become inhomogeneous, anisotropic and singular as one
approaches Σ = ∂D from the exterior. There are then two ways, referred to
as the single and double coating in [3], of continuing ǫ, µ to within D so as to
rigorously obtain invisibility with respect to locally finite energy waves. We
refer to either process as blowing up a point. As observed in [3], one can use
the double coating to produce a manifold with a different topology, but with
the change in topology invisible to external measurements.
To define the solutions of Maxwell’s equations rigorously in the single coating
case, one has to add boundary conditions on Σ. Physically, this corresponds
to the lining of the interior of the single coating material, e.g., in the case
of blowing up a point, with a perfectly conducting layer, see [3]. We point
out here that in the recent preprint [21], the single coating construction is
supplemented with selfadjoint extensions of Maxwell operators in the interior
of the cloaked regions; these implicitly impose interior boundary conditions
on the boundary of the cloaked region, similar to the PEC boundary condi-
tion suggested in [3]. For the case of an infinite cylinder the Soft-and-Hard
(SH) interior boundary condition is used in [3] to guarantee cloaking of active
objects, and is needed even for passive ones.
In this paper, we show how more elaborate geometric constructions, cor-
responding to blowing up a curve, enable the description of tunnels which
allow the passage of waves between distant points, while only the ends of
the tunnels are visible to external observation. These devices function as
electromagnetic wormholes, essentially changing the topology of space with
respect to solutions of Maxwell’s equations.
We form the wormhole device around an obstacle K ⊂ R3 as follows. First,
one surrounds K with metamaterials, corresponding to a specification of EM
parameters ε̃ and µ̃. Secondly, one lines the surface of K with material im-
plementing the Soft-and-Hard (SH) boundary condition from antenna theory
[8, 10, 11]; this condition arose previously [3] in the context of cloaking an
infinite cylinder. The EM parameters, which become singular as one ap-
proaches K, are given as the pushforwards of nonsingular parameters ε and
µ on an abstract three-manifold M , described in Sec. 2. For a curve γ ⊂ M ,
we construct the diffeomorphism F fromM \γ to the wormhole device in Sec.
3. For the resulting EM parameters ε̃ and µ̃, we have singular coefficients of
Maxwell’s equations at K, and so it is necessary to formulate an appropriate
notion of locally finite energy solutions (see Def. 4.1). In Theorem 4.2, we
then show that there is a perfect correspondence between the external mea-
surements of EM waves propagating through the wormhole device and those
propagating on the wormhole manifold.
It was shown in [3] that the cloaking constructions are mathematically valid
at all frequencies k. However, both cloaking and the wormhole effect stud-
ied here should be considered as essentially monochromatic, or at least very
narrow-band, using current technology, since, from a practical point of view
the metamaterials needed to implement the constructions have to be fabri-
cated and assembled with a particular wavelength in mind, and theoretically
are subject to significant dispersion [18]. Thus, as for cloaking in [16, 18, 3],
here we describe the wormhole construction relative to electromagnetic waves
at a fixed positive frequency k. We point out that the metamaterials used in
the experimental verification of cloaking [20] should be readily adaptable to
yield a physical implementation, at microwave frequencies, of the wormhole
device described here. See Remark 1 in Sec. 4.2 for further discussion.
The results proved here were announced in [4].
2 The wormhole manifold M
First we explain, somewhat informally, what we mean by a wormhole. The
concept of a wormhole is familiar from general relativity [9, 22], but here
we define a wormhole as an object obtained by gluing together pieces of
Euclidian space equipped with certain anisotropic EM parameter fields. We
start by describing this process heuristically; later, we explain more precisely
how this can be effectively realized vis-a-vis EM wave propagation using
metamaterials.
We first describe the wormhole as an abstract manifold M , see Fig. 1; in
the next section we will show how to realize this concretely in R3, as a
wormhole device N . Start by making two holes in the Euclidian space R3 =
{(x, y, z)|x, y, z ∈ R}, say by removing the open ball B1 = B(O , 1) with
center at the origin O and of radius 1, and also the open ball B2 = B(P, 1),
where P = (0, 0, L) is a point on the z-axis having the distance L > 3 to
the origin. We denote by M1 the region so obtained, M1 = R
3 \ (B1 ∪ B2),
which is the first component we need to construct a wormhole. Note that
M1 is a 3-dimensional manifold with boundary, the boundary of M1 being
∂M1 = ∂B1∪∂B2, the union of two 2-spheres. Thus, ∂M1 can be considered
as a disjoint union S2 ∪ S2, where we will use S2 to denote various copies of
the two-dimensional unit sphere.
The second component needed is a 3−dimensional cylinder, M2 = S2× [0, 1].
This cylinder can be constructed by taking the closed unit cube [0, 1]3 in R3
and, for each value of 0 < s < 1, gluing together, i.e., identifying, all of the
points on the boundary of the cube with z = s. Note that we do not identify
points at the top of the boundary, at z = 1, or at the bottom, at z = 0. We
then glue together the boundary ∂B(O , 1) ∼ S2 of the ball B(O , 1) with the
lower end (boundary component) S2×{0} of M2, and the boundary ∂B(P, 1)
with the upper end, S2 × {1}. In doing so we identify the point (0, 0, 1) ∈
∂B(O , 1) with the point NP × {0} and the point (0, 0, L − 1) ∈ ∂B(P, 1)
with the point NP × {1}, where NP is the north pole on S2.
The resulting manifold M no longer lies in R3, but rather is the connected
sum of the components M1 and M2 and has the topology of R
3 with a
3−dimensional handle attached. Note that adding this handle makes it pos-
sible to travel from one point in M1 to another point in M1, not only along
curves lying in M1 but also those in M2.
To consider Maxwell’s equations on M , let us start with Maxwell’s equations
on R3 at frequency k ∈ R, given by
∇× E = ikB, ∇×H = −ikD, D(x) = ε(x)E(x), B(x) = µ(x)H(x).
Here E and H are the electric and magnetic fields, D and B are the electric
displacement field and the magnetic flux density, ε and µ are matrices corre-
sponding to permittivity and permeability. As the wormhole is topologically
different from the Euclidian space R3, we use a formulation of Maxwell’s
equations on a manifold, and as in [3], do this in the setting of a general Rie-
mannian manifold, (M, g). For our purposes, as in [14, 3] it suffices to use
ε, µ which are conformal, i.e., proportional by scalar fields, to the metric g.
In this case, Maxwell’s equations can be written, in the coordinate invariant
form, as
dE = ikB, dH = −ikD, D = ǫE, B = µH in M,
where E,H are 1-forms, D,B are 2-forms, d is the exterior derivative, and ǫ
and µ are scalar functions times the Hodge operator of (M, g), which maps
1-forms to the corresponding 2-forms [2]. In local coordinates these equations
are written in the same form as Maxwell’s equations in Euclidian space with
matrix valued ε and µ. Although not necessary, for simplicity one can choose
a metric on the wormhole manifold M which is Euclidian on M1, and on M2
is the product of a given metric g0 on S
2 and the standard metric of [0, 1].
More generally, can also choose the metric on M2 to be a warped product.
Even the simple choice of the product of the standard metric of S2 and the
metric δ2ds2, where δ is the “length” of the wormhole, gives rise to interesting
ray-tracing effects for rays passing through the wormhole tunnel. For δ << 1,
the image through one end of the wormhole (of the region beyond the other
end) would resemble the image in a a fisheye lens; for δ & 1, multiple images
and greater distortion occur. (See [4, Fig.2].)
The proof of the wormhole effect that we actually give is for yet another
variation, where the balls that form the ends have their boundary spheres
flattened; this may be useful for applications, since it allows for there to be a
vacuum (or air) in a neighborhood of the axis of the wormhole, so that, e.g.,
instruments may be passed through the wormhole. We next show how to
construct, using metamaterials, a device N in R3 that effectively realizes the
geometry and topology of M , relative to solutions of Maxwell’s equations at
frequency k, and hence functions as an electromagnetic wormhole.
3 The wormhole device N in R3
We now explain how to construct a “device” N in R3, i.e., a specification
of permittivity ε and permeability µ, which affects the propagation of elec-
tromagnetic waves in the same way as the presence of the handle M2 in the
wormhole manifold M . What this means is that we prescribe a configuration
of metamaterials which make the waves behave as if there were an invisible
tube attached to R3, analogous to the handle M2 in the wormhole manifold
M . In the other words, as far as external EM observations of the wormhole
device are concerned, it appears as if the topology of space has been changed.
We use cylindrical coordinates (θ, r, z) corresponding to a point (r cos θ, r sin θ, z)
in R3. The wormhole device is built around an obstacle K ⊂ R3. To de-
fine K, let S be the two-dimensional finite cylinder {θ ∈ [0, 2π], r = 2, 0 ≤
z ≤ L} ⊂ R3. The open region K consists of all points in R3 that have
distance less than one to S and has the shape of a long, thick-walled tube
with smoothed corners.
Let us first introduce a deformation map F from M to N = R3 \K or, more
precisely, from M \γ to N \Σ, where γ is a closed curve in M to be described
shortly and Σ = ∂K. We will define F separately on M1 and M2 denoting
the corresponding parts by F1 and F2.
To describe F1, let γ1 be the line segment on the z−axis connecting ∂B(O , 1)
and ∂B(P, 1) in M1, namely, γ1 = {r = 0, z ∈ [1, L − 1]}. Let F1(r, z) =
(θ, R(r, z), Z(r, z)) be such that (R(r, z), Z(r, z)), shown in Fig. 2,
PSfrag replacements
Figure 1: Schematic figure: a wormhole manifold is glued from two com-
ponents, the “handle” and space with two holes. Note that in the actual
construction, the components are three dimensional.
PSfrag replacements
A B C D A
Figure 2: The map (R(r, z), Z(r, z)) in cylindrical coordinates (z, r).
transforms in the (r, z) coordinates the semicircles AB and CD in the left
picture to the vertical line segments A′B′ = {r ∈ [0, 1], z = 0} and C ′D′ =
{r ∈ [0, 1], z = L} in the right picture and the cut γ1 on the left picture to the
curve B′C ′ on the right picture. This gives us a map F1 : M1 \ γ1 → N1 \Σ,
where the closed region N1 in R
3 is obtained by rotation of the region exterior
to the curve A′B′C ′D′ around the z−axis. We can choose F1 so that it is
the identity map in the domain U = R3 \ {−2 ≤ z ≤ L+ 2, 0 ≤ r ≤ 4}.
To describe F2, consider the line segment, γ2 = {NP} × [0, 1] on M2 . The
sphere without the north pole can be ”flattened” and stretched to an open
disc with radius one which, together with stretching [0, 1] to [0, L], gives us a
map F2 from M2 \γ2 to N2\Σ. The region N2 is the 3−dimensional cylinder,
N2 = {θ ∈ [0, 2π], r ∈ [0, 1], z ∈ [0, L]}. When flattening S2 \ NP , we do
it in such a way that F1 on ∂B(O , 1) and ∂B(P, 1) coincides with F2 on
(S2 \NP )× {0} and (S2 \NP )× {1}, respectively.
Thus, F maps M \ γ, where γ = γ1 ∪ γ2 is a closed curve in M , onto N \ Σ;
in addition, F is the identity on the region U .
Now we are ready to define the electromagnetic material parameter tensors
on N . We define the permittivity to be
ε̃ = F∗ε(y) =
(DF )(x)· ε(x)· (DF (x))t
det(DF )
x=F−1(y)
where DF is the derivative matrix of F , and similarly the permeability to
be µ̃ = F∗µ. These deformation rules are based on the fact that permittivity
and permeability are conductivity type tensors, see [14].
Maxwell’s equations are invariant under smooth changes of coordinates. This
means that, by the chain rule, any solution to Maxwell’s equations in M \ γ,
endowed with material parameters ε, µ becomes, after transformation by F ,
a solution to Maxwell’s equations in N \Σ with material parameters ε̃ and µ̃,
and vice versa. However, when considering the fields on the entire spaces M
and N , these observations are not enough, due to the singularities of ε̃ and µ̃
near Σ; the significance of this for cloaking was observed and analyzed in [3].
In the following, we will show that the physically relevant class of solutions to
Maxwell’s equations, namely the (locally) finite energy solutions, remains the
same, with respect to the transformation F , in (M ; ε, µ) and (N ; ε̃, µ̃). One
can analyze the rays in M and N endowed with the electromagnetic wave
propagation metrics g =
εµ and g̃ =
ε̃µ̃, respectively. Then the rays on
M are transformed by F into the rays in N . As almost all the rays on M do
not intersect with γ, therefore, almost all the rays on N do not approach Σ.
This was the basis for [16, 18] and was analyzed further in [19]; see also [17]
for a similar analysis in the context of elasticity. Thus, heuristically one is
led to conclude that the electromagnetic waves on (M ; ε, µ) do not feel the
presence of γ, while those on (N ; ε̃, µ̃) do not feel the presence of K, and
these waves can be transformed into each other by the map F .
Although the above considerations are mathematically rigorous, on the level
both of the chain rule and of high frequency limits, i.e., ray tracing, in the
exteriors M \ γ and N \Σ, they do not suffice to fully describe the behavior
of physically meaningful solution fields on M and N . However, by carefully
examining the class of the finite-energy waves in M and N and analyzing
their behavior near γ and Σ, respectively, we can give a complete analysis,
justifying the conclusions above. Let us briefly explain the main steps of the
analysis using methods developed for theory of invisibility (or cloaking) at
frequency k > 0 [3] and at frequency k = 0 in [6, 7]. The details will follow.
First, to guarantee that the fields in N are finite energy solutions and do not
blow up near Σ, we have to impose at Σ the appropriate boundary condition,
namely, the Soft-and-Hard (SH) condition, see [8, 11],
eθ ·E|Σ = 0, eθ ·H|Σ = 0,
where eθ is the angular direction. Secondly, the map F can be considered
as a smooth coordinate transformation on M \ γ; thus, the finite energy
solutions on M \ γ transform under F into the finite energy solutions on
N \ Σ, and vice versa. Thirdly, the curve γ in M has Hausdorff dimension
equal to one. This implies that the possible singularities of the finite energy
electromagnetic fields near γ are removable [12], that is, the finite energy
fields in M \ γ are exactly the restriction to M \ γ of the fields defined on all
of M .
Combining these steps we can see that measurements of the electromagnetic
fields on (M ; ε, µ) and on (R3 \K; ε̃, µ̃) coincide in U . In the other words, if
we apply any current on U and measure the radiating electromagnetic fields
it generates, then the fields on U in the wormhole manifold (M ; ε, µ) coincide
with the fields on U in (R3 \K; ε̃, µ̃), 3-dimensional space equipped with the
wormhole device construction.
Summarizing our construction, the wormhole device consists of the metama-
terial coating of the obstacle K. This coating should have the permittivity
ε̃ and permeability µ̃. In addition, we need to impose the SH boundary
condition on Σ, which may be realized by fabricating the obstacle K from a
perfectly conducting material with parallel corrugations on its surface [8, 11].
In the next section, the permittivity ε̃ and permeability µ̃ are described
in a rather simple form. (As mentioned earlier, in order to allow for a tube
around the axis of the wormhole to be a vacuum or air, we deal with a
slightly different construction than was described above, starting with flat-
tened spheres). It should be possible to physically implement an approxima-
tion to this mathematical idealization of the material parameters needed for
the wormhole device, using concentric rings of split ring resonators as in the
experimental verification of cloaking obtained in [20].
4 Rigorous construction of the wormhole
Here we present a rigorous model of a typical wormhole device and justify
the claims above concerning the behavior of the electromagnetic fields in the
wormhole device in R3 in terms of the fields on the wormhole manifold
(M, g).
4.1 The wormhole manifold (M, g) and the wormhole
device N
Here we prove the wormhole effect for a variant of the wormhole device
described in the previous sections. Instead of using a round sphere S2 as
before, we present a construction that uses a deformed sphere S2flat that is
flat near the south and north poles, SP and NP. This makes it possible
to have constant isotropic material parameters near the z-axis located inside
the wormhole. For possible applications, see [4].
We use the following notation. Let (θ, r, z) ∈ [0, 2π]×R+×R be the cylindrical
coordinates of R3, defined via the map
X : (θ, r, z) → (r cos θ, r sin θ, z),
X : [0, 2π] × R+ × R → R3. In the following, we identify [0, 2π]
and the unit circle S1.
Let us start by removing from R3 two “deformed” balls which have flat
portions near the south and north poles. More precisely, let M1 = R3 \ (P1 ∪ P2), where in the cylindrical coordinates
P1 = {X(θ, r, z) : −1 ≤ z ≤ 1, 0 ≤ r ≤ 1} ∪ {X(θ, r, z) : (r − 1)² + z² ≤ 1},
P2 = {X(θ, r, z) : −1 ≤ z − L ≤ 1, 0 ≤ r ≤ 1} ∪ {X(θ, r, z) : (r − 1)² + (z − L)² ≤ 1}.
We say that the boundary ∂P1 of P1 is a deformed sphere with flat portions,
and denote it by S2flat. We say that the intersection points of S2flat with the
z-axis are the north pole, NP, and the south pole, SP.
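For readers who want to experiment with the construction numerically, a minimal membership test for the deformed ball P1, written in the (r, z) half-plane, might look as follows (P2 is the same set shifted by L along the z-axis). This is only an illustrative sketch, not part of the construction itself:

```python
# Membership test for the deformed ball P1, in the cylindrical (r, z) coordinates;
# P2 is obtained by replacing z with z - L.
def in_P1(r, z):
    return (0 <= r <= 1 and -1 <= z <= 1) or ((r - 1)**2 + z**2 <= 1)

# The poles NP = X(theta, 0, 1) and SP = X(theta, 0, -1) lie on the flat portions of the boundary:
print(in_P1(0.0, 1.0), in_P1(0.0, 1.0001))   # True False
# The boundary bulges out to r = 2 at z = 0:
print(in_P1(2.0, 0.0), in_P1(2.0001, 0.0))   # True False
```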
Let g1 be the metric on M1 inherited from R3, and let γ1 be the path
γ1 = {X(0, 0, z) : 1 < z < L − 1} ⊂ M1.
Define
A1 = M1 \ V1/4,   Vt = {X(θ, r, z) : 0 ≤ r ≤ t, 1 < z < L − 1}, 0 < t < 1,
and consider a map G0 : M1 \ γ1 → A1; see Fig. 3. G0 is defined as the identity
map on M1 \ V1/2 and, in cylindrical coordinates, as
G0(X(θ, r, z)) = X(θ, (2r + 1)/4, z),   (θ, r, z) ∈ V1/2.
Clearly, G0 is C^{0,1}-smooth.
Let U(x) ∈ R3×3, x = X(θ, r, z), be the orthogonal matrix that maps the
standard unit vectors e1, e2, e3 of R
3 to the Euclidian unit vectors correspond-
ing to the θ, r, and z directions, that is,
U(x)e1 = (− sin θ, cos θ, 0), U(x)e2 = (cos θ, sin θ, 0), U(x)e3 = (0, 0, 1).
Then the differential of G0 in the Euclidian coordinates at the point x ∈ V1/2
is the matrix
DG0(x) = U(y) diag( (2r + 1)/(4r), 1/2, 1 ) U(x)^{−1},   x = X(θ, r, z), y = G0(x). (1)
Later we impose on part of the boundary, Σ0 = ∂A1 ∩ {1 < z < L− 1}, the
soft-and-hard boundary condition (marked red in the figures).
Next, let (θ, z, τ) = (θ(x), z(x), τ(x)) be the Euclidian boundary normal
coordinates associated to Σ0, that is, τ(x) = distR3(x,Σ0) and (θ(x), z(x))
are the θ and z-coordinates of the closest point of Σ0 to x.
Denote by (G0)∗g1 the push forward of the metric g1 under G0, that is, the
metric obtained from g1 using the change of coordinates G0, see [2]. The
metric (G0)∗g1 coincides with g1 in A1 \ V1/2, and in the Euclidian boundary
normal coordinates of Σ0, on A1 ∩ V1/2, the metric (G0)∗g1 has the length
element
ds² = 4τ² dθ² + dz² + 4dτ².
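As a quick sanity check of this length element, the following sketch verifies symbolically that pulling ds² = 4τ²dθ² + dz² + 4dτ² back through G0 recovers the flat metric dr² + r²dθ² + dz². It assumes the reconstructed radial stretch r ↦ (2r + 1)/4 of G0 on V1/2; it is an illustration only, not part of the original construction:

```python
import sympy as sp

r = sp.symbols('r', positive=True)

# Assumed (reconstructed) radial part of G0 on V_{1/2}: rho = (2r+1)/4 maps (0, 1/2] onto (1/4, 1/2].
rho = (2*r + 1) / 4
tau = rho - sp.Rational(1, 4)     # Euclidian distance from the image point to Sigma_0 = {rho = 1/4}

# Pull the claimed length element 4 tau^2 dtheta^2 + dz^2 + 4 dtau^2 back through G0
# and compare, coefficient by coefficient, with the flat metric r^2 dtheta^2 + dz^2 + dr^2.
dtau_dr = sp.diff(tau, r)
assert sp.simplify(4*tau**2 - r**2) == 0      # dtheta^2 coefficient
assert sp.simplify(4*dtau_dr**2 - 1) == 0     # dr^2 coefficient (the dz^2 coefficient is 1 on both sides)
print("pullback of (G0)_* g1 agrees with the flat metric on V_{1/2}")
```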
Figure 3: A schematic figure on the map G0, considered in the (r, z) coor-
dinates. Later, we impose the SH boundary condition on the portion of the
boundary coloured red.
Next, let
q3 = conv( {(r, z) : (r − 2)² + (z + 2)² ≤ 1} ∪ {(r, z) : (r − 2)² + (z − (L + 2))² ≤ 1} ),
q4 = {(r, z) : 0 ≤ r ≤ 1, −1 ≤ z ≤ L + 1},
where conv(q) denotes the convex hull of the set q.
Define
N1 = R3 \ (P3 ∪ P4),
P3 = {X(θ, r, z) : (r, z) ∈ q3},
P4 = {X(θ, r, z) : (r, z) ∈ q4},
Σ1 = ∂N1 \ ∂P4.
We can find a Lipschitz smooth map G1 : A1 → N1, see Fig. 4, of the form
G1(X(θ, r, z)) = X(θ, R(r, z), Z(r, z))
such that it maps Σ0 to Σ1, and in A1 near Σ0 it is given by
G1(x + tν0) = G1(x) + tν1. (2)
Here, x ∈ Σ0, ν0 is the Euclidian unit normal vector of Σ0, ν1 is the Euclidian
unit normal vector of Σ1, and 0 < t < 1/4. Moreover, we can find a G1 so that
it is the identity map near the z-axis, that is,
G1(x) = x,   x ∈ A1 ∩ {0 ≤ r < 1/4}, (3)
and such that G1 is also the identity map in the set of points with the
Euclidian distance 4 or more from P1 ∪ P2. Note that we can find such a
G1 such that both G1 and its inverse G1^{−1} are Lipschitz smooth up to the
boundary. Thus the differential DG1 of G1 at x ∈ A1 in Euclidian coordinates is
DG1(x) = U(y) diag( a11(r, z), A(r, z) ) U(x)^{−1},   x = X(θ, r, z), y = G1(x),
where c0 ≤ a11(r, z) ≤ c1 and A(r, z) is a symmetric (2×2)-matrix satisfying
c0I ≤ A(r, z) ≤ c1I
with some c0, c1 > 0.
The map F1(x) = G1(G0(x)) then maps F1 : M1 \ γ1 → N1. Let g̃1 = (F1)∗g1
be the metric on N1. From the above considerations, we see that the differential
DF1 of F1 at x ∈ M1 \ γ1 near Σ0, in Euclidian coordinates, is given by
DF1(x) = U(y) diag( b11(θ, r, z), B(r, z) ) U(x)^{−1}, (4)
b11(θ, r, z) = c11(r, z) / distR3(X(θ, r, z), Σ0),   x = X(θ, r, z), y = F1(x),
where c0 ≤ c11(r, z) ≤ c1, and B(r, z) is a symmetric (2×2)-matrix satisfying
c0I ≤ B(r, z) ≤ c1I,
for some c0, c1 > 0.
Note that ∂P4 ∩ {r < 1} consists of two two-dimensional discs, B2(0, 1) × {−1}
and B2(0, 1) × {L + 1}. Below, we will use the map
f2 = F1|∂P1\NP : ∂P1 \ NP → B2(0, 1) × {−1} ⊂ ∂N1.
The map f2 can be considered as the deformation that "flattens" S2flat \ NP
to a two-dimensional unit disc.
Figure 4: Map G1 in (r, z)-coordinates.
To describe f2, consider S2flat as a surface in Euclidian space and define on it
the θ coordinate corresponding to the θ coordinate of R3 \ {r = 0}. Let then
s(y) be the intrinsic distance of y ∈ S2flat to the south pole SP. Then (θ, s)
define coordinates in S2flat \ {SP, NP}. We denote by y(θ, s) ∈ S2flat \ {SP, NP}
the point corresponding to the coordinates (θ, s).
By the above construction, the map f2 has the form, with respect to the
coordinates used above,
f2(y(θ, s)) = X(θ, R(s), −1) ∈ B2(0, 1) × {−1}, where (5)
R(s) = s, for 0 < s < 1/2,
R(s) = 1 − (1/2)[(π + 4) − s], for (π + 4) − 1/2 < s < (π + 4),
cf. formulae (2) and (3). In the following we identify B2(0, 1) × {−1} with
the disc B2(0, 1).
Let h1 be the metric on ∂P1 \ NP inherited from (M1, g1). Let h2 = (f2)∗h1
be the metric on B2(0, 1). We observe that the metric h2 makes the disc
B2(0, 1) isometric to S2flat \ NP, endowed with the metric inherited from R3.
Thus, let
M2 = S2flat × [−1, L + 1].
On M2, let the metric g2 be the product of the metric of S2flat inherited from R3
and the metric α2(z)dz², α2 > 0, on [−1, L + 1]. Let γ2 = {NP} × [−1, L + 1]
be a path on M2.
Define N2 = P4 = {X(θ, r, z) : 0 ≤ r < 1,−1 ≤ z ≤ L + 1} ⊂ R3,
Σ2 = ∂N2 ∩ {r = 1}, and let F2 : M2 \ γ2 → N2 be the map of the form
F2(y, z) = (f2(y), z) ∈ R3, (y, z) ∈ (S2flat \NP )× [−1, L+ 1]. (6)
Let g̃2 = (F2)∗g2 be the resulting metric on N2.
Figure 5: The set N2 in the (r, z) coordinates. Later, we impose the SH
boundary condition on the portion of the boundary colored red.
Denote by M̄1 = M1 ∪ ∂M1 the closure of M1 and let (M, g) = (M̄1, g1)#(M2, g2)
be the connected sum of M̄1 and M2, that is, we glue the boundaries ∂M1
and ∂M2. The set N = N1 ∪ N2 ⊂ R3 is open, and its boundary ∂N is
Σ = Σ1 ∪ Σ2.
Let F be the map F : M \ γ → N defined by the maps F1 : M1 \ γ1 → N1
and F2 : M2 \ γ2 → N2, and finally, let γ = γ1 ∪ γ2 and g̃ = F∗g.
Figure 6: The set N = N1 ∪ N2 ⊂ R3 having the complement K, presented
in the (r, z) coordinates. Later, the SH boundary condition is imposed on the portion of the boundary colored red.
Let K = R3 \N . On the surface Σ = ∂K we can use local coordinates (t̃, θ̃),
where θ̃ is the θ-coordinate of the ambient space R3 and t̃ is either the r or
z -coordinate of the ambient space R3 restricted to Σ. Denote also
τ̃ = τ̃ (x) = distR3(x, ∂K).
Then by formula (2) we see that in N1, in the Euclidian boundary normal
coordinates (θ̃, t̃, τ̃) associated to the surface Σ1, the metric g̃ has the length
element
ds² = 4dτ̃² + α1(t̃) dt̃² + 4τ̃² dθ̃²,   0 < τ̃ < 1/4,   c0^{−1} ≤ α1(t̃) ≤ c0, c0 ≥ 1.
The construction of F2 yields that in N2, in the Euclidian boundary normal
coordinates (θ̃, t̃, τ̃) with t̃ = z, associated to the surface Σ2 = ∂K ∩ ∂N2,
the metric g̃ has the length element, near Σ2,
ds² = 4dτ̃² + α2(t̃) dt̃² + 4τ̃² dθ̃²,   0 < τ̃ < 1/4.
Here, near ∂N1 ∩ ∂N2, we use t̃ = z on Σ1. Choosing the map G1 in the
construction of the map F1 appropriately, we have α2(−1) = α1(−1), α2(L+
1) = α1(L+ 1), and the resulting map is Lipschitz.
On M1, N1, and N2, which are subsets of R3, we have the well-defined cylin-
drical coordinates (θ, r, z). Similarly, on M2 = S2flat × [−1, L + 1] we define
the coordinates (θ, s, z), where (θ, s) are the above defined coordinates on
S2flat \ {SP, NP}.
We can also consider on N ⊂ R3 the Euclidian metric, denoted by
ge. In Euclidian coordinates, (ge)jk = δjk. Consider next the above defined
Euclidian boundary normal coordinates (θ̃, t̃, τ̃) associated to ∂K. They are
well defined in a neighborhood of ∂K. We define the vector fields
ξ̃ = ∂τ̃ ,   η̃ = ∂θ̃ ,   ζ̃ = ∂t̃
on N near ∂K. These vector fields are orthogonal with respect to the metric
g̃ and to the metric ge.
On M near γ, we use coordinates (θ, t, τ). On M1, near γ1, in terms
of the cylindrical coordinates these are (θ, t, τ) = (θ, z, r). On M2, they are the
coordinates (θ, t, τ) = (θ, z, s), where s is the intrinsic distance to the north
pole NP . We define also the vector fields
ξ = ∂τ , η = ∂θ, ζ = ∂t
on M \ γ near γ. These vector fields are orthogonal with respect to the
metric g.
In the sequel, we consider the differential of F as the linear map
DF : (TxM, g) → (TyN, ge), y = F (x), x ∈ M \ γ.
Using formula (4) in M1 and formulas (5), (6) in M2, we see that DF^{−1}(x)
at x ∈ N near ∂N is a bounded linear map that satisfies
|(η, DF^{−1}(x)η̃)g| ≤ C τ̃(x),  (ζ, DF^{−1}(x)η̃)g = 0,  (ξ, DF^{−1}(x)η̃)g = 0,
(η, DF^{−1}(x)ζ̃)g = 0,  |(ζ, DF^{−1}(x)ζ̃)g| ≤ C,  |(ξ, DF^{−1}(x)ζ̃)g| ≤ C,
(η, DF^{−1}(x)ξ̃)g = 0,  |(ζ, DF^{−1}(x)ξ̃)g| ≤ C,  |(ξ, DF^{−1}(x)ξ̃)g| ≤ C,   (7)
where C > 0 and (· , · )g is the inner product defined by the metric g. More-
over, we obtain similar estimates for DF in terms of the Euclidian metric
|(η̃, DF(y)η)ge| ≤ C τ(y)^{−1},  (ζ̃, DF(y)η)ge = 0,  (ξ̃, DF(y)η)ge = 0,
(η̃, DF(y)ζ)ge = 0,  |(ζ̃, DF(y)ζ)ge| ≤ C,  |(ξ̃, DF(y)ζ)ge| ≤ C,
(η̃, DF(y)ξ)ge = 0,  |(ζ̃, DF(y)ξ)ge| ≤ C,  |(ξ̃, DF(y)ξ)ge| ≤ C
for y ∈ M \ γ near γ with C > 0.
Next, consider DF(y) at y ∈ M \ γ. Recall that the singular values sj(y), j =
1, 2, 3, of DF(y) are the square roots of the eigenvalues of (DF(y))^t DF(y),
where (DF)^t is the transpose of DF. By (7), the singular values sj = sj(y),
j = 1, 2, 3, of DF(y), numbered in increasing order, satisfy
c1 ≤ s1(y) ≤ c2,   c1 ≤ s2(y) ≤ c2,   c1 τ(y)^{−1} ≤ s3(y) ≤ c2 τ(y)^{−1},   (8)
where c1, c2 > 0.
The determinant of the matrix DF(y) can be computed in terms of its sin-
gular values by det(DF) = s1 s2 s3. Later, we need the norm of the matrix
det(DF(y))^{−1} DF(y). It satisfies, by formula (8),
‖det(DF(y))^{−1} DF(y)‖ = ‖(∏_{k=1}^{3} s_k^{−1}) diag(s1, s2, s3)‖ = max_{1≤j≤3} ∏_{k≠j} s_k^{−1} ≤ c1^{−2}. (9)
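The algebra behind estimate (9) is easy to check numerically. The following sketch, with randomly generated orthogonal factors and purely illustrative singular values, confirms that the operator norm of det(A)^{−1}A equals 1/(s1 s2) and is therefore controlled by c1^{−2} even when the largest singular value s3 blows up:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    s12 = np.sort(rng.uniform(0.5, 2.0, size=2))       # s1 <= s2, bounded away from zero
    s3 = 1.0 / rng.uniform(1e-4, 1e-1)                  # s3 is large, like tau^{-1} near Sigma
    s = np.array([s12[0], s12[1], s3])
    U, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    V, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    A = U @ np.diag(s) @ V.T                            # matrix with singular values s1, s2, s3
    lhs = np.linalg.norm(A / np.linalg.det(A), ord=2)
    assert np.isclose(lhs, 1.0 / (s[0] * s[1]))
print("||det(A)^{-1} A|| = 1/(s1*s2) in every sample")
```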
4.2 Maxwell’s equations on the wormhole with SH coat-
Let dV0(x) denote the Euclidian volume element on N ⊂ R3. Recall that
N ⊂ R3 is an open set with boundary ∂N = Σ. Let dVg be the Riemannian
volume on (M, g). We consider below the map F : M\γ → N as a coordinate
deformation. The map F induces for any differential form Ẽ on N a form
E = F ∗Ẽ in M \ γ, called the pull back of Ẽ under F, see [2].
Next, we consider Maxwell's equations with degenerate material parameters ε̃
and µ̃ on N with SH boundary conditions on Σ. On M and N we define the
permittivity and permeability by setting
ε^{jk} = µ^{jk} = det(g)^{1/2} g^{jk} on M, (10)
ε̃^{jk} = µ̃^{jk} = det(g̃)^{1/2} g̃^{jk} on N.
Here, and below, the matrix [g_{jk}(x)] is the representation of the metric g in
local coordinates, [g^{jk}(x)] is the inverse of the matrix [g_{jk}(x)], and det(g) is
the determinant of [g_{jk}(x)]. We note that the metric g̃ is degenerate near
Σ, and thus ε̃ and µ̃, represented as matrices in the Euclidian coordinates,
have elements that tend to infinity at Σ, that is, the matrices ε̃ and µ̃ have
a singularity near Σ.
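To see the rate of this blow-up concretely, one can evaluate formula (10) in the boundary normal form of the metric near Σ. The sketch below takes the coefficient of dt̃² equal to 1, as in Remark 1 below, and assumes the coordinate ordering (θ̃, t̃, τ̃); it is an illustration of the degeneration only:

```python
import sympy as sp

tau = sp.symbols('tau', positive=True)

# Boundary normal form of the metric near Sigma with the dt~^2 coefficient set to 1
# (a simplifying assumption); coordinates ordered as (theta~, t~, tau~).
g = sp.diag(4*tau**2, 1, 4)

# Material parameters of formula (10): eps^{jk} = mu^{jk} = det(g)^{1/2} g^{jk}
eps = sp.sqrt(g.det()) * g.inv()
print(sp.simplify(eps))   # diag(1/tau, 4*tau, tau): one entry blows up, the other two degenerate to 0
```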
Remark 1. Modifying the above construction by replacing M2 with M2 =
S2flat × [l1, l2] for appropriate l1, l2 ∈ R and choosing F1 in an appropriate way,
we can use local coordinates (θ̃, t̃) on Σ such that the Euclidian distance
along Σ of points (θ̃, t̃1) and (θ̃, t̃2) is proportional to |t̃1 − t̃2|, and the metric
g̃ in the Euclidian boundary normal coordinates (θ̃, t̃, τ̃) associated to ∂K
has the form
ds² = 4dτ̃² + dt̃² + 4τ̃² dθ̃²,   0 < τ̃ < 1/4.
The metric corresponding to the metamaterials used in the physical exper-
iment in [20] has the same form in Euclidian boundary normal coordinates
associated to an infinitely long cylinder B2(0, 1)×R. Thus it seems likely that
metamaterials similar to those used in the experimental verification of cloak-
ing could be used to create physical wormhole devices working at microwave
frequencies.
4.3 Finite energy solutions of Maxwell’s equations
and the equivalence theorem
In the following, we consider 1-forms Ẽ = ∑_j Ẽj dx̃^j and H̃ = ∑_j H̃j dx̃^j in
the Euclidian coordinates (x̃1, x̃2, x̃3) of N ⊂ R3. In the sequel, we use Ein-
stein’s summation convention and omit the sum signs. We use the Euclidian
coordinates as we want to consider N with the differential structure inherited
from the Euclidian space. We say that Ẽj and H̃j are the (Euclidian) coeffi-
cients of the forms Ẽ and H̃, correspondingly. We say that these coefficients
are in L^p_loc(N, dV0), 1 ≤ p < ∞, if
∫_W |Ẽj(x)|^p dV0(x) < ∞, for all bounded measurable sets W ⊂ N.
Definition 4.1 We say that the 1-forms Ẽ and H̃ are finite energy solutions
of Maxwell’s equations in N with the soft-and-hard (SH) boundary conditions
on Σ and the frequency k ≠ 0,
∇× Ẽ = ikµ̃(x)H̃, ∇× H̃ = −ikε̃(x)Ẽ + J̃ on N,
η̃ · Ẽ|Σ = 0, η̃ · H̃|Σ = 0,
if 1-forms Ẽ and H̃ and 2-forms D̃ = ε̃Ẽ and B̃ = µ̃H̃ in N have coefficients
in L1loc(N, dV0) and satisfy
‖Ẽ‖²_{L²(W, |g̃|^{1/2} dV0)} = ∫_W ε̃^{jk} Ẽj Ẽk dV0(x) < ∞,
‖H̃‖²_{L²(W, |g̃|^{1/2} dV0)} = ∫_W µ̃^{jk} H̃j H̃k dV0(x) < ∞
for all bounded measurable sets W ⊂ N , and finally,
∫_N ((∇× h̃) · Ẽ − ik h̃ · µ̃(x)H̃) dV0(x) = 0,
∫_N ((∇× ẽ) · H̃ + ẽ · (ik ε̃(x)Ẽ − J̃)) dV0(x) = 0,
for all 1-forms ẽ and h̃ with coefficients in C∞0 (N) that satisfy
η̃ · ẽ|Σ = 0, η̃ · h̃|Σ = 0, (11)
where η̃ = ∂θ is the angular vector field that is tangential to Σ.
Below, we use for 1-forms E = Ej dx^j and H = Hj dx^j, given in local coordi-
nates (x1, x2, x3) on M, the notations
∇× E = dE, ∇×H = dH, ∇· (εE) = d ∗ E, ∇· (µH) = d ∗ H,
where d is the exterior derivative and ∗ is the Hodge operator on (M, g), cf.
formula (10).
We have the following “equivalent behavior of electromagnetic fields on N
and M” result, analogous to the results of [3] for cloaking.
Theorem 4.2 Let E and H be 1-forms on M \ γ and Ẽ and H̃ be 1-forms
with coefficients in L1loc(N, dV0) such that E = F ∗Ẽ, H = F ∗H̃. Let J̃
and J = F ∗J̃ be 2-forms with smooth coefficients in N and M \ γ that are
supported away from Σ and γ.
Then the following are equivalent:
1. On N , the 1-forms Ẽ and H̃ satisfy Maxwell’s equations with SH bound-
ary conditions in the sense of Definition 4.1.
2. On M , the forms E and H can be extended on M so that they are
classical solutions E and H of Maxwell’s equations,
∇× E = ikµH, in M,
∇×H = −ikεE + J, in M.
Proof. Assume first that E and H satisfy Maxwell’s equations on M with
source J supported away from γ. Then E and H are C∞ smooth near γ.
Using F−1 : N → M \ γ we define the 1-forms Ẽ, H̃ and 2-form J̃ on N
by Ẽ = (F−1)∗E, H̃ = (F−1)∗H , and J̃ = (F−1)∗J. These fields satisfy
Maxwell’s equations in N ,
∇× Ẽ = ikµ̃(x)H̃, ∇× H̃ = −ikε̃(x)Ẽ + J̃ in N. (12)
Now, writing E = Ej(x) dx^j on M near γ, we see using the transformation
rule for differential 1-forms that the form Ẽ = (F^{−1})∗E is in local coordinates
Ẽ = Ẽj(x̃) dx̃^j,   Ẽj(x̃) = (DF^{−1})^k_j(x̃) Ek(F^{−1}(x̃)),   x̃ ∈ N. (13)
Using the smoothness of E and H near γ on M and formulae (7), we see
that Ẽ, H̃ are forms on N with L1loc(N, dV0) coefficients. Moreover,
ε̃(x)Ẽ(x) = det(DF(y))^{−1} DF(y) ε(y) DF(y)^t (DF(y)^t)^{−1} E(y)
= det(DF(y))^{−1} DF(y) ε(y) E(y),
where x ∈ N , y = F−1(x) ∈ M \ γ. Formula (9) shows that D̃ = ε̃Ẽ, and
B̃ = µ̃H̃ are 2-forms on N with L1loc(N, dV0) coefficients.
Let Σ(t) ⊂ N be the t-neighbourhood of Σ in the g̃-metric. Note that for
small t > 0 the set Σ(t) is the Euclidian (t/2)-neighborhood of ∂K. Denote
by ν the unit exterior Euclidian normal vector of ∂Σ(t) and the Euclidian
inner product by (η̃, Ẽ)ge = η̃ · Ẽ.
Formulas (7) and (13) imply that the angular components satisfy
|η̃ · Ẽ| ≤ Ct, x ∈ ∂Σ(t),
|ζ̃ · Ẽ| ≤ C, x ∈ ∂Σ(t)
with some C > 0. Thus denoting by dS the Euclidian surface area on ∂Σ(t),
Stokes’ formula, formula (12), and the identity ν × ξ̃ = ±η̃ yield
((∇× h̃) · Ẽ − ikh̃ · µ̃H̃) dV0(x)
= lim
N\Σ(t)
((∇× h̃) · Ẽ − ikh̃ · µ̃H̃) dV0(x)
= − lim
∂Σ(t)
(ν × Ẽ) · h̃ dS(x)
= − lim
∂Σ(t)
ν × ((η̃ · Ẽ)η̃ + (ζ̃ · Ẽ)ζ̃) · h̃ dS(x)
for a test function h̃ satisfying formula (11).
Similar analysis for H̃ shows that 1-forms Ẽ and H̃ satisfy Maxwell’s equa-
tions with SH boundary conditions in the sense of Definition 4.1.
Next, assume that Ẽ and H̃ form a finite energy solution of Maxwell’s equa-
tions on (N, g̃) with a source J̃ supported away from Σ, implying in particular
ε̃jkẼjẼk ∈ L1(W, dV0), µ̃jkH̃jH̃k ∈ L1(W, dV0)
where W = F (U \ γ) ⊂ N and U ⊂ M is a relatively compact open neigh-
bourhood of γ, supp (J̃)∩W = ∅. Define E = F ∗Ẽ, H = F ∗H̃ , and J = F ∗J̃
on M \ γ. Therefore we conclude that
∇×E = ikµ(x)H, ∇×H = −ikε(x)E + J, in M \ γ
εjkEjEk ∈ L1(U \ γ, dVg), µjkHjHk ∈ L1(U \ γ, dVg).
As representations of ε and µ, in local coordinates of M , are matrices that
are bounded from above and below, these imply that
∇× E ∈ L2(U \ γ, dVg), ∇×H ∈ L2(U \ γ, dVg),
∇· (εE) = 0, ∇· (µH) = 0, in U \ γ.
Let Ee, He ∈ L2(U, dVg) be measurable extensions of E and H to γ. Then
∇× Ee − ikµ(x)He = 0, in U \ γ,
∇× Ee − ikµ(x)He ∈ H−1(U, dVg),
∇×He + ikε(x)Ee = 0, in U \ γ,
∇×He + ikε(x)Ee ∈ H−1(U, dVg),
where H−1(U, dVg) is the Sobolev space with smoothness (−1) on (U, g).
Since γ is a subset with (Hausdorff) dimension 1 of the 3-dimensional domain
U, it has zero capacity. Thus, the Lipschitz functions on U that vanish on
γ are dense in H1(U), see [12]. Therefore, there are no non-zero distributions
in H−1(U) supported on γ. Hence we see that
∇× Ee − ikµ(x)He = 0, ∇×He + ikε(x)Ee = 0 in U.
This also implies that
∇· (εEe) = 0, ∇· (µHe) = 0 in U,
which, by elliptic regularity, imply that Ee and He are C∞ smooth in U .
In summary, E and H have unique continuous extensions to γ, and the
extensions are classical solutions to Maxwell’s equations. ✷
References
[1] G. Eleftheriades and K. Balmain, Negative-Refraction Metamaterials,
IEEE Press (Wiley-Interscience), 2005.
[2] T. Frankel, The geometry of physics, Cambridge University Press, Cam-
bridge, 1997.
[3] A. Greenleaf, Y. Kurylev, M. Lassas and G. Uhlmann, Full-wave invisi-
bility of active devices at all frequencies, ArXiv.org:math.AP/0611185,
2006; Comm. Math. Phys., to appear.
[4] A. Greenleaf, Y. Kurylev, M. Lassas and G. Uhlmann,
Electromagnetic wormholes and virtual magnetic monopoles,
ArXiv.org:math-ph/0703059, submitted, 2007.
[5] A. Greenleaf, M. Lassas, and G. Uhlmann, The Calderón problem for
conormal potentials, I: Global uniqueness and reconstruction, Comm.
Pure Appl. Math 56 (2003), no. 3, 328–352
[6] A. Greenleaf, M. Lassas, and G. Uhlmann, Anisotropic conductivities
that cannot be detected in EIT, Physiological Measurement (special issue
on Impedance Tomography), 24 (2003), pp. 413-420.
[7] A. Greenleaf, M. Lassas, and G. Uhlmann, On nonuniqueness for
Calderón’s inverse problem, Math. Res. Let. 10 (2003), no. 5-6, 685-693.
[8] I. Hänninen, I. Lindell, and A. Sihvola, Realization of generalized Soft-
and-Hard Boundary, Progr. In Electromag. Res., PIER 64, 317-333, 2006.
[9] S. Hawking and G. Ellis, The Large Scale Structure of Space-Time, Cam-
bridge Univ. Press, 1973.
[10] P.-S. Kildal, Definition of artificially soft and hard surfaces for electro-
magnetic waves, Electron. Lett. 24 (1988), 168–170.
[11] P.-S. Kildal, Artificially soft-and-hard surfaces in electromagnetics,
IEEE Trans. Ant. and Propag., 10 (1990), 1537-1544.
[12] T. Kilpeläinen, J. Kinnunen, and O. Martio, Sobolev spaces with zero
boundary values on metric spaces. Potential Anal. 12 (2000), no. 3, 233–
[13] R. Kohn, H. Shen, M. Vogelius, and M. Weinstein, in preparation.
[14] Y. Kurylev, M. Lassas, and E. Somersalo, Maxwell’s equations with a
polarization independent wave velocity: Direct and inverse problems, J.
Math. Pures Appl., 86 (2006), 237-270.
[15] M. Lassas, M. Taylor, G. Uhlmann, On determining a non-compact
Riemannian manifold from the boundary values of harmonic functions,
Comm. Geom. Anal. 11 (2003), 207-222.
[16] U. Leonhardt, Optical Conformal Mapping, Science 312 (23 June,
2006), 1777-1780.
[17] G. Milton, M. Briane, J. Willis, On cloaking for elasticity and physical
equations with a transformation invariant form, New J. Phys. 8 (2006),
[18] J.B. Pendry, D. Schurig, D.R. Smith, Controlling Electromagnetic
Fields, Science 312 (23 June, 2006), 1780-1782.
[19] J.B. Pendry, D. Schurig, D.R. Smith, Optics Express 14, 9794 (2006).
[20] D. Schurig, J. Mock, B. Justice, S. Cummer, J. Pendry, A. Starr, and
D. Smith, Metamaterial electromagnetic cloak at microwave frequencies,
Science 314 (10 Nov. 2006), 977-980.
[21] R. Weder, A rigorous time-domain analysis of full-wave electromagnetic
cloaking (invisibility), preprint, arXiv:0704.0248v1 (2007).
[22] M. Visser, Lorentzian Wormholes, AIP Press, 1997.
Department of Mathematics
University of Rochester
Rochester, NY 14627, USA, [email protected]
Department of Mathematical Sciences
University of Loughborough
Loughborough, LE11 3TU, UK, [email protected]
Institute of Mathematics
Helsinki University of Technology
Espoo, FIN-02015, Finland, [email protected]
Department of Mathematics
University of Washington
Seattle, WA 98195, USA, [email protected]
|
0704.0915 | Millimeter-Thick Single-Walled Carbon Nanotube Forests: Hidden Role of
Catalyst Support | Microsoft Word - manuscript_supergrowth070322.doc
Millimeter-Thick Single-Walled Carbon Nanotube Forests:
Hidden Role of Catalyst Support
Suguru Noda1*, Kei Hasegawa1, Hisashi Sugime1, Kazunori Kakehi1,
Zhengyi Zhang2, Shigeo Maruyama2 and Yukio Yamaguchi1
1 Department of Chemical System Engineering, School of Engineering, The University
of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
2 Department of Mechanical Engineering, School of Engineering, The University of
Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
A parametric study of so-called "super growth" of single-walled carbon nanotubes
(SWNTs) was done by using combinatorial libraries of iron/aluminum oxide catalysts.
Millimeter-thick forests of nanotubes grew within 10 min, and those grown by using
catalysts with a thin Fe layer (about 0.5 nm) were SWNTs. Although nanotube forests
grew under a wide range of reaction conditions such as gas composition and
temperature, the window for SWNT was narrow. Fe catalysts rapidly grew nanotubes
only when supported on aluminum oxide. Aluminum oxide, which is a well-known
catalyst in hydrocarbon reforming, plays an essential role in enhancing the nanotube
growth rates.
KEYWORDS: single-walled carbon nanotubes, vertically aligned nanotubes,
combinatorial method, growth mechanism
* Corresponding author. E-mail address: [email protected]
Soon after the realization of vertically-aligned single-walled carbon
nanotube (VA-SWNT) forests1) by alcohol catalytic chemical vapor deposition (ACCVD),2)
many groups achieved this morphology of nanotubes by several tricks in the CVD
conditions.3-6) Among these methods, the water-assisted method, the so-called "super
growth" method,3) realized an outstanding growth rate of a few micrometers per second,
thus yielding millimeter-thick VA-SWNT forests. Despite its significant impact on the
nanotube community, no other research groups have been successful in reproducing
"super growth". Later, the control of the nominal thickness of Fe in the Fe/ Al2O3
catalyst was shown to be crucial for controlling the number of walls and diameters of the
nanotubes.7) In this work, we carried out a parametric study of this growth method by
using a combinatorial method that we previously developed for catalyst optimization.8,9)
Si wafers that had a 50-nm-thick thermal oxide layer and quartz glass
substrates were used as substrates, and Fe/ SiO2, Fe/Al2Ox, and Fe/Al2O3 catalysts were
prepared by sputter deposition on them. An Al2Ox layer was formed by depositing
15-nm-thick Al on the substrates, and then exposing the layer to air. A 20-nm-thick
Al2O3 layer was formed by sputtering an Al2O3 target. Then, Fe was deposited on SiO2,
on Al2Ox, and on Al2O3. In some experiments, gradient-thickness profiles were formed
for Fe by using the combinatorial method previously described.9) The catalysts were set
in a tubular, hot-wall CVD reactor (22-mm inner diameter and 300-mm length), heated
to a target temperature (typically 1093 K), and kept at that temperature for 10 min while
being exposed to 27 kPa H2/ 75 kPa Ar at a flow rate of 500 sccm, to which H2O vapor
was added at the same partial pressure as for the CVD condition (i.e., 0 to 0.03 kPa).
During this heat treatment, Fe formed into a nanoparticle structure with a diameter and
areal density that depended on the initial Fe thickness.8) After the heat treatment, CVD
was carried out by switching the H2/ H2O /Ar gas to C2H4/ H2/ H2O/ Ar. The standard
condition was 8.0 kPa C2H4/ 27 kPa H2/ 0.010 kPa H2O/ 67 kPa Ar and 1093 K. The
samples were analyzed by using transmission electron microscopy (TEM) (JEOL
JEM-2000EX) and micro-Raman scattering spectroscopy (Seki Technotron, STR-250)
with an excitation wavelength at 488 nm.
Figure 1a shows a photograph of the nanotubes grown for 30 min under the
standard condition. Nanotubes formed forests that were about 2.5 mm thick. The taller
nanotubes at the edge compared with those at the center of the substrates indicate that
the nanotube growth rate was limited by the diffusion of the growth species through the
millimeter-thick forests of nanotubes. Figure 1b shows a TEM image of the as-grown
sample shown in the center of Fig. 1a. The nanotubes were mostly SWNTs. These
figures show that "super growth" was achieved. Although catalysts with a thicker Fe layer
(≥ 1 nm) yielded rapid growth over a wide range of CVD conditions, they mainly produced multi-walled
nanotubes (MWNTs) instead of SWNTs. Rapid growth of SWNTs requires
complicated optimization of the CVD conditions, i.e., the C2H4/ H2/ H2O pressures and the
growth temperature, because the thinner Fe catalyst layers (around 0.5 nm) yielded
rapid SWNT growth only within a narrow window near the standard condition.
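As a rough consistency check of the quoted rates, the arithmetic below uses only the numbers read off Fig. 1a (a small sketch, not part of the original analysis):

```python
# Average growth rate implied by Fig. 1a: a ~2.5-mm-thick forest after 30 min of growth.
thickness_um = 2.5e3     # 2.5 mm in micrometers
time_s = 30 * 60         # 30 min in seconds
print(f"average rate ~ {thickness_um / time_s:.1f} um/s")   # ~1.4 um/s, i.e. a few micrometers per second
```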
The effect of the catalyst supports on the nanotube growth was also studied
here. Figure 2a shows normal photographs of nanotubes grown by using three types of
combinatorial catalyst libraries; i.e., Fe/ SiO2, Fe/ Al2Ox, and Fe/ Al2O3. For the Fe/
SiO2 catalyst, the surface was slightly darker at regions with 0.4- to 0.5-nm-thick Fe.
For the Fe/ Al2Ox and Fe/ Al2O3 catalysts, the result was completely different; nanotube
forests even thicker than the substrates were formed within 10 min. Differences were also
evident between the catalysts with Al2Ox and Al2O3 supports. When Fe was
relatively thick (> 0.6 nm), nanotube forests grew thick by using either of these two
catalysts. When Fe was thinner (≤ 0.6 nm), however, nanotube forests grew thick only
by using the Fe/ Al2Ox catalyst (Fig. 2b). Figure 2c shows Raman spectra taken at
several locations for each catalyst library. For Fe/ SiO2, a Raman signal of nanotubes
was obtained only when the Fe layer was thin (i.e. ≤ 0.8 nm). The sharp and branched
G-band with small D-band and the peaks of radial breathing mode (RBM) indicate the
existence of SWNTs. The G/D peak area ratios exceeding 10 indicate that the SWNTs
were of relatively good quality. For Fe/ Al2Ox, the Raman signal of nanotubes was
observed also for a thick Fe region (i.e. ≥ 1.0 nm) with G/D ratios somewhat smaller
than the G/D ratios for Fe/ SiO2. The G/D ratio of 10 for the nanotubes by 0.5 nm Fe/
Al2Ox shows that the SWNTs still were of relatively good quality compared with the
original "super growth".3) As the Fe thickness was increased, G/D ratios became smaller
because MWNTs became the main product at the thicker Fe regions. For Fe/ Al2O3, the
results were similar to those for Fe/ Al2Ox except when the Fe layer was thin (around 0.5
nm), where nanotube forests did not grow. A similar phenomenon was also observed for
Co and Ni catalysts; they yielded nanotube forests when supported on an aluminum
oxide layer. These results show that an aluminum oxide layer is essential for "super
growth", that the growth rate enhancement by Al2Ox might accompany some decrease in
the G/D ratio, and that the catalyst Fe layer needs to be thin (< 1 nm for the CVD
condition studied here) to grow SWNTs. An Al2Ox catalyst support was more suitable
than Al2O3 to grow SWNTs, and the underlying growth mechanism is now under
investigation.
The effect of the H2O vapor on the nanotube growth was studied next. Figure
3a shows the thickness profiles of nanotube forests grown on the Fe/ Al2Ox catalyst
library. In the absence of H2O vapor, nanotubes grew at the thin Fe region (0.3- to 1-nm
thick). Addition of 0.010 kPa H2O, which corresponds to 100 ppmv in the reactant gases,
enhanced the nanotube growth, especially at the thicker Fe region (> 0.7 nm). Further
addition of H2O (0.030 kPa), however, inhibited the nanotube growth at the thinner Fe
region (0.3- 0.6 nm) where SWNTs grew at lower H2O partial pressures. Figure 3b
shows Raman spectra of these samples. Slight addition of H2O (0.01 kPa) did not affect
the G/D ratio at the thin Fe region (0.5 nm) but decreased the G/D ratio at the thicker
region (0.8 and 1.0 nm). Further addition of H2O (0.03 kPa) significantly decreased the
G/D ratio at the whole region of Fe thickness. These results show that the H2O addition
up to a certain level can enhance the nanotube growth rate, but too much addition
degrades the nanotube quality.
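For reference, the 100 ppmv figure quoted above follows directly from the partial pressures of the standard condition (a small arithmetic sketch):

```python
# H2O concentration for the standard condition: 8.0 kPa C2H4 / 27 kPa H2 / 0.010 kPa H2O / 67 kPa Ar.
p = {"C2H4": 8.0, "H2": 27.0, "H2O": 0.010, "Ar": 67.0}   # partial pressures in kPa
total = sum(p.values())
print(f"{p['H2O'] / total * 1e6:.0f} ppmv H2O")           # ~98 ppmv, i.e. about 100 ppmv
```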
Considering that alumina and its related materials catalyze hydrocarbon
reforming,10) a possible mechanism for "super growth" is proposed as follows: C2H4 or
its derivatives adsorb onto aluminum oxide surfaces, diffuse on the surface to be
incorporated into Fe nanoparticles, and segregate as nanotubes from Fe nanoparticles.
H2O vapor keeps the aluminum oxide surface reactive by removing carbon byproducts,
while simultaneously, H2O reacts with the nanotubes and degrades the quality of the
nanotubes. The C2H4/H2O pressure ratio needs to be kept large (790 for the standard
condition in this work) as previously reported in ref. 11. The complicated optimization
among C2H4, H2, and H2O to achieve "super growth" of SWNTs indicates that balancing
the carbon fluxes of adsorption onto aluminum oxides, the surface diffusion from
aluminum oxides to Fe nanoparticles, and the segregation as nanotubes from Fe
nanoparticles is essential to sustain the rapid nanotube growth at a few micrometers per
second. During nanotube growth, because the surface of catalyst nanoparticles is mostly
covered by nanotubes, nanotube growth can be enhanced by introducing a carbon
source not only through the limited open sites on catalyst nanoparticles but also through
the catalyst supports whose surface remains uncovered with growing nanotubes. This
concept might provide a new route for further development of supported catalysts for
nanotube growth.
Acknowledgements:
This work is financially supported in part by the Grant-in-Aid for Young Scientists (A),
18686062, 2006, from the Ministry of Education, Culture, Sports, Science and
Technology (MEXT), Japan.
References:
1) Y. Murakami, S. Chiashi, Y. Miyauchi, M. Hu, M. Ogura, T. Okubo and S.
Maruyama: Chem. Phys. Lett. 385 (2004) 298.
2) S. Maruyama, R. Kojima, Y. Miyauchi, S. Chiashi and M. Kohno: Chem. Phys. Lett.
360 (2002) 229.
3) K. Hata, D.N. Futaba, K. Mizuno, T. Nanami, M. Yumura and S. Iijima: Science 306
(2004) 1362.
4) G. Zhong, T. Iwasaki, K. Honda, Y. Furukawa, I. Ohdomari and H. Kawarada: Jpn. J.
Appl. Phys. 44 (2004) 1558.
5) L. Zhang, Y. Tan and D.E. Resasco: Chem. Phys. Lett. 422 (2006) 198.
6) G. Zhang, D. Mann, L. Zhang, A. Javey, Y. Li, E. Yenilmez, Q. Wang, J. P. McVittie,
Y. Nishi, J. Gibbons and H Dai, Proc. Nat. Acad. Sci. 102 (2005) 16141.
7) T. Yamada, T. Nanami, K. Hata, D.N. Futaba, K. Mizuno, J. Fan, M. Yudasaka, M.
Yumura and S. Iijima: Nat. Nanotechnol. 1 (2006) 131.
8) S. Noda, Y. Tsuji, Y. Murakami and S. Maruyama: Appl. Phys. Lett. 86 (2005)
173106.
9) S. Noda, H. Sugime, T. Osawa, Y. Tsuji, S. Chiashi, Y. Murakami and S. Maruyama:
Carbon 44, (2006) 1414.
10) S.E. Tung and E, Mcininch: J. Catal. 4 (1965) 586.
11) D.N. Futaba, K. Hata, T. Yamada, K. Mizuno, M. Yumura and S. Iijima: Phys. Rev.
Lett. 95 (2005) 056104.
Figure Captions:
Fig. 1. Typical nanotubes grown in this work. (a) Normal photographs of nanotube
forests grown on Fe/ Al2Ox for 30 min under the standard condition (8.0 kPa C2H4/ 27
kPa H2/ 0.010 kPa H2O/ 67 kPa Ar and 1093 K). Fe catalyst thickness was uniform at
0.45 nm (left sample), 0.50 nm (middle), and 0.55 nm (right). (b) TEM image of
nanotubes in Fig. 1a grown using 0.50-nm-thick Fe catalysts. Insets show the enlarged
images (2.5x) of nanotubes.
Fig. 2. Effect of support materials for Fe catalyst on nanotube growth. Nanotubes were
grown for 10 min under the standard condition. (a) Photographs of nanotubes grown by
using combinatorial catalyst libraries, which had a nominal Fe thickness profile ranging
from 0.2 nm (at left on each sample) to 3 nm (right) formed on either SiO2, Al2Ox, or
Al2O3. (b) Relationship between the thickness of nanotube forest (shown in Fig. 2a) and
the nominal Fe thickness of the catalyst. (c) Raman spectra of the same samples.
Intensity at the low wavenumber region (< 300 cm-1) is shown magnified by a factor of
5x in this figure. Declined background signals in some of the RBM spectra (e.g., 0.5,
0.8-nm-Fe/ SiO2 and 0.5-nm-Fe/ Al2O3) were due to the signal from SiO2 substrates
passing through the thin nanotube layer.
Fig. 3. Effect of H2O vapor on the nanotube growth. Nanotubes were grown using Fe/
Al2Ox combinatorial catalyst libraries for 10 min under the standard condition except for
H2O partial pressures. (a) Relationship between the thickness of nanotube forest and the
nominal Fe thickness of the catalyst at different H2O partial pressures. (b) Raman
spectra of the same samples. Intensity at the low wavenumber region (< 300 cm-1) is
shown magnified by a factor of 5x in this figure.
Fig. 1 S. Noda, et al., submitted to Jpn. J. Appl. Phys.
Fig. 2 S. Noda, et al., submitted to Jpn. J. Appl. Phys. [Panels: forest thickness versus nominal Fe thickness (nm) on SiO2, Al2Ox, and Al2O3, and Raman spectra (Raman shift, cm-1) with G/D ratios for 0.5-, 0.8-, and 1.0-nm Fe.]
Fig. 3 S. Noda, et al., submitted to Jpn. J. Appl. Phys. [Panels: forest thickness versus nominal Fe thickness (nm) at 0, 0.010, and 0.030 kPa H2O, and Raman spectra (Raman shift, cm-1) with G/D ratios for 0.5-, 0.8-, and 1.0-nm Fe.]
|
0704.0916 | Test of nuclear level density inputs for Hauser-Feshbach model
calculations | Test of nuclear level density inputs for Hauser-Feshbach model calculations
A.V. Voinov1,∗, S.M. Grimes1, C.R. Brune1, M.J. Hornish1, T.N. Massey1, A. Salas1,2
1 Department of Physics and Astronomy, Ohio University, Athens, OH 45701, USA
2 Los Alamos National Laboratory, P-25 MS H846, Los Alamos, New Mexico 87545, USA
∗ Electronic address: [email protected]
The energy spectra of neutrons, protons, and α-particles have been measured from the d+59Co
and 3He+58Fe reactions leading to the same compound nucleus, 61Ni. The experimental cross
sections have been compared to Hauser-Feshbach model calculations using different input level
density models. None of them have been found to agree with experiment. It manifests the serious
problem with available level density parameterizations especially those based on neutron resonance
spacings and density of discrete levels. New level densities and corresponding Fermi-gas parameters
have been obtained for reaction product nuclei such as 60Ni,60Co, and 57Fe.
I. INTRODUCTION
The nuclear level density (NLD) is an important in-
put for the calculation of reaction cross sections in the
framework of Hauser-Feshbach (HF) theory of compound
nuclear reactions. Compound reaction cross sections are
needed in many applications including astrophysics and
nuclear data for science and technology. In astrophysics
the knowledge of reaction rates is crucial for understand-
ing nucleosynthesis and energy generation in stars and
stellar explosions. In many astrophysical scenarios, e.g.
the r-process, the cross sections required to compute the
reaction rates are in the regime where the statistical ap-
proach is appropriate [1]. In these cases HF calculations
are an essential tool for determining reaction rates, par-
ticularly for reactions involving radioactive nuclei which
are presently inaccessible to experiment. HF calculations
are likewise very important for other applications, e.g.,
the advanced reactor fuel cycle program [2].
The statistical approach utilized in HF theory [3] re-
quires knowledge of the two quantities for participating
species (see details below). These are the transmission
coefficients of incoming and outgoing particles and level
densities of residual nuclei. Transmission coefficients can
be obtained from optical model potentials established on
the basis of experimental data of elastic and total cross
sections. Because of experimental constraints, the dif-
ference between various sources of transmission coeffi-
cients usually does not exceed 10 − 15%. Level densi-
ties are more uncertain. The reason is that it is diffi-
cult to obtain them experimentally above the region of
well-resolved discrete low-lying levels known from nuclear
spectroscopy. At present, the level density for practi-
cal applications is calculated mainly on the basis of the
Fermi-gas [4] and Gilbert-Cameron [5] formulas with ad-
justable parameters which are found from experimental
data on neutron resonance spacing and the density of low-
lying discrete levels. Parameters recommended for use in
HF calculations are tabulated in Ref. [6]. The global pa-
rameter systematics for both the Fermi-gas and Gilbert-
Cameron formulas have been developed in Ref. [7]. How-
ever, it is still unclear how well these parameters repro-
duce compound reaction cross sections. No systematic
investigations have been performed yet. Experimental
data on level density above discrete levels are scarce.
Some information is available from particle evaporation
spectra (i.e. from compound nuclear reactions). The lat-
est data obtained from (p,n) reaction on Sn isotopes [8]
claims that the available level density parameters do not
reproduce neutron cross sections thereby indicating the
problem with level density parameterizations. It becomes
obvious that more experimental data on level density are
needed in the energy region above discrete levels.
In this work, we study compound nuclear reactions
to obtain information about level densities of the resid-
ual nuclei from particle evaporation spectra. Two dif-
ferent reactions, d+59Co and 3He+58Fe, which produce
the same 61Ni compound nucleus, have been investigated.
This approach helps to eliminate uncertainties connected
to a specific reaction mechanism. As opposed to most of
the similar experiments where only one type of outgo-
ing particles has been measured, we have measured cross
sections of all main outgoing particles including neu-
trons, protons, and α-particles, populating 60Ni, 60Co,
and 57Fe, respectively.
We will begin with a discussion of the present status of
level density estimates used as inputs for HF calculations.
II. METHODS OF LEVEL DENSITY
ESTIMATES FOR HF CODES
The simple level counting method to determine the
level density of a nucleus works only up to a certain exci-
tation energy below which levels are well separated and
can be determined from nuclear spectroscopy. This re-
gion is typically up to 2 MeV for heavy nuclei and up
to 6-9 MeV for light ones. Above these energies, more
sophisticated methods have to be applied.
A. Level density based on neutron resonance
spacings
In the region of neutron resonances which are located
just above the neutron binding energy (Bn), the level
density can again be determined by counting. In this case
neutron resonances are counted; one must also take into
account the assumed spin cut-off factor σ. Traditionally,
because of the the absence of reliable data below Bn, the
level density is determined by an interpolation procedure
between densities of low-lying discrete levels and den-
sity obtained from neutron resonance spacing.The Bethe
Fermi-gas model [4] with adjustable parameters a and δ
is often used as an interpolation formula:
ρ(E) =
exp[2
a(E − δ)]
2σa1/4(E − δ)5/4
, (1)
where σ is the spin cut-off factor determining the level
spin distribution. There are a few drawbacks to this ap-
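For later use, formula (1) is straightforward to code. The sketch below assumes the standard 1/(12√2) prefactor of the back-shifted Fermi-gas formula (the prefactor is garbled in the extracted source) and purely illustrative parameter values:

```python
import numpy as np

def fermi_gas_level_density(E, a, delta, sigma):
    """Total level density of Eq. (1); E and delta in MeV, a in 1/MeV, result in levels/MeV.
    The 1/(12*sqrt(2)) prefactor is the standard back-shifted Fermi-gas one (assumed here)."""
    U = E - delta
    return np.exp(2.0 * np.sqrt(a * U)) / (12.0 * np.sqrt(2.0) * sigma * a**0.25 * U**1.25)

# Illustrative values only (not fitted parameters from this work):
print(f"{fermi_gas_level_density(E=8.0, a=6.0, delta=1.0, sigma=3.0):.1f} levels/MeV")
```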
proach. One shortcoming is that it uses an assumption
that the selected model is valid in the entire excitation
energy region including low-lying discrete states and neu-
tron resonances. Undoubtedly this is correct for some of
the nuclei. A nice example is the level density of 26Al
which exhibits Fermi-gas behavior up to 8 MeV of exci-
tation energy [7]. On the other hand the level densities
of 56,57Fe measured with the Oslo method [9], for example,
show complicated behavior which cannot be described
by a simple Fermi-gas formula. The reason for this might
be the influence of pairing correlations leading to step
structures in vicinity of proton and neutron paring en-
ergies and above. In such cases the model function fit
to discrete levels may undergo considerable deviations in
the higher excitation energy region leading to incorrect
determination of level density parameters.
Another consideration is associated with the spin cut-
off parameter, which is important in the determination of the
total level density from the density of neutron resonances at
Bn. In the Fermi-gas model the spin cut-off parameter is
determined according to:
σ² = m² g t = (I/ħ²) t, (2)
where m² is the average of the square of the single-particle
spin projections, t = √((E − δ)/a) is the temperature,
g = 6a/π² is the single-particle level density, and I is the rigid-
body moment of inertia, expressed as I = (2/5)µAR²,
where µ is the nucleon mass, A is the mass number and
R = 1.25A^{1/3} is the nuclear radius. The spin cut-off
parameter in the rigid-body model is:
σ1² = 0.0146 A^{5/3} t = 0.0146 A^{5/3} √((E − δ)/a). (3)
On the other hand, Gilbert and Cameron [5] used
m² = 0.146 A^{2/3}. The corresponding formula for σ is:
σ2² = 0.089 A^{2/3} a √((E − δ)/a). (4)
Eqs. (3) and (4) have the same energy and A dependence
(σ² ∼ A^{7/6}(E − δ)^{1/2}) but differ by a factor of ≈ 2. It
should be mentioned also that the recent model calcula-
tions [10] show the suppression of the moment of inertia
at low temperatures compared to its rigid body value.
Thus uncertainties in spin cut off parameter transform
to corresponding uncertainties of total level densities de-
rived from neutron resonance spacings.
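A short numerical comparison makes the difference between the two prescriptions explicit. The sketch assumes the reconstructed forms of Eqs. (3) and (4) and purely illustrative values of A, E, δ, and a; their ratio scales as 0.164·A/a, i.e. it approaches a factor of two for smaller a:

```python
import numpy as np

def sigma2_rigid(A, E, delta, a):
    """Spin cut-off squared of Eq. (3), rigid-body moment of inertia."""
    return 0.0146 * A**(5.0 / 3.0) * np.sqrt((E - delta) / a)

def sigma2_gc(A, E, delta, a):
    """Spin cut-off squared of Eq. (4), Gilbert-Cameron choice of m^2."""
    return 0.089 * A**(2.0 / 3.0) * a * np.sqrt((E - delta) / a)

A, E, delta = 60, 8.0, 1.0
for a in (5.0, 7.5):                 # illustrative level density parameters, in 1/MeV
    s1, s2 = sigma2_rigid(A, E, delta, a), sigma2_gc(A, E, delta, a)
    print(f"a = {a}: sigma1^2 = {s1:.1f}, sigma2^2 = {s2:.1f}, ratio = {s1 / s2:.2f}")
```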
Experimentally, the spin cutoff parameter can be ob-
tained only from spin distribution of low-lying discrete
levels. However, because of the small number of known
spins, the uncertainty of such procedure is large. It turns
out that reported systematics based on such investiga-
tion, σ = (0.98 ± 0.23)A^(0.29±0.06) [7], is different from the
above expressions, for which σ ∼ √(A^{7/6}) = A^{0.58}. At
higher excitation energies determining the cutoff param-
eter becomes problematic due to the high level density
and the absence of the reliable observables sensitive to
this parameter. One can mention Ref. [11] where the
spin cutoff parameter has been determined from the an-
gular distribution of evaporation neutrons with α and
proton projectiles. The deviation from the expected A
dependence has also been reported. The absolute values
of the parameter agree with Eq. (3).
The parity dependence of level densities is also not
established experimentally beyond the discrete level re-
gion. At the neutron binding energy the assumption is
usually made about the equality of negative and positive
parity states. This is supported by some experimental
results [12]. However, recent calculations, performed for
Fe, Ni, and Zn isotopes, show that for some of them the
assumption of equally distributed states is not fulfilled
even far beyond the neutron binding energy, up to exci-
tation energies 15-20 MeV [13].
As is seen from the above considerations, the cal-
culation of the total level density from neutron res-
onance spacing might contain uncertainties associated
with many factors such as the possible deviation from
Fermi-gas dependence in interpolation region, uncertain-
ties in spin cutoff parameter and inequality of states with
different parity. Thus the question of how large these
uncertainties are or to what extent the level density ex-
tracted in such a way can be applicable to calculations
of reaction cross sections still remains important and not
completely resolved.
B. Level density from evaporation particles
The cross section of evaporated particles from the first
stage of a compound-nuclear reaction (i.e. when the out-
going particle is the first particle resulting from com-
pound nucleus decay ) can be calculated in the framework
of the Hauser-Feshbach theory:
dσ/dεb (εa, εb) = σCN(εa) Σ_{J,π} [ Σ_{I,π} Γb(U, J, π, E, I, π) ρb(E, I, π) ] / Γ(U, J, π), (5)
Γ(U, J, π) = Σ_{b′} [ Σ_{k} Γb′(U, J, π, Ek, Ik, πk) + Σ_{I′,π′} ∫_{Ec}^{U−Bb′} dE′ Γb′(U, J, π, E′, I′, π′) ρb′(E′, I′, π′) ]. (6)
Here σCN (εa) is the fusion cross section, εa and εb
are energies of relative motion for incoming and outgo-
ing channels (εb = U − Ek − Bb, where Bb is the sepa-
ration energy of particle b from the compound nucleus),
the Γb are the transmission coefficients of the outgoing
particle, and the quantities (U, J, π) and (E, I, π) are the
energy, angular momentum, and parity of the compound
and residual nuclei, respectively. The energy Ec is the
continuum edge, above which levels are modeled using a
level density parameterization. For energies below Ec the
known excitation energies, spins, and parities of discrete
levels are used. In practice Ec is determined by the avail-
able spectroscopic data in the literature. It follows from
Eq. (6) that the cross section is determined by both trans-
mission coefficients of outgoing particles and the NLD of
the residual nucleus ρb(E, I, π). It is believed that trans-
mission coefficients are known with sufficient accuracy
near the line of stability because they can be obtained
from optical model potentials usually based on experi-
mental data for elastic scattering and total cross sections
in the corresponding outgoing channel. Transmission co-
efficients obtained from different systematics of optical
model parameters do not differ by more that 15-20 %
from each other in our region of interest (1− 15 MeV of
outgoing particles). The uncertainties in level densities
are much larger. Therefore the Hauser-Feshbach model
can be used to improve level densities by comparing ex-
perimental and calculated particle evaporation spectra.
Details and assumptions of this procedure are described
in Refs [14, 15].
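The logic of this procedure can be illustrated with a deliberately simplified, spin-independent (Weisskopf-type) toy model rather than the full Hauser-Feshbach expressions (5)-(6); everything in the sketch below (the toy spectrum, the constant inverse cross section, the parameter values) is an assumption made for illustration only:

```python
import numpy as np

# Toy first-stage spectrum: dsigma/deps ~ eps * sigma_inv(eps) * rho(U - B - eps),
# with sigma_inv taken constant and a Fermi-gas-like rho.
U, B, a_true, delta = 18.0, 8.0, 6.0, 1.0                   # MeV and 1/MeV
eps = np.linspace(1.0, U - B - delta - 0.5, 40)             # outgoing particle energies
U_res = U - B - eps                                         # residual excitation energies
spectrum = eps * np.exp(2.0 * np.sqrt(a_true * (U_res - delta)))

# Analysis step: divide out the kinematic factor, then fit ln(rho) = 2*sqrt(a*(U_res - delta)) + c.
ln_rho = np.log(spectrum / eps)
design = np.vstack([2.0 * np.sqrt(U_res - delta), np.ones_like(U_res)]).T
a_fit = np.linalg.lstsq(design, ln_rho, rcond=None)[0][0] ** 2
print(f"recovered a = {a_fit:.2f} 1/MeV (input {a_true})")
```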
The advantage of this method is that because of the
wide range of spin population in both the compound and
final nuclei, evaporation spectra are determined by the
total level density (integrated over all level spins) as op-
posed to the neutron resonance technique where reso-
nances are known for one or two spins and one parity.
The drawback stems from possible direct or multistep
compound reaction contributions distorting the evapora-
tion spectra, especially in the region of low-lying discrete
levels needed for the absolute normalization of obtained
level densities.
According to Hodgson [16], the interaction process can
usefully be considered to take place in a series of stages
corresponding to the successive nucleon-nucleon interac-
tion until complete equilibrium is reached. At each stage
it is possible for particles to be emitted from the nucleus.
The direct reactions refer to the fast, first stages of this
process giving a forward-peaked angular distribution. The
term multistep direct reaction implies that such a pro-
cess may take place in a number of stages. Compound
nuclear reactions refer to all processes giving angular
distributions symmetric about 90°; they are subdivided
into multistep compound reactions that take place before
the compound system has attained final statistical equi-
librium and statistical compound reactions that corre-
spond to the evaporation of particles from an equilibrium
system.
The use of evaporation spectra to infer level densities
requires that the reaction goes through to complete equi-
librium. Significant contributions from either multistep
direct or multistep compound reactions could cause in-
correct level density parameters to be deduced. Multi-
step direct reactions would usually be forward peaked
and also concentrated in peaks. If the reaction has a lim-
ited number of stages, the two-body force cannot cause
transitions to states which involve a large number of rear-
rangements from the original state. Multistep compound
reactions would be expected to lead to angular distribu-
tions which are symmetric about 90°. They would, if
complete equilibration has not occurred, also preferen-
tially reach states which are similar to the target plus pro-
jectile. The shape of spectra from a multistep compound
reaction would be different for a deuteron-induced as op-
posed to a 3He-induced reaction. Hodgson has reviewed
[16] the evidence for multistep compound reactions. He
finds the most convincing evidence for such contributions
comes from fluctuation measurements for the 27Al(3He,p)
reaction. In this case, certain low-lying states show level
widths in the compound system which are larger than
expected. These states are low-lying and are the ones
which would be most likely to show such effects. It ap-
pears that measurements of continuum spectra do not
show evidence of such contributions. The uncertainties
connected to contributions of pre-equilibrium reactions
are generally difficult to estimate experimentally. The
measurement of angular distribution does not solve the
problem in the case of multistep compound mechanism.
We believe that the use of different reactions to form the
same compound nucleus is the most reliable way to esti-
mate and eliminate such contributions.
In this work we investigate reactions with deuteron and
3He projectiles on 59Co and 58Fe, respectively. These two
reactions form the same compound nucleus, 61Ni. The
purpose was to investigate if the cross section of outgo-
ing particles from both reactions can be described in the
framework of Hauser-Feshbach model with same set of
level density parameters. This is possible only when pro-
duction cross section is due to compound reaction mecha-
nism in both reactions. Neutron, protons, and α-particles
have been measured. These outgoing particles exhaust
the majority of the fusion cross section. The ratio be-
tween cross sections of different particles is determined
by the ratio of level densities of corresponding residual
nuclei. It puts constraints on relative level density val-
ues obtained from an experiment. In our experiment, the
level densities of 60Ni, 60Co, and 57Fe residual nuclei have
been determined from the region of the energy spectra of
neutron, proton, and α-particles where only first-stage
emission is possible.
III. EXPERIMENT AND METHOD
The tandem accelerator at Ohio University’s Edwards
Accelerator Laboratory provided 3He and deuteron
beams with energies of 10 and 7.5 MeV, respectively.
Self-supporting foils of 0.625-mg/cm2 58Fe (82% en-
riched) and 0.89-mg/cm2 59Co (100% natural abun-
dance) have been used as targets. The outgoing charged
particles were registered by charged-particle spectrome-
ters as shown in Fig. 2. The setup has ten 2-m time-
of-flight legs ending with Si detectors (see Fig. 1). Legs
are set up at different angles ranging from 22.5◦ up to
157.5◦. The mass of the charged particles is determined
by measuring both the energy deposited in Si detectors
and the time of flight. Additionally, a neutron detector
was placed at the distance of 140 cm from the target to
measure the neutron energy spectrum. The mass resolu-
tion was sufficient to resolve protons, deuterons, 3H/3He,
and α-particles.
FIG. 1: Charged-particle spectrometer utilized for the mea-
surements.
Additionally, the neutron spectra from both the
58Fe(3He, Xn) and the 59Co(d,Xn) reactions have been
measured by the time-of-flight method with the Swinger
facility of Edwards Laboratory [17]. Here a flight path
of 7 m has been used to obtain better energy resolution
for outgoing neutrons, allowing us to measure the shape
of neutron evaporation spectrum more accurately. The
energy of the outgoing neutrons is determined by time-of-
flight method. The 3-ns pulse width provided an energy
resolution of about 100 keV and 800 keV at 1 and 14 MeV
of neutrons, respectively. The neutron detector efficiency
was measured with neutrons from the 27Al(d, n) reaction
on a stopping Al target at Ed = 7.44 MeV [18]. This
measurement allowed us to determine the detector ef-
ficiency from 0.2 to 14.5 MeV neutron energy with an
accuracy of ∼ 6%. The neutron spectra have been mea-
sured at backward angles from 110◦ to 150◦. Additional
measurements with a blank target have been performed
at each angle to determine background contribution. The
absolute cross section has been calculated by taking into
account the target thickness, the accumulated charge of
incoming deuteron or 3He beam, and the neutron detec-
tor efficiency. The overall systematic error for the abso-
lute cross sections is estimated to be 15%. The errors in
ratios of proton and α cross sections are only a few per-
cents because they are determined by counting statistics
alone.
IV. EXPERIMENTAL PARTICLE SPECTRA
AND LEVEL DENSITY OF PRODUCT NUCLEI
Energy spectra of neutrons, protons, and α-particles
have been measured at backward angles (from 112◦ to
157◦) to eliminate contributions from direct reaction
mechanisms. Fig. 2 shows energy spectra of outgoing
particles for both the 3He+58Fe and d+59Co reactions.
The calculations of particle energy spectra have been
performed with Hauser-Feshbach (HF) program devel-
oped at Edwards Accelerator Lab of Ohio University
[19]. Particle transmission coefficients have been calcu-
lated with optical model potentials taken from the RIPL
data base [6]. Different potentials have been tested and
found to be the same within 15%. Alpha-particle po-
tentials are more uncertain. Differences between corre-
sponding α-transmission coefficients depends on the α-
energy and varies from ∼ 40% for lower α-energies to
< 1% for higher α-energies in our region of interest (8-
18 MeV). In order to reduce these uncertainties the RIPL
α-potentials have been tested against the experimental
data on low energy α−elastic scattering on 58Ni [20]. The
data have been reproduced best by the potential from
Ref.[21] which has been adopted for our HF calculations.
Four level density models have been chosen for testing:
• The M1 model uses the Bethe formula (1) with pa-
rameters adjusted to fit both discrete level density
and neutron s-wave resonance spacings.
• The M2 model uses the Gilbert-Cameron [5] for-
mula with parameters adjusted to fit both discrete
level density and neutrons s-wave resonance spac-
ings.
• The M3 model uses Bethe formula but δ parame-
ters are obtained from pairing energies according
to Ref. [1]. The a parameter has been adjusted
to match s-wave neutron resonance spacing. This
model does not fit discrete levels.
• The M4 model is based on microscopic Hartree-
Fock-BCS calculations [22] which are available from
RIPL data base [6]. According to Ref. [22], this
model has also been renormalized to fit discrete lev-
els and neutron resonance spacings.
FIG. 2: Particle energy spectra for the 3He+58Fe (upper panel) and d+59Co (lower panel) reactions; the panels show the outgoing neutron, proton, and alpha energy spectra (energies in MeV). The experimental data are shown by points. Solid lines are HF calculations with level density parameters extracted from the experiment. Calculations have been multiplied by a reduction factor K = 0.52 due to direct reaction contributions. Arrows show energies above which spectra contain only contributions from the first stage of the reaction.
The value of the total level density derived from neutron resonance spacings depends on the spin cutoff parameter used. Therefore two prescriptions, (3) and (4), for this parameter have been tested for the M1-M3 models. The M4 model uses its own spin distribution, which is close to prescription (3).
The measured particle energy spectra include particles
from all possible stages of the reaction. However, by lim-
iting our consideration to particles with energies above
a particular threshold, we can ensure that only particles
from the first stage of the reaction contribute. These
thresholds depend on the particular reaction and are in-
dicated by the arrows in Fig. 2. In this energy interval
cross sections are determined exclusively by the level den-
sity of those residual nuclei. Another aspect which should
be taken into consideration when comparing calculations
and experiment is the contribution of direct processes.
Direct processes take away the incoming flux resulting in
reduction of compound reaction contribution. Assuming
that the total reaction cross section (σ_R) can be decomposed into the sum of direct (σ_R^d) and compound (σ_R^c) reaction mechanisms, we have σ_R = σ_R^d + σ_R^c. In this case, the HF calculations should be multiplied by the constant factor K = σ_R^c/σ_R to correct for the absorbed incident flux which does not lead to compound nucleus formation. In our experiment K has been estimated from the ratio K_exp = (σ_n^exp + σ_α^exp)/(σ_n^calc + σ_α^calc) ≈ K,
where the experimental cross sections have been mea-
sured at backward angles. If level densities used in calcu-
lations are correct, Kexp = K. However, the calculations
show that this parameter is not very sensitive to input
level densities and can be estimated with ∼20% accuracy
with any reasonable level density models.
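As a minimal numerical sketch of how this reduction factor is formed (the cross-section values below are placeholders for illustration, not measurements or calculations from this work), K_exp is just the ratio of summed experimental to summed calculated evaporation cross sections:

```python
# Hypothetical angle-integrated cross sections in mb; NOT values from this work.
sigma_exp = {"n": 850.0, "alpha": 95.0}     # assumed "measured" compound cross sections
sigma_calc = {"n": 1600.0, "alpha": 180.0}  # assumed HF values computed with K = 1

k_exp = sum(sigma_exp.values()) / sum(sigma_calc.values())
print(f"K_exp ~ {k_exp:.2f}")  # the paper finds K in the range 0.48-0.54
```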
Table I shows the ratio of experimental and calculated cross sections for the different level density models used in the calculations. Calculations have been multiplied by the re-
duction factor K which for both reactions varied within
0.48-0.54 for different level density models. Results show
that all of the models reproduce neutron cross sections
within ∼20%. However, they overestimate α-particle
cross sections by ∼30% on average and underestimate
protons by 5-80%. None of the models reproduce the
ratio of p/α cross section; for example all models sys-
tematically overestimate this ratio by a factor of ∼2 for
the d+59Co reaction. Assuming that particle transmis-
sion coefficients are known with sufficient accuracy, we
conclude that the level density of residual nuclei is re-
sponsible for such disagreement. In particular, the level
density ratio ρ[57Fe]/ρ[60Co] is overestimated by model
calculations.
In order to obtain correct level densities, the following
procedure has been used as described in Ref. [23]. The
NLD model is chosen to calculate the differential cross
section of Eq. (6). The parameters of the model were
adjusted to reproduce the experimental spectra as closely
as possible. The input NLD was improved by binwise
renormalization according to the expression:
ρ_b(E, I, π) = ρ_b(E, I, π)_input × (dσ/dε_b)_meas / (dσ/dε_b)_calc .    (7)
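A minimal sketch of this binwise renormalization; the level-density and cross-section numbers below are placeholders, not the measured or calculated spectra:

```python
# Eq. (7): each excitation-energy bin of the input level density is rescaled by the
# ratio of the measured to the calculated differential cross section for the
# corresponding outgoing-particle energy.
rho_input   = [1.0e2, 2.5e2, 6.0e2, 1.4e3]   # levels/MeV per bin (assumed)
dsigma_meas = [0.80, 1.05, 1.60, 2.40]       # mb/MeV, "measured" (assumed)
dsigma_calc = [1.00, 1.10, 1.30, 2.00]       # mb/MeV, Hauser-Feshbach (assumed)

rho_improved = [r * m / c for r, m, c in zip(rho_input, dsigma_meas, dsigma_calc)]
print([round(x, 1) for x in rho_improved])
```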
The absolute normalization of the improved level den-
sities (later referred to as experimental level densities)
has been obtained by using discrete level densities of
60Ni populated by neutrons from the 59Co(d,n) reac-
tion. Protons and α-particles populating discrete lev-
els behave differently for different reactions. Fig. 2 shows that the ratio between experiment and calculations in the discrete-level energy region is greater for 59Co(d,p) compared to 58Fe(3He,p) and for 58Fe(3He,α) compared to 59Co(d,α). These enhancements are apparently reaction specific and connected to contributions from direct and/or multistep compound reaction mechanisms. We are not
able to make the same comparison for neutron spectra
because the counting statistics in the region of discrete
levels for 58Fe(3He,n) reaction are rather poor. However,
our recent result from 55Mn(d,n) [23] indicates that the
neutron spectrum measured at backward angles is purely
evaporated even for high energy neutrons populating dis-
crete levels. Therefore we used the neutron spectrum
from the 59Co(d,n) reaction to determine the absolute
normalization of the level density for the residual nu-
cleus 60Ni. The absolute level densities of both 60Co and
57Fe nuclei have been adjusted in such a way as to repro-
duce ratios of both neutron/proton and neutron/alpha
cross sections. Uncertainties of obtained level densities
have been estimated to be about 20% which include un-
certainties of absolute cross section measurements and
uncertainties of particle transmission coefficients.
Both experimental and calculated level densities are
displayed in Fig. 3. The level density for 60Ni has been
extracted from (d,n) spectra because of better counting
statistics, while the (3He,p) and (3He,α) reactions have been used to obtain the level densities of 60Co and 57Fe, respectively, because of their larger Q values. This approach allows
one to obtain level densities in a larger excitation energy
interval. Calculations have been performed with models
M1-M4 with spin cutoff parameters σ1 and σ2 for M1-M3
models. The M4 model uses its own spin distribution
which is close to σ1 for these nuclei. The χ² values for
calculated and experimental level densities are shown in
Table III. Results show that the M1 model with σ1 gives the worst agreement with the experimental data. The use of σ2 improves the agreement for all of the models. The M2 and M3 models using σ2 give the best agreement with experiment on average; however, the level density for 60Co agrees better when σ1 is used, and the best agreement there is reached with the M4 model. It appears that the spin cut-
off parameter is very important when deriving the total
level density from neutron resonance spacings. However,
none of the models give a perfect description of the ex-
perimental data.
In order to improve level density parameters, the
experimental level densities have been fitted with the
Fermi-gas function (1) for two different spin cutoff fac-
tors σ1 and σ2. Best fit parameters are presented in the
Table II. They allow one to reproduce both the shapes of the particle spectra (Fig. 2) and the ratios of neutron, proton, and α cross sections for both the 3He+58Fe and d+59Co reactions (Table I). Level density parameters have been adjusted independently for both spin cutoff parameters, resulting in approximately the same final ratio of experimental/calculated cross sections and χ² values. Therefore only one entry, M1exp, is presented in the tables. The
fact that a single set of level density parameters allows
one to reproduce all particle cross sections from both re-
actions supports our conclusion that the compound nu-
clear mechanism is dominant in these reactions. Finally
we note that the HF calculations do not perfectly repro-
duce the low-energy regions of the proton spectra where
the second stage of outgoing protons dominates. Here the calculations also depend on the additional level densi-
ties of corresponding residual nuclei as well as on the
γ-strength functions. We leave this problem for further
investigations.
The level density of 57Fe below the particle separation
threshold has also been obtained [9] by the Oslo technique using particle-γ coincidences from the 57Fe(3He,3He′)57Fe reaction. We performed a similar comparison for the 56Fe
nucleus where we confirmed consistency of both the Oslo
technique and the technique based on particle evapo-
ration spectra. Figure 4 shows the comparison for the
57Fe nucleus. Here we also see good agreement between the level densities obtained from the two different experiments, which supports the extracted values. The Fermi-gas parameters for 60Ni have been obtained in Ref. [24] from the 63Cu(p,α)60Ni reaction at Ep = 12 MeV. The values a = 6.4 and δ = 1.3 are in good agreement with our parameters presented in Table II.
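For orientation, the sketch below evaluates a back-shifted Fermi-gas level density with the 57Fe parameters of Table II. Because Eqs. (1), (3), and (4) are not reproduced in this section, the functional forms used here (the standard Bethe-type expression and one commonly used spin-cutoff prescription) are our own assumptions and may differ in detail from those adopted in the analysis.

```python
import math

def spin_cutoff_sq(a, U, delta, A):
    # One commonly used prescription, sigma^2 = 0.0888 * sqrt(a*(U - delta)) * A^(2/3).
    # This is an assumption; the paper's Eqs. (3)-(4) may differ.
    return 0.0888 * math.sqrt(max(a * (U - delta), 1e-9)) * A ** (2.0 / 3.0)

def bsfg_level_density(U, a, delta, A):
    # Standard back-shifted Fermi-gas total level density, in levels/MeV.
    u = U - delta
    if u <= 0.0:
        return 0.0
    sigma = math.sqrt(spin_cutoff_sq(a, U, delta, A))
    return math.exp(2.0 * math.sqrt(a * u)) / (
        12.0 * math.sqrt(2.0) * sigma * a**0.25 * u**1.25
    )

# 57Fe column of Table II, Eq. (3) row: a = 5.92, delta = -0.13
for U in (4.0, 6.0, 8.0, 10.0):
    print(U, f"{bsfg_level_density(U, a=5.92, delta=-0.13, A=57):.3e}")
```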
V. SPIN CUTOFF PARAMETER
As mentioned in the previous section, the spin cutoff parameter σ2 obtained according to Eq. (4) gives slightly better agreement with the experiment than σ1 obtained from Eq. (3). On the other hand,
the spin cut off parameters at the neutron binding energy
can be directly obtained from the experimental total level
density and the density of levels for one or several spin
states which are known from the analysis of neutron res-
TABLE I: Ratio of experimental and calculated cross sections obtained with four prior level density models M1-M4 and one
posterior M1exp which uses parameters fit to experimental level densities (see Table II). The spin cutoff parameters σ1 and σ2
are defined according to Eqs. (3) and (4).
Reaction        M1(σ1)  M1(σ2)  M2(σ1)  M2(σ2)  M3(σ1)  M3(σ2)  M4      M1exp   Kexp
58Fe(3He,n) 0.79(12) 1.04(16) 1.03(16) 1.22(19) 1.03(16) 1.05(16) 0.90(14) 1.03(16) 0.52
58Fe(3He,p) 1.23(20) 1.01(15) 1.05(16) 0.93(14) 1.03(15) 1.01(15) 1.11(17) 0.98(15)
58Fe(3He,α) 0.66(10) 0.81(12) 0.66(10) 0.86(13) 0.73(11) 0.81(12) 0.72(11) 1.01(15)
59Co(d,n) 0.81(12) 0.90(14) 0.84(13) 0.91(14) 0.92(14) 0.93(14) 0.89(13) 0.97(15) 0.53
59Co(d,p) 1.82(27) 1.42(21) 1.70(26) 1.40(21) 1.32(20) 1.24(19) 1.41(21) 1.07(16)
59Co(d,α) 0.69(11) 0.59(10) 0.64(10) 0.57(10) 0.59(9) 0.70(11) 0.64(10) 0.97(15)
FIG. 3: Our experimental level densities for 60Ni, 60Co, and 57Fe (points) as a function of excitation energy (MeV). Curves indicate level densities from the four model prescriptions M1-M4. The upper and lower curves for M1-M3 relate to the two spin cutoff parameters σ1 and σ2 used to determine total level densities from neutron resonance spacings. The histogram is the density of discrete levels.
onances [6]. We used the spin distribution formula from
Ref. [5]:
G(J) = [(2J + 1)/(2σ²)] exp[−(J + 0.5)²/(2σ²)],    (8)
with the normalization condition
Σ_J G(J) = 1.    (9)
TABLE II: Fermi-gas parameters obtained from experimental
level densities
Nucleus 60Ni 60Co 57Fe
a, δ for Eq.(3) 6.16;1.43 6.91;-1.89 5.92;-0.13
a, δ for Eq.(4) 6.39;0.80 7.17;-2.6 6.14;-0.78
TABLE III: χ2 of experimental and calculated total level den-
sities for different level density models and spin cutoff factors
Nucleus     M1(σ1)  M1(σ2)  M2(σ1)  M2(σ2)  M3(σ1)  M3(σ2)  M4      M1exp
60Ni 15.3 3.8 1.5 0.2 0.9 1.4 2.5 0.6
60Co 1.3 1.9 2.9 3.1 1.8 2.0 0.8 0.6
57Fe 20.2 2.5 18.9 2.0 10.8 1.8 5.8 0.6
All nuclei 11.5 2.7 7.5 1.8 4.3 1.8 3.1 0.6
FIG. 4: The experimental level densities of the 57Fe nucleus as a function of excitation energy (MeV). Filled points are the present experimental values; open points are data from the Oslo experiment [9]. The histogram is the density of discrete levels.
The total level density ρ(U) can be connected to the neu-
tron resonance spacing by using the expression:
1/D_L = (1/2) ρ(B_n + 0.5∆E) Σ_{J=|I_0−0.5−L|}^{I_0+0.5+L} G(J),    (10)
where DL is the neutron resonance spacing for neutrons
with orbital momentum L, ∆E is the energy interval con-
taining neutron resonances. The assumption of equality
of level numbers with positive and negative parity is used.
Because the total level density ρ(Bn+0.5∆E) around the
neutron separation energy is known from our experiment,
the parameter σ can be obtained from Eqs.(8)-(10).
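A small numerical sketch of this step, following Eqs. (8)-(10) as written above; the total level density, resonance spacing, and target spin used below are purely illustrative inputs, not the values analysed in this work:

```python
import math

def G(J, sigma):
    # Spin distribution of Eq. (8).
    return (2.0 * J + 1.0) / (2.0 * sigma**2) * math.exp(-((J + 0.5) ** 2) / (2.0 * sigma**2))

def inverse_spacing(sigma, rho_Bn, I0, L=0):
    # Right-hand side of Eq. (10): predicted 1/D_L for a given spin cutoff sigma.
    Jmin, Jmax = abs(I0 - 0.5 - L), I0 + 0.5 + L
    total, J = 0.0, Jmin
    while J <= Jmax + 1e-9:
        total += G(J, sigma)
        J += 1.0
    return 0.5 * rho_Bn * total  # factor 1/2 from the equal-parity assumption

def extract_sigma(D_exp, rho_Bn, I0, L=0):
    # Scan sigma and return the value whose predicted spacing best matches D_exp.
    grid = [s / 100.0 for s in range(100, 1501)]
    return min(grid, key=lambda s: abs(inverse_spacing(s, rho_Bn, I0, L) - 1.0 / D_exp))

# Illustrative inputs only: rho(Bn) in levels/MeV, s-wave spacing D0 in MeV, target spin I0.
print(extract_sigma(D_exp=2.0e-3, rho_Bn=2.0e3, I0=1.5))
```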
The data on neutron resonance spacings for nuclei un-
der study are taken from Ref. [25]. The estimated spin
cut off parameters from both s-wave (L = 0) and p-
wave (L = 1) resonance spacings are presented in the
Table IV. The uncertainties include a 20% normaliza-
tion uncertainty in total level densities and uncertainties
in the resonance spacings. For 57Fe, we have obtained
good agreement between two values of σ derived from s-
and p-wave neutron resonances. It indicates the parity
equilibrium of neutron resonances. For 60Co, the uncer-
tainties are too large to draw a definite conclusion. For
60Ni, only D0 is known and a single value of σ is obtained.
It agrees better with σ2 but σ1 cannot be excluded.
The calculations of spin cutoff parameter have been
performed with Eqs. (3) and (4) with Fermi-gas param-
eters from Table II. The experiment shows better agree-
ment with σ2 for
57Fe. The spin cutoff parameters for 60Ni and 60Co agree better with σ2 and σ1, respectively; however, because of the large uncertainties, it is impossible to draw an unambiguous conclusion.
TABLE IV: Spin cutoff parameters obtained from s-wave (σ_s^exp) and p-wave (σ_p^exp) resonances using the total level density from the experiment. σ_1^cal and σ_2^cal have been calculated according to Eqs. (3) and (4), respectively, with parameters from Table II.
Nucleus     60Ni      60Co      57Fe
σ_s^exp     3.3(8)    3.6(15)   2.80(30)
σ_p^exp     —         5.2(12)   2.88(35)
σ_1^cal     4.13      3.95      3.76
σ_2^cal     3.22      3.26      3.0
VI. DISCUSSION
The consistency between results from two different re-
actions supports our conclusion that these reactions are
dominated by the compound-nuclear reaction mechanism
at backward angles. Our results show that level densities estimated on the basis of an interpolation procedure between the neutron resonance and discrete-level energy regions do not reproduce the experimental cross sections of all outgo-
ing particles simultaneously. The reason is that for some
of the nuclei the level density between discrete and con-
tinuum regions has a complicated behavior which can-
not be described by simple formulas based on Fermi-gas
or Gilbert-Cameron models. It is seen for 57Fe (Fig. 4)
where the level density exhibits some step structure at en-
ergy around 3.7 MeV. Nevertheless, the Fermi-gas model
can still be used to describe the level density at higher ex-
citation energies where density fluctuations vanish. The
M3 model, which does not use discrete levels, gives the best agreement. However, a problem apparently connected to
the spin cutoff parameters is still present. These results
indicate that it is necessary to use level density systemat-
ics obtained from compound-nuclear particle evaporation
spectra. Obviously, the region of discrete levels should be
excluded from such an analysis.
Spin cutoff parameters obtained from this experiment are in general agreement with the model predictions of Eqs. (3) and (4). However, it is difficult to reduce the uncertainties enough to make more specific conclusions about the origin of
this parameter. Most probably, this parameter fluctuates
from nucleus to nucleus and is determined by the internal
properties of nuclei such as the specific population of shell
orbits.
As discussed in the introduction, level densities affect reaction rates which are important in astrophysics and other applications. The magnitude of this effect depends mainly on the level densities and on the contribution of the channel of interest to the total reaction cross section. According to Table I, the neutron outgoing
channel is less sensitive to variations of level densities
while changes in proton and α cross sections can reach a
factor of 2 from corresponding changes in level densities.
Changes in predicted cross sections will also occur at this
level.
VII. CONCLUSION
The neutron, proton, and α-particle cross sections have
been measured at backward angles from 3He+58Fe and
d+59Co reactions. The calculations using HF model have
been performed with three level density models adjusted
to match discrete levels and neutron resonance spacings
and one model adjusted to match neutron resonances
only. None of the models reproduces the cross sections of all outgoing particles simultaneously for both reactions.
However, the model M3 suggested in Ref.[1] gives the
best agreement with experiment.
Level densities of residual nuclei 60Ni, 60Co, and 57Fe
have been obtained from particle evaporation spectra.
The experimental level densities have been fitted with a Fermi-gas function and new level density parameters have been ob-
tained. The new level densities allow us to reproduce
all particle energy spectra from both reactions, which indicates the dominance of the compound nuclear mechanism in the particle spectra measured at backward angles. The contribution of the compound mechanism to the total cross section is estimated to be about 50% for both reactions.
The total level densities obtained from particle spectra, together with the neutron resonance spacings, have been used to extract the spin cutoff parameter at the neutron separation energies. The extracted parameters agree with the predictions of
Eq. (4) for 57Fe but no definite conclusions can be made
for 60Ni and 60Co. A better understanding of parity ra-
tio systematics would help to make this technique more
reliable.
VIII. ACKNOWLEDGMENTS
We are grateful to J.E. O’Donnell and D. Carter for
computer and electronic support during the experiment,
A. Adekola, C. Matei, B. Oginni and Z. Heinen for
taking shifts, D.C. Ingram for target thickness calcula-
tions done for us. We also acknowledge financial sup-
port from Department of Energy, grant No. DE-FG52-
06NA26187/A000.
[1] T. Rauscher, F.K. Thielemann, K.L. Kratz, Phys. Rev.
C 56, 1613 (1997).
[2] Report of the Nuclear Physics and Related Com-
putational Science R&D for Advanced Fuel Cycle
Workshop, Bethesda Maryland, August 10-12,2006,
http://www-fp.mcs.anl.gov/nprcsafc/Report FINAL.pdf
[3] W. Hauser and H. Feshbach, Phys. Rev. 87, 366 (1952).
[4] H.A. Bethe, Phys.Rev. 50, 332(1936).
[5] A. Gilbert and A.G.W. Cameron, Can.J.Phys. 43, 269
(1965).
[6] T. Belgya, O. Bersillon, R. Capote, T. Fukahori, G. Zhi-
gang, S. Goriely, M. Herman, A.V. Ignatyuk, S. Kailas,
A. Koning, P. Oblozhinsky, V. Plujko, and P. Young,
Handbook for calculations of nuclear reaction data: Ref-
erence Input Parameter Library. Available online at
http://www-nds.iaea.org/RIPL-2/, IAEA, Vienna, 2005.
[7] T. von Egidy and D. Bucurescu, Phys. Rev. C 72, 044311
(2005); 73, 049901(E) (2006).
[8] B.V. Zhuravlev, A.A.Lychagin, and N.N.Titarenko,
Physics of Atomic Nuclei, 69, 363(2006).
[9] A. Schiller et al., Phys. Rev. C 68, 054326 (2003).
[10] Y. Alhassid, G.F. Bertsch, L. Fang, and S. Liu, Phys.
Rev. C 72, 064326 (2005).
[11] S.M. Grimes, J.D. Anderson, J.W. McClure, B.A. Pohl,
and C. Wong, Phys. Rev. C 10, p.2373 (1974).
[12] S.J.Lokitz, G.E.Mitchell, J.F.Shriner, Jr, Phys. Rev. C
71, 064315(2005).
[13] D. Mocelj, T. Rauscher, F.-K. Thielemann, G. Martínez Pinedo, K. Langanke, L. Pacearescu, and A. Fäßler, J. Phys. G: Nucl. Part. Phys. 31, 1927 (2005).
[14] H. Vonach, Proceedings of the IAEA Advisory Group Meeting on Basic and Applied Problems of Nuclear Level Densities, Upton, NY, 1983, BNL Report No. BNL-NCS-51694, 1983.
[15] A. Wallner, B. Strohmaier, and H. Vonach, Phys. Rev. C 51, 614 (1994).
[16] P.E. Hodgson, Rep. Prog. Phys. 50, 1171 (1987).
[17] A. Salas-Bacci, S.M. Grimes, T.N. Massey, Y. Parpottas,
R.T. Wheeler, J.E. Oldendick, Phys. Rev. C 70, 024311
(2004).
[18] T.N. Massey, S. Al-Quraishi, C.E. Brient, J.F.
Guillemette, S.M. Grimes, D. Jacobs, J.E. O’Donnell,
J. Oldendick and R. Wheeler, Nuclear Science and Engi-
neering 129, 175 (1998).
[19] S. M. Grimes, Ohio University Report INPP-04-03, 2004
(unpublished).
[20] L.R.Gasques, L.C.Chamon, D.Pereira, V.Guimarães,
A.Lépine-Szily, M.A.G.Alvarez, E.S.Rossi, Jr., C.P.Silva,
B.V.Carlson, J.J.Kolata, L.Lamm, D.Peterson, P.Santi,
S.Vincent, P.A.De Young, G.Peasley, Phys. Rev. C 67,
024602 (2003).
[21] R.C.Harper and W.L.Alford, J.Phys.G:Nucl.Phys. 8,
153(1982)
[22] P. Demetriou and S. Goriely, Nucl. Phys. A695, 95
(2001).
[23] A.V.Voinov, S.M.Grimes, U.Agvaanluvsan, E.Algin,
T.Belgya, C.R.Brune, M.Guttormsen, M.J.Hornish,
T.Massey, G.E.Mitchell, J.Rekstad, A.Schiller, S.Siem,
Phys. Rev. C 74, 014314 (2006).
[24] Louis C. Vaz, C.C.Lu, and J.R.Huizenga, Phys. Rev. C
5, p.463 (1972).
[25] S.F. Mughabhab, Atlas of Neutron Resonances, Elsevier
5-th ed. 2006.
|
0704.0917 | The HARPS search for southern extra-solar planets. IX. Exoplanets
orbiting HD 100777, HD 190647, and HD 221287 | Astronomy & Astrophysics manuscript no. dnaef˙7361˙final c© ESO 2021
August 29, 2021
The HARPS search for southern extra-solar planets.
IX. Exoplanets orbiting HD 100777, HD 190647, and HD 221287⋆
D. Naef1,2, M. Mayor2, W. Benz3, F. Bouchy4, G. Lo Curto1, C. Lovis2, C. Moutou5, F. Pepe2, D. Queloz2,
N.C. Santos2,6,7, and S. Udry2
1 European Southern Observatory, Casilla 19001, Santiago 19, Chile
2 Observatoire Astronomique de l’Université de Genève, 51 Ch. des Maillettes, CH-1290 Sauverny, Switzerland
3 Physikalisches Institut Universität Bern, Sidlerstrasse 5, CH-3012 Bern, Switzerland
4 Institut d’Astrophysique de Paris, UMR7095, Université Pierre & Marie Curie, 98 bis Bd Arago, F-75014 Paris, France
5 Laboratoire d’Astrophysique de Marseille, Traverse du Siphon, F-13376 Marseille 12, France
6 Centro de Astronomia e Astrofísica da Universidade de Lisboa, Observatório Astronómico de Lisboa, Tapada da Ajuda, P-1349-018
Lisboa, Portugal
7 Centro de Geofı́sica de Évora, Rua Romão Ramalho 59, P-7002-554 Évora, Portugal
Received 27 February 2007 / Accepted 11 April 2007
ABSTRACT
Context. The HARPS high-resolution high-accuracy spectrograph was made available to the astronomical community in the second
half of 2003. Since then, we have been using this instrument for monitoring radial velocities of a large sample of Solar-type stars
(≃ 1400 stars) in order to search for their possible low-mass companions.
Aims. Amongst the goals of our survey, one is to significantly increase the number of detected extra-solar planets in a volume-limited
sample to improve our knowledge of their orbital elements distributions and thus obtain better constraints for planet-formation models.
Methods. Radial-velocities were obtained from high-resolution HARPS spectra via the cross-correlation method. We then searched
for Keplerian signals in the obtained radial-velocity data sets. Finally, companions orbiting our sample stars were characterised using
the fitted orbital parameters.
Results. In this paper, we present the HARPS radial-velocity data and orbital solutions for 3 Solar-type stars: HD 100777, HD 190647,
and HD 221287. The radial-velocity data of HD 100777 is best explained by the presence of a 1.16 MJup planetary companion on a
384–day eccentric orbit (e = 0.36). The orbital fit obtained for the slightly evolved star HD 190647 reveals the presence of a long-
period (P = 1038 d) 1.9 MJup planetary companion on a moderately eccentric orbit (e = 0.18). HD 221287 is hosting a 3.1 MJup planet
on a 456–day orbit. The shape of this orbit is not very well-constrained because of our non-optimal temporal coverage and because of
the presence of abnormally large residuals. We find clues that these large residuals result from spectral line-profile variations probably
induced by processes related to stellar activity.
Key words. stars: individual: HD 100777 – stars: individual: HD 190647 – stars: individual: HD 221287 – stars: planetary systems –
techniques: radial velocities
1. Introduction
The High Accuracy Radial-velocity Planet Searcher (HARPS,
Pepe et al. 2002, 2004; Mayor et al. 2003) was put in opera-
tion during the second half of 2003. HARPS is a high-resolution,
fiber-fed echelle spectrograph mounted on the 3.6–m telescope
at ESO–La Silla Observatory (Chile). It is placed in a vacuum
vessel and is accurately thermally-controlled (temperature varia-
tions are less than 1 mK over one night, less than 2 mK over one
month). Its most striking characteristic is its unequaled stability
and radial-velocity (RV) accuracy: 1 m s−1 in routine operations.
A sub-m s−1 accuracy can even be achieved for inactive, slowly
rotating stars when an optimized observing strategy aimed at
averaging out the stellar oscillations signal is applied (Santos
et al. 2004a; Lovis et al. 2006).
Send offprint requests to: D. Naef e-mail: [email protected]
⋆ Based on observations made with the HARPS instrument on the
ESO 3.6-m telescope at the La Silla Observatory (Chile) under the GTO
programme ID 072.C-0488.
The HARPS Consortium that manufactured the instrument
for ESO has received Guaranteed Time Observations (GTO). The
core programme of the HARPS-GTO is a very high RV-precision
search for low-mass planets around non-active and slowly ro-
tating Solar-type stars. Another programme carried out by the
HARPS-GTO is a lower RV precision planet search. It is a survey
of about 850 Solar-type stars at a precision better than 3 m s−1.
The sample is a volume-limited complement (up to 57.5 pc) of
the CORALIE sample (Udry et al. 2000). The goal of this sub-
programme is to obtain improved Jupiter-sized planets orbital
elements distributions by substantially increasing the size of the
exoplanets sample. Statistically robust orbital elements distribu-
tions put strong constraints on the various planet formation sce-
narios. The total number of extra-solar planets known so far is
over 200. Nevertheless, some sub-categories of planets with spe-
cial characteristics (e.g. hot-Jupiters or very long-period planets)
are still weakly populated. The need for additional detections
thus remains high.
With typical measurement precisions of 2-3 m s−1, the
HARPS volume-limited programme does not necessarily aim at
Table 1. Observed and inferred stellar characteristics of
HD 100777, HD 190647, and HD 221287 (see text for details).
HD 100777 HD 190647 HD 221287
HIP 56572 99115 116084
Type K0 G5 F7V
mV 8.42 7.78 7.82
B − V 0.760 0.743 0.513
π [mas] 18.84± 1.14 18.44± 1.10 18.91± 0.82
d [pc] 52.8 (+3.4/−3.0)   54.2 (+…/−3.1)   52.9 (+…/−…)
MV 4.807 4.109 4.203
B.C. −0.119 −0.109 −0.010
L [L⊙] 1.05 1.98 1.66
Teff [K] 5582± 24 5628± 20 6304± 45
log g [cgs] 4.39± 0.07 4.18± 0.05 4.43± 0.16
[Fe/H] 0.27± 0.03 0.24± 0.03 0.03± 0.05
Vt [km s−1] 0.98± 0.03 1.06± 0.02 1.27± 0.12
M∗ [M⊙] 1.0± 0.1 1.1± 0.1 1.25± 0.10
log R′HK −5.03 −5.09 −4.59
Prot [d] 39± 2 39± 2 5.0± 2
age [Gyr] >2 >2 1.3
v sin i [km s−1] 1.8± 1.0 2.4± 1.0 4.1± 1.0
detecting very low-mass planetary companions, and it is mostly
sensitive to planets that are more massive than Saturn. To date,
it has already allowed the detection of several short-period pla-
nets: HD 330075 b (Pepe et al. 2004), HD 2638 b, HD 27894 b,
HD 63454 b (Moutou et al. 2005), and HD 212301 b (Lo Curto
et al. 2006).
In this paper, we report the detection of 3 longer-period pla-
netary companions orbiting stars in the volume-limited sample:
HD 100777 b, HD 190647 b, and HD 221287 b. In Sect. 2, we de-
scribe the characteristics of the 3 host stars. In Sect. 3, we present
our HARPS radial-velocity data and the orbital solutions for the
3 targets. In Sect. 3.4, we discuss the origin of the high residuals
to the orbital solution for HD 221287. Finally, we summarize our
findings in Sect. 4.
2. Stellar characteristics of HD 100777, HD 190647,
and HD 221287
The main characteristics of HD 100777, HD 190647, and
HD 221287 are listed in Table 1. Spectral types, apparent magni-
tudes mV, colour indexes B − V , astrometric parallaxes π, and
distances d are from the HIPPARCOS Catalogue (ESA 1997).
From the same source, we have also retrieved information on
the scatter in the photometric measurements and on the good-
ness of the astrometric fits for the 3 targets. The photometric
scatters are low in all cases. HD 190647 is flagged as a constant
star. The goodness-of-fit parameters are close to 0 for the 3 stars,
indicating that their astrometric data are well explained by a single-star model. Finally, no close-in faint visual companions are
reported around these objects in the HIPPARCOS Catalogue.
We performed LTE spectroscopic analyses of high signal-
to-noise ratio (SNR) HARPS spectra for the 3 targets follow-
ing the method described in Santos et al. (2004b). Effective
temperatures (Teff), gravities (log g), iron abundances ([Fe/H]),
and microturbulence velocities (Vt) indicated in Table 1 re-
sult from these analyses. Like most of the planet-hosting stars
Table 2. HARPS radial-velocity data obtained for HD 100777.
Julian date RV Uncertainty
BJD− 2 400 000 [d] [km s−1]
53 063.7383 1.2019 0.0016
53 377.8740 1.2380 0.0015
53 404.7626 1.2205 0.0014
53 407.7461 1.2164 0.0014
53 408.7419 1.2176 0.0016
53 409.7891 1.2161 0.0015
53 468.6026 1.2109 0.0016
53 470.6603 1.2122 0.0021
53 484.6703 1.2273 0.0013
53 489.5948 1.2319 0.0022
53 512.5671 1.2494 0.0017
53 516.5839 1.2539 0.0014
53 518.5962 1.2554 0.0021
53 520.5778 1.2558 0.0022
53 543.5598 1.2649 0.0013
53 550.5289 1.2641 0.0016
53 573.4689 1.2682 0.0014
53 579.4705 1.2641 0.0022
53 724.8617 1.2520 0.0012
53 762.8169 1.2355 0.0013
53 765.7645 1.2311 0.0016
53 781.8894 1.2242 0.0016
53 789.7730 1.2209 0.0018
53 862.6200 1.2217 0.0015
53 866.6037 1.2239 0.0014
53 871.6270 1.2357 0.0015
53 883.5589 1.2397 0.0014
53 918.4979 1.2589 0.0017
53 920.5110 1.2613 0.0023
(Santos et al. 2004b), HD 100777 and HD 190647 have very high
iron abundances, more than 1.5 times the solar value, whereas
HD 221287 has a nearly solar metal content. Using the spec-
troscopic effective temperatures and the calibration in Flower
(1996), we computed bolometric corrections. Luminosities were
obtained from the bolometric corrections and the absolute ma-
gnitudes. The low gravity and the high luminosity of HD 190647
indicate that this star is slightly evolved. The other two stars
are still on the main sequence. Stellar masses M∗ were derived
from L, Teff , and [Fe/H] using Geneva and Padova evolutiona-
ry models (Schaller et al. 1992; Schaerer et al. 1993; Girardi
et al. 2000). Values of the projected rotational velocities, v sin i,
were derived from the widths of the HARPS cross-correlation
functions using a calibration obtained following the method de-
scribed in Santos et al. (2002, see their Appendix A)1.
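As a simple consistency check of the photometric quantities in Table 1, the absolute magnitude follows from the apparent magnitude and the distance, and the luminosity from the bolometric correction. The solar bolometric magnitude Mbol,⊙ = 4.74 adopted below is an assumed zero point, not a value quoted in the paper; with it, the HD 100777 entries of Table 1 are recovered:

```python
import math

def absolute_magnitude(m_V, d_pc):
    # M_V = m_V - 5 log10(d / 10 pc)
    return m_V - 5.0 * math.log10(d_pc / 10.0)

def luminosity_solar(M_V, BC, M_bol_sun=4.74):
    # L/Lsun from the bolometric magnitude; M_bol_sun = 4.74 is an assumed zero point.
    return 10.0 ** (-0.4 * (M_V + BC - M_bol_sun))

# HD 100777 (Table 1): mV = 8.42, d = 52.8 pc, B.C. = -0.119
M_V = absolute_magnitude(8.42, 52.8)
print(round(M_V, 3), round(luminosity_solar(M_V, -0.119), 2))  # ~4.807, ~1.05
```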
Stellar activity indexes log R′HK (see the index definition
in Noyes et al. 1984) were derived from Ca II K line core re-
emission measurements on high-SNR HARPS spectra following
the method described in Santos et al. (2000). Using these va-
lues and the calibration in Noyes et al. (1984), we derived es-
timates of the rotational periods and stellar ages. The chromo-
spheric ages obtained with this calibration for HD 100777 and
HD 190647 are 6.2 and 7.6 Gyr. Pace & Pasquini (2004) have
shown that chromospheric ages derived for very low-activity,
Solar-type stars were not reliable. This is due to the fact that
the chromospheric emission drops rapidly after 1 Gyr and be-
comes virtually constant after 2 Gyr. For this reason, we have
chosen to indicate ages greater than 2 Gyr instead of the cali-
1 Using cross-correlation function widths for deriving projected rota-
tional velocities is a method that was first proposed by Benz & Mayor
(1981).
Fig. 1. Top: HARPS radial-velocity data (dots) for HD 100777
and fitted orbital solution (solid curve). The radial-velocity sig-
nal is induced by the presence of a 1.16 MJup planetary compan-
ion on a 384-day orbit. Bottom: Residuals to the fitted orbit. The
scatter of these residuals is compatible with the velocity uncer-
tainties.
brated values for these two stars. HD 221287 is much more active
and thus younger making its chromospheric age estimate more
reliable: 1.3 Gyr. We have also measured the lithium abundance
for this target following the method described in Israelian et al.
(2004) and again using a high-SNR HARPS spectrum. The mea-
sured equivalent width for the Li I λ6707.70Å line (deblended
from the Fe I λ6707.44Å line) is 66.6 mÅ leading to the fol-
lowing lithium abundance: log n(Li) = 2.98. Unfortunately, the
lithium abundance cannot provide any reliable age constraint in
our case. Studies of the lithium abundances of open cluster main
sequence stars have shown that log n(Li) remains constant and
equals ≃ 3 for Teff = 6300 K stars older than a few million years
(see for example Sestito & Randich 2005).
From the activity level reported for HD 221287
(log R′HK = −4.59), we can estimate the expected level of
activity-induced radial-velocity scatter (i.e. the jitter). Using
the results obtained by Santos et al. (2000) for stars with
similar activity levels and spectral types, the range of expected
jitter is between 8 and 20 m s−1. A similar study made on a
different stellar sample by Wright (2005) gives similar results:
an expected jitter of the order of 20 m s−1 (from the fit this
author presents in his Fig. 4). It has to be noted that both
studies contain very few F stars and even fewer high-activity
F stars. This results from the stellar sample they have used:
planet search samples selected against rapidly-rotating young
stars. Their predictions for the expected jitter level are therefore
not well-constrained for active F stars. Both HD 100777 and
HD 190647 are inactive and slowly rotating. Following the same
studies, low jitter values are expected in both cases: ≤ 8 m s−1.
Table 3. HARPS radial-velocity data obtained for HD 190647.
Julian date RV Uncertainty
BJD− 2 400 000 [d] [km s−1]
52 852.6233 −40.2874 0.0013
52 854.6727 −40.2868 0.0013
53 273.5925 −40.2435 0.0022
53 274.6041 −40.2354 0.0024
53 468.8874 −40.2688 0.0015
53 470.8634 −40.2689 0.0014
53 493.9234 −40.2700 0.0029
53 511.9022 −40.2723 0.0015
53 543.8020 −40.2785 0.0013
53 550.7721 −40.2804 0.0014
53 572.8074 −40.2818 0.0015
53 575.7266 −40.2844 0.0017
53 694.5136 −40.3056 0.0014
53 836.9226 −40.2982 0.0015
53 861.8630 −40.2950 0.0017
53 883.8224 −40.2907 0.0013
53 886.8881 −40.2883 0.0013
53 917.8390 −40.2785 0.0021
53 921.8472 −40.2785 0.0018
53 976.6039 −40.2623 0.0014
53 979.6654 −40.2610 0.0017
3. HARPS radial-velocity data
Stars belonging to the volume-limited HARPS-GTO sub-
programme are observed most of the time without the simulta-
neous Thorium-Argon reference (Baranne et al. 1996). The ob-
tained radial velocities are thus uncorrected for possible instru-
mental drifts. This only has a very low impact on our results
as the HARPS radial-velocity drift is less than 1 m s−1 over one
night. For this large volume-limited sample, we aim at a radial-
velocity precision of the order of 3 m s−1 (or better). This corres-
ponds roughly to an SNR of 40-50 at 5500Å. For bright targets
like the three stars of this paper, the exposure times required for
reaching this signal level can be very short as an SNR of 100
(at 5500Å) is obtained with HARPS in a 1-minute exposure on
a 6.5 mag G dwarf under normal weather and seeing conditions.
In order to limit the impact of observing overheads (telescope
preset, target acquisition, detector read-out), we normally do not
use exposure times less than 60 seconds. As a consequence, the
SNR obtained for bright stars is significantly higher than the tar-
geted one, and the output measurement errors are frequently be-
low 2 m s−1.
For our radial-velocity measurements, we consider two main
error terms. The first one is obtained through the HARPS Data
Reduction Software (DRS). It includes all the known calibration
errors (≃ 20 cm s−1), the stellar photon-noise, and the error on
the instrument drift. For observations taken with the simultane-
ous reference, the drift error is derived from the photon noise
of the Thorium-Argon exposure. For observations taken without
the lamp, a drift error term of 50 cm s−1 is quadratically added.
The second main error term is called the non-photonic error. It
includes guiding errors and a lower limit for the stellar pulsa-
tion signals. For the volume-limited programme, we use an ad-
hoc value for this term: 1.0 m s−1. Stellar noise (activity jitter,
pulsation signal) can of course be greater in some cases (as for
example for HD 221287, cf. Sect. 3.3). The non-photonic term
nearly vanishes for targets belonging to the very high RV pre-
cision sample (non-active stars, pulsation modes averaged out
by the specific observing strategy). In this latter case, this term
thus only contains the guiding errors: ≃ 30 cm s−1. The error bars
Fig. 2. Top: HARPS radial-velocity data (dots) for HD 190647
and fitted orbital solution (solid curve). The radial-velocity sig-
nal is induced by the presence of a 1.9 MJup planetary companion
on a 1038-day orbit. Bottom: Residuals to the fitted orbit. The
scatter of these residuals is compatible with the velocity uncer-
tainties.
listed in this paper correspond to the quadratic sum of the DRS
and the non-photonic errors.
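In practice the tabulated uncertainties therefore reduce to a quadrature sum of the two terms; a one-line sketch (in km/s, to match the data tables):

```python
import math

def rv_uncertainty(drs_error_kms, non_photonic_kms=1.0e-3):
    # Quadratic sum of the DRS error and the ad-hoc 1.0 m/s non-photonic term.
    return math.sqrt(drs_error_kms**2 + non_photonic_kms**2)

# A 1.3 m/s DRS error (the mean quoted below for HD 100777) gives ~1.6 m/s in total.
print(round(rv_uncertainty(1.3e-3) * 1.0e3, 2))  # -> 1.64
```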
In the following sections, we present the HARPS radial-
velocity data obtained for HD 100777, HD 190647, and
HD 221287 in more detail, as well as the orbital solutions fitted
to the data.
3.1. A 1.16 MJup planet around HD 100777
We have gathered 29 HARPS radial-velocity measurements of
HD 100777. These data span 858 days between February 27th
2004 (BJD = 2 453 063) and July 4th 2006 (BJD = 2 453 921).
Their mean radial-velocity uncertainty is 1.6 m s−1 (mean DRS
error: 1.3 m s−1). We list these measurements in Table 2 (elec-
tronic version only).
A nearly yearly signal is present in these data. We fit-
ted a Keplerian orbit. The resulting parameters are listed in
Table 5. The fitted orbit is displayed in Fig. 1, together with our
radial-velocity measurements. The radial-velocity data is best
explained by the presence of a 1.16 MJup planet on a 384 d fairly
eccentric orbit (e = 0.36). The inferred separation between the
host star and its planet is a = 1.03 AU. Both m2 sin i and a were
computed using a primary mass of 1 M⊙.
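For reference, the quoted minimum mass and separation follow from the fitted period, semi-amplitude, and eccentricity together with the adopted stellar mass, through the binary mass function and Kepler's third law. The sketch below uses rounded physical constants and is a generic calculation (not the ORBIT software used for the fits); it reproduces the HD 100777 b values of Table 5:

```python
import math

G_SI  = 6.674e-11   # m^3 kg^-1 s^-2
M_SUN = 1.989e30    # kg
M_JUP = 1.898e27    # kg
AU    = 1.496e11    # m
DAY   = 86400.0     # s

def mass_function(K, P_days, e):
    # f(m) = K^3 P (1 - e^2)^(3/2) / (2 pi G), returned in kg
    return K**3 * P_days * DAY * (1.0 - e**2) ** 1.5 / (2.0 * math.pi * G_SI)

def minimum_mass(K, P_days, e, M_star_sun):
    # Solve (m2 sin i)^3 / (M* + m2)^2 = f(m) by fixed-point iteration; taking
    # sin i = 1 in the denominator gives the minimum companion mass.
    f = mass_function(K, P_days, e)
    m2 = 0.0
    for _ in range(50):
        m2 = (f * (M_star_sun * M_SUN + m2) ** 2) ** (1.0 / 3.0)
    return m2

def semi_major_axis(P_days, M_star_sun, m2_kg):
    # Kepler's third law for the relative orbit.
    return (G_SI * (M_star_sun * M_SUN + m2_kg) * (P_days * DAY) ** 2
            / (4.0 * math.pi**2)) ** (1.0 / 3.0)

# HD 100777 b: K1 = 34.9 m/s, P = 383.7 d, e = 0.36, M* = 1.0 Msun
m2 = minimum_mass(34.9, 383.7, 0.36, 1.0)
a = semi_major_axis(383.7, 1.0, m2)
print(round(m2 / M_JUP, 2), round(a / AU, 2))  # ~1.16 MJup, ~1.03 AU
```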
We performed Monte-Carlo simulations using the ORBIT
software (see Sect. 3.1. in Forveille et al. 1999) in order to
double-check the parameter uncertainties. The errorbars ob-
tained from these simulations are quasi-symmetric and some-
what larger ('18%) than the ones obtained from the covariance
matrix of the Keplerian fit. The errors we have finally quoted in
Table 5 are the Monte-Carlo ones. The residuals to the fitted orbit
(see bottom panel of Fig. 1) are flat and have a dispersion com-
patible with the measurement noise. The low reduced χ2 value
(1.45) and the χ2 probability (0.074) further demonstrate the
Table 4. HARPS radial-velocity data obtained for HD 221287.
Julian date RV Uncertainty
BJD− 2 400 000 [d] [km s−1]
52 851.8534 −21.9101 0.0012
52 853.8544 −21.9201 0.0021
52 858.7810 −21.9252 0.0019
53 264.7097 −21.8617 0.0021
53 266.6805 −21.8852 0.0020
53 268.7030 −21.8595 0.0034
53 273.6951 −21.8888 0.0031
53 274.7129 −21.8718 0.0021
53 292.6284 −21.9049 0.0020
53 294.6239 −21.8998 0.0019
53 295.6781 −21.9017 0.0024
53 296.6388 −21.9186 0.0026
53 339.6009 −21.9289 0.0015
53 340.5974 −21.9225 0.0013
53 342.5955 −21.9172 0.0013
53 344.5961 −21.9428 0.0014
53 345.5923 −21.9291 0.0013
53 346.5566 −21.9284 0.0013
53 546.9385 −21.8022 0.0039
53 550.9121 −21.8294 0.0022
53 551.9479 −21.7956 0.0018
53 723.5707 −21.8679 0.0033
53 727.5302 −21.8815 0.0021
53 862.9297 −21.9205 0.0023
53 974.7327 −21.8240 0.0021
53 980.7273 −21.8192 0.0022
good fit quality. The presence of another massive short-period
companion around HD 100777 is thus unlikely.
3.2. A 1.9 MJup planet orbiting HD 190647
Between August 1, 2003 (BJD = 2 452 852) and September 2,
2006 (BJD = 2 453 980), we obtained 21 HARPS radial-velocity
measurements for HD 190647. These data have a mean radial-
velocity uncertainty of 1.7 m s−1 (mean DRS error: 1.3 m s−1).
We list these measurements in Table 3 (electronic version only).
A long-period signal is clearly present in the RV data. We
performed a Keplerian fit. The resulting fitted parameters are
listed in Table 5. The fitted orbital period (1038 d) is slightly
shorter than the observing time span (1128 d) and the orbital
eccentricity is low (0.18). Monte-Carlo simulations were car-
ried out. The uncertainties on the orbital parameters obtained in
this case are nearly symmetric and a bit larger (≃ 18%) than the
ones resulting from the Keplerian fit. In Table 5, we have listed
these more conservative Monte-Carlo uncertainties. The small
discrepancy between the two sets of errorbars is most probably
due to the rather short time span of the observations (only 1.09
orbital cycle covered) and to our still not optimal coverage of
both the minimum and the maximum of the radial-velocity orbit.
From the fitted parameters and with a primary mass of 1.1 M�,
we compute a minimum mass of 1.90 MJup for this planetary
companion. The computed separation between the two bodies
is 2.07 AU. Figure 2 shows our data and the fitted orbit.
The weighted rms of the residuals (1.6 m s−1) is slightly
smaller than the mean RV uncertainty. The low dispersion of the
residuals, the low reduced χ2 value, and its associated proba-
bility (0.11) allow us to exclude the presence of an additional
massive short-period companion.
Fig. 3. Top: HARPS radial-velocity data (dots) for HD 221287
and fitted orbital solution (solid curve). The detected Keplerian
signal is induced by a 3.1 MJup planet on a 456–day orbit.
Because of a non-optimal coverage of the radial-velocity maxi-
mum and the presence of stellar activity-induced jitter, the ex-
act shape of the orbit is not very well-constrained. Bottom:
Residuals to the fitted orbit. The scatter of these residuals is
much larger than the velocity uncertainties. This large dispersion
is probably due to the fairly high activity level of this star.
3.3. A 3.1 MJup planetary companion to HD 221287
A total of 26 HARPS radial-velocity data were obtained
for HD 221287. These data are spread over 1130 days: be-
tween July 31, 2003 (BJD = 2 452 851) and September 3, 2006
(BJD = 2 453 981). Unlike the two other targets presented in
this paper, a substantial fraction (≃ 65%) of these velocities
were taken using the simultaneous Thorium-Argon reference.
They were thus corrected for the measured instrumental velocity
drifts. The mean radial-velocity uncertainty computed for this
data set is 2.1 m s−1 (mean DRS error: 1.8 m s−1). We list these
measurements in Table 4 (electronic version only).
A 456 d radial-velocity variation is clearly visible in our data
(see Fig. 3). This period is two orders of magnitude longer than
the rotation period obtained from the Noyes et al. (1984) cali-
bration for HD 221287: 5± 2 d. This large discrepancy between
PRV and Prot is probably sufficient for safely excluding stellar
spots as the origin of the detected RV signal, but we nevertheless
checked if this variability could be due to line-profile variations.
The cross-correlation function (CCF) bisector span versus radial
velocity plot is shown in the top panel of Fig. 4. The average
CCF bisector value is computed in two selected regions: near the
top of the CCF (i.e. near the continuum) and near its bottom (i.e
near the RV minimum). The span is the difference between these
two average values (top−bottom) and thus represents the overall
slope of the CCF bisector (for details, see Queloz et al. 2001).
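A schematic implementation of this diagnostic on a synthetic CCF is sketched below. The bisector is the locus of mid-points between the two wings of the profile at each depth; the fractional-depth windows chosen for the "top" (near the continuum) and "bottom" (near the minimum) regions are our own assumption, since the exact ranges used for the HARPS CCFs are not specified here. A symmetric profile gives a span of essentially zero, while spot-induced distortions shift the two averages in opposite directions.

```python
import numpy as np

def ccf_bisector_span(v, ccf, top=(0.10, 0.40), bottom=(0.60, 0.90)):
    # Bisector span of an absorption-like CCF; 'top'/'bottom' are fractional-depth
    # windows measured from the continuum (these ranges are assumptions).
    cont = np.median(np.concatenate([ccf[:5], ccf[-5:]]))    # local continuum level
    depth = (cont - ccf) / (cont - ccf.min())                 # 0 at continuum, 1 at minimum
    imin = int(np.argmin(ccf))
    bis_v, bis_d = [], []
    for d in np.linspace(0.05, 0.95, 46):
        left = np.interp(d, depth[: imin + 1], v[: imin + 1])        # blue wing
        right = np.interp(d, depth[imin:][::-1], v[imin:][::-1])     # red wing
        bis_v.append(0.5 * (left + right))
        bis_d.append(d)
    bis_v, bis_d = np.array(bis_v), np.array(bis_d)
    v_top = bis_v[(bis_d >= top[0]) & (bis_d <= top[1])].mean()
    v_bot = bis_v[(bis_d >= bottom[0]) & (bis_d <= bottom[1])].mean()
    return v_top - v_bot

# A symmetric Gaussian CCF should give a span of ~0 km/s.
v = np.linspace(-20.0, 20.0, 401)                  # km/s
ccf = 1.0 - 0.3 * np.exp(-(v**2) / (2.0 * 3.0**2))
print(round(ccf_bisector_span(v, ccf), 6))
```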
As for the case of HD 166435 presented in Queloz et al.
(2001), an anti-correlation between spans and velocities is
expected in the case of star-spot induced line-profile varia-
tions. The bisector span data are quite noisy (weighted rms of
10.4 m s−1), but they are not correlated with the RV data. The
Fig. 4. a: Bisector span versus radial-velocity plot for
HD 221287. The dispersion of the span data is quite
large (10.4 m s−1) revealing potential line-profile variations.
Nevertheless, the main radial-velocity signal is not correlated
to these profile variations and is thus certainly of Keplerian
origin. b: Bisector span versus radial-velocity residuals to the
Keplerian orbit (see Table 5) displayed in the same velocity
scale. A marginal anti-correlation between the two quantities is
observed.
main signal can therefore not be due to line-profile variations
and certainly has a Keplerian origin.
Table 5 contains the results of a Keplerian fit that we per-
formed. Our data and the fitted orbit are displayed in Fig. 3. The
RV maximum remains poorly covered by our observations. As
for the other two targets, we made Monte-Carlo simulations for
checking our orbital parameter uncertainties. As expected, the
uncertainties obtained in this case largely differ from the ones
obtained via the Keplerian fit. For most of the parameters, the
errorbars resulting from the simulations are not symmetric and
much larger (≃ 5 times larger). In order to be more conserva-
tive, we have chosen to quote these errors in Table 5. The shape
of the orbit is not very well-constrained, but there is no doubt
about the planetary nature of HD 221287 b. The fitted eccen-
tricity is low (0.08), but circular or moderately eccentric orbits
(up to 0.25) cannot be excluded yet. Using a primary mass of
1.25 M�, we compute the companion minimum mass and sepa-
ration: m2 sin i = 3.09 MJup and a = 1.25 AU.
Table 5. HARPS orbital solutions for HD 100777, HD 190647, and HD 221287.
HD 100777 HD 190647 HD 221287
P [d]                 383.7 ± 1.2       1038.1 ± 4.9       456.1 (+7.7/−…)
T [JD†]               456.2 ± 2.3       868 ± 24           263 (+99/−…)
e                     0.36 ± 0.02       0.18 ± 0.02        0.08 (+0.17/−0.05)
γ [km s−1]            1.246 ± 0.001     −40.267 ± 0.001    −21.858 (+0.008/−0.005)
ω [◦]                 202.7 ± 3.1       232.5 ± 9.4        98 (+92/−…)
K1 [m s−1]            34.9 ± 0.8        36.4 ± 1.2         71 (+18/−8)
f(m) [10−9 M⊙]        1.37 ± 0.10       4.94 ± 0.50        16.4 ± 12.6
a1 sin i [10−3 AU]    1.15 ± 0.03       3.42 ± 0.12        2.95 ± 0.75
m2 sin i [MJup]       1.16 ± 0.03       1.90 ± 0.06        3.09 ± 0.79
a [AU]                1.03 ± 0.03       2.07 ± 0.06        1.25 ± 0.04
N                     29                21                 26
σO−C [m s−1]\         1.7               1.6                8.5
χ2red ⋆               1.45              1.46               26.7
p(χ2, ν)‡             0.074             0.11               0
† JD = BJD − 2 453 000
\ σO−C is the weighted rms of the residuals (weighted by 1/ε², where ε is the O−C uncertainty)
⋆ χ2red = χ²/ν, where ν is the number of degrees of freedom (here ν = N − 6).
‡ Post-fit χ² probability computed with ν = N − 6.
3.4. Residuals to the HD 221287 orbital fit
The residuals to the orbital solution for HD 221287 presented in
Sect. 3.3 are clearly abnormal. Their weighted rms, 8.5 m s−1,
is much larger than the mean radial-velocity uncertainty ob-
tained for this target: 〈εRV〉 = 2.1 m s−1. The abnormal scatter obtained by quadratically correcting the residual rms for 〈εRV〉 is
8.2 m s−1. This matches the lowest value expected for this star
from the Santos et al. (2000) and Wright (2005) studies. Again,
we stress that these two studies clearly lack active F stars, and
their activity versus jitter relations are thus weakly constrained
for this kind of target. Our measured jitter value certainly does
not strongly disagree with their results.
We have searched for periodic signals in the radial-velocity
residuals by computing their Fourier transform, but no signifi-
cant peak in the power spectrum could be found. The absence of
significant periodicity is not surprising since the phase of star-
spot induced signals is not always conserved over more than a
few rotational cycles.
Cross-correlation function bisector spans are plotted against
the observed radial-velocity residuals in the bottom panel of
Fig. 4. A marginal anti-correlation (Spearman’s rank correlation
coefficient: ρ=−0.1) between the two quantities is visible. A
weighted linear regression (i.e. the simplest possible model) was
computed. The obtained slope is only weakly significant (1σ).
We are thus unable, at this stage, to clearly establish the link be-
tween the line-profile variations and our residuals. As indicated
in Sect. 3.3, our orbital solution is not very well-constrained.
This probably affects the residuals and possibly explains the
absence of a clear anti-correlation. Additional radial-velocity
measurements are necessary for establishing this relation, but
activity-related processes so far remain the best explanation for
the observed abnormal residuals to the fitted orbit.
HD 221287 has a planet with an orbital period of 456 d but
with an additional radial-velocity signal, probably induced by
the presence of cool spots whose visibility is modulated by stel-
lar rotation.
4. Conclusion
We have presented our HARPS radial-velocity data for 3 Solar-
type stars: HD 100777, HD 190647, and HD 221287. The radial-
velocity variations detected for these stars are explained by the
presence of planetary companions. HD 100777 b has a minimum
mass of 1.16 MJup. Its orbit is eccentric (0.36) and has a period of
384 days. The 1038–day orbit of the 1.9 MJup planet around the
slightly evolved star HD 190647 is moderately eccentric (0.18).
The planetary companion inducing the detected velocity signal
for HD 221287 has a minimum mass of 3.1 MJup. Its orbit has
a period of 456 days. The orbital eccentricity for this planet is
not well-constrained. The fitted value is 0.08 but orbits with
0.0≤ e≤ 0.25 cannot be excluded yet. This rather weak con-
straint on the orbital shape is explained by two reasons. First,
our data cover the radial-velocity maximum poorly. Second, the
residuals to this orbit are abnormally large. We have tried to es-
tablish the relation between these high residuals and line-profile
variations through a study of the CCF bisectors. As expected, a
marginal anti-correlation of the two quantities is observed, but
it is only weakly significant, thereby preventing us from clearly
establishing the link between them.
Acknowledgements. The authors would like to thank the ESO–La Silla
Observatory Science Operations team for its efficient support during the observa-
tions and to all the ESO staff involved in the HARPS maintenance and techni-
cal support. Support from the Fundação para Ciência e a Tecnologia (Portugal)
to N.C.S. in the form of a scholarship (reference SFRH/BPD/8116/2002) and
a grant (reference POCI/CTEAST/56453/2004) is gratefully acknowledged.
Continuous support from the Swiss National Science Foundation is apprecia-
tively acknowledged. This research has made use of the Simbad database oper-
ated at the CDS, Strasbourg, France.
References
Baranne, A., Queloz, D., Mayor, M., et al. 1996, A&AS, 119, 373
Benz, W. & Mayor, M. 1981, A&A, 93, 235
ESA. 1997, The HIPPARCOS and TYCHO catalogue, ESA-SP 1200
Flower, P. J. 1996, ApJ, 469, 355
Forveille, T., Beuzit, J., Delfosse, X., et al. 1999, A&A, 351, 619
Girardi, L., Bressan, A., Bertelli, G., & Chiosi, C. 2000, A&AS, 141, 371
Israelian, G., Santos, N. C., Mayor, M., & Rebolo, R. 2004, A&A, 414, 601
Lo Curto, G., Mayor, M., Clausen, J. V., et al. 2006, A&A, 451, 345
Lovis, C., Mayor, M., Pepe, F., et al. 2006, Nature, 441, 305
Mayor, M., Pepe, F., Queloz, D., et al. 2003, The Messenger, 114, 20
Moutou, C., Mayor, M., Bouchy, F., et al. 2005, A&A, 439, 367
Noyes, R. W., Hartmann, L. W., Baliunas, S. L., Duncan, D. K., & Vaughan,
A. H. 1984, ApJ, 279, 763
Pace, G. & Pasquini, L. 2004, A&A, 426, 1021
Pepe, F., Mayor, M., Queloz, D., et al. 2004, A&A, 423, 385
Pepe, F., Mayor, M., Rupprecht, G., et al. 2002, The Messenger, 110, 9
Queloz, D., Henry, G. W., Sivan, J. P., et al. 2001, A&A, 379, 279
Santos, N. C., Bouchy, F., Mayor, M., et al. 2004a, A&A, 426, L19
Santos, N. C., Israelian, G., & Mayor, M. 2004b, A&A, 415, 1153
Santos, N. C., Mayor, M., Naef, D., et al. 2000, A&A, 361, 265
Santos, N. C., Mayor, M., Naef, D., et al. 2002, A&A, 392, 215
Schaerer, D., Meynet, G., Maeder, A., & Schaller, G. 1993, A&AS, 98, 523
Schaller, G., Schaerer, D., Meynet, G., & Maeder, A. 1992, A&AS, 96, 269
Sestito, P. & Randich, S. 2005, A&A, 442, 615
Udry, S., Mayor, M., Naef, D., et al. 2000, A&A, 356, 590
Wright, J. T. 2005, PASP, 117, 657
|
0704.0918 | Algebraic geometry of Gaussian Bayesian networks | ALGEBRAIC GEOMETRY OF GAUSSIAN BAYESIAN NETWORKS
SETH SULLIVANT
Abstract. Conditional independence models in the Gaussian case are algebraic varieties in the
cone of positive definite covariance matrices. We study these varieties in the case of Bayesian
networks, with a view towards generalizing the recursive factorization theorem to situations with
hidden variables. In the case when the underlying graph is a tree, we show that the vanishing
ideal of the model is generated by the conditional independence statements implied by the graph.
We also show that the ideal of any Bayesian network is homogeneous with respect to a multi-
grading induced by a collection of upstream random variables. This has a number of important
consequences for hidden variable models. Finally, we relate the ideals of Bayesian networks to
a number of classical constructions in algebraic geometry including toric degenerations of the
Grassmannian, matrix Schubert varieties, and secant varieties.
1. Introduction
A Bayesian network or directed graphical model is a statistical model that uses a directed
acyclic graph (DAG) to represent the conditional independence structures between collections
of random variables. The word Bayesian is used to describe these models because the nodes
in the graph can be used to represent random variables that correspond to parameters or hy-
perparameters, though the basic models themselves are not a priori Bayesian. These models
are used throughout computational statistics to model complex interactions between collections
of random variables. For instance, tree models are used in computational biology for sequence
alignment [4] and in phylogenetics [5, 15]. Special cases of Bayesian networks include familiar
models from statistics like factor analysis [3] and the hidden Markov model [4].
The DAG that specifies the Bayesian network specifies the model in two ways. The first is
through a recursive factorization of the parametrization, via restricted conditional distributions.
The second method is via the conditional independence statements implied by the graph. The
recursive factorization theorem [13, Thm 3.27] says that these two methods for specifying a
Bayesian network yield the same family of probability density functions.
When the underlying random variables are Gaussian or discrete, conditional independence
statements can be interpreted as algebraic constraints on the parameter space of the global
model. In the Gaussian case, this means that conditional independence corresponds to algebraic
constraints on the cone of positive definite matrices. One of our main goals in this paper is to
explore the recursive factorization theorem using algebraic techniques in the case of Gaussian
random variables, with a view towards the case of hidden random variables. In this sense, the
current paper is a generalization of the work began in [3] which concerned the special case of
factor analysis. Some past work has been done on the algebraic geometry of Bayesian networks
in the discrete case in [6, 7], but there are many open questions that remain in both the Gaussian
and the discrete case.
2 SETH SULLIVANT
In the next section, we describe a combinatorial parametrization of a Bayesian network in
the Gaussian case. In statistics, this parametrization in known as the trek rule [17]. We also
describe the algebraic interpretation of conditional independence in the Gaussian case which
leads us to our main problem: comparing the vanishing ideal of the model IG to the conditional
independence ideal CG. Section 3 describes the results of computations regarding the ideals
of Bayesian networks, and some algebraic conjectures that these computations suggest. In
particular, we conjecture that the coordinate ring of a Bayesian network is always normal and
Cohen-Macaulay.
As a first application of our algebraic perspective on Gaussian Bayesian networks, we provide a
new and greatly simplified proof of the tetrad representation theorem [17, Thm 6.10] in Section
4. Then in Section 5 we provide an extensive study of trees in the fully observed case. In
particular, we prove that for any tree T , the ideal IT is a toric ideal generated by linear forms
and quadrics that correspond to conditional independence statements implied by T . Techniques
from polyhedral geometry are used to show that C[Σ]/IT is always normal and Cohen-Macaulay.
Sections 6 and 7 are concerned with the study of hidden variable models. In Section 6 we
prove the Upstream Variables Theorem (Theorem 6.4) which shows that IG is homogeneous
with respect to a two dimensional multigrading induced by upstream random variables. As
a corollary, we deduce that hidden tree models are generated by tetrad constraints. Finally
in Section 7 we show that models with hidden variables include, as special cases, a number
of classical constructions from algebraic geometry. These include toric degenerations of the
Grassmannian, matrix Schubert varieties, and secant varieties.
Acknowledgments. I would like to thank Mathias Drton, Thomas Richardson, Mike Stillman,
and Bernd Sturmfels for helpful comments and discussions about the results in this paper. The
IMA provided funding and computer equipment while I worked on parts of this project.
2. Parametrization and Conditional Independence
Let G be a directed acyclic graph (DAG) with vertex set V (G) and edge set E(G). Often,
we will assume that V (G) = [n] := {1, 2, . . . , n}. To guarantee the acyclic assumption, we
assume that the vertices are numerically ordered ; that is, i → j ∈ E(G) only if i < j. The
Bayesian network associated to this graph can be specified by either a recursive factorization
formula or by conditional independence statements. We focus first on the recursive factorization
representation, and use it to derive an algebraic description of the parametrization. Then we
introduce the conditional independence constraints that vanish on the model and the ideal that
these constraints generate.
Let X = (X1, . . . , Xn) be a random vector, and let f(x) denote the probability density
function of this random vector. Bayes’ theorem says that this joint density can be factorized as
a product
f(x) =
fi(xi|x1, . . . , xi−1),
where fi(xi|x1, . . . , xi−1) denotes the conditional density of Xi given X1 = x1, . . . , Xi−1 = xi−1.
The recursive factorization property of the graphical model is that each of the conditional
densities fi(xi|x1, . . . , xi−1) only depends on the parents pa(i) = {j ∈ [n] | j → i ∈ E(G)}. We
ALGEBRAIC GEOMETRY OF GAUSSIAN BAYESIAN NETWORKS 3
can rewrite this representation as
fi(xi|x1, . . . , xi−1) = fi(xi|xpa(i)).
Thus, a density function f belongs to the Bayesian network if it factorizes as
f(x) =
fi(xi|xpa(i)).
To explore the consequences of this parametrization in the Gaussian case, we first need to
recall some basic facts about Gaussian random variables. Each n-dimensional Gaussian random
variable X is completely specified by its mean vector µ and its positive definite covariance matrix
Σ. Given these data, the joint density function of X is given by
f(x) =
(2π)n/2|Σ|1/2
exp(−
(x− µ)T Σ−1(x− µ)),
where |Σ| is the determinant of Σ. Rather than writing out the density every time, the short-
hand X ∼ N (µ,Σ) is used to indicate that X is a Gaussian random variable with mean µ
and covariance matrix Σ. The multivariate Gaussian generalizes the familiar “bell curve” of
a univariate Gaussian and is an important distribution in probability theory and multivariate
statistics because of the central limit theorem [1].
Given an n-dimensional random variable X and A ⊆ [n], let XA = (Xa)a∈A. Similarly, if
x is a vector, then xA is the subvector indexed by A. For a matrix Σ, ΣA,B is the submatrix
of Σ with row index set A and column index set B. Among the nice properties of Gaussian
random variables are the fact that marginalization and conditioning both preserve the Gaussian
property; see [1].
Lemma 2.1. Suppose that X ∼ N (µ,Σ) and let A,B ⊆ [n] be disjoint. Then
(1) XA ∼ N (µA,ΣA,A) and
(2) XA|XB = xB ∼ N (µA + ΣA,BΣ−1B,B(xB − µB),ΣA,A − ΣA,BΣ
B,BΣB,A).
To build the Gaussian Bayesian network associated to the DAG G, we allow any Gaussian con-
ditional distribution for the distribution f(xi|xpa(i)). This conditional distribution is recovered
by saying that
i∈pa(j)
λijXi +Wj
where Wj ∼ N (νj , ψ2j ) and is independent of the Xi with i < j, and the λij are the regression
parameters. Linear transformations of Gaussian random variables are Gaussian, and thus X is
also a Gaussian random variable. Since X is completely specified by its mean µ and covariance
matrix Σ, we must calculate these from the conditional distribution. The recursive expression
for the distribution of Xj given the variables preceding it yields a straightforward and recursive
expression for the mean and covariance. Namely
µj = E(Xj) = E(
i∈pa(j)
λijXi +Wj) =
i∈pa(j)
λijµi + νj
4 SETH SULLIVANT
and if k < j the covariance is:
σkj = E ((Xk − µk)(Xj − µj))
(Xk − µk)
i∈pa(j)
λij(Xi − µi) +Wj − νj
i∈pa(j)
λijE ((Xk − µk)(Xi − µi)) + E ((Xk − µk)(Wj − νj))
i∈pa(j)
λijσik
and the variance satisfies:
σjj = E
(Xj − µj)2
i∈pa(j)
λij(Xi − µi) +Wj − νj
i∈pa(j)
k∈pa(j)
λijλkjσik + ψ
If there are no constraints on the vector ν, there will be no constraints on µ either. Thus, we
will focus attention on the constraints on the covariance matrix Σ. If we further assume that the
ψ2j are completely unconstrained, this will imply that we can replace the messy expression for
the covariance σjj by a simple new parameter aj . This leads us to the algebraic representation
of our model, called the trek rule [17].
For each edge i → j ∈ E(G) let λij be an indeterminate and for each vertex i ∈ V (G) let ai
be an indeterminate. Assume that the vertices are numerically ordered, that is i → j ∈ E(G)
only if i < j. A collider is a pair of edges i → k, j → k with the same head. For each pair of
vertices i, j, let T (i, j) be the collection of simple paths P in G from i to j such that there is no
collider in P . Such a colliderless path is called a trek. The name trek come from the fact that
every colliderless path from i to j consists of a path from i up to some topmost element top(P )
and then from top(P ) back down to j. We think of each trek as a sequence of edges k → l. If
i = j, T (i, i) consists of a single empty trek from i to itself.
Let φG be the ring homomorphism
φG : C[σij | 1 ≤ i ≤ j ≤ n]→ C[ai, λij | i, j ∈ [n]i→ j ∈ E(G)]
σij 7→
P∈T (i,j)
atop(P ) ·
k→l∈P
When i = j, we get σii = ai. If there is no trek in T (i, j), then φG(σij) = 0. Let IG = kerφG.
Since IG is the kernel of a ring homomorphism, it is a prime ideal.
ALGEBRAIC GEOMETRY OF GAUSSIAN BAYESIAN NETWORKS 5
Example 2.2. Let G be the directed graph on four vertices with edges 1 → 2, 1 → 3, 2 → 4,
and 3→ 4. The ring homomorphism φG is given by
σ11 7→ a1 σ12 7→ a1λ12 σ13 7→ a1λ13 σ14 7→ a1λ12λ24 + a1λ13λ34
σ22 7→ a2 σ23 7→ a1λ12λ13 σ24 7→ a2λ24 + a1λ12λ13λ34
σ33 7→ a3 σ34 7→ a3λ34 + a1λ13λ12λ24
σ44 7→ a4
The ideal IG is the complete intersection of a quadric and a cubic:
σ11σ23 − σ13σ21, σ12σ23σ34 + σ13σ24σ23 + σ14σ22σ33 − σ13σ24σ33 − σ13σ22σ34 − σ14σ223
Dual to the ring homomorphism is the rational parametrization
φ∗G : R
E(G)+V (G) → R(
φ∗G(a, λ) = (
P∈T (i,j)
atop(P ) ·
k→l∈P
λkl)i,j .
We will often write σij(a, λ) to denote the coordinate polynomial that represents this function.
Let Ω ⊂ RE(G)+V (G) be the subset of parameter space satisfying the constraints:
j∈pa(i)
k∈pa(i)
λjiλkiσjk(a, λ)
for all i, where in the case that pa(i) = ∅ the sum is zero.
Proposition 2.3. [Trek Rule] The set of covariance matrices in the Gaussian Bayesian network
associated to G is the image φ∗G(Ω). In particular, IG is the vanishing ideal of the model.
The proof of the trek rule parametrization can also be found in [17].
Proof. The proof goes by induction. First, we make the substitution
i∈pa(j)
k∈pa(j)
λijλkjσik + ψ
which is valid because, given the λij ’s, ψ2j can be recovered from aj and vice versa. Clearly
σ11 = a1. By induction, suppose that the desired formula holds for all σij with i, j < n. We
want to show that σin has the same formula. Now from above, we have
σin =
k∈pa(n)
λknσik
k∈pa(n)
P∈T (i,k)
atop(P ) ·
r→s∈P
This last expression is a factorization of φ(σkn) since every trek from i to n is the union of a
trek P ∈ T (i, k) and an edge k → n where k is some parent of n. �
The parameters used in the trek rule parametrization are a little unusual because they involve
a mix of the natural parameters (regression coefficients λij) and coordinates on the image space
(variance parameters ai). While this mix might seem unusual from a statistical standpoint,
we find that this parametrization is rather useful for exploring the algebraic structure of the
covariance matrices that come from the model. For instance:
6 SETH SULLIVANT
Corollary 2.4. If T is a tree, then IT is a toric ideal.
Proof. For any pair of vertices i, j in T , there is at most one trek between i and j. Thus φ(σij)
is a monomial and IT is a toric ideal. �
In fact, as we will show in Section 5, when T is a tree, IT is generated by linear forms and
quadratic binomials that correspond to conditional independence statements implied by the
graph. Before getting to properties of conditional independence, we first note that these models
are identifiable. That is, it is possible to recover the λij and ai parameters directly from Σ. This
also allows us to determine the most basic invariant of IG, namely its dimension.
Proposition 2.5. The parametrization φ∗G is birational. In other words, the model parameters
λij and ai are identifiable and dim IG = #V (G) + #E(G).
Proof. It suffices to prove that the parameters are identifiable via rational functions of the entries
of Σ, as all the other statements follow from this. We have ai = σii so the ai parameters are
identifiable. We also know that for i < j
σij =
k∈pa(j)
σikλkj .
Thus, we have the matrix equation
Σpa(j),j = Σpa(j),pa(j)λpa(j),j
where λpa(j),j is the vector (λij)
i∈pa(j). Since Σpa(j),pa(j) is invertible in the positive definite
cone, we have the rational formula
λpa(j),j = Σ
pa(j),pa(j)
Σpa(j),j
and the λij parameters are identifiable. �
One of the problems we want to explore is the connection between the prime ideal defining the
graphical model (and thus the image of the parametrization) and the relationship to the ideal
determined by the independence statements induced by the model. To explain this connection,
we need to recall some information about the algebraic nature of conditional independence.
Recall the definition of conditional independence.
Definition 2.6. Let A, B, and C be disjoint subsets of [n], indexing subsets of the random
vector X. The conditional independence statement A⊥⊥B|C (“A is independent of B given C)
holds if and only if
f(xA, xB|xC) = f(xA|xC)f(xB|xC)
for all xC such that f(xC) 6= 0.
We refer to [13] for a more extensive introduction to conditional independence. In the Gauss-
ian case, a conditional independence statement is equivalent to an algebraic restriction on the
covariance matrix.
Proposition 2.7. Let A,B,C be disjoint subsets of [n]. Then X ∼ N (µ,Σ) satisfies the con-
ditional independence constraint A⊥⊥B|C if and only if the submatrix ΣA∪C,B∪C has rank less
than or equal to #C.
ALGEBRAIC GEOMETRY OF GAUSSIAN BAYESIAN NETWORKS 7
Proof. If X ∼ N (µ, σ), then
XA∪B|XC = xC ∼ N
µA∪B + ΣA∪B,CΣ
C,C(xC − µC),ΣA∪B,A∪B − ΣA∪B,CΣ
C,CΣC,A∪B
The CI statement A⊥⊥B|C holds if and only if (ΣA∪B,A∪B − ΣA∪B,CΣ−1C,CΣC,A∪B)A,B = 0. The
A,B submatrix of ΣA∪B,A∪B − ΣA∪B,CΣ−1C,CΣC,A∪B is easily seen to be ΣA,B − ΣA,CΣ
C,CΣC,B
which is the Schur complement of the matrix
ΣA∪C,B∪C =
ΣA,B ΣA,C
ΣC,B ΣC,C
Since ΣC,C is always invertible (it is positive definite), the Schur complement is zero if and only
if the matrix ΣA∪C,B∪C has rank less than or equal to #C. �
Given a DAG G, a collection of conditional independence statements are forced on the joint
distribution by the nature of the graph. These independence statements are usually described
via the notion of d-separation (the d stands for “directed”).
Definition 2.8. Let A, B, and C be disjoint subsets of [n]. The set C d-separates A and B if
every path in G connecting a vertex i ∈ A and B ∈ j contains a vertex k that is either
(1) a non-collider that belongs to C or
(2) a collider that does not belong to C and has no descendants that belong to C.
Note that C might be empty in the definition of d-separation.
Proposition 2.9 ([13]). The conditional independence statement A⊥⊥B|C holds for the Bayesian
network associated to G if and only if C d-separates A from B in G.
A joint probability distribution that satisfies all the conditional independence statements
implied by the graph G is said to satisfy the global Markov property of G. The following theorem
is a staple of the literature of graphical models, that holds with respect to any σ-algebra.
Theorem 2.10 (Recursive Factorization Theorem). [13, Thm 3.27] A probability density has
the recursive factorization property with respect to G if and only if it satisfies the global Markov
property.
Definition 2.11. Let CG ⊆ C[Σ] be the ideal generated by the minors of Σ corresponding to
the conditional independence statements implied by G; that is,
CG = 〈(#C + 1) minors of ΣA∪C,B∪C | C d-separates A from B in G〉 .
The ideal CG is called the conditional independence ideal of G.
A direct geometric consequence of the recursive factorization theorem is the following
Corollary 2.12. For any DAG G,
V (IG) ∩ PDn = V (CG) ∩ PDn.
In the corollary PDn ⊂ R(
2 ) is the cone of n × n positive definite symmetric matrices. It
seems natural to ask whether or not IG = CG for all DAGs G. For instance, this was true for
the DAG in Example 2.2. The Verma graph provides a natural counterexample.
8 SETH SULLIVANT
2 3 4 5
Example 2.13. Let G be the DAG on five vertices with edges 1 → 3, 1 → 5, 2 → 3, 2 → 4,
3→ 4, and 4→ 5. This graph is often called the Verma graph.
The conditional independence statements implied by the model are all implied by the three
statements 1⊥⊥2, 1⊥⊥4|{2, 3}, and {2, 3}⊥⊥5|{1, 4}. Thus, the conditional independence ideal
CG is generated by one linear form and five determinantal cubics. In this case, we find that
IG = CG + 〈f〉 where f is the degree four polynomial:
f = σ23σ24σ25σ34 − σ22σ25σ234 − σ23σ
24σ35 + σ22σ24σ34σ35
−σ223σ25σ44 + σ22σ25σ33σ44 + σ
23σ24σ45 − σ22σ24σ33σ45.
We found that the primary decomposition of CG is
CG = IG ∩ 〈σ11, σ12, σ13, σ14〉
so that f is not even in the radical of CG. Thus, the zero set of CG inside the positive semidefinite
cone contains singular covariance matrices that are not limits of distributions that belong to the
model. Note that since none of the indices of the σij appearing in f contain 1, f vanishes on
the marginal distribution for the random vector (X2, X3, X4, X5). This is the Gaussian version
of what is often called the Verma constraint. Note that this computation shows that the Verma
constraint is still needed as a generator of the unmarginalized Verma model. �
The rest of this paper is concerned with studying the ideals IG and investigating the circum-
stances that guarantee that CG = IG. We report on results of a computational study in the
next section. Towards the end of the paper, we study the ideals IG,O that arise when some of
the random variables are hidden.
3. Computational Study
Whenever approaching a new family of ideals, our first instinct is to compute as many exam-
ples as possible to gain some intuition about the structure of the ideals. This section summarizes
the results of our computational explorations.
We used Macaulay2 [9] to compute the generating sets of all ideals IG for all DAGs G on n ≤ 6
vertices. Our computational results concerning the problem of when CG = IG are summarized
in the following proposition.
Proposition 3.1. All DAGs on n ≤ 4 vertices satisfy CG = IG. Of the 302 DAGs on n = 5
vertices, exactly 293 satisfy CG = IG. Of the 5984 DAGs on n = 6 vertices exactly 4993 satisfy
CG = IG.
On n = 5 vertices, there were precisely nine graphs that fail to satisfy CG = IG. These
nine exceptional graphs are listed below. The numberings of the DAGs come from the Atlas of
ALGEBRAIC GEOMETRY OF GAUSSIAN BAYESIAN NETWORKS 9
Graphs [14]. Note that the Verma graph from Example 2.13 appears as A218 after relabeling
vertices.
(1) A139: 1→ 4, 1→ 5, 2→ 4, 3→ 4, 4→ 5.
(2) A146: 1→ 3, 2→ 3, 2→ 5, 3→ 4, 4→ 5.
(3) A197: 1→ 2, 1→ 3, 1→ 5, 2→ 4, 3→ 4, 4→ 5.
(4) A216: 1→ 2, 1→ 4, 2→ 3, 2→ 5, 3→ 4, 4→ 5.
(5) A217: 1→ 3, 1→ 4, 2→ 4, 2→ 5, 3→ 4, 4→ 5.
(6) A218: 1→ 3, 1→ 4, 2→ 3, 2→ 5, 3→ 4, 4→ 5.
(7) A275: 1→ 2, 1→ 4, 1→ 5, 2→ 3, 2→ 5, 3→ 4, 4→ 5.
(8) A277: 1→ 2, 1→ 3, 1→ 5, 2→ 4, 3→ 4, 3→ 5, 4→ 5.
(9) A292: 1→ 2, 1→ 4, 2→ 3, 2→ 5, 3→ 4, 3→ 5, 4→ 5.
The table below displays the numbers of minimal generators of different degrees for each of the
ideals IG where G is one of the nine graphs on five vertices such that CG 6= IG. The coincidences
among rows in this table arise because sometimes two different graphs yield the same family of
probability distributions. This phenomenon is known as Markov equivalence [13, 17].
Network 1 2 3 4 5
A139 3 1 2 0 0
A146 1 3 7 0 0
A197 0 1 5 0 1
A216 0 1 5 0 1
A217 2 1 2 0 0
A218 1 0 5 1 0
A275 0 1 1 1 3
A277 0 1 1 1 3
A292 0 1 1 1 3
It is worth noting the methods that we used to perform our computations, in particular,
how we computed generators for the ideals IG. Rather than using the trek rule directly, and
computing the vanishing ideal of the parametrization, we exploited the recursive nature of the
parametrization to determine IG. This is summarized by the following proposition.
Proposition 3.2. Let G be a DAG and G \ n the DAG with vertex n removed. Then
IG\n +
σin −
j∈pa(n)
λjnσij | i ∈ [n− 1]
〉⋂C[σij | i, j ∈ [n]]
where the ideal IG\n is considered as a graph on n− 1 vertices.
Proof. This is a direct consequence of the trek rule: every trek that goes to n passes through a
parent of n and cannot go below n. �
Based on our (limited) computations up to n = 6 we propose some optimistic conjectures
about the structures of the ideals IG.
10 SETH SULLIVANT
Conjecture 3.3.
IG = CG :
A⊂[n]
(|ΣA,A|)∞
Conjecture 3.3 says that all the uninteresting components of CG (that is, the components that
do not correspond to probability density functions) lie on the boundary of the positive definite
cone. Conjecture 3.3 was verified for all DAGs on n ≤ 5 vertices. Our computational evidence
also suggests that all the ideals IG are Cohen-Macaulay and normal, even for graphs with loops
and other complicated graphical structures.
Conjecture 3.4. The quotient ring C[Σ]/IG is normal and Cohen-Macaulay for all G.
Conjecture 3.4 was verified computationally for all graphs on n ≤ 5 vertices and graphs with
n = 6 vertices and less than 8 edges. We prove Conjecture 3.4 when the underlying graph is a
tree in Section 5. A more negative conjecture concerns the graphs such that IG = CG.
Conjecture 3.5. The proportion of DAGs on n vertices such that IG = CG tends to zero as
To close the section, we provide a few useful propositions for reducing the computation of the
generating set of the ideal IG to the ideals for smaller graphs.
Proposition 3.6. Suppose that G is a disjoint union of two subgraph G = G1 ∪G2. Then
IG = IG1 + IG2 + 〈σij | i ∈ V (G1), j ∈ V (G2)〉 .
Proof. In the parametrization φG, we have φG(σij) = 0 if i ∈ V (G1) and j ∈ V (G2), because
there is no trek from i to j. Furthermore, φG(σij) = φG1(σij) if i, j ∈ V (G1) and φG(σkl) =
φG2(σkl) if k, l ∈ V (G2) and these polynomials are in disjoint sets of variables. Thus, there can
be no nontrivial relations involving both σij and σkl. �
Proposition 3.7. Let G be a DAG with a vertex m with no children and a decomposition into
two induced subgraphs G = G1 ∪G2 such that V (G1) ∩ V (G2) = {m}. Then
IG = IG1 + IG2 + 〈σij | i ∈ V (G1) \ {m}, j ∈ V (G2) \ {m}〉 .
Proof. In the paremtrization φG, we have φG(σij) = 0 if i ∈ V (G1) \ {m} and j ∈ V (G2) \ {m},
because there is no trek from i to j. Furthermore φG(σij) = φG1(σij) if i, j ∈ V (G1) and
φG(σkl) = φG2(σkl) if k, l ∈ V (G2) and these polynomials are in disjoint sets of variables unless
i = j = k = l = m. However, in this final case, φG(σmm) = am and this is the only occurrence
of am in any of the expressions φG(σij). This is a consequence of the fact that vertex m has no
children. Thus, we have a partition of the σij into three sets in which φG(σij) appear in disjoint
sets of variables and there can be no nontrivial relations involving two or more of these sets of
variables. �
Proposition 3.8. Suppose that for all i ∈ [n − 1], the edge i → n ∈ E(G). Let G \ n be the
DAG obtained from G by removing the vertex n. Then
IG = IG\n · C[σij : i, j ∈ [n]].
ALGEBRAIC GEOMETRY OF GAUSSIAN BAYESIAN NETWORKS 11
Proof. Every vertex in G \ n is connected to n and is a parent of n. This implies that n cannot
appear in any conditional independence statement implied by G. Furthermore, if C d-separates
A from B in G\n, it will d-separate A from B in G, because n is below every vertex in G\n. This
implies that the CI statements that hold for G are precisely the same independence statements
that hold for G \ n. Thus
V (CG) ∩ PDn = V (CG\n · C[σij | i, j ∈ [n]]) ∩ PDn.
Since IG = I(V (CG) ∩ PDn), this implies the desired equality. �
4. Tetrad Representation Theorem
An important step towards understanding the ideals IG is to derive interpretations of the
polynomials in IG. We have an interpretation for a large part of IG, namely, the subideal
CG ⊆ IG. Conversely, we can ask when polynomials of a given form belong to the ideals IG.
Clearly, any linear polynomial in IG is a linear combination of polynomials of the form σij with
i 6= j, all of which must also belong to IG. Each linear polynomial σij corresponds to the
independence statement Xi⊥⊥Xj . Combinatorially, the linear from σij is in IG if and only if
there is no trek from i to j in G.
A stronger result of this form is the tetrad representation theorem, first proven in [17], which
gives a combinatorial characterization of when a tetrad difference
σijσkl − σilσjk
belongs to the ideal IG. The constraints do not necessarily correspond to conditional indepen-
dence statements, and need not belong to the ideal CG. This will be illustrated in Example
The original proof of the tetrad representation theorem in [17] is quite long and technical. Our
goal in this section is to show how our algebraic perspective can be used to greatly simplify the
proof. We also include this result here because we will need the tetrad representation theorem
in Section 5.
Definition 4.1. A vertex c ∈ V (G) is a choke point between sets I and J if every trek from a
point in I to a point in J contains c and either
(1) c is on the I-side of every trek from I to J , or
(2) c is on the J-side of every trek from I to J .
The set of all choke points in G between I and J is denoted C(I, J).
Example 4.2. In the graph c is a choke point between {1, 4} and {2, 3}, but is not a choke
point between {1, 2} and {3, 4}.
1 2 3 4
12 SETH SULLIVANT
Theorem 4.3 (Tetrad Representation Theorem [17]). The tetrad constraint σijσkl−σilσjk = 0
holds for all covariance matrices in the Bayesian network associated to G if and only if there is
a choke point in G between {i, k} and {j, l}.
Our proof of the tetrad representation theorem will follow after a few lemmas that lead to
the irreducible factorization of the polynomials σij(a, λ).
Lemma 4.4. In a fixed DAG G, every trek from I to J is incident to every choke point in
C(I, J) and they must be reached always in the same order.
Proof. If two choke points are on, say, the I side of every trek from I to J and there are two
treks which reach these choke points in different orders, there will be a directed cycle in G. If
the choke points c1 and c2 were on the I side and J side, respectively, and there were two treks
from I to J that reached them in a different order, this would contradict the property of being
a choke point. �
Lemma 4.5. Let i = c0, c1, . . . , ck = j be the ordered choke points in C({i}, {j}). Then the
irreducible factorization of σij(a, λ) is
σij(a, λ) =
f tij(a, λ)
where f tij(a, λ) only depends on λpq such that p and q are between choke points ct−1 and ct.
Proof. First of all, we will show that σij(a, λ) has a factorization as indicated. Then we will
show that the factors are irreducible. Define
f tij(a, λ) =
P∈T (i,j;ct−1,ct)
atop(P )
k→l∈P
where T (i, j; ct−1, ct) consists of all paths from ct−1 to ct that are partial treks from i to j (that
is, that can be completed to a trek from i to j) and atop(P ) = 1 if the top of the partial trek
P is not the top. When deciding whether or not the top is included in the partial trek, note
that almost all choke points are associated with either the {i} side or the {j} side. So there is
a natural way to decide if atop(P ) is included or not. In the exceptional case that c is a choke
point on both the {i} and the {j} side, we repeat this choke point in the list. This is because c
must be the top of every trek from i to j, and we will get a factor f tij(a, λ) = ac.
Since each ct is a choke point between i and j, the product of the monomials, one from each
f tij , is the monomial corresponding to a trek from i to j. Conversely, every monomial arises as
such a product in a unique way. This proves that the desired factorization holds.
Now we will show that each of the f tij(a, λ) cannot factorize further. Note that every monomial
in f tij(a, λ) is squarefree in all the a and λ indeterminates. This means that every monomial
appearing in f tij(a, λ) is a vertex of the Newton polytope of f
ij(a, λ). This, in turn, implies
that in any factorization f tij(a, λ) = fg there is no cancellation since in any factorization of any
polynomial, the vertices of the Newton polytope is the product of two vertices of the constituent
Newton polytopes. This means that in any factorization f tij(a, λ) = fg, f and g can be chosen
to be the sums of squarefree monomials all with coefficient 1.
ALGEBRAIC GEOMETRY OF GAUSSIAN BAYESIAN NETWORKS 13
Now let f tij(a, λ) = fg be any factorization and let m be a monomial appearing in f
ij(a, λ).
If the factorization is nontrivial m = mfmg where mf and mg are monomials in f and g
respectively. Since the factorization is nontrivial and m corresponds to a partial trek P in
T (i, j; ct−1, ct), there must exist a c on P such that, without loss of generality such that λpc
appears in mf and λcq appears in mg. Since every monomial in the expansion of fg corresponds
to a partial trek from ct−1 to ct it must be the case that every monomial in f contains an
indeterminate λsc from some s and similarly, every monomial appearing in g contains a λcs for
some s. But this implies that every partial trek from ct−1 to ct passes through c, with the same
directionality, that is, it is a choke point between i and j. However, this contradicts the fact the
C({i}, {j}) = {c0, . . . , ct}. �
Proof of Thm 4.3. Suppose that the vanishing tetrad condition holds, that is,
σijσkl = σilσkj
for all covariance matrices in the model. This factorization must thus also hold when we sub-
stitute the polynomial expressions in the parametrization:
σij(a, λ)σkl(a, λ) = σil(a, λ)σkj(a, λ).
Assuming that none of these polynomials are zero (in which case the choke condition is satisfied
for trivial reasons), this means that each factor f tij(a, λ) must appear on both the left and the
right-hand sides of this expression. This is a consequence of the fact that polynomial rings
over fields are unique factorization domains. The first factor f1ij(a, λ) could only be a factor of
σil(a, λ). There exists a unique t ≥ 1 such that f1ij · · · f
ij divides σil but f
ij · · · f
ij does not
divide σil. This implies that f
ij divides σkj . However, this implies that ct is a choke point
between i and j, between i and l, between k and j. Furthermore, this will imply that ct is a
choke point between k and l as well, which implies that ct is a choke point between {i, k} and
{j, l}.
Conversely, suppose that there is a choke point c between {i, k} and {j, l}. Our unique
factorization of the σij implies that we can write
σij = f1g1, σkl = f2g2, σil = f1g2, σkj = f2g1
where f1 and f2 corresponds to partial treks from i to c and k to c, respectively, and g1 and g2
correspond to partial treks from c to j and l, respectively. Then we have
σijσkl = f1g1f2g2 = σilσkj ,
so that Σ satisfies the tetrad constraint. �
At first glance, it is tempting to suggest that the tetrad representation theorem says that
a tetrad vanishes for every covariance matrix in the model if and only if an associated condi-
tional independence statement holds. Unfortunately, this is not true, as the following example
illustrates.
Example 4.6. Let A139 be the graph with edges 1→ 4, 1→ 5, 2→ 4, 3→ 4 and 4→ 5. Then
4 is a choke point between {2, 3} and {4, 5} and the tetrad σ24σ35 − σ25σ34 belongs to IA139 .
However, it is not implied by the conditional independence statements implied by the graph
(that is, σ24σ35 − σ25σ34 /∈ CA139). It is precisely this extra tetrad constraint that forces A139
onto the list of graphs that satisfy CG 6= IG from Section 3.
14 SETH SULLIVANT
In particular, a choke point between two sets need not be a d-separator of those sets. In the
case that G is a tree, it is true that tetrad constraints are conditional independence constraints.
Proposition 4.7. Let T be a tree and suppose that c is a choke point between I and J in T .
Then either c d-separates I \ {c} and J \ {c} or ∅ d-separates I \ {c} and J \ {c}.
Proof. Since T is a tree, there is a unique path from an element in I \ c to an element in J \ c.
If this path is not a trek, we have ∅ d-separates I \ {c} from J \ {c}. On the other hand, if this
path is always a trek we see that {c} d-separates I \ {c} from J \ {c}. �
The tetrad representation theorem gives a simple combinatorial rule for determining when a
2 × 2 minor of Σ is in IG. More generally, we believe that there should exist a graph theoretic
rule that determines when a general determinant |ΣA,B| ∈ IG in terms of structural features of
the DAG G. The technique we have used above, which relies on giving a factorization of the
polynomials σ(a, λ), does not seem like it will extend to higher order minors. One approach
at a generalization of the tetrad representation theorem would be to find a cancellation free
expression for the determinant |ΣA,B| in terms of the parameters ai and λij , along the lines of
the Gessel-Viennot theorem [8]. From such a result, one could deduce a combinatorial rule for
when |ΣA,B| is zero. This suggests the following problem.
Problem 4.8. Develop a Gessel-Viennot theorem for treks; that is, determine a combinatorial
formula for the expansion of |ΣA,B| in terms of the treks in G.
5. Fully Observed Trees
In this section we study the Bayesian networks of trees in the situation where all random
variables are observed. We show that the toric ideal IT is generated by linear forms σij and
quadratic tetrad constraints. The Tetrad Representation Theorem and Proposition 4.7 then
imply that IT = CT . We also investigate further algebraic properties of the ideals IT using the
fact that IT is a toric ideal and some techniques from polyhedral geometry.
For the rest of this section, we assume that T is a tree, where by a tree we mean a DAG
whose underlying undirected graph is a tree. These graphs are sometimes called polytrees in
the graphical models literature. A directed tree is a tree all of whose edges are directed away
from a given source vertex.
Since IT is a toric ideal, it can be analyzed using techniques from polyhedral geometry. In
particular, for each i, j such that T (i, j) is nonempty, let aij denote the exponent vector of the
monomial σij = atop(P )
k→l∈P λkl. Let AT denote the set of all these exponent vectors. The
geometry of the toric variety V (IT ) is determined by the discrete geometry of the polytope
PT = conv(AT ).
The polytope PT is naturally embedded in R2n−1, where n of the coordinates on R2n−1
correspond to the vertices of T and n − 1 of the coordinates correspond to the edges of T .
Denote the first set of coordinates by xi and the second by yij where i→ j is an edge in T . Our
first results is a description of the facet structure of the polytope PT .
Theorem 5.1. The polytope PT is the solution to the following set of equations and inequalities:
ALGEBRAIC GEOMETRY OF GAUSSIAN BAYESIAN NETWORKS 15
xi ≥ 0 for all i ∈ V (T )
yij ≥ 0 for all i→ j ∈ E(T )∑
i∈V (T ) xi = 1
i: i→j∈E(T ) yij − yjk ≥ 0 for all j → k ∈ E(T )
2xj +
i: i→j∈E(T ) yij −
k: j→k∈E(T ) yjk ≥ 0 for all j ∈ V (T ).
Proof. Let QT denote the polyhedron defined as the solution space to the given constraints.
First of all, QT is bounded. To see this, first note that because of the positive constraints and
the equation
i∈V (T ) xi = 1, we have that xi ≤ 1 is implied by the given constraints. Then,
starting from the sources of the tree and working our way down the edges repeatedly using the
inequalities xj +
i: i→j∈E(T ) yij − yjk ≥ 0, we see that the yij are also bounded.
Now, we have PT ⊆ QT , since every trek will satisfy any of the indicated constraints. Thus,
we must show that QT ⊆ PT . To do this, it suffices to show that for any vector (x0, y0) ∈ QT ,
there exists λ > 0, (x1, y1) and (x2, y2) such that
(x0, y0) = λ(x1, y1) + (1− λ)(x2, y2)
where (x1, y1) is one of the 0/1 vectors aij and (x2, y2) ∈ QT . Because QT is bounded, this
will imply that the extreme points of QT are a subset of the extreme points of PT , and hence
QT ⊆ PT . Without loss of generality we may suppose that all of the coordinates y0ij are positive,
otherwise the problem reduces to a smaller tree or forest because the resulting inequalities that
arise when yij = 0 are precisely those that are necessary for the smaller tree. Note that for
a forest F , the polytope PF is the direct join of polytopes PT as T ranges over the connected
components of F , by Proposition 3.6.
For any fixed j, there cannot exist distinct values k1, k2, and k3 such that all of
x0j +
i: i→j∈E(T )
y0ij − y
x0j +
i: i→j∈E(T )
y0ij − y
x0j +
i: i→j∈E(T )
y0ij − y
hold. If there were, we could add these three equations together to deduce that
3x0j + 3
i: i→j∈E(T )
y0ij − y
− y0jk2 − y
This in turn implies that
2x0j +
i: i→j∈E(T )
y0ij − y
− y0jk2 − y
with equality if and only if pa(j) = ∅ and x0j = 0. This in turn implies that, for instance,
y0jk1 = 0 contradicting our assumption that y
ij > 0 for all i and j. By a similar argument, if
exactly two of these facet defining inequalities hold sharply, we see that
2x0j + 2
i: i→j∈E(T )
y0ij − y
− y0jk2 = 0
16 SETH SULLIVANT
which implies that j has exactly two descendants and no parents.
Now mark each edge j → k in the tree T such that
x0j +
i: i→j∈E(T )
y0ij − y
jk = 0.
By the preceding paragraph, we can find a trek P from a sink in the tree to a source in the tree
and (possibly) back to a different sink that has the property that for no i in the trek there exists
k not in the path such that i → k is a marked edge. That is, the preceding paragraph shows
that there can be at most 2 marked edges incident to any given vertex.
Given P , let (x1, y1) denote the corresponding 0/1 vector. We claim that there is a λ > 0
such that
(1) (x0, y0) = λ(x1, y1) + (1− λ)(x2, y2)
holds with (x2, y2) ∈ QT . Take λ > 0 to be any very small number and define (x2, y2) by the
given equation. Note that by construction the inequalities x2i ≥ 0 and y
ij ≥ 0 will be satisfied
since for all the nonzero entries in (x1, y1), the corresponding inequality for (x0, y0) must have
been nonstrict and λ is small. Furthermore, the constraint
x2i = 1 is also automatically
satisfied. It is also easy to see that the last set of inequalities will also be satisfied since through
each vertex the path will either have no edges, an incoming edge and an outgoing edge, or two
outgoing edges and the top vertex, all of which do not change the value of the linear functional.
Finally to see that the inequalities of the form
i: i→j∈E(T )
yij − yjk ≥ 0
are still satisfied by (x2, y2), note that marked edges of T are either contained in the path
P or not incident to the path P . Thus, the strict inequalities remain strict (since they will
involve modifying by an incoming edge and an outgoing edge or an outgoing edge and the top
vertex), and the nonstrict inequalities remain nonstrict since λ is small. Thus, we conclude that
QT ⊆ PT , which completes the proof. �
Corollary 5.2. Let ≺ be any reverse lexicographic term order such that σii � σjk for all i and
j 6= k. Then in≺(IT ) is squarefree. In other words, the associated pulling triangulation of PT is
unimodular.
Proof. The proof is purely polyhedral, and relies on the geometric connections between trian-
gulations and initial ideals of toric ideals. See Chapter 8 in [19] for background on this material
including pulling triangulations. Let aij denote the vertex of PT corresponding to the monomial
φG(σij). For i 6= j, each of the vertices aij has lattice distance at most one from any of the
facets described by Theorem 5.1. This is seen by evaluating each of the linear functionals at the
0/1 vector corresponding to the trek between i and j.
If we pull from one of these vertices we get a unimodular triangulation provided that the
induced pulling triangulation on each of the facets of PT not containing aij is unimodular. This
is because the normalized volume of a simplex is the volume of the base times the lattice distance
from the base to the vertex not on the base.
The facet defining inequalities of any face of PT are obtained by taking an appropriate subset
of the facet defining inequalities of PT . Thus, as we continue the pulling triangulation, if the
ALGEBRAIC GEOMETRY OF GAUSSIAN BAYESIAN NETWORKS 17
current face contains a vertex aij with i 6= j, we will pull from this vertex first and get a
unimodular pulling triangulation provided the induced pulling triangulation of every face is
unimodular. Thus, by induction, it suffices to show that the faces of PT that are the convex
hull of vertices aii have unimodular pulling triangulations. However, these faces are always
unimodular simplices. �
Corollary 5.3. The ring C[Σ]/IT is normal and Cohen-Macaulay when T is a tree.
Proof. Since PT has a unimodular triangulation, it is a normal polytope and hence the semigroup
ring C[Σ]/IT is normal. Hochster’s theorem [10] then implies that C[Σ]/IT is Cohen-Macaulay.
While we know that C[Σ]/IT is always Cohen-Macaulay, it remains to determine how the
Cohen-Macaulay type of IT depends on the underlying tree T . Here is a concrete conjecture
concerning the special case of Gorenstein trees.
Conjecture 5.4. Suppose that T is a directed tree. Then C[Σ]/IT is Gorenstein if and only if
the degree of every vertex in T is less than or equal to three.
A downward directed tree is a tree all of whose edges point to the unique sink in the tree. A
leaf of such a downward directed tree is then a source of the tree. With a little more refined
information about which inequalities defining PT are facet defining, we can deduce results about
the degrees of the ideals IT in some cases.
Corollary 5.5. Let T be a downward directed tree and let i be any leaf of T , s the sink of T ,
and P the unique trek in T (i, s). Then
deg IT =
k→l∈P
deg IT\k→l
where T \ k → l denotes the forest obtained from T by removing the edge k → l.
Proof. First of all, note that in the case of a downward directed tree the inequalities of the form
2xj +
i: i→j∈E(T )
yij −
k: j→k∈E(T )
yjk ≥ 0
are redundant: since each vertex has at most one descendant, it is implied by the the other
constraints. Also, for any source t, the inequality xt ≥ 0 is redundant, because it is implied by
the inequalities xt − ytj ≥ 0 and ytj ≥ 0 where j is the unique child of t.
Now we will compute the normalized volume of the polytope PT (which is equal to the degree
of the toric ideal IT ) by computing the pulling triangulation from Corollary and relating the
volumes of the pieces to the associated subforests.
Since the pulling triangulation of PT with ais pulled first is unimodular, the volume of PT is
the sum of the volumes of the facets of PT that do not contain ais. Note that ais lies on all the
facets of the form
i: i→j∈E(T )
yij − yjk ≥ 0
since through every vertex besides the source and sink, the trek has either zero or two edges
incident to it. Thus, the only facets that ais does not lie on are of the form ykl ≥ 0 such that
18 SETH SULLIVANT
k → l is an edge in the trek P . However, the facet of PT obtained by setting ykl = 0 is precisely
the polytope PT\k→l, which follows from Theorem 5.1. �
Note that upon removing an edge in a tree we obtain a forest. Proposition 3.6 implies that
the degree of such a forest is the product of the degrees of the associated trees. Since the degree
of the tree consisting of a single point is one, the formula from Corollary 5.5 yields a recursive
expression for the degree of a downward directed forest.
Corollary 5.6. Let Tn be the directed chain with n vertices. Then deg ITn =
, the n−1st
Catalan number.
Proof. In Corollary 5.5 we take the unique path from 1 to n. The resulting forests obtained
by removing an edge are the disjoint unions of two paths. By the product formula implied by
Proposition 3.6 we deduce that the degree of ITn satisfies the recurrence:
deg ITn =
deg ITi · deg ITn−i
with initial condition deg IT1 = 1. This is precisely the recurrence and initial conditions for the
Catalan numbers [18]. �
Now we want to prove the main result of this section, that the determinantal conditional
independence statements actually generate the ideal IT when T is a tree. To do this, we will
exploit the underlying toric structure, introduce a tableau notation for working with monomials,
and introduce an appropriate ordering of the variables.
Each variable σij that is not zero can be identified with the unique trek in T from i to j.
We associate to σij the tableau which records the elements of T in this unique trek, which is
represented like this:
σij = [aBi|aCj]
where B and C are (possibly empty) strings. If, say, i were at the top of the path, we would
write the tableau as
σij = [i|iCj].
The tableau is in its standard form if aBi is lexicographically earlier than aCj. We introduce a
lexicographic total order on standard form tableau variables by declaring [aA|aB] ≺ [cC|cD] if
aA is lexicographically smaller that cC, or if aA = cC and aB is lexicographically smaller than
cD. Given a monomial, its tableau representation is the row-wise concatenation of the tableau
forms of each of the variables appearing in the monomial.
Example 5.7. Let T be the tree with edges 1 → 3, 1 → 4, 2 → 4, 3 → 5, 3 → 6, 4 → 7,
and 4→ 8. Then the monomial σ14σ18σ24σ234σ38σ57σ78 has the standard form lexicographically
ALGEBRAIC GEOMETRY OF GAUSSIAN BAYESIAN NETWORKS 19
ordered tableau:
1 148
13 14
13 14
13 148
135 147
47 48
Note that if a variable appears to the d-th power in a monomial, the representation for this
variable is repeated as d rows in the tableau. �
When we write out general tableau, lower-case letters will always correspond to single char-
acters (possibly empty) and upper case letters will always correspond to strings of characters
(also, possibly empty).
Theorem 5.8. For any tree T , the conditional independence statements implied by T generate
IT . In particular, IT is generated by linear polynomials σij and quadratic tetrad constraints.
Proof. First of all, we can ignore the linear polynomials as they always correspond to indepen-
dence constraints and work modulo these linear constraints when working with the toric ideal
IT . In addition, every quadratic binomial of the form σijσkl − σilσkj that belongs to IT is im-
plied by a conditional independence statement. This follows from Proposition 4.7. Note that
this holds even if the set {i, j, k, l} does not have four elements. Thus, it suffices to show that
IT modulo the linear constraints is generated by quadratic binomials.
To show that IT is generated by quadratic binomials, it suffices to show that any binomial in
IT can be written as a polynomial linear combination of the quadratic binomials in IT . This, in
turn, will be achieved by showing that we can “move” from the tableau representation of one
of the monomials to the other by making local changes that correspond to quadratic binomials.
To show this last part, we will define a sort of distance between two monomials and show that
it is always possible to decrease this distance using these quadratic binomials/ moves. This is a
typical trick for dealing with toric ideals, illustrated, for instance, in [19].
To this end let f be a binomial in IT . Without loss of generality, we may suppose the terms
of f have no common factors, because if σa · f ∈ IT then f ∈ IT as well. We will write f as the
difference of two tableaux, which are in standard form with their rows lexicographically ordered.
The first row in the two tableaux are different and they have a left-most place where they
disagree. We will show that we can always move this position further to the right. Eventually
the top rows of the tableaux will agree and we can delete this row (corresponding to the same
variable) and arrive at a polynomial of smaller degree.
Since f ∈ IT , the treks associated to the top rows of the two tableaux must have the same
top. There are two cases to consider. Either the first disagreement is immediately after the top
or not. In the first case, this means that the binomial f must have the form:[
abB acC
abB adD
20 SETH SULLIVANT
Without loss of generality we may suppose that c < d. Since f ∈ IT the string ac must appear
somewhere on the right-hand monomial. Thus, f must have the form:
abB acC
abB adDaeE acC ′
If d 6= e, we can apply the quadratic binomial[
abB adD
aeE acC ′
abB acC ′
aeE adD
to the second monomial to arrive at a monomial which has fewer disagreements with the left-
hand tableau in the first row. On the other hand, if d = e, we cannot apply this move (its
application results in “variables” that do not belong to C[Σ]). Keeping track of all the ad
patterns that appear on the right-hand side, and the consequent ad patterns that appear on the
left-hand side, we see that our binomial f has the form
abB acC
ad∗ ∗
ad∗ ∗
−
abB adD
adD′ acC ′
ad∗ ∗
ad∗ ∗
.
Since there are the same number of ad’s on both sides we see that there is at least one more a
on the right-hand side which has no d’s attached to it. Thus, omitting the excess ad’s on both
sides, our binomial f contains:
abB acC
abB adDadD′ acC ′
aeE agG
with d 6= e or g. We can also assume that c 6= e, g otherwise, we could apply a quadratic move
as above. Thus we apply the quadratic binomials[
adD′ acC ′
aeE agG
adD′ agG
aeE acC ′
and [
abB adD
aeE acC ′
abB acC ′
aeE adD
to reduce the number of disagreements in the first row. This concludes the proof of the first
case. Now suppose that the first disagreement does not occur immediately after the a. Thus we
may suppose that f has the form:[
aAxbB aC
aAxdD aE
Note that it does not matter whether or not this disagreement appears on the left-hand or
right-hand side of the tableaux. Since the string xd appears on right-hand monomial it must
also appear somewhere on the left-hand monomial as well. If x is not the top in this occurrence,
ALGEBRAIC GEOMETRY OF GAUSSIAN BAYESIAN NETWORKS 21
we can immediately apply a quadratic binomial to reduce the discrepancies in the first row. So
we may assume the f has the form:
aAxbB aCxdD′ xgG
aAxdD aE
If b 6= g we can apply the quadratic binomial[
aAxbB aC
xdD′ xgG
aAxdD′ aC
xbB xgG
to the left-hand monomial to reduce the discrepancies in the first row. So suppose that g = b.
Enumerating the xb pairs that can arise on the left and right hand monomials, we deduce, akin
to our argument in the first case above, that f has the form:
aAxbB aC
xdD′ xbG
xhH xkK
aAxdD aE
where h and k are not equal to b or d. Then we can apply the two quadratic binomials:[
xdD′ xbG
xhH xkK
xhH xbG
xdD′ xkK
and [
aAxbB aC
xdD′ xkK
aAxdD′ aC
xbB xkK
to the left-hand monomial to produce a monomial with fewer discrepancies in the first row. We
have shown that no matter what type of discrepancy that can occur in the first row, we can
always apply quadratic moves to produce fewer discrepancies. This implies that IT is generated
by quadrics. �
Among the results in this section were our proofs that IT has a squarefree initial ideal (and
hence C[Σ]/IT is normal and Cohen-Macaulay) and that IT is generated by linear forms and
quadrics. It seems natural to wonder if there is a term order that realizes these two features
simultaneously.
Conjecture 5.9. There exists a term order ≺ such that in≺(IT ) is generated by squarefree
monomials of degree one and two.
6. Hidden Trees
This section and the next concern Bayesian networks with hidden variables. A hidden or
latent random variable is one which we do not have direct access to. These hidden variables
might represent theoretical quantities that are directly unmeasurable (e.g. a random variable
representing intelligence), variables we cannot have access to (e.g. information about extinct
species), or variables that have been censored (e.g. for sensitive random variables in census
data). If we are given a model over all the observed and hidden random variables, the partially
22 SETH SULLIVANT
observed model is the one obtained by marginalizing over the hidden random variables. A
number of interesting varieties arise in this hidden variable setting.
For Gaussian random variables, the marginalization is again Gaussian, and the mean and
covariance matrix are obtained by extracting the subvector and submatrix of the mean and
covariance matrix corresponding to the observed random variables. This immediately yields the
following proposition.
Proposition 6.1. Let I ⊆ C[µ,Σ] be the vanishing ideal for a Gaussian model. Let H ∪O = [n]
be a partition of the random variables into hidden and observed variables H and O. Then
IO := I ∩ C[µi, σij | i, j ∈ O]
is the vanishing ideal for the partially observed model.
Proof. Marginalization in the Gaussian case corresponds to projection onto the subspace of pairs
(µO,ΣO,O) ⊆ R|O| × R(
|O|+1
2 ). Coordinate projection is equivalent to elimination [2]. �
In the case of a Gaussian Bayesian network, Proposition 6.1 has a number of useful corollaries,
of both a computational and theoretical nature. First of all, it allows for the computation of the
ideals defining a hidden variable model as an easy elimination step. Secondly, it can be used to
explain the phenomenon we observed in Example 2.13, that the constraints defining a hidden
variable model appeared as generators of the ideal of the fully observed model.
Definition 6.2. Let H ∪ O be a partition of the nodes of the DAG G. The hidden nodes H
are said to be upstream from the observed nodes O in G if there are no edges o→ h in G with
o ∈ O and h ∈ H.
If H∪O is an upstream partition of the nodes of G, we introduce a grading on the ring C[a, λ]
which will, in turn, induce a grading on C[Σ]. Let deg ah = (1, 0) for all h ∈ H, deg ao = (1, 2)
for all o ∈ O, deg λho = (0, 1) if h ∈ H and o ∈ O, and deg λij = (0, 0) otherwise.
Lemma 6.3. Suppose that H ∪O = [n] is an upstream partition of the vertices of G. Then each
of the polynomials φG(σij) is homogeneous with respect to the upstream grading and
deg(σij) =
(1, 0) if i ∈ H, j ∈ H
(1, 1) if i ∈ H, j ∈ O or i ∈ O, j ∈ H
(1, 2) if i ∈ O, j ∈ O.
Thus, IG is homogeneous with respect to the induced grading on C[Σ].
Proof. There are three cases to consider. If both i, j ∈ H, then every trek in T (i, j) has a top
element in H and no edges of the form h→ o. In this case, the degree of each path is the vector
(1, 0). If i ∈ H and j ∈ O, every trek from i to j has a top in H and exactly one edge of the
form h→ o. Thus, the degree of every monomial in φ(σij) is (1, 1). If both i, j ∈ O, then either
each trek P from i to j has a top in O, or has a top in H. In the first case there can be no
edges in P of the form h → o, and in the second case there must be exactly two edges in P of
the form h→ o. In either case, the degree of the monomial corresponding to P is (1, 2). �
Note that the two dimensional grading we have described can be extended to an n dimensional
grading on the ring C[Σ] by considering all collections of upstream variables in G simultaneously.
ALGEBRAIC GEOMETRY OF GAUSSIAN BAYESIAN NETWORKS 23
Theorem 6.4 (Upstream Variables Theorem). Let H∪O be an upstream partition of the vertices
of G. Then every minimal generating set of IG that is homogeneous with respect to the upstream
grading contains a minimal generating set of IG,O.
Proof. The set of indeterminates σij corresponding to the observed variables are precisely the
variables whose degrees lie on the facet of the degree semigroup generated by the vector (1, 2).
This implies that the subring generated by these indeterminates is a facial subring. �
The upstream variables theorem is significant because any natural generating set of an ideal
I is homogeneous with respect to its largest homogeneous grading group. For instance, every
reduced Gröbner basis if IG will be homogeneous with respect to the upstream grading. For
trees, the upstream variables theorem immediately implies:
Corollary 6.5. Let T be a rooted directed tree and O consist of the leaves of T . Then IT,O is
generated by the quadratic tetrad constraints
σikσjl − σilσkj
such that i, j, k, l ∈ O, and there is a choke point c between {i, j} and {k, l}.
Corollary 6.5 says that the ideal of a hidden tree model is generated by the tetrad constraints
induced by the choke points in the tree. Sprites et al [17] use these tetrad constraints as a tool
for inferring DAG models with hidden variables. Given a sample covariance matrix, they test
whether a collection of tetrad constraints is equal to zero. From the given tetrad constraints
that are satisfied, together with the tetrad representation theorem, they construct a DAG that
is consistent with these vanishing tetrads. However, it is not clear from that work whether or
not it is enough to consider only these tetrad constraints. Indeed, as shown in [17], there are
pairs of graphs with hidden nodes that have precisely the same set of tetrad constraints that do
not yield the same family of covariance matrices. Theorem 6.5 can be seen as a mathematical
justification of the tetrad procedure of Spirtes, et al, in the case of hidden tree models, because
it shows that the tetrad constraints are enough to distinguish between the covariance matrices
coming from different trees.
7. Connections to Algebraic Geometry
In this section, we give families of examples to show how classical varieties from algebraic
geometry arise in the study of Gaussian Bayesian networks. In particular, we show how toric
degenerations of the Grassmannian, matrix Schubert varieties, and secant varieties all arise as
special cases of Gaussian Bayesian networks with hidden variables.
7.1. Toric Initial Ideals of the Grassmannian. Let Gr2,n be the Grassmannian of 2-planes
in Cn. The Grassmannian has the natural structure of an algebraic variety under the Plücker
embedding. The ideal of the Grassmannian is generated by the quadratic Plücker relations:
I2,n := I(Gr2,n) = 〈σijσkl − σikσjl + σilσjk | 1 ≤ i < j < k < l ≤ n〉 ⊂ C[Σ].
The binomial initial ideals of I2,n are in bijection with the unrooted trivalent trees with n
leaves. These binomial initial ideals are, in fact, toric ideals, and we will show that:
24 SETH SULLIVANT
Theorem 7.1. Let T be a rooted directed binary tree with [n] leaves and let O be the set of
leaves of T . Then there is a weight vector ω ∈ R(
2 ) and a sign vector τ ∈ {±1}(
2 ) such that
IT,O = τ · inω(I2,n).
The sign vector τ acts by multiplying coordinate σij by τij .
Proof. The proof idea is to show that the toric ideals IT,O have the same generators as the toric
initial ideals of the Grassmannian that have already been characterized in [16]. Without loss
of generality, we may suppose that the leaves of T are labeled by [n], that the tree is drawn
without edge crossings, and the leaves are labeled in increasing order from left to right. These
assumptions will allow us to ignore the sign vector τ in the proof. The sign vector results from
straightening the tree and permuting the columns in the Steifel coordinates. This results in sign
changes in the Plücker coordinates.
In Corollary 6.5, we saw that IT,O was generated by the quadratic relations
σikσjl − σilσkj
such that there is a choke point in T between {i, j} and {k, l}. This is the same as saying
that the induced subtree of T on {i, j, k, l} has the split {i, j}|{k, l}. These are precisely the
generators of the toric initial ideals of the Grassmannian G2,n identified in [16]. �
In the preceding Theorem, any weight vector ω that belongs to the relative interior of the cone
of the tropical Grassmannian corresponding to the tree T will serve as the desired partial term
order. We refer to [16] for background on the tropical Grassmannian and toric degenerations of
the Grassmannian. Since and ideal and its initial ideals have the same Hilbert function, we see
Catalan numbers emerging as degrees of Bayesian networks yet again.
Corollary 7.2. Let T be a rooted, directed, binary tree and O consist of the leaves of T . Then
deg IT,O = (1/(n−1)) (2n−4 choose n−2), the (n − 2)-nd Catalan number, where n is the number of leaves of T .
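A quick Python check of these degrees (ours), using the closed form C_m = binom(2m, m)/(m + 1):

    from math import comb

    def catalan(m):
        # m-th Catalan number: binom(2m, m) / (m + 1)
        return comb(2 * m, m) // (m + 1)

    # deg I_{T,O} = C_{n-2} for a binary tree with n leaves
    for n in range(4, 9):
        print(n, catalan(n - 2))   # 2, 5, 14, 42, 132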
The fact that binary hidden tree models are toric degenerations of the Grassmannian has
potential use in phylogenetics. Namely, it suggests a family of new models, of the same di-
mension as the binary tree models, that could be used to interpolate between the various tree
models. That is, rather than choosing a weight vector in a full-dimensional cone of the tropical
Grassmannian, we could choose a weight vector ω that sits inside a lower-dimensional cone.
The varieties of the initial ideals V (inω(I2,n)) then correspond to models that sit somewhere
“between” the models of the full-dimensional trees of the maximal-dimensional cones
containing ω. Phylogenetic recovery algorithms could reference these in-between models to indi-
cate some uncertainty about the relationships between a given collection of species or on a given
branch of the tree. These new models have the advantage that they have the same dimension
as the tree models and so there is no need for dimension penalization in model selection.
7.2. Matrix Schubert Varieties. In this section, we will describe how certain varieties called
matrix Schubert varieties arise as special cases of the varieties of hidden variable models for
Gaussian Bayesian networks. More precisely, the variety for the Gaussian Bayesian network will
be the cone over one of these matrix Schubert varieties. To do this, we first need to recall some
equivalent definitions of matrix Schubert varieties.
Let w be a partial permutation matrix, which is an n × n 0/1 matrix with at most one 1 in
each row and column. The matrix w is in the affine space Cn×n. The Borel group B of upper
triangular matrices acts on Cn×n on the right by multiplication and on the left by multiplication
by the transpose.
Definition 7.3. The matrix Schubert variety Xw is the orbit closure of w by the action of B
on the right and left:
Xw = \overline{BTwB}.
Let Iw be the vanishing ideal of Xw.
The matrix Schubert variety Xw ⊆ Cn×n, so we can identify its coordinate ring with a quotient
of C[σij | i ∈ [n], j ∈ [n′]]. Throughout this section, [n′] = {1′, 2′, . . . , n′} is a set of n symbols
that we use to distinguish from [n] = {1, 2, . . . , n}.
An equivalent definition of a matrix Schubert variety comes as follows. Let S(w) = {(i, j) | wij =
1} be the index set of the ones in w. For each (i, j) let Mij be the variety of rank-one matrices
Mij = { x ∈ Cn×n | rank x ≤ 1, xkl = 0 if k < i or l < j }.
Then
Xw = Σ(i,j)∈S(w) Mij ,
where the sum denotes the pointwise Minkowski sum of the varieties. Since the Mij are cones over
projective varieties, this is the same as taking the join, defined in the next section.
Example 7.4. Let w be the partial permutation matrix
w = ( 1 0 0
      0 1 0
      0 0 0 ).
Then Xw consists of all 3 × 3 matrices of rank ≤ 2 and Iw = 〈 |Σ[3],[3′]| 〉. More generally, if w is
a partial permutation matrix of the block form
w = ( Ed 0
      0  0 ),
where Ed is a d× d identity matrix, then Iw is the ideal of (d+ 1) minors of a generic matrix. □
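A numerical sanity check of this example (our sketch): sample one point from each rank-one piece M11 and M22, add them, and confirm that the sum has rank at most two, so the 3 × 3 determinant generating Iw vanishes.

    import numpy as np

    rng = np.random.default_rng(2)

    # M_{11}: an arbitrary rank-one 3x3 matrix
    A = np.outer(rng.normal(size=3), rng.normal(size=3))

    # M_{22}: a rank-one 3x3 matrix with first row and first column forced to zero
    u = np.concatenate(([0.0], rng.normal(size=2)))
    v = np.concatenate(([0.0], rng.normal(size=2)))
    B = np.outer(u, v)

    X = A + B                           # a point of the Minkowski sum X_w
    print(np.linalg.det(X))             # ~ 0
    print(np.linalg.matrix_rank(X))     # 2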
The particular Bayesian networks which yield the desired varieties come from taking certain
partitions of the variables. In particular, we assume that the observed variables come in two
types labeled by [n] = {1, 2, . . . , n} and [n′] = {1′, 2′, . . . , n′}. The hidden variables will be
labeled by the set S(w).
Define the graph G(w) with vertex set V = [n] ∪ [n′] ∪ S(w) and edge set consisting of edges
k → l for all k < l ∈ [n], k′ → l′ for all k′ < l′ ∈ [n′], (i, j) → k for all (i, j) ∈ S(w) and k ≥ i
and (i, j)→ k′ for all (i, j) ∈ S(w) and k′ ≥ j.
Theorem 7.5. The generators of the ideal Iw defining the matrix Schubert variety Xw are the
same as the generators of the ideal IG(w),[n]∪[n′] of the hidden variable Bayesian network for the
DAG G(w) with observed variables [n] ∪ [n′]. That is,
Iw · C[σij | i, j ∈ [n] ∪ [n′]] = IG(w),[n]∪[n′].
Proof. The proof proceeds in a few steps. First, we give a parametrization of a cone over
the matrix Schubert variety, whose ideal is naturally seen to be Iw · C[σij | i, j ∈ [n] ∪ [n′]].
Then we describe a rational transformation φ on C[σij | i, j ∈ [n] ∪ [n′]] such that φ(Iw) =
IG(w),[n]∪[n′]. We then exploit that fact that this transformation is invertible and the elimination
ideal IG(w),[n]∪[n′] ∩ C[σij | i ∈ [n], j ∈ [n′]] is fixed to deduce the desired equality.
First of all, we give our parametrization of the ideal Iw. To do this, we need to carefully
identify all parameters involved in the representation. First of all, we split the indeterminates
in the ring C[σij | i, j ∈ [n]∪ [n′]] into three classes of indeterminates: those with i, j ∈ [n], those
with i, j ∈ [n′], and those with i ∈ [n] and j ∈ [n′]. Then we define a parametrization φw which
is determined as follows:
φw : C[σij | i, j ∈ [n] ∪ [n′]] → C[τ, γ, a, λ],
φw(σij) = τij if i, j ∈ [n],
φw(σij) = γij if i, j ∈ [n′],
φw(σij) = Σ(k,l)∈S(w): k≤i, l≤j a(k,l) λ(k,l),i λ(k,l),j if i ∈ [n], j ∈ [n′].
Let Jw = kerφw. Since the τ , γ, λ, and a parameters are all algebraically independent, we
deduce that in Jw, there will be no generators that involve combinations of the three types
of indeterminates in C[σij | i, j ∈ [n] ∪ [n′]]. Furthermore, restricting to the first two types of
indeterminates, there will not be any nontrivial relations involving these types of indeterminates.
Thus, to determine Jw, it suffices to restrict to the ideal among the indeterminates of the form
σij such that i ∈ [n] and j ∈ [n′]. However, considering the parametrization in this case, we see
that this is precisely the parametrization of the ideal Iw, given as the Minkowski sum of rank
one matrices. Thus, Jw = Iw.
Now we will define a map φ : C[σij ]→ C[σij ] which sends Jw to another ideal, closely
related to IG(w),[n]∪[n′]. To define this map, first, we use the fact that from the submatrix
Σ[n],[n] we can recover the λij and ai parameters associated to [n], when only considering the
complete subgraph associated to graph G(w)[n] (and ignoring the treks that involve the vertices
(k, l) ∈ S(w)). This follows because these parameters are identifiable by Proposition 2.5. A
similar fact holds when restricting to the subgraph G(w)[n′]. The ideal Jw we have defined thus
far can be considered as the vanishing ideal of a parametrization which gives the complete graph
parametrization for G(w)[n] and G(w)[n′] and a parameterization of the matrix Schubert variety
Xw on the σij with i ∈ [n] and j ∈ [n′]. So we can rationally recover the λ and a parameters
associated to the subgraphs G(w)[n] and G(w)[n′].
For each j < k pair in [n] or in [n′], define the partial trek polynomial
sjk(λ) = Σ j=l0<l1<···<lm=k  λl0l1 λl1l2 · · · λlm−1lm .
We fit these into two upper triangular matrices S and S′ where Sjk = sjk if j < k with j, k ∈ [n],
Sjj = 1 and Sjk = 0 otherwise, with a similar definition for S′ with [n] replaced by [n′]. Now
we are ready to define our map. Let φ be the rational map φ : C[Σ] → C[Σ] which leaves σij
fixed if i, j ∈ [n] or i, j ∈ [n′], and maps σij with i ∈ [n] and j ∈ [n′] by sending
Σ[n],[n′] 7→ S Σ[n],[n′] (S′)T .
This is actually a rational map, because the λij that appear in the formula for sjk are expressed
as rational functions in terms of the σij by the rational parameter recovery formula of Proposition
2.5. Since this map transforms Σ[n],[n′] by multiplying on the left and right by lower and upper
triangular matrices, this leaves the ideal Jw ∩ C[σij | i ∈ [n], j ∈ [n′]] fixed. Thus Jw ⊆ φ(Jw).
On the other hand φ is invertible on Jw so Jw = φ(Jw).
If we think about the formulas for the image φ◦φw, we see that the formulas for σij with i ∈ [n]
and j ∈ [n′] in terms of parameters are the correct formulas which we would see coming from
the parametrization φG(w). On the other hand, the formulas for σij with i, j ∈ [n] or i, j ∈ [n′]
are the formulas for the restricted graph G[n] and G[n′], respectively. Since every trek contained
in G[n] or G[n′] is a trek in G(w), we see that the current parametrization of Jw is only “almost
correct”, in that it is only missing terms corresponding to treks that go outside of G(w)[n] or
G(w)[n′]. Denote this map by ψw, and let φG(w) be the actual parametrizing map of the model.
Thus, we have, for each σij with i, j ∈ [n] or i, j ∈ [n′], φG(w)(σij) = ψw(σij) + rw(σij), where
rw(σij) is a polynomial remainder term that does not contain any ai with i ∈ [n] ∪ [n′], when
i, j ∈ [n] or i, j ∈ [n′], and rw(σij) = 0 otherwise. On the other hand, every term of ψw(σij) will
involve exactly one ai with i ∈ [n] ∪ [n′], when i, j ∈ [n] or i, j ∈ [n′].
Now we define a weight ordering ≺ on the ring C[a, λ] that gives deg ai = 1 if i ∈ [n]∪ [n′] and
deg ai = 0 otherwise and deg λij = 0 for all i, j. Then, the largest degree term of φG(w)(σij) with
respect to this weight ordering is ψw(σ). Since highest weight terms must all cancel with each
other, we see that f ∈ IG(w),[n]∪[n′], implies that f ∈ Jw. Thus, we deduce that IG(w),[n]∪[n′] ⊆ Jw.
On the other hand,
IG(w),[n]∪[n′] ∩ C[σij | i ∈ [n], j ∈ [n′]] = Jw ∩ C[σij | i ∈ [n], j ∈ [n′]]
and since the generators of Jw ∩ C[σij | i ∈ [n], j ∈ [n′]] generate Jw, we deduce that Jw ⊆
IG(w),[n]∪[n′] which completes the proof. �
The significance of Theorem 7.5 comes from the work of Knutson and Miller [11]. They gave
a complete description of antidiagonal Gröbner bases for the ideals Iw. Indeed, these ideals
are generated by certain subdeterminants of the matrix Σ[n],[n′]. These determinants can be
interpreted combinatorially in terms of the graph G(w).
Theorem 7.6. [11] The ideal Iw defining the matrix Schubert variety is generated by the con-
ditional independence statements implied by the DAG G(w). In particular,
Iw = 〈 (#C + 1) minors of ΣA,B | A ⊂ [n], B ⊂ [n′], C ⊂ S(w), and C d-separates A from B 〉.
7.3. Joins and Secant Varieties. In this section, we will show how joins and secant varieties
arise as special cases of Gaussian Bayesian networks in the hidden variable case. This, in turn,
implies that techniques that have been developed for studying defining equations of joins and
secant varieties (e.g. [12, 20]) might be useful for studying the equations defining these hidden
variable models.
Given two ideals I and J in a polynomial ring K[x] = K[x1, . . . , xm], their join is the new
ideal
I ∗ J := (I(y) + J(z) + 〈xi − yi − zi | i ∈ [m]〉) ∩K[x]
where I(y) is the ideal obtained from I by plugging in the variables y1, . . . , ym for x1, . . . , xm.
The secant ideal is the iterated join:
I{r} = I ∗ I ∗ · · · ∗ I
with r copies of I. If I and J are homogeneous radical ideals over an algebraically closed field,
the join ideal I ∗ J is the vanishing ideal of the join variety which is defined geometrically by
the rule
V (I ∗ J) = V (I) ∗ V (J) = \overline{ ∪a∈V (I) ∪b∈V (J) < a, b > },
where < a, b > denotes the line spanned by a and b and the bar denotes the Zariski closure.
Suppose further that I and J are the vanishing ideals of parametrizations; that is, there are
φ and ψ such that
φ : C[x]→ C[θ] and ψ : C[x]→ C[η]
and I = kerφ and J = kerψ. Then I ∗ J is the kernel of the map
φ+ ψ : C[x]→ C[θ, η]
xi 7→ φ(xi) + ψ(xi).
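As a toy instance of this kernel description (our sketch, with hypothetical parameter names): take both I and J to be the ideal of 2 × 2 minors of a 3 × 3 matrix, whose variety consists of rank-one matrices. Summing two rank-one parametrizations on disjoint parameters lands on the secant variety of rank-at-most-two matrices, so the 3 × 3 determinant lies in I ∗ J = I{2}.

    import numpy as np

    rng = np.random.default_rng(3)

    def rank_one(theta):           # parametrization phi: theta = (a, b) -> a b^T
        a, b = theta
        return np.outer(a, b)

    theta = (rng.normal(size=3), rng.normal(size=3))
    eta = (rng.normal(size=3), rng.normal(size=3))

    X = rank_one(theta) + rank_one(eta)            # image of phi + psi
    print(np.linalg.det(X))                        # ~ 0: det lies in the secant ideal
    print(X[0, 0] * X[1, 1] - X[0, 1] * X[1, 0])   # a 2x2 minor: generically nonzero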
Given a DAG G and a subset K ⊂ V (G), GK denotes the induced subgraph on K.
Proposition 7.7. Let G be a DAG and suppose that the vertices of G are partitioned into
V (G) = O ∪H1 ∪H2 where both H1 and H2 are hidden sets of variables. Suppose further that
there are no edges of the form o1 → o2 such that o1, o2 ∈ O or edges of the form h1 → h2 or
h2 → h1 with h1 ∈ H1 and h2 ∈ H2. Then
IG,O = IGO∪H1 ,O ∗ IGO∪H2 ,O.
The proposition says that if the hidden variables are partitioned with no edges between the
two sets and there are no edges between the observed variables the ideal is a join.
Proof. The parametrization of the hidden variable model only involves the σij such that i, j ∈ O.
First, we restrict to the case where i 6= j. Since there are no edges between observed variables
and no edges between H1 and H2, every trek from i to j involves only edges in GO∪H1 or only
edges in GO∪H2 . This means that
φG(σij) = φGO∪H1 (σij) + φGO∪H2 (σij)
and these summands are in non-overlapping sets of indeterminates. Thus, by the comments
preceding the proposition, the ideal only in the σij with i 6= j ∈ O is clearly a join. However,
the structure of this hidden variable model implies that there are no nontrivial relations that
involve the diagonal elements σii with i ∈ O. This implies that IG,O is a join. □
Example 7.8. Let Kp,m be the directed complete bipartite graph with bipartition H = [p′]
and O = [m] such that i′ → j ∈ E(Kp,m) for all i′ ∈ [p′] and j ∈ [m]. Then Kp,m satisfies the
conditions of the proposition recursively up to p copies, and we see that
IKp,m,O = I{p}K1,m,O .
This particular hidden variable Gaussian Bayesian network is known as the factor analysis model.
This realization of the factor analysis model as a secant variety was studied extensively in [3].
Example 7.9. Consider the two “doubled trees” pictured in the figure.
Since in each case, the two subgraphs GO∪H1 and GO∪H2 are isomorphic, the ideals are secant
ideals of the hidden tree models IT,O for the appropriate underlying trees. In both cases, the
ideal I{2}T,O = IG,O is a principal ideal, generated by a single cubic. In the first case, the ideal
is the determinantal ideal J{2}T = 〈|Σ123,456|〉. In the second case, the ideal is generated by an
eight term cubic
IG,O = 〈σ13σ25σ46 − σ13σ26σ45 − σ14σ25σ36 + σ14σ26σ35
+σ15σ23σ46 − σ15σ24σ36 − σ16σ23σ45 + σ16σ24σ35〉 .
In both of the cubic cases in the previous example, the ideals under questions were secant
ideals of toric ideals that were initial ideals of the Grassmann-Plücker ideal, as we saw in Theorem
7.1. Note also that the secant ideals I{2}T,O are, in fact, the initial terms of the 6× 6 Pfaffian with
respect to appropriate weight vectors. We conjecture that this pattern holds in general.
Conjecture 7.10. Let T be a binary tree with n leaves and O the set of leaves of T . Let
I2,n be the Grassmann-Plücker ideal, let ω be a weight vector and τ a sign vector so that
IT,O = τ · inω(I2,n) as in Theorem 7.1. Then for each r,
I{r}T,O = τ · inω(I{r}2,n).
References
[1] P. Bickel and K. Doksum. Mathematical Statistics. Vol 1. Prentice-Hall, London, 2001.
[2] D. Cox, J. Little, and D. O’Shea. Ideals, Varieties, and Algorithms. Undergraduate Texts in Mathematics.
Springer, New York, 1997.
[3] M. Drton, B. Sturmfels, and S. Sullivant. Algebraic factor analysis: tetrads, pentads, and beyond. To appear
in Probability Theory and Related Fields, 2006.
[4] R. Durbin, S. Eddy, A. Krogh, and G. Mitchison. Biological Sequence Analysis. Cambridge University Press,
Cambridge, 1999.
[5] J. Felsenstein. Inferring Phylogenies. Sinauer Associates, Inc. Sunderland, MA, 2004.
[6] L. Garcia. Polynomial constraints of Bayesian networks with hidden variables. Preprint, 2006.
[7] L. Garcia, M. Stillman, and B. Sturmfels. Algebraic geometry of Bayesian networks. Journal of Symbolic
Computation 39 (2005) 331-355.
[8] I. Gessel and G. Viennot. Binomial determinants, paths, and hook length formulae. Adv. in Math. 58 (1985),
no. 3, 300–321.
[9] D. Grayson and M. Stillman. Macaulay 2, a software system for research in algebraic geometry. Available at
http://www.math.uiuc.edu/Macaulay2/
[10] M. Hochster. Rings of invariants of tori, Cohen-Macaulay rings generated by monomials, and polytopes,
Annals of Mathematics 96 (1972) 318–337.
[11] A. Knutson and E. Miller. Gröbner geometry of Schubert polynomials. Ann. of Math. (2) 161 (2005), no. 3,
1245–1318.
[12] J. M. Landsberg, L. Manivel. On the ideals of secant varieties of Segre varieties. Found. Comput. Math. 4
(2004), no. 4, 397–422.
[13] S. Lauritzen. Graphical Models. Oxford Statistical Science Series 17 Clarendon Press, Oxford, 1996.
[14] R. Read and R. Wilson. An Atlas of Graphs. Oxford Scientific Publications. (1998)
[15] C. Semple and M. Steel. Phylogenetics. Oxford Lecture Series in Mathematics and Its Applications 24 Oxford
University Press, Oxford, 2003.
[16] D. Speyer and B. Sturmfels. The tropical Grassmannian. Advances in Geometry 4 (2004) 389-411.
[17] P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction, and Search. The MIT Press, Cambridge,
MA, 2000.
[18] R. Stanley. Enumerative Combinatorics Vol. 2 Cambridge Studies in Advanced Mathematics 62, Cambridge
University Press, 1999.
[19] B. Sturmfels. Gröbner Bases and Convex Polytopes. University Lecture Series 8, American Mathematical
Society, Providence, RI, 1996.
[20] B. Sturmfels and S. Sullivant. Combinatorial secant varieties. Quarterly Journal of Pure and Applied Math-
ematics 2 (2006) 285-309.
Department of Mathematics and Society of Fellows, Harvard University, Cambridge, MA 02138
|
0704.0919 | Interactions, superconducting $T_c$, and fluctuation magnetization for
two coupled dots in the crossover between the Gaussian Orthogonal and Unitary
ensembles |
Interactions, superconducting Tc, and fluctuation magnetization
for two coupled dots in the crossover between the Gaussian
Orthogonal and Unitary ensembles
Oleksandr Zelyak∗ and Ganpathy Murthy†
Department of Physics and Astronomy,
University of Kentucky, Lexington, Kentucky 40506, USA
Igor Rozhkov‡
Department of Physics, University of Dayton,
300 College Park, Dayton, OH 45469
(Dated: November 25, 2018)
Abstract
We study a system of two quantum dots connected by a hopping bridge. Both the dots and
connecting region are assumed to be in universal crossover regimes between Gaussian Orthogonal
and Unitary ensembles. Using a diagrammatic approach appropriate for energy separations much
larger than the level spacing we obtain the ensemble-averaged one- and two-particle Green’s func-
tions. It turns out that the diffuson and cooperon parts of the two-particle Green’s function can
be described by separate scaling functions. We then use this information to investigate a model
interacting system in which one dot has an attractive s-wave reduced Bardeen-Cooper-Schrieffer
interaction, while the other is noninteracting but subject to an orbital magnetic field. We find that
the critical temperature is nonmonotonic in the flux through the second dot in a certain regime of
interdot coupling. Likewise, the fluctuation magnetization above the critical temperature is also
nonmonotonic in this regime, can be either diamagnetic or paramagnetic, and can be deduced from
the cooperon scaling function.
PACS numbers: 73.21.La, 05.40.-a, 73.50.Jt
Keywords: quantum dot, scaling function, crossover, quantum criticality
I. INTRODUCTION
The idea of describing a physical system by a random matrix Hamiltonian to explain
its spectral properties goes back to Wigner1,2. It was further developed by Dyson, Mehta
and others, and became the basis for Random Matrix Theory (RMT)3. First introduced in
nuclear physics, RMT has been used with great success in other branches of physics and
mathematics. A notable example was a conjecture by Gorkov and Eliashberg4 that the
single-particle spectrum of a diffusive metallic grain is controlled by RMT. This conjecture
was proved by Altshuler and Shklovskii5 who used diagrammatic methods and by Efetov
who used the supersymmetry method6. In 1984 Bohigas, Giannoni and Schmit7 conjectured
that RMT could also be employed in the study of ballistic quantum systems whose dynam-
ics is chaotic in the classical limit. Their conjecture broadened the area of applicability
of RMT enormously and was supported by numerous ensuing experiments and numerical
simulations7–10. The crucial energy scale for the applicability of RMT is the Thouless energy
ET = ~/τerg, where τerg is the time for a wave packet to spread over the entire system. For
a diffusive system of size L, we have ET ≃ ~D/L2, while for a ballistic/chaotic system we
have ET ≃ ~vF/L, where vF is the Fermi velocity.
In this paper we consider a system of two quantum dots/nanoparticles which are coupled
by a hopping bridge. The motion of electrons inside each dot can be either ballistic or
diffusive. In the case of ballistic dots we assume that the dots have irregular shapes leading
to classically chaotic motion, so that RMT is applicable.
RMT Hamiltonians fall into three main ensembles3. These are the Gaussian Orthogonal
Ensemble (GOE), Gaussian Unitary Ensemble (GUE), and Gaussian Symplectic Ensemble
(GSE). They are classified according to their behavior under time reversal (TR). The Hamilto-
nians invariant with respect to TR belong to the GOE. An example of GOE is a quantum
dot which has no spin-orbit coupling and is not subject to an external magnetic field. GUE
Hamiltonians, on the contrary, are not invariant with respect to TR and describe motion
in an orbital magnetic field, with or without spin-orbit coupling. Hamiltonians from GSE
group describe systems of particles with Kramers degeneracy that are TR invariant but
have no spatial symmetries, and correspond to systems with spin-orbit coupling but with
no orbital magnetic flux. In our paper we only deal with the first two classes.
For weak magnetic flux the spectral properties of the system deviate from those predicted
by either the GOE or the GUE11. In such cases the system is said to be in a crossover3. For
these systems the Hamiltonian can be decomposed into real symmetric and real antisym-
metric matrices:
H = (HS + iXHA) / √(1 +X²) , (1)
where X is the crossover parameter12 which is equal, up to factors of order unity, to Φ/Φ0,
where Φ is the magnetic flux through the dot, and Φ0 = h/e is the quantum unit of magnetic
flux. Note that the Gaussian orthogonal and unitary ensembles are limiting cases of X → 0
and X → 1 respectively.
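A minimal numerical sketch of this decomposition (ours; the overall variance normalization is one common convention and differs between references by factors of order unity):

    import numpy as np

    def crossover_H(N, X, rng):
        # H_S real symmetric, H_A real antisymmetric, independent Gaussian entries
        A = rng.normal(size=(N, N))
        B = rng.normal(size=(N, N))
        H_S = (A + A.T) / np.sqrt(2)
        H_A = (B - B.T) / np.sqrt(2)
        return (H_S + 1j * X * H_A) / np.sqrt(1 + X**2)

    rng = np.random.default_rng(4)
    for X in (0.0, 0.3, 1.0):          # GOE limit, crossover, GUE limit
        H = crossover_H(200, X, rng)
        print(X, np.allclose(H, H.conj().T))   # Hermitian for every X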
To understand the meaning of the crossover parameter consider the Aharonov-Bohm
phase shift picked up by a ballistic electron in a single orbit in the dot:
∆φ = 2π Φ/Φ0 . (2)
For one turn the flux enclosed by the trajectory is proportional to Φ = BL2, where L is the
size of the dot. After N turns the total flux is Φtotal = √N Φ, where the factor √N originates
from the fact that the electron has equal probability to make clockwise or counterclockwise
orbits, and thus does a random walk in the total flux enclosed. The minimal phase shift
for the electron to notice the presence of the magnetic flux is of the order 2π, and thus
the minimal cumulative flux enclosed by the orbit should be Φ0 = √N Φ. This leads to
N = (Φ0/Φ)², while the time to make N turns is τ = LN/vF (for a ballistic/chaotic dot).
From the Heisenberg uncertainty principle the associated energy scale is:
Ecross ≈ (Φ/Φ0)² ET , (3)
where ET is the ballistic Thouless energy13. For a diffusive dot it should be substituted by
the diffusive Thouless energy ET ∼= ~D/L2. One can see that when Φ is equal to Φ0, Ecross is
equal to ET , which means that the energy levels are fully crossed over.
In this paper the reader will encounter many crossover parameters, and thus many
crossover energy scales. By a line of argument similar to that leading to Eq. (3), it
can be shown that to every crossover parameter Xi there is a corresponding energy scale
EXi ≃ Xi²ET .
Breaking the time reversal symmetry of the system changes the two-particle Green’s function.
While the two-particle Green’s function can in general depend separately on ET , EX , and
the measurement frequency ω, it turns out that in the universal limit ω, EX ≪ ET , it
becomes a universal scaling function of the ratio EX/ω. The scaling function describes the
modification of 〈GR(E+ω)GA(E)〉 as one moves away from the “critical” point ω = 0. The
limits of the scaling function can be understood as follows: If the measurement frequency
ω is large (small) compared to the crossover energy scale EX , the 〈GR(E +ω)GA(E)〉 takes
the form of the GOE (GUE) ensemble correlation function. If ω ∼ EX , the Green’s function
describes the system in crossover regime.
The one-particle Green’s function 〈GR(E)〉 is not critical as ω → 0, although it gets
modified by the interdot coupling. The two-particle Green’s function 〈GR(E + ω)GA(E)〉
always has a diffuson mode14, that diverges for small ω in our large-N approximation,
which means that our results are valid on scales much larger than mean level spacing. This
divergence is not physical and will be cut off by vanishing level correlations for ω ≪ δ
in a more exact calculation15. On the other hand, the energy scale ω should be smaller
than Thouless energy of the system for RMT to be applicable. These limitations hold for
the crossover energy EX as well. In what follows we study the regime corresponding to
δ ≪ ω,EX ≤ ET .
The other term that appears in the two-particle Green’s function is a cooperon mode. In
general the cooperon term is gapped if at least one of the crossover parameters is different
from zero. In the case when the total Hamiltonian of the system is time reversal invariant, all
the crossover parameters are zero and the cooperon, just like the diffuson, becomes gapless.
Finally, when each part of the compound system belongs to the GUE (the case when all crossover
energy scales are much larger than ω) the cooperon term disappears.
Our study has a two-fold motivation. The first part comes from works on cou-
pled structures with noninteracting particles in acoustic and electronic systems16–18, and
crossovers11,19–22. We focus on a complete description of the crossover regimes in all three
regions (the two dots and the bridge) and define scaling functions for the diffuson and
cooperon parts of the two-particle Green’s function. Using parameters analogous to EX we
describe crossover regimes in dots 1 and 2 and the effects of the tunable hopping between
them. Varying these parameters allows us to obtain results for various physical realizations,
when different parts of the compound system behave as pure GOE, GUE, or belong to the
crossover ensemble. In electronic systems it is easy to break time-reversal by turning on an
external orbital magnetic flux. In acoustic systems one can break time-reversal by rotating
the system or a part thereof. As mentioned before, the system of two dots coupled by hop-
ping has been investigated before using supersymmetry methods18. However, the authors
considered only the GUE, whereas here we are interested in the full crossover. In fact, the
crossover is essential to the second aspect of our work, as will become clear immediately.
The second part of our motivation is the possibility of using the information gained in
noninteracting systems to predict the behavior of interacting systems23–26. We consider
interacting systems controlled by the Universal Hamiltonian27–30, which is known to be the
interacting low-energy effective theory31–33 deep within the Thouless band |ε− εF | ≪ ET in
the renormalization group34,35 sense for weak-coupling when the kinetic energy is described
by RMT and the Thouless number g = ET/δ ≫ 1. For the GOE the Universal Hamiltonian
HU has the form27–30
HU = Σα,s ǫα c†α,s cα,s + (U/2) N̂² − JS² + λT†T , (4)
where N̂ is the total particle number, S is the total spin, and T = Σβ cβ,↓cβ,↑. In addition
to the charging energy, HU has a Stoner exchange energy J and a reduced superconducting
coupling λ. This last term is absent in the GUE, while the exchange term disappears in the GSE.
In this paper we concentrate on the reduced Bardeen-Cooper-Schrieffer (BCS) coupling
λ which leads to a mean-field superconducting state when λ < 0. Previous work by one
of us26 sets the context for our investigation. We consider an interacting system which has
a single-particle symmetry and a quantum phase transition in the limit ET /δ → ∞. An
example relevant to us is a superconducting nanoparticle originally in the GOE. It has the
reduced BCS interaction and time-reversal symmetry, and the (mean-field) quantum phase
transition is between the normal and superconducting states and occurs at λ = 0. Now con-
sider the situation when the symmetry is softly broken, so that the single-particle dynamics
is described by a crossover RMT ensemble. It can be shown26 that this step allows us to
tune into the many-body quantum critical regime36–38 of the interacting system. Thus, the
scaling functions of the noninteracting crossover are transmuted into scaling functions of
the interacting system in the many-body quantum critical regime. In our example, the or-
bital magnetic flux breaks the time-reversal symmetry which is crucial to superconductivity.
FIG. 1: The system of two vertically coupled quantum dots.
When the orbital flux increases to a critical value, it destroys the mean-field superconduct-
ing state. Above the critical field, or more generically above the critical temperature, the
system is in the quantum critical regime.
To be more specific, we consider two vertically coupled quantum dots, the first of which
has an attractive reduced BCS coupling, while the second has no BCS coupling. Fig. 1
shows the geometry, the reason for which will become clear soon. We apply an orbital
magnetic flux only through (a part of) the second dot, and observe the effect on the coupled
system. Our main results are for the mean-field critical temperature Tc of the system, and
its magnetization in the normal state (above Tc) as a function of the flux in the normal
nanoparticle. Such a system could be realized physically without too much difficulty, by,
for example, growing a thin film of normal metal (such as Au) on an insulating substrate,
then a layer of insulator which could serve as the hopping bridge, and finally a thin film of
superconductor(such as Al, which has a mean-field superconducting transition temperature
of around 2.6K). The orbital flux can be applied selectively to the Au layer as shown in
Fig. 1 by a close pair of oppositely oriented current carrying wires close to the Au quantum
dot, but far from the Al quantum dot.
The reason for this geometry is that we want to disregard interdot charging effects entirely
and concentrate on the BCS coupling. The Hamiltonian for the coupled interacting system
contains charging energies for the two dots and an interdot Coulomb interaction24.
(U1/2)N1² + (U2/2)N2² + U12N1N2 (5)
Defining the total number of particles as N = N1 +N2, and the difference in the number as
n = N1 −N2, the interaction can also be written as
(U1 + U2 + 2U12)/8 · N² + (U1 + U2 − 2U12)/8 · n² + (U1 − U2)/4 · nN (6)
We see that there is an energy cost to transfer an electron from one dot to the other. This
interaction is irrelevant in the RG sense24, but vanishes only asymptotically deep within an
energy scale defined by the hopping. Our geometry is chosen so as to make U1 = U2 = U12 as
nearly as possible, which can be achieved by making the dots the same thickness and area,
and by making sure that their vertical separation is much smaller than their lateral linear
size. In this case, since N is constant, we can ignore charging effects entirely. Charging effects
and charge quantization in finite systems can be taken into account using the formalism
developed by Kamenev and Gefen39, and further elaborated by Efetov and co-workers40,41.
Since our primary goal is to investigate quantum critical effects associated with the BCS
pairing interaction, we will assume the abovementioned geometry and ignore charging effects
in what follows.
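The rewriting in Eqs. (5)-(6) and the statement that charging effects drop out when U1 = U2 = U12 (at fixed N) are elementary algebra; a sympy check (ours, assuming the Ui Ni²/2 form of the charging terms used above):

    import sympy as sp

    U1, U2, U12, N, n = sp.symbols('U1 U2 U12 N n')
    N1, N2 = (N + n) / 2, (N - n) / 2

    H_charge = U1 * N1**2 / 2 + U2 * N2**2 / 2 + U12 * N1 * N2
    rewritten = ((U1 + U2 + 2*U12) / 8) * N**2 \
              + ((U1 + U2 - 2*U12) / 8) * n**2 \
              + ((U1 - U2) / 4) * n * N

    print(sp.simplify(H_charge - rewritten))               # 0
    print(sp.expand(H_charge.subs({U1: U12, U2: U12})))    # U12*N**2/2: no n-dependence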
After including the effect of the BCS interaction, we find the surprising result that in
certain regimes of interparticle hopping strength, the mean-field transition temperature of
the system can increase as the flux through the second quantum dot increases. Indeed, its
behavior can be monotonic increasing, monotonic decreasing, or nonmonotonic as the flux
is increased. We can qualitatively understand these effects by the following considerations.
In the absence of orbital flux, hopping between the dots reduces Tc since it “dilutes” the
effect of the attractive BCS coupling present only in the first dot. The application of an
orbital flux through the second dot has two effects: (i) To raise the energy of Cooper pairs
there, thus tending to localize the pairs in the first dot and raise the Tc. (ii) To cause time-
reversal breaking in the first dot, and reduce Tc. The nonmonotonicity of Tc arises from the
competition between these two effects.
Another quantity of interest above the mean-field Tc is the fluctuation magnetization
which corresponds to gapped superconducting pairs forming and responding to the external
orbital flux. In contrast to the case of a single quantum dot subjected to an orbital flux,
we find that the fluctuation magnetization42 can be either diamagnetic (the usual case) or
paramagnetic. A paramagnetic magnetization results from a free energy which decreases as
the flux increases. The origin of this effect is the interplay between the localizing effect of
high temperature or the orbital flux in the second dot on the one hand, and the reduced
BCS interaction on the other.
The regimes we describe should be distinguished from other superconducting single-
particle RMT ensembles discovered in the past decade43,44, which apply to a normal meso-
scopic system in contact with two superconductors with a phase difference of π between
their order parameters43 (so that there is no gap in the mesoscopic system despite Andreev
reflection), or to a mesoscopic d-wave superconducting system44. In our case, the symmetry
of the superconducting interaction is s-wave. However, the most important difference is that
we focus on quantum critical fluctuations, which are inherently many-body, while the RMT
classes described previously are single-particle ensembles43,44.
This paper is organized as follows. In Sec. II we review the basic steps of calculating
the one-particle and two-particle Green’s functions for a single dot. Then in Sec. III we
present the system of Dyson equations for the one-particle Green’s function in the case of
two coupled dots and solve it in the limit of weak coupling. In addition, we set up and solve
the system of four Bethe-Salpeter equations for the two-particle Green’s function. In Sec.
IV we apply our results to a system of a superconducting quantum dot weakly coupled to
another quantum dot made of a normal metal. We end with our conclusions, some caveats,
and future directions in Sec. V.
II. REVIEW OF RESULTS FOR A SINGLE DOT.
Our goal in this section is to calculate the statistics of the one- and two-particle Green’s
functions for an uncoupled dot in a GOE→ GUE crossover (see Appendix A and Ref. 12 for more
details), starting from the series expansion of the Green’s function:
〈β|GR(E)|α〉 = GRαβ(E) = 〈β| (E+ −H)−1 |α〉 = δαβ/E+ + 〈β|H|α〉/(E+)² + 〈β|H²|α〉/(E+)³ + . . . . (7)
We are interested in averaging this expansion over the appropriate random matrix ensemble.
The corresponding Dyson equation for the averaged Green’s function is:
The bold line denotes the averaged propagator 〈GR(E)〉 and the regular solid line denotes the
bare propagator 1/E+ with E+ = E + iη, where η is an infinitesimal positive number. Here
Σ stands for the self-energy and is a sum of all topologically different diagrams.
One can solve the Dyson equation, approximating the self-energy by its leading term only, and
find:
Σ = ( E − i√( (2Nδ/π)² − E² ) ) / 2 , (9)
where δ is the mean level spacing. This approximation works only for E ≫ δ. As E gets
comparable with δ, other terms in the expansion for Σ should be taken into account.
Then, the average of the one-particle Green’s function is given by:
〈GRαβ(E)〉 = δαβ (π²/(2N²δ²)) ( E − i√( (2Nδ/π)² − E² ) ) . (10)
Next, we repeat the procedure for the averaged two-particle Green’s function, which can be
represented by the series:
where two bold lines on the left hand side denote 〈GR(E + ω)GA(E)〉. The leading contri-
bution comes from ladder and maximally crossed diagrams. The sum of these diagrams can
be conveniently represented by a Bethe-Salpeter equation. For example, the contribution of
all the ladder diagrams can be expressed in closed form by:
where ΠD is a self-energy. For the maximally crossed diagrams we have a similar equation:
where ΠD and ΠC are related to the connected part of the two-particle Green’s function as:
In the limit of ω being much smaller than bandwidth (ω ≪ Nδ), the two-particle Green’s
function (connected part) is expressed as:
〈GRαγ(E + ω)GAδβ(E)〉 = (2π / (N²δ (−iω))) [ δαβδγδ + δαδδγβ / (1 + iEX/ω) ] . (15)
The second term is a contribution of maximally crossed diagrams. EX is a crossover energy
scale, connected to the crossover parameter as EX = 4X
2Nδ/π.
Depending on the value of EX one can speak of different types of averaging. If EX ≪ ω, we
get an average over the GOE ensemble; if EX is of order ω, the averaging is performed over an
ensemble in the crossover; and if EX ≫ ω, the contribution of the maximally crossed diagrams can be
disregarded, and we recover the GUE limit.
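A short numerical sketch (ours) of these three regimes, using the factor 1/(1 + iEX/ω) by which the maximally crossed (cooperon) contribution in Eq. (15) is suppressed relative to the diffuson:

    import numpy as np

    def cooperon_factor(E_X, omega):
        # suppression of the cooperon relative to the diffuson at small omega
        return 1.0 / (1.0 + 1j * E_X / omega)

    omega = 1.0
    for E_X in (0.01, 1.0, 100.0):
        print(E_X, abs(cooperon_factor(E_X, omega)))
    # E_X << omega: |factor| ~ 1 (GOE); E_X ~ omega: crossover; E_X >> omega: |factor| ~ 0 (GUE)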
III. TWO COUPLED DOTS.
Next we discuss the general framework of our calculation and calculate the correlation functions
for our system of interest, which is two weakly coupled quantum dots (see appendix B for
more technical details). The Hamiltonian for this system can be represented as:
Htot = ( H1  V
         V†  H2 ) . (16)
where H1 andH2 are the Hamiltonians of uncoupled dots 1 and 2. The coupling is realized by
a matrix V . The elements of H1, H2, and V are statistically independent random variables.
We assume that both dots and the hopping bridge are in crossover regimes, characterized
by parameters X1, X2, and Γ respectively.
In the crossover the matrices Hi and V are given by:
Hi = (H_i^S + iXi H_i^A) / √(1 +Xi²) , i = 1, 2; V = (V^R + iΓ V^I) / √(1 + Γ²) , (17)
where H_i^S (H_i^A) is the symmetric (antisymmetric) part of Hi, and V^R (V^I) is a real (imaginary) matrix.
In what follows we assume that the bandwidths in dot 1 and dot 2 are the same. That
is, N1δ1 = N2δ2. This should not make any difference in the universal limit N → ∞.
In addition we introduce the parameter ξ – the ratio of mean level spacing in two dots:
ξ = δ1/δ2. For each realization of matrix elements of the Hamiltonian Htot, the Green’s
function of this system can be computed as follows:
G = (I ⊗ E −H)−1 = ( E −H1   −V
                     −V†     E −H2 )−1 = ( G11  G12
                                           G21  G22 ) . (18)
Each element of G has the meaning of a specific Green’s function. For example, G11 and
G22 are the Green’s functions that describe particle propagation in dots 1 and 2 respectively.
On the other hand, G12 and G21 are the Green’s functions representing travel from one dot
to another.
Calculating (I ⊗E −H)−1 one finds the components of G. For example,
G11 = [ (E −H1)− V (E −H2)−1V † ]−1
    = G1 +G1V G2V †G1 +G1V G2V †G1V G2V †G1 + . . . (19)
where G1 and G2 are bare propagators in dot 1 and dot 2 defined by G1 = (E −H1)−1 and
G2 = (E −H2)−1.
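Equation (19) is the Schur-complement form of the (1,1) block of the inverse in Eq. (18); a quick numerical check with small random matrices (our own sketch):

    import numpy as np

    rng = np.random.default_rng(5)
    N1, N2 = 5, 7
    H1 = rng.normal(size=(N1, N1)); H1 = (H1 + H1.T) / 2
    H2 = rng.normal(size=(N2, N2)); H2 = (H2 + H2.T) / 2
    V = rng.normal(size=(N1, N2))
    E = 10.0 + 0.1j                      # energy slightly off the real axis

    top = np.hstack([E * np.eye(N1) - H1, -V])
    bot = np.hstack([-V.conj().T, E * np.eye(N2) - H2])
    G = np.linalg.inv(np.vstack([top, bot]))

    G11 = np.linalg.inv(E * np.eye(N1) - H1
                        - V @ np.linalg.inv(E * np.eye(N2) - H2) @ V.conj().T)
    print(np.allclose(G[:N1, :N1], G11))    # True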
To find the ensemble average of G11 one needs to average the whole expansion (19) term
by term. For coupled dots the Gij are interrelated and, in the large-N approximation, can be found from
the following system of equations:
The bold straight and wavy lines with arrows represent averaged Green’s functions
〈Gαβ,1(E)〉 and 〈Gij,2(E)〉 respectively, while regular solid lines are bare propagators in
dots 1 and 2. The dotted line describes pairing between hopping matrix elements V , and
the dashed (wavy) line denotes pairing between matrix elements of H1 (H2).
The system (20) accounts for all possible diagrams without line crossing. Diagrams
containing crossed lines of any type are higher order in 1/N and can be neglected when
N → ∞. If the hopping between dots is zero, this system decouples into two separate
Dyson equations for each dot. In the case of weak coupling (U ≪ 1), where U is a parameter
controlling the strength of the coupling between the dots, this system can be readily solved. As the zeroth
approximation, we use the results for a single dot.
In this approximation the one-particle Green’s functions for dot 1 and dot 2 are calculated as
follows:
〈GRαβ,1(E)〉 =
〈GRαβ,0(E)〉
E−2Σ0
1− ǫ2
1 + U
1 + i ǫ√
〈GRij,2(E)〉 =
〈GRij,0(E)〉
1− U√
E−2Σ0
1− ǫ2
1 + U
1 + i ǫ√
where ǫ is the dimensionless energy ǫ = πE/2Nδ. We use the subscript 0 in Σ0 and 〈GR0 (E)〉 to
denote the solutions for a single uncoupled dot.
In the large N approximation the contribution to the two-particle Green’s function comes
from ladder diagrams and maximally crossed diagrams. It is convenient to sum them sepa-
rately. The ladder diagram contribution can be found from the following system of equations:
where ΠDij with proper external lines denote various two-particle Green’s functions. As in
the case of the one-particle Green’s function equations, if the inter-dot coupling is zero, the
system reduces to two Bethe-Salpeter equations for uncoupled dots.
The system of four equations (22) can be broken into two systems of two equations to obtain:
〈GRαγ,1(E + ω)GAδβ,1(E)〉D1 =
N21 δ1
δαβδγδ
〈GRil,2(E + ω)GAkj,2(E)〉D2 =
N22 δ2
δijδlk
where gD are the scaling functions of diffusion terms in dot 1 and dot 2 defined by:
gD1 =
1 + i√
1 + i(
ξ + 1√
gD2 =
1 + i
1 + i(
ξ + 1√
Here EU = 2UNδ/π is the interdot coupling energy scale. These dimensionless functions
show how the diffuson part is modified due to the coupling to the other dot.
Next, for the maximally crossed diagrams we have the following system of equations:
The subsequent solution of this system produces:
〈GRαγ,1(E + ω)GAδβ,1(E)〉C1 =
N21 δ1
δαδδγβ
〈GRil,2(E + ω)GAkj,2(E)〉C2 =
N22 δ2
δikδlj
where gC are the scaling functions for the cooperon term, defined according to:
gC1 =
1 + i√
1 + i
EX1+EX2
− EX1EX2
− EX1EU√
ξEX2EU
ξ + 1√
1 + iEΓ
gC2 =
1 + i
1 + i
EX1+EX2
− EX1EX2
− EX1EU√
ξEX2EU
ξ + 1√
1 + iEΓ
Here EX1,2 = 4X²1,2Nδ/π, and EΓ = 4Γ²EU/(√ξ + 1/√ξ) are the crossover energy scales
describing the transition from the GOE to the GUE ensemble in dot 1 and dot 2, as well as in the hopping
bridge V .
Having determined how the scaling functions gC modify the cooperon part of the two-particle
Green’s function and depend on the crossover energy scales defined above, we are ready to
write down the connected part of the total two-particle Green’s function, which
is the sum of the diffuson and cooperon parts:
〈GRαγ,1(E + ω)GAδβ,1(E)〉 =
N21 δ1
δαβδγδ
1 + i√
1 + i(
ξ + 1√
N21 δ1
δαδδγβ
1 + i√
1 + i
EX1+EX2
− EX1EX2
− EX1EU√
ξEX2EU
ξ + 1√
1 + iEΓ
〈GRil,2(E + ω)GAkj,2(E)〉 =
N22 δ2
δijδlk
1 + i
1 + i(
ξ + 1√
N22 δ2
δikδlj
1 + i
1 + i
EX1+EX2
− EX1EX2
− EX1EU√
ξEX2EU
ξ + 1√
1 + iEΓ
In general, the coupling between dots changes the bandwidth of each dot. Corrections to
the bandwidth are of the order of U and can be neglected for weak coupling. Calculating
approximations to second order in U, one can verify that the one-particle and two-particle
Green’s functions can be treated perturbatively.
The diagrams in Fig. 2 show the typical behavior of the absolute value and phase of the scaling
functions gD and gC in dot 1. All energy parameters are measured in units of EU .
Next we analyze the temporal behavior of the computed statistical characteristics. The
Fourier transform of the two-particle Green’s function shows the time evolution of the density
matrix of the system. One can observe that the diffuson part of 〈GRGA〉 diverges for small
ω. To get the correct behavior we replace 1/ω with ω/(ω2 + η2), and take η to zero in the
final result. As for the cooperon term, it stays regular in the small ω limit if at least one of
the crossover parameters differs from zero.
First of all, we look at the Fourier transform of 〈GRGA〉 in the first dot. We have
FIG. 2: Absolute value and phase of diffuson (a,b) and cooperon (c,d) scaling functions in dot
1. Frequency ω is measured in units of EU . For these graphs the crossover parameters are:
EX1/EU = EX2/EU = 1, EΓ/EU = 0.8, ξ = 1.
〈GRαγ,1(t)GAδβ,1(t)〉 = δαβδγδ
N1(1 + ξ)
ξ+ 1√
)EU t
+ δαδδγβ
EX2 +
a+ − a−
e−ta− − e−ta+
, (29)
where a± depend on the crossover parameters (see Eq. (C11) in appendix C).
Then, for the corresponding quantity in the second dot the Fourier transform produces:
〈GRil,2(t)GAkj,2(t)〉 = δijδlk
N2(1 + ξ)
ξ+ 1√
)EU t
+ δikδlj
EX1 +
a+ − a−
e−ta− − e−ta+
. (30)
IV. TWO COUPLED METALLIC QUANTUM DOTS
In this section we apply the results obtained in the previous sections to an interacting
system. We consider two vertically coupled metallic quantum dots, as shown in Fig. 1, the
first of which is superconducting and the second noninteracting. For simplicity the quantum
dots are assumed to have the same level spacing (ξ = 1). The calculations presented
in this section can be extended to the case ξ 6= 1 in a straightforward way. The first
(superconducting) quantum dot and the hopping bridge belong to the GOE ensemble. A
nonzero orbital magnetic flux penetrating the second (noninteracting) quantum dot drives
it into the GOE to GUE crossover described by the crossover energy scale EX2 . The other
crossover energy scale EU describes the hopping between the quantum dots. Because of this
hopping one can observe a nonzero magnetization in the first particle caused by a magnetic
flux through the second particle. Roughly speaking, when the electrons in the first dot travel
to the second and return they bring back information about the orbital flux.
We wish to compute the magnetization as a function of orbital flux, as well as the mean-
field critical temperature. It should be noted that since the quantum dot is a finite system,
there cannot be any true spontaneous symmetry breaking. However, when the mean-field
superconducting gap ∆BCS ≫ δ, the mean-field description is a very good one45–47. Recent
numerical calculations have investigated the regime ∆BCS ≃ δ where quantum fluctuations
are strong48. We will focus on the quantum critical regime of the system above the mean-field
critical temperature/field, so we do not have to worry about symmetry-breaking.
We start with BCS crossover Hamiltonian for the double-dot system including the inter-
actions in the first dot and the hopping between the dots26:
HBCSX2 =
µ0ν0c
cν0s − λT †T +
i0j0s
i0j0s
ci0s +
Vµ0i0(c
ci0s + h.c.)
µ,scµ,s − δλ̃T †T, (31)
where H(2) contains the effect of the orbital flux through the second quantum dot. Here
T, T † are the operators which appear in the Universal Hamiltonian, and are most simply
expressed in terms of electron creation/annihilation operators in the original GOE basis of
the first dot (which we call µ0, ν0) as
T = Σµ0 cµ0,↓cµ0,↑ . (32)
Now we need to express the operators cµ0,s in terms of the eigenoperators of the combined
single-particle Hamiltonian of the system of two coupled dots. The result is
T = Σµ,ν Mµν cν,↓cµ,↑ , Mµν = Σµ0 ψµ(µ0)ψν(µ0) , (33)
where ǫµ denotes the eigenvalues of the total system, cµ,s operator annihilates electron in the
orbital state µ with spin s, ψµ(µ0) is the eigenvector of the compound system, δ is the mean
level spacing of a single isolated dot, λ̃ > 0 is the attractive dimensionless BCS coupling
valid in region of width 2ωD around the Fermi energy. Note that while the indices µ, ν
enumerate the states of the total system, the index µ0 goes only over the states of the first
dot, since the superconducting interaction is present only in the first dot.
To study the magnetization of the first quantum dot in the crossover we follow previous
work by one of us26: We start with the partition function Z = Tr(exp−βH) where β = 1/T
is the inverse temperature. We convert the partition function into an imaginary time path
integral and use the Hubbard-Stratanovich identity to decompose the interaction, leading
to the imaginary time Lagrangian
L = Σµ,s c̄µ,s(∂τ − ǫµ)cµ,s + σT̄ + σ̄T (34)
where σ, σ̄ are the bosonic Hubbard-Stratanovich fields representing the BCS order parame-
ter and c̄, c are Grassman fields representing fermions. The fermions are integrated out, and
as long as the system does not have a mean-field BCS gap, the resulting action for σ, σ̄ can
be expanded to second order to obtain
Seff ≈ Σn |σ(iωn)|² ( 1/λ̃ − fn(β, EX , ωD) ) , (35)
fn(β, EX , ωD) = δ Σµ,ν |Mµν |² ( 1−NF (ǫµ)−NF (ǫν) ) / ( ǫµ + ǫν − iωn ) , (36)
where ωn = 2πn/β, and the sums are restricted to |ǫµ|, |ǫν| < ~ωD. We see that the
correlations between different states µ, ν play an important role. Deep in the crossover (for
EX ≫ δ) we can replace |Mµν |2 by its ensemble average26. We will also henceforth replace
the summations over energy eigenstates by energy integrations with the appropriate cutoffs.
In previous work26 the statistics23–25 of |Mµν |2 was used to obtain analytical results for this
expression.
The (interacting part of the) free energy of the system in the quantum critical regime is
given by26:
Fint = (1/β) Σn ln(1− λ̃f(iωn, β, EX2)) , (37)
where f is the scaling function given by the expression:
f(iωn, β, EX2) = δ Σµ,ν |Mµν |² ( 1− nµ(β)− nν(β) ) / ( ǫµ + ǫν − iωn ) , (38)
nν(β) = (1 + exp(βǫν))−1 is the Fermi-Dirac distribution. We have shifted the energy so
that the chemical potential is 0.
Converting this double sum into integral and substituting |Mµν |2 by its ensemble average
(see Appendices D and E), we get:
dǫ1dǫ2
(ǫ1 − ǫ2)2 + EX2EU + E2X2
((ǫ1 − ǫ2)2 − EX2EU)2 + (EX2 + 2EU)2(ǫ1 − ǫ2)2
tanh(βǫ1
) + tanh(βǫ2
ǫ1 + ǫ2 − iωn
where ωD is the Debye frequency, and β = 1/kBT is the inverse temperature.
One can decompose the ratio in the first part of integrand into two Lorentzians to get26:
E2X2 + EUEX2 − E
E22 − E21
4(~ωD)
2 + ω2n
C ′/β2 + (E1 + |ωn|)2
E22 −E2X2 −EUEX2
E22 −E21
4(~ωD)
2 + ω2n
C ′/β2 + (E2 + |ωn|)2
. (40)
Here C ′ ≈ 3.08 and E1,2 depend on crossover energy scales as follows:
E21,2 =
(EX2 + 2EU)
2 − 2EUEX2 ∓
(EX2 + 2EU)
2(E2X2 + 4E
. (41)
The magnetization can then be obtained from the free energy:
M = −
=Mnonint +
1− λ̃fn
, (42)
where Mnonint is the contribution from noninteracting electrons49. We will be interested in
the second term, which is the fluctuation magnetization42.
For illustrative purposes, we use the parameters for Al in all our numerical calculations,
with ωD = 34meV and λ̃ = 0.193. This leads to a mean-field transition temperature
Tc0 = 0.218meV = 2.6K for an isolated Al quantum dot in the absence of magnetic flux.
In all our calculations we evaluate Matsubara sums with a cutoff exp(−|ωn|/ωD). We have
verified that changing the cutoff does not qualitatively affect our results, but only produces
small numerical changes.
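As a rough consistency check on these numbers (ours; it uses the standard weak-coupling BCS estimate Tc ≈ 1.13 ωD exp(−1/λ̃) rather than the full Matsubara sum employed in the paper):

    import numpy as np

    omega_D = 34.0      # meV, Debye energy used for Al
    lam = 0.193         # dimensionless BCS coupling
    k_B = 0.08617       # meV per kelvin

    Tc0 = 1.13 * omega_D * np.exp(-1.0 / lam)
    print(Tc0, Tc0 / k_B)   # about 0.22 meV and 2.5 K, close to the quoted 0.218 meV / 2.6 K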
It will be informative to compare the two-dot system with a single dot subject to an
orbital magnetic flux26 (see Fig. 3). We draw the reader’s attention to two important
features. Firstly, the critical temperature Tc decreases monotonically with EX , resulting from
the fact that time-reversal breaking disfavors superconductivity. Secondly, the fluctuation
magnetization is always negative, or diamagnetic, resulting from the fact that the free energy
monotonically increases as the orbital flux increases.
Now let us turn to our system of two quantum dots coupled by hopping. Before we carry
out a detailed analysis, it is illuminating to inspect the behavior of E1,2 and the coefficients
of the two logarithms in Eq. (40) (which we call A1,2) as a function of EX2 . This is shown
in Fig. 4. E1 tends to EX2/2 for EX2 ≪ EU , and to EU in the opposite limit EX2 ≫ EU .
E2 tends to EU for EX2 ≪ EU , while in the opposite limit EX2 ≫ EU E2 → EX2 . Both
coefficients A1,2 start at
for small EX2. For EX2 ≫ EU A1 → 1, while A2 → 0.
FIG. 3: Magnetization (per unit volume) in a single dot system as a function of temperature
for different values of crossover parameters EX . Panel (d) shows the dependence of the critical
temperature on EX .
The asymptotic regimes T,EX2 ≪ EU and T,EX2 ≫ EU can be understood simply. In
the first regime, EU is the largest energy scale, and far below it the spatial information that
there are two distinct quantum dots is lost. The system behaves like a single large dot with
a smaller “diluted” superconducting coupling. On the other hand, when T,EX2 ≫ EU , A2
is vanishingly small, and the system resembles the isolated first dot with a superconducting
coupling λ̃ but with a crossover energy EU . Note that the approach of the energies to the
asymptotes is slow, so for a particular value of EU it may happen that one cannot realistically
FIG. 4: The behavior of the log coefficients A1,2 in Eq. (40) and of E1, E2 as functions of the ratio EX2/EU .
approach the asymptotic regime without running into either δ at the lower end or ωD at the
higher end. Finally, one can envisage situations in which EX2 ≪ EU but T ≥ EU , for which
there are no simple pictures.
The temperature dependence of magnetization per unit volume for different values of
crossover parameters EX2 and EU (excluding the part due to noninteracting electrons) is
shown in Fig. 5.
In the range where magnetization changes significantly, the fluctuation magnetization
shows both diamagnetic and paramagnetic behavior. This is in contrast to the case of a
single superconducting quantum dot subjected to an orbital flux where the fluctuation mag-
netization is always diamagnetic (Fig. 3). Close to T = 0 an increase in temperature makes
the fluctuation magnetization more diamagnetic. A further temperature increase changes
the fluctuation magnetization from diamagnetic to paramagnetic. For large values of temper-
ature the fluctuation magnetization is paramagnetic and decreasing as T increases. Another
set of diagrams, Fig. 6, demonstrates the dependence of the fluctuation magnetization in
the first dot on crossover parameter EX2 in the second dot. Generically, we find that at low
T the fluctuation magnetization is diamagnetic while at high T it is paramagnetic.
The variation of crossover energy scales EX2 and EU does not change the qualitative
behavior of the fluctuation magnetization as a function of T or EX2 . A paramagnetic
magnetization is counterintuitive in superconducting system, because one believes that “an
orbital flux is the enemy of superconductivity”, and therefore that the free energy must
FIG. 5: Magnetization (per unit volume) as a function of temperature for different values of
crossover parameters EX2 and EU . The fluctuation magnetization is diamagnetic for low T and
paramagnetic for high T .
always increase as the orbital flux increases. This assumption is false for our system. The
explanation is fairly simple, as we will see immediately after the results for Tc have been
presented.
The mean-field critical temperature Tc of the transition between the normal and superconducting
states strongly depends on EX2 and EU . As one can see from Fig. 7, for very strong
hopping (EU ≫ Tc0) between quantum dots Tc is monotonically decreasing as EX2 increases.
On the other hand, for intermediate hopping Tc has a maximum as a function of orbital
flux, which means that for small values of orbital magnetic flux Tc increases as the orbital
flux increases. Finally, when EU is very weak, Tc monotonically increases as a function of
FIG. 6: Fluctuation magnetization in the first dot vs crossover parameter EX2 in the second dot
for different values of temperature. The fluctuation magnetization is diamagnetic for low T and
paramagnetic for high T .
orbital flux through the second quantum dot. This is in contrast to the behavior of a single
superconducting quantum dot for which Tc decreases monotonically as a function of orbital
flux.
These counterintuitive phenomena can be understood in terms of the following cartoon
picture. One can think of the two dots as two sites, each capable of containing a large
number of bosons (the fluctuating pairs). The BCS pairing interaction occurs only on the
first site. When there is no magnetic flux, hopping delocalizes the bosons between the two
sites, leading to a “dilution” of the BCS attraction and a low critical temperature. The effect
FIG. 7: Critical temperature as a function of EX2 for several intermediate to strong values (com-
pared to Tc0) of the hopping parameter EU . For larger values of EX2 (not shown on graphs) critical
temperature is equal to zero.
of the magnetic flux on the second dot is twofold: (i) Firstly, it gaps the cooperon of the
second dot, which we think of as raising the energy for the bosons to be in the second dot.
(ii) Secondly, by virtue of the interdot hopping, a small time-reversal symmetry breaking
is produced in the first dot, thereby raising the energy of the bosons there as well. As the
flux through the second dot rises, the bosons prefer to be in the first dot since they have
lower energy there. The more localized the Cooper pairs are in the first dot due to effect (i),
the more “undiluted” will be the effect of the BCS attraction λ, and the more favored will
FIG. 8: Behavior of critical temperature TC as a function of EX2 for small to intermediate values
(compared to Tc0) of EU .
be the superconducting state. However, effect (ii) produces a time-reversal breaking in the
first dot, thus disfavoring the superconducting state. These two competing effects lead to
the varying behaviors of Tc and the fluctuation magnetization versus the orbital flux in the
second quantum dot. When the hopping between the quantum dots is weak (EU < Tc0), the
first effect dominates, and Tc increases with EX2 . When the hopping is stronger (EU ≃ Tc0)
the first effect dominates at small orbital flux, and the second at large orbital flux. Finally,
at very large hopping (EU ≫ Tc0), effect (ii) is always dominant.
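This cartoon can be made concrete with a minimal two-site toy model: a single fluctuating pair treated as a boson, an attraction −λ on site 1 standing in for the BCS interaction, a detuning ε2 on site 2 standing in for the flux-induced cooperon gap, and a hopping t. The parameters lam, eps2 and t below are illustrative stand-ins, not quantities defined in the text.

```python
import numpy as np

def site1_weight(lam, t, eps2):
    """Ground-state probability on site 1 for one boson on two sites.

    H = [[-lam, -t], [-t, eps2]]: site 1 carries the attraction (-lam),
    site 2 is detuned by eps2 (a stand-in for the cooperon gap in the second
    dot), and t is the interdot hopping.  All numbers are purely illustrative.
    """
    h = np.array([[-lam, -t], [-t, eps2]], dtype=float)
    vals, vecs = np.linalg.eigh(h)       # eigh returns eigenvalues in ascending order
    return vecs[0, 0] ** 2               # weight of the ground state on site 1

lam = 1.0
for t in (0.2, 1.0, 5.0):                # stronger hopping -> more "dilution"
    for eps2 in (0.0, 2.0, 10.0):        # larger detuning -> pair pushed back to site 1
        print(f"t={t:3.1f}  eps2={eps2:5.1f}  P(site 1)={site1_weight(lam, t, eps2):.3f}")
```

In this caricature, increasing t spreads the pair over both sites (the “dilution” that lowers Tc), while increasing ε2 localizes it back on site 1, mimicking effect (i); effect (ii) has no analogue at this level and must be added as discussed above.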
When considering the magnetization one must take into account the temperature as
well, so the picture is more complex. The general feature is that effect (i) which tends
to localize the pairs in the first dot also tends to decrease the interacting free energy of
the system, which leads to a paramagnetic fluctuation magnetization. Effect (ii), which
breaks time-reversal in the first dot, increases the free energy of the system and thus leads
to a diamagnetic fluctuation magnetization. Based on our results we infer that at high
temperature the coherence of pair hopping is destroyed leading to more localization in the
first quantum dot. The consequences of high T are thus similar to those of effect (i): a
lowering of the interacting free energy and a paramagnetic fluctuation magnetization.
We can make this picture a bit more quantitative for the behavior of Tc with respect to
EX . Consider once more the scaling function of Eq. (40), which we reproduce here for the
reader’s convenience
fn(EX2 , EU , T ) = [(E²X2 + EUEX2 − E²1)/(E²2 − E²1)] ln[(4(ħωD)² + ω²n)/(C′/β² + (E1 + |ωn|)²)]
                  + [(E²2 − E²X2 − EUEX2)/(E²2 − E²1)] ln[(4(ħωD)² + ω²n)/(C′/β² + (E2 + |ωn|)²)] . (43)
It is straightforward to show that fn reaches its maximum value for ωn = 0. The condition
for Tc is then
λ̃f0(EX2 , EU , Tc) = 1 (44)
Let us first set EX2 = 0. Let us also call the mean-field critical temperature of the isolated
first dot in the absence of a magnetic flux Tc0 (recall that for the parameters pertinent to Al,
Tc0 = 0.218meV = 2.6K). Now there are two possible limits, either EU ≪ Tc0 or EU ≫ Tc0.
In the first case we obtain
Tc(EU) ≃ Tc0
λ̃C ′T 2c0
+ · · ·
In the second case, EU ≫ Tc0, we obtain
Tc(EU) ≃ Tc0
e−1/λ̃ (46)
Note that this can be much smaller than Tc0 and is an illustration of the “dilution” of the
BCS attraction due to the second dot mentioned earlier. Of course, there will be a smooth
crossover between the expressions of Eq. (45) and Eq. (46), so that Tc is always smaller
than Tc0.
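The implicit condition (44) is convenient to solve numerically. In the sketch below the standard single-dot Cooper logarithm ln(1.13 ωD/T) is used as a stand-in for f0 (the full Eq. (43) is not reproduced in the sketch); with the Al parameters quoted above, ωD = 34 meV and λ̃ = 0.193, it recovers Tc0 ≈ 0.22 meV.

```python
import numpy as np
from scipy.optimize import brentq

omega_D = 34.0      # Debye energy in meV (value used in the text for Al)
lam = 0.193         # dimensionless coupling lambda-tilde used in the text

def f0_single_dot(T):
    # Stand-in scaling function for the isolated dot at EX2 = EU = 0:
    # the usual Cooper logarithm ln(1.13 * omega_D / T).
    return np.log(1.13 * omega_D / T)

def tc_condition(T):
    # Mean-field condition of Eq. (44): lambda * f0(T) - 1 = 0.
    return lam * f0_single_dot(T) - 1.0

Tc0 = brentq(tc_condition, 1e-4, omega_D)             # root in meV
print(f"Tc0 = {Tc0:.3f} meV = {Tc0 / 0.0862:.1f} K")  # k_B = 0.0862 meV/K
```

Replacing f0_single_dot by the full f0(EX2 , EU , T ) of Eq. (43) and scanning EX2 and EU would reproduce the Tc curves of Figs. 7 and 8.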
FIG. 9: The behavior of E∗X2 vs EU for numerical simulation and analytical approximation.
Now under the assumption EX2 , Tc ≪ EU we can solve analytically for Tc to obtain
T 2c (EX2 , EU) ≃ −
C ′2E2U
e−4/λ̃e
−ln ωD
One can further find the maximum of this expression. It turns out that EU has to be larger
than a critical value E∗U for there to be a maximum.
E∗U = ωDe
For our values of the parameters ωD = 34 meV, λ̃ = 0.193, we find E∗U = 0.245 meV. The
position of the maximum can now be estimated asymptotically for EU > E∗U as

E∗X2 ≃ 16 e−1E∗U . (49)
Fig. 9 compares the dependence of E∗X2 on EU obtained in the numerical simulation with the one
described by Eq. (49). For values of EU large compared to E∗U the numerically computed
curve matches the analytical approximation.
V. CONCLUSION AND DISCUSSION
In writing this paper we began with two objectives. We intended to compute noninteract-
ing scaling functions in the GOE→GUE crossover in a system of two dots coupled by hop-
ping, and to use this information to investigate the properties of an interacting system23–26
in the many-body quantum critical regime36–38.
We have considered a system of two coupled quantum dots, each of which could have
its own time-reversal breaking parameter, coupled by a bridge which could also have time-
reversal breaking. For each crossover parameter, there is a corresponding crossover energy
scale, which represents the inverse of the time needed for the electron to “notice” the presence
of that coupling in the Hamiltonian. We have computed the two-particle Green’s functions in
the coupled system in a large-N approximation12, valid when all energies of interest are much
greater than the mean level spacing. This allows us to compute the correlations of products
of four wavefunctions belonging to two different energy levels (which have been previously
calculated for a single dot for the pure ensembles by Mirlin using supersymmetry methods50,
and for the Orthogonal to Unitary crossover by Adam et al23). The two-particle Green’s
function splits naturally into a diffuson part and a cooperon part. Each of these parts can
be represented as 1/(−iω) times a scaling function, where ω represents the frequency at which
the measurement is being performed. For example, when we use the two-particle Green’s
function to find the ensemble average of four wavefunctions belonging to two energies, ω is
the energy difference between the two states. The “scaling” nature of the scaling function
is represented by the fact that it depends only on the ratio of ω to certain crossover energy
scales. For the diffuson part the crossover energy EU is controlled solely by the strength of
the hopping between the two dots, while the scaling function for the cooperon part depends
sensitively on the time-reversal breaking in all three parts of the system.
In the second part of the paper, we consider the case when one of the dots has an
attractive BCS interaction, implying that it would be superconducting in the mean-field
limit at zero temperature if it were isolated, and the other dot has no electron interactions
but is penetrated by an orbital magnetic flux. The BCS interaction is one part of the
Universal Hamiltonian27–30, known to be the correct low-energy effective theory31–33 in the
renormalization group34,35 sense for weak-coupling and deep within the Thouless band |ε−
εF | ≪ ET . In order to eliminate complications arising from the charging energy, we consider
a particular geometry with the dots being vertically coupled and very close together in the
vertical direction, as shown in Fig. 1. Our focus is on the quantum critical regime36–38,
achieved by increasing either the temperature or the orbital flux through the second dot.
The first dot is coupled by spin-conserving hopping to a second dot on which the electrons are
noninteracting. This coupling always reduces the critical temperature, due to the “diluting”
effect of the second dot, that is, due to the fact that the electrons can now roam over both
dots, while only one of them has a BCS attraction. Thus, the mean-field critical temperature
Tc of the coupled system is always less than that of the isolated single superconducting dot
Tc0. This part of the phenomenology is intuitively obvious.
However, when the hopping crossover energy EU is either weak or of intermediate strength
compared to Tc0, turning on an orbital flux in the second dot can lead to a counterintuitive
increase in the mean-field critical temperature of the entire system. For very weak hopping,
the mean-field Tc monotonically increases with orbital flux through the second dot, reaching
its maximum when the second dot is fully time-reversal broken. For intermediate hopping
strength, the mean-field Tc initially increases with increasing orbital flux to a maximum.
Eventually, as the orbital flux, and therefore the crossover energy corresponding to time-
reversal breaking in the second dot increases, the critical temperature once again decreases.
For strong hopping EU ≫ Tc0, Tc monotonically decreases as a function of the orbital flux
in the second quantum dot.
We have obtained the detailed dependence of the fluctuation magnetization in the quan-
tum critical regime as a function of the dimensionless parameters T/EX2 and EX2/EU . Once
again, the coupled dot system behaves qualitatively differently from the single dot in having
a paramagnetic fluctuation magnetization in broad regimes of T , EX2 , and EU .
We understand these phenomena qualitatively as the result of two competing effects of
the flux through the second dot. The first effect is to raise the energy for Cooper pairs
in the second dot, thereby tending to localize the pairs in the first dot, and thus reducing
the “diluting” effect of the second dot. This first effect tends to lower the interacting free
energy (as a function of orbital flux) and raise the critical temperature. The second effect is
that as the electrons hop into the second dot and return they carry information about time-
reversal breaking into the first dot, which tends to increase the free energy (as a function of
orbital flux) and decrease the critical temperature. The first effect dominates for weak hopping
and/or high T , while the second dominates for strong hopping and/or low T . Intermediate
regimes are more complex, and display nonmonotonic behavior of Tc and the fluctuation
magnetization.
It should be emphasized that the quantum critical regime we focus on is qualitatively
different from other single-particle random matrix ensembles applicable to a normal meso-
scopic system which is gapless despite being in contact with one or more superconducting
regions43,44, either because the two superconductors have a phase difference of π in their
order parameters43, or because they are d-wave gapless superconductors44. The main dif-
ference is that we investigate and describe an interacting regime, not a single-particle one.
Without the interactions there would be no fluctuation magnetization.
Let us consider some of the limitations of our work. The biggest limitation of the nonin-
teracting part of the work is that we have used the large-N approximation, which means that
we cannot trust our results when the energy scales and/or the frequency of the measurement
becomes comparable to the mean level spacing. When ω ≃ δ the wavefunctions and levels
acquire correlations in the crossover which we have neglected. Another limitation is that we
have used a particular model for the interdot hopping which is analytically tractable, and is
modelled by a Gaussian distribution of hopping amplitudes. This might be a realistic model
in vertically coupled quantum dots, or where the bridge has a large number of channels, but
will probably fail if the bridge has only a few channels. These limitations could conceivably
be overcome by using supersymmetric methods14,18.
Coming now to the part of our work which deals with interactions, we have restricted
ourselves to the quantum critical regime of the system, that is, when there is no mean-field
BCS gap. Of course, a finite system cannot undergo spontaneous symmetry-breaking. How-
ever, in mean-field, one still finds a static BCS gap. The paradox is resolved by considering
phase fluctuations of the order parameter which restore the broken symmetry48. To system-
atically investigate this issue one needs to analyze the case when the bosonic auxiliary field
σ in the coupled-dot system acquires a mean-field expectation value and quantize its phase
fluctuations.
We have also chosen a geometry in which interdot charging effects can be ignored. How-
ever, most experimental systems with superconducting nanoparticles deal with almost spher-
ical particles. For two such nanoparticles coupled by hopping, one cannot ignore charging
effects24,39–41. We expect these to have a nontrivial effect on the mean-field Tc and fluctuation
magnetization of the combined system. We defer this analysis to future work.
There are several other future directions in which this work could be extended. New
symmetry classes51,52 have been discovered recently for two-dimensional disordered/ballistic-
chaotic systems subject to spin-orbit coupling53,54. In one of these classes, the spin-orbit
coupling is unitarily equivalent to an orbital flux acting oppositely51,52 on the two eigen-
states of a single-particle quantum number algebraically identical to σz . Due to the unitary
transformation, this quantum number has no simple interpretation in the original (Orthog-
onal) basis. However, it is clear that the results of this paper could be applied, mutatis
mutandis, to two coupled two-dimensional quantum dots subject to spin-orbit couplings. In
particular, consider the situation where one quantum dot has no spin-orbit coupling, but
does have a Stoner exchange interaction, while the other dot is noninteracting, but is made
of a different material and has a strong spin-orbit coupling. Work by one of us has shown26
that by tuning the spin-orbit coupling one can access the quantum critical regime, which is
dominated by many-body quantum fluctuations. The above configuration offers a way to
continuously tune the spin-orbit coupling in the first dot by changing the strength of the
hopping between the dots.
In general, one can imagine a wide range of circumstances where changing a crossover
parameter in one (noninteracting) dot allows one to softly and tunably break a symmetry
in another (interacting) dot, thereby allowing one access to a quantum critical regime.
We hope the present work will be useful in exploring such phenomena.
Acknowledgments
The authors would like to thank the National Science Foundation for partial support under
DMR-0311761, and Yoram Alhassid for comments on the manuscript. OZ wishes to thank
the College of Arts and Sciences and the Department of Physics at the University of Kentucky
for partial support. The authors are grateful to O. Korneta for technical help with graphics.
APPENDIX A: ONE UNCOUPLED DOT
In this Appendix we calculate one-particle and two-particle Green’s functions for a single
dot undergoing the crossover. The strength of magnetic field inside the dot is controlled by
crossover parameter X . The Hamiltonian of the system in crossover is:
H = (HS + iXHA)/√(1 + X²) , (A1)
where HS,A are symmetric and antisymmetric real random matrices with the same variance
for matrix elements. Normalization (1 + X2)−1/2 keeps the mean level spacing δ fixed as
magnetic field changes inside the dot.
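A quick numerical check of this normalization is to sample H from Eq. (A1) for several values of X and confirm that the mean level spacing near the band center does not move as time-reversal symmetry is broken; the matrix-element variance below is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossover_hamiltonian(N, X, sigma=1.0):
    """H = (H_S + i X H_A)/sqrt(1 + X^2), Eq. (A1).

    H_S is real symmetric and H_A real antisymmetric, built from Gaussian
    entries with the same variance; X = 0 gives GOE, X -> infinity gives GUE.
    """
    a = rng.normal(scale=sigma, size=(N, N))
    b = rng.normal(scale=sigma, size=(N, N))
    h_s = (a + a.T) / np.sqrt(2.0)
    h_a = (b - b.T) / np.sqrt(2.0)
    return (h_s + 1j * X * h_a) / np.sqrt(1.0 + X**2)

N = 400
for X in (0.0, 0.3, 1.0):
    spacings = []
    for _ in range(20):
        e = np.linalg.eigvalsh(crossover_hamiltonian(N, X))
        spacings.append(np.mean(np.diff(e[N // 4 : 3 * N // 4])))  # stay near E = 0
    print(f"X = {X:3.1f}   mean level spacing near band center: {np.mean(spacings):.4f}")
```

The printed spacing is essentially X-independent, which is exactly what the factor (1 + X²)^(−1/2) is designed to ensure.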
We define the retarded one-particle Green’s function as follows:
GRαβ(E) = [1/(E+ − H)]αβ = δαβ/E+ + Hαβ/(E+)² + (H²)αβ/(E+)³ + . . . , (A2)
Here H is a Hamiltonian, and E+ is the energy with infinitely small positive imaginary part
E+ = E + iη.
This series has nice graphical representation:
GR(E) = , (A3)
where straight solid line represents 1/E+ and dashed line stands for Hamiltonian.
= , H = (A4)
Just as in a disordered conductor or in quantum field theory, the target is not the Green’s
function itself, but rather its mean and mean square. We take the random matrix ensemble
average of Gαβ . Such averaging assumes knowledge of 〈Hn〉, where the angular brackets stand
for Gaussian ensemble averaging and n = 1, . . . ,∞. For n = 1 we have 〈H〉 = 0, while for n = 2
the second moment reads:
〈HαγHδβ〉 =
〈HsαγHsδβ〉 −X2〈HaαγHaδβ〉
1 +X2
δαβδγδ +
1 +X2
δαδδγβ . (A5)
All higher moments of H can be computed using Wick’s theorem55. Thus, the ensemble
averaging leaves only the terms containing even moments of H . Introducing the notation
for 〈HH〉 = , we obtain, for the averaged GR series:
〈GR(E)〉 = (A6)
Then, the expansion (A6) can be written in a compact form of Dyson equation:
The bold line denotes the full one-particle Green’s function averaged over Gaussian en-
semble, and Σ is a self-energy, representing the sum of all topologically different diagrams.
The corresponding algebraic expression for the Dyson equation can be easily extracted from
Eq. (A7) producing:
Gαβ =
GανΣνµ
, (A8)
where Gαβ means 〈GRαβ(E)〉. Now, using the fact that Gαβ = Gαδαβ and Σαβ = Σαδαβ (no
summation over α implied), one can solve this equation and obtain:
Gαβ =
E+ − Σ
. (A9)
Next we approximate self-energy by the first term in large N approximation:
Σαβ = = G
〈HαγHγβ〉 ≈
E+ − Σ
. (A10)
Solving Eq. (A10) for the self-energy we determine:
−E2. (A11)
Consequently, the ensemble average of one-particle Green’s function is given by:
〈GRαβ(E)〉 =
)2 − E2
; 〈GAαβ(E)〉 = 〈GRβα(E)〉∗. (A12)
Next, to study the two-particle Green’s function we notice that the main contributions
come from ladder and maximally crossed diagrams:
(A13)
Two bold lines on the left side stand for the average two-particle Green’s function 〈GR(E+
ω)GA(E)〉. The sum of ladder diagrams is described by Bethe-Salpeter equation:
(A14)
δαδδβγ +
F [E, ω]
, (A15)
where ΠD is the ladder approximation of the diffuson part of the two-particle Green’s function. Here
F [E, ω] is a product of two inverse averaged one-particle Green’s functions and in the limit
ω ≪ Nδ is:
F [E, ω] = 〈GR(E + ω)〉−1〈GA(E)〉−1 ≈ −
. (A16)
One can solve this equation taking into account Π
δγ = Π
Dδαδδβγ:
F [E, ω]
F [E, ω]−
. (A17)
Multiplying ΠD by F 2[E, ω] we arrive at the following expression for the diffuson term:
〈GRαγ(E + ω)GAδβ(E)〉D =
δαβδγδ
. (A18)
Then, we turn our attention to the equation for maximally crossed diagrams. We have
(A19)
and ΠC is expressed in terms of F [E, ω] again:
1 +X2
F [E, ω]
F [E, ω]− 1−X2
. (A20)
Assuming X to be small compared to unity (weak crossover), we evaluate the contribution
of maximally crossed diagrams to Green’s function to get:
〈GRαγ(E + ω)GAδβ(E)〉C =
δαδδγβ
1 + iEX
, (A21)
where EX = 4X²Nδ/π is the crossover energy scale. The final expression for the connected
part of the two-particle Green’s function is:
〈GRαγ(E + ω)GAδβ(E)〉 =
δαβδγδ
δαδδγβ
1 + iEX
. (A22)
APPENDIX B: TWO COUPLED DOTS
This Appendix contains details of the derivation for statistical properties of the Green’s
functions for the two coupled dots connected to each other via hopping bridge V . Coupling
between dots is weak and characterized by dimensionless parameter U . For the system of
uncoupled dots the Hilbert space is a direct sum of spaces for dot 1 and dot 2. Hopping V
mixes the states from two spaces. The Hamiltonian of the system can be represented as:
Htot =
( H1   V
  V †  H2 ) . (B1)

For H1,2 and V we have:

Hn = (HSn + iXnHAn)/√(1 + X²n) ,  n = 1, 2 ;    V = (V R + iΓV I)/√(1 + Γ²) . (B2)
Here S (A) stands for symmetric (antisymmetric), and R (I) means real (imaginary).
Below we use Greek indices for dot 1, and Latin indices for dot 2. We also found it convenient
to keep bandwidth of both dots the same; that is, N1δ1 = N2δ2 with ξ = δ1/δ2.
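For numerical experimentation the block Hamiltonian of Eqs. (B1)–(B2) can be sampled directly, as in the minimal sketch below; there the bridge elements are simply scaled by √U relative to the intradot elements, an illustrative normalization rather than the precise variance convention of the pairings quoted below.

```python
import numpy as np

rng = np.random.default_rng(1)

def crossover_block(N, X):
    """(H_S + i X H_A)/sqrt(1 + X^2) for a single dot, cf. Eq. (B2)."""
    a = rng.normal(size=(N, N))
    b = rng.normal(size=(N, N))
    return ((a + a.T) / np.sqrt(2) + 1j * X * (b - b.T) / np.sqrt(2)) / np.sqrt(1 + X**2)

def coupled_dots(N1, N2, X1, X2, Gamma, U):
    """Htot = [[H1, V], [V^dag, H2]] of Eq. (B1).

    V = (V_R + i Gamma V_I)/sqrt(1 + Gamma^2), with all elements scaled by
    sqrt(U) so that U plays the role of the weak dimensionless coupling.
    """
    h1 = crossover_block(N1, X1)
    h2 = crossover_block(N2, X2)
    vr = rng.normal(size=(N1, N2))
    vi = rng.normal(size=(N1, N2))
    v = np.sqrt(U) * (vr + 1j * Gamma * vi) / np.sqrt(1 + Gamma**2)
    return np.block([[h1, v], [v.conj().T, h2]])

h_tot = coupled_dots(N1=200, N2=200, X1=0.0, X2=0.5, Gamma=0.0, U=0.05)
print("Hermitian:", np.allclose(h_tot, h_tot.conj().T))
print("dimension:", h_tot.shape)
```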
The following averaged products of matrix elements of H can be obtained:
〈HαγHδβ〉 =
δαβδγδ +
1−X21
1 +X21
δαδδγβ
〈HilHkj〉 =
δijδlk +
1−X22
1 +X22
δikδlj ,
where X1 and X2 are the crossover parameters in dot 1 and 2. Pairings between V matrix
elements are:
〈VαiVβj〉 = 〈V †iαV
jβ〉 =
1− Γ2
1 + Γ2
N1N2δ1δ2U
δαβδij
〈VαiV †jβ〉 =
N1N2δ1δ2U
δαβδij,
with Γ a crossover parameter in hopping bridge. Normalization for V pairing is chosen to
coincide with that of 〈HH〉 when ξ = 1.
To determine one-particle Green’s function we use the system listed in Eq. (20). The
straight and wavy bold lines with arrows represent averaged functions 〈GR1 (E)〉, 〈GR2 (E)〉
in dot 1 and 2, regular lines represent bare propagators, and the rest of the lines describe
pairings between Htot matrix elements. We have:
= 〈GR1 (E)〉; =
= 〈GR2 (E)〉; =
= 〈H1H1〉 = 〈H2H2〉 = 〈V V †〉.
The corresponding analytical expressions of this system of equations are:
Σ11G1
Σ12G1
Σ22G2
Σ21G2
with G1 and G2 connected to Green’s functions via: 〈GRαγ,1(E)〉 = G1δαγ , 〈GRil,2(E)〉 = G2δil
The self-energies Σnm are to be determined using standard procedure14.
We observe, that the system of two linear equations (B) has a solution:
E+ − Σ11 − Σ12
, G2 =
E+ − Σ22 − Σ21
Here we approximated self-energies by the first term in large N expansion again. In this
approximation evaluation of Σnm yields:
Σ11αβ = Σ
11δαβ = = G1
〈HαγHγβ〉 =
E+ − Σ11 − Σ12
Σ12αβ = Σ
12δαβ = = G2
〈VαiV †iβ〉 =
N1N2N2δ1δ2U
E+ − Σ22 − Σ21
Σ22ij = =
E+ − Σ22 − Σ21
Σ21ij = =
N1N2N1δ1δ2U
E+ − Σ11 − Σ12
Thus, to find all Σnm one needs to solve the following system of equations:
E+ − Σ11 − Σ12
E+ − Σ22 − Σ21
N1N2N2δ1δ2U
E+ − Σ22 − Σ21
E+ − Σ11 − Σ12
N1N2N1δ1δ2U
Observing that Σ21 = UΣ11/
ξ and Σ12 = U
ξΣ11 we decouple the system given in Eq.
(B6). For example, the pair of first and third equations can be rewritten as:
(Σ11)2 − EΣ11 + U
ξΣ11Σ22 = −
(Σ22)2 − EΣ22 +
Σ11Σ22 = −
For weak coupling the solution can be found by expanding self-energies Σ11 and Σ22 in
series in U . Taking the solution for a single dot as the zeroth approximation (below all the solutions
for the uncoupled dot will be marked with subscript 0) we get
Σ11 = Σ110 + UΣ
1 (B8)
Σ22 = Σ220 + UΣ
1 . (B9)
Note that N1δ1 = N2δ2, and Σ
0 = Σ
0 ≡ Σ0.
Plugging Eqs. (B8) and (B9) into the right hand side of the system (B7) we arrive at:
Σ11 = Σ0
1 + U
E+ − 2Σ0
Σ22 = Σ0
E+ − 2Σ0
Σ21 =
Σ12 = U
ξΣ22.
(B10)
Neglecting the higher powers in U for one-particle Green’s functions we finally arrive at
the following expressions for the single particle Green’s functions:
〈GRαβ,1(E)〉 =
〈GRαβ,0(E)〉
E−2Σ0
1− ǫ2
1 + U
1 + i ǫ√
〈GRij,2(E)〉 =
〈GRij,0(E)〉
1− U√
E−2Σ0
1− ǫ2
1 + U
1 + i ǫ√
where ǫ = πE/2Nδ.
Now we switch our attention to the calculational procedure for the average of the two-
particle Green’s functions 〈GRαγ,1(E + ω)GAδβ,1(E)〉 and 〈GRil,2(E + ω)GAkj,2(E)〉. In the limit
of large N1 and N2 ladder and maximally crossed diagrams contribute the most. For ladder
diagrams we obtain the system of Bethe-Salpeter equations (see Eq. (22)). Here we used
the following notation:
= 〈GR11(E + ω)GA11(E)〉, = 〈GR22(E + ω)GA22(E)〉
= 〈GR12(E + ω)GA21(E)〉, = 〈GR21(E + ω)GA12(E)〉
(B11)
For the diffuson ΠDnm the system of algebraic equations reads:
ΠD11 =
F1[E, ω]
N1N2δ1δ2U
F2[E, ω]
ΠD22 =
F2[E, ω]
N1N2δ1δ2U
F1[E, ω]
ΠD12 =
N1N2δ1δ2U
F1[E, ω]
N1N2δ1δ2U
F2[E, ω]
ΠD21 =
N1N2δ1δ2U
F2[E, ω]
N1N2δ1δ2U
F1[E, ω]
(B12)
where F1[E, ω] and F2[E, ω] are defined as products of inverse averaged one-particle Green’s
functions in the first and second dots respectively. For small values of U and ω these
functions can be approximated as follows:
F1[E, ω] = 〈GR1 (E + ω)〉−1〈GA1 (E)〉−1 ≈
ξU − iω̃
F2[E, ω] = 〈GR2 (E + ω)〉−1〈GA2 (E)〉−1 ≈
− iω̃
(B13)
where ω̃ = πω/2Nδ. The system of four equations given by Eq. (B12) can be decoupled
into two systems of two equations each. To determine ΠD11 one solves the system of the
first and the last equations of Eq. (B12) to get:
F1(E, ω)
ΠD11 −
N1N2N2δ1δ2U
F2(E, ω)
F2(E, ω)
ΠD21 −
N1N2N1δ1δ2U
F1(E, ω)
N1N2δ1δ2U
(B14)
Then, solving the resulting system (Eq. (B14)) and attaching external lines one obtains
expression for the two-particle Green’s function in dot 1:
〈GRαγ,1(E + ω)GAδβ,1(E)〉D =
N21 δ1
δαβδγδ
1 + i√
1 + i(
ξ + 1√
. (B15)
The corresponding correlator for dot 2 is readily obtained as well:
〈GRil,2(E + ω)GAkj,2(E)〉D =
N22 δ2
δijδlk
1 + i
1 + i(
ξ + 1√
. (B16)
For the second part of the Green’s function (which is the sum of maximally crossed
diagrams) the system of equations is described by Eq. (24). Transforming this graphical
system into the algebraic one, we get:
ΠC11 =
1−X21
1 +X21
1−X21
1 +X21
F1[E, ω]
1− Γ2
1 + Γ2
N1N2δ1δ2U
F2[E, ω]
ΠC22 =
1−X22
1 +X22
1−X22
1 +X22
F2[E, ω]
1− Γ2
1 + Γ2
N1N2δ1δ2U
F1[E, ω]
ΠC12 =
1− Γ2
1 + Γ2
N1N2δ1δ2U
1− Γ2
1 + Γ2
N1N2δ1δ2U
F2[E, ω]
1−X21
1 +X21
F1[E, ω]
ΠC21 =
1− Γ2
1 + Γ2
N1N2δ1δ2U
1− Γ2
1 + Γ2
N1N2δ1δ2U
F1[E, ω]
1−X22
1 +X22
F2[E, ω]
(B17)
Once again, the system at hand breaks into systems of two equations each. We proceed
by combining the first and the last equations to obtain:
1−X21
1 +X21
F1[E, ω]
ΠC11 −
1− Γ2
1 + Γ2
N1N2N2δ1δ2U
F2[E, ω]
1−X21
1 +X21
1− Γ2
1 + Γ2
N1N2N1δ1δ2U
F1[E, ω]
1−X22
1 +X22
F2[E, ω]
ΠC21 =
1− Γ2
1 + Γ2
N1N2δ1δ2U
(B18)
Now we can construct approximations for the expressions, containing crossover parame-
ters. For example, for small values of X and Γ the solution for ΠC11 is expressed as follows:
ΠC11 =
(1− 2X21 )( U√ξ − iω̃ + 2X
2 ) + (1− 4Γ2)U2
ξU − iω̃ + 2X21 )( U√ξ − iω̃ + 2X
2 )− (1− 4Γ2)U2
. (B19)
Next, introducing crossover energy scales:
EX = 4X
EU = 2U
4Γ2EU√
ξ + 1√
(B20)
we obtain the solution for ΠC11 in the following form:
ΠC11 =
1− EU√
− EX2
1− EX1+EX2
EX1EX2
(iω)2
EX1EU√
ξ(iω)2
ξEX2EU
(iω)2
ξ + 1√
(B21)
Then, adding external lines to ΠC11 for Green’s function we get:
〈GRαγ,1(E + ω)GAδβ,1(E)〉C =
N21 δ1
δαδδγβ
1 + i√
1 + i
EX1+EX2
− EX1EX2
− EX1EU√
ξEX2EU
ξ + 1√
1 + iEΓ
(B22)
Similar manipulations for the corresponding correlator of Green’s functions for the second
dot result in:
〈GRil,2(E + ω)GAkj,2(E)〉C =
N22 δ2
δikδlj
1 + i
1 + i
EX1+EX2
− EX1EX2
− EX1EU√
ξEX2EU
ξ + 1√
1 + iEΓ
(B23)
Finally, the connected part of the total two-particle Green’s function is obtained as a sum
of diffuson and cooperon parts, yielding:
〈GRαγ,1(E + ω)GAδβ,1(E)〉 =
N21 δ1
δαβδγδ
1 + i√
1 + i(
ξ + 1√
N21 δ1
δαδδγβ
1 + i√
1 + i
EX1+EX2
− EX1EX2
− EX1EU√
ξEX2EU
ξ + 1√
1 + iEΓ
(B24)
〈GRil,2(E + ω)GAkj,2(E)〉 =
N22 δ2
δijδlk
1 + i
1 + i(
ξ + 1√
N22 δ2
δikδlj
1 + i
1 + i
EX1+EX2
− EX1EX2
− EX1EU√
ξEX2EU
ξ + 1√
1 + iEΓ
(B25)
APPENDIX C: FOURIER TRANSFORM OF TWO-PARTICLE GREEN’S
FUNCTION
To be able to study the temporal behavior of electrons in the RMT system we introduce the
Fourier transform of the two-particle Green’s function. We define it via the following integral:
〈GRαγ(t)GAδβ(t)〉 = (1/(2π)²) ∫∫ exp(−iωt) 〈GRαγ(E + ω)GAδβ(E)〉 dω dE . (C1)
To get the correct behavior of the diffuson part for small ω, we replace 1/ω by ω/(ω2+η2),
where η is infinitesimal positive number. Now we introduce for dot 1:
fD(ω) =
N21 δ1
δαβδγδ
1 + i√
1 + i
ξ + 1√
→ δαβδγδ
N21 δ1
ω2 + η2
ω + i√
ω + i
ξ + 1√
= δαβδγδ
N21 δ1
ω + i√
(ω − iη)(ω + iη)(ω + i(
ξ + 1√
. (C2)
The Fourier transform of this diffuson term gives:
fD(t) =
exp(−iωt)fD(ω)dω. (C3)
The next steps are standard contour integration in the complex plane. For t > 0 one closes
the contour in the lower half plane. One root is located in the upper half plane and two more
are located in the lower half plane. The integration yields:
fD(t) = δαβδγδ
N21 δ1
− η)e−ηt
ξ + 1√
)EU − η
ξ + 1√
ξE2Ue
ξ+ 1√
)EU t
ξ + 1√
)2E2U − η2
. (C4)
As η approaches zero, fD(t) becomes:
fD(t) = δαβδγδ
N21 δ1
1 + ξ
ξ+ 1√
)EU t
. (C5)
The full Fourier transformation includes integration over E as well. In the current approximation,
when E is close to the center of the band, 〈GR1GA1 〉 is independent of E. It will
depend on E if we integrate over the whole bandwidth. The exact dependence of 〈GR1GA1 〉
on E far from the center of the band is not known. To get the correct expression we assume
that the integration over E adds to 〈GR1GA1 〉 a multiplicative factor N1δ1 along with a normalization
coefficient A. Also, for the index pairing α = β and γ = δ, GRαγGAδβ becomes the transition
probability density P (t)α→γ. Using the equipartition theorem, for t → ∞ summation of P (t)α→γ over
α gives the total probability to stay in dot 1. It is equal to N1/(N1 +N2). That is,
dEfD(t) =
N1 +N2
1 + ξ
. (C6)
Integration over E and summation over α gives a factor of AN²1δ1. We identify the
normalization constant as A = 1/π. Note that we did not use the cooperon part fC(t) to
determine the normalization constant A. The reason for that is the chosen index pairing: after
the summation over α the cooperon part contribution is of the order 1/N1 compared with the
diffuson part. After integration over E with proper normalization fD(t) becomes:
fD(t) = δαβδγδ
N1(1 + ξ)
ξ+ 1√
)EU t
. (C7)
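The equipartition statement used above is easy to check by brute force: the minimal sketch below evolves a state started on one level of dot 1 under a random coupled-dot Hamiltonian of the block form (B1) (with an illustrative coupling normalization) and verifies that the ensemble-averaged probability of remaining in dot 1 relaxes towards N1/(N1 + N2).

```python
import numpy as np

rng = np.random.default_rng(2)

def random_coupled_dots(n1, n2, u):
    """Real symmetric Htot = [[H1, V], [V^T, H2]] with equal bandwidths."""
    a1 = rng.normal(size=(n1, n1)); h1 = (a1 + a1.T) / np.sqrt(2)
    a2 = rng.normal(size=(n2, n2)); h2 = np.sqrt(n1 / n2) * (a2 + a2.T) / np.sqrt(2)
    v = np.sqrt(u) * rng.normal(size=(n1, n2))      # weak hopping bridge
    return np.block([[h1, v], [v.T, h2]])

def dot1_probability(h, n1, times):
    """P(t) of being anywhere in dot 1 for a state started on level 0 of dot 1."""
    vals, vecs = np.linalg.eigh(h)
    c = vecs[0, :].conj()                           # overlaps <m|psi_0> for psi_0 = |0>
    probs = []
    for t in times:
        psi_t = vecs @ (np.exp(-1j * vals * t) * c)  # e^{-iHt}|psi_0>, hbar = 1
        probs.append(float(np.sum(np.abs(psi_t[:n1]) ** 2)))
    return np.array(probs)

n1, n2, u = 60, 120, 0.05
times = np.linspace(0.0, 40.0, 9)
p = np.mean([dot1_probability(random_coupled_dots(n1, n2, u), n1, times)
             for _ in range(40)], axis=0)
print("N1/(N1+N2) =", n1 / (n1 + n2))
for t, pt in zip(times, p):
    print(f"t = {t:5.1f}   <P_dot1(t)> = {pt:.3f}")
```

The decay of 〈P_dot1(t)〉 from 1 towards N1/(N1 + N2) happens on the hopping time scale, in line with the exponential e−(√ξ+1/√ξ)EU t behavior of fD(t).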
Then we perform the Fourier transform of the cooperon part:
fC(ω) =
N21 δ1
δαδδγβ
1− EX2
1− EX1+EX2
EX1EX2
(iω)2
EX1EU√
ξ(iω)2
ξEX2EU
(iω)2
ξ + 1√
The function fC(ω) is regular when ω approaches its limiting values, provided at least one
of the crossover energy scales EX1 , EX2 , or EΓ differs from zero.
To make fC(ω) more suitable for the Fourier transform we manipulate Eq. (C8) into:
fC(ω) = −δαδδγβ
N21 δ1
iω − EX2 −
(iω)2 −
(EX1 +
ξEU ) + (EX2 +
EX1EX2 +
EX1EU√
+ EX2EU
ξ + (
)EUEΓ
and observe that the poles of fC(ω) are given by

iω± = [ (EX1 + √ξEU) + (EX2 + EU/√ξ) ± √D ] / 2 , (C10)

with D = ((EX1 + √ξEU) − (EX2 + EU/√ξ))² + 4E²U(1 − 4Γ²). The parameter D is always
positive and the ω± are purely imaginary.
It can be proved that (EX1 + √ξEU) + (EX2 + EU/√ξ) > √D for all values of the parameters,
which means that the poles are pure imaginary numbers in the lower half of the complex plane:

ω± = −i [ (EX1 + √ξEU) + (EX2 + EU/√ξ) ± √D ] / 2 = −ia±,   a+ > a− > 0. (C11)
The function fC(ω) now reads:
fC(ω) = −δαδδγβ
N21 δ1
iω − EX2 − EU√ξ
(iω − a−)(iω − a+)
. (C12)
We perform the Fourier transform and use the normalization factor to obtain:
fC(t) =
exp(−iωt)fC(ω)dωdE = δαδδγβ
EX2 +
a+ − a−
e−ta− − e−ta+
(C13)
Hence, the full expressions for the Fourier transforms of the two-particle Green’s functions
in the two dots are given by:
〈GRαγ(t)GAδβ(t)〉11 = δαβδδγ
N1(1 + ξ)
ξ+ 1√
)EU t
+ δαδδγβ
EX2 +
a+ − a−
e−ta− − e−ta+
(C14)
〈GRik(t)GAlj(t)〉22 = δijδkl
N2(1 + ξ)
ξ+ 1√
)EU t
+ δilδkj
EX1 +
a+ − a−
e−ta− − e−ta+
, (C15)
where a± is defined through Eq. (C11).
APPENDIX D: CORRELATION OF FOUR WAVE FUNCTIONS
In this appendix we obtain the correlation of four wave functions 〈ψn(α)ψ∗n(γ)ψm(δ)ψ∗m(β)〉
for the system of two coupled dots. This has been obtained in a single dot for the pure
ensembles by supersymmetry methods by Mirlin50, and for the GOE→GUE crossover by
Adam et al.23. We consider the ensemble average of the following product:
〈[GRαγ(E + ω)−GAαγ(E + ω)][GRδβ(E)−GAδβ(E)]〉 ≈
−〈GRαγ(E + ω)GAδβ(E)〉 − 〈GAαγ(E + ω)GRδβ(E)〉 = −2(δαβδγδRe[D1] + δαδδγβRe[C1]), (D1)
where D1 and C1 are the diffuson and cooperon expressions from Eq. (27). Here we used
the fact that ensemble average of GRGR and GAGA are smaller than GRGA and GAGR.
On the other hand, we have:
GRαγ(E)−GAαγ(E) = −2πi Σn ψn(α)ψ∗n(γ) δ(E − En), (D2)

〈[GRαγ(E + ω)−GAαγ(E + ω)][GRδβ(E)−GAδβ(E)]〉 =
−4π²〈 Σn,m ψn(α)ψ∗n(γ)ψm(δ)ψ∗m(β) δ(E + ω − En)δ(E − Em) 〉. (D3)
We know that in the crossover the components of eigenvalues and eigenvectors are correlated
with each other. This correlation is small already at distances of a few δ and can be
neglected in the limit ω ≫ δ, so Eq. (D3) can be approximated by:

−4π²〈ψn̄(α)ψ∗n̄(γ)ψm̄(δ)ψ∗m̄(β)〉〈Σn δ(E + ω − En)〉〈Σm δ(E − Em)〉,
where n̄ and m̄ mark energy levels close to E + ω and E respectively.
The average of the sum is the density of states ρ(E) = 〈Σn δ(E − En)〉 = 1/δ. Then, we have

〈[GRαγ(E + ω)−GAαγ(E + ω)][GRδβ(E)−GAδβ(E)]〉 = −(4π²/δ²) 〈ψn(α)ψ∗n(γ)ψm(δ)ψ∗m(β)〉. (D4)
For the two coupled dots we have:
Re[D1] =
ω2 + (
ξ + 1√
)2E2U
In order to calculate Re[C1] from Eq. (27) we are going to assume that the magnetic field is
zero in the first dot and in the hopping region (EX1 = EΓ = 0), and that the second dot is in
the GOE to GUE crossover (EX2 ∼ ω). Then,
Re[C1] =
N21 δ1
2 + (EU +
ξEX2)EUEX2
(ω2 −
ξEUEX2)
2 + (EX2 + (
ξ + 1√
)EU)2ω2
The relation between the mean level spacing δ for the system of coupled dots and the
mean level spacing in the first uncoupled dot δ1 is as follows. The averaged density of states
in the coupled system is the sum of the densities of each dot: 〈ρ〉 = 〈ρ1〉 + 〈ρ2〉, or
1/δ = 1/δ1 + 1/δ2. Thus, we conclude that δ = δ1/(1 + ξ).
Finally, we set Eq. (D1) and Eq. (D4) equal and obtain the correlation of the four wave
functions:
〈ψn(α)ψ∗n(γ)ψm(δ)ψ∗m(β)〉 = δαβδγδ
π(1 + ξ)2N21
ω2 + (
ξ + 1√
)2E2U
+ δαδδγβ
π(1 + ξ)2N21
ξω2 + (
ξE2X2 + EUEX2)
(ω2 −
ξEUEX2)
2 + (EX2 + (
ξ + 1√
)EU)2ω2
. (D7)
APPENDIX E: SUM RULE FOR DOUBLE DOT SYSTEM
To verify the expressions we have obtained for the averaged Green’s functions we use a
sum rule.
The pair annihilation (creation) operator T (T †) in the basis of two uncoupled dots is a
sum of two terms belonging to each dot:
T = Σα0 cα0,↓cα0,↑ + Σi0 ci0,↓ci0,↑ ,
T † = Σα0 c†α0,↑c†α0,↓ + Σi0 c†i0,↑c†i0,↓ . (E1)
Greek indices go over the states in the first dot, and Latin indices go over the states in
the second dot. The subindex 0 denotes the basis of two uncoupled dots.
Our first goal is to calculate the commutator [T †, T ]. As operators from different dots
anticommute, one gets:
[T †, T ] = Σα0,β0 [c†α0,↑c†α0,↓, cβ0,↓cβ0,↑] + Σi0,j0 [c†i0,↑c†i0,↓, cj0,↓cj0,↑] = N̂1e + N̂2e − N1 − N2, (E2)
where N̂1e, N̂2e are the operators of total number of electrons in dot 1 and dot 2, and N1, N2
are the total number of levels in dot 1 and dot 2.
The expectation value of [T †, T ] in ground state at zero temperature is:
[T †, T ] = 〈Ω|[T †, T ]|Ω〉 = Ne −N. (E3)
Ne and N are the total number of electrons and levels in both dots. This number is conserved
when going to another basis.
Now we choose the basis of the system of coupled dots. In this basis cα0,s = Σm ψm(α0)cm,s
and ci0,s = Σm ψm(i0)cm,s, where cm,s is the annihilation operator in the new basis.
Using this transformation, we rewrite the pair annihilation operator as follows:
T = Σα0 cα0,↓cα0,↑ + Σi0 ci0,↓ci0,↑ = Σm1,m2 Dm1m2 cm1,↓cm2,↑, (E4)

where Dm1m2 is defined by the following expression:

Dm1m2 = Σα0 ψm1(α0)ψm2(α0) + Σi0 ψm1(i0)ψm2(i0) = Σp0 ψm1(p0)ψm2(p0). (E5)
The index p0 runs over all states in the first and second dots for the basis of uncoupled dots.
In the new basis the T, T † operators look like this:
m1,m2
Dm1m2cm1,↓cm2,↑,
T † =
m1,m2
D∗m1m2c
m2,↑c
m1,↓.
Consequently, in the new basis,
[T †, T ] =
m1,m2
m3,m4
D∗m1m2Dm3m4 [c
m2,↑c
m1,↓, cm3,↓cm4,↑]
m2,m4
D∗m1m2Dm1m4
m2,↑cm4,↑ −
m1,m3
D∗m1m2Dm3m2
cm3,↓c
m1,↓. (E7)
One can go further and use the completeness condition Σm ψ∗m(p0)ψm(n0) = δp0n0 to show
that in the new basis the value of the commutator is N̂e − N . Our next goal, however, is to take
the disorder average of the vacuum expectation value and to prove the invariance of [T †, T ].
Taking into account that 〈Ω|c†m1,↑cm2,↑|Ω〉 = δm1m2Θ(µ − Em1) and 〈Ω|cm2,↓c†m1,↓|Ω〉 =
δm1m2(1 − Θ(µ − Em1)), the ground state expectation value of the commutator is:
[T †, T ] = 〈Ω|[T †, T ]|Ω〉 =
m1,m2
|Dm1m2 |2[2Θ(µ− Em1)− 1], (E8)
where Θ(x) is a step function.
Averaging over disorder gives:
〈[T †, T ]〉 = 2
m1,m2
Θ(µ− Em1)〈|Dm1m2 |2〉 −
m1,m2
〈|Dm1m2 |2〉 (E9)
Converting this into integral, we get:
〈[T †, T ]〉 = 2
dE1dE2ρ(E1)ρ(E2)〈|D(E1, E2)|2〉
dE1dE2ρ(E1)ρ(E2)〈|D(E1, E2)|2〉 (E10)
The density of states ρ(E) is given by Wigner’s semicircle law:

ρ(E) = (2N/(πW²)) √(W² − E²) ,

where 2W is the bandwidth and N is the number of states in the system.
To proceed we need to find the ensemble average of the following object:
〈|Dm1m2 |2〉 =
p0,n0
〈ψ∗m1(p0)ψ
(p0)ψm1(n0)ψm2(n0)〉. (E11)
Using results of appendix D one can obtain expression for the correlation of four wave
functions in the form:
〈ψ∗m1(p0)ψ
(p0)ψm1(n0)ψm2(n0)〉 =
2π2ρ(E1)ρ(E2)
〈GRn0p0(E2)G
(E1)〉
〈GRn0p0(E2)G
(E1)〉
. (E12)
Note that to get the correct answer for the sum rule one should keep the 〈GRGR〉 term as
well. The summation in Eq. (E12) is performed over the states in both dots.
When the dots have equal mean level spacing δ1 = δ2 = δ0, one particle Green’s function
can be found exactly from the system (B7) without approximation in U :
(E)〉 =
W 2 −E2
(E)〉 =
W 2 − E2
e−iφ,
(E13)
where W = (2N0δ0/π)√(1 + U) is the half bandwidth and sinφ = E/W . Here both indices
p0 and p′0 belong either to the first or to the second dot.
The sum in Eq. (E12) can be broken into four sums, when the indices p0, n0 belong either
to the first dot, or to the second dot, or one of the indices goes over the states in the first dot
and the other one goes over the states in the second dot.
For example, for 〈GRGA〉 part we have the following expression:
〈GRn0p0(E2)G
(E1)〉 =
(1 + U)
(1 + U)e−iφ21 − ζ
[(1 + U)e−iφ21 − 1][(1 + U)e−iφ21 − ζ ]− U2
(1 + U)
(1 + U)e−iφ21 − 1
[(1 + U)e−iφ21 − 1][(1 + U)e−iφ21 − ζ ]− U2
(1 + U)
[(1 + U)e−iφ21 − 1][(1 + U)e−iφ21 − ζ ]− U2
(E14)
Here φ21 = φ2 − φ1, and ζ = (1−X22 )/(1 +X22 )
The first term in Eq. (E14) is the contribution of 〈GR〉〈GA〉 plus the cooperon part of
the two-particle Green’s function in the first dot. The second term describes the contribution of
the free term and the cooperon part in the second dot. The last term is a sum of the transition parts
from dot 1 to dot 2 and vice versa. It appears that these transition terms are equal, which
explains the coefficient 2 in front of the last term in Eq. (E14).
Summation of the 〈GRGR〉 gives similar result:
〈GRn0p0(E2)G
(E1)〉 =
(1 + U)
(1 + U)e−iψ21 + ζ
[(1 + U)e−iψ21 + 1][(1 + U)e−iψ21 + ζ ]− U2
(1 + U)
(1 + U)e−iψ21 + 1
[(1 + U)e−iψ21 + 1][(1 + U)e−iψ21 + ζ ]− U2
(1 + U)
[(1 + U)e−iψ21 + 1][(1 + U)e−iψ21 + ζ ]− U2
(E15)
where ψ21 = φ2 + φ1.
In principle, there should be terms corresponding to diffusons in dot 1 and dot 2. However,
these terms after summation over p0, n0 are 1/N0 smaller than the others and in the large
N0 limit can be neglected.
Although one can use Eq. (E10) to verify the sum rule, it is more convenient to work
with the derivative of Eq. (E10) with respect to µ at µ = 0. It gives:

(∂/∂µ)〈[T †, T ]〉|µ=0 = 2ρ(0) ∫ dE2 ρ(E2)〈|D(E1 = 0, E2)|²〉. (E16)

On the other hand, this expression should be equal to:

(∂/∂µ)(Ne − N) = (∂/∂µ)( 2 ∫^µ ρ(E)dE − N ) = 2ρ(µ). (E17)
Comparison of Eq. (E16) and (E17) at µ = 0 results in the following condition for the
sum rule:
∫ dE2 ρ(E2)〈|D(E1 = 0, E2)|²〉 = 1. (E18)
The integral in Eq. (E18) was computed numerically and matched unity with high
accuracy.
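Before disorder averaging, the sample-by-sample version of this sum rule is a direct consequence of the unitarity of the eigenvector matrix ψm(p0); the minimal check below verifies the ground-state expectation value of Eq. (E8) against Ne − N for a single random realization of a coupled-dot Hamiltonian (the Hamiltonian parameters are illustrative).

```python
import numpy as np

rng = np.random.default_rng(3)

def crossover_block(n, x):
    a = rng.normal(size=(n, n))
    b = rng.normal(size=(n, n))
    return ((a + a.T) / np.sqrt(2) + 1j * x * (b - b.T) / np.sqrt(2)) / np.sqrt(1 + x**2)

# One realization of the coupled-dot Hamiltonian in the uncoupled basis,
# with partially broken time-reversal symmetry in dot 2.
n1, n2, u = 30, 40, 0.1
h1 = crossover_block(n1, 0.0)
h2 = crossover_block(n2, 0.4)
v = np.sqrt(u) * rng.normal(size=(n1, n2))
h = np.block([[h1, v], [v.conj().T, h2]])

energies, psi = np.linalg.eigh(h)          # columns of psi are the eigenvectors psi_m(p0)

mu = 0.0                                   # chemical potential
n_occ = int(np.sum(energies < mu))         # doubly occupied orbitals below mu
d = psi.T @ psi                            # D_{m1 m2} = sum_{p0} psi_{m1}(p0) psi_{m2}(p0), Eq. (E5)
sign = 2.0 * (energies < mu) - 1.0         # 2 Theta(mu - E_{m1}) - 1
lhs = np.sum(np.abs(d) ** 2 * sign[:, None])   # ground-state expectation value, Eq. (E8)

print("<[T^dag, T]> from Eq. (E8):", round(float(lhs), 10))
print("N_e - N:                   ", 2 * n_occ - (n1 + n2))
```

The two printed numbers coincide for every realization; only the ensemble-averaged, energy-resolved form, Eq. (E18), is a nontrivial test of the approximate two-particle Green’s functions.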
∗ Electronic address: [email protected]
† Electronic address: [email protected]
‡ Electronic address: [email protected]
1 E. Wigner, Ann. Math. 62, 548 (1955).
2 E. Wigner, Ann. Math. 65, 203 (1957).
3 M. L. Mehta, Random Matrices, vol. 142 of Pure and Applied Mathematics (Academic Press,
2004), 3rd ed.
4 L. Gorkov and G. Eliashberg, Zh. Eksp. i Teor. Fiz. 48, 1407 (1965).
5 B. L. Al’tshuler and B. I. Shklovskii, Sov. Phys. JETP 64, 127 (1986).
6 K. Efetov, Advances in Physics 32, 53 (1983).
7 O. Bohigas, M. J. Giannoni, and C. Schmit, Phys. Rev. Lett. 52, 1 (1984).
8 A. Hönig and D. Wintgen, Phys. Rev. A 39, 5642 (1989).
9 T. Zimmermann, H. Köppel, L. S. Cederbaum, G. Persch, and W. Demtröder, Phys. Rev. Lett.
61, 3 (1988).
10 S. Deus, P. M. Koch, and L. Sirko, Phys. Rev. E 52, 1146 (1995).
11 H.-J. Sommers and S. Iida, Phys. Rev. E 49, R2513 (1994).
12 I. L. Aleiner, P. W. Brouwer, and L. I. Glazman, Phys. Rep. 358, 309 (2002).
13 A. Altland, Y. Gefen, and G. Montambaux, Phys. Rev. Lett. 76, 1130 (1996).
14 K. Efetov, Supersymmetry in Disorder and Chaos (Cambridge University Press, 1999).
15 A. Abrikosov, Methods of Quantum Field Theory in Statistical Physics (Dover Publications,
1975).
16 F. R. Waugh, M. J. Berry, D. J. Mar, R. M. Westervelt, K. L. Campman, and A. C. Gossard,
Phys. Rev. Lett. 75, 705 (1995).
17 R. L. Weaver and O. I. Lobkis, Journal of Sound and Vibration 231, 1111 (2000).
18 A. Tschersich and K. B. Efetov, Phys. Rev. E 62, 2042 (2000).
19 V. I. Fal’ko and K. Efetov, Phys. Rev. B 50, 11267 (1994).
20 J. B. French, V. K. B. Kota, A. Pandey, and S. Tomsovic, Annals of Physics 181, 177 (1988).
21 S. A. van Langen, P. W. Brouwer, and C. W. J. Beenakker, Phys. Rev. E 55, R1 (1997).
22 A. Pandey and M. L. Mehta, Communications in Mathematical Physics 87, 449 (1983).
23 S. Adam, P. W. Brouwer, J. P. Sethna, and X. Waintal, Phys. Rev. B 66, 165310 (2002).
24 S. Adam, P. W. Brouwer, and P. Sharma, Phys. Rev. B 68, 241311 (2003).
25 Y. Alhassid and T. Rupp, cond-mat/0312691 (2003).
26 G. Murthy, Physical Review B (Condensed Matter and Materials Physics) 70, 153304 (pages 4)
(2004).
27 A. V. Andreev and A. Kamenev, Phys. Rev. Lett. 81, 3199 (1998).
28 P. W. Brouwer, Y. Oreg, and B. I. Halperin, Phys. Rev. B 60, R13977 (1999).
29 H. U. Baranger, D. Ullmo, and L. I. Glazman, Phys. Rev. B 61, R2425 (2000).
30 I. L. Kurland, I. L. Aleiner, and B. L. Altshuler, Phys. Rev. B 62, 14886 (2000).
31 G. Murthy and H. Mathur, Phys. Rev. Lett. 89, 126804 (2002).
32 G. Murthy and R. Shankar, Phys. Rev. Lett. 90, 066801 (2003).
33 G. Murthy, R. Shankar, D. Herman, and H. Mathur, Physical Review B (Condensed Matter
and Materials Physics) 69, 075321 (2004).
34 R. Shankar, Rev. Mod. Phys. 66, 129 (1994).
35 R. Shankar, Physica A: Statistical and Theoretical Physics 177, 530 (1991).
36 S. Chakravarty, B. I. Halperin, and D. R. Nelson, Phys. Rev. B 39, 2344 (1989).
37 S. Chakravarty, B. I. Halperin, and D. R. Nelson, Phys. Rev. Lett. 60, 1057 (1988).
38 S. Sachdev, Quantum Phase Transitions (Cambridge University Press, 2001).
39 A. Kamenev and Y. Gefen, Phys. Rev. B 54, 5428 (1996).
40 K. B. Efetov and A. Tschersich, Phys. Rev. B 67, 174205 (2003).
41 I. S. Beloborodov, K. B. Efetov, A. V. Lopatin, and V. M. Vinokur, arXiv:cond-mat/0603522
(2006).
42 L. G. Aslamazov and A. I. Larkin, Sov. Phys. Solid State 10, 875 (1968).
43 A. Altland and M. R. Zirnbauer, Phys. Rev. B 55, 1142 (1997).
44 A. Altland, B. D. Simons, and M. R. Zirnbauer, Physics Reports 359, 283 (2002).
45 M. Schechter, Y. Oreg, Y. Imry, and Y. Levinson, Phys. Rev. Lett. 90, 026805 (2003).
46 V. Ambegaokar and U. Eckern, Phys. Rev. Lett. 65, 381 (1990).
47 V. Ambegaokar and U. Eckern, Europhysics Letters 13, 733 (1990).
48 Y. Alhassid, L. Fang, and S. Schmidt, arXiv:cond-mat/0702304 (2006).
49 B. L. Altshuler, Y. Gefen, and Y. Imry, Phys. Rev. Lett. 66, 88 (1991).
50 A. D. Mirlin, Physics Reports 326, 259 (2000), URL http://www.sciencedirect.com/
science/article/B6TVP-3YS34MM-2/2/6c0cda8b40b326b5efc838cdd7239e16.
51 I. L. Aleiner and V. I. Fal’ko, Phys. Rev. Lett. 87, 256801 (2001).
52 I. L. Aleiner and V. I. Fal’ko, Phys. Rev. Lett. 89, 079902 (2002).
53 G. Dresselhaus, Phys. Rev. 100, 580 (1955).
54 Y. Bychkov and E. Rashba, JETP Lett. 39, 78 (1984).
55 H.-J. Stöckmann, Quantum Chaos: An Introduction (Cambridge University Press, 1999).
|
0704.0920 | Two-proton radioactivity and three-body decay. III. Integral formulae
for decay widths in a simplified semianalytical approach | Two-proton radioactivity and three-body decay. III. Integral formulae for decay
widths in a simplified semianalytical approach.
L. V. Grigorenko1,2, 3 and M. V. Zhukov4
Flerov Laboratory of Nuclear Reactions, JINR, RU-141980 Dubna, Russia
Gesellschaft für Schwerionenforschung mbH, Planckstrasse 1, D-64291, Darmstadt, Germany
RRC “The Kurchatov Institute”, Kurchatov sq. 1, 123182 Moscow, Russia
Fundamental Physics, Chalmers University of Technology, S-41296 Göteborg, Sweden
Three-body decays of resonant states are studied using integral formulae for decay widths. A
theoretical approach with a simplified Hamiltonian allows a semianalytical treatment of the problem.
The model is applied to decays of the first excited 3/2− state of 17Ne and the 3/2− ground state of 45Fe.
The convergence of three-body hyperspherical model calculations to the exact result for widths and
energy distributions is studied. The theoretical results for 17Ne and 45Fe decays are updated and
uncertainties of the derived values are discussed in detail. Correlations for the decay of 17Ne 3/2−
state are also studied.
PACS numbers: 21.60.Gx – Cluster models, 21.45.+v – Few-body systems, 23.50.+z – Decay by proton
emission, 21.10.Tg – Lifetimes
I. INTRODUCTION
The idea of the “true” two-proton radioactivity was
proposed about 50 years ago in a classical paper of
Goldansky [1]. The word “true” denotes here that we
are dealing not with a relatively simple emission of two
protons, which becomes possible in every nucleus above
two-proton decay threshold, but with a specific situa-
tion where one-proton emission is energetically (due to
the proton separation energy in the daughter system) or
dynamically (due to various reasons) prohibited. Only
simultaneous emission of two protons is possible in that
case (see Fig. 1, more details on the modes of the three-
body decays can be found in Ref. [2]). The dynamics of
such decays can not be reduced to a sequence of two-body
decays and from theoretical point of view we have to deal
with a three-body Coulomb problem in the continuum,
which is known to be very complicated.
Progress in this field was quite slow. Only recently a
consistent quantum mechanical theory of the process was
developed [2, 3, 4], which makes it possible to study the two-proton
(three-body) decay phenomenon in a three-body cluster
model. It has been applied to a range of a light nuclear
systems (12O, 16Ne [5], 6Be, 8Li∗, 9Be∗ [6], 17Ne∗, 19Mg
[7]). Systematic exploratory studies of heavier prospec-
tive 2p emitters 30Ar, 34Ca, 45Fe, 48Ni, 54Zn, 58Ge, 62Se,
and 66Kr [4, 8]) have been performed providing predic-
tions of lifetime ranges and possible correlations among
fragments.
Experimental study of two-proton radioactivity is
presently an actively developing field. Since the first ex-
perimental identification of 2p radioactivity in 45Fe [9, 10]
it was also found in 54Zn [11]. Some fingerprints of the
48Ni 2p decay were observed and the 45Fe lifetime and de-
cay energy were measured with improved accuracy [12].
There was an intriguing discovery of the extreme en-
hancement of the 2p decay mode for the high-spin 21+
isomer of 94Ag, interpreted so far only in terms of the
hyperdeformation of this state [13].

FIG. 1: Energy conditions for different modes of the two-
nucleon emission (three-body decay): true three-body decay
(a), sequential decay (b).

New experiments,
aimed at more detailed 2p decay studies (e.g. observa-
tion of correlations), are under way at GSI (19Mg), MSU
(45Fe), GANIL (45Fe), and Jyväskylä (94Ag).
Several other theoretical approaches have been applied to
the problem in recent years. We should mention
the “diproton” model [14, 15], the “R-matrix” approach
[16, 17, 18, 19], the continuum shell model [20], and the adiabatic
hyperspherical approach of [21]. Some issues of compatibility
between different approaches will be addressed
in this work.
Another, possibly very important, field of applica-
tion of the two-proton decay studies was shown in Refs.
[22, 23]. It was demonstrated in [22] that the importance
of direct resonant two-proton radiative capture processes
was underestimated in earlier treatment of the rp-process
waiting points [24]. The scale of modification of the astro-
physical 2p capture rates can be as large as several orders
of magnitude in certain temperature ranges. In paper [23]
it has been found that nonresonant E1 contributions to
three-body (two-proton) capture rates can also be much
larger than was expected before. The updated 2p as-
trophysical capture rate for the 15O(2p,γ)17Ne reaction
appears to be competing with the standard 15O(α,γ)19Ne
breakout reaction for the hot CNO cycle. The improve-
ments of the 2p capture rates obtained in [22, 23] are
connected to consistent quantum mechanical treatment
of the three-body Coulomb continuum in contrast to the
essentially quasiclassical approach typically used in as-
trophysical calculations of three-body capture reactions
(e.g. [24, 25]).
The growing quality of the experimental studies of the
2p decays and the high precision required for certain as-
trophysical calculations inspired us to revisit the issues
connected with different uncertainties and technical diffi-
culties of our studies. In this work we do the following.
(i) Extend the two-body formalism of the integral for-
mulae for width to the three-body case. We perform the
relevant derivations for the two-body case to make the
relevant approximations and assumptions explicit. (ii)
Formulate a simplified three-body model which has many
dynamical features similar to the realistic case, but allows
the exact semianalytical treatment and thus makes pos-
sible a precise calibration of three-body calculations. It
is also possible to study in great detail several impor-
tant dependencies of three-body widths in the frame of
this model. (iii) Perform practical studies of some sys-
tems of interest and demonstrate a connection between
the simplified semianalytical formalism and the realistic
three-body calculations.
The unit system h̄ = c = 1 is used in the article.
II. INTEGRAL FORMULA FOR WIDTH
Integral formalisms of width calculations for narrow
two-body states have been known for a long time, e.g. [26, 27].
The prime objective of those studies was α-decay widths.
An interesting overview of this field can be found in the
book [28]. This approach, in our opinion, did not pro-
duce novel results as the inherent uncertainties of the
method are essentially the same as those of the R-matrix
phenomenology, which is technically much simpler (see
e.g. a discussion in [29]). An important nontrivial appli-
cation of the integral formalism was calculation of widths
for proton emission off deformed states [30, 31]. There
were attempts to extend the integral formalism to the
three-body decays, using a formal generalization for the
hyperspherical space [2, 32]. These were shown to be
difficult with respect to technical realisation and to be
inferior to other methods developed in [2, 3].
Here we develop an integral formalism for the three-
body (two-proton) decay width in a different way. How-
ever, first we review the standard formalism in order to
define more clearly the approximations used.
A. Width definition, complex energy WF
For decay studies we consider the wave function (WF)
with complex pole energy
Ẽr = k̃²r/(2M) = Er − iΓ/2 ,   k̃r ≈ kr − iΓ/(2vr) ,

where v = √(2E/M). The pole solution for the Hamiltonian

(H − Ẽr)Ψ(+)lm (r) = (T + V − Ẽr)Ψ(+)lm (r) = 0
provides the WF with outgoing asymptotic
Ψ(+)lm (r) = (1/r)ψ(+)l (kr)Ylm(r̂) . (1)
For a single-channel two-body problem the pole solution is
formed only for one selected value of the angular momentum
l. In the asymptotic region

ψ(+)l (k̃rr) ∝ H(+)l (k̃rr) = Gl(k̃rr) + iFl(k̃rr) . (2)
The above asymptotic grows exponentially,

H(+)l (k̃rr) ∼ exp[+ik̃rr] ≈ exp[+ikrr] exp[+Γr/(2vr)] ,

as a function of the radius at the pole energy. This unphysical growth is connected to the use of
the time-independent formalism and can be safely neglected for typical radioactivity time scales,
as it has a noticeable effect only at very large distances.
Applying Green’s procedure to complex energy WF
Ψ(+)†[(H − Ẽr)Ψ(+)] − [(H − Ẽr)Ψ(+)]†Ψ(+) = 0
we get for the partial components at pole energy Ẽr
After radial integration from 0 to R (here and below R
denotes the radius sufficiently large that the nuclear in-
teraction disappears) we obtain
which corresponds to a definition of the width as a decay
probability (reciprocal of the lifetime):
N = N0 exp[−t/τ ] = N0 exp[−Γt] .
The width Γ is then equal to the outgoing flux jl through
the sphere of sufficiently large radius R, divided by num-
ber of particles Nl inside the sphere.
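In practical units (the article itself uses h̄ = c = 1) the width and the mean lifetime are related by Γ = h̄/τ. A minimal conversion helper, using the standard value h̄ ≈ 6.582 × 10−22 MeV·s, is:

```python
HBAR_MEV_S = 6.582119569e-22   # hbar in MeV * s

def width_from_lifetime(tau_seconds: float) -> float:
    """Decay width Gamma (MeV) from the mean lifetime tau (s): Gamma = hbar / tau."""
    return HBAR_MEV_S / tau_seconds

def lifetime_from_width(gamma_mev: float) -> float:
    """Mean lifetime tau (s) from the decay width Gamma (MeV): tau = hbar / Gamma."""
    return HBAR_MEV_S / gamma_mev

# Example: a width of 1e-18 MeV corresponds to a lifetime of about 0.66 ms.
print(lifetime_from_width(1e-18))
```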
Using Eq. (2) the flux in the asymptotic region could
be rewritten for k̃r → kr in terms of a Wronskian
= (kr/M)W (Fl(krR), Gl(krR)) = vr , (4)
where the Wronskian for real energy functions Fl, Gl is

W (Fl, Gl) = GlF′l − G′lFl ≡ 1 .
The effect of the complex energy is easy to estimate (ac-
tually without loss of generality) in the small energy ap-
proximation

Fl(kr) ∼ Cl (kr)^(l+1) ,   Gl(kr) ∼ (kr)^(−l)/[(2l + 1)Cl] , (5)
where Cl is a Coulomb coefficient (defined e.g. in Ref.
[33]). The flux is then
l (k̃
l (k̃rr) − k̃
l (k̃
l (k̃rr)
2l(l+ 1)
+ l × o[Γ3]
So, the equality (4) is always valid for l = 0 and for l ≠ 0
we get
l(l + 1)
B. Two-body case, real energy WF
Now we need a WF as real energy E = k2/2M solution
of Schrödinger equation
(H − E)Ψk(r) = (T + V
nuc + V coul − E)Ψk(r) = 0 ,
Ψk(r) = 4π
il(kr)−1ψl(kr)
Y ∗lm(k̂)Ylm(r̂) ,
in S-matrix representation, which means that for r > R
ψl(kr) =
[(Gl(kr) − iFl(kr)) − Sl(Gl(kr) + iFl(kr))] .
At resonance energy Er
Sl(Er) = e
2iδl(Er) = e2iπ/2 = −1
and in asymptotic region, defined by the maximal size of
nuclear interaction R,
ψl(krr)
= i Gl(krr) .
At resonance energy we can define a “quasibound” WF
ψ̃l as matching the irregular solution Gl and normalized
to unity for the integration in the internal region limited
by radius R:
ψ̃l(krr) =
(−i)ψl(krr)
|ψl(krx)|
ψl(krr)
. (6)
Now we introduce an auxiliary Hamiltonian H̄ with
different short range nuclear interaction V̄ nuc,
(H̄ − E)Φk(r) = (T + V̄
nuc + V coul − E)Φk(r) = 0 ,
and also construct other WF in S-matrix representation
Φk(r) = 4π
il(kr)−1ϕl(kr)
Y ∗lm(k̂)Ylm(r̂) ,
ϕl(kr) =
(Gl(kr)− iFl(kr)) − S̄l(Gl(kr) + iFl(kr))
for r > R. Or in equivalent form:
ϕl(kr) = exp(iδ̄l)
Fl(kr) cos(δ̄l) +Gl(kr) sin(δ̄l)
. (7)
The Hamiltonian H̄ should provide the WF Φk(r) which
at energy Er is sufficiently far from being a resonance
WF and for this WF δ̄l(Er) ∼ 0.
For real energy WFs Ψk(r) and Φk(r) we can write:
Φk(r)
† [(H − E)Ψk(r)] −
(H̄ − E)Φk(r)
Ψk(r) = 0 ,
ϕ∗l (V − V̄ )ψl =
For WFs taken at resonance energy Er this expression
provides
ϕ∗l (V − V̄ )ψldr = 2MiNl
ϕ∗l (V − V̄ )ψ̃ldr
= exp(−iδ̄l) cos(δ̄l) kr W (Fl(krR), Gl(krR)) ,(9)
1/2 =
−i exp(−iδ̄l) cos(δ̄l) kr
ϕ∗l (V − V̄ )ψ̃ldr
From Eqs. (3), (4), (6) and the approximation ψ
l ≈ ψl
it follows that
vr cos2(δ̄l)
ϕ∗l (V − V̄ )ψ̃ldr
. (10)
So, the idea of the integral method is to define the in-
ternal normalization of the WF with resonant boundary
conditions (this is equivalent to the determination of the out-
going flux for the normalized “quasibound” WF) with the help
of the eigenfunction of the auxiliary Hamiltonian, which
has the same long-range behaviour and differs only in the
compact region.
III. ALTERNATIVE DERIVATION
Let us reformulate the derivation of Eq. (10) in a more
general way, so that the detailed knowledge of the WF
structure for ψl and ψ
l is not required. It would
allow a straightforward extension of the formalism to
the three-body case. We start from Schrödinger equa-
tion in continuum with solution Ψ(+) at the pole energy
Ẽr = Er + iΓ/2:
H − Ẽr
Ψ(+) =
T + V − Ẽr
Ψ(+) = 0 . (11)
Then we rewrite it identically via the auxiliary Hamilto-
nian H̄ = T + V̄
H + V̄ − V − Ẽr
Ψ(+) =
V̄ − V
H̄ − Er
Ψ(+) =
V̄ − V + iΓ/2
Ψ(+). (12)
Thus we can use the real-energy Green’s function ḠEr of
auxiliary Hamiltonian H̄ to “regenerate” the WF with
outgoing asymptotic
Ψ̄(+) = Ḡ
V̄ − V + iΓ/2
Ψ(+) . (13)
At this point in Eq. (13) Ψ̄(+) ≡ Ψ(+) and the bar in the
notation for “corrected” WF Ψ̄(+) is introduced for later
use to distinguish it from the “initial” WF Ψ(+) [the one
before application of Eq. (13)]. Further assumptions
should be considered separately in the two-body and three-body
cases.
A. Two-body case
To define the width Γ by Eq. (3) we need to know
the complex-energy solution Ψ(+) at pole energy. For
narrow states Γ ≪ Er this solution can be obtained in a
simplified way using the following approximations.
(i) For narrow states we can always choose the auxil-
iary Hamiltonian in such a way that Γ ≪ V̄ −V , and we
can assume Γ → 0 in the Eq. (13).
(ii) Instead of complex-energy solution Ψ(+) in the
right-hand side of (13) we can use the normalized real-
energy quasibound solution Ψ̃ defined for one real reso-
nant value of energy Er = k
dr r2
Ψ̃lm(r)
≡ 1 .
So, the Eq. (13) is used in the form
lm = Ḡ
V̄ − V
Ψ̃lm . (14)
The solution Ψ̄(+) is matched to the function

h(+)l (kr) = Gl(kr) + iFl(kr) , (15)
while the solution Ψ̃ is matched to function Gl. For deep
subbarrier energies it is reasonable to expect that in the
internal region r ≤ R
Gl ≫ Fl →
Re[Ψ̃(+)]
Im[Ψ̃(+)]
In the single channel case it can be shown by direct cal-
culation that an approximate equality
Re[Ψ̃(+)]
Im[Ψ̃(+)]
holds in the internal region and thus for narrow states
Γ ≪ Er the approximation (13) → (14) should be very
reliable.
FIG. 2: Single particle coordinate systems: (a) “V” system
typical for a shell model. In the Jacobi “T” system (b),
“diproton” and core are explicitly in configurations with def-
inite angular momenta lx and ly . For a heavy core the Jacobi
“Y” system (c) is close to the single particle system (a).
To derive Eq. (10) the WF with outgoing asymptotic
is generated using the Green’s function of the auxiliary
Hamiltonian H̄ and the “transition potential” (V − V̄ ).
The standard two-body Green’s function is
k2/(2m)
(r, r′) =
ϕl(kr)h
l (kr
′), r ≤ r′
l (kr)ϕl(kr
′), r > r′
Ylm(r̂)Y
lm(r̂
′) , (16)
where the radial WFs h
l and ϕl of the auxiliary Hamil-
tonian are defined in (15) and (7).
lm (r) =
dr′ Ḡ
k2/(2m)
(r, r′)
V̄ − V
Ψ̃l′m′(r
For the asymptotic region r > R
lm (r) =
l (krr)Ylm(r̂)
dr′ ϕl(krr
V̄ − V
ψ̃l(kr, r
The outgoing flux is then calculated [see Eq. (4)]
2l+ 1
lm (r)∇Ψ̄
Since the function Ψ̃ is normalized by construction, then
Γ ≡ jl =
dr ϕl(krr)
V̄ − V
ψ̃l(kr, r)
. (17)
Note that this equation differs from Eq. (10) only by a
factor 1/cos²(δ̄l), which should be very close to unity for
sufficiently high barriers.
B. Simplified model for three-body case
In papers [2, 3] the widths for three-body decays were
defined by the following procedure. We solve numerically
the problem
(H − E3r) Ψ̃ = 0
with some box boundary conditions (e.g. zero or qua-
sibound in diagonal channels at large distances) getting
the WF Ψ̃ normalized in the finite domain and the value
of the real resonant energy E3r. Thereupon we search for
the outgoing solution Ψ(+) of the equation
(H − E3r) Ψ
(+) = −iΓ/2 Ψ̃
with approximate boundary conditions of the three-body
Coulomb problem (see Ref. [2] for details) and arbitrary
Γ. The width is then defined as the flux through a
hypersphere of large radius divided by the normalization
within this radius:
dΩ5 Ψ
(+)∗ρ5/2 d
ρ5/2Ψ(+)
ρ=ρmax
∫ ρmax
∣Ψ(+)
The 3-body WF with outgoing asymptotic is
JM (ρ,Ω5) = ρ
Kγ (ρ)J
Kγ (Ω5) , (19)
where the definitions of the hyperspherical variables ρ,
Ω5 and hyperspherical harmonics J
Kγ can be found in
Ref. [4].
Here we formulate the simplified three-body model in
the way which, on one hand, keeps the important dy-
namical features of the three-body decays (typical sizes
of the nuclear potentials, typical energies in the subsys-
tems, correct ratios of masses, etc.), and, on the other
hand, allows a semianalytical treatment of the problem.
Two types of approximations are made here.
The three-body Coulomb interaction is
V coul = Z1Z2α/X + Z1Z3α/|Y + A2X/(A1+A2)| + Z2Z3α/|Y − A1X/(A1+A2)| , (20)
where α is the fine structure constant. By convention, see
e.g. Fig. 2, in the “T” Jacobi system the core is particle
number 3 and in “Y” system it is particle number 2. We
assume that the above potential can be approximated by
Coulomb terms which depend on Jacobi variables X and
Y only:
V coulx(X) = Zxα/X ,   V couly(Y) = Zyα/Y
(in reality for the small X and Y values the Coulomb
formfactors of the homogeneously charged sphere with
radius rsph are always used). The effective charges Zx
and Zy could be considered in two ways.
1. We can neglect one of the Coulomb interactions.
This approximation is consistent with physical situ-
ation of heavy core and treatment of two final state
interactions. Such a situation presumes that Jacobi
“Y” system is preferable and there is a symmetry
in the treatment of the X and Y coordinates, which
are close to shell-model single particle coordinates.
Zx = Z1Zcore , Zy = Z2Zcore . (21)
Further we refer this approximation as “no p-
p Coulomb” case, as typically the proton-proton
Coulomb interaction is neglected compared to
Coulomb interaction of a proton with heavy core.
2. We can also consider two particles on the X coor-
dinate as one single particle. The Coulomb inter-
action in p-p channel is thus somehow taken into
account effectively via a modification of the Zy
charge:
Zx = Z1Zcore , Zy = Z2(Zcore + Z1) . (22)
Below we refer to this situation as the “effective p-p
Coulomb” case.
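To make the two charge prescriptions concrete, here is a minimal numerical sketch (not code from the paper); the core charges used for illustration are the standard values Z = 8 for the 15O core and Z = 24 for the 43Cr core.

# Effective charges entering V_x^coul and V_y^coul, Eqs. (21) and (22).
# Z1, Z2 are the valence-proton charges, Zcore is the core charge.

def charges_no_pp(Z1, Z2, Zcore):
    """'No p-p Coulomb' prescription, Eq. (21)."""
    return Z1 * Zcore, Z2 * Zcore

def charges_effective_pp(Z1, Z2, Zcore):
    """'Effective p-p Coulomb' prescription, Eq. (22)."""
    return Z1 * Zcore, Z2 * (Zcore + Z1)

for label, Zcore in (("17Ne (15O core)", 8), ("45Fe (43Cr core)", 24)):
    print(label)
    print("  no p-p Coulomb : Zx, Zy =", charges_no_pp(1, 1, Zcore))
    print("  effective p-p  : Zx, Zy =", charges_effective_pp(1, 1, Zcore))

The only difference between the two cases is the Zy charge, which absorbs the neglected p-p repulsion in the second prescription.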
For nuclear interactions we can assume that
1. There is only one nuclear pairwise interaction and
H = T + V3(ρ) + V coulx(X) + V nucx(X) + V couly(Y) ,
∆V(X,Y) = V nucy(Y) − V3(ρ) . (23)
This approximation is good for methodological pur-
poses as it allows to focus on one degree of freedom
and isolate it from the others. From physical point
of view it could be reasonable if only one FSI is
strong [42], or we have reasons to think that de-
cay mechanism associated with this particular FSI
is dominating. Potential V nucy (Y ) in the auxiliary
Hamiltonian (27) is “unphysical” in that case and
can be put zero [43]. We further refer this model
as “one final state interaction” (OFSI).
2. We can consider two final state interactions (TFSI).
Simple form of the Green’s function in that case can
be preserved only if the core mass is considered as
infinite (the X and Y coordinates in the Jacobi
“Y” system coincide with single-particle core-p co-
ordinates). In that case both pairwise interactions
V nucx (X) and V
y (Y ) are treated as “physical”,
that means that they are both present in the ini-
tial and in the auxiliary Hamiltonians. Thus only
three-body potential “survive” the V̄ − V subtrac-
tion:
H = T + V3(ρ) + V coulx(X) + V nucx(X)
      + V couly(Y) + V nucy(Y) ,
∆V(X,Y) = −V3(ρ) . (24)
The three-body potential is used in this work in
Woods-Saxon form
V3(ρ) = V3^0 [1 + exp((ρ − ρ0)/aρ)]^{−1} , (25)
with ρ0 = 5 fm for
17Ne, ρ0 = 6 fm for
45Fe [44], and
a small value of diffuseness parameter aρ = 0.4 fm. Use
of such three-body potential is an important difference
from our previous calculations, where it was utilized in
the form
V3(ρ) = V3^0 [1 + (ρ/ρ0)^3]^{−1} , (26)
which provides the long-range behaviour ∼ ρ−3. Such
an asymptotic in ρ variable is produced by short-range
pairwise nuclear interactions and thus the interpretation
of three-body potential (26) is phenomenological taking
into account those components of pairwise interactions
which were omitted for some reasons in calculations. In
this work the aim of the potential V3 is different. On
one hand we would like to keep the three-body energy
fixed while the properties (and number) of pairwise in-
teractions are varied. On the other hand we do not want
to change the properties of the Coulomb barriers beyond
the typical nuclear distance (this is achieved by the small
diffuseness of the potential). Thus this potential is phe-
nomenological taking into account interactions that act
only when both valence nucleons are close to the core
(both move in the mean field of the nucleus).
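To illustrate why the Woods-Saxon form (25) with small diffuseness leaves the barrier region beyond the nuclear distance untouched while the form (26) does not, the short sketch below evaluates both shapes for the 45Fe radius quoted above (ρ0 = 6 fm, aρ = 0.4 fm). The depth V3^0 = −20 MeV is an arbitrary placeholder, not a value taken from the paper.

import numpy as np

def v3_ws(rho, v0, rho0=6.0, a_rho=0.4):
    """Three-body potential of Eq. (25): Woods-Saxon in the hyperradius."""
    return v0 / (1.0 + np.exp((rho - rho0) / a_rho))

def v3_powerlaw(rho, v0, rho0=6.0):
    """Three-body potential of Eq. (26): long-range ~rho**-3 tail."""
    return v0 / (1.0 + (rho / rho0) ** 3)

v0 = -20.0                      # MeV, placeholder depth
for rho in (5.0, 10.0, 20.0, 50.0):
    print(f"rho = {rho:5.1f} fm:  WS = {v3_ws(rho, v0):10.3e} MeV,"
          f"  rho^-3 form = {v3_powerlaw(rho, v0):10.3e} MeV")

Already at ρ ~ 20 fm the Woods-Saxon form is exponentially negligible, while the ρ^{-3} form still contributes a sizeable fraction of an MeV under the Coulomb barrier.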
The auxiliary Hamiltonian is taken in the form that
allows a separate treatment of X and Y variables
H̄ = T + V coulx(X) + V nucx(X) + V couly(Y) + V nucy(Y) . (27)
In this formulation of the model the Coulomb potentials
are fixed as shown above. The nuclear potential V nucx (X)
[V nucy (Y ) if present] defines the position of the state in the
X [Y ] subsystem. The three-body potential V3(ρ) defines
the position of the three-body state, which is found using
the three-body HH approach of [2, 4]. After that a new
WF with outgoing asymptotic is generated by means of
the three-body Green’s function which can be written for
(27) in a factorized form (without paying attention to the
angular coupling)
Ḡ(+)E3r(XY, X′Y′) = Ḡ(+)Ex(X,X′) Ḡ(+)Ey(Y,Y′) ,

where E3r = Ex + Ey (Ex, Ey are energies of the subsys-
tems). The two-body Green’s functions in the expres-
sions above are defined as in (16) via eigenfunctions of
the subhamiltonians

H̄x − Ex = Tx + V coulx(X) + V nucx(X) − Ex ,
H̄y − Ey = Ty + V couly(Y) + V nucy(Y) − Ey .
In the OFSI case the nuclear potential in the “Y” sub-
system should be set to V nucy(Y) ≡ 0. The “corrected”
continuum WF Ψ̄(+) is

Ψ̄(+)(X,Y) = ∫ dX′dY′ Ḡ(+)Ex(X,X′) Ḡ(+)Ey(Y,Y′) ∆V(X′, Y′) Ψ(+)(X′Y′) .

The “initial” solution Ψ(+) of Eq. (19) rewritten in the
coordinates X and Y is

Ψ(+)JM(X,Y) = ΣLlxlyS ϕLlxlyS(X,Y) [[ly ⊗ lx]L ⊗ S]JM .
FIG. 3: Convergence of the 17Ne width in a simplified model
in the “Y” Jacobi system. One final state interaction model
with experimental position E2r = 0.535 MeV of the s-wave
two-body resonance. Diamonds show the results of dynamic
HH calculations. Solid curves correspond to calculations with
effective FR potentials.
The asymptotic form of the “corrected” continuum
WF Ψ̄(+)JM is

Ψ̄(+)JM(X,Y) = ∫ dε [A(ε)/√(vx(ε)vy(ε))] e^{i kx(ε)X + i ky(ε)Y} [[ly ⊗ lx]L ⊗ S]JM ,

Ex = εE3r ;  Ey = (1 − ε)E3r ;  vi(ε) = √(2Ei/Mi) ,

A(ε) = ∫ dX′ dY′ ϕlx(kx(ε)X′) ϕly(ky(ε)Y′) ∆V(X′, Y′) ϕLlxlyS(X′, Y′) . (29)
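The kinematic relations above translate directly into a few lines of code; the sketch below uses ħ = c = 1 units with masses in MeV, and the reduced masses chosen for the 17Ne illustration are rough estimates, not numbers taken from the paper.

import numpy as np

def jacobi_kinematics(eps, E3r, Mx, My):
    """Energies, velocities and momenta in the X and Y subsystems for a
    given energy-sharing parameter eps = Ex/E3r (hbar = c = 1,
    energies and masses in MeV)."""
    Ex, Ey = eps * E3r, (1.0 - eps) * E3r
    vx, vy = np.sqrt(2.0 * Ex / Mx), np.sqrt(2.0 * Ey / My)
    kx, ky = np.sqrt(2.0 * Mx * Ex), np.sqrt(2.0 * My * Ey)
    return Ex, Ey, vx, vy, kx, ky

# 17Ne 3/2- decay energy; reduced masses are approximate illustrative values.
E3r = 0.344                          # MeV
Mx = 938.9 * 15.0 / 16.0             # core-p reduced mass (rough estimate)
My = 938.9 * 16.0 / 17.0             # (core+p)-p reduced mass (rough estimate)
for eps in (0.25, 0.5, 0.75):
    Ex, Ey, vx, vy, kx, ky = jacobi_kinematics(eps, E3r, Mx, My)
    print(f"eps={eps:4.2f}: Ex={Ex:.3f} MeV, Ey={Ey:.3f} MeV,"
          f" kx={kx:.2f} MeV, ky={ky:.2f} MeV")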
The “corrected” outgoing flux jc can be calculated on the
sphere of the large radius for any of two Jacobi variables.
E.g. for X coordinate we have [45]
jc(E3r) = Im[ Ψ̄(+)* (d/dX) Ψ̄(+) ]
  = E3r² ∫ dε dε′ [A*(ε)/√(vx(ε)vy(ε))] kx(ε) [A(ε′)/√(v′x v′y)]
         × 2π δ(ky(ε′) − ky(ε)) . (30)
Values v′i above denote vi(ε
′). The flux is obtained as
jc(E3r) = ∫ dε |A(ε)|² / (vx(ε) vy(ε)) . (31)
In principle, as we have seen above, the widths ob-
tained with both fluxes, Eqs. (18) and (31), should be
equal:

Γ ≡ Γc = jc(E3r) . (32)
FIG. 4: Convergence of widths in the OFSI model for different
positions E2r of the two-body resonance in the core-p channel
(Jacobi “Y” system). For Kmax > 24 the value of Kmax de-
notes the size of the basis for Feshbach reduction to Kmax = 24.
This is the idea of calibration procedure for the simplified
three-body model. The convergence of the HH method
(for WF Ψ
JM ) is expected to be fast in the internal re-
gion and much slower in the distant subbarrier region.
This should be true for the width Γ calculated in the HH
method. However, the procedure for calculation of the
“corrected” width Γc is exact under the barrier and it is
sensitive only to HH convergence in the internal region,
which is achieved easily. Below we demonstrate this in
particular calculations.
IV. DECAYS OF THE 17NE 3/2− AND 45FE 3/2−
STATES IN A SIMPLIFIED MODEL
In this Section when we refer to the widths of 17Ne and 45Fe
we always mean the 17Ne 3/2− state (E3r = 0.344 MeV)
and the 45Fe 3/2− ground state (E3r = 1.154 MeV) cal-
culated in very simple models. We expect that impor-
tant regularities found for these models should be true
also in realistic calculations. However, particular values
obtained in realistic models may differ significantly, and
this issue is considered specially in the Section V.
To keep only the most significant features of the sys-
tems we assume pure sd structure (lx = 0, ly = 2) for
17Ne and pure p2 structure (lx = 1, ly = 1) for
45Fe in
”Y” Jacobi system (see Fig. 2). Spin dependencies of
the interactions are neglected. The Gaussian formfactor
V nuci(r) = Vi0 exp[−(r/r0)²] ,

where i = {x, y}, is taken for 17Ne (see Table I), and a
standard Woods-Saxon formfactor is used for 45Fe (see
Table II),

V nuci(r) = Vi0 [1 + exp((r − r0)/a)]^{−1} . (33)
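For concreteness, both formfactors can be evaluated directly; the sketch below uses the s-wave 15O+p parameters of Table I (Vx0 = −13.55 MeV, r0 = 3.53 fm) for the Gaussian case and the E2r = 1.48 MeV 43Cr+p parameters of Table II (Vx0 = −23.58 MeV, r0 = 4.236 fm, a = 0.65 fm) for the Woods-Saxon case. This is only a shape check, not a reproduction of the paper's calculations.

import numpy as np

def v_gauss(r, v0, r0):
    """Gaussian formfactor used for 17Ne: V(r) = V0 * exp(-(r/r0)**2)."""
    return v0 * np.exp(-(r / r0) ** 2)

def v_woods_saxon(r, v0, r0, a):
    """Woods-Saxon formfactor of Eq. (33)."""
    return v0 / (1.0 + np.exp((r - r0) / a))

r = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
print("r (fm)         :", r)
print("15O+p, s-wave  :", np.round(v_gauss(r, -13.55, 3.53), 3))
print("43Cr+p, p-wave :", np.round(v_woods_saxon(r, -23.58, 4.236, 0.65), 3))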
The simplistic structure models can be expected to
overestimate the widths. There should be a considerable
FIG. 5: Width of the 17Ne 3/2− state as a function of two-
body resonance position E2r. Dashed, dotted and solid lines
show cases of pure HH calculations with Kmax = 24, the
same but with Feshbach reduction from Kmax = 100, and the
corrected width Γc. Inset shows the same, but as a function
of the potential depth parameter Vx0. Gray area shows the
transition region from three-body to two-body decay regime.
The gray curve shows the simple analytical dependence of Eq.
(34).
weight of d2 component (lx = 2, ly = 2) in
17Ne and f2
component (lx = 3, ly = 3) in
45Fe. Also the spin-angular
coupling should lead to splitting of the single-particle
strength and corresponding reduction of the width es-
timates (e.g. we assume one s-wave state at 0.535 MeV in
the “X” subsystem of 17Ne while in reality there are two
s-wave states in 16F: 0− at 0.535 MeV and 1− at 0.728
MeV). Thus the results of the simplified model should
most likely be regarded as upper limits for widths.
A. One final state interaction — core-p channel
First we take into account only the 0.535 MeV s-wave
two-body resonance in the 16F subsystem (this is the
experimental energy of the first state in 16F). Conver-
gence of the 17Ne width in a simplified model for Jacobi
“Y” system is shown in Fig. 3. The convergence of the
corrected width Γc as a function of Kmax is very fast:
for Kmax > 8 the width is stable within ∼ 1%. For the max-
imal Kmax = 24 achieved in the fully dynamic calculation,
the three-body width Γ is calculated within 30% preci-
sion. Further increase of the effective basis size is possi-
ble within the adiabatic procedure based on the so-called
Feshbach reduction (FR).
Feshbach reduction is a procedure which eliminates
from the total WF Ψ = Ψp + Ψq an arbitrary subspace q
using the Green’s function of this subspace:

Hp = Tp + Vp + Vpq Gq Vqp .
In a certain adiabatic approximation we can assume that
the radial part of kinetic energy is small and constant un-
FIG. 6: Convergence of energy distribution for 17Ne in the
“Y” Jacobi system.
der the centrifugal barrier in channels whose centrifugal
barrier is much higher than any other interaction. In
this approximation the reduction procedure becomes
trivial, as it reduces to the construction of effective
three-body interactions V effKγ,K′γ′ by matrix operations
TABLE I: Parameters for 17Ne calculations. Potential pa-
rameters for the 15O+p channel in the s-wave (Vx0 in MeV,
r0 = 3.53 fm) and the 16F+p channel in the d-wave (Vy0 in
MeV). Radius of the charged sphere is rsph = 3.904 fm.
Widths Γi of the state in the subsystem and experimental
width values Γexp for states really existing at these energies
are given in keV. The corrected three-body width Γc is given
in units of 10−14 MeV. TFSI calculations with the d-wave
state at 1.2 MeV are made with the s-wave state at 0.728 MeV.

E2r    lx (ly)  Vx0 (Vy0)   Γx (Γy)   Γexp        Γc
0.258    0      −14.4       0.221                 144
0.275    0      −14.35      0.355                 16.6
0.292    0      −14.3       0.544                 7.75
0.360    0      −14.1       2.09                  2.34
0.535    0      −13.55      17.9      25(5) [34]  0.545
0.728    0      −12.89      72.0      70(5) [34]  0.211
1.0      0      −12.0       252                   0.093
2.0      0      −9.0        ∼ 1500                0.021
0.96     2      −87.06      3.5       6(3) [34]   4.73a
1.256    2      −85.98      12.2      < 15 [35]   2.0a
0.96     2      −66.46      3.6       6(3) [34]   1.37b
1.256    2      −65.4       13.7      < 15 [35]   0.584b

aThis is a TFSI calculation with “no p-p” Coulomb, r0 = 2.75 fm.
bThis is a TFSI calculation with “effective” Coulomb, r0 = 3.2 fm.
FIG. 7: Energy distributions for 17Ne in the “Y” Jacobi
system for different two-body resonance positions E2r. The
three-body decay energy is E3r = 0.344 MeV. The distri-
butions are normalized to have unity value on maximum of
three-body components. The values near the peaks show the
fraction of the total intensity concentrated within the peak.
Note the change of the scale at vertical axis.
G^{−1}Kγ,K′γ′ = (H − E)Kγ,K′γ′
  = VKγ,K′γ′ + [ Ef − E + (K + 3/2)(K + 5/2)/(2Mρ²) ] δKγ,K′γ′ ,

V effKγ,K′γ′ = VKγ,K′γ′ + ΣK̄γ̄,K̄′γ̄′ VKγ,K̄γ̄ GK̄γ̄,K̄′γ̄′ VK̄′γ̄′,K′γ′ .
Summation over indexes with bar is made for eliminated
channels. No strong sensitivity to the exact value of the
“Feshbach energy” Ef is found and we take it as Ef ≡ E
in our calculations. More detailed account of the pro-
cedure applied within HH method can be found in Ref.
[36].
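At the matrix level this adiabatic Feshbach reduction is the standard elimination of a subspace; the toy sketch below (a random symmetric matrix, not tied to the HH basis of the paper, and with the conventional sign of the resolvent, which may differ from the notation used above) checks that an eigenvalue of the full problem also solves the reduced p-space equation when the Feshbach energy is taken self-consistently, Ef = E.

import numpy as np

rng = np.random.default_rng(0)
n, npart = 8, 3                     # full dimension, size of the kept p-space
H = rng.normal(size=(n, n))
H = 0.5 * (H + H.T)                 # symmetric toy "Hamiltonian"

Hpp, Hpq = H[:npart, :npart], H[:npart, npart:]
Hqp, Hqq = H[npart:, :npart], H[npart:, npart:]

E = np.linalg.eigvalsh(H)[0]        # lowest eigenvalue of the full problem

# Effective p-space Hamiltonian with the eliminated q-space folded in,
# evaluated at the self-consistent Feshbach energy Ef = E.
Heff = Hpp + Hpq @ np.linalg.inv(E * np.eye(n - npart) - Hqq) @ Hqp

# E must then be an eigenvalue of Heff(E): det(Heff - E) should vanish.
print("residual det(Heff(E) - E):", np.linalg.det(Heff - E * np.eye(npart)))

In the adiabatic procedure of the text the exact q-space resolvent is replaced by the diagonal estimate with the centrifugal term, which is what makes the reduction cheap.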
It can be seen in Fig. 3 (solid line) that the Feshbach re-
duction procedure drastically improves the convergence.
However, the calculation converges to a width value,
which is somewhat smaller than the corrected width value
(that should be exact). The reason for this effect can
be understood if we make a reduction to a smaller “dy-
namic” basis size (Kmax = 12, gray line). The calculation
in this case also converges, but even to a smaller width
value. We can conclude that the FR procedure always
moves the result toward the real width value, but provides
a good result only for a sufficiently large dynamic sector
of the basis.
The next issue to be discussed is a convergence of the
width in calculations with different positions E2r of two-
body resonance in the core+p subsystem. It is demon-
strated for several energies E2r in Fig. 4. When the
resonance in the subsystem is absent (or located rela-
tively high) the convergence of the width value to the
exact result is very fast both in the pure three-body
and in the “corrected” calculation (in that case, how-
ever, much faster). Here even FR is not required as the
0 20 40 60 80 100 120 140
HH + FRHH
Gaussian potential in s-wave
OFSI TFSI
Potential with repulsive core in s-wave
OFSI TFSI
17Ne, "Y"-system
FIG. 8: Convergence of the 17Ne width in a simplified model.
Jacobi “Y” system. OFSI model with s-wave two-body reso-
nance at E2r = 0.535 MeV; Gaussian potential and potential
with repulsive core. TFSI model with d-wave two-body reso-
nance at E2r = 0.96 MeV.
convergent result is achieved in the HH calculations by
Kmax = 10 − 24. The closer two-body resonance ap-
proaches the decay window, the worse is convergence of
HH calculations. At energy E2r = 360 keV (which is al-
ready close to three-body decay window E3r = 344 keV)
even FR procedure provides a convergence to the width
value which is only about 65% of the exact value.
In Fig. 5 the calculations with different E2r values are
summarized. The width grows rapidly as the two-body
resonance moves closer to the decay window. The pen-
etrability enhancement provided by the two-body reso-
nance even before it moves into the three-body decay
window is very important. The difference between the widths
with no core-p FSI and with the FSI placing the s-wave
resonance at its experimental position E2r = 0.535 MeV is
more than two orders of magnitude. The convergence of
the HH calculations also deteriorates as E2r moves closer to
the decay window. However, the disagreement between
the HH width and the exact value stays within an order
of magnitude until the resonance reaches the range
E2r ∼ (0.7 − 0.85)E3r. Within this range a transition
from the three-body to the two-body regime happens (see also
the discussion in [8]), which can be seen as a drastic change
of the width dependence on E2r. This means that a se-
quential decay via the two-body resonance E2r becomes more
efficient than the three-body decay. In that case the hy-
perspherical expansion cannot treat the dynamics effi-
ciently any more and the disagreement with the exact result
becomes as large as orders of magnitude. The de-
cay dynamics in the transition region is also discussed in
detail below.
It can be seen in Fig. 5 that in three-body regime the
dependence of the three-body width follows well the an-
alytical expression
Γ ∼ (E3r/2 − E2r)^{−2} . (34)
The reasons of such a behaviour will be clarified in
FIG. 9: Convergence of the 17Ne width in a simplified model.
Jacobi “T” system. Final state interaction describes s-wave
p-p scattering.
the forthcoming paper [37]. The deviations from this de-
pendence can be found in the decay window (close to
“transition regime”) and at higher energies. This depen-
dence is quite universal; e.g. for 45Fe it is demonstrated
in Fig. 14, where it follows the calculation results even
with higher precision.
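The enhancement implied by Eq. (34) is easy to check against the corrected widths of Table I; the sketch below evaluates the ratio predicted by the (E3r/2 − E2r)^{−2} dependence for 17Ne, taking E2r = 2.0 MeV as the reference point. The comparison is only qualitative, since Eq. (34) is quoted as an approximate trend.

E3r = 0.344                               # MeV, 17Ne 3/2- decay energy

def width_scaling(E2r, E3r=E3r):
    """Approximate dependence of the three-body width, Eq. (34)."""
    return (E3r / 2.0 - E2r) ** -2

E_ref = 2.0                               # MeV, reference resonance position
for E2r in (1.0, 0.535, 0.360):
    ratio = width_scaling(E2r) / width_scaling(E_ref)
    print(f"E2r = {E2r:5.3f} MeV: predicted enhancement ~ {ratio:6.1f}")

For E2r = 0.535 MeV the predicted enhancement is about 25, close to the ratio of the corresponding Γc entries in Table I (0.545 versus 0.021 in units of 10−14 MeV).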
Another important issue is a convergence of energy dis-
tributions in the HH calculations, demonstrated in Fig.
6 for calculations with E2r = 535 keV. The distribution
is calculated in “Y” Jacobi subsystem, thus Ex is the en-
ergy between the core and one proton. The energy dis-
tribution convergence is fast: the distribution is stable at
Kmax = 10− 14 and does not change visibly with further
increase of the basis. There remains a visible disagree-
ment with the exact (“corrected”) results, which give a
narrower energy distribution. We think that this effect was
understood in our work [4]. The three-body calculations
are typically done for ρmax ∼ 500−2000 fm (ρmax ∼ 1000
fm everywhere in this work). It was demonstrated in Ref.
[4] by construction of classical trajectories that we should
expect a complete stabilization of the energy distribution
in core+p subsystem at ρmax ∼ 30000−50000 fm and the
effect on the width of the energy distribution should be
comparable to one observed in Fig. 6.
The evolution of the energy distribution in core+p sub-
system with variation of E2r is shown in Fig. 7. When
we decrease the energy E2r the distribution is very stable
until the two-body resonance enters the three-body de-
cay energy window. After that the peak at about ε ∼ 0.5
first drifts to higher energy and then for E2r ∼ 0.85E3r
the noticeable second narrow peak for sequential decay
is formed. At E2r ∼ 0.7E3r the sequential peak becomes
so high that the three-body component of the spectrum
practically disappears into the background.
The result concerning the transition region obtained
in this model is consistent with conclusion of the paper
[8] (where much simpler model was used for estimates).
The three-body decay is a dominating decay mode, not
FIG. 10: Energy distributions for 17Ne in “T” Jacobi system
(between two protons).
only when the sequential decay is energy prohibited as
E2r > E3r. Also the three-body approach is valid when
the sequential decay is formally allowed (because E2r <
E3r) but is not taking place in reality due to Coulomb
suppression at E2r >∼ 0.8E3r.
Geometric characters of potentials can play an impor-
tant role in the width convergence. To test this aspect
of the convergence we have also made the calculations
for potential with repulsive core. This class of potentials
was employed in studies of 17Ne and 19Mg in Ref. [7]. A
comparison of the convergence of HH calculations with s-
wave 15O+p potential from [7] and Gaussian potential is
given in Fig. 8. The width convergence in the case of the
“complicated” potential with a repulsive core is drasti-
cally worse than in the “easy” case of Gaussian potential.
For typical dynamic calculations with Kmax = 20 − 24 the
HH calculations provide only 20 − 25% of the width for
potential with a repulsive core. On the other hand the
calculations with both potentials provide practically the
same widths Γc [46] and FR provides practically the same
and very well converged result in both cases.
B. One final state interaction — p-p channel
Since two-proton decay is often interpreted as
“diproton” decay, we should also consider this case and
study how important this channel could be. For this cal-
culation we use a simple s-wave Gaussian p-p potential,
providing good low-energy p-p phase shifts,

V(r) = −31 exp[−(r/1.8)²] . (35)
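As a check of what "good low-energy p-p phase shifts" means in practice, the sketch below integrates the radial equation for the potential of Eq. (35) and extracts the nuclear-only s-wave phase shift. The Coulomb repulsion between the protons is omitted, and the step size and matching radii are arbitrary numerical choices, not taken from the paper.

import numpy as np

hbarc = 197.327            # MeV fm
mu = 938.272 / 2.0         # p-p reduced mass, MeV

def v_pp(r):
    """s-wave p-p potential of Eq. (35), in MeV (r in fm)."""
    return -31.0 * np.exp(-(r / 1.8) ** 2)

def swave_phase_shift(E_cm, r_max=25.0, h=0.005):
    """Nuclear-only s-wave phase shift (radians) by Numerov integration."""
    k = np.sqrt(2.0 * mu * E_cm) / hbarc
    r = np.arange(h, r_max + h, h)
    f = 2.0 * mu / hbarc**2 * (v_pp(r) - E_cm)       # u'' = f(r) u
    u = np.zeros_like(r)
    u[0], u[1] = h, 2.0 * h                          # u ~ r near the origin
    for i in range(1, len(r) - 1):
        u[i + 1] = (2.0 * u[i] * (1.0 + 5.0 * h**2 * f[i] / 12.0)
                    - u[i - 1] * (1.0 - h**2 * f[i - 1] / 12.0)) \
                   / (1.0 - h**2 * f[i + 1] / 12.0)
    i1, i2 = np.searchsorted(r, 20.0), len(r) - 1    # two matching radii
    R = u[i1] / u[i2]                                # ratio of asymptotic values
    num = np.sin(k * r[i1]) - R * np.sin(k * r[i2])
    den = R * np.cos(k * r[i2]) - np.cos(k * r[i1])
    return np.arctan2(num, den)

for E in (0.5, 1.0, 3.0):
    print(f"E_cm = {E:4.1f} MeV: delta_0 = {np.degrees(swave_phase_shift(E)):6.1f} deg")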
Calculations with this potential are shown in Fig. 9 (see
also Table V). First of all the penetrability enhancement
provided by p-p FSI is much less than the enhancement
provided by core-p FSI (the widths differs more than two
orders of the magnitude, see Fig. 3). This is the feature,
which has been already outlined in our works. The p-p in-
teraction may boost the penetrability strongly, but only
FIG. 11: Comparison of the OFSI calculations (solid lines) for
45Fe in the “T” system with the diproton model Eq. (36) (dashed
lines). Effective equivalent channel radius rch(dp) for “dipro-
ton emission” (a) as a function of radius ρ0 of the three-body
potential (25); the value ρ0/√2 should be comparable with
typical nuclear sizes. (b) As a function of the position of the
peak Ypeak in the three-body WF Ψ(+) in the Y coordinate. The
dashed lines are given to guide the eye.
in the situation, when protons occupy predominantly or-
bitals with high orbital momenta. In such a situation the
p-p interaction allows transitions to configurations with
smaller orbital momenta in the subbarrier region, which
provide a large increase of the penetrability. In our sim-
ple model for 17Ne 3/2− state, we have already assumed
the population of orbitals with minimal possible angular
momenta and thus no strong effect of the p-p interaction
is expected.
Also a very slow convergence of the decay width should
be noted in this case. For the core-p interaction, Kmax ∼
10 − 40 was sufficient to obtain a reasonable result. In
the case of the p-p interaction, Kmax ∼ 100 is required.
Energy distributions between two protons obtained in
this model are shown in Fig. 10. Important feature of
these distributions is a strong focusing of protons at small
p-p energies. This feature is connected, however, not
with attractive p-p FSI, but with dominating Coulomb
repulsion in the core-p channel. This is demonstrated by
the calculation with nuclear FSI turned off, which pro-
vides practically the same energy distributions. Similarly
to the case of the core-p FSI, very small Kmax > 10 is
sufficient to provide the converged energy distribution.
The converged HH distribution is very close to the exact
(”corrected”) one but it is, again, somewhat broader.
So far the diproton model has been treated by us as a
reliable upper limit for three-body width [8]. With some
technical improvements this model was used for the two-
proton widths calculations in Refs. [16, 17, 18, 19]. It
is important therefore to try to understand qualitatively
the reason of the small width values obtained in this form
of OFSI model, which evidently represents appropriately
formulated diproton model [47]. In Fig. 11 we compared
the results of the OFSI calculations for 45Fe in the “T”
FIG. 12: Convergence of the 17Ne width for the experimental
positions E2r = 0.535 MeV of the 0− two-body resonance in
the “X” subsystem and E2r = 0.96 MeV of the 2− two-body
resonance in the “Y” subsystem (TFSI model).
system with the diproton width estimated by the expression

Γdp = [2/(Mred rch(dp)²)] Pl=0(0.95E3r, rch(dp), 2Zcore) , (36)
where Mred is the reduced mass for the 43Cr-pp motion and
rch(dp) is the channel radius for diproton emission. The en-
ergy for the relative 43Cr-pp motion is taken as 0.95E3r, based
on the energy distribution in the p-p channel (see
Fig. 10 for example). In Fig. 11a we show the effective
equivalent channel radii for diproton emission obtained
by fulfilling condition Γdp ≡ Γc for OFSI model calcula-
tions with different radii ρ0 of the three-body potential
Eq. (25). It is easy to see that for realistic values of these
radii (ρ0 ∼ 6 fm for
45Fe) the equivalent diproton model
radii should be very small (∼ 1.5 fm). This happens pre-
sumably because the “diproton” is too large to be con-
sidered as emitted from nuclear surface of such small ρ0
radius. Technically it can be seen as the nonlinearity of
the rch(dp)-ρ0 dependence, with linear region achieved at
ρ0 ∼ 15−20 fm. Only at such unrealistically large ρ0 val-
ues the typical nuclear radius (when it becomes compa-
rable with the “size” of the diproton) can be reasonably
interpreted as the surface, off which the “diproton” is
emitted. It is interesting to note that in the nonlinearity
region for Fig. 11a there exists practically exact corre-
spondence between the Y coordinate of the WF peak in
the internal region and the channel radius for diproton
emission (Fig. 11b). This fact is reasonable to interpret
in such a way that the diproton is actually emitted not
from nuclear surface (as it is presumed by the existing
systematics of diproton calculations) but from the inte-
rior region, where the WF is mostly concentrated.
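To give a feeling for the numbers entering the diproton estimate, the sketch below evaluates the s-wave Coulomb penetrability Pl=0(0.95E3r, rch, 2Zcore) for a few channel radii using the Coulomb wave functions available in mpmath. The reduced mass is a rough estimate and the radii are only illustrative, so this demonstrates the strong rch sensitivity discussed above rather than reproducing the paper's Γdp values.

from mpmath import coulombf, coulombg, mp, sqrt

mp.dps = 30                      # extra precision helps for large eta
hbarc = 197.327                  # MeV fm
alpha = 1.0 / 137.036            # fine structure constant

def penetrability(l, E, r_ch, Z1Z2, mu):
    """Coulomb penetrability P_l = k*r / (F_l^2 + G_l^2) at radius r_ch.
    E and mu in MeV, r_ch in fm."""
    k = sqrt(2.0 * mu * E) / hbarc            # fm^-1
    eta = Z1Z2 * alpha * mu / (hbarc * k)     # Sommerfeld parameter
    rho = k * r_ch
    F, G = coulombf(l, eta, rho), coulombg(l, eta, rho)
    return rho / (F**2 + G**2)

# "Diproton" (charge 2) against the 43Cr core (Z = 24); rough reduced mass.
mu = (2 * 938.272) * (43 * 931.494) / (2 * 938.272 + 43 * 931.494)  # MeV
E = 0.95 * 1.154                              # MeV, 95% of the 45Fe decay energy
for r_ch in (1.5, 4.0, 7.0):
    print(f"r_ch = {r_ch:3.1f} fm:  P_0 =", penetrability(0, E, r_ch, 2 * 24, mu))

Because the penetrability changes by orders of magnitude over a few fm, the extracted equivalent radius rch(dp) carries essentially all of the model dependence of the diproton estimate.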
FIG. 13: Convergence of the 45Fe width for the position of the 1−
two-body resonances in the “X” and “Y” subsystems E2r = 1.48
MeV.
C. Two final state interactions
As we have already mentioned the situation of one final
state interaction is comfortable for studies, but rarely re-
alized in practice. An exception is the case of the E1 tran-
sitions to continuum in the three-body systems, consid-
ered in our previous work [23]. For narrow states in typ-
ical nuclear system of the interest there are at least two
comparable final state interactions (in the core-p chan-
nel). For systems with heavy core this situation can be
treated reasonably well as the Y coordinate (in “Y” Ja-
cobi system) for such systems practically coincides with
the core-p coordinate. Below we treat in this way 17Ne
(for which this approximation could be not very consis-
tent) and 45Fe (for which this approximation should be
good). In the case of 17Ne we are thus interested in the
scale of the effect, rather than in the precise width value.
For calculations with two FSI for 17Ne we used Gaus-
sian d-wave potential (see Table I), in addition to the
s-wave potential used in Section IVA. This potential
provides a d-wave state at 0.96 MeV (Γ = 13.5 keV),
which corresponds to the experimental position of the
first d-wave state in 16F. The convergence of the 17Ne
decay width is shown in Fig. 12. Comparing with Fig.
3 one can see that the absolute value of the width has
changed significantly (2−3 times) but not extremely and
the convergence is practically the same. Interesting new
feature is a kind of the convergence curve “staggering”
for odd and even values of K/2. Also the convergence
of the corrected calculations requires now a considerable
Kmax ∼ 12− 14.
The improved experimental data for the 2p decay of 45Fe
were published recently in Ref. [12]: E3r = 1.154(16) MeV,
Γ2p = 2.85+0.65−0.68 × 10−19 MeV [T1/2(2p) = 1.6+0.5−0.3 ms] for
a two-proton branching ratio Br(2p) = 0.57. Below we use
the resonance energy from this work.
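The quoted width and partial half-life are related by T1/2 = ħ ln 2 / Γ; the following one-liner reproduces the ~1.6 ms scale from the central value of Γ2p.

import math

hbar = 6.582e-22          # MeV s
Gamma_2p = 2.85e-19       # MeV, central value from Ref. [12]
T_half = hbar * math.log(2.0) / Gamma_2p
print(f"T1/2(2p) = {T_half * 1e3:.2f} ms")   # ~1.6 ms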
The convergence of the 45Fe width is shown in Fig. 13.
The character of this convergence is very similar to that
FIG. 14: The 45Fe g.s. width as a function of the two-body
resonance position E2r. Dashed, dotted and solid lines show
cases of a pure HH calculation with Kmax = 24, the same but
with Feshbach reduction from Kmax = 100, and the corrected
width Γc. Gray area shows the transition region from three-
body to two-body decay regime. The gray curve shows the simple
analytical dependence of Eq. (34).
in the 17Ne case, except that the “staggering” feature is more
pronounced.
The dependence of the 45Fe width on the two-body
resonance energy E2r is shown in Fig. 14. Potential
parameters for these 45Fe calculations are given in Ta-
ble II. The result calculated for E3r = 1.154 MeV and
E2r = 1.48 MeV in paper [4] for a pure [p²] configuration
is Γ = 2.85 × 10−19 MeV. The value Kmax = 20 was used
in these calculations. If we take the HH width value from
Fig. 13 at Kmax = 20 it provides Γ = 2.62 × 10−19 MeV,
which is in a good agreement with a full HH three-body
model of Ref. [4]. However, from Fig. 13 we can conclude
that in the calculations of [4] the width was about 35%
underestimated. Thus the value of about Γ = 6.3×10−19
MeV should be expected in these calculations. On the
other hand much larger uncertainty could be inferred
from Fig. 14 due to uncertain energy of the 44Mn ground
state. If we assume a variation E2r = 1.1− 1.6 MeV the
TABLE II: Parameters for 45Fe calculations. Potential pa-
rameters for p-wave interactions (33) in 43Cr+p channel (Vx0
in MeV, r0 = 4.236 fm, rsph = 5.486 fm) and
44Mn+p (Vy0 in
MeV, r0 = 4.268 fm, rsph = 5.527 fm), a = 0.65 fm. Calcula-
tions are made with “effective Coulomb” of Eq. (22). Widths
Γx, Γy of the states in the subsystems are given in keV. Cor-
rected three-body widths are given in the units 10−19 MeV.
E2r Vx0 Γx Vy0 Γy Γc
1.0 −24.350 4.3× 10−3 −24.54 2.1 × 10−3 26.5
1.2 −24.03 0.032 −24.224 0.018 11.8
1.48 −23.58 0.26 −23.78 0.15 5.6
2.0 −22.7 3.6 −22.93 2.3 2.3
3.0 −20.93 58 −21.19 44 0.84
FIG. 15: Interpolation of 17Ne decay width obtained in full
three-body calculations by means of TFSI convergence curves
(see Fig. 8). Upper curves correspond to TFSI case with
Gaussian potential in s-wave and compatible S1 case for full
three-body model. Lower curves correspond to TFSI case
with repulsive core potential in s-wave and compatible GMZ
case for full three-body model.
width uncertainty inferred from Fig. 14 would be
Γ = (4 − 16) × 10−19 MeV. On top of that we expect a
strong p²/f² configuration mixing which could easily re-
duce the width by up to an order of magnitude. Thus
we can conclude that a better knowledge about spectrum
of 44Mn and a reliable structure information about 45Fe
are still required to make sufficiently precise calculations
of the 45Fe width. More detailed account of these issues
is provided below.
V. THREE-BODY CALCULATIONS
Having in mind the experience of the convergence stud-
ies we have performed large-basis calculations for 45Fe
and 17Ne. They are made with dynamical Kmax = 16 − 18
(including Feshbach reduction from Kmax = 30 − 40) for
17Ne and Kmax = 22 (FR from Kmax = 40) for 45Fe.
The calculated width values are extrapolated using the
convergence curves obtained in the TFSI model (Fig. 15 for
17Ne and Fig. 13 for 45Fe). We have no proof that the width
convergence in the realistic three-body case is absolutely
the same as in the TFSI case. However, the TFSI model
takes into account main dynamic features of the system
causing a slow convergence, and we are expecting that
the convergence should be nearly the same in both cases.
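One plausible reading of this extrapolation is a simple ratio correction based on the TFSI convergence curve; the sketch below implements that reading with placeholder numbers (none of the values are from the paper), and it is not necessarily the authors' exact prescription.

def extrapolate_width(gamma_dyn, K_dyn, tfsi_curve):
    """Scale a realistic width computed at finite K_dyn by the convergence
    factor of the simplified TFSI model (assumed transferable)."""
    return gamma_dyn * tfsi_curve["converged"] / tfsi_curve[K_dyn]

# Placeholder TFSI convergence data: width (arbitrary units) vs Kmax.
tfsi_curve = {16: 0.55, 18: 0.62, 22: 0.75, "converged": 1.00}
gamma_realistic_K18 = 0.35      # placeholder dynamic-basis result
print("extrapolated width:", extrapolate_width(gamma_realistic_K18, 18, tfsi_curve))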
A. Widths and correlations in 17Ne
The potentials used in the realistic calculations are the
same as used for 17Ne studies in Refs. [7, 38]. The GPT
potential [39] is used in the p-p channel. The core-p
potentials are referred in [38] as “GMZ” (potential in-
FIG. 16: Correlations for 17Ne decay in “T” and “Y” Jacobi systems. Three-body calculations with realistic (GMZ) potential.
troduced in [7]) and “high s” (with centroid of d-wave
states is shifted upward which is providing a higher con-
tent of s2 components in the 17Ne g.s. WF). Both poten-
tials provide correct low-lying spectrum of 16F and differ‘
only for d-wave continuum above 3 MeV (see Table III).
The core-p nuclear potentials, including central, ss and
ls terms, are taken as
V (r) =
V lc + (s1 · s2)V
1 + exp[(r − rl0)/a]
− (l · s)
2.0153V lls
× exp[(r − rl0)/a]
1 + exp[(r − rl0)/a]
,(37)
with parameters: a = 0.65 fm, r00 = 3.014 fm, r20 =
2.94 fm, V 0c = −26.381 MeV, V 1c = −9 MeV, V 2c =
−57.6 (−51.48) MeV, V 3c = −9 MeV, V 0ss = 0.885 MeV,
V 2ss = 4.5 (12.66) MeV, Vls = 4.4 (13.5) MeV (the values
in brackets are for “high s” case). There are also repulsive
cores for s- and p-waves described by a = 0.4 fm, r00 =
0.89 fm, Vcore = 200 MeV. These potentials are used
together with Coulomb potential obtained for Gaussian
charge distribution reproducing the charge radius of 15O.
To have extra confidence in the results, the width of
the 17Ne 3/2− state is calculated in several models of
growing complexity (Tables IV-VI). One can see from
those Tables that improvements introduced on each step
provide quite smooth transition from the very simple to
the most sophisticated model.
In Table IV we demonstrate how the calculations in the
simplified model of Section IV are compared with calcu-
lations of the full three-body model with appropriately
truncated Hamiltonian. We can switch off correspond-
ing interactions in the full model to make it consistent
with the approximations of the simplified model. As a reminder,
the differences between the full model and the simplified model are
the following: (i) antisymmetrization between protons
is missing in the simplified model and (ii) Y coordinate
is only approximately equal to the coordinate between
core and second proton. Despite these approximations
the models demonstrate very close results: the worst dis-
agreement is not more than 30%.
In Table V we compare approximations of a different
kind: those connected with choice of the Jacobi coordi-
nate system in the simplified model. First we compare
the “pure Coulomb” case: all pairwise nuclear interac-
tions are off and the existence of the resonance is provided
solely by the three-body potential (25). This model pro-
vides some hint what should be the width of the system
without nuclear pairwise interactions. Then the models
are compared with the nuclear FSIs added. The addition
of nuclear FSI drastically increase width in all cases. It
is the most “efficient” (in the sense of width increase) in
the case of TFSI model in the “Y” system. Choice of
this model provides the largest widths and can be used
for the upper limit estimates.
In Table VI full three-body models are compared. The
simplistic S1 and S2 interactions correspond to calcula-
tions with simplified spectra of the 16F subsystem. For S1
case it includes one s-wave state at 0.535 MeV (Γ = 18.8
keV) and one d-wave state at 0.96 MeV (Γ = 3.5 keV).
These are two lower s- and d-wave states known experi-
mentally. In the S2 case we use instead the experimental
positions of the higher component of the s- and d-wave
doublets: s-wave at 0.72 MeV (Γ = 73.4 keV) and d-
TABLE III: Low-lying states of 16F obtained in the “GMZ”
and “high s” core-p potentials. The potential is diagonal in
the representation with definite total spin of core and proton
S, which is given in the third column.
Case GMZ high s Exp.
Jπ l S E2r (MeV) Γ (keV) E2r (MeV) Γ (keV) Γ (keV)
0− 0 0 0.535 18.8 0.535 18.8 25(5) [34]
1− 0 1 0.728 73.4 0.728 73.4 70(5) [34]
2− 2 0 0.96 3.5 0.96 3.5 6(3) [34]
3− 2 1 1.2 9.9 1.2 10.5 < 15 [35]
2− 2 1 3.2 430 7.6 ∼ 3000
1− 2 1 4.6 1350 ∼ 15 ∼ 6000
FIG. 17: Correlations for 17Ne decay in “T” and “Y” Jacobi systems. Three-body calculations with Coulomb FSIs only (all
nuclear pairwise potentials are turned off).
wave at 1.2 MeV (Γ = 10 keV). Parameters of the core-p
potentials can be found in Table I. Simple Gaussian p-
p potential (35) is used. The variation of the results
between these models is moderate (∼ 30%). The calcu-
lations with GMZ potential provide the width for 17Ne
3/2− state which comfortably rests in between the re-
sults obtained in the simplified S1 and S2 models. The
structure of the WF is also obtained quite close to these
calculations. The structure in the “high s” case is ob-
tained with a strong domination of the sd component.
The width in the “high s” case is obtained somewhat
larger (∼ 11%) than in the GMZ case, but this increase is
consistent with the increase (∼ 15%) of the sd WF component,
which is expected to be more favourable for de-
cay than the d² component.
It is important for us that the results obtained in the
three-body models with considerably varying spectra of
the two-body subsystems and different convergence sys-
tematics appear to be quite close: Γ ∼ (5 − 8) × 10−15
MeV. Thus we have not found a factor which could lead
to a considerable variation of the three-body width, given
the ingredients of the model are reasonably realistic.
The decomposition of the 17Ne WF obtained with
GMZ potential is provided in Table VII in terms of par-
tial internal normalizations and partial widths. The cor-
respondence between the components with large weights
and large partial widths is typically good. However, there
are several components giving large contribution to the
width in spite of negligible presence in the interior.
Complete correlation information for three-body de-
cay of a resonant state can be described by two variables
(with omission of spin degrees of freedom). We use the
energy distribution parameter ε = Ex/E3r and the angle
cos(θk) = (kx · ky)/(kx ky) between the Jacobi momenta.
The complete correlation information is provided in Fig.
16 for realistic 17Ne 3/2− decay calculations. We can see
that the profile of the energy distribution is characterized
by formation of the double-hump structure, expected so
far for p2 configurations (see, e.g. [4]). This structure
can be seen both in “T” system (in energy distribution)
and in “Y” system (in angular distribution). In the cal-
culations of ground states of the s-d shell nuclei we were
getting such distributions to be quite smooth. It can be
found that the profile of this distribution is defined by
the sd/d2 components ratio. For example in the calcula-
tions with “high s” potential the total domination of the
sd configuration leads to washing out of the double-hump
profile.
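The two correlation variables defined above are straightforward to evaluate from the Jacobi momenta; the helper below (plain numpy, not code from the paper) returns ε and cos θk for given momentum vectors and reduced masses, which here are arbitrary toy values.

import numpy as np

def correlation_variables(kx, ky, Mx, My):
    """Energy-sharing parameter eps = Ex/E3r and cos(theta_k) between
    the Jacobi momenta kx, ky (3-vectors; hbar = c = 1 units)."""
    kx, ky = np.asarray(kx, float), np.asarray(ky, float)
    Ex = np.dot(kx, kx) / (2.0 * Mx)
    Ey = np.dot(ky, ky) / (2.0 * My)
    eps = Ex / (Ex + Ey)
    cos_tk = np.dot(kx, ky) / (np.linalg.norm(kx) * np.linalg.norm(ky))
    return eps, cos_tk

# Toy example with arbitrary momenta (MeV) and rough reduced masses (MeV).
eps, cos_tk = correlation_variables([10.0, 0.0, 0.0], [5.0, 5.0, 0.0],
                                    Mx=880.0, My=884.0)
print(f"eps = {eps:.3f}, cos(theta_k) = {cos_tk:.3f}")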
The correlations in the 17Ne (shown in Fig. 16) are
strongly influenced by the nuclear FSIs. Calculations with
only the Coulomb pairwise FSIs left in the Hamiltonian are
TABLE IV: Comparison of widths for 17Ne (in 10−14 MeV
units) obtained in simplified model in “Y” Jacobi system
and in full three-body model with correspondingly truncated
Hamiltonian. Structure information is provided for the three-
body model. In the simplified model the weight of the [sd]
configuration is 100% by construction. “No p-p” column
shows the case where Coulomb interaction in p-p channel is
switched off (see, (21)). “Eff.” column corresponds to the ef-
fective treatment (see, (22)) of Coulomb interaction in the p-p
channel in the simplified model, but to the exact treatment
in full three-body model.
pure Coulomb OFSI TFSI
“no p-p” Eff. “no p-p” Eff. “no p-p” Eff.
Simpl. 0.017 0.0032 3.02 0.545 4.70 1.37
3-body 0.024a 0.0041a 3.22 0.555 3.91 0.445
[sd] 99.8 99.3 99.6 99.5 92.0 72.6
[p2] 0.2 0.6 0.3 0.4 0.1 0.2
[d2] 0 0 0 0 7.8 27.1
aSmall repulsion (∼ 0.5 MeV) was added in that case in the p-
wave core-p channel to split the states with sd and p2 structure
which appear practically degenerated and strongly mixed in this
model.
FIG. 18: Correlations for 17Ne decay calculated in simplified OFSI model in “T” (only p-p FSI) and in “Y” Jacobi systems
(only s-wave core-p FSI).
shown in Fig. 17. The strong peak at small p-p energy
is largely dissolved and the most prominent feature of
the correlation density in that case is a rise of the distri-
bution for cos(θk) → 1 in the “Y” Jacobi system. This
kinematical region corresponds to motion of protons in
the opposite directions from the core and is qualitatively
understandable feature of the three-body Coulomb in-
teraction (the p-p Coulomb interaction is minimal along
such a trajectory).
The distributions calculated in the simplified (OFSI)
model are shown in Fig. 18 on the same {ε, cos(θk)} plane
as in Figs. 16 and 17. It should be noted that here the
calculations in “T” and “Y” Jacobi systems represent
different calculations (with p-p FSI only and with core-p
FSI only). In Figs. 16 and 17 two panels show differ-
ent representations of the same result. While providing a rea-
sonable (within a factor of 2 − 4) approximation to the full
three-body model in the sense of the decay width, the
simplified model is very deficient in the sense of correla-
TABLE V: Comparison of widths calculated for 17Ne (10−14
MeV units) and 45Fe (10−19 MeV units) with pure Coulomb
FSIs and for nuclear plus Coulomb FSIs. Simplified OFSI
model in “T”, TFSI in “Y” Jacobi systems (“effective”
Coulomb is used in both cases) and full three-body calcu-
lations.
pure Coulomb Nuclear+Coulomb
“T” “Y” 3-body “T” “Y” 3-body
17Ne 0.0011 0.0032 0.0041 0.0077 1.37 0.76a
[sd] 100 100 99.3 100 100 73.1
[p2] 0 0 0.6 0 0 1.8
[d2] 0 0 0 0 0 24.2
45Fe 0.0053 0.0167 0.26 0.034 4.94 6.3b
aThis is a calculation with S1 Hamiltonian.
bThis is a calculation providing pure p2 structure.
tions. The only feature of the realistic correlations which
is even qualitatively correctly described in the simplified
model is the energy distribution in the “Y” system. The
“diproton” model (OFSI model with p-p interaction) fails
especially strongly, which is certainly relevant to the very
small width provided by this calculation.
B. Width of 45Fe
The calculation strategy is the same as in [4]. We start
with interactions in the core-p channel which give a res-
onance in p-wave at fixed energy E2r. Such a calculation
provides 45Fe with practically pure p2 structure. Then
we gradually increase the interaction in the f -wave, until
it replaces the p-wave resonance at fixed E2r and then
we gradually move the p-wave resonance to high energy.
Thus we generate a set of WFs with different p2/f2 mix-
ing ratios.
The results of the improved calculations with the same
settings as in [4] (the 44Mn g.s. is fixed to have E2r = 1.48
MeV) are shown in Fig. 19 (see also Table V) together
with updated experimental data [12]. The basis size used
in [4] was sufficient to provide stable correlation pictures
(as we have found in this work) and they are not updated.
TABLE VI: Width (in 10−14 MeV units) and structure of
17Ne 3/2− state calculated in a full three-body model with
different three-body Hamiltonians.
S1 S2 GMZ high s
Kmax = 18 0.35 0.27 0.14 0.16
Extrapolated 0.76 0.56 0.69 0.76
[sd] 73.1 71.7 80.2 95.1
[p2] 1.8 1.8 2.0 1.3
[d2] 24.2 25.7 16.8 3.1
The sensitivity of the obtained results to the experi-
mentally unknown energy of 44Mn can be easily studied
by means of Eq. (34). The results are shown in Fig. 20 in
terms of the regions consistent with experimental data
on the {E2r, W(p²)} plane [W(p²) is the weight of the p²
configuration in the 45Fe WF]. It is evident from this plot
that our current experimental knowledge is not sufficient
to draw definite conclusions. However, it is also clear
that with increased precision of the lifetime and energy
measurements for 45Fe and the appearance of more de-
tailed information on 44Mn subsystem the restrictions on
the theoretical models should become strong enough to
provide the important structure information.
VI. DISCUSSION
General trends of the model calculations can be well
understood from Tables IV-VI. For the pure Coulomb
case the simplified model calculations (in the “Y” and
“T” systems) and three-body calculations provide rea-
sonably consistent results. The simplified calculations in
the “Y” system always give larger widths than those in
the “T” system. From the decay dynamics point of view this
leads to an understanding of the counterintuitive fact that the
sequential decay path is preferable even if not even a virtual
sequential decay is possible (as the nuclear interactions
are totally absent in this case).
The calculations with attractive nuclear FSIs rather
TABLE VII: Partial widths ΓKγ of different components of the
17Ne 3/2− WF calculated in the “T” Jacobi system. Partial
weights are given in the “T” (value N^T_Kγ) and in the “Y”
(value N^Y_Kγ) Jacobi systems. Sx is the total spin of the two protons.
K L lx ly Sx   N^T_Kγ   N^Y_Kγ   ΓKγ
2 2 0 2 0 23.88 33.87 44.93
2 2 2 0 0 24.97 16.52 13.29
2 2 1 1 1 0.28 7.39 3.59
2 2 1 1 0 1.54
2 2 0 2 1 3.68
2 2 2 0 1 3.68
4 2 0 2 0 8.97 20.04 3.19
4 2 2 0 0 8.68 13.57 5.57
4 2 2 2 0 15.49 0.32 18.80
4 2 1 3 1 0.03 2.18 0.95
4 2 3 1 1 0 1.89 0.63
4 1 2 2 1 1.02
4 2 0 2 1 1.99
4 2 2 0 1 2.07
6 2 2 4 0 0.14 0.77 3.57
6 2 4 2 0 0.14 0.77 0.78
6 2 0 2 0 0.50 0.09 0.69
8 2 4 4 0 0.02 0.003 1.58
FIG. 19: The lifetime of 45Fe as a function of the 2p decay
energy E3r. The plot is an analogue of Fig. 6a from [4] with up-
dated experimental data [12] and improved theoretical results.
Solid curves show the cases of practically pure p² and f²
configurations, dashed curves stand for different mixed p²/f²
cases. The numerical labels on the curves show the weights
of the p² and f² configurations in percent.
expectedly provide larger widths than the corresponding
calculations with Coulomb interaction only. The core-
proton FSI is much more efficient for width enhancement
than p-p FSI. This fact is correlated with the observation
of the previous point and is a very simple and strong indi-
cation that the wide-spread perception of the two-proton
decay as “diproton” decay is to some extent misleading.
As it has already been mentioned the p-p FSI influences
the penetration strongly in the very special case when the
decay occurs from high-l orbitals (e.g. f2 in the case of
45Fe). Thus we should consider as not fully consistent the
attempts to explain two-proton decay results only by the
FSI in the p-p channel (e.g. Ref. [19]), as a much stronger
decay mechanism is neglected in these studies.
From a technical point of view the states considered in
this work belong to the most complicated cases. The
complication is due to the ratio between the decay energy
and the strength of the Coulomb interaction (it defines
the subbarrier penetration range to be considered dy-
namically). Thus the convergence effects demonstrated
in this work for 17Ne have the strongest character among
the systems studied in our previous works [4, 6, 7, 8].
Because of the relatively small Kmax = 12 used in the
previous works we have found an order of the magnitude
underestimation of the 17Ne(3/2−) width. For systems
like 48Ni — 66Kr the underestimation of widths in our
previous calculations is expected to be about a factor of 2.
A much smaller effect is expected for lighter systems.
It was demonstrated in [22, 23] that the capture rate
for the 15O(2p,γ)17Ne reaction depends strongly on the
two-proton width of the first excited 3/2− state in 17Ne.
This width was calculated in Ref. [7] as 4.1× 10−16 MeV
(some confusion can be connected with misprint in Table
1.0 1.2 1.4 1.6 1.8 2.0
E2r (MeV)
FIG. 20: Compatibility of the measured width of the 45Fe
with different assumptions about position E2r of the ground
state in the 44Mn subsystem and structure of 45Fe [weights
of the p2 configuration W (p2) are shown on the vertical
axis]. Central gray area corresponds to experimental width
uncertainty Γ = 2.85+0.65
−0.68 × 10
−19 MeV [12]. The light
gray area also takes into account the energy uncertainty
E3r = 1.154(16) MeV from [12]. The vertical dashed line
corresponds to E2r used in Fig. 19.
III of Ref. [7], see erratum). However, in the subsequent
work [21], providing very similar to [7] properties of the
17Ne WFs for the ground and the lowest excited states,
the width of the 3/2− state was found to be 3.6× 10−12
MeV. It was supposed in [21] that such a strong disagree-
ment is connected with poor subbarrier convergence of
the HH method in [7] compared to Adiabatic Faddeev
HH method of [21]. This point was further reiterated in
Ref. [41]. We can see now that this statement has a cer-
tain ground. However, the convergence problems of the
HH method are far insufficient to explain the huge dis-
agreement: the width increase found in this work is only
one order of magnitude. The most conservative upper
limit Γ ∼ 5× 10−14 MeV (see Table IV) was obtained in
a TFSI calculation neglecting p-p Coulomb interaction.
The other models systematically produce smaller values,
with realistic calculations confined to the narrow range
Γ ∼ (5 − 8) × 10−15 MeV (Table VI). Thus the value
Γ ∼ 4 × 10−12 MeV obtained in paper [21] is very likely
to be erroneous. That result is possibly connected with the
simplistic quasiclassical procedure for width calculations
employed in that work.
VII. CONCLUSION.
In this work we derive the integral formula for the
widths of the resonances decaying into the three-body
channel for simplified Hamiltonians and discuss various
aspects of its practical application. The basic idea of the
derivation is not new, but for our specific purpose (pre-
cision solution of the multichannel problem) several im-
portant features of the scheme have not been discussed.
We can draw the following conclusions from our studies:
(i) We presume that HH convergence in realistic calcu-
lations should be largely the same as in the simplified
calculations as they imitate the most important dynamic
aspects of the realistic situation. The width values were
somewhat underestimated in our previous calculations.
The typical underestimation ranges from a few percent to
tens of percent for “simple” potentials and from tens of
percent to an order of magnitude in “complicated” cases
(potentials with repulsive core).
(ii) Convergence of the width calculations in the three-
body HH model can be drastically improved by a simple
adiabatic version of the Feshbach reduction procedure.
For a sufficiently large dynamic sector of the basis the
calculation with effective FR potential converges from
below and practically up to the exact value of the width.
For a small dynamic basis the FR calculation converges
towards a width value smaller than the exact value, but
still improves considerably the result.
(iii) The energy distributions obtained in the HH calcu-
lations are quite close to the exact ones. Convergence
with respect to basis size is achieved at relatively small
Kmax values. The disagreement with exact distributions
is not very significant and is likely to be connected not
with the basis size convergence but with the radial extent of the
calculations [4].
(iv) Contributions of different decay mechanisms were
evaluated in the simplified models. We have found that
the “diproton” decay path is much less efficient than the
“sequential” decay path. This is true even in the model
calculations without nuclear FSIs (no specific dynamics),
which means that the “sequential” decay path is some-
how kinematically preferable.
(v) The value of the width for 17Ne 3/2− state was un-
derestimated in our previous works by around an order of
magnitude. A very conservative upper limit is obtained
in this work as Γ ∼ 5 × 10−14 MeV, while typical values
for realistic calculations are within the (5 − 8) × 10−15
MeV range. Thus the value Γ ∼ 4×10−12 MeV obtained
in papers [21, 41] is likely to be erroneous.
From this paper it is clear that the convergence issue
is sufficiently serious, and in some cases it was underesti-
mated in our previous works. However, from a practical
point of view, the convergence issue is not a problem of
principle. For example, the uncertain structure issues and
subsystem properties typically impose much larger uncer-
tainties on the width values. For heavy two-proton emitters
(e.g. 45Fe) the positions of the resonances in the subsystems
are experimentally quite uncertain. At the moment this is
the issue most limiting the precision of theoretical predic-
tions. We have demonstrated that with increased preci-
sion the experimental data impose strong restrictions on
theoretical calculations, allowing one to extract important
structure information.
VIII. ACKNOWLEDGEMENTS
The authors are grateful to Prof. K. Langanke and
Prof. M. Ploszajczak for interesting discussions. The au-
thors acknowledge the financial support from the Royal
Swedish Academy of Sciences. LVG is supported by INTAS
Grants 03-51-4496 and 05-1000008-8272, Russian RFBR
Grants Nos. 05-02-16404 and 05-02-17535 and Russian
Ministry of Industry and Science grant NS-8756.2006.2.
[1] V. I. Goldansky, Nucl. Phys. 19, 482 (1960).
[2] L. V. Grigorenko, R. C. Johnson, I. G. Mukha, I. J.
Thompson, and M. V. Zhukov, Phys. Rev. C 64 054002
(2001).
[3] L. V. Grigorenko, R. C. Johnson, I. G. Mukha, I. J.
Thompson, and M. V. Zhukov, Phys. Rev. Lett. 85, 22
(2000).
[4] L. V. Grigorenko and M. V. Zhukov, Phys. Rev. C 68,
054005 (2003).
[5] L. V. Grigorenko, I. G. Mukha, I. J. Thompson, and M.
V. Zhukov, Phys. Rev. Lett. 88, 042502 (2002).
[6] L. V. Grigorenko, R. C. Johnson, I. G. Mukha, I. J.
Thompson, and M. V. Zhukov, Eur. Phys. J. A 15 125
(2002).
[7] L. V. Grigorenko, I. G. Mukha, and M. V. Zhukov, Nucl.
Phys. A713, 372 (2003); erratum A740, 401 (2004).
[8] L. V. Grigorenko, I. G. Mukha, and M. V. Zhukov, Nucl.
Phys. A714, 425 (2003).
[9] M. Pfutzner, E. Badura, C. Bingham, B. Blank, M.
Chartier, H. Geissel, J. Giovinazzo, L. V. Grigorenko,
R. Grzywacz, M. Hellstrom, Z. Janas, J. Kurcewicz, A.
S. Lalleman, C. Mazzocchi, I. Mukha, G. Munzenberg,
C. Plettner, E. Roeckl, K. P. Rykaczewski, K. Schmidt,
R. S. Simon, M. Stanoiu, J.-C. Thomas, Eur. Phys. J. A
14, 279 (2002).
[10] J. Giovinazzo, B. Blank, M. Chartier, S. Czajkowski,
A. Fleury, M. J. Lopez Jimenez, M. S. Pravikoff, J.-
C. Thomas, F. de Oliveira Santos, M. Lewitowicz, V.
Maslov, M. Stanoiu, R. Grzywacz, M. Pfutzner, C.
Borcea, B. A. Brown, Phys. Rev. Lett. 89, 102501 (2002).
[11] B. Blank, A. Bey, G. Canchel, C. Dossat, A. Fleury, J.
Giovinazzo, I. Matea, N. Adimi, F. De Oliveira, I. Ste-
fan, G. Georgiev, S. Grevy, J. C. Thomas, C. Borcea,
D. Cortina, M. Caamano, M. Stanoiu, F. Aksouh, B. A.
Brown, F. C. Barker, and W. A. Richter, Phys. Rev. Lett.
94, 232501 (2005).
[12] C. Dossat, A. Bey, B. Blank, G. Canchel, A. Fleury, J.
Giovinazzo, I. Matea, F. de Oliveira Santos, G. Georgiev,
S. Grèvy, I. Stefan, J. C. Thomas, N. Adimi, C. Borcea,
D. Cortina Gil, M. Caamano, M. Stanoiu, F. Aksouh,
B. A. Brown, and L. V. Grigorenko, Phys. Rev. C 72,
054315 (2005).
[13] Ivan Mukha, Ernst Roeckl, Leonid Batist, Andrey
Blazhev, Joachim Döring, Hubert Grawe, Leonid Grig-
orenko, Mark Huyse, Zenon Janas, Reinhard Kirchner,
Marco La Commara, Chiara Mazzocchi, Sam L. Tabor,
Piet Van Duppen, Nature 439, 298 (2006).
[14] B. A. Brown, Phys. Rev. C 43, R1513 (1991); 44, 924(E)
(1991).
[15] W. Nazarewicz, J. Dobaczewski, T. R. Werner, J. A.
Maruhn, P.-G. Reinhard, K. Rutz, C. R. Chinn, A. S.
Umar, and M. R. Strayer, Phys. Rev. C 53, 740 (1996).
[16] F. C. Barker, Phys. Rev. C 63, 047303 (2001).
[17] F. C. Barker, Phys. Rev. C 66, 047603 (2002).
[18] F. C. Barker, Phys. Rev. C 68, 054602 (2003).
[19] B. A. Brown and F. C. Barker, Phys. Rev. C 67,
041304(R) (2003).
[20] J. Rotureau, J. Okolowicz, and M. Ploszajczak, Nucl.
Phys. A767, 13 (2006).
[21] E. Garrido, D. V. Fedorov, and A. S. Jensen, Nucl. Phys.
A733, 85 (2004).
[22] L. V. Grigorenko and M. V. Zhukov, Phys. Rev. C 72
015803 (2005).
[23] L. V. Grigorenko, K. Langanke, N. B. Shul’gina, and M.
V. Zhukov, Phys. Lett. B641, 254 (2006).
[24] J. Görres, M. Wiescher, and F.-K. Thielemann, Phys.
Rev. C 51, 392 (1995).
[25] K. Nomoto, F. Thielemann, and S. Miyaji, Astron. As-
trophys. 149, 239 (1985).
[26] K. Harada and E. A. Rauscher, Phys. Rev. 169, 818
(1968).
[27] S. G. Kadmensky and V. E. Kalechits, Yad. Fiz. 12, 70
(1970) [Sov. J. Nucl. Phys. 12, 37 (1971)].
[28] S. G. Kadmensky and V. I. Furman, Alpha-decay and
relevant reactions, Moscow, Energoatomizdat, 1985 (in
Russian).
[29] S. G. Kadmensky, Z. Phys. A312, 113 (1983).
[30] V. P. Bugrov and S. G. Kadmenskii, Sov. J. Nucl. Phys.
49, 967 (1989).
[31] C. N. Davids, P. J. Woods, D. Seweryniak, A. A. Son-
zogni, J. C. Batchelder, C. R. Bingham, T. Davinson, D.
J. Henderson, R. J. Irvine, G. L. Poli, J. Uusitalo, W. B.
Walters, Phys. Rev. Lett. 80, 1849 (1998).
[32] B. V. Danilin and M. V. Zhukov, Yad. Fiz. 56, 67 (1993)
[Phys. At. Nucl. 56, 460 (1993)].
[33] M. Abramowitz and I. Stegun, Handbook of Mathematical
Functions, p. 538.
[34] I. Stefan, F. de Oliveira Santos, M. G. Pellegriti, G. Du-
mitru, J. C. Angelique, M. Angelique, E. Berthoumieux,
A. Buta, R. Borcea, A. Coc, J. M. Daugas, T. Davin-
son, M. Fadil, S. Grevy, J. Kiener, A. Lefebvre-Schuhl,
M. Lenhardt, M. Lewitowicz, F. Negoita, D. Pantelica,
L. Perrot, O. Roig, M. G. Saint Laurent, I. Ray, O. Sor-
lin, M. Stanoiu, C. Stodel, V. Tatischeff, J. C. Thomas,
nucl-ex/0603020 v3.
[35] F. Ajzenberg-Selove, Nucl. Phys. A460, 1 (1986).
[36] B. V. Danilin et al., to be submitted.
[37] L. V. Grigorenko and M. V. Zhukov, to be submitted.
[38] L. V. Grigorenko, Yu. L. Parfenova, and M. V. Zhukov,
Phys. Rev. C 71, 051604(R) (2005).
[39] D. Gogny, P. Pires, and R. de Tourreil, Phys. Lett. B 32,
591 (1970).
[40] J. Görres, H. Herndl, I. J. Thompson, and M. Wiescher,
Phys. Rev. C 52, 2231 (1995).
[41] E. Garrido, D. V. Fedorov, A. S. Jensen, H.O.U. Fynbo,
Nucl. Phys. A748, 39 (2005).
[42] A realistic example of this situation is the case of the “E1” continuum (coupled to the ground state by the E1 operator) considered in Ref. [23]. This case is relevant to the low-energy radiative capture reactions, important for astrophysics, but deals with the nonresonant continuum only.
[43] An interesting numerical stability test is to vary the “unphysical” (for the OFSI approximation) potential V^nuc_y(Y) in the auxiliary Hamiltonian (27). This variation can be used as a numerical test of the procedure, since it should not influence the width. Indeed, as this potential is varied from weak attraction (an unphysical resonance must not be allowed into the decay window) to strong repulsion (the scale of the variation is tens of MeV for a potential with a typical radius), the width changes only by a couple of percent. This demonstrates the high numerical stability of the procedure.
[44] These values can be evaluated as a typical nuclear radius for the system multiplied by √2 (for instance 3.5√2 ≈ 5), giving the quoted values of ≈ 5 and ≈ 6.
[45] The derivation of the flux here is given in schematic form; the complete proof is too lengthy to be provided in the limited space. We mention only that it is easy to check directly that the derived expression for the flux preserves the continuum normalization.
[46] We demonstrate in paper [37] that a three-body width
should depend linearly on two-body widths of the sub-
systems and only very weakly on various geometrical fac-
tors. This is confirmed very well by direct calculations.
[47] The assumed nuclear structure is very simple, but the diproton penetration process is treated exactly, without assumptions about the emission of a diproton from some nuclear surface, which must be made in the “R-matrix” approach.
|
0704.0923 | When the Cramer-Rao Inequality provides no information | WHEN THE CRAMÉR-RAO INEQUALITY PROVIDES NO INFORMATION
STEVEN J. MILLER
Abstract. We investigate a one-parameter family of probability densities (related to the
Pareto distribution, which describes many natural phenomena) where the Cramér-Rao inequal-
ity provides no information.
1. Cramér-Rao Inequality
One of the most important problems in statistics is estimating a population parameter from a
finite sample. As there are often many different estimators, it is desirable to be able to compare
them and say in what sense one estimator is better than another. One common approach is to
take the unbiased estimator with smaller variance. For example, if X1, . . . , Xn are independent
random variables uniformly distributed on [0, θ], Yn = max_i Xi and X̄ = (X1 + · · · + Xn)/n, then
(n+1)Yn/n and 2X̄ are both unbiased estimators of θ, but the former has smaller variance than the
latter and therefore provides a tighter estimate.
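As a quick numerical illustration (added here; it is not part of the original paper), the following short Python sketch compares the two estimators by simulation. The choices n = 10, θ = 1 and the use of NumPy are arbitrary; the theoretical variances quoted in the comment are the standard ones for the uniform distribution.

import numpy as np

rng = np.random.default_rng(0)
theta, n, trials = 1.0, 10, 200_000
samples = rng.uniform(0.0, theta, size=(trials, n))

est_mean = 2.0 * samples.mean(axis=1)            # 2 * sample mean
est_max = (n + 1) / n * samples.max(axis=1)      # (n+1)/n * sample maximum

for name, est in (("2*mean", est_mean), ("(n+1)/n*max", est_max)):
    print(name, est.mean(), est.var())
# Theory: Var(2*mean) = theta^2/(3n) ≈ 0.0333, Var((n+1)/n*max) = theta^2/(n(n+2)) ≈ 0.0083.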
Two natural questions are (1) which estimator has the minimum variance, and (2) what bounds
are available on the variance of an unbiased estimator? The first question is very hard to solve
in general. Progress towards its solution is given by the Cramér-Rao inequality, which provides
a lower bound for the variance of an unbiased estimator (and thus if we find an estimator that
achieves this, we can conclude that we have a minimum variance unbiased estimator).
Date: February 5, 2008.
2000 Mathematics Subject Classification. 62B10 (primary), 62F12, 60E05 (secondary).
Key words and phrases. Cramér-Rao Inequality, Pareto distribution, power law.
The author would like to thank Alan Landman for many enlightening conversations and the referees for helpful
comments. The author was partly supported by NSF grant DMS0600848.
Cramér-Rao Inequality: Let f(x; θ) be a probability density function with continuous parameter
θ. Let X1, . . . , Xn be independent random variables with density f(x; θ), and let Θ̂(X1, . . . , Xn)
be an unbiased estimator of θ. Assume that f(x; θ) satisfies two conditions:
(1) we have
(∂/∂θ) [ ∫···∫ Θ̂(x_1, . . . , x_n) ∏_{i=1}^{n} f(x_i; θ) dx_i ] = ∫···∫ Θ̂(x_1, . . . , x_n) (∂/∂θ) [ ∏_{i=1}^{n} f(x_i; θ) ] dx_1 · · · dx_n;        (1.1)
(2) for each θ, the variance of Θ̂(X1, . . . , Xn) is finite.
Then
var(Θ̂) ≥ 1 / ( n · E[ (∂ log f(x; θ)/∂θ)² ] ),        (1.2)
where E denotes the expected value with respect to the probability density function f(x; θ).
For a proof, see for example [CaBe]. The expected value in (1.2) is called the information
number or the Fisher information of the sample.
As variances are non-negative, the Cramér-Rao inequality (equation (1.2)) provides no useful
bounds on the variance of an unbiased estimator if the information is infinite, as in this case we
obtain the trivial bound that the variance is greater than or equal to zero. We find a simple
one-parameter family of probability density functions (related to the Pareto distribution) that
satisfy the conditions of the Cramér-Rao inequality, but the expectation (i.e., the information) is
infinite. Explicitly, our main result is
Theorem: Let
f(x; θ) = { a_θ x^{−θ} log^{−3} x   if x ≥ e
          { 0                       otherwise,        (1.3)
where aθ is chosen so that f(x; θ) is a probability density function. The information is infinite
when θ = 1. Equivalently, the Cramér-Rao inequality yields the trivial (and useless) bound that
Var(Θ̂) ≥ 0 for any unbiased estimator Θ̂ of θ when θ = 1.
In §2 we analyze the density in our theorem in great detail, deriving needed results about aθ
and its derivatives as well as discussing how f(x; θ) is related to important distributions used to
model many natural phenomena. We show the information is infinite when θ = 1 in §3, which
proves our theorem. We also discuss there properties of estimators for θ. While it is not clear
whether or not this distribution has an unbiased estimator, there is (at least for θ close to 1) an
asymptotically unbiased estimator rapidly converging to θ as the sample size tends to infinity. By
examining the proof of the Cramér-Rao inequality we see that we may weaken the assumption of
an unbiased estimator. While typically there is a cost in such a generalization, as our information
is infinite there is no cost in our case. We may therefore conclude that arguments such as those
used to prove the Cramér-Rao inequality cannot provide any information for estimators of θ from
this distribution.
2. An Almost Pareto Density
Consider
f(x; θ) = { a_θ/(x^θ log³ x)   if x ≥ e
          { 0                  otherwise,        (2.1)
where aθ is chosen so that f(x; θ) is a probability density function. Thus
a_θ ∫_e^∞ dx/(x^θ log³ x) = 1.        (2.2)
We chose to have log3 x in the denominator to ensure that the above integral converges, as does
log x times the integrand; however, the expected value (in the expectation in (1.2)) will not
converge.
For example, 1/(x log x) diverges (its integral looks like log log x) but 1/(x log² x) converges (its integral looks like 1/log x); see pages 62–63 of [Rud] for more on such closely related examples where one converges but the other does not. This distribution is close to the Pareto distribution (or a power
law). Pareto distributions are very useful in describing many natural phenomena; see for example
[DM, Ne, NM]. The inclusion of the factor of log−3 x allows us to have the exponent of x in the
density function equal 1 and have the density function defined for arbitrarily large x; it is also
needed in order to apply the Dominated Convergence Theorem to justify some of the arguments
below. If we remove the logarithmic factors then we obtain a probability distribution only if the
density vanishes for large x. As log
x is a very slowly varying function, our distribution f(x; θ)
may be of use in modeling data from an unbounded distribution where one wants to allow a
power law with exponent 1, but cannot as the resulting probability integral would diverge. Such
a situation occurs frequently in the Benford Law literature; see [Hi, Rai] for more details.
We study the variance bounds for unbiased estimators Θ̂ of θ, and in particular we show that
when θ = 1 then the Cramér-Rao inequality yields a useless bound.
Note that it is not uncommon for the variance of an unbiased estimator to depend on the
value of the parameter being estimated. For example, consider again the uniform distribution on
[0, θ]. Let X̄ denote the sample mean of n independent observations, and Y_n = max_{1≤i≤n} X_i be the largest observation. The expected values of 2X̄ and (n+1)Y_n/n are both θ (implying each is an unbiased estimator for θ); however, Var(2X̄) = θ²/(3n) and Var((n+1)Y_n/n) = θ²/(n(n+2)) both depend on θ, the parameter being estimated (see, for example, page 324 of [MM] for these calculations).
Lemma 2.1. As a function of θ ∈ [1,∞), a_θ is a strictly increasing function and a_1 = 2. It has a one-sided derivative at θ = 1, and da_θ/dθ|_{θ=1} ∈ (0,∞).
Proof. We have
a_θ ∫_e^∞ dx/(x^θ log³ x) = 1.        (2.3)
When θ = 1 we have
1/a_1 = ∫_e^∞ dx/(x log³ x),        (2.4)
which is clearly positive and finite. In fact, a_1 = 2 because the integral is
∫_e^∞ dx/(x log³ x) = ∫_e^∞ log^{−3} x d(log x) = −1/(2 log² x) |_e^∞ = 1/2;        (2.5)
though all we need below is that a1 is finite and non-zero, we have chosen to start integrating at
e to make a1 easy to compute.
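As a sanity check (added here, not in the original), a one-line numerical quadrature confirms that the integral in (2.5) is 1/2, so a_1 = 2. The use of SciPy is an arbitrary choice.

import numpy as np
from scipy import integrate

val, err = integrate.quad(lambda x: 1.0 / (x * np.log(x) ** 3), np.e, np.inf)
print(val, 1.0 / val)   # ≈ 0.5 and ≈ 2.0, confirming a_1 = 2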
It is clear that aθ is strictly increasing with θ, as the integral in (2.4) is strictly decreasing with
increasing θ (because the integrand is decreasing with increasing θ).
We are left with determining the one-sided derivative of aθ at θ = 1, as the derivative at any
other point is handled similarly (but with easier convergence arguments). It is technically easier
to study the derivative of 1/aθ, as
da_θ/dθ = −a_θ² · d(1/a_θ)/dθ,        (2.6)
and, from (2.2),
1/a_θ = ∫_e^∞ dx/(x^θ log³ x).        (2.7)
The reason we consider the derivative of 1/aθ is that this avoids having to take the derivative of
the reciprocals of integrals. As a_1 is finite and non-zero, it is easy to pass to da_θ/dθ|_{θ=1}. Thus we have
d(1/a_θ)/dθ |_{θ=1} = lim_{h→0⁺} (1/h) [ ∫_e^∞ dx/(x^{1+h} log³ x) − ∫_e^∞ dx/(x log³ x) ] = lim_{h→0⁺} ∫_e^∞ (1 − x^h)/(h x^{1+h} log³ x) dx.        (2.8)
We want to interchange the integration with respect to x and the limit with respect to h above.
This interchange is permissible by the Dominated Convergence Theorem (see Appendix A for
details of the justification). Note
lim_{h→0} (1 − x^h)/h = − log x;        (2.9)
one way to see this is to use the fact that the limit of a product is the product of the limits, and then use L'Hospital's rule, writing x^h as e^{h log x}. Therefore
d(1/a_θ)/dθ |_{θ=1} = − ∫_e^∞ dx/(x log² x);        (2.10)
as this is finite and non-zero, this completes the proof and shows da_θ/dθ|_{θ=1} ∈ (0,∞). □
Remark 2.2. We see now why we chose f(x; θ) = a_θ/(x^θ log³ x) instead of f(x; θ) = a_θ/(x^θ log² x).
If we only had two factors of log x in the denominator, then the one-sided derivative of aθ at θ = 1
would be infinite.
Remark 2.3. Though the actual value of da_θ/dθ|_{θ=1} does not matter, we can compute it quite easily. By (2.10) we have
d(1/a_θ)/dθ |_{θ=1} = − ∫_e^∞ dx/(x log² x) = − ∫_e^∞ log^{−2} x d(log x) = 1/log x |_e^∞ = −1.        (2.11)
Thus by (2.6), and the fact that a_1 = 2 (Lemma 2.1), we have
da_θ/dθ |_{θ=1} = −a_1² · (−1) = 4.        (2.12)
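The one-sided derivative can also be checked numerically; the following sketch (an addition, not from the paper) evaluates a_θ by quadrature and forms a one-sided difference quotient. The step size h = 10^{-4} is an arbitrary choice.

import numpy as np
from scipy import integrate

def a(theta):
    # a_theta = 1 / int_e^infinity x^(-theta) log^(-3)(x) dx
    val, _ = integrate.quad(lambda x: x ** (-theta) / np.log(x) ** 3, np.e, np.inf)
    return 1.0 / val

h = 1e-4
print((a(1 + h) - a(1)) / h)   # one-sided difference quotient, ≈ 4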
3. Computing the Information
We now compute the expected value E[(∂ log f(x; θ)/∂θ)²]; showing it is infinite when θ = 1 completes the proof of our main result. Note
log f(x; θ) = log a_θ − θ log x + log(log^{−3} x),
so
∂ log f(x; θ)/∂θ = (da_θ/dθ)/a_θ − log x.        (3.1)
By Lemma 2.1 we know that da_θ/dθ is finite for each θ ≥ 1. Thus
E[ (∂ log f(x; θ)/∂θ)² ] = ∫_e^∞ ( (da_θ/dθ)/a_θ − log x )² · a_θ/(x^θ log³ x) dx.        (3.2)
If θ > 1 then the expectation is finite and non-zero. We are left with the interesting case when θ = 1. As da_θ/dθ|_{θ=1} is finite and non-zero, for x sufficiently large (say x ≥ x_1 for some x_1, though by Remark 2.3 we see that we may take any x_1 ≥ e^4) we have
| (da_θ/dθ)/a_θ |_{θ=1} ≤ (log x)/2.        (3.3)
As a_1 = 2, we have
E[ (∂ log f(x; θ)/∂θ)² ] |_{θ=1} ≥ ∫_{x_1}^∞ ((log x)/2)² · (2/(x log³ x)) dx = ∫_{x_1}^∞ dx/(2x log x) = (1/2) ∫_{x_1}^∞ log^{−1} x d(log x) = (1/2) log log x |_{x_1}^∞ = ∞.        (3.4)
Thus the expectation is infinite. Let Θ̂ be any unbiased estimator of θ. If θ = 1 then the
Cramér-Rao inequality gives
var(Θ̂) ≥ 0, (3.5)
which provides no information as variances are always non-negative. This completes the proof of our theorem. □
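To see the (very slow) divergence concretely, the following Python sketch (an editorial addition) evaluates the truncated information integral ∫_e^X ((da_θ/dθ)/a_θ − log x)² f(x; 1) dx for increasing cutoffs X; after the substitution u = log x it grows without bound, roughly like 2 log log X. The cutoffs are arbitrary.

import numpy as np
from scipy import integrate

a1, da = 2.0, 4.0    # a_1 and (da_theta/dtheta)|_{theta=1} from Section 2

def truncated_information(X):
    # substitute u = log x; the integrand becomes a1 * (da/a1 - u)^2 / u^3 on [1, log X]
    integrand = lambda u: a1 * (da / a1 - u) ** 2 / u ** 3
    val, _ = integrate.quad(integrand, 1.0, np.log(X))
    return val

for X in (1e3, 1e6, 1e12, 1e24, 1e48):
    print(X, truncated_information(X))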
We now discuss estimators for θ for our distribution f(x; θ). If X1, . . . , Xn are n independent
random variables with common distribution f(x; θ), then as n → ∞ the sample median converges
to the population median µ̃_θ (if n = 2m + 1 then the sample median converges to being normally distributed with median µ̃_θ and variance 1/(8m f(µ̃_θ; θ)²); see for example Theorem 8.17 of [MM]).
Figure 1. Plot of the median µ̃_θ of f(x; θ) as a function of θ (µ̃_1 = e^{√2}).
For θ close to 1 we see in Figure 1 that the median µ̃θ of f(x; θ) is strictly decreasing with
increasing θ, which implies that there is an inverse function g such that g(µ̃θ) = θ. We obtain an
estimator to θ by applying g to the sample median. This estimator is a consistent estimator (as
the sample size tends to infinity it will tend to θ) and should be asymptotically unbiased.
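The consistency of the sample median (the ingredient behind this estimator) is easy to see numerically at θ = 1, where the CDF F(x; 1) = 1 − 1/log² x can be inverted in closed form. The short sketch below (an addition, not from the paper) draws samples by inversion and compares the sample median with the population median e^{√2}; the sample sizes and seed are arbitrary.

import numpy as np

# Sample from f(x; 1) = 2/(x log^3 x), x >= e, by inverting F(x; 1) = 1 - 1/log^2 x.
rng = np.random.default_rng(1)
for n in (10**3, 10**5, 10**7):
    u = rng.uniform(size=n)
    sample = np.exp(1.0 / np.sqrt(1.0 - u))
    print(n, np.median(sample))

print("population median e^sqrt(2) =", np.exp(np.sqrt(2)))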
The proof of the Cramér-Rao inequality starts with
0 = E[Θ̂ − θ] = ∫ · · · ∫ [ Θ̂(x_1, . . . , x_n) − θ ] h(x_1; θ) · · · h(x_n; θ) dx_1 · · · dx_n,        (3.6)
where Θ̂(x1, . . . , xn) is an unbiased estimator of θ depending only on the sample values x1, . . . , xn.
In our case (when each h(x; θ) = f(x; θ)) we may not have an unbiased estimator. If we denote
this expectation by F(θ), for our investigations all that we require is that dF(θ)/dθ is finite (which
is easy to show). Going through the proof of the Cramér-Rao inequality shows that the effect
of this is to replace the factor of 1 in (1.2) with (1 + dF(θ)/dθ)²; thus the generalization of the Cramér-Rao inequality for our estimator is
var(Θ̂) ≥ (1 + dF(θ)/dθ)² / ( n · E[ (∂ log f(x; θ)/∂θ)² ] ).        (3.7)
As our information is infinite for θ = 1 we see that, no matter what ‘nice’ estimator we use, we will
not obtain any useful information from such arguments.
Appendix A. Applying the Dominated Convergence Theorem
We justify applying the Dominated Convergence Theorem in the proof of Lemma 2.1. See, for
example, [SS] for the conditions and a proof of the Dominated Convergence Theorem.
Lemma A.1. For each fixed h > 0 and any x ≥ e, we have
| (1 − x^h) / (h x^h) | ≤ e log x,        (A.1)
and (e log x)/(x log³ x) is positive and integrable, and dominates each (1 − x^h)/(h x^{1+h} log³ x).
Proof. We first prove (A.1). As x ≥ e and h > 0, note x^h ≥ 1. Consider first the case 1/h ≤ log x. Since |1 − x^h| < 1 + x^h ≤ 2x^h, we have
|1 − x^h|/(h x^h) ≤ 2/h ≤ 2 log x.        (A.2)
We are left with the case 1/h > log x, or h log x < 1. We have
|1 − x^h| = |1 − e^{h log x}| = | 1 − Σ_{n=0}^∞ (h log x)^n/n! | = h log x Σ_{n=1}^∞ (h log x)^{n−1}/n! < h log x Σ_{n=1}^∞ (h log x)^{n−1}/(n − 1)! = h log x · e^{h log x}.        (A.3)
This, combined with h log x < 1 and x^h ≥ 1, yields
|1 − x^h|/(h x^h) ≤ log x · e^{h log x} ≤ e log x.        (A.4)
It is clear that (e log x)/(x log³ x) is positive and integrable, and by L'Hospital's rule (see (2.9)) we have that
lim_{h→0⁺} (1 − x^h)/(h x^{1+h} log³ x) = −1/(x log² x).        (A.5)
Thus the Dominated Convergence Theorem implies that
lim_{h→0⁺} ∫_e^∞ (1 − x^h)/(h x^{1+h} log³ x) dx = − ∫_e^∞ dx/(x log² x) = −1        (A.6)
(the last equality is derived in Remark 2.3). □
References
[CaBe] G. Casella and R. Berger, Statistical Inference, 2nd edition, Duxbury Advanced Series, Pacific Grove,
CA, 2002.
[DM] D. Devoto and S. Martinez, Truncated Pareto Law and oresize distribution of ground rocks, Mathematical
Geology 30 (1998), no. 6, 661–673.
[Hi] T. Hill, A statistical derivation of the significant-digit law, Statistical Science 10 (1996), 354–363.
[MM] I. Miller and M. Miller, John E. Freund’s Mathematical Statistics with Applications, seventh edition,
Prentice Hall, 2004.
[Ne] M. E. J. Newman, Power laws, Pareto distributions and Zipfs law, Contemporary Physics 46 (2005),
no. 5, 323-351.
[NM] M. Nigrini and S. J. Miller, Benford’s Law applied to hydrology data – results and relevance to other
geophysical data, preprint.
[Rai] R. A. Raimi, The first digit problem, Amer. Math. Monthly 83 (1976), no. 7, 521–538.
[Rud] W. Rudin, Principles of Mathematical Analysis, third edition, International Series in Pure and Applied
Mathematics, McGraw-Hill Inc., New York, 1976.
[SS] E. Stein and R. Shakarchi, Real Analysis: Measure Theory, Integration, and Hilbert Spaces, Princeton
University Press, Princeton, NJ, 2005.
Department of Mathematics, Brown University, 151 Thayer Street, Providence, RI 02912
E-mail address: [email protected]
|
0704.0924 | Lower order terms in the 1-level density for families of holomorphic
cuspidal newforms | LOWER ORDER TERMS IN THE 1-LEVEL DENSITY FOR FAMILIES OF
HOLOMORPHIC CUSPIDAL NEWFORMS
STEVEN J. MILLER
ABSTRACT. The Katz-Sarnak density conjecture states that, in the limit as the analytic conductors
tend to infinity, the behavior of normalized zeros near the central point of families of L-functions
agree with the N → ∞ scaling limits of eigenvalues near 1 of subgroups of U(N). Evidence for
this has been found for many families by studying the n-level densities; for suitably restricted test
functions the main terms agree with random matrix theory. In particular, all one-parameter families of
elliptic curves with rank r over Q(T ) and the same distribution of signs of functional equations have
the same universal limiting behavior for their main term. We break this universality and find family
dependent lower order correction terms in many cases; these lower order terms have applications
ranging from excess rank to modeling the behavior of zeros near the central point, and depend on the
arithmetic of the family. We derive an alternate form of the explicit formula for GL(2) L-functions
which simplifies comparisons, replacing sums over powers of Satake parameters by sums of the
moments of the Fourier coefficients λf (p). Our formula highlights the differences that we expect to
exist from families whose Fourier coefficients obey different laws (for example, we expect Sato-Tate
to hold only for non-CM families of elliptic curves). Further, by the work of Rosen and Silverman we
expect lower order biases to the Fourier coefficients in one-parameter families of elliptic curves with
rank over Q(T ); these biases can be seen in our expansions. We analyze several families of elliptic
curves and see different lower order corrections, depending on whether or not the family has complex
multiplication, a forced torsion point, or non-zero rank over Q(T ).
1. INTRODUCTION
Assuming the Generalized Riemann Hypothesis (GRH), the non-trivial zeros of any L-function
have real part equal to 1/2. Initial investigations studied spacing statistics of zeros far from the
central point, where numerical and theoretical results [Hej, Mon, Od1, Od2, RS] showed excellent
agreement with eigenvalues from the Gaussian Unitary Ensemble (GUE). Further agreement was
found in studying moments of L-functions [CF, CFKRS, KeSn1, KeSn2, KeSn3] as well as low-
lying zeros (zeros near the critical point).
In this paper we concentrate on low-lying zeros of L(s, f), where f ∈ H⋆k(N), the set of all
holomorphic cuspidal newforms of weight k and level N . Before stating our results, we briefly
review some notation and standard facts. Each f ∈ H⋆k(N) has a Fourier expansion
f(z) = Σ_{n=1}^∞ a_f(n) e(nz).        (1.1)
Date: September 1, 2021.
2000 Mathematics Subject Classification. 11M26 (primary), 11G40, 11M41, 15A52 (secondary).
Key words and phrases. n-Level Density, Low Lying Zeros, Elliptic Curves.
The author would like to thank Walter Becker, Colin Deimer, Steven Finch, Dorian Goldfeld, Filip Paun and Matt
Young for useful discussions. Several of the formulas for expressions in this paper were first guessed by using Sloane’s
On-Line Encyclopedia of Integer Sequences [Sl]. The numerical computations were run on the Princeton Math Server,
and it is a pleasure to thank the staff there for their help. The author was partly supported by NSF grant DMS0600848.
Let λ_f(n) = a_f(n) n^{−(k−1)/2}. These coefficients satisfy multiplicative relations, and |λ_f(p)| ≤ 2.
The L-function associated to f is
L(s, f) = Σ_{n=1}^∞ λ_f(n)/n^s = Π_p ( 1 − λ_f(p) p^{−s} + χ_0(p) p^{−2s} )^{−1},        (1.2)
where χ0 is the principal character with modulus N . We write
λf(p) = αf(p) + βf (p). (1.3)
For p ∤ N, α_f(p)β_f(p) = 1 and |α_f(p)| = 1. If p|N we take α_f(p) = λ_f(p) and β_f(p) = 0. Letting
L∞(s, f) =
)1/2 (√
k − 1
k + 1
(1.4)
denote the local factor at infinity, the completed L-function is
Λ(s, f) = L_∞(s, f) L(s, f) = ε_f Λ(1 − s, f),    ε_f = ±1.        (1.5)
Therefore H⋆_k(N) splits into two disjoint subsets, H⁺_k(N) = {f ∈ H⋆_k(N) : ε_f = +1} and H⁻_k(N) = {f ∈ H⋆_k(N) : ε_f = −1}. Each L-function has a set of non-trivial zeros ρ_{f,ℓ} = 1/2 + iγ_{f,ℓ}. The Generalized Riemann Hypothesis asserts that all γ_{f,ℓ} ∈ ℝ.
In studying the behavior of low-lying zeros, the arithmetic and analytic conductors determine the
appropriate scale. For f ∈ H⋆k(N), the arithmetic conductorNf is the integerN from the functional
equation, and the analytic conductor Qf is
(k+1)(k+3)
. The number of zeros within C units of the
central point (where C is any large, absolute constant) is of the order logQf . For us k will always
be fixed, so Nf and Qf will differ by a constant. Thus logQf ∼ logNf , and in the limit as the level
N tends to infinity, we may use either the analytic or arithmetic conductor to normalize the zeros
near the central point. See [Ha, ILS] for more details.
We rescale the zeros and study γ_{f,ℓ} (log Q_f)/(2π). We let F = ∪F_N be a family of L-functions ordered by
conductor (our first example will be FN = H∗k(N); later we shall consider one-parameter families
of elliptic curves). The n-level density for the family is
D_{n,F}(φ) := lim_{N→∞} (1/|F_N|) Σ_{f∈F_N} Σ_{ℓ_1,...,ℓ_n; ℓ_i ≠ ±ℓ_k} φ_1( γ_{f,ℓ_1} (log Q_f)/(2π) ) · · · φ_n( γ_{f,ℓ_n} (log Q_f)/(2π) ),        (1.6)
where the φi are even Schwartz test functions whose Fourier transforms have compact support and
1/2 + iγ_{f,ℓ} runs through the non-trivial zeros of L(s, f). As the φ_i's are even Schwartz functions, most
of the contribution to Dn,F(φ) arises from the zeros near the central point; thus this statistic is well-
suited to investigating the low-lying zeros. For some families, it is more convenient to incorporate
weights (for example, the harmonic weights facilitate applying the Petersson formula to families of
cuspidal newforms).
Katz and Sarnak [KaSa1, KaSa2] conjectured that, in the limit as the analytic conductors tend to
infinity, the behavior of the normalized zeros near the central point of a family F of L-functions
agrees with the N → ∞ scaling limit of the normalized eigenvalues near 1 of a subgroup of U(N):
D_{n,F}(φ) = ∫ · · · ∫ φ_1(x_1) · · · φ_n(x_n) W_{n,G(F)}(x_1, . . . , x_n) dx_1 · · · dx_n,        (1.7)
where G(F) is the scaling limit of one of the following classical compact groups: N × N unitary,
symplectic or orthogonal matrices.1 Evidence towards this conjecture is provided by analyzing the
n-level densities of many families, such as all Dirichlet characters, quadratic Dirichlet characters,
L(s, ψ) with ψ a character of the ideal class group of the imaginary quadratic field Q(
families of elliptic curves, weight k level N cuspidal newforms, symmetric powers of GL(2) L-
functions, and certain families of GL(4) and GL(6) L-functions; see [DM1, FI, Gü, HR, HM, ILS,
KaSa2, Mil2, OS, RR1, Ro, Rub, Yo2].
Different classical compact groups exhibit a different local behavior of eigenvalues near 1, thus
breaking the global GUE symmetry. This correspondence allows us, at least conjecturally, to assign
a definite “symmetry type” to each family of primitive L-functions.2
Now that the main terms have been shown to agree with random matrix theory predictions (at
least for suitably restricted test functions), it is natural to study the lower order terms.3 In this paper
we see how various arithmetical properties of families of elliptic curves (complex multiplication,
torsion groups, and rank) affect the lower order terms. 4 For families of elliptic curves these lower
order terms have appeared in excess rank investigations [Mil3], and in a later paper [DHKMS] they
will play a role in explaining the repulsion observed in [Mil4] of the first normalized zero above the
central point in one-parameter families of elliptic curves.
We derive an alternate version of the explicit formula for a family F of GL(2) L-functions
of weight k which is more tractable for such investigations, which immediately yields a useful
expansion for the 1-level density for a family F of GL(2) cuspidal newforms. We should really
1For test functions φ̂ supported in (−1, 1), the one-level densities are
∫ φ(u) W_{1,SO(even)}(u) du = φ̂(0) + φ(0)/2
∫ φ(u) W_{1,SO(odd)}(u) du = φ̂(0) + φ(0)/2
∫ φ(u) W_{1,O}(u) du = φ̂(0) + φ(0)/2
∫ φ(u) W_{1,USp}(u) du = φ̂(0) − φ(0)/2
∫ φ(u) W_{1,U}(u) du = φ̂(0).        (1.8)
2For families of zeta orL-functions of curves or varieties over finite fields, the corresponding classical compact group
can be determined by the monodromy (or symmetry group) of the family and its scaling limit. No such identification
is known for number fields, though function field analogues often suggest what the symmetry type should be. See also
[DM2] for results about the symmetry group of the convolution of families, as well as determining the symmetry group
of a family by analyzing the second moment of the Satake parameters.
3Recently Conrey, Farmer and Zirnbauer [CFZ1, CFZ2] conjectured formulas for the averages over a family of
ratios of products of shifted L-functions. Their L-functions Ratios Conjecture predicts both the main and lower order
terms for many problems, ranging from n-level correlations and densities to mollifiers and moments to vanishing at
the central point (see [CS]). In [Mil6, Mil7] we verified the Ratios Conjecture’s predictions (up to error terms of size
O(X−1/2+ǫ)!) for the 1-level density of the family of quadratic Dirichlet characters and certain families of cuspidal
newforms for test functions of suitably small support. Khiem is currently calculating the predictions of the Ratios
Conjecture for certain families of elliptic curves.
4While the main terms for one-parameter families of elliptic curves of rank r over Q(T ) and given distribution of
signs of functional equations all agree with the scaling limit of the same orthogonal group, in [Mil1] potential lower
order corrections were observed (see [FI, RR2, Yo1] for additional examples, and [Mil3] for applications of lower order
terms to bounding the average order of vanishing at the central point in a family). The problem is that these terms are
of size 1/ logR, while trivially estimating terms in the explicit formula lead to errors of size log logR/ logR; here
logR is the average log-conductor of the family. These lower order terms are useful in refining the models of zeros
near the central point for small conductors. This is similar to modeling high zeros of ζ(s) at height T with matrices of
size N = log(T/2π) (and not the N → ∞ scaling limits) [KeSn1, KeSn2]; in fact, even better agreement is obtained
by a further adjustment of N arising from an analysis of the lower order terms (see [BBLM, DHKMS]).
write FN and RN below to emphasize that our calculations are being done for a fixed N , and then
take the limit as N → ∞. As there is no danger of confusion, we suppress the N in the FN and
LetNf be the level of f ∈ F and let φ be an even Schwartz function such that φ̂ has compact sup-
port, say supp(φ̂) ⊂ (−σ, σ). We weight each f ∈ F by non-negative weights wR(f), where logR
is the weighted average of the logarithms of the levels, and we rescale the zeros near the central
point by (logR)/2π (in all our families of interest, logR ∼ logN). Set WR(f) =
f∈F wR(f).
The 1-level density for the family F with weights wR(f) and test function φ is
D1,F(φ) =
WR(F)
wR(f)
f∈F wR(f)(A(k) + logNf)
WR(F) logR
φ̂(0)
WR(F)
wR(f)
αf(p)
m + βf(p)
log p
log p
log2R
f∈F wR(f)(A(k) + logNf)
WR(F) logR
φ̂(0) + S(F) +Ok
log2R
, (1.9)
with ψ(z) = Γ′(z)/Γ(z), A(k) = ψ(k/4) + ψ((k + 2)/4)− 2 log π, and
S(F) = − (2/W_R(F)) Σ_{f∈F} w_R(f) Σ_p Σ_{m=1}^∞ [ (α_f(p)^m + β_f(p)^m)/p^{m/2} ] · (log p/log R) · φ̂( m log p/log R ).        (1.10)
The above is a straightforward consequence of the explicit formula, and depends crucially on having
an Euler product for our L-functions; see [ILS] for a proof. As φ is a Schwartz function, most of
the contribution is due to the zeros near the central point. The error of size 1/ log2R arises from
simplifying some of the expressions involving the analytic conductors, and could be improved to
be of size 1/ log3R at the cost of additional analysis (see [Yo1] for details); as we are concerned
with lower order corrections due to arithmetic differences between the families, the above suffices
for our purposes.
The difficult (and interesting) piece in the 1-level density is S(F). Our main result is an alternate
version of the explicit formula for this piece. We first set the notation. For each f ∈ F , let
S(p) = {f ∈ F : p ∤ N_f}.        (1.11)
Thus for f ∉ S(p), α_f(p)^m + β_f(p)^m = λ_f(p)^m. Let
A_{r,F}(p) = (1/W_R(F)) Σ_{f∈S(p)} w_R(f) λ_f(p)^r,    A′_{r,F}(p) = (1/W_R(F)) Σ_{f∉S(p)} w_R(f) λ_f(p)^r;        (1.12)
we use the convention that 0⁰ = 1; thus A_{0,F}(p) equals the cardinality of S(p).
Theorem 1.1 (Expansion for S(F) in terms of moments of λf(p)). Let logR be the average log-
conductor of a finite family of L-functions F , and let S(F) be as in (1.10). We have
S(F) = − 2
A′m,F(p)
log p
log p
−2φ̂(0)
2A0,F(p) log p
p(p+ 1) logR
2A0,F(p) log p
p logR
log p
A1,F(p)
log p
log p
+ 2φ̂(0)
A1,F(p)(3p+ 1)
p1/2(p+ 1)2
log p
A2,F(p) log p
p logR
log p
+ 2φ̂(0)
A2,F(p)(4p
2 + 3p+ 1) log p
p(p+ 1)3 logR
−2φ̂(0)
Ar,F(p)p
r/2(p− 1) log p
(p+ 1)r+1 logR
log3R
= SA′(F) + S0(F) + S1(F) + S2(F) + SA(F) +O
log3R
. (1.13)
If we let
Ã_F(p) = (1/W_R(F)) Σ_{f∈S(p)} w_R(f) λ_f(p)³ / ( p + 1 − λ_f(p)√p ),        (1.14)
then by the geometric series formula we may replace S_A(F) with S_Ã(F), where
S_Ã(F) = −2φ̂(0) Σ_p Ã_F(p) p^{3/2}(p − 1) log p / ( (p + 1)³ log R ).        (1.15)
Remark 1.2. For a general one-parameter family of elliptic curves, we are unable to obtain exact,
closed formulas for the rth moment terms Ar,F(p); for sufficiently nice families we can find exact
formulas for r ≤ 2 (see [ALM, Mil3] for some examples, with applications towards constructing
families with moderate rank over Q(T ) and the excess rank question). Thus we are forced to
numerically approximate the Ar,F(p) terms when r ≥ 3.5
We prove Theorem 1.1 by using the geometric series formula for Σ_{m≥3} (α_f(p)/√p)^m (and similarly for the sum involving β_f(p)^m) and properties of the Satake parameters. We find terms like
( λ_f(p)³ − 3λ_f(p) ) / ( p + 1 − λ_f(p)√p )   and   ( λ_f(p)² − 2 ) / ( p + 1 − λ_f(p)√p ).        (1.16)
While the above formula leads to tractable expressions for computations, the disadvantage is that
the zeroth, first and second moments of λf(p) are now weighted by 1/(p + 1 − λf (p)
p). For
many families (especially those of elliptic curves) we can calculate the zeroth, first and second
moments exactly up to errors of size 1/N ǫ; this is not the case if we introduce these weights in the
5This greatly hinders comparison with the L-Functions Ratios Conjecture, which gives useful interpretations for
the lower order terms. In [CS] the lower order terms are computed for a symplectic family of quadratic Dirichlet L-
functions. The (conjectured) expansions there show a remarkable relation between the lower order terms and the zeros
of the Riemann zeta function; for test functions with suitably restricted support, the number theory calculations are
tractable and in [Mil6] are shown to agree with the Ratios Conjecture.
denominator. We therefore apply the geometric series formula again to expand 1/(p+1−λf (p)
and collect terms.
An alternate proof involves replacing each αf (p)
m + βf (p)
m for p ∈ S(p) with a polynomial∑m
r=0 cm,rλf(p)
m, and then interchanging the order of summation (which requires some work, as
the resulting sum is only conditionally convergent). The sum over r collapses to a linear combina-
tion of polylogarithm functions, and the proof is completed by deriving an identity expressing these
sums as a simple rational function.6
Remark 1.3. An advantage of the explicit formula in Theorem 1.1 is that the answer is expressed
as a weighted sum of moments of the Fourier coefficients. Often much is known (either theoret-
ically or conjecturally) for the distribution of the Fourier coefficients, and this formula facilitates
comparisons with conjectures. In fact, often the r-sum can be collapsed by using the generating
function for the moments of λf(p). Moreover, there are many situations where the Fourier coef-
ficients are easier to compute than the Satake parameters; for elliptic curves we find the Fourier
coefficients by evaluating sums of Legendre symbols, and then pass to the Satake parameters by
solving aE(p) = 2
p cos θE(p). Thus it is convenient to have the formulas in terms of the Fourier
coefficients. As ÃF(p) = O(1/p)), these sums converge at a reasonable rate, and we can evaluate
the lower order terms of size 1/ logR to any specified accuracy by simply calculating moments and
modified moments of the Fourier coefficients at the primes.
We now summarize the lower order terms for several different families of GL(2) L-functions;
many other families can be computed through these techniques. The first example is analyzed in
§3, the others in §5. Below we merely state the final answer of the size of the 1/ logR term to a few
digits accuracy; see the relevant sections for expressions of these constants in terms of prime sums
with weights depending on the family. For sufficiently small support, the main term in the 1-level
density of each family has previously been shown to agree with the three orthogonal groups (we can
determine which by calculating the 2-level density and splitting by sign); however, the lower order
terms are different for each family, showing how the arithmetic of the family enters as corrections
to the main term. For most of our applications we have weight 2 cuspidal newforms, and thus the
conductor-dependent terms in the lower order terms are the same for all families. Therefore below
we shall only describe the family-dependent corrections.
• All holomorphic cusp forms (Theorem 3.4): Let Fk,N be either the family of even weight
k and prime level N cuspidal newforms, or just the forms with even (or odd) functional
equation. Up to O(log−3R), for test functions φ with supp(φ̂) ⊂ (−4/3, 4/3), as N → ∞
6The polylogarithm function is Lis(x) =
k=1 k
−sxk . If s is a negative integer, say s = −r, then the polylogarithm
function converges for |x| < 1 and equals
〉xr−j
(1 − x)r+1, where the 〈 r
〉 are the Eulerian numbers (the
number of permutations of {1, . . . , r} with j permutation ascents). In [Mil5] we show that if aℓ,i is the coefficient of
ki in
j=0(k
2 − j2), and bℓ,i is the coefficient of ki in (2k + 1)
j=0(k − j)(k + 1 + j), then for |x| < 1 and ℓ ≥ 1
we have
aℓ,2ℓLi−2ℓ(x) + · · ·+ aℓ,0Li0(x) =
(2ℓ)!
xℓ(1 + x)
(1− x)2ℓ+1
bℓ,2ℓ+1Li−2ℓ−1(x) + · · ·+ bℓ,0Li0(x) = (2ℓ+ 1)!
xℓ(1 + x)
(1− x)2ℓ+2
. (1.17)
Another application of this identity is to deduce relations among the Eulerian numbers.
the (non-conductor) lower order term is
− 1.33258 · 2φ̂(0)/ logR. (1.18)
Note the lower order corrections are independent of the distribution of the signs of the func-
tional equations.
• CM example, with or without forced torsion (Theorem 5.6): Consider the one-parameter
families y2 = x3 + B(6T + 1)κ over Q(T ), with B ∈ {1, 2, 3, 6} and κ ∈ {1, 2}; these
families have complex multiplication, and thus the distribution of their Fourier coefficients
does not follow Sato-Tate. We sieve so that (6T + 1) is (6/κ)-power free. If κ = 1 then all
values ofB have the same behavior, which is very close to what we would get if the average
of the Fourier coefficients immediately converged to the correct limiting behavior.7 If κ = 2
the four values ofB have different lower order corrections; in particular, ifB = 1 then there
is a forced torsion point of order three, (0, 6T + 1). Up to errors of size O(log−3R), the
(non-conductor) lower order terms are approximately
B = 1, κ = 1 : −2.124 · 2φ̂(0)/ logR,
B = 1, κ = 2 : −2.201 · 2φ̂(0)/ logR,
B = 2, κ = 2 : −2.347 · 2φ̂(0)/ logR
B = 3, κ = 2 : −1.921 · 2φ̂(0)/ logR
B = 6, κ = 2 : −2.042 · 2φ̂(0)/ logR. (1.19)
• CM example, with or without rank (see §5.2): Consider the one-parameter families
y2 = x3 − B(36T + 6)(36T + 5)x over Q(T ), with B ∈ {1, 2}. If B = 1 the family
has rank 1, while if B = 2 the family has rank 0; in both cases the family has complex
multiplication. We sieve so that (36T + 6)(36T + 5) is cube-free. The most important
difference between these two families is the contribution from the S eA(F) terms, where the
B = 1 family is approximately −.11 · 2φ̂(0)/ logR, while the B = 2 family is approxi-
mately .63 · 2φ̂(0)/ logR. This large difference is due to biases of size −r in the Fourier
coefficients at(p) in a one-parameter family of rank r over Q(T ). Thus, while the main
term of the average moments of the pth Fourier coefficients are given by the complex mul-
tiplication analogue of Sato-Tate in the limit, for each p there are lower order correction
terms which depend on the rank. This is in line with other results. Rosen and Silverman
[RoSi] prove that Σ_{t mod p} a_t(p) is related to the negative of the rank of the family over Q(T);
see Theorem 5.8 for an exact statement.
• Non-CM Example (see Theorem 5.14): Consider the one-parameter family y2 = x3 −
3x + 12T over Q(T ). Up to O(log−3R), the (non-conductor) lower order correction is
approximately
− 2.703 · 2φ̂(0)/ logR, (1.20)
7In practice, it is only as p → ∞ that the average moments converge to the complex multiplication distribution;
for finite p the lower order terms to these moments mean that the answer for families of elliptic curves with complex
multiplication is not the same as what we would obtain by replacing these averages with the moments of the complex
multiplication distribution.
which is very different than the family of weight 2 cuspidal newforms of prime level N .
Remark 1.4. While the main terms of the 1-level density in these families depend only weakly on
the family,8 we see that the lower order correction terms depend on finer arithmetical properties
of the family. In particular, we see differences depending on whether or not there is complex
multiplication, a forced torsion point, or rank. Further, the lower order correction terms are more
negative for families of elliptic curves with forced additive reduction at 2 and 3 than for all cuspidal
newforms of prime level N → ∞. This is similar to Young’s results [Yo1], where he considered
two-parameter families and noticed that the number of primes dividing the conductor is negatively
correlated to the number of low-lying zeros. A better comparison would perhaps be to square-free
N with the number of factors tending to infinity, arguing as in [ILS] to handle the necessary sieving.
Remark 1.5. The proof of the Central Limit Theorem provides a useful analogy for our results. If
X1, . . . , XN are ‘nice’ independent, identically distributed random variables with mean µ and vari-
ance σ2, then as N → ∞ we have (X1+ · · ·+XN −Nµ)/σ
N converges to the standard normal.
The universality is that, properly normalized, the main term is independent of the initial distribu-
tion; however, the rate of convergence to the standard normal depends on the higher moments of the
distribution. We observe a similar phenomenon with the 1-level density. We see universal answers
(agreeing with random matrix theory) as the conductors tend to infinity in the main terms; how-
ever, the rate of convergence (the lower order terms) depends on the higher moments of the Fourier
coefficients.
The paper is organized as follows. In §2 we review the standard explicit formula and then prove
our alternate version (replacing averages of Satake parameters with averages of the Fourier coef-
ficients). We analyze all cuspidal newforms in §3. After some preliminary expansions for elliptic
curve families in §4, we analyze several one-parameter families in §5.
2. EXPLICIT FORMULAS
2.1. Standard Explicit Formula. Let φ be an even Schwartz test function whose Fourier trans-
form has compact support, say supp(φ̂) ⊂ (−σ, σ). Let f be a weight k cuspidal newform of level
N ; see (1.1) through (1.5) for a review of notation. The explicit formula relates sums of φ over
the zeros of Λ(s, f) to sums of φ̂ and the Fourier coefficients over prime powers. We have (see for
example Equations (4.11)–(4.13) of [ILS]) that
Ak,N(φ)
αf(p)
m + βf(p)
log p
log p
(2.1)
8All that matters are the first two moments of the Fourier coefficients. All families have the same main term in
the second moments; the main term in the first moment is just the rank of the family. See [Mil2] for details for one-
parameter families of elliptic curves
where
Ak,N(φ) = 2φ̂(0) log
Ak,N ;j(φ),
Ak,N ;j(φ) =
φ(x)dx,
(2.2)
with ψ(z) = Γ′(z)/Γ(z), α1 =
and α2 =
In this paper we concentrate on the first order correction terms to the 1-level density. Thus
we are isolating terms of size 1/ logR, and ignoring terms that are O(1/ log2R). While a more
careful analysis (as in [Yo1]) would allow us to analyze these conductor terms up to an error of size
O(log−3R), these additional terms are independent of the family and thus not as interesting for our
purposes. We use (8.363.3) of [GR] (which says ψ(a + bß) + ψ(a − bß) = 2ψ(a) + O(b2/a2) for
a, b real and a > 0), and find
Ak,N ;j(φ) = φ̂(0)ψ
(αj + 1)2 log
. (2.3)
This implies that
Ak,N(φ) = φ̂(0) logN + φ̂(0)
k + 2
− 2 log π
(αj + 1)2 log
. (2.4)
As we shall consider the case of k fixed and N → ∞, the above expansion suffices for our purposes
and we write
Ak,N(φ) = φ̂(0) logN + φ̂(0)A(k) +Ok
log2R
. (2.5)
We now average (2.1) over all f in our family F . We allow ourselves the flexibility to introduce
slowly varying non-negative weights wR(f), as well as allowing the levels of the f ∈ F to vary.
This yields the expansion for the 1-level density for the family, which is given by (1.9).
We have freedom to choose the weights wR(f) and the scaling parameter R. For families of
elliptic curves we often take the weights to be 1 for t ∈ [N, 2N ] such that the irreducible polynomial
factors of the discriminant are square or cube-free, and zero otherwise (equivalently, so that the
specialization Et yields a global minimal Weierstrass equation); logR is often the average log-
conductor (or a close approximation to it). For families of cuspidal newforms of weight k and
square-free levelN tending to infinity, we might takewR(f) to be the harmonic weights (to simplify
applying the Petersson formula) and R around k2N (i.e., approximately the analytic conductor).
The interesting piece in (1.9) is
S(F) = − (2/W_R(F)) Σ_{f∈F} w_R(f) Σ_p Σ_{m=1}^∞ [ (α_f(p)^m + β_f(p)^m)/p^{m/2} ] · (log p/log R) · φ̂( m log p/log R ).        (2.6)
We rewrite the expansion above in terms of the moments of the Fourier coefficients λf (p). If p|Nf
then αf(p)
m + βf(p)
m = λf(p)
m. Thus
S(F) = − 2
WR(F)
wR(f)
λf(p)
log p
log p
WR(F)
p |rNf
wR(f)
αf(p)
m + βf(p)
log p
log p
(2.7)
In the explicit formula we have terms such as φ̂(m log p/ logR). As φ̂ is an even function, Taylor
expanding gives
φ̂( m log p/log R ) = φ̂(0) + O( (m log p/log R)² ).        (2.8)
As we are isolating lower order correction terms of size 1/ logR in S(F), we ignore any term
which is o(1/ logR). We therefore may replace φ̂(m log p/ logR) with φ̂(log p/ logR) at a cost of
O(1/ log3R) for all m ≥ 3,9 which yields
S(F) = − 2
WR(F)
wR(f)
λf(p)
log p
log p
WR(F)
p |rNf
wR(f)
λf(p)
log p
log p
WR(F)
p |rNf
wR(f)
λf(p)
2 − 2
log p
log p
WR(F)
p |rNf
wR(f)
αf(p)
m + βf (p)
log p
log p
log3R
(2.9)
We have isolated them = 1 and 2 terms from p|rNf as these can contribute main terms (and not just
lower order terms). We used for p|rNf that αf(p)+βf (p) = λf(p) and αf (p)2+βf (p)2 = λf(p)2−2.
2.2. The Alternate Explicit Formula.
9As φ̂ has compact support, the only m that contribute are m ≪ logR, and thus we do not need to worry about the
m-dependence in this approximation because these terms are hit by a p−m/2.
Proof of Theorem 1.1. We use the geometric series formula for the m ≥ 3 terms in (2.9). We have
M₃(p) := Σ_{m=3}^∞ (α_f(p)/√p)^m + Σ_{m=3}^∞ (β_f(p)/√p)^m
       = α_f(p)³/( p(√p − α_f(p)) ) + β_f(p)³/( p(√p − β_f(p)) )
       = [ (α_f(p)³ + β_f(p)³)√p − (α_f(p)² + β_f(p)²) ] / ( p(p + 1 − λ_f(p)√p) )
       = [ λ_f(p)³√p − λ_f(p)² − 3λ_f(p)√p + 2 ] / ( p(p + 1 − λ_f(p)√p) ),        (2.10)
where we use α_f(p)³ + β_f(p)³ = λ_f(p)³ − 3λ_f(p) and α_f(p)² + β_f(p)² = λ_f(p)² − 2. Writing (p + 1 − λ_f(p)√p)^{−1} as (p + 1)^{−1} ( 1 − λ_f(p)√p/(p + 1) )^{−1}, using the geometric series formula and collecting terms, we find
M₃(p) = 2/( p(p + 1) ) − √p(3p + 1)λ_f(p)/( p(p + 1)² ) − (4p² + 3p + 1)λ_f(p)²/( p(p + 1)³ ) + Σ_{r=3}^∞ p^{r/2}(p − 1)λ_f(p)^r/(p + 1)^{r+1}.        (2.11)
We use (2.8) to replace φ̂(log p/ logR) in (2.9) with φ̂(0) + O(1/ log2R) and the above expansion
for M3(p); the proof is then completed by simple algebra and recalling the definitions of Ar,F(p)
and A′_{r,F}(p), (1.12). □
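The expansion (2.11) is easy to verify symbolically; the following SymPy sketch (an editorial addition) compares Taylor coefficients of M₃(p) in λ with the closed forms above at the arbitrary numerical value p = 5.

import sympy as sp

p, lam = sp.symbols('p lam', positive=True)
M3 = (lam**3*sp.sqrt(p) - lam**2 - 3*lam*sp.sqrt(p) + 2) / (p*(p + 1 - lam*sp.sqrt(p)))
expansion = (2/(p*(p + 1)) - sp.sqrt(p)*(3*p + 1)*lam/(p*(p + 1)**2)
             - (4*p**2 + 3*p + 1)*lam**2/(p*(p + 1)**3)
             + sum(p**sp.Rational(r, 2)*(p - 1)*lam**r/(p + 1)**(r + 1) for r in range(3, 8)))

# Compare the two expressions term by term in lam (up to lam^7) at p = 5.
diff = sp.series((M3 - expansion).subs(p, 5), lam, 0, 8).removeO()
print(sp.simplify(diff))   # prints 0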
2.3. Formulas for the r ≥ 3 Terms. For many families we either know or conjecture a distribution
for the (weighted) Fourier coefficients. If this were the case, then we could replace the Ar,F(p) with
the rth moment. In many applications (for example, using the Petersson formula for families of
cuspidal newforms of fixed weight and square-free level tending to infinity) we know the moments
up to a negligible correction.
In all the cases we study, the known or conjectured distribution is even, and the moments have a
tractable generating function. Thus we may show
Lemma 2.1. Assume for r ≥ 3 that
Ar,F(p) =
Mℓ +O
log2 R
if r = 2ℓ
log2 R
otherwise,
(2.12)
and that there is a nice function gM such that
gM(x) = M2x
2 +M3x
3 + · · · =
ℓ. (2.13)
Then the contribution from the r ≥ 3 terms in Theorem 1.1 is
− 2φ̂(0)
(p+ 1)2
· (p− 1) log p
log3R
. (2.14)
Proof. The big-Oh term in Ar,F(p) yields an error of size 1/ log
3R. The contribution from the
r ≥ 3 terms in Theorem 1.1 may therefore be written as
− 2φ̂(0)
(p− 1) log p
(p+ 1)2
log3R
. (2.15)
The result now follows by using the generating function gM to evaluate the ℓ-sum. �
Remark 2.2. In the above lemma, note that gM(x) has even and odd powers of x, even though the
known or conjectured distribution is even. This is because the expansion in Theorem 1.1 involves
pr/2, and the only contribution is when r = 2ℓ.
Lemma 2.3. If the distribution of the weighted Fourier coefficients satisfies Sato-Tate (normalized
to be a semi-circle) with errors in the moments of size O(1/ log2R), then the contribution from the
r ≥ 3 terms in Theorem 1.1 is
−2γ_ST;Ã φ̂(0)/log R + O( 1/log³ R ),        (2.16)
where
γ_ST;Ã = Σ_p (2p + 1)(p − 1) log p / ( p(p + 1)³ ) ≈ .4160714430.        (2.17)
If the Fourier coefficients vanish except for primes congruent to a mod b (where φ(b) = 2) and the
distribution of the weighted Fourier coefficients for p ≡ a mod b satisfies the analogue of Sato-
Tate for elliptic curves with complex multiplication, then the contribution from the r ≥ 3 terms in
Theorem 1.1 is
−2γ_CM,a,b φ̂(0)/log R + O( 1/log³ R ),        (2.18)
where
γ_CM,a,b = Σ_{p≡a mod b} 2(3p + 1) log p / (p + 1)³.        (2.19)
In particular,
γCM1,3 ≈ .38184489, γCM1,4 ≈ 0.46633061. (2.20)
Proof. If the distribution of the weighted Fourier coefficients satisfies Sato-Tate (normalized to be
a semi-circle here), then M_ℓ = C_ℓ = (2ℓ)!/(ℓ!(ℓ+1)!), the ℓth Catalan number. We have (see sequence A000108 in [Sl])
g_ST(x) = (1 − √(1 − 4x))/(2x) − 1 − x = 2x² + 5x³ + 14x⁴ + · · · ,   so   g_ST( p/(p + 1)² ) = (2p + 1)/( p(p + 1)² ).        (2.21)
The value for γ_ST;Ã was obtained by summing the contributions from the first million primes.
For curves with complex multiplication, M_ℓ = D_ℓ = 2 · ½ · (2ℓ choose ℓ); while the actual sequence is just (2ℓ choose ℓ) = (ℓ+1)C_ℓ, we prefer to write it this way as the first factor of 2 emphasizes that the contribution is zero for half the primes, and it is ½(2ℓ choose ℓ) that is the natural sequence to study. The generating function is
g_CM(x) = (1 − √(1 − 4x))/√(1 − 4x) − 2x = 6x² + 20x³ + 70x⁴ + · · · ,   so   g_CM( p/(p + 1)² ) = 2(3p + 1)/( (p − 1)(p + 1)² );        (2.22)
these numbers are the convolution of the Catalan numbers and the central binomial (see sequence
A000984 in [Sl]). The numerical values were obtained by calculating the contribution from the first
million primes. □
Remark 2.4. It is interesting how close the three sums are. Part of this is due to the fact that these
sums converge rapidly. As the small primes contribute more to these sums, it is not surprising that
γCM1,4 > γCM1,3 (the first primes for γCM1,4 are 5 and 11, versus 7 and 13 for γCM1,3).
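The three prime sums above are easy to approximate directly; the following Python sketch (an editorial addition, using SymPy's prime generator and an arbitrary truncation at 2·10^6) recovers the quoted values to several decimal places.

from sympy import primerange
import math

g_ST = g_CM13 = g_CM14 = 0.0
for p in primerange(2, 2 * 10**6):
    lp = math.log(p)
    g_ST += (2*p + 1) * (p - 1) * lp / (p * (p + 1) ** 3)
    if p % 3 == 1:
        g_CM13 += 2 * (3*p + 1) * lp / (p + 1) ** 3
    if p % 4 == 1:
        g_CM14 += 2 * (3*p + 1) * lp / (p + 1) ** 3
print(g_ST, g_CM13, g_CM14)   # ≈ 0.4161, 0.3818, 0.4663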
Remark 2.5. When we investigate one-parameter families of elliptic curves over Q(T ), it is im-
plausible to assume that for each p the rth moment agrees with the rth moment of the limiting
distribution up to negligible terms. This is because there are at most p data points involved in the
weighted averages Ar,F(p); however, it is enlightening to compare the contribution from the r ≥ 3
terms in these families to the theoretical predictions when we have instantaneous convergence to
the limiting distribution.
We conclude by sketching the argument for identifying the presence of the Sato-Tate distribution
for weight k cuspidal newforms of square-free level N → ∞. In the expansion of λf(p)r, to first
order all that often matters is the constant term; by the Petersson formula this is the case for cuspidal
newforms of weight k and square-free level N → ∞, though this is not the case for families of
elliptic curves with complex multiplication. If r is odd then the constant term is zero, and thus to
first order (in the Petersson formula) these terms do not contribute. For r = 2ℓ even, the constant term is (2ℓ)!/(ℓ!(ℓ+1)!) = C_ℓ, the ℓth Catalan number. We shall write
λ_f(p)^r = Σ_{k=0}^{⌊r/2⌋} b_{r,r−2k} λ_f(p^{r−2k}),        (2.23)
and note that if r = 2ℓ then the constant term is b2ℓ,0 = Cℓ. We have
A_{r,F}(p) = (1/W_R(F)) Σ_{f∈S(p)} w_R(f) λ_f(p)^r = (1/W_R(F)) Σ_{f∈S(p)} w_R(f) Σ_k b_{r,r−2k} λ_f(p^{r−2k}) = Σ_k b_{r,r−2k} A_{r,F;k}(p),        (2.24)
where
A_{r,F;k}(p) = (1/W_R(F)) Σ_{f∈S(p)} w_R(f) λ_f(p^{r−2k}).        (2.25)
We expect the main term to be A2ℓ,F ;0, which yields the contribution described in (2.16).
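The coefficients b_{r,r−2k} in (2.23) can be generated mechanically, since λ_f(p^m) = U_m(λ_f(p)/2) with U_m the Chebyshev polynomial of the second kind. The SymPy sketch below (an editorial addition; the helper name and the range of r are my own choices) expands x^r in the basis U_m(x/2) and recovers, in particular, the Catalan constant terms for even r.

import sympy as sp

x = sp.symbols('x')

def b_coefficients(r):
    """Return {m: b_{r,m}} with x**r = sum_m b_{r,m} * U_m(x/2)."""
    remainder = sp.expand(x**r)
    coeffs = {}
    for m in range(r, -1, -1):
        U = sp.expand(sp.chebyshevu(m, x / 2))   # leading coefficient of U_m(x/2) is 1
        c = remainder.coeff(x, m)
        if c != 0:
            coeffs[m] = c
            remainder = sp.expand(remainder - c * U)
    return coeffs

for r in range(2, 7):
    print(r, b_coefficients(r))
# For r = 2l the coefficient of U_0 is the Catalan number C_l (1, 2, 5, ... for l = 1, 2, 3).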
3. FAMILIES OF CUSPIDAL NEWFORMS
Let F be a family of cuspidal newforms of weight k and prime level N ; perhaps we split by
sign (the answer is the same, regardless of whether or not we split). We consider the lower order
correction terms in the limit as N → ∞.
3.1. Weights. Let
ζN(s) =
Z(s, f) =
ζN(s)L(s, f ⊗ f)
; (3.1)
L(s, sym2f) =
ζ(2s)Z(s, f)
ζN(2s)
, Z(1, f) =
ζN(2)
L(1, sym2f). (3.2)
To simplify the presentation, we use the harmonic weights10
w_R(f) = ζ_N(2)/Z(1, f) = ζ(2)/L(1, sym² f),        (3.4)
and note that
WR(F) =
wR(f) =
(k − 1)N
+O(N−1); (3.5)
we may take R to be the analytic conductor, so R = 15N/(64π²). We have introduced the harmonic
weights to facilitate applying the Petersson formula to calculate the average moments Ar,F(p) from
studying Ar,F ;k(p). The Petersson formula (see Corollary 2.10, Equation (2.58) of [ILS]) yields,
for m,n > 1 relatively prime to the level N ,
(1/W_R(F)) Σ_{f∈F} w_R(f) λ_f(m) λ_f(n) = δ_{mn} + O( (mn)^{1/4} log(2mnN) / (k^{5/6} N) ),        (3.6)
where δmn = 1 if m = n and 0 otherwise.
3.2. Results. From Theorem 1.1, there are five terms to analyze: SA′(F), S0(F), S1(F), S2(F)
and SA(F). One advantage of our approach (replacing sums of αf(p)r + βf(p)r with moments of
λf(p)
r) is that the Fourier coefficients of a generic cuspidal newform should follow Sato-Tate; the
Petersson formula easily gives Sato-Tate on average as we vary the forms while letting the level
tend to infinity, which is all we need here. Thus Ar,F(p) is basically the r
th moment of the Sato-Tate
distribution (which, because of our normalizations, is a semi-circle here). The odd moments of the
semi-circle are zero, and the (2ℓ)th moment is Cℓ. If we let
P(ℓ) = Σ_p ( p/(p + 1)² )^ℓ (p − 1) log p / (p + 1),        (3.7)
10The harmonic weights are essentially constant. By [I1, HL] they can fluctuate within the family as
N−1−ǫ ≪k ωR(f) ≪k N−1+ǫ; (3.3)
if we allow ineffective constants we can replace N ǫ with logN for N large.
then we find
S_{A,0}(F) = − (2φ̂(0)/log R) Σ_{ℓ=2}^∞ C_ℓ P(ℓ),        (3.8)
and we are writing the correction term as a weighted sum of the expected main term of the moments
of the Fourier coefficients; see Lemma 2.3 for another way of writing this correction. These expan-
sions facilitate comparison with other families where the coefficients do not follow the Sato-Tate
distribution (such as one-parameter families of elliptic curves with complex multiplication).
Below we sketch an analysis of the lower order correction terms of size 1/ logR to families of
cuspidal newforms of weight k and prime levelN → ∞. We analyze the five terms in the expansion
of S(F) in Theorem 1.1.
The following lemma is useful for evaluating many of the sums that arise. We approximated γPNT
below by using the first million primes (see Remark 3.3 for an alternate, more accurate expression
for γPNT). The proof is a consequence of the prime number theorem; see Section 8.1 of [Yo1] for
details.
Lemma 3.1. Let θ(t) = Σ_{p≤t} log p and E(t) = θ(t) − t. If φ̂ is a compactly supported even Schwartz test function, then
Σ_p (2 log p)/(p log R) φ̂( 2 log p/log R ) = φ(0)/2 + 2γ_PNT φ̂(0)/log R + O( 1/log³ R ),        (3.9)
where
γ_PNT = 1 + ∫_1^∞ E(t)/t² dt ≈ −1.33258.        (3.10)
Remark 3.2. The constant γPNT also occurs in the definition of the constants c4,1 and c4,2 in [Yo1],
which arise from calculating lower order terms in two-parameter families of elliptic curves. The
constants c4,1 and c4,2 are in error, as the value of γPNT used in [Yo1] double counted the +1.
Remark 3.3. Steven Finch has informed us that γ_PNT = −γ − Σ_p (log p)/(p² − p); see
http://www.research.att.com/∼njas/sequences/A083343 for a high precision
evaluation and [Lan, RoSc] for proofs.
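The closed form in Remark 3.3 gives a quick way to recompute γ_PNT; the Python sketch below (an editorial addition, truncating the prime sum at 10^6, an arbitrary cutoff) reproduces the value quoted in Lemma 3.1.

from sympy import primerange, EulerGamma
import math

# gamma_PNT = -gamma - sum_p log(p)/(p^2 - p), truncated over primes below 10^6.
s = sum(math.log(p) / (p * p - p) for p in primerange(2, 10**6))
print(-float(EulerGamma) - s)   # ≈ -1.3326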
Theorem 3.4. Let φ̂ be supported in (−σ, σ) for some σ < 4/3 and consider the harmonic weights
w_R(f) = ζ(2)/L(1, sym² f).        (3.11)
Then
S(F) = φ(0)/2 + 2(−γ_ST;0 + γ_ST;2 − γ_ST;Ã + γ_PNT) φ̂(0)/log R + O( 1/log³ R ),        (3.12)
where
γ_ST;0 = Σ_p 2 log p/( p(p+1) ) ≈ 0.7691106216,
γ_ST;2 = Σ_p (4p² + 3p + 1) log p/( p(p+1)³ ) ≈ 1.1851820642,
γ_ST;Ã = Σ_{ℓ=2}^∞ C_ℓ P(ℓ) ≈ 0.4160714430,
γ_PNT = 1 + ∫_1^∞ E(t)/t² dt ≈ −1.33258,        (3.13)
and
−γ_ST;0 + γ_ST;2 − γ_ST;Ã = 0.        (3.14)
The notation above is to emphasize that these coefficients arise from the Sato-Tate distribution.
The subscript 0 (resp. 2) indicates that this contribution arises from the A0,F(p) (resp. A2,F(p))
terms, the subscript à indicates the contribution from S eA(F) (the Ar,F(p) terms with r ≥ 3),
and we use PNT for the final constant to indicate a contribution from applying the Prime Number
Theorem to evaluate sums of our test function.
Proof. The proof follows by calculating the contribution of the five pieces in Theorem 1.1. We
assume φ̂ is an even Schwartz function such that supp(φ̂) ⊂ (−σ, σ), with σ < 4/3, F is the
family of weight k and prime level N cuspidal newforms (with N → ∞), and we use the harmonic
weights of §3.1. Straightforward algebra shows11
(1) SA′(F) ≪ N−1/2.
(2) SA(F) = −
ST; eA
bφ(0)
R.11 log2 R
N .73
N3σ/4 logR
. In particular, for test
functions supported in (−4/3, 4/3) we have SA(F) = −
ST; eA
bφ(0)
+O (R−ǫ), where γ
ST; eA
≈ .4160714430 (see Lemma 2.3).
(3) S0(F) = φ(0)+ 2(2γPNT−γST;0)
bφ(0)
log3 R
, where γST;0 =
2 log p
p(p+1)
≈ 0.7691106216,
γPNT = 1 +
dt ≈ −1.33258.
(4) S1(F) ≪ logNN
≪ N 34σ−1 logN .
(5) Assume σ < 4. Then
S2(F) = −
− 2γPNT φ̂(0)
γST;2 φ̂(0)
log3R
γST;2 =
(4p2 + 3p+ 1) log p
p(p+ 1)3
≈ 1.1851820642 (3.15)
and γPNT is defined in (3.10).
The SA′(F) piece does not contribute, and the other four pieces contribute multiples of γST;0,
γST;2, γST;3 and γPNT. �
Remark 3.5. Numerical calculations will never suffice to show that −γ_ST;0 + γ_ST;2 − γ_ST;Ã is exactly zero; however, we have
−γ_ST;0 + γ_ST;2 − γ_ST;Ã = Σ_p [ −2/( p(p+1) ) + (4p² + 3p + 1)/( p(p+1)³ ) − (2p + 1)(p − 1)/( p(p+1)³ ) ] log p = Σ_p 0 · log p = 0.        (3.16)
This may also be seen by calculating the lower order terms using a different variant of the explicit
formula. Instead of expanding in terms of αf (p)
m + βf(p)
m we expand in terms of λf (p
m). The
11Except for the SA(F) piece, where a little care is required; see Appendix A for details.
terms which depend on the Fourier coefficients are given by
WR(F)
wR(f)
λf(p)
m log p
pm/2 logR
log p
log p
p logR
log p
WR(F)
wR(f)
m) log p
pm/2 logR
log p
(m+ 2)
log p
(3.17)
this follows from trivially modifying Proposition 2.1 of [Yo1]. ForN a prime, the Petersson formula
shows that only the second piece contributes for σ < 4/3, and we regain our result that the lower
order term of size 1/ logR from the Fourier coefficients is just 2γPNTφ̂(0)/ logR. We prefer our
expanded version as it shows how the moments of the Fourier coefficients at the primes influence
the correction terms, and will be useful for comparisons with families that either do not satisfy
Sato-Tate, or do not immediately satisfy Sato-Tate with negligible error for each prime.
4. PRELIMINARIES FOR FAMILIES OF ELLIPTIC CURVES
4.1. Notation. We review some notation and results for elliptic curves; see [Kn, Si1, Si2] for more
details. Consider a one-parameter family of elliptic curves over Q(T ):
E : y2 = x3 + A(T )x+B(T ), A(T ), B(T ) ∈ Z[T ]. (4.1)
For each t ∈ Z we obtain an elliptic curve Et by specializing T to t. We denote the Fourier
coefficients by a_t(p) = λ_t(p)√p; by Hasse's bound we have |a_t(p)| ≤ 2√p or |λ_t(p)| ≤ 2. The
discriminant and j-invariant of the elliptic curve Et are
∆(t) = −16(4A(t)³ + 27B(t)²),    j(t) = −1728 · 4A(t)³/∆(t).        (4.2)
Consider an elliptic curve y2 = x3 + Ax + B (with A,B ∈ Z) and a prime p ≥ 5. As p ≥ 5,
the equation is minimal if either p4 does not divide A or p6 does not divide B. If the equation is
minimal at p then
at(p) = − Σ_{x mod p} ((x³ + A(t)x + B(t))/p) = p + 1 − Nt(p), (4.3)
where Nt(p) is the number of points (including infinity) on the reduced curve Ẽ mod p. Note that
at+mp(p) = at(p). This periodicity is our analogue of the Petersson formula; while it is significantly
weaker, it will allow us to obtain results for sufficiently small support.
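For concreteness, here is a small Python illustration (ours) of the point-count definition: at(p) is computed from the Legendre-symbol sum in (4.3), with the symbol evaluated by Euler's criterion, and the periodicity at+p(p) = at(p) is checked for the family y² = x³ − 3x + 12T of Section 5.3, which Appendix B.3 notes is a global minimal Weierstrass equation.

    def legendre(a, p):
        # Legendre symbol via Euler's criterion (p an odd prime)
        a %= p
        if a == 0:
            return 0
        return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

    def a_t(p, A, B):
        # a_t(p) = -sum_x ((x^3 + A x + B)/p), as in (4.3)
        return -sum(legendre(x ** 3 + A * x + B, p) for x in range(p))

    p = 37
    for t in range(5):
        a = a_t(p, -3, 12 * t)                 # family y^2 = x^3 - 3x + 12t
        assert a == a_t(p, -3, 12 * (t + p))   # periodicity in t mod p
        assert abs(a) <= 2 * p ** 0.5          # Hasse bound
        print(t, a)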
Let E be an elliptic curve with minimal Weierstrass equation at p, and assume p divides the
discriminant (so the reduced curve modulo p is singular). Then aE(p) ∈ {−1, 0, 1}, depending on
the type of reduction. By changing coordinates we may write the reduced curve as (y − αx)(y −
βx) = x3. If α = β then we say E has a cusp and additive (or unstable) reduction at p, and aE(p) =
0. If α 6= β then E has a node and multiplicative (or semi-stable) reduction at p; if α, β ∈ Q we
say E has split reduction and aE(p) = 1, otherwise it has non-split reduction and aE(p) = −1. We
shall see later that many of our arguments are simpler when there is no multiplicative reduction,
which is true for families with complex multiplication.
Our arguments below are complicated by the fact that for many p there are t such that y2 =
x3 + A(T )x+ B(T ) is not minimal at p when we specialize T to t. For the families we study, the
specialized curve at T = t is minimal at p provided pk (k depends on the family) does not divide a
polynomial D(t) (which also depends on the family, and is the product of irreducible polynomial
factors of ∆(t)). For example, we shall later study the family with complex multiplication
y2 = x3 +B(6T + 1)κ, (4.4)
where B|6∞ (i.e., p|B implies p is 2 or 3) and κ ∈ {1, 2}. Up to powers of 2 and 3, the discriminant
is ∆(T ) = (6T + 1)2κ, and note that (6t + 1, 6) = 1 for all t. Thus for a given t the equation is
minimal for all primes provided that 6t + 1 is sixth-power free if κ = 1 and cube-free if κ = 2. In
this case we would take D(t) = 6t+ 1 and k = 6/κ. To simplify the arguments, we shall sieve our
families, and rather than taking all t ∈ [N, 2N ] instead additionally require that D(t) is kth power
free. Equivalently, we may take all t ∈ [N, 2N ] and set the weights to be zero if D(t) is not kth
power free. Thus throughout the paper we adopt the following conventions:
• the family is y2 = x3 + A(T )x + B(T ) with A(T ), B(T ) ∈ Z[T ], and we specialize T to
t ∈ [N, 2N ] with N → ∞;
• we associate polynomials D1(T ), . . . , Dd(T ) and integers k1, . . . , kd ≥ 3, and the weights
are wR(t) = 1 if t ∈ [N, 2N ] and Di(t) is kith power free, and 0 otherwise;
• logR is the average log-conductor of the family, and logR = (1 + o(1)) logN (see [DM2,
Mil2]).
4.2. Sieving. For ease of notation, we assume that we have a family where D(T ) is an irreducible
polynomial, and thus there is only one power, say k; the more general case proceeds analogously.
We assume that k ≥ 3 so that certain sums are small (if k ≤ 2 we need to assume either the ABC
or the Square-Free Sieve Conjecture). Let δkNd exceed the largest value of |D(t)| for t ∈ [N, 2N ]. We
say a t ∈ [N, 2N ] is good if D(t) is kth power free; otherwise we say t is bad. To determine the
lower order correction terms we must evaluate S(F), which is defined in (1.10). We may write
S(F) = (1/WR(F)) Σt wR(t)S(t). (4.5)
As wR(t) = 0 if t is bad, for bad t we have the freedom of defining S(t) in any manner we may
choose. Thus, even though the expansion for at(p) in (4.3) requires the elliptic curve Et to be
minimal at p, we may use this definition for all t. We use inclusion - exclusion to write our sums
in a more tractable form; the decomposition is standard (see, for example, [Mil2]). Letting ℓ be an
integer (its size will depend on d and k), we have
S(F) = (1/WR(F)) Σ_{t∈[N,2N], D(t) k−power free} wR(t)S(t)
= (1/WR(F)) Σ_{d=1}^{log^ℓ N} µ(d) Σ_{t∈[N,2N], D(t)≡0 mod d^k} S(t) + (1/WR(F)) Σ_{d=1+log^ℓ N}^{δN^{d/k}} µ(d) Σ_{t∈[N,2N], D(t)≡0 mod d^k} S(t),
(4.6)
where µ is the Möbius function. For many families we can show that
D(t)≡0 mod dk
S(t)2 = O
. (4.7)
If this condition12 holds, then applying the Cauchy-Schwarz inequality to (4.6) yields
S(F) = 1
WR(F)
logℓ N∑
D(t)≡0 mod dk
S(t) +O
WR(F)
δNd/k∑
d=1+logℓ N
WR(F)
logℓ N∑
D(t)≡0 mod dk
S(t) +O
WR(F)
· (logN)−(
k−1)·ℓ
. (4.8)
For all our families WR(F) will be of size N (see [Mil2] for a proof). Thus for ℓ sufficiently
large the error term is significantly smaller than 1/ log3R, and hence negligible (remember logR =
(1 + o(1)) logN). Note it is important that k ≥ 3, as otherwise we would have obtained logN to
a non-negative power (as we would have summed 1/d). For smaller k we may argue by using the
ABC or Square-Free Sieve Conjectures.
The advantage of the above decomposition is that the sums are over t in arithmetic progressions,
and we may exploit the relation at+mp(p) = at(p) to determine the family averages by evaluating
sums of Legendre symbols. This is our analogue, poor as it may be, to the Petersson formula.
There is one technicality that arises here which did not in [Mil2]. There the goal was only
to calculate the main term in the n-level densities; thus “small” primes (p less than a power of
logN) could safely be ignored. If we fix a d and consider all t with D(t) ≡ 0 mod dk, we ob-
tain a union of arithmetic progressions, with each progression having step size dk. We would
like to say that we basically have (N/dk)/p complete sums for each progression, with summands
at0(p), at0+dkp(p), at0+2dkp(p), and so on. The problem is that if p|d then we do not have a complete
sum, but rather we have the same term each time! We discuss how to handle this obstruction in the
next sub-section.
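The sieve decomposition and the p | d degeneracy are easy to see in a toy computation. The Python sketch below (ours; it uses the paper's D(T) = 6T + 1 with k = 3) checks the inclusion–exclusion identity behind (4.6): summing µ(d) over the progressions D(t) ≡ 0 mod d³ recovers exactly the count of t ∈ [N, 2N] with D(t) cube-free.

    def mobius(n):
        # naive Moebius function by trial division
        result, d = 1, 2
        while d * d <= n:
            if n % d == 0:
                n //= d
                if n % d == 0:
                    return 0
                result = -result
            d += 1
        return -result if n > 1 else result

    def kth_power_free(n, k):
        d = 2
        while d ** k <= n:
            if n % d ** k == 0:
                return False
            d += 1
        return True

    D = lambda t: 6 * t + 1     # the paper's D(T) for the family y^2 = x^3 + B(6T+1)^kappa
    k, N = 3, 2000
    ts = range(N, 2 * N + 1)

    direct = sum(1 for t in ts if kth_power_free(D(t), k))

    bound = int(max(D(t) for t in ts) ** (1.0 / k)) + 1
    sieved = sum(mobius(d) * sum(1 for t in ts if D(t) % d ** k == 0)
                 for d in range(1, bound + 1))

    print(direct, sieved)       # the two counts agree

For a fixed d and prime p, the inner progressions have step d^k, so one sees complete residue systems mod p unless p | d, in which case the progression collapses to a single residue class mod p; that is exactly the obstruction discussed above.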
4.3. Moments of the Fourier Coefficients and the Explicit Formula. Our definitions imply that
Ar,F(p) is obtained by averaging λt(p)
r over all t ∈ [N, 2N ] such that p |r ∆(t); the remaining t
yield A′r,F(p). We have sums such as
WR(F)
logℓ N∑
D(t)≡0 mod dk
S(t). (4.9)
In all of our families D(T ) will be the product of the irreducible polynomial factors of ∆(T ). For
ease of exposition, we assume D(T ) is given by just one factor.
We expand S(F) and S(t) by using Theorem 1.1. The sum of S(t) over t with D(t) ≡ 0 mod dk
breaks up into two types of sums, those where ∆(t) ≡ 0 mod p and those where ∆(t) 6≡ 0 mod p.
For a fixed d, the goal is to use the periodicity of the t-sums to replace Ar,F(p) with complete sums.
Thus we need to understand complete sums. If t ∈ [N, 2N ], d ≤ logℓN and p is fixed, then the
set of t such that D(t) ≡ 0 mod dk is a union of arithmetic progressions; the number of arithmetic
progressions equals the number of distinct solutions to D(t) ≡ 0 mod dk, which we denote by
k). We have (N/dk)/p complete sums, and at most p summands left over.
12Actually, this condition is a little difficult to use in practice. It is easier to first pull out the sum over all primes p
and then square; see [Mil2] for details.
Recall
Ar,F(p) = (1/WR(F)) Σ_{f∈S(p)} wR(f)λf(p)^r, A′r,F(p) = (1/WR(F)) Σ_{f∉S(p)} wR(f)λf(p)^r, (4.10)
and set
𝒜r,F(p) = Σ_{t mod p, p∤∆(t)} at(p)^r = p^{r/2} Σ_{t mod p, p∤∆(t)} λt(p)^r, 𝒜′r,F(p) = Σ_{t mod p, p|∆(t)} at(p)^r. (4.11)
Lemma 4.1. Let D be a product of irreducible polynomials such that (i) for all t no two factors are
divisible by the same prime; (ii) the same k ≥ 3 (see the conventions on page 18) is associated to
each polynomial factor. For any ℓ ≥ 7 we have
Ar,F(p) = 𝒜r,F(p)/(p · p^{r/2}) + O(1/log^{ℓ/2}N), A′r,F(p) = 𝒜′r,F(p)/(p · p^{r/2}) + O(1/log^{ℓ/2}N). (4.12)
Proof. For our family, the d ≥ logℓN terms give a negligible contribution. We rewrite Ar,F(p) as
Ar,F(p) =
WR(F)
t∈[N,2N],p |rD(t)
D(t) k−power free
λt(p)
WR(F)
logℓ N∑
t∈[N,2N],p |rD(t)
D(t)≡0 mod dk
λt(p)
log−ℓ/2N
WR(F)
logℓ N∑
k)N/dk
t mod p
p |rD(t)
λt(p)
WR(F)
logℓ N∑
WR(F)
logℓ N∑
µ(d)δp|d
k)N/dk
t mod p
p |rD(t)
λt(p)
, (4.13)
where δp|d = 1 if p|d and 0 otherwise. For sufficiently small support the big-Oh term above is
negligible. As k ≥ 3, we have
WR(F) = N
1− νD(d
logℓ/2N
logℓ N∑
µ(d)νD(d
logℓ/2N
. (4.14)
For the terms with µ(d)δp|d in (4.13), we may write d as d̃p, with (d̃, p) = 1 (the µ(d) factor forces
d to be square-free, so p||d). For sufficiently small support, (4.13) becomes
Ar,F(p)
p · pr/2
1− νD(p
log−ℓ/2N
; (4.15)
this is because
WR(F)
logℓ N∑
µ(d)νD(d
µ(p)νD(p
logℓ N∑
p |rd̃
µ(d̃)νD(d̃
= −νD(p
1− νD(p
logℓ/2N
(4.16)
(the last line follows because of the multiplicativity of νD (see for example [Nag]) and the fact that
we are missing the factor corresponding to p). The proof for A′r,F(p) follows analogously. �
We may rewrite the expansion in Theorem 1.1. We do not state the most general version possible,
but rather a variant that will encompass all of our examples.
Theorem 4.2 (Expansion for S(F) for many elliptic curve families). Let y2 = x3+A(T )x+B(T )
be a family of elliptic curves over Q(T ). Let ∆(T ) be the discriminant (and the only primes dividing
the greatest common divisor of the coefficients of ∆(T ) are 2 or 3), and let D(T ) be the product of
the irreducible polynomial factors of ∆(T ). Assume for all t that no prime simultaneously divides
two different factors of D(t), that each specialized curve has additive reduction at 2 and 3, and that
there is a k ≥ 3 such that for p ≥ 5 each specialized curve is minimal provided that D(T ) is kth
power free (if the equation is a minimal Weierstrass equation for all p ≥ 5 we take k = ∞); thus
we have the same k for each irreducible polynomial factor of D(T ). Let νD(d) denote the number
of solutions to D(t) ≡ 0 mod d. Set wR(t) = 1 if t ∈ [N, 2N ] and D(t) is kth power free, and 0
otherwise. Let
Ar,F(p) =
t mod p
p |r∆(t)
at(p)
r = pr/2
t mod p
p |r∆(t)
λt(p)
r, A′r,F(p) =
t mod p
p|∆(t)
at(p)
ÃF(p) =
t mod p
p|r∆(t)
at(p)
p3/2(p+ 1− at(p))
t mod p
p |r∆(t)
λt(p)
p+ 1− λt(p)
HD,k(p) = 1 +
1− νD(p
. (4.17)
We have
S(F) = −2φ̂(0)
A′m,F(p)HD,k(p) log p
pm+1 logR
−2φ̂(0)
2A0,F(p)HD,k(p) log p
p2(p+ 1) logR
2A0,F(p)HD,k(p) log p
p2 logR
log p
A1,F(p)HD,k(p)
log p
log p
+ 2φ̂(0)
A1,F(p)HD,k(p)(3p+ 1)
p2(p+ 1)2
log p
A2,F(p)HD,k(p) log p
p3 logR
log p
+ 2φ̂(0)
A2,F(p)HD,k(p)(4p2 + 3p+ 1) log p
p3(p+ 1)3 logR
−2φ̂(0)
ÃF(p)HD,k(p)p3/2(p− 1) log p
p(p+ 1)3 logR
log3R
= SA′(F) + S0(F) + S1(F) + S2(F) + S eA(F) +O
log3R
. (4.18)
If the family only has additive reduction (as is the case for our examples with complex multiplica-
tion), then the A′m,F(p) piece contributes 0.
Proof. The proof follows by using Lemma 4.1 to simplify Theorem 1.1, and (2.8) to replace the
φ̂(m log p/ logR) terms with φ̂(0) + O(log−2R) in the A′m,F(p) terms. See Remark 1.2 for com-
ments on the need to numerically evaluate the ÃF(p) piece. �
For later use, we record a useful variant of Lemma 3.1.
Lemma 4.3. Let ϕ be the Euler totient function, and
θa,b(t) = Σ_{p≤t, p≡a mod b} log p, Ea,b(t) = θa,b(t) − t/ϕ(b). (4.19)
If φ̂ is a compactly supported even Schwartz test function, then
2 log p
p logR
log p
2φ̂(0)
2E1,3(t)
log3R
, (4.20)
where
γPNT;1,3 = 1 + ∫_1^∞ 2E1,3(t) t^{−2} dt ≈ −2.375, γPNT;1,4 = 1 + ∫_1^∞ 2E1,4(t) t^{−2} dt ≈ −2.224; (4.21)
γPNT;1,3 and γPNT;1,4 were approximated by integrating up to the four millionth prime, 67,867,979.
Remark 4.4. Steven Finch has informed us that, similar to Remark 3.3, using results from [Lan,
Mor] yields formulas for γPNT;1,3 and γPNT;1,4 which converge more rapidly:
γPNT;1,3 = −2γ − 4 log 2π + log 3 + 6 log Γ
p≡1,2 mod 3
log p
p2 − pδ1,3(p)
≈ −2.375494
γPNT;1,4 = −2γ − 3 log 2π + 4 log Γ
p≡1,3 mod 4
log p
p2 − pδ1,4(p)
≈ −2.224837; (4.22)
here γ is Euler’s constant and δ1,n(p) = 1 if p ≡ 1 mod n and 0 otherwise.
5. EXAMPLES: ONE-PARAMETER FAMILIES OF ELLIPTIC CURVES OVER Q(T )
We calculate the lower order correction terms for several one-parameter families of elliptic curves
over Q(T ), and compare the results to what we would obtain if there was instant convergence (for
each prime p) to the limiting distribution of the Fourier coefficients. We study families with and
without complex multiplication, as well as families with forced torsion points or rank. We perform
the calculations in complete detail for the first family, and merely highlight the changes for the other
families.
5.1. CM Example: The family y2 = x3 +B(6T + 1)κ over Q(T ).
5.1.1. Preliminaries. Consider the following one-parameter family of elliptic curves over Q(T )
with complex multiplication:
y2 = x3 +B(6T + 1)κ, B ∈ {1, 2, 3, 6}, κ ∈ {1, 2}, k = 6/κ. (5.1)
We obtain slightly different behavior for the lower order correction terms depending on whether
or not B is a perfect square for all primes congruent to 1 modulo 3. For example, if B = b2 and
κ = 2, then we have forced a torsion point of order 3 on the elliptic curve over Q(T ), namely
(0, b(6T + 1)). The advantage of using 6T + 1 instead of T is that (6T + 1, 6) = 1, and thus we
do not need to worry about the troublesome primes 2 and 3 (each at(p) = 0 for p ∈ {2, 3}). Up to
powers of 2 and 3 the discriminant is (6T + 1)κ; thus we take D(T ) = 6T + 1. For each prime p
the specialized curve Et is minimal at p provided that p
2k |r 6t + 1. If p2k|6t + 1 then wR(t) = 0,
so we may define the summands any way we wish; it is convenient to use (4.3) to define at(p),
even though the curve is not minimal at p. In particular, this implies that at(p) = 0 for any t where
p3|6t+ 1.
One very nice property of our family is that it only has additive reduction; thus if p|D(t) but
p2k |rD(t) then at(p) = 0. As our weights restrict our family to D(t) being k = 6/κ power free, we
always use (4.3) to define at(p).
It is easy to evaluate A1,F(p) and A2,F(p). While these sums are the average first and second
moments over primes not dividing the discriminant, as at(p) = 0 for p|∆(t) we may extend these
sums to be over all primes.
We use Theorem 4.2 to write the 1-level density in a tractable manner. Straightforward calcula-
tion (see Appendix B.1 for details) shows that
A0,F(p) =
p− 1 if p ≥ 5
0 otherwise
A1,F(p) = 0
A2,F(p) =
2p2 − 2p if p ≡ 1 mod 3
0 otherwise.
(5.2)
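These moment computations can be confirmed by brute force for small primes. The Python sketch below (an independent check, not the paper's code) evaluates Σt at(p) and Σt at(p)² directly from the Legendre-symbol definition of at(p) for the family y² = x³ + B(6T + 1)^κ with B = 2, κ = 2, and compares the second moment with 2p² − 2p for p ≡ 1 mod 3 (and 0 otherwise); summing over all t mod p is harmless here because at(p) = 0 whenever p | ∆(t).

    def legendre(a, p):
        a %= p
        if a == 0:
            return 0
        return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

    def a_t(t, p, B, kappa):
        c = B * pow(6 * t + 1, kappa, p)
        return -sum(legendre(x ** 3 + c, p) for x in range(p))

    B, kappa = 2, 2
    for p in [7, 11, 13, 17, 19, 31]:
        first = sum(a_t(t, p, B, kappa) for t in range(p))
        second = sum(a_t(t, p, B, kappa) ** 2 for t in range(p))
        expected = 2 * p * p - 2 * p if p % 3 == 1 else 0
        print(p, first, second, expected)   # first moment 0; second moment matches 'expected'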
Not surprisingly, neither the zeroth, first or second moments depend on B or on κ; this universality
leads to the common behavior of the main terms in the n-level densities. We shall see dependence
on the parameters B and κ in the higher moments Ar,F(p), and this will lead to different lower
order terms for the different families.
As we are using Theorem 4.2 instead of Theorem 1.1, each prime sum is weighted by
HD,k(p) = 1 +
= HmainD,k (p) +H
sieve
D,k (p), (5.3)
with HmainD,k (p) = 1. H
sieve
D,k (p) arises from sieving our family to D(t) being (6/κ)-power free. We
shall calculate the contribution of these two pieces separately. We expect the contribution from
HsieveD,k (p) to be significantly smaller, as each p-sum is decreased by approximately 1/p
5.1.2. Contribution from HmainD,k (p).
We first calculate the contributions from the four pieces of HmainD,k (p). We then combine the
results, and compare to what we would have had if the Fourier coefficients followed the Sato-Tate
distribution or for each prime immediately perfectly followed the complex multiplication analogue
of Sato-Tate.
Lemma 5.1. Let supp(φ̂) ⊂ (−σ, σ). We have
S0(F) = φ(0) + 2φ̂(0) · (2γPNT − γ(≥5)CM;0 − γ2,3)/logR + O(1/log³R) + O(N^{σ−1}), (5.4)
where
γ(≥5)CM;0 = Σ_{p≥5} 4 log p/(p(p+1)) ≈ 0.709919, γ2,3 = (2 log 2)/2 + (2 log 3)/3 ≈ 1.4255554, (5.5)
and γPNT is defined in Lemma 3.1.
Note γ(≥5)CM;0 is almost 2γST;0 (see (3.13)); the difference is that here p ≥ 5.
Proof. Substituting for A0,F(p) and using (2.8) yields
S0(F) = −
2φ̂(0)
4 log p
p(p+ 1)
2 log p
p logR
log p
log3R
. (5.6)
The first prime sum converges; using the first million primes we find γ
CM;0 ≈ 0.709919. The
remaining piece is
2 log p
p logR
log p
− 2φ̂(0)
2 log 2
2 log 3
log3R
. (5.7)
The claim now follows from the definition of γ
2,3 and using Lemma 3.1 to evaluate the remaining
sum. �
Lemma 5.2. Let supp(φ̂) ⊂ (−σ, σ) and
γ(1,3)CM;2 = Σ_{p≡1 mod 3} 2(5p² + 2p + 1) log p/(p(p+1)³) ≈ 0.6412881898. (5.8)
Then
S2(F) = −φ(0)/2 + 2φ̂(0) · (−γPNT;1,3 + γ(1,3)CM;2)/logR + O(1/log³R) + O(N^{σ−1}), (5.9)
where γPNT;1,3 ≈ −2.375494 (see Lemma 4.3 for its definition).
Proof. Substituting our formula for A2,F(p) and collecting the pieces yields
S2(F) = −2
p≡1 mod 3
2 log p
log p
2φ̂(0)
p≡1 mod 3
2(5p2 + 2p+ 1) log p
p(p+ 1)3
. (5.10)
The first sum is evaluated by Lemma 4.3. The second sum converges, and was approximated by
taking the first four million primes. �
Lemma 5.3. For the families FB,κ: y2 = x3 + B(6T + 1)κ with B ∈ {1, 2, 3, 6} and κ ∈ {1, 2},
we have SÃ(F) = −2γ(1,3)CM;Ã,B,κ φ̂(0)/logR + O(log−3R), where
γ(1,3)CM;Ã;1,1 ≈ .3437, γ(1,3)CM;Ã;1,2 ≈ .4203, γ(1,3)CM;Ã;2,2 ≈ .5670, γ(1,3)CM;Ã;3,2 ≈ .1413, γ(1,3)CM;Ã;6,2 ≈ .2620; (5.11)
the error is at most .0367.
Proof. As the sum converges, we have written a program in C (using PARI as a library) to approxi-
mate the answer. We used all primes p ≤ 48611 (the first 5000 primes), which gives us an error of
at most about 8√
p+1−2√p ≈ .0367. The error should be significantly less, as this is assuming no
oscillation. We also expect to gain a factor of 1/2 as half the primes have zero contribution. �
Remark 5.4. When κ = 1 a simple change of variables shows that all four values of B lead to the
same behavior. The case of κ = 2 is more interesting. If κ = 2 and B = 1, then we have the torsion
point (0, 6T + 1) on the elliptic surface. If B ∈ {2, 3, 6} and
= 1 then (0, 6t+ 1 mod p) is on
the curve Et mod p, while if
= −1 then (0, 6t+ 1 mod p) is not on the reduced curve.
5.1.3. Contribution from HsieveD,k (p).
Lemma 5.5. Notation as in Lemma 5.3, the contributions from the HsieveD,k (p) sieved terms to the
lower order corrections are
−2(γ(1,3)CM, sieve;012 + γ(1,3)CM, sieve;B,κ) φ̂(0)/logR + O(1/log³R), (5.12)
where
γ(1,3)CM, sieve;012 ≈ −.004288, γ(1,3)CM, sieve;1,1 ≈ .000446, γ(1,3)CM, sieve;1,2 ≈ .000699,
γ(1,3)CM, sieve;2,2 ≈ .000761, γ(1,3)CM, sieve;3,2 ≈ .000125, γ(1,3)CM, sieve;6,2 ≈ .000199, (5.13)
where the errors in the constants are at most 10−15 (we are displaying fewer digits than we could!).
Proof. The presence of the additional factor of 1/p3 ensures that we have very rapid convergence.
The contribution from the r ≥ 3 terms was calculated at the same time as the contribution in Lemma
5.3, and is denoted by γ
(1,3)
CM,sieve;B,κ. The other terms (r ∈ {0, 1, 2}) were computed in analogous
manners as before, and grouped together into γ(1,3)CM, sieve;012. �
5.1.4. Results. We have shown
Theorem 5.6. For σ < 2/3, the HmainD,k(p) terms contribute φ(0)/2 to the main term. The lower
order correction from the HmainD,k(p) and HsieveD,k(p) terms is
2φ̂(0) · (2γPNT − γ(≥5)CM;0 − γ2,3 − γPNT;1,3 + γ(1,3)CM;2 − γ(1,3)CM;Ã,B,κ − γ(1,3)CM, sieve;012 − γ(1,3)CM, sieve;B,κ)/logR + O(1/log³R). (5.14)
Using the numerical values of our constants for the five choices of (B, κ) gives, up to errors of size
O(log−3R), lower order terms of approximately
B = 1, κ = 1 : −2.124 · 2φ̂(0)/ logR,
B = 1, κ = 2 : −2.201 · 2φ̂(0)/ logR,
B = 2, κ = 2 : −2.347 · 2φ̂(0)/ logR
B = 3, κ = 2 : −1.921 · 2φ̂(0)/ logR
B = 6, κ = 2 : −2.042 · 2φ̂(0)/ logR. (5.15)
These should be contrasted to the family of cuspidal newforms, whose correction term was
γPNT · 2φ̂(0)/logR ≈ −1.33258 · 2φ̂(0)/logR. (5.16)
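The numbers in (5.15) are simply the constants above recombined according to Theorem 5.6; for readers who wish to audit the arithmetic, the following Python lines (ours, with every value copied from the lemmas of this section) reproduce them.

    gamma_PNT      = -1.33258
    gamma_CM0_ge5  =  0.709919
    gamma_23       =  1.4255554
    gamma_PNT13    = -2.375494
    gamma_CM2      =  0.6412881898
    gamma_A        = {(1, 1): .3437, (1, 2): .4203, (2, 2): .5670, (3, 2): .1413, (6, 2): .2620}
    sieve_012      = -0.004288
    sieve_Bk       = {(1, 1): .000446, (1, 2): .000699, (2, 2): .000761, (3, 2): .000125, (6, 2): .000199}

    for key in gamma_A:
        coeff = (2 * gamma_PNT - gamma_CM0_ge5 - gamma_23 - gamma_PNT13 + gamma_CM2
                 - gamma_A[key] - sieve_012 - sieve_Bk[key])
        print(key, round(coeff, 3))   # (1,1): -2.124, (1,2): -2.201, (2,2): -2.347, (3,2): -1.921, (6,2): -2.042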
Remark 5.7. The most interesting piece in the lower order terms is from the weighted moment
sums with r ≥ 3 (see Lemma 5.3); note the contribution from the sieving is significantly smaller
(see Lemma 5.5). As each curve in the family has complex multiplication, we expect the limiting
distribution of the Fourier coefficients to differ from Sato-Tate; however, the coefficients satisfy
a related distribution (it is uniform if we consider the related curve over the quadratic field; see
[Mur]). This distribution is even, and the even moments are: 2, 6, 20, 70, 252 and so on. In
general, the 2ℓth moment is Dℓ = 2 · ½ · (2ℓ)!/(ℓ!)² (the factor of 2 is because the coefficients vanish for
p ≡ 2 mod 3, so those congruent to 1 modulo 3 contribute double); note the 2ℓth moment of the
Sato-Tate distribution is Cℓ = (2ℓ)!/((ℓ+1)(ℓ!)²). The generating function is
gCM(x) = 1/√(1 − 4x) − 1 − 2x = 6x² + 20x³ + 70x⁴ + · · · = Σℓ≥2 Dℓ xℓ (5.17)
(see sequence A000984 in [Sl]). The contribution from the r ≥ 3 terms is
−(2φ̂(0)/logR) Σ_{p≡1 mod 3} ((p − 1) log p/(p + 1)) Σℓ≥2 Dℓ p^ℓ/(p + 1)^{2ℓ}. (5.18)
Using the generating function, we see that the ℓ-sum is just 2(3p + 1)/((p − 1)(p + 1)²), so the
contribution is
−(2φ̂(0)/logR) Σ_{p≡1 mod 3} 2(3p + 1) log p/(p + 1)³ = −2γ(1,3)CM;Ã φ̂(0)/logR, (5.19)
where taking the first million primes yields
γ(1,3)CM;Ã ≈ .38184489. (5.20)
It is interesting to compare the expected contribution from the Complex Multiplication distribution
(for the moments r ≥ 3) and that from the Sato-Tate distribution (for the moments r ≥ 3). The
contribution from the Sato-Tate, in this case, was shown in Lemma 2.3 to be
SA,0(F) = −2γST;Ã φ̂(0)/logR, γST;Ã ≈ 0.4160714430. (5.21)
Note how close this is to .38184489, the contribution from the Complex Multiplication distribution.
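Both constants in this comparison can be recomputed directly. The Python sketch below (ours) sums the closed forms obtained from the two generating functions, 2(3p + 1) log p/(p + 1)³ over p ≡ 1 mod 3 for the complex multiplication case and (2p + 1)(p − 1) log p/(p(p + 1)³) over all p for Sato-Tate, using the primes below 10^6.

    from math import log

    def primes(n):
        sieve = [True] * (n + 1)
        sieve[0] = sieve[1] = False
        for p in range(2, int(n ** 0.5) + 1):
            if sieve[p]:
                sieve[p * p::p] = [False] * len(sieve[p * p::p])
        return [p for p, flag in enumerate(sieve) if flag]

    cm = st = 0.0
    for p in primes(10 ** 6):
        lp = log(p)
        st += (2 * p + 1) * (p - 1) * lp / (p * (p + 1) ** 3)   # Sato-Tate moments
        if p % 3 == 1:
            cm += 2 * (3 * p + 1) * lp / (p + 1) ** 3           # CM moments, p = 1 mod 3
    print(cm, st)   # approximately 0.3818 and 0.4161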
5.2. CM Example: The family y2 = x3−B(36T +6)(36T +5)x over Q(T ). The analysis of this
family proceeds almost identically to the analysis for the families y2 = x3+B(6T +1)κ over Q(T ),
with trivial modifications because D(T ) has two factors; note no prime can simultaneously divide
both factors, and each factor is of degree 1. The main difference is that now at(p) = 0 whenever
p ≡ 3 mod 4 (as is seen by sending x → −x). We therefore content ourselves with summarizing
the main new feature.
There are two interesting cases. If B = 1 then the family has rank 1 over Q(T ) (see Lemma
B.5); note in this case that we have the point (36T + 6, 36T + 6). If B = 2 then the family
has rank 0 over Q(T ). This follows by trivially modifying the proof in Lemma B.5, resulting in
A1,F(p) = −2p
if p ≡ 1 mod 4 and 0 otherwise (which averages to 0 by Dirichlet’s Theorem
for primes in arithmetic progressions).
As with the previous family, the most interesting pieces are the lower order correction terms from
S eA(F), namely the pieces from HmainD,k (p) and HsieveD,k (p) (as we must sieve). We record the results
from numerical calculations using the first 10,000 primes. We write the main term as γ
(1,4)
CM; eA,B
(1, 4) denotes that there is only a contribution from p ≡ 1 mod 4) and the sieve term as γ(1,4)CM,sieve;B.
We find that
(1,4)
CM; eA,1
≈ −0.1109 γ(1,4)CM,sieve;1 ≈ −.0003
(1,4)
CM; eA,2
≈ 0.6279 γ(1,4)CM,sieve;2 ≈ .0013.
(5.22)
What is fascinating here is that, when B = 1, the value of γ
(1,4)
CM; eA,B
is significantly lower than
what we would predict for a family with complex multiplication. A natural explanation for this is
that the distribution corresponding to Sato-Tate for curves with complex multiplication cannot be
the full story (even in the limit) for a family with rank. Rosen and Silverman [RoSi] prove
Theorem 5.8 (Rosen-Silverman). Assume Tate’s conjecture holds for a one-parameter family E of
elliptic curves y2 = x3+A(T )x+B(T ) over Q(T ) (Tate’s conjecture is known to hold for rational
surfaces). Let AE(p) = (1/p) Σ_{t mod p} at(p). Then
lim_{X→∞} (1/X) Σ_{p≤X} −AE(p) log p = rank E(Q(T)). (5.23)
Thus if the elliptic curves have positive rank, there is a slight bias among the at(p) to be negative.
For a fixed prime p the bias is roughly of size −r for each at(p), where r is the rank over Q(T) and
each at(p) is of size √p. While in the limit as p → ∞ the ratio of the bias to at(p) tends to zero, it
is the small primes that contribute most to the lower order terms. As γ(1,4)CM;Ã,B arises from weighted
sums of at(p)³, we expect this term to be smaller for curves with rank; this is borne out beautifully
by our data (see (5.22)).
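The bias is easy to exhibit numerically. In the Python sketch below (illustrative only), AE(p) = (1/p) Σ_{t mod p} at(p) is computed by brute force for the rank 1 family y² = x³ − (36T + 6)(36T + 5)x of this subsection and for the rank 0 family y² = x³ − 3x + 12T of §5.3; for p ≡ 1 mod 4 the former returns −2, in line with Lemma B.6 and Theorem 5.8, while the latter stays of size O(1/p).

    def legendre(a, p):
        a %= p
        if a == 0:
            return 0
        return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

    def a_t(t, p, A, B):
        # bad reduction only contributes O(1) here, so we simply sum over all t
        return -sum(legendre(x ** 3 + A * x + B, p) for x in range(p))

    def A_E(p, A_of_t, B_of_t):
        return sum(a_t(t, p, A_of_t(t), B_of_t(t)) for t in range(p)) / p

    for p in [5, 13, 17, 29, 37]:   # primes congruent to 1 mod 4
        rank1 = A_E(p, lambda t: -(36 * t + 6) * (36 * t + 5), lambda t: 0)
        rank0 = A_E(p, lambda t: -3, lambda t: 12 * t)
        print(p, rank1, rank0)      # rank1 is -2.0; rank0 is small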
5.3. Non-CM Example: The family y2 = x3 − 3x + 12T over Q(T ). We consider the family
y2 = x3−3x+12T over Q(T ); note this family does not have complex multiplication. For all t the
above is a global minimal Weierstrass equation, and at(2) = at(3) = 0. Straightforward calculation
(see Appendix B.3 for details) shows that
A0,F(p) =
p− 2 if p ≥ 5
0 otherwise
A1,F(p) =
if p ≥ 5
0 otherwise
A2,F(p) =
p2 − 2p− 2− p
if p ≥ 5
0 otherwise.
(5.24)
Unlike our families with complex multiplication (which only had additive reduction), here we
have multiplicative reduction13, and must calculate A′m,F(p). We have
A′m,F(p) = 0 if p = 2, 3; = 2 if m is even; = (3/p) + (−3/p) if m is odd. (5.25)
13As we have multiplicative reduction, for each t as p → ∞ the at(p) satisfy Sato-Tate; see [CHT, Tay].
this follows (see Appendix B.3) from the fact that for a given p there are only two t modulo p such
that p|∆(t), and one has at(p) =
and the other has at(p) =
We sketch the evaluations of the terms from (4.18) of Theorem 4.2; for this family, note that
HD,k(p) = 1. We constantly use the results from Appendix B.3.
Lemma 5.9. We have SA′(F) = −2γ(3)A′ φ̂(0)/ logR +O(log
−3R), where
A′ = 2
log p
p3 − p
p≡1 mod 12
log p
p2 − 1
p≡5 mod 12
log p
p2 − 1
≈ −0.082971426. (5.26)
Proof. As A′m,F(p) =
, the result follows by separately evaluating m even and odd,
and using the geometric series formula. �
Lemma 5.10. We have
S0(F) = φ(0)−
2φ̂(0) · (γ(3)0 + γ
2,3 − 2γPNT)
log3R
, (5.27)
where
(4p− 2) log p
p2(p+ 1)
≈ 0.331539448, (5.28)
γPNT is defined in Lemma 3.1 and γ
2,3 is defined in Lemma 5.1.
Proof. For p ≥ 5 we have A0,F(p) = p− 2. The γ(3)0 term comes from collecting the pieces whose
prime sum converges for any bounded φ̂ (and replacing φ̂(2 log p/ logR) with φ̂(0) at a cost of
O(log−2R)), while the remaining pieces come from using Lemma 3.1 to evaluate the prime sum
which converges due to the compact support of φ̂. �
Lemma 5.11. We have S1(F) = −2γ(3)1 φ̂(0)/ logR +O(log−3R), where
· (p− 1) log p
p2(p+ 1)2
= −0.013643784. (5.29)
Proof. As the prime sums decay like 1/p2, we may replace φ̂(log p/ logR) with φ̂(0) at a cost of
O(log−2R). The claim follows from A1,F(p) =
and simple algebra. �
Lemma 5.12. We have
S2(F) = −
2φ̂(0) · (γ(3)2 − 12γ
2,3 + γPNT)
log3R
, (5.30)
where
p4 − (13 + 7)
p3 − (25 + 6
)p2 − (16 + 2
)p− 4) log p
p3(p+ 1)3
≈ .085627. (5.31)
Proof. For p ≥ 5 we have A0,F(p) = p2−2p−2−
p. The γ
2 term comes from collecting the
pieces whose prime sum converges for any bounded φ̂ (and replacing φ̂(2 log p/ logR) with φ̂(0)
at a cost of O(log−2R)), while the remaining pieces come from using Lemma 3.1 to evaluate the
prime sum which converges due to the compact support of φ̂. �
Lemma 5.13. We have S eA(F) = −2γ
φ̂(0)/ logR +O(log−3R), where
≈ .3369. (5.32)
Proof. As the series converges, this follows by direct evaluation. �
We have shown
Theorem 5.14. The S0(F) and S2(F) terms contribute φ(0)/2 to the main term. The lower order
correction terms are
2φ̂(0) ·
A′ + γ
0 + γ
1 + γ
2 + γ
2,3 − γPNT
log3R
; (5.33)
using the calculated and computed values of these constants gives
− 2.703 ·
2φ̂(0)
log3R
. (5.34)
Our result should be contrasted to the family of cuspidal newforms, where the correction term
was of size
γPNT ·
2φ̂(0)
≈ −1.33258 · 2φ̂(0)
. (5.35)
Remark 5.15. It is not surprising that our family of elliptic curves has a different lower order
correction than the family of cuspidal newforms. This is due, in large part, to the fact that we do not
have immediate convergence to the Sato-Tate distribution for the coefficients. This is exasperated
by the fact that most of the contribution to the lower order corrections comes from the small primes.
APPENDIX A. EVALUATION OF SA(F) FOR THE FAMILY OF CUSPIDAL NEWFORMS
Lemma A.1. Notation as in §3, we have
SA(F) = −2γST;Ã φ̂(0)/logR + O(log²R/R^{.11}) + O(log²R/N^{.73}) + O(N^{3σ/4} logR/N). (A.36)
In particular, for test functions supported in (−4/3, 4/3) we have
SA(F) = −2γST;Ã φ̂(0)/logR + O(R^{−ǫ}), (A.37)
where γST;Ã ≈ .4160714430 (see Lemma 2.3).
Proof. Recall
SA(F) = −2φ̂(0)
Ar,F(p)p
r/2(p− 1) log p
(p+ 1)r+1 logR
. (A.38)
Using |Ar,F(p)| ≤ 2r, we may easily bound the contribution from r large, say r ≥ 1 + 2 logR.
These terms contribute
r=1+2 logR
2rpr/2(p− 1) log p
(p+ 1)r+1 logR
log p
r=1+2 logR
log p
)2 logR
2007 ·
)2 logR
p≥2008
log p
p(2 logR)/3
R.77 logR
; (A.39)
note it is essential that 2
2/3 < 1. Thus it suffices to study r ≤ 2 logR.
SA(F) = −2φ̂(0)
2 logR∑
br,r−2k
Ar,F ;k(p)p
r/2(p− 1) log p
(p+ 1)r+1 logR
R.77 logR
= −2φ̂(0)
(p− 1) log p
logR∑
(p+ 1)2
R.77 logR
− 2φ̂(0)
2 logR∑
k 6=r/2
br,r−2k
Ar,F ;k(p)p
r/2(p− 1) log p
(p+ 1)r+1
. (A.40)
In Lemma 2.3 we handled the first p and ℓ-sum when we summed over all ℓ ≥ 2; however, the
contribution from ℓ ≥ logR is bounded by (8/9)logR ≪ R−.11. Thus
SA(F) = −
2γST;3 φ̂(0)
R.11 logR
2φ̂(0)
2 logR∑
(r−2)/2∑
br,r−2k
Ar,F ;k(p)p
r/2(p− 1) log p
(p+ 1)r+1
. (A.41)
To finish the analysis we must study the br,r−2kAr,F ;k(p) terms. Trivial estimation suffices for all
r when p ≥ 13; in fact, bounding these terms for small primes is what necessitated our restricting
to r ≤ 2 logR. From (3.6) (the Petersson formula with harmonic weights) we find
Ar,F ;k(p) ≪
p(r−2k)/4 log
p(r−2k)/4N
k5/6N
rpr/4 log(pN)
. (A.42)
∑(r−2)/2
k=0 br,r−2k| ≤ 2r, we have
SA(F) = −
ST; eA φ̂(0)
R.11 logR
2 logR∑
r2rp3r/4 log(pN)
(p+ 1)r logR
. (A.43)
As our Schwartz test functions restrict p to be at most Rσ, the second error term is bounded by
N logR
log(pN)
2 logR∑
2p3/4
≪ logR
p≤2007
2 logR∑
2p3/4
p≥2008
2 logR∑
2p3/4
2 · 33/4
)2 logR
logR +
p≥2008
2p3/4
p + 1
.27 log2R
p=2011
p−1/4 ≪ log
N .73
N3σ/4 logR
, (A.44)
which is negligible provided that σ < 4/3. �
APPENDIX B. EVALUATION OF Ar,F FOR FAMILIES OF ELLIPTIC CURVES
The following standard result allows us to evaluate the second moment of many one-parameter
families of elliptic curves over Q (see [ALM, BEW] for a proof).
Lemma B.1 (Quadratic Legendre Sums). Assume a and b are not both zero mod p and p > 2. Then
Σ_{t mod p} ((at² + bt + c)/p) = (p − 1)(a/p) if p | b² − 4ac, and −(a/p) otherwise. (B.1)
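A quick randomized spot check of this identity (ours, with the Legendre symbol evaluated by Euler's criterion):

    import random

    def legendre(a, p):
        a %= p
        if a == 0:
            return 0
        return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

    random.seed(0)
    for p in [5, 7, 11, 13, 101]:
        for _ in range(25):
            a, b, c = (random.randrange(p) for _ in range(3))
            if a % p == 0 and b % p == 0:
                continue                       # the lemma assumes a, b not both zero mod p
            brute = sum(legendre(a * t * t + b * t + c, p) for t in range(p))
            disc = (b * b - 4 * a * c) % p
            expected = (p - 1) * legendre(a, p) if disc == 0 else -legendre(a, p)
            assert brute == expected
    print("Lemma B.1 agrees with brute force on all samples")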
B.1. The family y2 = x3 +B(6T + 1)κ over Q(T ).
In the arguments below, we constantly use the fact that if p|∆(t) then at(p) = 0. This allows us
to ignore the p |r∆(t) conditions. We assume B ∈ {1, 2, 3, 6} and κ ∈ {1, 2}.
Lemma B.2. We have
A0,F(p) =
p− 1 if p ≥ 5
0 otherwise.
(B.2)
Proof. We have A0,F(p) = 0 if p = 2 or 3 because, in these cases, there are no t such that p |r∆(t).
If p ≥ 5 then p |r ∆(t) is equivalent to p |r B(6t + 1) mod p. As 6 is invertible mod p, as t ranges
over Z/pZ there is exactly one value such that B(6t+ 1) ≡ 0 mod p, and the claim follows. �
Lemma B.3. We have A1,F(p) = 0.
Proof. The claim is immediate for p = 2, 3 or p ≡ 2 mod 3; it is also clear when κ = 1. Thus we
assume below that p ≡ 1 mod 3 and κ = 2:
−A1,F(p) =
t mod p
at(p)
t mod p
x mod p
x3 +B(6t+ 1)2
t mod p
x mod p
x3 +Bt2
. (B.3)
The x = 0 term gives
(p−1), and the remaining p−1 values of x each give −
by LemmaB.1.
Therefore A1,F(p) = 0. �
Lemma B.4. We have A2,F(p) = 2p2 − 2p if p ≡ 1 mod 3, and 0 otherwise.
Proof. The claim is immediate for p = 2, 3 or p ≡ 2 mod 3. We do the proof for the harder case of
κ = 2; the result is the same when κ = 1 and follows similarly. For p ≡ 1 mod 3:
A2,F(p) =
t mod p
a2t (p) =
t mod p
x mod p
y mod p
x3 +B(6t+ 1)2
y3 +B(6t+ 1)2
t mod p
x mod p
y mod p
x3 +Bt2
y3 +Bt2
y mod p
x3 +Bt2
y3 +Bt2
x mod p
y mod p
tx3 +B
ty3 +B
x mod p
y mod p
t mod p
tx3 +B
ty3 +B
. (B.4)
We use inclusion / exclusion to reduce to xy 6= 0. If x = 0, the t and y-sums give p
If y = 0, the t and x-sums give p
. We subtract the doubly counted contribution from
x = y = 0, which gives p
. Thus
A2,F(p) =
t mod p
tx3 +B
ty3 +B
+ 2p− p− p2. (B.5)
By Lemma B.1, the t-sum is (p − 1)
if p|B2(x3 − y3)2 and −
otherwise; as B|6∞
we have p |r B. As p = 6m + 1, let g be a generator of the multiplicative group Z/pZ. Solving
g3a ≡ g3b yields b = a, a + 2m, or a + 4m, so x3 ≡ y3 three times (for x, y 6≡ 0 mod p). In each
instance y equals x times a square (1, g2m, g4m). Thus
A2,F(p) =
y3≡x3
+ p− p2
= (p− 1)3p+ p− p2 = 2p2 − 2p. (B.6)
B.2. The family y2 = x3−(36T+6)(36T+5)x over Q(T ). In the arguments below, we constantly
use the fact that if p|∆(t) then at(p) = 0. This allows us to ignore the p |r∆(t) conditions.
Lemma B.5. We have A0,F(p) = p− 2 if p ≥ 3 and 0 otherwise.
Proof. We have A0,F(p) = 0 if p = 2 because there are no t such that p|r∆(t). If p ≥ 3 then p|r∆(t)
is equivalent to p |r (36t + 6)(36t + 5) mod p. As 36 is invertible mod p, as t ranges over Z/pZ
there are exactly two values such that (36t+ 6)(36 + 5) ≡ 0 mod p, and the claim follows. �
Lemma B.6. We have A1,F(p) = −2p if p ≡ 1 mod 4 and 0 otherwise.
Proof. The claim is immediate if p = 2 or p ≡ 3 mod 4. If p ≡ 1 mod 4 then we may replace
36t+ 6 with t in the complete sums, and we find that
A1,F(p) = −
t mod p
x mod p
x3 − t(t− 1)x
x mod p
t mod p
t2 − t− x2
. (B.7)
As p ≡ 1 mod 4, −1 is a square, say −1 ≡ α2 mod p. Thus
above. Further by Lemma
B.1 the t-sum is p− 1 if p divides the discriminant 1 + 4x2, and is −1 otherwise. There are always
exactly two distinct solutions to 1 + 4x2 ≡ 0 mod p for p ≡ 1 mod 4, and both roots are squares
modulo p.
To see this, letting w denote the inverse of w modulo p we find the two solutions are ±2α. As(
= 1, we have
. Let p = 4n+1. Then
= (−1)(p2−1)/8 = (−1)n,
and by Euler’s criterion we have
≡ α(p−1)/2 ≡
)(p−1)/4 ≡ (−1)n mod p. (B.8)
= 1, and the two roots to 1 + 4x2 ≡ 0 mod p are both squares. Therefore
A1,F(p) = −2p +
x mod p
= −2p. (B.9)
Remark B.7. By the results of Rosen and Silverman [RoSi], our family has rank 1 over Q(T ); this
is not surprising as we have forced the point (36T + 6, 36T + 6) to lie on the curve over Q(T ).
Lemma B.8. Let E denote the elliptic curve y2 = x3 − x, with aE(p) the corresponding Fourier
coefficient. We have
A2,F(p) =
2p(p− 3)− aE(p)2 if p ≡ 1 mod 4
0 otherwise.
(B.10)
Proof. The proof follows by similar calculations as above. �
B.3. The family y2 = x3 − 3x+ 12T over Q(T ). For the family y2 = x3 − 3x+ 12T , we have
c4(T) = 2⁴ · 3², c6(T) = 2⁷ · 3⁴ T, ∆(T) = 2⁶ · 3³ (6T − 1)(6T + 1); (B.11)
further direct calculation shows that at(2) = at(3) = 0 for all t. Thus our equation is a global
minimal Weierstrass equation, and we need only worry about primes p ≥ 5. Note that c4(t) and
∆(t) are never divisible by a prime p ≥ 5; thus this family can only have multiplicative reduction
for primes exceeding 3.
If p|6t−1, replacing xwith x+1 (to move the singular point to (0, 0)) gives y2−3x2 ≡ x3 mod p.
The reduction is split if
3 ∈ Fp and non-split otherwise. Thus if p|6t − 1 then at(p) =
similar argument (sending x to x − 1) shows that if p|6t + 1 then at(p) =
. A straightforward
calculation shows
1 if p ≡ 1, 11 mod 12
−1 if p ≡ 5, 7 mod 12,
1 if p ≡ 1, 7 mod 12
−1 if p ≡ 5, 11 mod 12.
(B.12)
Lemma B.9. We have A0,F(p) = p− 2 if p ≥ 3 and 0 otherwise.
Proof. We have A0,F(p) = 0 if p = 2 or 3 by direct computation. As 12 is invertible mod p, as t
ranges over Z/pZ there are exactly two values such that (6t− 1)(6t+1) ≡ 0 mod p, and the claim
follows. �
Lemma B.10. A1,F(2) = A1,F(3) = 0, and for p ≥ 5 we have
A1,F(p) =
2 if p ≡ 1 mod 12
0 if p ≡ 7, 11 mod 12
−2 if p ≡ 5 mod 12.
(B.13)
Proof. The claim is immediate for p ≤ 3. We have
A1,F(p) = −
t mod p
∆(t) 6≡0 mod p
at(p)
t mod p
x3 − 3x+ 12t
t mod p
∆(t)≡0 mod p
x3 − 3x+ 12
= 0 +
; (B.14)
the last line follows from our formulas for at(p) for p|∆(t). �
Lemma B.11. A2,F(2) = A2,F(3) = 0, and for p ≥ 5 we have A2,F(p) = p2 − 3p− 4− 2
Proof. The claim is immediate for p ≤ 3. For p ≥ 5 we have at(p)2 = 1 if p|∆(t). Thus
A2,F(p) =
t mod p
∆(t) 6≡0 mod p
at(p)
t mod p
x mod p
y mod p
x3 − 3x+ 12t
y3 − 3y + 12t
− 2. (B.15)
Sending t→ 12−1t mod p, we have a quadratic in t with discriminant
(x3 − 3x)− (y3 − 3y)
= (x− y)2 · (y2 + xy + x2 − 3)2 = δ(x, y). (B.16)
We use Lemma B.1 to evaluate the t-sum; it is p − 1 if p|δ(x, y), and −1 otherwise. Letting
η(x, y) = 1 if p|δ(x, y) and 0 otherwise, we have
A2,F(p) =
x mod p
y mod p
η(x, y)p− p2 − 2. (B.17)
For a fixed x, p|δ(x, y) if y = x or if y2 + xy + x2 − 3 ≡ 0 mod p (we must be careful about
double counting). There are two distinct solutions to the quadratic (in y) if its discriminant 12−3x2
is a non-zero square in Z/pZ, one solution (namely −2−1x, which is not equivalent to x) if it is
congruent to zero (which happens only when x ≡ ±2 mod p), and no solutions otherwise. If the
discriminant 12−3x2 is a square, the two solutions are distinct from x provided that x 6≡ ±1 mod p
(if x ≡ ±1 mod p then one of the solutions is x and the other is distinct). Thus, for a fixed x, the
number of y such that p|δ(x, y) is 2 +
12−3x2
if x 6≡ ±1,±2 and 2 if x ≡ ±1,±2. Therefore
A2,F(p) =
x mod p
x 6≡±1,±2 mod p
12− 3x2
x≡±1,±2 mod p
2 · p− p2 − 2
= 2(p− 4)p+ p
x mod p
x 6≡±1,±2 mod p
12− 3x2
+ 4 · 2p− p2 − 2
= p2 − 2 + p
t mod p
12− 3x2
− 2p = p2 − 2p− 2− p
, (B.18)
where we used Lemma B.1 to evaluate the x-sum (as p ≥ 5, p does not divide its discriminant). �
REFERENCES
[ALM] S. Arms, A. Lozano-Robledo and S. J. Miller, Constructing one-parameter families of elliptic curves over
Q(T ) with moderate rank, Journal of Number Theory 123 (2007), no. 2, 388–402.
[BEW] B. Berndt, R. Evans, and K. Williams, Gauss and Jacobi Sums, Canadian Mathematical Society Series of
Monographs and Advanced Texts, Vol. 21, Wiley-Interscience Publications, John Wiley & Sons, New York,
1998.
[BBLM] E. Bogomolny, O. Bohigas, P. Leboeuf and A. G. Monastra, On the spacing distribution of the Riemann
zeros: corrections to the asymptotic result, Journal of Physics A: Mathematical and General 39 (2006), no.
34, 10743–10754.
[BK] E. B. Bogomolny and J. P. Keating, Gutzwiller’s trace formula and spectral statistics: beyond the diagonal
approximation, Phys. Rev. Lett. 77 (1996), no. 8, 1472–1475.
[CHT] L. Clozel, M. Harris and R. Taylor, Automorphy for some ℓ-adic lifts of automorphic mod l representations,
preprint. http://www.math.harvard.edu/∼rtaylor/twugnew.ps
[CF] B. Conrey and D. Farmer, Mean values of L-functions and symmetry, Internat. Math. Res. Notices 2000,
no. 17, 883–908.
[CFKRS] B. Conrey, D. Farmer, P. Keating, M. Rubinstein and N. Snaith, Integral moments of L-functions, Proc.
London Math. Soc. (3) 91 (2005), no. 1, 33–104.
[CFZ1] J. B. Conrey, D. W. Farmer and M. R. Zirnbauer, Autocorrelation of ratios of L-functions, preprint.
http://arxiv.org/abs/0711.0718
[CFZ2] J. B. Conrey, D. W. Farmer and M. R. Zirnbauer, Howe pairs, supersymmetry, and ra-
tios of random characteristic polynomials for the classical compact groups, preprint.
http://arxiv.org/abs/math-ph/0511024
[CS] J. B. Conrey and N. C. Snaith, Applications of the L-functions Ratios Conjecture, Proc. Lon. Math. Soc. 93
(2007), no 3, 594–646.
[DHKMS] E. Dueñez, D. K. Huynh, J. P. Keating, S. J. Miller and N. C. Snaith, Bose-Einstein condensation of zeros
of L-functions, preprint.
[DM1] E. Dueñez and S. J. Miller, The low lying zeros of a GL(4) and a GL(6) family of L-functions, Compositio
Mathematica 142 (2006), no. 6, 1403–1425.
[DM2] E. Dueñez and S. J. Miller, The effect of convolving families of L-functions on the underlying group sym-
metries, preprint. http://arxiv.org/abs/math/0607688
[FI] E. Fouvry and H. Iwaniec, Low-lying zeros of dihedral L-functions, Duke Math. J. 116 (2003), no. 2,
189–217.
[GR] I. Gradshteyn and I. Ryzhik, Tables of Integrals, Series, and Products, New York, Academic Press, 1965.
[Gü] A. Güloğlu, Low Lying Zeros of Symmetric Power L-Functions, Internat. Math. Res. Notices 2005, no. 9,
517–550.
[Guy] R.K. Guy, Catwalks, sandsteps and Pascal pyramids, J. Integer Seq. 3 (2000), Article 00.1.6,
http://www.cs.uwaterloo.ca/journals/JIS/VOL3/GUY/catwalks.html.
[Ha] G. Harcos, Uniform approximate functional equation for principal L-functions, Int. Math. Res. Not. 2002,
no. 18, 923–932.
[Hej] D. Hejhal, On the triple correlation of zeros of the zeta function, Internat. Math. Res. Notices 1994, no. 7,
294–302.
[HL] J. Hoffstein and P. Lockhart, Coefficients of Maass forms and the Siegel zero. With an appendix by Dorian
Goldfeld, Hoffstein and Daniel Lieman, Ann. of Math. (2) 140 (1994), no. 1, 161–181.
[HM] C. Hughes and S. J. Miller, Low-lying zeros of L-functions with orthogonal symmetry, Duke Mathematical
Journal, 136 (2007), no. 1, 115–172.
[HR] C. Hughes and Z. Rudnick, Linear statistics of low-lying zeros of L-functions, Quart. J. Math. Oxford 54
(2003), 309–333.
[I1] H. Iwaniec, Small eigenvalues of Laplacian for Γ0(N), Acta Arith. 56 (1990), no. 1, 65–82.
[ILS] H. Iwaniec, W. Luo and P. Sarnak, Low lying zeros of families of L-functions, Inst. Hautes Études Sci. Publ.
Math. 91 (2000), 55–131.
[KaSa1] N. Katz and P. Sarnak, Random Matrices, Frobenius Eigenvalues and Monodromy, AMS Colloquium Pub-
lications 45, AMS, Providence, 1999.
[KaSa2] N. Katz and P. Sarnak, Zeros of zeta functions and symmetries, Bull. AMS 36 (1999), 1–26.
[KeSn1] J. P. Keating and N. C. Snaith, Random matrix theory and ζ(1/2+ it), Comm. Math. Phys. 214 (2000), no.
1, 57–89.
[KeSn2] J. P. Keating and N. C. Snaith, Random matrix theory and L-functions at s = 1/2, Comm. Math. Phys. 214
(2000), no. 1, 91–110.
[KeSn3] J. P. Keating and N. C. Snaith, Random matrices and L-functions, Random matrix theory, J. Phys. A 36
(2003), no. 12, 2859–2881.
[Kn] A. Knapp, Elliptic Curves, Princeton University Press, Princeton, 1992.
[Lan] E. Landau, Handbuch der Lehre von der Verteilung der Primzahlen, 2nd ed., Chelsea, 1953.
[Mic] P. Michel, Rang moyen de familles de courbes elliptiques et lois de Sato-Tate, Monat. Math. 120 (1995),
127–136.
[Mil1] S. J. Miller, 1- and 2-Level Densities for Families of Elliptic Curves: Evidence for the Underlying Group
Symmetries, P.H.D. Thesis, Princeton University, 2002,
http://www.williams.edu/go/math/sjmiller/public html/math/thesis/thesis.html.
[Mil2] S. J. Miller, 1- and 2-level densities for families of elliptic curves: evidence for the underlying group
symmetries, Compositio Mathematica 140 (2004), 952–992.
[Mil3] S. J. Miller, Variation in the number of points on elliptic curves and applications to excess rank, C. R. Math.
Rep. Acad. Sci. Canada 27 (2005), no. 4, 111–120.
[Mil4] S. J. Miller, Investigations of zeros near the central point of elliptic curve L-functions, Experimental Math-
ematics 15 (2006), no. 3, 257–279.
[Mil5] S. J. Miller, An identity for sums of polylogarithm functions, Integers: Electronic Journal Of Combinatorial
Number Theory 8 (2008), #A15.
[Mil6] S. J. Miller, A Symplectic Test of the L-Functions Ratios Conjecture, Int Math Res Notices (2008) Vol.
2008, article ID rnm146, 36 pages, doi:10.1093/imrn/rnm146.
[Mil7] S. J. Miller, An orthogonal test of the L-Functions Ratios Conjecture, preprint.
http://arxiv.org/abs/0805.4208
[Mon] H. Montgomery, The pair correlation of zeros of the zeta function, Analytic Number Theory, Proc. Sympos.
Pure Math. 24, Amer. Math. Soc., Providence, 1973, 181–193.
[Mor] P. Moree, Chebyshev’s bias for composite numbers with restricted prime divisors, Math. Comp. 73 (2004),
425–449.
[Mur] V. K. Murty, On the Sato-Tate conjecture, in Number Theory Related to Fermat’s Last Theorem (Cam-
bridge, Mass., 1981), pages 195–205, Birkhäuser, Boston, 1982.
[Nag] T. Nagell, Introduction to Number Theory, Chelsea Publishing Company, New York, 1981.
[Od1] A. Odlyzko, On the distribution of spacings between zeros of the zeta function, Math. Comp. 48 (1987), no.
177, 273–308.
[Od2] A. Odlyzko, The 1022-nd zero of the Riemann zeta function, Proc. Conference on Dynamical, Spectral and
Arithmetic Zeta-Functions, M. van Frankenhuysen and M. L. Lapidus, eds., Amer. Math. Soc., Contempo-
rary Math. series, 2001, http://www.research.att.com/∼amo/doc/zeta.html.
[OS] A. E. Özlük and C. Snyder, On the distribution of the nontrivial zeros of quadratic L-functions close to the
real axis, Acta Arith. 91 (1999), no. 3, 209–228.
[RR1] G. Ricotta and E. Royer, Statistics for low-lying zeros of symmetric power L-functions in the level aspect,
preprint. http://arxiv.org/abs/math/0703760
[RR2] G. Ricotta and E. Royer, Lower order terms for the one-level density of symmetric power L-functions in the
level aspect, preprint. http://arxiv.org/pdf/0806.2908
[RoSi] M. Rosen and J. Silverman, On the rank of an elliptic surface, Invent. Math. 133 (1998), 43–67.
[RoSc] J. B. Rosser and L. Schoenfeld, Approximate formulas for some functions of prime numbers, Illinois J.
Math. 6 (1962) 64–94.
[Ro] E. Royer, Petits zéros de fonctions L de formes modulaires, Acta Arith. 99 (2001), 47–172.
[Rub] M. Rubinstein, Low-lying zeros of L-functions and random matrix theory, Duke Math. J. 109 (2001), no. 1,
147–181.
[RS] Z. Rudnick and P. Sarnak, Zeros of principal L-functions and random matrix theory, Duke Math. J. 81
(1996), 269–322.
[Si1] J. Silverman, The Arithmetic of Elliptic Curves, Graduate Texts in Mathematics 106, Springer-Verlag,
Berlin - New York, 1986.
[Si2] J. Silverman, Advanced Topics in the Arithmetic of Elliptic Curves, Graduate Texts in Mathematics 151,
Springer-Verlag, Berlin - New York, 1994.
[Sl] N. Sloane, On-Line Encyclopedia of Integer Sequences,
http://www.research.att.com/∼njas/sequences/Seis.html.
[Tay] R. Taylor, Automorphy for some ℓ-adic lifts of automorphic mod l representations. II, preprint.
http://www.math.harvard.edu/∼rtaylor/twugk6.ps
[Yo1] M. Young, Lower-order terms of the 1-level density of families of elliptic curves, Internat. Math. Res.
Notices 2005, no. 10, 587–633.
[Yo2] M. Young, Low-lying zeros of families of elliptic curves, J. Amer. Math. Soc. 19 (2006), no. 1, 205–250.
E-mail address: [email protected]
DEPARTMENT OF MATHEMATICS AND STATISTICS, WILLIAMS COLLEGE, WILLIAMSTOWN, MA 01267
0704.0925 | Spinor Dynamics in an Antiferromagnetic Spin-1 Condensate | Spinor Dynamics in an Antiferromagnetic Spin-1 Condensate
A. T. Black, E. Gomez, L. D. Turner, S. Jung, and P. D. Lett
Joint Quantum Institute, University of Maryland and
National Institute of Standards and Technology, Gaithersburg, Maryland 20899
(Dated: October 30, 2018)
We observe coherent spin oscillations in an antiferromagnetic spin-1 Bose-Einstein condensate
of sodium. The variation of the spin oscillations with magnetic field shows a clear signature of
nonlinearity, in agreement with theory, which also predicts anharmonic oscillations near a critical
magnetic field. Measurements of the magnetic phase diagram agree with predictions made in the
approximation of a single spatial mode. The oscillation period yields the best measurement to date
of the sodium spin-dependent interaction coefficient, determining that the difference between the
sodium spin-dependent s-wave scattering lengths af=2−af=0 is 2.47 ± 0.27 Bohr radii.
PACS numbers: 03.75.Mn, 32.80.Cy, 32.80.Pj
Atomic collisions are essential to the formation of Bose-
Einstein condensates (BEC), redistributing energy dur-
ing evaporative cooling. Collisions can be coherent and
reversible, leading to diverse phenomena such as super-
fluidity [1] and reversible formation of molecules [2] in
BECs with a single internal state. When internal de-
grees of freedom are included (as in spinor condensates),
coherent collisions lead to rich dynamics [3, 4] in which
the population oscillates between different Zeeman sub-
levels. We present the first observation of coherent spin
oscillations in a spin-1 condensate with antiferromagnetic
interactions (in which the interaction energy of colliding
spin-aligned atoms is higher than that of spin-antialigned
atoms.)
Spinor condensates have been a fertile area for the-
oretical studies of dynamics [5, 6, 7, 8], ground state
structures [9, 10], and domain formation [11]. Extensive
experiments on the ferromagnetic F=1 hyperfine ground
state of 87Rb have demonstrated spin oscillations and co-
herent control of spinor dynamics [3, 12]. Observation of
domain formation in 23Na demonstrated the antiferro-
magnetic nature of the F=1 ground state [13] and de-
tected tunneling across spin domains [14]; no spin oscilla-
tions have been reported in sodium BEC until now. The
F=2 state of 87Rb is thought to be antiferromagnetic, but
a cyclic phase is possible [15, 16]. Experiments on this
state have demonstrated that the amplitude and period
of spin oscillations can be controlled magnetically [4].
At low magnetic fields, spin interactions dominate the
dynamics. The different sign of the spin dependent in-
teraction causes the antiferromagnetic F=1 case to differ
from the ferromagnetic one both in the structure of the
ground-state magnetic phase diagram and in the spinor
dynamics. Both cases can exhibit a regime of slow, an-
harmonic spin oscillations; however, this behavior is pre-
dicted over a wide range of initial conditions only in the
antiferromagnetic case [8]. The spin interaction energies
in sodium are more than an order of magnitude larger
than in 87Rb F =1 for a given condensate density [3],
facilitating studies of spinor dynamics.
The dynamics of the spin-1 system are much simpler
than the spin-2 case [4, 15, 16], having a well-developed
analytic solution [8]. This solution predicts a divergence
in the oscillation period (not to be confused with the
amplitude peak observed in 87Rb F=2 [4] oscillations).
This Letter reports the first measurement of the
ground state magnetic phase diagram of a spinor con-
densate, and the first experimental study of coherent
spinor dynamics in an antiferromagnetic spin-1 conden-
sate. Both show good agreement with the single-spatial-
mode theory [10]. To study the dynamics, we displace
the spinor from its ground state, observing the resulting
oscillations of the Zeeman populations as a function of
applied magnetic field B. At low field the oscillation pe-
riod is constant, at high field it decreases rapidly, and at a
critical field it displays a resonance-like feature, all as pre-
dicted by theory [8]. These measurements have allowed
us to improve by a factor of three the determination of
the sodium F = 1 spin-dependent interaction strength,
which is proportional to the difference af=2 − af=0 in
the spin-dependent scattering lengths.
The state of the condensate in the single-mode ap-
proximation (SMA) is written as the product φ(r)ζ of a
spin-independent spatial wavefunction φ(r) and a spinor
ζ = (√ρ− e^{iθ−}, √ρ0 e^{iθ0}, √ρ+ e^{iθ+}). We use ρ−, ρ0, and
ρ+ (θ−, θ0, and θ+) to denote fractional populations
(phases) of the Zeeman sublevels mF = −1, 0, and 1,
so that Σi ρi = 1. The spinor’s ground state and its non-
linear dynamics may be derived from the spin-dependent
part of the Hamiltonian in the single-mode and mean-
field approximations, subject to the constraints that to-
tal atom number N and magnetization m≡ ρ+−ρ− are
conserved [8]. The “classical” spinor Hamiltonian E is a
function of only two canonical variables: the fractional
population ρ0 and the relative phase θ ≡ θ+ + θ− − 2θ0.
It is given by
E = δ(1−ρ0) + cρ0 [ (1−ρ0) + √((1−ρ0)² − m²) cos θ ], (1)
where δ = h × (2.77 × 10¹⁰ Hz/T²)B² is the quadratic
Zeeman shift [8] with h the Planck constant. (The linear
Zeeman shift has no effect on the dynamics.) The spin-
dependent interaction energy is c= c2 〈n〉, where 〈n〉 is
the mean particle density of the condensate and
c2 = (4πℏ²/3M)(af=2 − af=0) (2)
is the spin-dependent interaction coefficient [8, 17]. Here
M is the atomic mass. af=2 and af=0 are the s-wave
scattering lengths for a colliding pair of atoms of total
spin f = 2 and f = 0, respectively; Bose symmetry en-
sures there are no s-wave collisions with total spin of
1. If c2 is positive (negative), the system is antiferro-
magnetic (ferromagnetic). The spinor ground state and
spinor dynamics are determined by Eq. (1).
The apparatus is similar to that described previ-
ously [18]. We produce a BEC of 105 23Na atoms in
the F=1 state, with an unobservably small thermal frac-
tion, in a crossed-beam 1070nm optical dipole trap. The
trap beams lie in the horizontal xy plane, so that the trap
curvature is nearly twice as large along the vertical z axis
as in the xy plane. By applying a small magnetic field
gradient with the MOT coils (less than 10mT/m) during
the 9 s of forced evaporation, we fully polarize the BEC:
all atoms are in mF =+1. Conservation of spin angular
momentum ensures that the magnetization remains con-
stant once evaporation has ceased; a state with ρ+ = 1
persists for the lifetime of the condensate, about 14 s.
We then turn off the gradient field and adiabatically
apply a bias field B of 4 to 51µT along x̂, leaving the
BEC in the ρ+ = 1 state. To prepare an initial state,
we apply an rf field resonant with the linear Zeeman
splitting; typically the frequency is tens to hundreds of
kilohertz. Rabi flopping in the three-level system is ob-
served [19], and controlling the amplitude and duration
of the pulse can produce any desired magnetization m,
which also determines the population ρ0. The flopping
time is less than 50µs, much shorter than the character-
istic times for spin evolution governed by Eq. (1). Using
this Zeeman transition avoids populating the F=2 state,
thus avoiding inelastic losses, which are much greater for
23Na than for 87Rb.
We measure the populations ρi of atoms in the three
Zeeman sublevels by Stern-Gerlach separation and ab-
sorption imaging [20]. The Stern-Gerlach gradient is par-
allel to the bias field ~B, while the imaging beam propa-
gates in the ẑ direction. The phase θ is not measured.
To measure the ground state population distribution
as a function of magnetization and magnetic field, we
first set the magnetization using the rf pulse. We then
ramp the field to a desired final value over 1 s, wait 3 s
for equilibration, and measure the populations as above.
Figure 1(b) displays the measured ground-state mag-
netic phase diagram. The theoretical prediction in
Fig. 1(a) is the population ρ0 that minimizes the energy,
FIG. 1: a) Theoretical prediction of the ground-state frac-
tional population ρ0 as a function of magnetization m and
applied magnetic field B, assuming a spin-dependent interac-
tion energy c=h×20.5 Hz. The thick line lying in the ρ0 = 0
plane indicates the boundary between the ρ0 = 0 and the
ρ0 > 0 regions. b) Experimental measurement. The surface
plot is produced by interpolation of data points.
Eq.(1). Such minima always occur at θ = π for antifer-
romagnetic interactions. The measurements agree well
with the prediction, which is made for spin interaction
energy c= h×20.5Hz (determined by spin dynamics as
described below).
The first term of Eq. (1) depends on the external mag-
netic field and tends to maximize the equilibrium ρ0
population. The second, spin dependent, term has the
same sign as c2 and in the antiferromagnetic case tends
to minimize the equilibrium ρ0 population. The phase
transition indicated by the thick line in Fig. 1a arises
at the point where these opposing tendencies cancel for
ρ0 = 0. Along the transition contour, ρ0 rapidly falls
to zero. By contrast, the ferromagnetic phase diagram
has ρ0 = 0 only at m = 1. In the region B < 15µT
and m > 0.6, there should be virtually no population in
mF = 0 for antiferromagnetic interactions, and popula-
tions up to ρ0 = 0.34 for ferromagnetic interactions (as-
suming the same magnitude of c). For our equilibrium
data, the reduced χ2 with respect to the antiferromag-
netic (ferromagnetic) prediction in this region is 2 (20).
This demonstrates that sodium F =1 spin interactions
are antiferromagnetic, as previously shown by the misci-
bility of spin domains formed in a quasi-one-dimensional
trap [13].
Across most of the phase diagram, the scatter in the
population is consistent with measured shot-to-shot vari-
ation in atom number. This variation is 20%, implying
an 8% variation in the mean condensate density accord-
ing to Thomas-Fermi theory. The variance of results is
not due to the magnetic field (calibrated to a precision of
0.2µT), nor to residual field variations across the BEC
(less than 250pT). Uncertainties in setting the magne-
tization are obviated, as the magnetization is measured
for each point as the difference in fractional populations
m= ρ+−ρ−. Discrepancies between theory and exper-
iment at low magnetic fields may be attributed to the
field dependence of the equilibration time. We observe
equilibration times (see below) ranging from 200ms at
high fields to several seconds at low fields, by which time
atom loss is substantial.
If the spinor is driven away from equilibrium, the full
coherent dynamics of the spinor system Eq.(1) are re-
vealed. We initiate the spinor dynamics with the rf tran-
sition described above, but now look at the evolution over
millisecond timescales.
The spinor dynamics are described by the Hamilton
equations for Eq. (1) [8]:
ρ̇0 = −(2/ħ) ∂E/∂θ, (2)
θ̇ = (2/ħ) ∂E/∂ρ0, (3)
where E is the energy per particle of Eq. (1).
The system is closely related to the double-well “bosonic
Josephson junction” (BJJ) [21, 22] and exhibits a regime
of small, harmonic oscillations and, near a critical field
Bc, is predicted to display large, anharmonic oscilla-
tions. At Bc the period diverges, where δ(Bc) = c[(1 − ρ0) + √((1 − ρ0)² − m²) cos θ], with ρ0 and θ taken at t = 0 [8]. The critical value corresponds to a transi-
tion from periodic-phase solutions of Eq. (3) to running-
phase solutions. At the critical value it is predicted that
the population is trapped in a spin state with ρ0=0. This
phenomenon is related to the macroscopic quantum self-
trapping that has been observed in the BJJ [22]. How-
ever, very small fluctuations in field or density will drive
ρ0 away from 0. Observing a ten-fold increase in the pe-
riod above its zero-field value would require a technically
challenging magnetic field stability of better than 100 fT.
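A minimal numerical sketch of these dynamics is given below (not the authors' code). It integrates the Hamilton equations (2)-(3) under the same assumed SMA energy as above, with the fitted initial phase θ(0) = 0.5 rad quoted later in the text; the prefactor 2/ħ and the quadratic Zeeman coefficient are assumptions of this sketch.

import numpy as np
from scipy.integrate import solve_ivp

c, m, theta0 = 20.5, 0.0, 0.5            # Hz, magnetization, assumed initial phase (rad)

def rhs(t, y, q):
    rho0, theta = y
    root = np.sqrt(max((1.0 - rho0) ** 2 - m ** 2, 1e-12))
    dE_dtheta = -c * rho0 * root * np.sin(theta)
    dE_drho0 = -q + c * ((1.0 - 2.0 * rho0)
                         + ((1.0 - rho0) * (1.0 - 2.0 * rho0) - m ** 2) / root * np.cos(theta))
    # Hamilton equations (2)-(3); with E in units of h (Hz), the prefactor 2/hbar is 4*pi s^-1
    return [-4.0 * np.pi * dE_dtheta, 4.0 * np.pi * dE_drho0]

def period(B_uT):
    q = 277.0 * (B_uT / 100.0) ** 2      # assumed 23Na quadratic Zeeman shift, Hz
    sol = solve_ivp(rhs, (0.0, 0.2), [0.5, theta0], args=(q,), max_step=1e-4)
    r = sol.y[0]
    peaks = np.where((r[1:-1] > r[:-2]) & (r[1:-1] > r[2:]))[0] + 1
    return np.mean(np.diff(sol.t[peaks])) if len(peaks) > 1 else np.nan

for B in (6.1, 20.0, 28.0, 40.0):
    print(B, period(B))                  # low-field periods come out near the ~25 ms scale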
Figure 2 plots the period and amplitude of oscillation
as a function of magnetic field. An example of the os-
cillating populations is shown in the inset. The spinor
condensate is prepared with initial ρ0 = 0.50 ± 0.01¹ and
m = 0.00 ± 0.02, and a plot of ρ0 versus time is taken
at each field value. Qualitatively, the period is nearly
independent of magnetic field at low fields, with a small
peak at a critical value Bc = 28µT, followed by a steep
decline in period. The amplitude likewise shows a max-
imum at Bc. Oscillations are visible over durations of
40ms to 300ms. Beyond these times, the amplitude of
the shot-to-shot fluctuations in ρ0 is roughly equal to
the harmonic amplitude. This indicates dephasing due
to shot-to-shot variation in oscillation frequency, proba-
bly associated with the variations in magnetic field and
condensate density, rather than any fundamental damp-
ing process. At even longer times, we observe damping
and equilibration to a new constant ρ0; the damping time
varies with magnetic field from 200ms to 5 s.
For the theoretical prediction in Fig. 2, the initial value
of ρ0 and m are obtained experimentally. We treat only
c and θ(t = 0) as free parameters; c is also predicted
by prior determinations of c2 and our knowledge of the
condensate density. The initial relative phase is not the
equilibrium value θ=π, due to our rf preparation. For a
three-level system driven in resonance with both transi-
tions, the relative phase is θ=0 at all times during the rf
transition, as we derive from Ref. [19]. Small deviations
from initial θ=0 could be caused by an unequal splitting
between the levels, from e.g., the quadratic Zeeman shift.
The best fit to the data in Fig. 2a and b is obtained
by using c=h×(21± 2)Hz and θ(t=0)=0.5± 0.3 (with
no other free parameters). Away from the critical field
Bc, agreement with theory is good. The fitted value of
c implies that Bc is 27µT, in reasonable agreement with
the apparent peak observed at 28µT. Our ability to ob-
serve strong variations in period near Bc is limited by
density fluctuations (8%) and magnetic field fluctuations
(0.2µT). Near Bc, typically only one cycle is visible be-
fore dephasing is complete. Such rapid dephasing can,
itself, be taken as evidence of a strongly B-dependent
period, as expected near the critical field.
To include the known fluctuations in density and mag-
netic field in our model, we perform a Monte Carlo sim-
ulation of the expected signal, based on measured, nor-
mally distributed shot-to-shot variations in values of c,
δ, m and ρ0(t = 0). At each value of B in Fig. 2, we
generate 80 simulated time traces, with each point in the
time trace determined from Eq. 3. We fit the simulated
traces using sine waves and record the mean and stan-
dard deviation of the amplitude and period of the fits.
The results (shaded regions in Fig. 2) show a less sharp
peak in the period. The smoothing of the peak at Bc is
consistent with our data.
1 All uncertainties in this paper are one standard deviation com-
bined statistical and systematic uncertainties
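The sketch below illustrates the Monte Carlo procedure just described, again under the assumed SMA equations of motion (it is not the authors' code). Shot-to-shot values of c, B, m and ρ0(0) are drawn from the quoted normal distributions, each trace is fit with a sine, and the mean and spread of the fitted period are collected.

import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

def rhs(t, y, q, c, m):
    rho0, theta = y
    root = np.sqrt(max((1 - rho0) ** 2 - m ** 2, 1e-12))
    dE_dtheta = -c * rho0 * root * np.sin(theta)
    dE_drho0 = -q + c * ((1 - 2 * rho0)
                         + ((1 - rho0) * (1 - 2 * rho0) - m ** 2) / root * np.cos(theta))
    return [-4 * np.pi * dE_dtheta, 4 * np.pi * dE_drho0]   # 2/hbar = 4*pi in units of h

def sine(t, A, T, phi, off):
    return A * np.sin(2 * np.pi * t / T + phi) + off

def one_trace(B_uT):
    c = 20.5 * (1 + 0.08 * rng.standard_normal())        # 8% mean-density scatter
    B = B_uT + 0.2 * rng.standard_normal()               # 0.2 uT field scatter
    q = 277.0 * (B / 100.0) ** 2                          # assumed 23Na quadratic Zeeman shift
    m = 0.02 * rng.standard_normal()
    rho0_0 = 0.50 + 0.01 * rng.standard_normal()
    sol = solve_ivp(rhs, (0, 0.08), [rho0_0, 0.5], args=(q, c, m),
                    max_step=1e-4, dense_output=True)
    t = np.linspace(0, 0.08, 400)
    return t, sol.sol(t)[0]

period_fits = []
for _ in range(80):                                       # 80 simulated traces per field value
    t, rho0 = one_trace(20.0)
    try:
        p, _ = curve_fit(sine, t, rho0, p0=[0.2, 0.025, 0.0, 0.5])
        period_fits.append(abs(p[1]))
    except RuntimeError:
        pass                                              # discard failed fits
print(np.mean(period_fits), np.std(period_fits))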
FIG. 2: Period (a) and amplitude (b) of spin oscillations as a
function of applied magnetic field, following a sudden change
in spin state. The solid lines are theoretical predictions from
solving Eq. (3). The theoretical prediction of the period goes
to infinity at about 27µT. The shaded regions are ±1 stan-
dard deviation about the mean values predicted by the Monte
Carlo simulation. Inset: Fractional Zeeman population (solid
dots) and magnetization (open circles) as a function of time
after the spinor condensate is driven to ρ0 = 0.5, m = 0.
B=6.1µT. The solid line is a sinusoidal fit.
It is clear in Fig. 2 that the oscillation period is insensi-
tive to the magnetic field at low values of the field. In this
regime, the period is sensitive only to the spin interaction
c2 and the density of the condensate 〈n〉. Measuring this
period allows us to determine the difference in scattering
lengths af=2 − af=0. The trace inset in Fig. 2 was taken
in this regime, at a magnetic field of B = 6.1µT, and
shows harmonic oscillations with period 24.6 ± 0.3ms.
Here the predicted period dependence on magnetic field,
14µs/µT, is indeed weak and the oscillations dephase
only slightly over the duration shown. Using this mea-
surement of the period (in which much more data was
taken than for each point making up Fig. 2 (a) and (b)),
and including uncertainties in initial θ, ρ0, and m, we
obtain the spin interaction energy c=h× (20.5± 1.3)Hz.
Finding af=2 − af=0 requires a careful measurement
of the condensate density. We take absorption images
with various expansion times to find the mean field en-
ergy. The images yield the column density in the xy
plane, and the distribution in the z direction can be
inferred from our trap beam geometry. We find that
the mean density of the condensate under the conditions
of the inset to Fig. 2 is 〈n〉 = (8.6 ± 0.9)× 1013 cm−3.
From this we calculate af=2 − af=0 = (2.47 ± 0.27)a0,
where a0 = 52.9pm is the Bohr radius. This is consis-
tent with a previous measurement, from spin domain
structure, of af=2 − af=0 = (3.5 ± 1.5)a0 [13] and is
smaller than the difference between scattering lengths
determined from molecular levels, af=2 = (55.1 ± 1.6)a0
and af=0=(50.0± 1.6)a0 [23]. A multichannel quantum
defect theory calculation gives af=2 − af=0 = 5.7a0 [24].
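For orientation, the quoted value can be reproduced from c and 〈n〉 with a few lines, assuming the standard F = 1 SMA relation c = (4πħ²/3M)(af=2 − af=0)〈n〉 (an assumption of this sketch, not stated explicitly in this excerpt):

from math import pi
import scipy.constants as k

M = 22.989769 * k.atomic_mass                      # 23Na atomic mass, kg
c = k.h * 20.5                                     # measured spin interaction energy, J
n_mean = 8.6e13 * 1e6                              # mean condensate density, m^-3
a_bohr = k.physical_constants['Bohr radius'][0]

da = 3 * M * c / (4 * pi * k.hbar ** 2 * n_mean)   # a_{f=2} - a_{f=0}
print(da / a_bohr)                                 # ~2.5, consistent with (2.47 +/- 0.27) a0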
Finally, we consider the validity of the spatial single-
mode approximation. The SMA was clearly violated in
previous work on 23Na [13] and 87Rb [3] F =1 spinor
condensates where spatial domains formed. Spatial de-
grees of freedom decouple from spinor dynamics when the
spin healing length ξs = 2πħ/√(2m|c2|n) is larger than the
condensate. From our density measurements we find typ-
ical Thomas-Fermi radii of (9.4, 6.7, 5.7)µm. The spin
healing length, based on our measurements of c, is typi-
cally ξs =17µm. We therefore operate within the range
of validity of the SMA. Furthermore, Stern-Gerlach ab-
sorption images show three components with identical
spatial distributions after ballistic expansion, indicating
that domain formation does not occur.
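A quick order-of-magnitude check of this criterion, using c = |c2|n from the measurement above (which density enters ξs is a matter of convention, so only the order of magnitude is meaningful):

from math import pi, sqrt
import scipy.constants as k

M = 22.989769 * k.atomic_mass
c = k.h * 20.5                                     # |c2| n, J
xi_s = 2 * pi * k.hbar / sqrt(2 * M * c)
print(xi_s * 1e6)                                  # ~20 um, larger than the 6-9 um Thomas-Fermi radii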
In conclusion, we have studied both the ground state
and the spinor dynamics of a sodium F=1 spinor conden-
sate. Both agree well with theoretical predictions in the
SMA. By measuring the spin oscillation frequency at low
magnetic field, we have determined the difference in spin-
dependent scattering lengths. The observed peak in oscil-
lation period as a function of magnetic field demonstrates
that the spinor dynamics are fundamentally nonlinear.
It also suggests the existence of the predicted regime of
highly anharmonic spin oscillations at the center of this
peak, which should be experimentally accessible with suf-
ficient control of condensate density and magnetic field.
Observation of anharmonic oscillations, as well as popu-
lation trapping and spin squeezing effects, could be aided
by a minimally destructive measurement of Zeeman pop-
ulations [25] to reduce the effects of magnetic field drifts
and shot-to-shot density variations.
We thank W. Phillips for helpful discussions, and ONR
and NASA for support. ATB acknowledges an NRC
Fellowship. LDT acknowledges an Australian-American
Fulbright Fellowship.
[1] M. R. Matthews, B. P. Anderson, P. C. Haljan, D. S.
Hall, C. E. Wieman, and E. A. Cornell, Phys. Rev. Lett.
83, 2498 (1999).
[2] E. A. Donley, N. R. Claussen, S. T. Thompson, and C. E.
Wieman, Nature (London) 417, 529 (2002).
[3] M.-S. Chang, Q. Qin, W. Zhang, L. You, and M. S. Chap-
man, Nature Physics 1, 111 (2005).
[4] J. Kronjäger, C. Becker, P. Navez, K. Bongs, and K. Sen-
gstock, Phys. Rev. Lett. 97, 110404 (2006).
[5] M. Moreno-Cardoner, J. Mur-Petit, M. Guilleumas,
A. Polls, A. Sanpera, and M. Lewenstein, arXiv:cond-
mat/0611379 (2006).
[6] C. K. Law, H. Pu, and N. P. Bigelow, Phys. Rev. Lett.
81, 5257 (1998).
[7] T. Ohmi and K. Machida, J. Phys. Soc. Jpn. 67, 1822
(1998).
[8] W. Zhang, D. L. Zhou, M.-S. Chang, M. S. Chapman,
and L. You, Phys. Rev. A 72, 013602 (2005).
[9] T.-L. Ho, Phys. Rev. Lett. 81, 742 (1998).
[10] W. Zhang, S. Yi, and L. You, New J. Phys. 5, 77 (2003).
[11] W. Zhang, D. L. Zhou, M.-S. Chang, M. S. Chapman,
and L. You, Phys. Rev. Lett. 95, 180403 (2005).
[12] J. Kronjäger, C. Becker, M. Brinkmann, R. Walser,
P. Navez, K. Bongs, and K. Sengstock, Phys. Rev. A
72, 063619 (2005).
[13] J. Stenger, S. Inouye, D. M. Stamper-Kurn, H.-J. Mies-
ner, A. P. Chikkatur, and W. Ketterle, Nature 396, 345
(1998).
[14] D. M. Stamper-Kurn, H.-J. Miesner, A. P. Chikkatur,
S. Inouye, J. Stenger, and W. Ketterle, Phys. Rev. Lett.
83, 661 (1999).
[15] A. Widera, F. Gerbier, S. Fölling, T. Gericke, O. Mandel,
and I. Bloch, New J. Phys. 8, 152 (2006).
[16] T. Kuwamoto, K. Araki, T. Eno, and T. Hirano, Phys.
Rev. A 69, 063604 (2004).
[17] D. M. Stamper-Kurn and W. Ketterle, in Coherent
Atomic Matter Waves, edited by R. Kaiser, C. West-
brook, and F. David (Springer, New York, 2001), no. 72
in Les Houches Summer School Series, pp. 137–217, cond-
mat/0005001.
[18] R. Dumke, M. Johanning, E. Gomez, J. D. Weinstein,
K. M. Jones, and P. D. Lett, New J. Phys. 8, 64 (2006).
[19] M. Sargent III and P. Horwitz, Phys. Rev. A 13, 1962
(1976).
[20] R. Dumke, J. D. Weinstein, M. Johanning, K. M. Jones,
and P. D. Lett, Phys. Rev. A 72, 041801(R) (2005).
[21] S. Raghavan, A. Smerzi, S. Fantoni, and S. R. Shenoy,
Phys. Rev. A 59, 620 (1999).
[22] M. Albiez, R. Gati, J. Fölling, S. Hunsmann, M. Cris-
tiani, and M. K. Oberthaler, Phys. Rev. Lett. 95, 010402
(2005).
[23] A. Crubellier, O. Dulieu, F. Masnou-Seeuws, M. Elbs,
H. Knöckel, and E. Tiemann, Eur. Phys. J. D 6, 211
(1999).
[24] J. P. Burke, C. H. Greene, and J. L. Bohn, Phys. Rev.
Lett. 81, 3355 (1998).
[25] G. A. Smith, S. Chaudhury, A. Silberfarb, I. H. Deutsch,
and P. S. Jessen, Phys. Rev. Lett. 93, 163602 (2004).
|
0704.0926 | A Contraction Theory Approach to Stochastic Incremental Stability |
A Contraction Theory Approach to Stochastic
Incremental Stability
Quang-Cuong Pham ∗
LPPA, Collège de France
Paris, France
[email protected]
Nicolas Tabareau
LPPA, Collège de France
Paris, France
[email protected]
Jean-Jacques Slotine
Nonlinear Systems Laboratory, MIT
Cambridge, MA 02139, USA
[email protected]
September 6, 2018
Abstract
We investigate the incremental stability properties of Itô stochastic
dynamical systems. Specifically, we derive a stochastic version of non-
linear contraction theory that provides a bound on the mean square
distance between any two trajectories of a stochastically contracting
system. This bound can be expressed as a function of the noise inten-
sity and the contraction rate of the noise-free system. We illustrate
these results in the contexts of stochastic nonlinear observers design
and stochastic synchronization.
1 Introduction
Nonlinear stability properties are often considered with respect to an equi-
librium point or to a nominal system trajectory (see e.g. [31]). By contrast,
incremental stability is concerned with the behaviour of system trajectories
with respect to each other. From the triangle inequality, global exponential
incremental stability (any two trajectories tend to each other exponentially)
is a stronger property than global exponential convergence to a single tra-
jectory.
Historically, work on deterministic incremental stability can be traced
back to the 1950’s [23, 7, 16] (see e.g. [26, 20] for a more extensive list and
historical discussion of related references). More recently, and largely inde-
pendently of these earlier studies, a number of works have put incremental
∗ To whom correspondence should be addressed.
http://arxiv.org/abs/0704.0926v2
stability on a broader theoretical basis and made relations with more tradi-
tional stability approaches [14, 32, 24, 2, 6]. Furthermore, it was shown that
incremental stability is especially relevant in the study of such problems as
state detection [2], observer design or synchronization analysis.
While the above references are mostly concerned with deterministic sta-
bility notions, stability theory has also been extended to stochastic dynam-
ical systems, see for instance [22, 17]. This includes important recent de-
velopments in Lyapunov-like approaches [12, 27], as well as applications to
standard problems in systems and control [13, 34, 8]. However, stochastic
versions of incremental stability have not yet been systematically investi-
gated.
The goal of this paper is to extend some concepts and results in in-
cremental stability to stochastic dynamical systems. More specifically, we
derive a stochastic version of contraction analysis in the specialized context
of state-independent metrics.
We prove in section 2 that the mean square distance between any two
trajectories of a stochastically contracting system is upper-bounded by a
constant after exponential transients. In contrast with previous works on
incremental stochastic stability [5], we consider the case when the two tra-
jectories are subject to distinct and independent noises, as detailed in sec-
tion 2.2.1. This specificity enables our theory to have a number of new and
practically important applications. However, the fact that the noise does
not vanish as two trajectories get very close to each other will prevent us
from obtaining asymptotic almost-sure stability results (see section 2.3.2).
In section 3, we show that results on combinations of deterministic con-
tracting systems have simple analogues in the stochastic case. These combi-
nation properties allow one to build by recursion stochastically contracting
systems of arbitrary size.
Finally, as illustrations of our results, we study in section 4 several ex-
amples, including contracting observers with noisy measurements, stochas-
tic composite variables and synchronization phenomena in networks of noisy
dynamical systems.
2 Main results
2.1 Background
2.1.1 Nonlinear contraction theory
Contraction theory [24] provides a set of tools to analyze the incremental
exponential stability of nonlinear systems, and has been applied notably
to observer design [24, 25, 1, 21, 36], synchronization analysis [35, 28] and
systems neuroscience modelling [15]. Nonlinear contracting systems enjoy
desirable aggregation properties, in that contraction is preserved under many
types of system combinations given suitable simple conditions [24].
While we shall derive global properties of nonlinear systems, many of our
results can be expressed in terms of eigenvalues of symmetric matrices [19].
Given a square matrix A, the symmetric part of A is denoted by As. The
smallest and largest eigenvalues of As are denoted by λmin(A) and λmax(A).
Given these notations, the matrix A is positive definite (denoted A > 0) if
λmin(A) > 0, and it is uniformly positive definite if
∃β > 0 ∀x, t λmin(A(x, t)) ≥ β
The basic theorem of contraction analysis, derived in [24], can be stated
as follows
Theorem 1 (Contraction) Consider, in Rn, the deterministic system
ẋ = f(x, t) (2.1)
where f is a smooth nonlinear function. Denote the Jacobian matrix of f
with respect to its first variable by ∂f/∂x. If there exists a square matrix Θ(x, t)
such that M(x, t) = Θ(x, t)TΘ(x, t) is uniformly positive definite and the matrix
F(x, t) = ( Θ̇(x, t) + Θ(x, t) ∂f/∂x ) Θ−1(x, t)
is uniformly negative definite, then all system trajectories converge exponentially to a single trajectory, with convergence rate |supx,t λmax(F)| = λ > 0.
The system is said to be contracting, F is called its generalized Jacobian,
M(x, t) its contraction metric and λ its contraction rate.
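As a concrete (hypothetical) illustration of Theorem 1, the sketch below estimates sup λmax(F) by sampling the symmetric part of the generalized Jacobian for a toy system and a user-supplied metric transform Θ; the example system and the sampling strategy are choices of this sketch, not of the paper.

import numpy as np

def sym_max_eig(A):
    return np.max(np.linalg.eigvalsh(0.5 * (A + A.T)))

def generalized_jacobian(jac_f, Theta, Theta_dot, x, t):
    J = jac_f(x, t)
    Th = Theta(x, t)
    return (Theta_dot(x, t) + Th @ J) @ np.linalg.inv(Th)

# Toy example: f(x) = -x + 0.1*sin(x) componentwise, identity metric
jac_f = lambda x, t: np.diag(-1.0 + 0.1 * np.cos(x))
Theta = lambda x, t: np.eye(len(x))
Theta_dot = lambda x, t: np.zeros((len(x), len(x)))

rng = np.random.default_rng(1)
worst = max(sym_max_eig(generalized_jacobian(jac_f, Theta, Theta_dot, rng.normal(size=3), 0.0))
            for _ in range(1000))
print("empirical sup lambda_max(F) =", worst)   # <= -0.9, so the system contracts with rate ~0.9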
2.1.2 Standard stochastic stability
In this section, we present very informally the basic ideas of standard stochas-
tic stability (for a rigourous treatment, the reader is referred to e.g. [22]).
This will set the context to understand the forthcoming difficulties and dif-
ferences associated with incremental stochastic stability.
For simplicity, we consider the special case of global exponential stability.
Let x(t) be a Markov stochastic process and assume that there exists a non-
negative function V (V (x) may represent e.g. the squared distance of x
from the origin) such that
∀x ∈ Rn ÃV (x) ≤ −λV (x) (2.2)
where λ is a positive real number and à is the infinitesimal operator of the
process x(t). The operator à is the stochastic analogue of the deterministic
differentiation operator. In the case that x(t) is an Itô process, Ã corre-
sponds to the widely-used [27, 34, 8] differential generator L (for a proof of
this fact, see [22], p. 15 or [3], p. 42).
For x0 ∈ Rn, let Ex0(·) = E(·|x(0) = x0). Then by Dynkin’s formula
([22], p. 10), one has
∀t ≥ 0   Ex0V (x(t)) − V (x0) = Ex0 ∫_0^t ÃV (x(s)) ds ≤ −λ Ex0 ∫_0^t V (x(s)) ds = −λ ∫_0^t Ex0V (x(s)) ds
Applying the Gronwall’s lemma to the deterministic real-valued function
t → Ex0V (x(t)) yields
∀t ≥ 0 Ex0V (x(t)) ≤ V (x0)e−λt
If we assume furthermore that Ex0V (x(t)) < ∞ for all t, then the above
implies that V (x(t)) is a supermartingale (see lemma 3 in the Appendix for
details), which yields, by the supermartingale inequality
Px0( sup_{T≤t<∞} V (x(t)) ≥ A ) ≤ Ex0V (x(T ))/A ≤ V (x0)e−λT /A (2.3)
Thus, one obtains an almost-sure stability result, in the sense that
∀A > 0   lim_{T→∞} Px0( sup_{T≤t<∞} V (x(t)) ≥ A ) = 0 (2.4)
2.2 The stochastic contraction theorem
2.2.1 Settings
Consider a noisy system described by an Itô stochastic differential equation
da = f(a, t)dt+ σ(a, t)dW d (2.5)
where f is a Rn × R+ → Rn function, σ is a Rn × R+ → Rn×d matrix-valued
function and W d is a standard d-dimensional Wiener process.
To ensure existence and uniqueness of solutions to equation (2.5), we
assume, here and in the remainder of the paper, the following standard
conditions on f and σ
Lipschitz condition: There exists a constant K1 > 0 such that
∀t ≥ 0, a,b ∈ Rn   ‖f(a, t) − f(b, t)‖ + ‖σ(a, t) − σ(b, t)‖ ≤ K1‖a − b‖
Restriction on growth: There exists a constant K2 > 0 such that
∀t ≥ 0, a ∈ Rn ‖f(a, t)‖2 + ‖σ(a, t)‖2 ≤ K2(1 + ‖a‖2)
Under these conditions, one can show ([3], p. 105) that equation (2.5)
has on [0,∞[ a unique Rn-valued solution a(t), continuous with probability
one, and satisfying the initial condition a(0) = a0, with a0 ∈ Rn.
In order to investigate the incremental stability properties of system (2.5),
consider now two system trajectories a(t) and b(t). Our goal will consist of
studying the trajectories a(t) and b(t) with respect to each other. For this,
we consider the augmented system x(t) = (a(t),b(t))T , which follows the
equation
dx = ( f(a, t) ; f(b, t) ) dt + ( σ(a, t)  0 ; 0  σ(b, t) ) ( dW^d_1 ; dW^d_2 ) = f(x, t)dt + σ(x, t)dW^{2d} (2.6)
Important remark As stated in the introduction, the systems a and
b are driven by distinct and independent Wiener processes W^d_1 and W^d_2.
This makes our approach considerably different from [5], where the authors
studied two trajectories driven by the same Wiener process.
Our approach enables us to study the stability of the system with respect
to variations in initial conditions and to random perturbations: indeed, two
trajectories of any real-life system are typically affected by distinct “real-
izations” of the noise. In addition, it leads very naturally to nice results on
the comparison of noisy and noise-free trajectories (cf. section 2.4), which
are particularly useful in applications (cf. section 4).
However, because of the very fact that the two trajectories are driven
by distinct Wiener processes, we cannot expect the influence of the noise
to vanish when the two trajectories get very close to each other. This con-
strasts with [5], and more generally, with the standard stochastic stability
case, where the noise vanishes near the origin (cf. section 2.1.2). The con-
sequences of this will be discussed in detail in section 2.3.2.
2.2.2 The basic stochastic contraction theorem
We introduce two hypotheses
(H1) f(a, t) is contracting in the identity metric, with contraction rate λ (i.e. ∀a, t   λmax( ∂f/∂a (a, t) ) ≤ −λ)
(H2) tr( σ(a, t)Tσ(a, t) ) is uniformly upper-bounded by a constant C (i.e. ∀a, t   tr( σ(a, t)Tσ(a, t) ) ≤ C)
In other words, (H1) says that the noise-free system is contracting, while
(H2) says that the variance of the noise is upper-bounded by a constant.
Definition 1 A system that verifies (H1) and (H2) is said to be stochas-
tically contracting in the identity metric, with rate λ and bound C.
Consider now the Lyapunov-like function V (x) = ‖a−b‖2 = (a−b)T (a−
b). Using (H1) and (H2), we derive below an inequality on ÃV (x), similar
to equation (2.2) in section 2.1.2.
Lemma 1 Under (H1) and (H2), one has the inequality
ÃV (x) ≤ −2λV (x) + 2C (2.7)
Proof Since x(t) is an Itô process, Ã is given by the differential operator
L of the process [22, 3]. Thus, by the Itô formula
ÃV (x) = L V (x) = (∂V/∂x)(x) f(x, t) + (1/2) tr( σ(x, t)T (∂2V/∂x2)(x) σ(x, t) )
= Σ_{1≤i≤2n} (∂V/∂xi) f(x, t)i + (1/2) Σ_{1≤i,j,k≤2n} σ(x, t)ij (∂2V/∂xi∂xk) σ(x, t)kj
= Σ_{1≤i≤n} (∂V/∂ai) f(a, t)i + Σ_{1≤i≤n} (∂V/∂bi) f(b, t)i
  + (1/2) Σ_{1≤i,j,k≤n} σ(a, t)ij (∂2V/∂ai∂ak) σ(a, t)kj + (1/2) Σ_{1≤i,j,k≤n} σ(b, t)ij (∂2V/∂bi∂bk) σ(b, t)kj
= 2(a − b)T (f(a, t) − f(b, t)) + tr(σ(a, t)Tσ(a, t)) + tr(σ(b, t)Tσ(b, t))
Fix t ≥ 0 and, as in [10], consider the real-valued function
r(µ) = (a− b)T (f(µa+ (1− µ)b, t)− f(b, t))
Since f is C1, r is C1 over [0, 1]. By the mean value theorem, there exists
µ0 ∈]0, 1[ such that
r′(µ0) = r(1)− r(0) = (a− b)T (f(a)− f(b))
On the other hand, one obtains by differentiating r
r′(µ0) = (a − b)T (∂f/∂a)(µ0a + (1 − µ0)b, t) (a − b)
Thus, one has
2(a − b)T (f(a) − f(b)) = 2(a − b)T (∂f/∂a)(µ0a + (1 − µ0)b, t) (a − b) ≤ −2λ(a − b)T (a − b) = −2λV (x) (2.8)
where the inequality is obtained by using (H1).
Finally,
ÃV (x) = 2(a− b)T (f(a)− f(b)) + tr(σ(a, t)Tσ(a, t)) + tr(σ(b, t)Tσ(b, t))
≤ −2λV (x) + 2C
where the inequality is obtained by using (H2). �
We are now in a position to prove our main theorem on stochastic incre-
mental stability.
Theorem 2 (Stochastic contraction) Assume that system (2.5) verifies
(H1) and (H2). Let a(t) and b(t) be two trajectories whose initial condi-
tions are given by a probability distribution p(x(0)) = p(a(0),b(0)). Then
∀t ≥ 0   E( ‖a(t) − b(t)‖2 ) ≤ C/λ + e−2λt ∫ [ ‖a0 − b0‖2 − C/λ ]+ dp(a0,b0) (2.9)
where [·]+ = max(0, ·). This implies in particular
∀t ≥ 0   E( ‖a(t) − b(t)‖2 ) ≤ C/λ + E( ‖a(0) − b(0)‖2 ) e−2λt (2.10)
Proof Let x0 = (a0,b0) ∈ R2n. By Dynkin’s formula ([22], p. 10)
Ex0V (x(t)) − V (x0) = Ex0 ∫_0^t ÃV (x(s)) ds
Thus one has, for all u, t with 0 ≤ u ≤ t < ∞,
Ex0V (x(t)) − Ex0V (x(u)) = Ex0 ∫_u^t ÃV (x(s)) ds ≤ Ex0 ∫_u^t ( −2λV (x(s)) + 2C ) ds (2.11)
= ∫_u^t ( −2λEx0V (x(s)) + 2C ) ds
where inequality (2.11) is obtained by using lemma 1.
Denote by g(t) the deterministic quantity Ex0V (x(t)). Clearly, g(t) is a
continuous function of t since x(t) is a continuous process. The function g
then satisfies the conditions of the Gronwall-type lemma 4 in the Appendix,
and as a consequence
∀t ≥ 0   Ex0V (x(t)) ≤ C/λ + [ V (x0) − C/λ ]+ e−2λt
Integrating the above inequality with respect to x0 yields the desired
result (2.9). Next, inequality (2.10) follows from (2.9) by remarking that
∫ [ ‖a0 − b0‖2 − C/λ ]+ dp(a0,b0) ≤ ∫ ‖a0 − b0‖2 dp(a0,b0) = E( ‖a(0) − b(0)‖2 ) (2.12)
Remark Let ǫ > 0 and Tǫ = (1/(2λ)) log( E(‖a0 − b0‖2)/ǫ ). Then inequality (2.10) and Jensen’s inequality [30] imply
∀t ≥ Tǫ   E( ‖a(t) − b(t)‖ ) ≤ √( C/λ + ǫ ) (2.13)
Since ‖a(t)−b(t)‖ is non-negative, (2.13) together with Markov inequal-
ity [11] allow one to obtain the following probabilistic bound on the distance
between a(t) and b(t)
∀A > 0  ∀t ≥ Tǫ   P ( ‖a(t) − b(t)‖ ≥ A ) ≤ √( C/λ + ǫ ) / A
Note however that this bound is much weaker than the asymptotic
almost-sure bound (2.4).
2.2.3 Generalization to time-varying metrics
Theorem 2 can be vastly generalized by considering general time-dependent
metrics (the case of state-dependent metrics is not considered in this article
and will be the subject of a future work). Specifically, let us replace (H1)
and (H2) by the following hypotheses
(H1’) There exists a uniformly positive definite metric M(t) = Θ(t)TΘ(t),
with the lower-bound β > 0 (i.e. ∀x, t xTM(t)x ≥ β‖x‖2) and f(a, t)
is contracting in that metric, with contraction rate λ, i.e.
λmax( ( Θ̇(t) + Θ(t) ∂f/∂a ) Θ−1(t) ) ≤ −λ uniformly
or equivalently
dM/dt(t) + (∂f/∂a)T M(t) + M(t) ∂f/∂a ≤ −2λM(t) uniformly
(H2’) tr( σ(a, t)TM(t)σ(a, t) ) is uniformly upper-bounded by a constant C
Definition 2 A system that verifies (H1’) and (H2’) is said to be stochas-
tically contracting in the metric M(t), with rate λ and bound C.
Consider now the generalized Lyapunov-like function V1(x, t) = (a −
b)TM(t)(a − b). Lemma 1 can then be generalized as follows.
Lemma 2 Under (H1’) and (H2’), one has the inequality
ÃV1(x, t) ≤ −2λV1(x, t) + 2C (2.14)
Proof Let us compute first ÃV1
ÃV1(x, t) = ∂V1/∂t + (∂V1/∂x) f(x, t) + (1/2) tr( σ(x, t)T (∂2V1/∂x2) σ(x, t) )
= (a − b)T (dM/dt)(t) (a − b) + 2(a − b)TM(t)(f(a, t) − f(b, t))
  + tr(σ(a, t)TM(t)σ(a, t)) + tr(σ(b, t)TM(t)σ(b, t))
Fix t > 0 and consider the real-valued function
r(µ) = (a− b)TM(t)(f(µa+ (1− µ)b, t)− f(b, t))
Since f is C1, r is C1 over [0, 1]. By the mean value theorem, there exists
µ0 ∈]0, 1[ such that
r′(µ0) = r(1)− r(0) = (a− b)TM(t)(f(a) − f(b))
On the other hand, one obtains by differentiating r
r′(µ0) = (a − b)TM(t) (∂f/∂a)(µ0a + (1 − µ0)b, t) (a − b)
Thus, letting c = µ0a+ (1− µ0)b, one has
(a − b)T (dM/dt)(t) (a − b) + 2(a − b)TM(t)(f(a) − f(b))
= (a − b)T ( dM/dt(t) + (∂f/∂a)(c, t)T M(t) + M(t) (∂f/∂a)(c, t) ) (a − b)
≤ −2λ(a − b)TM(t)(a − b) = −2λV1(x) (2.15)
where the inequality is obtained by using (H1’).
Finally, combining equation (2.15) with (H2’) allows to obtain the de-
sired result. �
We can now state the generalized stochastic contraction theorem
Theorem 3 (Generalized stochastic contraction) Assume that system
(2.5) verifies (H1’) and (H2’). Let a(t) and b(t) be two trajectories whose
initial conditions are given by a probability distribution p(x(0)) = p(a(0),b(0)).
∀t ≥ 0   E( (a(t) − b(t))TM(t)(a(t) − b(t)) ) ≤ C/λ + e−2λt ∫ [ (a0 − b0)TM(0)(a0 − b0) − C/λ ]+ dp(a0,b0) (2.16)
In particular,
∀t ≥ 0   E( ‖a(t) − b(t)‖2 ) ≤ (1/β) ( C/λ + E( (a(0) − b(0))TM(0)(a(0) − b(0)) ) e−2λt ) (2.17)
Proof Following the same reasoning as in the proof of theorem 2, one
obtains
∀t ≥ 0   Ex0V1(x(t)) ≤ C/λ + [ V1(x0) − C/λ ]+ e−2λt
which leads to (2.16) by integrating with respect to (a0,b0). Next, observing
‖a(t) − b(t)‖2 ≤ (1/β) (a(t) − b(t))TM(t)(a(t) − b(t)),  so that  E( ‖a(t) − b(t)‖2 ) ≤ (1/β) EV1(x(t)),
and using the same bounding as in (2.12) lead to (2.17). �
2.3 Strength of the stochastic contraction theorem
2.3.1 “Optimality” of the mean square bound
Consider the following linear dynamical system, known as the Ornstein-
Uhlenbeck (colored noise) process
da = −λadt+ σdW (2.18)
Clearly, the noise-free system is contracting with rate λ and the trace of
the noise matrix is upper-bounded by σ2. Let a(t) and b(t) be two system
trajectories starting respectively at a0 and b0 (deterministic initial condi-
tions). Then by theorem 2, we have
∀t ≥ 0   E( (a(t) − b(t))2 ) ≤ σ2/λ + [ (a0 − b0)2 − σ2/λ ]+ e−2λt (2.19)
Let us verify this result by solving directly equation (2.18). The solution
of equation (2.18) is ([3], p. 134)
a(t) = a0 e−λt + σ ∫_0^t eλ(s−t) dW (s) (2.20)
Next, let us compute the mean square distance between the two trajec-
tories a(t) and b(t)
E((a(t) − b(t))2) = (a0 − b0)2 e−2λt + σ2 E( ( ∫_0^t eλ(s−t) dW1(s) )2 ) + σ2 E( ( ∫_0^t eλ(u−t) dW2(u) )2 )
= (a0 − b0)2 e−2λt + (σ2/λ)(1 − e−2λt)
≤ σ2/λ + [ (a0 − b0)2 − σ2/λ ]+ e−2λt
The last inequality is in fact an equality when (a0 − b0)2 ≥ σ2/λ. Thus,
this calculation shows that the upper-bound (2.19) given by theorem 2 is
optimal, in the sense that it can be attained.
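The sketch below (not from the paper) checks the bound (2.19) by Euler-Maruyama simulation of two independent trajectories of (2.18); the initial conditions are chosen with (a0 − b0)2 < σ2/λ so that the bound is strict.

import numpy as np

lam, sigma = 1.0, 0.5
a0, b0 = 0.2, 0.0
dt, T, n_paths = 1e-3, 5.0, 20000
steps = int(T / dt)

rng = np.random.default_rng(0)
a = np.full(n_paths, a0)
b = np.full(n_paths, b0)
msd = np.empty(steps)
for k in range(steps):
    a += -lam * a * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    b += -lam * b * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    msd[k] = np.mean((a - b) ** 2)

t = dt * np.arange(1, steps + 1)
bound = sigma ** 2 / lam + max((a0 - b0) ** 2 - sigma ** 2 / lam, 0.0) * np.exp(-2 * lam * t)
print(np.max(msd - bound))   # at or below zero up to Monte-Carlo and discretisation error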
2.3.2 No asymptotic almost-sure stability
From the explicit form (2.20) of the solutions, one can deduce that the distributions of a(t) and b(t) converge to the normal distribution N(0, σ2/(2λ)) ([3], p. 135). Since a(t) and b(t) are independent, the distribution of the difference a(t) − b(t) will then converge to N(0, σ2/λ). This observation shows
that, contrary to the case of standard stochastic stability (cf. section 2.1.2),
one cannot – in general – obtain asymptotic almost-sure incremental stability
results (which would imply that the distribution of the difference converges
instead to the constant 0).
Compare indeed equations (2.2) (the condition for standard stability, sec-
tion 2.1.2) and (2.7) (the condition for incremental stability, section 2.2.2).
The difference lies in the term 2C, which stems from the fact that the influ-
ence of the noise does not vanish when two trajectories get very close to each
other (cf. section 2.2.1). The presence of this extra term prevents ÃV (x(t))
from being always non-positive, and as a result, it prevents V (x(t)) from be-
ing always “non-increasing”. As a consequence, V (x(t)) is not – in general –
a supermartingale, and one cannot then use the supermartingale inequality
to obtain asymptotic almost-sure bounds, as in equation (2.3).
Remark If one is interested in finite time bounds then the supermartin-
gale inequality is still applicable, see ([22], p. 86) for details.
2.4 Noisy and noise-free trajectories
Consider the following augmented system
dx = ( f(a, t) ; f(b, t) ) dt + ( 0  0 ; 0  σ(b, t) ) dW^{2d} = f(x, t) dt + σ(x, t) dW^{2d} (2.21)
This equation is the same as equation (2.6) except that the a-system is
not perturbed by noise. Thus V (x) = ‖a − b‖2 will represent the distance
between a noise-free trajectory and a noisy one. All the calculations will be
the same as in the previous development, with C being replaced by C/2.
One can then derive the following corollary
Corollary 1 Assume that system (2.5) verifies (H1’) and (H2’). Let a(t)
be a noise-free trajectory starting at a0 and b(t) a noisy trajectory whose
initial condition is given by a probability distribution p(b(0)). Then
∀t ≥ 0   E( ‖a(t) − b(t)‖2 ) ≤ (1/β) ( C/(2λ) + E( (a0 − b(0))TM(0)(a0 − b(0)) ) e−2λt ) (2.22)
Remarks
• One can note here that the derivation of corollary 1 is only permitted
by our initial choice of considering distinct driving Wiener process for
the a- and b-systems (cf. section 2.2.1).
• Corollary 1 provides a robustness result for contracting systems, in the
sense that any contracting system is automatically protected against
noise, as quantified by (2.22). This robustness could be related to the
exponential nature of contraction stability.
3 Combinations of contracting stochastic systems
Stochastic contraction inherits naturally from deterministic contraction [24]
its convenient combination properties. Because contraction is a state-space
concept, such properties can be expressed in more general forms than input-
output analogues such as passivity-based combinations [29]. The following
combination properties allow one to build by recursion stochastically con-
tracting systems of arbitrary size.
Parallel combination Consider two stochastic systems of the same dimension
dx1 = f1(x1, t)dt + σ1(x1, t)dW1
dx2 = f2(x2, t)dt + σ2(x2, t)dW2
Assume that both systems are stochastically contracting in the same
constant metric M, with rates λ1 and λ2 and with bounds C1 and C2.
Consider a uniformly positive bounded superposition
α1(t)x1 + α2(t)x2
where ∀t ≥ 0, li ≤ αi(t) ≤ mi for some li,mi > 0, i = 1, 2.
Clearly, this superposition is stochastically contracting in the metric M,
with rate l1λ1 + l2λ2 and bound m1C1 +m2C2.
Negative feedback combination In this and the following paragraphs,
we describe combinations properties for contracting systems in constant met-
rics M. The case of time-varying metrics can be easily adapted from this
development but is skipped here for the sake of clarity.
Consider two coupled stochastic systems
dx1 = f1(x1,x2, t)dt+ σ1(x1, t)dW1
dx2 = f2(x1,x2, t)dt+ σ2(x2, t)dW2
Assume that system i (i = 1, 2) is stochastically contracting with respect to
Mi = Θi^TΘi, with rate λi and bound Ci.
Assume furthermore that the two systems are connected by negative
feedback [33]. More precisely, the Jacobian matrices of the couplings are of the form Θ1J12Θ2^{−1} = −k ( Θ2J21Θ1^{−1} )^T, with k a positive constant. Hence, the Jacobian matrix of the augmented system is given by
J = ( J1  J12 ; J21  J2 )
Consider a coordinate transform Θ = ( Θ1  0 ; 0  √k Θ2 ) associated with the metric M = ΘTΘ > 0. After some calculations, one has
( ΘJΘ−1 )s = ( (Θ1J1Θ1^{−1})s  0 ; 0  (Θ2J2Θ2^{−1})s ) ≤ max(−λ1, −λ2) I uniformly (3.1)
The augmented system is thus stochastically contracting in the metric
M, with rate min(λ1, λ2) and bound C1 + kC2.
Hierarchical combination We first recall a standard result in matrix
analysis [19]. Let A be a symmetric matrix in the form A = ( A1  A21^T ; A21  A2 ). Assume that A1 and A2 are positive definite. Then A is positive definite if sing2(A21) < λmin(A1)λmin(A2), where sing(A21) denotes the largest singular value of A21. In this case, the smallest eigenvalue of A satisfies
λmin(A) ≥ (1/2) ( λmin(A1) + λmin(A2) − √( (λmin(A1) − λmin(A2))2 + 4 sing2(A21) ) )
Consider now the same set-up as in the previous paragraph, except that
the connection is now hierarchical and upper-bounded. More precisely, the
Jacobians of the couplings verify J12 = 0 and sing2(Θ2J21Θ1^{−1}) ≤ K. The Jacobian matrix of the augmented system is then given by J = ( J1  0 ; J21  J2 ).
Consider a coordinate transform Θǫ = ( Θ1  0 ; 0  ǫΘ2 ) associated with the metric Mǫ = Θǫ^TΘǫ > 0. After some calculations, one has
( ΘǫJΘǫ−1 )s = ( (Θ1J1Θ1^{−1})s   (ǫ/2)(Θ2J21Θ1^{−1})^T ; (ǫ/2)Θ2J21Θ1^{−1}   (Θ2J2Θ2^{−1})s )
Set now ǫ = √(2λ1λ2/K). The augmented system is then stochastically contracting in the metric Mǫ, with rate (1/2)(λ1 + λ2 − √(λ1^2 + λ2^2)) and bound C1 + 2C2λ1λ2/K.
Small gains In this paragraph, we require no specific assumption on the
form of the couplings. Consider the coordinate transform Θk = ( Θ1  0 ; 0  √k Θ2 ) associated with the metric Mk = Θk^TΘk > 0. After some calculations, one has
( ΘkJΘk−1 )s = ( (Θ1J1Θ1^{−1})s   Bk^T ; Bk   (Θ2J2Θ2^{−1})s )
where Bk = (1/2)( √k Θ2J21Θ1^{−1} + (1/√k)(Θ1J12Θ2^{−1})^T ).
Following the matrix analysis result stated at the beginning of the previous paragraph, if infk>0 sing2(Bk) < λ1λ2 then the augmented system is stochastically contracting in the metric Mk, with bound C1 + kC2 and rate λ verifying
λ ≥ (1/2) ( λ1 + λ2 − √( (λ1 − λ2)2 + 4 infk>0 sing2(Bk) ) ) (3.2)
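The block-matrix eigenvalue bound used in the last two paragraphs can be checked numerically; the sketch below (not from the paper) verifies λmin(A) ≥ (1/2)(λmin(A1) + λmin(A2) − √((λmin(A1) − λmin(A2))² + 4 sing²(A21))) on random symmetric block matrices.

import numpy as np

rng = np.random.default_rng(2)

def spd(n):
    M = rng.normal(size=(n, n))
    return M @ M.T + 0.5 * np.eye(n)     # symmetric positive definite block

for _ in range(1000):
    A1, A2 = spd(3), spd(4)
    A21 = 0.3 * rng.normal(size=(4, 3))
    A = np.block([[A1, A21.T], [A21, A2]])
    l1, l2 = np.linalg.eigvalsh(A1)[0], np.linalg.eigvalsh(A2)[0]
    s = np.linalg.svd(A21, compute_uv=False)[0]
    lower = 0.5 * (l1 + l2 - np.sqrt((l1 - l2) ** 2 + 4 * s ** 2))
    assert np.linalg.eigvalsh(A)[0] >= lower - 1e-10
print("bound verified on 1000 random block matrices")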
4 Some examples
4.1 Effect of measurement noise on contracting observers
Consider a nonlinear dynamical system
ẋ = f(x, t) (4.1)
If a measurement y = y(x) is available, then it may be possible to choose
an output injection matrix K(t) such that the dynamics
˙̂x = f(x̂, t) +K(t)(ŷ − y) (4.2)
is contracting, with ŷ = y(x̂). Since the actual state x is a particular
solution of (4.2), any solution x̂ of (4.2) will then converge towards x expo-
nentially.
Assume now that the measurements are corrupted by additive “white
noise”. In the case of linear measurement, the measurement equation be-
comes y = H(t)x+Σ(t)ξ(t) where ξ(t) is a multidimensional “white noise”
and Σ(t) is the matrix of measurement noise intensities.
The observer equation is now given by the following Itô stochastic dif-
ferential equation (using the formal rule dW = ξdt)
dx̂ = (f(x̂, t) +K(t)(H(t)x −H(t)x̂))dt+K(t)Σ(t)dW (4.3)
Next, remark that the solution x of system (4.1) is a also a solution of
the noise-free version of system (4.3). By corollary 1, one then has, for any
solution x̂ of system (4.3)
∀t ≥ 0   E( ‖x̂(t) − x(t)‖2 ) ≤ C/(2λ) + ‖x̂0 − x0‖2 e−2λt (4.4)
where
λ = inf_{x,t} | λmax( ∂f(x, t)/∂x − K(t)H(t) ) |
C = sup_t tr( Σ(t)TK(t)TK(t)Σ(t) )
Remark The choice of the injection gain K(t) is governed by a trade-
off between convergence speed (λ) and noise sensitivity (C/λ) as quantified
by (4.4). More generally, the explicit computation of the bound on the
expected quadratic estimation error given by (4.4) may open the possibility
of measurement selection in a way similar to the linear case. If several
possible measurements or sets of measurements can be performed, one may
try at each instant (or at each step, in a discrete version) to select the
most relevant, i.e., the measurement or set of measurements which will best
contribute to improving the state estimate. Similarly to the Kalman filters
used in [9] for linear systems, this can be achieved by computing, along
with the state estimate itself, the corresponding bounds on the expected
quadratic estimation error, and then selecting accordingly the measurement
which will minimize it.
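A minimal simulation sketch of this trade-off is given below (not from the paper): a linear system with H = I and constant K, for which λ and C are explicit, is integrated by Euler-Maruyama and the empirical mean-square estimation error is compared with the bound (4.4).

import numpy as np

rng = np.random.default_rng(3)
F = np.array([[0.0, 1.0], [-2.0, -0.5]])
K = np.array([[2.0, 0.0], [0.0, 2.0]])        # output injection gain
Sigma = 0.3 * np.eye(2)                        # measurement noise intensity
A = F - K                                      # closed-loop Jacobian (H = I)
lam = -np.max(np.linalg.eigvalsh(0.5 * (A + A.T)))
C = np.trace(Sigma.T @ K.T @ K @ Sigma)

dt, steps, n_paths = 1e-3, 5000, 5000
x = np.zeros((n_paths, 2))
xhat = np.tile(np.array([1.0, -1.0]), (n_paths, 1))
err0 = np.sum((xhat[0] - x[0]) ** 2)
worst_gap = -np.inf
for k in range(steps):
    x = x + x @ F.T * dt                                            # noise-free true system
    noise = np.sqrt(dt) * rng.standard_normal((n_paths, 2)) @ (K @ Sigma).T
    xhat = xhat + (xhat @ F.T + (x - xhat) @ K.T) * dt + noise      # observer (4.3)
    mse = np.mean(np.sum((xhat - x) ** 2, axis=1))
    bound = C / (2 * lam) + err0 * np.exp(-2 * lam * dt * (k + 1))
    worst_gap = max(worst_gap, mse - bound)
print(lam, C / (2 * lam), worst_gap)           # worst_gap stays below zero up to Monte-Carlo error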
4.2 Estimation of velocity using composite variables
In this section, we present a very simple example that hopefully suggests the
many possibilities that could stem from the combination of our stochastic
stability analysis with the composite variables framework [31].
Let x be the position of a mobile subject to a sinusoidal forcing
ẍ = −U1ω2 sin(ωt) + 2U2
where U1 and ω are known parameters. We would like to compute good
approximations of the mobile’s velocity v and acceleration a using only mea-
surements of x and without using any filter. For this, construct the following
observer
d/dt ( v̂ ; â ) = ( −αv  1 ; −αa  0 ) ( v̂ ; â ) + ( (αa − αv2)x ; −αaαvx − U1ω3 cos(ωt) ) (4.5)
and introduce the composite variables v̂ = v + αvx and â = a + αax. By
construction, these variables follow the equation
d/dt ( v̂ ; â ) = ( −αv  1 ; −αa  0 ) ( v̂ − v ; â − a ) + ( a ; −U1ω3 cos(ωt) ) (4.6)
and therefore, a particular solution of (v̂, â) is clearly (v, a). Choose now
αa = αv2 = α2 and let Mα = ( α2  −α/2 ; −α/2  1 ). One can then show that system (4.6) is contracting with rate λα = α/2 in the metric Mα. Thus, by the basic contraction theorem [24], (v̂, â) converges exponentially to (v, a) with rate λα in the metric Mα. Also note that the β-bound corresponding to the metric Mα is given by βα = ( 1 + α2 − √(α4 − α2 + 1) ) / 2.
Next, assume that the measurements of x are corrupted by additive
“white noise”, so that xmeasured = x+ σξ. Equation (4.5) then becomes an
Itô stochastic differential equation
d ( v̂ ; â ) = [ ( −αv  1 ; −αa  0 ) ( v̂ ; â ) + ( (αa − αv2)x ; −αaαvx − U1ω3 cos(ωt) ) ] dt + B dW,   B = σ ( αa − αv2 ; −αaαv )
By definition of B, the variance of the noise in the metric Mα is upper-bounded by α6σ2. Thus, using again corollary 1, one obtains (see Figure 1 for a numerical simulation)
∀t ≥ 0   E( ‖v̂(t) − v(t)‖2 + ‖â(t) − a(t)‖2 ) ≤ α5σ2/βα + (1/βα) ( v̂0 − v0 ; â0 − a0 )T Mα ( v̂0 − v0 ; â0 − a0 ) e−αt
4.3 Stochastic synchronization
Consider a network of n dynamical elements coupled through diffusive con-
nections
dxi = ( f(xi, t) + Σ_{j≠i} Kij(xj − xi) ) dt + σi(xi, t)dW^d_i,   i = 1, . . . , n (4.7)
with
⌢f(⌢x, t) = ( f(x1, t) ; . . . ; f(xn, t) ),   ⌢σ(⌢x, t) = diag( σ1(x1, t), . . . , σn(xn, t) )
The global state ⌢x then follows the equation
d⌢x = ( ⌢f(⌢x, t) − L⌢x ) dt + ⌢σ(⌢x, t)dW^{nd} (4.8)
Figure 1: Estimation of the velocity of a mobile using noisy measurements
of its position. The simulation was performed using the Euler-Maruyama
algorithm [18] with the following parameters: U1 = 10, U2 = 2, ω = 3,
σ = 10 and α = 1. Left plot: simulation for one trial. The plot shows
the measured position (red), the actual velocity (blue) and the estimate of
the velocity using the measured position (green). Right plot: the average
over 1000 trials of the squared error ‖v̂ − v‖2 + ‖â − a‖2 (green) and the
asymptotic bound (equal to 200 for these parameter values) given by our approach (red).
In the sequel, we follow the reasoning of [28], which starts by defining an
appropriate orthonormal matrix V describing the synchronization subspace
(V represents the state projection on the subspace M⊥, orthogonal to the
synchronization subspace M = {(x1, . . . ,xn)T : x1 = . . . = xn}, see [28] for
details). Denote by ⌢y the state of the projected system, ⌢y = V⌢x. Since the mapping is linear, Itô differentiation rule simply yields
d⌢y = V d⌢x = ( V⌢f(⌢x, t) − VL⌢x ) dt + V⌢σ(⌢x, t)dW^{nd} = ( V⌢f(VT⌢y, t) − VLVT⌢y ) dt + V⌢σ(VT⌢y, t)dW^{nd} (4.9)
Assume now that ∂f/∂x is uniformly upper-bounded. Then for strong enough coupling strength, A = V(∂f/∂x)VT − VLVT will be uniformly negative definite. Let λ = |λmax(A)| > 0. System (4.9) then verifies condition (H1) with rate λ. Assume furthermore that each noise intensity σi is upper-bounded by a constant Ci (i.e. supx,t tr(σi(x, t)Tσi(x, t)) ≤ Ci). Condition (H2) will then be satisfied with the bound C = Σi Ci.
Next, consider a noise-free trajectory ⌢yu(t) of system (4.9). By theorem 3 of [28], we know that ⌢yu(t) converges exponentially to zero. Thus, by corollary 1, one can conclude that, after exponential transients of rate λ,
E( ‖⌢y(t)‖2 ) ≤ C/(2λ)
On the other hand, one can show that
‖⌢y(t)‖2 = (1/n) Σ_{i<j} ‖xi − xj‖2
Thus, after exponential transients of rate λ, we have
(1/n) Σ_{i<j} E( ‖xi − xj‖2 ) ≤ C/(2λ)
Remarks
• The above development is fully compatible with the concurrent syn-
chronization framework [28]. It can also be easily generalized to the
case of time-varying metrics by combining theorem 3 of this paper and
corollary 1 of [28].
• The synchronization of Itô dynamical systems has been investigated
in [4]. However, the systems considered by the authors of that article
were dissipative. Here, we make a less restrictive assumption, namely,
we only require ∂f
to be uniformly upper-bounded. This enables us
to study the synchronization of a broader class of dynamical systems,
which can include nonlinear oscillators or even chaotic systems.
Example As illustration of the above development, we provide here a de-
tailed analysis for the synchronization of noisy FitzHugh-Nagumo oscillators
(see [35] for the references). The dynamics of two diffusively-coupled noisy
FitzHugh-Nagumo oscillators can be described by
dvi = ( c(vi + wi − (1/3)vi^3 + Ii) + k(v0 − vi) ) dt + σdWi
dwi = −(1/c)(vi − a + bwi) dt,   i = 1, 2
Let x = (v1, w1, v2, w2)T and V = (1/√2) ( 1 0 −1 0 ; 0 1 0 −1 ). The Jacobian matrix of the projected noise-free system is then given by
( c − c(v1^2 + v2^2)/2 − k    c ; −1/c    −b/c )
Thus, if the coupling strength verifies k > c then the projected system
will be stochastically contracting in the diagonal metric M = diag(1, c) with
rate min(k − c, b/c) and bound σ2. Hence, the average absolute difference
between the two “membrane potentials” |v1 − v2| will be upper-bounded by
σ/√( min(1, c) min(k − c, b/c) ) (see Figure 2 for a numerical simulation).
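The sketch below reproduces this setup by Euler-Maruyama integration with the parameters of Figure 2 (it is not the authors' code). The external currents Ii and the precise form of the coupling signal v0 are not specified in this excerpt, so the sketch assumes Ii = 0 and mutual diffusive coupling between the two oscillators.

import numpy as np

a, b, c, k, sigma = 0.3, 0.2, 30.0, 40.0, 1.0
I1 = I2 = 0.0                        # external currents (assumed; not given in this excerpt)
dt, T = 1e-4, 10.0
steps = int(T / dt)

rng = np.random.default_rng(4)
v = np.array([2.0, -1.0])
w = np.array([0.0, 0.5])
max_late_diff = 0.0
for n in range(steps):
    coupling = k * (v[::-1] - v)     # assumed mutual diffusive coupling
    dv = (c * (v + w - v ** 3 / 3 + np.array([I1, I2])) + coupling) * dt \
         + sigma * np.sqrt(dt) * rng.standard_normal(2)
    dw = -(v - a + b * w) / c * dt
    v, w = v + dv, w + dw
    if n * dt > 2.0:                 # after transients
        max_late_diff = max(max_late_diff, abs(v[0] - v[1]))
print(max_late_diff)                 # small compared to the oscillation amplitude of the vi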
Acknowledgments We are grateful to Dr S. Darses, Prof D. Bennequin
and Prof M. Yor for stimulating discussions, and to the Associate Editor
and the reviewers for their helpful comments.
Figure 2: Synchronization of two noisy FitzHugh-Nagumo oscillators. The
simulation was performed using the Euler-Maruyama algorithm [18] with
the following parameters: a = 0.3, b = 0.2, c = 30, k = 40 and σ = 1. The
plot shows the “membrane potentials” of the two oscillators.
A Appendix
A.1 Proof of the supermartingale property
Lemma 3 Consider a Markov stochastic process x(t) and a non-negative
function V such that ∀t ≥ 0 EV (x(t)) < ∞ and
∀x ∈ Rn ÃV (x) ≤ −λV (x) (A.1)
where λ is a non-negative real number and à is the infinitesimal operator
of the process x(t). Then V (x(t)) is a supermartingale with respect to the
canonical filtration Ft = {x(s), s ≤ t}.
We need to show that for all s ≥ t, one has E(V (x(s))|Ft) ≤ V (x(t)).
Since x(t) is a Markov process, it suffices to show that
∀x0 ∈ Rn E(V (x(t))|x(0) = x0) ≤ V (x0)
By Dynkin’s formula, one has for all x0 ∈ Rn
Ex0V (x(t)) = V (x0) + Ex0 ∫_0^t ÃV (x(s)) ds ≤ V (x0) − λEx0 ∫_0^t V (x(s)) ds ≤ V (x0)
where Ex0(·) = E(·|x(0) = x0).
A.2 A variation of Gronwall’s lemma
Lemma 4 Let g : [0,∞[→ R be a continuous function, C a real number
and λ a strictly positive real number. Assume that
∀u, t  0 ≤ u ≤ t   g(t) − g(u) ≤ ∫_u^t ( −λg(s) + C ) ds (A.2)
Then
∀t ≥ 0   g(t) ≤ C/λ + [ g(0) − C/λ ]+ e−λt (A.3)
where [·]+ = max(0, ·).
Proof Case 1 : C = 0, g(0) > 0.
Define h(t) by
∀t ≥ 0 h(t) = g(0)e−λt
Remark that h is positive with h(0) = g(0), and satisfies (A.2) where the
inequality has been replaced by an equality
∀u, t 0 ≤ u ≤ t h(t)− h(u) = −
λh(s)ds
Consider now the set S = {t ≥ 0 | g(t) > h(t)}. If S = ∅ then the
lemma holds true. Assume by contradiction that S 6= ∅. In this case, let
m = inf S < ∞. By continuity of g and h and by the fact that g(0) = h(0),
one has g(m) = h(m) and there exists ǫ > 0 such that
∀t ∈]m,m+ ǫ[ g(t) > h(t) (A.4)
Consider now φ(t) = g(m) − λ ∫_m^t g(s) ds. Equation (A.2) implies that
∀t ≥ m g(t) ≤ φ(t)
In order to compare φ(t) and h(t) for t ∈]m,m+ ǫ[, let us differentiate the
ratio φ(t)/h(t).
( φ/h )′ = ( φ′h − h′φ ) / h2 = ( −λgh + λhφ ) / h2 = λh( φ − g ) / h2 ≥ 0
Thus φ(t)/h(t) is increasing for t ∈]m,m + ǫ[. Since φ(m)/h(m) = 1, one
can conclude that
∀t ∈]m,m+ ǫ[ φ(t) ≥ h(t)
which implies, by definition of φ and h, that
∀t ∈ ]m,m+ ǫ[   ∫_m^t g(s) ds ≤ ∫_m^t h(s) ds (A.5)
Choose now t0 such that m < t0 < m+ ǫ, then one has by (A.4)
∫_m^{t0} g(s) ds > ∫_m^{t0} h(s) ds
which clearly contradicts (A.5).
Case 2 : C = 0, g(0) ≤ 0
Consider the set S = {t ≥ 0 | g(t) > 0}. If S = ∅ then the lemma holds
true. Assume by contradiction that S 6= ∅. In this case, let m = inf S < ∞.
By continuity of g and by the fact that g(0) ≤ 0, one has g(m) = 0 and
there exists ǫ such that
∀t ∈]m,m+ ǫ[ g(t) > 0 (A.6)
Let t0 ∈]m,m+ ǫ[. Equation (A.2) implies that
g(t0) ≤ −λ ∫_m^{t0} g(s) ds
which clearly contradicts (A.6).
Case 3 : C 6= 0
Define ĝ = g − C/λ. One has
∀u, t  0 ≤ u ≤ t   ĝ(t) − ĝ(u) = g(t) − g(u) ≤ ∫_u^t ( −λg(s) + C ) ds = − ∫_u^t λĝ(s) ds
Thus ĝ satisfies the conditions of Case 1 or Case 2, and as a consequence
∀t ≥ 0 ĝ(t) ≤ [ĝ(0)]+e−λt
The conclusion of the lemma follows by replacing ĝ by g−C/λ in the above
equation. �
References
[1] N. Aghannan and P. Rouchon. An intrinsic observer for a class of la-
grangian systems. IEEE Transactions on Automatic Control, 48, 2003.
[2] D. Angeli. A lyapunov approach to incremental stability properties.
IEEE Transactions on Automatic Control, 47:410–422, 2002.
[3] L. Arnold. Stochastic Differential Equations : Theory and Applications.
Wiley, 1974.
[4] T. Caraballo and P. Kloeden. The persistence of synchronization under
environmental noise. Proceedings of the Royal Society A, 461:2257–
2267, 2005.
[5] T. Caraballo, P. Kloeden, and B. Schmalfuss. Exponentially stable
stationary solutions for stochastic evolution equations and their pertu-
bation. Applied Mathematics and Optimization, 50:183–207, 2004.
[6] L. d’Alto and M. Corless. Incremental quadratic stability. In Proceed-
ings of the IEEE Conference on Decision and Control, 2005.
[7] B. Demidovich. Dissipativity of a nonlinear system of differential equa-
tions. Ser. Mat. Mekh., 1961.
[8] H. Deng, M. Krstic, and R. Williams. Stabilization of stochastic nonlin-
ear systems driven by noise of unknown covariance. IEEE Transactions
on Automatic Control, 46, 2001.
[9] E. Dickmanns. Dynamic Vision for Intelligent Vehicles. Course Notes,
MIT EECS dept, 1998.
[10] K. El Rifai and J.-J. Slotine. Contraction and incremental stability.
Technical report, MIT NSL Report, 2006.
[11] W. Feller. An Introduction to Probability Theory and Its Applications.
Wiley, 1968.
[12] P. Florchinger. Lyapunov-like techniques for stochastic stability. SIAM
Journal of Control and Optimization, 33:1151–1169, 1995.
[13] P. Florchinger. Feedback stabilization of affine in the control stochastic
differential systems by the control lyapunov function method. SIAM
Journal of Control and Optimization, 35, 1997.
[14] V. Fromion. Some results on the behavior of lipschitz continuous sys-
tems. In Proceedings of the European Control Conference, 1997.
[15] B. Girard, N. Tabareau, Q.-C. Pham, A. Berthoz, and J.-J. Slotine.
Where neuroscience and dynamic system theory meet autonomous
robotics: a contracting basal ganglia model for action selection. Neural
Networks, 2008.
[16] P. Hartmann. Ordinary differential equations. Wiley, 1964.
[17] R. Has’minskii. Stochastic Stability of Differential Equations. Sijthoff
and Nordhoff, Rockville, 1980.
[18] D. Higham. An algorithmic introduction to numerical simulation of
stochastic differential equations. SIAM Review, 43:525–546, 2001.
[19] R. Horn and C. Johnson. Matrix Analysis. Cambridge University Press,
1985.
[20] J. Jouffroy. Some ancestors of contraction analysis. In Proceedings of
the IEEE Conference on Decision and Control, 2005.
[21] J. Jouffroy and T. Fossen. On the combination of nonlinear contracting
observers and uges controllers for output feedback. In Proceedings of
the IEEE Conference on Decision and Control, 2004.
[22] H. Kushner. Stochastic Stability and Control. Academic Press, 1967.
[23] D. Lewis. Metric properties of differential equations. American Journal
of Mathematics, 71:294–312, 1949.
[24] W. Lohmiller and J.-J. Slotine. On contraction analysis for nonlinear
systems. Automatica, 34:671–682, 1998.
[25] W. Lohmiller and J.-J. Slotine. Nonlinear process control using con-
traction theory. A.I.Ch.E. Journal, 2000.
[26] W. Lohmiller and J.-J. Slotine. Contraction analysis of nonlinear dis-
tributed systems. International Journal of Control, 78, 2005.
[27] X. Mao. Stability of Stochastic Differential Equations with Respect to
Semimartingales. Longman, White Plains, NY, 1991.
[28] Q.-C. Pham and J.-J. Slotine. Stable concurrent synchronization in
dynamic system networks. Neural Netw, 20(1):62–77, Jan. 2007.
[29] V. Popov. Hyperstability of Control Systems. Springer-Verlag, 1973.
[30] W. Rudin. Real and complex analysis. McGraw-Hill, 1987.
[31] J.-J. Slotine and W. Li. Applied Nonlinear Control. Prentice-Hall, 1991.
[32] E. Sontag and Y. Wang. Output-to-state stability and detectability of
nonlinear systems. Systems and Control Letters, 29:279–290, 1997.
[33] N. Tabareau and J.-J. Slotine. Notes on contraction theory. Technical
report, MIT NSL Report, 2005.
[34] J. Tsinias. The concept of “exponential iss” for stochastic systems and
applications to feedback stabilization. Systems and Control Letters,
36:221–229, 1999.
[35] W. Wang and J.-J. E. Slotine. On partial contraction analysis for cou-
pled nonlinear oscillators. Biol Cybern, 92(1):38–53, Jan. 2005.
[36] Y. Zhao and J.-J. Slotine. Discrete nonlinear observers for inertial
navigation. Systems and Control Letters, 54, 2005.
|
0704.0927 | A Symplectic Test of the L-Functions Ratios Conjecture | A SYMPLECTIC TEST OF THE L-FUNCTIONS RATIOS CONJECTURE
STEVEN J. MILLER
ABSTRACT. Recently Conrey, Farmer and Zirnbauer [CFZ1, CFZ2] conjectured for-
mulas for the averages over a family of ratios of products of shifted L-functions. Their
L-functions Ratios Conjecture predicts both the main and lower order terms for many
problems, ranging from n-level correlations and densities to mollifiers and moments
to vanishing at the central point. There are now many results showing agreement be-
tween the main terms of number theory and random matrix theory; however, there are
very few families where the lower order terms are known. These terms often depend
on subtle arithmetic properties of the family, and provide a way to break the universal-
ity of behavior. The L-functions Ratios Conjecture provides a powerful and tractable
way to predict these terms. We test a specific case here, that of the 1-level density for
the symplectic family of quadratic Dirichlet characters arising from even fundamental
discriminants d ≤ X . For test functions supported in (−1/3, 1/3) we calculate all
the lower order terms up to size O(X−1/2+ǫ) and observe perfect agreement with the
conjecture (for test functions supported in (−1, 1) we show agreement up to errors of
size O(X−ǫ) for any ǫ). Thus for this family and suitably restricted test functions, we
completely verify the Ratios Conjecture’s prediction for the 1-level density.
1. INTRODUCTION
Montgomery’s [Mon] analysis of the pair correlation of zeros of ζ(s) revealed a strik-
ing similarity to the behavior of eigenvalues of ensembles of random matrices. Since
then, this connection has been a tremendous predictive aid to researchers in number
theory in modeling the behavior of zeros and values of L-functions, ranging from spac-
ings between adjacent zeros [Mon, Hej, Od1, Od2, RS] to moments of L-functions
[CF, CFKRS]. Katz and Sarnak [KaSa1, KaSa2] conjectured that, in the limit as the
conductors tend to infinity, the behavior of the normalized zeros near the central point
agrees with the N → ∞ scaling limit of the normalized eigenvalues near 1 of a subgroup
of U(N). One way to test this correspondence is through the n-level density of a family
F of L-functions L(s, f); we concentrate on this statistic in this paper. The n-level
density is
Dn,F(φ) := (1/|F|) Σ_{f∈F} Σ_{ℓ1,...,ℓn ; ℓi ≠ ±ℓk} φ1( γf,ℓ1 (logQf)/(2π) ) · · · φn( γf,ℓn (logQf)/(2π) ), (1.1)
Date: October 25, 2018.
2000 Mathematics Subject Classification. 11M26 (primary), 11M41, 15A52 (secondary).
Key words and phrases. 1-Level Density, Dirichlet L-functions, Low Lying Zeros, Ratios Conjecture.
The author would like to thank Eduardo Dueñez, Chris Hughes, Duc Khiem Huynh, Jon Keating,
Nina Snaith and Sergei Treil for many enlightening conversations, Jeffrey Stopple for finding a typo in
the proof of Lemma 3.2, and the University of Bristol for its hospitality (where much of this work was
done). This work was partly supported by NSF grant DMS0600848.
http://arxiv.org/abs/0704.0927v5
where the φi are even Schwartz test functions whose Fourier transforms have compact
support, 1
+ iγf,ℓ runs through the non-trivial zeros of L(s, f), and Qf is the analytic
conductor of f . As the φi are even Schwartz functions, most of the contribution to
Dn,F(φ) arises from the zeros near the central point; thus this statistic is well-suited to
investigating the low-lying zeros.
There are now many examples where the main term in number theory agrees with
the Katz-Sarnak conjectures (at least for suitably restricted test functions), such as all
Dirichlet characters, quadratic Dirichlet characters, L(s, ψ) with ψ a character of the
ideal class group of the imaginary quadratic field Q(
−D), families of elliptic curves,
weight k level N cuspidal newforms, symmetric powers of GL(2) L-functions, and
certain families of GL(4) and GL(6) L-functions (see [DM1, FI, Gü, HR, HM, ILS,
KaSa2, Mil1, OS2, RR, Ro, Rub1, Yo2]).
For families of L-functions over function fields, the corresponding classical compact
group can be identified through the monodromy. While the situation is less clear for L-
functions over number fields, there has been some recent progress. Dueñez and Miller
[DM2] show that for sufficiently nice families and sufficiently small support, the main
term in the 1-level density is determined by the first and second moments of the Satake
parameters, and a symmetry constant (which identifies the corresponding classical com-
pact group) may be associated to any nice family such that the symmetry constant of the
Rankin-Selberg convolution of two families is the product of the symmetry constants.
There are two avenues for further research. The first is to increase the support of
the test functions, which often leads to questions of arithmetic interest (see for example
Hypothesis S in [ILS]). Another is to identify lower order terms in the 1-level density,
which is the subject of this paper. The main term in the 1-level density is independent of
the arithmetic of the family, which surfaces in the lower order terms. This is very similar
to the Central Limit Theorem. For nice densities the distribution of the normalized
sample mean converges to the standard normal. The main term is controlled by the
first two moments (the mean and the variance of the density) and the higher moments
surface in the rate of convergence. This is similar to our situation, where the universal
main terms arise from the first and second moments of the Satake parameters.
There are now several families where lower order terms have been isolated in the
1-level density [FI, Mil2, Mil3, Yo1]; see also [BoKe], where the Hardy-Littlewood
conjectures are related to lower order terms in the pair correlation of zeros of ζ(s) (see
for example [Be, BeKe, CS2, Ke] for more on lower terms of correlations of Riemann
zeros). Recently Conrey, Farmer and Zirnbauer [CFZ1, CFZ2] formulated conjectures
for the averages over families of L-functions of ratios of products of shifted L-functions,
such as
Σ_{d≤X} L(1/2 + α, χd)/L(1/2 + γ, χd) = Σ_{d≤X} [ ζ(1 + 2α)/ζ(1 + α + γ) · AD(α; γ) + (d/π)^{−α} Γ(1/4 − α/2)/Γ(1/4 + α/2) · ζ(1 − 2α)/ζ(1 − α + γ) · AD(−α; γ) ] + O(X^{1/2+ǫ}) (1.2)
(here d ranges over even fundamental discriminants, −1/4 < ℜ(α) < 1/4, 1/ logX ≪
ℜ(γ) < 1/4, and AD (we only give the definition for α = γ, as that is the only in-
stance that occurs in our applications) is defined in (1.4)). Their L-functions Ratios
Conjecture arises from using the approximate functional equation, integrating term by
term, and retaining only the diagonal pieces (which they then ‘complete’); they also
assume uniformity in the parameters so that the resulting expressions may be differen-
tiated (this is an essential ingredient for 1-level density calculations). It is worth noting
the incredible detail of the conjecture, predicting all terms down to O(X1/2+ǫ).
There are many difficult computations whose answers can easily be predicted through
applications of the L-functions Ratios Conjecture, ranging from n-level correlations and
densities to mollifiers and moments to vanishing at the central point (see [CS1]). While
these are not proofs, it is extremely useful for researchers to have a sense of what the
answer should be. One common difficulty in the subject is that often the number theory
and random matrix theory answers appear different at first, and much effort must be
spent on combinatorics to prove agreement (see for example [Gao, HM, Rub1, RS]); the
analysis is significantly easier if one knows what the final answer should be. Further,
the Ratios Conjecture often suggests a more enlightening way to group terms (see for
instance Remark 1.4).
Our goal in this paper is to test the predictions of the Ratios Conjecture for a specific
family, that of quadratic Dirichlet characters. We let d be a fundamental discriminant.
This means (see §5 of [Da]) that either d is a square-free number congruent to 1 mod-
ulo 4, or d/4 is square-free and congruent to 2 or 3 modulo 4. If χd is the quadratic
character associated to the fundamental discriminant d, then if χd(−1) = 1 (resp., −1)
we say d is even (resp., odd). If d is a fundamental discriminant then it is even (resp.,
odd) if d > 0 (resp., d < 0). We concentrate on even fundamental discriminants below,
though with very few changes our arguments hold for odd discriminants (for example,
if d is odd there is an extra 1/2 in certain Gamma factors in the explicit formula).
For notational convenience we adopt the following conventions throughout the paper:
• Let X∗ denote the number of even fundamental discriminants at most X; thus
X∗ = 3X/π² + O(X^{1/2}), and X/π² + O(X^{1/2}) of these have 4|d (see Lemma
B.1 for a proof, and the numerical sanity check sketched after this list).
• In any sum over d, d will range over even fundamental discriminants unless
otherwise specified.
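As a quick sanity check of the counts in the first convention above (a sketch of mine, not part of the paper; the cutoff X = 10^6 and the variable names are illustrative choices), the following Python snippet enumerates the even fundamental discriminants directly and compares the counts with 3X/π² and X/π².

```python
# Count even (positive) fundamental discriminants d <= X: d = 1 (mod 4) squarefree,
# or d = 4m with m squarefree and m = 2 or 3 (mod 4).  Compare with 3X/pi^2 and X/pi^2.
from math import pi

def squarefree_sieve(N):
    sf = [True] * (N + 1)
    k = 2
    while k * k <= N:
        for m in range(k * k, N + 1, k * k):
            sf[m] = False
        k += 1
    return sf

X = 10**6
sf = squarefree_sieve(X)
count = count4 = 0
for d in range(2, X + 1):
    if d % 4 == 1 and sf[d]:
        count += 1
    elif d % 4 == 0 and (d // 4) % 4 in (2, 3) and sf[d // 4]:
        count += 1
        count4 += 1

print(f"X* = {count},  3X/pi^2 = {3 * X / pi**2:.1f}")
print(f"#{{d : 4|d}} = {count4},  X/pi^2 = {X / pi**2:.1f}")
```

Both counts should agree with the stated main terms up to fluctuations of size roughly X^{1/2}.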
The goal of these notes is to calculate the lower order terms (on the number theory
side) as much as possible, as unconditionally as possible, and then compare our answer
to the prediction from the L-functions Ratios Conjecture, given in the theorem below.
Theorem 1.1 (One-level density from the Ratios Conjecture [CS1]). Let g be an even
Schwartz test function such that ĝ has finite support. Let X∗ denote the number of
even fundamental discriminants at most X , and let d denote a typical even fundamental
discriminant. Assuming the Ratios Conjecture for $\sum_{d\le X} L(\tfrac12+\alpha,\chi_d)/L(\tfrac12+\gamma,\chi_d)$,
we have
\[
\frac{1}{X^*}\sum_{d\le X}\sum_{\ell}g\!\left(\gamma_{d,\ell}\,\frac{\log X}{2\pi}\right)
= \frac{1}{X^*\log X}\sum_{d\le X}\int_{-\infty}^{\infty}g(\tau)\left[\log\frac{d}{\pi}
+\frac12\frac{\Gamma'}{\Gamma}\!\left(\frac14+\frac{\pi i\tau}{\log X}\right)
+\frac12\frac{\Gamma'}{\Gamma}\!\left(\frac14-\frac{\pi i\tau}{\log X}\right)\right]d\tau
\]
\[
\;+\;\frac{2}{X^*\log X}\sum_{d\le X}\int_{-\infty}^{\infty}g(\tau)\left[\frac{\zeta'}{\zeta}\!\left(1+\frac{4\pi i\tau}{\log X}\right)
+A_D'\!\left(\frac{2\pi i\tau}{\log X};\frac{2\pi i\tau}{\log X}\right)
-e^{-2\pi i\tau\log(d/\pi)/\log X}\,
\frac{\Gamma\!\left(\frac14-\frac{\pi i\tau}{\log X}\right)}{\Gamma\!\left(\frac14+\frac{\pi i\tau}{\log X}\right)}\,
\zeta\!\left(1-\frac{4\pi i\tau}{\log X}\right)A_D\!\left(-\frac{2\pi i\tau}{\log X};\frac{2\pi i\tau}{\log X}\right)\right]d\tau
\;+\;O\!\left(X^{-1/2+\epsilon}\right), \qquad (1.3)
\]
where
\[
A_D(-r,r) = \prod_p\left(1-\frac{1}{p+1}-\frac{1}{(p+1)\,p^{1-2r}}\right)\left(1-\frac{1}{p}\right)^{-1}
\]
and
\[
A_D'(r;r) = \sum_p\frac{\log p}{(p+1)\left(p^{1+2r}-1\right)}. \qquad (1.4)
\]
The above is
\[
1-\frac{\sin(2\pi x)}{2\pi x}, \qquad (1.5)
\]
which is the 1-level density for the scaling limit of USp(2N). If supp(ĝ) ⊂ (−1, 1),
then the integral of g(x) against − sin(2πx)/2πx is −g(0)/2.
If we assume the Riemann Hypothesis, for supp(ĝ) ⊂ (−σ, σ) ⊂ (−1, 1) we have
\[
-\frac{2}{X^*\log X}\sum_{d\le X}\int_{-\infty}^{\infty}g(\tau)\,e^{-2\pi i\tau\frac{\log(d/\pi)}{\log X}}\,
\frac{\Gamma\!\left(\frac14-\frac{\pi i\tau}{\log X}\right)}{\Gamma\!\left(\frac14+\frac{\pi i\tau}{\log X}\right)}\,
\zeta\!\left(1-\frac{4\pi i\tau}{\log X}\right)A_D\!\left(-\frac{2\pi i\tau}{\log X};\frac{2\pi i\tau}{\log X}\right)d\tau
= -\frac{g(0)}{2}+O\!\left(X^{-\frac34(1-\sigma)+\epsilon}\right); \qquad (1.6)
\]
the error term may be absorbed into the O(X−1/2+ǫ) error in (1.3) if σ < 1/3.
The conclusions of the above theorem are phenomenal, and demonstrate the power of
the Ratios Conjecture. Not only does its main term agree with the Katz-Sarnak conjec-
tures for arbitrary support, but it calculates the lower order terms up to sizeO(X−1/2+ǫ).
While Theorem 1.1 is conditional on the Ratios Conjecture, the following theorem is
not, and provides highly non-trivial support for the Ratios Conjecture.
Theorem 1.2 (One-level density for quadratic Dirichlet characters). Let the notation be
as in Theorem 1.1, with supp(ĝ) ⊂ (−σ, σ).
(1) Up to terms of size O(X−(1−σ)/2+ǫ), the 1-level density for the family of qua-
dratic Dirichlet characters with even fundamental discriminants at most X
agrees with (1.3) (the prediction from the Ratios Conjecture).
(2) If we instead consider the family {8d : 0 < d ≤ X, d an odd, positive square-
free fundamental discriminant}, then the 1-level density agrees with the predic-
tion from the Ratios Conjecture up to terms of size $O\!\left(X^{-1/2}+X^{-\left(1-\frac{3\sigma}{2}\right)+\epsilon}+X^{-\frac34(1-\sigma)+\epsilon}\right)$. In particular, if σ < 1/3 then the number theory calculation
agrees with the Ratios Conjecture up to errors at most O(X−1/2+ǫ).
Remark 1.3. The above theorem indicates that, at least for the family of quadratic
Dirichlet characters and suitably restricted test functions, the Ratios Conjecture is pre-
dicting all lower order terms up to size O(X−1/2+ǫ). This is phenomenal agreement
between theory and conjecture. Previous investigations of lower order terms in 1-level
densities went as far as O(logN X) for some N ; here we are getting square-root agree-
ment, and strong evidence in favor of the Ratios Conjecture.
Remark 1.4 (Influence of zeros of ζ(s) on lower order terms). From the expansion
in (1.3) we see that one of the lower order terms (arising from the integral of g(τ)
against ζ ′(1 + 4πiτ/ logX)/ζ(1 + 4πiτ/ logX)) in the 1-level density for the family
of quadratic Dirichlet characters is controlled by the non-trivial zeros of ζ(s). This
phenomenon has been noted by other researchers (Bogomolny, Conrey, Keating, Ru-
binstein, Snaith); see [CS1, BoKe, HKS, Rub2] for more details, especially [Rub2] for
a plot of the influence of zeros of ζ(s) on zeros of L-functions of quadratic Dirichlet
characters.
The proof of Theorem 1.2 starts with the Explicit Formula, which relates sums over
zeros to sums over primes (for completeness a proof is given in Appendix A). For
convenience to researchers interested in odd fundamental discriminants, we state it in
more generality than we need.
Theorem 1.5 (Explicit Formula for a family of Quadratic Dirichlet Characters). Let g
be an even Schwartz test function such that ĝ has finite support. For d a fundamental
discriminant let a(χd) = 0 if d is even (χd(−1) = 1) and 1 otherwise. Consider a
family F(X) of fundamental discriminants at most X in absolute value. We have
\[
\frac{1}{|F(X)|}\sum_{d\in F(X)}\sum_{\ell}g\!\left(\gamma_{d,\ell}\,\frac{\log X}{2\pi}\right)
= \frac{1}{|F(X)|\log X}\sum_{d\in F(X)}\int_{-\infty}^{\infty}g(\tau)\left[\log\frac{|d|}{\pi}
+\frac12\frac{\Gamma'}{\Gamma}\!\left(\frac14+\frac{a(\chi_d)}{2}+\frac{\pi i\tau}{\log X}\right)
+\frac12\frac{\Gamma'}{\Gamma}\!\left(\frac14+\frac{a(\chi_d)}{2}-\frac{\pi i\tau}{\log X}\right)\right]d\tau
\]
\[
\;-\;\frac{2}{|F(X)|}\sum_{d\in F(X)}\sum_{p}\sum_{k=1}^{\infty}\frac{\chi_d(p)^{k}\log p}{p^{k/2}\log X}\,\hat g\!\left(\frac{\log p^{k}}{\log X}\right). \qquad (1.7)
\]
As our family has only even fundamental discriminants, all a(χd) = 0. The terms
arising from the conductors (the log(|d|/π) and the Γ′/Γ terms) agree with the Ratios
Conjecture. We are reduced to analyzing the sums of χ_d(p)^k and showing they agree
with the remaining terms in the Ratios Conjecture. As our characters are quadratic, this
reduces to understanding sums of χ_d(p) and χ_d(p)². We first analyze the terms from
the Ratios Conjecture in §2 and then we analyze the character sums in §3. We proceed
in this order as one of the main uses of the Ratios Conjecture is in predicting simple
forms of the answer; in particular, it suggests non-obvious simplifications of the number
theory sums.
2. ANALYSIS OF THE TERMS FROM THE RATIOS CONJECTURE.
We analyze the terms in the 1-level density from the Ratios Conjecture (Theorem
1.1). The first piece (involving log(d/π) and Γ′/Γ factors) is already matched with the
terms in the Explicit Formula arising from the conductors and Γ-factors in the functional
equation. In §3 we match the next two terms (the integral of g(τ) against ζ ′/ζ and A′D)
to the contributions from the sum over χ_d(p)^k for k even; we do this for test functions
with arbitrary support. The number theory is almost equal to this; the difference is the
presence of a factor −g(0)/2 from the even k terms, which we match to the remaining
piece from the Ratios Conjecture.
This remaining piece is the hardest to analyze. We denote it by
\[
R(g;X) = -\frac{2}{X^*\log X}\sum_{d\le X}\int_{-\infty}^{\infty}g(\tau)\,e^{-2\pi i\tau\frac{\log(d/\pi)}{\log X}}\,
\frac{\Gamma\!\left(\frac14-\frac{\pi i\tau}{\log X}\right)}{\Gamma\!\left(\frac14+\frac{\pi i\tau}{\log X}\right)}\,
\zeta\!\left(1-\frac{4\pi i\tau}{\log X}\right)A_D\!\left(-\frac{2\pi i\tau}{\log X};\frac{2\pi i\tau}{\log X}\right)d\tau, \qquad (2.1)
\]
with (see (1.4))
\[
A_D(-r,r) = \prod_p\left(1-\frac{1}{p+1}-\frac{1}{(p+1)\,p^{1-2r}}\right)\left(1-\frac{1}{p}\right)^{-1}. \qquad (2.2)
\]
There is a contribution to R(g;X) from the pole of ζ(s). The other terms are at
most O(1/ logX); however, if the support of ĝ is sufficiently small then these terms
contribute significantly less.
Lemma 2.1. Assume the Riemann Hypothesis. If supp(ĝ) ⊂ (−σ, σ) then
\[
R(g;X) = -\frac{g(0)}{2}+O\!\left(X^{-\frac34(1-\sigma)+\epsilon}\right). \qquad (2.3)
\]
In particular, if σ < 1/3 then R(g;X) = −g(0)/2 + O(X^{−1/2+ǫ}).
Remark 2.2. If we do not assume the Riemann Hypothesis we may prove a similar re-
sult. The error term is replaced with $O\!\left(X^{-\left(1-\frac{\theta}{2}\right)(1-\sigma)+\epsilon}\right)$, where θ is the supremum
of the real parts of zeros of ζ(s). As θ ≤ 1, we may always bound the error by
O(X−(1−σ)/2+ǫ). Interestingly, this is the error we get in analyzing the number the-
ory terms χ(p)k with k odd by applying Jutila’s bound (see §3.2.1); we obtain a better
bound of $O\!\left(X^{-\left(1-\frac{3\sigma}{2}\right)}\right)$ by using Poisson summation to convert long character sums to
shorter ones (see §3.2.2).
Remark 2.3. The proof of Lemma 2.1 follows from shifting contours and keeping
track of poles of ratios of Gamma and zeta functions. We can prove a related result with
significantly less work. Specifically, if for supp(ĝ) ⊂ (−1, 1) we are willing to accept
error terms of size O(log−N X) for any N then we may proceed as follows: (1) modify
Lemma B.2 to replace the d-sum with X∗e−2πi(1−
log π
logX )τ
1− 2πiτ
+O(X1/2); (2)
use the decay properties of g to restrict the τ sum to |τ | ≤ logX and then Taylor expand
everything but g, which gives a small error term and
|τ |≤logX
lognX
(2πiτ)ne
−2πi(1− logπlogX )τdτ
lognX
|τ |≤logX
(2πiτ)ng(τ)e
−2πi(1− log πlogX )τdτ ; (2.4)
(3) use the decay properties of g to extend the τ -integral to all of R (it is essential here
that N is fixed and finite!) and note that for n ≥ 0 the above is the Fourier transform of
g(n) (the nth derivative of g) at 1− π
, and this is zero if supp(ĝ) ⊂ (−1, 1).
We prove Lemma 2.1 in §2.1; this completes our analysis of the terms from the
Ratios Conjecture. We analyze the lower order term of size 1/ logX (present only if
supp(ĝ) 6⊂ (−1, 1)) in Lemma 2.6 of §2.2. We explicitly calculate this contribution
because in many applications all that is required are the main and first lower order
terms. One example of this is that zeros at height T are modeled not by the N → ∞
scaling limits of a classical compact group but by matrices of size N ∼ log(T/2π)
[KeSn1, KeSn2]. In fact, even better agreement is obtained by changing N slightly due
to the first lower order term (see [BBLM, DHKMS]).
2.1. Analysis of R(g;X). Before proving Lemma 2.1 we collect several useful facts.
Lemma 2.4. In all statements below r = 2πiτ/ logX and supp(ĝ) ⊂ (−σ, σ) ⊂
(−1, 1).
(1) AD(−r, r) = ζ(2)/ζ(2− 2r).
(2) If |r| ≥ ǫ then |ζ(−3− 2r)/ζ(−2− 2r)| ≪ǫ (1 + |r|).
(3) For w ≥ 0, $g\!\left(\tau-i\,\frac{w\log X}{2\pi}\right) \ll \dfrac{X^{\sigma w}}{\left(\tau^2+\left(\frac{w\log X}{2\pi}\right)^2\right)^{B}}$ for any B ≥ 0.
(4) For 0 < a < b we have |Γ(a± iy)/Γ(b± iy)| = Oa,b(1).
Proof. (1): From simple algebra, as we may rewrite each factor as
\[
\frac{p^{2-2r}-1}{p^{2-2r}}\left(1-\frac{1}{p^{2}}\right)^{-1} = \frac{1-p^{2r-2}}{1-p^{-2}}. \qquad (2.5)
\]
(2): By the functional equations of the Gamma and zeta functions Γ(s/2)π−s/2ζ(s)
= Γ((1− s)/2)π−(1−s)/2ζ(1− s) and Γ(1 + x) = xΓ(x) gives
ζ(−3− 2r)
ζ(−2− 2r)
Γ(1− (−1 − r))π−2−rΓ(−1− r)π1+rζ(4 + 2r)
− r)π 32+rΓ(1− (−3
− r))(3
+ r)−1π−
+rζ(3 + 2r)
. (2.6)
Using
Γ(x)Γ(1− x) = π/ sin πx = 2πi/(eiπx − e−iπx), (2.7)
we see the ratio of the Gamma factors have the same growth as |r| → ∞ (if r = 0 then
there is a pole from the zero of ζ(s) at s = −2), and the two zeta functions are bounded
away from 0 and infinity.
(3): As g(τ) =
ĝ(ξ)e2πiξτdξ, we have
g(τ − iy) =
ĝ(ξ)e2πi(τ−iy)ξdξ
ĝ(2n)(ξ)(2πi(τ − iy))−ne2πi(τ−iy)ξdξ
≪ e2πyσ(τ − iy))−2n; (2.8)
the claim follows by taking y = (w logX)/2π.
(4): As |Γ(x− iy)| = |Γ(x+ iy)|, we may assume all signs are positive. The claim
follows from the definition of the Beta function:
Γ(a + iy)Γ(b− a)
Γ(b+ iy)
ta+iy−1(1− t)b−a−1 = Oa,b(1); (2.9)
see [ET] for additional estimates of the size of ratios of Gamma functions. �
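To make part (1) of Lemma 2.4 concrete, here is a small numerical check (my own sketch, not from the paper; the prime cutoff P and the sample values of r are arbitrary choices): the truncated Euler product for A_D(−r, r), taken in the equivalent form written in (2.2) above, should approach ζ(2)/ζ(2 − 2r).

```python
# Compare the truncated Euler product for A_D(-r, r) with zeta(2)/zeta(2-2r).
from mpmath import mp, mpc, zeta
from sympy import primerange

mp.dps = 15

def A_D(r, P=10**4):
    prod = mpc(1)
    for p in primerange(2, P):
        prod *= (1 - mpc(1) / (p + 1) - 1 / ((p + 1) * mpc(p)**(1 - 2 * r))) / (1 - mpc(1) / p)
    return prod

for r in (mpc(0), mpc(0.1), mpc(0, 0.2)):
    print(f"r = {r}:  product = {A_D(r)}   zeta(2)/zeta(2-2r) = {zeta(2) / zeta(2 - 2 * r)}")
```

Increasing P improves the agreement; already at P = 10^4 the two sides match to several decimal places for the sampled r.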
Proof of Lemma 2.1. By Lemma 2.4 we may replace AD(−2πiτ/ logX, 2πiτ/ logX)
with ζ(2)/ζ(2−4πiτ/ logX). We replace τ with τ − iw logX
with w = 0 (we will shift
the contour in a moment). Thus
R(g;X) = − 2
X∗ logX
τ − iw logX
−2πi(τ−iw logX2π )
log(d/π)
− πiτ
+ πiτ
ζ(2)ζ
1− w − 4πiτ
2− 2w − 4πiτ
) dτ. (2.10)
We now shift the contour to w = 2. There are two different residue contributions as
we shift (remember we are assuming the Riemann Hypothesis, so that if ζ(ρ) = 0 then
either ρ = 1
+ iγ for some γ ∈ R or ρ is a negative even integer), arising from
• the pole of ζ
1− w − 4πiτ
at w = τ = 0;
• the zeros of ζ
2− 2w − 4πiτ
when w = 3/4 and τ = γ logX
(while potentially there is a residue from the pole of Γ
− πiτ
when w = 1/2
and τ = 0, this is canceled by the pole of ζ
2− 2w − 4πiτ
in the denominator).
We claim the contribution from the pole of ζ
1− w − 4πiτ
at w = τ = 0 is
−g(0)/2. As w = τ = 0, the d-sum is just X∗. As the pole of ζ(s) is 1/(s− 1), since
s = 1 − 4πiτ
the 1/τ term from the zeta function has coefficient − logX
. We lose the
factor of 1/2πi when we apply the residue theorem, there is a minus sign outside the
integral and another from the direction we integrate (we replace the integral from −ǫ to
ǫ with a semi-circle oriented clockwise; this gives us a minus sign as well as a factor of
1/2 since we only have half the contour), and everything else evaluated at τ = 0 is g(0).
We now analyze the contribution from the zeros of ζ(s) as we shift w to 2. Thus
w = 3/2 and we sum over τ = γ logX
with ζ(1
+ iγ) = 0. We use Lemma B.2 (with
z = τ − iw logX
) to replace the d-sum with
−2πi(1− log πlogX )τ
− 2πiτ
2 log π
logX +O(logX). (2.11)
The contribution from the O(logX) term is dwarfed by the main term (which is of size
X1/4+ǫ). From (3) of Lemma 2.4 we have
≪ X3σ/4(τ 2 + 1)−B (2.12)
for any B > 0. From (4) of Lemma 2.4, we see that the ratio of the Gamma factors
is bounded by a power of |τ | (the reason it is a power is that we may need to shift a
few times so that the conditions are met; none of these factors will every vanish as we
are not evaluating at integral arguments). Finally, the zeta function in the numerator is
bounded by |τ |2. Thus the contribution from the critical zeros of ζ(s) is bounded by
+iγ)=0
X∗ logX
·X1/4 ·
X3σ/4
(γ2 + 1)B
· (|γ logX|+ 1)n. (2.13)
For sufficiently largeB the sum over γ will converge. This term is of sizeO(X−
(1−σ)+ǫ).
This error is O(X−ǫ) whenever σ < 1, and if σ < 1/3 then the error is at most
O(X−1/2+ǫ).
The proof is completed by showing that the integral over w = 2 is negligible. We
use Lemma B.2 (with z = τ − i2 logX
) to show the d-sum is O(X∗X−2+ǫ). Arguing
as above shows the integral is bounded by O(X−2+2σ+ǫ). (Note: some care is required,
as there is a pole when w = 2 coming from the trivial zero of ζ(s) at s = −2. The
contribution from the residue here is negligible; we could also adjust the contour to
include a semi-circle around w = 2 and use the residue theorem.) �
Remark 2.5. We sketch an alternate start of the proof of Lemma 2.1. One difficulty is
that R(g;X) is defined as an integral and there is a pole on the line of integration. We
may write
ζ(s) = (s− 1)−1 +
ζ(s)− (s− 1)−1
. (2.14)
For us s = 1 − 4πiτ
, so the first factor is just − logX
. As g(τ) is an even function, the
main term of the integral of this piece is
e−2πiτ
e−2πiτ
e2πiτ
sin(2πτ)
dτ = −g(0)
, (2.15)
where the last equality is a consequence of supp(ĝ) ⊂ (−1, 1). The other terms from
the (s − 1)−1 factor and the terms from the ζ(s) − (s − 1)−1 piece are analyzed in a
similar manner as the terms in the proof of Lemma 2.1.
2.2. Secondary term (of size 1/ logX) of R(g;X).
Lemma 2.6. Let supp(ĝ) ⊂ (−σ, σ); we do not assume σ < 1. Then the 1/ logX term
in the expansion of R(g;X) is
ζ′(2)
− 2γ + 2 log π
ĝ(1). (2.16)
It is important to note that this piece is only present if the support of ĝ exceeds (−1, 1)
(i.e., if σ > 1).
Proof. We sketch the determination of the main and secondary terms of R(g;X). We
may restrict the integrals to |τ | ≤ log1/4X with negligible error; this will allow us to
Taylor expand certain expressions and maintain good control over the errors. As g is
a Schwartz function, for any B > 0 we have g(τ) ≪ (1 + τ 2)−4B . The ratio of the
Gamma factors is of absolute value 1, and AD(−r; r) = ζ(2)/ζ(2− 2r) = O(1). Thus
the contribution from |τ | ≥ log1/4X is bounded by
|τ |≥log1/4 X
(1 + τ 2)−4B ·max
logC τ
dτ ≪ (logX)−B (2.17)
for B sufficiently large.
We use Lemma B.2 to evaluate the d-sum in (2.1) for |τ | ≤ log1/4X; the error term
is negligible and may be absorbed into the O(log−BX) error. We now Taylor expand
the three factors in (2.1). The main contribution comes from the pole of ζ ; the other
pieces contribute at the 1/ logX level.
We first expand the Gamma factors. We have
− πiτ
+ πiτ
) = 1−
log2X
. (2.18)
As AD(−r; r) = ζ(2)/ζ(2− 2r),
= 1 + 2
ζ ′(2)
log2X
. (2.19)
Finally we expand the ζ-piece. We have (see [Da]) that
ζ(1 + iy) =
+ γ +O(y), (2.20)
where γ is Euler’s constant. Thus
1− 4πiτ
= − logX
+ γ +O
. (2.21)
We combine the Taylor expansions for the three pieces (the ratio of the Gamma fac-
tors, the ζ-function and AD), and keep only the first two terms:
− logX
. (2.22)
Finally, we Taylor expand the d-sum, which was evaluated in Lemma B.2. We may
ignore the error term there because it is O(X1/2). The main term is
−2πi(1− log πlogX )τ
1− 2πiτ
= X∗e
−2πi(1− logπlogX )τ
log2X
(2.23)
R(g;X) =
X∗ logX
∫ logX
− log1/4 X
g(τ) ·X∗e
log π
log1/4 X
log2X
− logX
dτ +O
logBX
∫ log1/4 X
− log1/4 X
g(τ) · e−2πi(1−
log π
logX )τ ·
ζ ′(2)
log5/4X
. (2.24)
We may write
−2πi(1− logπlogX )τ = e−2πiτ ·
2πiτ log π
log2X
. (2.25)
The effect of this expansion is to change the 1/ logX term above by adding log π
Because g is a Schwartz function, we may extend the integration to all τ and absorb
the error into our error term. The main term is from (logX)/4πiτ ; it equals −g(0)/2
(see the analysis in §2.1). The secondary term is easily evaluated, as it is just the Fourier
transform of g at 1. Thus
R(g;X) = −g(0)
ζ′(2)
− 2γ + 2 log π
ĝ(1) +O
log5/4X
(2.26)
3. ANALYSIS OF THE TERMS FROM NUMBER THEORY
We now prove Theorem 1.2. The starting point is the Explicit Formula (Theorem
1.5, with each d an even fundamental discriminant). As the log(d/π) and the Γ′/Γ
terms already appear in the expansion from the Ratios Conjecture (Theorem 1.1), we
need only study the sums of χd(p)
k. The analysis splits depending on whether or not k
is even. Set
\[
S_{\rm even} = -\frac{2}{X^*}\sum_{d\le X}\sum_{p}\sum_{\ell=1}^{\infty}\frac{\chi_d(p)^{2}\log p}{p^{\ell}\log X}\,\hat g\!\left(\frac{2\log p^{\ell}}{\log X}\right)
\]
\[
S_{\rm odd} = -\frac{2}{X^*}\sum_{d\le X}\sum_{p}\sum_{\ell=0}^{\infty}\frac{\chi_d(p)\log p}{p^{(2\ell+1)/2}\log X}\,\hat g\!\left(\frac{\log p^{2\ell+1}}{\log X}\right). \qquad (3.1)
\]
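The following sketch (mine, not the paper's; the cutoff X = 3000 and the particular Fejér-type test function are illustrative choices, and this g is not actually Schwartz) evaluates S_even and S_odd of (3.1) numerically. One expects S_even to sit near −g(0)/2 = −1/2 up to terms of size 1/log X, while the character cancellation should keep S_odd comparatively small.

```python
# Evaluate S_even and S_odd of (3.1) for g(x) = (sin(pi x)/(pi x))^2, whose Fourier
# transform ghat(u) = max(1 - |u|, 0) is even and supported in (-1, 1).
from math import log
from sympy import primerange, jacobi_symbol

def squarefree_sieve(N):
    sf = [True] * (N + 1)
    k = 2
    while k * k <= N:
        for m in range(k * k, N + 1, k * k):
            sf[m] = False
        k += 1
    return sf

def even_fundamental_discriminants(X):
    sf = squarefree_sieve(X)
    ds = [d for d in range(5, X + 1, 4) if sf[d]]
    ds += [d for d in range(4, X + 1, 4) if (d // 4) % 4 in (2, 3) and sf[d // 4]]
    return ds

def chi(d, p):
    """Kronecker symbol chi_d(p) for p prime."""
    if p == 2:
        return 0 if d % 2 == 0 else (1 if d % 8 in (1, 7) else -1)
    return int(jacobi_symbol(d % p, p))

def ghat(u):
    return max(1.0 - abs(u), 0.0)

X = 3000
ds = even_fundamental_discriminants(X)
Xstar, logX = len(ds), log(X)

S_even = S_odd = 0.0
for p in primerange(2, X):
    lp = log(p)
    n_coprime = sum(1 for d in ds if d % p != 0)      # chi_d(p)^2 = 1 unless p | d
    ell = 1
    while 2 * ell * lp < logX:                        # ghat vanishes otherwise
        S_even -= (2.0 / Xstar) * n_coprime * lp / (p**ell * logX) * ghat(2 * ell * lp / logX)
        ell += 1
    char_sum = sum(chi(d, p) for d in ds)
    ell = 0
    while (2 * ell + 1) * lp < logX:
        S_odd -= (2.0 / Xstar) * char_sum * lp / (p**((2 * ell + 1) / 2.0) * logX) \
                 * ghat((2 * ell + 1) * lp / logX)
        ell += 1

print(f"X = {X}, X* = {Xstar}, S_even = {S_even:.4f}, S_odd = {S_odd:.4f}")
```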
Based on our analysis of the terms from the Ratios Conjecture, the proof of Theorem
1.2 is completed by the following lemma.
Lemma 3.1. Let supp(ĝ) ⊂ (−σ, σ) ⊂ (−1, 1). Then
\[
S_{\rm even} = -\frac{g(0)}{2}+\frac{2}{\log X}\int_{-\infty}^{\infty}g(\tau)\left[\frac{\zeta'}{\zeta}\!\left(1+\frac{4\pi i\tau}{\log X}\right)+A_D'\!\left(\frac{2\pi i\tau}{\log X};\frac{2\pi i\tau}{\log X}\right)\right]d\tau+O\!\left(X^{-\frac12+\epsilon}\right)
\]
\[
S_{\rm odd} = O\!\left(X^{-\frac{1-\sigma}{2}}\log^{6}X\right). \qquad (3.2)
\]
If instead we consider the family of characters χ8d for odd, positive square-free d ∈
(0, X) (d a fundamental discriminant), then
\[
S_{\rm odd} = O\!\left(X^{-\frac12+\epsilon}+X^{-\left(1-\frac{3\sigma}{2}\right)+\epsilon}\right). \qquad (3.3)
\]
We prove Lemma 3.1 by analyzing Seven in §3.1 (in Lemmas 3.2 and 3.3) and Sodd
in §3.2 (in Lemmas 3.4, 3.5 and 3.6).
3.1. Contribution from k even. The contribution from k even from the Explicit For-
mula is
\[
S_{\rm even} = -\frac{2}{X^*}\sum_{d\le X}\sum_{p}\sum_{\ell=1}^{\infty}\frac{\chi_d(p)^{2}\log p}{p^{\ell}\log X}\,\hat g\!\left(\frac{2\log p^{\ell}}{\log X}\right), \qquad (3.4)
\]
where $\sum_{d\le X}1 = X^*$, the cardinality of our family. Each χ_d(p)² = 1 except when p | d.
We replace χ_d(p)² with 1, and subtract off the contribution from when p | d. We find
\[
S_{\rm even} = -2\sum_{p}\sum_{\ell=1}^{\infty}\frac{\log p}{p^{\ell}\log X}\,\hat g\!\left(\frac{2\log p^{\ell}}{\log X}\right)
+\frac{2}{X^*}\sum_{d\le X}\sum_{p\mid d}\sum_{\ell=1}^{\infty}\frac{\log p}{p^{\ell}\log X}\,\hat g\!\left(\frac{2\log p^{\ell}}{\log X}\right)
= S_{{\rm even};1}+S_{{\rm even};2}. \qquad (3.5)
\]
In the next subsections we prove the following lemmas, which completes the analysis
of the even k terms.
Lemma 3.2. Notation as above,
\[
S_{{\rm even};1} = -\frac{g(0)}{2}+\frac{2}{\log X}\int_{-\infty}^{\infty}g(\tau)\,\frac{\zeta'}{\zeta}\!\left(1+\frac{4\pi i\tau}{\log X}\right)d\tau. \qquad (3.6)
\]
Lemma 3.3. Notation as above,
\[
S_{{\rm even};2} = \frac{2}{\log X}\int_{-\infty}^{\infty}g(\tau)\,A_D'\!\left(\frac{2\pi i\tau}{\log X};\frac{2\pi i\tau}{\log X}\right)d\tau+O\!\left(X^{-\frac12+\epsilon}\right). \qquad (3.7)
\]
3.1.1. Analysis of Seven;1.
Proof of Lemma 3.2. We have
Seven;1 =
. (3.8)
We use Perron’s formula to re-write Seven;1 as a contour integral. For any ǫ > 0 set
ℜ(z)=1+ǫ
(2z − 2) logA
dz; (3.9)
we will later take A = X1/2. We write z = 1 + ǫ + iy and use (A.6) (replacing φ with
g) to write g(x+ iy) in terms of the integral of ĝ(u). We have
y logA
− iǫ logA
e−iy lognidy
ĝ(u)eǫu logA
e−2πi
−y logA
e−iy logndy. (3.10)
We let hǫ(u) = ĝ(u)e
ǫu logA. Note that hǫ is a smooth, compactly supported function
hǫ(w) = hǫ(−w). Thus
−y logA
e−iy logndy
ĥǫ(y)e
−y log n
− log n
log n
eǫ logn
log n
. (3.11)
By taking A = X1/2 we find
Seven;1 =
= −I1. (3.12)
We now re-write I1 by shifting contours; we will not pass any poles as we shift. For
each δ > 0 we consider the contour made up of three pieces: (1 − i∞, 1 − iδ], Cδ,
and [1 − iδ, 1 + i∞), where Cδ = {z : z − 1 = δeiθ, θ ∈ [−π/2, π/2]} is the semi-
circle going counter-clockwise from 1− iδ to 1+ iδ. By Cauchy’s residue theorem, we
may shift the contour in I1 from ℜ(z) = 1 + ǫ to the three curves above. Noting that∑
nΛ(n)n
−z = −ζ ′(z)/ζ(z), we find that
[∫ 1−iδ
∫ 1+i∞
(2z − 2) logA
−ζ ′(z)
. (3.13)
The integral over Cδ is easily evaluated. As ζ(s) has a pole at s = 1, it is just half the
residue of g
(2z−2) logA
(the minus sign in front of ζ ′(z)/ζ(z) cancels the minus sign
from the pole). Thus the Cδ piece is g(0)/2. We now take the limit as δ → 0:
− lim
[∫ −δ
y logA
ζ ′(1 + iy)
ζ(1 + iy)
. (3.14)
As g is an even Schwartz function, the limit of the integral above is well-defined (for
large y this follows from the decay of g, while for small y it follows from the fact that
ζ ′(1+ iy)/ζ(1+ iy) has a simple pole at y = 0 and g is even). We again takeA = X1/2,
and change variables to τ = y logA
= y logX
. Thus
dτ, (3.15)
which completes the proof of Lemma 3.2. �
3.1.2. Analysis of Seven;2.
Proof of Lemma 3.3. Recall
Seven;2 =
log p
pℓ logX
log pℓ
. (3.16)
We may restrict the prime sum to p ≤ X1/2 at a cost of O(log logX/X). We sketch
the proof of this claim. Since ĝ has finite support, p ≤ Xσ and thus the p-sum is finite.
Since d ≤ X and p ≥ X1/2, there are at most 2 primes which divide a given d. Thus
p=X1/2
log p
pℓ logX
log pℓ
p=X1/2
p>X1/2
≪ log logX
.(3.17)
In Lemma B.1 we show that
\[
X^* = \frac{3}{\pi^2}X+O(X^{1/2}) \qquad (3.18)
\]
and that for p ≤ X^{1/2} we have
\[
\sum_{\substack{d\le X\\ p\mid d}}1 = \frac{X^*}{p+1}+O(X^{1/2}). \qquad (3.19)
\]
Using these facts we may complete the analysis of Seven;2:
Seven;2 =
p≤X1/2
log p
pℓ logX
log pℓ
log logX
p≤X1/2
log p
pℓ logX
log pℓ
d≤X, p|d
log logX
p≤X1/2
log p
pℓ logX
log pℓ
p≤X1/2
log logX
p≤X1/2
log p
pℓ logX
log pℓ
+O(X−
+ǫ). (3.20)
We re-write ĝ(2 log pℓ/ logX) by expanding the Fourier transform.
Seven;2 = 2
p≤X1/2
log p
(p+ 1)pℓ logX
g(τ)e−2πiτ ·2 log p
ℓ/ logXdτ +O(X−
p≤X1/2
log p
(p+ 1) logX
p−ℓ · p−2πiτ ·2ℓ/ logXdτ +O(X−
p≤X1/2
log p
(p+ 1) logX
g(τ)p
−(1+2· 2πiτ
1− p−(1+2·
dτ +O(X−
(3.21)
We may extend the p-sum to be over all primes at a cost of O(X−1/2+ǫ); this is because
the summands are O(log p/p2) and g is Schwartz. Recalling the definition of A′D(r; r)
in (1.4), we see that the resulting p-sum is just A′D(2πiτ/ logX ; 2πiτ/ logX); this
completes the proof of Lemma 3.3. �
3.2. Contribution from k odd. As k is odd, χ_d(p)^k = χ_d(p). Thus we must analyze
the sum
Sodd = −
χd(p) log p
p(2ℓ+1)/2 logX
log p2ℓ+1
. (3.22)
If supp(ĝ) ⊂ (−1, 1), Rubinstein [Rub1] showed (by applying Jutila’s bound [Ju1,
Ju2, Ju3] for quadratic character sums) that if our family is all discriminants then
Sodd = O(X
−ǫ/2). In his dissertation Gao [Gao] extended these results to show that
the odd terms do not contribute to the main term provided that supp(ĝ) ⊂ (−2, 2). His
analysis proceeds by using Poisson summation to convert long character sums to shorter
ones. We shall analyze Sodd using both methods: Jutila’s bound gives a self-contained
presentation, but a much weaker result; the Poisson summation approach gives a better
bound but requires a careful book-keeping of many of Gao’s lemmas (as well as an
improvement of one of his estimates).
3.2.1. Analyzing Sodd with Jutila’s bound.
Lemma 3.4. Let supp(ĝ) ⊂ (−σ, σ). Then $S_{\rm odd} = O\!\left(X^{-\frac{1-\sigma}{2}}\log^{6}X\right)$.
Proof. Jutila’s bound (see (3.4) of [Ju3]) is
\[
\sum_{\substack{1<n\le N\\ n\ \text{non-square}}}\;\Bigg|\sum_{\substack{0<d\le X\\ d\ \text{fund.\ disc.}}}\chi_d(n)\Bigg|^{2} \;\ll\; NX\log^{10}N \qquad (3.23)
\]
(note the d-sum is over even fundamental discriminants at most X). As 2ℓ + 1 is odd,
p2ℓ+1 is never a square. Thus Jutila’s bound gives
p(2ℓ+1)/2≤Xσ
∣∣∣∣∣
χd(p)
∣∣∣∣∣
2 log5X. (3.24)
Recall
Sodd = −
log p
p(2ℓ+1)/2 logX
log p2ℓ+1
χd(p). (3.25)
We apply Cauchy-Schwartz, and find
|Sodd| ≤
p2ℓ+1≤Xσ
log p
p(2ℓ+1)/2 logX
log p2ℓ+1
)∣∣∣∣
p2ℓ+1≤Xσ
∣∣∣∣∣
χd(p)
∣∣∣∣∣
2 log5X
2 log6X ; (3.26)
thus there is a power savings if σ < 1. �
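For a concrete feel for the cancellation that Jutila's bound quantifies on average, the short sketch below (my own illustration, not from the paper; the cutoff X and the primes sampled are arbitrary) computes Σ_{d≤X} χ_d(p) over even fundamental discriminants for a few odd primes; the sums come out far smaller than the trivial bound X*.

```python
# Character sums over even fundamental discriminants for a few odd primes p.
from sympy import primerange, jacobi_symbol

def even_fundamental_discriminants(X):
    def sqfree(n):
        k = 2
        while k * k <= n:
            if n % (k * k) == 0:
                return False
            k += 1
        return True
    return [d for d in range(5, X + 1, 4) if sqfree(d)] + \
           [d for d in range(4, X + 1, 4) if (d // 4) % 4 in (2, 3) and sqfree(d // 4)]

X = 5000
ds = even_fundamental_discriminants(X)
print("X* =", len(ds))
for p in primerange(3, 60):
    s = sum(int(jacobi_symbol(d % p, p)) for d in ds)   # chi_d(p) for odd prime p
    print(f"p = {p:3d}: sum_d chi_d(p) = {s:5d}")
```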
3.2.2. Analyzing Sodd with Poisson Summation.
Gao analyzes the contribution from Sodd by applying Poisson summation to the char-
acter sums. The computations are simplified if the character χ2(n) =
is not present.
He therefore studies the family of odd, positive square-free d (where d is a fundamental
discriminant). His family is
{8d : X < d ≤ 2X, d an odd square-free fundamental discriminant}; (3.27)
we discuss in Lemma 3.6 how to easily modify the arguments to handle the related
family with 0 < d ≤ X . The calculation of the terms from the Ratios Conjecture
proceeds similarly (the only modification is to X∗, which also leads to a trivial mod-
ification of Lemma B.2 which does not change any terms larger than O(X−1/2+ǫ) if
supp(ĝ) ⊂ (−1/3, 1/3)), as does the contribution from χ(p)k with k even. We are left
with bounding the contribution from Sodd. The following lemma shows that we can
improve on the estimate obtained by applying Jutila’s bound.
Lemma 3.5. Let supp(ĝ) ⊂ (−σ, σ) ⊂ (−1, 1). Then for the family given in (3.27),
$S_{\rm odd} = O\!\left(X^{-\frac12+\epsilon}+X^{-\left(1-\frac{3\sigma}{2}\right)+\epsilon}\right)$. In particular, if σ < 1/3 then $S_{\rm odd} = O(X^{-1/2+\epsilon})$.
Proof. Gao is only concerned with main terms for the n-level density (for any n) for
all sums. As we only care about Sodd for the 1-level density, many of his terms are not
present. We highlight the arguments. We concentrate on the ℓ = 0 term in (3.22) (the
other ℓ ≪ logX terms are handled similarly, and the finite support of ĝ implies that
Sodd(ℓ) = 0 for ℓ≫ logX):
Sodd = −
χd(p) log p
p(2ℓ+1)/2 logX
log p2ℓ+1
Sodd(ℓ). (3.28)
Let Y = Xσ, where supp(ĝ) ⊂ (−σ, σ). Our sum Sodd(0) is S(X, Y, ĝ) in Gao’s
thesis:
S(X, Y, ĝ) =
X<d<2X
(2,d)=1
µ(d)2
log p
χ8d(p)ĝ
log p
. (3.29)
Let Φ be a smooth function supported on (1, 2) such that Φ(t) = 1 for t ∈ (1 +
U−1, 2 − U−1) and Φ(j)(t) ≪j U j for all j ≥ 0. We show that S(X, Y, ĝ) is well
approximated by the smoothed sum S(X, Y, ĝ,Φ), where
S(X, Y, ĝ,Φ) =
(d,2)=1
µ(d)2
log p
χ8d(p)ĝ
log p
. (3.30)
To see this, note the difference between the two involves summing d ∈ (X,X +X/U)
and d ∈ (2X−X/U, 2X). We trivially bound the prime sum for each fixed d by log7X
(see Proposition III.1 of [Gao]). As there are O(X/U) choices of d and Φ(d/X) ≪ 1,
we have
S(X, Y, ĝ)− S(X, Y, ĝ,Φ) ≪ X log
. (3.31)
We will take U =
X . Thus upon dividing by X∗ ≫ X (the cardinality of the family),
this difference is O(X−1/2+ǫ). The proof is completed by bounding S(X, Y, ĝ,Φ).
To analyze S(X, Y, ĝ,Φ), we write it as SM(X, Y, ĝ,Φ) + SR(X, Y, ĝ,Φ), with
SM(X, Y, ĝ,Φ) =
(d,2)=1
MZ(d)
log p
χ8d(p)ĝ
log p
SR(X, Y, ĝ,Φ) =
(d,2)=1
RZ(d)
log p
χ8d(p)ĝ
log p
, (3.32)
where
µ(d)2 = MZ(d) +RZ(d)
MZ(d) =
µ(ℓ), RZ(d) =
µ(ℓ); (3.33)
here Z is a parameter to be chosen later, and SM(X, Y, ĝ,Φ) will be the main term (for
a general n-level density sum) and SR(X, Y, ĝ,Φ) the error term. In our situation, both
will be small.
In Lemma III.2 of [Gao], Gao proves that SR(X, Y, ĝ,Φ) ≪ (X log3X)/Z. We
haven’t divided any of our sums by the cardinality of the family (which is of size X).
Thus for this term to yield contributions of size X−1/2+ǫ, we need Z ≥ X1/2.
We now analyze SM(X, Y, ĝ,Φ). Applying Poisson summation we convert long char-
acter sums to short ones. We need certain Gauss-type sums:
1 + i
Gm(k) =
a mod k
e2πiam/k . (3.34)
For a Schwartz function F let
F̃ (ξ) =
1 + i
F̂ (ξ) +
F̂ (−ξ). (3.35)
Using Lemma 2.6 of [So], we have (see page 32 of [Gao])
SM(X, Y, ĝ,Φ) =
2<p<Y
log p
log p
(α,2p)=1
(−1)mGm(p)Φ̃
. (3.36)
We follow the arguments in Chapter 3 of [Gao]. The m = 0 term is analyzed in §3.3
for the general n-level density calculations. It is zero if n is odd, and we do not need
to worry about this error term (thus we do not see the error terms of size X logn−1X
or (X lognX)/Z which appear in his later estimates). In §3.4 he analyzes the contri-
butions from the non-square m in (3.36). In his notation, we have k = 1, k2 = 0,
k1 = 0, α1 = 1 and α0 = 0, and these terms’ contribution is ≪ (U2Z
Y log7X)/X
(remember we haven’t divided by the cardinality of the family, which is of order X).
This is too large for our purposes (we have seen that we must take U = Z =
and Y = Xσ). We perform a more careful analysis of these terms in Appendix C, and
bound these terms’ contribution by
Y log7X
UZY 3/2 log4X
Z3U2Y 7/2 log4X
X4018−2ǫ
. (3.37)
Lastly, we must analyze the contribution from m a square in (3.36). From Lemma
III.3 of [Gao] we have that Gm(p) = 0 if p|m. If p |r m and m is a square, then
Gm(p) =
p. Arguing as in [Gao], we are left with
(p,2)=1
log p
log p
(α,2p)=1
(−1)mΦ̃
(−1) emΦ̃
p2m̃2X
(3.38)
If we assume supp(ĝ) ⊂ (−1, 1), then arguing as on page 41 of [Gao] we find the
m-sum above is ≪ α
p/X , which leads to a contribution ≪
Y/X logX logZ; the
m̃-sum is ≪ α/
pX and is thus dominated by the contribution from the m-sum.
Collecting all our bounds, we see a careful book-keeping leads to smaller errors than
in §3.6 of [Gao] (this is because (1) many of the error terms only arise from n-level
density sums with n even, where there are main terms and (2) we did a more careful
analysis of some of the errors). We find that
S(X, Y, ĝ,Φ) ≪ X log
Y log7X
UZY 3/2 log4X
Y logX logZ√
(3.39)
We divide this by X∗ ≫ X (the cardinality of the family). By choosing Z = X1/2,
Y = Xσ with σ < 1, and U =
X (remember we need such a large U to handle
the error from smoothing the d-sum, i.e., showing |S(X, Y, ĝ) − S(X, Y, ĝ,Φ)|/X ≪
X−1/2+ǫ), we find
S(X, Y, ĝ, Φ)/X \ll X^{-1/2+\epsilon}+X^{-\left(1-\frac{3\sigma}{2}\right)+\epsilon}, \qquad (3.40)
which yields
S_{\rm odd} \ll X^{-1/2+\epsilon}+X^{-\left(1-\frac{3\sigma}{2}\right)+\epsilon}. \qquad (3.41)
Note that if σ < 1/3 then Sodd ≪ X−1/2+ǫ. �
Lemma 3.6. Let supp(ĝ) ⊂ (−σ, σ) ⊂ (−1, 1). Then for the family
{8d : 0 < d ≤ X, d an odd square-free fundamental discriminant} (3.42)
we have $S_{\rm odd} = O\!\left(X^{-1/2+\epsilon}+X^{-\left(1-\frac{3\sigma}{2}\right)+\epsilon}\right)$. In particular, if σ < 1/3 then Sodd =
O(X−1/2+ǫ).
Proof. As the calculation is standard, we merely sketch the argument. We write
(0, X ] =
log2 X⋃
. (3.43)
Let Xi = X/2
i. For each i, in Lemma 3.5 we replace most of the X’s with Xi, U
with U/
2i, Z with Z/
2i; the X’s we don’t replace are the cardinality of the family
(which we divide by in the end) and the logX which occurs when we evaluate the test
function ĝ at log p/ logX . We do not change Y , which controls the bounds for the
prime sum. As we do not have any main terms, there is no loss in scaling the prime
sums by logX instead of logXi. We do not use much about the test function ĝ in our
estimates. All we use is that the prime sums are restricted to p < Y , and therefore we
will still have bounds of Y (to various powers) for our sums.
We now finish the book-keeping. Expressions such as UZ/X in (3.39) are stillO(1),
and expressions such as X/U and X/Z are now smaller. When we divide by the cardi-
nality of the family we still have terms such as Y 3/2/X , and thus the support require-
ments are unchanged (i.e., $S_{\rm odd} \ll X^{-1/2+\epsilon}+X^{-\left(1-\frac{3\sigma}{2}\right)+\epsilon}$). □
APPENDIX A. THE EXPLICIT FORMULA
We quickly review some needed facts about Dirichlet characters; see [Da] for details.
Let χd be a primitive quadratic Dirichlet character of modulus |d|. Let c(d, χd) be the
Gauss sum
c(d, χd) =
χd(k)e
2πik/d, (A.1)
which is of modulus
d. Let
L(s, χd) =
1− χd(p)p−s
(A.2)
be the L-function attached to χd; the completed L-function is
Λ(s, χd) = π
−(s+a)/2Γ
d−(s+a)/2L(s, χd) = (−1)a
c(d, χd)√
Λ(1− s, χd),
(A.3)
where
a = a(χd) =
0 if χd(−1) = 1
1 if χd(−1) = −1.
(A.4)
We write the zeros of Λ(s, χd) as
+ iγ; if we assume GRH then γ ∈ R. Let φ be an
even Schwartz function and φ̂ be its Fourier transform (φ̂(ξ) =
φ(x)e−2πixξdx); we
often assume supp(φ̂) ⊂ (−σ, σ) for some σ <∞. We set
H(s) = φ
. (A.5)
While H(s) is initially define only when ℜ(s) = 1/2, because of the compact support
of φ̂ we may extend it to all of C:
φ(x) =
φ̂(ξ)e2πixξdξ
φ(x+ iy) =
φ̂(ξ)e2πi(x+iy)ξdξ
H(x+ iy) =
φ̂(ξ)e2π(x−
· e2πiyξdξ. (A.6)
Note that H(x+ iy) is rapidly decreasing in y (for a fixed x it is the Fourier transform
of a nice function, and thus the claim follows from the Riemann-Lebesgue lemma). We
now derive the Explicit Formula for quadratic characters; note the functional equation
will always be even. We follow the argument given in [RS].
Proof of the Explicit Formula, Theorem 1.5. We have
Λ(s, χ) = π−(s+a)/2Γ
d(s+a)/2L(s, χd) = Λ(1− s, χd)
Λ′(s, χd)
Λ(s, χd)
log π
log d
L′(s, χd)
L(s, χd)
L′(s, χd)
L(s, χd)
χd(p) log p
1− χd(p)p−s
χd(p)
k log p
. (A.7)
We will not approximate any terms; we are keeping all lower order terms to facilitate
comparison with the L-functions Ratios Conjecture. We set
ℜ(s)=3/2
Λ′(s, χd)
Λ(s, χd)
H(s)ds. (A.8)
We shift the contour to ℜ(s) = −1/2. We pick up contributions from the zeros and
poles of Λ(s, χd). As χd is not the principal character, there is no pole from L(s, χd).
There is also no need to worry about a zero or pole from the Gamma factor Γ
L(1, χd) 6= 0. Thus the only contribution is from the zeros of Λ(s, χd); the residue at a
zero 1
+ iγ is φ(γ). Therefore
φ(γ) +
ℜ(s)=−1/2
Λ′(s, χd)
Λ(s, χd)
H(s)ds. (A.9)
As Λ(1− s, χd) = Λ(s, χd), −Λ′(1− s, χd) = Λ(s, χd) and
φ(γ)−
ℜ(s)=−1/2
Λ′(1− s, χd)
Λ(1− s, χd)
H(s)ds. (A.10)
We change variables (replacing s with 1− s), and then use the functional equation:
φ(γ)−
ℜ(s)=3/2
Λ′(s, χd)
Λ(s, χd)
H(1− s)ds. (A.11)
Recalling the definition of I gives
φ(γ) =
ℜ(s)=3/2
Λ′(s, χd)
Λ(s, χd)
[H(s) +H(1− s)] ds. (A.12)
We expandΛ′(s, χd)/Λ(s, χd) and shift the contours of all terms exceptL
′(s, χd)/L(s, χd)
to ℜ(s) = 1/2 (this is permissible as we do not pass through any zeros or poles of the
other terms); note that if s = 1
+ iy then H(s) = H(1 − s) = φ(y) (φ is even).
Expanding the logarithmic derivative of Λ(s, χd) gives
φ(γ) =
φ(y)dy
ℜ(s)=3/2
L′(s, χd)
L(s, χd)
· [H(s) +H(1− s)] ds
φ(y)dy
ℜ(s)=3/2
L′(s, χd)
L(s, χd)
· [H(s) +H(1− s)] ds, (A.13)
where the last line follows from the fact that φ is even.
We use (A.7) to expand L′/L. In the arguments below we shift the contour to ℜs =
1/2; this is permissible because of the compact support of φ̂ (see (A.6)):
ℜ(s)=3/2
(s+ iy) · [H (s) +H (1− s)] dy
χd(p)
k log p
ℜ(s)=3/2
[H (s) +H (1− s)] e−ks log pdy
χd(p)
k log p
φ(y)e−2πiy·
log pk
2π dy
= − 2
χd(p)
k log p
log pk
. (A.14)
We therefore find that
φ(γ) =
φ(y)dy
χd(p)
k log p
log pk
. (A.15)
We replace φ(x) with g(x) = φ
x · logX
. A standard computation gives ĝ(ξ) =
ξ · 2π
. Summing over d ∈ F(X) completes the proof. �
APPENDIX B. SUMS OVER FUNDAMENTAL DISCRIMINANTS
Lemma B.1. Let d denote an even fundamental discriminant at most X, and set $X^* = \sum_{d\le X}1$. Then
\[
X^* = \frac{3}{\pi^2}X+O(X^{1/2}) \qquad (B.1)
\]
and for p ≤ X^{1/2} we have
\[
\sum_{\substack{d\le X\\ p\mid d}}1 = \frac{X^*}{p+1}+O(X^{1/2}). \qquad (B.2)
\]
Proof. We first prove the claim forX∗, and then indicate how to modify the proof when
p|d. We could show this by recognizing certain products as ratios of zeta functions or
by using a Tauberian theorem; instead we shall give a straightforward proof suggested
to us by Tim Browning (see also [OS1]).
We first assume that d ≡ 1 mod 4, so we are considering even fundamental discrim-
inants {d ≤ X : d ≡ 1 mod 4, µ(d)2 = 1}; it is trivial to modify the arguments below
for d such that d/4 ≡ 2 or 3 modulo 4 and µ(d/4)2 = 1. Let χ4(n) be the non-trivial
character modulo 4: χ4(2m) = 0 and
χ4(n) =
1 if n ≡ 1 mod 4
0 if n ≡ 3 mod 4.
(B.3)
We have
S(X) =
µ(d)2=1, d≡1 mod 4
µ(d)2 ·
1 + χ4(d)
µ(d)2 +
µ(d)2χ4(d) = S1(X) + S2(X). (B.4)
By Möbius inversion
µ(m) =
1 if d is square-free
0 otherwise.
(B.5)
S1(X) =
m≤X1/2
µ(m) ·
d ≤ X/m2
m≤X1/2
+O(1)
+O(X1/2)
·X +O(X1/2)
X +O(X1/2) (B.6)
(because we are missing the factor corresponding to 2 in 1/ζ(2) above). Arguing in a
similar manner shows S2(X) = O(X
1/2); this is due to the presence of χ4, giving us
S2(X) =
m≤X1/2
2)µ(m)
d≤X/m2
χ4(d) ≪ X1/2 (B.7)
(because we are summing χ4 at consecutive integers, and thus this sum is at most 1). A
similar analysis shows that the number of even fundamental discriminants d ≤ X with
d/4 ≡ 2 or 3 modulo 4 is X/π2 +O(X1/2). Thus
d an even fund. disc.
1 = X∗ =
X +O(X1/2). (B.8)
We may trivially modify the above calculations to determine the number of even
fundamental discriminants d ≤ X with p|d for a fixed prime p. We first assume p ≡
1 mod 4. In (B.4) we replace µ(d)2 with µ(pd)2, d ≤ X with d ≤ X/p, 2 |r d and
(2p, d) = 1. These imply that d ≤ X , p|d and p2 does not divide d. As d and p are
relatively prime, µ(pd) = µ(p)µ(d) and the main term becomes
S1;p(X) =
d≤X/p
(2p,d)=1
m≤(X/p)1/2
(2p,m)=1
µ(m) ·
d ≤ (X/p)/m2
(2p,d)=1
m≤(X/p)1/2
(2p,m)=1
· p− 1
+ O(1)
(p− 1)X
(2p,m)=1
+O(X1/2)
· (p− 1)X
+O(X1/2)
(p+ 1)π2
+O(X1/2), (B.9)
and the cardinality of this piece is reduced by (p + 1)−1 (note above we used #{n ≤
Y : (2p, n) = 1} = p−1
Y + O(1)). A similar analysis holds for S2;p(X), as well as the
even fundamental discriminants d with d/4 ≡ 2 or 3 modulo 4).
We need to trivially modify the above arguments if p ≡ 3 mod 4. If for instance
we require d ≡ 1 mod 4 then instead of replacing µ(d)2 with µ(d)2(1 + χ4(d))/2 we
replace it with µ(pd)2(1− χ4(d))/2, and the rest of the proof proceeds similarly.
It is a completely different story if p = 2. Note if d ≡ 1 mod 4 then 2 never divides
d, while if d/4 ≡ 2 or 3 modulo 4 then 2 always divides d. There are 3X/π2+ o(X1/2)
even fundamental discriminants at most X , and X/π2 + O(x1/2) of these are divisible
by 2. Thus, if our family is all even fundamental discriminants, we do get the factor of
1/(p+ 1) for p = 2, as one-third (which is 1/(2 + 1) of the fundamental discriminants
in this family are divisible by 2. �
In our analysis of the terms from the L-functions Ratios Conjecture, we shall need a
partial summation consequence of Lemma B.1.
Lemma B.2. Let d denote an even fundamental discriminant at most X and $X^* = \sum_{d\le X}1$, and let $z = \tau - i\,\frac{w\log X}{2\pi}$ with w ≥ 1/2. Then
\[
\sum_{d\le X}e^{-2\pi iz\frac{\log(d/\pi)}{\log X}} = X^*\,\frac{e^{-2\pi i\left(1-\frac{\log\pi}{\log X}\right)z}}{1-\frac{2\pi iz}{\log X}}+O(\log X). \qquad (B.10)
\]
Proof. By Lemma B.1 we have
+O(u1/2). (B.11)
Therefore by partial summation we have
−2πiz
log(d/π)
log π
d−2πiz/ logX
log π
3X +O(X1/2)
− 2πiz
logX −
∫ X (3u
+O(u1/2)
−2πiz
(B.12)
As we are assuming w ≥ 1/2, the first error term is of size O(X1/2X−w) = O(1).
The second error term (from the integral) is O(logX) for such w. This is because the
integration begins at 1 and the integrand is bounded by u−
−w. Thus
−2πiz
log(d/π)
log π
e−2πiz +
· 2πiz
u−2πiz/ logXdu
+O(logX)
log π
e−2πiz +
· 2πiz
X1−2πiz/ logX
1− 2πiz/ logX
+O(logX)
= X∗e
logX e−2πiz
+O(logX)
= X∗e
−2πi(1− log πlogX )z
1− 2πiz
+O(logX). (B.13)
APPENDIX C. IMPROVED BOUND FOR NON-SQUARE m TERMS IN SM(X, Y, ĝ,Φ)
Gao [Gao] proves that the non-squarem-terms contribute ≪ (U2Z
Y log7X)/X to
SM(X, Y, ĝ,Φ). As this bound is just a little too large for our applications, we perform
a more careful analysis below. Denoting the sum of interest by R,
(α,2)=1
(2α,p)=1
log p
log p
m6=0,✷
(−1)mΦ̃
, (C.14)
Gao shows that
log3X
(R1 +R2 +R3), (C.15)
R1, R2 ≪
Y log4X
Y log7X
. (C.16)
The bounds for R1 and R2 suffice for our purpose, leading to contributions bounded by
Y log4X)/X; however, the R3 bound gives too crude a bound – we need to save
a power of U .
We have (see page 36 of [Gao], with k = 1, k2 = 0, k1 = 0, α1 = 1 and α0 = 0) that
α2V 5/2
log p
(log3m)mΦ̃′
2α2pV
dV. (C.17)
We have (see (3.10) of [Gao]) that
Φ̃′(ξ) ≪ U j−1|ξ|−j for any integer j ≥ 1. (C.18)
Letting M = X2008, we break the m-sum in R3 into m ≤M and m > M . For m ≤M
we use (C.18) with j = 2 while for m > M we use (C.18) with j = 3. (Gao uses j = 3
for all m. While we save a bit for small m by using j = 2, we cannot use this for all m
as the resulting m sum does not converge.)
Thus the small m contribute
α2V 5/2
log p
(log3m)m
U22α4p2V 2
log p
log3m
UY 3/2α2 log4X
(C.19)
(since M = X2008 the m-sum is O(log4X)). The large m contribute
α2V 5/2
log p
(log3m)m
U223α6p3V 3
p log p
log3m
V 1/2dV
α4U2Y 3/2Y 2 logX
X2M2−ǫ
. (C.20)
For our choices of U , Y and Z, the contribution from the large m will be negligible
(due to the M2−ǫ = X4016−2ǫ in the denominator). Thus for these choices
log3X
(R1 +R2 +R3)
Y log7X
UZY 3/2 log4X
Z3U2Y 7/2 log4X
X4018−2ǫ
. (C.21)
The last term is far smaller than the first two. In the first term we save a power of U from
Gao’s bound, and in the second we replace U with Y . As Y = Xσ, for σ sufficiently
small there is a significant savings.
REFERENCES
[Be] M. V. Berry, Semiclassical formula for the number variance of the Riemann zeros, Nonlin-
earity 1 (1988), 399–407.
[BeKe] M. V. Berry and J. P. Keating, The Riemann zeros and eigenvalue asymptotics, Siam Review
41 (1999), no. 2, 236–266.
[BBLM] E. Bogomolny, O. Bohigas, P. Leboeuf and A. G. Monastra, On the spacing distribution of
the Riemann zeros: corrections to the asymptotic result, Journal of Physics A: Mathematical
and General 39 (2006), no. 34, 10743–10754.
[BoKe] E. B. Bogomolny and J. P. Keating, Gutzwiller’s trace formula and spectral statistics: be-
yond the diagonal approximation, Phys. Rev. Lett. 77 (1996), no. 8, 1472–1475.
[CF] B. Conrey and D. Farmer, Mean values of L-functions and symmetry, Internat. Math. Res.
Notices 2000, no. 17, 883–908.
[CFKRS] B. Conrey, D. Farmer, P. Keating, M. Rubinstein and N. Snaith, Integral moments of L-
functions, Proc. London Math. Soc. (3) 91 (2005), no. 1, 33–104.
[CFZ1] J. B. Conrey, D. W. Farmer and M. R. Zirnbauer, Autocorrelation of ratios of L-functions,
preprint. http://arxiv.org/abs/0711.0718
[CFZ2] J. B. Conrey, D. W. Farmer and M. R. Zirnbauer, Howe pairs, supersymmetry, and ra-
tios of random characteristic polynomials for the classical compact groups, preprint.
http://arxiv.org/abs/math-ph/0511024
[CS1] J. B. Conrey and N. C. Snaith, Applications of the L-functions Ratios Conjecture, Proc. Lon.
Math. Soc. 93 (2007), no 3, 594–646.
[CS2] J. B. Conrey and N. C. Snaith, Triple correlation of the Riemann zeros, preprint.
http://arxiv.org/abs/math/0610495
[Da] H. Davenport, Multiplicative Number Theory, 2nd edition, Graduate Texts in Mathematics
74, Springer-Verlag, New York, 1980, revised by H. Montgomery.
[DHKMS] E. Dueñez, D. K. Huynh, J. P. Keating, S. J. Miller and N. C. Snaith, work in progress.
[DM1] E. Dueñez and S. J. Miller, The low lying zeros of a GL(4) and a GL(6) family of L-
functions, Compositio Mathematica 142 (2006), no. 6, 1403–1425.
[DM2] E. Dueñez and S. J. Miller, The effect of convolving families of L-functions on the underlying
group symmetries, preprint. http://arxiv.org/abs/math/0607688
[ET] A. Erdélyi and F. G. Tricomi, The asymptotic expansion of a ratio of gamma functions,
Pacific J. Math. 1 (1951), no. 1, 133–142.
[FI] E. Fouvry and H. Iwaniec, Low-lying zeros of dihedral L-functions, Duke Math. J. 116
(2003), no. 2, 189-217.
[Gao] P. Gao, N -level density of the low-lying zeros of quadratic Dirichlet L-functions, Ph. D
thesis, University of Michigan, 2005.
[Gü] A. Güloğlu, Low-Lying Zeros of Symmetric Power L-Functions, Internat. Math. Res. No-
tices 2005, no. 9, 517-550.
[HW] G. Hardy and E. Wright, An Introduction to the Theory of Numbers, fifth edition, Oxford
Science Publications, Clarendon Press, Oxford, 1995.
[Hej] D. Hejhal, On the triple correlation of zeros of the zeta function, Internat. Math. Res. Notices
1994, no. 7, 294-302.
[HM] C. Hughes and S. J. Miller, Low-lying zeros of L-functions with orthogonal symmtry, Duke
Math. J., 136 (2007), no. 1, 115–172.
[HR] C. Hughes and Z. Rudnick, Linear Statistics of Low-Lying Zeros of L-functions, Quart. J.
Math. Oxford 54 (2003), 309–333.
[HKS] D. K. Huynh, J. P. Keating and N. C. Snaith, work in progress.
[ILS] H. Iwaniec, W. Luo and P. Sarnak, Low lying zeros of families of L-functions, Inst. Hautes
Études Sci. Publ. Math. 91, 2000, 55–131.
[Ju1] M. Jutila, On character sums and class numbers, Journal of Number Theory 5 (1973), 203–
[Ju2] M. Jutila, On mean values of Dirichlet polynomials with real characters, Acta Arith. 27
(1975), 191–198.
[Ju3] M. Jutila, On the mean value of L(1/2, χ) for real characters, Analysis 1 (1981), no. 2,
149–161.
[KaSa1] N. Katz and P. Sarnak, Random Matrices, Frobenius Eigenvalues and Monodromy, AMS
Colloquium Publications 45, AMS, Providence, 1999.
[KaSa2] N. Katz and P. Sarnak, Zeros of zeta functions and symmetries, Bull. AMS 36, 1999, 1−26.
[Ke] J. P. Keating, Statistics of quantum eigenvalues and the Riemann zeros, in Supersymmetry
and Trace Formulae: Chaos and Disorder, eds. I. V. Lerner, J. P. Keating & D. E Khmelnit-
skii (Plenum Press), 1–15.
[KeSn1] J. P. Keating and N. C. Snaith, Random matrix theory and ζ(1/2+ it), Comm. Math. Phys.
214 (2000), no. 1, 57–89.
[KeSn2] J. P. Keating and N. C. Snaith, Random matrix theory and L-functions at s = 1/2, Comm.
Math. Phys. 214 (2000), no. 1, 91–110.
[KeSn3] J. P. Keating and N. C. Snaith, Random matrices and L-functions, Random matrix theory,
J. Phys. A 36 (2003), no. 12, 2859–2881.
[Mil1] S. J. Miller, 1- and 2-level densities for families of elliptic curves: evidence for the underly-
ing group symmetries, Compositio Mathematica 104 (2004), 952–992.
[Mil2] S. J. Miller, Variation in the number of points on elliptic curves and applications to excess
rank, C. R. Math. Rep. Acad. Sci. Canada 27 (2005), no. 4, 111–120.
[Mil3] S. J. Miller, Lower order terms in the 1-level density for families of holomorphic cuspidal
newforms, preprint. http://arxiv.org/abs/0704.0924
[Mon] H. Montgomery, The pair correlation of zeros of the zeta function, Analytic Number Theory,
Proc. Sympos. Pure Math. 24, Amer. Math. Soc., Providence, 1973, 181− 193.
[Od1] A. Odlyzko, On the distribution of spacings between zeros of the zeta function, Math. Comp.
48 (1987), no. 177, 273–308.
[Od2] A. Odlyzko, The 1022-nd zero of the Riemann zeta function, Proc. Confer-
ence on Dynamical, Spectral and Arithmetic Zeta-Functions, M. van Frankenhuy-
sen and M. L. Lapidus, eds., Amer. Math. Soc., Contemporary Math. series, 2001,
http://www.research.att.com/∼amo/doc/zeta.html.
[OS1] A. E. Özlük and C. Snyder, Small zeros of quadratic L-functions, Bull. Austral. Math. Soc.
47 (1993), no. 2, 307–319.
[OS2] A. E. Özlük and C. Snyder, On the distribution of the nontrivial zeros of quadratic L-
functions close to the real axis, Acta Arith. 91 (1999), no. 3, 209–228.
[RR] G. Ricotta and E. Royer, Statistics for low-lying zeros of symmetric power L-functions in
the level aspect, preprint. http://arxiv.org/abs/math/0703760
[Ro] E. Royer, Petits zéros de fonctions L de formes modulaires, Acta Arith. 99 (2001), no. 2,
147-172.
[Rub1] M. Rubinstein, Low-lying zeros of L–functions and random matrix theory, Duke Math. J.
109, (2001), 147–181.
[Rub2] M. Rubinstein, Computational methods and experiments in analytic number theory. Pages
407–483 in Recent Perspectives in Random Matrix Theory and Number Theory, ed. F. Mez-
zadri and N. C. Snaith editors, 2005.
[RS] Z. Rudnick and P. Sarnak, Zeros of principal L-functions and random matrix theory, Duke
Math. J. 81, 1996, 269− 322.
[So] K. Soundararajan, Nonvanishing of quadratic Dirichlet L-functions at s = 1/2, Ann. of
Math. (2) 152 (2000), 447–488.
[Yo1] M. Young, Lower-order terms of the 1-level density of families of elliptic curves, Internat.
Math. Res. Notices 2005, no. 10, 587–633.
[Yo2] M. Young, Low-lying zeros of families of elliptic curves, J. Amer. Math. Soc. 19 (2006), no.
1, 205–250.
E-mail address: [email protected]
DEPARTMENT OF MATHEMATICS, BROWN UNIVERSITY, PROVIDENCE, RI 02912
Cosmology from String Theory
Luis Anchordoqui,1 Haim Goldberg,2 Satoshi Nawata,1 and Carlos Nuñez3
1Department of Physics,
University of Wisconsin-Milwaukee, Milwaukee, WI 53201
2Department of Physics,
Northeastern University, Boston, MA 02115
3 Department of Physics,
University of Swansea, Singleton Park, Swansea SA2 8PP, UK
(Dated: April 2007)
Abstract
We explore the cosmological content of Salam-Sezgin six dimensional supergravity, and find a
solution to the field equations in qualitative agreement with observation of distant supernovae,
primordial nucleosynthesis abundances, and recent measurements of the cosmic microwave back-
ground. The carrier of the acceleration in the present de Sitter epoch is a quintessence field slowly
rolling down its exponential potential. Intrinsic to this model is a second modulus which is au-
tomatically stabilized and acts as a source of cold dark matter, with a mass proportional to an
exponential function of the quintessence field (hence realizing VAMP models within a String con-
text). However, any attempt to saturate the present cold dark matter component in this manner
leads to unacceptable deviations from cosmological data – a numerical study reveals that this
source can account for up to about 7% of the total cold dark matter budget. We also show that
(1) the model will support a de Sitter energy in agreement with observation at the expense of a
miniscule breaking of supersymmetry in the compact space; (2) variations in the fine structure
constant are controlled by the stabilized modulus and are negligible; (3) “fifth” forces are carried
by the stabilized modulus and are short range; (4) the long time behavior of the model in four
dimensions is that of a Robertson-Walker universe with a constant expansion rate (w = −1/3).
Finally, we present a String theory background by lifting our six dimensional cosmological solution
to ten dimensions.
I. GENERAL IDEA
The mechanism involved in generating a very small cosmological constant that satisfies
’t Hooft naturalness is one of the most pressing questions in contemporary physics. Re-
cent observations of distant Type Ia supernovae [1] strongly indicate that the universe is
expanding in an accelerating phase, with an effective de-Sitter (dS) constant H that nearly
saturates the upper bound given by the present-day value of the Hubble constant, i.e.,
H <∼ H0 ∼ 10−33 eV. According to the Einstein field equations, H provides a measure of
the scalar curvature of the space and is related to the vacuum energy density ρvac through
Friedmann’s equation, 3M²_Pl H² ∼ ρ_vac, where M_Pl ≃ 2.4 × 10^18 GeV is the reduced Planck
mass. However, the “natural” value of ρvac coming from the zero-point energies of known
elementary particles is found to be at least ρvac ∼ TeV4. Substitution of this value of ρvac into
Friedmann’s equation yields H >∼ 10−3 eV, grossly inconsistent with the set of supernova
(SN) observations. The absence of a mechanism in agreement with ’t Hooft naturalness
criteria then centers on the following question: why is the vacuum energy needed by the
Einstein field equations 120 orders of magnitude smaller than any “natural” cut-off scale in
effective field theory of particle interactions, but not zero?
Nowadays, the most popular framework which can address aspects of this question is
the anthropic approach, in which the fundamental constants are not determined through
fundamental reasons, but rather because such values are necessary for life (and hence intel-
ligent observers to measure the constants) [2]. Of course, in order to implement this idea in
a concrete physical theory, it is necessary to postulate a multiverse in which fundamental
physical parameters can take different values. Recent investigations in String theory have
applied a statistical approach to the enormous “landscape” of metastable vacua present in
the theory [3]. A vast ensemble of metastable vacua with a small positive effective cosmo-
logical constant that can accommodate the low energy effective field theory of the Standard
Model (SM) have been found. Therefore, the idea of a string landscape has been used to
propose a concrete implementation of the anthropic principle.
Nevertheless, the compactification of a String/M-theory background to a four dimen-
sional solution undergoing accelerating expansion has proved to be exceedingly difficult.
The obstruction to finding dS solutions in the low energy equations of String/M theory
is well known and summarized in the no-go theorem of [4]. This theorem states that in
a D-dimensional theory of gravity, in which (a) the action is linear in the Ricci scalar
curvature (b) the potential for the matter fields is non-positive and (c) the massless fields
have positive defined kinetic terms, there are no (dynamical) compactifications of the form:
ds²_D = Ω²(y)(dx²_d + ĝ_{mn} dy^n dy^m), if the d-dimensional space has Minkowski SO(1, d−1) or dS
SO(1, d) isometries and its d dimensional gravitational constant is finite (i.e., the internal
space has finite volume). The conclusions of the theorem can be circumvented if some of its
hypotheses are not satisfied. Examples where the hypotheses can be relaxed exist: (i) one
can find solutions in which not all of the internal dimensions are compact [5]; (ii) one may
try to find a solution breaking Minkowski or de Sitter invariance [6]; (iii) one may try to
add negative tension matter (e.g., in the form of orientifold planes) [7]; (iv) one can even
appeal to some intricate String dynamics [8].
Salam-Sezgin six dimensional supergravity model [9] provides a specific example where
the no-go theorem is not at work, because when their model is lifted to M theory the
internal space is found to be non-compact [10]. The lower dimensional perspective of this,
is that in six dimensions the potential can be positive. This model has perhaps attracted
the most attention because of the wide range of its phenomenological applications [11]. In
this article we examine the cosmological implications of such a supergravity model during
the epochs subsequent to primordial nucleosynthesis. We derive a solution of Einstein field
equations which is in qualitative agreement with luminosity distance measurements of Type
Ia supernovae [1], primordial nucleosynthesis abundances [12], data from the Sloan Digital
Sky Survey (SDSS) [13], and the most recent measurements from the Wilkinson Microwave
Anisotropy Probe (WMAP) satellite [14]. The observed acceleration of the universe is
driven by the “dark energy” associated to a scalar field slowly rolling down its exponential
potential (i.e., kinetic energy density < potential energy density ≡ negative pressure) [15].
Very interestingly, the resulting cosmological model also predicts a cold dark matter (CDM)
candidate. In analogy with the phenomenological proposal of [16], such a nonbaryonic matter
interacts with the dark energy field and therefore the mass of the CDM particles evolves with
the exponential dark energy potential. However, an attempt to saturate the present CDM
component in this manner leads to gross deviations from present cosmological data. We
will show that this type of CDM can account for up to about 7% of the total CDM budget.
Generalizations of our scenario (using supergravities with more fields) might account for the
rest.
II. SALAM-SEZGIN COSMOLOGY
We begin with the action of Salam-Sezgin six dimensional supergravity [9], setting to
zero the fermionic terms in the background (of course fermionic excitations will arise from
fluctuations),
R− κ2(∂Mσ)2 − κ2eκσF 2MN −
e−κσ − κ
e2κσG2MNP
. (1)
Here, g6 = det gMN , R is the Ricci scalar of gMN , FMN = ∂[MAN ], GMNP = ∂[MBNP ] +
κA[MFNP ], and capital Latin indices run from 0 to 5. A re-scaling of the constants: G6 ≡ 2κ2,
φ ≡ −κσ and ξ ≡ 4 g2 leads to
R− (∂Mφ)2 −
eφ − G6
e−φF 2MN −
e−2φG2MNP
. (2)
The length dimensions of the fields are: [G6] = L
4, [ξ] = L2, [φ] = [g2MN ] = 1, [A
M ] = L
and [F 2MN ] = [G
MNP ] = L
Now, we consider a spontaneous compactification from six dimension to four dimension.
To this end, we take the six dimensional manifold M to be a direct product of 4 Minkowski
directions (hereafter denoted by N1) and a compact orientable two dimensional manifold N2
with constant curvature. Without loss of generality, we can set N₂ to be a sphere S², or a
hyperbolic manifold Σ₂ with arbitrary genus. The metric on M locally takes the form
ds²₆ = ds₄(t, x⃗)² + e^{2f(t,x⃗)} dσ²,   dσ² = r_c²(dϑ² + sin²ϑ dϕ²) for S²,   r_c²(dϑ² + sinh²ϑ dϕ²) for Σ₂,   (3)
where (t, ~x) denotes a local coordinate system in N1, rc is the compactification radius of N2.
We assume that the scalar field φ is only dependent on the point of N1, i.e., φ = φ(t, ~x).
We further assume that the gauge field A_M is excited on N₂ and is of the form
A_ϕ = b cos ϑ  (S²),   A_ϕ = b cosh ϑ  (Σ₂).   (4)
This is the monopole configuration detailed by Salam-Sezgin [9]. Since we set the Kalb-
Ramond field BNP = 0 and the term A[MFNP ] vanishes on N2, GMNP = 0. The field
strength becomes
F²_{MN} = 2b² e^{−4f}/r_c⁴.   (5)
Taking the variation of the gauge field A_M in Eq. (2) we obtain the Maxwell equation
∂_M(√g₄ √g_σ e^{2f−φ} F^{MN}) = 0.   (6)
It is easily seen that the field strengths in Eq. (5) satisfy Eq. (6).
With this in mind, the Ricci scalar reduces to [17]
R[M] = R[N₁] + e^{−2f} R[N₂] − 4□f − 6(∂_μ f)²,   (7)
where R[M ], R[N1], and R[N2] denote the Ricci scalars of the manifolds M, N1, and N2;
respectively. (Greek indices run from 0 to 3.) The Ricci scalar of N₂ reads
R[N₂] = +2/r_c²  (S²),   −2/r_c²  (Σ₂).   (8)
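The curvature statement in Eq. (8) is easy to check independently. The following short symbolic computation (an illustrative sketch added here, not part of the original derivation; only the radius r_c is taken from the text) recovers R[N₂] = +2/r_c² for the round two-sphere metric appearing in Eq. (3):

```python
# Independent check of Eq. (8): the Ricci scalar of the round two-sphere
# metric r_c^2 (dtheta^2 + sin^2(theta) dphi^2) is +2/r_c^2.
import sympy as sp

theta, phi, rc = sp.symbols('theta phi r_c', positive=True)
coords = [theta, phi]
g = sp.Matrix([[rc**2, 0], [0, rc**2 * sp.sin(theta)**2]])  # metric on S^2
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc}
def Gamma(a, b, c):
    return sp.Rational(1, 2) * sum(
        ginv[a, d] * (sp.diff(g[d, b], coords[c])
                      + sp.diff(g[d, c], coords[b])
                      - sp.diff(g[b, c], coords[d]))
        for d in range(2))

# Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba} + Gamma*Gamma - Gamma*Gamma
def Ricci(b, c):
    expr = 0
    for a in range(2):
        expr += sp.diff(Gamma(a, b, c), coords[a]) - sp.diff(Gamma(a, b, a), coords[c])
        for d in range(2):
            expr += Gamma(a, a, d) * Gamma(d, b, c) - Gamma(a, c, d) * Gamma(d, b, a)
    return sp.simplify(expr)

R = sp.simplify(sum(ginv[b, c] * Ricci(b, c) for b in range(2) for c in range(2)))
print(R)  # -> 2/r_c**2, i.e., R[N2] = +2/r_c^2 for S^2
```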
To simplify the notation, from now on, R1 and R2 indicate R[N1] and R[N2], respectively.
The determinant of the metric can be written as √g₆ = e^{2f} √g₄ √g_σ, where g₄ = det g_μν and
g_σ is the determinant of the metric of N₂ excluding the factor e^{2f}. We define the gravitational
constant in four dimensions as
G₄ ≡ G₆/(2π r_c²).   (9)
Hence, by using the field configuration given in Eq. (4) we can re-write the action in Eq. (2)
as follows
e2f [R1 + e
−2fR2 + 2(∂µf)
2 − (∂µφ)2]−
e2f+φ − G6b
e−2f−φ
. (10)
Let us consider now a rescaling of the metric of N1: ĝµν ≡ e2fgµν and
ĝ4 = e
4f√g4. Such a
transformation brings the theory into the Einstein conformal frame where the action given
in Eq. (10) takes the form
R[ĝ4]− 4(∂µf)2 − (∂µφ)2 −
e−2f+φ −
e−6f−φ + e−4fR2
. (11)
The four dimensional Lagrangian is then
R− 4(∂µf)2 − (∂µφ)2 − V (f, φ)
, (12)
V (f, φ) ≡ ξ
e−2f+φ +
e−6f−φ − e−4fR2 , (13)
where to simplify the notation we have defined: g ≡ ĝ4 and R ≡ R[ĝ4].
Let us now define a new orthogonal basis, X ≡ (φ + 2f)/√G₄ and Y ≡ (φ − 2f)/√G₄,
so that the kinetic energy terms in the Lagrangian are both canonical, i.e.,
(∂X)2 − 1
(∂Y )2 − Ṽ (X, Y )
, (14)
where the potential Ṽ(X, Y) ≡ V(f, φ)/G₄ can be re-written (after some elementary algebra)
as [18]
Ṽ(X, Y) = (e^{√G₄ Y}/G₄) [ (G₆ b²/r_c⁴) e^{−2√G₄ X} − R₂ e^{−√G₄ X} + ξ/G₆ ].   (15)
The field equations are
Rµν −
gµνR =
∂µX∂νX −
∂ηX ∂
∂µY ∂νY −
∂ηY ∂
− gµνṼ (X, Y )
, (16)
✷X = ∂X Ṽ , and ✷Y = ∂Y Ṽ . In order to allow for a dS era we assume that the metric takes
the form
ds2 = −dt2 + e2h(t)d~x 2, (17)
and that X and Y depend only on the time coordinate, i.e., X = X(t) and Y = Y (t). Then
the equations of motion for X and Y can be written as
Ẍ + 3ḣẊ = −∂X Ṽ (18)
Ÿ + 3ḣẎ = −∂Y Ṽ , (19)
whereas the only two independent components of Eq. (16) are
ḣ2 =
(Ẋ2 + Ẏ 2) + Ṽ (X, Y )
2ḧ+ 3ḣ2 =
(Ẋ2 + Ẏ 2) + Ṽ (X, Y )
. (21)
The terms in the square brackets in Eq. (15) take the form of a quadratic function of
e^{−√G₄ X}. This function has a global minimum at e^{−√G₄ X₀} = R₂ r_c⁴/(2G₆ b²). Indeed, the
necessary and sufficient condition for a minimum is that R₂ > 0, so hereafter we only
consider the spherical compactification, where e^{−√G₄ X₀} = M²_Pl/(4π b²). The condition for
the potential to show a dS rather than an AdS or Minkowski phase is ξb² > 1. Now, we
expand Eq. (15) around the minimum,
Ṽ (X, Y ) =
(X −X0)2 +O
(X −X0)3
, (22)
where
π brc
4πr2cb
(b2ξ − 1) . (24)
As shown by Salam-Sezgin [9] the requirements for preserving a fraction of supersymmetry
(SUSY) in spherical compactifications to four dimension imply b2ξ = 1, corresponding to
winding number n = ±1 for the monopole configuration. Consequently, a (Y -dependent)
dS background can be obtained only through SUSY breaking. For now we will leave open
the symmetry breaking mechanism and come back to this point after our phenomenological
discussion. The Y -dependent physical mass of the X-particles at any time is
M_X(Y) = e^{√G₄ Y/2} M_X,   (25)
which makes this a varying mass particle (VAMP) model [16], although, in this case, the
dependence on the quintessence field is fixed by the theory. The dS (vacuum) potential
energy density is
K . (26)
In general, classical oscillations for the X particle will occur for
MX > H =
G4ρtot
, (27)
where ρtot is the total energy density. (This condition is well known from axion cosmol-
ogy [19]). A necessary condition for this to hold can be obtained by saturating ρ with VY
from Eq. (26) and making use of Eqs. (23) to (27), which leads to ξb2 < 7. Of course, as
we stray from the present into an era where the dS energy is not dominant, we must check
at every step whether the inequality (27) holds. If the inequality is violated, the X-particle
ceases to behave like CDM.
In what follows, some combination of the parameters of the model will be determined by
fitting present cosmological data. To this end we assume that SM fields are confined to N1
and we denote with ρrad the radiation energy, with ρX the matter energy associated with
the X-particles, and with ρmat the remaining matter density. With this in mind, Eq. (19)
can be re-written as
Ÿ + 3H Ẏ = −∂V_eff/∂Y,   (28)
where Veff ≡ VY + ρX and H is defined by the Friedmann equation
H² ≡ ḣ² = (1/3M²_Pl) [ (1/2)Ẏ² + V_eff + ρ_rad + ρ_mat ].   (29)
(Note that the matter energy associated to the X particles is contained in Veff .)
It is more convenient to consider the evolution in u ≡ − ln(1+ z), where z is the redshift
parameter. As long as the oscillation condition is fulfilled, the VAMP CDM energy density
is given in terms of the X-particle number density nX [20]
ρ_X(Y, u) = M_X(Y) n_X(u) = C e^{√G₄ Y/2} e^{−3u},   (30)
where C is a constant to be determined by fitting to data. Along with Eq. (26), these define
for us the effective (u-dependent) VAMP potential
V_eff(Y, u) ≡ V_Y + ρ_X = A e^{√G₄ Y} + C e^{√G₄ Y/2} e^{−3u},   (31)
where A is a constant given in terms of the model parameters through Eqs. (22) and (24).
Hereafter we adopt natural units, MPl = 1. Denoting by a prime derivatives with respect
to u, the equation of motion for Y becomes
Y″/(1 − Y′²/6) + 3Y′ + (1/ρ) [ ∂_u ρ · Y′/2 + 3 ∂_Y V_eff ] = 0,   (32)
where ρ = V_eff + ρ_rad + ρ_mat. Quantities of importance are the dark energy density
ρ_Y = (1/2) H² Y′² + V_Y,   (33)
generally expressed in units of the critical density (Ω ≡ ρ/ρ_c),
Ω_Y = ρ_Y/ρ_c = ρ_Y/(3H²),   (34)
and the Hubble parameter
H² = (V_eff + ρ_rad + ρ_mat)/(3 − Y′²/2).   (35)
The equation of state is
w_Y = [(1/2) H² Y′² − V_Y] / [(1/2) H² Y′² + V_Y].   (36)
We pause to note that the exponential potential V_Y ∼ e^{λY/M_Pl}, with λ = √2. Asymptotically,
this represents the crossover situation with wY = −1/3 [22], implying expansion at constant
velocity. Nevertheless, we will find that there is a brief period encompassing the recent past
(z <∼ 6) where there has been significant acceleration.
Returning now to the quantitative analysis, we take ρ_mat = B e^{−3u} and ρ_rad =
10^{−4} ρ_mat e^{−u} f(u) [21], where B is a constant and f(u) parameterizes the u-dependent
number of radiation degrees of freedom. In order to interpolate the various thresholds
appearing prior to recombination (among others, QCD and electroweak), we adopt a conve-
nient phenomenological form f(u) = exp(−u/15) [23]. We note at this point that solutions
of Eq. (32) are independent of an overall normalization of the energy density. This is also
true for the dimensionless quantities of interest ΩY and wY .
With these forms for the energy densities, Eq. (32) can be integrated for various choices
of A, B, and C, and initial conditions at u = −30. We take as initial condition Y (−30) = 0.
Because of the slow variation of Y over the range of u, changes in Y (−30) are equivalent
to altering the quantities A and C [24]. In accordance with equipartition arguments [24, 25]
we take Y ′(−30) = 0.08. Because the Y evolution equation depends only on energy density
ratios, and hence only on the ratios A : B : C of the previously introduced constants, we
may, for the purposes of integration and without loss of generality, arbitrarily fix B and
then scan the A and C parameter space for applicable solutions. In Fig. 1 we show a sample
qualitative fit to the data. It has the property of allowing the maximum value of X-CDM
FIG. 1: The upper panel shows the evolution of Y as a function of u. Today corresponds to
z = 0 and for primordial nucleosynthesis z ≈ 1010. We set the initial conditions Y (−30) = 0 and
Y ′(−30) = 0.08; we take A : B : C = 11 : 0.3 : 0.1. The second panel shows the evolution of ΩY
(solid line), Ωmat (dot-dashed line), and Ωrad (dashed line) superposed over experimental best fits
from SDSS and WMAP observations [13, 14]. The curves are not actual fits to the experimental
data but are based on the particular choice of the Y evolution shown in the upper panel, which
provides eyeball agreement with existing astrophysical observations. The lower panel shows the
evolution of the equation of state wY superposed over the best fits to WMAP + SDSS data sets
and WMAP + SNGold [14] . The solution of the field equations is consistent with the requirement
from primordial nucleosynthesis, ΩY < 0.045 (90%CL) [12], it also shows the established radiation
and matter dominated epochs, and at the end shows an accelerated dS era.
(about 7% of the total dark matter component) before the fits deviate unacceptably from
data.
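As a concrete illustration of the integration just described, the following sketch evolves Y(u) with the ratios A : B : C = 11 : 0.3 : 0.1 and the initial data quoted above. It is not the original numerical code: it assumes the reconstructed forms of Eqs. (31)-(36), works in units M_Pl = 1, and uses standard SciPy integration, so it should only be expected to reproduce the qualitative behavior shown in Fig. 1:

```python
# Minimal sketch of the quintessence evolution described around Eqs. (28)-(36),
# in units M_Pl = 1.  It assumes the reconstructed form of Eq. (32); the ratios
# A : B : C = 11 : 0.3 : 0.1 and the initial data Y(-30) = 0, Y'(-30) = 0.08 are
# taken from the text, while the overall normalization is arbitrary.
import numpy as np
from scipy.integrate import solve_ivp

A, B, C = 11.0, 0.3, 0.1           # dark energy, ordinary matter, VAMP CDM amplitudes
lam = np.sqrt(2.0)                 # exponent of the dark energy potential

def V_Y(Y):            return A * np.exp(lam * Y)
def dV_Y(Y):           return lam * A * np.exp(lam * Y)
def rho_X(Y, u):       return C * np.exp(lam * Y / 2.0) * np.exp(-3.0 * u)
def V_eff(Y, u):       return V_Y(Y) + rho_X(Y, u)
def dV_eff_dY(Y, u):   return dV_Y(Y) + 0.5 * lam * rho_X(Y, u)
def rho_mat(u):        return B * np.exp(-3.0 * u)
def rho_rad(u):        # f(u) = exp(-u/15) models the u-dependent degrees of freedom
    return 1e-4 * rho_mat(u) * np.exp(-u) * np.exp(-u / 15.0)

def rho(Y, u):         return V_eff(Y, u) + rho_rad(u) + rho_mat(u)

def drho_du(Y, u, eps=1e-4):       # numerical u-derivative at fixed Y
    return (rho(Y, u + eps) - rho(Y, u - eps)) / (2 * eps)

def rhs(u, s):                     # Eq. (32) as reconstructed, solved for Y''
    Y, Yp = s
    r = rho(Y, u)
    Ypp = -(1.0 - Yp**2 / 6.0) * (3.0 * Yp + (drho_du(Y, u) * Yp / 2.0
                                              + 3.0 * dV_eff_dY(Y, u)) / r)
    return [Yp, Ypp]

sol = solve_ivp(rhs, (-30.0, 0.0), [0.0, 0.08], dense_output=True, rtol=1e-8)

# Derived quantities today (Eqs. (33)-(36) as reconstructed)
u = 0.0
Y, Yp = sol.sol(u)
H2 = (V_eff(Y, u) + rho_rad(u) + rho_mat(u)) / (3.0 - Yp**2 / 2.0)
rho_Y = 0.5 * H2 * Yp**2 + V_Y(Y)
Omega_Y = rho_Y / (3.0 * H2)
w_Y = (0.5 * H2 * Yp**2 - V_Y(Y)) / rho_Y
print(f"Omega_Y(0) = {Omega_Y:.3f},  w_Y(0) = {w_Y:.3f}")
```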
It is worth pausing at this juncture to examine the consequences of this model for vari-
ation in the fine structure constant and long range forces. Specifically, excitations of the
electromagnetic field on N1 will, through the presence of the dilaton factor in Eq. (2), seem-
ingly induce variation in the electromagnetic fine structure constant αem = e
2/4π, as well
as a violation of the equivalence principle through a long range coupling of the dilaton to
the electromagnetic component of the stress tensor. We now show that these effects are
extremely negligible in the present model. First, it is easily seen using Eqs. (2) and (3)
together with Eqs. (8)-(15), that the electromagnetic piece of the lagrangian as viewed from
N1 is
Lem = −
G4X f̃ 2µν , (37)
where f̃µν denotes a quantum fluctuation of the electromagnetic U(1) field. (Fluctuations of
the U(1) background field are studied in the Appendix). At the equilibrium value X = X0,
the exponential factor is
G4X0 =
, (38)
so that we can identify the electromagnetic coupling (1/e2) ≃ M2Pl/b2. This shows that
b ∼ MPl. We can then expand about the equilibrium point, and obtain an additional factor
of (X − X0)/MPl. This will do two things [26]: (a) At the classical level, it will induce a
variation of the electromagnetic coupling as X varies, with ∆αem/αem ≃ (X − X0)/MPl;
(b) at the quantum level, exchange of X quanta will induce a new force through coupling to
the electromagnetic component of matter.
Item (b) is dangerous if the mass of the exchanged quanta are small, so that the force
is long range. This is not the case in the present model: from Eq. (22) the X quanta have
mass of O(MXMPl) ∼ MPl/(rcb), so that if rc is much less than O(cm), the forces will play
no role in the laboratory or cosmologically.
As far as the variation of αem is concerned, we find that ρX/ρmat = (C/B)e
2, so that
ρX ≃ 3× 10−120e−3uM4PleY/
X(X −X0)2eY
2M2Pl . (39)
This then gives
√⟨(X − X₀)²⟩ ≡ ΔX_rms ≈ 10^{−60} e^{−3u/2} M_Pl e^{Y/(2√2)}/M_X.   (40)
During the radiation era, Y ≃ const ≃ 0 (see Fig. 1), so that during nucleosynthesis
(u ≃ −23) ∆Xrms/MPl ≃ 10−45/MX , certainly no threat. It is interesting that such a small
value can be understood as a result of inflation: from the equation of motion for the X field,
it is simple to see that during a dS era with Hubble constant H , the amplitude ∆Xrms is
damped as e−3Ht/2. For 50 e-foldings, this represents a damping of 1032. In order to make the
numbers match (assuming a pre-inflation value ∆Xrms/MPl ∼ 1) an additional damping of
∼ 1013 is required from reheat temperature to primordial nucleosynthesis. With the e−3u/2
behavior, this implies a low reheat temperature, about 106 GeV. Otherwise, one may just
assume an additional fine-tuning of the initial condition on X .
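The orders of magnitude quoted in this paragraph can be checked with a few lines of arithmetic (an illustrative sketch; the 50 e-foldings, the extra damping of ∼10¹³, and a BBN temperature of order MeV are the only inputs, the last being a standard value not stated explicitly above):

```python
# Order-of-magnitude check of the damping argument above.
import numpy as np

N_efolds = 50
damping_inflation = np.exp(1.5 * N_efolds)      # e^{3Ht/2} with Ht = 50
print(f"inflationary damping ~ 10^{np.log10(damping_inflation):.1f}")   # ~ 10^32

# An additional ~10^13 of e^{-3u/2} damping between reheating and BBN means
# (T_reheat / T_BBN)^{3/2} ~ 10^13, i.e. T_reheat ~ 10^{13*2/3} * T_BBN.
T_BBN_GeV = 1e-3                                # ~ MeV
T_reheat = T_BBN_GeV * 10 ** (13 * 2 / 3)
print(f"T_reheat ~ {T_reheat:.1e} GeV")         # ~ 10^6 GeV, as quoted in the text
```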
As mentioned previously, the solutions of Eq. (32), as well as the quantities we are fitting
to (ΩY and wY ), depend only on the ratios of the energy densities. From the eyeball fit in
Fig. 1 we have, up to a common constant, ρ_ordinary matter ≡ ρ_mat ∝ 0.3 e^{−3u} and V_Y ∝ 11 e^{√2 Y}.
We can deduce from these relations that
V_Y(now)/ρ_mat(now) = (11/0.3) e^{√2 Y(now)} ≃ 36 e^{√2 Y(now)}.   (41)
Besides, we know that ρ_mat(now) ≃ 0.3 ρ_c(now) ≃ 10^{−120} M⁴_Pl. Now, Eqs. (22) and (24) lead to
V_Y(now) = e^{√2 Y(now)} M⁴_Pl (b²ξ − 1)/(8π r_c² b²),   (42)
so that from Eqs. (41) and (42) we obtain
(b²ξ − 1)/(8π r_c² b²) ≃ 10^{−119}.   (43)
It is apparent that this condition cannot be naturally accomplished by choosing large values
of rc and/or b. There remains the possibility that SUSY breaking [27] or non-perturbative
effects lead to an exponentially small deviation of b2ξ from unity, such that b2ξ = 1 +
O(10−119) [29]. Since a deviation of b2ξ from unity involves a breaking of supersymmetry,
a small value for this dimensionless parameter, perhaps (1 TeV/M_Pl)² ∼ 10^{−31}, can be
expected on the basis of ’t Hooft naturalness. It is the extent of the smallness, of course,
which remains to be explained.
III. THE STRING CONNECTION
We now briefly comment on how the six dimensional solution derived above reads in
String theory. To this end, we use the uplifting formulae developed by Cvetic, Gibbons and
Pope [10]; we will denote with the subscript “cgp” the quantities of that paper and with
“us” quantities in our paper. Let us more specifically look at Eq. (34) in Ref. [10], where
the authors described the six dimensional Lagrangian they uplifted to Type I String theory.
By simple inspection, we can see that the relation between their variables and fields with
the ones we used in Eq. (2) is φ|cgp = −2φ|us, F2|cgp =
G6F2|us, H3|cgp =
G6/3G3|us, and
ḡ2|cgp = ξ/(8G6)|us. Our six dimensional background is determined by the (string frame)
metric ds26 = e
− dt2 + e2hdx23 + r2c dσ22
, the gauge field Fϑϕ = −b sin ϑ, and the t-
dependent functions h(t), f(t) = √G₄ (X − Y)/4, and φ(t) = √G₄ (X + Y)/2. Identifying
these expressions with those in Eqs. (47), (48) and (49) of Ref. [10] one obtains a full Type
I or Type IIB configuration, consisting of a 3-form (denoted by F3),
8G6 sinh ρ̂ cosh ρ̂
ξ cosh2 2ρ̂
dρ̂ ∧
b cos ϑdϕ
b cosϑdϕ
2G6b√
ξ cosh 2ρ̂
sinϑdθ ∧ dϕ ∧
cosh2 ρ̂
b cosϑdϕ
− sinh2 ρ̂
dβ +
b cosϑdϕ
, (44)
a dilaton (denoted by φ̂)
e2φ̂ =
cosh(2ρ̂)
, (45)
and a ten dimensional metric that in the string frame reads
ds2str = e
φ ds26 + dz
dρ̂2 +
cosh2 ρ̂
cosh 2ρ̂
b cosϑdϕ
sinh2 ρ̂
cosh 2ρ̂
dβ +
b cosϑdϕ
, (46)
where ρ̂, z, α, and β denote the four extra coordinates. It is important to stress that though
the uplifting procedure described above implies a non-compact internal manifold, the metric
in Eq. (46) can be interpreted within the context of [7] (i.e., 0 ≤ ρ̂ ≤ L, with L ≫ 1 an
infrared cutoff where the spacetime smoothly closes up) to obtain a finite volume for the
internal space and consequently a non-zero but tiny value for G6.
IV. CONCLUSIONS
We studied the six dimensional Salam-Sezgin model [9], where a solution of the form
Minkowski4×S2 is known to exist, with a U(1) monopole serving as background in the two-
sphere. This model circumvents the hypotheses of the no-go theorem [4] and then when lifted
to String theory can show a dS phase. In this work we have allowed for time dependence
of the six-dimensional moduli fields and metric (with a Robertson-Walker form). Time
dependence in these fields vitiates invariance under the supersymmetry transformations.
With these constructs, we have obtained the following results:
(1) In terms of linear combinations of the S2 moduli field and the six dimensional dilaton,
the effective potential consists of (a) a pure exponential function of a quintessence field
(this piece vanishes in the supersymmetric limit of the static theory) and (b) a part which
is a source of cold dark matter, with a mass proportional to an exponential function of the
quintessence field. This presence of a VAMP CDM candidate is inherent in the model.
(2) If the monopole strength is precisely at the value prescribed by supersymmetry, the
model is in gross disagreement with present cosmological data – there is no accelerative
phase, and the contribution of energy from the quintessence field is purely kinetic.
However, a miniscule deviation of O(10−120) from this value permits a qualitative match
with data. Contribution from the VAMP component to the matter energy density can be
as large as about 7% without having negative impact on the fit. The emergence of a
VAMP CDM candidate as a necessary companion of dark energy has been a surprising
aspect of the present findings, and perhaps encouraging for future exploration of
candidates which can assume a more prominent role in the CDM sector.
(3) In our model, the exponential potential VY ∼ eλY/MPl , with Y the quintessence field
and λ = √2. The asymptotic behavior of the scale factor for exponential potentials is
e^{h(t)} ≈ t^{2/λ²}, so that for our case h ≈ ln t, leading to a conformally flat Robertson-Walker
metric for large times. The deviation from constant velocity expansion into a brief
accelerated phase in the neighborhood of our era makes the model phenomenologically
viable. In the case that the supersymmetry condition (b2ξ = 1) is imposed, and there is
neither radiant energy nor dark matter except for the X contribution, we find for large
times that the scale parameter e^{h(t)} ≈ √t, so that even in this case the asymptotic metric
is Robertson-Walker rather than Minkowski. Moreover, and rather intriguingly, the scale
parameter is what one would find with radiation alone [28].
In sum, in spite of the shortcomings of the model (not a perfect fit, requirement of a tiny
deviation from supersymmetric prescription for the monopole embedding), it has provided
a stimulating new, and unifying, look at the dark energy and dark matter puzzles.
Acknowledgments
We would like to thank Costas Bachas and Roberto Emparan for valuable discussions.
The research of HG was supported in part by the National Science Foundation under Grant
No. PHY-0244507.
V. APPENDIX
In this appendix we study the quantum fluctuations of the U(1) field associated with the
background configuration. We start by considering fluctuations of the background field A⁰_M
in the 4 dimensional space, i.e.,
AM → A0M + ǫ aM , (47)
where A0M = 0 if M 6= ϕ and aM = 0 if M = ϑ, ϕ. The fluctuations on A0M lead to
FMN → F 0MN + ǫ fMN . (48)
Then,
MN = gML gNP [F 0MNF
LP + ǫ F
MN fLP + ǫ
2fMN fLP ] . (49)
The second term vanishes and the first and third terms are nonzero because F 0MN 6= 0 in
the compact space and fMN 6= 0 in the 4 dimensional space. If the Kalb-Ramond potential
BNM = 0, then the 3-form field strength can be written as
GMNP = κA[M FNP ] =
[AM FNP + AP FMN −AN FMP ] . (50)
Now we introduce the notation of differential forms, in which the usual Maxwell field and field
strength read
A₁ = A_M dx^M   and   F₂ = F_MN dx^M ∧ dx^N;   (51)
respectively. (Note that dxM ∧ dxN is antisymmetrized by definition.) With this in mind
the 3-form reads
G₃ = κ A₁ ∧ F₂ = κ A_M F_NP dx^M ∧ dx^N ∧ dx^P.   (52)
Substituting Eqs. (47) and (48) into Eq. (52) we obtain
G3 = κ
(A0M + ǫaM )(F
NP + ǫfNP ) dx
M ∧ dxN ∧ dxP
. (53)
The background fields read
A⁰₁ = b cos ϑ dϕ,   F⁰₂ = −b sin ϑ dϑ ∧ dϕ,   (54)
and the fluctuations on the probe brane become
a₁ = a_μ dx^μ,   f₂ = f_{μν} dx^μ ∧ dx^ν,   with f_{μν} = ∂_μ a_ν − ∂_ν a_μ.   (55)
All in all,
= A0ϕF
ϑϕ dϕ ∧ dϑ ∧ dϕ+ ǫA0ϕfµν dϕ ∧ dxµ ∧ dxν + ǫF 0ϑϕaµ dϑ ∧ dϕ ∧ dxµ
+ ǫ2aµfζνdx
µ ∧ dxζ ∧ dxν . (56)
Using Eq. (54) and the antisymmetry of the wedge product, Eq. (56) can be re-written as
b cos ϑfµνdϕ ∧ dxµ ∧ dxν − baµ sinϑdϑ ∧ dϕ ∧ dxµ + ǫaµfζνdxµ ∧ dxζ ∧ dxν
. (57)
From the metric
ds² = e^{2α} dx²₄ + e^{2β}(dϑ² + sin²ϑ dϕ²)   (58)
we can write the vielbeins
e^a = e^α dx^a,   e^ϑ = e^β dϑ,   e^ϕ = e^β sin ϑ dϕ,
dx^a = e^{−α} e^a,   dϑ = e^{−β} e^ϑ,   dϕ = (e^{−β}/sin ϑ) e^ϕ,   (59)
where β ≡ f + ln r_c. (Lower Latin indices from the beginning of the alphabet indicate coordi-
nates associated with the four dimensional Minkowski spacetime with metric η_ab.) Substituting
into Eq. (57) we obtain
cos ϑ
sin ϑ
e−2α−βfabe
ϕ ∧ ea ∧ eb − be−α−2βaaeϑ ∧ eϕ ∧ ea + ǫe−3αaafcbea ∧ ec ∧ eb
, (60)
where f_ab = ∂_a a_b − ∂_b a_a. Because the three terms are orthogonal to each other, a straightfor-
ward calculation leads to
G₃² = κ²ǫ²(b² cot²ϑ e^{−4α−2β} f²_ab + b² e^{−2α−4β} a²_a) + O(ǫ⁴).   (61)
Then, the 5th term in Eq. (2) can be written as
SG3 = −
e4α+2β
dϑdϕ sinϑ
κ2ǫ2b2 cot2 ϑe−4α−2β
f 2ab
κ2ǫ2b2e−2α−4β
, (62)
whereas the contribution from the 4th term in Eq. (2) can be computed from Eq. (49)
yielding
SF2 = −
η42πe
2β−φG6ǫ
2f 2ab
2f−φr2cǫ
2f 2ab . (63)
Thus,
SG3 + SF2 = −
f 2ab +
, (64)
where the four dimensional effective coupling and the effective mass are of the form
= 4 ǫ2
πe2f−φr2c +
κ2b2e−2φ
dϑdϕ sinϑ cot2 ϑ
→ ∞ (65)
πκ2b2ǫ2e2α−2β−2φ . (66)
For the moment we let ∫ dϑ dϕ sin ϑ cot²ϑ = N, where eventually we set N → ∞. Now
to make quantum particle identification and coupling, we carry out the transformation
aa → gâa [30]. This implies that the second term in the right hand side of Eq. (64) vanishes,
yielding
fab = ∂a(gâb)− ∂b(gâa) = ∂ag âb − ∂bg âa + g ∂aâb − g ∂bâa = gf̂ab + â ∧ dg (67)
and consequently to leading order in N
f 2ab =
[g2f̂ 2ab + (â ∧ dg)2 + 2 g âb f̂ab ∂ag] . (68)
If the coupling depends only on the time variable,
f 2ab → f̂ 2ab +
â2a + 2
âi f̂
ti (69)
where ġ = ∂tg and lower latin indices from the middle of the alphabet refer to the brane
space-like dimensions. If we choose a time-like gauge in which at = 0, then the term
(ġ/g) âi f̂
ti can be written as (1/2)(ġ/g)(d/dt)(âi)
2, which after an integration by parts
gives −(1/2)[(d/dt)(ġ/g)]â2i ; with g ∼ e−φ, the factor in square brackets becomes −φ̈. Since
G4(X + Y ), the rapidly varying Ẍ will average to zero, and one is left just with the
very small Ÿ , which is of order Hubble square. For the term (ġ/g)2(ai)
2, the term (Ẋ)2
also averages to order Hubble square, implying that the induced mass term is of horizon
size. These “paraphotons” carry new relativistic degrees of freedom, which could in turn
modify the Hubble expansion rate during Big Bang nucleosynthesis (BBN). Note, however,
that these extremely light gauge bosons are thought to be created through inflaton decay
and their interactions are only relevant at Planck-type energies. Since the quantum gravity
era, all the paraphotons have been redshifting down without being subject to reheating, and
consequently at BBN they only count for a fraction of an extra neutrino species in agreement
with observations.
[1] A. G. Riess et al. [Supernova Search Team Collaboration], Astron. J. 116, 1009 (1998)
[arXiv:astro-ph/9805201]; S. Perlmutter et al. [Supernova Cosmology Project Collaboration],
Astrophys. J. 517, 565 (1999) [arXiv:astro-ph/9812133]; N. A. Bahcall, J. P. Ostriker, S. Perl-
mutter and P. J. Steinhardt, Science 284, 1481 (1999) [arXiv:astro-ph/9906463].
[2] S. Weinberg, Phys. Rev. Lett. 59, 2607 (1987).
[3] R. Bousso and J. Polchinski, JHEP 0006, 006 (2000) [arXiv:hep-th/0004134]; L. Susskind
arXiv:hep-th/0302219; M. R. Douglas, JHEP 0305, 046 (2003) [arXiv:hep-th/0303194];
N. Arkani-Hamed and S. Dimopoulos, JHEP 0506, 073 (2005) [arXiv:hep-th/0405159];
M. R. Douglas and S. Kachru, arXiv:hep-th/0610102.
[4] J. M. Maldacena and C. Nunez, Int. J. Mod. Phys. A 16, 822 (2001) [arXiv:hep-th/0007018];
G. W. Gibbons, “Aspects of Supergravity Theories,” lectures given at GIFT Seminar on The-
oretical Physics, San Feliu de Guixols, Spain, 1984. Print-85-0061 (CAMBRIDGE), published
in GIFT Seminar 1984:0123.
[5] G. W. Gibbons and C. M. Hull, arXiv:hep-th/0111072.
[6] P. K. Townsend and M. N. R. Wohlfarth, Phys. Rev. Lett. 91, 061302 (2003) [arXiv:hep-
th/0303097]. See also, N. Ohta, Phys. Rev. Lett. 91, 061303 (2003) [arXiv:hep-th/0303238].
[7] S. B. Giddings, S. Kachru and J. Polchinski, Phys. Rev. D 66, 106006 (2002) [arXiv:hep-
th/0105097].
[8] S. Kachru, R. Kallosh, A. Linde and S. P. Trivedi, Phys. Rev. D 68, 046005 (2003) [arXiv:hep-
th/0301240].
[9] A. Salam and E. Sezgin, Phys. Lett. B 147, 47 (1984).
[10] M. Cvetic, G. W. Gibbons and C. N. Pope, Nucl. Phys. B 677, 164 (2004) [arXiv:hep-
th/0308026].
[11] See e.g., J. J. Halliwell, Nucl. Phys. B 286, 729 (1987); Y. Aghababaie, C. P. Burgess,
S. L. Parameswaran and F. Quevedo, JHEP 0303, 032 (2003) [arXiv:hep-th/0212091];
Y. Aghababaie, C. P. Burgess, S. L. Parameswaran and F. Quevedo, Nucl. Phys. B 680,
389 (2004) [arXiv:hep-th/0304256]; G. W. Gibbons, R. Guven and C. N. Pope, Phys. Lett.
B 595, 498 (2004) [arXiv:hep-th/0307238]; Y. Aghababaie et al., JHEP 0309, 037 (2003)
[arXiv:hep-th/0308064].
[12] K. A. Olive, G. Steigman and T. P. Walker, Phys. Rept. 333, 389 (2000) [arXiv:astro-
ph/9905320]; R. Bean, S. H. Hansen and A. Melchiorri, Nucl. Phys. Proc. Suppl. 110, 167
(2002) [arXiv:astro-ph/0201127].
[13] M. Tegmark et al. [SDSS Collaboration], Phys. Rev. D 69, 103501 (2004) [arXiv:astro-
ph/0310723].
[14] D. N. Spergel et al. [WMAP Collaboration], arXiv:astro-ph/0603449.
[15] J. J. Halliwell, Phys. Lett. B 185, 341 (1987); B. Ratra and P. J. E. Peebles, Phys. Rev. D
37, 3406 (1988); P. G. Ferreira and M. Joyce, Phys. Rev. Lett. 79, 4740 (1997) [arXiv:astro-
ph/9707286]; P. G. Ferreira and M. Joyce, Phys. Rev. D 58, 023503 (1998) [arXiv:astro-
ph/9711102]; E. J. Copeland, A. R. Liddle and D. Wands, Phys. Rev. D 57, 4686 (1998)
[arXiv:gr-qc/9711068].
[16] D. Comelli, M. Pietroni and A. Riotto, Phys. Lett. B 571, 115 (2003) [arXiv:hep-ph/0302080];
U. Franca and R. Rosenfeld, Phys. Rev. D 69, 063517 (2004) [arXiv:astro-ph/0308149].
[17] R. M. Wald, “General Relativity,” (University of Chicago Press, Chicago, 1984).
[18] A similar expression was derived by J. Vinet and J. M. Cline, Phys. Rev. D 71, 064011 (2005)
[arXiv:hep-th/0501098].
[19] J. Preskill, M. B. Wise and F. Wilczek, Phys. Lett. B 120, 127 (1983).
[20] M. B. Hoffman, arXiv:astro-ph/0307350.
[21] This assumption will be justified a posteriori when we find that ρX ≪ ρmat.
[22] E. J. Copeland, A. R. Liddle and D. Wands, op. cit. in Ref. [15].
[23] L. Anchordoqui and H. Goldberg, Phys. Rev. D 68, 083513 (2003) [arXiv:hep-ph/0306084].
[24] U. J. Lopes Franca and R. Rosenfeld, JHEP 0210, 015 (2002) [arXiv:astro-ph/0206194].
[25] P. J. Steinhardt, L. M. Wang and I. Zlatev, Phys. Rev. D 59, 123504 (1999) [arXiv:astro-
ph/9812313].
[26] S. M. Carroll, Phys. Rev. Lett. 81, 3067 (1998) [arXiv:astro-ph/9806099].
[27] Y. Aghababaie, C. P. Burgess, S. L. Parameswaran and F. Quevedo, op. cit. in Ref. [15].
[28] This comes from a behavior Y ≃ −
2u (compatible with the equations of motion), when
combined with the e−3u in Eq. (30).
[29] Before proceeding, we remind the reader that the requirements for preserving a fraction of
SUSY in spherical compactifications to four dimensions imply b2ξ = 1, corresponding to
the winding number n = ±1 for the monopole configuration. In terms of the Bohm-Aharonov
argument on phases, this is consistent with usual requirement of quantization of the monopole.
The SUSY breaking has associated a non-quantized flux of the field supporting the two sphere.
In other words, if we perform a Bohm-Aharonov-like interference experiment, some phase
change will be detected by a U(1) charged particle that circulates around the associated Dirac
string. The quantization of fluxes implied the unobservability of such a phase, and so in our
cosmological set up, the parallel transport of a fermion will be slightly path dependent. One
possibility is that the non-compact ρ coordinate (in the uplift to ten dimensions, see Sec. III) is
the direction in which the Dirac string exists. Then the cutoff necessary on the physics at large
ρ will introduce a slight (time-dependent) perturbation on the flux quantization condition. We
are engaged at present in exploring possibilities along this line.
[30] This is because the definition of the propagator with proper residue for correct Feynman rules
in perturbation theory, and therefore also the couplings, needs to be consistent with the form
of the Hamiltonian H = Σ_k ω(k) a†_k a_k, with [a, a†] = 1. This in turn implies that the kinetic term
in the Lagrangian has the canonical form, (1/4)f̂²_ab, with the usual expansion of the vector
field a_a.
|
0704.0929 | Noncommutative Electromagnetism As A Large N Gauge Theory | HU-EP-07/12
arXiv:0704.0929
Noncommutative Electromagnetism As A Large N Gauge Theory
Hyun Seok Yang ∗
Institut für Physik, Humboldt Universität zu Berlin
Newtonstraße 15, D-12489 Berlin, Germany
ABSTRACT
We map noncommutative (NC) U(1) gauge theory on IR^d_C × IR^{2n}_NC to U(N → ∞) Yang-Mills
theory on IR^d_C, where IR^d_C is a d-dimensional commutative spacetime while IR^{2n}_NC is a
2n-dimensional NC space. The resulting U(N) Yang-Mills theory on IR^d_C is equivalent to that
obtained by the dimensional reduction of (d+2n)-dimensional U(N) Yang-Mills theory onto IR^d_C.
We show that the gauge-Higgs system (A_μ, Φ^a) in the U(N → ∞) Yang-Mills theory on IR^d_C
leads to an emergent geometry in the (d+2n)-dimensional spacetime whose metric was determined
by Ward a long time ago. In particular, the 10-dimensional gravity for d = 4 and n = 3 corresponds
to the emergent geometry arising from the 4-dimensional N = 4 vector multiplet in the AdS/CFT
duality. We further elucidate the emergent gravity by showing that the gauge-Higgs system
(A_μ, Φ^a) in half-BPS configurations describes self-dual Einstein gravity.
PACS numbers: 11.10.Nx, 02.40.Gh, 04.50.+h
Keywords: Noncommutative Gauge Theory, Large N Gauge Theory, Emergent Gravity
October 31, 2018
∗[email protected]
1 Introduction
A noncommutative (NC) spacetime M is obtained by introducing a symplectic structure B = (1/2) B_ab dy^a ∧
dy^b and then by quantizing the spacetime with its Poisson structure θ^{ab} ≡ (B^{−1})^{ab}, treating it as a
quantum phase space. That is, for f, g ∈ C^∞(M),
{f, g} = θ^{ab} (∂f/∂y^a)(∂g/∂y^b)   ⇒   −i[f̂, ĝ].   (1.1)
According to the Weyl-Moyal map [1, 2], the NC algebra of operators is equivalent to the deformed
algebra of functions defined by the Moyal ⋆-product, i.e.,
f̂ · ĝ ≅ (f ⋆ g)(y) = exp( (i/2) θ^{ab} ∂^y_a ∂^z_b ) f(y) g(z) |_{y=z}.   (1.2)
Through the quantization rules (1.1) and (1.2), one can define NC IR2n by the following commutation
relation
[y^a, y^b]_⋆ = iθ^{ab}.   (1.3)
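As a quick consistency check of Eq. (1.3), the following symbolic computation (an illustrative sketch, not part of the original text) evaluates the ⋆-commutator of the coordinates with the Moyal product (1.2) truncated at first order in θ, which is exact for linear functions:

```python
# Check of Eq. (1.3) with sympy: truncating the Moyal product (1.2) at first
# order in theta, the star-commutator of the coordinates gives i*theta^{ab}.
import sympy as sp

y1, y2, theta = sp.symbols('y1 y2 theta', real=True)

def star(f, g):
    # (f*g)(y) = f g + (i/2) theta^{ab} d_a f d_b g + O(theta^2), with theta^{12} = -theta^{21} = theta
    return sp.expand(f * g + sp.I * theta / 2 * (sp.diff(f, y1) * sp.diff(g, y2)
                                                 - sp.diff(f, y2) * sp.diff(g, y1)))

commutator = sp.simplify(star(y1, y2) - star(y2, y1))
print(commutator)   # -> I*theta, i.e. [y^1, y^2]_star = i theta^{12}
```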
It is well-known [2, 3] that a NC field theory can be identified basically with a matrix model or a
large N field theory. This claim is based on the following fact. Let us consider a NC IR2 for simplicity,
[x, y] = iθ, (1.4)
although the same argument equally holds for a NC IR^{2n}, as will be shown later. After scaling the
coordinates x → √θ x, y → √θ y, the NC plane (1.4) becomes the Heisenberg algebra of a harmonic
oscillator
[a, a†] = 1.   (1.5)
It is a well-known fact from quantum mechanics that the representation space of NC IR2 is given by
an infinite-dimensional, separable Hilbert space H = {|n〉, n = 0, 1, · · · } which is orthonormal, i.e.,
〈n|m〉 = δ_nm, and complete, i.e., Σ^∞_{n=0} |n〉〈n| = 1. Therefore a scalar field φ̂ ∈ A_θ on the NC plane
(1.4) can be expanded in terms of the complete operator basis
Aθ = {|m〉〈n|, n,m = 0, 1, · · · }, (1.6)
that is,
φ̂(x, y) = Σ^∞_{m,n=0} M_mn |m〉〈n|.   (1.7)
One can regard Mmn in (1.7) as components of an N × N matrix M in the N → ∞ limit. More
generally one may replace NC IR2 by a Riemann surface Σg of genus g which can be quantized via
deformation quantization [4]. For a compact Riemann surface Σg with finite area A(Σg), the matrix
representation can be finite-dimensional, e.g., for a fuzzy sphere [5]. In this case, A(Σg) ∼ θN but
we simply take the limit N → ∞. We then arrive at the well-known relation:
Scalar field on NC IR2 (or Σg) ⇐⇒ N ×N matrix at N → ∞. (1.8)
If φ̂ is a real scalar field, then M should be a Hermitean matrix. We will see that the above relation
(1.8) has far-reaching applications to string theory.
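The correspondence (1.8) can be made concrete with a small numerical illustration (a sketch added here for orientation; the truncation size N = 6 is arbitrary). Truncating the oscillator basis to N states gives finite matrices for a and a†, with the expected finite-size artifact in the last diagonal entry of [a, a†], and a real scalar field corresponds to a Hermitian matrix M:

```python
# Illustration of the correspondence (1.8): a field on NC R^2 becomes an N x N
# matrix in the harmonic-oscillator basis |n>.  N is finite here, so [a, a^dag]
# equals the identity only up to the usual truncation artifact in the corner.
import numpy as np

N = 6
a = np.diag(np.sqrt(np.arange(1, N)), k=1)      # annihilation operator, a|n> = sqrt(n)|n-1>
adag = a.conj().T

comm = a @ adag - adag @ a
print(np.round(np.diag(comm), 3))               # [1, 1, ..., 1, -(N-1)]: identity except the corner

# A real scalar field phi-hat = sum_{mn} M_mn |m><n| corresponds to a Hermitian matrix M:
rng = np.random.default_rng(0)
M = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
M = (M + M.conj().T) / 2
print(np.allclose(M, M.conj().T))               # True
```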
The matrix representation (1.7) clarifies why NC U(1) gauge theory is a large N gauge theory.
An important point is that the NC gauge symmetry acts as a unitary transformation on H for a field
φ̂ ∈ Aθ in the adjoint representation of U(1) gauge group
φ̂ → Uφ̂ U †. (1.9)
This NC gauge symmetry Ucpt(H) is so large that Ucpt(H) ⊃ U(N) (N → ∞) [6, 7], which is
rather obvious in the matrix basis (1.6). Therefore the NC gauge theory is essentially a large N gauge
theory. It becomes more precise on a NC torus through the Morita equivalence where NC U(1) gauge
theory with rational θ = M/N is equivalent to an ordinary U(N) gauge theory [8]. For this reason,
it is not so surprising that NC electromagnetism shares essential properties appearing in a large N
gauge theory such as SU(N → ∞) Yang-Mills theory or matrix models.
It is well-known [9] that 1/N expansion of any large N gauge theory using the double line for-
malism reveals a picture of a topological expansion in terms of surfaces of different genus, which
can be interpreted in terms of closed string variables as the genus expansion of string amplitudes. This
has underlain the idea that large N gauge theories have a dual description in terms of gravita-
tional theories in higher dimensions. For example, BFSS matrix model [10], IKKT matrix model [11]
and AdS/CFT duality [12]. From the perspective (1.8), the 1/N expansion corresponds to the NC
deformation in terms of θ/A(Σg).
All these arguments imply that there exists a solid map between a NC gauge theory and a large N
gauge theory. In this work we will find a sound realization of this idea. It turns out that the emergent
gravity recently found in [13, 14, 15, 16] can be elegantly understood in this framework. Therefore
the correspondence between NC field theory and gravity [3] is certainly akin to the gauge/gravity
duality in large N limit [10, 11, 12].
This paper is organized as follows. In Section 2 we map NC U(1) gauge theory on IRdC × IR2nNC to
U(N → ∞) Yang-Mills theory on IRdC , where IRdC is a d-dimensional commutative spacetime while
IR2nNC is a 2n-dimensional NC space. The resulting U(N) Yang-Mills theory on IR
C is equivalent to
that obtained by the dimensional reduction of (d + 2n)-dimensional U(N) Yang-Mills theory onto
IRdC . In Section 3, we show that the gauge-Higgs system (Aµ,Φ
a) in the U(N → ∞) Yang-Mills
theory on IRdC leads to an emergent geometry in the (d + 2n)-dimensional spacetime whose metric
was determined by Ward [17] a long time ago. In particular, the 10-dimensional gravity for d = 4 and
n = 3 corresponds to the emergent geometry arising from the 4-dimensional N = 4 vector multiplet
in the AdS/CFT duality [12]. We further elucidate the emergent gravity in Section 4 by showing that
the gauge-Higgs system (Aµ,Φ
a) in half-BPS configurations describes self-dual Einstein gravity. A
notable point is that the emergent geometry arising from the gauge-Higgs system (Aµ,Φ
a) is closely
related to the bubbling geometry in AdS space found in [18]. Finally, in Section 5, we discuss several
interesting issues that naturally arise from our construction.
2 A Large N Gauge Theory From NC U(1) Gauge Theory
We will consider a NC U(1) gauge theory on IRD = IRdC × IR2nNC , where D-dimensional coordinates
XM (M = 1, · · · , D) are decomposed into d-dimensional commutative ones, denoted as zµ (µ =
1, · · · , d) and 2n-dimensional NC ones, denoted as ya (a = 1, · · · , 2n), satisfying the relation (1.3).
We assume the metric on IR^D = IR^d_C × IR^{2n}_NC as the following form 1
ds² = G_MN dX^M dX^N = g_μν dz^μ dz^ν + G_ab dy^a dy^b.   (2.1)
The action for D-dimensional NC U(1) gauge theory is given by
S = (1/4g²_YM) ∫ d^D X √(det G) G^{MP} G^{NQ} (F_MN + Φ_MN) ⋆ (F_PQ + Φ_PQ),   (2.2)
where the NC field strength FMN is defined by
FMN = ∂MAN − ∂NAM − i[AM , AN ]⋆. (2.3)
The constant two-form Φ will be taken to be either 0 or −B = −(1/2) B_ab dy^a ∧ dy^b with rank(B) = 2n.
Here we will use the background independent prescription [8, 19] where the open string metric
G_ab, the noncommutativity θ^{ab} and the open string coupling G_s are determined by
θ^{ab} = (1/B)^{ab},   G_ab = −κ² (B g^{−1} B)_ab,   G_s = g_s √(det′(κ B g^{−1})),   (2.4)
with κ ≡ 2πα′. The closed string metric gab in Eq.(2.4) is independent of gµν in Eq.(2.1) and det′
denotes a determinant taken along NC directions only in IR2nNC . In terms of these parameters, the
couplings are related by
, (2.5)
det′G
gs|Pfθ|
. (2.6)
An important fact is that translations in NC directions are basically gauge transformations, i.e.,
eik·y ⋆ f(z, y) ⋆ e−ik·y = f(z, y+ θ · k) for any f(z, y) ∈ C∞(M). This means that translations along
NC directions act as inner derivations of the NC algebra Aθ:
[y^a, f]_⋆ = iθ^{ab} ∂_b f.   (2.7)
1 Here we can take the d-dimensional spacetime metric gµν with either Lorentzian or Euclidean signature since the
signature is inconsequential in our most discussions. But we implicitly assume the Euclidean signature for some other
discussions.
Using this relation, each component of F_MN can be written in the following forms
F_μν = i[D_μ, D_ν]_⋆,   (2.8)
F_μa = θ^{−1}_ab [D_μ, x^b]_⋆ = −F_aμ,   (2.9)
F_ab = −i θ^{−1}_ac θ^{−1}_bd ([x^c, x^d]_⋆ − iθ^{cd}),   (2.10)
where the covariant derivative D_μ and the covariant coordinate x^a are, respectively, defined by
D_μ ≡ ∂_μ − iA_μ,   (2.11)
x^a ≡ y^a + θ^{ab} A_b.   (2.12)
Collecting all these facts, one gets the following expression for the action (2.2) with Φ = −B 2
(2πκ)
detgµνTrH
gµλgνσFµν ⋆ Fλσ +
gµνgabDµΦ
a ⋆ DνΦ
gacgbd[Φ
a,Φb]⋆ ⋆ [Φ
c,Φd]⋆
, (2.13)
where we defined adjoint scalar fields Φ^a ≡ x^a/κ of mass dimension, and
Tr_H ≡ ∫ d^{2n}y / ((2π)^n |Pfθ|).   (2.14)
Note that the number of the adjoint scalar fields is equal to the rank of θab. The resulting action (2.13)
is not new but rather well-known in NC field theory, e.g., see [19, 20].
The NC algebra (1.3) is equivalent to the Heisenberg algebra of an n-dimensional harmonic os-
cillator in a frame where θab has a canonical form:
[a_i, a†_j] = δ_ij,   (i, j = 1, · · · , n).   (2.15)
The NC space (1.3) is therefore represented by the infinite-dimensional Hilbert space H = {|~m〉 ≡
|m1, · · · , mn〉;mi = 0, 1, · · · , N → ∞ for i = 1, · · · , n} whose set of eigenvalues forms an n-
dimensional positive integer lattice. A set of operators in H
Aθ = {|~m〉〈~n|;mi, ni = 0, 1, · · · , N → ∞ for i = 1, · · · , n} (2.16)
can be identified with the generators of a complete operator basis and so any NC field φ(z, y) ∈ Aθ
can be expanded in the basis (2.16) as follows,
φ(z, y) = Σ_{~m,~n} Ω_{~m,~n}(z) |~m〉〈~n|.   (2.17)
2If Φ = 0 in Eq.(2.2), the only change in Eq.(2.13) is [Φa,Φb] → [Φa,Φb]− i
Now we use the ‘Cantor diagonal method’ to put the n-dimensional positive integer lattice in H
into a one-to-one correspondence with the infinite set of natural numbers (i.e., 1-dimensional positive
integer lattice): |~m〉 ↔ |i〉, i = 1, · · · , N → ∞. In this one-dimensional basis, Eq.(2.17) is relabeled
as the following form
φ(z, y) = Σ_{i,j} Ω_ij(z) |i〉〈j|.   (2.18)
Following the motivation discussed in the Introduction, we regard Ωij(z) in (2.18) as components of
an N ×N matrix Ω in the N → ∞ limit, which also depend on zµ, the coordinates of IRdC . If the field
φ(z, y) is real, which is the case for the gauge-Higgs system (A_μ, Φ^a) in the action (2.13), the matrix
Ω should be Hermitian, but not necessarily traceless. So the N × N matrix Ω(z) can be regarded as
a field in U(N → ∞) gauge theory on d-dimensional commutative space IRdC , where TrH in (2.14)
is identified with the matrix trace over the basis (2.18). All the dependence on NC coordinates is now
encoded into N ×N matrices and the noncommutativity in terms of star product is transferred to the
matrix product.
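For definiteness, the 'Cantor diagonal' relabeling |~m〉 ↔ |i〉 can be implemented explicitly; the following sketch (an illustration for n = 2, not taken from the original text) enumerates the occupation-number lattice along anti-diagonals and checks that the map is a bijection:

```python
# A concrete version of the 'Cantor diagonal' relabeling used in Eq. (2.18):
# enumerate the 2-dimensional occupation-number lattice (m1, m2) along
# anti-diagonals, giving a bijection (m1, m2) <-> i.
def lattice_to_index(m1, m2):
    """Index of the state |m1, m2>, m1, m2 >= 0, counting along anti-diagonals."""
    d = m1 + m2
    return d * (d + 1) // 2 + m2          # all states with m1 + m2 < d come first

def index_to_lattice(i):
    """Inverse map: recover (m1, m2) from the single label i."""
    d = 0
    while (d + 1) * (d + 2) // 2 <= i:    # find the anti-diagonal containing i
        d += 1
    m2 = i - d * (d + 1) // 2
    return (d - m2, m2)

# round-trip check for the first few states
for i in range(10):
    m = index_to_lattice(i)
    assert lattice_to_index(*m) == i
    print(i, m)
```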
Adopting the matrix representation (2.18), the D-dimensional NC U(1) gauge theory (2.2) is
mapped to the U(N → ∞) Yang-Mills theory on d-dimensional commutative space IRdC . One can
see that the resulting U(N) Yang-Mills theory on IRdC in Eq.(2.13) is equivalent to that obtained by
the dimensional reduction of (d + 2n)-dimensional U(N) Yang-Mills theory onto IRdC . It might be
emphasized that the map between the D-dimensional NC U(1) gauge theory and the d-dimensional
U(N → ∞) Yang-Mills theory is “exact” and thus the two theories should describe a completely
equivalent physics. For example, we can recover the D-dimensional NC U(1) gauge theory on IRdC ×
IR2nNC from the d-dimensional U(N → ∞) Yang-Mills theory on IRdC by recalling that the number of
adjoint Higgs fields in the U(N) Yang-Mills theory is equal to the dimension of the extra NC space
IR2nNC and by applying the dictionary in Eqs.(2.8)-(2.10).
One can introduce linear algebraic conditions of D-dimensional field strengths FMN as a higher
dimensional analogue of 4-dimensional self-duality equations such that the Yang-Mills equations in
the action (2.2) follow automatically. These are of the following type [21, 22]
TMNPQFPQ = λFMN (2.19)
with a constant 4-form tensor TMNPQ. The relation (2.19) clearly implies via the Bianchi identity
D[MFPQ] = 0 that the Yang-Mills equations are satisfied provided λ is nonzero. For D > 4, the 4-
form tensor TMNPQ cannot be invariant under SO(D) transformations and the equation (2.19) breaks
the rotational symmetry to a subgroup H ⊂ SO(D). Thus the resulting first order equations can be
classified by the unbroken symmetry H under which TMNPQ remain invariant [21, 22]. It was also
shown [23] that the first order linear equations above are closely related to supersymmetric states, i.e.,
BPS states in higher dimensional Yang-Mills theories.
The equivalence between D- and d-dimensional gauge theories can be effectively used to clas-
sify classical solutions in the d-dimensional U(N) Yang-Mills theory (2.13). The group theoretical
classification [21], integrability condition [22] and BPS states [23] for the D-dimensional first-order
equations (2.19) can be directly translated into the properties of the gauge-Higgs system (Aµ,Φ
a) in
the d-dimensional U(N) gauge theory (2.13). These classifications will also be useful to classify the
geometries emerging from the gauge-Higgs system (Aµ,Φ
a) in the U(N → ∞) Yang-Mills theory
(2.13), which will be discussed in the next section. Unfortunately, the D = 10 case is missing in
[21, 22, 23] which is the most interesting case (d = 4 and n = 3) related to the AdS/CFT duality.
3 Emergent Geometry From NC Gauge Theory
Let us first recapitulate the result in [17]. It turns out that the Ward’s construction perfectly fits with
the emergent geometry arising from the gauge-Higgs system (Aµ,Φ
a) in the U(N → ∞) Yang-Mills
theory (2.13). Suppose that we have gauge fields on IRdC taking values in the Lie algebra of volume-
preserving vector fields on an m-dimensional manifold M [24, 25]. In other words, the gauge group
G = SDiff(M). The gauge covariant derivative is given by Eq.(2.11), but the Aµ(z) are now vector
fields on M , also depending on zµ ∈ IRdC . The other ingredient in [17] consists of m Higgs fields
Φ^a(z) ∈ sdiff(M), the Lie algebra of SDiff(M), for a = 1, · · · , m. The idea [24, 25] is to specify that
f^{−1}(D₁, · · · , D_d, Φ₁, · · · , Φ_m)   (3.1)
forms an orthonormal frame and hence defines a metric on IR^d_C × M with a volume form ν = d^d z ∧ ω.
Here f is a scalar, a conformal factor, defined by
f 2 = ω(Φ1, · · · ,Φm). (3.2)
The result in [24, 25] immediately implies that the gauge-Higgs system (Aµ,Φ
a) leads to a metric
on the (d + m)-dimensional space IRdC × M . A local coordinate expression for this metric is easily
obtained from Eq.(3.1). Let ya be local coordinates on M . So Aµ(z) and Φa(z) have the form
A_μ(z) = A^a_μ(z, y) ∂/∂y^a,   Φ_a(z) = Φ^b_a(z, y) ∂/∂y^b,   (3.3)
where the y-dependence, originally hidden in the Lie algebra of G = SDiff(M), now explicitly
appears in the coefficients A^a_μ and Φ^b_a. Let V^a_b denote the inverse of the m×m matrix Φ^b_a, and let A^a
denote the 1-form A^a_μ dz^μ. Then the metric is [17]
ds² = f² δ_μν dz^μ dz^ν + f² δ_ab V^a_c V^b_d (dy^c − A^c)(dy^d − A^d).   (3.4)
It will be shown later that the choice of the volume form ω for the conformal factor (3.2) corresponds
to that of a particular conformally flat background although we mostly assume a flat volume form,
i.e., ω ∼ dy1 ∧ · · · ∧ dy2n, unless explicitly specified.
The gauge and Higgs fields in Eq.(3.3) are not arbitrary but must be subject to the Yang-Mills
equations, for example, derived from the action (2.13), which are, in most cases, not completely
integrable. Hence to completely determine the geometric structure emerging from the gauge-Higgs
system (Aµ,Φ
a) is as much difficult as solving the Einstein equations in general. But the self-dual
Yang-Mills equations in four dimensions or Eq.(2.19) in general are, in some sense, “completely
solvable”. Thus the metric (3.4) for these cases might be completely determined. Let us discuss two
notable examples. See [17] for more examples describing 4-dimensional self-dual Einstein gravity.
• Case d = 0, m = 4: This case was dealt with in detail in [25, 26, 27]. It was proved that the
self-dual Einstein equations are equivalent to the self-duality equations
[Φ_a, Φ_b] = ± (1/2) ε_abcd [Φ_c, Φ_d]   (3.5)
on the four Higgs fields Φa. Furthermore reinterpreting n of the Φa’s as Dµ leads to the case d =
n, m = 4− n. In Section 5, we will discuss the physical meaning about the interpretation Φa 7→ Dµ.
• Case d = 3, m = 1: Here M is one-dimensional, so the Lie algebra of vector fields on M is the
Virasoro algebra. Thus Aµ and Φ are now real-valued vector fields on M which must be independent
of y to preserve the volume form ν = d3z ∧ dy [27]. The metric (3.4) reduces to
ds2 = Φd~z · d~z + Φ−1(dy −Aµdzµ)2 (3.6)
and has a Killing vector ∂/∂y. In this case, the self-duality equations (3.5) reduce to the Abelian
Bogomol’nyi equations, ∇× ~A = ∇Φ, and the metric (3.6) describes a gravitational instanton [28].
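Note that the Bogomol'nyi equation ∇ × A⃗ = ∇Φ forces Φ to be harmonic, since the divergence of a curl vanishes. A quick symbolic check for the illustrative single-center choice Φ = 1/(2r) (chosen here for concreteness, not taken from the text), which leads to a Taub-NUT-like metric of the form (3.6), is:

```python
# The Bogomol'nyi equation curl(A) = grad(Phi) implies that Phi is harmonic,
# since div(curl(A)) = 0.  Quick sympy check for the illustrative choice
# Phi = 1/(2r) (not taken from the text).
import sympy as sp

x, y, z = sp.symbols('x y z', real=True, nonzero=True)
r = sp.sqrt(x**2 + y**2 + z**2)
Phi = 1 / (2 * r)

laplacian = sum(sp.diff(Phi, v, 2) for v in (x, y, z))
print(sp.simplify(laplacian))   # -> 0 away from the origin
```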
Recently we showed in [15, 16] for the d = 0 and m = 4 case that self-dual electromagnetism in
NC spacetime is equivalent to self-dual Einstein gravity and the metric is precisely given by Eq.(3.4).
A key observation [16] was that the self-dual system (3.5) defined by vector fields on M can be
derived from the action (2.2) or (2.13) for slowly varying fields, where all ⋆-commutators between
NC fields are approximated by the Poisson bracket (1.1). An important point in NC geometry is
that the adjoint action of (covariant) coordinates with respect to star product can be identified with
(generalized) vector fields on some (curved) manifold [15, 16], as the trivial case was already used
in Eq.(2.7). In the end, a D-dimensional manifold described by the metric (3.4) corresponds to an
emergent geometry arising from the gauge-Higgs system in Eq.(3.3). Now we will show in a general
context how the nontrivial geometry (3.4) emerges from the gauge-Higgs system (Aµ,Φ
a) in the
action (2.13).
Let us collectively denote the covariant derivatives Dµ in (2.11) and the Higgs fields Da ≡
−iκBabΦb = −i(Babyb +Aa) in (2.12) as DA(z, y). Therefore DA(z, y) transform covariantly under
NC U(1) gauge transformations
DA(z, y) → g(z, y) ⋆ DA(z, y) ⋆ g−1(z, y). (3.7)
Define the adjoint action of DA(z, y) with respect to star product acting on any NC field f(z, y) ∈ Aθ:
adDA[f ] ≡ [DA, f ]⋆. (3.8)
Then it is easy to see [16] that the above adjoint action satisfies the Leibniz rule and the Jacobi
identity, i.e.,
[DA, f ⋆ g]⋆ = [DA, f ]⋆ ⋆ g + f ⋆ [DA, g]⋆, (3.9)
[DA, [DB, f ]⋆]⋆ − [DB, [DA, f ]⋆]⋆ = [[DA, DB]⋆, f ]⋆. (3.10)
These properties imply that adDA can be identified with ‘generalized’ vector fields or Lie deriva-
tives acting on the algebra Aθ, which can be viewed as a gauge covariant generalization of the inner
derivation (2.7). Note that the generalized vector field in Eq.(3.8) is a kind of general higher or-
der differential operators in [29]. Indeed it turns out that they constitute a generalization of volume
preserving diffeomorphisms to ⋆-differential operators acting on Aθ (see Eqs.(4.1) and (4.2) in [7]).
In particular, the generalized vector fields in Eq.(3.8) reduce to usual vector fields in the commu-
tative, i.e. O(θ), limit:
ad_{D_A}[f] = iθ^{ab} (∂D_A/∂y^a)(∂f/∂y^b) + · · · = i{D_A, f} + O(θ³) ≡ V^a_A(z, y) ∂_a f(z, y) + O(θ³),   (3.11)
where we defined [∂µ, f ]⋆ = ∂µf . Note that the vector fields VA(z, y) = V
A(z, y)∂a are exactly of
the same form as Eq.(3.3) and belong to the Lie algebra of volume preserving diffeomorphisms, as
precisely required in the Ward construction (3.1), since they are all divergence free, i.e., ∂aV
A = 0.
Thus the vector fields f−1VA(z, y) for A = 1, · · · , D can be identified with the orthonormal frame
(3.1) defining the metric (3.4). It should be emphasized that the emergent gravity (3.4) arises from a
general, not necessarily self-dual, gauge-Higgs system (Aµ,Φ
a) in the action (2.13).
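The statement that the leading-order vector fields V_A belong to the volume-preserving diffeomorphisms can be checked directly from Eq. (3.11). The following symbolic sketch (illustrative, for a single NC plane with θ¹² = −θ²¹ = θ) shows that V_A^a = θ^{ab}∂_b D_A is divergence free for an arbitrary function D_A(y):

```python
# Sketch of Eq. (3.11): to leading order in theta the adjoint action of D_A is the
# Hamiltonian vector field V_A^a = theta^{ab} d_b D_A, which is divergence-free.
import sympy as sp

y1, y2, theta = sp.symbols('y1 y2 theta', real=True)
D = sp.Function('D')(y1, y2)              # an arbitrary 'covariant Hamiltonian' D_A(y)

# theta^{12} = -theta^{21} = theta
V1 = theta * sp.diff(D, y2)               # V^1 = theta^{12} d_2 D
V2 = -theta * sp.diff(D, y1)              # V^2 = theta^{21} d_1 D

divergence = sp.simplify(sp.diff(V1, y1) + sp.diff(V2, y2))
print(divergence)                         # -> 0: V_A preserves the volume form dy^1 ^ dy^2
```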
Note that
[DA, DB]⋆ = −i(FAB − BAB) (3.12)
where the NC field strength FAB is given by Eq.(2.3). Then the Jacobi identity (3.10) leads to the
following identity for a constant BAB
ad[DA,DB]⋆ = −i adFAB = [adDA, adDB ]⋆. (3.13)
The inner derivation (3.11) in commutative limit is reduced to the well-known map C∞(M) →
TM : f 7→ Xf between the Poisson algebra (C∞(M), {·, ·}) and vector fields in TM defined by
Xf(g) = {g, f} for any smooth function g ∈ C∞(M). The Jacobi identity for the Poisson algebra
(C∞(M), {·, ·}) then leads to the Lie algebra homomorphism
X{f,g} = −[Xf , Xg] (3.14)
where the right-hand side is defined by the Lie bracket between Hamiltonian vector fields. One can
check by identifying f = DA and g = DB that the Lie algebra homomorphism (3.14) correspond
to the commutative limit of the Jacobi identity (3.10). That is, one can deduce from Eq.(3.14) the
following identity
XFAB = −[VA, VB] (3.15)
using the relation {DA, DB} = −FAB +BAB and XDA = iVA.
Using the homomorphism (3.15), one can translate the generalized self-duality equation (2.19)
into the structure equation between vector fields
TABCDFCD = λFAB ⇔
TABCD[VC , VD] = λ[VA, VB]. (3.16)
Therefore a D-dimensional NC gauge field configuration satisfying the first-order system defined by
the left-hand side of Eq.(3.16) is isomorphic to a D-dimensional emergent geometry defined by the
right-hand side of Eq.(3.16) whose metric is given by Eq.(3.4). For example, in four dimensions
where TABCD = εABCD and λ = ±1, the right-hand side of Eq.(3.16) is precisely equal to Eq.(3.5)
describing gravitational instantons [24, 25, 26, 27]. This proves, as first shown in [15, 16], that self-
dual NC electromagnetism is equivalent to self-dual Einstein gravity. Note that the Einstein gravity
described by the metric (3.4) arises from the commutative, i.e., O(θ) limit. Therefore it is natural to
expect that the higher order differential operators in Eq.(3.11), e.g. O(θ3), give rise to higher order
gravity [16]. We will further discuss the derivative correction in Section 5.
The 10-dimensional metric (3.4) for d = 4 and n = 3 (m = 6) is particularly interesting since it
corresponds to an emergent geometry arising from the 4-dimensional N = 4 vector multiplet in the
AdS/CFT duality. Note that the gravity in the AdS/CFT duality is an emergent phenomenon arising
from particle interactions in a gravityless, lower-dimensional spacetime. As a famous example, the
type IIB supergravity (or more generally the type IIB superstring theory) on AdS5 × S5 is emergent
from the 4-dimensional N = 4 supersymmetric U(N) Yang-Mills theory [12].3 In our construction,
N × N matrices are mapped to vector fields on some manifold M , so the vector fields in Eq.(3.3)
correspond to master fields of large N matrices [30], in other words, (Aµ,Φ
a) ∼ N2. According to
the AdS/CFT duality, we thus expect that the metric (3.4) describes a deformed geometry induced
by excitations of the gauge and Higgs fields in the action (2.13). For example, we may look for
1/2 BPS geometries in type IIB supergravity that arise from chiral primaries of N = 4 super Yang-
Mills theory [18]. Recently this kind of BPS geometries, the so-called bubbling geometry in AdS
space, with a particular isometry was completely determined in [18], where the AdS5 × S5 geometry
emerges from the simplest and most symmetric configuration. In next section we will illustrate such
kind of bubbling geometry described by the metric (3.4) by considering self-dual configurations in
the gauge-Higgs system.
3The overall U(1) = U(N)/SU(N) factor actually corresponds to the overall position of D3-branes and may be
ignored when considering dynamics on the branes, thereby leaving only an SU(N) gauge symmetry.
4 Self-dual Einstein Gravity From Large N Gauge Theory
In the previous section we showed that the Ward’s metric (3.4) naturally emerges from the D-dimensional
NC U(1) gauge fields AM on IR
C × IR2nNC or equivalently the gauge-Higgs system (Aµ,Φa) in d-
dimensional U(N) gauge theory on IRdC . So, if an explicit solution for AM or (Aµ,Φ
a) is known, the
corresponding metric (3.4) is, in principle, exactly determined. However, it is extremely difficult to
get a general solution by solving the equations of motion for the action (2.2) or (2.13). Instead we
may try to solve a more simpler system such as the first-order equations (2.19), which are morally
believed to be ‘exactly solvable’ in most cases. In this section we will further elucidate the emer-
gent gravity arising from gauge fields by showing that the gauge-Higgs system (Aµ,Φ
a) in half-BPS
configurations describes self-dual Einstein gravity. Since the case for D = 4 and n = 2 has been
extensively discussed in [14, 15, 16], we will consider the other cases for D ≥ 4. For simplicity,
the metrics in the action (2.13) are supposed to be the form already used in Eq.(3.4); gµν = δµν and
gab = δab.
Note that the action (2.2) or (2.13) contains a background B, due to a uniform condensation of
gauge fields in a vacuum. But we will require a rapid fall-off of fluctuating fields around the back-
ground at infinity in IRD as usual.4 Our boundary condition is FMN → 0 at infinity. Eq.(2.10) then
requires that [xa, xb]⋆ → iθab at |y| → ∞. Thus the coordinates ya in (2.12) are vacuum expectation
values of xa characterizing the uniform condensation of gauge fields [16]. This condensation of the
B-fields endows the ⋆-algebra Aθ with a remarkable property that translations act as an inner auto-
morphism of the algebra Aθ as shown in Eq.(2.7). But the gauge symmetry on NC spacetime requires
the covariant coordinates xa in Eq.(2.12) instead of ya [31]. The inner derivation adDa in Eq.(3.8) is
then a ‘dual element’ related to the coordinate xa. This is also true for the covariant derivatives Dµ
in (2.11) since they are related to Da = −iBabxb by the ‘matrix T -duality’; Da 7→ Dµ, as will be
explained in Section 5.
It is very instructive to take an analogy with quantum mechanics. Quantum mechanical time
evolution in Heisenberg picture is defined as an inner automorphism of the Weyl algebra obtained
from a quantum phase space
f(t) = eiHtf(0)e−iHt
and its evolution equation is of the form (3.8)
df(t)/dt = i[H, f(t)].
Here we liberally interpret DA(z, y) in Eq.(3.8) as ‘multi-Hamiltonians’ determining the spacetime
evolution in IRD. Then it is quite natural to interpret Eq.(3.8) as a spacetime evolution equation
determined by the “covariant Hamiltonians” DA(z, y).
4In the matrix representation (2.18), this means that matrix components Ωij(z) for the fluctuations are rapidly vanish-
ing for i, j = N → ∞ as well as for |z| → ∞, since roughly N ∼ ~y · ~y.
Let us be more precise about the meaning of the spacetime evolution. If the Hamiltonian is slightly
deformed, H → H + δH , the time evolution of a system is correspondingly changed. Likewise, the
fluctuation of gauge fields AM or (Aµ,Φ
a) around the background specified by ya’s changes DA(z, y),
which in turn induces a deformation of the background spacetime according to Eq.(3.11). This is
precisely the picture about the emergent geometry in [15, 16] and also a dependable interpretation of
the Ward’s geometry (3.4). A consistent picture related to the AdS/CFT duality was also observed in
the last of Section 3.
For the above reason, all equations in the following will be understood as inner derivations acting
on Aθ like as (3.8). The adjoint action defined in this way naturally removes a contribution from
the background in the action (2.2) or (2.13) [15]. For example, the first equation in (4.1) can be
consistent only in this way since the left hand side goes to zero at infinity but the right hand side
becomes ∼ θ/κ2. It might be remarked that this is the way to define the equations of motion in
the background independent formulation [8, 19] and thus it should be equivalent to the usual NC
prescription with ΦMN = 0.
4.1 D = 4 and n = 1
NC instanton solutions in this case were constructed in [32]. As was proved in Eq.(3.16), NC U(1)
instantons are in general equivalent to gravitational instantons. We thus expect that the NC self-
duality equations for D = 4 and n = 1 are mapped to self-dual Einstein equations. We will show
that the gauge-Higgs system (Aµ,Φ
a) in this case is mapped to two-dimensional U(∞) chiral model,
whose equations of motion are equivalent to the Plebański form of the self-dual Einstein equations
[33, 17, 34].
We showed in Section 2 that 4-dimensional NC U(1) gauge theory on IR2C × IR2NC is mapped
to 2-dimensional U(N → ∞) gauge theory with the action (2.13). The 4-dimensional self-duality
equations now become the U(N → ∞) Hitchin equations on IR2C :
Fµν = ±
εµν [Φ,Φ
†], DµΦ = ±iεµνDνΦ, (4.1)
where Φ = Φ1 + iΦ2. Note that the above equations also arise as zero-energy solutions in U(N)
Chern-Simons gauge theory coupled to a nonrelativistic complex scalar field in the adjoint represen-
tation [35]. It was shown in [36] that the self-dual system in Eq.(4.1) is completely solvable in terms
of Uhlenbeck’s uniton method. A NC generalization of Eq.(4.1), the Hitchin’s equations on IR2NC ,
was also considered in [37] with very parallel results to the commutative case. We will briefly discuss
the NC Hitchin’s equations in Section 5.
The equations (4.1) for the self-dual case (with + sign) can be elegantly combined into a zero-
curvature condition [35, 36] for the new connections defined by5
A+ = A+ + Φ, A− = A− − Φ† (4.2)
5Here we will relax the reality condition of the fields (Aµ,Φ
a) and complexify them.
with A± = A1 ± iA2:
F+− = ∂+A− − ∂−A+ − i[A+,A−] = 0 (4.3)
where ∂± = ∂1 ± i∂2. Thus the new gauge fields should be a pure gauge, that is, A± = ig−1∂±g for
some g ∈ GL(N, IC). Thus we can choose them to be zero, viz.
A+ = −Φ, A− = Φ†. (4.4)
Then the self-dual equations (4.1) reduce to
† + ∂−Φ + 2i[Φ,Φ
†] = 0, (4.5)
† − ∂−Φ = 0. (4.6)
Introducing another gauge fields C+ = −2Φ and C− = 2Φ†, Eq.(4.5) also becomes the zero-curvature
condition, hence C± are a pure gauge or
Φ = − i
h−1∂+h, Φ
h−1∂−h. (4.7)
A group element h(z) defines a map from IR2C to GL(N, IC) group, which is contractible to the map
from IR2C to U(N) ⊂ GL(N, IC). Then Eq.(4.6) implies that h(z) satisfies the equation in the two-
dimensional U(N) chiral model [35, 36]
−1∂−h) + ∂−(h
−1∂+h) = 0. (4.8)
Eq.(4.8) is the equation of motion derived from the two-dimensional U(N) chiral model governed
by the following Euclidean action
d2zTr ∂µh
−1∂νhδ
µν . (4.9)
A remarkable (mysterious) fact has been known [33, 17, 34] that in the N → ∞ limit the chiral
model (4.9) describes a self-dual spacetime whose equation of motion takes the Plebański form of
self-dual Einstein equations [38]. Thus, including the case of D = 4 and n = 2 in [14, 15, 16], we
have confirmed Eq.(3.16) stating that the 4-dimensional self-dual system in the action (2.2) or (2.13)
in general describes the self-dual Einstein gravity where self-dual metrics are given by Eq.(3.4).
4.2 D = 6 and n = 1
Our current work has been particularly motivated by this case since it was already shown in [39] that
SU(N) Yang-Mills instantons in the N → ∞ limit are gravitational instantons too. Since NC U(1)
instantons are also gravitational instantons as we showed before, it implies that there should be a close
relationship between SU(N) Yang-Mills instantons and NC U(1) instantons. A basic observation was
the relation (1.8), which leads to the sound realization in Eq.(2.13). But we will simply follow the
argument in [39] for the gauge group G = U(N); in the meantime, we will confirm the results for the
emergent geometry from NC gauge fields.
Let us look at the instanton solution in U(N) Yang-Mills theory. The self-duality equation is
given by
Fµν = ±
εµναβFαβ (4.10)
where the field strength is defined by
Fµν = ∂µAν − ∂νAµ − i[Aµ, Aν ]. (4.11)
In terms of the complex coordinates and the complex gauge fields defined by
(x2 + ix1), z2 =
(x4 + ix3),
Az1 = A2 − iA1, Az2 = A4 − iA3,
Eq.(4.10) can be written as
Fz1z2 = 0 = Fz̄1z̄2 , (4.12)
Fz1z̄1 ∓ Fz2z̄2 = 0. (4.13)
Now let us consider the anti-self-dual (ASD) case. We first notice that Fz1z2 = 0 implies that there
exists a u(N)-valued function g such that Aza = ig
−1∂zag (a = 1, 2). Therefore one can choose a
gauge
Aza = 0. (4.14)
Under the gauge (4.14), the ASD equations lead to
∂z̄1Az̄2 − ∂z̄2Az̄1 − i[Az̄1 , Az̄2 ] = 0, (4.15)
∂z1Az̄1 + ∂z2Az̄2 = 0. (4.16)
First notice a close similarity with Eqs.(4.5) and (4.6). Eq.(4.16) can be solved by introducing a
u(N)-valued function Φ such that
Az̄1 = −∂z2Φ, Az̄2 = ∂z1Φ. (4.17)
Substituting (4.17) into (4.15) one finally gets
(∂z1∂z̄1 + ∂z2∂z̄2)Φ− i[∂z1Φ, ∂z2Φ] = 0. (4.18)
Adopting the correspondence (1.8), we now regard Φ ∈ u(N)⊗C∞(IR4) in Eq.(4.18) as a smooth
function on IR4 × Σg, i.e., Φ = Φ(xµ, p, q) where (p, q) are local coordinates of a two-dimensional
Riemann surface Σg. Moreover, a Lie algebra commutator is replaced by the Poisson bracket (1.1)
{f, g} = ∂f
that is,
[Φ1,Φ2] → i{Φ1,Φ2}, (4.19)
where we absorbed θ into the coordinates (p, q). After all, the ASD Yang-Mills equation (4.18) in
the large N limit is equivalent to a single nonlinear equation in six dimensions parameterized by
(xµ, p, q):
(∂z1∂z̄1 + ∂z2∂z̄2)Φ + {∂z1Φ, ∂z2Φ} = 0. (4.20)
Since Eq.(4.20) is similar to the well-known second heavenly equation [38], it was called in [39] as
the six dimensional version of the second heavenly equation.
Starting from U(N) Yang-Mills instantons in four dimensions, we arrived at the nonlinear dif-
ferential equation for a single function in six dimensions. It is important to notice that the resulting
six-dimensional theory is a NC field theory since the Riemann surface Σg carries a symplectic struc-
ture inherited from the u(N) Lie algebra through Eq.(4.19) and it can be quantized in general via
deformation quantization [4]. Since the function Φ in (4.20) is a master field of N ×N matrices [30],
so Φ ∼ N2, the AdS/CFT duality [12] implies that the master field Φ describes a six-dimensional
emergent geometry induced by Yang-Mills instantons.
To see the emergent geometry, consider an appropriate symmetry reduction of Eq.(4.20) to show
that it describes self-dual gravity in four dimensions. There are many reductions from six to four
dimensional subspace leading to self-dual four-manifolds [39]. A common feature is that the four
dimensional subspace necessarily contains the NC Riemann surface Σg. We will show later how the
symmetry reduction naturally arises from the BPS condition in six dimensions. As a specific example,
we assume the following symmetry,
∂z1Φ = ∂z̄1Φ, ∂z2Φ = ∂z̄2Φ, (4.21)
Φ(z1, z2, z̄1, z̄2, p, q) = Λ(z1 + z̄1 ≡ x, z2 + z̄2 ≡ y, p, q). (4.22)
Then Eq.(4.20) is precisely equal to the Husain’s equation [34] which is the reduction of self-dual
Einstein equations to the sdiff(Σg) chiral field equations in two dimensions:
Λxx + Λyy + ΛxqΛyp − ΛxpΛyq = 0. (4.23)
Note that we already encountered in Section 4.1 the two-dimensional sdiff(Σg) chiral field equations
since sdiff(Σg) ∼= u(N) according to the correspondence (1.8). We showed in [16] that Eq.(4.23)
can be transformed to the first heavenly equation [38] which is a governing equation of self-dual
Einstein gravity. In the end we conclude that self-dual U(N) Yang-Mills theory in the large N limit
is equivalent to self-dual Einstein gravity.
Now it is easy to see that the self-dual Einstein equation (4.23) is coming from a 1/2 BPS equation
in six dimensions (see Eq.(34) in [23]) defined by the first-order equation (2.19). According to our
construction, the six-dimensional NC U(1) gauge theory (2.2) is equivalent to the four-dimensional
U(N) gauge theory (2.13). Therefore six-dimensional BPS equations can be equivalently described
by the gauge-Higgs system (Aµ,Φ
a) in the action (2.13). Let us newly denote the NC coordinates
y1, y2 and commutative ones z3, z4 as uα, α = 1, 2, 3, 4 while z1, z2 as vA, A = 1, 2. The 1/2 BPS
equations, Eq.(34), in [23] can then be written as the following form
Fαβ = ±
εαβγδFγδ, (4.24)
FαA = FAB = 0. (4.25)
Using Eqs.(2.8)-(2.10), the above equations can be rewritten in terms of (Aµ,Φ
a) where the constant
term in (2.10) can simply be dropped for the reason explained before.
FAB = 0 in Eq.(4.25) can be solved by AB = 0 (B = 1, 2) and then FαA = 0 demand that the
gauge fields Aα should not depend on v
A. Thereby Eq.(4.24) precisely reduces to the self-duality
equation (4.1) for D = 4 and n = 1. The symmetry reduction considered above is now understood
as the condition (4.25); in specific, the coordinates vA correspond to i(z1 − z̄1) and i(z2 − z̄2) for
the reduction (4.22). However there are many different choices taking a four-dimensional subspace
in Eq.(4.24) which are related by SO(6) rotations [23]. Unless vA ∈ (y1, y2), that is, Eq.(4.24)
becomes commutative Abelian equations in which there is no non-singular solution, Eqs.(4.24) and
(4.25) reduce to four-dimensional self-dual Einstein equations, as was shown in [39]. The above BPS
equations also clarify why the two-dimensional chiral equations in Section 4.1 reappear in Eq.(4.23).
4.3 D = 8 and D = 10
The analysis for the first-order system (2.19) becomes much more complicated in higher dimensions.
The unbroken supersymmetries in D = 8 have been analyzed in [23]. Because the integrable structure
of Einstein equations in higher dimensions is little known, it is difficult to precisely identify governing
geometrical structures emergent from the gauge theory (2.2) or (2.13) even for BPS states. Neverthe-
less some BPS configurations can be easily implemented as follows. As we did in Eqs.(4.24)-(4.25),
one can imbed the 4-dimensional self-dual system for n = 1 or n = 2 into eight or ten dimensions.
The simplest case is that the metric (3.4) becomes (locally) a product manifold M4×X where M4 is
a self-dual (hyper-Kähler) four-manifold. For example, we can consider an eight-dimensional config-
uration where (A1, A2,Φ
3,Φ4) depend only on (z1, z2, y3, y4) coordinates while (Φ1,Φ2, A3, A4) do
only on (y1, y2, z3, z4) in a B-field background with θ12 6= 0 and θ34 6= 0, only non-vanishing com-
ponents. There are many similar configurations. We will not exhaust them, instead we will consider
the simplest cases which already have some relevance to other works.
The simplest BPS state in D = 8 is the case with n = 2 in the action (2.13); see Eq.(55) in [23].
The equations are of the form
Fµν = ±
εµνλσFλσ, (4.26)
[Φa,Φb] = ±1
εabcd[Φ
c,Φd], (4.27)
a = 0. (4.28)
A solution of Eq.(4.28) is given by Aµ = Aµ(z) and Φ
a = Φa(y). Then Eq.(4.26) becomes com-
mutative Abelian equations which allow no non-singular solutions, while (4.27) reduces to Eq.(3.5)
describing 4-dimensional self-dual manifolds [15]. Thus the metric (3.4) in this case leads to a half-
BPS geometry IR4×M4. Since we don’t need instanton solutions in Eq.(4.26), we may freely replace
IR4 by 4-dimensional Minkowski space IR1,3 (see the footnote 1).
The above system was considered in [40] in the context of D3-D7 brane inflationary model. The
model consists of a D3-brane parallel to a D7-brane at some distance in the presence of Fab =
(B + F )ab on the worldvolume of the D7-brane, but transverse to the D3-brane. The F -field plays
the role of the Fayet-Illiopoulos term from the viewpoint of the D3-brane worldvolume field theory.
Because of spontaneously broken supersymmetry in de Sitter valley the D3-brane is attracted towards
the D7-brane and eventually it is dissolved into the D7-brane as a NC instanton. The system ends in
a supersymmetric Higgs phase with a smooth instanton moduli space. An interesting point in [40] is
that there is a relation between cosmological constant in spacetime and noncommutativity in internal
space. Our above result adds a geometrical picture that the internal space after tachyon condensation
is developed to a gravitational instanton, e.g., an ALE space or K3.
Another interesting point, not mentioned in [40], is that it effectively realizes the dynamical com-
pactification of extra dimensions suggested in [41]. Since the D3-brane is an instanton inside the
D7-brane, particles living in the D3-brane are trapped in the core of the instanton with size ∼ θ2
where the noncommutativity scale θ is believed to be roughly Planck scale. Since the instanton
(D3-brane) results in a spontaneous breaking of translation symmetry and supersymmetry partially,
Goldstone excitations corresponding to the broken bosonic and fermionic generators are zero-modes
trapped in the core of the instanton. “Quarks” and “leptons” might be identified with these fermionic
zero-modes [41].
We argued in the last of Section 3 that the 10-dimensional metric (3.4) for d = 4 and n = 3
reasonably corresponds to an emergent geometry arising from the 4-dimensional N = 4 supersym-
metric U(N) Yang-Mills theory. Especially it may be closely related to the bubbling geometry in AdS
space found by Lin, Lunin and Maldacena (LLM) [18]. One may notice that the LLM geometry is
a bubbling geometry deformed from the AdS5 × S5 background which can be regarded as a vacuum
manifold emerging from the self-dual RR five-form background, while the Ward’s geometry (3.4) is
defined in a 2-form B-field background and becomes (conformally) flat if all fluctuations are turned
off, say, (Aµ,Φ
a) → (0, ya/κ). But it turns out that the LLM geometry is a special case of the Ward’s
geometry (3.4).
To see this, recall that the AdS5 × S5 background is conformally flat, i.e.,
ds2 =
(ηµνdz
µdzν + dyadya) =
(ηµνdz
µdzν + dρ2) + L2dΩ25 (4.29)
where ρ2 =
a=1 y
aya and dΩ25 is the spherically symmetric metric on S
5. It is then easy to see that
the metric (4.29) is exactly the vacuum geometry of Eq.(3.4) when the volume form ω in Eq.(3.2) is
given by
dy1 ∧ · · · ∧ dy6
. (4.30)
Therefore it is obvious that the Ward’s metric (3.4) with the volume form (4.30) describes a bubbling
geometry which approaches to the AdS5 × S5 space at infinity where fluctuations are vanishing,
namely, (Aµ,Φ
a) → (0, ya/κ). Note that the flat spacetime IR1,9 is coming from the volume form
ω = dy1 ∧ · · · ∧ dy6, so Eq.(4.30) should correspond to some nontrivial soliton background from the
gauge theory point of view. We will discuss in Section 5 a possible origin of the volume form (4.30).
Now let us briefly summarize half-BPS geometries of type IIB string theory corresponding to
the chiral primaries of N = 4 super Yang-Mills theory [18]. These BPS states are giant graviton
branes which wrap an S3 in AdS5 or an S̃
3 in S5. Thus the geometry induced (or back-reacted)
by the giant gravitons preserves SO(4) × SO(4) × R isometry. It turns out that the solution is
completely determined by a single function which is specified with two types of boundary conditions
on a particular plane corresponding to either of two different spheres shrinking on the plane in a
smooth fashion. The LLM solutions are thus in one-to-one correspondence with various 2-colorings
of a 2-plane, usually referred to as ‘droplets’ and the geometry depends on the shape of the droplets.
The droplet describing gravity solutions turns out to be the same droplet in the phase space describing
free fermions for the half-BPS states.
The solutions can be analytically continued to those with SO(2, 2) × SO(4) × U(1) symmetry
[18], so the solutions have the AdS3 × S3 factor rather than S3 × S̃3. After an analytic continuation,
a underlying 4-dimensional geometry M4 attains a nice geometrical structure at asymptotic region,
where AdS3 × S3 → IR1,5 and M4 reduces to a hyper-Kähler geometry. But it loses the nice picture
in terms of fermion droplet since the solution is now specified by one type of boundary condition. It
is interesting to notice that the asymptotic bubbling geometry for the type IIB case is the Gibbons-
Hawking metric [28] and the real heaven metric [42] for the M theory case, which are all solutions of
NC electromagnetism [15, 16].
It is quite demanding to completely determine general half-BPS geometries emerging from the
gauge-Higgs system in the action (2.13). Hence we will look at only an asymptotic geometry (or a
local geometry) which is relatively easy to identify. For the purpose, we consider the n = 3 case on
4-dimensional Minkowski space IR1,3. It is simple to mimic the previous half-BPS configurations in
D = 6, 8 with trivial extra Higgs fields. Then the resulting metric (3.4) will be locally of the form
M4 × IR1,5 akin to the asymptotic bubbling geometry. However M4 can be a general hyper-Kähler
manifold. Therefore the solutions we get will be more general, whose explicit form will depend on
underlying Killing symmetries and boundary conditions. For example, the type IIB case is given by a
hyper-Kähler geometry with one translational Killing vector (Gibbons-Hawking) while the M theory
case is with one rotational Killing vector (real heaven) [43]. Therefore we may get in general bubbling
geometries in the M theory as well as the type IIB string theory.
5 Discussion
We showed reasonable evidences that the 10-dimensional metric (3.4) for d = 4 and n = 3 describes
the emergent geometry arising from the 4-dimensional N = 4 supersymmetric U(N) Yang-Mills
theory and thus might explain the AdS/CFT duality [12]. An important point in this context is that
the volume form (4.30) is required to describe the AdS5 × S5 background. What is the origin of this
nontrivial volume form ? In other words, how to realize the self-dual RR five-form background from
the gauge theory point of view ?
To get some hint about the question, first note that the AdS5 × S5 geometry emerges from multi-
instanton collective coordinates which dominates the path integral in a large N limit [44]. The factor
d4zdρρ−5 appears in bosonic collective coordinate integration (with zµ the instanton 4-positions)
which agrees with the volume form of the conformally invariant space AdS5, where instanton size
corresponds to the radial coordinate ρ in Eq.(4.29). Another point is that the AdS5 × S5 space cor-
responds to the LLM geometry for the simplest and most symmetric configuration which reduces to
the usual Gibbons-Hakwing metric (3.6) at asymptotic regions [18]. This result is consistent with
the picture in Section 4.2 that U(N) instantons at large N limit are indeed gravitational instantons.
It is then tempted to speculate that the AdS5 × S5 geometry would be emerging from a maximally
supersymmetric instanton solution of Eq.(2.19) in D = 10. It should be an interesting future work.
In addition, we would like to point out that an AdSp × Sq background arises from Eq.(3.4) in the
same way as Eq.(4.30) by choosing the volume form ω as follows
dy1 ∧ · · · ∧ dyq+1
(5.1)
with ρ2 =
a=1 y
aya and (Aµ,Φ
a) = (0, ya/κ). A particularly interesting case is d = 2 and n = 2
for which the volume form (5.1) leads to the AdS3 × S3 background and the action (2.13) describes
matrix strings [45, 46]. We believe that the metric (3.4) with ω = dy1 ∧ · · · ∧ dy4/ρ2 describes a
bubbling geometry emerging from the matrix strings.
One might already notice a subtle difference between the matrix action (2.13) and the Ward’s
metric (3.4). According to our construction in Section 2, the number of the Higgs fields Φa is even
while the Ward construction has no such restriction. But it was shown in [15, 16] that the Gibbons-
Hawking metric (3.6) for the d = 3 and m = 1 case also arises from the d = 0 and m = 4. It implies
that we can replace some transverse scalars by gauge fields and vice versa. Recalling that the fields
in the action (2.13) are all N × N matrices, of course, N → ∞, it is precisely ‘matrix T -duality’
exchanging transverse scalars and gauge fields associated with a compact direction in p-brane and
(p+ 1)-brane worldvolume theories through (see Eq.(154) in [46])
Φa ↔ iDµ = i(∂µ − iAµ). (5.2)
With this identification, the d-dimensional U(N) gauge theory (2.13) can be obtained by applying the
d-fold ‘matrix T -duality’ (5.2) to the 0-dimensional IKKT matrix model [11, 20]
S = −2πκ
gMPgNQ[Φ
M ,ΦN ][ΦP ,ΦQ]
. (5.3)
However, the T -duality (5.2) gives rise to qualitatively radical changes in worldvolume theory.
First it changes the dimensionality of the theory and thus it affects its renormalizability (see Sec. VI
in [46] and references therein for this issue in Matrix theory). For example, the action (2.13) for
d > 4 is not renormalizable since the coupling constant g2YM ∼ gsκ
2 ∼ gsm4−ds has negative mass
dimension in this case. Second it also changes a behavior of the emergent metric (3.4). But these
changes are rather consistent with the fact that under the T -duality (5.2) a Dp-brane is transformed
into a D(p+ 1)-brane and vice versa.
Our construction in Section 2 raises a bizarre question about the renormalization property of NC
field theory. If we look at the action (2.2), the theory superficially seems to be non-renormalizable for
D > 4 since the coupling constant (2.5) has negative mass dimension. But this non-renormalizability
appears as a fake if we use the matrix representation (2.18) together with the redefinition of variables
in Eq.(2.4). The resulting coupling constant, denoted as gd, in the final action (2.13) depends only on
the dimension of commutative spacetime rather than the entire spacetime. Since the resulting U(N)
theory is in the limit N → ∞, while the ’t Hooft coupling λ ≡ g2dN is kept fixed, planar diagrams
dominate in this limit [9]. Since the dependence of NC coordinates in the action (2.2) has been
encoded into the matrix degrees of freedom, one may suspect that the divergence of the original theory
might appear as a divergence of perturbation series as a whole in the action (2.13). The convergence
aspect of the planar perturbation theory concerns Np(n), the number of planar diagrams in nth order
in λ. It was shown in [47] that Np(n) behaves asymptotically as
Np(n)
n→∞∼ cn, c = constant. (5.4)
Therefore the planar theory (unlike the full theory) for d ≤ 4 has a formally convergent perturbation
series, provided the ultraviolet and infrared divergences of individual diagrams are cut off [47]. It will
be interesting to carefully examine the renormalization property of NC field theories along this line.
We showed in Section 3 that the Ward metric (3.4) is emerging from commutative, i.e., O(θ),
limit. Since the vector fields in Eq.(3.11) are in general higher order differential operators acting on
Aθ, we thus expect that they actually define a ‘generalized gravity’ beyond Einstein gravity, e.g., the
NC gravity [29] or the NC unimodular gravity [48].6 It was shown in [16] that the leading derivative
6The latter seems to be quite relevant to our emergent gravity since the vector fields VM in Eq.(3.11) always belong to
the volume preserving diffeomorphisms, which is a generic property of vector fields defined in NC spacetime. It should
be interesting to more clarify the relation between the NC unimodular gravity [48] and the emergent gravity.
corrections in NC gauge theory start with four derivatives, which was conjectured to give rise to higher
order gravity. As was explicitly checked for the self-dual case, Einstein gravity maybe emerges from
NC gauge fields in commutative limit, which then implies that the leading derivative corrections give
rise to higher order terms with four more derivatives compared to the Einstein gravity. This means
that the higher order gravity starts from the second order corrections in θ with higher derivatives, that
is, no first order correction in θ to the Einstein gravity. Interestingly this result is consistent with those
in [29] and also in [49] calculated from the context of NC gravity.
It was shown in Section 4.1 that the self-duality system for the D = 4 and n = 1 case is mapped
to the two-dimensional U(∞) chiral model (4.9) which is remarkably equivalent to self-dual Einstein
gravity [33, 17, 34]. But this case should not be much different from the D = 4 and n = 2 case
in [15, 16] since they equally describe the self-dual Einstein gravity. Indeed we can make them
bear a close resemblance each other. For the purpose, let us consider a four-dimensional NC space
IR2NC × IR2NC . We can choose the matrix representation (2.18) only for the second factor, i.e.,
φ(y1, y2, y3, y4) =
Ωij (y
1, y2)|i〉〈j|. (5.5)
As a result, the action (2.13) now becomes two-dimensional U(N) gauge theory on IR2NC . The self-
dual equations in Eq.(4.1) in this case are given by the NC Hitchin equations, now defined on IR2NC
instead of IR2C . The NC Hitchin equations have been considered by K. Lee in [37] with very parallel
results with the commutative case (4.1). It is interesting that there exist two different realizations for
self-dual Einstein gravity, whose relationship should be more closely understood.
Finally it will be interesting to consider a compact NC space instead of IR2nNC , for instance, a
NC 2n-torus T2nNC . Since the module over a NC torus is still infinite dimensional [8], the matrix
representation (2.18) is also infinite dimensional. Thus we expect that our construction in Section 2
and 3 can be applied even to the NC torus without many essential changes.
Acknowledgments
After posting this paper to the arXiv, we were informed of related works on YM-Higgs BPS configu-
rations on NC spaces [50] and on the relation between a large N gauge theory, a Moyal deformation
and a self-dual gravity [51] by O. Lechtenfeld and C. Castro, respectively. We thank them for the
references. This work was supported by the Alexander von Humboldt Foundation.
References
[1] M. R. Douglas and N. A. Nekrasov, Noncommutative field theory, Rev. Mod. Phys. 73, 977
(2001), hep-th/0106048; R. J. Szabo, Quantum field theory on noncommutative spaces,
Phys. Rep. 378, 207 (2003), hep-th/0109162.
[2] R. J. Szabo, Symmetry, gravity and noncommutativity, Class. Quantum Grav. 23, R199 (2006),
hep-th/0606233.
[3] H. S. Yang, On The Correspondence Between Noncommuative Field Theory And Gravity, Mod.
Phys. Lett. A22, 1119 (2007), hep-th/0612231.
[4] M. Kontsevich, Deformation Quantization of Poisson Manifolds, Lett. Math. Phys. 66, 157
(2003), q-alg/9709040.
[5] J. Madore, The fuzzy sphere, Class. Quantum Grav. 9, 69 (1992).
[6] J. A. Harvey, Topology of the Gauge Group in Noncommutative Gauge Theory,
hep-th/0105242.
[7] F. Lizzi, R. J. Szabo and A. Zampini, Geometry of the gauge algebra in noncommutative Yang-
Mills theory, J. High Energy Phys. 08, 032 (2001), hep-th/0107115.
[8] N. Seiberg and E. Witten, String theory and noncommutative geometry, J. High Energy Phys.
09, 032 (1999), hep-th/9908142.
[9] G. ’t Hooft, A planar diagram theory for strong interactions, Nucl. Phys. B72, 461 (1974).
[10] T. Banks, W. Fischler, S. H. Shenker and L. Susskind, M theory as a matrix model: A conjecture,
Phys. Rev. D55, 5112 (1997), hep-th/9610043.
[11] N. Ishibashi, H. Kawai, Y. Kitazawa and A. Tsuchiya, A large-N reduced model as superstring,
Nucl. Phys. B498, 467 (1997), hep-th/9612115.
[12] J. M. Maldacena, The Large N Limit of Superconformal Field Theories and Supergravity, Adv.
Theor. Math. Phys. 2, 231 (1998); Int. J. Theor. Phys. 38, 1113 (1999), hep-th/9711200;
S. S. Gubser, I. R. Klebanov and A. M. Polyakov, Gauge Theory Correlators from Non-Critical
String Theory, Phys. Lett. B428, 105 (1998), hep-th/9802109; E. Witten, Anti De Sitter
Space And Holography, Adv. Theor. Math. Phys. 2, 253 (1998), hep-th/9802150.
[13] M. Salizzoni, A. Torrielli and H. S. Yang, ALE spaces from noncommutative U(1) instantons
via exact Seiberg-Witten map, Phys. Lett. B634, 427 (2006), hep-th/0510249.
[14] H. S. Yang and M. Salizzoni, Gravitational Instantons from Gauge Theory, Phys. Rev. Lett. 96,
201602 (2006), hep-th/0512215.
http://arxiv.org/abs/hep-th/0106048
http://arxiv.org/abs/hep-th/0109162
http://arxiv.org/abs/hep-th/0606233
http://arxiv.org/abs/hep-th/0612231
http://arxiv.org/abs/q-alg/9709040
http://arxiv.org/abs/hep-th/0105242
http://arxiv.org/abs/hep-th/0107115
http://arxiv.org/abs/hep-th/9908142
http://arxiv.org/abs/hep-th/9610043
http://arxiv.org/abs/hep-th/9612115
http://arxiv.org/abs/hep-th/9711200
http://arxiv.org/abs/hep-th/9802109
http://arxiv.org/abs/hep-th/9802150
http://arxiv.org/abs/hep-th/0510249
http://arxiv.org/abs/hep-th/0512215
[15] H. S. Yang, Instantons and Emergent Geometry, hep-th/0608013.
[16] H. S. Yang, Emergent Gravity from Noncommutative Spacetime, hep-th/0611174.
[17] R. S. Ward, The SU(∞) chiral model and self-dual vacuum spaces, Class. Quantum Grav. 7,
L217 (1990).
[18] H. Lin, O. Lunin and J. Maldacena, Bubbling AdS space and 1/2 BPS geometries, J. High
Energy Phys. 10, 025 (2004), hep-th/0409174; H. Lin and J. Maldacena, Fivebranes from
gauge theory, Phys. Rev. D74, 084014 (2006), hep-th/0509235.
[19] N. Seiberg, A note on background independence in noncommutative gauge theories, matrix
model and tachyon condensation, J. High Energy Phys. 09, 003 (2000), hep-th/0008013.
[20] H. Aoki, N. Ishibashi, S. Iso, H. Kawai, Y. Kitazawa and T. Tada, Non-commutative Yang-Mills
in IIB matrix model, Nucl. Phys. B565 (2000) 176, hep-th/9908141.
[21] E. Corrigan, C. Devchand, D. B. Fairlie and J. Nuyts, First-order equations for gauge fields in
spaces of dimension greater than four, Nucl. Phys. B214, 452 (1983).
[22] R. S. Ward, Completely solvable gauge-field equations in dimension greater than four, Nucl.
Phys. B236, 381 (1984).
[23] D. Bak, K. Lee and J.-H. Park, BPS equations in six and eight dimensions, Phys. Rev. D66,
025021 (2002), hep-th/0204221.
[24] A. Ashtekar, T. Jabobson and L. Smolin, A New Characterization Of Half-Flat Solutions to
Einstein’s Equation, Commun. Math. Phys. 115, 631 (1988).
[25] L. J. Mason and E. T. Newman, A Connection Between the Einstein and Yang-Mills Equations,
Commun. Math. Phys. 121, 659 (1989).
[26] S. Chakravarty, L. Mason and E. T. Newman, Canonical structures on anti-self-dual four-
manifolds and the diffeomorphism group, J. Math. Phys. 32, 1458 (1991).
[27] D. D. Joyce, Explicit Construction of Self-dual 4-Manifolds, Duke Math. J. 77, 519 (1995).
[28] G. W. Gibbons and S. W. Hawking, Gravitational Multi-instantons, Phys. Lett. 78B, 430 (1978).
[29] P. Aschieri, C. Blohmann, M. Dimitrijevic, F. Meyer, P. Schupp and J. Wess, A gravity theory on
noncommutative spaces, Class. Quant. Grav. 22, 3511 (2005), hep-th/0504183; P. Aschieri,
M. Dimitrijevic, F. Meyer and J. Wess, Noncommutative geometry and gravity, Class. Quant.
Grav. 23, 1883 (2006), hep-th/0510059.
http://arxiv.org/abs/hep-th/0608013
http://arxiv.org/abs/hep-th/0611174
http://arxiv.org/abs/hep-th/0409174
http://arxiv.org/abs/hep-th/0509235
http://arxiv.org/abs/hep-th/0008013
http://arxiv.org/abs/hep-th/9908141
http://arxiv.org/abs/hep-th/0204221
http://arxiv.org/abs/hep-th/0504183
http://arxiv.org/abs/hep-th/0510059
[30] R. Gopakumar and D. J. Gross, Mastering the master field, Nucl. Phys. B451, 379 (1995),
hep-th/9411021; I. Ya. Aref’eva and I. V. Volovich, The master field for QCD and q-
deformed quantum field theory, Nucl. Phys. B462, 600 (1996), hep-th/9510210.
[31] J. Madore, S. Schraml, P. Schupp and J. Wess, Gauge theory on noncommutative spaces, Eur.
Phys. J. C16, 161 (2000), hep-th/0001203.
[32] C.-S. Chu, V. V. Khoze and G. Travaglini, Notes on noncommutative instantons, Nucl. Phys.
B621, 101 (2002), hep-th/0108007; K.-Y. Kim, B.-H. Lee and H. S. Yang, Noncommuta-
tive instantons on R2NC ×R2C , Phys. Lett. B523, 357 (2001), hep-th/0109121.
[33] Q-H. Park, Self-dual Gravity as A Large-N Limit of the 2D Non-linear Sigma Model, Phys.
Lett. B238, 287 (1990).
[34] V. Husain, Self-Dual Gravity and the Chiral Model, Phys. Rev. Lett. 72, 800 (1994),
gr-qc/9402020.
[35] G. V. Dunne, R. Jackiw, S.-Y. Pi and C. A. Trugenberger, Self-dual Chern-Simons solitons and
two-dimensional nonlinear equations, Phys. Rev. D43, 1332 (1991); Erratum-ibid. D45, 3012
(1992).
[36] G. V. Dunne, Chern-Simons solitons, Toda theories and the chiral model, Commun. Math. Phys.
150, 519 (1992), hep-th/9204056.
[37] K.-M. Lee, Chern-Simons solitons, chiral model, and (affine) Toda model on noncommutative
space, J. High Energy Phys. 08, 054 (2004), hep-th/0405244.
[38] J. F. Plebañski, Some solutions of complex Einstein equations, J. Math. Phys. 16, 2395 (1575).
[39] J. F. Plebański and M. Przanowski, The Lagrangian of a self-dual gravitational field as a limit of
the SDYM Lagrangian, Phys. Lett. A212, 22 (1996), hep-th/9605233.
[40] K. Dasgupta, C. Herdeiro, S. Hirano and R. Kallosh, D3-D7 inflationary model and M theory,
Phys. Rev. D65, 126002 (2002), hep-th/0203019.
[41] G. Dvali and M. Shifman, Dynamical compactification as a mechanism of spontaneous super-
symmetry breaking, Nucl. Phys. B504, 127 (1997), hep-th/9611213.
[42] C. P. Boyer and J. D. Finley, III, Killing vectors in self-dual, Euclidean Einstein spaces, J. Math.
Phys. 23, 1126 (1982).
[43] I. Bakas and K. Sfetsos, Toda Fields of SO(3) Hyper-Kahler Metrics and Free Field Realizations,
Int. J. Mod. Phys. A12, 2585 (1997), hep-th/9604003.
http://arxiv.org/abs/hep-th/9411021
http://arxiv.org/abs/hep-th/9510210
http://arxiv.org/abs/hep-th/0001203
http://arxiv.org/abs/hep-th/0108007
http://arxiv.org/abs/hep-th/0109121
http://arxiv.org/abs/gr-qc/9402020
http://arxiv.org/abs/hep-th/9204056
http://arxiv.org/abs/hep-th/0405244
http://arxiv.org/abs/hep-th/9605233
http://arxiv.org/abs/hep-th/0203019
http://arxiv.org/abs/hep-th/9611213
http://arxiv.org/abs/hep-th/9604003
[44] M. Bianchi, M. B. Green, S. Kovacs and G. Rossi, Instantons in supersymmetric Yang-
Mills and D-instantons in IIB superstring theory, J. High Energy Phys. 08, 013 (1998),
hep-th/9807033; N. Dorey, T. J. Hollowood, V. V. Khoze, M. P. Mattis and S. Vandoren,
Multi-instanton calculus and the AdS/CFT correspondence in N = 4 superconformal field the-
ory, Nucl. Phys. B552, 88 (1999), hep-th/9901128.
[45] L. Motl, Proposals on nonperturbative superstring interactions, hep-th/9701025; R. Di-
jkgraaf, E. Verlinde and H. Verlinde, Matrix string theory, Nucl. Phys. B500, 43(1997),
hep-th/9703030.
[46] W. Taylor, M(atrix) theory: matrix quantum mechanics as a fundamental theory, Rev. Mod.
Phys. 73, 419 (2001), hep-th/0101126.
[47] J. Koplik, A. Neveu and S. Nussinov, Some aspects of the planar perturbation series, Nucl. Phys.
B123, 109 (1977).
[48] X. Calmet and A. Kobakhidze, Noncommutative general relativity, Phys. Rev. D72, 045010
(2005), hep-th/0506157.
[49] P. Mukherjee and A. Saha, Note on the noncommutative correction to gravity, Phys. Rev. D74,
027702 (2006), hep-th/0605287; X. Calmet and A. Kobakhidze, Second order noncommu-
tative corrections to gravity, Phys. Rev. D74, 047702 (2006), hep-th/0605275; R. Banerjee,
P. Mukherjee and S. Samanta, Lie algebraic Noncommutative Gravity, Phys. Rev. D75, 125020
(2007), hep-th/0703128.
[50] O. Lechtenfeld, A. D. Popov and R. J. Szabo, Noncommutative instantons in higher dimensions,
vortices and topological K-cycles, J. High Energy Phys. 12, 022 (2003), hep-th/0310267;
A. V. Domrin, O. Lechtenfeld and S. Petersen, Sigma-Model Solitons in the Noncommu-
tative Plane: Construction and Stability Analysis, J. High Energy Phys. 03, 045 (2005),
hep-th/0412001; O. Lechtenfeld, A. D. Popov and R. J. Szabo, Rank two quiver gauge
theory, graded connections and noncommutative vortices, J. High Energy Phys. 09, 054 (2006),
hep-th/0603232.
[51] C. Castro, SU(∞) (super)gauge theories and self-dual (super)gravity, J. Math. Phys. 34, 681
(1993); The N = 2 super-Wess-Zumino-Novikov-Witten model valued in superdiffeomorphism
(SDIFF) M2 is self-dual supergravity in four dimensions, J. Math. Phys. 35, 920 (1994); C.
Castro and J. Plebański, The generalized Moyal-Nahm and continuous Moyal-Toda equations,
J. Math. Phys. 40, 3738 (1996), hep-th/9710041; C. Castro, A Moyal quantization of the
continuous Toda field, Phys. Lett. B413, 53 (1997), hep-th/9703094; S. Ansoldi, C. Castro
and E. Spallucci, Chern-Simons hadronic bag from quenched large-N QCD, Phys. Lett. B504,
174 (2001), hep-th/0011013.
http://arxiv.org/abs/hep-th/9807033
http://arxiv.org/abs/hep-th/9901128
http://arxiv.org/abs/hep-th/9701025
http://arxiv.org/abs/hep-th/9703030
http://arxiv.org/abs/hep-th/0101126
http://arxiv.org/abs/hep-th/0506157
http://arxiv.org/abs/hep-th/0605287
http://arxiv.org/abs/hep-th/0605275
http://arxiv.org/abs/hep-th/0703128
http://arxiv.org/abs/hep-th/0310267
http://arxiv.org/abs/hep-th/0412001
http://arxiv.org/abs/hep-th/0603232
http://arxiv.org/abs/hep-th/9710041
http://arxiv.org/abs/hep-th/9703094
http://arxiv.org/abs/hep-th/0011013
Introduction
A Large N Gauge Theory From NC U(1) Gauge Theory
Emergent Geometry From NC Gauge Theory
Self-dual Einstein Gravity From Large N Gauge Theory
D=4 and n=1
D=6 and n=1
D=8 and D=10
Discussion
|
0704.0930 | Marginal Solutions for the Superstring | August 14, 2018
Marginal Solutions for the Superstring
Theodore Erler
Harish-Chandra Research Institute
Chhatnag Road, Jhunsi, Allahabad 211019, India
E-mail:[email protected]
Abstract
We construct a class of analytic solutions of WZW-type open superstring field theory describ-
ing marginal deformations of a reference D-brane background. The deformations we consider
are generated by on-shell vertex operators with vanishing operator products. The superstring
solution exhibits an intriguing duality with the corresponding marginal solution of the bosonic
string. In particular, the superstring problem is “dual” to the problem of re-expressing the
bosonic marginal solution in pure gauge form. This represents the first nonsingular analytic
solution of open superstring field theory.
http://arxiv.org/abs/0704.0930v2
Contents
1 Introduction 1
2 Bosonic Solution 2
3 Superstring Solution 5
4 Pure Gauge for Bosonic Solution 10
5 Conclusion 12
A B0,L0 with Split Strings 13
B Unitary eΦ 15
1 Introduction
Following the breakthrough analytic solution of Schnabl[1], our analytic understanding of open
string field theory (OSFT) has seen remarkable progress[2, 3, 4, 5, 6, 7, 8]. So far most work has
focused on the open bosonic string, but clearly it is also important to consider the superstring.
This is not just because superstrings are ultimately the theory of interest, but because there
are important physical questions, especially the holographic encryption of closed string physics
in OSFT, which may be difficult to decipher in the bosonic case[9].
Ideally, the first goal should be to find an analytic solution of superstring field theory1 on a
non-BPS brane describing the endpoint of tachyon condensation, i.e. the closed string vacuum.
However, the construction of this solution is will likely be subtle—indeed, Schnabl’s solution for
the bosonic vacuum is very close to being pure gauge[1, 2]. Thus, it may be useful to consider a
simpler problem first: constructing solutions describing marginal deformations of a (non)BPS D-
brane. Marginal deformations correspond to a one-parameter family of open string backgrounds
obtained by adding a conformal boundary interaction to the worldsheet action—for example,
turning on a Wilson line on a brane by adding the boundary term Aµ
dt∂Xµ(t) to the
worldsheet action. Such backgrounds were studied numerically for the bosonic string in ref.[11]
and for the superstring in ref.[12]. Recently, Schnabl[13] and Kiermaier et al[14] found analytic
solutions for marginal deformations in bosonic OSFT2. The solutions bear striking resemblance
1In this paper we will work with the Berkovits WZW-type superstring field theory[10].
2For previous efforts to construct such solutions analytically in bosonic and super OSFT, see refs.[15, 16].
to Schnabl’s vacuum solution, but are simpler in the sense that they are manifestly nontrivial
and can be constructed systematically with a judicious choice of gauge.
In this note, we construct solutions of super OSFT describing marginal deformations gen-
erated by on-shell vertex operators with vanishing operator products (in either the 0 or −1
picture). As was found in ref.[13, 14] such deformations are technically simpler since they
allow for solutions in Schnabl’s gauge, B0Φ = 0—though probably more general marginal solu-
tions can be obtained once the analogous problem is understood for the bosonic string, either
by adding counterterms as described in ref.[14] or by employing a “pseudo-Schnabl gauge” as
suggested in ref.[13]. The superstring solution exhibits a remarkable duality with its bosonic
counterpart: it formally represents a re-expression of the bosonic solution in pure gauge form.
It would be very interesting if this duality generalized to other solutions.
This paper is organized as follows. In section 2 we briefly review the bosonic marginal
solution in the split string formalism[2, 8, 17], which we will prove convenient for many com-
putations. In section 3 we consider the superstring, motivating the solution as analogous to
constructing an explicit pure gauge form for the bosonic marginal solution. This strategy
quickly gives a very simple expression for the complete analytic solution of super OSFT. In sec-
tion 4 we consider the dual problem: finding a pure gauge expression for the bosonic marginal
deformation describing a constant, light-like gauge field on a non-compact brane. Though quite
analogous to the superstring, this problem is slightly more complex. Nevertheless we are able
to find an analytic solution. We end with some conclusions.
While this note was in preparation, we learned of the independent solution by Yuji Okawa[18].
His paper should appear concurrently.
2 Bosonic Solution
Let us begin by reviewing the bosonic marginal solution[13, 14] in the language of the split
string formalism[2, 8, 17], which is a useful shorthand for many calculations. The first step in
this approach is to find a subalgebra of the open string star algebra, closed under the action of
the BRST operator, in which we hope to find an analytic solution. For the bosonic marginal
solution the subalgebra is generated by three string fields K,B and J :
K = Grassmann even, gh# = 0
B = Grassmann odd, gh# = −1
J = Grassmann odd, gh# = 1 (2.1)
satisfying the identities,
[K,B] = 0 B2 = J2 = 0 (2.2)
dK = 0 dJ = 0 dB = K (2.3)
where d = QB is the BRST operator and the products above are open string star products (we
will mostly omit the ∗ in this paper). The relevant explicit definitions of K,B, J are3,
K = −π
(K1)L|I〉 K1 = L1 + L−1
B = −
(B1)L|I〉 B1 = b1 + b−1
J = J(1)|I〉 (2.4)
where |I〉 is the identity string field and the subscript L denotes taking the left half of the
corresponding charge4. The operator J(z) is a dimension zero primary generating the marginal
trajectory. It takes the form,
J(z) = cO(z) (2.5)
where O is a dimension one matter primary with nonsingular OPE with itself. This is crucial
for guaranteeing that the square of the field J vanishes, as in eq.(2.2). With these preliminaries,
the marginal solution for the bosonic string is:
Ψ = λFJ
1− λB F 2−1
F (2.6)
where λ parameterizes the marginal trajectory and F = eK/2 = Ω1/2 is the square root of the
SL(2,R) vacuum (a wedge state). To linear order in λ the solution is,
Ψ = λFJF + ... = λJ(0)|Ω〉+ ... (2.7)
which is the nontrivial element of the BRST cohomology generating the marginal trajectory.
Let us prove that eq.2.6 satisfies the equations of motion. Using the identities Eqs.(2.2,2.3),
dΨ = −λFJd
1− λB F 2−1
= −λFJ
1− λB F 2−1
F 2 − 1
1− λB F 2−1
= −λ2FJ 1
1− λB F 2−1
(F 2 − 1)J 1
1− λB F 2−1
F (2.8)
3We may generalize the construction by considering other projector frames[4, 7, 8] or by allowing the field F
in eq.(2.6) to be an arbitrary function of K[2, 8]. Such generalizations do not add much to the current discussion
so we will stick with the definitions presented here.
4“Left” means integrating the current counter-clockwise on the positive half of the unit circle. This convention
differs by a sign from ref.[8] but agrees with ref.[4].
Notice the (F 2−1)J factor in the middle. Since J2 = 0, the −1)J term vanishes when multiplied
with the Js to the left—thus the necessity of marginal operators with nonsingular OPE. This
leaves,
dΨ = −λ2FJ 1
1− λB F 2−1
1− λB F 2−1
F = −Ψ2 (2.9)
i.e. the bosonic equations of motion are satisfied.
The solution has a power series expansion in λ:
λnΨn (2.10)
where,
Ψn = FJ
F 2 − 1
F (2.11)
To make contact with the expressions of refs.[13, 14], note the relation,
F 2 − 1
dtΩt (2.12)
To prove this, recall Ωt = etK and calculate5,
dtΩt =
etK = eK − 1 = F 2 − 1 (2.13)
Using this and the mapping between the split string notation and conformal field theory de-
scribed in ref.[8], the Ψns can be written as CFT correlators on the cylinder:
〈Ψn, χ〉 =
dt1...
dtn−1 〈J(tn−1 + ...+ t1 + 1)B...J(t1 + 1)BJ(1) fS ◦ χ(0)〉Ctn−1+...+t1+2
(2.14)
where fS(z) =
tan−1 z is the sliver conformal map, and in this context B is the insertion
∫ −i∞
b(z) to be integrated parallel to the axis of the cylinder in between the J insertions on
either side. This matches the expressions found in refs.[13, 14].
In passing, we mention that this solution was originally constructed systematically by using
the equations of motion to recursively determine the Ψns in Schnabl gauge. If desired, it is also
possible to perform such calculations in split string language; we offer some sample calculations
in appendix A.
5Note that, in general, the inverse of K is not well defined. However, when operating on F 2 − 1 it is. This
is why we cannot simply use F 2/K in the solution in place of F
, which would naively give a solution even
for marginal operators with singular OPEs.
3 Superstring Solution
Let us now consider the superstring. The marginal deformation is generated by a −1 picture
vertex operator,
e−φcO(z) (3.1)
where O(z) is a dimension 1
superconformal matter primary. We will use Berkovits’s WZW-
type superstring field theory[10]6, in which case the string field is given by multiplying the −1
picture vertex operator by the ξ ghost:
X(z) = ξe−φcO(z) (3.2)
This corresponds to a solution of the linearized Berkovits equations of motion,
η0QB (λX(0)|Ω〉) = 0 (3.3)
since η0 eats the ξ and the −1 picture vertex operator is in the BRST cohomology. We will
also find it useful to consider the 0 picture vertex operator,
J(z) = QB ·X(z) = cG−1/2 · O(z)− eφηO(z) (3.4)
A complimentary way of seeing the linearized equations of motion are satisfied is to note that
J(z) is in the small Hilbert space. As with the bosonic string, it is very helpful to assume that
X(z) and J(z) have vanishing OPEs:
J(z)X(w) = lim
J(z)J(w) = lim
X(z)X(w) = 0 (3.5)
We mention two examples of such deformations. The simplest is the light-like Wilson line
O(z) = ψ+(z) (α′ = 1), where
X(z) = ξe−φcψ+(z)
J(z) = i
2c∂X+(z)− eφηψ+(z) (3.6)
There is also a “rolling tachyon” marginal deformation[22] O(z) = σ1eX
2(z) on a non-BPS
brane. The corresponding vertex operators are,
X(z) = σ1ξe
−φceX
J(z) = σ2(cψ
0 − ieφη)eX0/
2(z) (3.7)
6See refs.[19, 20, 21] for nice reviews.
The Pauli matrices σ1, σ2, σ3 are “internal” Chan-Paton factors[23, 24], necessary to accom-
modate non-BPS GSO(−) states into the Berkovits framework. Though we will not write it
explicitly, in this context it is important to remember that the BRST operator and the eta zero
mode are carrying a factor of σ3 (thus the presence iσ2 = σ3σ1 in the above expression for J).
We mention that both X(0)|Ω〉 and J(0)|Ω〉 are in Schnabl gauge and annihilated by L0.
Let us describe the subalgebra relevant for finding the marginal solution. It consists of the
products of four string fields, K,B,X, J7:
K = Grassmann even, gh# = 0
B = Grassmann odd, gh# = −1
X = Grassmann even, gh# = 0
J = Grassmann odd, gh# = 1 (3.8)
All four of these have vanishing picture number. K and B are the same fields encountered
earlier in eq.(2.4); X and J are defined,
X = X(1)|I〉 J = J(1)|I〉 (3.9)
with X(z), J(z) as in Eqs.(3.2,3.4). We have the identities,
[K,B] = 0 B2 = 0 X2 = J2 = XJ = JX = 0 (3.10)
where the third set follows because the corresponding vertex operators have vanishing OPEs.
The algebra is closed under the action of the BRST operator:
dB = K dK = 0
dX = J dJ = 0 (3.11)
Note that the eta zero mode d̄ ≡ η0 annihilates K,B and J ,
d̄K = d̄B = d̄J = 0 (3.12)
since they live in the small Hilbert space. However, it does not annihilate X , and the algebra is
not closed under d̄. Though it is not a priori obvious that the K,B,X, J algebra is rich enough
to encapsulate the marginal solution, we will quickly see that it is.
7Note that for for a GSO(−) deformation the Grassmann assignments of X, J are opposite. Still, as far
as the solution is concerned X is even and J is odd because QB, η0 carry a σ3 which anticommutes with the
internal Chan-Paton matrices of the vertex operators.
We seek a one parameter family of solutions of the super OSFT equations of motion,
e−ΦdeΦ
= 0 (3.13)
where Φ is a Grassmann even, ghost and picture number zero string field which to linear order
in the marginal parameter takes the form,
Φ = λFXF + ... (3.14)
There are many strategies one could take to solve this equation, but before describing our
particular approach it is worth mentioning the “obvious” method: fixing Φ in Schnabl gauge
and attempting a perturbative solution, as in refs.[13, 14]:
λnΦn Φ1 = FXF (3.15)
At second order8, the Schnabl gauge solution is actually fairly simple:
F 2 − 1
JF + FJB
F 2 − 1
(3.16)
and seems quite similar to the bosonic solution. At third order, however, we found an extremely
complicated expression (though still within the K,B,X, J subalgebra). It seems doubtful that
a closed form solution for Φ in Schnabl gauge can be obtained.
Since the Schnabl gauge construction appears complicated, we are lead to consider another
approach. To motivate our particular strategy, we make two observations: First, the combi-
nation e−ΦdeΦ which enters the superstring equations of motion also happens to be a pure
gauge configuration from the perspective of bosonic OSFT. Second, there is a basic similarity
between the K,B, J algebra for the bosonic marginal solution and the K,B, J,X algebra for
the superstring. The main difference of course is the presence of X for the superstring, whose
BRST variation gives J . If such a field were present for the bosonic string, the bosonic marginal
solution would be pure gauge because J would be trivial in the BRST cohomology. With this
motivation, we are lead to consider the equation
e−ΦdeΦ = λFJ
1− λB F 2−1
F (3.17)
From the bosonic string perspective, this equation represents an expression of the bosonic
marginal solution in a form which is pure gauge. From the superstring perspective, this is a
8Explicitly, if we plug eq.(3.15) into the equations of motion, we find a recursive set of equations of the form
d̄dΦn = d̄Fn−1[Φ], where Fn−1[Φ] depends on Φ1, ...,Φn−1. The Schnabl gauge solution is obtained by writing
Fn−1[Φ].
partially gauge fixed form of the equations of motion, since the expression on the right hand
side is in the small Hilbert space.
Let us now solve this equation. It will turn out to be simpler to solve for the group element
g = eΦ; we make a perturbative ansatz,
g = eΦ = 1 +
λngn g1 = Φ1 = FXF (3.18)
Expanding out eq.(3.17) to second order gives,
dg2 = FJB
F 2 − 1
JF + g1dg1
= FJB
F 2 − 1
JF + FXF 2JF (3.19)
As it turns out, this equation is solved by the second order Schnabl gauge solution eq.(3.16):
g2 = Φ2 +
Φ21 =
F 2 − 1
JF + FJB
F 2 − 1
XF + FXF 2XF
(3.20)
but there is a simpler solution:
g2 = FXB
F 2 − 1
JF (3.21)
Using this form of g2 we can proceed to third order—remarkably, the solution is practically just
as simple:
g3 = FX
F 2 − 1
F (3.22)
This leads to an ansatz for the full solution:
eΦ = 1 + λFX
1− λB F 2−1
F (3.23)
To check this, calculate:
deΦ = λFJ
1− λB F 2−1
F + λFXd
1− λB F 2−1
= λFJ
1− λB F 2−1
F + λFX
1− λB F 2−1
F 2 − 1
1− λB F 2−1
= λFJ
1− λB F 2−1
F + λ2FX
1− λB F 2−1
1− λB F 2−1
1 + λFX
1− λB F 2−1
1− λB F 2−1
= eΦλFJ
1− λB F 2−1
F (3.24)
Therefore, eq.(3.23) is indeed a complete solution to the super OSFT equations of motion!
Note, however, that it is not quite a solution to the pure gauge problem of the bosonic string.
In particular, in step three we needed to assume XJ = 0—something we would not expect
to hold in the bosonic context. We will give the solution to the bosonic problem in the next
section.
Let us make a few comments about this solution. First, though the string field Φ itself is
not in Schnabl gauge, the nontrivial part of the group element eΦ is—this is not difficult to see,
but we offer one explanation in appendix A. The second comment is related to the string field
reality condition. In super OSFT, the natural reality condition is that Φ should be “imaginary”
in the following sense:
〈Φ, χ〉 = −〈Φ|χ〉 (3.25)
where 〈Φ| is the Hermitian dual of |Φ〉 and χ is any test state. In split string notation we can
write this,
Φ† = −Φ (3.26)
where † is an anti-involution on the star algebra, formally completely analogous to Hermitian
conjugation of operators. With this reality condition, the group element should be unitary:
g† = g−1
Using,
K† = K B† = B J† = J X† = −X (3.27)
it is not difficult to see that the analytic solution eΦ is not unitary9. However, it is possible to
obtain a unitary solution by a simple gauge transformation of eq.(3.23); we explain details in
appendix B.
Let us take the opportunity to express the solution in a few other forms which may be more
convenient for explicit computations. Following the usual prescription we may express the gns
as correlation functions on the cylinder:
〈gn, χ〉 =
dt1...
dtn−1 〈X(tn−1 + ...+ t1 + 1)BJ(tn−2 + ..+ t1 + 1)...BJ(1) fS ◦ χ(0)〉Ctn−1+...+t1+2
= (−1)n
dt1...
dtn−1 〈X(L+ 1)[O′(ℓn−2 + 1)...O′(ℓ1 + 1)]BJ(1) fS ◦ χ(0)〉CL+2 (3.28)
In the second line we manipulated the multiple B insertions, simplifying the vertex operators
and obtaining a single B insertion to the right; we introduced the length parameters[14]:
tk L = ℓn−1 (3.29)
9By contrast, the Schnabl gauge construction automatically gives an imaginary Φ and unitary eΦ.
and defined O′(z) = G− 1
· O(z) (times a σ3 for GSO(−) deformations). We may also express
the solution in the operator formalism of Schnabl[1]:
|gn〉 =
(−1)nO+1
dt1...
dtn−1ÛL+2 f
S ◦ (ξe
−φO(L/2))Õ′(yn−2)...Õ′(y1)
Õ′(−L
)[B+c̃(L
)c̃(−L
)− c̃(L
)− c̃(−L
)] + f−1S ◦ (ηe
φO(−L
))[B+c̃(L
) + 1]
(3.30)
where yi = ℓi −L/2 and[6] Ûr =
. Also we have used f−1S to define the tilde to hide
some factors of π
. The expression is somewhat more complicated than the bosonic solution
since the vertex operator J(z) has a piece without a c ghost, so in the bc CFT the solution has
a component not proportional to Schnabl’s ψn[1].
4 Pure Gauge for Bosonic Solution
In the last section, we found a solution for the superstring by analogy with the pure gauge
problem of the bosonic string; but we did not solve the latter. The scenario we have in mind is
a constant, lightlike gauge field on a non-compact D-brane. Since there is no flux and no way
to wind a Wilson loop, such a field configuration should be pure gauge. From the string field
theory viewpoint, this is reflected by the fact that the marginal vertex operator becomes BRST
trivial in the noncompact limit,
ic∂X+(z) = QB · 2iX+(z) (4.1)
Of course, on a compact manifold the operator X+(z) is not globally defined so the marginal
deformation is nontrivial.
Translating to split string language, we consider an algebra generated by four fieldsK,B,X, J ,
where K,B are defined as before and,
X = 2iX+(1)|I〉 J = ic∂X+(1)|I〉 (4.2)
These have the same Grassmann and ghost number assignments as eq.(3.8). We have the
algebraic relations,
[K,B] = 0 B2 = 0 J2 = 0 [X, J ] = 0 (4.3)
Note the difference from the superstring case: the products of X with itself and with J , though
well defined (the OPEs are nonsingular), are nonvanishing. However, we still have
dB = K dK = 0
dX = J dJ = 0 (4.4)
with the second set implying that J is trivial in the BRST cohomology.
We now want to solve eq.(3.17) assuming this slightly more general set of algebraic relations.
Playing around a little bit, the solution we found is,
eΛ = 1 + λFuλ(X)
1− λB F 2−1
F (4.5)
where,
uλ(X) =
eλX − 1
(4.6)
The relevant identity satisfied by this particular combination is,
duλ = J(λuλ + 1) (4.7)
Let us prove that this gives a pure gauge expression for the bosonic marginal solution:
deΛ = λFduλ
1− λB F 2−1
F + λFuλ
1− λB F 2−1
F 2 − 1
1− λB F 2−1
= λFJ(λuλ + 1)
1− λB F 2−1
F + λ2Fuλ
1− λB F 2−1
(F 2 − 1)J
1− λB F 2−1
Now we come to the critical difference from the superstring. Note the −1)J piece in the middle
of the second term. Before it vanished when multiplied by X, J to the left. This time it
contributes because XJ 6= 0; still, the Js in the denominator of the factor to the left get killed
because J2 = 0. Thus we have,
deΛ = λFJ(λuλ + 1)
1− λB F 2−1
F + λ2Fuλ
1− λB F 2−1
1− λB F 2−1
−λ2FuλJ
1− λB F 2−1
F (4.8)
where the third term comes from the −1)J piece. Note the cancellation. We get,
deΛ = λF J [1/(1 − λB [(F²−1)/K] J)] F + λ²F uλ [1/(1 − λB [(F²−1)/K] J)] F²J [1/(1 − λB [(F²−1)/K] J)] F
    = ( 1 + λF uλ [1/(1 − λB [(F²−1)/K] J)] F ) λF J [1/(1 − λB [(F²−1)/K] J)] F
    = eΛ λF J [1/(1 − λB [(F²−1)/K] J)] F    (4.9)
thus we have a pure gauge expression for the marginal solution.
To further emphasize the duality with the superstring, note that for the pure gauge problem
the role of the eta zero mode is played by the lightcone derivative:
d̄ ∼ ∂/∂x⁺    (4.10)
In particular we have solved the equation,
d̄ ( e−Λ deΛ ) = 0    (4.11)
Though there are many pure gauge trajectories generated by FXF , only a trajectory which
in addition satisfies this equation will be a well-defined, nontrivial solution once spacetime is
compactified.
5 Conclusion
In this note, we have constructed analytic solutions of open superstring field theory describing
marginal deformations generated by vertex operators with vanishing operator products. We
have not attempted to perform any detailed calculations with these solutions, though such
calculations are certainly possible. The really important questions about marginal solutions—
such as mapping out the relation between CFT and OSFT marginal parameters, obtaining
analytic solutions for vertex operators with singular OPEs, or proving Sen’s rolling tachyon
conjectures[22]—require more work even for the bosonic string. Hopefully progress will translate
directly to the superstring.
For us, the main motivation was the hope that marginal solutions could give us a hint
about how to construct the vacuum for the open superstring. Indeed, for the bosonic string
the marginal and vacuum solutions are closely related: To get the vacuum solution (up to the
ψN piece), one simply replaces J with d(Bc) = cKBc and takes the limit λ → ∞.¹⁰ Perhaps a
similar trick will work for the superstring.
The author would like to thank A. Sen and D. Gross for conversations, and A. Bagchi for
early collaboration. The author also thanks Y. Okawa for correspondence which motivated
discovery of the unitary analytic solution presented in appendix B. This work was supported
in part by the National Science Foundation under Grant No.NSF PHY05-51164 and by the
Department of Atomic Energy, Government of India.
10The λ used here and the λ parameterizing the pure gauge solutions of Schnabl[1] are related by λ(Schnabl) =
A B0,L0 with Split Strings
In many analytic computations in OSFT it is useful to invoke the operators B0,L0 and their
cousins[1, 4]. To avoid unnecessary transcriptions of notation, it is nice to accommodate these
types of operations in the split string formalism.
We begin by defining the fields,
L = (L0)L|I〉 L∗ = (L∗0)L|I〉 (A.1)
and their b-ghost counterparts B,B∗. We can split the operators L0,L∗0 into left/right halves
non-anomalously because the corresponding vector fields vanish at the midpoint[4]. The fields
L,L∗ satisfy the familiar special projector algebra,
[L,L∗] = L+ L∗ (A.2)
Following ref.[4] we may define even/odd combinations,
L+ = L + L∗ = −K ,    L− = L − L∗    (A.3)
where K is the field introduced before. Note that we have,
L0 ·Ψ = LΨ+ΨL∗
B0 ·Ψ = BΨ+ (−1)ΨΨB∗ (A.4)
We can use similar formulas to describe the many related operators introduced in ref.[4].
Let us now describe a few convenient facts. Let J(z) be a vertex operator for a state J(0)|Ω〉
in Schnabl gauge, and let J = J(1)|I〉 be its corresponding field. Then,
[B−, J ] = 0 (A.5)
where [, ] is the graded commutator. A similar result [L−, J ] = 0 holds if J(0)|Ω〉 is killed by
L0. We also have the useful formulas,
LF = (1/2) FL− ,    FL∗ = −(1/2) L−F ,    [L−, Ωγ] = 2γKΩγ    (A.6)
The third equation is a special case of,
[L−, G(K)] = 2KG′(K) (A.7)
with similar formulas involving B,B∗. Of course, these equations are well-known consequences
of the Lie algebra eq.(A.2).
As an application, let us prove the identity,
(B0/L0) ( J1(0)|Ω〉 ∗ J2(0)|Ω〉 ) = (−1)^{J1} F J1 B (F²−1)/K J2 F    (A.8)
where J1, J2(0)|Ω〉 are killed by B0,L0. This expression occurs when constructing the marginal
solution (bosonic or superstring) in Schnabl gauge. The direct approach is to compute L0^{−1}
on the left hand side in split string notation; the resulting derivation is fairly reminiscent of
ref.[14]. Instead, we will multiply this equation by L0 and prove that both sides are equal. The
left hand side gives,
B0 · FJ1F²J2F = BFJ1F²J2F + (−1)^{J1+J2} FJ1F²J2F B∗
             = (1/2)(−1)^{J1} FJ1 [B−, F²] J2F
             = (−1)^{J1} FJ1BF²J2F    (A.9)
The right hand side gives,
L0 · FJ1B (F²−1)/K J2F = LFJ1B (F²−1)/K J2F + FJ1B (F²−1)/K J2F L∗
                       = (1/2) FJ1 [L−, B (F²−1)/K] J2F
                       = FJ1B (F²−1)/K J2F + (1/2) FJ1B [L−, (F²−1)/K] J2F    (A.10)
Focus on the commutator:
[L−, (F²−1)/K] = [L−, F²]/K + (F²−1) [L−, 1/K] = 2F² − 2 (F²−1)/K    (A.11)
where we used eq.(A.7). This computation is somewhat formal because the inverse of K is
not generally well defined, but it can be checked using the integral representation eq.(2.12).
Plugging the commutator back in, the (F²−1)/K terms cancel and we are left with,
L0 · FJ1B (F²−1)/K J2F = FJ1BF²J2F    (A.12)
which after multiplying by (−1)J1 establishes the result.
Before concluding, we mention that any state of the form,
FJ1BG2(K)J2 ... BGn(K)JnF (A.13)
with [B−, Ji] = 0, is in Schnabl gauge. The proof follows at once upon noting,
[B−, BG(K)] = −2B2G′(K) = 0 (A.14)
so the entire expression between the F s commutes with B−. This is one way of seeing that the
nontrivial part of the group element eΦ − 1 for the superstring solution is in Schnabl gauge.
B Unitary eΦ
The analytic solution eq.(3.23) is very simple, but it has the disadvantage of not satisfying the
standard reality condition, i.e. eΦ is not unitary and Φ is not imaginary. Presumably there
is an infinite dimensional array of marginal solutions which do satisfy the reality condition,
and some may have analytic descriptions. In this appendix we give one construction which
is particularly closely related to our solution eq.(3.23). For a very interesting and completely
different solution, we refer the reader to an upcoming paper by Okawa[25].
Our strategy will be to find a finite gauge transformation of g in eq.(3.23) yielding a unitary
solution. The transformation is,
U = V g (B.1)
where V is some string field of the form,
V = 1 + dv (B.2)
with v carrying ghost number −1. A little thought reveals a natural candidate for V :
V = 1/√(g g†)    (B.3)
where g† is the conjugate of eq.(3.23):
g† = 1 − λF [1/(1 − JλB (F²−1)/K)] XF    (B.4)
and we use the Hermitian definition of the square root. Intuitively, this is just taking the
original solution and dividing by its “norm.” More explicitly, if we define,
gg† = 1 + T ,
T = λFX [1/(1 − λB [(F²−1)/K] J)] F − λF [1/(1 − JλB (F²−1)/K)] XF
    − λ²FX [1/(1 − λB [(F²−1)/K] J)] F² [1/(1 − JλB (F²−1)/K)] XF    (B.5)
then the required gauge transformation is given by the formal sum,
V = (1 + T)^{−1/2} = Σ_{n≥0} \binom{−1/2}{n} T^n    (B.6)
This proposal must be subject to two consistency checks. First, of course, is that the field
U is actually unitary. The proof is straightforward:
UU† = (1/√(gg†)) gg† (1/√(gg†)) = 1 ,    U†U = g† (1/(gg†)) g = g†(g†)−1 g−1 g = 1    (B.7)
The second check is that V is a gauge transformation of the form eq.(B.2). This follows if the
field T is BRST exact, T = du, since then we can write (for example),
V = 1 + d ( Σ_{n≥1} \binom{−1/2}{n} u T^{n−1} )    (B.8)
A little guesswork reveals the following BRST exact expression for T :
T = d
1− λB F 2−1
F 2 − 1
(B.9)
This establishes not only that U is an analytic solution, but (perhaps more importantly) that
the simpler expression g is in the same gauge orbit with a solution satisfying the physical reality
condition. This leaves no question as to the physical viability of our original analytic solution
eq.(3.23).
As usual, the unitary solution U can be defined explicitly in terms of cylinder correlators by
expanding eq.(B.1) as a power series in λ. Unfortunately this is somewhat tedious because the
implicit dependence on λ in eq.(B.1) is complicated. As an expansion for the imaginary field
Φ, the first two orders agree with the Schnabl gauge solution (as they must11), while at third
order we find:
F 2 − 1
F 2 − 1
JF + FJB
F 2 − 1
F 2 − 1
FXF 2JB
F 2 − 1
+ FJB
F 2 − 1
F 2 − 1
JF 2XF + F 2XB
F 2 − 1
(FXF )3 (B.10)
This expression is much simpler than the Schnabl gauge solution at third order, which involves
intricate constrained and entangled integrals over moduli separating vertex operator insertions.
11The reality condition fixes the form of the second order solution uniquely within the K,B, J,X subalgebra.
References
[1] M. Schnabl, “Analytic solution for tachyon condensation in open string field theory,” Adv.
Theor. Math. Phys. 10 (2006) 433-501, arXiv:hep-th/0511286.
[2] Y. Okawa, “Comments on Schnabl’s analytic solution for tachyon condensation in Witten’s
open string field theory,” JHEP 0604, 055 (2006), arXiv:hep-th/0603159.
[3] E. Fuchs and M. Kroyter, “On the validity of the solution of string field theory,” JHEP
0605 006 (2006), arXiv:hep-th/0603195.
[4] L. Rastelli and B. Zwiebach, “Solving open string field theory with special projectors,”
arXiv:hep-th/0606131.
[5] I. Ellwood and M. Schnabl, “Proof of vanishing cohomology at the tachyon vacuum,”
JHEP 0702 (2007) 096, arXiv:hep-th/0606142.
[6] H. Fuji, S. Nakayama, and H. Suzuki, “Open string amplitudes in various gauges,” JHEP
0701 (2007) 011, arXiv:hep-th/0609047.
[7] Y. Okawa, L. Rastelli and B. Zwiebach, “Analytic Solutions for Tachyon Condensation with
General Projectors,” arXiv:hep-th/0611110.
[8] T. Erler, “Split String Formalism and the Closed String Vacuum,”
arXiv:hep-th/0611200. T. Erler, “Split String Formalism and the Closed String
Vacuum, II” arXiv:hep-th/0612050.
[9] I. Ellwood, J. Shelton, and W. Taylor, “Tadpoles and Closed String Backgrounds in Open
String Field Theory,” JHEP 0307 (2003) 059, arXiv:hep-th/0304258.
[10] N. Berkovits, “Super-Poincare Invariant Superstring Field Theory,” Nucl. Phys. B450
(1995) 90, arXiv:hep-th/9503099; N. Berkovits, “A New Approach to Superstring Field
Theory,” proceedings to the 32nd International symposium Ahrenshoop on the Theory of
Elementary Particles, Fortschritte der Physik 48 (2000) 31, arXiv:hep-th/9912121.
[11] A. Sen and B. Zwiebach, “Large Marginal Deformations in String Field Theory,” JHEP
0010 (2000) 009, arXiv:hep-th/0007153.
[12] A. Iqbal and A. Naqvi, “On Marginal Deformations in Superstring Field Theory,” JHEP
0101 (2001) 040, arXiv:hep-th/0008127.
[13] M. Schnabl, “Comments on Marginal Deformations in Open String Field Theory,”
arXiv:hep-th/0701248.
[14] M. Kiermaier, Y. Okawa, L. Rastelli and B. Zwiebach, “Analytic Solutions for Marginal De-
formations in Open String Field Theory,” arXiv:hep-th/0701249.
[15] J. Kluson, “Exact solutions in SFT and marginal deformations in BCFT,” JHEP 0312,
050 (2003), arXiv:hep-th/0303199; T. Takahashi and S. Tanimoto, “Wilson lines and
classical solutions in cubic open string field theory,” Prog. Theor. Phys. 106 863 (2001),
arXiv:hep-th/0107046.
[16] I. Kishimoto and T. Takahashi, “Marginal deformations and classical solutions in open
superstring field theory,” JHEP 0511, 051 (2005) arXiv:hep-th/0506240.
[17] D. J. Gross and W. Taylor, “Split String Field Theory. I,II” JHEP 0108 009 (2001),
arXiv:hep-th/0105059, JHEP 0108 010 (2001), arXiv:hep-th/0106036; L. Rastelli, A.
Sen, B. Zwiebach, “Half-strings, Projectors, and Multiple D-branes in Vacuum String Field
Theory,” JHEP 0111 (2001) 035, arXiv:hep-th/0105058; I. Bars, “Map of Witten’s ⋆
to Moyal’s ⋆,” Phys. Lett. B517 (2001) 436-444, arXiv:hep-th/0106157; M. R. Douglas,
H. Liu, G. Moore and B. Zwiebach, “Open String Star as a Continuous Moyal Product,”
JHEP 0204 (2002) 022, arXiv:hep-th/0202087.
[18] Y. Okawa, “Analytic Solutions for Marginal Deformations in Open Superstring Field The-
ory,” arXiv:0704.0936.
[19] N. Berkovits, “Review of Open Superstring Field Theory,” arXiv:hep-th/0105230.
[20] K. Ohmori, “A Review on Tachyon Condensation in Open String Field Theories,”
arXiv:hep-th/0102085.
[21] P. De Smet, “Tachyon Condensation: Calculations in String Field Theory,”
arXiv:hep-th/0109182.
[22] A. Sen, “Rolling Tachyon,” JHEP 0204 (2002) 048, arXiv:hep-th/0203211; A. Sen,
“Tachyon Matter,” JHEP 0207 (2002) 065, arXiv:hep-th/0203265.
[23] N. Berkovits, “The Tachyon Potential in Open Neveu-Schwarz String Field Theory,” JHEP
0004 (2000) 022, arXiv:hep-th/0001084.
[24] N. Berkovits, A. Sen and B. Zwiebach, “Tachyon Condensation in Superstring Field The-
ory,” Nucl. Phys. B587 (2000) 147-178, arXiv:hep-th/0002211.
[25] Y. Okawa, to appear.
0704.0931 | The Isophotal Structure of Early-Type Galaxies in the SDSS: Dependence on AGN Activity and Environment |
To appear in ApJ
The Isophotal Structure of Early-Type Galaxies in the SDSS:
Dependence on AGN Activity and Environment.
Anna Pasquali, Frank C. van den Bosch and Hans-Walter Rix
Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg, Germany
[email protected], [email protected], [email protected]
ABSTRACT
We study the dependence of the isophotal shape of early-type galaxies on their ab-
solute B-band magnitude, MB , their dynamical mass, Mdyn, and their nuclear activity
and environment, using an unprecedented large sample of 847 early-type galaxies identi-
fied in SDSS by Hao et al. (2006a). We find that the fraction of disky galaxies smoothly
decreases from fdisky ∼ 0.8 at MB − 5 log(h) = −18.7 (Mdyn = 6 × 1010h−1M⊙) to
∼ 0.5 at MB − 5 log(h) = −20.8 (Mdyn = 3 × 1011h−1M⊙). The large sample allows
us to describe these trends accurately with tight linear relations that are statistically
robust against the uncertainty in the isophotal shape measurements. There is also a
host of significant correlations between fdisky and indicators of nuclear activity (both
in the optical and in the radio) and environment (soft X-rays, group mass, group hier-
archy). Our analysis shows however that these correlations can be accurately matched
by assuming that fdisky only depends on galaxy luminosity or mass. We therefore con-
clude that neither the level of activity, nor group mass or group hierarchy help in better
predicting the isophotal shape of early-type galaxies.
Subject headings: galaxies: active — galaxies: elliptical and lenticular, cD — galaxies:
Seyfert — galaxies: structure
1. Introduction
Early-type galaxies form a remarkably homogeneous class of objects with a well-defined Funda-
mental Plane and with tight relations between colour and magnitude, between colour and velocity
dispersion, and between the amount of α-element enhancement and velocity dispersion (e.g., Faber
& Jackson 1976; Visvanathan & Sandage 1977; Dressler 1987; Djorgovski & Davis 1987; Bower et
al. 1992; Ellis et al. 1997). They have old stellar populations, though sometimes with a younger
component (Trager et al. 2000; Serra et al. 2006; Kuntschner et al. 2006), contain little ionized and
cold gas (Sarzi et al. 2006; Morganti et al. 2006), and are preferentially located in massive dark
matter halos (e.g., Dressler 1980; Weinmann et al. 2006).
Ever since the seminal work by Davies et al. (1983), however, it has become clear that early-
type galaxies encompass two distinct families. Davies et al. showed that bright ellipticals typically
have little rotation, such that their flattening must originate from anisotropic pressure. This is
consistent with bright ellipticals being in general triaxial. Low luminosity ellipticals, on the other
hand, typically have rotation velocities that are consistent with them being oblate isotropic rotators.
With the advent of CCD detectors, it quickly became clear that these different kinematic classes also
have different morphologies. Although ellipticals have isophotes that are ellipses to high accuracy,
there are small deviations from perfect ellipses (e.g., Lauer 1985; Carter 1987; Bender & Möllenhoff
1987). In particular, bright, pressure-supported systems typically have boxy isophotes, while the
lower luminosity, rotation-supported systems often reveal disky isophotes (e.g., Bender 1988; Nieto
et al. 1988). With the high angular resolution of the Hubble Space Telescope it has become clear
that both types have different central surface brightness profiles as well. The bright, boxy ellipticals
typically have luminosity profiles that break from steep outer power-laws to shallow inner cusps
(often called ‘cores’). The fainter, disky ellipticals, on the other hand, have luminosity profiles that
lack a clear break and have a steep central cusp (e.g., Jaffe et al. 1994; Ferrarese et al. 1994; Lauer
et al. 1995; Gebhardt et al. 1996; Faber et al. 1997; Rest et al. 2001; Ravindranath et al. 2001;
Lauer et al. 2005).
The isophotal shapes of early-type galaxies have also been found to correlate with the radio
and X-ray properties of elliptical galaxies (Bender et al. 1989; Pellegrini 1999). Objects which
are radio-loud and/or bright in soft X-ray emission generally have boxy isophotes, while disky
ellipticals are mostly radio-quiet and faint in soft X-rays. As shown in Pellegrini (2005), the soft
X-ray emission of power-law (and hence disky) ellipticals is consistent with originating from X-ray
binaries. Ellipticals with a central core (which are mainly boxy), however, often have soft X-ray
emission in excess of what may be expected from X-ray binaries. This emission originates from
a corona of hot gas which is distributed beyond the optical radius of the galaxy (e.g., Trinchieri
& Fabbiano 1985, Canizares et al. 1987; Fabbiano 1989). In terms of the radio and hard X-ray
emission, thought to originate from active galactic nuclei (AGN), it is found that those ellipticals
with the highest luminosities in radio and/or hard X-rays are virtually always boxy (Bender et
al. 1989; Pellegrini 2005). This is consistent with the results of Ravindranath et al. (2001), Lauer
et al. (2005) and Pellegrini (2005), all of whom find a somewhat higher fraction of ellipticals with
optical AGN activity (i.e., nuclear line emission) among cored galaxies.
The above mentioned trends between isophotal shape and galaxy properties have mainly been
based on relatively small, somewhat heterogeneous samples of relatively few objects (≲ 100). Re-
cently, however, Hao et al. (2006a, hereafter H06) compiled a sample of 847 nearby, early-type
galaxies from the Sloan Digital Sky Survey (SDSS) for which they measured the isophotal shapes.
Largely in agreement with the aforementioned studies they find that (i) more luminous galaxies
are on average rounder and are more likely to have boxy isophotes (ii) disky ellipticals favor field
environments, while boxy ellipticals prefer denser environments, and (iii) disky ellipticals tend to
lack powerful radio emission, although this latter trend is weak.
The prevailing idea as to the origin of this disky-boxy dichotomy is that it reflects the galaxy’s
assembly history. Within the standard, hierarchical formation picture, in which ellipticals are
formed via mergers, the two main parameters that determine whether an elliptical will be boxy
and cored or disky and cuspy are the progenitor mass ratio and the progenitor gas mass fractions.
Pure N -body simulations without gas show that the isophotal shapes of merger remnants depend
sensitively on the progenitor mass ratio: major mergers create ellipticals with boxy isophotes,
while minor mergers mainly result in systems with disky isophotes (Khochfar & Burkert 2005,
Jesseit et al. 2005). As shown by Naab et al. (2006), including even modest amounts of gas
has a dramatic impact on the isophotal shape of equal-mass merger remnants. The gas causes a
significant reduction of the fraction of box and boxlet orbits with respect to collisionless mergers,
and the remnant appears disky rather than boxy. Therefore, it seems that the massive, boxy
ellipticals can only be produced via dry, major mergers. The cores in these boxy ellipticals are
thought to arise from the binding energy liberated by the coalescence of supermassive binary black
holes during the major merger event (e.g., Faber et al. 1997; Graham et al. 2001; Milosavljević et
al. 2002). When sufficient gas is present, however, dissipation and sub-sequent star formation may
regenerate a central cusp. Alternatively, the gas may serve as an energy sink for the binding energy
of the black hole binary, leaving the original stellar cusp largely intact. Thus, following Lauer
et al. (2005), we may summarize this picture as implying that power-laws reflect the outcome of
dissipation and concentration, while cores owe to mixing and scattering.
But what about the correlation between isophotal shape and AGN activity? It is tempting
to believe that this correlation simply derives from the fact that both isophotal shape and AGN
activity may be related to mergers. After all, it is well known that mergers can drive nuclear inflows
of gas, which produce starbursts and feed the central supermassive black hole(s) (Toomre & Toomre
1972, Barnes & Hernquist 1991,1996, Mihos & Hernquist 1994,1996, Springel 2000, Cattaneo et
al. 2005). However, since the onset of such AGN activity requires wet mergers, this would predict
a higher frequency of AGN among disky ellipticals, contrary to the observed trend. Another
argument against mergers being responsible for the AGN-boxiness correlation is that the time scale
for merger-induced AGN activity is relatively short ( <∼ 10
8 yrs) compared to the dynamical time
in the outer parts of the merger remnant. This implies that active ellipticals should reveal strongly
distorted isophotes, which is not the case.
An important hint may come from the strong correlation between the presence of dust (either
clumpy, filamentary, or in well defined rings and disks) and the presence of optical emission line
activity (Tran et al. 2001; Ravindranath et al. 2001; Lauer et al. 2005). Although this suggests
that this dust is (related to) the actual fuel for the AGN activity, many questions remain. For
instance, it is unclear whether the origin of the dust is internal (shed by stellar winds) or external
(see Lauer et al. 2005 for a detailed discussion). In addition, it is not clear why the presence of dust,
and hence the AGN activity, would be more prevalent in boxy ellipticals. One option is that boxy
ellipticals are preferentially central galaxies (as opposed to satellites), so that they are more efficient
at accreting external gas (and dust). This is consistent with the fact that boxy ellipticals (i) are, on
average, brighter, (ii) reside in dense environments (Shioya & Taniguchi 1993; H06), and (iii) more
often contain hot, soft X-ray emitting halos. Another, more benign possibility, is that the relation
between morphology and AGN activity is merely a reflection of the fact that both morphology and
AGN activity depend on the magnitude of the galaxy (or on its stellar or dynamical mass). In this
case, AGN activity is only indirectly related to the morphology of its host galaxy.
In this paper we use the large data set of H06 to re-investigate the correlations between
morphology and (i) luminosity, (ii) dynamical mass, and (iii) emission line activity in the optical,
where we discriminate between AGN activity and star formation. In addition, we also examine to
what extent morphology correlates with X-ray emission (using data from ROSAT), with 1.4GHz
radio emission (using data from FIRST), and with the mass of the dark matter halo in which the
galaxy resides (using a SDSS galaxy group catalog). The outline of this paper is as follows. In § 2
we describe the data of H06; in § 3 we present the fraction of disky galaxies across the full sample
as a function of galaxy luminosity, dynamical mass and environment. In § 4 we split the sample
galaxies according to their activity in the optical, radio and X-rays, and investigate how the disky-
boxy morphology correlates with these various levels of ‘activity’. Finally, in § 5 we summarize
and discuss our findings. Throughout this paper we adopt a ΛCDM cosmology with Ωm = 0.3,
ΩΛ = 0.7, and H0 = 100h kms
−1 Mpc−1. Magnitudes are given in the AB system.
2. Data
2.1. Sample Selection
In order to investigate the interplay among AGN activity, morphology and environment for
early-type galaxies, we have analyzed the sample of H06, which consists of 847 galaxies in the SDSS
DR4 (Adelman-McCarthy et al. 2006) classified as ellipticals (E) or lenticulars (S0). As described
in H06, these objects are selected to be at z < 0.05, in order to ensure sufficient spatial resolution
to allow for a meaningful measurement of the isophotal parameters. In addition, the galaxies are
selected to have an observed velocity dispersion between 200 km s−1 and 420 km s−1 (where the
upper limit corresponds to the largest velocity dispersion that can be reliably measured from the
SDSS spectra), and are not allowed to be saturated. Note that, for the median sample distance,
the fiber radius of 1.5 arcsec corresponds to about 30% of the sample mean effective radius. From
all galaxies that obey these criteria, early-types have been selected by H06 using visual inspection.
Galaxies with prominent dust lanes have been excluded from the final sample in order to reduce
the effects of dust on the isophotal analysis.
2.2. Isophotal Analysis
Isophotes are typically parameterized by their corresponding surface brightness, I0, their semi-
major axis length, a, their ellipticity, ǫ, and their major axis position angle, θ0. In addition, since
isophotes are not perfectly elliptical, it is common practice to expand the angular intensity variation
along the best fit ellipse, δI(θ), in a Fourier series:
δI(θ) = Σn [ A′n cos n(θ − θ0) + B′n sin n(θ − θ0) ]    (1)
(e.g., Carter 1987; Jedrzejewski 1987; Bender & Möllenhoff 1987). Only the terms with n = 3 and
n = 4 are usually computed, as the data is often too noisy to reliably measure higher-order terms
(but see Scorza & Bender 1995 and Scorza & van den Bosch 1999). Note that, by definition, the
terms with n = 0, 1, and 2 are equal to zero within the errors. If the isophote is perfectly elliptical,
then A′n and B′n are also equal to zero for n ≥ 3. Non-zero A′3 and B′3 express deviations from
a pure ellipse that occur along the observed isophote every 120°. Typically, such deviations arise
from the presence of dust features or clumps. The most important Fourier coefficient, however, is
A′4, which quantifies the deviations taking place along the major and minor axes. Isophotes with
A′4 < 0 have a ‘boxy’ shape, while those with a positive A′4 parameter are ‘disk’-shaped.
For each of the 847 E/S0 galaxies in their sample H06 measured the isophotal parameters
using the IRAF1 task ELLIPSE. In particular, for each galaxy they provide the ellipticity, ǫ, the
position angle of the major axis, θ0, and the third and fourth order Fourier coefficients A3 and A4,
which are equal to A′3 and A
4, respectively, divided by the semi-major axis length and the local
intensity gradient. All the available parameters are intensity-weighted averages over the radial
interval 2Rs < R < 1.5R50. Here Rs is the seeing radius (typically lower than 1.5 arcsec, Stoughton
et al. 2002) and R50 is the Petrosian half-light radius
2. The Petrosian radius is defined as the
radius at which the ratio of the local surface brightness to the mean interior surface brightness is
0.2 (cf. Strauss et al. 2002). Therefore, R50 is the radius enclosing half of the flux measured within
a Petrosian radius and can be used as a proxy for the galaxy effective radius Re. In what follows,
we refer to galaxies with A4 ≤ 0 and A4 > 0 as ‘boxy’ and ‘disky’, respectively.
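As an illustration of the measurement just described, the normalized coefficients can be obtained by projecting the intensity residuals along the fitted ellipse onto cos n(θ − θ0) and sin n(θ − θ0). The sketch below is schematic: it assumes residuals sampled uniformly in angle on a single isophote, whereas H06 use intensity-weighted averages over 2Rs < R < 1.5R50.

import numpy as np

def normalized_fourier_coefficients(theta, delta_I, a, dI_dr, theta0=0.0, nmax=4):
    """Project the intensity residuals delta_I(theta) along a best-fit ellipse
    onto cos n(theta-theta0) and sin n(theta-theta0), and normalize by the
    semi-major axis length a and the local intensity gradient dI_dr,
    returning the A_n, B_n of the text (n = 3, ..., nmax)."""
    theta, delta_I = np.asarray(theta), np.asarray(delta_I)
    A, B = {}, {}
    for n in range(3, nmax + 1):
        Ap = 2.0*np.mean(delta_I*np.cos(n*(theta - theta0)))   # A'_n
        Bp = 2.0*np.mean(delta_I*np.sin(n*(theta - theta0)))   # B'_n
        A[n] = Ap/(a*abs(dI_dr))
        B[n] = Bp/(a*abs(dI_dr))
    return A, B

# a galaxy is then called 'disky' if its radially averaged A[4] > 0, 'boxy' otherwise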
In their seminal papers on the isophotal shapes of elliptical galaxies, Bender & Möllenhoff
(1987), Bender et al. (1988) and Bender et al. (1989) define alternative structural parameters,
an/a and bn/a, which are related to the An and Bn parameters defined here as
an/a = √(1 − ǫ) An ,    bn/a = √(1 − ǫ) Bn    (2)
(Bender et al. 1988; Hao et al. 2006b).
1IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of
Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation
2These data are publicly available at http://www.jb.man.ac.uk/∼smao/isophote.html
http://www.jb.man.ac.uk/~smao/isophote.html
2.3. Additional data
For all galaxies in the H06 sample we determined the absolute magnitudes in the SDSS g, r and
i bands, corrected for Galactic extinction, and K-corrected to z = 0, using the luminosity distances
corrected for Virgo-centric infall of the Local Group (LG) following Blanton et al. (2005). In order
to allow for a comparison with the samples of Bender et al. (1989) and Pellegrini (1999, 2005), we
transform these magnitudes to those in the Johnson B-band using the filter transformations given
by Smith et al. (2002).
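The distance-related part of this step can be sketched with astropy as follows; the K-correction, Galactic extinction and Virgo-centric infall corrections applied in the text are omitted here, and H0 = 100 km s−1 Mpc−1 keeps the explicit h-dependence of the magnitudes.

from astropy.cosmology import LambdaCDM

# Omega_m = 0.3, Omega_Lambda = 0.7, H0 = 100 h km/s/Mpc
cosmo = LambdaCDM(H0=100.0, Om0=0.3, Ode0=0.7)

def absolute_magnitude(m_apparent, z):
    """Absolute magnitude from the apparent magnitude and redshift,
    without K-correction, extinction or peculiar-velocity corrections."""
    return m_apparent - cosmo.distmod(z).value

# e.g. absolute_magnitude(16.5, 0.04)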
We also estimated, for each galaxy, the total dynamical mass as
Mdyn = A σcorr² R50 / G    (3)
Here G is the gravitational constant, A is a normalization constant, and σcorr is the velocity dis-
persion measured from the SDSS spectra corrected for aperture effects using
σcorr = σmeasured ( Rfiber / (R50/8) )^{0.04} ,    (4)
with Rfiber = 1.5 arcsec (Bernardi et al. 2003). The aperture correction is meant to give the velocity
dispersion within R50/8, and to make comparable galaxies at different distance but sampled with a
spectroscopic fiber of fixed size. Throughout this paper we adopt A = 5, which has been shown to
accurately reproduce the total dynamical masses inferred from more accurate modeling (Cappellari
et al. 20063). Note that Cappellari et al. have also shown that these dynamical masses are roughly
proportional to the stellar masses of early-type galaxies.
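A compact numerical transcription of equations (3) and (4); the unit handling and the example values are illustrative.

from astropy import constants as const, units as u

def dynamical_mass(sigma_measured, r50_arcsec, r50_kpc, A=5.0, r_fiber=1.5):
    """Equations (3) and (4): M_dyn = A sigma_corr^2 R50 / G with
    sigma_corr = sigma_measured (R_fiber/(R50/8))**0.04.
    sigma_measured in km/s, r50_arcsec and r_fiber in arcsec, r50_kpc in kpc."""
    sigma_corr = sigma_measured*(r_fiber/(r50_arcsec/8.0))**0.04
    mdyn = A*(sigma_corr*u.km/u.s)**2*(r50_kpc*u.kpc)/const.G
    return mdyn.to(u.Msun)

# e.g. dynamical_mass(250.0, 3.0, 4.0) gives a few times 10^11 solar masses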
H06 cross-correlated their E/S0 sample with the FIRST radio survey (Becker, White & Helfand
1995), which yielded the 1.4GHz fluxes for 162 objects in the sample. In order to investigate the
relation between isophotal structure and soft X-ray properties, we also matched the H06 sample to
the ROSAT All Sky Survey Catalog (Voges et al. 1999). This yields ROSAT/PSPC count-rates in
the 0.1 – 2.4 keV energy band for 40 sample galaxies. We used the WebPIMMS tool4 to transform
the observed count-rates into astrophysical fluxes, corrected for Galactic extinction, assuming an
X-ray power-law spectrum with energy index αX = 1.5 (cf. Anderson et al. 2003). In addition,
we cross-identified the H06 sample with the spectroscopic catalogs released for DR4 by Kauffmann
et al. (2003a,b), and extracted, when available, the luminosity of the [OIII] λ5007 line corrected
for dust extinction (in erg s−1), the line-flux ratios [OIII]/Hβ and [NII]/Hα, and the S/N values
associated with the [OIII] and Hα fluxes.
3Cappellari et al. (2006) use a slightly different definition of σcorr in equation (3), namely that measured within
Re rather than R50/8. Given the weak dependence of the velocity dispersion on the enclosed radius, we estimate
that this difference results in an offset of ∼0.07 dex in Mdyn
4http://heasarc.gsfc.nasa.gov/Tools/w3pimms.html
Finally, in order to assess the environment of the sample galaxies, we cross-identified the
H06 sample with the SDSS group catalog of Weinmann et al. (2006; hereafter WBYM), which is
constructed using the halo-based group finder of Yang et al. (2005). This yields group (i.e. dark
matter halo) masses for a total of 431 galaxies, distributed over 403 groups. Of these, 350 are
‘central’ galaxies (defined as the brightest galaxy in its group) and 81 are ‘satellites’. As for the
groups, 83 have just a single member (the early-type galaxy in our sample), while 320 groups have
2 or more members. That only 51 percent of the galaxies in the H06 sample are affiliated with a
group reflects the fact that the WBYM group catalog is based on the DR2, and that not all galaxies
can be assigned to a group (see Yang et al. 2005 for details).
3. The disky fraction across the sample
The main properties of the full H06 sample (comprising all 847 early-type galaxies) are summa-
rized in Figure 1. The sample spans about 3.6 magnitudes in MB (−17.8 > MB − 5 log(h) > −21.4),
a range of about 1.5 dex in dynamical mass (10.5 < log[Mdyn/(h−1M⊙)] < 12), and 3 dex
in group (halo) mass (11.8 < log[Mgroup/(h−1M⊙)] < 15). As expected, the B-band magnitude is
well correlated with the dynamical mass, independent of whether the galaxy is a central galaxy or a
satellite, and independent of whether it is disky or boxy. The absolute magnitudes and dynamical
masses of satellite galaxies are clearly separated from those of the central galaxies when plotted as
function of the group (halo) mass. This simply reflects that centrals are defined as the brightest
(and, hence, most likely the most massive) group members. This clear segregation disappears when
the galaxies are split in disky and boxy systems (lower panels), indicating that there is no strong
correlation between morphology and group hierarchy.
The upper panels of Figure 2 show scatter plots of MB, Mdyn and Mgroup as function of the
isophotal parameter A4. They indicate that the fraction of disky systems (those with A4 > 0)
increases with decreasing luminosity and dynamical mass, in qualitative agreement with Bender et
al. (1989) and H06. In the case of Mgroup, a similar trend seems to be present, but only for the
central galaxies. In order to quantify these trends, we have computed the fraction, fdisky, of disky
galaxies as a function of MB , Mdyn and Mgroup. For each bin in absolute magnitude, dynamical
mass, or group mass, fdisky is defined as the number ratio between disky galaxies and the total
number of galaxies in that bin. Each bin contains at least ten disky galaxies. For comparison, the
disky fraction of the full H06 sample is 0.66.
The lower left-hand panel of Figure 2 plots fdisky as function of MB . The errorbars are
computed assuming Poisson statistics. The fraction of disky galaxies declines by a factor of about
1.6 from ∼ 0.8 at MB − 5 log(h) = −18.7 to ∼ 0.5 at MB − 5 log(h) = −20.8, and is well fitted by
fdisky(MB) = (0.61 ± 0.02) + (0.17 ± 0.03) [MB − 5 log(h) + 20] (5)
which is shown as the solid, grey line. Note that this relation should not be extrapolated to arbitrary
faint and/or bright magnitudes. Since 0 ≤ fdisky ≤ 1 it is clear that fdisky(MB) must flatten at
both ends of the magnitude distribution. Apparently the magnitude range covered by our sample
roughly corresponds to the range in which the distribution transits (relatively slowly and smoothly)
from mainly disky to mainly boxy.
It has to be noted that the exact relation between fdisky and MB is somewhat sensitive to the
exact sample selection criteria, and equation (5) therefore has to be used with some care.
We have tested the robustness of the above relation by adding Gaussian deviates to each
measured value of A4, and then recomputing the best-fit relation between fdisky and MB . Figure
3 shows the slope and zero-point of this relation as function of the standard deviation of the
Gaussian deviates used (filled circles). The grey shaded horizontal bar represents the 1 σ interval
around the best-fit slope (left-hand panel) and the best-fit zero-point (right-hand panel). The grey
shaded vertical bar indicates the mean uncertainty on the observed A4 parameter obtained by
H06 (σ(A4) = 0.0012 ± 0.0008). Note that the best-fit slope and zero-point are extremely robust.
Adding an artificial error to the A4 measurements with an amplitude that is a factor five larger
than the average error quoted by H06 yields best-fit values that agree with those of equation (5)
at better than the 1σ errorbar on these parameters obtained from the fit.
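Both the binned disky fractions with their Poisson errors and the Gaussian-deviate robustness test can be sketched as follows. The arrays MB and A4 stand for the H06 catalogue quantities, and the equal-occupancy binning and weighted least-squares fit are illustrative choices (the text only requires at least ten disky galaxies per bin).

import numpy as np

def disky_fraction_fit(MB, A4, sigma_A4=0.0, nbins=8, rng=None):
    """Bin galaxies in M_B, compute f_disky with Poisson errors, and fit
    f_disky = a + b [M_B - 5 log(h) + 20] as in eq.(5).  If sigma_A4 > 0,
    Gaussian deviates are first added to A4, as in the robustness test of Figure 3."""
    rng = np.random.default_rng() if rng is None else rng
    MB, A4 = np.asarray(MB, dtype=float), np.asarray(A4, dtype=float)
    if sigma_A4 > 0:
        A4 = A4 + rng.normal(0.0, sigma_A4, size=A4.shape)
    edges = np.quantile(MB, np.linspace(0.0, 1.0, nbins + 1))   # equal-occupancy bins
    idx = np.digitize(MB, edges[1:-1])
    x, f, ferr = [], [], []
    for k in range(nbins):
        sel = idx == k
        ntot, ndisky = sel.sum(), (A4[sel] > 0).sum()
        x.append(MB[sel].mean() + 20.0)
        f.append(ndisky/ntot)
        ferr.append(np.sqrt(max(ndisky, 1))/ntot)   # Poisson error on the fraction
    b, a = np.polyfit(x, f, 1, w=1.0/np.array(ferr))
    return a, b   # zero-point and slope, to be compared with eq.(5)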
The middle panel in the lower row of Figure 2 plots fdisky as function of Mdyn. As with the
luminosity, the disky fraction decreases smoothly with increasing dynamical mass, dropping from
∼ 0.80 at Mdyn = 6 × 1010h−1 M⊙ to ∼ 0.45 at Mdyn = 3 × 1011h−1 M⊙. The grey, dashed line
indicates the best-fit log-linear relation, which is given by
fdisky(Mdyn) = (0.73 ± 0.02) − (0.53 ± 0.08) log[ Mdyn / (10^11 h−1 M⊙) ]    (6)
As for equation (5), the Gaussian-deviates test shows that this relation is robust against uncertain-
ties in the A4 measurements.
As is well-known from the morphology-density relation (e.g., Dressler 1980), early-type galaxies
preferentially reside in denser environments and hence in more massive halos (e.g, Croton et al. 2005;
Weinmann et al. 2006). It is interesting to investigate whether the halo mass also determines
whether the early-type galaxies are disky or boxy. We can address this using the WBYM group
catalog described in §2.3. The lower right-hand panel of Figure 2 plots the disky fraction of centrals
(crosses) and satellites (open triangles) as function of group mass. The fraction of disky centrals
decreases with increasing group (halo) mass, declining from ∼ 0.82 at Mgroup = 1.7 × 1012h−1 M⊙
to ∼ 0.54 at Mgroup = 5.0 × 1013h−1 M⊙. For the most massive groups, we have enough satellite
galaxies to also compute their disky fraction. Interestingly, these are larger (though only marginally
so) than those of central galaxies in groups of the same mass.
Although these results seem to suggest that group mass and group hierarchy (i.e., central vs.
satellite) play a role in determining the morphology of an early-type galaxy, they may also simply
be reflections of the fact that (i) satellite galaxies are fainter than central galaxies in the same
parent halo, (ii) fainter centrals typically reside in lower mass halos (cf. Figure 1), and (iii) fainter
galaxies have a larger fdisky. In order to discriminate between these options we proceed as follows.
Under the null-hypothesis that the isophotal structure of an early-type galaxy is only governed by
the galaxy’s absolute magnitude or dynamical mass, the predicted fraction of disky systems for a
given sub-sample is simply
fdisky,0 = (1/N) Σ_{i=1}^{N} fdisky(Xi)    (7)
where Xi is either MB − 5 log(h) or log(Mdyn) of the ith galaxy in the sample, N is the number of
galaxies in the sub-sample, and fdisky(X) is the average relation between fdisky and X. The grey
solid and dashed lines in the lower right-hand panel of Figure 2 show the fdisky,0(Mgroup) thus
obtained, using equations (5) and (6), respectively. These
are perfectly consistent with the observed trends (for both the centrals and the satellites). A possible
exception is the disky fraction of central galaxies in groups with Mgroup < 3.0× 1012h−1M⊙, which
is ∼ 2.5σ higher than predicted by the null-hypothesis. Overall, however, these results support
the null-hypothesis that the morphology of an early-type galaxy depends only on its luminosity or
dynamical mass: there is no significant indication that group mass and/or group hierarchy have a
direct impact on the morphology of early-type galaxies.
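Operationally, the null-hypothesis test amounts to averaging the global fdisky(MB) relation over a sub-sample, as in equation (7), and comparing the result with the observed fraction. A minimal sketch, using the fit of eq. (5) crudely clipped to [0, 1] outside the fitted range:

import numpy as np

def fdisky_MB(MB):
    """Equation (5), crudely clipped to [0, 1] outside the magnitude range probed."""
    return np.clip(0.61 + 0.17*(np.asarray(MB) + 20.0), 0.0, 1.0)

def null_hypothesis_test(MB_sub, A4_sub):
    """Compare the observed disky fraction of a sub-sample with the
    prediction f_disky,0 of equation (7)."""
    A4_sub = np.asarray(A4_sub)
    predicted = fdisky_MB(MB_sub).mean()
    ndisky = (A4_sub > 0).sum()
    observed = ndisky/len(A4_sub)
    err = np.sqrt(max(ndisky, 1))/len(A4_sub)      # Poisson error, as in the text
    return observed, predicted, (observed - predicted)/err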
4. The disky fraction of active early-type galaxies
4.1. Defining different activity classes
In the standard unified model, AGN are divided into Type I, in which the central black hole, its
continuum emission and its broad emission-line region are viewed directly, and Type II, in which
the central engine is obscured by a dusty circumnuclear medium. Our sample of early-type galaxies
does not contain any Type I AGN, simply because these systems are not part of the main galaxy
sample in the SDSS. However, the E/S0 sample of H06 is not biased against Type II AGN. In order
to identify these systems, one needs to be able to distinguish them from early-types with some
ongoing, or very recent, star formation, which also produces narrow emission lines. Since stars and
AGN produce different ionization spectra, one can discriminate between them by using line-flux
ratios. In particular, star formation and AGN activity can be fairly easily distinguished using the
so-called BPT diagram (after Baldwin, Phillips & Terlevich 1981; see also Veilleux & Osterbrock
1987), whose most common version involves the line-flux ratios [OIII]/Hβ and [NII]/Hα.
Figure 4 plots the BPT diagram for those sample galaxies whose [OIII] λ5007 and Hα lines
have been detected with a signal-to-noise ratio S/N ≥ 3. The solid curve was derived by Kauffmann
et al. (2003b) and separates star-forming galaxies from type II AGN, with the latter lying above the
curve. We follow Kauffmann et al. (2003b) and split the Type II AGN into Seyferts, LINERs, and
Transition Objects (TOs) according to their line-flux ratios: Type II Seyferts have log([OIII]/Hβ) ≥
0.5 and log([NII]/Hα) ≥ −0.2, LINERs have log([OIII]/Hβ) < 0.5 and log([NII]/Hα) ≥ −0.2, and
all galaxies with log([NII]/Hα) < −0.2 lying above the curve are labelled TOs. Kewley et al.
(2006) have recently studied in detail the properties of LINERs and type II Seyferts, and found
that LINERs and Seyferts form a continuous progression in accretion rate, with LINERs dominating
at low accretion rates and Seyferts prevailing at high accretion rates. The results obtained by Kewley et
al. suggest that most LINERs are AGN and require a harder ionizing radiation field together with
a lower ionization parameter than Seyferts.
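The optical classification described above can be coded up directly. In the sketch below the demarcation curve is the Kauffmann et al. (2003b) form quoted from the literature (it is not spelled out in the text), and the input arrays of logarithmic line ratios are placeholders.

import numpy as np

def classify_optical(log_oiii_hb, log_nii_ha):
    """Return 'SF', 'Seyfert', 'LINER' or 'TO' for each galaxy, following the
    splits quoted in the text; inputs are log10([OIII]/Hbeta) and log10([NII]/Halpha)."""
    x, y = np.asarray(log_nii_ha), np.asarray(log_oiii_hb)
    # Kauffmann et al. (2003b) star-formation boundary (only defined for x < 0.05)
    kauffmann = 0.61/np.where(x < 0.05, x - 0.05, np.nan) + 1.3
    agn = (x >= 0.05) | (y > kauffmann)
    labels = np.where(~agn, 'SF',
              np.where(x < -0.2, 'TO',
               np.where(y >= 0.5, 'Seyfert', 'LINER')))
    return labels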
In order to increase the statistics of our subsequent analysis, we have organized the 847 galaxies
in the H06 sample into 3 categories:
1. AGN: This class consists of 28 early-type galaxies with a Seyfert-like activity and 286 early-
type galaxies with a LINER-like activity.
2. Emission-line (EL): This class consists of those galaxies that according to the BPT diagram
are star formers or transition objects, as well as those galaxies that lack one or both of the
BPT line-flux ratios, but that have an [OIII] emission line with a S/N ≥ 3. There are a total
of 383 early-type galaxies in the H06 sample that fall in this category.
3. Non-active (NA): These are the 150 galaxies that are not in the AGN or EL categories.
Therefore, these galaxies either have no emission lines at all, or have a detected [OIII] line
but with a S/N < 3. Among these, 43 objects (29 percent) show Hα emission with a S/N
≥ 3. Their presence could signal a problem with the spectroscopic pipeline, which failed to
properly measure the [OIII] line, or be real and due to an episode of star formation in its
early phases. In any case, their low S/N in [OIII] prevents us from classifying these galaxies
in one of the above two categories.
Given our aim to establish the presence/absence of a correlation between the AGN activity
and the disky/boxy morphology of the host early-type galaxy, the classification above is clearly
driven by the detection of the [OIII] line emission, which is commonly used as a proxy for the AGN
strength (cf. Kauffmann et al. 2003b, Kewley et al. 2006).
Along with these 3 categories which describe the galaxy activity in the optical, we have also
defined two additional activity classes: ‘FIRST’, which consists of the 162 sample galaxies with a
1.4GHz flux in the FIRST catalog (Becker et al. 1995), and ‘ROSAT’, containing the 40 sample
galaxies that have been detected in the ROSAT All Sky Survey (Voges et al. 1999). The soft
X-ray luminosities of these ROSAT galaxies span the range 41.3 < log[LX/(ergs
−1)] < 42.7 and
are consistent (though with large scatter), with the well known LX ∝ L2B relation (Trinchieri &
Fabbiano 1985; Canizares et al. 1987). This X-ray emission is therefore associated with a hot corona
surrounding the galaxy, rather than with X-ray binaries, and we can use it to indirectly probe the
environment where galaxies live. As shown by Bender et al. (1989) and O’Sullivan et al. (2001),
the LX ∝ LB² relation applies to X-ray luminosities between 10^40 and 10^43 erg s−1. Our ROSAT
category is thus somewhat incomplete at 40 < log[LX/(erg s−1)] < 41 and the trends discussed below
for this class should be taken with some caution.
Table 1 lists the number of galaxies in each of these five activity classes. Note that the AGN,
EL and NA classes are mutually exclusive, but that a galaxy in each of these three classes can
appear also in the FIRST and ROSAT sub-samples. The vast majority of the galaxies detected by
FIRST or ROSAT reveal activity also in the optical, and are classified as either AGN or EL. The
radio and soft X-ray detections themselves, however, are not well correlated: only 12 percent of the
galaxies detected by FIRST have also been detected in soft X-rays.
Before computing fdisky for the galaxies in these various activity classes, it is useful to examine
how their respective distributions in MB, Mgroup and L[OIII] compare. This is shown in Figures 5,
6 and 7, respectively. While the luminosity distributions of the AGN and EL galaxies are in good
agreement with that of the full sample, the galaxies detected by ROSAT are on average about half
a magnitude brighter than the galaxies in the full sample. Also the non-active and radio galaxies
are brighter than average, though the differences are less pronounced. McMahon et al. (2002)
estimated a limiting magnitude of R ≃ 20 for the optical counterparts of FIRST sources at a 97
percent completeness level. Since the apparent magnitude limit of the H06 sample is brighter than
this limit, the FIRST subsample extracted from H06 is to be considered complete. Therefore, the
shift towards higher luminosities for the galaxies detected by FIRST with respect to the full sample
in Figure 5 is real rather than an artifact due to the depth of the different surveys.
Similar trends are present with respect to the group masses: whereas AGN and EL galaxies
have group masses that are very similar to those of the full sample, galaxies detected by ROSAT and
FIRST seem to prefer more massive groups. Somewhat surprisingly, the same applies to the class
of non-active galaxies. As for the luminosity of their [OIII] line plotted in Figure 7, AGN galaxies
tend to be brighter than EL and ROSAT galaxies, while the [OIII] luminosities of FIRST galaxies
are consistent with those of the EL and AGN galaxies combined (grey shaded histogram). In
agreement with Best et al. (2005), no correlation is found between the radio and [OIII] luminosities
of the sample galaxies in common with FIRST. Finally, it is worth emphasizing that the optical
activity defined in this paper occurs at log(L[OIII]/L⊙) ≥ 4.6; therefore, the class of non-active
galaxies may also contain weak AGN with [OIII] fluxes below this limit. Using the KS-test, we
have investigated whether the various distributions are consistent with being drawn from the same
parent distribution. We have found that only EL and ROSAT are consistent, in terms of their [OIII]
luminosity, with belonging to the same population, as well as the pairs (AGN,EL) and (NA,FIRST)
in terms of their absolute magnitude, the pair (AGN,EL) with respect to their dynamical mass,
and the pairs (AGN,EL), (NA,FIRST), (NA,ROSAT) and (FIRST,ROSAT) in terms of their group
halo mass.
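Such pairwise comparisons reduce to two-sample KS tests; a minimal sketch (the 0.05 threshold used to call two sub-samples consistent is an illustrative choice, not necessarily the one adopted in the text):

from itertools import combinations
from scipy.stats import ks_2samp

def consistent_pairs(samples, alpha=0.05):
    """samples: dict mapping class name ('AGN', 'EL', 'NA', 'FIRST', 'ROSAT')
    to an array of some property (e.g. log L[OIII]).  Returns the pairs whose
    two-sample KS p-value exceeds alpha, i.e. those consistent with being
    drawn from the same parent distribution."""
    pairs = []
    for (n1, x1), (n2, x2) in combinations(samples.items(), 2):
        stat, pval = ks_2samp(x1, x2)
        if pval > alpha:
            pairs.append((n1, n2, pval))
    return pairs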
Another aspect of defining different modes of activity is to study their actual frequency, i.e.
the fraction of galaxies sharing the same kind of activity (with respect to the full sample) as a
function of MB , Mdyn and environment. This is plotted in Figure 8, where the percentage of
NA, FIRST and ROSAT galaxies increases by a factor of about 4 towards higher luminosities and
larger dynamical masses (cf. Best et al. 2005, O’Sullivan et al. 2001), and by a factor of about 3
as their hosting group halo becomes more massive. EL and AGN galaxies define a far less clear
picture; EL galaxies seem to occur at any MB and Mgroup with a constant frequency, while their
fraction decreases by a factor of about 1.5 as Mdyn gets larger. The percentage of AGN galaxies
drops by a factor of about 2 at brighter MB values. It very weakly decreases in massive group
halos, and appears quite insensitive to Mdyn. As for the hierarchy inside a group, there is a weak
indication that EL, FIRST and ROSAT galaxies are preferentially associated with central galaxies,
while satellite galaxies are more frequently NA and AGN galaxies. Within the Poisson statistics,
however, none of these trends with group hierarchy is significant.
4.2. The relation between activity and morphology
A first glance at how morphology varies with activity is provided by Table 2, which lists
fdisky for the 5 classes defined in §4.1. As for Table 1, AGN, EL and NA galaxies are mutually
exclusive, while any of them can be included in the FIRST and ROSAT categories. In this case,
fdisky is derived from the pool of galaxies common to FIRST (ROSAT) and one of the optically
active sub-samples. The ROSAT galaxies are clearly biased towards boxy shapes as their fdisky
is systematically lower than ∼ 0.50. AGN and NA galaxies with or without radio emission are
generally disky (with fdisky > 0.60). The radio emission seems to make a difference in the case
of EL galaxies: while the full sub-sample of ELs is as disky as AGN and NA galaxies, those ELs
detected by FIRST are dominated by boxy systems with fdisky ≃ 0.45.
The upper panels of Figure 9 show scatter plots of the [OIII] luminosity, radio luminosity and
X-ray luminosity as function of A4. The lower panels plot the corresponding fractions of disky
systems. In the lower left-hand panel, fdisky is plotted as function of L[OIII] for both AGN (filled
squares) and EL (filled triangles) galaxies. This shows that both AGN and EL galaxies have a disky
fraction that is consistent with that of the full H06 sample (fdisky = 0.66), and with no significant
dependence on the actual [OIII] luminosity. The grey lines (solid for AGN and dotted for EL
galaxies) indicate the disky fractions predicted under the null-hypothesis that fdisky is a function
only of MB . These predictions are in excellent agreement with the data, suggesting that the (level
of) optical activity does not help in better predicting the disky/boxy morphology of an early-type
galaxy. The only possible exception is the sub-sample of EL galaxies with log(L[OIII]/L⊙) < 5.2
which has fdisky = 0.54, approximately 3σ lower than given by the null-hypothesis.
The relatively small number of sample galaxies detected by FIRST and ROSAT prevents us
from applying the above analysis as a function of radio and/or soft X-ray luminosity. Instead
we have determined fdisky separately for the sub-samples of galaxies detected and not-detected by
FIRST or ROSAT. The results are shown in the lower middle and lower right-hand panels of Figure
9. Clearly, the disky fraction of galaxies detected by ROSAT (fdisky = 0.48 ± 0.08) is significantly
lower than those with no detected soft X-ray flux (fdisky = 0.66 ± 0.02), in agreement with the
results of Bender et al. (1989) and Pellegrini (1999, 2005). The fact that galaxies detected by
ROSAT are more boxy is expected since they are significantly brighter than those with no soft
X-ray detection (cf. Figure 5). The grey lines, which correspond to fdisky,0(MB), indicate that this
explains most of the effect. Although it is intriguing that the disky fraction of ROSAT detections
is ∼ 1σ lower than predicted, a larger sample of early-type galaxies with soft X-ray detections is
needed to rule out (or confirm) the null-hypothesis. As for the galaxies detected by FIRST, there is
a weak indication that these galaxies have a somewhat lower fdisky: this finding is again in excellent
agreement with the predictions based on the null-hypothesis. Therefore, there is no indication that
the morphology of an early-type galaxy is directly related to whether the galaxy is active in the
radio or not.
To further test the null-hypothesis that the isophotal structure of early-type galaxies is entirely
dictated by their absolute magnitude or dynamical mass, we have derived fdisky of NA, EL and
AGN galaxies in bins of MB and Mdyn. The results are shown in Figure 10 (symbols), which shows
that the disky fraction of all three samples decreases with increasing luminosity and dynamical
mass. The grey solid and dashed lines indicate the predictions based on the null-hypothesis, which
have been computed using equations (5)–(7). Overall, these predictions are in excellent agreement
with the data, indicating that elliptical galaxies with ongoing star formation or with an AGN do
not have a significantly different morphology (statistically speaking) than other ellipticals of the
same luminosity or dynamical mass.
Finally, in Figure 11 we plot the disky fractions of NA, EL and AGN galaxies as function of
their group mass (upper panels) and group hierarchy (lower panels). For comparison, the grey
solid and dashed lines indicate the predictions based on the null-hypothesis. Although overall these
predictions are in good agreement with the data, there are a few noteworthy trends. At the massive
end (Mgroup >∼ 10
13h−1 M⊙) the disky fraction of AGN is higher than expected, while that of NA
galaxies is lower. The lower panels show that this mainly owes to the satellite galaxies in these
massive groups. Whereas the null-hypothesis accurately predicts the disky fractions of NA, EL and
AGN centrals, it overpredicts fdisky of NA satellites, while underpredicting that of AGN satellites
at the 3 σ level. These results clearly warrant a more detailed investigation with larger samples.
Note that only about half of the 847 galaxies in the H06 sample are also in our group catalog. A
future analysis based on larger SDSS samples and a more complete group catalog would sufficiently
boost the statistics to examine the trends identified here with higher confidence.
5. Discussion and conclusions
In spite of their outwardly bland and symmetrical morphology, early-type galaxies reveal a
far more complex structure, whose isophotes usually deviate from a purely elliptical shape. As
shown by Bender et al. (1989), these deviations correlate with other parameters; for example, boxy
early-type galaxies are on average brighter and bigger than disky galaxies and are supported by
anisotropic pressure. Early-type galaxies with disky isophotes, on the other hand, are consistent
with being isotropic oblate rotators. With the advent of large galaxy redshift surveys such as
the SDSS, it is now possible to collect large and homogeneous samples of early-type galaxies and
quantify these correlations in much greater detail. In addition, it also allows for a detailed study
of the relation between morphology and environment.
We have used a sample of 847 early-type galaxies imaged by the SDSS and analyzed by Hao et
al. (2006a) to study the fraction of disky galaxies (fdisky) as a function of their absolute magnitude
MB , their dynamical mass Mdyn and the mass of the dark matter halo Mgroup in which they are
located. Using the Hα, Hβ, [OIII] and [NII] emission lines in the SDSS spectra we have split the
sample in AGN galaxies, emission-line (EL) galaxies, and non-active (NA) galaxies (see Figure 4).
In addition we also constructed two sub-samples of those ellipticals that have also been detected in
the radio (in FIRST) or in soft X-rays (with ROSAT), and we have analyzed the relations between
fdisky and the level of AGN activity in the optical and the radio, and the strength of soft X-ray
emission.
The fraction of disky galaxies in the full sample decreases strongly with increasing luminosity
and dynamical mass (see Figure 2). More quantitatively, fdisky decreases from ∼ 0.8 at MB −
5 log(h) = −18.6 (Mdyn = 6 × 1010h−1M⊙) to ∼ 0.5 at MB − 5 log(h) = −20.6 (Mdyn = 3 ×
1011h−1M⊙). This indicates a smooth transition between disky and boxy shapes, which is well
represented by a log-linear relation between fdisky and luminosity or dynamical mass (at least over
the ranges probed here). The relatively large sample allows us to measure these relations with a
good degree of accuracy that is robust against the uncertainties involved in the measurement of
the A4 parameter.
We have used these log-linear relations to test the null-hypothesis that the isophotal shape of
early-type galaxies depends only on their absolute magnitude or dynamical mass. The main result
of this paper is that the data is fully consistent with this simple ansatz, and that the correlations
seen among group mass, group hierarchy (central vs. satellite), soft X-ray emission, activity (both
in the optical and in the radio) and the disky/boxy morphology of an early-type galaxy reflect
the dependence of each of these properties on galaxy luminosity. In fact, the luminosity (mass)
dependence of fdisky predicts, with good accuracy, the following observed trends:
1. The variation of fdisky of central and satellite galaxies in the sample as a function of their
group halo mass (see Figure 2).
2. The constancy of fdisky of EL and AGN galaxies with respect to their [OIII] luminosity (see
Figure 9).
3. The decreasing fdisky of NA, EL and AGN galaxies with increasing MB and Mdyn (see Figure 10).
4. The dependence of fdisky of NA, EL and AGN galaxies on their group halo mass and hierarchy
(see Figure 11).
5. The average value of fdisky among those sample galaxies detected by ROSAT and FIRST.
The fact that our null-hypothesis is also consistent with the fraction of disky radio-emitters con-
tradicts Bender et al. (1989), who wrote that “the isophotal shape is the second parameter besides
luminosity determining the occurrence of radio activity in ellipticals”. We argue instead, on the basis
of a much larger and more homogeneous sample, that the connection between radio activity and isophotal
shape merely reflects the mutual dependence of radio activity, luminosity and morphology. We have further checked this
result using an inverse approach, based on the fradio - MB relation. Briefly, we derived, for the full
sample, the fraction of radio galaxies (fradio) with respect to the total as a function of MB , and
obtained a log-linear relation whereby fradio smoothly increases from 0.06 at MB−5 log(h) = −18.7
to 0.34 at MB − 5 log(h) = −20.7. The fraction of radio galaxies among disky and boxy galaxies in
the full sample turns out to be fradio(disky) = 0.17 ± 0.02 and fradio(boxy) = 0.23 ± 0.02. Entering
the mean absolute magnitude of disky and boxy galaxies in the fradio - MB relation, we obtain radio
fractions of 0.18 and 0.21, respectively, well within 1 σ from the observed values.
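The inverse check just described can be sketched in a few lines of code. This is a minimal illustration, not the analysis code used for the paper: the linear-in-magnitude form of the fradio - MB relation is assumed from the two quoted anchor points, and the mean magnitudes of the disky and boxy sub-samples below are hypothetical values back-solved to reproduce the quoted predictions of 0.18 and 0.21.

```python
# Sketch (not the authors' code) of the inverse consistency check described above.
def f_radio(MB):
    """Assumed log-linear relation: f_radio = 0.06 at M_B - 5log(h) = -18.7
    and 0.34 at M_B - 5log(h) = -20.7, interpolated linearly in magnitude."""
    M1, f1 = -18.7, 0.06
    M2, f2 = -20.7, 0.34
    return f1 + (f2 - f1) * (MB - M1) / (M2 - M1)

# Hypothetical mean magnitudes of the disky and boxy sub-samples (the true values
# are not listed in the text; these are chosen to return the quoted 0.18 and 0.21).
for label, MB_mean, f_obs, err in [("disky", -19.56, 0.17, 0.02),
                                   ("boxy",  -19.77, 0.23, 0.02)]:
    f_pred = f_radio(MB_mean)
    print(f"{label}: predicted {f_pred:.2f}, observed {f_obs:.2f} +/- {err:.2f}")
```

Both predictions land within one error bar of the observed radio fractions, which is the consistency claimed above.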
Although the data is in good overall agreement with the null-hypothesis, there are a few weak
deviations at the 1σ (at most 3σ) level (throughout, errors have been computed assuming Poisson
statistics). First of all, emission line galaxies with log(L[OIII]/L⊙) < 5.2 have a disky fraction
that is ∼ 3σ lower than predicted by the null-hypothesis. Note however, that for higher [OIII]
luminosities, the null-hypothesis is in excellent agreement with the disky fraction of EL galaxies.
Another mild discrepancy between data and null-hypothesis regards the disky fraction of ellipticals
detected by ROSAT, which is ∼ 1σ lower than predicted. Finally, the disky fractions of NA and
AGN satellites in groups with Mgroup >∼ 10^13 h^−1 M⊙ are, respectively, slightly too high and too low
with respect to the null-hypothesis. Whether these discrepancies indicate a true shortcoming of
the null-hypothesis, and thus signal that the isophotal shape of early-type galaxies depends on
additional parameters, will require an even larger sample to establish. In the relatively near future, the final SDSS
should be able to roughly double the size of the sample used here, while a group catalog of this final
SDSS should increase the statistics regarding the environmental dependencies by an even larger
amount.
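For concreteness, the error bars and significances quoted in this paper can be reproduced with a simple Poisson estimator. The convention below (sigma_f = sqrt(N_sub)/N_tot) is an assumption consistent with "errors computed assuming Poisson statistics", not necessarily the exact estimator used here.

```python
import math

def fraction_with_poisson_error(n_sub, n_tot):
    """Fraction of a sub-class and a simple Poisson error bar, sigma_f ~ sqrt(N_sub)/N_tot.
    This is an assumed convention, not the paper's documented estimator."""
    f = n_sub / n_tot
    return f, math.sqrt(n_sub) / n_tot

# e.g. how many sigma an observed fraction deviates from a predicted one (mock numbers):
f, sigma = fraction_with_poisson_error(55, 100)
print((f - 0.63) / sigma, "sigma")
```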
The relations between fdisky and MB (Mdyn) derived here provide a powerful test-bench for
theories of galaxy formation. In particular, they can be used to constrain the nature and the
merging history of the progenitors of present-day early-type galaxies. In a follow-up paper, we
will use semi-analytical models featuring AGN and supernova feedback in order to predict and
understand the observed log-linear relations in terms of the amount of cold gas in the progenitors
at the time of the last merger and their mass ratio (Kang et al., in prep).
AP acknowledges useful discussions with Sandra Faber and John Kormendy. We thank an
anonymous referee for his/her useful comments on the paper. Funding for the creation and distri-
bution of the SDSS Archive has been provided by the Alfred P. Sloan Foundation, the Participating
Institutions, the National Aeronautics and Space Administration, the National Science Foundation,
the U.S. Department of Energy, the Japanese Monbukagakusho, and the Max Planck Society. The
SDSS Web site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research
Consortium (ARC) for the Participating Institutions. The Participating Institutions are The Uni-
versity of Chicago, Fermilab, the Institute for Advanced Study, the Japan Participation Group, The
Johns Hopkins University, the Korean Scientist Group, Los Alamos National Laboratory, the Max-
Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New
Mexico State University, University of Pittsburgh, University of Portsmouth, Princeton University,
the United States Naval Observatory, and the University of Washington.
REFERENCES
Adelman-McCarthy, J.K., et al. 2006, ApJS, 162, 38
Anderson, S.F., et al. 2003, AJ, 126, 2209
Baldwin, J., Phillips, M., Terlevich, R. 1981, PASP, 93, 5
Barnes, J.E., Hernquist, L.E. 1991, ApJL, 370, L65
Barnes, J.E., Hernquist, L.E. 1996, ApJ, 471, 115
Becker, R.H., White, R.L., Helfand, D.J. 1995, ApJ, 450, 559
Bender, R. 1988, A&AS, 193, 7
Bender, R., Döbereiner, S., Möllenhoff, C., 1988, A&AS, 74, 385
Bender, R., Möllenhoff, C. 1987, A&A, 177, 71
Bender, R., Surma, P., Döbereiner, S., Möllenhoff, C., Madejsky, R. 1989, A&A, 217, 35
Bernardi, M., et al. 2003, AJ, 125, 1817
Best, P.N., Kauffmann, G., Heckman, T.M., Brinchmann, J., Charlot, S., Ivezić, Z., White, S.D.M.
2005, MNRAS, 362, 25
Blanton, M.R., et al. 2005, AJ, 129, 2562
Bower, R.G., Lucey, J.R., Ellis, R.S. 1992, MNRAS, 254, 601
Canizares, C.R., Fabbiano, G., Trinchieri, G. 1987, ApJ, 312, 503
Cappellari, M., et al. 2006, MNRAS, 366, 1126
Carter, D. 1987, ApJ, 312, 514
Cattaneo, A., Blaizot, J., Devriendt, J., Guiderdoni, B. 2005, MNRAS, 364, 407
Croton, D.J., et al. 2005, MNRAS, 356, 1155
Davies, R.L., Efstathiou, G., Fall, S.M., Illingworth, G., Schechter, P.L. 1983, ApJ, 266, 41
Djorgovski, S., Davis, M. 1987, ApJ, 313, 59
Dressler, A. 1980, ApJ, 236, 351
Dressler, A. 1987, ApJ, 317, 1
Ellis, R.S., Smail, I., Dressler, A., Couch, W.J., Oemler, A.Jr., Butcher, H., Sharples, R.M. 1997,
ApJ, 483, 582
Fabbiano, G. 1989, ARA&A, 27, 87
Faber, S.M., Jackson, R.E. 1976, ApJ, 204, 668
Faber, S.M., et al. 1997, AJ, 114, 1771
Ferrarese, L., van den Bosch, F.C., Ford, H.C., Jaffe, W., O’Connell, R.W. 1994, AJ, 108, 1598
Gebhardt, K., et al. 1996, AJ, 112, 105
Graham, A.W., Erwin, P, Caon, N., Trujillo, I. 2001, ApJ, 563, L11
Hao, C.N., Mao, S., Deng, Z.G., Xia, X.Y., Wu, H. 2006a, MNRAS, 370, 1339
Hao, C.N., Mao, S., Deng, Z.G., Xia, X.Y., Wu, H. 2006b, MNRAS, 373, 1264
Jaffe, W., Ford, H.C., O’Connell, R.W., van den Bosch, F.C., Ferrarese, L. 1994, AJ, 108, 1567
Jedrzejewski, R.I. 1987, MNRAS, 226, 747
Jesseit, R., Naab, T., Burkert, A. 2005, MNRAS, 360, 1185
Kang, X., van den Bosch, F.C., Pasquali, A., 2007, preprint (arXiv:0704.0932)
Kauffmann, G., et al. 2003a, MNRAS, 341, 33
Kauffmann, G., et al. 2003b, MNRAS, 346, 1055
Kewley, L.J., Groves, B., Kauffmann, G., Heckman, T., 2006, MNRAS, 372, 961
Khochfar, S., Burkert, A. 2005, MNRAS, 359, 1379
Kuntschner, H., et al. 2006, MNRAS, 369, 497
Lauer, T.R. 1985, MNRAS, 216, 429
Lauer, T.R., et al. 1995, AJ, 110, 2622
Lauer, T.R., et al. 2005, AJ, 129, 2138
McMahon, R.G., White, R.L., Helfand, D.J., Becker, R.H. 2002, ApJS, 143, 1
Mihos, J.C., Hernquist, L.E. 1994, ApJL, 425, L13
Mihos, J.C., Hernquist, L.E. 1996, ApJ, 464, 641
Milosavljević, M., Merritt D., Rest A., van den Bosch F.C. 2002, MNRAS, 331, 51
Morganti, R., et al. 2006, MNRAS, 371, 157
Naab, T., Jesseit, R., Burkert, A. 2006, MNRAS, in press (astro-ph/0605155)
Nieto, J.-L., Capaccioli, M., Held, E.V. 1988, A&A, 195, 1
O’Sullivan, E., Forbes, D.A., Ponman, T.J. 2001, MNRAS, 328, 461
Pellegrini, S. 1999, A&A, 351, 487
Pellegrini, S. 2005, MNRAS, 364, 169
Ravindranath, S., Ho, L.C., Peng, C.Y., Filippenko, A.V., Sargent, W.L.W. 2001, AJ, 122, 653
Rest, A., et al. 2001, AJ, 121, 2431
Sarzi, M., et al. 2006, MNRAS, 366, 1151
Scorza, C., Bender, R. 1995, A&A, 293, 20
Scorza, C., van den Bosch, F.C. 1998, MNRAS, 300, 469
Serra, P., Trager, S.C., van der Hulst, J.M., Oosterloo, T.A., Morganti, R. 2006, A&A, 453, 493
Shioya, Y., Taniguchi, Y. 1993, PASJ, 43, 39
Smith, J.A. et al. 2002, AJ, 123, 2121
Springel, V. 2000, MNRAS, 312, 859
Stoughton, C., et al., 2002, AJ, 123, 485
Strauss, M.A., et al, 2002, AJ, 124, 1810
Toomre, A., Toomre, J. 1972, ApJ, 178, 623
Trager, S.C., Faber, S.M., Worthey, G., González, J.J. 2000, AJ, 119, 1645
Tran, H.D., Tsvetanov, Z., Ford, H.C., Davies, J., Jaffe, W., van den Bosch, F.C., Rest, A. 2001,
AJ, 121, 2928
Trinchieri, G., Fabbiano, G. 1985, ApJ, 296, 447
Veilleux, S., Osterbrock, D. 1987, ApJS, 63, 295
Visvanathan, N., Sandage, A. 1977, ApJ, 216, 214
Voges, W., et al. 1999, A&A, 349, 389
Weinmann, S., van den Bosch, F.C., Yang, X., Mo, H.J. 2006, MNRAS, 366, 2
Yang, X., Mo, H.J., van den Bosch, F.C., Jing, Y.P. 2005, MNRAS, 356, 1293
This preprint was prepared with the AAS LATEX macros v5.2.
Fig. 1.— The distributions in MB , Mdyn and Mgroup for the full sample, split between central and
satellite galaxies (grey crosses and open triangles respectively, in the top panels) and between disky
and boxy galaxies (open triangles and grey crosses respectively, in the bottom panels).
Fig. 2.— Top panels: the distributions of MB , Mdyn and Mgroup as a function of the isophotal
parameter A4 for the full sample, also split between central (grey crosses) and satellite (open
triangles) galaxies. Bottom panels: the fraction of disky galaxies as a function of MB and Mdyn
for the full sample. The fraction of disky galaxies is also shown per bin of group halo mass Mgroup
for central (crosses) and satellite (open triangles) galaxies. The errorbars are at the 1 σ level, and
were computed assuming Poisson statistics. The grey solid and dashed lines in the left hand-side
and middle panels are the best fits to the fractions of disky galaxies across the full sample. The
same lines in the right hand-side panel represent the predicted fractions of disky galaxies from the
working null-hypothesis.
Fig. 3.— Impact of individual A4 measurement errors: the slope and the zero-point of the log-
linear correlation between fdisky and MB (equation 4) are shown as a function of the standard
deviation of the Gaussian used to simulate errors on the observed A4 values. The grey shaded
areas indicate the best-fitting slope (0.17 ± 0.03) and zero-point (0.61 ± 0.02) in equation 4, and
the mean uncertainty on the observed A4 parameter (0.0012 ± 0.0008) as measured by H06.
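A schematic version of the robustness test summarized in Figure 3 is sketched below. The mock (MB, A4) arrays, the "disky if perturbed A4 > 0" criterion, the binning and the straight-line fit are all simplifying assumptions for illustration; they are not the measurement pipeline of H06 or of this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def refit_after_perturbation(MB, A4, sigma_A4, n_bins=5):
    """Perturb the A4 values with Gaussian errors of width sigma_A4, re-classify
    galaxies as disky (assumed: perturbed A4 > 0) and re-fit a straight line to
    f_disky versus M_B in magnitude bins."""
    A4_pert = A4 + rng.normal(0.0, sigma_A4, size=A4.size)
    disky = (A4_pert > 0).astype(float)
    edges = np.linspace(MB.min(), MB.max() + 1e-6, n_bins + 1)
    idx = np.digitize(MB, edges) - 1
    centers = 0.5 * (edges[1:] + edges[:-1])
    frac = np.array([disky[idx == i].mean() for i in range(n_bins)])
    slope, zero_point = np.polyfit(centers - centers.mean(), frac, 1)
    return slope, zero_point

# Mock stand-ins for the 847 (M_B, A4) measurements of H06 (illustration only).
MB = rng.uniform(-20.6, -18.6, 847)
A4 = rng.normal(0.005, 0.01, 847)
for sigma in (0.0005, 0.002, 0.008):
    print(sigma, refit_after_perturbation(MB, A4, sigma))
```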
Fig. 4.— The BPT diagram for the galaxies in the full sample, whose [OIII] λ5007 and Hα emission
lines were detected with an S/N ratio larger than 3. These objects have been classified as type 2 Seyfert
galaxies, LINERs, Transition Objects (TO) and star-forming (SB) galaxies according to Kauffmann
et al. (2003b).
Fig. 5.— The distributions of the full sample (grey shaded area) and the 5 different activity classes
defined in §4.1 in absolute magnitude MB (in the AB system). Each distribution is normalized by the
size of the sample from which it was extracted.
Fig. 6.— As in Figure 5, but for the group halo mass Mgroup.
Fig. 7.— As in Figure 5, but for the luminosity in the [OIII] line. Here, the grey shaded histogram
refers to emission-line and AGN galaxies together.
Fig. 8.— The fraction of galaxies in the 5 different activity classes with respect to the full sample
as a function of MB , Mdyn, Mgroup and split between centrals and satellites.
Fig. 9.— Top panels: the distribution of the luminosities in the [OIII] line, at 1.4 GHz and in
the soft X-rays, as a function of the isophotal parameter A4. Emission-line and AGN galaxies
are represented with grey filled triangles and black filled squares respectively. Bottom panels: the
fraction of disky galaxies among emission-line (triangles) and AGN (squares) galaxies as a function
of the luminosity in the [OIII] line (left hand-side panel). The grey solid and dotted lines trace the
predictions from the working null-hypothesis in MB. The fractions of disky galaxies for the galaxies
detected and not detected by FIRST and ROSAT are shown in the middle and right hand-side
panels, together with the predictions from equations (4) and (5). The errorbars are at the 1 σ level,
and were computed assuming Poisson statistics.
Fig. 10.— The fraction of disky galaxies for non-active (black filled circles), emission-line (black
filled triangles) and AGN (black filled squares) galaxies as a function of MB and Mdyn. The grey
solid and dashed lines represent the predictions from equation (5), i.e. the working null-hypothesis.
The errorbars are at the 1 σ level, and were computed assuming Poisson statistics.
Fig. 11.— As in Figure 10, but splitting the non-active, emission-line and AGN galaxies between
centrals and satellites.
Table 1. Activity Classes
AGN EL NA FIRST ROSAT
AGN 314 −− −− 91 17
EL −− 383 −− 53 16
NA −− −− 150 18 7
FIRST 91 53 18 162 22
ROSAT 17 16 7 22 40
Note. — The number of sample galaxies in the
five different activity classes. Note that the AGN,
EL and NA classes are mutually exclusive.
Table 2. Fraction of disky galaxies across the activity classes
AGN EL NA FIRST ROSAT
AGN 0.69 −− −− 0.65 0.47
EL −− 0.64 −− 0.45 0.50
NA −− −− 0.63 0.61 0.43
FIRST 0.65 0.45 0.61 0.58 0.54
ROSAT 0.47 0.50 0.43 0.54 0.47
|
0704.0932 | On the Origin of the Dichotomy of Early-Type Galaxies: The Role of Dry
Mergers and AGN Feedback | arXiv:0704.0932v1 [astro-ph] 6 Apr 2007
Mon. Not. R. Astron. Soc. 000, 1–13 (2000) Printed 28 August 2021 (MN LATEX style file v1.4)
On the Origin of the Dichotomy of Early-Type Galaxies:
The Role of Dry Mergers and AGN Feedback
X. Kang1,2⋆ (E-mail: [email protected]), Frank C. van den Bosch1, A. Pasquali1
1Max-Planck-Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg, Germany
2Shanghai Astronomical Observatory; the Partner Group of MPA, Nandan Road 80, Shanghai 200030, China
ABSTRACT
Using a semi-analytical model for galaxy formation, combined with a large N -body
simulation, we investigate the origin of the dichotomy among early-type galaxies. In
qualitative agreement with previous studies and with numerical simulations, we find
that boxy galaxies originate from mergers with a progenitor mass ratio n < 2 and with
a combined cold gas mass fraction Fcold < 0.1. Our model accurately reproduces the
observed fraction of boxy systems as a function of luminosity and halo mass, for both
central galaxies and satellites. After correcting for the stellar mass dependence, the
properties of the last major merger of early-type galaxies are independent of their halo
mass. This provides theoretical support for the conjecture of Pasquali et al. (2007) that
the stellar mass (or luminosity) of an early-type galaxy is the main parameter that
governs its isophotal shape. If wet and dry mergers mainly produce disky and boxy
early-types, respectively, the observed dichotomy of early-type galaxies has a natural
explanation within the hierarchical framework of structure formation. Contrary to
naive expectations, the dichotomy is independent of AGN feedback. Rather, we argue
that it owes to the fact that more massive systems (i) have more massive progenitors,
(ii) assemble later, and (iii) have a larger fraction of early-type progenitors. Each
of these three trends causes the cold gas mass fraction of the progenitors of more
massive early-types to be lower, so that their last major merger involved less cold
gas (was more “dry”). Finally, our model predicts that (i) less than 10 percent of all
early-type galaxies form in major mergers that involve two early-type progenitors, (ii)
more than 95 percent of all boxy early-type galaxies with M∗ <∼ 2 × 10
10h−1 M⊙ are
satellite galaxies, and (iii) about 70 percent of all low mass early-types do not form
a supermassive black hole binary at their last major merger. The latter may help to
explain why low mass early-types have central cusps, while their massive counterparts
have cores.
Key words: dark matter — galaxies: elliptical and lenticular — galaxies: interactions
— galaxies: structure — galaxies: formation
1 INTRODUCTION
Ever since the seminal paper by Davies et al. (1983) it
is clear that early-type galaxies (ellipticals and lenticulars,
hereafter ETGs) can be split in two distinct sub-classes.
Davies et al. showed that bright ETGs typically have lit-
tle rotation, such that their flattening must originate from
anisotropic pressure. This is consistent with bright ellipti-
cals being in general triaxial. Low luminosity ellipticals, on
the other hand, typically have rotation velocities that are
consistent with them being oblate isotropic rotators (see
Emsellem et al. 2007 and Cappellari et al. 2007 for a more
contemporary description). These different kinematic classes
also have different morphologies and different central surface
brightness profiles. In particular, bright, pressure-supported
systems usually have boxy isophotes and luminosity pro-
files that break from steep outer power-laws to shallow in-
ner cusps (often called ‘cores’). The low luminosity, rotation
supported systems, on the other hand, often reveal disky
isophotes and luminosity profiles with a steep central cusp
(e.g., Bender 1988; Nieto et al. 1988; Ferrarese et al. 1994;
Gebhardt et al. 1996; Rest et al. 2001; Lauer et al. 2005,
2006). Finally, the bimodality of ETGs has also been found
to extend to their radio and X-ray properties. Objects which
are radio-loud and/or bright in soft X-ray emission generally
have boxy isophotes, while disky ETGs are mostly radio-
quiet and faint in soft X-rays (Bender et al. 1989; Pellegrini
1999, 2005; Ravindranath et al. 2001).
Recently, Hao et al. (2006) constructed a homoge-
neous sample of 847 ETGs from the SDSS DR4 catalogue
(Adelman-McCarthy et al. 2006), and analyzed their isopho-
tal shapes. This sample was used by Pasquali et al. (2007:
hereafter P07) to investigate the relative fractions of disky
and boxy ETGs as function of luminosity, stellar mass and
environment. They found that the disky fraction decreases
smoothly with increasing (B-band) luminosity, stellar mass,
and halo mass, where the latter is obtained from the SDSS
group catalogue of Weinmann et al. (2006). In addition, the
disky fraction is found to be higher for satellite galaxies than
for central galaxies in a halo of the same mass. These data
provide a powerful benchmark against which to test models
for the formation of ETGs.
Within the framework of hierarchical structure for-
mation, elliptical galaxies are generally assumed to form
through major mergers (e.g., Toomre & Toomre 1972;
Schweizer 1982; Barnes 1988; Hernquist 1992; Kauffmann,
White & Guiderdoni 1993). In this case, it seems logical that
the bimodality in their isophotal and kinematical properties
must somehow be related to the details of their merger his-
tories. Using dissipationless N-body simulations it has been
shown that equal mass mergers of disk galaxies mainly result
in slowly rotating ETGs with boxy (but sometimes disky)
isophotes, while mergers in which the mass ratio between the
progenitor disks is significantly different from unity mainly
yield disky ETGs (Negroponte & White 1983; Barnes 1988;
Hernquist 1992; Bendo & Barnes 2000; Naab & Burkert
2003; Bournaud et al. 2004, 2005; Naab & Trujillo 2006).
However, simulations that also include a dissipative gas com-
ponent and star formation have shown that the presence of
even a relatively small amount of cold gas in the progenitors
results in a merger remnant that resembles a disky ellipti-
cal even when the mass ratio of the progenitors is close to
unity (Barnes & Hernquist 1996; Naab et al. 2006a; Cox
et al. 2006a). This suggests that boxy ETGs can only form
out of dry, major mergers (see also discussion in Faber et
al. 1997). In this paper we test this paradigm using a semi-
analytical model for galaxy formation and the observational
constraints of P07.
Our study is similar to those of Khochfar & Burk-
ert (2005; hereafter KB05) and Naab et al. (2006b; here-
after N06), who also used semi-analytical models to explore
whether the dichotomy of elliptical galaxies can be related to
their merger properties. However, our analysis differs from
theirs on the following grounds.
• We use a numerical N-body simulation to construct the
merger histories of dark matter haloes. The models of KB05
and N06, on the other hand, used merger trees based on
the extended Press-Schechter (EPS) formalism (e.g., Lacey
& Cole 1993). As we will show, this results in significant
differences.
• Because of our use of numerical N-body simulations,
our model more accurately traces the dynamical evolution of
dark matter subhaloes with their associated satellite galax-
ies. In particular, it takes proper account of dynamical fric-
tion, tidal stripping and the merging between subhaloes.
• Contrary to KB05 and N06, we include a prescription
for the feedback from active galactic nuclei (AGN) in our
semi-analytical model.
• Our semi-analytical model is tuned to reproduce the
luminosity function and the color-bimodality of the redshift
zero galaxy population (see Kang et al. 2005). The works of
KB05 and N06 do not mention such a comparison.
• Our criteria for the production of boxy ETGs are dif-
ferent from those used in KB05 and N06.
• We use a much larger, more homogeneous data set to
constrain the models.
This paper is organized as follows. In Section 2 we de-
scribe our N-body simulation and semi-analytical model,
and outline the methodology. The results are described in
Section 3 and discussed in Section 4. We summarize our
findings in Section 5.
2 METHODOLOGY
The aim of this paper is to investigate to what extent semi-
analytical models of galaxy formation can reproduce the
ecology of ETGs, in particular the fractions of disky and
boxy systems as function of luminosity and halo mass. Par-
tially motivated by numerical simulations of galaxy mergers,
both with and without gas, we adopt a framework in which
(i) ETGs are red and dominated by a spheroidal compo-
nent, (ii) ETGs are the outcome of major mergers, (iii) the
remnant is boxy if the merger is sufficiently “dry” (i.e., the
progenitors have little or no cold gas) and sufficiently “ma-
jor” (i.e., the progenitors have roughly equal masses) and
(iv) a boxy elliptical becomes a disky elliptical if newly ac-
creted material builds a sufficiently large stellar disk.
2.1 N-body simulation and model descriptions
In order to have accurate merger trees, and to be able
to follow the dynamical evolution of satellite galaxies, we
use a numerical simulation of the evolution of dark mat-
ter which we populate with galaxies using a state-of-the-art
semi-analytical model for galaxy formation. The numerical
simulation has been carried out by Jing & Suto (2002) using
a vectorized-parallel P^3M code. It follows the evolution of
512^3 particles in a cosmological box of 100 h^−1 Mpc, assum-
ing a flat ΛCDM ‘concordance’ cosmology with Ωm = 0.3,
σ8 = 0.9, and h = H0/(100 km s^−1 Mpc^−1) = 0.7. Each parti-
cle has a mass of 6.2 × 10^8 h^−1 M⊙. Dark matter haloes are
identified using the friends-of-friends (FOF) algorithm with
a linking length equal to 0.2 times the mean particle sepa-
ration. For each halo thus identified we compute the virial
radius, rvir, defined as the spherical radius centered on the
most bound particle inside of which the average density is
340 times the average density of the Universe (cf. Bryan &
Norman 1998). The virial mass is simply defined as the mass
of all particles that have halocentric radii r ≤ rvir. Since our
FOF haloes have a characteristic overdensity of ∼ 180 (e.g.,
White 2002), the virial mass is typically smaller than the
FOF mass.
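The spherical-overdensity measurement described above can be sketched as follows. This is a minimal illustration under stated assumptions (periodic boundaries and particle unbinding are ignored, and the function name and interface are invented for the sketch), not the actual halo finder used for the simulation.

```python
import numpy as np

def virial_radius_and_mass(pos, center, m_particle, rho_mean, delta=340.0):
    """Grow a sphere around `center` (e.g. the most bound particle) until the mean
    enclosed density falls below delta times the mean density of the Universe.
    pos: (N, 3) particle positions; m_particle: particle mass; rho_mean: mean matter density."""
    r = np.sort(np.linalg.norm(pos - center, axis=1))
    r = np.maximum(r, 1e-10)                       # guard against the central particle itself
    m_enc = m_particle * np.arange(1, r.size + 1)  # enclosed mass at each particle radius
    rho_enc = m_enc / (4.0 / 3.0 * np.pi * r**3)   # mean enclosed density inside r
    above = np.nonzero(rho_enc >= delta * rho_mean)[0]
    if above.size == 0:
        return 0.0, 0.0
    i = above[-1]                                  # outermost radius still above the threshold
    return r[i], m_enc[i]                          # (r_vir, M_vir)
```

M_vir is then simply the mass of all particles with halocentric radii r <= r_vir, as stated above.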
Dark matter subhaloes within each FOF (parent) halo
are identified using the SUBFIND routine described in
Springel et al. (2001). In the present study, we use all haloes
and subhaloes with masses down to 6.2 × 10^9 h^−1 M⊙ (10
particles). Using 60 simulation outputs between z = 15 and
z = 0, equally spaced in log(1 + z), Kang et al. (2005; here-
after K05) constructed the merger history for each (sub)halo
in the simulation box, which are then used in the semi-
analytical model. In what follows, whenever we refer to a
halo, we mean a virialized object which is not a sub-structure
of a larger virialized object, while subhaloes are virialized ob-
jects that orbit within a halo. In addition, (model) galaxies
associated with haloes and subhaloes are referred to as cen-
tral galaxies and satellites, respectively.
The semi-analytical model used to populate the haloes
and subhaloes with galaxies is described in detail in K05, to
which we refer the reader for details. Briefly, the model as-
sumes that the baryonic material accreted by a dark matter
halo is heated to the virial temperature. The gas then cools
radiatively and settles down into a centrifugally supported
disk, in which the star formation rate is proportional to the
total amount of cold gas, and inversely proportional to the
dynamical time of the disk. The energy feedback from su-
pernova is related to the initial stellar mass function (IMF)
and proportional to the star formation rate. It is assumed
that the gas that is reheated by supernova feedback remains
bound to the host halo so that it can cool back onto the
disk at later stages. When the subhalo associated with a
satellite galaxy is dissolved in the numerical simulation the
satellite galaxy becomes an “orphan” galaxy, which is as-
sumed to merge with the central galaxy of the parent halo
after a dynamical friction time (computed assuming stan-
dard Chandrasekhar dynamical friction). When two galaxies
merge, the outcome is assumed to depend on their mass ratio
n ≡ M1/M2 with M1 ≥ M2. If n ≤ 3 the merger is assumed
to result in the formation of an elliptical galaxy, and we
speak of a “major merger”. Any cold gas available in both
progenitors is turned into stars. This is supported by hy-
drodynamical simulations, which show that major mergers
cause the cold gas to flow to the center where the resulting
high gas density triggers a starburst (e.g., Barnes & Hern-
quist 1991, 1996; Mihos & Hernquist 1996; Springel 2000;
Cox et al. 2006a,b; Di Matteo et al. 2007). A new disk of cold
gas and stars may form around the elliptical if new gas can
cool in the halo of the merger remnant, thus giving rise to
a disk-bulge system. If n > 3 we speak of a “minor merger”
and we simply add the cold gas of the less massive progeni-
tor to that of the disk of the more massive progenitor, while
its stellar mass is added to the bulge of the massive progen-
itor. The semi-analytical model also includes a prescription
for “radio-mode” AGN feedback as described in Kang, Jing
& Silk (2006; see also Section 3.2). This ingredient is es-
sential to prevent significant amounts of star formation in
the brightest galaxies, and thus to ensure that these systems
are predominantly members of the red sequence (e.g., Cat-
taneo et al. 2006; De Lucia et al. 2006; Bower et al. 2006;
Croton et al. 2006). Finally, luminosities for all model galax-
ies are computed using the predicted star formation histo-
ries and the stellar population models of Bruzual & Charlot
(2003). Throughout we assume a Salpeter IMF and we self-
consistently model the metallicities of gas and stars, includ-
ing metal-cooling.
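To make the merger bookkeeping of the model concrete, a schematic rendering is given below. The mass-ratio threshold n = 3 and the treatment of stars and cold gas follow the description above; the class layout and field names are simplifying assumptions, and whether n is a stellar, baryonic or total mass ratio is not specified in the text (total baryonic mass is assumed here).

```python
from dataclasses import dataclass

@dataclass
class Galaxy:
    m_bulge: float   # stellar mass in the bulge
    m_disk: float    # stellar mass in the disk
    m_cold: float    # cold gas mass

def merge(primary: Galaxy, secondary: Galaxy) -> Galaxy:
    """Schematic merger rule of the semi-analytical model described above:
    n <= 3 ('major'): all stars end up in a spheroid and the combined cold gas
                      is consumed in a central starburst;
    n > 3  ('minor'): the satellite's stars are added to the bulge of the primary,
                      its cold gas to the primary's disk."""
    m1 = primary.m_bulge + primary.m_disk + primary.m_cold
    m2 = secondary.m_bulge + secondary.m_disk + secondary.m_cold
    n = max(m1, m2) / min(m1, m2)        # assumed: baryonic mass ratio, n >= 1
    if n <= 3.0:                         # major merger -> elliptical remnant
        stars = primary.m_bulge + primary.m_disk + secondary.m_bulge + secondary.m_disk
        burst = primary.m_cold + secondary.m_cold   # starburst consumes all cold gas
        return Galaxy(m_bulge=stars + burst, m_disk=0.0, m_cold=0.0)
    # minor merger -> primary keeps its disk, secondary is absorbed
    return Galaxy(m_bulge=primary.m_bulge + secondary.m_bulge + secondary.m_disk,
                  m_disk=primary.m_disk,
                  m_cold=primary.m_cold + secondary.m_cold)
```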
As shown in K05 and Kang et al. (2006) this model ac-
curately fits, among others, the galaxy luminosity function
at z = 0, the color bimodality of the z = 0 galaxy popula-
tion, and the number density of massive, red galaxies out to
z ∼ 3. We emphasize that in this paper we use this model
without changing any of its parameters.
2.2 Predicting Isophotal Shapes
In our model we determine whether an elliptical galaxy is
disky or boxy as follows. Using the output at z = 0 we first
identify the early-type (E/S0) galaxies based on two criteria.
First of all, following Simien & de Vaucouleurs (1986), we
demand that an ETG has a bulge-to-total luminosity ratio in the B-band
of LB,bulge/LB,total ≥ 0.4. In addition, we require the B−V
color of the galaxy to be red. Following Hao et al. (2006) and
P07, we adopt B − V > 0.8. We have verified that none of
our results are sensitive to the exact choice of these selection
criteria.
Having thus identified all ETGs at z = 0, we subse-
quently trace their formation histories back until their last
major merger, and register the mass ratio n of the merger
event, as well as the total cold gas mass fraction at that
epoch, defined as
Fcold ≡ Σ_i Mcold,i / Σ_i (Mcold,i + M∗,i)   (1)
Here Mcold,i and M∗,i are the cold gas mass and stellar mass
of progenitor i, respectively, and the sum runs over the two merging progenitors. We adopt the hypothesis that
the merger results in a boxy elliptical if, and only if, n < ncrit
and Fcold < Fcrit. The main aim of this paper is to use the
data of P07 to constrain the values of ncrit and Fcrit, and to
investigate whether a model can be found that is consistent
with the observed fraction of boxy (disky) ETGs as function
of both galaxy luminosity and halo mass.
The final ingredient for determining whether an ETG
is disky or boxy is the potential regrowth of a stellar disk.
Between its last major merger and the present day, new gas
in the halo of the remnant may cool out to form a new disk.
In addition, the ETG may also accrete new stars and cold
gas via minor mergers (those with n > 3). Any cold gas in
those accreted systems is added to the new disk, where it
is allowed to form new stars. Whenever the stellar disk has
grown sufficiently massive, its presence will reveal itself in
the isophotes, and the system changes from being boxy to
being disky. To take this effect into account, we follow KB05
and we reclassify a boxy system as disky if at z = 0 it has
grown a disk with a stellar mass that contributes more than
a fraction fd,max of the total stellar mass in the galaxy. In
our fiducial model we set fd,max = 0.2. This is motivated by
Rix & White (1990), who have shown that if an embedded
stellar disk contains more than ∼ 20 percent of the total
stellar mass, the isophotes of its host galaxy become disky.
Note that the same value for fd,max has also been used in
the analysis of KB05.
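The selection and classification logic of this section can be summarized in a short sketch. The thresholds follow the text (B/T >= 0.4, B-V > 0.8, fd,max = 0.2), but the input structure and field names are assumptions made for illustration; this is not the model code itself.

```python
def classify_isophotes(gal, n_crit=2.0, F_crit=0.1, fd_max=0.2):
    """Schematic boxy/disky assignment following Section 2.2.
    `gal` is assumed to carry: the B-band bulge-to-total ratio, the B-V colour,
    the progenitor mass ratio n and combined cold gas fraction F_cold at the last
    major merger, and the z=0 disk-to-total stellar mass fraction."""
    # (1) early-type selection
    if gal["BT_B"] < 0.4 or gal["B_minus_V"] <= 0.8:
        return "not an ETG"
    # (2) sufficiently dry and sufficiently major merger -> boxy candidate
    boxy = (gal["n_last"] < n_crit) and (gal["F_cold_last"] < F_crit)
    # (3) regrowth of a stellar disk can turn a boxy remnant disky again
    if boxy and gal["f_disk_z0"] > fd_max:
        boxy = False
    return "boxy" if boxy else "disky"

# Example: a dry, 1.5:1 merger remnant that regrew only a small disk -> 'boxy'
print(classify_isophotes({"BT_B": 0.7, "B_minus_V": 0.9,
                          "n_last": 1.5, "F_cold_last": 0.05, "f_disk_z0": 0.1}))
```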
3 RESULTS
In Figure 1 we show the fraction of boxy ETGs, fboxy, as a
function of the luminosity in the B-band (in the AB system).
Open squares with errorbars (reflecting Poisson statistics)
correspond to the data of P07, while the various lines indi-
cate the results obtained from three different models that
we describe in detail below. The (Poisson) errors on these
model predictions are comparable to those on the P07 data
and are not shown for clarity. Each column and row show,
respectively, the results for different values of ncrit and Fcrit
as indicated.
Figure 1. The boxy fraction of ETGs as function of their B-band magnitude (in the AB system). The open, red squares with (Poissonian)
errorbars correspond to the data of P07, and are duplicated in each panel. The solid, dotted and dashed lines correspond to the three
models discussed in the text. Different columns and rows correspond to different values for the critical progenitor mass ratio, ncrit, and
the critical cold gas mass fraction, Fcrit, respectively, as indicated.
We start our investigation by setting fd,max = 1, which
implies that the isophotal shape of an elliptical galaxy (disky
or boxy) is assumed to be independent of the amount of mass
accreted since the last major merger. Although we don’t con-
sider this realistic, it is a useful starting point for our investi-
gation, as it clearly separates the effects of ncrit and Fcrit on
the boxy fraction. The results thus obtained from our semi-
analytical model with AGN feedback are shown in Figure 1
as dotted lines. If we assign the isophotal shapes of ETGs
depending only on the progenitor mass ratio n, which cor-
responds to setting Fcrit = 1, we obtain the boxy fractions
shown in the upper panels. In agreement with KB05 (see
their Figure 1) this results in a boxy fraction that is virtually
independent of luminosity, in clear disagreement with the
data. Note that, for a given value of ncrit, our boxy fractions
are significantly higher than in the model of KB05. For ex-
ample, for ncrit = 2 we obtain a boxy fraction of ∼ 0.6, while
KB05 find that fboxy ∼ 0.33. This mainly reflects the differ-
ence in the type of merger trees used. As discussed above, we
use the merger trees extracted from a N-body simulation,
while KB05 use Monte Carlo merger trees based on the EPS
formalism. It is well known that EPS merger trees predict
masses for the most massive progenitors that are too large
(e.g., Lacey & Cole 1994; Somerville et al. 2000; van den
Bosch 2002; Benson, Kamionkowski & Hassani 2005). This
implies that the number of mergers with a small progenitor
mass ratio n will be too small, which explains the difference
between our results and those of KB05. Using cosmological
SPH simulations, Maller et al. (2006) found that the distri-
bution of merger mass ratios scales as dN/dn ∝ n^−0.8. This
means that 60 percent of all galaxy mergers with n < 3 have
a progenitor mass ratio n < 2, in excellent agreement with
our results.
Figure 2. The fractions of last major mergers between centrals and satellites (C-S; solid lines) and between two satellites (S-S; dashed
lines) that result in the formation of an ETG as function of its stellar mass at z = 0. Results are shown for all ETGs (left-hand panel) and
for those with Fcold < 0.1 (right-hand panel). Black and red lines correspond to models with and without AGN feedback, respectively.
Low mass ETGs that form in dry mergers, and hence end up being boxy, mainly form out of S-S mergers. At the massive end, the
fraction of ETGs that form out of S-S mergers with Fcold < 0.1 depends strongly on the presence or absence of AGN feedback. See text
for detailed discussion.
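As a quick arithmetic check of the 60 percent quoted above for the Maller et al. (2006) scaling dN/dn ∝ n^−0.8 (our own calculation, assuming the scaling holds down to n = 1):
\[
\frac{\int_1^2 n^{-0.8}\,{\rm d}n}{\int_1^3 n^{-0.8}\,{\rm d}n}
 = \frac{2^{0.2}-1}{3^{0.2}-1}
 \simeq \frac{0.149}{0.246}
 \simeq 0.60 .
\]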
The dotted lines in the remaining panels of Figure 1
show the results obtained for three different values of the
maximum cold gas mass fraction, Fcrit = 0.6, 0.3, and 0.1.
Lowering Fcrit has a strong impact on the boxy fraction
of low-luminosity ETGs, while leaving that of bright ETGs
largely unaffected. As we show in §4 this mainly owes to the
fact that Fcold decreases strongly with increasing luminosity.
Consequently, by changing Fcrit we can tune the slope of
the relation between fboxy and luminosity, while ncrit mainly
governs the absolute normalization. We obtain a good match
to the P07 data for ncrit = 2 and Fcrit = 0.1 (third panel in
lowest row). This implies that boxy ETGs only form out of
relatively dry and violent mergers, in good agreement with
numerical simulations.
3.1 The influence of disk regrowth
The analysis above, however, does not consider the impact
of the growth of a new disk around the merger remnant.
Since this may turn boxy systems into disky systems, it can
have a significant impact on the predicted fboxy. We now
take this effect into account by setting fd,max to its fiducial
value of 0.2.
The solid lines in Fig. 1 show the boxy fractions thus
obtained. A comparison with the dotted lines shows that the
newly formed disks only cause a significant decrease of fboxy
at the faint end. At the bright end, AGN feedback prevents
the cooling of hot gas, therewith significantly reducing the
rate at which a new disk can regrow (in the absence of cooling, the only way in which a galaxy can regrow a disk is via minor mergers). However, when Fcrit =
0.1, we obtain the same boxy fractions for fd,max = 0.2 as
for fd,max = 1, even at the faint end. This implies that we
obtain the same conclusions as above: matching the data
of P07 requires ncrit ≃ 2 and Fcrit ≃ 0.1. In other words,
our constraints on ncrit and Fcrit are robust to exactly how
much disk regrowth is allowed before it reveals itself in the
isophotes.
Why do faint ETGs with Fcold < 0.1 not regrow signifi-
cant disks, while those with Fcold > 0.1 do? Note that during
the last major merger, the entire cold gas mass is converted
into stars in a starburst. Therefore, it is somewhat puzzling
that the galaxy’s ability to regrow a disk depends on its cold
gas mass fraction at the last major merger. As it turns out,
this owes to the fact that progenitors with a low cold gas
mass fraction are more likely to be satellite galaxies. Fig. 2
plots the fractions of ETGs with last major mergers be-
tween a central galaxy and a satellite (C-S; solid lines) and
between two satellites (S-S; dashed lines). Note that in our
model, S-S mergers occur whenever their dark matter sub-
haloes in the N-body simulation merge. Results are shown
for all ETGs (left-hand panel), and for only those ETGs
that have Fcold < 0.1 (right-hand panel). In our fiducial
model with AGN feedback (black lines) the most massive
ETGs almost exclusively form out of C-S mergers. Since a
satellite galaxy can never become a central galaxy, this is
consistent with the fact that virtually all massive ETGs at
z = 0 are central galaxies (in massive haloes). Roughly 40
percent of all low mass ETGs have a last major merger be-
tween two satellite galaxies. However, when we only focus on
low mass ETGs with Fcold < 0.1, we find that ∼ 95 percent
of their last major mergers are between two satellite galax-
ies. Since the z = 0 descendants of S-S mergers will also
be satellites, this implies that virtually all boxy ETGs with
M∗ <∼ 2 × 10^10 h^−1 M⊙ are satellite galaxies. Furthermore,
since satellite galaxies do not have a hot gas reservoir (at
least not in our semi-analytical model) they cannot regrow
a new disk by cooling. This explains why for Fcrit = 0.1 the
boxy fractions are independent of the value of fd,max.
Figure 3. The progenitor mass ratio, n, (left-hand panel) and the cold gas mass fraction at the last major merger, Fcold, (right-hand
panel) as function of the z = 0 stellar mass, M∗, of ETGs. Solid lines with errorbars indicate the median and the 20th and 80th percentiles
of the distributions. While the mass ratio of the progenitors of early-type galaxies is independent of stellar mass, Fcold decreases
strongly with increasing M∗.
3.2 The role of AGN feedback
Our semi-analytical model includes “radio-mode” AGN
feedback, similar to that in Croton et al. (2006), in order to
suppress the cooling in massive haloes. This in turn shuts
down star formation in the central galaxies in these haloes,
so that they become red. In the absence of AGN feedback,
new gas continues to cool making the central galaxies in
massive haloes overly massive and too blue (e.g., Benson
et al. 2003; Bower et al. 2006; Croton et al. 2006; Kang et
al. 2006). In order to study its impact on fboxy as func-
tion of luminosity, we simply turn off AGN feedback in our
model. Although this results in a semi-analytical model that
no longer fits the galaxy luminosity function at the bright
end, and results in a color-magnitude relation with far too
many bright, blue galaxies, a comparison with the models
discussed above nicely isolates the effects that are directly
due to our prescription for AGN feedback.
The dashed lines in Figure 1 show the predictions of
our model without AGN feedback (and with fd,max = 0.2).
A comparison with our fiducial model (solid lines) shows
that apparently the AGN feedback has no impact on fboxy
for faint ETGs with MB −5 log h >∼ −18. At the bright end,
though, the model without AGN feedback predicts boxy
fractions that are significantly lower (for reasons that will
be discussed in §4.2). Consequently, the luminosity depen-
dence of fboxy is much weaker than in the fiducial case. The
only model that comes close to matching the data of P07
is the one with ncrit = 3 and Fcrit = 0.1. We emphasize,
though, that this model is not realistic. In addition to the
fact that this semi-analytical model does not fit the observed
luminosity function and color magnitude relation, a value of
ncrit = 3 is also very unlikely: numerical simulations have
clearly shown that mergers with a mass ratio near 1:3 almost
invariably result in disky remnants (e.g., Naab & Burkert
2003).
4 DISCUSSION
4.1 The Origin of the ETG Dichotomy
We now examine the physical causes for the various scal-
ings noted above. We start by investigating why our fiducial
model with ncrit = 2 and Fcrit = 0.1 is successful in repro-
ducing the luminosity dependence of fboxy, i.e., why it pre-
dicts that the boxy fraction increases with luminosity. Given
the method used to assign isophotal shapes to the ETGs in
our model, there are three possibilities: (i) brighter ETGs
have smaller progenitor mass ratios, (ii) brighter ETGs have
progenitors with smaller cold gas mass fractions, or (iii)
brighter ETGs have less disk regrowth after their last major
merger.
We can exclude (i) from the fact that the models that
ignore disk regrowth (i.e., with fd,max = 1) and that ignore
the cold gas mass fractions (i.e., with Fcrit = 1) predict
that the boxy fraction is roughly independent of luminosity
(dotted lines in upper panels of Fig. 1). This suggests that
the distribution of n is roughly independent of the (present
day) luminosity of the ETGs. This is demonstrated in the
left-hand panel of Fig. 3, where we plot n as a function of
M∗, the stellar mass at z = 0. Each dot corresponds to an
ETG in our fiducial model, while the solid black line with
the errorbars indicates the median and the 20th and 80th
percentiles of the distribution: clearly the progenitor mass
ratio is independent of M∗.
Figure 4. Contour plots for the number density of ETGs as function of their present day stellar mass, M∗, and their cold gas mass
fraction at the last major merger, Fcold. Results are shown both for our fiducial model with AGN feedback (left-hand panel), as well as
for the model without AGN feedback (right-hand panel). In both cases a clear bimodality is apparent: ETGs with large and low masses
formed out of dry and wet mergers, respectively. Note that this bimodality is present independent of the presence of AGN feedback.
The boxy fraction of our best-fit model with ncrit = 2
and Fcrit = 0.1 is also independent of the regrowth of
disks, which is evident from the fact that the models with
fd,max = 1 (dotted lines) and fd,max = 0.2 (solid lines) pre-
dict boxy fractions that are indistinguishable. Therefore op-
tion (iii) can also be excluded, and the luminosity depen-
dence of fboxy thus has to indicate that the progenitors of
more luminous ETGs have a lower gas mass fraction. That
this is indeed the case can be seen from the right-hand panel
of Fig. 3 which shows Fcold as function of M∗. Once again,
the solid black line with the errorbars indicates the median
and the 20th and 80th percentiles of the distribution. Note
that Fcold decreases strongly with increasing stellar mass;
the most massive ETGs form almost exclusively from dry
mergers with Fcold < 0.1.
The left-hand panel of Fig. 4 shows a different rendi-
tion of the relation between Fcold and M∗. Contours indicate
the number density, φ(Fcold,M∗), of ETGs in the Fcold-M∗
plane, normalized by the total number of ETGs at each given
M∗-bin, i.e.,
∫ φ(Fcold,M∗) dFcold = 1   (2)
Note that φ(Fcold,M∗) is clearly bimodal: low mass ETGs
with M∗ <∼ 3 × 10^9 h^−1 M⊙ have high Fcold, while the pro-
genitors of massive ETGs have low cold gas mass fractions.
Clearly, if wet and dry mergers produce disky and boxy el-
lipticals, respectively, this bimodality is directly responsible
for the ETG dichotomy.
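A per-mass-bin normalized distribution of this kind can be tabulated as sketched below, in the spirit of eq. (2). The function name, bin edges and the mock inputs are placeholders for illustration; they are not taken from the model code.

```python
import numpy as np

def phi_Fcold_given_Mstar(F_cold, M_star, F_edges, M_edges):
    """Number density of ETGs in the (F_cold, M_star) plane, normalized within each
    M_star bin so that the integral over F_cold equals unity (cf. eq. 2)."""
    H, _, _ = np.histogram2d(F_cold, M_star, bins=[F_edges, M_edges])
    dF = np.diff(F_edges)[:, None]                 # F_cold bin widths (column vector)
    norm = (H * dF).sum(axis=0, keepdims=True)     # integral over F_cold per M_star bin
    norm[norm == 0] = 1.0                          # guard against empty mass bins
    return H / norm                                # now sum_i phi[i, j] * dF[i] == 1

# Mock usage (illustration only; the real F_cold and M_star come from the model catalogue):
rng = np.random.default_rng(1)
M_star = 10.0**rng.uniform(9.0, 12.0, 10000)
F_cold = np.clip(rng.normal(0.6 - 0.2 * (np.log10(M_star) - 9.0), 0.1), 0.0, 1.0)
phi = phi_Fcold_given_Mstar(F_cold, M_star, np.linspace(0, 1, 21), np.logspace(9, 12, 13))
```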
What is the physical origin of this bimodality? It is
tempting to expect that it owes to AGN feedback. After all,
in our model AGN feedback is more efficient in more massive
galaxies. Since more massive ETGs have more massive pro-
genitors, one could imagine that their cold gas mass fractions
are lower because of the AGN feedback. However, the right-
hand panel of Fig. 4 shows that this is not the case. Here
we show φ(Fcold,M∗) for the model without AGN feedback.
Somewhat surprisingly, this model predicts almost exactly
the same bimodality as the model with AGN feedback. Their
are subtle differences, which have a non-negligible effect on
the boxy fractions and which will be discussed in §4.2 be-
low. However, it should be clear from Fig. 4 that the overall
bimodality in φ(Fcold,M∗) is not due to AGN feedback.
In order to explore alternative explanations for the bi-
modality, Fig. 5 shows some relevant statistics. Upper and
lower panels correspond to the models with and without
AGN feedback, respectively. Here we focus on our fiducial
model with AGN feedback; the results for the model without
AGN feedback will be discussed in §4.2. The upper left-hand
panel shows the average cold gas mass fraction of individ-
ual galaxies, 〈fcold〉, as function of lookback time. Note that
here we use fcold to distinguish it from Fcold, which indi-
cates the cold gas mass fraction of the combined progeni-
tors taking part in a major merger, as defined in eq. (1).
Results are shown for galaxies of two different (instanta-
neous) stellar masses, M∗ = 3 × 10^9 h^−1 M⊙ (red lines) and
M∗ = 3 × 10^10 h^−1 M⊙ (black lines), and for two (instanta-
neous) types: early-types (dotted lines) and late-types (solid
lines). Following N06, here we define early-types as systems
with a bulge-to-total stellar mass ratio of 0.6 or larger; con-
trary to our z = 0 selection criteria described in §2.1, we do
not include a color selection, simply because the overall color
of the galaxy population evolves as function of time. First
of all, note that 〈fcold〉 of galaxies of given type and given
mass decreases with increasing time (i.e., with decreasing
lookback time). This is simply due to the consumption by
star formation. Secondly, at a given time, early-type galax-
ies have lower gas mass fractions than late-type galaxies.
This mainly owes to the fact that at a major merger, which
creates an early-type, all the available cold gas is consumed
in a starburst. Consequently, each early-type starts its life
with fcold = 0. Finally, massive galaxies have lower gas
mass fractions than their less massive counterparts. This
owes to the fact that more massive galaxies live, on average,
in more massive haloes, which tend to form (not assem-
ble!) earlier, thus allowing star formation to commence at
an earlier epoch (see Neistein et al. 2006). In addition, the
star formation efficiency used in the semi-analytical model
is proportional to the mass of the cold gas times Mvir^0.73. As
discussed in K05, this scaling with the halo virial mass is re-
quired in order to match the observed 〈fcold〉(M∗) at z = 0
(see also Cole et al. 1994, 2000; De Lucia et al. 2004).
Figure 5. Various statistics of our semi-analytical models. Upper and lower panels refer to our models with and without AGN feedback,
respectively. Left-hand panels: The average cold gas mass fraction of individual galaxies as function of lookback time t. Results are shown
for galaxies of two different (instantaneous) stellar masses, M∗ = 3 × 10^9 h^−1 M⊙ (red lines) and M∗ = 3 × 10^10 h^−1 M⊙ (black lines),
and for two (instantaneous) types: early-types (dotted lines) and late-types (solid lines). Middle panels: The fractions of late-late (L-L;
solid lines), early-late (E-L; dotted lines) and early-early (E-E; dashed lines) type mergers as function of the z = 0 stellar mass of the
resulting ETG. Right-hand panels: The average lookback time to the last major merger of z = 0 ETGs as function of their z = 0 stellar
mass. Results are shown separately for L-L mergers (solid line), E-L mergers (dotted line), and E-E mergers (dashed line). The errorbars
indicate the 20th and 80th percentiles of the distribution of the E-L mergers. For clarity, we do not show these percentiles for the L-L
and E-E mergers, though they are very similar. See text for detailed discussion.
The middle panel in the upper row of Fig. 5 shows what
kind of galaxy types are involved in the last major mergers
of present-day ETGs. Solid, dotted and dashed curves show
the fractions of L-L, E-L and E-E mergers, where ‘L’ and ‘E’
refer to late-types and early-types, respectively. As above,
these types are based solely on the bulge-to-total mass ratio
of the galaxy and not on its color. In our semi-analytical
model, the lowest mass ETGs almost exclusively form via
L-L mergers. With increasing M∗, however, there is a pro-
nounced decrease of the fraction of L-L mergers, which are
mainly replaced by E-L mergers. The fraction of E-E merg-
ers increases very weakly with increasing stellar mass but
never exceeds 10 percent. Thus, although boxy ellipticals
form out of dry mergers, these are not necessarily mergers
between early-type galaxies. In fact, our model predicts that
the vast majority of all dry mergers involve at least one late-
type galaxy (though with a low cold gas mass fraction). This
is in good agreement with the SPH simulation of Maller et
al. (2006), who also find that E-E mergers are fairly rare.
However, it is in stark contrast to the predictions of the
semi-analytical model of N06, who find that more than 50
percent of the last major mergers of massive ellipticals are E-
E mergers. We suspect that the main reason for this strong
discrepancy is the fact that N06 used merger trees based on
the EPS formalism.
Finally, the upper right-hand panel of Fig. 5 plots the
average lookback time to the last major merger of ETGs as
function of their present day stellar mass. Results are shown
separately for L-L mergers (solid line), E-L mergers (dotted
line), and E-E mergers (dashed line). Clearly, more massive
ETGs assemble later (at lower lookback times). This mainly
owes to the fact that more massive galaxies live in more
massive haloes, which themselves assemble later (cf. Lacey
& Cole 1993; Wechsler et al. 2002; van den Bosch 2002;
Neistein et al. 2006; De Lucia et al. 2006). In addition, it is
clear that at fixed stellar mass, E-E mergers occur later than
L-L mergers, with E-L mergers in between. This difference,
however, is small compared to the scatter.
If we combine all this information, we infer that the
bimodality in φ(Fcold,M∗) owes to the following three facts:
• More massive ETGs have more massive progenitors
(this follows from the fact that n is independent of M∗).
Since at a given time more massive galaxies of a given type
have lower cold gas mass fractions, 〈Fcold〉 decreases with
increasing M∗.
• More massive ETGs assemble later (at lower redshifts).
Galaxies of given mass and given type have lower 〈fcold〉 at
later times. Consequently, 〈Fcold〉 decreases with increasing M∗.
• More massive ETGs have a larger fraction of early-type
progenitors. ETGs of a given mass have a lower cold gas
mass fraction than late type galaxies of the same mass, at
any redshift. In addition, E-L mergers occur at later times
than L-L mergers. Both these effects also contribute to the
fact that 〈Fcold〉 decreases with increasing M∗.
4.2 Is AGN feedback relevant?
A comparison of the upper and lower panels in Fig. 5
shows that the three effects mentioned above, and thus
the bimodality in φ(Fcold,M∗), are present independent of
whether or not the model includes feedback from active
galactic nuclei. There are only two small differences: with-
out AGN feedback massive ETGs (i) are more likely to result
from L-L mergers, and (ii) have a higher 〈fcold〉 (cf. black
dotted curves in the left-hand panels of Fig. 5). Both ef-
fects reflect that AGN feedback prevents the cooling of hot
gas around massive galaxies, therewith removing an impor-
tant channel for building a new disk. As is evident from
Fig. 4, these two effects only have a very mild impact on
φ(Fcold,M∗). We therefore conclude that the bimodality of
ETGs is not due to AGN feedback.
This does not imply, however, that AGN feedback does
not have an impact on the boxy fractions. As is evident from
Fig. 1, the models with and without AGN feedback clearly
predict different fboxy at the bright end. To understand the
origin of these differences, first focus on Fig. 4. Although
both panels look very similar, upon closer examination one
can notice that at M∗ >∼ 10^11 h^−1 M⊙ the number density of
ETGs with 0.1 <∼ Fcold <∼ 0.25 is significantly larger in the
model without AGN feedback. In the model with AGN feed-
back these systems all have Fcold < 0.1. This explains why
the model without AGN feedback predicts a lower boxy frac-
tion for bright galaxies when Fcrit = 0.1. However, this does
not explain why fboxy is also different when Fcrit ≥ 0.3.
After all, for those models it should not matter whether
Fcold = 0.05 or Fcold = 0.25, for example. It turns out that
in these cases the differences between the models with and
without AGN feedback are due to the regrowth of a new
disk; since AGN feedback suppresses the cooling of hot gas
around massive galaxies, it strongly suppresses the regrowth
of a new disk, thus resulting in higher boxy fractions.
Note however, that in ETGs with Fcold < 0.1, disk re-
growth is always negligible. In the presence of AGN feedback
this is due to the suppression of cooling in massive haloes.
In the absence of AGN feedback it owes to the fact that only
a very small fraction of ETGs are central galaxies. As can
be seen from the right-hand panel of Fig. 2, more than 90
percent of the ETGs have last major mergers between two
satellite galaxies (with AGN feedback this fraction is smaller
than 20 percent). Since satellite galaxies do not have hot gas
reservoirs, no significant disks can regrow around these sys-
tems.
Figure 6. The boxy fraction of ETGs as function of halo (group)
mass. Red triangles (for satellite galaxies) and blue circles (for
central galaxies) are taken from P07, and have been obtained us-
ing the galaxy group catalogue of Weinmann et al. (2006). Dashed
and solid lines correspond to the predictions from our fiducial
model.
4.3 Environment dependence
Using the SDSS galaxy group catalogue of Weinmann et
al. (2006), which has been constructed using the halo-based
group finder developed by Yang et al. (2005), P07 inves-
tigated how fboxy scales with group mass. They also split
their sample in ‘central’ galaxies (defined as the brightest
group members) and ‘satellites’. The open circles and trian-
gles in Fig. 6 show their results for centrals and satellites,
respectively. Although there are only two data points for the
satellites, it is clear that central galaxies are more likely to
be boxy than a satellite galaxy in a group (halo) of the same
mass.
We now investigate whether our fiducial semi-analytic
model that fits the luminosity dependence of the boxy frac-
tion (i.e., the one with ncrit = 2 and Fcrit = 0.1) can also
reproduce these trends. The model predictions for the cen-
trals and satellites are shown in Fig. 6 as solid and dashed
lines, respectively. Here we have associated the halo virial
mass with the group mass, and an ETG is said to be a cen-
tral galaxy if it is the brightest galaxy in its halo. The model
accurately reproduces the boxy fraction of both central and
satellite galaxies. In particular, it reproduces the fact that
fboxy of central galaxies is higher than that of satellites in
groups (haloes) of the same mass.
As shown in P07, the boxy fraction as function of group
c© 2000 RAS, MNRAS 000, 1–13
10 Kang et al.
Figure 7. The residuals of the relations between n and M∗ (left panel) and Fcold and M∗ (right panel) as functions of the virial mass of
the halo in which the ETGs reside at z = 0. As in Fig. 3 the solid lines with errorbars indicate the mean and the 20th and 80th percentiles
of the distributions. These show that after one corrects for the stellar mass dependence, the properties of the last major mergers of ETGs
are independent of their halo mass.
mass, for both centrals and satellites, is perfectly consistent
with the null-hypothesis that the isophotal shape of an ETG
depends only on its luminosity; the fact that centrals have a
higher boxy fraction than satellites in the same group, sim-
ply owes to the fact that the centrals are brighter. Also, the
increase of fboxy with increasing group mass simply reflects
that more massive haloes host brighter galaxies. It there-
fore may not come as a surprise that our semi-analytical
model that fits the luminosity dependence of fboxy also fits
the group mass dependencies shown in Fig. 6. It does mean,
though, that in our model the merger histories of ETGs of
a given luminosity do not strongly depend on the halo mass
in which the galaxy resides.
To test this we proceed as follows. For each ETG in
our model we compute 〈n〉 and 〈Fcold〉, where the average
is over all ETGs with stellar masses similar to that of the
galaxy in question. Fig. 7 plots the residuals n − 〈n〉 and
Fcold − 〈Fcold〉 as function of the virial mass, Mvir, of the
halo in which they reside. This clearly shows that after one
corrects for the stellar mass dependence, the properties of
the last major merger of ETGs are indeed independent of
their halo mass‡. This provides theoretical support for the
conclusion of P07 that the stellar mass (or luminosity) of an
ETG is the main parameter that determines whether it will
be disky or boxy.
4.4 The Origin of Cusps and Cores
As discussed in §1, the dichotomy of ETGs is not only re-
stricted to their isophotal shapes. One other important as-
pect of the dichotomy regards the central density distribution of ETGs; while disky systems typically have cuspy profiles, the bright and boxy ellipticals generally reveal density profiles with a pronounced core. Here we briefly discuss how the formation of cusps and cores fits in the picture sketched above.

‡ The fact that the distribution of the progenitor mass ratio n is independent of halo mass was also found by KB05.
In the paradigm adopted here, low luminosity ETGs
form mainly via wet mergers. Due to the fluctuating poten-
tial of the merging galaxies and the onset of bar instabilities,
the gas experiences strong torques which causes a significant
fraction of the gas to sink towards the center of the po-
tential well where it undergoes a starburst (e.g., Shlosman,
Frank & Begelman 1989; Barnes & Hernquist 1991; Mihos
& Hernquist 1996). Detailed hydrodynamical simulations of
gas-rich mergers (e.g., Springel & Hernquist 2005; Cox et
al. 2006b) result in the formation of remnants with surface
brightness profiles that are reminiscent of cuspy ETGs (John
Kormendy, private communication). Hence, cusps seem a
natural by-product of the dissipative processes associated
with a wet merger.
Boxy ETGs, however, are thought to form via dry merg-
ers. As can be seen from Fig. 5, roughly 35 percent of all mas-
sive ETGs originate from a last major merger that involves
an early-type progenitor. If this progenitor contains a cusp,
this will survive the merger, as most clearly demonstrated by
Dehnen (2005). The only mergers that are believed to result
directly in a remnant with a core, are mergers between pure
stellar disks with a negligible fcold (e.g., Cox et al. 2006a).
Fig. 8 shows the cumulative distributions of the bulge-to-
total stellar mass ratios of the progenitors of present day
ETGs. Results are shown for ETGs in three mass ranges,
as indicated. The probability that a progenitor of a massive ETG (with M∗ > 10^11 h^−1 M⊙) has a negligible bulge component (M∗,bulge < 0.01M∗) is only about 3 percent. Hence,
we expect that only about 1 out of every 1000 major mergers that result in a massive ETG will have a remnant with a core (since both progenitors must have a negligible bulge: 0.03^2 ≈ 10^−3). And this is most likely an overestimate, since we
did not take the cold gas mass fractions into consideration.
Since the cusp accounts for only about one percent of the
total stellar mass (e.g., Faber et al. 1997; Milosavljević et
al. 2002), cold gas mass fractions of a few percent are prob-
ably enough to create a cusp via dissipational processes.
Therefore, an additional mechanism is required in order
to create a core (i.e., destroy a cusp). Arguably the most
promising mechanism is the orbital decay of a supermas-
sive black hole (SMBH) binary, which can scour a core by
exchanging angular momentum with the cusp stars (e.g.,
Begelman et al. 1980; Ebisuzaki et al. 1991; Quinlan 1996;
Faber et al. 1997; Milosavljević et al. 2002; Merritt 2006a).
Since virtually all spheroids contain a SMBH at their cen-
ter, with a mass that is tightly correlated with the mass of
the spheroid (e.g., Kormendy & Richstone 1995; Ferrarese &
Merritt 2000; Gebhardt et al. 2000; Marconi & Hunt 2003;
Häring & Rix 2004), it is generally expected that such bina-
ries are common in merger remnants (but see below).
While offering an attractive explanation for the pres-
ence of cores in massive, boxy ETGs, this picture simulta-
neously poses a potential problem for the presence of cusps
in disky ETGs. After all, if the progenitors of disky ETGs
also harbor SMBHs, the same process could create a core
in these systems as well. There are two possible ways out
of this paradox: (i) low mass ETGs do not form a SMBH
binary, or (ii) a cusp is regenerated after the two SMBHs
have coalesced. We now discuss these two options in turn.
In order for a SMBH binary to form, dynamical friction
must first deliver the two SMBHs from the two progenitors
to the center of the newly formed merger remnant. This
process will only be efficient if the spheroidal hosts of the
SMBHs are sufficiently massive. Consider a (small) bulge
that was part of a late-type progenitor which is now orbiting
the remnant of its merger with another galaxy. Assume for
simplicity that both the bulge and the merger remnant are
purely stellar singular isothermal spheres (ρ ∝ r−2) with
velocity dispersions equal to σb and σg, respectively. Then,
assuming that the bulge is on a circular orbit, with an initial
radius ri, Chandrasekhar’s (1943) formula gives an infall
time for the bulge of
t_infall ≈ 3.3 (r_i/σ_g) (σ_g/σ_b)^3 ≈ 4.7 × 10^8 yr (σ_g/σ_b)^3

(Merritt 2006b). Here we have used that r_i/σ_g ≃ √2 t_cross with t_cross ∼ 10^8 yr the galaxy crossing time. If the galaxy is the remnant of an equal mass merger, so that M_b ∼ (∆/2)M_g, with ∆ the bulge-to-total stellar mass ratio of the late-type progenitor, we find that t_infall is equal to the Hubble time (1.3 × 10^10 yr) for ∆ ≃ 0.07.
As can be seen from Fig. 8, about 70 percent of the low mass ETGs (with 10^9 h^−1 M⊙ < M∗ < 10^10 h^−1 M⊙) have at least one progenitor with a stellar bulge-to-total mass ratio ∆ < 0.07. There-
fore, we expect that a similar fraction will form without a
SMBH binary, and thus will not form a core. For compari-
son, for massive ETGs (with M∗ > 10^11 h^−1 M⊙) only about
20 percent of the progenitors will have a sufficiently small
bulge to prevent the formation of a SMBH binary.
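To make the numbers explicit, the short Python check below reproduces this estimate. It assumes the t_infall ≈ 4.7 × 10^8 yr (σ_g/σ_b)^3 scaling quoted above, together with a simple M ∝ σ^3 scaling for the truncated isothermal spheres so that (σ_g/σ_b)^3 ≈ M_g/M_b = 2/∆; the constants are only the illustrative values used in the text, not a substitute for the full calculation.

T_CROSS_YR = 1.0e8                            # galaxy crossing time t_cross
T_HUBBLE_YR = 1.3e10                          # Hubble time used in the text
PREFACTOR_YR = 3.3 * 2 ** 0.5 * T_CROSS_YR    # ~4.7e8 yr, from r_i/sigma_g ~ sqrt(2) t_cross

def t_infall_yr(delta):
    """Infall time for a bulge with bulge-to-total ratio delta of the late-type progenitor,
    assuming (sigma_g/sigma_b)**3 ~ M_g/M_b = 2/delta for an equal-mass merger remnant."""
    return PREFACTOR_YR * 2.0 / delta

delta_crit = 2.0 * PREFACTOR_YR / T_HUBBLE_YR   # Delta at which t_infall equals a Hubble time
print(f"t_infall(Delta=0.07) = {t_infall_yr(0.07):.2e} yr")   # ~1.3e10 yr
print(f"critical Delta       = {delta_crit:.3f}")             # ~0.07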
Figure 8. The cumulative probability that a progenitor of a z = 0 ETG has a stellar bulge-to-total mass ratio less than M∗,bulge/M∗. Results are shown for the progenitors of ETGs in three stellar mass ranges, as indicated (masses are in h^−1 M⊙). Note that the progenitors of more massive ETGs have significantly higher M∗,bulge/M∗. As discussed in the text, this may help to explain why low mass ETGs have cusps, while massive ones have cores.

An alternative explanation for the presence of cusps in low mass ETGs is that the cusp is regenerated by star formation from gas present at the last major merger. However, as emphasized by Faber et al. (1997), this results in a serious timing problem, as it requires that the new stars must form
after the SMBH binary has coalesced. Another potential
problem with this picture, is that the cusp would be younger
than the main body of the ETG which may lead to observ-
able effects (i.e., cusp could be bluer than main body). How-
ever, in light of the results presented here, we believe that
neither of these two issues causes a serious problem. First of
all, the cold gas mass fractions involved with the last major
merger, and hence the mass fraction that is turned into stars
in the resulting starburst, is extremely large: 〈Fcold〉 ∼ 0.8
(see Fig. 4). As mentioned above, a significant fraction of
this gas is transported to the center, where it will function
as an important energy sink for the SMBH binary, greatly
speeding up its coalescence (Escala et al. 2004, 2005) and
therewith reducing the timing problem mentioned above. In
fact, the gas may well be the dominant energy sink, so that
the pre-existing cusps of the progenitors are only mildly af-
fected. But even if the cusps were destroyed, there clearly
should be enough gas left to build a new cusp. In fact, if,
as envisioned in our semi-analytical model, all the cold gas
present at the last major merger is consumed in a starburst,
a very significant fraction of the stars in the main body
would also be formed in this starburst (not only the cusp).
This would help to diminish potential population differences
between the cusp and the main body of the ETG. In addi-
tion, as can be seen from the right-hand panels of Fig. 5,
the last major merger of low luminosity ETGs occurred on
average ∼ 9.5 Gyr ago. Hence, the stars made in this burst
are not easily distinguished observationally from the ones
that were already present before the last major merger.
To summarize, our semi-analytical model predicts that
the progenitors of ETGs have cold gas mass fractions and
bulge-to-total mass ratios that offer a relatively natural ex-
planation for the observed dichotomy between cusps and
cores.
5 CONCLUSIONS
Using a semi-analytical model for galaxy formation, com-
bined with a large N-body simulation, we have investigated
the origin of the dichotomy among ETGs. In order to assign
isophotal shapes to the ETGs in our model we use three cri-
teria: an ETG is said to be boxy if (i) the progenitor mass
ratio at the last major merger is n < ncrit, (ii) the total
cold gas mass fraction of the sum of the two progenitors
at the last major merger is Fcold < Fcrit, and (iii) after its
last major merger the ETG is not allowed to regrow a new
disk with a stellar mass that exceeds 20 percent of the total
stellar mass.
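For concreteness, these three criteria amount to a simple boolean test; the Python sketch below (with hypothetical field names, not code from the actual semi-analytical model) restates the fiducial thresholds ncrit = 2, Fcrit = 0.1 and the 20 percent disk-regrowth limit.

N_CRIT = 2             # threshold on the progenitor mass ratio n at the last major merger
F_CRIT = 0.1           # threshold on the combined cold gas mass fraction F_cold
REGROWTH_LIMIT = 0.2   # maximum allowed regrown-disk-to-total stellar mass ratio

def is_boxy(n_last, f_cold_last, m_disk_regrown, m_star):
    """True if a model ETG satisfies all three 'boxy' criteria listed above."""
    return (n_last < N_CRIT and
            f_cold_last < F_CRIT and
            m_disk_regrown < REGROWTH_LIMIT * m_star)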
In agreement with KB05, we find that we can not repro-
duce the observed luminosity (or, equivalently, stellar mass)
dependence of fboxy if we assign isophotal shapes based only
on the progenitor mass ratio. This owes to the fact that
the distribution of n is virtually independent of the stellar
mass, M∗, of the ETG at z = 0. Rather, to obtain a boxy
fraction that increases with increasing luminosity one also
needs to consider the cold gas mass fraction at the last ma-
jor merger. In fact, we can accurately match the data of P07
with ncrit = 2 and Fcrit = 0.1. This implies that boxy galax-
ies originate from relatively violent and dry mergers with
roughly equal mass progenitors and with less than 10 per-
cent cold gas, in good agreement with numerical simulations
(e.g., Naab et al. 2006a; Cox et al. 2006a). Our model also
nicely reproduces the observed boxy fraction as function of
halo mass, for both central galaxies and satellites. We have
demonstrated that this owes to the fact that after one cor-
rects for the stellar mass dependence, the properties of the
last major merger of ETGs are independent of their halo
mass. This provides theoretical support for the conjecture
of P07 that the stellar mass (or luminosity) of an ETG is
the main parameter that determines whether it will be disky
or boxy.
Our model predicts a number density distribution,
φ(Fcold,M∗), of ETGs in the Fcold-M∗ plane that is clearly
bimodal: low mass ETGs with M∗ <∼ 3 × 10^9 h^−1 M⊙ have
high Fcold, while the progenitors of massive ETGs have low
cold gas mass fractions. Clearly, if wet and dry mergers pro-
duce disky and boxy ellipticals, respectively, this bimodal-
ity is directly responsible for the ETG dichotomy. Contrary
to naive expectations, we find that this bimodality is in-
dependent of the inclusion of AGN feedback in the model.
Although AGN feedback is essential for regulating the lumi-
nosities and colors of the brightest galaxies (which end up
as ETGs with AGN feedback, but as blue disk-dominated
systems without AGN feedback), it does not explain the
bimodality among ETGs. Rather, this bimodality is due to
the fact that more massive ETGs (i) have more massive pro-
genitors, (ii) assemble later, and (iii) have a larger fraction
of early-type progenitors. Each of these three trends causes
the cold gas mass fraction of the progenitors of more mas-
sive ETGs to be lower, and thus its last major merger to
be dryer. In conclusion, the dichotomy among ETGs has a
very natural explanation within the hierarchical framework
of structure formation and does not require AGN feedback.
We also examined the morphological properties of the
progenitors of present day ETGs (at the epoch of the last
major merger). Indicating early- and late-type galaxies with
‘E’ and ‘L’, respectively, we find that the lowest mass ETGs
almost exclusively form via L-L mergers. With increasing
M∗, however, there is a pronounced decrease of the fraction
of L-L mergers, which are mainly replaced by E-L mergers.
The E-E mergers, however, never contribute more than 10
percent, in good agreement with the SPH simulations of
Maller et al. (2006). Thus, although boxy ellipticals form
out of dry mergers, these only rarely involve two early-type
systems.
Since satellite galaxies do not have a hot corona from
which new gas cools down, they typically have lower cold gas
mass fractions than central galaxies of the same mass. Con-
sequently, dry mergers are preferentially mergers between
two satellite galaxies. In fact, since a satellite galaxy can
not become a central galaxy, our model predicts that more
than 95 percent of all boxy ETGs with M∗ <∼ 2 × 10^10 h^−1 M⊙
are satellites.
We also find that the progenitors of less massive ETGs
typically have lower bulge-to-total mass ratios. In fact,
for ETGs with present day stellar masses in the range
10^9 h^−1 M⊙ < M∗ < 10^10 h^−1 M⊙ we find that almost half
of the progenitors at the last major merger have bulges
that do not contribute more than one percent to the to-
tal stellar mass. This may have important implications for
the observed dichotomy between cusps and cores in ETGs.
Cores are believed to form via the scouring effect of a SMBH
binary, that arises when the SMBHs associated with the
spheroidal components of the progenitor galaxies form a
bound pair. This requires both spheroids to sink to the cen-
ter of the potential well of the merger remnant via dynamical
friction. However, if the time scale for this infall exceeds the
Hubble time, no SMBH binary will form, thus preventing
the creation of a core. Using our prediction for the bulge-to-
total mass ratios of progenitor galaxies, and a simple esti-
mate based on Chandrasekhar’s dynamical friction formula,
we have estimated that ∼ 70 percent of low mass ETGs
in the aforementioned mass range will not form a SMBH
binary. For massive ETGs with M∗ > 10^11 h^−1 M⊙ this frac-
tion is only ∼ 20 percent. This may help to explain why
low mass ETGs have steep cusps, while massive ETGs have
cores.
Finally, in those low mass systems that do form a SMBH
binary, the large cold gas mass fraction at its last major
merger (〈Fcold〉 ≃ 0.8) provides more than enough raw ma-
terial for the regeneration of a new cusp. In addition, a large
fraction of the cold gas will sink to the center due to angular
momentum transfer where it will function as an important
energy sink for the SMBH binary. As shown by Escala et
al. (2004, 2005), this can cause a tremendous acceleration of
the coalescence of the SMBHs, largely removing the timing
problem interjected by Faber et al. (1997).
6 ACKNOWLEDGEMENTS
We are grateful to Eric Bell, Eric Emsellem, John Kormendy,
Thorsten Naab, Hans-Walter Rix, and the entire Galaxies-
Cosmology-Theory group at the MPIA for enlightening dis-
cussions.
REFERENCES
Adelman-McCarthy J.K., et al., 2006, ApJS, 162, 38
Barnes J.E., 1988, ApJ, 331, 699
Barnes J.E., Hernquist L.E., 1991, ApJ, 370, 65
Barnes J.E., Hernquist L.E., 1996, ApJ, 471, 115
Begelman M.C., Blandford R.D., Rees M.J., 1980, Nature, 287,
Bender R., 1988, A&AS, 193, 7
Bender R., Surma P., Döbereiner S., Möllenhoff C., Madejsky R.,
1989, A&A, 217, 35
Bendo G.J., Barnes J.E., 2000, MNRAS, 316, 315
Benson A.J., Bower R.G., Frenk C.S., Lacey C.G., Baugh C.M.,
Cole S., 2003, ApJ, 599, 38
Benson A.J., Kamionkowski M., Hassani S.H., 2005, MNRAS,
357, 847
Bower R.G., Benson A.J., Malbon R., Helly J.C., Frenk C.S.,
Baugh C.M., Cole S., Lacey C.G., 2006, MNRAS, 370, 645
Bournaud F., Combes F., Jog C.J., 2004, A&A, 418, 27
Bournaud F., Jog C.J., Combes F., 2005, A&A, 437, 69
Bruzual G., Charlot S., 2003, MNRAS, 344, 1000
Bryan G., Norman M., 1998, ApJ, 495, 80
Cappellari M, et al., 2007, preprint (astro-ph/0703533)
Cattaneo A., Dekel A., Devriendt J., Guiderdoni B., Blaizot J.,
2006, MNRAS, 370, 1651
Chandrasekhar S., 1943, ApJ, 97, 255
Cole S., Aragon-Salamanca A., Frenk C.S., Navarro J., Zepf S.E.,
1994, MNRAS, 271, 781
Cole S., Lacey C.G., Baugh C.M., Frenk C.S., 2000, MNRAS,
319, 168
Cox T.J., Dutta S.N., Di Matteo T., Hernquist L., Hopkins P.F.,
Robertson B., Springel V., 2006a, ApJ, 650, 791
Cox T.J., Jonsson P., Primack J.R., Somerville R.S., 2006b, MN-
RAS, 373, 1013
Croton D.J., et al., 2006, MNRAS, 365, 11
Davies R.L., Efstathiou G., Fall S.M., Illingworth G., Schechter
P.L., 1983, ApJ, 266, 41
Dehnen W., 2005, MNRAS, 360, 892
De Lucia G., Kauffmann G., White S.D.M., 2004, MNRAS, 349,
De Lucia G., Springel V., White S.D.M., Croton D., Kauffmann
G., 2006, MNRAS, 366, 499
Di Matteo P., Combes F., Melchior A.L., Semelin B., 2007,
preprint (astro-ph/0703212)
Ebisuzaki T., Makino J., Okamura S.K., 1991, Nature, 354, 212
Emsellem E., et al., 2007, preprint (astro-ph/0703531)
Escala A., Larson R.B., Coppi P.S., Mardones D., 2004, ApJ, 607,
Escala A., Larson R.B., Coppi P.S., Mardones D., 2005, ApJ, 630,
Faber S.M., et al. 1997, AJ, 114, 1771
Ferrarese L., van den Bosch F.C., Ford H.C., Jaffe W., O’Connell
R.W. 1994, AJ, 108, 1598
Ferrarese L., Merritt D., 2000, ApJ, 539, L9
Gebhardt K., et al. 1996, AJ, 112, 105
Gebhardt K., et al. 2000, ApJ, 539, L13
Hao C.N., Mao S., Deng Z.G., Xia X.Y., Wu H., 2006, MNRAS,
370, 1339
Häring N., Rix H.W., 2004, ApJ, 604,
Hernquist L., 1992, ApJ, 400, 460
Jing Y.P., Suto Y., 2002, ApJ, 574, 538
Kang X., Jing Y.P., Mo H.J., Börner G., 2005, ApJ, 631, 21 (K05)
Kang X., Jing Y.P, Silk J., 2006, ApJ, 648, 820
Kauffmann G., White S.D.M., Guiderdoni B., 1993, MNRAS, 264,
Khochfar S., Burkert A., 2005, MNRAS, 359, 1379 (KB05)
Kormendy J., Richstone D.O., 1995, ARA&A, 33, 581
Lacey C., Cole S., 1993, MNRAS, 262, 627
Lacey C., Cole S., 1994, MNRAS, 271, 676
Lauer T.R., et al., 2005, AJ, 129, 2138
Lauer T.R., et al., 2006, preprint (astro-ph/0609762)
Maller A.H., Katz N., Keres D., Davé R., Weinberg D.H., 2006,
ApJ, 647, 763
Marconi A., Hunt L.K., 2003, ApJ, 589, L21
Merritt D., 2006a, ApJ, 648, 976
Merritt D., 2006b, Rep. Prog. Phys., 69, 2513
Mihos J.C., Hernquist L.E., 1996, ApJ, 464, 641
Milosavljević M., Merritt D., Rest A., van den Bosch F.C., 2002,
MNRAS, 331, L51
Naab T., Burkert A., 2003, ApJ, 597, 893
Naab T., Jesseit R., Burkert A., 2006a, MNRAS, 372, 839
Naab T., Khochfar S., Burkert A., 2006b, ApJ, 636, L81 (N06)
Naab T., Trujillo I., 2006, MNRAS, 369, 625
Negroponte J., White S.D.M., 1983, MNRAS, 205, 1009
Neistein E., van den Bosch F.C., Dekel A., 2006, MNRAS, 372,
Nieto J.-L., Capaccioli M., Held E.V. 1988, A&A, 195, 1
Pasquali A., van den Bosch F.C., Rix H.W., 2007, preprint
(arXiv:0704.0931)
Pellegrini S. 1999, A&A, 351, 487
Pellegrini S. 2005, MNRAS, 364, 169
Quinlan G.D., 1996, New Astronomy, 1, 35
Ravindranath S., Ho L.C., Peng C.Y., Filippenko A.V., Sargent
W.L.W., 2001, AJ, 122, 653
Rest A., van den Bosch F.C., Jaffe W., Tran H., Tsvetanov Z.,
Ford H.C., Davies J., Schafer J., 2001, AJ, 121, 2431
Rix H.W., White S.D.M., 1990, ApJ, 362, 52
Schweizer F., 1982, ApJ, 252, 455
Shlosman I., Frank J., Begelman M.C., 1989, Nature, 338, 45
Simien F., de Vaucouleurs G., 1986, ApJ, 302, 564
Somerville R.S., Lemson G., Kolatt T.S., Dekel A., 2000, MN-
RAS, 316, 479
Springel V., 2000, MNRAS, 312, 859
Springel V., White S.D.M., Tormen G., Kauffmann G., 2001, MN-
RAS, 328, 726
Springel V., Hernquist L., 2005, ApJ, 622, 9
Toomre A., Toomre J., 1972, ApJ, 178, 623
van den Bosch F.C., 2002, MNRAS, 331, 98
Wechsler R.H., Bullock J.S., Primack J.R., Kravtsov A.V., Dekel
A., 2002, ApJ, 568, 52
Weinmann S.M., van den Bosch F.C., Yang X.H., Mo H.J., 2006,
MNRAS, 366, 2
White M., 2002, ApJS, 143, 241
Yang X.H., Mo H.J., van den Bosch F.C., Jing Y.P., 2005, MN-
RAS, 356, 1293
|
0704.0935 | Percolation Modeling of Conductance of Self-Healing Composites |
Percolation Modeling of Conductance of Self-Healing Composites
Alexander Dementsov and Vladimir Privman *
Center for Advanced Materials Processing, and
Department of Physics, Clarkson University,
Potsdam, New York 13699, USA
PACS: 62.20.Mk, 46.50.+a, 72.80.Tm
Key Words: self-healing materials, composites, conductance, material fatigue
Abstract
We explore the conductance of self-healing materials as a measure of the material
integrity in the regime of the onset of the initial fatigue. Continuum effective-field
modeling and lattice numerical simulations are reported. Our results illustrate the general
features of the self-healing process: The onset of the material fatigue is delayed, by
developing a plateau-like time-dependence of the material quality. We demonstrate that
in this low-damage regime, the changes in the conductance and similar transport/response
properties of the material can be used as measures of the material quality degradation.
________________________
* www.clarkson.edu/Privman
Physica A 385, 543-550 (2007) arXiv:0704.0935
Recently a significant research effort has been devoted to the design of “smart
materials.” In particular, self-healing composites [1-10] can restore their mechanical
properties with time or at least reduce material fatigue caused by the formation of
microcracks. It is expected that microcracks propagating through such materials can
break embedded capsules/fibers which contain the healing agent — a “glue” that
heals/delays further microcrack development — thus triggering the self-healing
mechanism. In recent experiments [1,7-10], an epoxy (polymer) was studied, with
embedded microcapsules containing a healing agent. Application of a periodic load on a
specimen with a crack, induced rupture of microcapsules [1]. The healing glue was
released from the damaged microcapsules, permeated the crack, and a catalyst triggered a
chemical reaction which re-polymerized the crack.
Defects of nanosizes are randomly distributed throughout the material.
Mechanical loads during the use of the material then cause formation of craze fibrils
along which microcracks develop. This leads to material fatigue and, ultimately,
degradation. Triggering self-healing mechanism at the nanoscale might offer several
advantages [10] for a more effective prevention of growth of microcracks. Indeed, it is
hoped [10] that nanoporous fibers with glue will heal smaller damage features, thus
delaying the material fatigue at an earlier stage than larger capsules [1,9] which basically
re-glue large cracks. Furthermore, on the nanoscale, the glue should be distributed/mixed
with the catalyst more efficiently because transport by diffusion alone will be effective
[10,11], thereby also eliminating the need for external UV irradiation [9], etc.
Theoretical and numerical modeling of self-healing materials are only in the
initiation stages [10,12,13]. Many theoretical works and numerical simulations [14-17]
consider formation and propagation of large cracks which, once developed, can hardly be
healed by embedded nano-featured capsules. Therefore, we have proposed [10] to
focus the modeling program on the time dependence of a gradual formation of damage
(fatigue) and its manifestation in material composition, as well as its healing by
nanoporous fiber rupture and release of glue.
We will shortly formulate rate equations [10] for such a process. In addition to
continuum rate equations for the material composition, numerical modeling can yield
useful information on the structure, and, later in this article, we report results of Monte
Carlo simulations. We also point out that the calculated material composition and
structure must be related to macroscopic properties that are experimentally probed. The
relation between composite materials composition and properties is an important and
rather broad field of research [18].
Recently, it has been demonstrated experimentally [19] that a rather dilute
network of carbon nanotubes, incorporated in the epoxy matrix, can provide a percolation
cluster the conductance of which can not only reflect the degree of the fatigue of the
material but also shows promise for probing the self-healing process. The main purpose
of the present article is to initiate continuum effective-field, as well as numerical lattice
modeling of percolation properties for materials with self-healing.
Different transport properties can be used to probe material integrity (damage
accumulation due to the formation of cracks). These include thermal conductivity
[20,21], photoacoustic waves [22,23], and electrical conductivity [19,24-27]. Generally,
transport properties can be highly nonlinear as functions of the degree of damage. For
example, the conductance can sharply drop to zero if the conducting network density
drops below the percolation threshold. However, for probing the initial fatigue, in the
regime of low levels of damage, one expects most transport properties to decrease
proportionately to the damage.
Let us summarize our recently proposed model [10] of the material composition
in the continuum rate equation approach. We denote by u(t) the fraction of material that is undamaged, by g(t) the fraction of material consisting of glue-carrying capsules, by d(t) the fraction of material that is damaged, and by b(t) the fraction of material with broken capsules, so that we have

u(t) + g(t) + d(t) + b(t) = 1. (1)

We consider the regime of small degree of degradation of the material, i.e., we assume that at least for small times, t, we have u(t) ≈ 1, whereas d(t), b(t) and g(t) are relatively small. In fact, b(0) = 0.
For the purposes of simple modeling, we assume that on average the capsules
degrade with the rate P, which is somewhat faster than the rate of degradation of the material itself due to its continuing use (fatigue), p, i.e., P > p. The latter assumption
was made to mimic the expected property that a significant amount of microcapsules
embedded in the material may actually weaken its mechanical properties and, were it not
for their healing effect, reduce its usable lifetime (though it was noted [1,11] that a small
amount of microcapsules actually increased the epoxy toughness); the density of the
“healing” microcapsules is one of the important system parameters to optimize in any
modelling approach. Thus, we approximately take
ġ(t) = −P g(t), yielding g(t) = g(0) e^{−Pt}. (2)

One can write a more complicated rate equation for g(t), but the added, nonlinear terms are small in the considered regime.
However, for the fraction of the undamaged material, we cannot ignore the
second, nonlinear term in the relation
u̇(t) = −p u(t) + H(t). (3)

Here we introduced the healing efficiency, H(t), which can be approximated by the expression

H(t) ∝ d(t) g(t) × (volume healed by one capsule). (4)
The healing efficiency is proportional to the fraction of glue capsules, as well as to the
fraction of the damaged material, because that is where the healing process is effective.
The latter will be approximated by d(t) ≈ 1 − u(t), which allows us to obtain a closed equation for u(t). Indeed, in Eq. (3) we can now use

H(t) = A e^{−Pt} [1 − u(t)]. (5)
The healing efficiency is controlled by the parameter
A ∝ g(0) × (volume healed by one capsule). (6)
While the model just formulated is quite simple, “minimal,” and many
improvements can be suggested, it has the advantage of offering an exact solution,
u(t) = u(0) e^{−pt − AP^{−1}(1 − e^{−Pt})} + A e^{−pt − AP^{−1}(1 − e^{−Pt})} ∫_0^t e^{−(P − p)τ + AP^{−1}(1 − e^{−Pτ})} dτ. (7)

This result is illustrated by the solid curves in Fig. 1, where we set u(0) = 1 for simplicity.
The main feature observed is that even when the healing efficiency parameter A is rather
small (here 0.02) but nonzero, the decay of the fraction of the undamaged material is
delayed for some interval of time. This represents the self-healing effect persisting until
the glue capsules are used up.
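As a quick illustration, the model can also be integrated numerically instead of using the exact solution (7); the short Python sketch below uses a simple explicit Euler step (our own choice of scheme and step size) with the parameter values of Fig. 1.

import math

def u_continuum(t_max, p=0.003, P=0.008, A=0.02, u0=1.0, dt=0.01):
    """Explicit-Euler integration of du/dt = -p*u + A*exp(-P*t)*(1 - u), Eqs. (3) and (5)."""
    u, t = u0, 0.0
    while t < t_max:
        healing = A * math.exp(-P * t) * (1.0 - u)   # healing efficiency H(t), Eq. (5)
        u += dt * (-p * u + healing)                 # rate equation (3)
        t += dt
    return u

# Parameter values of Fig. 1; setting A = 0 switches self-healing off.
for t in (100, 300, 600):
    print(t, round(u_continuum(t), 3), round(u_continuum(t, A=0.0), 3))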
Equation (6) suggests that an important challenge in the design of self-healing
materials will be to have the healing effect of most capsules cover volumes much larger
than a capsule, in order to compensate for a relatively small value of g(0), which is the
fraction of the material volume initially occupied by the glue-filled capsules. Since the
glue cannot “decompress,” its healing action, after it spreads out and solidifies, should
have a relatively long-range stress-relieving effect in order to prevent further crack
growth over a large volume.
The present simple continuum modeling cannot address the details of the
morphological material properties and glue transport; numerical simulations will be
needed to explore this issue. It is interesting to note that most material properties will
also depend on the specific morphological assumptions; their derivation within various
approximation schemes, will require more information than that provided by the average,
“effective field” approximate “materials quality” measures such as u(t). Here we are
interested, specifically, in the material conductivity.
Figure 1: The solid curves illustrate the fraction of the undamaged material, u(t), calculated according to Eq. (7) with A = 0.02, p = 0.003, P = 0.008 — the top curve, and without self-healing: A = 0, p = 0.003 — the bottom curve. The dashed curves illustrate the behavior of the mean-field conductance for these two cases, respectively, with the conductance decreasing more slowly with self-healing present, eventually reaching zero at the percolation transition at u = 0.5, see Eq. (8).
________________________________________________________________________
Since our numerical calculations reported below assume square lattice (coordination number z = 4) bond percolation, the conductance, G(t), shown as the dashed curves in Fig. 1, was calculated by using the bond-percolation mean-field formula [28],

G(t) = max[ 1 − z(1 − u(t))/(z − 2), 0 ] = max[ 2u(t) − 1, 0 ]. (8)

Here the conductance is normalized to have G(t = 0) = 1, and for simplicity we assumed that the bond percolation probability is given by u(t), i.e., we consider the situation when the conductance of the healthy/healed material is maximal, whereas the other areas do not
conduct at all. In the regime of a relatively low damage, which is likely the only one of
practical interest, and also the one where the mean-field expressions are accurate, we note
that the conductance provides a convenient, proportional measure of the material
degradation,
G(0) − G(t) ≈ K [u(0) − u(t)], (9)

where the constant K = z/(z − 2) depends on the microscopic details of the material conductivity. Here K = 2, but in practical situations this parameter can be fitted from experimental data.
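For reference, the mean-field relations (8) and (9) amount to the following few lines of Python; the functions below simply restate Eqs. (8) and (9) with z and K as defined in the text, and the function names are our own.

def conductance_meanfield(u, z=4):
    """Bond-percolation mean-field conductance of Eq. (8), normalized to G = 1 at u = 1."""
    return max(1.0 - z * (1.0 - u) / (z - 2.0), 0.0)

def damage_from_conductance(g0, g, z=4):
    """Invert the low-damage relation (9): u(0) - u(t) is approximately [G(0) - G(t)]/K."""
    return (g0 - g) * (z - 2.0) / z          # K = z/(z - 2)

print(conductance_meanfield(0.75))           # 0.5 on the square lattice (z = 4)
print(damage_from_conductance(1.0, 0.5))     # 0.25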
In order to further explore the self-healing process, we carried out Monte Carlo
simulations on square lattices of varying sizes, with periodic boundary conditions. All the
bonds in the lattice were initially present, and the healing cells were a small fraction of
the lattice (square) unit cells, distributed uniformly over the lattice, with the built in
constraint that they do not touch each other (including no corner contact). Our
simulations reported here were carried out with this fraction g(0) = 0.15, i.e., the probability that a cell was designated glue-carrying was 15%.
At times t > 0, bonds were randomly broken with the probability (rate per unit time) p for ordinary bonds, and P (> p) for bonds of healing cells (those with “glue”).
If at least two bonds are broken in a healing cell, the glue leaks out and restores broken
bonds. Here we assumed local healing, with the glue only spreading to the 8 neighboring
cells before solidifying, thus restoring all the 24 bonds of the 3 × 3 square group of cells
that includes the healing cell as its center. Furthermore, once the glue leaks out, the
healing cell becomes inactive, but its bonds still have the larger probability, P , to be re-
broken.
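A minimal Python sketch of these lattice rules is given below. The data layout (two bond arrays per cell), the greedy placement of non-touching healing cells, and the per-step random-number handling are our own simplifications; the sketch is only meant to make the update rules concrete, not to reproduce the production simulations.

import random

def make_lattice(N, healing_fraction=0.15, seed=0):
    rng = random.Random(seed)
    h = [[True] * N for _ in range(N)]    # h[i][j]: bond on the bottom edge of cell (i, j)
    v = [[True] * N for _ in range(N)]    # v[i][j]: bond on the left edge of cell (i, j)
    healing = set()                       # healing cells; greedy placement, no touching
    for i in range(N):
        for j in range(N):
            if rng.random() < healing_fraction and not any(
                    ((i + di) % N, (j + dj) % N) in healing
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)):
                healing.add((i, j))
    return h, v, healing, set(healing), rng   # the copy tracks still-active cells

def cell_bonds(i, j, N):
    """The four bonds bounding cell (i, j), as (array name, row, column) triples."""
    return [('h', i, j), ('h', (i + 1) % N, j), ('v', i, j), ('v', i, (j + 1) % N)]

def step(h, v, healing, active, rng, N, p=0.003, P=0.008):
    arrays = {'h': h, 'v': v}
    fragile = {b for cell in healing for b in cell_bonds(*cell, N)}
    # 1. Random bond breaking: rate P for healing-cell bonds, p otherwise.
    for name, arr in arrays.items():
        for i in range(N):
            for j in range(N):
                if arr[i][j] and rng.random() < (P if (name, i, j) in fragile else p):
                    arr[i][j] = False
    # 2. Glue release: an active healing cell with >= 2 broken bonds restores all
    #    24 bonds of the 3x3 block of cells centred on it, then becomes inactive.
    for (i, j) in list(active):
        broken = sum(1 for (name, a, b) in cell_bonds(i, j, N) if not arrays[name][a][b])
        if broken >= 2:
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    for (name, a, b) in cell_bonds((i + di) % N, (j + dj) % N, N):
                        arrays[name][a][b] = True
            active.discard((i, j))

# Example: evolve a 32 x 32 lattice for 200 time steps and report n(t).
h, v, healing, active, rng = make_lattice(32)
for _ in range(200):
    step(h, v, healing, active, rng, 32)
print(round((sum(map(sum, h)) + sum(map(sum, v))) / (2 * 32 * 32), 3))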
We further assumed that all the original or healed bonds have the same, maximal
conductance, whereas all the broken bonds do not conduct at all. Since the periodic
boundary conditions induce a torus geometry, the conductance of a system of N × N square cells was calculated between two parallel lines, each N lattice bonds long (which were really circles due to periodicity), at the distance N/2 from each other, by using a
standard algorithm [29]. Note that these two lines are connected by two equal-size system
halves (we took N even for simplicity), and the conductivities via these two pathways
were included in the overall calculation. Our typical results are illustrated in Fig. 2, where
we plot the number (fraction) of unbroken bonds, n(t), with initially n(0) = 1, as well as the normalized conductance, G(t).
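For completeness, one common way to implement such a conductance calculation is to solve the Kirchhoff equations of the bond network directly; the Python sketch below (using numpy, our own choice) treats the two measurement lines as equipotential electrodes and is only a schematic stand-in for the algorithm of [29], assuming unit conductance per unbroken bond.

import numpy as np

def network_conductance(h, v):
    """Effective conductance between the site rows i = 0 and i = N//2 of the periodic
    lattice, each treated as one equipotential line.  Site (i, j) is a lattice vertex;
    h[i][j] connects (i, j)-(i, (j+1)%N) and v[i][j] connects (i, j)-((i+1)%N, j)."""
    N = len(h)
    site = lambda i, j: (i % N) * N + (j % N)
    bonds = []
    for i in range(N):
        for j in range(N):
            if h[i][j]:
                bonds.append((site(i, j), site(i, j + 1)))
            if v[i][j]:
                bonds.append((site(i, j), site(i + 1, j)))
    top = {site(0, j) for j in range(N)}           # electrode held at potential 1
    bottom = {site(N // 2, j) for j in range(N)}   # electrode held at potential 0
    free = [s for s in range(N * N) if s not in top and s not in bottom]
    pos = {s: k for k, s in enumerate(free)}
    A = np.zeros((len(free), len(free)))
    rhs = np.zeros(len(free))
    for a, b in bonds:                 # discrete Kirchhoff/Laplace equations
        for x, y in ((a, b), (b, a)):
            if x in pos:
                A[pos[x], pos[x]] += 1.0
                if y in pos:
                    A[pos[x], pos[y]] -= 1.0
                elif y in top:
                    rhs[pos[x]] += 1.0
    phi = np.linalg.lstsq(A, rhs, rcond=None)[0] if free else np.zeros(0)
    volt = {s: 1.0 for s in top}
    volt.update({s: 0.0 for s in bottom})
    volt.update({s: phi[pos[s]] for s in free})
    current = 0.0                      # current leaving the top electrode = conductance
    for a, b in bonds:
        if a in top and b not in top:
            current += volt[a] - volt[b]
        elif b in top and a not in top:
            current += volt[b] - volt[a]
    return current

# Normalized conductance for the state (h, v) from the previous sketch:
# G_t = network_conductance(h, v) / network_conductance([[True] * 32 for _ in range(32)],
#                                                       [[True] * 32 for _ in range(32)])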
Figure 2: The solid curves illustrate the fraction of the unbroken bonds, n (top curve), and the normalized conductance, G (bottom curve), for the following choice of the parameters: N = 32, p = 0.003, P = 0.008, and the initial fraction of the healing cells 15%. The dashed curves illustrate
similar results with the same parameters but with no healing cells. The
data were averaged over 40 Monte Carlo runs for the case with self-
healing, and over 20 runs for the case without self-healing.
________________________________________________________________________
The two lower curves in Fig. 2, showing the conductance with and without self-
healing, do not vanish at finite times due to finite-size effects [30]. In fact, without self-
healing the simulation of the conductance is just the ordinary numerical evaluation for
square-lattice uncorrelated bond percolation. As the lattice size increases, the finite-lattice
conductance, as well as other percolation properties, develop critical-point behavior at the
percolation transition that occurs, for this particular morphology, when the fraction of the
broken bonds reaches 0.5. Thus, the conductance, G(t), shows significant lattice-size dependence even without self-healing, as illustrated in Fig. 3, whereas the fraction of the unbroken bonds, n(t) = e^{−pt}, not shown in the figure, has no size dependence.
Figure 3: Size dependence of the conductance without self-healing, with otherwise the same parameters as in Fig. 2. From top to bottom, the results shown correspond to lattice sizes N = 8, 16, 32. The data were averaged over 500, 100 and 20 Monte Carlo runs, respectively. Note that the percolation transition occurs at t = (ln 2)/p ≈ 231, from which time value on, the N → ∞ limiting value of the conductance is zero.
________________________________________________________________________
Results with self-healing, for the size dependence of the conductance, are shown
in Fig. 4. We point out that the fraction of the unbroken bonds also has some variation
with N in this case. However, the differences in n-values are too small to be displayed in the figure. (The size dependence of n(t;N) might become quite pronounced and
interesting when the self-healing process is non-local, as discussed in [10].) For most
practical purposes the self-healing process will be of interest as long as the material
fatigue is small, i.e., in the regime of the initial plateau that develops in properties such as
n(t), or u(t) in the continuum model. Therefore, we did not attempt to study in detail the
percolation transition for the conductance, which in this case should be a variant of some
sort of a correlated bond percolation, though the universality class is likely not changed.
Figure 4: Size dependence of the conductance (the solid curves) with self-healing, with the same parameters as in Figs. 2 and 3. From top to bottom, the solid curves correspond to lattice sizes N = 8, 16, 32. The data were averaged over 2000, 400 and 40 Monte Carlo runs, respectively. The dashed curve shows the fraction of the healthy bonds, the size-dependence of which leads to variations too small to be shown on the scale of the vertical axis in this plot (the curve shown is for N = 32).
________________________________________________________________________
Let us now discuss the extent to which the continuum model can fit the results for
the lattice model. Now, without self-healing, the mean-field approximation can provide
rather accurate results for the conductance, except perhaps right near the percolation
transition [31], as illustrated in Fig. 5. With self-healing, the situation is less consistent.
The numerical lattice-model result (for our largest N = 32) is compared to the continuum model expression with varying A in Fig. 6.
While, especially for larger values of A , the continuum model curves show all
the features of the self-healing conductance, including the initial drop followed by
“shoulder,” the overall agreement is at best only qualitative. Thus, using A as the
adjustable parameter, one cannot achieve a quantitatively accurate fit of the lattice-model
data. We note that the continuum model considered, should be viewed as “minimal” in
that it represents the simplest possible set of assumptions that yield the self-healing
behavior and also offer exact solvability. Specifically, the continuum model assumes that
the initial fraction of the glue-carrying capsules is very small, and the finite healing
efficiency is achieved by each cell healing a large volume, see the discussion in
connection with Eq. (6). On the other hand, to have a full-featured self-healing behavior,
in the lattice case with short-range healing, we had to take the initial fraction of the
healing cells at least of order 10% (we took 15% in our simulations). Thus, for better-
quality fit the continuum model will have to be modified, and will be more complicated,
involving more than one quantity (now we only consider u(t), for which we obtain a
closed equation) and likely nonlinear equations. We plan to consider this in our future
work.
________________________________________________________________________
Figure 5: The dashed curve shows the mean-field approximation for the
conductance without self-healing, calculated according to Eq. (8), with
p = 0.003. The solid curve is the N = 32 lattice-model result, as in Fig. 3.
Figure 6: The dashed curves show the conductance calculated according to the continuum model, for p = 0.003 and P = 0.008, with, from bottom to top, A = 0.01, 0.02, 0.03, 0.04, 0.05. The solid curve is the N = 32 lattice-model result with self-healing, as in Fig. 4.
________________________________________________________________________
In summary, we explored the conductance of self-healing materials, with several
assumptions that include short-range healing, conductivity being directly proportional to
the local material “health,” and the use of simple effective-field continuum model, as
well as two-dimensional square lattice numerical simulations. While our assumptions
may have to be modified for different, more realistic situations, our results illustrate the
general features of the self-healing process. Specifically, the onset of the material fatigue
is delayed, by developing a plateau-like time-dependence of the material quality at initial
times. In this regime, the changes in the conductance, and likely in most other
transport/response properties of the material that can be experimentally probed, measure
the material quality degradation proportionately, whereas for larger damage at later times,
transport properties may undergo dramatic changes, such as the vanishing of the
conductance in our case, and they might not be good measures of the material integrity.
We wish to thank Dr. D. Robb for bringing reference [19] to our attention and for
helpful discussions, and we acknowledge support of this research by the US-ARO under
grant W911NF-05-1-0339 and by the US-NSF under grant DMR-0509104.
References
1. S. R. White, N. R. Sottos, P. H. Geubelle, J. S. Moore, M. R. Kessler,
S. R. Sriram, E. N. Brown and S. Viswanathan, Nature 409, 794 (2001).
2. C. Dry, Composite Structures 35, 263 (1996).
3. B. Lawn, Fracture of Brittle Solids (Cambridge University Press,
Cambridge, 1993), Chapter 7.
4. C. M. Dry and N. R. Sottos, Proc. SPIE 1916, 438 (1996).
5. E. N. Brown, N. R. Sottos and S. R. White, Exper. Mech. 42, 372 (2002).
6. Y. Kievsky and I. Sokolov, IEEE Trans. Nanotech. 4, 490 (2005).
7. E. N. Brown, S. R. White and N. R. Sottos, J. Mater. Sci. 39, 1703 (2004).
8. M. Zako and N. Takano, J. Intel. Mater. Syst. Struct. 10, 836 (1999).
9. J. W. C. Pang and I. P. Bond, Compos. Sci. Tech. 65, 1791 (2005).
10. V. Privman, A. Dementsov and I. Sokolov, J. Comput. Theor. Nanosci. 4,
190 (2007).
11. I. Sokolov, private communication.
12. S. R. White, P. H. Geubelle and N. R. Sottos, Multiscale Modeling and
Experiments for Design of Self-Healing Structural Composite Materials, US Air
Force research report AFRL-SR-AR-TR-06-0055 (2006).
13. J. Y. Lee, G. A. Buxton and A. C. Balazs, J. Chem. Phys. 121, 5531 (2004).
14. S. Hao, W. K. Liu, P. A. Klein and A. J. Rosakis, Int. J. Solids Struct. 41,
1773 (2004).
15. H. J. Herrmann, A. Hansen and S. Roux, Phys. Rev. B 39, 637 (1989).
16. M. Sahimi and S. Arbabi, Phys. Rev. B 47, 713 (1993).
17. J. Rottler, S. Barsky and M. O. Robbins, Phys. Rev. Lett. 89, 148304 (2002).
18. G. W. Milton, The Theory of Composites (Cambridge University Press,
Cambridge, 2001), Chapter 10.
19. E. T. Thostenson and T.-W. Chou, Adv. Mater. 18, 2837 (2006).
20. I. Sevostianov, Int. J. Eng. Sci. 44, 513 (2006).
21. I. Sevostianov and M. Kachanov, Connections Between Elastic and Conductive
Properties of Heterogeneous Materials, preprint (2007).
22. M. Navarrete, M. Villagrán-Muniz, L. Ponce and T. Flores, Opt. Lasers
Eng. 40, 5 (2003).
23. A. S. Chekanov, M. H. Hong, T. S. Low and Y. F. Lu, IEEE Trans. Magn. 33,
2863 (1997).
24. K. Schulte and C. Baron, Compos. Sci. Tech. 36, 63 (1989).
25. I. Weber and P. Schwartz, Compos. Sci. Tech. 61, 849 (2001).
26. M. Kupke, K. Schulte and R. Schüler, Compos. Sci. Tech. 61, 837 (2001).
27. R. Schueler, S. P. Joshi and K. Schulte, Compos. Sci. Tech. 61, 921 (2001).
28. S. Kirkpatrick, Rev. Mod. Phys. 45, 574 (1973).
29. H. A. Knudsen and S. Fazekas, J. Comput. Phys. 211, 700 (2006).
30. V. Privman, Finite Size Scaling and Numerical Simulation of Statistical Systems
(World Scientific, Singapore, 1990).
31. H. E. Stanley, Introduction to Phase Transitions and Critical Phenomena (Oxford
University Press, Oxford, 1993).
|
0704.0937 | Invariants of Triangular Lie Algebras | Invariants of Triangular Lie Algebras
Vyacheslav Boyko †, Jiri Patera ‡ and Roman Popovych †§
† Institute of Mathematics of NAS of Ukraine, 3 Tereshchenkivs’ka Str., Kyiv-4, 01601 Ukraine
E-mail: [email protected], [email protected]
‡ Centre de Recherches Mathématiques, Université de Montréal,
C.P. 6128 succursale Centre-ville, Montréal (Québec), H3C 3J7 Canada
E-mail: [email protected]
§ Fakultät für Mathematik, Universität Wien, Nordbergstraße 15, A-1090 Wien, Austria
Abstract
Triangular Lie algebras are the Lie algebras which can be faithfully represented by triangular
matrices of any finite size over the real/complex number field. In the paper invariants (‘gen-
eralized Casimir operators’) are found for three classes of Lie algebras, namely those which
are either strictly or non-strictly triangular, and for so-called special upper triangular Lie al-
gebras. The algebraic algorithm of [J. Phys. A: Math. Gen., 2006, V.39, 5749; math-ph/0602046], developed further in [J. Phys. A: Math. Theor., 2007, V.40, 113; math-ph/0606045], is used to determine the invariants. A conjecture of [J. Phys. A: Math. Gen., 2001, V.34, 9085], concerning
the number of independent invariants and their form, is corroborated.
1 Introduction
The invariants of Lie algebras are one of their defining characteristics. They have numerous appli-
cations in different fields of mathematics and physics, in which Lie algebras arise (representation
theory, integrability of Hamiltonian differential equations, quantum numbers etc). In particular,
the polynomial invariants of a Lie algebra exhaust its set of Casimir operators, i.e., the center of
its universal enveloping algebra. That is why non-polynomial invariants are also called general-
ized Casimir operators, and the usual Casimir operators are seen as ‘trivial’ generalized Casimir
operators. Since the structure of invariants strongly depends on the structure of the algebra and
the classification of all (finite-dimensional) Lie algebras is an inherently difficult problem (actually
unsolvable), it seems to be impossible to elaborate a complete theory for generalized Casimir op-
erators in the general case. Moreover, if the classification of a class of Lie algebras is known, then
the invariants of such algebras can be described exhaustively. These problems have already been
solved for the semi-simple and low-dimensional Lie algebras, and also for the physically relevant
Lie algebras of fixed dimensions (see, e.g., references in [3, 7, 8, 18, 19]).
The actual problem is the investigation of generalized Casimir operators for classes of solvable Lie
algebras or non-solvable Lie algebras with non-trivial radicals of arbitrary finite dimension. There
are a number of papers on the partial classification of such algebras and the subsequent calculation
of their invariants [1, 6, 7, 14, 15, 16, 20, 21, 22, 23]. In particular, Tremblay and Winternitz [22]
classified all the solvable Lie algebras with the nilradicals isomorphic to the nilpotent algebra t0(n)
of strictly upper triangular matrices for any fixed dimension n. Then in [23] invariants of these
algebras were considered. The case n = 4 was investigated exhaustively. After calculating the
invariants for a sufficiently large value of n, Tremblay and Winternitz made conjectures for an
arbitrary n on the number and form of functionally independent invariants of the algebra t0(n),
and the ‘diagonal’ solvable Lie algebras having t0(n) as their nilradicals and possessing either the
maximal (equal to n − 1) or minimal (one) number of nilindependent elements. A statement on a
functional basis of invariants was only proved completely for the algebra t0(n). The infinitesimal
invariant criterion was used for the construction of the invariants. Such an approach entails the
necessity of solving a system of ρ first-order linear partial differential equations, where ρ has the
order of the algebra’s dimension. This is why the calculations were very cumbersome and results
were obtained due to the thorough mastery of the method.
In this paper, we use our original algebraic method for the construction of the invariants (‘gen-
eralized Casimir operators’) of Lie algebras via the moving frames approach [3, 4]. The algorithm
makes use of the knowledge of the associated inner automorphism groups and Cartan’s method
of moving frames in its Fels–Olver version [9, 10]. (For modern developments about the moving
frame method and more references, see also [17].) Unlike standard infinitesimal methods, it allows
us to avoid solving systems of differential equations, replacing them instead by algebraic equations.
As a result, the application of the algorithm is simpler. Note that a closed approach was earlier
proposed in [12, 13, 19] for the specific case of inhomogeneous algebras.
The invariants of three classes of triangular Lie algebras are exhaustively investigated (below n
is an arbitrary integer):
• nilpotent Lie algebras t0(n) of n × n strictly upper triangular matrices (Section 3);
• solvable Lie algebras t(n) of n × n upper triangular matrices (Section 4);
• solvable Lie algebras st(n) of n × n special upper triangular matrices (Section 5).
The triangular algebras are especially interesting due to their ‘universality’ properties. More pre-
cisely, any finite-dimensional nilpotent Lie algebra is isomorphic to a subalgebra of t0(n). Similarly,
any finite-dimensional solvable Lie algebra over an algebraically closed field of characteristic 0 (e.g.,
over C) can be embedded as a subalgebra in t(n) (or st(n)).
We have adapted and optimized our algorithm for the specific case of triangular Lie algebras via
special double enumeration of basis elements, individual choice of coordinates in the corresponding
inner automorphism groups and an appropriate modification of the normalization procedure of
the moving frame method. As a result, the problems related to the construction of functional
bases of invariants are reduced for the algebras t0(n) and t(n) to solving linear systems of algebraic
equations! Let us note that due to the natural embedding of st(n) to t(n) and the representation
t(n) = st(n) ⊕ Z(t(n)), where Z(t(n)) is the center of t(n), we can construct a basis in the set
of invariants of st(n) without the usual calculations from a previously found basis in the set of
invariants of t(n).
We re-prove the statement for a basis of invariants of t0(n), which was first constructed in [23]
using the infinitesimal method in a heuristic way, thereafter constructed in [4] using an empiric
technique based on the exclusion of parameters within the framework of the algebraic method. The
aim of this paper in considering t0(n) is to test and better understand the technique of working
with triangular algebras. The calculations for t(n) are similar, albeit more complex, although they
are much clearer and easier than under the standard infinitesimal approach.
As proved in [22], there is a unique algebra with the nilradical t0(n) that contains a maximum
possible number (n− 1) of nilindependent elements. A conjecture on the invariants of this algebra
is formulated in Proposition 1 of [23]. We show that this algebra is isomorphic to st(n). As a result,
the conjecture by Tremblay and Winternitz on its invariants is effectively proved.
2 The algorithm
The applied algebraic algorithm was first proposed in [3] and then developed in [4]. Ibid it was
effectively tested for the low-dimensional Lie algebras and a wide range of solvable Lie algebras
with a fixed structure of nilradicals. The presentation of the algorithm here differs from [3, 4], the
differences being important within the framework of applications.
For convenience of the reader and to introduce some necessary notations, before the description
of the algorithm, we briefly repeat the preliminaries given in [3, 4] about the statement of the
problem of calculating Lie algebra invariants, and on the implementation of the moving frame
method [9, 10]. The comparative analysis of the standard infinitesimal and the presented algebraic
methods, as well as their modifications, is given in the second part of this section.
Consider a Lie algebra g of dimension dim g = n < ∞ over the complex or real field and the
corresponding connected Lie group G. Let g∗ be the dual space of the vector space g. The map
Ad∗ : G → GL(g∗), defined for any g ∈ G by the relation
〈Ad∗gx, u〉 = 〈x,Adg−1u〉 for all x ∈ g∗ and u ∈ g
is called the coadjoint representation of the Lie group G. Here Ad: G → GL(g) is the usual adjoint
representation of G in g, and the image AdG of G under Ad is the inner automorphism group Int(g)
of the Lie algebra g. The image of G under Ad∗ is a subgroup of GL(g∗) and is denoted by Ad∗G.
A function F ∈ C∞(g∗) is called an invariant of Ad∗G if F (Ad∗gx) = F (x) for all g ∈ G and x ∈ g∗. The set of invariants of Ad∗G is denoted by Inv(Ad∗G). The maximal number Ng of functionally independent invariants in Inv(Ad∗G) coincides with the codimension of the regular orbits of Ad∗G, i.e., it is given by the difference

Ng = dim g − rankAd∗G.

Here rankAd∗G denotes the dimension of the regular orbits of Ad∗G and will be called the rank of the
coadjoint representation of G (and of g). It is a basis independent characteristic of the algebra g,
the same as dim g and Ng.
To calculate the invariants explicitly, one should fix a basis E = {e1, . . . , en} of the algebra g.
It leads to fixing the dual basis E∗ = {e∗1, . . . , e∗n} in the dual space g∗ and to the identification of Int(g) and Ad∗G with the associated matrix groups. The basis elements e1, . . . , en satisfy the commutation relations [ei, ej ] = ∑_{k=1}^n c^k_{ij} ek, i, j = 1, . . . , n, where c^k_{ij} are components of the tensor of structure constants of g in the basis E.
Let x → x̌ = (x1, . . . , xn) be the coordinates in g∗ associated with E∗. Given any invariant F (x1, . . . , xn) of Ad∗G, one finds the corresponding invariant of the Lie algebra g by symmetriza-
tion, SymF (e1, . . . , en), of F . It is often called a generalized Casimir operator of g. If F is
a polynomial, SymF (e1, . . . , en) is a usual Casimir operator, i.e., an element of the center of the
universal enveloping algebra of g. More precisely, the symmetrization operator Sym acts only on
the monomials of the forms ei1 · · · eir , where there are non-commuting elements among ei1 , . . . , eir ,
and is defined by the formula
Sym(ei1 · · · eir) = (1/r!) ∑_{σ∈Sr} eiσ1 · · · eiσr ,
where i1, . . . , ir take values from 1 to n, r > 2. The symbol Sr denotes the permutation group
consisting of r elements. The set of invariants of g is denoted by Inv(g).
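As a side illustration, the symmetrization map is straightforward to realize on non-commuting symbols; the small Python/SymPy snippet below is our own example and is not part of the algorithm described in this paper.

import itertools
from math import factorial
import sympy as sp

def sym(*factors):
    """Average the product of (possibly non-commuting) factors over all orderings."""
    terms = [sp.Mul(*perm) for perm in itertools.permutations(factors)]
    return sp.Add(*terms) / factorial(len(factors))

e1, e2, e3 = sp.symbols('e1 e2 e3', commutative=False)
print(sym(e1, e2))       # (e1*e2 + e2*e1)/2
print(sym(e1, e2, e3))   # the fully symmetrized cubic monomial (6 terms)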
A set of functionally independent invariants F l(x1, . . . , xn), l = 1, . . . , Ng, forms a functional
basis (fundamental invariant) of Inv(Ad∗G), i.e., any invariant F (x1, . . . , xn) can be uniquely rep-
resented as a function of F l(x1, . . . , xn), l = 1, . . . , Ng. Accordingly the set of SymF l(e1, . . . , en), l = 1, . . . , Ng, is called a basis of Inv(g).
Our task here is to determine the basis of the functionally independent invariants for Ad∗G, and
then to transform these invariants into the invariants of the algebra g. Any other invariant of g is
a function of the independent ones.
Let us recall some facts from [9, 10] and adapt them to the particular case of the coadjoint
action of G on g∗. Let G = Ad∗G × g∗ denote the trivial left principal Ad∗G-bundle over g∗. The right regularization R̂ of the coadjoint action of G on g∗ is the diagonal action of Ad∗G on G = Ad∗G × g∗. It is provided by the map R̂g(Ad∗h, x) = (Ad∗h · Ad∗g−1, Ad∗gx), g, h ∈ G, x ∈ g∗, where the action on the bundle G = Ad∗G × g∗ is regular and free. We call R̂g the lifted coadjoint action of G. It projects back to the coadjoint action on g∗ via the Ad∗G-equivariant projection πg∗ : G → g∗.
Any lifted invariant of Ad∗G is a (locally defined) smooth function from G to a manifold, which
is invariant with respect to the lifted coadjoint action of G. The function I : G → g∗ given by
I = I(Ad∗g, x) = Ad∗gx is the fundamental lifted invariant of Ad∗G, i.e., I is a lifted invariant, and
any lifted invariant can be locally written as a function of I. Using an arbitrary function F (x)
on g∗, we can produce the lifted invariant F ◦ I of Ad∗G by replacing x with I = Ad∗gx in the
expression for F . Ordinary invariants are particular cases of lifted invariants, where one identifies
any invariant formed as its composition with the standard projection πg∗. Therefore, ordinary
invariants are particular functional combinations of lifted ones that happen to be independent of
the group parameters of Ad∗G.
The algebraic algorithm for finding invariants of the Lie algebra g is briefly formulated in the
following four steps.
1. Construction of the generic matrix B(θ) of Ad∗G. B(θ) is the matrix of an inner automorphism
of the Lie algebra g in the given basis e1, . . . , en, θ = (θ1, . . . , θr) is a complete tuple of group
parameters (coordinates) of Int(g), and r = dimAd∗G = dim Int(g) = n − dimZ(g), where Z(g) is
the center of g.
2. Representation of the fundamental lifted invariant. The explicit form of the fundamental lifted invariant I = (I1, . . . , In) of Ad∗_G in the chosen coordinates (θ, x̌) of Ad∗_G × g∗ is I = x̌ · B(θ), i.e., (I1, . . . , In) = (x1, . . . , xn) · B(θ1, . . . , θr).
3. Elimination of parameters by normalization. We choose the maximum possible number ρ of
lifted invariants Ij1 , . . . , Ijρ, constants c1, . . . , cρ and group parameters θk1 , . . . , θkρ such that
the equations Ij1 = c1, . . . , Ijρ = cρ are solvable with respect to θk1 , . . . , θkρ . After substituting
the found values of θk1 , . . . , θkρ into the other lifted invariants, we obtain Ng = n− ρ expressions
F l(x1, . . . , xn) without θ’s.
4. Symmetrization. The functions F^l(x1, . . . , xn) necessarily form a basis of Inv(Ad∗_G). They are symmetrized to Sym F^l(e1, . . . , en). It is the desired basis of Inv(g). (A small worked example is sketched below.)
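As an illustration of the four steps, the following minimal sketch (our own toy example, not one treated in the paper) applies them to the three-dimensional Heisenberg algebra with [e1, e2] = e3, using sympy. The choice of basis, the truncation of the matrix exponential and all variable names are assumptions made for this example only.

import sympy as sp

x1, x2, x3, t1, t2 = sp.symbols('x1 x2 x3 theta1 theta2')

# Step 1: generic inner automorphism matrix B(theta) = exp(theta1*ad_{e1} + theta2*ad_{e2}).
ad1 = sp.Matrix(3, 3, lambda i, j: 1 if (i, j) == (2, 1) else 0)   # ad_{e1}: e2 -> e3
ad2 = sp.Matrix(3, 3, lambda i, j: -1 if (i, j) == (2, 0) else 0)  # ad_{e2}: e1 -> -e3
N = t1 * ad1 + t2 * ad2
B = sp.eye(3) + N + (N * N) / 2        # the series terminates because N is nilpotent

# Step 2: fundamental lifted invariant I = x * B(theta).
xvec = sp.Matrix([[x1, x2, x3]])
I = list(xvec * B)                     # [x1 - theta2*x3, x2 + theta1*x3, x3]

# Step 3: normalization I1 = I2 = 0, solved for theta1, theta2 (valid where x3 != 0).
sol = sp.solve([sp.Eq(I[0], 0), sp.Eq(I[1], 0)], [t1, t2], dict=True)[0]

# Step 4: the remaining lifted invariant, now free of theta's.
print(sp.simplify(I[2].subs(sol)))     # prints x3

The printed expression x3 corresponds, after (trivial) symmetrization, to the Casimir operator e3, and Ng = 3 − ρ = 1, as expected for this algebra.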
Let us give some remarks on the steps of the algorithm, mainly paying attention to the special
features of its variation in this paper, and where it differs from the conventional infinitesimal
method.
Usually, the second canonical coordinate on Int(g) is enough for the first step, although some-
times, the first canonical coordinate on Int(g) is the more appropriate choice. In both the cases, the
matrix B(θ) is calculated by exponentiation from matrices associated with the structure constants.
Often the parameters θ are additionally transformed in a trivial manner (signs, renumbering, redenotation, etc.) to simplify the final presentation of B(θ). It is also sometimes convenient for us to introduce ‘virtual’ group parameters corresponding to the central basis elements. Efficient exploitation of the algorithm imposes certain constraints on the choice of bases for g, in particular on the enumeration of their elements; an appropriate choice automatically yields simpler expressions for the elements of B(θ) and, therefore, for the lifted invariants. In some cases the simplification is considerable.
In contrast with the general situation, for the triangular Lie algebras we use special coordinates
for their inner automorphism groups, which naturally harmonize with the canonical matrix rep-
resentations of the corresponding Lie groups and with special ‘matrix’ enumeration of the basis
elements. The application of the individual approach results in the clarification and a substantial
reduction of all calculations. In particular, algebraic systems solved under normalization become
linear with respect to their parameters.
Since B(θ) is a general form matrix from Int(g), it should not be adapted in any way for the
second step.
Indeed, the third step of the algorithm can involve different techniques of elimination of parameters which are also based on using an explicit form of lifted invariants [3, 4]. The applied normalization procedure [9, 10] can also be subject to some variations and can be applied in a more involved manner.
As a rule, in complicated cases the main difficulty is created by the determination of the number ρ, which is actually equal to rank Ad∗_G; this is equivalent to finding the maximum number Ng of functionally independent invariants in Inv(Ad∗_G), since Ng = dim g − rank Ad∗_G. The rank ρ of the coadjoint representation Ad∗_G can be calculated in different ways, e.g., by the closed formulas
ρ = max_{x̌∈R^n} rank ( Σ_{k=1}^{n} c^k_{ij} x_k )_{i,j=1}^{n},    ρ = max_{x̌∈R^n} rank ∂I/∂θ,
or with the use of indirect argumentation. The first formula is native to the infinitesimal approach
to invariants (see, e.g., [5, 16, 18, 23] and other references) since it gives the number of algebraically
independent differential equations in the linear system of first-order partial differential equations Σ_{j,k=1}^{n} c^k_{ij} x_k F_{x_j} = 0, i = 1, . . . , n, which arises under this approach and is the infinitesimal criterion for invariants
of the algebra g under the fixed basis E . The second formula shows that rankAd∗G coincides
with the maximum dimension of a nonsingular submatrix in the Jacobian matrix ∂I/∂θ. The
tuples of lifted invariants and parameters associated with this submatrix are appropriate for the
normalization procedure, where the constants c1, . . . , cρ are chosen to lie in the range of values of
the corresponding lifted invariants.
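For instance, the first closed formula can be evaluated numerically. The sketch below is our own illustration: it generates the structure constants of t0(4) from the commutation relations used later in Section 3 and estimates ρ as the generic rank of the matrix (Σ_k c^k_{ij} x_k).

import numpy as np

# basis of t0(4): e_{ij}, i<j, with [e_ij, e_kl] = delta_{jk} e_il - delta_{il} e_kj
n = 4
basis = [(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)]
dim = len(basis)
index = {b: a for a, b in enumerate(basis)}

C = np.zeros((dim, dim, dim))           # structure constants c^k_{ab}
for a, (i, j) in enumerate(basis):
    for b, (k, l) in enumerate(basis):
        if j == k:
            C[index[(i, l)], a, b] += 1
        if i == l:
            C[index[(k, j)], a, b] -= 1

rng = np.random.default_rng(2)
x = rng.standard_normal(dim)            # generic point of the dual space
A = np.einsum('kab,k->ab', C, x)        # matrix (sum_k c^k_{ab} x_k)
rho = np.linalg.matrix_rank(A)
print(dim - rho)                        # number of independent invariants; expected 2 = [4/2]

Since dim t0(4) = 6 and the generic rank is 4, the computation is consistent with N_{t0(4)} = [4/2] = 2.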
If ρ is known then the sufficient number (Ng = dim g − ρ) of functionally independent invari-
ants can be found with various ‘empiric’ techniques in the frameworks of both the infinitesimal
and algebraic approaches. For example, expressions of candidates for invariants can be deduced
from invariants of similar low-dimensional Lie algebras and then tested via substitution to the
infinitesimal criterion for invariants. It is the method used in [23] to describe invariants of the Lie
algebra t0(n) of strictly upper triangular n × n matrices for any fixed n > 2. In the framework of
the algebraic approach, invariants can be constructed via the combination of lifted invariants in
expressions not depending on the group parameters [9, 10]. This method was applied, in particular,
to low-dimensional algebras and the algebra t0(n) [3, 4]. Other empiric techniques, e.g., based on
commutator properties [2] also can be used.
At the same time, a basis of Inv(Ad∗G) may be constructed without first determining the number
of basis elements. Since under such consideration the infinitesimal approach leads to the necessity
of the complete integration of the partial differential equations from the infinitesimal invariant
criterion, the domain of its applicability seems quite narrow (low-dimensional algebras and Lie
algebra of special simple structure). A similar variation of the algebraic method is based on the
following obvious statement.
Proposition 1. Let I = (I1, . . . , In) be a fundamental lifted invariant. Suppose that, for the lifted invariants I_{j1}, . . . , I_{jρ} and some constants c1, . . . , cρ, the system I_{j1} = c1, . . . , I_{jρ} = cρ is solvable with respect to the parameters θ_{k1}, . . . , θ_{kρ}, and that substitution of the found values of θ_{k1}, . . . , θ_{kρ} into the other lifted invariants results in m = n − ρ expressions Î_l, l = 1, . . . , m, depending only on x’s. Then ρ = rank Ad∗_G, m = Ng, and Î_1, . . . , Î_m form a basis of Inv(Ad∗_G).
Our experience on the calculation of invariants of a wide range of Lie algebras shows that the
version of the algebraic method, which is based on Proposition 1, is most effective. It is the version
that is used in this paper.
Note that the normalization procedure is difficult to be made algorithmic. There is a big
ambiguity in the choice of the normalization equations. We can take different tuples of ρ lifted
invariants and ρ constants, which lead to systems solvable with respect to ρ parameters. Moreover,
lifted invariants can be additionally combined before forming a system of normalization equations
or substitution of found values of parameters. Another possibility is to use a floating system of
normalization equations (see Section 6.2 of [4]). This means that elements of an invariant basis are
constructed under different normalization constraints. The choice of an optimal method results in
a considerable reduction of calculations and a practical form of constructed invariants.
3 Nilpotent algebra of strictly upper triangular matrices
Consider the nilpotent Lie algebra t0(n) isomorphic to the one of the strictly upper triangular n×n
matrices over the field F, where F is either C or R. t0(n) has dimension n(n − 1)/2. It is the Lie
algebra of the Lie group T0(n) of upper unipotent n × n matrices, i.e., upper triangular matrices
with entries equal to 1 in the diagonal.
As mentioned above, the basis of Inv(t0(n)) was first constructed in a heuristic way in [23]
within the framework of the infinitesimal approach. This result was re-obtained in [4] with the use
of the pure algebraic algorithm first proposed in [3] and developed in [4]. Also, it is the unique
example included among the wide variety of solvable Lie algebras investigated in [4], in which the
‘empiric’ technique of excluding group parameters from lifted invariants was applied. Although
this technique was very effective in constructing a set of functionally independent invariants (calcu-
lations were reduced via a special representation of the coadjoint action to a trivial identity using
matrix determinants, see Note 2), the main difficulty was in proving that the set of invariants is a
basis of Inv(t0(n)), i.e. cardinality of the set equals the maximum possible number of functionally
independent invariants. Under the infinitesimal approach [23] the main difficulty was the same.
In this section we construct a basis of Inv(t0(n)) with the algebraic algorithm but exclude group
parameters from lifted invariants by the normalization procedure. In contrast with the previous
expositions (Section 3 of [23] and Section 8 of [4]), sufficiency of the number of found invariants
for forming a basis of Inv(t0(n)) is proved in the process of calculating them. Investigation of
Inv(t0(n)) in this way gives us a sense of the specific features of the normalization procedure in the
case of Lie algebras having nilradicals isomorphic (or close) to t0(n).
For the algebra t0(n) we use a ‘matrix’ enumeration of the basis elements with an ‘increasing’
pair of indices, in a similar way to the canonical basis {E^n_{ij}, i < j} of the isomorphic matrix algebra. Hereafter E^n_{ij} (for fixed values of i and j) denotes the n × n matrix (δ_{ii′}δ_{jj′}) with i′ and j′ running over the row and column numbers, respectively, i.e., the n × n matrix with a unit entry at the intersection of the i-th row and the j-th column and zeros elsewhere. E_n = diag(1, . . . , 1) is the n × n unity matrix. The indices i, j, k and l run at most from 1 to n. Only additional constraints on the indices are indicated.
Thus, the basis elements e_{ij} ∼ E^n_{ij}, i < j, satisfy the commutation relations
[eij , ei′j′] = δi′jeij′− δij′ei′j,
where δij is the Kronecker delta.
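As a quick sanity check, these commutation relations can be verified directly in the matrix realization e_{ij} ∼ E^n_{ij}. The short numpy snippet below is our own illustration, with n = 4 chosen arbitrarily.

import numpy as np

n = 4
def E(i, j):                      # n x n matrix unit, 1-based indices
    m = np.zeros((n, n)); m[i - 1, j - 1] = 1.0; return m

pairs = [(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)]
ok = True
for (i, j) in pairs:
    for (k, l) in pairs:
        lhs = E(i, j) @ E(k, l) - E(k, l) @ E(i, j)          # [e_ij, e_kl]
        rhs = (j == k) * E(i, l) - (i == l) * E(k, j)        # delta_{kj} e_il - delta_{il} e_kj
        ok &= np.allclose(lhs, rhs)
print(ok)                         # expected: True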
Let e∗_{ji}, x_{ji} and y_{ij} denote the basis element and the coordinate function in the dual space t∗_0(n) and the coordinate function in t_0(n), which correspond to the basis element e_{ij}, i < j. In particular, ⟨e∗_{j′i′}, e_{ij}⟩ = δ_{ii′}δ_{jj′}. The reverse order of subscripts of the objects associated with the dual space t∗_0(n) is justified by the simplification of the matrix representation of lifted invariants. We complete the sets of x_{ji} and y_{ij} to the matrices X and Y with zeros. Hence X is a strictly lower triangular matrix and Y is a strictly upper triangular one.
We reproduce Lemma 1 from [4] together with its proof, since it is important for further con-
sideration.
Lemma 1. A complete set of independent lifted invariants of Ad∗_{T_0(n)} is exhaustively given by the expressions

I_{ij} = x_{ij} + Σ_{i<i′} b_{ii′} x_{i′j} + Σ_{j′<j} b̂_{j′j} x_{ij′} + Σ_{i<i′, j′<j} b_{ii′} b̂_{j′j} x_{i′j′},   j < i,

where B = (b_{ij}) is an arbitrary matrix from T_0(n), and B^{−1} = (b̂_{ij}) is the inverse matrix of B.
Proof. The adjoint action of B ∈ T_0(n) on the matrix Y is Ad_B Y = BY B^{−1}, i.e.,

Ad_B ( Σ_{i<j} y_{ij} e_{ij} ) = Σ_{i<j} (BY B^{−1})_{ij} e_{ij} = Σ_{i≤i′<j′≤j} b_{ii′} y_{i′j′} b̂_{j′j} e_{ij}.

After changing e_{ij} → x_{ji}, y_{ij} → e∗_{ji}, b_{ij} ↔ b̂_{ij} in the latter equality, we obtain the representation of the coadjoint action of B:

Σ_{i≤i′<j′≤j} b_{j′j} x_{ji} b̂_{ii′} e∗_{j′i′} = Σ_{i′<j′} (BXB^{−1})_{j′i′} e∗_{j′i′}.

Therefore, the elements I_{ij}, j < i, of the matrix I = BXB^{−1}, B ∈ T_0(n), form a complete set of the independent lifted invariants of Ad∗_{T_0(n)}.
Note 1. The center of the group T_0(n) is Z(T_0(n)) = {E_n + b_{1n} E^n_{1n} | b_{1n} ∈ F}. The inner automorphism group of t_0(n) is isomorphic to the factor-group T_0(n)/Z(T_0(n)) and hence its dimension is n(n−1)/2 − 1. The parameter b_{1n} in the above representation of the lifted invariants is not essential.
Below A^{i1,i2}_{j1,j2}, where i1 ≤ i2 and j1 ≤ j2, denotes the submatrix (a_{ij})_{i=i1,...,i2, j=j1,...,j2} of a matrix A = (a_{ij}), i.e., the submatrix formed by rows i1, . . . , i2 and columns j1, . . . , j2. The conjugate value of k with respect to n is denoted by κ, i.e., κ = n − k + 1. The standard notation |A| = det A is used.
Theorem 1. A basis of Inv(Ad∗_{T_0(n)}) consists of the polynomials

|X^{κ,n}_{1,k}|,   k = 1, . . . , [n/2].
Proof. Under normalization we impose the following restriction on the lifted invariants I_{ij}, j < i:

I_{ij} = 0  if  j < i,  (i, j) ≠ (n − j′ + 1, j′),  j′ = 1, . . . , [n/2].

This means that the only below-diagonal elements of the lifted invariant matrix I whose values are not fixed are those situated on the secondary diagonal under the main diagonal; the other significant elements of I are given the value 0. As shown below, the chosen normalization is correct since it ensures that the conditions of Proposition 1 are satisfied.
In view of the (triangular) structure of the matrices B and X, the formula I = BXB^{−1} determining the lifted invariants implies that BX = IB. This matrix equality is significant for the matrix elements lying under the main diagonals of the left- and right-hand sides, i.e.,

x_{ij} + Σ_{i′>i} b_{ii′} x_{i′j} = I_{ij} + Σ_{j′<j} I_{ij′} b_{j′j},   j < i.
For convenience we divide the latter system under the chosen normalization conditions into four sets of subsystems

S^k_1 :  x_{κj} + Σ_{i′>κ} b_{κi′} x_{i′j} = 0,            i = κ,  j < k,       k = 2, . . . , [n/2],
S^k_2 :  x_{κk} + Σ_{i′>κ} b_{κi′} x_{i′k} = I_{κk},        i = κ,  j = k,       k = 1, . . . , [n/2],
S^k_3 :  x_{κj} + Σ_{i′>κ} b_{κi′} x_{i′j} = I_{κk} b_{kj},  i = κ,  k < j < κ,   k = 1, . . . , [n/2],
S^k_4 :  x_{kj} + Σ_{i′>k} b_{ki′} x_{i′j} = 0,             i = k,  j < k,       k = 2, . . . , [(n+1)/2],
and solve them one by one. The subsystem S^1_2 consists of the single equation I_{n1} = x_{n1}, which gives the simplest form of the invariant corresponding to the center of the algebra t_0(n). For any fixed k ∈ {2, . . . , [n/2]} the subsystem S^k_1 ∪ S^k_2 is a well-defined system of linear equations with respect to b_{κi′}, i′ > κ, and I_{κk}. Solving it, e.g., by the Cramer method, we obtain that b_{κi′}, i′ > κ, are expressions of x_{i′j}, i′ > κ, j < k, the explicit form of which is not essential in what follows, and

I_{κk} = (−1)^{k+1} |X^{κ,n}_{1,k}| / |X^{κ+1,n}_{1,k−1}|,   k = 2, . . . , [n/2].
The combination of the found values of Iκk results in the invariants from the statement of the
theorem. The functional independence of these invariants is obvious.
After substituting the expressions of Iκk and bκi′ , i
′ > κ, via x’s, into Sk3 , we trivially resolve
Sk3 with respect to bkj as an uncoupled system of linear equations. In performing the subsequent
substitution of the calculated expressions for bkj to S
4 , for any fixed k, we obtain a well-defined
system of linear equations, e.g., with respect to bki′ , i
′ > κ.
Under the normalization we express the non-normalized lifted invariants via x’s and find a part
of the parameters b’s of the coadjoint action via x’s and the other b’s. No equations involving
only x’s are obtained. In view of Proposition 1, this implies that the choice of the normalization
constraints is correct and, therefore, the number of functionally independent invariants found is
maximal, i.e., they form a basis of Inv(Ad∗T0(n)).
Corollary 1. A basis of Inv(t_0(n)) is formed by the Casimir operators

det (e_{ij})_{i=1,...,k; j=n−k+1,...,n},   k = 1, . . . , [n/2].
Proof. Since the basis elements corresponding the coordinate functions from the constructed basis
of Inv(Ad∗T0(n)) commute, the symmetrization procedure is trivial.
Note 2. The set of the invariants from Theorem 1 can be easily found from the equality I = BXB^{−1} by the following empiric trick used in Lemma 2 from [4]. For any fixed k ∈ {1, . . . , [n/2]} we restrict the equality to the submatrix with the row range κ, . . . , n and the column range 1, . . . , k:

I^{κ,n}_{1,k} = B^{κ,n}_{κ,n} X^{κ,n}_{1,k} (B^{−1})^{1,k}_{1,k}.

Since |B^{κ,n}_{κ,n}| = |(B^{−1})^{1,k}_{1,k}| = 1, we obtain |I^{κ,n}_{1,k}| = |X^{κ,n}_{1,k}|, i.e., |X^{κ,n}_{1,k}| is an invariant of Ad∗_{T_0(n)} in view of the definition of an invariant. Functional independence of the constructed invariants is obvious. The proof of N_{t_0(n)} = [n/2] is much more difficult (see Lemma 3 of [4]).
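Theorem 1 also lends itself to a direct numerical test. The sketch below is our own illustration: it draws a random point X of t∗_0(n) and a random unipotent matrix B, applies the coadjoint action in coordinates as the strictly lower triangular part of BXB^{−1} (as in the proof of Lemma 1), and checks that the determinants |X^{κ,n}_{1,k}| are unchanged.

import numpy as np

rng = np.random.default_rng(0)
n = 6

def random_strict_lower(n):
    return np.tril(rng.standard_normal((n, n)), k=-1)     # coordinates x_{ji}, j > i

def random_unipotent_upper(n):
    return np.eye(n) + np.triu(rng.standard_normal((n, n)), k=1)   # element of T_0(n)

def invariants(X, n):
    # |X^{kappa,n}_{1,k}| for k = 1, ..., [n/2], with kappa = n - k + 1 (1-based)
    vals = []
    for k in range(1, n // 2 + 1):
        kappa = n - k + 1
        vals.append(np.linalg.det(X[kappa - 1:, :k]))      # rows kappa..n, columns 1..k
    return np.array(vals)

X = random_strict_lower(n)
B = random_unipotent_upper(n)
Xp = np.tril(B @ X @ np.linalg.inv(B), k=-1)   # coadjoint action in coordinates
print(np.allclose(invariants(X, n), invariants(Xp, n)))    # expected: True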
4 Solvable algebra of upper triangular matrices
In a way analogous to the previous section, consider the solvable Lie algebra t(n) isomorphic to the
one of upper triangular n× n matrices. t(n) has dimension n(n+1)/2. It is the Lie algebra of the
Lie group T (n) of nonsingular upper triangular n× n matrices.
Its basis elements are convenient to enumerate with a ‘non-decreasing’ pair of indices in a similar
way to the canonical basis {Enij , i 6 j} of the isomorphic matrix algebra. Thus, the basis elements
eij ∼ E
ij , i 6 j, satisfy the commutation relations
[eij , ei′j′] = δi′jeij′− δij′ei′j,
where δij is the Kronecker delta.
Hereafter the indices i, j, k and l again run at most from 1 to n. Only additional constraints
on the indices are indicated.
The center of t(n) is one-dimensional and coincides with the linear span of the sum e11+ · · ·+enn
corresponding to the unity matrix En. The elements eij , i < j, and e11 + · · ·+ enn form a basis of
the nilradical of t(n), which is isomorphic to t0(n)⊕ a. Here a is the one-dimensional (Abelian) Lie
algebra.
Let e∗_{ji}, x_{ji} and y_{ij} denote the basis element and the coordinate function in the dual space t∗(n) and the coordinate function in t(n), which correspond to the basis element e_{ij}, i ≤ j. Thus, ⟨e∗_{j′i′}, e_{ij}⟩ = δ_{ii′}δ_{jj′}. We complete the sets of x_{ji} and y_{ij} to the matrices X and Y with zeros. Hence X is a lower triangular matrix and Y is an upper triangular one.
Lemma 2. A fundamental lifted invariant of Ad∗_{T(n)} is formed by the expressions

I_{ij} = Σ_{i≤i′, j′≤j} b_{ii′} b̂_{j′j} x_{i′j′},   j ≤ i,

where B = (b_{ij}) is an arbitrary matrix from T(n), and B^{−1} = (b̂_{ij}) is the inverse matrix of B.
Proof. The adjoint action of B ∈ T(n) on the matrix Y is Ad_B Y = BY B^{−1}, i.e.,

Ad_B ( Σ_{i≤j} y_{ij} e_{ij} ) = Σ_{i≤j} (BY B^{−1})_{ij} e_{ij} = Σ_{i≤i′≤j′≤j} b_{ii′} y_{i′j′} b̂_{j′j} e_{ij}.

After changing e_{ij} → x_{ji}, y_{ij} → e∗_{ji}, b_{ij} ↔ b̂_{ij} in the latter equality, we obtain the representation for the coadjoint action of B:

Σ_{i≤i′≤j′≤j} b_{j′j} x_{ji} b̂_{ii′} e∗_{j′i′} = Σ_{i′≤j′} (BXB^{−1})_{j′i′} e∗_{j′i′}.

Therefore, the elements I_{ij}, j ≤ i, of the matrix

I = BXB^{−1},   B ∈ T(n),

form a complete set of the independent lifted invariants of Ad∗_{T(n)}.
Note 3. The center of the group T(n) is Z(T(n)) = {βE_n | β ∈ F\{0}}. If F = C then the group T(n) is connected. In the real case the connected component T_+(n) of the unity in T(n) is formed by the matrices from T(n) with positive diagonal elements, i.e., T_+(n) ≃ T(n)/Z^n_2, where Z^n_2 = {diag(ε_1, . . . , ε_n) | ε_i = ±1}. The inner automorphism group Int(t(n)) of t(n) is isomorphic to the factor-group T(n)/Z(T(n)) (or T_+(n)/Z(T(n)) if F is real) and hence its dimension is n(n+1)/2 − 1. The value of one of the diagonal elements of the matrix B, or of a homogeneous combination of them, in the above representation of lifted invariants can be assumed inessential. It is evident from the proof of Theorem 2 that in all cases the invariant sets of the coadjoint representations of Int(t(n)) and t(n) coincide.
Let us recall that A^{i1,i2}_{j1,j2}, where i1 ≤ i2 and j1 ≤ j2, denotes the submatrix (a_{ij})_{i=i1,...,i2, j=j1,...,j2} of a matrix A = (a_{ij}). The conjugate value of k with respect to n is denoted by κ, i.e., κ = n − k + 1.
The following technical lemma on matrices is used in the proof of the theorem below.
Lemma 3. Suppose 1 < k < n. If |X
κ+1,n
1,k−1 | 6= 0 then for any β ∈ F
1,k−1(X
κ+1,n
1,k−1 )
κ+1,n
j,j =
(−1)k+1
κ+1,n
1,k−1 |
∣∣∣∣∣
1,k−1
κ+1,n
1,k−1
κ+1,n
∣∣∣∣∣ .
In particular, xκk −X
1,k−1(X
κ+1,n
1,k−1 )
κ+1,n
= (−1)k+1|X
κ+1,n
1,k−1 |
1,k |. Analogously
xκj −X
1,k−1
κ+1,n
1,k−1
κ+1,n
xjk −X
1,k−1
κ+1,n
1,k−1
κ+1,n
κ+1,n
1,k−1
∣∣∣∣∣
1,k X
∣∣∣∣∣+
1,k |
κ+1,n
1,k−1
∣∣∣∣∣
1,k−1
κ+1,n
1,k−1 X
κ+1,n
∣∣∣∣∣ .
Theorem 2. A basis of Inv(Ad∗_{T(n)}) is formed by the rational expressions

(1/|X^{κ,n}_{1,k}|) Σ_{j=k+1}^{κ−1} det ( X^{j,j}_{1,k}  x_{jj} ; X^{κ,n}_{1,k}  X^{κ,n}_{j,j} ),   k = 0, . . . , [(n−1)/2],

where the determinant is that of the (k+1)×(k+1) matrix with block rows (X^{j,j}_{1,k}, x_{jj}) and (X^{κ,n}_{1,k}, X^{κ,n}_{j,j}), and |X^{n+1,n}_{1,0}| := 1.
Proof. We choose the following normalization restrictions on the lifted invariants I_{ij}, j ≤ i:

I_{n−j+1,j} = 1,  j = 1, . . . , [n/2];
I_{ij} = 0  if  j ≤ i,  (i, j) ≠ (j′, j′), (n − j′ + 1, j′),  j′ = 1, . . . , [(n+1)/2].

This means that the only elements of the lifted invariant matrix I whose values are not fixed are those situated on the main diagonal on or above the secondary diagonal. The elements of the secondary diagonal lying under the main diagonal are given the value 1, and the other significant elements of I are given the value 0. As shown below, the imposed normalization ensures that the conditions of Proposition 1 are satisfied and, therefore, is correct.
Similarly to the case of strictly triangular matrices, in view of the (triangular) structure of the matrices B and X the formula I = BXB^{−1} determining the lifted invariants implies that BX = IB. This matrix equality is significant for the matrix elements not lying above the main diagonals of the left- and right-hand sides, i.e.,

Σ_{i′≥i} b_{ii′} x_{i′j} = Σ_{j′≤j} I_{ij′} b_{j′j},   j ≤ i.
For convenience we again divide the latter system under the chosen normalization conditions into four sets of subsystems

S^k_1 :  Σ_{i′≥κ} b_{κi′} x_{i′j} = 0,             i = κ,  j < k,      k = 2, . . . , [n/2],
S^k_2 :  Σ_{i′≥κ} b_{κi′} x_{i′j} = b_{kj},         i = κ,  k ≤ j ≤ κ,  k = 1, . . . , [n/2],
S^k_3 :  Σ_{i′≥k} b_{ki′} x_{i′j} = 0,             i = k,  j < k,      k = 2, . . . , [(n+1)/2],
S^k_4 :  Σ_{i′≥k} b_{ki′} x_{i′k} = b_{kk} I_{kk},  i = k,  j = k,      k = 1, . . . , [(n+1)/2],
and solve them one by one. The subsystem S12 consists of the equations
b1j = bnnxnj
which are already solved with respect to b1j . For any fixed k ∈ {2, . . . , [n/2]} the subsystem S
is a well-defined system of linear equations with respect to bκi′ , i
′ > κ, and bkj, k 6 j 6 κ. We
can solve the subsystem Sk1 with respect to bκi′ , i
′ > κ:
κ+1,n = −bκκX
1,k−1(X
κ+1,n
1,k−1 )
and then substitute the obtained values into the subsystem Sk2 . Another way is to find the expres-
sions for bkj, k 6 j 6 κ, by the Cramer method, from the whole system S
1 ∪ S
2 at once since
only these parameters are further considered. As a result, they have two representations via bκκ
and x’s:
bkj = bκκ
xκj −X
1,k−1
κ+1,n
1,k−1
κ+1,n
(−1)k+1bκκ
κ+1,n
1,k−1
∣∣∣∣∣
1,k−1 xκj
κ+1,n
1,k−1 X
κ+1,n
∣∣∣∣∣ ,
where k 6 j 6 κ. In particular,
bkk = (−1)
k+1bκκ|X
κ+1,n
1,k−1 |
1,k |.
Analogously, for any fixed k ∈ {2, . . . , [(n + 1)/2]} the subsystem Sk3 is a well-defined system of
linear equations with respect to bkj, j > κ, and it implies
κ+1,n = −
k6j6κ
1,k−1(X
κ+1,n
1,k−1 )
Substituting the found expressions for b’s into the equations of the subsystems Sk4 , we completely
exclude the parameters b’s and obtain expressions of Ikk only via x’s. Thus, under k = 1
I11 =
b1ixi1 =
xnixi1 =
xnixi1 =
xi1 xii
xn1 xni
∣∣∣∣+
where the summation range in the first sum can be bounded by 2 and n − 1 since for i = 1 and
i = n the determinants are equal to 0. In the case k ∈ {2, . . . , [(n + 1)/2]}
bkkIkk =
bkixik =
k6j6κ
bkjxjk +
bkixik
k6i6κ
xjk −X
1,k−1(X
κ+1,n
1,k−1 )
κ+1,n
= bκκ
k6i6κ
xκj −X
1,k−1(X
κ+1,n
1,k−1 )
κ+1,n
xjk −X
1,k−1(X
κ+1,n
1,k−1 )
κ+1,n
After using the representation for bnn and manipulations with submatrices of X (see Lemma 3),
we derive that
Ikk =
(−1)k+1
1,k |
k6i6κ
∣∣∣∣∣
∣∣∣∣∣+
(−1)k+1
κ+1,n
1,k−1
k6i6κ
∣∣∣∣∣
1,k−1
κ+1,n
1,k−1
κ+1,n
∣∣∣∣∣ ,
where k = 2, . . . , [(n + 1)/2]. The summation range in the first sum can be taken from k + 1 and
κ − 1 since for i = k and i = κ the determinants are equal to 0.
The combination of the found values of I_{kk} in the following way,

Ĩ_{00} = Σ_j I_{jj} = Σ_i x_{ii},   Ĩ_{kk} = (−1)^{k+1} I_{kk} − Ĩ_{k−1,k−1},  k = 1, . . . , [(n−1)/2],

results in the invariants Ĩ_{k′k′}, k′ = 0, . . . , [(n − 1)/2], from the statement of the theorem. The functional independence of these invariants is obvious.
Under the normalization we express the non-normalized lifted invariants via x’s and find a part
of the parameters b’s of the coadjoint action via x’s and the other b’s. No equations involving
only x’s are obtained. In view of Proposition 1, this implies that the choice of the normalization
constraints is correct, i.e., the number of the found functionally independent invariants is maximal and, therefore, they form a basis of Inv(Ad∗_{T(n)}).
Note 4. An expanded form of the invariants from Theorem 2 is

Σ_j x_{jj},   (1/x_{n1}) Σ_j det ( x_{j1}  x_{jj} ; x_{n1}  x_{nj} ),
(1/ det ( x_{n−1,1}  x_{n−1,2} ; x_{n1}  x_{n2} )) Σ_j det ( x_{j1}  x_{j2}  x_{jj} ; x_{n−1,1}  x_{n−1,2}  x_{n−1,j} ; x_{n1}  x_{n2}  x_{nj} ),   . . . .

The first invariant corresponds to the center of t(n). The invariant tuple ends with a single-summand ratio of determinants if n is odd and with a two-summand expression of the same form if n is even.
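A numerical check similar to the one for Theorem 1 can also be run here. The sketch below is our own code: it implements the rational expressions exactly as written in Theorem 2 above, takes the coadjoint action in coordinates as the lower triangular part of BXB^{−1}, and compares the values at a random point and at its image under a random element of T(n).

import numpy as np

rng = np.random.default_rng(1)
n = 7

def coadjoint(B, X):
    # coordinates of Ad*_B x: the lower triangular part (j <= i) of B X B^{-1}
    return np.tril(B @ X @ np.linalg.inv(B))

def invariants_tn(X):
    n = X.shape[0]
    vals = []
    for k in range((n - 1) // 2 + 1):                 # k = 0, ..., [(n-1)/2]
        kappa = n - k + 1                             # 1-based conjugate index
        denom = 1.0 if k == 0 else np.linalg.det(X[kappa - 1:, :k])
        total = 0.0
        for j in range(k + 1, kappa):                 # j = k+1, ..., kappa-1
            top = np.hstack([X[j - 1, :k], [X[j - 1, j - 1]]])               # (X^{j,j}_{1,k}, x_jj)
            bottom = np.hstack([X[kappa - 1:, :k], X[kappa - 1:, [j - 1]]])  # (X^{kappa,n}_{1,k}, X^{kappa,n}_{j,j})
            total += np.linalg.det(np.vstack([top, bottom]))
        vals.append(total / denom)
    return np.array(vals)

X = np.tril(rng.standard_normal((n, n)))              # generic point of t*(n)
B = np.triu(rng.standard_normal((n, n)), 1) + np.diag(rng.uniform(0.5, 2.0, n))  # B in T(n)
print(np.allclose(invariants_tn(X), invariants_tn(coadjoint(B, X))))   # expected: True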
Corollary 2. A basis of Inv(t(n)) consists of the rational invariants
Îk =
j=k+1
∣∣∣∣∣
j,j E
ejj E
∣∣∣∣∣ , k = 0, . . . ,
where E
i1,i2
j1,j2
, i1 6 i2, j1 6 j2, denotes the matrix (eij)
i=i1,...,i2
j=j1,...,j2
n+1,n| := 1, κ = n− k + 1.
Proof. The symmetrization procedure for the tuple of invariants presented in Theorem 2 can be
assumed trivial. To show this, we expand the determinants in each element of the tuple and obtain,
as a result, a rational expression in x’s. Only the monomials from the numerator, which do not
contain the ‘diagonal’ elements xjj, include coordinate functions associated with noncommuting
basis elements of the algebra t(n). More precisely, each of the monomials includes a single pair of
such coordinate functions, namely, xji′ and xj′j for some values i
′ ∈ {1, . . . , k}, j′ ∈ {κ, . . . , n} and
j ∈ {k + 1, . . . ,κ − 1}. Hence, it is sufficient to symmetrize only the corresponding pairs of basis
elements.
After the symmetrization and the transposition of the matrices, we construct the following
expressions for the invariants of t(n) associated with the invariants from Theorem 2:
(−1)k
j=k+1
ejj +
j=k+1
ei′jejj′+ ejj′ei′j
(−1)i
∣∣E1,k;̂i
κ,n;ĵ′
∣∣E1,k;̂i
κ,n;ĵ′
∣∣ denotes the minor of the matrix E1,kκ,n complementary to the element ei′j′. Since
ei′ieij′ = eij′ei′i + ei′j′ then
ei′ieij′+ eij′ei′i
(−1)i
∣∣E1,k;̂i
κ,n;ĵ′
∣∣∣∣∣
i,i E
∣∣∣∣∣±
|E1,k
κ,n|,
where the sign ‘+’ (resp. ‘−’) have to be taken if the elements of E
i,i are placed after (resp. before)
the elements of E
κ,n in all the relevant monomials. Up to constant summands, we obtain the
expressions for the elements of an invariant basis, which are adduced in the statement and formally
derived from the corresponding expressions given in Theorem 2 by the replacement xij → eji and
the transposition of all matrices. That is why the symmetrization procedure can be assumed trivial
in the sense described. The transposition is necessary in order to improve the representation of
invariants since xij ∼ eji, j 6 i.
Note 5. The invariants from Corollary 2 can be rewritten as
Îk =
j=k+1
∣∣∣∣∣
j,j E
∣∣∣∣∣+ (−1)
j=k+1
ejj, k = 0, . . . ,
In particular, Î0 =
j ejj .
Note 6. Let us emphasize that a uniform order of elements from E
and E
κ,n has to be fixed in all
the monomials under usage of the ‘non-symmetrized’ forms of invariants presented in Corollary 2,
Note 5 and Theorem 4 (see below).
5 Solvable algebra of special upper triangular matrices
The Lie algebra st(n) of the special (i.e., having zero traces) upper triangular n × n matrices is
imbedded in a natural way in t(n) as an ideal; dim st(n) = n(n+1)/2 − 1. Moreover,
t(n) = st(n)⊕ Z(t(n)),
where Z(t(n)) = 〈e11 + · · · + enn〉 is the center of t(n), which corresponds to the one-dimensional
Abelian Lie algebra of the matrices proportional to En. Due to this fact we can construct a basis of
Inv(st(n)) without the usual calculations involved in finding the basis of Inv(t(n)). It is well known
that if the Lie algebra g is decomposable into the direct sum of Lie algebras g1 and g2 then the
union of bases of Inv(g1) and Inv(g2) is a basis of Inv(g). A basis of Inv(Z(t(n))) obviously consists
of only one element, e.g., e11 + · · · + enn. Therefore, the cardinality of the basis of Inv(st(n)) is
equal to the cardinality of the basis of Inv(t(n)) minus 1, i.e., [(n − 1)/2]. To construct a basis
of Inv(st(n)), it is enough for us to rewrite [(n − 1)/2] functionally independent combinations of
elements from a basis of Inv(t(n)) via elements of st(n) and to exclude the central element from
the basis.
The following basis in st(n) is chosen as a subalgebra of t(n):
eij , i < j, fk =
eii −
i=k+1
eii, k = 1, . . . , n− 1.
(Usage of this basis allows for the presentation of our results in such a form that their identity with
Proposition 1 from [23] becomes absolutely evident.) The commutation relations of st(n) in the
chosen basis are
[eij , ei′j′] = δi′jeij′− δij′ei′j, i < j, i
′ < j′;
[fk, fk′ ] = 0, k, k
′ = 1, . . . , n− 1;
[fk, eij ] = 0, i < j 6 k or k 6 i < j;
[fk, eij ] = eij , i 6 k 6 j, i < j
and, therefore, coincide with those of the algebra L(n, n−1) from [22], i.e., L(n, n−1) is isomorphic
to st(n). Combining this observation with Lemma 6 of [22] results in the following theorem.
Theorem 3. The Lie algebra st(n) has the maximal dimension (equal to n(n+1)/2 − 1) among the solvable Lie algebras which have nilradicals isomorphic to t_0(n). It is the unique algebra with this property.
Theorem 4. A basis of Inv(st(n)) consists of the rational invariants
Ǐk =
(−1)k+1
j=k+1
∣∣∣∣∣
j,j E
∣∣∣∣∣+ fk − fn−k, k = 1, . . . ,
where E
i1,i2
j1,j2
, i1 6 i2, j1 6 j2, denotes the matrix (eij)
i=i1,...,i2
j=j1,...,j2
n+1,n| := 1, κ = n− k + 1.
Proof. It is enough to observe (see Note 5) that
Ǐk = (−1)
k+1Îk +
n− 2k
Î0, k = 1, . . . ,
These combinations of elements from a basis of Inv(t(n)) are functionally independent. They are
expressed via elements of st(n). Their number is [(n − 1)/2]. Therefore, they form a basis of
Inv(st(n)).
6 Conclusion and discussion
In this paper we extend our purely algebraic approach for computing invariants of Lie algebras by
means of moving frames [3, 4] to the classes of Lie algebras t0(n), t(n) and st(n) of strictly, non-
strictly and special upper triangular matrices of an arbitrary fixed dimension n. In contrast to the
conventional infinitesimal method which involves solving an associated system of PDEs, the main
steps of the applied algorithm are the construction of the matrix B(θ) of inner automorphisms of the
Lie algebra under consideration, and the exclusion of the parameters θ from the algebraic system
I = x̌ · B(θ) in some way. The version of the algorithm applied in this paper is distinguished by a special use of the normalization procedure, in which the number and the form of the elements of a functional basis of the invariant set are determined simultaneously with the exclusion of the parameters.
A basis of Inv(t0(n)) was already known and constructed by both the infinitesimal method [23]
and the algebraic algorithm with an elegant but empiric technique of excluding the parameters [4].
Note that the proof introduced in [23] is very sophisticated and was completed only due to the
thorough mastery of the used infinitesimal method. A form of elements from a functional basis of
Inv(t0(n)) was guessed via calculation of bases for a number of small n’s and then justified with
the infinitesimal method, and both the testing steps (on invariance and on sufficiency of number)
were quite complicated.
Invariants of t0(n) are considered in this paper in order to demonstrate the advantages of the
normalization technique and to pave the way for further applications of this technique to the more
complicated algebras t(n) and st(n), which are too complex for the infinitesimal method (only the lowest-dimensional cases were completely investigated there). The invariants of the algebras t(n) and st(n) are studied exhaustively for the first time in this paper. The performed calculations are simple and clear since the
normalization procedure is reduced by the choice of natural coordinates on the inner automorphism
groups and by the use of a special normalization technique to solving a linear system of algebraic
equations. The results obtained for Inv(st(n)) in Theorem 4 completely agree with the conjecture
formulated as Proposition 1 in [23] on the number and form of basis elements of this invariant set.
A direct extension of the present investigation is to describe the invariants of the subalgebras
of st(n), which contain t0(n). Such subalgebras exhaust the set of solvable Lie algebras which can be
embedded in the matrix Lie algebra gl(n) and have the nilradicals isomorphic to t0(n). A technique
similar to that used in this paper can be applied. The main difficulties will be created by the breaking of symmetry and by the more complicated structure of the coadjoint representations. The question of how to investigate the other solvable Lie algebras with nilradicals isomorphic to t0(n) remains open. (See, e.g., [22] for a classification of algebras of this type.)
A more general problem is to delineate the domain of applicability of the developed algebraic method. So far it has been applied only to low-dimensional Lie algebras and to a wide range of classes of solvable Lie algebras in [3, 4] and this paper. The next step to be performed is the extension of the method to classes of unsolvable Lie algebras of arbitrary dimension, e.g., with fixed structures of radicals or Levi factors. An adjoining problem is the implementation of the algorithm in symbolic calculation systems. Similar work has already begun in the framework
of the general method of moving frames, e.g., in the case of rational invariants for rational actions
of algebraic groups [11]. Some other possibilities on the applications of the algorithm are outlined
in [4].
Acknowledgments. The work was partially supported by the Natural Sciences and Engineering Research Council of Canada and by MITACS. The research of R.P. was supported by the Austrian Science Fund (FWF), Lise Meitner project M923-N13. V.B. is grateful for the hospitality of the Centre de Recherches Mathématiques, Université de Montréal.
References
[1] Ancochea J.M., Campoamor-Stursberg R. and Garcia Vergnolle L. Solvable Lie algebras with nat-
urally graded nilradicals and their invariants, J. Phys. A: Math. Gen., 2006, V.39, 1339–1355,
math-ph/0511027.
[2] Barannyk L.F. and Fushchych W.I. Casimir operators of the generalised Poincaré and Galilei groups, in
Group theoretical methods in physics (Yurmala, 1985), Vol. II, VNU Sci. Press, Utrecht, 1986, 275–282.
[3] Boyko V., Patera J. and Popovych R. Computation of invariants of Lie algebras by means of moving
frames, J. Phys. A: Math. Gen., 2006, V.39, 5749–5762, math-ph/0602046.
[4] Boyko V., Patera J. and Popovych R. Invariants of Lie algebras with fixed structure of nilradicals,
J. Phys. A: Math. Theor., 2007, V.40, 113–130, math-ph/0606045.
[5] Campoamor-Stursberg R. An alternative interpretation of the Beltrametti–Blasi formula by means of
differential forms, Phys. Lett. A, 2004, V.327, 138–145.
[6] Campoamor-Stursberg R. Application of the Gel’fand matrix method to the missing label problem in
classical kinematical Lie algebras, SIGMA, 2006, V.2, Paper 028, 11 pages, math-ph/0602065.
[7] Campoamor-Stursberg R. Affine Lie algebras with non-compact rank one Levi subalgebra and their
invariants, Acta Phys. Polon. B, 2007, V.38, 3–20.
[8] Chaichian M., Demichev A.P. and Nelipa N.F. The Casimir operators of inhomogeneous groups, Comm.
Math. Phys., 1983, V.90, 353–372.
[9] Fels M. and Olver P. Moving coframes: I. A practical algorithm, Acta Appl. Math., 1998, V.51, 161–213.
[10] Fels M. and Olver P. Moving coframes: II. Regularization and theoretical foundations, Acta Appl.
Math., 1999, V.55, 127–208.
[11] Hubert E. and Kogan I. Rational invariants of a group action: construction and rewriting, J. Symbolic
Comp., 2007, V.42, 203–217.
[12] Kaneta H. The invariant polynomial algebras for the groups IU(n) and ISO(n), Nagoya Math. J., 1984,
V.94, 43–59.
[13] Kaneta H. The invariant polynomial algebras for the groups ISL(n) and ISp(n), Nagoya Math. J., 1984,
V.94, 61–73.
[14] Ndogmo J.C. Invariants of a semi-direct sum of Lie algebras, J. Phys. A: Math. Gen., 2004, V.37,
5635–5647.
[15] Ndogmo J.C. and Winternitz P. Solvable Lie algebras with Abelian nilradicals, J. Phys. A: Math. Gen.,
1994, V.27, 405–423.
[16] Ndogmo J.C. and Winternitz P. Generalized Casimir operators of solvable Lie algebras with Abelian
nilradicals, J. Phys. A: Math. Gen., 1994, V.27, 2787–2800.
[17] Olver P.J. and Pohjanpelto J. Moving frames for Lie pseudo-groups, Canadian J. Math., to appear.
[18] Patera J., Sharp R.T., Winternitz P. and Zassenhaus H. Invariants of real low dimension Lie algebras,
J. Math. Phys., 1976, V.17, 986–994.
[19] Perroud M. The fundamental invariants of inhomogeneous classical groups, J. Math. Phys., 1983, V.24,
1381–1391.
[20] Rubin J.L. and Winternitz P. Solvable Lie algebras with Heisenberg ideals, J. Phys. A: Math. Gen.,
1993, V.26, 1123–1138.
[21] Snobl L. and Winternitz P. A class of solvable Lie algebras and their Casimir invariants, J. Phys. A:
Math. Gen., 2005, V.38, 2687–2700, math-ph/0411023.
[22] Tremblay S. and Winternitz P. Solvable Lie algebras with triangular nilradicals, J. Phys. A: Math.
Gen., 1998, V.31, 789–806, arXiv:0709.3581.
[23] Tremblay S. and Winternitz P. Invariants of the nilpotent and solvable triangular Lie algebras,
J. Phys. A: Math. Gen., 2001, V.34, 9085–9099, arXiv:0709.3116.
Hui Wang,1 Kipton Barros,2 Harvey Gould,1 and W. Klein2
1Department of Physics, Clark University, Worcester, MA 01610
2Department of Physics and the Center for Computational Science,
Boston University, Boston, MA 02215
Abstract
We investigate the approach to stable and metastable equilibrium in Ising models using a cluster
representation. The distribution of nucleation times is determined using the Metropolis algorithm
and the corresponding φ4 model using Langevin dynamics. We find that the nucleation rate is
suppressed at early times even after global variables such as the magnetization and energy have
apparently reached their time independent values. The mean number of clusters whose size is
comparable to the size of the nucleating droplet becomes time independent at about the same
time that the nucleation rate reaches its constant value. We also find subtle structural differences
between the nucleating droplets formed before and after apparent metastable equilibrium has been
established.
http://arxiv.org/abs/0704.0938v4
I. INTRODUCTION
Understanding nucleation is important in fields as diverse as materials science, biological
physics, and meteorology [1, 2, 3, 4, 5, 6, 7, 8, 9]. Fundamental progress was made when
Gibbs assumed that the nucleating droplet can be considered to be a fluctuation about
metastable equilibrium, and hence the probability of a nucleating droplet is independent of
time [10]. Langer [11] has shown that the probability of a nucleating droplet can be related
to the analytic continuation of the stable state free energy in the limit that the metastable
state lifetime approaches infinity. Hence the assumption by Gibbs is valid in this limit. It has
also been shown that the Gibbs assumption is correct in systems for which the interaction
range R → ∞ [12, 13].
For metastable states with finite lifetimes equilibrium is never reached because a large
enough fluctuation would initiate the transformation to the stable state. However, if the
probability of such a fluctuation is sufficiently small, it is possible that systems investigated
by simulations and experiments can be well approximated as being in equilibrium. Hence,
for metastable lifetimes that are very long, we expect the Gibbs assumption to be a good
approximation.
In practice, nucleation is not usually observed when the lifetime of the metastable state
is very long. Processes such as alloy formation, decay of the false vacuum, and protein
crystallization generally occur during a continuous quench of a control parameter such as
the temperature. It is natural to ask if the nucleation process that is observed occurs when
the system can be reasonably approximated by one in metastable equilibrium. If so, the
nucleation rate will be independent of time.
It is usually assumed that metastable equilibrium is a good approximation when the
mean value of the order parameter and various global quantities are no longer changing
with time. As an example, we consider the nearest-neighbor Ising model on a square lattice
and equilibrate the system at temperature T = 4Tc/9 in a magnetic field h = 0.44. The
relatively small value of the linear dimension L = 200 was chosen in order to avoid nucleation
occurring too quickly. At time t = 0 the sign of the magnetic field is reversed. In Fig. 1 we
plot the evolution of the magnetization m(t) and the energy e(t) per spin using the Metropolis
algorithm. The solid lines are the fits to an exponential function with the relaxation time
τg ≈ 1.5. In the following we will measure the time in terms of Monte Carlo steps per spin.
(a) m(t). (b) e(t).
FIG. 1: The evolution of the magnetization m(t) and the energy e(t) per spin of the nearest-
neighbor Ising model on a square lattice with linear dimension L = 200 using the Metropolis
algorithm. The system was prepared at temperature T = 4Tc/9 in the external magnetic field
h = 0.44. At time t = 0 the sign of the magnetic field is reversed. The solid lines are fits to
an exponential function with relaxation times τg = 1.5 and 1.2, respectively. (Time is measured in
Monte Carlo steps per spin.) The data is averaged over 5000 runs.
A major goal of our work is to address the question, “Can the system be treated as being
in metastable equilibrium for t >∼ τg?”
If the nucleation rate is independent of time, the probability of a nucleating droplet
occurring at time t after the change of magnetic field is an exponentially decreasing function
of time. To understand this dependence we divide the time into intervals ∆t and write the
probability that the system nucleates in a time interval ∆t as λ∆t, where the nucleation
rate λ is a constant. The probability that nucleation first occurs in the (N + 1)th time interval is given by

P_N = (1 − λ∆t)^N λ∆t. (1)

If we assume that λ∆t is small and write N = t/∆t, we can write

P(t)∆t = (1 − λ∆t)^{t/∆t} λ∆t → e^{−λt} λ∆t, (2)
where P (t)∆t is the probability that the system nucleates at a time between t and t+∆t after
the change of the magnetic field. In the following we ask if the nucleation rate and the mean
values of the order parameter and other thermodynamic quantities become independent
of time at approximately the same time after a quench or is the approach to metastable
equilibrium more complicated?
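As a simple consistency check of Eq. (2) (a toy calculation of ours, not part of the simulations described below), drawing nucleation times from a constant per-interval probability λ∆t reproduces an exponential distribution whose rate can be read off from the mean waiting time:

import numpy as np

rng = np.random.default_rng(0)
lam, dt, runs = 9e-4, 1.0, 5000              # lam as quoted in Fig. 2 is assumed for illustration
steps = rng.geometric(lam * dt, size=runs)   # number of intervals until first nucleation
times = steps * dt
print(1.0 / times.mean())                    # close to lam when lam*dt is small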
In Sec. II we determine the probability distribution of the nucleation times and find that
the nucleation rate becomes a constant only after a time τnequil that is much longer than the
relaxation time τg of m(t) and e(t). In Sec. III we study the microscopic behavior of the
system and determine the relaxation time τs for ns, the mean number of clusters of size s,
to approach its equilibrium value [14]. Our main result is that τs is an increasing function of
s, and the time required for ns to reach its equilibrium value is the same order of magnitude
as τnequil for values of s comparable to the nucleating droplet. That is, the time for the
number of clusters that are the size of the nucleating droplet to reach its equilibrium value
is considerably longer than the time for the mean value of the order parameter to become
independent of time within the accuracy that we can determine.
In Secs. IV and V we show that there are subtle differences between the structure of the
nucleating droplets which occur before and after metastable equilibrium appears to have
been achieved. This difference suggests the possibility of finding even greater differences in
the nucleating droplets in systems of physical and technological importance. We summarize
and discuss our results in Sec. VI. In the Appendix we study the evolution of the clusters
after a quench to the critical temperature of the Ising model and again find that the
clusters equilibrate in size order, with the smaller clusters equilibrating first. Hence in
principle, an infinite system will never equilibrate. How close to equilibrium a system needs
to be and on what spatial scale so that it can be treated by equilibrium methods depends
on the physical process of interest.
II. DISTRIBUTION OF NUCLEATION TIMES
We simulate the Ising model on a square lattice with interaction range R with the Hamil-
tonian
H = −J Σ_{<i,j>} s_i s_j − h Σ_i s_i, (3)
where h is the external field. The notation <i, j> in the first sum means that the distance
between spins i and j is within the interaction range R. We studied both nearest-neighbor
(R = 1) and long-range interactions (R ≥ 20). The interaction strength J is scaled as
J = 4/q, where q = 2R(R + 1) is the number of interaction neighbors per spin. The
external field h and the temperature are measured in terms of J . All of our simulations
are at temperature T = 4Tc/9, where Tc is the critical temperature. For R = 1 the critical
temperature is Tc ≈ 2.269. For R >∼ 20 the mean field result Tc = 4 is a good approximation
to the exact value of the critical temperature [21]. As discussed in Sec. I the system is
equilibrated in a magnetic field h. The time t = 0 corresponds to the time immediately after
the magnetic field is reversed.
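For orientation, a minimal Metropolis sweep for the nearest-neighbor (R = 1) case might look as follows; this is an illustrative sketch with a small lattice and a spin-up initial state after the field reversal, not the production code used for the results reported here.

import numpy as np

rng = np.random.default_rng(0)
L, J = 64, 1.0                        # J = 4/q with q = 4 for R = 1
T = 4 * 2.269 / 9                     # T = 4*Tc/9 for the nearest-neighbor model
h = -0.44                             # field already reversed at t = 0
spins = np.ones((L, L), dtype=int)    # equilibrated in +h, magnetized up

def metropolis_sweep(s, T, h):
    beta = 1.0 / T
    for _ in range(s.size):
        i, j = rng.integers(0, L, size=2)
        nn = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        dE = 2 * s[i, j] * (J * nn + h)          # energy cost of flipping s[i, j]
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] *= -1

for t in range(10):
    metropolis_sweep(spins, T, h)
    print(t, spins.mean())            # m(t) relaxing toward its metastable value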
The clusters in the Ising model are defined rigorously by a mapping of the Ising critical
point onto the percolation transition of a properly chosen percolation model [9, 22, 23]. Two
parallel spins that are within the interaction range R are connected only if there is a bond
between them. The bonds are assigned with probability p_b = 1 − e^{−2βJ} for R = 1 and p_b = 1 − e^{−2βJ(1−ρ)} near the spinodal, where ρ is the density of the stable spins and β is the inverse temperature. Spins that are connected by bonds form a cluster.
Because the intervention method [15] of identifying the nucleating droplet is time con-
suming (see Sec. IV), we use a simpler criterion in this section to estimate the nucleation
time. We monitor the size of the largest cluster (averaged over 20 bond realizations) and
estimate the nucleation time as the time when the largest cluster first reaches a threshold
size s∗. The threshold size s∗ is chosen so that the largest cluster begins to grow rapidly
once its size is greater than or equal to s∗. Because s∗ is larger than the actual size of the nu-
cleating droplet, the nucleation time that we estimate by this criterion will be 1 to 2 Monte
Carlo steps per spin later than the nucleation time determined by the intervention method.
Although the distribution function P (t) is shifted to slightly later times, the nucleation rate
is found to be insensitive to the choice of the threshold.
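One possible implementation of the cluster criterion for R = 1 is sketched below; the union-find bookkeeping and the assumption that the stable direction corresponds to spin −1 after the field reversal are our own choices for illustration.

import numpy as np

def largest_cluster_size(spins, beta, J, rng):
    # stable spins joined by bonds placed with p_b = 1 - exp(-2*beta*J)
    L = spins.shape[0]
    pb = 1.0 - np.exp(-2.0 * beta * J)
    parent = np.arange(L * L)

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    stable = spins == -1                       # spins pointing along the reversed field
    for i in range(L):
        for j in range(L):
            if not stable[i, j]:
                continue
            for di, dj in ((1, 0), (0, 1)):    # bonds to right and down neighbors (periodic)
                ni, nj = (i + di) % L, (j + dj) % L
                if stable[ni, nj] and rng.random() < pb:
                    union(i * L + j, ni * L + nj)
    roots = [find(i * L + j) for i in range(L) for j in range(L) if stable[i, j]]
    return max(np.bincount(roots)) if roots else 0

In practice the returned size would be averaged over of order 20 independent bond realizations and compared with the threshold s∗, as described above.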
Figure 2 shows P (t) for R = 1 and h = 0.44, where P (t)∆t is the probability that
nucleation has occurred between time t and t+∆t. The results for P (t) were averaged over
5000 runs. The mean size of the nucleating droplet is estimated to be approximately 25
spins for this value of h. Note that P (t) is an increasing function of t for early times, reaches
a maximum at t = τnequil ≈ 60, and fits to the expected exponential form for t >∼ τnequil.
The fact that P (t) falls below the expected exponential for t < τnequil indicates that the
nucleation rate is reduced from its equilibrium value and that the system is not in metastable
equilibrium. Similar nonequilibrium effects have been observed in Ising-like [16, 17] and
continuous systems [18]. We conclude that the time for the nucleation rate to become
independent of the time after the change of magnetic field is much longer than the relaxation
time τg ≃ 1.5 of the magnetization and energy. We will refer to nucleation that occurs before
metastable equilibrium has been reached as transient nucleation.
(a) P (t). (b) lnP (t).
FIG. 2: The distribution of nucleation times P (t) averaged over 5000 runs for the same system as
in Fig. 1. The threshold size was chosen to be s∗ = 30. (The mean size of the nucleating droplet
is ≈ 25 spins.) (a) P (t) begins to decay exponentially at τnequil ≈ 60. The nucleation rate after
equilibrium has been established is determined from the log-linear plot in (b) and is λ ≈ 9 × 10^{−4}
(see Eq. (2)).
In order to see if the same qualitative behavior holds near the pseudospinodal, we simu-
lated the long-range Ising model with R = 20 and h = 1.258. In the mean-field limit R → ∞
the spinodal field is at hs = 1.2704 (for T = 4Tc/9). A plot of m(t) for this system is shown
in Fig. 3(a) and is seen to have the same qualitative behavior as in Fig. 2 for R = 1; the
relaxation time τg ≈ 4.5. In Fig. 3(b) the distribution of nucleation times is shown, and we
see that P (t) does not decay exponentially until t >∼ τnequil = 40. According to Ref. 19, τnequil
should become comparable to τg in the limit R → ∞ because the free energy is described
only by the magnetization in the mean-field limit. We find that the difference between τnequil
and τg is smaller for R = 20 than for R = 1, consistent with Ref. 19.
III. RELAXATION OF CLUSTERS TO METASTABLE EQUILIBRIUM
Given that there is a significant time delay between the relaxation of the magnetization
and the energy and the equilibration of the system as measured by the nucleation rate,
it is interesting to monitor the time-dependence of the cluster-size distribution after the
reversal of the magnetic field. After the change the system gradually relaxes to metastable equilibrium by forming clusters of spins in the stable direction. How much time is required for
the number of clusters of size s to reach equilibrium? In particular, we are interested in the
(a) m(t). (b) ln(P (t)).
FIG. 3: (a) The evolution of m(t) for the long-range Ising model on a square lattice with R = 20,
h = 1.258, and L = 500. The solid line is an exponential fit with the relaxation time τg ≈ 4.5.
The data is averaged over 2000 runs. (b) The distribution of nucleation times P (t) for the same
system and number of runs. P (t) decays exponentially for t >∼ τnequil ≈ 40. The nucleation rate
once equilibrium has been established is λ = 6.4 × 10^{−2}. The mean size of the nucleating droplet
is ≈ 300 spins.
time required for clusters that are comparable in size to the nucleating droplet.
We first consider R = 1 and monitor the number of clusters ns of size s at time t. To
obtain good statistics we chose L = 200 and averaged over 5000 runs. Figure 4 shows the
FIG. 4: The evolution of the number of clusters of size s = 6 averaged over 5000 runs for R = 1
and the same conditions as in Fig. 1. The fit is to the exponential form in Eq. (4) with τs ≈ 8.1
and ns,∞ = 0.0175.
(a) R = 1. (b) R = 20.
FIG. 5: (a) The equilibration time τs as a function of the cluster size s for R = 1, h = 0.44, and the same conditions as in Fig. 1. The s-dependence of τs is approximately linear. The extrapolated value of τs corresponding to the mean size of the nucleating droplet (≈ 25 spins) is τextrap ≈ 34, which is the same order of magnitude as the time τnequil ≈ 60 for the system to reach metastable equilibrium. (b) Log-log plot of the equilibration time τs versus s for R = 20 and h = 1.258 and the same conditions as in Fig. 3(b). We find that τs ∼ s^x with the exponent x ≈ 0.56. The
extrapolated value of τs corresponding to the mean size of the nucleating droplet (≈ 300 spins)
is τextrap ≈ 30, which is comparable to the time τnequil ≈ 40 for the system to reach metastable
equilibrium.
evolution of n6(t), which can be fitted to the exponential form:
n_s(t) = n_{s,∞}[1 − e^{−t/τ_s}]. (4)
We find that τs ≈ 8.1 for s = 6. By doing similar fits for a range of s, we find that the time
τs for the mean number of clusters of size s to become time independent increases linearly
with s over the range of s that we can simulate (see Fig. 5). The extrapolated value of τs
corresponding to the mean size of the nucleating droplet (≈ 25 spins by direct simulation)
is τextrap ≈ 34. That is, it takes a time of τextrap ≈ 34 for the mean number of clusters whose
size is the order of the nucleating droplets to become time independent. The time τextrap is
much longer than the relaxation time τg ≈ 1.5 of the macroscopic quantities m(t) and e(t)
and is comparable to the time τnequil ≈ 60 for the nucleation rate to become independent of
time.
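The fit to Eq. (4) can be done with a standard least-squares routine; the snippet below is a hedged sketch using scipy in which the data array stands in for the measured n_s(t) (synthetic numbers are generated so that it runs on its own).

import numpy as np
from scipy.optimize import curve_fit

def relax(t, n_inf, tau):
    return n_inf * (1.0 - np.exp(-t / tau))     # Eq. (4)

t = np.arange(0, 60, 1.0)
# synthetic stand-in for the measured n_6(t), built around the quoted fit values
n6 = relax(t, 0.0175, 8.1) + 1e-4 * np.random.default_rng(0).standard_normal(t.size)
(n_inf, tau), _ = curve_fit(relax, t, n6, p0=(0.02, 5.0))
print(n_inf, tau)                               # recovers values close to 0.0175 and 8.1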
Because the number of clusters in the nucleating droplet is relatively small for R = 1
except very close to coexistence (small h), we also consider a long-range Ising model with
R = 20 and h = 1.258 (as in Fig. 3). The relaxation time τs of the clusters near the
pseudospinodal fits to a power law τs ∼ s^x with x ≈ 0.56 (see Fig. 5(b)). We know of no theoretical explanation for the qualitatively different dependence of the relaxation time τs on s near coexistence (τs ≃ s) and near the spinodal (τs ≃ s^{1/2}). If we extrapolate τs to
s = 300, the approximate size of the nucleating droplet, we find that the equilibration time
for clusters of the size of the nucleating droplet is τextrap ≈ 30, which is comparable to the
time τnequil ≈ 40 for the nucleation rate to become independent of time.
To determine if our results are affected by finite size effects, we compared the equilibra-
tion time of the clusters for lattices with linear dimension L = 2000 and L = 5000. The
equilibration times of the clusters were found to be unaffected.
IV. STRUCTURE OF THE NUCLEATING DROPLET
Because nucleation can occur both before and after the system is in metastable equilib-
rium, we ask if there are any structural differences between the nucleating droplets formed
in these two cases. To answer this question, we determine the nature of the nucleating
droplets for the one-dimensional (1D) Ising model where we can make R (and hence the
size of the nucleating droplets) large enough so that the structure of the nucleating droplets
is well defined. In the following we take R = 2^{12} = 4096, h = 1.265, and L = 2^{18}. The
relaxation time for m(t) is τg ≈ 40, and the time for the distribution of nucleation times to
reach equilibrium is τnequil ≈ 90.
We use the intervention method to identify nucleation [15]. To implement this method, we
choose a time at which a nucleating droplet might exist and make many copies of the system.
Each copy is restarted using a different random number seed. The idea is to determine if
the largest cluster in each of the copies grows in approximately the same place at about
the same time. If the percentage of copies that grow is greater than 50%, the nucleating
droplet is already in the growth phase; if it is less than 50%, the time chosen is earlier than
nucleation. We used a total of 20 trials to make this determination.
Our procedure is to observe the system for a time tobs after the intervention and determine
if the size of the largest cluster exceeds the threshold size s∗ at approximately the same
location. To ensure that the largest cluster at tobs is the same cluster as the original one, we
require that the center of mass of the largest cluster be within a distance r∗ of the largest
cluster in the original configuration. If these conditions are satisfied, the nucleating droplet
is said to grow. We choose tobs = 6, r
∗ = 2R, and s∗ = 2000. (In comparison, the size of the
nucleating droplet for the particular run that we will discuss is ≈ 1080 spins.)
There is some ambiguity in our identification of the nucleation time because the saddle
point parameter is large but finite [9]. This ambiguity manifests itself in the somewhat
arbitrary choices of the parameters tobs, r
∗, and s∗. We tried different values for tobs, r
∗, and
s∗ and found that our results depend more strongly on the value of the parameter r∗ than on
the values of tobs and s
∗. If we take r∗ = R/2, the nucleating droplets almost always occur
one to two Monte Carlo steps per spin later than for r∗ = 2R. The reason is that the linear
size of the nucleating droplet is typically 6 to 8R, and its center of mass might shift more
than R/2 during the time tobs. If such a shift occurs, a cluster that would be said to grow for
r∗ = 2R would not be counted as such because it did not satisfy the center of mass criterion.
This shift causes an overestimate of the time of the nucleating droplet. A reasonable choice
of r∗ is 20% to 40% of the linear size of the nucleating droplet. The choice of parameters is
particularly important here because the rate of growth of the transient nucleating droplets
is slower than the growth rate of droplets formed after metastable equilibrium has been
reached. Hence, we have to identify the nucleating droplet as carefully as possible.
Because nucleation studies are computationally intensive, we used a novel algorithm for
simulating Ising models with a uniform long-range interaction [20]. The algorithm uses a
hierarchical data structure to store the magnetization at many length scales, and can find
the energy cost of flipping a spin in time O((ln R)^d), rather than the usual time O(R^d),
where d is the spatial dimension.
Figure 6 shows the fraction of copies for which the largest cluster grows as a function of
the intervention time. For this particular run the nucleating droplet is found to occur at
t ≈ 37.4.
We simulated 100 systems in which nucleation occurred before global quantities such
as m(t) became independent of time, t < τg ≈ 40, and 100 systems for which nucleation
occurred after the nucleation rate became time independent (t > τnequil ≈ 90). We found
that the mean size of the nucleating droplet for t < τg is ≈ 1200 with a standard deviation
of σ ≈ 150 in comparison to the mean size of the nucleating droplet for t > τnequil of ≈ 1270
and σ ≈ 200. That is, the nucleating droplets formed before metastable equilibrium has
been reached are somewhat smaller.
We introduce the cluster profile ρcl to characterize the shape of the largest cluster at the
time of nucleation. For a particular bond realization a spin that is in the stable direction
might or might not be a part of the largest cluster due to the probabilistic nature of the bonds.
For this reason bond averaging is implemented by placing 100 independent sets of bonds
between spins with probability pb = 1 − e^{−2βJ(1−ρ)} in the stable direction. The clusters are
identified for each set of bonds, and the probability pi that spin i is in the largest cluster is
determined. The values of pi for the spins in a particular bin are then averaged using a bin
width equal to R/4. This mean value of pi is associated with ρcl. Note that the spins that
point in the unstable direction are omitted in this procedure. The mean cluster profile is
found by translating the peak position of each droplet to the origin.
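The bond-averaging step can be summarized in a few lines. The helper largest_cluster_mask below is a stand-in for the Coniglio-Klein cluster construction [23] (it draws the bonds and labels the clusters); it is not defined here, and the sketch is only meant to make the averaging and binning explicit.

import numpy as np

def membership_probability(spins, beta, J, rho, largest_cluster_mask,
                           n_sets=100, bin_width=None):
    # bond probability between spins pointing in the stable direction
    p_b = 1.0 - np.exp(-2.0 * beta * J * (1.0 - rho))
    stable = (spins == -1)                 # the stable direction is taken to be "down" here
    counts = np.zeros(spins.size)
    for k in range(n_sets):
        rng = np.random.default_rng(k)
        # boolean mask of the largest cluster for one independent bond realization
        counts += largest_cluster_mask(stable, p_b, rng)
    p_i = counts / n_sets                  # probability that spin i is in the largest cluster
    if bin_width is not None:              # bin-average p_i (bin width R/4 in the text)
        nbin = spins.size // bin_width
        return p_i[:nbin * bin_width].reshape(nbin, bin_width).mean(axis=1)
    return p_i

Spins pointing in the unstable direction never enter the mask, so they are omitted by construction, as described above.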
Figure 7(a) shows the mean cluster profile formed after metastable equilibrium has been
established (t > τnequil ≈ 90). The position x is measured in units of R. For comparison we
fit ρcl to the form
ρ(x) = A sech^2(x/w) + ρ0, (5)
with Acl = 0.36, wcl = 2.95 and ρ0 = 0 by construction. In Fig. 7(b) we show a comparison
of ρcl to the Gaussian form Ag exp(−(x/wg)^2) with Ag = 0.35 and wg = 3.31. Note that
Eq. (5) gives a better fit than a Gaussian, which underestimates the peak at x = 0 and the
FIG. 6: The fraction of copies for which the largest cluster grows for a particular run for a 1D
Ising model with R = 2^12, h = 1.265, and L = 2^18. The time for 50% growth is ≈ 37.4. The
largest cluster at this time corresponds to the nucleating droplet and has ≈ 1080 spins. For this
intervention 100 copies were considered; twenty copies were considered for all other runs.
(a) Comparison to Eq. (5). (b) Comparison to Gaussian.
FIG. 7: Comparison of the mean cluster profile (•) in the 1D Ising model after metastable equi-
librium has been established with (a) the form in Eq. (5) and (b) a Gaussian. Note that Eq. (5)
gives a better fit than the Gaussian, which underestimates the peak at x = 0 and the wings. The
x axis is measured in units of R.
wings. Although Unger and Klein [12] derived Eq. (5) for the magnetization saddle point
profile, we see that this form also provides a good description of the cluster profile.
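As an illustration, the two fits can be compared with a standard least-squares routine. The arrays x and rho_cl below stand for the bin-averaged profile of Fig. 7 (not reproduced here), so this is a sketch rather than the analysis code used for the figure.

import numpy as np
from scipy.optimize import curve_fit

def sech2(x, A, w):                       # Eq. (5) with rho_0 = 0 by construction
    return A / np.cosh(x / w) ** 2

def gauss(x, A, w):
    return A * np.exp(-(x / w) ** 2)

def compare_fits(x, rho_cl):
    # x in units of R; rho_cl is the measured mean cluster profile
    p_s, _ = curve_fit(sech2, x, rho_cl, p0=(0.4, 3.0))
    p_g, _ = curve_fit(gauss, x, rho_cl, p0=(0.4, 3.0))
    chi2_s = np.sum((rho_cl - sech2(x, *p_s)) ** 2)
    chi2_g = np.sum((rho_cl - gauss(x, *p_g)) ** 2)
    return p_s, chi2_s, p_g, chi2_g       # e.g. A ~ 0.36, w ~ 2.95 for the sech^2 form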
A comparison of the cluster profiles formed before and after metastable equilibrium is
shown in Fig. 8. Although both profiles are consistent with the form in Eq. (5), the transient
nucleating droplets are more compact, in agreement with the predictions in Ref. 19.
We also directly coarse grained the spins at the time of nucleation to obtain the density
profile of the coarse-grained magnetization ρm(x) (see Fig. 9(a)). The agreement between
the simulation and analytical results [24] is impressive, especially considering that the
analytical form is valid only in the limit R → ∞. The same qualitative differences between
the nucleating droplets that occur before and after metastable equilibrium are found (see
Fig. 9(b)), although the magnetization density profile is much noisier than that based on
the cluster analysis.
V. LANGEVIN SIMULATIONS
It is interesting to compare the results for the Ising model and the Langevin dynamics
of the φ4 model. One advantage of studying the Langevin dynamics of the φ4 theory is
that it enables the efficient simulation of systems with a very large interaction range R. If
FIG. 8: The cluster profiles of the nucleating droplets formed before (dashed line) and after (solid
line) metastable equilibrium has been established. Both profiles are consistent with the form given
in Eq. (5), but the transient nucleating droplets are slightly more compact. The fitting parameters
are A = 0.38 and w = 2.67 for the transient droplets and A = 0.35 and w = 2.95 for the droplets
formed after the nucleation rate has become independent of time.
all lengths are scaled by a large value of R, the effective magnitude of the noise decreases,
making faster simulations possible.
The coarse grained Hamiltonian analogous to the 1D ferromagnetic Ising model with
long-range interactions in an external field h can be expressed as
H[φ] = ∫ dx [ (R^2/2)(∂φ/∂x)^2 + εφ^2 + uφ^4 − hφ ] , (6)
where φ(x) is the coarse-grained magnetization. A dynamics consistent with this Hamilto-
nian is given by,
∂φ/∂t = −M δH/δφ + η = −M( −R^2 ∂^2φ/∂x^2 + 2εφ + 4uφ^3 − h ) + η , (7)
where M is the mobility and η(x, t) represents zero-mean Gaussian noise with
〈η(x, t)η(x′, t′)〉 = 2kTMδ(x− x′)δ(t− t′).
For nucleation near the spinodal the potential V = εφ^2 + uφ^4 − hφ has a metastable well
only for ε < 0. The magnitudes of φ and h at the spinodal are given by hs = (8|ε|^3/27u)^{1/2}
and φs = (|ε|/6u)^{1/2}, and are found by setting V′ = V′′ = 0. The distance from the spinodal
is characterized by the parameter ∆h = |hs − h|. For ∆h/hs ≪ 1, the bottom of the
metastable well φmin is near −φs; specifically, φmin = −φs(1 + (2∆h/3hs)^{1/2}).
(a) Comparison with Eq. (5). (b) Comparison of profiles.
FIG. 9: (a) The magnetization density profile of the nucleating droplets formed after metastable
equilibrium has been established. The solid line is the analytical solution [24] which has the form
in Eq. (5) with the calculated values A = 0.085, w = 2.65, and ρ0 = −0.774. (b) Comparison of the
density profile of nucleating droplets formed before (dashed line) and after (solid line) metastable
equilibrium has been established by coarse graining the magnetization. The same qualitative
differences between the nucleating droplets that occur before and after metastable equilibrium are
observed as in Fig. 8, although the magnetization density profile is much noisier than the cluster
density profile.
The stationary solutions of the dynamics are found by setting δH/δφ = 0. Besides
the two uniform solutions corresponding to the minima in V , there is a single nonuniform
solution which approximates the nucleating droplet profile when the nucleation barrier is
large. When ∆h/hs ≪ 1, the profile of the nucleating droplet is described by Eq. (5) with
A = φs(6∆h/hs)^{1/2}, w = (8φs^2/3hs∆h)^{1/4}, and ρ0 = φmin [12].
The dynamics (7) is numerically integrated using the scheme [25]
φ(t+∆t) = φ(t) − ∆tM( −R^2 d^2φ/dx^2 + 2εφ + 4uφ^3 − h ) + (2kTM∆t/∆x)^{1/2} η , (8)
where d^2φ/dx^2 is replaced by its central difference approximation and η is a Gaussian random
number with zero mean and unit variance. Numerical stability re-
quires that ∆t < (∆x/R)^2, but it is often desirable to choose ∆t even smaller for accuracy.
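A direct transcription of this scheme is given below. The noise amplitude follows from the correlator of η quoted above; the initial condition is started directly in the metastable well for brevity (in the runs described in the text the system is first equilibrated in the stable well and the field is then reversed), and the zero-crossing test anticipates the crude nucleation criterion used later for the distribution of nucleation times.

import numpy as np

def langevin_step(phi, dt, dx, R, eps, u, h, M=1.0, T=1.0, rng=np.random):
    # central-difference Laplacian with periodic boundaries (an assumption;
    # the boundary conditions are not restated in this section)
    lap = (np.roll(phi, -1) - 2.0 * phi + np.roll(phi, 1)) / dx ** 2
    drift = -M * (-R ** 2 * lap + 2.0 * eps * phi + 4.0 * u * phi ** 3 - h)
    noise = np.sqrt(2.0 * T * M * dt / dx) * rng.standard_normal(phi.size)
    return phi + dt * drift + noise

# parameters of the runs described in the text
R, eps, u, dh = 2000.0, -1.0, 1.0, 0.005
phi_s = np.sqrt(abs(eps) / (6.0 * u))
h_s = np.sqrt(8.0 * abs(eps) ** 3 / (27.0 * u))
h = h_s - dh
dx, dt = R, 0.1                          # Delta x / R = 1; dt < (dx/R)^2 for stability
phi = -phi_s * (1.0 + np.sqrt(2.0 * dh / (3.0 * h_s))) * np.ones(300)   # L/R = 300 bins
for step in range(10000):
    phi = langevin_step(phi, dt, dx, R, eps, u, h)
    if phi.max() >= 0.0:                 # a bin of phi has reached 0: nucleation has occurred
        break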
As for the Ising simulations, we first prepare an equilibrated system with φ in the stable
well corresponding to the direction of the external field h. At t = 0 the external field is
reversed so that the system relaxes to metastable equilibrium. We choose M = 1, T = 1,
ε = −1, u = 1, and ∆h = 0.005. The scaled length of the system is chosen to be L/R = 300.
FIG. 10: Log-linear plot of the distribution P (t) of nucleation times for the one-dimensional
Langevin equation with R = 2000 (×) and R = 2500 (•) averaged over 50,000 runs. The distribu-
tion is not exponential for early times, indicating that the system is not in metastable equilibrium.
Note that the nucleation rate is a rapidly decreasing function of R.
We choose R to be large so that, on length scales of R, the metastable φ fluctuates near its
equilibrium value φmin ≈ −0.44.
After nucleation occurs φ will rapidly grow toward the stable well. To determine the
distribution of nucleation times, we assume that when the value of the field φ in any bin
reaches 0, nucleation has occurred. This relatively crude criterion is sufficient for determin-
ing the distribution of nucleation times if we assume that the time difference between the
nucleation event and its later detection takes a consistent value between runs.
Figure 10 compares the distribution of 50,000 nucleation times for systems with R = 2000
and R = 2500 with ∆x/R = 1 and ∆t = 0.1. The distribution shows the same qualitative
behavior as found in the Metropolis simulations of the Ising model (see Fig. 2). For example,
the distribution of nucleation times is not exponential for early times after the quench. As
expected, the nucleation rate decreases as R increases. Smaller values of ∆x and ∆t give
similar results for the distribution.
To find the droplet profiles, we need to identify the time of nucleation more precisely. The
intervention criterion, which was applied in Sec. IV, is one possible method. In the Langevin
context we can employ a simpler criterion: nucleation is considered to have occurred if φ
decays to the saddle-point profile (given by Eq. (5) for ∆h/hs ≪ 1) when φ is evolved using
FIG. 11: Comparison of the density profile φ(x) of the nucleating droplets found by numerically
solving the Langevin equation after metastable equilibrium has been reached for R = 2000 (×)
and R = 4000 (•) to the theoretical prediction (solid line) from Eq. (5) using the calculated values
A = 0.096, w = 3.58, and ρ0 = −0.44. The numerical solutions are averaged over 1000 profiles.
The results suggest that as R increases, the observed nucleation profiles converge to the prediction
of mean-field theory.
noiseless dynamics [19, 26]. For fixed ∆h these two criteria agree in the R → ∞ limit, but
can give different results for finite R [27].
In Fig. 11 we plot the average of 1,000 density profiles of the nucleating droplets formed
after metastable equilibrium has been established for R = 2000 and R = 4000. Note that
there are noticeable deviations of the averaged profiles from the theoretical prediction in
Eq. (5), but the deviation is less for R = 4000. The deviation is due to the fact that the
bottom of the free energy well in the metastable state is skewed; a similar deviation was also
observed in the Ising model. We also note that the individual nucleating droplets look much
different from their average. It is expected that as R increases, the profiles of the individual
nucleating droplets will converge to the form given by Eq. (5).
In Fig. 12 we compare the average of 1,000 density profiles of nucleating droplets before
and after metastable equilibrium has been established. As for the Ising model, there are
subtle differences consistent with the predictions of Ref. 19. The transient droplets have
slightly lower background magnetization and compensate by being denser and more compact.
FIG. 12: The density profile of the nucleating droplets found from numerical solutions of the
Langevin equation formed before (dotted line) and after (solid line) metastable equilibrium has
been established. Nucleation events occurring before t = 15 are transient, and events occurring
for t ≥ 30 are metastable. Both plots are the result of 1000 averaged profiles with an interaction
range R = 2000.
VI. SUMMARY
Although the time-independence of the mean values of macroscopic quantities such as
the magnetization and the energy is often used as an indicator of metastable equilibrium, we
find that the observed relaxation time of the clusters is much longer for sizes comparable to
the nucleating droplet. This longer relaxation time explains the measured non-constant nu-
cleation rate even when global quantities such as the magnetization appear to be stationary.
By identifying the nucleating droplets in the one-dimensional long-range Ising model and the
Langevin equation, we find structural differences between the nucleating droplets which oc-
cur before and after metastable equilibrium has been reached. Our results suggest that using
global quantities as indicators for metastable equilibrium may not be appropriate in general,
and distinguishing between equilibrium and transient nucleation is important in studying
the structure of nucleating droplets. Further studies of transient nucleation in continuous
models of more realistic systems would be of interest and practical importance.
Finally, we note a subtle implication of our results. For a system to be truly in equilibrium
would require that the mean number of clusters of all sizes be independent of time. The
larger the cluster, the longer the time that would be required for the mean number to
become time independent. Hence, the bigger the system, the longer the time that would
(a) R = 1. (b) R = 128.
FIG. 13: The relaxation of the magnetization m(t) of the 2D Ising model at T = Tc starting from
T0 = 0. (a) R = 1, Tc = 2.269, L = 5000. (b) R = 128, Tc = 4, L = 1024. The straight line is the
fit to a power law with slope ≈ 0.057 for R = 1 and slope ≈ 0.51 for R = 128.
be required for the system to reach equilibrium. Given that the system is never truly
in metastable equilibrium so that the ideas of Gibbs, Langer, and others are never exactly
applicable, when is the system close enough to equilibrium so that any possible simulation or
experiment cannot detect the difference? We have found that the magnetization and energy
are not sufficient indicators for nucleation and that the answer depends on the process being
studied. For nucleation the equilibration of the number of clusters whose size is comparable
to the size of the nucleating droplet is the relevant indicator.
APPENDIX A: RELAXATION OF CLUSTERS AT THE CRITICAL TEMPER-
ATURE
Accurate determinations of the dynamical critical exponent z have been found from the
relaxation of the magnetization and energy at the critical temperature. In the following
we take a closer look at the relaxation of the Ising model by studying the approach to
equilibrium of the distribution of clusters of various sizes.
We consider the Ising model on a square lattice with L = 5000. The system is initially
equilibrated at either zero temperature T0 = 0 (all spins up) or at T0 = ∞, and then
instantaneously quenched to the critical temperature Tc. The Metropolis algorithm is used.
As a check on our results we first determine m(t) starting from T0 = 0. Scaling arguments
FIG. 14: The evolution of the number of clusters of size s = 100 at T = Tc starting from T0 = 0.
The fit to Eq. (A2) gives ns,∞ = 51.3, C1 = −42, C2 = −15, τ1 = 156, and τ2 = 1070.
suggest that m(t) approaches its equilibrium value as [28]
f(t) = B t^{−β/(νz)} + f∞, (A1)
where the static critical exponents are β = 1/8 and ν = 1 for finite R and β = 1/2 and
ν = 1/2 in the mean-field limit. The fit of our results in Fig. 13 to Eq. (A1) yields the
estimate z ≈ 2.19 for R = 1 and z ≈ 1.96 for R = 128, which are consistent with previous
results [29, 30]. Note that no time scale is associated with the evolution of m(t).
We next determined ns(t), the number of clusters of size s at time t after the temperature
quench. Because all the spins are up at t = 0, the number of (down) clusters of size s begins
at zero and increases to its (apparent) equilibrium value ns,∞. The value of the latter
depends on the size of the system.
Figure 14 shows the evolution of clusters of size s = 100 for one run. Because we know
of no argument for the time dependence of ns(t)− ns,∞ except in the mean-field limit [30],
we have to rely on empirical fits. We find that the time-dependence of ns(t) can be fitted
to the sum of two exponentials,
ns(t) − ns,∞ = C1 e^{−t/τ1} + C2 e^{−t/τ2} , (A2)
where C1, C2, τ1, and τ2 are parameters to be fitted with τ2 > τ1.
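The empirical fit can be done with a standard nonlinear least-squares call. The arrays t and ns below stand for a measured time series such as that of Fig. 14; the starting guesses are rough and are not the values used in the paper.

import numpy as np
from scipy.optimize import curve_fit

def two_exp(t, ns_inf, C1, tau1, C2, tau2):
    # Eq. (A2): ns(t) - ns_inf = C1 exp(-t/tau1) + C2 exp(-t/tau2)
    return ns_inf + C1 * np.exp(-t / tau1) + C2 * np.exp(-t / tau2)

def fit_relaxation(t, ns):
    p0 = (ns[-1], ns[0] - ns[-1], 100.0, 0.5 * (ns[0] - ns[-1]), 1000.0)
    popt, pcov = curve_fit(two_exp, t, ns, p0=p0, maxfev=20000)
    ns_inf, C1, tau1, C2, tau2 = popt
    if tau1 > tau2:                        # enforce tau2 > tau1 by relabeling
        C1, tau1, C2, tau2 = C2, tau2, C1, tau1
    return ns_inf, C1, tau1, C2, tau2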
Figure 15(a) shows the relaxation time τ2 as a function of s for R = 1 at T = Tc starting
from T0 = 0. Note that the bigger the cluster, the longer it takes to reach its equilibrium
distribution. That is, small clusters form first, and larger clusters are formed by the merging
(a) T0 = 0. (b) T0 = ∞.
FIG. 15: The relaxation time τ2 versus the cluster size s at T = Tc for R = 1 starting from (a)
T0 = 0 and (b) T0 = ∞. The log-log plot in (a) yields τ2 ∼ s^{0.4}.
(a) s = 30. (b) s = 3000.
FIG. 16: The time dependence of the number of clusters of size s = 30 and s = 3000 at T = Tc
for R = 1 starting from T0 = ∞. Note that ns=30 monotonically decreases to its equilibrium value
and ns=3000 overshoots its equilibrium value. (a) C1 = 2367, C2 = 332, ns=30,∞ = 738, τ1 = 16,
and τ2 = 403. (b) C1 = −0.42, C2 = 0.22, ns=3000,∞ = 0.11, τ1 = 130, and τ2 = 1290.
of smaller ones. The s-dependence of τ2 can be approximately fitted to a power law with
the exponent 0.4.
To prepare a configuration at T0 = ∞, the system is randomized with approximately half
of the spins up and half of the spins down. The temperature is instantaneously changed to
T = Tc. As before, we focus on the relaxation of down spin clusters. In contrast to the T0 = 0
case, the evolution of the clusters falls into three classes (see Fig. 16). For small clusters
(1 ≤ s ≤ 40), ns monotonically decreases to its equilibrium value. This behavior occurs
because the initial random configuration has an abundance of small clusters so that lowering
the temperature causes the small clusters to merge to form bigger ones. For intermediate
size clusters (40 < s < 4000), ns first increases and then decreases to its equilibrium value.
The initial growth is due to the rapid coalescence of smaller clusters to form intermediate
ones. After there are enough intermediate clusters, they slowly coalesce to form bigger
clusters, which causes the decrease. For clusters with s > 4000, ns slowly increases to its
equilibrium value. The range of sizes for these different classes of behavior depends on the
system size. In all three cases ns(t) can be fitted to the sum of two exponentials. One of
the two coefficients is negative for 40 < s < 4000 for which ns(t) overshoots its equilibrium
value. The relaxation time τ2 is plotted in Fig. 15(b) as a function of s.
ACKNOWLEDGMENTS
We thank Aaron O. Schweiger for very useful discussions. Bill Klein acknowledges the
support of Department of Energy grant # DE-FG02-95ER14498 and Kipton Barros was
supported in part by the National Science Foundation grant # DGE-0221680. Hui Wang
was supported in part by NSF grant # DUE-0442581. The simulations at Clark University
were done with the partial support of NSF grant # DBI-0320875.
[1] Dimo Kashchiev, Nucleation: Basic Theory with Applications (Butterworths-Heinemann, Ox-
ford, 2000).
[2] N. E. Chayen, E. Saridakis, R. El-Bahar, and Y. J. Nemirovsky, J. Mol. Biol. 312, 591 (2001).
[3] N. Wang, Y. H. Tang, Y. F. Zhang, and C. S. Lee, Phys. Rev. B 58, R16 024 (1998).
[4] S. Auer and D. Frenkel, Nature 413, 711 (2001).
[5] N. Delgehyr, J. Sillibourne, and M. Bornens, J. Cell Science 118, 1565 (2005).
[6] E. Pechkova and C. Nicolini, J. Cell Biochem. 85 (2), 243 (2002).
[7] A. R. Fersht, Proc. Natl. Acad. Sci. U. S. A. 92, 10869 (1995).
[8] D. Stauffer, A. Coniglio, and D. W. Heermann, Phys. Rev. Lett. 49, 1299 (1982).
[9] W. Klein, H. Gould, N. Gulbahce, J. B. Rundle, and K. Tiampo, Phys. Rev. E 75, 031114
(2007).
[10] J. D. Gunton, M. san Miguel, and P. Sahni, in Phase Transitions and Critical Phenomena,
edited by C. Domb and J. L. Lebowitz (Academic Press, New York, 1983), Vol. 8.
[11] J. S. Langer, Ann. Phys. (NY) 41, 108 (1967).
[12] C. Unger and W. Klein, Phys. Rev. B 29, 2698 (1984).
[13] More precisely, the Gibbs assumption is correct in systems for which the interaction range
R ≫ 1 and which are not too close to the spinodal. See Ref. 9 for more details.
[14] We assume that when the nucleation rate and mean number of clusters of a given size become
apparently independent of time that they have reached their equilibrium values.
[15] L. Monette, W. Klein, and M. Zuckermann, J. Stat. Phys. 66, 117 (1992).
[16] D. W. Heermann and C. Cordeiro, Int. J. Mod. Phys. 13, 1419 (2003).
[17] K. Brendel, G. T. Barkema, and H. van Beijeren, Phys. Rev. B 71, 031601 (2005).
[18] H. Huitema, J. van der Eerden, J. Janssen, and H. Human, Phys. Rev. B 62, 14690 (2000).
[19] A. O. Schweiger, K. Barros, and W. Klein, Phys. Rev. E 75, 031102 (2007).
[20] K. Barros, manuscript in preparation.
[21] E. Luijten, H. W. J. Blöte, and K. Binder, Phys. Rev. E 54, 4626 (1996).
[22] W. Klein in Computer Simulation Studies in Condensed Matter Physics III, edited by D. P.
Landau, K. K. Mon, and H. B. Schuttler (Springer-Verlag, Berlin, Heidelberg, 1991).
[23] A. Coniglio and W. Klein, J. Phys. A 13, 2775 (1980).
[24] The density profile of the nucleating droplet of the Ising model has been calculated analytically
in the limit R → ∞ (K. Barros, unpublished). The result is consistent with the form in
Eq. (5) with A = 3(βJ)^{−1}(β∆h/φs)^{1/2}, w = 3^{−1/2}φs^{−1/4}(β∆h)^{−1/4}, and ρ0 = −φs − A/3, where
φs = [1 − (βJ)^{−1}]^{1/2} is the magnitude of φ at the spinodal. From this analytical solution, the
calculated parameters are found to be A = 0.085, w = 2.65, ρ0 = −0.774 which are very close
to the values fitted to the simulation data, A = 0.084, w = 2.45, ρ0 = −0.764.
[25] J. G. Gaines, in Stochastic Partial Differential Equations, edited by A. M. Etheridge (Cam-
bridge University Press, Cambridge, 1995), pp. 55–71.
[26] A. Roy, J. M. Rickman, J. D. Gunton, and K. R. Elder, Phys. Rev. E 57, 2610 (1998).
[27] Consider a fluctuation that decays to the metastable phase under noiseless dynamics. To
perform the intervention method we make many copies of the configuration and examine the
percentage that grow after a given waiting time. Although the expected drift is a decay to the
metastable phase, every copy has time to sample a path in configuration space. It is possible
that during this waiting time the majority of copies discover and grow toward the stable phase,
contradicting the result from the zero-noise criterion. However, for R ≫ 1 the sampling path
will be dominated by the drift term and the two nucleation criteria agree.
[28] M. Suzuki, Phys. Lett. A 58, 435 (1976) and M. Suzuki, Prog. Theor. Phys. 58, 1142 (1977).
See also A. Linke, D. W. Heermann, P. Altevogt, and M. Siegert, Physica A 222, 205 (1995).
[29] M. Nightingale and H. Blöte, Phys. Rev. B 62, 1089 (2000).
[30] L. Colonna-Romano, A. I. Mel’cuk, H. Gould, and W. Klein, Physica A 209, 396 (1994).
|
0704.0940 | Glueball Masses in (2+1)-Dimensional Anisotropic Weakly-Coupled
Yang-Mills Theory | NSF-KITP-07-57
BCCUNY-HEP/07-03
GLUEBALL MASSES IN (2 + 1)-DIMENSIONAL
ANISOTROPIC WEAKLY-COUPLED YANG-MILLS
THEORY
Peter Orland a,b,c,1
a. Kavli Institute for Theoretical Physics, The University of California, Santa
Barbara, CA 93106, U.S.A.
b. Physics Program, The Graduate School and University Center, The City
University of New York, 365 Fifth Avenue, New York, NY 10016, U.S.A.
c. Department of Natural Sciences, Baruch College, The City University of New
York, 17 Lexington Avenue, New York, NY 10010, U.S.A.
Abstract
The confinement problem has been solved in the anisotropic (2+1)-dimensional SU(N)
Yang-Mills theory at weak coupling. In this paper, we find the low-lying spectrum for
N = 2. The lightest excitations are pairs of fundamental particles of the (1 + 1)-
dimensional SU(2) × SU(2) principal chiral nonlinear sigma model bound in a linear
potential, with a specified matching condition where the particles overlap. This match-
ing condition can be determined from the exactly-known S-matrix for the sigma model.
[email protected]
http://arxiv.org/abs/0704.0940v2
1 Introduction
In recent papers, some new techniques have been developed for calculating quantities in
(2+1)-dimensional SU(N) gauge theories [1], [2], [3]. These techniques exploit the fact
that in an anisotropic limit of small coupling, the gauge theory becomes a collection of
completely-integrable quantum field theories, namely SU(N)× SU(N) principal chiral
nonlinear sigma models. These integrable systems are decoupled, save for a constraint
which is necessary for complete gauge invariance. In the case of N = 2, it is possible to
perturb away from integrability, using exactly-known off-shell matrix elements of the
integrable theory.
Though the gauge theory we consider is not spatially rotation invariant, it has fea-
tures one expects of real (3+1)-dimensional QCD; it is asymptotically free and confines
quarks at weak coupling. Thus the limit of no regularization is accessible.
One can formally remove the regulator in strong-coupling expansions of (2 + 1)-
dimensional gauge theories; the vacuum state in this expansion yields a string tension
and a mass gap which have formal continuum limits. This can be done in a Hamiltonian
lattice formalism [4], or with an ingenious choice of degrees of freedom and point-
splitting regularization [5]. This leaves open the question of whether these expressions
can be trusted at weak coupling (more discussion of this issue can be found in the
introduction of reference [2]). In particular, one would like to rule out a deconfinement
transition, or different dependence of physical quantities on the coupling (as in compact
QED [6]). There is a proposal for the vacuum state [7], in the formulation of reference
[5] which seems to give correct values for some glueball masses [8], but this proposal
evidently requires more mathematical justification.
In this paper, we will work out the masses of the lightest glueballs for the case
of gauge group SU(2). Our method would also work in principle for SU(N) gauge
theories, and our reason for choosing N = 2 is that the analysis is simplest for that
case.
The basic connection between the gauge theory and integrable systems is most easily
seen in axial gauge [1]. The string tensions in the x1-direction and x2-direction (which
we called the horizontal and vertical string tensions, respectively) for very small g′0,
were found by simple physical arguments. The result for the horizontal string tension
was confirmed for gauge group SU(2), and additional corrections in g′0 were found [2],
through the use of exact form factors for the currents of the sigma model. String
tensions for higher representations can also be worked out, and adjoint sources are not
confined [3].
Careful derivations of the connection between the gauge theory and integrable sys-
tems use the Kogut-Susskind lattice formalism [1], [2]. A shorter derivation was given
in reference [9], which we summarize again here. The formalism is essentially that of
“deconstruction” [10].
The Yang-Mills action is S = ∫ d^3x L, where the Lagrangian is
L = (1/2e′^2) TrF_{01}^2 + (1/2e^2) TrF_{02}^2 − (1/2e^2) TrF_{12}^2 ,
and where A_0, A_1 and A_2 are SU(N)-Lie-algebra-valued components of the
gauge field, and the field strength is Fµν = ∂µAν − ∂νAµ − i[Aµ, Aν ]. This action
is invariant under the gauge transformation Aµ(x) → ig(x)−1[∂µ − iAµ(x)]g(x), where
g(x) is an SU(N)-valued scalar field. We take e′ 6= e, thereby losing rotation invariance.
We discretize the 2-direction, so that the x2 takes on the values x2 = a, 2a, 3a . . . ,
where a is a lattice spacing. All fields are considered functions of x = (x0, x1, x2). We
define the unit vector 2̂ = (0, 0, 1). We replace A2(x) by a field U(x) lying in SU(N),
via U(x) ≈ exp−iaA2(x). There is a natural discrete covariant-derivative operator:
DµU(x) = ∂µU(x) − iAµ(x)U(x) + iU(x)Aµ(x + 2̂a), µ = 0, 1, for any N × N complex
matrix field U(x). The action is S = ∫ dx^0 dx^1 Σ_{x^2} a L, where
L = (a/2(g′_0)^2) TrF_{01}^2 + (1/2g_0^2 a) Tr[D_0U(x)]^†D_0U(x) − (1/2g_0^2 a) Tr[D_1U(x)]^†D_1U(x) , (1.1)
and where g_0^2 = e^2 a and (g′_0)^2 = e′^2 a. The Lagrangian (1.1) is invariant under the gauge
transformation: Aµ(x) → ig(x)−1[∂µ − iAµ(x)]g(x) and U(x) → g(x)−1U(x)g(x + 2̂a)
where again, g(x) ∈ SU(N) and µ is restricted to 0 or 1. The bare coupling constants g0
and g′0 are dimensionless. We recover from (1.1) the anisotropic continuum action in the
limit a→ 0. The sigma model field is U(x0, x1, x2), and each discrete x2 corresponds to
a different sigma model. The system (1.1) is a collection of parallel (1+1)-dimensional
SU(N) × SU(N) sigma models, each of which couples to the auxiliary fields A0, A1.
The sigma-model self-interaction is the dimensionless number g0.
We feel it worth commenting on the nature of the anisotropic regime and how it
is different from the standard (2 + 1)-dimensional Yang-Mills theory. The point where
the regulator can be removed in the theory is g′0 = g0 = 0. This point can be reached
in our treatment, but only if
(g′_0)^2 ≪ (1/g_0^2N) e^{−4π/(g_0^2N)} . (1.2)
The left-hand side and right-hand side are proportional to the two energy scales in the
theory (the latter comes from the two-loop beta function of the sigma model). Thus
our method cannot accommodate fixing the ratio g′0/g0, which is natural in standard
perturbation theory [11]. This is why the mass gap is not of order e, e′ and the string
tension is not of order e2, (e′)2.
We now discuss the Hamiltonian in the axial gauge A1 = 0. The left-handed and
right-handed currents are, jLµ (x)b = iTr tb ∂µU(x)U(x)
† and jRµ (x)b = iTr tb U(x)
†∂µU(x),
respectively, where µ = 0, 1. The Hamiltonian obtained from (1.1) is H0 +H1, where
{[jL0 (x)b]2 + [jL1 (x)b]2} , (1.3)
(g′0)
∂1Φ(x
1, x2)∂1Φ(x
1, x2)
)2 L2−a
jL0 (x
1, x2)Φ(x1, x2)− jR0 (x1, x2)Φ(x1, x2 + a)
+ (g′0)
2qbΦ(u
1, u2)b − (g′0)2q′bΦ(v1, v2)b , (1.4)
where −Φb = A0 b is the temporal gauge field, and where in the last term we have
inserted two color charges - a quark with charge q at site u and an anti-quark with
charge q′ at site v. Some gauge invariance remains after the axial-gauge fixing, namely
that for each x2
jL0 (x
1, x2)b − jR0 (x1, x2 − a)b
− g20Q(x2)b
Ψ = 0 , (1.5)
where Q(x2)b is the total color charge from quarks at x
2 and Ψ is any physical state. To
derive the constraint (1.5) more precisely, we started with open boundary conditions
in the 1-direction and periodic boundary conditions in the 2-direction, meaning that
the two-dimensional space is a cylinder [1], [2].
From (1.4) we see that the left-handed charge of the sigma model at x2 is coupled
to the electrostatic potential Φ, at x2. The right-handed charge of the sigma model
is coupled to the electrostatic potential at x2 + a. The excitations of H0, which we
call Faddeev-Zamolodchikov or FZ particles, behave like solitons, though they do not
correspond to classical configurations. Some of these FZ particles are elementary and
others are bound states of the elementary FZ particles. An elementary FZ particle has
an adjoint charge and mass m1. An elementary one-FZ-particle state is a superposition
of color-dipole states, with a quark (anti-quark) charge at x1, x2 and an anti-quark
(quark) charge at x1, x2 + a. The interaction H1 produces a linear potential between
color charges with the same value of x2. Residual gauge invariance (1.5) requires that
at each value of x2, the total color charge is zero. If there are no quarks with coordinate
x2, the total right-handed charge of FZ particles in the sigma model at x2 − a is equal
to the total left-handed charge of FZ particles in the sigma model at x2.
The particles of the principal chiral sigma model carry a quantum number r, with
the values r = 1, . . . , N − 1 [21]. Each particle of label r has an antiparticle of the
same mass, with label N − r. The masses are given by
m_r = m_1 sin(rπ/N)/sin(π/N) , m_1 = KΛ(g_0^2N)^{−1/2} e^{−4π/(g_0^2N)} + non-universal corrections , (1.6)
where K is a non-universal constant and Λ is the ultraviolet cut-off of the sigma model.
Lorentz invariance in each x0, x1 plane is manifest. For this reason, the linear
potential is not the only effect of H1. The interaction creates and destroys pairs of
elementary FZ particles. This effect is quite small, provided that g′0 is small enough.
Specifically, this means that the square of the 1 + 1 string tension in the x1-direction
coming fromH1 is small compared to the square of the mass of fundamental FZ particle;
this is just the condition (1.2). The effect is important, however, in that it is responsible
for the correction to the horizontal string discussed in the next paragraph in equation
(1.8).
Simple arguments readily show that at leading order in g′0, the vertical and hori-
zontal string tensions are given by
σ_V = m_1/a , σ_H = ((g′_0)^2/2a^2) C_N , (1.7)
respectively, where CN is the smallest eigenvalue of the Casimir of SU(N). These naive
results for the string tension have further corrections in g′0, which were determined for
the horizontal string tension for SU(2) [2]:
σ_H = ((g′_0)^2/8a^2) [ 3 − 0.7296 (g′_0)^2 e^{4π/g_0^2} ] . (1.8)
The leading term agrees with (1.7). This calculation was done using the exact form
factor for sigma model currents obtained by Karowski and Weisz [12]. The form factor
can also be used to find corrections of order (g′_0)^2 to the vertical string tension; this
problem should be solved soon. If the reader is not familiar with form-factor techniques
in relativistic integrable field theories, a self-contained review is in the appendix of
reference [2].
Another recent application of exact form factors to the (2 + 1)-dimensional SU(2)
gauge theory is reference [13], in which form factors of the two-dimensional Ising model
[14] are used to find the profile of the electric string near the high-temperature decon-
fining transition, assuming the Svetitsky-Yaffe conjecture [15].
A rough picture of a gauge-invariant state for the gauge group SU(2) with no quarks
is given in Figure 1. For N > 2, there are more complicated ways in which strings can
join particles. For example, a junction of N strings is possible. Figure 1 is inaccurate
in an important respect; the “ring” of particles held together by horizontal strings is
extremely broad in extent in the x2-direction compared to the x1-direction. This is
because σH ≪ σV.
The lightest states have the smallest number of particles, by virtue of σH ≪ σV.
Thus the lightest glueballs are pairs of FZ particles with the same value of x2. For
small enough g′0, the very lightest state has a mass well-approximated by 2m1. The
purpose of this paper is to find the leading corrections in (g′_0)^2 to this result. This
will be done using the S-matrix of the sigma model and the WKB formula. There
are further small corrections, due to the softening of the potential near where particles
overlap, which we do not determine.
It is clear that the lightest bound states of FZ particles are (1 + 1)-dimensional in
character. If we formulated a gauge theory in which x2 was fixed in U(x0, x1, x2), we
would find the same spectrum, as a function of m1 and σH. In the Kogut-Susskind
lattice formulation, a long row of plaquettes with open boundary conditions is a regular-
ized gauge theory of this type. The only real difference between this (1+1) dimensional
model and the one we study is that σ_H will receive different corrections of order (g′_0)^4.
Figure 1. A glueball state is a collection of heavy particles, held weakly together
by strings. The horizontal coordinate is x1 and the vertical coordinate is x2.
In the next section we will discuss the wave function of an unbound pair of FZ
particles. We find that this is described by phase shift for the color-singlet sector. In
Section 3, we determine the bound-state spectrum. The problem we solve is very similar
to that of two-particle states of the two-dimensional Ising model with an external
magnetic field [16] (for a good summary of this problem, see reference [17]); the only
genuine difference is the presence of a matching condition where the particles overlap.
This matching condition comes from the phase shift of the scattering problem. We
present our conclusions in Section 4.
2 Scattering states of FZ particles
The lightest glueball state, as discussed above, is simply a pair of FZ particles located
at the points (x1, x2) and (y1, x2) and bound in a linear potential. Residual gauge
invariance (1.5), demands that the state be a color singlet. To begin with, however,
we simply write the form of a free state of two particles.
The state of the SU(2) × SU(2) ≃ O(4) nonlinear sigma model with a pair of particles of
momenta p1 and p2 and quantum numbers j1 and j2 (which take the values 1, 2, 3, 4)
is described by the wave function
ψ_{p1p2}(x^1, y^1)_{j1,j2} =
e^{ip_1x^1+ip_2y^1} A_{j1,j2} , x^1 < y^1 ,
e^{ip_2x^1+ip_1y^1} Σ_{k1,k2=1}^{4} S^{k1k2}_{j1j2}(p_1, p_2) A_{k2,k1} , x^1 > y^1 , (2.1)
where A_{j1j2} is an arbitrary set of complex numbers and S^{k1k2}_{j1j2}(p_1, p_2) is the two-particle
S-matrix. We have not yet imposed (1.5).
The wave function (2.1) is written in a form where the O(4) symmetry is manifest.
It is straightforward to write it in a form where the left SU(2)L and the right SU(2)R
symmetries are manifest, by writing
ψ_{p1p2}(x^1, y^1)_{ab̄,c̄d} = Σ_{j1,j2=1}^{4} (δ_{j1 4}δ_{ac} − iσ^{j1}_{ac}) (δ_{j2 4}δ_{bd} − iσ^{j2}_{bd})^∗ ψ_{p1p2}(x^1, y^1)_{j1,j2} (2.2)
describing a pair of color dipoles, one with quantum numbers a, b̄ and the other with
quantum numbers c̄, d, where σj , j = 1, 2, 3 denotes the Pauli matrices.
We impose the physical state condition (1.5) on (2.2) by requiring that a = b and
c = d and summing over these colors. The projected wave function is, up to an overall
constant,
ψ_{p1p2}(x^1, y^1) =
e^{ip_1x^1+ip_2y^1} , x^1 < y^1 ,
e^{ip_2x^1+ip_1y^1} S_0(p_1, p_2) , x^1 > y^1 , (2.3)
where S0(p1, p2) is the singlet projection of the O(4) S-matrix. This S-matrix was first
obtained by Zamolodchikov and Zamolodchikov [18]. A useful form is given in reference
[12]:
S_0(p_1, p_2) = S_0(θ) = − (π − iθ)/(π + iθ) exp[ i ∫_0^∞ (dξ/ξ) ((1 − e^{−ξ})/(1 + e^{ξ})) sin(ξθ/π) ] , (2.4)
where the relative rapidity θ is given by θ = θ2− θ1, p1 = m sinh θ1, p2 = m sinh θ2 and
where we denote the particle mass m1, given by (1.6), by m (because there is only one
mass for the case of N = 2). A derivation of (2.4) is in the appendix of reference [2].
The singlet S-matrix is just given by a phase shift φ(θ): S0(θ) = exp iφ(θ). The
phase shift has a simple form in the low-energy, non-relativistic limit, |p1 − p2| ≪ m.
In this limit, θ ≈ |p1− p2|/m. The integral on the right-hand side of (2.4) can be done
by Taylor expanding in |p1 − p2|/m yielding
φ(θ) = φ(p_1, p_2) = π − ((3 − 2 ln 2)/πm) |p_1 − p_2| + O(|p_1 − p_2|^2/m^2) . (2.5)
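As a numerical cross-check of this expansion, the phase shift can be evaluated directly from the integral representation of Eq. (2.4) as written above; the short Python sketch below compares it with Eq. (2.5) for small rapidity and is not part of the calculation in the text.

import numpy as np
from scipy.integrate import quad

def phase_shift(theta):
    # phase of S0(theta) = -((pi - i theta)/(pi + i theta)) exp[i I(theta)], Eq. (2.4)
    integrand = lambda xi: (1.0 - np.exp(-xi)) / (1.0 + np.exp(xi)) * np.sin(xi * theta / np.pi) / xi
    I, _ = quad(integrand, 0.0, 50.0)          # the integrand decays like e^{-xi}
    prefactor = -(np.pi - 1j * theta) / (np.pi + 1j * theta)
    return np.angle(prefactor * np.exp(1j * I)) % (2.0 * np.pi)

for theta in (0.01, 0.05, 0.1):
    exact = phase_shift(theta)
    approx = np.pi - (3.0 - 2.0 * np.log(2.0)) / np.pi * theta   # Eq. (2.5), theta = |p1 - p2|/m
    print(theta, exact, approx)

The linear term (3 − 2 ln 2)/π follows from expanding sin(ξθ/π) in the exponent and using ∫_0^∞ dξ (1 − e^{−ξ})/(1 + e^{ξ}) = 2 ln 2 − 1 together with the −2θ/π phase of the prefactor.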
3 The low-lying glueball spectrum
Let us now consider the states of a bound pair of FZ particles in the potential V (x1, y1) =
2σH|x1 − y1| (the reason for the factor of two is simply that the particles are joined by
a pair of strings). We use the non-relativistic approximation, used to find (2.5). For
our problem, the horizontal string tension times the size of a typical bound state is
small compared to the mass, by (1.2). This justifies the non-relativistic approximation
for low-lying states. The mass of a low-lying glueball is given by
M = 2m+ E ,
where E is the energy eigenvalue of the two-particle problem.
Let us introduce center-of-mass coordinates, X = (x1+y1)/2 and x = y1−x1. The
reduced mass of the system is m/2. We factor out the phase depending on X , leaving
us only with a wave function depending on x. The Schrödinger equation we consider is
− (1/m) d^2ψ/dx^2 + 2σ_H|x|ψ = Eψ ,
with a matching condition at x = 0 between the wave function ψ(x) at x > 0 and the
wave function at x < 0. There is actually a further complication, which we do not
consider here; the potential changes slightly in the region where x ≈ 0. This is due
to the fact that the color charge is slightly smeared out. This smearing out can be
calculated from the form factor [12].
Our results (2.3), (2.5) for the unbound two-particle state, tell us that for x1 ≈ y1,
where the effect of the potential can be ignored, the bound-state wave function in the
center-of-mass frame will be of the form
ψ(x) =
cos(px + ω) , x < 0 ,
cos[−px + ω − φ(p)] , x > 0 , (3.1)
for some angle ω, where p = p_1 − p_2 and φ(p) = π − ((3 − 2 ln 2)/πm)|p| + O(|p|^2/m^2). The value
of p near x = 0 is given by p = (mE)^{1/2}, where E is the energy eigenvalue of the state.
This is the matching condition between the wave function for x > 0 and for x < 0.
The wave function for x < 0 is an Airy function. So is the wave function for x > 0.
We therefore obtain the approximate WKB form
ψ(x) =
C (x + E/2σ_H)^{−1/4} cos[ (2/3)(2mσ_H)^{1/2} (x + E/2σ_H)^{3/2} − π/4 ] , x < 0 ,
C′ (E/2σ_H − x)^{−1/4} cos[ (2/3)(2mσ_H)^{1/2} (E/2σ_H − x)^{3/2} + π/4 ] , x > 0 , (3.2)
for some constants C and C ′. The expression (3.2) can be made to agree with (3.1) for
small x, provided the generalization of the Bohr-Sommerfeld quantization condition
(2m^{1/2}/3σ_H) E_n^{3/2} + ((3 − 2 ln 2)/πm^{1/2}) E_n^{1/2} − (n + 1)π = 0 , n = 0, 1, 2, . . . , (3.3)
is satisfied by E = En. The only new feature in this semi-classical formula is the
second term, produced by the phase shift. Absorbing the horizontal string tension in
the energy, by defining u_n = E_n σ_H^{−2/3}, this cubic equation becomes
(2m^{1/2}/3) u_n^{3/2} + ((3 − 2 ln 2)σ_H^{1/3}/πm^{1/2}) u_n^{1/2} − (n + 1)π = 0 .
The second term can be ignored for sufficiently small σ_H, i.e., sufficiently small g′_0.
There is a unique real solution of the cubic equation (3.3) for a given integer n ≥ 0,
because 3− 2 ln 2 = 1.613706 > 0. The low-lying glueball masses are given by
M_n = 2m + E_n = 2m + [ ε_n^{1/3} − ((3 − 2 ln 2)σ_H/2πm) ε_n^{−1/3} ]^2 , (3.4)
where
ε_n = 3πσ_H(n + 1)/(4m^{1/2}) + [ (3πσ_H(n + 1)/4m^{1/2})^2 + ((3 − 2 ln 2)σ_H/2πm)^3 ]^{1/2} . (3.5)
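As a quick numerical check, the quantization condition (3.3) can also be solved directly for E_n with a root finder, without using the closed form above. The sketch below does this in Python; the values of m and σ_H are placeholders (in the gauge theory they are fixed by g_0, g′_0 and the lattice spacing), and the integer offset in the condition simply follows Eq. (3.3) as written above.

import numpy as np
from scipy.optimize import brentq

def glueball_masses(m, sigma_H, n_max=5):
    # Solve the Bohr-Sommerfeld-type condition (3.3) for E_n and return M_n = 2m + E_n.
    a = 2.0 * np.sqrt(m) / (3.0 * sigma_H)
    c = (3.0 - 2.0 * np.log(2.0)) / (np.pi * np.sqrt(m))
    masses = []
    for n in range(n_max):
        f = lambda E: a * E ** 1.5 + c * np.sqrt(E) - (n + 1) * np.pi
        E_n = brentq(f, 1e-12, 100.0 * m)     # unique real root since 3 - 2 ln 2 > 0
        masses.append(2.0 * m + E_n)
    return masses

# e.g. glueball_masses(m=1.0, sigma_H=0.01) gives levels just above 2m; the levels
# crowd together as sigma_H (i.e. g'_0) is made smaller, as described in Section 4.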
4 Conclusions
We have identified the low-lying glueballs of the anisotropic Yang-Mills theory in (2 +
1) dimensions as bound pairs of the fundamental massive particles of the principal
chiral nonlinear sigma model. We found a matching condition for the bound-state
wave function at the origin, which when combined with elementary methods yields the
spectrum of the lightest states.
There are other aspects of the two-particle bound-state problem we have not con-
sidered here. First, the potential is not precisely linear in the region where the two
particles are close together. The corrections to the potential can be determined us-
ing form factors. This will slightly modify (3.3). A completely different issue is that
there are small corrections to the form factors themselves, coming from the presence
of bound states. This, in turn, will give a further correction to the horizontal string
tension found in [2]. Such corrections to form factors in theories close to integrability
were first discussed by Delfino, Mussardo and Simonetti [19]. The bound-state energies
proliferate between 2m and 4m, as g′0 → 0. Our method breaks down as the bound-
state mass reaches 4m, because the bound state develops an instability towards fission
into a pair of two-particle bound states. This is analogous to the situation for the Ising
model in a field [16], [17] as we stated earlier. It should be worthwhile to understand
the relativistic corrections to the bound-state formula, along the lines of the work of
Fonseca and Zamolodchikov [20].
A similar calculation is possible for SU(N). The exact S-matrix of the principal
chiral nonlinear sigma model is known for N > 2 [21]. An interesting feature is that the
phase shift should vanish as N → ∞, with g20N fixed, meaning that the wave function
would be continuous where FZ particles overlap.
It would be interesting to study the scattering of a glueball by an external particle.
If the scattering is sufficiently short range, the FZ particles could be liberated from the
glueball, after which hadronization would ensue.
The results of this paper and of references [1] and [2] may be extendable to the
standard (2 + 1)-dimensional isotropic Yang-Mills theory with g′0 = g0. The strat-
egy we have in mind is an anisotropic renormalization procedure. At the start is a
standard field theory with an isotropic cut-off. By anisotropically integrating out high-
momentum degrees of freedom, the isotropic theory will flow to an anisotropic theory
with a small momentum cut-off in the x2-direction and a large momentum cut-off in the
x1 direction. If the renormalized couplings satisfy the condition (1.2), we could apply
our techniques. A check of such a method would be approximate rotational invariance
of the string tension. This would give an analytic first-principles method of solving
the isotropic gauge theory with fixed dimensionful coupling constant e, and no cut-off.
The only other analytic weak-coupling argument for a mass gap and confinement in
(2 + 1)-dimensions, namely that of orbit-space distance estimates, discussed by Feyn-
man [22], by Karabali and Nair in the second of references [5], and by Semenoff and
the author [23] is suggestive, but has not yielded definite results yet1.
Acknowledgments
This research was supported in part by the National Science Foundation under Grant
No. PHY05-51164 and by a grant from the PSC-CUNY.
References
[1] P. Orland, Phys. Rev. D71 (2005) 054503.
[2] P. Orland, Phys. Rev. D74 (2006) 085001.
[3] P. Orland, Phys. Rev. D75 (2007) 025001.
[4] J.P. Greensite, Nucl. Phys. B166 (1980) 113; Q.-Z. Chen, X.-Q. Luo, S.-H. Guo,
Phys. Lett. B341 (1995) 349.
[5] D. Karabali and V.P. Nair, Nucl. Phys. B464 (1996) 135; Phys. Lett. B379
(1996) 141; D. Karabali, C. Kim and V.P. Nair, B524 (1998) 661; Phys. Lett.
B434 (1998) 103; Nucl. Phys. B566 (2000) 331, Phys. Rev. D64 (2001) 025011.
[6] A.M. Polyakov, Phys. Lett. B59 (1975) 82.
[7] R.G. Leigh, D. Minic and A. Yelnikov, hep-th/0604060 (2006).
[8] H.B. Meyer and M.J. Teper, Nucl.Phys. B668 (2003) 111.
[9] P. Orland, in Quark Confinement and the Hadron Spectrum VII, Ponta
Delgada, Azores, Portugal, 2006, AIP Conference Proceedings 892 (2007) 206,
available at http://proceedings.aip.org/proceedings.
1See also reference [24] for a general discussion of distance in orbit space.
http://arxiv.org/abs/hep-th/0604060
http://proceedings.aip.org/proceedings
[10] N. Arkani-Hamed, A.G. Cohen and H. Georgi, Phys. Rev. Lett. 86 (2001) 4757.
[11] D. Colladay and P. McDonald, hep-ph/0609084 (2006).
[12] M. Karowski and P. Weisz, Nucl. Phys. B139 (1978) 455.
[13] M. Caselle, P. Grinza and N. Magnoli, J. Stat. Mech. 0611 (2006) P003.
[14] B.M McCoy, C.A. Tracy and T.T. Wu, Phys. Rev. Lett. 38 (1977) 793; M. Sato,
T. Miwa and M. Jimbo, Publ. Res. Inst. Math. Sci. Kyoto (1978) 223; B. Berg,
M. Karowski and P. Weisz, Phys. Rev. D19 (1979) 2477.
[15] B. Svetitsky and L. G. Yaffe, Nucl. Phys. B210 (1982) 423.
[16] B. M. McCoy and T.T. Wu, Phys. Rev. D18 (1978) 1259.
[17] M.J. Bhaseen and A.M. Tsvelik, in From Fields to Strings; Circumnavigat-
ing Theoretical Physics, Ian Kogan memorial volumes, Vol. 1 (2004), pg. 661,
M. Shifman, A. Vainshtein and J. Wheater ed., cond-mat/0409602
[18] A.B. Zamolodchikov and Al. B. Zamolodchikov, Nucl. Phys. B133 (1978) 525.
[19] G. Delfino, G. Mussardo and P. Simonetti, Nucl. Phys. B473(1996) 469.
[20] P. Fonseca and A.B. Zamolodchikov, J. Stat. Phys, 110 (2003) 527.
[21] E. Abdalla, M.C.B. Abdalla and A. Lima-Santos, Phys. Lett.B140 (1984) 71; P.B.
Wiegmann; Phys. Lett. B142 (1984) 173; A.M. Polyakov and P.B. Wiegmann,
Phys. Lett. B131 (1983) 121; P.B. Wiegmann, Phys. Lett. B141 (1984) 217.
[22] R.P. Feynman, Nucl. Phys. B188 (1981) 479.
[23] P. Orland and G.W. Semenoff, Nucl. Phys. B576 (2000) 627.
[24] P. Orland, hep-th/9607134 (1996).
http://arxiv.org/abs/hep-ph/0609084
http://arxiv.org/abs/cond-mat/0409602
http://arxiv.org/abs/hep-th/9607134
|
0704.0941 | Recovering galaxy star formation and metallicity histories from spectra
using VESPA | Mon. Not. R. Astron. Soc. 000, 000–000 (0000) Printed 30 August 2021 (MN LATEX style file v2.2)
Recovering galaxy star formation and metallicity histories
from spectra using VESPA
R. Tojeiro⋆1, A. F. Heavens1, R. Jimenez2 and B. Panter1
1Institute for Astronomy, University of Edinburgh, Royal Observatory, Blackford Hill, Edinburgh, EH9 3HJ, UK
2Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA-19104, USA
30 August 2021
ABSTRACT
We introduce VErsatile SPectral Analysis (VESPA): a new method which aims to
recover robust star formation and metallicity histories from galactic spectra. VESPA
uses the full spectral range to construct a galaxy history from synthetic models. We
investigate the use of an adaptive parametrization grid to recover reliable star for-
mation histories on a galaxy-by-galaxy basis. Our goal is robustness as opposed to high
resolution histories, and the method is designed to return high time resolution only
where the data demand it. In this paper we detail the method and we present our find-
ings when we apply VESPA to synthetic and real Sloan Digital Sky Survey (SDSS)
spectroscopic data. We show that the number of parameters that can be recovered
from a spectrum depends strongly on the signal-to-noise, wavelength coverage and
presence or absence of a young population. For a typical SDSS sample of galaxies, we
can normally recover between 2 to 5 stellar populations. We find very good agreement
between VESPA and our previous analysis of the SDSS sample with MOPED.
Key words: methods: data analysis - methods: statistical - galaxies: stellar content
- galaxies: evolution - galaxies: formation
1 INTRODUCTION
The spectrum of a galaxy holds vasts amounts of informa-
tion about that galaxy’s history and evolution. Finding a
way to tap directly into this source of knowledge would
not only provide us with crucial information about that
galaxy’s evolutionary path, but would also allow us to
integrate this knowledge over a large number of galaxies
and therefore derive cosmological information.
Galaxy formation and evolution are still far from being
well understood. Galaxies are extremely complex objects,
formed via complicated non-linear processes, and any
approach (be it observational, semi-analytical or compu-
tational) inevitably relies on simplifications. If we try to
analyse a galaxy’s luminous output in terms of a history
parametrized by some chosen physical quantities, such a
simplification is also in order. The reason is two-fold: firstly
we are limited by our knowledge and ability to model
all the physical processes which happen in a galaxy and
produce the observed spectrum we are analysing; secondly,
the observed spectrum is inevitably perturbed by noise,
which intrinsically limits the amount of information we can
⋆ E-mail: [email protected]
recover.
Measuring and understanding the star formation history
of the Universe is therefore essential to our understanding
of galaxy evolution - when, where and in what conditions
did stars form throughout cosmic history? The traditional
and simplest way to probe this is to measure the observed
instantaneous star formation rate in galaxies at differ-
ent redshifts. This can be achieved by looking at light
emitted by young stars in the ultra-violet (UV) band or
its secondary effects. (e.g. Madau et al. 1996; Kennicutt
1998; Hopkins et al. 2000; Bundy et al. 2006; Erb et al.
2006; Abraham et al. 2007; Noeske et al. 2007; Verma et al.
2007). A complementary method is to look at present day
galaxies and extract their star formation history, which
spans the lifetime of the galaxy. Different teams have
analysed a large number of galaxies in this way, whether
by using the full available spectrum (Glazebrook et al.
2003; Panter et al. 2003; Cid Fernandes et al. 2004;
Heavens et al. 2004; Mathis et al. 2006; Ocvirk et al. 2006;
Panter et al. 2006; Cid Fernandes et al. 2007), or by con-
centrating on particular spectral features or indices (e.g.
Kauffmann et al. 2003; Tremonti et al. 2004; Gallazzi et al.
2005; Barber et al. 2006), which are known to be correlated
with age or metallicities (e.g. Worthey 1994; Thomas et al.
http://arxiv.org/abs/0704.0941v2
2003).
To do this, we rely on synthetic stellar population models to
describe a galaxy in terms of its stellar components, but by
modelling a galaxy in this way we are intrinsically limited
by the quality of the models. There are also potential
concerns with flux calibration errors. However, using the
full spectrum to recover the fossil record of a galaxy - or of
an ensemble of galaxies - is an extremely powerful method,
as the quality and amount of data relating to local galaxies
vastly outshines that which concerns high-redshift galaxies.
Splitting a galaxy into simple stellar populations of different
ages and metallicities is a natural way of parameterising
a galaxy, and it allows realistic fits to real galaxies (e.g.
Bruzual & Charlot 2003). Galactic archeology has become
increasingly popular in the literature recently, largely due to
the increase in sophistication of stellar population synthesis
codes and the improvement of the stellar spectra libraries
upon which they are based, and also due to the availability
of large well-calibrated spectroscopic databases, such as
the Sloan Digital Sky Survey (SDSS) (York et al. 2000;
Strauss et al. 2002).
In any case, without imposing any constraints on the
allowed form of the star formation history, or perhaps an
age-metallicity relation, the parameter space can become
unsustainably large for a traditional approach. Ideally, one
would like to do without such pre-constraints. Recently,
different research teams have come up with widely different
solutions for this problem. MOPED (Heavens et al. 2000)
and STARLIGHT (Cid Fernandes et al. 2004) explore a
well-chosen parameter space in order to find the best
possible fit to the data. In the case of MOPED, this relies
on compression of the full spectrum to a much smaller set
of numbers which retains all the information about the
parameters it tries to recover; STARLIGHT on the other
hand, searches for its best fit using the full spectrum with
a Metropolis algorithm. STECMAP (Ocvirk et al. 2006)
solves the problem using an algebraic least-squares solution
and a well-chosen regularization to keep the inversions
stable. All of these and other methods acknowledge the
same limitation - noise in the data and in the models
introduces degeneracies into the problem which can lead
to unphysical results. MOPED, for example, has produced
some remarkable results concerning the average star forma-
tion history of the Universe by analysing a large sample
of galaxies. However, MOPED’s authors have cautioned
against over-interpreting the results on a galaxy-by-galaxy
basis, due to the problem mentioned above. This is directly
related to the question of how finely one should param-
eterise a galaxy, and what the consequences of this might be.
Much of the motivation for VESPA came from the reali-
sation that this problem will vary from galaxy to galaxy,
and that the method of choosing a single parametrization
to analyse a large number of galaxies can be improved on.
VESPA is based on three main ideas, which we present here
and develop further in the main text:
• There is only so much information one can safely recover
from any given set of data, and the amount of information
which can be recovered from an individual galaxy varies.
• The recovered star formation fractions should be posi-
tive.
• Even though the full unconstrained problem is non-
linear, it is piecewise linear in well-chosen regions of pa-
rameter space.
VESPA’s ultimate goal is to derive robust information
for each galaxy individually, by adapting the number of
parameters it recovers on a galaxy-by-galaxy basis and
increasing the resolution in parameter space only where the
data warrant it. In a nutshell, this is how VESPA works: we
estimate how many parameters we can recover from a given
spectrum, given its noise, shape, spectral resolution and
wavelength range using an analysis given by Ocvirk et al.
(2006). In that paper, Singular Value Decomposition (SVD)
is used to find a least squares solution, and this solution
is analysed in terms of its singular vectors. VESPA uses
this method only as an analysis of the solution, and uses
Bounded-Variable Least-Squares (BVLS) (Stark & Parker
1995) to reach a non-negative solution in several regimes
where linearity applies.
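A minimal illustration of such a bounded linear solve (not the VESPA code itself, which uses the BVLS routine of Stark & Parker 1995) is given below: given a grid of model fluxes G and an observed spectrum F with errors, the non-negative masses follow from a least-squares fit with a lower bound of zero on every parameter, for which scipy's lsq_linear offers a BVLS-style method.

import numpy as np
from scipy.optimize import lsq_linear

def recover_masses(F, sigma, G):
    # F:     observed fluxes, shape (n_lambda,)
    # sigma: 1-sigma flux errors, shape (n_lambda,)
    # G:     model SSP fluxes per unit mass, shape (n_lambda, n_populations)
    # Weight by the errors so the solution minimizes chi^2, and bound the
    # recovered masses from below by zero.
    A = G / sigma[:, None]
    b = F / sigma
    res = lsq_linear(A, b, bounds=(0.0, np.inf), method='bvls')
    return res.x          # non-negative masses x_alpha of Eq. (2)

Keeping the fractions non-negative in this way is what enforces the second of the three ideas listed above, while the linear solve itself is only applied within regions of parameter space (fixed metallicity bracket and dust values) where the problem is linear.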
This paper is organised as follows: in Section 2 we present
the method, in Section 3 we apply VESPA to a variety of
synthetic spectra, in Section 4 we apply VESPA to a sample
of galaxies from the Sloan Digital Sky Survey spectroscopic
database and we compare our results to those obtained with
MOPED, and finally in Section 5 we present our conclusions.
2 METHOD
In this section we lay down the problem to solve in detail,
and explain the different steps VESPA uses to find a solution
for each galaxy.
2.1 The problem
We assume a galaxy is composed of a series of simple stel-
lar populations (SSP) of varying ages and metallicities. The
unobscured rest frame luminosity per unit wavelength of a
galaxy can then be written as
F_λ = ∫ ψ(t) S_λ(t, Z) dt (1)
where ψ(t) is the star formation rate (solar masses formed
per unit of time) and Sλ(t, Z) is the luminosity per unit
wavelength of a single stellar population of age t and metal-
licity Z, per unit mass. The dependency of the metallicity on
age is unconstrained, turning this into a non-linear problem.
In order to solve this problem, we start by discretizing in
wavelength and time, by averaging these two quantities
into well chosen bins. For now we present the problem with
a generalised parametrization, and discuss our choice in
Section 2.3. We will use greek indices to indicate time bins,
and roman indices to indicate wavelength bins.
The problem becomes
Fj = Σα xαG(Zα)αj  (2)
where Fj = (F1, ..., FD) is the luminosity of the jth wave-
length bin of width ∆λ, G(Zα)αj is the jth luminosity point
of a stellar population of age tα = (t1, ..., tS) (spanning an
age range of ∆tα) and metallicity Zα, and xα = (x1, ..., xS)
is the total mass of population G(Z)αj in the time bin ∆tα.
Although the full metallicity problem is non-linear, interpo-
lating between tabulated values of Z gives a piecewise linear
behaviour:
G(Zα)αj = gαG(Za,α)αj + (1− gα)G(Zb,α)αj , (3)
and the problem then becomes
Fj = Σα xα [gαG(Za,α)αj + (1− gα)G(Zb,α)αj ]  (4)
where G(Za,α)αj and G(Zb,α)αj are equivalent to G(Zα)αj
as above, but at fixed metallicities Za and Zb, which bound
the true Z. If this interpolation matches the models’ reso-
lution in Z, then we are not degrading the models in any way.
Solving the problem then requires finding the correct metal-
licity range. One should not underestimate the complexity
this implies - trying all possible combinations of consecutive
values of Za and Zb in a grid of 16 age bins would lead to a
total number of calculations of the order of 10^9, which is unfeasible
even on today's fast personal workstations. We work
around this problem using an iterative approach, which we
describe in Section 2.3.2.
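To make the piecewise-linear structure concrete, the following minimal Python sketch (illustrative only, not VESPA's code) evaluates equations (3) and (4) for given mass weights xα and interpolation fractions gα; G_a and G_b are hypothetical arrays holding the SSP model fluxes tabulated at the bounding metallicities Za and Zb:
import numpy as np

# Illustrative sketch of equations (3)-(4); G_a and G_b are hypothetical
# SSP model grids of shape (S, D): S age bins times D wavelength bins,
# tabulated at the bounding metallicities Za and Zb.
def model_flux(x, g, G_a, G_b):
    """F_j = sum_alpha x_alpha [g_alpha G(Za)_aj + (1 - g_alpha) G(Zb)_aj]."""
    G_interp = g[:, None] * G_a + (1.0 - g[:, None]) * G_b   # equation (3), per age bin
    return x @ G_interp                                      # equation (4), summed over age bins
For fixed gα the predicted flux is linear in the xα, which is what makes the problem piecewise linear.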
2.1.1 Dust extinction
An important component when describing the luminous out-
put of a galaxy is dust, as different wavelengths are affected
in different ways. The simplest possible approach is to use
a one-parameter dust model, according to which we apply
a single dust screen to the combined luminosity of all the
galactic components. Equation (1) becomes
Fλ = fdust(τλ) ∫ ψ(t) Sλ(t, Z) dt  (5)
where we are assuming the dust extinction is the same for
all stars, and characterised by the optical depth, τλ.
However, it is also well known that very young stars are
likely to be more affected by dust. In an attempt to in-
clude this in our modelling, we follow the two-parameter
dust model of Charlot & Fall (2000) in which young stars
are embedded in their birth cloud up to a time tBC , when
they break free into the inter-stellar medium (ISM):
Fλ = ∫ fdust(τλ, t) ψ(t) Sλ(t, Z) dt,  (6)
with
fdust(τλ, t) = fdust(τ^BC_λ) fdust(τ^ISM_λ) for t ≤ tBC, and
fdust(τλ, t) = fdust(τ^ISM_λ) for t > tBC,  (7)
where τ^ISM_λ is the optical depth of the ISM and τ^BC_λ is the
optical depth of the birth cloud. Following Charlot & Fall
(2000), we take tBC = 0.03 Gyrs.
There is a variety of choices for the form of fdust(τλ). To
model the dust in the ISM, we use the mixed slab model of
Charlot & Fall (2000) for low optical depths (τV ≤ 1), for
which
fdust(τλ) = [1 + (τλ − 1) exp(−τλ) − τλ^2 E1(τλ)] / (2τλ)  (8)
where E1 is the exponential integral and τλ is the optical
depth of the slab. This model is known to be less accurate
for high dust values, and for optical depths greater than one
we take a uniform screening model with
fdust(τλ) = exp(−τλ). (9)
We only use the uniform screening model to model the dust
in the birth cloud, and we use τλ = τV (λ/5500 Å)^−0.7 as our
extinction curve for both environments.
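The two attenuation laws can be written down in a few lines; the sketch below is an illustration under the reconstruction of equation (8) given above (slab normalisation 1/(2τλ)), using scipy's exp1 for the exponential integral E1, and is not VESPA's code:
import numpy as np
from scipy.special import exp1  # exponential integral E1

def tau_of_lambda(tau_V, lam):
    # Power-law extinction curve: tau_lambda = tau_V (lambda / 5500 A)^-0.7
    return tau_V * (lam / 5500.0) ** -0.7

def f_slab(tau):
    # Mixed slab model (eq. 8 as reconstructed here), used for tau_V <= 1; valid for tau > 0
    return (1.0 + (tau - 1.0) * np.exp(-tau) - tau**2 * exp1(tau)) / (2.0 * tau)

def f_screen(tau):
    # Uniform screen (eq. 9), used for tau_V > 1 and for the birth cloud
    return np.exp(-tau)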
As described, dust is a non-linear problem. In practice, we
solve the linear problem described by equation (4) with a
number of dust extinctions applied to the matrices G(Z)ij
and choose the values of τ^ISM_V and τ^BC_V which result in the
best fit to the data.
We initially use a binary chop search for τ^ISM_V ∈ [0, 4] and
keep τ^BC_V fixed and equal to zero, which results in trying out
typically around nine values of τ^ISM_V. If this initial solution
reveals star formation at a time less than tBC we repeat
our search on a two-dimensional grid, and fit for τ^ISM_V and
τ^BC_V simultaneously. There is no penalty except in CPU time
in applying the two-parameter search, but we find that this
procedure is robust (see Section 3.4).
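The dust search itself can be sketched as a simple grid minimisation over the attenuation values; in the fragment below fit_chi2 is a hypothetical helper that applies a given (τ^ISM_V, τ^BC_V) pair to the model grid, solves the linear problem of equation (4), and returns the resulting χ² (a coarse grid stands in for the binary chop described above):
import numpy as np

def search_dust(fit_chi2, n_grid=9, has_young_stars=False):
    # Illustrative sketch of the dust search, not VESPA's actual code.
    tau_ism_grid = np.linspace(0.0, 4.0, n_grid)
    chi2_1d = [fit_chi2(t_ism, 0.0) for t_ism in tau_ism_grid]   # tau_BC held at zero
    best_ism, best_bc = tau_ism_grid[int(np.argmin(chi2_1d))], 0.0
    if has_young_stars:                                          # refit on a 2D grid if t < tBC bins are populated
        tau_bc_grid = np.linspace(0.0, 4.0, n_grid)
        chi2_2d = np.array([[fit_chi2(ti, tb) for tb in tau_bc_grid] for ti in tau_ism_grid])
        i, j = np.unravel_index(int(np.argmin(chi2_2d)), chi2_2d.shape)
        best_ism, best_bc = tau_ism_grid[i], tau_bc_grid[j]
    return best_ism, best_bc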
2.2 The solution
In this section we describe the method used to reach a
solution for a galaxy, given a set of models and a gener-
alised parametrization. The construction of these models
and choice of parameters is described in Sections 2.3 and 2.4.
We re-write the problem described by equation (4) in a sim-
pler way
Fj = Σκ cκAκj(Zκ)  (10)
where Zκ = Za for κ < S and Zκ = Zb for κ ≥ S. A
is a D × 2S matrix composed of synthetic models at the
corresponding metallicities, and c = (c1, ..., c2S) is the solu-
tion vector, from which the xα and gα in equation (4) can
be calculated. We can then calculate a linearly interpolated
metallicity at age tα
Zα = gαZa + (1− gα)Zb. (11)
For every age tα we aim to recover two parameters: xα -
the total mass formed at that age (within a period ∆tα) -
and Zα - a mass-weighted metallicity.
At this stage we are not concerned with our choice of tα and
∆tα - although these are crucial and will be discussed later.
For a given set of chosen parameters, we find c, such that
χ² = Σj (Fj − Σκ cκAκj)² / σj²  (12)
is minimised (where σj is the error in the measured flux bin j).
A linear problem with a least squares constraint has a simple
analytic solution which, for constant σj (white-noise) is
cLS = (A^T·A)^−1 A^T·F  (13)
In principle, any matrix inversion method, e.g. Singular
Value Decomposition (SVD), can be used to solve (13). How-
ever, we would like to impose positivity constraints on the
recovered solutions. Negative solutions are unphysical, but
unfortunately common in a problem perturbed by noise.
2.2.1 BVLS and positivity
We use Bounded-Variable Least-Squares (BVLS)
(Stark & Parker 1995) in order to solve (13). BVLS is
an algorithm which solves linear problems whose variables
are subject to upper and lower constraints. It uses an
active set strategy to choose sets of free variables, and
uses QR matrix decomposition to solve the unconstrained
least-squares problem of each candidate set of free variables
using (13):
cLS = (E^T·E)^−1 E^T·F  (14)
where E is effectively composed of those columns of A
for which ck is unconstrained, and of zero vectors for
those columns for which ck is set to zero. BVLS is an
extension of the Non-Negative Least Squares algorithm
(Lawson & Hanson 1974), and they are both proven to
converge in a finite number of iterations. Positivity is the
only constraint in VESPA’s solution.
BVLS and positivity have various advantages. Most obvious
is the fact that we do away with negative solutions. In a
non-constrained method (such as SVD) negative values are
a response to the fact that the data is noisy. Similarly, we
find that zero values returned by BVLS (in, for example,
a synthetic galaxy with continuous star formation across
all time) are also an artifact from noisy data. It should be
kept in mind that, if the method is unbiased, this problem
is solved by analysing a number of noisy realisations of the
original problem - what we find is that the true values of
the parameters we try to recover are consistent with the
distributions yielded by this process. In this sense, not even
a negative value presents a problem necessarily, as long as
it is consistent with zero (or the correct solution). Given
that we have found no bias when using BVLS, we feel it is
an advantage to discard a priori solutions we know to be
unphysical.
Another advantage to using BVLS is the fact that, by fixing
some parameters to the lower boundary (zero, in this case),
it effectively reduces the number of fitting parameters to
the number of those it keeps unconstrained. Given the
overall aims of VESPA, this has proven to be advantageous.
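As an illustration of this constrained fit, the sketch below uses scipy's lsq_linear, a bounded-variable least-squares solver in the same spirit as the Stark & Parker BVLS routine (though not that code itself); A is assumed to hold one SSP model per row, following the indexing of equation (10), and dividing by the flux errors whitens the problem:
import numpy as np
from scipy.optimize import lsq_linear

def solve_nonnegative(A, F, sigma):
    """Non-negative least-squares fit of the fluxes F (illustrative, not VESPA's code).

    A     : (2S, D) model matrix, one synthetic model per row (A_kappa_j)
    F     : (D,) observed fluxes
    sigma : (D,) flux errors
    """
    design = (A / sigma).T            # (D, 2S) whitened design matrix
    target = F / sigma
    result = lsq_linear(design, target, bounds=(0.0, np.inf))
    return result.x                   # solution vector c with c >= 0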
2.2.2 Noise
The inversion in equation (13) is often highly sensitive to
noise, and care is needed when recovering solutions with
matrix inversion methods. The fit in data-space will always
improve as we increase the number of parameters, but these
might not all provide meaningful information. We follow an
analysis given in Ocvirk et al. (2006) in order to understand
how much this affects our results, and to choose a suitable
age parametrization for each galaxy. This is not an exact
method, and it does not guarantee that the solutions we
recover have no contribution from noise. However, we found
that in most cases it provides a very useful guideline (see
section 3.3, in particular Figure 11).
We refer the reader to the above paper for a full discussion,
and we reproduce here the steps used in our analysis.
We use SVD to decompose the model matrix E as
E = U ·W ·V^T  (15)
where U is a D×2S orthonormal matrix with singular data-
vectors uκ as columns, V is a 2S × 2S orthonormal matrix
with the singular solution-vectors vκ as columns, and W is
a 2S × 2S diagonal matrix W = diag(w1, ..., w2S) where wκ
are the matrix singular values in decreasing order. Replacing
E by this decomposition in equation (13) gives
cLS = V ·W^−1 ·U^T·F = Σκ [(u^T_κ·F)/wκ] vκ  (16)
The solution vector is a linear combination of the singular
solution-vectors, each weighted by the dot product between
the data and the corresponding singular data-vector, and
divided by the κth singular value. The data vector itself is a
combination of the true underlying emitted flux and noise:
F = Ftrue + e. Equation (16) becomes
cLS = Σκ [(u^T_κ·Ftrue)/wκ] vκ + Σκ [(u^T_κ·e)/wκ] vκ ≡ ctrue + ce  (17)
where ctrue is the solution vector to the noiseless problem
and ce is an unavoidable added term due to the presence of
noise.
It is extremely informative to compare the amplitudes
of the two terms in the sum (17), and to monitor their
contributions to the solution vector with varying rank. In
Figure 1 we plot |u^T_κ·F| and |u^T_κ·e| as a function of rank
κ, for a synthetic spectrum with a SNR per pixel of 50
(at a resolution of 3Å) and an exponentially-decaying star
formation history. We observe the behaviour described
and discussed in Ocvirk et al. (2006). The combinations
associated with the noise terms maintain a roughly constant
power across all ranks, with an average value of 〈F〉/SNR.
The data terms, however, drop significantly with rank, and
we can therefore identify two ranges: a noise-dominated
κ-range, in which the noise contributions match or dom-
inate the true data contributions, and a data-dominated
range, where the contributions to the solution are largely
data motivated. We call the transition rank κcrit. Overall,
high-κ ranks tend to dominate the solution, since the
singular values wκ decrease with κ. This only amplifies
the problem by giving greater weight to noise-dominated
terms in the sum (16). Figure 2 shows the contribution
coming from each rank κ to the final solution - the
coefficient (u^T_κ·F)/wκ. We see this weight increases with rank.
Figure 1. The behaviour of the singular values with matrix rank
κ. The stars are |u^T_κ·F| and the squares are |u^T_κ·e|. The line is
〈F〉/SNR, which in this case has a value of approximately 0.06.
Whereas this analysis gives us great insight into the
problem, we do not in fact use the sum (16) to obtain cLS ,
for the reasons given in section 2.2.1.
For real data we are only able to calculate u^T_κ·F and estimate
the noise level as 〈F〉/SNR, and we use this information
to estimate the number of non-zero parameters to recover
from the data. Our aim is to have a solution which is
dominated by the signal, and not by the noise. We therefore
want our number of non-zero recovered parameters to be
less than or equal to κcrit. Estimating where this transition
happens is always a noisy process. In this paper we take
the conservative approach of setting κcrit to be the rank at
which the perturbed singular values first cross the 〈F〉 /SNR
barrier. In the case of Figure 1 this happens at κcrit = 7.
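A compact numpy sketch of this criterion (an illustration of the procedure as described, not VESPA's implementation) compares the projections |u^T_κ·F| with the estimated noise floor 〈F〉/SNR and returns the first crossing:
import numpy as np

def estimate_kappa_crit(E, F, snr):
    """Rank at which the perturbed singular projections first drop below <F>/SNR.

    E   : (D, 2S) model matrix (singular data-vectors are the columns of U)
    F   : (D,) observed flux vector
    snr : estimated signal-to-noise ratio per pixel
    """
    U, w, Vt = np.linalg.svd(E, full_matrices=False)   # singular values w in decreasing order
    proj = np.abs(U.T @ F)                             # |u_kappa^T . F| for each rank
    noise_floor = np.mean(F) / snr
    below = np.flatnonzero(proj < noise_floor)
    return int(below[0]) if below.size else len(w)     # first crossing (0-based), or all ranks usable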
2.3 Choosing a galaxy parametrization
One of the advantages of VESPA is that it has the ability
to choose the number of parameters to recover in any
given galaxy. This is possible due to a time grid of varying
resolutions, which VESPA can explore to find a solution.
This section describes this grid and the criteria used to
reach a final parametrization.
2.3.1 The grid
We work on a grid with a maximum resolution of 16 age
bins, logarithmically spaced in lookback time from 0.02
up to 14 Gyr.
Figure 2. The coefficients in the sum (16) as a function of rank κ.
We see that the highest-rank modes (corresponding to the smaller
singular values) tend to contribute the most to the solution.
The grid has three further resolution levels,
where we split the age of the Universe in eight, four and
finally two age bins, also logarithmically spaced in the same
range.
The idea behind the multi-resolution grid is to start our
search with a low number of parameters (at coarser resolution,
so that the entire age of the Universe is covered),
and then increase the resolution only where the data warrant
it by splitting the bin with the highest flux contribution
in two, and so on. In effect, we construct one such grid for
each of the tabulated metallicities, Za and Zb. We work with
five metallicity values, Z = [0.0004, 0.004, 0.008, 0.02, 0.05]
which correspond to the metallicity resolution of the models
used, where Z is the fraction of the mass of the star
composed of metals (Z⊙ = 0.02). The construction of the
models for each of the time bins is discussed in Section 2.4.
To each of the grids we can apply a dust extinction as ex-
plained in Section 2.1.1.
2.3.2 The search
We go through the following steps in order to reach a solu-
tion:
(i) We begin our search with three bins: two bins of width
4 and one bin of width 8 (oldest), where here we are mea-
suring widths in units of high-resolution bins.
(ii) We calculate a solution using equation (10) for every
possible combination of consecutive boundaries Za and Zb,
and we choose the one which gives the best value of reduced χ².
(iii) We calculate the number of perturbed singular values
above the noise level, as described at the end of Section 2.2.2.
(iv) We find the bin which contributes the most to the
total flux and we split it into two.
(v) We find a solution in the new parametrization, this
time by trying out all possible combinations of Za and Zb for
the newly split bins only, and fixing the metallicity bound-
aries of the remaining bins to the boundaries obtained in
the previous solution. If a bin had no stars in the previous
iteration, we set Za = 0.0004 and Zb = 0.05.
(vi) We return to (iii) and we proceed until we have
reached the maximum resolution in all populated bins.
(vii) We look backwards in our sequence of solutions for
the last instance with a number of non-zero recovered pa-
rameters equal to or less than κcrit as calculated in (iii) and
take this as our best solution.
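A schematic Python rendering of steps (i)-(vii) above; every callable passed in is a hypothetical stand-in for an operation described in the text (the constrained fit with metallicity search, the SVD information estimate, the bin split and the stopping test), and the solution is assumed to be the vector of recovered coefficients:
def vespa_search(spectrum, initial_params, fit, kappa_crit_of, split_largest_bin, at_max_resolution):
    # Illustrative sketch of the iterative search, not VESPA's actual routine.
    params, history = initial_params, []                      # step (i): coarse starting grid
    while True:
        solution = fit(spectrum, params)                      # steps (ii)/(v): fit plus metallicity boundary search
        kappa_crit = kappa_crit_of(spectrum, params)          # step (iii): number of usable parameters
        history.append((params, solution, kappa_crit))
        if at_max_resolution(params, solution):               # step (vi): populated bins fully resolved
            break
        params = split_largest_bin(params, solution)          # step (iv): split the bin contributing most flux
    for params, solution, kappa_crit in reversed(history):    # step (vii): last solution within kappa_crit
        if sum(1 for c in solution if c > 0) <= kappa_crit:
            return params, solution
    return history[0][0], history[0][1]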
We illustrate this sequence in Figure 3, where we show the
evolution of the search in a synthetic galaxy composed of
two stellar bursts of equal star formation rates - one young
and one old. VESPA first splits the components which con-
tribute the most to the total flux. In this case this is the
young burst which can be seen in the first bin. Even though
VESPA always resolves bins with any mass to the highest
possible resolution, it then searches for the latest solution
which has passed the SVD criterion explained in Section
2.2.2. In this case, this corresponds to the fifth from the top
solution. VESPA chooses this solution in favour of the fol-
lowing ones due to the number of perturbed singular values
above the solid line (right panel). In this case, the solution
chosen by VESPA is a better fit in parameter space (note
the logarithmic scale on the y-axis - the following solution
puts the vast majority of the mass in the wrong bin). We
observed this type of improvement in the majority of the cases
studied (see Figure 11).
2.3.3 The final solution
Our final solution comes in a parametrization such that the
total number of non-zero recovered parameters is less than
or equal to the number of perturbed singular values above
the estimated noise level.
The above sequence is performed for each of several combinations
of τ^BC_V and τ^ISM_V, and we choose the attenuation which
provides the best fit.
For each galaxy we recover N star formation masses, with
an associated metallicity, where N is the total number of
bins, and a maximum of two dust parameters.
2.4 The models
The backbone to our grid of models is the BC03 set of
synthetic SSP models (Bruzual & Charlot 2003), with a
Chabrier initial mass function (Chabrier 2003) and Padova
1994 evolutionary tracks (Alongi et al. 1993; Bressan et al.
1993; Fagotto et al. 1994a,b; Girardi et al. 1996). Although
any set of stellar population models can be used, these
provide a detailed spectral evolution of stellar populations
over a suitable range of wavelength, ages and metallicities:
S(λ, t, Z). The models have been normalised to one solar
mass at the age t = 0.
2.4.1 High-resolution age bins
At our highest resolution we work with 16 age bins, equally
spaced in a logarithmic time scale between now and the
age of the Universe. In each bin, we assume a constant star
formation rate
f^HR_α(λ, Z) = ψ ∫∆tα S(λ, t, Z) dt  (18)
with ψ = 1/∆tα.
2.4.2 Low-resolution age bins
As described in Section 2.3.1, we work on a grid of different
resolution time bins and we construct the low resolution bins
using the high resolution bins described in Section 2.4.1. We
do not assume a constant star formation rate in this case, as
in wider bins the light from the younger components would
largely dominate over the contribution from the older ones.
Instead, we use a decaying star formation history, such that
the light contributions from all the components are compa-
rable. Recall equation (1)
fα(λ, Z) = ∫∆tα ψ(t) S(λ, t, Z) dt,  (19)
which we approximate to
f^LR_β(λ, Z) = [Σα∈β f^HR_α(λ, Z) ψα∆tα] / [Σα∈β ψα∆tα],  (20)
where the low-resolution bin β incorporates the high-resolution
bins α ∈ β, and we set
ψα∆tα = 1 / ∫ f^HR_α(λ, Z) dλ.  (21)
Depending on the galaxy, the final solution obtained with
the sequence detailed in Section 2.3.2 can be described in
terms of low-resolution age bins. In this case we should in-
terpret the recovered mass as the total mass formed during
the period implied by the width of the bin, but we can-
not draw any conclusions as to when in the bin the mass
was formed. Similarly, the recovered metallicity for the bin
should be interpreted as a mass-weighted metallicity for the
total mass formed in the bin.
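Under the reconstruction of equations (20) and (21) given above, a low-resolution bin model can be assembled from the high-resolution ones with a few lines of numpy (illustrative only, not VESPA's code):
import numpy as np

def low_res_bin(f_hr, wavelengths):
    """Combine high-resolution bin spectra into one low-resolution bin spectrum.

    f_hr        : (n, D) array of spectra f^HR_alpha for the high-resolution bins in the group
    wavelengths : (D,) wavelength grid used for the bolometric normalisation
    """
    weights = 1.0 / np.trapz(f_hr, wavelengths, axis=1)            # psi_alpha dt_alpha of eq. (21): comparable light contributions
    return (weights[:, None] * f_hr).sum(axis=0) / weights.sum()   # weighted average of eq. (20)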
2.5 Errors
The quality of our fits and of our solutions is affected by
the noise in the data, the noise in the models, and the
parametrization we choose (which does not reflect the
complete physical scenario within a galaxy). We aim to
apply VESPA firstly to SDSS galaxies, which typically have
a SNR ≈ 20 per resolution element of 3Å, which puts us in
a regime where the main limitations come from the noise in
the data.
To estimate how much noise affects our recovered solutions
we take a rather empirical approach. For each recovered so-
lution we create nerror random noisy realisations and we
apply VESPA to each of these spectra. We re-bin each re-
covered solution in the parametrization of the solution we
want to analyse and estimate the covariance matrices
C(x)αβ = 〈(xα − x̄α)(xβ − x̄β)〉  (22)
C(Z)αβ = 〈(Zα − Z̄α)(Zβ − Z̄β)〉.  (23)
Figure 3. The evolution of the fit, as VESPA searches for a solution. The sequence should be read from top to bottom. Each line shows
a stage in the sequence: the left panel shows the input star formation history in the dashed line (red on the online version), and the
recovered mass fractions on the solid line (black on the online version) for a given parametrization; the middle panel shows the input
metallicities in the dashed line (red on the online version), and the recovered metallicities on the solid line (black on the online version);
the right panel shows the absolute value of the perturbed singular values |u^T_κ·F| (stars and solid line) and the estimated noise level
〈F〉/SNR. In this panel we also show the value of κcrit and the number of non-zero elements of cLS in each iteration. The chosen solution
is the fifth from the top, and indicated accordingly. This galaxy consists of 2 burst events of equal star formation rate - a very young
and an old burst. It was modelled with a resolution of 3Å and a signal-to-noise ratio per pixel of 50. We see the recovery is good but not
perfect - there is a 1 per cent leakage from the older population - but better than the following solutions, where this bin is split. See text
in Section 2.3.2 for more details.
All the plots in Sections 3 and 4 show error bars derived
from C^1/2_αα, although it is worth keeping in mind that these
are typically highly correlated.
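The empirical error estimate amounts to a small Monte Carlo loop; in the sketch below run_vespa and rebin_to are hypothetical stand-ins for the fit and the re-binning step described above:
import numpy as np

def estimate_covariance(F, sigma, run_vespa, rebin_to, reference_bins, n_error=10):
    # Illustrative Monte Carlo estimate of C(x) (eq. 22), not VESPA's actual code.
    rng = np.random.default_rng()
    samples = []
    for _ in range(n_error):
        noisy_F = F + rng.normal(0.0, sigma)             # random noisy realisation of the spectrum
        x = run_vespa(noisy_F, sigma)                    # recovered masses for this realisation
        samples.append(rebin_to(x, reference_bins))      # re-bin to the parametrization being analysed
    samples = np.array(samples)
    return np.cov(samples, rowvar=False)                 # covariance matrix over the realisations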
2.6 Timings
A basic run of VESPA (which consists of roughly 5 runs
down the sequence detailed in Section 2.3.2, one for each
value of dust extinction) takes about 5 seconds. If accurate
error estimations are needed per galaxy, this will add an-
other one or two minutes to the timing, depending on how
accurately one would like to estimate the covariance ma-
trices, and depending on the number of data points. With
nerror = 10, a typical SDSS galaxy takes around one minute
to analyse.
3 TESTS ON SIMULATED DATA
We tested VESPA on a variety of synthetic spectra, in
order to understand its capabilities and limitations. In
particular, we tried to understand the effect of three factors
on the quality of our solutions: the input star formation
history, the noise in the data, and the wavelength coverage
of the spectrum. We have also looked at the effects of dust
extinction. Throughout we have modelled our galaxies at a
resolution of 3Å.
Even though we are aware that showing individual examples
of VESPA’s results from synthetic spectra can be extraor-
dinarily unrepresentative, we feel obliged to show a few for
illustration purposes. We will show a typical result for most
of the cases we present, but we also define some measure-
ments of success, so that the overall performance of VESPA
can be tracked as we vary any factors. We define
Gx = Σα |(xα − x^I_α)/x^I_α| ωα  (24)
GZ = Σα |(Zα − Z^I_α)/Z^I_α| ωα  (25)
where x^I_α and Z^I_α are the input total mass and corresponding
metallicity in bin α (re-binned to match the solution's
parametrization if necessary), and ωα is the flux contribution
of the population of age tα. Gx and GZ are a flux-weighted
average of the total absolute fractional errors in the solu-
tion, and give an indication of how well VESPA recovers
the most significant parameters. A perfect solution gives
Gx = GZ = 0. It is also worth noting that this statistic
does not take into account the error associated with each
recovered parameter - deviations from the true solution are
usually expected given the estimated covariance matrices.
We will also show how these factors affect the recovered to-
tal mass for a galaxy. In all cases we have re-normalised the
total masses such that the total input mass for each galaxy is 1.
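The success statistics of equations (24) and (25), as reconstructed above, reduce to a weighted mean of absolute fractional errors; the normalisation of the flux weights ωα to unit sum is an assumption of this sketch:
import numpy as np

def g_statistic(recovered, true_input, flux_contribution):
    """Flux-weighted average of absolute fractional errors (Gx or GZ as reconstructed here)."""
    frac_err = np.abs((recovered - true_input) / true_input)
    weights = flux_contribution / flux_contribution.sum()   # assumed normalised to sum to one
    return float(np.sum(frac_err * weights))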
3.1 Star formation histories
We present here some results for synthetic spectra with
two different star formation histories. All of the spectra
in this section were synthesised with a SNR per pixel of
50, and we initially fit the very wide wavelength range
λ ∈ [1000, 9500]Å.
We choose two very different cases: firstly a star formation
history of dual bursts, with a large random variety of
burst age separations and metallicities (where we set the
star formation rate to be 10 solar masses per Gyr in all
bursts). Secondly, we chose a SFH with an exponentially
decaying star formation rate: SFR ∝ exp(γtα), where tα is
the age of the bin in lookback time in Gyr.
Figure 8. The recovered number of non-zero parameters in 50
galaxies with an exponentially decaying star formation history,
using different wavelength ranges: λ ∈ [1000, 9500] Å (solid line)
and λ ∈ [3200, 9500] Å (dashed line). Please note that these correspond
to the total number of non-zero components in the solution
vector cκ and not to the number of recovered stellar populations.
Here we show results for γ = 0.3 Gyr^−1. Rather than being physically
motivated, our choice of γ reflects a SFH which is not
so steep as to essentially mimic a single old burst, but
which is also not completely dominated by recent star
formation. In all cases the metallicity in each bin is ran-
domly set. Figure 4 shows a typical example from each type.
Figure 5 shows the distribution of Gx, GZ and of the recov-
ered total masses for a sample of 50 galaxies. We see differ-
ences between the two cases. Firstly, in dual bursts galaxies,
we seem to do better in recovering data from significant in-
dividual bins, but worse in overall mass. This reflects the
fact that Gx is dominated by the fractional errors in the
most significant bins, but the total mass can be affected by
small flux contributions in old bins which can have large
masses. On the other hand, with an exponentially decaying
star formation rate, we do worse overall (although this is
mainly a reflection that more bins have significant contribu-
tions to the flux) but we recover the total mass of the galaxy
exceptionally well.
3.2 Wavelength range
Wavelength range is an important factor in this sort of
analysis, as different parts of the spectrum will help to break
different degeneracies. Since we are primarily interested in
SDSS galaxies, we have studied how well VESPA does in
the more realistic wavelength range of λ ∈ [3200, 9500] Å.
Figure 4. Two examples of VESPA's analysis on synthetic galaxies. The top panels show the original spectrum in the dark line (black
in the online version) and fitted spectrum in the lighter line (red in the online version). The middle panels show the input (dashed, red)
and the recovered (solid, black) star formation rates and the bottom panel shows the input (dashed, red) and recovered (solid, black)
metallicities per bin. Note that even though many of the recovered metallicities are wrong, these tend to correspond to bins with very
little star formation, and are therefore virtually unconstrained.
Figure 5. The distribution of Gx, GZ and total mass recovered for 50 galaxies with a SNR per pixel of 50. Solid lines correspond to
dual bursts and dashed lines to exponentially decaying ones. See text in Section 3.1 for details.
Figure 6 shows the results for the same galaxies shown in
Figure 4, obtained with the new wavelength range. In these
particular cases, we notice a more pronounced difference in
the dual bursts galaxy, but looking at a more substantial
sample of galaxies shows that this is not generally the case.
Figure 7 shows Gx, GZ and total mass recovered for 50
exponentially decaying star formation history galaxies, with
a signal to noise ratio of 50 and the two different wavelength
ranges. We do not see a significant change in either case,
and we observe a less significant difference in the
dual-burst galaxies (not plotted here).
We find it instructive to keep track of how many parame-
ters we recover in total, as we change any factors. Figure
8 shows a histogram of the total number of non-zero
parameters we recovered from our sample galaxies with
exponentially-decaying star formation histories and both
wavelength ranges. Note that these are the components of
the solution vector cκ which are non-zero - they do not
represent a number of recovered stellar populations. In this
case there is a clear decrease in the number of recovered
parameters, suggesting a wider wavelength range is a useful
way to increase resolution in parameter space.
Figure 6. Same galaxies as in Figure 4, but results obtained by using a smaller wavelength range. The goodness-of-fit in data space is
still excellent, but it becomes more difficult to break certain degeneracies.
Figure 7. The distribution of Gx, GZ and total mass recovered for 50 galaxies with a SNR per pixel of 50 and two different wavelength
coverages. Solid line corresponds to λ ∈ [1000, 9500] Å and dashed line to λ ∈ [3200, 9500] Å.
3.3 Noise
It is of interest to vary the signal-to-noise ratio in the
synthetic spectra. We have repeated the studies detailed
in the two previous sections with varying values of noise,
and we investigate how this affects both the quality of the
solutions and their resolution in parameter space.
Figure 10 shows how the recovered number of parameters
changed by increasing the noise in the galaxies with an
exponentially decaying star formation rate and wide wave-
length range. In this case the increase in the noise leads to a
significant reduction of the number of parameters recovered
for each galaxy. This behaviour is equally clear for different
star formation histories and different wavelength coverage,
and is directly caused by the stopping criterion defined in
Section 2.2.2.
The quality of the solutions is also affected by this increase
in noise, as can be seen in figure 9, where we have plotted Gx,
GZ and the total recovered mass for two different values of
SNR. The quality of the solutions decreases with the higher
noise levels, as is to be expected. However, a more interest-
ing question to ask is whether this decrease in the quality
of the solutions would indeed be more pronounced without
the SVD stopping criterion. Figure 11 shows a comparison
between Gx obtained as we have described and obtained
without any stopping mechanism (so letting our search go
to the highest possible resolution and taking the final so-
lution) for 50 galaxies with an exponentially decaying star
formation history and a signal-to-noise ratio of 20.
Figure 9. The distribution of Gx, GZ and total mass recovered for 50 galaxies with an exponentially decaying star formation history and
different signal-to-noise ratios. Solid lines correspond to SNR = 50 and dashed lines to SNR = 20. See text in Section 3.3 for details.
Figure 10. The recovered number of non-zero parameters as we
change the signal-to-noise ratio of the data from 50 (solid line) to 20 (dashed
line), in a sample of galaxies with an exponentially decaying star
formation rate. Please note that these correspond to the total
number of non-zero components in the solution vector cκ and
not to the number of recovered stellar populations.
The results show clearly that there is a significant advantage in
using the SVD stopping criterion. Naturally, the goodness
of fit in data space is consistently better as we increase the
number of parameters but this improvement is illusory - the
parameter recovery is worse. This is exactly the expected be-
haviour - we choose to sacrifice resolution in parameter space
in favour of a more robust solution - even though naively one
could think a lower χ2 solution would indicate a better solu-
tion. The significance of this improvement changes with the
amount of noise and wavelength range of the data (and to
a lesser extent with type of star formation history) but we
observed an improvement in all cases we have studied.
Figure 11. Testing the SVD stopping criterion. Plots show good-
ness of fit Gx for the solution of 50 galaxies obtained with and
without the SVD stopping criterion. We see that recovering
only as many parameters as the data warrant gives improved
parameter estimation in almost all cases, and a striking improve-
ment in many.
As expected, further decreasing the signal-to-noise ratio
leads to a further degradation of the recovered solutions.
This is accompanied by a corresponding increase in the error bars
and correlation matrices, but for SNR ≈ 10 or less
it becomes very difficult to recover any meaningful informa-
tion from individual spectra.
3.4 Dust
In this section we use simulated galaxies to study the effect
of dust in our solutions. As explained in Section 2.1.1,
due to the non-linear nature of the problem, we cannot
include dust as one of the free parameters analysed by
SVD.
Figure 12. Testing the recovery of τ^ISM_V for 50 galaxies with
an exponentially-decaying star formation history (triangles) and
50 galaxies formed with a random combination of dual bursts
(stars). The input values are randomly chosen and continuously
distributed between 0 and 2. The recovered values are chosen from
a tabulated grid between 0 and 4.
Instead, we fit for a maximum of two dust parameters
using a brute-force approach which aims to minimise χ² in
data-space by trying out a series of values for τ^ISM_V and τ^BC_V.
For each galaxy we assign random values of τ^ISM_V ∈ [0, 2]
and τ^BC_V ∈ [1, 2] and we are interested in how well we
recover these parameters and any possible degeneracies.
Figure 12 shows the input and recovered values for τ^ISM_V for
galaxies with a signal-to-noise ratio of 50, and which were
analysed using the wavelength range λ ∈ [3200, 9500]Å. We
show results for two different cases of star formation his-
tory: 50 galaxies with an exponentially-decaying SFR and
50 galaxies formed by dual-bursts. We observe a good recov-
ery of τ^ISM_V in both cases, especially at low optical depths.
However, we mostly observe a poor recovery of τ^BC_V,
especially at high optical depths. This is unsurprisingly
flagging up a certain level of degeneracy between mass and
degree of extinction, which gets worse as the optical depth
increases. Essentially, it becomes difficult to distinguish
between a highly obscured massive population and a less
massive population surrounded by less dust. It is worth
keeping in mind that young populations are affected by
both dust components simultaneously, and generally, even
though the recovery of the second dust parameter may
not be accurate, it allows for a better estimation of the
dominant dust component.
This can be tested by simulating galaxies with a two-component
dust model and by analysing them using both a
single-component model and a two-component model. For example,
when using the more sophisticated model, we noted that the
mean error on τ^ISM_V for a subsample of dual-burst galaxies
(synthesised as explained in Section 3.1, but chosen to have
young star formation) was reduced from 35 to 28 per cent.
This simple test also revealed that we are less likely to
underestimate the mass of young populations by allowing
an extra dust component, but that we are also introducing
an extra degeneracy, especially so in the case of faint young
populations. However, we feel that the two-parameter dust
model brings more advantages than disadvantages, with
the caveat being that dusty young populations can be
poorly constrained. In any case, we note that each galaxy
is always analysed with a one-parameter model before
being potentially analysed with a two-parameter model,
and both solutions are kept and always available for analysis.
Finally, our test also partly justifies our choice to first
run a single dust component model and only apply a two-
component model if we detect stars in the first two bins -
we find that although a one-component model might un-
derestimate the amount of young stars, it does not fail
to detect them. We repeated a similar test on real data,
by analysing the same sample with a one- and a two-parameter
dust model. We found similar results, with a one-parameter
model failing to yield star formation in young bins
only around 1 per cent of the time (compared to the two-
parameter model), and only in cases where the contribution
of the light from the young populations was very small (of
the order of 1 to 2 per cent).
4 RESULTS
In this section we present some results obtained by applying
VESPA to galaxies in the SDSS. Our aim is to analyse these
galaxies, and to produce and publish a catalogue of robust
star formation histories, from which a wealth of information
can then be derived. We leave this for another publication,
but we present here results from a sub-sample of galaxies,
which we used to test VESPA in a variety of ways.
4.1 Handling SDSS data
Prior to any analysis, we processed the SDSS spectroscopic
data, so as to achieve the desired spectral resolution
and mask out any unwanted signal.
The SDSS data-files supply a mask vector, which flags
any potential problems with the measured signal on a
pixel-by-pixel basis. We use this mask to remove any
unwanted regions and emission lines. In practical terms, we
ignore any pixel for which the provided mask value is not
zero.
The BC03 synthetic models produce outputs at a resolution
of 3Å, which we convolve with a Gaussian velocity disper-
sion curve with a stellar velocity σV = 170 km s^−1, this being
a typical value for SDSS galaxies. We take the models’
tabulated wavelength values as a fixed grid and re-bin the
SDSS data into this grid, using an inverse-variance weighted
average. We compute the new error vector accordingly. Note
that the number of SDSS data points averaged into any new
bin is not constant, and that the re-binning process is done
after we have masked out any unwanted pixels. In addition
to the lines yielded by the mask vector, we mask out the
following emission line regions in every spectrum’s rest-
frame wavelength range: [5885-5900, 6702-6732, 6716-6746,
6548-6578, 6535-6565, 6569-6599, 4944-4974, 4992-5022,
4846-4876, 4325-4355, 4087-4117, 3711-3741, 7800-11000] Å.
These re-binned data- and noise-vectors are essentially the
ones we use in our analysis. However, since the linear algebra
assumes white-noise, we pre-whiten the data and construct
a new flux vector F ′j = Fj/σj , which has unit variance,
σ′j = 1 ∀j, and a new model matrix A′ij = Aij/σj.
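A compressed sketch of this preprocessing chain (masking, inverse-variance re-binning onto the model wavelength grid, and pre-whitening); the nearest-bin assignment below is a simplified stand-in for the paper's binning scheme, not the actual VESPA pipeline:
import numpy as np

def preprocess(lam_sdss, flux, err, mask, lam_model):
    # Illustrative preprocessing sketch under the assumptions stated above.
    good = (mask == 0)                                    # ignore every flagged pixel
    lam, f, e = lam_sdss[good], flux[good], err[good]
    idx = np.clip(np.searchsorted(lam_model, lam), 0, len(lam_model) - 1)   # assign pixels to model bins
    w = 1.0 / e**2                                        # inverse-variance weights
    norm = np.bincount(idx, weights=w, minlength=len(lam_model))
    f_new = np.bincount(idx, weights=w * f, minlength=len(lam_model)) / np.maximum(norm, 1e-30)
    sigma_new = 1.0 / np.sqrt(np.maximum(norm, 1e-30))    # error of the weighted mean
    return f_new / sigma_new, np.ones_like(f_new)         # pre-whitened flux F'_j = F_j / sigma_j, unit errors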
4.2 Duplicate galaxies
There are a number of galaxies in the SDSS database
which have been observed more than once, for a variety
of reasons. This provides an opportunity to check how
variations in observation-dependent corrections affect the
results obtained by VESPA.
We have used a subset of the sample of duplicate objects in
Brinchmann et al. (2004)1 to create two sets of observations
for 2000 galaxies, which we named list A and list B.
We are interested in seeing how the errors we estimate
for our results compare to errors introduced by intrinsic
variations caused by changing the observation conditions
(such as quality of the spectra, placement of the fibre, sky
subtraction or spectrophotometric calibrations).
Figure 13 shows the average star formation fraction as a
function of lookback time for both sets of observations. The
error bars shown are errors on the mean. We see no signs
of being dominated by systematics when estimating the
star formation fraction of a sample of galaxies.
Figure 14 shows the total stellar mass obtained for a set of
500 galaxies in both observations (details of how we estimate
the total stellar mass of a galaxy are included in section 4.4).
The error bars are obtained directly from the estimated
covariance matrix C(x) (equation 22). Even though most of
the galaxy duplicates produce mass estimates in agreement
with each other given the error estimates, a minority
does not. Upon inspection, these galaxies show significant
differences in their continuum, but after further investiga-
tion it remains unclear what motivates such a difference.
The simplest explanation is that the spectrophotometric
calibration differs significantly between both observations,
and that might have been the reason the plate or object
was re-observed. Whatever the reason, however, the clear
conclusion is that stellar mass estimates are highly sensitive
to changes in the spectrum continuum, and the errors we es-
timate from the covariance matrix alone might be too small.
We did not find any signs of a systematic bias in any of the
analyses we carried out.
1 Available at http://www.mpa-garching.mpg.de/SDSS/
Figure 13. Average star formation fraction as a function of lookback
time for the 2000 galaxies in list A (solid line) and list B
(dashed line). The error bars shown are the error bars on the
mean for each age bin. We show only the errors from list A to
avoid cluttering - the errors from list B are of similar amplitude.
Figure 14. Total stellar mass recovered for two sets of observations
of 500 galaxies in the main galaxy sample. The error bars
are calculated from C(x).
4.3 Real fits
In this section we discuss the quality of the fits to SDSS
galaxies obtained with VESPA.
Figure 15. The distribution of reduced χ² values for a sample
of 360 galaxies analysed by VESPA.
As explained in Section 2, VESPA finds the best-fit solution
in a χ² sense for a given parametrisation, which is self-regulated
in order not to allow an excessive number of fitting
parameters. We have shown that this self-regularization
gives a better solution in parameter space (Figure 11),
despite often not allowing the parametrization which would
yield the best fit in data space (Figure 3). However, our
aim is still to find a solution which gives a good fit to the
real spectrum. Figure 15 shows the 1-point distribution of
reduced χ² values for one plate of galaxies. This
distribution peaks at around χ²_reduced = 1.3, and Figure 16
shows a fit to one of the galaxies with a typical value of the
goodness of fit.
It is worth noting that the majority of the fits which are
most pleasing to the eye correspond to the ones with a
high signal-to-noise ratio and a high value of reduced χ². One
would expect the best fits to come from the galaxies with
the best signal. However, we believe the fact that they do
not is not a limitation of the method, but a limitation of
the modelling. There are a number of reasons why VESPA
would be unable to produce very good fits to the SDSS
data. One is the adoption of a single velocity dispersion
(170 km s^−1), which could easily be improved upon at the
expense of CPU time. However, the dominant reason is
likely to be a lack of accuracy in stellar and dust modelling
- whereas BC03 models can and do reproduce a lot of the
observed features, it is also well known that this success
is limited, as there are certain spectral features not yet
accurately modelled, or even modelled at all. There are
similar deficiencies in dust models and dust extinction
curves. The effect of the choice of modelling should not be
overlooked, and we refer the reader to Section 4.5 of
Panter et al. (2006), where these issues are discussed.
4.4 VESPA and MOPED
In this Section we take the opportunity to compare the
results from VESPA and MOPED, obtained from the same
sample of galaxies.
Figure 17. The recovered average star formation history for the
821 galaxies as recovered by VESPA (solid line) and MOPED
(dashed line). Both were initially normalised such that the sum
over all bins is 1, and the MOPED line was then adjusted by
11/16 to account for the different number of bins used in each
method, to facilitate direct comparison.
The VESPA solutions used here are
obtained with a one-parameter dust model, to allow a fairer
comparison between the two methods. Both methods
make similar assumptions regarding stellar models, but
MOPED uses an LMC (Gordon et al. 2003) dust extinc-
tion curve, and single screen modelling for all optical depths.
Our sample consists of two plates from the SDSS DR3
(Abazajian et al. 2005) (plates 0288 and 0444), from which
we analyse a total of 821 galaxies. We are mainly inter-
ested in comparing the results in a global sense. MOPED in
its standard configuration attempts to recover 23 parame-
ters (11 star formation fractions, 11 metallicities and 1 dust
parameter), so we might expect considerable degeneracies.
Indeed, in the past the authors of MOPED have cautioned
against using it to interpret individual galaxy spectra too
precisely. We have observed degeneracies between adjacent
bins in MOPED, but on the other hand a typical MOPED
solution has many star formation fractions which are es-
sentially zero, so the number of significant contributions is
always much less than 23.
Figure 17 shows the recovered average star formation
history for the 821 galaxies using both methods. In the
case of VESPA, solutions parametrized by low-resolution
bins had to be re-parametrized in high-resolution bins, so
that a common grid across all galaxies could be used. This
was done using the weights given by (21). The lines show a
remarkably good agreement between the two methods.
Figure 16. Typical fit to a galaxy from the SDSS. The dark line is the real data (arbitrary normalisation), and the lighter line (red on
the online version) is VESPA's fit to the data.
Having recovered a star formation history for each galaxy,
one can then estimate the stellar mass of a galaxy. We calculated
this quantity for all galaxies using the solutions from
both methods, and with similar assumptions regarding cosmological
parameters and fibre-size corrections. Explicitly,
we have done the following:
(i) We converted from flux to luminosity assuming the set
of cosmological parameters given by Spergel et al. (2003).
(ii) We recovered the initial mass in each age bin using
each method.
(iii) We calculated the remaining present-day mass for
each population after recycling processes. This information
is supplied by the synthetic stellar models, as a function of
age and metallicity.
(iv) We summed this across all bins to calculate the total
stellar present-day mass in the fibre aperture, M .
(v) We corrected for the aperture size by scaling up the
mass to Mstellar using the Petrosian and fibre magnitudes
in the z-band, Mp(z) and Mf(z), with Mstellar = M × 10^{0.4[Mp(z)−Mf(z)]}.
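Steps (iv) and (v) combine into a short calculation; in this sketch the surviving mass fractions of step (iii) are taken as an input supplied by the stellar models:
import numpy as np

def total_stellar_mass(mass_formed, surviving_fraction, m_petro_z, m_fibre_z):
    """Present-day stellar mass with the fibre-aperture correction (illustrative).

    mass_formed          : (N,) initial mass formed in each age bin (step ii)
    surviving_fraction   : (N,) fraction of that mass surviving recycling (step iii)
    m_petro_z, m_fibre_z : Petrosian and fibre z-band magnitudes (step v)
    """
    m_fibre = np.sum(mass_formed * surviving_fraction)          # step (iv): mass within the fibre
    return m_fibre * 10.0 ** (0.4 * (m_petro_z - m_fibre_z))    # step (v): aperture correction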
Figure 18 shows the galaxy masses as recovered
by MOPED and by VESPA. We see considerable agreement
between VESPA and MOPED. Over 75 per cent of
galaxies have 0.5 ≤ M_VESPA/M_MOPED ≤ 1.5. There is a
tail of around 10 per cent of galaxies where VESPA recov-
ers 2 to 4 times the mass recovered by MOPED. The main
reason for this difference is in the dust model used - we find
a correlation between dust extinction and the ratio of the
two mass estimates. This again reflects the fact that total
stellar mass estimates are highly sensitive to changes in the
spectrum continuum (see also section 4.2).
Our sub-sample of 821 includes galaxies with a wide range
of signal-to-noise ratios, star formation histories and even
wavelength ranges (mainly due to each galaxy having dif-
ferent masks applied to it, according to the quality of the
spectroscopic data). Figure 19 shows the number of recov-
ered non-zero parameters in the sample, using VESPA. On
average, it falls below the synthetic examples studied in Section 3.
Figure 18. Galaxy stellar mass (in units of solar masses) as recovered
by VESPA and MOPED for a sub-sample of 821 SDSS
galaxies. The small percentage of galaxies with significantly larger
VESPA masses have large extinction. The difference is accounted
for by the fact that MOPED and VESPA use different dust models.
This is not surprising, though, as each galaxy will
have a unique and somewhat random combination of char-
acteristics which will lead to a different number of param-
eters being recovered. The total combination of these sets
of characteristics would be impossible to investigate using
the empirical method described in Section 3, and here lies
the advantage of VESPA of dynamically adapting to each
individual case. Also important to note is the fact that the
wavelength coverage is normally not continuous in an SDSS
galaxy, due to masked regions. This was not modelled in
Section 3, and is likely to further reduce the number of re-
covered parameters in any given case.
Perhaps more useful is to translate this number into a
number of recovered significant stellar populations for each
galaxy. We define a significant component as a stellar pop-
ulation which contributes 5 per cent or more to the total
flux. Figure 20 shows the distribution of the number of sig-
nificant components for our sub-sample of galaxies, as re-
covered by MOPED and VESPA. It is interesting to note
that both methods recover on average a similar number
of components, even though MOPED has no explicit self-
regularization mechanism, as VESPA clearly does.
5 CONCLUSIONS
We have developed a new method to recover star formation
and metallicity histories from integrated galactic spectra
- VESPA. Motivated by the current limitations of other
methods which aim to do the same, our goal was to develop
an algorithm which is robust on a galaxy-by-galaxy basis.
Figure 19. Number of non-zero parameters in solutions recovered
from 821 SDSS galaxies with VESPA. Please note that these
correspond to the total number of non-zero components in the
solution vector cκ and not to the number of recovered stellar
populations. For information about the number of recovered
populations see Figure 20.
Figure 20. The distribution of the total number of recovered
stellar populations which contribute 5 per cent or more to the
total flux of the galaxy, as recovered from MOPED (dashed line)
and VESPA (solid line).
VESPA works with a dynamic parametrization of the
star formation history, and is able to adapt the number
of parameters it attempts to recover from a given galaxy
according to its spectrum. In this paper we tested VESPA
against a series of idealised synthetic situations, and against
SDSS data by comparing our results with those obtained
with the well-established code, MOPED.
Using synthetic data we found the quality and resolution
of the recovered solutions varied with factors such as type
of star formation history, noise in the data and wavelength
coverage. In the vast majority of cases, and within the
estimated errors and bin-correlations, we observed a reliable
reproduction of the input parameters. As the signal-to-noise
decreases, it becomes increasingly difficult to recover robust
solutions. Whereas our method cannot guarantee a perfect
solution, we have shown that the self-regularization we im-
posed helped obtain a cleaner solution in an overwhelming
majority of the cases studied.
In the real-data analysis, we have studied possible effects
from systematics using duplicate observations of the same
set of galaxies, and have also compared VESPA's results to
MOPED's, obtained using the same data sample. We
found that in the majority of cases our results are robust
to possible systematic effects, but that in certain cases,
and particularly when calculating stellar masses, VESPA
might underestimate the mass errors. However, we found
no systematic bias in any of our tests. We have also shown
that VESPA’s results are in good agreement with those
of MOPED for the same sample of galaxies. VESPA and
MOPED are two fundamentally different approaches to
the same problem, and we found good agreement both in
a global sense by looking at the average star formation
history of the sample, and in an individual basis by looking
at the recovered stellar masses of each galaxy. VESPA
typically recovered between 2 to 5 stellar populations from
the SDSS sample.
VESPA’s ability to adapt dynamically to each galaxy and
to extract only as much information as the data warrant is
a completely new way to tackle the problem of extracting
information from galactic spectra. Our claim is that, for
the most part, VESPA’s results are robust for any given
galaxy, but our claim comes with two words of caution. The
first one concerns very noisy galaxies - in extreme cases
(SNR≈10 or less, at a resolution of 3Å), it becomes very
difficult to extract any meaningful information from the
data. This uncertainty is evident in the large error bars
and bin-correlations, and the solutions can be essentially
unconstrained even at low-resolutions. We are therefore
limited when it comes to analysing individual high-noise
galaxies, which is the case for many SDSS objects. Our
second word of caution concerns the stellar models used to
analyse real galaxies - any method can only do as well as the
models it bases itself upon. We are limited in our knowledge
and ability to reproduce realistic synthetic models of stellar
populations, and this is inevitably reflected in the solutions
we obtain by using them. On the plus side, VESPA works
with any set of synthetic models and can take advantage of
improved versions as they are developed.
VESPA is fast enough to use on large spectroscopic sam-
ples (a typical SDSS galaxy takes 1 minute on an average
workstation), and we are in the process of analysing SDSS’s
Data Release 5 (DR5), which consists of roughly half a mil-
lion galaxies. Our first aim is to publish and exploit a cata-
logue of robust star formation histories, which we hope will
be a valuable resource to help constrain models of galaxy
formation and evolution.
6 ACKNOWLEDGMENTS
We are grateful to the anonymous referee for a very
thoughtful report which led to material improvements in
the paper.
RT is funded by the Fundação para a Ciência e a Tecnologia
under the reference PRAXIS SFRH/BD/16973/04. RJ’s
research is supported by the NSF through grant PIRE-
0507768 and AST-0408698 to the Atacama Cosmology
Telescope.
Funding for the SDSS and SDSS-II has been provided by the
Alfred P. Sloan Foundation, the Participating Institutions,
the National Science Foundation, the U.S. Department of
Energy, the National Aeronautics and Space Administra-
tion, the Japanese Monbukagakusho, the Max Planck Soci-
ety, and the Higher Education Funding Council for England.
The SDSS Web Site is http://www.sdss.org/. The SDSS is
managed by the Astrophysical Research Consortium for the
Participating Institutions. The Participating Institutions
are the American Museum of Natural History, Astrophys-
ical Institute Potsdam, University of Basel, University of
Cambridge, Case Western Reserve University, University of
Chicago, Drexel University, Fermilab, the Institute for Ad-
vanced Study, the Japan Participation Group, Johns Hop-
kins University, the Joint Institute for Nuclear Astrophysics,
the Kavli Institute for Particle Astrophysics and Cosmol-
ogy, the Korean Scientist Group, the Chinese Academy
of Sciences (LAMOST), Los Alamos National Laboratory,
the Max-Planck-Institute for Astronomy (MPIA), the Max-
Planck-Institute for Astrophysics (MPA), New Mexico State
University, Ohio State University, University of Pittsburgh,
University of Portsmouth, Princeton University, the United
States Naval Observatory, and the University of Washington.
REFERENCES
Abazajian K., et al., 2005, AJ, 129, 1755
Abraham R. G., et al., 2007, preprint (astro-ph/0701779)
Alongi M., Bertelli G., Bressan A., Chiosi C., Fagotto F.,
Greggio L., Nasi E., 1993, A&AS, 97, 851
Barber T., Meiksin A., Murphy T., 2006, preprint
(astro-ph/0611053)
Bressan A., Fagotto F., Bertelli G., Chiosi C., 1993, A&AS,
100, 647
Brinchmann J., Charlot S., White S. D. M., Tremonti C.,
Kauffmann G., Heckman T., Brinkmann J., 2004, MN-
RAS, 351, 1151
Bruzual G., Charlot S., 2003, MNRAS, 344, 1000
Bundy K., et al., 2006, ApJ, 651, 120
Chabrier G., 2003, PASP, 115, 763
Charlot S., Fall S. M., 2000, ApJ, 539, 718
Cid Fernandes R., Asari N. V., Sodré L., Stasińska G., Ma-
teus A., Torres-Papaqui J. P., Schoenell W., 2007, MN-
RAS, 375, L16
Cid Fernandes R., Gu Q., Melnick J., Terlevich E., Ter-
levich R., Kunth D., Rodrigues Lacerda R., Joguet B.,
2004, MNRAS, 355, 273
Erb D. K., Steidel C. C., Shapley A. E., Pettini M., Reddy
N. A., Adelberger K. L., 2006, ApJ, 647, 128
Fagotto F., Bressan A., Bertelli G., Chiosi C., 1994a,
A&AS, 104, 365
Fagotto F., Bressan A., Bertelli G., Chiosi C., 1994b,
A&AS, 105, 29
Gallazzi A., Charlot S., Brinchmann J., White S. D. M.,
Tremonti C. A., 2005, MNRAS, 362, 41
Girardi L., Bressan A., Chiosi C., Bertelli G., Nasi E., 1996,
A&AS, 117, 113
Glazebrook K., et al., 2003, ApJ, 587, 55
Gordon K. D., Clayton G. C., Misselt K. A., Landolt A. U.,
Wolff M. J., 2003, ApJ, 594, 279
Heavens A., Panter B., Jimenez R., Dunlop J., 2004, Nat,
428, 625
Heavens A. F., Jimenez R., Lahav O., 2000, MNRAS, 317,
Hopkins A. M., Connolly A. J., Szalay A. S., 2000, AJ, 120,
Kauffmann G., et al., 2003, MNRAS, 341, 33
Kennicutt Jr. R. C., 1998, ApJ, 498, 541
Lawson C., Hanson R., 1974, Solving Least Squares Prob-
lems. Prentice-Hall, Inc.
Madau P., Ferguson H. C., Dickinson M. E., Giavalisco M.,
Steidel C. C., Fruchter A., 1996, MNRAS, 283, 1388
Mathis H., Charlot S., Brinchmann J., 2006, MNRAS, 365,
Noeske K. G., et al., 2007, preprint (astro-ph/0703056)
Ocvirk P., Pichon C., Lançon A., Thiébaut E., 2006, MN-
RAS, 365, 46
Panter B., Heavens A. F., Jimenez R., 2003, MNRAS, 343,
Panter B., Jimenez R., Heavens A. F., Charlot S., 2006,
preprint (astro-ph/0608531)
Spergel D. N., et al., 2003, ApJ Supplement Series, 148,
Stark P. B., Parker R. L., 1995, Computational Statistics,
10, 143
Strauss M. A., et al., 2002, AJ, 124, 1810
Thomas D., Maraston C., Bender R., 2003, MNRAS, 339,
Tremonti C. A., et al., 2004, ApJ, 613, 898
Verma A., Lehnert M. D., Foerster Schreiber N. M., Bremer
M. N., Douglas L., 2007, preprint (astro-ph/0701725)
Worthey G., 1994, ApJ Supplement Series, 95, 107
York D. G., et al., 2000, AJ, 120, 1579
c© 0000 RAS, MNRAS 000, 000–000
http://arxiv.org/abs/astro-ph/0703056
http://arxiv.org/abs/astro-ph/0608531
http://arxiv.org/abs/astro-ph/0701725
|
0704.0943 | Search for gravitational-wave bursts in LIGO data from the fourth
science run | Search for gravitational-wave bursts in LIGO data
from the fourth science run
B Abbott14, R Abbott14, R Adhikari14, J Agresti14,
P Ajith2, B Allen2,51, R Amin18, S B Anderson14,
W G Anderson51, M Arain39, M Araya14, H Armandula14,
M Ashley4, S Aston38, P Aufmuth36, C Aulbert1, S Babak1,
S Ballmer14, H Bantilan8, B C Barish14, C Barker15,
D Barker15, B Barr40, P Barriga50, M A Barton40,
K Bayer17, K Belczynski24, J Betzwieser17,
P T Beyersdorf27, B Bhawal14, I A Bilenko21,
G Billingsley14, R Biswas51, E Black14, K Blackburn14,
L Blackburn17, D Blair50, B Bland15, J Bogenstahl40,
L Bogue16, R Bork14, V Boschi14, S Bose52, P R Brady51,
V B Braginsky21, J E Brau43, M Brinkmann2, A Brooks37,
D A Brown14,6, A Bullington30, A Bunkowski2,
A Buonanno41, O Burmeister2, D Busby14, R L Byer30,
L Cadonati17, G Cagnoli40, J B Camp22, J Cannizzo22,
K Cannon51, C A Cantley40, J Cao17, L Cardenas14,
M M Casey40, G Castaldi46, C Cepeda14, E Chalkey40,
P Charlton9, S Chatterji14, S Chelkowski2, Y Chen1,
F Chiadini45, D Chin42, E Chin50, J Chow4,
N Christensen8, J Clark40, P Cochrane2, T Cokelaer7,
C N Colacino38, R Coldwell39, R Conte45, D Cook15,
T Corbitt17, D Coward50, D Coyne14, J D E Creighton51,
T D Creighton14, R P Croce46, D R M Crooks40,
A M Cruise38, A Cumming40, J Dalrymple31,
E D’Ambrosio14, K Danzmann36,2, G Davies7, D DeBra30,
J Degallaix50, M Degree30, T Demma46, V Dergachev42,
S Desai32, R DeSalvo14, S Dhurandhar13, M Díaz33,
J Dickson4, A Di Credico31, G Diederichs36, A Dietz7,
E E Doomes29, R W P Drever5, J.-C Dumas50,
R J Dupuis14, J G Dwyer10, P Ehrens14, E Espinoza14,
T Etzel14, M Evans14, T Evans16, S Fairhurst7,14, Y Fan50,
D Fazi14, M M Fejer30, L S Finn32, V Fiumara45,
N Fotopoulos51, A Franzen36, K Y Franzen39, A Freise38,
R Frey43, T Fricke44, P Fritschel17, V V Frolov16, M Fyffe16,
V Galdi46, J Garofoli15, I Gholami1, J A Giaime16,18,
S Giampanis44, K D Giardina16, K Goda17, E Goetz42,
L M Goggin14, G González18, S Gossler4, A Grant40,
S Gras50, C Gray15, M Gray4, J Greenhalgh26,
A M Gretarsson11, R Grosso33, H Grote2, S Grunewald1,
M Guenther15, R Gustafson42, B Hage36, D Hammer51,
http://arxiv.org/abs/0704.0943v3
C Hanna18, J Hanson16, J Harms2, G Harry17, E Harstad43,
T Hayler26, J Heefner14, I S Heng40, A Heptonstall40,
M Heurs2, M Hewitson2, S Hild36, E Hirose31, D Hoak16,
D Hosken37, J Hough40, E Howell50, D Hoyland38,
S H Huttner40, D Ingram15, E Innerhofer17, M Ito43,
Y Itoh51, A Ivanov14, D Jackrel30, B Johnson15,
W W Johnson18, D I Jones47, G Jones7, R Jones40, L Ju50,
P Kalmus10, V Kalogera24, D Kasprzyk38,
E Katsavounidis17, K Kawabe15, S Kawamura23,
F Kawazoe23, W Kells14, D G Keppel14, F Ya Khalili21,
C Kim24, P King14, J S Kissel18, S Klimenko39,
K Kokeyama23, V Kondrashov14, R K Kopparapu18,
D Kozak14, B Krishnan1, P Kwee36, P K Lam4,
M Landry15, B Lantz30, A Lazzarini14, B Lee50, M Lei14,
J Leiner52, V Leonhardt23, I Leonor43, K Libbrecht14,
P Lindquist14, N A Lockerbie48, M Longo45, M Lormand16,
M Lubinski15, H Lück36,2, B Machenschalk1, M MacInnis17,
M Mageswaran14, K Mailand14, M Malec36, V Mandic14,
S Marano45, S Márka10, J Markowitz17, E Maros14,
I Martin40, J N Marx14, K Mason17, L Matone10,
V Matta45, N Mavalvala17, R McCarthy15,
D E McClelland4, S C McGuire29, M McHugh20,
K McKenzie4, J W C McNabb32, S McWilliams22,
T Meier36, A Melissinos44, G Mendell15, R A Mercer39,
S Meshkov14, E Messaritaki14, C J Messenger40,
D Meyers14, E Mikhailov17, S Mitra13, V P Mitrofanov21,
G Mitselmakher39, R Mittleman17, O Miyakawa14,
S Mohanty33, G Moreno15, K Mossavi2, C MowLowry4,
A Moylan4, D Mudge37, G Mueller39, S Mukherjee33,
H Müller-Ebhardt2, J Munch37, P Murray40, E Myers15,
J Myers15, T Nash14, G Newton40, A Nishizawa23,
K Numata22, B O’Reilly16, R O’Shaughnessy24,
D J Ottaway17, H Overmier16, B J Owen32, Y Pan41,
M A Papa1,51, V Parameshwaraiah15, P Patel14,
M Pedraza14, J Pelc17, S Penn12, V Pierro46, I M Pinto46,
M Pitkin40, H Pletsch2, M V Plissi40, F Postiglione45,
R Prix1, V Quetschke39, F Raab15, D Rabeling4,
H Radkins15, R Rahkola43, N Rainer2, M Rakhmanov32,
M Ramsunder32, K Rawlins17, S Ray-Majumder51, V Re38,
H Rehbein2, S Reid40, D H Reitze39, L Ribichini2,
R Riesen16, K Riles42, B Rivera15, N A Robertson14,40,
C Robinson7, E L Robinson38, S Roddy16, A Rodriguez18,
A M Rogan52, J Rollins10, J D Romano7, J Romie16,
R Route30, S Rowan40, A Rüdiger2, L Ruet17, P Russell14,
K Ryan15, S Sakata23, M Samidi14,
L Sancho de la Jordana35, V Sandberg15, V Sannibale14,
S Saraf25, P Sarin17, B S Sathyaprakash7, S Sato23,
P R Saulson31, R Savage15, P Savov6, S Schediwy50,
R Schilling2, R Schnabel2, R Schofield43, B F Schutz1,7,
P Schwinberg15, S M Scott4, A C Searle4, B Sears14,
F Seifert2, D Sellers16, A S Sengupta7, P Shawhan41,
D H Shoemaker17, A Sibley16, J A Sidles49, X Siemens14,6,
D Sigg15, S Sinha30, A M Sintes35,1, B J J Slagmolen4,
J Slutsky18, J R Smith2, M R Smith14, K Somiya2,1,
K A Strain40, D M Strom43, A Stuver32,
T Z Summerscales3, K.-X Sun30, M Sung18, P J Sutton14,
H Takahashi1, D B Tanner39, M Tarallo14, R Taylor14,
R Taylor40, J Thacker16, K A Thorne32, K S Thorne6,
A Thüring36, M Tinto14, K V Tokmakov40, C Torres33,
C Torrie40, G Traylor16, M Trias35, W Tyler14, D Ugolini34,
C Ungarelli38, K Urbanek30, H Vahlbruch36, M Vallisneri6,
C Van Den Broeck7, M Varvella14, S Vass14, A Vecchio38,
J Veitch40, P Veitch37, A Villar14, C Vorvick15,
S P Vyachanin21, S J Waldman14, L Wallace14, H Ward40,
R Ward14, K Watts16, D Webber14, A Weidner2,
M Weinert2, A Weinstein14, R Weiss17, S Wen18, K Wette4,
J T Whelan1, D M Whitbeck32, S E Whitcomb14,
B F Whiting39, C Wilkinson15, P A Willems14,
L Williams39, B Willke36,2, I Wilmut26, W Winkler2,
C C Wipf17, S Wise39, A G Wiseman51, G Woan40,
D Woods51, R Wooley16, J Worden15, W Wu39,
I Yakushin16, H Yamamoto14, Z Yan50, S Yoshida28,
N Yunes32, M Zanolin17, J Zhang42, L Zhang14, C Zhao50,
N Zotov19, M Zucker17, H zur Mühlen36 and J Zweizig14
(LIGO Scientific Collaboration)
1 Albert-Einstein-Institut, Max-Planck-Institut für Gravitationsphysik, D-14476
Golm, Germany
2 Albert-Einstein-Institut, Max-Planck-Institut für Gravitationsphysik, D-30167
Hannover, Germany
3 Andrews University, Berrien Springs, MI 49104 USA
4 Australian National University, Canberra, 0200, Australia
5 California Institute of Technology, Pasadena, CA 91125, USA
6 Caltech-CaRT, Pasadena, CA 91125, USA
7 Cardiff University, Cardiff, CF24 3AA, United Kingdom
8 Carleton College, Northfield, MN 55057, USA
9 Charles Sturt University, Wagga Wagga, NSW 2678, Australia
10 Columbia University, New York, NY 10027, USA
11 Embry-Riddle Aeronautical University, Prescott, AZ 86301 USA
12 Hobart and William Smith Colleges, Geneva, NY 14456, USA
13 Inter-University Centre for Astronomy and Astrophysics, Pune - 411007, India
14 LIGO - California Institute of Technology, Pasadena, CA 91125, USA
15 LIGO Hanford Observatory, Richland, WA 99352, USA
16 LIGO Livingston Observatory, Livingston, LA 70754, USA
17 LIGO - Massachusetts Institute of Technology, Cambridge, MA 02139, USA
18 Louisiana State University, Baton Rouge, LA 70803, USA
19 Louisiana Tech University, Ruston, LA 71272, USA
20 Loyola University, New Orleans, LA 70118, USA
21 Moscow State University, Moscow, 119992, Russia
22 NASA/Goddard Space Flight Center, Greenbelt, MD 20771, USA
23 National Astronomical Observatory of Japan, Tokyo 181-8588, Japan
24 Northwestern University, Evanston, IL 60208, USA
25 Rochester Institute of Technology, Rochester, NY 14623, USA
26 Rutherford Appleton Laboratory, Chilton, Didcot, Oxon OX11 0QX United
Kingdom
27 San Jose State University, San Jose, CA 95192, USA
28 Southeastern Louisiana University, Hammond, LA 70402, USA
29 Southern University and A&M College, Baton Rouge, LA 70813, USA
30 Stanford University, Stanford, CA 94305, USA
31 Syracuse University, Syracuse, NY 13244, USA
32 The Pennsylvania State University, University Park, PA 16802, USA
33 The University of Texas at Brownsville and Texas Southmost College,
Brownsville, TX 78520, USA
34 Trinity University, San Antonio, TX 78212, USA
35 Universitat de les Illes Balears, E-07122 Palma de Mallorca, Spain
36 Universität Hannover, D-30167 Hannover, Germany
37 University of Adelaide, Adelaide, SA 5005, Australia
38 University of Birmingham, Birmingham, B15 2TT, United Kingdom
39 University of Florida, Gainesville, FL 32611, USA
40 University of Glasgow, Glasgow, G12 8QQ, United Kingdom
41 University of Maryland, College Park, MD 20742 USA
42 University of Michigan, Ann Arbor, MI 48109, USA
43 University of Oregon, Eugene, OR 97403, USA
44 University of Rochester, Rochester, NY 14627, USA
45 University of Salerno, 84084 Fisciano (Salerno), Italy
46 University of Sannio at Benevento, I-82100 Benevento, Italy
47 University of Southampton, Southampton, SO17 1BJ, United Kingdom
48 University of Strathclyde, Glasgow, G1 1XQ, United Kingdom
49 University of Washington, Seattle, WA, 98195
50 University of Western Australia, Crawley, WA 6009, Australia
51 University of Wisconsin-Milwaukee, Milwaukee, WI 53201, USA
52 Washington State University, Pullman, WA 99164, USA
E-mail: [email protected]
Abstract. The fourth science run of the LIGO and GEO 600 gravitational-wave
detectors, carried out in early 2005, collected data with significantly lower noise
than previous science runs. We report on a search for short-duration gravitational-
wave bursts with arbitrary waveform in the 64–1600 Hz frequency range appearing
in all three LIGO interferometers. Signal consistency tests, data quality cuts, and
auxiliary-channel vetoes are applied to reduce the rate of spurious triggers. No
gravitational-wave signals are detected in 15.5 days of live observation time; we
set a frequentist upper limit of 0.15 per day (at 90% confidence level) on the rate
of bursts with large enough amplitudes to be detected reliably. The amplitude
sensitivity of the search, characterized using Monte Carlo simulations, is several
times better than that of previous searches. We also provide rough estimates
of the distances at which representative supernova and binary black hole merger
signals could be detected with 50% efficiency by this analysis.
PACS numbers: 04.80.Nn, 95.30.Sf, 95.85.Sz
Submitted to Classical and Quantum Gravity
1. Introduction
Large interferometers are now being used to search for gravitational waves with
sufficient sensitivity to be able to detect signals from distant astrophysical sources.
At present, the three detectors of the Laser Interferometer Gravitational-wave
Observatory (LIGO) project [1] have achieved strain sensitivities consistent with their
design goals, while the GEO 600 [2] and Virgo [3] detectors are in the process of being
commissioned and are expected to reach comparable sensitivities. Experience gained
with these detectors, TAMA300 [4], and several small prototype interferometers has
nurtured advanced designs for future detector upgrades and new facilities, including
Advanced LIGO [5], Advanced Virgo [6], and the Large-scale Cryogenic Gravitational-
wave Telescope (LCGT) proposed to be constructed in Japan [7]. The LIGO Scientific
Collaboration (LSC) carries out the analysis of data collected by the LIGO and
GEO 600 gravitational-wave detectors, and has begun to pursue joint searches with
other collaborations (see, for example, [8]) as the network of operating detectors
evolves.
As the exploration of the gravitational-wave sky can now be carried out with
greater sensitivity than ever before, it is important to search for all plausible signals
in the data. In addition to well-modeled signals such as those from binary inspirals [9]
and spinning neutron stars [10], some astrophysical systems may emit gravitational
waves which are modeled imperfectly (if at all) and therefore cannot reliably be
searched for using matched filtering. Examples of such imperfectly-modeled systems
include binary mergers (despite recent advances in the fidelity of numerical relativity
calculations for at least some cases; see, for example, [11]) and stellar core collapse
events. For the latter, several sets of simulations have been carried out in the past
(see, for example, [12] and [13]), but more recent simulations have suggested a new
resonant core oscillation mechanism, driven by in-falling material, which appears to
power the supernova explosion and also to emit strong gravitational waves [14, 15].
Given the current uncertainties regarding gravitational wave emission by systems such
as these, as well as the possibility of detectable signals from other astrophysical sources
which are unknown or for which no attempt has been made to model gravitational
wave emission, it is desirable to cast a wide net.
In this article, we report the results of a search for gravitational-wave “bursts”
that is designed to be able to detect short-duration (≪ 1 s) signals of arbitrary form
as long as they have significant signal power in the most sensitive frequency band
of LIGO, considered here to be 64–1600 Hz. This analysis uses LIGO data from
the fourth science run carried out by the LSC, called S4, and uses the same basic
methods as previous LSC burst searches [17, 18] that were performed using data from
the S2 and S3 science runs. (A burst search was performed using data from the S1
science run using different methods [16].) We briefly describe the instruments and
data collection in section 2. In sections 3 and 4 we review the two complementary
signal processing methods—one based on locating signal power in excess of the baseline
noise and the other based on cross-correlating data streams—that are used together
to identify gravitational-wave event candidates. We note where the implementations
have been improved relative to the earlier searches and describe the signal consistency
tests which are based on the outputs from these tools. Section 5 describes additional
selection criteria which are used to “clean up” the data sample, reducing the average
rate of spurious triggers in the data. The complete analysis “pipeline” finds no event
candidates that pass all of the selection criteria, so we present in section 6 an upper
limit on the rate of gravitational-wave events which would be detected reliably by our
pipeline.
The detectability of a given type of burst, and thus the effective rate limit for a
particular astrophysical source model, depends on the signal waveform and amplitude;
in general, the detection efficiency (averaged over sky positions and arrival times) is less
than unity. We do not attempt a comprehensive survey of possible astrophysical signals
in this paper, but use a Monte Carlo method with a limited number of ad-hoc simulated
signals to evaluate the amplitude sensitivity of our pipeline, as described in section 7.
Overall, this search has much better sensitivity than previous searches, mostly due to
Figure 1. Simplified optical layout of a LIGO interferometer. Labels from the figure: a stabilized laser; a mode cleaner, which smooths out fluctuations of the input beam and passes only the fundamental Gaussian beam mode; a power recycling mirror (2.7% transmission), which increases the stored power by a factor of ~45, reducing the photostatistics noise; a beam splitter (50% transmission); Fabry-Perot arm cavities of length 2 km or 4 km (input mirror and end mirror), which increase the sensitivity to small length changes by a factor of ~140; and a photodiode at the output.
using lower-noise data and partly due to improvements in the analysis pipeline. In
section 8 we estimate the amplitude sensitivity for certain modeled signals of interest
and calculate approximate distances at which those signals could be detected with 50%
efficiency. This completed S4 search sets the stage for burst searches now underway
using data from the S5 science run of the LIGO and GEO 600 detectors, which benefit
from much longer observation time and will be able to detect even weaker signals.
2. Instruments and data collection
LIGO comprises two observatory sites in the United States with a total of three
interferometers. As shown schematically in figure 1, the optical design is a Michelson
interferometer augmented with additional partially-transmitting mirrors to form
Fabry-Perot cavities in the arms and to “recycle” the outgoing beam power by
interfering it with the incoming beam. Servo systems are used to “lock” the mirror
positions to maintain resonance in the optical cavities, as well as to control the mirror
orientations, laser frequency and intensity, and many other degrees of freedom of the
apparatus. Interference between the two beams recombining at the beam splitter is
detected by photodiodes, providing a measure of the difference in arm lengths that
would be changed by a passing gravitational wave. The large mirrors which direct
the laser beams are suspended from wires, with the support structures isolated from
ground vibrations using stacks of inertial masses linked by damped springs. Active
feed-forward and feedback systems provide additional suppression of ground vibrations
for many of the degrees of freedom. The beam path of the interferometer, excluding
the laser light source and the photodiodes, is entirely enclosed in a vacuum system.
The LIGO Hanford Observatory in Washington state has two interferometers within
the same vacuum system, one with arms 4 km long (called H1) and the other with
arms 2 km long (called H2). The LIGO Livingston Observatory in Louisiana has a
single interferometer with 4 km long arms, called L1.
The response of an interferometer to a gravitational wave arriving at local time
t depends on the dimensionless strain amplitude and polarization of the wave and its
arrival direction with respect to the arms of the interferometer. In the low-frequency
limit, the differential strain signal detected by the interferometer (effective arm length
difference divided by the length of an arm) can be expressed as a projection of the
two polarization components of the gravitational wave, h+(t) and h×(t), with antenna
response factors F+(α, δ, t) and F×(α, δ, t):
hdet(t) = F+(α, δ, t)h+(t) + F×(α, δ, t)h×(t) , (1)
where α and δ are the right ascension and declination of the source. F+ and F× are
distinct for each interferometer site and change slowly with t over the course of a
sidereal day as the Earth’s rotation changes the orientation of the interferometer with
respect to the source location.
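As an illustration of equation (1), the following minimal Python sketch projects the two wave polarizations onto a single detector. The antenna factors F+ and F× are taken as precomputed inputs for a given source direction and arrival time (their computation from the detector geometry is not shown), and all names are illustrative rather than part of any LIGO software.

```python
import numpy as np

def detector_strain(h_plus, h_cross, f_plus, f_cross):
    """Low-frequency-limit detector response of equation (1):
    h_det(t) = F+ h+(t) + Fx hx(t).

    h_plus, h_cross : arrays sampling the two wave polarizations
    f_plus, f_cross : antenna response factors for the source direction
                      (alpha, delta) at the arrival time, treated here as
                      precomputed scalars for one detector site.
    """
    return f_plus * np.asarray(h_plus) + f_cross * np.asarray(h_cross)
```

Because F+ and F× differ between the Hanford and Livingston sites and vary slowly with sidereal time, the same (h+, h×) pair generally produces a different h_det time series in each interferometer.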
The electrical signal from the photodiode is filtered and digitized continuously at a
rate of 16 384 Hz. The time series of digitized values, referred to as the “gravitational-
wave channel” (GW channel), is recorded in a computer file, along with a timestamp
derived from the Global Positioning System (GPS) and additional information. The
relationship between a given gravitational-wave signal and the digitized time series is
measured in situ by imposing continuous sinusoidal position displacements of known
amplitude on some of the mirrors. These are called “calibration lines” because they
appear as narrow line features in a spectrogram of the GW channel.
Commissioning the LIGO interferometers has required several years of effort and
was the primary activity through late 2005. Beginning in 2000, a series of short data
collection runs was begun to establish operating procedures, test the detector systems
with stable configurations, and provide data for the development of data analysis
techniques. The first data collection run judged to have some scientific interest,
science run S1, was conducted in August-September 2002 with detector noise more
than two orders of magnitude higher than the design goal. Science runs S2 and S3
followed in 2003 with steadily improving detector noise, but with a poor duty cycle
for L1 due primarily to low-frequency, large-amplitude ground motion from human
activities and weather. During 2004, a hydraulic pre-isolation system was installed
and commissioned at the Livingston site to measure the ground motion and counteract
it with a relative displacement between the external and internal support structures
for the optical components, keeping the internal components much closer to an inertial
frame at frequencies above 0.1 Hz. At the same time, several improvements were made
to the H1 interferometer at Hanford to allow the laser power to be increased to the
full design power of 10 W.
The S4 science run, which lasted from 22 February to 23 March 2005, featured
good overall “science mode” duty cycles of 80.5%, 81.4%, and 74.5% for H1, H2,
and L1, respectively, corresponding to observation times of 570, 576, and 528 hours.
Thanks to the improvements made after the S3 run, the detector noise during S4 was
within a factor of two of the design goal over most of the frequency band, as shown in
figure 2. The GEO 600 interferometer also collected data throughout the S4 run, but
was over a factor of 100 less sensitive than the LIGO interferometers at 200 Hz and
a factor of a few at and above the 1 kHz frequency range. The analysis approach used
in this article effectively requires a gravitational-wave signal to be distinguishable
above the noise in each of a fixed set of detectors, so it uses only the three LIGO
interferometers and not GEO 600. There are a total of 402 hours of S4 during which
all three LIGO interferometers were simultaneously collecting science-mode data.
Figure 2. Best achieved detector noise for the three LIGO interferometers during the S4 science run, in terms of equivalent gravitational wave strain amplitude spectral density versus frequency (Hz). "LIGO SRD goal (4 km)" is the sensitivity goal for the 4-km LIGO interferometers set forth in the 1995 LIGO Science Requirements Document [19].
3. Trigger generation
The first stage of the burst search pipeline is to identify times when the GW channels
of the three interferometers appear to contain signal power in excess of the baseline
noise; these times, along with parameters derived from the data, are called “triggers”
and are used as input to later processing stages. As in previous searches [17, 18],
the WaveBurst algorithm [20] is used for this purpose; it will only be summarized
here [21].
WaveBurst performs a linear wavelet packet decomposition, using the symlet
wavelet basis [22], on short intervals of gravitational-wave data from each
interferometer. This decomposition produces a time-frequency map of the data similar
to a windowed Fourier transformation. A time-frequency data sample is referred to as a
pixel. Pixels containing significant excess signal power are selected in a non-parametric
way by ranking them with other pixels at nearby times and frequencies. As in the S3
analysis, WaveBurst has been configured for S4 to use six different time resolutions
and corresponding frequency resolutions, ranging from 1/16 s by 8 Hz to 1/512 s by
256 Hz, to be able to closely match the natural time-frequency properties of a variety
of burst signals. The wavelet decomposition is restricted to 64–2048 Hz. At any
given resolution, significant pixels from the three detector data streams are compared
and coincident pixels are selected; these are used to construct “clusters”, potentially
spanning many pixels in time and/or frequency, within which there is evidence for
a common signal appearing in the different detector data streams. These coincident
clusters form the basis for triggers, each of which is characterized by a central time,
Figure 3. Distribution of Zg values for all WaveBurst triggers (8 325 975 entries; mean 2.583; RMS 0.967). The arrow shows the location of the initial significance cut, Zg > 6.7.
duration, central frequency, frequency range, and overall significance Zg as defined
in [23]. Zg is calculated from the pixels in the cluster and is roughly proportional to
the geometric average of the excess signal power measured in the three interferometers,
relative to the average noise in each interferometer at the relevant frequency. Thus,
a large value of Zg indicates that the signal power in those pixels is highly unlikely
to have resulted from usual instrumental noise fluctuations. In addition, the absolute
strength of the signal detected by each interferometer within the sensitive frequency
band of the search is estimated in terms of the root-sum-squared amplitude of the
detected strain,
h_{\mathrm{rss}}^{\mathrm{det}} = \sqrt{\int |h_{\mathrm{det}}(t)|^{2}\, dt\,} .   (2)
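For a discretely sampled strain series, the root-sum-squared amplitude of equation (2) reduces to a simple sum. The sketch below assumes uniform sampling and is not taken from the WaveBurst code.

```python
import numpy as np

def h_rss_det(h_det, sample_rate):
    """Discrete approximation to equation (2):
    h_rss_det = sqrt( integral |h_det(t)|^2 dt ) ~ sqrt( sum_i h_i^2 * dt )."""
    h = np.asarray(h_det, dtype=float)
    dt = 1.0 / sample_rate
    return np.sqrt(np.sum(h**2) * dt)
```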
WaveBurst was run on time intervals during which all three LIGO interferometers
were in science mode, but omitting periods when simulated signals were injected into
the interferometer hardware, any photodiode readout experienced an overflow, or the
data acquisition system was not operating. In addition, the last 30 seconds of each
science-mode data segment were omitted because it was observed that loss of “lock”
is sometimes preceded by a period of instability. These selection criteria reduced the
amount of data processed by WaveBurst from 402 hours to 391 hours.
For this analysis, triggers found by WaveBurst are initially required to have a
frequency range which overlaps 64–1600 Hz. An initial significance cut, Zg ≥ 6.7, is
applied to reject the bulk of the triggers and limit the number passed along to later
stages of the analysis. Figure 3 shows the distribution of Zg prior to applying this
significance cut.
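The two initial requirements quoted above (a trigger frequency range overlapping 64–1600 Hz and Zg ≥ 6.7) amount to a simple filter on the trigger list. The sketch below uses an illustrative dictionary representation of a trigger; the field names are hypothetical, not WaveBurst's actual data format.

```python
FREQ_LOW, FREQ_HIGH = 64.0, 1600.0   # search band in Hz
ZG_MIN = 6.7                         # initial WaveBurst significance cut

def passes_initial_cuts(trigger):
    """trigger: dict with 'f_low' and 'f_high' (trigger frequency range, Hz)
    and 'zg' (WaveBurst significance); the field names are illustrative."""
    overlaps_band = (trigger['f_low'] <= FREQ_HIGH) and (trigger['f_high'] >= FREQ_LOW)
    return overlaps_band and trigger['zg'] >= ZG_MIN

# Example: only the first trigger survives both cuts
triggers = [{'f_low': 100.0, 'f_high': 250.0, 'zg': 8.1},
            {'f_low': 1700.0, 'f_high': 1900.0, 'zg': 12.0}]
kept = [t for t in triggers if passes_initial_cuts(t)]
```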
Besides identifying truly simultaneous signals in the three data streams,
WaveBurst applies the same pixel matching and cluster coincidence tests to the three
data streams with many discrete relative time shifts imposed between the Hanford
and Livingston data streams, each much larger than the maximum light travel time
between the sites and the duration of the signals targeted by this search. The time-
shifted triggers found in this way provide a large sample to allow the “background”
(spurious triggers produced in response to detector noise in the absence of gravitational
waves) to be studied, under the assumption that the detector noise properties do not
Figure 4. WaveBurst trigger rate as a function of the relative time shift (in seconds) applied between the Hanford and Livingston data streams. The horizontal line is a fit to a constant value (fitted mean 41.1), yielding a χ2 of 130.5 for 97 degrees of freedom.
vary much over the span of a few minutes and are independent at the two sites.
The two Hanford data streams are not shifted relative to one another, so that any
local environmental effects which influence both detectors are preserved. In fact,
some correlation in time is observed between noise transients in the H1 and H2 data
streams.
Initially, WaveBurst found triggers for 98 time shifts in multiples of 3.125 s
between −156.25 and −6.25 s and between +6.25 and +156.25 s. These 5119 triggers,
called the “tuning set”, were used to choose the parameters of the signal consistency
tests and additional selection criteria described in the following two sections. As
shown in figure 4, the rate of triggers in the tuning set is roughly constant for all time
shifts, with a marginal χ2 value but without any gross dependence on time shift. The
unshifted triggers were kept hidden throughout the tuning process, in order to avoid
the possibility of human bias in the choice of analysis parameters.
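The tuning-set lags quoted above (98 shifts in multiples of 3.125 s, excluding zero lag and |shift| < 6.25 s) can be enumerated as follows. This is only a sketch of the bookkeeping, not the WaveBurst implementation.

```python
import numpy as np

STEP = 3.125        # s, time-shift increment
MIN_SHIFT = 6.25    # s, smallest |shift| used; zero lag is reserved for the real search
MAX_SHIFT = 156.25  # s

positive = np.arange(MIN_SHIFT, MAX_SHIFT + STEP / 2, STEP)   # 6.25, 9.375, ..., 156.25
shifts = np.concatenate([-positive[::-1], positive])          # 98 Livingston-vs-Hanford lags
assert len(shifts) == 98
```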
4. Signal consistency tests
The WaveBurst algorithm requires only a rough consistency among the different
detector data streams—namely, some apparent excess power in the same pixels
in the wavelet decomposition—to generate a trigger. This section describes more
sophisticated consistency tests based on the detailed content of the GW channels.
These tests succeed in eliminating most WaveBurst triggers in the data, while keeping
essentially all triggers generated in response to simulated gravitational-wave signals
added to the data streams. (The simulation method is described in section 7.) Similar
tests were also used in the S3 search [18].
Figure 5. (Both axes show the reconstructed h_rss^det in strain/√Hz, roughly 10^-22 to 10^-19.) (a) Two-dimensional histogram, with bin count indicated by greyscale, of H2 vs. H1 amplitudes reconstructed by WaveBurst for the tuning set of time-shifted triggers. (b) Two-dimensional histogram of H2 vs. H1 amplitudes reconstructed for simulated sine-Gaussian signals with a wide range of frequencies and amplitudes from sources uniformly distributed over the sky (see section 7). In these plots, the diagonal lines show the limits of the H1/H2 amplitude consistency cut: 0.5 < ratio < 2. (c) Two-dimensional histogram of L1 vs. H1 amplitudes for the same simulated sine-Gaussian signals. Diagonal lines are drawn at ratios of 0.5 and 2 only to guide the eye; no cut is applied using this pair of interferometers.
4.1. H1/H2 amplitude consistency test
Because the two Hanford interferometers are co-located and co-aligned, they will
respond identically (in terms of strain) to any given gravitational wave. Thus, the
overall root-sum-squared amplitudes of the detected signals, estimated by WaveBurst
according to equation (2), should agree well if the estimation method is reliable.
Figure 5a shows that the time-shifted triggers in the tuning set often have poor
agreement between the detected signal amplitudes in H1 and H2. In contrast,
simulated signals injected into the data are found with amplitudes which usually agree
within a factor of 2, as shown in figure 5b. Therefore, we keep a trigger only if the
ratio of estimated signal amplitudes is in the range 0.5 to 2.
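The amplitude consistency cut is a one-line test on the two reconstructed amplitudes; a minimal sketch (names are illustrative):

```python
def h1_h2_amplitude_consistent(hrss_h1, hrss_h2):
    """Keep a trigger only if the ratio of the h_rss amplitudes estimated
    for H1 and H2 lies between 0.5 and 2, since the co-located, co-aligned
    detectors should see essentially the same strain."""
    ratio = hrss_h1 / hrss_h2
    return 0.5 < ratio < 2.0
```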
The Livingston interferometer is roughly aligned with the Hanford interferome-
ters, but the curvature of the Earth makes exact alignment impossible. The antenna
responses to a given gravitational wave will tend to be similar, but not reliably enough
to allow a consistency test which is both effective at rejecting noise triggers and effi-
cient at retaining simulated signals, as shown in figure 5c.
4.2. Cross-correlation consistency tests
The amplitude consistency test described in the previous subsection simply compares
scalar quantities derived from the data, without testing whether the waveforms are
similar in detail. We use a program called CorrPower [24], also used in the S3 burst
search [18], to calculate statistics based on Pearson’s linear correlation statistic,
r = \frac{\sum_{i=1}^{N}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{N}(x_i - \bar{x})^{2}\,\sum_{i=1}^{N}(y_i - \bar{y})^{2}}} .   (3)
In the above expression {xi} and {yi} are sequences selected from the two GW channel
time series, possibly with a relative time shift, and x̄ and ȳ are their respective mean
values. The length of each sequence, N samples, corresponds to a chosen time window
(see below) over which the correlation is to be evaluated. r assumes values between
−1 for fully anti-correlated sequences and +1 for fully correlated sequences.
The r statistic measures the correlation between two data streams, such as
would be produced by a common gravitational-wave signal embedded in uncorrelated
detector noise [25]. It compares waveforms without being sensitive to the relative
amplitudes, and is thus complementary to the H1/H2 amplitude consistency test
described above. Furthermore, the r statistic may be used to test for a correlation
between H1 and L1 or between H2 and L1, even though these pairs consist of
interferometers with different antenna response factors, because each polarization
component will produce a measurable correlation for a suitable relative time delay
(unless the wave happens to arrive from one of the special directions for which one
of the detectors has a null response for that polarization component). In the special
case of a linearly polarized gravitational wave, the detected signals will simply differ
by a multiplicative factor, which can be either positive or negative depending on the
polarization angle and arrival direction.
Before calculating the r statistic for each detector pair, the data streams are
filtered to select the frequency band of interest (bandpass between 64 Hz and 1600 Hz)
and whitened to equalize the contribution of noise from all frequencies within this
band. The filtering is the same as was used in the S3 search [18] except for the
addition of a Q=10 notch filter, centered at 345 Hz, to avoid measuring correlations
from the prominent vibrational modes of the wires used to suspend the mirrors, which
are clustered around that frequency. The r statistic is then calculated over multiple
time windows with lengths of 20, 50, and 100 ms and a range of starting times,
densely placed (99% overlap) to cover the full duration of the trigger as reported by
WaveBurst; the maximum value from among these different time windows is used.
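A minimal sketch of the windowed r-statistic search described above: Pearson's r of equation (3) is evaluated over 20, 50 and 100 ms windows placed with 99% overlap across the trigger, and the maximum is returned. The scan over relative time delays between detectors and CorrPower's conversion of |r| into confidence values are omitted, and the function names are illustrative rather than CorrPower's own.

```python
import numpy as np

def max_r_statistic(x, y, sample_rate, window_s=(0.02, 0.05, 0.10), overlap=0.99):
    """Maximum Pearson r over sliding windows covering the trigger.
    x, y: band-passed, whitened GW-channel segments of equal length."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    best = -1.0
    for w in window_s:
        n = int(round(w * sample_rate))
        if n < 2 or n > len(x):
            continue
        step = max(1, int(round(n * (1.0 - overlap))))   # 99% overlap between windows
        for start in range(0, len(x) - n + 1, step):
            xs = x[start:start + n] - x[start:start + n].mean()
            ys = y[start:start + n] - y[start:start + n].mean()
            denom = np.sqrt(np.sum(xs ** 2) * np.sum(ys ** 2))
            if denom > 0:
                best = max(best, float(np.sum(xs * ys) / denom))
    return best
```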
CorrPower [26] calculates two quantities, derived from the r statistic, which are
used to select triggers. The first of these, called R0, is simply the signed cross-
correlation between H1 and H2 with no relative time delay. Triggers with R0 < 0
are rejected. The second quantity, called Γ, combines the r-statistic values from the
three detector pairs, allowing relative time delays of up to 11 ms between H1 and L1
and between H2 and L1, and up to 1 ms between H1 and H2 (to allow for a possible
mismatch in time calibration). Specifically, Γ is the average of “confidence” values
calculated from the absolute value of each of the three individual r-statistic values.
A large value of Γ indicates that the data streams are correlated to an extent that
is highly unlikely to have resulted from normal instrumental noise fluctuations. This
quantity complements Zg, providing a different and largely independent means for
distinguishing real signals from background.
Figure 6. Plots of Γ versus Zg, after the H1/H2 amplitude consistency cut but before any other cuts. (a) Scatter plot for all time-shifted triggers in the tuning set. (b) Two-dimensional histogram, with bin count indicated by greyscale, for simulated sine-Gaussian signals with a wide range of frequencies and amplitudes from sources uniformly distributed over the sky (see section 7). In both plots, the vertical dashed line indicates the initial WaveBurst significance cut at Zg = 6.7.
Figure 6 shows plots of Γ vs. Zg for time-shifted triggers and for simulated
gravitational-wave signals after the H1/H2 amplitude consistency cut but before the
R0 cut. The time-shifted triggers with Γ < 12 and Zg < 20 are the tail of the bulk
distribution of triggers. The outliers with Γ > 12 all arise from a few distinct times
when large noise transients occurred in H1 and H2; these are found many times, paired
with different L1 time shifts, and have similar values of Γ because the calculation of
Γ is dominated by the H1-H2 pair in these cases. The outliers with Γ < 12 and
Zg > 20 are artefacts of sudden changes in the power line noise at 60 Hz and 180 Hz
which WaveBurst recorded as triggers. A cut on the value of Γ can eliminate many
of the time-shifted triggers in figure 6a, but at the cost of also rejecting weak genuine
gravitational-wave signals that may have the distribution in figure 6b. Therefore, the Γ
cut is chosen only after additional selection criteria have been applied; see section 5.3.
5. Additional selection criteria for event candidates
Environmental disturbances or instrumental misbehaviour occasionally produce non-
stationary noise in the GW channel of a detector which contributes to the recording of
a WaveBurst trigger. These triggers can sometimes pass the H1-H2 consistency and
cross-correlation consistency tests, particularly since an environmental disturbance
at the Hanford site affects both H1 and H2. As noted in the previous section, the
calculated value of Γ is susceptible to being dominated by the H1-H2 pair even if
there is minimal signal power in the L1 data stream. A significant background rate of
event candidates caused by environmental or instrumental effects could obscure the
rare gravitational-wave bursts that we seek, or else require us to apply more aggressive
cuts and thus lose sensitivity for weak signals.
This section describes the two general tactics we use to reject data with
identifiable problems and thereby reduce the rate of background triggers. First,
we make use of several “data quality flags” that have been introduced in order to
describe the status of the instruments and the quality of the recorded data over time
intervals ranging from seconds to hours. Second, we remove triggers attributed to
short-duration instrumental or environmental effects by applying “vetoes” based on
triggers generated from auxiliary channels which have been found to correlate with
transients in the GW channel. Applying data quality conditions and vetoes to the
data set reduces the amount of “live” observation time (or “livetime”) during which
an arriving gravitational-wave burst would be detected and kept as an event candidate
at the end of the analysis pipeline. Therefore, we must balance this loss (“deadtime”)
against the effectiveness for removing spurious triggers from the data sample.
Choosing data quality and veto conditions with reference to a sample of
gravitational-wave event candidates could introduce a selection bias and invalidate any
upper limit calculated from the sample. Therefore, we have evaluated the relevance
of potential data quality cuts and veto conditions using other trigger samples. In
addition to the tuning set of time-shifted WaveBurst triggers, we have applied the
KleineWelle [27] method to identify transients in each interferometer’s GW channel.
(We have also used KleineWelle to identify transients in numerous auxiliary channels
for veto studies, as described in 5.2.) Like WaveBurst, KleineWelle is a time-frequency
method utilizing multi-resolution wavelet decomposition, but it processes each data
channel independently [28]. In analyzing data, the time series is first whitened using
a linear predictor error filter [27]. Then the time-frequency decomposition is obtained
using the Haar wavelet transform. The squared wavelet coefficients normalized to the
scale’s (frequency’s) root-mean-square provide an estimate of the energy associated
with a certain time-frequency pixel. A clustering mechanism is invoked in order to
increase the sensitivity to signals with less than optimal shapes in the time-frequency
plane and a total normalized cluster energy is computed. The significance of a
cluster is then defined as the negative natural logarithm of the probability that the
computed total normalized cluster energy would have resulted from Gaussian white noise;
we apply a threshold on this significance to define KleineWelle triggers. The samples
of KleineWelle triggers from each detector, as well as the subsample of coincident
H1 and H2 triggers, are useful indicators of localized disturbances. They may in
principle contain one or more genuine gravitational-wave signals, but decisions about
data quality and veto conditions are based on the statistics of the entire sample which
is dominated by instrumental artefacts and noise fluctuations.
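The significance definition quoted above can be illustrated with a small sketch. Note the strong assumption: here the total normalized energy of a k-pixel cluster is modelled as chi-squared with k degrees of freedom under Gaussian white noise, which merely stands in for whatever null distribution KleineWelle actually uses; this is not the KleineWelle implementation.

```python
import numpy as np
from scipy.stats import chi2

def cluster_significance(normalized_pixel_energies):
    """Significance = -ln P(total normalized cluster energy | Gaussian noise).
    Assumes, for illustration only, that the summed normalized energy of k
    pixels follows a chi-squared distribution with k degrees of freedom."""
    e = np.asarray(normalized_pixel_energies, dtype=float)
    p_value = chi2.sf(e.sum(), df=len(e))          # null survival probability
    p_value = max(p_value, np.finfo(float).tiny)   # avoid log(0) for very loud clusters
    return -np.log(p_value)
```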
5.1. Data quality conditions
We wish to reject instances of clear hardware problems with the LIGO detectors
or conditions that could affect our ability to unequivocally register the passage of
gravitational-wave bursts. Various studies of the data, performed during and after
data collection, produced a catalog of conditions that might affect the quality of the
data. Each named condition, or “flag”, has an associated list of time intervals during
which the condition is present, derived either from one or more diagnostic channels
or from entries made in the electronic logbook by operators and scientific monitors.
We have looked for significant correlations between the flagged time intervals and
time-shifted WaveBurst triggers, and also between the flagged time intervals and
KleineWelle single-detector triggers (particularly the “outliers” with large significance
and the coincident H1 and H2 triggers). Based on these studies, we decided to impose
a number of data quality conditions.
We first require the calibration lines to be continuously present. On several
occasions when they dropped out briefly, due to a problem with the excitation engine,
the data is removed from the analysis. The livetime associated with these occurrences
is negligible while they are all correlated with transients appearing in the GW channel.
Local winds and sound from airplanes may couple to the instrument through
the ground and result in elevated noise and/or impulsive signals. A data quality
flag was established to identify intervals of local winds at the sites with speeds of
56 km/hour (35 miles per hour) and above. We studied the correlation of these times
with the single-detector triggers produced with KleineWelle. The correlation is more
apparent in the H2 detector, for which 7.4% of the most significant KleineWelle triggers
(threshold of 1600) coincide with the intervals of strong winds at the Hanford site. The
livetime that is rejected in this way is 0.66% of the H1-H2 coincident observation time
over which this study was performed. Thanks to improved acoustic isolation installed
after the S2 science run, acoustic noise from airplanes was not found to contribute
to triggers in the GW channel in general; however, a period of 300 seconds has been
rejected around a particularly loud time when a fighter jet passed over the Hanford
site.
Elevated low-frequency seismic activity has been observed to cause noise
fluctuations and transients in the GW channel. Data from several seismometers at the
Hanford observatory was band-pass filtered in various narrow bands between 0.4 Hz
and 2.4 Hz, and the root-mean-square signal in each band was tracked over time. A
set of particularly relevant seismometers and bands was selected, and time intervals
were flagged whenever a band in this set exceeded 7 times its median value. A follow
up analysis of the single instrument as well as coincident H1-H2 KleineWelle triggers
found significant correlation with the elevated seismic noise. The strongest correlation
is observed in the outlier triggers (KleineWelle significance of 1600 or greater) in H2,
of which 41.9% coincide with the seismic flags, compared to a deadtime of 0.6%.
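The seismic data-quality flag reduces to a threshold on band-limited RMS trends; a sketch for a single seismometer band (the choice of bands and channels, and the bookkeeping of flagged time intervals, are not reproduced here):

```python
import numpy as np

def flag_elevated_seismic(band_rms, threshold_factor=7.0):
    """Flag samples of a band-limited seismometer RMS time series that exceed
    threshold_factor times the median of that series, as described for the
    0.4-2.4 Hz seismic data-quality cut."""
    rms = np.asarray(band_rms, dtype=float)
    return rms > threshold_factor * np.median(rms)
```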
In the two Hanford detectors, a diagnostic channel counting ADC overflows in the
length sensing and control subsystem was used to flag intervals for exclusion from the
analysis. One minute of livetime around these overflows is rejected. Such overflows
were indeed seen to correlate with single-detector outlier triggers in H1 (44.4% of
them, with 0.68% deadtime) and H2 (74.1% of them, with 0.41% deadtime).
Two data quality cuts are derived from “trend” data (summaries of minimum,
maximum, mean and root-mean-square values over each one-second period)
monitoring the interferometry used in the LIGO detectors. The first one is based
on occasional transient dips in the stored light in the arm cavities. These have been
identified by scanning the trend data for the relevant monitoring photodiodes, defining
the size of a dip as the fractional drop of the minimum in that second relative to the
average of the previous ten seconds, and applying various thresholds on the minimum
dip size. For the three LIGO detectors, thresholds of 5%, 4% and 5% respectively
for L1, H1 and H2 are used. High correlation of such light dips with single-detector
triggers is observed, while the deadtime resulting from them in each of the three LIGO
instruments is less than 0.6%. The second data quality cut of this type is based on the
DC level of light reaching the photodiode at the output of the interferometer, which
sees very little light when the interferometer is operating properly. By thresholding
on the trend data for this channel, intervals when its value was unusually high are
identified in H1 and L1. These intervals are seen to correlate with instrument outlier
triggers significantly. The deadtime resulting from them is 1.02% in H1 and 1.74% in L1.
Altogether, these data quality cuts result in a net loss of observation time of 5.6%.
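A sketch of the light-dip flag described above, operating on one-second trend data; the exact trend quantities used for the ten-second baseline are an assumption made for illustration.

```python
import numpy as np

def light_dip_flags(per_second_min, per_second_mean, threshold=0.05):
    """Flag seconds with a transient dip in the stored arm-cavity light.
    The dip size for second i is the fractional drop of that second's minimum
    relative to the mean level over the previous ten seconds; thresholds of
    5%, 4% and 5% were used for L1, H1 and H2."""
    mins = np.asarray(per_second_min, dtype=float)
    means = np.asarray(per_second_mean, dtype=float)
    flags = np.zeros(len(mins), dtype=bool)
    for i in range(10, len(mins)):
        baseline = means[i - 10:i].mean()
        flags[i] = (baseline - mins[i]) / baseline > threshold
    return flags
```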
5.2. Auxiliary-channel vetoes
LIGO records thousands of auxiliary read-back channels of the servo control systems
employed in the instruments’ interferometric operation as well as auxiliary channels
monitoring the instruments’ physical environment. There are plausible couplings of
environmental disturbances or servo instabilities both to these monitoring channels
and to the GW channel; thus, transients appearing in these auxiliary channels may
be used to veto triggers seen simultaneously in the GW channel. This assumes that
a genuine gravitational-wave burst would not appear in these auxiliary channels, or
at least that any coupling is small enough to stay below the threshold for selecting
transients in these channels.
We have used KleineWelle to produce triggers from over 100 different auxiliary
channels that monitor the interferometry and the environment in the three LIGO
detectors. A first analysis of single-detector KleineWelle triggers from the L1 GW
channel and coincident KleineWelle triggers from the H1 and H2 GW channels
against respective auxiliary channels identified the ones that showed high GW channel
trigger rejection power with minimal livetime loss (in the vast majority of channels
much less than 1%). In addition to interferometric channels, environmental ones
(accelerometers and microphones) located on the optical tables holding the output
optics and photodiodes appeared to correlate with GW channel triggers recorded at
the same site.
Auxiliary interferometric channels (besides the GW channel) could in principle
be affected by a gravitational wave, and a veto condition derived from such a channel
could reject a genuine signal. Hardware signal injections imitating the passage of
gravitational waves through our detectors, performed at several pre-determined times
during the run, have been used to establish under what conditions each channel is
safe to use as a veto. Non-detection of a hardware injection by an auxiliary channel
suggests the unconditional safety of this channel as a veto in the search, assuming that
a reasonably broad selection of signal strengths and frequencies were injected. But
even if hardware injections are seen in the auxiliary channels, conditions can readily
be derived under which no triggers caused by the hardware injections are used as
vetoes. This involves imposing conditions on the significance of the trigger and/or
on the ratio of the signal strength seen in the auxiliary channel to that seen in the
GW channel. We have thus established the conditions under which several channels
involved in the length and angular sensing and control systems of the interferometers
can be used safely as vetoes. (The data quality conditions described in section 5.1
were also verified to be safe using hardware injections.)
The final choice of vetoes was made by examining the tuning set of time-
shifted triggers remaining in the WaveBurst search pipeline after applying the signal
consistency tests and data quality conditions. The ten triggers from the time-shifted
analysis with the largest values of Γ, plus the ten with the largest values of Zg, were
examined and six of them were found to coincide with transients in one or more of the
following channels: the in-phase and quadrature-phase demodulated signals from the
pick-off beam from the H1 beamsplitter, the in-phase demodulated pitch signal from
one of the wavefront sensors used in the H1 alignment sensing and control system, the
beam splitter pitch and yaw control signals, and accelerometer readings on the optical
tables holding the H1 and H2 output optics and photodiodes. KleineWelle triggers
produced from these seven auxiliary channels were clustered (with a 250 ms window)
and their union was taken. This defines the final list of veto triggers for this search,
each indicating a time interval (generally ≪ 1 s long) to be vetoed.
The total duration of the veto triggers considered in this analysis is at the level
of 0.15% of the total livetime. However, this does not reliably reflect the deadtime
of the search since a GW channel trigger is vetoed if it has any overlap with a veto
trigger. Thus, the actual deadtime of the search depends on the duration of the
signal being sought, as reconstructed by WaveBurst. We reproduce this effect in
the Monte Carlo simulation used to estimate the efficiency of the search (described
in section 7) by applying the same analysis pipeline and veto logic. The effective
deadtime depends on the morphology of the signal and on the signal amplitude, since
larger-amplitude signals tend to be assigned longer durations by WaveBurst. For the
majority of waveforms we considered in this search and for plausible signals strengths,
the resulting effective deadtime is of the order of 2%. Because this loss is signal-
dependent, in this analysis we consider it to be a loss of efficiency rather than a loss
of live observation time; in other words, the live observation time we state reflects the
data quality cuts applied but does not reflect the auxiliary-channel vetoes.
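The veto logic itself is a plain interval-overlap test between each GW-channel trigger and the clustered auxiliary-channel veto segments; a minimal sketch:

```python
def is_vetoed(trigger_start, trigger_end, veto_intervals):
    """A GW-channel trigger is vetoed if its [start, end] interval overlaps
    any auxiliary-channel veto interval, however slightly."""
    return any(trigger_start < v_end and trigger_end > v_start
               for v_start, v_end in veto_intervals)
```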
5.3. Gamma cut
The cuts described above cleaned up the outliers in the data considerably, as shown by
the sequence of scatter plots in figure 7. Following the data quality and veto criteria
we just described, the remaining time-shifted WaveBurst triggers (shown in figure 7d)
were used as the basis for choosing the cross correlation Γ threshold. As with previous
all-sky searches for gravitational-wave bursts with LIGO, we desire the number of
background triggers expected for the duration of the observation to be much less than
1 but not zero, typically of order ∼ 0.1. On that basis, we chose a threshold of Γ > 4
which results in 7 triggers in 98 time shifts, or 0.08 such triggers normalized to the
duration of the S4 observation time.
6. Search results
After all of the trigger selection criteria had been established using the tuning set of
time-shifted triggers, WaveBurst was re-run with a new, essentially independent set
of 100 time shifts, in increments of 5 s from −250 s to −5 s and from +5 s to +250 s,
in order to provide an estimate of the background which is minimally biased by the
choice of selection criteria. The total effective livetime for the time-shifted sample is
77.4 times the unshifted observation time, reflecting the reduced overlap of Hanford
and Livingston data segments when shifted relative to one another. The unshifted
triggers were looked at for the first time. Table 1 summarizes the trigger counts for
these time-shifted and unshifted triggers at each stage in the sequence of cuts. In
addition, the expected background at each stage (time-shifted triggers normalized to
the S4 observation time) is shown for direct comparison with the observed zero-lag
counts. Figure 8 shows a scatter plot of Γ vs. Zg and histograms of Γ for both time-
shifted and unshifted triggers after all other cuts. These new time-shifted triggers
are statistically consistent with the tuning set (figure 7d), although no triggers are
found with Zg > 15 in this case. Five unshifted triggers are found, distributed in a
manner reasonably consistent with the background. All five have Γ<4 and thus fail
the Γ cut. Three time-shifted triggers pass the Γ cut, corresponding to an estimated
average background of 0.04 triggers over the S4 observation time.
Figure 7. Scatter plots of Γ versus Zg for the tuning set of time-shifted triggers. (a) All triggers; (b) after data quality cuts; (c) after data quality and H1-H2 consistency cuts (amplitude ratio and R0); (d) after data quality, H1-H2 consistency, and auxiliary-channel vetoes.
Table 1. Counts of time-shifted and unshifted triggers as cuts are applied
sequentially. The column labeled “Normalized” is the time-shifted count divided
by 77.4, representing an estimate of the expected background for the S4
observation time.
Cut                             Time-shifted              Unshifted
                                Count      Normalized     Count
Data quality                    3153       40.7           44
H1/H2 amplitude consistency     1504       19.4           14
R0 > 0                           755        9.8            5
Auxiliary-channel vetoes         671        8.7            5
Γ > 4                              3        0.04           0
With no unshifted triggers in the final sample, we place an upper limit on the
mean rate of gravitational-wave events that would be detected reliably (i.e., with
efficiency near unity) by this analysis pipeline. Since the background estimate is small
and is subject to some systematic uncertainties, we simply take it to be zero for
purposes of calculating the rate limit; this makes the rate limit conservative. With
15.5 days of observation time, the one-sided frequentist upper limit on the rate at 90%
confidence level is − ln (0.1)/T = 2.303/(15.5 days) = 0.15 per day. For comparison,
the S2 search [17] arrived at an upper limit of 0.26 per day. The S3 search [18] had
Figure 8. (a) Scatter plot of Γ vs. Zg for time-shifted triggers (grey circles) and unshifted triggers (black circles) after all other analysis cuts. The vertical dashed line indicates the initial WaveBurst significance cut at Zg = 6.7. The horizontal dashed line indicates the final Γ cut. (b) Overlaid histograms of Γ for unshifted triggers (black circles) and mean background estimated from time-shifted triggers (black stairstep with statistical error bars). The shaded bars represent the expected root-mean-square statistical fluctuations on the number of unshifted background triggers in each bin.
an observation time of only 8 days and did not state a rate limit.
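The rate limit quoted above follows from elementary Poisson statistics for zero observed events and (conservatively) zero assumed background; a one-line check:

```python
import math

def rate_upper_limit(live_days, confidence=0.90):
    """One-sided frequentist upper limit on the event rate for zero observed
    events and zero assumed background: R = -ln(1 - CL) / T."""
    return -math.log(1.0 - confidence) / live_days

print(round(rate_upper_limit(15.5), 2))   # 0.15 events per day for the S4 livetime
```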
7. Amplitude sensitivity of the search
The previous section presented a limit on the rate of a hypothetical population
of gravitational-wave signals for which the analysis pipeline has perfect detection
efficiency. However, the actual detection efficiency will depend on the signal waveform
and amplitude, being zero for very weak signals and generally approaching unity for
sufficiently strong signals. The signal processing methods used in this analysis are
expressly designed to be able to detect arbitrary waveforms as long as they have
short duration and frequency content in the 64–1600 Hz band which stands out above
the detector noise. Therefore, for any given signal of this general type, we wish to
determine a characteristic minimum signal amplitude for which the pipeline has good
detection efficiency. As in past analyses, we use a Monte Carlo technique with a
population of simulated gravitational wave sources. Simulated events are generated
at random sky positions and pseudo-random times (imposing a minimum separation
of 80 s) during the S4 run; the resulting signal waveforms in each interferometer are
calculated with the appropriate antenna factors and time delays. These simulated
signals are added to the actual detector data, and the summed data streams are
analyzed using the same pipeline with the same trigger selection criteria.
The intrinsic amplitude of a simulated gravitational wave may be characterized
by its root-sum-squared strain amplitude at the Earth, without folding in antenna
response factors:
h_{\mathrm{rss}} \equiv \sqrt{ \int \left( |h_+(t)|^2 + |h_\times(t)|^2 \right) dt } . \quad (4)
This quantity has units of s^{1/2}, or equivalently Hz^{-1/2}. In general, the root-sum-
squared signal measured by a given detector, hrss^det, will be somewhat smaller. The
Monte Carlo approach taken for this analysis is to generate a set of signals all with
fixed hrss and then to add this set of signals to the data with several discrete scale
factors to evaluate different signal amplitudes. For a given signal morphology and hrss,
the efficiency of the pipeline is the fraction of simulated signals which are successfully
recovered.
For this analysis, we do not attempt to survey the complete spectrum of
astrophysically motivated signals, but rather we use a limited number of ad-hoc
waveforms to characterize the sensitivity of the search in terms of hrss. Similar
sensitivities may be expected for different waveforms with similar overall properties
(central frequency, bandwidth, duration); the degree to which this is true has been
investigated in [18] and [29]. The waveforms evaluated in the present analysis are:
• Sine-Gaussian: sinusoid with a given frequency f0 inside a Gaussian amplitude
envelope with dimensionless width Q and arrival time t0:
h(t_0 + t) = h_0 \sin(2\pi f_0 t) \exp\!\left( - \frac{(2\pi f_0 t)^2}{2Q^2} \right) . \quad (5)
These are generated with linear polarization, with f0 ranging from 70 Hz to
1053 Hz and with Q equal to 3, 8.9, and 100. The signal consistency tests
described in section 4 were developed using an ensemble of sine-Gaussian signals
with all simulated frequencies and Q values.
• Gaussian: a simple unipolar waveform with a given width τ and linear
polarization:
h(t_0 + t) = h_0 \exp(-t^2/\tau^2) . \quad (6)
• Band-limited white noise burst: a random signal with two independent
polarization components that are white over a given frequency band, described
by a base frequency f0 and a bandwidth ∆f (i.e. containing frequencies from f0
to f0 + ∆f). The signal amplitude has a Gaussian time envelope with a width
τ . Because these waveforms have two uncorrelated polarizations (in a coordinate
system at some random angle), they provide a stringent check on the robustness
of our cross-correlation test.
In all cases, we generate each simulated signal with a random arrival direction and a
random angular relationship between the wave polarization basis and the Earth.
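As a small illustration of equations (4)-(6), the Python sketch below generates a linearly polarized sine-Gaussian and a Gaussian waveform and evaluates the root-sum-squared amplitude numerically. The sample rate and the parameter values are illustrative choices of ours, not the settings of the actual Monte Carlo.

import numpy as np

fs = 16384.0                       # sample rate in Hz (illustrative choice)
t = np.arange(-0.5, 0.5, 1.0 / fs)

def sine_gaussian(t, h0, f0, Q):
    """Linearly polarized sine-Gaussian of equation (5) (h_plus only)."""
    return h0 * np.sin(2 * np.pi * f0 * t) * np.exp(-(2 * np.pi * f0 * t) ** 2 / (2 * Q ** 2))

def gaussian(t, h0, tau):
    """Unipolar Gaussian of equation (6)."""
    return h0 * np.exp(-(t / tau) ** 2)

def hrss(h_plus, h_cross, dt):
    """Root-sum-squared strain amplitude of equation (4)."""
    return np.sqrt(np.sum(h_plus ** 2 + h_cross ** 2) * dt)

h_plus = sine_gaussian(t, h0=1.0e-21, f0=153.0, Q=8.9)
print(hrss(h_plus, np.zeros_like(h_plus), 1.0 / fs))
h_gauss = gaussian(t, h0=1.0e-21, tau=1.0e-3)
print(hrss(h_gauss, np.zeros_like(h_gauss), 1.0 / fs))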
Figures 9 and 10 show the measured efficiency of the analysis pipeline as a function
of root-sum-squared strain amplitude, ε(hrss), for each simulated waveform. The
efficiency data points for each waveform are fit with a function of the form
\epsilon(h_{\mathrm{rss}}) = \frac{\epsilon_{\mathrm{max}}}{1 + \left( h_{\mathrm{rss}}/h_{\mathrm{rss}}^{\mathrm{mid}} \right)^{\alpha \left( 1 + \beta \tanh( h_{\mathrm{rss}}/h_{\mathrm{rss}}^{\mathrm{mid}} ) \right)}} , \quad (7)
where εmax corresponds to the efficiency for strong signals (normally very close to
unity), h^mid_rss is the hrss value corresponding to an efficiency of εmax/2, β is the
parameter that describes the asymmetry of the sigmoid (with range −1 to +1), and
α describes the slope. Data points with efficiency below 0.05 are excluded from the
fit because they do not necessarily follow the functional form, while data points with
efficiency equal to 1.0 are excluded because their asymmetric statistical uncertainties
are not handled properly in the chi-squared fit. The empirical functional form in
equation 7 has been found to fit the remaining efficiency data points well.
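A minimal sketch of such a fit, using scipy, is given below. The efficiency data points are invented for illustration; only the functional form follows equation 7, and the parameter bounds are our own choice to keep the toy fit stable.

import numpy as np
from scipy.optimize import curve_fit

def efficiency(hrss, eps_max, h_mid, alpha, beta):
    """Empirical sigmoid of equation 7; alpha < 0 drives the efficiency to
    zero for weak signals and to eps_max for strong ones."""
    x = hrss / h_mid
    return eps_max / (1.0 + x ** (alpha * (1.0 + beta * np.tanh(x))))

# Toy efficiency measurements (hrss in units of 1e-21 Hz^-1/2), invented for illustration.
h = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
eff = np.array([0.02, 0.10, 0.45, 0.75, 0.90, 0.95, 0.97])

# Exclude points below 0.05 and exactly 1.0, as described in the text.
mask = (eff > 0.05) & (eff < 1.0)
popt, _ = curve_fit(efficiency, h[mask], eff[mask],
                    p0=[0.97, 2.0, -2.0, 0.0],
                    bounds=([0.5, 0.1, -10.0, -1.0], [1.0, 100.0, -0.1, 1.0]))
eps_max, h_mid, alpha, beta = popt
print("h_rss at eps_max/2:", h_mid, "x 1e-21 Hz^-1/2")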
Note that the Gaussian waveform with τ = 6.0 ms has efficiency less than 0.8
even for the largest simulated amplitude. This broad waveform, with little signal
Figure 9. Efficiency curves for simulated gravitational-wave signals: linearly-
polarized sine-Gaussian waves with (a) Q=3; (b) Q=8.9; (c) Q=100. Statistical
errors are comparable to the size of the plot symbols.
Figure 10. Efficiency curves for simulated gravitational-wave signals: (a)
linearly-polarized Gaussian waves; (b) band-limited white-noise bursts with two
independent polarization components. Note that four curves in the latter plot
are nearly identical: 100–110 Hz, 0.1 s; 100–200 Hz, 0.1 s; 250–260 Hz, 0.1 s;
and 250–350 Hz, 0.01 s. Statistical errors are comparable to the size of the plot
symbols.
power at frequencies above 64 Hz (the lower end of the nominal search range), is at
the limit of what the search method can detect. For some of the other waveforms,
the efficiency levels off at a value slightly less than 1.0 due to the application of the
auxiliary-channel vetoes, which randomly coincide in time with some of the simulated
signals. This effect is most pronounced for the longest-duration simulated signals due
to the veto logic used in this analysis, which rejects a trigger if there is any overlap
between the reconstructed trigger duration and a vetoed time interval. The 70-Hz
sine-Gaussian with Q=100 has a duration longer than 1 s and is reconstructed quite
poorly; it is omitted from figure 9c and from the following results.
The analytic expressions of the fits are used to determine the signal strength hrss
for which efficiencies of 50% and 90% are reached. These fits are subject to statistical
Table 2. hrss values corresponding to 50% and 90% detection efficiencies for
simulated sine-Gaussian signals with various central frequencies and Q values.
The 70 Hz sine-Gaussian with Q=100 is not detected reliably.
                     hrss (10^-21 Hz^-1/2)
Central              50% efficiency             90% efficiency
frequency (Hz)       Q=3    Q=8.9   Q=100       Q=3    Q=8.9   Q=100
70 3.4 5.8 — 19.2 52.0 —
100 1.8 1.7 2.6 10.4 9.4 17.7
153 1.5 1.4 1.7 8.2 8.3 8.7
235 1.6 1.7 1.9 11.0 9.8 12.6
361 2.4 2.7 3.2 11.5 16.7 20.9
554 3.3 3.2 3.2 16.1 17.9 20.4
849 5.9 4.9 4.5 28.4 28.9 24.9
1053 8.3 7.2 6.6 39.3 37.5 37.5
Table 3. hrss values corresponding to 50% and 90% detection efficiencies for
simulated Gaussian signals with various widths. The waveform with τ=6.0 ms
does not reach an efficiency of 90% within the range of signal amplitudes
simulated.
            hrss (10^-21 Hz^-1/2)
τ (ms)      50% efficiency   90% efficiency
0.05 6.6 33.9
0.1 4.4 25.3
0.25 3.0 14.3
0.5 2.2 13.5
1.0 2.2 10.6
2.5 3.4 20.5
4.0 8.3 43.3
6.0 39.0 —
errors from the limited number of simulations performed to produce the efficiency data
points. Also, the overall amplitude scale is subject to the uncertainty in the calibration
of the interferometer response, conservatively estimated to be 10% [30]. We increase
the nominal fitted hrss values by the amount of these systematic uncertainties to arrive
at conservative hrss values at efficiencies of 50% and 90%, summarized in tables 2,
3, and 4. The sine-Gaussian hrss values are also displayed graphically in figure 11,
showing how the frequency dependence generally follows that of the instrumental
noise.
Event rate limits as a function of waveform type and signal amplitude can be
represented by an “exclusion diagram”. Each curve in an exclusion diagram indicates
what the rate limit would be for a population of signals with a fixed hrss, as a
function of hrss. The curves in figure 12 illustrate, using selected sine-Gaussian and
Gaussian waveforms that were also considered in the S1 and S2 analyses, that the
amplitude sensitivities achieved by this S4 analysis are at least an order of magnitude
better than the sensitivities achieved by the S2 analysis. For instance, the 50%
efficiency hrss value for 235 Hz sine-Gaussians with Q=8.9 is 1.5 × 10^-20 Hz^-1/2
for S2 and 1.7 × 10^-21 Hz^-1/2 for S4. (Exclusion curves were not generated for the S3
Table 4. hrss values corresponding to 50% and 90% detection efficiencies for
simulated “white noise burst” signals with various base frequencies, bandwidths,
and durations.
                                                   hrss (10^-21 Hz^-1/2)
Base frequency (Hz)   Bandwidth (Hz)   Duration (s)   50% eff.   90% eff.
100 10 0.1 1.8 4.7
100 100 0.1 1.9 4.1
100 100 0.01 1.3 2.9
250 10 0.1 1.8 4.5
250 100 0.1 2.4 5.4
250 100 0.01 1.8 4.3
1000 10 0.1 6.5 15.8
1000 100 0.1 7.9 16.7
1000 100 0.01 5.5 12.7
1000 1000 0.1 19.2 42.6
1000 1000 0.01 9.7 22.3
1000 1000 0.001 9.5 23.7
Figure 11. Sensitivity of the analysis pipeline for sine-Gaussian waveforms as a
function of frequency and Q. Symbols indicate the hrss values corresponding to
50% and 90% efficiency, taken from table 2. The instrumental sensitivity curves
from figure 2 are shown for comparison.
analysis, but the S3 sensitivity was 9 × 10−21 Hz−1/2 for this particular waveform.)
The improvement is greatest for lower-frequency sine-Gaussians and for the widest
Gaussians, due to the reduced low-frequency detector noise and the explicit extension
of the search band down to 64 Hz.
Figure 12. Exclusion diagrams (rate limit at 90% confidence level, as a function
of signal amplitude) for (a) sine-Gaussian and (b) Gaussian simulated waveforms
for this S4 analysis compared to the S1 and S2 analyses (the S3 analysis did not
state a rate limit). These curves incorporate conservative systematic uncertainties
from the fits to the efficiency curves and from the interferometer response
calibration. The 849 Hz curve labeled “LIGO-TAMA” is from the joint burst
search using LIGO S2 with TAMA DT8 data [8], which included data subsets
with different combinations of operating detectors with a total observation time
of 19.7 days and thereby achieved a lower rate limit. The hrss sensitivity of the
LIGO-TAMA search was nearly constant for sine-Gaussians over the frequency
range 700–1600 Hz.
8. Astrophysical reach estimates
In order to set an astrophysical scale to the sensitivity achieved by this search, we
can ask what amount of mass converted into gravitational-wave burst energy at a
given distance would be strong enough to be detected by the search pipeline with 50%
efficiency. We start with the expression for the instantaneous energy flux emitted by a
gravitational wave source in the two independent polarizations h+(t) and h×(t) [31],
\frac{d^2 E_{GW}}{dA\,dt} = \frac{c^3}{16\pi G} \left[ (\dot h_+)^2 + (\dot h_\times)^2 \right] , \quad (8)
and follow the derivations in [32]. Plausible astrophysical sources will, in general, emit
gravitational waves anisotropically, but here we will assume isotropic emission in order
to get simple order-of-magnitude estimates. The above formula, when integrated over
the signal duration and over the area of a sphere at radius r (assumed not to be at
a cosmological distance), yields the total energy emitted in gravitational waves for a
given signal waveform. For the case of a sine-Gaussian with frequency f0 and Q ≫ 1,
we find
E_{GW} = \frac{r^2 c^3}{4G} (2\pi f_0)^2 h_{\mathrm{rss}}^2 . \quad (9)
Taking the waveform for which we have the best hrss sensitivity, a 153 Hz sine-
Gaussian with Q=8.9, and assuming a typical Galactic source distance of 10 kpc, the
above formula relates the 50%-efficiency hrss = 1.4 × 10^-21 Hz^-1/2 to 10^-7 solar mass
equivalent emission into a gravitational-wave burst from this hypothetical source and
under the given assumptions. For a source in the Virgo galaxy cluster, approximately
16 Mpc away, the same hrss would be produced by an energy emission of roughly
0.25 M⊙c^2 in a burst with this highly favourable waveform.
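A quick numerical check of equation 9, under the same isotropic-emission assumption, converts the quoted 50%-efficiency amplitude into an equivalent emitted energy; the physical constants and the helper function below are ours and serve only as an illustrative sketch.

import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_SUN = 1.989e30     # kg
KPC = 3.086e19       # m

def egw_solar_masses(hrss, f0, distance_m):
    """Equation 9 for an isotropically emitting, high-Q sine-Gaussian source,
    returned as the equivalent mass E_GW / c^2 in solar masses."""
    e_joules = (distance_m ** 2 * c ** 3 / (4.0 * G)) * (2.0 * math.pi * f0) ** 2 * hrss ** 2
    return e_joules / (M_SUN * c ** 2)

print(egw_solar_masses(1.4e-21, 153.0, 10.0 * KPC))      # ~1e-7 M_sun at 10 kpc
print(egw_solar_masses(1.4e-21, 153.0, 1.6e4 * KPC))     # ~0.25 M_sun at 16 Mpc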
We can draw more specific conclusions about detectability for models of
astrophysical sources which predict the absolute energy and waveform emitted. Here
we consider the core-collapse supernova simulations of Ott et al. [15] and a binary black
hole merger waveform calculated by the Goddard numerical relativity group [11] (as a
representative example of the similar merger waveforms obtained by several groups).
While the Monte Carlo sensitivity studies in section 7 did not include these particular
waveforms, we can relate the modeled waveforms to qualitatively similar waveforms
that were included in the Monte Carlo study and thus infer the approximate sensitivity
of the search pipeline for these astrophysical models.
Ott et al. simulated core collapse for three progenitor models and calculated
the resulting gravitational wave emission, which was dominated by oscillations of
the protoneutron star core driven by accretion [15]. Their s11WW model, based
on a non-spinning 11-M⊙ progenitor, produced a total gravitational-wave energy
emission of 1.6 × 10^-8 M⊙c^2 with a characteristic frequency of ∼654 Hz and duration
of several hundred milliseconds. If this were a sine-Gaussian, it would have a Q
of several hundred; table 2 shows that our sensitivity does not depend strongly on
Q, so we might expect 50% efficiency for a signal at this frequency with hrss of
∼3.7 × 10^-21 Hz^-1/2. However, the signal is not monochromatic, and its increased
time-frequency volume may degrade the sensitivity by up to a factor of ∼2. Using
this EGW and hrss ≈ 7 × 10^-21 Hz^-1/2 in equation 9, we find that our search
has an approximate “reach” (distance for which the signal would be detected with
50% efficiency by the analysis pipeline) of ∼0.2 kpc for this model. The m15b6
model, based on a spinning 15-M⊙ progenitor, yields a very similar waveform and
essentially the same reach. The s25WW model, based on a 25-M⊙ progenitor, was
found to emit vastly more energy in gravitational waves, 8.2 × 10^-5 M⊙c^2, but with
a higher characteristic frequency of ∼937 Hz. With respect to the Monte Carlo
results in section 7, we may consider this similar to a high-Q sine-Gaussian, yielding
hrss ≈ 5.5 × 10^-21 Hz^-1/2, or to a white noise burst with a bandwidth of ∼100 Hz and
a duration of > 0.1 s, yielding hrss ≈ 8 × 10^-21 Hz^-1/2. Using the latter, we deduce
an approximate reach of 8 kpc for this model.
A pair of merging black holes emits gravitational waves with very high efficiency;
for instance, numerical evolutions of equal-mass systems without spin have found the
radiated energy from the merger and subsequent ringdown to be 3.5% or more of the
total mass of the system [11]. From figure 8 of that paper, the frequency of the signal
at the moment of peak amplitude is seen to be
f_{\mathrm{peak}} \approx \frac{15\ \mathrm{kHz}}{(M_f/M_\odot)} , \quad (10)
where Mf is the final mass of the system. Very roughly, we can consider the
merger+ringdown waveform to be similar to a sine-Gaussian with central frequency
fpeak and Q ≈ 2 for purposes of estimating the reach of this search pipeline for binary
black hole mergers. (Future analyses will include Monte Carlo efficiency studies using
complete inspiral-merger-ringdown waveforms.) Thus, a binary system of two 10-M⊙
black holes (i.e. Mf ≈ 20M⊙) has fpeak ≈ 750 Hz, and from table 2 we can estimate
the hrss sensitivity to be ∼5.5 × 10^-21 Hz^-1/2. Using EGW = 0.035 Mf c^2, we conclude
that the reach for such a system is roughly 1.4 Mpc. Similarly, a binary system with
Mf = 100 M⊙ has fpeak ≈ 150 Hz, a sensitivity of ∼1.5 × 10^-21 Hz^-1/2, and a resulting
reach of roughly 60 Mpc.
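The reach numbers above can be reproduced by inverting equation 9 for the distance, with equation 10 supplying the peak frequency. The sketch below does this under the same rough sine-Gaussian approximation; the constants and the function are our own illustration, not part of the analysis.

import math

G, c = 6.674e-11, 2.998e8          # SI units
M_SUN, MPC = 1.989e30, 3.086e22

def reach_mpc(m_final_solar, hrss_50, egw_fraction=0.035):
    """Distance at which a merger+ringdown emitting E_GW = 0.035 Mf c^2 at
    f_peak ~ 15 kHz / (Mf/Msun), treated as an equivalent sine-Gaussian via
    equation 9, produces the given 50%-efficiency hrss."""
    f_peak = 15.0e3 / m_final_solar
    e_gw = egw_fraction * m_final_solar * M_SUN * c ** 2
    r = math.sqrt(4.0 * G * e_gw / (c ** 3 * (2.0 * math.pi * f_peak) ** 2 * hrss_50 ** 2))
    return r / MPC

print(reach_mpc(20.0, 5.5e-21))    # ~1.4 Mpc for Mf = 20 Msun
print(reach_mpc(100.0, 1.5e-21))   # ~60 Mpc for Mf = 100 Msun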
9. Discussion
The search reported in this paper represents the most sensitive search to date for
gravitational-wave bursts in terms of strain amplitude, reaching hrss values below
10^-20 Hz^-1/2, and covers a broad frequency range, 64–1600 Hz, with a live observation
time of 15.5 days.
Comparisons with previous LIGO [16, 17] and LIGO-TAMA [8] searches have
already been shown graphically in figure 12. The LIGO-TAMA search targeted
millisecond-duration signals with frequency content in the 700–2000 Hz frequency
regime (i.e., partially overlapping the present search) and had a detection efficiency
of at least 50% (90%) for signals with hrss greater than ∼ 2 × 10^-19 Hz^-1/2
(10^-18 Hz^-1/2). Among other searches with broad-band interferometric detectors [33,
34, 35], the most recent one by the TAMA collaboration reported an upper limit of
0.49 events per day at the 90% confidence level based on an analysis of 8.1 days of the
TAMA300 instrument’s ninth data taking run (DT9) in 2003–04. The best sensitivity
of this TAMA search was achieved when looking for narrow-band signals at TAMA’s
best operating frequency, around 1300 Hz, and it was at hrss ≈ 10^-18 Hz^-1/2 for 50%
detection efficiency [35]. Although we did not measure the sensitivity of the S4 LIGO
search with narrow-band signals at 1300 Hz, LIGO’s noise at that frequency range
varies slowly enough so that we do not expect it to be significantly worse than the
sensitivity for 1053 Hz sine-Gaussian signals described in section 7, which stands at
about 7 × 10^-21 Hz^-1/2.
Comparisons with results from resonant mass detectors were detailed in our
previous publications [16, 17]. The upper limit of ∼ 4 × 10^-3 events per day at the 95%
confidence level on the rate of gravitational wave bursts set by the IGEC consortium
of five resonant mass detectors still represents the most stringent rate limit for hrss
signal strengths of order 10^-18 Hz^-1/2 and above [36]. This upper limit quickly falls off
and becomes inapplicable to signals weaker than 10^-19 Hz^-1/2 (see figure 14 in [17]).
Furthermore, with the improvement in our search sensitivity, the signal strength of the
events corresponding to the slight excess seen by the EXPLORER and NAUTILUS
resonant mass detectors in their 2001 data [37] falls well above the 90% sensitivity of
our current S4 search: as described in [17], the optimal orientation signal strength of
these events assuming a Gaussian morphology with τ=0.1 ms corresponds to a hrss
of 1.9 × 10−19 Hz−1/2. For such Gaussians our S4 search all-sky 90% sensitivity is
2.5 × 10−20 Hz−1/2 (see Table 3) and when accounting for optimal orientation, this
improves by roughly a factor of 3, to 9.3×10−21 Hz−1/2. The rate of the EXPLORER
and NAUTILUS events was of order 200 events/year (or 0.55 events per day) [37, 38].
A steady flux of gravitational-wave bursts at this rate is excluded by our present
measurement at the 99.9% confidence level. Finally, in more recent running of the
EXPLORER and NAUTILUS detectors, an analysis of 149 days of data collected in
2003 set an upper limit of 0.02 events per day at the 95% confidence level and with an
hrss sensitivity of ∼ 2 × 10^-19 Hz^-1/2 [39].
The S5 science run, which began in November 2005 and is expected to continue
until late 2007, has a goal of collecting a full year of coincident LIGO science-mode
data. Searches for gravitational-wave bursts using S5 data are already underway and
will be capable of detecting any sufficiently strong signals which arrive during that
time, or else placing an upper limit on the rate of such signals on the order of a few
per year. Furthermore, the detector noise during the S5 run has reached the design
goals for the current LIGO interferometers, and so the amplitude sensitivity of S5
burst searches is expected to be roughly a factor of two better than the sensitivity of
this S4 search.
Another direction being pursued with the S5 data is to make appropriate use
of different detector network configurations. In addition to the approach used
in the S4 analysis reported here, which requires a signal to appear with excess
power in a time-frequency map in all three LIGO interferometers, data from two-
detector combinations is also being analyzed to maximize the total observation
time. Furthermore, using LIGO data together with simultaneous data from other
interferometers can significantly improve confidence in a signal candidate and allow
more properties of the signal to be deduced. The GEO 600 interferometer has joined
the S5 run for full-time observing in May 2006, and we look forward to the time
when VIRGO begins operating with sensitivity comparable to the similarly-sized
LIGO interferometers. Members of the LSC are currently implementing coherent
network analysis methods using maximum likelihood approaches for optimal detection
of arbitrary burst signals (see, for example, [40]) and for robust signal consistency
tests [41, 42]. Such methods will make the best use of the data collected from the
global network of detectors to search for gravitational-wave bursts.
Acknowledgments
The authors gratefully acknowledge the support of the United States National Science
Foundation for the construction and operation of the LIGO Laboratory and the
Science and Technology Facilities Council of the United Kingdom, the Max-Planck-
Society, and the State of Niedersachsen/Germany for support of the construction
and operation of the GEO600 detector. The authors also gratefully acknowledge the
support of the research by these agencies and by the Australian Research Council, the
Council of Scientific and Industrial Research of India, the Istituto Nazionale di Fisica
Nucleare of Italy, the Spanish Ministerio de Educación y Ciencia, the Conselleria
d’Economia, Hisenda i Innovació of the Govern de les Illes Balears, the Scottish
Funding Council, the Scottish Universities Physics Alliance, The National Aeronautics
and Space Administration, the Carnegie Trust, the Leverhulme Trust, the David
and Lucile Packard Foundation, the Research Corporation, and the Alfred P. Sloan
Foundation. This document has been assigned LIGO Laboratory document number
LIGO-P060016-C-Z.
References
[1] Sigg D (for the LSC) 2006 Class. Quantum Grav. 23 S51–6
[2] Lück H et al 2006 Class. Quantum Grav. 23 S71–8
[3] Acernese F et al 2006 Class. Quantum Grav. 23 S63–9
[4] Ando M and the TAMA Collaboration 2005 Class. Quantum Grav. 22 S881–9
[5] Fritschel P 2003 Gravitational-Wave Detection: Proc. SPIE vol 4856 ed Cruise M and Saulson
P (SPIE) p 282–91
[6] Acernese F et al 2006 Class. Quantum Grav. 23 S635–42
[7] Kuroda K et al 2003 Proc. 28th Int. Cosmic Ray Conf. ed Kajita T et al (Universal Academy
Press) p 3103
[8] Abbott B et al (LSC) and Akutsu T et al (TAMA Collaboration) 2006 Phys. Rev. D 73 102002
[9] Blanchet L, Damour T, Esposito-Farèse G and Iyer BR 2004 Phys. Rev. Lett. 93 091101
[10] Jaranowski P, Królak A and Schutz BF 1998 Phys. Rev. D 58 063001
[11] Baker J G, Centrella J, Choi D, Koppitz M and van Meter J 2006 Phys. Rev. D 73 104002
[12] Dimmelmeier H, Font J A and Müller E 2001 Astrophys. J. Lett. 560 L163–6
[13] Ott C D, Burrows A, Livne E and Walder R 2004 Astrophys. J. 600 834–64
[14] Burrows A, Livne E, Dessart L, Ott C D and Murphy J 2006 Astrophys. J. 640 878–90
[15] Ott C D, Burrows A, Dessart L and Livne E 2006 Phys. Rev. Lett. 96 201102
[16] Abbott B et al (LSC) 2004 Phys. Rev. D 69 102001
[17] Abbott B et al (LSC) 2005 Phys. Rev. D 72 062001
[18] Abbott B et al (LSC) 2006 Class. Quantum Grav. 23 S29–39
[19] Lazzarini A and Weiss R 1995 LIGO Science Requirements Document (SRD) LIGO technical
document LIGO-E950018-02-E
[20] Klimenko S and Mitselmakher G 2004 Class. Quantum Grav. 21 S1819–30
[21] The version of WaveBurst used for this analysis may be found at
http://ldas-sw.ligo.caltech.edu/cgi-bin/cvsweb.cgi/Analysis/WaveBurst/S4/?cvsroot=GDS
with the CVS tag “S4”
[22] Daubechies I 1992 Ten Lectures on Wavelets (Philadelphia: SIAM)
[23] Klimenko S, Yakushin I, Rakhmanov M and Mitselmakher G 2004 Class. Quantum Grav. 21
S1685–94
[24] Cadonati L and Márka S 2005 Class. Quantum Grav. 22 S1159–67
[25] Cadonati L 2004 Class. Quantum Grav. 21 S1695–703
[26] The version of CorrPower used for this analysis may be found at
http://www.lsc-group.phys.uwm.edu/cgi-bin/cvs/viewcvs.cgi/matapps/src/searches/burst/CorrPower/?cvsroot=lscsoft
with the CVS tag “CorrPower-080605”
[27] Chatterji S, Blackburn L, Martin G and Katsavounidis E 2004 Class. Quantum Grav. 21 S1809–
[28] The version of KleineWelle used for this analysis may be found at
http://ldas-sw.ligo.caltech.edu/cgi-bin/cvsweb.cgi/gds/Monitors/kleineWelle/?cvsroot=GDS
dated January 20, 2005
[29] Beauville F et al 2006 “A comparison of methods for gravitational wave burst searches from
LIGO and Virgo”, submitted to Phys. Rev. D, preprint gr-qc/0701026
[30] Dietz A, Garofoli J, González G, Landry M, O’Reilly B and Sung M 2006 LIGO technical
document LIGO-T050262-01-D
[31] Shapiro S L and Teukolsky S A 1983 Black Holes, White Dwarfs, and Neutron Stars (New York:
John Wiley & Sons)
[32] Riles K 2004 LIGO technical document LIGO-T040055-00-Z
[33] Nicholson D et al 1996 Phys. Lett. A 218 175
[34] Forward R L 1978 Phys. Rev. D 17 379
[35] Ando M et al (TAMA Collaboration) 2005 Phys. Rev. D 71 082002
[36] Astone P et al (International Gravitational Event Collaboration) 2003 Phys. Rev. D 68 022001
[37] Astone P et al 2002 Class. Quantum Grav. 19 5449
[38] Coccia E, Dubath F and Maggiore M 2004 Phys. Rev. D 70 084010
[39] Astone P et al 2006 Class. Quantum Grav. 23 S169–78
[40] Klimenko S, Mohanty S, Rakhmanov M and Mitselmakher G 2005 Phys. Rev. D 72 122002
[41] Wen L and Schutz B 2005 Class. Quantum Grav. 22 S1321–35
[42] Chatterji S, Lazzarini A, Stein L, Sutton P J, Searle A and Tinto M 2006 Phys. Rev. D 74
082005
|
0704.0944 | GLAST and Dark Matter Substructure in the Milky Way | arXiv:0704.0944v1 [astro-ph] 7 Apr 2007
GLAST and Dark Matter Substructure in the Milky Way
Michael Kuhlen∗, Jürg Diemand†,∗∗ and Piero Madau†,‡
∗School of Natural Science, Institute for Advanced Study, Einstein Lane, Princeton, NJ 08540, USA
†Department of Astronomy and Astrophysics, UC Santa Cruz, 1156 High Street, Santa Cruz, CA, USA
∗∗Hubble Fellow
‡Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85740 Garching, Germany
Abstract. We discuss the possibility of GLAST detecting gamma-rays from the annihilation of neutralino dark matter in the
Galactic halo. We have used “Via Lactea”, currently the highest resolution simulation of cold dark matter substructure, to
quantify the contribution of subhalos to the annihilation signal. We present a simulated allsky map of the expected gamma-ray
counts from dark matter annihilation, assuming standard values of particle mass and cross section. In this case GLAST should
be able to detect the Galactic center and several individual subhalos.
Keywords: Gamma-rays, Dark Matter Structure, Dark Matter Annihilation
PACS: 95.55.Ka, 98.70.Rz, 95.35.+d
INTRODUCTION
One of the most exciting discoveries that the Gamma-ray Large Area Space Telescope (GLAST) could make is the
detection of gamma-rays from the annihilation of dark matter (DM). Such a measurement would directly address one
of the major physics problems of our time: the nature of the DM particle.
Whether or not GLAST will actually detect a DM annihilation signal depends on both unknown particle physics
and unknown astrophysics theory. Particle physics uncertainties include the type of particle (axion, neutralino, Kaluza-
Klein particle, etc.), its mass, and its interaction cross section. From the astrophysical side it appears that DM is not
smoothly distributed throughout the Galaxy halo, but instead exhibits abundant clumpy substructure, in the form of
thousands of so-called subhalos. The observability of DM annihilation radiation originating in Galactic DM subhalos
depends on their abundance, distribution, and internal properties.
Numerical simulations have been used in the past to estimate the annihilation flux from DM substructure [1, 2, 3, 4],
but since the subhalo properties, especially their central density profile, which determines their annihilation luminosity,
are very sensitive to numerical resolution, it makes sense to re-examine their contribution with higher resolution
simulations.
DM ANNIHILATION IN SUBSTRUCTURE
Here we report on the substructure annihilation signal in “Via Lactea”, the currently highest resolution simulation of
an individual DM halo. Details about this simulation, including the properties of the host halo and its substructure
population, can be found in [4, 5]. To briefly summarize: The central halo is resolved with ∼ 200 million high
resolution DM particles, corresponding to a particle mass of Mp = 2 × 10^4 M⊙. At z = 0 the host halo has a mass
of M200 = 1.8 × 10^12 M⊙, and it underwent its last major merger at z = 1.7. In total we resolve close to 10,000
subhalos, which make up 5.3% of the host halo mass. The subhalo mass function is well approximated by a power law
dN/d lnM ∝ M^-1 over three orders of magnitude down to the resolution limit of about 200 particles per subhalo
(∼ 4 × 10^6 M⊙). This power law slope corresponds to equal mass in substructure per decade, and it implies that the total
subhalo mass fraction has not yet converged. Future simulations with even lower particle masses will presumably find
an even larger subhalo mass fraction. A limitation of this present simulation is that it completely neglects the effects
of baryons. Gas cooling will likely increase the DM density in the central regions of the host halo through adiabatic
compression [6]. However, because of their shallower potential wells, the DM distribution in galactic subhalos is
unlikely to be significantly altered by baryonic effects.
FIGURE 1. Left panel: The annihilation signal of individual subhalos (crosses) in units of the total luminosity of the spherically
averaged host halo. The curves are the average (solid) and total (dotted) signal in a sliding window over one decade in mass. Right
panel: The angular size subtended by 2.0rs for a fiducial observer located 8 kpc from the halo center vs. the subhalo tidal mass. For
an NFW density profile ∼ 90% of the total luminosity originates within rs. The expected GLAST 68% angular resolution at > 10
GeV of 9 arcmin is denoted by the solid horizontal line.
We approximate the annihilation luminosity of an individual subhalo by
S_{sub,i} = \int \rho_{sub}^2 \, dV_i = \sum_{j \in \{P_i\}} \rho_j m_p , \quad (1)
where ρ j is the density of the jth particle (estimated using a 32 nearest neighbor SPH kernel), and {Pi} is the set of all
particles belonging to halo i. In the left panel of Figure 1 we plot Ssub normalized by Shost, the total luminosity of the
spherically averaged host halo.
We find that the subhalo luminosity is proportional to its mass. Given our measured substructure abundance of
dN/d lnMsub ∝ M_sub^-1, this implies a total subhalo annihilation luminosity that is approximately constant per decade of
substructure mass, as the Figure shows (dotted line). We measure a total annihilation luminosity from the host halo that
is a factor of 2 higher than the spherically-averaged smooth signal, obtained by integrating the square of the binned
radial density profile. About half of this boost is due to resolved substructure, and we attribute the remaining half to
other deviations from spherical symmetry. Similar boost factors may apply to the luminosity of individual subhalos as
well (see next section).
The detectability of DM annihilation originating in subhalos depends not only on their luminosity, but also on
the angular size of the sources in the sky, which we can constrain by “observing” the subhalo population in our
simulation. For this purpose we have picked a fiducial observer position, located 8 kpc from the halo center along the
intermediate axis of the triaxial host halo mass distribution. In the right panel of Figure 1 we plot the angular size
∆θ of the subhalos for this observer position. For an NFW density profile with scale radius rs, about 90% of the total
annihilation luminosity originates within rs. We define ∆θ to be the angle subtended by rVmax/2.16, where rVmax is
the radius of the peak of the circular velocity curve Vc(r)^2 = GM(< r)/r, which is equal to 2.16 rs for an NFW profile.
GLAST’s expected 68% containment angular resolution for photons above 10 GeV is 9 arcmin. We find that (553,
85, 20) of our subhalos have angular sizes greater than (9, 30, 60) arcmin. In the following section we consider the
brightness of these subhalos and discuss the possibility of actually detecting some of them with GLAST.
FIGURE 2. Simulated GLAST allsky map of neutralino DM annihilation in the Galactic halo, for a fiducial observer located 8
kpc from the halo center along the intermediate principal axis. We assumed Mχ = 46 GeV, 〈σv〉 = 5 × 10^-26 cm^3 s^-1, a pixel size
of 9 arcmin, and a 2 year exposure time. The flux from the subhalos has been boosted by a factor of 10 (see text for explanation).
Backgrounds and known astrophysical gamma-ray sources have not been included.
DM ANNIHILATION ALLSKY MAP
Using the DM distribution in our Via Lactea simulation, we have constructed allsky maps of the gamma-ray flux from
DM annihilation in our Galaxy. As an illustrative example we have elected to pick a specific set of DM particle physics
and realistic GLAST/LAT parameters. This allows us to present maps of expected photon counts.
The number of detected DM annihilation gamma-ray photons from a solid angle ∆Ω along a given line of sight (θ ,
φ ) over an integration time of τexp is given by
N_\gamma(\theta, \phi) = \Delta\Omega \, \tau_{\mathrm{exp}} \, \frac{\langle\sigma v\rangle}{8\pi M_\chi^2} \int_{E_{\mathrm{th}}}^{M_\chi} \frac{dN_\gamma}{dE} A_{\mathrm{eff}}(E) \, dE \int_{\mathrm{l.o.s.}} \rho(l)^2 \, dl , \quad (2)
where Mχ and 〈σv〉 are the DM particle mass and velocity-weighted cross section, Eth and Aeff(E) are the detector
threshold and energy-dependent effective area, and dNγ/dE is the annihilation spectrum.
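The Python sketch below evaluates this expression for a single line of sight. Every input here (the continuum spectrum parameterization, the constant effective area, the pixel solid angle, and the line-of-sight integral of ρ²) is a placeholder assumption of ours used only to illustrate the bookkeeping, not a value taken from the analysis.

import numpy as np

M_CHI = 46.0                    # GeV
SIGMA_V = 5.0e-26               # cm^3 s^-1
T_EXP = 2.0 * 3.156e7           # 2 years in seconds
D_OMEGA = 6.9e-6                # sr, roughly a (9 arcmin)^2 pixel
A_EFF = 8.0e3                   # cm^2, assumed constant above threshold
E_TH = 0.45                     # GeV

def dN_dE(E):
    """Placeholder continuum spectrum per annihilation (photons / GeV),
    a commonly used exponential fit; not necessarily the spectrum of ref. [8]."""
    x = E / M_CHI
    return 0.73 * np.exp(-7.8 * x) / (M_CHI * x ** 1.5)

E = np.linspace(E_TH, M_CHI, 4000)
y = dN_dE(E) * A_EFF
spectral_integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(E))   # photons cm^2 per annihilation

J_LOS = 1.0e23                  # GeV^2 cm^-5, assumed line-of-sight integral of rho^2

prefactor = SIGMA_V / (8.0 * np.pi * M_CHI ** 2)                   # cm^3 s^-1 GeV^-2
N_gamma = D_OMEGA * T_EXP * prefactor * spectral_integral * J_LOS
print(N_gamma)   # photon counts for the assumed inputs only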
We assume that the DM particle is a neutralino and have chosen standard values for the particle mass and annihilation
cross section: Mχ = 46 GeV and 〈σv〉 = 5 × 10^-26 cm^3 s^-1. These values are somewhat favorable, but well within the
range of theoretically and observationally allowed models. As a caveat we note that the allowed Mχ -〈σv〉 parameter
space is enormous (see e.g. [7]), and it is quite possible that the true values lie orders of magnitude away from the
chosen ones, or indeed that the DM particle is not a neutralino, or not even weakly interacting at all. We include only
the continuum emission due to the hadronization and decay of the annihilation products (bb̄ and uū only, for our low
Mχ ) and use the spectrum dNγ/dE given in [8].
For the detector parameters we chose an exposure time of τexp = 2 years and a pixel angular size of ∆θ = 9 arcmin,
corresponding to the 68% containment GLAST/LAT angular resolution. For the effective area we used the curve
published on the GLAST/LAT performance website [9] and adopted a threshold energy of Eth = 0.45 GeV (chosen to
maximize the significance, see below). The fiducial observer is located 8 kpc from the center along the intermediate
principle axis of the host halo’s ellipsoidal mass distribution.
Lastly, we applied a boost factor of 10 to all subhalo fluxes. The motivation for this boost factor is twofold: First, we
expect the central regions of our simulated subhalos to be artificially heated due to numerical relaxation, and hence less
dense and less luminous than in reality. Secondly, we expect the subhalo signal to be boosted by its own substructure.
We in fact observe sub-subhalos in the most massive of Via Lactea’s subhalos [4], and this sub-substructure, and
indeed sub-sub-substructure, etc., will lead to a boost in the annihilation luminosity analogous to the one for the whole
host halo, discussed in the previous section. An analytical model [10] for subhalo flux boost factors gives boosts from
a few up to ∼ 100, depending on the slope and lower mass cutoff of the subhalo mass function.
Figure 2 shows the resulting allsky map in a Mollweide projection. The coordinate system has been rotated such
that the major axis of the host halo ellipsoid is aligned with the horizontal direction, which would also correspond
to the plane of the Milky Way disk, if its angular momentum vector were aligned with the minor axis of the host
halo. The halo center (at l = 0◦, b = 0◦) is the brightest source of annihilation radiation, but the most massive subhalo
(at around l = +70◦, b = −10◦) is of comparable brightness. Additionally a large number of individual subhalos are
clearly visible, especially towards the halo center (−90◦ < l <+90◦, −60◦ < b <+60◦).
In order to quantify the detectability of individual subhalos (given our assumptions) we include diffuse Galactic and
extragalactic backgrounds, and convert our photon counts Nγ into significance S = N_s/\sqrt{N_b}, where N_s and N_b are the
source and background counts, respectively. For the extragalactic background we use the EGRET measurement [11]
and for the Galactic background we follow [12] and assume that it is proportional to the Galactic H I column density
[13]. Whereas the extragalactic component is uniform over the sky, the Galactic background is strongest towards the
center and in a band of ±10◦ in Galactic latitude around the Galactic disk.
We consider all objects with S > 5 to be detectable by GLAST. With our choice of parameters the halo center
could be significantly detected, with S > 100. The number of subhalos with S > 5 depends strongly on the applied
boost factor. Without boosting the subhalo fluxes, only the most massive halo is detectable. Applying a boost factor
of 5 (10), we find that 29 (71) subhalos satisfy the S > 5 threshold for detectability. Note that subhalos below our
current resolution limit might also be detectable. Their greater abundance reduces the expected distance to the nearest
neighbor, and this may compensate for their lower intrinsic luminosities (see Koushiappas’ contribution in these
Proceedings).
In conclusion we find that with favorable particle physics parameters, GLAST may very well detect gamma-ray
photons originating from DM annihilations, either from the Galactic center or from individual subhalos. This would
be a sensational discovery of great importance, and it is worth including a search for a DM annihilation signal in the
data analysis.
ACKNOWLEDGMENTS
P.M. acknowledges support from NASA grants NAG5-11513 and NNG04GK85G, and from the Alexander von
Humboldt Foundation. J.D. acknowledges support from NASA through Hubble Fellowship grant HST-HF-01194.01.
The Via Lactea simulation was performed on NASA’s Project Columbia supercomputer system.
REFERENCES
1. Calcaneo-Roldan, C., & Moore, B. 2000, PhRvD, 62, 123005
2. Stoehr, F., White, S. D. M., Springel, V., Tormen, G., & Yoshida, N. 2003, MNRAS, 345, 1313
3. Diemand, J., Kuhlen, M., & Madau, P. 2006, ApJ, 649, 1
4. Diemand, J., Kuhlen, M., & Madau, P. 2007, ApJ, 657, 262
5. Diemand, J., Kuhlen, M., & Madau, P. 2007, submitted to ApJ, (astro-ph/0703337)
6. Blumenthal, G. R., Faber, S. M., Flores, R., & Primack, J. R. 1986, ApJ, 301, 27
7. Colafrancesco, S., Profumo, S., & Ullio, P. 2006, A&A, 455, 21
8. Bergström, L., Ullio, P., & Buckley, J. H. 1998, Astroparticle Physics, 9, 137
9. http://www-glast.slac.stanford.edu/software/IS/glast_lat_performance.htm
10. Strigari, L. E., Koushiappas, S. M., Bullock, J. S., & Kaplinghat, M. 2006, submitted to Phys. Rev. D (astro-ph/0611925)
11. Sreekumar, P., et al. 1998, ApJ, 494, 523
12. Baltz, E. A., Briot, C., Salati, P., Taillet, R., & Silk, J. 2000, Phys. Rev. D, 61, 023514
13. Dickey, J. M., & Lockman, F. J. 1990, ARAA, 28, 215
|
0704.0945 | Gibbs fragmentation trees | Gibbs fragmentation trees
Bernoulli 14(4), 2008, 988–1002
DOI: 10.3150/08-BEJ134
Gibbs fragmentation trees
PETER MCCULLAGH1 , JIM PITMAN2 and MATTHIAS WINKEL3
1Department of Statistics, University of Chicago, 5734 University Ave, Chicago, IL 60637, USA.
E-mail: [email protected]
2Statistics Department, 367 Evans Hall # 3860, University of California, Berkeley, CA 94720-
3860, USA. E-mail: [email protected]
3Department of Statistics, University of Oxford, 1 South Parks Road, Oxford OX1 3TG, UK.
E-mail: [email protected]
We study fragmentation trees of Gibbs type. In the binary case, we identify the most general
Gibbs-type fragmentation tree with Aldous’ beta-splitting model, which has an extended pa-
rameter range β >−2 with respect to the beta(β + 1, β + 1) probability distributions on which
it is based. In the multifurcating case, we show that Gibbs fragmentation trees are associated
with the two-parameter Poisson–Dirichlet models for exchangeable random partitions of N, with
an extended parameter range 0≤ α≤ 1, θ ≥−2α and α< 0, θ =−mα, m ∈ N.
Keywords: Aldous’ beta-splitting model; Gibbs distribution; Markov branching model;
Poisson–Dirichlet distribution
1. Introduction
We are interested in various models for random trees associated with processes of re-
cursive partitioning of a finite or infinite set, known as fragmentation processes [2, 4, 9].
We start by introducing a convenient formalism for the kind of combinatorial trees aris-
ing naturally in this context [16, 18]. Let #B be the number of elements in the finite
non-empty set B. Following standard terminology, a partition of B is a collection
πB = {B1, . . . ,Bk}
of non-empty disjoint subsets of B whose union is B. To introduce a new terminology
convenient for our purpose, we make the following recursive definition. A fragmentation
of B (sometimes called a hierarchy or a total partition) is a collection tB of non-empty
subsets of B such that
(i) B ∈ tB ;
(ii) if #B ≥ 2 then there is a partition πB of B into k parts, B1, . . . ,Bk, called the
children of B, for some k ≥ 2, with
tB = {B} ∪ tB1 ∪ · · · ∪ tBk , (1)
This is an electronic reprint of the original article published by the ISI/BS in Bernoulli,
2008, Vol. 14, No. 4, 988–1002. This reprint differs from the original in pagination and
typographic detail.
1350-7265 © 2008 ISI/BS
Figure 1. Two fragmentations of [9] graphically represented as trees labeled by subsets of [9].
where tBi is a fragmentation of Bi for each 1≤ i≤ k.
Necessarily, Bi ∈ tB , each child Bi of B with #Bi ≥ 2 has further children, and so on,
until the set B is broken down into singletons. We use the same notation tB both
• for such a collection of subsets of B, and
• for the tree whose vertices are these subsets of B and whose edges are defined by
the parent/child relation determined by the fragmentation.
To emphasize the tree structure, we may call tB a fragmentation tree. Thus, B is the root
of tB and each singleton subset of B is a leaf of tB (see Figure 1 – here [9] = {1, . . . ,9};
we also put [n] = {1, . . . , n}). We denote by TB the collection of all fragmentations of B.
A fragmentation tB ∈ TB is called binary if every A ∈ tB has either 0 or 2 children. We
denote by BB ⊆ TB the collection of binary fragmentations of B.
For each non-empty subset A of B, the restriction to A of tB , denoted tA,B , is the
fragmentation tree whose root is A, whose leaves are the singleton subsets of A and
whose tree structure is defined by restriction of tB . That is, tA,B is the fragmentation
{C ∩ A : C ∩ A 6= ∅,C ∈ tB} ∈ TA, corresponding to a reduced subtree, as discussed by
Aldous [1].
Given a rooted combinatorial tree with no single-child vertices and whose leaves are
labeled by a finite set B, there is a corresponding fragmentation tB , where each vertex
of the combinatorial tree is associated with the set of leaves in the subtree above that
vertex. So the fragmentations defined here provide a convenient way to label the vertices
of a combinatorial tree and to encode the tree structure in the labeling.
A random fragmentation model is an assignment, for each finite subset B of N, of a
probability distribution on TB for a random fragmentation TB of B. We assume through-
out this paper that the model is exchangeable, meaning that the distribution of TB is
invariant under the obvious action of permutations of B on fragmentations of B. The
distribution of ΠB , the partition of B generated by the branching of TB at its root, is
then of the form
P(ΠB = {B1, . . . ,Bk}) = p(#B1, . . . ,#Bk) (2)
for all partitions {B1, . . . ,Bk} with k ≥ 2 blocks and some symmetric function p of com-
positions of positive integers, called a splitting probability rule. The model is called
• consistent if for every A⊂B, the restricted tree TA,B is distributed like TA;
• Markovian if, given ΠB = {B1, . . . ,Bk}, the k restricted trees TB1,B, . . . , TBk,B are
independent and distributed as TB1 , . . . , TBk ;
• binary if TB is a binary tree with probability one, for every B.
Aldous [2] initiated the study of consistent Markovian binary trees as models for neutral
evolutionary trees. He observed parallels between these models and Kingman’s theory
of exchangeable random partitions of N, and posed the problem of characterizing these
models analogously to known characterizations of the Ewens sampling formula for random
partitions. In [9], we showed how consistent Markovian trees arise naturally in Bertoin’s
theory of homogeneous fragmentation processes [4] and deduced from Bertoin’s theory a
general integral representation for the splitting rule of a Markovian fragmentation model.
To briefly review these developments in the binary case, the distribution of a Markovian
binary fragmentation TB is determined by a splitting rule p, which is a symmetric function
p of pairs of positive integers (i, j), according to the following formula for the probability
of a given tree t ∈ BB :
P(T_B = t) = \prod_{A \in t : \#A \geq 2} p(\#A_1, \#A_2) , \quad (3)
where A1 and A2 denote the two children of A in the tree TB .
The following proposition collects some known results.
Proposition 1. (i) Every non-negative symmetric function p subject to normalization
conditions
\frac{1}{2} \sum_{k=1}^{n-1} \binom{n}{k} p(k, n-k) = 1 \quad \text{for all } n \geq 2
defines a Markovian binary fragmentation model.
(ii) A splitting rule p gives rise to a consistent Markovian binary fragmentation if
and only if
p(i, j) = p(i+1, j) + p(i, j + 1)+ p(i+ j,1)p(i, j) for all i, j ≥ 1. (4)
(iii) Every consistent splitting rule admits an integral representation
p(i, j) = \frac{1}{Z(i+j)} \left( \int_{(0,1)} x^i (1-x)^j \, \nu(dx) + c \, 1_{\{i=1 \text{ or } j=1\}} \right) \quad \text{for all } i, j \geq 1 , \quad (5)
with characteristics c ≥ 0 and ν a symmetric measure on (0,1) with \int_{(0,1)} x(1-x) \, \nu(dx) < ∞,
and Z(n) a sequence of normalization constants.
Proof. (i) is elementary. For (ii), Ford [6], Proposition 41, gave a characterization of
consistency for models of unlabeled trees which is easily shown to be equivalent to the
condition stated here. The interpretation (and sketch of proof) of this condition is that
for B = C ∪ {k} (with k /∈ C), the vertex C of TC splits into a particular partition of
sizes i and j if and only if TB splits into that partition with k added to one or the other
block, or if TB first splits into C and {k} and then C splits further into that partition
of sizes i and j. (iii) is directly read from [9]. □
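The consistency criterion (4) can be checked numerically for the beta-splitting rule of (5)-(6) with c = 0. The following Python sketch, which is ours, does this for a few parameter values; the normalization used is the one forced by the condition in Proposition 1(i).

from math import comb
from scipy.special import beta as beta_fn

def beta_splitting_rule(b):
    """Splitting rule p(i, j) proportional to Beta(i+b+1, j+b+1), with c = 0."""
    def Z(n):
        return 0.5 * sum(comb(n, k) * beta_fn(k + b + 1, n - k + b + 1)
                         for k in range(1, n))
    return lambda i, j: beta_fn(i + b + 1, j + b + 1) / Z(i + j)

for b in (-1.5, -1.0, 0.0, 1.0):
    p = beta_splitting_rule(b)
    for i, j in [(1, 1), (2, 1), (2, 3), (4, 4)]:
        lhs = p(i, j)
        rhs = p(i + 1, j) + p(i, j + 1) + p(i + j, 1) * p(i, j)
        assert abs(lhs - rhs) < 1e-12, (b, i, j)
print("consistency criterion (4) holds for the sampled cases")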
Aldous [2] studied in some detail the beta-splitting model which arises as the particular
case of (5) with characteristics c= 0 and
ν(dx) = x^β (1-x)^β dx for β ∈ (−2,∞) and ν(dx) = δ_{1/2}(dx) for β = ∞. (6)
Aldous posed the problem of characterizing this model among all consistent binary
Markov models. The main focus of this paper is the following result.
Theorem 2. Aldous’ beta-splitting models for β ∈ (−2,∞] are the only consistent
Markovian binary fragmentations with splitting rule of the form
p(i, j) = \frac{w(i) \, w(j)}{Z(i+j)} \quad \text{for all } i, j \geq 1 , \quad (7)
for some sequence of weights w(j)≥ 0, j ≥ 1, and normalization constants Z(n), n≥ 2.
As a corollary, we extract a statement purely about measures on (0,1).
Corollary 3. Every symmetric measure ν on (0,1) with \int_{(0,1)} x(1-x) \, \nu(dx) < ∞, whose
moments factorize into the form
\int_{(0,1)} x^i (1-x)^j \, \nu(dx) = w(i) \, w(j) \quad \text{for all } i, j \geq 1
for some w(i)≥ 0, i≥ 1, is a multiple of one of Aldous’ beta-splitting measures (6).
In particular, this characterizes the symmetric beta distributions among probability
measures on (0,1).
Berestycki and Pitman [3] encountered a different one-dimensional class of Gibbs split-
ting rules in the study of fragmentation processes related to the affine coalescent. These
are not consistent, but the Gibbs fragmentations are naturally embedded in continuous
time.
The rest of this paper is organized as follows. Section 2 offers an alternative char-
acterization of what we call binary Gibbs models, meaning models with splitting rule
of the form (7), without assuming consistency. Theorem 2 is then proved in Section 3.
In Section 4, we discuss growth procedures and embedding in continuous time for the
consistent case. Section 5 gives a generalization of the Gibbs results to multifurcating
trees.
2. Characterization of binary Gibbs fragmentations
The Gibbs model (7) is overparameterized: if we multiply w(k), k ≥ 1, by ab^k (and
then Z(m), m ≥ 2, by a^2 b^m), the model remains unchanged. Note, further, that neither
w(1) = 0 nor w(2) = 0 is possible since then (7) does not define a probability function for
n= i+ j = 3. Hence, we may assume w(1) = 1 and w(2) = 1. It is now easy to see that
for any two different such sequences, the models are different. Note that the following
result does not assume a consistent model.
Proposition 4. The following two conditions on a collection of random binary fragmen-
tations TB indexed by finite subsets B of N are equivalent:
(i) TB is for each B an exchangeable Markovian binary fragmentation with splitting
rule of the Gibbs form (7) for some sequence of weights w(j)> 0, j ≥ 1, and normaliza-
tion constants Z(n), n≥ 2;
(ii) for each B, the probability distribution of TB is of the form
P(T_B = t) = \frac{1}{w(\#B)} \prod_{A \in t : \#A \geq 2} \psi(\#A) \quad \text{for all } t \in \mathcal{B}_B , \quad (8)
for some sequence of weights ψ(j)> 0, j ≥ 1, and normalisation constants w(n), n≥ 1.
More precisely, if (i) holds with w(1) = 1, then (ii) holds for the same sequence w with
ψ(1) = 1 and ψ(k) =w(k)/Z(k), k ≥ 2. (9)
Conversely, if (ii) holds for some sequence ψ with ψ(1) = 1, then (i) holds for the sequence
w(n), n≥ 1, determined by (8); in particular, w(1) = 1.
Proof. Given a Gibbs model with w(1) = 1, we can combine (3) and (7) to get, for all
t ∈ BB ,
P(T_B = t) = \prod_{A \in t : \#A \geq 2} \frac{w(\#A_1) \, w(\#A_2)}{Z(\#A)} = \frac{1}{w(\#B)} \prod_{A \in t : \#A \geq 2} \frac{w(\#A)}{Z(\#A)} .
If we make the substitution (9), we can read off w(n) as the correct normalization constant
and (8) follows, with ψ(1) = 1.
On the other hand, (8) determines the sequence w(n), n≥ 1, as
w(n) = \sum_{t \in \mathcal{B}_{[n]}} \prod_{A \in t : \#A \geq 2} \psi(\#A) .
Note, in particular, that w(1) = ψ(1). We can express the normalization constants in the
Gibbs model (7) by the formula
Z(m) = \frac{1}{2} \sum_{k=1}^{m-1} \binom{m}{k} w(k) \, w(m-k) \quad (10)
     = \frac{1}{2} \sum_{k=1}^{m-1} \binom{m}{k} \left( \sum_{t_1 \in \mathcal{B}_{[k]}} \prod_{A \in t_1 : \#A \geq 2} \psi(\#A) \right) \left( \sum_{t_2 \in \mathcal{B}_{[m-k]}} \prod_{A \in t_2 : \#A \geq 2} \psi(\#A) \right)
     = \sum_{t \in \mathcal{B}_{[m]}} \prod_{A \in t : A \neq [m], \, \#A \geq 2} \psi(\#A) = w(m)/\psi(m),
as in (9). By application of the previous implication from (i) to (ii), formula (8) gives
the distribution of the Gibbs model derived from this weight sequence w(n) and the
conclusion follows. □
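The recursion behind (9) and (10) is easy to implement: given a weight sequence ψ with ψ(1) = 1, the constants w(n) and Z(n) follow, and w(n) can be cross-checked against a brute-force sum over all binary fragmentations of a small set. The Python sketch below is ours; the weight sequence chosen is arbitrary.

from math import comb
from itertools import combinations

def weights_from_psi(psi, n_max):
    """w(n) and Z(n) from the vertex weights psi via
    Z(m) = (1/2) sum_k C(m,k) w(k) w(m-k)  and  w(m) = psi(m) Z(m)."""
    w, Z = {1: 1.0}, {}
    for m in range(2, n_max + 1):
        Z[m] = 0.5 * sum(comb(m, k) * w[k] * w[m - k] for k in range(1, m))
        w[m] = psi(m) * Z[m]
    return w, Z

def w_by_enumeration(psi, elements):
    """Brute-force sum over all binary fragmentations of prod psi(#A), #A >= 2."""
    n = len(elements)
    if n == 1:
        return 1.0
    first, rest = elements[0], elements[1:]
    total = 0.0
    # Enumerate unordered first splits {B1, B2} by fixing the block containing `first`.
    for r in range(0, n - 1):
        for extra in combinations(rest, r):
            B1 = (first,) + extra
            B2 = tuple(x for x in rest if x not in extra)
            total += w_by_enumeration(psi, B1) * w_by_enumeration(psi, B2)
    return psi(n) * total

psi = lambda k: 1.0 / k            # an arbitrary weight sequence with psi(1) = 1
w, Z = weights_from_psi(psi, 6)
print(w[6], w_by_enumeration(psi, tuple(range(6))))   # the two values should agree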
Note that the normalization constant Z(m) in the Gibbs splitting rule (7), given in (10),
is a partial Bell polynomial in w(1),w(2), . . . (see [15] for more applications of
Bell polynomials), whereas the normalization constant w(n) in the Gibbs tree formula (8)
is a polynomial in ψ(1), ψ(2), . . . of a much more complicated form. The normalization
constant in (8) is
w(n) = \sum_{t \in \mathcal{B}_{[n]}} \prod_{A \in t : \#A \geq 2} \psi(\#A) .
In an attempt to study this polynomial in ψ(1), ψ(2), . . . , we introduce the signature
σt : [n]→N of a tree t ∈ B[n] by
σt(j) =#{A ∈ t :#A= j}, j = 1, . . . , n.
Note that P(Tn = t) depends on t only via σt, that is, σt is a sufficient statistic for
the Gibbs probabilities (8). Denote the set of signatures by Sig_n = {σ_t : t ∈ B[n]}. The
inductive definition of B[n] yields
Sig_n = \{ \sigma^{(1)} + \sigma^{(2)} + 1_n : \sigma^{(1)} \in Sig_{n_1}, \ \sigma^{(2)} \in Sig_{n_2}, \ n_1 + n_2 = n \},
where 1n(j) = 1 if j = n, 1n(j) = 0 otherwise. The coefficients Qσ in w(n), when expanded
as a polynomial in ψ(1), ψ(2), . . . , are numbers of fragmentations with the same signature
σ ∈ Sig_n:
w(n) = \sum_{\sigma \in Sig_n} Q_\sigma \, \psi^\sigma , \quad \text{where } \psi^\sigma = \prod_{j=1}^{n} \psi(j)^{\sigma(j)} .
Let us associate with each fragmentation t ∈ B[n] its tree shape (combinatorial tree
without labels) t◦ and denote by B◦n the collection of shapes of binary trees with n
leaves. Clearly, two fragmentations with the same tree shape have the same signature,
so we can define σ(t◦) in the obvious way. For n ≤ 8 (and many larger trees), direct
enumeration shows that the tree shape t◦ ∈ B◦n is uniquely determined by its signature
σ, and Qσ is just the number q(t◦) of different labelings. For n ≥ 9, this is false: there
are two tree shapes with signature (9,3,1,2,1,0,0,0,1); see Figure 2. If we denote by
I◦σ ⊆ B◦n the set of tree shapes with signature σ, then Qσ = \sum_{t◦ \in I◦σ} q(t◦). The remaining
combinatorial problem is therefore to study I◦σ and q(t◦). We have not been able to solve
this problem. The preprint version [12] of the present paper includes an Appendix with
a partial study: see also Corollary 2.4.3 of [17].
3. Consistent binary Gibbs rules
The statement of Theorem 2 specifies Aldous’ [2] beta-splitting models by their integral
representation (5). Observe that the moment formula for beta distributions easily gives
p(i, j) = \frac{1}{Z(i+j)} \int_0^1 x^{i+\beta} (1-x)^{j+\beta} \, dx = \frac{\Gamma(i+\beta+1) \, \Gamma(j+\beta+1)}{R(i+j)} \quad \text{for all } i, j \geq 1 , \quad (11)
for normalization constants R(n) = Z(n)Γ(n+ 2β + 2), n ≥ 2. This is for β ∈ (−2,∞).
For β = ∞, we simply get p(i, j) = 1/R(i+j) for all i, j ≥ 1, where R(n) = Z(n) 2^n, n ≥ 2.
Proof of Theorem 2. We start from a general Gibbs model (7) with w(1) = 1 and
follow [7], Section 2 closely, where a similar characterization is derived in a partition
rather than a tree context. Let the Gibbs model be consistent. This immediately implies
that w(j)> 0 for all j ≥ 1. The consistency criterion (4) in terms of Wj =w(j +1)/w(j)
now gives
W_i + W_j = \frac{Z(i+j+1) - w(i+j)}{Z(i+j)} \quad \text{for all } i, j \geq 1 . \quad (12)
The right-hand side is a function of i+j, so W_{j+1} − W_j is constant and hence W_j = a + bj
for some b ≥ 0 and a > −b. Now, either b = 0 (excluded for the time being) or
w(j) = W_1 \cdots W_{j-1} = \prod_{q=1}^{j-1} (a + bq) = b^{j-1} \prod_{q=1}^{j-1} (a/b + q) = b^{j-1} \, \frac{\Gamma(a/b + j)}{\Gamma(a/b + 1)}
Figure 2. Two tree shapes with the same signature (here marked by subtree sizes).
and, hence, reparameterizing by β = a/b − 1 ∈ (−2,∞) and pushing b^{i+j−2} into the
normalization constant d_{i+j} = b^{i+j−2}/Z(i+j), we have
p(i, j) = \frac{w(i) \, w(j)}{Z(i+j)} = d_{i+j} \, \frac{\Gamma(i+1+\beta)}{\Gamma(2+\beta)} \, \frac{\Gamma(j+1+\beta)}{\Gamma(2+\beta)} .
The case b = 0 is the limiting case β = ∞, when, clearly, w(j) ≡ 1 (now pushing a^{i+j−2}
into the normalization constant).
These are precisely Aldous' beta-splitting models, as in (11). □
While we identified the boundary case β =∞ as being of Gibbs type, the boundary
case β =−2 is not of Gibbs type, although it can still be made precise as a Markovian
fragmentation model with characteristics c > 0 and ν = 0 (pure erosion): p(i, j) = 0 unless
i= 1 or j = 1, so the Markovian fragmentations Tn are combs, where all n− 1 branching
vertices are lined up in a single spine.
In the proof of the theorem, we obtained as parameterization for the Gibbs models
w(j) = \frac{\Gamma(j+1+\beta)}{\Gamma(2+\beta)} , \quad j \geq 1 , \quad (13)
for some β ∈ (−2,∞), or w(j)≡ 1 for β =∞. Note that the simple convention w(2) = 1
from Section 2 is not useful here. We can now still deduce the parameterization (8) by
Proposition 4, in principle. However, since ψ(k) =w(k)/Z(k) involves partial Bell polyno-
mials Z(k) in w(1),w(2), . . . , this is less explicit in terms of β than the parameterization
ψ(2) = 2 + β,  ψ(3) = (3 + β)/3,  ψ(4) = (3 + β)(4 + β)/(18 + 7β), . . . .
Special cases that have been studied in various biology and computer science contexts
(see Aldous [2] for a review) include the following: β = −3/2,−1,0,∞. In these cases,
we can explicitly calculate the Gibbs parameters in (7) and (8) and the normalisation
constants.
If β = −3/2, we can take ψ(n) ≡ 1 and T_B is uniformly distributed: if #B = n, then P(T_B = t) = 2^{n−1}(n − 1)!/(2n − 2)!, t ∈ B_B. The asymptotics of uniform trees lead to
Aldous’ Brownian CRT [1]; see also [15], Section 6.3. Table 1 uses a different parameter-
ization via the convenient relations (9) and (13).
The case β = −1 is the limiting conditional distribution in the Ewens family as the
Ewens parameter λ→ 0, conditional on the occurrence of a split. The β = 0 case is
known as the Yule model and β = ∞ as the symmetric binary trie (see Aldous [2]).
Continuum tree limits of the beta-splitting model for β ∈ (−2,−1) are described in [9].
The normalization that leads to a compact limit tree is here T_{[n]}/n^{−β−1}, where T_{[n]} is represented as a metric tree with unit edge lengths and the scaling T_{[n]}/n^{−β−1} refers
to scaling of edge lengths. Aldous [2] studies weaker asymptotic properties for average
distance from a leaf to the root, also for β ≥−1, where growth is logarithmic.
4. Growth rules and embedding in continuous time
In [9], we study the consistently growing sequence Tn, n≥ 1, where Tn := T[n] = T[n],[n+1]
is the restriction of Tn+1 to [n] for all n≥ 1, in a general context of consistent Marko-
vian multifurcating fragmentation models. The integral representation (5) stems from an
association with Bertoin’s theory of homogeneous fragmentation processes in continuous
time [4]. Let us here look at the binary case in general and Gibbs fragmentations in
particular.
Consider the distribution of Tn+1, given Tn. The tree Tn+1 has a vertex A ∪ {n+ 1}
with children {n + 1} and A ∈ Tn. We say that n + 1 has been attached below A. In
passing from Tn to Tn+1, leaf n+1 can be attached below any vertex A of Tn (including
[n] and all leaf nodes). Note that to construct Tn+1 from Tn, n+ 1 is also added as an
element to all vertices on the path from [n] to A. Vertex A ∈ Tn is special in that both
A and A∪ {n+ 1} are in Tn+1.
Fix a vertex A of t ∈ B[n] and consider the conditional probability, given Tn = t, of
n+ 1 being attached below A. This is the ratio of two probabilities of the form (3) in
which many common factors cancel so that only the probabilities along the path from
[n] to A remain. This yields the following result.
Table 1. Closed form expressions of the parameters for β = −3/2, −1, 0, ∞

    β      −3/2                           −1        0          ∞
           (2n−2)!/(2^{2n−2}(n−1)!)       (n−1)!    n!         1
           (2n−2)!/(2^{2n−3}(n−1)!)       (n−1)!    (n−1)n!    2^{n−1} − 1

Proposition 5. Let t ∈ B_{[n]} and A ∈ t. Denote by
$$[n] = A_1 \supset \cdots \supset A_h = A$$
the path from [n] to A. We refer to h ≥ 1 as the height of A in t. The probability that n + 1 attaches below A is then
$$\prod_{j=1}^{h-1}\frac{p(\#A_{j+1}+1,\ \#(A_j\setminus A_{j+1}))}{p(\#A_{j+1},\ \#(A_j\setminus A_{j+1}))}\;\, p(\#A_h,1).$$

For the uniform model (Gibbs fragmentation with β = −3/2), this product is telescoping, or we calculate directly from (8)
$$\prod_{j=1}^{h-1}\frac{p(\#A_{j+1}+1,\ \#(A_j\setminus A_{j+1}))}{p(\#A_{j+1},\ \#(A_j\setminus A_{j+1}))}\;\, p(\#A_h,1) = \frac{1}{2n-1},$$
giving a simple sequential construction (see, e.g., [15], Exercise 7.4.11).
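The sequential construction can be sketched as follows (a hypothetical implementation, not the authors' code): each new leaf is attached below a vertex chosen uniformly at random among the 2n − 1 vertices of the current tree with n leaves.

```python
# Hypothetical sketch of the sequential construction for the uniform model (beta = -3/2).
import random

def grow_uniform_cladogram(n: int, seed: int = 0):
    rng = random.Random(seed)
    tree = 1  # leaves are integer labels; internal vertices are 2-element lists
    for leaf in range(2, n + 1):
        # collect references to all vertices (subtrees) together with their parents
        vertices = []  # entries: (parent_list_or_None, index_in_parent, subtree)
        stack = [(None, None, tree)]
        while stack:
            parent, idx, node = stack.pop()
            vertices.append((parent, idx, node))
            if isinstance(node, list):
                stack.append((node, 0, node[0]))
                stack.append((node, 1, node[1]))
        parent, idx, node = rng.choice(vertices)  # uniform over the 2*(leaf-1)-1 vertices
        new_vertex = [node, leaf]                 # attach the new leaf below 'node'
        if parent is None:
            tree = new_vertex
        else:
            parent[idx] = new_vertex
    return tree

print(grow_uniform_cladogram(5))
```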
It was shown in [9] that consistent Markovian fragmentation models can be assigned
consistent independent exponential edge lengths, where the edge below vertex A is given
parameter λ#A, for a family (λm)m≥1 of rates, where λ1 = 0, λ2 is arbitrary and λm,
m≥ 3, is determined by λ2 and the splitting rule p, in that consistency requires
λn+1(1− p(n,1)) = λn for all n≥ 2. (14)
The interpretation is that the partition of [n+1] in Tn+1 (arriving at rate λn+1) splits [n]
only with probability 1− p(n,1) and this thinning must reduce the rate for the partition
of [n] in Tn to λn. This rate λn also applies in Tn+1 after a first split {[n],{n+ 1}}.
Using consistency, equation (14) also implies
λnp(i, j) = λn+1(p(i, j + 1)+ p(i+ 1, j)) for all i, j ≥ 1 with i+ j = n.
For the Gibbs fragmentation models, we obtain, using (14), (7), (12) and (13),
$$\lambda_n = \lambda_2 \prod_{j=2}^{n-1}\frac{1}{1-p(j,1)} = \lambda_2 \prod_{j=2}^{n-1}\frac{Z(j+1)}{Z(j+1)-w(j)} = \lambda_2\, Z(n)\prod_{j=2}^{n-1}\frac{1}{W_1+W_{j-1}} = \lambda_2\, Z(n)\prod_{j=2}^{n-1}\frac{w(j-1)}{w(2)w(j-1)+w(j)} = \lambda_2\, Z(n)\,\frac{\Gamma(4+2\beta)}{\Gamma(n+2+2\beta)},$$
where we require β <∞ for the last step. Table 2 contains the rate sequences for β =
−3/2,−1,0,∞ in the case λ2 = 1.
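The thinning relation (14) also gives a direct way to compute the rates numerically. The sketch below (ours, not from the paper; it assumes λ2 = 1 and the weights of eq. (13)) builds λn recursively and compares the β = 0 case with the closed form listed in Table 2.

```python
# Small sketch: rate sequence from lambda_{n+1} = lambda_n / (1 - p(n,1)), assuming lambda_2 = 1.
from math import comb, lgamma, exp, isclose

def w(j, beta):
    return exp(lgamma(j + 1 + beta) - lgamma(2 + beta))   # eq. (13)

def Z(n, beta):
    return 0.5 * sum(comb(n, i) * w(i, beta) * w(n - i, beta) for i in range(1, n))

def rates(beta, n_max):
    lam = {2: 1.0}
    for n in range(2, n_max):
        p_n1 = w(n, beta) * w(1, beta) / Z(n + 1, beta)    # p(n, 1)
        lam[n + 1] = lam[n] / (1.0 - p_n1)                 # eq. (14)
    return lam

lam = rates(beta=0.0, n_max=10)
for n in range(2, 11):
    assert isclose(lam[n], (3 * n - 3) / (n + 1), rel_tol=1e-10)  # beta = 0 column of Table 2
print("beta = 0 rates match (3n-3)/(n+1)")
```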
Not only is (λn)n≥3 determined by p, but a converse of this also holds.
Proposition 6. Let (λn)n≥2 be a consistent rate sequence associated with a consistent
Markovian binary fragmentation model with splitting rule p, meaning that (14) holds.
Then, p is uniquely determined by (λn)n≥2.
Proof. It is evident from (14) that p(n,1) is determined for all n ≥ 2, and p(1,1) = 1.
Now, (4) for i= 1 determines p(i+1, j) for all j ≥ 2, and an induction in i completes the
proof. □
A more subtle question is to ask what sequences (λn)n≥2 arise as consistent rate
sequences. The above argument can be made more explicit to yield
$$p(k, n-k) = \frac{1}{\lambda_n}\sum_{j=0}^{k}(-1)^{k-j+1}\binom{k}{j}\,\lambda_{n-j}, \qquad 1 \le k \le n/2,$$
which means that (λn)n≥2 must have a discrete complete monotonicity, in that kth
differences of (λn)n≥2 must be of alternating signs, k ≥ 1. This condition is not sufficient,
however, as simple examples for n = 3 show (λ_n = (n − 1)^α is completely monotone for α ∈ (0, 1), but exchangeability implies that 1/3 = p(1, 2) = (λ_3 − λ_2)/λ_3 and so λ_3 = 3/2, whereas (3 − 1)^α ∈ (1, 2) – even in the multifurcating case, cf. Section 5, we always have λ_3 ≤ 3/2).
Proposition 7. A sequence (λn)n≥2 arises as rate sequence of a consistent Markovian
binary fragmentation model if and only if
$$\lambda_n = nc + \int_{(0,1)} \bigl(1 - x^n - (1-x)^n\bigr)\,\nu(dx)$$
for some c ≥ 0 and ν a symmetric measure on (0, 1) with ∫_{(0,1)} x(1−x) ν(dx) < ∞. The characteristics of the splitting rules associated with (λ_n)_{n≥2} are (c, ν).
Proof. This is a consequence of the integral representation (5) and [9], Proposition 3.
Specifically, the association with Bertoin’s theory of homogeneous fragmentations yields
that each of 1, . . . , n suffers erosion (being turned into a singleton) at rate c; the measure
ν(dx) gives the rate of fragmentations into two parts, to which 1, . . . , n are allocated
independently with probabilities (x, 1 − x), hence splitting [n] with probability 1 − x^n − (1 − x)^n. □
Table 2. Explicit rate sequences for β = −3/2, −1, 0, ∞

    β       −3/2                                −1                       0               ∞
    λ_n     (n−1) C(2n−2, n−1) / 2^{2n−3}       1 + 1/2 + ⋯ + 1/(n−1)    (3n−3)/(n+1)    2(1 − 2^{−(n−1)})

The complete monotonicity is related to the study of the block containing 1, a tagged fragment; see [4, 10]. Since λ_n is the rate at which one or more of {2, . . . , n} leave the block containing 1, the rate is composed of three components – a rate c for the erosion of 1, a rate (n − 1)c for the erosion of 2, . . . , n and a rate Λ(dz) of fragmentations into two parts, to which 2, . . . , n are allocated independently with probabilities (e^{−z}, 1 − e^{−z}), with 1 in the former part, hence splitting [n] with probability 1 − e^{−(n−1)z}. Therefore
$$\lambda_n = c + (n-1)c + \int_{(0,\infty)}\bigl(1-e^{-(n-1)z}\bigr)\,\Lambda(dz) = cn + \int_{(0,1)}\bigl(1-\xi^{n-1}\bigr)\,\mu(d\xi) = \Phi(n-1)$$
for a Bernstein function Φ, a finite measure μ on (0, 1) or a Lévy measure Λ on (0, ∞) with ∫_{(0,∞)}(1 ∧ x) Λ(dx) < ∞ (see [4, 8, 10]), that is, λ_n can be extended to a completely monotone function of a real parameter.
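To illustrate the discrete complete monotonicity numerically (an illustrative sketch, not from the paper), one can take c = 0 and the convenient choice ν(dx) = dx on (0, 1), which is symmetric and satisfies the integrability condition, and check that the k-th finite differences of λn alternate in sign.

```python
# Illustrative sketch: lambda_n for c = 0 and nu(dx) = dx on (0,1), and its alternating differences.
from math import comb

def lam(n: int) -> float:
    # int_0^1 (1 - x^n - (1-x)^n) dx = 1 - 2/(n+1)
    return 1.0 - 2.0 / (n + 1)

def kth_difference(k: int, n: int) -> float:
    return sum((-1) ** j * comb(k, j) * lam(n + k - j) for j in range(k + 1))

for k in range(1, 6):
    signs = {(-1) ** (k + 1) * kth_difference(k, n) > 0 for n in range(2, 30)}
    assert signs == {True}, k   # k-th differences have sign (-1)^{k+1}
print("k-th differences of (lambda_n) alternate in sign, k = 1..5")
```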
5. Multifurcating Gibbs fragmentations and
Poisson–Dirichlet models
As a generalization of the binary framework of the previous sections, we consider in this
section consistent Markovian fragmentation models with splitting rule p as in (2) of the
Gibbs form
$$p(n_1, \ldots, n_k) = \frac{a(k)}{c(n_1+\cdots+n_k)}\prod_{i=1}^{k} w(n_i) \tag{15}$$
for some w(j) ≥ 0, j ≥ 1, a(k) ≥ 0, k ≥ 2, and normalization constants c(n) > 0, n ≥ 2.
Note that we must have w(1)> 0 and a(2)> 0 to get positive probabilities for n= 2. To
remove overparameterization, we will assume w(1) = 1 and a(2) = 1. Also, if we multiply
w(j) by bj−1 and a(k) by bk (and c(n) by bn), the model remains unchanged. We will
use this observation to get a nice parameterization in the consistent case (Theorem 8
below).
In [9], we showed that consistency of the model is equivalent to the set of equations
$$p(n_1, \ldots, n_k) = p(n_1+1, n_2, \ldots, n_k) + \cdots + p(n_1, \ldots, n_k+1) + p(n_1, \ldots, n_k, 1) + p(n_1+\cdots+n_k, 1)\,p(n_1, \ldots, n_k) \tag{16}$$
for all n1, . . . , nk ≥ 1, k ≥ 2. We also established an integral representation extending (5)
to the multifurcating case. The special case relevant for us is in terms of a measure ν on
S↓ = {s = (s_i)_{i≥1} : s_1 ≥ s_2 ≥ · · · ≥ 0, s_1 + s_2 + · · · = 1} satisfying ∫_{S↓}(1 − s_1) ν(ds) < ∞:
$$p(n_1, \ldots, n_k) = \frac{1}{Z(n_1+\cdots+n_k)}\int_{S^\downarrow} \sum_{i_1,\ldots,i_k \text{ distinct}} s_{i_1}^{n_1}\cdots s_{i_k}^{n_k}\;\nu(ds). \tag{17}$$
The general case has a further parameter c ≥ 0, as in (5), and also allows ν to charge
(si)i≥1 with s1 + s2 + · · ·< 1; see [9]. We will only meet the extreme case p(1, . . . ,1) = 1,
which corresponds to ν = δ(0,0,...).
We set
$$A_k = \frac{a(k+1)}{a(k)}, \qquad C_n = \frac{c(n+1)}{c(n)}, \qquad W_n = \frac{w(n+1)}{w(n)}$$
and, in analogy to Proposition 5, we find that, given T_n = t ∈ T_{[n]}, for each vertex B ∈ t, the probability that n + 1 attaches below B is
$$\prod_{j=1}^{h-1}\frac{W_{n_{j+1}}}{C_{n_j}}\;\cdot\;\frac{a(2)\,w(n_h)\,w(1)}{c(n_h+1)},$$
where [n] = S_1 ⊃ · · · ⊃ S_h = B is the path from [n] to B, n_j = #S_j and k_j denotes the number of children of S_j, j = 1, . . . , h. However, n + 1 can also attach as a singleton block to an existing partition {B_1, . . . , B_k} of B ∈ T_n. In this case, we say that n + 1 attaches to the vertex B. For each non-leaf vertex B ∈ t, the probability that n + 1 attaches to the vertex B is
$$\prod_{j=1}^{h-1}\frac{W_{n_{j+1}}}{C_{n_j}}\;\cdot\;\frac{A_{k_h}\,w(1)}{C_{n_h}}.$$
In this framework, we have the following generalization of Theorem 2 to the multifurcat-
ing case.
Theorem 8. If p is of the Gibbs form (15) and consistent, then p is associated with the
two-parameter Ewens–Pitman family given by
$$w(n) = \frac{\Gamma(n-\alpha)}{\Gamma(1-\alpha)}, \quad n \ge 1, \qquad\text{and}\qquad a(k) = \alpha^{k-2}\,\frac{\Gamma(k+\theta/\alpha)}{\Gamma(2+\theta/\alpha)}, \quad k \ge 2$$
(or limiting quantities α ↓ 0), c(n), n≥ 1, being normalization constants, for a parameter
range extended as follows:
• either 0≤ α < 1 and θ >−2α (multifurcating cases with arbitrarily high block num-
bers),
• or α < 0 and θ = −mα for some integer m ≥ 3 (multifurcating with at most m
blocks),
• or α< 1 and θ =−2α (binary case),
• or α = −∞ and θ = m for some integer m ≥ 2, that is, a(2) = 1, a(k) = (m −
2) · · · (m− k + 1), k ≥ 3, and w(j) ≡ 1 (recursive coupon collector, where a split of
[n] is obtained by letting each element of [n] pick one of m coupons at random, just
conditioned so that at least two different coupons are picked),
• or α= 1, that is, w(1) = 1, w(j) = 0, j ≥ 2 (deterministic split into singleton blocks).
In terms of the integral representation (17), the measure ν on S↓ is, respectively, size-
ordered Poisson–Dirichlet(α, θ), Dirichlet(−α, . . . ,−α), Beta(−α,−α), δ(1/m,...,1/m) and
δ(0,0,...).
Proof. For the Gibbs fragmentation model with w(1) = a(2) = 1 and w(j) > 0 for all
j ≥ 2 with notation as introduced, consistency (16) is easily seen to be equivalent to
$$C_n = W_{n_1} + \cdots + W_{n_k} + A_k + \frac{w(n)}{c(n)} \qquad \text{for all } n_1 + \cdots + n_k = n, \tag{18}$$
where k ≤m if m= inf{i≥ 1 :a(i+ 1) = 0}<∞.
As in the proof of Theorem 2, we deduce from this (the special case k = 2) that either
Wj = a > 0 (excluded for the time being as b= 0) or
$$W_j = a + bj \quad\Rightarrow\quad w(j) = W_1 \cdots W_{j-1} = b^{j-1}\,\frac{\Gamma(j-\alpha)}{\Gamma(1-\alpha)} \qquad \text{for all } j \ge 1,$$
for some b > 0, a > −b and α := −a/b < 1. As noted above, we can reparameterize so
that we get b= 1 without loss of generality. In particular, Wj = j−α, j ≥ 1, and so (18)
reduces to
$$C_n = n - k\alpha + A_k + \frac{w(n)}{c(n)} \qquad \text{for all } 2 \le k \le m \wedge n.$$
Similarly, we deduce that θ := A_k − kα does not depend on k and so a(k) = θ^{k−2} if α = 0, and otherwise,
$$A_k = \theta + k\alpha \quad\Rightarrow\quad a(k) = A_2 \cdots A_{k-1} = \alpha^{k-2}\,\frac{\Gamma(k+\theta/\alpha)}{\Gamma(2+\theta/\alpha)} \qquad \text{for all } 2 \le k \le m+1.$$
Note that this algebraic derivation leads to probabilities in (15) only in the following
cases.
• If 0 ≤ α < 1, then a(3) = A2 = θ + 2α > 0 if and only if θ > −2α, and then also
Ak = θ+ kα > 0 and a(k)> 0 for all k ≥ 3.
• If α< 0, then a(3) =A2 = θ+2α> 0 if and only if θ >−2α also, but then Ak = θ+kα
is strictly decreasing in k and Ak < 0 eventually, which impedes m=∞. If we have
m<∞, we achieve a(m+ 1) = 0 if and only if θ =−mα. The iteration only takes
us to a(m+ 1) = 0 and we specify a(k) = 0 for k >m also. We cannot specify a(k),
k > m + 1, differently, since every consistent Gibbs fragmentation with a(k) > 0
for k > m+ 1 has the property that T[k] = {[k],{1}, . . . ,{k}} has only one branch
point [k] of multiplicity k with positive probability, but then the restricted tree
T[m+1],[k] = {[m+ 1],{1}, . . . ,{m+ 1}} with positive probability, which contradicts
a(m+ 1) = 0.
• If a(3) = 0, that is, m= 2, the argument of the preceding bullet point shows that we
are in the binary case a(k) = 0 for all k ≥ 3 and we can conclude by Theorem 2.
• The case b= 0 is the limiting case α=−∞ with w(j)≡ 1. We take up the argument
to see that Ak = θ − k and so m<∞ and θ =m, where we then get a(2) = 1 and
a(k) = (m− 2) · · · (m− k+ 1), 3≤ k ≤m+ 1.
Finally, if w(m) = 0 for some m ≥ 2, then consistency imposes w(j) = 0 for all j ≥m,
and it follows from the integral representation (17) that in fact w(j) = 0 for all j ≥ 2.
The identification of ν on the standard parameter range can be read from [15], Section
3.2. For the extension −α ≥ θ ≥ −2α, we refer to [10]. □
Kerov [11] showed that the only exchangeable partitions of N of Gibbs type are of
the two-parameter family PD(α, θ) with usual range for parameters θ > −α, etc.; see
also [7, 14]. Theorem 8 is a generalization to splitting rules that allows an extended
parameter range for the same reason as in the binary case: the trivial partition of one
single block is excluded from p and when associating consistent exponential edge lengths
with parameters λm, m≥ 1, the first split of [m+1] happens at a higher and higher rate
and we may have λm →∞. In fact,
κ({π ∈ PN :π|[n] = {B1, . . . ,Bk}}) = λnp(#B1, . . . ,#Bk)
uniquely defines a σ-finite measure on PN \ {N}, the set of non-trivial partitions of N,
associated with a homogeneous fragmentation process. This is closely related to (17)
via Kingman’s paintbox representation κ = ∫_{S↓} κ_s ν(ds). The extended range was first
observed by Miermont [13] in the special case θ = −1 (related to the stable trees of
Duquesne and Le Gall [5]).
We refer to [10] for a study of spinal partitions of Markovian fragmentation models.
There are notions of fine and coarse spinal partitions. First, remove from Tn the spine
of 1, that is, the path from [n] to {1}. The resulting collection is a disjoint union of
fragmentations of sets Bj , say, that form a partition of {2, . . . , n}, which is called the fine
spinal partition. Second, merge blocks (in the multifurcating case) that were children of
the same spinal vertex; the resulting partition is called the coarse spinal partition. It is
shown that for the splitting rules from the two-parameter family with parameters α and
θ (the Gibbs fragmentations), the fine partition is obtained from the coarse partition by
applying independently for each block of the coarse partition an exchangeable partition
from the two-parameter family of random partitions, with parameters α and α+ θ.
Acknowledgements
This research was supported in part by EPSRC Grant GR/T26368/01 and NSF Grants
DMS-04-05779 and DMS-03-05009. M. Winkel was also supported by the Institute of
Actuaries and the insurance group Aon Limited.
References
[1] Aldous, D. (1991). The continuum random tree. I. Ann. Probab. 19 1–28. MR1085326
[2] Aldous, D. (1996). Probability distributions on cladograms. In Random Discrete Struc-
tures (Minneapolis, MN, 1993). IMA Vol. Math. Appl. 76 1–18. New York: Springer.
MR1395604
[3] Berestycki, N. and Pitman, J. (2007). Gibbs distributions for random partitions generated
by a fragmentation process. J. Stat. Phys. 127 381–418. MR2314353
[4] Bertoin, J. (2001). Homogeneous fragmentation processes. Probab. Theory Related Fields
121 301–318. MR1867425
[5] Duquesne, T. and Le Gall, J.-F. (2002). Random trees, Lévy processes and spatial branching
processes. Astérisque 281 vi+147. MR1954148
[6] Ford, D.J. (2005). Probabilities on cladograms: Introduction to the alpha model. Preprint.
arXiv:math.PR/0511246.
[7] Gnedin, A. and Pitman, J. (2005). Exchangeable Gibbs partitions and Stirling triangles.
Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) 325 (Teor. Predst.
Din. Sist. Komb. i Algoritm. Metody 12) 83–102, 244–245. MR2160320
[8] Gnedin, A. and Pitman, J. (2006). Moments of convex distribution functions and completely
alternating sequences. Preprint. arXiv:math.PR/0602091.
[9] Haas, B., Miermont, G., Pitman, J. and Winkel, M. (2006). Continuum tree asymp-
totics of discrete fragmentations and applications to phylogenetic models. Preprint.
arXiv:math.PR/0604350. Ann. Probab. To appear.
[10] Haas, B., Pitman, J. and Winkel, M. (2007). Spinal partitions and invariance under re-
rooting of continuum random trees. Preprint. arXiv:0705.3602. Ann. Probab. To ap-
pear.
[11] Kerov, S. (2005). Coherent random allocations, and the Ewens–Pitman formula. Zap.
Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) 325 (Teor. Predst.
Din. Sist. Komb. i Algoritm. Metody 12) 127–145, 246. MR2160323
[12] McCullagh, P., Pitman, J. and Winkel, M. (2007). Gibbs fragmentation trees. Preprint.
arXiv:0704.0945.
[13] Miermont, G. (2003). Self-similar fragmentations derived from the stable tree. I. Splitting
at heights. Probab. Theory Related Fields 127 423–454. MR2018924
[14] Pitman, J. (2003). Poisson–Kingman partitions. In Statistics and Science: A Festschrift for
Terry Speed. IMS Lecture Notes Monogr. Ser. 40 1–34. Beachwood, OH: Inst. Math.
Statist. MR2004330
[15] Pitman, J. (2006). Combinatorial Stochastic Processes. Lecture Notes in Math. 1875. Lec-
tures from the 32nd Summer School on Probability Theory held in Saint-Flour, July
7–24, 2002. Berlin: Springer. MR2245368
[16] Schroeder, E. (1870). Vier combinatorische Probleme. Z. f. Math. Phys. 15 361–376.
[17] Semple, C. and Steel, M. (2003). Phylogenetics. Oxford Lecture Series in Mathematics and
Its Applications 24. Oxford Univ. Press. MR2060009
[18] Stanley, R.P. (1999). Enumerative Combinatorics. 2. Cambridge Studies in Advanced Math-
ematics 62. Cambridge Univ. Press. MR1676282
Received April 2007 and revised March 2008
|
0704.0946 | Efficient Simulations of Early Structure Formation and Reionization | Submitted to the ApJ
Preprint typeset using LATEX style emulateapj v. 6/22/04
EFFICIENT SIMULATIONS OF EARLY STRUCTURE FORMATION AND REIONIZATION
Andrei Mesinger & Steven Furlanetto
Yale Center for Astronomy and Astrophysics, Yale University, New Haven, CT 06520
Submitted to the ApJ
ABSTRACT
Detailed theoretical studies of the high-redshift universe, and especially reionization, are generally
forced to rely on time-consuming N-body codes and/or approximate radiative transfer algorithms. We
present a method to construct semi-numerical “simulations”, which can efficiently generate realizations
of halo distributions and ionization maps at high redshifts. Our procedure combines an excursion-
set approach with first-order Lagrangian perturbation theory and operates directly on the linear
density and velocity fields. As such, the achievable dynamic range with our algorithm surpasses
the current practical limit of N-body codes by orders of magnitude. This is particularly significant in
studies of reionization, where the dynamic range is the principal limiting factor because ionized regions
reach scales of tens of comoving Mpc. We test our halo-finding and ionization-mapping algorithms
separately against N-body simulations with radiative transfer and obtain excellent agreement. We
compute the size distributions of ionized and neutral regions in our maps. We find even larger ionized
bubbles than do purely analytic models at the same volume-weighted mean hydrogen neutral fraction,
x̄HI, especially early in reionization. We also generate maps and power spectra of 21-cm brightness
temperature fluctuations, which for the first time include corrections due to gas bulk velocities. We
find that velocities widen the tails of the temperature distributions and increase small-scale power,
though these effects quickly diminish as reionization progresses. We also include some preliminary
results from a simulation run with the largest dynamic range to date: a 250 Mpc box that resolves
halos with masses M ≥ 2.2×10^8 M⊙. We show that accurately modeling the late stages of reionization,
x̄HI ∼< 0.5, requires such large scales. The speed and dynamic range provided by our semi-numerical
approach will be extremely useful in the modeling of early structure formation and reionization.
Subject headings: cosmology: theory – early Universe – galaxies: formation – high-redshift – evolution
1. INTRODUCTION
Accurately modeling the formation of bound struc-
tures is invaluable for understanding any process in
the early universe. Reionization, the epoch when
radiation from early generations of astrophysical objects
managed to ionize the intergalactic medium (IGM), is
particularly sensitive to the distribution of collapsed
structure. Current observations paint a complex pic-
ture of the reionization epoch (Mesinger & Haiman
2004; Wyithe & Loeb 2004; Fan et al. 2006;
Mesinger & Haiman 2006; Malhotra & Rhoads 2004;
Furlanetto et al. 2006c; Malhotra & Rhoads 2006;
Page et al. 2006; Kashikawa et al. 2006; Totani et al.
2006). The next generation of instruments (James
Webb Space Telescope; 21-cm instruments such as the
Low Frequency Array and the Mileura Widefield Array
Low-Frequency Demonstrator; CMB polarization mea-
surements with Planck, etc.), could potentially shed light
on this poorly understood milestone. Unfortunately,
we still do not have accurate models of reionization
with which to interpret these upcoming (and current)
observations.
The main difficulty lies in the enormous dynamic range
required. Ionized regions are expected to reach charac-
teristic sizes of tens of comoving Mpc (Furlanetto et al.
2004c; Furlanetto & Oh 2005), which is over seven or-
ders of magnitude in mass larger than the pertinent cool-
ing mass, corresponding to gas with a temperature of
T ∼ 104 K (e.g. Efstathiou 1992; Thoul & Weinberg
1996; Gnedin 2000b; Shapiro et al. 1994). The required
dynamic range is even larger if smaller “minihalos” be-
low this cooling threshold are important during reion-
ization. Because of the steep mass dependence of halo
abundances, halos with masses close to the cooling mass
could dominate the photon budget. Hence modeling
reionization requires simulation box sizes of hundreds
of megaparsecs on a side, with extremely high resolu-
tion. Attempts to overcome these obstacles have gener-
ally followed the same fundamental and well-trod path
(e.g. Gnedin 2000a; Razoumov et al. 2002; Ciardi et al.
2003; Sokasian et al. 2003; Iliev et al. 2006b; Zahn et al.
2007; Trac & Cen 2006): (1) N-body codes are run to
generate halo distributions; (2) a simple prescription is
used to relate the halo mass to an ionizing efficiency; (3)
approximate methods (generally so-called ray-tracing al-
gorithms) are used to model radiative transfer (RT) on
large scales.
Even with modest halo resolution
(Springel & Hernquist 2003) of tens of dark matter
particles per halo, such schemes are computationally
limited to box sizes of tens of megaparcecs, if they
wish to resolve the likely cooling mass. McQuinn et al.
(2006a) extended the mass resolution of their sim-
ulations by using a merger tree scheme to populate
sub-grid scales with unresolved halos in a stochastic
manner. Such hybrid schemes are useful for extending
the dynamic range, but merger trees require a number
of corrections to achieve consistent mass functions (see,
e.g., Sheth & Pitman 1997; Benson et al. 2005, and Fig.
1 in McQuinn et al. 2006a) and to track individual halos
with redshift. Moreover, although they are perfectly
adequate for many purposes (including studying the
large-scale features of reionization), they prevent one
from taking full advantage of the simulation.
Aside from dynamic range, the other main limiting fac-
tor in all of the above numerical approaches is speed.
Even if the relevant scales can be resolved with N-body codes, as may be the case in the early phase of reionization or with hybrid stochastic schemes, the codes themselves generally take days to run on large supercomputing clusters, with the approximate RT algorithms
consuming a few additional days. The computational
cost of each simulation makes it difficult to explore the
full range of parameter space for reionization, which is
particularly large because we know so little about high-
redshift galaxies.
The computational cost becomes truly prohibitive if
hydrodynamics is included: the largest such simulation
of reionization performed to date spanned only 10h−1
Mpc (Sokasian et al. 2003). Including self-consistent de-
scriptions of galaxy formation – even at the approximate
level currently implemented in lower-redshift cosmolog-
ical simulations (e.g., Springel & Hernquist 2003) – re-
quires hydrodynamics, so N-body simulations of reion-
ization are limited to semi-analytic prescriptions for star
formation, feedback, etc. It is therefore worthwhile to
explore even simpler schemes.
The purpose of this paper is to introduce approximate
but efficient methods for generating halo distributions at
high redshifts as well as for generating the associated ion-
ization maps. We apply an excursion-set approach (e.g.
Bond et al. 1991; Lacey & Cole 1993) to the filtering of
a realization of the linear density field and then adjust
halo locations with first-order perturbation theory. We
can thus generate halo distributions at any given red-
shift, without explicitly including information from any
higher redshifts. This scheme is an updated form of
the “peak-patch” formalism developed and validated by
Bond & Myers (1996a,b), although it was conceived and
implemented completely independently. We then apply
a similar technique to obtain the ionization field from the
halo field. This part is similar to the schemes described
in Zahn et al. (2005, 2007), except applied to our effi-
ciently built halo distributions. As such, our methods
allow us to make general predictions about non-linear
processes, such as structure formation and reionization,
without making use of time-guzzling cosmological sim-
ulations. The speed of our approach also allows us to
explore a larger dynamic range than is possible with cur-
rent cosmological simulations while preserving detailed
spatial information (at least in a statistical sense), un-
like purely analytic models.
This paper is organized as follows. In § 2, we introduce
and test the components of our halo finding algorithm.
In § 3, we introduce and test our HII bubble finding
algorithm. In § 4, we use our semi-numerical scheme
to generate maps and power spectra of expected 21-cm
brightness temperature fluctuations throughout reioniza-
tion. In § 5, we summarize our key findings and present
our conclusions.
Unless stated otherwise, we quote all quantities in co-
moving units. We adopt the background cosmological
parameters (ΩΛ, ΩM, Ωb, n, σ8, H0) = (0.76, 0.24,
0.0407, 1, 0.76, 72 km s−1 Mpc−1), consistent with the
three–year results of the WMAP satellite (Spergel et al.
2006).
2. SEMI-NUMERICAL SIMULATIONS OF HALO
PROPERTIES
In brief, our algorithm generates a linear density field
and identifies halos within it. Because only linear evo-
lution is required, the algorithm is fast and flexible. We
generate 3D Monte-Carlo realizations of the linear den-
sity field on a box with sides of length L = 100 Mpc and
N = 1200^3 grid cells. As such, we are able to take advan-
tage of many pre-existing tools operating on the linear
density field alone. Our method consists of the following
principal steps:
1. creating the linear density and velocity fields
2. filtering halos from the linear density field using
the excursion-set formalism
3. adjusting halo locations using their linear-order
displacements
Step (1) only needs to be done once for each realization,
since it is independent of redshift. As mentioned above,
steps (2) and (3) need only be performed on redshifts of
interest, i.e. since our output at redshift z is independent
of any outputs at higher redshifts, there is no need for
our code to “run down” to z, as is the case for N-body
codes.
Our algorithm is an updated and simplified version of
the “peak-patch” algorithm of Bond & Myers (1996a);
we refer the interested reader there for more detailed
explanations of some steps. A simpler version has also
been used by Scannapieco et al. (2002) to study metal
enrichment at high redshifts.
We perform our semi-numerical simulations on a sin-
gle desktop Mac Pro with two dual-core 3.00 GHz Quad
Xeon processors and 16 GB of RAM. For N = 1200^3, step (1) takes ∼ 1 hour. For a given redshift in our range of interest, specifically for z = 8.75, steps (2) and (3) take ∼ 2.5 hours. To achieve comparable halo mass resolution (including halos with M ≳ 10^7 M⊙) with a minimum of ∼ 500 particles per halo (Springel & Hernquist 2003), N-body codes would require a prohibitively large number of particles, N ∼ 10^12! Below we describe in detail the
components of our model.
2.1. The Linear Density Field
Our linear density field is generated in much the same
way as it is for N-body codes. We briefly outline the
procedure here.
The density field of the universe, δ(x) ≡ ρ(x)/ρ̄ − 1,
in the linear regime1 is well-represented as a Gaussian
random field, whose statistical properties are fully de-
fined by its power spectrum, σ²(k) ≡ 〈|δ(k)|²〉. Here, δ(k) is the Fourier transform of δ(x), and the standard assumption of isotropy implies that σ²(k) depends only on the magnitude k = |k|, while homogeneity implies that there are no density fluctuations with wavelengths larger than the box size L = V^{1/3}. We use the following, standard (e.g. Bagla & Padmanabhan 1997; Sirko 2005) Fourier transform conventions:
$$\delta(\mathbf{k}) = \int d^3x\; \delta(\mathbf{x})\, e^{-i\mathbf{k}\cdot\mathbf{x}}, \tag{1}$$
1 In linear theory, density perturbations evolve in redshift as δ(z)
= δ(0)D(z), where D(z) is the linear growth factor normalized
so that D(0) = 1 (e.g., Liddle et al. 1996). Unless the redshift
dependence is noted explicitly, from this point forward we will work
with quantities linearly-extrapolated to z = 0.
with the inverse transform being
$$\delta(\mathbf{x}) = \frac{1}{V}\sum_{\mathbf{k}} \delta(\mathbf{k})\, e^{i\mathbf{k}\cdot\mathbf{x}}. \tag{2}$$
The discrete simulation box only permits a finite set
of wavenumbers: k = ∆k(i, j, k), where ∆k = 2π/L and
i, j, k are integers in the range (−N/2, N/2]. For each independent wavenumber,² we assign
$$\delta(\mathbf{k}) = \sqrt{\frac{\sigma^2(k)}{2}}\,\bigl(a_{\mathbf{k}} + i\,b_{\mathbf{k}}\bigr), \tag{3}$$
where ak and bk are drawn from a zero-mean Gaussian
distribution with unit variance. We use the power spec-
trum from Eisenstein & Hu (1999). Then the real-space
density field, δ(x), is obtained by performing an inverse
Fourier transform on δ(k).
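A schematic version of this step might look as follows (a sketch under simplifying assumptions, not the authors' code: the power spectrum is a placeholder power law rather than the Eisenstein & Hu (1999) spectrum, and the grid is much smaller than 1200^3).

```python
# Schematic sketch: draw delta(k) per eq. (3) on an FFT grid and inverse-transform to delta(x).
import numpy as np

def make_linear_density(n_cells=128, box_mpc=100.0, seed=1):
    rng = np.random.default_rng(seed)
    dk = 2.0 * np.pi / box_mpc
    k1d = np.fft.fftfreq(n_cells, d=1.0 / n_cells) * dk        # full axis of wavenumbers
    k1d_half = np.fft.rfftfreq(n_cells, d=1.0 / n_cells) * dk  # last axis of the rfft layout
    kmag = np.sqrt(k1d[:, None, None]**2 + k1d[None, :, None]**2 + k1d_half[None, None, :]**2)

    pk = np.zeros_like(kmag)
    pk[kmag > 0] = kmag[kmag > 0] ** -2.0                      # placeholder power-law P(k)

    volume = box_mpc ** 3
    sigma2_k = volume * pk                                     # sigma^2(k) = <|delta(k)|^2>
    a = rng.standard_normal(kmag.shape)
    b = rng.standard_normal(kmag.shape)
    delta_k = np.sqrt(sigma2_k / 2.0) * (a + 1j * b)           # eq. (3)

    # irfftn enforces the Hermitian symmetry needed for a real field; rescale to the
    # delta(x) = (1/V) sum_k delta(k) exp(i k.x) convention of eq. (2)
    return np.fft.irfftn(delta_k, s=(n_cells,) * 3) * n_cells ** 3 / volume

delta = make_linear_density()
print(delta.shape, float(delta.std()))
```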
2.2. The Linear Velocity Field
We construct a linear velocity field corresponding to
our linear density field using the standard Zel’Dovich
approximation (c.f. Zel’Dovich 1970; Efstathiou et al.
1985; Sirko 2005):
x1=x+ ψ(x) , (4)
v≡ ẋ1 = ψ̇(x) , (5)
δ(x)=−∇·[(1 + δ(x))ψ(x)]
≈−∇ · ψ(x) , (6)
where x and x1 denote initial (Lagrangian) and updated
(Eulerian) coordinates, respectively, ψ(x) is the displace-
ment vector, and the last equation follows from the conti-
nuity criterion, with the final approximation using linear-
ity, δ(x) ≪ 1. We note again that all units are comoving,
unless stated otherwise. From the above, one can relate
the velocity mode in our simulation at redshift z to the
linear density field:
$$\mathbf{v}(\mathbf{k}, z) = \frac{i\mathbf{k}}{k^2}\,\dot{D}(z)\,\delta(\mathbf{k}), \tag{7}$$
where for computational convenience differentiation is
performed in k-space.
Another convenient property of this first-order
Zel’Dovich approximation is that the velocity field can
be decomposed into purely spatial, vx(x), and purely
temporal, vz(z), components:
v(x, z) = vz(z)vx(x) , (8)
where vz(z) = Ḋ(z) and vx(x) is the inverse Fourier
transform of ikδ(k)/k2. This is computationally conve-
nient, as we only need to compute the vx(x) field once
in order to be able to scale it for all redshifts, and it
also allows us to write a simple, exact expression for the
integrated linear displacement field, Ψ. When eq. (8) is
integrated from some large initial z0 [D(z0) ≪ D(z)], the
total displacement is just
Ψ(x, z)= [D(z)−D(z0)]vx(x)
≈D(z)vx(x) (9)
2 Since δ(x) is real-valued, only half of the k-modes defined
above are independent. The other half are determined by the usual
Hermitian constraints for real-valued functions (see for example
Hockney & Eastwood 1988; Bagla & Padmanabhan 1997).
We make use of this displacement field to adjust the halo
locations obtained by our filtering procedure (see § 2.4),
as well as to adjust the linear density field for our 21-cm
temperature maps (see § 4).
In principle, one could obtain non-linear velocities by
mapping the linear overdensity to a corresponding non-
linear overdensity obtained from a spherical collapse
model (Mo & White 1996), and then taking the time
derivative of the non-linear overdensity. However, due
to the large spread in the dynamical times of the non-
linear density field, accurately capturing the time evolu-
tion is non-trivial. Furthermore, although the non-linear
density field implicitly captures the velocities of collaps-
ing gas, mapping each pixel’s linear density to its non-
linear counterpart independently of other nearby pixels
does not properly preserve correlations on larger scales.
Hence, we choose to use the linear density field directly
in estimating velocities. For the purposes of studying the
ionization field, we are further justified in this procedure
because our final ionization maps are smoothed on large
scales, on which most pixels are still in the linear regime
at the high redshifts of interests. It is possible to include
higher-order contributions to the Zel’Dovich approxima-
tion where necessary (e.g., Scoccimarro & Sheth 2002).
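A compact sketch of this step (again ours, not the authors' code) computes the redshift-independent displacement template vx(x) as the inverse FFT of i k δ(k)/k², so that Ψ(x, z) ≈ D(z) vx(x) as in eq. (9).

```python
# Schematic sketch of the Zel'Dovich displacement template v_x(x) from a gridded delta(x).
import numpy as np

def zeldovich_template(delta_x, box_mpc):
    n = delta_x.shape[0]
    dk = 2.0 * np.pi / box_mpc
    k1d = np.fft.fftfreq(n, d=1.0 / n) * dk
    k1d_half = np.fft.rfftfreq(n, d=1.0 / n) * dk
    kx = k1d[:, None, None]
    ky = k1d[None, :, None]
    kz = k1d_half[None, None, :]
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                     # avoid division by zero; the k=0 mode carries no displacement

    delta_k = np.fft.rfftn(delta_x)
    psi = []
    for k_i in (kx, ky, kz):
        comp_k = 1j * k_i * delta_k / k2  # i k delta(k) / k^2, component by component
        comp_k[0, 0, 0] = 0.0
        psi.append(np.fft.irfftn(comp_k, s=delta_x.shape))
    return np.stack(psi, axis=0)          # shape (3, n, n, n), in the same comoving length units

# usage: the displacement at redshift z is D(z) * zeldovich_template(delta, 100.0)
```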
2.3. Halo Filtering
In standard Press-Schechter theory (PS; see e.g.,
Press & Schechter 1974; Bond et al. 1991; Lacey & Cole
1993), the halo mass function can be written as
$$\frac{\partial n(>M, z)}{\partial M} = \sqrt{\frac{2}{\pi}}\;\frac{\bar\rho}{M}\;\frac{\delta_c(z)}{\sigma^2(M)}\;\frac{\partial\sigma(M)}{\partial M}\;\exp\!\left[-\frac{\delta_c^2(z)}{2\sigma^2(M)}\right], \tag{10}$$
where n(> M, z) is the mean number density of halos
with total mass greater than M , ρ̄ = ΩMρcrit is the
mean background matter density, δc(z) ∼ 1.68/D(z) is
the scale-free critical over–density evaluated in the case
of spherically symmetric collapse (Peebles 1980), and
$$\sigma^2(M) = \frac{1}{V^2}\sum_{\mathbf{k}} \sigma^2(k)\, W^2(k, M), \tag{11}$$
is the squared r.m.s. fluctuation in the mass enclosed
within a region described by the filter function,W (k,M),
normalized to integrate to unity.
Although the PS mass function in eq. (10) is in fair
agreement with simulations, especially for halos near the
characteristic mass, at low redshifts it underestimates
the number of high–mass halos and overestimates the
number of low–mass halos when compared with large nu-
merical simulations (e.g. Jenkins et al. 2001). A mod-
ified expression shown to fit low-redshift simulation re-
sults more accurately (to within ∼ 10%) was obtained
by Sheth & Tormen (1999):
$$\frac{\partial n(>M, z)}{\partial M} = -\frac{\bar\rho}{M}\;\frac{\partial[\ln\sigma(M)]}{\partial M}\;A\,\sqrt{\frac{2}{\pi}}\;\bigl[1+\hat\nu^{-2p}\bigr]\;\hat\nu\, e^{-\hat\nu^2/2}, \tag{12}$$
where ν̂ ≡ √a δc(z)/σ(M), and a, p, and A are fitting
parameters. Sheth et al. (2001) derive this form of the
mass function by including shear and ellipticity in model-
ing non–linear collapse, effectively changing the scale-free
critical over–density δc(z), into a function of filter scale,
$$\delta_c(M, z) = \sqrt{a}\,\delta_c(z)\left[1 + b\left(\frac{\sigma^2(M)}{a\,\delta_c^2(z)}\right)^{c}\,\right]. \tag{13}$$
Here b and c are additional fitting parameters (a is
the same as in eq. 12). For the constants above, we
adopt the recent values obtained by Jenkins et al. (2001),
who studied a large range in redshift and mass: a =
0.73, A = 0.353, p = 0.175, b = 0.34, c = 0.81. We
note, however, that the situation at high redshifts is
less clear: studies disagree on the relative accuracy of
the Press & Schechter (1974) and Jenkins et al. (2001)
forms (Reed et al. 2003; Iliev et al. 2006b; Zahn et al.
2007). Our algorithm can be trivially modified to ac-
commodate other choices for the mass function; fortu-
nately, for the purposes of the ionization maps (see §3),
the choice of mass function makes very little difference
because all have a similar dependence on the local den-
sity (Furlanetto et al. 2006a).
The mass functions in equations (10) and (12) can be
obtained by the standard excursion set random walk pro-
cedure. The approach is to smooth the density field
around a point, x, on successively smaller scales start-
ing with M → ∞ [where σ2(M) → 0] and to identify
the point as belonging to the halo with the largest M
such that δ(x,M) > δc(M, z). If W
2(k,M) is chosen to
have a sharp cut-off, this procedure amounts to a random
walk of δ(x,M) along the mass axis, since the change in
δ(x,M) as the scale is shrunk is independent of δ(x,M)
for a top-hat filter in k-space (see eq. 11).
We perform this procedure on our realization of the
linear density field by filtering the field using a real-
space top-hat filter3, starting on scales comparable to
the box size and going down to grid cell scales, in log-
arithmic steps of width ∆M/M = 1.2.4 At each filter
scale, we use the scale-dependent barrier in eq. (13)
to mark a collapsed halo if δ(x,M) > δc(M, z). Filter
scales large enough that collapsed structure is extremely
unlikely, δc(M, z) > 7σ(M), are skipped (Mesinger et al.
2005). Since this procedure treats each cell as the center
of a spherical filter, neighboring pixels are not properly
placed in the same halo. Because of this, we discount
halos which overlap with previously marked halos.
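In outline, the filtering step can be sketched as follows (a condensed illustration, not the authors' implementation: the overlap exclusion is omitted and delta_c_of_M is a stand-in for the barrier of eq. 13).

```python
# Condensed sketch of excursion-set halo filtering on a gridded linear density field.
import numpy as np

def tophat_smooth(delta_x, radius_cells):
    n = delta_x.shape[0]
    k1d = np.fft.fftfreq(n) * 2.0 * np.pi       # wavenumber in cell units
    k1d_half = np.fft.rfftfreq(n) * 2.0 * np.pi
    kr = radius_cells * np.sqrt(k1d[:, None, None]**2 + k1d[None, :, None]**2
                                + k1d_half[None, None, :]**2)
    w = np.ones_like(kr)
    nz = kr > 0
    w[nz] = 3.0 * (np.sin(kr[nz]) - kr[nz] * np.cos(kr[nz])) / kr[nz]**3   # real-space top hat
    return np.fft.irfftn(np.fft.rfftn(delta_x) * w, s=delta_x.shape)

def filter_halos(delta_x, radii_cells, delta_c_of_M):
    halo_mass = np.zeros_like(delta_x)           # 0 means "not yet inside any halo"
    for r in sorted(radii_cells, reverse=True):  # largest filter scale first
        mass = 4.0 / 3.0 * np.pi * r**3          # in units of the mean mass per cell
        smoothed = tophat_smooth(delta_x, r)
        newly = (smoothed > delta_c_of_M(mass)) & (halo_mass == 0)
        halo_mass[newly] = mass                  # record the largest barrier-crossing scale
    return halo_mass

# usage with a placeholder constant barrier: filter_halos(delta, [32, 16, 8, 4, 2], lambda m: 1.686)
```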
As mentioned above, this algorithm is similar
to the “peak-patch” approach first introduced by
Bond & Myers (1996a). The primary differences are:
3 There is a slight swindle in the current application of this for-
malism. The filter function is assumed to be a top-hat in k-space
in order to facilitate the analytic random walk approach described
above. However, when the power spectrum is normalized to ob-
servations [i.e. σ(R = 8 h^{−1} Mpc) = σ_8], the filter that is used to
define the mass M corresponding to R is a top-hat filter in real
space. Nevertheless, it has been shown that the mass function is
not very sensitive to this filter choice (Bond et al. 1991).
4 We note that Mesinger et al. (2005) required a much smaller
step size at these redshifts, ∆M/M ∼ 0.1, in order to produce ac-
curate mass functions using 1D Monte-Carlo random walks. How-
ever, here we find that we can reproduce accurate mass functions
with a larger step size, since in our 3D realization of the density
field, “overstepping” δc(M, z) due to a large filter step size can be
compensated with a small offset in the filter center, i.e. by cen-
tering the filter in a neighboring cell. This is the case since over-
stepping δc(M, z) means that some dense matter between the two
filter scales was “missed”. In a 1D Monte-Carlo random walk this
matter is unrecoverable; however, in a 3D realization of the density
field, the missed matter will be picked up by a filter centered on a
neighboring cell.
(1) we use the Jenkins et al. (2001) barrier to identify
halos (rather than calculating the strain tensor to ac-
count for ellipsoidal collapse), (2) we do not separately
identify peaks in the density field (this step is not re-
quired given modern computing power), and (3) we use
the “full exclusion” criterion for preventing halo overlap.
Bond & Myers (1996b) found that a “binary exclusion”
method in which pairs of overlapping halos are compared
and eliminated was somewhat more accurate. However,
at the high redshifts of interest to us, halo overlap is rare,
and we are primarily interested in the large-scale prop-
erties of the halo field, which are relatively insensitive to
the details of the overlap criterion.
We also note that our halo finder is similar
in spirit to the PTHalos algorithm introduced by
Scoccimarro & Sheth (2002) to generate mock galaxy
surveys at low redshifts. There are two key differences.
First, at present we use only first-order perturbation the-
ory to displace the particles.5 This limits us to higher
redshifts, where velocities are smaller. However, our al-
gorithm does not require particles in order to resolve ha-
los and hence can accommodate a considerably larger
dynamic range than PTHalos.
Mass functions resulting from this procedure are shown
as points in Figure 1, with error bars indicating 1-σ Pois-
son uncertainties and bin widths spanning our mass filter
steps. Dotted red curves denote PS mass functions gen-
erated by eq. (10); short–dashed blue curves denote ex-
tended PS conditional mass functions generated by eq.
(10) but also taking into account the absence of den-
sity modes longer than the box size; long–dashed green
curves denote mass functions generated using the Sheth-
Tormen correction in eq. (12). The upper (lower) set
of curves and points correspond to redshifts of z=6.5
(z=10). The dotted and short–dashed curves overlap at
these redshifts due to our large box size (L = 100 Mpc),
so we are immune to the finite box effects pointed out by
Barkana & Loeb (2004).
Fig. 1 shows that we obtain accurate mass functions
for M ≳ 10^8 M⊙. Our procedure seems to underpredict the abundance of halos with masses approaching the cell size, Mcell ∼ 10^7 M⊙. However, as the Jeans mass corresponding to a gas temperature of ∼ 10^4 K is MJ(z ∼ 8) ∼ 10^8 M⊙, in subsequent calculations, we only use halos with masses greater than Mmin = 10^8 M⊙. Using this Mmin, we match the collapse fraction obtained by integrating eq. (12) to better than ∼ 10%.
This mass cutoff corresponds to the minimum tem-
perature required for efficient atomic hydrogen cooling
and would be the pertinent mass scale if: (1) the H2
cooling channel is suppressed, e.g. due to a perva-
sive Lyman-Werner (LW) background, and if (2) photo-
ionization feedback is ineffective at suppressing gas cool-
ing and collapse onto higher mass halos. While feed-
back at high redshifts remains poorly-constrained, both
of these assumptions seem reasonable during the mid-
dle stages of reionization on which we focus. A dis-
sociating LW background is likely to have established
5 We note that a similar scheme to ours has been independently
created by O. Zahn (private communication). This scheme uses
a simple Press-Schechter barrier but adjusts halo locations follow-
ing second-order Lagrangian perturbation theory. However, he has
found that the second-order corrections make very little difference
to the map.
Fig. 1.— Mass functions generated from our halo filtering pro-
cedure discussed in §2.3 are shown as points. Dotted red curves
denote PS mass functions generated by eq. (10); short–dashed
blue curves denote extended PS conditional mass functions gener-
ated by eq. (10) but also taking into account the absence of density
modes longer than the box size; long–dashed green curves denote
mass functions generated using the Sheth-Tormen correction in eq.
(12). The upper (lower) set of curves and points correspond to
redshifts of z=6.5 (z=10).
itself well before the universe is significantly ionized
(Haiman et al. 1997). Model-dependent empirical evi-
dence supporting the suppression of star formation in
smaller mass halos, M ∼< Mmin, can also be gleaned
from WMAP data (Haiman & Bryan 2006). Further-
more, although early work suggested that an ionizing
background could partially suppress star formation in
halos with virial temperatures of Tvir ≲ 3.6 × 10^4 K (M ≲ 2×10^9 M⊙) (Thoul & Weinberg 1996), more recent studies (Kitayama & Ikeuchi 2000; Dijkstra et al. 2004) find that at high redshifts (z ≳ 3), self-shielding and the increased cooling efficiency could be strong countering effects for halos with virial temperatures Tvir > 10^4 K.
We postpone a more detailed analysis of the reionization
footprint left by photo-ionization feedback to a future
work.
2.4. Adjusting Halo Locations
Once the halo field is obtained, we use the displace-
ment field obtained through eq. (9) to adjust the halo
locations at each redshift. This corrects for the enhanced
halo bias in Eulerian space with respect to our filtering,
which is done in Lagrangian space (i.e. using the initial
locations at large z). For computational convenience, we
smooth the 1200^3 velocity field onto a coarser-grained 200^3 grid before adjusting halo locations. The choice of
resolution, where each cell is (100 Mpc)/200 = 0.5 Mpc
on a side, is somewhat arbitrary here, and we have veri-
fied that our halo and 21-cm power spectra are unaffected
Fig. 2.— Halo power spectra at z = 8.7, with L = 20 h−1Mpc
and cosmological parameters taken from McQuinn et al. (2006a).
The solid red curve is the halo power spectrum from an N-body
simulation obtained from McQuinn et al. (2006a) (c.f. the bottom
panel of their Fig. 2). The short-dashed green and the long-dashed
violet curves are obtained from our filtering procedure with and
without the halo location adjustments, respectively.
by this choice. We also note that in linear theory, the
mean velocity dispersion inside a (0.5 Mpc)3 sphere with
mean density at z = 10 is a factor of ∼10 lower than the
r.m.s. bulk velocity of such regions, so smoothing over
smaller scale velocities appears reasonable. Furthermore,
we keep in mind that our “endproducts” in this work are
ionization and 21-cm temperature fluctuation maps, for
which such “low-resolution” is more than adequate (com-
pare, e.g., to N-body simulations of reionization, which
typically have similar cell sizes for the radiative transfer
component).
In Figure 2, we plot the halo power spectrum, de-
fined as ∆hh(k, z) = k³/(2π²V) 〈|δhh(k, z)|²〉_k, where
δhh(x, z) ≡ Mcoll(x, z)/〈Mcoll(z)〉 − 1 is the collapsed
mass field.6 The solid red curve is the halo power spec-
trum from a 20 h−1 Mpc N-body simulation at z = 8.7
obtained from McQuinn et al. (2006a) (c.f. the bottom
panel of their Figure 2). The short-dashed green and the
long-dashed violet curves are obtained from our filtering
procedure (matching the assumed cosmology) with and
without the halo location adjustments, respectively. We
note that ignoring the cumulative motions of halos re-
sults in an underestimate of the power of long-wavelength
modes of the halo field by a factor of ∼ 2 in this case.
The average Eulerian bias of these halos is ∼ 2, about
half of which comes from the correction from Lagrangian
to Eulerian coordinates.
After the halo locations are adjusted according to lin-
ear theory, our halo power spectrum agrees almost per-
6 We use the collapsed mass field, rather than the individual
galaxies, because we calculate the power from the smoothed cells.
fectly with the simulation. By design our procedure in-
cludes Poisson fluctuations in the halo number counts,
which dominate the power spectrum at k ∼> 5 h/Mpc
and are lost in purely analytic estimates (McQuinn et al.
2006a). We also note that both the halo mass func-
tions and power spectra are statistical tests and hence
the agreement shown here does not imply that our halo
field has a one-to-one mapping with an N-body halo
field sourced by identical initial conditions. Indeed,
Gelb & Bertschinger (1994) showed that those particles
located nearest initial linear density peaks are not nec-
essarily incorporated into massive galaxies. The “peak
particle” algorithm is less robust than our smoothing
technique, but we still do not expect to recover halo
masses or locations precisely. We plan on doing a “one-
on-one” comparison between halo fields obtained from
our halo finder to those obtained from N-body codes
in a future work. However, it is certainly encouraging
that the very similar “peak-patch” group finding formal-
ism of Bond & Myers (1996a) did very well when com-
pared “one-on-one” to N-body codes at large mass scales
(Bond & Myers 1996b).
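For reference, a dimensionless power spectrum of the form used above can be estimated from any gridded field with a short routine like the following (a rough sketch, not the authors' code; the rfft layout keeps only half of the conjugate-symmetric modes, which is adequate for this purpose).

```python
# Rough sketch: estimate Delta(k) = k^3 <|delta(k)|^2> / (2 pi^2 V) by binning in shells of k.
import numpy as np

def dimensionless_power(field, box_mpc, n_bins=20):
    n = field.shape[0]
    volume = box_mpc ** 3
    cell_volume = volume / n ** 3
    delta_k = np.fft.rfftn(field) * cell_volume          # approximates the integral in eq. (1)
    dk = 2.0 * np.pi / box_mpc
    k1d = np.fft.fftfreq(n, d=1.0 / n) * dk
    k1d_half = np.fft.rfftfreq(n, d=1.0 / n) * dk
    kmag = np.sqrt(k1d[:, None, None]**2 + k1d[None, :, None]**2 + k1d_half[None, None, :]**2)

    bins = np.logspace(np.log10(dk), np.log10(kmag.max()), n_bins + 1)
    which = np.digitize(kmag.ravel(), bins)
    power = np.abs(delta_k.ravel()) ** 2
    k_centers, delta2 = [], []
    for b in range(1, n_bins + 1):
        sel = which == b
        if not sel.any():
            continue
        k_mean = kmag.ravel()[sel].mean()
        k_centers.append(k_mean)
        delta2.append(k_mean ** 3 * power[sel].mean() / (2.0 * np.pi ** 2 * volume))
    return np.array(k_centers), np.array(delta2)
```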
In Figure 3 we show slices through the halo field from
our simulation box at z = 8.25, generated by the above
procedure, again with (right panel) and without (left
panel) the halo location adjustments. In the figure, the
halo field is mapped to a lower resolution 4003 grid for
viewing purposes. Each slice is 100 Mpc on a side and
0.25 Mpc deep. Collapsed halos are shown in blue. Vi-
sually, it is obvious that peculiar motions increase halo
clustering.
3. GENERATING THE IONIZATION FIELD
Once the halo field is generated as described above,
we can perform a similar filtering procedure (also us-
ing the excursion-set formalism) to obtain the ionization
field (similar methods have been discussed by Zahn et al.
2005, 2007). The time required for this final step is a
function of x̄HI, with large x̄HI requiring less time than
small x̄HI. Specifically, at x̄HI ∼ 0.5 this step takes ∼ 15
minutes to generate a 2003 ionization box on our work-
station.
There are two main differences between the halo fil-
tering and the HII bubble filtering procedures: (1) HII
bubbles are allowed to overlap, and (2) the excursion
set barrier (the criterion for ionization) becomes, as per
Furlanetto et al. (2004a):
fcoll(x1, M, z) ≥ ζ^{−1}, (14)
where ζ is some efficiency parameter and fcoll(x1,M, z)
is the fraction of mass residing in collapsed halos inside
a sphere of mass M = (4/3)πR³ρ̄[1 + 〈δnl(x1, z)〉R], with
mean physical overdensity 〈δnl(x1, z)〉R, centered on Eu-
lerian coordinate x1, at redshift z.
Equation (14) is only an approximate model and makes
several simplifying assumptions about reionization. In
particular, it assumes a constant ionizing efficiency per
halo and ignores spatially-dependent recombinations and
radiative feedback effects. It can easily be modified to
include these effects (e.g., Furlanetto et al. 2004b, 2006a;
Furlanetto & Oh 2005), and we plan to do so in future
work. Here we present the simplest case in order to best
match current RT numerical simulations.
This prescription models the ionization field as a two-
phase medium, containing fully-ionized regions (which
we refer to as HII bubbles) and fully-neutral regions.
This is obviously much less information than can be
gleaned from a full RT simulation, which precisely tracks
the ionized fraction. However, HII bubbles are typi-
cally highly-ionized during reionization, and for many
purposes (such as for 21 cm maps) this two-phase ap-
proximation is perfectly adequate.
In order to “find” the HII bubbles at each redshift we
smooth the halo field onto a 200^3 grid. Then we filter
the halo field using a real-space top-hat filter, starting on
scales comparable to the box size and decreasing to grid
cell scales in logarithmic steps of width ∆M/M = 0.33.
At each filter scale, we use the criterion in eq. (14)
to check whether the region is ionized. If so, we flag
all pixels inside that region as ionized. We do this for
all pixels and scales, regardless of whether the resulting
bubble would overlap with other bubbles. Note, there-
fore, that the nominal ionizing efficiency ζ that we use
as an input parameter does not equal (1 − x̄HI)/fcoll.
They typically differ by . 30%, with ζfcoll < 1 − x̄HI
early in reionization and ζfcoll > 1− x̄HI late in reioniza-
tion). Unfortunately, we thus cannot use our algorithm
to self-consistently predict the time evolution of the ion-
ized fraction (rather, that must be prescribed from some
other model). Of course, the same is true for N-body
simulations, because the evolution of the ionized fraction
depends on the evolving ionization efficiency of galaxies
and cannot be self-consistently included in any present-
day simulation.
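The bubble-finding step itself can be sketched as follows (a schematic illustration, not the authors' code; it assumes scipy is available, takes the gridded collapsed-mass and total-mass fields as inputs, and applies eq. 14 with filter spheres allowed to overlap).

```python
# Schematic sketch of the HII bubble criterion of eq. (14) on a periodic grid.
import numpy as np
from scipy import ndimage

def spherical_structure(radius_cells):
    r = int(np.ceil(radius_cells))
    grid = np.indices((2 * r + 1,) * 3) - r
    return (grid ** 2).sum(axis=0) <= radius_cells ** 2

def ionization_field(collapsed_mass, total_mass, radii_cells, zeta):
    collapsed_mass = np.asarray(collapsed_mass, dtype=float)
    total_mass = np.asarray(total_mass, dtype=float)
    ionized = np.zeros(collapsed_mass.shape, dtype=bool)
    for r in sorted(radii_cells, reverse=True):              # large filter scales first
        struct = spherical_structure(r)
        norm = struct.sum()
        coll_r = ndimage.convolve(collapsed_mass, struct / norm, mode="wrap")
        mass_r = ndimage.convolve(total_mass, struct / norm, mode="wrap")
        f_coll = coll_r / np.maximum(mass_r, 1e-30)
        centers = f_coll >= 1.0 / zeta                        # eq. (14), averaged over the sphere
        if centers.any():
            # flag every pixel inside the filter sphere around each ionized center
            ionized |= ndimage.binary_dilation(centers, structure=struct)
    return ionized
```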
In order to obtain the density field used in eq. (14),
δnl(x1, z), we use the Zel’Dovich approximation on our
linear density field, δ(x), in much the same manner as
we did to adjust our halo field in § 2.4. Starting at some
arbitrarily large initial redshift (we use z0 = 50), we
discretize our high-resolution 12003 field into “particles”
whose mass equals that in each grid cell. We then use
the displacement field (eq. 9) to move the particles to
new locations at each redshift. This resulting mass field
is then smoothed onto our lower resolution 2003 box to
obtain δnl(x1, z). We then recalculate the velocity field
(§ 2.2) using the new densities.
Zahn et al. (2007) showed that a very similar HII bub-
ble filtering procedure performed on an N-body halo field
was able to reproduce the ionization topology obtained
through a ray-tracing RT algorithm fairly well. Their
algorithm differs from ours in two ways. First, they used
a slightly different barrier definition; however, this dif-
ference has only a small impact on the ionization topol-
ogy.7 More importantly, for each filter scale at each pixel,
Zahn et al. (2007) flag only the center pixel as ionized if
the barrier is crossed, whereas we flag the entire filtered
sphere.
In order to test our bubble filtering algorithm, we ex-
ecute it on the same N-body halo field at z = 6.89 as
was used to generate the bottom panels of Fig. 3 in
Zahn et al. (2007). We compare analogous ionization
maps created using various algorithms in Figure 4. All
7 Specifically, in order to match the physics of their simulations
better, they required
dt fcoll > ζ
−1. However, the density mod-
ulation ends up nearly identical to our model, so the topology is
almost unchanged.
Fig. 3.— Slices through the halo field from our simulation box at z = 8.25. The halo field is generated on a 12003 grid and then mapped
to a 4003 grid for viewing purposes. Each slice is 100 Mpc on a side and 0.25 Mpc deep. Collapsed halos are shown in blue. The left panel
shows the halo field directly filtered in Lagrangian space; the right panel maps the field to Eulerian space according to linear theory (see
§ 2.4 and eq. 9). The right panel corresponds to the bottom-left (x̄HI = 0.53) ionization field in Figure 5.
slices are 93.7 Mpc on a side and 0.37 Mpc deep, with ζ
adjusted so that the mean neutral fraction in the box is
x̄HI = 0.49. Ionized regions are shown as white. The left-
most and right-most panels are taken from Zahn et al.
(2007). The left-most panel was created by perform-
ing their bubble filtering procedure directly on the lin-
ear density field (without explicitly identifying halos).
The second panel was created by performing their bub-
ble filtering procedure on their N-body halo field, but
with the slightly different barrier definition in eq. (14).
The third panel was created by performing our bubble
filtering procedure on the same N-body halo field, but ig-
noring density fluctuations outside of halos (i.e. setting
〈δnl(x1, z)〉R = 0), which we have verified give nearly
identical bubble maps as our full procedure (so long as
x̄HI is fixed). The right-most panel was created using
an approximate RT algorithm (Abel & Wandelt 2002;
Sokasian et al. 2001, 2003) on the same halo field.
It is immediately obvious from Fig. 4 that all of the
approximate maps (first three panels) reproduce the RT
map (right-most panel) fairly well. Even the HII bub-
ble filtering performed directly on the linear density field
(left-most panel) performs well, which is encouraging, as
that is the starting point for our semi-numerical proce-
dure and we only improve on this scheme.
Figure 4 shows that our HII bubble filtering algorithm
is an excellent approximation to RT. The similar algo-
rithm proposed by Zahn et al. (2007) also performs well.
In comparison, our algorithm produces somewhat more
“bubbly” maps but appears to better capture the connec-
tivity of HII regions. Both are an obvious improvement
on directly filtering the linear density field.
Of course, in our full algorithm we identify halos from
the linear density field (rather than from simulations), so
our method consumes comparable processing time to the
one used to generate the leftmost panel in Figure 4, once
the halos have been identified. Moreover, we are able
to capture the “stochastic” component of the halo bias
that causes the relatively large differences between the
leftmost panel and the full RT simulation. That is, the
algorithm used to generate the leftmost panel uses the
large-scale linear density field to predict the distribution
of halos (Zahn et al. 2005, 2007). In reality, the rela-
tion is not deterministic because of random fluctuations
in the small-scale modes comprising each region. This
leads to nearly Poisson scatter in the halo number densi-
ties (Sheth & Lemson 1999; Casas-Miranda et al. 2002)
that can substantially modify the bubble size distribution
whenever sources are rare, particularly early in reion-
ization (Furlanetto et al. 2006a). By directly sampling
the small-scale modes to build the halo distribution, we
better recover this scatter (at least statistically, as illus-
trated by Fig. 2). Another way to include this scatter
is by directly sampling halos from an N-body simulation
(as in Zahn et al. 2007, or the second panel of Fig. 4),
although that obviously requires much more computing
power.
3.1. Ionization Maps
Now that we have demonstrated in turn the success of
our halo and bubble filtering procedures, we present the
resulting ionization maps when the two are combined.
In Figure 5, we show 100 Mpc × 100 Mpc × 0.5 Mpc
slices through our 200³ ionization field at z = 10, 9,
8.25, 7.25 (left to right across rows). With the assump-
tion of ζ = 15.1, these redshifts correspond to x̄HI = 0.89,
0.74, 0.53, 0.18, respectively. As has been pointed out by
Furlanetto et al. (2004c), the neutral fraction is the more
relevant descriptor; bubble morphologies at a constant
x̄HI vary little with redshift (see also McQuinn et al.
Fig. 4.— Slices from the ionization field at z = 6.89 created using different algorithms. All slices are 93.7 Mpc on a side and 0.37 Mpc
deep, with the mean neutral fraction in the box being x̄HI = 0.49. Ionized regions are shown as white. The left-most panel was created
by performing the bubble filtering procedure of Zahn et al. (2007) directly on the linear density field. The second panel was created by
performing their bubble filtering procedure on their N-body halo field, but with the slightly different barrier definition in eq. (14). The
third panel was created by performing our bubble filtering procedure described in § 3 on the same N-body halo field. The right-most panel
(from Zahn et al. 2007) was created using an approximate RT algorithm on the same halo field.
2006a). The bottom-left panel corresponds to the halo
field in the top-right panel of Fig. 3, generated on a
high-resolution 1200³ grid.
To quantify the ionization topology resulting from our
method, we calculate the size distributions of both the
ionized and neutral regions. We randomly choose a pixel
of the desired phase (neutral or ionized), and record the
distance from that pixel to a phase transition along a
randomly chosen direction. We repeat this Monte Carlo
procedure 10⁷ times. Volume-weighted probability distribution functions (PDFs) produced in this way are shown
by the solid curves in Figure 6 for ionized regions (top
panel) and neutral regions (bottom panel). Curves corre-
spond to (z, x̄HI) = (10, 0.89), (9.25, 0.79), (8.50, 0.61),
(8.00, 0.45), (7.50, 0.27), (7.00, 0.10), from left to right
in the top panel, respectively (or from right to left in
the bottom panel). All curves are normalized so that the
probability density integrates to unity.
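The Monte Carlo "mean free path" measurement just described can be sketched as follows. This is our own illustration rather than the paper's code: the half-cell step size, the periodic wrapping, and the safety cap are assumptions.

```python
import numpy as np

def mfp_size_distribution(ionized, cell_mpc, phase=True, n_rays=10**5, rng=None):
    """Distances from random pixels of the chosen phase to the nearest phase
    transition along random directions (periodic box assumed)."""
    rng = np.random.default_rng(rng)
    nx = ionized.shape[0]
    idx = np.argwhere(ionized == phase)
    starts = idx[rng.integers(len(idx), size=n_rays)].astype(float)
    v = rng.normal(size=(n_rays, 3))             # isotropic random directions
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    dist = np.zeros(n_rays)
    alive = np.ones(n_rays, dtype=bool)
    step = 0.5                                   # step length in cells
    while alive.any() and dist.max() < 2 * nx:   # cap as a safety net
        dist[alive] += step
        pos = (starts[alive] + dist[alive][:, None] * v[alive]) % nx
        here = ionized[tuple(np.round(pos).astype(int).T % nx)]
        flipped = here != phase                  # crossed a phase transition
        alive_idx = np.where(alive)[0]
        alive[alive_idx[flipped]] = False
    return dist * cell_mpc                       # radii in Mpc
```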
It is useful to compare these distributions to the an-
alytic bubble mass function of Furlanetto et al. (2004c);
although this analytic approach is motivated by the same
excursion set barriers as our semi-numerical approach, it
does not account for the full geometry of sources. We
compute the probability distribution from the analytic
model by assuming purely spherical bubbles and convolv-
ing with the volume-weighted distance to the sphere’s
edge:
p(r) dr = [2πr² dr / (1 − x̄HI)] ∫ dR nb(R),   (15)
where nb(R) is the comoving number density of bub-
bles with radii between R and R + dR (taken from
Furlanetto et al. 2004c).
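For reference, a convolution of this general type can be evaluated numerically as below. This is a sketch under our own assumptions: nb(R) here is a toy log-normal size distribution standing in for the Furlanetto et al. (2004c) mass function, and the per-bubble kernel used is the exact volume-weighted distribution of the radial distance from a random interior point to the edge of a sphere, p(r|R) = 3(R − r)²/R³, which plays the role of the geometric factor in eq. (15).

```python
import numpy as np

def p_r_from_bubbles(nb, R, r):
    """Volume-weighted PDF of the distance r from a random ionized point to the
    edge of its bubble, for a bubble size distribution nb(R) (number density
    per unit radius). Uniform grids assumed."""
    dR = R[1] - R[0]
    w = nb * (4.0 / 3.0) * np.pi * R**3            # ionized volume per unit R
    w /= w.sum() * dR                              # normalize to unit total volume
    # per-bubble kernel: p(r|R) = 3 (R - r)^2 / R^3 for r < R, else 0
    kern = np.where(R[None, :] > r[:, None],
                    3.0 * (R[None, :] - r[:, None]) ** 2 / R[None, :] ** 3, 0.0)
    return (kern * w[None, :]).sum(axis=1) * dR    # p(r), normalized to unity

# toy example: log-normal bubble radii peaked near 3 Mpc
R = np.linspace(0.05, 30.0, 600)
nb = np.exp(-0.5 * (np.log(R / 3.0) / 0.5) ** 2) / R
r = np.linspace(0.0, 30.0, 601)
p = p_r_from_bubbles(nb, R, r)
print((p * (r[1] - r[0])).sum())                   # ~1: PDF integrates to unity
```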
Several points are evident from Figures 5 and
6. As expected (e.g., Furlanetto et al. 2004c, 2006a;
McQuinn et al. 2006a), there is a well-defined bubble
scale at each neutral fraction, despite some scatter in
the sizes. This scale also gets more pronounced (i.e. the
PDF peaks more) as reionization progresses; this is a
result of the changing shape of the underlying matter
power spectrum (Furlanetto et al. 2006a).
Also, the purely analytic estimates underpredict the
size distributions at all values of the neutral fraction,
though they do become increasingly accurate as the neu-
tral fraction decreases. This trend is perhaps counterin-
tuitive, as the analytic model, which rests on the assump-
tion of spherical bubbles, should perform best when the
bubbles are isolated, as one would expect at earlier times,
i.e. high neutral fractions. However, looking at the top-
left panel of Fig. 5, the typical bubbles filling most of
the ionized volume overlap due to the strong clustering
of early sources and bubbles. This produces many "overlapping pairs of spheres" at early times, as HII bubbles around clustered sources merge. Thus, with our "mean free path" definition of bubble sizes above, the spherical-bubble analytic model underpredicts the true size distribution. This effect was not noted
by previous studies (Zahn et al. 2007), because they used
a different definition of bubble sizes, based on spherical
filters used to flag regions in which x̄HI < 0.1. As time
progresses and the universe becomes more ionized, this
“overlapping pair of spheres” effect becomes less and less
dominant (see Fig. 5), and the analytic model becomes
increasingly more accurate.
Finally, the size distributions of neutral regions pre-
sented in the bottom panel of Fig. 6 are a new result and
potentially important for the 21-cm signal (which origi-
nates in neutral hydrogen, of course). In the later stages
of reionization, when the topology has transformed to
isolated neutral islands in a sea of ionized gas, this fig-
ure pinpoints the typical sizes of “mostly neutral” pixels
that continue to emit strongly. In contrast to the ionized
regions, the neutral regions (defined in this way) do not
grow substantially during reionization. From x̄HI = 0.89
to x̄HI = 0.1, the peak of the distribution shifts only by a
factor of ∼ 6, whereas the peak of the ionized region dis-
tribution shifts by a factor of ∼40 over the same range.
The reason for this is also evident in Figure 5: even when
the universe is mostly neutral, space is dotted with is-
lands of ionized gas, such that our “mean free path”–type
size distributions never become too large. The converse
does not hold true for ionized regions. However, a rough parallel to the ionized islands in a mostly neutral IGM could be found in Lyman limit systems (LLSs) inside larger HII regions (e.g. Barkana & Loeb 2002; Shapiro et al. 2004; Miralda-Escudé et al. 2000), though it is not clear how prevalent such neutral clumps are at high redshifts.
Throughout this paper, we have used a L = 100
Mpc “simulation” box. This size facilitates compar-
ison of our results with those from recent hybrid N-
Fig. 5.— Slices through the 200³ ionization field at z = 10, 9, 8.25, 7.25 (left to right across rows). With the assumption of ζ = 15.1,
these redshifts correspond to x̄HI = 0.89, 0.74, 0.53, 0.18, respectively. All slices are 100 Mpc on a side and 0.5 Mpc deep. The bottom-left
panel corresponds to the halo field in the top-right panel of Fig. 3, generated on a high-resolution 1200³ grid.
body works (Zahn et al. 2007; McQuinn et al. 2006a;
Iliev et al. 2006a; Trac & Cen 2006). However, the speed
of our semi-numerical approach allows us to explore
larger cosmological scales while still consistently resolv-
ing the small halos that could dominate the photon bud-
get during reionization. As mentioned previously, exist-
ing N-body codes must resort to merger-tree methods
to populate their distribution of small-mass halos, even
for box sizes ≲ 100 Mpc (McQuinn et al. 2006a). In this spirit, we present some preliminary results from an N = 1500³, L = 250 Mpc simulation, capable of directly resolving halos with masses M ≳ 2.2 × 10⁸ M⊙, with re-
sulting mass functions accurate to better than a factor
of two even at the smallest scale. This resolution pushes
the RAM limit of our machine and so each redshift can
take several hours to complete.8
In Figure 7, we compare size distributions of ionized
8 We note here that our halo-finding algorithm requires signifi-
cantly higher resolution than does predicting the ionization field
directly from the linear density field smoothed on larger scales
(Zahn et al. 2005, 2007). The latter method can be extended to
even larger boxes, though at the price of a somewhat less accurate
ionization map (compare the left and right panels in Fig. 4).
Fig. 6.— Size distributions (see definition in text) of ionized
(top panel) and neutral (bottom panel) regions. Curves correspond
to (z, x̄HI) = (10, 0.89), (9.25, 0.79), (8.50, 0.61), (8.00, 0.45),
(7.50, 0.27), (7.00, 0.10), from left to right in the top panel, re-
spectively (or from right to left in the bottom panel). Solid curves
are produced from our simulation while dotted curves correspond
to the analytic mass function. All curves are normalized so that
the probability distribution integrates to unity.
(top panel) and neutral (bottom panel) regions from our
two different simulation boxes. Curves correspond to (z,
x̄HI) = (9.00, 0.80), (8.00, 0.56), (7.00, 0.21), from left
to right in the top panel, respectively (or from right to
left in the bottom panel, respectively). Solid curves are
generated from our fiducial N = 1200³, L = 100 Mpc simulation, while dashed curves are generated from our larger simulation with N = 1500³, L = 250 Mpc. The
cell size in all ionization maps is 0.5 Mpc on a side, with
the efficiency parameter, ζ, adjusted to obtain matching
values of x̄HI, and we set the minimum halo mass to
Mmin = 2.2 × 10⁸ M⊙ even in the higher resolution runs
for easier comparison.
As reionization progresses, an increasing number of
large HII regions are “missed” by the L = 100 Mpc sim-
ulation. Interestingly, the analogous trend in the neutral
region size distributions (bottom panel) is weaker. This is
most likely because the “ionized island” effect limits the
size distributions of neutral regions as described above.
4. 21-CM TEMPERATURE FLUCTUATIONS
A natural application of our “simulation” technique is
to predict 21-cm brightness temperatures during reion-
ization. The offset of the 21-cm brightness temperature
from the CMB temperature, Tγ , along a line of sight
(LOS) at observed frequency ν, can be written as (e.g.
Furlanetto et al. 2006b):
δTb(ν) = [(TS − Tγ)/(1 + z)] (1 − e^−τν0)   (16)
Fig. 7.— Size distributions of ionized (top panel) and neutral
(bottom panel) regions from different simulation boxes. Curves
correspond to (z, x̄HI) = (9.00, 0.80), (8.00, 0.56), (7.00, 0.21),
from left to right in the top panel, respectively (or from right to
left in the bottom panel). Solid curves are generated from our
fiducial N = 1200³, L = 100 Mpc simulation, while dashed curves
are generated from a larger simulation with N = 1500³, L = 250
Mpc. The cell size in all ionization maps is 0.5 Mpc on a side,
with the efficiency parameter, ζ, adjusted to get matching values
of x̄HI and the minimum halo mass set to Mmin = 2.2 × 10⁸ M⊙ for comparison purposes.
≈ 9 (1 + z)^1/2 xHI (1 + δnl) [H/(dvr/dr + H)] mK,
where TS is the gas spin temperature, τν0 is the opti-
cal depth at the 21-cm frequency ν0, δnl is the physical
overdensity (see discussion under eq. 14), H is the Hub-
ble parameter, dvr/dr is the comoving gradient of the
line of sight component of the comoving velocity, and all
quantities are evaluated at redshift z = ν0/ν − 1. The
final approximation makes the standard assumption that
TS ≫ Tγ for all redshifts of interest during reionization
(e.g. Furlanetto 2006) and also that dvr/dr ≪ H . We
verify in our simulation that dvr/dr < H for all neutral
pixels.
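Evaluating the final (TS ≫ Tγ) form of eq. (16) pixel by pixel is straightforward; the sketch below is our own illustration with toy input fields, not the simulation code, and the clipping of dvr/dr is only a numerical safeguard (the text notes that dvr/dr < H holds for all neutral pixels in the actual run).

```python
import numpy as np

def delta_Tb_mK(x_HI, delta_nl, dvr_dr, H, z):
    """21-cm brightness temperature offset (mK) in the T_S >> T_gamma limit,
    following the approximate form of eq. (16)."""
    # guard against cells where dvr/dr approaches -H (numerical safety only)
    grad = np.clip(dvr_dr, -0.9 * H, None)
    return 9.0 * np.sqrt(1.0 + z) * x_HI * (1.0 + delta_nl) * H / (grad + H)

# toy 64^3 example with mild density and velocity-gradient fluctuations
rng = np.random.default_rng(1)
n = 64
delta = 0.1 * rng.standard_normal((n, n, n))
dvdr = 0.05 * rng.standard_normal((n, n, n))    # in units of H
x_HI = (delta < 0.0).astype(float)              # crude ionization toy model
dTb = delta_Tb_mK(x_HI, delta, dvdr, H=1.0, z=8.25)
print(dTb.mean(), dTb.max())
```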
Maps of δTb(x, ν) generated in this manner are shown
in Figure 8. All slices are 100 Mpc on a side, 0.5 Mpc
deep, and correspond to (z, x̄HI) = (9.00, 0.74), (8.25,
0.53), (7.50, 0.27), from left to right. The top panels take
into account the velocity correction term in eq. (16),
while the bottom panels ignore it.
As seen in Fig. 8, velocities typically increase the con-
trast in temperature maps, making hot spots hotter and
cool spots cooler. We also see that temperature hot
spots, which correspond to dense pixels, tend to clus-
ter around the edges of HII bubbles, especially smaller
bubbles. This occurs because HII bubbles correlate with
peaks of the density field and long-wavelength biases in
the density field can extend beyond the edge of the ion-
ized region. This enhanced contrast might be useful in
Fig. 8.— Brightness temperature of 21-cm radiation relative to the CMB temperature. All slices are 100 Mpc on a side, 0.5 Mpc deep,
and correspond to (z, x̄HI) = (9.00, 0.74), (8.25, 0.53), (7.50, 0.27), left to right. Top panels include the velocity correction term in eq.
(16), while the bottom panels do not. For animated versions of these pictures, see http://pantheon.yale.edu/∼am834/Sim.
the detection of the boundaries of ionized regions with fu-
ture 21-cm experiments. As reionization progresses most
hot spots become swallowed up by HII bubbles, and the
effects of velocities diminish.
In Figure 9 we plot the dimensionless 21-cm power spectrum, defined as ∆²21(k, z) = k³/(2π²V) 〈|δ21(k, z)|²〉k, where δ21(x, z) ≡ δTb(x, z)/δ̄Tb(z) − 1. Solid blue curves take into
account gas velocities, while dashed red curves do not.
Curves correspond to (x̄HI, z) = (0.79, 9.25), (0.61,
8.50), (0.45, 8.00), (0.27, 7.50), (0.10, 7.00), bottom to
top. Error bars on the bottom dashed curve denote 1-σ
Poisson uncertainties; fractional errors in a given bin
are the same for all curves. As reionization progresses,
small-scale power is traded for large-scale power, and the
curves become flatter. Note that, with our dimensionless
definition of the power spectrum, curves with smaller
x̄HI have larger values of ∆²21(k, z). This is because the mean brightness temperature offset drops quite rapidly as reionization progresses, since δ̄Tb(z) ∝ x̄HI, but the
scatter remains significant (see Fig. 6) and thus the
fractional perturbation, δ21(x, z), increases throughout
reionization.
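A spherically averaged estimator for ∆²21(k) of the kind defined above can be written as follows; this is a generic FFT-based sketch (our own binning choices and Fourier conventions), not the analysis code used for Figure 9.

```python
import numpy as np

def dimensionless_power(delta21, box_mpc, n_bins=20):
    """Spherically averaged Delta^2_21(k) = k^3 P(k) / (2 pi^2) for a periodic
    cubic box, with P(k) = <|delta21(k)|^2> / V and a continuum FT convention."""
    n = delta21.shape[0]
    V = box_mpc**3
    d_k = np.fft.fftn(delta21) * (box_mpc / n) ** 3     # approximate continuum FT
    pk3d = np.abs(d_k) ** 2 / V
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_mpc / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    pk = pk3d.ravel()
    bins = np.logspace(np.log10(2 * np.pi / box_mpc),
                       np.log10(np.pi * n / box_mpc), n_bins + 1)
    which = np.digitize(kmag, bins)
    k_cen, d2 = [], []
    for b in range(1, n_bins + 1):
        sel = which == b
        if sel.any():
            kbar = kmag[sel].mean()
            k_cen.append(kbar)
            d2.append(kbar**3 * pk[sel].mean() / (2.0 * np.pi**2))
    return np.array(k_cen), np.array(d2)
```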
Finally, in Figure 10 we plot dimensional power spectra, δ̄Tb(z)² ∆²21(k, z). The curves correspond to (x̄HI, z)
= (0.80, 9.00), (0.56, 8.00), (0.21, 7.00), top to bottom at
large k, respectively. The dotted green curves are gen-
erated from a large, high-resolution “simulation”, with
Fig. 9.— Dimensionless 21-cm power spectra for (x̄HI, z) =
(0.79, 9.25), (0.61, 8.50), (0.45, 8.00), (0.27, 7.50), (0.10, 7.00),
bottom to top. Solid blue curves take into account gas velocities,
while dashed red curves do not.
Fig. 10.— Dimensional 21-cm power spectra. The curves corre-
spond to (x̄HI, z) = (0.80, 9.00), (0.56, 8.00), (0.21, 7.00), top to
bottom at large k, respectively. The dotted green curves are gen-
erated from a large, high-resolution “simulation”, with N = 1500³
and L = 250 Mpc, with no velocity contribution to the power spec-
tra. Solid blue curves and dashed red curves are generated with
our fiducial N = 1200³ and L = 100 Mpc simulation, with and
without the velocity contribution, respectively.
N = 1500³ and L = 250 Mpc, with no velocity contribution to the power spectra. Solid blue curves and dashed red curves are generated with our fiducial N = 1200³
and L = 100 Mpc simulation, with and without the ve-
locity contribution, respectively. The cell size in all δTb
maps is 0.5 Mpc on a side, with the efficiency parame-
ter, ζ, adjusted to achieve matching values of x̄HI and
the minimum halo mass set to Mmin = 2.2 × 10⁸ M⊙ for
comparison purposes.
As seen in Figures 9 and 10, velocities make a mod-
est contribution to the 21 cm power spectrum, boosting
power on small scales early in reionization. Note that
the apparent slight decrease in power at small k when
velocities are included is well within the errors from av-
eraging over the few modes available to us on the largest
scales (e.g., see Poisson error bars on the bottom dashed
curve in Fig. 9). While the maximum δTb value in our
simulation box increases by a factor of a few when ve-
locities are included, most of the pixels are only slightly
affected. When the power spectrum is plotted in a dimensional version, δ̄Tb(z)² ∆²21(k, z), small-scale power is
boosted by ∼ 40% at (x̄HI, z) = (0.80, 9.00), with this en-
hancement monotonically decreasing as reionization pro-
gresses. Linear theory predicts that velocities enhance
the density power spectrum by a factor of 1.87 when
x̄HI = 1 (Kaiser 1987). In fact we do recover this en-
hancement for a fully neutral IGM; however, as predicted
by analytic models (McQuinn et al. 2006b), the ionized
bubbles rapidly remove most of this amplification.
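The 1.87 quoted above is simply the angle-averaged Kaiser enhancement for an unbiased tracer; as a quick check (our own one-line derivation, with β = f/b = 1 for fully neutral, unbiased gas):

```latex
% Angle-averaged redshift-space enhancement (Kaiser 1987), beta = f/b:
\frac{P_s}{P_r} \;=\; \int_0^1 \bigl(1+\beta\mu^2\bigr)^2\,\mathrm{d}\mu
\;=\; 1+\frac{2}{3}\beta+\frac{1}{5}\beta^2
\;\overset{\beta=1}{=}\; \frac{28}{15} \;\approx\; 1.87 .
```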
Figure 10 also confirms the inferences drawn from Fig.
7, primarily that larger box sizes are needed to capture
the ionization topology at the end stages of reionization.
Comparing the dashed red to the dotted green curves in
Fig. 10, we note that our fiducial L = 100 Mpc simula-
tions are accurate for scales with k ≳ 0.2 Mpc−1 (or λ ≲ 30 Mpc). As reionization progresses, larger scales
lose power more rapidly than in the L = 250 Mpc simu-
lation. This is again evidence that very large scale simu-
lations are needed to model the middle and late stages of
reionization. Thus the speed and high resolution of our
semi-numeric approach will be extremely useful in future
modeling of reionization.
5. CONCLUSIONS
We introduce a method to construct semi-numeric sim-
ulations that can efficiently generate realizations of halo
distributions and ionization maps at high redshifts. Our
procedure combines an excursion-set approach with first-
order Lagrangian perturbation theory and operates di-
rectly on the linear density and velocity fields. As such,
our algorithm can exceed the dynamic range of exist-
ing N-body codes by orders of magnitude. As this is
the main limiting factor in simulating the ionized bubble
topology throughout reionization, when ionized regions
reach scales of tens of comoving Mpc, this will be partic-
ularly useful in such studies. Moreover, the efficiency of
the algorithm will allow us to explore the large parame-
ter space required by the many uncertainties associated
with high-redshift galaxy formation.
We find that our halo finding algorithm compares well
with N-body simulations on the statistical level, yield-
ing both accurate mass functions and power spectra. We
have not yet compared our halo distribution with sim-
ulations on a point-by-point basis, but we do not ex-
pect perfect agreement because of the vagaries of the ex-
cursion set approach. However, it is encouraging that
a very similar algorithm independently developed by
Bond & Myers (1996a) fares quite well in a comparison
of high-mass halos.
Our HII bubble finding algorithm captures the bubble
topology quite well, as compared to ionization maps from
ray-tracing RT algorithms at an identical x̄HI. Our al-
gorithm is similar to other codes, although we build the
ionization map from our excursion set halo field rather
than directly from the linear density field or from halos
found in an N-body simulation (Zahn et al. 2005, 2007).
Compared to codes built only from the linear density
field, we can better track the “stochastic” component of
the bias, though at the cost of somewhat more compu-
tation and a harder limit on resolution. On the other
hand, our scheme is much faster than using an N-body
code and offers superior dynamic range.
We create ionization maps using a simple efficiency pa-
rameter and compute the size distributions of ionized
and neutral regions. Our size distributions are gener-
ally shifted to larger scales when compared with purely
analytic models (Furlanetto et al. 2004c) at the same
mean neutral fraction. The discrepancy lies in the fact
that, at their core, the purely analytic models are based
on ensemble-averaged distributions of isolated spheres.
Hence they do not capture overlapping bubble shapes,
which are most important at large x̄HI (when the bubbles
are small and random fluctuations in the source densities,
as well as clustering, are most important).
In this paper, we have confined ourselves to a sim-
ple ionization criterion (essentially photon counting;
Furlanetto et al. 2004c). However, our algorithm can
easily accommodate more sophisticated prescriptions, so
long as they can be expressed either with the excursion
set formalism (Furlanetto & Oh 2005; Furlanetto et al.
2006a) or built from the halo field (in a similar way to
semi-analytic models of galaxy formation embedded in
numerical simulations).
We also use our procedure to generate maps and power
spectra of the 21-cm brightness temperature fluctuations
during reionization. We note that temperature hot spots
generally cluster around HII bubbles, especially in the
early phases of reionization. Because HII bubbles cor-
relate with peaks of the density field, long-wavelength
biases in the density field can extend beyond the edge of
the ionized region, with the resulting overdensities ap-
pearing as hot spots. This effect might be useful for
detecting the boundaries of ionized regions with future
21-cm experiments. We study the imprint of gas bulk ve-
locities on 21-cm maps and power spectra, an effect which
was not included in previous studies. We find that ve-
locities do not have a major impact during reionization,
although they do increase the contrast in temperature
maps, making some hot spots hotter and some cool spots
cooler. Velocities also increase small-scale power, though
the effect decreases with decreasing x̄HI.
We also include some preliminary results from a sim-
ulation run with the largest dynamical range to date:
a 250 Mpc box which resolves halos with masses M ≳ 2.2 × 10⁸ M⊙. This simulation run confirms that ex-
tremely large scales are required to model the late stages
of reionization, x̄HI ≲ 0.5, when the typical scale of ion-
ized bubbles becomes several tens of Mpc.
The speed and dynamic range provided by our semi-
numeric approach will be extremely useful in the mod-
eling of early structure formation and reionization. Our
ionization maps can be efficiently folded into analyses of
current and upcoming high-redshift observations, espe-
cially 21-cm surveys.
We thank Greg Bryan for many helpful conversations
concerning the inner workings of cosmological simula-
tions and the generation of initial conditions. We also
thank Oliver Zahn for permitting the use of the halo field
from his simulation output as well as for several interest-
ing discussions. We thank Mathew McQuinn for provid-
ing the halo power spectra from his simulation as well
as for associated helpful comments. We thank Zoltan
Haiman, Greg Bryan, Oliver Zahn and Mathew McQuinn
for insightful comments on a draft version of this paper.
This research was supported by NSF-AST-0607470.
REFERENCES
Abel, T., & Wandelt, B. D. 2002, MNRAS, 330, L53
Bagla, J. S., & Padmanabhan, T. 1997, Pramana, 49, 161
Barkana, R., & Loeb, A. 2002, ApJ, 578, 1
—. 2004, ApJ, 609, 474
Benson, A. J., Kamionkowski, M., & Hassani, S. H. 2005, MNRAS,
357, 847
Bond, J. R., Cole, S., Efstathiou, G., & Kaiser, N. 1991, ApJ, 379, 440
Bond, J. R., & Myers, S. T. 1996a, ApJS, 103, 1
—. 1996b, ApJS, 103, 41
Casas-Miranda, R., Mo, H. J., Sheth, R. K., & Boerner, G. 2002,
MNRAS, 333, 730
Ciardi, B., Stoehr, F., & White, S. D. M. 2003, MNRAS, 343, 1101
Dijkstra, M., Haiman, Z., Rees, M. J., & Weinberg, D. H. 2004,
ApJ, 601, 666
Efstathiou, G. 1992, MNRAS, 256, 43P
Efstathiou, G., Davis, M., White, S. D. M., & Frenk, C. S. 1985,
ApJS, 57, 241
Eisenstein, D. J., & Hu, W. 1999, ApJ, 511, 5
Fan, X. et al. 2006, AJ, 132, 117
Furlanetto, S. R. 2006, MNRAS, 371, 867
Furlanetto, S. R., Hernquist, L., & Zaldarriaga, M. 2004a, MNRAS,
354, 695
Furlanetto, S. R., McQuinn, M., & Hernquist, L. 2006a, MNRAS,
365, 115
Furlanetto, S. R., & Oh, S. P. 2005, MNRAS, 363, 1031
Furlanetto, S. R., Oh, S. P., & Briggs, F. H. 2006b, Phys. Rep.,
433, 181
Furlanetto, S. R., Zaldarriaga, M., & Hernquist, L. 2004b, ApJ,
613, 16
—. 2004c, ApJ, 613, 1
—. 2006c, MNRAS, 365, 1012
Gelb, J. M., & Bertschinger, E. 1994, ApJ, 436, 491
Gnedin, N. Y. 2000a, ApJ, 535, 530
—. 2000b, ApJ, 542, 535
Haiman, Z., & Bryan, G. L. 2006, ApJ, 650, 7
Haiman, Z., Rees, M. J., & Loeb, A. 1997, ApJ, 484, 985
Hockney, R. W., & Eastwood, J. W. 1988, Computer simulation
using particles (Bristol: Hilger, 1988)
Iliev, I. T. et al. 2006a, MNRAS, submitted, preprint astro-
ph/0603199
Iliev, I. T., Mellema, G., Pen, U.-L., Merz, H., Shapiro, P. R., &
Alvarez, M. A. 2006b, MNRAS, 369, 1625
Jenkins, A., Frenk, C. S., White, S. D. M., Colberg, J. M., Cole,
S., Evrard, A. E., Couchman, H. M. P., & Yoshida, N. 2001,
MNRAS, 321, 372
Kaiser, N. 1987, MNRAS, 227, 1
Kashikawa, N. et al. 2006, ApJ, 648, 7
Kitayama, T., & Ikeuchi, S. 2000, ApJ, 529, 615
Lacey, C., & Cole, S. 1993, MNRAS, 262, 627
Liddle, A. R., Lyth, D. H., Viana, P. T. P., & White, M. 1996,
MNRAS, 282, 281
Malhotra, S., & Rhoads, J. E. 2004, ApJ, 617, L5
—. 2006, ApJ, 647, L95
McQuinn, M., Lidz, A., Zahn, O., Dutta, S., Hernquist, L., &
Zaldarriaga, M. 2006a, ArXiv Astrophysics e-prints
McQuinn, M., Zahn, O., Zaldarriaga, M., Hernquist, L., &
Furlanetto, S. R. 2006b, ApJ, 653, 815
Mesinger, A., & Haiman, Z. 2004, ApJ, 611, L69
—. 2006, ArXiv Astrophysics e-prints astro-ph/0610258
Mesinger, A., Perna, R., & Haiman, Z. 2005, ApJ, 623, 1
Miralda-Escudé, J., Haehnelt, M., & Rees, M. J. 2000, ApJ, 530, 1
Mo, H. J., & White, S. D. M. 1996, MNRAS, 282, 347
Page, L. et al. 2006, ArXiv Astrophysics e-prints astro-ph/0603450
Peebles, P. J. E. 1980, The large-scale structure of the
universe (Research supported by the National Science
Foundation. Princeton, N.J., Princeton University Press,
1980. 435 p.)
Press, W. H., & Schechter, P. 1974, ApJ, 187, 425
Razoumov, A. O., Norman, M. L., Abel, T., & Scott, D. 2002, ApJ,
572, 695
Reed, D., Gardner, J., Quinn, T., Stadel, J., Fardal, M., Lake, G.,
& Governato, F. 2003, MNRAS, 346, 565
Scannapieco, E., Ferrara, A., & Madau, P. 2002, ApJ, 574, 590
Scoccimarro, R., & Sheth, R. K. 2002, MNRAS, 329, 629
Shapiro, P. R., Giroux, M. L., & Babul, A. 1994, ApJ, 427, 25
Shapiro, P. R., Iliev, I. T., & Raga, A. C. 2004, MNRAS, 348, 753
Sheth, R. K., & Lemson, G. 1999, MNRAS, 304, 767
Sheth, R. K., Mo, H. J., & Tormen, G. 2001, MNRAS, 323, 1
Sheth, R. K., & Pitman, J. 1997, MNRAS, 289, 66
Sheth, R. K., & Tormen, G. 1999, MNRAS, 308, 119
Sirko, E. 2005, ApJ, 634, 728
Sokasian, A., Abel, T., Hernquist, L., & Springel, V. 2003, MNRAS,
344, 607
Sokasian, A., Abel, T., & Hernquist, L. E. 2001, New Astronomy,
6, 359
Spergel, D. N. et al. 2006, ApJ, submitted, preprint astro-
ph/0603449
Springel, V., & Hernquist, L. 2003, MNRAS, 339, 312
Thoul, A. A., & Weinberg, D. H. 1996, ApJ, 465, 608
Totani, T., Kawai, N., Kosugi, G., Aoki, K., Yamada, T., Iye, M.,
Ohta, K., & Hattori, T. 2006, PASJ, 58, 485
Trac, H., & Cen, R. 2006, ArXiv Astrophysics e-prints astro-
ph/0612406
Wyithe, J. S. B., & Loeb, A. 2004, Nature, 427, 815
Zahn, O., Lidz, A., McQuinn, M., Dutta, S., Hernquist, L.,
Zaldarriaga, M., & Furlanetto, S. R. 2007, ApJ, 654, 12
Zahn, O., Zaldarriaga, M., Hernquist, L., & McQuinn, M. 2005,
ApJ, 630, 657
Zel’Dovich, Y. B. 1970, A&A, 5, 84
|
0704.0947 | Jet-disturbed molecular gas near the Seyfert 2 nucleus in M51 | Astronomy & Astrophysics manuscript no. m51pdbi-main c© ESO 2018
October 28, 2018
Letter to the Editor
Jet-disturbed molecular gas near the Seyfert 2 nucleus in M51
S. Matsushita, S. Muller, and J. Lim
Academia Sinica, Institute of Astronomy and Astrophysics, P.O. Box 23-141, Taipei 106, Taiwan, R.O.C.
Preprint online version: October 28, 2018
ABSTRACT
Context. Previous molecular gas observations at arcsecond-scale resolution of the Seyfert 2 galaxy M51 suggest the
presence of a dense circumnuclear rotating disk, which may be the reservoir for fueling the active nucleus and obscures
it from direct view in the optical. However, our recent interferometric CO(3-2) observations show a hint of a velocity
gradient perpendicular to the rotating disk, which suggests a more complex structure than previously thought.
Aims. To image the putative circumnuclear molecular gas disk at sub-arcsecond resolution to better understand both
the spatial distribution and kinematics of the molecular gas.
Methods. We carried out CO(2-1) and CO(1-0) line observations of the nuclear region of M51 with the new A configu-
ration of the IRAM Plateau de Bure Interferometer, yielding a spatial resolution lower than 15 pc.
Results. The high resolution images show no clear evidence of a disk, aligned nearly east-west and perpendicular to the
radio jet axis, as suggested by previous observations, but show two separate features located on the eastern and western
sides of the nucleus. The western feature shows an elongated structure along the jet and a good velocity correspondence
with optical emission lines associated with the jet, suggesting that this feature is jet-entrained gas. The eastern
feature is elongated nearly east-west ending around the nucleus. A velocity gradient appears in the same direction with
increasingly blueshifted velocities near the nucleus. This velocity gradient is in the opposite sense of that previously
inferred for the putative circumnuclear disk. Possible explanations for the observed molecular gas distribution and
kinematics are a rotating gas disk disturbed by the jet, gas streaming toward the nucleus, or a ring with another smaller counter- or Keplerian-rotating gas disk inside.
Key words. galaxies: individual (M51, NGC 5194) – galaxies: ISM – galaxies: Seyfert
1. Introduction
Active Galactic Nuclei (AGNs) are believed to be powered
by gas accretion. This gas is supplied from interstellar mat-
ter in host galaxies, and the gas may form rotationally-
supported structures around the central supermassive black
hole. If they are viewed close to edge-on, they may ob-
scure the central activity from direct view. AGNs can be
categorized as type 1 if seen face-on, and type 2 if seen
edge-on; this explanation is known as a unified model (e.g.
Antonucci & Miller 1985). Indeed, a few hundred pc res-
olution molecular gas imaging toward the central regions
of the Seyfert 2 galaxies NGC 1068 (Planesas et al. 1991;
Jackson et al. 1993) and M51 (Kohno et al. 1996) show
strong peaks at the nuclei with velocity gradients perpen-
dicular to radio jets, which suggest the existence of edge-
on circumnuclear rotating disks. Recent ∼ 50 pc resolu-
tion imaging studies toward NGC 1068 and the Seyfert
1 galaxy NGC 3227 support this view, showing more de-
tailed structures, namely warped disks (Schinnerer et al.
2000a,b). However, observations toward a few low activ-
ity AGN galaxies with < 100 pc resolution show lopsided,
weak, or no molecular gas emission toward the nuclei (e.g.
García-Burillo et al. 2003, 2005).
M51 (NGC 5194) has also been observed in detail
with molecular lines in the past, since it is one of the
nearest (7.1 Mpc; Takáts & Vinkó 2006) Seyfert galax-
Send offprint requests to: S. Matsushita, e-mail:
[email protected]
ies. A pair of radio jets emanates from the nucleus and
narrow line regions (NLRs) are associated with the jet
(e.g., Crane & van der Hulst 1992; Grillmair et al. 1997;
Bradley et al. 2004). Interferometric images in molecular
gas show blueshifted emission on the eastern side of the
Seyfert 2 nucleus, and redshifted gas on the western side
(Kohno et al. 1996; Scoville et al. 1998). This shift is al-
most perpendicular to the jet axis, and the estimated col-
umn density is consistent with that estimated from X-ray
absorption toward the nucleus, suggesting that the molecu-
lar gas can be a rotating disk and play an important role in
obscuring the AGN. Interferometric CO(3-2) observations
suggest a velocity gradient along the jet in addition to that
perpendicular to the jet (Matsushita et al. 2004). These re-
sults imply more complicated features than a simple disk
structure. We therefore performed sub-arcsecond resolution
CO(2-1) and CO(1-0) imaging observations of the center of
M51 to study the distribution and kinematics of the molec-
ular gas around the AGN in more detail.
2. Observation and data reduction
We observed CO(2-1) and CO(1-0) simultaneously toward
the nuclear region of M51 using the IRAM Plateau de
Bure Interferometer. The array was in the new A configu-
ration, whose maximum baseline length extends to 760 m.
Observations were carried out on February 4th, 2006. The
system temperatures in DSB at 1 mm were in the range
200-700 K, except for Antenna 6, for which a new genera-
http://arxiv.org/abs/0704.0947v1
[Fig. 1 panels: CO(2-1) channel maps at velocities 379.0 to 602.6 km s−1 in 20.3 km s−1 steps; horizontal axis is R.A. offset in arcsec.]
Fig. 1. Channel maps of the CO(2-1) line. The contour lev-
els are −3, 3, 5, 7, and 9σ, where 1σ corresponds to 5.2 mJy
beam−1 (= 0.96 K). The cross in each map indicates
the position of the 8.4 GHz radio continuum peak posi-
tion of R.A. = 13h29m52.s7101 and Dec. = 47◦11′42.′′696
(Hagiwara et al. 2001; Bradley et al. 2004). The R.A. and
Dec. offsets are the offsets from the phase tracking center of
R.A. = 13h29m52.s71 and Dec. = 47◦11′42.′′6. The synthe-
sized beam is shown at the bottom-left corner of the first
channel map.
tion receiver gave system temperatures of 150-230 K. Those
in SSB at 3 mm were in the range 140-250 K for Antenna
6, and 220-550 K for other antennas. Four of the corre-
lators were configured to cover a 209 MHz (272 km s−1)
bandwidth for the CO(2-1) line, and a 139 MHz (362 km
s−1) bandwidth for the CO(1-0) line. The remaining four
units of the correlator were configured to cover a 550 MHz
bandwidth for continuum observations and calibration. The
strong quasar 0923+392 was used for the bandpass calibra-
tion, and the quasars 1150+497 and 1418+546 were used
for the phase and amplitude calibrations.
The data were calibrated using GILDAS, and were im-
aged using AIPS. The data were CLEANed with natural
weighting, and the synthesized beam sizes are 0.′′40× 0.′′31
(14 pc × 11 pc) with a position angle (P.A.) of 0◦ and
0.′′85× 0.′′55 (29 pc × 19 pc) with a P.A. of 13◦ for CO(2-1)
and CO(1-0) images, respectively. Fig. 1 shows the channel
maps of CO(2-1) emission with a 20.3 km s−1 velocity res-
olution. The channel maps of CO(1-0) emission show simi-
lar features to that of CO(2-1) emission with lower spatial
resolution. Fig. 2 shows integrated intensity and intensity
weighted mean velocity maps of the CO(2-1) and CO(1-
[Fig. 2 panels: (a) CO(2−1) moment 0, (b) CO(1−0) moment 0, (c) CO(2−1) moment 1, (d) CO(1−0) moment 1; intensity scales in Jy beam−1 km s−1, velocity scales in km s−1, 34 pc scale bar, axes in R.A. offset (arcsec).]
Fig. 2. Integrated intensity (moment 0) and intensity
weighted velocity (moment 1) maps of the CO(2-1) and
CO(1-0) lines. The synthesized beams are shown at the
bottom-left corner of each image. The crosses and the ref-
erence positions of the R.A. and Dec. offsets are the same
as in Fig. 1. (a) The CO(2-1) moment 0 image. The contour
levels are (1, 3, 5, 7, 9, and 11) × 0.334 Jy beam−1 km s−1
(= 62.0 K km s−1). (b) The CO(1-0) moment 0 image. The
contour levels are (1, 3, 5, and 7) × 0.257 Jy beam−1 km
s−1 (= 50.6 K km s−1). (c) The CO(2-1) moment 1 image.
(d) The CO(1-0) moment 1 image.
0) lines. The noise levels for continuum maps are 1.2 mJy
beam−1 at 1.3 mm and 0.54 mJy beam−1 at 2.6 mm, respec-
tively. We did not detect any significant continuum emission
at either frequency.
3. Results
Most of the CO(2-1) emission is detected within ∼ 1′′
(34 pc) of the center, and is located mainly on the eastern
and western sides of the nucleus. There is also weak emis-
sion located∼ 2.′′7 northwest of the nucleus. The overall dis-
tribution and kinematics are consistent with past observa-
tions (Kohno et al. 1996; Scoville et al. 1998), if we degrade
our image to lower angular resolution; a blueshifted feature
with the average velocity of ∼ 460 km s−1 at the eastern
side of the nucleus, and a redshifted feature with an average
velocity of ∼ 500 km s−1 at the western side (Figs. 1, 2; see
also Fig. 3b). We refer to these main structures with the
same labels as in Scoville et al. (1998) (Fig. 2a).
Our higher resolution images, however, show more com-
plicated structures and kinematics than the previous low
angular resolution observations. Molecular gas on the west-
ern side of the nucleus, S1, is elongated in the north-south
direction and separated into two main peaks (S1a and b).
S1a is located 0.′′9 (30 pc) northwest of the nucleus, and S1b
is 1.′′0 (34 pc) to the southwest. On the eastern side of the
nucleus, the molecular gas has an intensity peak 0.′′6 (20 pc)
to the northeast (labeled S2), which is located closer to the
nucleus in projected distance than that of S1a/b.
The feature S1 shows a clear velocity gradient along
the north-south direction, which is shown in Fig. 2(c) and
also in the position-velocity (PV) diagram (Fig. 3a). This
gradient was previously suggested by the CO(3-2) data
(Matsushita et al. 2004), but the magnitude of the velocity
gradient is different. The computation of the magnitude of
the velocity gradient is similar to that used for the CO(3-2)
data. The fitting result indicates a velocity gradient within
S1 of 2.2 ± 0.3 km s−1 pc−1, which is larger than that re-
ported previously, 0.77 ± 0.01 km s−1 pc−1 (the value has been rescaled to the galaxy distance adopted here).
This difference is partially due to the larger beam size of
the previous result; the CO(3-2) data set has a beam size
of 3.′′9× 1.′′6 with a P.A. of 146◦, and the velocities of S2/C
and S3 contaminate that of S1.
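The gradient quoted above amounts to a weighted linear fit of the mean velocities against position along the cut; a generic version is sketched below with made-up offsets and velocities chosen only to reproduce a slope of ~2.2 km s−1 pc−1 (the actual fit was, of course, performed on the CO data themselves).

```python
import numpy as np

def velocity_gradient(offset_pc, v_kms, weight=None):
    """Weighted least-squares slope dV/dx (km/s per pc) and its 1-sigma error."""
    w = np.ones_like(v_kms) if weight is None else np.asarray(weight, float)
    A = np.vstack([offset_pc, np.ones_like(offset_pc)]).T
    cov = np.linalg.inv(A.T @ (A * w[:, None]))
    slope, intercept = cov @ A.T @ (w * v_kms)
    # scale the formal covariance by the reduced chi^2 of the fit
    resid = v_kms - (slope * offset_pc + intercept)
    chi2red = np.sum(w * resid**2) / max(len(v_kms) - 2, 1)
    return slope, np.sqrt(cov[0, 0] * chi2red)

# hypothetical mean velocities along the N-S cut through S1 (pc, km/s)
x = np.array([-20.0, -10.0, 0.0, 10.0, 20.0])
v = np.array([466.0, 488.0, 510.0, 532.0, 554.0])
print(velocity_gradient(x, v))      # slope = 2.2 km/s/pc for these made-up points
```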
The CO(1-0) maps show very similar molecular gas
distribution and kinematics as those in CO(2-1) maps
(Fig. 2b,d). Only the western emission was detected in
previous observations (Aalto et al. 1999; Sakamoto et al.
1999), but our map clearly shows the emission from both
side of the nucleus.
In addition to the previously known features, our CO(2-
1) image also shows a weak emission near the nucleus with
a structure elongated in the northeast-southwest direction
(feature C in Fig. 2a). This structure could be a part of
S2, since the velocity map (Fig. 2c) and the PV diagram
(Fig. 3b) show a smooth velocity gradient, although most
of the emission in C comes from only one velocity channel
(419.7 km s−1 map in Fig. 1). The velocity gradient between
S2 and C is in an opposite sense to that previously seen
with the lower angular resolution observations mentioned
above. This structure is not detected in the CO(1-0) line,
but a hint of a velocity gradient can be seen in Fig. 2d.
The total CO(2-1) integrated intensity of S1, S2, and C
is 25.01 Jy km s−1, and that of S1 and S2 in Scoville et al.
(1998) is 33.44 Jy km s−1, so that our data detected 75%
of their intensity. Scoville et al. (1998) detected ∼ 50%
and 20% of the single dish CO(2-1) flux in redshifted and
blueshifted emission, respectively, so that our data recov-
ered ∼ 25% of the single dish flux.
4. Discussion
4.1. Jet-entrained molecular gas
Our molecular gas data show a clear north-south velocity
gradient within the feature S1. We suggested from our pre-
vious study that this velocity gradient may be due to molec-
ular gas entrainment by the radio jet (Matsushita et al.
2004). Here we revisit this possibility with higher spa-
tial and velocity resolution data. Fig. 4 shows our CO(2-
1) image overlaid on the 6 cm radio continuum image
(Crane & van der Hulst 1992). The radio continuum image
shows a compact radio core coincident with the nucleus,
and the southern jet emanating from there (note that the
northern jet is located outside our figure). The CO(2-1)
map clearly shows that S1 is aligned almost parallel to the
jet. In addition, Figs. 2 and 3 show that the velocity gradi-
ent in S1 is also almost parallel to the jet.
[Fig. 3 panels (a) and (b): position-velocity planes, offset (arcsec) versus velocity (km s−1).]
Fig. 3. Position-velocity (PV) diagrams of the CO(2-1)
line. The contour levels are 3, 5, 7, and 9σ, where 1σ cor-
responds to 5.2 mJy beam−1 (= 0.96 K). (a) PV diagram
along the north-south elongated S1 feature (P.A. of the cut
is 103◦). The positions for S1a and S1b are shown with
labels. (b) PV diagram along R.A. with the cut through
the S1a, C, and S2 features (P.A. of the cut is 90◦). The
positions for S1a, S2, and C are shown with labels.
The velocity increases from ∼ 480 km s−1 at S1a to
∼ 540 km s−1 south of S1b. This increment is very simi-
lar to that observed in the NLR clouds along the radio jet;
Bradley et al. (2004) measured the velocities and velocity
dispersions of the clouds using the [O III] λ5007 line, and
showed that the velocities of the southern clouds within ≲ 1′′ of the nucleus are VLSR ∼ 440−590 km s−1 and that the velocity increases as the clouds move away from the nucleus (see
Table 2 and Fig. 9 of their paper)1. This velocity range and
increment are consistent with our data. Furthermore recent
observations of H2O masers toward the nucleus also show
a velocity gradient along the jet with the same sense as our
results (Hagiwara 2007), in addition to the good correspondence of the velocity range (Hagiwara et al. 2001; Hagiwara
2007; Matsushita et al. 2004). These results suggest that
the molecular gas in S1 (and the NLR clouds and the H2O
masers) is possibly entrained by the radio jet. These results
also suggest that some of the material in NLRs is supplied
from molecular gas close to AGNs.
Another example of jet-entrained neutral gas is found in
the radio galaxy 3C293 (Emonts et al. 2005). The velocity
of H I gas in absorption spectra toward the AGN matches
that of ionized gas along kpc-scale radio jets. The spatial
coincidence is not clear, since the spatial resolution of the
H I data is lower (25.′′3 × 11.′′9) than that of the ionized
gas data. Our result is therefore the first possible case of
entrainment of molecular gas by a jet at the scale of ten pc.
The better resolution of our new CO data allows us to
revisit the values of the molecular gas mass, momentum,
and energy of the entrained gas. We derive 6 × 10⁵ M⊙, 8 × 10⁴⁵ g cm s−1, and 3 × 10⁵² ergs for these quantities.
These values are about half of the previous values derived
from the CO(3-2) data, mainly due to the larger beam, but
the conclusion is similar; the energy of the entrained gas
could be similar to that of the radio jet (> 6.9 × 10⁵¹ ergs;
1 We selected the clouds with a velocity dispersion of less than
100 km s−1; Clouds 3, 4, and 4a in Bradley et al. (2004). If we
include all the clouds, the velocity is ∼ 440 − 690 km s−1 with
a range of velocity dispersion of ∼ 25 − 331 km s−1; Clouds 2,
3, 3a, 4, 4a, and 4b.
Fig. 4. The CO(2-1) integrated intensity image (contours)
overlaid on the VLA 6cm radio continuum image (greyscale;
Crane & van der Hulst 1992). The contour levels, the syn-
thesized beam, and the cross are the same as in Fig. 1.
Crane & van der Hulst 1992), but the momentum is much
larger than that of the jet (2 × 10⁴¹ g cm s−1). One way
to explain this discrepancy is through a continuous input
of momentum from the jet (see Matsushita et al. 2004, for
more detailed discussion).
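As a back-of-the-envelope check of the quoted momentum and energy (our own arithmetic, not the authors' calculation): adopting the 6 × 10⁵ M⊙ gas mass and a representative line-of-sight velocity offset of ~70 km s−1 for the entrained component (an assumed value; the exact velocity adopted is not stated here) reproduces numbers of the quoted magnitude.

```python
M_SUN_G = 1.989e33          # g
KM_S = 1.0e5                # cm/s

m_gas = 6e5 * M_SUN_G       # entrained molecular gas mass from the text
v_los = 70.0 * KM_S         # assumed representative velocity offset (our choice)

momentum = m_gas * v_los            # ~8e45 g cm/s
energy = 0.5 * m_gas * v_los**2     # ~3e52 erg
print(f"p ~ {momentum:.1e} g cm/s, E ~ {energy:.1e} erg")
```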
4.2. Obscuring material around the Seyfert 2 nucleus
The feature C is located in front of the Seyfert 2 nu-
cleus, and the CO(2-1) intensity is about 62.0 K km s−1
(Fig. 2a). Hence the column density can be calculated as
6.2 × 1021 cm−2 using a CO-to-H2 conversion factor of
6.2 × 10²¹ cm−2 using a CO-to-H2 conversion factor of 1.0 × 10²⁰ cm−2 (K km s−1)−1 (Matsushita et al. 2004)
far lower than that derived from the X-ray absorption of
5.6× 1024 cm−2 (Fukazawa et al. 2001). As is mentioned in
Sect.3, the missing flux of our data is ∼ 75%. However, even
if all of this missing flux contributes to obscuring the nu-
clear emission, this large column density difference cannot
be explained. Changing the conversion factor or the ratio
by an order of magnitude also cannot explain this large
difference. One way to reconcile this disparity is to assume
that C is not spatially resolved, in which case the computed
column density is a lower limit. Alternatively, the obscur-
ing material preferentially traced by higher-J CO lines or
denser molecular gas tracers such as HCN may be involved.
The CO(3-2) intensity in brightness temperature scale is
∼ 2 times stronger than that of CO(1-0) (Matsushita et al.
2004), and the HCN(1-0) intensity is also relatively stronger
(HCN/CO ∼ 0.4; Kohno et al. 1996) than normal galaxies.
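The column density quoted above follows directly from the line intensity and the adopted conversion factor; a small worked version (values from the text; the code itself is only illustrative):

```python
# N(H2) from the CO intensity toward feature C, using the conversion factor
# X_CO = 1.0e20 cm^-2 (K km/s)^-1 and a CO(2-1)/CO(1-0) ratio of unity.
I_co21 = 62.0            # K km/s, integrated CO(2-1) intensity of feature C
ratio_21_10 = 1.0        # assumed CO(2-1)/CO(1-0) brightness ratio
X_co = 1.0e20            # cm^-2 (K km/s)^-1

N_H2 = X_co * I_co21 / ratio_21_10
print(f"N(H2) ~ {N_H2:.1e} cm^-2")   # ~6.2e21, vs ~5.6e24 from X-ray absorption
```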
4.3. Molecular gas at ten pc scale from the Seyfert nucleus
Previous studies suggest that the blue shifted eastern fea-
ture S2 and the red shifted western feature S1 may be the
outer part of a rotating disk as in the AGN unified model.
However, our images show a more complicated nature, and
no clear evidence of simple disk characteristics.
The simplest interpretation is that S1 and S2/C are in-
dependent structures. Since S1 is affected by the jet but
S2/C is not, S1 is expected to be located closer to the nu-
cleus than S2/C, and the projection effect makes the po-
sition of S2/C closer in our images. Alternatively, S2/C
may be close to the nucleus, but the entrained gas has already been swept away or ionized by the jet. S2/C has
a velocity gradient, and therefore can be interpreted as a
streaming gas, presumably infalling toward the nucleus, as
is observed in the Galactic Center (Lo & Claussen 1983;
Ho et al. 1991).
S1 and S2/C can also be interpreted as a rotating disk
that is largely disturbed by the jet, and only a part remains.
According to the velocity gradient along S2/C, blueshifted gas is expected at S1, which is the opposite sense to
the previous suggestion, but the gas shows no signs of it due
to the jet entrainment. This is possible from the timescale
point of view; under this interpretation, S1 should have
a blueshifted rotation velocity of ∼ 380 km s−1 based on
the velocity gradient in S2/C. S1 has a velocity ∼ 150 km
s−1 higher than the expected rotational velocity, and we
assume that this is the entrained velocity. In this case, it
takes 2 × 10⁵ years to be elongated along the jet by ∼ 1′′ or 34 pc. On the other hand, the rotation timescale at this radius is about 2 × 10⁶ years, an order of magnitude longer
timescale. The rotating disk can therefore be locally dis-
turbed by the jet.
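Both timescales can be reproduced from the quoted numbers (a back-of-the-envelope sketch; the ~100 km s−1 circular speed is our inference from the quoted 2 × 10⁶ yr rotation time, not a value stated in the paper):

```python
import math

PC_CM, KM_S, YR_S = 3.086e18, 1.0e5, 3.156e7

r = 34.0 * PC_CM                              # extent of S1 along the jet
t_entrain = r / (150.0 * KM_S) / YR_S         # ~2e5 yr for the 150 km/s offset
v_rot = 100.0 * KM_S                          # inferred circular speed (our estimate)
t_rot = 2.0 * math.pi * r / v_rot / YR_S      # ~2e6 yr orbital period at 34 pc
print(f"entrainment ~ {t_entrain:.1e} yr, rotation ~ {t_rot:.1e} yr")
```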
However, the above two explanations have difficulty in
explaining optical images of the nucleus; the Hubble Space
Telescope images show “X” shaped dark lanes in front of the
nucleus (Grillmair et al. 1997), suggesting the existence of a
warped disk or two rings with one tilted far from the other.
An alternative explanation of the dark lanes is that, as
previously proposed, there is a rotating edge-on ring with
S2 as blueshifted gas and S1a as redshifted gas. In this case,
the feature C can be the counterpart of another dark lane,
which runs northeast-southwest, although C has to be a
counter-rotating or Keplerian-rotating disk to explain the
opposite sense of the velocity gradient to that of the S1a/S2
(Sect. 3). This configuration explains the “X” shape, but
has a rather complicated configuration, and it is difficult to
explain why the inner disk C is not disturbed by the jet.
We imaged the nuclear region of the Seyfert 2 galaxy
M51 at ∼ 10 pc resolution, and we see no clear evidence of
a circumnuclear rotating molecular gas disk as previously
suggested. The molecular gas along the radio jet is most
likely entrained by the jet. The explanations for other gas
components are speculative, possibly involving a circumnu-
clear rotating disk or streaming gas.
Acknowledgements. We thank Arancha Castro-Carrizo and the
IRAM staff for the new A configuration observations. We also thank
the anonymous referee for helpful comments. IRAM is supported by
INSU/CNRS (France), MPG (Germany) and IGN (Spain). This work
is supported by the National Science Council (NSC) of Taiwan, NSC
95-2112-M-001-023.
References
Aalto, S., Hüttemeister, S., Scoville, N. Z., & Thaddeus, P. 1999, ApJ,
522, 165
Antonucci, R. R. J., & Miller, J. S. 1985, ApJ, 297, 621
Bradley, L. D., Kaiser, M. E., & Baan, W. A. 2004, ApJ, 603, 463
Crane, P. C., & van der Hulst, J. M. 1992, AJ, 103, 1146
Emonts, B. H. C., Morganti, R., Tadhunter, C. N., et al. 2005,
MNRAS, 362, 931
Fukazawa, Y., Iyomoto, N., Kubota, A., Matsumoto, Y., &
Makishima, K. 2001, A&A, 374, 73
García-Burillo, S., Combes, F., Hunt, L. K., et al. 2003, A&A, 407,
García-Burillo, S., Combes, F., Schinnerer, E., Boone, F., & Hunt, L.
K. 2005, A&A, 441, 1011
Grillmair, C. J., Faber, S. M., Lauer, T. R., et al. 1997, AJ, 113, 225
Hagiwara, Y. 2007, AJ, 133, 1176
Hagiwara, Y., Henkel, C., Menten, K. M., & Nakai, N. 2001, ApJ,
560, L37
Ho, P. T. P., Ho, L. C., Szczepanski, J. C., Jackson, J. M., Armstrong,
J. T. 1991, Nature, 350, 309
Jackson, J. M., Paglione, T. A. D., Ishizuki, S., & Nguyen-Q-Rieu
1993, ApJ, 418, L13
Kohno, K., Kawabe, R., Tosaki T., & Okumura S. K. 1996, ApJ, 461,
Lo, K. Y., Claussen, M. J. 1983, Nature, 306, 647
Matsushita, S., Sakamoto, K., Kuo, C.-Y., et al. 2004, ApJ, 616, L55
Planesas, P., Scoville, N., & Myers, S. T. 1991, ApJ, 369, 364
Sakamoto, K., Okumura, S. K., Ishizuki, S., & Scoville, N. Z. 1999,
ApJS, 124, 403
Schinnerer, E., Eckart, A., & Tacconi, L. J. 2000a, ApJ, 533, 826
Schinnerer, E., Eckart, A., Tacconi, L. J., Genzel, R., & Downes, D.
2000b, ApJ, 533, 850
Scoville, N. Z., Yun, M. S., Armus, L., & Ford, H. 1998, ApJ, 493,
Takáts, K., & Vinkó, J. 2006, MNRAS, 372, 1735
|
0704.0948 | Spectroscopy of Nine Cataclysmic Variable Stars | Spectroscopy of Nine Cataclysmic Variable Stars 1
Holly A. Sheets, John R. Thorstensen, Christopher J. Peters, and Ann B. Kapusta
Department of Physics and Astronomy
6127 Wilder Laboratory, Dartmouth College
Hanover, NH 03755-3528;
[email protected]
Cynthia J. Taylor
The Lawrenceville School
P.O. Box 6008, Lawrenceville, NJ 08648
ABSTRACT
We present optical spectroscopy of nine cataclysmic binary stars, mostly dwarf no-
vae, obtained primarily to determine orbital periods Porb. The stars and their periods
are LX And, 0.1509743(5) d; CZ Aql, 0.2005(6) d; LU Cam, 0.1499686(4) d; GZ Cnc,
0.0881(4) d; V632 Cyg, 0.06377(8) d; V1006 Cyg, 0.09903(9) d; BF Eri, 0.2708804(4)
d; BI Ori, 0.1915(5) d; and FO Per, for which Porb is either 0.1467(4) or 0.1719(5) d.
Several of the stars proved to be especially interesting. In BF Eri, we detect the
absorption spectrum of a secondary star of spectral type K3 ±1 subclass, which leads
to a distance estimate of ∼ 1 kpc. However, BF Eri has a large proper motion (∼ 100
mas yr−1), and we have a preliminary parallax measurement that confirms the large
proper motion and yields only an upper limit for the parallax. BF Eri’s space velocity
is evidently large, and it appears to belong to the halo population. In CZ Aql, the
emission lines have strong wings that move with large velocity amplitude, suggesting
a magnetically-channeled accretion flow. The orbital period of V1006 Cyg places it
squarely within the 2- to 3-hour ‘gap’ in the distribution of cataclysmic binary orbital
periods.
Subject headings: novae, cataclysmic variables — stars: individual (LX And, CZ Aql,
LU Cam, GZ Cnc, V632 Cyg, V1006 Cyg, BF Eri, BI Ori, FO Per) — stars: distances
— binaries: close — binaries: spectroscopic
1Based on observations obtained at the MDM Observatory, operated by Dartmouth College, Columbia University,
Ohio State University, Ohio University, and the University of Michigan.
http://arxiv.org/abs/0704.0948v1
1. Introduction
Cataclysmic variables (CVs) are binary star systems in which the secondary, usually a late-
type main sequence star, fills its Roche lobe and loses mass to the white dwarf primary (Warner
1995). CVs are long-lived systems that are stable against mass transfer, so the mass transfer must
be driven by gradual changes in the orbit, or in the secondary star, or both. It is commonly
believed that the evolution of most CVs is driven by the slow loss of angular momentum from the
orbit, most likely through magnetic braking of the co-rotating secondary star, at least at longer
orbital periods Porb where gravitational radiation is ineffective (Andronov & Pinsonneault 2004
give a recent discussion). The loss of angular momentum constricts the Roche critical lobe around
the secondary and causes the system to transfer mass as it evolves toward shorter Porb. In this
scenario, Porb serves as a proxy measurement for the system’s evolutionary state. Correct and
complete orbital period measurements are fundamental to any accurate theory of CV evolution.
Given the usefulness of Porb, it is fortunate that it can usually be measured accurately and precisely.
This paper presents optical spectroscopy of the nine CVs listed in Table 1. We took these
observations mostly for the purpose of finding orbital periods using radial velocities (none of these
systems are known to eclipse). The long cumulative exposures also allowed us to look for any
unusual features. The Catalog and Atlas of Cataclysmic Variables Archival Edition (Downes et al.
2001) 2 lists seven of the stars as dwarf novae, one as either a dwarf nova or a DQ Her star, and one
simply as a cataclysmic, possibly a dwarf nova similar to U Gem or SS Cygni (type UGSS). Except
for CZ Aql, for which we confirm a 4.8-hour candidate period suggested by Cieslinski et al. (1998),
all of these objects lacked published orbital periods when we began working on them. Subsequently
Tappert & Bianchini (2003) found Porb = 0.0883 d for GZ Cancri; we had communicated our
advance findings to these authors so they could disambiguate their period determination.
2. Observations, Reductions, and Analysis
2.1. Observations
All our spectra were taken at the MDM Observatory on Kitt Peak, Arizona, using either the
1.3m McGraw-Hill telescope or the 2.4m Hiltner telescope. The earliest observations we report
here are from 1995, and the latest were obtained 2007 January. Table 2 gives a journal of the
observations.
At the 1.3m we used the Mark III spectrograph and a SITe 1024 × 1024 CCD detector. The
spectral resolution is 5.0 Å, covering a range of either 4480 to 6760 Å with 2.2 Å pixel−1 for the
2001 December BF Eri data, or 4646 to 6970 Å with 2.3 Å pixel−1 for the remaining data. The
2Available at http://archive.stsci.edu/prepds/cvcat/index.html; this had been called the Living Edition until its
author retired and ceased updates.
http://archive.stsci.edu/prepds/cvcat/index.html
2.4m spectra, except for those of FO Per, were obtained with the modular spectrograph and a
SITe 2048² CCD detector, with 2.0 Å pixel−1, over a range of 4210 to 7500 Å and with a spectral
resolution of 3.5 Å. The relatively small number of 2.4 m spectra of FO Per were taken with a
LORAL 2048²-pixel detector, and cover from 4285 to 6870 Å at 1.25 Å pixel−1.
2.2. Reductions
For the most part we reduced the spectra using standard IRAF3 procedures. The wavelength
calibration was based on exposures of Hg, Ne, and Xe lamps. Prior to 2003 we took lamp exposures
through the night and whenever the telescope was moved. For the 2.4m data from 2003 to the
present, we used lamp exposures taken in twilight to find the shape of the pixel-to-wavelength
relation, and set the zero point individually for each nighttime exposure using the OI λ5577 night-
sky feature. The apparent velocity of the telluric OH emission bands at the far red end of the
spectrum, found with a cross-correlation routine, provided a check; although these are far from
the feature used to set the zero point, their apparent velocities typically remain within 10 km s−1
of zero. Because of the increased efficiency of this technique, we attempted to use it at the 1.3m
telescope also, during the 2004 June/July observing run. For unknown reasons the results were
unsatisfactory. To salvage the Hα emission velocities from that run, we determined a correction by
cross-correlating the night-sky emission features in the 6200-6625 Å range with a well-calibrated
night-sky spectrum obtained with a similar instrument. The correction was calculated for each
individual spectrum and then applied to each measured velocity, and it did reduce the scatter
somewhat, evidently because the wavelength range used includes the Hα emission line for which
we measured velocities.
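The idea of this correction can be pictured with a short numerical sketch: find the pixel lag that maximizes the cross-correlation of continuum-subtracted sky features against a reference, then convert that lag to a velocity. The code below is only an illustration with made-up data and parameters, not part of the actual reduction pipeline.

    import numpy as np

    C_KMS = 2.99792458e5  # speed of light, km/s

    def sky_velocity_offset(loglam, sky, ref, max_shift=20):
        # Both spectra must be continuum-subtracted and sampled on the same
        # uniform grid in log10(wavelength), so a pixel shift is a velocity shift.
        dlog = loglam[1] - loglam[0]                 # step in log10(Angstrom)
        kms_per_pix = C_KMS * np.log(10.0) * dlog    # velocity per pixel
        sky = sky - sky.mean()
        ref = ref - ref.mean()
        shifts = np.arange(-max_shift, max_shift + 1)
        cc = [np.dot(np.roll(sky, -s), ref) for s in shifts]
        return shifts[int(np.argmax(cc))] * kms_per_pix

    # Synthetic check: a fake sky emission feature shifted by two pixels.
    loglam = np.linspace(np.log10(6200.0), np.log10(6625.0), 512)
    line = np.exp(-0.5 * ((np.arange(512) - 256) / 3.0) ** 2)
    print(sky_velocity_offset(loglam, np.roll(line, 2), line))  # about two pixels' worth of velocity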
On all our runs, we observed flux standards during twilight when the sky was clear, and applied
the resulting calibration to the data. The reproducibility of these observations suggests that our
fluxes are typically accurate to ±20 per cent. We also took short exposures of bright O and B
stars in twilight to map the telluric absorption features and divide them out approximately from
our program object spectra. Before flux calibration, we divided our program star spectra by a
mean hot-star continuum, in order to remove the bulk of the response variation. Table 1 lists
V magnitudes synthesized from our mean spectra, using the IRAF sbands task and the passband
tabulated by Bessell (1990); clouds, losses at the slit, and calibration errors make these uncertain
by a few tenths of a magnitude, but they do give a rough indication of the brightness of each system
at the time of our observation.
3IRAF is distributed by the National Optical Astronomy Observatories.
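A synthesized magnitude of this kind is essentially a passband-weighted mean flux converted to a magnitude. The sketch below illustrates the operation; the triangular passband and the −21.1 zero point are rough stand-ins for the Bessell (1990) curve and a Vega-based calibration, not the sbands implementation used for Table 1.

    import numpy as np

    def synthetic_v(wave, flux, band_wave, band_resp, zero_point=-21.10):
        # wave, flux : spectrum wavelength (Angstrom) and f_lambda (erg/cm^2/s/A)
        # band_wave, band_resp : passband wavelengths and response values
        # zero_point : approximate Vega-based V zero point for f_lambda units
        resp = np.interp(wave, band_wave, band_resp, left=0.0, right=0.0)
        mean_flam = np.trapz(flux * resp * wave, wave) / np.trapz(resp * wave, wave)
        return -2.5 * np.log10(mean_flam) + zero_point

    # Crude triangular stand-in for a V passband centered near 5450 A.
    band_wave = np.array([4800.0, 5450.0, 6200.0])
    band_resp = np.array([0.0, 1.0, 0.0])
    wave = np.linspace(4500.0, 7000.0, 2500)
    flux = np.full_like(wave, 3.63e-9)   # roughly the flux level of a V = 0 source
    print(synthetic_v(wave, flux, band_wave, band_resp))   # close to 0.0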
2.3. Analysis
Except for a few spectra taken in outburst (which show weak emission or absorption on a
strong continuum), all of the stars show prominent emission lines. Figs. 1 and 2 show averaged
spectra, and Table 3 gives the equivalent width and FWHM of each line measured for each star
from its averaged spectrum.
Two stars, BF Eri and BI Ori, showed the spectral features of a late-type star. To quantify the
secondary contribution in these objects, we began by preparing averaged flux-calibrated spectra
(in BF Eri’s case the secondary’s radial velocity curve was measurable, so we shifted the individual
spectra to the secondary’s rest frame before averaging). Over time we have used the 2.4 m and
modular spectrograph to collect spectra of K and M stars classified by Keenan & McNeil (1989) or
Boeshaar (1976). The wavelength coverage and spectral resolution of these data are similar to the
1.3m data. We applied a range of scaling factors to the library spectra, subtracted them from the
averaged spectra, and examined the results by eye to estimate a range of spectral types and scaling
factors giving acceptable cancellation of the late-type features.
We use the spectral type and secondary flux to estimate the distance in the following manner.
We begin by finding the surface brightness of the secondary star in V , on the assumption that the
surface brightness is similar to that of main-sequence stars of the same spectral type; the Barnes-
Evans relation for late-type stars is discussed by Beuermann (2006). Combining the known Porb
with the assumption that the secondary fills its Roche critical lobe yields the secondary’s radius
R2 as a function of its mass, M2. In the relevant range of mass ratio, R2 ∝ M2^{1/3}, approximately,
and the dependence on M1 is weak enough to ignore. We generally do not know M2, so we guess
at a generous allowable range for this parameter using evolutionary simulations by Baraffe & Kolb
(2000) as a guideline; the weakness of the dependence of R2 on M2 means that this (rather ques-
tionable) step does not dominate the error budget. Combining the surface brightness with R2
yields the absolute magnitude MV . Subtracting this from the apparent magnitude measured for
the secondary star gives a distance modulus. The reddening maps of Schlegel et al. (1998) then
can be used to estimate the extinction. Note carefully that we do not assume that the secondary
is a ‘normal’ main-sequence star; we assume only that the secondary’s surface brightness is similar
to field stars of the same spectral type. The normalization of the secondary’s contribution also
depends on the assumption that the spectral features used to judge the subtraction are similar in
strength to those of a normal star.
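A minimal sketch of this chain of estimates follows, assuming the Paczyński approximation for the Roche-lobe radius and a small illustrative table of main-sequence absolute magnitudes and radii; the tabulated values, helper names, and example inputs are stand-ins for illustration, not numbers taken from the references above.

    import numpy as np

    # Illustrative main-sequence (M_V, R/R_sun) values per spectral type.
    MS_TABLE = {"K3": (6.9, 0.73), "M2.5": (10.6, 0.37)}

    def roche_radius(m2_msun, porb_days):
        # Radius (R_sun) of a Roche-lobe-filling secondary, Paczynski approximation:
        # R2 ~ 0.462 a (M2/M)^(1/3) with Kepler's law gives R2 ~ 1.94 M2^(1/3) P^(2/3)
        # in solar masses, days, and solar radii.
        return 1.94 * m2_msun ** (1.0 / 3.0) * porb_days ** (2.0 / 3.0)

    def secondary_distance(v_sec, sp_type, m2_msun, porb_days, ebv=0.0, r_v=3.3):
        # Distance (pc) from the secondary's apparent V magnitude, assuming only
        # that its surface brightness matches a main-sequence star of the same type.
        mv_ms, r_ms = MS_TABLE[sp_type]
        r2 = roche_radius(m2_msun, porb_days)
        mv = mv_ms - 5.0 * np.log10(r2 / r_ms)      # scale M_V with the radius ratio
        dist_mod = v_sec - mv - r_v * ebv           # extinction-corrected modulus
        return 10.0 ** (dist_mod / 5.0 + 1.0), r2, mv

    # Inputs in the spirit of the BF Eri discussion below (illustrative only).
    print(secondary_distance(16.9, "K3", 0.6, 0.2709, ebv=0.062))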
As noted earlier, the immediate aim of our observations was to find orbital periods from radial
velocity time series data. The Hα emission line is usually the strongest feature, and it generally
gives good results in dwarf novae. All the emission-line velocities reported here are of Hα.
We measured radial velocities of Hα emission using convolution methods described by Schneider & Young
(1980) and Shafter (1983). In this technique one convolves an antisymmetric function with the line
profile, and takes the zero of the convolution (where the two sides of the line contribute equally)
as the line center. For the antisymmetric function with which the spectrum is convolved, we used
either the derivative of a Gaussian with adjustable width, or positive and negative Gaussians of
adjustable width offset from each other by an adjustable separation. Uncertainties in the convolu-
tion velocities are estimated by propagating forward the counting-statistics errors in the individual
data channels; in practice, these are lower limits to the true uncertainties, since the line profile
can vary in ways unrelated to the orbital modulation. The choice of convolution parameters is
dictated by the shape and width of the line, and in practice the parameters are adjusted to give
the best detection of the orbit. The physical interpretation of CV emission lines is complicated
and controversial (see, e.g., Shafter 1983, Marsh 1988, Robinson 1992), but in almost all cases the
emission-line periodicity accurately reflects Porb (though Araujo-Betancor et al. 2005 describe a
noteworthy exception to this rule). A sample of the radial velocities for each object is listed in
Table 4, while the full tables can be found online.
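The double-Gaussian measurement can be sketched as follows: convolve the line profile with offset positive and negative Gaussians at a series of trial centers and take the zero crossing of the convolution as the line center. The code below is a schematic re-implementation with arbitrary kernel parameters, not the program actually used.

    import numpy as np

    C_KMS = 2.99792458e5

    def gauss2_convolution(wave, flux, center, sep, fwhm):
        # Convolution of the profile with +/- Gaussians offset by `sep` about `center`.
        sigma = fwhm / 2.3548
        plus = np.exp(-0.5 * ((wave - (center + sep / 2.0)) / sigma) ** 2)
        minus = np.exp(-0.5 * ((wave - (center - sep / 2.0)) / sigma) ** 2)
        return np.trapz(flux * (plus - minus), wave)

    def line_velocity(wave, flux, rest_wave, sep=21.0, fwhm=7.0, window=40.0):
        # Radial velocity (km/s) from the zero crossing of the convolution.
        centers = np.linspace(rest_wave - window, rest_wave + window, 801)
        conv = np.array([gauss2_convolution(wave, flux, c, sep, fwhm) for c in centers])
        crossings = np.where(np.diff(np.sign(conv)) != 0)[0]
        i = crossings[np.argmin(np.abs(centers[crossings] - rest_wave))]
        # Linear interpolation between the two bracketing trial centers.
        c0, c1, y0, y1 = centers[i], centers[i + 1], conv[i], conv[i + 1]
        zero = c0 - y0 * (c1 - c0) / (y1 - y0)
        return C_KMS * (zero - rest_wave) / rest_wave

    # Synthetic emission line redshifted by 200 km/s for a quick check.
    wave = np.linspace(6500.0, 6620.0, 1200)
    true_center = 6562.76 * (1.0 + 200.0 / C_KMS)
    flux = np.exp(-0.5 * ((wave - true_center) / 8.0) ** 2)
    print(line_velocity(wave, flux, 6562.76))   # close to 200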
One of our systems, BF Eri, has a K-type absorption component in its spectrum. We measured
velocities of this using the cross-correlation radial velocity package described by Kurtz & Mink
(1998), using the region from 5000 to 6500 Å, and excluding the region containing the He I λ5876
emission line and the NaD absorption complex. For a cross-correlation template spectrum, we
used a velocity-compensated sum of many observations of IAU velocity standards taken with
the same instrument, as described in Thorstensen et al. (2004).
We searched for periods in all the velocity time series using the “residualgram” method
(Thorstensen et al. 1996); the resulting periodograms are given in Figs. 3 and 4. At the best
candidate periods we fitted least-squares sinusoids of the form v(t) = γ + K sin[2π(t − T0)/P ].
Fig. 5 shows the velocities folded on the best-fitting periods, and Table 5 gives the parameters of
these fits. Because of limitations of the sampling (e.g., the need to observe only at night from a
single site), a single periodicity generally manifests as a number of alias frequencies. To assess the
confidence with which we could assert that the strongest alias is the true period, we used a Monte
Carlo test described by Thorstensen & Freed (1985).
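The structure of such a search is simple to sketch: at each trial frequency, fit v = γ + A sin(2πft) + B cos(2πft) by weighted linear least squares and record 1/χ². The following illustration conveys only that structure and is not a reproduction of the Thorstensen et al. (1996) implementation.

    import numpy as np

    def sinusoid_periodogram(t, v, sigma, freqs):
        # Return 1/chi^2 of the best-fit sinusoid at each trial frequency (cycles/day).
        inv_chi2 = np.empty_like(freqs)
        w = 1.0 / sigma ** 2
        for k, f in enumerate(freqs):
            phase = 2.0 * np.pi * f * t
            # Design matrix for the linear parameters (gamma, A, B).
            X = np.column_stack([np.ones_like(t), np.sin(phase), np.cos(phase)])
            coef, *_ = np.linalg.lstsq(X * np.sqrt(w)[:, None],
                                       v * np.sqrt(w), rcond=None)
            resid = v - X @ coef
            inv_chi2[k] = 1.0 / np.sum(w * resid ** 2)
        return inv_chi2

    # Synthetic test: a 0.15 d period sampled on three consecutive nights.
    rng = np.random.default_rng(1)
    t = np.concatenate([n + rng.uniform(0.0, 0.3, 15) for n in range(3)])
    v = 10.0 + 80.0 * np.sin(2.0 * np.pi * t / 0.15) + rng.normal(0.0, 15.0, t.size)
    freqs = np.linspace(2.0, 12.0, 4000)
    pg = sinusoid_periodogram(t, v, np.full(t.size, 15.0), freqs)
    print(1.0 / freqs[np.argmax(pg)])   # best period; should be near 0.15 d (up to aliases)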
The alias problem can be particularly irksome over longer timescales; in this case the uncertain
number of cycles elapsed between observing runs causes fine-scale “ringing” in the periodogram.
The individual periods have tiny error bars, because of the large time span covered, but the am-
biguity in period means that a realistic error bar – one that covers the range of possibilities – is
much larger. In those cases, the period uncertainties given in Table 5 are estimated by analyzing
data from the individual observing runs separately. When only two observing runs are available,
the allowable fine-scale frequencies are well-described by a fitting formula
Porb = (t2 − t1)/n.
Here t1 and t2 are the epochs of blue-to-red velocity crossing observed on the two runs, and n
is the integer number of cycles that have passed between t1 and t2. The allowed range of n is
determined from the weighted average of the periods derived from separate fits to the two runs’
data. When more than two observing runs are available, the situation becomes more complex. In
some happy cases there are enough overlapping constraints that only a single, very precise period
remains tenable. We were able to find such precise periods for LX And, LU Cam, and BF Eri.
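The bookkeeping behind this formula is easy to make concrete. The sketch below, with made-up epochs, lists the finely spaced periods P = (t2 − t1)/n that remain consistent with a run-based period estimate and its uncertainty.

    import numpy as np

    def allowed_periods(t1, t2, p_run, p_err, nsigma=3.0):
        # t1, t2 : epochs of blue-to-red crossing from the two runs (days)
        # p_run  : weighted-average period from the separate runs (days)
        # p_err  : its 1-sigma uncertainty (days)
        span = t2 - t1
        n_lo = int(np.floor(span / (p_run + nsigma * p_err)))
        n_hi = int(np.ceil(span / (p_run - nsigma * p_err)))
        n = np.arange(n_lo, n_hi + 1)
        return n, span / n

    # Made-up example: two runs ~350 d apart, per-run period 0.0882(2) d.
    n, periods = allowed_periods(52000.0, 52349.785, 0.0882, 0.0002)
    print(len(n), periods.min(), periods.max())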
3. Notes on Individual Objects
We discuss the stars in alphabetical order by constellation.
3.1. LX Andromedae
LX And was first identified as a variable star (RR V-3) in the Lick RR Lyrae search (Kinman et al.
1982). It was classified incorrectly as an RV Tauri star, and its dwarf nova nature was unrecognized
until the photometric study by Uemura et al. (2000). Morales-Rueda & Marsh (2002) obtained
spectra of LX And as part of their study of dwarf novae in outburst and determined the equivalent
widths and FWHMs of the Balmer and He II lines. Our mean spectrum appears typical for a dwarf
nova at minimum light.
Because of the large hour-angle span, the radial velocity time series leaves no doubt about
the daily cycle count, which is near 6.6 cycle d−1. The several observing runs constrain the fine-
scale period in a more complicated way, but the Monte Carlo test indicates that a precise period
of 0.1509743(5) d is preferred with about 98 per cent confidence. Two other candidate periods
separated from this by 1 cycle per 53.2 d in frequency are much less likely.
3.2. CZ Aquilae
Very little has been published on CZ Aql, which is listed in the Archival Edition as a U-
Gem dwarf nova. Cieslinski et al. (1998) included the star in their spectroscopic study of irregular
variables, and noted a probable 4.8 hour period and emission lines typical of dwarf novae. Our
velocities confirm the suggested 4.8-hour period, but we cannot determine a unique cycle count
between our observing runs.
While the spectrum superficially resembles that of a dwarf nova, a closer look reveals interesting
behavior. Fig. 6, constructed using methods described by Taylor et al. (1999), presents our spectra
as a phase-averaged greyscale image. There is a striking broad component in the stronger Balmer
and HeI lines that shows a large velocity excursion, with the red wing of Hα reaching to +3100 km
s−1 at phase 0.3 (where phase 0 corresponds to the blue-to-red crossing of the line core). The broad
components around Hβ and λ6678 move in phase with those of Hα and range from 900 to 2600
and −2500 to −900 km s−1 and 700 to 2100 and −1000 to −600 km s−1, respectively. The wings
of λ5876 also move in phase with the others, but the red edge is difficult to follow at its minimum
because of interference from the NaD absorption lines, which are stationary and hence interstellar.
The maximum of the red edge is 3200 km s−1, while the blue edge ranges from −1500 to −600 km
s−1. The blueward wing of all these lines is noticeably weaker than the redward wing.
Other emission lines present include HeII λ4686, HeI λ4713 and λ4921, and, very weakly, FeII
λ5169. We also detect unidentified emission lines at λ6344, as is also seen in LS Peg (Taylor et al.
1999), and at λ5046. The strength of the λ5780 diffuse interstellar band (Jenniskens & Desert
1994) and the NaD lines suggest that a good deal of interstellar material lies along the line of sight,
and that the luminosity is relatively high.
High-velocity wings reminiscent of the ones seen here have been seen in V795 Her (Casares et al.
1996; Dickinson et al. 1997), LS Peg (Taylor et al. 1999), V533 Her (Thorstensen & Taylor 2000),
and RX J1643+34 (Patterson et al. 2002), all of which are SW Sex stars. We do not, how-
ever, detect another SW Sex characteristic, namely phase-dependent absorption in the HeI lines
(Thorstensen et al. 1991). The orbital periods of most SW Sex stars are shorter than 4 hours, so
CZ Aql’s 4.8-h period would be unusually long for an SW Sex star.
3.3. LU Camelopardalis
Jiang et al. (2000) obtained the first spectrum of this dwarf nova in a follow-up study of CV
candidates from the ROSAT All Sky Survey. We found no other published spectroscopic studies.
Our velocities constrain the period to a unique value, 0.1499685(7) d. The averaged spectrum
shows a rather strong, blue continuum, which may indicate a state somewhat above true minimum.
3.4. GZ Cancri
Jiang et al. (2000) confirmed the cataclysmic nature of GZ Cnc by obtaining the first spectrum
of the object. Kato et al. (2002) suggested that this star, originally labeled as a dwarf nova, could
possibly be an intermediate polar (DQ Her star), based on similarities in its long-term photometric
behavior to that of other intermediate polars. Tappert & Bianchini (2003) conducted a photometric
and spectroscopic study of the system. Using advance results from the present study to help decide
the daily cycle count, they found Porb = 0.08825(28) d, or 2.118(07) h, placing the system near
the lower edge of the so-called gap in the CV period distribution – a dearth of systems in the
period range from roughly 2 to 3 hr. Tappert & Bianchini (2003) also saw characteristics that
could indicate an intermediate polar classification, but did not claim their evidence was definitive
on this point.
Almost all our observations come from two observing runs a year apart. The full set of
velocities strongly indicates an orbital frequency near 11.4 cycle d−1, with the Monte Carlo test
giving a discriminatory power greater than 0.99 for the choice of daily cycle count. However, the
number of cycles between the two observing runs is not determined. Precise periods that fit the
combined data set are given by P = [349.785(3) d]/n, where n is the integer number of cycle counts;
n = 3972± 8 corresponds to roughly 1 standard deviation. While our period agrees well with that
of Tappert & Bianchini (2003), our data neither support nor disprove the claim that GZ Cnc may
be an intermediate polar.
3.5. V632 Cygni
Liu et al. (1999) offer the only published spectrum of this dwarf nova. They measured the
equivalent widths and integrated line fluxes of the Balmer, HeI, and HeII emission lines and sug-
gested that the orbital period is likely short based on the very strong Balmer emission. Our spec-
trum appears similar to theirs, and our measured flux level is also nearly the same. The periodogram
in Fig. 3 clearly favors an orbital frequency near 15.7 cycles d−1, with a discriminatory power of 95
per cent and a correctness likelihood near unity. This confirms the suggestion of Liu et al. (1999)
that the period is rather short and suggests that it is an SU UMa-type dwarf nova.
3.6. V1006 Cygni
Bruch & Schimpke (1992) present the only published spectrum we know of, and characterized
it as a “textbook example” of a dwarf nova spectrum. They noted a slightly blue continuum with
strong Balmer and He I emission, as well as clear He II λ4686 and Fe II emission. Our spectrum
(Fig. 1) is similar to theirs both in appearance and normalization, and our line measurements
(Table 3) are also comparable.
The periodogram (Fig. 4) indicates a frequency near 10.1 cycles d−1, and the Monte Carlo test
confirms that the daily cycle count is securely determined. Most of our data are from 2004 June,
but we returned in 2005 June/July to confirm the unusual period indicated in the earlier data.
The periods found by analyzing the two runs separately are consistent within their uncertainties.
As with GZ Cnc, there are multiple choices for the cycle count between the two observing runs;
the best-fitting periods are given by P = [369.006(4) d]/n, where n = 3726 ± 4 corresponds to
1 standard deviation. Including a few velocities from other observing runs suggests that n is
slightly larger, perhaps 3728. In any case, the period amounts to 2.38 h, which places V1006 Cyg
firmly in the period gap (Warner 1995), where there is apparently a true scarcity of dwarf novae
(Hellier & Naylor 1998).
3.7. BF Eri
The first evidence that BF Eridani was a cataclysmic variable came when an Einstein X-
ray source, 1ES0437-046, was matched to the variable (Elvis et al. 1992). Schachter et al. (1996)
confirmed this match and presented an optical spectrum. Kato (1999) and the Variable Star
Observers’ League in Japan (VSOLJ) found photometric variability characteristic of a dwarf nova.
The spectrum of BF Eri (Fig. 2) shows a significant contribution from a K star along with the
usual dwarf-nova emission lines. Normally, this suggests that Porb > 6 h. Nearly all our spectra
yielded good cross-correlation radial velocity measurements as well as emission-line velocities. The
absorption- and emission-line velocities independently give a period near 6.50 h (Table 5), in ac-
cordance with expectation based on the spectrum. There is no ambiguity in cycle count over the
5-year span of the observations, so the period is precise to a few parts per million. Fig. 6 shows a
phase-resolved average of the BF Eri spectra, with the absorption spectrum shifting in antiphase
to the emission lines.
If the emission-line velocities faithfully trace the primary’s center-of-mass motion, and the
absorption-line velocities also trace the secondary’s motion, then the two velocity curves should be
exactly one-half cycle out of phase. In BF Eri, we find a shift of 0.515 ± 0.007 cycles between the
two curves, consistent with 0.5 cycles, so we feel emboldened to explore the system dynamics.
Masses can only be derived when the orbital inclination is known, as in eclipsing systems. To
see if BF Eri might eclipse, we derived differential magnitudes from images that were taken for
astrometry (discussed below) and plotted them as a function of orbital phase. Some images were
taken at the phase at which an eclipse would appear, but no evidence for an eclipse was found.
Limits on the depth and duration of the eclipse are difficult to quantify because the data were taken
in short bursts in the presence of strong intrinsic variability, so a weak eclipse cannot be ruled out,
but the photometry does suggest that the inclination is not close to edge-on.
Because the system apparently does not eclipse, we cannot derive masses; rather, we find broad
constraints on the inclination by assuming astrophysically reasonable masses for the components.
Taken at face value, the velocity amplitudes K imply a mass ratio q = M2/M1 = 0.60 ± 0.03. If
we arbitrarily choose a white dwarf mass M1 = 0.9 M⊙ (so that M2 = 0.53 M⊙), the observed K
velocities imply i = 50 degrees. To find a rough lower limit on the inclination, we consider a massive
white dwarf (M1 = 1.2 M⊙) and, ignoring the constraint on q for the moment, take M2 = 0.4 M⊙;
this yields i = 40 degrees. For a rough upper limit, we assume M1 = 0.6 M⊙ and M2 = 0.4 M⊙,
which gives i = 67 degrees.
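These inclination estimates follow from the standard mass-function relation K2 = (2πG/P)^{1/3} M1 sin i / (M1 + M2)^{2/3}, applied to the secondary's semi-amplitude for assumed component masses. The sketch below illustrates that arithmetic; it reproduces the inclinations quoted above only to within a degree or two of rounding.

    import numpy as np

    G = 6.674e-11       # m^3 kg^-1 s^-2
    MSUN = 1.989e30     # kg
    DAY = 86400.0       # s

    def inclination_from_k2(k2_kms, m1_msun, m2_msun, porb_days):
        # Inclination (deg) implied by the secondary's velocity semi-amplitude,
        # via K2 = (2 pi G / P)**(1/3) * M1 * sin(i) / (M1 + M2)**(2/3).
        k2 = k2_kms * 1.0e3
        m1, m2 = m1_msun * MSUN, m2_msun * MSUN
        sini = k2 * (m1 + m2) ** (2.0 / 3.0) / (
            (2.0 * np.pi * G / (porb_days * DAY)) ** (1.0 / 3.0) * m1)
        return np.degrees(np.arcsin(sini))

    # BF Eri-like inputs: K2 = 182 km/s, P = 0.27088 d.
    for m1, m2 in [(0.9, 0.53), (1.2, 0.4), (0.6, 0.4)]:
        print(m1, m2, round(inclination_from_k2(182.0, m1, m2, 0.27088), 1))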
The decomposition procedure described earlier yielded a spectral type of K3 ±1 subclass; the
result of the subtraction is shown in Fig. 2. Using the V passband tabulated by Bessell (1990) and
the IRAF sbands task, we find a synthetic V = 16.9± 0.3 for the K star’s contribution. Taking the
range of plausible secondary star masses to be 0.4 to 0.8 M⊙ yields R2 = 0.7± 0.1 R⊙ at this Porb.
Combining this with the surface brightness expected at this spectral type yields MV = 6.8 ± 0.4
for the secondary. If there is no significant interstellar extinction, we have m − M = 10.1 ± 0.5,
or a distance of approximately 1100 ± 300 pc. The dust maps of Schlegel et al. (1998) give a total
E(B − V ) = 0.062 in this direction. Assuming that BF Eri is beyond the Galactic dust and taking
AV /E(B − V ) = 3.3 gives an extinction-corrected (m−M)0 = 9.9, and a distance estimate of 950
(+250,−200) pc.
We can also estimate a distance using the relation found by Warner (1987) between Porb, i,
and the absolute magnitude at maximum light MV (max). Using our inclination constraints, the
Warner relation predicts MV (max) = 3.9 ± 0.7 at this orbital period. The General Catalog of Variable
Stars (Kholopov et al. 1999) lists mp = 13.2 at maximum light; taking this to be similar to Vmax
yields m−M = 9.3, or 9.1 corrected for extinction, which corresponds to 660 pc.
Given these distance estimates, it is surprising that BF Eri has a very substantial proper
motion. The Lick proper motion survey (Hanson et al. 2004) gives [µX , µY ] = [+34,−97] mas
yr−1. We have begun a series of parallax observations with the Hiltner 2.4m telescope using the
protocols described by Thorstensen (2003); so far we have five epochs from 2005 November and 2007
January. The proper motion relative to the background stars is [µX , µY ] = [32,−111] mas yr
and the parallax is not detected, with a nominal value of 1 ± 2 mas. The parallax determination
is very preliminary, but given the data so far we estimate the lower limit on the distance based on
the astrometry alone to be ∼ 200 pc.
At the nominal 950 pc distance derived from the secondary star, a 100 mas yr−1 proper
motion corresponds to a transverse velocity vT = 451 km s−1. This is implausibly large, so we
are left wondering how we might have overestimated the distance. One effect might be as follows.
Our distance is based on the secondary’s apparent brightness, and we estimate the secondary’s
contribution to the total light by searching for the best cancellation of its features. If the secondary’s
absorption lines are weaker than those in the spectral-type standards, we would underestimate the
secondary’s contribution. In our best decomposition, the secondary is about 2.2 magnitudes fainter
than the total light in V . Assuming (unrealistically) that all the light is from the secondary would
therefore decrease the distance modulus by 2.2 magnitudes, to a distance of 340 pc.
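The tension follows from the standard relation vT = 4.74 μ d, with μ in arcsec yr−1 and d in pc; a few lines of arithmetic reproduce the numbers at issue (the function below is illustrative, not from the paper).

    def transverse_velocity(mu_mas_per_yr, distance_pc):
        # Transverse velocity in km/s for a proper motion (mas/yr) and distance (pc).
        return 4.74 * (mu_mas_per_yr / 1000.0) * distance_pc

    print(transverse_velocity(100.0, 950.0))   # about 450 km/s at 950 pc
    print(transverse_velocity(100.0, 450.0))   # about 213 km/s at a 450 pc compromise distance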
We do not yet have enough information to resolve the conundrum posed by BF Eri’s unmis-
takably large proper motion and its apparently large distance, but a reasonable compromise might
be to put it at something like 400-500 pc, with an underluminous, low-metallicity secondary. The
cross-correlation velocities of the secondary have a zero point determined to ±5 km s−1, more or
less, and give a substantial systemic velocity of −72±3 km s−1, or −86 km s−1 in the local standard
of rest. If the star is at 450 pc, its space velocity with respect to the local standard of rest is ∼250
km s−1, with Galactic components [U, V,W ] = [−180,−180,−3] km s−1, that is, the velocity is
mostly parallel to the Galactic plane and lags far behind the rotation of the Galactic disk. This
would put BF Eri on a highly eccentric orbit; these are halo-population kinematics (even though
the star remains close to the plane). These kinematics would be qualitatively consistent with the
weak-line conjecture used earlier.
3.8. BI Ori
Szkody (1987) published the first quiescent spectrum of BI Orionis, which showed emission
lines typical of dwarf novae. Morales-Rueda & Marsh (2002) show a spectrum in outburst and
note the possible presence of weak HeII λ4686 emission.
Only the 2006 January velocities are extensive enough for period finding; they give Porb =
4.6 hr, with no significant ambiguity in the daily cycle count. The average spectrum shows the
usual emission lines; M-dwarf absorption features are also visible, though the signal-to-noise of the
individual spectra was not adequate for finding absorption-line velocities. Using the procedures
described earlier, we estimate the secondary’s spectral type to be M2.5 ± 1.5, with the secondary
alone having V = 20.0 ± 0.4. Assuming that the secondary’s mass lies in the broad range from
0.2 to 0.6 M⊙, its radius at this Porb would be 0.35 to 0.6 R⊙. Combining this with the surface
brightness derived from the spectral types gives an absolute magnitude MV = 10.2 ± 1.0. The
distance modulus, uncorrected for extinction, is therefore m − M = +9.8 ± 1.1, corresponding
to 910(+600,−360) pc. Schlegel et al. (1998) estimate a total reddening E(B − V ) = 0.11 in this
direction; assuming that BI Ori lies beyond all the dust, and taking AV /E(B−V ) = 3.3 reduces the
distance to ∼ 770 pc. At maximum light, BI Ori has mp = 13.2 (Kholopov et al. 1999). Assuming
the color is neutral, we find MV = 3.4 ± 1.1 at maximum. At BI Ori’s period, the Warner (1987)
relation predicts MV > 3.6 (with the brightest value corresponding to i = 0). This agrees broadly
with our nominal value based on the secondary star’s distance, but is a little fainter, suggesting
that BI Ori is not too far from face-on, or a little closer than our nominal distance, or both.
3.9. FO Per
FO Persei was apparently discovered by Morgenroth (1939), but its cataclysmic nature was
not immediately recognized. Bruch (1989) obtained spectra and gave equivalent widths for the
Balmer lines for two different nights of observations, between which the continuum changed from
relatively flat to inclined toward the red.
The emission lines in FO Per are rather narrow (Fig. 1, Table 3). This is often taken to indicate
a low orbital inclination. The velocity amplitude K is small, so that K/σ ≈ 1.6 for the best fits
(Table 5). Because of this, the daily cycle count remains ambiguous; the orbital frequency is either
5.8 or 6.8 cycle d−1, corresponding to Porb of 3.52 or 4.13 hr. CVs with periods in the 3-4 hour
range tend to be novalike variables (Shafter 1992), whereas FO Per is a dwarf nova; thus the 4.13
hr period is more likely a priori.
4. Summary
We have determined the orbital periods of eight CVs without significant daily cycle count
ambiguity; for FO Per, the period is narrowed to two choices. For three of the systems we find
high-precision periods by establishing secure cycle counts over long baselines.
While most of these objects are similar to others already known, three stand out as especially
interesting. CZ Aql shows asymmetric, high-velocity wings around the Balmer and HeI λ5876 and
λ6678 lines, possibly indicating a magnetic system. BF Eri’s proper motion of ∼ 100 mas yr−1 is
surprising in view of the large distance indicated by its secondary spectrum and by the Warner
relation; even if it is somewhat nearer than these indicators suggest, its kinematics are not typical of
disk stars. Finally, the orbital period of V1006 Cyg places it squarely in the middle of the so-called
period gap between 2 and 3 hours.
Acknowledgments. We are most grateful for support from the National Science Foundation
through grants AST-9987334 and AST-0307413. Bill Fenton took most of the spectra of GZ Cnc,
and J. Cameron Brueckner assisted with the BF Eri spectroscopy. Some of the astrometric images of
BF Eri were obtained by Sébastien Lépine and Michael Shara of the American Museum of Natural
History. We would like to thank the MDM Observatory staff for their skillful and conscientious
support. Finally, we are grateful to the Tohono O’odham for leasing us their mountain for a while,
so that we may study the glorious universe in which we all live.
REFERENCES
Andronov, N., & Pinsonneault, M. H. 2004, ApJ, 614, 326
Araujo-Betancor, S., et al. 2005, A&A, 430, 629
Baraffe, I., & Kolb, U. 2000, MNRAS, 318, 354
Bessell, M. S. 1990, PASP, 102, 1181
Beuermann, K. 2006, A&A, 460, 78
Boeshaar, P. 1976, Ph. D. thesis, Ohio State University
Bruch, A. 1989, A&AS, 78, 145
Bruch, A., & Schimpke, T. 1992, A&AS, 93, 419
Casares, J., Martinez-Pais, I. G., Marsh, T. R., Charles, P. A., & Lazaro, C. 1996, MNRAS, 278,
Cash, W. 1979, ApJ, 228, 939
Cieslinski, D., Steiner, J. E., & Jablonski, F. J. 1998, A&AS, 131, 119
Dickinson, R. J., Prinja, R. K., Rosen, S. R., King, A. R., Hellier, C., & Horne, K. 1997, MNRAS,
286, 447
Downes, R. A., Webbink, R. F., Shara, M. M., Ritter, H., Kolb, U., & Duerbeck, H. W. 2001,
PASP, 113, 764
Elvis, M., Plummer, D., Schachter, J., & Fabbiano, G. 1992, ApJS, 80, 257
Hellier, C., & Naylor, T. 1998, MNRAS, 295, L50
Jenniskens, P., & Desert, F.-X. 1994, A&AS, 106, 39
Jiang, X. J., Engels, D., Wei, J. Y., Tesch, F., & Hu, J. Y. 2000, A&A, 362, 263
Kato, T. 1999, Informational Bulletin on Variable Stars, 4745, 1
Kato, T., et al. 2002, A&A, 396, 929
Keenan, P. C., & McNeil, R. C. 1989, ApJS, 71, 245
Kholopov, P. N., et al. 1999, VizieR Online Data Catalog, 2214, 0
Kinman, T. D., Mahaffey, C. T., & Wirtanen, C. A. 1982, AJ, 87, 314
Hanson, R. B., Klemola, A. R., Jones, B. F., & Monet, D. G. 2004, AJ, 128, 1430
Kurtz, M. J., & Mink, D. J. 1998, PASP, 110, 934
Liu, W., Hu, J. Y., Zhu, X. H., & Li, Z. Y. 1999, ApJS, 122, 243
Marsh, T. R. 1988, MNRAS, 231, 1117
Monet, D. et al. 1996, USNO-A2.0, (U. S. Naval Observatory, Washington, DC)
Morgenroth, O. 1939, Astronomische Nachrichten, 268, 273
Morales-Rueda, L., & Marsh, T. R. 2002, MNRAS, 332, 814
Patterson, J., et al. 2002, PASP, 114, 1364
Robinson, E. L. 1992, ASP Conf. Ser. 29: Cataclysmic Variable Stars, 29, 3
Schachter, J. F., Remillard, R., Saar, S. H., Favata, F., Sciortino, S., & Barbera, M. 1996, ApJ,
463, 747
Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525
Schneider, D. P. & Young, P. 1980, ApJ, 238, 946
Shafter, A. W. 1983, ApJ, 267, 222
Shafter, A. W. 1992, ApJ, 394, 268
Szkody, P. 1987, ApJS, 63, 685
Tappert, C., & Bianchini, A. 2003, A&A, 401, 1101
Taylor, C. J., Thorstensen, J. R., & Patterson, J. 1999, PASP, 111, 184
Thorstensen, J. R. 2003, AJ, 126, 3017
Thorstensen, J. R., Fenton, W. H., & Taylor, C. J. 2004, PASP, 116, 300
Thorstensen, J. R. & Freed, I. W. 1985, AJ, 90, 2082
Thorstensen, J. R., Patterson, J. O., Shambrook, A., & Thomas, G. 1996, PASP, 108, 73
Thorstensen, J. R., Ringwald, F. A., Wade, R. A., Schmidt, G. D., & Norsworthy, J. E. 1991, AJ,
102, 272
Thorstensen, J. R., & Taylor, C. J. 2000, MNRAS, 312, 629
Uemura, M., Kato, T., & Watanabe, M. 2000, Informational Bulletin on Variable Stars, 4831, 1
Warner, B. 1987, MNRAS, 227, 23
Warner, B. 1995, Cambridge Astrophysics Series (Cambridge, New York: Cambridge University Press)
Zacharias, N., Urban, S. E., Zacharias, M. I., Wycoff, G. L., Hall, D. M., Monet, D. G., & Rafferty,
T. J. 2004, AJ, 127, 3043
This preprint was prepared with the AAS LATEX macros v5.2.
Fig. 1.— Plots of the average flux-calibrated spectra for eight of the stars studied here. The weak
features seen near λ5577 are artifacts caused by imperfect subtraction of the strong [OI] night-sky
emission.
Fig. 2.— Plot of the averaged spectrum of BF Eri (top) and the spectrum after a scaled late-type
(K3V) star has been subtracted (bottom). The spectra have been shifted into a rest frame before
averaging and do not include the 2006 March or 2007 January data.
Fig. 3.— Periodograms for most of the stars studied here. The vertical axis in each case is the
inverse of chi-square for the least-squares best fitting sinusoid at each trial frequency. When data
from more than one observing run are combined, the periodogram can require hundreds of thousands
of points to resolve the fine-scale ringing; in those cases, the curve shown is formed by connecting
local maxima of the periodogram with straight lines. In those cases the right-hand panel gives a
close-up view of the region around the highest peak, revealing the alias structure resulting from
different choices of cycle count between the observing runs. The periodogram of BF Eri is for the
absorption-line velocities.
Fig. 4.— Periodograms for the remainder of the stars, plotted in the same manner as the previous
figure. Because the choice of daily cycle count for FO Per remains ambiguous, we have not chosen
to enlarge either peak region.
Fig. 5.— Radial velocities plotted as a function of phase using the adopted orbital periods. For
CZ Aql, GZ Cnc, and V1006 Cyg, the number of cycle counts between observing runs is unknown,
and the exact period chosen to fold the velocities is one of a number of possibilities. The two
plots for FO Per are for different choices of the daily cycle count, and each of these in turn is also
an arbitrary choice among many finely-spaced periods. In BF Eri, both emission and absorption
velocities are plotted; the absorption velocities are shown with error bars.
Fig. 6.— Phase-averaged spectra of CZ Aql (top two panels) and BF Eri (bottom two panels),
presented as a greyscale. The scale is inverted, so that emission is represented by darker shades.
The two CZ Aql spectra are scaled differently to show the line cores (top) and the extent of the
line wings. Note the NaD lines in CZ Aql remain stationary, indicating an interstellar origin. The
feature at λ6280 is telluric. BF Eri’s spectrum is plotted in two overlapping sections; the K-star’s
orbital motion is plainly visible.
Table 1. Stars Observed
Star        α2000^a       δ2000        Epoch^b   Vobs^c    max^d    min
            [hh:mm:ss]    [◦:′:″]                 [mag.]    [mag.]   [mag.]
LX And 2:19:44.08 +40:27:22.3 2006.7 16.3 13.5p 16.4p
CZ Aql 19:19:58.21 −07:10:55.2 2003.4 15.4 13.p 15.p
LU Cam 5:58:17.86 +67:53:46.2 2002.0 16.3 14.v (16.v)
GZ Cnc 9:15:51.68 +09:00:49.6 2000.3 15.4 13.1v 15.4v
V632 Cyg 21:36:04.22 +40:26:19.4 2000.5 17.9 12.6p 17.5p
V1006 Cyg 19:48:47.20 +57:09:22.8 2000.5 17.8 15.4p 17.0p
BF Eri 4:39:29.96 −04:35:59.5 2006.2 14.8 13.5p 15.5p
BI Ori 5:23:51.77 +01:00:30.6 2002.8 17.1 13.2p 16.7p
FO Per 4:08:34.98 +51:14:48.5 2004.0 17.1 11.8v 16.v
aPositions measured from images taken at the 2.4m Hiltner telescope, using
astrometric solutions from fits to USNO A2.0 (Monet et al. 1996) or UCAC 2
(Zacharias et al. 2004) stars. Uncertainties are of order 0.1 arcsec.
bThe date of the image used in the position measurement. The coordinate
system (equator and equinox) is J2000 in all cases.
cSynthesized from spectra, as described in text.
dTaken from the GCVS (Kholopov et al. 1999). Photographic magnitudes
flagged with ‘p’, visual with ‘v’.
Table 2. Journal of Observations
date N HA start HA end telescope
(UT) [hh:mm] [hh:mm]
LX And
2004 Jan 13 1 +2:35 +2:35 2.4m
2004 Mar 02 3 +3:45 +4:00 2.4m
2004 Nov 18 1 +0:29 +0:29 1.3m
2004 Nov 19 26 −3:36 +4:17 1.3m
2004 Nov 19 2 +2:31 +2:35 2.4m
2004 Nov 20 13 −3:17 +3:15 1.3m
2004 Nov 20 1 −1:23 −1:23 2.4m
2006 Jan 19 14 +1:31 +4:10 1.3m
2006 Jan 22 8 +3:31 +4:46 1.3m
2007 Jan 26 3 +1:36 +1:58 1.3m
2007 Jan 27 15 +0:58 +3:39 1.3m
CZ Aql
2005 Jul 02 3 +1:40 +2:04 1.3m
2005 Jul 04 48 −3:28 +3:17 1.3m
2005 Jul 05 13 −1:55 −0:06 1.3m
2005 Jul 06 12 −3:57 +2:28 1.3m
2005 Sep 03 2 +1:23 +1:32 1.3m
2005 Sep 07 2 −0:03 +0:05 1.3m
2005 Jun 28 2 −0:01 +0:04 2.4m
2006 Jun 18 2 +2:05 +2:10 2.4m
2006 Jun 19 2 +0:36 +0:40 2.4m
2006 Jun 23 3 −1:24 −1:12 2.4m
Table 2—Continued
date N HA start HA end telescope
(UT) [hh:mm] [hh:mm]
LU Cam
2002 Jan 22 2 −1:40 −1:19 2.4m
2002 Jan 23 8 −1:33 +2:31 2.4m
2002 Jan 24 25 −3:15 +5:47 2.4m
2002 Feb 18 2 +2:20 +2:28 2.4m
2002 Feb 19 2 +2:41 +2:50 2.4m
2002 Feb 20 4 −0:02 +3:43 2.4m
2002 Feb 22 1 +2:18 +2:18 2.4m
2004 Jan 16 2 +2:49 +2:53 2.4m
2004 Jan 17 1 +0:40 +0:40 2.4m
2004 Jan 19 1 −0:32 −0:32 2.4m
2004 Mar 02 1 +0:53 +0:53 2.4m
2004 Mar 07 6 +2:29 +3:14 2.4m
2004 Nov 19 4 +2:48 +3:27 2.4m
2005 Mar 21 1 +1:12 +1:12 2.4m
2005 Mar 22 2 +1:21 +1:30 2.4m
2005 Sep 09 1 −2:00 −2:00 2.4m
2005 Sep 12 1 −1:49 −1:49 2.4m
2006 Jan 09 2 +1:49 +1:55 2.4m
GZ Cnc
2000 Apr 07 2 +4:26 +4:32 2.4m
2000 Apr 08 1 +0:27 +0:27 2.4m
2000 Apr 10 5 −0:31 +4:32 2.4m
2000 Apr 11 15 +2:19 +4:22 2.4m
2001 Mar 24 2 +4:39 +4:49 2.4m
2001 Mar 25 21 −1:10 +3:10 2.4m
2001 Mar 26 20 −0:50 +4:27 2.4m
2001 Mar 27 2 −0:08 +0:01 2.4m
2001 Mar 28 3 +0:14 +0:25 2.4m
V632 Cyg
2005 Jul 07 2 +1:00 +1:16 1.3m
2005 Jul 08 10 −5:12 +0:33 1.3m
2005 Jul 09 18 −5:03 +1:09 1.3m
2005 Jul 10 18 −5:09 +1:07 1.3m
2005 Jul 11 3 +0:52 +1:19 1.3m
Table 2—Continued
date N HA start HA end telescope
(UT) [hh:mm] [hh:mm]
V1006 Cyg
2003 Jun 22 1 +1:06 +1:06 2.4m
2004 Jun 24 5 +0:52 +1:56 1.3m
2004 Jun 25 5 −1:43 +0:34 1.3m
2004 Jun 25 1 +4:06 +4:06 2.4m
2004 Jun 26 10 −4:25 +1:56 1.3m
2004 Jun 27 3 −3:00 −2:00 1.3m
2004 Jun 28 4 −2:25 +1:02 1.3m
2004 Jun 28 1 +0:28 +0:28 2.4m
2004 Jun 29 5 +0:55 +1:59 1.3m
2004 Jun 29 1 +4:07 +4:07 2.4m
2004 Jun 30 18 −4:39 +2:26 1.3m
2004 Jul 01 10 −3:58 +1:53 2.4m
2004 Jun 30 4 −3:41 +3:52 2.4m
2004 Jul 01 12 −4:07 +2:17 1.3m
2005 Jul 03 27 −3:57 +1:20 1.3m
2005 Jul 05 13 +0:58 +3:11 1.3m
BF Eri
2001 Dec 18 3 +2:35 +2:56 1.3m
2001 Dec 19 10 −3:06 +4:04 1.3m
2001 Dec 20 12 −3:49 +2:00 1.3m
2001 Dec 21 2 +3:04 +3:14 1.3m
2001 Dec 22 5 −3:13 −2:31 1.3m
2001 Dec 23 12 −3:18 +4:35 1.3m
2001 Dec 24 13 −3:17 +3:08 1.3m
2001 Dec 25 14 −2:20 +1:07 1.3m
2001 Dec 26 8 −2:10 +3:05 1.3m
2001 Dec 27 18 −2:39 +4:14 1.3m
2002 Jan 19 1 +1:27 +1:27 2.4m
2002 Jan 20 2 −1:33 +2:13 2.4m
2002 Jan 22 1 −2:02 −2:02 2.4m
2002 Feb 21 2 +1:26 +1:35 2.4m
2002 Feb 22 2 +0:40 +0:49 2.4m
2002 Oct 26 2 −0:11 +0:05 2.4m
2002 Oct 31 1 +3:21 +3:21 2.4m
2003 Feb 02 1 +0:04 +0:04 2.4m
2005 Sep 11 2 −0:57 −0:40 1.3m
2006 Mar 16 1 +1:56 +1:56 1.3m
2006 Mar 17 5 +2:15 +2:57 1.3m
Table 2—Continued
date N HA start HA end telescope
(UT) [hh:mm] [hh:mm]
2007 Jan 28 9 −1:46 −0:13 1.3m
BI Ori
2006 Jan 20 32 −2:54 +4:05 1.3m
2006 Jan 21 22 −2:47 +2:21 1.3m
2006 Jan 23 6 +1:03 +2:08 1.3m
FO Per
1995 Oct 09 9 −4:40 −3:26 2.4m
1995 Oct 10 5 −5:22 +1:26 2.4m
1996 Dec 19 14 +1:38 +4:03 1.3m
1996 Dec 20 5 +2:08 +3:43 1.3m
2001 Dec 18 9 −2:35 +4:36 1.3m
2004 Nov 18 1 −0:11 −0:11 1.3m
2004 Nov 19 9 +2:52 +5:08 1.3m
2004 Nov 19 2 +3:24 +2:48 2.4m
2004 Nov 20 20 −0:56 +5:10 1.3m
2006 Jan 10 12 −0:35 +2:35 1.3m
2006 Jan 10 1 +4:21 +4:21 2.4m
2006 Jan 11 18 −1:29 +2:41 1.3m
2006 Jan 11 1 −3:12 −3:12 2.4m
2006 Jan 12 4 −1:16 −0:36 1.3m
2006 Jan 13 10 +3:44 +5:37 1.3m
2006 Jan 16 11 −1:36 +1:00 1.3m
Table 3. Spectral Features in Quiescence
Feature    E.W.^a (Å)    Flux (10−16 erg cm−2 s−1)    FWHM^b (Å)
LX And
Hβ 45 690 18
HeI λ4921 4 60 25
HeI λ5015 3 50 20
Fe λ5169 2 20 14
HeI λ5876 11 120 19
Hα 54 560 17
HeI λ6678 4 40 19
CZ Aql
Hβ 21 670 18
HeI λ4921 1 40 12
HeI λ5015 2 50 13
HeI λ5876 7 160 15
NaD −1 −16 · · ·
Hα 61 1250 27
HeI λ6678 5 90 16
LU Cam
Hγ 10 170 12
HeI λ4471 2 30 10
Hβ 14 190 12
HeI λ4921 1 20 12
HeI λ5015 2 20 13
Fe λ5169 1 10 12
HeI λ5876 4 50 12
Hα 27 240 13
HeI λ6678 3 20 15
HeI λ7067 3 20 · · ·
GZ Cnc
Hγ 26 940 26
HeI λ4471 8 260 28
HeII λ4686 5 140 46
Hβ 36 1040 25
HeI λ4921 5 140 26
HeI λ5015 4 100 28
Fe λ5169 2 60 26
HeI λ5876 9 200 27
Table 3—Continued
Feature    E.W.^a (Å)    Flux (10−16 erg cm−2 s−1)    FWHM^b (Å)
Hα 38 790 25
HeI λ6678 4 90 31
HeI λ7067 3 60 32
V632 Cyg
Hβ 80 260 24
HeI λ4921 6 20 27
HeI λ5015 8 20 26
Fe λ5169 5 10 26
HeI λ5876 28 70 27
Hα 113 260 27
HeI λ6678 15 30 32
V1006 Cyg
Hβ 74 250 27
HeI λ4921 8 30 30
HeI λ5015 8 20 28
Fe λ5169 8 30 28
HeI λ5876 26 70 34
Hα 108 250 31
HeI λ6678 11 30 38
BF Eri
HeII λ4686 6 240 46
Hβ 23 1060 22
HeI λ5015 2 110 29
HeI λ5876 5 240 21
Hα 27 1200 22
HeI λ6678 3 120 27
BI Ori
Hβ 34 180 32
HeI λ4921 6 30 36
HeI λ5015 6 30 43
Fe λ5169 5 30 39
HeI λ5876 6 30 31
Hα 36 190 31
FO Per
Table 3—Continued
Feature    E.W.^a (Å)    Flux (10−16 erg cm−2 s−1)    FWHM^b (Å)
Hβ 24 160 9
HeI λ4921 2 20 7
HeI λ5015 3 20 7
Fe λ5169 2 10 7
HeI λ5876 5 30 7
Hα 29 170 9
HeI λ6678 2 10 9
aEmission equivalent widths are counted as positive.
bFrom Gaussian fits.
Table 4. Radial Velocities
Star    time^a    vabs    σvabs    vemn    σvemn
(km s−1) (km s−1) (km s−1) (km s−1)
LX And 53017.7061 · · · · · · −44 11
LX And 53066.6160 · · · · · · −57 8
LX And 53066.6208 · · · · · · −37 8
LX And 53066.6263 · · · · · · −77 7
aHeliocentric Julian date of mid-integration, minus 2400000.
Note. — All emission-line velocities are of Hα. Emission-line velocity
uncertainties are derived from counting statistics and should be regarded as
lower limits. Table 4 is published in its entirety in the electronic version of
the Publications of the Astronomical Society of the Pacific. A short portion
is shown here for guidance regarding its form and content.
Table 5. Fits to Radial Velocities
Star              Algorithm^a    T0^b          P (d)         K (km s−1)    γ (km s−1)    N    σ^c (km s−1)
LX And G2,21,7 53754.6861(12) 0.1509743(5) 81(4) −48(3) 87 15
CZ Aql G2,18,8 53557.880(2) 0.2005(6)d 193(15) 5(10) 89 61
LU Cam D,15 52327.7421(14) 0.1499686(4) 57(4) 44(3) 66 14
GZ Cnc G2,15,9 51992.8928(13) 0.0881(4)d 79(7) 22(5) 71 26
V632 Cyg D,28 53560.9746(13) 0.06377(8) 62(8) −49(5) 51 28
V1006 Cyg G2,20,9 53187.9091(16) 0.09904(9)d 89(8) −11(6) 120 44
BF Eri emission G2,21,7 52574.0027(18) 0.2708801(6) 109(5) −91(3) 126 24
BF Eri absorption · · · 52573.8632(9) 0.2708805(4) 182(4) −72(3) 117 20
BF Eri mean: · · · · · · 0.2708804(4) · · · · · · · · ·
BI Ori G2,35,9 53756.541(3) 0.1915(5) 131(13) 24(9) 60 44
FO Per (shorter) D,11 52261.872(3) 0.1467(4)d 27(3) −49(2) 131 17
FO Per (longer) D,11 52261.893(3) 0.1719(5)d 27(3) −45(2) 131 17
Note. — Parameters of sinusoidal least-squares fits to the velocity timeseries, of the form v(t) = γ +K sin(2π(t −
T0)/P ). The quoted parameter uncertainties are based on the assumption that the scatter of the data around the best
fit is a realistic estimate of the velocity uncertainty (Cash 1979). In practice this is more conservative than assuming
that counting statistics uncertainties are realistic.
aCode for the convolution function used to derive emission line velocities; D = derivative of a Gaussian, G2 =
double-Gaussian function (see text). For the D algorithm the number that follows gives the line full-width at half-
maximum, in Å, for which the function is optimized; for the G2 algorithm the two numbers are respectively the
separation of the two Gaussians and their individual FWHMs, again in Å.
bHeliocentric Julian Date minus 2400000. The epoch is chosen to be near the center of the time interval covered
by the data, and within one cycle of an actual observation.
cRoot-mean-square residual of the fit.
dThe period determination in this case is complicated by unknown numbers of cycles between observing runs; the
uncertainty given here is an estimate based on fits to individual runs. Only certain values within the period range
given here are allowed; see text for details.
This figure "f6.png" is available in "png"
format from:
http://arxiv.org/ps/0704.0948v1
http://arxiv.org/ps/0704.0948v1
|
0704.0949 | Conservation laws for invariant functionals containing compositions | Conservation laws for invariant functionals
containing compositions∗
Gastão S. F. Frederico†
[email protected]
Higher Institute of Education
University of Cabo Verde
Praia, Santiago – Cape Verde
Delfim F. M. Torres‡
[email protected]
Department of Mathematics
University of Aveiro
3810-193 Aveiro, Portugal
Abstract
The study of problems of the calculus of variations with composi-
tions is a quite recent subject with origin in dynamical systems governed
by chaotic maps. Available results are reduced to a generalized Euler-
Lagrange equation that contains a new term involving inverse images of
the minimizing trajectories. In this work we prove a generalization of the
necessary optimality condition of DuBois-Reymond for variational prob-
lems with compositions. With the help of the new obtained condition,
a Noether-type theorem is proved. An application of our main result is
given to a problem appearing in the chaotic setting when one consider
maps that are ergodic.
Mathematics Subject Classification 2000: 49K05, 49J05.
Keywords: variational calculus, functionals containing compositions,
symmetries, DuBois-Reymond condition, Noether’s theorem.
1 Introduction and motivation
The theory of variational calculus for problems with compositions has been re-
cently initiated in [5]. The new theory considers integral functionals that depend
not only on functions q(·) and their derivatives q̇(·), but also on compositions
(q ◦ q)(·) of q(·) with q(·). Since chaos is often a byproduct of the iteration
∗Accepted for an oral presentation at the 7th IFAC Symposium on Nonlinear Control
Systems (NOLCOS 2007), to be held in Pretoria, South Africa, 22–24 August, 2007.
†This work is part of the author’s PhD project. Supported by the Portuguese Institute for
Development (IPAD).
‡Supported by the Centre for Research on Optimization and Control (CEOC) through
the Portuguese Foundation for Science and Technology (FCT), cofinanced by the European
Community fund FEDER/POCTI.
http://arxiv.org/abs/0704.0949v1
of nonlinear maps [2], such problems serve as an interesting model for chaotic
dynamical systems. Let us briefly review this relation (for more details, we refer
the interested reader to [3, 4, 5]). Let q : [0, 1] → [0, 1] be a piecewise mono-
tonic map with probability density function fq(·), which captures the long term
statistical behavior of a nonlinear dynamical system. It is natural (see [2]) to
consider the problem of minimizing or maximizing the functional
I[q(·), fq(·)] = ∫_0^1 (q(t) − t)^2 fq(t) dt ,   (1)
which depends on q(·) and its probability density function fq(·) (usually a com-
plicated function of q(·)). It turns out that fq(·) is the fixed point of the
Frobenius-Perron operator Pq[·] associated with q(·). For a piecewise mono-
tonic map q : [0, 1] → [0, 1] with r pieces, Pq[·] has the representation
Pq[f ](t) = Σ_{v ∈ {q−1(t)}} f(v) / |q̇(v)| ,
where for any point t ∈ [0, 1] the set {q−1(t)} consists of at most r points. The
fixed point fq(·) associated with an ergodic map q(·) can be expressed as the
limit
fq = lim_{i→∞} Pq^i [1] ,   (2)
where 1 is the constant function 1 on [0, 1]. Substituting (2) into (1), and using
the adjoint property [2, Prop. 4.2.6], one eliminates the probability density
function fq(·), obtaining (1) in the form
I[q(·)] = ∫_0^1 L(t, q(t), q(2)(t), q(3)(t), . . .) dt ,
where we are using the notation q(i)(·) to denote the i-th composition of q(·)
with itself: q(1)(t) = q(t), q(2)(t) = (q ◦ q)(t), q(3)(t) = (q ◦ q ◦ q)(t), etc. In [5] a
generalized Euler-Lagrange equation, which involves the inverse images of the
extremizing function q(·) (cf. (11)), was proved for such functionals in the cases
L (t, q(t), q(2)(t)) ,   L (t, q(t), q̇(t), q(2)(t)) ,   and   L (t, q(t), q(2)(t), q(3)(t)) .
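To make the operator Pq and the limit (2) concrete, the numerical sketch below iterates the Frobenius–Perron operator of the ergodic logistic map q(t) = 4t(1 − t); this particular map is chosen only for illustration and is not taken from [5]. The iterates Pq^i[1] approach the known invariant density 1/(π sqrt(t(1 − t))).

    import numpy as np

    def perron_frobenius_logistic(f, t):
        # One application of P_q to the density f for q(t) = 4 t (1 - t).
        # The two preimages of t are v = (1 -/+ sqrt(1 - t)) / 2, and
        # |q'(v)| = 4 * sqrt(1 - t) at both of them.
        root = np.sqrt(1.0 - t)
        v1 = (1.0 - root) / 2.0
        v2 = (1.0 + root) / 2.0
        return (np.interp(v1, t, f) + np.interp(v2, t, f)) / (4.0 * root)

    t = np.linspace(1e-4, 1.0 - 1e-4, 2000)   # avoid the endpoint singularities
    f = np.ones_like(t)                        # start from the constant density 1
    for _ in range(12):
        f = perron_frobenius_logistic(f, t)

    exact = 1.0 / (np.pi * np.sqrt(t * (1.0 - t)))
    # L1 distance to the invariant density; it shrinks as more iterates are applied.
    print(np.trapz(np.abs(f - exact), t))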
To the best of our knowledge, these generalized Euler-Lagrange equations com-
prise all the available results on the subject. Thus, one concludes that the
theory of variational calculus with compositions is in its childhood: much re-
mains to be done. Here we go a step further in the theory of functionals con-
taining compositions. We are mainly interested in Noether’s classical theo-
rem, which is one of the most beautiful results of the calculus of variations
and optimal control, with many important applications in Physics (see e.g.
[6, 13, 14]), Economics (see e.g. [1, 17]), and Control Engineering (see e.g.
[11, 15, 18, 20, 22]), and source of many recent extensions and developments
(see e.g. [7, 8, 9, 10, 16, 19, 21]). Noether’s symmetry theorem describes the
universal fact that invariance with respect to some family of parameter transfor-
mations gives rise to the existence of certain conservation laws, i.e. expressions
preserved along the Euler-Lagrange or Pontryagin extremals of the problem.
Our results are a generalized DuBois-Reymond necessary optimality condition
(Theorem 7), and a generalized Noether’s theorem (Theorem 13) for function-
als of the form
∫_a^b L (t, q(t), q̇(t), q(2)(t)) dt. In §4 an illustrative example is
presented.
2 Preliminaries – review of classical results of
the calculus of variations
There exist many different ways to prove the classical Noether’s theorem (cf.
e.g. [6, 12, 13, 17]). We review here one of those proofs, which is based on the
DuBois-Reymond necessary condition. Although this proof is not so common
in the literature of Noether’s theorem, it turns out to be the most suitable
approach when dealing with functionals containing compositions.
Let us consider the fundamental problem of the calculus of variations:
I[q(·)] = ∫_a^b L (t, q(t), q̇(t)) dt −→ min   (P)
under the boundary conditions q(a) = qa and q(b) = qb, where q̇ = dq/dt, with q(·)
a piecewise-smooth function, and the Lagrangian L : [a, b] × Rn × Rn → R is a
C2 function with respect to all its arguments.
The concept of symmetry has a very important role in mathematics and its
applications. Symmetries are defined through transformations of the system
that leave the problem invariant.
Definition 1 (Invariance of (P)). The integral functional (P) is said to be
invariant under the ε-parameter infinitesimal transformations
t̄ = t + ετ(t, q) + o(ε) ,
q̄(t) = q(t) + εξ(t, q) + o(ε) ,
where τ and ξ are piecewise-smooth, if
∫_{ta}^{tb} L (t, q(t), q̇(t)) dt = ∫_{t̄(ta)}^{t̄(tb)} L (t̄, q̄(t̄), ˙̄q(t̄)) dt̄   (4)
for any subinterval [ta, tb] ⊆ [a, b].
Along the work we denote by ∂iL the partial derivative of L with respect to
its i-th argument.
Theorem 2 (Necessary condition of invariance). If functional (P) is invariant
under the infinitesimal transformations (3), then
∂1L (t, q, q̇) τ + ∂2L (t, q, q̇) · ξ + ∂3L (t, q, q̇) · (ξ̇ − q̇τ̇) + L (t, q, q̇) τ̇ = 0 .   (5)
Proof. Since (4) is to be satisfied for any subinterval [ta, tb] ⊆ [a, b], equality (4)
is equivalent to
L (t + ετ + o(ε), q + εξ + o(ε), (q̇ + εξ̇ + o(ε)) / (1 + ετ̇ + o(ε))) (1 + ετ̇ + o(ε)) = L (t, q, q̇) .   (6)
We obtain (5) differentiating both sides of (6) with respect to ε, and then setting
ε = 0.
Another very important notion in mathematics and its applications is the
concept of conservation law. One of the most important conservation laws was
proved by Leonhard Euler in 1744: when the Lagrangian L(q, q̇) corresponds to
a system of conservative points, then
−L (q(t), q̇(t)) + ∂L/∂q̇ (q(t), q̇(t)) · q̇(t) ≡ constant ,   (7)
t ∈ [a, b], holds along the solutions of the Euler-Lagrange equations.
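For a concrete conservative Lagrangian, say L(q, q̇) = q̇²/2 − q⁴/4 (an illustrative choice, not from the text), the constancy of (7) along Euler–Lagrange solutions can be checked symbolically in a few lines:

    import sympy as sp

    t = sp.symbols('t')
    qf = sp.Function('q')
    q, v = sp.symbols('q v')            # placeholders for q and q-dot

    L = v**2 / 2 - q**4 / 4             # concrete conservative Lagrangian L(q, q-dot)
    C = -L + sp.diff(L, v) * v          # the quantity in (7)

    traj = {q: qf(t), v: sp.diff(qf(t), t)}   # evaluate along a trajectory q(t)
    dCdt = sp.diff(C.subs(traj), t)

    # Euler-Lagrange equation for this L reads q'' = -q^3; impose it and simplify.
    on_shell = dCdt.subs(sp.diff(qf(t), t, 2), -qf(t)**3)
    print(sp.simplify(on_shell))        # 0, so (7) is constant along the extremals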
Definition 3 (Conservation law). A quantity C(t, q, q̇) defines a conservation
law if
d/dt C(t, q(t), q̇(t)) = 0 , t ∈ [a, b] ,
along all the solutions q(·) of the Euler-Lagrange equation
d/dt ∂3L (t, q, q̇) = ∂2L (t, q, q̇) .   (8)
Conservation laws can be used to lower the order of the Euler-Lagrange equa-
tions (8) and simplify the resolution of the respective problems of the calculus of
variations and optimal control [16]. Emmy Amalie Noether formulated in 1918
a very general principle on conservation laws, with many important implications
in modern physics, economics and engineering. Noether’s principle asserts that
“the invariance of the functional
∫_a^b L (t, q(t), q̇(t)) dt under one-parameter infinitesimal transformations (3) implies the existence of a conservation law”. One
particular example of application of Noether’s theorem gives (7), which corre-
sponds to conservation of energy in classical mechanics or to the income-wealth
law of economics.
Theorem 4 (Noether’s theorem). If functional (P) is invariant, in the sense
of the Definition 1, then
C(t, q, q̇) = ∂3L (t, q, q̇) · ξ(t, q) + (L(t, q, q̇) − ∂3L (t, q, q̇) · q̇) τ(t, q) (9)
defines a conservation law.
We recall here the proof of Theorem 4 by means of the classical necessary
optimality condition of DuBois-Reymond.
Theorem 5 (DuBois-Reymond condition). If q(·) is a solution of problem (P),
then
∂1L (t, q, q̇) = d/dt [L (t, q, q̇) − ∂3L (t, q, q̇) · q̇] .   (10)
Proof. The DuBois-Reymond necessary optimality condition is easily proved
using the Euler-Lagrange equation (8):
d/dt [L (t, q, q̇) − ∂3L (t, q, q̇) · q̇]
= ∂1L (t, q, q̇) + ∂2L (t, q, q̇) · q̇ + ∂3L (t, q, q̇) · q̈ − (d/dt ∂3L (t, q, q̇)) · q̇ − ∂3L (t, q, q̇) · q̈
= ∂1L (t, q, q̇) + q̇ · (∂2L (t, q, q̇) − d/dt ∂3L (t, q, q̇))
= ∂1L (t, q, q̇) .
Proof. (of Theorem 4) To prove Noether's theorem, we insert the Euler-Lagrange
equation (8) and the DuBois-Reymond condition (10) into the necessary condition
of invariance (5):
0 = ∂1L (t, q, q̇) τ + ∂2L (t, q, q̇) · ξ + ∂3L (t, q, q̇) · (ξ̇ − q̇τ̇) + L (t, q, q̇) τ̇
= ∂2L (t, q, q̇) · ξ + ∂3L (t, q, q̇) · ξ̇ + ∂1L (t, q, q̇) τ + τ̇ (L (t, q, q̇) − ∂3L (t, q, q̇) · q̇)
= (d/dt ∂3L (t, q, q̇)) · ξ + ∂3L (t, q, q̇) · ξ̇ + (d/dt [L (t, q, q̇) − ∂3L (t, q, q̇) · q̇]) τ + τ̇ (L (t, q, q̇) − ∂3L (t, q, q̇) · q̇)
= d/dt [∂3L (t, q, q̇) · ξ + (L(t, q, q̇) − ∂3L (t, q, q̇) · q̇) τ] .
3 Main results
We consider the following problem of the calculus of variations with composition
of functions:
I[q(·)] = ∫_a^b L (t, q(t), q̇(t), z(t)) dt −→ min   (Pc)
subject to given boundary conditions q(a) = qa, q(b) = qb, z(a) = za, and
z(b) = zb, where q̇ = dq/dt and z(t) = (q ◦ q)(t). We assume that the Lagrangian
L : [a, b] × R × R × R → R is a function of class C2 with respect to all the
arguments, and that admissible functions q(·) are piecewise-smooth. The main
result of [5] is an extension of the Euler-Lagrange equation (8) for problems of
the calculus of variations (Pc).
Theorem 6 ([5]). If q(·) is a weak minimizer of problem (Pc), then q(·) satisfies
the Euler-Lagrange equation
∂2L (x, q(x), q̇(x), z(x)) − d/dx ∂3L (x, q(x), q̇(x), z(x)) + ∂4L (x, q(x), q̇(x), z(x)) q̇(q(x))
+ Σ_{t ∈ {q−1(x)}} ∂4L (t, q(t), q̇(t), z(t)) / |q̇(t)| = 0   (11)
for any x ∈ (a, b).
3.1 Generalized DuBois-Reymond condition
We begin by proving an extension of the DuBois-Reymond necessary optimality
condition (10) for problems of the calculus of variations (Pc).
Theorem 7 (cf. Theorem 5). If q(·) is a weak minimizer of problem (Pc), then
q(·) satisfies the DuBois-Reymond condition
d/dx [L (x, q(x), q̇(x), z(x)) − ∂3L (x, q(x), q̇(x), z(x)) q̇(x)]
= ∂1L (x, q(x), q̇(x), z(x)) − q̇(x) Σ_{t ∈ {q−1(x)}} ∂4L (t, q(t), q̇(t), z(t)) / |q̇(t)|   (12)
for any x ∈ (a, b).
Remark 8. If L (t, q, q̇, z) = L (t, q, q̇), then (12) coincides with the classical
DuBois-Reymond condition (10).
Proof. To prove Theorem 7 we use the Euler-Lagrange equation (11):
d/dx [L (x, q, q̇, z) − ∂3L (x, q, q̇, z) q̇]
= ∂1L (x, q, q̇, z) + ∂2L (x, q, q̇, z) q̇ + ∂3L (x, q, q̇, z) q̈ + ∂4L (x, q, q̇, z) q̇(q(x)) q̇
− (d/dx ∂3L (x, q, q̇, z)) q̇ − ∂3L (x, q, q̇, z) q̈
= ∂1L (x, q, q̇, z) + q̇ [∂2L (x, q, q̇, z) + ∂4L (x, q, q̇, z) q̇(q(x)) − d/dx ∂3L (x, q, q̇, z)]
= ∂1L (x, q, q̇, z) − q̇(x) Σ_{t ∈ {q−1(x)}} ∂4L (t, q(t), q̇(t), z(t)) / |q̇(t)| .
3.2 Noether’s theorem for functionals containing compo-
sitions
We introduce now the definition of invariance for the functional (Pc). As done
in the proof of Theorem 2 (see (6)), we get rid of the integral signs in (4).
Definition 9 (cf. Definition 1). We say that functional (Pc) is invariant under
the infinitesimal transformations (3) if
L (t̄, q̄(t̄), q̄′(t̄), z̄(t̄)) dt̄/dt = L (t, q(t), q̇(t), z(t)) + o(ε) ,   (13)
where q̄′ = dq̄/dt̄.
Along the work, in order to simplify the presentation, we sometimes omit
the arguments of the functions.
Theorem 10 (cf. Theorem 2). If functional (Pc) is invariant under the in-
finitesimal transformations (3), then
∂1L (t, q, q̇, z) τ + ∂2L (t, q, q̇, z) ξ + ∂3L (t, q, q̇, z) (ξ̇ − q̇τ̇)
+ ∂4L (t, q, q̇, z) q̇(q(t)) ξ + ∂4L (t, q, q̇, z) ξ(q(t)) + Lτ̇ = 0 .   (14)
Proof. Equation (13) is equivalent to
L (t + ετ + o(ε), q + εξ + o(ε), (q̇ + εξ̇ + o(ε)) / (1 + ετ̇ + o(ε)),
q(q + εξ + o(ε)) + εξ(q + εξ + o(ε))) × (1 + ετ̇ + o(ε)) = L (t, q, q̇, z) + o(ε) .   (15)
We obtain equation (14) differentiating both sides of equality (15) with respect
to the parameter ε, and then setting ε = 0.
Remark 11. Using the Frobenius-Perron operator (see [2, Chap. 4]) and the
Euler-Lagrange equation (11), we can write (14) in the following form:
∂1L (x, q, q̇, z) τ + ∂2L (x, q, q̇, z) ξ + ∂3L (x, q, q̇, z) (ξ̇ − q̇τ̇) + ∂4L (x, q, q̇, z) q̇(q(x)) ξ
+ Σ_{t ∈ {q−1(x)}} [∂4L (t, q(t), q̇(t), z(t)) / |q̇(t)|] ξ + Lτ̇
= ∂1L (x, q, q̇, z) τ + (d/dx ∂3L (x, q, q̇, z)) ξ + ∂3L (x, q, q̇, z) (ξ̇ − q̇τ̇) + Lτ̇ = 0 .   (16)
Definition 12 (Conservation law for (Pc)). We say that a quantity C (x, q, q̇, z)
defines a conservation law for functionals containing compositions if
d/dx C (x, q(x), q̇(x), z(x)) = 0
along all the solutions q(·) of the Euler-Lagrange equation (11).
Our main result is an extension of Noether’s theorem for problems of the
calculus of variations (Pc) containing compositions.
Theorem 13 (Noether’s theorem for (Pc)). If functional (Pc) is invariant, in
the sense of the Definition 9, and there exists a function f = f(x, q, q̇, z) such
that
d/dx f(x, q(x), q̇(x), z(x)) = τ q̇(x) Σ_{t ∈ {q−1(x)}} ∂4L (t, q(t), q̇(t), z(t)) / |q̇(t)| ,   (17)
then
C (x, q(x), q̇(x), z(x)) = [L(x, q(x), q̇(x), z(x)) − ∂3L (x, q(x), q̇(x), z(x)) q̇] τ(x, q)
+ ∂3L (x, q(x), q̇(x), z(x)) ξ(x, q) + f(x, q(x), q̇(x), z(x))   (18)
defines a conservation law (Definition 12).
Remark 14. If L (x, q, q̇, z) = L (x, q, q̇), then f is a constant and expression
(18) is equivalent to the conserved quantity (9) given by the classical Noether’s
theorem.
Proof. To prove the theorem, we use conditions (12) and (17) in (16):
0 = ∂1L (x, q, q̇, z) τ + (d/dx ∂3L (x, q, q̇, z)) ξ + ∂3L (x, q, q̇, z) (ξ̇ − q̇τ̇) + Lτ̇
= τ d/dx [L (x, q, q̇, z) − ∂3L (x, q, q̇, z) q̇] + τ̇ [L (x, q, q̇, z) − ∂3L (x, q, q̇, z) q̇]
+ ξ̇ ∂3L (x, q, q̇, z) + ξ d/dx ∂3L (x, q, q̇, z) + τ q̇(x) Σ_{t ∈ {q−1(x)}} ∂4L (t, q(t), q̇(t), z(t)) / |q̇(t)|
= d/dx [∂3L (x, q, q̇, z) ξ + (L (x, q, q̇, z) − ∂3L (x, q, q̇, z) q̇) τ + f(x, q, q̇, z)] .
4 An example
Let us consider the problem
I[q(·)] = (1/3) ∫_0^1 [x + q(x) + q(q(x))] dx −→ min
q(0) = 1 , q(1) = 0 ,   (19)
q(q(0)) = 0 , q(q(1)) = 1 .
In [5, §3] it is proven that (19) has the extremal
q(x) = q1(x) = −2x + 1 , x ∈ [0, 1/2] ,
q(x) = q2(x) = −2x + 2 , x ∈ (1/2, 1] ,   (20)
that is, (20) satisfies the Euler-Lagrange equation (11) for L(x, q, q̇, z) = (1/3)(x + q + z).
We now illustrate the application of our Theorem 13 to this problem. First, we
need to determine the variational symmetries. Substituting the Lagrangian L
in (16) we obtain that
x + q + z
τ̇ = 0 . (21)
The differential equation (21) admits the solution
τ = ke−
x+q+z , (22)
where k is an arbitrary constant. From Theorem 13 we conclude that
(x + q1 + z1)τ +
τ q̇1
|q̇1(t)|
dx, x ∈
, (23)
(x + q2 + z2)τ +
τ q̇2
|q̇2(t)|
dx, x ∈
, (24)
defines a conservation law, where τ is obtained from (22):
τ = ke−
3x = kelnx
= kx−
3 , x ∈ [0, 1] . (25)
Since for this problem we know the extremal, we can verify the validity of the
obtained conservation law directly from Definition 12: substituting equalities
(20) and (25) in (23) and (24), we obtain, as expected, a constant (zero in this
case):
(x + q1 + z1)τ +
τ q̇1
|q̇1(t)|
= 3kxτ − 2
τdx = 3kx
3 − 3kx
3 = 0 ,
(x + q2 + z2)τ +
τ q̇2
|q̇2(t)|
= 3kxτ − 2
τdx = 3kx
3 − 3kx
3 = 0 .
5 Conclusions
We proved a generalization (i) of the necessary optimality condition of DuBois-
Reymond, (ii) of the celebrated Noether’s symmetry theorem, for problems of
the calculus of variations containing compositions (respectively Theorems 7 and
13). Our main result is illustrated with the example studied in [5].
The compositional variational theory is in its childhood, so that much re-
mains to be done. In particular, it would be interesting to obtain an Hamil-
tonian formulation and to study more general optimal control problems with
compositions.
Acknowledgements
The authors are grateful to Pawe l Góra who shared Chapter 4 of [2].
References
[1] P. Askenazy (2003). Symmetry and optimal control in economics. J. Math.
Anal. Appl. 282, 603–613.
[2] P. Bracken, P. Góra (1997). Laws of chaos . Birkhaüser. Bassel.
[3] P. Bracken, P. Góra, A. Boyarsky (2001). Deriving chaotic dynamical sys-
tems from energy functionals. Stochastics and Dynamics 1, 377–388.
[4] P. Bracken, P. Góra, A. Boyarsky (2002). A minimal principle for chaotic
systems. Physica D 166, 63–75.
[5] P. Bracken, P. Góra, A. Boyarsky (2004). Calculus of variations for func-
tionals containing compositions. J. Math. Anal. Appl. 296, 658–664.
[6] D. S. Djukic, A. M. Strauss (1980). Noether’s theory for nonconservative
generalised mechanical systems. J. Phys. A 13, no. 2, 431–435.
[7] G. S. F. Frederico, D. F. M. Torres (2006). Constants of motion for frac-
tional action-like variational problems. Int. J. Appl. Math. 19, no. 1, 97–104.
[8] G. S. F. Frederico, D. F. M. Torres (2007). Nonconservative Noether’s
theorem in optimal control. Int. J. Tomogr. Stat. 5, no. W07, 109–114.
[9] J. Fu, L. Chen (2003). Non-Noether symmetries and conserved quantities
of nonconservative dynamical systems. Phys. Lett. A 317, no. 3-4, 255–259.
[10] P. D. F. Gouveia, D. F. M. Torres (2005). Automatic computation of con-
servation laws in the calculus of variations and optimal control. Comput.
Methods Appl. Math. 5, no. 4, 387–409.
[11] A. Gugushvili, O. Khutsishvili, V. Sesadze, G. Dalakishvili, N.
Mchedlishvili, T. Khutsishvili, V. Kekenadze, D. F. M. Torres (2003).
Symmetries and Conservation Laws in Optimal Control Systems . Georgian
Technical University, Tbilisi.
[12] J. Jost, X. Li-Jost (1998). Calculus of variations . Cambridge Univ. Press.
Cambridge.
[13] J. D. Logan (1987). Applied mathematics – a contemporary approach.
Wiley-Interscience Publication, John Wiley & Sons, Inc. New York.
[14] S. Moyo, P. G. L. Leach (1998). Noether’s theorem in classical mechanics.
Politehn. Univ. Bucharest Sci. Bull. Ser. A Appl. Math. Phys. 60, no. 3-4,
221–234.
[15] H. Nijmeijer, A. van der Schaft (1982). Controlled invariance for nonlinear
systems. IEEE Trans. Automat. Control 27, no. 4, 904–914
[16] E. A. M. Rocha, D. F. M. Torres (2006). Quadratures of Pontryagin ex-
tremals for optimal control problems. Control and Cybernetics 35, no. 4.
[17] R. Sato, R. V. Ramachandran (1990). Conservation laws and symmetry
– Applications to economics and finance. Kluwer Academic Publishers.
Boston, MA.
[18] H. J. Sussmann (1995). Symmetries and integrals of motion in optimal
control. Geometry in nonlinear control and differential inclusions (Warsaw,
1993), Polish Acad. Sci., Warsaw, 379–393.
[19] D. F. M. Torres (2002). On the Noether theorem for optimal control, Eu-
ropean Journal of Control 8, no. 1, 56–63.
[20] D. F. M. Torres (2002). Conservation laws in optimal control. Dynamics,
Bifurcations and Control, Springer-Verlag, Lecture Notes in Control and
Information Sciences, Berlin, Heidelberg, 287–296.
[21] D. F. M. Torres (2004). Proper extensions of Noether’s symmetry theorem
for nonsmooth extremals of the calculus of variations. Commun. Pure Appl.
Anal. 3, no. 3, 491–500.
[22] A. van der Schaft (1981/82). Symmetries and conservation laws for Hamil-
tonian systems with inputs and outputs: a generalization of Noether’s the-
orem. Systems & Control Letters 1, no. 2, 108–115.
Introduction and motivation
Preliminaries – review of classical results of the calculus of variations
Main results
Generalized DuBois-Reymond condition
Noether's theorem for functionals containing compositions
An example
Conclusions
|
0704.0950 | Displacement of the Sun from the Galactic Plane | Mon. Not. R. Astron. Soc. 000, 1–?? (2007) Printed 21 November 2021 (MN LATEX style file v2.2)
Displacement of the Sun from the Galactic Plane
Y. C. Joshi⋆
Astrophysics Research Centre, School of Mathematics and Physics, Queen’s University Belfast, Belfast BT7 1NN, UK
Accepted 2007 April 5; Received 2006 July 5;
ABSTRACT
We have carried out a comparative statistical study for the displacement of the Sun
from the Galactic plane (z⊙) following three different methods. The study has been
done using a sample of 537 young open clusters (YOCs) with log(Age) < 8.5 lying
within a heliocentric distance of 4 kpc and 2030 OB stars observed up to a distance
of 1200 pc, all of them have distance information. We decompose the Gould Belt’s
member in a statistical sense before investigating the variation in the z⊙ estimation
with different upper cut-off limits in the heliocentric distance and distance perpen-
dicular to the Galactic plane. We found z⊙ varies in a range of ∼ 13 − 20 pc from
the analysis of YOCs and ∼ 6− 18 pc from the OB stars. A significant scatter in the
z⊙ obtained due to different cut-off values is noticed for the OB stars although no
such deviation is seen for the YOCs. We also determined scale heights of 56.9+3.8
and 61.4+2.7
pc for the distribution of YOCs and OB stars respectively.
Key words: Galaxy: structure, open clusters, OB stars, Gould Belt – method: statis-
tical – astronomical data bases
1 INTRODUCTION
It has long been recognized that the Sun is not located precisely in the mid-plane of the Galactic
disk defined by b = 0◦ but is displaced a few parsecs to the North of Galactic plane (GP) (see
Blitz & Teuben 1996 for a review) and understanding the exact value of z⊙ is vital not only for
the Galactic structure models but also in describing the asymmetry in the density distribution of
different kind of stars in the north and south Galactic regions (Cohen 1995, Méndez & van Altena
1998, Chen et al. 1999). Several independent studies in the past have been carried out to estimate
⋆ E-mail: [email protected]
c© 2007 RAS
http://arxiv.org/abs/0704.0950v1
2 Y. C. Joshi
z⊙ using different kind of astronomical objects, for example, Gum, Kerr & Westerhout (1960)
concluded that z⊙ = 4 ± 12 pc from the neutral hydrogen layer, Kraft & Schmidt (1963) and
Fernie (1968) used Cepheid variables to estimate z⊙ ∼ 40 pc while Stothers & Frogel (1974)
determined z⊙ = 24 ± 3 pc from the B0-B5 stars within 200 pc from the Sun, all pointing to
a broad range of z⊙. More recently various different methods have been employed to estimate
z⊙ e.g. Cepheid variables (Caldwell & Coulson 1987), Optical star count technique (Yamagata
& Yoshii 1992, Humphreys & Larsan 1995, Chen et al. 2001), Wolf-Rayet stars (Conti & Vecca
1990), IR survey (Cohen 1995, Binney, Gerhard & Spergel 1997, Hammersley et al. 1995) along
with different simulations (Reed 1997, Méndez & van Altena 1998) and models (Chen et al. 1999,
Elias, Cabrera-Caño & Alfaro 2006, hereafter ECA06). Most of these studies constrained z⊙ in
the range of 15 to 30 pc in the north direction of the GP.
In recent years, the spatial distribution of open clusters (OCs) have been extensively used to
evaluate z⊙ since continued compilation of new clusters has brought together more extensive and
accurate data than ever. Using the OCs as a diagnostic tool to determine z⊙, Janes & Adler (1982)
found z⊙ = 75 pc for 114 clusters of age smaller than 10
8 yr while Lyngȧ (1982) determined
z⊙ ∼ 20 pc with 78 young clusters up to 1000 pc. Pandey & Mahra (1987) reported z⊙ as 10
pc from the photometric data of OCs within |b| ≤ 10◦ and Pandey, Bhatt & Mahra (1988) using
a subsample of YOCs within 1500 pc obtained z⊙ = 28 ± 5 pc. Most recently, z⊙ have been
determined in three independent studies based on the analysis of OCs. Considering about 600 OCs
within 5◦ of GP, we derived z⊙ = 22.8 ± 3.3 pc through the analysis of interstellar extinction in
the direction of the OCs (Joshi 2005, hereafter JOS05). Bonatto et al. (2006) reported z⊙ as 14.8 ±
2.4 pc using 645 OCs with age less than 200 Myrs while Piskunov et al. (2006, hereafter PKSS06)
estimated a value of 22 ± 4 pc using a sample of 650 OCs which is complete up to about 850 pc
from the Sun. On the other hand using a few thousand OB stars within 10◦ of the GP and 4 kpc
from the Sun, Reed (1997) approximately estimated the value as 10-12 pc while Maı́z-Apellániz
(2001) determined this values as 24.2± 2.1 pc using a sample of about 3400 O-B5 stars obtained
from the Hipparcos catalogue.
The large range of z⊙ derived from these different methods could be possibly caused by the
selection of data of varying age, heliocentric distance d, spectral type, etc. along with the method of
the determination. The aim of the present paper is therefore to study the variation in z⊙ following
different methods by constraining different upper limits in z and d using a large sample of OCs
and OB stars. The paper is organized as follows. First we detail the data used in this study in Sect.
2. In Sect. 3, we examine the distribution of z with the age of clusters while Sect. 4 deals their
c© 2007 RAS, MNRAS 000, 1–??
Displacement of the Sun from the Galactic Plane 3
distribution with the different z cut-off and d cut-off in order to determine z⊙. The exponential
decay of z distribution of the OCs and OB stars and their variation over the Galactic longitude are
discussed in Sects. 5 and 6 respectively. Our results are summarized in Sect. 7.
2 THE DATA
We use two catalogues in this study. The OC catalogue is complied by Dias et al. (2002)1 which
includes information available in the catalogues of the Lyngȧ (1987) as well as WEBDA2 with the
recent information on proper motion, age, distance from the Sun, etc. The latest catalogue (Version
2.7) that was updated in October 2006 gives physical parameters of 1759 OCs. Of these, 1013 OCs
have distance information for which it is possible to determine z which is equivalent to d sin b
where b is the Galactic latitude. Out of the 1013 OCs, age information is available for 874 OCs
with ages ranging from 1 Myr to about 10 Gyrs, although the majority of them are young clusters.
Though the clusters are observed up to a distance of about 15 kpc, it should be born in mind that
the cluster sample is not complete owing to large distance and/or low contrast of many potential
cluster candidates (Bonatto et al. 2006) and may be smaller by an order of magnitude since a good
fraction of clusters are difficult to observe at shorter wavelengths due to large extinction near the
GP (Lada & Lada 2003, Chen, Chen & Shu 2004, PKSS06). When we plot cumulative distribution
of the clusters in our sample as a function of d in Fig. 1, we notice that the present cluster sample
may not be complete beyond a distance of about 1.7 kpc. A comprehensive discussion on the
completeness of OCs has recently been given by Bonatto et al. (2006) which along with PKSS06
puts the total number of Galactic OCs in the order of 105.
The other sample used in the present study is that of the OB stars taken from the catalogue of
Reed (2006) which contains a total of 3457 spectroscopic observations for the 2397 nearby OB
stars3. The distance of OB stars are derived through their spectroscopic parallaxes. It is worth to
note that the individual distance of OB stars may not be accurate (Reed 1997), nevertheless, a
statistical study with significant number of OB stars can still be useful for the determination of
z⊙. Although, several studies on the determination of z⊙ using OB stars have already been carried
out on the basis of Hipparcos catalogue (Maı́z-Apellániz 2001, ECA06 and references therein),
however, it is noticed by some authors that the Hipparcos catalogue gives a reliable distance esti-
mation within a distance of only 200-400 pc from the Sun (cf. Torra, Fernández & Figueras 2000).
1 Updated information about the OCs is available in the on-line data catalogue at the web site http://www.astro.iag.usp.br/∼wilton/.
2 http://obswww.unige.ch/webda
3 For the detailed information about the data, the reader is referred to http://othello.alma.edu/∼reed/OBfiles.doc
c© 2007 RAS, MNRAS 000, 1–??
http://www.astro.iag.usp.br/~wilton/
http://othello.alma.edu/~reed/OBfiles.doc
4 Y. C. Joshi
0 5 10 15
Heliocentric distance (kpc)
Figure 1. A cumulative distribution diagram for the number of the open clusters with distance from the Sun. The vertical dashed line indicates the
completeness limit while continuous line represents the least square fit in that region.
This is exactly the region where OB stars in the Gould Belt (hereafter GB) lie and this can cause
an anomaly in the determination of z⊙ if the stars belonging to the GB are not be separated from
the data sample. Further Abt (2004) also noticed that classification of the stars in the Hipparcos
catalogue is uncertain by about +/-1.2 subclass in the spectral classifications and about 10% in
the luminosity classifications. In the present study we therefore preferred Reed’s catalogue of OB
stars over the Hipparcos catalogue despite lesser in numbers but are reported up to a distance of
about 1200 pc from the Sun and V ∼ 10 mag. The OB stars which have two different distances
in the catalogue are assigned the mean distance provided they do not differ by more than 100 pc,
otherwise we remove them from our analysis. If there are more than two distances available for
any OB star, we use the median distance. In this way, we considered a sample of 2367 OB stars in
this study.
c© 2007 RAS, MNRAS 000, 1–??
Displacement of the Sun from the Galactic Plane 5
6 7 8 9
6 7 8 9
log(Age)
Figure 2. The distribution of mean z with log(Age). A vertical dotted line shows upper boundary for the age limit considered as YOCs in the
present study. The horizontal dashed lines are drawn to represent the weighted mean z value of the YOCs in the z > 0 and z < 0 regions. Note
that there is one cluster of log(Age) = 10.0 (z ∼ −172 pc) which is not shown in the plot.
3 DISTRIBUTION OF Z WITH THE AGE
It is a well known fact that OCs are born and distributed throughout the Galactic disk. Young
clusters are normally seen in the thin disk while old clusters are found mainly in the thick disk of
the Galaxy which van den Bergh (2006) termed as a ‘cluster thick disk’. In order to study the z
distribution of clusters with their age, we assemble the clusters according to their log(Age) in 0.2
bins dex in width and estimate a mean value of z for each bin. A distribution of mean z vs log(Age)
is plotted in Fig. 2 which clearly demonstrates that the distribution of clusters perpendicular to the
GP has a strong correlation with their ages. While clusters with log(Age) < 8.5 (∼ 300 Myrs)
have almost a constant width of z distribution in both the directions of the GP, clusters older than
this have mean z > 100 pc which is continuously increases with the age. This indicates that the
thickness of the Galactic disk has not changed substantially on the time scale of about 300 Myrs
and most of the OCs, in general, formed somewhere inside ± 100 pc of the GP. A similar study
carried out by Lyngȧ (1982) using a smaller sample of 338 OCs found that clusters younger than
c© 2007 RAS, MNRAS 000, 1–??
6 Y. C. Joshi
one Gyr formed within ∼ 150 pc of the Galactic disk. It is quite apparent from the figure that the
clusters with log(Age) > 8.5 are found not only far away from the GP but are also highly scattered
in their distribution. However, this is not unexpected since it is a well known fact that clusters
close to GP gets destroyed with the time in a timescale of a few hundred million years due to
tidal interactions with the Galactic disk and the bulge, encounters with the passing giant molecular
clouds or mass loss due to stellar evolution. The few remaining survivors reach to outer parts
of the Galactic disk (cf. Friel (1995), Bergond, Leon & Guibert (2001)). If we just consider the
clusters with log(Age) < 8.5, which we describe as YOCs in our following analysis, we find that
the 226 clusters (∼ 38%) lie above GP while 363 clusters (∼ 62%) lie below GP. The asymmetry
in cluster density above and below the GP is a clear indication of inhomogeneous distribution of
clusters around GP. This asymmetry can be interpreted as due to the location of the Sun above the
GP, displacement of the local dust layer from the GP or asymmetry in the distribution of young
star formation near the Sun with respect to the GP or a combination of all these effects as pointed
out by the van den Bergh (2006). However, it is generally believed that it is the solar offset which
plays a major role in this asymmetry.
When we estimate weighted mean displacement along the GP for the clusters lying within
log(Age) < 8.5, we find a value of z = 37.0±3.0 pc above the GP and z = −64.3±2.9 pc below
the GP. If we consider a plane defined by the YOCs at zyoc, then zyoc can be expressed as,
zyoc =
n1z1 + n2z2
n1 + n2
where z1 and z2 are the mean z for the YOCs above and below the GP respectively; n1 and n2 are
number of YOCs in their respective regions. This gives us a value of zyoc = −25.4 ± 3.0 pc. If
the observed asymmetry in the z distribution of YOCs is indeed caused by the solar offset from
the GP then the negative mean displacement of z perpendicular to GP can be taken as z⊙ (towards
north direction) which is about 25.4 pc.
However, it is a well known fact that a large fraction of the young populations with ages under
60 Myrs in the immediate solar neighbourhood belong to the GB (Gould 1874, Stothers & Frogel
1974, Lindblad 1974). It is widely believed that this belt is associated with a large structure of
the interstellar matter including reflection nebulae, dark clouds, HI gas, etc. and is tilted by about
18 deg with respect to the GP and is stretches out to a distance of about 600 pc distance from
the Sun (Taylor, Dickman & Scoville 1987, Franco et al. 1988, Pöppel 1997). In our sample of
589 clusters, we found 38 such clusters which confined in the region of 600 pc from the Sun and
have age below 60 Myrs. Out of the 38 clusters, 26 (∼ 68%) follow a specific pattern in the d− z
c© 2007 RAS, MNRAS 000, 1–??
Displacement of the Sun from the Galactic Plane 7
-1000 -500 0 500 1000
d (pc)
(b)
Figure 3. The distribution of YOCs in the d− z plane (a). Clusters towards Galactic center direction are assigned positive distances while clusters
towards Galactic anti-center direction are assigned negative distances. Only clusters with |d| < 1 kpc are plotted here for the clarity. Dark points
in the shaded region indicate the YOC’s which could be associated with the GB and XY-distribution of these 26 GB members on the GP is shown
in (b) where clusters are positioned by their distance from the Sun which is marked by a star at the center.
plane as shown by the dark points in the shaded region of Fig. 3(a) which is slightly tilted with
respect to the GP and resembles the GB. The association of these clusters with the GB seems to
be confirmed by the fact that 23 out of 26 YOCs are clumped in the longitude range of about
180-300 degrees as shown in Fig. 3(b). This contains the most significant structures accounting
for the expansion of the GB (Torra, Fernández & Figueras 2000). A mean and median age of these
26 YOCs are 24.4 and 21.2 Myrs respectively. Although no detailed study has been carried out
on the fraction of the clusters actually belonging to the GB, however, on the basis of 37 clusters
in the log(Age) < 7.9 which lie within a distance of 500 pc from the Sun, PKSS06 found that
c© 2007 RAS, MNRAS 000, 1–??
8 Y. C. Joshi
about 55% of the clusters could be members of the GB. On the basis of OB stars in the Hipparcos
catalogue, Torra et al. (2000) estimated that roughly 60-65% of the stars younger than 60 Myr in
the solar neighbourhood belong to the GB. Although it is difficult to decide unambiguously which
clusters belong to the GB, we believe that most of these 26 YOCs could be associated with the GB
instead of the Local Galactic disk (hereafter LGD). Hence to reduce any systematic effect on the
determination of z⊙ due to contamination of the clusters belong to the GB, we excluded all these
26 clusters from our subsequent analysis except when otherwise stated. When we re-derived the
value of z⊙ from the remaining 563 clusters, we find it to be 22.9 ± 3.4 pc north of the Galactic
plane. A further discussion on the z⊙ and its dependence on various physical parameters shall be
carried out below.
4 DISTRIBUTION OF Z WITH THE MAXIMUM HELIOCENTRIC DISTANCE
4.1 z⊙ from YOCs
Various studies indicate that the plane of symmetry defined by the OCs is inclined with respect
to the GP (Lyngȧ 1982, Pandey, Bhatt & Mahra 1988, JOS05). If this is the case, then z⊙ shall
be dependent on the distance of OCs from the Sun and inclination angle between the two planes.
Therefore, a simple determination of z⊙ considering all the OCs could be misleading. To examine
to what extent z⊙ depends on the distance, we study the distribution of clusters and their mean
displacement from the GP as a function of the heliocentric distance (dmax) taking advantage of
the OCs observed up to a large distance. Since YOCs are primarily confined closer to the GP as
discussed in the previous section, it seems worthwhile to investigate z⊙ using only YOCs despite
the fact that the YOCs are generally embedded in dust and gas clouds and many are not observed
up to a large distance. Although we found that some young clusters are reported as far as 9 kpc
from the Sun but only less than 5% YOCs are observed beyond 4 kpc, most of them in the anti-
center direction of the Galaxy which we do not include in our analysis. Following all the above
cuts, we retain only 537 YOCs observed up to 4 kpc from the Sun as a working sample for the
present study. Their distribution normal to the GP as a function of Galactic longitude is plotted in
Fig. 4(a).
Fig. 4(b) shows the logarithmic distribution of the YOCs as a function of |z|. Here we derive the
number density in bins of 20 pc and error bars shown in the y-axis is the Poisson error. Following
an exponential-decay profile, we estimate a scale height for the YOCs as zh = 59.4
−3.0 pc which
is represented by a continuous straight line in the figure. However, a careful look in the figure
c© 2007 RAS, MNRAS 000, 1–??
Displacement of the Sun from the Galactic Plane 9
0 60 120 180 240 300 360
longitude (deg)
|z| (pc)
0 60 120 180 240 300 360
Figure 4. The distribution of YOCs in the l− z plane (a) and their density distribution as a function of z (b). The continuous line represents a least
square fit to the points.
suggests that the zh could be better described by the YOCs lying within z = ±250 pc and a least
square fit in this region gives a value of zh = 56.9
−3.4 pc.
It is however interesting to see if the scale height shows any shift in its value when considering
a possible displacement of the cluster plane from the GP. In order to analyse any effect of the
displacement on zh, we shift the cluster plane by 10, 15, 20 and 25 pc from the GP and recalculate
zh using YOCs within z < 250 pc. Our results are given in Table 1. It is seen that these values of
zh are quite consistent and we conclude that the solar offset has no bearing in the determination of
scale height. Using a sample of 72 OCs younger than 800 Myrs, Janes & Phelps (1994) reported
a scale height of zh ∼ 55 pc. Recently Bonatto et al. (2006) derived a scale height of zh = 48± 3
c© 2007 RAS, MNRAS 000, 1–??
10 Y. C. Joshi
Table 1. Scale heights determined due to various offsets between cluster plane and GP. All the values are in pc.
shift zh
0 56.9+3.8
10 55.1+3.3
15 54.7+3.2
20 57.2+3.9
25 56.6+3.9
pc using a sample of clusters younger than 200 Myrs, however, they have also found a larger zh
when considering OCs older than 200 Myrs. PKSS06 obtained a scale height of zh = 56 ± 3 pc
using the OCs within 850 pc from the Sun. Our value of zh = 56.9
−3.4 pc obtained with the YOCs
within 4 kpc from the Sun and z < 250 pc is thus consistent with these determinations.
An important issue that needs to be addressed in the determination of z⊙ is the possible con-
tamination by the outliers which are the objects lying quite far away from the GP that can seriously
affect the z⊙ estimation. Hence it is worthwhile at this point to investigate z⊙ using a subsample of
YOCs in different z zone excluding the clusters far away from the GP without significantly reduc-
ing the number of clusters. If the observed asymmetry in the cluster distribution is really caused
by an offset of the Sun from the GP, then a single value of z should result from the analysis. In
order to study z⊙ distribution using YOCs, we select three different zones normal to the z = 0
plane considering the clusters within |z| < 150 pc, |z| < 200 pc and |z| < 300 pc. Here, we have
not made smaller zones than |z| = 150 pc keeping in mind the fact that accounting lesser number
of YOCs could have resulted in a larger statistical error while zone larger than |z| = 300 pc can
cause significant fluctuations due to few but random clusters observed far away from the GP. To
determine z⊙, we keep on moving the mid-plane towards the southwards direction in bins of 0.1
pc to estimate the mean z till we get the mean value close to zero i.e. a plane defined by the YOCs
around which the mean z is zero within the given zone that is in fact equivalent to z⊙. This ap-
proach of a running shift of z in order to determine z⊙ is preferred over the simple mean to remove
any biases owing to the displacement of the cluster plane itself towards the southwards direction.
Hence it gives a more realistic value of the z⊙. We estimate z⊙ with different cut-off limits in dmax
using an increment of 0.3 kpc in each step and for all the three zones. The variation in z⊙ with
dmax for all the zones is illustrated in Fig. 5. The figure gives a broad idea of the variation in z⊙
c© 2007 RAS, MNRAS 000, 1–??
Displacement of the Sun from the Galactic Plane 11
1 2 3 4
Figure 5. The variation in z⊙ with the maximum distance of YOCs from the Sun (see text for the detail).
which increases with the increasing distance as well as zone size, however, it has to be noted that
the range of variation is very small and varies between ∼ 13 to 21 pc throughout the regions.
Here, it is necessary to look into the increasing trend in z⊙ whether it is internal variation or
due to our observational limitations. We note that 21 out of 25 YOCs observed beyond 1 kpc in
the region |z| > 150 pc are observed in the direction of l = 120◦ < l < 300◦. Moreover, most of
these young clusters are observed below GP and majority of them are located in the direction of
l ∼ 200◦ < l < 300◦. This could be due to low interstellar extinction in the Galactic anti-center
direction which is least around the longitude range 220◦ − 250◦ (Neckel & Kare 1980, Arenou,
Grenon & Gómez 1992, Chen et al. 1998). Based on the study of extinction towards open clusters
from the same catalogue of Dias et al. (2002), we found the direction of minimum extinction
towards l ∼ 230◦ below the GP (JOS05). Hence a lower extinction allows us to have a higher
observed cluster density in the surrounding area of the l ∼ 230◦ as well as observable up to farther
distance which reflected in our larger value of z⊙ with the increase of the distance. Therefore, we
conclude that the larger z⊙ values obtained with the bigger zone or greater distance is not due to
c© 2007 RAS, MNRAS 000, 1–??
12 Y. C. Joshi
Figure 6. The X-Z distribution of the OB stars in (a). The open circles represent the OB stars belong to LGD and filled circles represent possible
GB members. The x-axis is drawn for only ±600 pc to show the GB members clearly which is quite evident in the diagram. Their distribution in
the l− z plane is drawn in (b). A number density distribution of the OB stars belong to the LGD as a function of z is shown in (c). The continuous
line here indicates a least square fit to the points.
internal variation in z⊙ but due to our observational constraint. In general, we found a value of
17± 3 pc for the z⊙.
4.2 z⊙ from OB stars
Since YOCs are on an average more luminous than the older clusters and also possess a large
number of OB stars hence lends us an opportunity to compare the results with the independent
study using massive OB stars which are also a younger class of objects and confined very close to
the GP. In the present analysis, we use 2367 OB stars which are strongly concentrated towards the
c© 2007 RAS, MNRAS 000, 1–??
Displacement of the Sun from the Galactic Plane 13
GP as those of the YOCs. However, a natural problem in the determination of z⊙ is to separate the
OB stars belonging to the GB with the LGD. The issue has already been dealt with a great detail
by several authors (Taylor, Dickman & Scoville 1987, Comeron, Torra & Gomez 1994, Cabrera-
Caño, Elias & Alfaro 1999, Torra, Fernández & Figueras 2000). A recent model proposed by the
ECA06 based on the three dimensional classification scheme allows us to determine the probability
of a star belonging to the GB plane or LGD. A detailed discussion of the method can be found in
the ECA06 and we do not repeat it here. Though it is not possible to unambiguously classify the
membership of the stars among two populations but to statistically isolate the GB members from
our sample, we used the results derived for the GB plane by the ECA06 through the exponential
probability density function for the O-B6 stars selected from the Hipparcos catalogue while we
used an initial guess value of 60 pc and -20 pc for the scale height and z⊙ respectively for the
GP. Since typical maximum radius of the GB stars is not greater than about 600 pc (Westin 1985,
Comeron, Torra & Gomez 1994, Torra, Fernández & Figueras 2000), we search OB stars belonging
to GB up to this distance only.
Following the ECA06 method, we found that 315 stars out of 2367 OB stars of our data sample
belong to the GB. Further, 22 stars do not seem to be associated with either of the planes. In this
way, we isolate 2030 OB stars belonging to the LGD which are used in our following analysis.
A X − Z distribution of the OB stars is shown in Fig. 6(a) (in the Cartesian Galactic coordinate
system, positive X represents the axes pointing to the Galactic center and positive Z to the north
Galactic pole) and their distribution in the GP as a function of Galactic longitude is displayed in
Fig. 6(b). A clear separation of the GB plane from the GP can be seen in the figure which follows a
sinusoidal variation along the Galactic longitude and reaches its lower latitude at l = 200− 220◦.
A number density in the logarithmic scale of the OB stars belonging to LGD is shown in Fig
6(c) as a function of |z| where stars are counted in the bins of 20 pc. We derive a scale height of
zh = 61.4
−2.4 pc from the least square fit that is drawn by a continuous straight line in the same
figure. Maı́z-Apellániz (2001) using a Gaussian disk model determined a value of zh = 62.8± 6.4
pc which is well in agreement with our result. However, Reed (2000) derived a broad range of
zh ∼ 25 − 65 pc using O-B2 stars while ECA06 estimates smaller value of 34 ± 3 pc using O-
B6 stars which are more in agreement with the 34.2 ± 3.3 pc derived with the self-gravitating
isothermal disk model of Maı́z-Apellániz (2001).
It is seen in Fig. 6(b) that the OB stars are sparsely populated around the GP in comparison
of the YOCs and a significant fraction of them are below z = −150 pc. In order to study the z⊙
distribution with dmax, we here make four different zones normal to the z = 0 plane considering
c© 2007 RAS, MNRAS 000, 1–??
14 Y. C. Joshi
0.6 0.8 1.0 1.2
0.6 0.8 1.0 1.2
Figure 7. A similar plots as in Fig. 5 but for the OB stars. A big dot here represents the z⊙ using all the OB stars considered in our study.
the OB stars within |z| < 150 pc, |z| < 200 pc, |z| < 250 and |z| < 350 pc. The z⊙ is estimated
by the same procedure as followed for the YOCs. A variation in the z⊙ with dmax is illustrated in
Fig. 7 where we have made a bin size of 50 pc. It is seen that z⊙ derived in this way for the OB
stars show a continuous decay with the dmax as well as size of the zone which seems to be due to
the preferential distribution of the OB stars below the GP. When we draw the spatial distribution
of OB stars in the X-Y coordinate system in Fig. 8, we notice that most of the OB stars are not
distributed randomly but concentrated in the loose group of the OB associations. This difference
in density distribution of OB stars could be primarily related with the star forming regions. The
number of OB stars below the GP are always found to be greater than the OB stars above the GP
in all the distance bin of 100 pc. However, in the immediate solar neighbourhood within 500 pc
distance, OB stars below the GP are as much as twice than those above the GP. This is clearly
a reason behind a large value of z⊙ in the smaller dmax value which systematically decreases as
more and more distant OB stars are included. A mean value of 19.5±2.2 pc was obtained by Reed
(2006) using the same catalogue of 2397 OB stars, albeit without removing the GB members. In
c© 2007 RAS, MNRAS 000, 1–??
Displacement of the Sun from the Galactic Plane 15
|z| (pc)
Figure 8. A spatial distribution of the OB stars belonging to the LGD projected on the GP where position of the Sun is shown by a star symbol at
the center. Open triangles and filled circles represent the stars below and above the GP respectively. Size of the points signify the distance of OB
stars normal to the GP as indicated at the top of the diagram. The co-centric circles at an equal distance of 100 pc from 500 pc to 1200 pc are also
drawn.
fact this is also noticeable in the present study (see big dot in Fig. 7). However, we cannot give a
fixed value of z⊙ from the present analysis of the OB stars as it depends strongly on the dmax as
well as selection of the z cut-off.
5 EXPONENTIAL DECAY OF THE Z DISTRIBUTION
It is normally assumed that the cluster density distribution perpendicular to the GP could be well
described in the form of a decaying exponential away from the GP, as given by,
N = N0exp
|z + z⊙|
where z⊙ and zh are the solar offset and scale height respectively. We determine z⊙ by fitting the
above function. For example in Fig. 9(a), we have drawn z distribution in 30 pc bin considering
c© 2007 RAS, MNRAS 000, 1–??
16 Y. C. Joshi
Figure 9. The z distribution for all the OCS within |z| < 300 pc and d < 4 kpc (a). A least square exponential decay profile fit is also drawn by
the continuous line. The z⊙ derived from the fits for different dmax is shown in (b). The same is shown for the OB stars in (c) and (d).
all the 537 YOCs which lie within |z| < 300 pc and d < 4 kpc. Since we have already derived the
scale height for the YOCs as 56.9 pc in our earlier section hence kept it fixed in the present fit. A
least square exponential is fitted for all the distance limits. Here we do not divide the data sample
in different zones of z as we have done in the previous section since only the central region of ±
150 pc has significant effect on the determination of solar offset in the exponential decay method
as can be seen in Fig. 9(a).
Our results are shown in Fig. 9(b) where we have displayed z⊙ derived for the YOCs as a
function of dmax. We can see a consistent value of about 13 pc for z⊙ except when only YOCs
closer to 1 kpc from the Sun are considered. This may be due to undersampling of the data in
that region. Our estimate is close to the Bonatto et al. (2006) who reported a value of 14.2 ± 2.3
pc following the same approach, however, clearly lower in comparison of z⊙ determined in the
previous section. Here, it is worth to point out that following the same approach PKSS06 found
a significantly large value of z⊙ (∼ 28 − 39 ± 9 pc) when considering only those clusters within
log(Age) < 8.3. However, the value of z⊙ substantially comes down to 8± 8 pc for the clusters in
c© 2007 RAS, MNRAS 000, 1–??
Displacement of the Sun from the Galactic Plane 17
the age range of 8.3 < log(Age) < 8.6 in their study. If we confine our sample to log(Age) < 8.3
only, we find that z⊙ increases marginally up to 14.6 pc which is not quite different than our
earlier estimate but still considerably lower than the PKSS06 and we suspect that their values are
overestimated by a significant factor.
A similar study for the z distribution of OB stars is also carried out and our results are shown
in Fig. 9(c), as an example, considering all the data sample. The resultant variation of z⊙ for the
different dmax are shown in Fig. 9(d). It is clearly visible that z⊙ varies in the range of 6 to 12 pc
which is substantially lower in comparison of the values obtained in the previous method for the
same data set. Reed (1997, 2000) also reported a similar lower value of ∼ 6 to 13 pc for the z⊙
using exponential model. A significant feature we notice here is that the z distribution to the left
and right of the peak do not seem symmetric particularly in the bottom half of the region where
exponential fit in the z > z(Nmax) region is higher than their observed value while reverse is
the case for the z < z(Nmax) region. Therefore, a single exponential profile fit to the distribution
of the OB stars for the whole range results in a large χ2 since points are well fitted only over a
short distance interval around the mid-plane. This may actually shift z⊙ towards the lower value
which results in an underestimation for the z⊙ determination. We believe that a single value of z⊙
determined through exponential decay method is underestimated and needs further investigation.
6 DISTRIBUTION OF Z WITH THE GALACTIC LONGITUDE
A distribution of clusters in the Galactic longitude also depends upon the Age (Dias & Lépine,
2005) and it is a well known fact that the vertical displacement of the clusters from the GP is cor-
related with the age of the clusters. Hence, one alternative way to ascertain the mean displacement
of Sun from the GP is to study the distribution of YOCs and OB stars projected on the GP as a
function of the Galactic longitude where it is noticeable that the distribution follows an approx-
imately sinusoidal variation. We estimated z⊙ in this way in our earlier study (JOS05) although
analysis there was based on the differential distribution of interstellar extinction in the direction of
To study the variation of z as a function of Galactic longitude, we assemble YOCs in 30◦
intervals of the Galactic longitude and mean z is determined for each interval. Here we again divide
the YOCs in three different zones as discussed in Sect. 4 and the results are illustrated in Fig. 10
where points are drawn by the filled circles. Considering the scattering and error bars in mind,
we do not see any systematic trend in the z variation and a constant value of 14.5 ± 2.2, 17.4 ±
c© 2007 RAS, MNRAS 000, 1–??
18 Y. C. Joshi
0 90 180 270 360
|z|<150
|z|<200
|z|<300
longitude (deg)
Figure 10. Mean z of the YOCs as a function of Galactic longitude. Here open and filled circles represent the z distribution with and without GB
members respectively. A least squares sinusoidal fit is drawn by the continuous line. Respective regions in |z| and z⊙ determined from the fit are
shown at the top of each plot.
2.6, 18.5 ± 2.9 pc (in negative direction) are found for |z| < 150, |z| < 200 and |z| < 300 pc
respectively. However, when we consider all the YOCs including possible GB members as drawn
by open circles in the same figure, we found a weak sinusoidal variation as plotted in Fig. 10
by the continuous lines and has a striking resemblance with z distribution at maximum Galactic
absorption versus longitude diagram (Fig. 8 of JOS05). We fit a function,
z = −z⊙ + asin(l + φ),
to the z(l) distribution with z⊙ estimated from the least square fits in all the three zones and
resultant values are given at the top of each panel in Fig. 10. It is clearly visible that the z⊙
estimated in this way varies between 17 to 20 pc and it is not too different for the case when GB
members are excluded. The largest shift in the mean z below the GP occurs at about 210◦ which
is the region associated with the GB (see Fig. 6(b)) as can be seen by the maximum shift between
filled and open circular points in Fig. 10.
In Fig. 11, we plot a similar variation for the OB stars in four different zones as selected in
c© 2007 RAS, MNRAS 000, 1–??
Displacement of the Sun from the Galactic Plane 19
0 90 180 270 360
0 90 180 270 360
|z|<150 |z|<200
|z|<250 |z|<350
longitude (deg)
Figure 11. A similar plots as in Fig. 10 but for the OB stars.
Sect. 4 and it is noticeable that the sinusoidal variation is more promising for the OB stars. The
values of z⊙ ranges from 8.4 to 18.0 and like in all our previous methods, it shows a significant
variation among different dmax for the OB stars.
It is interesting to note that mean z shows a lower value in the vicinity of l ∼ 15◦ − 45◦ region
in both the YOCs and OB stars. Pandey, Bhatt & Mahra (1988) argued that since the maximum
absorption occurs in the direction of l ∼ 50◦ as well as reddening plane is at the maximum distance
from the GP in the same direction of the Galactic longitude, it may cause a lower detection of the
objects. We also found a similar result in JOS05. In his diagram of the distribution of OCs as a
function of longitude, van den Bergh (2006) also noticed that the most minimum number of OCs
among various dips lies in the region of l ∼ 50◦ where there is an active star forming region,
Sagitta. However, the lack of visible OCs are compensated by the large number of embedded
clusters detected from the 2MASS data (Bica, Dutra & Soares 2003). We therefore attribute an
apparent dip in z⊙ around the region l ∼ 50
◦ to the observational selection effects associated due
c© 2007 RAS, MNRAS 000, 1–??
20 Y. C. Joshi
to star forming molecular clouds which may result in the non-detection of many potential YOCs
towards far-off directions normal to the GP.
7 CONCLUDING REMARKS
The spatial distribution of the young stars and star clusters have been widely used to probe the
Galactic structure due to their enormous luminosity and preferential location near the GP and
displacement of the Sun above GP is one issue that has been addressed before by many authors. In
the present paper we considered a sample of 1013 OCs and 2397 OB stars which are available in
the web archive. Their z distribution around the GP along with the asymmetry in their displacement
normal to the GP allowed us to statistically examine the value of z⊙. The cut-off limit of 300 Myrs
in the age for YOCs has been chosen on the basis of their distribution in the z − log(Age) plane.
We have made an attempt to separate out the OCs and OB stars belonging to the GB from the
In our study, we have attempted three different approaches to estimate z⊙ using 537 YOCs
lying within 4 kpc from the Sun. We have studied z⊙ variation with the maximum heliocentric dis-
tance and found that z⊙ shows a systematic increase when plotted as a function of dmax, however,
we noticed that it is more related to observational limitations due to Galactic absorption rather
that a real variation. After analysing these YOCs, we conclude that 17 ± 3 pc is the best estimate
for the z⊙. A similar value has been obtained when we determined z⊙ through the z distribution
of YOCs as a function of Galactic longitude, however, a smaller value of about 13 pc is resulted
through exponential decay method. Considering the YOCs within z < 250 pc, we determined that
the clusters are distributed on the GP with a scale height of zh = 56.9
−3.4 pc and noticed that
the z⊙ has no bearing in the estimation of zh. A scale height of zh = 61.4
−2.4 pc has also been
obtained for the OB stars belonging to the LGD.
A comparative study for the determination of z⊙ has been made using the 2030 OB stars lying
within a distance of 1200 pc from the Sun and belonging to the LGD. It is seen that the z⊙ obtained
through OB stars shows a substantial variation from about 8 to 28 pc and strongly dependent on
the dmax as well as z cut-off limit. It is further noted that z⊙ estimated through exponential decay
method for the OB stars gives a small value in comparison of the YOCs and ranges from 6-12 pc.
Therefore, a clear cut value of z⊙ based on the OB stars cannot be given from the present study,
however, we do expect that a detailed study of OB associations in the solar neighbourhood by the
future GAIA mission may provide improved quality and quantity of data to precisely determine z⊙
c© 2007 RAS, MNRAS 000, 1–??
Displacement of the Sun from the Galactic Plane 21
in order to understand the Galactic structure. This paper presents our attempt to study the variation
in z⊙ due to selection of the data and method of determination using a uniform sample of YOCs
and OB stars as a tool. It is quite clear from our study that the differences in approach and choice
of the data sample account for most of the disagreements among z⊙ values.
ACKNOWLEDGMENTS
This publication makes use of the catalog given by W. S. Dias for the OCs and by B. C. Reed for
the OB stars. Author is thankful to the anonymous referee for his/her comments and suggestions
leading to the significantly improvement of this paper. The critical remarks by John Eldridge are
gratefully acknowledged.
REFERENCES
Abt H. A., 2004, ApJS, 155, 175
Arenou F., Grenon M., Gómez A., 1992, A&A, 258, 104
Bergond G., Leon S., Guibert J., 2001, A&A, 377, 462
Bica E., Dutra C. M., Soares J., Barbuy B., 2003, A&A, 404, 223
Binney J., Gerhard O., Spergel D., 1997, MNRAS, 288, 365
Blitz L., Teuben P., 1997, The Observatory, 117, 109
Bonatto C., Kerber L. O., Bica E., Santiago B. X., 2006, A&A, 446, 121
Cabrera-Caño J, Elias F., Alfaro E.J., 2000, ApSS, 272, 95
Caldwell J. A. R., Coulson I. M., 1987, AJ, 93, 1090
Chen B., Vergely J. L., Valette B., Carraro G, 1998, A&A, 336, 137
Chen B., Figueras F., Torra J., Jordi C., Luri X., Galadı́-Enrı́quez D., 1999, A&A, 352, 459
Chen B., Stoughton C., Smith, J. A., et al., 2001, ApJ, 553, 184
Chen W. P., Chen C. W., Shu C. G., 2004, AJ, 128, 2306
Cohen M., 1995, ApJ, 444, 874
Comeron F., Torra J., Gomez A. E., 1994, A&A, 286, 789
Conti P. S., Vacca W. D., 1990, AJ, 100, 431
Dias W. S., Alessi B. S., Moitinho A., Lépine J. R. D., 2002, A&A, 389, 871
Dias W. S., Lépine J. R. D., 2005, ApJ, 629, 825
Elias F., Cabrera-Caño J., Alfaro, E. J., 2006, AJ, 131, 2700 (ECA06)
Fernie J. D., 1968, AJ, 73, 995
Franco J., Tenerio-Tagle G., Bodenheimer P., Rozyczka M., Mirabel I. F., 1988, ApJ, 333, 826
Friel E. D., 1995, ARAA, 33, 381
Gould B. A., 1874, Proc. Am. Ass. Adv. Sci, 115
Gum C. S., Kerr F. J., Westerhout G., 1960, MNRAS, 121, 132
Humphreys R. M., Larsen J. A., 1995, AJ, 110, 2183
Hammersley P. L., Graźon F., Mahoney T., Calbet X., 1995, MNRAS, 273, 206
Janes K., Adler D., 1982, ApJS, 49, 425
Janes K., Phelps R. L., 1994, AJ, 108, 1773
c© 2007 RAS, MNRAS 000, 1–??
22 Y. C. Joshi
Joshi Y. C., 2005, MNRAS, 362, 1259 (JOS05)
Kraft R. P., Schmidt M., 1963, ApJ, 137, 249
Lada C. J., Lada E. A., 2003, ARAA, 41, 57
Lindblad P. O., 1974, in: Stars and Milky Way System, ed. L. N. Marvridis, Springer-Verlag
Lyngȧ G., 1982, A&A, 109, 213
Lyngȧ G., 1987, Computer Based Catalogue of Open Cluster Data, 5th ed. (Strasbourg CDS)
Maı́z-Apellániz J., 2001, AJ, 121, 2737
Méndez R. A., van Altena, W. F., 1998, A&A, 330, 910
Neckel T., Klare G., 1980, AAS, 42, 251
Pandey A. K., Mahra H. S., 1987, MNRAS, 226, 635
Pandey A. K, Bhatt, B. C., Mahra, H. S., 1988, A&A, 189, 66
Piskunov A. E, Kharchenko N. V., Röser S., Schilbach E., Scholz R. -D., 2006, A&A, 445, 545 (PKSS06)
Pöppel W. G. L., 1997, Fundamental of Cosmic Physics, 18, 1
Reed B. C., 1997, PASP, 109, 1145
Reed B. C., 2000, AJ, 120, 314
Reed B. C., 2006, JRAC, 100, 146
Taylor D. K., Dickman R. L., Scoville N. Z., 1987, ApJ, 315, 104
Stothers R., Frogel J., 1974, AJ, 79, 456
Torra J., Fernández D., Figueras F., 2000, A&A, 359, 82
Torra J., Fernández D., Figueras F., Comeŕon F., 2000, ApSS, 272, 109
van den Bergh, S., 2006, AJ, 131, 1559
Westin T. N. G., 1985, A&AS, 60, 99
Yamagata T., Yoshii Y, 1992, AJ, 103, 117
c© 2007 RAS, MNRAS 000, 1–??
Introduction
The Data
Distribution of z with the age
Distribution of z with the maximum heliocentric distance
z from YOCs
z from OB stars
Exponential decay of the z distribution
Distribution of z with the Galactic longitude
Concluding remarks
REFERENCES
|
0704.0955 | Charges from Attractors | 0704.0955
Charges from Attractors
Nemani V. Suryanarayana 1 and Matthias C. Wapler 2
1 Theoretical Physics Group and
Institute for Mathematical Sciences
Imperial College London, UK
E-mail: [email protected]
2 Perimeter Institute for Theoretical Physics,
Waterloo, ON, N2L 2Y5, Canada
Department for Physics and Astronomy,
University of Waterloo, Waterloo, ON, N2L 3G1, Canada
Kavli Institute for Theoretical Physics,
University of California, Santa Barbara, CA, 93106, USA
E-mail: [email protected]
September 26, 2018
Abstract
We describe how to recover the quantum numbers of extremal black holes from their near
horizon geometries. This is achieved by constructing the gravitational Noether-Wald charges
which can be used for non-extremal black holes as well. These charges are shown to be equiva-
lent to the U(1) charges of appropriately dimensionally reduced solutions. Explicit derivations
are provided for 10 dimensional type IIB supergravity and 5 dimensional minimal gauged su-
pergravity, with illustrative examples for various black hole solutions. We also discuss how to
derive the thermodynamic quantities and their relations explicitly in the extremal limit, from
the point of view of the near-horizon geometry. We relate our results to the entropy function
formalism.
http://arxiv.org/abs/0704.0955v2
1 Introduction
Studies of extremal black holes in string theory have regained importance with the advent of
the attractor mechanism. In its simplest form the attractor mechanism states that the near
horizon geometry of an extremal black hole is fixed in terms of its charges. Further, it has been
realized that there is a single function, called the entropy function, which determines the near
horizon geometry of extremal black holes [1] (see also [2]). Even though the entropy function
provides the non-zero charges such as the electric, magnetic charges and angular momenta, for
many extremal black holes, it does not always give the correct charges. For instance, there are
apparent discrepancies when there are Chern-Simons terms for the gauge fields present in the
Lagrangian. This is the case, for instance, in 5d minimal (and minimally gauged) supergravities.
On the other hand it has been believed [4] that the near horizon geometry of an extremal
rotating black hole of 5d supergravities knows about only part of the the full black hole angular
momentum, called the horizon angular momentum. In [4] this has been argued to be the case
for the BMPV black hole [16].
Given that finding the near horizon geometries of the yet to be discovered extremal black
hole solutions might be easier than finding the full black hole solutions, it will be useful to
have a prescription to extract the quantum numbers of the full black hole from its near horizon
geometry. In this note we show, by careful analysis of the near horizon geometries of these black
holes, that one can find the full set of asymptotic charges and angular momenta of extremal
rotating black holes that satisfy certain assumptions.
For this, we first construct gravitational Noether charges following Wald [5] for several su-
pergravity theories. These charges can be defined for Killing vectors of any given solution of the
theory of interest. We mainly focus on type IIB in 10d, minimal and gauged supergravities in 5d.
We present closed form expressions for the Nother-Wald charges of these theories as integrals
over compact submanifolds of co-dimension 2 of any given solution.
The 5d minimal gauged supergravity can be obtained by a consistent truncation of type IIB
reduced on S5 [22] (see also [23]). We show that the charges of the 5d theory can be obtained by
the same dimensional reduction of the corresponding 10d charges. We further reduce the theory
down to 3 dimensions and show that the Nother-Wald charges corresponding to Killing vectors
that generate translations along compact directions are the same as the usual Noether charges
for the corresponding Kaluza-Klein gauge fields in the dimensionally reduced theory. We use
the understanding of the charges in the reduced theory to show how the entropy function may
be modified to reproduce the charges of the 5d black holes.
We will argue that these Noether-Wald charges can be used to extract the charges of extremal
black holes from their near horizon geometries under certain assumptions which will be discussed
later on. Thus the formulae presented in this paper should prove useful in extracting the con-
served charges of an extremal black hole from only its near-horizon geometry without having to
know the full black hole solution. We exhibit the successes and limitations of our formulae by
considering the examples of Gutowski-Reall black holes [12] and their generalizations [17] and
BMPV [16, 4] black holes, black rings [18] and the 10d lift of Gutowski-Reall black holes [13].
The analysis of the conserved charges in this paper can be applied to many geometries other
than the extremal black holes considered here and in particular to non-extremal black holes too.
In addition to the charges of a black hole, one is typically interested in the entropy, the mass,
as well as the laws of black hole thermodynamics. Up to now, the entropy has been defined in
terms of a Noether charge only for non-extremal black holes [5]. To find these thermodynamic
quantities and the laws of thermodynamics on the “extremal shell”, it was necessary to take
the extremal limit of the relations defined for the non-extremal black holes (see for instance [1]).
Furthermore, computations of quantities such as the mass, the euclidean action and relations like
the first law and the Smarr formula relied on computing quantities in the asymptotic geometry.
Hence, it would be desirable to derive appropriate relations intrinsically for extremal black holes,
and with only minimal reference to the existence of an asymptotic geometry.
With this motivation, in the second part of the paper, we propose a definition of the entropy
for extremal black holes in the near horizon geometry that does not require taking the extremal
limit of Wald’s entropy, but agrees with it. With a similar approach, we also derive the extremal
limit of the first law from the extremal geometry, assuming only that the near-horizon geometry
be connected to some asymptotic geometry. This definition of the entropy further allows us
to derive a statistical version of the first law [6]. We also show that this gives us the entropy
function directly from a study of the appropriate Noether charge in the near-horizon geometry
of extremal black holes. We will comment on the interpretation of the mass as well, from the
point of view of the near horizon solution.
The rest of the paper is organized as follows. In section 2, we review Wald’s construction of
gravitational Noether charges and use it to derive the charges for type IIB supergravity (with
the metric and the five-form fields) and for the 5d minimal and gauged supergravity theories and
show that they are related by dimensional reduction. In section 3, we show that the Noether-
Wald charges are identical to the standard Noether charges for the Kaluza-Klein U(1) gauge
fields of the corresponding compact Killing vectors. We also discuss various assumptions under
which these charges, when evaluated anywhere in the interior of the geometry, match with the
standard Komar integrals evaluated in the asymptotes. Some issues of gauge (in)dependence
of our charges are also addressed there. In section 4, we demonstrate how our formulae work on
several examples of interest. The readers who are only interested in the formalism may skip this
section. In section 5, we turn to modifying the entropy function formalism to include the Chern-
Simons terms. In section 6, we discuss thermodynamics of the extremal black holes and define
various physical quantities like the entropy, chemical potentials for the charges and the mass.
We end with conclusions in section 7. The example for black rings is given in the appendix.
2 Charges from Noether-Wald construction
Here we derive expressions for the gravitational Noether charges corresponding to Killing isome-
tries of the gravitational actions we are interested in following Wald [5, 7]. We review first
the general formalism and point out some relevant subtleties. Then we construct these charges
for 10d type IIB supergravity and for minimally gauged supergravity and Einstein-Maxwell-CS
theory in 5d. Finally, we show how the 10d and 5d expressions can be related by dimensional
reduction.
2.1 Review of Noether construction
Let us first review the construction of the charges and discuss some of the relevant properties. In
[7], Lee and Wald described how to construct the Noether charges for diffeomorphism symmetries
of a Lagrangian L(φi = gµν , Aµ, · · · ), a d-form in d spacetime dimensions. For this, one first
writes the variation of L under arbitrary field variations δφi as
δL = E_i(φ) δφ^i + dΘ(δφ)    (1)
where Ei(φ) = 0 are the equations of motion and Θ is a (d − 1)-form. Secondly, one finds the
variation of the Lagrangian under a diffeomorphism
δξL = d(iξ L), (2)
where ξa is the (infinitesimal) generator of a diffeomorphism. Then one defines the (d− 1)-form
current Jξ
Jξ = Θ(δξφ)− iξ L (3)
where δ_ξφ^i are the variations of the fields under the particular diffeomorphism. Then J_ξ is
conserved, i.e. dJξ = 0, for any configuration satisfying the equations of motion. Since Jξ is
closed, one can write (for trivial cohomology)
Jξ = dQξ (4)
for some (d − 2)-form charge Qξ. Now consider ξ to be a Killing vector and suppose that the
field configurations on the given solution respect the symmetry generated by it, L_ξφ^i = 0. Since
Θ(δ_ξφ^i) is linear in L_ξφ^i, we have Θ(δ_ξφ^i) = 0 and so J_ξ = −i_ξL. Next, let us illustrate that the
charge defined as the integral ∫_{Σ_r} Q_ξ over a compact (d−2)-surface Σ_r is conserved when (i) ξ is
a Killing vector generating a periodic isometry or (ii) when the current Jξ = 0 (as for Killing
vectors in theories with L = 0 on the solutions). Consider a (d − 1)-hypersurface M12 which is
foliated by compact (d−2)-hypersurfaces Σr over some interval R12 ⊂ R. Using Gauss’ theorem
one has
∫_{Σ_2} Q_ξ − ∫_{Σ_1} Q_ξ = ∫_{M_{12}} J_ξ    (5)
for ∂M_{12} = {Σ_1, Σ_2}. If J_ξ = 0, it follows that the charge ∫_{Σ_r} Q_ξ does not depend on Σ_r and
therefore is conserved along the direction r. Next, let us assume that ξ generates translations
along a periodic direction of Σ_r. In general, ∫_{M_{12}} J_ξ receives contributions from terms in J_ξ that
contain the one-form ξ̂ dual to the Killing vector field ξ and terms that do not. The terms not
involving ξ̂ vanish by the periodicity of ξ. Since Jξ = −iξL, there are no terms involving ξ̂.
Therefore ∫_{Σ_r} Q_ξ is again independent of Σ_r.
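As a simple point of orientation (the standard pure-gravity special case, quoted here for illustration rather than derived), for Einstein gravity L = (1/16πG) √−g R the construction above yields, in the conventions used below, the (d−2)-form charge
Q_ξ = − (1/16πG) ⋆ dξ̂ ,
whose integral over a sphere at infinity with ξ = ∂_φ is the familiar Komar-type angular momentum; the gauge-field terms in the charges constructed in the following subsections are corrections to this expression.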
We will now discuss two important ambiguities in the above prescription. The first one is
that the charge density defined by the equation Jξ = dQξ is ambiguous as Qξ → Qξ + dΛξ does
not change Jξ for some (d-3)-form Λξ. The extra term does not contribute to the integrated
charge only if Λξ is a globally defined (d-3)-form on Σr, that is, it is periodic in the coordinates
of Σr and non-singular. While this is the case for most of our examples, there may be situations
in which, for instance, some gauge potentials that go into Qξ are only locally defined. Similarly,
conservation of ∫_{Σ_r} Q_ξ is not guaranteed if any component of Q_ξ ∈ Ω^{d−2} is not globally defined.
To illustrate this, consider the Jξ = dQξ = 0 case and let n be a normal to Σr, such that dn = 0.
L_n ∫_{Σ_r} Q_ξ = ∫_{Σ_r} ( i_n d + d i_n ) Q_ξ = ∫_{Σ_r} i_n dQ_ξ + ∫_{Σ_r} d(i_n Q_ξ) = ∫_{Σ_r} d(i_n Q_ξ) ,    (6)
which is only forced to vanish if inQξ is globally defined on Σr. The second, and a more impor-
tant, ambiguity comes from possible boundary terms in the Lagrangian L. For the boundary
terms S_bdy. = ∫_{∂M} L_bdy. = ∫_M dL_bdy., the variation that gives the equations of motion is done on
the boundary,
δ_ξ S_bdy. = ∫_{∂M} δ_ξ L_bdy. = ∫_M d(δ_ξ L_bdy.) .    (7)
Since δξLbdy. = iξ(dLbdy.) + d(iξLbdy.), the current is just given by
Jξ = −iξ(dLbdy.) + iξ(dLbdy.) + d(iξLbdy.) (8)
and hence the charge is Q_ξ = i_ξL_bdy.. This implies that boundary terms contribute only to the
conserved charges ∫_{Σ_r} Q_ξ of (Killing) vectors that do not lie in Σ_r.
2.2 The Noether-Wald charges for type IIB supergravity
Now we would like to find the Noether-Wald charges in 10d type IIB supergravity for config-
urations with just the metric and the 5-form turned on. As is standard, we work with the
action
L_IIB = (1/16πG_10) √−g [ R − (1/(4·5!)) F²_(5) ]    (9)
neglecting the self-duality of the 5-form and imposing it only at the level of the equations of motion.
We follow the procedure outlined in section 2.1 to find the Noether-Wald currents. Using the
variations
δ(√−g R) = √−g [ R_µν − (1/2) R g_µν ] δg^µν + √−g g^µν [ ∇_σ δ̄Γ^σ_µν − ∇_ν δ̄Γ^σ_µσ ]   and
δ(√−g F²_(5)) = √−g [ 5 F^(5)_µκσωλ F^(5)_ν^κσωλ − (1/2) g_µν F²_(5) ] δg^µν
− 2·5! [ δC^(4)_νσωλ ∂_µ(√−g F^µνσωλ) − ∂_µ(δC^(4)_νσωλ F^µνσωλ √−g) ] ,    (10)
where δ̄Γ^λ_µν = (1/2) g^λσ [ ∇_µ δg_σν + ∇_ν δg_µσ − ∇_σ δg_µν ], one can find the equations of motion
R_µν − (1/96) F^(5)_µκσωλ F^(5)_ν^κσωλ = 0   and   ∂_µ( √−g F^µνσωλ ) = 0 .    (11)
These are supplemented by the self-duality condition ⋆(10)F
(5) = F (5). The self-duality constraint
F^(5) = ⋆F^(5) implies that F²_(5) = 0, and then the metric equation of motion in (11) implies R = 0
for any solution. Hence the Lagrangian vanishes on the solutions and therefore the Noether-Wald
current in (3) is given entirely by the 9-form Θ (or equivalently by its dual vector field). This
can be found from the total derivative terms in δL by substituting δ_ξ g_µν = ∇_µξ_ν + ∇_νξ_µ
and δ_ξ C^(4)_νσωλ = 4 ∂_[ν|( ξ^θ C^(4)_θ|σωλ] ) + ξ^θ F^(5)_θνσωλ . This gives us the current
J α = −2
−g gασ [Rσλ −
λνθωλ
F (5) νθωλσ ]ξ
+∂µ[−
−g gµνgασ(∇νξσ −∇σξν) +
2 · 3!
−g ξθC(4)
αµσωλ
] , (12)
where the first term vanishes by the equations of motion and the second term gives us the charge
density
16πG(10)
∇αξµ −∇µξα +
αµσωλ
. (13)
Noting that the self-duality constraint
−g Fµ0···µ4
ǫµ0···µ9F
µ5···µ9 implies
αµσωλ
= ξνC
3! 5!
ǫαµσωλµ5···µ9F
µ5···µ9 , (14)
the Noether-Wald charge density (13) can be equivalently written as the 8-form
= − 1
16πG10
⋆ dξ̂ − 1
(4) ∧ F (5)
where ξ̂ is the dual 1-form of the vector field ξµ. This can be integrated over a compact 8d
submanifold to get the corresponding conserved charge. A quick calculation verifies that the
current for this charge vanishes identically as expected because of the vanishing Lagrangian.
Hence, all charges that are computed from it are conserved as discussed in section 2.1. If we
further assume that LξC(4) = 0, we have iξF (5) = −d(iξC(4)). This can be used to rewrite (15)
16πG(10)
⋆ dξ̂ +
C(4) ∧ iξF (5)
up to an additional term proportional to d(C(4) ∧ iξC(4)). This extra term does not contribute
when integrated over a compact 8-manifold provided that C(4) ∧ iξC(4) is a globally well defined
7-form as we discussed in section 2.1. In such cases (16) can be used instead of (15).
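The identity invoked above is just Cartan's formula:
L_ξ C^(4) = i_ξ dC^(4) + d(i_ξ C^(4)) = i_ξ F^(5) + d(i_ξ C^(4)) ,
so the assumption L_ξ C^(4) = 0 is equivalent to i_ξ F^(5) = −d(i_ξ C^(4)), which is what allows (15) to be traded for (16).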
In section 4, we will demonstrate that this formula reproduces conserved charges [12] of
Gutowski-Reall black holes of type IIB in 10 dimensions successfully. We hope this expression
may be useful in obtaining the charges of the yet to be discovered black holes from their near
horizon geometries alone.
2.3 The Noether-Wald charges for 5d Einstein-Maxwell-CS
The action for 5d Einstein-Maxwell-Chern-Simons gravity is
16πG5
−g (R− FµνFµν)−
ǫmnpqrAmFnpFqr
which is the same as the action for the 5d minimal gauged supergravity up to the cosmological
constant, which turns out not to contribute to the Noether charge. After a straightforward but
slightly lengthy calculation it is easy to show that the Noether current for this action is
16πG5
(Rαλ − 1
gαλR)− 2 (F λµFαµ − 14g
λαF 2)
+ 4 (ξ · A)
−gFαµ) + 2√
ǫανσωλFνσFωλ
−ggµνgαλ (∇νξλ −∇λξν)− 4
−g(ξ · A)Fαµ − 8
(ξ · A) ǫαµσωλAσFωλ
.(18)
The first two lines are simply proportional to the equations of motion and vanish on-shell and
hence the Noether-Wald charges for this theory are
16πG5
−g (∇αξµ −∇µξα) + 4(ξ ·A)(
−g Fαµ +
ǫαµσωλAσFωλ)
. (19)
These expressions have also appeared recently in [8]. An alternative derivation of (19) in terms
of KK charges will be presented in section 3.3. The charge density (19) can equivalently be
written as the 3-form
16πG5
⋆dξ̂ + 4 (iξA)
⋆ F − 4
A ∧ F
. (20)
As before the charges can be obtained by integrating Qξ over a 3d compact sub-manifold. Note
that if we set the gauge fields to zero we recover the standard Komar integral for the angular
momentum.
2.4 Reduction from 10 dimensions
Now, we will find the dimensional reduction of the 10d formula of conserved charges to the 5d
formula to show that they are indeed identical, so let us first review the reduction formulae to
obtain the equations of motion of 5d minimal gauged supergravity from 10d type IIB supergravity
with only the metric and the self-dual 5-form F (5) turned on [13, 14].
As usual, we express the metric in terms of the frame fields e0, . . . , e9 and do the dimensional
reduction along the compact 5-manifold Σ_c that is spanned by the 5-form e^5 ∧ e^6 ∧ e^7 ∧ e^8 ∧ e^9 =: e^56789.
Then, the lift formula is [22] (see also [23])
ds210 = ds
5 + l
(dµi)
2 + µ2i
dξi +
F (5) = (1 + ∗(10))
vol(5) +
d(µ2i ) ∧ dξi ∧ ∗(5)F
, (21)
where µ1 = sinα, µ2 = cosα sin β, µ3 = cosα cosβ with 0 ≤ α ≤ π/2, 0 ≤ β ≤ π/2, 0 ≤ ξi ≤ 2π
and together they parametrise S5. Note that we define the Hodge star of a p-form ω in n-
dimensions as (∗_(n)ω)_{i_1···i_{n−p}} = (1/p!) ε_{i_1···i_{n−p}}^{j_1···j_p} ω_{j_1···j_p}, with ε_0123456789 = 1 and ε_01234 = 1 in an
orthonormal frame. The 10d geometry is specified by {e0, · · · e4}, an orthonormal frame for the
5d metric ds25, together with
e5 = l dα,  e6 = l cosα dβ,  e7 = l sinα cosα [dξ1 − sin²β dξ2 − cos²β dξ3],    (22)
e8 = l cosα sinβ cosβ [dξ2 − dξ3],  e9 = −2√3 A − l sin²α dξ1 − l cos²α (sin²β dξ2 + cos²β dξ3).
and the five form [22, 23, 13]
F (5)=
e0···4 + e5···9
(e57 + e68) ∧ (∗(5)F − e9 ∧ F ) (23)
One can write the 5-form RR field strength as F (5) = dC(4) where
C(4) = Ω4 + cotα e
678 ∧ (e9 + 2√
A ∧ (e57 + e68) ∧ (e9 + 2√
(e9 + 2√
A) ∧ (⋆F + 2√
A ∧ F )
. (24)
where Ω4 is a 4-form such that e
01234 = dΩ4. Now we are ready to do the reduction of the 10d
charge
Qχ := −
16π G10
⋆ dχ̂− 1
(4) ∧ F (5)
where Σ8 is a compact 8d submanifold that is composed of a spacelike 3-surface Σ in 5d and
Σc. Hence, only e
5...9 will contribute to the integral. Let us consider χ to be a Killing vector of
the 10d geometry which also reduces to a Killing vector of the 5d geometry and χ̂ be its dual
1-form. Then we find from the expression for the frame fields (21, 22):
χ̂ = χ̂5 + (iχe
9) e9 = χ̂5 − 2√3(iχA) e
9 , so
⋆dχ̂ = ⋆dχ̂5 − 2√
(iχA) ⋆ d e
9 + . . . = ⋆dχ̂5 +
(iχA) ⋆ F + . . . (26)
where “. . .” denotes terms that do not contribute to Qξ. Next, let us find the relevant terms in
C(4) and F (5) (23,24). Noting that iχ
e9 + 2√
= 0, they are:
(4) = iχΩ4 − 2√
(iχA)
e57 + e68
e9 + 2√
e9 + 2√
iχ ⋆ F +
iχ(A ∧ F )
+ . . . (27)
F (5) = −4
e56789 + 2√
⋆ F − F ∧ e9
e57 + e68
+ . . . (28)
(4) ∧ F (5) = −2
iχΩ4 +
(iχA) +A ∧ iχ
⋆ F + 2√
A ∧ F
e56789 + . . . . (29)
After some algebra, the charge reads
Qχ = −
16π G5
⋆dχ̂5 + 4 (iχA) ⋆ F +
(iχA)A ∧ F +
iχΩ4 −
iχ(A ∧ ⋆F )
. (30)
We see immediately that for vectors in the directions of Σ it just reproduces the 5d Noether
charge (19). For vectors orthogonal to Σ, it differs, which is not unexpected, since typically in
dimensional reduction the actions agree only up to boundary terms.
3 Charges from dimensional reduction
In this section we will rederive the Noether-Wald charges for 5d supergravity of section (2.3)
using further dimensional reduction. In particular, we will demonstrate that the 5d Noether-
Wald charges can alternatively be obtained from Kaluza-Klein U(1) charges. For this, we will
first dimensionally reduce the 5d theory along the relevant Killing vectors and then find the
Noether charges of the resulting gauge theory.1 Then we will lift the results back to 5d and
show that they agree with the corresponding 5d Noether-Wald charges. Finally, we will discuss
in which cases the charges obtained by our methods in the interior of the solution agree with
the asymptotic ones.
3.1 Dimensional reduction
In 5 dimensions one can have two independent angular momenta, so we consider dimensional
reduction along both compact Killing vector directions, namely the translations with which the two
independent angular momenta are associated. We will again assume that all fields obey the isometries
and hence only need to consider zero-modes in the compact directions.
We take lower case greek letters α, β, . . . ∈ {t, r, θ, φ, ψ} to be the 5d indices, upper case latin
A,B, . . . ∈ {t, r, θ} to be the 3d indices and lower case latin a, b, . . . , i, j, l,m, . . . ∈ {φ, ψ} to be
the indices for the compactified directions in 5d or for the corresponding scalar fields in 3d. The appropriate reduction
ansatz is:
G_µν = ( g_MN + h_ij B^i_M B^j_N    h_in B^i_M
         h_mj B^j_N                 h_mn ) ,    A_m =: A_m   and   A_M =: A^3d_M + A_a B^a_M ,    (31)
such that we get
F_µν = ( F_MN + (dA_a ∧ B^a)_MN    A_n,M
         −A_m,N                     0 ) ,    (32)
in terms of the 3d gauge fields H^a = dB^a and F^3d = dA^3d, and we defined for simplicity F =
F^3d + A_a H^a. The definition of A^3d in (31) is needed to have the appropriate transformations
of the KK and Maxwell U(1) symmetries and arises naturally from the reduction using frame
fields (see, for instance, [9] for details). Now, we find
µν = FMNF
MN − 2habA,MA,M and
ǫαµνρσAαFµνFρσ = 4ǫ
LMNǫab
Aa,LFMNAb − A3dLAa,MAb,N
, (33)
such that the 5d Lagrangian (17) can be rewritten as :
G5 × L3d =
R3d −
HaMNH
b MN − FMNFMN + 2habAa,MA ,Mb
ε^LMN ε^ab ( A_a,L F_MN A_b − A^3d_L A_a,M A_b,N )
, (34)
1 This dimensional reduction has been used recently in [10, 11] for defining the entropy functions for such theories.
where VT 2 is the “volume” of the compact coordinates. One can now construct conserved currents
using the Noether procedure for the gauge symmetries of the two U(1) gauge fields B^a_µ and A^3d_µ.
We find the corresponding Noether charges for B^a_µ to be
Ja = −
16πG5
a rt + 4AaF
ǫLrtǫmnAm,LAn
. (35)
which we identify as the two independent angular momenta. The Noether charge for A3dµ works
out to be
Q = − VT 2
hF rt +
ǫLrtǫmnAm,LAn
which we identify with the 5d electric charge. Alternatively, these charges can be read off by
writing the left hand side of the equations of motion for the Lagrangian (34)
a MN + 4AFMN
+ 16Aa
ǫLMNǫmnAm,LAn
= 0 (37)
−g hFMN + 4
ǫLMNǫabAa,LAb
ǫLMNǫmnAm,LAn,M , (38)
as a total derivative and interpreting the resulting total conserved quantities as the charges.
For geometries with just one independent angular momentum, one can apply the above
formulae in a straightforward way, or do a reduction only down to 4d, as in such cases only one
U(1) isometry is expected in the geometry. The computations for the latter are identical to the
ones here, so we just state the expressions for the angular momentum along ∂ξ and the charge:
J = − VT1
16πG5
e2σHrt + 4A F rt
ǫrtAB
A FAB − 2A,AA4dB
, (39)
Q = − VT1
−geσF rt + 1
ǫrtAB
3A FAB + A F
AB − 4A,AA4dB
, (40)
where e2σ = gψψ , VT 1 is the periodicity of ψ, and the conservation follows by the equations of
motion
e2σHMN + 4A FMN
ǫABMN
A FAB − 2A,AA4dB
= 0, (41)
−geσFMN + 2
ǫABMN
A FAB − 2A,AA4dB
ǫABMNFABA,M . (42)
3.2 Oxidation of the angular momentum
Now we would like to demonstrate that the lower dimensional Noether charges above, when
lifted back to 5d, give the Noether-Wald charges for the compactified Killing vectors. For
simplicity, we look at the expression with only one independent angular momentum and only
one dimension (along ψ) reduced. Our results will hold in general though, as the gauge theory
corresponding to the angular momentum is abelian, so we can examine different Killing vectors
independently. First, we note that the dimensional reduction ansatz can be obtained with the
following triangular form of the frame fields [9]:
V Iµ =
viM e
and the inverse V MI =
0 e−σ
, (43)
with (bold latin) tangent space indices A,B, . . . ∈ {0, . . . , 4} and a,b, . . . ∈ {0, . . . , 3} such that
we can write the 4d fields in terms of the 5d fields (but still in 4d coordinates):
BM = e
−σV 4M , HMN = e
−σ(dV 4
− 2e−σ
deσ) ∧B
σ = ξµV Iµ and A = ξ
µAµ . (44)
Now the conservation equation (41) for the angular momentum Jψ reads in flat indices
ηacηbd
ξµV Kµ ηKLdV
L− 2eσ(deσ) ∧B
+ 4ξµAµ(F− 2(dA ) ∧B)cd
8ξµAµ
ǫcdij
F− 2(dA ) ∧B
− 2(dA )cAd + (dA 2)cBd
= 0 . (45)
Extending the summations to A,B, .. and using the form of the frame fields and the indepen-
dence from ψ yields:
ηACηBD
d(ξµV IµηIJV
+ 4ξµAµFCD
8ξνAν
ǫCDABEA
(dξ̂)µN + 4ξ ·AFµN
8ξ · A
ǫµNαρσAαFρσ
= 0 . (46)
The conserved charge extracted from this equation exactly reproduces the charge in (19).
3.3 Generalization and Limitations
3.3.1 Relation to the Asymptotes
Let us now discuss in which situations the charges computed in the spacetime interior give
the charges as defined on the asymptotic boundary. We see most easily from (20) that when
evaluated on a hypersurface on which iξA = 0, such as a suitable asymptotic boundary, our
formulae match with the appropriate Komar integral.
We can compute a (possibly zero) KK or Noether-Wald charge, that corresponds in a specific
geometry to the angular momentum, for every U(1) isometry. However, the asymptotic hyper-
surface on which the angular momentum of a black hole is defined is an Sd−2. When in such
a geometry angular momenta are turned on, its SO(d − 1) isometry breaks (generically) down
to its U(1) subgroups whose charges give the angular momenta, so only the local U(1) factors
that correspond to the asymptotic U(1) subgroups will be related to the angular momentum.
Furthermore, the normalization of the period generated by the Killing vector also has to be
taken into account.
We saw in sections 2.1 and 3.1 how the charges of compact Killing vectors are conserved
whenever the source-free equations of motion hold. That is, they are independent of the position
of the surface on which they are computed, Q_{Σ_{r2}} − Q_{Σ_{r1}} = ∫_M dM_M ∂_N Q^MN = 0, where Σ_{r1} and
Σ_{r2} are the boundaries of the volume M, provided that the U(1) theory is defined throughout
the bulk volume and we can consistently compactify the manifold (at least outside the horizon).
Hence, the black hole charge and angular momentum as defined on a spacelike d-2 hyper-
surface Σ∞ at the asymptotes are given by the corresponding KK or Noether-Wald charge,
computed over any spacelike d-2 hypersurface Σr0 in the spacetime for any (not necessarily ex-
tremal) black hole (or in general any spacetime with a suitable asymptotic boundary). That
is, provided there exists a spacelike d-1 hypersurface M with ∂M = {Σr,Σ∞} on which the
following sufficient conditions are satisfied:
1. The relevant compact Killing vector is a restriction to Σr of a Killing vector field that is
globally defined on M and generates a constant periodicity.
2. There are no sources, i.e. the vacuum equations of motion for the gauge fields are satisfied.
3. There exists a smooth fibration of surfaces Σ_r, π : M → [r_0, ∞[, such that π^{−1}(r_0) = Σ_{r_0} and
lim_{r→∞} π^{−1}(r) = Σ_∞.
An example where these conditions are satisfied is the region outside the (outer) horizon
of a stationary black hole solution with an Sd−2 horizon topology, embedded in a geodesically
complete spacetime with an asymptotic Sd−2 boundary. One example where these conditions
are violated is that of black rings [18] which will be considered separately in an appendix.
3.3.2 Gauge Issues
The contributions of the CS term in the conserved quantities in (3.1) depend explicitly on the
gauge potentials. This does not however make them gauge dependent. To see this in 5d, let
us consider the electric charge computed by the Noether procedure, which is given in [4] as
∫_Σ ( ⋆F + (2/√3) A ∧ F ). We notice that the charges get contributions of the form ∫_Σ A ∧ F,
that change under a transformation δA = dΛ as ∫_Σ dΛ ∧ F = ∫_Σ d(ΛF) = 0 because Σ is compact.
From the 3d point of view the KK scalars A may depend on a 5d gauge transformation. However
Λ must be periodic in the angular coordinates so that the contributions from dΛ vanish after
integration. This is also the reason why the term containing ξ ·A in eq. (19) is gauge independent
for compact Killing vectors. On the other hand, the Noether charge for a non-compact Killing
vector is gauge-dependent and hence is only physically relevant when measured with respect to
some boundary condition or as a difference of charges.
4 Examples
So far we have derived Noether charges for various supergravity theories that may be used to
calculate the electric charges and angular momenta of the solutions. In particular, they can be
used on the near horizon geometries to calculate the conserved charges of the corresponding black
holes. In this section we will demonstrate with several examples how our charges successfully
reproduce the known black hole charges in different dimensions, for equal or unequal angular
momenta and independent of the asymptotic geometries. We will start with a 10d example and
then cover 5d examples, first with one angular momentum in AdS and flat asymptotics, and
then with unequal angular momenta in asymptotic AdS.
4.1 The 10d Gutowski-Reall black hole
In [12], Gutowski and Reall found the first example of a supersymmetric black hole which
asymptotes to AdS5 as a solution to minimal gauged supergravity in 5d (see also [34, 17, 35, 36]).
Their solution was lifted to a solution to 10d type IIB supergravity in [13] and shown to admit
two supersymmetries. In [14] (see also [15]), the near horizon geometry of this 10d black hole was
studied. Here we use the formulae found in section 2.2 to calculate the Noether-Wald charges in
the near horizon geometry and show that they agree with the charges of the black hole measured
from the asymptotes. The 10d metric of this near horizon geometry is ds210 = ηabe
aeb with the
orthonormal frame
σL3 , e
, e2 =
σL1 , e
σL2 , e
λ σL3 , (47)
and the five-form is
F (5) =
(e0···4+e5···9)−
(e57+e68)∧ [−3e023+e014−
e234+e9∧ (3e14−e23−
e01)] (48)
where e5 . . . e9 are given in (22) and
dt+ ω
σL3 ) =
(e0 + 2ω
e4), λ =
l2 + 3ω2 and
σL1 = sinφdθ − sin θ cosφdψ, σL2 = cosφdθ + sin θ sinφdψ, σL3 = dφ+ cos θ dψ. (49)
The potential C(4) for the above field strength was given in section 2.4 with Ω4 =
e0234 [14].
Here we concentrate on the compact Killing vectors ∂φ and ∂ξ1 + ∂ξ2 + ∂ξ3 of this geometry and
calculate the corresponding conserved charges. For χ = ∂φ which has a period 4π, we have
χ̂ = 3ω
e0 + ωλ
e4 − ω2
e9 and
dχ̂ = −2ωλ
e01 + 3ω
e14 − (1 + ω2
)e23 + ω
(e57 + e68) (50)
and hence the relevant terms in ⋆dχ̂ are ω
(4l2 + 3ω2) e2···9. Similarly, we find
C(4) ∧ iχF (5) = ω
(2 l2 + ω2) 1
σL1 ∧ σL2 ∧ σL3 ∧ e56789 . (51)
After noting that the integral over 1
σ123 ∧ e56789 gives a factor of 2π5l5, we find
Q∂φ = −
16π4 l5G5
S3∧S5
[⋆dχ̂+
C(4) ∧ iχF (5)] = −
8 l G5
) , (52)
which agrees with the angular momentum, up to a minus sign, that comes from the definition
of the angular momentum as minus the Noether charge [12]. For χ = ∂ξ1 + ∂ξ2 + ∂ξ3 , we have
9 = −l. One can calculate the 10d current and find that
⋆ dχ̂+
C(4) ∧ iχF (5) =
(⋆5F +
A ∧ F ) ∧ e5678 ∧ (e9 +
A) + · · · . (53)
Therefore the corresponding charge is
Q∂ξ1+∂ξ2+∂ξ3
π l ω2
) . (54)
This differs from the answer Q(GR) =
3π ω2
(1 + ω
) [12] by a factor of −l/
12. The minus
sign is because of a difference in our conventions from those of [12] and the factor of l is there
to make the charge Q(GR) dimensionless. The Killing vector ∂ξ1 + ∂ξ2 + ∂ξ3 has a period of 6π
and to normalise it to have a period of 2π we have to multiply it by a factor of 3. If we take this
into account the extra factor reduces to
3/2. This is precisely the factor required to define the
5d gauge field in the conventions of dimensional reduction from 10d to 5d [22]. Thus we find
complete agreement between our 10d computation of charges from the NHG and the asymptotic
black hole charges of [12].
4.2 5d Black Holes
Now we turn to black hole solutions in 5d Einstein-Maxwell-CS and minimal gauged supergravity.
4.2.1 Equal Angular Momenta: BMPV and GR
Let us consider two examples that are similar in the near-horizon geometry, with a squashed
S3 horizon, but differ by their asymptotic behaviour; the BMPV black hole [4, 16] with asymp-
totically flat geometry and the Gutowski-Reall (GR) black hole [12] with asymptotically AdS5
geometry.
Their near-horizon solutions can be put into the form
ds2 = v1
− r2dt2 + dr
σ21 + σ
2 + η(σ3 − αr dt)2
, A = −e r dt+ p(σ3 − αr dt) (55)
which, when dimensionally reduced along the ψ-direction, gives ds24 = v1
− r2dt2 + dr2
dθ2 + sin2θ dφ2
. This has AdS2 × S2 symmetry as expected. The fields take the form
B = −rαdt+ cos θ dφ, e2σ = v2η, A = p and A4d = −e r dt. For the BMPV case, we find:
v1 = v2 =
, η = 1−
, α =
µ3 − j2
, e = −
µ3 − j2
and p =
. (56)
Evaluating the 4d quantities and noting that ε_trφθ = 1 and V_T1 = 4π, (39, 40) give us J = πj/(4G_5),
which is equal in magnitude to the angular momentum in [4] up to a factor of 2, which arises
from the canonical normalization of the Killing vector ξ = 2∂ψ , and Q =
For the GR case, we have:
, v2 =
, η = 1 + 3
, α = − 3ωl
4l2 + 3ω2
, e =
α, p =
. (57)
Note that we have defined A with an overall factor of −1 compared to [14] to account for a
different convention for the CS term. This gives the results J = −3πω2
(1 + 2ω
) and Q =
(1 + ω
) as expected. Note that [12] do not use the canonical normalization for ∂ψ of [4].
4.2.2 Non-equal Angular Momenta: Supersymmetric Black Holes
Here, we present, as the simplest example, the N=2 supersymmetric black holes with non-
equal angular momenta of [17], which are asymptotically AdS5, just as the GR case. We start
off with the metric in the form [17]
gtt =
(ρ2ΞaΞb)
ρ2ΞaΞb(1 + r
2) − ∆t(2mρ2 − q2 + 2abrρ2)
, grr =
, gθθ =
gtφ =
−∆t sin2θ
ρ4Ξ2aΞb
a(2mρ2 − q2) + bqρ2(1 + a2)
, gtψ = gtφ(a↔ b, sin θ ↔ cos θ)
gφφ =
sin2θ
ρ2Ξ2a
(r2 + a2)ρ4Ξa + a sin
a(2mρ2 − q2) + 2bqρ2
gψψ = gφφ(a↔ b, sin θ ↔ cos θ), gφψ = sin
2θ cos2θ
ρ4ΞaΞb
ab(2mρ2 − q2) + (a2 + b2)qρ2
with the gauge field
∆tΞaΞbdt −
a sin2θ
dφ − b cos
where
ρ2 = r2 + a2 cos2θ + b2 sin2θ, ∆t = 1− a2 cos2θb2 sin2θ,
(r2+a2)(r2+b2)(1+r2)+q2+2abq
r2−2m , Ξa = 1− a
2 and Ξb = 1− b2 . (60)
We consider the case with saturated BPS-limit and no CTC’s, which requires:
1 + a+ b
, m = (a+ b)(1 + a)(1 + b)(1 + a+ b). (61)
Now we can find the near horizon geometry with explicit AdS2 symmetry as in [18], by re-defining
t̃ = ǫt, r̃ =
4(1+3a+a2+3b+b2+3ab)
(1+a)(1+b)(a+b)
a+b+ab
, d̃φ = dt+ dφ, d̃ψ = dt+ dψ, (62)
then taking the limit of ǫ→ 0 and applying a gauge transformation to get rid of a constant term
in At. We can read off the 3d scalar fields hmn and A and find
BmN = h
maGaN , gMN = GMN − BaMhabBbN and A3dM = AM − AaBaM . (63)
Noting that VT 2 = 4π2, eqns. (35) give us the angular momenta Jφ̃ = π
a2+2b2+3ab+a2b+ab2
4G5(1−a)(1−b)2 and
= π b
2+2a2+3ab+a2b+ab2
4G5(1−b)(1−a)2 . These agree precisely with the corresponding asymptotic angular
momenta of [18].
5 Charges from the entropy function
The original incarnation of the entropy function formalism [3, 1] was not only a useful tool for
finding near-horizon solutions, but also for extracting the conserved charges from a given solution.
However, in the presence of Chern-Simons terms, the entropy function formalism captures only
part of the conserved charges. We demonstrate here two equivalent ways to cure this problem.
Let us first recall the entropy function formalism [3, 1]:
One considers a general theory of gravity described by the Lagrangian density L with abelian
gauge fields F i(x) and scalar fields Φj(x). Then one writes down the most general ansatz for the
near horizon geometry assuming the isometries of AdS2 × S1 (for simplicity, we consider here
d=4 as in [3, 1]):
ds2 = v1(θ)
− r2dt2 +
dθ2 + v2(θ)
dφ2 − α r dt
F i =
ei − αbi(θ)
dr ∧ dt + ∂θbi(θ)dθ ∧ (dφ − α r dt) and Φj = uj(θ) , (64)
in terms of the parameters {α, ei, β} and θ-dependent scalars {vi(θ), bi(θ), ui(θ)}. Then, one de-
fines the “reduced action” f(α, ~e, β, ~v(θ), ~b(θ), ~u(θ)) = ∫ dθ dφ L − a functional that generates
the equations of motion δf/δb_i(θ) = δf/δv_i(θ) = δf/δu_i(θ) = 0, where the functional derivatives
can be understood in terms of the Fourier coefficients in the expansion along θ, and
∂f/∂e_i = q_i ,    ∂f/∂α = j ,    (65)
where qi and j are supposed to give the charges of the black hole. Then the entropy function is
defined to be the Legendre-transform of the reduced action
E(j, qi, β, ~v(θ), ~b(θ), ~u(θ)) = 2π(eiqi + αj − f) . (66)
Finally, the entropy of the black hole is S = E , evaluated on the solution.
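As a minimal illustration of the formalism (the standard static Einstein-Maxwell attractor, i.e. α = 0, b_i = 0 and constant v_1, v_2, rather than one of the rotating Chern-Simons examples treated in this paper; G denotes the 4d Newton constant), take L = (1/16πG) √−g (R − F_µν F^µν) with F = e dr ∧ dt on AdS_2 × S^2. Then
f = (1/2G) ( v_1 − v_2 + e² v_2/v_1 ) ,    q = ∂f/∂e = v_2 e/(G v_1) ,
and extremizing E = 2π(qe − f) over v_1 and v_2 gives v_1 = v_2 = e² and e = Gq, so that E = πGq², which indeed equals the Bekenstein-Hawking entropy π v_2/G of the extremal near-horizon solution.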
5.1 Completing the equations of motion
In section 3.1, we learned how to find the conserved charges in the presence of Chern-Simons
by writing the KK gauge field equations of motion in a conserved form. Since we now know the
right reduction ansatz, we just need to find a mechanism to parametrize both the variation with
respect to At and Bt and the integration of the right hand side of the equations of motion to
obtain the closed form. One such mechanism is a modification of the ansatz with the pure gauge
terms {ǫ_i, ℵ_a} to do the variations δL/δǫ_i and δL/δℵ_a; and with a dummy function c(r), that introduces
an artificial and unphysical r-dependence into fields that are constant by the symmetries. c(r)
then allows to keep track of their, otherwise vanishing, derivatives and to do their integration
on the right hand side of the equations of motion. Hence, we write
Ai = −(ǫi + ei r)dt + c(r) pia(θ)
dφa − (ℵa + αar) dt
, (67)
ds2 = v(θ)
− r2dt2 + dr
dθ2 + ηab(θ)(dφ
a − (ℵa + αar)dt)(dφb − (ℵb + αbr)dt)
and we also wrap all scalar fields that appear in the Chern-Simons terms with a factor of c(r),
ui(θ, r) = c(r)Φi(θ). The solution corresponds to setting c(r) = 1 and c′(r) = 0, which we can
either implement by furnishing c(r) with a control parameter, or by choosing c(r), s.t. c(r0) = 1
and c′(r0) = 0 for some r0, but c′(r) ≠ 0 for r ≠ r0. The equations of motion for the gauge
fields are then ∂r
and ∂r
∂ℵa and give rise to the conserved charges
and Ja =
, (68)
evaluated on the solution. A simple variation of this is c(r) = 1 + 1
r, n being the number of
3d scalar fields in the CS term, which automatically takes care of the integration of the second
term and ensures that all remnant dummy terms will disappear in the first term at r = 0.
The other computations follow just as in the original form of the Entropy function, using
c = 1, c′ = 0 throughout. Note that the entropy function is still computed as originally defined,
E = 2π ( (∂f/∂α^a) α^a + (∂f/∂e_i) e_i − f ) , i.e. not using the conserved charges.
One can easily see that this gives the equations of motion, and it also gives the correct value
for the entropy as the original derivation [3, 1] is independent of what the conserved charges are.
This can also be seen by repeating the derivation in section 6.4 with the original action (34).
As a simple example we have already written the 4d ansatz (55) in section 4.2.1 in a suggestive
form, such that the coefficients can be read off from (56) and (57) with β2 = v2. We note that
the ℵa parameters do not appear here in the action. A simple computation reveals that this
gives indeed the results in section 4.2.1.
5.2 Gauge invariance from boundary terms
In section 3.3, we found that the charges are gauge invariant. However, it would be desirable if
we could impose gauge invariance at the level of the Lagrangian of the 3d action (34). The result
can, in principle, be oxidized back to 5d, but we will stick for simplicity to 3d. The only term of
concern is the A3d ∧ dA[a ∧ dAb] in the CS term in (34), which varies under A3d → A3d + dΛ as
dΛ ∧ dA[a ∧ dAb]. This variation is a total derivative d(ΛdA[a ∧ dAb]) which, after integration,
gives a boundary term ΛdA[a ∧ dAb]. This can be re-expressed as d(ΛA[adAb])− A[adΛ ∧ dAb],
where the first term vanishes if we consider a stationary boundary. The second term is suitably
cancelled by adding a boundary term A^bdy._[a A^3d ∧ dA^bdy._b], which is identical to a bulk term
d( A_[a A^3d ∧ dA_b] ). Expressed in index notation, and furnished with appropriate factors, the
boundary term that we need to add corresponds to the bulk term
δL 3d = −
16πG5
ǫLMNǫab
Aa,LF
MNAb + 2 A
LAa,MAb,N
, (69)
which brings the Lagrangian to
G5 × L3d =
R3d −
Ha MNH
b MN − FMNFMN + 2habAa,MA ,Mb
ǫLMNǫab
2Aa,LFMNAb + Aa,LF
, (70)
eliminating the gauge dependent term. A quick calculation shows that this does not affect the
value of the charges (35, 36). Effectively, what we have done is to differentiate the components
of the 5d gauge field in the CS term whose gauge transformations do not vanish automatically
by periodicity constraints, and remove the derivative from other components by an integration
by parts. Hence, the right hand side of each of the 3d gauge field equations of motion does
vanish, and the charges are just the conjugate momenta of the gauge fields B and A3d:
Q = − ∮ (δL_3d/δF^3d_µν) ε_µνρ dx^ρ   and   J_a = − ∮ (δL_3d/δH^a_µν) ε_µνρ dx^ρ ,    (71)
as in the absence of CS terms. It is easy to verify that the value of the charges remains unchanged.
This means that, if we compute the reduced action from the gauge independent action, the
original formalism will give us the right charges. The entropy function, now computed with the
full charges, does not depend on the extra boundary term and hence also gives us the correct
value of the entropy, as we shall derive directly from the Poincaré time Noether charge in section 6.4.
6 Thermodynamic Charges
Having computed the charges of the Sd−2 isometries, we now turn to the charges of the AdS2
isometries. In particular, we will concentrate on the charge of ∂t, as this will be related to the
thermodynamic quantities entropy S and mass M. First we will compute the Poincaré time
Noether charge from the Hamiltonian in the NHG and propose a new definition of the black
hole entropy for extremal black holes in the NHG in terms of this charge - similar to Wald’s
definition for non-extremal black holes. Then we (i) justify this definition by showing that it
gives the right extremal limit of the first law, (ii) derive from the Noether charge a statistical
version of the first law suitable for extremal black holes and (iii) re-derive the entropy function
directly from the definition of the entropy. Finally, we discuss the notion of mass as seen from
the NHG by deriving a Smarr-like formula.
6.1 Poincaré Time Hamiltonian
For the Poincaré time Killing vector ∂t, one expects the Noether charge to be related to the
Hamiltonian, which we will explore now.
Since the theory is generally diffeomorphism invariant, we expect the bulk contribution to
vanish. So we concentrate on boundary terms S_bdy. = ∫_B L_bdy., that are necessary to cancel total
derivatives dΘ in the variation of the bulk action δS = ∫ ( E_i δφ^i + dΘ(δφ) ). In our example,
we have to consider both the variations of the metric and of the 3d gauge fields. For the gauge
fields, the term that we ignored in the derivation of the equations of motion was
µ = ∂µ
δ Aν,µ
δAν +
δ Baν,µ
. (72)
For a complete spacetime, the textbook answer is to place the usual restriction δA|bdy. =
δB|bdy. = 0. Then, the only boundary term that one needs to add in order to make the vari-
ational principle consistent is a Gibbons-Hawking-like term, that compensates for a variation
proportional to the normal derivative of δg at the boundary. For the Einstein-Hilbert action,
that is the usual Gibbons-Hawking term
SGH =
LGH =
hK = − VT 2
16πG5
hγMNn
M ;N , (73)
where γ is the boundary metric and K is the trace of the extrinsic curvature of the boundary B, which, in our
geometry, is just an S1 fibred over time. Note that we took n = −∂r to be inward-pointing
in order to define the bi-normal N_MN := (∂_t)_[M n_N] / ( |∂_t| |n| ) of Σ_bdy. with a positive signature. Now,
we can read off the Hamiltonian of the NHG if it were an isolated solution. By definition,
L_ξ g_µν = 0, such that the canonical Hamiltonian is just H_I = −∫_{Σ_bdy.} i_{∂_t} L_GH with the time
slice of B being Σ_bdy. = S¹. Since ∂_t is a Killing vector, a quick calculation shows |∂_t| √−γ K =
√−g N_MN (d ∂̂_t)^MN , and hence the Hamiltonian is just
HI = −
Σbdy.
i∂tLGH =
16πG5
hNMN (d ∂̂t)
MN . (74)
Now, if we consider the near-horizon geometry being embedded in the full black hole solution,
we cannot put δA|bdy. = δB|bdy. = 0, but we need to satisfy the variational principle by adding
a Hawking-Ross-like boundary term as in [28]:
LHR = nM
δ AN,M
=: −nN
Q̃MNAN + J
and impose the condition to keep the charges fixed under variations of the boundary fields. Now,
the boundary action varies as:
δSHR = −
d2σ nM
δQ̃MN
δJMNa
d2σ nM
Q̃MNδAN + J
where the second term cancels the total derivative in the variation of the bulk action (note the
inward-pointing n), and the first term vanishes as the charges are fixed. A little caveat occurs if
we use the gauge-dependent form of the action (34), when Q̃ ≠ Q; however the missing bit does
not depend on the 3d gauge fields, but only on the scalar fields, and hence it is invariant under
variations of the gauge fields. If we consider the gauge-independent form of the action (70), then
Q̃ = Q. Again, by definition we have LξBi = 0, and we will choose a gauge such that LξA=0,
and the canonical Hamiltonian is just
H = −∫_{Σ_bdy.} i_{∂_t} ( L_HR + L_GH ) .    (77)
Because of the AdS2 symmetries, we have ∫_{Σ_bdy.} i_{∂_t}(Q ∧ A) = ∫_{Σ_bdy.} Q (i_{∂_t}A) and similarly for J_i ∧ B^i.
This puts the Hawking-Ross contribution to the boundary Hamiltonian at
−∫_{Σ_bdy.} dθ N_MN [ Q̃^MN (i_{∂_t}A) + J^MN_a (i_{∂_t}B^a) ] . This gives for the action (34)
H = − VT 2
16πG5
dθNMN
(d ∂̂t)
MN + HaMNhab(i∂tB
b)+4FMN i∂t
ǫPMNǫabAa,PAb i∂t
We now compare (78) with the Noether charge obtained by dimensional reduction of the 5d
expression (20). For this, we work out how the individual terms look in 3d with the notation
of section 3.1. We consider only the components QMN
in the non-compact directions, and only
zero modes of the fields in the compact directions. Hence we get from the reduction formulae
(31 - 34):
(dξ̂)MN =
dξ̂3d
ξ3d ·Bjhji + χihij
H iMN , FMN = FMN ,
ǫMNαβγ = 2ǫMNLǫijAi,LAj and ξ ·A = ξ3d ·A3d + ξ3d ·BiAi + χiAi . (79)
Now, we can write down the charges of ξ3d, the non-compact components of ξ, and χ, its compact
components, separately:
QMNξ3d = −
16πG5
dξ̂3d
+ ξ3d ·Bj
j MN + 4AiF
+ 4ξ3d ·A3dFMN
ξ3d · A3d + ξ3d ·BiAi
ǫMNLǫijAi,LAj
QMNχ = −
16πG5
iMN + 4AiF
ǫMNLǫkjAk,LAj
, (81)
where we have implicitly done an integration over the compact coordinates. Thus we see that
(78) is just the Noether charge Q∂t in 3d (80) as expected, and we have yet another confirmation
of the KK charge (35), as it matches with (80).
6.2 Entropy
The entropy S of non-extremal black holes was shown by Wald [5] to be given by the Noether
charge κS = 2π ∮_B Q_ξ of the timelike Killing vector ξ that generates the horizon, evaluated on
the bifurcate (d−2)-surface B of the horizon, where κ is the surface gravity of the horizon. Jacobson,
Myers and Kang [19] later showed that the charge can be evaluated anywhere on the horizon,
Myers and Kang [19] later showed that the charge can be evaluated anywhere on the horizon,
provided all fields are regular at the bifurcation surface. After a coordinate transformation, one
sees that this requires all gauge fields to vanish on the horizon, such that the gauge is fixed to
ξ · A = 0 at the horizon, and hence eliminates the ambiguity of the gauge-dependence of the
Noether charge.
For extremal black holes, κ = 0 on the horizon (r = 0), so Wald's prescription does not give a suitable
definition of S, and furthermore there is no bifurcation surface, which puts the gauge fixing in doubt.
In the AdS NHG, there should be no special point at which to compute physical quantities. Using
the concept that the entropy is intrinsic to the horizon, and hence does not require embedding
the NHG into an asymptotic geometry, those problems are cured by defining the entropy as
S := (2π / κ(r_bdy.)) H_I(r_bdy.) ,    (82)
in the dimensionally reduced theory with the boundary placed at any radius r_bdy. ≠ 0. The fact
that the 3d theory is static allows us to use
κ = − g_tt,r / ( 2 √(−g_tt g_rr) )    (83)
[9] that is well-defined and physically motivated as the acceleration of a probe at any radius r
with respect to an asymptotic observer and hence related to the temperature of Unruh radiation.
It also ensures that the entropy is independent of rbdy. with well-defined limits rbdy. → 0 and
rbdy. → ∞. Now, in terms of the Noether charge (80), the entropy is just as expected
S = (2π / κ(r)) Q_∂t(r)    (84)
in the gauge ξ · A(r) = ξ · B(r) = 0; but evaluated at r ≠ 0, rather than at r = 0 as one would
naïvely expect. We will see in the following three subsections that this definition of the entropy
naturally arises from black hole thermodynamics.
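To make the r_bdy.-independence explicit in the simplest setting (a sketch keeping only the gravitational contribution; the gauge-field terms are removed at the evaluation radius by the gauge choice ξ · A(r) = ξ · B(r) = 0 used above), take a 3d metric of the near-horizon form ds²_3 = v_1(−r² dt² + dr²/r²) + h(θ) dθ² with constant v_1. Then (83) gives
κ(r) = − g_tt,r / ( 2 √(−g_tt g_rr) ) = 2 v_1 r / (2 v_1) = r ,
while d∂̂_t = −2 v_1 r dr ∧ dt is linear in r, so the gravitational part of H_I(r) scales linearly with r as well and the ratio in (82) does not depend on r_bdy..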
6.3 First Law
Since we have now an expression for the entropy intrinsic to the extremal limit, let us see whether
we can also find an expression for its variation as derived for non-extremal black holes by Wald
in [5]. First let us write the Noether charge for the gauge-invariant action (70) in 3d for
ξ3d = ∂t as
Q_ξ3d(r) = (κ(r)/2π) S − ξ3d · A(r) Q_el. − ξ3d · B^a(r) J_a .    (85)
Then, we consider variations of the dynamical fields δφi that keep the solution on-shell and use
the identity δ dQ_ξ3d = d( ξ3d · Θ ) [5], with Θ defined in section 2, such that we can relate the
variation of the charge evaluated over two boundaries Σ1 and Σ2 of a spacelike (d−1)-surface:
∮_{Σ_1} ( δQ_ξ3d − ξ3d · Θ ) = ∮_{Σ_2} ( δQ_ξ3d − ξ3d · Θ ) .    (86)
Now, let us move the boundaries into the near-horizon geometry (→ ΣH) and into some asymp-
totic limit (→ Σ∞). On ΣH , we have
ξ3d ·Θ =
L dθM ǫLMN
gOP δ̄ΓNOP + g
ON δ̄ΓPOP
δAO,N
δAO +
δκ − Qelδ(ξ3d ·A) − Jiδ(ξ3d ·Bi) , (87)
where we used for the second equality the AdS2 isometries, and assumed an Einstein-Hilbert
term for the gravitational action, and any gauge field term that can be written with only first
derivatives of A, such as (70). The right hand side of (86) can be interpreted by following Wald,
and defining the canonical energy, i.e. the Hamiltonian measured by an asymptotic observer at
Σ∞, E = ∮_{Σ_∞} ( Q_ξ3d − ξ3d · V ) with some (d−1)-form V: δ ∮_{Σ_∞} ξ3d · V = ∮_{Σ_∞} ξ3d · Θ. This corresponds,
for the asymptotic boundary conditions A = B = 0 and suitable normalization of ξ3d, to the
mass. Altogether, (87) gives us now an expression similar to the first law
(κ(r)/2π) δS + Φ(r) δQ_el. + Ω^i(r) δJ_i = δE    (88)
at some r ≠ 0, where Φ(r) = −ξ3d · A(r) and Ω^i(r) = −ξ3d · B^i(r) measure the co-rotating
electric potential and angular frequency2 at r in the NHG with respect to the definition of E .
This, however is not yet a relation for the full black hole, but captures only physics outside Σr.
The extremal limit of the non-extremal first law of the full black hole solution is reproduced by
taking the limit r → 0:
Φ_H δQ_el. + Ω^i_H δJ_i = δE ,    (89)
where ΦH = −ξ3d ·A(0) and ΩH = −ξ3d ·B(0) are the horizon co-rotating electric potential and
angular frequency. It is interesting to observe though, that (88) and corresponding expressions
for the Smarr formula resemble the first law of a finite temperature black hole, even though its
physical significance is limited, as Σ_r for r ≠ 0 is not a horizon.
An interesting observation and lesson is that when embedding the near horizon solution into
an asymptotic solution, but computing Noether charges in the NHG, we need to use the gauge
invariant action (70) and the full Noether charge, because there is no boundary of the NHG on
which we were allowed to fix the gauge fields and its gauge variations.
2To illustrate that this definition of Ω corresponds to the one in [5], consider a vector ξ = ∂t − Ω∂φ in static
coordinates with a diagonal metric g, and ξ = ∂t′ in co-rotating coordinates with a non-diagonal metric g
′. Then
ξ̂ = gttdt− Ωgφφdφ = gt′t′dt′ + Bφt′gφφdφ. A similar argument follows from requiring constant normalization of ξ and
considering g_tt + Ω² g_φφ = g_t′t′ in the explicit coordinate transformation.
We see that our version of the first law also holds for perturbations away from extremality,
which connects it smoothly (in a thermodynamic sense) to the near-extremal limit of the non-
extremal black hole, again supporting our definition of the entropy.
6.4 Entropy Function and the Euclidean Action
Now, let us continue following Wald [5] and relate the (integrated) mass (or energy E) to the
entropy. Starting with (85), we apply Gauss’ law to find
(κ(r)/2π) S − ξ3d · A(r) Q_el. − ξ3d · B^a(r) J_a = E − ∫_M J_ξ3d + ∮ ξ3d · V =: E − (κ(r)/2π) I(r) ,    (90)
where the euclidean action3 I is now, in principle, a function of the radial position of ΣH , since
∂M = {Σ_H, Σ_∞}. Even though I is defined only for κ ≠ 0 as the integral of the analytically
continued Lagrangian, with τ = it having period 2π/κ, one would like to find a well-defined limit
as κ → 0, i.e. r → 0, representing the full extremal black hole solution. This requires
Φ_H Q_el. + Ω^a_H J_a = E .    (91)
This relation can be taken as a (gauge-dependent) definition of the mass of the black hole in
the near-horizon geometry. We note that since the action is gauge-invariant, (91) is gauge-
independent in the sense that a gauge transformation that changes ΦH and ΩH on Σ0 changes E
at Σ∞ accordingly. In the appropriate gauge in which E =M , it should agree with the BPS (or
extremality) condition - as we verified for BMPV and GR - and with an applicable Smarr-like
formula, provided one has a full solution at hand. Now, let us study the remaining terms of (90).
Again, we make use of the AdS2 geometry to find that ξ3d · ( A(r) − A(0) ) / κ(r) = F^3d_rt =: −E_H
is the constant co-rotating electric field-strength in the NHG, as is ξ3d · ( B^i(r) − B^i(0) ) / κ(r) =
H^i_rt =: −H^i_H the field strength of the KK gauge field. Now, (90) reads
S = −2π ( E_H Q_el. + H^i_H J_i ) − I ,    (92)
with all terms, including I, being independent of the position r ≠ 0 of Σ_H in the NHG. (92)
holds also in the limit as r → 0. A similar expression was proposed and discussed in a statistical
context by Silva in [6], where it was motivated by taking the extremal limit of non-extremal black
holes, assuming an appropriate expansion of ΦH and ΩH in terms of the inverse temperature.
This is identical to (92), provided one identifies the NHG field strengths with the appropriate
expansion coefficients in [6]. Note that this relation is particular for extremal black holes and
profoundly different from the relation of the entropy to the euclidean action for non-extremal
black holes [29, 30].
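To spell out the step from (90) to (92): by the definitions of E_H and H^i_H above, ξ3d · A(r) = −Φ_H − κ(r) E_H and ξ3d · B^i(r) = −Ω^i_H − κ(r) H^i_H, so that (90) becomes
(κ/2π) S + Φ_H Q_el. + Ω^i_H J_i + κ ( E_H Q_el. + H^i_H J_i ) = E − (κ/2π) I ;
subtracting (91) and dividing by κ(r)/2π then yields (92) at every r ≠ 0, and hence also in the limit r → 0.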
Let us now show how this relates to the entropy function formalism. Given I = −(2π/κ) [ ∫_M i_ξ3d L + ∮ i_ξ3d V ]
[5], we use the fact that the spacetime in the NHG can be trivially
foliated with spheres to re-write this as
I = −
iξ3dL +
iξ3dV −
iξ3dL
=: I0 +
L , (93)
where ∂M0 = {Σr=0,Σ∞}. Since
L is supposed to be invariant under the AdS2 isometries,
it is proportional to the volume form on AdS2 and (
L)�κ(r) = ⋆
L = const. Now,
the fact that I = const. implies that I0 = 0 and we are left with
S = −2π
EHQel. + H
HJi + ⋆
. (94)
3I equals the euclidean action only for stationary spacetimes, see [5].
This is just the entropy function for the gauge invariant action (70). The same derivation can
be applied to the original action (34) to give its corresponding entropy function. In that case E
in (91) will have a different value, because of the boundary terms in the action, stressing again
the need to work with (70) when relating the NHG to the asymptotic geometry.
6.5 Mass
Even though the mass of extremal black holes is fixed by the extremality (or BPS) relation (91),
let us now study its physical interpretation from the point of view of the NHG by deriving a
Smarr-like formula for the 5d Einstein-Maxwell-CS case.
Let us suppose there is some asymptotic geometry attached to the near horizon geometry in a
way that the conditions in section 3.3 are satisfied, and follow closely the derivation by Gauntlett,
Myers and Townsend in [4] for a few steps. The mass, E in a gauge in which A = B = 0 at Σ∞,
can be re-written using Gauss’s law in 5d as
M = −(1/16πG_5) ∮_{Σ_∞} ⋆dk̂ = −(1/16πG_5) [ ∮_Σ ⋆dk̂ + ∫_M d ⋆dk̂ ] ,    (95)
for some ∂M = {Σ,Σ∞} and k being the asymptotic unit norm timelike Killing vector. Assuming
we work in a gauge in which L_ξ A = 0, and using the relations □k^µ = −R^µ_ν k^ν , L_k Ω = i_k(dΩ) +
d(ikΩ) for any form Ω and the equations of motion for g and A, the result is
16πG5
⋆dk̂ + 4(k · A) ⋆ F −
k̂ ∧ (Â · F )
(k · A)A ∧ F
, (96)
plus a term at Σ∞ that vanishes as A→ 0. In dimensions other than d = 5, there will be an extra
term that cannot be expressed as a surface integral at ΣH . For details see [4]. Now, we see that
the first, second and last terms combine to give the Noether charge (19). Decomposing k into
its compact and non-compact components, k = ∂t + Ω
iχi, and choosing Σ to be an r = const.
surface in the NHG, we find from the 3d expressions (80,81) that this gives us
S + ΩiJi
+Φ(r)Qel.−
(∂t · A) ⋆ F −
(∂̂t +Ω
iχ̂i) ∧ (Â · F )
In (∂̂_t + Ω^i χ̂_i) ∧ (Â · F), we find that in terms of frame fields the relevant components are
(∂̂_t + Ω^i χ̂_i)_0, A_0 and F_01, since the AdS2 symmetries restrict non-vanishing F_M1 to M = 0. This
makes the last term vanish, such that we get in the limit r → 0 the Smarr formula
ΩiHJi + ΦHQel. , (98)
that agrees with the near-horizon limit of the non-extremal one. From the point of view of
the near-horizon solution, we find that the mass is now a gauge-dependent expression, with the
gauge given by the embedding of the near-horizon solution in the asymptotic solution. We find
that (98) looks different from (91), however they are in agreement since ΩH vanishes for BMPV
black holes [4].
7 Conclusions
In this paper we presented expressions for conserved currents and charges of 10d type IIB
supergravity (with the metric and five-form) and minimal (gauged) supergravity theories in 5
dimensions. These have been obtained following Wald’s construction of gravitational Noether
charges. Those of the 5d gauged supergravity can also be obtained by dimensional reduction of
the 10d formulae. We further showed that the Noether charges of the higher dimensional theories,
after dimensional reduction, match precisely with the Noether charges of gauge fields obtained
by Kaluza-Klein reduction over the compact Killing vector directions of interest. Our expressions
for the charges should be valid generally for both extremal and non-extremal geometries. We
then turned to their applications to extremal black holes and demonstrated that, when evaluated
in the near horizon geometries, our charges reproduce the conserved charges of the corresponding
extremal black holes under certain assumptions. In particular, we exhibited that our methods
give the correct electric charges and angular momenta for the BMPV and Gutowski-Reall black
holes.
A host of new solutions to supergravity theories with AdS2 isometries have been found
recently [20] and many more such solutions are expected to be found in the future. These
solutions may be interpreted as the near horizon geometries of some yet to be found black holes.
In such cases, our results should be useful in extracting the black hole charges without having to
know the full black hole solutions but just the near horizon geometries. On the other hand, the
holographic duals of string theories in the NHG are expected to be supersymmetric conformal
quantum mechanics. Our conserved charges should be part of the characterising data of these
conformal quantum mechanics.
We argued that the black holes with AdS3 near horizons do not satisfy our assumptions when
embedded in black hole asymptotes with Sd−2 isometries (rather than black string asymptotes).
Supersymmetric black rings are the main examples for which our formulae do not seem to apply.
More generally, for black holes with AdS3 near horizons one has to find the correct way to extract the conserved
charges separately, which we would like to return to in the future.
We then presented a new entropy function valid for rotating black holes in 5d with CS terms
which gives the correct electric charges as well as the entropy. This is an improvement over [21].
We used appropriate boundary terms that make the action fully gauge-independent, which turns
out to be relevant for obtaining the thermodynamics in the second part of the paper.
In the second part of the paper we exhibited a new definition of the entropy as a Noether
charge, and a derivation of the first law, which are applicable for extremal black holes directly.
We used this definition to produce the statistical version of the first law and moved on to re-
derive the entropy function from a more physical perspective. Finally, we commented on the
physical interpretation of the mass in the near-horizon solution. The relevant calculations were
done in the near-horizon geometry, only assuming an embedding into some asymptotic solution
for the purpose of formally defining the mass. We did not, however, produce a conserved charge
corresponding to the level number. In terms of the 5d fields, the expression in [27] is just
proportional to ∫ ⋆F , which is conserved in the NHG by the symmetries, but not by the
equations of motion in a general geometry. Various potentially interesting candidates, such as
the R-charge and global AdS2 time Noether-Wald charge did not produce an interesting result.
We find that the gauge-independent thermodynamic quantities can be evaluated everywhere
in the near-horizon geometry, as they are a statement about the near-horizon geometry. In
particular, they are the entropy, euclidean action and charges and their chemical potentials,
as well as the statistical version of the first law (92). Relations and quantities related to the
asymptotic geometry and to thermodynamics of non-extremal black holes (the mass, horizon
electric potential and angular frequency, as well as the first law and Smarr formula) however are
gauge-dependent from the point of view of the near-horizon geometry. They need to be evaluated
on a specific hypersurface, r = 0, as they come from position-dependent statements in the near-
horizon geometry. This means that the former ones may be more relevant for characterising
attractors.
Acknowledgements
We thank Rob Myers for helpful discussions, suggestions and comments on the
manuscript. MW was supported by funds from the CIAR and from an NSERC Discovery grant.
Research at the KITP is supported in part by the National Science Foundation under Grant No.
PHY05-51164 and research at the Perimeter Institute in part by funds from NSERC of Canada
and MEDT of Ontario.
A Black Rings
The non-equal angular momentum generalization of the BMPV case is the supersymmetric black
ring [18]. It is an excellent counter-example in which the conditions in section 3.3 are not satisfied.
To demonstrate this, we sketch out the derivation of the asymptotic and near horizon limits as
given in [18]. The general form of the solution is given by:
ds1 = −f2(dt + ωφdφ + ωψdψ)2 +
f−1R2
(x−y)2
( dy2
+ (1−x2)dφ2 + (y2−1)dψ2
f(dt+ ω) − q
(1 + x)dφ + (1 + y)dψ
, (99)
where y ∈ ]−∞,−1] , x ∈ [−1, 1] , φ, ψ ∈ R/2πZ and f−1 = 1 + Q−q
(x − y) − q
(x2 − y2),
ωφ = − q8R2 (1− x
3Q− q2(3 + x+ y)
and ωψ =
(1 + y) +
(1− y2)
3Q− q2(3 + x+ y)
The asymptotic limit is given by (x + 1) → +0 and (y + 1) → −0, and its geometry of a
squashed sphere with broken isometry SO(4) → U(1)2 can be made manifest by combining
(x, y) into a radial coordinate ρ ∈ R+ and an angular coordinate Θ ∈ [−π/2, π/2]:
ρ sinΘ =
x−y and ρ cosΘ =
x−y (100)
The near horizon limit, on the other hand, is given by y → −∞, such that appropriate radial
and angular coordinates are r = −R/y and cos θ = x. A first observation is that the two limits
are just points in the “opposite” coordinates, (ρ,Θ) → (R, π/2) and (r, θ) → (R, π). To obtain
the near horizon geometry in a suitable form, we define χ = φ − ψ, take the limit r = ǫr̃R−1,
t = ǫ−1t̃, ǫ→ 0 and get:
ds2 =
q2dr̃2
dt̃dψ +
(q2 −Q)2 − 4q2R2
dψ2 +
dθ2 + sin2θdχ2
A = −
(q2 +Q)dψ + q2(1 + cos θ)dχ
. (101)
Now, we also see that the topology of the horizon is S¹×S² with U(1)×SO(3) ⊃ U(1)² isometry,
whose U(1)² subgroup is not guaranteed to agree with the U(1)² of the asymptotic geometry.
The AdS2 geometry is more apparent after dimensional reduction, when gtt ∝ r̃2 is restored,
and after suitably rescaling t̃. [18] show furthermore that the AdS2 and S¹ combine into a local
AdS3. The conserved charges are now Jψ =
(q2 −Q)2 − 12q2R2
, Jχ = − π8G5 q(q
2 +Q)
and Qel. =
(q2+Q), or in the old coordinates Jψ =
(q2−Q)2+2q2(q2−2Q−6R2
qQ . They compare to the asymptotic quantities computed in [18] Jψ =
q(3Q− q2),
q(6R2 + 3Q− q2) and Qel. =
The distinguishing feature here is that black rings have an AdS3×S2 near-horizon geometry.
Thus the S1 × S2 of the horizon and the S3 of the asymptotic hypersurface are topologically
distinct, such that there is no continuous fibration of hypersurfaces over r between them. In
particular, The coordinates that describe the asymptotic S3 shrink the horizon and the area
bounded by the black ring to a point in 3d (or an S1 × S1 in 5d), and are missing part of the
boundary of the full solution because of the difference in topology. This missing part shrinks
into the coordinate singularity that also contains the horizon, so flux that passes though that
part of the boundary will not be seen from the asymptotic geometry.
It is not inconceivable that if we consider the black rings on Taub-Nut spaces like in [31, 32, 33]
and obtain a 4d black hole which satisfies our criteria one may yet be able to recover the charges
of such black rings.
References
[1] A. Sen, “Black hole entropy function and the attractor mechanism in higher derivative
gravity,” JHEP 0509, 038 (2005) [arXiv:hep-th/0506177].
[2] P. Kraus and F. Larsen, “Microscopic black hole entropy in theories with higher derivatives,”
JHEP 0509, 034 (2005) [arXiv:hep-th/0506176].
[3] D. Astefanesei, K. Goldstein, R. P. Jena, A. Sen and S. P. Trivedi, “Rotating attractors,”
JHEP 0610, 058 (2006) [arXiv:hep-th/0606244].
[4] J. P. Gauntlett, R. C. Myers and P. K. Townsend, “Black holes of D = 5 supergravity,”
Class. Quant. Grav. 16, 1 (1999) [arXiv:hep-th/9810204].
[5] R. M. Wald, “Black hole entropy is Noether charge,” Phys. Rev. D 48, 3427 (1993)
[arXiv:gr-qc/9307038].
[6] P. J. Silva, “Thermodynamics at the BPS bound for black holes in AdS,” JHEP 0610, 022
(2006) [arXiv:hep-th/0607056].
[7] J. Lee and R. M. Wald, “Local symmetries and constraints,” J. Math. Phys. 31, 725 (1990).
[8] M. Rogatko, “First law of black rings thermodynamics in higher dimensional Chern-Simons
gravity,” Phys. Rev. D 75, 024008 (2007) [arXiv:hep-th/0611260].
[9] T. Ortin, “Gravity And Strings,” (Cambridge University Press, Cambridge, England, 2004)
[10] G. L. Cardoso, J. M. Oberreuter and J. Perz, “Entropy function for rotating extremal black
holes in very special geometry,” JHEP 0705, 025 (2007) [arXiv:hep-th/0701176].
[11] K. Goldstein and R. P. Jena, “One entropy function to rule them all,” arXiv:hep-th/0701221.
[12] J. B. Gutowski and H. S. Reall, “Supersymmetric AdS5 black holes,” JHEP 0402, 006
(2004) [arXiv:hep-th/0401042].
[13] J. P. Gauntlett, J. B. Gutowski and N. V. Suryanarayana, “A deformation of AdS5 × S5,”
Class. Quant. Grav. 21, 5021 (2004) [arXiv:hep-th/0406188].
[14] A. Sinha, J. Sonner and N. V. Suryanarayana, “At the horizon of a supersym-
metric AdS5 black hole: Isometries and half-BPS giants,” JHEP 0701, 087 (2007)
[arXiv:hep-th/0610002].
[15] P. Davis, H. K. Kunduri and J. Lucietti, “Special symmetries of the charged Kerr-
AdS black hole of D = 5 minimal gauged supergravity,” Phys. Lett. B 628, 275 (2005)
[arXiv:hep-th/0508169].
[16] J. C. Breckenridge, R. C. Myers, A. W. Peet and C. Vafa, “D-branes and spinning black
holes,” Phys. Lett. B 391, 93 (1997) [arXiv:hep-th/9602065].
[17] Z. W. Chong, M. Cvetic, H. Lu and C. N. Pope, “General non-extremal rotating black
holes in minimal five-dimensional gauged supergravity,” Phys. Rev. Lett. 95, 161301 (2005)
[arXiv:hep-th/0506029].
[18] H. Elvang, R. Emparan, D. Mateos and H. S. Reall, “A supersymmetric black ring,” Phys.
Rev. Lett. 93, 211302 (2004) [arXiv:hep-th/0407065].
[19] T. Jacobson, G. Kang and R. C. Myers, “On Black Hole Entropy,” Phys. Rev. D 49, 6587
(1994) [arXiv:gr-qc/9312023].
[20] J. P. Gauntlett, N. Kim and D. Waldram, “Supersymmetric AdS(3), AdS(2) and bubble
solutions,” JHEP 0704, 005 (2007) [arXiv:hep-th/0612253].
[21] J. F. Morales and H. Samtleben, “Entropy function and attractors for AdS black holes,”
JHEP 0610, 074 (2006) [arXiv:hep-th/0608044].
[22] A. Chamblin, R. Emparan, C. V. Johnson and R. C. Myers, “Charged AdS black holes and
catastrophic holography,” Phys. Rev. D 60, 064018 (1999) [arXiv:hep-th/9902170].
[23] M. Cvetic et al., “Embedding AdS black holes in ten and eleven dimensions,” Nucl. Phys.
B 558, 96 (1999) [arXiv:hep-th/9903214].
[24] J. P. Gauntlett, R. C. Myers and P. K. Townsend, “Supersymmetry Of Rotating Branes,”
Phys. Rev. D 59, 025001 (1999) [arXiv:hep-th/9809065].
[25] R. C. Myers and M. J. Perry, “Black Holes In Higher Dimensional Space-Times,” Annals
Phys. 172, 304 (1986).
[26] B. Sahoo and A. Sen, “BTZ black hole with Chern-Simons and higher derivative terms,”
JHEP 0607, 008 (2006) [arXiv:hep-th/0601228].
[27] R. Emparan and D. Mateos, “Oscillator level for black holes and black rings,” Class. Quant.
Grav. 22, 3575 (2005) [arXiv:hep-th/0506110].
[28] S. W. Hawking and S. F. Ross, “Duality between electric and magnetic black holes,” Phys.
Rev. D 52, 5865 (1995) [arXiv:hep-th/9504019].
[29] V. Iyer and R. M. Wald, “A Comparison of Noether charge and Euclidean methods
for computing the entropy of stationary black holes,” Phys. Rev. D 52, 4430 (1995)
[arXiv:gr-qc/9503052].
[30] S. Dutta and R. Gopakumar, “On Euclidean and noetherian entropies in AdS space,” Phys.
Rev. D 74, 044007 (2006) [arXiv:hep-th/0604070].
[31] H. Elvang, R. Emparan, D. Mateos and H. S. Reall, “Supersymmetric 4D rotating black
holes from 5D black rings,” JHEP 0508, 042 (2005) [arXiv:hep-th/0504125].
[32] D. Gaiotto, A. Strominger and X. Yin, “5D black rings and 4D black holes,” JHEP 0602,
023 (2006) [arXiv:hep-th/0504126].
[33] D. Gaiotto, A. Strominger and X. Yin, “New connections between 4D and 5D black holes,”
JHEP 0602, 024 (2006) [arXiv:hep-th/0503217].
[34] J. B. Gutowski and H. S. Reall, “General supersymmetric AdS(5) black holes,” JHEP 0404,
048 (2004) [arXiv:hep-th/0401129].
[35] H. K. Kunduri, J. Lucietti and H. S. Reall, “Supersymmetric multi-charge AdS(5) black
holes,” JHEP 0604, 036 (2006) [arXiv:hep-th/0601156].
[36] H. K. Kunduri, J. Lucietti and H. S. Reall, “Do supersymmetric anti-de Sitter black rings
exist?,” JHEP 0702, 026 (2007) [arXiv:hep-th/0611351].
|
0704.0959 | Theoretical Status of Pentaquarks | Theoretical Status of Pentaquarks
Takumi Doi1,2,∗)
1 Dept. of Physics and Astronomy, Univ. of Kentucky, Lexington, KY 40506, USA
2 RIKEN BNL Research Center, BNL, Upton, NY 11973, USA
We review the current status of the theoretical pentaquark search from the direct QCD
calculation. The works from the QCD sum rule and the lattice QCD in the literature are
carefully examined. The importance of the framework which can distinguish the exotic
pentaquark state (if any) from the NK scattering state is emphasized.
§1. Introduction
While QCD was established as a fundamental theory of the strong interaction
a few decades ago, its realization in hadron physics has not been understood com-
pletely. For instance, the (apparent) absence of “exotic” states, which are different from ordinary qq̄ mesons and qqq baryons, has been a long-standing problem. Therefore,
the announcement1) of the discovery of Θ+ (1540), whose minimal configuration is
uudds̄, was quite striking. For the current experimental status, we refer to Ref.2)
In this report, we review the theoretical effort to search for the Θ+ pentaquark
state. The main issue here is whether QCD favors its existence or not, and the
determination of possible quantum numbers for the pentaquark families (if any). In
particular, in order to understand the narrow width of Θ+ observed in the experi-
ment, it is crucial to determine the spin and parity directly from QCD.
For this purpose, we employ two frameworks, the QCD sum rule and lattice QCD, both of which allow nonperturbative QCD calculations without models and have achieved great success for ordinary mesons/baryons. Note, however, that neither of them is infallible, and we consider them complementary to each other.
For instance, the lattice simulation cannot be performed with a completely realistic setup, i.e., there often exist artifacts stemming from discretization errors, finite volume, heavy u,d-quark masses, the neglect of dynamical quark effects (quenching), etc. On the other hand, the sum rule can be constructed in a realistic situation and is free from such lattice artifacts. Unfortunately, it suffers from other types of artifact. Because a sum rule yields only the dispersion integral of the spectrum, an interpretive model function has to be assumed phenomenologically. Compared to ordinary hadron analyses, this procedure may weaken the predictability for experimentally uncertain systems, such as pentaquarks. Another artifact in the sum rule is the OPE truncation: one has to evaluate whether the OPE convergence is sufficient or not.
We also comment on an important issue common to both methods. Recall that the decay channel Θ+ → N + K is open experimentally. Considering also that both methods calculate a two-point correlator and seek a pentaquark signal in it, it is essential to develop a framework which can distinguish the pentaquark from the NK state in the correlator. In the subsequent sections, we examine the literature and see how the above-described issues have been resolved or remain unresolved.
∗) e-mail address: [email protected]
§2. The QCD Sum Rule Work
More than ten sum rule analyses for Θ+ spectroscopy exist for J = 1/2.3), 4), 5), 6), 7), 8), 9), 10), 11), 12)
The first parity projected sum rule was studied by us5) for I = 0. The posi-
tivity of the pole residue in the spectral function is proposed as a signature of
the pentaquark signal. This is a superior criterion to the consistency check of predicted/experimental masses, because it is difficult to achieve a mass prediction within 100 MeV (∼ [m(Θ+) − m(NK)]) accuracy. We also propose the diquark exotic current J_{5q} = \epsilon^{abc}\epsilon^{def}\epsilon^{cfg}(u_a^T C d_b)(u_d^T C\gamma_5 d_e)\, C\bar{s}_g^T, in order to suppress the NK
state contamination. The OPE is calculated up to dimension 6, checking that the
highest dimensional contribution is reasonably small. We obtain a possible signal in
negative parity.
In the treatment of the NK state, an improvement is proposed in Ref. 7) There, the NK contamination is evaluated using the soft-kaon theorem. Note here that the NK contamination calculated from two-hadron reducible (2HR) diagrams at the OPE level6) is invalid, because what has to be calculated is the 2HR part at the hadronic level, not at the QCD (OPE) level. The reanalysis7) of the sum rule up to dimension 6 shows that the subtraction of the NK state does not change the result of Ref. 5)
Yet, as described in Sec.1, the above sum rules may suffer from the OPE trun-
cation artifact. In fact, explicit calculations up to higher dimensions have shown that this is indeed the case.10), 11), 12) Here, we refer to the elaborate work of Ref. 12)
They calculate the OPE for I(JP ) = 0(1/2±) up to dimension D = 15. It is shown
that the terms with D > 6 are important as well, while further high dimensional
terms D > 15 are not significant. Another idea in Ref.12) is the use of the combi-
nation of two independent pentaquark sum rules. In fact, the proper combination
is found to suppress the continuum contamination drastically, which corresponds to
reducing the uncertainty in the phenomenological model function. Examining the
positivity of the pole residue, they conclude the pentaquark exists in positive parity.
Does the result12) definitely predict the JP = 1/2+ pentaquark? At this mo-
ment, we conservatively point out remaining issues. The first problem is still the
NK contamination. While such contamination is expected to be partly suppressed
through the continuum suppression, it is possible that the obtained signal corre-
sponds to just scattering states. In this point, Ref.12) argues that the signal has
different dependence on the parameter 〈q̄q〉 from the NK state. We, however, con-
sider this discussion uncertain, because 〈q̄q〉 is not a free parameter independent of
other condensates. For further study, the explicit estimate in the soft-kaon limit7) is an interesting check, but the calculation up to high dimensions has not been worked out yet. A second issue is related to the OPE. In the evaluation of the high-dimensional condensates, one practically has to rely on the vacuum saturation approximation, while the uncertainty originating from this procedure is not known. Furthermore, there is an issue with the validity of the OPE itself when considering sum rules with high dimensionality. In fact, a rough analysis of the gluonic condensates shows13) that the nonperturbative OPE may break down around D ≳ 11–16. One may have
to consider this effect as well, through, for instance, the instanton picture.11)
So far, we have reviewed J = 1/2 sum rules. While there are J = 3/2 works,14), 15)
it is likely that they suffer from slow OPE convergence. Further progress is awaited.
§3. The Lattice QCD Work
There are a dozen quenched lattice calculations:16), 17), 18), 19), 20), 21), 22), 23), 24), 25), 26), 27), 28) some of them16), 17), 23), 25) report a positive signal, while others18), 19), 20), 21), 26), 24) report null results. This apparent inconsistency, however, can be understood in a unified way by taking a closer look at the “interpretation” of the numerical results and the pending lattice artifacts.
As discussed in Sec.1, the question is how to identify the pentaquark signal in
the correlator, because the correlator at large Euclidean time is dominated by the ground state, the NK scattering state. On this point, we develop a new method in Refs. 19), 26) Intuitively, this method makes use of the fact that a scattering state is sensitive to the spatial boundary condition (BC), while a compact one-particle state is expected to be insensitive. Practically, we calculate the correlator under two spatial BCs: (1) periodic BC (PBC) for all u,d,s-quarks, (2) hybrid BC (HBC), with anti-periodic BC for u,d-quarks and periodic BC for the s-quark. The consequences are as follows. Under PBC, all of Θ+, N, K are subject to periodic BC. Under HBC, while Θ+(uudds̄) remains subject to periodic BC, N(uud,udd) and K(s̄d,s̄u) are subject to anti-periodic BC. Therefore, the energy of NK will shift in going from PBC to HBC due to the momenta of N and K, while there is no energy shift for Θ+. (Recall that the momentum is quantized on the lattice as 2nπ/L for periodic BC and (2n + 1)π/L for anti-periodic BC, with spatial lattice extent L and the integer vector n ∈ Z³.) In this way, the different behavior of NK and Θ+ can be used to identify whether the signal is NK or Θ+. We simulate an anisotropic lattice, β = 5.75, V = 12³ × 96, aσ/aτ = 4, with the clover fermion. The conclusion is: (1) the signal in 1/2− is found to be s-wave NK from the HBC analysis; no pentaquark is found up to ∼ 200 MeV above the NK threshold. (2) The 1/2+ state is too massive (> 2 GeV) to be identified as Θ+(1540).
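As a rough numerical illustration of the size of this boundary-condition shift, one may evaluate the free NK threshold under both BCs; the hadron masses and the physical spatial extent below are assumed, illustrative values, not the parameters of the simulation described above.

import math

hbarc = 0.1973                     # GeV fm
m_N, m_K = 0.94, 0.50              # GeV (roughly physical; assumed)
L = 2.2 / hbarc                    # assumed spatial extent of ~2.2 fm, in GeV^-1

def E_NK(p):
    # energy of a non-interacting N + K pair with back-to-back momentum p
    return math.sqrt(m_N**2 + p**2) + math.sqrt(m_K**2 + p**2)

p_PBC = 0.0                              # periodic BC: minimum momentum is zero
p_HBC = math.sqrt(3.0) * math.pi / L     # anti-periodic BC in all three directions

print("NK threshold, PBC: %.3f GeV" % E_NK(p_PBC))
print("NK threshold, HBC: %.3f GeV" % E_NK(p_HBC))
print("shift:             %.3f GeV" % (E_NK(p_HBC) - E_NK(p_PBC)))
# A compact Theta+ state stays (nearly) unshifted, so a level that tracks this
# shift is identified as NK scattering rather than a pentaquark.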
For comparison with other lattice results, we introduce another powerful method18) to distinguish Θ+ from NK. This method makes use of the fact that the volume dependence of the spectral weight behaves as O(1) for a one-particle state and as O(1/L³) for a two-particle state. Intuitively, the latter factor O(1/L³) can be understood as the encounter probability of the two particles. The calculation18) of the spectral weight from 16³ × 28 and 12³ × 28 lattices reveals that the ground states of both the 1/2± channels are not the pentaquark but scattering states. Further analysis is performed in Ref. 23) There, the 1st excited state in 1/2− is extracted with a 2 × 2 variational method. The volume dependence of the spectral weight indicates that the 1st excited state is not a scattering state but a pentaquark state. This is consistent with Ref. 27), where a 19 × 19 variational method is used to extract the excited states.
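A minimal sketch of how this volume-scaling test is applied; the spectral weights below are invented numbers, purely to show the comparison (see Refs. 18), 23) for the measured values).

L_small, L_large = 12, 16                 # spatial lattice extents
W_small, W_large = 1.00, 0.43             # hypothetical fitted spectral weights

print("measured ratio W(16^3)/W(12^3):", W_large / W_small)
print("expected for a one-particle state:", 1.0)
print("expected for a two-particle state:", (L_small / L_large) ** 3)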
Note here that this result is consistent with the HBC analysis.19) In fact, the HBC analysis excludes the pentaquark up to ∼ 200 MeV above threshold, while the resonance observed in Ref. 23) lies 200–300 MeV above the threshold. The question is whether the observed resonance is really Θ+, which experimentally lies 100 MeV above the threshold. To address this question, explicit simulations are necessary at physically small quark masses without quenching. In particular, small quark masses
would be important, considering that Refs. 19), 23) are simulated at rather heavy quark masses and are expected to suffer from large uncertainties in the chiral extrapolation.
Finally, we discuss the JP = 3/2± lattice results. We performed the com-
prehensive study26) with three different operators and conclude that all the lattice signals are too massive (> 2 GeV) for Θ+ and are identified, from the HBC analysis, not as pentaquarks but as scattering states. On the other hand, Ref. 25) claims that a pentaquark candidate is found in 3/2+. We, however, observe that the latter result is contaminated by significantly large statistical noise, which makes it quite uncertain. Note also that their criterion to distinguish Θ+ from scattering states is based on a rather limited argument compared to the HBC analysis.
§4. Conclusions
We have examined both the QCD sum rule and the lattice QCD works. In the sum rule, progress in the OPE calculation and in continuum suppression has led to a stable analysis, while the subtraction of the NK contamination remains a critical issue. In the lattice, frameworks which distinguish the pentaquark from NK have been successfully established. In order to resolve the superficial inconsistency among the lattice predictions, calculations at small quark masses without quenching are highly desirable.
Acknowledgements
This work was completed in collaboration with Drs. H. Iida, N. Ishii, Y. Nemoto,
M.Oka, F.Okiharu, H.Suganuma and J.Sugiyama. T.D. is supported by Special Post-
doctoral Research Program of RIKEN and by U.S. DOE grant DE-FG05-84ER40154.
References
1) LEPS Collaboration, T. Nakano et al., Phys. Rev. Lett. 91 (2003), 012002.
2) T. Nakano, in these proceedings.
3) S.-L. Zhu, Phys. Rev. Lett. 91 (2003), 232002.
4) R.D. Matheus et al., Phys. Lett. B 578 (2004), 323.
5) J. Sugiyama, T. Doi, and M. Oka, Phys. Lett. B 581 (2004), 167.
6) Y. Kondo, O. Morimatsu, T. Nishikawa, Phys. Lett. B 611 (2005), 93.
7) S.H. Lee, H. Kim, Y. Kwon, Phys. Lett. B 609 (2005), 252.
8) Y. Kwon, A. Hosaka and S.H. Lee, hep-ph/0505040 (2005).
9) M. Eidemuller, Phys. Lett. B 597 (2004), 314.
10) B.L. Ioffe and A.G. Oganesian, JETP Lett. 80 (2004), 386, R.D. Matheus and S. Narison,
Nucl. Phys. Proc. Suppl. 152 (2006), 236, A.G. Oganesian, hep-ph/0510327 (2005).
11) H.-J. Lee et al., Phys. Rev. D 73 (2006), 014010, ibid., Phys. Lett. B 610 (2005), 50.
12) T. Kojo, A. Hayashigaki, D. Jido, Phys. Rev. C 74 (2006), 045206.
13) M.A. Shifman, A.I. Vainshtein and V.I. Zakharov, Nucl. Phys. B 147 (1979), 385, 448.
14) T. Nishikawa et al., Phys. Rev. D 71 (2005), 016001, 076004.
15) W. Wei, P.-Z. Huang and H.-X. Chen and S.-L. Zhu, JHEP 0507 (2005), 015.
16) F. Csikor et al., JHEP 0311 (2003), 070, S. Sasaki, Phys. Rev. Lett. 93 (2004), 152001.
17) T.W. Chiu and T.H. Hsieh, Phys. Rev. D 72 (2005), 034505.
18) N. Mathur et al., Phys. Rev. D 70 (2004), 074508.
19) N. Ishii et al., Phys. Rev. D 71 (2005), 034001.
20) B.G. Lasscock et al., Phys. Rev. D 72 (2005), 014502.
21) F. Csikor et al., Phys. Rev. D 73 (2006), 034506.
22) C. Alexandrou and A. Tsapalis, Phys. Rev. D 73 (2006), 014507.
23) T.T. Takahashi, T. Kunihiro, T. Onogi, and T. Umeda, Phys. Rev. D 71 (2005), 114509.
24) K. Holland, and K.J. Juge, Phys. Rev. D 73 (2006), 074505.
25) B.G. Lasscock et al., Phys. Rev. D 72 (2005), 074507.
26) N. Ishii, T. Doi, Y. Nemoto, M. Oka and H. Suganuma, Phys. Rev. D 72 (2005), 074503.
27) O. Jahn, J.W. Negele and D. Sigaev PoS LAT2005 (2006), 069.
28) C. Hagen, D. Hierl and A. Schafer, Eur. Phys. J. A 29 (2006), 221.
|
0704.0961 | On the orbital period of the magnetic Cataclysmic Variable HS 0922+1333 | Astronomy & Astrophysics manuscript no. tovmassian_englisheditor © ESO 2018
November 2, 2018
On the orbital period of the magnetic Cataclysmic Variable
HS 0922+1333
G. H. Tovmassian ⋆ and S.V. Zharikov
Observatorio Astronómico Nacional SPM, Instituto de Astronomía, Universidad Nacional Autónoma de México, Ensenada, BC,
México⋆⋆;
e-mail: gag,[email protected]
ABSTRACT
Context. The object HS 0922+1333 was visited briefly in 2002 in a mini survey of low accretion rate polars (LARPs) in order to test
if they undergo high luminosity states similar to ordinary polars. On the basis of that short observation the suspicion arose that the
object might be an asynchronous polar (Tovmassian et al. 2004). The disparity between the presumed orbital and spin period appeared
to be quite unusual.
Aims. We performed follow-up observations of the object to resolve the problem.
Methods. New simultaneous spectroscopic and photometric observations spanning several years allowed measurements of radial
velocities of emission and absorption lines from the secondary star and brightness variations due to synchrotron emission from the
primary.
Results. New observations show that the object is actually synchronous and its orbital and spin period are equal to 4.04 hours.
Conclusions. We identify the source of confusion in the previous observations to be a high-velocity component of the emission line arising from the stream of matter leaving the L1 point.
Key words. stars: - cataclysmic variables - magnetic, individual: - stars: HS 0922+1333
1. Introduction
Magnetic cataclysmic variables (CV) are accreting binary sys-
tems in which material transfers from a dwarf secondary star
onto a magnetic (∼5 < B < ∼250 MG) white dwarf (WD)
through Roche lobe overflow. Polars or AM Her systems with
magnetic fields larger than ∼ 10 MG stand out among magnetic
CVs because the spin period of the primary WD is synchronized
with the orbital period of the system. Unlike non-magnetic or
low-magnetic accreting binaries, they have neither a disk nor
the capacity to accumulate the transferred matter, so the bulk of
flux of these systems comes from the accretion flow, particularly
around magnetic poles. Therefore, their luminosity is sensitive
to the mass transfer rate Ṁ. Polars are known to have highs and
lows in their luminosity state, which is directly dependent on Ṁ.
In recent years a number of polars were identified with ex-
tremely low accretion rates. They are commonly called LARPs,
a name coined by Schwope et al. (2002). Their mass accretion
rate is estimated to be about a few 10−13 M⊙/yr, two orders
of magnitude below the average for CVs and they are distin-
guished for their prominent cyclotron emission lines on top of
otherwise featureless blue continua. The first two LARPs, in-
cluding the subject of this study, were discovered in the course
of the Hamburg QSO survey, thanks to a broad variable fea-
ture in the spectra subsequently identified with cyclotron lines
(Reimers et al. 1999; Reimers & Hagen 2000, hereafter RH).
Later, another newly identified magnetic CV from the list of
Send offprint requests to: G. Tovmassian
⋆ Visiting research fellow at Center for Astrophysics and Space
Sciences, University of California, San Diego, 9500 Gilman Drive, La
Jolla, CA 92093-0424, USA
⋆⋆ PO Box 439027, San Diego, CA, 92143-9024, USA
ROSAT sources (RX J1554.2+2721) was spotted in the low
state with a spectrum identical to LARPs (Tovmassian et al.
2001, 2004). Intrigued by that discovery, we conducted a blitz
campaign to check if canonical LARPs, namely HS 1023+3900
and HS 0922+1333, might be caught in a high state as well.
Since both objects had only recently been discovered and had
very limited observational coverage, we obtained one full binary
orbital period of spectral observations. Our instrumental setup
provided higher spectral resolution than the original discovery
observation by RH. We observed emission from the Hα line ap-
parently arising from the irradiated surface of the secondary star
facing the hot accreting spot on the WD and Na i infrared dou-
blet from the cooler parts of the secondary star (Tovmassian et al.
2004). The derived radial velocity (RV) curve from that obser-
vation did not fold well with the period estimated in the dis-
covery paper. However, the limited time coverage undermined
our ability to measure the period properly. We could only state
that the period might exceed the one reported by RH by
at least 1.14 times, corresponding to Pspin/Porb=0.88. It should
be noted that RH determined their period from the cyclotron
hump cycles and thus, they measured the WD spin period rather
than the binary orbital period. It would be quite usual to find
some degree of de-synchronization between the spin period of
the WD and orbital period. Nevertheless, the difference in peri-
ods was too large for an asynchronous polar and too small for an
intermediate polar. The latter mostly follow the empirical ratio
Pspin/Porb ∼ 0.1. In rare cases, Pspin/Porb ∼ 0.25 (see e.g. Norton et al. 2004). There are also theoretical restrictions on the kind of ratio indicated by our observation, as is evident from the Norton et al. (2004) paper. Therefore, we conducted a new se-
ries of observations in order to establish the orbital period of the
compact binary and to classify it properly. This brief paper anal-
yses a combined set of observations and discusses the reasons
that led us to an erroneous conclusion in 2004.
In Sect.2 we describe our observations and the data reduc-
tion. The data analysis and the results are presented in Sect.3,
and conclusions are drawn in Sect.4.
2. Observations and reduction
Sets of observations were collected over a four-year period and
analyzed. All observations of HS 0922+1333 reported here were
obtained at the Observatorio Astrónomico Nacional San Pedro
Martir, México. The B&Ch spectrograph installed at the 2.1 me-
ter telescope was used for the extensive spectroscopy, while a
1.5 m telescope was used to obtain simultaneous photometry
during the 2003 March run. In the first observations, upon which
Tovmassian et al. (2004) depended, we used a 600 l/mm grating
centered in the optical IR range (6200 – 8340 Å) to achieve a
spectral resolution of 4.2 Å FWHM in a sequence of 900 sec ex-
posures covering one orbital period. The controversy over the
periods led us to re-observe the object during three nights in
March 2003. This time we utilized the highest available grat-
ing of 1200 l/mm. The spectral resolution reached 2.2 Å FWHM
covering the 6100 – 7200 Å range. Later we collected more ob-
servations with lower resolution to refine the orbital period and
properly classify the secondary star.
In all observations an SITe 1024 × 1024 24 µm pixel CCD
was used to acquire the data. The slit width was usually set to 2.′′0
and oriented in the E–W direction. He-Ar arc lamp exposures
were taken at the beginning and end of each run for wavelength
calibration.
The 2003 March spectroscopic observations were accompanied by simultaneous differential photometry. Exposure times were 40–60 sec
with an overall time resolution of about 80–100 sec using the
Johnson-Cousins Rc filter.
The reduction of the data was done in a fairly standard manner. The bulk of the reduction was performed using IRAF1 procedures, except for the removal of cosmic rays, which was done with the corresponding program in MIDAS2, as this is an easier and more reliable tool. The bi-
ases were taken at the beginning and end of the night and were
subtracted after being combined using the CCD overscan area
for control of possible temperature-related variations during the
night. We did not do flat field correction for spectral observations
and used blank sky images taken at twilight for direct images.
The flux calibration was done by observing a spectrophotomet-
ric standard star. Feige 34 was observed during a 2002 run and
G191-B2B during the rest of the observations.
The wavelength calibration is routinely done by observing
a He–Ar arc lamp at the beginning and end of a sequence on
the object or every 2 hours if the sequence is too long. Then the
wavelength solutions calculated for each arc-lamp exposure and
an average of preceding and succeeding images are applied to
the object observed in between. The wavelength solutions are
usually good to a few tenths of an Angstrom, while deviations due to the telescope position and flexure of the spectrograph can exceed that by an order of magnitude. Usually, that does not pose
a problem since we work with moderate resolutions and the am-
plitude of radial velocity variation is on the order of hundreds
of km/sec. The sensible way of checking and correcting wave-
length calibration is to measure the night sky lines. We mea-
1 http://iraf.noao.edu
2 ESO-MIDAS is the acronym for the European Southern
Observatory Munich Image Data Analysis System which is developed
and maintained by the European Southern Observatory
Fig. 1. The CLEANed power spectrum of the RV variation is
presented by a solid line. The dashed line is the power spectrum
of photometric data. Vertical axes on the right side correspond
to the photometric power scale.
sured several lines by selecting unblended ones located close to
Hα and Na I. The measurements of sky lines show a clear trend
and indicate the scale of errors that one can incur depending on
the telescope inclination. Although the trend is unusually steep,
reaching 30 km/sec over 4 hours of observation, the scatter of
points around a linear fit is relatively small, which defines the
error of the measurements (rms) and is ≤8 km/sec. Nevertheless,
the error bars in the corresponding plots reflect the entire range
of deviation just to demonstrate the scale of corrections applied
to the data. The deviations of the linear fit to the measured night
sky lines (with an average of 2 night sky lines around each mea-
sured line) from the rest value were used to correct the wave-
length calibration by the corresponding amount.
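A minimal sketch of this trend correction, with invented numbers standing in for the actual sky-line and object measurements:

import numpy as np

t_sky = np.array([0.0, 1.0, 2.0, 3.0, 4.0])       # hours since start of the run
v_sky = np.array([2.0, 9.0, 16.0, 22.0, 30.0])    # apparent sky-line velocity, km/s

slope, intercept = np.polyfit(t_sky, v_sky, 1)    # linear trend
rms = np.std(v_sky - (slope * t_sky + intercept)) # scatter about the fit

t_obj = np.array([0.5, 1.5, 2.5, 3.5])            # mid-exposure times of object spectra
v_obj = np.array([120.0, 60.0, -80.0, -130.0])    # measured RVs, km/s
v_corr = v_obj - (slope * t_obj + intercept)      # trend-corrected RVs
print(v_corr, "  rms of the sky-line fit: %.1f km/s" % rms)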
3. Orbital period and system parameters
We measured the Hα line in the 2002 spectra with single
Gaussian fits. The resulting RV curve was reasonably smooth
and sinusoidal, but the ends of the curve would not overlap when
folded with the period reported by RH (see Fig. 5 in 2004). We
speculated that the actual orbital period is longer than the one de-
rived from the photometry. However, the measurements of new
spectra obtained in 2003 do not show such a large discrepancy,
and the period analysis of the combined dataset easily reveals
that the true period is indeed 4.0395 hours and coincides with the
photometric period derived from the synchrotron lines variabil-
ity within errors of measurement. The combination of data taken
years apart and several nights in a row each year allowed us to
determine the period very precisely. We applied the CLEAN pro-
cedure (Roberts et al. 1987) to sort out the alias periods resulting
from the uneven data sampling and daily gaps and obtained a
strong peak in the power spectrum at the 5.94131 ± 0.00065 cy-
cles/day, corresponding to a 4.0395 ± 0.0001 hour period (see
Fig. 1). Simultaneous with spectroscopy, we obtained photome-
Table 1. Log of observations of HS 0922+1333
Date HJD+2450000 Telescope Instrument/Grating Range/Band Exp.Time × Num. of Integrations Duration
Spectroscopy
2002 02 04 2309 2.1m B&Ch1 600l/mm 6200-8340Å 900s×19 4.5h
2003-03-25 2723 2.1m B&Ch 1200l/mm 6100-7200Å 900s×13 3.3h
2003-03-26 2724 2.1m B&Ch 1200l/mm 6100-7200Å 900s×6 1.3h
2003-03-27 2725 2.1m B&Ch 1200l/mm 6100-7200Å 900s×15 2.6h
2005-10-27 3670 2.1m B&Ch 400l/mm 6100-9200Å 900s×15 2.2h
2005-10-29 3672 2.1m B&Ch 400l/mm 6100-9200Å 900s×11 1.7h
2005-10-30 3673 2.1m B&Ch 400l/mm 6100-9200Å 900s×8 1.2h
2006-01-18 3673 2.1m B&Ch 400l/mm 5800-8900Å 1800s×8 3.7h
Photometry
2003-03-26 2724 1.5m RUCA2 R 120s×107 3.8h
2003-03-27 2725 1.5m RUCA R 120s×99 3.4h
2003-03-28 2726 1.5m RUCA R 120s×99 3.4h
1 B&Ch - Boller & Chivens spectrograph (http://haro.astrospp.unam.mx/Instruments/bchivens/bchivens.htm)
2 RUCA - CCD photometer (http://haro.astrospp.unam.mx/Instruments/laruca/laruca intro.htm)
Fig. 2. The light curve of HS 0922+1333 obtained in filter Rc
and folded with the orbital period. The phasing is according to
spectroscopic data.
try in the Rc band that partially includes the strongest cyclotron line. The latter is a dominant contributor to the light curve (Fig. 2), so we can use it to determine the spin period of the WD. The power spectrum calculated from the photometry gives exactly the same result, but the peak is broader because the data lack a longer time base. In Fig. 1 the power spectra of the spectroscopic and photometric data are
presented together.
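For readers wishing to reproduce this kind of analysis, a minimal stand-in for the CLEAN procedure is a Lomb-Scargle periodogram of simulated, unevenly sampled radial velocities with a 5.94131 cycles/day signal injected; CLEAN (Roberts et al. 1987) additionally deconvolves the spectral window, and the data below are synthetic, not the observations of Table 1.

import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(1)
# six "nights" of ~4 h each, two epochs separated by one year
nights = np.concatenate([d + np.sort(rng.uniform(0.0, 0.17, 15))
                         for d in (0.0, 1.0, 2.0, 365.0, 366.0, 367.0)])
f_true = 5.94131                                    # cycles/day
rv = 130.0 * np.sin(2 * np.pi * f_true * nights) + rng.normal(0.0, 25.0, nights.size)

# search near the photometric estimate; +/-1 cycle/day aliases are strong for
# nightly sampling and are what CLEAN is designed to suppress
freq, power = LombScargle(nights, rv).autopower(minimum_frequency=5.5,
                                                maximum_frequency=6.5)
best = freq[np.argmax(power)]
print("best frequency: %.5f c/d  ->  period: %.4f h" % (best, 24.0 / best))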
It is clear that this system is a synchronous magnetic cata-
clysmic variable. The spin period of its white dwarf primary is locked to the orbital period and is not shorter, as suspected earlier.
We explored the cause of confusion. First of all, we corrected all
measured radial velocities using the night sky lines to remove
the trends. This decreased the gap a little between points in the
2002 data in phases 0.0 through 0.2 where they were not overlap-
ping. But even taking errors related to the wavelength calibration
into account, they still do not fold properly (see the open (blue)
square symbols in Fig.3). What is more interesting, however, is
that the amplitude of the radial velocity variation has a much
higher value in the 2002 data than in the 2003 data, as measured with single Gaussians. A careful examination of the 2003 data, with twice the spectral resolution of 2002, reveals that at the
bottom of the Hα emission line there is a weak and broad bump
present in most phases. We de-blended the Hα line from 2003
observations using two Gaussian components in the IRAF splot
procedure. The result is shown on the right side of the left panel
of Fig.3 only (positive phases). The strong, narrow component
basically coincides with the single Gaussian measurements. But
the weak broad component appears to show a much larger am-
plitude and reveals itself mainly between phases 0.3 through 0.9.
This component is clearly identified as the heated matter leaving
the Lagrangian L1 point, the nozzle where the accretion stream
forms. Outflowing matter has intrinsic velocity, so at phase 0.75
when the secondary star reaches maximum velocity toward the
observer, it tilts the weight of the emission line toward larger
velocity. Its phasing appears to be similar to a high-velocity
component (HVC) detected routinely in polars (Schwope et al.
1997, Tovmassian et al. 1999) that originates in the ballistic part
of the stream. In the lower-resolution spectra this component
could not be separated, therefore the radial velocity curve be-
came stretched and deformed. That and the short time coverage
limited to just a little over one orbital period led to the misinter-
pretation of the 2002 spectral data. It is very interesting that we
were able to distinguish the accretion flow onset. So far these ob-
jects have been known to show only the synchrotron humps as evidence of accretion processes taking place in them (Schwope
et al. 2002).
The RV curve derived from sodium lines (see Fig.3) gives the
measure of the rotation of the center of mass of the secondary in
the orbital plane, while the narrow component of the Hα line
originates from the front side of the elliptically distorted sec-
ondary. The ephemerides of HS 0922+1333 from the RV mea-
surements can be described as
T0 = HJD 2452308.336 + 0ᵈ.168313[200] × E,
where T0 corresponds to the −/+ crossing of the RV curve, as follows from the fitting of a sinusoid to the RV measurements of the Hα and sodium lines separately, according to the following equation:
V(t) = γ + K × sin(2π(t − t0)/Porb)
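A minimal sketch of such a fit, with simulated data in place of the measured radial velocities; the injected γ and K roughly follow Table 2, and the sampling is invented.

import numpy as np
from scipy.optimize import curve_fit

P_orb = 0.168313                                  # orbital period, days

def rv_model(t, gamma, K, t0):
    return gamma + K * np.sin(2.0 * np.pi * (t - t0) / P_orb)

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 0.17, 20))
v = rv_model(t, 36.6, 132.0, 0.02) + rng.normal(0.0, 25.0, t.size)

popt, pcov = curve_fit(rv_model, t, v, p0=[0.0, 100.0, 0.0])
gamma, K, t0 = popt
print("gamma = %.1f km/s, K = %.1f km/s, t0 = %.4f d" % (gamma, K, t0))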
Fig. 3. The radial velocity curve of Hα line (left panel) and of Na i doublet (right). The open squares (blue) in the left panel represent
2002 data obtained with lower spectral resolution. The filled squares (red) are measurements of 2003 observations with single
Gaussian fitting. The error bars on the left side of the plots reflect the amplitude of wavelength corrections. The points are placed at
the correct positions after trend removal. The right side of the plot presents measurements of the 2003 data but with double Gaussian
de-blending of the line. The filled (green) circles are from the stronger line component originating at the irradiated secondary, the
open circles correspond to a much weaker component coming from the stream. In the right panel, measurements of the Na I lines
are presented from 2002 observations. The filled square symbols denote RV of λ 8197 Å measured with Gaussian deblending, after
velocity correction with sky lines. The open squares and triangles are measurements of the same doublet with single gaussians
(squares λ 8185Å and triangles λ 8197 Å). The diamonds are measurements of the λ 8185 Å line from 2006 observations. The curve
is a result of sin fit to the combined data. Note that scales of y-axes of panels are different.
Table 2. Radial velocity parameters of HS 0922+1333.
Line γ K Residuals
km/sec km/sec km/sec
Hα 36.6±7 132±12 25.1
Na I 8185Å -81±11 162±17 29.5
Na I 8197Å -65±13 139±20 32.7
The corresponding numbers derived from the fitting are presented in Table 2. Unfortunately, due to the large errors there is no marked difference between the semi-amplitudes of the radial velocities of the Hα and Na I lines. Otherwise, knowing the
spectral type of the secondary, we could deduce the basic pa-
rameters of the binary since that difference reflects the size of
the Roche lobe of the secondary.
The spectrum of the secondary in the absence of an accretion
disk is clearly seen, and in the phases when the magnetic ac-
creting spot that is radiating strong synchrotron emission is self-
eclipsed, one can see the undisturbed secondary spectrum in the near-infrared range. In Fig. 4 the flux-calibrated spectrum of the object obtained at phase 0.5 is presented. Overplotted are stan-
dard spectra of M3 to M5 main sequence stars (Pickles 1998)
normalized to the object. The WD’s contribution has not been
removed. However, at wavelengths above 6500 Å, its contribu-
tion is apparently insignificant and good agreement emerges between the object and the M4 V standard star. This is also consistent with what is expected from the Porb–spectral type relation (Beuermann 2000), although the secondary is an M3.5 star according to RH. The masses of secondaries in systems with pe-
Fig. 4. The spectrum of HS 0922+1333 is presented by the solid line. For comparison, the standard spectra of M3-M5 stars from Pickles (1998) are overplotted.
Fig. 5. The Doppler maps of HS 0922+1333. On the top the to-
mograms of Hα emission line are presented in two panels with
different contrast levels to emphasize the concentration of the
emitting region on the facing side of the secondary on the left
and possibly some trace of mass transfer stream on the right.
The curved lines in the top panels correspond to the stream tra-
jectory, with numbers in the top right panel indicating stream
azimuth. The tomogram corresponding to the Na I line is placed
below in the right corner. The circle-shaped emission around the
center of mass is caused by the presence of the component of the
doublet line. In the bottom left corner, the observed and recon-
structed trailed spectra of Hα line (above) and Na I (below) are
presented.
riods similar to HS 0922+1333 range from 0.35 to 0.42 M⊙, in
those cases where the mass could be estimated precisely. Such
a secondary would follow the empirical mass-period and radius-
period relations from Smith & Dhillon (1998)
M2/M⊙ = 0.126(11) P(h) − 0.11(4)
R2/R⊙ = 0.117(4) P(h) − 0.041(18), (1)
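Evaluated at the measured orbital period, relations (1) give (a simple numerical illustration; the uncertainties of the coefficients are ignored here):

P = 4.0395                        # orbital period, hours
M2 = 0.126 * P - 0.11             # solar masses
R2 = 0.117 * P - 0.041            # solar radii
print("M2 = %.2f Msun, R2 = %.2f Rsun" % (M2, R2))

This yields M2 ≈ 0.40 M⊙, consistent with the 0.35–0.42 M⊙ range quoted above.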
Observations with higher resolution in the near IR will per-
mit investigators to precisely measure the difference between the
RV of Hα originating at the facing side of the secondary and
sodium absorption lines reflecting the motion of the center of
mass. Subsequently, it should allow for estimating the observed
radius of the star to check the possibility that it fills the Roche
lobe. For now, we can only assume that the mass transfer pro-
ceeds in a way similar to other polars, based on the detection of
a high velocity component in the emission line. Its presence can
also be illustrated by constructing Doppler tomograms.
Doppler tomography (Marsh & Horne 1988; Marsh 2001) is
a powerful tool in cases like this, where the origin of line profiles
is bound to the orbital plane and the system has relatively high
inclination. We constructed Doppler maps, or tomograms, using
both the Hα emission line and the Na I λ8197Å absorption line
to prove the accuracy of our estimate of the binary parameters.
The tomograms in Fig.5 show that the Hα line is mostly confined
to the front side of the secondary, while the sodium absorption
fills the entire body of the secondary. However, the difference is
not very obvious. The reason for that appears to be the lower spectral resolution and the smaller number of spectra employed.
4. Conclusions
1. We have determined the 4.0395 hour spectroscopic period of the LARP HS 0922+1333 based on the radial velocity measurements of the Hα emission line originating at the irradiated secondary star. The derived value coincides within measurement errors with the spin period of the system, thus proving that the object is a synchronized polar.
2. The profiles of the Hα emission line in higher-spectral res-
olution observations turned out to be complex. They are
formed basically on the irradiated surface of the secondary
star, but they also show a small contribution from the matter
in close proximity to the L1 point. The matter escaping the
secondary shows RVs with higher velocity and a different
phase.
3. The Doppler tomograms tend to confirm the detection of a stream of transferred matter.
The parameters of the system that we have obtained are in-
teresting in the context of the model proposed by Webbink and
Wickramasinghe (2005). According to it, the LARPs are rela-
tively young and are still approaching their first Roche lobe over-
flow. The accretion is due to the capture of the wind material
from the secondary by the strong magnetic field of the primary.
We think that we see evidence of a faint stream, common to polars, that transfers material through the L1 point, which is usually due to Roche lobe overflow. However, the wind will proba-
bly also cause a flow of matter through the same trajectory, so
it is difficult to say if the observation runs against the model.
The precise measurement of the secondary star size may help to
clarify this.
Acknowledgements. This study was supported partially by grant 25454 from
CONACyT. GT acknowledges the UC-MEXUS fellowship program enabling
him to visit CASS UCSD. The authors are grateful to the anonymous referee for
careful reading of the manuscript and valuable comments. We thank L.Valencic
for help in language related issues.
References
Beuermann, K. 2000, New Astronomy Review, 44, 93
Demircan, O., & Kahraman, G. 1991, Ap&SS, 181, 313
Marsh, T. R., Horne, K. 1988, MNRAS, 235, 269
Marsh, T. R. 2001, ”Doppler Tomography”, Astrotomography, Indirect Imaging
Methods in Observational Astronomy, Edited by H.M.J. Boffin, D. Steeghs
and J. Cuypers, Lecture Notes in Physics, vol. 573, p.1
Norton, A. J., Wynn, G. A., & Somerscales, R. V. 2004, ApJ, 614, 349
Pickles, A. J. 1998, PASP, 110, 863
Roberts, D. H., Lehar, J., Dreher, J. W. 1987, AJ, 93, 968
Reimers, D., & Hagen, H.-J. 2000, A&A, 358, L45
Reimers, D., Hagen, H.-J., & Hopp, U. 1999, A&A, 343, 157
Schwope, A. D., Mantel, K.-H., & Horne, K. 1997, A&A, 319, 894
Schwope, A. D., Brunner, H., Hambaryan, V., & Schwarz, R. 2002, ASP
Conf. Ser. 261: The Physics of Cataclysmic Variables and Related Objects,
261, 102
Smith, D. A., Dhillon, V. S. 1998, MNRAS, 301, 767
Tovmassian, G. H., et al. 1999, ASP Conf. Ser. 157: Annapolis Workshop on
Magnetic Cataclysmic Variables, 157, 133
Tovmassian, G. H., Greiner, J., Zharikov, S. V., Echevarrı́a, J., & Kniazev, A.
2001, A&A, 380, 504
Tovmassian, G., Zharikov, S., Mennickent, R., & Greiner, J. 2004, ASP
Conf. Ser. 315: IAU Colloq. 190: Magnetic Cataclysmic Variables, 315, 15
Warner, B. 1995, Cataclysmic variable stars, Cambridge Astrophysics Series,
Cambridge, New York: Cambridge University Press, 1995
Webbink, R. F., & Wickramasinghe, D. T. 2005, Astronomical Society of the
Pacific Conference Series, 330, 137
|
0704.0962 | The Einstein-Varicak Correspondence on Relativistic Rigid Rotation | The Einstein-Varićak Correspondence on
Relativistic Rigid Rotation∗
Tilman Sauer
Einstein Papers Project
California Institute of Technology 20-7
Pasadena, CA 91125, USA
[email protected]
Abstract
The historical significance of the problem of relativistic rigid rotation
is reviewed in light of recently published correspondence between Einstein
and the mathematician Vladimir Varićak from the years 1909 to 1913.
1 Introduction
The rigidly rotating disk has long been recognized as a crucial ‘missing link’
in our historical reconstruction of Einstein’s recognition of the non-Euclidean
nature of spacetime in his path toward general relativity.1, 2 Relativistic rigid
rotation combines several different but related problems: the issue of a Lorentz-
covariant definition of rigid motion, the number of degrees of freedom of a rigid
body, the reality of length contraction,3 as well as Ehrenfest’s paradox4 and the
introduction of non-Euclidean geometric concepts into the theory of relativity.5
2 Relativistic rigid motion
A relativistic definition of rigid motion was first given by Max Born.6 The
definition was given in the context of a theory of the dynamics of a model of
an extended, rigid electron, and defined a rigid body as one whose infinitesimal
volume elements appear undeformed for any observer that is comoving instanta-
neously with the (center of the) respective volume element. The definition and
its implications were discussed at the 81st meeting of the Gesellschaft Deutscher
Naturforscher und Ärzte in Salzburg in late September 1909.
Gustav Herglotz and Fritz Noether, in papers received by the Annalen der
Physik on 7 and 27 December, respectively, further elaborated on the mathe-
matical consequences of Born’s definition.7 Herglotz, in particular, reformulated
∗To appear in: Proceedings of the Eleventh Marcel Grossmann Meeting on General Rela-
tivity, ed. H. Kleinert, R.T. Jantzen and R. Ruffini, World Scientific, Singapore, 2007.
the definition in more geometric terms: A continuum performs rigid motion if
the world lines of all its points are equidistant curves. The analysis showed that
Born’s infinitesimal condition of rigidity can only be extended to the motion of
a finite continuum in special cases. It implied that a rigid body has only three
degrees of freedom. The motion of one of its points fully determines its motion.
Translation and uniform rotation are special cases. In particular, the definition
does not allow for acceleration of a rigid disk from rest to a state of uniform
rotation with finite angular velocity.
In view of these consequences, various other definitions of a rigid body were
suggested, e.g. by Born and Noether,7, 8 until it became clear that special rel-
ativity does not allow for the usual concept of a rigid body. In other words, a
relativistic rigid body necessarily has an infinite number of degrees of freedom.9
On 22 November 1909, a short note appeared by Paul Ehrenfest pointing
to a paradox that follows from Born’s relativistic definition of rigid motion of
a continuum.10 He considered a rigid cylinder rotating around its axis and
contended that its radius would have to meet two contradictory requirements.
The periphery must be Lorentz-contracted, while its diameter would show no
Lorentz contraction. The difficulty became known as the “Ehrenfest paradox.”
In a polemic exchange with von Ignatowsky,11 Ehrenfest devised the following
thought experiment to illustrate the difficulty. He imagined the rotating disk
to be equipped with markers along the diameter and the periphery. If their
positions were marked onto tracing paper in the rest frame at a fixed instant,
with the disk both at rest and in uniform rotation, the two images should show
the same radius but different circumferences.
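In modern notation (not that of the original 1909–1911 papers), the two incompatible requirements can be summarized as
\[
C = 2\pi R \quad \text{(Euclidean geometry of the disk, radius unchanged)},
\qquad
C' = 2\pi R\,\sqrt{1-\frac{\omega^{2}R^{2}}{c^{2}}} \quad \text{(Lorentz-contracted periphery)},
\]
which agree only in the limit \(\omega R \to 0\); the two tracing-paper images would therefore differ along the periphery but not along the diameter.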
3 The Einstein-Varićak correspondence
Immediately after the 1909 Salzburg meeting, Einstein wrote to Arnold Som-
merfeld that “the treatment of the uniformly rotating rigid body seems to me of
great importance because of an extension of the relativity principle to uniformly
rotating systems.”12 This was a necessary step for Einstein following the heuris-
tics of his equivalence hypothesis, but only in spring 1912, a few weeks before
he made the crucial transition from a scalar to a tensorial theory of gravitation
based on a general spacetime metric,5 do we find another hint at the problem
in his writings.1, 2
The Collected Papers of Albert Einstein recently published13 nine letters by
Einstein to Vladimir Varićak (1865–1942), professor of mathematics at Agram
(now Zagreb, Croatia). Varićak had published on non-Euclidean geometry14 and
is known for representing special relativistic relations in terms of real hyperbolic
geometry.15, 16 The correspondence seems to have been initiated by Varićak ask-
ing for offprints of Einstein’s papers. In his response, Einstein added a personal
tone to it with his wife Mileva Marić, a native Hungarian Serb, writing the
address in Cyrillic script in order to raise Varićak’s curiosity. After exchanging
publications, Varićak soon commented on Einstein’s (now) famous 1905 special
relativity paper, pointing to misprints but also raising doubts about his treat-
ment of reflection of light rays off moving mirrors. These were rebutted by
Einstein in a response of 28 February 1910 in which he also, with reference to
Ehrenfest’s paradox, referred to the rigidly rotating disk as the “most interesting
problem” that the theory of relativity would presently have to offer. In his next
two letters, dated 5 and 11 April 1910 respectively, Einstein argued against the
existence of rigid bodies invoking the impossibility of superluminal signalling,
and also discussed the rigidly rotating disk. A resolution of Ehrenfest’s paradox,
suggested by Varićak, in terms of a distortion of the radial lines so as to preserve
the ratio of π with the Lorentz contracted circumference, was called interesting
but not viable. The radial and tangential lines would not be orthogonal in spite
of the fact that an inertial observer comoving with a circumferential point would
only see a pure rotation of the disk’s neighborhood.
About a year later, Einstein and Varićak corresponded once more. Varićak
had contributed to the polemic between Ehrenfest and von Ignatowsky by sug-
gesting a distinction between ‘real’ and ‘apparent’ length contraction. The real-
ity of relativistic length contraction was discussed in terms of Ehrenfest’s tracing
paper experiment, but for linear relative motion. According to Varićak, the ex-
periment would show that the contraction is only a psychological effect whereas
Einstein argued that the effect will be observable in the distance of the recorded
marker positions. When Varićak published his note, Einstein responded with a
brief rebuttal.17
Despite their differences in opinion, the relationship remained friendly. In
1913, Einstein and his wife thanked Varićak for sending them a gift, commented
favorably on his son who stayed in Zurich at the time, and Einstein announced
sending a copy of his recent work on a relativistic theory of gravitation. The
Einstein-Varićak correspondence thus gives us additional insights into a signifi-
cant debate. It shows Einstein’s awareness of the intricacies of relativistic rigid
rotation and bears testimony to the broader context of the conceptual clarifica-
tions in the establishment of the special and the genesis of the general theory
of relativity.
References
[1] J. Stachel, Einstein and the Rigidly Rotating Disk, in General Relativity
and Gravitation: One Hundred Years after the Birth of Albert Einstein.
Vol. 1, ed. A. Held (Plenum, 1980), 1–15; see also “The First Two Acts,”
in J. Stachel. Einstein from ‘B’ to ‘Z’ (Birkhäuser, 2002), 261–292.
[2] G. Maltese and L. Orlando. Stud. Hist. Phil. Mod. Phys. 26, 263 (1995).
[3] M. Klein et al. (ed.) The Collected Papers of Albert Einstein. Vol. 3. The
Swiss Years: Writings, 1909–1911. (Princeton University Press, 1993),
478–480.
[4] M. Klein. Paul Ehrenfest: The Making of a Theoretical Physicist. (North-
Holland, 1970), 152–154.
[5] M. Janssen, J. Norton, J. Renn, T. Sauer, J. Stachel. The Genesis of Gen-
eral Relativity: Einstein’s Zürich Notebook. Vol. 1. Introduction and Source.
Vol. 2. Commentary and Essays. (Springer, 2007).
[6] M. Born. Ann. Phys. 30, 1 (1909); Phys. Zs. 10, 814 (1909).
[7] G. Herglotz, Ann. Phys. 31, 393 (1910); F. Noether, Ann Phys. 31, 919
(1910).
[8] M. Born, Nachr. Königl. Ges. d. Wiss. (Göttingen) 161 (1910).
[9] A. Einstein, Jahrb. Radioaktiv. Elektr. 4, 411 (1907); M. Laue, Phys. Zs.
12, 85 (1911).
[10] P. Ehrenfest, Phys. Zs. 10, 918 (1909).
[11] P. Ehrenfest, Phys. Zs. 11, 1127 (1910); 12, 412 (1911); W.v.Ignatowsky,
Ann. Phys. 33, 607 (1910); Phys. Zs. 12, 164, 606 (1911).
[12] M. Klein et al. (ed.) The Collected Papers of Albert Einstein. Vol. 5. The
Swiss Years: Correspondence, 1902–1914. (Princeton University Press,
1993).
[13] D. Buchwald et al. (ed.) The Collected Papers of Albert Einstein. Vol. 10.
The Berlin Years: Correspondence, May–December 1920 and Supplemen-
tary Correspondence, 1909–1920. (Princeton University Press, 2006).
[14] V. Varićak. Jahresber. dt. Math. Ver. 17, 70 (1908); Atti del Cong. inter-
nat. del Mat. 2, 213 (1909).
[15] V. Varićak. Phys. Zs. 11, 93, 287, 586 (1910); Jahresber. dt. Math. Ver.
21, 103 (1912).
[16] S. Walter. The Non-Euclidean Style of Minkowskian Relativity, in The
Symbolic Universe. ed. J. Gray (Oxford University Press, 1999), 91–127.
[17] V. Varićak, Phys. Zs. 12, 169 (1911); A. Einstein. Phys. Zs. 12, 509 (1911).
|
0704.0963 | Nova Geminorum 1912 and the Origin of the Idea of Gravitational Lensing | Nova Geminorum 1912 and the Origin of the
Idea of Gravitational Lensing
Tilman Sauer
Einstein Papers Project
California Institute of Technology 20-7
Pasadena, CA 91125, USA
[email protected]
Abstract
Einstein’s early calculations of gravitational lensing, contained in
a scratch notebook and dated to the spring of 1912, are reexamined.
A hitherto unknown letter by Einstein suggests that he entertained
the idea of explaining the phenomenon of new stars by gravitational
lensing in the fall of 1915 much more seriously than was previously
assumed. A reexamination of the relevant calculations by Einstein
shows that, indeed, at least some of them most likely date from early
October 1915. But in support of earlier historical interpretation of
Einstein’s notes, it is argued that the appearance of Nova Geminorum
1912 (DN Gem) in March 1912 may, in fact, provide a relevant context
and motivation for Einstein’s lensing calculations on the occasion of
his first meeting with Erwin Freundlich during a visit in Berlin in April
1912. We also comment on the significance of Einstein’s consideration
of gravitational lensing in the fall of 1915 for the reconstruction of
Einstein’s final steps in his path towards general relativity.
Introduction
Several years ago, it was discovered that Einstein had investigated the idea
of geometric stellar lensing more than twenty years before the publication
of his seminal note on the subject.1 The analysis of a scratch notebook2
showed that he had derived equations in notes dated to the year 1912 that
are equivalent to those that he would only publish in 1936.3 In the notes
and in the paper, Einstein derived the basic lensing equation for a point-like
light source and a point-like gravitating mass. From the lensing equation it
follows readily that a terrestrial observer will see a double image of a lensed
star or, in the case of perfect alignment, a so-called “Einstein ring.” Einstein
also derived an expression for the apparent magnification of the light source
as seen by a terrestrial observer. The dating for the notes was based on other
entries in the notebook. Some of these entries are related to a visit by Einstein
in Berlin April 15-22, 1912, and it was conjectured that the occasion for the
lensing entries was his meeting with the Berlin astronomer Erwin Freundlich
during this week.
The lensing idea lay dormant with Einstein until in 1936 he was prodded
by the amateur scientist Rudi W. Mandl into publishing his short note in
Science. In the meantime, the idea surfaced occasionally in publications by
other authors, such as Oliver Lodge (1919), Arthur Eddington (1920), and
Orest Chwolson (1924).4 We only have one other piece of evidence that
Einstein thought about the problem between 1912 and 1936. In a letter to
his friend Heinrich Zangger, dated 8 or 15 October 1915, Einstein remarked
that he had now convinced himself that the “new stars” had nothing to do
with the lensing effect, and that with respect to the stellar populations in
the sky the phenomenon would be far too rare to be observable.5
The Albert Einstein Archives in Jerusalem recently acquired a hitherto
unknown letter by Einstein that both corroborates some of the historical
conjectures of the early history of the lensing idea and also adds significant
new insight into the context of Einstein’s early considerations. From this
letter it appears that the phenomenon of “new stars,” i.e. the observation
of this type of cataclysmic variables, played a much more prominent role in
the origin of the idea than was suggested by the side remark in Einstein’s
letter to Zangger. It also adds important new information about Einstein’s
thinking in the crucial period between losing faith in the precursor theory to
1[Renn, Sauer, and Stachel 1997] and [Renn and Sauer 2003].
2Albert Einstein Archives (AEA), call number 3-013, published as [CPAE3, Appendix
A]. A facsimile is available on Einstein Archives Online at http://www.alberteinstein.info.
3[Einstein 1936].
4[Lodge 1919], [Eddington 1920, pp. 133–135], [Chwolson 1924].
5Einstein to Heinrich Zangger, 8 or 15 October 1915 [CPAE8, Doc. 130].
the general theory of relativity entertained in the years 1913–1915, and the
breakthrough to a general relativistic theory of gravitation in the fall of 1915.6
In fact, the new letter justifies a reexamination of our reconstruction of what
we know about Einstein’s intellectual preoccupations both in April 1912 and
in October 1915, and more generally about the genesis of the concept of
gravitational lensing.
1 Einstein’s letter to Emil Budde
The new letter is a response to Emil Arnold Budde (1842–1921), dated 22
May 1916.7 Budde had been director of the Charlottenburg works of the
company of Siemens & Halske from 1893 until 1911.8 He was the author
of a number of scientific publications, among them a monograph on ten-
sors in three-dimensional space [Budde 1914a]9 and of a critical comment on
relativity published in 1914 in the Verhandlungen of the German Physical
Society.10
In an unknown letter to Einstein, Budde apparently had written about the
possibility of observing what are now called Einstein rings, i.e. ring shaped
images of a distant star that is in perfect alignment with a lensing star and
a terrestrial observer. The subject matter of Budde’s initial letter can be in-
ferred from Einstein’s response in which he pointed out that one would expect
the phenomenon to be extraordinarily rare, and that it could not be detected
on photographic plates “as little circles” since irradiation would diffuse the
images that would hence only appear as bright little discs, indistinguishable
from the image of a regular star.
6For historical discussion, see [Norton 1984], [Janssen et al. 2007], and further refer-
ences cited therein.
7AEA 123-079. The letter will be published in the forthcoming volume of the Collected
Papers of Albert Einstein.
8Budde had studied catholic theology and science, and had worked as a secondary
school teacher and as a correspondent for the German daily Kölnische Zeitung in Paris,
Rome, and Constantinople. In 1887, he became a Privatgelehrter in Berlin, edited the
journal Fortschritte der Physik, and entered the company Siemens & Halske as a physicist
in 1892. In 1911, he retired and moved to Feldafing, near Lake Starnberg, since he had been
advised by his physicians to live at an altitude of at least 600m [Laue 1921, Werner 1921].
9In [Norton 1992, pp. 309–310] this textbook is cited as evidence for the argument that
Grossmann’s generalization of the term ‘tensor’ in [Einstein and Grossmann 1913] was an
original development.
10[Budde 1914b], [Budde 1914c].
The interesting part of Einstein’s response follows after this negative com-
ment. Einstein continued to relate that he himself had put his hopes on a
different aspect, namely that “due to the lensing effect” the distant star
would appear with an “immensely increased intensity,” and that he initially
had thought that this would provide an explanation of the “new stars.” He
went on to list three reasons why he had given up this hope after more
careful consideration. First, the temporal development of the intensity of a
nova is asymmetric. The luminosity increases much faster than it declines
again. Second, the color of the novae usually changes towards the red and,
in general, its spectral character changes in a distinct and characteristic way.
Third, the phenomenon would be very unlikely for the same reasons that the
observation of an Einstein ring would be unlikely.
In the beginning of his letter, Einstein pointed out that Budde’s idea con-
cerned the same thing that “about half a year ago” (“vor etwa einem halben
Jahre”) had put him into “joyous excitement” (“freudige Aufregung”). At
the end of the letter, he again wrote that the joy had been “just as short as
it had been great.” Counting back six months from the date of Einstein’s
letter, 22 May 1916, takes us to the 22nd of November 1915, which is just
the time of the final formulation of general relativity. It is also just another
six weeks or so away from the date of his letter to Zangger of early October,
in which he wrote about the very same subject of the possible explanation
of novae as a phenomenon of gravitational lensing.
2 The lensing calculations in the scratch notebook
In light of this new letter, let us briefly reexamine the calculations in the
Scratch Notebook that had been dated to April 1912.11 Stellar gravitational
lensing is an implicit consequence of a law of the deflection of light rays in
a gravitational field. Such a law had been obtained by Einstein in 1911 as
a direct consequence of the equivalence hypothesis. The angle of deflection
11The following brief recapitulation refers to [CPAE3, 585–586], or
http://www.alberteinstein.info/db/ViewImage.do?DocumentID=34432&Page=23 and
· · ·&Page=26. For a complete and detailed paraphrase of Einstein’s notes, see the
Appendix below.
Figure 1: The geometric constellation for stellar gravitational lensing as
sketched in Einstein’s Scratch Notebook. From [CPAE3, p. 585].
α̃12 was found to be
α̃ = 2kM/(c²∆) ,
where k is the gravitational constant, M the mass of the lensing star, c
the speed of light, and ∆ the distance of closest approach of the light ray
measured from the center of the massive star.13 On [p. 43] of the Scratch
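For orientation, the size of this deflection is easy to check numerically; the short sketch below uses modern round values of the constants (they are not given in the text) for a ray grazing the solar limb, i.e. with ∆ set to the solar radius.

# Size of the 1911 deflection angle, alpha~ = 2kM/(c^2 * Delta), for a ray
# grazing the solar limb. Constants are modern round values, not from the text;
# Einstein's own 1911 figure was 0.83", and the full general-relativistic
# value (see footnote 13) is twice as large.
k = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
M = 1.989e30           # solar mass [kg]
c = 2.998e8            # speed of light [m/s]
Delta = 6.96e8         # solar radius as distance of closest approach [m]

alpha_tilde = 2 * k * M / (c**2 * Delta)   # in radians
print(alpha_tilde)                         # ~4.2e-6 rad
print(alpha_tilde * 206265)                # ~0.87 arcseconds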
Notebook we find the sketch shown in Fig. (1) and underneath it the lensing
equation
r = ρ (R + R′)/R − (α/ρ) R′ ,
where R denotes the distance between the light emitting distant star and
the massive star that is acting as a lens, R′ the distance between the lensing
star and the position of a terrestrial observer who is located a distance r away
from the line connecting light source and lensing star. ρ is the distance of
closest approach of a light ray emitted by the star and seen by the observer.
α = 2kM/c² is a typical length (later known as the Schwarzschild radius)
that depends on the mass of the light deflecting star and that determines
12I am using the notation α̃ instead of α (as in [Einstein 1911]) in order to distinguish
this angle from the quantity α (effectively the Schwarzschild radius) in Einstein’s scratch
notebook.
13[Einstein 1911, p. 908]. Qualitatively, Einstein had already derived the consequence of
light bending in a gravitational field when he first formulated his equivalence hypothesis
[Einstein 1907, p. 461]. In the final theory of general relativity, the same relation is
obtained with an additional factor of 2, as observed explicitly in [Einstein 1915c, p. 834].
Incidentally, the relevant formula was printed incorrectly by a factor of 2 in (the first
printing of) Einstein’s 1916 review paper of general relativity [Einstein 1916, p. 822], see
[CPAE6, Doc. 30, n. 36] and also Einstein’s response to Carl Runge, 8 November 1920
[CPAE10, Doc. 195].
the angle of deflection to be α/∆. The lensing equation can be written in
dimensionless variables as
r0 = ρ0 − 1/ρ0 , (1)
after defining r0 and ρ0 as
r0 = r √( R / (R′(R + R′)α) ) ,   ρ0 = ρ √( (R + R′) / (R R′ α) ) . (2)
The fact that equation (1) is a quadratic equation for ρ0 entails that there are
two solutions which correspond to two light rays that can reach an observer,
along either side of the lensing star,14 and hence that a terrestrial observer
will see a double image of the distant star. For perfect alignment, the double
image will turn into a ring shaped image, an “Einstein-ring” whose diameter
also follows immediately from the lensing equation: for r0 = 0 one finds ρ0 = 1.
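To make this concrete with a worked case that is not in the original notes (the numbers are merely illustrative): for a source offset of r0 = 1 the two roots of ρ0² − r0 ρ0 − 1 = 0 are ρ0 = (1 ± √5)/2 ≈ 1.62 and −0.62, i.e. one image just outside and one just inside the ring ρ0 = 1, on opposite sides of the lens; for r0 = 0 both roots lie on the ring, ρ0 = ±1.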
In light of Einstein’s letters to Zangger and Budde, it is interesting that
Einstein went on to compute also the apparent magnification, obtaining the
following expression:
Htot = H [ 1/(1 − 1/ρ1⁴) − 1/(1 − 1/ρ2⁴) ] . (3)
Here Htot is the total intensity received by the observer, and H the intensity
of the star light at distance R. ρ1,2 denote the two roots of the quadratic
equation (1). The term in brackets gives the relative brightness, reducing
to 1 if no lensing takes place. Finally, some order of magnitude calculations
on these pages showed that the probability of observing this effect would be
given by the probability of having two stars within a solid angle that would
cover 10⁻¹⁵ of the sky, which is highly improbable given that the number of
known stars at the time was of the order of 10⁶.15
Equations that are entirely equivalent to these were published much later,
in 1936, in Einstein’s note to Science.16
14Since only three points are given, the problem is intrinsically a planar one, as long as
the three points are not in perfect alignment.
15See the discussion in the appendix.
16[Renn, Sauer, and Stachel 1997].
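The behaviour of the bracket in equation (3) can be checked with a short numerical sketch of my own (it is not part of the paper or of Einstein's notes): the two image brightnesses are summed for a few values of r0, showing that the total tends to 1 for a poorly aligned source and grows roughly like 1/r0 as the alignment becomes perfect.

# Relative brightness of eq. (3): the two roots rho1, rho2 of eq. (1),
# rho0^2 - r0*rho0 - 1 = 0, each contribute 1/(1 - rho^-4); the signs are
# arranged so that both contributions are positive.
import math

def relative_brightness(r0):
    d = math.sqrt(r0**2 + 4)
    rho1 = (r0 + d) / 2          # image outside the Einstein ring
    rho2 = (r0 - d) / 2          # image inside the ring (opposite side of the lens)
    return 1 / (1 - rho1**-4) - 1 / (1 - rho2**-4)

for r0 in (0.1, 0.5, 1.0, 5.0):
    print(r0, relative_brightness(r0))   # ~10.0, ~2.2, ~1.34, ~1.0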
The dating of the lensing notes in the scratch notebook to Einstein’s visit
in Berlin in April 1912 was based on other evidence in the notebook. Most
importantly, p. [36] lists Einstein’s appointments during his Berlin visit. In
addition, pp. [38] and [39] recapitulate very specifically the equations of Ein-
stein’s two papers on the theory of the static gravitational field of February
and March 1912, respectively.17 The calculations that deal with the lensing
problem then appear on pp. [43]-[48], and on pp. [51] and [52] of the note-
book. The sheet containing pp. [44] and [45] is a loose sheet inserted between
p. [43] and p. [46]. After p. [53], three pages have been torn out, and then
follow 37 blank pages, with some pages torn out in between. The remainder
of the notebook contains entries that begin at the other end of the notebook
which was turned upside down. Except for some apparently unrelated and
undated entries on pp. [49], [50],18 and [54], the lensing calculations hence
are at the end of a more or less continuous flow of entries. These physical
characteristics of the notebook lead to an important consequence. All infor-
mation that was pointing to a date of the lensing calculations in the year
1912 preceded the actual lensing calculations. Reexamining pp. [51] and [52]
of the notebook in light of the letters to Zangger and to Budde in fact reveals
that at least these entries were not written in 1912, but rather most likely
at the time of the letter to Zangger, in early October 1915. There are two
reasons for this. First, at the top of p. [51], Einstein wrote down the title
of a book published only in 1914.19 Therefore, the following calculations are
almost certainly to be dated later than the publication of this book. Second,
at the bottom of p. [52], Einstein explicitly refers to the “apparent diameter
of a Nova st[ar].” The calculations on pp. [51] and [52] in fact are a calcula-
tion of the apparent brightness and diameter of a star. We conclude that, in
all probability, the calculations on pp. [51] and [52] were written at the time
of Einstein’s letter to Zangger, early October 1915.
Does the dating of pp. [51] and [52] to October 1915 also compel us to
17[Einstein 1912a, Einstein 1912b].
18On the bottom half of p. [49] there is a sketch of Pascal’s and Brianchon’s Theorems,
which deal with hexagons inscribed in or circumscribed on a conical section. I wish to
thank Jesper Lützen for this identification. Other entries on pp. [49] and [50] also appear
to deal with problems from projective geometry. There is also a sketch of a vessel filled
with a liquid and the words “eau glyceriné” and what appears to be a sketch of a magnetic
moment in a sinusoidal magnetic field.
19[Fernau 1914]. Could it be that the book was mentioned to Einstein when he met
with Romain Rolland in Geneva in September 1915, see [CPAE8, Doc. 118]?
revise our dating of the other lensing calculations in the notebook? To answer
this question, we need to consider the broader historical context of the notes.
But before doing so, we first observe that pp. [49] and [50] contain entries
that appear unrelated to the lensing problem. As shown by the detailed
paraphrase given in the appendix, the calculations on pp. [43] to [48] on
the other hand represent a coherent train of thought, as do the calculations
of pp. [51] and [52]. We also note that Einstein used a slightly different
notation on pp. [43]ff. and on pp. [51]-[52]. In the first set, he denoted the
distances between light source and lens and between lens and observer as
R and R′, respectively. On pp. [51]-[52] he used the notation R1 and R2,
respectively. He also reversed the roles of r and ρ. We conclude that there
is a discontinuity between the first set of lensing calculations on pp. [43] to
[48] and the second set on pp. [51] and p. [52].
3 The context of Einstein’s early lensing cal-
culations
From Einstein’s letter to Budde we learn that he had investigated the idea
that stellar lensing might explain the phenomenon of the “new stars,” and
that he had given up this idea after looking more closely into the character-
istic features of novae, especially their light curves and the changes in their
spectral characteristics. Let us therefore briefly look into the astronomical
knowledge about novae at the time.
The observation of a new star is an event that, in the early twentieth
century, occurred only every few years. Between 1900 and 1915, eight novae
were observed:20 Nova Persei 1901 (GK Per), Nova Geminorum (1) 1903 (DM
Gem), Nova Aquilae 1905 (V604 Aql), Nova Vela 1905 (CN Vel), Nova Arae
1910 (OY Ara), Nova Lacertae 1910 (DI Lac), Nova Sagittarii 1910 (V999
Sgr), and Nova Geminorum (2) 1912 (DN Gem) with maximum brightness
of 0.2, 4.8, 8.2, 10.2, 6.0, 4.6, 8.0, 3.5 magnitudes, respectively. At the time,
“the two most interesting Novae of the present century,” [Campbell 1914,
p. 493], were Nova Persei of 1901 and Nova Geminorum of 1912. The next
spectacular nova to occur was the very bright Nova Aquilae 1918 (V603 Aql)
with a maximum brightness of −1.1 mag.
Nova Geminorum (2) was discovered on March 12, 1912, by the as-
20For the following, see [Duerbeck 1987].
Figure 2: The light curve of Nova Geminorum 1912 for the first three
months after its appearance, as put together by Fischer-Petersen on the
basis of 253 individual observations. The points are the magnitudes re-
ported by the individual observers, the solid line is to guide the eye. From
[Fischer-Petersen 1912, p.429].
tronomer Sigurd Enebo at Dombaas, Norway [Pickering 1912]. On a pho-
tographic plate taken at Harvard College Observatory on March 10, showing
stars of magnitude 10.5, it was not visible, but it was visible as a magni-
tude 5 star in the constellation Gemini on a Harvard plate of March 11. On
March 13, a cablegram was received at Harvard and distributed throughout
the United States. In the following days all major observatories as well as
many amateur astronomers pointed their instruments towards the new star.
The maximum brightness of mag 3.5 was reached on March 14 (Einstein’s
33rd birthday!) [Fischer-Petersen 1912]. By March 16, the brightness was
down to a magnitude of 5.5 and in the following weeks it decreased further,
with distinct oscillations. By mid-April 1912, most observers registered a
brightness of mag 6–7, see Fig. (2). We now know that DN Gem is a
fast nova with a t3-time of 37d. Its light curve is type Bb in the classification
of [Duerbeck 1987], i.e. it declines with major fluctuations.
Like all classical novae, Nova Geminorum is, in fact, a binary system of
a white dwarf and main sequence star, where hydrogen-rich matter is being
accreted onto the white dwarf. Recent observations have even determined the
binary period [Retter et al. 1999]. The eruption of a classical nova occurs
when a hydrogen-rich envelope of the white dwarf suffers a thermonuclear
runaway.21 This explanation of classical novae also entails that they display
the same sequence of spectral behaviour as the luminosity decreases, see also
Fig. (3) below. However, our current understanding of classical novae was
suggested only in the fifties.22
The temporal proximity of the appearance of Nova Geminorum 1912 with
Einstein’s Berlin visit during the week of April 15–22, suggests that this
astronomical event was discussed also when Einstein met with Freundlich
for the first time.23 We know that the observatory in Potsdam took a
number of photographs of the new star between March 15 and April 12
[Furuhjelm 1912, Ludendorff 1912], and that Freundlich, among others, was
charged with photometric observations of the nova [Fischer-Petersen 1912,
p. 429]. Einstein and Freundlich had earlier corresponded about the possibi-
lity of observing gravitational light deflection through the gravitational field
of the sun.24 The purpose of their meeting was to discuss possible astro-
nomical tests of Einstein’s emerging relativistic theory of gravitation. The
recent observation of the brightest nova since 1901 must have been on Fre-
undlich’s mind, and it seems more than likely that the idea of explaining
the phenomenon in terms of gravitational lensing therefore came up in the
course of their conversation. We conclude that our earlier dating of the first
set of calculations of the lensing problem in the Scratch Notebook to the
time of Einstein’s encounter with Freundlich in April 1912 is the most likely
possibility.
In fact, the context of the observation of Nova Geminorum 1912 provides
an answer to the question as to why Einstein would have done the calculations
at all and, in particular, why he would not have been content at the time
with a calculation of the lensing equation, the separation of the double star
image and, perhaps, the radius of the Einstein ring. Without this context
it might seem a rather ingenious move on Einstein’s part to go ahead and
immediately compute the apparent magnification of the lensed star as well.
But this answer to the question of motivation for the specific details of the
21For a review, see [Shara 1989].
22For a historical overview of previous theories, see [Duerbeck 2007].
23For evidence that Einstein met with Freundlich, see his letter to Michele Besso, 26
March 1912, in which he mentions planned discussions (“Besprechungen”) with Nernst,
Planck, Rubens, Warburg, Haber, and “an astronomer”—presumably Freundlich [CPAE5,
Doc. 377].
24Einstein to Freundlich, 1 September 1911, 21 September 1911, and 8 January 1912
[CPAE5, Docs. 281, 287, 336].
Figure 3: Changes in the spectrum of Nova Geminorum 1912, March 22 to
August 19, 1912. From [Adams and Kohlschütter 1912].
calculations in the Scratch Notebook immediately raises another question.
Assuming that the first set of lensing calculations were done in spring
1912, why do we have no evidence that this idea was followed up by either
Einstein or by Freundlich until the fall of 1915? To answer this question,
it should first be observed that no summarizing results and analyses of the
observations of Nova Geminorum 1912 were published before the end of the
summer.
Let us briefly recall Einstein’s intellectual preoccupations after his visit
to Berlin in April 1912.25 Shortly before his trip to Berlin he had submitted
his two papers on a theory of the static gravitational field.26 After his return
to Prague in April 1912, Einstein was preparing for his move to Zurich. The
two papers were published in the 23 May issue of the Annalen der Physik.
Einstein wrote an addendum at proof stage to the second one, in which
he showed that the equations of motion could be written in a variational
form, adding that this would give us “an idea about how the equations of
motion of the material point in a dynamic gravitational field are constructed”
[Einstein 1912b, p. 458]. He also entered into a published dispute with Max
Abraham on their respective theories of gravitation.27 At the end of July,
he departed Prague for Zurich. The next thing we know about his work on
gravitation comes from a letter to Ludwig Hopf, dated 16 August 1912, in
which he wrote:
The work on gravitation is going splendidly. Unless I am com-
pletely wrong, I have now found the most general equations.28
These most general equations are, in all probability, equations of motion
in a gravitational field, represented by a metric tensor. After his arrival
in Zurich, Einstein began a collaboration with his former classmate Marcel
Grossmann, now his colleague at the ETH. Their research on a generalized
25We will focus here on his work on gravitation, yet for the sake of completeness it should
be noted that Einstein at the same time was also thinking about quantum theory, most
notably about the law of photochemical equivalence and about the problem of zero point
energy, see [CPAE4, Docs. 5, 6, 11, 12].
26[Einstein 1912a], [Einstein 1912b], were received by the Annalen der Physik on 26
February and 23 March, respectively.
27[Einstein 1912c] which was received by the Annalen on 4 July 1912 is a response to a
critique by Abraham.
28Einstein to Hopf, 16 August 1912 [CPAE5, Doc. 416].
theory of relativity is documented in Einstein’s so-called “Zurich Notebook”29
and culminates in the publication of the “Outline [Entwurf] of a generalized
theory of relativity and a theory of gravitation,” in early summer of 1913 co-
authored with Marcel Grossmann.30 This so-called Entwurf-theory contains
all the elements of the final theory of general relativity, except for generally
relativistic field equations. Einstein would hold onto this theory until his
final breakthrough to general relativity in the fall of 1915.
In conclusion, we observe that Einstein’s path toward the general theory
of relativity in 1912 took him deep into the unknown land of the mathematics
associated with the metric tensor, before there was a chance to reconsider
the lensing idea in light of the data for Nova Geminorum 1912. In any case,
he would have to rely on Freundlich or other professional astronomers for a
secure assessment of the possibilities of an observation of the lensing effect
at the time.
Freundlich, on the other hand, continued to think about ways to test
Einstein’s new theory of gravitation.31 But his focus was on observations
of light deflection during a solar eclipse.32 In August 1914, he led a first
(unsuccessful) expedition to the Crimea to observe the eclipse of 21 August
1914. Even these efforts were hampered by the lack of funding and, more
generally, by the difficulties of securing increased research time that would
have allowed Freundlich to freely pursue his collaboration with Einstein.
Given these circumstances, and the fact that order-of-magnitude calcu-
lations may have convinced Einstein already in 1912 that the phenomenon
would be rare, it seems plausible that the lensing idea was not pursued further
for some time after Einstein’s visit in Berlin in April 1912.
Let us finally reexamine the events of fall 1915. Einstein, in the meantime
had left Zurich in the spring of 1914, accepting an appointment as member
of the Prussian Academy in Berlin. In September 1915, Einstein spent a few
weeks in Switzerland where he met, among others, with Heinrich Zangger,
Michele Besso, and Romain Rolland. On 22 September 1915, he left Zurich33
but travelled via Eisenach where he was on the 24th of September.34 By the
29AEA 3-006, see [CPAE4, Doc. 10]. For a comprehensive discussion of this document,
including a facsimile, transcription, and detailed paraphrase, see [Janssen et al. 2007].
30[Einstein and Grossmann 1913].
31See [Hentschel 1994] and [Hentschel 1997].
32See his correspondence with Einstein in [CPAE5].
33[CPAE8, p. 998].
34[CPAE10, Doc. Vol. 8, 122a].
30th of September, at the latest, he was back in Berlin, and wrote a letter
to Freundlich:
I am writing you now about a scientific matter that electrifies me
enormously.35
It is clear from the letter, however, that the excitement indicated to Fre-
undlich is not about the idea of gravitational lensing. Rather, Einstein
had found an internal contradiction in his Entwurf theory that amounted
to the realization that Minkowski space-time in rotating Cartesian coordi-
nates would not be a solution of the Entwurf field equations.36 This insight
undermined his confidence in the validity of the Entwurf theory, and is later
mentioned as one of three arguments that induced Einstein to lose faith in
the Entwurf equations.37 The first of these arguments was the fact that a cal-
culation of the planetary perihelion advance in the framework of the Entwurf
theory did not produce the well-known anomaly that had been established
for Mercury. This problem had been known to Einstein for some time.38
The third argument was realized sometime in early October, a few days after
stumbling upon the problem with rotation, and concerned the mathematical
derivation of the Entwurf field equations in Einstein’s comprehensive review
of October 1914.39 In any case, we know that Einstein asked Freundlich to
look into the problem of the rotating metric, and that they met some time in
early October. This follows from a letter Einstein wrote to Otto Naumann,
35Einstein to Freundlich, 30 September 1915 [CPAE8, Doc. 123]. For a detailed discus-
sion of this letter and its significance for the reconstruction of Einstein’s final breakthrough
to general relativity, see [Janssen 1999].
36Interestingly, the Scratch Notebook contains an entry that is pertinent to this problem.
On p. [66], i.e. on the last page of the backward end of the notebook, Einstein considers the
case of rotation in a calculation that exactly matches corresponding calculations dating
from October 1915, see [Janssen 1999]. Janssen cautiously remarks that he believes this
calculation to date from 1913 [Janssen 1999, p. 139]. It seems possible, however, that these
entries as well as the immediately preceding ones on the perihelion advance (see note 38)
may well date from late 1915 as well.
37See Einstein to Arnold Sommerfeld, 28 November 1915, and to Hendrik A. Lorentz, 1
January 1916 [CPAE8, Docs. 153, 177].
38See [Earman and Janssen 1993] and [CPAE4, pp. 344–359]. The Scratch Notebook
contains some calculations related to the perihelion advance on pp. [61–66], i.e. in the
backward end of the notebook. On p. [61], Einstein there explicitly noted that the advance
of Mercury’s perihelion would be 17′′ which is the value that is obtained on the basis of
the Entwurf-theory. These calculations are undated, see note 36.
39[Einstein 1914].
dated after 1 October 1915, in which Einstein asked about possibilities to al-
low Freundlich more freedom to pursue independent research. In this letter,
Einstein mentioned that Freundlich had visited him “recently.”40
By 12 October, Einstein had realized the third problem with the Entwurf
theory, the unproven uniqueness of the Lagrangian for the Entwurf field equa-
tions, as he reported in a letter to Lorentz. In this letter, he neither men-
tioned the problem with the rotating metric nor the issue of gravitational
lensing.41
For our reconstruction of this episode, the precise date of Einstein’s letter
to Zangger in which he remarked that he had given up the hope of explaining
the “new stars” as a lensing phenomenon is relevant. It could have been
written either on the 8th or the 15th of October.42
The letter to Zangger suggests that they had talked about the idea earlier
since Einstein seems to presuppose that Zangger knew what he was talking
about and did not explain what he meant by “lens effect” (“Linsenwirkung”).
As mentioned before, Einstein had just recently met with Zangger, as well as
with Besso before returning to Berlin. The following scenario seems therefore
plausible:
Upon returning to Berlin some time after the 24th of September 1915,
Einstein realized the problem of the rotating metric solution and wrote to
Freundlich on the 30th, asking him to look into this issue. Shortly afterwards,
the two met in person. Most likely they discussed not only the rotation
problem, but also the lensing idea. Having found troubling indications of an
inner inconsistency in the very foundations of this theory, it would have been
a natural move for Einstein to go back and reconsider early arguments such as
one based safely on the equivalence hypothesis.43 After this meeting, Einstein
40“Letzter Tage war Herr Dr. Freundlich von der Sternwarte N bei mir.” [CPAE8,
Doc. 124].
41In a letter to Hilbert, dated 7 November 1915, Einstein wrote that he realized the flaw
in his proof “about four weeks ago” [CPAE8, Doc. 136].
42The editors of [CPAE8] dated this letter explicitly to the 15th of October. It seems,
however, that the 8th is also a possibility. The letter was written on a Friday between
September 30, when a fire and explosion took place in the comb factory Walter near Lake
Biel, mentioned in the letter, and October 22 when Einstein participated in
the first Academy session after the summer break. I see no reason why Einstein could not
have heard of the accidents from Zangger before October 8.
43It seems unlikely that Einstein at that time was already contemplating a quantitatively
different law of light deflection. Einstein first observed in [Einstein 1915c, p. 834] that an
additional factor of 2 would arise from the different first-order approximation for the
wrote to Naumann exploring possibilities to give Freundlich more research
freedom. By October 8, Einstein had convinced himself that gravitational
lensing cannot explain the “new stars.” On 12 October, he realized the third
problem of his mathematical derivation of the Entwurf field equation.
According to this reconstruction of the sequence of events, it is remarkable
that the “joyous excitement” about the lensing idea falls within days after
his being “electrified” about the realization of the rotation problem on 30
September, and his realization of the third problem of the mathematical
derivation of the Entwurf equation, on or before 12 October 1915.44
Some five weeks later, his excitement was even greater and his heart,
allegedly, skipped a beat when he found that he could derive the anomalous
advance of Mercury’s perihelion on the basis of his new field equations. And
after having submitted the last of his four November communications to the
Prussian Academy on 25 November which presented the final gravitational
field equations, the “Einstein equations,” he wrote to Sommerfeld:
You must not be cross with me that I am answering your kind
and interesting letter only today. But in the last month I had
one of the most exciting, exhausting times of my life, indeed also
one of the most successful. I could not think of writing.45
It is interesting to learn from Einstein’s letter to Budde that in addition to
the realization of the problems with the Entwurf theory and the eventual suc-
metric if the Newtonian limit is derived on the basis of generally covariant field equations
in which the Ricci tensor is directly set proportional to the energy-momentum tensor.
These latter equations were published in his second November memoir, presented on 11
November, under the assumption that the trace of the energy-momentum tensor vanish.
In his comment on the factor of 2, Einstein refers to this result as being in contrast to
“earlier calculations” where the hypothesis of vanishing energy-momentum had not yet
been made.
44For completeness, one should point out one other intellectual activity of Einstein’s during
those days. In Einstein’s letter to Zangger of 8 or 15 October, he also mentioned that
he wrote “a supplementary paper to my last year’s analysis on general relativity.” The
last year’s analysis is, in all likelihood, [Einstein 1914]; the supplementary paper is, in all
likelihood, an early version of [Einstein 1916b], or, perhaps, an early version of Einstein’s
first November memoir [Einstein 1915a], see [CPAE8, Doc. 130, note 5] and [Janssen 1999,
note 51].
45“Sie dürfen mir nicht böse sein, dass ich erst heute auf Ihren freundlichen und in-
teressanten Brief antworte. Aber ich hatte im letzten Monat eine der aufregendsten,
anstrengendsten Zeiten meines Lebens, allerdings auch der erfolgreichsten.” Einstein to
Sommerfeld, 28 November 1915 [CPAE8, Doc. 153].
cess of his breakthrough to general relativity, an astronomical problem, the
idea of explaining novae in terms of gravitational lensing, added to Einstein’s
excitement in the midst of what must indeed have been the most intense
period of intellectual turmoil in his life.
4 Concluding remarks
Einstein’s recollections of his thought concerning the explanation of the “new
stars” as a phenomenon of gravitational lensing in his letter to Budde add
two significant insights to our reconstruction of the genesis of general rel-
ativity. If our dating and context hypothesis of the lensing calculations in
the scratch notebook are correct, we learn that it was an astronomical ob-
servation that triggered the elaboration of a significant consequence of the
equivalence hypothesis and its consequence of gravitational light deflection.
It is also interesting that on his intellectual path from the Entwurf theory to
the final theory of general relativity, Einstein also took a detour in which he
explored further consequences of one of the solid pillars of general relativity,
the equivalence hypothesis.
Appendix: Einstein’s lensing calculations in
the Scratch Notebook AEA 3-013
The following is a self-contained line-by-line paraphrase of Einstein’s lensing
calculations in his scratch notebook, [CPAE3, pp. 585–589]. The pagination
in square brackets refers to the sequence of pages in the notebook.
The calculations start out on p. [43] with Fig. (1) and continue on the
facing page p. [46]. From the more explicit sketch in Fig. (4), we read off the
lensing equation:
r = ρ (R + R′)/R − (α/ρ) R′ . (4)
Here R is the distance between the light emitting star S and the lensing star
L; R′ the distance between the massive star L and the projected position of
the observer O on the line connecting light source and lens; ρ is the distance
of closest approach of a light ray emitted from the distant star and seen by
an observer; r is the orthogonal distance of the terrestrial observer to the
line connecting light source and lens.
Figure 4: The geometry of stellar lensing.
The first term in the lensing equation (4) is obtained from the similarity
of triangles with baseline R and R + R′,
respectively, and the second term is the angle of deflection as given by the law
of gravitational light bending, where α is the Schwarzschild radius 2GM/c².
If we want to write this equation in dimensionless variables, we need to
multiply it by a factor of
√( R / (R′(R + R′)α) ) , (5)
so that, when we define r0 and ρ0 as
r0 = r √( R / (R′(R + R′)α) ) , (6)
ρ0 = ρ √( (R + R′) / (R R′ α) ) , (7)
the lensing equation (4) turns into
r0 = ρ0 − 1/ρ0 . (8)
This is a quadratic equation for ρ0, the two solutions of which correspond to
the two light rays passing above and below L. The observer O therefore sees
two images of S at positions S ′ and S ′′, respectively. To read off the radius
of an “Einstein ring,” obtained for perfect alignment of S, L, and O, one
only needs to set r0 ≡ 0 in (8), which gives ρ0 = 1.
In order to get an expression for the apparent magnification, Einstein
proceeded as follows. He first took the square of eq. (8) as
2 + r² = ρ² + 1/ρ² . (9)
If we multiply this equation by π and denote the areas of the circles corre-
sponding to the radii r and ρ as f = πr² and ϕ = πρ², respectively, we can
write this equation as
2π + f = ϕ + π²/ϕ . (10)
We are not interested in the full circle corresponding to these radii but in the
differential area element associated with these radii. More precisely, we are
interested in the change of the differential area element df associated with f
when we change the differential area element dϕ associated with ϕ. Hence,
Einstein wrote
df = ( 1 − π²/ϕ² ) dϕ . (11)
The intensity H̄ of the brightness received at r is related to the intensity H
of the brightness at ρ by
H̄ df = ±H dϕ , (12)
where the plus and minus signs refer to the two solutions of the quadratic
equation. Since we have from (11)
df/dϕ = 1 − π²/ϕ² = 1 − 1/ρ⁴ , (13)
we get
H̄ = ± H / ( 1 − 1/ρ⁴ ) , (14)
or, inserting the explicit solutions, we can write the total brightness at r as
Htot = H [ 1/(1 − 1/ρ1⁴) − 1/(1 − 1/ρ2⁴) ] . (15)
As Einstein remarked, the term in brackets gives the relative brightness, if
we take the value for r → ∞ to be 1.46 This result is equation number (3)
46“Klammer gibt relative Helligkeit”
in Einstein’s notes, and most of the following material on pp. [47] and [48],
as well as on the loose sheet containing pp. [44] and [45], will be a discussion
of this expression for the relative brightness.
On p. [47], Einstein first rewrote the reduced lensing equation as
r = 1/x − x , (16)
with x = 1/ρ, and then the terms in brackets as
Hr = 1/(1 − x1⁴) + 1/(x2⁴ − 1) . (17)
The next step is to bring the two terms to a common denominator47
Hr = ( x1⁴ − x2⁴ ) / ( (1 − x1⁴)(1 − x2⁴) ) . (18)
If one squares the lensing equation (16) twice, one obtains
−2 + (2 + r²)² = 1/x⁴ + x⁴ . (19)
If we now introduce new variables A and u via
2A = −2 + (2 + r²)² , (20)
A = −1 + (1/2)(2 + r²)² = 1 + 2r² + (1/2) r⁴ , (21)
u = x⁴ , (22)
we can write the quadrupled equation (19) as
2A = u + 1/u . (23)
Multiplication by u and adding A2 on each side gives
u² − 2Au + A² = −1 + A² , (24)
47In the notes, Einstein refers to this step as “Rationalisierung”.
from which one can immediately read off the two solutions of eq. (23) as
u = A ± √(A² − 1) . (25)
Given (18), the difference between the two roots,
u1 − u2 = 2√(A² − 1) , (26)
provides an expression for the numerator of Hr in (18). With the two roots,
we can also rewrite the quadratic equation in the form
u² − 2Au + 1 = (u − u1)(u − u2) , (27)
and if we now set u = 1, we obtain
2(1 − A) = (1 − u1)(1 − u2) , (28)
which gives us an expression for the denominator of Hr in (18). Combining
the two expressions, as Einstein did on p. [48], we obtain
Hr = √( (A + 1)/(A − 1) ) (29)
= ( 1 + r²/2 ) / ( r √(1 + r²/4) ) , (30)
where we have inserted (21) to obtain the second line.
We now have an explicit expression for the relative brightness as a func-
tion of the dimensionless variable r. We now evidently see that Hr → 1/r
for r → 0, and that Hr approaches 1 asymptotically from above for large r,
see Fig. (5).
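As a cross-check on the algebra leading from (16) to (30) (again my own verification, not in the source), the closed form (30) can be evaluated directly; it reproduces the two limits just mentioned and the "factor of 2" at r = 1/2 used in (31) below, and it agrees numerically with the bracket of eq. (15).

# Closed form (30): H_r = (1 + r^2/2) / (r * sqrt(1 + r^2/4)).
# Small r: H_r ~ 1/r; large r: H_r -> 1; H_r(1/2) ~ 2.2, Einstein's "factor of 2".
import math

def H_r(r):
    return (1 + r**2 / 2) / (r * math.sqrt(1 + r**2 / 4))

for r in (0.01, 0.5, 1.0, 10.0, 100.0):
    print(r, H_r(r), 1 / r)   # H_r tracks 1/r for small r and tends to 1 for large r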
Let us now reconstruct Einstein’s order-of-magnitude estimate for the
expected frequency of the phenomenon on p. [45]. The explicit expression
for the relative brightness gives us a measure of the maximal distance r for
which significant magnification is obtained. We can look at specific values of
Hr(r). For instance, for r0 = 1/2 we find
Hr(1/2) = ( 1 + 1/8 ) / ( (1/2)√(1 + 1/16) ) ≈ 2 . (31)
Figure 5: A plot of the expression (30) for the relative brightness Hr as a
function of r. The inset is from [CPAE3, p. 587].
Hence, Einstein concluded that up to a distance of
r0 = 1/2 (32)
one would obtain an increase of the intensity by a factor of 2. In other words,
if we write the intensity Hr0 asymptotically for small r0 and R′ ≫ R as
Hr0 ≈ 1/r0 = (1/r) √( R′(R + R′)α / R ) , (33)
we see that for a lensing star at a distance of R, the relative increase in
intensity is given by
Hr0 ≈ √(α/R) · (R′/r) , where r/R′ = tg ᾱ . (34)
Here ᾱ is the angle that determines how well the distant star has to be aligned
with the lensing star and the observer to produce appreciable magnification.
In order to get an order-of-magnitude estimate for this angle, one needs an
order-of-magnitude estimate for √(α/R). In order to obtain such an estimate,
Einstein notes that the ratio of the solar Schwarzschild radius α to the solar
equatorial radius Rs is given approximately by
α/Rs = 3 · 10⁻⁶ . (35)
The radius of the sun is 2 light seconds, and the distance of the nearest stars
is of the order of 10 light years, or
10⁵ · 365 · 10 ≈ 4 · 10⁸ lightseconds. (36)
It follows that α/R for a star of 1 solar mass 10 lightyears away is
α/R ≈ 10⁻¹⁴ or √(α/R) ≈ 10⁻⁷ . (37)
To see the distant star with double intensity, we therefore have
tg ᾱ ≈ (1/2)√(α/R) , (38)
so that the angle ᾱ is of order 10⁻⁷. A linear angle corresponds to a solid
angle roughly by taking its square. Thus, the angular size of the region
where the distant star needs to be found behind a massive star in order to
be magnified by the lens is of order 10⁻¹⁴. In angular units, the total sky has
an area of 4π ≈ 10, so that the angular size of the region in question covers a
fraction of 10⁻¹⁵ of the total sky. This has to be contrasted with the average
density of stellar population in the sky. The Bonner Durchmusterung listed
of the order of 3 · 10⁵ stars to ninth magnitude for the northern hemisphere,
so a reasonable average density of the number of stars would be 1 star per
10⁻⁵ of the sky.48
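The chain of estimates can be retraced in a few lines of arithmetic; the sketch below uses my own round numbers (they are not Einstein's figures) and reproduces the orders of magnitude 10⁻¹⁴, 10⁻⁷ and 10⁻¹⁵ quoted above.

# Order-of-magnitude retracing of the estimate on [p. 45]: a solar-mass lens
# at ~10 lightyears, with round numbers only.
alpha_over_Rs = 3e-6        # solar Schwarzschild radius / solar radius, eq. (35)
Rs_lightsec = 2             # solar radius in light-seconds
R_lightsec = 4e8            # ~10 lightyears in light-seconds, eq. (36)

alpha_over_R = alpha_over_Rs * Rs_lightsec / R_lightsec   # ~1.5e-14, eq. (37)
angle = alpha_over_R ** 0.5                               # ~1.2e-7, "of order 10^-7"
patch = angle ** 2                                        # solid angle ~ 10^-14
fraction = patch / 10                                     # total sky ~ 4*pi ~ 10
print(alpha_over_R, angle, fraction)                      # ~1.5e-14, ~1.2e-7, ~1.5e-15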
On the back of the loose sheet [p. 44] we find a few more calculations
related to order-of-magnitude estimates that start from (32). Einstein here
again goes back to the definition of r0 and ρ0 in terms of R, R′, and α.49
Again, he observes that r0 = 1/2 would give twice the usual intensity, and
rewrites (6) for this case:
r = (1/2) √( R′(R + R′)α / R ) . (39)
The latter equation for R′ ≫ R turns into
r = (R′/2) √(α/R) , (40)
and for R ≫ R′ into
r = (1/2) √(αR′) . (41)
Einstein concluded that the smaller of the two distances R and R′ determines
the angle r/R′. In the top right corner of the page, Einstein jotted down another
order-of-magnitude calculation, which I do not fully understand. Apparently,
he computed the distance of 100 lightyears in terms of centimeters
3 · 10¹⁰ · 3 · 10⁷ · 10² [cm] ≈ 10²⁰ cm (42)
48On the relevant page under discussion here, we also find a little sketch by Einstein of
a circle and the angle of its radius for a point some distance away. The precise meaning
of this sketch is unclear but the numbers written next to it suggest that Einstein was
considering the order of magnitude for the angular size of the moon. The radius of the
moon is seen under an angle of 15′ from the earth, and the mean distance between the
earth and the moon in units of the lunar radius is about 200, which translates to an angle
of 50°.
49One can see here that Einstein corrected an error in his earlier calculations on [p. 43],
where he had erroneously written the second term of the lensing equation (4) with R
instead of R′, which resulted in a confusion of the factors of R and R′ in expressions (5)
and (6).
Figure 6: A sketch in Einstein’s scratch notebook to obtain eq. (43). From
[CPAE3, p.585].
He also computed the angle x under which the star at distance R′ and
the star at distance R+R′ would be seen by an observer at distance r away
from the connecting line between the two stars if no lensing took place:
x = r/R′ − r/(R + R′) = r R / ( R′(R + R′) ) . (43)
The first equation can be read off from a little sketch of the geometry of light
source, lensing star, and observer, at the bottom of the page, see Fig. (6).
Let us finally comment on the calculations on pp. [51] and [52]. As men-
tioned in the main text of this article, Einstein here introduced a change
of notation. On p. [51], he sketched again the geometry for stellar lensing.
Here, the geometry has been turned by 90 degrees, and the notation changed
so that R and R′ become R1 and R2, and ρ and r are interchanged to be-
come r and ρ, respectively. This change of notation is reflected in the lensing
equation, written down on p. [52] as
ρ = r + R1 ( tan w − α/r ) , (44)
where tan w = r/R2. Einstein then immediately proceeded to compute the
magnification by taking the square of the lensing equation and then comput-
ing the derivative as
d(ρ²)/d(r²) = ( (R1 + R2)/R2 )² − (R1α)²/r⁴ = A · ( 1 − (R1R2α)²/((R1 + R2)² r⁴) ) , (45)
with A = ((R1 + R2)/R2)².
Instead of pursuing this calculation further, Einstein wrote “apparent
diameter of a Nova star,” and wrote down the solution of eq. (44) for
ρ = 0, so as to obtain the diameter of an Einstein ring:
r = √( R1R2α / (R1 + R2) ) . (46)
He computed the angle w0 as
w0 = √( R1α / (R2(R1 + R2)) ) . (47)
The calculation ends with an attempt at a numerical order-of-magnitude
estimation which seems to proceed along the same lines as in eqs. (35,36).
The calculation, however, was broken off, and the whole page was struck
through.
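For a sense of scale of eqs. (46) and (47) (an illustrative evaluation of my own; the distances are arbitrary choices and do not appear in the notebook), the ring radius and the ring angle w0 can be evaluated for a solar-mass lens midway between the observer and a source 20 lightyears away.

# Einstein-ring radius r_E (eq. 46) and angle w0 (eq. 47) for a solar-mass
# lens (alpha ~ 3 km) midway to a source 20 lightyears away.
import math

ly = 9.46e15                     # one lightyear in metres
alpha = 2.95e3                   # solar Schwarzschild radius 2GM/c^2 [m]
R1 = R2 = 10 * ly                # source-lens and lens-observer distances [m]

r_E = math.sqrt(R1 * R2 * alpha / (R1 + R2))       # ~1.2e10 m, about 0.08 AU
w0 = math.sqrt(R1 * alpha / (R2 * (R1 + R2)))      # ~1.2e-7 rad
print(r_E, w0, w0 * 206265)                        # ring angle ~0.03 arcseconds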
Acknowledgments
I wish to thank Diana Buchwald for a critical reading of an earlier version of
this paper, and Hilmar Duerbeck for some helpful comments. Unpublished
correspondence in the Albert Einstein Archives is quoted by kind permission.
References
[Adams and Kohlschütter 1912] Adams, Walter S., and Kohlschutter [sic],
Arnold. “Observations of the spectrum of Nova Geminorum No. 2.”
Astrophysical Journal 36 (1912), 293–321.
[Budde 1914a] Budde, Emil Arnold. Tensoren und Dyaden im dreidimen-
sionalen Raum. Braunschweig: Vieweg, 1914.
[Budde 1914b] Budde, Emil Arnold. “Kritisches zum Relativitätsprinzip.”
Verhandlungen der Deutschen Physikalischen Gesellschaft 16 (1914)
586–612.
[Budde 1914c] Budde, Emil Arnold. “Kritisches zum Relativitätsprinzip II.”
Verhandlungen der Deutschen Physikalischen Gesellschaft 16 (1914)
914–925.
[Campbell 1914] Campbell, Leon. “A systematic search for bright Novae.”
Popular Astronomy 22 (1914), 493–495.
[Chwolson 1924] Chwolson, Orest. “Über eine mögliche Form fiktiver Dop-
pelsterne.” Astronomische Nachrichten 221 (1924) 329–330.
[CPAE2] Stachel, John, et al. (eds.) The Collected Papers of Albert Einstein,
Vol. 2: The Swiss Years: Writings, 1900–1909, Princeton: Princeton
University Press, 1989.
[CPAE3] Klein, Martin, et al. (eds.) The Collected Papers of Albert Einstein,
Vol. 3: The Swiss Years: Writings, 1909–1911, Princeton: Princeton
University Press, 1993.
[CPAE4] Klein, Martin, et al. (eds.) The Collected Papers of Albert Einstein,
Vol. 4: The Swiss Years: Writings, 1912–1914, Princeton: Princeton
University Press, 1995.
[CPAE5] Klein, Martin, et al. (eds.) The Collected Papers of Albert Ein-
stein, Vol. 5: The Swiss Years: Correspondence, 1902–1914, Princeton:
Princeton University Press, 1993.
[CPAE6] Kox, A.J., et al. (eds.) The Collected Papers of Albert Einstein,
Vol. 6: The Berlin Years: Writings, 1914–1917, Princeton: Princeton
University Press, 1996.
[CPAE8] Schulmann, Robert, et al. (eds.) The Collected Papers of Albert
Einstein, Vol. 8: The Berlin Years: Correspondence, 1914–1918, Prince-
ton: Princeton University Press, 1998.
[CPAE10] Buchwald, Diana K., et al. (eds.) The Collected Papers of Al-
bert Einstein, Vol. 10: The Berlin Years: Correspondence, May–
December 1920 and Supplementary Correspondence, 1909–1920, Prince-
ton: Princeton University Press, 2006.
[Duerbeck 1987] Duerbeck, Hilmar W. “A Reference Catalogue and Atlas of
Galactic Novae.” Space Science Reviews 45 (1987) 1–212.
[Duerbeck 2007] Duerbeck, Hilmar W. “Novae - a Historical Perspective.”
In Bode, M.F., Evans, A. (eds.) Classical Novae, Cambridge University
Press, forthcoming.
[Earman and Janssen 1993] Earman, John and Janssen, Michel. “Einstein’s
Explanation of the Motion of Mercury’s Perihelion.” In: Earman, John
et al. (eds.) The Attraction of Gravitation, Boston: Birkhäuser, 1993
(Einstein Studies Vol. 5), 129–172.
[Eddington 1920] Eddington, Arthur S. Space, Time, and Gravitation Cam-
bridge: Cambridge University Press, 1920.
[Einstein 1907] Einstein, Albert. “Über das Relativitätsprinzip und die aus
demselben gezogenen Folgerungen.” Jahrbuch der Radioaktivität und
Elektronik 4 (1907) 411–462. Reprinted in [CPAE2, Doc. 47].
[Einstein 1911] Einstein, Albert. “Über den Einfluß der Schwerkraft auf
die Ausbreitung des Lichtes.” Annalen der Physik 35 (1911) 898–908.
Reprinted in [CPAE3, Doc. 23].
[Einstein 1912a] Einstein, Albert. “Lichtgeschindigkeit und Statik des Grav-
itationsfeldes.” Annalen der Physik 38 (1912) 355–369. Reprinted in
[CPAE4, Doc. 3].
[Einstein 1912b] Einstein, Albert. “Zur Theorie des statischen Gravitations-
feldes.” Annalen der Physik 38 (1912) 443–458. Reprinted in [CPAE4,
Doc. 4].
[Einstein 1912c] Einstein, Albert. “Relativität und Gravitation. Erwiderung
auf eine Bemerkung von M. Abraham” Annalen der Physik 38 (1912)
1059–1064. Reprinted in [CPAE4, Doc. 8].
[Einstein 1914] Einstein, Albert. “Die formale Grundlage der allgemeinen
Relativitätstheorie” Königlich Preußische Akademie der Wissenschaften
(Berlin). Sitzungsberichte 1914, 1030–1085. Reprinted in [CPAE6,
Doc. 9].
[Einstein 1915a] Einstein, Albert. “Zur allgemeinen Relativitätstheorie.”
Königlich Preußische Akademie der Wissenschaften (Berlin). Sitzungs-
berichte 1915, 778–786. Reprinted in [CPAE6, Doc. 21].
[Einstein 1915b] Einstein, Albert. “Zur allgemeinen Relativitätstheorie
(Nachtrag).” Königlich Preußische Akademie der Wissenschaften
(Berlin). Sitzungsberichte 1915, 799–801. Reprinted in [CPAE6,
Doc. 22].
[Einstein 1915c] Einstein, Albert. “Erklärung der Perihelbewegung des
Merkur aus der allgemeinen Relativitätstheorie (Nachtrag).” Königlich
Preußische Akademie der Wissenschaften (Berlin). Sitzungsberichte
1915, 831–839. Reprinted in [CPAE6, Doc. 24].
[Einstein 1915d] Einstein, Albert. “Die Feldgleichungen der Gravitation.”
Königlich Preußische Akademie der Wissenschaften (Berlin). Sitzungs-
berichte 1915, 844–847. Reprinted in [CPAE6, Doc. 25].
[Einstein 1916] Einstein, Albert. “Die Grundlage der allgemeinen Rela-
tivitätstheorie.” Annalen der Physik 49 (1916) 769–822. Reprinted in
[CPAE6, Doc. 30].
[Einstein 1916b] Einstein, Albert. “Eine neue formale Deutung der
Maxwellschen Feldgleichungen der Elektrodynamik.” Königlich Preußis-
che Akademie der Wissenschaften (Berlin). Sitzungsberichte 1916, 184–
188. Reprinted in [CPAE6, Doc. 27].
[Einstein and Grossmann 1913] Einstein, Albert and Grossmann, Marcel.
Entwurf einer verallgemeinerten Relativitätstheorie und einer Theorie
der Gravitation. Leipzig: Teubner, 1913. Reprinted in [CPAE4, Doc. 13].
[Einstein 1936] Einstein, Albert. “Lens-like action of a star by the deviation
of light in the gravitational field.” Science 84 (1936) 506–507.
[Fernau 1914] Fernau, Hermann. Die französische Demokratie. Sozialpoli-
tische Studien aus Frankreichs Kulturwerkstatt. München: Duncker &
Humblot, 1914.
[Fischer-Petersen 1912] Fischer-Petersen, J. “Über die Lichtkurve der Nova
(18.1912) Geminorum 2.” Astronomische Nachrichten 192 (1912) 429–
[Furuhjelm 1912] Furuhjelm, Ragnar. “Über das Spektrum der Nova Gemi-
norum 2.” Astronomische Nachrichten 192 (1912) 117–124.
[Hentschel 1994] Hentschel, Klaus. “Erwin Finlay Freundlich and Testing
Einstein’s Theory of Relativity.” Archive for History of Exact Sciences
47 (1994) 143–201.
[Hentschel 1997] Hentschel, Klaus. The Einstein Tower. An Intertexture of
Dynamic Construction, Relativity Theory, and Astronomy. Stanford:
Stanford University Press, 1997.
[Janssen 1999] Janssen, Michel. “Rotation as the Nemesis of Einstein’s En-
twurf Theory.” In: Goenner, Hubert, et al. (eds.). The Expanding Worlds
of General Relativity Boston: Birkhäuser, 1999 (Einstein Studies Vol. 7),
127–157.
[Janssen et al. 2007] Janssen, Michel, Norton, John D., Renn, Jürgen, Sauer,
Tilman, and Stachel, John. The Genesis of General Relativity. Einstein’s
Zurich Notebook. Vol. 1. Introduction and Source, Vol. 2. Commentary
and Essays. Dordrecht: Springer, 2007.
[Laue 1921] Laue, Max von. [Nachruf auf Emil Arnold Budde] Verhandlun-
gen der Deutschen Physikalischen Gesellschaft (1921) 66–68.
[Lodge 1919] Lodge, Oliver J. “Gravitation and Light” Nature 104 (1919)
[Ludendorff 1912] Ludendorff, H. “Über die schwachen Absorptionslinien im
Spektrum der Nova Geminorum 2.” Astronomische Nachrichten 192
(1912) 124–130.
[Norton 1984] Norton, John D. “How Einstein Found His Field Equations.”
Historical Studies in the Physical Sciences 14 (1984), 253–316.
[Norton 1992] Norton, John D. “The Physical Content of General Covari-
ance.” In: Eisenstaedt, Jean, and A.J.Kox (eds.). Studies in the History
of General Relativity Boston: Birkhäuser, 1992 (Einstein Studies Vol. 3),
281–315.
[Pickering 1912] Pickering, Edward C. Astronomical Bulletin of the Harvard
College Observatory 17 March 1912.
[Renn, Sauer, and Stachel 1997] Renn, Jürgen, Sauer, Tilman, and Stachel,
John. “The Origin of Gravitational Lensing: A Postscript to Einstein’s
1936 Science Paper.” Science 275 (1997) 184–186.
[Renn and Sauer 2003] Renn, Jürgen, and Sauer, Tilman. “Eclipses of the
Stars. Mandl, Einstein, and the Early History of Gravitational Lensing.”
In: A. Ashtekar et al. (eds.). Revisiting the Foundations of Relativistic
Physics, Dordrecht: Kluwer, 2003, 69–92.
[Retter et al. 1999] Retter, A., Leibowitz, E.M., Naylor, T. “An irradiation
effect in Nova DN Gem 1912 and the significance of the period gap for
classical novae.” Monthly Notices of the Royal Astronomical Society 308
(1999) 140–146.
[Shara 1989] Shara, Michael M. “Recent Progress in Understanding the
Eruptions of Classical Novae.” Publications of the Astronomical Soci-
ety of the Pacific 101 (1989) 5–31.
[Werner 1921] Werner, R. “Emil Arnold Budde.” Elektrotechnische
Zeitschrift 42 (1921) 1153–1154.
|
0704.0967 | Cross-Layer Optimization of MIMO-Based Mesh Networks with Gaussian
Vector Broadcast Channels | Cross-Layer Optimization of MIMO-Based Mesh
Networks with Gaussian Vector Broadcast Channels
Jia Liu and Y. Thomas Hou
The Bradley Department of Electrical and Computer Engineering
Virginia Polytechnic Institute and State University, Blacksburg, VA 24061
Email: {kevinlau, thou}@vt.edu
Abstract— MIMO technology is one of the most significant
advances in the past decade to increase channel capacity and has
a great potential to improve network capacity for mesh networks.
In a MIMO-based mesh network, the links outgoing from each
node sharing the common communication spectrum can be
modeled as a Gaussian vector broadcast channel. Recently, re-
searchers showed that “dirty paper coding” (DPC) is the optimal
transmission strategy for Gaussian vector broadcast channels. So
far, there has been little study on how this fundamental result will
impact the cross-layer design for MIMO-based mesh networks.
To fill this gap, we consider the problem of jointly optimizing
DPC power allocation in the link layer at each node and
multihop/multipath routing in a MIMO-based mesh network.
It turns out that this optimization problem is a very challenging
non-convex problem. To address this difficulty, we transform
the original problem to an equivalent problem by exploiting
the channel duality. For the transformed problem, we develop
an efficient solution procedure that integrates Lagrangian dual
decomposition method, conjugate gradient projection method
based on matrix differential calculus, cutting-plane method, and
subgradient method. In our numerical example, it is shown that
we can achieve a network performance gain of 34.4% by using DPC.
I. INTRODUCTION
Since Telatar’s [1] and Foschini’s [2] pioneering works
predicting the potential of high spectral efficiency provided
by multiple antenna systems, the last decade has witnessed
a surge of research activity on Multiple-Input Multiple-Output
(MIMO) technologies. The benefits of substantial improve-
ments in wireless link capacity at no cost of additional
spectrum and power have quickly positioned MIMO as one
of the breakthrough technologies in modern wireless com-
munications, rendering it as an enabling technology for next
generation wireless networks. However, applying MIMO in
wireless mesh networks (WMNs) is not a trivial technical
extension. With the increased number of antennas at each
node, interference is likely to become stronger if power level
at each node, power allocation to each antenna element, and
routing are not managed wisely. As a result, cross-layer design
is necessary for MIMO-based WMNs.
In a MIMO-based WMN, the set of outgoing links from
a node sharing a common communication spectrum can be
modeled as a nondegraded Gaussian vector broadcast channel,
for which the capacity region is notoriously hard to analyze
[3]. In the networking literature, most works considering
links sharing a common communication spectrum are con-
cerned with how to allocate frequency sub-bands/time-slots
and schedule transmissions to efficiently share the common
communication spectrum. As an example, Fig. 1(a) shows a
simple broadcast channel where there are three uncoordinated
users and a single transmitting node. Suppose that messages
x, y, and z need to be delivered to user 1, user 2, and user 3,
respectively. Also, suppose that the received signals subject to
ambient noise are x̂, ŷ, and ẑ, and the decoding functions are
f1(·), f2(·), and f3(·), respectively. The conventional strategy
is to divide a unit time frame into three time slots (or divide
a unit band into three sub-bands) τ1, τ2, and τ3, and then
find the optimal scheduling for transmissions to users 1, 2
and 3, accordingly. The major benefit of this strategy is that
interference can be eliminated.
Fig. 1. A 3-user broadcast channel example: (a) time or frequency division; (b) DPC transmission strategy.
Although the time or frequency division schemes are simple
and effective, they are not necessarily the smartest strategy.
In fact, Cover showed in his classic paper [4] that a
transmission scheme jointly encoding all receivers’ informa-
tion at the transmitter can do strictly better in broadcast chan-
nels. However, the capacity achieving transmission signaling
scheme for general nondegraded Gaussian vector broadcast
channels is very difficult to determine and has become one
of the most basic questions in network information theory
[3]. Very recently, significant progress has been made in
this area. Most notably, Weingarten et al. finally proved the
long-open conjecture that the “dirty paper coding” strategy
(DPC) [5] is the optimal transmission scheme for Gaussian
vector broadcast channels [6] in the sense that the DPC rate
region CDPC of a broadcast channel is equal to the broadcast
channel’s capacity region CBC, i.e., CBC = CDPC. However,
this fundamental result is still not adequately exposed to the
networking research community. So far, how to exploit DPC’s
http://arxiv.org/abs/0704.0967v1
benefits in the cross-layer design for wireless mesh networks
has not yet been studied in the literature. The main objective
of this study is to fill this gap and to obtain a rigorous and
systematic understanding of the impact of applying DPC to
the cross-layer optimization for MIMO-based mesh networks.
To begin with, it is beneficial to introduce the basic idea
of DPC, which turns out to be very simple. For the same
3-user example, consider the following strategy as shown
in Fig. 1(b). We first jointly encode the messages for all
the users in a certain order and then broadcast the resulting
codeword simultaneously. Suppose that we pick user 1 to be
encoded first, then followed by user 2, and finally user 3.
We choose the codeword x for user 1 as before. Then, the
interference seen by user 2 due to user 1 (denoted by x̂2)
is known at the transmitter. So, the transmitter can subtract
the interference and encode user 2 as y′ = y − x̂2 rather
than y itself. As a result, user 2 does not see any interference
from the signal intended for user 1. Likewise, after encoding
user 2, the interferences seen by user 3 due to user 1 and 2
(denoted by x̂3 and ŷ3) are known at the transmitter. Then, the
transmitter can subtract the interferences and encode user 3 as
z′ = z− x̂3− ŷ3 rather than z itself. Therefore, user 3 does not
see any interferences from the signals intended for user 1 and
2. In the end, the transmitter adds all the codewords together
and broadcasts the sum to all users simultaneously. As a result,
it is easy to see from Fig. 1(b) that the received signal at user
1 is x̂ + ŷ1 + ẑ1, i.e., user 1 will experience the interference
from the signals intended for users 2 and 3; the received
signal at user 2 is ŷ + ẑ2, i.e., user 2 only experiences the
interference from the signal intended for user 3; and finally, the
received signal at user 3 is ẑ, i.e., user 3 does not experience
any interference. This process operates like writing on a dirty
paper, hence the name. Although counterintuitive, the capacity region of DPC, which allows interference, is strictly larger than that of time- or frequency-division schemes.
After understanding what DPC is, one may ask two very
natural and interesting questions:
1) How will the enlarged capacity region at each node due
to DPC impact the network performance in the upper
layers?
2) Are there any new challenges if DPC is employed in a
MIMO-based networking environment?
Notice that, when DPC is employed, the encoding order
plays a critical role. For a K-user broadcast channel, there
exist K! permutations. Also, since DPC allows interference
among the users, power allocation among different users
along with the encoding order has a significant impact on
the system performance. As we show later, the DPC link
rates in a broadcast channel are non-convex functions. Thus,
even the optimization for a single K-user Gaussian vector
broadcast channel is a very challenging combinatorial non-
convex problem, not to mention the cross-layer design in a
networking environment with multiple broadcast channels.
In this paper, we aim to solve the problem of jointly
optimizing DPC per-antenna power allocation at each node
in the link layer and multihop/multipath routing in a MIMO-
based WMN. Our contributions are three-fold. First, this paper
is the first work that studies the impacts of applying DPC
to the cross-layer design for MIMO-based WMNs. In our
numerical example, it is shown that we can achieve a network
performance gain of 34.4% by using DPC in MIMO-based
WMNs. Also, since the traditional single-antenna systems can
be viewed as a special case of MIMO systems, the findings
and results in this paper are also applicable to conventional
WMNs with single-antenna. Second, to address the non-
convex difficulty, we transform the original problem to an
equivalent problem under the dual MIMO multiple access
channel (MIMO-MAC) and show that the transformed prob-
lem is convex with respect to the input covariance matrices. We
simplify the maximum weighted sum rate problem for the dual
MIMO-MAC such that enumerating different encoding order
is unnecessary, thus paving the way to efficiently solve the
link layer subproblem in Lagrangian dual decomposition. Last,
for the transformed problem, we develop an efficient solu-
tion procedure that integrates Lagrangian dual decomposition
method, conjugate gradient projection method based on matrix
differential calculus, cutting-plane method, and subgradient
method.
The remainder of this paper is organized as follows. In
Section II, we discuss the network model and problem for-
mulation. Section III discusses how to reformulate the non-
convex original problem by exploiting channel duality. In
Section IV, we introduce the key components for solving
the challenging link layer subproblem in the Lagrangian
decomposition. Numerical results are provided in Section V
to illustrate the efficacy of our proposed solution procedure
and to study the network performance gain by using DPC.
Section VI reviews related work and Section VII concludes
this paper.
II. NETWORK MODEL
We first introduce notations for matrices, vectors, and com-
plex scalars in this paper. We use boldface to denote matrices
and vectors. For a matrix A, A† denotes the conjugate trans-
pose, Tr{A} denotes the trace of A, and |A| denotes the deter-
minant of A. Diag{A1, . . . ,An} represents the block diagonal
matrix with matrices A1, . . . ,An on its main diagonal. We let
I denote the identity matrix with dimension determined from
context. A ⪰ 0 represents that A is Hermitian and positive
semidefinite (PSD). 1 and 0 denote vectors whose elements
are all ones and zeros, respectively, and their dimensions are
determined from context. (v)_m represents the mth entry of vector v. For a real vector v and a real matrix A, v ≥ 0
and A ≥ 0 mean that all entries in v and A are nonnegative,
respectively. We let ei be the unit column vector where the i
entry is 1 and all other entries are 0. The dimension of ei is
determined from context as well. The operator “〈, 〉” represents
the inner product operation for vectors or matrices.
A. Network Layer
In this paper, the topology of a MIMO-based wireless
mesh network is represented by a directed graph, denoted
by G = {N ,L}, where N and L are the set of nodes
and all possible MIMO-based links, respectively. By saying
“possible” we mean the distance between a pair of nodes is
less than or equal to the maximum transmission range Dmax,
i.e., L = {(i, j) : Dij ≤ Dmax, i, j ∈ N, i ≠ j}, where
Dij represents the distance between node i and node j. Dmax
can be determined by a node’s maximum transmission power.
We assume that G is always connected. Suppose that the
cardinalities of the sets N and L are |N | = N and |L| = L,
respectively. For convenience, we index the links numerically
(e.g., link 1, 2, . . . , L) rather than using node pairs (i, j).
The network topology of G can be represented by a node-
arc incidence matrix (NAIM) [7] A ∈ RN×L, whose entry
a_{nl} associated with node n and arc l is defined as
a_{nl} = { 1, if n is the transmitting node of arc l;  −1, if n is the receiving node of arc l;  0, otherwise. }  (1)
We define O (n) and I (n) as the sets of links that are
outgoing from and incoming to node n, respectively. We use
a multicommodity flow model for the routing of data packets
across the network. In this model, several nodes send different
data to their corresponding destinations, possibly through
multipath and multihop routing. We assume that the flow
conservation law at each node is satisfied, i.e., the network
is a flow-balanced system.
Suppose that there are F sessions in total in the network,
representing F different commodities. The source and des-
tination nodes of session f , 1 ≤ f ≤ F , are denoted as
src(f) and dst(f), respectively. For the supply and demand
of each session, we define a source-sink vector sf ∈ RN ,
whose entries, other than at the positions of src(f) and dst(f),
are all zeros. In addition, from the flow conservation law, we
must have (sf )src(f) = −(sf )dst(f). Without loss of generality,
we let (sf )src(f) ≥ 0 and simply denote it as a scalar sf .
Therefore, we can further write the source-sink vector of flow
s_f = s_f [ · · · 1 · · · −1 · · · ]^T,  (2)
where the dots represent zeros, and 1 and −1 are in the
positions of src(f) and dst(f), respectively. Note that for the
source-sink vector of a session f , 1 does not necessarily appear
before −1 as in (2), which is only for an illustrative purpose.
Using the notation “=x,y” to represent the component-wise
equality of a vector except at the xth and the yth entries,
we have s_f =_{src(f),dst(f)} 0. In addition, using the matrix S = [s_1 s_2 . . . s_F] ∈ R^{N×F} to denote the collection of all source-sink vectors, we further have
Sef =src(f),dst(f) 0, 1 ≤ f ≤ F, (3)
〈1,Sef 〉 = 0, 1 ≤ f ≤ F, (4)
(Sef )src(f) = sf , 1 ≤ f ≤ F, (5)
where ef is the f
th unit column vector.
On link l, we let t_l^{(f)} ≥ 0 be the amount of flow of session f in link l. We define t^{(f)} ∈ R^L as the flow vector for
session f . At node n, components of the flow vector and
source-sink vector for the same commodity satisfy the flow
conservation law as follows:
∑_{l∈O(n)} t_l^{(f)} − ∑_{l∈I(n)} t_l^{(f)} = (s_f)_n, 1 ≤ n ≤ N, 1 ≤ f ≤ F. With NAIM, the flow
conservation law across the whole network can be compactly
written as At^{(f)} = s_f, 1 ≤ f ≤ F. We use matrix T ≜ [t^{(1)} t^{(2)} . . . t^{(F)}] ∈ R^{L×F} to denote the collection
of all flow vectors. With T and S, the flow conservation law
can be further compactly written as AT = S.
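To make the flow model concrete, the short Python/NumPy sketch below (written for this edit; the four-node topology, the single session, and the flow split are hypothetical examples, not taken from the paper) builds the node-arc incidence matrix A and checks the flow conservation law At^{(f)} = s_f:

import numpy as np

# Hypothetical 4-node example: nodes 0..3 and links (0,1), (1,2), (2,3), (0,2)
links = [(0, 1), (1, 2), (2, 3), (0, 2)]
N, L = 4, len(links)

# Node-arc incidence matrix: +1 at the transmitting node, -1 at the receiving node of each arc
A = np.zeros((N, L))
for l, (tx, rx) in enumerate(links):
    A[tx, l], A[rx, l] = 1.0, -1.0

# One session from node 0 to node 3 with rate s_f = 2.0, split over 0->1->2->3 and 0->2->3
s_f = 2.0
t_f = np.array([1.5, 1.5, 2.0, 0.5])          # flow carried by each link
source_sink = np.zeros(N)
source_sink[0], source_sink[3] = s_f, -s_f    # the source-sink vector of (2)

assert np.allclose(A @ t_f, source_sink)      # flow conservation A t(f) = s_f holds
print(A @ t_f)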
B. Channel Capacity of a MIMO Link
In this section, we first briefly introduce some background
of MIMO. We use a matrix Hl ∈ Cnr×nt to represent
the MIMO channel gain matrix from the transmitting node
to the receiving node of link l, where nt and nr are the
numbers of transmitting and receiving antenna elements of
each node, respectively. Hl captures the effect of the scattering
environment between the transmitter and the receiver of link
l. In an additive white Gaussian noise (AWGN) channel, the
received complex base-band signal vector for a MIMO link
l with nt transmitting antennas and nr receiving antennas is
given by
y_l = √ρ_l H_l x_l + n_l,  (6)
where yl and xl represent the received and transmitted signal
vector; nl is the normalized additive white Gaussian noise
vector; ρl captures the path-loss effect, which is usually
modeled as ρ_l = G · D_l^{−α}, where G is some system specific
constant, Dl denotes the distance between the transmitting
node and the receiving node of link l, and α denotes the path
loss exponent. Let matrix Ql represent the covariance matrix
of a zero-mean Gaussian input symbol vector xl at link l,
i.e., Q_l = E{x_l x_l†}. This implies that Q_l is Hermitian and Q_l ⪰ 0. Physically, Q_l represents the power allocation
in different antenna elements in link l’s transmitter and the
correlation between each pair of the transmit and receive
antenna elements. Tr{Ql} is the total transmission power at
the transmitter of link l. The capacity of a MIMO link l in an
AWGN channel with a unit bandwidth can be computed as
R_l(Q_l) = log2 |I + ρ_l H_l Q_l H_l†|.  (7)
It can be seen that different power allocations to the antennas
will have different impacts on the link capacity. Therefore, the
optimal input covariance matrix Q∗l needs to be determined.
In a single link environment, the optimal input covariance
matrix can be computed by water-filling the total power over
the eigenmodes (signaling direction) of the MIMO channel
matrix [1]. However, in a networking environment, finding
the optimal input covariance matrices is a substantially more
challenging task. Determining the optimal input covariance
matrices is one of the major goals in our cross-layer opti-
mization.
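The following Python/NumPy sketch (written for this edit under simple assumptions: a random 2×2 channel, unit noise, and a bisection-based water-filler) evaluates the link capacity (7) and compares an equal-power covariance with the water-filled covariance obtained from the channel eigenmodes, as discussed above:

import numpy as np

def link_capacity(H, Q, rho=1.0):
    """Capacity of one MIMO link per (7): log2 |I + rho * H Q H^dagger|."""
    nr = H.shape[0]
    M = np.eye(nr) + rho * H @ Q @ H.conj().T
    return np.linalg.slogdet(M)[1] / np.log(2)

def waterfill(H, P, rho=1.0):
    """Single-link optimal covariance: water-fill the power P over the channel eigenmodes."""
    U, s, Vh = np.linalg.svd(H)
    g = rho * s ** 2                              # eigenmode gains
    lo, hi = 0.0, P + 1.0 / g.min()               # bisection bracket for the water level mu
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - 1.0 / g, 0.0)
        lo, hi = (mu, hi) if p.sum() < P else (lo, mu)
    p = np.maximum(mu - 1.0 / g, 0.0)
    V = Vh.conj().T
    return V @ np.diag(p) @ V.conj().T            # covariance built in the transmit eigenbasis

rng = np.random.default_rng(0)
nt = nr = 2
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
P = 10.0
Q_uniform = (P / nt) * np.eye(nt)                 # equal power per antenna
Q_wf = waterfill(H, P)
print(link_capacity(H, Q_uniform), link_capacity(H, Q_wf))   # water-filling is never worse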
C. MIMO-BC Link Layer
A communication system where a single transmitter sends
independent information to multiple uncoordinated receivers is
referred to as a broadcast channel. If the channel gain of each
link in the broadcast channel is a matrix and the noise to each
link is a Gaussian random vector, the channel is termed “Gaus-
sian vector broadcast channel”. Fig. 2 illustrates a K-user
Gaussian vector broadcast channel, where independent mes-
sages W1, . . . ,WK are jointly encoded by the transmitter, and
the receivers are trying to decode W1, . . . ,WK , respectively.
A (n, 2^{nR1}, . . . , 2^{nRK})_BC codebook for a broadcast channel consists of an encoding function x^n(W1, . . . , WK), where Wi ∈ {1, . . . , 2^{nRi}}, i = 1, 2, . . . , K. The decoding function of receiver i is Ŵi(y_i^n). An error occurs when Ŵi ≠ Wi. A rate vector R = [R1, . . . , RK]^T is said to be achievable if there exists a sequence of (n, 2^{nR1}, . . . , 2^{nRK})_BC codebooks
for which the average probability of error Pe → 0 as the code
length n→ ∞. The capacity region of a broadcast channel is
defined as the union of all achievable rate vectors [3]. Gaussian
vector broadcast channel can be used to model many different
types of systems [3]. Due to the close relationship between
Gaussian vector broadcast channel and MIMO, we will call
the Gaussian vector broadcast channel in the MIMO case as
MIMO-BC throughout the rest of this paper.
Fig. 2. A Gaussian vector broadcast channel.
For clarity, we use Γi to specifically denote the input
covariance matrix of link i in a MIMO-BC, and Qj to denote
an input covariance matrix in other types of MIMO channels.
From the encoding process of DPC, the achievable rate in DPC
scheme can be computed as follows:
R_{π(i)} = log [ |I + H_{π(i)} (∑_{j≥i} Γ_{π(j)}) H_{π(i)}†| / |I + H_{π(i)} (∑_{j>i} Γ_{π(j)}) H_{π(i)}†| ],  (8)
where π denotes a permutation of the set {1, . . . ,K}, π(i)
represents the ith position in permutation π. One important
observation of the dirty paper rate equation in (8) is that the
rate equation is neither a concave nor a convex function of the
input covariance matrices Γi, i = 1, 2, . . . ,K .
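As an illustrative sketch of (8) — written for this edit, with arbitrary random channels and covariance matrices rather than optimized ones — the following Python/NumPy snippet evaluates the DPC rates for a given encoding order π and shows that the rates depend on that order:

import numpy as np

def logdet2(M):
    """log-determinant (in bits) of a Hermitian positive definite matrix."""
    return np.linalg.slogdet(M)[1] / np.log(2)

def dpc_rates(H_list, Gamma_list, pi):
    """Dirty paper rates per (8); pi lists the users in their encoding order."""
    K = len(H_list)
    nr = H_list[0].shape[0]
    rates = np.zeros(K)
    for i in range(K):
        user = pi[i]
        Hi = H_list[user]
        S_geq = sum(Gamma_list[pi[j]] for j in range(i, K))          # own signal + later users
        num = np.eye(nr) + Hi @ S_geq @ Hi.conj().T
        if i + 1 < K:
            S_gt = sum(Gamma_list[pi[j]] for j in range(i + 1, K))   # interference only
            den = np.eye(nr) + Hi @ S_gt @ Hi.conj().T
        else:
            den = np.eye(nr)
        rates[user] = logdet2(num) - logdet2(den)
    return rates

rng = np.random.default_rng(1)
K, nt, nr = 3, 2, 2
H_list = [rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt)) for _ in range(K)]
Gamma_list = []
for _ in range(K):
    X = rng.standard_normal((nt, nt)) + 1j * rng.standard_normal((nt, nt))
    Gamma_list.append(X @ X.conj().T / nt)                            # arbitrary PSD covariances
print(dpc_rates(H_list, Gamma_list, pi=[0, 1, 2]))
print(dpc_rates(H_list, Gamma_list, pi=[2, 1, 0]))                    # rates change with the order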
Let H = [H1, . . . , HK]^T be the collection of K channel gain matrices in the MIMO-BC, and Γ = [Γ1, . . . , ΓK] be the collection of K input covariance matrices. We define the dirty paper region C_DPC(P, H) as the convex hull of the union of all such rate vectors over all positive semidefinite covariance matrices Γ1, . . . , ΓK satisfying Tr{∑_{i=1}^K Γi} ≤ P (the maximum transmit power constraint at the transmitter) and over all K! permutations:
C_DPC(P, H) ≜ Cov( ∪_{π,Γ} R_BC(π, Γ) ),
where Cov(·) represents the convex hull operation.
D. Problem Formulation
In this paper, we aim to solve the problem of jointly
optimizing DPC per-antenna power allocation at each node
in the link layer and multihop/multipath routing in a MIMO-
based WMN. Suppose that each node in the network has
been assigned a certain (possibly reused) frequency band that
will not cause interference to any other node in the network.
Also, the incoming and outgoing bands of each node are
non-overlapping such that each node can transmit and receive
simultaneously. How to perform channel assignments is a huge
research topic on its own merits, and there is a vast amount of literature discussing channel assignment problems. Thus,
in this paper, we focus on how to jointly optimize routing
in the network layer and the DPC power allocation in the
link layer for each node when a channel assignment is given.
We adopt the well-known proportional fairness utility function,
i.e., ln(sf ) for flow f . In CRPA, we wish to maximize the sum
of all utility functions. In the link layer, since the total transmit
power of each node is subject to a maximum power constraint,
we have ∑_{l∈O(n)} Tr{Γl} ≤ P_max^{(n)}, 1 ≤ n ≤ N, where P_max^{(n)} represents the maximum transmit power of node n. Since the total amount of flow in each link l cannot exceed its capacity limit, we must have ∑_{f=1}^F t_l^{(f)} ≤ R_l(Γ), 1 ≤ l ≤ L. This can
be further compactly written using matrix-vector notations as
〈1,TTel〉 ≤ Rl(Γ), 1 ≤ l ≤ L. Coupling the network layer
model in Section II-A and MIMO-BC link layer model in
Section II-C, we have the problem formulation for CRPA as
in (9).
CRPA:
Maximize   ∑_{f=1}^F ln(s_f)
subject to AT = S
           T ≥ 0
           S e_f =_{src(f),dst(f)} 0   ∀ f
           〈1, S e_f〉 = 0   ∀ f
           (S e_f)_{src(f)} = s_f   ∀ f
           〈1, T^T e_l〉 ≤ R_l(Γ)   ∀ l
           R_l(Γ) ∈ C_DPC^{(n)}(P_max^{(n)}, H^{(n)})   ∀ l ∈ O(n)
           ∑_{l∈O(n)} Tr{Γ_l} ≤ P_max^{(n)}   ∀ n
           Γ_l ⪰ 0   ∀ l
Variables: S, T, Γ                                        (9)
III. REFORMULATION OF CRPA
As we pointed out earlier, the DPC rate equation in (8) is
neither a concave nor a convex function of the input covariance
matrices. As a result, the cross-layer optimization problem in
(9) is a non-convex optimization problem, which is very hard
to solve numerically, let alone analytically. However, in the
following, we will show that (9) can be reformulated as an
equivalent convex optimization problem by projecting all the
MIMO-BC channels onto their dual MIMO multiple-access
channels (MIMO-MAC). We first provide some background
of Gaussian vector multiple access channels and the channel
duality between MIMO-BC and MIMO-MAC.
A. MIMO-MAC Channel Model
A communication system where multiple uncoordinated
transmitters send independent information to a single receiver
is referred to as a multiple access channel. If the channel
gain of each link in the multiple access channel is a matrix
and the noise is a Gaussian random vector, the channel is
termed “Gaussian vector multiple access channel”. Fig. 3
illustrates a K-user Gaussian vector multiple access chan-
nel, where independent messages W1, . . . ,WK , are encoded
by transmitters 1 to K , respectively, and the receiver is
trying to decode W1, . . . , WK. A (n, 2^{nR1}, . . . , 2^{nRK})_MAC codebook for a multiple access channel consists of encoding functions x_1^n(W1), . . . , x_K^n(WK), where Wi ∈ {1, . . . , 2^{nRi}}, i = 1, 2, . . . , K. The decoding functions at the receiver are Ŵi(y^n), i = 1, 2, . . . , K. An error occurs when Ŵi ≠ Wi. A rate vector R = [R1, . . . , RK]^T is said to be achievable if there exists a sequence of (n, 2^{nR1}, . . . , 2^{nRK})_MAC codebooks
for which the average probability of error Pe → 0 as the
code length n → ∞. The capacity region of a multiple
access channel, denoted by CMAC is defined as the union of
all achievable rate vectors [3]. We call the Gaussian vector
multiple access channel in the MIMO case as MIMO-MAC
throughout the rest of this paper.
Fig. 3. A Gaussian vector multiple access channel.
B. Duality between MIMO-BC and MIMO-MAC
The dual MIMO-MAC of a MIMO-BC can be constructed
by changing the receivers in the MIMO-BC into transmitters
and changing the transmitter in the MIMO-BC into the re-
ceiver. The channel gain matrices in dual MIMO-MAC are the
conjugate transpose of the channel gain matrices in MIMO-
BC. The maximum sum power in the dual MIMO-MAC is the
same maximum power level as in MIMO-BC. The relationship
between a MIMO-BC and its dual MIMO-MAC is illustrated
in Fig. 4. Similar to the MIMO-BC, we denote the capacity region
of the dual MIMO-MAC as CMAC(P,H†). The following
Lemma states the relationship between the capacity regions
of a MIMO-BC and its dual MIMO-MAC.
Fig. 4. The relationship between MIMO-BC and its dual MIMO-MAC.
Lemma 1: The DPC region of a MIMO-BC channel with
maximum power constraint P is equal to the capacity region
of the dual MIMO-MAC with sum power constraint P:
C_DPC(P, H) = C_MAC(P, H†).
Proof: This lemma can be established in various ways [8]–[10]. The most straightforward approach is to show
that any MIMO-BC achievable rate vector is also achievable
in its dual MIMO-MAC and vice versa. The MAC-to-BC and
BC-to-MAC mappings can be found in [8]. It is also shown
in [8] that any rate vector in a MIMO-BC with a particular
encoding order can be achieved in its dual MIMO-MAC with
the reversed successive decoding order.
C. Convexity of MIMO-MAC Capacity Region
From Lemma 1, we know that the capacity regions of a MIMO-BC and its dual MIMO-MAC are exactly the same.
Therefore, we can replace CDPC(·) in (9) by the capacity
regions of the dual MIMO-MAC channels CMAC(·). The
benefits of such replacements are due to the following theorem.
Theorem 1: The capacity region of a K-user MIMO-MAC
channel with a sum power constraint ∑_{i=1}^K Tr(Qi) ≤ Pmax is convex with respect to the input covariance matrices Q1, . . . , QK.
Proof: Denote the input signals of the K users by
x1, . . . ,xK , respectively, and denote the output of the MIMO-
MAC channel by y. Since ρi is a scalar, we absorb ρi into Hi
in this proof for notation convenience. Theorem 14.3.5 in [3]
states that the capacity region of a MIMO-MAC is determined by
C_MAC(Q1, . . . , QK) = Cov{ (R1, . . . , RK) : ∑_{i∈S} Ri(Q) ≤ I(xi, i ∈ S; y | xi, i ∈ Sc), ∀S ⊆ {1, . . . , K}; ∑_{i=1}^K Tr(Qi) ≤ Pmax },  (10)
where the mutual information expression I(; ) can be bounded
as follows:
I(xi, i ∈ S; y | xi, i ∈ Sc) ≤ log |I + ∑_{i∈S} Hi† Qi Hi|.  (11)
To show that the capacity region of the MIMO-MAC with
a sum power constraint is convex, it is equivalent to show
that the convex hull operation in (10) is unnecessary. To show
this, consider the convex combination of two arbitrarily cho-
sen achievable rate vectors [R1, . . . , RK ] and [R̂1, . . . , R̂K ]
determined by two feasible power vectors [Q1, . . . ,QK ] and
[Q̂1, . . . , Q̂K ], respectively, i.e., we have
∑_{i=1}^K Tr(Qi) ≤ Pmax and ∑_{i=1}^K Tr(Q̂i) ≤ Pmax. Let 0 ≤ α ≤ 1 and consider
the convex combination
[R̄1, . . . , R̄K ] = α[R1, . . . , RK ] + (1− α)[R̂1, . . . , R̂K ].
Also, let Q̄i = αQi + (1− α)Q̂i, i = 1, . . . ,K . It is easy to
verify that ∑_{i=1}^K Tr(Q̄i) ≤ Pmax, i.e., the convex combination
of two feasible power vectors is also feasible. Now, consider
α ∑_{i∈S} Ri + (1 − α) ∑_{i∈S} R̂i ≤ α log |I + ∑_{i∈S} Hi† Qi Hi| + (1 − α) log |I + ∑_{i∈S} Hi† Q̂i Hi|.
Since the function log |A| is a concave function for any
positive semidefinite matrix variable A [3], it follows from
Jensen’s inequality that
α ∑_{i∈S} Ri + (1 − α) ∑_{i∈S} R̂i ≤ log |I + ∑_{i∈S} Hi† Q̄i Hi|,
which means that the convex combination of rate vectors
[R1, . . . , RK ] and [R̂1, . . . , R̂K ] can also be achieved by using
the feasible power vector [Q̄1, . . . , Q̄K ] directly. As a result,
the convex hull operation is unnecessary.
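A quick numerical sanity check of the Jensen step above, written for this edit with randomly drawn channels and covariances (ρi is absorbed into Hi as in the proof, and S is taken to be the full user set {1, . . . , K}):

import numpy as np

rng = np.random.default_rng(2)
K, nt, nr = 3, 2, 2

def sum_rate(H_list, Q_list):
    """log|I + sum_i H_i^dagger Q_i H_i| for the dual MIMO-MAC (S = all users)."""
    M = np.eye(nt, dtype=complex)
    for H, Q in zip(H_list, Q_list):
        M = M + H.conj().T @ Q @ H
    return np.linalg.slogdet(M)[1]

def random_psd(n, trace):
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Q = X @ X.conj().T
    return Q * (trace / np.real(np.trace(Q)))

H_list = [rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt)) for _ in range(K)]
Pmax = 5.0
Q  = [random_psd(nr, Pmax / K) for _ in range(K)]   # feasible: total trace equals Pmax
Qh = [random_psd(nr, Pmax / K) for _ in range(K)]
alpha = 0.3
Qbar = [alpha * A + (1.0 - alpha) * B for A, B in zip(Q, Qh)]

lhs = alpha * sum_rate(H_list, Q) + (1.0 - alpha) * sum_rate(H_list, Qh)
rhs = sum_rate(H_list, Qbar)
print(lhs <= rhs + 1e-9)   # True: log|.| is concave, so the convex hull adds nothing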
D. Maximum Weighted Sum Rate Problem of the Dual MIMO-MAC
Now, we consider the maximum weighted sum rate problem
of the dual MIMO-MAC. We simplify this problem such
that we do not have to enumerate all possible successive
decoding order in the dual MIMO-MAC, thus paving the way
to efficiently solve the link layer subproblem we discuss in
Section IV.
Theorem 2: Associate with each rate Ri in the MIMO-MAC a non-negative weight ui, i = 1, . . . , K. The maximum weighted sum rate max ∑_{i=1}^K ui Ri(Q) can be found by solving the following convex optimization problem:
Maximize   ∑_{i=1}^K (u_{π(i)} − u_{π(i−1)}) log |I + ∑_{j=i}^K ρ_{π(j)} H_{π(j)}† Q_{π(j)} H_{π(j)}|
subject to ∑_{i=1}^K Tr(Qi) ≤ Pmax
           Qi ⪰ 0, i = 1, . . . , K,                      (12)
where u_{π(0)} ≜ 0, and π(i), i = 1, . . . , K, is a permutation on {1, . . . , K} such that u_{π(1)} ≤ . . . ≤ u_{π(K)}. In particular, suppose that (Q*_{π(1)}, . . . , Q*_{π(K)}) solves (12); then the optimal rates of (12) are given by
R*_{π(K)} = log |I + ρ_{π(K)} H_{π(K)}† Q*_{π(K)} H_{π(K)}|,  (13)
R*_{π(i)} = log |I + ∑_{j=i}^K ρ_{π(j)} H_{π(j)}† Q*_{π(j)} H_{π(j)}| − log |I + ∑_{j=i+1}^K ρ_{π(j)} H_{π(j)}† Q*_{π(j)} H_{π(j)}|,  (14)
for i = 1, 2, . . . , K − 1.
Proof: For convenience, we let
Φ(S) = log |I + ∑_{i∈S} ρ_{π(i)} H_{π(i)}† Q_{π(i)} H_{π(i)}|.
Since π(i) is simply a permutation on {1, . . . ,K}, from (10)
and (11) we have that the maximum weighted sum rate problem can be written as
Maximize   ∑_{i=1}^K u_{π(i)} R_{π(i)}
subject to ∑_{i∈S} R_{π(i)} ≤ Φ(S), ∀S ⊆ {1, . . . , K}.
Also from (10) and (11), it is easy to derive that R_{π(i)} ≤ Φ({π(i)}) = log |I + ρ_{π(i)} H_{π(i)}† Q_{π(i)} H_{π(i)}|. Since u_{π(1)} ≤
. . . ≤ u_{π(K)}, from the KKT conditions, the constraint R_{π(K)} ≤ Φ({π(K)}) must be tight at optimality. That is,
R_{π(K)} = log |I + ρ_{π(K)} H_{π(K)}† Q_{π(K)} H_{π(K)}|.  (15)
Again, from (10) and (11), we have
R_{π(K−1)} + R_{π(K)} ≤ log |I + ρ_{π(K)} H_{π(K)}† Q_{π(K)} H_{π(K)} + ρ_{π(K−1)} H_{π(K−1)}† Q_{π(K−1)} H_{π(K−1)}|,
and hence
R_{π(K−1)} ≤ log |I + ρ_{π(K)} H_{π(K)}† Q_{π(K)} H_{π(K)} + ρ_{π(K−1)} H_{π(K−1)}† Q_{π(K−1)} H_{π(K−1)}| − log |I + ρ_{π(K)} H_{π(K)}† Q_{π(K)} H_{π(K)}|.  (16)
Since u_{π(K−1)} is the second largest weight, again from the KKT conditions, (16) must be tight at optimality.
This process continues for all K users. Subsequently, we have
R_{π(i)} = log |I + ∑_{j=i}^K ρ_{π(j)} H_{π(j)}† Q_{π(j)} H_{π(j)}| − log |I + ∑_{j=i+1}^K ρ_{π(j)} H_{π(j)}† Q_{π(j)} H_{π(j)}|,  (17)
for i = 1, . . . ,K − 1. Summing up all uπ(i)Rπ(i) and after
rearranging the terms, it is readily verifiable that
∑_{i=1}^K u_{π(i)} R_{π(i)} = ∑_{i=1}^K (u_{π(i)} − u_{π(i−1)}) log |I + ∑_{j=i}^K ρ_{π(j)} H_{π(j)}† Q_{π(j)} H_{π(j)}|.  (18)
It then follows that the maximum weighted sum rate problem
of MIMO-MAC is equivalent to maximizing (18) subject to
the sum power constraint, i.e., the optimization problem in
(12). Since log |·| is a concave function of positive semidefinite matrices, maximizing (18) is a convex optimization problem with respect to Q_{π(1)}, . . . , Q_{π(K)}. After we obtain the optimal power solution (Q*_{π(1)}, . . . , Q*_{π(K)}), the corresponding link rates can be computed by simply following (15) and (17).
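The identity behind Theorem 2 can be verified numerically. The Python/NumPy sketch below (written for this edit; the covariance matrices are arbitrary feasible choices, not the optimizer of (12)) evaluates the objective of (12) and the rates in (15) and (17), and checks that their weighted sum matches the objective, as stated in (18):

import numpy as np

def logdet(M):
    """Natural-log determinant of a Hermitian positive definite matrix."""
    return np.linalg.slogdet(M)[1]

def mac_objective(u, H, Q, rho):
    """Objective of (12); returns the value and the sorting permutation pi."""
    K, nt = len(u), H[0].shape[1]
    pi = np.argsort(u)                                  # u_pi(1) <= ... <= u_pi(K)
    u_sorted = np.concatenate(([0.0], np.asarray(u)[pi]))
    obj = 0.0
    for i in range(K):
        M = np.eye(nt, dtype=complex)
        for j in range(i, K):
            k = pi[j]
            M = M + rho[k] * H[k].conj().T @ Q[k] @ H[k]
        obj += (u_sorted[i + 1] - u_sorted[i]) * logdet(M)
    return obj, pi

def mac_rates(H, Q, rho, pi):
    """Per-user rates recovered from (15) and (17)."""
    K, nt = len(pi), H[0].shape[1]
    def tail(i):                                        # log|I + sum_{j>=i} rho H^dag Q H|
        M = np.eye(nt, dtype=complex)
        for j in range(i, K):
            k = pi[j]
            M = M + rho[k] * H[k].conj().T @ Q[k] @ H[k]
        return logdet(M)
    R = np.zeros(K)
    for i in range(K):
        R[pi[i]] = tail(i) - tail(i + 1)
    return R

rng = np.random.default_rng(3)
K, nt, nr = 3, 2, 2
H = [rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt)) for _ in range(K)]
rho = np.ones(K)
Q = []
for _ in range(K):
    X = rng.standard_normal((nr, nr)) + 1j * rng.standard_normal((nr, nr))
    Q.append(X @ X.conj().T)                            # arbitrary PSD covariances
u = np.array([0.5, 2.0, 1.0])                           # nonnegative weights
obj, pi = mac_objective(u, H, Q, rho)
R = mac_rates(H, Q, rho, pi)
print(np.isclose(obj, np.dot(u, R)))                    # the weighted sum matches (18)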
E. Problem Reformulation
We now reformulate CRPA by replacing CDPC in (9) with
CMAC, and we denote the equivalent problem by CRPA-E.
After solving CRPA-E, we can recover the corresponding
MIMO-BC covariance matrices Γ∗ from the optimal solution
Q∗ of CRPA-E by the MAC-to-BC mapping provided in [8].
CRPA-E:
Maximize   ∑_{f=1}^F ln(s_f)
subject to AT = S
           T ≥ 0
           S e_f =_{src(f),dst(f)} 0   ∀ f
           〈1, S e_f〉 = 0   ∀ f
           (S e_f)_{src(f)} = s_f   ∀ f
           〈1, T^T e_l〉 ≤ R_l(Q)   ∀ l
           R_l(Q) ∈ C_MAC^{(n)}(P_max^{(n)}, H^{†(n)})   ∀ l ∈ O(n)
           ∑_{l∈O(n)} Tr{Q_l} ≤ P_max^{(n)}   ∀ n
           Q_l ⪰ 0   ∀ l
Variables: S, T, Q                                        (19)
IV. SOLUTION PROCEDURE
Since CRPA-E is a convex programming problem, we can
solve CRPA-E exactly by solving its Lagrangian dual problem.
Introducing Lagrangian multipliers u_l for the link capacity coupling constraints 〈1, T^T e_l〉 ≤ R_l(Q), we can write the Lagrangian dual function as
Θ(u) = sup_{S,T,Q} { L(S, T, Q, u) | (S, T, Q) ∈ Ψ },  (20)
where
L(S, T, Q, u) = ∑_{f=1}^F ln(s_f) + ∑_{l=1}^L u_l (R_l(Q) − 〈1, T^T e_l〉),
and Ψ is defined as
Ψ ≜ { (S, T, Q) :  AT = S;  T ≥ 0;  S e_f =_{src(f),dst(f)} 0 ∀ f;  〈1, S e_f〉 = 0 ∀ f;  (S e_f)_{src(f)} = s_f ∀ f;  ∑_{l∈O(n)} Tr{Q_l} ≤ P_max^{(n)} ∀ n;  Q_l ⪰ 0 ∀ l;  R_l(Q) ∈ C_MAC(P_max^{(n)}, H^{†(n)}), l ∈ O(n), ∀ n }.
The Lagrangian dual problem of CRPA-E can thus be written as
D_CRPA−E :  Minimize   Θ(u)
            subject to u ≥ 0.
It is easy to recognize that, for a given u, the Lagrangian in
(20) can be rearranged and separated into two terms:
Θ(u) = Θnet(u) + Θlink(u),
where, for a given Lagrangian multiplier u, Θnet and Θlink
correspond to the network layer and link layer variables, respectively:
CRPA-E_net:   Θ_net(u) ≜ Maximize  ∑_{f=1}^F ln(s_f) − ∑_{l=1}^L u_l 〈1, T^T e_l〉
              subject to AT = S
                         T ≥ 0
                         S e_f =_{src(f),dst(f)} 0   ∀ f
                         〈1, S e_f〉 = 0   ∀ f
                         (S e_f)_{src(f)} = s_f   ∀ f
              Variables: S, T

CRPA-E_link:  Θ_link(u) ≜ Maximize  ∑_{l=1}^L u_l R_l(Q)
              subject to ∑_{l∈O(n)} Tr{Q_l} ≤ P_max^{(n)}   ∀ n
                         Q_l ⪰ 0   ∀ l
                         R_l(Q) ∈ C_MAC(P_max^{(n)}, H^{†(n)}),  ∀ l ∈ O(n), n ∈ N
              Variables: Q
The CRPA-E Lagrangian dual problem can thus be written
as the following master dual problem:
D_CRPA−E :  Minimize   Θ_net(u) + Θ_link(u)
            subject to u ≥ 0.
Notice that Θlink(u) can be further decomposed on a node-
by-node basis as follows:
Θ_link(u) = max_Q ∑_{l=1}^L u_l R_l(Q) = ∑_{n=1}^N max_Q ∑_{l∈O(n)} u_l R_l(Q) = ∑_{n=1}^N Θ_link^{(n)}(u^{(n)}).  (21)
It is seen that Θ_link^{(n)}(u^{(n)}) ≜ max ∑_{l∈O(n)} u_l R_l(Q) is a maximum weighted sum rate problem of the dual MIMO-MAC for some given dual variables u^{(n)} as weights. Without loss of generality, suppose that node n has K outgoing links, which are indexed as 1, . . . , K and are associated with dual variables u_1, . . . , u_K, respectively. Let π(i) ∈ {1, . . . , K} be the permutation such that 0 ≤ u_{π(1)} ≤ . . . ≤ u_{π(K)} and define u_{π(0)} = 0. Θ_link^{(n)}(u^{(n)}) can be written as follows:
Maximize   ∑_{i=1}^K (u_{π(i)} − u_{π(i−1)}) log |I + ∑_{j=i}^K ρ_{π(j)} H_{π(j)}† Q_{π(j)} H_{π(j)}|
subject to ∑_{i=1}^K Tr(Qi) ≤ P_max^{(n)}
           Qi ⪰ 0, i = 1, . . . , K.                      (22)
Note that in the network layer subproblem Θnet(u), the
objective function is concave and all constraints are affine.
Therefore, Θnet(u) is readily solvable by many polynomial
time convex programming methods. However, even though Θ_link^{(n)}(u^{(n)}) is also a convex problem, generic convex programming methods are not efficient because the structures of its objective function and constraints are very complex. In the following subsections, we will discuss in detail how to solve Θ_link^{(n)}(u^{(n)}).
A. Conjugate Gradient Projection for Solving Θ_link^{(n)}(u^{(n)})
We propose an efficient algorithm based on conjugate gradi-
ent projection (CGP) to solve (22). CGP utilizes the important
and powerful concept of Hessian conjugacy to deflect the
gradient direction appropriately so as to achieve the superlinear
convergence rate [11], which is similar to that of the well-
known quasi-Newton methods (e.g., BFGS method). In each
iteration, CGP projects the conjugate gradient direction to find
an improving feasible direction. The framework of CGP for
solving (22) is shown in Algorithm 1.
Algorithm 1 Gradient Projection Method
Initialization:
Choose the initial conditions Q^{(0)} = [Q_1^{(0)}, Q_2^{(0)}, . . . , Q_K^{(0)}]^T. Let k = 0.
Main Loop:
1. Calculate the conjugate gradients G_i^{(k)}, i = 1, 2, . . . , K.
2. Choose an appropriate step size s_k. Let Q_i'^{(k)} = Q_i^{(k)} + s_k G_i^{(k)}, for i = 1, 2, . . . , K.
3. Let Q̄^{(k)} be the projection of Q'^{(k)} onto Ω_+(P_max^{(n)}).
4. Choose an appropriate step size α_k. Let Q_i^{(k+1)} = Q_i^{(k)} + α_k(Q̄_i^{(k)} − Q_i^{(k)}), i = 1, 2, . . . , K.
5. k = k + 1. If the maximum absolute value of the elements in Q_i^{(k)} − Q_i^{(k−1)} is less than ǫ for i = 1, 2, . . . , K, then stop; else go to step 1.
We adopt the “Armijo’s Rule” inexact line search method
to avoid excessive objective function evaluations, while still
enjoying provable convergence [11]. For convenience, we
use F (Q) to represent the objective function in (22), where
Q = (Q1, . . . ,QK) denotes the set of covariance matrices at
a node. According to Armijo’s Rule, in the kth iteration, we
choose s_k = 1 and α_k = β^{m_k} (the same as in [12]), where m_k is the first non-negative integer m that satisfies
F(Q^{(k+1)}) − F(Q^{(k)}) ≥ σ β^m 〈G^{(k)}, Q̄^{(k)} − Q^{(k)}〉,  (23)
where 0 < β < 1 and 0 < σ < 1 are fixed scalars.
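A minimal sketch of the backtracking loop implied by (23) is given below (written for this edit; the generic function arguments, the scalar toy objective, and the parameter values are illustrative assumptions rather than the paper's implementation):

import numpy as np

def armijo(F, Q, Q_bar, grad_inner, sigma=0.1, beta=0.5, m_max=30):
    """Armijo rule per (23): accept the smallest m such that
    F(Q + beta^m (Q_bar - Q)) - F(Q) >= sigma * beta^m * <G, Q_bar - Q>."""
    F0 = F(Q)
    for m in range(m_max):
        step = beta ** m
        Q_new = Q + step * (Q_bar - Q)
        if F(Q_new) - F0 >= sigma * step * grad_inner:
            return Q_new, step
    return Q, 0.0                                # no acceptable step within m_max trials

# Toy usage on a scalar concave objective F(q) = log(1 + q)
F = lambda q: np.log(1.0 + q)
q, q_bar = 1.0, 4.0                              # current point and its projected trial point
grad_inner = (1.0 / (1.0 + q)) * (q_bar - q)     # <gradient, Q_bar - Q> for this toy objective
print(armijo(F, q, q_bar, grad_inner))           # accepts the full step here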
B. Computing the Conjugate Gradients
The gradient Ḡ_{π(j)} ≜ ∇_{Q_{π(j)}} F(Q) depends on the partial derivatives of F(Q) with respect to Q_{π(j)}. By using the formula ∂ln|A + BXC|/∂X = [C(A + BXC)^{-1}B]^T [12], [13],
we can compute the partial derivative of the ith term in the
summation of F (Q) with respect to Qπ(j), j ≥ i, as follows:
∂/∂Q_{π(j)} [ (u_{π(i)} − u_{π(i−1)}) log |I + ∑_{k=i}^K ρ_{π(k)} H_{π(k)}† Q_{π(k)} H_{π(k)}| ]
   = ρ_{π(j)} (u_{π(i)} − u_{π(i−1)}) [ H_{π(j)} (I + ∑_{k=i}^K ρ_{π(k)} H_{π(k)}† Q_{π(k)} H_{π(k)})^{-1} H_{π(j)}† ]^T.
To compute the gradient of F (Q) with respect to Qπ(j), we
notice that only the first j terms in F (Q) involve Qπ(j). From
the definition ∇zf(z) = 2(∂f(z)/∂z)∗ [14], we have
Ḡ_{π(j)} = 2 ρ_{π(j)} H_{π(j)} [ ∑_{i=1}^{j} (u_{π(i)} − u_{π(i−1)}) (I + ∑_{k=i}^K ρ_{π(k)} H_{π(k)}† Q_{π(k)} H_{π(k)})^{-1} ] H_{π(j)}†.  (24)
Remark 1: It is important to point out that we can exploit
the special structure in (24) to significantly reduce the com-
putation complexity in the implementation of the algorithm.
Note that the most difficult part in computing Ḡ_{π(j)} is the summation of terms of the form H_{π(k)}† Q_{π(k)} H_{π(k)}. Without careful consideration, one may end up computing such additions j(2K + 1 − j)/2 times for Ḡ_{π(j)}. However, notice that when j varies, most of the terms in the summation are still the same. Thus, we can maintain a running sum for ∑_{k=i}^K ρ_{π(k)} H_{π(k)}† Q_{π(k)} H_{π(k)}, start out from j = K, and reduce j by one sequentially. As a result, only one new term
is added to the running sum in each iteration, which means
we only need to do the addition once in each iteration.
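The following Python/NumPy sketch (written for this edit; indices are 0-based and the channels, covariances, and weights are arbitrary examples) computes the gradients in (24) for all users at one node while maintaining the two running sums suggested by Remark 1, so that each new matrix sum is formed only once:

import numpy as np

def gradients(u, H, Q, rho):
    """Gradients per (24) for all K users at one node, using the running sums of Remark 1."""
    K = len(u)
    nt = H[0].shape[1]
    pi = np.argsort(u)                                    # u_pi(1) <= ... <= u_pi(K)
    du = np.diff(np.concatenate(([0.0], np.asarray(u)[pi])))
    # M[i] = I + sum_{k=i}^{K} rho_k H_k^dag Q_k H_k, built once from i = K down to 1
    M = [None] * K
    acc = np.eye(nt, dtype=complex)
    for i in range(K - 1, -1, -1):
        k = pi[i]
        acc = acc + rho[k] * H[k].conj().T @ Q[k] @ H[k]
        M[i] = acc
    # W_j = sum_{i<=j} du_i * M_i^{-1}: one new term per j, as noted in Remark 1
    G = [None] * K
    W = np.zeros((nt, nt), dtype=complex)
    for j in range(K):
        W = W + du[j] * np.linalg.inv(M[j])
        k = pi[j]
        G[k] = 2.0 * rho[k] * H[k] @ W @ H[k].conj().T
    return G

rng = np.random.default_rng(5)
K, nt, nr = 3, 2, 2
H = [rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt)) for _ in range(K)]
Q = [np.eye(nr, dtype=complex) for _ in range(K)]
u, rho = np.array([0.4, 1.2, 0.7]), np.ones(K)
for Gk in gradients(u, H, Q, rho):
    print(np.allclose(Gk, Gk.conj().T))                   # each gradient is Hermitian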
The conjugate gradient direction in the mth iteration can be computed as G^{(m)} = Ḡ^{(m)} + κ_m G^{(m−1)}. We adopt Fletcher and Reeves' choice of deflection [11], which can be computed as
κ_m = ‖Ḡ^{(m)}‖² / ‖Ḡ^{(m−1)}‖².  (25)
The purpose of deflecting the gradient using (25) is to find G^{(m)}, which is the Hessian-conjugate of G^{(m−1)}. By doing
so, we can eliminate the “zigzagging” phenomenon encoun-
tered in the conventional gradient projection method, and
achieve the superlinear convergence rate [11] without actually
storing a large Hessian approximation matrix as in quasi-
Newton methods.
C. Projection onto Ω_+(P_max^{(n)})
Noting from (24) that G_{π(j)} is Hermitian, we have that Q_{π(j)}^{(k)} + s_k G_{π(j)}^{(k)} is Hermitian as well. Then, the projection problem becomes how to simultaneously project |O(n)| Hermitian matrices onto the set
Ω_+(P_max^{(n)}) ≜ { Q_l, l ∈ O(n) : ∑_{l∈O(n)} Tr{Q_l} ≤ P_max^{(n)}, Q_l ⪰ 0 }.
This problem belongs to the class of “matrix nearness prob-
lems” [15], [16], which are not easy to solve in general.
However, by exploiting the special structure in Θ_link^{(n)}(u), we are able to design a polynomial-time algorithm.
We construct a block diagonal matrix D = Diag{Q_{π(1)}, . . . , Q_{π(K)}} ∈ C^{(K·nr)×(K·nr)}. It is easy to recognize that Q_{π(j)} ∈ Ω_+(P_max^{(n)}), j = 1, . . . , K, only if Tr(D) = ∑_{j=1}^K Tr(Q_{π(j)}) ≤ P_max^{(n)} and D ⪰ 0. We use
Frobenius norm, denoted by ‖ · ‖F , as the matrix distance
criterion. The distance between two matrices A and B is
defined as ‖A − B‖_F = [Tr((A − B)†(A − B))]^{1/2}. Thus,
given a block diagonal matrix D, we wish to find a matrix
D̃ ∈ Ω+(P (n)max) such that D̃ minimizes ‖D̃ − D‖F . For
more convenient algebraic manipulations, we instead study
the following equivalent optimization problem:
Minimize   (1/2)‖D̃ − D‖²_F
subject to Tr(D̃) ≤ P_max^{(n)}, D̃ ⪰ 0.                  (26)
In (26), the objective function is convex in D̃, the constraint D̃ ⪰ 0 represents the convex cone of positive semidefinite matrices, and the constraint Tr(D̃) ≤ P_max^{(n)} is a linear
constraint. Thus, the problem is a convex minimization prob-
lem and we can exactly solve this problem by solving its
Lagrangian dual problem. Associating Hermitian matrix Π to the constraint D̃ ⪰ 0 and µ to the constraint Tr(D̃) ≤ P_max^{(n)}, we can write the Lagrangian as g(Π, µ) = min_{D̃} {(1/2)‖D̃ − D‖²_F − Tr(Π†D̃) + µ(Tr(D̃) − P_max^{(n)})}. Since g(Π, µ) is
an unconstrained convex quadratic minimization problem, we
can compute the minimizer of the Lagrangian by simply
setting its first derivative (with respect to D̃) to zero, i.e.,
(D̃ − D) − Π† + µI = 0. Noting that Π† = Π, we have
D̃ = D − µI+Π. Substituting D̃ back into the Lagrangian,
we have
g(Π, µ) = −(1/2)‖D − µI + Π‖²_F − µ P_max^{(n)} + (1/2)‖D‖²_F.
Therefore, the Lagrangian dual problem can be written as
Maximize   −(1/2)‖D − µI + Π‖²_F − µ P_max^{(n)} + (1/2)‖D‖²_F
subject to Π ⪰ 0, µ ≥ 0.                                  (27)
After solving (27), we can have the optimal solution to (26) as
D̃∗ = D − µ∗I + Π∗,
where µ∗ and Π∗ are the optimal dual solutions to the Lagrangian dual problem in (27). We now consider the term D − µI + Π, which is the only term involving Π in the dual objective function. From the Moreau decomposition [17], we immediately have that, at the Π ⪰ 0 maximizing the dual objective for a given µ,
D − µI + Π = (D − µI)_+,
where the operation (A)+ means performing eigenvalue de-
composition on matrix A, keeping the eigenvector matrix
unchanged, setting all non-positive eigenvalues to zero, and
then multiplying back. Thus, the matrix variable Π in the
Lagrangian dual problem can be removed and the Lagrangian
dual problem can be rewritten as
Maximize   ψ(µ) ≜ −(1/2)‖(D − µI)_+‖²_F − µ P_max^{(n)}
subject to µ ≥ 0.                                         (28)
Suppose that after performing eigenvalue decomposition on D,
we have D = UΛU†, where Λ is the diagonal matrix formed
by the eigenvalues of D, U is the unitary matrix formed by
the corresponding eigenvectors. Since U is unitary, we have
(D − µI)_+ = U(Λ − µI)_+ U†.
It then follows that
‖(D − µI)_+‖_F = ‖(Λ − µI)_+‖_F.
We denote the eigenvalues in Λ by λi, i = 1, 2, . . . ,K · nr.
Suppose that we sort them in non-increasing order such that
Λ = Diag{λ1 λ2 . . . λK·nr}, where λ1 ≥ . . . ≥ λK·nr . It
then follows that
‖(Λ − µI)_+‖²_F = ∑_{j=1}^{K·nr} (max{0, λj − µ})².
So, we can rewrite ψ(µ) as
ψ(µ) = −(1/2) ∑_{j=1}^{K·nr} (max{0, λj − µ})² − µ P_max^{(n)}.  (29)
It is evident from (29) that ψ(µ) is continuous and (piece-wise)
concave in µ. Due to this special structure, we can search the
optimal value of µ as follows. Let Î index the pieces of ψ(µ),
Î = 0, 1, . . . ,K · nr. Initially we set Î = 0 and increase Î
subsequently. Also, we introduce λ0 = ∞ and λK·nr+1 =
−∞. We let the endpoint objective value ψ_Î(λ0) = 0, φ∗ = ψ_Î(λ0), and µ∗ = λ0. If Î > K · nr, the search stops. For a particular index Î, by setting
dψ_Î(µ)/dµ = d/dµ [ −(1/2) ∑_{i=1}^{Î} (λi − µ)² − µ P_max^{(n)} ] = 0,
we have
µ∗_Î = (∑_{i=1}^{Î} λi − P_max^{(n)}) / Î.
Now we consider the following two cases:
1) If µ∗_Î ∈ [λ_{Î+1}, λ_Î] ∩ R+, where R+ denotes the set of non-negative real numbers, then µ∗_Î is the optimal solution because ψ(µ) is concave in µ. Thus, the point having a zero-value first derivative, if it exists, must be the unique global maximum solution. Hence, we can let µ∗ = µ∗_Î and the search is done.
2) If µ∗_Î ∉ [λ_{Î+1}, λ_Î] ∩ R+, we must have that the local maximum in the interval [λ_{Î+1}, λ_Î] ∩ R+ is achieved at one of the two endpoints. Note that the objective value ψ_Î(λ_Î) has been computed in the previous iteration because, from the continuity of the objective function, we have ψ_Î(λ_Î) = ψ_{Î−1}(λ_Î). Thus, we only need to compute the other endpoint objective value ψ_Î(λ_{Î+1}). If ψ_Î(λ_{Î+1}) < φ∗, then we know µ∗ is the optimal solution; else let µ∗ = λ_{Î+1}, φ∗ = ψ_Î(λ_{Î+1}), Î = Î + 1 and continue.
Since there are K ·nr+1 intervals in total, the search process
takes at most K ·nr +1 steps to find the optimal solution µ∗.
Hence, this search is of polynomial-time complexity O(nrK).
After finding µ∗, we can compute D̃∗ as
D̃∗ = (D − µ∗I)_+ = U(Λ − µ∗I)_+ U†.  (30)
The projection of D onto Ω_+(P_max^{(n)}) is summarized in Algorithm 2.
Algorithm 2 Projection onto Ω_+(P_max^{(n)})
Initialization:
1. Construct a block diagonal matrix D. Perform eigenvalue decomposition D = UΛU†, sort the eigenvalues in non-increasing order.
2. Introduce λ0 = ∞ and λ_{K·nr+1} = −∞. Let Î = 0. Let the endpoint objective value ψ_Î(λ0) = 0, φ∗ = ψ_Î(λ0), and µ∗ = λ0.
Main Loop:
1. If Î > K·nr, go to the final step; else let µ∗_Î = (∑_{j=1}^{Î} λj − P_max^{(n)})/Î.
2. If µ∗_Î ∈ [λ_{Î+1}, λ_Î] ∩ R+, then let µ∗ = µ∗_Î and go to the final step.
3. Compute ψ_Î(λ_{Î+1}). If ψ_Î(λ_{Î+1}) < φ∗, then go to the final step; else let µ∗ = λ_{Î+1}, φ∗ = ψ_Î(λ_{Î+1}), Î = Î + 1 and continue.
Final Step: Compute D̃∗ as D̃∗ = U(Λ − µ∗I)_+ U†.
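The Python/NumPy sketch below (written for this edit) performs the same projection that Algorithm 2 computes: eigendecompose D, determine the water level µ∗ from the sorted eigenvalues, clip the shifted eigenvalues at zero, and rebuild D̃∗ as in (30). The closed-form search used here for µ∗ is an equivalent shortcut, not the paper's step-by-step piecewise search:

import numpy as np

def project_psd_trace(D, P):
    """Frobenius-norm projection of a Hermitian matrix D onto {X : X PSD, Tr(X) <= P},
    i.e., the solution (30): keep the eigenvectors, shift eigenvalues by mu*, clip at zero."""
    lam, U = np.linalg.eigh(D)                  # ascending eigenvalues
    lam, U = lam[::-1], U[:, ::-1]              # sort non-increasing, as in the text
    if np.maximum(lam, 0.0).sum() <= P:
        mu = 0.0                                # trace constraint inactive: just clip negatives
    else:                                       # otherwise solve sum_j max(0, lam_j - mu) = P
        css = np.cumsum(lam)
        idx = np.arange(1, lam.size + 1)
        mu_cand = (css - P) / idx
        I = np.nonzero(lam > mu_cand)[0].max() + 1
        mu = (css[I - 1] - P) / I
    lam_new = np.maximum(lam - mu, 0.0)
    return U @ np.diag(lam_new) @ U.conj().T

# Example: project a random Hermitian (block-diagonal in the real algorithm) matrix
rng = np.random.default_rng(4)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
D = (X + X.conj().T) / 2
D_proj = project_psd_trace(D, P=1.0)
print(np.real(np.trace(D_proj)), np.linalg.eigvalsh(D_proj).min() >= -1e-12)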
D. Solving the Master Dual Problem
1) Cutting-Plane Method for Solving Θ(u): The attractive features of the cutting-plane method are its robustness, speed of convergence, and simplicity in recovering primal feasible optimal solutions. The primal optimal feasible solution can be
exactly computed by averaging all the primal solutions (may or
may not be primal feasible) using the dual variables as weights
[11]. Letting z = Θ(u), the dual problem is equivalent to
Minimize   z
subject to z ≥ ∑_{f=1}^F ln(s_f) + ∑_{l=1}^L u_l (R_l(Q) − 〈1, T^T e_l〉),
           u ≥ 0,                                          (31)
where (S,T,Q) ∈ Ψ. Although (31) is a linear program with
infinite constraints not known explicitly, we can consider the
following approximating problem:
Minimize   z
subject to z ≥ ∑_{f=1}^F ln(s_f^{(j)}) + ∑_{l=1}^L u_l (R_l(Q^{(j)}) − 〈1, T^{(j)T} e_l〉),
           u ≥ 0,                                          (32)
where the points (S(j),T(j),Q(j)) ∈ Ψ, j = 1, . . . , k − 1.
The problem in (32) is a linear program with a finite number
of constraints and can be solved efficiently. Let (z(k),u(k))
be an optimal solution to the approximating problem, which
we refer to as the master program. If the solution is feasible
to (31), then it is an optimal solution to the Lagrangian dual
problem. To check the feasibility, we consider the following
subproblem:
Maximize   ∑_{f=1}^F ln(s_f) + ∑_{l=1}^L u_l^{(k)} (R_l(Q) − 〈1, T^T e_l〉)
subject to (S, T, Q) ∈ Ψ.                                  (33)
Suppose that (S(k),T(k),Q(k)) is an optimal solution to the
subproblem (33) and Θ∗(u(k)) is the corresponding optimal
objective value. If z(k) ≥ Θ∗(u(k)), then u(k) is an optimal solution to the Lagrangian dual problem. Otherwise, for u = u(k), the inequality constraint in (31) is not satisfied for (S(k),T(k),Q(k)). Thus, we can add the constraint
z ≥ ∑_{f=1}^F ln(s_f^{(k)}) + ∑_{l=1}^L u_l (R_l(Q^{(k)}) − 〈1, T^{(k)T} e_l〉)  (34)
to (32), and re-solve the master linear program. Obviously,
(z(k),u(k)) violates (34) and will be cut off by (34). The
cutting plane algorithm is summarized in Algorithm 3.
Algorithm 3 Cutting Plane Algorithm for Solving DCRPA
Initialization:
Find a point (S(0),T(0),Q(0)) ∈ Ψ. Let k = 1.
Main Loop:
1. Solve the master program in (32). Let (z(k),u(k)) be an optimal
solution.
2. Solve the subproblem in (33). Let (S(k),T(k),Q(k)) be an optimal
point, and let Θ∗(u(k)) be the corresponding optimal objective value.
3. If z(k) ≥ Θ(u(k)), then stop with u(k) as the optimal dual solution.
Otherwise, add the constraint (34) to the master program, replace k
by k + 1, and go to step 1.
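To make the master program (32) concrete, here is a small Python sketch (written for this edit) that solves it with scipy.optimize.linprog. The stored cuts are made-up numbers, and the large box bound on u is an extra safeguard added in this sketch to keep the early master LPs bounded; it is not part of the original algorithm:

import numpy as np
from scipy.optimize import linprog

def solve_master(cuts, L, u_box=1e3):
    """Master LP (32): minimize z subject to z >= a_j + u.b_j for every stored cut (a_j, b_j)
    and u >= 0.  Decision variables are x = [z, u_1, ..., u_L]."""
    c = np.zeros(1 + L)
    c[0] = 1.0                                          # minimize z
    A_ub, b_ub = [], []
    for a_j, b_j in cuts:
        # z >= a_j + u.b_j   <=>   -z + u.b_j <= -a_j
        A_ub.append(np.concatenate(([-1.0], b_j)))
        b_ub.append(-a_j)
    bounds = [(None, None)] + [(0.0, u_box)] * L        # z free, 0 <= u_l <= u_box
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds, method="highs")
    return res.x[0], res.x[1:]                          # (z, u)

# Two links and two hand-made cuts (a_j = sum_f ln s_f^(j), b_jl = R_l(Q^(j)) - carried traffic)
cuts = [(1.0, np.array([0.5, -0.2])), (0.8, np.array([-0.3, 0.4]))]
print(solve_master(cuts, L=2))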
2) Subgradient Algorithm for Solving Θ(u): Since the
Lagrangian dual objective function is piece-wise differentiable,
the subgradient method can also be applied. For Θ(u), starting with an initial u^{(1)} and after evaluating the subproblems Θ_net(u) and Θ_link(u) for u^{(k)} in the kth iteration, we update the dual variables by u^{(k+1)} = [u^{(k)} − λ_k d^{(k)}]_+, where the operator [·]_+ projects a vector onto the nonnegative orthant, λ_k denotes a positive scalar step size, and d^{(k)} is a subgradient of the Lagrangian dual function at the point u^{(k)}. It is proved in [11] that the subgradient algorithm converges if the step size λ_k satisfies λ_k → 0 as k → ∞ and ∑_{k=0}^∞ λ_k = ∞. A simple and useful step size selection strategy is the divergent harmonic series λ_k = β/k, which satisfies ∑_{k=1}^∞ β/k = ∞, where β is a constant. The subgradient for the Lagrangian dual problem can be computed as
d_l^{(k)} = R_l(Q∗(u)) − 〈1, T∗(u)^T e_l〉,  l = 1, 2, . . . , L.  (35)
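A minimal sketch of this dual update (written for this edit; the link rates and carried traffic below are made-up constants, whereas in the actual algorithm they come from re-solving the two subproblems at every iterate u^{(k)}):

import numpy as np

def subgradient_update(u, R, traffic, k, beta=0.1):
    """One dual update: u <- [u - lambda_k d]_+, with d_l = R_l(Q*(u)) - <1, T*(u)^T e_l>
    as in (35) and the harmonic step size lambda_k = beta / k."""
    d = R - traffic                              # subgradient at the current u
    return np.maximum(u - (beta / k) * d, 0.0)   # projection onto the nonnegative orthant

# Toy run on three links with fixed (made-up) rates and traffic
u = np.array([1.0, 1.0, 1.0])
R = np.array([3.0, 2.0, 5.0])                    # R_l(Q*(u)) from the link-layer subproblem
traffic = np.array([4.0, 1.0, 5.0])              # <1, T*(u)^T e_l> from the network layer
for k in range(1, 4):
    u = subgradient_update(u, R, traffic, k)
print(u)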
Specifically, the subgradient method has the following proper-
ties which make it possible to be implemented in a distributed
fashion:
1) Subgradient computation only requires local traffic in-
formation 〈1,TTel〉 and the available link capacity
information Rl(Q) at each link l. As a result, it can
be computed locally.
2) The choice of step size λ_k = β/k depends only upon the iteration index k, and does not require any other global knowledge. In conjunction with the first property, the dual variable, in the iterative form of u_l^{(k+1)} = [u_l^{(k)} − λ_k(∂Θ(u)/∂u_l)]_+, can also be computed locally.
3) The objective functions Θlink can be decomposed on a
node-by-node basis such that each node in the network
can perform the computation in parallel. Likewise, the
network layer subproblem Θnet can be decomposed on
a source-by-source basis such that each source node can
perform the routing computation locally after receiving
the dual variable information of each link in the network.
It is worth pointing out that care must be taken when recovering the primal feasible optimal solution in the subgradient
method. Generally, the primal variables in the dual optimal
solution are not primal feasible unless the dual optimal solu-
tion happens to be the saddle point. Fortunately, since CRPA-E
is convex, its primal feasible optimal solution can be exactly
computed by solving a linear programming problem (see [11] for further details). However, such a recovery approach cannot be implemented in a distributed fashion. In this paper, we adopt a variant of Shor's rule to recover the primal optimal feasible solution. Due to space limitation, we refer readers to [18] for more details.
Fig. 5. A 15-node network example.
Fig. 6. Convergence behavior of the cutting-plane method (Lagrangian dual upper bound and primal feasible solution versus the number of iterations).
V. NUMERICAL RESULTS
In this section, we present some numerical results through
simulations to provide further insights on solving CRPA.
N randomly-generated MIMO-enabled nodes are uniformly
distributed in a square region. Each node in the network is
equipped with two antennas. The maximum transmit power for
each node is set to Pmax = 10dBm. Each node in the network
is assigned a unit bandwidth. We illustrate a 15-node network
example, as shown in Fig. 5, to show the convergence process
of the cutting-plane and the subgradient methods for solving
DCRPA−E. In this example, there are three flows transmitting
across the network: N14 to N1, N6 to N10, and N5 to N4,
respectively.
A. Cutting-Plane Method
For the 15-node example in Fig. 5, the convergence process
for the cutting-plane method is illustrated in Fig. 6. The
optimal objective value for this 15-node example is 6.72. The
optimal flows for sessions N14 to N1, N6 to N10, and N5 to
N4 are 9.17 bps/Hz, 9.30 bps/Hz, and 9.93 bps/Hz, respectively. It can be observed that the cutting-plane algorithm is very efficient: it converges with approximately 160 cuts. As expected, the duality gap is zero because of the convexity of the transformed equivalent problem based on the dual MIMO-MAC.
Fig. 7. Convergence behavior of the subgradient method (Lagrangian dual upper bound and primal feasible solution versus the number of iterations).
B. Subgradient Method
For the 15-node example in Fig. 5, the convergence process
for the subgradient method is illustrated in Fig. 7. The step
size selection is λk = 0.1/k. The subgradient method also
achieves the same optimal solution and objective value when
it converges. However, it is seen that the subgradient algorithm
takes approximately 1600 iterations to converge, which is
much slower than the cutting-plane method. This is partially
due to the heuristic nature of the step size selection (the step cannot be too large or too small at each iteration). It is also partially due to the
cumbersomeness in recovering the primal feasible solution in
the subgradient method. In this example, the dual upper bound
takes approximately 1050 iterations to reach near the optimal.
However, the near-optimal primal feasible solution cannot be
identified until after 1500 iterations.
C. Comparison between BC and TDM
We now study how much performance gain we can get by
using Gaussian vector broadcast channel technique as opposed
to the conventional time-division (TDM) scheme. The cross-
layer optimization problem of MIMO-based mesh networks
over TDM scheme is also a convex problem. Thus, the
basic Lagrangian dual decomposition framework and gradient
projection technique for the link layer subproblem are still
applicable. The only difference is in the gradient computation,
which is simpler in TDM case. For the same 15-node network
with TDM, we plot the convergence process of the cutting-
plane algorithm in Fig. 8. In the TDM case, the optimal objective value is 5.01. For this example, we obtain a 34.4% improvement by using DPC.
VI. RELATED WORK
Despite significant research progress in using MIMO for
single-user communications, research on multi-user multi-hop
Fig. 8. The convergence behavior in the TDM case (Lagrangian dual upper bound and primal feasible solution versus the number of iterations).
MIMO networks is still in its inception stage. There are many
open problems, and many areas are still poorly understood
[19]. Currently, the relatively well-studied multi-user MIMO systems are cellular systems, which are single-hop and infrastructure-based. For multi-hop MIMO-
based mesh networks, research results remain limited. In [20],
Hu and Zhang studied the problem of joint medium access
control and routing, with a consideration of optimal hop
distance to minimize end-to-end delay. In [21], Sundaresan and
Sivakumar used simulations to study various characteristics
and tradeoffs (multiplexing gain vs. diversity gain) of MIMO
links that can be leveraged by routing layer protocols in rich
multipath environments to improve performance. In [22], Lee
et al. proposed a distributed algorithm for MIMO-based multi-
hop ad hoc networks, in which diversity and multiplexing
gains of each link are controlled to achieve the optimal rate-
reliability tradeoff. The optimization problem assumes fixed
SINRs and fixed routes between source and destination nodes.
However, in these works, there is no explicit consideration of
per-antenna power allocation and their impact on upper layers.
Moreover, DPC in cross-layer design has never been studied
either.
VII. CONCLUSIONS
In this paper, we investigated the cross-layer optimization
of DPC per-antenna power allocation and multi-hop multi-
path routing for MIMO-based wireless mesh networks. Our
contributions are three-fold. First, this paper is the first work
that studies the impacts of applying dirty paper coding, which
is the optimal transmission scheme for MIMO broadcast
channels (MIMO-BC), to the cross-layer design for MIMO-
based wireless mesh networks. We showed that the network
performance has dramatic improvements compared to that
of the conventional time-division/frequency division schemes.
Second, we solved the challenging non-convex cross-layer optimization problem by exploiting the channel duality between MIMO-MAC and MIMO-BC, and we showed that the transformed problem under the dual MIMO-MAC is convex. We
simplified the maximum weighted sum rate problem, thus
paving the way for solving the link layer subproblem in
the Lagrangian dual decomposition. Last, for the transformed
problem, we developed an efficient solution procedure that
integrates Lagrangian dual decomposition, conjugate gradient
projection based on matrix differential calculus, cutting-plane,
and subgradient methods. Our results substantiate the impor-
tance of cross-layer optimization for MIMO-based wireless
mesh networks with Gaussian vector broadcast channels.
REFERENCES
[1] I. E. Telatar, “Capacity of multi-antenna Gaussian channels,” European
Trans. Telecomm., vol. 10, no. 6, pp. 585–596, Nov. 1999.
[2] G. J. Foschini and M. J. Gans, “On limits of wireless communications in
a fading environment when using multiple antennas,” Wireless Personal
Commun., vol. 6, pp. 311–355, Mar. 1998.
[3] T. M. Cover and J. A. Thomas, Elements of Information Theory. New
York-Chichester-Brisbane-Toronto-Singapore: John Wiley & Sons, Inc.,
1991.
[4] T. M. Cover, “Broadcast channels,” IEEE Trans. Inf. Theory, vol. 18,
no. 1, pp. 2–14, Jan. 1972.
[5] M. Costa, “Writing on dirty paper,” IEEE Trans. Inf. Theory, vol. 29,
no. 3, pp. 439–441, May 1983.
[6] H. Weingarten, Y. Steinberg, and S. Shamai (Shitz), “The capacity region
of the Gaussian multiple-input multiple-output broadcast channel,” IEEE
Trans. Inf. Theory, vol. 52, no. 9, pp. 3936–3964, Sep. 2006.
[7] M. S. Bazaraa, J. J. Jarvis, and H. D. Sherali, Linear Programming and
Network Flows. New York-Chichester-Brisbane-Toronto-Singapore:
John Wiley & Sons Inc., 1990.
[8] S. Vishwanath, N. Jindal, and A. Goldsmith, “Duality, achievable rates,
and sum-rate capacity of MIMO broadcast channels,” IEEE Trans. Inf.
Theory, vol. 49, no. 10, pp. 2658–2668, Oct. 2003.
[9] P. Viswanath and D. N. C. Tse, “Sum capacity of the vector Gaussian
broadcast channel and uplink-downlink duality,” IEEE Trans. Inf. The-
ory, vol. 49, no. 8, pp. 1912–1921, Aug. 2003.
[10] W. Yu, “Uplink-downlink duality via minimax duality,” IEEE Trans. Inf.
Theory, vol. 52, no. 2, pp. 361–374, Feb. 2006.
[11] M. S. Bazaraa, H. D. Sherali, and C. M. Shetty, Nonlinear Programming:
Theory and Algorithms, 3rd ed. New York, NY: John Wiley & Sons
Inc., 2006.
[12] S. Ye and R. S. Blum, “Optimized signaling for MIMO interference
systems with feedback,” IEEE Trans. Signal Process., vol. 51, no. 11,
pp. 2839–2848, Nov. 2003.
[13] J. R. Magnus and H. Neudecker, Matrix Differential Calculus with
Applications in Statistics and Economics. New York: Wiley, 1999.
[14] S. Haykin, Adaptive Filter Theory. Englewood Cliffs, NJ: Prentice-Hall,
1996.
[15] S. Boyd and L. Xiao, “Least-squares covariance matrix adjustment,”
SIAM Journal on Matrix Analysis and Applications, vol. 27, no. 2, pp.
532–546, Nov. 2005.
[16] J. Malick, “A dual approach to semidefinite least-squares problems,”
SIAM Journal on Matrix Analysis and Applications, vol. 26, no. 1, pp.
272–284, Sep. 2005.
[17] J.-B. Hiriart-Urruty and C. Lemaréchal, Fundamentals of Convex Anal-
ysis. Berlin: Springer-Verlag, 2001.
[18] H. D. Sherali and G. Choi, “Recovery of primal solutions when using
subgradient optimization methods to solve Lagrangian duals of linear
programs,” Operations Research Letters, vol. 19, no. 3, pp. 105–113,
Sep. 1996.
[19] A. Goldsmith, S. A. Jafar, N. Jindal, and S. Vishwanath, “Capacity limits
of MIMO channels,” IEEE J. Sel. Areas Commun., vol. 21, no. 1, pp.
684–702, Jun. 2003.
[20] M. Hu and J. Zhang, “MIMO ad hoc networks: Medium access control,
saturation throughput, and optimal hop distance,” Special Issue on
Mobile Ad Hoc Networks, Journal of Communications and Networks,
pp. 317–330, Dec. 2004.
[21] K. Sundaresan and R. Sivakumar, “Routing in ad hoc networks with
MIMO links,” in Proc. IEEE ICNP, Boston, MA, U.S.A., Nov. 2005,
pp. 85–98.
[22] J.-W. Lee, M. Chiang, and A. R. Calderbank, “Price-based distributed
algorithms for rate-reliability tradeoff in network utility maximization,”
IEEE J. Sel. Areas Commun., vol. 24, no. 5, pp. 962–976, May 2006.
|
0704.0968 | Criteria in the Selection of Target Events for Planetary Microlensing
Follow-Up Observation | DRAFT VERSION OCTOBER 31, 2018
Preprint typeset using LATEX style emulateapj v. 08/22/09
CRITERIA IN THE SELECTION OF TARGET EVENTS FOR PLANETARY MICROLENSING FOLLOW-UP
OBSERVATIONS
CHEONGHO HAN
Program of Brain Korea 21, Institute for Basic Science Research, Department of Physics,
Chungbuk National University, Chongju 361-763, Korea; [email protected]
ABSTRACT
To provide criteria in the selection of target events preferable for planetary lensing follow-up observations,
we investigate the variation of the probability of detecting planetary signals depending on the observables
of the lensing magnification and source brightness. In estimating the probability, we consider variation of
the photometric precision by using a quantity defined as the ratio of the fractional deviation of the planetary
perturbation to the photometric precision. From this investigation, we find, in agreement with previous
studies, that the probability increases with increasing magnification. The increase rate is boosted at a
certain magnification at which perturbations caused by the central caustic begin to occur. We find this boost occurs
at moderate magnifications of A ≲ 20, implying that the probability can be high even for events with moderate
magnifications. The probability increases as the source brightness increases. We find that the probability of
events associated with stars brighter than clump giants is not negligible even at magnifications as low as A ∼ 5.
In the absence of the rare prime targets of very high-magnification events, we therefore recommend observing the
events with the brightest source stars and highest magnifications among the alerted events. Due to the increase of
the source size with increasing brightness, however, the probability rapidly drops off beyond a certain
magnification, making detections of low mass ratio planets (q ≲ 10⁻⁴) difficult from observations of events
involving giant stars with magnifications A ≳ 70.
Subject headings: gravitational lensing – planets and satellites: general
1. INTRODUCTION
With the advantages of being able to detect very low-mass
planets and those with separations from host stars that can-
not be covered by other methods, microlensing is one of
the most important methods that can detect and character-
ize extrasolar planets (Mao & Paczyński 1991; Gould & Loeb
1992). The microlensing planetary signal is a short duration
perturbation to the standard lensing light curve produced by
the primary star. To achieve high monitoring frequency re-
quired for the detection of the short-lived planetary signal,
current lensing experiments employ early-warning systems
to issue alerts of ongoing events in the early stage of
lensing magnification (Udalski et al. 1994; Bond et al. 2002)
and follow-up observations to intensively monitor the alerted
events (Dominik et al. 2002; Yoo et al. 2004). Under current
surveys, there exist on average ≳ 50 alerted events at any given
time (Dominik et al. 2002). Then, an important issue related
to the follow-up observation is which event should be moni-
tored for better chance of planet detections.
There have been several estimates of microlensing planet
detection efficiencies (Bolatto & Falco 1994; Bennett & Rhie
1996; Gaudi & Sackett 2000; Peale 2001). Most of these
works estimated the efficiency as a function of the instanta-
neous angular star-planet separation normalized by the angu-
lar Einstein radius, s, and planet/star mass ratio, q. However,
the efficiency determined in this way is of little use from the point
of view of observers who are actually carrying out follow-
up observations of lensing events. This is because the planet
parameters s and q are not known in the middle of lensing
magnification and thus they cannot be used as criteria in the
selection of target events for follow-up observations. Related
to the target selection, Griest & Safizadeh (1998) proposed a
useful criterion to observers. They pointed out that by focus-
ing on very high-magnification (A ≳ 100) events, the proba-
bility of detecting planets in the lensing zone could be very
high. However, these events are rare and thus they cannot
be usually found in the list of alerted events. Therefore, it is
necessary to have criteria applicable to general lensing events
in the absence of very high-magnification events. To provide
such criteria, we investigate the dependency of the probability
of detecting planetary signals on the observables such as the
lensing magnification and source type.
The paper is organized as follows. In § 2, we briefly de-
scribe the basics of planetary microlensing. In § 3, we investi-
gate the variation of the probability of detecting planetary sig-
nals depending on the lensing magnification and source type
for events caused by planetary systems with different masses
and separations. We analyze the result and qualitatively ex-
plain the tendencies found from the investigation. Based on
the result of the investigation, we then present criteria for the
selection of target events preferable for follow-up observa-
tions. In § 4, we summarize the results and conclude.
2. BASICS OF PLANETARY LENSING
The lensing behavior of a planetary lens system is described
by the formalism of a binary lens with a very low-mass com-
panion. Because of the very small mass ratio, planetary a lens-
ing light curve is well described by that of a single lens of the
primary star for most of the event duration. However, a short-
duration perturbation can occur when the source star passes
the region around the caustics, which are the set of source po-
sitions at which the magnification of a point source becomes
infinite. The caustics of binary lensing form a single or multi-
ple sets of closed curves, each of which is composed of
concave curves (fold caustics) that meet at points (cusps).
For a planetary case, there exist two sets of disconnected
caustics: ‘central’ and ‘planetary’ caustics. The single central
caustic is located close to the host star. It has a wedge shape
with four cusps and its size (width along the star-planet axis)
is related to the planet parameters by (Chung et al. 2005)
∆ξcc ∝ q/(s − 1/s)².  (1)
For a given mass ratio, a pair of central caustics with sepa-
rations s and s−1 are identical to the first order of approxima-
tion (Dominik 1999; Griest & Safizadeh 1998; An 2005). The
planetary caustic is located away from the host star. The cen-
ter of the planetary caustic is located on the star-planet axis
and the position vector to the center of the planetary caustic
measured from the primary lens position is related to the lens-
source separation vector, s, by
rpc = s (1 − 1/s²).  (2)
Then, the planetary caustic is located on the planet side, i.e.
sign(rpc) = sign(s), when s > 1, and on the opposite side, i.e.
sign(rpc) = −sign(s), when s < 1. When s > 1, there exists a
single planetary caustic and it has a diamond shape with four
cusps. When s< 1, there are two caustics and each caustic has
a triangular shape with three cusps. The size of the planetary
caustic is related to the planet parameters by
∆ξpc ∝ q^{1/2}/(s√(s² − 1))  for s > 1,
∆ξpc ∝ q^{1/2}(κ₀ − 1/κ₀ + κ₀/s²) cos θ₀  for s < 1,   (3)
where κ(θ) = [cos 2θ ± (s⁴ − sin² 2θ)^{1/2}]/(s² − 1/s²), θ₀ =
[π ± sin⁻¹(3^{1/2} s²/2)]/2, and κ₀ = κ(θ₀) (Han 2006). The plan-
etary caustic is always bigger than the central caustic and the
size ratio between the two types of caustics, ∆ξcc/∆ξpc, be-
comes smaller as the mass of the planet becomes smaller and
the planet is located further away from the Einstein ring. The
planetary caustic is located within the Einstein ring of the pri-
mary when the planet is located in the range of separation
from the star of 0.6 ≲ s ≲ 1.6. The size of the caustic, which
is directly proportional to the planet detection efficiency, is
maximized when the planet is located in this range, and thus
this range is called the ‘lensing zone’. As the position of
the planet approaches the Einstein ring radius, s → 1, the
location of the planetary caustic approaches the position of
the central caustic. Then, the two types of caustic eventually
merge together, forming a single large one.
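To make these scalings concrete, the short Python sketch below evaluates the planetary-caustic position of equation (2) and the central-caustic size scaling of equation (1) for a few separations in the lensing zone. It is an illustrative helper written for this text, not code from the paper, and the omitted proportionality constants are treated as unity.

```python
def central_caustic_scale(s, q):
    """Relative central-caustic size, Delta_xi_cc ~ q / (s - 1/s)**2 (eq. 1).
    The overall proportionality constant is omitted."""
    return q / (s - 1.0 / s) ** 2

def planetary_caustic_position(s):
    """Center of the planetary caustic along the star-planet axis, in units of
    the Einstein radius, measured from the primary: r_pc = s (1 - 1/s**2) (eq. 2)."""
    return s * (1.0 - 1.0 / s ** 2)

q = 1.0e-3  # planet/star mass ratio
for s in (1 / 1.6, 1 / 1.4, 1 / 1.2, 1.2, 1.4, 1.6):  # lensing-zone separations
    print(f"s = {s:5.3f}:  r_pc = {planetary_caustic_position(s):+6.3f}, "
          f"central-caustic scale = {central_caustic_scale(s, q):.2e}")
```

Note that the printed central-caustic scale is identical for s and 1/s, and that r_pc changes sign across s = 1, in line with the statements above.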
3. VARIATION OF DETECTABILITY
3.1. Quantification of Detectability
The quantity that has often been used in previous es-
timations of the planet detection probability is the ‘fractional
deviation’ of the planetary lensing light curve from that of the
single lensing event of the primary, i.e.,
ǫ = (A − A0)/A0.  (4)
With this quantity, however, one cannot consider the variation
of the photometric precision depending on the lensing magni-
fication. In addition, it is difficult to consider the variation of
the detectability depending on the source type.
To consider the effect of source star brightness and its
lensing-induced variation on the planet detection probability,
we carry out our analysis based on a new quantity defined as
the ratio of the fractional deviation, ǫ, to the photometric pre-
cision, σ, i.e.,
D = |ǫ|/σ = |ǫ| (A − 1)Fν,S/(Fν,S + Fν,B)^{1/2},  (5)
where Fν,S and Fν,B represent the fluxes from the source star
and blended background stars, respectively. Here we as-
sume that photometry is carried out by using the difference
imaging method (Tomaney & Crotts 1996; Alard 1999). In
this technique, photometry of the lensed source star is con-
ducted on the subtracted image obtained by convolving two
images taken at different times after geometrically and pho-
tometrically aligning them. Then the signal from the lensed
star measured on the subtracted image is the flux variation
of the lensed source star, (A − 1)Fν,S, while the noise orig-
inates from both the source and background blended stars,
Fν,S + Fν,B. Under this definition of the planetary signal de-
tectability, D = 1 implies that the planetary signal is equivalent
to the photometric precision. Hereafter we refer to the quantity
D as the ‘detectability’.
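As a concrete illustration of the detectability defined in equation (5), the sketch below evaluates D for assumed photon counts. The flux normalization (photons collected per exposure at a given magnitude) follows the observational numbers quoted later in § 3.2, but the function itself is an illustrative reconstruction rather than the paper's code.

```python
import numpy as np

def flux_counts(mag, mag_ref=20.0, rate_ref=10.0, t_exp=300.0):
    """Photon counts in an exposure of t_exp seconds, assuming a reference
    acquisition rate of rate_ref photons/s at magnitude mag_ref
    (illustrative values, taken from Sec. 3.2)."""
    return rate_ref * t_exp * 10.0 ** (-0.4 * (mag - mag_ref))

def detectability(eps, A, F_S, F_B):
    """D = |eps| * (A - 1) F_S / sqrt(F_S + F_B): the planetary fractional
    deviation divided by the difference-imaging photometric precision (eq. 5)."""
    return np.abs(eps) * (A - 1.0) * F_S / np.sqrt(F_S + F_B)

# Example: a 2% planetary deviation on a clump-giant source (I = 17)
# blended with an I = 20 background, observed at magnification A = 10.
F_S = flux_counts(17.0)
F_B = flux_counts(20.0)
print(detectability(eps=0.02, A=10.0, F_S=F_S, F_B=F_B))
```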
3.2. Contour Maps of Detectability
To see the variation of the detectability depending on the
separation parameter s, mass ratio q, and the types of involved
source star, we construct maps of detectability as a function
of the position in the source plane. Figure 1 shows example
maps. The individual sets of panels show the maps for events
associated with different types of source stars. All lengths
are normalized by the angular Einstein radius and ξ and η
represent the coordinates parallel with and normal to the star-
planet axis, respectively. Contours (yellow curves) are drawn
at the level of D = 3.0. The maps are centered at the position
of the primary lens star and the planet is located on the left.
The dotted arc in each panel represents the Einstein ring of the
primary star. The closed figures drawn by red curves represent
the caustics.
For the construction of the maps, we assume a mass of
the primary lens star of m = 0.3 M⊙ and distances to the
lens and source of DL = 6 kpc and DS = 8 kpc, respec-
tively. Then, the corresponding Einstein radius is rE =
[(4Gm/c²) DL(DS − DL)/DS]^{1/2} = 1.9 AU. For the source
stars, we test three different types of giant, clump giant, and
main-sequence stars. The assumed I-band absolute magni-
tudes of the individual types of stars are MI = 0.0, 1.5, and
3.6, respectively. With the assumed amount of extinction to-
ward the Galactic bulge field of AI = 1.0, these correspond
to the apparent magnitudes of I = 15.5, 17, and 19.1, respec-
tively. As the source type changes, not only the brightness
but also the size of the star changes. Source size affects the
planetary signal in lensing light curves (Bennett & Rhie 1996)
and thus we take the finite source effect into consid-
eration. The assumed source radii of the individual types of
source stars are 10.0 R⊙, 3.0 R⊙, and 1.1 R⊙, respectively.
We assume that events are affected by blended flux equivalent
to that of a star with I = 20. We note that the adopted lens
and source parameters are the typical values of Galactic bulge
events that are being detected by the current lensing surveys
(Han & Gould 2003).
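The adopted numbers above can be reproduced with a few lines of Python; this is only a convenience check with rounded physical constants, not part of the paper's analysis.

```python
import numpy as np

G = 6.674e-11       # m^3 kg^-1 s^-2
C = 2.998e8         # m s^-1
M_SUN = 1.989e30    # kg
PC = 3.086e16       # m
AU = 1.496e11       # m

def einstein_radius_au(m_solar, D_L_kpc, D_S_kpc):
    """Physical Einstein radius r_E = sqrt(4Gm/c^2 * D_L (D_S - D_L) / D_S)."""
    D_L, D_S = D_L_kpc * 1e3 * PC, D_S_kpc * 1e3 * PC
    r_e = np.sqrt(4.0 * G * m_solar * M_SUN / C ** 2 * D_L * (D_S - D_L) / D_S)
    return r_e / AU

def apparent_mag(M_abs, D_S_kpc, extinction):
    """I-band apparent magnitude for a source at D_S with extinction A_I."""
    return M_abs + 5.0 * np.log10(D_S_kpc * 1e3 / 10.0) + extinction

print(einstein_radius_au(0.3, 6.0, 8.0))        # ~1.9 AU
for M_I in (0.0, 1.5, 3.6):                     # giant, clump giant, main sequence
    print(apparent_mag(M_I, 8.0, 1.0))          # ~15.5, 17.0, 19.1
```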
For the observational condition, we assume that images are
obtained by using 1 m telescopes, which are typical ones be-
ing used in current follow-up observations. We also assume
that the photon acquisition rate of each telescope is 10 pho-
tons per second for an I = 20 star and a combined image with
FIG. 1.— Contour maps of the detectability of the planetary signal, D, as a function of the position in the source plane for events caused by planetary systems
with various lens-source separations and mass ratios. The detectability represents the ratio of the fractional deviation of the planetary lensing light curve from
the single lensing light curve of the primary to the photometric precision. All lengths are normalized by the angular Einstein radius and ξ and η represent the
coordinates parallel with and normal to the star-planet axis, respectively. The individual sets of panels show the maps for events associated with different types
of source stars. Contours (yellow curve) are drawn at the level of D = 3.0. The maps are centered at the position of the primary lens star and the planet is located
on the left. The dotted arc in each panel represents the Einstein ring of the primary star. The closed figures drawn by red curves represent the caustics. For the
details about the assumed lens parameters and observational conditions, see § 3.2.
FIG. 2.— Geometric representation of the probability of detecting planetary
signals, P. Under the definition of P as the average probability of detecting
planetary signals with a detectability greater than a threshold value Dth at
the time of observation with a magnification A, the probability corresponds
to the portion of the arclet(s) where the detectability is greater than a threshold
value out of a circle around the primary with a radius equal to the lens-source
separation corresponding to the magnification at the time of observation. The
individual circles in the upper panel correspond to the source positions at
which the lensing magnifications are A = 1.5 (pink), 3.0 (cyan), 5.0 (green),
and 10.0 (red), respectively. The curves in the bottom panels show the vari-
ation of the detectability as a function of the position angle (θ) of points on
the circles with corresponding colors in the upper panel. We set the thresh-
old detectability as Dth = 3.0, i.e. 3σ detection of the planetary signal. The
dashed circle represents the Einstein ring.
a total exposure time of 5 minutes is obtained from each set
of observations.
3.3. Probability of Detecting Planetary Signals
Based on the maps of detectability, we then investigate the
probability of detecting planetary signals as a function of the
lensing magnification. We define the probability P as the av-
erage probability of detecting planetary signals with a de-
tectability greater than a threshold value Dth at the time of
observation with a magnification A. Geometrically, this prob-
ability corresponds to the portion of the arclet(s) where the
detectability is greater than a threshold value out of a circle
FIG. 3.— Probability of detecting planetary signals as a function of lens-
ing magnification. The individual panels show the probabilities for events
involved with different types of source stars. The curves in each panel show
the variation of the probability for planets with different mass ratios and sep-
arations. We note that although not presented, the probabilities for planets
with separations s < 1 are similar to those of the corresponding planets with
s−1. The probability is defined as the average probability of detecting plane-
tary signals with a detectability greater than a threshold value Dth at the time
of observation with a magnification A. We set the threshold detectability as
Dth = 3.0, i.e. 3σ detection of the planetary signal. We note that there is a
maximum magnification specific to the angular size of the source star and
thus the curves stop at certain magnifications.
around the primary with a radius equal to the lens-source sep-
aration corresponding to the magnification at the time of ob-
servation. This is illustrated in Figure 2. We note that the
magnification is a unique function of the absolute value of the
lens-source separation u1, and thus A = const corresponds to
a circle around the lens. The lens-source separation is related
to the magnification by
u(A) = [2/(1 − A⁻²)^{1/2} − 2]^{1/2}.  (6)
We set the threshold detectability as Dth = 3.0, i.e. 3σ detec-
tion of the planetary signal.
1 Strictly speaking, the magnification depends additionally on the size of
the source star.
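This geometric definition translates directly into a simple numerical recipe: convert the magnification into the lens-source separation u(A) of equation (6), sample position angles around the corresponding circle, and report the fraction of angles at which the detectability exceeds the threshold. The sketch below outlines this; the detectability function is supplied by the caller (for example interpolated from maps such as those of Fig. 1), and the code is a schematic illustration, not the implementation used for the paper.

```python
import numpy as np

def u_of_A(A):
    """Lens-source separation (in Einstein radii) for a point-source
    magnification A, the inverse of the standard single-lens relation (eq. 6)."""
    return np.sqrt(2.0 / np.sqrt(1.0 - A ** -2) - 2.0)

def detection_probability(A, detectability_map, D_th=3.0, n_angles=3600):
    """Fraction of the circle of radius u(A) around the primary on which the
    detectability exceeds D_th (the arclet fraction illustrated in Fig. 2).

    `detectability_map(xi, eta)` must return D at source-plane position
    (xi, eta); here it is a user-supplied (e.g. interpolated) function."""
    u = u_of_A(A)
    theta = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    xi, eta = u * np.cos(theta), u * np.sin(theta)
    D = np.array([detectability_map(x, y) for x, y in zip(xi, eta)])
    return np.mean(D > D_th)
```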
TABLE 1
LIMITATION BY FINITE-SOURCE EFFECT
source type      event type
giant            A ≳ 70 for planets with q ≲ 10⁻³
clump giant      A ≳ 200 for planets with q ≲ 5 × 10⁻⁴
main-sequence    A ≳ 500 for planets with q ≲ 10⁻⁴
NOTE. — Cases of planetary microlensing events
where detection of planetary signal is limited by finite
source effect. We note that “-” means the respective con-
figuration cannot be realized.
In Figure 3, we present the resulting probability as a func-
tion of magnification. The individual panels show the proba-
bilities for events involved with different types of source stars.
In each panel, we present the variations of the probability for
planets with different mass ratios and separations. We test six
different planetary separations of s = 1/1.6, 1/1.4, 1/1.2, 1.2,
1.4, and 1.6 as representative values for planets in the lensing
zone. For the mass ratio, we test five values of q = 5× 10−3,
10−3, 5× 10−4, 10−4, and 5× 10−5.
From the variation of the probability, we find the follow-
ing tendencies. First, we find that the probability increases
with the increase of the lensing magnification. This is con-
sistent with the result of K. Horne (private communication).
This tendency is due to three factors. First, the size of the
planetary caustic increases as it is located closer to the pri-
mary star. This can be seen in Figure 4, where we present the
relation between the location of the planetary caustic and its
size, which is obtained by using equations (2) and (3). Then,
a higher chance of planetary perturbation is expected when the
source is located closer to the primary, where the lens-
ing magnification is high. Second, perturbation regions of
the same size cover a larger range of angle as the planetary
caustic moves closer to the lens. This also contributes to the
higher probability. Third, the photometric precision improves
with the increasing brightness of the source star due to lensing
magnification. As the photometric precision improves, it is
easier to detect small deviations induced by planets. The same
reason can explain the considerable size of the perturbation
region induced by central caustics. Perturbations induced by
the central caustics occur at high magnifications during which
the photometric precision is high. As a result, despite much
smaller size of the central caustic than that of the planetary
caustic, the central perturbation region is considerable and can
even be comparable to the perturbation region induced by the
planetary caustic. This can be seen in the detectability maps
presented in Figure 1.
However, the probability does not continue to increase with
the increase of the magnification. Instead, the probability
drops off rapidly beyond a certain magnification. This critical
value corresponds to the magnification at which finite-source
effect begins to wash out the planetary signal. In Table 1, we
present the cases where finite source effect limits planet detec-
tions. As a result, detections of planets with low mass ratios
would be difficult for events involved with giant source stars
with magnifications A ≳ 70. We note that the finite source
effect also limits the maximum magnification of an event and
thus the curves in Figure 3 terminate at a certain value.
Second, as the magnification increases, the probability of
detecting planetary signal increases with two dramatically dif-
ferent rates of dP/d logA. We find that this abrupt change
of dP/d logA occurs due to the transition from the regime of
FIG. 4.— Variation of the size of the planetary caustic as a function of its
location. The value rpc represents the separation between the center of the
planetary caustic and the primary lens star. The sign of rpc is positive when
the caustic is on the planet side and vice versa. We note that the caustic size
at around rpc is not presented because the analytic expression in eq. (1) is not
valid in this region. In addition, there is no distinction between the planetary
and central caustics in this region.
perturbations induced by planetary caustics into the one of
perturbations induced by central caustics. The perturbation
region induced by the central caustic forms around the pri-
mary lens and thus the probability becomes very high once the
source star is in the central perturbation regime. The boost of
the increase rate occurs at different magnifications depending
on the planetary parameters and the types of involved source
stars. The critical magnification becomes lower as the mass
ratio of the planet increases and the separation of the planet
approaches the Einstein ring radius. In Table 2, we present
these critical magnifications. An important finding to be noted
is that the critical magnification occurs at moderate magnifi-
cations of ≲ 20 for a significant fraction of events caused by
planetary systems with planets located in the lensing zone.
This implies that probability of detecting planetary signal can
be high even for events with moderate magnifications.
Third, the probability is higher for events involved with
brighter source stars. This is because of the improved pho-
tometric precision with the increase of the source brightness.
The difference in the probability depending on the source type
is especially important at low magnifications. For example,
the probabilities at a magnification of A = 5 for events caused
by a common planetary system with q = 10−3 and s = 1.2 but
associated with different source stars of giant, clump giant,
and main-sequence are P ∼ 20%, 10%, and 1%, respectively.
In the absence of high magnification events, therefore, the sec-
ond prime candidate event for follow-up observation is the
one involved with brightest source star. As the magnification
further increases and once the source star enters the central
perturbation region, the difference becomes less important.
4. SUMMARY AND CONCLUSION
For the purpose of providing useful criteria in the selec-
tion of target events preferable for planetary lensing follow-
up observations, we investigated the variation of the proba-
bility of detecting planetary lensing signals depending on the
observables of the lensing magnification and source bright-
ness. From this investigation, we found consistent result from
previous studies that the probability increases with the in-
crease of the lensing magnification due to the improvement
TABLE 2
CRITICAL MAGNIFICATIONS OF CENTRAL PERTURBATION
source type     planetary separation   q = 5 × 10⁻³   q = 10⁻³   q = 5 × 10⁻⁴   q = 10⁻⁴   q = 5 × 10⁻⁵
giant           s = 1.2, 1/1.2         A ∼ 2.2        A ∼ 7      A ∼ 8          A ∼ 22     A ∼ 22
giant           s = 1.4, 1/1.4         A ∼ 2.5        A ∼ 8      A ∼ 12         –          –
giant           s = 1.6, 1/1.6         A ∼ 3.5        A ∼ 9      A ∼ 18         –          –
clump giant     s = 1.2, 1/1.2         A ∼ 7          A ∼ 8      A ∼ 11         A ∼ 30     A ∼ 60
clump giant     s = 1.4, 1/1.4         A ∼ 8          A ∼ 12     A ∼ 17         A ∼ 60     A ∼ 80
clump giant     s = 1.6, 1/1.6         A ∼ 9          A ∼ 16     A ∼ 20         A ∼ 745    –
main sequence   s = 1.2, 1/1.2         A ∼ 6          A ∼ 11     A ∼ 20         A ∼ 55     A ∼ 100
main sequence   s = 1.4, 1/1.4         A ∼ 8          A ∼ 20     A ∼ 30         A ∼ 100    A ∼ 150
main sequence   s = 1.6, 1/1.6         A ∼ 11         A ∼ 30     A ∼ 40         A ∼ 150    A ∼ 200
NOTE. — Critical magnifications at which the transition from the regime of perturbations induced
by planetary caustics to that of perturbations induced by central caustics occurs. We note that
the critical magnifications are ≲ 20 in many cases.
of the photometric precision combined with the expansion of
the perturbation region. The increase rate of the probabil-
ity is boosted at a certain magnification at which perturba-
tion caused by the central caustic begins to occur. We found
that this boost occurs at moderate magnifications of A ≲ 20
for a significant fraction of events caused by planetary sys-
tems with planets located in the lensing zone, implying that
probabilities can be high even for events with moderate mag-
nifications. The probability increases with the increase of the
source star brightness. We found that the probability of events
associated with source stars brighter than clump giants is not
negligible even at magnifications as low as A ∼ 5. In the ab-
sence of the rare prime targets of very high-magnification events
(A ≳ 100), we therefore recommend observing the events with
brightest source stars and highest magnifications among the
alerted events. Due to the increase of the source size with the
increase of the brightness, however, the probability rapidly
drops off beyond a certain magnification. As a result, detec-
tions of planets with low mass ratios (q ≲ 10⁻⁴) would be dif-
ficult for events involved with giant source stars with magni-
fications A ≳ 70.
This work was supported by the Astrophysical Research
Center for the Structure and Evolution of the Cosmos (ARC-
SEC) of Korea Science and Engineering Foundation (KOSEF)
through Science Research Program (SRC) program.
REFERENCES
Alard, C. 1999, A&A, 343, 10
An, J. H. 2005, MNRAS, 356, 1409
Bennett, D. P., & Rhie, S. H. 1996, ApJ, 472, 660
Bolatto, D. B., & Falco, E. E. 1994, ApJ, 436, 112
Bond, I., et al. 2002, MNRAS, 331, L19
Chung, S. J., et al. 2005, ApJ, 630, 535
Dominik, M. 1999, A&A, 349, 108
Dominik, M., et al. 2002, Planetary and Space Science, 50, 299
Gould, A., & Loeb, A. 1992, ApJ, 396, 104
Griest, K., & Safizadeh, N. 1998, ApJ, 500, 37
Gaudi, B. S., & Sackett, P. D. 2000, ApJ, 532, 340
Han, C. 2006, ApJ, 638, 1080
Han, C., & Gould, A. 2003, ApJ, 592, 172
Mao, S., & Paczyński, B. 1991, ApJ, 374, L37
Peale, S. J. 2001, ApJ, 552, 889
Tomaney, A. B., & Crotts, A. P. S. 1996, AJ, 112, 2872
Udalski, A., Szymański, M., Kałużny, J., Kubiak, M., Mateo, M.,
Krzemiński, W., & Paczyński, B. 1994, Acta Astron., 44, 227
Yoo, J., et al. 2004, ApJ, 616, 1204
This figure "fig1.jpg" is available in "jpg"
format from:
http://arxiv.org/ps/0704.0968v1
http://arxiv.org/ps/0704.0968v1
|
0704.0972 | The dissolution of the vacancy gas and grain boundary diffusion in
crystalline solids | The dissolution of the vacancy gas and grain boundary diffusion
in crystalline solids
Fedor V.Prigara
Institute of Microelectronics and Informatics, Russian Academy of Sciences,
21 Universitetskaya, Yaroslavl 150007, Russia∗
(Dated: September 12, 2021)
Abstract
Based on the formula for the number density of vacancies in a solid under the stress or tension,
the model of grain boundary diffusion in crystalline solids is developed. We obtain the activation
energy of grain boundary diffusion (dependent on the surface tension or the energy of the grain
boundary) and also the distributions of vacancies and the diffusing species in the vicinity of the
grain boundary.
PACS numbers: 61.72.Bb, 66.30.Dn, 68.35.-p
Recently, it was shown that sufficiently high pressures as well as mechanical stresses
applied to a crystalline solid lead to the decrease in the energy of the vacancy formation
and create, therefore, an additional amount of vacancies in the solid [1]. The last effect
enhances self-diffusion in the crystal which is normally vacancy-mediated, at least in simple
metals. Since large mechanical stresses are normally present in grain boundaries, these new
results can elucidate the mechanisms of grain boundary diffusion which have remained so
far unclear [2].
According to the thermodynamic equation [3]
dE = TdS − pdV, (1)
where E is the energy, T is the temperature, S is the entropy, p is the pressure, and V is
the volume of a solid, the energy of a solid increases with pressure, so the pressure acts as
the energy factor similarly to the temperature. Therefore, the number of vacancies in a solid
increases both with temperature and with pressure.
The thermodynamic consideration based on the Clausius-Clapeyron equation gives the
number density n of vacancies in a solid in the form [1]
n = (P0/T ) exp (−Ev/T ) = (n0T0/T ) exp (−Ev/T ) , (2)
where Ev is the energy of the vacancy formation, P0 = n0T0 is a constant, T0 can be put
equal to the melting temperature of the solid at ambient pressure, and the constant n0 has
an order of magnitude of the number density of atoms in the solid. Here the Boltzmann
constant kB is included in the definition of the temperature T.
The formula (2) describes the thermal expansion of the solid. It should be taken into
account that the dissolution of the vacancy gas in a solid causes the deformation of the
crystalline lattice and changes the lattice parameters.
The energy of the vacancy formation Ev depends linearly on the pressure P (in the region
of high pressures) as given by the formula
Ev = E0 − αP/n0, (3)
where α is a dimensionless constant, α ≈ 18 for sufficiently high pressures. On the atomic
scale, the pressure dependence of the energy of the vacancy formation in the equation (3) is
produced by the strong atomic relaxation in a crystalline solid under high pressure.
With increasing pressure, the number density of vacancies in a solid increases, according
to the relation
n = (n0T0/T ) exp (− (E0 − αP/n0) /T ) , (4)
and, finally, the vacancies can condense, forming their own sub-lattice. Such is the explana-
tion of the appearance of composite incommensurate structures in metals and some other
elemental solids under high pressure [4-7].
Further increase of the number density of vacancies in a solid with increasing pressure
leads to the melting of the solid under sufficiently high pressure (and fixed temperature).
Such effect has been observed in sodium [6]. In general, such behavior is universal for solids,
though the corresponding melting pressure is typically much larger than those for sodium.
We assume that the melting of the crystalline solid occurs when the critical number
density nc of vacancies is achieved. In view of the equation (2), it means that the ratio
of the energy of the vacancy formation Ev to the melting temperature Tm of the solid is
approximately constant,
Ev/Tm ≈ α. (5)
The value of the constant α in the last relation can be determined from the empirical
relation between the activation energy of self diffusion (which is approximately equal to the
energy of vacancy formation) and the melting temperature of a solid [8]:
E0 ≈ 18Tm, (6)
so that α ≈ 18.
Substituting the expression (3) in the relation (5), we obtain
(E0 − αP/n0) /Tm ≈ α. (7)
The last equation gives the melting curve of the crystalline solid in the region of high
pressures in the form
T + P/n0 ≈ E0/α ≈ T0, (8)
where T0 is the melting temperature of the solid at ambient pressure.
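For illustration, equations (2)-(4), together with the empirical relation E0 ≈ 18 Tm of eq. (6), can be combined into a few lines of Python that evaluate the vacancy density of a solid under pressure. The numerical inputs below (n0, α, unit conversions, and the example temperatures) are the representative values quoted in the text or arbitrary illustrative choices; the snippet is a sketch rather than a quantitative materials model.

```python
import numpy as np

K_B = 8.617e-5           # Boltzmann constant in eV/K
N0 = 1.1e22              # cm^-3, representative value from eq. (9)
ALPHA = 18.0             # dimensionless constant of eq. (3)

def gpa_to_ev_per_cm3(P_gpa):
    """Convert a pressure in GPa to eV/cm^3 so that P/n0 is an energy in eV."""
    return P_gpa * 1.0e9 / 1.602e-19 / 1.0e6

def vacancy_density(T_kelvin, P_gpa, T0_kelvin):
    """n = (n0*T0/T) * exp(-(E0 - alpha*P/n0)/T), combining eqs. (2)-(4),
    with E0 estimated from the empirical relation E0 ~ 18*Tm of eq. (6)."""
    T, T0 = K_B * T_kelvin, K_B * T0_kelvin       # temperatures in energy units (eV)
    E0 = ALPHA * T0
    Ev = E0 - ALPHA * gpa_to_ev_per_cm3(P_gpa) / N0   # eq. (3)
    return (N0 * T0 / T) * np.exp(-Ev / T)

# Example: a solid with ambient-pressure melting point T0 = 1000 K,
# held at T = 800 K under P = 0.05 GPa.
print(vacancy_density(800.0, 0.05, 1000.0))
```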
The constant n0 can be determined from the relation between the tensile strength σs and
the melting temperature Tm of a solid [1]
n0 ∼= σs/Tm. (9)
The numerical value of this constant is n0 ≈ 1.1 × 10²² cm⁻³ [1].
Replacing in the relation (4) the pressure P by the absolute value of the stress or tension
σ = F/S, applied to a solid, where F is the applied force and S is the cross-section area of
the solid in the plane perpendicular to the direction of the applied force, we can estimate
the mean number density of vacancies in the solid under the stress or tension:
〈n〉 ∼= (n0T0/T ) exp (− (E0 − ασ/n0) /T ) . (10)
The dissolution of the vacancy gas in a solid under the stress or tension is responsible
for the low values of the elastic limit and the tensile strength of solids as compared with
theoretical estimations not taking into account this process [9].
As indicated above, large mechanical stresses are normally present in grain boundaries.
The absolute value σb of the mechanical stress in the close vicinity of a grain boundary is
given by the formula
σb ∼= γb/r0, (11)
where γb is the energy of the grain boundary and r0 is the radius of the atomic relaxation
region (around a vacancy) which will be estimated below.
According to the relation (10), the energy of the vacancy formation in the close vicinity
of the grain boundary is given by the formula
Eb = E0 − αγb/ (n0r0) . (12)
For the small values of misorientation angle θ 6 10 − 15 degrees, the energy of the
dislocation structure contributes to the energy of the grain boundary [10]. However, for
larger misorientation angles, the energy of the grain boundary is approximately constant
and is determined by the surface tension γ of the solid, γb ∼= γ.
Due to the Einstein relation between the mobility of an atom, µ = v/F , where v is the
velocity of the atom and F is the force acting on the atom, and the diffusion coefficient D
µ = v/F = D/T, (13)
the speed of grain boundary motion v is proportional to the diffusion coefficient D⊥ for
self-diffusion in the direction perpendicular to the plane of a grain boundary. Therefore,
the activation energy E of grain boundary motion is equal to the activation energy E⊥ of
self-diffusion across the grain boundary. The last activation energy is equal to the activation
energy Eb of grain boundary self-diffusion in the case of high-angle grain boundaries, and
is approximately equal to the activation energy E0 of bulk self-diffusion for low-angle grain
boundaries. Thus, there is a step of the activation energy for grain boundary motion at
some critical value θc of the misorientation angle (θc = 10 − 15°, as indicated above). Such
a step of the activation energy for grain boundary motion has been observed experimentally
in high-purity aluminium, the critical value of the misorientation angle being in this case
θc = 13.6° [11].
The driving force for grain boundary motion is provided by the distribution of mechanical
stresses in a crystalline solid [12].
Assuming that the free surface of a crystalline solid is formed by the plane of vacancies,
we can estimate the surface tension of the solid as follows
γ ∼= βn0E0a0, (14)
where a0 = n0^(−1/3) ∼= 0.45 nm has an order of magnitude of the lattice spacing a, and β is a
dimensionless constant which has an order of unity. For hard metals such as Al, Zr, Nb, Fe,
Pt, β ∼= 0.8. In the case of mild metals, β is normally smaller, e.g. for Rb and Sr, β ∼= 1/4.
Substituting the estimation (14) for the energy of the grain boundary γb ∼= γ in the
equation (12), we find
Eb ≈ E0 (1− βαa0/r0) . (15)
Due to the atomic relaxation and thermal motion of atoms, the migration barriers are
small [2,13], and the activation energy of self-diffusion is approximately equal to the energy
of the vacancy formation. The analysis of experimental data on the activation energy of
grain boundary self-diffusion gives an empirical relation [14]
Eb ≈ 9Tm ≈ E0/2. (16)
From equations (15) and (16), we find the estimation of the radius of the atomic relaxation
region,
r0 ≈ 2βαa0 ∼= αa0, (17)
since β has an order of unity. The radius of the atomic relaxation region has an order of
r0 ∼= 18 n0^(−1/3) ≈ 8 nm. This value is comparable with the diameters of tracks produced by
high energy ions in metals [15-17]. The grain boundary diffusion width δ [14] is smaller than
the radius of the atomic relaxation region due to the non-uniform distribution of vacancies
inside the atomic relaxation region in the grain boundary.
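A quick numerical check of equations (14)-(17), using the representative values quoted above, is sketched below; the assumed E0 in the example is arbitrary and serves only to illustrate the Eb ≈ E0/2 result.

```python
# Representative values from the text (illustrative inputs only).
N0 = 1.1e22                      # cm^-3
A0 = N0 ** (-1.0 / 3.0)          # a0 = n0^(-1/3) ~ 0.45 nm (in cm)
ALPHA = 18.0

r0 = ALPHA * A0                  # eq. (17): r0 ~ alpha * a0
print("a0 =", A0 * 1e7, "nm;  r0 =", r0 * 1e7, "nm")   # ~0.45 nm and ~8 nm

E0 = 1.5                         # eV, an assumed bulk self-diffusion activation energy
ratio = 0.5                      # beta*alpha*a0/r0 ~ 1/2 when r0 = 2*beta*alpha*a0 (eq. 17)
Eb = E0 * (1.0 - ratio)          # eq. (15) -> Eb ~ E0/2, consistent with eq. (16)
print("Eb =", Eb, "eV")
```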
If we assume that the mechanical stress σ decreases linearly with the distance x from the
plane of the grain boundary,
σ = σ0 (1− kx) , (18)
where σ0 is the stress at the boundary of the atomic relaxation region with the width r0 in
the grain boundary (this value is smaller than σb ∼= γ/r0 ∼= (1/2)n0Tm and has an order of
magnitude σ0 ∼= (1/2)n0T ), then the equation (10) gives the distribution of vacancies in the
vicinity of the grain boundary in the form
n ∼= (n0T0/T ) exp (− (E0 − ασ0 (1− kx) /n0) /T ) = nbexp (−ασ0kx/ (n0T )) , (19)
where nb is the number density of vacancies at the boundary of the atomic relaxation region.
Due to the trapping by vacancies [18], the distribution of the concentration c of the
diffusing species in the vicinity of the grain boundary follows the same law:
c ∼= cbexp (−x/l) , (20)
where cb is the concentration of the diffusing species at the boundary of the relaxation region,
and the scale l is given by the formula
l = n0T/ (ασ0k) . (21)
Here k has an order of magnitude of 1/d, d being the size of the grain, so that l ∼= d/α. The
penetration profiles described by the equation (20) have been indeed observed experimentally
in the case of grain boundary diffusion in metals [8, 18], the measured penetration depth l
having an order of a few micrometers [8].
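For illustration, the penetration profile of equations (20)-(21) can be evaluated as follows; the grain size used in the example is an arbitrary assumption chosen to show that l ≈ d/α falls in the micrometre range.

```python
import numpy as np

def penetration_scale(d_um, alpha=18.0):
    """Characteristic penetration depth l ~ d/alpha (eq. 21 with k ~ 1/d and
    sigma0 ~ (1/2) n0 T); d_um is the grain size in micrometres."""
    return d_um / alpha

def concentration_profile(x_um, c_b, d_um, alpha=18.0):
    """Diffusant concentration near the grain boundary, c ~ c_b exp(-x/l) (eq. 20)."""
    return c_b * np.exp(-x_um / penetration_scale(d_um, alpha))

# Example: a 100-micron grain gives l of a few microns, of the order of the
# measured penetration depths quoted in the text.
x = np.linspace(0.0, 20.0, 5)
print(penetration_scale(100.0), concentration_profile(x, 1.0, 100.0))
```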
To summarize, we obtained the dependence of the activation energy of grain boundary
self-diffusion on the energy of the grain boundary, the estimation of the surface tension of a
solid and of the energy of the grain boundary, and the width of the atomic relaxation region
in the grain boundary (or the radius of the atomic relaxation region around a vacancy). We
obtained further the distributions of vacancies and the diffusing species in the vicinity of
the grain boundary. The obtained radius of the atomic relaxation region is consistent with
the diameters of tracks produced by high energy ions in metals.
—————————————————————
[1] F.V.Prigara, E-print archives, cond-mat/0701148.
[2] A.Suzuki and Y.Mishin, J. Mater. Sci. 40, 3155 (2005).
[3] S.-K.Ma, Statistical Mechanics (World Scientific, Philadelphia, 1985).
[4] R.J.Nelmes, D.R.Allan, M.I.McMahon, and S.A.Belmonte, Phys. Rev. Lett. 83, 4081
(1999).
[5] M.I.McMahon, S.Rekhi, and R.J.Nelmes, Phys. Rev. Lett. 87, 055501 (2001).
[6] O.Degtyareva, E.Gregoryanz, M.Somayazulu, H.K.Mao, and R.J.Hemley, Phys. Rev.
B 71, 214104 (2005).
[7] V.F.Degtyareva, Usp. Fiz. Nauk 176, 383 (2006) [Physics- Uspekhi 49, 369 (2006)].
[8] B.S.Bokstein, S.Z.Bokstein, and A.A.Zhukhovitsky, Thermodynamics and Kinetics of
Diffusion in Solids (Metallurgiya Publishers, Moscow, 1974).
[9] G.I.Epifanov, Solid State Physics (Higher School Publishers, Moscow, 1977).
[10] A.A.Smirnov, Kinetic Theory of Metals (Nauka, Moscow, 1966).
[11] M.Winning, G.Gottstein, and L.S.Shvindlerman, Acta Mater. 49, 211 (2001).
[12] K.J.Draheim and G.Gottstein, in APS Annual March Meeting, 17- 21 March 1997,
Abstract D41.87.
[13] B.P.Uberuaga, G.Henkelman, H.Jonsson, S.T.Dunham, W.Windl, and R.Stumpf,
Phys. Stat. Sol. B 233, 24 (2002).
[14] I.Kaur and W.Gust, Fundamentals of Grain and Interphase Boundary Diffusion
(Ziegler Press, Stuttgart, 1989).
[15] F.F.Komarov, Usp. Fiz. Nauk 173, 1287 (2003) [Physics- Uspekhi 46, 1253 (2003)].
[16] F.V.Prigara, E-print archives, cond-mat/0406222.
[17] M.Toulemonde, C.Trautmann, E.Balanzat, K.Hjort, and A.Weidinger, Nucl. In-
strum. Meth. B 217, 7 (2004).
[18] W.P.Ellis and N.H.Nachtrieb, J. Appl. Phys. 40, 472 (1969).
∗ Electronic address: [email protected]
|
0704.0973 | X-ray Timing Observations of PSR J1930+1852 in the Crab-like SNR
G54.1+0.3 | draft version of Feb. 19 2007
X-ray Timing Observations of PSR J1930+1852 in the Crab-like
SNR G54.1+0.3
Fangjun Lu1,2, Q.Daniel Wang2, E. V. Gotthelf3, and Jinlu Qu1
ABSTRACT
We present new X-ray timing and spectral observations of PSR J1930+1852,
the young energetic pulsar at the center of the non-thermal supernova remnant
G54.1+0.3. Using data obtained with the Rossi X-ray Timing Explorer (RXTE )
and Chandra X-ray observatories we have derived an updated timing ephemeris
of the 136 ms pulsar spanning 6 years. During this interval, however, the period
evolution shows significant variability from the best fit constant spin-down rate of
Ṗ = 7.5112(6)×10−13 s s−1, suggesting strong timing noise and/or glitch activity.
The X-ray emission is highly pulsed (71 ± 5% modulation) and is characterized
by an asymmetric, broad profile (∼ 70% duty cycle) which is nearly twice the
radio width. The spectrum of the pulsed emission is well fitted with an absorbed
power law of photon index Γ = 1.2 ± 0.2; this is marginally harder than that
of the unpulsed component. The total 2 − 10 keV flux of the pulsar is 1.7 ×
10−12 erg cm−2 s−1. These results confirm PSR J1930+1852 as a typical Crab-
like pulsar.
Subject headings: ISM: individual (G54.1+0.3)—ISM: jets and outflows—radiation
mechanisms: non-thermal—stars:neutron (PSR J1930+1852)— supernova remnants—
X-rays: ISM
1Laboratory of Particle Astrophysics, Institute of High Energy Physics, CAS, Beijing 100039, P.R. China;
[email protected]; [email protected]
2Astronomy Department, University of Massachusetts, Amherst, MA 01003; [email protected]
3Columbia Astrophysics Laboratory, Columbia University, 550 West 120th Street, New York, NY 10027;
[email protected]
1. Introduction
Young rotation-powered pulsars typically radiate a large fraction of their spin-down
energy at X-ray energies. Observations in this band are thus important to the study of the
spin-down evolution of such pulsars and their emission mechanism(s). The study also helps to
understand the mechanical energy output of the pulsars into their surroundings, manifested
as pulsar wind nebulae (PWNe). To this end, one needs to monitor the spin-down at various
evolutionary stages of young pulsars and to measure their energy spectra, both pulsed and
unpulsed, with various viewing angles. However, only a dozen or so of young pulsars with
PWNe have been identified and studied in detailed so far.
The recently discovered 136 ms pulsar PSR J1930+1852 at the center of the supernova
remnant (SNR) G54.1+0.3 is the latest example of a Crab-like pulsar (Camilo et al. 2002).
Known as the “Bulls-Eye” pulsar, PSR J1930+1852 is surrounded by a bright symmetric
ring of emission (Lu et al. 2002) similar to the toroidal and jet-like structure associated
with the Crab pulsar, but viewed nearly face-on. Based on the initial timing parameters,
PSR J1930+1852 is the eighth most energetic pulsar known, with a rotational energy loss
rate of Ė = 1.2 × 1037 erg s−1, well above the empirical threshold for generating a bright
pulsar wind nebula (Ė ≳ 4 × 10³⁶ erg s⁻¹, Gotthelf 2004). Such young pulsars are often
embedded in observable shell-type remnant which have yet to dissipate. However, like the
Crab, G54.1+0.3 lacks evidence for a thermal remnant in any waveband (Lu et al. 2002).
Most likely, the SN ejecta in these two remnants are still expanding into a very low density
medium.
In this paper we present the first dedicated X-ray timing and spectral follow-up observa-
tions of PSR J1930+1852 since discovery. Previous X-ray results were based on archival data
of limited quality. We use the new data to characterize the pulse shape and energy spectrum
and provide a long term ephemeris. Throughout the paper, the uncertainties (statistical
fluctuation only) are quoted at the 68% confidence level.
2. Observations and Data Analysis
The pulsar PSR J1930+1852 was observed twice with RXTE on 2002 September 12
– 14 and on 2002 December 23 – 25 using a combination of event and instrument modes.
For consistency, we analyze the data taken with the proportional counter array (PCA) in
the Good Xenon mode. PCA has a field of view of 1◦ (FWHM), total collecting area of
about 6500 cm2, time resolution of 1 µs, and spectral resolution of ≤ 18% at 6 keV. The
data are reduced and analyzed using the ftools software package version v5.2. We filter
the data using the standard RXTE criteria, selecting time intervals for which parameters
Elevation Angles < 10◦, Time Since SAA ≥ 30 min, Pointing Offsets < 0.02◦, and the
background electron rate Electron2 < 0.1. The effective exposure time after this filtering
is 31.7 ks and 41.7 ks for the September and December observations. Since the background
of RXTE is high and the spectral resolution is relatively low, the RXTE data is used
herein exclusively for timing analysis, selecting photons detected from PCA PHA channels
0 − 35 (∼ 2 − 15 keV). This results in a total of ∼ 1 and ∼ 1.6 million counts in the two
observations for the subsequent analysis. The photon arrival times are corrected to the Solar
system barycenter, based on the DE200 Solar ephemeris time system and the Chandra J2000
coordinates of J193030.13+185214.1 (Lu et al. 2002).
SNR G54.1+0.3 was also observed with Chandra on 2003 June 30 for a total of 58.4 ks.
The pulsar was placed at the aim-point of the front-illuminated ACIS-I detector. The CCD
chip I3 was operated in continuous-clocking mode (CC-mode), providing a time resolution of
2.85 ms and one-dimensional imaging, in which the 2-D CCD image is integrated along the
column direction in each CCD readout cycle. The photon arrival times are post-processed to
account for the spacecraft dithering and SIM motion prior to the barycenter correction. The
spectral data are corrected for the effects of CTI (Charge Transfer Inefficiency). However,
the spectral gain is not well calibrated in the CC-mode, requiring adjustment in the fitting
process (details are given in §3). Spectral response matrices are generated for the ACIS-I
aimpoint, the location of the pulsar in this observation. After filtering the data using the
standard criteria, the remaining effective exposure is 57.2 ks. Reduction and analysis of
the Chandra data are all based on the standard software package CIAO (v3.2) and CALDB
(v3.0.0).
Figure 1 presents the geometry of the CC-mode observation overlaid on an archival
Chandra X-ray image of SNR G54.1+0.3. The CCD image is summed along the dimension
perpendicular to the marked line which is orientated with a position angle P.A. = 19◦ East
of North. The count distribution along this dimension is shown in Figure 2. The central
peak corresponds to the presence of the pulsar, which significantly contributes to the six
adjacent pixels, as denoted by the upper horizontal bar. The neighboring four pixels (two
on each side of the pulsar region), marked by the two lower horizontal bars, show the nearly
same intensity level in the ACIS-I3 image-mode data with the pulsar excised. We therefore
select counts falling in the inner six pixels for both our pulsar timing and spectral analysis of
the pulsar, while those counts in the outer four pixels are used to estimate the background
from the surrounding nebula.
Fig. 1.— Geometry of the Chandra ACIS-I3 CCD continuous-clocking (CC-mode) obser-
vation of PSR J1930+1852 presented herein. The dashed line gives the orientation of the
CC-mode observation shown overlaid on an archival Chandra broadband (0.3−10 keV) X-ray
image of SNR G54.1+0.3.
Fig. 2.— Source and background region determination for the Chandra CC-mode observation
of PSR J1930+1852. The 1-D count distribution of SNR G54.1+0.3 as observed in CC-
mode using ACIS-S3 (solid line), compared with the distributions constructed from the
collapsed Chandra ACIS-I3 image-mode data, with (dashed line) and without (dotted) the
pulsar excised. All data is restricted to the 0.3−10 keV energy band. The on-pulsar (central)
thick horizontal bar denotes the 6 pixels that contains significant pulsar emission, while the
two adjacent off-pulsar (outer) thick horizontal bars mark the pixels that are used to estimate
the local nebula background (see §2 for details).
3. Results
3.1. Pulsar Timing
For each observation, we search for the periodic signal of PSR J1930+1852 by folding
events around the period extrapolated from the early radio ephemeris of Camilo et al. (2002).
For each period folding with a period P , a χ2 is calculated from the fit to the pulse profile
with a constant count rate. The null hypothesis of no periodic signal can be ruled out when
a significant peak is seen in the resultant “periodogram” (χ2 vs. P ), which is the case for
each of the X-ray observations at a high confidence (χ2 > 300 for 10 phase bins). We further
fit the peak shape with a Gaussian profile to maximize the accuracy of our pulsar period
determination (Figure 3). The centroid of this Gaussian is then taken as the best estimate of
the pulsar period. The light curves derived of the RXTE and Chandra observations folded
at the measured periods are shown in Figures. 4–5.
To estimate the uncertainties in the period measurements, we use the bootstrap tech-
nique of Diaconis & Efron (1983). This is done as the following: (1) constructing a new data
set of the same total number of counts by re-sampling with replacement from the observed
events; (2) determining the period with this re-sampled data set in the exactly same way as
with the original data; (3) repeating the above two steps for 500 times to produce a period
distribution; (4) Using the dispersion of this distribution as an estimate of the 1σ period
uncertainty. The distributions produced for the three observations are shown in the right
column of Figure 3, while the estimated uncertainties are included in Table 1.
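A schematic version of this folding and bootstrap procedure is sketched below in Python. It is a reconstruction for illustration only: the trial-period grid and the number of phase bins are arbitrary choices, and the peak is located by a simple maximum rather than the Gaussian fit used in the paper.

```python
import numpy as np

def folding_chi2(times, period, nbins=10):
    """Chi-square of the folded pulse profile against a constant count rate."""
    phases = np.mod(times / period, 1.0)
    counts, _ = np.histogram(phases, bins=nbins, range=(0.0, 1.0))
    expected = counts.sum() / nbins
    return np.sum((counts - expected) ** 2 / expected)

def best_period(times, trial_periods, nbins=10):
    """Trial period maximizing the folding chi-square (periodogram peak)."""
    chi2 = np.array([folding_chi2(times, p, nbins) for p in trial_periods])
    return trial_periods[np.argmax(chi2)]

def bootstrap_period_error(times, trial_periods, n_boot=500, nbins=10, seed=0):
    """1-sigma period uncertainty from the scatter of re-sampled data sets."""
    rng = np.random.default_rng(seed)
    periods = []
    for _ in range(n_boot):
        resampled = rng.choice(times, size=times.size, replace=True)
        periods.append(best_period(resampled, trial_periods, nbins))
    return np.std(periods)
```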
To compute the pulsed fraction of the X-ray emission from PSR J1930+1852, we used
the Chandra observation. We extracted a total of 5506 counts in the 0.3 − 10 keV band
from the on-pulsar pixels of the 1-D count distribution (the solid curve in Figure 2). After
subtracting the local nebular contribution estimated from the neighboring off-pulsar pixels,
the remaining 3560±92 counts are considered as the net total emission from the pulsar. This
emission can be further divided into the pulsed and persistent components. To determine
the persistent component, we construct a 1-D distribution of the persistent emission from
the off-pulse counts, defined to be in the phase interval 0.1 − 0.3 (Figure 5). The same
on-pulsar pixels as shown in Figure 2 now contain a total of 598 counts. Corrected for the
off-pulse phase fraction (1/5), the total number of persistent counts over the entire phase is
then 598×5. Therefore, the net number of pulsed counts is (5506 − 598×5) = 2516±143.
This results in a pulsed fraction of fp ≡ (pulsed/total counts) = 71± 5%.
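The count bookkeeping behind this estimate can be restated in a few lines of Python (the numbers are those quoted above; the uncertainty propagation is not reproduced here):

```python
total_counts = 5506.0          # counts in the on-pulsar pixels (0.3-10 keV)
off_pulse_counts = 598.0       # counts in the off-pulse phase interval (width 0.2)
background = 5506.0 - 3560.0   # nebular background inferred from off-pulsar pixels

persistent = off_pulse_counts / 0.2    # persistent counts over the full phase (598 x 5)
pulsed = total_counts - persistent     # = 2516
net_total = total_counts - background  # = 3560
f_p = pulsed / net_total
print(f"pulsed = {pulsed:.0f}, f_p = {100 * f_p:.0f}%")   # ~71%
```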
3.2. Pulsed Emission Spectral Characteristics
To check for phase-dependent spectral variations across the pulse profile we compute
the hardness ratio in each phase bin, defined as HR = Nh/Ns, where Ns and Nh are the
counts selected from the 0.3− 3 keV and 3− 10 keV energy bands, respectively. The pulsar
counts (pulsed and unpulsed) are extracted from the 6 pixel source region as discussed in §2
and the background from the neighboring 4 pixels. The calculated HR is shown in the lower
panel of Figure 5. Fitting these HR data points assuming a constant HR value resulted in
a χ2 of 17.94 for 9 degrees of freedom, which means that the hardness ratio changes with
phase at a confidence level of 96.4%. Further more, it appears that the HR values of the
on pulse emission are higher than those of the off-pulse emission. In order to quantify this,
we computed the mean HR for the off-pulse emission (bins 1, 2 3 and 10 in the panel) as
HR = 0.77±0.08 and the on-pulse bins (4 to 9) as HR = 0.95±0.04. Therefore, the on-pulse
emission is harder than the off-pulse emission at a confidence level of ∼ 2σ, or 98%.
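The quoted ∼2σ significance is consistent with a simple comparison of the two mean hardness ratios with their uncertainties added in quadrature, for example:

```python
from math import sqrt

hr_off, err_off = 0.77, 0.08   # mean HR of the off-pulse bins
hr_on, err_on = 0.95, 0.04     # mean HR of the on-pulse bins

significance = (hr_on - hr_off) / sqrt(err_on ** 2 + err_off ** 2)
print(f"{significance:.1f} sigma")   # ~2.0 sigma
```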
Next, we study the Chandra spectrum of PSR J1930+1852 using the same sources
and background counts as extracted above. For the pulsed spectrum, the phase width
corrected off-pulse counts are subtracted from the on-pulse counts in each spectral bin.
Figure 6a presents the best fit absorbed power-law model using the standard response matrix.
Although the overall χ2 is acceptable (34.4 for 35 degree-of-freedom), the residuals to this
fit display characteristic feature, indicating that the gain of the response function is not
properly calibrated for the CC-mode. Following the method suggested by Kaaret et al.
(2001) we calibrate the gain offset and scale in XSPEC by comparing the overall CC-mode
spectra of PSR J1930+1852 to that determined by the ACIS-S3 imaging data. The latter
is characterized by the same model with the absorption column density NH = 1.6 × 10²²
cm⁻² and a photon index α = 1.35 (Camilo et al. 2002). The resulting gain scale and
offset are found to be 0.90 and -0.18, respectively. Fixing this gain correction and NH to the
above values, we re-fit the pulsed emission spectrum to obtain a photon index of 1.2 ± 0.2
(see Figure 6b). The new χ2 value is 17.7 for 34 degree-of-freedom, significantly better than
without the gain correction. The pulsed flux measured in the 2 − 10 keV energy band is
1.2× 10−12 ergs cm−2 s−1. When compared to the overall 2− 10 keV flux of 1.7× 10−12 ergs
cm−2 s−1 (Camilo et al. 2002), this implies that ∼ 70% of the total emission from the pulsar
is pulsed, consistent with the estimate in Section 3.1.
4. Discussion
The properties of PSR J1930+1852 are most similar to those found for other examples of
young, energetic pulsars. The power-law spectral index of the pulsar emission is consistent
with its spin-down energy according to the empirical law of Gotthelf (2003) for energetic
rotation powered pulsars with Ė > 4× 1036 erg s−1. The power law index is also consistent
with that of the pulsed emission, as found for other high Ė, Crab-like pulsars (Gotthelf
2003). As with most X-ray detected radio pulsars, the X-ray pulse morphology differs from
that of the radio pulse. The full width at half maximum (FWHM) of the X-ray pulse is 0.4
phase compared to 0.15 phase in radio. Notably, the X-ray pulse has a steep rise and slow
decline, whereas the radio pulse is inverted, with a slow rise and steep decay instead.
The unpulsed component of PSR J1930+1852 is most likely nonthermal in nature as
the thermal emission from the cooling surface of the neutron star should be negligible.
According to the standard theoretical cooling curves, the surface temperature of a 1.4 M⊙
neutron star is about 0.13 keV at the age of PSR J1930+1852 (about 3,000 years; Page 1998).
Assuming a radius of 12 km the neutron star should have an absorbed 0.2 − 10 keV flux of
∼ 8×10−15 erg cm−2 s−1, which accounts for ∼ 0.4% of our detected total 0.2−10 keV X-ray
flux or 1.4% of the unpulsed flux. Tennant et al. (2001) detected the X-ray emission of the
Crab pulsar at its pulse minimum, though accounting for only a tiny fraction of the total or
unpulsed flux. Tennant et al. (2001) further suggested that this component is nonthermal.
The unpulsed X-ray emission from PSR J1930+1852 may be of the same nature as that of
the Crab pulsar.
Together with the previous X-ray and radio periods, the three timing measurements
obtained herein provide an opportunity to study the pulsar period evolution. A linear fit to
these periods yields a Ṗ of 7.5116(6)×10−13 s s−1 with a reduced χ2ν of 3.6 (see Figure 7).
The large χ2ν value and the scattered residuals show that the period of PSR J1930+1852
evolves in a more complicated than a simply constant spin down. The period derivative
obtained here is also significantly (9σ) different from that obtained by Camilo et al. (2002).
This suggests that PSR J1930+1852 has experienced periods of timing noise and/or glitches
- not unexpected for a young pulsar (e.g., Zhang et al. 2001; Wang et al. 2001; Crawford
& Demiańsky 2003). Arzoumanian et al. (1994) defined a quantity ∆8 to represent the
stability of a pulsar. They found an empirical relation between ∆8 and Ṗ , which predicts a
high ∆8 of -0.67 for PSR J1930+1852. This value is higher than those measured for most
ordinary pulsars and is consistent with the variability in spin-down rate observed for this
pulsar.
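For reference, an unweighted least-squares fit to the (epoch, period) pairs of Table 1 reproduces a spin-down rate close to the quoted value; the sketch below shows the computation, noting that the fit reported in the paper may differ in weighting and in the treatment of the early ASCA point.

```python
import numpy as np

# (epoch [MJD], period [s]) pairs from Table 1
mjd = np.array([50566.0, 52280.0, 52530.0, 52632.0, 52820.0])
period = np.array([0.13674374, 0.136855046957, 0.136871312, 0.136877919, 0.136890130])

t_sec = (mjd - mjd[0]) * 86400.0          # elapsed time in seconds
pdot, p0 = np.polyfit(t_sec, period, 1)   # linear fit P(t) = p0 + pdot * t
residuals = period - (p0 + pdot * t_sec)
print(f"Pdot = {pdot:.4e} s/s")           # ~7.5e-13 s/s
print("residuals [s]:", residuals)
```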
Indeed, PSR J1930+1852 shares other interesting properties with PSR B0540-69. For
example, the pulsed X-ray emission of PSR B0540-69 has probably a harder spectrum, with
a photon index of 1.83±0.13, than the steady component whose photon index is (2.09±0.14;
Kaaret et al. 2001), whereas PSR J1930+1852 also has a harder pulsed emission than the
steady emission. Furthermore, the pulse width of PSR B0540-69 is about 0.4 and its pulsed
fraction fp = 71.0 ± 5%, both nearly identical to the respective values measured herein for
PSR J1930+1852. Based on these X-ray emission similarities, the X-ray emission regions of
the two pulsars may have the similar overall structures and viewing geometries.
The project is partially supported by NASA/SAO/CXC through grant GO5-6057X.
FJL and JLQ also acknowledge support from the National Natural Science Foundation of
China.
REFERENCES
Arzoumanian, Z., Nice, D.J., Taylor, J.H., & Thorestt, S.E. 1994, ApJ, 422, 671
Camilo, F., Lorimer, D.R., Bhat, N.D.R., Gotthelf, E.V., Halpern, J.P., Wang, Q.D., Lu,
F.J., & Mirabal, N. 2002, ApJ, 574, L71
Crawford, F., & Demiański, M. 2003, ApJ, 595, 1052
Diaconis, P., & Efron, B. 1983, Scientific American, May P96
Gotthelf, E. V. 2003, ApJ, 591, 361
Gotthelf, E. V. 2004, in “Young Neutron Stars and Their Environments”, IAU Symp. 218.
Ed. F. Camilo & B. M. Gaensler (S.F. CA.: ASP) 2004, 218, 225
Kaaret, P., et al. 2001, ApJ, 546, 1159
Lu, F.J., Wang, Q.D., Aschenbach, B., Durouchoux, P., & Song, L.M. 2002, ApJ, 568, L49
Middleditch, J., et al. 2006, ApJ, 652, 1531
Page, D. 1998, in The Many Faces of Neutron Stars ed. R. Buccheri, J. van Paradijs, & M.A.
Alpar (Dordrecht: Kluwer), 539
Tennant, A.F., et al. 2001, ApJ, 554, L173
Wang, N., Wu, X.J., Manchester, R.N., et al. 2001, Chin. J. Astron. Astrophys., 1, 195
Zhang, W., Marshall, F.E., Gotthelf, E.V., Middleditch, J., & Wang, Q.D. 2001, ApJ, 554,
Fig. 3.— Period and period uncertainty of PSR J1930+1852 at three epochs. Left – The
periodograms of PSR J1930+1852 constructed from the September 2002, December 2002,
and June 2003 observations, together with the respective best-fit Gaussian profiles for
the central peaks. Right – The distribution of the 500 periods from the bootstrapped data
for each observation. The Gaussian 1σ width gives an estimate of the period uncertainty.
The P0 values are given in Table 1.
Fig. 4.— The pulse shape of PSR J1930+1852 in the 2 − 15 keV band as obtained with
RXTE on 2002 September 12 (solid) and December 23 (dashed). Phase zero is arbitrary;
two cycles are shown for clarity. The December light curve is shifted upward by 0.002.
Fig. 5.— The pulse shape and its hardness ratio of PSR J1930+1852 in the 0.3 − 10 keV
band as measured with Chandra on 2002 June 30. The pulse shape (Top Panel) is folded
at the period given in Table 1 and the phase bin size is chosen so that each bin contains
almost the same counts. The hardness ratio (Bottom Panel) is as defined in the text (§3.2);
the background, as defined in §2, has been subtracted.
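As an illustration of the equal-count phase binning described in this caption, one possible sketch (not the actual reduction, which used barycentred Chandra CC-mode event times) places the bin edges at quantiles of the folded phases; the period value is the Chandra one from Table 1 and the event list below is synthetic.

# Sketch: fold event times at a fixed period and bin so each bin holds nearly equal counts.
import numpy as np

def equal_count_profile(times, period=0.136890130, nbins=10):
    phase = np.mod(times / period, 1.0)                 # pulse phase in [0, 1); zero point arbitrary
    edges = np.quantile(phase, np.linspace(0.0, 1.0, nbins + 1))   # quantile edges -> equal counts
    edges[0], edges[-1] = 0.0, 1.0
    counts, _ = np.histogram(phase, bins=edges)
    rate = counts / np.diff(edges)                      # counts per unit phase
    return edges, counts, rate

# Example with fake Poisson event times:
rng = np.random.default_rng(0)
fake_times = np.cumsum(rng.exponential(0.5, size=5000))
edges, counts, rate = equal_count_profile(fake_times)
print(counts)   # each bin holds ~500 events by construction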
Table 1. Timing Results for PSR J1930+1852
Date Obs. Type Epoch Period
(UT) (MJD[TDB]) (s)
1997 Apr 27 ASCA 50566 0.13674374(5)a
2002 Jan 17 Radio 52280 0.136855046957(9)a
2002 Sep 12 RXTE 52530 0.136871312(4)
2002 Dec 23 RXTE 52632 0.136877919(3)
2003 Jun 30 Chandra 52820 0.136890130(5)
aTaken from Camilo et al. (2002)
Fig. 6.— The pulsed X-ray spectrum of PSR J1930+1852 obtained with Chandra ACIS-I3
in continuous-clocking mode: Left Panel: fitting with an absorbed power-law model with
the gain scale and offset fixed as 1 and 0; Right Panel: fitting with the same model but with
the gain scale and offset of 0.90 and -0.18.
Fig. 7.— The period residuals of PSR J1930+1852 in different epochs.
|
0704.0975 | Recurrence analysis of the Portevin-Le Chatelier effect |
Recurrence analysis of the Portevin-Le Chatelier effect
A. Sarkara*, Charles L. Webber Jr.b, P. Barata, P. Mukherjeea
a Variable Energy Cyclotron Centre, 1/AF Bidhan Nagar, Kolkata 700064, India
b Department of Physiology, Loyola University Medical Center, 2160 S. First Avenue,
Maywood, IL 60153, USA
Abstract
Tensile tests were carried out by deforming polycrystalline samples of Al-
2.5%Mg alloy at room temperature in a wide range of strain rates where the Portevin-
Le Chatelier (PLC) effect was observed. The experimental stress-time series data have
been analyzed using the recurrence analysis technique based on the Recurrence Plot
(RP) and the Recurrence Quantification Analysis (RQA) to study the change in the
dynamical behavior of the PLC effect with the imposed strain rate. Our study revealed
that the RQA is able to detect the unique crossover phenomenon in the PLC dynamics.
PACS: 62.20.Fe, 05.40.Fb, 05.45.Tp
Keywords: Recurrence Plot, Recurrence Quantification Analysis, Portevin-Le
Chatelier effect
Introduction
The Portevin-Le Chatelier (PLC) effect is one of the widely studied
metallurgical phenomena, observed in many metallic alloys of technological
importance [1-12]. It is a striking example of the complexity of the spatiotemporal
dynamics, arising from the collective behavior of dislocations. In uniaxial loading with
* Corresponding author: [email protected]
constant imposed strain rate, the effect manifests itself as a series of serrations (stress
drops) in the stress-time or stress-strain curve. Each stress drop is associated with the
nucleation of a band of localized plastic deformation, often designated as PLC band,
which under certain conditions propagates along the sample. The microscopic origin
of the PLC effect is the dynamic strain aging (DSA) [13-19] of the material due to the
interaction between mobile dislocations and diffusing solute atoms. At the
macroscopic scale, this dynamic strain aging leads to a negative strain rate sensitivity
(SRS) of the flow stress and makes the plastic deformation nonuniform.
In polycrystals three types of the PLC effect are traditionally distinguished on the
qualitative basis of the spatial arrangement of localized deformation bands and the
particular appearance of deformation curves [20, 21]. Three generic types of
serrations: type A, B and C occur depending on the imposed strain rate. For
sufficiently large strain rate, type A serrations are observed. In this case, the bands are
continuously propagating and highly correlated. The associated stress drops are small
in amplitude [22,23]. If the strain rate is lowered, type B serrations with relatively
larger amplitude occur around the uniform stress strain curve. These serrations
correspond to intermittent band propagation. The deformation bands are formed ahead
of the previous one in a spatially correlated manner and give rise to regular surface
markings [22,23]. For even smaller strain rate, bands become static. This type C band
nucleates randomly in the sample leading to large saw-tooth shaped serration in the
stress strain curve and random surface markings [22,23].
From a metallurgical point of view, the PLC effect is usually undesirable since it has
detrimental influences like the loss of ductility and the appearance of surface markings
on the specimen. Beyond its importance in metallurgy, the PLC effect is an epitome
for a general class of nonlinear complex systems with intermittent bursts. The
succession of plastic instabilities shares both physical and statistical properties with
many other systems exhibiting loading-unloading cycles, e.g. earthquakes. The PLC effect
is regulated by interacting mechanisms that operate across multiple spatial and
temporal scales. The output variable (stress) of the effect exhibits complex
fluctuations which contain information about the underlying dynamics.
The PLC effect has been extensively studied over the last several decades with the
goal being to achieve a better understanding of the small-scale processes and of the
multiscale mechanisms that link the mesoscale DSA to the macroscale PLC effect.
The technological goal is to increase the SRS to positive values in the range of
temperatures and strain rates relevant for industrial processes. This would ensure
material stability during processing and would eliminate the occurrence of the PLC
effect.
Due to a continuous effort of numerous researchers, there is now a reasonable
understanding of the mechanisms and manifestations of the PLC effect. A review of
this field can be found in Refs. [17,18]. The possibility of chaos in the stress drops of
the PLC effect was first predicted by G. Ananthakrishna et al. [24] and later by V.
Jeanclaude et al. [25]. This prediction generated a new enthusiasm in this field. In the
last few years, many statistical and dynamical studies have been carried out on the
PLC effect [10-12, 26-32]. Analysis revealed two types of dynamical regimes in the
PLC effect. At medium strain rate (type B) chaotic regime has been demonstrated [30,
33], which is associated with the bell-shaped distribution of the stress drops. For high
strain rate (type A) the dynamics is identified as self organized criticality (SOC) with
the stress drops following a power law distribution [33]. The crossover between these
two mechanisms has also been a topic of intense research for the past few years
[29,33,34-36]. It is shown that the crossover from the chaotic to SOC dynamics is
clearly signaled by a burst in multifractality [29,33].
This crossover phenomenon is of interest in the larger context of dynamical
systems as this is a rare example of a transition between two dynamically distinct
states. Chaotic systems are characterized by the self similarity of the strange attractors
and sensitivity to initial conditions quantified by fractal dimension and the existence
of a positive Lyapunov exponent, respectively. On the contrary, the SOC dynamics is
characterized by an infinite number of degrees of freedom and power law statistics.
The general consensus that the dynamic strain aging is the cause behind the
PLC effect suggests a discrete connection between the stress fluctuation and the band
dynamics. We do not have a system of primitive equations to describe the dynamics of
the band, so we must extract as much information as possible from the data itself. We
use the stress data recorded during the plastic deformation for our analysis. However,
we do not analyze these data blindly but in the framework of nonlinear dynamics as
the band dynamics shows intermittency. In this work, we have carried out detailed
recurrence analysis of the stress time data observed during the PLC effect to study the
change in the dynamical behavior of the effect with the imposed strain rate.
Experimental details:
Substitutional Aluminum alloys with Mg as the primary alloying element are model
systems for the PLC effect studies. These alloys have wide technological applications
due to their advantageous strength to weight ratio. They show good ductility and can
be rolled to large reductions and processed in thin sheets and are being extensively
used in beverage packaging and other applications. However, the discontinuous
deformation behavior of these alloys at room temperature rules them out from many
important applications, such as in the automobile industry. These alloys exhibit the PLC
effect for a wide range of strain rates and temperatures. Under these conditions the
deformation of these materials localizes in narrow bands which leave undesirable band-
type macroscopic surface markings on the final products.
Tensile tests were conducted on flat specimens prepared from polycrystalline
Al-2.5%Mg alloy. Specimens with gauge length, width and thickness of 25, 5 and 2.3
mm, respectively were tested in an INSTRON (model 4482) machine. All the tests
were carried out at room temperature (300K) and consequently there was only one
control parameter, the applied strain rate. To monitor closely its influence on the
dynamics of the PLC effect, the strain rate was varied from 7.98×10-5 S-1 to 1.60×10-3 S-1.
The PLC effect was observed throughout this range. The stress-time response was
recorded electronically at periodic time intervals of 0.05 seconds. Fig. 1 shows the
observed PLC effect in a typical stress-strain curve for a strain rate of 1.20×10-3 S-1. The
stress data show an increasing trend due to the strain hardening effect. This trend was
eliminated, and the analyses reported in this study are carried out on the resulting data. The
inset in Fig. 1 shows a typical segment of the trend-corrected stress-strain curve. In
the studied strain rate region we could observe type B, B+A and A serrations as
reported [20,21]. We kept the sampling rate the same for all the experiments.
Consequently the number of data points was not the same for different strain rate
experiments. To analyze the data on a similar footing we have carried out our analysis on the
stress data from the same strain region, 0.02-0.10, for all strain rate experiments.
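The detrending method is not specified above; purely for illustration, the sketch below assumes that a low-order polynomial fit is subtracted from the stress-time record before further analysis.

# Minimal detrending sketch (the detrending method is an assumption, not the paper's).
import numpy as np

def detrend_stress(time_s, stress, order=3):
    """Subtract a polynomial trend from the stress-time series."""
    coeffs = np.polyfit(time_s, stress, order)
    return stress - np.polyval(coeffs, time_s)

# Example: stress sampled every 0.05 s, as in the experiments (synthetic data here)
t = np.arange(0, 200, 0.05)
stress = 180 + 0.1 * t + np.random.normal(0, 0.5, t.size)
fluct = detrend_stress(t, stress)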
Recurrence Analysis
Eckmann, Kamphorst and Ruelle [37] proposed a new method to study the
recurrences and nonstationary behaviour occurring in dynamical systems. They
designated the method as “recurrence plot” (RP). The method is found to be efficient
in the identification of system properties that cannot be observed using other conventional
linear and nonlinear approaches. Moreover, the method has been found very useful for
the analysis of nonstationary systems with high dimension and noisy dynamics. The
method can be outlined as follows: given a time series {xi} of N data points, first the
phase space vectors ui = {xi, xi+τ, ..., xi+(d-1)τ} are constructed using Takens' time delay
method. The embedding dimension (d) can be estimated from the false nearest
neighbor method. The time delay (τ) can be estimated either from the autocorrelation
function or from the mutual information method. The main step is then to calculate the
N×N matrix
Ri,j = Θ(εi − ||xi − xj||),   i, j = 1, 2, ..., N    (1)
where εi is a cutoff distance, ||·|| is a norm (we have taken the Euclidean norm), and
Θ(x) is the Heaviside function. The cutoff distance εi defines a sphere centered at xi.
If xj falls within this sphere, the state will be close to xi and thus Ri,j = 1. The binary
values in Ri,j can be simply visualized by a matrix plot with color black (1) and white
(0). This plot is called the recurrence plot.
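A minimal sketch of this construction (delay embedding followed by thresholded pairwise distances) is given below; the values of d, τ and ε in the example are illustrative only and are not the parameters used for the PLC data.

# Sketch of the recurrence-matrix construction described above.
import numpy as np

def embed(x, d, tau):
    """Takens delay embedding: rows are u_i = (x_i, x_{i+tau}, ..., x_{i+(d-1)tau})."""
    n = len(x) - (d - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(d)])

def recurrence_matrix(x, d=10, tau=5, eps=0.1):
    u = embed(np.asarray(x, dtype=float), d, tau)
    # Pairwise Euclidean distances between embedded states
    dist = np.linalg.norm(u[:, None, :] - u[None, :, :], axis=-1)
    return (dist <= eps).astype(np.uint8)      # R_ij = Theta(eps - ||u_i - u_j||)

# R can be displayed with matplotlib's imshow to produce a plot like those in Fig. 2.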
However, it is often not very straightforward to draw conclusions about the dynamics of
the system from the visual inspection of the RPs. Zbilut and Webber [38,39]
developed the recurrence quantification analysis (RQA) to provide the quantification
of important dynamical aspects of the system revealed through the plot. The RQA
proposed by Zbilut and Webber is mostly based on the diagonal structures in the RPs.
They defined different measures: the recurrence rate (REC), which measures the fraction of
black points in the RP; the determinism (DET), which is the measure of the fraction of recurrent
points forming the diagonal line structure; the maximal length of diagonal structures
(Lmax); the entropy (Shannon entropy of the line segment distributions); and the trend
(a measure of the paling of recurrent points away from the central diagonal). These
variables are used to detect the transitions in the time series. Recently Gao [40]
emphasized the importance of the vertical structures in RPs and introduced a
recurrence time statistics corresponding to the vertical structures in RPs. Marwan et al.
[41] extended Gao’s view and defined measures of complexity based on the
distribution of the vertical line length. They introduced three new RP based measures:
the laminarity, the trapping time (TT) and the maximal length of the vertical structures
(Vmax). Laminarity is analogous to DET and gives the measure of the amount of
vertical structure in the RP and represents laminar states in the system. TT contains
information about the amount as well as the length of the vertical structure. Applying
these measures to the logistic map data they found that in contrast to the conventional
RQA measures, their measures are able to identify the laminar states i.e. chaos-chaos
transitions. The vertical structure based measures were also found very successful to
detect the laminar phases before the onset of life-threatening ventricular
tachyarrhythmia [41]. Here we have applied the measures proposed by Marwan et al.
along with the traditional measures to find the effect of strain rate on the PLC effect.
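For concreteness, minimal versions of several of these quantities (REC, DET, laminarity, TT, Lmax, Vmax and the DET/REC ratio used later) can be computed from a binary recurrence matrix as sketched below. Conventions such as the minimum line length and the treatment of the main diagonal differ between implementations, so this is only an illustrative sketch, not the RQA code used for the analysis.

# Minimal RQA measures from a binary recurrence matrix R (l_min = 2 assumed;
# the main diagonal is excluded from the diagonal-line statistics).
import numpy as np

def _run_lengths(binary_1d):
    """Lengths of runs of ones in a 1-D binary array."""
    padded = np.concatenate(([0], np.asarray(binary_1d, dtype=int), [0]))
    edges = np.flatnonzero(np.diff(padded))
    return edges[1::2] - edges[::2]

def rqa_measures(R, l_min=2):
    N = R.shape[0]
    rec = R.sum() / float(N * N)                                    # recurrence rate (REC)
    diag_runs = np.concatenate([_run_lengths(np.diag(R, k))
                                for k in range(-(N - 1), N) if k != 0])
    vert_runs = np.concatenate([_run_lengths(R[:, j]) for j in range(N)])

    def frac(runs):
        total = runs.sum()
        return runs[runs >= l_min].sum() / total if total else 0.0

    det = frac(diag_runs)                                           # determinism (DET)
    lam = frac(vert_runs)                                           # laminarity
    long_v = vert_runs[vert_runs >= l_min]
    tt = long_v.mean() if long_v.size else 0.0                      # trapping time (TT)
    l_max = diag_runs.max() if diag_runs.size else 0
    v_max = vert_runs.max() if vert_runs.size else 0
    return dict(REC=rec, DET=det, LAM=lam, TT=tt, Lmax=l_max, Vmax=v_max,
                DET_over_REC=det / rec if rec else np.inf)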
Results and Discussions
RP and RQA have been successfully applied to diverse fields starting from
Physiology to Econophysics in recent years. A review of various applications of RPs
and RQA can be found in the recent article by Marwan et. al. [42]. Here we extend the
list of application of RPs and RQA and for the first time apply these methods to study
the dynamical behavior of the PLC effect in Al-2.5%Mg alloy. In this study, we
particularly concentrate on the strain rate region of 7.98×10-5 S-1 to 1.60×10-3 S-1. The
main goal is to demonstrate the ability of RQA to detect the unique crossover
phenomenon observed in the PLC dynamics.
It has been shown that an RP analysis is optimal when the trajectory is embedded
in a phase space reconstructed with an appropriate dimension d [38]. Such a
dimension can be well estimated using the false nearest neighbor technique. The d-
dimensional phase space is then reconstructed using delay coordinates. The time delay
τ can be estimated using the mutual information or the first zero of an autocorrelation
function. Based on the false nearest neighbor method we have chosen d to be 10 for all
the strain rates. The values of τ obtained from the mutual information were in the range 1-14 for
the different strain rate data. A parameter specific to the RP is the cutoff distance εi. εi is
selected from the scaling curve of REC vs. εi as suggested in the literature [43]. Fig. 2
shows the RPs of the stress fluctuations during the PLC effect at four different strain
rates. From the visual inspection of the RPs it is easy to understand that the dynamical
behavior of the PLC effect changes with the strain rate. However, it is wise to go for
the RQA and quantify the difference in the PLC dynamics with strain rate. Fig. 3
shows the variation of the various RQA variables with strain rate. It can be seen from
the Fig. 3 that the RQA variables like DET and laminarity do not show any systematic
variation with strain rate. Lmax, TT and Vmax decreased rapidly with strain rate and
reached a plateau. Trend values remained almost constant at lower strain rates and
decreased at higher strain rates. The variation of entropy with strain rate is rather
interesting. The entropy initially decreased with strain rate and suddenly reached a
higher value. However, the most important behavior was observed in the variation of
REC and a variable derived from REC and DET, i.e. the ratio of DET and REC
(DET/REC). The REC values decreased initially and reached a low value and then
again started increasing. This variation is quite appealing in the sense that the REC
value is very low in the crossover region and hence is able to detect the crossover
phenomenon of the PLC effect. On the contrary, the DET/REC values showed an
abrupt jump in the crossover region.
It is clearly evident from this study that RQA is able to detect the crossover in
the PLC dynamics from the type B to the type A region. However, a detailed explanation of
the results obtained from the study is not straightforward. Further study is necessary,
which will in turn also help to understand the dislocation dynamics involved in the
PLC effect.
Conclusions
In conclusion, for the first time we have applied recurrence analysis to study
the dynamical behavior of the PLC effect. The study revealed that the recurrence
analysis is efficient in detecting the unique crossover, indicated in earlier studies, in
the dynamics of the PLC effect.
References
1. F. Le Chatelier, Rev. de Metall. 6, 914 (1909).
2. A.W. Sleeswyk, Acta Metall. 6, 598 (1958).
3. P.G. McCormick, Acta Metall. 20, 351 (1972).
4. J.D. Baird, The Inhomogeneity of Plastic Deformation (American Society of
Metals, OH, 1973).
5. A. Van den Beukel, Physica Status Solidi(a) 30, 197(1975).
6. A. Kalk, A. Nortmann, and Ch. Schwink, Philos. Mag. A 72, 1239 (1995).
7. M. Zaiser, P. Hahner, Phys Status Solidi (b) 199, 267 (1997).
8. M. S. Bharathi, M. Lebyodkin, G. Ananthakrishna, C. Fressengeas, and L. P.
Kubin, Acta Mater. 50, 2813 (2002).
9. E. Rizzi, P. Hahner, Int. J. Plasticity 20, 121(2004).
10. P. Barat, A. Sarkar, P. Mukherjee, S. K. Bandyopadhyay, Phys Rev Lett 94,
05502 (2005).
11. A. Sarkar, P. Barat, Mod Phys Lett B 20, 1075 (2006)
12. A. Sarkar, A. Chatterjee, P. Barat, P. Mukherjee, Mat Sci Eng A, in press
13. A.H. Cottrell, Dislocations and plastic flow in crystals (Oxford University
Press, London, 1953).
14. A. Van den Beukel, U. F. Kocks, Acta Metall. 30, 1027 (1982).
15. J. Schlipf, Scripta Metall. Mater. 31, 909 (1994).
16. P. Hahner, A. Ziegenbein, E. Rizzi, and H. Neuhauser, Phys. Rev. B 65,
134109 (2002).
17. Y. Estrin, L.P. Kubin, Continuum Models for Materials with Microstructure,
ed. H.B. Muhlhaus (Wiley, New York, 1995, p. 395).
18. L.P. Kubin, C. Fressengeas, G. Ananthakrishna, Dislocations in Solids, Vol.
11, ed. F. R. N. Nabarro, M. S. Duesbery (Elsevier Science, Amsterdam,
2002, p 101).
19. A. Nortmann, and Ch. Schwink, Acta Mater. 45, 2043 (1997);
20. K. Chihab, Y. Estrin, L.P. Kubin, J. Vergnol, Scripta Metall. 21, 203(1987).
21. K. Chihab, and C. Fressengeas, Mater. Sci. Eng. A 356,102 (2003).
22. M. Lebyodkin, L. Dunin-Barkowskii, Y. Brechet, Y. Estrin and L. P. Kubin,
Acta mater 48, 2529 (2000).
23. P. Hahner, Mat Sci Eng A 164, 23 (1993).
24. G. Ananthakrishna, M.C. Valsakumar, J. Phys. D 15, L171(1982).
25. V. Jeanclaude, C. Fressengeas, L.P. Kubin, Nonlinear Phenomena in Materials
Science II, ed. L.P. Kubin (Trans. Tech., Aldermanndorf, 1992, p. 385).
26. M. Lebyodkin, Y. Brechet, Y. Estrin, L.P. Kubin, Acta Mater. 44,
4531(1996).
27. M.A. Lebyodkin, Y. Brechet, Y. Estrin, L.P. Kubin, Phys. Rev. Lett.
74,4758(1995).
28. S. Kok, M.S. Bharathi, A.J. Beaudoin, C. Fressengeas, G. Ananthakrishna,
L.P. Kubin, M. Lebyodkin, Acta Mater. 51, 3651(2003).
29. M.S. Bharathi, M. Lebyodkin, G. Ananthakrishna, C. Fressengeas, L.P.
Kubin, Phys. Rev. Lett. 87, 165508(2001).
30. S. Venkadesan, M. C. Valsakumar, K. P. N. Murthy, and S. Rajasekar, Phys.
Rev. E 54, 611 (1996).
31. M.A. Lebyodkin and T.A. Lebedkine, Phys. Rev. E 73, 036114 (2006)
32. D. Kugiumtzis, A. Kehagias, E. C. Aifantis, and H. Neuhäuser, Phys. Rev. E
70, 036110 (2004)
33. G. Ananthakrishna, S. J. Noronha, C. Fressengeas, and L. P. Kubin, Phys.
Rev. E 60, 5455 (1999).
34. G. Ananthakrishna and M. S. Bharathi, Phys. Rev. E 70, 026111 (2004)
35. M. S. Bharathi, G. Ananthakrishna, EuroPhys Lett 60, 234 (2002).
36. M. S. Bharathi and G. Ananthakrishna, Phys. Rev. E 67, 065104 (2003)
37. J. –P. Eckmann, S. O. Kamphorst, D. Ruelle, Europhys. Lett. 4, 973 (1987).
38. J. –P. Zbilut and C. L. Webber Jr., Phys. Lett. A 171, 199 (1992).
39. C. L. Webber Jr. and J. –P. Zbilut, J. Appl. Physiol. 76, 965 (1994).
40. J. Gao and H. Cai, Phys. Lett. A 270, 75 (2000).
41. N. Marwan, N. Wessel, U. Meyerfeldt, A. Schirdewan, J. Kurths, Phys. Rev.
E 66, 026702 (2002).
42. N. Marwan, M. C. Romano, M. Thiel and J. Kurths, Physics Reports 438, 237
(2007).
43. C. L. Webber Jr, J. –P. Zbilut, In: Tutorials in contemporary nonlinear
methods for the behavioral sciences, Chapter 2, pp. 26-94, 2005. M. A.
Riley, G. Van Orden, eds.
Fig. 1 True stress vs. true strain curve of Al-2.5%Mg alloy deformed at a strain rate of
1.20×10-3 S-1. The inset shows a typical segment of the trend corrected true stress vs true
strain curve.
Fig. 2 Recurrence plots at the strain rate (a) 1.60×10-3 S-1 (b) 7.97×10-4 S-1 (c) 3.85×10-4
S-1 (d) 1.99×10-4 S-1
Fig. 3 Variation of the Recurrence Quantification Analysis variables with strain rate.
[Fig. 1 plot: true stress vs. true strain, with inset (axes: True Strain).]
[Fig. 2 plot: recurrence plots, four panels.]
[Fig. 3 plot: RQA variables vs. strain rate (s-1); the Type B and Type A regions are marked.]
|
0704.0982 | Daemons and DAMA: Their Celestial-Mechanics Interrelations |
DAEMONS AND DAMA:
THEIR CELESTIAL-MECHANICS INTERRELATIONS
Edward M. Drobyshevski
and Mikhail E. Drobyshevski
) Ioffe Physico-Technical Institute, Russian Academy of Sciences, 194021 St.Petersburg, Russia
) Astronomical Department, Faculty of Mathematics and Mechanics, St.Petersburg State University, Peterhof,
198504 St-Petersburg, Russia
Abstract. The assumption of the capture by the Solar System of the electrically charged
Planckian DM objects (daemons) from the galactic disk is confirmed by the St.Petersburg
(SPb) experiments detecting particles with V < 30 km/s. Here the daemon approach is
analyzed considering the positive model independent result of the DAMA/NaI experiment in
Gran Sasso. The maximum in DAMA signals observed in the May-June period is explained as
being associated with the formation behind the Sun of a trail of daemons that the Sun captures
into elongated orbits as it moves to the apex. The range of significant 2-6-keV DAMA signals
fits well the iodine nuclei elastically knocked out of the NaI(Tl) scintillator by particles falling
on the Earth with V = 30-50 km/s from strongly elongated heliocentric orbits. The half-year
periodicity of the slower daemons observed in SPb originates from the transfer of particles,
that are deflected through ~90°, into near-Earth orbits each time the particles cross the outer
reaches of the Sun which had captured them. Their multi-loop (cross-like) trajectories traverse
many times the Earth’s orbit in March and September, which increases the probability for the
particles to enter near-Earth orbits during this time. Corroboration of celestial mechanics
calculations with observations yields ~10
for the cross section of daemon interaction
with the solar matter.
Key words: DM detection; DM in Earth-crossing orbits; celestial mechanics; Planckian
particles
I. Introduction. Crisis in the Search for DM Candidates
The origin of dark matter (DM) has intrigued researchers for several decades. It has become
increasingly clear that these are neither neutrinos with m ≥ 10 eV nor massive compact halo
objects (MACHOs) (Evans and Belokurov, 2005). In the most recent decade, efforts were
focused primarily on the search for weakly interacting massive particles (WIMPs) and similar
objects with a mass of ~10-100 GeV, whose existence is predicted by some theories of
elementary particles beyond the Standard Model. It was believed self-evident from the very
beginning that the cross section of their interaction with nucleons, s, should be larger than that
of their mutual annihilation (~3×10-34 cm2) (Primack et al., 1988). The level reached presently
is s ~ 10
, but still no reliable and universally accepted results have been reported.
One cannot avoid the impression that the researchers are pursuing an imaginary but
nonexisting horizon, in other words, that the WIMPs as they were conceived of originally
simply do not exist.
The only data that could possibly be treated as evidence for the existence of WIMPs,
or more generally a DM particle component in the galactic halo, were obtained by the DAMA
collaboration in their 7-year-long experiment (Bernabei et al., 2003). The evidence is a yearly
modulation of the number of 2-6-keV signals accumulated with ~100 kg of NaI(Tl)
scintillators. The modulation is ~5% and reaches a maximum some time at the beginning of
June; it could be attributed to a seasonal variation of a ground-level flux of objects from the
galactic halo caused by the Earth’s orbit being inclined with respect to the direction of motion
of the Solar system around the galactic centre (Bernabei et al., 2003). The statistical
significance of the modulation (6.3σ) is high enough to leave no doubt in its existence. The
experiments being performed on other installations do not, however, support these results. To
interpret this situation, the scientists running the DAMA have to consider different types of
WIMPs and different modes of their interaction with matter. Recall, for instance, the recent
assumption of a light pseudoscalar and scalar DM candidate of ~keV mass (see Bernabei et
al., 2006, and refs. herein). Another approach was put forward by Foot (2006). He believes
that the DAMA signals originate from the Earth crossing a stream of micrometeorites of
mirror matter.
The purpose of the present paper is to show that the effects observed by DAMA/NaI,
including the yearly variation of the signal level, allow an interpretation drawn from the
St.Petersburg (SPb) experiments on detection of DArk Electric Matter Objects (daemons),
which presumably are Planckian elementary black holes carrying a negative electric charge
(md ≈ 3×10-5 g, Ze = -10e) (Drobyshevski, 1997a,b).
Starting from March 2000, we have been reliably detecting by means of thin spaced
ZnS(Ag) scintillators, both in ground-level and underground experiments, signals whose
separation corresponds to ~10-15 km/s, i.e., the velocity of objects falling from near Earth,
almost circular heliocentric orbits (NEACHOs). The flux is ~10
and it varies with P
= 0.5 yr, to pass through maxima in March and September (Drobyshevski, 2005a;
Drobyshevski and Drobyshevski, 2006; Drobyshevski et al., 2003). (Note, that attempts were
made to treat these results in terms of possible properties of mirror matter also (Foot and
Mitra, 2003), but observations of a daemon flux directed upward, i.e., from under the ground
level, are apparently in conflict with this interpretation.)
II. Specific Features of the Traversal of the Sun by Daemons
As the Sun moves through the interstellar medium with a velocity V∞, particles of the latter
(we assume for the sake of simplicity that their velocity is << V∞) become focused by
gravitation of the Sun, so that its effective cross section becomes Seff = πpmax² =
πR⊙²[1+(Vesc/V∞)²] (Eddington, 1926). Here R⊙ is the radius of the Sun, Vesc = 617.7 km/s is
the escape velocity from its surface, and pmax is the maximum value of the impact parameter
(the impact parameter is the distance between the continuation of the V∞ vector and the center
of the Sun). For V∞ = 20 km/s, pmax = 30.9R⊙.
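For orientation, the quoted pmax follows directly from the focusing formula; the short sketch below (not part of the original analysis) simply evaluates it numerically, using the standard value of the solar radius.

# Quick numerical check of the quoted focusing radius (illustrative sketch).
import math

R_sun_km = 6.96e5          # solar radius in km (standard value)
V_esc    = 617.7           # escape velocity from the solar surface, km/s
V_inf    = 20.0            # assumed velocity of the Sun relative to the daemons, km/s

p_max = math.sqrt(1.0 + (V_esc / V_inf) ** 2)     # in units of the solar radius
S_eff = math.pi * (p_max * R_sun_km) ** 2         # effective cross section, km^2
print(f"p_max = {p_max:.1f} R_sun")               # ~30.9, as quoted in the text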
In crossing the Sun with a velocity of ~10
cm/s, daemons are slowed down. One
cannot calculate at present the associated decelerating force, because a negative daemon
captures protons and heavy (Zn > Z) nuclei, catalyzes proton fusion reactions, decomposes
somehow the nucleons in nuclei etc. As a result, the effective charge of the daemon, complete
with the particles captured and carried by it, varies continuously. Straightforward estimates
show, however, that daemons of the galactic disk with a velocity dispersion of ~4-30 km/s
(Bahcall et al., 1992), are slowed down strongly enough to preclude the escape to infinity of
many of them as they pass through the Sun (Drobyshevski, 1996, 1997a).
Such objects move along strongly elongated trajectories with perihelia within the Sun.
Subsequent crossings of the Sun’s material bring about contraction of the orbits and their
escape under the Sun’s surface. If, however, a daemon moving on such a trajectory passes
through the Earth’s gravitational sphere of action, it is deflected, which will result in the
perihelion of its orbit leaving the Sun with a high probability. The daemon will be injected
into a stable, strongly elongated Earth crossing heliocentric orbit (SEECHO). Straightforward
estimates made in the gas kinetic approximation and using the concepts of mean free path
length etc. suggest that daemons build up on SEECHOs to produce an Earth crossing flux of
~3×10-7 cm-2s-1 (Drobyshevski, 1997a). (These were fairly optimistic calculations performed
for a rough estimation of the parameters of the daemon detector that was being designed at
that time, in 1996.)
In subsequent inevitable crossings by SEECHO daemons of the Earth’s sphere of
action, their orbits deform to approach that of the Earth; these are near-Earth almost circular
heliocentric orbits (NEACHOs). And whereas daemons moving in nearly parabolic
SEECHOs strike the Earth with velocities of up to V⊕√3 = 51.6 km/s (here V⊕ = 29.79 km/s is
the orbital velocity of the Earth), the NEACHO objects fall on the Earth with a velocity of
only ~10(11.2)-15 km/s.
Estimates of the ground-level SEECHO daemon flux made in 1996 were based on
simple concepts of an isotropic flux of galactic disk daemons incident on the Sun. Our
subsequent experiments that demonstrated a half-year variation made it clear that the flux is
not isotropic, probably because of the motion of the Solar system relative to the DM
population of the disk. We know now also that the daemons detected by us fall, judging from
their velocity, from NEACHOs (Drobyshevski, 2005b; Drobyshevski et al., 2003).
III. Calculation of the Passage of Daemons through the Sun
It appears only natural to assume the flux variations with P = 0.5 yr to be a consequence of
the composition of the Earth’s orbital motion around the Sun and of the Sun itself relative to
the galactic disk population.
The Sun moves relative to the nearest star population with a velocity of 19.7 km/s in
the direction of the apex with the coordinates A = 271° and D = +30° (equatorial coordinates)
or L” = 57° and B” = +22° (galactic coordinates) (Allen, 1973).
Initially, rather than delving into the fundamental essence of the processes underlying
the celestial mechanics, we invoked a simplified concept of a “shadow”, which is produced by
daemons captured into SEECHOs from the galactic disk by the moving Sun, and of the
corresponding “antishadow” created by some daemons crossing the Sun in the opposite
direction (i.e., in the direction of its motion), an approach that had been reflected in our earlier
publications (Drobyshevski, 2004; Drobyshevski et al., 2003).
In a new approach to calculation of the passage of objects through the Sun we made
use of the celestial mechanics integrator of Everhart (1974). It was adapted to FORTRAN by
S.Tarasevich (Institute of Theoretical Astronomy of RAS) for use with the BESM-6
computer. In ITA (and now in the Institute of Applied Astronomy of RAS) it was employed in
calculation of asteroid ephemerides and precise prediction of the apparition of comets
allowing for the action of known planets. We made two important refinements on the code,
more specifically, we (i) introduced the resistance of the medium in the simplest gas dynamics
form, F = σρV² (Drobyshevski, 1996) (where σ is the effective cross section of a particle, and
ρ is the medium density) and (ii) took into account that the Sun is not a point object but has
instead a density distributed over the volume (we used the model of the Sun from Allen
(1973)).
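As an illustration only, the sketch below integrates the same equation of motion (gravity of the enclosed solar mass plus the drag term F = σρV² directed against the velocity) with an off-the-shelf integrator. The functions rho_solar and mass_enclosed are crude placeholders standing in for the tabulated Allen (1973) solar model, and this is not the authors' Everhart-based code.

# Simplified sketch of the modified equation of motion (placeholder solar model).
import numpy as np
from scipy.integrate import solve_ivp

G      = 6.674e-8           # cgs
M_SUN  = 1.989e33           # g
R_SUN  = 6.96e10            # cm
M_D    = 3.0e-5             # daemon mass, g
SIGMA  = 0.79e-19           # interaction cross section, cm^2

def rho_solar(r):
    """Placeholder solar density profile (g/cm^3); replace with Allen (1973)."""
    return 150.0 * np.exp(-10.0 * r / R_SUN) if r < R_SUN else 0.0

def mass_enclosed(r):
    """Placeholder enclosed mass; replace with the integrated model density."""
    return M_SUN * min(r / R_SUN, 1.0) ** 3 if r < R_SUN else M_SUN

def rhs(t, y):
    x, v = y[:3], y[3:]
    r = np.linalg.norm(x)
    a_grav = -G * mass_enclosed(r) * x / r**3
    speed = np.linalg.norm(v)
    a_drag = -(SIGMA / M_D) * rho_solar(r) * speed * v   # F = sigma*rho*V^2, opposite to v
    return np.concatenate([v, a_grav + a_drag])

# Fall from far away with V_inf = 20 km/s and impact parameter p0 = 9.162 R_sun
AU = 1.496e13
y0 = np.array([-30 * AU, 9.162 * R_SUN, 0.0, 2.0e6, 0.0, 0.0])
sol = solve_ivp(rhs, (0.0, 3.0e8), y0, rtol=1e-9, atol=1e-3, max_step=1e5)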
The very first calculations revealed that the trajectories of particles falling on the Sun
and crossing it have a non-closed, many-loop pattern (Drobyshevski, 2005b). This should
certainly have been expected, because inside the Sun particles move in a gravitational field
not of a point but rather of a radius-dependent mass, so that the trajectories do not close and
form instead a rosette, whose petals appear successively in the direction opposite to that of a
body moving around the Sun (see, e.g., Figs.4 and 5 below).
This prompted us to consider the possibility of combining and explaining the results of
DAMA/NaI and of our experiments in terms of a common daemon paradigm, all the more so
because earlier attempts (Drobyshevski, 2005a) had succeeded in proposing an interpretation
of the so-called “Troitsk anomaly”, i.e., a displacement of the tritium β-spectrum tail
occurring with a half-year periodicity.
This approach:
a) would hopefully provide an answer as to why the results of the DAMA/NaI are not
confirmed by other WIMP experiments;
b) would permit us to understand why the intensity of the scintillation signals assigned to
recoil nuclei lies in the 2-6-keV interval (here 2 keV is the sensitivity threshold of the
DAMA/NaI detector) and not higher, whereas elastic interaction of nuclei with WIMPs
of the galactic halo (V = 200-300 km/s) should seemingly produce signals with
energies of up to ~200 keV.
IV. On How to Corroborate the St.Petersburg and DAMA Experiments
The first question that comes immediately to mind is how could one explain the twofold
difference in the signal periodicity between the SPb and DAMA experiments?
Figure 1. Scheme of motion of the Sun and the Earth towards the apex (see text).
Let us begin with the SPb experiment.
Figure 1 shows schematically the motion of the Sun together with the Earth in the
direction of the apex. The angle between the plane of the Earth’s orbit, the ecliptic, and the
apex direction is approximately α0 = 52°-53° (the direction to the apex, just as the velocity V∞,
depends on the stars (or interstellar gas clouds etc.) one chooses as references (Allen, 1973);
we assume in what follows α0 = 52° and V∞ = 20 km/s). We assume also the angle between
the straight line lying in the ecliptic plane and normal to the apex direction and the equinox
line to be about 10°; as the Earth moves along its orbit, it crosses first this line, and after that,
the equinox line. This order of crossing fits our measurements of the positions of the maxima
in the primary daemon flux (Drobyshevski, 2005a; Drobyshevski and Drobyshevski, 2006),
which occur some time in the first decade of March or September (and incidentally coincides
with A = 258° for the Solar apex relative to the interstellar gas).
Figure 2. Angle of deviation for an object having passed through the Sun depending on the impact
parameter p0 at different cross-sections σ of its interaction with matter (for an object mass of 3×10-5 g). α1
is the angle of deviation from the initial velocity V∞ direction after the first passage through the Sun;
α2 is the angle of deviation from the same direction after the second passage.
Figure 2 plots the angle of deviation α1 of a material particle from the direction of its
initial velocity V∞ after emergence from the Sun again to infinity or to the aphelion of the first
loop (i.e. at R1) of its trajectory vs. the impact parameter p0 (the dependence of R1 on the
impact parameter p0 is given in Fig.3). We consider subsequently only cases with p0 < pmax.
Interestingly, in the case of multi-loop trajectories, which is possible for σ ≠ 0, angular
deflections of subsequent from preceding loops differ little from α1, although they gradually
decrease. The value of σ can be estimated from a comparison of further calculations with
experiment.
Straightforward reasoning suggests that the two maxima observed in March and
September should be a consequence of passage through the Earth's orbit of daemons with an
impact parameter about p0 = ±9.162R⊙, where they are deflected by the Sun through α1 ≈ 90°
Figure 3. Maximum distance R1 the object reaches after the first passage through the Sun versus σ and p0.
Figure 4. An example of multi-loop (cross-like) trajectory of an object being braked by the Solar
matter (the Sun's center is at X = 0, Y = 0) for repeated passages through the Sun's body. Object of
3×10-5 g mass and cross-section σ = 0.79×10-19 cm2 falls from infinity (X = -∞; V∞ = 20 km/s) with an
impact parameter p0 = Y(-∞) = 9.162R⊙. The figure plane contains the direction to the apex and the
normal to it lying in the plane of the ecliptic. The figure shows also an ellipse with the major semi-axis
of 1 AU, i.e., the projection of the Earth’s orbit (the dotted circle of 1 AU radius is given as a scale for
the reader’s orientation).
to either side. Moreover, the presence of these maxima suggests also that they originate from
the daemons that had already been captured by the Sun for σ ≥ 0.78×10-19 cm2 but return
repeatedly to it and cross its body. Figure 4 (with a table) provides an idea of such trajectories.
A particle moves along a trajectory making a right cross. First, in traversing the Sun, it
is deflected through 90° and, in crossing the Earth's orbit, escapes with σ = 0.78×10-19 cm2 to
R1 → ∞ (calculations for Fig.4 are made for σ = 0.79×10-19 cm2 to avoid calculations with too
great a value of R1). Thereafter, returning, now from outside, it crosses for the second time the
Earth’s orbit and, hitting the Sun on completion of the first loop, it is again deflected, leaving it
through nearly the right angle. Now the daemon moves along the petal oriented in the anti-
apex direction. But here, although R2 > 1 AU, it does not cross the Earth’s orbit because of the
large inclination of the ecliptic. The subsequent two crossings of the Earth’s orbit (here still
R3 > 1 AU) from the side opposite to that of the first transit, are completed by the daemon
after the return and the third crossing of the Sun. In making the fourth passage through the
Sun after the return, the particle moves toward the apex. Here, depending on the value of σ
and completing the cross, the daemon can move away from the Sun to a distance R4 > 1 AU
(but here again, because of the inclination of the ecliptic to the apex direction, it does not
cross the Earth’s orbit), but may not reach R4 = 1 AU at all. The first value of R4 > 1 AU
corresponds to σ1 = 0.78×10-19 cm2, at which the resistance of the solar material in the first
passage was just high enough to absorb the excess energy ∆E = mdV∞²/2, i.e., the particle was
captured by the Sun. The second (upper) value σ3 = 1.415×10-19 cm2 (for the minimum value
R3 = 1 AU) can be estimated under the assumption that on its third passage the daemon can
finally reach and crosses slightly the Earth’s orbit (i.e., R3 > 1 AU). The validity of the latter
assumption is argued for at least by our observation in autumn and spring of two distinct
maxima; the fourfold (or even six-fold - see below) crossing by daemons of the Earth’s orbit
increases, accordingly, the probability of their transfer from the loop trajectories into
SEECHOs and, subsequently, in NEACHOs, whence they fall on the Earth with V ≈ 10-15
km/s. Thus, we arrive at 0.78×10-19 ≤ σ < 1.415×10-19 cm2 (we point out once more that the
values of σ thus found depend on the accepted daemon mass; they should be proportional to
md). For σ = σ3, the Sun does not capture the daemon at p0 > 12.01R⊙, i.e., when the daemon
initially passes through more rarefied outer layers of the Sun it moves to infinity again.
To choose a still more optimistic scenario, assume a daemon that for σ1 =
0.78(0.79)×10-19 cm2 crosses the Earth’s orbit in the fifth loop as well (with R5 = 1.0919 > 1
AU). The condition R5 = 1 AU yields σ5 = 0.849×10-19 cm2. With σ1 < σ < σ5, the daemon
has now crossed the Earth’s orbit six times! In this respect, Jupiter is markedly behind the
Earth (with only two crossings), while Venus is on the winning side (8 crossings).
Figure 3, presenting the R1(p0) relation in graphic form for σ = 1×10-19 and 3×10-19 cm2,
facilitates estimation of the energy losses suffered by the daemon in traversing the Sun,
of the number of such traversals etc.
Turning now back to the DAMA/NaI experiment, the daemon can obviously cross the
set-up in June provided it falls after the first traversal of the Sun in the plane of the ecliptic,
i.e., if it is deflected through α1 = 52°, which occurs at p0 = 4.90R
. The above estimates of σ
suggest (see Fig. 5a) that R1 > 1 AU, and that the second loop extends up to R2 > 1 AU also,
while naturally not crossing the Earth’s orbit, because it leaves the ecliptic plane. In June, the
second loops of the trajectories with p0 = -6.26R
enter the plane of the ecliptic as well and
extend in it up to R2 ≈ 2 > 1AU (Figs.5c,d). In December, the second loops of trajectories with
p0 = 2.215R
enter the ecliptic plane (Fig.5b). At σ = σ5 = 0.849×10
they have R2 =
1.14 AU, i.e. they are able to cause SEECHOs in December, and only at σ ≥ 0.94×10-19 cm2
R2 becomes ≤ 1 AU.
Fig. 5. The same as Fig.4, but here the Earth's orbit is projected into the figure plane as a straight line
segment of 2 AU length with 52° inclination to the apex direction (December is up-left, June is down-
right).
Thus the calculations performed for σ = σ1 = 0.78×10-19 cm2, σ = σ5 = 0.849×10-19 cm2,
σ = 1×10-19 cm2, and σ = σ3 = 1.415×10-19 cm2 suggest that the daemons captured in
traversing the Sun produce behind it a fairly smeared trail (“shadow”) through which the
Earth passes in May-June-July, but which, generally speaking, does not reach the part of the
Earth’s orbit oriented in the direction of the apex and corresponding approximately to the
November-January period. This is easy to understand, because the second loops of the
trajectories which fall into the apex hemisphere and could produce an “antishadow”
correspond to small p0, i.e., to particles passing through the dense central part of the Sun,
where they suffer the strongest deceleration. This is why, in particular, the second loop of the
trajectory of the daemon that crossed the Sun at p0 = 2.215R⊙ and fell into the ecliptic plane
exactly in December, simply cannot reach the Earth for σ ≥ 1×10-19 cm2 (Fig.5b). It is thus
clear that the ground level flux of daemons from SEECHOs should exhibit a distinct 1-year
periodicity with a minimum some time in December.
Superimposed on this is the half-year wave of the NEACHO objects, which appear as
a result of having transferred from numerous SEECHOs just in periods before the equinoxes
(note these SEECHOs are realizing at both signs of p0 = ±9.162R
). Such transitions are more
probable in March and September because of the appearance of a noticeably larger number of
objects in SEECHOs with comparatively short semimajor axes lying close to these ecliptic
plane zones. The daemons entering these SEECHOs come from the cross-shaped rosette
trajectories, along which the same object crosses the Earth’s orbit twice (or even thrice) back
and forth, the second (and all the more so, the third) time doing it with a noticeably below the
parabolic velocity. Significantly, the projection of the SEECHO object velocity vector on the
Earth’s orbit reaches its maximum values here, with the correspondingly increasing duration
(and efficiency) of the Earth’s gravitational perturbations. Note also that the ratio of the minor
to major semi-axes for the SEECHOs produced in the capture of objects that had crossed the
outer zones of the Sun (p0 ≈ 10R⊙) exceeds those for the objects with p0 → 0 (compare Figs. 4
and 5), which, on the whole, also acts so as to increase the velocity vector projection on the
Earth’s orbit.
V. The Manifestation of Daemons in the SPb and DAMA/NaI Detectors
Our SPb detector, made up of thin spaced ZnS(Ag) scintillators, was sensitive to the passage
of only fairly low-velocity (<30 km/s) daemons. The reason for this, we believe presently, is
that successive “disintegrations” of daemon-containing nucleons in the Zn (or Fe) nucleus
captured by the daemon occur with an interval of ~10
s, whereas the characteristic
dimension of the set-up is ~20-30 cm. At higher velocities, the complex consisting of the
captured nucleus (and, possibly, a cluster of atoms) and the daemon traverses the system
carrying an excessive positive charge, which is readily compensated by electrons captured on
the way, and, therefore, the daemon does not interact with new nuclei with an attendant
generation of a noticeable secondary signal.
The DAMA/NaI experiment with a ~100 kg NaI(Tl) scintillators was designed for
measurement of the annual modulation signature. In case of WIMPs the measured quantity is
the energy of the recoil nuclei knocked out by heavy (~10-100 GeV) WIMPs of the galactic
halo. The set-up is thought to be sensitive to interactions occurring both with I and Na.
Note that the sensitivity threshold of the system, ~2 keV, corresponds to the velocity
of an iodine nucleus of 55 km/s if one takes the quenching factor to be about unity (the
quenching factor is a ratio of efficiencies of producing scintillations by particles under
consideration and electrons of the same energy). Assuming elastic interaction with a very
massive particle, the latter, to produce a 2-keV signal, should move with a velocity of ~30
km/s (in an elastic head-on collision, the velocity of the light particle, a nucleus, should be
twice that of the heavy projectile particle). If the signals are due to WIMPs of the galactic
halo (V∞ = 200-300 km/s), the recoil energy of the iodine nucleus could, seemingly, reach as
high as 110-240 keV.
Information on the yearly variation of the flux of particles traversing the DAMA/NaI
is provided primarily by signals in the 2-6-keV range (Bernabei et al., 2003, 2006). The 6-
keV signal corresponds to the velocity of an elastically colliding projectile of ~ 47 km/s. This
figure is in good agreement with the velocity of 51.6 km/s with which a particle in a quasi-
parabolic orbit hits the Earth (29.78√3 = 51.6 km/s; this velocity would produce a recoil
nucleus with an energy of 7 keV, but allowing for the statistics of other than head-on
collisions we would obtain 5-6 keV). But it is with these velocities (when particle energies
differ exactly by a factor of three; compare, on the other hand, the 6 and 2 keV which one
measures!) that SEECHO daemons fall on the Earth. A truly remarkable coincidence indeed.
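As a quick arithmetic check of these correspondences (assuming a head-on elastic recoil from a projectile much heavier than the nucleus, so that the nucleus acquires twice the projectile velocity), the sketch below reproduces approximately the quoted energies; it is kinematics only, not a detector model, and takes the quenching factor to be about unity as assumed above.

# Kinematics check of the numbers quoted above (v_recoil = 2 * v_projectile for a
# head-on elastic collision with a much heavier projectile).
AMU_KG = 1.66054e-27
EV_J   = 1.60218e-19
M_IODINE = 126.9 * AMU_KG

def recoil_energy_keV(v_projectile_km_s, m_nucleus=M_IODINE):
    v_recoil = 2.0 * v_projectile_km_s * 1.0e3          # m/s
    return 0.5 * m_nucleus * v_recoil**2 / EV_J / 1.0e3

for v in (30.0, 47.0, 51.6):
    print(f"{v:5.1f} km/s projectile -> {recoil_energy_keV(v):.1f} keV iodine recoil")
# ~2.4, ~5.8 and ~7.0 keV, close to the 2-, 6- and 7-keV figures discussed in the text.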
A number of additional questions, however, immediately arise here, to which one
cannot yet supply unambiguous answers.
Indeed, estimates of the velocity with which daemons escape from geocentric Earth-
crossing orbits (GESCO) into the Earth suggest that the resistance offered by a metal-like
solid to a daemon moving with a velocity of ~10 km/s is ~10
dyne (Drobyshevski, 2004),
which entails a release of thermal energy of ~6 MeV/cm. It is unclear what energy would be
liberated in a dielectric (without conductive electrons) scintillator. If it is heat, the scintillator
will not detect it. (On the other hand, it is too high for the cryogenic systems of the type
CDMS-I and Edelweiss-I designed for the detection of WIMPs either; see refs. in Bernabei et
al. (2003).)
The situation is not yet clear with regard to the quenching factor, which we considered
above to be about unity for the low-energy iodine nuclei. Neutron elastic scattering
experiments give a value of about 0.09 for I and 0.30 for Na (see Bernabei et al., 2003, and
refs. therein). We will not give details of such calibrations, nor discuss the different
possibilities here (see, however, arXiv:0706.3095).
Also, one should not forget that the DAMA/NaI system, by the multiple hit rejection
criterion by Bernabei et al. (2003, see Sec.3.3), rejects events with signals appearing
simultaneously in two or more scintillators (nine scintillators altogether). But then how (and
with what efficiency) do recoil nuclei form in one detector piece only?
One may recall that immediately after leaving a solid, the daemon moves in vacuum
(or in air) together with a cluster of atoms, in one of whose nuclei it resides (Drobyshevski,
2005a). When it enters a solid object again, the daemon leaves a larger part of this cluster
close to the surface of the object, and moves inside it only with a small part of the cluster, or
even only with the remainder of the nucleus in which it rests. It is unclear how efficiently
such a complex can initiate scintillations. It is conceivable that as long as it carries an excess
positive charge, it is surrounded by electrons, and, in moving as a conventional heavy atom
(or ion with Zi = 1) with a relatively low velocity (<50 km/s; recall that this corresponds to
less than ~2 keV energy of an iodine nucleus), but in a rectilinear trajectory and without
noticeable deceleration, through the dielectric and only moving atoms apart rather than
penetrating into them, it will excite only phonons but not scintillations. The daemon resides in
this state for tens of microseconds, “digesting” gradually the nucleons of the nucleus it is
carrying and traveling tens of cm in this time. (We are not discussing here the points bearing
on possible modes of “digestion” by the daemon, an elementary black hole, of nucleons that
could leave no trace in a scintillation detector.) Eventually, however, the daemon/nuclear-
remainder complex acquires zero charge to become a particle of the neutron type. This state
lasts until the next proton in the nucleus disintegrates and the system then acquires a negative
charge, ~10
s (with the distance traveled ~5 cm). It is in this state that this neutral (but
supermassive, ~3×10-5 g) complex, 1-3 fermi in size, passes through electronic shells of atoms
and is capable, in a path length of ~5 cm, to produce a recoil nucleus with a double velocity of
up to ~100 km/s. It is such SEECHO-daemon-caused events that satisfy the multiple hit
rejection criterion (Bernabei et al., 2003) that are possibly detected by the DAMA/NaI. The
processes involved here certainly need a deeper analysis.
VI. Conclusions
The daemon approach had offered an explanation for the ~5-eV drift of the tritium β-spectrum
tail with a half-year period, the so-called “Troitsk anomaly”, and some predictions regarding
the KATRIN experiment on direct measurement of the neutrino mass (Drobyshevski, 2005a).
It now appears that one can corroborate within the daemon paradigm the results of the
DAMA/NaI experiment with the inferences drawn from the SPb study, which, in addition to
detecting daemon populations of different velocities captured by the Solar system and moving
within it in orbits of different, including geocentric, populations, established also a half-year
variation in the NEACHO flux, demonstrating the advantages of vacuum systems in daemon
detection, and revealing some of their remarkable properties and specific features of their
interaction with matter.
We find it particularly impressive that the range (2-6 keV) of the recorded signals in
which the DAMA/NaI exhibits a yearly periodicity coincides with exactly the same level (2-7
keV) that follows from the celestial mechanics scenario. On the other hand, a more careful
analysis of the evolution of daemons captured by the Sun from the galactic disk and governed
by celestial mechanics sheds light on reasons underlying the detection of the yearly
periodicity of the high-velocity population (30-50 km/s) measured by DAMA, and of the half-
year periodicity of the low-velocity population (5-10-30 km/s) in the SPb experiment (in the
latter case, the part played by the cross-like multi-loop trajectories of daemons traversing the
Sun and captured by it appears significant). If performed with good statistics, the more
advanced measurements on DAMA/LIBRA will hopefully also reveal the half-year harmonic
in low signal level events (~2-3 keV). Such events originate from the fall of daemons from
“short” SEECHOs in March and September.
An analysis of the conditions favouring capture of daemons by the Sun and
corroboration of their subsequent possible celestial mechanics evolution with the results
gained in the SPb and DAMA/NaI experiments permits one to impose fairly strong constraints
(one would almost say, to measure) on the effective cross section σ of daemon interaction
with the Solar material. It was found to be 0.78×10-19 ≤ σ < 1.4×10-19 cm2. This is ~500 times
the cross section of the neutral “antineon” atom formed by a daemon (Ze = -10e) and ten
protons it captured, while being 3000-5000 times smaller than the cross section for a
daemon/heavy-nucleus complex with electrons surrounding it. On the other hand, σ can be
governed also by the Coulomb interaction of the daemon changing continuously its effective
charge with particles of the solar plasma (Drobyshevski, 1996), which, in turn, may be helpful
in refining our knowledge of the characteristics of the solar material.
Obviously enough, the problems addressed in the paper (interaction of daemons both
with the solar matter and with the scintillator material, the celestial mechanics and statistical
evolution of their ensemble after capture by the Solar system, the part played by the initial
conditions and the starting velocity dispersion in the daemon population of the galactic disk,
refinement of the apex relative to this population (it seems it is closer to the apex relative to the
interstellar gas, not the stars; see Fig. 1), transfer to SEECHOs and, subsequently, to NEACHOs
and GESCOs etc.) would require a much more careful and comprehensive analysis. This
would permit a quantitative comparison of theoretical predictions with future experimental
results.
References
Allen C.W., 1973. Astrophysical Quantities, 3
ed., Univ. of London, The Athlone Press.
Bahcall J.H., Flynn C., Gould A., 1992. Local dark matter from a carefully selected sample,
Astrophys. J., 389, 234-250.
Bernabei R., Belli P., Cappella F., Cerulli R., Montecchia F., Nozzoli F., Incicchitti A.,
Prosperi D., Dai C.J., Kuang H.H., Ma J.M., Ye Z.P., 2003. Dark Matter Search, Riv.
Nuovo Cimento, 20(1), 1-73; astro-ph/0307403.
Bernabei R., Belli P., Montecchia F., Nozzoli F., Cappella F., Incicchitti A., Prosperi D., R.,
R. Cerulli, C.J. Dai, H.L. He, H.H. Kuang, J.M. Ma, Z.P. Ye, 2006, Investigating
pseudoscalar and scalar dark matter, Int. J. Mod. Phys. A 21, 1445-1469.
Drobyshevski E.M., 1996. Solar neutrinos and dark matter: cosmions, CHAMPs or…
DAEMONs? Mon. Not. Roy. Astron Soc., 282, 211-217.
Drobyshevski E.M., 1997a. If the dark matter objects are electrically multiply charged: New
opportunities, in: Dark Matter in Astro- and Particle Physics (H.V.Klapdor-
Kleingrothaus and Y.Ramachers, eds.), World Scientific, pp.417-424.
Drobyshevski E.M., 1997b. Dark Electric Matter Objects (daemons) and some possibilities of
their detection, in: “COSMO-97. First International Workshop on Particle Physics and
the Early Universe” (L.Roszkowski, ed.), World Scientific, pp.266-268.
Drobyshevski E.M., 2004. Hypothesis of a daemon kernel of the Earth, Astron. Astrophys.
Trans., 23, 49-59; astro−ph/0111042.
Drobyshevski E.M., 2005a. Daemons, the "Troitsk anomaly" in tritium beta spectrum, and the
KATRIN experiment, hep-ph/0502056.
Drobyshevski E.M., 2005b. Detection of Dark Electric Matter Objects falling out from Earth-
crossing orbits, in: “The Identification of Dark Matter” (N.J.Spooner and
V.Kudryavtsev, eds.), World Scientific, pp.408-413.
Drobyshevski E.M., Beloborodyy M.V., Kurakin R.O., Latypov V.G., Pelepelin K.A., 2003.
Detection of several daemon populations in Earth-crossing orbits, Astron. Astrophys.
Trans., 22, 19-32; astro-ph/0108231.
Drobyshevski E.M., Drobyshevski M.E., 2006. Study of the spring and autumn daemon-flux
maxima at the Baksan Neutrino Observatory, Astron. Astrophys. Trans., 25, 57-73;
astro-ph/0607046.
Eddington A.S., 1926. The Internal Constitution of the Stars, Camb. Univ. Press, Cambridge.
Evans N.W., Belokurov V., 2005. RIP: The MACHO era (1974-2004), in: “The Identification
of Dark Matter” (N.J.Spooner and V.Kudryavtsev, eds.), World Scientific, 2005,
pp.141-150; astro-ph/0411222.
Everhart E., 1974. Implicit single sequence methods for integrating orbits, Celestial
Mechanics, 10, 35-55.
Foot R., 2006. Implications of the DAMA/NaI and CDMS experiments for mirror matter-type
dark matter, Phys.Rev. D74, 023514; astro-ph/0510705.
Foot R., Mitra S., 2003. Have mirror micrometeorites been detected? Phys.Rev. D68, 071901;
hep-ph/0306228.
Primack J.R., Seckel D., Sadoulet B., 1988. Detection of cosmic dark matter, Annu. Rev.
Nucl. Part. Sci., 38, 751-807.
|
0704.0984 | Transfer of a Polaritonic Qubit through a Coupled Cavity Array | Transfer of a Polaritonic Qubit through a Coupled Cavity Array
Sougato Bose 1, Dimitris G. Angelakis2,∗ and Daniel Burgarth1,3
1Department of Physics and Astronomy,
University College London, Gower St., London WC1E 6BT, UK
2Centre for Quantum Computation,
Department of Applied Mathematics and Theoretical Physics,
University of Cambridge, Wilberforce Road, CB3 0WA, UK and
3Computer Science Departement, ETH Zürich,
CH-8092 Zürich, Switzerland
Abstract
We demonstrate a scheme for quantum communication between the ends of an array of coupled
cavities. Each cavity is doped with a single two level system (atoms or quantum dots) and the
detuning of the atomic level spacing and photonic frequency is appropriately tuned to achieve
photon blockade in the array. We show that in such a regime, the array can simulate a dual rail
quantum state transfer protocol where the arrival of quantum information at the receiving cavity
is heralded through a fluorescence measurement. Communication is also possible between any pair
of cavities of a network of connected cavities.
∗Electronic address: [email protected]
http://arxiv.org/abs/0704.0984v1
I. INTRODUCTION
Recently, the exciting possibility of coupling high Q cavities directly with each other
has materialized in a variety of settings, namely fiber coupled micro-toroidal cavities [1],
arrays of defects in photonic band gap materials (PBGs) [2, 3] and microwave stripline
resonators joined to each other [4]. A further exciting development has been the ability to
couple each such cavity to a quantum two-level system which could be atoms for micro-
toroid cavities, quantum dots for defects in PBGs or superconducting qubits for microwave
stripline resonators [5]. Possibilities with such systems are enormous and include the
implementation of optical quantum computing [6], the production of entangled photons [7],
and the realization of Mott insulating and superfluid phases and spin chain systems [8, 9, 10].
Such settings can also be used to verify the possibilities of distributed quantum computation
involving atoms coupled to distinct cavities [11] and to generate cluster states for efficient
measurement-based quantum computing schemes [12].
When the coupling between the cavity field and the two-level system (which we will just
call atom henceforth, noting that they need not necessarily be only atoms) is very strong (in
the so called strong coupling regime), each cavity-atom unit behaves as a quantum system
whose excitations are combined atom-field excitations called polaritons. The nonlinearity
induced by this coupling, otherwise known as the photon blockade effect [13], forces the
system into a state where at most one excitation (polariton) per site is allowed. However, a
superposition of two different polaritons, which is equivalent to a superposition of two energy
levels of the cavity-atom system, is indeed allowed and naturally the question arises as to
whether that can be used as a qubit. Purely atomic qubits (formed from purely atomic energy
levels) in cavities have long been discussed in the literature (see references cited in [11], for
example), but such qubits in distinct cavities do not directly interact with each other unless
mediated through light. On the other hand, a purely photonic field in a cavity is not easy to
manipulate in the sense of one being able to create arbitrary superpositions of its states by an
external laser. Being a mixed excitation, polaritons interact with each other as well as permit
easy manipulations with external lasers in much the same manner as one would manipulate
and superpose atomic energy levels. Is there any interesting form of quantum information
processing that can be performed by encoding the quantum information in a superposition of
polaritonic states? While an ultimate aim might be to accomplish full quantum computation
FIG. 1: A series of coupled cavities coupled through light and the polaritonic energy levels for
two neighbouring cavities. These polaritons involve an equal mixture of photonic and atomic
excitations and are defined by creation operators P^(±,n)†_k = (|g, n〉_k〈g, 0|_k ± |e, n − 1〉_k〈g, 0|_k)/√2,
where |n〉_k, |n − 1〉_k and |0〉_k denote the n, n − 1 and 0 photon Fock states in the kth cavity. The
polaritons of the kth atom-cavity system are denoted as |n±〉_k and given by |n±〉_k = (|g, n〉_k ± |e, n − 1〉_k)/√2
with energies E^±_n = nω_d ± g√n.
with polaritonic qubits (it has recently been shown that this is possible using the cluster state
approach [12]), we concentrate here on the more modest aim of transferring the state of a
qubit encoded in polaritonic states (a polaritonic qubit) from one end of the coupled cavity
array to another.
Assume a chain of N coupled cavities. We will describe the system dynamics using
the operators corresponding to the localized eigenmodes (Wannier functions), a†_k (a_k). The
Hamiltonian is given by

H = ω_d Σ_k a†_k a_k + A Σ_k (a†_k a_{k+1} + H.C.) ,   (1)

and corresponds to a series of quantum harmonic oscillators coupled through hopping photons.
The photon frequency and hopping rate are ω_d and A respectively, and no nonlinearity is
present yet. Assume now that the cavities are doped with two-level systems (atoms/quantum
dots/superconducting qubits), with |g〉_k and |e〉_k their ground and excited states at site k.
The Hamiltonian describing the system is the sum of three terms: H^free, the Hamiltonian
for the free light and dopant parts; H^int, the Hamiltonian describing the internal coupling of
the photon and dopant in a specific cavity; and H^hop, for the light hopping between cavities.

H^free = ω_d Σ_k a†_k a_k + ω_0 Σ_k |e〉_k〈e|_k   (2)

H^int = g Σ_k (a†_k |g〉_k〈e|_k + H.C.)   (3)

H^hop = A Σ_k (a†_k a_{k+1} + H.C.)   (4)
where g is the light-atom coupling strength. The H^free + H^int part of the Hamiltonian can
be diagonalized in a basis of mixed photonic and atomic excitations, called polaritons (Fig. 1).
While |g, 0〉_k is the ground state of each atom-cavity system, the excited eigenstates of
the kth cavity-atom system are given by |n±〉_k = (|g, n〉_k ± |e, n − 1〉_k)/√2 with energies
E^±_n = nω_d ± g√n. One can then define polariton creation operators P^(±,n)†_k by the action
P^(±,n)†_k |g, 0〉_k = |n±〉_k. As we have proved elsewhere, due to the blockade effect, once a site
is excited to |1−〉 or |1+〉, no further excitation is possible [8]. In simplified terms, this is
because it costs more energy to add another excitation to an already filled site, so the system
prefers to deposit it, if possible, in a nearby empty site. This effect has recently led to
the prediction of a Mott phase for polaritons in coupled cavity systems [8]. If we restrict to
the low-energy dynamics of the system, such that states with n > 1 are not occupied, which
can be ensured through appropriate initial conditions, the Hamiltonian becomes (in the
interaction picture):

H_I = A Σ_k P^(+)†_k P^(+)_{k+1} + A Σ_k P^(−)†_k P^(−)_{k+1} + H.C. ,   (5)

where P^(±)†_k ≡ P^(±,1)†_k is the polaritonic operator creating excitations to the first polaritonic
manifold (Fig. 1). In deriving the above, the logic is that terms of the type P^(+)†_k P^(−)_{k+1},
which inter-convert between polaritons, are fast rotating and vanish [8].
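As a quick numerical sanity check of the polariton energies used above (our own illustration, not part of the paper), the following sketch diagonalizes the on-resonance Jaynes-Cummings Hamiltonian H^free + H^int of a single doped cavity in a truncated Fock space and confirms the spectrum E^±_n = nω_d ± g√n; the parameter values and the truncation are illustrative assumptions.

import numpy as np

wd = 1.0       # photon frequency (atomic frequency w0 = wd on resonance)
g = 1e-2 * wd  # atom-photon coupling (illustrative value)
nmax = 4       # photon-number truncation

# Basis: |g,n> for n = 0..nmax and |e,n> for n = 0..nmax-1, so every
# excitation manifold n = 1..nmax contains the full pair {|g,n>, |e,n-1>}.
dim = 2 * nmax + 1
H = np.zeros((dim, dim))

def idx(excited, n):           # excited = 0 -> |g,n>, excited = 1 -> |e,n>
    return n if excited == 0 else nmax + 1 + n

for n in range(nmax + 1):
    H[idx(0, n), idx(0, n)] = wd * n            # field energy of |g,n>
for n in range(nmax):
    H[idx(1, n), idx(1, n)] = wd * n + wd       # |e,n>: field energy + w0
    # g (a^dagger |g><e| + H.C.) couples |e,n> <-> |g,n+1> with strength g*sqrt(n+1)
    H[idx(1, n), idx(0, n + 1)] = g * np.sqrt(n + 1)
    H[idx(0, n + 1), idx(1, n)] = g * np.sqrt(n + 1)

evals = np.sort(np.linalg.eigvalsh(H))
expected = np.sort([0.0] + [n * wd + s * g * np.sqrt(n)
                            for n in range(1, nmax + 1) for s in (+1, -1)])
print(np.allclose(evals, expected))             # True: spectrum is n*wd ± g*sqrt(n)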
We are now in a position to outline the basic idea behind the protocol. A qubit is encoded
as a superposition of the polaritonic states |1+〉 and |1−〉 in the first cavity. The multi-
cavity system is then allowed to evolve according to HI . At the receiving cavity at the other
end we then do a measurement inspired by a dual rail quantum state transfer protocol [14]
which heralds the perfect reception of the qubit for one outcome of the measurement, while
for the other outcome of the measurement the process is simply to be repeated once more
after a time delay. Before presenting the scheme in detail, let us first present a special set of
initial conditions under which HI describes the dynamics of two identical parallel uncoupled
spin chains.
Suppose we are restricting our attention to a dynamics in which the initial state is obtained
by the action of only one of the operators among P^(+)†_k and P^(−)†_k on the state ⊗_k |g, 0〉_k,
which has all the sites in the state |g, 0〉. As P^(+)†_k does not act after P^(−)†_k has acted and
vice versa, under the above restricted initial conditions the system is going to evolve only
according to one of the terms in Eq. (5), i.e., only according to the first or the second term.
To be more precise, if we start with a state P^(+)†_j ⊗_k |g, 0〉_k only the term
A Σ_{k=1}^{N−1} P^(+)†_k P^(+)_{k+1} is going to be active and cause the time evolution, while if we start with
the state P^(−)†_j ⊗_k |g, 0〉_k only the term A Σ_{k=1}^{N−1} P^(−)†_k P^(−)_{k+1} will be responsible for the time
evolution. Each of the operators P^(+)†_k and P^(−)†_k individually has the same algebra as the
Pauli operator σ^+_k = σ^x_k + iσ^y_k, which makes both parts of the Hamiltonian individually
equivalent to an XY spin chain with a Hamiltonian H_XY = A Σ_k (σ^+_k σ^−_{k+1} + σ^−_k σ^+_{k+1}). The
restricted set of initial states mentioned above can be mapped on to those of two parallel
chains of spins labeled as chain I and chain II respectively. Let |0〉 and |1〉 be spin-up and
spin-down states of a spin along the z direction, |0〉(I)|0〉(II) be a state with all spins of
both chains being in the state |0〉, |k〉(I)|0〉(II) represent the state obtained from |0〉(I)|0〉(II)
by flipping only the kth spin of chain I and |0〉(I)|k〉(II) represents the state obtained from
|0〉(I)|0〉(II) by flipping only the kth spin of chain II. Then, the restricted class of initial
conditions for polaritonic states can be mapped on to states of the parallel spin chains as
|g, 0〉_1 |g, 0〉_2 .... |g, 0〉_N → |0〉^(I) |0〉^(II) ,   (6)
|g, 0〉_1 .. |g, 0〉_{k−1} |1+〉_k |g, 0〉_{k+1} .. |g, 0〉_N → |k〉^(I) |0〉^(II) ,   (7)
|g, 0〉_1 .. |g, 0〉_{k−1} |1−〉_k |g, 0〉_{k+1} .. |g, 0〉_N → |0〉^(I) |k〉^(II)   (8)
Under the above mapping and under the above restrictions on state space, HI becomes
equivalent to the Hamiltonian of two identical parallel XY spin chains completely decoupled
from each other. Precisely such a Hamiltonian is known to permit a heralded perfect quan-
tum state transfer from one end of a pair of parallel spin chains to the other [14], and we
discuss that below.
Spin chains are capable of transmitting quantum states by natural time evolution [15].
However, it is well known that, due to the dispersion along the chain [16], the fidelity of transfer
is quite low except for specifically engineered couplings in the spin chains [17, 18] or when the
receiver has access to a significant memory [19]. The advantage of the polariton system is
that we have two parallel and identical chains. We have recently shown how this can be
made use of in a dual rail protocol [14]. The main idea of this protocol is to encode the
state in a symmetric way on both chains. The sender Alice encodes a qubit α|0〉 + β|1〉 to
be transmitted as

|Φ(0)〉 = α|0〉^(I)|1〉^(II) + β|1〉^(I)|0〉^(II) ,   (9)

which evolves with time as

|Φ(t)〉 = Σ_j f_{1j}(t) (α|0〉^(I)|j〉^(II) + β|j〉^(I)|0〉^(II)) ,   (10)

where f_{1j}(t) is the transition amplitude of a spin flip from the 1st to the jth site of a chain.
Clearly, if after waiting a while Bob performs a joint parity measurement on the two spins
at his (receiving) end of the chain and the parity is found to be “odd”, then the state of the
whole system will be projected to α|0〉(I)|N〉(II) + β|N〉(I)|0〉(II), which implies the perfect
reception of Alice’s state (albeit encoded in two qubits now). The protocol presented in
Ref.[14] in fact suggested the use of a two qubit quantum gate at Bob’s end which measured
both the parity as well as mapped the state to a single qubit state. However, here the
presentation as above suffices for what follows. Physically, this protocol, which is called
the dual rail protocol, allows one to perform measurements on the chain that monitor the
location of the quantum information without perturbing it. As such it can also be used for
arbitrary graphs of spins (as long as there are two identical parallel graphs) with the receiver
at any node of the graph. Furthermore, for the Hamiltonian at hand (XY spin model) it is
known [20] that the probability of success converges exponentially fast to one if the receiver
performs regular measurements. The time it takes to reach a transfer fidelity F scales as
t = 0.33 A^{−1} N^{5/3} |ln(1 − F)| .   (11)
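To make the dual rail idea concrete, the following small numerical sketch (our illustration, not code from Refs. [14, 20]) simulates the repeated-measurement protocol on a single XY chain: in the single-excitation sector the chain Hamiltonian reduces to A times the adjacency matrix of the line, f_{1j}(t) is an entry of exp(−iHt), and the cumulative success probability of the checks at site N approaches one. The chain length, hopping rate and waiting time are assumptions.

import numpy as np
from scipy.linalg import expm

N, A, tau = 10, 1.0, 2.5        # sites, hopping rate, time between checks (assumed)
H = A * (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))
U = expm(-1j * H * tau)

psi = np.zeros(N, dtype=complex)
psi[0] = 1.0                    # excitation injected at site 1
p_success = 0.0
for step in range(200):
    psi = U @ psi
    p_arrive = abs(psi[-1]) ** 2             # conditional prob. that the check at site N succeeds
    p_success += (1 - p_success) * p_arrive  # cumulative success probability
    psi[-1] = 0.0                            # a failed check projects out site N
    psi /= np.linalg.norm(psi)
print(f"success probability after 200 checks: {p_success:.6f}")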
The difference between our current coupled cavity system and the spin chain system
considered in [14] is that in our case, the two chains are effectively realized in one sys-
tem. Therefore, it is not necessary to perform a two-qubit measurement such as a parity
measurement at the receiving ends of the chain. The qubit to be transferred is encoded as
α′|1+〉_1 + β′|1−〉_1 ≡ α|e, 0〉_1 + β|g, 1〉_1. This state can be created by the sender Alice using
a resonant Jaynes-Cummings interaction between the atom and the cavity field. Then the
whole evolution will be exactly as in Eq. (10), with the spin chain states replaced
by polaritonic states according to the mapping given in Eqs. (6)-(8). The measurement to
herald the arrival of the state at the receiving end is accomplished by repeatedly exciting (shelving)
|g, 0〉 to a metastable state with an appropriate laser (which does nothing
if the system is in either of the states |1±〉). The fluorescence emitted on decay of the atom from this
metastable state to |g, 0〉 implies that another measurement has to be done after waiting
a while. No fluorescence implies success and completion of the perfect transfer of the po-
laritonic qubit. Interestingly enough, the measurements at the receiving cavity need not
be snapshot measurements at regular time intervals, but can also be continuous measurements,
under which the scheme can have very similar behavior to the snapshot case for an
appropriate strength of the continuous measurement process [21].
We now briefly discuss the parameter regime needed for the scheme of this paper. In order
to achieve the required limit of no more than one excitation per site, the parameters should
have the following values[8]. The ratio between the internal atom-photon coupling and the
hopping of photons down the chain should be g/A = 10². We should be on resonance,
∆ = 0, and the cavity/atomic frequencies ω_d, ω_0 ∼ 10⁴ g, which means we should be well in
the strong coupling regime. The losses should also be small, g/max(κ, γ) ∼ 10³, where κ
and γ are cavity and atom/other qubit decay rates. These values are expected to be feasible
in both toroidal microcavity systems with atoms and stripline microwave resonators coupled
to superconducting qubits [5], so that the above states are essentially unaffected by decay
for a time 10/A (10 ns for the toroidal case and 100 ns for microwave stripline resonator type
implementations).
We conclude with a brief discussion about the positive features of the scheme and sit-
uations in which the scheme might be practically relevant. The scheme combines the best
aspects of both atomic and photonic qubits as far as communication is concerned. The
atomic content of the polaritonic state enables the manipulation to create the initial state
and measure the received state of the cavity-atom systems with external laser fields, while
the photonic component enables its hopping from cavity to cavity thereby enabling transfer.
Unlike quantum communication schemes where an atomic qubit first has to be mapped to
the photonic state in the transmitting cavity and be mapped back to an atomic state in
the receiving cavity by external lasers, here the polaritonic qubit simply has to be created.
Once created, it will hop by itself through the array of cavities without the need of further
external control or manipulation.
In what situations might such a scheme have some practical utility? One case is when
Alice “knows” the quantum state she has to transmit to Bob. She can easily prepare it as
a polaritonic state in her cavity and then let Bob receive it through the natural hopping
of the polaritons. Another situation is when a multiple number of cavities are connected
with each other through an arbitrary graph. The protocol of Ref.[14] still works fine in
this situation with Alice’s qubit being receivable in any of the cavities simply by doing the
receiving fluorescence measurements in that cavity.
We acknowledge the hospitality of Quantum Information group in NUS Singapore, and
the Kavli Institute for Theoretical Physics where discussions between DA and SB took place
during joint visits. This work was supported in part by the QIP IRC (GR/S82176/01),
the European Union through the Integrated Projects QAP (IST-3-015848), SCALA (CT-
015714) and SECOQC, and an Advanced Research Fellowship from EPSRC.
[1] D. K. Armani, T. J. Kippenberg, S.M. Spillane & K. J. Vahala. Nature 421, 925 (2003).
[2] D.G. Angelakis, E. Paspalakis and P.L. Knight, Contemp. Phys. 45, 303 (2004).
[3] A. Yariv, Y. Xu, R. K. Lee and A. Scherer. Opt. Lett. 24, 711 (1999); M. Bayindir, B.
Temelkuran and E. Ozbay. Phys. Rev. Lett., 84, 2140 (2000); J. Vuckovic, M. Loncar, H.
Mabuchi. & Scherer A. Phys. Rev. E, 65, 016608 (2001);
[4] Barbosa Alt H., Graef H.-D.C. et al., Phys. Rev. Lett. 81 (1998) 4847; A. Blais, R-S. Huang,
et al., Phys. Rev. A 69, 062320 (2004).
[5] G. S. Solomon, M. Pelton, and Y. Yamamoto Phys. Rev. Lett. 86, 3903 (2001); A. Badolato
et al, Science 308 1158 (2005); A. Wallraff, D. I. Schuster, et al., Nature 431, 162-167 (2004);
T. Aoki et al. quant-ph/0606033; W.T.M. Irvine et al., Phys. Rev. Lett. 96, 057405 (2006).
[6] D.G. Angelakis, M. Santos, V. Yannopapas and A. Ekert, quant-ph/0410189, Phys. Lett.
A 362, 377 (2007).
[7] D. G. Angelakis and S. Bose. J. Opt. Soc. Am. B 24, 266-269 (2007).
[8] D. G. Angelakis, M. Santos and S. Bose. quant-ph/0606157.
[9] M. J. Hartmann, F. G. S. L. Brandao and M. B. Plenio, Nature Physics 2, 849 (2006).
[10] A. Greentree et al., Nature Physics 2, 856 (2006).
[11] A. Serafini, S. Mancini and S. Bose, Phys. Rev. Lett. 96, 010503 (2006).
[12] D.G. Angelakis and A. Kay, quant-ph/0702133.
[13] K. M. Birnbaum, A. Boca et al. Nature 436, 87 (2005); A. Imamoglu, H. Schmidt, et al. Phys.
Rev. Lett. 79 1467 (1997).
[14] D. Burgarth and S. Bose, Phys. Rev. A 71, 052315 (2005).
[15] S. Bose, Phys. Rev. Lett 91, 207901(2003).
[16] T. J. Osborne and N. Linden, Phys. Rev. A 69, 052315 (2004).
[17] M. Christandl, N. Datta, A. Ekert and A. J. Landahl, Phys. Rev. Lett. 92, 187902 (2004).
[18] M. B. Plenio and F. L. Semiao, New J. Phys. 7, 73 (2005).
[19] V. Giovannetti and D. Burgarth, Phys. Rev. Lett. 96, 030501 (2006).
[20] D. Burgarth, V. Giovannetti and S. Bose, J. Phys. A: Math. Gen. 38, 6793 (2005).
[21] K. Shizume, K. Jacobs, D. Burgarth & S. Bose, arXiv.org eprint: quant-ph/0702029 (2007).
|
0704.0985 | Architecture for Pseudo Acausal Evolvable Embedded Systems |
Architecture for Pseudo Acausal Evolvable
Embedded Systems
Mohd Abubakr and Rali Manikya Vinay,
Electronics and Communication Engineering
Gokaraju Rangaraju Institute of Engineering and Technology
Miyapur, Hyderabad, 500045 INDIA
Email id: [email protected]
Abstract-Advances in semiconductor technology are
contributing to the increasing complexity in the design of
embedded systems. Architectures with novel features such as
evolvability and autonomous behavior have attracted a lot of
attention. This paper demonstrates conceptually that evolvable
embedded systems can be characterized on the basis of an acausal nature.
Since in acausal systems the future input needs to be known,
we introduce a mechanism by which the system predicts its
future inputs and thereby exhibits a pseudo-acausal nature. An embedded
system that uses this theoretical framework of acausality is proposed.
Our method aims at a novel architecture that features
hardware evolvability and autonomous behavior alongside pseudo
acausality. Various aspects of this architecture are discussed in
detail along with their limitations.
I. INTRODUCTION
Consumer demands are the main source of inspiration for
innovative designs of embedded systems. With the increase
in demand for dependable machines, new architectures
for autonomous devices have emerged in the market. These
devices include smart phones and smart cars that have the
capability of taking decisions on their own. The success of
these devices has led the industry to invest in extending
autonomous machine behavior to all applications. Research
is now seeking a course that makes machines far more capable
and intelligent: not only 'smart' enough to take decisions, but
able to forecast the possibly harsh surroundings they will face
and to acclimatize by modifying themselves at the hardware level. Evolvable
embedded systems are one such highly demanded and heavily
invested field in the present research scenario. ‘Evolvable’
implies the autonomous behavior of a system that is capable
of repairing itself and formulating a solution on its own. This
autonomous behavior can be achieved through genetic
programming and artificial intelligence.
In this paper, a new class of embedded systems is
discussed that has certain distinguishable properties. The
evolvable concept of embedded systems has been studied
recently and a generic methodology has been developed [1].
We discuss the possibility of using the theoretical framework
of acausality in developing evolvable embedded systems and
propose a novel architecture for implementing the same.
The paper is organized as follows. Motivation and prior
work are discussed in Section II. A brief explanation of
acausality is given in Section III. A proposed architecture to
implement the projected technique is furnished in
Section IV, and in its sub-sections each block of the architecture
is briefly discussed. The working of the architecture is
explained in Section V, followed by the conclusion.
_________________________________________
Research funded by: GRIET, Miyapur, Hyderabad.
II. MOTIVATION AND PRIOR WORK
Conventional embedded systems consist of a micro-
controller and DSP components realized using field
programmable gate arrays (FPGAs), complex programmable
logic devices (CPLDs), etc. With the increasing trend of
System on Chip (SoC) integration, mixed signal design on
a single chip has become achievable. Such systems are
extensively used in the areas of wireless communication,
networking, signal processing and multimedia.
In order to increase the quality of service (QoS), an
embedded system needs to be fault tolerant, must consume
low power, must have a long lifetime and should be
economically feasible. These services have become a
common specification for all embedded systems, and
consequently, to attract attention from the commercial market,
different researchers have come up with novel solutions to
redefine the QoS of embedded systems. Future embedded
systems will incorporate evolutionary techniques that repair,
evolve and adapt themselves to the conditions they are put in.
Such systems can be termed autonomous designs.
Autonomous designs that use genetic algorithms and
artificial intelligence to evolve new hardware systems based
on prevailing conditions have generated interest in recent years.
These systems are based upon adaptive computational
machines that produce an entirely new solution on their own
when the environment becomes hostile. Here a hostile environment
refers to changes in temperature or an increase in radiation
content; under these conditions an autonomous system
needs the ability to modify and evolve hardware that
is less susceptible to the hostile environment. The classification
of embedded systems as given in [1] is as follows.
1) Class 0 (fixed software and hardware): Software as well
as hardware together are defined at the design time. Neither
reconfiguration nor adaptation is performed. This class also
contains the systems with reconfigurable FPGAs that are
only configured during reset. A coffee machine could be a
good example.
2) Class 1 (reconfigurable SW/HW): Software or
hardware (a configuration of an FPGA) is changed during
the run in order to improve performance and the utilization
of resources (e.g. in reconfigurable computing). Evolutionary
algorithm can be used to schedule the sequence of
configurations at the compile time, but not at the operational
time.
3) Class 2 (evolutionary optimization): Evolutionary
algorithm is a part of the system. Only some coefficients in
SW (some constants) or HW (e.g. register values) are
evolved, i.e. limited adaptability is available. Fitness
calculation and genetic operations are performed in software.
Example: an adaptive filter changing coefficients for a fixed
structure of an FIR filter.
4) Class 3a (evolution of programs): Entire programs are
constructed using genetic programming in order to ensure
adaptation or high-performance computation. Everything is
performed in software [2].
5) Class 3b (evolution of hardware modules): Entire
hardware modules are evolved in order to ensure adaptation,
high-performance computation, fault-tolerance or low-
energy consumption. Fitness calculation and genetic
operations are carried out in software or using a specialized
hardware. Reconfigurable hardware is configured using
evolved configurations. The system typically consists of a
DSP and a reconfigurable device. Example: NASA JPL
SABLES [3].
6) Class 4 (evolvable SoC): All components of class 3b
are implemented on a single chip. It means that the SoC
contains a reconfigurable device. Some of such devices have
been commercialized up to now, for example, a data
compression chip [4].
7) Class 5 (evolvable IP cores): All components of class
3b are implemented as IP cores, i.e. at the level of HDL
source code (Hardware Description Language). It requires
describing the reconfigurable device at the HDL level as well.
An approach—called the virtual reconfigurable circuit—has
been introduced to deal with this problem [15]. Then the
entire evolvable subsystem can be realized in a single FPGA.
8) Class 6 (co-evolving components): The embedded
system contains two or more co-evolving hardware or
software devices. These co-evolving components could be
implemented as multiprocessors on a SoC or as evolvable IP
cores on an FPGA. No examples representing this class are
available nowadays.
Figure 1: Evolvable component placed in evolvable embedded
system. Components 1–4 represent the environment for the
evolvable component in this example.
Any embedded system can be categorized through the
classes given above. Reconfigurable computing has
significantly contributed to the idea of having evolvable
hardware by dynamically uploading/removing hardware
components from the hardware module library. According
to [1], an evolvable embedded system can be defined as “a
reconfigurable embedded system in which an evolutionary
algorithm is utilized to dynamically modify some of system
(software and/or hardware) components in order to adapt the
behavior of the system to a changing environment”. Figure 1
shows a general block diagram for an evolvable embedded
system.
III. ACAUSALITY
Conventional embedded systems design is based on the assumption that
a system generates its output from present and past inputs;
such embedded systems are called real-time
embedded systems. The new class of embedded systems introduced
here is based on the assumption that the output is
generated considering the future inputs as well. Such systems
can be termed acausal embedded systems. Due to the
uncertainty in predicting the future inputs, these systems
require a specialized master algorithm that evolves hardware
to implement the specified output, and this evolved hardware is
constructed from the available resources.
We define acausality as the property of systems
whose present outputs depend on past, present and
future inputs. Acausality is in total contrast with the present
convention of embedded systems, which rely purely on present
and past inputs.
IV. PROPOSED SYSTEM
Various proposals for autonomous embedded systems are
available in the literature [16]. The system proposed here
belongs to the Class 6 group of systems. It can evolve hardware
and software on its own using artificial intelligence
algorithms and an available set of resources. Another major
block of the proposed architecture is the future input
predictor, whose details are explained in the next sections.
The use of artificial intelligence in determining a suitable
solution for the predicted future inputs is essential and plays
an important role in determining the efficiency of the system.
Figure 2 shows the proposed architecture of an acausal
system belonging to the Class 6 group of evolvable embedded
systems. It can evolve hardware and software on its own
using artificial intelligence algorithms and an available set of
resources. Contemporary technology allows the
embedded hardware creator to make use of
reconfigurable hardware resources to build an evolved
design. Reliable reconfigurable hardware platforms include
the Field Programmable Gate Array (FPGA), the FPAA (Field
Programmable Analog Array), the FPTA (Field Programmable
Transistor Array), nano devices, reconfigurable antennas,
MEMS (micro-electromechanical systems), reconfigurable optics
and a few other selected components.
The next sections give a generic description of each
block present in the proposed architecture shown in
Figure 2.
A. Past Input Summarizer
Figure 2: The generalized architecture of the proposed model

Functionality of the past input summarizer (PIS) is to store
the past values of the inputs in allocated memory. Since it is
practically impossible to store all the past inputs in memory,
we use an algorithm to compress and summarize the
past inputs. Hence we can define the PIS as follows: “it stores relevant
information of the past inputs which can be applicable in
predicting the future inputs, in a limited memory by
following an optimized algorithm”. The compression and
summarization algorithms can be chosen based upon the
sensitivity of the application. Conventional embedded
systems have very limited memory due to space
constraints. Utilization of the newly developed optical memory
concept can be beneficial in increasing the capacity of past
input storage [17].
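As an illustration only (the paper does not prescribe a particular algorithm), a bounded-memory summarizer could keep a short window of raw samples and fold older samples into running statistics; the window size and data below are assumptions.

from collections import deque

class PastInputSummarizer:
    def __init__(self, window=64):
        self.recent = deque(maxlen=window)   # most recent raw samples
        self.count = 0                       # how many samples have been summarized
        self.mean = 0.0                      # running mean of older samples
        self.m2 = 0.0                        # running sum of squared deviations (Welford)

    def add(self, x):
        if len(self.recent) == self.recent.maxlen:
            old = self.recent[0]             # oldest sample is about to be dropped
            self.count += 1                  # fold it into the running statistics
            delta = old - self.mean
            self.mean += delta / self.count
            self.m2 += delta * (old - self.mean)
        self.recent.append(x)

    def summary(self):
        var = self.m2 / self.count if self.count > 1 else 0.0
        return {"recent": list(self.recent), "older_mean": self.mean,
                "older_count": self.count, "older_var": var}

pis = PastInputSummarizer(window=4)
for sample in [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]:
    pis.add(sample)
print(pis.summary())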
B. Present Input
Acquisition of present inputs is the fundamental
functionality of this block. There may be a wide variety of
inputs depending on the application, i.e., signals
obtained through sensor elements, transmission
receivers, transducers, etc. At this juncture noise factors are
taken into consideration; consequently this block plays an
important role in determining the efficiency of the entire
embedded system. After execution of a particular input, it is
sent to the past input summarizer and a new input is extracted. It
is a requisite that noise be eliminated at this block itself,
otherwise the noise propagates into the past input summarizer.
C. Future Input Predictor and Analyzer
The future input predictor (FIP) block, as the name indicates,
predicts the future inputs. A provision is also made to pass
future inputs in through another device; this is done using
either an antenna or special sensor networks. Sometimes
future inputs can be known through external agents, so this
provision forms an interface between the external agent and
the system and gives the block a dual means of predicting the
future inputs. Many algorithms for future prediction are
available in the literature; prediction and estimation are, for
example, extensively used in financial decisions. Future inputs can
also be predicted using the data obtained from past inputs:
using pattern recognition algorithms on discrete sets of past
inputs, the future inputs can be determined. Plenty
of algorithms have been developed to solve pattern recognition
from the available data [5,6,7,8].
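As a hypothetical example of one of the simplest such predictors (not prescribed by the paper), a linear autoregressive model can be fitted to the summarized past inputs by least squares and iterated forward; the model order and toy data are assumptions.

import numpy as np

def fit_ar(history, order=3):
    """Fit x[t] ~ w . x[t-order:t] by least squares."""
    X = np.array([history[i:i + order] for i in range(len(history) - order)])
    y = np.array(history[order:])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def predict(history, w, steps=5):
    buf = list(history)
    out = []
    for _ in range(steps):
        nxt = float(np.dot(w, buf[-len(w):]))   # one-step-ahead prediction
        out.append(nxt)
        buf.append(nxt)                         # feed the prediction back in
    return out

past = [np.sin(0.3 * t) for t in range(40)]     # toy sensor history (assumed)
w = fit_ar(past, order=3)
print(predict(past, w, steps=5))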
D. Embedded Hardware Creator
This is the heart of the acausal embedded system, as it is the
most important block of all. The embedded hardware creator
(EHC) should have an autonomous capability, i.e., it should be
capable of making decisions. This can be achieved using
advanced techniques such as artificial intelligence (adaptable
neural networks), genetic computing, evolutionary
computing, etc.
Artificial neural networks (ANNs) are models of the human
brain used to perform tasks based on self-learning and
adaptation to the environment. Creating ANNs that can learn and
generalize information from their surroundings is the first step
towards autonomous computational machines. Due to
practical limitations, the learning time for ANNs is very high
[9,10]. Another mechanism that can help in realizing the EHC is
cellular automata. Cellular automata are discrete, spatially
extended dynamical systems that have been extensively
studied as a model of computational devices [11,12].
Evolutionary algorithms such as genetic algorithms have
generated huge interest in this area; genetic algorithms are
based upon selection and mutation [13,14]. Such techniques
can be useful in implementing the EHC.
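As a minimal illustration of the selection-and-mutation loop mentioned above (an assumption for concreteness, not the authors' implementation), an EHC could evolve a candidate configuration bitstring against any fitness score obtained from simulation or on-chip measurement; the toy target and rates below are illustrative.

import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]       # toy "ideal" configuration (assumed)

def fitness(bits):
    return sum(b == t for b, t in zip(bits, TARGET))  # stand-in for a measured score

def mutate(bits, rate=0.1):
    return [b ^ 1 if random.random() < rate else b for b in bits]

pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for gen in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:5]                                 # selection
    pop = parents + [mutate(random.choice(parents)) for _ in range(15)]  # mutation
best = max(pop, key=fitness)
print(gen, fitness(best), best)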
E. Available Hardware Resources
For the construction of the hardware devices, a certain
amount of resources is allocated to the embedded hardware
creator. These allocated resources are chosen based upon the future
need for upgradation and the cost of the total embedded system.
FPGAs, FPAAs, PALs, memory elements, etc. are a few such
examples of reconfigurable devices that can be used as
resources by the embedded hardware creator. A provision is
made to increase or repair the available resources in order
to upgrade the system at any moment. The size of the available
resources is a constraint, since a larger fabric tends to be
more susceptible to thermal noise effects and
radiation. In order to reduce the unwanted noise effects
caused by thermal variance and incoming radiation, proper
shielding techniques are needed, otherwise the available resources may
become defective. Another parameter that determines the
number of available resources is cost: the economic
feasibility of the embedded system is crucial for
commercial success.
F. Evolved Architecture Constructor
This block realizes the final evolved design produced by the
embedded hardware creator. It has inbuilt design
features to execute the instructions given by the EHC and
build up an actual hardware design. It also contains
read/write/erase mechanisms for the FPGA, FPAA, FPTA and
other memories. Hence it forms an interface between the EHC
and the available resources.
V. WORKING OF THE SYSTEM
The sequence of operations in the system is as
follows. First, the past input summarizer (PIS), the present
input block and the future input predictor (FIP) supply realizable
inputs to the embedded hardware creator; then, by
following the relevant logic, a layout for the newly derived
design results as an outcome from the embedded hardware
creator, depending upon the inputs from the PIS, the present input
and the FIP. The EHC is responsible for sending instructions to the
EAC about the construction of the system using the available
hardware resources. After receiving the instructions from the
EHC, the EAC launches the concrete effort of writing and
erasing the reconfigurable hardware resources, connecting
interconnects, etc., to construct the working structure of the
predicted solution. The potential capacity of the hardware
depends on the type of system that is used. The construction
of the hardware design is not a single-step process but a
continuous one that repeats itself to yield a more resourceful
and well-organized design. The proposed system is a
generalized version that can be modified as per the
application in which the concept of acausal self-evolving
reconfigurable hardware is used.
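The following runnable sketch is our schematic rendering of this control loop (all block implementations are stand-in stubs, not the authors' design); it only illustrates how the PIS, FIP, EHC and EAC blocks of Figure 2 could interact.

class PIS:                                   # past input summarizer stub
    def __init__(self): self.history = []
    def add(self, x): self.history = (self.history + [x])[-8:]   # bounded memory
    def summary(self): return self.history

class FIP:                                   # future input predictor stub
    def predict(self, history, steps=3):
        last = history[-1] if history else 0.0
        return [last] * steps                # naive "persistence" forecast

class EHC:                                   # embedded hardware creator stub
    def decide(self, past, present, future, resources):
        return {"filter_taps": min(len(resources), 1 + round(abs(present)))}

class EAC:                                   # evolved architecture constructor stub
    def reconfigure(self, design, resources):
        print("configuring fabric with", design)

pis, fip, ehc, eac = PIS(), FIP(), EHC(), EAC()
resources = ["FPGA0", "FPAA0", "RAM0"]       # assumed available resources
for present in [0.4, 1.7, 2.3]:              # assumed present-input stream
    design = ehc.decide(pis.summary(), present, fip.predict(pis.summary()), resources)
    eac.reconfigure(design, resources)
    pis.add(present)                         # the input becomes part of the past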
VI. CONCLUSION
In this paper we have proposed a model that uses the
theoretical framework of acausality. The proposed
architecture is a generalized version of evolvable
architectures and can be suitably modified depending on the
application. The proposal of pseudo acausal evolvable
embedded systems opens up a path for a new era of research
in which machines repair themselves and evolve autonomously,
removing the major bottlenecks of maintenance and the
non-durable nature of existing embedded systems. Such
technology could find an important place in many fields of
application; some of its prospective applications in the near
future are in aeronautics, astronautics, robotics, etc. This
technology may develop into a capstone for evolvable embedded
system applications and AI research.
The generalized concept of modeling evolvable embedded
systems has been realized here in terms of reconfigurable
components and artificial intelligence; our future research
will be in creating tools for such designs. Due to financial
constraints we have restricted ourselves to theoretical
work, and we hope in the near future to practically demonstrate
such a system.
REFERENCES
[1] Lukas Sekanina and Vladimír Drábek, Theory and Applications of
Evolvable Embedded Systems, Proc. of 11th International Conference
on Engineering of Computer Based Systems, May 2004.
[2] M. Love, K. R. Sorensen, J. Larsen, and J. Clausen.
Disruption Management for an Airline – Rescheduling of Aircraft. In
Applications of Evolutionary Computing, EvoWorkshops 2002, volume
2279 of LNCS, pages 315–324. Springer-Verlag, 2002.
[3] A. Stoica, R. S. Zebulum, D. Keymeulen, M. I. Ferguson, and X. Guo.
Evolving Circuits in Seconds: Experiments with a Stand-Alone Board
Level Evolvable System. In Proc.of the 2002 NASA/DoD Conference
on Evolvable Hardware, pages 67–74, Alexandria, Virginia, 2002.
IEEE Computer Society.
[4] M. Tanaka, H. Sakanashi, M. Salami, M. Iwata, T. Kurita, and T.
Higuchi. Data Compression for Digital Color Electrophotographic
Printer with Evolvable Hardware. In Proc.of the 2nd Int. Conf. on
Evolvable Systems: From Biology to Hardware ICES’98, volume 1478
of LNCS, pages 106–114, Lausanne, Switzerland, 1998. Springer-
Verlag.
[5] L. Devroye, L. Györfi, G. Lugosi, A Probabilistic Theory of Pattern
Recognition. New York: Springer, 1996.
[6] M. Kearns and U. Vazirani, An Introduction to Computational
Learning Theory. The MIT Press, Cambridge, Massachusetts, 1994.
[7] V. Vapnik, Statistical Learning Theory, New York etc.: John Wiley &
Sons, Inc. 1998
[8] Daniil Ryabko, Pattern Recognition for Conditionally
Independent Data, CS.LG/0507040
[9] X.Yao, Evolutionary Artificial Neural Networks, Int. J. Neural
Systems, Vol 4. pp 203-222, 1993.
[10] B. Müller and J. Reinhardt, Neural Networks: An Introduction,
Springer-Verlag, 1990.
[11] H.A. Gutowitz, editor , Cellular Automata, MIT press, Cambridge, MA,
1990
[12] T.Toffoli, N.Margolus, Cellular Automata Machines, A new
environment for modeling, MIT press, Cambridge, MA, 1987
[13] C.R. Stephens, I. García Olmedo, J. Mora Vargas, H. Waelbroeck,
Self Adaptation in Evolving Systems, adap-org/9708002.
[14] T. Bäck, Evolutionary Algorithms in Theory and Practice: Evolution
Strategies, Evolutionary Programming, Genetic Algorithms, Oxford
Univ. Press, 1996.
[15] L. Sekanina. Towards Evolvable IP Cores for FPGAs. In Proc. of the
2003 NASA/DoD Conference on Evolvable Hardware, pages 145–154,
Chicago, IL, 2003. IEEE Computer Society.
[16] L. Sekanina. Evolvable Components: From Theory to Hardware
Implementations. Natural Computing Series, Springer Verlag, 2004.
[17] Mohd Abubakr, R.M.Vinay, Novel Technique for Volatile Optical
Memory using solitons, Proceeding of IEEE WOCN Bangalore, 2006
|
0704.0986 | On reference frames in spacetime and gravitational energy in freely falling frames | arXiv:0704.0986v1 [gr-qc] 7 Apr 2007
On reference frames in spacetime and
gravitational energy in freely falling frames
J. W. Maluf ∗, F. F. Faria and S. C. Ulhoa
Instituto de F́ısica,
Universidade de Braśılia
C. P. 04385
70.919-970 Braśılia DF, Brazil
Abstract
We consider the interpretation of tetrad fields as reference frames
in spacetime. Reference frames may be characterized by an antisym-
metric acceleration tensor, whose components are identified as the
inertial accelerations of the frame (the translational acceleration and
the frequency of rotation of the frame). This tensor is closely related to
gravitoelectromagnetic field quantities. We construct the set of tetrad
fields adapted to observers that are in free fall in the Schwarzschild
spacetime, and show that the gravitational energy-momentum con-
structed out of this set of tetrad fields, in the framework of the telepar-
allel equivalent of general relatrivity, vanishes. This result is in agree-
ment with the principle of equivalence, and may be taken as a condi-
tion for a viable definition of gravitational energy.
PACS numbers: 04.20.Cv, 04.20.Fy
(*) e-mail: [email protected]
http://arxiv.org/abs/0704.0986v1
1 Introduction
It is a long-established practice in physics to describe the gravitational field
by means of theories invariant under local Lorentz transformations. This is
the case of the Einstein-Cartan theory, for instance, or more generally of the
metric-affine approach to the gravitational field [1]. In the latter formulation,
the theory of gravity is considered as a gauge theory of the Poincaré group.
The motivation for addressing theories of gravity by means of local Lorentz
(SO(3,1)) symmetry is partially due to the impact of the Yang-Mills gauge
theory in particle physics and quantum field theory. Because of the local
SO(3,1) symmetry, it is possible to assert that in such theories “all reference
frames are equivalent”.
The investigation of metric-affine theories of gravity is important because
one might have to go beyond the Riemannian formulation of general relativity
in order to deal with structures that pertain to a possible quantum theory
of gravity. The relevance of the Poincaré group and its representations in
quantum field theory is well known. In spite of the above mentioned feature
of the local SO(3,1) symmetry, there is no physical reason that prevents
the possibility of considering theories of gravity invariant under the global
Lorentz symmetry.
One theory that exhibits invariance under global SO(3,1) symmetry is
the teleparallel equivalent of general relativity (TEGR) [2, 3, 4, 5, 6, 7, 8].
The Lagrangian density of the theory is invariant under local SO(3,1) trans-
formations up to a nontrivial, nonvanishing total divergence [9], and for this
reason the local SO(3,1) group is not a symmetry of the theory. (From a
different perspective, the TEGR may be considered as a gauge theory for the
translation group [10].) Because of the global SO(3,1) symmetry, we must
ascribe an interpretation to six degrees of freedom of the tetrad field. In the
TEGR two sets of tetrad fields that yield the same spacetime metric tensor
are physically distinct. Thus we should interpret the tetrad fields as refer-
ence frames adapted to ideal observers in spacetime. Therefore two sets of
tetrad fields that are related by a local SO(3,1) transformation yield the same
metrical properties of the spacetime, but represent reference frames that are
characterized by different inertial accelerations. In a given gravitational field
configuration, the Schwarzschild spacetime, say, a moving observer or an ob-
server at rest are described by different sets of tetrad fields, and both sets
of tetrads are related by some sort of SO(3,1) transformation. Of course the
proper interpretation of the translational and rotational accelerations of a
frame makes sense at least in the case of asymptotically flat spacetimes.
In this paper we carry out an analysis of the inertial accelerations of a
frame in the context of the TEGR. The inertial accelerations are represented
by a second rank antisymmetric tensor under global SO(3,1) transforma-
tions that is coordinate independent. This tensor can be decomposed into
translational and rotational accelerations (the latter is in fact the rotational
frequency of the frame). By considering the weak field limit we will see that
there is a very interesting relationship between the translational acceleration
and rotational frequency of the frame, and electric and magnetic fields, re-
spectively. This relationship is explicitly investigated in the context of the
Kerr spacetime. The translational acceleration and rotational frequency that
are necessary to maintain a static frame in the spacetime are closely related
to the electric field of a point charge and to the magnetic field of a perfect
magnetic dipole, respectively. The present analysis is very much similar to
the usual formulation of gravitoelectromagnetism.
We consider the four-velocity of observers that are in free fall (radially)
in the Schwarzschild spacetime and construct the reference frame adapted to
such observers. We show that the expression for the gravitational energy-
momentum that arises in the framework of the TEGR [4, 5, 7] vanishes,
if evaluated in this frame. This is a very interesting result that shows the
consistency of the above definition with the principle of equivalence. The
local effects of gravity are not measured by an observer in free fall, who
defines a locally inertial reference frame. In this frame the acceleration of
the observer vanishes (section 3), and therefore he cannot measure neither
the gravitational force exerted on him nor the mass of the black hole. Thus
in a freely falling frame the gravitational energy should vanish. The tetrad
field that establishes the reference frame of an observer in free fall is related
to other (possibly static) frames by a frame transformation, not a coordinate
transformation. For instance, it is possible to establish a transformation from
the freely falling frame to a frame adapted to observers that are asymptotically
at rest in the Schwarzschild spacetime, out of which we obtain the usual value
for the total gravitational energy of the spacetime. We believe that viable
definitions of gravitational energy-momentum should exhibit this feature.
Notation: spacetime indices µ, ν, ... and SO(3,1) indices a, b, ... run from
0 to 3. Time and space indices are indicated according to µ = 0, i, a =
(0), (i). The tetrad field is denoted by ea µ, and the torsion tensor reads
T_{aµν} = ∂_µ e_{aν} − ∂_ν e_{aµ}. The flat, Minkowski spacetime metric tensor raises
and lowers tetrad indices and is fixed by η_{ab} = e_{aµ} e_{bν} g^{µν} = (− + ++). The
determinant of the tetrad field is represented by e = det(e^a{}_µ).
2 The field equations of the TEGR
Einstein’s general relativity is determined by the field equations. The latter
may be written either in terms of the metric tensor or of the tetrad field.
The TEGR is a reformulation of Einstein’s general relativity in terms of
the tetrad field. Sometimes the theory is also called “tetrad gravity” [9].
The tetrad field is anyway necessary to describe the coupling of Dirac spinor
fields with the gravitational field. The formulation of general relativity in
a different geometrical framework allows a new insight into the theory, and
this is precisely what happens in the consideration of the TEGR.
The Lagrangian density for the gravitational field in the TEGR is given by

L = −k e ( (1/4) T^{abc}T_{abc} + (1/2) T^{abc}T_{bac} − T^a T_a ) − L_M ≡ −k e Σ^{abc}T_{abc} − L_M ,   (1)

where k = 1/(16π), and L_M stands for the Lagrangian density for the matter
fields. As usual, tetrad fields convert spacetime indices into Lorentz indices and vice-
versa. The tensor Σ^{abc} is defined by

Σ^{abc} = (1/4)(T^{abc} + T^{bac} − T^{cab}) + (1/2)(η^{ac}T^b − η^{ab}T^c) ,   (2)

and T^a = T^b{}_b{}^a. The quadratic combination Σ^{abc}T_{abc} is proportional to the
scalar curvature R(e), except for a total divergence [7].
The field equations for the tetrad field read

e_{aλ} e_{bµ} ∂_ν(eΣ^{bλν}) − e (Σ^{bν}{}_a T_{bνµ} − (1/4) e_{aµ} T_{bcd}Σ^{bcd}) = (1/(4k)) e T_{aµ} ,   (3)

where e T_{aµ} = δL_M/δe^{aµ}. It is possible to prove by explicit calculations that
the left hand side of Eq. (3) is exactly given by (1/2) e [R_{aµ}(e) − (1/2) e_{aµ}R(e)]. The
field equations above may be rewritten in the form
∂_ν(eΣ^{aλν}) = (1/(4k)) e e^a{}_µ (t^{λµ} + T^{λµ}) ,   (4)

where

t^{λµ} = k (4Σ^{bcλ}T_{bc}{}^µ − g^{λµ}Σ^{bcd}T_{bcd}) ,   (5)

is interpreted as the gravitational energy-momentum tensor [7].
The Lagrangian density defined by Eq. (1) is invariant under global
SO(3,1) transformations of the tetrad field. As we asserted before, un-
der local SO(3,1) transformations the purely gravitational part of Eq. (1),
−k eΣabcTabc, transforms into −k eΣabcTabc plus a nontrivial, nonvanishing
total divergence [9]. The integral of this total divergence in general is non-
vanishing, unless restrictive conditions are imposed on the Lorentz transfor-
mation matrices.
The Hamiltonian formulation of the TEGR is obtained by first establish-
ing the phase space variables. The Lagrangian density does not contain the
time derivative of the tetrad component ea0. Therefore this quantity will
arise as a Lagrange multiplier. The momentum canonically conjugated to
e_{ai} is given by Π^{ai} = δL/δė_{ai}. The Hamiltonian formulation is obtained by
rewriting the Lagrangian density in the form L = pq̇ − H, in terms of e_{ai},
Π^{ai} and Lagrange multipliers. The Legendre transform can be successfully
carried out, and the final form of the Hamiltonian density reads [11]

H = e_{a0}C^a + α_{ik}Γ^{ik} + β_kΓ^k ,   (6)
plus a surface term. α_{ik} and β_k are Lagrange multipliers that (after solving
the field equations) are identified as α_{ik} = (1/2)(T_{i0k} + T_{k0i}) and β_k = T_{00k}.
C^a, Γ^{ik} and Γ^k are first class constraints.
The constraint C^a is written as C^a = −∂_iΠ^{ai} + h^a, where h^a is an intricate
expression of the field variables. The integral form of the constraint equation
C^a = 0 motivates the definition of the total energy-momentum four-vector
P^a [4],

P^a = −∫_V d³x ∂_iΠ^{ai} .   (7)
V is an arbitrary volume of the three-dimensional space. In the configuration
space we have
Π^{ai} = −4keΣ^{a0i} .   (8)
The emergence of total divergences in the form of scalar or vector densities
is possible in the framework of theories constructed out of the torsion tensor.
Metric theories of gravity do not share this feature. We note that by making
λ = 0 in eq. (4) and identifying Π^{ai} in the left hand side of the latter, the
integral form of eq. (4) is written as

P^a = ∫_V d³x e e^a{}_µ (t^{0µ} + T^{0µ}) .   (9)
In empty spacetimes and in the framework of black holes P^a does represent
the gravitational energy-momentum contained in a volume V of the three-
dimensional space. Several applications to well known gravitational field
configurations support this interpretation.
3 Reference frames in spacetime
A set of four orthonormal, linearly independent vector fields in spacetime
establish a reference frame. Altogether, they define a tetrad field ea µ, which
allows the projection of vectors and tensors in spacetime in the local frame
of an observer.
Each set of tetrad fields defines a class of reference frames [12]. If we
denote by x^µ(s) the world line C of an observer in spacetime (s is the proper
time of the observer), and by u^µ(s) = dx^µ/ds its velocity along C, we identify
the observer's velocity with the a = (0) component of e_a{}^µ. Thus u^µ(s) =
e_{(0)}{}^µ along C. The acceleration a^µ of the observer is given by the absolute
derivative of u^µ along C,
a^µ = Du^µ/ds = De_{(0)}{}^µ/ds = u^α∇_α e_{(0)}{}^µ ,   (10)
where the covariant derivative is constructed out of the Christoffel symbols.
Thus e_a{}^µ determines the velocity and acceleration along the worldline of an
observer adapted to the frame. Therefore a given set of tetrad fields, for which
e_{(0)}{}^µ describes a congruence of timelike curves, is adapted to a particular
class of observers, namely, to observers characterized by the velocity field
u^µ = e_{(0)}{}^µ, endowed with acceleration a^µ. If e^a{}_µ → δ^a_µ in the limit r → ∞,
then e^a{}_µ is adapted to static observers at spacelike infinity.
A geometrical characterization of tetrad fields as an observer’s frame can
be given by considering the acceleration of the frame along an arbitrary
path x^µ(s) of the observer in spacetime. The acceleration of the frame is
determined by the absolute derivative of e_a{}^µ along x^µ(s). Thus, assuming
that the observer carries an orthonormal tetrad frame e_a{}^µ, the acceleration
of the latter along the path is given by [13, 14]

De_a{}^µ/ds = φ_a{}^b e_b{}^µ ,   (11)
where φ_{ab} is the antisymmetric acceleration tensor. According to Refs. [13,
14], in analogy with the Faraday tensor we can identify φ_{ab} → (a, Ω), where
a is the translational acceleration (φ_{(0)(i)} = a_{(i)}) and Ω is the frequency
of rotation of the local spatial frame with respect to a nonrotating (Fermi-
Walker transported [12]) frame. It follows from Eq. (11) that
φ_a{}^b = e^b{}_µ (De_a{}^µ/ds) = e^b{}_µ u^λ∇_λ e_a{}^µ .   (12)
Therefore given any set of tetrad fields for an arbitrary gravitational field
configuration, its geometrical interpretation can be obtained by suitably in-
terpreting the velocity field u^µ = e_{(0)}{}^µ and the acceleration tensor φ_{ab}. The
acceleration vector aµ defined by Eq. (10) may be projected on a frame in
order to yield
a^b = e^b{}_µ a^µ = e^b{}_µ u^α∇_α e_{(0)}{}^µ = φ_{(0)}{}^b .   (13)

Thus a^µ and φ_{(0)(i)} are not different accelerations of the frame.
The expression of a^µ given by Eq. (10) may be rewritten as

a^µ = u^α∇_α e_{(0)}{}^µ = u^α∇_α u^µ = d²x^µ/ds² + Γ^µ_{αβ} (dx^α/ds)(dx^β/ds) ,   (14)

where Γ^µ_{αβ} are the Christoffel symbols. We see that if u^µ = e_{(0)}{}^µ represents
a geodesic trajectory, then the frame is in free fall and a^µ = φ_{(0)(i)} = 0.
Therefore we conclude that nonvanishing values of the latter quantities do
represent inertial accelerations of the frame.
In view of the orthogonality of the tetrads we write Eq. (12) as φ_a{}^b =
−u^λ e_a{}^µ ∇_λ e^b{}_µ, where ∇_λ e^b{}_µ = ∂_λ e^b{}_µ − Γ^σ_{λµ} e^b{}_σ. Now we take into account
the identity ∂_λ e^b{}_µ − Γ^σ_{λµ} e^b{}_σ + {}^0ω_λ{}^b{}_c e^c{}_µ = 0, where {}^0ω_λ{}^b{}_c is the metric
compatible, torsion free Levi-Civita connection, and express φ_a{}^b according to

φ_a{}^b = e_{(0)}{}^µ ({}^0ω_µ{}^b{}_a) .   (15)

At last we consider the identity {}^0ω_µ{}^a{}_b = −K_µ{}^a{}_b, where K_µ{}^a{}_b is the
contortion tensor defined by

K_{µab} = (1/2) e_a{}^λ e_b{}^ν (T_{λµν} + T_{νλµ} + T_{µλν}) ,   (16)

and T_{λµν} = e^a{}_λ T_{aµν} (see, for instance, Eq. (4) of Ref. [7]; the identity is
obtained by requiring the vanishing of a general SO(3,1) connection ω_{µab}, or
by direct calculation). After simple manipulations we finally obtain

φ_{ab} = (1/2) [T_{(0)ab} + T_{a(0)b} − T_{b(0)a}] .   (17)
The expression above is clearly not invariant under local SO(3,1) trans-
formations, but is invariant under coordinate transformations. The values of
φab for a given tetrad field may be used to characterize the frame. We recall
that we are assuming the observer to carry the set of tetrad fields along xµ(s),
for which we have u^µ = e_{(0)}{}^µ. We interpret φ_{ab} as the inertial accelerations
along x^µ(s).
Two simple, straightforward applications of Eq. (17) are the following:
(i) The tetrad field adapted to observers at rest in Minkowski spacetime is
given by e^a{}_µ(ct, x, y, z) = δ^a_µ. We consider a time-dependent boost in the x
direction, say, after which the tetrad field reads

e^a{}_µ(ct, x, y, z) =
(  γ    −βγ   0   0 )
( −βγ    γ    0   0 )
(  0     0    1   0 )
(  0     0    0   1 ) ,   (18)

where γ = (1 − β²)^{−1/2}, β = v/c and v = v(t). The frame above is
then adapted to observers whose four-velocity is u^µ = e_{(0)}{}^µ(ct, x, y, z) =
(γ, βγ, 0, 0). After simple calculations we obtain

φ_{(0)(1)} = d[βγ]/dx⁰ = (1/c²) (dv/dt) / (1 − v²/c²)^{3/2} ,   (19)
φ_{(0)(2)} = 0 ,
φ_{(0)(3)} = 0 ,

and φ_{(i)(j)} = 0.
(ii) A frame adapted to an observer in Minkowski spacetime whose four-velocity is $e_{(0)}{}^\mu = (1, 0, 0, 0)$ and which rotates around the $z$ axis, say, reads

$$e^a{}_\mu(ct, x, y, z) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\omega(t) & -\sin\omega(t) & 0 \\ 0 & \sin\omega(t) & \cos\omega(t) & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} . \qquad (20)$$

It is easy to carry out the simple calculations and obtain

$$\phi_{(2)(3)} = 0 \,, \qquad (21)$$
$$\phi_{(3)(1)} = 0 \,,$$
$$\phi_{(1)(2)} = -\frac{1}{c}\,\frac{d\omega}{dt} \,,$$

and $\phi_{(0)(i)} = 0$. Together with the discussion regarding Eq. (14), the examples above support the interpretation of $\phi_a{}^{\ b}$ as the inertial accelerations of the frame.
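An analogous minimal sympy sketch reproduces the nonvanishing component quoted in Eq. (21) for the rotating tetrad (20); again, only partial derivatives appear in these coordinates. As before, the script is an illustrative encoding of Eq. (12).

import sympy as sp

t, c = sp.symbols('t c', positive=True)
w = sp.Function('omega')(t)   # rotation angle omega(t)

# Tetrad e^a_mu of Eq. (20): rows carry the index a, columns the index mu
e = sp.Matrix([[1, 0, 0, 0],
               [0, sp.cos(w), -sp.sin(w), 0],
               [0, sp.sin(w),  sp.cos(w), 0],
               [0, 0, 0, 1]])

e_inv = e.inv().applyfunc(sp.simplify)   # e_a^mu
u = [e_inv[mu, 0] for mu in range(4)]    # u^mu = (1, 0, 0, 0)

def phi(a, b):
    # phi_a^b = e^b_mu u^lambda partial_lambda e_a^mu, with partial_0 = (1/c) d/dt
    return sp.simplify(sum(e[b, mu] * u[0] * sp.diff(e_inv[mu, a], t) / c
                           for mu in range(4)))

print(phi(1, 2))                          # expected: -(domega/dt)/c, cf. Eq. (21)
print(phi(2, 3), phi(3, 1))               # expected: 0 0
print(phi(0, 1), phi(0, 2), phi(0, 3))    # expected: 0 0 0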
4 A freely falling frame in the Schwarzschild
spacetime
We will consider in this section a frame that is in free fall in the Schwarzschild
spacetime, namely, that is radially accelerated towards the center of the black
hole. We will take into account the kinematical quantities discussed in the preceding section, in order to illustrate the construction of the tetrad field.
The spacetime is described by the line element

$$ds^2 = -\alpha^{-2}\,dt^2 + \alpha^2\,dr^2 + r^2\big(d\theta^2 + \sin^2\theta\, d\phi^2\big) \,, \qquad (22)$$

where

$$\alpha^{-2} = 1 - \frac{2m}{r} \,. \qquad (23)$$

Let us define the quantity $\beta$,

$$\beta = \big(1 - \alpha^{-2}\big)^{1/2} \,, \qquad (24)$$

which will be useful in the following.
An observer that is in radial free fall in the Schwarzschild spacetime is endowed with the four-velocity [15]

$$u^\mu = \big(\alpha^2,\, -\beta,\, 0,\, 0\big) \,. \qquad (25)$$

The simplest set of tetrad fields that satisfies the condition

$$e_{(0)}{}^\alpha = u^\alpha \,, \qquad (26)$$

is given by

$$e_{a\mu} = \begin{pmatrix}
-1 & -\alpha^2\beta & 0 & 0 \\
\beta\sin\theta\cos\phi & \alpha^2\sin\theta\cos\phi & r\cos\theta\cos\phi & -r\sin\theta\sin\phi \\
\beta\sin\theta\sin\phi & \alpha^2\sin\theta\sin\phi & r\cos\theta\sin\phi & r\sin\theta\cos\phi \\
\beta\cos\theta & \alpha^2\cos\theta & -r\sin\theta & 0
\end{pmatrix} . \qquad (27)$$
We recall that the index $a$ labels the rows, and $\mu$ the columns. Since the frame is in free fall the equation $\phi_{(0)(i)} = 0$ is satisfied. It is not difficult to show that this set of tetrad fields also satisfies the conditions

$$\phi_{(i)(j)} = \frac{1}{2}\,\big[T_{(0)(i)(j)} + T_{(i)(0)(j)} - T_{(j)(0)(i)}\big] = 0 \,. \qquad (28)$$
Three of the four conditions established by Eq. (26) are more relevant
for our purposes, namely, the three components of the frame velocity in
the three-dimensional space, $u^i = e_{(0)}{}^i$. Together with the three conditions
determined by Eq. (28), we have six conditions on the frame. We may assert
that these six conditions completely fix the structure of the tetrad field, even
though Eq. (28) has been verified a posteriori. Therefore Eq. (27) describes
a nonrotating frame in radial free fall in the Schwarzschild spacetime.
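As an illustrative cross-check, the minimal sympy sketch below verifies that the tetrad (27) is orthonormal with respect to the metric (22) and that $e_{(0)}{}^\mu$ coincides with the four-velocity (25); it simply encodes the expressions displayed above.

import sympy as sp

r, th, ph, m = sp.symbols('r theta phi m', positive=True)

alpha2 = 1 / (1 - 2*m/r)        # alpha^2, Eq. (23)
beta = sp.sqrt(1 - 1/alpha2)    # beta = (1 - alpha^{-2})^{1/2} = sqrt(2m/r), Eq. (24)

# Schwarzschild metric of Eq. (22) in the coordinates (t, r, theta, phi)
g = sp.diag(-1/alpha2, alpha2, r**2, r**2*sp.sin(th)**2)
eta = sp.diag(-1, 1, 1, 1)

# Tetrad e_{a mu} of Eq. (27): rows carry the index a, columns the index mu
e_low = sp.Matrix([
    [-1, -alpha2*beta, 0, 0],
    [beta*sp.sin(th)*sp.cos(ph), alpha2*sp.sin(th)*sp.cos(ph), r*sp.cos(th)*sp.cos(ph), -r*sp.sin(th)*sp.sin(ph)],
    [beta*sp.sin(th)*sp.sin(ph), alpha2*sp.sin(th)*sp.sin(ph), r*sp.cos(th)*sp.sin(ph),  r*sp.sin(th)*sp.cos(ph)],
    [beta*sp.cos(th),            alpha2*sp.cos(th),           -r*sp.sin(th),             0]])

# orthonormality: e_{a mu} g^{mu nu} e_{b nu} = eta_{ab}
print((e_low * g.inv() * e_low.T - eta).applyfunc(sp.simplify))   # expected: zero matrix

# raising the spacetime index of the first row gives e_(0)^mu, i.e. u^mu of Eq. (25)
e0_up = (g.inv() * e_low.row(0).T).applyfunc(sp.simplify)
print(e0_up.T)                                                    # expected: [alpha^2, -beta, 0, 0]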
We will evaluate the gravitational energy-momentum out of the tetrad
field above, but will omit the details of the calculations which are alge-
braically long, but otherwise simple. The nonvanishing components of the
torsion tensor are
$$T_{001} = -\beta\,\partial_r\beta \,, \qquad (29)$$
$$T_{101} = -\alpha^2\,\partial_r\beta \,,$$
$$T_{202} = -r\beta \,,$$
$$T_{303} = -r\beta\sin^2\theta \,,$$
$$T_{212} = r\big(1-\alpha^2\big) \,,$$
$$T_{313} = r\big(1-\alpha^2\big)\sin^2\theta \,.$$
The gravitational energy contained within a spherical surface of constant radius is given by

$$P^{(0)} = -\oint_S dS_j\,\Pi^{(0)j} = 4k\oint_S dS_1\; e\,\big(e^{(0)}{}_0\,\Sigma^{001} + e^{(0)}{}_1\,\Sigma^{101}\big) \,, \qquad (30)$$

where

$$\Sigma^{001} = \frac{1}{2}\,\big(g^{00}g^{11}g^{22}\,T_{212} + g^{00}g^{11}g^{33}\,T_{313}\big) \,, \qquad (31)$$
$$\Sigma^{101} = -\frac{1}{2}\,\big(g^{00}g^{11}g^{22}\,T_{202} + g^{00}g^{11}g^{33}\,T_{303}\big) \,.$$
We find that

$$e\,\big(e^{(0)}{}_0\,\Sigma^{001} + e^{(0)}{}_1\,\Sigma^{101}\big) = r\sin\theta\,\big(\alpha^2 - 1 - \alpha^2\beta^2\big) = 0 \,, \qquad (32)$$
and therefore the gravitational energy contained within a surface of constant
radius as well as the total gravitational energy of the spacetime vanishes, if
evaluated in the frame of a freely falling observer. This is a very interesting
property of the whole formalism described in section 2. The vanishing of the
gravitational energy for freely falling observers is a feature that is consistent
with (and a consequence of) the principle of equivalence, since local effects of
gravity are not measured by observers in free fall. For other frames that are
related to Eq. (27) by a local Lorentz transformation we obtain nonvanishing
values of P (0). In particular, the total gravitational energy calculated out of
frames such that $e^a{}_\mu(t, x, y, z) \rightarrow \delta^a_\mu$ in the asymptotic limit $r \rightarrow \infty$ is exactly $P^{(0)} = m$ [4]. The latter tetrad field is adapted to observers at rest at spacelike infinity. Thus the vanishing of gravitational energy in freely falling frames shows that the localizability of the gravitational energy is not inconsistent with the principle of equivalence. The result given by Eqs.
(30-32) is a very good example of the frame dependence of the gravitational
energy definition (7).
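The cancellation in Eq. (32) may be checked in a few lines of sympy. The sketch below simply encodes Eqs. (29) and (31), together with the frame components $e^{(0)}{}_0 = 1$ and $e^{(0)}{}_1 = \alpha^2\beta$ obtained from Eq. (27) by raising the Lorentz index, and the determinant $e = r^2\sin\theta$.

import sympy as sp

r, th, m = sp.symbols('r theta m', positive=True)

alpha2 = 1 / (1 - 2*m/r)        # alpha^2
beta = sp.sqrt(1 - 1/alpha2)    # = sqrt(2m/r)

# inverse metric components of Eq. (22)
g00, g11, g22, g33 = -alpha2, 1/alpha2, 1/r**2, 1/(r**2*sp.sin(th)**2)

# nonvanishing torsion components, Eq. (29)
T202 = -r*beta
T303 = -r*beta*sp.sin(th)**2
T212 = r*(1 - alpha2)
T313 = r*(1 - alpha2)*sp.sin(th)**2

# Eq. (31)
Sigma001 = sp.Rational(1, 2)*(g00*g11*g22*T212 + g00*g11*g33*T313)
Sigma101 = -sp.Rational(1, 2)*(g00*g11*g22*T202 + g00*g11*g33*T303)

# determinant e = sqrt(-g) = r^2 sin(theta); frame components from Eq. (27)
e_det = r**2*sp.sin(th)
integrand = e_det*(Sigma001 + alpha2*beta*Sigma101)

print(sp.simplify(integrand))   # expected: 0, in agreement with Eq. (32)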
It can be easily verified that the gravitational momentum components $P^{(1)}$ and $P^{(2)}$ vanish in view of integrals like $\int_0^{2\pi} d\phi\,\sin\phi = 0 = \int_0^{2\pi} d\phi\,\cos\phi$, whereas $P^{(3)}$ vanishes due to $\int_0^{\pi} d\theta\,\sin\theta\cos\theta = 0$.
It is important to remark that in general the vanishing of φab does not
imply the vanishing of P a. For an observer at rest at spacelike infinity the
total gravitational energy is nonvanishing, whereas for these observers we
have φab ∼= 0 (in the limit r → ∞; see next section).
5 Static frames in the Kerr spacetime
Another interesting application of the definitions of velocity and inertial ac-
celeration of a frame discussed in section 3 is the analysis of a static frame
in Kerr’s spacetime. The latter is established by the line element
$$ds^2 = -\frac{\psi^2}{\rho^2}\,dt^2 - \frac{2\chi\sin^2\theta}{\rho^2}\,d\phi\,dt + \frac{\rho^2}{\Delta}\,dr^2 + \rho^2\,d\theta^2 + \frac{\Sigma^2\sin^2\theta}{\rho^2}\,d\phi^2 \,, \qquad (33)$$

with the following definitions:

$$\Delta = r^2 + a^2 - 2mr \,, \qquad (34)$$
$$\rho^2 = r^2 + a^2\cos^2\theta \,,$$
$$\Sigma^2 = (r^2 + a^2)^2 - \Delta\,a^2\sin^2\theta \,,$$
$$\psi^2 = \Delta - a^2\sin^2\theta \,,$$
$$\chi = 2amr \,.$$
A static reference frame in Kerr's spacetime is defined by the congruence of timelike curves $u^\mu(s)$ such that $u^i = 0$, namely, the spatial velocity of the observers is zero with respect to static observers at spacelike infinity. Since we identify $u^i = e_{(0)}{}^i$, a static reference frame is established by the condition

$$e_{(0)}{}^i = 0 \,. \qquad (35)$$

In view of the orthogonality of the tetrads, the equation above implies $e^{(k)}{}_0 = 0$. This latter equation remains satisfied after a local rotation of the frame, $\tilde e^{(k)}{}_0 = \Lambda^{(k)}{}_{(l)}\, e^{(l)}{}_0 = 0$. Therefore condition (35) determines the static character of the frame, up to an orientation of the frame in the three-dimensional space.
A simple form for the tetrad field that satisfies Eq. (35) (or, equivalently, $e^{(k)}{}_0 = 0$) reads

$$e_{a\mu} = \begin{pmatrix}
-A & 0 & 0 & -B \\
0 & C\sin\theta\cos\phi & \rho\cos\theta\cos\phi & -D\sin\theta\sin\phi \\
0 & C\sin\theta\sin\phi & \rho\cos\theta\sin\phi & D\sin\theta\cos\phi \\
0 & C\cos\theta & -\rho\sin\theta & 0
\end{pmatrix} , \qquad (36)$$

with the following definitions:

$$A = \frac{\psi}{\rho}\,, \qquad B = \frac{\chi\sin^2\theta}{\rho\,\psi}\,, \qquad C = \frac{\rho}{\sqrt{\Delta}}\,, \qquad D = \frac{\Lambda}{\rho\,\psi}\,. \qquad (37)$$

In the expression of $D$ we have

$$\Lambda = \big(\psi^2\Sigma^2 + \chi^2\sin^2\theta\big)^{1/2} \,.$$
We are interested in the calculation of $\phi_{ab}$ given by Eq. (17), and for this purpose it is useful to work with the inverse tetrad field $e_a{}^\mu$. It reads

$$e_a{}^\mu = \begin{pmatrix}
\dfrac{\rho}{\psi} & \dfrac{\rho\chi}{\psi\Lambda}\sin\theta\sin\phi & -\dfrac{\rho\chi}{\psi\Lambda}\sin\theta\cos\phi & 0 \\[2mm]
0 & \dfrac{\sqrt{\Delta}}{\rho}\sin\theta\cos\phi & \dfrac{\sqrt{\Delta}}{\rho}\sin\theta\sin\phi & \dfrac{\sqrt{\Delta}}{\rho}\cos\theta \\[2mm]
0 & \dfrac{1}{\rho}\cos\theta\cos\phi & \dfrac{1}{\rho}\cos\theta\sin\phi & -\dfrac{1}{\rho}\sin\theta \\[2mm]
0 & -\dfrac{\rho\,\psi}{\Lambda}\,\dfrac{\sin\phi}{\sin\theta} & \dfrac{\rho\,\psi}{\Lambda}\,\dfrac{\cos\phi}{\sin\theta} & 0
\end{pmatrix} , \qquad (38)$$

where now the index $a$ labels the columns, and $\mu$ the rows.
The frame determined by Eqs. (36) and (38) is valid in the region outside the ergosphere. The function $\psi^2 = \Delta - a^2\sin^2\theta$ vanishes over the external surface of the ergosphere (defined by $r = r_\star = m + \sqrt{m^2 - a^2\cos^2\theta}$; over this surface $g_{00} = 0$), and we see that various components of Eqs. (36) and (38) are not well defined over this surface. It is well known that it is not possible to maintain static observers inside the ergosphere of the Kerr spacetime.
By inspecting Eq. (38) we see that for large values of $r$ we have

$$e_{(3)}{}^\mu(t, r, \theta, \phi) \cong \big(0,\, \cos\theta,\, -(1/r)\sin\theta,\, 0\big) \,,$$
$$e_{(3)}{}^\mu(t, x, y, z) \cong (0, 0, 0, 1) \,. \qquad (39)$$

Therefore we may assert that the frame given by Eq. (36) is characterized by the following properties: (i) the frame is static, because Eq. (35) is verified; (ii) the $e_{(3)}{}^\mu$ components are oriented along the symmetry axis of the black hole (the $z$ direction). The second condition is ultimately responsible for the simple form of Eq. (36).
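The algebra behind Eqs. (36) and (37) may be verified symbolically. The minimal sympy sketch below, which simply encodes the definitions above, checks that $g_{\mu\nu} = e^a{}_\mu\, e^b{}_\nu\, \eta_{ab}$ reproduces the Kerr metric (33).

import sympy as sp

r, th, ph, a, m = sp.symbols('r theta phi a m', positive=True)
sth, cth = sp.sin(th), sp.cos(th)

# definitions of Eq. (34)
Delta  = r**2 + a**2 - 2*m*r
rho2   = r**2 + a**2*cth**2
Sigma2 = (r**2 + a**2)**2 - Delta*a**2*sth**2
psi2   = Delta - a**2*sth**2
chi    = 2*a*m*r

rho, psi = sp.sqrt(rho2), sp.sqrt(psi2)
Lam = sp.sqrt(psi2*Sigma2 + chi**2*sth**2)   # Lambda appearing in Eq. (37)

# A, B, C, D of Eq. (37)
A = psi/rho
B = chi*sth**2/(rho*psi)
C = rho/sp.sqrt(Delta)
D = Lam/(rho*psi)

# tetrad e_{a mu} of Eq. (36): rows a, columns mu = (t, r, theta, phi)
e_low = sp.Matrix([
    [-A, 0,                   0,                  -B],
    [ 0, C*sth*sp.cos(ph),  rho*cth*sp.cos(ph), -D*sth*sp.sin(ph)],
    [ 0, C*sth*sp.sin(ph),  rho*cth*sp.sin(ph),  D*sth*sp.cos(ph)],
    [ 0, C*cth,            -rho*sth,             0]])

eta = sp.diag(-1, 1, 1, 1)

# Kerr metric of Eq. (33)
g = sp.Matrix([[-psi2/rho2, 0, 0, -chi*sth**2/rho2],
               [0, rho2/Delta, 0, 0],
               [0, 0, rho2, 0],
               [-chi*sth**2/rho2, 0, 0, Sigma2*sth**2/rho2]])

# contracting the lowered Lorentz indices with eta must return the metric
print((e_low.T * eta * e_low - g).applyfunc(sp.simplify))   # expected: zero matrix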
The evaluation of $\phi_{ab}$ is long but straightforward, and for this reason we will omit the details of the calculations. For convenience of notation we define the vectors

$$\hat{\mathbf r} = \sin\theta\cos\phi\,\hat{\mathbf x} + \sin\theta\sin\phi\,\hat{\mathbf y} + \cos\theta\,\hat{\mathbf z} \,, \qquad (40)$$
$$\hat{\boldsymbol\theta} = \cos\theta\cos\phi\,\hat{\mathbf x} + \cos\theta\sin\phi\,\hat{\mathbf y} - \sin\theta\,\hat{\mathbf z} \,,$$

which have well defined meaning as unit vectors in the asymptotic limit $r \rightarrow \infty$. We also define the three-dimensional vectors

$$\mathbf a = (\phi_{01}, \phi_{02}, \phi_{03}) \,, \qquad (41)$$
$$\boldsymbol\Omega = (\phi_{23}, \phi_{31}, \phi_{12}) \,. \qquad (42)$$
We obtain the following expressions for $\mathbf a$ and $\boldsymbol\Omega$:

$$\mathbf a = \frac{m}{\rho^3\psi^2}\Big[\sqrt{\Delta}\,\big(r^2 - a^2\cos^2\theta\big)\,\hat{\mathbf r} - 2a^2 r\sin\theta\cos\theta\,\hat{\boldsymbol\theta}\Big] \,, \qquad (43)$$

$$\boldsymbol\Omega = -\frac{ma}{\rho^3\psi^2}\Big[2r\sqrt{\Delta}\,\cos\theta\,\hat{\mathbf r} + \big(r^2 - a^2\cos^2\theta\big)\sin\theta\,\hat{\boldsymbol\theta}\Big] \,. \qquad (44)$$
The specific functional form of the vectors above completely characterizes the frame determined by Eq. (36). The determination of $\mathbf a$ and $\boldsymbol\Omega$ is equivalent to the fixation of six components of the tetrad field. Equations (43) and (44) represent the inertial accelerations that one must exert on the frame in order to ensure that (i) the frame is static (condition (35)), and that (ii) the $e_{(3)}{}^\mu$ components of the tetrad field asymptotically coincide with the symmetry axis of the black hole.
The form of $\mathbf a$ and $\boldsymbol\Omega$ for large values of $r$ is very interesting. It is easy to verify that in the limit $r \rightarrow \infty$ we obtain

$$\mathbf a \cong \frac{m}{r^2}\,\hat{\mathbf r} \,, \qquad (45)$$

$$\boldsymbol\Omega \cong -\frac{ma}{r^3}\,\big(2\cos\theta\,\hat{\mathbf r} + \sin\theta\,\hat{\boldsymbol\theta}\big) \,. \qquad (46)$$
After the identifications $m \leftrightarrow q$ and $4\pi m a \leftrightarrow \bar m$, where $q$ is the electric charge and $\bar m$ is the magnetic dipole moment, equations (45) and (46) resemble the electric field of a point charge and the magnetic field of a perfect dipole that points in the $z$ direction, respectively. These equations represent a manifestation of gravitoelectromagnetism.
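As an illustrative check, the sympy sketch below takes the $\hat{\mathbf r}$ and $\hat{\boldsymbol\theta}$ components as displayed in Eqs. (43) and (44) and extracts their leading large-$r$ behaviour, recovering Eqs. (45) and (46).

import sympy as sp

r, th, a, m = sp.symbols('r theta a m', positive=True)
sth, cth = sp.sin(th), sp.cos(th)

Delta = r**2 + a**2 - 2*m*r
rho3  = (r**2 + a**2*cth**2)**sp.Rational(3, 2)
psi2  = Delta - a**2*sth**2

# r-hat and theta-hat components of a, Eq. (43)
a_r  = m*sp.sqrt(Delta)*(r**2 - a**2*cth**2)/(rho3*psi2)
a_th = -2*m*a**2*r*sth*cth/(rho3*psi2)

# r-hat and theta-hat components of Omega, Eq. (44)
O_r  = -2*m*a*r*sp.sqrt(Delta)*cth/(rho3*psi2)
O_th = -m*a*(r**2 - a**2*cth**2)*sth/(rho3*psi2)

# leading behaviour as r -> infinity
print(sp.limit(a_r*r**2, r, sp.oo), sp.limit(a_th*r**2, r, sp.oo))   # expected: m, 0  (Eq. 45)
print(sp.limit(O_r*r**3, r, sp.oo), sp.limit(O_th*r**3, r, sp.oo))   # expected: -2*a*m*cos(theta), -a*m*sin(theta)  (Eq. 46)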
If we abandon the static condition given by Eq. (35), an observer located at a position $(r, \theta, \phi)$ will be subject to an acceleration $-\mathbf a$ and to a rotational motion determined by $-\boldsymbol\Omega = \boldsymbol\Omega_D$, which is the dragging frequency of the frame. Thus the gravitomagnetic effect is locally equivalent to inertial effects in a frame rotating with frequency $-\boldsymbol\Omega_D$, the latter having the magnetic dipole moment structure given by Eq. (46). This is precisely the gravitational Larmor theorem, discussed in Ref. [16].
The emergence of gravitoelectromagnetic (GEM) field quantities in the context of the acceleration tensor $\phi_{ab}$ presents no difference with respect to the usual approach in the literature. Let us assume that the tetrad field satisfies the boundary conditions

$$e^a{}_\mu \cong \delta^a_\mu + \frac{1}{2}\,h^a{}_\mu \,, \qquad (47)$$

where $h^a{}_\mu$ is the perturbation of the flat space-time tetrad field in the limit $r \rightarrow \infty$, and that in this limit the SO(3,1) and spacetime indices acquire the same significance. It is straightforward to verify that in this case we have

$$\phi_{(0)(i)} \cong -\partial_i\Big(\frac{1}{2}\,h_{00}\Big) \,, \qquad (48)$$

$$\phi_{(i)(j)} \cong -\frac{1}{2}\,\big(\partial_j h_{0i} - \partial_i h_{0j}\big) \,. \qquad (49)$$

Thus we identify

$$\Phi = \frac{1}{2}\,h_{00} \,, \qquad (50)$$

$$A_i = -\frac{1}{2}\,h_{0i} \,. \qquad (51)$$
The identification above is equivalent to the one usually made in the literature, namely, $\Phi = (1/4)\bar h_{00}$ and $A_i = -(1/2)\bar h_{0i}$ [17], where $\bar h_{\mu\nu}$ is the trace-reversed field quantity defined by $\bar h_{\mu\nu} = h_{\mu\nu} - (1/2)\eta_{\mu\nu}h$, and $h = \eta^{\mu\nu}h_{\mu\nu}$. The latter identification is made directly in the weak field form of the metric tensor of a slowly rotating source. Assuming that $h_{00} = 2\Phi/c^2$ and $h_{ij} = \delta_{ij}h_{00}$, where $c$ is the speed of light (according to Eq. (1.4) of Ref. [17]), we obtain $\bar h_{00} = 2h_{00}$, and therefore $\Phi = (1/4)\bar h_{00}$. To our knowledge, the identification
of the GEM field quantities out of the tensor φab has not been addressed in
the literature so far.
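The small piece of algebra behind $\bar h_{00} = 2h_{00}$ may be spelled out explicitly; the sketch below assumes, as above, that $h_{ij} = \delta_{ij}h_{00}$ and that the $h_{0i}$ components do not contribute to the trace.

import sympy as sp

h00 = sp.Symbol('h00')
eta = sp.diag(-1, 1, 1, 1)

# perturbation with h_ij = delta_ij h_00 (the h_0i components do not affect the trace)
h = sp.diag(h00, h00, h00, h00)

trace_h = sum(eta[i, i]*h[i, i] for i in range(4))       # h = eta^{mu nu} h_{mu nu}
hbar00 = h[0, 0] - sp.Rational(1, 2)*eta[0, 0]*trace_h   # trace-reversed 00 component
print(trace_h, hbar00)   # expected: 2*h00, 2*h00  ->  Phi = h00/2 = hbar00/4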
6 Comments
Gravity theories invariant under the global SO(3,1) group are physically ac-
ceptable. The gravitational field equations determine the gravitational field,
not the frame. A given gravitational field configuration admits an infinity
of frames which in general are distinct from each other. We know that the
physical properties of a system are different in a static and in an accelerated
frame, for instance, and this feature should also hold in general relativity.
The gravitational energy-momentum that is defined in the realm of the
TEGR is frame dependent. This issue has been partially discussed before
in Refs. [7, 8], and also in Ref. [9]. This dependence is considered here to
be a natural property of the definition. The frame may be characterized by
the six components of the antisymmetric tensor φab, defined by Eq. (17),
which determine the translational acceleration and rotational frequency of
the frame, and which resemble the electric field of a point charge and the magnetic field of a dipole, respectively, in the weak field limit of the Kerr spacetime (in the consideration of a static frame).
In section 4 we have shown that the gravitational energy-momentum cal-
culated out of a frame that is nonrotating and freely falling in the Schwarzschild
spacetime vanishes. We expect this property to hold in the consideration of
a general spacetime geometry, in which case the analysis is somewhat more
complicated, because the frame is expected not to rotate with respect to a
Fermi-Walker transported frame. In general the construction of the latter
frame is not trivial.
It is clear that if the gravitational energy-momentum definition were in-
variant under local Lorentz transformations, we would not arrive at the result
of section 4, since the value of $P^a$ on a three-dimensional volume $V$ would
be the same for all frames, and presumably nonvanishing.
A common critique of the localizability of gravitational energy is that the
latter is unattainable because of the principle of equivalence. In this paper we
have seen that this is not the case. Definition (7) for the gravitational energy-
momentum yields the expected results both for observers asymptotically at
rest and for freely falling observers.
Acknowledgement
J. W. M. is grateful to G. F. Rubilar for helpful discussions on reference
frames. This work was supported in part by CNPQ (Brazil).
References
[1] F. W. Hehl, J. D. McCrea, E. W. Mielke and Y. Ne’eman, Phys. Rep.
258, 1 (1995).
[2] F. W. Hehl, in “Proceedings of the 6th School of Cosmology and Gravi-
tation on Spin, Torsion, Rotation and Supergravity”, Erice, 1979, edited
by P. G. Bergmann and V. de Sabbata (Plenum, New York, 1980).
[3] J. M. Nester, Int. J. Mod. Phys. A 4, 1755 (1989); J. Math. Phys. 33,
910 (1992).
[4] J. W. Maluf, J. F. da Rocha-Neto, T. M. L. Toríbio and K. H. Castello-
Branco, Phys. Rev. D 65, 124001 (2002).
[5] J. W. Maluf, F. F. Faria and K. H. Castello-Branco, Class. Quantum
Grav. 20, 4683 (2003).
[6] Y. Obukhov and J. G. Pereira, Phys. Rev. D 67, 044016 (2003).
[7] J. W. Maluf, Ann. Phys. (Leipzig) 14, 723 (2005).
[8] J. W. Maluf, S. C. Ulhoa, F. F. Faria and J. F. da Rocha-Neto, Class.
Quantum Grav. 23, 6245 (2006).
[9] Y. N. Obukhov and G. F. Rubilar, Phys. Rev. D 73, 124017 (2006).
[10] V. C. de Andrade and J. G. Pereira, Phys. Rev. D 56, 4689 (1997).
[11] J. W. Maluf and J. F. da Rocha-Neto, Phys. Rev. D 64, 084014 (2001).
[12] F. W. Hehl, J. Lemke and E. W. Mielke, Two Lectures on Fermions and
Gravity, in “Geometry and Theoretical Physics”, edited by J. Debrus
and A. C. Hirshfeld (Springer, Berlin Heidelberg, 1991).
[13] B. Mashhoon and U. Muench, Ann. Phys. (Leipzig) 11, 532 (2002) [gr-
qc/0206082].
[14] B. Mashhoon, Ann. Phys. (Leipzig) 12, 586 (2003) [hep-th/0309124].
[15] J. B. Hartle, “Gravity: An Introduction to Einstein’s General Relativ-
ity” (Addison-Wesley, San Francisco, 2003), p. 198.
[16] B. Mashhoon, Phys. Lett. A 173, 347 (1993).
[17] B. Mashhoon, Gravitoelectromagnetism: a Brief Review [gr-qc/0311030].