THE ADJOINT GROUP OF A COXETER QUANDLE
arXiv:1702.07104v2 [math.GT] 1 Mar 2017
TOSHIYUKI AKITA
ABSTRACT. We give explicit descriptions of the adjoint group of the Coxeter
quandle associated with a Coxeter group W . The adjoint group turns out to be an
intermediate group between W and the corresponding Artin group, and fits into a
central extension of W by a finitely generated free abelian group. We construct 2-cocycles of W corresponding to the central extension. In addition, we prove that
the commutator subgroup of the adjoint group is isomorphic to the commutator
subgroup of W .
1. INTRODUCTION
A nonempty set X equipped with a binary operation X × X → X, (x, y) ↦ x ∗ y
is called a quandle if it satisfies the following three conditions:
(1) x ∗ x = x (x ∈ X),
(2) (x ∗ y) ∗ z = (x ∗ z) ∗ (y ∗ z) (x, y, z ∈ X),
(3) for all y ∈ X, the map X → X defined by x ↦ x ∗ y is bijective.
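As a quick sanity check (an illustration, not part of the paper), the three axioms can be verified directly for the conjugation quandle on the transpositions of S_3, the smallest example relevant here:

```python
from itertools import product

# Conjugation quandle on the transpositions of S_3. Permutations of {0,1,2}
# are tuples p with p[i] the image of i.
def compose(p, q):              # (p∘q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def star(x, y):                 # x * y := y^{-1} x y
    return compose(inverse(y), compose(x, y))

X = [(1, 0, 2), (2, 1, 0), (0, 2, 1)]   # the three transpositions

# (1) idempotence
assert all(star(x, x) == x for x in X)
# (2) right self-distributivity
assert all(star(star(x, y), z) == star(star(x, z), star(y, z))
           for x, y, z in product(X, repeat=3))
# (3) each right translation x ↦ x * y is a bijection of X
assert all(sorted(star(x, y) for x in X) == sorted(X) for y in X)
```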
Quandles have been studied in low dimensional topology as well as in Hopf algebras. To any quandle X one can associate a group Ad(X) called the adjoint group
of X (also called the associated group or the enveloping group in the literature). It
is defined by the presentation
Ad(X) := ⟨e_x (x ∈ X) | e_y^{-1} e_x e_y = e_{x∗y} (x, y ∈ X)⟩.
Although adjoint groups play important roles in the study of quandles, not much
is known about their structure, partly because the definition of Ad(X) by a
possibly infinite presentation is difficult to work with in explicit calculations. See
Eisermann [9] and Nosaka [14] for generalities on adjoint groups, and [6, 9, 10, 14]
for descriptions of adjoint groups of certain classes of quandles.
In this paper, we will study the adjoint group of a Coxeter quandle. Let (W, S) be
a Coxeter system, a pair of a Coxeter group W and the set S of Coxeter generators
of W . Following Nosaka [14], we define the Coxeter quandle XW associated with
(W, S) to be the set of all reflections of W :
X_W := ⋃_{w∈W} w^{-1} S w.
The quandle operation is given by conjugation, x ∗ y := y^{-1}xy = yxy (note that y^2 = 1 for a reflection y). The symmetric group Sn of n letters is a Coxeter group (of type An−1), and the associated
Coxeter quandle XSn is nothing but the set of all transpositions.

Key words and phrases: quandle, Coxeter group, Artin group, pure Artin group.

In their paper [2], Andruskiewitsch–Fantino–García–Vendramin showed that Ad(XSn) is an intermediate group between Sn and the braid group Bn of n strands, in the sense
that the canonical projection Bn ↠ Sn splits into a sequence of surjections Bn ↠ Ad(XSn) ↠ Sn, and that Ad(XSn) fits into a central extension of the form
0 → Z → Ad(XSn) → Sn → 1.
See [2, Proposition 3.2] and its proof. The primary purpose of this paper is to
generalize their results to arbitrary Coxeter quandles. We will show that Ad(XW )
is an intermediate group between W and the Artin group AW associated with W
(Proposition 3.3), and will show that Ad(XW ) fits into a central extension of the
form
(1.1)  0 → Z^{c(W)} → Ad(X_W) −φ→ W → 1,
where c(W ) is the number of conjugacy classes of elements in XW (Theorem 3.1).
As is known, the central extension (1.1) corresponds to a cohomology class u_φ ∈ H^2(W, Z^{c(W)}). We will construct 2-cocycles of W representing u_φ (Proposition 4.2 and Theorem 4.7).
Finally, Eisermann [9] claimed that Ad(XSn) is isomorphic to the semidirect product A_n ⋊ Z, where A_n is the alternating group on n letters, but he did not write
down the proof. We will generalize his result to Coxeter quandles (Theorem 5.1
and Corollary 5.2).
Notation. Let G be a group, g, h ∈ G elements of G and m ≥ 2 a natural number.
Define
(gh)_m := ghg··· ∈ G  (m factors, alternating g and h).
For example, (gh)_2 = gh, (gh)_3 = ghg, (gh)_4 = ghgh. Let G^{Ab} := G/[G, G] be
the abelianization of G, Ab_G : G → G^{Ab} the natural projection, and write [g] :=
g[G, G] ∈ G^{Ab} for g ∈ G. For a quandle X and elements x_k ∈ X (k = 1, 2, 3, ...), we
denote
x_1 ∗ x_2 ∗ x_3 ∗ ··· ∗ x_ℓ := (···((x_1 ∗ x_2) ∗ x_3) ∗ ···) ∗ x_ℓ
for simplicity.
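The alternating-product notation above can be illustrated on free words (a toy helper, not from the paper):

```python
# (gh)_m := g h g h ... with m letters, alternating g and h,
# demonstrated on symbolic letters.
def alt_prod(g, h, m):
    return "".join(g if i % 2 == 0 else h for i in range(m))

assert alt_prod("g", "h", 2) == "gh"
assert alt_prod("g", "h", 3) == "ghg"
assert alt_prod("g", "h", 4) == "ghgh"
```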
2. COXETER GROUPS AND COXETER QUANDLES
2.1. Coxeter groups. Let S be a finite set and m : S × S → N ∪ {∞} a map satisfying the following conditions:
(1) m(s, s) = 1 for all s ∈ S
(2) 2 ≤ m(s,t) = m(t, s) ≤ ∞ for all distinct s,t ∈ S.
The map m is represented by the Coxeter graph Γ whose vertex set is S and whose
edges are the unordered pairs {s,t} ⊂ S such that m(s,t) ≥ 3. The edges with
m(s,t) ≥ 4 are labeled by the number m(s,t). The Coxeter system associated with
Γ is the pair (W, S) where W is the group generated by the elements s ∈ S subject to the fundamental relations (st)^{m(s,t)} = 1 (s,t ∈ S, m(s,t) < ∞):
(2.1)  W := ⟨s ∈ S | (st)^{m(s,t)} = 1 (s,t ∈ S, m(s,t) < ∞)⟩.
The group W is called the Coxeter group (of type Γ), and elements of S are called
Coxeter generators of W (also called simple reflections in the literature). Note
that the order of the product st is precisely m(s,t). In particular, every Coxeter
generator s ∈ S has order 2. It is easy to check that the defining relations in (2.1)
are equivalent to the following relations
(2.2)  s^2 = 1 (s ∈ S),  (st)_{m(s,t)} = (ts)_{m(s,t)} (s,t ∈ S, s ≠ t, m(s,t) < ∞).
Finally, the odd subgraph Γ_odd is the subgraph of Γ whose vertex set is S and whose
edges are the unordered pairs {s,t} ⊂ S such that m(s,t) is an odd integer. We refer to
Bourbaki [3] and Humphreys [12] for further details on Coxeter groups.
2.2. Conjugacy classes of reflections. Let
X_W := ⋃_{w∈W} w^{-1} S w
be the set of reflections in W as in the introduction (the underlying set of the Coxeter quandle). Let OW be the set of conjugacy classes of elements of XW under W ,
and RW a complete set of representatives of conjugacy classes. We may assume
RW ⊆ S. Let c(W ) be the cardinality of OW . The following facts are well-known
and easy to prove.
Proposition 2.1. The elements of O_W are in one-to-one correspondence with the connected components of Γ_odd. Consequently, c(W) equals the number of connected components of Γ_odd.
To be precise, the conjugacy class of s ∈ S corresponds to the connected component
of Γodd containing s ∈ S. The proof of Proposition 2.1 can be found in [4, Lemma
3.6].
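Proposition 2.1 can be checked by brute force in the dihedral case W = I_2(m), where Γ_odd is connected exactly when m is odd. The following sketch (an illustration under our own modeling choices, not from the paper) models W as permutations of the m vertices of a regular m-gon:

```python
from itertools import product

# W = I_2(m): dihedral Coxeter group of order 2m, with Coxeter generators
# s, t two reflections satisfying m(s,t) = m. Returns (|W|, |X_W|, c(W)).
def reflections_classes(m):
    def compose(p, q):
        return tuple(p[q[i]] for i in range(m))
    def inv(p):
        q = [0] * m
        for i, pi in enumerate(p):
            q[pi] = i
        return tuple(q)
    s = tuple((-i) % m for i in range(m))                   # a reflection
    t = compose(tuple((i + 1) % m for i in range(m)), s)    # another reflection
    W = {s, t}
    while True:                                             # closure: W = <s, t>
        new = {compose(a, b) for a, b in product(W, repeat=2)} - W
        if not new:
            break
        W |= new
    # X_W = union over w of w^{-1} {s, t} w  (the set of all reflections)
    X = {compose(inv(w), compose(g, w)) for g in (s, t) for w in W}
    # W-conjugacy classes of reflections
    classes = {frozenset(compose(inv(w), compose(x, w)) for w in W) for x in X}
    return len(W), len(X), len(classes)

# m odd: Γ_odd connected, so c(W) = 1; m even: two components, c(W) = 2.
assert reflections_classes(5) == (10, 5, 1)
assert reflections_classes(6) == (12, 6, 2)
```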
Proposition 2.2. W^{Ab} is the elementary abelian 2-group with basis {[s] ∈ W^{Ab} | s ∈ R_W}. In particular, W^{Ab} ≅ (Z/2)^{c(W)}.
2.3. Coxeter quandles. Now we turn our attention to Coxeter quandles. Let XW
be the Coxeter quandle associated with (W, S) and
Ad(X_W) := ⟨e_x (x ∈ X_W) | e_y^{-1} e_x e_y = e_{x∗y} (x, y ∈ X_W)⟩
the adjoint group of XW . Observe that
(2.3)  e_y^{-1} e_x e_y = e_{x∗y} = e_{y^{-1}xy}
and
(2.4)  e_y e_x e_y^{-1} = e_{x∗y},
where (2.4) follows from e_y^{-1} e_{x∗y} e_y = e_{x∗y∗y} = e_x.
Proposition 2.3. Ad(X_W) is generated by e_s (s ∈ S).
Proof. Given x ∈ X_W, we can express x as x = (s_1 s_2 ··· s_ℓ)^{-1} s_0 (s_1 s_2 ··· s_ℓ) for some
s_i ∈ S (0 ≤ i ≤ ℓ) by the definition of X_W. Applying (2.3), we have
e_x = e_{(s_1 s_2 ··· s_ℓ)^{-1} s_0 (s_1 s_2 ··· s_ℓ)} = e_{s_0 ∗ s_1 ∗ s_2 ∗ ··· ∗ s_ℓ} = (e_{s_1} e_{s_2} ··· e_{s_ℓ})^{-1} e_{s_0} (e_{s_1} e_{s_2} ··· e_{s_ℓ}),
proving the proposition.
Proposition 2.4. Ad(X_W)^{Ab} is the free abelian group with basis {[e_s] | s ∈ R_W}. In particular, Ad(X_W)^{Ab} ≅ Z^{c(W)}.
Proof. By the definition of Ad(X_W), the abelianization Ad(X_W)^{Ab} is generated by
[e_x] (x ∈ X_W) subject to the relations [e_x] = [e_{y^{-1}xy}], [e_x][e_y] = [e_y][e_x] (x, y ∈ X_W).
Consequently, [e_x] = [e_y] ∈ Ad(X_W)^{Ab} if and only if x, y ∈ X_W are conjugate in
W. We conclude that Ad(X_W)^{Ab} is the free abelian group with basis {[e_s] | s ∈
R_W}.
Let φ : Ad(X_W) → W be the surjective homomorphism defined by e_x ↦ x, which
is well-defined by virtue of (2.3), and let C_W := ker φ be its kernel.
Lemma 2.5. CW is a central subgroup of Ad(XW ).
Proof. Given g ∈ C_W, it suffices to prove g^{-1} e_x g = e_x for all x ∈ X_W. To do so, write
g = e_{y_1}^{ε_1} e_{y_2}^{ε_2} ··· e_{y_ℓ}^{ε_ℓ} (ε_i ∈ {±1}). Applying (2.3) and (2.4), we have
g^{-1} e_x g = (e_{y_1}^{ε_1} e_{y_2}^{ε_2} ··· e_{y_ℓ}^{ε_ℓ})^{-1} e_x (e_{y_1}^{ε_1} e_{y_2}^{ε_2} ··· e_{y_ℓ}^{ε_ℓ}) = e_{x∗y_1∗y_2∗···∗y_ℓ}.
Now x∗y_1∗y_2∗···∗y_ℓ = (y_1 y_2 ··· y_ℓ)^{-1} x (y_1 y_2 ··· y_ℓ), and the lemma follows
from φ(g) = y_1^{ε_1} y_2^{ε_2} ··· y_ℓ^{ε_ℓ} = y_1 y_2 ··· y_ℓ = 1 (note y_i^{-1} = y_i).
Lemma 2.6. If x, y ∈ X_W are conjugate in W, then e_x^2 = e_y^2 ∈ C_W.
Proof. It is obvious that e_x^2 ∈ C_W for all x ∈ X_W. Since C_W is a central subgroup,
e_x^2 = e_y^{-1} e_x^2 e_y = e_{y^{-1}xy}^2 holds for all x, y ∈ X_W, which implies the lemma.
3. ARTIN GROUPS AND THE PROOF OF THE MAIN RESULT
Now we state the main result of this paper:
Theorem 3.1. C_W is the free abelian group with basis {e_s^2 | s ∈ R_W}. In particular, C_W ≅ Z^{c(W)}.
As a consequence of Theorem 3.1, Ad(X_W) fits into a central extension of the
form 0 → Z^{c(W)} → Ad(X_W) → W → 1 as stated in the introduction. We begin with
the determination of the rank of C_W.
Proposition 3.2. rank(CW ) = c(W ).
Proof. The central extension 1 → C_W → Ad(X_W) −φ→ W → 1 yields the following
exact sequence for the rational homology of groups:
H_2(W, Q) → H_1(C_W, Q)_W → H_1(Ad(X_W), Q) → H_1(W, Q)
(see Brown [5, Corollary VII.6.4]). Here the coinvariants H_1(C_W, Q)_W coincide
with H_1(C_W, Q) because C_W is a central subgroup of Ad(X_W). It is known that the
rational homology of a Coxeter group is trivial (see Akita [1, Proposition 5.2] or
Davis [7, Theorem 15.1.1]). As a result, we have an isomorphism H_1(C_W, Q) ≅
H_1(Ad(X_W), Q) and hence we have
rank(C_W) = dim_Q C_W ⊗ Q = dim_Q H_1(C_W, Q) = dim_Q H_1(Ad(X_W), Q) = c(W)
as desired.
Given a Coxeter system (W, S), the Artin group AW associated with (W, S) is the
group defined by the presentation
A_W := ⟨a_s (s ∈ S) | (a_s a_t)_{m(s,t)} = (a_t a_s)_{m(s,t)} (s,t ∈ S, s ≠ t, m(s,t) < ∞)⟩.
In view of (2.2), there is an obvious surjective homomorphism π : A_W → W defined
by a_s ↦ s (s ∈ S). The pure Artin group P_W associated with (W, S) is defined to be
the kernel of π, so that there is an extension
1 → P_W ↪ A_W −π→ W → 1.
In case W is the symmetric group on n letters, AW is the braid group of n strands,
and PW is the pure braid group of n strands. Little is known about the structure of
general Artin groups. Among others, the following questions are still open.
(1) Are Artin groups torsion free?
(2) What is the center of Artin groups?
(3) Do Artin groups have solvable word problem?
(4) Are there finite K(π, 1)-complexes for Artin groups?
See survey articles by Paris [15–17] for further details of Artin groups.
Proposition 3.3. The assignment as 7→ es (s ∈ S) yields a well-defined surjective
homomorphism ψ : AW → Ad(XW ).
Proof. As for the well-definedness, it suffices to show (e_s e_t)_{m(s,t)} = (e_t e_s)_{m(s,t)} for
all distinct s,t ∈ S with m(s,t) < ∞. Applying the relation e_x e_y = e_y e_{x∗y} (x, y ∈ X_W)
repeatedly as in
(e_t e_s)_{m(s,t)} = e_t e_s e_t e_s ··· = e_s e_{t∗s} e_t e_s ··· = e_s e_t e_{t∗s∗t} e_s ··· = ···,
we obtain
(e_t e_s)_{m(s,t)} = (e_s e_t)_{m(s,t)−1} e_x,
where
x = t ∗ s ∗ t ∗ s ∗ ··· (m(s,t) letters) = (st)_{m(s,t)−1}^{-1} t (st)_{m(s,t)−1} = (st)_{m(s,t)−1}^{-1} (ts)_{m(s,t)} = (st)_{m(s,t)−1}^{-1} (st)_{m(s,t)}.
Here we used the relation (st)_{m(s,t)} = (ts)_{m(s,t)}. It follows from the last equality
that x is the last letter in (st)_{m(s,t)}, i.e. x = s or x = t according as m(s,t) is odd or
even. We conclude that (e_s e_t)_{m(s,t)−1} e_x = (e_s e_t)_{m(s,t)} as desired. Finally,
the surjectivity follows from Proposition 2.3.
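The key identity x = t ∗ s ∗ t ∗ ··· = last letter of (st)_{m(s,t)} can be checked numerically in the smallest case m(s,t) = 3, taking W = S_3 with adjacent transpositions s, t (a toy verification, not from the paper):

```python
# W = S_3, with s = (0 1) and t = (1 2), so m(s,t) = 3 and sts = tst.
# The identity predicts x = t * s * t equals s, the last letter of (st)_3 = sts.
def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

def star(x, y):     # x * y = y^{-1} x y = y x y, since reflections satisfy y^2 = 1
    return compose(y, compose(x, y))

s, t = (1, 0, 2), (0, 2, 1)     # transpositions (0 1) and (1 2)
x = star(star(t, s), t)         # t * s * t: three letters, alternating
assert x == s                   # m(s,t) = 3 is odd, so x = s
```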
As a result, the adjoint group Ad(X_W) is an intermediate group between a Coxeter group W and the corresponding Artin group A_W, in the sense that the surjection
π : A_W ↠ W splits into a sequence of surjections A_W ↠ Ad(X_W) ↠ W.
Proposition 3.4. P_W is the normal closure of {a_s^2 | s ∈ S} in A_W. In other words,
P_W is generated by g^{-1} a_s^2 g (s ∈ S, g ∈ A_W).
Proof. Given a Coxeter system (W, S), let F(S) be the free group on S and put
R := {(st)_{m(s,t)} (ts)_{m(s,t)}^{-1} | s,t ∈ S, s ≠ t, m(s,t) < ∞},  Q := {s^2 | s ∈ S}.
Let N(R), N(Q) be the normal closures of R, Q in F(S), respectively. The third
isomorphism theorem yields a short exact sequence of groups
1 → N(R)N(Q)/N(R) → F(S)/N(R) −p→ F(S)/N(R)N(Q) → 1.
Observe that the left term N(R)N(Q)/N(R) is nothing but the normal closure of Q
in F(S)/N(R). Now F(S)/N(R)N(Q) = F(S)/N(R ∪ Q) = W by the definition of
W, and F(S)/N(R) is identified with A_W via s ↦ a_s (s ∈ S). Under this identification, the map p is the canonical surjection π : A_W ↠ W, and hence the left term
N(R)N(Q)/N(R) coincides with P_W. The proposition follows.
Remark 3.5. Digne–Gomi [8, Corollary 6] obtained a presentation of P_W by using the Reidemeister–Schreier method. Their presentation is infinite whenever W is
infinite. Proposition 3.4 can be read off from their presentation.
Now we prove Theorem 3.1. Consider the commutative diagram

    1 ──→ P_W ──→   A_W   ──π──→ W ──→ 1
           │ψ        │ψ          ║
           ↓         ↓           ║
    1 ──→ C_W ──→ Ad(X_W) ──φ──→ W ──→ 1
whose rows are exact. Since ψ : A_W → Ad(X_W) is surjective, one can check that
its restriction ψ : P_W → C_W is also surjective. Since P_W is generated by g^{-1} a_s^2 g
(s ∈ S, g ∈ A_W) by Proposition 3.4, C_W is generated by the elements
ψ(g^{-1} a_s^2 g) = ψ(g)^{-1} e_s^2 ψ(g) = e_s^2,
where the last equality follows from the fact that C_W is central by Lemma 2.5.
Combining this with Lemma 2.6, we see that C_W is generated by {e_s^2 | s ∈ R_W}. Since
rank(C_W) = c(W) by Proposition 3.2 and |R_W| = c(W) by the definition of R_W,
C_W must be a free abelian group of rank c(W) and {e_s^2 | s ∈ R_W} must be a basis
of C_W.
4. CONSTRUCTION OF 2-COCYCLES
Throughout this section, we assume that the reader is familiar with group cohomology and Coxeter groups. The central extension
1 → C_W → Ad(X_W) −φ→ W → 1
corresponds to a cohomology class u_φ ∈ H^2(W, C_W) (see Brown [5, §IV.3] for the
precise correspondence). In this section, we will construct 2-cocycles representing u_φ. Before
doing so, we claim u_φ ≠ 0. For if u_φ = 0 then Ad(X_W) ≅ C_W × W. But this is
not the case, because Ad(X_W)^{Ab} ≅ Z^{c(W)} by Proposition 2.4 while (C_W × W)^{Ab} ≅
C_W × W^{Ab} ≅ Z^{c(W)} × (Z/2)^{c(W)} by Proposition 2.2. Now we invoke the celebrated
Matsumoto’s theorem:
Theorem 4.1 (Matsumoto [13]). Let (W, S) be a Coxeter system, M a monoid
and f : S → M a map such that (f(s)f(t))_{m(s,t)} = (f(t)f(s))_{m(s,t)} for all s,t ∈ S,
s ≠ t, m(s,t) < ∞. Then there exists a unique map F : W → M such that F(w) =
f(s_1)···f(s_k) whenever w = s_1···s_k (s_i ∈ S) is a reduced expression.
The proof can also be found in [11, Theorem 1.2.2]. Define a map f : S →
Ad(X_W) by s ↦ e_s; then f satisfies the assumption of Theorem 4.1 as in the proof
of Proposition 3.3, and hence there exists a unique map F : W → Ad(X_W) such
that F(w) = f(s_1)···f(s_k) = e_{s_1}···e_{s_k} whenever w = s_1···s_k (s_i ∈ S) is a reduced
expression. It is clear that F : W → Ad(X_W) is a set-theoretical section of φ :
Ad(X_W) → W. Define c : W × W → C_W by c(w_1, w_2) = F(w_1)F(w_2)F(w_1 w_2)^{-1}.
The standard argument in group cohomology (see Brown [5, §IV.3]) implies the
following result:
Proposition 4.2. c is a normalized 2-cocycle and [c] = uφ ∈ H 2 (W,CW ).
Remark 4.3. In case W = Sn is the symmetric group of n letters, Proposition 4.2
was stated in [2, Remark 3.3].
Now we deal with the case c(W) = 1 more closely. All finite irreducible Coxeter
groups of type other than B_n (n ≥ 2), F_4, I_2(m) (m even) satisfy c(W) = 1. Among
affine irreducible Coxeter groups, those of type Ã_n (n ≥ 2), D̃_n (n ≥ 4), Ẽ_6, Ẽ_7 and
Ẽ_8 fulfill c(W) = 1. To simplify the notation, we will identify C_W with Z by
e_s^2 ↦ 1 and denote our central extension by
0 → Z → Ad(X_W) −φ→ W → 1.
Proposition 4.4. If c(W) = 1 then H^2(W, Z) ≅ Z/2.
Proof. The short exact sequence 0 → Z ↪ C → C^× → 1 of abelian groups, where
C → C^× is defined by z ↦ exp(2π√−1 z), induces the exact sequence
H^1(W, C) → H^1(W, C^×) −δ→ H^2(W, Z) → H^2(W, C)
(see Brown [5, Proposition III.6.1]). It is known that H^k(W, C) = 0 for k > 0
(see the proof of Proposition 3.2), which implies that the connecting homomorphism δ : H^1(W, C^×) → H^2(W, Z) is an isomorphism. We claim that H^1(W, C^×) =
Hom(W, C^×) ≅ Z/2. Indeed, W is generated by S, which consists of elements of order 2, and all elements of S are mutually conjugate by the assumption c(W) = 1.
Thus Hom(W, C^×) consists of the trivial homomorphism and the homomorphism
ρ defined by ρ(s) = −1 (s ∈ S).
Corollary 4.5. If c(W) = 1 then 0 → Z → Ad(X_W) −φ→ W → 1 is the unique nontrivial central extension of W by Z.
In general, given a homomorphism of groups f : G → C^×, the cohomology class
δf ∈ H^2(G, Z), where δ : Hom(G, C^×) → H^2(G, Z) is the connecting homomorphism as above, can be described as follows. For each g ∈ G, choose a branch of
log(f(g)), arranged so that log(f(1)) = 0. Define τ_f : G × G → Z by
(4.1)  τ_f(g, h) = (1 / 2π√−1) {log(f(g)) + log(f(h)) − log(f(gh))} ∈ Z.
By a diagram chase, one can prove the following:
Proposition 4.6. τ f is a normalized 2-cocycle and [τ f ] = δ f ∈ H 2 (G, Z).
Assuming c(W) = 1, let ρ : W → C^× be the homomorphism defined by ρ(s) =
−1 (s ∈ S) as in the proof of Proposition 4.4. Note that ρ(w) = (−1)^{ℓ(w)} (w ∈ W),
where ℓ(w) is the length of w (see Humphreys [12, §5.2]). For each w ∈ W, choose
a branch of log(ρ(w)) as
log(ρ(w)) = 0 if ℓ(w) is even, and log(ρ(w)) = π√−1 if ℓ(w) is odd.
Applying (4.1), the corresponding map τ_ρ : W × W → Z is given by
(4.2)  τ_ρ(w_1, w_2) = 1 if ℓ(w_1) and ℓ(w_2) are odd, and 0 otherwise.
Now Corollary 4.5 and Proposition 4.6 imply the following theorem:
Theorem 4.7. If c(W) = 1 then τ_ρ : W × W → Z defined by (4.2) is a normalized
2-cocycle and [τ_ρ] = u_φ ∈ H^2(W, Z).
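That τ_ρ is a 2-cocycle can also be seen by a direct finite check (an illustration, not part of the proof). Since τ_ρ depends only on the length parities, and ℓ(w_1 w_2) ≡ ℓ(w_1) + ℓ(w_2) (mod 2), the cocycle identity τ(g,h) + τ(gh,k) = τ(h,k) + τ(g,hk) reduces to a statement about parities a, b, c ∈ {0, 1}:

```python
from itertools import product

# τ_ρ as a function of the two length parities: 1 iff both are odd.
def tau(a, b):
    return 1 if (a, b) == (1, 1) else 0

# Check the (inhomogeneous) 2-cocycle identity on all parity triples;
# the parity of a product is the sum of parities mod 2.
for a, b, c in product((0, 1), repeat=3):
    assert tau(a, b) + tau((a + b) % 2, c) == tau(b, c) + tau(a, (b + c) % 2)
```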
5. COMMUTATOR SUBGROUPS OF ADJOINT GROUPS
As was stated in the introduction, Eisermann [9, Example 1.18] claimed that
Ad(XSn) is isomorphic to the semidirect product A_n ⋊ Z, where A_n is the alternating
group on n letters. We will generalize his result to Coxeter quandles by showing
the following theorem:
Theorem 5.1. φ : Ad(XW ) → W induces an isomorphism
φ : [Ad(XW ), Ad(XW )] → [W,W ].
Proof. Consider the following commutative diagram with exact rows and columns:

                   1                      1
                   ↓                      ↓
                  C_W ──────────────→ ker φ^{Ab}
                   ↓                      ↓
    1 → [Ad(X_W), Ad(X_W)] → Ad(X_W) ──Ab──→ Ad(X_W)^{Ab} → 1
               │φ               │φ              │φ^{Ab}
               ↓                ↓               ↓
    1 →     [W, W]     ──→      W    ──Ab──→   W^{Ab} → 1
                                ↓               ↓
                                1               1

Here the vertical maps are induced by φ, the horizontal maps into the abelianizations are Ab_{Ad(X_W)} and Ab_W, and the middle and right columns are the exact sequences 1 → C_W → Ad(X_W) −φ→ W → 1 and 1 → ker φ^{Ab} → Ad(X_W)^{Ab} −φ^{Ab}→ W^{Ab} → 1.
From Proposition 2.2 and Proposition 2.4, we see that ker φ^{Ab} is the free abelian
group with basis {[e_s^2] | s ∈ R_W}, which implies that Ab_{Ad(X_W)} : C_W → ker φ^{Ab}
is an isomorphism, because it sends e_s^2 to [e_s^2] (s ∈ R_W). Since φ is surjective, it is obvious that φ([Ad(X_W), Ad(X_W)]) = [W, W]. We will show that φ :
[Ad(X_W), Ad(X_W)] → [W, W] is injective. To do so, let g ∈ [Ad(X_W), Ad(X_W)] be an
element with g ∈ ker φ = C_W. Then [g] := Ab_{Ad(X_W)}(g) = 1 by the exactness of the
middle row. But this implies g = 1, because Ab_{Ad(X_W)} : C_W → ker φ^{Ab} is an isomorphism.
Corollary 5.2. There is a group extension of the form
1 → [W, W] → Ad(X_W) −Ab_{Ad(X_W)}→ Z^{c(W)} → 1.
If c(W) = 1 then the extension splits and Ad(X_W) ≅ [W, W] ⋊ Z.
Since c(S_n) = 1 and [S_n, S_n] = A_n, we recover the claim of Eisermann mentioned above.
Acknowledgement. The author thanks Daisuke Kishimoto and Takefumi Nosaka
for valuable comments and discussions. He also thanks Ye Liu for informing him
of the paper by Digne–Gomi [8] and for careful reading of earlier drafts of
this paper. The author was partially supported by JSPS KAKENHI Grant Number
26400077.
REFERENCES
[1] Toshiyuki Akita, Euler characteristics of Coxeter groups, PL-triangulations of closed manifolds, and cohomology of subgroups of Artin groups, J. London Math. Soc. (2) 61 (2000), no. 3,
721–736, DOI 10.1112/S0024610700008693. MR1766100 (2001f:20080)
[2] N. Andruskiewitsch, F. Fantino, G. A. García, and L. Vendramin, On Nichols algebras associated to simple racks, Groups, algebras and applications, Contemp. Math., vol. 537, Amer.
Math. Soc., Providence, RI, 2011, pp. 31–56, DOI 10.1090/conm/537/10565. MR2799090
(2012g:16065)
[3] Nicolas Bourbaki, Éléments de mathématique, Masson, Paris, 1981 (French). Groupes et
algèbres de Lie. Chapitres 4, 5 et 6. [Lie groups and Lie algebras. Chapters 4, 5 and 6].
MR647314 (83g:17001)
[4] Noel Brady, Jonathan P. McCammond, Bernhard Mühlherr, and Walter D. Neumann,
Rigidity of Coxeter groups and Artin groups, Geom. Dedicata 94 (2002), 91–109, DOI
10.1023/A:1020948811381. MR1950875
[5] Kenneth S. Brown, Cohomology of groups, Graduate Texts in Mathematics, vol. 87, Springer-Verlag, New York, 1982. MR672956 (83k:20002)
[6] F. J.-B. J. Clauwens, The adjoint group of an Alexander quandle (2011), available at http://arxiv.org/abs/1011.1587.
[7] Michael W. Davis, The geometry and topology of Coxeter groups, London Mathematical Society Monographs Series, vol. 32, Princeton University Press, Princeton, NJ, 2008. MR2360474
(2008k:20091)
[8] F. Digne and Y. Gomi, Presentation of pure braid groups, J. Knot Theory Ramifications 10
(2001), no. 4, 609–623, DOI 10.1142/S0218216501001037. MR1831679
[9] Michael Eisermann, Quandle coverings and their Galois correspondence, Fund. Math. 225
(2014), no. 1, 103–168, DOI 10.4064/fm225-1-7. MR3205568
[10] Agustı́n Garcı́a Iglesias and Leandro Vendramin, An explicit description of the second cohomology group of a quandle (2015), available at http://arxiv.org/abs/1512.01262.
[11] Meinolf Geck and Götz Pfeiffer, Characters of finite Coxeter groups and Iwahori-Hecke algebras, London Mathematical Society Monographs. New Series, vol. 21, The Clarendon Press,
Oxford University Press, New York, 2000. MR1778802
[12] James E. Humphreys, Reflection groups and Coxeter groups, Cambridge Studies in Advanced Mathematics, vol. 29, Cambridge University Press, Cambridge, 1990. MR1066460
(92h:20002)
[13] Hideya Matsumoto, Générateurs et relations des groupes de Weyl généralisés, C. R. Acad. Sci.
Paris 258 (1964), 3419–3422 (French). MR0183818
[14] Takefumi Nosaka, Central extensions of groups and adjoint groups of quandles (2015), available at http://arxiv.org/abs/1505.03077.
[15] Luis Paris, Braid groups and Artin groups, Handbook of Teichmüller theory. Vol. II, IRMA
Lect. Math. Theor. Phys., vol. 13, Eur. Math. Soc., Zürich, 2009, pp. 389–451, DOI
10.4171/055-1/12. MR2497781
[16] ———, K(π, 1) conjecture for Artin groups, Ann. Fac. Sci. Toulouse Math. (6) 23 (2014),
no. 2, 361–415, DOI 10.5802/afst.1411 (English, with English and French summaries).
MR3205598
[17] ———, Lectures on Artin groups and the K(π, 1) conjecture, Groups of exceptional type, Coxeter groups and related geometries, Springer Proc. Math. Stat., vol. 82, Springer, New Delhi,
2014, pp. 239–257, DOI 10.1007/978-81-322-1814-2_13. MR3207280
DEPARTMENT OF MATHEMATICS, HOKKAIDO UNIVERSITY, SAPPORO, 060-0810 JAPAN
E-mail address: [email protected]
arXiv:1607.07586v2 [] 1 Jan 2017
DISTINGUISHING FINITE GROUP CHARACTERS AND
REFINED LOCAL-GLOBAL PHENOMENA
KIMBALL MARTIN AND NAHID WALJI
Abstract. Serre obtained a sharp bound on how often two irreducible degree
n complex characters of a finite group can agree, which tells us how many local
factors determine an Artin L-function. We consider the more delicate question
of finding a sharp bound when these objects are primitive, and answer these
questions for n = 2, 3. This provides some insight on refined strong multiplicity
one phenomena for automorphic representations of GL(n). For general n, we
also answer the character question for the families PSL(2, q) and SL(2, q).
1. Introduction
In this paper, we consider two questions about seemingly different topics:
(1) How often can two characters of a finite group agree?
(2) How many local Euler factors determine an L-function?
The first question is just about characters of finite groups, and the second is
a refined local-global principle in number theory. However, it has been observed,
notably by Serre, that being able to say something about (1) allows one to say
something about (2), which is our primary motivation, though both are natural
questions. Our main results about the first question are for comparing primitive
characters of degree ≤ 3 and characters of PSL(2, q) or SL(2, q). This will yield
sharp bounds on how many Euler factors one needs to distinguish primitive 2- or
3-dimensional L-functions of Galois representations. We address them in turn.
1.1. Distinguishing group characters. Let G be a finite group, and ρ, ρ′ two
complex representations of G with characters χ, χ′. We will study the quantities
δ(ρ, ρ′) = δ(χ, χ′) = |{g ∈ G : χ(g) ≠ χ′(g)}| / |G|.
Specifically, let δ_n(G) be the minimum of δ(ρ, ρ′) as ρ, ρ′ range over pairs of
inequivalent irreducible n-dimensional representations of G, with the convention
that δ_n(G) = 1 if there are no such pairs ρ, ρ′. Note that δ_n(G) tells us what fraction
of elements of G we must check to distinguish irreducible degree n characters. Put
d_n = inf_G {δ_n(G)}.
An elementary consequence of the orthogonality relations is
Proposition 1.1. We have d_n ≥ 1/(2n^2).
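The quantity δ and the n = 1 case of this bound can be illustrated concretely (a toy computation, not from the paper): the trivial and sign characters of S_3 disagree exactly on the three odd permutations, so δ = 3/6 = 1/2 = 1/(2n^2) for n = 1.

```python
from itertools import permutations

# Compute δ(χ, χ') for the two degree-1 characters of S_3:
# the trivial character and the sign character.
def sign(p):
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:     # count inversions
                s = -s
    return s

G = list(permutations(range(3)))
chi_triv = {g: 1 for g in G}
chi_sign = {g: sign(g) for g in G}
delta = sum(chi_triv[g] != chi_sign[g] for g in G) / len(G)
assert delta == 0.5             # = 1/(2·1²), so the bound is attained at n = 1
```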
Buzzard, Edixhoven and Taylor constructed examples to show this bound is
sharp when n is a power of 2, and Serre generalized this to arbitrary n (see
[Ram94b]).
Date: January 3, 2017.
Theorem 1.2 (Serre). For any n, there exists G such that δ_n(G) = 1/(2n^2), so d_n = 1/(2n^2).
In particular, the infimum in dn is a minimum. We will recall the proof of
Proposition 1.1 and Serre’s construction in Section 2.2. For now, the main points
to note are that Serre’s examples must be solvable and the representations are
induced.
In this paper, we consider two kinds of refinements of determining dn . The first
refinement is about restricting to primitive representations and the second is about
restricting to certain families of groups.
Define δ_n^♮(G) to be the infimum of δ(ρ, ρ′) where ρ, ρ′ range over pairs of inequivalent irreducible primitive n-dimensional complex representations of G. Let
d_n^♮ = inf_G {δ_n^♮(G)}. From Serre's theorem, we get a trivial bound d_n^♮ ≥ d_n = 1/(2n^2).
Our first result is to determine d_n^♮ for n ≤ 3.
Theorem 1.3. We have d_1^♮ = 1/2, d_2^♮ = 1/4 and d_3^♮ = 2/7. Furthermore, δ_2^♮(G) = 1/4
if and only if G is an extension of H ×_{C_2} C_{2m} where m ∈ N and H = [48, 28] or
H = [48, 29]. Also, δ_3^♮(G) = 2/7 if and only if G is an extension of PSL(2, 7).
Here G being an extension of H by some N ⊳ G means G/N ≃ H. The groups
[48, 28] and [48, 29] are the two groups of order 48 which are extensions of S4 by
the cyclic group C2 and contain SL(2, 3).
The n = 1 case is already contained in Proposition 1.1, as d_1 = d_1^♮. For n = 2, 3,
these bounds are much better than the trivial bounds d_2^♮ ≥ 1/8 and d_3^♮ ≥ 1/18 from
Proposition 1.1. For n = 2, related results were previously obtained by the second
author in [Wal14] and will be discussed below.
Note that while d_n is a strictly decreasing sequence for n ≥ 1, our result says
this is not the case for d_n^♮.
In a slightly different direction, one can look for stronger lower bounds than 1/(2n^2)
for certain families of groups. We do not begin a serious investigation of this here,
but just treat two basic families of finite groups of Lie type which are related to
the calculations for δ_2^♮(G) and δ_3^♮(G).
Theorem 1.4. We compute δ_n(G) and δ_n^♮(G) where G = PSL(2, q) and G =
SL(2, q); for n not listed explicitly below, δ_n(G) = δ_n^♮(G) = 1.
For G = SL(2, q) with q arbitrary, or for G = PSL(2, q) with q even:
    δ_n(G) = δ_n^♮(G) = 2/q if n = (q ± 1)/2 and q is odd,
    δ_n(G) = δ_n^♮(G) ≥ 1/6 if n = q − 1,
and δ_{q+1}(G) ≥ 1/6 whereas δ_{q+1}^♮(G) = 1.
For G = PSL(2, q) with q odd:
    δ_n(G) = δ_n^♮(G) = 2/q if n = (q − 1)/2 and q ≡ 3 mod 4,
    δ_n(G) = δ_n^♮(G) = 2/q if n = (q + 1)/2 and q ≡ 1 mod 4,
    δ_n(G) = δ_n^♮(G) ≥ 1/6 if n = q − 1,
and δ_{q+1}(G) ≥ 1/6 whereas δ_{q+1}^♮(G) = 1.
We remark that we completely determine δ_{q±1}(G) for G = SL(2, q) and PSL(2, q)
in Section 6, but the exact formulas are a bit complicated and depend on divisibility
conditions on q ∓ 1. In particular, δ_{q±1}(SL(2, q)) = 1/6 if and only if 12 | (q ∓ 1), and
δ_{q±1}(PSL(2, q)) = 1/6 if and only if 24 | (q ∓ 1).
The values for SL(2, q) immediately give the following bounds.
Corollary 1.5. d_{(q±1)/2}^♮ ≤ 2/q for any odd prime power q greater than 3.
Note that the upper bound in the corollary for q = 7 is the exact value of d_3^♮.
Even though Theorem 1.3 implies d_n^♮ is not a decreasing sequence for n ≥ 1, this
corollary at least suggests that d_n^♮ → 0 as n → ∞.
The proof of Theorem 1.3 relies on consideration of various cases according to the
possible finite primitive subgroups of GL2 (C) and GL3 (C) which are “minimal lifts”,
and about half of these are of the form PSL(2, q) or SL(2, q) for q ∈ {3, 5, 7, 9}. Thus
Theorem 1.4 is a generalization of one of the ingredients for Theorem 1.3. However,
most of the work involved in the proof of Theorem 1.3 is the determination of and
reduction to these minimal lifts, as described in Section 3.
1.2. Distinguishing L-functions. Let F be a number field, and consider an L-function L(s), which is a meromorphic function of a complex variable s satisfying
certain properties, principally having an Euler product L(s) = ∏_v L_v(s), where v
runs over all primes of F, for s in some right half-plane. For almost all (all but
finitely many) v, we should have L_v(s) = (p_v(q_v^{−s}))^{−1}, where q_v is the size of the
residue field of F_v and p_v is a polynomial of a fixed degree n, which is the degree of
the L-function.
Prototypical L-functions of degree n are the L-functions L(s, ρ) = ∏_v L(s, ρ_v) of n-dimensional Galois representations ρ : Gal(F̄/F) → GL_n(C) (or into GL_n(Q_p)) and the
L-functions L(s, π) = ∏_v L(s, π_v) of automorphic representations π of GL_n(A_F). In
fact, it is conjectured that all (nice) L-functions are automorphic. These L-functions
are local-global objects, and one can ask how many local factors L_v(s) determine
L(s).
First consider the automorphic case: suppose π, π′ are irreducible cuspidal automorphic representations of GL_n(A_F), S is a set of places of F, and we know that
L(s, π_v) = L(s, π′_v) for all v ∉ S. Strong multiplicity one says that if S is finite,
then L(s, π) = L(s, π′) (in fact, π ≃ π′). Ramakrishnan [Ram94b] conjectured that
if S has density < 1/(2n^2), then L(s, π) = L(s, π′), and that this density bound would be
sharp. This is true when n = 1, and Ramakrishnan also showed it when n = 2
[Ram94a].
Recently, in [Wal14] the second author showed that when n = 2 one can in fact
obtain stronger bounds under various assumptions; e.g., the density bound 1/8 from
[Ram94a] may be replaced by 1/4 if one restricts to non-dihedral representations
(i.e., not induced from quadratic extensions), or by 2/9 if the representations are not
twist-equivalent.
Our motivation for this project was to try to understand an analogue of [Wal14]
for larger n. However the analytic tools known for GL(2) that are used in [Wal14]
are not known for larger n. Moreover, the classification of GL(2) cuspidal representations into dihedral, tetrahedral, octahedral and icosahedral types has no
known nice generalization to GL(n). So, as a proxy, we consider the case of Galois
(specifically Artin) representations. The strong Artin conjecture says that all Artin
representations are automorphic, and Langlands' principle of functoriality says that
whatever is true for Galois representations should be true (roughly) for automorphic
representations as well.
Let ρ, ρ′ be irreducible n-dimensional Artin representations for F, i.e., irreducible n-dimensional continuous complex representations of the absolute Galois
group Gal(F̄ /F ) of F . For almost all places v of F , we can associate a well-defined
Frobenius conjugacy class Fr_v of Gal(F̄/F), and L(s, ρ_v) determines the eigenvalues of ρ(Fr_v), and thus tr ρ(Fr_v). Let S be a set of places of F, and suppose
L(s, ρ_v) = L(s, ρ′_v), or even just tr ρ(Fr_v) = tr ρ′(Fr_v), for all v ∉ S.
Continuity means that ρ and ρ′ factor through a common finite quotient G =
Gal(K/F) of Gal(F̄/F), for some finite normal extension K/F. View ρ, ρ′ as irreducible n-dimensional representations of the finite group G. The Chebotarev
density theorem tells us that if C is a conjugacy class in G, then the image of Fr_v
in Gal(K/F) lies in C for a set of primes v of density |C|/|G|. This implies that if
the density of S is < δ_n(G) (or < δ_n^♮(G) if ρ, ρ′ are primitive), then ρ ≃ ρ′, i.e.,
L(s, ρ) = L(s, ρ′). Moreover, this bound on the density of S is sharp.
Consequently, Proposition 1.1 tells us that if the density of S is < 1/(2n²), then
L(s, ρ) = L(s, ρ′ ), and Serre’s result implies this bound is sharp. (See Rajan [Raj98]
for an analogous result on ℓ-adic Galois representations.) In fact, this application
to Galois representations was Serre’s motivation, and it also motivated the bound
in Ramakrishnan’s conjecture. For us, the Chebotarev density theorem together
with Theorem 1.3 yields
Corollary 1.6. Let ρ, ρ′ be irreducible primitive n-dimensional Artin representations for F. Suppose tr ρ(Frv) = tr ρ′(Frv) for a set of primes v of F of density c.
(1) If n = 2 and c > 3/4, then ρ ≃ ρ′.
(2) If n = 3 and c > 5/7, then ρ ≃ ρ′.
When n = 2, if ρ and ρ′ are automorphic, i.e., satisfy the strong Artin conjecture,
then the above result already follows by [Wal14]. When n = 2, the strong Artin
conjecture for ρ is known in many cases—for instance, if ρ has solvable image by
Langlands [Lan80] and Tunnell [Tun81], or if F = Q and ρ is “odd” via Serre’s conjecture by Khare-Wintenberger [KW09]. We remark that the methods of [Wal14]
are quite different from ours here.
The above corollary suggests the following statement may be true: if π, π′ are
cuspidal automorphic representations of GL3(AF) which are not induced from characters and L(s, πv) = L(s, πv′) for a set of primes v of density > 5/7, then π ≃ π′.
Since not all cuspidal π, π′ come from Artin representations, the 5/7 bound is not
even conjecturally sufficient for general π, π′. However, it seems reasonable to think
that coincidences of a large fraction of Euler factors only happen for essentially algebraic reasons, so the density bounds are likely to be the same in both the Artin
and automorphic cases.
Acknowledgements. We thank a referee for pointing out an error in an earlier
version. The first author was partially supported by a Simons Collaboration Grant.
The second author was supported by Forschungskredit grant K-71116-01-01 of the
University of Zürich and partially supported by grant SNF PP00P2-138906 of the
Swiss National Foundation. This work began when the second author visited the
first at the University of Oklahoma. The second author would like to thank the
first author as well as the mathematics department of the University of Oklahoma
for their hospitality.
DISTINGUISHING FINITE GROUP CHARACTERS
5
2. Notation and Background
Throughout, G, H and A will denote finite groups, and A will be abelian. Denote
by Z(G) the center of G.
If G and N are groups, by a (group) extension of G by N we mean a group H
with a normal subgroup N such that H/N ≃ G. The extension is called central or
cyclic if N is a central or cyclic subgroup of H.
If G, H, and Z are groups such that Z ⊂ Z(G) ∩ Z(H), then the central product
G ×Z H of G and H with respect to Z is defined to be the direct product G × H modulo
the central subgroup {(z, z) : z ∈ Z}.
If χ1, χ2 are characters of G, their inner product is (χ1, χ2) = |G|⁻¹ Σ_{g∈G} χ1(g) χ̄2(g).
We denote a cyclic group of order m by Cm .
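As a small illustration (our own, not from the paper), this inner product can be computed directly from class sizes and character values; here we use the standard class data of S3:

```python
# Inner product of characters from class data, illustrated on S3.
# Classes: {e}, {transpositions}, {3-cycles}, of sizes 1, 3, 2;
# chi is the standard 2-dimensional character, triv the trivial one.
def inner(c1, c2, sizes):
    n = sum(sizes)
    return sum(s * a * complex(b).conjugate()
               for s, a, b in zip(sizes, c1, c2)) / n

sizes = [1, 3, 2]
chi = [2, 0, -1]
triv = [1, 1, 1]
assert inner(chi, chi, sizes) == 1   # chi is irreducible
assert inner(chi, triv, sizes) == 0  # orthogonal to the trivial character
```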
2.1. Finite subgroups of GLn (C). Next we recall some definitions and facts
about finite subgroups of GLn (C).
Let G be a finite subgroup of GLn (C), so one has the standard representation
of G on V = Cn . We say G is reducible if there exists a nonzero proper subspace
W ⊂ V which is fixed by G.
Suppose G is irreducible. Schur’s lemma implies that Z(G) ⊂ Z(GLn (C)). In
particular, Z(G) is cyclic. If there exists a nontrivial decomposition V = W1 ⊕ · · · ⊕
Wk such that G acts transitively on the Wj , then we say G is imprimitive. In this
case, each Wj has the same dimension, and the standard representation is induced
from a representation on W1 . Otherwise, call G primitive.
Let A 7→ Ā denote the quotient map from GLn (C) to PGLn (C). Similarly, if
G ⊂ GLn (C), let Ḡ be the image of G under this map. We call the projective image
Ḡ irreducible or primitive if G is. Finite subgroups of PGLn (C) have been classified
for small n, and we can use this to describe the finite subgroups of GLn (C).
Namely, suppose G ⊂ GLn (C) is irreducible. Then Z(G) is a cyclic subgroup
of scalar matrices, and Ḡ = G/Z(G). Hence the irreducible finite subgroups of
GLn (C), up to isomorphism, are a subset of the set of finite cyclic central extensions
of the irreducible subgroups Ḡ of PGLn (C).
Let H be an irreducible subgroup of PGLn (C). Given one cyclic central extension G of H which embeds (irreducibly) in GLn (C), note that the central product
G ×Z(G) Cm also does for any cyclic group Cm ⊃ Z(G), and has the same projective image as G. (Inside GLn (C), this central product just corresponds to adjoining
more scalar matrices to G.) Conversely, if G ×Z(G) Cm is an irreducible subgroup of
GLn (C), so is G. We say G is a minimal lift of H to GLn (C) if G is an irreducible
subgroup of GLn (C) with Ḡ ≃ H such that G is not isomorphic to G0 ×Z(G0 ) Cm
for any proper subgroup G0 of G.
2.2. Serre’s construction. Here we explain the proof of Proposition 1.1 and describe Serre’s construction.
Suppose χ1 and χ2 are two distinct irreducible degree n characters of a finite
group G. Let Y be the set of elements g ∈ G such that χ1(g) ≠ χ2(g). Then we have

|G|((χ1, χ1) − (χ1, χ2)) = Σ_{g∈Y} χ1(g)(χ̄1(g) − χ̄2(g)).

Using the bound |χi(g)| ≤ n for i = 1, 2 and orthogonality relations, we see

|G| = |G|((χ1, χ1) − (χ1, χ2)) ≤ 2n²|Y|.
This proves Proposition 1.1.
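As a quick independent check of this inequality (our own example, not from the paper), it is already attained in the degenerate case n = 1: take G = S3 with χ1 the trivial and χ2 the sign character, so Y is the set of transpositions.

```python
from itertools import permutations

def sign(p):
    # sign of a permutation via inversion count
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return (-1) ** inv

G = list(permutations(range(3)))        # S3
Y = [p for p in G if sign(p) != 1]      # where chi1 = 1 and chi2 = sgn differ
n = 1
assert len(G) <= 2 * n**2 * len(Y)      # |G| <= 2 n^2 |Y|, with equality here
assert len(Y) / len(G) == 1 / (2 * n**2)  # delta(chi1, chi2) = 1/(2 n^2)
```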
We now recall Serre’s construction proving Theorem 1.2, which is briefly described in [Ram94b] using observations from [Ser81, Sec 6.5].
Let H be an irreducible subgroup of GLn (C), containing ζI for each n-th root
of unity ζ, such that H̄ has order n2 . This means that H is of “central type” with
cyclic center. Such H exist for all n. For instance, one can take H̄ = A × A, where
A is an abelian group of order n. Some nonabelian examples of such H̄ are given
by Iwahori and Matsumoto [IM64, Sec 5]. Iwahori and Matsumoto conjectured
that groups of central type are necessarily solvable and this was proved using the
classification of finite simple groups by Howlett and Isaacs [HI82].
Since |H| = n³ and |Z(H)| = n, the identity Σ_{h∈H} |tr h|² = |H| implies tr h = 0
for each h ∈ H \ Z(H), i.e., the set of h ∈ H such that tr h = 0 has cardinality
n³ − n = (1 − 1/n²)|H|.
Let G = H × {±1} and consider the representations of G given by ρ = τ ⊗ 1 and
ρ′ = τ ⊗ sgn, where τ is the standard representation of H and sgn is the nontrivial
character of {±1}. Then tr ρ(g) = tr ρ′(g) = 0 for 2(n³ − n) = (1 − 1/n²)|G| elements
of G. On the remaining 2n elements of Z(G), tr ρ and tr ρ′ must differ on precisely
n elements, giving G with δn(G) = 1/(2n²) as desired.
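For n = 3 this construction can be checked concretely: the Weyl–Heisenberg group over F3 is an extraspecial group of order 27 = n³ of central type with H̄ ≃ C3 × C3, and one recovers δ = 1/18 = 1/(2n²). The following is our own numerical sketch, not code from the paper:

```python
import numpy as np
from itertools import product

# Weyl-Heisenberg group over F_3: extraspecial of order 27 = n^3, acting
# irreducibly on C^3 -- a "central type" group with H-bar = C3 x C3.
n = 3
w = np.exp(2j * np.pi / n)                    # primitive cube root of unity
X = np.roll(np.eye(n), 1, axis=0)             # cyclic shift: e_i -> e_{i+1}
Z = np.diag([w**i for i in range(n)])         # e_i -> w^i e_i

H = [w**a * (np.linalg.matrix_power(X, b) @ np.linalg.matrix_power(Z, c))
     for a, b, c in product(range(n), repeat=3)]

# tr h = 0 off the center, so #{h : tr h = 0} = n^3 - n
zero_traces = sum(1 for h in H if abs(np.trace(h)) < 1e-9)
assert zero_traces == n**3 - n

# G = H x {±1}, rho = tau⊗1, rho' = tau⊗sgn: the traces differ exactly on
# the n scalar matrices paired with s = -1, so delta(rho, rho') = 1/(2n^2)
diff = sum(1 for h in H for s in (1, -1)
           if abs(np.trace(h) - s * np.trace(h)) > 1e-9)
delta = diff / (2 * len(H))
assert abs(delta - 1 / (2 * n**2)) < 1e-12
print(zero_traces, delta)   # prints 24 and 1/18
```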
Finally, we note that ρ and ρ′ so constructed are induced for n > 1. It suffices to
show τ is induced. Since H̄ is solvable, there is a subgroup of prime index p,
so there exists a subgroup K of H of index p which contains Z = Z(H). Put
χ = tr τ. Now Σ_{k∈K} |χ(k)|² ≥ Σ_{k∈Z} |χ(k)|² = |H|. On the other hand,
Σ_{k∈K} |χ(k)|² = Σ_{i=1}^r Σ_{k∈K} |ψi(k)|² = r|K|, where χ|K = ψ1 + · · · + ψr is the decomposition of χ|K
into irreducible characters of K. Thus r ≥ p and we must have equality, which
means τ is induced from a ψi. We note that, more generally, Christina Durfee
informed us of a proof that ρ, ρ′ must be induced if δ(ρ, ρ′) = 1/(2n²).
3. General Methods
3.1. Central extensions and minimal lifts. The first step in the proof of Theorem 1.3 is the determination of the minimal lifts of irreducible finite subgroups of
PGL2 (C) and PGL3 (C). Here we explain our method for this.
Let G be a group and A an additive abelian group, which we view as a G-module
with trivial action. Then a short exact sequence of groups
(3.1)    0 → A →ι H →π G → 1,

where ι and π are homomorphisms, such that ι(A) ⊂ Z(H) gives a central extension
H of G by A. Let M (G, A) be the set of such sequences. (Note these sequences are
often called central extensions, but for our purpose it makes sense to call the middle
term H the central extension.) We say two sequences in M (G, A) are equivalent if
there is a map φ that makes this diagram commute:
(3.2)
    0 → A →ι  H  →π  G → 1
        ∥     ↓φ     ∥
    0 → A →ι′ H′ →π′ G → 1

(with the left and right vertical maps the identity).
Let M̃ (G, A) be M (G, A) modulo equivalence.
If two sequences in M (G, A) as above are equivalent, then H ≃ H ′ . However
the converse is not true. E.g., taking G ≃ A ≃ Cp , then |M̃ (G, A)| = p but there
are only two isomorphism classes of central extensions of Cp by itself, namely the
two abelian groups of order p2 .
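The p classes versus two isomorphism types can be checked directly by enumerating normalized 2-cocycles for p = 3 (a brute-force sketch of ours; the helper names are not from the paper):

```python
from itertools import product

p = 3
idx = [(a, b) for a in range(p) for b in range(p)]

def cocycle_ok(f):
    # 2-cocycle condition on C_p with trivial action
    return all((f[b, c] - f[(a + b) % p, c] + f[a, (b + c) % p] - f[a, b]) % p == 0
               for a in range(p) for b in range(p) for c in range(p))

Z2 = []
for v in product(range(p), repeat=(p - 1) ** 2):
    f = {(a, b): 0 for a, b in idx}           # normalized: zero on borders
    for i, (a, b) in enumerate((a, b) for a in range(1, p) for b in range(1, p)):
        f[a, b] = v[i]
    if cocycle_ok(f):
        Z2.append(f)

# coboundaries (delta h)(a, b) = h(a) + h(b) - h(a+b), with h(0) = 0
B2 = set()
for h in product(range(p), repeat=p - 1):
    hh = (0,) + h
    B2.add(tuple((hh[a] + hh[b] - hh[(a + b) % p]) % p for a, b in idx))

flat = lambda f: tuple(f[a, b] for a, b in idx)
classes = {min(tuple((flat(f)[i] + b[i]) % p for i in range(p * p)) for b in B2)
           for f in Z2}
assert len(classes) == p      # |H^2(C_p, C_p)| = p classes of sequences

# ...but only two isomorphism types occur: the extension is cyclic of order
# p^2 iff (1,0) has order p^2, i.e. f(1,1) + f(2,1) != 0 mod p (for p = 3)
types = {(f[1, 1] + f[2, 1]) % p != 0 for f in Z2}
assert types == {True, False}
```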
Let Cent(G, A) be the set of isomorphism classes of central extensions of G by A.
Then the above discussion shows we have a surjective but not necessarily injective
map Φ : M̃ (G, A) → Cent(G, A) induced from sending a sequence as in (3.1) to the
isomorphism class of H.
Viewing A as a trivial G-module, we have a bijection between M̃ (G, A) and
H 2 (G, A), with the class 0 ∈ H 2 (G, A) corresponding to all split sequences in
M (G, A). We can use this to help determine minimal lifts of irreducible subgroups
of PGLn (C). We recall H1 (G, Z) is the abelianization of G, and H2 (G, Z) is the
Schur multiplier of G.
Proposition 3.1. Let G be an irreducible subgroup of PGLn (C). Then any minimal
lift of G to GLn (C) is a central extension of G by Cm for some divisor m of the
exponent of H1 (G, Z) × H2 (G, Z).
Proof. Any lift of G to an irreducible subgroup H ⊂ GLn (C) corresponds to an
element of Cent(G, A) where A = Cm for some m, and thus corresponds to at least
one element of H 2 (G, A). The universal coefficients theorem gives us the exact
sequence
(3.3)
0 → Ext(H1 (G, Z), A) → H 2 (G, A) → Hom(H2 (G, Z), A) → 0.
Let m′ be the gcd of m with the exponent of H1(G, Z) × H2(G, Z). Recall that
Ext(⊕ Z/ni Z, A) = ⊕ A/ni A, so Ext(H1(G, Z), Cm) = Ext(H1(G, Z), Cm′). An
analogous statement is true for Hom(H2(G, Z), −) so |H²(G, Cm)| = |H²(G, Cm′)|.
Assume m 6= m′ . Consider a sequence as in (3.1) with A = Cm′ . This gives a
sequence
0 → Cm → H ×Cm′ Cm → G → 1
in M(G, Cm) by extending ι : Cm′ → H to be the identity on Cm. Note if one has
an equivalence φ of two sequences in M(G, Cm) constructed in this way, then commutativity implies φ(H) = H, so restricting the isomorphism φ on the middle groups
to H yields an equivalence of the corresponding sequences in M(G, Cm′). Hence
all elements of M̃ (G, Cm ) arise from “central products” of sequences in M (G, Cm′ ),
and thus no elements of Cent(G, Cm ) can be minimal lifts.
When H1 (G, Z) × H2 (G, Z) ≃ 1, then H 2 (G, A) = 0 for any abelian group A,
which means all central extensions are split, i.e., Cent(G, A) = {G × A} for any A.
When H1 (G, Z) × H2 (G, Z) ≃ Z/2Z, then (3.3) tells us that |H 2 (G, Cm )| has size
1 or 2 according to whether m is odd or even, so there must be a unique nonsplit
extension G̃ ∈ Cent(G, C2 ). Then the argument in the proof tells us any cyclic
central extension of G is a central product of either G or G̃ with a cyclic group.
However, in general, knowing H1 (G, Z) and H2 (G, Z) is not enough to determine
the size of Cent(G, Cm ). When |Cent(G, Cm )| < |H 2 (G, Cm )|, we will sometimes
need a way to verify that the central extensions of G by Cm we exhibit exhaust all
of Cent(G, Cm ). For this, we will use a lower bound on the size of the fibers of Φ,
i.e., a lower bound on the number of classes in M̃ (G, A) a given central extension
H ∈ Cent(G, A) appears in.
The central automorphisms of a group H with center Z, denoted AutZ (H), are
the automorphisms σ of H which commute with the projection H → H/Z, i.e.,
satisfy σ(h)h−1 ∈ Z for all h ∈ H.
Proposition 3.2. Let A be abelian and H ∈ Cent(G, A) such that A = Z := Z(H).
Then |Φ⁻¹(H)| ≥ |Aut(Z)|/|AutZ(H)|. Moreover, if H is perfect, then |Φ⁻¹(H)| ≥ |Aut(Z)|.
Recall H being perfect means H equals its derived group, i.e., H1 (H, Z) = 0. In
particular, non-abelian simple groups are perfect. By (3.3), central extensions of
perfect groups are simpler to study. In fact a perfect group H possesses a universal
central extension by H2 (H, Z).
Proof. Consider a commuting diagram of sequences as in (3.2) with H ′ = H. Suppose π = π ′ , which forces φ ∈ AutZ (H) and ι′ (A) = ker π = ι(A). Fixing π and ι,
there are |Aut(Z)| choices for ι′ , which gives |Aut(Z)| elements of M (G, A). Each
different ι′ must induce a different central automorphism φ ∈ AutZ (H). Thus at
most |AutZ (H)| of these |Aut(Z)| bottom sequences can lie in the same equivalence
class, which proves the first statement.
Adney and Yen [AY65] showed |AutZ (H)| = |Hom(H, Z)| when H has no abelian
direct factor. Consequently, AutZ (H) = 1 when H is perfect.
3.2. Reduction to minimal lifts. Let G be a finite group and ρ1 , ρ2 be two
inequivalent irreducible representations of G into GLn (C). Let Ni = ker ρi and
Gi = ρi (G) for i = 1, 2. We want to reduce the problem of finding lower bounds for
δ(ρ1 , ρ2 ) to the case where G1 and G2 are minimal lifts of Ḡ1 and Ḡ2 . Note that
δ(ρ1 , ρ2 ) is unchanged if we factor through the common kernel N1 ∩ N2 , so we may
assume N1 ∩ N2 = 1. Then N1 × N2 is a normal subgroup of G, N1 ≃ ρ2 (N1 ) ⊳ G2
and N2 ≃ ρ1 (N2 ) ⊳ G1 .
Write Gi = Hi ×Z(Hi ) Zi for i = 1, 2, where Hi is a minimal lift of Ḡi to GLn (C)
and Zi is a cyclic group containing Z(Hi ).
For a subgroup H of GLn(C), let αn(H) be the minimum of |{h ∈ H : tr h ≠ 0}|/|H| as
one ranges over all embeddings (i.e., faithful n-dimensional representations) of H
in GLn(C).
Lemma 3.3. Let m = |ρ1(N2) ∩ Z(G1)|. Then δ(ρ1, ρ2) ≥ ((m − 1)/m) αn(H1).
Proof. Let K = N2 ∩ ρ1⁻¹(Z(G1)), so ρ1(K) is a cyclic subgroup of Z(G1) of order
m and ρ2(K) = 1. Fix any g ∈ G. Then as k ranges over K, tr ρ1(gk) ranges
over the values ζ tr ρ1(g), where ζ runs through all m-th roots of 1 in C, attaining
each value equally often. On the other hand, tr ρ2(gk) = tr ρ2(g) for all k ∈ K.
So provided tr ρ1(g) ≠ 0, tr ρ1 and tr ρ2 can agree on at most (1/m)|K| values on the
coset gK. Then note that the fraction of elements g ∈ G for which tr ρ1(g) ≠ 0 is
the same as the fraction of elements h ∈ H1 for which tr h ≠ 0.
We say a subgroup H0 of a group H is Z(H)-free if H0 ≠ 1 and H0 ∩ Z(H) = 1.
The above lemma implies that if G1 has no Z(G1)-free normal subgroups, then
δ(ρ1, ρ2) ≥ αn(H1)/2 or N2 = 1 (as the K in the proof must be nontrivial). This
will often allow us to reduce to the case where N2 = 1, and similarly N1 = 1, i.e.,
G = G1 = G2, when we can check this property for G1 and G2. The following
allows us to simply check it for H1 and H2.
Lemma 3.4. If H1 has no Z(H1)-free normal subgroups, then G1 has no Z(G1)-free normal subgroups.
Proof. Suppose H1 has no Z(H1)-free normal subgroups, but that N is a Z(G1)-free normal subgroup of G1. Writing G1 = H1 ×Z(H1) Z1, let N′ = {n ∈ H1 :
(n, z) ∈ N for some z ∈ Z1}. Then N′ ⊳ H1. If N′ = 1, then N ⊂ Z1 = Z(G1), contradicting N being Z(G1)-free. Hence N′ ≠ 1 and must contain a nontrivial a ∈ Z(H1).
But then (a, z) ∈ N ∩ Z(G1) for some z ∈ Z1, which also contradicts N being
Z(G1)-free.
This will often allow us to reduce to the case where G = H ×Z(H) A for some
cyclic group A ⊃ Z(H), where we can use the following.
Lemma 3.5. Let H be a finite group, A ⊃ Z(H) an abelian group and G =
H ×Z(H) A. Then δn♮(G) ≥ min{αn(H)/2, δn♮(H)}.
Proof. We may assume m = |A| > 1. Let ρ1 , ρ2 : G → GLn (C) be distinct primitive
representations of G. They pull back to H×A, so for i = 1, 2 we can view ρi = τi ⊗χi
where τi : H → GLn(C) is primitive and χi : A → C×. By a similar argument
to the proof of Lemma 3.3, we have that δ(ρ1, ρ2) ≥ ((m − 1)/m) αn(H) if χ1 ≠ χ2. If
χ1 = χ2, it is easy to see δ(ρ1, ρ2) = δ(τ1, τ2).
In the simplest situation, this method gives the following.
Corollary 3.6. Let H be the set of minimal lifts of Ḡ1 and Ḡ2 to GLn (C). Suppose
that H has no Z(H)-free normal subgroups for all H ∈ H. Then
δ(ρ1, ρ2) ≥ min{αn(H)/2, δn♮(H) : H ∈ H}.
This corollary will address most but not all cases of our proof of Theorem 1.3.
Namely, when n = 3, it can happen that Ḡ1 has a lift H ≃ Ḡ1 which is simple,
so H is a Z(H)-free normal subgroup of itself. So we will need to augment this
approach when H1 or H2 is simple.
4. Primitive degree 2 characters
In this section we will prove the n = 2 case of Theorem 1.3.
We used the computer package GAP 4 [GG] for explicit group and character
calculations in this section and the next. We use the notation [n, m] for the m-th
group of order n in the Small Groups Library, which is accessible by the command
SmallGroup(n,m) in GAP. We can enumerate all (central or not) extensions of G
by N in GAP if |G||N | ≤ 2000 and |G||N | 6= 1024 as all groups of these orders
are in the Small Groups Library. We can also compute homology groups Hn (G, Z)
using the HAP package in GAP.
4.1. Finite subgroups of GL2 (C). Recall the classification of finite subgroups of
PGL2 (C) ≃ SO3 (C). Any finite subgroup of PGL2 (C) is of one of the following
types:
(A) cyclic
(B) dihedral
(C) tetrahedral (A4 ≃ PSL(2, 3))
(D) octahedral (S4 )
(E) icosahedral (A5 ≃ PSL(2, 5) ≃ PSL(2, 4) ≃ SL(2, 4))
Now suppose G is a subgroup of GL2 (C) with projective image Ḡ in PGL2 (C).
If Ḡ is cyclic, G is reducible. If Ḡ is dihedral, then G is not primitive.
Assume Ḡ is primitive. Then we have the following possibilities.
(C) Suppose Ḡ = A4 ≃ PSL(2, 3). Here H1 (A4 , Z) ≃ Z/3Z and H2 (A4 , Z) ≃ Z/2Z.
There is one nonsplit element of Cent(A4 , C2 ), namely SL(2, 3); one nonsplit
element of Cent(A4 , C3 ), namely [36, 3]; and one element of Cent(A4 , C6 )
which is not a central product with a smaller extension, namely [72, 3]. Of
these central extensions (and the trivial extension A4 ), only SL(2, 3) and [72, 3]
have irreducible faithful 2-dimensional representations.
Thus SL(2, 3) and [72, 3] are the only minimal lifts of A4 to GL2 (C). We
check that neither H = SL(2, 3) nor H = [72, 3] has Z(H)-free normal subgroups. In both cases, we have α2(H) = 3/4 and δ2♮(H) = 2/3.
(D) Next suppose Ḡ = S4 . Note H1 (S4 , Z) ≃ H2 (S4 , Z) ≃ Z/2Z. There are 3
nonsplit central extensions of S4 by C2 : [48, 28], [48, 29], [48, 30]. Neither S4
nor [48, 30] have faithful irreducible 2-dimensional representations, but both
[48, 28] and [48, 29] do.
Thus H = [48, 28] and H = [48, 29] are the minimal lifts of S4 to GL2 (C).
Neither of them has Z(H)-free normal subgroups. In both cases we compute
α2(H) = 5/8 and δ2♮(H) = 1/4.
(E) Last, suppose Ḡ = A5 = PSL(2, 5). This group is perfect and H2 (A5 , Z) ≃
Z/2Z, with SL(2, 5) being the nontrivial central extension by C2 (the universal
central extension). Note A5 has no irreducible 2-dimensional representations.
Hence there is only one minimal lift of A5 to GL2 (C), H = SL(2, 5). We can
check that SL(2, 5) has no Z(SL(2, 5))-free normal subgroups, α2(SL(2, 5)) = 3/4
and δ2♮(SL(2, 5)) = 2/5 (cf. Theorem 1.4).
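These values can be double-checked without GAP. For instance, realizing SL(2, 3) as the binary tetrahedral group of unit quaternions (our own sketch; the paper's computations used GAP), one recovers α2 = 3/4 and δ2♮ = 2/3 from case (C):

```python
import numpy as np

def quat(a, b, c, d):
    # quaternion a + bi + cj + dk as a 2x2 complex matrix
    return np.array([[a + b*1j, c + d*1j], [-c + d*1j, a - b*1j]])

def closure(gens):
    # naive closure under multiplication (fine for small finite groups)
    elems = list(gens)
    seen = lambda m: any(np.allclose(m, e) for e in elems)
    changed = True
    while changed:
        changed = False
        for x in list(elems):
            for y in list(elems):
                z = x @ y
                if not seen(z):
                    elems.append(z)
                    changed = True
    return elems

i, j = quat(0, 1, 0, 0), quat(0, 0, 1, 0)
s = quat(0.5, 0.5, 0.5, 0.5)            # order-6 element covering a 3-cycle
H = closure([i, j, s])                  # binary tetrahedral group = SL(2,3)
Q8 = closure([i, j])                    # preimage of V4 < A4
assert len(H) == 24 and len(Q8) == 8

nonzero = [h for h in H if abs(np.trace(h)) > 1e-9]
alpha2 = len(nonzero) / len(H)          # fraction with nonzero trace
# tau and tau⊗omega (omega a cubic character trivial exactly on Q8)
# differ exactly where tr != 0 and the element lies outside Q8:
in_q8 = lambda h: any(np.allclose(h, q) for q in Q8)
delta = sum(1 for h in nonzero if not in_q8(h)) / len(H)
print(alpha2, delta)   # 0.75, 2/3
```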
4.2. Comparing characters. Let ρ1 , ρ2 : G → GL2 (C) be inequivalent primitive
representations. By Corollary 3.6,

δ(ρ1, ρ2) ≥ min( { (1/2)(3/4), (1/2)(5/8), (1/2)(3/4) } ∪ { 2/3, 1/4, 2/5 } ) = 1/4.

This shows d2♮ ≥ 1/4. Furthermore, we can only have δ(ρ1, ρ2) = 1/4 if Ḡ1 or Ḡ2 is S4,
which implies G1 or G2 is of the form H ×C2 C2m for some m with H = [48, 28] or
H = [48, 29]. Thus we can only have δ2♮(G) = 1/4 if G is an extension of H ×C2 C2m
where m ∈ N and H = [48, 28] or H = [48, 29]. Moreover, if G is such an extension,
δ2♮(G) equals 1/4 because δ2♮(H) does.
This completes the proof of Theorem 1.3 when n = 2.
5. Primitive degree 3 characters
Here we prove the n = 3 case of Theorem 1.3.
5.1. Finite subgroups of GL3 (C). First we review the classification of finite subgroups of GL3 (C). The classification can be found in Blichfeldt [Bli17] or Miller–
Blichfeldt–Dickson [MBD61]. We follow the classification system therein. The
description involves 3 not-well-known groups, G36 = [36, 9], G72 = [72, 41], and
G216 = [216, 153]. Explicit matrix presentations for preimages in GL3 (C) are given
in [Mar04, Sec 8.1].
Any finite subgroup G of GL3 (C) with projective image Ḡ is one of the following
types, up to conjugacy:
(A) abelian
(B) a nonabelian subgroup of GL1 (C) × GL2 (C)
(C) a group generated by a diagonal subgroup and the permutation matrix
        ( 0 1 0 )
        ( 0 0 1 )
        ( 1 0 0 )
(D) a group generated by a diagonal subgroup, the permutation matrix in (C), and a matrix of the form
        ( a 0 0 )
        ( 0 0 b )
        ( 0 c 0 )
(E) Ḡ ≃ G36
(F) Ḡ ≃ G72
(G) Ḡ ≃ G216
(H) Ḡ ≃ A5 ≃ PSL(2, 5) ≃ PSL(2, 4) ≃ SL(2, 4)
(I) Ḡ ≃ A6 ≃ PSL(2, 9)
(J) Ḡ ≃ PSL(2, 7)
Of these types, (A), (B) are reducible, (C), (D) are imprimitive, and the remaining types are primitive. The first 3 primitive groups, (E), (F) and (G), have
non-simple projective images, whereas the latter 3, (H), (I) and (J), have simple
projective images.
Now we describe the minimal lifts to GL3 (C) of Ḡ for cases (E)–(J).
(E) We have H1 (G36 , Z) ≃ Z/4Z and H2 (G36 , Z) ≃ Z/3Z. The nonsplit extension
of G36 by C2 is [72, 19]. There is one nonsplit extension of G36 by C4 which is
not a central product, [144, 51]. However, G36 , [72, 19] and [144, 51] all have
no irreducible 3-dimensional representations.
There is 1 nonsplit central extension of G36 by C3 , [108, 15]; there is one
by C6 which is not a central product, [216, 25]; there is one by C12 which is
not a central product, [432, 57]. All of these groups have faithful irreducible
3-dimensional representations.
Hence any minimal lift of G36 to GL3 (C) is H = [108, 15], H = [216, 25] or
H = [432, 57]. In all of these cases, H has no Z(H)-free normal subgroups,
α3(H) = 7/9 and δ3♮(H) = 1/2.
(F) We have H1(G72, Z) ≃ Z/2Z × Z/2Z and H2(G72, Z) ≃ Z/3Z. There is a unique
nonsplit central extension of G72 by C2, [144, 120]; a unique central extension
of G72 by C3, [216, 88]; and a unique central extension of G72 by C6 which is not
a central product, [432, 239]. Of these extensions (including G72), only the
latter two groups have faithful irreducible 3-dimensional representations.
Thus there are two minimal lifts of G72 to GL3(C), H = [216, 88] and H =
[432, 239]. In both cases, H has no Z(H)-free normal subgroups, α3(H) = 8/9
and δ3♮(H) = 1/2.
(G) We have H1 (G216 , Z) ≃ H2 (G216 , Z) ≃ Z/3Z. There are 4 nonsplit central extensions of G216 by C3 : [648, 531], [648, 532], [648, 533], and [648, 534]. Neither
G216 nor [648, 534] has irreducible faithful 3-dimensional representations.
Thus there are three minimal lifts of G216 to GL3(C), H = [648, 531],
H = [648, 532], and H = [648, 533]. In all cases H has no Z(H)-free normal subgroups, α3(H) = 20/27 and δ3♮(H) = 4/9.
(H) As mentioned in the n = 2 case, A5 ≃ PSL(2, 5) is perfect and we have
H2(A5, Z) ≃ Z/2Z. The nontrivial extension by C2 (the universal central
extension) is SL(2, 5), but SL(2, 5) has no faithful irreducible 3-dimensional
representations.
Thus the only minimal lift of A5 to GL3(C) is A5 itself. We have α3(A5) = 2/3
and δ3♮(A5) = δ3♮(PSL(2, 5)) = 2/5 (cf. Theorem 1.4).

   Ḡ         |   3      √3     √2    (1 ± √5)/2    1      0
   G36       |  1/36     0      0        0        3/4    2/9
   G72       |  1/72     0      0        0        7/8    1/9
   G216      |  1/216   1/9     0        0        5/8    7/27
   A5        |  1/60     0      0       2/5       1/4    1/3
   A6        |  1/360    0      0       2/5       3/8    2/9
   PSL(2, 7) |  1/168    0     2/7       0        3/8    1/3

Table 1. Fraction of group elements with primitive degree 3 characters having given absolute value
(I) The group A6 is also perfect, but (along with A7 ) exceptional among alternating groups in that H2 (A6 , Z) ≃ Z/6Z. Neither A6 ≃ PSL(2, 9), nor its
double cover SL(2, 9), has irreducible 3-dimensional representations. There is
a unique nonsplit central extension of A6 by C3 , sometimes called the Valentiner group, which we denote V1080 = [1080, 260] and is also a perfect group. It
is known (by Valentiner) that V1080 has an irreducible faithful 3-dimensional
representation.
To complete the determination of minimal lifts of A6 to GL3 (C), we need
to determine the central extensions of A6 by C6 . Here we cannot (easily)
proceed naively as in the other cases of testing all groups of the appropriate
order because we do not have a library of all groups of order 2160. We have
|M̃ (A6 , C6 )| = 6, with one class accounted for by the split extension and one
by SL(2, 9) ×C2 C6 . Since V1080 must correspond to two classes in M̃ (A6 , C3 ),
V1080 ×C3 C6 corresponds to two classes in M̃ (A6 , C6 ) by the proof of Proposition 3.1. Since A6 is perfect, it has a universal central extension by C6 , which
we denote Ã6 . By Proposition 3.2, Ã6 accounts for the remaining 2 classes
in M̃ (A6 , C6 ), and thus we have described all elements of Cent(A6 , C6 ). The
group Ã6 is the unique perfect group of order 2160 and can be accessed by
the command PerfectGroup(2160) in GAP, and we can check that it has no
faithful irreducible 3-dimensional representations.
Hence V1080 is the unique minimal lift of A6 to GL3 (C). We note H = V1080
has no Z(H)-free normal subgroups, α3(H) = 7/9, and δ3♮(H) = 2/5.
(J) The group PSL(2, 7) is perfect and H2 (PSL(2, 7), Z) ≃ Z/2Z. Since SL(2, 7)
has no faithful irreducible 3-dimensional representations, any minimal lift of
PSL(2, 7) to GL3(C) is just H = PSL(2, 7). Here α3(H) = 2/3 and δ3♮(H) = 2/7
by Theorem 1.4.
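The values for case (H) are easy to confirm by hand from the well-known character table of A5; the following brute-force check (ours, not the paper's) verifies the class sizes underlying α3(A5) = 2/3 and δ3♮(A5) = 2/5:

```python
from itertools import permutations

# A5 as the even permutations of {0,...,4}
def sign(p):
    inv = sum(1 for i in range(5) for j in range(i + 1, 5) if p[i] > p[j])
    return (-1) ** inv

A5 = [p for p in permutations(range(5)) if sign(p) == 1]
compose = lambda a, b: tuple(a[b[i]] for i in range(5))
inverse = lambda a: tuple(a.index(i) for i in range(5))

# conjugacy classes by brute force
remaining, classes = set(A5), []
while remaining:
    g = next(iter(remaining))
    cls = {compose(compose(x, g), inverse(x)) for x in A5}
    classes.append(cls)
    remaining -= cls

sizes = sorted(len(c) for c in classes)
assert sizes == [1, 12, 12, 15, 20]   # id, two 5-cycle classes, (ab)(cd), 3-cycles

# Both 3-dimensional characters of A5 take value 3 on id, -1 on (ab)(cd),
# 0 on 3-cycles, and the pair (1±√5)/2 on the two 5-cycle classes (swapped
# between the two representations), so:
nonzero = 1 + 15 + 12 + 12            # elements with nonzero trace
differ = 12 + 12                      # traces differ exactly on the 5-cycles
assert nonzero / 60 == 2 / 3          # alpha_3(A5) = 2/3
assert differ / 60 == 2 / 5           # delta = 2/5
```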
5.2. Comparing characters. Let G be a finite group and ρ1 , ρ2 : G → GL3 (C)
be two inequivalent primitive representations. Let Gi , Ni , Hi , Zi be as in Section
3.2. As before, we may assume N1 ∩ N2 = 1, so G contains a normal subgroup
isomorphic to N1 × N2 whose image in G1 is N2 and image in G2 is N1 .
Proposition 5.1. Suppose at least one of Ḡ1, Ḡ2 is simple. Then δ(ρ1, ρ2) ≥ 2/7,
with equality only if Ḡ1 ≃ Ḡ2 ≃ PSL(2, 7).
Proof. Say Ḡ1 is simple. Then by above, H1 is isomorphic to one of A5 , V1080 and
PSL(2, 7).
Case I: Suppose Ḡ1 ≄ Ḡ2. For i = 1, 2, the fraction of g ∈ G for which
| tr ρi (g)| = x is the same as the fraction of h ∈ Hi for which | tr h| = x. Calculations
show that the proportion of such g ∈ G (given x) depends neither on the minimal
lift Hi nor its embedding into GL3 (C), but just on Ḡi . These proportions are given
in Table 1.
If Ḡ1 ≃ PSL(2, 7), we see δ(ρ1, ρ2) ≥ 2/7 just from considering elements with
absolute character value √2. Looking at other absolute character values shows this
inequality is strict.
If Ḡ1 ≃ A5 or A6 and Ḡ2 is not isomorphic to A5 or A6, then considering elements
with absolute character value (1 ± √5)/2 shows δ(ρ1, ρ2) ≥ 2/5.
So assume Ḡ1 ≃ A5 and Ḡ2 ≃ A6. Then G1 = A5 × Cm and G2 ≃ V1080 ×C3 C3r
for some m, r ∈ N. Suppose δ(ρ1, ρ2) < 1/3. By Lemma 3.3, ρ1(N2) and ρ2(N1)
are either Z(G1)- and Z(G2)-free normal subgroups of G1 and G2 or trivial. This
forces N1 = 1, so G ≃ G1, but it is impossible for a quotient of G1 to be isomorphic
to G2. Hence δ(ρ1, ρ2) ≥ 1/3 > 2/7 in this case.
Case II: Suppose Ḡ1 ≃ Ḡ2 .
First suppose N1 or N2 is trivial, say N1 . Then G ≃ G1 . By Lemma 3.5, we
have δ3♮(G) ≥ min{1/3, δ3♮(H1)}. Thus δ3♮(G) = 2/7 if and only if H1 = PSL(2, 7).
So assume N1 and N2 are nontrivial. By Lemma 3.3, we can assume ρ1(N2) and
ρ2(N1) are Z(G1)- and Z(G2)-free normal subgroups of G1 and G2. This is only
possible if N1 ≃ N2 ≃ H1 ≃ H2 is isomorphic to A5 or PSL(2, 7).
Let N = ρ1⁻¹(N2) ⊳ G and identify N = N1 × N2. Fix g ∈ G. Then for
any n1 ∈ N1, tr ρ1(g(n1, 1)) = tr ρ1(g), while tr ρ2(g(n1, 1)) varies with n1. Since
ρ2(g(N1 × 1)) = H2 × {z} for some z ∈ Z2, the fraction of elements of g(N1 × 1)
(and thus of G) on which tr ρ1 and tr ρ2 can agree is at most the maximal fraction
of elements of H1 with a given trace. By Table 1 this is less than 1/2 for either
Ḡ1 ≃ A5 or Ḡ1 ≃ PSL(2, 7).
To complete the proof of Theorem 1.3 for n = 3, it suffices to show δ(ρ1, ρ2) > 2/7
when Ḡ1 and Ḡ2 are each one of G36, G72 and G216. Using Corollary 3.6, in this
situation we see

δ(ρ1, ρ2) ≥ min( { (1/2)(7/9), (1/2)(8/9), (1/2)(20/27) } ∪ { 1/2, 1/2, 4/9 } ) = 10/27.
This finishes Theorem 1.3.
6. Families SL(2, q) and PSL(2, q)
We consider SL(2, q) and PSL(2, q), for even and odd prime powers q. We
separate these into three subsections: SL(2, q), q odd; SL(2, q) ≃ PSL(2, q), q even;
and PSL(2, q), q odd. We refer to, and mostly follow the notation of, Fulton–
Harris [Ful91] for the representations of these groups.
Choose an element ∆ ∈ F×q − (F×q)². Denote by E := Fq(√∆) the unique
quadratic extension of Fq. We can write the elements of E as a + bδ, where δ := √∆.
The norm map N : E× → F×q is then defined as N(a + bδ) = a² − b²∆. We also
denote by E1 the kernel of the norm map.
6.1. SL(2, q), for odd q. The order of SL(2, q) is (q + 1)q(q − 1). We begin by
describing the conjugacy classes for SL(2, q):
(A) I.
(B) −I.
(C) Conjugacy classes of the form [c2(ǫ, γ)], where
        c2(ǫ, γ) = ( ǫ  γ )
                   ( 0  ǫ ),
    with ǫ = ±1 and γ = 1 or ∆. So there are four conjugacy classes, each of size
    (q² − 1)/2.
(D) Conjugacy classes of the form [c3(x)], where
        c3(x) = ( x  0   )
                ( 0  x⁻¹ ),
    with x ≠ ±1. Since the conjugacy classes [c3(x)] and [c3(x⁻¹)] are the same, we have
    (q − 3)/2 different conjugacy classes, each of size q(q + 1).
(E) Conjugacy classes of the form [c4(z)], where
        c4(z) = ( x  ∆y )
                ( y  x  ),
    where z = x + δy ∈ E1 and z ≠ ±1. Since [c4(z)] = [c4(z̄)] we have (q − 1)/2 conjugacy
    classes, each of size q(q − 1).
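The class counts and sizes above are easy to confirm by brute force for a small odd q; the following check (our own, not from the paper) does so for q = 5:

```python
from collections import Counter
from itertools import product

q = 5  # any small odd prime works

def mul(A, B):
    # 2x2 matrix product mod q; A = (a, b, c, d) means [[a, b], [c, d]]
    return ((A[0]*B[0] + A[1]*B[2]) % q, (A[0]*B[1] + A[1]*B[3]) % q,
            (A[2]*B[0] + A[3]*B[2]) % q, (A[2]*B[1] + A[3]*B[3]) % q)

def inv(A):
    a, b, c, d = A
    return (d % q, -b % q, -c % q, a % q)   # valid since det = 1

G = [m for m in product(range(q), repeat=4) if (m[0]*m[3] - m[1]*m[2]) % q == 1]
assert len(G) == (q + 1) * q * (q - 1)

# partition SL(2, q) into conjugacy classes
remaining, class_sizes = set(G), []
while remaining:
    g = next(iter(remaining))
    cls = {mul(mul(x, g), inv(x)) for x in G}
    class_sizes.append(len(cls))
    remaining -= cls

counts = Counter(class_sizes)
# I and -I; four classes of size (q^2-1)/2; (q-3)/2 of size q(q+1);
# (q-1)/2 of size q(q-1)
assert counts[1] == 2
assert counts[(q*q - 1) // 2] == 4
assert counts[q * (q + 1)] == (q - 3) // 2
assert counts[q * (q - 1)] == (q - 1) // 2
print(sorted(counts.items()))   # [(1, 2), (12, 4), (20, 2), (30, 1)]
```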
We give a brief description of the representations that appear in the character
table. The first set of representations, denoted Wα, are induced from the subgroup
B of upper triangular matrices. Given a character α ∈ F̂×q, we can extend this to a
character of B, which we then induce to a (q + 1)-dimensional representation Wα
of SL2(Fq). If α² ≠ 1, then the induced representation is irreducible. If α = 1,
then W1 decomposes into its irreducible constituents: the trivial representation U
and the Steinberg representation V. If α² = 1 and α ≠ 1, then it decomposes into
two irreducible constituents denoted W+ and W−.
For the remaining irreducible representations, we consider characters α and ϕ
of the diagonal subgroup A and the subgroup S := {c4 (z) | z ∈ E1 }, respectively,
where the characters agree when restricted to A ∩ S. Then we construct a virtual
character πϕ := IndG_A(α) − Wα − IndG_S(ϕ) (note that the virtual character will not
depend on the specific choice of α).
When ϕ̄ = ϕ, πϕ decomposes into two distinct characters. In the case when ϕ is
trivial, π1 decomposes into the difference between the characters for the Steinberg
representation and the trivial representation. If ϕ is the unique (non-trivial) order 2
character of S, then πϕ decomposes into two distinct irreducible characters of equal
dimension; we will label the corresponding representations X+ and X−. If ϕ̄ ≠ ϕ,
then πϕ corresponds to an irreducible representation, which we denote as Xϕ. Two
irreducibles Xϕ and Xϕ′ are equivalent if and only if ϕ = ϕ′ or ϕ = ϕ̄′. We note
that out of all the irreducible representations, the imprimitive representations are
exactly all the Wα (for α² ≠ 1).
We define some notation that will appear in the character table for SL(2, q).
Let α ∈ F̂×q with α ≠ ±1, and ϕ a character of E1 with ϕ² ≠ 1. Fix τ to be the
non-trivial element of the character group of F×q /(F×q)², and let

s±(ǫ, γ) = (1/2)( τ(ǫ) ± τ(ǫγ)√(τ(−1)q) ),
u±(ǫ, γ) = (1/2) ǫ( −τ(ǫ) ± τ(ǫγ)√(τ(−1)q) ).

Lastly, we define ψ to be the non-trivial element of the character group of E1/(E1)². The character table
is:
                    [I]         [−I]                [c2(ǫ, γ)]    [c3(x)]          [c4(z)]
  Size:              1           1                  (q² − 1)/2    q(q + 1)         q(q − 1)
  Rep      #
  U        1         1           1                   1             1                1
  X±       2        (q − 1)/2   ((q − 1)/2) ψ(−1)    u±(ǫ, γ)      0               −ψ(z)
  W±       2        (q + 1)/2   ((q + 1)/2) τ(−1)    s±(ǫ, γ)      τ(x)             0
  Xϕ   (q − 1)/2     q − 1      (q − 1) ϕ(−1)       −ϕ(ǫ)          0               −ϕ(z) − ϕ(z⁻¹)
  V        1         q           q                   0             1               −1
  Wα   (q − 3)/2     q + 1      (q + 1) α(−1)        α(ǫ)          α(x) + α(x⁻¹)    0
The pair of representations X±: The two (q − 1)/2-dimensional representations
X+ and X− have the same trace character values exactly for all group elements
outside of the [c2(ǫ, γ)] conjugacy classes, so we have δ(X+, X−) = 2/q.
The pair of representations W ± : The two (q+1)/2-dimensional representations
W + and W − have the same trace character values exactly for all group elements
outside of the [c2 (ǫ, γ)] conjugacy classes. So again we have δ(W + , W − ) = 2/q.
(q − 1)-dimensional representations: There are (q − 1)/2 such representations,
c1 , for ϕ2 6= 1. Note that |E1 | = q + 1.
denoted Xϕ , where ϕ ∈ E
In order to determine δ(Xϕ , Xϕ′ ), we need to find the number of z ∈ E1 for
which ϕ(z) + ϕ(z −1 ) = ϕ′ (z) + ϕ′ (z −1 ), and whether ϕ(−1) = ϕ′ (−1).
We begin with the first equation. Note that Im(ϕ), Im(ϕ′ ) ⊂ µq+1 , where µn
denotes the nth roots of unity. Then ϕ(z) + ϕ(z −1 ) is of the form ζ a + ζ −a , where ζ
is the primitive (q + 1)th root of unity e2πi/(q+1) and a is a non-negative integer less
than q + 1. Now ζ a + ζ −a = ζ b + ζ −b for some 0 ≤ a, b < q + 1 implies that a = b or
(q + 1) − b. So ϕ(z) + ϕ(z −1 ) = ϕ′ (z) + ϕ′ (z −1 ) iff ϕ(z) = ϕ′ (z) or ϕ(z) = ϕ′ (z −1 ).
If ϕ(z) = ϕ′ (z), then this is equivalent to (ϕ′ )−1 ϕ(z) = 1, and the number of z
for which this holds is |ker (ϕ′ )−1 ϕ|. The number of z for which ϕ(z) = ϕ′ (z −1 ) is
|ker ϕ′ ϕ|. Thus the number of z ∈ E1 for which ϕ(z) + ϕ(z −1 ) = ϕ′ (z) + ϕ′ (z −1 ) is
|ker (ϕ′ )−1 ϕ| + |ker ϕ′ ϕ| − |ker ϕ′ ϕ ∩ ker (ϕ′ )−1 ϕ|.
c1 can
Now E1 is a cyclic group, so we can fix a generator g. The elements of E
m
then be denoted as {ϕ0 , ϕ1 , ϕ2 , . . . , ϕq }, where ϕm is defined via ϕm (g) = ζ . Note
that |ker ϕm | = (m, q + 1). Define
(m + m′ , k) + (m − m′ , k) − (m + m′ , m − m′ , k) − 1 − tm,m′
,
2
= 1 if both k and m + m′ are even, and 0 otherwise.
Mk (m, m′ ) :=
where tm,m′
Then:
16
KIMBALL MARTIN AND NAHID WALJI
Lemma 6.1. For distinct integers 0 ≤ m, m′ < q + 1, we have
|{[c4 (z)] : ϕm (z) + ϕm (z −1 ) = ϕm′ (z) + ϕm′ (z −1 )}| = Mq+1 (m, m′ ).
If m and m′ have the same parity, then ϕm (−1) = ϕm′ (−1) so
q−1
1
′
(6.1)
− Mq+1 (m, m ) .
δ(Xϕm , Xϕm′ ) =
q+1
2
If m and m′ have different parity, then
2
q +1
1
(6.2)
− Mq+1 (m, m′ )(q − 1) .
δ(Xϕm , Xϕm′ ) = 2
q −1
2
To determine the minimum possible value of δ above, we consider the maximum
possible size of Mk (m, m′ ).
Lemma 6.2. Suppose k = 2j ≥ 8. Then
max Mk (m, m′ ) = 2j−2 − 1 =
k
− 1,
4
where m, m′ run over distinct classes in Z/kZ \ {0, k2 } with m 6≡ ±m′ .
Suppose k ∈ 2N is not a power of 2 and let p be the smallest odd prime dividing
k. Then
(
k
1
1
+
p − 1 k ≡ 0 (mod 4)
max Mk (m, m′ ) = 4
k−2
k ≡ 2 (mod 4),
4
where m, m′ range as before.
In all cases above, the maximum occurs with m, m′ of the same parity if and only
if 4|k.
Proof. Let d = (m + m′ , k) and d′ = (m − m′ , k), so our restrictions on m, m′ imply
that d, d′ are proper divisors of k of the same parity. Note that any pair of such
d, d′ arise from some m, m′ if d 6= d′ , and the case d = d′ = k2 does not occur. Then
Mk (m, m′ ) = 21 (d + d′ − (d, d′ ) − 1 − tm,m′ ), and m, m′ have the same parity if and
only if d, d′ are both even.
The case k = 2j has a maximum with d = k2 and d′ = k4 .
Suppose k = 2pk ′ as in the second case. Then note d + d′ − (d, d′ ) is maximised
when d = k2 and d′ = kp , which is an admissible pair if k ′ is even. Otherwise, we
k
get a maximum when d = k2 and d′ = 2p
.
In all cases we have
(6.3)
max Mk (m, m′ ) ≤
k
− 1,
3
and equality is obtained if and only if 12|k for suitable m, m′ of the same parity.
This leads to an exact formula for δq−1 (SL(2, q)) with q > 3 odd by combining
with (6.1) and (6.2). We do not write down the final expression, but just note the
consequence that δq−1 (SL(2, q)) ≥ 61 with equality if and only 12|(q + 1).
DISTINGUISHING FINITE GROUP CHARACTERS
17
×
(q + 1)-dimensional representations: Consider Wα , Wα′ , where α, α′ ∈ Fc
q −
{±1} and α 6= α′ . Since |F×
|
=
q
−
1,
we
know
that
Im(α)
<
µ
.
So,
given
a
q−1
q
c
×
m
generator g of the cyclic group F×
q , we define the elements of Fq as: αm (g) = ζ ,
2πi/(q−1)
where ζ := e
, and 0 ≤ m ≤ q − 2.
Using similar arguments to the (q − 1)-dimensional case above, we have:
Lemma 6.3. For distinct integers 0 ≤ m, m′ < q − 1, we have
{[c3 (x)] : αm (x) + αm (x−1 ) = αm′ (x) + αm′ (x−1 )} = Mq−1 (m, m′ ).
Given that the value of αm (−1) is +1 if m is even and −1 if m is odd, we obtain
that if m and m′ have the same parity, then
q−3
1
− Mq−1 (m, m′ ) .
δ(Wαm , Wαm′ ) =
q−1
2
Whereas if m and m′ have different parity, then
2
1
q −3
′
δ(Wαm , Wαm′ ) = 2
− Mq−1 (m, m )(q + 1) .
q −1
2
Combining these with Lemma 6.2 for q > 5 gives a formula for δq+1 (SL(2, q)).
In particular, (6.3) gives δq+1 (SL(2, q)) ≥ 61 , with equality if and only if 12|(q − 1).
6.2. SL(2, q), for even q. We keep the notation from the previous section. The
order of SL(2, q) is again q(q + 1)(q − 1). The conjugacy classes for SL(2, q), q even,
are as follows:
(A) I.
1 1
(B) [N ] =
. This conjugacy class is of size q 2 − 1.
0 1
x
(C) [c3 (x)], where c3 (x) =
, with x 6= 1. We note that [c3 (x)] =
x−1
[c3 (x−1 )], so there are (q − 2)/2 such conjugacy classes. Each one is of size
q(q + 1).
x ∆y
(D) [c4 (z)], where c4 (z) =
for z = x + δy ∈ E1 with z 6= 1. Since
y x
c4 (z) = c4 (z̄), there are q/2 such conjugacy classes, each of size q(q − 1).
The representations for q even are constructed similarly to the case of q odd,
with a couple of differences: Since, for q even, the subgroup S has odd order, it
does not have characters of order two, and so the irreducible representations X ± do
not arise. Similarly, the character α cannot be of order two, and so the irreducible
representations W ± do not occur. The character table is:
[I]
[N ]
[c3 (x)]
[c4 (z)]
Size:
1
q2 − 1
q(q + 1)
q(q − 1)
Rep
#
U
1
1
1
1
1
q/2
q−1
−1
0
−ϕ(z) − ϕ(z −1 )
Xϕ
V
1
q
0
1
−1
1
α(x) + α(x−1 )
0
Wα (q − 2)/2 q + 1
18
KIMBALL MARTIN AND NAHID WALJI
Representations of dimension q − 1: The analysis here is similar to that in
Section 6.1, which gives us:
1 q
δ(Xϕm , Xϕm′ ) =
− Mq+1 (m, m′ ) .
q+1 2
Analogous to Lemma 6.2, we have when k ≥ 3 is odd,
1 k − 1
2 p
(6.4)
max Mk (m, m′ ) =
k
1
(p
+
p
−
1)
−
1
1
2
2 p1 p2
k = pj
k = p1 p2 k ′
where m, m′ run over all nonzero classes of Z/kZ such that m 6≡ ±m′ and in the
latter case are the two smallest distinct primes dividing k. The above two equations
give an exact expression for δq−1 (SL(2, q)), q ≥ 4. For k odd, note
(6.5)
max Mk (m, m′ ) ≤
7k − 15
,
30
with equality if and only if 15|k. Thus δq−1 (SL(2, q)) ≥
only if 15|(q + 1).
4
15
with equality if and
Representations of dimension q + 1: A similar analysis to that in Section 6.1
gives
1
q−2
′
δ(Wαm , Wαm′ ) =
− Mq−1 (m, m ) .
q−1
2
Combining this with (6.4) gives an exact formula for δq+1 (SL(2, q)) for q ≥ 8, and
4
from (6.5), we again get δq+1 (SL(2, q)) ≥ 15
with equality if and only if 15|(q − 1).
6.3. PSL(2, q), for odd q. The order of PSL(2, q) is
conjugacy classes are as follows:
(A) I.
1
2
2 q(q
− 1) if q is odd. The
γ
(B) [c2 (γ)], where c2 (γ) = c2 (1, γ) =
for γ ∈ {1, ∆}.
1
(C) [c3 (x)], (x 6= ±1), where c3 (x) is as in the previous two sections. Since c3 (x) =
c3 (−x) = c3 (1/x) = c3 (−1/x), the number of such conjugacy classes when
q ≡ 3 (mod 4) is (q − 3)/4. In this case, all of the c3 (x) conjugacy classes have
size q(q + 1).
If q ≡ 1 (mod
√ 4), then −1 is a square in Fq and there is a conjugacy class
denoted by c3 ( −1) which has size q(q + 1)/2; the remaining c3 (x) conjugacy
classes (there are (q − 5)/4 such classes) have size q(q + 1).
(D) [c4 (z)], for z ∈ E1 , z 6= ±1, where c4 (z) is defined as in the previous two
sections. Since c4 (z) = c4 (z̄) = c4 (−z) = c4 (−z̄), when q ≡ 1 (mod 4), the
number of such conjugacy classes is (q − 1)/4, and they are all of size q(q − 1).
When q ≡ 3 (mod 4), we can choose ∆ to be −1 (since it is not a square),
and so we see that δ ∈ E1 . The conjugacy class associated to c4 (δ) has size
q(q − 1)/2, whereas the rest of the c4 (z) conjugacy classes (of which there are
(q − 3)/4 such classes) have size q(q − 1).
1
The representations of PSL(2, q) are the representations of SL(2, q) which are
trivial on −I; this depends on the congruence class of q modulo 4.
DISTINGUISHING FINITE GROUP CHARACTERS
19
6.3.1. q ≡ 1 (mod 4).
For the character table below, the notation is the same as in previous subsections.
√
[I]
[c2 (γ)] [c3 ( −1)]
[c3 (x)]
[c4 (z)]
2
q(q+1)
q −1
Size:
1
q(q
+
1)
q(q
− 1)
2
2
q−5
q−1
Rep
#
1
2
1
4
4
U
1
1
1
1
1
1
√
q+1
W±
2
s± (1, γ) τ ( −1)
τ (x)
0
2
q
−
1
−1
0
0
−ϕ(z)
−
ϕ(z −1 )
Xϕ q−1
4
V
1
q
0
1
1
−1
√
−1
Wα q−5
q
+
1
1
2α(
−1)
α(x)
+
α(x
)
0
4
Representations W ± : The trace characters of these (q + 1)/2-dimensional representations agree everywhere but for the conjugacy classes [c2 (γ)]. This gives us
δ(W + , W − ) = 2/q.
Representations of dimension q − 1: Assume q ≥ 9. Any two representations
Xϕ , Xϕ′ have trace characters that may differ only for the conjugacy classes [c4 (z)].
We may view ϕ as a map into µ q+1 and parameterize the ϕ by ϕm for nonzero
2
m ∈ Z/ q+1
2 Z similar to before. Analogously, we obtain
q−1
1
− 2M q+1 (m, m′ ) .
δ(Xϕm , Xϕm′ ) =
2
q+1
2
From (6.5), this gives δq−1 (PSL(2, q)) ≥
4
15 ,
with equality if and only if 30|(q + 1).
Representations of dimension q + 1: Assume q ≥ 13. The analysis follows in
a similar manner to that in previous sections. View α : F×
q /{±1} → µ q−1 , and we
2
can parametrize such α by m ∈ Z/ q−1
2 Z as before. One difference is that we must
√
consider the case when x = −1. Note that this is the only
√ conjugacy√class of the
form [c3 (x)] that has size q(q + 1)/2. We find that αm ( −1) = αm′ ( −1) if and
only if m, m′ have the same parity. Overall we get
q−5
1
− 2M q−1 (m, m′ ) + 1 − tm,m′ .
δ(Wαm , Wαm′ ) =
2
q−1
2
From (6.3) we get δq+1 (PSL(2, q)) ≥
6.3.2. q ≡ 3 (mod 4).
[I]
Size:
1
Rep Number
1
U
1
1
q−1
X±
2
2
q−3
q−1
Xϕ
4
V
1
q
q−3
Wα
q
+
1
4
[c2 (γ)]
1
6
with equality if and only if 24|(q − 1).
q2 −1
2
[c3 (x)]
q(q + 1)
2
1
u± (1, γ)
−1
0
1
1
0
0
1
α(x) + α(x−1 )
q−3
4
where u± (1, γ) and ψ are defined as before.
[c4 (z)]
q(q − 1)
q−3
4
1
−ψ(z)
−ϕ(z) − ϕ(z −1 )
−1
0
[c4 (δ)]
q(q−1)
2
1
1
−ψ(δ)
−2ϕ(δ)
1
0
20
KIMBALL MARTIN AND NAHID WALJI
Representations X ± : For W ± , the characters of the representations X ± agree
everywhere but for the conjugacy classes [c2 (γ)], so: δ(X + , X − ) = 2/q.
Representations of dimension q − 1: Assume q ≥ 11. Any two representations
Xϕ , Xϕ′ have trace characters that may differ only for the conjugacy classes [c4 (z)].
In the case of the conjugacy class [c4 (δ)], we note that δ has order 2 in E1 /{±1}.
Parametrize the nontrivial maps ϕ : E1 /{±1} → µ q+1 by 1 ≤ m ≤ q−3
4 as before.
2
Then ϕm (δ) = ϕm′ (δ) if and only if m, m′ have the same parity. We obtain
1
q−3
′
′
δ(Xϕm , Xϕm′ ) =
− 2M q+1 (m, m ) + 1 − tm,m .
2
q+1
2
By (6.3), we get δq−1 (PSL(2, q)) ≥
1
6
with equality if and only if 24|(q + 1).
Representations of dimension q + 1: Assume q ≥ 11. We obtain
1
q−3
′
δ(Wαm , Wαm′ ) =
− 2M q−1 (m, m ) .
2
q−1
2
By (6.5), we get δq+1 (PSL(2, q)) ≥
4
15 ,
with equality if and only if 30|(q − 1).
References
[AY65] J. E. Adney and Ti Yen, Automorphisms of a p-group, Illinois J. Math. 9 (1965),
137–143. MR0171845
[Bli17] HF Blichfeldt, Finite collineation groups, Chicago, 1917.
[Ful91] William and Harris Fulton Joe, Representation theory, Graduate Texts in Mathematics,
vol. 129, Springer-Verlag, New York, 1991. A first course, Readings in Mathematics.
[GG] GAP Group, GAP—Groups, Algorithms, and Programming, Version 4.8.3.
[HI82] Robert B. Howlett and I. Martin Isaacs, On groups of central type, Math. Z. 179 (1982),
no. 4, 555–569.
[IM64] Nagayoshi Iwahori and Hideya Matsumoto, Several remarks on projective representations of finite groups, J. Fac. Sci. Univ. Tokyo Sect. I 10 (1964), 129–146 (1964).
[KW09] Chandrashekhar Khare and Jean-Pierre Wintenberger, Serre’s modularity conjecture.
I, Invent. Math. 178 (2009), no. 3, 485–504.
[Lan80] Robert P. Langlands, Base change for GL(2), Annals of Mathematics Studies, vol. 96,
Princeton University Press, Princeton, N.J.; University of Tokyo Press, Tokyo, 1980.
[Mar04] Kimball Martin, Four-dimensional Galois representations of solvable type and automorphic forms, ProQuest LLC, Ann Arbor, MI, 2004. Thesis (Ph.D.)–California Institute of Technology.
[MBD61] G. A. Miller, H. F. Blichfeldt, and L. E. Dickson, Theory and applications of finite
groups, Dover Publications, Inc., New York, 1961.
[Raj98] C. S. Rajan, On strong multiplicity one for l-adic representations, Internat. Math. Res.
Notices 3 (1998), 161–172.
[Ram94a] Dinakar Ramakrishnan, A refinement of the strong multiplicity one theorem for GL(2).
Appendix to: “l-adic representations associated to modular forms over imaginary quadratic fields. II” [Invent. Math. 116 (1994), no. 1-3, 619–643] by R. Taylor, Invent.
Math. 116 (1994), no. 1-3, 645–649.
, Pure motives and automorphic forms, Motives (Seattle, WA, 1991), Proc.
[Ram94b]
Sympos. Pure Math., vol. 55, Amer. Math. Soc., Providence, RI, 1994, pp. 411–446.
[Ser81] Jean-Pierre Serre, Quelques applications du théorème de densité de Chebotarev, Inst.
Hautes Études Sci. Publ. Math. 54 (1981), 323–401 (French). MR644559
[Tun81] Jerrold Tunnell, Artin’s conjecture for representations of octahedral type, Bull. Amer.
Math. Soc. (N.S.) 5 (1981), no. 2, 173–175.
[Wal14] Nahid Walji, Further refinement of strong multiplicity one for GL(2), Trans. Amer.
Math. Soc. 366 (2014), no. 9, 4987–5007.
DISTINGUISHING FINITE GROUP CHARACTERS
Department of Mathematics, University of Oklahoma, Norman, OK 73019 USA
Department of Mathematics, Occidental College, Los Angeles, CA 90041 USA
21
| 4 |
SCIENCE CHINA
Mathematics
. ARTICLES .
January 2015 Vol. 55 No. 1: 1–XX
doi: 10.1007/s11425-000-0000-0
arXiv:1603.01003v2 [] 24 Jul 2016
In memory of 10 years since the passing of Professor Xiru Chen - a great Chinese statistician
A review of 20 years of naive tests of significance for
high-dimensional mean vectors and covariance
matrices
HU Jiang1 & BAI Zhidong1,∗
1Key
Laboratory for Applied Statistics of MOE, Northeast Normal University, Changchun, 130024, P.R.C.
Email: [email protected], [email protected]
Received Month 00, 2015; accepted Month 00, 2015
Abstract In this paper, we introduce the so-called naive tests and give a brief review of the new developments.
Naive testing methods are easy to understand and perform robustly, especially when the dimension is large. In
this paper, we focus mainly on reviewing some naive testing methods for the mean vectors and covariance
matrices of high-dimensional populations, and we believe that this naive testing approach can be used widely
in many other testing problems.
Keywords
MSC(2010)
Naive testing methods, Hypothesis testing, High-dimensional data, MANOVA.
62H15, 62E20
Citation: HU J, BAI Z D. A review of 20 years of naive tests of significance for high-dimensional mean vectors and
covariance matrices. Sci China Math, 2015, 55, doi: 10.1007/s11425-000-0000-0
1
Introduction
Since its proposal by Hotelling (1931) [23], the Hotelling T 2 test has served as a good test used in
multivariate analyses for more than eight decades due to its many useful properties: it is uniformly the
most powerful of the affine invariant tests for the hypotheses H0 : µ = 0 for the one-sample problem and
H0 : µ1 = µ2 for the two-sample problem. However, it has a fatal defect in that it is not well defined when
the dimension is larger than the sample size or the degrees of freedom. As a remedy, Dempster (1958) [16]
proposed his non-exact test (NET) to test the hypothesis of the equality of two multivariate population
means, that is, the test of locations in the two-sample problem. In 1996, Bai and Saranadasa [3] further
found that Dempster’s NET not only serves as a replacement for the Hotelling T 2 to test the hypothesis
when the number of degrees of freedom is lower than the dimension but is also more powerful than the
Hotelling T 2 when the dimension is large, but not too large, such that T 2 is well defined. They also
proposed the asymptotic normal test (ANT) to test the same hypothesis and strictly proved that both
the NET and ANT have similar asymptotic power functions that are higher than those of the Hoteling
T 2 test. Thus, their work raised an important question that classical multivariate statistical procedures
need to re-examine when the dimension is high. To call attention to this problem, they entitled their
paper “The Effect of High Dimension”.
That paper was published nearly 20 years ago and has been cited in other studies more than 100 times
to date in Web of Science. It is interesting that more than 95% of the citations were made in the past
∗ Corresponding
author
c Science China Press and Springer-Verlag Berlin Heidelberg 2012
math.scichina.com
www.springerlink.com
2
Hu J & Bai Z.
Sci China Math
January 2015
Vol. 55
No. 1
10 years. This pattern reveals that high-dimensional data analysis has attracted much more widespread
attention since the year 2005 than it had received previously. In the theory of hypothesis testing, of course,
the most preferred test is the uniformly most powerful test. However, such a test does not exist unless
the distribution family has the property of a monotone likelihood ratio for which the parameter can only
be univariate. Hence, there is no uniformly most powerful test for multivariate analysis. Therefore, the
optimal procedure can only be considered for smaller domains of significance tests, such as unbiased tests
or invariant tests with respect to specific transformation groups. The Hotelling T 2 was derived based on
the likelihood ratio principle and proved to be the most powerful invariant test with respect to the affine
transformation group (see Page 174 of [1]). A serious point, however, is that the likelihood ratio test
must be derived under the assumption that the likelihood of the data set exists and is known, except for
the unknown parameters. In a real application, it is impossible to verify that the underlying distribution
is multivariate normal or has any other known form of the likelihood function. Thus, we would like to
use another approach to set up a test for some given hypothesis: choose h(θ) as a target function for the
hypotheses such that the null hypothesis can be expressed as h(θ) = 0 and the alternative as h(θ) > 0
and then look for a proper estimator θ̂ of the parameter θ. Then, we reject the hypothesis if h(θ̂) > h0
such that PH0 (h(θ̂) > h0 ) = α. For example, for the Hotelling test of the difference of two sample means,
Σ =S
one can choose h(µ1 , µ2 , Σ ) = (µ1 − µ2 )′Σ −1 (µ1 − µ2 ), the estimators µ̂i = X̄i , i = 1, 2, and Σ̂
for the sample means and sample covariance matrix. Dempster’s NET and Bai and Saranadasa’s ANT
simply use h(µ1 , µ2 ) = kµ1 − µ2 k2 and µ̂i = X̄i , i = 1, 2. That is, the Hotelling test uses the squared
Mahalanobis distance, whereas the NET and ANT use the squared Euclidean distance. We believe that
the reason why the NET and ANT are more powerful for large dimensions than the Hotelling test is
because the target function of the latter involves too many nuisance parameters in Σ , which cannot be
well estimated. Because the new tests focus only on the naive target function instead of the likelihood
ratio, we call them the naive tests, especially the ones that are independent of the nuisance parameters,
which generally ensures higher power.
In 1996, Bai and Saranadasa [3] raised the interesting point that one might prefer adopting a test of
higher power and approximate size rather than a test of exact size but much lower power. The naive
tests have undergone rapid development over the past twenty years, especially over the past 10. In this
paper, we give a brief review of the newly developed naive tests, which are being applied to a wide array
of disciplines, such as genomics, atmospheric sciences, wireless communications, biomedical imaging, and
economics. However, due to the limited length of the paper, we cannot review all of the developments
and applications in all directions, although some of them are excellent and interesting for the field of
high-dimensional data analysis. In this paper, we focus mainly on reviewing some naive testing methods
(NTMs) for the mean vectors and covariance matrices of high-dimensional populations.
Based on the NTMs, many test statistics have been proposed for high-dimensional data analysis.
Throughout this paper, we suppose that there are k populations and that the observations Xi1 , . . . , Xini
are p-variate independent and identically distributed (i.i.d.) random sample vectors from the i-th population, which have the mean vector µi and the covariance matrix Σ i . Moreover, except where noted, we
work with the following model assumptions:
(A1) Xij := (Xij1 , . . . , Xijp )′ = Γi Zij + µi , for i = 1, . . . k, j = 1 . . . , ni , where Γi is a p × m noni
are m-variate i.i.d. random
random matrix for some m > p such that Γi Γ′i = Σi , and {Zij }nj=1
vectors satisfying E(Zij ) = 0 and V ar(Zij ) = Im , the m × m identity matrix;
(A2)
ni
n
→ κi ∈ (0, 1) i = 1, . . . k, as n → ∞, where n =
Denote
X̄i =
Pk
i=1
ni .
ni
ni
1 X
1 X
(i)
Xij and Si =
(Xij − X̄i )(Xij − X̄i )′ = (sij ).
ni j=1
ni − 1 j=1
When k = 1, the subscripts i or 1 are suppressed from ni , n1 , Γi , µi and so on, for brevity.
Hu J & Bai Z.
Sci China Math
January 2015
Vol. 55
P
3
No. 1
D
Throughout the paper, we denote by → the convergence in probability and by → the convergence in
distribution.
The remainder of the paper is organized as follows: In Section 2, we review the sample location
parameters. In subsection 2.1, we introduce the findings of Bai and Saranadasa [3]. In subsection
2.2, we introduce Chen and Qin [14]’s test based on the unbiased estimator of the target function. In
subsection 2.3, we review Srivastava and Du’s work on the scale invariant NTM, based on the modified
component-wise squared Mahalanobis distance. In subsection 2.4, we introduce Cai et al’s NTM based
on the Kolmogorov distance, i.e., the maximum component of difference. In subsection 2.5, we introduce
some works on the extensions to MANOVA and contrast tests, that is, tests for problems of more than
two samples. In Section 3, we introduce some naive tests of hypotheses on covariances. In subsection
3.1, we introduce the naive test proposed by Ledoit and Wolf [27] on the hypothesis of the one-sample
covariance matrix and the spherical test. In subsection 3.2, we introduce the NTM proposed by Li and
Chen (2012) [28]. In subsection 3.3, we introduce Cai’s NTM on covariances based on the Kolmogorov
distance. We also review the testing of the structure of the covariance matrix in subsection 3.4. In Section
4, we make some general remarks on the development of NTMs.
2
2.1
Testing the population locations
Asymptotic powers of T 2 , NET and ANT
In this section, we first consider the simpler one-sample problem by NTM. That is, the null hypothesis
is H0 : µ1 = µ0 . Under the assumption (A1) with k = 1, and testing the hypothesis
H0 : µ = µ 0
v.s. H1 : µ 6= µ0 ,
it is easy to check that EX̄ = µ. Thus, to set up a test of this hypothesis, we need to choose some norms of
the difference µ − µ0 . There are three types of norms to be chosen in the literature: the Euclidean norm,
the Maximum component norm and the Mahalanobis squared norm. Let us begin from the classical one.
The most famous test is the so-called Hotelling T 2 statistic,
T 2 = n(X̄ − µ0 )′ S−1 (X̄ − µ0 )
(2.1)
which was proposed by Hotelling (1931) [23] and is a natural multi-dimensional extension of the squared
univariate Student’s t-statistic. If the Zj s are normally distributed, the Hotelling T 2 statistic is shown to
be the likelihood ratio test for this one-sample problem and to have many optimal properties. Details can
be found in any textbook on multivariate statistical analysis, such as [1, 31]. It is easy to verify that X̄
and S are unbiased, sufficient and complete estimators of the parameters µ and Σ and that, as mentioned
above, the target function is chosen as the Mahalanobis squared distance of the population mean µ from
the hypothesized mean µ0 , which is also the Euclidean norm of Σ−1/2 (µ − µ0 ). Thus, we can see that the
Hotelling T 2 statistic is a type of NTM, and we simply need to obtain its (asymptotic) distribution. It
(n−p) 2
T has an F -distribution with degrees of freedom p
is well known that under the null hypothesis, p(n−1)
and n − p, and when p is fixed, as n tends to infinity, T 2 tends to a chi-squared distribution with degrees
of freedom p. If we assume yn = p/n → y ∈ (0, 1) and Xj are normally distributed, following Bai and
Saranadasa [3], we may easily derive that
s
(1 − yn )3
nyn
nkδk2 D
T2 −
−
→ N (0, 1), as n → ∞,
(2.2)
2nyn
1 − yn
1 − yn
where δ = Σ −1/2 (µ − µ0 ). By (2.2), it is easy to derive that the asymptotic power function of the T 2
test satisfies
s
!
n(1 − y)
2
→ 0.
kδk
βH − Φ −ξα +
2y
4
Hu J & Bai Z.
Sci China Math
January 2015
Vol. 55
No. 1
Here and throughout the paper, Φ is used for the distribution function of a standard normal random
variable, and ξα is its upper α quantile. It should be noted that the above asymptotic distribution of
the Hotelling T 2 statistic (2.2) still holds without the normality assumption. The details can be found
in [33].
Next, we derive the asymptotic power for ANT. In this case, the target function is chosen as h(µ) =
P
kµ − µ0 k2 , and the natural estimator of µ is X̄ = n1 ni=1 Xi . It is easy to derive that
EkX̄k2
V ar(kX̄k2 )
= kµk2 +
1
Σ
trΣ
n
(2.3)
2
Σ2 + 4µ′Σ µ
trΣ
n
m
m
X
X
2
1
+ √ EZ13
(γi′ γi )2
µ′ γi (γi′ γi ) + (EZ14 − 3)
n
n
i=1
i=1
=
(2.4)
where γi is the i-th column of the matrix Γ. Under the conditions
(µ − µ0 )′Σ (µ − µ0 ) =
Σ) =
λmax (Σ
we have
2
V ar(kX̄ − µ0 k ) =
1
Σ2 ),
o( trΣ
n
√
Σ2 ),
o( trΣ
m
X
1
2
Σ2 + (EZ14 − 3)
(γi′ γi )2
trΣ
n
n
i=1
(2.5)
(2.6)
!
(1 + o(1)).
Under the conditions (2.5) and (2.6), using the moment method or martingale decomposition method,
one can prove that
kX̄ − µ0 k2 − E(kX̄ − µ0 k2 )
p
→ N (0, 1)
(2.7)
V ar(kX̄ − µ0 k2 )
To perform the test for the hypothesis H0 : µ = µ0 vs. H1 : µ 6= µ0 , it is necessary to construct
ratio-consistent estimators of E(kX̄ − µ0 k2 ) and V ar(kX̄ − µ0 k2 ) under the null hypothesis. It is obvious
Σ can be estimated by n1 tr(S). The variance can be simply estimated by n1 tr(S2 ) − n1 tr2 (S) if
that n1 trΣ
EZ14 = 3. In the general case, it can be estimated by n1 σ̂n2 , where
σ̂n2
=
1
(n)5
−
X
j1 ,··· ,j5
distinct
1
(n)6
tr ((Xj1 − Xj2 )(Xj1 − Xj3 )′ (Xj1 − Xj4 )(Xj1 − Xj5 )′ )
X
j1 ,··· ,j6
distinct
tr ((Xj1 − Xj2 )(Xj1 − Xj3 )′ (Xj6 − Xj4 )(Xj6 − Xj5 )′ ) ,
(2.8)
where the summations above are taken for all possibilities that j1 , · · · , js , s = 5 or 6, distinctly run over
{1, · · · , n}, and (n)l = n(n − 1) · · · (n − l + 1). Using the standard limiting theory approach, one may
prove that σ̂n2 is a ratio-consistent estimator of σn2 , where
Σ2 + (EZ14 − 3)
σn2 = 2trΣ
m
X
(γi′ γi )2 .
i=1
Therefore, the test rejects H0 if
kX̄ − µ0 k2 >
1
1
tr(S) + √ ξα σ̂n .
n
n
From this result, it is easy to derive that under conditions (2.5) and (2.6), the asymptotic power of ANT
is
!
!
√
√
nkµ − µ0 k2
nkµ − µ0 k2
p
p
≃ Φ −ξα +
.
βAN T ≃ Φ −ξα +
σ̂n2
σn2
Hu J & Bai Z.
January 2015
Sci China Math
Vol. 55
5
No. 1
Comparing the expressions of the asymptotic powers of Hotelling test and ANT, one sees that the factor
√
1 − y appears in the asymptotic power of Hotelling’s test but not in that of the ANT. This difference
shows that the ANT has higher power than does the T 2 test when y is close to 1.
Moreover, if p, the dimension of the data, is larger than n − 1, the degrees of freedom, then T 2 is not
well defined, and there is no way to perform the significance test using it.
Remark 1. In the real calculation of σ̂n2 , the computation using the expression of (2.8) is very time
consuming. To reduce the computing time, we should rewrite it as
n
σ̂n2
=
1X ′
4
(X Xj )2 −
n j=1 j
(n)2
+
6
(n)3
X
X
j1 ,j2
distinct
X′j1 Xj2 X′j1 Xj3 +
j1 ,j2 ,j3
distinct
1
(n)2
X′j1 Xj1 X′j1 Xj2 −
2
(n)3
X
j1 ,j2 ,j3
distinct
X
X′j1 Xj1 X′j2 Xj2
j1 ,j2
distinct
X′j1 Xj1 X′j2 Xj3 −
4
(n)4
X
X′j1 Xj2 X′j3 Xj4 , (2.9)
j1 ,j2 ,j3 ,j4
distinct
where each summation runs over all possibilities in which the indices involved are distinct.
It is easy to see that to calculate the estimator σ̂n2 using (2.9) is very time consuming: for example, to
calculate the last term, one needs to compute 2pn4 multiplications. To further reduce the computation
time, one may use the inclusion-exclusion principle to change the last five sums into forms that are easier
to calculate. For example, the last sum I6 can be written as
I6 = (X′ X)2 − 2X′ Xa − 4X′ X(2) X + a2 + 2tr(X2(2) ) + 8X′ X(3) − 6b,
(2.10)
where
X =
X(3)
=
n
X
j=1
n
X
Xj ,
a=
n
X
X′j Xj ,
X(2) =
b=
Xj X′j ,
j=1
j=1
X′j Xj Xj ,
n
X
n
X
(X′j Xj )2 .
j=1
j=1
Here, the coefficients of various terms can be found by the following arguments: Let Ω denote the fact that
there are no restrictions between the indices j1 , · · · , j4 , and let Aik denote the restriction that ji = jk ,
i < k 6 4, which is called an equal sign, or an edge between vertices i and j.
The sum I6 in which the indices j1 , · · · , j4 are distinct can be considered the indices running over the
Q
set 16i<k64 (Ω − Aik ). By expanding the product, one may split the sum I6 into a signed sum of several
sums: the first sum runs over Ω, followed by the subtraction of 6 sums with one equal sign; add 15 sums
with two equal signs; subtract 20 sums with three equal signs, and so on, and finally add the sum with all
six equal signs. Now, the first one runs over Ω, that is, there are no restrictions among the four vertices
1, 2, 3, 4, which simply gives the first term in (2.10). The sum with the equal sign A12 is given by
X
X′j1 Xj2 X′j3 Xj4 =
A12
n1
n X
n X
X
X′j Xj X′j3 Xj4 = aX′ X;
j=1 j3 =1 j4 =1
Similarly, the sum under the equal sign A34 is also aX′ X. These two cases give −2X′ Xa in the second
term in (2.10); the other 4 cases with one equal sign give −4X′ X(2) X in the third term. For example,
X
A13
X′j1 Xj2 X′j3 Xj4 =
n1
n X
n X
X
X′j Xj2 X′j Xj4 = X′ X(2) X;
j=1 j2 =1 j4 =1
Under the equal sign A14 , A23 or and A24 , the sum again has the form X′ X(2) X. By similar arguments,
one can show that the sum with the two equal signs A12 and A34 is given by a2 in the fourth term;
the sums with two equal signs A13 and A24 or A14 and A23 are given by 2tr(X2(2) ) in the fifth term;
the sums for the other 12 cases with two equal signs, such as A12 and A23 , are given by 12X′ X(3) ;
6
Hu J & Bai Z.
Sci China Math
January 2015
Vol. 55
No. 1
there are 4 cases in which the three equal signs make three indices equal and leave one index free of
the rest (or, equivalently, three edges forming a triangle), which contribute −4X′ X(3) ; and two cases
give a final contribution of 8X′ X(3) in the seventh term of (2.10). There are 16 other cases of three
equal signs that imply all indices j1 , · · · , j4 are equal, giving a sum of b. Additionally, if there are more
than three equal signs, the indices j1 , · · · , j4 are also all identical; thus, we obtain the coefficient for b as
−16 + 64 − 65 + 1 = −6. Therefore, the splitting (2.10) is true.
Similarly, one may show that
I1
= b
I2
= X′ X(3) − b
I3
I4
I5
= a2 − b
= X′ X(2) X − 2X′ X(3) − tr(X2(2) ) + 2b
= X′ Xa − a2 − 2X′ X(3) + 2b.
Finally, one can calculate the estimator of the variance by
σ̂n2
=
n2 − n + 2
4
n2 − 3n + 4 2 6n − 2 ′
6n − 10
b−
X′ X(3) −
a +
X X(2) X −
tr(X2(2) )
(n − 1)3
(n − 2)2
(n)4
(n)4
(n)4
2n + 2 ′
4
+
X Xa −
(X′ X)2 .
(n)4
(n)4
From the expressions above, the numbers of multiplications to be calculated for the terms above are
n(p + 1), np3 , np + 1, np3 , p2 , p + 1 and p + 1, respectively. Thus, by using this formula, the computation
time will be reduced significantly.
Now, consider the two-sample location problem of multivariate normal distributions, H0 : µ1 = µ2 ,
with a common covariance matrix Σ . The classical test for this hypothesis is the Hotelling T 2 test
n1 n2
(X̄1 − X̄2 )′ S−1 (X̄1 − X̄2 )
T2 =
n1 + n2
Pni
Xij , i = 1, 2 and
where X̄i = n1i j=1
ni
2 X
X
1
S=
(Xij − X̄i )(Xij − X̄i )′ .
n1 + n2 − 2 i=1 j=1
In 1958 and 1960, Dempster published two papers, [16] and [17], in which he argued that if $p$, the dimension of the data, is larger than $N = n_1+n_2-2$, the degrees of freedom, then $T^2$ is not well defined, and there is no way to perform the significance test using $T^2$. Therefore, he proposed the so-called NET (non-exact test) as follows. Arrange the data $X = (X_{11},\dots,X_{1n_1},X_{21},\dots,X_{2n_2})$ as a $p\times n$ matrix, where $n = n_1+n_2$. Select an $n\times n$ orthogonal matrix $H$ and transform the data matrix to $Y = XH = (y_1,\dots,y_n)$ such that
\[
y_1 \sim N\!\left(\frac{n_1}{\sqrt n}\mu_1 + \frac{n_2}{\sqrt n}\mu_2,\ \Sigma\right),\qquad
y_2 \sim N\!\left(\sqrt{\frac{n_1 n_2}{n}}(\mu_1-\mu_2),\ \Sigma\right),\qquad
y_3,\dots,y_n \overset{\text{i.i.d.}}{\sim} N(0,\Sigma).
\]
Then, he defined his NET by
\[
T_D = \frac{\|y_2\|^2}{\|y_3\|^2+\cdots+\|y_n\|^2}.
\]
He claimed that as $n_1$ and $n_2$ increase, using the so-called chi-square approximation of $\|y_j\|^2$ for $j = 2,3,\dots,n$,
\[
N T_D \sim F_{r, Nr},
\]
Hu J & Bai Z.
January 2015
Sci China Math
Vol. 55
No. 1
7
where $N = n_1+n_2-2$ and $r = (\mathrm{tr}\Sigma)^2/\mathrm{tr}\Sigma^2$. Comparing $NT_D$ with $T^2$, we find that Dempster simply replaced $S$ by $\mathrm{tr}(S)I_p$ to sidestep the difficulty that arises when $S$ is singular.
Bai and Saranadasa [3] observed that Dempster’s NET is not only a remedy for the T 2 test when it
is not well defined but is also more powerful than T 2 even when it is well defined, provided that the
dimension p is large compared with the degrees of freedom N . Based on Dempster’s NET, Bai and
Saranadasa [3] also proposed the so-called ANT (asymptotic normality test) based on the normalization
of ky2 k2 . They established a CLT (central limit theorem) as follows:
\[
\frac{M_n}{\sqrt{\mathrm{Var}(M_n)}} \xrightarrow{D} N(0,1),
\]
where $M_n = \|\bar X_1 - \bar X_2\|^2 - \frac{n}{n_1 n_2}\mathrm{tr}(S)$. To perform the significance test for $H_0$, Bai and Saranadasa proposed the ratio-consistent estimator of $\mathrm{Var}(M_n)$ given by
\[
\widehat{\mathrm{Var}}(M_n) = \frac{2(N+2)(N+1)N}{n_1^2 n_2^2 (N-1)}\left(\mathrm{tr}(S^2) - \frac{1}{N}\mathrm{tr}^2 S\right).
\]
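As an illustration, $M_n$ and the variance estimator can be computed directly. The sketch below follows the reconstructed display above, so the normalizing constants should be read as illustrative rather than authoritative.

```python
import numpy as np

# Sketch of Bai and Saranadasa's ANT: M_n and the variance estimator above,
# returned as a standardized statistic to be compared with N(0,1).
def bs_ant(X1, X2):
    n1, n2 = len(X1), len(X2)
    n, N = n1 + n2, n1 + n2 - 2
    xbar1, xbar2 = X1.mean(axis=0), X2.mean(axis=0)
    S = ((X1 - xbar1).T @ (X1 - xbar1)
         + (X2 - xbar2).T @ (X2 - xbar2)) / N
    Mn = np.sum((xbar1 - xbar2) ** 2) - n / (n1 * n2) * np.trace(S)
    trS, trS2 = np.trace(S), np.trace(S @ S)
    var_hat = (2 * (N + 2) * (N + 1) * N
               / (n1 ** 2 * n2 ** 2 * (N - 1))) * (trS2 - trS ** 2 / N)
    return Mn / np.sqrt(var_hat)
```

Unlike the Hotelling statistic, no matrix inverse is needed, so the computation goes through even when $p > N$.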
Bai and Saranadasa [3] proved that when dimension p is large compared with n, both NET and
ANT are more powerful than the T 2 test, by deriving the asymptotic power functions of the three tests.
Under the conditions that p/n → y ∈ (0, 1) and n1 /n → κ ∈ (0, 1), the power function of the T 2 test
asymptotically satisfies
\[
\beta_H(\delta) - \Phi\!\left(-\xi_\alpha + \sqrt{\frac{n(1-y)}{2y}}\,\kappa(1-\kappa)\|\delta\|^2\right) \to 0,
\]
where $\delta = \Sigma^{-1/2}(\mu_1-\mu_2)$.
Under the assumption A2 and
\[
\mu'\Sigma\mu = o\!\left(\frac{n}{n_1 n_2}\,\mathrm{tr}(\Sigma^2)\right), \qquad (2.11)
\]
\[
\lambda_{\max}(\Sigma) = o\!\left(\sqrt{\mathrm{tr}\,\Sigma^2}\right), \qquad (2.12)
\]
where µ = µ1 − µ2 , Bai and Saranadasa [3] also proved that the power function of Dempster’s NET
satisfies
\[
\beta_D(\mu) - \Phi\!\left(-\xi_\alpha + \frac{n\kappa(1-\kappa)\|\mu\|^2}{\sqrt{2\,\mathrm{tr}\,\Sigma^2}}\right) \to 0.
\]
Without the normality assumption, under the assumptions A1 and A2 and the conditions (2.11) and
(2.12), Bai and Saranadasa [3] proved that the power function of their ANT has similar asymptotic power
to the NET, that is,
nκ(1 − κ)kµk2
√
βBS (µ) − Φ −ξα +
→ 0.
Σ2
2trΣ
2.2  Chen and Qin's approach
Chen and Qin (2010) [14] argued that the main term of Bai and Saranadasa's ANT contains squared terms of sample vectors that may cause non-robustness of the test statistic against outliers, and thus proposed an unbiased estimator of the target function $\|\mu_1-\mu_2\|^2$, given by
\[
T_{CQ} = \frac{1}{n_1(n_1-1)}\sum_{i\neq j} X_{1i}'X_{1j} + \frac{1}{n_2(n_2-1)}\sum_{i\neq j} X_{2i}'X_{2j} - \frac{2}{n_1 n_2}\sum_{i,j} X_{1i}'X_{2j}.
\]
Chen and Qin [14] proved that ETCQ = kµ1 − µ2 k2 , and under the null hypothesis,
\[
\mathrm{Var}(T_{CQ}) = \left[\frac{2}{n_1(n_1-1)}\mathrm{tr}(\Sigma_1^2) + \frac{2}{n_2(n_2-1)}\mathrm{tr}(\Sigma_2^2) + \frac{4}{n_1 n_2}\mathrm{tr}(\Sigma_1\Sigma_2)\right](1+o(1)).
\]
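For concreteness, $T_{CQ}$ can be computed from the Gram matrices of the two samples; the sketch below is one possible vectorized implementation of the defining double sums (the function name is illustrative).

```python
import numpy as np

# Sketch of Chen and Qin's T_CQ.  The i != j restriction in the first two
# sums is implemented by removing the diagonal of each Gram matrix.
def t_cq(X1, X2):
    n1, n2 = len(X1), len(X2)
    G1, G2 = X1 @ X1.T, X2 @ X2.T
    s1 = (G1.sum() - np.trace(G1)) / (n1 * (n1 - 1))
    s2 = (G2.sum() - np.trace(G2)) / (n2 * (n2 - 1))
    cross = (X1 @ X2.T).sum() / (n1 * n2)
    return s1 + s2 - 2 * cross
```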
Similarly to Bai and Saranadasa (1996), under the conditions
\[
\frac{n_1}{n} \to \kappa \quad\text{and}\quad \mathrm{tr}(\Sigma_i\Sigma_j\Sigma_l\Sigma_h) = o\!\left(\mathrm{tr}^2(\Sigma_1+\Sigma_2)^2\right) \ \text{ for } i,j,l,h = 1 \text{ or } 2, \qquad (2.13)
\]
\[
(\mu_1-\mu_2)'\Sigma_i(\mu_1-\mu_2) = o\!\left(n^{-1}\mathrm{tr}(\Sigma_1+\Sigma_2)^2\right) \ \text{ for } i = 1 \text{ or } 2, \qquad (2.14)
\]
\[
\text{or}\quad n^{-1}\mathrm{tr}(\Sigma_1+\Sigma_2)^2 = o\!\left((\mu_1-\mu_2)'\Sigma_i(\mu_1-\mu_2)\right) \ \text{ for } i = 1 \text{ or } 2, \qquad (2.15)
\]
Chen and Qin proved that
\[
\frac{T_{CQ} - \|\mu_1-\mu_2\|^2}{\sqrt{\mathrm{Var}(T_{CQ})}} \xrightarrow{D} N(0,1).
\]
To perform the test for H0 : µ1 = µ2 with the target function h(µ1 , µ2 ) = kµ1 − µ2 k2 , they proposed
the estimator for V ar(TCQ ) to be
\[
\hat\sigma_n^2 = \frac{2}{n_1(n_1-1)}\widehat{\mathrm{tr}(\Sigma_1^2)} + \frac{2}{n_2(n_2-1)}\widehat{\mathrm{tr}(\Sigma_2^2)} + \frac{4}{n_1 n_2}\widehat{\mathrm{tr}(\Sigma_1\Sigma_2)},
\]
where
\[
\widehat{\mathrm{tr}(\Sigma_i^2)} = \frac{1}{n_i(n_i-1)}\sum_{j\neq k} X_{ik}'(X_{ij}-\bar X_{i(jk)})\,X_{ij}'(X_{ik}-\bar X_{i(jk)}),
\]
\[
\widehat{\mathrm{tr}(\Sigma_1\Sigma_2)} = \frac{1}{n_1 n_2}\sum_{j=1}^{n_1}\sum_{k=1}^{n_2} X_{2k}'(X_{1j}-\bar X_{1(j)})\,X_{1j}'(X_{2k}-\bar X_{2(k)}),
\]
and $\bar X_{i(\ast)}$ denotes the sample mean of the $i$-th sample, excluding the vectors indicated in the subscript parentheses.
Applying the central limit theorem, Chen and Qin derived the asymptotic power functions for two
cases:
\[
\beta_{CQ} \sim
\begin{cases}
\Phi\!\left(-\xi_\alpha + \dfrac{n\kappa(1-\kappa)\|\mu_1-\mu_2\|^2}{\sqrt{2\,\mathrm{tr}(\kappa\Sigma_1+(1-\kappa)\Sigma_2)^2}}\right) & \text{if (2.14) holds,}\\[3ex]
\Phi\!\left(\dfrac{n\kappa(1-\kappa)\|\mu_1-\mu_2\|^2}{\sqrt{2\,\mathrm{tr}(\kappa\Sigma_1+(1-\kappa)\Sigma_2)^2}}\right) & \text{if (2.15) holds.}
\end{cases} \qquad (2.16)
\]
Remark 2. The expression of the asymptotic power under the condition (2.15) ((3.5) in Chen and Qin [14]) may contain an error, in that the denominator of the quantity inside the function $\Phi$ should be $\sigma_{n2}$ in Chen and Qin's notation, that is,
\[
\sigma_{n2} = \sqrt{2\left(\frac{1}{n_1}(\mu_1-\mu_2)'\Sigma_1(\mu_1-\mu_2) + \frac{1}{n_2}(\mu_1-\mu_2)'\Sigma_2(\mu_1-\mu_2)\right)},
\]
instead of $\sigma_{n1}$. However, the asymptotic power is 1 under the condition (2.15). Therefore, the typo does not affect the correctness of the expression of the asymptotic power. This point is shown by the following facts:
By the condition (2.13), we have
\[
(\mu_1-\mu_2)'\Sigma_i(\mu_1-\mu_2) \le \lambda_{\max}(\Sigma_i)\|\mu_1-\mu_2\|^2 \le o\!\left(\sqrt{\mathrm{tr}(\Sigma_i^2)}\right)\|\mu_1-\mu_2\|^2.
\]
Therefore,
\[
\sigma_{n2}^2 \le o\!\left(\sigma_{n1}\|\mu_1-\mu_2\|^2\right).
\]
Consequently,
\[
\frac{\|\mu_1-\mu_2\|^2}{\sigma_{n2}} \ge \frac{\sigma_{n2}}{o(\sigma_{n1})} \ge \sqrt{\frac{\sigma_{n2}^2}{\sigma_{n1}^2}} \ge M
\]
for any fixed constant $M$, where the last step follows from the condition (2.15). Regarding Chen and Qin's expression, one has
\[
\frac{\|\mu_1-\mu_2\|^2}{\sigma_{n1}} \ge \frac{\|\mu_1-\mu_2\|^2}{\sigma_{n2}} \ge M.
\]
For the one-sample location problem, Chen and Qin [14] modified $T_{BS}$ and proposed
\[
T_{CQ} = \frac{1}{n(n-1)}\sum_{i\neq j} X_i'X_j,
\]
and showed that under the condition $\mathrm{tr}\Sigma^4 = o(\mathrm{tr}^2\Sigma^2)$ (which is equivalent to (2.6)), as $\min\{p,n\}\to\infty$,
\[
\frac{T_{CQ}}{\sqrt{\dfrac{2}{n(n-1)}\,\mathrm{tr}\!\left(\sum_{i\neq j}(X_i-\bar X_{(i,j)})X_i'(X_j-\bar X_{(i,j)})X_j'\right)}} \xrightarrow{D} N(0,1). \qquad (2.17)
\]
Note that the difference between the statistic in (2.17) and the LHS of (2.7), with the denominator replaced by the estimator $\frac{1}{n}(\mathrm{tr}S^2 - \frac{1}{n}\mathrm{tr}^2 S)$, lies in the denominators, which are both estimators of $\mathrm{tr}\Sigma^2$.
Remark 3. It has been noted that the main part TCQ of Chen and Qin’s test is exactly the same as Bai
and Saranadasa’s Mn because they are both unbiased estimators of the target function kµ1 − µ2 k2 and
are functions of the complete and sufficient statistics of the mean vectors and covariance matrices for the
two samples. We believe that Chen and Qin’s idea of an unbiased estimator of the target function helped
them propose a better estimator of the asymptotic variance of the test such that their test performed
better than Bai and Saranadasa's ANT in simulations. In addition, an improved statistic for the Chen and Qin test based on thresholding methods was recently proposed by Chen et al. [13]. Wang et al. [50] also proposed a test for the hypothesis under the elliptical distribution assumption, which can be viewed as a nonparametric extension of $T_{CQ}$.
2.3  Srivastava and Du's approach
While acknowledging the defect of the Hotelling test, as indicated in [3, 16], Srivastava and Du (2008) [42] noted that the NET and ANT are not scale invariant, which may cause lower power when the scales of different components of the model are very different. Accordingly, they proposed the following modification to the ANT:
\[
T_{SD,1} = (\bar X - \mu_0)' D_S^{-1} (\bar X - \mu_0),
\]
where $D_S = \mathrm{Diag}(s_{11},\dots,s_{pp})$ is the diagonal matrix of the sample covariance matrix $S$. Let $R$ be the population correlation matrix. Then, under the conditions that
\[
0 < \lim_{p\to\infty}\frac{\mathrm{tr}\,R^i}{p} < \infty \ \text{ for } i = 1,2,3,4; \qquad \lim_{p\to\infty}\frac{\lambda(R)}{\sqrt p} = 0,
\]
where $\lambda(R)$ is the largest eigenvalue of the correlation matrix $R$,
Srivastava and Du [42] showed that if $n \asymp p^\eta$ and $\frac12 < \eta \le 1$,
\[
\frac{nT_{SD,1} - \dfrac{(n-1)p}{n-3}}{\sqrt{2\left(\mathrm{tr}R^2 - \dfrac{p^2}{n-1}\right)c_{p,n}}} \xrightarrow{D} N(0,1), \quad \text{as } n\to\infty,
\]
where $R$ is the sample correlation matrix, i.e., $R = D_S^{-1/2} S D_S^{-1/2}$, and $c_{p,n} = 1 + \mathrm{tr}R^2/p^{3/2}$. They
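A small sketch of $T_{SD,1}$ and the sample correlation matrix $R$ used in its normalization (variable names are illustrative). Note that the statistic is invariant under componentwise rescaling of the data, which is exactly the scale invariance the authors sought.

```python
import numpy as np

# Sketch of Srivastava and Du's T_{SD,1} and the sample correlation matrix
# R = D_S^{-1/2} S D_S^{-1/2}.
def t_sd1(X, mu0):
    n = len(X)
    xbar = X.mean(axis=0)
    S = (X - xbar).T @ (X - xbar) / (n - 1)
    t = (xbar - mu0) @ ((xbar - mu0) / np.diag(S))  # D_S^{-1} is diagonal
    d = 1.0 / np.sqrt(np.diag(S))
    R = S * np.outer(d, d)
    return t, R
```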
showed that the asymptotic power of TSD,1 under the local alternative, as n → ∞,
\[
\beta_{SD} = P\!\left(\frac{nT_{SD,1} - \frac{(n-1)p}{n-3}}{\sqrt{2\left(\mathrm{tr}R^2 - \frac{p^2}{n-1}\right)c_{p,n}}} > \xi_\alpha\right) \to \Phi\!\left(-\xi_\alpha + \frac{n(\mu-\mu_0)' D_\Sigma^{-1}(\mu-\mu_0)}{\sqrt{2\,\mathrm{tr}R^2}}\right),
\]
where $D_\Sigma$ is the diagonal matrix of the population covariance matrix $\Sigma$. Later, Srivastava [41] modified the asymptotic results above to the case where the adjusting term $c_{p,n}$ in the last test statistic is replaced by 1, and the restriction on $\eta$ is relaxed to $0 < \eta \le 1$. Further, by excluding $\sum_{i=1}^{n}(X_i-\mu_0)' D_S^{-1}(X_i-\mu_0)$ from $T_{SD,1}$ and modifying $D_S$, Park and Ayyala [34] obtained another NTM test statistic:
\[
T_{PA} = \frac{n-5}{n(n-1)(n-3)}\sum_{i\neq j} X_i' D_{S(i,j)}^{-1} X_j,
\]
where $D_{S(i,j)} = \mathrm{Diag}(s_{11}^{((i,j))},\dots,s_{pp}^{((i,j))})$ is the diagonal matrix of the sample covariance matrix excluding the sample points $X_i$ and $X_j$, i.e., $S_{(i,j)} = (n-3)^{-1}\sum_{k\neq i,j}(X_k-\bar X_{(i,j)})(X_k-\bar X_{(i,j)})'$ and $\bar X_{(i,j)} = (n-2)^{-1}\sum_{k\neq i,j} X_k$.
Srivastava and Du also considered the two-sample location problem with a common covariance matrix $\Sigma$ [42] and proposed the test statistic
\[
T_{SD,2} = \frac{n_1 n_2}{n}(\bar X_1 - \bar X_2)' D_S^{-1}(\bar X_1 - \bar X_2).
\]
Under conditions similar to those for the CLT of $T_{SD,1}$, they proved that
\[
\frac{T_{SD,2} - \dfrac{Np}{N-2}}{\sqrt{2\left(\mathrm{tr}R^2 - \dfrac{p^2}{n}\right)c_{n,p}}} \xrightarrow{D} N(0,1). \qquad (2.18)
\]
They then further derived the asymptotic power function
\[
\beta_{SD}(\mu) \sim \Phi\!\left(-\xi_\alpha + \frac{n\kappa(1-\kappa)\mu' D_\Sigma^{-1}\mu}{\sqrt{2\,\mathrm{tr}R^2}}\right).
\]
Remark 4. The advantage of this statistic is that the terms $X_i$, $D_{S(i,j)}$ and $X_j$ are all independent, so that it is easy to obtain the approximation
\[
ET_{PA} = \mu' E\!\left(D_{S(i,j)}^{-1}\right)\mu \simeq \mu' D_\Sigma^{-1}\mu,
\]
which is similar to $E(nT_{SD,1} - p(n-1)/(n-3))$, as given in [42]. This point shows that both $T_{SD,1}$ and $T_{PA}$ are NTM tests based on the target function $\mu' D_\Sigma^{-1}\mu$. The idea that they use to exclude the bias $p(n-1)/(n-3)$ is similar to $T_{CQ}$, which removes the bias estimator $\mathrm{tr}S$ appearing in $T_{BS}$. Park and Ayyala also gave the asymptotic distribution of $T_{PA}$ under the null hypothesis, that is,
\[
\frac{\sqrt{n(n-1)}\,T_{PA}}{\sqrt{2\,\widehat{\mathrm{tr}R^2}}} \xrightarrow{D} N(0,1),
\]
where $\widehat{\mathrm{tr}R^2}$ is a ratio-consistent estimator of $\mathrm{tr}R^2$, i.e., $\widehat{\mathrm{tr}R^2}/\mathrm{tr}R^2 \xrightarrow{P} 1$, and
\[
\widehat{\mathrm{tr}R^2} = \frac{1}{n(n-1)}\sum_{i\neq j} X_i' D_{S(i,j)}^{-1}(X_j-\bar X_{(i,j)})\,X_j' D_{S(i,j)}^{-1}(X_i-\bar X_{(i,j)}).
\]
They then showed that the asymptotic power of the test $T_{PA}$ is the same as that of $T_{SD}$. Recently, Dong et al. [18] gave a shrinkage estimator of the diagonal $D_\Sigma$ of the population covariance matrix and showed that the shrinkage-based Hotelling test performs better than the unscaled Hotelling test and the regularized Hotelling test when the dimension is large.
Remark 5. For $\Sigma_1 \neq \Sigma_2$, Srivastava et al. [43] used $D = D_{S_1}/n_1 + D_{S_2}/n_2$ instead of $D_S$ in $T_{SD,2}$. For the case in which the population covariance matrices are diagonal, Wu et al. [52] constructed a statistic by summing the squared component-wise $t$-statistics for missing data, and Dong et al. [18] proposed a shrinkage-based diagonalized Hotelling's test.
2.4  Cai et al.'s idea
Cai et al. [10] noted that all of the NTM tests associated with target functions based on the Euclidean norm or the Mahalanobis distance require the condition
\[
n\|\mu\|^2/\sqrt p \to \infty \qquad (2.19)
\]
to distinguish the null and alternative hypotheses with probability tending to 1. This condition does not hold if only a few components of $\mu$ have the order $O(1/\sqrt n)$ and all others are 0. Therefore, they proposed using the $L_\infty$ norm or, equivalently, the Kolmogorov distance. Indeed, Cai et al.'s work compensates for the case where
\[
\sqrt n \max_{i\le p}|\mu_i| \to \infty. \qquad (2.20)
\]
Note that neither of the conditions (2.19) and (2.20) implies the other. The condition (2.20) is weaker than (2.19) only when the bias vector $\mu - \mu_0$ or $\mu_1 - \mu_2$ is sparse.
Now, we introduce the work of Cai et al. [8], who developed another NTM test based on the Kolmogorov distance, which is more powerful against sparse alternatives in high-dimensional settings. Supposing that $\Sigma_1 = \Sigma_2 = \Sigma$ and that $\{X_1, X_2\}$ satisfy the sub-Gaussian-type or polynomial-type tails condition, Cai et al. proposed the test statistic
\[
T_{CLX} = \frac{n_1 n_2}{n_1+n_2}\max_{1\le i\le p}\frac{\hat X_i^2}{\hat\omega_{ii}},
\]
where $\widehat{\Sigma^{-1}}(\bar X_1-\bar X_2) := (\hat X_1,\dots,\hat X_p)'$ and $\widehat{\Sigma^{-1}} := \hat\Omega = (\hat\omega_{ij})_{p\times p}$ is the constrained $l_1$-minimization estimator of the inverse matrix $\Sigma^{-1}$. Here, the so-called constrained $l_1$-minimization estimator of the inverse matrix
is defined by
\[
\widehat{\Sigma^{-1}} = \arg\min_{\Omega=(\omega_{ij})}\left\{\sum_{ij}|\omega_{ij}|\ \text{ subject to } \|S\Omega - I_p\|_\infty \le \gamma_n\right\},
\]
where $\gamma_n$ is a tuning parameter, which may generally be chosen as $C\sqrt{\log p/n}$ for some large constant $C$. For more details on the properties of $l_1$-minimization estimators, the reader is referred to [5]. Under the null hypothesis $H_0$ and some conditions on the spectrum of the population covariance matrix, for any $x\in\mathbb{R}$, as $\min\{n,p\}\to\infty$,
\[
P\!\left(T_{CLX} - 2\log(p) + \log\log(p) \le x\right) \to \exp\left(-\frac{1}{\sqrt\pi}\exp\left(-\frac{x}{2}\right)\right).
\]
To evaluate the performance of their maximum absolute components test, they also proved the following result: Suppose that $C_0^{-1} \le \lambda_{\min}(\Sigma) \le \lambda_{\max}(\Sigma) \le C_0$ for some constant $C_0 > 1$; $k_p = O(p^r)$ for some $r \le 1/4$; and $\max_{i\le p}|\mu_i|/\sqrt{\sigma_{ii}} \ge \sqrt{2\beta\log(p)/n}$ with $\beta > 1/\min_i(\sigma_{ii}\omega_{ii}) + \varepsilon$ for some $\varepsilon > 0$. Then, as $p\to\infty$,
\[
P_{H_1}(\phi_\alpha(\Omega) = 1) \to 1,
\]
where $k_p$ is the number of non-zero entries of $\mu$.
Remark 6. Note that for the statistic TCLX , one can use any consistent estimator of Σ −1 in the sense
of the L1 -norm and infinity norm with at least a logarithmic rate of convergence.
2.5  MANOVA and Contrasts: more than two samples
In this subsection, we consider the problem of testing the equality of several high-dimensional mean vectors, which is also called the multivariate analysis of variance (MANOVA) problem. This problem is to test the hypothesis
\[
H_0: \mu_1 = \cdots = \mu_k \quad\text{vs}\quad H_1: \exists\, i \neq j,\ \mu_i \neq \mu_j. \qquad (2.21)
\]
For samples drawn from a normal distribution family, the MANOVA problem in a high-dimensional setting has been considered widely in the literature. For example, among others, Tonda and Fujikoshi [48] obtained the asymptotic null distribution of the likelihood ratio test; Fujikoshi [21] found the asymptotic null distributions of the Lawley-Hotelling trace and the Pillai trace statistics; and Fujikoshi et al. [22] considered the Dempster trace test, which is based on the ratio of the trace of the between-class sample covariance matrix to the trace of the within-class sample covariance matrix. Instead of investigating the ratio of the traces of the two sample matrices, Schott [38] proposed a test statistic based on the difference between the traces. Next, we introduce three NTM statistics that improve on $T_{SD}$, $T_{CQ}$ and $T_{CLX}$.
Recently, Srivastava and Kubokawa [44] proposed a test statistic for testing the equality of the mean vectors of several groups with a common unknown non-singular covariance matrix. Denote by $\mathbf{1}_r = (1,\dots,1)'$ an $r$-vector with all entries 1, and define
\[
Y = (X_{11},\dots,X_{1n_1},\dots,X_{k1},\dots,X_{kn_k}), \qquad L = (I_{k-1},\ -\mathbf{1}_{k-1})_{(k-1)\times k},
\]
and
\[
E = \begin{pmatrix}
\mathbf{1}_{n_1} & 0 & \cdots & 0\\
0 & \mathbf{1}_{n_2} & & \vdots\\
\vdots & & \ddots & 0\\
0 & \cdots & 0 & \mathbf{1}_{n_k}
\end{pmatrix}_{n\times k}.
\]
Then, Srivastava and Kubokawa proposed the following test statistic:
\[
T_{SK} = \frac{\mathrm{tr}(BD_S^{-1}) - (n-k)p(k-1)(n-k-2)^{-1}}{\sqrt{2c_{p,n}(k-1)\left(\mathrm{tr}R^2 - (n-k)^{-1}p^2\right)}},
\]
where $B = YE(E'E)^{-1}L'[L(E'E)^{-1}L']^{-1}L(E'E)^{-1}E'Y'$, $D_S = \mathrm{Diag}[(n-k)^{-1}Y(I_n - E(E'E)^{-1}E')Y']$, $R = D_S^{-1/2}\,(n-k)^{-1}Y(I_n - E(E'E)^{-1}E')Y'\,D_S^{-1/2}$, and $c_{p,n} = 1 + \mathrm{tr}(R^2)/p^{3/2}$. Note that $\mathrm{Diag}[A]$ denotes the diagonal matrix consisting of the diagonal elements of the matrix $A$. Under the null hypothesis and the condition $n \asymp p^\delta$ with $\delta > 1/2$, $T_{SK}$ is asymptotically distributed as $N(0,1)$. Thus, as $n,p\to\infty$,
\[
P_{H_0}(T_{SK} > \xi_\alpha) \to \Phi(-\xi_\alpha).
\]
Motivated by Chen and Qin [14], Hu et al. [24] proposed a test for the MANOVA problem, that is,
\[
T_{HB} = \sum_{i<j}^{k}(\bar X_i - \bar X_j)'(\bar X_i - \bar X_j) - (k-1)\sum_{i=1}^{k} n_i^{-1}\mathrm{tr}S_i
= (k-1)\sum_{i=1}^{k}\frac{1}{n_i(n_i-1)}\sum_{k_1\neq k_2} X_{ik_1}'X_{ik_2} - \sum_{i<j}^{k}\frac{2}{n_i n_j}\sum_{k_1,k_2} X_{ik_1}'X_{jk_2}.
\]
When $k = 2$, clearly, $T_{HB}$ reduces to Chen and Qin's test statistic. It is also shown that as $p\to\infty$ and $n\to\infty$,
\[
\frac{T_{HB} - \sum_{i<j}^{k}\|\mu_i-\mu_j\|^2}{\sqrt{\mathrm{Var}(T_{HB})}} \xrightarrow{D} N(0,1).
\]
To perform the test, a ratio-consistent estimator of V ar(THB ) for the MANOVA test is proposed in the
paper.
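A sketch of $T_{HB}$ computed from its first expression above (the list-of-arrays interface is an illustrative choice). For $k = 2$ it agrees exactly with Chen and Qin's statistic, as the second expression guarantees.

```python
import numpy as np

# Sketch of Hu et al.'s MANOVA statistic T_HB (first expression above).
# `samples` is a list of (n_i, p) arrays, one per group.
def t_hb(samples):
    k = len(samples)
    xbars = [X.mean(axis=0) for X in samples]
    between = sum(float((xbars[i] - xbars[j]) @ (xbars[i] - xbars[j]))
                  for i in range(k) for j in range(i + 1, k))
    # np.cov with rowvar=False gives S_i with divisor n_i - 1
    traces = sum(np.trace(np.cov(X, rowvar=False)) / len(X) for X in samples)
    return between - (k - 1) * traces
```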
Cai and Xia [10] also applied their idea to the MANOVA case under the homogeneous covariance assumption, using the following test statistic:
\[
T_{CX} = \max_{1\le i\le p}\left\{\sum_{1\le j<l\le k}\frac{n_j n_l}{n_j+n_l}\cdot\frac{\hat X_{jli}^2}{\hat b_{ii}}\right\},
\]
where $\widehat{\Sigma^{-1}}(\bar X_j - \bar X_l) := (\hat X_{jl1},\dots,\hat X_{jlp})'$; $\widehat{\Sigma^{-1}} := (\hat\omega_{ij})_{p\times p}$ is a consistent estimator, e.g., the constrained $l_1$-minimization estimator of the inverse matrix $\Sigma^{-1}$; and $\hat b_{ii}$ are the diagonal elements of the matrix $\hat B$, which is defined by
\[
\hat B = \frac{1}{n-k}\sum_{i=1}^{k}\sum_{j=1}^{n_i}\widehat{\Sigma^{-1}}(X_{ij}-\bar X_i)(X_{ij}-\bar X_i)'\,\widehat{\Sigma^{-1}},
\]
with $n = \sum_{i=1}^{k} n_i$.
To introduce the theory of Cai and Xia's test, let
\[
Y_i = \frac{1}{\hat\sigma_{ii}}\left(\sqrt{\frac{n_1 n_2}{n_1+n_2}}(\bar X_1-\bar X_2)_i,\ \dots,\ \sqrt{\frac{n_{k-1}n_k}{n_{k-1}+n_k}}(\bar X_{k-1}-\bar X_k)_i\right)'_{\frac{k(k-1)}{2}\times 1},
\]
where $\hat\sigma_{ii}$ is the estimate of the $(i,i)$-entry of the covariance matrix $\Sigma$. Let $\Sigma_Y: \varrho\times\varrho$, $\varrho = \frac{k(k-1)}{2}$, be the covariance matrix of $Y_i$. Let $\lambda_Y^2$ be the largest eigenvalue of $\Sigma_Y$, and let $d$ be the dimension of the eigenspace of $\lambda_Y^2$. Let $\lambda_{Y,i}^2$, $1\le i\le\varrho$, be the eigenvalues of $\Sigma_Y$ arranged in descending order.
Under the null hypothesis $H_0$ and some regularity conditions on the population covariance matrix, for any $x\in\mathbb{R}$, as $\min\{n,p\}\to\infty$,
\[
P_{H_0}\!\left(T_{CX} - 2\lambda_Y^2\log(p) - (d-2)\lambda_Y^2\log\log(p) \le x\right) \to \exp\left(-\Gamma^{-1}\!\left(\frac{d}{2}\right)H(\Sigma)\exp\left(-\frac{x}{2\lambda_Y^2}\right)\right),
\]
where $\Gamma$ is the gamma function, and $H = \prod_{i=d+1}^{\varrho}\left(1 - \frac{\lambda_{Y,i}^2}{\lambda_Y^2}\right)^{-1/2}$. Similar to the two-sample location problem, they also established a theorem to evaluate the consistency of their test: Suppose that $C_0^{-1} \le \lambda_{\min}(\Sigma) \le \lambda_{\max}(\Sigma) \le C_0$ for some constant $C_0 > 1$. If $k_p = \max_{j<l\le k}\sum_{i=1}^{p} I(\mu_j - \mu_l \neq 0) = o(p^r)$ for some $r < 1/4$ and $\max_i \|\delta_i\|^2/\sigma_{ii} \ge 2\sigma^2\beta\log p$ with some $\beta > 1/(\min_i\sigma_{ii}\omega_{ii}) + \varepsilon$ for some constant $\varepsilon > 0$, then, as $p\to\infty$,
\[
P_{H_1}(\phi_\alpha(\Omega) = 1) \to 1,
\]
where $\delta_i = (\mu_{1i}-\mu_{2i},\dots,\mu_{k-1,i}-\mu_{k,i})'$.
2.6  Some related work on the tests of high-dimensional locations
Chen et al. [12] proposed another statistic:
TRHT = X̄′ (S + λI)−1 X̄, for λ > 0,
which is called the regularized Hotelling T 2 test. The idea is to employ the technique of ridge regression
to stabilize the inverse of the sample covariance matrix given in (2.1). Assuming that the underlying
distribution is normally distributed, it is proven that under the null hypothesis, for any λ > 0, as
$p/n_1 \to y \in (0,\infty)$,
\[
\sqrt p\ \frac{nT_{RHT}/p - \dfrac{1-\lambda m(\lambda)}{1 - p(1-\lambda m(\lambda))/n}}{\sqrt{\dfrac{m(\lambda)-\lambda m'(\lambda)}{(1-p/n+p\lambda m(\lambda)/n)^3} - \lambda\,\dfrac{1-\lambda m(\lambda)}{(1-p/n+p\lambda m(\lambda)/n)^4}}} \xrightarrow{D} N(0,1),
\]
where $m(\lambda) = \frac{1}{p}\mathrm{tr}(S+\lambda I)^{-1}$ and $m'(\lambda) = \frac{1}{p}\mathrm{tr}(S+\lambda I)^{-2}$. They also give an asymptotic approximation
method for selecting the tuning parameter $\lambda$ in the regularization. Recently, based on a supervised-learning strategy, Shen and Lin [39] proposed a statistic that selects an optimal subset of features to maximize the asymptotic power of the Hotelling $T^2$ test.
The random projection approach was first proposed by Lopes et al. [29] and was further discussed in later studies [26, 47, 51, 53]. For Gaussian data, the procedure projects high-dimensional data onto random subspaces of relatively low dimension to allow the traditional Hotelling $T^2$ statistic to work well.
This method can be viewed as a two-step procedure. First, a single random projection is drawn, and it
is then used to map the samples from the high-dimensional space to a low-dimensional space. Second,
the Hotelling T 2 test is applied to a new hypothesis-testing problem in the projected space. A decision
is then returned to the original problem by simply rejecting H0 whenever the Hotelling test rejects it in
the projected spaces.
Some other related work on tests of high-dimensional locations can be found in [4, 11, 19, 20, 25, 30, 49],
which we do not discuss at length in this paper.
3  NTM on covariance matrices

3.1  One-sample scatter test
The standard test for scatter is to test the hypothesis $H_0: \Sigma = \Sigma_0$ vs $H_1: \Sigma \neq \Sigma_0$. Because $\Sigma_0$ is known, one can multiply the data set by $\Sigma_0^{-1/2}$ and then change the test to the simpler hypothesis $H_0: \Sigma = I_p$. The classical test for this hypothesis is the well-known likelihood ratio test, which can be found in any standard textbook, such as Anderson [1]. The likelihood ratio test statistic is given by
\[
T_{LR} = \mathrm{tr}S - \log\det(S) - p.
\]
When p is fixed, the test based on TLR has many optimalities, such as unbiasedness, consistency, and
being invariant under affine transformation. However, similar to the Hotelling T 2 test, it has a fatal
defect in that it is not well defined when p is larger than n − 1. When p is large but smaller than n − 1,
the null distribution is not simple to use, even under normality. The popularly used option is the Wilks
theorem. However, when p is large, the Wilks theorem introduces a very serious error to the test because
its size tends to 1 as p tends to infinity. A correction to the likelihood ratio test based on random matrix
theory can be found in [2]. However, when p is large, especially when p/n is close to 1, we believe that
the asymptotic power will be low, much as occurs for the T 2 test. The idea of NTM can also be applied
to this hypothesis. Now, we first introduce the work by Ledoit and Wolf [27].
Ledoit and Wolf considered two hypotheses: $H_{01}: \Sigma = I_p$ and $H_{02}: \Sigma = aI_p$ with $a > 0$ unknown. Based on the idea of the Nagao test (see Nagao (1973) [32]), they proposed two test statistics:
\[
V = \frac{1}{p}\mathrm{tr}(S - I_p)^2 \quad\text{and}\quad U = \frac{1}{p}\mathrm{tr}\left(\frac{S}{\frac{1}{p}\mathrm{tr}S} - I_p\right)^{\!2},
\]
which can be viewed from the perspective of NTM as considering $S$ and $\frac{1}{p}\mathrm{tr}S$ to be the estimators of the parameters $\Sigma$ and $a$ in the target functions
\[
h(\Sigma) = \frac{1}{p}\mathrm{tr}(\Sigma - I_p)^2 \quad\text{and}\quad h(\Sigma, a) = \frac{1}{p}\mathrm{tr}\left(\frac{\Sigma}{a} - I_p\right)^{\!2}, \qquad (3.1)
\]
respectively. Note that under the null hypothesis, $a = \frac{1}{p}\mathrm{tr}\Sigma$. They studied the asymptotic properties of $U$ and $V$ in the high-dimensional setting where $p/n \to c \in (0,\infty)$ and found that $U$, for the hypothesis of sphericity, is robust against large $p$, even larger than $n$. However, because $V$ is not consistent against every alternative, they proposed a new test statistic:
\[
W = \frac{1}{p}\mathrm{tr}(S - I_p)^2 - \frac{p}{n}\left(\frac{1}{p}\mathrm{tr}S\right)^{\!2} + \frac{p}{n}.
\]
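The three statistics are straightforward to compute from a sample covariance matrix; a minimal sketch (the function name is illustrative):

```python
import numpy as np

# Sketch of the Ledoit-Wolf statistics V, U and W for a p x p sample
# covariance matrix S and sample size n.
def ledoit_wolf_stats(S, n):
    p = len(S)
    I = np.eye(p)
    a = np.trace(S) / p                       # estimator of the scale a
    V = np.trace((S - I) @ (S - I)) / p
    U = np.trace((S / a - I) @ (S / a - I)) / p
    W = V - (p / n) * a ** 2 + p / n
    return V, U, W
```

Note that $U$ depends on $S$ only through $S/\frac{1}{p}\mathrm{tr}S$, so it is invariant under rescaling of $S$; this is why it suits the sphericity hypothesis $H_{02}$.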
Under normality and the assumptions that
\[
\frac{1}{p}\mathrm{tr}(\Sigma) = \alpha, \qquad \frac{1}{p}\mathrm{tr}(\Sigma - \alpha I)^2 = \delta^2, \qquad \frac{1}{p}\mathrm{tr}(\Sigma^j) \to \nu_j < \infty \ \text{ for } j = 3,4,
\]
they proved the following:
(i) The law of large numbers:
\[
\frac{1}{p}\mathrm{tr}(S) \xrightarrow{P} \alpha, \qquad \frac{1}{p}\mathrm{tr}(S^2) \xrightarrow{P} (1+c)\alpha^2 + \delta^2;
\]
(ii) The CLT: if $\delta = 0$,
\[
n\begin{pmatrix} \frac{1}{p}\mathrm{tr}(S) - \alpha \\[1ex] \frac{1}{p}\mathrm{tr}(S^2) - \frac{n+p+1}{n}\alpha^2 \end{pmatrix}
\xrightarrow{D} N\!\left(\begin{pmatrix}0\\0\end{pmatrix},\ \begin{pmatrix} \frac{2\alpha^2}{c} & 4\left(1+\frac{1}{c}\right)\alpha^3 \\[1ex] 4\left(1+\frac{1}{c}\right)\alpha^3 & 4\left(\frac{2}{c}+5+2c\right)\alpha^4 \end{pmatrix}\right).
\]
Based on these results, they derived that $nU - p \xrightarrow{D} N(1,4)$. The inconsistency of the test based on $V$ can be seen from the following facts. When $p$ is fixed, by the law of large numbers, we have $S \to \Sigma = I_p$, and hence, $V = \frac{1}{p}\mathrm{tr}(S-I_p)^2 \xrightarrow{P} 0$. However, when $p/n \to c > 0$, we have
\[
V = \frac{1}{p}\mathrm{tr}(S^2) - \frac{2}{p}\mathrm{tr}(S) + 1 \xrightarrow{P} (1+c)\alpha^2 + \delta^2 - 2\alpha + 1 = c\alpha^2 + (\alpha-1)^2 + \delta^2.
\]
Because the target function is $\frac{1}{p}\mathrm{tr}(\Sigma - I_p)^2 = (\alpha-1)^2 + \delta^2$, the null hypothesis can be regarded as $(\alpha-1)^2 + \delta^2 = 0$, and the alternative can be considered as $(\alpha-1)^2 + \delta^2 > 0$. However, the limit of $V$ has one more term, $c\alpha^2$, which is positive. Therefore, the test $V$ is not consistent. In fact, it is easy to construct a counterexample based on this limit: set
\[
c\alpha^2 + (1-\alpha)^2 + \delta^2 = c.
\]
When $\delta = 0$, the solution to the equation above is $\alpha = \frac{1-c}{1+c}$. Accordingly, the limit of $V$ is the same for the null $\alpha = 1$ and the alternative $\alpha = \frac{1-c}{1+c}$.
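The counterexample can be checked numerically: with $\delta = 0$, the limit of $V$ takes the same value $c$ at the null $\alpha = 1$ and at the alternative $\alpha = (1-c)/(1+c)$.

```python
# Numeric check of the counterexample above: the limit of V is
# c*alpha^2 + (alpha - 1)^2 + delta^2.
def v_limit(alpha, delta2, c):
    return c * alpha ** 2 + (alpha - 1) ** 2 + delta2

c = 0.5
alpha_alt = (1 - c) / (1 + c)
# both the null (alpha = 1) and the alternative give the limit c
print(v_limit(1.0, 0.0, c), v_limit(alpha_alt, 0.0, c))
```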
For $W$, we have
\[
W \xrightarrow{P} c\alpha^2 + (\alpha-1)^2 + \delta^2 - c\alpha^2 + c = c + (\alpha-1)^2 + \delta^2.
\]
When $p$ is fixed, they proved that as $n\to\infty$,
\[
\frac{np}{2}W \xrightarrow{D} \chi^2_{p(p+1)/2},
\]
or equivalently,
\[
nW - p \xrightarrow{D} \frac{2}{p}\chi^2_{p(p+1)/2} - p.
\]
When $p\to\infty$, the right-hand side of the above tends to $N(1,4)$, which is the same as the limit when $p/n\to c$. This behavior shows that the test based on $W$ is robust against $p$ increasing. Chen et al. [15] extended the work to the case without normality assumptions.
Now, the target functions (3.1) can be rewritten as
\[
h_1(\Sigma) = \frac{1}{p}\mathrm{tr}(\Sigma - I_p)^2 = \frac{1}{p}\mathrm{tr}\Sigma^2 - \frac{2}{p}\mathrm{tr}\Sigma + 1
\]
and
\[
h_2(a,\Sigma) = \frac{1}{p}\mathrm{tr}\left(\frac{\Sigma}{a} - I_p\right)^{\!2} = \frac{\frac{1}{p}\mathrm{tr}\Sigma^2 - \left(\frac{1}{p}\mathrm{tr}\Sigma\right)^2}{\left(\frac{1}{p}\mathrm{tr}\Sigma\right)^2}.
\]
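The rewritten form of $h_2$ can be verified numerically on an arbitrary positive definite $\Sigma$; the matrix below is an arbitrary example, not from the text.

```python
import numpy as np

# Numeric check of the rewritten form of h_2: with a = tr(Sigma)/p,
# (1/p) tr((Sigma/a - I)^2) equals ((1/p) tr(Sigma^2) - a^2) / a^2.
rng = np.random.default_rng(4)
A = rng.standard_normal((5, 5))
Sigma = A @ A.T + 5 * np.eye(5)   # an arbitrary positive definite matrix
p = Sigma.shape[0]
a = np.trace(Sigma) / p
lhs = np.trace((Sigma / a - np.eye(p)) @ (Sigma / a - np.eye(p))) / p
rhs = (np.trace(Sigma @ Sigma) / p - a ** 2) / a ** 2
print(abs(lhs - rhs))
```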
Then, under the normality assumption, Srivastava [40] gave unbiased and consistent estimators of the parameters in the previous target functions, as follows:
\[
\frac{1}{p}\widehat{\mathrm{tr}\Sigma} = \frac{1}{p}\mathrm{tr}S \quad\text{and}\quad \frac{1}{p}\widehat{\mathrm{tr}\Sigma^2} = \frac{(n-1)^2}{p(n-2)(n+1)}\left(\mathrm{tr}S^2 - \frac{1}{n-1}(\mathrm{tr}S)^2\right).
\]
Based on these estimators, he proposed the test statistics
\[
T_{S1} = \frac{1}{p}\widehat{\mathrm{tr}\Sigma^2} - \frac{2}{p}\widehat{\mathrm{tr}\Sigma} + 1 \quad\text{and}\quad T_{S2} = \frac{\frac{1}{p}\widehat{\mathrm{tr}\Sigma^2} - \left(\frac{1}{p}\widehat{\mathrm{tr}\Sigma}\right)^2}{\left(\frac{1}{p}\widehat{\mathrm{tr}\Sigma}\right)^2},
\]
and proved that under the assumption $n \asymp p^\delta$, $0 < \delta \le 1$, as $\{n,p\}\to\infty$, we have asymptotically
\[
\frac{n}{2}\left(T_{S1} - \frac{1}{p}\mathrm{tr}(\Sigma - I_p)^2\right) \sim N(0,\tau_1^2)
\]
and
\[
\frac{n}{2}\left(T_{S2} - \frac{\frac{1}{p}\mathrm{tr}\Sigma^2 - \left(\frac{1}{p}\mathrm{tr}\Sigma\right)^2}{\left(\frac{1}{p}\mathrm{tr}\Sigma\right)^2}\right) \sim N(0,\tau_2^2),
\]
where $\tau_1^2 = \frac{2n}{p}(\alpha_2 - 2\alpha_3 + \alpha_4) + \alpha_2^2$, $\tau_2^2 = \frac{2n(\alpha_4\alpha_1^2 - 2\alpha_1\alpha_2\alpha_3 + \alpha_2^3)}{p\alpha_1^6} + \frac{\alpha_2^2}{\alpha_1^4}$, and $\alpha_i = \frac{1}{p}\mathrm{tr}\Sigma^i$. Thus, under the null hypothesis, one can easily obtain
\[
\frac{n}{2}T_{S1} \xrightarrow{D} N(0,1) \quad\text{and}\quad \frac{n}{2}T_{S2} \xrightarrow{D} N(0,1).
\]
Later, Srivastava and Yanagihara [45] and Srivastava et al. [46] extended this work to the cases of two or
more population covariance matrices and without normality assumptions, respectively. Furthermore, Cai
and Ma [9] showed that TS1 is rate-optimal over this asymptotic regime, and Zhang et al. [54] proposed
the empirical likelihood ratio test for this problem.
3.2  Li and Chen's test based on unbiased estimation of the target function
Li and Chen (2012) [28] considered the two-sample scatter problem, that is, testing the hypothesis $H_0: \Sigma_1 = \Sigma_2$. They chose the target function $h(\Sigma_1,\Sigma_2) = \mathrm{tr}(\Sigma_1-\Sigma_2)^2$ and selected the test statistic as the unbiased estimator of $h(\Sigma_1,\Sigma_2)$:
\[
T_{LC} = A_{n_1} + A_{n_2} - 2C_{n_1 n_2},
\]
where
\[
A_{n_h} = \frac{1}{(n_h)_2}\sum_{i\neq j}(X_{hi}'X_{hj})^2 - \frac{2}{(n_h)_3}\sum_{\substack{i,j,k\\ \text{distinct}}} X_{hi}'X_{hj}X_{hj}'X_{hk} + \frac{1}{(n_h)_4}\sum_{\substack{i,j,k,l\\ \text{distinct}}} X_{hi}'X_{hj}X_{hk}'X_{hl},
\]
\[
C_{n_1 n_2} = \frac{1}{n_1 n_2}\sum_{i,j}(X_{1i}'X_{2j})^2 - \frac{1}{n_2(n_1)_2}\sum_{j}\sum_{i\neq k} X_{1i}'X_{2j}X_{2j}'X_{1k} - \frac{1}{n_1(n_2)_2}\sum_{j}\sum_{i\neq k} X_{2i}'X_{1j}X_{1j}'X_{2k} + \frac{1}{(n_1)_2(n_2)_2}\sum_{i\neq j}\sum_{k\neq l} X_{1i}'X_{2j}X_{1k}'X_{2l}.
\]
Under the conditions A1 and A2 and, for any $i,j,k,l \in \{1,2\}$,
\[
\mathrm{tr}(\Sigma_i\Sigma_j\Sigma_k\Sigma_l) = o\!\left(\mathrm{tr}(\Sigma_i\Sigma_j)\,\mathrm{tr}(\Sigma_k\Sigma_l)\right),
\]
we have
\[
\mathrm{Var}(T_{LC}) = \sum_{i=1}^{2}\left[\frac{4}{n_i^2}\mathrm{tr}^2\Sigma_i^2 + \frac{8}{n_i}\mathrm{tr}\!\left(\Sigma_i^2 - \Sigma_1\Sigma_2\right)^{2} + \frac{4}{n_i}\mathrm{tr}\!\left(\Gamma_i'(\Sigma_1-\Sigma_2)\Gamma_i \circ \Gamma_i'(\Sigma_1-\Sigma_2)\Gamma_i\right)\right] + \frac{8}{n_1 n_2}\mathrm{tr}^2(\Sigma_1\Sigma_2),
\]
where $A\circ B = (a_{ij}b_{ij})$ denotes the Hadamard product of the matrices $A$ and $B$.
Li and Chen [28] proved that
\[
\frac{T_{LC} - \mathrm{tr}(\Sigma_1-\Sigma_2)^2}{\sqrt{\mathrm{Var}(T_{LC})}} \xrightarrow{D} N(0,1).
\]
Li and Chen selected $\widehat{\sqrt{\mathrm{Var}}}(T_{LC}) := \frac{2}{n_1}A_{n_1} + \frac{2}{n_2}A_{n_2}$, which is a ratio-consistent estimator of $\sqrt{\mathrm{Var}(T_{LC})}$ under $H_0$. Therefore, the test rejects $H_0$ if
\[
T_{LC} > \xi_\alpha\left(\frac{2}{n_1}A_{n_1} + \frac{2}{n_2}A_{n_2}\right).
\]
Remark 7. In [28], Li and Chen also considered the test for the covariance between two sub-vectors,
i.e., testing the hypothesis H0 : Σ1,12 = Σ2,12 , where Σi,12 is the off-diagonal blocks of Σi . As the test
statistic is similar, we omit the details here.
3.3  Cai et al.'s maximum difference test
Cai et al. [6] also applied their maximum-element approach to the difference of the two sample covariance matrices to test the hypothesis of the equality of the two population covariances. They defined their test statistic as follows:
\[
M_n = \max_{1\le i\le j\le p} M_{ij} = \max_{1\le i\le j\le p}\frac{(s_{ij1} - s_{ij2})^2}{\hat\theta_{ij1}/n_1 + \hat\theta_{ij2}/n_2},
\]
where $s_{ijl}$ is the $(i,j)$-th element of the sample covariance matrix of the $l$-th sample, and
\[
\hat\theta_{ijl} = \frac{1}{n_l}\sum_{k=1}^{n_l}\left[(X_{kil} - \bar X_{il})(X_{kjl} - \bar X_{jl}) - s_{ijl}\right]^2,
\]
for $1\le i\le j\le p$ and $l = 1,2$. Here, $\hat\theta_{ijl}$ can be considered an estimator of the variance of $s_{ijl}$. Then, they
defined the test by
\[
\phi_\alpha = I(M_n > q_\alpha + 4\log p - \log\log p),
\]
where $q_\alpha$ is the upper $\alpha$ quantile of the type I extreme value distribution with the c.d.f.
\[
\exp\left(-\frac{1}{\sqrt{8\pi}}\exp\left(-\frac{x}{2}\right)\right),
\]
and therefore
\[
q_\alpha = -\log(8\pi) - 2\log\log(1-\alpha)^{-1}.
\]
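One can verify directly that the stated $q_\alpha$ is the upper-$\alpha$ quantile of this extreme value distribution:

```python
import math

# Check that q_alpha = -log(8*pi) - 2*log(log((1 - alpha)^{-1})) satisfies
# F(q_alpha) = 1 - alpha for F(x) = exp(-exp(-x/2)/sqrt(8*pi)).
def extreme_cdf(x):
    return math.exp(-math.exp(-x / 2) / math.sqrt(8 * math.pi))

def q_alpha(alpha):
    return -math.log(8 * math.pi) - 2 * math.log(math.log(1 / (1 - alpha)))

print(extreme_cdf(q_alpha(0.05)))  # 0.95 up to rounding
```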
Under sparse conditions on the difference of the population covariances $\Sigma_1 - \Sigma_2$ and certain distributional conditions, they proved that for any $t\in\mathbb{R}$,
\[
P(M_n - 4\log p + \log\log p \le t) \to \exp\left(-\frac{1}{\sqrt{8\pi}}\exp\left(-\frac{t}{2}\right)\right).
\]
As expected, Cai et al’s test is powerful when the difference of the two population covariances is sparse,
and it thus compensates somewhat for Li and Chen’s test.
3.4  Testing the structure of the covariance matrix
In this subsection, we consider another important testing problem, namely, testing the structure of the covariance matrix. First, we review the hypothesis that the covariance matrix $\Sigma$ is banded, that is, that the variables have nonzero correlations only up to a certain lag $\tau \ge 1$. To elaborate, we denote $\Sigma = (\sigma_{ij})_{p\times p}$ and consider the following test hypotheses:
\[
H_0: \sigma_{ij} = 0 \ \text{ for all } |i-j| > \tau \quad\text{vs}\quad H_1: \sigma_{ij} \neq 0 \ \text{ for some } |i-j| > \tau, \qquad (3.2)
\]
or, equivalently,
\[
H_0: \Sigma = B_\tau(\Sigma) \quad\text{vs}\quad H_1: \Sigma \neq B_\tau(\Sigma),
\]
where $B_\tau(\Sigma) = (\sigma_{ij}I(|i-j|\le\tau))$. From the perspective of NTMs, one can also choose the target functions by the Euclidean distance and the Kolmogorov distance, which are the main concepts of the tests proposed by Qiu and Chen [36] and Cai and Jiang [7], respectively.
For $\tau + 1 \le q \le p-1$ and $\mu = 0$, let
\[
\widehat{\sigma^2_{l,l+q}} = \frac{1}{n(n-1)}\sum_{i\neq j} X_{li}X_{lj}X_{(l+q)i}X_{(l+q)j} - \frac{2}{n(n-1)(n-2)}\sum_{\substack{i,j,k\\ \text{distinct}}} X_{li}X_{(l+q)j}X_{lk}X_{(l+q)k} + \frac{1}{n(n-1)(n-2)(n-3)}\sum_{\substack{i,j,k,m\\ \text{distinct}}} X_{li}X_{(l+q)j}X_{lk}X_{(l+q)m}.
\]
By denoting $T^\tau_{QC} := 2\sum_{q=\tau+1}^{p-1}\sum_{l=1}^{p-q}\widehat{\sigma^2_{l,l+q}}$, one can easily check that $T^\tau_{QC}$ is an unbiased estimator of $\mathrm{tr}(\Sigma - B_\tau(\Sigma))^2$. Under the assumptions $\tau = o(p^{1/4})$, (A1), (A2) and certain conditions on the eigenvalues of $\Sigma$, Qiu and Chen [36] showed that under the null hypothesis,
\[
\frac{nT^\tau_{QC}}{V_{n\tau}} \xrightarrow{D} N(0,4),
\]
and the power function asymptotically satisfies
\[
\beta_{QC} = P\!\left(\frac{nT^\tau_{QC}}{V_{n\tau}} > 2\xi_\alpha \,\Big|\, \Sigma \neq B_\tau(\Sigma)\right) \simeq \Phi\!\left(\delta_{n\tau} - \frac{2\xi_\alpha V_{n\tau}}{n v_{n\tau}}\right) \ge \Phi\!\left(\delta_{n\tau} - \frac{\xi_\alpha V_{n\tau}}{\mathrm{tr}(\Sigma^2)}\right),
\]
where $V_{n\tau} = \sum_{l=1}^{p}\widehat{\sigma^2_{ll}} + 2\sum_{q=1}^{\tau}\sum_{l=1}^{p-q}\widehat{\sigma^2_{l,l+q}}$, $v_{n\tau}^2 = 4n^{-2}\mathrm{tr}^2(\Sigma^2) + 8n^{-1}\mathrm{tr}(\Sigma(\Sigma-B_\tau(\Sigma)))^2 + 4n^{-1}\Delta\,\mathrm{tr}[(\Gamma'(\Sigma-B_\tau(\Sigma))\Gamma)\circ(\Gamma'(\Sigma-B_\tau(\Sigma))\Gamma)]$, and $\delta_{n\tau} = \mathrm{tr}(\Sigma-B_\tau(\Sigma))^2/v_{n\tau}$.
We can also rewrite the test hypothesis (3.2) as
\[
H_0: \rho_{ij} = 0 \ \text{ for all } |i-j| > \tau \quad\text{vs}\quad H_1: \rho_{ij} \neq 0 \ \text{ for some } |i-j| > \tau,
\]
where $\rho_{ij}$ is the population correlation coefficient between the two random variables $X_{1i}$ and $X_{1j}$. Cai and Jiang [7] proposed a test procedure based on the largest magnitude of the off-diagonal entries of the sample correlation matrix:
\[
T^\tau_{CL} = \max_{|i-j|>\tau}|\hat\rho_{ij}|,
\]
where $\hat\rho_{ij}$ is the sample correlation coefficient. They showed that under the assumptions $\log p = o(n^{1/3}) \to \infty$ and $\tau = o(p^\epsilon)$ with $\epsilon > 0$, for any $t\in\mathbb{R}$,
\[
P\!\left(n(T^\tau_{CL})^2 - 4\log p + \log\log p \le t\right) \to \exp\left(-\frac{1}{\sqrt{8\pi}}\exp\left(-\frac{t}{2}\right)\right).
\]
By implication, one can reject the null hypothesis whenever
\[
(T^\tau_{CL})^2 > n^{-1}\left[4\log p - \log\log p - \log(8\pi) - 2\log\log(1-\alpha)^{-1}\right],
\]
with asymptotic size $\alpha$.
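A sketch of the resulting procedure, computing $T^\tau_{CL}$ from the sample correlation matrix and applying the rejection rule above; the function names are illustrative.

```python
import numpy as np

# Sketch of Cai and Jiang's banded-structure test: the largest absolute
# sample correlation among pairs more than tau apart, with the rejection
# threshold stated above.
def tcl_tau(X, tau):
    R = np.corrcoef(X, rowvar=False)
    p = R.shape[0]
    i, j = np.indices((p, p))
    return np.abs(R[np.abs(i - j) > tau]).max()

def reject_banded(X, tau, alpha):
    n, p = X.shape
    cutoff = (4 * np.log(p) - np.log(np.log(p)) - np.log(8 * np.pi)
              - 2 * np.log(np.log(1 / (1 - alpha)))) / n
    return tcl_tau(X, tau) ** 2 > cutoff
```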
Remark 8. If $\tau = 1$ and under the normality assumption, the test hypothesis (3.2) is also known as testing for complete independence, which was first considered by Schott in 2005 [37] for a high-dimensional random vector, using the Euclidean distance of the sample correlation matrix.

Remark 9. Peng et al. [35] improved the power of the test $T^\tau_{QC}$ by employing the banding estimator of the covariance matrix. Zhang et al. [54] also gave an empirical likelihood ratio test procedure for testing whether the population covariance matrix has a banded structure.
4  Conclusions and Comments
All of the NTM procedures show that most classical procedures in multivariate analysis are less powerful in some parameter settings when the dimension of the data is large. Thus, it is necessary to develop new procedures to improve the classical ones. However, all of the NTM procedures developed to date require additional conditions on the unknown parameters to guarantee the optimality of the new procedures; e.g., all procedures based on asymptotically normal estimators require that the eigenstructure of the population covariance matrix not be too irregular, and all NTMs based on the Kolmogorov distance require the sparseness of the unknown parameters. Therefore, there is a strong need to develop data-driven procedures that are optimal in most cases.
Acknowledgements
The authors would like to thank the referees for their constructive comments, which
led to a substantial improvement of the paper. J. Hu was partially supported by the National Natural Science
Foundation of China (Grant No. 11301063), Science and Technology Development Foundation of Jilin (Grant No.
20160520174JH), and Science and Technology Foundation of Jilin during the “13th Five-Year Plan”; and Z. D.
Bai was partially supported by the National Natural Science Foundation of China (Grant No. 11571067).
References
1 T. Anderson. An introduction to multivariate statistical analysis. Third Edition. Wiley New York, 2003.
2 Z. D. Bai, D. D. Jiang, J. F. Yao, and S. R. Zheng. Corrections to LRT on large-dimensional covariance matrix by
RMT. The Annals of Statistics, 37(6B):3822–3840, Dec. 2009.
3 Z. D. Bai and H. Saranadasa. Effect of high dimension: by an example of a two sample problem. Statistica Sinica,
6:311–329, 1996.
4 M. Biswas and A. K. Ghosh. A nonparametric two-sample test applicable to high dimensional data. Journal of
Multivariate Analysis, 123:160–171, 2014.
5 T. Cai, W. Liu, and X. Luo. A Constrained l1 Minimization Approach to Sparse Precision Matrix Estimation. Journal
of the American Statistical Association, 106(494):594–607, 2011.
6 T. Cai, W. Liu, and Y. Xia. Two-Sample Covariance Matrix Testing and Support Recovery in High-Dimensional and
Sparse Settings. Journal of the American Statistical Association, 108(501):265–277, 2013.
7 T. T. Cai and T. Jiang. Limiting laws of coherence of random matrices with applications to testing covariance structure
and construction of compressed sensing matrices. Annals of Statistics, 39(3):1496–1525, 2011.
8 T. T. Cai, W. Liu, and Y. Xia. Two-sample test of high dimensional means under dependence. Journal of the Royal
Statistical Society: Series B, 2014.
9 T. T. Cai and Z. Ma. Optimal hypothesis testing for high dimensional covariance matrices. Bernoulli, 19(5B):2359–
2388, 2013.
10 T. T. Cai and Y. Xia. High-dimensional sparse MANOVA. Journal of Multivariate Analysis, 131:174–196, 2014.
11 A. Chakraborty and P. Chaudhuri. A Wilcoxon-Mann-Whitney-type test for infinite-dimensional data. Biometrika, 102(2):239–246, 2015.
12 L. Chen, D. Paul, R. Prentice, and P. Wang. A regularized Hotelling's T² test for pathway analysis in proteomic studies. Journal of the American Statistical Association, 106(496):1345–1360, 2011.
13 S. X. Chen, J. Li, and P. Zhong. Two-Sample Tests for High Dimensional Means with Thresholding and Data
Transformation. arXiv preprint arXiv:1410.2848, pages 1–44, 2014.
14 S. X. Chen and Y. L. Qin. A two-sample test for high-dimensional data with applications to gene-set testing. The
Annals of Statistics, 38(2):808–835, 2010.
15 S. X. Chen, L. X. Zhang, and P. S. Zhong. Tests for High-Dimensional Covariance Matrices. Journal of The American
Statistical Association, 105(490):810–819, June 2010.
16 A. P. Dempster. A high dimensional two sample significance test. The Annals of Mathematical Statistics, 29(1):995–
1010, 1958.
17 A. P. Dempster. A significance test for the separation of two highly multivariate small samples. Biometrics, 16(1):41,
Mar. 1960.
18 K. Dong, H. Pang, T. Tong, and M. G. Genton. Shrinkage-based diagonal Hotelling's T² tests for high-dimensional small sample size data. Journal of Multivariate Analysis, 143:127–142, 2016.
19 L. Feng. Scalar-Invariant Test for High-Dimensional Regression Coefficients. (2005):1–19.
20 L. Feng and F. Sun. A note on high-dimensional two-sample test. Statistics & Probability Letters, 105:29–36, 2015.
21 Y. Fujikoshi. Multivariate analysis for the case when the dimension is large compared to the sample size. Journal of
the Korean Statistical Society, 33(1):1–24, 2004.
22 Y. Fujikoshi, T. Himeno, and H. Wakaki. Asymptotic results of a high dimensional MANOVA test and power
comparison when the dimension is large compared to the sample size. Journal of the Japan Statistical Society,
34(1):19–26, 2004.
23 H. Hotelling. The generalization of student’s ratio. The Annals of Mathematical Statistics, 2(3):360–378, 1931.
24 J. Hu, Z. Bai, C. Wang, and W. Wang. On testing the equality of high dimensional mean vectors with unequal
covariance matrices. Annals of the Institute of Statistical Mathematics, pages 1–20, 2014.
25 M. Hyodo and T. Nishiyama. A one-sample location test based on weighted averaging of two test statistics in
high-dimensional data. 2014.
26 L. Jacob, P. Neuvial, and S. Dudoit. DEGraph : differential expression testing for gene networks. 2012.
27 O. Ledoit and M. Wolf. Some hypothesis tests for the covariance matrix when the dimension is large compared to the
sample size. The Annals of Statistics, 30(4):1081–1102, 2002.
28 J. Li and S. X. Chen. Two sample tests for high-dimensional covariance matrices. The Annals of Statistics, 40(2):908–
940, Apr. 2012.
29 M. E. Lopes, L. Jacob, and M. J. Wainwright. A More Powerful Two-Sample Test in High Dimensions using Random
Projection. Advances in Neural Information Processing Systems, 1(2):1206–1214, 2011.
30 P. K. Mondal, M. Biswas, and A. K. Ghosh. On high dimensional two-sample tests based on nearest neighbors.
Journal of Multivariate Analysis, 141:168–178, 2015.
31 R. J. Muirhead. Aspects of multivariate statistical theory, volume 42. Wiley, 1982.
32 H. Nagao. On some test criteria for covariance matrix. The Annals of Statistics, 1(4):700–709, 1973.
33 G. Pan and W. Zhou. Central limit theorem for Hotelling's T² statistic under large dimension. The Annals of Applied Probability, 21(5):1860–1910, 2011.
34 J. Park and D. N. Ayyala. A test for the mean vector in large dimension and small samples. Journal of Statistical
Planning and Inference, 143(5):929–943, 2013.
35 L. Peng, S. X. Chen, and W. Zhou. More powerful tests for sparse high-dimensional covariances matrices. Journal of
Multivariate Analysis, 149:124–143, 2016.
36 Y. Qiu and S. X. Chen. Test for bandedness of high-dimensional covariance matrices and bandwidth estimation.
Annals of Statistics, 40(3):1285–1314, 2012.
37 J. R. Schott. Testing for complete independence in high dimensions. Biometrika, 92(4):951–956, 2005.
38 J. R. Schott. Some high-dimensional tests for a one-way MANOVA. Journal of Multivariate Analysis, 98(9):1825–1839,
Oct. 2007.
39 Y. Shen and Z. Lin. An adaptive test for the mean vector in large-p-small-n problems. Computational Statistics &
Data Analysis, 89:25–38, 2015.
40 M. S. Srivastava. Some Tests Concerning the Covariance Matrix in High Dimensional Data. Journal of the Japan
Statistical Society, 35(2):251–272, 2005.
41 M. S. Srivastava. A test for the mean vector with fewer observations than the dimension under non-normality. Journal
of Multivariate Analysis, 100(3):518–532, Mar. 2009.
42 M. S. Srivastava and M. Du. A test for the mean vector with fewer observations than the dimension. Journal of
Multivariate Analysis, 99(3):386–402, Mar. 2008.
43 M. S. Srivastava, S. Katayama, and Y. Kano. A two sample test in high dimensional data. Journal of Multivariate
Analysis, 114:349–358, Feb. 2013.
44 M. S. Srivastava and T. Kubokawa. Tests for multivariate analysis of variance in high dimension under non-normality.
Journal of Multivariate Analysis, 115:204–216, 2013.
45 M. S. Srivastava and H. Yanagihara. Testing the equality of several covariance matrices with fewer observations than
the dimension. Journal of Multivariate Analysis, 101(6):1319–1329, July 2010.
46 M. S. Srivastava, H. Yanagihara, and T. Kubokawa. Tests for covariance matrices in high dimension with less sample
size. Journal of Multivariate Analysis, 130:289–309, 2014.
47 M. Thulin. A high-dimensional two-sample test for the mean using random subspaces. Computational Statistics &
Data Analysis, 74:26–38, 2014.
48 T. Tonda and Y. Fujikoshi. Asymptotic expansion of the null distribution of LR statistic for multivariate linear hypothesis when the dimension is large. Communications in Statistics - Theory and Methods, 33(5):1205–1220, Jan. 2004.
49 A. Touloumis, S. Tavaré, and J. C. Marioni. Testing the mean matrix in high-dimensional transposable data. Biometrics, 71(1):157–166, 2015.
50 L. Wang, B. Peng, and R. Li. A high-dimensional nonparametric multivariate test for mean vector. Journal of the American Statistical Association, 1459(June 2015):00–00, 2015.
51 S. Wei, C. Lee, L. Wichers, G. Li, and J. Marron. Direction-projection-permutation for high dimensional hypothesis tests. arXiv preprint arXiv:1304.0796, pages 1–29, 2013.
52 Y. Wu, M. G. Genton, and L. A. Stefanski. A multivariate two-sample mean test for small sample size and missing data. Biometrics, 62(3):877–885, 2006.
53 J. Zhang and M. Pan. A high-dimension two-sample test for the mean using cluster. Computational Statistics and Data Analysis, 97:87–97, 2016.
54 R. Zhang, L. Peng, and R. Wang. Tests for covariance matrix with fixed or divergent dimension. The Annals of Statistics, 41(4):2075–2096, 2013.
PREPRINT VERSION
An agent-driven semantical identifier using radial basis neural
networks and reinforcement learning
C. Napoli, G. Pappalardo, and E. Tramontana
arXiv:1409.8484v1 [] 30 Sep 2014
PUBLISHED ON: Proceedings of the XV Workshop ”Dagli Oggetti agli Agenti”
BIBITEX:
@inproceedings{Napoli2014Anagent,
year={2014},
issn={1613-0073},
url={http://ceur-ws.org/Vol-1260/},
booktitle={Proceedings of the XV Workshop ”Dagli Oggetti agli Agenti”},
title={An agent-driven semantical identifier using radial basis neural networks and reinforcement learning},
publisher={CEUR-WS},
volume={1260},
author={Napoli, Christian and Pappalardo, Giuseppe and Tramontana, Emiliano},
}
Published version copyright © 2014
UPLOADED UNDER SELF-ARCHIVING POLICIES
An agent-driven semantical identifier using radial
basis neural networks and reinforcement learning
Christian Napoli, Giuseppe Pappalardo, and Emiliano Tramontana
Department of Mathematics and Informatics
University of Catania, Viale A. Doria 6, 95125 Catania, Italy
{napoli, pappalardo, tramontana}@dmi.unict.it
Abstract—Due to the huge availability of documents in digital form, and the rising possibility of deception bound to the nature of digital documents and the way they are spread, the authorship attribution problem has constantly increased in relevance. Nowadays, authorship attribution, for both information retrieval and analysis, has gained great importance in the context of security, trust and copyright preservation.
This work proposes an innovative multi-agent driven machine learning technique that has been developed for authorship attribution. By means of a preprocessing step for word-grouping and time-period related analysis of the common lexicon, we determine a bias reference level for the recurrence frequency of the words within the analysed texts, and then train a Radial Basis Probabilistic Neural Network (RBPNN)-based classifier to identify the correct author. The main advantage of the proposed approach lies in the generality of the semantic analysis, which can be applied to different contexts and lexical domains, without requiring any modification. Moreover, the proposed system is able to incorporate an external input, meant to tune the classifier, and then self-adjust by means of continuous learning reinforcement.
I. INTRODUCTION
Nowadays, the automatic attribution of a text to an author,
assisting both information retrieval and analysis, has become
an important issue, e.g. in the context of security, trust and
copyright preservation. This results from the availability of documents in digital form, the rising deception possibilities bound to the nature of digitally reproducible contents, and the need for new mechanical methods that can organise the constantly increasing amount of digital texts. During the last decade alone, the field of text classification and attribution has undergone new development due to the novel availability of computational intelligence techniques, such as natural language processing, advanced data mining and information retrieval systems, machine learning and artificial intelligence techniques, agent oriented programming, etc. Among such techniques, Computational Intelligence (CI) and
Evolutionary Computation (EC) methods have been largely
used for optimisation and positioning problems [1], [2]. In [3],
agent driven clustering has been used as an advanced solution
for some optimal management problems, whereas in [4] such
problems are solved for mechatronical module controls. Agent
driven artificial intelligence is often used in combination with
advanced data analysis techniques in order to create intelligent
control systems [5], [6] by means of multi resolution analysis [7]. CI and parallel analysis systems have been proposed
in order to support developers, as in [8], [9], [10], [11],
where such a classification and analysis was applied to assist
refactoring in large software systems [12], [13], [14], [15].
Moreover, CI and techniques like neural networks (NNs)
have been used in [16], [17] in order to model electrical
networks and the related controls starting by classification
strategies, as well as for other complex physical systems
by using several kinds of hybrid NN-based approaches [18],
[19], [20], [21]. All the said works use different forms of agent-based modeling and clustering for recognition purposes, and these methods efficiently perform very challenging tasks where other common computational methods failed, had low efficiency or, simply, proved inapplicable due to the complicated models underlying the case study. In general,
agent-driven machine learning has been proven as a promising
field of research for the purpose of text classification, since
it allows building classification rules by means of automatic
learning, taking as a basis a set of known texts and trying to
generalise for unknown ones.
While machine learning and NNs are a very promising field, the effectiveness of such approaches often rests on the correct and precise preprocessing of data, i.e. the definition of semantic categories, affinities and rules used to generate a set of numbers characterising a text sample, to be successively given as input to a classifier. Typical text classification, e.g. by using NNs, takes advantage of topic recognition; however, results are seldom appropriate when it comes to classifying people belonging to the same social group or who are involved in a similar business (e.g. the classification of texts from different scientists in the same field of research, politicians belonging to the same party, or texts authored by different people using the same technical jargon).
In our approach we devise a solution for extracting from the
analysed texts some characteristics that can express the style of
a specific author. Obtaining this kind of information abstraction
is crucial in order to create a precise and correct classification
system. On the other hand, while data abound in the context
of text analysis, a robust classifier should rely on input sets
that are compact enough to be suitable for the training process.
Therefore, some data have to reflect averaged evaluations that
concern some anthropological aspects such as the historical
period, or the ethnicity, etc. This work satisfies the above
conditions of extracting compact data from texts since we
use a preprocessing tool for word-grouping and time-period
related analysis of the common lexicon. Such a tool computes
a bias reference system for the recurrence frequency of the words used in the analysed texts. The main advantage of this choice lies in the generality of the implemented semantical
identifier, which can then be applied to different contexts and lexical domains without requiring any modification.

Fig. 1. A general schema of the data flow through the agents of the developed system.

Moreover,
in order to have continuous updates or complete renewals of
the reference data, a statically trained NN would not suffice
to the purpose of the work. For these reasons, the developed
system is able to self-correct by means of continuous learning
reinforcement. The proposed architecture also diminishes the
human intervention over time thanks to its self-adaption properties. Our solution comprises three main collaborating agents:
the first for preprocessing, i.e. to extract meaningful data from
texts; the second for classification by means of a proper Radial
Basis NN (RBNN); and finally, one for adapting by means of
a feedforward NN.
The rest of this paper is as follows. Section II gives
the details of the implemented preprocessing agent based on
lexicon analysis. Section III describes the proposed classifier
agent based on RBNNs, our introduced modifications and
the structure of the reinforcement learning agent. Section IV
reports on the performed experiments and the related results.
Finally, Section V gives a background of the existing related
works, while Section VI draws our conclusions.
Algorithm 1: Find the group a word belongs to and count occurrences

Start,
Import a speech into Text,
Load dictionary into Words,
Load group database into Groups,
thisWord = Text.get();
while thisWord do
    thisGroup = Words.search(thisWord);
    if !thisGroup then
        Load a different Lexicon;
        if Lexicon.exist(thisWord) then
            Words.update();
            Groups.update();
        else
            break;
        end
    end
    while Text.search(thisWord) do
        Groups.count(thisGroup);
    end
    thisWord = Text.get();
end
Export Words and Groups,
Stop.
II. EXTRACTING SEMANTICS FROM LEXICON

Figure 1 shows the agents of our developed system: a preprocessing agent extracts characteristics from given text parts (see text database in the Figure), according to a known set of words organised into groups (see reference database); a RBPNN agent takes as input the extracted characteristics, properly organised, and performs the identification on new data, after appropriate training. An additional agent, dubbed adaptive critic, shown in Figure 6, dynamically adapts the behaviour of the RBPNN agent when new data are available.

Firstly, the preprocessing agent analyses a text given as input by counting the words that belong to a priori known groups of mutually related words. Such groups contain words that pertain to a given concern, and have been built ad-hoc and according to the semantic relations between words, hence e.g. assisted by the WordNet lexicon (http://wordnet.princeton.edu).

The fundamental steps of the said analysis (see also Algorithm 1) are the following:
1) import a single text file containing the speech;
2) import word groups from a predefined database; the set containing all words from each group is called dictionary;
3) compare each word in the text with the words in the dictionary;
4) if the word exists in the dictionary, the relevant group is returned;
5) if the word has not been found, search the available lexicon;
6) if the word exists in the lexicon, the related group is identified;
7) if the word is unknown, a new lexicon is loaded, and if the word is found there, the dictionary and groups are updated;
8) search all the occurrences of the word in the text;
9) when an occurrence has been found, remove it from the text and increase the group counter.
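The grouping-and-counting steps above can be sketched in a few lines (a simplified stand-in for Algorithm 1; the word groups shown are invented examples, whereas the real system derives them from the reference database and the WordNet lexicon):

```python
from collections import Counter

# Illustrative word groups; the actual system builds these from WordNet.
GROUPS = {
    "science": {"energy", "theory", "experiment"},
    "politics": {"state", "law", "government"},
}

def count_group_occurrences(text):
    """Count how many words of the text fall in each known group."""
    counts = Counter()
    for word in text.lower().split():
        for group, members in GROUPS.items():
            if word in members:
                counts[group] += 1
    return counts

counts = count_group_occurrences("The theory behind the experiment changed the law")
# counts["science"] == 2, counts["politics"] == 1
```

The resulting per-group counts form the compact feature vector that is later fed to the classifier agent.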
Figure 2 shows the UML class diagram for the software
system performing the above analysis. Class Text holds a text
to be analysed; class Words represents the known dictionary,
i.e. all the known words, which are organised into groups given
by class Groups; class Lexicon holds several dictionaries.
III. THE RBPNN CLASSIFIER AGENT
For the work proposed here, we use a variation on Radial
Basis Neural Networks (RBNN). RBNNs have a topology similar to common FeedForward Neural Networks (FFNN) with
BackPropagation Training Algorithms (BPTA): the primary
difference only lies in the activation function that, instead of being a sigmoid function or a similar activation function, is a statistical distribution or a statistically significant mathematical function.

Fig. 2. UML class diagram for handling groups and counting words belonging to a group.

The selection of transfer functions is indeed decisive
for the speed of convergence in approximation and classification problems [22]. The kinds of activation functions used
for Probabilistic Neural Networks (PNNs) have to meet some
important properties to preserve the generalisation abilities of
the ANNs. In addition, these functions have to preserve the
decision boundaries of the probabilistic neural networks. The
selected RBPNN architecture is shown in Figure 4 and takes advantage of both the PNN topology and the Radial Basis Neural Networks (RBNN) used in [23].
Each neuron performs a weighted sum of its inputs and passes it through a transfer function f to produce an output. This occurs for each neural layer in a FFNN. The network can be perceived as a model connecting inputs and outputs, with the weights and thresholds being free parameters of the model, which are modified by the training algorithm. Such networks can model functions of almost arbitrary complexity, with the number of layers and the number of units in each layer determining the function complexity. A FFNN is capable of generalising the model, and of separating the input space into various classes (e.g. in a 2D variable space this is equivalent to the separation of the different semi-planes). In any case, such a FFNN can only create a general model of the entire variable space, while it cannot assign single sets of inputs to categories. On the other hand, a RBNN is capable of clustering the inputs by fitting each class by means of a radial basis function [24]; while the model is not general for the entire variable space, it is capable of acting on the single variables (e.g. in a 2D variable space it locates closed subspaces, without any inference on the remaining space outside such subspaces).
Another interesting topology is provided by PNNs, which
are mainly FFNNs also functioning as Bayesian networks
with Fisher Kernels [25]. By replacing the sigmoid activation
function often used in neural networks with an exponential
function, a PNN can compute nonlinear decision boundaries
approaching the Bayes optimal classification [26]. Moreover,
a PNN generates accurate predicted target probability scores
with a probabilistic meaning (e.g. in the 2D space it is
equivalent to attribute a probabilistic score to some chosen
points, which in Figure 3 are represented as the size of the
points).
Finally, in the presented approach we decided to combine the advantages of both RBNN and PNN using the so-called RBPNN. The RBPNN architecture, while preserving the capabilities of a PNN, due to its topology, then being capable of statistical inference, is also capable of clustering since the standard activation functions of a PNN are substituted by radial basis functions still verifying the Fisher kernel conditions required for a PNN (e.g. such an architecture in the 2D variable space can both locate subspaces of points and give them a probabilistic score). Figure 3 shows a representation of the behaviour of each network topology presented above.

Fig. 3. A comparison between the results of several types of NNs; our RBPNN includes the maximum probability selector module.
A. The RBPNN structure and topology
In a RBPNN both the input and the first hidden layer
exactly match the PNN architecture: the input neurones are
used as distribution units that supply the same input values
to all the neurones in the first hidden layer that, for historical
reasons, are called pattern units. In a PNN, each pattern unit
performs the dot product of the input pattern vector v by a
weight vector W(0) , and then performs a nonlinear operation
on the result. This nonlinear operation gives output x(1) that
is then provided to the following summation layer. While a
common sigmoid function is used for a standard FFNN with
BPTA, in a PNN the activation function is an exponential, such
that, for the j-th neurone, the output is
\[ x_j^{(1)} \propto \exp\left(\frac{\|W^{(0)} \cdot v\|}{2\sigma^2}\right) \tag{1} \]
where σ represents the statistical distribution spread.
The given activation function can be modified or substituted
while the condition of Parzen (window function) is still satisfied for the estimator N̂ . In order to satisfy such a condition
some rules must be verified for the chosen window function in
order to obtain the expected estimate, which can be expressed
as a Parzen window estimate p(x) by means of the kernel K
of f in the d-dimensional space $S^d$:
\[ p_n(x) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{h_n^d}\, K\!\left(\frac{x - x_i}{h_n}\right), \qquad \int_{S^d} K(x)\, dx = 1 \tag{2} \]
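A minimal sketch of the estimator in (2) for d = 1, assuming a Gaussian kernel (any kernel integrating to one satisfies the constraint; the bandwidth and sample values are illustrative):

```python
import math

def parzen_estimate(x, samples, h):
    """Parzen window density estimate p_n(x) with a Gaussian kernel (d = 1)."""
    def kernel(u):
        # Standard Gaussian kernel: integrates to 1 over the real line.
        return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)
    n = len(samples)
    return sum(kernel((x - xi) / h) for xi in samples) / (n * h)

# The estimate integrates (approximately) to one over a fine grid.
samples = [-1.0, 0.0, 1.0]
mass = sum(parzen_estimate(x / 100, samples, h=0.5) for x in range(-1000, 1000)) / 100
```

Shrinking h with n while keeping n·hⁿᵈ → ∞, as required by the Parzen conditions below, makes the estimate converge in mean square to the true density.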
Fig. 4. A representation of a Radial Basis Probabilistic Neural Network with
maximum probability selector module
where hn ∈ N is called window width or bandwidth parameter
and corresponds to the width of the kernel. In general hn ∈ N
depends on the number of available sample data n for the
estimator pn (x). Since the estimator pn (x) converges in mean
square to the expected value p(x) if
\[ \lim_{n\to\infty} \langle p_n(x) \rangle = p(x), \qquad \lim_{n\to\infty} \operatorname{var}(p_n(x)) = 0 \tag{3} \]
where hpn (x)i represents the mean estimator values and
var (pn (x)) the variance of the estimated output with respect
to the expected values, the Parzen condition states that such
convergence holds within the following conditions:
\[ \sup_{x} K(x) < \infty, \qquad \lim_{|x|\to\infty} xK(x) = 0, \qquad \lim_{n\to\infty} h_n^d = 0, \qquad \lim_{n\to\infty} n h_n^d = \infty \tag{4} \]

In this case, while preserving the PNN topology, to obtain the RBPNN capabilities, the activation function is substituted with a radial basis function (RBF); an RBF still verifies all the conditions stated before. It then follows the equivalence between the $W^{(0)}$ vector of weights and the centroids vector of a radial basis neural network, which, in this case, are computed as the statistical centroids of all the input sets given to the network.

We name f the chosen radial basis function; then the new output of the first hidden layer for the j-th neurone is
\[ x_j^{(1)} := f\left(\frac{\|v - W^{(0)}\|}{\beta}\right) \tag{5} \]
where β is a parameter that is intended to control the distribution shape, quite similar to the σ used in (1).

The second hidden layer in a RBPNN is identical to that of a PNN: it just computes weighted sums of the values received from the preceding neurones. This second hidden layer is indeed called the summation layer: the output of the k-th summation unit is
\[ x_k^{(2)} = \sum_{j} W_{jk}\, x_j^{(1)} \tag{6} \]
where $W_{jk}$ represents the weight matrix. Such a weight matrix consists of a weight value for each connection from the j-th pattern unit to the k-th summation unit. These summation units work as the neurones of a linear perceptron network. The training for the output layer is performed as in a RBNN; however, since the number of summation units is very small, and in general remarkably less than in a RBNN, the training is simplified and the speed greatly increased [27].

The output of the RBPNN (as shown in Figure 4) is given to the maximum probability selector module, which effectively acts as a one-neuron output layer. This selector receives as input the probability scores generated by the RBPNN and attributes the analysed text to one author only, by selecting the most probable author, i.e. the one having the maximum input probability score. Note that the links to this selector are weighted (with weights adjusted during the training), hence the actual input is the product between the weight and the output of the summation layer of the RBPNN.

Fig. 5. Setup values for the proposed RBPNN: NF is the number of considered lexical groups, NS the number of analysed texts, and NG is the number of people that can possibly be recognised as authors.
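A compact sketch of the forward pass through the two hidden layers (5)–(6) followed by the maximum probability selector, assuming exp(−r) as the radial basis function f; the centroids and weights are illustrative (a real network would learn W from training samples):

```python
import math

def rbpnn_forward(v, centroids, W, beta):
    """Forward pass: radial-basis pattern layer as in (5), summation
    layer as in (6), then the maximum-probability selector."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    # Pattern layer: one radial basis unit per centroid (training sample).
    x1 = [math.exp(-dist(v, c) / beta) for c in centroids]
    # Summation layer: weighted sums, one unit per candidate author.
    x2 = [sum(W[j][k] * x1[j] for j in range(len(x1))) for k in range(len(W[0]))]
    # Selector: index of the most probable author.
    return x2.index(max(x2))

centroids = [(0.0, 0.0), (1.0, 1.0)]   # statistical centroids of the input sets
W = [[1.0, 0.0], [0.0, 1.0]]           # weights from pattern to summation units
author = rbpnn_forward((0.9, 1.1), centroids, W, beta=1.0)
```

With these toy values the input (0.9, 1.1) lies close to the second centroid, so the selector returns the second author.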
B. Layer size for a RBPNN
The devised topology enables us to distribute to different
layers of the network different parts of the classification task.
While the pattern layer is just a nonlinear processing layer, the
summation layer selectively sums the output of the first hidden
layer. The output layer fulfils the nonlinear mapping such as
classification, approximation and prediction. In fact, the first
hidden layer of the RBPNN has the responsibility to perform
the fundamental task expected from a neural network [28]. In
order to have a proper classification of the input dataset, i.e. of
analysed texts to be attributed to authors, the size of the input
layer should match the exact number NF of different lexical
groups given to the RBPNN, whereas the size of the pattern
units should match the number of samples, i.e. analysed texts,
NS. The number of summation units in the second hidden layer is equal to the number of output units, and these should match the number of people NG we are interested in for the correct recognition of the speakers (Figure 5).
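The sizing rule above can be stated as a small helper (the function name and the example numbers are illustrative, not from the paper):

```python
def rbpnn_layer_sizes(num_groups, num_samples, num_authors):
    """Layer sizes for the RBPNN: input matches the NF lexical groups,
    pattern units match the NS analysed texts, and both summation and
    output units match the NG candidate authors."""
    return {"input": num_groups, "pattern": num_samples,
            "summation": num_authors, "output": num_authors}

sizes = rbpnn_layer_sizes(num_groups=30, num_samples=258, num_authors=10)
```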
C. Reinforcement learning
In order to continuously update the reference database for
our system, a statically trained NN would not suffice for the
purpose of the work. Since the aim of the presented system is
having an expanding database of text samples for classification
and recognition purpose, the agent driven identification should
dynamically follow the changes in such a database. When
a new entry is made then the related feature set and biases
change, it implies that also the RBPNN should be properly
managed in order to ensure a continuous adaptive control for
reinforcement learning. Moreover, for the considered domain
it is desirable that a human supervisor supply suggestions, especially when the system starts working. The human activities
are related to the supply of new entries into the text sample
database, and to the removal of misclassifications made by the
RBPNN.
We used a supervised control configuration (see Figure 6),
where the external control is provided by the actions and
choices of a human operator. While the RBPNN is trained with
a classical backpropagation learning algorithm, it is also embedded into an actor-critic reinforcement learning architecture,
which back propagates learning by evaluating the correctness
of the RBPNN-made choices with respect to the real word.
Let ξ be the error function, i.e. ξ = 0 for the results
supported by human verification, or the vectorial deviance for
the results not supported by a positive human response. This
assessment is made by an agent named Critic. We consider
the filtering step for the RBPNN output, to be both: Critic,
i.e. a human supervisor acknowledging or rejecting RBPNN
classifications; or Adaptive critic, i.e. an agent embedding a
NN that in the long run simulates the control activity made by
the human Critic, hence decreasing human control over time.
Adaptive critic needs to learn, and this learning is obtained by
a modified backpropagation algorithm using just ξ as error
function. Hence, Adaptive critic has been implemented by
a simple feedforward NN trained by means of a traditional gradient descent algorithm, so that the weight modification
$\Delta w_{ij}$ is
\[ \Delta w_{ij} = -\mu \frac{\partial \xi}{\partial w_{ij}} = -\mu \frac{\partial \xi}{\partial \tilde{f}_i} \frac{\partial \tilde{f}_i}{\partial \tilde{u}_i} \frac{\partial \tilde{u}_i}{\partial w_{ij}} \tag{7} \]
Here $\tilde{f}_i$ is the activation of the i-th neuron, and $\tilde{u}_i$ is the i-th input to the neurone, weighted as
\[ \tilde{u}_i = \sum_j w_{ij}\, \tilde{f}_j(\xi_i) \tag{8} \]
The result of the adaptive control determines whether to
continue the training of the RBPNN with new data, and
whether the last training results should be saved or discarded.
At runtime this process results in a continuous adaptive learning, hence avoiding the classical problem of NN polarisation
and overfitting. Figure 6 shows the developed learning system
reinforcement. According to the literature [29], [30], [31], [32],
straight lines represent the data flow, i.e. training data fed
to the RBPNN, then new data inserted by a supervisor, and
the output of the RBPNN sent to the Critic modules also
by means of a delay operator z −1 . Functional modifications
operated within the system are represented as slanting arrows,
i.e. the choices made by a human supervisor (Critic) modify
the Adaptive critic, which adjust the weight of its NN; the
combined output of Critic and Adaptive critic determines
whether the RBPNN should undergo more training epochs and
so modify its weights.
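One possible reading of this gating logic, sketched with our own (hypothetical) function names, is that the Critic verdict on the delayed output decides whether the last tentative training epoch is saved or discarded:

```python
# Illustrative sketch only: the keep/discard rule below is our reading of the
# adaptive control described above, not the paper's actual implementation.
def training_step(weights, train, delayed_output, critic):
    """Train tentatively; keep the new weights only if the Critic modules
    judge the 1-step delayed RBPNN output acceptable (error 0)."""
    candidate = train(weights)          # tentative RBPNN training epoch
    if critic(delayed_output) == 0:     # combined Critic / Adaptive critic verdict
        return candidate                # save the last training results
    return weights                      # discard them and wait for new data
```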
Fig. 6. The adopted supervised reinforcement learning model. Slanting arrows
represent internal commands supplied in order to control or change the status
of the modules, straight arrows represent the data flow along the model, z −1
represents a time delay module which provides 1-step delayed outputs.
characteristics (see Section II), then such results have been
given to the classification agent. The total number of text
samples was 344, and we used 258 of them for training the
classification agent and 86 for validation. The text samples,
both for training and validation, were from different people
who have given a speech (from A. Einstein to G. Lewis), as
shown in Figure 7.
Given the flexible structure of the implemented learning
model, the word groups are not fixed and can be modified,
added or removed over time by an external tuning activity. By
using the count of words in a group, instead of word-by-word
counts, the multi-agent system realises a statistically driven
classifier that identifies the main semantic concerns regarding
the text samples, and then, attributes such concerns to the most
probable person.
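A toy sketch of this group-level counting (the word groups and the text below are invented for illustration):

```python
# Hypothetical word groups; in the real system they are tuned externally.
word_groups = {
    "science": {"energy", "theory", "relativity"},
    "politics": {"nation", "law", "freedom"},
}

def group_counts(text):
    """Count, for each word group, how many words of the text fall in it."""
    words = text.lower().split()
    return {g: sum(w in vocab for w in words)
            for g, vocab in word_groups.items()}

features = group_counts("The theory of relativity changed science and law")
```

The resulting per-group counts, rather than raw word counts, form the feature vector fed to the classifier.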
The relevant information useful in order to recognise the
author of the speech is usually largely spread over a certain
number of word groups that could be an indication of the cultural
background, heritage, field of study, professional category, etc.
This implies that we cannot exclude any word group a priori,
while the RBPNN could learn to automatically enhance the
relevant information in order to classify the speeches.
Figure 7-left shows an example of the classifier performance for results generated by the RBPNN (before the filter
implemented by the probabilistic selector). Since the RBPNN
results are probabilities between 0 and 1, the shown
performance is 0 when a text was correctly attributed (or
not attributed) to a specific person. Figure 7-right shows the
performances of the system when including the probabilistic
selector. In this case, a boolean selection is involved, and
correct identifications are represented as 0, false positive
identifications as −1 (black marks), and missed identifications
as +1 (white marks).
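The selector and its boolean scoring can be sketched as follows (a simplified reading; the function names are ours):

```python
# 0 = correct decision, -1 = false positive (black mark), +1 = missed (white mark).
def select(probs):
    """Index of the most probable author in an RBPNN output vector."""
    return max(range(len(probs)), key=probs.__getitem__)

def mark(attributed, is_author):
    if attributed and not is_author:
        return -1      # false positive identification
    if not attributed and is_author:
        return +1      # missed identification
    return 0           # correct (non-)attribution

chosen = select([0.1, 0.7, 0.2])           # author 1 is picked
marks = [mark(i == chosen, i == 1) for i in range(3)]
```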
For validation purposes, Figure 7-(left and right) shows
results according to e:
e = y − ỹ   (9)
IV. EXPERIMENTAL SETUP
The proposed RBPNN architecture has been tested using
several text samples collected from public speeches of different
people both from the present and the past era. Each text sample
has been given to the preprocessing agent that extracts some
where e identifies the performance, ỹ the classification result,
and y the expected result. Negative values of e indicate an
excess of confidence in the attribution of a text to a person,
while positive values indicate a lack of confidence in that
sense.
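A small sketch of this validation measure (the sample values are invented for illustration):

```python
def performance(expected, predicted):
    """e = y - y~ per text: 0 correct, < 0 over-confident, > 0 under-confident."""
    return [y - y_t for y, y_t in zip(expected, predicted)]

e = performance([1.0, 0.0, 1.0], [1.0, 0.4, 0.7])
# e[0] = 0 (correct), e[1] < 0 (excess of confidence), e[2] > 0 (lack of it)
```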
Fig. 7. The obtained performance for our classification system before (left) and after (right) the maximum probability selector choice. The mean grey color
represents the correct classifications, while white represents missed classifications and black false classifications.
The system was able to correctly attribute the text to the
proper author with only 20% of the assignments missed.
V. RELATED WORKS
Several generative models can be used to characterise
datasets, determine properties and allow grouping data
into classes. Generative models are based on stochastic block
structures [33], or on ‘Infinite Hidden Relational Models’ [34],
and ‘Mixed Membership Stochastic Blockmodel’ [35]. The
main issue of class-based models is the type of relational
structure that such solutions are capable of describing. Since
the definition of a class is attribute-dependent, generally the
reported models risk replicating the existing classes for each
new attribute added. For example, such models would be unable to
efficiently organise similarities between the classes ‘cats’ and
‘dogs’ as child classes of the more general class ‘mammals’.
Such attribute-dependent classes would have to be replicated as
the classification generates two different classes of ‘mammals’:
the class ‘mammals as cats’ and the class ‘mammals as dogs’.
Consequently, in order to distinguish between the different
races of cats and dogs, it would be necessary to further
multiply the ‘mammals’ class for each one of the identified
races. Therefore, such models quickly lead to an explosion of
classes. In addition, we would have either to add another class
to handle each specific case, or to adopt a mixed membership
model, as for crossbred species.
Another paradigm concerns the ’Non-Parametric Latent
Feature Relational Model’ [36], i.e. a Bayesian nonparametric
model in which each entity has boolean valued latent features
that influence the model’s relations. Such relations depend on
well-known covariant sets, which are neither explicit nor known
in our case study at the moment of the initial analysis.
In [37], the authors propose a sequential forward feature
selection method to find the subset of features that are relevant
to a classification task. This approach uses a novel estimation of
the conditional mutual information between candidate feature
and classes, given a subset of already selected features used as
a classifier independent criterion for evaluating feature subsets.
In [38], data from the charge-discharge simulation of
lithium-ions battery energy storage are used for classification
purposes with recurrent NNs and PNNs by means of a theoretical framework based on signal theory.
While these works show the effectiveness of neural network
based approaches, in our case study classification results are
given by means of a probability, hence the use of an RBPNN
and of an on-line training achieved by reinforcement learning.
VI. CONCLUSION
This work has presented a multi-agent system, in which an
agent analyses fragments of texts and another agent, consisting
of an RBPNN classifier, performs probabilistic clustering. The
system has successfully managed to identify the most probable
author among a given list for the examined text samples. The
provided identification can be used in order to complement
and integrate a comprehensive verification system, or other
kinds of software systems trying to automatically identify
the author of a written text. The RBPNN classifier agent
is continuously trained by means of reinforcement learning
techniques in order to follow a potential correction provided by
a human supervisor, or by an agent that learns about supervision.
The developed system was also able to cope with new data
that are continuously fed into the database, thanks to the
adaptation abilities of its collaborating agents and their
reasoning based on NNs.
ACKNOWLEDGMENT
This work has been supported by project PRIME funded
within POR FESR Sicilia 2007-2013 framework and project
PRISMA PON04a2 A/F funded by the Italian Ministry of
University and Research within PON 2007-2013 framework.
REFERENCES

[1] C. Napoli, G. Pappalardo, E. Tramontana, Z. Marszałek, D. Połap, and M. Woźniak, “Simplified firefly algorithm for 2d image key-points search,” in IEEE Symposium Series on Computational Intelligence. IEEE, 2014.
[2] M. Gabryel, M. Woźniak, and R. K. Nowicki, “Creating learning sets for control systems using an evolutionary method,” in Proceedings of Artificial Intelligence and Soft Computing (ICAISC), ser. LNCS, vol. 7269. Springer, 2012, pp. 206–213.
[3] F. Bonanno, G. Capizzi, A. Gagliano, and C. Napoli, “Optimal management of various renewable energy sources by a new forecasting method,” in Proceedings of International Symposium on Power Electronics, Electrical Drives, Automation and Motion (SPEEDAM). IEEE, 2012, pp. 934–940.
[4] A. Nowak and M. Woźniak, “Analysis of the active module mechatronical systems,” in Proceedings of Mechanika - ICM. Kaunas, Lietuva: Kaunas University of Technology Press, 2008, pp. 371–376.
[5] C. Napoli, G. Pappalardo, and E. Tramontana, “A hybrid neuro–wavelet predictor for qos control and stability,” in Proceedings of AI*IA: Advances in Artificial Intelligence. Springer, 2013, pp. 527–538.
[6] F. Bonanno, G. Capizzi, G. L. Sciuto, C. Napoli, G. Pappalardo, and E. Tramontana, “A novel cloud-distributed toolbox for optimal energy dispatch management from renewables in igss by using wrnn predictors and gpu parallel solutions,” in Power Electronics, Electrical Drives, Automation and Motion (SPEEDAM), 2014 International Symposium on. IEEE, 2014, pp. 1077–1084.
[7] A. Nowak and M. Woźniak, “Multiresolution derives analysis of module mechatronical systems,” Mechanika, vol. 6, no. 74, pp. 45–51, 2008.
[8] C. Napoli, G. Pappalardo, and E. Tramontana, “Using modularity metrics to assist move method refactoring of large systems,” in Proceedings of International Conference on Complex, Intelligent, and Software Intensive Systems (CISIS). IEEE, 2013, pp. 529–534.
[9] G. Pappalardo and E. Tramontana, “Suggesting extract class refactoring opportunities by measuring strength of method interactions,” in Proceedings of Asia Pacific Software Engineering Conference (APSEC). IEEE, December 2013.
[10] E. Tramontana, “Automatically characterising components with concerns and reducing tangling,” in Proceedings of Computer Software and Applications Conference (COMPSAC) Workshop QUORS. IEEE, July 2013. DOI: 10.1109/COMPSACW.2013.114, pp. 499–504.
[11] C. Napoli, G. Pappalardo, and E. Tramontana, “Improving files availability for bittorrent using a diffusion model,” in IEEE 23rd International Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises - WETICE 2014, June 2014, pp. 191–196.
[12] R. Giunta, G. Pappalardo, and E. Tramontana, “Aspects and annotations for controlling the roles application classes play for design patterns,” in Proceedings of Asia Pacific Software Engineering Conference (APSEC). IEEE, December 2011, pp. 306–314.
[13] A. Calvagna and E. Tramontana, “Delivering dependable reusable components by expressing and enforcing design decisions,” in Proceedings of Computer Software and Applications Conference (COMPSAC) Workshop QUORS. IEEE, July 2013. DOI: 10.1109/COMPSACW.2013.113, pp. 493–498.
[14] R. Giunta, G. Pappalardo, and E. Tramontana, “AODP: refactoring code to provide advanced aspect-oriented modularization of design patterns,” in Proceedings of Symposium on Applied Computing (SAC). ACM, 2012.
[15] E. Tramontana, “Detecting extra relationships for design patterns roles,” in Proceedings of AsianPlop, March 2014.
[16] G. Capizzi, C. Napoli, and L. Paternò, “An innovative hybrid neuro-wavelet method for reconstruction of missing data in astronomical photometric surveys,” in Proceedings of Artificial Intelligence and Soft Computing (ICAISC). Springer, 2012, pp. 21–29.
[17] F. Bonanno, G. Capizzi, G. L. Sciuto, C. Napoli, G. Pappalardo, and E. Tramontana, “A cascade neural network architecture investigating surface plasmon polaritons propagation for thin metals in openmp,” in Proceedings of Artificial Intelligence and Soft Computing (ICAISC), ser. LNCS, vol. 8467. Springer, 2014, pp. 22–33.
[18] C. Napoli, F. Bonanno, and G. Capizzi, “Exploiting solar wind time series correlation with magnetospheric response by using an hybrid neuro-wavelet approach,” Proceedings of the International Astronomical Union, vol. 6, no. S274, pp. 156–158, 2010.
[19] G. Capizzi, F. Bonanno, and C. Napoli, “Hybrid neural networks architectures for soc and voltage prediction of new generation batteries storage,” in Proceedings of International Conference on Clean Electrical Power (ICCEP). IEEE, 2011, pp. 341–344.
[20] C. Napoli, F. Bonanno, and G. Capizzi, “An hybrid neuro-wavelet approach for long-term prediction of solar wind,” IAU Symposium, no. 274, pp. 247–249, 2010.
[21] G. Capizzi, F. Bonanno, and C. Napoli, “A new approach for lead-acid batteries modeling by local cosine,” in Power Electronics Electrical Drives Automation and Motion (SPEEDAM), 2010 International Symposium on, June 2010, pp. 1074–1079.
[22] W. Duch, “Towards comprehensive foundations of computational intelligence,” in Challenges for Computational Intelligence. Springer, 2007, pp. 261–316.
[23] G. Capizzi, F. Bonanno, and C. Napoli, “Recurrent neural network-based control strategy for battery energy storage in generation systems with intermittent renewable energy sources,” in Proceedings of International Conference on Clean Electrical Power (ICCEP). IEEE, 2011, pp. 336–340.
[24] S. Haykin, Neural Networks - A Comprehensive Foundation. Prentice Hall, 2004.
[25] S. Mika, G. Ratsch, W. Jason, B. Scholkopft, and K.-R. Muller, “Fisher discriminant analysis with kernels,” in Proceedings of the Signal Processing Society Workshop. Neural Networks for Signal Processing IX. IEEE, 1999.
[26] D. F. Specht, “Probabilistic neural networks,” Neural Networks, vol. 3, no. 1, pp. 109–118, 1990.
[27] H. Deshuang and M. Songde, “A new radial basis probabilistic neural network model,” in Proceedings of Conference on Signal Processing, vol. 2. IEEE, 1996.
[28] W. Zhao, D.-S. Huang, and L. Guo, “Optimizing radial basis probabilistic neural networks using recursive orthogonal least squares algorithms combined with micro-genetic algorithms,” in Proceedings of Neural Networks, vol. 3. IEEE, 2003.
[29] D. V. Prokhorov, R. A. Santiago, and D. C. Wunsch II, “Adaptive critic designs: A case study for neurocontrol,” Neural Networks, vol. 8, no. 9, pp. 1367–1372, 1995. [Online]. Available: http://www.sciencedirect.com/science/article/pii/0893608095000429
[30] H. Javaherian, D. Liu, and O. Kovalenko, “Automotive engine torque and air-fuel ratio control using dual heuristic dynamic programming,” in Proceedings of International Joint Conference on Neural Networks (IJCNN), 2006, pp. 518–525.
[31] B. Widrow and M. Lehr, “30 years of adaptive neural networks: perceptron, madaline, and backpropagation,” Proceedings of the IEEE, vol. 78, no. 9, pp. 1415–1442, Sep 1990.
[32] J.-W. Park, R. Harley, and G. Venayagamoorthy, “Adaptive-critic-based optimal neurocontrol for synchronous generators in a power system using mlp/rbf neural networks,” IEEE Transactions on Industry Applications, vol. 39, no. 5, pp. 1529–1540, Sept 2003.
[33] K. Nowicki and T. A. B. Snijders, “Estimation and prediction for stochastic blockstructures,” Journal of the American Statistical Association, vol. 96, no. 455, pp. 1077–1087, 2001.
[34] Z. Xu, V. Tresp, K. Yu, and H.-P. Kriegel, “Infinite hidden relational models,” in Proceedings of International Conference on Uncertainty in Artificial Intelligence (UAI), 2006.
[35] E. M. Airoldi, D. M. Blei, E. P. Xing, and S. E. Fienberg, “Mixed membership stochastic block models,” in Advances in Neural Information Processing Systems (NIPS). Curran Associates, 2009.
[36] K. Miller, T. Griffiths, and M. Jordan, “Nonparametric latent feature models for link prediction,” in Advances in Neural Information Processing Systems (NIPS). Curran Associates, Inc., 2009, vol. 22, pp. 1276–1284.
[37] J. Novovičová, P. Somol, M. Haindl, and P. Pudil, “Conditional mutual information based feature selection for classification task,” in Progress in Pattern Recognition, Image Analysis and Applications. Springer, 2007, pp. 417–426.
[38] F. Bonanno, G. Capizzi, and C. Napoli, “Some remarks on the application of rnn and prnn for the charge-discharge simulation of advanced lithium-ions battery energy storage,” in Proceedings of International Symposium on Power Electronics, Electrical Drives, Automation and Motion (SPEEDAM). IEEE, 2012, pp. 941–945.
Journal of Nonlinear Systems and Applications ()
Copyright © 2009 Watam Press
http://www.watam.org/JNSA/
STATISTICS ON GRAPHS, EXPONENTIAL FORMULA AND
COMBINATORIAL PHYSICS
arXiv:0910.0695v2 [cs.DM] 11 Feb 2010
Laurent Poinsot, Gérard H. E. Duchamp, Silvia Goodenough and Karol A. Penson
Abstract. The concern of this paper is a famous combinatorial formula known under the name “exponential formula”. It occurs quite naturally in many contexts (physics, mathematics, computer science). Roughly speaking, it expresses that the exponential generating function of a whole structure is equal to the exponential of those of connected substructures. Keeping this descriptive statement as a guideline, we develop a general framework to handle many different situations in which the exponential formula can be applied.

Keywords. Combinatorial physics, Exponential generating function, Partial semigroup, Experimental mathematics.

1  Introduction

Applying the exponential paradigm, one can sometimes feel uncomfortable wondering whether “one has the right” to do so (as for coloured structures, for example). The following paper is aimed at giving a rather large framework where this formula holds.

The exponential formula can be traced back to works by Touchard and Riddell & Uhlenbeck [20, 17]. For other expositions, see for example [4, 7, 9, 19].

We are interested in computing various examples of EGF for combinatorial objects having (a finite set of) nodes (i.e. their set-theoretical support), so we use as central concept the mapping σ which associates to every structure the set of (labels of its) nodes.

We need to draw what could be called “square-free decomposable objects” (SFD). This version is suited to our needs for the “exponential formula” and it is sufficiently general to contain, as a particular case, the case of multivariate series.

2  Partial semigroups

Let us call partial semigroup a semigroup with a partially defined associative law (see for instance [6] for usual semigroups and [1, 14, 18] for more details on structures with a partially defined binary operation). More precisely, a partial semigroup is a pair (S, ∗) where S is a set and ∗ is a (partially defined) function S × S → S such that the two (again partially defined) functions S × S × S → S

(x, y, z) ↦ (x ∗ y) ∗ z  and  (x, y, z) ↦ x ∗ (y ∗ z)   (1)

coincide (same domain and values). Using this requirement one can see that the value of the (partially defined) function S^n → S

(x_1, ···, x_n) ↦ E_T(x_1, ···, x_n)   (2)

obtained by evaluating the expression formed by labelling by x_i (from left to right) the i-th leaf of a binary tree T with n leaves and by ∗ its internal nodes, is independent of T. We will denote x_1 ∗ ··· ∗ x_n their common value.
In this paper we restrict our attention to commutative
semigroups. By this we mean that the value x1 ∗ · · · ∗
xn does not depend on the relative order of the xi . A
nonempty partial semigroup (S, ∗) has a (two-sided and
total) unit ǫ ∈ S if, and only if, for every ω ∈ S, ω ∗ ǫ =
ω = ǫ∗ω. Using associativity of ∗, it can be easily checked
that if S has a unit, then it is unique.
Example 2.1. Let F be a set of sets (resp. which contains ∅ as an element) and which is closed under the disjoint sum ⊔, i.e., if A, B ∈ F such that A ∩ B = ∅, then
A ∪ B(= A ⊔ B) ∈ F . Then (F, ⊔) is a partial semigroup
(resp. partial semigroup with unit).
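A minimal executable model of Example 2.1 (our own illustration): finite sets under disjoint sum, with the operation left undefined on overlapping sets, and ǫ = ∅ acting as the unit.

```python
def disjoint_sum(a, b):
    """Partially defined law of Example 2.1: only for disjoint frozensets."""
    if a & b:
        return None                      # outside the domain of the partial law
    return a | b

x, y = frozenset({1, 2}), frozenset({3})
s = disjoint_sum(x, y)                   # defined: the union frozenset({1, 2, 3})
u = disjoint_sum(x, frozenset({2}))      # undefined: the sets overlap
```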
∗ L. Poinsot, G. H. E. Duchamp and S. Goodenough are affiliated to Laboratoire d’Informatique Paris Nord, Université Paris-Nord 13, CNRS UMR 7030, 99 av. J.-B. Clément, F 93430 Villetaneuse, France (emails: {ghed,laurent.poinsot}@lipn-univ.paris13.fr, [email protected]).
† K. A. Penson is affiliated to Laboratoire de Physique Théorique de la Matière Condensée, Université Pierre et Marie Curie, CNRS UMR 7600, Tour 24 - 2e ét., 4 pl. Jussieu, F 75252 Paris cedex 05, France (email: [email protected]).
‡ Manuscript received October 05, 2009. This work was supported by the French Ministry of Science and Higher Education under Grant ANR PhysComb.

3  Square-free decomposable partial semigroups

Let 2^(N⁺) denote the set of all finite subsets of the positive integers N⁺ and let (S, ⊕) be a partial semigroup with unit (here denoted ǫ) equipped with a mapping σ : S → 2^(N⁺), called the (set-theoretic) support mapping. Let D be the domain of ⊕. The triple (S, ⊕, σ) is called square-free decomposable (SFD) if, and only if, it fulfills the two following conditions.
• Direct sum (DS):
1. σ(ω) = ∅ iff ω = ǫ;
2. D = {(ω_1, ω_2) ∈ S² : σ(ω_1) ∩ σ(ω_2) = ∅};
3. For all ω_1, ω_2 ∈ S, if (ω_1, ω_2) ∈ D then σ(ω_1 ⊕ ω_2) = σ(ω_1) ∪ σ(ω_2).
• Levi’s property (LP): For every ω_1, ω_2, ω^1, ω^2 ∈ S such that (ω_1, ω_2), (ω^1, ω^2) ∈ D and ω_1 ⊕ ω_2 = ω^1 ⊕ ω^2, there are ω_i^j ∈ S for i = 1, 2, j = 1, 2 such that (ω_i^1, ω_i^2), (ω_1^j, ω_2^j) ∈ D, ω_i = ω_i^1 ⊕ ω_i^2 and ω^j = ω_1^j ⊕ ω_2^j for i = 1, 2 and j = 1, 2.
Remark 3.1. The second and third conditions of (DS)
imply that σ(ω1 ⊕ω2 ) = σ(ω1 )⊔σ(ω2 ) whenever (ω1 , ω2 ) ∈
D (which means that σ(ω1 )∩σ(ω2 ) = ∅), where ⊔ denotes
the disjoint sum.
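As a concrete warm-up for the examples below, the square-free-integer instance (σ(n) the set of primes dividing n, ⊕ multiplication restricted to disjoint supports) can be modelled as follows; the code is our own sketch.

```python
def sigma(n):
    """Support of n: the set of primes dividing n; sigma(1) = empty set (the unit)."""
    primes, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            primes.add(d)
            n //= d
        d += 1
    if n > 1:
        primes.add(n)
    return primes

def oplus(a, b):
    """Partial product: defined only when the supports are disjoint (DS)."""
    return a * b if sigma(a).isdisjoint(sigma(b)) else None
```

Note that the (DS) condition σ(ω_1 ⊕ ω_2) = σ(ω_1) ⊔ σ(ω_2) holds by construction: sigma(oplus(6, 35)) is the disjoint union of sigma(6) and sigma(35).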
Example 3.1. As examples of this setting we have:

1. The positive square-free integers, σ(n) being the set of primes which divide n, the atoms being the prime numbers.

2. All the positive integers (S = N⁺), under the usual integer multiplication, σ(n) being the set of primes which divide n.

3. Graphs, hypergraphs, (finitely) coloured, weighted graphs, with nodes in N⁺, σ(G) being the set of nodes and ⊕ the juxtaposition (direct sum) when the sets of nodes are mutually disjoint.

4. The set of endofunctions f : F → F where F is a finite subset of N⁺.

5. The (multivariate) polynomials in N[X], X = {x_i : i ∈ I}, with I ⊆ N⁺, X being a nonempty set of (commuting or not) variables, with σ(P) = Alph(P) the set of indices of variables that occur in a polynomial P, and ⊕ = +.

6. For a given finite or denumerable field, the set of irreducible monic polynomials is denumerable. Arrange them in a sequence (P_n)_{n∈N⁺}; then the square-free monic (for a given order on the variables) polynomials are SFD, with σ(P) := {n ∈ N⁺ : P_n divides P} and ⊕ being the multiplication.

7. Rational complex algebraic curves, σ(V) being the set of monic irreducible bivariate polynomials vanishing on V.

In what follows we write ⊕_{i=1}^n ω_i instead of ω_1 ⊕ ··· ⊕ ω_n (if n = 0, then ⊕_{i=1}^n ω_i = ǫ) and we suppose that (S, ⊕, σ) is SFD for the two following lemmas.

Lemma 3.1. Let ω_1, ..., ω_n ∈ S such that ⊕_{i=1}^n ω_i is defined. Then for every i, j ∈ {1, ..., n} such that i ≠ j, it holds that σ(ω_i) ∩ σ(ω_j) = ∅. In particular, if none of the ω_k is equal to ǫ, then ω_i ≠ ω_j for every i, j ∈ {1, ..., n} such that i ≠ j. Moreover σ(⊕_{i=1}^n ω_i) = ⨆_{i=1}^n σ(ω_i).

Lemma 3.2. Let (ω_i)_{i=1}^n be a finite family of elements of S with pairwise disjoint supports. Suppose that for i = 1, ..., n, ω_i = ⊕_{k=1}^{n_i} ω_i^k, where (ω_i^k)_{k=1}^{n_i} is a finite family of elements of S. Then ⊕_{i=1}^n ω_i = ⊕_{i=1}^n ⊕_{k=1}^{n_i} ω_i^k.

These lemmas are useful to define the sum of two or more elements of S using respective sum decompositions. Now, an atom in a partial semigroup with unit S is any object ω ≠ ǫ which cannot be split; formally,

ω = ω_1 ⊕ ω_2 =⇒ ǫ ∈ {ω_1, ω_2}.   (3)

The set of all atoms is denoted by atoms(S). Whenever the square-free decomposable semigroup S is not trivial, i.e., reduced to {ǫ}, atoms(S) is not empty.

Example 3.2. The atoms obtained from examples 3.1:

1. The atoms of 3.1.2 are the primes.
2. The atoms of 3.1.3 are connected graphs.
3. The atoms of 3.1.4 are the endofunctions for which the domain is a singleton.
4. The atoms of 3.1.5 are the monomials.

The prescriptions (DS, LP) imply that the decomposition of objects into atoms always exists and is unique.

Proposition 3.1. Let (S, ⊕, σ) be SFD. For each ω ∈ S there is one and only one finite set of atoms A = {ω_1, ..., ω_n} such that ω = ⊕_{i=1}^n ω_i. One has A = ∅ iff ω = ǫ.

4  Exponential formula

In this section we consider (S, ⊕, σ) as a square-free decomposable partial semigroup with unit.

In the set S, objects are conceived to be “measured” by different parameters (data in statistical language). So, to get a general purpose tool, we suppose that the statistics takes its values in a (unitary) ring R of characteristic zero, that is to say which contains Q (as, to write exponential generating series, it is convenient to have at hand the fractions 1/n!). Let then c : S → R be the given statistics. For F a finite set and each X ⊆ S, we define

X_F := {ω ∈ X : σ(ω) = F}.   (4)

In order to write generating series, we need

1. that the sums c(X_F) := Σ_{ω∈X_F} c(ω) exist for every finite set F of N⁺ and every X ⊆ S;
2. that F ↦ c(X_F) would depend only on the cardinality of the finite set F of N⁺, for each fixed X ⊆ S;

3. that c(ω_1 ⊕ ω_2) = c(ω_1).c(ω_2).

We formalize it in

(LF) Local finiteness. — For each finite set F of N⁺, the subset S_F of S is a finite set.

(Eq) Equivariance. —

card(F_1) = card(F_2) =⇒ c(atoms(S)_{F_1}) = c(atoms(S)_{F_2}).   (5)

(Mu) Multiplicativity. —

c(ω_1 ⊕ ω_2) = c(ω_1).c(ω_2).   (6)

Remark 4.1. a) In fact, (LF) is a property of the set S, while (Eq) is a property of the statistics. In practice, we choose S which is locally finite and choose equivariant statistics, for instance

c(ω) = x^{number of cycles} y^{number of fixed points}

for some variables x, y.

b) More generally, it is typical to take integer-valued partial (additive) statistics c_1, ..., c_i, ..., c_r (for every ω ∈ S, c_i(ω) ∈ N) and set c(ω) = x_1^{c_1(ω)} x_2^{c_2(ω)} ··· x_r^{c_r(ω)}.

c) The set of example 3.1.2 is not locally finite, but other examples satisfy (LF): for instance 3.1.3 if one asks that the number of arrows and weights be finite, and 3.1.1.

A multiplicative statistics is called proper if c(ǫ) ≠ 0. It is called improper if c(ǫ) = 0. In this case, for every ω ∈ S, c(ω) = 0 as c(ω) = c(ω ⊕ ǫ) = c(ω)c(ǫ) = 0.

If R is an integral domain and if c is proper, then c(ǫ) = 1 because c(ǫ) = c(ǫ ⊕ ǫ) = c(ǫ)², therefore 1 = c(ǫ). Note that for each X ⊆ S, c(X_∅) = Σ_{ω∈X_∅} c(ω) equals c(ǫ) if ǫ ∈ X, and 0 if ǫ ∉ X. For every finite subset X of S, we also define c(X) := Σ_{ω∈X} c(ω); then we have in particular c(∅) = 0 (which is not the same as c(S_∅) = c({ǫ}) if c is proper). The requirement (LF) implies that for every X ⊆ S and every finite set F of N⁺, c(X_F) is defined as a sum of a finite number of terms, because X_F ⊆ S_F and therefore X_F is finite.

Now, we are in position to state the exponential formula as it will be used throughout the paper. Let us recall the usual exponential formula for formal power series in R[[z]] (see [13, 19] for more details on formal power series). Let f(z) = Σ_{n≥1} f_n z^n/n!. Then we have

e^f = Σ_{n≥0} a_n z^n/n!   (7)

where

a_n = Σ_{π∈Π_n} Π_{p∈π} f_{card(p)}   (8)

with Π_n being the set of all partitions of [1..n] (in particular for n = 0, a_0 = 1) and e^z = Σ_{n≥0} z^n/n! ∈ R[[z]].

In what follows [1..n] denotes the interval {j ∈ N⁺ : 1 ≤ j ≤ n}, reduced to ∅ when n = 0. Let (S, ⊕, σ) be a locally finite SFD and c be a multiplicative equivariant statistics. For every subset X of S one sets the following exponential generating series

EGF(X; z) = Σ_{n=0}^{∞} c(X_{[1..n]}) z^n/n!.   (9)

Theorem 4.1 (exponential formula). Let S be a locally finite SFD and c be a multiplicative equivariant statistics. We have

EGF(S; z) = c(ǫ) − 1 + e^{EGF(atoms(S);z)}.   (10)

In particular if c(ǫ) = 1 (for instance if c is proper and R is an integral domain),

EGF(S; z) = e^{EGF(atoms(S);z)}.   (11)

Proof — Let n = 0. Then the unique element of S_∅ is ǫ. Therefore c(S_∅) = c(ǫ). Now suppose that n > 0 and let ω ∈ S_{[1..n]}. According to proposition 3.1, there is a unique finite set {α_1, ..., α_k} ⊆ atoms(S) such that ω = ⊕_{i=1}^{k} α_i. By lemma 3.1, {σ(α_i) : 1 ≤ i ≤ k} is a partition of [1..n] into k blocks. Therefore ω ∈ atoms(S)_{P_1} ⊕ ··· ⊕ atoms(S)_{P_k} where P_i = σ(α_i) for i = 1, ..., k. We can remark that α_1 ⊕ ··· ⊕ α_k is well-defined for each (α_1, ..., α_k) ∈ atoms(S)_{P_1} × ··· × atoms(S)_{P_k} since the supports are disjoint. Now, one has, thanks to the partitions of [1..n],

S_{[1..n]} = ⨆_{π∈Π_n} ⨁_{p∈π} atoms(S)_p   (12)

c(S_{[1..n]}) = Σ_{π∈Π_n} Π_{p∈π} c(atoms(S)_p)   (13)

as, for disjoint (finite) sets F and G of N⁺, it is easy to check that c(X_F ⊕ X_G) = c(X_F)c(X_G) for every X ⊆ S, and because the disjoint union has only a finite number of factors. Therefore, due to equivariance of c on sets of the form atoms(S)_F, one has

c(S_{[1..n]}) = Σ_{π∈Π_n} Π_{p∈π} c(atoms(S)_{[1..card(p)]}).   (14)

But c(atoms(S)_{[1..card(p)]}) is the card(p)-th coefficient of the series EGF(atoms(S); z). Therefore, due to the usual exponential formula, EGF(S; z) = c(ǫ) − 1 + e^{EGF(atoms(S);z)}. Now if c(ǫ) = 1, then we obtain EGF(S; z) = e^{EGF(atoms(S);z)}.
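The theorem can be checked numerically on the Stirling class of Section 5 (a sanity check of our own): with c ≡ 1, the atoms are the one-block partitions, so EGF(atoms; z) = e^z − 1 and the theorem gives EGF(S; z) = e^{e^z − 1}, whose coefficients n![z^n] are the Bell numbers.

```python
from math import comb

def bell(n):
    """Bell numbers via the recurrence B_{m+1} = sum_k C(m, k) B_k."""
    B = [1]
    for m in range(n):
        B.append(sum(comb(m, k) * B[k] for k in range(m + 1)))
    return B[n]

def egf_exp(f, n):
    """n-th EGF coefficient of exp(F), given F's EGF coefficients f[0..n], f[0] = 0.

    Uses A' = F'A, i.e. a_m = sum_{k=1}^{m} C(m-1, k-1) f_k a_{m-k}: this is
    exactly the partition sum (8) computed by recurrence.
    """
    a = [1] + [0] * n
    for m in range(1, n + 1):
        a[m] = sum(comb(m - 1, k - 1) * f[k] * a[m - k] for k in range(1, m + 1))
    return a[n]

f = [0] + [1] * 10            # EGF coefficients of e^z - 1: f_n = 1 for n >= 1
bells = [egf_exp(f, n) for n in range(6)]
```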
5  Two examples
(one-parameter) groups eλΩ where Ω =
X
α(ω)ω is
ω∈HWC
an element of HWC , with all - but a finite number of
them - the complex numbers α(ω) equal to 0, and ω a
word on the alphabet {a, a† } leads to the necessity of
solving the Normal Ordering Problem, i.e., the reduction
of the powers of Ω to the form
X
Ωn =
βi,j (a† )i aj .
(18)
(15)
The examples provided here pertain to the class of labelled graphs where the “classic” exponential formula applies, namely Burnside’s Classes1 Burn a,b , defined, for
0 ≤ a < b two integers, as the class of graphs of numeric
endofunctions f such that
fa = fb
where f n denotes the nth power with respect to functional composition. Despite of its simplicity, there are
still (enumerative combinatorial) open problems for this
class and only B1,ℓ+1 gives rise to an elegant formula
[8, 19] (see also [11], for the idempotent case: ℓ = 1 and
compare to exact but non-easily tractable formulas in [4]
for the general case in the symmetric semigroup, and in
[12] for their generalization to the wreath product of the
symmetric semigroup and a finite group).
In the sequel, Normal (Ωn ) denotes such a sum. This
problem can be performed with three indices in general
and two in the case of homogeneous operators that is
operators for which the “excess” e = i − j is constant
along the monomials (a† )i aj of the support (for which
βi,j 6= 0). Thus, for
X
βi,j (a† )i aj
(19)
Ω=
i−j=e
one has, for all n ∈ N,
The second example: the class of finite parti∞
tions which can be (and should here) identified as graphs of equivalence relations on finite subsets F ⊆ N+. Call this class the "Stirling class", as the number of such graphs with support [1..n] and k connected components is exactly the Stirling number of the second kind S2(n, k) and, using the statistics x^(number of points) y^(number of connected components), one obtains

    Σ_{n,k≥0} S2(n, k) y^k x^n/n! = e^{y(e^x − 1)}.    (16)

Examples of this kind bring us to the conclusion that bivariate statistics like Burn_{a,b}(n, k), S2(n, k) or S1(n, k) (Stirling numbers of the second and first kind) are better understood through the notion of one-parameter group; conversely, such groups, naturally arising in Combinatorial Physics, lead to such statistics and to new ones, some of which can be interpreted combinatorially.

6  Generalized Stirling numbers in Combinatorial Physics

In Quantum Mechanics, many tools boil down to the consideration of creation and annihilation operators, which will be here denoted respectively a† and a. These two symbols do not commute and are subject to the unique relation

    [a, a†] = 1.    (17)

The complex algebra generated by these two symbols and this unique relation, the Heisenberg-Weyl algebra, will be here denoted HW_C. The consideration of evolution [...]

    Normal(Ω^n) = (a†)^{ne} Σ_{k=0}^{n} S_Ω(n, k) (a†)^k a^k    (20)

when e ≥ 0, and

    Normal(Ω^n) = ( Σ_{k=0}^{∞} S_Ω(n, k) (a†)^k a^k ) a^{n|e|}    (21)

otherwise. It turns out that, when there is only one annihilation, one gets a formula of the type (x, y are formal commutative variables)

    Σ_{n,k≥0} S_Ω(n, k) y^k x^n/n! = g(x) e^{y Σ_{n≥1} S_Ω(n,1) x^n/n!},    (22)

which is a generalization of formula (16). A complete study of such a procedure and the details needed to perform the solution of the normal ordering problem may be found in [5].

7  Conclusion

In this paper, we have broadened the domain of application of the exponential formula (see footnotes 2 and 3), a tool originating from statistical physics. This broadening reveals, together with the essence of "why this formula works", a possibility of extension to denominators other than the factorial and, on the other hand, provides a link with one-parameter groups whose infinitesimal generators are (formal) vector fields on the line. The general combinatorial theory of the correspondence (vector fields ↔ bivariate statistics) is still to be done, despite the fact that we already have a wealth of results in this direction.

1 The name is related to the notion of free Burnside semigroups, namely the quotient of the free semigroup A+, where A is a finite alphabet, by the smallest congruence that contains the relators ω^(n+m) = ω^n, ω ∈ A+. For more details see [15].
2 A part of our setting can be reformulated in the categorical context [2, 3].
3 Another direction is the q-exponential formula [10, 16].
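As a sanity check on the special case Ω = a†a (excess e = 0), where S_Ω reduces to the classical Stirling numbers of the second kind, one can realize a as d/dx and a† as multiplication by x and test formula (20) on monomials. A small sketch (all helper names are ours, not from the paper):

```python
from math import comb, factorial

def stirling2(n, k):
    """Stirling number of the second kind, via inclusion-exclusion."""
    if k == 0:
        return 1 if n == 0 else 0
    return sum((-1) ** (k - j) * comb(k, j) * j ** n for j in range(k + 1)) // factorial(k)

def falling(m, k):
    """Falling factorial m(m-1)...(m-k+1): the scalar produced by a^k on x^m."""
    out = 1
    for i in range(k):
        out *= m - i
    return out

# Realize a = d/dx and a† = (multiplication by x) on monomials:
#   (a† a) x^m = m x^m, hence (a† a)^n x^m = m^n x^m, while
#   (a†)^k a^k x^m = m(m-1)...(m-k+1) x^m.
# Formula (20) for Ω = a† a (e = 0) then reads, coefficientwise,
#   m^n = sum_k S2(n, k) * falling(m, k).
for n in range(7):
    for m in range(7):
        assert m ** n == sum(stirling2(n, k) * falling(m, k) for k in range(n + 1))
```

The same identity is the classical expansion of powers in falling factorials, which is exactly what the normal ordering of (a†a)^n encodes.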
Statistics on Graphs, Exponential Formula and Combinatorial Physics
Acknowledgements
We would like to thank Christian Krattenthaler (from
Wien) for fruitful discussions.
The research of this work was supported, in part, by the
Agence Nationale de la Recherche (Paris, France) under
Program No. ANR-08-BLAN-0243-2. We would like also
to acknowledge support from “Projet interne au LIPN
2009” “Polyzêta functions”.
References
[1] R. H. Bruck, A survey of binary systems, Ergebnisse der Mathematik und ihrer Grenzgebiete, new series, vol. 20, Berlin-Göttingen-Heidelberg, Springer, 1958.
[2] F. Bergeron, G. Labelle, and P. Leroux, Combinatorial Species
and Tree-Like Structures, Cambridge University Press, 1999.
[3] P. J. Cameron, C. Krattenthaler, and T. W. Müller, Decomposable functors and the exponential principle II, to appear.
[4] A. Dress and T. W. Müller, Decomposable functors and the exponential principle, Advances in Mathematics, vol. 129, pp. 188–221, 1997.
[5] G. H. E. Duchamp, K. A. Penson, A. I. Solomon, A. Horzela and
P. Blasiak, One-parameter groups and combinatorial physics, in
Proc. of the Third International Workshop on Contemporary
Problems in Mathematic Physics (COPROMAPH3), PortoNovo (Benin), 2003. arXiv:quant-ph/0401126
[6] S. Eilenberg, Automata, Languages and Machines - volume A,
Academic Press, 1974.
[7] P. Flajolet, R. Sedgewick, Analytic Combinatorics, Cambridge
University Press, 2008.
[8] I. P. Goulden and D. M. Jackson, Combinatorial enumeration,
John Wiley & Sons, Inc., 1983.
[9] A. Joyal, Une théorie combinatoire des séries formelles, Advances in Mathematics, vol. 42, pp. 1-82, 1981.
[10] I. M. Gessel, A q-analog of the exponential formula, Discrete
Mathematics 306 (2006).
[11] B. Harris and L. Schoenfeld, The number of idempotent elements in symmetric semigroups, Journal of Combinatorial Theory, Series A, vol. 3, pp. 122-135, 1967.
[12] C. Krattenthaler and T. W. Müller, Equations in finite semigroups: explicit enumeration and asymptotics of solution numbers, Journal of Combinatorial Theory, Series A, vol. 105,
pp. 291-334, 2004.
[13] S. Lang, Complex analysis, Springer, 1999.
[14] E. S. Ljapin and A. E. Evseev, The Theory of Partial Algebraic
Operations, Kluwer Academic, 1997.
[15] A. Pereira do Lago and I. Simon, Free Burnside Semigroups,
Theoretical Informatics and Applications, vol. 35, pp. 579-595,
2001.
[16] C. Quesne, Disentangling q-Exponentials: A General Approach, International Journal of Theoretical Physics, Vol. 43,
No. 2, February 2004
[17] R. J. Riddell and G. E. Uhlenbeck, On the theory of the virial development of the equation of state of monoatomic gases, J. Chem. Phys., vol. 21, pp. 2056-2064, 1953.
[18] G. Segal, Configuration-spaces and iterated loop-spaces, Inventiones Mathematicae, vol. 21 (3), pp. 213-221, 1973.
[19] R. Stanley, Enumerative Combinatorics - Volume I, in Studies
in Advanced Mathematics, vol. 49, Cambridge University Press,
1997.
[20] J. Touchard, Sur les cycles des substitutions, Acta Mathematica, vol. 70, pp. 243-297, 1939.
arXiv:1601.07667v1 [] 28 Jan 2016
Classification of group isotopes according to
their symmetry groups
Halyna Krainichuk
Abstract
The class of all quasigroups is covered by six classes: the class of all asymmetric quasigroups and five varieties of quasigroups (commutative, left symmetric,
right symmetric, semi-symmetric and totally symmetric). Each of these classes
is characterized by symmetry groups of its quasigroups.
In this article, criteria of belonging of group isotopes to each of these classes
are found, including the corollaries for linear, medial and central quasigroups
etc. It is established that an isotope of a noncommutative group is either semisymmetric or asymmetric, each non-medial T-quasigroup is asymmetric etc. The
obtained results are applied for the classification of linear group isotopes of prime
orders, taking into account their up to isomorphism description.
Keywords: central quasigroup, medial quasigroup, isotope, left-, right-,
totally-, semi-symmetric, asymmetric, commutative quasigroup, isomorphism.
Classification: 20N05, 05B15.
Introduction
The group isotope variety is an abundant class of quasigroups in the sense that it contains quasigroups from almost all quasigroup classes that have ever been under consideration. Many authors have obtained results in group isotope theory, which is why the available results are widely scattered across many articles and are quite often repeated. Moreover, taking into account the development of group theory, these results have largely a "folkloric" level of complexity. Considering their applicability, they should be systematized. One attempt at a systematic presentation of group isotopes is the work by Fedir Sokhatsky, "On group isotopes", given in three articles [17, 18, 19]. But parastrophic symmetry has been left unattended there.
A concept of symmetry for all parastrophes of a quasigroup was investigated by J.D.H. Smith [16]. This symmetry is known as triality. The same approach to all parastrophes of a quasigroup can be found in such articles as T. Evans [8], V. Belousov [2], Yu. Movsisyan [14], V. Shcherbacov [24], G. Belyavskaya and T. Popovych [4], and in many articles of other authors. A somewhat different idea of a symmetry of quasigroups and loops was suggested by F. Sokhatsky [22]. Using this concept, one can systematize many results.
The purpose of this article is a classification of group isotopes according
to their parastrophic symmetry groups.
If a σ-parastrophe coincides with a quasigroup itself, then σ is called
a symmetry of the quasigroup. The set of all symmetries forms a group,
which is a subgroup of the symmetric group S3. All
quasigroups defined on the same set are distributed into 6 blocks according
to their symmetry groups. The class of quasigroups whose symmetry
contains the given subgroup of the group S3 forms a variety. Thus, there
exist five of such varieties: commutative, left symmetric, right symmetric,
semi-symmetric and totally symmetric. The remaining quasigroups form the class of all asymmetric quasigroups, which consists of quasigroups with the trivial symmetry group {ι}.
In this article, the following results are obtained:
• criteria of belonging of group isotopes to each of these classes (Theorem 8);
• a classification of group isotopes defined on the same set, according
to their symmetry groups (Table 1 and Corollary 16);
• corollaries for the well-known classes, such as linear (Corollary 16),
medial (Corollary 10) and central quasigroups (Corollary 13);
• every non-medial T-quasigroup is asymmetric (Corollary 14);
• an isotope of a noncommutative group is either semi-symmetric or
asymmetric (Corollary 9);
• classification of linear isotopes of finite cyclic groups (Corollary 16);
• classification of linear group isotopes of prime orders (Theorem 21).
1  Preliminaries
A groupoid (Q; ·) is called a quasigroup, if for all a, b ∈ Q each of the equations x · a = b and a · y = b has a unique solution. For every σ ∈ S3 a σ-parastrophe (·^σ) is defined by

    x1σ ·^σ x2σ = x3σ  ⇐⇒  x1 · x2 = x3,

where S3 := {ι, s, ℓ, r, sℓ, sr} is the symmetric group of degree 3 and s := (12), ℓ := (13), r := (23).

The mapping (σ; (·)) ↦ (·^σ) is an action on the set ∆ of all quasigroup operations defined on Q. The stabilizer Sym(·) is called the symmetry group of (·). Thus, the number of different parastrophes of a quasigroup operation (·) depends on its symmetry group Sym(·). Since Sym(·) is a subgroup of S3, there are six classes of quasigroups. A quasigroup is called

• asymmetric, if Sym(·) = {ι}, i.e., all parastrophes are pairwise different;
• commutative, if Sym(·) ⊇ {ι, s}; the class of all commutative quasigroups is described by the identity xy = yx, and it means that (·) = (·^s), (·^ℓ) = (·^sr), (·^r) = (·^sℓ);
• left symmetric, if Sym(·) ⊇ {ι, r}; the class of all left symmetric quasigroups is described by the identity x · xy = y, and it means that (·) = (·^r), (·^s) = (·^sr), (·^ℓ) = (·^sℓ);
• right symmetric, if Sym(·) ⊇ {ι, ℓ}; the class of all right symmetric quasigroups is described by the identity xy · y = x, and it means that (·) = (·^ℓ), (·^s) = (·^sℓ), (·^r) = (·^sr);
• semi-symmetric, if Sym(·) ⊇ A3; the class of all semi-symmetric quasigroups is described by the identity x · yx = y, and it means that (·) = (·^sℓ) = (·^sr) and (·^s) = (·^ℓ) = (·^r);
• totally symmetric, if Sym(·) = S3; the class of all totally symmetric quasigroups is described by the identities xy = yx and xy · y = x, and it means that all parastrophes coincide.
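For a finite quasigroup given by its multiplication table, the parastrophes and the symmetry group can be computed directly from the defining equivalence x1σ ·^σ x2σ = x3σ. A brute-force sketch (function names are ours):

```python
from itertools import permutations

def parastrophe(mul, sigma, Q):
    """sigma = (1σ, 2σ, 3σ); build the table of x_{1σ} ·^σ x_{2σ} = x_{3σ}."""
    table = {}
    for x1 in Q:
        for x2 in Q:
            t = {1: x1, 2: x2, 3: mul[x1, x2]}
            table[t[sigma[0]], t[sigma[1]]] = t[sigma[2]]
    return table

def symmetry_group(mul, Q):
    """All sigma whose parastrophe coincides with the original operation."""
    return [s for s in permutations((1, 2, 3)) if parastrophe(mul, s, Q) == mul]

# Example: x * y = 2x - y on Z5 satisfies x * (x * y) = y, so r = (23),
# encoded here as sigma = (1, 3, 2), is a symmetry.
Q = range(5)
mul = {(x, y): (2 * x - y) % 5 for x in Q for y in Q}
sym = symmetry_group(mul, Q)
assert (1, 3, 2) in sym and len(sym) == 2   # Sym = {ι, r}: strictly left symmetric
```

Such a table-based check is feasible for small orders and is a convenient way to cross-check the classes listed above.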
We will say, a quasigroup has the property of a symmetry, if it satisfies
one of the following symmetry properties: commutativity, left symmetry,
right symmetry, semi-symmetry or total symmetry.
Let P be an arbitrary proposition in a class of quasigroups A. The proposition σP is said to be a σ-parastrophe of P, if it can be obtained from P by replacing every parastrophe (·^τ) with (·^(τσ⁻¹)); σA denotes the class of all σ-parastrophes of quasigroups from A.
Theorem 1. [22] Let A be a class of quasigroups, then a proposition P
is true in A if and only if σP is true in σA.
Corollary 2. [22] Let P be true in a class of quasigroups A, then σP is
true in σA for all σ ∈ Sym(A).
Corollary 3. [22] Let P be true in a totally symmetric class A, then σP
is true in A for all σ.
A groupoid (Q; ·) is called an isotope of a groupoid (Q; +) iff there
exists a triple of bijections (α, β, γ), which is called an isotopism, such
that the relation x · y := γ −1 (αx + βy) holds. An isotope of a group is
called a group isotope.
A permutation α of a set Q is called a unitary permutation of a group (Q; +), if α(0) = 0, where 0 is the neutral element of (Q; +).
Definition 1. [18] Let (Q; ·) be a group isotope and 0 be an arbitrary
element of Q, then the right part of the formula
x · y = αx + a + βy
(1)
is called a 0-canonical decomposition, if (Q; +) is a group, 0 is its neutral
element and α, β are unitary permutations of (Q; +).
In this case, we say: the element 0 defines the canonical decomposition;
(Q; +) is its decomposition group; α, β are its coefficients and a is its free
member.
Theorem 4. [18] An arbitrary element of a group isotope uniquely defines
a canonical decomposition of the isotope.
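Over the cyclic group (Zm; +) the 0-canonical decomposition of Theorem 4 can be recovered constructively: setting x = y = 0 in (1) gives the free member a, and the columns x · 0 and 0 · y then determine the coefficients, since α and β are unitary. A sketch (helper names are ours):

```python
# Recover the 0-canonical decomposition x*y = alpha(x) + a + beta(y) of an
# isotope of (Zm, +) from its multiplication table.
def canonical_decomposition(mul, m):
    a = mul[0, 0]                                        # alpha(0) = beta(0) = 0
    alpha = {x: (mul[x, 0] - a) % m for x in range(m)}   # x*0 = alpha(x) + a
    beta = {y: (-a + mul[0, y]) % m for y in range(m)}   # 0*y = a + beta(y)
    return alpha, a, beta

m = 7
mul = {(x, y): (3 * x + 5 + 2 * y) % m for x in range(m) for y in range(m)}
alpha, a, beta = canonical_decomposition(mul, m)
assert a == 5 and alpha[1] == 3 and beta[1] == 2
# the recovered decomposition reproduces the operation, as Theorem 4 asserts:
assert all(mul[x, y] == (alpha[x] + a + beta[y]) % m
           for x in range(m) for y in range(m))
```

Uniqueness of the decomposition (for a fixed choice of 0) is visible here: a, α and β are forced by the table.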
Corollary 5. [17] If a group isotope (Q; ·) satisfies an identity
w1 (x) · w2 (y) = w3 (y) · w4 (x)
and the variables x, y are quadratic, then (Q; ·) is isotopic to a commutative group.
Recall, that a variable is quadratic in an identity, if it has exactly
two appearances in this identity. An identity is called quadratic, if all
variables are quadratic. If a quasigroup (Q; ·) is isotopic to a parastrophe
of a quasigroup (Q; ◦), then (Q; ·) and (Q; ◦) are called isostrophic.
The given below Theorem 6 and its Corollary 7 are well known and
can be found in many articles, for example, in [2], [18].
Theorem 6. A triple (α, β, γ) of permutations of a set Q is an autotopism
of a group (Q, +) if and only if there exists an automorphism θ of (Q, +)
and elements b, c ∈ Q such that
α = Lc Rb−1 θ,
β = Lb θ,
γ = Lc θ.
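Theorem 6 can be checked mechanically on a noncommutative group such as S3: for any automorphism θ and elements b, c, the triple (L_c R_b^{-1}θ, L_bθ, L_cθ) satisfies α(x)β(y) = γ(xy), since θ(x)b^{-1} · bθ(y) = θ(xy). A brute-force sketch in multiplicative notation (the concrete g, b, c are our choice):

```python
from itertools import permutations

# S3 realized as permutations of (0, 1, 2); composition (p∘q)(i) = p[q[i]].
G = list(permutations(range(3)))
def op(p, q):
    return tuple(p[q[i]] for i in range(3))
def inv(p):
    r = [0, 0, 0]
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

# An (inner) automorphism theta = conjugation by g, and elements b, c.
g, b, c = (1, 2, 0), (1, 0, 2), (0, 2, 1)
theta = lambda x: op(op(g, x), inv(g))

alpha = lambda x: op(op(c, theta(x)), inv(b))   # alpha = L_c R_b^{-1} theta
beta = lambda y: op(b, theta(y))                # beta  = L_b theta
gamma = lambda z: op(c, theta(z))               # gamma = L_c theta

# The triple (alpha, beta, gamma) is an autotopism: alpha(x) beta(y) = gamma(xy).
assert all(op(alpha(x), beta(y)) == gamma(op(x, y)) for x in G for y in G)
```

The cancellation of b^{-1}·b in the middle is exactly why the theorem gives all autotopisms of a group this shape.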
Corollary 7. Let α, β1, β2, β3, β4 be permutations of a set Q, where α is a unitary permutation of a group (Q, +), and let
α(β1 x + β2 y) = β3 u + β4 v,
where {x, y} = {u, v} holds for all x, y ∈ Q. Then the following statements
are true:
1) α is an automorphism of (Q, +), if u = x, v = y;
2) α is an anti-automorphism of (Q, +), if u = y, v = x.
A quasigroup is a linear group isotope [1], if there exists a group (Q; +),
its automorphisms ϕ, ψ, an arbitrary element c such that for all x, y ∈ Q
x · y = ϕx + c + ψy.
T. Kepka, P. Nemec [11, 12] introduced the concept of T -quasigroups
and studied their properties, namely the class of all T -quasigroups is a variety. T -quasigroups, sometimes, are called central quasigroups. Central
quasigroups are precisely the abelian quasigroups in the sense of universal
algebra [26]. A T -quasigroup is a linear isotope of an abelian group and,
according to Toyoda-Bruck theorem [27, 5], it is medial if and only if coefficients of its canonical decompositions commute. A medial quasigroup [2]
is a quasigroup defined by the identity of mediality
xy · uv = xu · yv.
2  Isotopes of groups
In this section, criteria for group isotopes to have symmetry properties are given and a classification of group isotopes according to their
symmetry groups is described.
2.1  Classification of group isotopes
The criteria for group isotopes to be commutative, left symmetric and
right symmetric are found by O. Kirnasovsky [10]. The criteria of total
symmetry, semi-symmetry and asymmetry are announced by the author
in [13].
The following theorem systematizes all criteria on symmetry and implies a classification of group isotopes according to their symmetry groups.
Theorem 8. Let (Q; ·) be a group isotope and (1) be its canonical decomposition, then
1) (Q; ·) is commutative if and only if (Q; +) is abelian and β = α;
2) (Q; ·) is left symmetric if and only if (Q; +) is abelian and β = −ι;
3) (Q; ·) is right symmetric if and only if (Q; +) is abelian and α = −ι;
4) (Q; ·) is totally symmetric if and only if (Q; +) is abelian and α =
β = −ι;
5) (Q; ·) is semi-symmetric if and only if α is an anti-automorphism of
(Q; +),
β = α−1 , α3 = −Ia−1 , αa = −a, where Ia (x) := −a + x + a;
6) (Q; ·) is asymmetric if and only if (Q; +) is not abelian or −ι 6= α 6=
β 6= −ι and at least one of the following conditions is true: α is not
an anti-automorphism, β 6= α−1 , α3 6= −Ia−1 , αa 6= −a.
Proof. 1) Let a group isotope (Q; ·) be commutative, i.e., the identity xy = yx holds. Using its canonical decomposition (1), we have
αx + a + βy = αy + a + βx.
Corollary 5 implies that (Q; +) is an abelian group. When x = 0, we
obtain α = β.
Conversely, let (Q; +) be an abelian group and β = α, then
x · y = αx + a + αy = αy + a + αx = y · x.
Thus, (Q; ·) is a commutative quasigroup.
2) Let a group isotope (Q; ·) be left symmetric, i.e., the identity x · xy = y holds. Using its canonical decomposition (1), we have
αx + a + β(αx + a + βy) = y.
Replacing αx + a with x, we obtain β(x + βy) = −x + y. Corollary 7 implies that β is an automorphism. When y = 0, we have βx = −x, i.e., β = −ι. Since β = −ι is an automorphism and the inversion map is always an anti-automorphism, (Q; +) is an abelian group.
Conversely, suppose that the conditions of 2) hold, then (Q; ·) is a left symmetric quasigroup. Indeed,
x · xy = αx + a − (αx + a − y) = αx + a − a − αx + y = y.
The proof of 3) is similar to 2). The point 4) follows from 2) and 3).
5) Let a group isotope (Q; ·) be semi-symmetric, i.e., the identity x ·
yx = y holds. Using (1), we have
αx + a + β(αy + a + βx) = y,
hence,
β(αy + a + βx) = −a − αx + y.
Corollary 7 implies that β is an anti-automorphism of (Q; +), therefore
β 2 x + βa + βαy = −a − αx + y.
(2)
When x = y = 0, we obtain βa = −a and, when x = 0, we have βα = ι,
i.e., β = α−1 . Substitute the obtained relations in (2):
α−2 x − a + y = −a − αx + y.
Cancelling y on the right of the equality and replacing x with α2 x, we have
x − a = −a − α3 x, wherefrom −α3 x = a + x − a that is α3 = −Ia−1 .
Conversely, suppose that the conditions of 5) hold. A quasigroup (Q; ·)
defined by x · y := αx + a + α−1 y is semi-symmetric. Indeed,
x · yx = αx + a + α−1 (αy + a + α−1 x).
Since α is an anti-automorphism of the (Q; ·), then
x · yx = αx + a + α−2 x + α−1 a + y.
α3 = −Ia−1 implies a + α−2 x = −αx + a, so,
x · yx = αx − αx + a + α−1 a + y = a + α−1 a + y.
Because α−1 a = −a, then x · yx = y. Thus, (Q; ·) is semi-symmetric.
6) Asymmetry of a group isotope means that it is neither commutative, nor left symmetric, nor right symmetric, nor semi-symmetric. It means that the conditions of 1), 2), 3) and 5) are all false (falsity of 1) implies falsity of 4)).
Falsity of 5) is equivalent to fulfillment of at least one of the conditions:
α is not an anti-automorphism, β 6= α−1 , α3 6= −Ia−1 , α(a) 6= −a. Falsity
of 1), 2), 3) means that (Q; +) is noncommutative or each of the following
inequalities β 6= α, β 6= −ι, α 6= −ι is true. Thus, 6) has been proved.
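The abelian cases 1)–3) of Theorem 8 are easy to confirm by brute force over a small commutative group: for every isotope αx + a + βy of (Z5; +) with unitary α, β, commutativity holds exactly when β = α, left symmetry exactly when β = −ι, and right symmetry exactly when α = −ι. A sketch:

```python
from itertools import permutations

m = 5
Q = range(m)
neg_id = tuple((-x) % m for x in Q)   # the permutation -ι

def holds(mul, identity):
    return all(identity(mul, x, y) for x in Q for y in Q)

commutative = lambda mul, x, y: mul[x, y] == mul[y, x]
left_sym = lambda mul, x, y: mul[x, mul[x, y]] == y       # x · xy = y
right_sym = lambda mul, x, y: mul[mul[x, y], y] == x      # xy · y = x

# Unitary permutations of (Z5, +) are the permutations fixing 0.
unitary = [(0,) + p for p in permutations(range(1, m))]
for alpha in unitary:
    for beta in unitary:
        for a in Q:
            mul = {(x, y): (alpha[x] + a + beta[y]) % m for x in Q for y in Q}
            # Theorem 8 over the abelian group Z5:
            assert holds(mul, commutative) == (beta == alpha)
            assert holds(mul, left_sym) == (beta == neg_id)
            assert holds(mul, right_sym) == (alpha == neg_id)
```

Note that the criteria put no restriction on the other coefficient or on the free member a, and the exhaustive check confirms this.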
From Theorem 8, we can deduce the corollary for the classification of
group isotopes over a noncommutative group.
Corollary 9. An isotope of a noncommutative group is either semisymmetric or asymmetric.
Proof. Theorem 8 implies that an isotope of a noncommutative group can
be semi-symmetric or asymmetric. A group isotope can not be asymmetric and semi-symmetric simultaneously, because according to definition,
a symmetry group of a semi-symmetric quasigroup is A3 or S3 and a
symmetry group of an asymmetric quasigroup is {ι}.
Corollary 10. Commutative, left symmetric, right symmetric and totally
symmetric linear isotopes of a group are medial quasigroups.
Proof. The corollary follows from Theorem 8, because in every of these
cases the decomposition group of a canonical decomposition of a group
isotope is commutative and both of its coefficients are automorphisms
according to assumptions and they commute.
Corollary 11. A nonmedial linear isotope of an arbitrary group is either
semi-symmetric or asymmetric.
Proof. The corollary immediately follows from Theorem 8 and Corollary 10.
2.2
Classification of isotopes of abelian groups
An isotope of a nonabelian group is either semi-symmetric or asymmetric
(see Corollary 9). In other words, commutative, left symmetric, right
symmetric and totally symmetric group isotopes exist only among isotopes
of a commutative group. Consequently, it is advisable to formulate a
corollary about classification of isotopes of commutative groups.
Corollary 12. Let (Q; ·) be an isotope of a commutative group and (1) be
its canonical decomposition, then the following conditions are true: 1)-4)
of Theorem 8 and
5′ ) (Q; ·) is semi-symmetric if and only if α is an automorphism of
(Q; +), β = α−1 , α3 = −ι, αa = −a;
6′ ) (Q; ·) is asymmetric if and only if −ι 6= α 6= β 6= −ι and at least
one of the following conditions is true: α is not an automorphism,
β 6= α−1 , α3 6= −ι, αa 6= −a.
Proof. The proof follows from Theorem 8 taking into account that (Q; +)
is commutative.
The varieties of medial and T-quasigroups are important and well-investigated subclasses of the variety of all group isotopes. These quasigroups have different names in the scientific literature: for example, medial quasigroups are also called entropic or bisymmetric, and T-quasigroups are called central quasigroups [26], linear isotopes of commutative groups, etc.
The next statement gives a classification of varieties of T -quasigroups
according to their symmetry groups.
Corollary 13. Let (Q; ·) be a linear isotope of a commutative group and
(1) be its canonical decomposition, then the following conditions are true:
1)-4) of Theorem 8, 5′ ) of Corollary 12 and
6′′ ) (Q; ·) is asymmetric if and only if −ι 6= α 6= β 6= −ι and at least one
of the following conditions is true: β 6= α−1 , α3 6= −ι, αa 6= −a.
Proof. This theorem immediately follows from Corollary 12.
Corollary 14. Every nonmedial T-quasigroup is asymmetric.
In other words, if a T-quasigroup is not asymmetric, then it is medial.
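Corollary 14 can be illustrated on the smallest natural setting with non-commuting automorphisms, Z5 × Z5: taking x · y = Ax + By with AB ≠ BA gives a non-medial T-quasigroup, and a brute-force check confirms it satisfies none of the symmetry identities. The concrete matrices below are our choice (both have determinant 1 mod 5, hence are invertible):

```python
from itertools import product

p = 5
Q = list(product(range(p), repeat=2))
A = ((1, 1), (0, 1))   # two non-commuting automorphisms of Z5 x Z5
B = ((1, 0), (1, 1))

def mat(M, v):
    return tuple((M[i][0] * v[0] + M[i][1] * v[1]) % p for i in range(2))

def add(u, v):
    return ((u[0] + v[0]) % p, (u[1] + v[1]) % p)

mul = {(x, y): add(mat(A, x), mat(B, y)) for x in Q for y in Q}

# AB != BA, so by the Toyoda-Bruck criterion the quasigroup is not medial ...
not_medial = any(mul[mul[x, y], mul[u, v]] != mul[mul[x, u], mul[y, v]]
                 for x in Q for y in Q for u in Q for v in Q)
# ... and, as Corollary 14 predicts, it is asymmetric: every symmetry fails.
not_any_symmetry = (any(mul[x, y] != mul[y, x] for x in Q for y in Q) and
                    any(mul[x, mul[x, y]] != y for x in Q for y in Q) and
                    any(mul[mul[x, y], y] != x for x in Q for y in Q) and
                    any(mul[x, mul[y, x]] != y for x in Q for y in Q))
assert not_medial and not_any_symmetry
```

No such example exists over a cyclic group, since the automorphisms of Zm (multiplications by units) all commute; this is why a two-generator abelian group is needed.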
3  Linear isotopes of finite cyclic groups
A full description of all n-ary linear isotopes of cyclic groups up to isomorphism is given by F. Sokhatsky and P. Syvakivskyj [20]. All pairwise
non-isomorphic group isotopes up to order 15 and a criterion of their existence are established by O. Kirnasovsky [10]. Some algebraic properties
of non-isomorphic quasigroups are studied by L. Chiriac, N. Bobeica and
D. Pavel [6] using the computer.
Let Q be a set and Is(+) be the set of all isotopes of a group (Q; +). Theorem 8 does not give a partition of Is(+), but it is easy to see that only totally symmetric quasigroups are common to two classes of symmetric quasigroups (i.e., of group isotopes that are not asymmetric).
To emphasize that a group isotope is not totally symmetric, we add the word 'strictly'. For example, the term 'strictly commutative group isotope' means that it is commutative but not totally symmetric.
Theorem 8 implies that a group isotope is not totally symmetric exactly when the coefficients of its canonical decomposition do not both equal −ι. Generally speaking, the set Is(+) is thus partitioned into six subsets.
For a group isotope (Q; ·) with canonical decomposition (1):

• strictly commutative: symmetry group {ι, s}; (Q; +) is abelian, β = α ≠ −ι;
• strictly left symmetric: symmetry group {ι, r}; (Q; +) is abelian, β = −ι ≠ α;
• strictly right symmetric: symmetry group {ι, ℓ}; (Q; +) is abelian, α = −ι ≠ β;
• strictly semi-symmetric: symmetry group A3; α is an anti-automorphism of (Q; +), β = α−1, αa = −a, α3 = −Ia−1 (where Ia(x) := −a + x + a), and (Q; +) is non-abelian or α ≠ −ι;
• totally symmetric: symmetry group S3; (Q; +) is abelian, α = β = −ι;
• asymmetric: symmetry group {ι}; (Q; +) is non-abelian or −ι ≠ α ≠ β ≠ −ι, and at least one of the following holds: α is not an anti-automorphism of (Q; +), β ≠ α−1, α3 ≠ −Ia−1, αa ≠ −a.

Table 1. A partition of group isotopes.
Consider the set of all linear isotopes of a finite cyclic group. Their description up to isomorphism has been found by F. Sokhatsky and P. Syvakivskyj [20].
We will use the following notation: Zm denotes the ring of integers
modulo m; Z∗m the group of invertible elements of the ring Zm ; and
(α, β, d), where α, β ∈ Z∗m and d ∈ Zm , denotes an operation (◦) which is
defined on Zm by the equality
x ◦ y = α · x + β · y + d.
(3)
Since every automorphism θ of the cyclic group (Zm ; +) can be defined by
θ(x) = k · x for some k ∈ Z∗m , then linear isotopes of (Zm ; +) are exactly
the operations being defined by (3), i.e., they are the triples (α, β, d).
Theorem 15. [20] An arbitrary linear isotope of a cyclic group of order m is isomorphic to exactly one isotope (Zm, ◦) defined by (3), where α, β is a pair of invertible elements of the ring Zm and d is a common divisor of µ = α + β − 1 and m.
Classification of linear group isotopes of Zm according to their symmetry groups is given in the following corollary.
Corollary 16. Let Zm be the ring of integers modulo m and let (α, β, d) be an arbitrary linear isotope of (Zm; +), where d is a common divisor of m and α + β − 1. Then an arbitrary linear isotope of an m-order cyclic group is isomorphic to exactly one isotope (Zm, ◦) defined by (3); moreover, the group isotope (Zm, ◦)

• is strictly commutative (symmetry group {1, s}) iff β = α ≠ −1;
• is strictly left symmetric (symmetry group {1, r}) iff β = −1 ≠ α;
• is strictly right symmetric (symmetry group {1, ℓ}) iff α = −1 ≠ β;
• is strictly semi-symmetric (symmetry group A3) iff α ≠ −1, β = α−1, α3 = −1, αd = −d;
• is totally symmetric (symmetry group S3) iff α = β = −1;
• is asymmetric (symmetry group {1}) iff −1 ≠ α ≠ β ≠ −1 and at least one of β ≠ α−1, α3 ≠ −1, αd ≠ −d holds.
3.1  Linear group isotopes of prime orders
Group isotopes and linear isotopes were studied by many authors: V. Belousov [1], E. Falconer [9], T. Kepka and P. Nemec [11], [12], V. Shcherbacov [23], F. Sokhatsky [21], A. Drápal [7], G. Belyavskaya [3] and others.
In this part of the article, we give a full classification of linear group isotopes of prime order up to isomorphism and according to their symmetry groups.
Theorem 15 implies the following statements.
Corollary 17. [20] Linear group isotopes of a prime order, which are
defined by a pair (α, β) are pairwise isomorphic, if α+β 6= 1. If α+β = 1,
then they are isomorphic to either (α, β, 0) or (α, β, 1).
Corollary 18. [20, 25] There exist exactly p2 −p−1 linear group isotopes
of a prime order p up to isomorphism.
Let p be prime, then Zp is a field, so, according to Corollary 17, there
are two kinds of group isotopes of the cyclic group Zp :
• M0 := {(α, β, 0) | α, β = 1, 2, . . . , p − 1};
• M1 := {(α, 1 − α, 1) | α = 2, . . . , p − 1}.
For brevity, the symbols cs, ls, rs, ts, ss, as denote respectively strictly commutative, strictly left symmetric, strictly right symmetric, totally symmetric, strictly semi-symmetric and asymmetric quasigroups. For example, M0ss denotes the set of all strictly semi-symmetric group isotopes from M0.
Quasigroups of orders 2 and 3. Every quasigroup of order 2 or 3 is isotopic to a cyclic group. Only ι is a unitary substitution of Z2, so Theorem 4 implies that there exist two group isotopes:

    x ◦0 y := x + y   and   x ◦1 y := x + y + 1.

They are isomorphic and ϕ(x) := x + 1 is the corresponding isomorphism.
Proposition 19. All quasigroups of the order 2 are pairwise isomorphic.
There exist two unitary substitutions of the group Z3: ι and (12), and both of them are automorphisms of the cyclic group Z3. Thus, Theorem 4 implies that all quasigroups of order 3 are linear isotopes of the cyclic group. So, Corollary 17 implies that
M0 = {(1, 1, 0), (2, 2, 0), (1, 2, 0), (2, 1, 0)},
M1 = {(2, 2, 1)}.
According to Corollary 18 and Corollary 16, we obtain the following result.
Proposition 20. There exist exactly five 3-order quasigroups up to isomorphism, which can be distributed into four blocks:
1) strictly commutative: (1, 1, 0);
2) strictly left symmetric: (1, 2, 0);
3) strictly right symmetric: (2, 1, 0);
4) totally symmetric: (2, 2, 0), (2, 2, 1).
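Proposition 20 is easy to verify by brute force: classifying the five representatives by directly testing the defining identities reproduces the four blocks (helper names are ours):

```python
def classify(mul, Q):
    """Return the set of symmetry properties a finite quasigroup satisfies."""
    props = set()
    if all(mul[x, y] == mul[y, x] for x in Q for y in Q):
        props.add("commutative")
    if all(mul[x, mul[x, y]] == y for x in Q for y in Q):
        props.add("left symmetric")
    if all(mul[mul[x, y], y] == x for x in Q for y in Q):
        props.add("right symmetric")
    if all(mul[x, mul[y, x]] == y for x in Q for y in Q):
        props.add("semi-symmetric")
    return props

Q = range(3)
iso = lambda a, b, d: {(x, y): (a * x + b * y + d) % 3 for x in Q for y in Q}

assert classify(iso(1, 1, 0), Q) == {"commutative"}
assert classify(iso(1, 2, 0), Q) == {"left symmetric"}
assert classify(iso(2, 1, 0), Q) == {"right symmetric"}
# (2, 2, 0) and (2, 2, 1) are totally symmetric: all properties hold.
for d in (0, 1):
    assert classify(iso(2, 2, d), Q) >= {"commutative", "left symmetric",
                                         "right symmetric", "semi-symmetric"}
```
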
Linear group isotopes of the order p > 3. Full description of
these group isotopes is given in the following theorem.
Theorem 21. The set of all pairwise non-isomorphic group isotopes of
prime order p (p > 3) is equal to
M = {(α, β, 0) | α, β = 1, 2, . . . , p−1} ∪ {(α, 1−α, 1) | α = 2, . . . , p−1}.
The set M equals union of the following disjoint sets:
1) the set of all strictly commutative group isotopes
M cs = {(1, 1, 0), (2, 2, 0), . . . , (p − 2, p − 2, 0), (2−1 , 2−1 , 1)};
2) the set of all strictly left symmetric group isotopes
M ls = {(1, p − 1, 0), (2, p − 1, 0), . . . , (p − 2, p − 1, 0), (2, p − 1, 1)};
3) the set of all strictly right symmetric group isotopes
M rs = {(p − 1, 1, 0), (p − 1, 2, 0), . . . , (p − 1, p − 2, 0), (p − 1, 2, 1)};
4) the set of all totally symmetric group isotopes
M ts = {(p − 1, p − 1, 0)};
5) the set M ss of all strictly semi-symmetric group isotopes is empty if p − 3 is not a quadratic residue modulo p; but if there exists k such that p − 3 = k2 modulo p, then the set is equal to

M ss = { ((1 + k)2−1, 2(1 + k)−1, 0), ((1 − k)2−1, 2(1 − k)−1, 0) };

6) the set of all asymmetric group isotopes is equal to

M as = ( {(3, p−2, 1), . . . , (p−2, 3−p, 1)} \ {(2−1, 1−2−1, 1)} ) ∪ ( {(α, β, 0) | α, β = 1, 2, 3, . . . , p − 2, α ≠ β} \ M ss ).
Proof. Let (α, β, d) be an arbitrary group isotope.
If it is commutative, then, according to Corollary 16, we obtain
M0cs = {(α, α, 0) | α = 1, 2, . . . , p − 2},
|M0cs | = p − 2.
If d = 1, then Corollary 17 implies 2α = 1, i.e., M1cs = {(2−1 , 2−1 , 1)}.
Thus, the set of all pairwise non-isomorphic commutative group isotopes
is M cs = M0cs ∪ M1cs and |M cs | = p − 1.
Consider left symmetric quasigroups. According to Corollary 16, we
have
M0ls = {(α, p − 1, 0) | α = 1, 2, . . . , p − 2},
|M0ls | = p − 2.
If d = 1, then Corollary 17 implies α = 2 and, by Corollary 16, M1ls =
{(2, −1, 1)}. Thus, the set of all pairwise non-isomorphic left symmetric
group isotopes is M ls = M0ls ∪ M1ls and |M ls | = p − 1.
The relationships for right symmetric quasigroups can be proved in
the same way.
In virtue of Corollary 16, an arbitrary totally symmetric isotope is
defined by the pair (−1, −1) = (p − 1, p − 1) of automorphisms. According
to Corollary 17, d = 0, 1. Suppose that d = 1, then p − 1 + p − 1 = 1, i.e.,
2p = 3. Since p > 3, this equality is impossible, so d = 0. Thus,
M ts = {(p − 1, p − 1, 0)} and |M ts | = 1.
Consider semi-symmetric quasigroups. According to Corollary 17,
α 6= −1,
α3 = −1,
αd = −d,
where d = 0, 1. But d 6= 1, since α 6= −1, so d = 0. The equality α3 = −1
is equivalent to (α + 1)(α2 − α + 1) = 0. It is equivalent to α2 − α + 1 = 0.
It is easy to prove that α exists if and only if p − 3 is a quadratic residue
modulo p. If p − 3 = k2 modulo p, then
M ss = { ((1 + k)2−1, 2(1 + k)−1, 0), ((1 − k)2−1, 2(1 − k)−1, 0) }.
Consequently, |M ss | = 2, and M ss = ∅ otherwise.
Let (α, β, d) be an arbitrary asymmetric group isotope. Corollary 17
implies that d ∈ {0, 1}. Since the given isotope is neither commutative,
nor left symmetric, nor right symmetric, nor totally symmetric, then,
according to Corollary 16, α, β 6∈ {0, p − 1} and α 6= β.
In the case when d = 1, then from Corollary 17, it follows that β = 1−α
and the relationships α, β 6∈ {0, p − 1}, α 6= β imply
α ∉ {0, 1, 2, (p + 1)/2, p − 1}.
Since for d = 1 semi-symmetric group isotopes do not exist, then

M1as = { (α, 1 − α, 1) | α = 3, . . . , p − 2, α ≠ (p + 1)/2 },   |M1as| = p − 5.
The set M0as depends on the existence of semi-symmetric isotopes. Nevertheless,

M0as = { (α, β, 0) | α, β = 1, 2, 3, . . . , p − 2, α ≠ β } \ M ss,

|M0as| = p2 − 5p + 4 if p − 3 is a quadratic residue modulo p, and p2 − 5p + 6 otherwise.
Note that F. Radó [15] proved that a semi-symmetric group isotope
of prime order p exists if and only if p − 3 is a quadratic residue modulo
p.
The general formula for the number of pairwise non-isomorphic linear group isotopes of prime order p (Corollary 18) was found by F. Sokhatsky and P. Syvakivskyj [20] and also by K. Shchukin [25].
Corollary 22. A number of all linear group isotopes of the prime order
p > 3 up to isomorphism is equal to p2 − p − 1 and it is equal to the sum
of the following numbers:
1) p − 1 of strictly commutative quasigroups;
2) p − 1 of strictly left symmetric quasigroups;
3) p − 1 of strictly right symmetric quasigroups;
4) 1 of the totally symmetric quasigroup;
5) 2 of semi-symmetric quasigroups, if p−3 is quadratic residue modulo
p and 0 otherwise;
6) (p − 2)2 − 5 asymmetric quasigroups, if p − 3 is quadratic residue
modulo p and (p − 2)2 − 3 otherwise.
Proof. The proof immediately follows from the proof of Theorem 21.
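Theorem 21 and Corollary 22 can be confirmed computationally for a concrete prime, say p = 7 (here p − 3 = 4 = 2² is a quadratic residue, so two semi-symmetric isotopes exist). A brute-force sketch:

```python
def sym_type(mul, Q):
    """Classify a finite quasigroup by directly testing the identities."""
    c = all(mul[x, y] == mul[y, x] for x in Q for y in Q)
    l = all(mul[x, mul[x, y]] == y for x in Q for y in Q)
    r = all(mul[mul[x, y], y] == x for x in Q for y in Q)
    s = all(mul[x, mul[y, x]] == y for x in Q for y in Q)
    if c and l and r:
        return "ts"
    if c: return "cs"
    if l: return "ls"
    if r: return "rs"
    if s: return "ss"
    return "as"

p = 7
Q = range(p)
# The representatives M0 ∪ M1 of Theorem 21:
reps = ([(a, b, 0) for a in range(1, p) for b in range(1, p)] +
        [(a, (1 - a) % p, 1) for a in range(2, p)])
assert len(reps) == p * p - p - 1   # Corollary 18: 41 isotopes for p = 7

counts = {}
for a, b, d in reps:
    mul = {(x, y): (a * x + b * y + d) % p for x in Q for y in Q}
    t = sym_type(mul, Q)
    counts[t] = counts.get(t, 0) + 1

# The per-class counts of Corollary 22, in the quadratic-residue case:
assert counts == {"cs": p - 1, "ls": p - 1, "rs": p - 1, "ts": 1,
                  "ss": 2, "as": (p - 2) ** 2 - 5}
```

The two semi-symmetric isotopes found this way are (3, 5, 0) and (5, 3, 0), matching the formula of Theorem 21, 5) with k = 2.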
Acknowledgment. The author is grateful to her scientific supervisor
Prof. Fedir Sokhatsky for the design idea and the discussion of this article,
to the members of his scientific School for helpful discussions and to the
reviewer of English Vira Obshanska.
References
[1] Belousov V.D., Balanced identities in quasigroups, Matem. Sbornik
(1966) V.70. No.1 55–97 (Russian).
[2] Belousov V.D., Foundations of the theory of quasigroups and loops,
M.: Nauka (1967), 222 (Russian).
[3] Belyavskaya G.B., Quasigroups: identities with permutations, linearity and nuclei, LAP Lambert Academic Publishing, (2013), 71.
[4] Belyavskaya G.B., Popovich T.V., Conjugate sets of loops and quasigroups. DC-quasigroups, Buletinul Academiei de tiine a Republicii
Moldova, 1(68), (2012), 21–31.
[5] Bruck R.H., Some results in the theory of quasigroups,
Trans. Amer. Math. Soc. 55, (1944), 19–52.
[6] Chiriac Liubomir, Bobeica Natalia, Pavel Dorin, Study on properties of non-isomorphic finite quasigroups using the computer, Proceedings of the Third Conference of Mathematical Society of Moldova,
Chisinau, Republic of Moldova, IMCS-50, August 19-23, (2014), 44–
47.
[7] Drápal A., Group isotopes and a holomorphic action, Result. Math.,
54 (2009), no.3-4, 253–272.
[8] Evans T., On multiplicative systems defined by generators and relations, Proc. Camb. Phil. Soc., 47 (1951), 637–649.
[9] Falconer E., Isotopes of some special quasigroup varieties, Acta Math.
Acad. Sci. Hung. 22 (1971), 73–79.
[10] Kirnasovsky Oleg U., Linear isotopes of small orders of groups,
Quasigroups and related systems, 2 n.1(2) (1995), 51–82.
[11] Kepka T., Nemec P., T-quasigroups I, Acta Universitatis Carolinae
Math. et Phys., (1971), 12, No.1, 39–49.
[12] Kepka T., Nemec P., T-quasigroups II, Acta Universitatis Carolinae
Math. et Phys., (1971), 12, No.2, 31–49.
[13] Krainichuk Halyna, About classification of quasigroups according to
symmetry groups, Proceedings of the Third Conference of Mathematical Society of Moldova, Chisinau, Republic of Moldova, IMCS-50,
August 19-23, (2014), 112–115.
[14] Movsisyan Yu.M., Hiperidentities in algebras and varieties, Uspehi
Mat. Nauk, 53, (1998), 61–114. (Russian)
[15] Radó F., On semi-symmetric quasigorups, Aequationes mathematicae Vol.11, Issue 2, (1974), (Cluj, Romania) 250–255.
12
[16] Smith J.D.H., An introduction to Quasigroups and Their Representation, Studies in Advanced Mathematics. Chapman and Hall/CRC,
London, (2007).
[17] Sokhatsky F.M., On group isotopes I, Ukrainian Math. Journal,
47(10) (1995), 1585–1598.
[18] Sokhatsky F.M., On group isotopes II, Ukrainian Math.J., 47(12)
(1995), 1935–1948.
[19] Sokhatsky F.M., On group isotopes III, Ukrainian Math.J., 48(2)
(1996), 283–293.
[20] Sokhatsky Fedir, Syvakivskyj Petro, On linear isotopes of cyclic
groups, Quasigroups and related systems, 1 n.1(1) (1994), 66–76.
[21] Sokhatsky Fedir M., Some linear conditions and their application to
describing group isotopes, Quasigroups and related systems, 6 (1999),
43–59.
[22] Sokhatsky F.M., Symmetry in quasigroup and loop theory, 3rd Mile High Conference on Nonassociative Mathematics, Denver, Colorado, USA, August 11-17, (2013);
http://web.cs.du.edu/∼petr/milehigh/2013/Sokhatsky.pdf.
[23] Shcherbacov V.A., On linear quasigroups and groups of automorphisms, Matemat. issledovaniya. Kishinev (1991). Vol.120, 104–113
(Russian).
[24] Shcherbacov V.A., A-nuclei and A-centers of a quasigroup, Technical
report, Central European University, Department of Mathematics
and its Applications, February 2011. (2011), 55.
[25] Shchukin K.K., Gushan V.V., Representation of parastrophes of
quasigroups and loops, Diskret. Matemat. 16(4) (2004), 149–157
(Russian).
[26] Szendrei Â., Modules in general algebra, Contributions to general
algebra 10 (Proc. Klagenfurt Conf. 1997), (1998), 41–53.
[27] Toyoda K., On axioms of linear functions, Proc.Imp.Acad. Tokyo,
17, (1941), 221–227.
Department of mathematical analysis and differential equations,
Faculty of Mathematics and Information Technology
Donetsk National University
Vinnytska oblast, Ukraine 21000
e-mail: [email protected]
CAMERA-TRAP IMAGES SEGMENTATION USING MULTI-LAYER ROBUST PRINCIPAL
COMPONENT ANALYSIS
Jhony-Heriberto Giraldo-Zuluaga*, Alexander Gomez*, Augusto Salazar*, and Angélica Diaz-Pulido†

arXiv:1701.08180v2 [] 30 Dec 2017

* Grupo de Investigación SISTEMIC, Facultad de Ingeniería, Universidad de Antioquia UdeA, Calle 70 No. 52-21, Medellín, Colombia
† Instituto de Investigación de Recursos Biológicos Alexander von Humboldt, Calle 28A No. 15-09, Bogotá D.C, Colombia
ABSTRACT
Camera trapping is a technique to study wildlife using automatically triggered cameras. However, camera trapping collects many false positives (images without animals), so the images must be segmented before the classification step. This paper presents a Multi-Layer Robust Principal Component Analysis (RPCA) for camera-trap image segmentation. Our Multi-Layer RPCA uses histogram equalization and Gaussian filtering as pre-processing, texture and color descriptors as features, and morphological filters with active contours as post-processing. The experiments focus on computing the sparse and low-rank matrices with different amounts of camera-trap images. We tested the Multi-Layer RPCA on our camera-trap database. To the best of our knowledge, this paper is the first work proposing Multi-Layer RPCA and using it for camera-trap image segmentation.
Index Terms— Camera-trap images, Multi-Layer Robust
Principal Component Analysis, background subtraction, image segmentation.
1. INTRODUCTION
The study and monitoring of mammal and bird species can be performed using non-invasive sampling techniques. These techniques allow us to observe animal species for conservation purposes, e.g., to estimate population sizes of endangered species. Camera trapping is a method to digitally capture wildlife images. This method facilitates the recording of terrestrial vertebrate species, e.g., cryptic species. Consequently, camera traps can generate large volumes of information in short periods of time, and contributions to camera trapping are important for better species-conservation decisions.
Camera traps are devices that capture animal images in the wild. These devices consist of a digital camera and a motion detector. They are triggered when the motion sensor detects movement, and their triggering depends on the temperature of the source relative to the environment. Biologists can monitor wildlife with camera traps to detect rare species, delineate species distributions, monitor animal behavior, and measure other biological rates [1]. Camera traps generate large volumes of information; for example, a camera-trapping study can generate up to 200000 images, of which only 1% is valuable [2]. As a consequence, biologists have to analyze thousands of photographs manually. Nowadays, software solutions cannot handle the growing number of images in camera trapping [3]. Accordingly, it is important to develop algorithms to assist the post-processing of camera-trap images.
Background subtraction techniques can help to segment animals in camera-trap images. There is a significant body of research on background subtraction focused on video surveillance [4]. Nevertheless, there are not enough methods that can handle the complexity of natural dynamic scenes [5]. Camera-trap image segmentation is important for animal detection and classification. Camera-trap images usually contain rippling water, moving shadows, swaying trees and leaves, sun spots, and scene changes, among others. Consequently, the models used to segment those types of images should have robust feature extractors. There are some segmentation methods in the literature applied to camera-trap images. Reddy and Aravind proposed a method to segment tigers in camera-trap images, using texture and color features with active contours [6]; they do not report an objective evaluation of their method. Ren et al. developed a method to segment images from dynamic scenes, including camera-trap images; the method uses Bag of Words (BoW), Histogram of Oriented Gradients (HOG), and graph-cut energy minimization [7]; they do not show results on camera-trap images. Zhang et al. developed a method to segment animals from video sequences, using camera-trap images; the method also uses BoW, HOG, and graph-cut energy minimization [8]; they obtained an average f-measure of 0.8695 on their own camera-trap data set.
Robust Principal Component Analysis (RPCA) is a method derived from Principal Component Analysis. RPCA assumes that a data matrix can be decomposed into a low-rank and a sparse matrix. RPCA has recently seen significant activity in many areas of computer science, particularly in background subtraction. As a result, there are several algorithms to solve the RPCA problem [9, 10, 11, 12, 13]. In this work, we propose a Multi-Layer RPCA approach to segment animals in camera-trap images. Our method combines color and texture descriptors as feature extractors and solves the RPCA problem with several state-of-the-art algorithms. To our knowledge, this paper is the first work proposing a Multi-Layer RPCA approach and using it for camera-trap image segmentation.
The paper is organized as follows. Section 2 presents the materials and methods. Section 3 describes the experimental framework. Section 4 presents the experimental results and discussion. Finally, Section 5 presents the conclusions.
2. MATERIALS AND METHODS
This section briefly explains the algorithms and metrics used in this paper.
2.1. Robust Principal Component Analysis

An image can be decomposed into a low-rank and a sparse matrix. Equation 1 states the RPCA problem, where M is the data matrix, L0 is the low-rank matrix, and S0 is the sparse matrix. In background subtraction, the low-rank matrix is the background and the sparse matrix is the foreground.

M = L0 + S0    (1)

The RPCA problem can be solved with the convex program Principal Component Pursuit (PCP). This program computes L and S by minimizing the objective function in Equation 2, where ||L||* denotes the nuclear norm of the low-rank matrix, ||S||1 denotes the l1-norm of the sparse matrix, and λ is a regularizing parameter. Several algorithms perform PCP, such as the Accelerated Proximal Gradient (APG) and the Augmented Lagrange Multiplier (ALM) [14].

minimize ||L||* + λ||S||1  subject to  L + S = M    (2)

2.2. Multi-Layer Robust Principal Component Analysis

Equation 3 defines the data matrix M in our Multi-Layer RPCA method, where β ∈ [0, 1] is a weight indicating the contribution of the texture function to the overall data matrix. The function ft(x, y) denotes the texture descriptor extracted from each image using the classic Local Binary Pattern (LBP) [15]; LBP describes the texture of an image using the neighborhood of each pixel. The function fc(x, y) denotes the color transformation of each image, a conversion to gray scale in this case. Our Multi-Layer RPCA computes the L and S matrices from the matrix M of Equation 3. Texture descriptors work robustly in richly textured regions with light variation; however, they do not work efficiently on uniform regions such as water, the sky, and others. Color descriptors can overcome this limitation of texture descriptors [16]. The Multi-Layer approach proposed in this paper was tested on camera-trap images for wildlife image segmentation.

M(x, y) = β ft(x, y) + (1 − β) fc(x, y)    (3)

2.3. Evaluation Metrics

The f-measure metric was chosen to evaluate the performance of the Multi-Layer RPCA. Equation 4 gives the f-measure, where precision and recall are extracted from the confusion matrix. Precision is the proportion of predicted positives that are correctly real positives; likewise, recall is the proportion of real positives that are correctly predicted [17]. The confusion matrix is computed by comparing the ground truth (GT) with the automatically segmented images.

f-measure = 2 · precision · recall / (precision + recall)    (4)

[Figure 1 panels: (a) original image, (b) ground truth]

Fig. 1: Ground truth example for the evaluation process: (a) original image, (b) ground truth (manually segmented) image.

3. EXPERIMENTAL FRAMEWORK

This section introduces the database used in this paper, the experiments executed, and the implementation details of our Multi-Layer RPCA.

3.1. Database

The Alexander von Humboldt Institute performs samplings with camera traps in different regions of the Colombian forest. We selected 25 cameras from 8 regions, where each camera has a relatively unaltered environment. Each camera was placed at its site for 1 to 3 months. We extracted 30 days and 30 nights of images from those cameras, in daytime color and nighttime infrared formats respectively. The database consists of 1065 GT images from the 30 days and 30 nights. The images have a spatial resolution of 3264x2448 pixels. The length of each day or night data set varies from 9 to 72 images, depending on the animal activity that day or night. Figure 1 shows an example of the GT images.
[Figure 2 diagram: raw image → histogram equalization → LBP texture and color features → RPCA → low-rank (background) and sparse (foreground) matrices]

Fig. 2: Block diagram of the pre-processing methods used in Experiments 1, 3, and 4.

[Figure 3 diagram: raw image → Gaussian filter → LBP texture and color features → RPCA → low-rank and sparse matrices]

Fig. 3: Block diagram of the pre-processing methods used in Experiments 2 and 3.
3.2. Experiments
The experiments computed the background models with
different conditions and amount of images, observing the
robustness of the Multi-Layer RPCA and the influence of
pre-processing in the results. All experiments performed our
method with β = [0, 0.05, 0.1, 0.15, . . . , 1]. Accordingly,
Experiment 1 uses histogram equalization as pre-processing
in the color transformed image, Figure 2 shows the preprocessing for each raw image. The background model is
computed with entire days e.g. we take all images of day 1
and solve the RPCA problem in the Experiment 1. Experiment 2 computes the background model with entire nights;
Figure 3 shows the pre-processing for each raw image. Experiment 3 takes entire days and nights e.g. we take all images of
day 1 and night 1 to solve the RPCA problem. Experiment 3
uses two pre-processes, daytime images uses the pre-process
in Figure 2 and nighttime images uses the pre-process in Figure 3. Experiment 4 takes entire days and nights such as the
Experiment 3, but it only uses the pre-processing in Figure 2
for all images.
We tested 9 algorithms to solve the RPCA problem in this paper: Active Subspace RPCA (AS-RPCA) [9]; Exact ALM (EALM), Inexact ALM (IALM), Partial APG (APG-PARTIAL), and APG [10]; Lagrangian Alternating Direction Method (LSADM) [11]; Non-Smooth Augmented Lagrangian v1 (NSA1) and v2 (NSA2) [12]; and Probabilistic Robust Matrix Factorization (PRMF) [13].
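As a reference point for these solvers, the core inexact-ALM iteration for PCP can be sketched in a few lines — singular value thresholding for L and entrywise shrinkage for S. This is not the LRSLibrary code used in the paper; λ = 1/√max(m, n) and the μ schedule below are standard default assumptions.

```python
import numpy as np

def rpca_ialm(M, lam=None, rho=1.5, n_iter=200, tol=1e-7):
    """Inexact augmented-Lagrange-multiplier loop for PCP: M = L + S."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    norm2 = np.linalg.norm(M, 2)                 # spectral norm of M
    mu, mu_max = 1.25 / norm2, 1e7 * 1.25 / norm2
    Y = M / max(norm2, np.abs(M).max() / lam)    # standard dual initialization
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # L-step: singular value thresholding of M - S + Y/mu
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # S-step: entrywise soft-thresholding (shrinkage)
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Z = M - L - S                            # constraint residual
        Y = Y + mu * Z
        mu = min(rho * mu, mu_max)
        if np.linalg.norm(Z) / norm2 < tol:
            break
    return L, S
```

On a matrix that is genuinely low-rank plus sparse, the loop typically reaches the tolerance in a few dozen iterations.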
The foreground was obtained by applying a post-process to the sparse matrix. The post-processing was the same for all experiments. This stage includes a hard threshold, morphological filters, and active contours with a negative contraction bias [18]. Figure 4 shows the post-processing used.

[Figure 4 diagram: hard threshold, median filter, morphological opening and closing, active contours, foreground]

Fig. 4: Block diagram of the post-processing.

Finally, the f-measure was computed by comparing each GT with
each foreground. The average f-measure was computed as the
mean of all f-measures. The results are displayed as a plot of
the average f-measure vs β.
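This evaluation step admits a direct implementation. The following sketch, a minimal stand-in for the paper's Matlab evaluation code, computes precision, recall, and the f-measure of Equation 4 from a pair of binary masks.

```python
import numpy as np

def f_measure(gt, seg):
    """F-measure between binary ground-truth and segmentation masks."""
    gt = gt.astype(bool)
    seg = seg.astype(bool)
    tp = np.sum(gt & seg)     # true positives
    fp = np.sum(~gt & seg)    # false positives
    fn = np.sum(gt & ~seg)    # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Averaging this value over all GT/foreground pairs gives the average f-measure plotted against β.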
3.3. Implementation Details
The RPCA algorithms were computed using the Sobral et al.
library [19]. The rest of the source code was developed using
the image processing toolbox of Matlab.
4. RESULTS
This section presents the results and discussion of the experiments introduced in Section 3, using the metrics explained in Section 2.3.
Figures 5a and 5b show the average f-measure vs β of Experiments 1 and 2 for all chosen RPCA algorithms. Table 1 summarizes the best results for each experiment. APG-PARTIAL was the best algorithm in Experiments 1 and 2. Daytime images have richly textured regions; in contrast, nighttime images have uniform color. Texture representations are more important on daytime images in Experiment 1, since β = 0.6; on the contrary, color descriptors are more important on nighttime images in Experiment 2, since β = 0.3. Those results show the importance of combining color and texture descriptors. Figure 5 shows the performance of the plain RPCA algorithms at β = 0; thus, our Multi-Layer RPCA outperforms the plain RPCA methods.
Figures 5c and 5d show the average f-measure vs β of Experiments 3 and 4. Table 1 shows that dividing the pre-processing by daytime or nighttime in Experiment 3 does not make a big difference in the results, but it increases the number of fine-tuning parameters. Table 1 also shows that NSA2 was the best algorithm in Experiments 3 and 4, contrary to Experiments 1 and 2 where APG-PARTIAL was the best. The NSA2 algorithm is a better choice than the other RPCA algorithms if we cannot differentiate between daytime and nighttime images, or if it is difficult to do so. On the other hand, APG-PARTIAL is better if we have information about the infrared activation.
[Figure 5 plots: average f-measure vs β for Experiments 1-4; curves for AS-RPCA, EALM, IALM, APG-PARTIAL, APG, LSADM, NSA1, NSA2, and PRMF]

Fig. 5: Results of the proposed experiments: (a) average f-measure vs β per day, (b) average f-measure vs β per night, (c) average f-measure vs β per days and nights with two different pre-processes, (d) average f-measure vs β per days and nights with one pre-process.

[Figure 6 panels: (a) ground truth, (b) sparse, (c) foreground, (d) ground truth, (e) sparse, (f) foreground]

Fig. 6: Visual results of the Multi-Layer RPCA: (a) original daytime image, (b) sparse matrix after the hard threshold with APG-PARTIAL and β = 0.6, (c) foreground, (d) original nighttime image, (e) sparse matrix after the hard threshold with APG-PARTIAL and β = 0.3, (f) foreground.

Figure 6 shows two visual results of the Multi-Layer RPCA. Figure 6a shows a daytime image without any pre-processing. Figure 6d shows an original nighttime image.
Figures 6b and 6e show the sparse matrix after the hard
threshold. Figures 6c and 6f show the foreground image.
These color renderings are overlaid with the GT images: yellow regions mark pixels present in both the GT and the automatic segmentation, while red and green regions visualize under- and over-segmentation, respectively.
5. CONCLUSIONS
We proposed a Multi-Layer RPCA for camera-trap image
segmentation, using texture and color descriptors.

Table 1: β values and algorithms for the best performances of each experiment.

Experiment     Algorithm      β      Avg f-measure
Experiment 1   APG-PARTIAL    0.6    0.7539
Experiment 2   APG-PARTIAL    0.3    0.7393
Experiment 3   NSA2           0.45   0.7259
Experiment 4   NSA2           0.55   0.7261

The proposed algorithm is composed of pre-processing, an RPCA algorithm, and post-processing. The pre-processing uses histogram equalization, Gaussian filtering, or a combination of
both. The RPCA algorithm computes the sparse and low-rank
matrices for background subtraction. The post-processing
computes morphological filters and active contours with a negative contraction bias. We evaluated the Multi-Layer RPCA algorithm on a camera-trap image database from the Colombian forest. The database was manually segmented to extract the f-measure of each automatically segmented image. We reached average f-measures of 0.7539 and 0.7393 on daytime and nighttime images, respectively. The average f-measure was computed over all GT images. To the best of our knowledge, this paper is the first work proposing Multi-Layer RPCA and using it for camera-trap image segmentation.
Acknowledgment. This work was supported by the Colombian National Fund for Science, Technology and Innovation,
Francisco José de Caldas - COLCIENCIAS (Colombia).
Project No. 111571451061.
6. REFERENCES
[1] A. F. O’Connell, J. D. Nichols, and K. U. Karanth, Camera traps in animal ecology: methods and analyses.
Springer Science & Business Media, 2010.
[2] A. Diaz-Pulido and E. Payan, “Densidad de ocelotes
(leopardus pardalis) en los llanos colombianos,” Mastozoologı́a neotropical, vol. 18, no. 1, pp. 63–71, 2011.
[3] E. H. Fegraus, K. Lin, J. A. Ahumada, C. Baru, S. Chandra, and C. Youn, “Data acquisition and management
software for camera trap data: A case study from the
team network,” Ecological Informatics, vol. 6, no. 6,
pp. 345–353, 2011.
[4] S. Brutzer, B. Höferlin, and G. Heidemann, “Evaluation of background subtraction techniques for video
surveillance,” in Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pp. 1937–1944,
IEEE, 2011.
[5] V. Mahadevan and N. Vasconcelos, “Spatiotemporal
saliency in dynamic scenes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 1,
pp. 171–177, 2010.
[6] K. P. K. Reddy and R. Aravind, “Segmentation of
camera-trap tiger images based on texture and color features,” in Communications (NCC), 2012 National Conference on, pp. 1–5, IEEE, 2012.
[7] X. Ren, T. X. Han, and Z. He, “Ensemble video object cut in highly dynamic scenes,” in Proceedings of
the IEEE Conference on Computer Vision and Pattern
Recognition, pp. 1947–1954, 2013.
[8] Z. Zhang, T. X. Han, and Z. He, “Coupled ensemble
graph cuts and object verification for animal segmentation from highly cluttered videos,” in Image Processing (ICIP), 2015 IEEE International Conference on,
pp. 2830–2834, IEEE, 2015.
[9] G. Liu and S. Yan, “Active subspace: Toward scalable
low-rank learning,” Neural computation, vol. 24, no. 12,
pp. 3371–3394, 2012.
[10] Z. Lin, M. Chen, and Y. Ma, “The augmented lagrange
multiplier method for exact recovery of corrupted lowrank matrices,” arXiv preprint arXiv:1009.5055, 2010.
[11] D. Goldfarb, S. Ma, and K. Scheinberg, “Fast alternating linearization methods for minimizing the sum
of two convex functions,” Mathematical Programming,
vol. 141, no. 1-2, pp. 349–382, 2013.
[12] N. S. Aybat, D. Goldfarb, and G. Iyengar, “Fast firstorder methods for stable principal component pursuit,”
arXiv preprint arXiv:1105.2126, 2011.
[13] N. Wang, T. Yao, J. Wang, and D.-Y. Yeung, “A probabilistic approach to robust matrix factorization,” in European Conference on Computer Vision, pp. 126–139,
Springer, 2012.
[14] E. J. Candès, X. Li, Y. Ma, and J. Wright, “Robust
principal component analysis?,” Journal of the ACM
(JACM), vol. 58, no. 3, p. 11, 2011.
[15] T. Ojala, M. Pietikainen, and T. Maenpaa, “Multiresolution gray-scale and rotation invariant texture classification with local binary patterns,” IEEE Transactions on
pattern analysis and machine intelligence, vol. 24, no. 7,
pp. 971–987, 2002.
[16] J. Yao and J.-M. Odobez, “Multi-layer background subtraction based on color and texture,” in 2007 IEEE Conference on Computer Vision and Pattern Recognition,
pp. 1–8, IEEE, 2007.
[17] D. M. Powers, “Evaluation: from precision, recall and
f-measure to roc, informedness, markedness and correlation,” 2011.
[18] V. Caselles, R. Kimmel, and G. Sapiro, “Geodesic active contours,” International journal of computer vision,
vol. 22, no. 1, pp. 61–79, 1997.
[19] A. Sobral, T. Bouwmans, and E.-h. Zahzah, “Lrslibrary:
Low-rank and sparse tools for background modeling
and subtraction in videos,” in Robust Low-Rank and
Sparse Matrix Decomposition: Applications in Image
and Video Processing, CRC Press, Taylor and Francis
Group, 2015.
Discussing an intriguing result on model-free control
Cédric Join1,5,6 , Emmanuel Delaleau2 , Michel Fliess3,5 , Claude H. Moog4
1
arXiv:1711.02877v1 [] 8 Nov 2017
CRAN (CNRS, UMR 7039), Université de Lorraine, BP 239, 54506 Vandœuvre-lès-Nancy, France.
[email protected]
2
Département de Mécatronique, École nationale d’ingénieurs de Brest, 29280 Plouzané, France.
[email protected]
3
LIX (CNRS, UMR 7161), École polytechnique, 91128 Palaiseau, France. [email protected]
4
LS2N (CNRS, UMR 6004), 44321 Nantes 03, France. [email protected]
5
AL.I.E.N. (ALgèbre pour Identification & Estimation Numériques), 7 rue Maurice Barrès, 54330 Vézelise, France.
{michel.fliess, cedric.join}@alien-sas.com
6
Projet Non-A, INRIA Lille – Nord-Europe, France
ABSTRACT. An elementary mathematical example proves, thanks to the Routh-Hurwitz criterion, a result that is intriguing with respect to today's practical understanding of model-free control: an "intelligent" proportional controller (iP) may turn out to be more difficult to tune than an intelligent proportional-derivative one (iPD). The vast superiority of iPDs over classic PIDs is shown via computer simulations. The introduction as well as the conclusion analyse model-free control in the light of recent advances.
KEYWORDS. Model-free control, iP controllers, iPD controllers, PID, Routh-Hurwitz criterion, fault accommodation, machine learning, artificial intelligence.
The first time Aurélien saw Bérénice, he found her frankly ugly.
Aragon (Aurélien. Paris: Gallimard, 1944)
1. Introduction

1.1. Generalities
The following facts are known to every control engineer:
— Writing a "good" mathematical model of a real machine, i.e., one that is not idealized as in fundamental physics, is formidable, if not impossible. This explains the stunning industrial popularity of PID controllers (see, e.g., [Åström & Hägglund (2006)], [Franklin et coll. (2015)], [Janert (2014)], [Lunze (1996)], [O'Dwyer (2009)], [Rotella & Zambettakis (2008)]): such a modeling is pointless there.
— The price to pay is heavy:
— mediocre performances,
— lack of robustness,
— laborious tuning of the gains.
c 2017 ISTE OpenScience – Published by ISTE Ltd. London, UK – openscience.fr
Page | 1
"Model-free control" [Fliess & Join (2013)] and its "intelligent" controllers were invented to remedy these shortcomings. Numerous recent publications, in the most diverse fields, demonstrate their efficiency and simplicity, not only in France but also, and even more so, abroad: see, e.g., the references in [Fliess & Join (2013)], as well as [Abouaïssa et coll. (2017b)], [Abouaïssa et coll. (2017a)] and their references.
The bibliography of [Fliess & Join (2013)] shows that the name model-free control appears many times in the literature, but with meanings different from ours. The growing importance of artificial intelligence and machine learning, notably through neural networks, has quite naturally been grafted onto the model-free setting: see, e.g., [Cheon et coll. (2015)], [Lillicrap et coll. (2016)], [Luo et coll. (2016)], [Mnih et coll. (2015)], [Radac & Precup (2017)], [Radac et coll. (2017)]. Our techniques, which require no heavy computations [Join et coll. (2013)], sidestep this current trend in computer science (see [Gédouin et coll. (2011)], [Lafont et coll. (2015)], [Menhour et coll. (2017)] for concrete illustrations).
1.2. A brief overview of model-free control 1
The unknown global model is replaced by the ultra-local model

y^(ν) = F + αu    (1)

— The variables u and y denote the control input and the output, respectively.
— The derivation order ν ≥ 1, chosen by the engineer, is 1 in general, sometimes 2. In practice, ν ≥ 3 has never been encountered.
— The engineer picks the parameter α ∈ R so that the three terms of (1) have the same magnitude. A precise identification of α is therefore pointless.
— F is estimated from the measurements of u and y.
— F subsumes not only the unknown structure of the system but also the external disturbances 2.

If ν = 2, the loop is closed with an intelligent proportional-integral-derivative controller, or iPID, i.e., a generalization of the classic PIDs:

u = −(Festim − ÿ* − KP e − KI ∫e − KD ė)/α    (2)

— Festim is an estimate of F.
— y* is the reference trajectory.
— e = y* − y is the tracking error.
— KP, KI, KD ∈ R are the gains.

From (1) and (2) it follows that

ë + KD ė + KP e + KI ∫e = Festim − F    (3)

A "good" tracking is obtained if the estimate Festim is "good", i.e., F − Festim ≃ 0. Contrary to classic PIDs, (3) shows that choosing the gains is easy here.
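The ease of gain tuning promised by (3) can be made concrete: differentiating (3) once turns its homogeneous part into a third-order dynamics with characteristic polynomial s³ + KD s² + KP s + KI, so placing a triple error pole at s = −p is a one-line polynomial expansion. A minimal sketch (the pole value p below is an arbitrary illustration, not taken from the paper):

```python
import numpy as np

# Error poles at s = -p (triple): expand (s + p)^3 and read off the iPID
# gains from s^3 + KD s^2 + KP s + KI (monic coefficients).
p = 0.5
coeffs = np.poly([-p, -p, -p])
KD, KP, KI = coeffs[1], coeffs[2], coeffs[3]
print(KD, KP, KI)
```

For an iPD (KI = 0), the same reading on (s + p)² gives KD = 2p and KP = p².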
1. For more details, see [Fliess & Join (2013)].
2. This distinction between internal structure and external disturbances is found everywhere. In our opinion it is by no means obvious a priori. Merging the two is an undeniable conceptual breakthrough. Compare with Active Disturbance Rejection Control (ADRC) (see, e.g., [Sira-Ramírez et coll. (2017)]).
If KD = 0 one obtains an intelligent proportional-integral controller, or iPI,

u = −(Festim − ÿ* − KP e − KI ∫e)/α

If KI = 0 one obtains an intelligent proportional-derivative controller, or iPD,

u = −(Festim − ÿ* − KP e − KD ė)/α    (4)

Most frequently, ν = 1. One then obtains an intelligent proportional controller, or iP,

u = −(Festim − ẏ* − KP e)/α    (5)
Remark. See [Delaleau (2014)] for another approach to stabilization.

Here are two exceptions where an iPD is employed with ν = 2: [De Miras et coll. (2013)], [Menhour et coll. (2017)] 3.
1.3. Aim

This article exhibits an a priori elementary linear example, ÿ − ẏ = u, where an iPD must replace an iP, contrary to what one might naively have believed. The explanation rests on the well-known Routh-Hurwitz criterion (see, e.g., [Gantmacher (1966)]): it demonstrates the "narrowness" of the set of stabilizing parameters {α, KP} in (5).

1.4. Outline

Section 2 presents our example and the excellent results obtained with an iPD. In the following section, the difficulties encountered with an iP are explained thanks to the Routh-Hurwitz criterion. The equivalence established in [d'Andréa-Novel et coll. (2010)], [Fliess & Join (2013)] between PIDs and iPDs leads us to compare their performances in Section 4: the advantage of the iPDs is manifest there. We conclude with some lines of thought on the intelligent controllers associated with model-free control.
2. Our example

2.1. Presentation

Consider the linear, time-invariant and unstable system

ÿ − ẏ = u    (6)

3. The literature illustrating model-free control contains several examples that employ an iPID with ν = 2, but without any justification, such as the absence of friction [Fliess & Join (2013)]. An iP with ν = 1 might perhaps have sufficed, whence an even simpler implementation.
[Figure 1 panels: (a) control input, (b) output and reference trajectory (- -)]

Figure 1: iPD.
D’après (1), il vient, si ν = 1,
F = −αÿ + (1 + α)ẏ
(7)
L’iP déduit des calculs de [Fliess & Join (2013)] fonctionne mal.
2.2. iPD
Avec un iPD (4), ν = 2 en (1), on remplace (7) par
F = (1 − α)ÿ + αẏ
Les simulations numériques de la figure 1, déduites des calculs de [Fliess & Join (2013)], sont excellentes. On choisit α = 0.5 et les gains KP and KD tels que (s + 0.5)2 est le polynôme caractéristique de
la dynamique d’erreur. On introduit un bruit additif de sortie, blanc, centré et gaussien, d’écart type 0.01.
La condition initiale est y(0) = −0.05.
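A loop of this kind can be reproduced with a short script. The sketch below deviates from the paper's setup on several labeled assumptions — Euler integration, a noise-free output, a constant reference y* = 1, α = 1 instead of 0.5, and a raw second difference of y in place of the algebraic derivative estimators — but it closes an iPD on ÿ − ẏ = u with the gains placed by (s + 0.5)².

```python
import numpy as np

# Minimal Euler sketch of an iPD loop on  y'' - y' = u  (plant unknown to
# the controller). F is estimated as F_hat = y''_meas - alpha*u_prev.
dt, steps = 0.001, 30000
alpha, KP, KD = 1.0, 0.25, 1.0   # (s + 0.5)^2 error dynamics
y_ref = 1.0                      # constant reference: dy*/dt = d2y*/dt2 = 0

y, yd, u = -0.05, 0.0, 0.0
hist = [y, y]                    # two past output samples
e_prev = y_ref - y
for _ in range(steps):
    ydd_meas = (y - 2 * hist[-1] + hist[-2]) / dt ** 2
    F_hat = ydd_meas - alpha * u          # ultra-local estimate of F
    e = y_ref - y
    ed = (e - e_prev) / dt                # finite-difference error rate
    u = -(F_hat - KP * e - KD * ed) / alpha
    e_prev = e
    hist.append(y)
    ydd = yd + u                          # true (unknown) plant
    yd += ydd * dt
    y += yd * dt
print(y)                                  # approaches y_ref
```

Despite the open-loop instability of the plant, the output settles on the reference with the critically damped error dynamics chosen above.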
3. Why does the implementation of the iP fail?

Let us try to understand the failure of the iP implementation. Equation (3) becomes

ė + KP e = Festim − F    (8)

where F comes from (7). Festim may be written in the operational domain (see, e.g., [Erdélyi (1962)]):

Festim = −α s²/(Ts + 1)² y + (1 + α) s/(Ts + 1) y    (9)

— Festim and y are the operational analogues 4 of Festim and y,

4. The terminology "Laplace transforms" is, as everyone knows, much more usual.
[Figure 2 panels: (a) for all T, (b) T = 0.1 s]

Figure 2: Stability domain for (KP, α, T).
— s/(Ts + 1) and s²/(Ts + 1)² represent derivative filters of orders 1 and 2, where T > 0 is the time constant (see, e.g., [Leich & Boite (1980)]).

To study stability one may set y* ≡ 0; then e = −y. Thanks to (6)-(9), one obtains the characteristic polynomial
T²s⁴ + s³(2T − T²) + s²(T/α − T + KP T²/α) + s(1/α + 2KP T/α) + KP/α
We use the Routh-Hurwitz stability criterion (see, e.g., [Gantmacher (1966)], as well as [Franklin et al. (2015)], [Lunze (1996)], [Rotella & Zambettakis (2008)]). Figure 2 shows, with a suitable discretization, how narrow the set {α, KP} of stabilizing parameters is, even when the time constant T is taken into account. The difficulty of finding a satisfactory iP for (6) is thereby confirmed. Let us add that the "obvious" value α = −1 never works 5.
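The Routh-Hurwitz scan can be scripted. The sketch below is ours, not the paper's code; the quartic coefficients are re-derived here from (6)-(9) and should be treated as an assumption (the extracted display above was damaged), with T = 0.1 s taken from Figure 2(b).

```python
# Sketch (ours): Routh-Hurwitz test for a quartic
# a4*s^4 + a3*s^3 + a2*s^2 + a1*s + a0, scanned over a (KP, alpha) grid.

def hurwitz_stable_quartic(a4, a3, a2, a1, a0):
    # Stable iff all coefficients positive and a3*a2*a1 > a4*a1^2 + a3^2*a0.
    if min(a4, a3, a2, a1, a0) <= 0:
        return False
    return a3 * a2 * a1 > a4 * a1 * a1 + a3 * a3 * a0

def ip_coeffs(kp, alpha, T):
    # alpha times the characteristic polynomial derived here from (6)-(9);
    # multiplying through by alpha > 0 does not change the roots.
    return (alpha * T ** 2,
            alpha * T * (2 - T),
            T * (1 - alpha + kp * T),
            1 + 2 * kp * T,
            kp)

T = 0.1
grid = [(kp / 10.0, a / 20.0) for kp in range(1, 21) for a in range(1, 21)]
stable = [p for p in grid if hurwitz_stable_quartic(*ip_coeffs(p[0], p[1], T))]
print(len(stable), "of", len(grid), "grid points are stabilizing")
```

Under these assumed coefficients, only a small corner of the (KP, α) box turns out to be stabilizing, in line with the narrowness observed in Figure 2.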
4. Comparison between iPD and PID

In [d'Andréa-Novel et al. (2010)], [Fliess & Join (2013)], a certain equivalence is demonstrated between the iPD (4) and the usual PID

u = kP e + kI ∫ e dt + kD ė   (kP, kI, kD ∈ R)

5. The value α = −1 is "obvious" because then F = ÿ in (7): we recover (6).
Figure 3: PID. (a) Control signal; (b) output and reference trajectory (- -).
Figure 4: PID with δ = 0.8. (a) Control signal; (b) output and reference trajectory (- -).
We determine kP, kI, kD so that (s + 0.66)³ is the characteristic polynomial of the error dynamics. This ensures a response time equal to that of Section 2.2, within ±5%. The results of Figure 3 are satisfactory 6, although inferior to those of Figure 1 for the iPD 7. To test robustness, let us modify (6):

ÿ − ẏ = δu

where δ, 0 ≤ δ ≤ 1, corresponds to a loss of actuator power 8. With δ = 0.8, Figures 4 and 5 reveal a better behavior of the iPD. This superiority becomes more pronounced if δ = 0.5: see Figures 6 and 7.

6. A more careful tuning of the PID might improve them.
7. Same noise as in Section 2.2.
8. Compare with the "repair", or fault accommodation, in [Fliess & Join (2013)]. See also [Lafont et al. (2015)].
Figure 5: iPD with δ = 0.8. (a) Control signal; (b) output and reference trajectory (- -).
Figure 6: PID with δ = 0.5. (a) Control signal; (b) output and reference trajectory (- -).
Figure 7: iPD with δ = 0.5. (a) Control signal; (b) output and reference trajectory (- -).
5. Conclusion

Several questions arise from this study:

1. The iP and the iPD are the only two intelligent controllers that should matter. The others, such as the iPI and the iPID, will no doubt play only a marginal role.

2. The iPD prevails here thanks to the examination of a given equation. Exhibiting other system equations enjoying this property is a legitimate intellectual goal. But what can be done without a model? Proceed by trial and error? Take the qualitative properties of the machine into account? A mixture of both? This is yet another new epistemological question raised by the emergence of model-free control 9.

3. The proof rests on the choice of the derivative filters. An analysis independent of such an approach remains to be discovered. It would allow a better understanding of the phenomenon.
Bibliography
ABOUAÏSSA H., ALHAJ HASAN O., JOIN C., FLIESS M., DEFER D., « Energy saving for building heating via a simple
and efficient model-free control design : First steps with computer simulations ». 21st International Conference on System
Theory, Control and Computing, Sinaia, 2017a. https://hal.archives-ouvertes.fr/hal-01568899/en/
ABOUAÏSSA H., FLIESS M., JOIN C., « On ramp metering : Towards a better understanding of ALINEA via model-free
control ». International Journal of Control, 90 (2017b) : 1018-1026.
d’ANDRÉA-NOVEL B., FLIESS M., JOIN C., MOUNIER H., STEUX B., « A mathematical explanation via “intelligent” PID
controllers of the strange ubiquity of PIDs ». 18th Mediterranean Conference on Control & Automation, Marrakech, 2010.
https://hal.archives-ouvertes.fr/inria-00480293/en/
ÅSTRÖM K.J., HÄGGLUND T., Advanced PID Control. Research Triangle Park, NJ : Instrument Society of America, 2006.
CHEON K., KIM J., HAMADACHE M., LEE D., « On replacing PID controller with deep learning controller for DC motor
system ». Journal of Automation and Control Engineering, 3 (2015) : 452-456.
DELALEAU E., « A proof of stability of model-free control ». 2014 IEEE Conference on Norbert Wiener in the 21st Century
(21CW), Boston, 2014.
DE MIRAS J., JOIN C., FLIESS M., RIACHY S., BONNET S., « Active magnetic bearing : A new step for model-free
control ». 52nd IEEE Conference on Decision and Control, Florence, 2013.
https://hal.archives-ouvertes.fr/hal-00857649/en/
ERDÉLYI A., Operational Calculus and Generalized Functions. New York : Holt Rinehart and Winston, 1962.
FLIESS M., JOIN C., « Model-free control ». International Journal of Control, 86 (2013) : 2228-2252.
FRANKLIN G.F., POWELL J.D., EMAMI-NAEINI A., Feedback Control of Dynamic Systems (7th ed.). Harlow : Pearson,
2015.
GANTMACHER F.R., Théorie des matrices, vol. 2 (translated from Russian). Paris: Dunod, 1966.
GÉDOUIN P.-A., DELALEAU E., BOURGEOT J.-M., JOIN C., ARBAB CHIRANI S., CALLOCH S., « Experimental comparison of classical PID and model-free control: Position control of a shape memory alloy active spring ». Control Engineering
Practice, 19 (2011) : 433-441.
JANERT P.K., Feedback Control for Computer Systems. Sebastopol, CA : O’Reilly Media, 2014.
JOIN C., CHAXEL F., FLIESS M., « “Intelligent” controllers on cheap and small programmable devices ». 2nd International
Conference on Control and Fault-Tolerant Systems, Nice, 2013.
https://hal.archives-ouvertes.fr/hal-00845795/en/
LAFONT F., BALMAT J.-F., PESSEL N., FLIESS M., « A model-free control strategy for an experimental greenhouse with
an application to fault accommodation ». Computers and Electronics in Agriculture, 110 (2015) : 139-149.
9. On this subject, see also the conclusion of [Fliess & Join (2013)].
LEICH H., BOITE J., Les filtres numériques : Analyse et synthèse des filtres unidimensionnels. Paris: Masson, 1980.
LILLICRAP T.P., HUNT J.J., PRITZEL A., HEESS N., EREZ T., TASSA Y., SILVER D., WIERSTRA D., « Continuous control with deep reinforcement learning ». 4th International Conference on Learning Representations, San Juan, 2016.
LUNZE J., Regelungstheorie 1. Berlin : Springer, 1996.
LUO B., LIU D., HUANG T., WANG D., « Model-free optimal tracking control via critic-only Q-learning ». IEEE Transactions on Neural Networks and Learning Systems, 27 (2016) : 2134-2144.
MENHOUR L., D’ANDRÉA-NOVEL B., FLIESS M., GRUYER D., MOUNIER H., « An efficient model-free setting for longitudinal and lateral vehicle control. Validation through the interconnected pro-SiVIC/RTMaps prototyping platform ».
IEEE Transactions on Intelligent Transportation Systems, (2017) : DOI: 10.1109/TITS.2017.2699283
MNIH V., KAVUKCUOGLU K., SILVER D., RUSU A.A., VENESS J., BELLEMARE M.G., GRAVES A., RIEDMILLER M.,
FIDJELAND A.K., OSTROVSKI G., PETERSEN S., BEATTIE C., SADIK A., ANTONOGLOU I., KING H., KUMARAN D.,
WIERSTRA D., LEGG S., HASSABIS D., « Human-level control through deep reinforcement learning ». Nature, 518
(2015) : 529-533.
O'DWYER A., Handbook of PI and PID Controller Tuning Rules (3rd ed.). London: Imperial College Press, 2009.
RADAC M.-B., PRECUP R.E., « Data-driven model-free slip control of anti-lock braking systems using reinforcement
Q-learning ». Neurocomputing, (2017) : http://dx.doi.org/10.1016/j.neucom.2017.08.036
RADAC M.-B., PRECUP R.E., ROMAN R.C., « Model-free control performance improvement using virtual reference feedback tuning and reinforcement Q-learning ». International Journal of Control, 48 (2017) : 1071-1083.
ROTELLA F., ZAMBETTAKIS I., Automatique élémentaire. Paris : Hermès-Lavoisier, 2008.
SIRA-RAMÍREZ H., LUVIANO-JUÁREZ A., RAMÍREZ-NERIA M., ZURITA-BUSTAMANTE E.W., Active Disturbance Rejection Control of Dynamic Systems: A Flatness Based Approach. Oxford & Cambridge, MA: Elsevier, 2017.
arXiv:1705.02822v2 [] 10 May 2017
Rank Vertex Cover as a Natural Problem for Algebraic
Compression∗
S. M. Meesum†
Fahad Panolan‡
Saket Saurabh†‡
Meirav Zehavi‡
Abstract
The question of the existence of a polynomial kernelization of the Vertex Cover
Above LP problem has been a longstanding, notorious open problem in Parameterized
Complexity. Five years ago, the breakthrough work by Kratsch and Wahlström on
representative sets has finally answered this question in the affirmative [FOCS 2012].
In this paper, we present an alternative, algebraic compression of the Vertex Cover
Above LP problem into the Rank Vertex Cover problem. Here, the input consists
of a graph G, a parameter k, and a bijection between V (G) and the set of columns of a
representation of a matroid M , and the objective is to find a vertex cover whose rank
is upper bounded by k.
1  Introduction
The field of Parameterized Complexity concerns the study of parameterized problems, where
each problem instance is associated with a parameter k that is a non-negative integer.
Given a parameterized problem of interest, which is generally computationally hard, the
first, most basic question that arises asks whether the problem at hand is fixed-parameter
tractable (FPT). Here, a problem Π is said to be FPT if it is solvable in time f(k) · |X|^{O(1)},
where f is an arbitrary function that depends only on k and |X| is the size of the input
instance. In other words, the notion of FPT signifies that it is not necessary for the combinatorial explosion in the running time of an algorithm for Π to depend on the input size,
but it can be confined to the parameter k. Having established that a problem is FPT, the
second, most basic question that follows asks whether the problem also admits a polynomial
kernel. A concept closely related to kernelization is that of polynomial compression. Here, a problem Π is said to admit a polynomial compression if there exist a problem Π̂ and a polynomial-time algorithm such that given an instance (X, k) of Π, the algorithm outputs
∗ Supported by Parameterized Approximation, ERC Starting Grant 306992, and Rigorous Theory of Preprocessing, ERC Advanced Investigator Grant 267959.
† The Institute of Mathematical Sciences, HBNI, Chennai, India. {meesum|saket}@imsc.res.in
‡ Department of Informatics, University of Bergen, Norway. {fahad.panolan|meirav.zehavi}@ii.uib.no
an equivalent instance (X̂, k̂) of Π̂, where |X̂| = k^{O(1)} and k̂ ≤ k. Roughly speaking, compression is a mathematical concept that aims to analyze preprocessing procedures in a formal, rigorous manner. We note that in case Π = Π̂, the problem is further said to admit a polynomial kernelization, and the output (X̂, k̂) is called a kernel.
The Vertex Cover problem is (arguably) the most well-studied problem in Parameterized Complexity [10, 7]. Given a graph H and a parameter k, this problem asks whether
H admits a vertex cover of size at most k. Over the years, a notable number of algorithms
have been developed for the Vertex Cover problem [2, 1, 11, 25, 5, 3, 6]. Currently, the
best known algorithm solves this problem in the remarkable time 1.2738^k · n^{O(1)} [6]. While it is not known whether the constant 1.2738 is "close" to optimal, it is known that unless the Exponential Time Hypothesis (ETH) fails, Vertex Cover cannot be solved in time 2^{o(k)} · n^{O(1)} [16]. On the other hand, in the context of kernelization, the picture is clear in the following sense: It is known that Vertex Cover admits a kernel with O(k^2) vertices and edges [2], but unless NP ⊆ co-NP/poly, it does not admit a kernel with O(k^{2−ε}) edges [9]. We remark that it is also known that Vertex Cover admits a kernel not only of size O(k^2), but also with only 2k vertices [5, 20], and it is conjectured that this bound might be essentially tight [4].
It has become widely accepted that Vertex Cover is one of the most natural test
beds for the development of new techniques and tools in Parameterized Complexity. Unfortunately, the vertex cover number of a graph is generally large—in fact, it is often linear
in the size of the entire vertex set of the graph [10, 7]. Therefore, alternative parameterizations, known as above guarantee parameterizations, have been proposed. The two most well
known such parameterizations are based on the observation that the vertex cover number
of a graph H is at least as large as the fractional vertex cover number of H, which in turn is
at least as large as the maximum size of a matching of H. Here, the fractional vertex cover number of H is the solution to the linear program that minimizes Σ_{v∈V(H)} x_v subject to the constraints x_u + x_v ≥ 1 for all {u, v} ∈ E(H), and x_v ≥ 0 for all v ∈ V(H). Accordingly, given a graph H and a parameter k, the Vertex Cover Above MM problem asks
whether H admits a vertex cover of size at most µ(H) + k, where µ(H) is the maximum
size of a matching of H, and the Vertex Cover Above LP problem asks whether H
admits a vertex cover of size at most ℓ(H) + k, where ℓ(H) is the fractional vertex cover
number of H.
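For intuition, the chain µ(H) ≤ ℓ(H) ≤ β(H) can be checked on toy graphs. The brute-force sketch below is ours, not from the paper; it relies on the half-integrality of the vertex cover LP, so searching over values in {0, 1/2, 1} computes ℓ(H) exactly.

```python
# Sketch (ours): vertex cover number beta(H), maximum matching size mu(H),
# and fractional vertex cover number l(H) of a tiny graph, by brute force.
from itertools import product, combinations

def vc_numbers(n, edges):
    beta = min(len(S) for r in range(n + 1) for S in combinations(range(n), r)
               if all(u in S or v in S for u, v in edges))
    mu = max(len(M) for r in range(n // 2 + 1) for M in combinations(edges, r)
             if len({x for e in M for x in e}) == 2 * len(M))
    # half-integrality: the LP optimum is attained with values in {0, 1/2, 1}
    ell = min(sum(x) for x in product((0, 0.5, 1), repeat=n)
              if all(x[u] + x[v] >= 1 for u, v in edges))
    return beta, mu, ell

# 5-cycle: mu = 2, l = 2.5, beta = 3, illustrating mu <= l <= beta
print(vc_numbers(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))
```

Odd cycles are exactly the graphs where all three quantities differ, which is what makes the two above-guarantee parameterizations distinct.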
On the one hand, several parameterized algorithms for these two problems have been
developed in the last decade [28, 27, 8, 24, 21]. Currently, the best known algorithm for
Vertex Cover Above LP, which is also the best known algorithm for Vertex Cover Above MM, runs in time 2.3146^k · n^{O(1)} [21]. On the other hand, the question of the
existence of polynomial kernelizations of these two problems has been a longstanding, notorious open problem in Parameterized Complexity. Five years ago, the breakthrough work
by Kratsch and Wahlström on representative sets has finally answered this question in the
affirmative [19]. Up to date, the kernelizations by Kratsch and Wahlström have remained
the only known (randomized) polynomial compressions of Vertex Cover Above MM
and Vertex Cover Above LP. Note that since ℓ(H) is necessarily at least as large as
µ(H), a polynomial compression of Vertex Cover Above LP also doubles as a polynomial compression of Vertex Cover Above MM. We also remark that several central
problems in Parameterized Complexity, such as the Odd Cycle Transversal problem,
are known to admit parameter-preserving reductions to Vertex Cover Above LP [21].
Hence, the significance of a polynomial compression of Vertex Cover Above LP also
stems from the observation that it simultaneously serves as a polynomial compression of
additional well-known problems, and can therefore potentially establish the target problem
as a natural candidate to express compressed problem instances.
Recently, a higher above-guarantee parameterization of Vertex Cover, resulting in
the Vertex Cover Above Lovász-Plummer, has been introduced by Garg and Philip
[12]. Here, given a graph H and a parameter k, the objective is to determine whether H
admits a vertex cover of size at most (2ℓ(H) − µ(H)) + k. Garg and Philip [12] showed
that this problem is solvable in time 3^k · n^{O(1)}, and Kratsch [18] showed that it admits a
(randomized) kernelization that results in a large, yet polynomial, kernel. We remark that
above-guarantee parameterizations can very easily reach bars beyond which the problem at
hand is no longer FPT. For example, Gutin et al. [14] showed that the parameterization of
Vertex Cover above m/∆(H), where ∆(H) is the maximum degree of a vertex in H and
m is the number of edges in H, results in a problem that is not FPT (unless FPT=W[1]).
Our Results and Methods. In this paper, we present an alternative, algebraic compression of the Vertex Cover Above LP problem into the Rank Vertex Cover problem.
We remark that Rank Vertex Cover was originally introduced by Lovász as a tool for
the examination of critical graphs [22]. Given a graph H, a parameter ℓ, and a bijection
between V (G) and the set of columns of a representation of a matroid M , the objective
of Rank Vertex Cover is to find a vertex cover of H whose rank, which is defined by
the set of columns corresponding to its vertices, is upper bounded by ℓ. Note that formal
definitions of the terms used in the definition of Rank Vertex Cover can be found in
Section 2.
We obtain a (randomized) polynomial compression of size Õ(k^7 + k^{4.5} log(1/ε))^1, where
ε is the probability of failure. Here, by failure we mean that we output an instance of Rank
Vertex Cover which is not equivalent to the input instance. In such a case, we can
simply discard the output instance, and return an arbitrary instance of constant size; thus,
we ensure that failure only refers to the maintenance of equivalence. Our work makes use
of properties of linear spaces and matroids, and also relies on elementary probability theory.
One of the main challenges it overcomes is the conversion of the methods of Lovász [22]
into a procedure that works over rationals with reasonably small binary encoding.
1. Õ hides factors polynomial in log k.
2  Preliminaries
We use N to denote the set of natural numbers. For any n ∈ N, we use [n] as a shorthand
for {1, 2, . . . , n}. In this paper, the notation F will refer to a finite field of prime size.
Accordingly, F^n is an n-dimensional linear space over the field F, where a vector v ∈ F^n is a tuple of n elements from the field F. Here, the vector v is implicitly assumed to be represented as a column vector, unless stated otherwise. A finite set S of vectors over the field F is said to be linearly independent if the only solution to the equation Σ_{v∈S} λ_v v = 0, where λ_v ∈ F for all v ∈ S, is the one that assigns zero to all of the scalars λ_v. A set S that is not linearly independent is said to be linearly dependent. The span of a set of vectors S, denoted by S̄ (or span(S)), is the set {Σ_{v∈S} α_v v : α_v ∈ F}, defined over the linear space F^n.
For a graph G, we use V (G) and E(G) to denote the vertex set and the edge set of G,
respectively. We treat the edge set of an undirected graph G as a family of size-2 subsets of V(G), i.e., E(G) ⊆ {{u, v} : u, v ∈ V(G), u ≠ v}. An independent set in a graph G is a set of vertices X such that for all u, v ∈ X, it holds that {u, v} ∉ E(G). For a graph G and a vertex v ∈ V(G),
we use G \ v to denote the graph obtained from G after deleting v and the edges incident
with v.
2.1  Matroids
Definition 1. A matroid X is a pair (U, I), where U is a set of elements and I is a set of
subsets of U , with the following properties: (i) ∅ ∈ I, (ii) if I1 ⊂ I2 and I2 ∈ I, then I1 ∈ I,
and (iii) if I1 , I2 ∈ I and |I1 | < |I2 |, then there is x ∈ (I2 \ I1 ) such that I1 ∪ {x} ∈ I.
A set I ′ ∈ I is said to be independent; otherwise, it is said to be dependent. A set B ∈ I
is a basis if no superset of B is independent. For example, Ut,n = ([n], {I : I ⊆ [n], |I| ≤ t})
forms a matroid known as a uniform matroid. For a matroid X = (U, I), we use E(X), I(X)
and B(X) to denote the ground set U of X, the set of independent sets I of X, and the set
of bases of X, respectively. Here, we are mainly interested in linear matroids, which are
defined as follows. Given a matroid X = (U, I), a matrix M having |U | columns is said to
represent X if (i) the columns of M are in bijection with the elements in U , and (ii) a set
A ⊆ U is independent in X if and only if the columns corresponding to A in M are linearly
independent. Accordingly, a matroid is a linear matroid if it has a representation over some
field. For simplicity, we use the same symbol to refer to a matroid M and its representation.
For a matrix M and some subset B of columns of M, we let M[⋆, B] denote the submatrix of M that is obtained from M by deleting all columns not in B. The submatrix of M over a subset of rows R and a subset of columns B is denoted by M[R, B].
We proceed by stating several basic definitions related to matroids that are central to
our work. For this purpose, let X = (U, I) be a matroid. An element x ∈ U is called
a loop if it does not belong to any independent set of X. If X is a linear matroid, then
loops correspond to zero column vectors in its representation. An element x ∈ U is called
a co-loop if it occurs in every basis of X. Note that for a linear matroid X, an element x
is a co-loop if it is linearly independent from any subset of U \ {x}. For a subset A ⊆ U ,
the rank of A is defined as the maximum size of an independent subset of A, that is,
rankX (A) := maxI ′ ⊆A {|I ′ | : I ′ ∈ I}. We remove the subscript of rankX (A), if the matroid
is clear from the context.
The rank function of X is the function rank : 2U → N that assigns rank(A) to each
subset A ⊆ U . Note that this function satisfies the following properties.
1. 0 ≤ rank(A) ≤ |A|,
2. if A ⊆ B, then rank(A) ≤ rank(B), and
3. rank(A ∪ B) + rank(A ∩ B) ≤ rank(A) + rank(B).
A subset F ⊆ U is a flat if rank(F ∪ {x}) > rank(F) for all x ∉ F. Let F be the set of all flats of the matroid X. For any subset A of U, the closure Ā is defined as Ā = ∩{F ∈ F : A ⊆ F}. In other words, the closure of a set is the flat of minimum rank containing it. Let F be a flat of the matroid X. A point x ∈ F is said to be in general position on F if for any flat F′ of X, if x is contained in span(F′ \ {x}) then F ⊆ F′.
Deletion and Contraction. The deletion of an element u from X results in a matroid X′, denoted by X \ u, with ground set E(X′) = E(X) \ {u} and set of independent sets I(X′) = {I : I ∈ I(X), u ∉ I}. The contraction of a non-loop element u from X results in a matroid X′, denoted by X/u, with ground set E(X′) = E(X) \ {u} and set of independent sets I(X′) = {I \ {u} : u ∈ I and I ∈ I(X)}. Note that B is a basis in X/u if and only if B ∪ {u} is a basis in X.
When we are considering two matroids X and X/u, then for any subset T ⊆ E(X) \ {u}, T̄ represents the closure of T with respect to the matroid X.
A matroid can also be represented by a ground set and a rank function, and for our purposes, it is sometimes convenient to employ such a representation. That is, we also use a pair (U, r) to specify a matroid, where U is the ground set and r is a rank function. Now, we prove several lemmata regarding operations on matroids, which are used later in the paper.
Observation 1. Let M be a matroid, u ∈ E(M ) be a non-loop element in M and v be a
co-loop in M . Then, rank(M/u) = rank(M ) − 1 and v is a co-loop in M/u.
Given a matrix (or a linear matroid) A and a column v ∈ A, by moving the vector v
to another vector u, we refer to the operation that replaces the column v by the column u
in A.
Lemma 1. Let X = (U, I) be a linear matroid, W ⊆ U, and let u, v ∉ W be two elements that are each a co-loop in X. Let X′ be the linear matroid obtained by moving u to a general position on the flat spanned by W. Then, v is also a co-loop in X′.
Proof. Let u′ denote the vector to which u was moved (that is in general position on the span of W). Notice that the only modification performed with respect to the vectors of X is the update of u to u′. Suppose, by way of contradiction, that v is not a co-loop in X′. Then, there exists a set of elements S ⊆ E(X′), where v ∉ S, whose span contains v. If u′ ∉ S, then S ⊆ U, which implies that v was not a co-loop in X. Since this results in a contradiction, we have that u′ ∈ S. As u′ is in the span of W, v must be in the span of (W ∪ S) \ {u′}. Since (W ∪ S) \ {u′} ⊆ U and v ∉ (W ∪ S) \ {u′}, we have thus reached a contradiction.
We remark that the proof of Lemma 1 does not require the vector u to be moved to a
general position on the flat, but it holds true also if u is moved to any vector in span(W).
Lemma 2. Let X = (U, I) be a matroid and u ∈ U be an element that is not a loop in X.
If v ∈ U is a co-loop in X, then v is also a co-loop in the contracted matroid X/u.
Proof. Suppose, by way of contradiction, that v is not a co-loop in X/u. Then, there exists an independent set I ∈ I(X/u), where v ∉ I, whose span contains v. In particular, this
implies that I ∪ {v} is a dependent set in X/u. By the definition of contraction, I ∪ {u} is
an independent set in X. As v is a co-loop in X, I ∪ {u, v} is also an independent set in X.
By the definition of contraction, I ∪ {v} is an independent set in X/u, which contradicts
our previous conclusion that I ∪ {v} is a dependent set in X/u.
Lemma 3 (Proposition 3.9 [13]). Let v be an element in a matroid X = (U, I) which is not a loop in X. Let T be a subset of U such that v ∈ T̄. Then, rankX(T) = rankX/v(T \ {v}) + 1.
The lemma above can be rephrased as follows: if T is a set of elements in a matroid
X = (U, I) such that an element v ∈ U is contained in the span of T , then the rank of T
in the contracted matroid X/v is smaller by 1 than the rank of T in X.
3  Compression
Our objective is to give a polynomial compression of Vertex Cover Above LP. More
precisely, we develop a polynomial-time randomized algorithm that given an instance of
Vertex Cover Above LP with parameter k and ε > 0, with probability at least 1 − ε
outputs an equivalent instance of Rank Vertex Cover whose size is bounded by a
polynomial in k and ǫ. It is known that there is a parameter-preserving reduction from
Vertex Cover Above LP to Vertex Cover Above MM such that the parameter of
the output instance is linear in the parameter of the original instance [19]. Thus, in order to
give a polynomial compression of Vertex Cover Above LP to Rank Vertex Cover
where the size of the output instance is bounded by Õ(k^7 + k^{4.5} log(1/ε)), it is enough to
give a polynomial compression of Vertex Cover Above MM to Rank Vertex Cover
with the same bound on the size of the output instance. For a graph H, we use µ(H)
and β(H) to denote the maximum size of a matching and the vertex cover number of H,
respectively. Let (G, k) be an instance of Vertex Cover Above MM. Let n = |V (G)|
and In denote the n × n identity matrix. That is, In is a representation of Un,n . Notice that
(G, k) is a Yes-instance of Vertex Cover Above MM if and only if (G, In , µ(G) + k),
with any arbitrary bijection between V (G) and columns of In , is a Yes-instance of Rank
Vertex Cover.
In summary, to give the desired polynomial compression of Vertex Cover Above LP,
it is enough to give a polynomial compression of instances of the form (G, In , µ(G) + k) of
Rank Vertex Cover where the size of the output instance is bounded by Õ(k^7 + k^{4.5} log(1/ε)).
Here, the parameter is k. For instances of Rank Vertex Cover, we assume that the
columns of the matrix are labeled by the vertices in V (G) in a manner corresponding to a
bijection between the input graph and columns of the input matrix. As discussed above,
we again stress that now our objective is to give a polynomial compression of an instance
of the form (G, In , µ(G) + k) of Rank Vertex Cover to Rank Vertex Cover, which
can now roughly be thought of as a polynomial kernelization. We achieve the compression
in two steps.
1. In the first step, given (G, M = In , µ(G) + k), in polynomial time we either conclude
that (G, In , µ(G) + k) is a Yes-instance of Rank Vertex Cover or (with high
probability of success) output an equivalent instance (G1 , M1 , ℓ) of Rank Vertex
Cover where the number of rows in M1, and hence rank(M1), is upper bounded by O(k^{3/2}). Moreover, we also bound the number of bits required for each entry in the matrix by Õ(k^{5/2} + log(1/ε)). This step is explained in Section 3.2. Notice that after this step, the size of the graph G1 need not be bounded by k^{O(1)}.
2. In the second step, we work with the output (G1 , M1 , ℓ) of the first step, and in
polynomial time we reduce the number of vertices and edges in the graph G1 (and
hence the number of columns in the matrix M1). That is, the output of this step is an equivalent instance (G2, M2, ℓ) where the size of G2 is bounded by O(k^3). This step
is explained in Section 3.3.
Throughout the compression algorithm, we work with Rank Vertex Cover. Notice
that the input of Rank Vertex Cover consists of a graph G, an integer ℓ, and a linear
representation M of a matroid with a bijection between V (G) and the set of columns of M .
In the compression algorithm, we use operations that modify the graph G and the matrix
M simultaneously. To employ these “simultaneous operations” conveniently, we define (in
Section 3.1) the notion of a graph-matroid pair. We note that the definition of a graph-matroid pair is the same as a pre-geometry defined in [22], and various lemmas from [22]
which we use here are adapted to this definition. We also define deletion and contraction
operations on a graph-matroid pair, and state some properties of these operations.
3.1  Graph-Matroid Pairs
We start with the definition of a graph-matroid pair.
Definition 2. A pair (H, M ), where H is a graph and M is a matroid over the ground set
V (H), is called a graph-matroid pair.
Notice that there is natural bijection between V (H) and E(M ), which is the identity
map. Now, we define deletion and contraction operations on graph-matroid pairs.
Definition 3. Let P = (H, M ) be a graph-matroid pair, and let u ∈ V (H). The deletion
of u from P , denoted by P \ u, results in the graph-matroid pair (H \ u, M \ u). If u is not
a loop in M , then the contraction of u in P , denoted by P/u, results in the graph-matroid
pair (H \ u, M/u). For an edge e ∈ E(H), P \ e represents the pair (H \ e, M).
We remark that matroid deletion and contraction can be done in time polynomial in the size of the ground set for a linear matroid. For details, we refer to [13, 26].
Definition 4. Given a graph-matroid pair P = (H, M ), the vertex cover number of P is
defined as τ (P ) = min{rankM (S) : S is a vertex cover of H}.
For example, if M is an identity matrix (where each element is a co-loop), then τ(P) is the vertex cover number of H. Moreover, if we let M be the uniform matroid Ut,n such that t is at least the vertex cover number of H, then τ(P) again equals the vertex cover number of H.
Let P = (H, M ) be a graph-matroid pair where M is a linear matroid. Recall that M is
also used to refer to a given linear representation of the matroid. For the sake of clarity, we
use vM to refer explicitly to the column vector associated with a vertex v ∈ V (H). When
it is clear from context, we use v and vM interchangeably.
Lemma 4 (see Proposition 4.2 in [22]). Let P = (H, M ) be a graph-matroid pair and
v ∈ V (H) such that the vector vM is a co-loop in M , where M is a linear matroid. Let
P ′ = (H, M ′ ) be the graph-matroid pair obtained by moving vM to a vector vM ′ in general
position on a flat containing the neighbors of v, NH (v). Then, τ (P ′ ) = τ (P ).
Proof. Note that the operation in the statement of the lemma does not change the graph H. The only change occurs in the matroid, where we map a co-loop vM to a vector lying in the span of its neighbors. It is clear that such an operation does not increase the rank of any vertex cover. Indeed, given a vertex cover T of H, in case it excludes v, the rank of T is the same in both M and M′, and otherwise, since vM is a co-loop, the rank of T cannot increase when M is modified by replacing vM with any other vector. Thus, τ(P′) ≤ τ(P).
For the other inequality, let T be the set of vectors corresponding to a minimum rank
vertex cover of the graph H in the graph-matroid pair P ′ (where we have replaced the
vector vM by the vector vM ′ ). In what follows, note that as we are working with linear
matroids, the closure operation is the linear span. We have the following two cases:
Case 1: vM′ ∉ T. In this case T is still a vertex cover of H with the same rank. Thus,
τ(P′) = rankM′(T) = rankM(T) ≥ τ(P).
Case 2: vM′ ∈ T. Here, we have two subcases:
• If vM′ is not in the span of T \ {vM′}, then note that τ(P′) = rankM′(T) = rankM′(T \ {vM′}) + 1 =
rankM((T \ {vM′}) ∪ {vM}) ≥ τ(P). The third equality follows because vM is a
co-loop.
• If vM′ is in the span of T \ {vM′}, then as vM′ is in general position on the flat of its neighbors, by
definition this means that all of the neighbors of vM′ are also present in T \ {vM′}.
Since vM and vM′ have the same neighbors (as the graph H has not been modified),
all of the neighbors of vM belong to T \ {vM′}. Thus, T \ {vM′} is a vertex cover of H.
Therefore, τ(P′) = rankM′(T) = rankM′(T \ {vM′}) = rankM(T \ {vM′}) ≥ τ(P).
The second equality crucially relies on the observation that the rank of a set is
equal to the rank of the span of the set.
This completes the proof of the lemma.
Lemma 5 (see Proposition 4.3 in [22]). Let P = (H, M ) be a graph-matroid pair, and let v
be a vertex of H that is contained in a flat spanned by its neighbors. Let P ′ = P/v. Then,
τ (P ′ ) = τ (P ) − 1.
Proof. Recall that the contraction of a vertex v in P results in the graph-matroid pair
P ′ = (H \ v, M/vM ), i.e. the vertex is deleted from the graph and contracted in the
matroid. Denote the contracted matroid M/vM by M ′ .
We first prove that τ(P) ≤ τ(P′) + 1. Let T be a minimum rank vertex cover in P′,
i.e. rankM′(T) = τ(P′). Let W be a maximum sized independent set in I(M′) contained
in T. Then, by the definition of contraction, W ∪ {v} is a maximum sized independent set
in I(M) contained in T ∪ {v}. Moreover, T ∪ {v} is a vertex cover in H, and therefore we
get that τ(P) ≤ rankM(T ∪ {v}) = |W ∪ {v}| = rankM′(T) + 1 = τ(P′) + 1.
Now we prove that τ(P′) ≤ τ(P) − 1. Assume that T is a minimum rank vertex cover
of P. In case v ∉ T, it holds that all of the neighbors of v must belong to T to cover the edges
incident to v. By our assumption, v is in the span of its neighbors in M. Therefore, v
necessarily belongs to the span of T. Note that T \ {v} is a vertex cover of H \ v. By Lemma 3,
we have that τ(P) = rankM(T) = rankM′(T \ {v}) + 1 ≥ τ(P′) + 1. This completes the
proof.
3.2 Rank Reduction

In this section we explain the first step of our compression algorithm. Formally, we want
to solve the following problem.

Rank Reduction
Input: An instance (G, M = In, µ(G) + k) of Rank Vertex Cover, where n = |V(G)|.
Output: An equivalent instance (G′, M′, ℓ) such that the number of rows in M′ is at most O(k^{3/2}).
Here, we give a randomized polynomial time algorithm for Rank Reduction. More
precisely, along with the input of Rank Reduction, we are given an error bound ε > 0,
and the objective is to output a “small” equivalent instance with probability at least 1 − ε.
We start with a reduction rule that reduces the rank by 2.
Reduction Rule 1 (Vertex Deletion). Let (P, ℓ) be an instance of Rank Vertex Cover,
where P = (G, M) is a graph-matroid pair. Let v ∈ V(G) be a vertex such that vM is a
co-loop in M. Let M1 be the matrix obtained after moving vM to a vector vM1 in
general position on the flat spanned by NG(v). Let P1 = (G, M1) and let P′ = P1/vM1.
Then output (P′, ℓ − 1).
Lemma 6. Reduction Rule 1 is safe.
Proof. We need to show that (P, ℓ) is a Yes-instance if and only if (P ′ , ℓ−1) is a Yes-instance,
which follows from Lemmata 4 and 5.
Lemma 7. Let (P, ℓ) be an instance of Rank Vertex Cover, where P = (G, M ) is a
graph-matroid pair. Let (P ′ , ℓ − 1) be the output of Reduction Rule 1, where P ′ = (G′ , M ′ ).
Then rank(M ′ ) = rank(M ) − 2.
Proof. In Reduction Rule 1, we move a co-loop vM of M to a vector vM1 , obtaining a matrix
M1 . Note that vM1 lies in the span of NG (v), and therefore vM1 is not a co-loop in M1 .
Hence, we have that rank(M1 ) = rank(M )−1. By the definition of general position, it holds
that vM1 is not a loop in M1 . Notice that M ′ = M1 /vM1 . Therefore, by Observation 1,
rank(M ′ ) = rank(M1 ) − 1 = rank(M ) − 2.
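The rank arithmetic of Lemma 7 can be checked on a toy instance. The construction below (a path a–v–b with the identity matroid, and a naive Gaussian-elimination contraction) is our own illustrative sketch, not the paper's procedure:

```python
import numpy as np

def contract(M, j):
    """Contract element j of a linear matroid given by the columns of M."""
    v = M[:, j]
    i = int(np.argmax(np.abs(v)))                 # pivot row; v must be non-zero
    rest = np.delete(M, j, axis=1)
    rest = rest - np.outer(v, rest[i, :]) / v[i]  # zero out the pivot row
    return np.delete(rest, i, axis=0)

M = np.eye(3)                          # path a - v - b; all three columns co-loops
M1 = M.copy()
M1[:, 1] = 3 * M[:, 0] + 5 * M[:, 2]   # move v into general position on span{a, b}
Mp = contract(M1, 1)                   # the contraction P1 / v_{M1}
print(np.linalg.matrix_rank(M), np.linalg.matrix_rank(M1), np.linalg.matrix_rank(Mp))
# -> 3 2 1: moving the co-loop loses one rank, contracting loses one more
```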
The following lemma explains how to apply Reduction Rule 1 efficiently. Later (Lemma 10)
we will explain how to keep the bit length of each entry in the matrix bounded by a
polynomial in k.
Lemma 8. Let M be a linear matroid with |E(M)| = n and let p > 2^n be an integer. Then,
Reduction Rule 1 can be applied in polynomial time with success probability at least 1 − 2^n/p.
The number of bits required for each entry in the output representation matrix is O(log p)
times the number of bits required for each entry in the input representation matrix.²

² We remark that we are unaware of a procedure to derandomize the application of Reduction Rule 1.
Proof. Let F be the set of columns in M corresponding to NG(v). Using formal indeterminates
x = {xh : h ∈ F}, obtain a vector g(x) = Σ_{h∈F} xh·h. Suppose the values of
the indeterminates have been fixed to some numbers x∗ such that for any independent set
I ∈ M which does not span F, I ∪ {g(x∗)} is also independent. We claim that g(x∗) is in
general position on F. By definition, if g(x∗) is not in general position, then there exists a
flat F′ that does not contain F such that g(x∗) lies in the span of F′ \ {g(x∗)}. Let I be a
basis of F′ \ {g(x∗)}; clearly I does not span F, but I ∪ {g(x∗)} is a dependent set, which
is a contradiction due to the choice of x∗.
Let I be an independent set which does not span F. We need to select x in such a way
that DR,I(x) = det(M[R, I ∪ {g(x)}]) is not identically zero for some choice of rows R. First
of all, note that there is a choice of R for which the polynomial DR,I(x) is not identically
zero and has total degree one. This is so because DR,I(x) = Σ_{h∈F} xh·det(M[R, I ∪ {h}]);
if it were identically zero for every R, then det(M[R, I ∪ {h}]) = 0, which would imply that
every element h ∈ F is spanned by I. Thus, this case does not arise due to the choice of I.
If we choose x ∈ [p]^{|F|} uniformly at random, for some number p, then the probability
that DR,I(x) = 0 is at most 1/p by the Schwartz-Zippel Lemma. The number of independent
sets in M which do not span F is at most 2^n. By the union bound, the probability that
DR,I(x) = 0 for some independent set I of M is at most 2^n/p. Therefore, the success
probability is at least 1 − 2^n/p.
The procedure runs in polynomial time, and the process of matroid contraction can at
most double the matrix size; this gives us the claimed bit sizes.
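The random choice in the proof can be simulated on a toy matroid. The construction below is ours (n = 4 co-loop columns, F the two columns of v's neighbors): random coefficients in [p] make g(x) extend every independent set that does not span F.

```python
from itertools import combinations
import numpy as np

rng = np.random.default_rng(2)
M = np.eye(4)                        # ground set: four co-loop columns
F = M[:, :2]                         # columns corresponding to N_G(v)
p = 2 ** M.shape[1] + 1              # p > 2^n, as in Lemma 8
g = F @ rng.integers(1, p, size=2)   # g(x) = sum_h x_h * h for a random x

def rank(cols):
    return np.linalg.matrix_rank(np.column_stack(cols)) if cols else 0

ok = True
for k in range(M.shape[1] + 1):
    for I in combinations(range(M.shape[1]), k):
        cols = [M[:, i] for i in I]
        independent = rank(cols) == len(cols)
        spans_F = rank(cols + list(F.T)) == rank(cols)
        if independent and not spans_F:
            ok = ok and rank(cols + [g]) == len(cols) + 1
print(ok)  # -> True: g(x) is independent of every such I
```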
In the very first step of applying Reduction Rule 1, the lemma above makes the bit
sizes O(log p). On applying the rule again, the bit lengths of the entries double each time
due to the Gaussian elimination performed for the step of matroid contraction. This can make
the numbers very large. To circumvent this, we show that given a linear matroid (U, I) of
low rank and whose ground set U is small, along with a representation matrix M over
the field R, for a randomly chosen small prime q, the matrix M mod q is also a linear
representation of the matroid (see Lemma 9). To prove this result, we first observe that for any
number n, the number of distinct prime factors is bounded by O(log n).
Observation 2. There is a constant c such that the number of distinct prime factors of any
number n, denoted by ω(n), is at most c log n.
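A quick empirical check of Observation 2 (the factor 2 in the bound below is our own loose choice of constant):

```python
import math

def omega(n):
    """Number of distinct prime factors of n (trial division)."""
    count, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            count += 1
            while n % p == 0:
                n //= p
        p += 1
    return count + (1 if n > 1 else 0)

# omega(n) is maximized by primorials, and even there it stays within c * log n.
assert omega(2 * 3 * 5 * 7 * 11) == 5
assert all(omega(n) <= 2 * math.log(n) for n in range(2, 10000))
```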
The well-known prime number theorem implies the following.
Proposition 1. There is a constant c such that the number of distinct prime numbers
smaller than or equal to n, denoted by π(n), is at least c·n/log n.
Lemma 9. Let X = (U, I) be a rank r linear matroid representable by an r × n matrix M
over R with each entry between −n^{c′r}·(1/δ) and n^{c′r}·(1/δ) for some constants c′ and δ. Let
ε > 0. There is a number c ∈ O(log(1/ε)) such that for a prime number q chosen uniformly at
random from the set of prime numbers smaller than or equal to c·n^{2r+3}·(n log n + log(1/δ))²/ε,
the matrix Mq = M mod q over R represents the matroid X with probability at least 1 − ε/n.
Proof. To prove that Mq is a representation of X (with high probability), it is enough
to show that for any basis B ∈ B(X), the corresponding columns in Mq are linearly
independent. For this purpose, consider some basis B ∈ B(X). Since B is an independent
set in M, we have that the determinant of M[⋆, B], denoted by det(M[⋆, B]), is non-zero.
The determinant of Mq[⋆, B] is equal to det(M[⋆, B]) mod q. Let a = det(M[⋆, B]), and let
b = a mod q. The value b is equal to zero only if q is a prime factor of a. Since the absolute
value of each entry in M is at most n^{c′r}·(1/δ), the absolute value of a is upper bounded by
r!·n^{c′r²}·(1/δ)^r. By Observation 2, the number of prime factors of a is at most c1(log(r!) +
c′r² log n + r log(1/δ)) for some constant c1. The total number of bases in X is at most n^r.
Hence the cardinality of the set F = {z : z is a prime factor of det(M[⋆, B]) for some B ∈
B(X)} is at most n^r · c1(log(r!) + c′r² log n + r log(1/δ)) ≤ c2·n^{r+1}·(n log n + log(1/δ)) for
some constant c2.
By Proposition 1, there is a constant c3 such that the number of prime numbers less than
or equal to c·n^{2r+3}·(n log n + log(1/δ))²/ε is at least

t = c3·c·n^{2r+3}·(n log n + log(1/δ))² / (ε · log(n^{2r+3}·(n log n + log(1/δ))²/ε)).
The probability that Mq is not a representation of X (denote this event by Mq ≢ M) is

Pr[Mq ≢ M] = Pr[q ∈ F] ≤ |F|/t
           ≤ [c2 · log(n^{2r+3}·(n log n + log(1/δ))²/ε) / (c3·c·n^{r+1}·(n log n + log(1/δ)))] · (ε/n).

For any ε > 0, there is a number c ∈ O(log(1/ε)) such that the above probability is at most
ε/n. This completes the proof of the lemma.
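The phenomenon behind Lemma 9 can be seen on a 2 × 2 example. The helper below (our own, not from the paper) computes matrix rank over GF(q) by Gaussian elimination; a typical prime preserves the rank, while a prime dividing the determinant is one of the rare "bad" choices.

```python
import numpy as np

def rank_mod(M, q):
    """Matrix rank of an integer matrix over the finite field GF(q), q prime."""
    A = [[x % q for x in row] for row in M.tolist()]
    rows, cols, rank = len(A), len(A[0]), 0
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if A[r][col]), None)
        if pivot is None:
            continue                                # no pivot in this column
        A[rank], A[pivot] = A[pivot], A[rank]
        inv = pow(A[rank][col], -1, q)              # field inverse (q is prime)
        A[rank] = [(x * inv) % q for x in A[rank]]
        for r in range(rows):
            if r != rank and A[r][col]:
                A[r] = [(a - A[r][col] * b) % q for a, b in zip(A[r], A[rank])]
        rank += 1
    return rank

M = np.array([[1, 2], [3, 4]])   # det(M) = -2, so rank 2 over the rationals
print(rank_mod(M, 10007))  # -> 2  (a typical prime preserves the rank)
print(rank_mod(M, 2))      # -> 1  (q = 2 divides det(M): a "bad" prime)
```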
By combining Lemmata 8 and 9, we can apply Reduction Rule 1 such that each entry
in the output representation matrix has bounded value.

Lemma 10. Given ε > 0, Reduction Rule 1 can be applied in polynomial time with success
probability at least 1 − ε/n. Moreover, each entry in the output representation matrix is at
most c·n^{2r+3}·(n log n + log(1/ε))²/ε, where c ∈ O(log(1/ε)).
Proof. Let M be the input representation matrix. Let ε′ = ε/(2n). Now we apply Lemma 8
using p > 2^n/ε′. Let M′ be the output representation matrix of Lemma 8. By Lemma 8, M′
represents M with probability at least 1 − ε′. Observe that the absolute values of the matrix
entries are bounded by the value of q as given in Lemma 9; thus each entry in M′ has
absolute value bounded by n^{c′n}/ε² for some constant c′. Now, applying Lemma 9
completes the proof.
We would like to apply Reduction Rule 1 as many times as possible in order to obtain
a “good” bound on the rank of the matroid. However, for this purpose, after applying
Reduction Rule 1 with respect to some co-loop of the matroid, some other co-loops need to
remain co-loops. Thus, instead of applying Reduction Rule 1 blindly, we choose vectors vM
whose vertices belong to a predetermined independent set. To understand the advantage
behind a more careful choice of the vectors vM , suppose that we are given an independent
set U in the graph G such that every vertex in it is a co-loop in the matroid. Then, after
we apply Reduction Rule 1 with one of the vertices in U , it holds that every other vertex in
U is still a co-loop (by Lemma 1 and Observation 1). In order to find a large independent
set (in order to apply Reduction Rule 1 many times), we use the following two known
algorithmic results about Vertex Cover Above MM.
Lemma 11 ([21]). There is a 2.3146^k · n^{O(1)}-time deterministic algorithm for Vertex
Cover Above MM.
Recall that for a graph G, we let β(G) denote the vertex cover number of G.
Lemma 12 ([23]). For any ε > 0, there is a randomized polynomial-time approximation
algorithm that, given a graph G, outputs a vertex cover of G of cardinality at most
µ(G) + O(√(log n))·(β(G) − µ(G)), with probability at least 1 − ε.
In what follows, we also need the following general lemma about linear matroids.
Lemma 13 ([13]). Let M be an a × b matrix representing some matroid. If M ′ is a matrix
consisting of a row basis of M then M ′ represents the same matroid as M .
We are now ready to give the main lemma of this subsection.
Lemma 14. There is a polynomial time randomized algorithm that, given an instance
(G, M = In, µ(G) + k) of Rank Vertex Cover and ε̂ > 0, with probability at least 1 − ε̂
outputs an equivalent instance (G′, M′, ℓ) of Rank Vertex Cover such that the number
of rows in M′ is at most O(k^{3/2}). Here, M′ is a matrix over the field R where each entry
is Õ(k^{5/2} + log(1/ε̂)) bits long.
Proof. Recall that n = |V(G)|. If k ≤ log n, then we use Lemma 11 to solve the problem
in polynomial time. Next, we assume that log n < k. Let δ = ε̂/2. Now, by using
Lemma 12, in polynomial time we obtain a vertex cover Y of G of size at most
µ(G) + c′·√(log n)·k ≤ µ(G) + c′·k^{3/2}, with probability at least 1 − δ, where c′ is some constant.
If we fail to compute such a vertex cover, then we output an arbitrary constant sized
instance; this happens only with probability at most δ. Otherwise, let
S = V(G) \ Y. Since Y is a vertex cover of G, we have that S is an independent set of
G. Hence, |S| ≥ n − (µ(G) + c′·k^{3/2}). Since M = In, all the elements of M, including
the ones in S, are co-loops in M. Now, we apply Reduction Rule 1 with the elements of S
(one by one). By Lemma 1 and Observation 1, after each application of Reduction Rule 1,
the remaining elements in S are still co-loops. In particular, we apply Reduction Rule 1
|S| many times. Let (G′ , M ′ , ℓ) be the instance obtained after these |S| applications of
Reduction Rule 1 using Lemma 10 (substituting ε = δ in Lemma 10).
By Lemma 7, we know that after each application of Reduction Rule 1, the rank reduces
by 2. Hence,

rank(M′) = rank(M) − 2|S|
         ≤ n − 2(n − (µ(G) + c′·k^{3/2}))
         = −n + 2µ(G) + 2c′·k^{3/2} ≤ 2c′·k^{3/2}   (because 2µ(G) ≤ n).

During each application of Reduction Rule 1, by Lemma 13, we can assume that the
number of rows in the representation matrix is exactly the same as the rank of the matrix.
Now, we return (G′, M′, ℓ) as the output. Notice that the number of rows in M′ is at most
O(k^{3/2}).
Now, we analyze the probability of success. As finding the approximate vertex cover Y
using Lemma 12 fails with probability at most δ = ε̂/2, in order to get the required success
probability of 1 − ε̂, the |S| applications of Reduction Rule 1 should succeed with probability
at least 1 − ε̂/2. We suppose that the matrix M = In is over the field R. Recall that the
instance (G′, M′, ℓ) is obtained after |S| applications of Reduction Rule 1. The failure
probability of each application of Reduction Rule 1 is at most δ/n. Hence, by the union bound,
the probability of failure in at least one application of Reduction Rule 1 is at most δ. Hence
the total probability of success is at least 1 − (δ + δ) = 1 − ε̂. By Lemma 10, each entry in
the output representation matrix is at most c·n^{2r+3}·(n log n + log(2/ε̂))²/ε̂. Hence the
number of bits required to represent an entry in M′ is at most Õ(r log n + log(2/ε̂)) =
Õ(k^{5/2} + log(1/ε̂)).
3.3 Graph Reduction

In the previous subsection we have seen how to reduce the number of rows in the matroid.
In this subsection we move to the second step of our compression algorithm, that is, to reduce
the size of the graph. Formally, we want to solve the following problem.
Graph Reduction
Input: An instance (G′, M, ℓ) of Rank Vertex Cover such that the number of rows in M is at most O(k^{3/2}).
Output: An equivalent instance (G′′, M′, ℓ) such that |V(G′′)|, |E(G′′)| ≤ O(k³).
Here, we give an algorithm to reduce the number of edges in the graph. Having reduced the
number of edges, we also obtain the desired bound on the number of vertices (as isolated
vertices are discarded). Towards this, we first give some definitions and notations. In this
section, we use F to denote either a finite field or R.
Definition 5 (Symmetric Square). For a set of vectors S over a field F, the symmetric
square, denoted by S^{(2)}, is defined as S^{(2)} = {uv^T + vu^T : u, v ∈ S}, where the operation is
matrix multiplication. The elements of S^{(2)} are matrices. We can define the rank function
r^{(2)} on S^{(2)} by treating the matrices as "long" vectors over the field F.
With the rank function r^{(2)}, the pair (S^{(2)}, r^{(2)}) forms a matroid. For details we refer
the reader to [22].
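Definition 5 is easy to render concretely; the helper names below are ours. Flattening each matrix u v^T + v u^T to a long vector reduces r^{(2)} to ordinary linear rank.

```python
import numpy as np

def sym_square(S):
    """S^(2): all matrices u v^T + v u^T for u, v in S."""
    return [np.outer(u, v) + np.outer(v, u) for u in S for v in S]

def r2(elements):
    """r^(2): ordinary linear rank after flattening each matrix to a long vector."""
    return np.linalg.matrix_rank(np.stack([e.flatten() for e in elements]))

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
S2 = sym_square([e1, e2])
# Symmetric 2x2 matrices form a 3-dimensional space, so the rank of S^(2) is 3.
print(r2(S2))  # -> 3
```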
The dot product of two column vectors a, b ∈ F^n is the scalar a^T b and is denoted by
⟨a, b⟩. Two properties of the dot product are (i) ⟨a, b⟩ = ⟨b, a⟩ and (ii) ⟨a, b + c⟩ = ⟨a, b⟩ + ⟨a, c⟩.
Definition 6. Given a vector space F^d and a subspace F of F^d, the orthogonal space of F
is defined as F^⊥ = {x ∈ F^d : ⟨y, x⟩ = 0 for all y ∈ F}.
The following observation can be proved using the associativity of matrix multiplication
and the dot product of vectors.
Observation 3. Let u, v, w be three n-length vectors. Then, uv^T w = ⟨v, w⟩u.
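Observation 3 is easy to sanity-check numerically (the random vectors below are our own choice):

```python
import numpy as np

rng = np.random.default_rng(7)
u, v, w = rng.standard_normal((3, 5))                    # three random 5-length vectors
assert np.allclose(np.outer(u, v) @ w, np.dot(v, w) * u)  # u v^T w = <v, w> u
```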
Definition 7 (2-Tuples Meeting a Flat). For a flat F in a linear matroid S (here S is a set
of vectors), the set of 2-tuples meeting F is defined as F2 := {uv^T + vu^T : v ∈ F, u ∈ S}.
For the sake of completeness, we prove the following lemmata using elementary techniques from linear algebra.
Lemma 15 (see Proposition 2.8 in [22]). For any flat F in a linear matroid S with rank
function r, it holds that F2 (the set of 2-tuples meeting F) forms a flat in the matroid
(S^{(2)}, r^{(2)}).
Proof. Suppose, by way of contradiction, that F2 is not a flat. Then, there exist a, b ∈ S
such that e = ab^T + ba^T ∈ S^{(2)} is not in F2 and r^{(2)}(F2 ∪ {e}) = r^{(2)}(F2).
As e lies in the span of F2, there exist scalars λuv such that

ab^T + ba^T = Σ_{u∈F,v∈S} λuv·(uv^T + vu^T).   (1)

Note that neither a nor b belongs to F, because if at least one of them belongs to F,
then e lies in F2 (by the definition of F2). Therefore, F ≠ S and it is a proper subspace
of S, which implies that F^⊥ is non-empty (this follows from Proposition 13.2 in [17]). Pick an
element x ∈ F^⊥. By right multiplying the column vector x with the terms in Equation 1,
we get

ab^T x + ba^T x = Σ_{u∈F,v∈S} λuv·(uv^T x + vu^T x)
⟨b, x⟩a + ⟨a, x⟩b = Σ_{u∈F,v∈S} λuv·(⟨v, x⟩u + ⟨u, x⟩v)
                 = Σ_{u∈F,v∈S} λuv·⟨v, x⟩u.   (2)

The second equality follows from Observation 3, and the third equality follows from
the fact that ⟨u, x⟩ = 0 (because u ∈ F and x ∈ F^⊥). Now, by taking the dot product with x,
from Equation 2, we have that

2⟨a, x⟩⟨b, x⟩ = Σ_{u∈F,v∈S} λuv·⟨v, x⟩⟨u, x⟩ = 0.   (3)

The last equality follows from the fact that ⟨u, x⟩ = 0. As the choice of x was arbitrary,
Equations 2 and 3 hold for all x ∈ F^⊥.
By Equation 3, for all x ∈ F^⊥, at least one of ⟨b, x⟩ or ⟨a, x⟩ is zero. If exactly one of
⟨b, x⟩ or ⟨a, x⟩ is zero for some x ∈ F^⊥, then at least one of a or b is a linear combination of
vectors from F (by Equation 2) and hence it belongs to F, which is a contradiction (recall
that we have argued that both a and b do not belong to the flat F). Now, consider the
case where both ⟨b, x⟩ and ⟨a, x⟩ are zero for all x ∈ F^⊥. Then, both a and b belong to
F^⊥⊥. Since F^⊥⊥ = F (in the case F is over a finite field, see Theorem 7.5 in [15]), again
we have reached a contradiction.
For a graph-matroid pair P = (H, M) (here M represents a set of vectors), define
E(P) ⊆ M^{(2)} as E(P) = {uv^T + vu^T : {u, v} ∈ E(H)}. Note that E(P) forms a matroid
with the same rank function as the one of M^{(2)}. Moreover, the elements of E(P) are in
correspondence with the edges of H. For simplicity, we refer to an element of E(P) as an
edge. Using Lemma 15, we prove the following lemma.
Lemma 16 (see Proposition 4.7 in [22]). Let P = (H, M) be a graph-matroid pair, and
let r^{(2)} be the rank function of E(P). For an edge e that is not a co-loop in (E(P), r^{(2)}), it
holds that τ(P \ e) = τ(P).
Proof. The deletion of edges cannot increase the vertex cover number, thus τ(P \ e) ≤ τ(P).
Next, we show that it also holds that τ(P \ e) ≥ τ(P).
Let T be a minimum rank vertex cover of H \ e, and let F be the flat spanned by T in M
(so that the rank of F equals the rank of T). Denote e = {u, v}. If at least one of u or v lies
in F, then F is a vertex cover of H and hence τ(P \ e) ≥ τ(P). Hence, to conclude the proof,
it is sufficient to show that at least one of u or v lies in F. Suppose, by way of contradiction,
that u, v ∉ F. Then, the edge e = uv^T + vu^T does not belong to F2 (the set of 2-tuples
meeting F). By Lemma 15, we
have that F2 is a flat in (M^{(2)}, r^{(2)}). Since F is a vertex cover of H \ e, by the definition of
F2 and E(P), we have that E(P) \ {e} ⊆ F2. Recall that e is not a co-loop in (E(P), r^{(2)}).
This implies that e belongs to the closure of E(P) \ {e}, and hence it belongs to its superset
F2. We have thus reached a contradiction. This completes the proof.
Using Lemma 16, we get the following bound on the number of edges analogously to
Theorem 4.6 in [22].
Lemma 17. Let (H, M, ℓ) be an instance of Rank Vertex Cover and r = rank(M).
Applying the reduction given by Lemma 16 on (H, M) exhaustively results in a graph with
at most (r+1 choose 2) edges.
Proof. Observe that the dimension of the matroid (E(P), r^{(2)}) is bounded by (r+1 choose 2), and
the reduction given by Lemma 16 deletes any edge that is not a co-loop in this matroid.
In other words, once the reduction can no longer be applied, every edge is a co-loop in the
matroid (E(P), r^{(2)}), and hence the graph has at most (r+1 choose 2) edges.
Lemma 17 bounds the number of edges of the graph. To bound the number of vertices
in the graph, we apply the following simple reduction rule.
Reduction Rule 2. Let (G, M, ℓ) be an instance of Rank Vertex Cover. For v ∈ V(G)
of degree 0 in G, output (G \ v, M \ v, ℓ).
Reduction Rule 2 and Lemma 17 lead us to the main result of this subsection:
Corollary 1. There is a polynomial time algorithm which, given an instance (G′, M′, ℓ)
of Rank Vertex Cover such that the number of rows in M′ is at most O(k^{3/2}), outputs
an equivalent instance (G′′, M′′, ℓ) such that |V(G′′)|, |E(G′′)| = O(k³). Here, M′′ is a
restriction of M′.
By combining both steps we get a polynomial compression of size Õ(k^7 + k^{4.5}·log(1/ε))
for Vertex Cover Above LP.
4 Conclusion
In this paper, we presented a (randomized) polynomial compression of the Vertex Cover
Above LP problem into the algebraic Rank Vertex Cover problem. With probability
at least 1 − ε, the output instance is equivalent to the original instance, and it is of bit length
Õ(k^7 + k^{4.5}·log(1/ε)). Here, the error bound ε is part of the input. Recall that having our
polynomial compression at hand, one also obtains polynomial compressions of additional
well-known problems, such as the Odd Cycle Transversal problem, into the Rank
Vertex Cover problem.
Finally, we note that we do not know how to derandomize our polynomial compression,
and it is also not known how to derandomize the polynomial kernelization by Kratsch
17
and Wahlström [19]. Thus, to conclude our paper, we would like to pose the following
intriguing open problem: Does there exist a deterministic polynomial compression of the
Vertex Cover Above LP problem?
References
[1] R. Balasubramanian, M. R. Fellows, and V. Raman. An improved fixed-parameter
algorithm for vertex cover. Information Processing Letters, 65(3):163–168, 1998.
[2] J. F. Buss and J. Goldsmith. Nondeterminism within P. SIAM J. Comput., 22(3):560–
572, June 1993.
[3] L. S. Chandran and F. Grandoni. Refined memorization for vertex cover. Information
Processing Letters, 93(3):125–131, 2005.
[4] J. Chen, H. Fernau, I. A. Kanj, and G. Xia. Parametric duality and kernelization:
Lower bounds and upper bounds on kernel size. SIAM J. Comput., 37(4):1077–1106,
Nov. 2007.
[5] J. Chen, I. A. Kanj, and W. Jia. Vertex cover: Further observations and further
improvements. Journal of Algorithms, 41(2):280 – 301, 2001.
[6] J. Chen, I. A. Kanj, and G. Xia. Improved upper bounds for vertex cover. Theoretical
Computer Science, 411(40-42):3736–3756, 2010.
[7] M. Cygan, F. V. Fomin, L. Kowalik, D. Lokshtanov, D. Marx, M. Pilipczuk,
M. Pilipczuk, and S. Saurabh. Parameterized algorithms. Springer, 2015.
[8] M. Cygan, M. Pilipczuk, M. Pilipczuk, and J. O. Wojtaszczyk. On multiway cut
parameterized above lower bounds. ACM Trans. Comput. Theory, 5(1):3:1–3:11, May
2013.
[9] H. Dell and D. Van Melkebeek. Satisfiability allows no nontrivial sparsification unless
the polynomial-time hierarchy collapses. Journal of the ACM (JACM), 61(4):23, 2014.
[10] R. G. Downey and M. R. Fellows. Fundamentals of Parameterized Complexity. Texts
in Computer Science. Springer, 2013.
[11] R. G. Downey, M. R. Fellows, and U. Stege. Parameterized complexity: A framework
for systematically confronting computational intractability. In Contemporary trends in
discrete mathematics: From DIMACS and DIMATIA to the future, volume 49, pages
49–99, 1999.
[12] S. Garg and G. Philip. Raising the bar for vertex cover: Fixed-parameter tractability
above a higher guarantee. In Proceedings of the Twenty-Seventh Annual ACM-SIAM
Symposium on Discrete Algorithms, SODA 2016, pages 1152–1166. SIAM, 2016.
[13] G. Gordon and J. McNulty. Matroids: A Geometric Introduction. Cambridge University Press, 2012. Cambridge Books Online.
[14] G. Gutin, E. J. Kim, M. Lampis, and V. Mitsou. Vertex cover problem parameterized
above and below tight bounds. Theory of Computing Systems, 48(2):402–410, 2011.
[15] R. Hill. A First Course in Coding Theory. Oxford Applied Linguistics. Clarendon
Press, 1986.
[16] R. Impagliazzo, R. Paturi, and F. Zane. Which problems have strongly exponential
complexity? Journal of Computer and System Sciences, 63:512–530, 2001.
[17] S. Jukna. Extremal Combinatorics: With Applications in Computer Science. Texts in
Theoretical Computer Science. An EATCS Series. Springer Berlin Heidelberg, 2011.
[18] S. Kratsch. A randomized polynomial kernelization for vertex cover with a smaller
parameter. In 24th Annual European Symposium on Algorithms, ESA 2016, volume 57
of LIPIcs, pages 59:1–59:17. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2016.
[19] S. Kratsch and M. Wahlström. Representative sets and irrelevant vertices: New tools
for kernelization. In Foundations of Computer Science (FOCS), 2012 IEEE 53rd
Annual Symposium on, pages 450–459. IEEE, 2012.
[20] M. Lampis. A kernel of order 2k − c log k for vertex cover. Information Processing
Letters, 111(23):1089–1091, 2011.
[21] D. Lokshtanov, N. Narayanaswamy, V. Raman, M. Ramanujan, and S. Saurabh. Faster
parameterized algorithms using linear programming. ACM Transactions on Algorithms (TALG), 11(2):15, 2014.
[22] L. Lovász. Flats in matroids and geometric graphs. Combinatorial Surveys, pages
45–86, 1977.
[23] S. Mishra, V. Raman, S. Saurabh, S. Sikdar, and C. R. Subramanian. The complexity
of König subgraph problems and above-guarantee vertex cover. Algorithmica,
61(4):857–881, 2011.
[24] N. Narayanaswamy, V. Raman, M. Ramanujan, and S. Saurabh. LP can be a cure
for parameterized problems. In STACS'12 (29th Symposium on Theoretical Aspects
of Computer Science), volume 14, pages 338–349. LIPIcs, 2012.
[25] R. Niedermeier and P. Rossmanith. Upper bounds for vertex cover further improved.
In Annual Symposium on Theoretical Aspects of Computer Science, pages 561–570.
Springer Berlin Heidelberg, 1999.
[26] J. G. Oxley. Matroid Theory (Oxford Graduate Texts in Mathematics). Oxford University Press, Inc., New York, NY, USA, 2006.
[27] V. Raman, M. Ramanujan, and S. Saurabh. Paths, flowers and vertex cover. In
European Symposium on Algorithms, pages 382–393. Springer Berlin Heidelberg, 2011.
[28] I. Razgon and B. O’Sullivan. Almost 2-sat is fixed-parameter tractable. Journal of
Computer and System Sciences, 75(8):435–450, 2009.
SYNTHETIC DATA AUGMENTATION USING GAN FOR IMPROVED LIVER LESION CLASSIFICATION

Maayan Frid-Adar¹, Eyal Klang², Michal Amitai², Jacob Goldberger³, Hayit Greenspan¹

¹ Department of Biomedical Engineering, Tel Aviv University, Tel Aviv, Israel.
² Department of Diagnostic Imaging, The Chaim Sheba Medical Center, Tel-Hashomer, Israel.
³ Faculty of Engineering, Bar-Ilan University, Ramat-Gan, Israel.

arXiv:1801.02385v1, 8 Jan 2018
ABSTRACT
In this paper, we present a data augmentation method that
generates synthetic medical images using Generative Adversarial Networks (GANs). We propose a training scheme that
first uses classical data augmentation to enlarge the training
set and then further enlarges the data size and its diversity by
applying GAN techniques for synthetic data augmentation.
Our method is demonstrated on a limited dataset of computed
tomography (CT) images of 182 liver lesions (53 cysts, 64
metastases and 65 hemangiomas). The classification performance using only classic data augmentation yielded 78.6%
sensitivity and 88.4% specificity. By adding the synthetic data
augmentation the results significantly increased to 85.7% sensitivity and 92.4% specificity.
Index Terms— Image synthesis, data augmentation, generative adversarial network, liver lesions, lesion classification
1. INTRODUCTION
One of the main challenges in the medical imaging domain
is how to cope with the small datasets and limited amount
of annotated samples, especially when employing supervised
machine learning algorithms that require labeled data and
larger training examples. In medical imaging tasks, annotations are made by radiologists with expert knowledge on
the data and task and most annotations of medical images
are time consuming. Although public medical datasets are
available online, and grand challenges have been publicized,
most datasets are still limited in size and only applicable to
specific medical problems. Collecting medical data is a complex and expensive procedure that requires the collaboration
of researchers and radiologists [1].
Researchers attempt to overcome this challenge by using data augmentation schemes, commonly including simple
modifications of dataset images such as translation, rotation,
flip and scale. Using such data augmentation to improve the
training process of networks has become a standard procedure in computer vision tasks [2]. However, the diversity that
can be gained from small modifications of the images (such
as small translations and small angular rotations) is relatively
small. This motivates the use of synthetic data examples; such
samples enable the introduction of more variability and can
possibly enrich the dataset further, in order to improve the
system training process.
A promising approach for training a model that synthesizes images is known as Generative Adversarial Networks
(GANs) [3]. GANs have gained great popularity in the computer vision community and different variations of GANs
were recently proposed for generating high quality realistic
natural images [4, 5]. Recently, several medical imaging applications have applied the GAN framework [6, 7, 8]. Most
studies have employed the image-to-image GAN technique
to create label-to-segmentation translation, segmentation-to-image translation or medical cross-modality translations.
Some studies have been inspired by the GAN method for
image inpainting. In the current study we investigate the
applicability of GAN framework to synthesize high quality
medical images for data augmentation. We focus on improving results in the specific task of liver lesion classification.
The liver is one of the three most common sites for metastatic
cancer along with the bone and lungs. According to the World
Health Organization, in 2012 alone, cancer accounted for 8.2
million deaths worldwide of which 745,000 were caused by
liver cancer [9]. There is a great need and interest in developing automated diagnostic tools based on CT images to assists
radiologists in the diagnosis of liver lesions. Previous studies
have presented methods for automatic classification of focal
liver lesions in CT images [10, 11, 12].
In the current work we suggest an augmentation scheme
that is based on combination of standard image perturbation and synthetic liver lesion generation using GAN for
improved liver lesion classification. The contributions of
this work are the following: synthesis of high quality focal
liver lesions from CT images using generative adversarial
networks (GANs), design of a CNN-based solution for the
liver lesion classification task and augmentation of the CNN
training set using the generated synthetic data - for improved
classification results.
2. GENERATING SYNTHETIC LIVER LESIONS
Even a small CNN has thousands of parameters that need to
be trained. When using deep networks with multiple layers
or dealing with limited numbers of training images, there is a
danger of overfitting. The standard solution to reduce overfitting is data augmentation that artificially enlarges the dataset
[2]. Classical augmentation techniques on gray-scale images
include mostly affine transformations. To enrich the training
data we apply here an image synthesis technique based on
the GAN network. However, to train a GAN we need many
examples as well. The approach we propose here involves
several steps: in the first step, standard data augmentation is
used to create a larger dataset which is then used to train a
GAN. The synthetic examples created by the GAN are next
used as an additional resource for data augmentation. The
combined standard and synthetic augmentation is finally used
to train a lesion classifier. Examples of real and synthetic lesions are shown in Figure 1. We next describe the details of
the proposed system.
2.1. Classic Data Augmentation
Classic augmentation techniques on gray-scale images include mostly affine transformations such as translation, rotation, scaling, flipping and shearing. In order to preserve
the liver lesion characteristics we avoided transformations
that cause shape deformation (like shearing and elastic deformations). In addition, we kept the ROI centered around
the lesion. Each lesion ROI was first rotated Nrot times at
random angles θ = [0◦ , ..., 180◦ ]. Afterwards, each rotated
ROI was flipped Nf lip times (up-down, left-right), translated
Ntrans times where we sampled random pairs of [x, y] pixel
values between (−p, p) related to the lesion diameter (d) by
p = min(4, 0.1 × d). Finally the ROI was scaled Nscale
times from a stochastic range of scales s = [0.1 × d, 0.4 × d].
The scale was implemented by changing the amount of tissue around the lesion in the ROI. As a result of the augmentation process, the total number of augmentations was
N = Nrot × (1 + Nf lip + Ntrans + Nscale ). Bicubic interpolation was used to resize the ROIs to a uniform size of
64 × 64.
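The counting and translation rules above can be sketched as follows (a minimal numpy illustration; the function names are ours, and rotation and scaling are omitted for brevity — this is not the authors' code):

```python
import numpy as np

def translate(roi, dx, dy):
    """Shift a 2-D ROI by (dx, dy) pixels, zero-filling the exposed border."""
    out = np.roll(np.roll(roi, dy, axis=0), dx, axis=1)
    if dy > 0:
        out[:dy, :] = 0
    elif dy < 0:
        out[dy:, :] = 0
    if dx > 0:
        out[:, :dx] = 0
    elif dx < 0:
        out[:, dx:] = 0
    return out

def max_translation(lesion_diameter):
    """p = min(4, 0.1 * d): translation offsets are sampled from (-p, p)."""
    return min(4, 0.1 * lesion_diameter)

def total_augmentations(n_rot, n_flip, n_trans, n_scale):
    """N = N_rot * (1 + N_flip + N_trans + N_scale)."""
    return n_rot * (1 + n_flip + n_trans + n_scale)
```

With the values used later in the paper (N_rot = 30, N_flip = 3, N_trans = 7, N_scale = 5) this yields N = 480 augmented images per lesion ROI.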
2.2. GAN Networks for Lesion Synthesis
GANs [3] are a specific framework for generative modeling, which aims to implicitly learn the data distribution pdata from a
set of samples (e.g. images) to further generate new samples
drawn from the learned distribution.
We employed the Deep Convolutional GAN (DCGAN)
for synthesizing labeled lesions for each lesion class separately: Cysts, Metastases and Hemangiomas. We followed
the architecture proposed by Radford et al. [4]. The model
consists of two deep CNNs that are trained simultaneously, as
depicted in Figure 2.

Fig. 1: Lesion ROI examples of Cysts (top row), Metastases (middle row) and Hemangiomas (bottom row). Left side: Real lesions; Right side: Synthetic lesions.

A sample x is input to the discriminator (denoted D), which outputs D(x), its probability of being a
real sample. The generator (denoted G) gets input samples z
from a known simple distribution pz , and maps G(z) to the
image space of distribution pg . During training the generator improves in its ability to synthesize more realistic images
while the discriminator improves in its ability to distinguish
the real from the synthesized images. Hence the moniker of
adversarial training.
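The adversarial objective described above can be written compactly as follows (a sketch of the standard binary cross-entropy form of the GAN losses from [3], in numpy; not the authors' implementation):

```python
import numpy as np

def gan_losses(d_real, d_fake, eps=1e-12):
    """Adversarial losses for one batch.

    d_real: discriminator outputs D(x) on real samples, values in (0, 1).
    d_fake: discriminator outputs D(G(z)) on generated samples.
    Returns (discriminator loss, non-saturating generator loss).
    """
    d_real = np.asarray(d_real, dtype=float)
    d_fake = np.asarray(d_fake, dtype=float)
    # D wants D(x) -> 1 on real data and D(G(z)) -> 0 on fakes.
    d_loss = -np.mean(np.log(d_real + eps)) - np.mean(np.log(1.0 - d_fake + eps))
    # Non-saturating form: G wants D(G(z)) -> 1.
    g_loss = -np.mean(np.log(d_fake + eps))
    return d_loss, g_loss
```

At the equilibrium point where the discriminator is maximally confused, D outputs 0.5 everywhere and the discriminator loss equals 2·log 2.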
The generator network (Figure 2) takes a vector of 100
random numbers drawn from a uniform distribution as input
and outputs a liver lesion image of size 64 × 64 × 1. The
network architecture [4] consists of a fully connected layer
reshaped to size 4 × 4 × 1024 and four fractionally-strided
convolutional layers to up-sample the image with a 5 × 5 kernel size. The discriminator network has a typical CNN architecture that takes the input image of size 64 × 64 × 1 (lesion
ROI), and outputs a decision - if the lesion is real or fake. In
this network, four convolution layers are used, with a kernel
size of 5 × 5 and a fully connected layer. Strided convolutions are applied to each convolution layer to reduce spatial
dimensionality instead of using pooling layers.
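The generator's shape bookkeeping described above can be checked with a small helper (our sketch; the paper states the 100-d input, the 4×4×1024 reshape, four up-sampling layers and the 64×64×1 output, while the intermediate channel halving 512/256/128 is the usual DCGAN [4] choice and is our assumption):

```python
def dcgan_generator_shapes(z_dim=100, base=4, base_channels=1024, n_upsample=4):
    """Feature-map sizes through the generator: dense layer reshaped to
    base x base x base_channels, then n_upsample stride-2 fractionally-strided
    convolutions, ending in a single gray-scale channel."""
    shapes = [(z_dim,), (base, base, base_channels)]
    side, channels = base, base_channels
    for k in range(n_upsample):
        side *= 2  # each fractionally-strided (stride-2) conv doubles H and W
        channels = 1 if k == n_upsample - 1 else channels // 2
        shapes.append((side, side, channels))
    return shapes
```

Calling `dcgan_generator_shapes()` traces (100,) → (4, 4, 1024) → ... → (64, 64, 1), matching the ROI size used throughout the paper.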
3. EXPERIMENTS AND RESULTS
3.1. Data and Implementation
The liver lesion data used in this work was collected from the Sheba Medical Center. Cases of cysts, metastases and hemangiomas were acquired from 2009 to 2014 using a General Electric (GE) Healthcare scanner and a Siemens Medical System scanner, with the following parameters: 120 kVp, 140-400 mAs and 1.25-5.0 mm slice thickness. Cases were
collected with the approval of the institution’s Institutional
Review Board.
The dataset was made up of 182 portal-phase 2-D CT
scans: 53 cysts, 64 metastases, 65 hemangiomas. An expert
radiologist marked the margin of each lesion and determined
its corresponding diagnosis which was established by biopsy
or a clinical follow-up. This serves as our ground truth.
Fig. 2: Deep Convolutional GAN Architecture (generator + discriminator).

Liver lesions vary considerably in shape, contrast and size (10-102 mm). They also vary in location, where some can be
located in interior sections of the liver and some are near its
boundary where the surrounding parenchyma tissue of the lesions changes. Finally, lesions also vary within categories.
Each type of lesion has its own characteristics but some characteristics may be confusing, in particular for metastasis and
hemangioma lesions. Hemangiomas are benign tumors and
metastases are malignant lesions derived from different primary cancers. Thus, the correct identification of a lesion as
metastasis or hemangioma is especially important.
We use a liver lesion classification CNN of the following architecture: three pairs of convolutional layers where
each convolutional layer is followed by a max-pooling layer,
and two dense fully-connected layers ending with a soft-max
layer to determine the network predictions into the three lesion classes. We use ReLU as activation functions and incorporated a dropout layer with a probability of 0.5 during training. For training we used a batch size of 64 with a learning
rate of 0.001 for 150 epochs.
The inputs to our classification system are ROIs of 64 × 64
cropped from CT scans using the radiologist’s annotations.
The ROIs are extracted to capture the lesion and its surrounding tissue relative to its size. In all experiments and evaluations we used 3-fold cross validation with case separation at
the patient level and each fold contained a balanced number
of cyst, metastasis and hemangioma lesion ROIs. For the implementation of the liver lesion classification CNN we used
the Keras framework. For the implementation of the GAN architectures we used the TensorFlow framework. All training
processes were performed using an NVIDIA GeForce GTX
980 Ti GPU.
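The patient-level separation used in the cross-validation above can be sketched as follows (a minimal illustration, ours; it does not additionally balance lesion classes per fold as the evaluation in the paper does):

```python
import random
from collections import defaultdict

def patient_level_folds(lesions, n_folds=3, seed=0):
    """Assign a fold index to every lesion so that all lesions of one
    patient land in the same fold (patient-level separation).

    lesions: list of (lesion_id, patient_id, label) tuples.
    Returns a list of fold indices, one per lesion.
    """
    by_patient = defaultdict(list)
    for idx, (_, patient_id, _) in enumerate(lesions):
        by_patient[patient_id].append(idx)
    patients = sorted(by_patient)
    random.Random(seed).shuffle(patients)  # deterministic shuffle for reproducibility
    fold_of = {pid: k % n_folds for k, pid in enumerate(patients)}
    return [fold_of[pid] for _, pid, _ in lesions]
```

Keeping all ROIs of a patient in one fold prevents near-duplicate slices of the same lesion from appearing in both the training and test sets.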
3.2. Evaluation of the Synthetic Data Augmentation
We started by examining the effects of using only classic data
augmentation for the liver lesion classification task (our baseline). We then synthesized liver lesion ROIs using GAN and
examined the classification results after adding the synthesized lesion ROIs to the training set. A detailed description
of each step is provided next.
Fig. 3: Accuracy results for liver lesion classification with the
increase of training set size. The red line shows the effect of
adding classic data augmentation and the blue line shows the
effect of adding synthetic data augmentation.
As our baseline, we used classical data augmentation (see
section 2.1). We refer to this network as CNN-AUG. We
recorded the classification results for the liver lesion classification CNN for increasing amounts of data augmentation
over the original training set. In order to examine the effect
of adding increasing numbers of examples, we formed the
data groups D_aug^1 ⊂ D_aug^2 ⊂ ... ⊂ D_aug^9 such that the first
data group was only made up of the original ROIs and each
group contains more augmented data. For each original ROI,
we produced a large number of augmentations (Nrot = 30,
Nf lip = 3, Ntrans = 7 and Nscale = 5), resulting in N =
480 augmented images per lesion ROI and overall ∼ 30, 000
examples per folder. Then, we selected the images for the data
groups by sampling randomly augmented examples such that
for each original lesion we sampled the same augmentation
volume.
Table 1: Confusion Matrix for the Optimal Classical Data Augmentation Group (CNN-AUG).

True \ Auto | Cyst | Met | Hem | Specificity | Sensitivity
Cyst        |  52  |  1  |  0  | 98.4%       | 98.1%
Met         |   2  | 44  | 18  | 83.9%       | 68.7%
Hem         |   0  | 18  | 47  | 84.6%       | 72.3%
of augmented samples were used. The confusion matrix for
the optimal point appears in Table 1.
The blue line in Figure 3 shows the total accuracy results for
the lesion classification task for the synthetic data augmentation scenario. The classification results significantly improved from 78.6% with no synthesized lesions to 85.7% for
D_aug^optimal + D_synth^3 = 5000 + 3000 = 8000 samples per fold.
The confusion matrix for the best classification results using
synthetic data augmentation is presented in Table 2.
3.3. Expert Assessment of the Synthesized Data
Table 2: Confusion Matrix for the Optimal Synthetic Data Augmentation Group (CNN-AUG-GAN).

True \ Auto | Cyst | Met | Hem | Specificity | Sensitivity
Cyst        |  53  |  0  |  0  | 97.7%       | 100%
Met         |   2  | 52  | 10  | 89%         | 81.2%
Hem         |   1  | 13  | 51  | 91.4%       | 78.5%
The second step of the experiment consisted of generating synthetic liver lesion ROIs for data augmentation using
GAN. We refer to this network as CNN-AUG-GAN. Since
our dataset was too small for effective training, we incorporated classic augmentation for the training process. We employed the DCGAN architecture to train each lesion class separately, using the same 3-fold cross validation process and the
same data partition. In all the steps of the learning procedure we maintained a complete separation between train and
test subsets. After the generator had learned each lesion class
data distribution separately, it was able to synthesize new examples by using an input vector of normally distributed samples
(“noise”). The same approach that was applied in step one of
the experiment when constructing the data groups was also
applied in step two: We collected large numbers of synthetic
lesions for all three lesion classes, and constructed increased
size data groups {D_synth^j}, j = 1, . . . , 6, of synthetic examples. To keep
the classes balanced, we sampled the same number of synthetic ROIs for each class.
Results of the GAN-based synthetic augmentation experiment are shown in Figure 3. The baseline results (classical
augmentation) are shown in red. We see the total accuracy results for the lesion classification task, for each group of data.
When no augmentations were applied, a result of 57% was
achieved; this may be due to overfitting over the small number of training examples (∼ 63 samples per fold). The results
improved as the number of training examples increased, up to
saturation around 78.6% where adding more augmented data
examples failed to improve the classification results. We note
that the saturation starts with D_aug^6 = 5000 samples per fold.
We define this point as i=optimal where the smallest number
In order to assess the synthesized lesion data, we challenged
two radiologists to classify real and fake lesion ROIs into one
of three classes: cyst, metastasis or hemangioma. The goal
of the experiment was to check if the radiologists would perform differently on a real lesion vs. a fake one. Similar results
would indicate the relevance of the generated data to the classification task.
The experts were given, in random order, lesion ROIs
from the original dataset of 182 real lesions and from 120
additional synthesized lesions. The expert radiologists’ results were compared against the ground truth classification.
It is important to note that in the defined task, we challenged
the radiologists to reach a decision based on a single 2-D
ROI image. This scenario is not consistent with existing
clinical workflow in which the radiologist makes a decision
based on the entire 3D volume, with support from additional
anatomical context, medical history context, and more. We
are therefore not focusing on the classification results per se, but rather on the delta in performance between the two
presented datasets.
We received the following set of results: Expert 1 classified the real and synthesized lesions correctly in 78% and
77.5% of the cases, respectively. Expert 2 classified the real
and synthesized lesions correctly in 69.2% and 69.2% of the
cases, respectively. We observe that for both experts, the classification performances for the real lesions and the synthesized lesions were similar. This suggests that our synthetic
generated lesions were meaningful in appearance.
4. CONCLUSION
To conclude, in this work we presented a method that uses the
generation of synthetic medical images for data augmentation
to improve classification performance on a medical problem
with limited data. We demonstrated this technique on a liver
lesion classification task and achieved a significant improvement of 7% using synthetic augmentation over the classic
augmentation. In the future, we plan to extend our work to
additional medical domains that can benefit from synthesis of
lesions for improved training, toward increased classification performance.
5. REFERENCES
[1] H. Greenspan, B. van Ginneken, and R. M. Summers, “Guest editorial deep learning in medical imaging: Overview and future promise of an exciting new
technique,” IEEE Transactions on Medical Imaging,
vol. 35, no. 5, pp. 1153–1159, May 2016.
[2] A. Krizhevsky, I. Sutskever, and G. Hinton, "Imagenet classification with deep convolutional neural networks," in Advances in neural information processing
systems, 2012.
[3] I. Goodfellow et al., “Generative adversarial nets,”
in Advances in neural information processing systems,
2014.
[4] A. Radford, L. Metz, and S. Chintala,
“Unsupervised representation learning with deep convolutional generative adversarial networks,” arXiv preprint
arXiv:1511.06434, 2015.
[5] A. Odena, C. Olah, and J. Shlens, “Conditional image
synthesis with auxiliary classifier gans,” arXiv preprint
arXiv:1610.09585, 2016.
[6] P. Costa et al., “Towards adversarial retinal image synthesis,” arXiv preprint arXiv:1701.08974, 2017.
[7] D. Nie et al., Medical Image Synthesis with Context-Aware Generative Adversarial Networks, pp. 417–425,
Springer International Publishing, Cham, 2017.
[8] T. Schlegl et al., “Unsupervised anomaly detection with
generative adversarial networks to guide marker discovery,” in International Conference on Information Processing in Medical Imaging. Springer, 2017, pp. 146–
157.
[9] J. Ferlay et al., “Cancer incidence and mortality worldwide: sources, methods and major patterns in globocan
2012,” International Journal of Cancer, vol. 136, no. 5,
2015.
[10] M. Gletsos et al., “A computer-aided diagnostic system
to characterize CT focal liver lesions: design and optimization of a neural network classifier,” IEEE Transactions on Information Technology in Biomedicine, vol. 7,
no. 3, pp. 153–162, Sept 2003.
[11] C. Chang et al., "Computer-aided diagnosis of liver
tumors on computed tomography images,” Computer
Methods and Programs in Biomedicine, vol. 145, pp.
45–51, 2017.
[12] I. Diamant et al., “Task-driven dictionary learning based
on mutual information for medical image classification,”
IEEE Transactions on Biomedical Engineering, vol. 64,
no. 6, pp. 1380–1392, June 2017.
arXiv:1303.6030v1 [] 25 Mar 2013
ON HILBERT FUNCTIONS OF
GENERAL INTERSECTIONS OF IDEALS
GIULIO CAVIGLIA AND SATOSHI MURAI
Abstract. Let I and J be homogeneous ideals in a standard graded polynomial
ring. We study upper bounds of the Hilbert function of the intersection of I
and g(J), where g is a general change of coordinates. Our main result gives a
generalization of Green’s hyperplane section theorem.
1. Introduction
Hilbert functions of graded K-algebras are important invariants studied in several
areas of mathematics. In the theory of Hilbert functions, one of the most useful
tools is Green’s hyperplane section theorem, which gives a sharp upper bound for
the Hilbert function of R/hR, where R is a standard graded K-algebra and h is
a general linear form, in terms of the Hilbert function of R. This result of Green
has been extended to the case of general homogeneous polynomials by Herzog and
Popescu [HP] and Gasharov [Ga]. In this paper, we study a further generalization
of these theorems.
Let K be an infinite field and S = K[x1 , . . . , xn ] a standard graded polynomial
ring. Recall that the Hilbert function H(M, −) : Z → Z of a finitely generated
graded S-module M is the numerical function defined by
H(M, d) = dimK Md ,
where Md is the graded component of M of degree d. A set W of monomials of
S is said to be lex if, for all monomials u, v ∈ S of the same degree, u ∈ W and
v >lex u imply v ∈ W , where >lex is the lexicographic order induced by the ordering
x1 > · · · > xn . We say that a monomial ideal I ⊂ S is a lex ideal if the set of
monomials in I is lex. The classical Macaulay’s theorem [Mac] guarantees that, for
any homogeneous ideal I ⊂ S, there exists a unique lex ideal, denoted by I^lex, with
the same Hilbert function as I. Green’s hyperplane section theorem [Gr] states
Theorem 1.1 (Green’s hyperplane section theorem). Let I ⊂ S be a homogeneous
ideal. For a general linear form h ∈ S1 ,
H(I ∩ (h), d) ≤ H(I^lex ∩ (x_n), d) for all d ≥ 0.
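A minimal worked instance of Theorem 1.1 (our illustration, not from the paper):

```latex
Let $S=K[x_1,x_2]$ and $I=(x_1x_2)$. Then $I^{\mathrm{lex}}=(x_1^2)$, since both
ideals have Hilbert function $H(I,d)=d-1$ for $d\ge 2$ and $(x_1^2)_d$ is the lex
segment of length $d-1$ in each degree. For a general linear form $h$ one has
$I\cap(h)=(x_1x_2h)$, while $I^{\mathrm{lex}}\cap(x_2)=(x_1^2x_2)$; both
intersections have Hilbert function $d-2$ for $d\ge 3$, so here Green's bound
holds with equality.
```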
2010 Mathematics Subject Classification. Primary 13P10, 13C12, Secondary 13A02.
The work of the first author was supported by a grant from the Simons Foundation (209661 to
G. C.). The work of the second author was supported by KAKENHI 22740018.
Green’s hyperplane section theorem is known to be useful to prove several important results on Hilbert functions such as Macaulay’s theorem [Mac] and Gotzmann’s
persistence theorem [Go], see [Gr]. Herzog and Popescu [HP] (in characteristic 0)
and Gasharov [Ga] (in positive characteristic) generalized Green’s hyperplane section theorem in the following form.
Theorem 1.2 (Herzog–Popescu, Gasharov). Let I ⊂ S be a homogeneous ideal.
For a general homogeneous polynomial h ∈ S of degree a,
H(I ∩ (h), d) ≤ H(I^lex ∩ (x_n^a), d) for all d ≥ 0.
We study a generalization of Theorems 1.1 and 1.2. Let >oplex be the lexicographic
order on S induced by the ordering xn > · · · > x1 . A set W of monomials of S is
said to be opposite lex if, for all monomials u, v ∈ S of the same degree, u ∈ W and
v >oplex u imply v ∈ W . Also, we say that a monomial ideal I ⊂ S is an opposite lex
ideal if the set of monomials in I is opposite lex. For a homogeneous ideal I ⊂ S, let
I oplex be the opposite lex ideal with the same Hilbert function as I and let Ginσ (I)
be the generic initial ideal ([Ei, §15.9]) of I with respect to a term order >σ .
In Section 3 we will prove the following
Theorem 1.3. Suppose char(K) = 0. Let I ⊂ S and J ⊂ S be homogeneous ideals
such that Ginlex (J) is lex. For a general change of coordinates g of S,
H(I ∩ g(J), d) ≤ H(I^lex ∩ J^oplex, d) for all d ≥ 0.
Theorems 1.1 and 1.2, assuming that the characteristic is zero, are special cases
of the above theorem when J is principal. Note that Theorem 1.3 is sharp since the
equality holds if I is lex and J is oplex (Remark 3.5). Note also that if Gin_σ(J) is lex for some term order >σ then Gin_lex(J) must be lex as well ([Co1, Corollary 1.6]).
Unfortunately, the assumption on J, as well as the assumption on the characteristic of K, in Theorem 1.3 are essential (see Remark 3.6). However, we prove the
following result for the product of ideals.
Theorem 1.4. Suppose char(K) = 0. Let I ⊂ S and J ⊂ S be homogeneous ideals.
For a general change of coordinates g of S,
H(Ig(J), d) ≥ H(I^lex J^oplex, d) for all d ≥ 0.
Inspired by Theorems 1.3 and 1.4, we suggest the following conjecture.
Conjecture 1.5. Suppose char(K) = 0. Let I ⊂ S and J ⊂ S be homogeneous
ideals such that Ginlex (J) is lex. For a general change of coordinates g of S,
dim_K Tor_i(S/I, S/g(J))_d ≤ dim_K Tor_i(S/I^lex, S/J^oplex)_d for all d ≥ 0.
Theorems 1.3 and 1.4 show that the conjecture is true if i = 0 or i = 1. The
conjecture is also known to be true when J is generated by linear forms by the result
of Conca [Co2, Theorem 4.2]. Theorem 2.8, which we prove later, also provides some
evidence supporting the above inequality.
2. Dimension of Tor and general change of coordinates
Let GLn (K) be the general linear group of invertible n × n matrices over K.
Throughout the paper, we identify each element h = (a_ij) ∈ GL_n(K) with the change of coordinates defined by h(x_i) = Σ_{j=1}^n a_{ji} x_j for all i.
We say that a property (P) holds for a general g ∈ GLn (K) if there is a non-empty
Zariski open subset U ⊂ GLn (K) such that (P) holds for all g ∈ U.
We first prove that, for two homogeneous ideals I ⊂ S and J ⊂ S, the Hilbert
function of I ∩ g(J) and that of Ig(J) are well defined for a general g ∈ GLn (K),
i.e. there exists a non-empty Zariski open subset of GLn (K) on which the Hilbert
function of I ∩ g(J) and that of Ig(J) are constant.
Lemma 2.1. Let I ⊂ S and J ⊂ S be homogeneous ideals. For a general change
of coordinates g ∈ GLn (K), the function H(I ∩ g(J), −) and H(Ig(J), −) are well
defined.
Proof. We prove the statement for I ∩ g(J) (the proof for Ig(J) is similar). It is
enough to prove the same statement for I + g(J). We prove that inlex (I + g(J)) is
constant for a general g ∈ GLn (K).
Let tkl , where 1 ≤ k, l ≤ n, be indeterminates, K̃ = K(tkl : 1 ≤ k, l ≤ n) the field
of fractions of K[tkl : 1 ≤ k, l ≤ n] and A = K̃[x1 , . . . , xn ]. Let ρ : S → A be the ring
map induced by ρ(x_k) = Σ_{l=1}^n t_{lk} x_l for k = 1, 2, . . . , n, and L̃ = IA + ρ(J)A ⊂ A.
Let L ⊂ S be the monomial ideal with the same monomial generators as inlex (L̃).
We prove inlex (I + g(J)) = L for a general g ∈ GLn (K).
Let f1 , . . . , fs be generators of I and g1 , . . . , gt those of J. Then the polynomials
f1 , . . . , fs , ρ(g1 ), . . . , ρ(gt ) are generators of L̃. By the Buchberger algorithm, one
can compute a Gröbner basis of L̃ from f1 , . . . , fs , ρ(g1 ), . . . , ρ(gt ) by finite steps.
Consider all elements h1 , . . . , hm ∈ K(tkl : 1 ≤ k, l ≤ n) which are the coefficient of
polynomials (including numerators and denominators of rational functions) that appear in the process of computing a Gröbner basis of L̃ by the Buchberger algorithm.
Consider a non-empty Zariski open subset U ⊂ GLn (K) such that hi (g) ∈ K \ {0}
for any g ∈ U, where hi (g) is an element obtained from hi by substituting tkl with
entries of g. By construction inlex (I + g(J)) = L for every g ∈ U.
Remark 2.2. The method used to prove the above lemma can be easily generalized
to a number of situations. For instance for a general g ∈ GLn (K) and a finitely
generated graded S-module M, the Hilbert function of Tori (M, S/g(J)) is well deϕp+1
ϕp
ϕ0
ϕ1
fined for every i. Let F : 0 −→ Fp −→ · · · −→ F1 −→ F0 −→ 0 be a graded
free resolution of M. Given a change of coordinates g, one first notes that for every
i = 0, . . . , p, the Hilbert function H(Tori (M, S/g(J)), −) is equal to the difference
between the Hilbert function of Ker(πi−1 ◦ ϕi ) and the one of ϕi+1 (Fi+1 ) + Fi ⊗S g(J)
where πi−1 : Fi−1 → Fi−1 ⊗S S/g(J) is the canonical projection. Hence we have
H(Tori (M, S/g(J)), −) =
(1)
H(Fi , −) − H(ϕi (Fi ) + g(J)Fi−1, −) + H(g(J)Fi−1, −)
− H(ϕi+1 (Fi+1 ) + g(J)Fi , −).
Clearly H(Fi , −) and H(g(J)Fi−1, −) do not depend on g. Thus it is enough to show
that, for a general g, the Hilbert functions of ϕi (Fi ) + g(J)Fi−1 are well defined for
all i = 0, . . . , p + 1. This can be seen as in Lemma 2.1.
Next, we present two lemmas which will allow us to reduce the proofs of the
theorems in the third section to combinatorial considerations regarding Borel-fixed
ideals.
The first lemma is probably clear to experts, but we include its proof
for the sake of the exposition. The ideas used in Lemma 2.4 are similar to that of
[Ca1, Lemma 2.1] and they rely on the construction of a flat family and on the use
of the structure theorem for finitely generated modules over principal ideal domains.
Lemma 2.3. Let M be a finitely generated graded S-module and J ⊂ S a homogeneous ideal. For a general change of coordinates g ∈ GLn (K) we have that
dimK Tori (M, S/g(J))j ≤ dimK Tori (M, S/J)j for all i and for all j.
Proof. Let F be a resolution of M, as in Remark 2.2. Let i, 0 ≤ i ≤ p + 1 and
notice that, by equation (1), it is sufficient to show: H(ϕi (Fi ) + g(J)Fi−1 , −) ≥
H(ϕi (Fi ) + JFi−1 , −). We fix a degree d and consider the monomial basis of (Fi−1 )d .
Given a change of coordinates h = (akl ) ∈ GLn (K) we present the vector space
Vd = (ϕi (Fi ) + h(J)Fi−1 )d with respect to this basis. The dimension of Vd equals
the rank of a matrix whose entries are polynomials in the akl ’s with coefficients in
K. Such a rank is maximal when the change of coordinates h is general.
For a vector w = (w1 , . . . , wn ) ∈ Zn≥0 , let inw (I) be the initial ideal of a homogeneous ideal I with respect to the weight order >w (see [Ei, p. 345]). Let T be a new
indeterminate and R = S[T]. For a = (a_1, . . . , a_n) ∈ Z^n_{≥0}, let x^a = x_1^{a_1} x_2^{a_2} · · · x_n^{a_n} and (a, w) = a_1w_1 + · · · + a_nw_n. For a polynomial f = Σ_{a ∈ Z^n_{≥0}} c_a x^a, where c_a ∈ K, let b = max{(a, w) : c_a ≠ 0} and

f̃ = T^b Σ_{a ∈ Z^n_{≥0}} T^{−(a,w)} c_a x^a ∈ R.
Note that f˜ can be written as f˜ = inw (f ) + T g where g ∈ R. For an ideal I ⊂ S,
let I˜ = (f˜ : f ∈ I) ⊂ R. For λ ∈ K \ {0}, let Dλ,w be the diagonal change of
coordinates defined by D_{λ,w}(x_i) = λ^{−w_i} x_i. From the definition, we have

R/(Ĩ + (T)) ≅ S/in_w(I)   and   R/(Ĩ + (T − λ)) ≅ S/D_{λ,w}(I)
where λ ∈ K \ {0}. Moreover (T − λ) is a non-zero divisor of R/I˜ for any λ ∈ K.
See [Ei, §15.8].
Lemma 2.4. Fix an integer j. Let w ∈ Zn≥0 , M a finitely generated graded S-module
and J ⊂ S a homogeneous ideal. For a general λ ∈ K, one has
dim_K Tor_i(M, S/in_w(J))_j ≥ dim_K Tor_i(M, S/D_{λ,w}(J))_j for all i.
Proof. Consider the ideal J̃ ⊂ R defined as above. Let M̃ = M ⊗_S R and T_i = Tor_i^R(M̃, R/J̃). By the structure theorem for modules over a PID (see [La, p. 149]), we have

(T_i)_j ≅ K[T]^{a_ij} ⊕ A_ij

as a finitely generated K[T]-module, where a_ij ∈ Z_{≥0} and where A_ij is the torsion submodule. Moreover A_ij is a module of the form

A_ij ≅ ⊕_{h=1}^{b_ij} K[T]/(P_h^{i,j}),

where P_h^{i,j} is a non-zero polynomial in K[T]. Set l_λ = T − λ. Consider the exact
sequence
(2)   0 → R/J̃ →^{·l_λ} R/J̃ → R/((l_λ) + J̃) → 0.

By considering the long exact sequence induced by Tor_i^R(M̃, −), we have the following exact sequence

(3)   0 → T_i/l_λT_i → Tor_i^R(M̃, R/((l_λ) + J̃)) → K_{i−1} → 0,

where K_{i−1} is the kernel of the map T_{i−1} →^{·l_λ} T_{i−1}. Since l_λ is a regular element for
R and M̃ , the middle term in (3) is isomorphic to
Tor_i^{R/(l_λ)}(M̃/l_λM̃, R/((l_λ) + J̃)) = Tor_i^S(M, S/in_w(J)) if λ = 0, and Tor_i^S(M, S/D_{λ,w}(J)) if λ ≠ 0
(see [Mat, p. 140]). By taking the graded component of degree j in (3), we obtain
(4)   dim_K Tor_i^S(M, S/in_w(J))_j = a_ij + #{P_h^{i,j} : P_h^{i,j}(0) = 0} + #{P_h^{i−1,j} : P_h^{i−1,j}(0) = 0},

where #X denotes the cardinality of a finite set X, and

(5)   dim_K Tor_i^S(M, S/D_{λ,w}(J))_j = a_ij
for a general λ ∈ K. This proves the desired inequality.
Corollary 2.5. With the same notation as in Lemma 2.4, for a general λ ∈ K,
dim_K Tor_i(M, in_w(J))_j ≥ dim_K Tor_i(M, D_{λ,w}(J))_j for all i.
Proof. For any homogeneous ideal I ⊂ S, by considering the long exact sequence
induced by Tori (M, −) from the short exact sequence 0 −→ I −→ S −→ S/I −→ 0
we have
Tor_i(M, I) ≅ Tor_{i+1}(M, S/I) for i ≥ 1
and
dimK Tor0 (M, I)j = dimK Tor1 (M, S/I)j + dimK Mj − dimK Tor0 (M, S/I)j .
Thus by Lemma 2.4 it is enough to prove that
dim_K Tor_1(M, S/in_w(J))_j − dim_K Tor_1(M, S/D_{λ,w}(J))_j ≥ dim_K Tor_0(M, S/in_w(J))_j − dim_K Tor_0(M, S/D_{λ,w}(J))_j.
This inequality follows from (4) and (5).
Proposition 2.6. Fix an integer j. Let I ⊂ S and J ⊂ S be homogeneous ideals.
Let w, w′ ∈ Zn≥0 . For a general change of coordinates g ∈ GLn (K),
(i) dimK Tori (S/I, S/g(J))j ≤ dimK Tori (S/inw (I), S/inw′ (J))j for all i.
(ii) dimK Tori (I, S/g(J))j ≤ dimK Tori (inw (I), S/inw′ (J))j for all i.
Proof. We prove (ii) (the proof for (i) is similar). By Lemmas 2.3 and 2.4 and
Corollary 2.5, we have
dim_K Tor_i(in_w(I), S/in_{w′}(J))_j ≥ dim_K Tor_i(D_{λ_1,w}(I), S/D_{λ_2,w′}(J))_j
= dim_K Tor_i(I, S/D_{λ_1,w}^{−1} D_{λ_2,w′}(J))_j
≥ dim_K Tor_i(I, S/g(J))_j,
as desired, where λ1 , λ2 are general elements in K.
Remark 2.7. Let w′ = (1, 1, . . . , 1) and note that the composite of two general
changes of coordinates is still general. By replacing J by h(J) for a general change
of coordinates h, from Proposition 2.6(i) it follows that
dim_K Tor_i(S/I, S/h(J))_j ≤ dim_K Tor_i(S/in_{>σ}(I), S/h(J))_j
for any term order >σ .
The above fact gives, as a special case, an affirmative answer to [Co2, Question
6.1]. This was originally proved in the thesis of the first author [Ca2]. We mention
it here because there seem to be no published article which includes the proof of
this fact.
Theorem 2.8. Fix an integer j. Let I ⊂ S and J ⊂ S be homogeneous ideals. For
a general change of coordinates g ∈ GLn (K),
(i) dimK Tori (S/I, S/g(J))j ≤ dimK Tori (S/Ginlex (I), S/Ginoplex (J))j for all i.
(ii) dimK Tori (I, S/g(J))j ≤ dimK Tori (Ginlex (I), S/Ginoplex (J))j for all i.
Proof. Without loss of generality, we may assume inlex (I) = Ginlex (I) and that
inoplex (J) = Ginoplex (J). It follows from [Ei, Proposition 15.16] that there are vectors
w, w′ ∈ Zn≥0 such that inw (I) = inlex (I) and inw′ (g(J)) = Ginoplex (J). Then the
desired inequality follows from Proposition 2.6.
Since Tor_0(S/I, S/J) ≅ S/(I + J) and Tor_0(I, S/J) ≅ I/IJ, we have the next corollary.
Corollary 2.9. Let I ⊂ S and J ⊂ S be homogeneous ideals. For a general change
of coordinates g ∈ GLn (K),
(i) H(I ∩ g(J), d) ≤ H(Ginlex (I) ∩ Ginoplex (J), d) for all d ≥ 0.
(ii) H(Ig(J), d) ≥ H(Ginlex (I)Ginoplex (J), d) for all d ≥ 0.
We conclude this section with a result regarding the Krull dimension of certain
Tor modules. We show how Theorem 2.8 can be used to give a quick proof of
Proposition 2.10, which is a special case (for the variety X = Pn−1 and the algebraic
group SLn ) of the main Theorem of [MS].
Recall that generic initial ideals are Borel-fixed, that is they are fixed under the
action of the Borel subgroup of GLn (K) consisting of all the upper triangular invertible matrices. In particular for an ideal I of S and an upper triangular matrix
b ∈ GLn (K) one has b(Ginlex (I)) = Ginlex (I). Similarly, if we denote by op the
change of coordinates of S which sends xi to xn−i for all i = 1, . . . , n, we have that
b(op(Ginoplex (I))) = op(Ginoplex (I)).
We call opposite Borel-fixed an ideal J of S such that op(J) is Borel-fixed (see
[Ei, §15.9] for more details on the combinatorial properties of Borel-fixed ideals).
It is easy to see that if J is Borel-fixed, then so is (x1 , . . . , xi ) + J for every
i = 1, . . . , n. Furthermore if j is an integer equal to min{i : x_i ∉ J} then J : x_j is also Borel-fixed; in this case J has a minimal generator divisible by x_j or J = (x_1, . . . , x_{j−1}). Analogous statements hold for opposite Borel-fixed ideals.
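A small concrete instance of these closure properties (our illustration, not from the paper):

```latex
In $S=K[x_1,x_2,x_3]$ the ideal $J=(x_1^2,\,x_1x_2)$ is Borel-fixed: replacing
$x_2$ by $x_1$ in the generator $x_1x_2$ gives $x_1^2\in J$. Here
$j=\min\{i : x_i\notin J\}=1$, the colon ideal $J:x_1=(x_1,x_2)$ is again
Borel-fixed, and $J$ indeed has a minimal generator, namely $x_1^2$, divisible
by $x_1$.
```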
Let I and J be ideals generated by linear forms. If we assume that I is Borel
fixed and that J is opposite Borel fixed, then there exist 1 ≤ i, j ≤ n such that
I = (x1 , . . . , xi ) and J = (xj , . . . , xn ). An easy computation shows that the Krull
dimension of Tori (S/I, S/J) is always zero when i > 0.
More generally one has
Proposition 2.10 (Miller–Speyer). Let I and J be two homogeneous ideals of S.
For a general change of coordinates g, the Krull dimension of Tori (S/I, S/g(J)) is
zero for all i > 0.
Proof. When I or J are equal to (0) or to S the result is obvious. Recall that a
finitely generated graded module M has Krull dimension zero if and only if Md = 0
for all d sufficiently large. By virtue of Theorem 2.8 it is enough to show that
Tor_i(S/I, S/J) has Krull dimension zero whenever I is Borel-fixed, J is opposite Borel-fixed and i > 0. By contradiction, let the pair I, J be a maximal counterexample
(with respect to point-wise inclusion). By the above discussion, and by applying op
if necessary, we can assume that I has a minimal generator of degree greater than 1.
Let j = min{h : x_h ∉ I} and notice that both (I : x_j) and (I + (x_j)) strictly contain I.
For every i > 0 the short exact sequence 0 → S/(I : xj ) → S/I → S/(I + (xj )) → 0
induces the exact sequence
Tori (S/(I : xj ), S/J) → Tori (S/I, S/J) → Tori (S/(I + (xj )), S/J).
GIULIO CAVIGLIA AND SATOSHI MURAI
By the maximality of I, J, the first and the last term have Krull dimension zero.
Hence the middle term must have dimension zero as well, contradicting our assumption.
3. General intersections and general products
In this section, we prove Theorems 1.3 and 1.4. We will assume throughout the
rest of the paper char(K) = 0.
A monomial ideal I ⊂ S is said to be 0-Borel (or strongly stable) if, for every
monomial uxj ∈ I and for every 1 ≤ i < j one has uxi ∈ I. Note that 0-Borel
ideals are precisely all the possible Borel-fixed ideals in characteristic 0. In general,
the Borel-fixed property depends on the characteristic of the field and we refer the
readers to [Ei, §15.9] for the details. A set W ⊂ S of monomials in S is said to be
0-Borel if the ideal they generate is 0-Borel, or equivalently if for every monomial
uxj ∈ W and for every 1 ≤ i < j one has uxi ∈ W . Similarly we say that a
monomial ideal J ⊂ S is opposite 0-Borel if for every monomial uxj ∈ J and for
every j < i ≤ n one has uxi ∈ J.
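The 0-Borel condition above is purely combinatorial, so it can be checked mechanically. The following sketch (our own illustration, not part of the paper) encodes a monomial of K[x_1, . . . , x_n] as its exponent tuple and tests the defining exchange property.

```python
# Sketch (ours): checking the 0-Borel (strongly stable) property for a
# finite set of monomials, each encoded as an exponent tuple
# (a_1, ..., a_n) meaning x_1^{a_1} ... x_n^{a_n}.

def shift_down(mono, j, i):
    """Replace one factor x_{j+1} of `mono` by x_{i+1} (0-based indices, i < j)."""
    m = list(mono)
    m[j] -= 1
    m[i] += 1
    return tuple(m)

def is_zero_borel(monos):
    """True if for every u*x_j in the set and every i < j, u*x_i is also in it."""
    mset = set(monos)
    for m in mset:
        for j, a in enumerate(m):
            if a == 0:
                continue
            for i in range(j):
                if shift_down(m, j, i) not in mset:
                    return False
    return True

# {x1^2, x1*x2} is 0-Borel; {x1*x2} alone is not (x1^2 is missing).
assert is_zero_borel({(2, 0), (1, 1)})
assert not is_zero_borel({(1, 1)})
```

The opposite 0-Borel property is the same test with the exchange running toward larger variable indices.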
Let >rev be the reverse lexicographic order induced by the ordering x1 > · · · > xn .
We recall the following result [Mu1, Lemma 3.2].
Lemma 3.1. Let V = {v1 , . . . , vs } ⊂ Sd be a 0-Borel set of monomials and W =
{w1 , . . . , ws } ⊂ Sd the lex set of monomials, where v1 ≥rev · · · ≥rev vs and w1 ≥rev
· · · ≥rev ws . Then vi ≥rev wi for all i = 1, 2, . . . , s.
Since generic initial ideals with respect to >lex are 0-Borel, the next lemma and
Corollary 2.9(i) prove Theorem 1.3.
Lemma 3.2. Let I ⊂ S be a 0-Borel ideal and P ⊂ S an opposite lex ideal. Then
dimK (I ∩ P )d ≤ dimK (I lex ∩ P )d for all d ≥ 0.
Proof. Fix a degree d. Let V, W and Q be the sets of monomials of degree d in I,
I lex and P respectively. It is enough to prove that #V ∩ Q ≤ #W ∩ Q.
Observe that Q is the set of the smallest #Q monomials in Sd with respect to
>rev . Let m = max>rev Q. Then by Lemma 3.1
#V ∩ Q = #{v ∈ V : v ≤rev m} ≤ #{w ∈ W : w ≤rev m} = #W ∩ Q,
as desired.
Next, we consider products of ideals. For a monomial u ∈ S, let max u (respectively, min u) be the maximal (respectively, minimal) integer i such that xi divides
u, where we set max 1 = 1 and min 1 = n. For a monomial ideal I ⊂ S, let I(≤k) be
the K-vector space spanned by all monomials u ∈ I with max u ≤ k.
Lemma 3.3. Let I ⊂ S be a 0-Borel ideal and P ⊂ S an opposite 0-Borel ideal.
Let G(P ) = {u1 , . . . , us } be the set of the minimal monomial generators of P . As a
K-vector space, IP is the direct sum
IP = ⊕_{i=1}^{s} (I_{(≤ min u_i)}) u_i.
Proof. It is enough to prove that, for any monomial w ∈ IP , there is the unique
expression w = f (w)g(w) with f (w) ∈ I and g(w) ∈ P satisfying
(a) max f (w) ≤ min g(w).
(b) g(w) ∈ G(P ).
Given any expression w = f g such that f ∈ I and g ∈ P, since I is 0-Borel and
P is opposite 0-Borel, if max f > min g then we may replace f by f · x_{min g}/x_{max f} ∈ I and
replace g by g · x_{max f}/x_{min g} ∈ P. This fact shows that there is an expression satisfying (a)
and (b).
and (b).
Suppose that the expressions w = f(w)g(w) and w = f′(w)g′(w) satisfy conditions
(a) and (b). Then, by (a), g(w) divides g′(w) or g′(w) divides g(w). Since g(w) and
g′(w) are generators of P, g(w) = g′(w). Hence the expression is unique.
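The exchange argument in the proof above is constructive. The following sketch (our illustration; all names are ours) performs the variable swaps on exponent tuples; the product f g is preserved at every step, while the hypotheses that I is 0-Borel and P is opposite 0-Borel are exactly what keeps the factors inside their ideals.

```python
# Sketch (ours): the variable-exchange step from the proof of Lemma 3.3,
# on monomials encoded as exponent tuples over K[x_1, ..., x_n].

def max_var(m):
    """Largest 1-based index i with x_i | m; by convention max 1 = 1."""
    return max([i + 1 for i, a in enumerate(m) if a > 0], default=1)

def min_var(m):
    """Smallest such index; by convention min 1 = n."""
    return min([i + 1 for i, a in enumerate(m) if a > 0], default=len(m))

def swap_step(f, g):
    """One exchange: move x_{max f} from f to g and x_{min g} from g to f."""
    i, j = max_var(f) - 1, min_var(g) - 1
    f2, g2 = list(f), list(g)
    f2[i] -= 1; f2[j] += 1
    g2[j] -= 1; g2[i] += 1
    return tuple(f2), tuple(g2)

def normalize(f, g):
    """Iterate swaps until condition (a), max f <= min g, holds."""
    while max_var(f) > min_var(g):
        f, g = swap_step(f, g)
    return f, g

# f = x2*x3, g = x1*x2 in K[x1,x2,x3]: one swap reaches condition (a).
f, g = normalize((0, 1, 1), (1, 1, 0))
prod = tuple(a + b for a, b in zip(f, g))
assert prod == (1, 2, 1)          # the product x1*x2^2*x3 is unchanged
assert max_var(f) <= min_var(g)   # condition (a) of the proof
```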
Lemma 3.4. Let I ⊂ S be a 0-Borel ideal and P ⊂ S an opposite 0-Borel ideal.
Then dimK (IP )d ≥ dimK (I lex P )d for all d ≥ 0.
Proof. Lemma 3.1 shows that dim_K (I_{(≤k)})_d ≥ dim_K (I^{lex}_{(≤k)})_d for all k and d ≥ 0. Then
the statement follows from Lemma 3.3.
Finally we prove Theorem 1.4.
Proof of Theorem 1.4. Let I ′ = Ginlex (I) and J ′ = Ginoplex (J). Since I ′ is 0-Borel
and J′ is opposite 0-Borel, by Corollary 2.9(ii) and Lemma 3.4
H(Ig(J), d) ≥ H(I ′ J ′ , d) ≥ H(I lex J ′ , d) ≥ H(I lex J oplex , d)
for all d ≥ 0.
Remark 3.5. Theorems 1.3 and 1.4 are sharp. Let I ⊂ S be a Borel-fixed ideal and
J ⊂ S an ideal satisfying that h(J) = J for any lower triangular matrix h ∈ GLn (K).
For a general g ∈ GLn (K), we have the LU decomposition g = bh where h ∈ GLn (K)
is a lower triangular matrix and b ∈ GLn (K) is an upper triangular matrix. Then
as K-vector spaces
I ∩ g(J) ≅ b^{−1}(I) ∩ h(J) = I ∩ J and Ig(J) ≅ b^{−1}(I)h(J) = IJ.
Thus if I is lex and J is opposite lex then H(I ∩ g(J), d) = H(I ∩ J, d) and
H(Ig(J), d) = H(IJ, d) for all d ≥ 0.
Remark 3.6. The assumption on Ginlex (J) in Theorem 1.3 is necessary. Let I =
(x_1^3, x_1^2 x_2, x_1 x_2^2, x_2^3) ⊂ K[x_1, x_2, x_3] and J = (x_3^3, x_3^2 x_2, x_3 x_2^2, x_2^3) ⊂ K[x_1, x_2, x_3]. Then
the set of monomials of degree 3 in I^lex is {x_1^3, x_1^2 x_2, x_1^2 x_3, x_1 x_2^2} and that of J^oplex is
{x_3^3, x_3^2 x_2, x_3^2 x_1, x_3 x_2^2}. Hence H(I^lex ∩ J^oplex, 3) = 0. On the other hand, as we see
in Remark 3.5, H(I ∩ g(J), 3) = H(I ∩ J, 3) = 1. Similarly, the assumption on the
characteristic of K is needed as one can easily see by considering char(K) = p > 0,
I = (x_1^p, x_2^p) ⊂ K[x_1, x_2] and J = (x_2^p). In this case we have H(I^lex ∩ J^oplex, p) = 0,
while H(I ∩ g(J), p) = H(g^{−1}(I) ∩ J, p) = 1 since I is fixed under any change of
coordinates.
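The degree-3 count in Remark 3.6 can be verified mechanically. The sketch below (ours, using the generator sets as stated in the remark) tests membership in a monomial ideal by divisibility against its generators.

```python
# Sketch (ours): verifying the degree-3 Hilbert function values in
# Remark 3.6 for monomial ideals of K[x1, x2, x3].
from itertools import combinations_with_replacement

def divides(u, v):
    return all(a <= b for a, b in zip(u, v))

def degree_part(gens, d, n=3):
    """Degree-d monomials (as exponent tuples) of the ideal generated by gens."""
    out = set()
    for c in combinations_with_replacement(range(n), d):
        m = tuple(c.count(i) for i in range(n))
        if any(divides(g, m) for g in gens):
            out.add(m)
    return out

I = [(3, 0, 0), (2, 1, 0), (1, 2, 0), (0, 3, 0)]       # x1^3, x1^2x2, x1x2^2, x2^3
J = [(0, 0, 3), (0, 1, 2), (0, 2, 1), (0, 3, 0)]       # x3^3, x3^2x2, x3x2^2, x2^3
Ilex = [(3, 0, 0), (2, 1, 0), (2, 0, 1), (1, 2, 0)]    # degree-3 part of I^lex
Joplex = [(0, 0, 3), (0, 1, 2), (1, 0, 2), (0, 2, 1)]  # degree-3 part of J^oplex

assert degree_part(I, 3) & degree_part(J, 3) == {(0, 3, 0)}   # only x2^3
assert not (degree_part(Ilex, 3) & degree_part(Joplex, 3))    # H = 0
```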
Since Tor_0(S/I, S/J) ≅ S/(I + J) and Tor_1(S/I, S/J) ≅ (I ∩ J)/IJ for all homogeneous ideals I ⊂ S and J ⊂ S, Theorems 1.3 and 1.4 show the next statement.
Remark 3.7. Conjecture 1.5 is true if i = 0 or i = 1.
References
[AH] A. Aramova and J. Herzog, Koszul cycles and Eliahou-Kervaire type resolutions, J. Algebra
181 (1996), 347–370.
[Bi] A. Bigatti, Upper bounds for the Betti numbers of a given Hilbert function, Comm. Algebra
21 (1993), 2317–2334.
[Ca1] G. Caviglia, The pinched Veronese is Koszul, J. Algebraic Combin. 30 (2009), 539–548.
[Ca2] G. Caviglia, Koszul algebras, Castelnuovo–Mumford regularity and generic initial ideals, Ph.D. Thesis, 2004.
[Co1] A. Conca, Reduction numbers and initial ideals, Proc. Amer. Math. Soc. 131 (2003), 1015–
1020.
[Co2] A. Conca, Koszul homology and extremal properties of Gin and Lex, Trans. Amer. Math.
Soc. 356 (2004), 2945–2961.
[Ei] D. Eisenbud, Commutative algebra with a view toward algebraic geometry, Grad. Texts in
Math., vol. 150, Springer-Verlag, New York, 1995.
[Ga] V. Gasharov, Hilbert functions and homogeneous generic forms, II, Compositio Math. 116
(1999), 167–172.
[Go] G. Gotzmann, Einige einfach-zusammenhängende Hilbertschemata, Math. Z. 180 (1982),
291–305.
[Gr] M. Green, Restrictions of linear series to hyperplanes, and some results of Macaulay and
Gotzmann, in: Algebraic Curves and Projective Geometry (1988), in: Trento Lecture Notes in
Math., vol. 1389, Springer, Berlin, 1989, pp. 76–86.
[HP] J. Herzog and D. Popescu, Hilbert functions and generic forms, Compositio Math. 113 (1998),
1–22.
[Hu] H. Hulett, Maximum Betti numbers of homogeneous ideals with a given Hilbert function,
Comm. Algebra 21 (1993), 2335–2350.
[La] S. Lang, Algebra, Revised third edition, Grad. Texts in Math., vol. 211, Springer-Verlag, New
York, 2002.
[Mac] F.S. Macaulay, Some properties of enumeration in the theory of modular systems, Proc.
London Math. Soc. 26 (1927), 531–555.
[Mat] H. Matsumura, Commutative Ring Theory, Cambridge University Press, 1986.
[MS] E. Miller, D. Speyer, A Kleiman-Bertini theorem for sheaf tensor products. J. Algebraic
Geom. 17 (2008), no. 2, 335–340.
[Mu1] S. Murai, Borel-plus-powers monomial ideals, J. Pure Appl. Algebra 212 (2008), 1321–1336.
[Mu2] S. Murai, Free resolutions of lex-ideals over a Koszul toric ring, Trans. Amer. Math. Soc.
363 (2011), 857–885.
[Pa] K. Pardue, Deformation classes of graded modules and maximal Betti numbers, Illinois J.
Math. 40 (1996), 564–585.
Department of Mathematics, Purdue University, West Lafayette, IN 47901, USA.
E-mail address: [email protected]
Satoshi Murai, Department of Mathematical Science, Faculty of Science, Yamaguchi University, 1677-1 Yoshida, Yamaguchi 753-8512, Japan.
E-mail address: [email protected]
JMLR: Workshop and Conference Proceedings 1:1–20, 2018
workshop title
Aggregating Strategies for Long-term Forecasting
Alexander Korotin
[email protected]
Skolkovo Institute of Science and Technology,
Nobel street, 3, Moscow, Moskovskaya oblast’, Russia
arXiv:1803.06727v1 [cs.LG] 18 Mar 2018
Vladimir V’yugin∗
[email protected]
Institute for Information Transmission Problems,
Bolshoy Karetny per. 19, build.1, Moscow, Russia
Evgeny Burnaev†
[email protected]
Skolkovo Institute of Science and Technology,
Nobel street, 3, Moscow, Moskovskaya oblast’, Russia
Editor: Editor’s name
Abstract
The article is devoted to investigating the application of aggregating algorithms to the
problem of the long-term forecasting. We examine the classic aggregating algorithms based
on the exponential reweighing. For the general Vovk’s aggregating algorithm we provide its
generalization for the long-term forecasting. For the special basic case of Vovk’s algorithm
we provide its two modifications for the long-term forecasting. The first one is theoretically
close to an optimal algorithm and is based on replication of independent copies. It provides
the time-independent regret bound with respect to the best expert in the pool. The second
one is not optimal but is more practical and has an O(√T) regret bound, where T is the length
of the game.
Keywords: aggregating algorithm, long-term forecasting, prediction with experts’ advice,
delayed feedback.
1. Introduction
We consider the online game of prediction with experts’ advice. A master (aggregating)
algorithm at every step t = 1, . . . , T of the game has to produce an aggregated prediction from the
predictions of a finite pool of N experts (see e.g. Littlestone and Warmuth (1994), Freund
and Schapire (1997), Vovk (1990), Vovk (1998), Cesa-Bianchi and Lugosi (2006), Adamskiy
et al. (2016) among others). We investigate the adversarial case, that is, no assumptions
are made about the nature of the data (stochastic, deterministic, etc.).
In the classical online scenario, all predictions at step t are made for the next step t + 1.
The true outcome is revealed immediately at the beginning of the next step of the game
and the algorithm suffers loss using a loss function.
∗ Vladimir V'yugin was supported by the Russian Science Foundation grant (project 14-50-00150).
† Evgeny Burnaev was supported by the Ministry of Education and Science of Russian Federation, grant No. 14.606.21.0004, grant code: RFMEFI60617X0004.
© 2018 A. Korotin, V. V'yugin & E. Burnaev.
Korotin V’yugin Burnaev
In contrast to the classical scenario, we consider the long-term forecasting. At each step
t of the game, the predictions are made for some pre-determined point t + D ahead (where
D ≥ 1 is some fixed known horizon), and the true outcome is revealed only at step t + D.
The performance of the aggregating algorithm is measured by the regret over the entire
game. The regret RT is the difference between the cumulative loss of the online aggregating
algorithm and the loss of some offline comparator. A typical offline comparator is the best
fixed expert in the pool or the best fixed convex linear combination of experts. The goal of
an aggregating algorithm is to minimize the regret, that is, RT → min.
It turns out that there exists a wide range of aggregating algorithms for the classic
scenario (D = 1). The majority of them are based on the exponential reweighing methods
(see Littlestone and Warmuth (1994), Freund and Schapire (1997), Vovk (1990), Vovk
(1998), Cesa-Bianchi and Lugosi (2006), Adamskiy et al. (2016), etc.). At the same time,
several algorithms come from the general theory of online convex optimization by Hazan
(2016). Such algorithms are based on online gradient descent methods.
There is no right answer to the question which category of algorithms is better in
practice. Algorithms from both groups theoretically have good performance. Regret (with
respect to some offline comparator) is usually bounded by a sublinear function of T :
1. RT ≤ O(T) for Fixed Share with constant share by Herbster and Warmuth (1998);
2. RT ≤ O(√T) for Regularized Follow The Leader according to Hazan (2016);
3. RT ≤ O(ln T ) for Fixed Share with decreasing share by Adamskiy et al. (2016);
4. RT ≤ O(1) in aggregating algorithm by Vovk (1998);
and so on. In fact, the applicability of every particular algorithm and the regret bound
depends on the properties of the loss function (convexity, Lipschitz, exponential concavity,
mixability, etc.).
When it comes to long-term forecasting, many of these algorithms do not have theoretical
guarantees of performance or even do not have a version for the long-term forecasting. Long-term forecasting implies delayed feedback. Thus, the problem of modifying the algorithms
for the long-term forecasting can be partially solved by the general results of the theory of
forecasting with the delayed feedback.
The main idea in the field of the forecasting with the delayed feedback belongs to
Weinberger and Ordentlich (2002). They studied the simple case of binary sequences
prediction under fixed known delay feedback D. According to their results, an optimal1
predictor p∗D (xt+D |xt , . . . , x1 ) for the delay D can be obtained from an optimal predictor
p∗1 (xt+1 |xt , . . . , x1 ) for the delay 1. The method implies running D independent copies of
predictor p∗1 on D disjoint time grids GRd = {t | t ≡ d (mod D)} for 1≤ d ≤ D. Thus,
p∗D (xt+D |xt , . . . , x1 ) = p∗1 (xt+D |xt , xt−D , xt−2D , . . . )
for all t. We illustrate the optimal partition of the timeline in Figure 1.
It turns out, their result also works for the general problem of forecasting under the
fixed known delay feedback, in particular, for the prediction with expert advice (we prove
1. An optimal predictor is any predictor with the regret which is less or equal to the minimax regret. There
may exist more than one optimal predictor.
Figure 1: The optimal approach to the problem of forecasting with the fixed known delay
D. The timeline is partitioned into D disjoint grids GRd . Games on different
grids are considered separately. Each game has fixed known delay 1 (not D).
this in Appendix A). Thus, it is easy to apply any 1-step-ahead forecasting aggregating
algorithm to the problem of long-term forecasting by running its D independent copies on
D disjoint grids GR_d (for d = 1, . . . , D). We call algorithms obtained by this method
replicated algorithms.
Nevertheless, one may say that such a theoretically optimal approach is practically far
from optimal because it uses only 1/D of the observed data at every step of the game. Moreover,
separate learning processes on grids GR_d (for d = 1, . . . , D) do not even interact.
Gradient-descent-based aggregating algorithms have several non-replicated adaptations
for the long-term forecasting. The most obvious adaptation is the delayed gradient descent
by Quanrud and Khashabi (2015). Also, the problem of prediction with experts’ advice can
be considered as a special case of online convex optimization with memory by Anava et al.
(2015). Both approaches provide an RT ≤ O(√T) classical regret bound.² Thus, the practical
and theoretical problem of modifying the gradient descent based aggregating algorithms for
long-term forecasting can be considered as solved.
In this work, we investigate the problem of modifying aggregating algorithms based on
exponential reweighing for the long-term forecasting. We consider the general aggregating
algorithm by Vovk (1999) for the 1-step-ahead forecasting and provide its reasonable nonreplicated generalization for the D-th-step-ahead forecasting. These algorithms are denoted
by G1 and GD respectively. We obtain a general expression for the regret bound of GD .
2. We do not include the value of the forecasting horizon D in regret bounds because in this article we
are interested only in the regret asymptotic behavior w.r.t. T . In all algorithms that we discuss the
asymptotic behavior w.r.t. D is sublinear or linear.
As an important special case, we consider the classical exponentially reweighing algorithm
V1 by Vovk (1998), that is a case of G1 , designed to compete with the best expert in the
pool. The algorithm V1 can be considered close to an optimal one because it provides
constant T -independent regret bound. We provide its replicated modification VD for the
long-term forecasting. We also propose a non-replicated modification V_D^FC of V1 for the long-term
forecasting (motivated by a reasonable practical approach).³ Our main result here is
that the regret bound for V_D^FC is O(√T).
All the algorithms that we develop and investigate require the loss function to be exponentially concave. This is a common assumption (see e.g. Kivinen and Warmuth (1999))
for the algorithms based on the exponential reweighing.4
The main contributions of this article are the following:
1. Developing the general non-replicated exponentially reweighing aggregating algorithm
GD for the problem of long-term prediction with experts’ advice and estimating its
regret.
2. Developing the non-replicated adaptation V_D^FC of the powerful aggregating algorithm
V1 by Vovk (1998). The obtained algorithm has an O(√T) regret bound with respect to
the best expert in the pool.
In our previous work (see Korotin et al. (2017)) we also studied the application of
algorithm V1 to the long-term forecasting. We applied the method of Mixing Past Posteriors
by Bousquet and Warmuth (2003) to connect the independent learning processes on separate
grids GR_d (for d = 1, . . . , D). We obtained the algorithm V_D^GC that partially connects the
learning processes on these grids.5
In contrast to our previous work, in this article we consider the general probabilistic
framework for the long-term forecasting (algorithms G1 and GD ). We obtain the algorithm
V_D^FC that fully connects the learning processes on different grids, see details in Subsection
4.3.
The article is structured as follows.
In Section 2 we set up the problem of long-term prediction with experts’ advice and
state the protocol of the online game.
In Section 3 we discuss the aggregating algorithms for the 1-step-ahead forecasting. In
Subsection 3.1 we describe the general model G1 by Vovk (1999) and consider its special
case V1 in Subsection 3.2.
In Section 4 we discuss aggregating algorithms for the D-th-step-ahead forecasting:
we develop general model GD in Subsection 4.1. Then we discuss its two special cases:
algorithm VD in Subsection 4.2 that is a replicated
√ version of V1 and our non-replicated
version VDF S in Subsection 4.3. We prove the O( T ) regret bound for VDF C .
In Appendix A we generalize the result by Weinberger and Ordentlich (2002) to the case
of long-term prediction with experts’ advice and prove that the approach with replicating
1-step-ahead predictors for D-th-step-ahead forecasting is optimal.
3. FC — full connection.
4. Usually, even more general assumption is used that the loss function is mixable.
5. GC — grid connection.
2. Preliminaries
We use bold font to denote vectors (e.g. w ∈ RM for some integer M ). In most cases,
superscript refers to index/coordinate of an element in the vector (e.g. (w1 , . . . , wN ) = w).
Subscript is always used to indicate time (e.g. lt , RT , ωτ , wtn , etc.).
For any integer M we denote the probability simplex of dimension M by
∆_M = {p such that (p ∈ R^M_+) ∧ (‖p‖_1 = 1) ∧ (p > 0)}.
We use the notation e ∈ R^M_+ to denote the unit vector (1, 1, . . . , 1). The dimension M
of the vector is always clear from the context. Note that e/M ∈ ∆_M.
The words prediction and forecasting are absolute synonyms in this paper.
2.1. A Game of Long-Term Forecasting with Experts’ Advice
We consider the online game of D-th-step-ahead forecasting of time series ωt ∈ Ω by
aggregating a finite pool of N forecasting experts. We use N = {1, . . . , N } to denote the
pool and n ∈ N as an index of an expert.
At each integer step t = 1, 2, . . . , T − D experts n ∈ N present their forecasts ξ^n_{t+D} ∈ Ξ
of time series {ω_τ}_{τ=1}^T for the time moment t + D. The master (aggregating) algorithm combines these forecasts into a single (aggregated) forecast γ_{t+D} ∈ Γ ⊂ Ξ for the time moment
t + D.
t + D.
After the corresponding outcome ωt+D is revealed (on the step t + D of the game), both
experts and algorithm suffer losses using a loss function λ : Ω × Ξ → R_+. We denote the
loss of expert n ∈ N on step t + D by l^n_{t+D} = λ(ω_{t+D}, ξ^n_{t+D}) and the loss of the aggregating
algorithm by h_{t+D} = λ(ω_{t+D}, γ_{t+D}). We fix the protocol of the game below.
Protocol (D-th-step-ahead forecasting with Experts’ advice)
Get the experts n ∈ N predictions ξtn ∈ Ξ for steps t = 1, . . . , D.
Compute the aggregated predictions γt ∈ Γ for steps t = 1, . . . , D.
FOR t = 1, . . . , T
1. Observe the true outcome ωt ∈ Ω.
2. Suffer losses from past predictions
(a) Compute the losses ltn = λ(ωt , ξtn ) for all n ∈ N of the experts’ forecasts ξtn made at the
step t − D.
(b) Compute the loss ht = λ(ωt , γt ) of the aggregating algorithm’s forecast γt made at the
step t − D.
3. Make the forecast for the next step (if t ≤ T − 1)
n
(a) Get the experts n ∈ N predictions ξt+D
∈ Ξ for the step t + D.
(b) Compute the aggregated prediction γt+D ∈ Γ of the algorithm.
ENDFOR
We assume that the forecasts ξ^n_t of all experts n ∈ N for the first D time moments t =
1, . . . , D are given before the game.
The variables L^n_T = Σ_{t=1}^T l^n_t (for all n ∈ N) and H_T = Σ_{t=1}^T h_t correspond to the cumulative losses of expert n and the aggregating algorithm over the entire game, respectively. We
also denote the vector of experts' forecasts for the step t + D by ξ_{t+D} = (ξ^1_{t+D}, . . . , ξ^N_{t+D}).
In the general protocol sets Ξ and Γ ⊂ Ξ may not be equal. For example, the problem
of combining N soft classifiers into a hard one has Ξ = [0, 1] and Γ = {0, 1} ⊊ Ξ. In this
article we assume that the sets of possible experts’ and algorithm’s forecasts are equal, that
is Ξ = Γ. Moreover, we assume that Ξ = Γ is a convex set. We will not use the notation of
Ξ anymore.
The performance of the algorithm is measured by the (cumulative) regret. The cumulative regret is the difference between the cumulative loss of the aggregating algorithm and
the cumulative loss of some off-line comparator. A typical approach is to compete with the
best expert in the pool. The cumulative regret with respect to the best expert is
R_T = H_T − min_{n∈N} L^n_T.   (1)
The goal of the aggregating algorithm is to minimize the regret, that is, RT → min.
In order to theoretically guarantee algorithm’s performance, some upper bound is usually
proved for the cumulative regret RT ≤ f (T ).
In the base setting (1), sub-linear upper bound f (T ) for the regret leads to asymptotic
performance of the algorithm equal to the performance of the best expert. More precisely,
we have lim_{T→∞} R_T/T = 0.
2.2. Exponentially concave loss functions
We investigate learning with exponentially concave loss functions. Loss function λ : Ω × Γ → R+
is called η-exponentially concave (for some η > 0) if for all ω ∈ Ω and all probability distributions π on set Γ the following holds true:
e^{−ηλ(ω, γ_π)} ≥ ∫_{γ∈Γ} e^{−ηλ(ω, γ)} π(dγ),   (2)
where
γ_π = ∫_{γ∈Γ} γ π(dγ) = E_π γ.   (3)
In (2) the variable γ_π is called the aggregated prediction. Since Γ is convex, we have γ_π ∈ Γ.
If a loss function is η-exponentially concave, it is also η′-exponentially concave for all
η′ ∈ (0, η]. This fact immediately follows from the general properties of exponentially
concave functions. For more details see the book by Hazan (2016).
Note that the basic square loss and the log loss functions are both exponentially concave.
This fact is proved by Kivinen and Warmuth (1999).
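For a finitely supported π, inequality (2) is easy to check numerically. The sketch below (ours, not from the paper) does so for the square loss λ(ω, γ) = (ω − γ)² on [0, 1] with η = 1/2; a second-derivative computation shows the square loss on [0, 1] is 1/2-exponentially concave, so the check should never fail.

```python
# Sketch (ours): numeric sanity check of the η-exponential-concavity
# inequality (2) for the square loss on [0, 1] and discrete π.
import math, random

def exp_concave_holds(eta, omega, points, probs):
    """lhs = e^{-eta*lambda(omega, gamma_pi)}, rhs = E_pi e^{-eta*lambda(omega, gamma)}."""
    gamma_pi = sum(p * g for p, g in zip(probs, points))   # gamma_pi = E_pi gamma
    lhs = math.exp(-eta * (omega - gamma_pi) ** 2)
    rhs = sum(p * math.exp(-eta * (omega - g) ** 2)
              for p, g in zip(probs, points))
    return lhs >= rhs - 1e-12

random.seed(0)
for _ in range(1000):
    omega = random.random()
    pts = [random.random() for _ in range(3)]
    raw = [random.random() + 1e-9 for _ in range(3)]
    probs = [r / sum(raw) for r in raw]
    assert exp_concave_holds(0.5, omega, pts, probs)
```

Running the same check with a much larger η (say η = 10) produces counterexamples, which illustrates why the admissible range of η matters.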
3. Aggregating Algorithm for 1-Step-Ahead Forecasting
In this section we discuss basic aggregating algorithms for 1-step-ahead forecasting based
on exponential reweighing. Our framework is built on the general aggregating algorithm
G1 by Vovk (1999), we discuss it in Subsection 3.1. The simplest and earliest version V1 by
Vovk (1998) of this algorithm is discussed in Subsection 3.2.
3.1. General Model
We investigate the adversarial case, that is, no assumptions (functional, stochastic, etc)
are made about the nature of data and experts. However, it turns out that in this case it
is convenient to develop algorithms using some probabilistic interpretations.
Recall that ξt = (ξt1 , . . . , ξtN ) is a vector of experts’ predictions of ωt . Loss function
λ : Ω × Γ → R+ is η-exponentially concave for some η > 0.
We assume that data is generated using some probabilistic model with hidden states.
The model is shown in Figure 2.
Figure 2: Probabilistic model of data generation process.
We suppose that there is some hidden sequence of experts nt ∈ N (for t = 1, 2, . . . , T )
that generates the experts' predictions ξ_t. The particular hidden expert n_t at step t is called
the active expert. The conditional probability to observe the vector ξ_t of experts' predictions
at step t is
p(ξ_t | n_t) = e^{−ηλ(ω_t, ξ_t^{n_t})} / Z_t,
where Z_t = ∫_{ξ∈Γ} e^{−ηλ(ω_t, ξ)} dξ is the normalizing constant.⁶ We denote Ξ_t = (ξ_1, . . . , ξ_t) and
Ω_t = (ω_1, . . . , ω_t) for all t = 1, . . . , T.
For the first active expert n1 some known prior distribution is given p(n1 ) = p0 (n1 ).
The sequence (n1 , . . . , nT ) of active experts is generated step-by-step. For t ∈ {1, . . . , T −1}
each nt+1 is sampled from some known distribution p(nt+1 |Nt ), where Nt = (n1 , . . . , nt ).7
Thus, active expert nt+1 depends on the previous experts Nt .
The considered probabilistic model is:8
6. Constant Zt is nt -independent. In the article we do not need to compute the exact value of the normalizing constant Zt .
7. In case p(nt+1 |Nt ) = p(nt+1 |nt ), we obtain traditional Hidden Markov Process. The hidden state at
step t + 1 depends only on the previous hidden state at step t.
8. The correct way is to include the time series values ωt as the conditional parameter in the model
probabilistic distribution, that is, p(NT , ΞT |ΩT ). We omit the values ωt in probabilities p(·) in order
not to overburden the notation.
p(N_T, Ξ_T) = p(N_T) · p(Ξ_T | N_T) = p_0(n_1) (∏_{t=2}^T p(n_t | N_{t−1})) · (∏_{t=1}^T p(ξ_t | n_t)).   (4)
The similar equation holds true for t ≤ T:
p(N_t, Ξ_t) = p(N_t) · p(Ξ_t | N_t) = p_0(n_1) (∏_{τ=2}^t p(n_τ | N_{τ−1})) · (∏_{τ=1}^t p(ξ_τ | n_τ)).
The probability p(NT ) is that of hidden states (active experts).9
Suppose that the current time moment is t. We observe the experts’ predictions Ξt
made earlier, time series Ωt and predictions ξt+1 for the step t + 1. Since we observe Ωt
and Ξt , we are able to estimate the conditional distribution p(Nt |Ξt ) of hidden variables
Nt . This estimate allows us to compute the conditional distribution on the active expert
nt+1 at the moment t + 1. We denote for all nt+1 ∈ N
w^{n_{t+1}}_{t+1} = p(n_{t+1} | Ξ_t) = Σ_{N_t ∈ N^t} p(n_{t+1} | N_t) p(N_t | Ξ_t).   (5)
We use the weight vector w_{t+1} = (w^1_{t+1}, . . . , w^N_{t+1}) to combine the aggregated prediction
for the step t + 1:
γ_{t+1} = Σ_{n_{t+1}∈N} p(n_{t+1} | Ξ_t) ξ^{n_{t+1}}_{t+1} = ⟨w_{t+1}, ξ_{t+1}⟩ = Σ_{n=1}^N w^n_{t+1} ξ^n_{t+1}.
The aggregating algorithm is shown below. We denote it by G1 = G1 (p), where p
indicates the probability distribution p(NT ) of active experts to which the algorithm is
applied.
Algorithm G1 (p) (Aggregating algorithm for distribution p of active experts)
Set initial prediction weights w^{n_1}_1 = p_0(n_1).
Get the experts n ∈ N predictions ξ1n ∈ Γ for the step t = 1.
Compute the aggregated prediction γ_1 = ⟨w_1, ξ_1⟩ for the step t = 1.
FOR t = 1, . . . , T
1. Observe the true outcome ωt ∈ Ω.
2. Update the weights
(a) Calculate the prediction weights w_{t+1} = (w^1_{t+1}, . . . , w^N_{t+1}), where
w^{n_{t+1}}_{t+1} = p(n_{t+1} | Ξ_t)
for all n_{t+1} ∈ N.
3. Make forecast for the next step (if t ≤ T − 1)
9. The form p(N_T) = p_0(n_1) ∏_{t=2}^T p(n_t | N_{t−1}) is used only for convenience and association with online
scenario. It does not impose any restrictions on the type of probability distribution. In fact, p(NT ) may
be any distribution on N T of any form.
(a) Get the experts' n ∈ N predictions ξ^n_{t+1} ∈ Γ for the step t + 1.
(b) Combine the aggregated prediction γ_{t+1} = ⟨w_{t+1}, ξ_{t+1}⟩ ∈ Γ of the algorithm.
ENDFOR
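For a Markov prior p(n_{t+1} | N_t) = p(n_{t+1} | n_t), the weights (5) can be maintained as a single length-N vector by the standard forward recursion. The sketch below (our illustration; the square loss and all function names are our assumptions, not the paper's) implements one step of G1(p) in this case.

```python
# Sketch (ours): one step of G1(p) for a Markov prior over active experts,
# specialized to the square loss (eta-exponentially concave on [0, 1]).
import math

def g1_step(w, xi, omega, trans, eta):
    """w: p(n_t | Xi_{t-1}) as a length-N vector; xi: forecasts xi_t;
    omega: outcome omega_t; trans[i][j] = p(n_{t+1} = j | n_t = i)."""
    N = len(w)
    gamma = sum(wi * x for wi, x in zip(w, xi))        # aggregated gamma_t
    post = [wi * math.exp(-eta * (omega - x) ** 2)     # proportional to p(n_t | Xi_t)
            for wi, x in zip(w, xi)]
    s = sum(post)
    post = [p / s for p in post]
    # propagate through the transition kernel: p(n_{t+1} | Xi_t), as in (5)
    w_next = [sum(post[i] * trans[i][j] for i in range(N)) for j in range(N)]
    return gamma, w_next

# With identity transitions this is exactly V1's weight update.
gamma, w = g1_step([0.5, 0.5], [0.2, 0.8], 0.7,
                   [[1.0, 0.0], [0.0, 1.0]], eta=0.5)
assert abs(sum(w) - 1.0) < 1e-12 and w[1] > w[0]
```

The design choice here is the one footnote 10 hints at: for Markov (and in particular constant) priors the posterior over N^t collapses to an N-dimensional vector, which is what makes the special cases computationally efficient.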
To estimate the performance of the obtained algorithm, we prove Theorem 1. Recall
that HT is the cumulative loss of the algorithm over the entire game.
Theorem 1 For the algorithm G1 applied to the model (4) the following upper bound
for the cumulative loss over the entire game holds true:
H_T ≤ −(1/η) ln E_{p(N_T)} e^{−η L^{N_T}_T},   (6)
where L^{N_T}_T = Σ_{t=1}^T l^{n_t}_t.
Similar results were obtained by Vovk (1999). In this article we reformulate these results
in terms of our interpretable probabilistic framework. This is required for the completeness
of the exposition and theoretical analysis of algorithm GD (see Subsection 4.1).
Proof Define the mixloss at the step t:
m_t = −(1/η) ln Σ_{n_t∈N} e^{−ηλ(ω_t, ξ^{n_t}_t)} · w^{n_t}_t.   (7)
Since λ is an η-exponentially concave function, for the aggregated prediction γ_t = ⟨w_t, ξ_t⟩ and
probability distribution w_t we have
e^{−ηλ(ω_t, γ_t)} ≥ Σ_{n_t∈N} e^{−ηλ(ω_t, ξ^{n_t}_t)} w^{n_t}_t,
which is equal to e−ηht ≥ e−ηmt . We conclude that ht ≤ mt for all t ∈ {1, . . . , T }. Thus,
the similar inequality is true for the cumulative loss of the algorithm and the cumulative
mixloss:
H_T = Σ_{t=1}^T h_t ≤ Σ_{t=1}^T m_t = M_T.
Now let us compute M_T. For all t
m_t = −(1/η) ln Σ_{n_t∈N} e^{−ηλ(ω_t, ξ^{n_t}_t)} · p(n_t | Ξ_{t−1})
    = −(1/η) ln Σ_{n_t∈N} Z_t · p(ξ_t | n_t) · p(n_t | Ξ_{t−1})
    = −(1/η) ln Z_t − (1/η) ln Σ_{n_t∈N} p(ξ_t | n_t) · p(n_t | Ξ_{t−1})
    = −(1/η) ln Z_t − (1/η) ln p(ξ_t | Ξ_{t−1}).
We compute
H_T ≤ M_T = Σ_{t=1}^T m_t = −(1/η) ln ∏_{t=1}^T Z_t − (1/η) ln ∏_{t=1}^T p(ξ_t | Ξ_{t−1})
    = −(1/η) ln ∏_{t=1}^T Z_t − (1/η) ln p(Ξ_T) = −(1/η) ln E_{p(N_T)} e^{−η L^{N_T}_T}   (8)
and finish the proof.
In its current form it is difficult to understand the meaning of the theorem. However,
the main idea is partially shown in the following corollary.
Corollary 2 The regret of the algorithm G1(p) with respect to the sequence N*_T = (n*_1, . . . , n*_T)
of experts has the following upper bound:
R_T(N*_T) = H_T − Σ_{t=1}^T l^{n*_t}_t ≤ −(1/η) ln p(N*_T).   (9)
Proof We simply bound (6):
H_T ≤ −(1/η) ln E_{p(N_T)} e^{−η L^{N_T}_T} ≤ −(1/η) ln ( p(N*_T) · e^{−η L^{N*_T}_T} ) = Σ_{t=1}^T l^{n*_t}_t − (1/η) ln p(N*_T).
Applying algorithm G1 to different probability models p makes it possible to change the
upper regret bound with respect to concrete sequences NT .10 Choosing different p makes it
possible to obtain adaptive algorithms (e.g. Fixed Share by Herbster and Warmuth (1998),
Adamskiy et al. (2016), Vovk (1999)). Such algorithms provide low regret not only with
respect to the best constant sequence of experts, but also with some more complicated
sequences.
For example, suppose we are to obtain the minimal possible regret bound with respect to
the sequences (N^T)* ⊂ N^T. In this case it is reasonable to set p(N_T) = 1/|(N^T)*| for all
N_T ∈ (N^T)* and p(N_T) = 0 for all other N_T.
Nevertheless, the most popular approach is to compete with the best (fixed) expert.
This approach is simple and, at the same time, serves as the basis for each research.
3.2. Case of Hidden Markov Process: Classical Vovk's Algorithm
Consider the simplified dependence of active experts shown in Figure 3. For all $t$, expert $n_{t+1}$ depends only on the previous expert $n_t$, that is, $p(n_{t+1}|N_t) = p(n_{t+1}|n_t)$.
10. In this article we do not raise the question of computational efficiency of algorithm $G_1(p)$. In fact, computational time and required memory can be very high for complicated distributions $p$, even $O(N^T)$. Nevertheless, all the special algorithms that we consider ($V_1$, $V_D$, $V_D^{FC}$) are computationally efficient. They require $\le O(NT)$ computational time and $\le O(ND)$ memory.
Aggregating Strategies for Long-term Forecasting
Figure 3: Hidden Markov model for the data generation process.
The classic Vovk's aggregating algorithm is obtained from $G_1$ by applying it to a simple distribution $p$. We put $p(n_1) = \frac{1}{N}$ (for all $n_1 \in N$) and $p(n_t|n_{t-1}) = [n_t = n_{t-1}]$ (for all $t > 1$ and $n_t, n_{t-1} \in N$). We denote the algorithm $G_1$ for the described $p$ by $V_1$.
According to Corollary 2, algorithm $V_1$ has the following regret bound with respect to the best constant expert $N_T^* = (n^*,\dots,n^*)$:
$$R_T(N_T^*) = H_T - \sum_{t=1}^T l_t^{n^*} \le -\frac{1}{\eta}\ln p(N_T^*) = \frac{\ln N}{\eta}. \qquad (10)$$
At the same time, it is easy to compute the weights $w_t$ recurrently step by step, that is, $w_1 \to w_2 \to \dots$. We get
$$w_{t+1}^{n_{t+1}} = p(n_{t+1}|\Xi_t) = \frac{e^{-\eta L_t^{n_{t+1}}}}{\sum_{n=1}^N e^{-\eta L_t^{n}}} = \frac{w_t^{n_{t+1}}\, e^{-\eta l_t^{n_{t+1}}}}{\sum_{n=1}^N w_t^{n}\, e^{-\eta l_t^{n}}}.$$
Thus, algorithm V1 is a powerful and computationally efficient tool to aggregate experts
in the game of the 1-step-ahead forecasting.
4. Aggregating Algorithm for Long-Term Forecasting
This section is devoted to aggregating algorithms for long-term forecasting. In Subsection 4.1 we provide a natural long-term forecasting generalization $G_D$ of algorithm $G_1$ by Vovk (1999). We provide the general regret bound and discuss the difficulties that prevent us from obtaining the general bound in a simple form. In Subsection 4.2 we show how the replicated version $V_D$ of $V_1$ for long-term forecasting fits into the general model, and prove its regret bound. In Subsection 4.3 we describe the non-replicated version $V_D^{FC}$ of $V_1$ for long-term forecasting and prove its $O(\sqrt{T})$ regret bound.
4.1. General Model
Below we describe the natural algorithm, denoted $G_D$, obtained by enhancing $G_1$ for the problem of $D$-th-step-ahead forecasting. Note that the weights $w_t$ in $G_D$ differ for different $D$ (for the same probability model $p$).
Algorithm $G_D(p)$ (Aggregating algorithm for distribution $p$ of active experts)
Set initial prediction weights $w_t^{n_t} = p(n_t)$ for all $t = 1,\dots,D$ and $n_t \in N$.
Get the predictions $\xi_t^n \in \Gamma$ of experts $n \in N$ for steps $t = 1,\dots,D$.
Compute the aggregated predictions $\gamma_t = \langle w_t, \xi_t\rangle$ for steps $t = 1,\dots,D$.
FOR $t = 1,\dots,T$
1. Observe the true outcome $\omega_t \in \Omega$.
2. Update the weights:
(a) Calculate the prediction weights $w_{t+D} = (w_{t+D}^1,\dots,w_{t+D}^N)$, where $w_{t+D}^{n_{t+D}} = p(n_{t+D}|\Xi_t)$ for all $n_{t+D} \in N$.
3. Make the forecast for the next step (if $t \le T-1$):
(a) Get the predictions $\xi_{t+D}^n \in \Gamma$ of experts $n \in N$ for step $t+D$.
(b) Compute the aggregated prediction $\gamma_{t+D} = \langle w_{t+D}, \xi_{t+D}\rangle \in \Gamma$ of the algorithm.
ENDFOR
Despite the fact that algorithm $G_D$ (for $D > 1$) is a direct modification of $G_1$, it seems hard to obtain an adequate general bound on the loss of the form (6). Indeed, let us try to apply the ideas of the proof of Theorem 1 to algorithm $G_D$.
Denote by $m_t$ the mixloss from (7). Recall that the weights $w_t^{n_t}$ here are equal to the probabilities $p(n_t|\Xi_{t-D})$, not $p(n_t|\Xi_{t-1})$. Again, for an $\eta$-exponentially concave loss function we have $h_t \le m_t$. Similarly to the proof of Theorem 1, we compute for all $t$
$$m_t = -\frac{1}{\eta}\ln Z_t - \frac{1}{\eta}\ln p(\xi_t|\Xi_{t-D}),$$
where we assume $\Xi_{t-D} = \emptyset$ for $t \le D$. The cumulative mixloss is equal to
$$H_T \le M_T = \sum_{t=1}^T m_t = -\frac{1}{\eta}\ln\prod_{t=1}^T Z_t - \frac{1}{\eta}\ln\prod_{t=1}^T p(\xi_t|\Xi_{t-D}). \qquad (11)$$
Unfortunately, for $D > 1$ this expression cannot in general be simplified in the same way as in Theorem 1 for $G_1$.
However, there exist some simple theoretical cases when this bound can be simplified. For a fixed $D$, the obvious one is when
$$p(N_T) = \prod_{t=T-D+1}^{T} p(\widehat{N}_t), \qquad (12)$$
where we use $\widehat{N}_t = (\dots, n_{t-2D}, n_{t-D}, n_t)$ for all $t = 1,\dots,T$. Note that (12) means that the probability distributions on the separate grids $GR_d$ (for $d = 1,\dots,D$) are independent. In this case, the learning process separates into $D$ disjoint one-step-ahead forecasting games on the grids $GR_d$. We have $p(\xi_t|\Xi_{t-D}) = p(\xi_t|\widehat{\Xi}_{t-D})$ and (11) simplifies to
$$H_T \le M_T = -\frac{1}{\eta}\ln\prod_{t=1}^T Z_t - \frac{1}{\eta}\ln\prod_{t=T-D+1}^{T} p(\widehat{\Xi}_t) = -\frac{1}{\eta}\ln \mathbb{E}_{p(N_T)}\, e^{-\eta L_T^{N_T}},$$
where $\widehat{\Xi}_t = \{\xi_t, \xi_{t-D}, \xi_{t-2D}, \dots\}$ for all $t$.
In Appendix A we prove that the approach (12) may be considered optimal when the goal is to compete with the best expert. Nevertheless, there are no guarantees that this approach is optimal in the general case.
4.2. Optimal Approach: Replicated Vovk's Algorithm
Algorithm $V_1$ can be considered close to optimal (for 1-step-ahead forecasting and competing with the best expert in the pool) because of its low constant regret bound $R_T \le \frac{\ln N}{\eta}$. One can obtain an aggregating algorithm $V_D$ for long-term forecasting whose regret is close to optimal when competing with the best constant active expert. The idea is to run $D$ independent one-step-ahead forecasting algorithms $V_1$ on $D$ disjoint subgrids $GR_d$ (for $d \in \{1,\dots,D\}$). This idea is motivated by Theorem 4 from Appendix A.
To show how this method fits into the general model $G_D$, we define $p(N_D) \equiv (\frac{1}{N})^D$ for all $N_D \in N^D$. Next, for all $t \ge D$ we set $p(n_{t+1}|N_t) = p(n_{t+1}|n_{t+1-D}) = [n_{t+1} = n_{t+1-D}]$, that is, active expert $n_{t+1}$ depends only on the expert $n_{t+1-D}$ that was active $D$ steps ago. The described model is shown in Figure 4.
Figure 4: The probabilistic model for algorithm VD .
For simplicity, we assume that $T$ is a multiple of $D$. Since $V_D$ runs $D$ independent copies of $V_1$, it is easy to estimate its regret with respect to the best expert:
$$R_T(N_T^*) = H_T - \sum_{t=1}^T l_t^{n^*} \le -\frac{1}{\eta}\ln p(N_T^*) = D\,\frac{\ln N}{\eta}.$$
At the same time, the weights' updating process is simple (it can be separated into grids). We have
$$w_t^{n_t} = p(n_t|\Xi_{t-D}) = \frac{e^{-\eta \widehat{L}_{t-D}^{n_t}}}{\sum_{n=1}^N e^{-\eta \widehat{L}_{t-D}^{n}}} = \frac{w_{t-D}^{n_t}\, e^{-\eta l_{t-D}^{n_t}}}{\sum_{n=1}^N w_{t-D}^{n}\, e^{-\eta l_{t-D}^{n}}},$$
where $\widehat{L}_t^{n} = \sum_{\tau \in \{t,\, t-D,\, t-2D,\,\dots\}} l_\tau^{n}$ is the cumulative loss of expert $n$ on the corresponding subgrid. This formula allows efficient recurrent computation
$$\dots \to w_{t-D} \to w_t \to w_{t+D} \to \dots$$
4.3. Practical Approach: Non-Replicated Vovk's Algorithm
Despite the fact that algorithm $V_D$ is theoretically close to optimal for competing with the best expert, it has several obvious practical disadvantages. As $D$ increases, the overall length of the subgames decreases ($\sim \frac{T}{D}$) and the subgrids become more infrequent. Moreover, to set the forecasting weight $w_{t+D}$ at step $t$ we use only $\approx \frac{t}{D}$ previous observations and forecasts.
One may wonder why not use all the observed losses to set the weight $w_{t+D}$. We apply the algorithm $G_D$ to the probability distribution $p$ from Subsection 3.2, which is a case of the model from Figure 3. We denote the obtained algorithm by $V_D^{FC}$.
The weights are efficiently recomputed $\dots \to w_{t-1} \to w_t \to w_{t+1} \to \dots$ according to the formula
$$w_t^{n_t} = p(n_t|\Xi_{t-D}) = \frac{e^{-\eta L_{t-D}^{n_t}}}{\sum_{n=1}^N e^{-\eta L_{t-D}^{n}}} = \frac{w_{t-1}^{n_t}\, e^{-\eta l_{t-D}^{n_t}}}{\sum_{n=1}^N w_{t-1}^{n}\, e^{-\eta l_{t-D}^{n}}}.$$
Theorem 3 Let the loss function $\lambda: \Omega \times \Gamma \to [0, H] \subset \mathbb{R}_+$ be $\eta^\lambda$-exponentially concave (for some $\eta^\lambda$) and, for every $\omega$, $L$-Lipschitz with respect to the second argument $\gamma \in \Gamma$ and $\|\cdot\|_\Gamma$, with $\max_{\gamma\in\Gamma}\|\gamma\|_\Gamma \le B$. Then there exists $T_0$ such that for all $T \ge T_0$ the following holds: there exists $\eta^\lambda > \eta^* > 0$ such that algorithm $V_D^{FC} = G_D(p)$ with $N$ experts and learning rate $\eta^*$ has the regret bound
$$R_T \le O(\sqrt{N \ln N \cdot T})$$
with respect to the best expert in the pool.
Proof We use the superscript $(\dots)^D$ to denote the variables obtained by algorithm $G_D(p)$ (for example, weights $w_t^D$, predictions $\gamma_t^D$, etc.). Our main idea is to prove that the weights $w_t^D$ are approximately equal to the weights $w_t^1$ obtained in the one-step-ahead forecasting game $G_1(p)$ with the same experts and the same time series $\omega_t$. Thus, the forecasts $\gamma_t^D$ and $\gamma_t^1$ have approximately the same losses $h_t^D$ and $h_t^1$ respectively.
We compare both algorithms with the same (yet unknown) learning rate $\eta$. Note that $w_{t+1}^1 = w_{t+D}^D$. For all $t$ we have
$$(w_t^{n_t})^1 = (w_{t+D-1}^{n_t})^D \sim (w_t^{n_t})^D \cdot e^{-\eta L_{[t-D+1,t-1]}^{n_t}},$$
where $L_{[t-D+1,t-1]}^{n_t} = \sum_{\tau=t-D+1}^{t-1} l_\tau^{n_t}$. We estimate the difference between the cumulative losses $H_T^1$ and $H_T^D$ of the forecasts of the aggregating algorithms $G_1(p) = V_1$ and $G_D(p) = V_D^{FC}$ for the given $p$ from Subsection 3.2.
$$|H_T^1 - H_T^D| = \Big|\sum_{t=1}^T h_t^1 - \sum_{t=1}^T h_t^D\Big| \le \sum_{t=1}^T |h_t^1 - h_t^D| = \sum_{t=1}^T |\lambda(\omega_t,\gamma_t^1) - \lambda(\omega_t,\gamma_t^D)| \le$$
$$L\sum_{t=1}^T \|\gamma_t^1-\gamma_t^D\|_\Gamma = L\sum_{t=1}^T \|\langle w_t^1,\xi_t\rangle - \langle w_t^D,\xi_t\rangle\|_\Gamma = L\sum_{t=1}^T \|\langle w_t^1 - w_t^D,\xi_t\rangle\|_\Gamma \le$$
$$L\sum_{t=1}^T\sum_{n=1}^N |(w_t^n)^1-(w_t^n)^D|\cdot\|\xi_t^n\|_\Gamma \le BL\sum_{t=1}^T\sum_{n=1}^N |(w_t^n)^1-(w_t^n)^D| \le$$
$$BLTN\cdot\max_{t,n}\big|(w_t^n)^1-(w_t^n)^D\big| \qquad (13)$$
Our goal is to estimate the maximum. In fact, we are to estimate the maximum possible single-weight change over $D-1$ steps in algorithm $G_1(p)$ or $G_D(p)$.
W.l.o.g. we assume that the maximum is achieved at step $t$ on the 1-st coordinate ($n = 1$). We denote $x = w_t^D \in \Delta_N$ and $a \in \Delta_N$, where $a_n \sim e^{-\eta L^n_{[t-D+1,t-1]}}$ (so that $(w_t^n)^1 \sim a_n x_n$). The latter relation imposes several restrictions on $a$. In fact, for all $n, n' \in N$ the following must be true: $\frac{a_{n'}}{a_n} \ge e^{-\eta(D-1)H}$, where $H$ is the upper bound for the loss $\lambda(\omega,\gamma)$. We denote the subset of such vectors $a$ by $\Delta'_N \subsetneq \Delta_N$.
Note that $w_t^1 \sim (x_1 a_1, x_2 a_2, \dots, x_N a_N)$. We are to bound the maximum
$$\max_{x\in\Delta_N}\max_{a\in\Delta'_N}\Big|x_1 - \frac{x_1 a_1}{\sum_{n=1}^N x_n a_n}\Big|.$$
We consider the case $x_1 > \frac{x_1 a_1}{\sum_{n=1}^N x_n a_n}$ (the other case is similar). In this case
$$\max_{x\in\Delta_N}\max_{a\in\Delta'_N}\Big(x_1 - \frac{x_1 a_1}{\sum_{n=1}^N x_n a_n}\Big) = \max_{x\in\Delta_N} x_1 \cdot \max_{a\in\Delta'_N}\Big(1 - \frac{a_1}{\sum_{n=1}^N x_n a_n}\Big).$$
We examine the behavior of $1 - \frac{a_1}{\sum_{n=1}^N x_n a_n}$ under fixed $x$:
$$1 - \frac{a_1}{\sum_{n=1}^N x_n a_n} \to \max_{a\in\Delta'_N} \iff \frac{\sum_{n=2}^N x_n a_n}{a_1} = \sum_{n=2}^N \frac{a_n}{a_1}\, x_n \to \max_{a\in\Delta'_N}. \qquad (14)$$
Since $\frac{a_1}{a_n} \ge e^{-\eta(D-1)H}$, we have $\sum_{n=2}^N \frac{a_n}{a_1}\, x_n \le (1-x_1)\, e^{\eta(D-1)H}$, and the argument $a$ that maximizes (14) does not depend on $x_2,\dots,x_N$. W.l.o.g. we can assume that $a = \big(a, \frac{1-a}{N-1}, \dots, \frac{1-a}{N-1}\big)$, where $a = \frac{e^{-\eta(D-1)H}}{(N-1)+e^{-\eta(D-1)H}} < \frac{1}{N}$. At the same time, we can assume that $x = \big(x, \frac{1-x}{N-1}, \dots, \frac{1-x}{N-1}\big)$. Thus,
$$\max_{x\in\Delta_N}\max_{a\in\Delta'_N}\Big(x_1 - \frac{x_1 a_1}{\sum_{n=1}^N x_n a_n}\Big) = \max_{x\in(0,1)}\Big(x - \frac{xa}{xa + \frac{(1-x)(1-a)}{N-1}}\Big).$$
The derivative is equal to zero at
$$x^* = \frac{1-a-\sqrt{a(1-a)(N-1)}}{1-aN}.$$
Substituting $x = x^*$ and $a = \frac{e^{-\eta(D-1)H}}{(N-1)+e^{-\eta(D-1)H}}$, we obtain the $N$-independent expression
$$\frac{\big(1 - \sqrt{e^{-\eta(D-1)H}}\big)^2}{1 - e^{-\eta(D-1)H}}. \qquad (15)$$
We are interested in the behavior near $\eta = 0$. A closer look at the Maclaurin series of the numerator and the denominator gives us the decomposition
$$\frac{0 + 0\cdot\eta + \frac{(D-1)^2 H^2}{4}\,\eta^2 + \dots}{0 + (D-1)H\,\eta + \dots}.$$
At $\eta \to 0$ the function is equal to $0$. There exists some $\eta_0 > 0$ such that for all $0 \le \eta < \eta_0$ expression (15) can be bounded by an $\eta$-linear function $U(D,H)\cdot\eta$ with $U(D,H) = \frac{(D-1)H}{4} + \epsilon$ for a small constant $\epsilon > 0$.
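A quick numeric check of this behavior (our illustration): writing $u = \eta(D-1)H$, expression (15) equals $(1-e^{-u/2})^2/(1-e^{-u}) = \tanh(u/4)$, so near zero it is approximately $\eta(D-1)H/4$ and always lies below that linear function.

```python
import math

def expr15(eta, D, H):
    """Expression (15): the N-independent bound on the single-weight change."""
    u = eta * (D - 1) * H
    return (1 - math.exp(-u / 2)) ** 2 / (1 - math.exp(-u))

# near eta = 0 the expression behaves like the linear function eta*(D-1)*H/4
D, H = 5, 1.0
for eta in (1e-2, 1e-3, 1e-4):
    approx = eta * (D - 1) * H / 4
    assert abs(expr15(eta, D, H) - approx) <= 0.01 * approx
```

This confirms the leading term of the Maclaurin decomposition numerically.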
Thus, expression (13) is bounded by
$$BLTN \cdot U(D,H)\cdot\eta = FNT\eta,$$
where we set $F = B \cdot L \cdot U(D,H)$. Next,
$$H_T^D \le H_T^1 + |H_T^1 - H_T^D| \le L_T^* + \frac{\ln N}{\eta} + FNT\eta,$$
where $L_T^* = \min_{n\in N} L_T^n = \min_{n\in N}\sum_{t=1}^T l_t^n$ is the loss of the best constant expert.
Choosing $\eta^* = \arg\min_{\eta>0}\big(\frac{\ln N}{\eta} + FNT\eta\big) = \sqrt{\frac{\ln N}{FNT}}$, we obtain the
$$R_T \le 2\sqrt{FN\ln N \cdot T} = O(\sqrt{N\ln N \cdot T})$$
regret bound with respect to the best expert in the pool.
Note that in order to use the linear approximation of the maximum we need $\eta \le \eta_0$. Moreover, to bound the loss $H_T^1$ we need an $\eta^*$-exponentially concave function $\lambda$. This means that $\eta^* \le \min\{\eta_0, \eta^\lambda\}$. Since $\eta^* = \sqrt{\frac{\ln N}{FNT}} \sim \frac{1}{\sqrt{T}}$, there exists some large $T_0$ such that for $T \ge T_0$ the required conditions are met.
5. Conclusion
The problem of long-term forecasting is of high importance. In this article we developed the general algorithm $G_D(p)$ for $D$-th-step-ahead forecasting with experts' advice (where $p$ is the distribution over active experts). The algorithm uses the ideas of the general aggregating algorithm $G_1$ by Vovk (1999). We also provided an expression for the upper bound on the loss of algorithm $G_D(p)$ for any probability distribution $p$ over active experts. For its important special case $V_D^{FC}$ we proved the $O(\sqrt{T})$ regret bound w.r.t. the best expert in the pool. Algorithm $V_D^{FC}$ is a practical long-term forecasting modification of algorithm $V_1$ by Vovk (1998).
It seems possible to apply the approach from the proof of Theorem 3 in order to obtain a simpler and more understandable loss bound for algorithm $G_D(p)$ for any probability distribution $p$. This statement serves as a challenge for our further research.
References
Dmitry Adamskiy, Wouter M. Koolen, Alexey Chernov, and Vladimir Vovk. A closer look
at adaptive regret. Journal of Machine Learning Research, 17(23):1–21, 2016. URL
http://jmlr.org/papers/v17/13-533.html.
Oren Anava, Elad Hazan, and Shie Mannor. Online learning for adversaries with memory:
Price of past mistakes. In Proceedings of the 28th International Conference on Neural
Information Processing Systems - Volume 1, NIPS’15, pages 784–792, Cambridge, MA,
USA, 2015. MIT Press. URL http://dl.acm.org/citation.cfm?id=2969239.2969327.
Olivier Bousquet and Manfred K. Warmuth. Tracking a small set of experts by mixing
past posteriors. J. Mach. Learn. Res., 3:363–396, March 2003. ISSN 1532-4435. URL
http://dl.acm.org/citation.cfm?id=944919.944940.
Nicolo Cesa-Bianchi and Gabor Lugosi. Prediction, Learning, and Games. Cambridge
University Press, New York, NY, USA, 2006. ISBN 0521841089.
Yoav Freund and Robert E Schapire. A decision-theoretic generalization of on-line learning
and an application to boosting. Journal of Computer and System Sciences, 55(1):119
– 139, 1997. ISSN 0022-0000. doi: https://doi.org/10.1006/jcss.1997.1504. URL http:
//www.sciencedirect.com/science/article/pii/S002200009791504X.
Elad Hazan. Introduction to online convex optimization. Found. Trends Optim., 2(3-4):157–325, August 2016. ISSN 2167-3888. doi: 10.1561/2400000013. URL https://doi.org/10.1561/2400000013.
Mark Herbster and Manfred K. Warmuth. Tracking the best expert. Mach. Learn., 32(2):
151–178, August 1998. ISSN 0885-6125. doi: 10.1023/A:1007424614876. URL https:
//doi.org/10.1023/A:1007424614876.
Jyrki Kivinen and Manfred K. Warmuth. Averaging expert predictions. In Paul Fischer
and Hans Ulrich Simon, editors, Computational Learning Theory, pages 153–167, Berlin,
Heidelberg, 1999. Springer Berlin Heidelberg. ISBN 978-3-540-49097-5.
Alexander Korotin, Vladimir V’yugin, and Eugeny Burnaev. Long-term sequential prediction using expert advice. CoRR, abs/1711.03194, 2017. URL http://arxiv.org/abs/
1711.03194.
Nick Littlestone and Manfred K. Warmuth. The weighted majority algorithm. Inf. Comput.,
108(2):212–261, February 1994. ISSN 0890-5401. doi: 10.1006/inco.1994.1009. URL
http://dx.doi.org/10.1006/inco.1994.1009.
Kent Quanrud and Daniel Khashabi.
Online learning with adversarial delays.
In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages
1270–1278. Curran Associates, Inc., 2015. URL http://papers.nips.cc/paper/
5833-online-learning-with-adversarial-delays.pdf.
V Vovk. A game of prediction with expert advice. J. Comput. Syst. Sci., 56(2):153–173,
April 1998. ISSN 0022-0000. doi: 10.1006/jcss.1997.1556. URL http://dx.doi.org/10.
1006/jcss.1997.1556.
V. Vovk. Derandomizing stochastic prediction strategies. Machine Learning, 35(3):247–282,
Jun 1999. ISSN 1573-0565. doi: 10.1023/A:1007595032382. URL https://doi.org/10.
1023/A:1007595032382.
Volodimir G. Vovk. Aggregating strategies. In Proceedings of the Third Annual Workshop
on Computational Learning Theory, COLT ’90, pages 371–386, San Francisco, CA, USA,
1990. Morgan Kaufmann Publishers Inc. ISBN 1-55860-146-5. URL http://dl.acm.
org/citation.cfm?id=92571.92672.
M. J. Weinberger and E. Ordentlich. On delayed prediction of individual sequences. IEEE
Transactions on Information Theory, 48(7):1959–1976, Jul 2002. ISSN 0018-9448. doi:
10.1109/TIT.2002.1013136.
Appendix A. Optimal Approach to Delayed Feedback Forecasting with Experts' Advice
The appendix is devoted to obtaining the minimax regret bound for the problem of $D$-th-step-ahead forecasting as a function of the minimax bound for 1-step-ahead forecasting. We consider the general protocol of the $D$-th-step-ahead forecasting game with experts' advice from Subsection 2.1. Within the framework of the task, we desire to compete with the best expert in the pool.
We use $\Omega_t$ to denote the sequence $(\omega_1,\dots,\omega_t)$ of time series values at the first $t$ steps. The pair $I_t = (\Omega_t, \Xi_{t+D})$ is the information that an online algorithm knows at step $t$ of the game. Let $S_{D,t}$ be the set of all possible online randomized prediction aggregation algorithms with the forecasting horizon $D$ and game length $t$. Each online algorithm $s \in S_{D,t}$ for given time series $\Omega_t$ and experts' answers $\Xi_t$ provides a sequence of distributions $\pi_1^s(\gamma),\dots,\pi_t^s(\gamma)$, where each distribution $\pi_\tau^s(\gamma)$ is a function of $I_{\tau-D}$ for $\tau = 1,\dots,t$. We write $\pi_\tau^s(\gamma) = \pi^s(\gamma|I_{\tau-D})$.
The expected cumulative loss of the algorithm $s \in S_{D,t}$ on a given $I_t$ is
$$H_t^D(s, I_t) = \sum_{\tau=1}^t \mathbb{E}_{\pi_\tau^s}\,\lambda(\omega_\tau,\gamma)$$
and the cumulative loss of expert $n$ is
$$L_t^n(I_t) = \sum_{\tau=1}^t \lambda(\omega_\tau,\xi_\tau^n).$$
The online performance of the algorithm $s \in S_{D,t}$ for a given $I_t$ is measured by the expected cumulative regret over $t$ rounds
$$R_t^D(s, I_t) = H_t^D(s, I_t) - \min_{n\in N} L_t^n(I_t)$$
with respect to the best expert. Here $\lambda(\omega,\gamma): \Omega\times\Gamma \to \mathbb{R}_+$ is some loss function, not necessarily convex, Lipschitz or exponentially concave. The performance of the strategy $s$ is
$$R_t^D(s) = \max_{I_t} R_t^D(s, I_t),$$
that is, the maximal expected regret over all possible games $I_t$. The strategy $s^*_{D,t}$ that achieves the minimal regret
$$s^*_{D,t} = \arg\min_{s \in S_{D,t}} R_t^D(s)$$
is called optimal (it may not be unique).
Theorem 4 For the given $\Omega_T$, $\Xi_T$, forecasting horizon $D$ and game length $T$ (such that $T$ is a multiple of $D$) we have
$$R_T^D(s^*_{D,T}) \ge D \cdot R^1_{T/D}(s^*_{1,T/D}).$$
Proof Let $s = s^*_{D,T} \in S_{D,T}$ be the optimal strategy. We define a new one-step-ahead forecasting strategy $s'' \in S_{1,T}$ based on $s$. Let
$$\pi_t^{s''} = \frac{1}{D}\sum_{\tau=[\frac{t}{D}]D+1}^{[\frac{t}{D}]D+D} \pi_\tau^s = \frac{1}{D}\sum_{\tau=[\frac{t}{D}]D+1}^{[\frac{t}{D}]D+D} \pi^s(\gamma|I_{\tau-D}).$$
For every one-step-ahead forecasting game $I'_{T/D} = (\Omega'_{T/D}, \Xi'_{T/D})$ we create a new $D$-step-ahead forecasting game $I''_T = (\Omega''_T, \Xi''_T)$. We set $\omega''_t = \omega'_{[t/D]+1}$ and $(\xi_t^n)'' = (\xi^n_{[t/D]+1})'$ for all $t = 1, 2, \dots, T$ and $n \in N$.
The last step is to define a one-step-ahead forecasting strategy $s'$ for the one-step-ahead forecasting game $I'_{T/D}$. We set $\pi_t^{s'} = \pi_{(t-1)D+1}^{s''}$.
We compute the loss of the algorithm $s$ on $I''_T$:
$$H_T^D(s, I''_T) = \sum_{t=1}^T \mathbb{E}_{\pi_t^s}\,\lambda(\omega''_t,\gamma) = \sum_{t=1}^T \mathbb{E}_{\pi_t^s}\,\lambda(\omega'_{[t/D]+1},\gamma) = \sum_{t=1}^{T/D}\sum_{\tau=1}^{D} \mathbb{E}_{\pi^s_{(t-1)D+\tau}}\,\lambda(\omega''_{(t-1)D+1},\gamma) =$$
$$\sum_{t=1}^{T/D} D\cdot\mathbb{E}_{\pi^{s''}_{(t-1)D+1}}\,\lambda(\omega''_{(t-1)D+1},\gamma) = D\sum_{t=1}^{T/D}\mathbb{E}_{\pi^{s'}_t}\,\lambda(\omega''_{(t-1)D+1},\gamma) = D\cdot H^1_{T/D}(s', I'_{T/D}).$$
Also note that
$$L_T^n(I''_T) = D \cdot L^n_{T/D}(I'_{T/D}). \qquad (16)$$
Thus, for every $I'_{T/D}$ we have
$$R_T^D(s, I''_T) = D \cdot R^1_{T/D}(s', I'_{T/D}).$$
According to the definition of the minimax regret, for the one-step forecasting game of length $T/D$ there exists such $I'_{T/D}$ that
$$R^1_{T/D}(s', I'_{T/D}) \ge R^1_{T/D}(s^*_{1,T/D}).$$
For this $I'_{T/D}$ and the corresponding sequence we obtain
$$R_T^D(s) \ge R_T^D(s, I''_T) = D \cdot R^1_{T/D}(s', I'_{T/D}) \ge D \cdot R^1_{T/D}(s^*_{1,T/D}).$$
This ends the proof.
In fact, from Theorem 4 we can conclude that an optimal aggregating algorithm $A^*_D$ for long-term forecasting with experts' advice and competing with the best expert can be obtained by a simple replication technique from the optimal algorithm $A^*_1$ for 1-step-ahead forecasting and competing with the best expert.
However, if the goal is not to compete with the best expert (but, for example, to compete with the best alternating sequence of experts with no more than $K$ switches, or some other limited sequence), this theorem may also work. All the computations remain true in the general case, except for (16). This equality should be replaced (if possible) by its analogue.
Random Caching in Backhaul-Limited
Multi-Antenna Networks: Analysis and Area
Spectrum Efficiency Optimization
arXiv:1709.06278v1 [] 19 Sep 2017
Sufeng Kuang, Student Member, IEEE, Nan Liu, Member, IEEE
Abstract
Caching at base stations is a promising technology to satisfy the increasing capacity requirements
and reduce the backhaul loads in future wireless networks. Careful design of random caching can fully
exploit the file popularity and achieve good performance. However, previous works on random caching schemes usually assumed a single antenna at BSs and users, which is not the case in practical multi-antenna networks.
networks with limited backhaul. We first derive a closed-form expression and a simple tight upper
bound of the successful transmission probability, using tools from stochastic geometry and a gamma
approximation. Based on the analytic results, we then consider the area spectrum efficiency maximization
by optimizing design parameters, which is a complicated mixed-integer optimization problem. After
analyzing the optimal properties, we obtain a local optimal solution with lower complexity. To further
simplify the optimization, we then solve an asymptotic optimization problem in the high user density
region, using the upper bound as the objective function. Numerical simulations show that the asymptotic
optimal caching scheme achieves better performance over existing caching schemes. The analysis and
optimization results provide insightful design guidelines for random caching in practical networks.
Index Terms
Cache, limited backhaul, multi-antenna, Poisson point process, stochastic geometry
I. I NTRODUCTION
The deployment of small base stations (SBSs) or network densification, is proposed as a key
method for meeting the tremendous capacity increase in 5G networks [1]. Dense deployment of
S.Kuang and N.Liu are with the National Mobile Communications Research Laboratory, Southeast University, Nanjing 210096,
China.(Email: [email protected] and [email protected]).
small cells can significantly improve the performance of the network by bringing the BSs closer
to the users. On the other hand, MIMO technology, especially the deployment of massive number
of antennas at BSs, is playing an essential role in 5G networks to satisfy the increasing data
requirements of the users. Therefore, the combination of small cells and novel MIMO techniques
is inevitable in 5G networks [2]. However, this approach aggravates the transmission loads of
the backhaul links connecting the SBSs and the core networks.
Caching at BSs is a promising method to alleviate the heavy backhaul loads in small cell
networks [3]. Therefore, jointly deploying cache and multi antennas at BSs is proposed to
achieve 1000x capacity increase for 5G networks [4]–[6]. In [4] and [5], the authors considered
the optimization of the cooperative MIMO for video streaming in cache-enabled networks. In
[6], the authors considered the optimal multicasting beamforming design for cache-enabled cloud
radio access network (C-RAN). Note that the focuses of the above works were on the parameter
optimization for cache-enabled networks under the traditional grid model.
Recently, Poisson point process (PPP) is proposed to model the BS locations to capture the
irregularity and randomness of the small cell networks [7]. Based on the random spatial model,
the authors in [8] and [9] considered storing the most popular files in the cache for small
cell networks and heterogeneous cellular networks (HetNets). In [10], the authors considered
uniformly caching all the files at BSs, assuming that the file request popularity follows a uniform
distribution. In [11] , the authors considered storing different files at the cache in an i.i.d. manner,
in which each file is selected with its request probability. Note that in [8]–[11], the authors did
not consider the optimal cache placement, rather, they analyzed the performance under a given
cache placement, and thus, the results of the papers might not yield the best performance of the
system.
In view of the above problem, random caching strategy is proposed to achieve the optimal
performance [12]–[15]. In [12] and [13], the authors considered the caching and multicasting in
the small cell networks or the HetNets, assuming random caching at SBSs. In [14], the authors
considered the analysis and optimization of the cache-enabled multi-tier HetNets, assuming
random caching for all tiers of the BSs. Note that in [12]–[14], the authors obtained waterfilling-type optimal solutions in some special cases due to the Rayleigh distribution of the fading.
In [15] and [16], the authors considered the helper cooperation for the small cell networks or
the HetNets, in which the locations of BSs are modeled as a Poisson cluster process.
However, the works mentioned above considered the random caching in networks equipped
with a single antenna at BSs and users, which is rarely the case in practical networks. In [17],
the authors considered the cache distribution optimization in HetNets, where multi-antennas are
deployed at the MBSs. However, the authors considered a special case of zero forcing precoding,
i.e., the number of the users is equal to the number of BS antennas, in which the equivalent
channel gains from the BSs to its served users follow the Rayleigh distribution. This property
does not hold for the general MIMO scenario, where the equivalent channel gains from the BSs
to its served users follow the Gamma distribution [18], [19].
The main difficulty of the analysis of the multi-antenna networks stems from the complexity
of the random matrix channel [20]–[22]. In [20], the authors utilized the stochastic ordering to
compare MIMO techniques in the HetNets, but the authors did not comprehensively analyze
the SINR distribution. In [21], the authors proposed to utilize a Toeplitz matrix representation
to obtain a tractable expression of the successful transmission probability in the multi-antenna
small cell networks. In [22]–[24], the authors extended the approach of the Toeplitz matrix
representation to analyze the MIMO mutli-user HetNets, MIMO networks with interference
nulling and millimeter wave networks with directional antenna arrays. However, this expression
involves the matrix inverse, which is difficult for analysis and optimization. In [25], a gamma
approximation [26] was utilized to facilitate the analysis in the millimeter wave networks.
In this paper, we consider the analysis and optimization of random caching in backhaullimited multi-antenna networks. Unlike the previous works [12]–[14] focusing on the successful
transmission probability of the typical user, we analyze and optimize the area spectrum efficiency,
which is a widely-used metric to describe the network capacity. The optimization is over file
allocation strategy and cache placement strategy, where a file allocation strategy dictates which
file should be stored at the cache of the BSs and which file should be transmitted via the
backhaul, and a cache placement strategy is to design the probability vector according to which
the files are randomly stored in the cache of the BSs.
First, we derive an exact expression of the successful transmission probability in cacheenabled multi-antenna networks with limited backhaul, using tools from stochastic geometry
and a Toeplitz matrix representation. We then utilize a gamma approximation to derive a tight
upper bound on the performance metrics to facilitate the parameter design. The exact expression
involves the inverse of a lower triangular Toeplitz matrix and the upper bound is a sum of a
series of fractional functions of the caching probability. These expressions reveal the impacts of
the parameters on the performance metrics, i.e., the successful transmission probability and the
area spectrum efficiency. Furthermore, the simple analytical form of the upper bound facilitates
the parameter design.
Next, we consider the area spectrum efficiency maximization by jointly optimizing the file
allocation and cache placement, which is a very challenging mixed-integer optimization problem.
We first prove that the area spectrum efficiency is an increasing function of the cache placement.
Based on this characteristic, we then exploit the properties of the file allocation and obtain a
local optimal solution in the general region, in which the user density is moderate. To further
reduce the complexity, we then solve an asymptotic optimization problem in the high user density
region, using the upper bound as objective function. Interestingly, we find that the optimal file
allocation for the asymptotic optimization is to deliver the $B$ most popular files via the backhaul and store the rest of the files at the cache, where $B$ is the largest number of files the backhaul can deliver at the same time. In this way, the users requesting the $B$ most popular files are associated with the nearest BSs, which obtain these files via the backhaul, and therefore achieve the optimal area spectrum efficiency.
Finally, by numerical simulations, we show that the asymptotic optimal solution with low
complexity achieves a significant gain in terms of area spectrum efficiency over previous caching
schemes.
II. S YSTEM M ODEL
A. Network Model
We consider a downlink cache-enabled multi-antenna network with limited backhaul, as shown
in Fig. 1, where BSs, equipped with N antennas, are distributed according to a homogeneous
Poisson point process (PPP) Φb with density λb . The locations of the single-antenna users are
distributed as an independent homogeneous PPP Φu with density λu . According to Slivnyak’s
theorem [27], we analyze the performance of the typical user who is located at the origin without
loss of generality. All BSs are operating on the same frequency band and the users suffer intercell
interference from other BSs. We assume that all the BSs are active due to high user density.
We assume that each user requests a certain file from a finite content library which contains
$F$ files. Let $\mathcal{F} = \{1, 2, 3, \dots, F\}$ denote the set of the files in the network. The popularity of the requested files is known a priori and is modeled as a Zipf distribution [28]
$$q_f = \frac{f^{-\gamma}}{\sum_{i=1}^{F} i^{-\gamma}}, \qquad (1)$$
Fig. 1. Illustration of the system model. For the backhaul file, the distance-based association scheme is adopted. For the cached file, the content-centric association scheme is adopted.
where $q_f$ is the probability that a user requests file $f$ and $\gamma$ is the shape parameter of the Zipf distribution. We assume that all the files have the same size and the size of a file is normalized to 1 for simplicity.
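The popularities in (1) are straightforward to compute; a minimal sketch (our illustration):

```python
def zipf_popularity(F, gamma):
    """File request probabilities q_f under the Zipf law of Eq. (1):
    q_f = f^{-gamma} / sum_{i=1}^{F} i^{-gamma}, for ranks f = 1, ..., F."""
    norm = sum(i ** (-gamma) for i in range(1, F + 1))
    return [f ** (-gamma) / norm for f in range(1, F + 1)]
```

A larger shape parameter γ concentrates the request probability mass on the most popular (lowest-rank) files; γ = 0 recovers the uniform distribution.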
Each BS is equipped with a cache with $C$ segments, and the cache can store at most $C$ different files out of the content library. For the files which are not stored in the cache, the BSs can fetch them from the core network via backhaul links, which can transmit at most $B$ files at the same time. We refer to $C$ as the cache size and $B$ as the backhaul capability. We assume $B + C \le F$ to illustrate the resource limitation.
B. Caching and Backhaul Delivery
The set of all the files is partitioned into two disjoint sets: the set of files stored in the caches of the BSs and the set of files not stored at any of the BSs, i.e., the files that must be transmitted via the backhaul. These are called the cached file set and the backhaul file set, denoted $\mathcal{F}_c$ and $\mathcal{F}_b$, respectively. The number of files in $\mathcal{F}_x$ is $F_x$, $x = c, b$. Since $\mathcal{F}_c$ and $\mathcal{F}_b$ form a partition of $\mathcal{F}$, we have
$$\mathcal{F}_c \cup \mathcal{F}_b = \mathcal{F}, \qquad \mathcal{F}_c \cap \mathcal{F}_b = \emptyset. \qquad (2)$$
We define the process of designing Fc and Fb as file allocation.
Each BS can cache C different files from a total Fc files via the random caching scheme, in
which the BS stores a certain file i ∈ F_c randomly with probability t_i. Let t = (t_i)_{i∈F_c} denote
the caching distribution of all the files. Then we have the following constraints:

t_i ∈ [0, 1],   ∀i ∈ F_c,   (3)

Σ_{i∈F_c} t_i ≤ C.   (4)
We refer to the specification of t as cache placement.
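Constraints (3)-(4) are easy to check programmatically. A minimal sketch (the placement vector is arbitrary); note that under random caching, Σ_i t_i is also the expected number of cached files at a BS:

```python
# Check the cache-placement constraints (3)-(4): each t_i in [0, 1] and the
# marginals sum to at most the cache size C. The sum is also the expected
# cache occupancy of a BS under random caching. Values are illustrative.
def feasible_placement(t, C):
    return all(0.0 <= ti <= 1.0 for ti in t) and sum(t) <= C + 1e-12

t = [0.8, 0.6, 0.4, 0.2]            # illustrative marginals for 4 cached files
print(feasible_placement(t, C=2))   # sum(t) = 2.0 <= C
print(feasible_placement(t, C=1))   # expected cache load exceeds C
```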
For further analysis, we define the BSs that have the cached file f ∈ F_c in their cache as the f-cached BSs. According to the thinning theory of the PPP, the density of the f-cached BSs is λ_b^f = t_f λ_b. We denote the set of the f-cached BSs as Φ_b^f and the set of the remaining BSs that do not have file f in their cache as Φ_b^{−f}. When a user requests a file f from the cached file set, the user is associated with the nearest f-cached BS, which will provide the required
file f from its cache. We refer to the BS associated with the typical user as the tagged BS and
this association scheme is called content-centric association scheme. Content-centric association
scheme is different from the traditional distance-based association scheme, where the user is
associated with the nearest BS. We denote the tagged BS as BS 1.
When a user requests a file out of the backhaul file set, the distance-based association is adopted and the user is associated with the nearest BS. We define the set of backhaul requested files of BS i as the set of the backhaul files requested by the users of BS i, denoted as F^r_{b,i} ⊆ F_b. The number of backhaul requested files is denoted as F^r_{b,i}. If BS i needs to transmit no more than B backhaul files via the backhaul, i.e., F^r_{b,i} ≤ B, then BS i gets all F^r_{b,i} files from the backhaul and transmits all of them to the designated users; otherwise, BS i randomly selects B different files from F^r_{b,i} according to the uniform distribution, and these files are transmitted over the backhaul links and then passed on to the designated users.
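The delivery rule above can be sketched as follows; under it, each requested backhaul file is delivered with probability B/max(k, B), where k is the number of requested backhaul files. File indices and parameters are illustrative:

```python
import random

# Backhaul delivery rule: if a BS has k requested backhaul files and
# capability B, it delivers all of them when k <= B, otherwise a uniformly
# random subset of size B. A given requested file is then delivered with
# probability B / max(k, B).
def delivered_files(requested, B, rng):
    if len(requested) <= B:
        return set(requested)
    return set(rng.sample(requested, B))

rng = random.Random(0)
requested = [1, 2, 3, 4]   # k = 4 requested backhaul files (illustrative)
B = 2
hits = sum(1 in delivered_files(requested, B, rng) for _ in range(20000))
print(abs(hits / 20000 - B / len(requested)) < 0.02)   # empirically ~ B/k
```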
III. P ERFORMANCE M ETRIC AND P ROBLEM F ORMULATION
A. Performance Metric
In this part, we define the successful transmission probability (STP) and the area spectrum efficiency (ASE) of the typical user when single-user maximum ratio transmission (MRT) beamforming is adopted. We consider single-user MRT beamforming due to the low complexity of the beamforming design, which is suitable for scenarios in which a large number of antennas are deployed at the BSs. For other MIMO scenarios, the equivalent channel gain follows the gamma distribution with different parameters [18], [19]. Therefore, the method of the
analysis here can be extended to the cache-enabled MIMO networks with different precoding and
combining strategies. We assume that the transmitter can get perfect channel state information
(CSI) through the feedback from the users. The BSs do not have the CSI of the other cells due
to the high BS density.
We refer to the typical user as user 0 and it is served by the tagged BS located at x1 . Due to
the assumption of single-user MRT, each BS serves one user per resource block (RB). Hence,
the received signal of the typical user on its resource block is
y_0^f = ‖x_1‖^{−β/2} h_{0,1}^* w_1 s_1 + Σ_{i∈Φ_b\{1}} ‖x_i‖^{−β/2} h_{0,i}^* w_i s_i + n_0,   (5)
where wi is the beamforming vector of BS i to its served user, f is the file requested by the
typical user, xi is the location of BS i, h0,i ∈ CN ×1 is the channel coefficient vector from BS i
to the typical user, si ∈ C1×1 is the transmitting message of BS i and n0 is the additive white
Gaussian noise (AWGN) at the receiver. We assume that E[s_i^* s_i] = P for any i, where P is the transmit power of the BS. The elements of the channel coefficient vector h_{0,i} are independent and identically distributed complex Gaussian random variables, i.e., CN(0, 1). β > 2 is the pathloss exponent.
For single-user MRT, to maximize the channel gain from the BS to its served user, the beamforming vector at BS i is w_i = h_i/‖h_i‖ [29], where h_i ∈ C^{N×1} is the channel coefficient from BS i to its served user. Thus the SIR of the typical user requesting file f, whether f is in the cached file set or the backhaul file set, is given by

SIR_f = P ‖x_1‖^{−β} g_1 / ( Σ_{i∈Φ_b\{1}} P ‖x_i‖^{−β} g_i ),   (6)

where g_i = |h_{0,i}^* h_i|² / ‖h_i‖² is the equivalent channel gain (including the channel coefficient and the beamforming) from BS i to the typical user. It is shown that the equivalent channel gain from the tagged BS to its served user satisfies g_1 ∼ Gamma(N, 1), while g_i ∼ Exp(1) for all i > 1 [29]. In this paper,
we consider SIR for performance analysis rather than SINR due to the dense deployments of the
SBSs. In simulations, we include the noise to illustrate that in dense networks the consideration
of the SIR achieves nearly the same performance as that of the SINR.
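The distributional claims after (6) — g_1 ∼ Gamma(N, 1) for the MRT-beamformed serving link and g_i ∼ Exp(1) for interfering links — can be checked by a small Monte Carlo sketch (the antenna number and sample size are arbitrary):

```python
import numpy as np

# With MRT (w_i = h_i/||h_i||), the serving-link gain is ||h_1||^2 ~ Gamma(N, 1),
# while an interfering link sees |h_{0,i}^H h_i|^2 / ||h_i||^2 ~ Exp(1).
rng = np.random.default_rng(0)
N, M = 4, 200_000   # antennas, Monte Carlo samples (illustrative)

def cplx(shape):
    # unit-variance circularly symmetric complex Gaussian entries, CN(0, 1)
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

h1 = cplx((M, N))                        # serving channel
g1 = np.sum(np.abs(h1) ** 2, axis=1)     # ||h_1||^2

h0i, hi = cplx((M, N)), cplx((M, N))     # cross channel and BS i's own channel
gi = np.abs(np.sum(h0i.conj() * hi, axis=1)) ** 2 / np.sum(np.abs(hi) ** 2, axis=1)

print(abs(g1.mean() - N) < 0.05)   # Gamma(N, 1) has mean N
print(abs(gi.mean() - 1) < 0.02)   # Exp(1) has mean 1
```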
For the single-user MRT, the successful transmission probability (STP) of the typical user is
defined as the probability that the SIR is larger than a threshold, i.e.,
P_s(F_c, t) = Σ_{f∈F_c} q_f P(SIR_f > τ) + Σ_{f∈F_b} q_f P(SIR_f > τ, f transmitted through backhaul),   (7)
where τ is the SIR threshold. As mentioned before, f is transmitted over the backhaul if the number of backhaul requested files of the tagged BS, F^r_{b,1}, is no more than the backhaul capability B, or if it has been chosen to be transmitted according to the uniform distribution in the event where F^r_{b,1} > B. Note that the STP is related to the random variables F^r_{b,1} and SIR_f.
We use the area spectrum efficiency (ASE) as the metric to describe the average spectrum
efficiency per area. The ASE of the single-user MRT is defined as [30], [31]
R(F_c, t) = λ_b P_s(F_c, t) log_2(1 + τ),   (8)

where the unit is bit/s/Hz/km². Note that the ASE reveals the relationship between the BS density
and the network capacity.
B. Problem Formulation
Under given backhaul capability B and cache size C, the caching strategy, i.e., the file
allocation strategy and the cache placement strategy, fundamentally affects the ASE. We study the
problem of maximizing the ASE via a careful design of file allocation Fc and cache placement
t as follows
max_{F_c, t} R(F_c, t)   s.t. (2), (3), (4).   (9)
We will derive the expressions of the STP and the ASE in Section IV and solve the ASE
maximization problem, i.e., the problem in (9), in Section V.
IV. P ERFORMANCE A NALYSIS
In this section, we first derive an exact expression of the STP and ASE under given file
allocation and cache placement strategy, i.e., under given Fc and t. Then we utilize a gamma
distribution approximation to obtain a simpler upper bound of the STP and ASE.
A. Exact Expression
In this part, we derive an exact expression of the STP and the ASE using tools from stochastic geometry. In general, the STP is related to the number of backhaul requested files of the tagged BS, i.e., F^r_{b,1}. Therefore, to obtain the STP, we first calculate the probability mass function (pmf) of F^r_{b,1}.
Lemma 1. (pmf of F^r_{b,1}) When f ∈ F_b is requested by the tagged BS, the pmf of F^r_{b,1} is given by

P^{F_b}_f(F^r_{b,1} = k) = g({F_b \ f}, k − 1),   k ∈ {1, 2, · · · , F_b},   (10)

where g(B, k) is given by

g(B, k) ≜ Σ_{Y∈{X⊆B : |X|=k}} Π_{i∈Y} [ 1 − (1 + q_i λ_u/(3.5 λ_b))^{−4.5} ] · Π_{i∈B\Y} (1 + q_i λ_u/(3.5 λ_b))^{−4.5}.   (11)
Proof: See Appendix A.
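The function g(B, k) in (11) is the pmf of a Poisson-binomial random variable and can be evaluated by direct subset enumeration for small |B|. A sketch with illustrative popularity and density values (the request probability P_i follows the expression used in Appendix A):

```python
from itertools import combinations

# g(B, k) of Eq. (11): probability that exactly k files of the set B are
# requested at a BS, with per-file request probabilities
# P_i = 1 - (1 + q_i*lam_u/(3.5*lam_b))^(-4.5). All parameter values below
# are illustrative.
def request_prob(q_i, lam_u, lam_b):
    return 1.0 - (1.0 + q_i * lam_u / (3.5 * lam_b)) ** (-4.5)

def g(probs, k):
    files = list(probs)
    total = 0.0
    for sub in combinations(files, k):
        p = 1.0
        for i in files:
            p *= probs[i] if i in sub else 1.0 - probs[i]
        total += p
    return total

probs = {f: request_prob(q, 5e-3, 1e-4) for f, q in
         {1: 0.4, 2: 0.3, 3: 0.2, 4: 0.1}.items()}
pmf = [g(probs, k) for k in range(len(probs) + 1)]
print(abs(sum(pmf) - 1.0) < 1e-9)   # the counts k = 0..|B| form a pmf
```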
Based on Lemma 1, we have the following corollary.
Corollary 1. (The pmf of F^r_{b,1} when λ_u → ∞) When λ_u → ∞, the pmf of F^r_{b,1} is

lim_{λ_u→∞} P^{F_b}_f(F^r_{b,1} = k) = { 0, k = 1, 2, · · · , F_b − 1;   1, k = F_b }.   (12)
Corollary 1 shows that F^r_{b,1} converges to the constant F_b in distribution as λ_u → ∞. This asymptotic result is consistent with the fact that when the user density is high, each BS has many connected users, and thus each BS requires all the backhaul files.
We then calculate the STP, which is defined in (7). Based on Lemma 1, we can rewrite the STP as a combination of the STPs conditioned on the given F^r_{b,1}. Therefore, we obtain the STP in Theorem 1.
Theorem 1. (STP) The STP is given by

P_s(F_c, t) = Σ_{f∈F_c} q_f P_s^{f,c}(t_f) + Σ_{f∈F\F_c} q_f Σ_{k=1}^{F_b} P^{F_b}_f(F^r_{b,1} = k) · (B / max(k, B)) · P_s^b,   (13)

where P^{F_b}_f(F^r_{b,1} = k) is given in (10), q_f is given in (1), and P_s^{f,c}(t_f) and P_s^b are the STPs of the cached file f ∈ F_c and the backhaul file f ∈ F\F_c, which are given by

P_s^{f,c}(t_f) = ( t_f / (t_f + l_0^{c,f}) ) · ‖ [ I − (τ^{2/β} / (t_f + l_0^{c,f})) D^{c,f} ]^{−1} ‖_1,   (14)

P_s^b = ( 1 / (1 + l_0^{b,f}) ) · ‖ [ I − (τ^{2/β} / (1 + l_0^{b,f})) D^{b,f} ]^{−1} ‖_1,   (15)
where ‖·‖_1 is the l_1 induced matrix norm (i.e., ‖B‖_1 = max_{1≤j≤n} Σ_{i=1}^{m} |b_{ij}| for B ∈ R^{m×n}), I is an N × N identity matrix, and D^{c,f} and D^{b,f} are N × N Toeplitz matrices of the cached file f ∈ F_c and the backhaul file f ∈ F\F_c, which are given by
D^{n,f} =
⎡ l_1^{n,f}    0             · · ·   0         ⎤
⎢ l_2^{n,f}    l_1^{n,f}     · · ·   0         ⎥
⎢    ⋮           ⋱             ⋱      ⋮        ⎥
⎣ l_N^{n,f}    l_{N−1}^{n,f} · · ·   l_1^{n,f} ⎦ ,   n ∈ {c, b},   (16)

i.e., D^{n,f} is the lower triangular Toeplitz matrix whose first column is (l_1^{n,f}, l_2^{n,f}, · · · , l_N^{n,f})^T,
where l_0^{c,f}, l_0^{b,f}, l_i^{c,f} and l_i^{b,f} are given by

l_0^{c,f} = t_f · (2τ/(β−2)) · 2F1(1, 1−2/β; 2−2/β; −τ) + (1 − t_f) · (2π/β) csc(2π/β) · τ^{2/β},   (17)

l_0^{b,f} = (2τ/(β−2)) · 2F1(1, 1−2/β; 2−2/β; −τ),   (18)

l_i^{c,f} = (1 − t_f) · (2/β) B(2/β + 1, i − 2/β) · τ^{2/β} + t_f · (2τ^i/(iβ−2)) · 2F1(i+1, i−2/β; i+1−2/β; −τ),   ∀i ≥ 1,   (19)

l_i^{b,f} = (2τ^i/(iβ−2)) · 2F1(i+1, i−2/β; i+1−2/β; −τ),   ∀i ≥ 1.   (20)
Here, 2F1(·) is the Gauss hypergeometric function and B(·, ·) is the Beta function.
Proof. Since the equivalent channel gain satisfies g_1 ∼ Gamma(N, 1), the STP involves higher-order derivatives of the interference Laplace transform L_I(s) [20]. Utilizing the approach in [21], we obtain the expressions of the STPs in the lower triangular Toeplitz matrix representation of (14) and (15). For details, please see Appendix B.
According to (8), we then obtain the ASE under given F_c and t:

R(F_c, t) = λ_b P_s(F_c, t) log_2(1 + τ).   (21)
For backhaul-limited multi-antenna networks, changes in the BS density λ_b and the user density λ_u influence P_s(F_c, t) via the pmf of the backhaul requested files, i.e., P(F^r_{b,1} = k). However, when λ_u approaches infinity and λ_b remains finite, since all the backhaul files are requested from the tagged BS (as shown in Corollary 1), P^{F_b}_f(F^r_{b,1} = k) is no longer related to λ_b. Therefore, when the user density is high and the design parameters F_c and t are given, deploying more BSs will always increase the ASE.
From Theorem 1 and the definition of the ASE, we can derive a tractable expression of the ASE for N = 1, i.e., for backhaul-limited single-antenna networks. The ASE of the backhaul-limited single-antenna networks is given in the following corollary.
Corollary 2. (ASE of Single-Antenna Networks) The ASE of the cache-enabled single-antenna
networks with limited backhaul is given by

R_SA(F_c, t) = λ_b log_2(1+τ) [ Σ_{f∈F_c} q_f t_f / (ζ_1(τ) t_f + ζ_2(τ)) + Σ_{f∈F\F_c} Σ_{k=1}^{F_b} P^{F_b}_f(F^r_{b,1} = k) q_f B / ( max(k, B) (ζ_1(τ) + ζ_2(τ)) ) ],   (22)
where ζ_1(τ) and ζ_2(τ) satisfy

ζ_1(τ) = 1 + (2τ/(β−2)) · 2F1(1, 1−2/β; 2−2/β; −τ) − (2π/β) csc(2π/β) τ^{2/β},   (23)

ζ_2(τ) = (2π/β) csc(2π/β) τ^{2/β}.   (24)
The proof of Corollary 2 is similar to the proof of Theorem 1 except that the equivalent
channel gain g1 ∼ Exp(1).
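For N = 1, the cached-file STP term of (22) reduces to the fractional form t_f/(ζ_1(τ) t_f + ζ_2(τ)). A numerical sketch using SciPy's Gauss hypergeometric function (τ, β and t_f are illustrative):

```python
import numpy as np
from scipy.special import hyp2f1

# zeta_1, zeta_2 of Eqs. (23)-(24) and the cached-file STP term
# t_f / (zeta_1 t_f + zeta_2) of Eq. (22). tau and beta are illustrative.
def zetas(tau, beta):
    csc_term = (2 * np.pi / beta) / np.sin(2 * np.pi / beta) * tau ** (2 / beta)
    z1 = 1 + 2 * tau / (beta - 2) * hyp2f1(1, 1 - 2 / beta, 2 - 2 / beta, -tau) - csc_term
    return z1, csc_term

z1, z2 = zetas(tau=1.0, beta=4.0)
stp = lambda tf: tf / (z1 * tf + z2)
print(0.0 < stp(0.5) < stp(1.0) <= 1.0)   # STP grows with the caching probability
```

Since ζ_2(τ) > 0, the term is increasing and concave in t_f, which is what makes the single-antenna cache placement problem convex (see Corollary 4).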
B. Upper Bound and Asymptotic Analysis
In this part, we first derive an upper bound of the STP under given Fc and t. We then give
the asymptotic analytic results in high user density region. First, we introduce a useful lemma
to present a lower bound of the gamma distribution.
Lemma 2. [26]: Let g be a gamma random variable following Gamma(M, 1). The probability P(g < τ) can be lower bounded by

P(g < τ) > [ 1 − e^{−ατ} ]^M,   (25)

where α = (M!)^{−1/M}.
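The bound of Lemma 2 can be verified numerically; SciPy's gammainc is the regularized lower incomplete gamma function, i.e., the Gamma(M, 1) CDF (M and the τ grid below are arbitrary):

```python
import math
from scipy.special import gammainc

# Numerical check of Lemma 2: P(g < tau) >= [1 - e^(-alpha*tau)]^M for
# g ~ Gamma(M, 1) with alpha = (M!)^(-1/M).
M = 3
alpha = math.factorial(M) ** (-1.0 / M)
for tau in (0.5, 1.0, 2.0, 5.0):
    exact = gammainc(M, tau)                       # Gamma(M, 1) CDF at tau
    bound = (1.0 - math.exp(-alpha * tau)) ** M    # Lemma 2 lower bound
    assert bound <= exact + 1e-12
print("bound holds for all sampled tau")
```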
Utilizing the above lemma, we then obtain the upper bound of the STP as follows.
Theorem 2. (Upper Bound of STP) The upper bound of the STP is given by

P_s^u(F_c, t) = Σ_{f∈F_c} q_f P_s^{u,f,c}(t_f) + Σ_{f∈F\F_c} q_f Σ_{k=1}^{F_b} P^{F_b}_f(F^r_{b,1} = k) · (B / max(k, B)) · P_s^{u,b},   (26)

where P_s^{u,f,c}(t_f) and P_s^{u,b} are the upper bounds of the STPs of the cached file f ∈ F_c and the backhaul file f ∈ F\F_c, which are given by

P_s^{u,f,c}(t_f) = Σ_{i=1}^{N} (−1)^{i+1} \binom{N}{i} t_f / (θ_A(i) t_f + θ_C(i)),   (27)

P_s^{u,b} = Σ_{i=1}^{N} (−1)^{i+1} \binom{N}{i} / (θ_A(i) + θ_C(i)),   (28)
where

θ_A(i) = 1 + (2τ/(β−2)) · 2F1(1, 1−2/β; 2−2/β; −iατ) − (2π/β) csc(2π/β) (iατ)^{2/β},   (29)

θ_C(i) = (2π/β) csc(2π/β) (iατ)^{2/β}.   (30)

Here, α = (N!)^{−1/N} is a constant related to the number of BS antennas N.
Proof. The STP of the cached file f ∈ F_c is

P_s^{f,c}(t_f) = P( g_1 > τ I ‖x_1‖^β )
  (a)≤ 1 − E_{I‖x_1‖^β}[ (1 − exp(−ατ I ‖x_1‖^β))^N ]
  = Σ_{i=1}^{N} (−1)^{i+1} \binom{N}{i} E_{I‖x_1‖^β}[ exp(−iατ I ‖x_1‖^β) ]
  = Σ_{i=1}^{N} (−1)^{i+1} \binom{N}{i} E_{‖x_1‖}[ L_I(iατ ‖x_1‖^β) ]
  (b)= Σ_{i=1}^{N} (−1)^{i+1} \binom{N}{i} t_f / [ t_f (2τ/(β−2)) 2F1(1, 1−2/β; 2−2/β; −iατ) + t_f + (1 − t_f) (2π/β) csc(2π/β) (iατ)^{2/β} ],   (31)

where (a) follows from g_1 ∼ Gamma(N, 1) and Lemma 2, and (b) follows from the PDF of ‖x_1‖, f_{‖x_1‖}(r) = 2π t_f λ_b r exp(−π t_f λ_b r²) for f ∈ F_c, with L_I(iατ‖x_1‖^β) given in Appendix B as

L_I(iατ‖x_1‖^β) = exp( −π λ_b [ t_f (2τ/(β−2)) 2F1(1, 1−2/β; 2−2/β; −iατ) + (1 − t_f) (2π/β) csc(2π/β) (iατ)^{2/β} ] ‖x_1‖² ).

Here, α = (N!)^{−1/N}. Therefore, we obtain the upper bound P_s^{u,f,c}(t_f) for all f ∈ F_c. We can obtain the expression of P_s^{u,b} similarly, which finishes the proof of Theorem 2.
The upper bound of the STP is a series of fractional functions of the cache placement t. Compared with the exact expression of the STP in Theorem 1, the upper bound approximates the STP in a much simpler form, and therefore facilitates the analysis and further optimization.
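Equations (27)-(30) can be evaluated directly. A sketch with illustrative parameters (it uses the 2τ/(β−2) coefficient exactly as (29) is stated); setting t_f = 1 recovers the backhaul-file bound (28):

```python
import math
import numpy as np
from scipy.special import hyp2f1

# Evaluate the STP upper bounds (27)-(28) via theta_A, theta_C of (29)-(30).
# tau, beta, N and t_f are illustrative; alpha = (N!)^(-1/N).
def thetas(i, tau, beta, alpha):
    csc = (2 * np.pi / beta) / np.sin(2 * np.pi / beta) * (i * alpha * tau) ** (2 / beta)
    tA = 1 + 2 * tau / (beta - 2) * hyp2f1(1, 1 - 2 / beta, 2 - 2 / beta, -i * alpha * tau) - csc
    return tA, csc

def stp_upper(tf, tau, beta, N):
    alpha = math.factorial(N) ** (-1.0 / N)
    total = 0.0
    for i in range(1, N + 1):
        tA, tC = thetas(i, tau, beta, alpha)
        total += (-1) ** (i + 1) * math.comb(N, i) * tf / (tA * tf + tC)
    return total

v_cached = stp_upper(tf=0.5, tau=1.0, beta=4.0, N=2)
v_backhaul = stp_upper(tf=1.0, tau=1.0, beta=4.0, N=2)   # t_f = 1 gives (28)
print(0.0 <= v_cached <= v_backhaul <= 1.0)
```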
According to Theorem 2, the upper bound of the ASE is given by
R^u(F_c, t) = λ_b P_s^u(F_c, t) log_2(1 + τ).   (32)
To obtain design insights, we then analyze the upper bound of the ASE in the asymptotic region, i.e., the high user density region. When λ_u → ∞, the discrete random variable F^r_{b,1} converges to F_b in distribution, as shown in Corollary 1. Therefore, we have the following corollary according to Theorem 2.
Corollary 3. (Asymptotic Upper Bound of ASE) In the high user density region, i.e., λ_u → ∞, the asymptotic upper bound of the ASE is given by

R^{u,∞}(F_c, t) = λ_b log_2(1+τ) [ Σ_{f∈F_c} q_f P_s^{u,f,c}(t_f) + Σ_{f∈F\F_c} q_f (B / max(F_b, B)) P_s^{u,b} ],   (33)
where Psu,f,c (tf ) and Psu,b are given in (27) and (28).
We plot Fig. 2 and Fig. 3 to validate the analytical results. Fig. 2 plots the successful transmission probability vs. the number of BS antennas and the SIR threshold. Fig. 2 verifies Theorem 1 and Theorem 2, and demonstrates the tightness of the upper bound. Fig. 2 also shows that the successful transmission probability increases with the number of BS antennas and decreases with the SIR threshold. Moreover, Fig. 2 (a) indicates that when the user density is large, increasing the BS density increases the successful transmission probability. Fig. 3 plots the area spectrum efficiency vs. the user density and shows that when the user density is larger than a certain threshold, i.e., 6 × 10^{−3} m^{−2} for the single-antenna networks and 4 × 10^{−3} m^{−2} for the multi-antenna networks, the asymptotic upper bound of the ASE is nearly the same as the ASE.
V. ASE O PTIMIZATION
A. General ASE Optimization
In this part, we solve the ASE maximization problem, i.e., maximize R(Fc , t) via optimizing
the file allocation Fc and the cache placement t. Based on the relationship between R(Fc , t)
and Ps (Fc , t), the ASE optimization problem is formulated as follows.
(a) STP vs. the number of BS antennas N at τ = 0 dB.
(b) STP vs. SIR threshold τ at λb = 10−4 m−2.
Fig. 2. STP vs. the number of BS antennas N and SIR threshold τ . λu = 10−3 m−2 , β = 4, F = 8, B = C = 2,
Fb = {1, 2, 3, 4}, Fc = {5, 6, 7, 8}, t = (0.8, 0.6, 0.4, 0.2), γ = 1. In this paper, the transmit power is 6.3W, the
noise power in the Monte Carlo simulation is σn = −97.5dBm [32], the theoretical results are obtained without
consideration of noise. The Monte Carlo results are obtained by averaging over 106 random realizations.
Fig. 3. The ASE vs. user density λu . λb = 10−4 m−2 , β = 4, F = 8, B = C = 2, Fb = {1, 2, 3, 4}, Fc = {5, 6, 7, 8},
t = (0.8, 0.6, 0.4, 0.2), γ = 1, τ = 0dB.
Problem 1. (ASE Optimization)
R* ≜ max_{F_c, t} λ_b P_s(F_c, t) log_2(1 + τ)   s.t. (2), (3), (4),   (34)
where Ps (Fc , t) is given in (13).
The above problem is a mixed-integer problem to optimize the discrete parameter Fc and the
continuous parameter t. To solve the complex problem, we first explore optimal properties of
the discrete variable Fc and then optimize the continuous variable t. To optimally design the
file allocation Fc , we first study the properties of the STPs of the cached file and backhaul file,
i.e., Psf,c (tf ) and Psb . Based on the properties of the lower triangular Toeplitz matrix form in
(14) and (15), we then obtain the following lemma.
Lemma 3. P_s^{f,c}(t_f) and P_s^b have the following properties:

1) P_s^{f,c}(t_f) is bounded by

t_f / (μ_A t_f + ν_A) ≤ P_s^{f,c}(t_f) ≤ t_f / (μ_B t_f + ν_B),   (35)

where μ_A, ν_A, μ_B and ν_B are given by

μ_A = 1 − (2π/β) csc(2π/β) τ^{2/β} + (2τ/(β−2)) 2F1(1, 1−2/β; 2−2/β; −τ)
      + Σ_{i=1}^{N−1} ((N−i)/N) [ (2τ^i/(iβ−2)) 2F1(i+1, i−2/β; i+1−2/β; −τ) − (2/β) B(2/β+1, i−2/β) τ^{2/β} ],   (36)

ν_A = (2π/β) csc(2π/β) τ^{2/β} − Σ_{i=1}^{N−1} ((N−i)/N) (2/β) B(2/β+1, i−2/β) τ^{2/β},   (37)

μ_B = 1 − (2π/β) csc(2π/β) τ^{2/β} + (2τ/(β−2)) 2F1(1, 1−2/β; 2−2/β; −τ)
      + Σ_{i=1}^{N−1} [ (2τ^i/(iβ−2)) 2F1(i+1, i−2/β; i+1−2/β; −τ) − (2/β) B(2/β+1, i−2/β) τ^{2/β} ],   (38)

ν_B = (2π/β) csc(2π/β) τ^{2/β} − Σ_{i=1}^{N−1} (2/β) B(2/β+1, i−2/β) τ^{2/β}.   (39)
Furthermore, νA , νB are positive and µA , µB are no larger than 1.
2) Psf,c (tf ) is an increasing function of tf .
3) Psf,c (tf ) is no larger than Psb and no smaller than 0, i.e., 0 ≤ Psf,c (tf ) ≤ Psb .
Proof: See Appendix C.
Property 1 in Lemma 3 shows that the bounds of Psf,c (tf ) are fractional functions of tf . The
expressions of the bounds are similar to the expression of the STP of the cached file in single-antenna networks given in Theorem 2. Based on the properties of P_s^{f,c}(t_f) and P_s^b, we have the
following theorem to reveal the properties of the optimal file allocation Fc∗ .
Theorem 3. (Property of the Optimal File Allocation F_c*) To optimize the cached file set F_c for Problem 1, we should allocate at least C files and at most F − B files to the cache; that is to say, the optimal number of cached files satisfies F_c* ∈ {C, C + 1, · · · , F − B}.
Proof. We prove Theorem 3 by contradiction. More specifically, we consider the cases in which fewer than C files or more than F − B files are cached, and we then show that these cases are not optimal in terms of the ASE. For details, please see Appendix D.
The above theorem indicates that under the optimal file allocation, the number of cached files should be at least the cache size and the number of backhaul files should be at least the backhaul capability; that is, we should make full use of both resources.
Based on Theorem 3, we can reduce the complexity of the search for F_c*; otherwise, we would also have to check the cases with F_c < C or F_c > F − B. When B or C is large, Theorem 3 largely reduces the complexity. Under given F_c, we optimize the cache placement t. The ASE optimization over t under given F_c is formulated as follows.
Problem 2. (Cache Placement Optimization under Given Fc )
R*(F_c) ≜ max_t λ_b Σ_{f∈F_c} q_f P_s^{f,c}(t_f) log_2(1 + τ)   s.t. (3), (4),   (40)
where Psf,c (tf ) is given in (14).
The optimal solution for Problem 2 is denoted as t∗ (Fc ). The above problem is a continuous
optimization problem of a differentiable function over a convex set and we can use the gradient
projection method to obtain a local optimal solution. Under given Fc , we can obtain optimal
cache placement t∗ (Fc ) using Algorithm 1.
Algorithm 1 Optimal Solution to Problem 2
1. Initialization: n = 1, n_max = 10^4 and t_i(1) = 1/F_c for all i ∈ F_c. The constant lower triangular Toeplitz matrix ∂B/∂t_i is given in (55).
2. repeat
3.   Calculate D^{c,i} with t_i = t_i(n) according to (16) for all i ∈ F_c.
4.   B_i(n) = (t_i(n) + l_0^{c,i}) I − τ^{2/β} D^{c,i} for all i ∈ F_c.
5.   t_i′(n+1) = t_i(n) + s(n) ‖ B_i^{−1}(n) − t_i(n) B_i^{−1}(n) (∂B/∂t_i) B_i^{−1}(n) ‖_1 λ_b log_2(1+τ) q_i for all i ∈ F_c.
6.   t_i(n+1) = [t_i′(n+1) − u*]_0^1 for all i ∈ F_c, where u* satisfies Σ_{i∈F_c} [t_i′(n+1) − u*]_0^1 = C and [x]_0^1 denotes max{min{1, x}, 0}.
7.   n = n + 1.
8. until convergence or n is larger than n_max.
From Corollary 2, we can easily observe that when N = 1, i.e., in single-antenna networks, the objective function of Problem 2 is concave and Slater's condition is satisfied, implying that strong duality holds. In this case, we can obtain a closed-form optimal solution to the convex optimization problem using the KKT conditions. After some manipulations, we obtain the optimal cache placement t* under given F_c for single-antenna networks.
Corollary 4. (Optimal Cache Placement of Single-Antenna Networks) When the cached file set
Fc is fixed, the optimal cache placement t∗ of the single-antenna networks is given by
t_f* = (1/ζ_1(τ)) [ sqrt( λ_b log_2(1+τ) q_f ζ_2(τ) / u* ) − ζ_2(τ) ]_0^1,   f ∈ F_c,   (41)

where [x]_0^1 denotes max{min{1, x}, 0} and u* is the optimal dual variable for the constraint Σ_{i∈F_c} t_i ≤ C, which satisfies

Σ_{f∈F_c} (1/ζ_1(τ)) [ sqrt( λ_b log_2(1+τ) q_f ζ_2(τ) / u* ) − ζ_2(τ) ]_0^1 = C.   (42)
Here ζ1 (τ ) and ζ2 (τ ) are given in (23) and (24), respectively.
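Corollary 4 is a clipped water-filling-style solution: u* can be found by bisection on the monotone map u ↦ Σ_f t_f*(u). In the sketch below, ζ_1, ζ_2 and the weight w = λ_b log_2(1+τ) are treated as given positive constants, and all numeric values are illustrative:

```python
import math

# Closed-form cache placement of Corollary 4: t_f* is the clipped expression
# in (41); the dual variable u* is found by bisection so that (42) holds,
# i.e., sum_f t_f* = C when the cache constraint is active.
def placement(u, q, z1, z2, w):
    return [min(1.0, max(0.0, (math.sqrt(w * qf * z2 / u) - z2) / z1)) for qf in q]

def solve_u(q, z1, z2, w, C, iters=100):
    lo, hi = 1e-12, 1e12            # sum of t_f*(u) is non-increasing in u
    for _ in range(iters):
        mid = math.sqrt(lo * hi)    # geometric bisection over a wide range
        if sum(placement(mid, q, z1, z2, w)) > C:
            lo = mid
        else:
            hi = mid
    return hi

q = [0.5, 0.25, 0.15, 0.10]         # Zipf-like popularities of F_c (illustrative)
z1, z2, w, C = 0.21, 1.57, 50.0, 2
u = solve_u(q, z1, z2, w, C)
t = placement(u, q, z1, z2, w)
print(abs(sum(t) - C) < 1e-6 and all(0.0 <= x <= 1.0 for x in t))
```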
Finally, combining the analysis of the properties of Fc∗ in Theorem 3 and the optimization of
t under given Fc , we can obtain (Fc∗ , t∗ ) to Problem 1. The process of solving Problem 1 is
summarized in Algorithm 2.
Algorithm 2 Optimal Solution to Problem 1
1. Initialization: R* = 0.
2. for F_c = C : F − B do
3.   for each F_c ∈ {X ⊆ F : |X| = F_c} do
4.     Obtain the optimal solution t*(F_c) to Problem 2 using Algorithm 1 (when N > 1) or Corollary 4 (when N = 1).
5.     if R* < R(F_c, t*(F_c)) then
6.       R* = R(F_c, t*(F_c)), (F_c*, t*) = (F_c, t*(F_c)).
7.     end if
8.   end for
9. end for
Algorithm 2 includes two layers. In the outer layer, we search for F_c* by checking all the Σ_{i=C}^{F−B} \binom{F}{i} possible choices, and in the inner layer, we utilize Algorithm 1 or Corollary 4 to obtain t*(F_c) under given F_c. We refer to the optimal solution based on Algorithm 2 as Exact Opt..
B. Asymptotic ASE Optimization based on the Upper Bound in (33)
In Algorithm 2, we need to consider Σ_{i=C}^{F−B} \binom{F}{i} choices, so the complexity grows exponentially with F. When F is very large, Algorithm 2 is not acceptable due to its high complexity. Furthermore, the expression of P_s^{f,c}(t_f) is complex and we need to calculate a matrix inverse to obtain the derivative with respect to t_f; thus Algorithm 2 also requires high complexity when the number of antennas is large. Note that the upper bound of the STP closely approximates the STP, as illustrated in Fig. 2; therefore we utilize the upper bound of the STP, i.e., P_s^{u,f,c}(t_f), to approximate P_s^{f,c}(t_f). To facilitate the optimization, we formulate the problem of optimizing the asymptotic upper bound of the ASE to provide insightful guidelines for the parameter design in the high user density region. Based on Corollary 3, the asymptotic optimization problem is formulated as follows.
Problem 3. (Asymptotic ASE Optimization)
R*_{u,∞} ≜ max_{F_c, t} R^{u,∞}(F_c, t)   s.t. (2), (3), (4).   (43)
The above problem is a mixed-integer problem. By carefully investigating the characteristic
of Ru,∞ (Fc , t), which is a series of fractional functions of t in (33), we obtain the following
lemma to reveal the properties of Ru,∞ (Fc , t).
Lemma 4. Under a given Fc , Ru,∞ (Fc , t) is an increasing function of tf for any f ∈ Fc .
Proof: See Appendix E.
Based on the properties of the asymptotic upper bound Ru,∞ (Fc , t), we then analyze the
properties of the optimal file allocation Fc∗ and obtain Fc∗ as a unique solution.
Theorem 4. (Asymptotic Optimal File Allocation) In the high user density region, i.e., λ_u → ∞, the optimal cached file set F_c* is given by F_c* = {B + 1, B + 2, · · · , F}.
Proof. We first prove that the number of optimal cached files is F − B and then prove that the
optimal F − B cached files are the least F − B popular files. For details, please see Appendix
F.
Theorem 4 indicates that we should transmit the B most popular files via the backhaul and cache the remaining files when the user density is very large. Compared to the process of checking
Fig. 4. Comparison between Exact Opt. and Asym. Opt.. λb = 10−4 m−2 , β = 4, F = 6, B = C = 2, γ = 0.6, τ = 0dB.
Σ_{i=C}^{F−B} \binom{F}{i} choices in Algorithm 2, we get a unique optimal file allocation and thus largely reduce the complexity. When F_c* is given, we only need to optimize the continuous variable t. We then use the gradient projection method to obtain a local optimal solution. The procedure is summarized in Algorithm 3.
Algorithm 3 Optimal Solution to Problem 3
1. Initialization: n = 1, n_max = 10^4, F_c* = {B + 1, B + 2, · · · , F}, t_i(1) = 1/F_c for all i ∈ F_c*; θ_A(j) = 1 + (2τ/(β−2)) 2F1(1, 1−2/β; 2−2/β; −jατ) − (2π/β) csc(2π/β) (jατ)^{2/β} and θ_C(j) = (2π/β) csc(2π/β) (jατ)^{2/β} for all j ∈ {1, 2, · · · , N}.
2. repeat
3.   t_i′(n+1) = t_i(n) + s(n) ( Σ_{j=1}^{N} (−1)^{j+1} \binom{N}{j} θ_C(j) / (θ_A(j) t_i(n) + θ_C(j))² ) λ_b log_2(1+τ) q_i for all i ∈ F_c.
4.   t_i(n+1) = [t_i′(n+1) − u*]_0^1 for all i ∈ F_c, where u* satisfies Σ_{i∈F_c} [t_i′(n+1) − u*]_0^1 = C and [x]_0^1 denotes max{min{1, x}, 0}.
5.   n = n + 1.
6. until convergence or n is larger than n_max.
Algorithm 3 is guaranteed to converge because the gradient projection method converges to a local optimal point when the feasible set is convex. In Algorithm 3, the calculation of the matrix inverse (Step 5 in Algorithm 1) and the search for F_c* (Step 3 in Algorithm 2) are avoided. Therefore, Algorithm 3 has lower complexity compared to Algorithm 2. We refer to the caching scheme based on Algorithm 3 as Asym. Opt..
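The projection step [t_i′ − u*]_0^1 shared by Algorithms 1 and 3 is itself a small bisection. The sketch below runs a gradient-projection loop on the single-antenna surrogate objective Σ_f q_f t_f/(z1 t_f + z2) (the cached-file part of (22)); z1, z2, the step size and the popularities are illustrative placeholders, not values from the paper:

```python
# Gradient-projection sketch in the spirit of Algorithm 3, for the concave
# single-antenna surrogate objective sum_f q_f t_f / (z1 t_f + z2).
def project(t, C):
    # Projection [t_i - u]_0^1 with sum_i t_i <= C, u found by bisection.
    clipped = [min(1.0, max(0.0, x)) for x in t]
    if sum(clipped) <= C:
        return clipped
    lo, hi = 0.0, max(t)
    for _ in range(200):
        u = 0.5 * (lo + hi)
        s = sum(min(1.0, max(0.0, x - u)) for x in t)
        lo, hi = (u, hi) if s > C else (lo, u)
    return [min(1.0, max(0.0, x - u)) for x in t]

q, z1, z2, C, step = [0.5, 0.25, 0.15, 0.10], 0.21, 1.57, 2.0, 0.5
obj = lambda t: sum(qf * tf / (z1 * tf + z2) for qf, tf in zip(q, t))
t = [0.5] * 4                      # uniform initialization, as in Algorithm 3
start = obj(t)
for _ in range(50):
    grad = [qf * z2 / (z1 * tf + z2) ** 2 for qf, tf in zip(q, t)]
    t = project([tf + step * g for tf, g in zip(t, grad)], C)
print(obj(t) >= start and abs(sum(t) - C) < 1e-6 and all(0 <= x <= 1 for x in t))
```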
Now we utilize simulations to compare the proposed Exact Opt. (the optimal solution obtained
by Algorithm 2) and Asym. Opt. (the asymptotic optimal solution obtained by Algorithm 3). From
Fig. 4, we can see that the performance of Asym. Opt. is very close to that of Exact Opt., even when the user density is low. Therefore, Algorithm 3, with its low complexity, is applicable and effective for parameter design in the general user density region.
VI. N UMERICAL R ESULTS
In this section, we compare the proposed asymptotic optimal caching scheme given by Algorithm 3 with three caching schemes, i.e., the MPC (most popular caching) scheme [8], the UC
(uniform caching) scheme [10] and the IID (identical independent distributed caching) scheme
[11]. In the MPC scheme, the BSs cache or use backhaul to deliver the most popular B +C files.
In the IID scheme, the BSs select B + C files to cache or transmit via the backhaul in an i.i.d. manner with probability q_i for file i. In the UC scheme, the BSs select B + C files according to the
uniform distribution to cache or deliver via the backhaul. Note that in simulations, we consider the
noise. Unless otherwise stated, our simulation environment parameters are as follows: P = 6.3W,
σn = −97.5dBm, λb = 10−4 m−2 , λu = 5 × 10−3 m−2 , β = 4, F = 500, τ = 0dB.
Fig. 5 and Fig. 6 illustrate the area spectrum efficiency vs. different parameters. We observe
that the proposed asymptotic optimal scheme outperforms all previous caching schemes. In
addition, out of the previous caching schemes, the MPC scheme achieves the best performance
and the UC scheme achieves the worst performance.
Fig. 5 (a) plots the ASE vs. the number of BS antennas. We can see that the ASE of all the
schemes increases with the number of BS antennas. This is because the increase of the number
of BS antennas leads to larger spatial diversity and thus achieves better performance. It is shown
that the increase of the number of BS antennas leads to an increasing gap between the proposed
asymptotic optimal caching scheme and previous caching schemes. This is because the better
performance of a larger number of BS antennas leads to a larger gain when we exploit the file
diversity. Furthermore, for asymptotic optimal caching scheme, the less popular files are more
likely to be stored when the number of BS antennas is large. Fig. 5 (b) plots the ASE vs. the
Zipf parameter γ. We can see that the ASE of the proposed asymptotic optimal caching scheme,
the MPC scheme and the IID scheme increases with the increase of the Zipf parameter γ. This
is because when γ increases, the probability that the popular files are requested increases, and hence, the users are more likely to obtain the popular files from the nearby BSs, which either cache the files or fetch them via the backhaul. The change of the Zipf parameter γ has no influence on the ASE of the UC scheme. This is because in the UC scheme, all the files are stored/fetched
(a) Number of BS antennas at C = 30, B = 20, γ = 0.6.
(b) Zipf parameter at N = 8, C = 30, B = 20.
Fig. 5. ASE vs. the number of BS antennas N and Zipf parameter γ.
(a) Cache size at N = 8, B = 20, γ = 0.6.
(b) Backhaul capability at N = 8, C = 30, γ = 0.6.
Fig. 6. ASE vs. cache size C and backhaul capability B.
with the same probability, and the change of the file popularity by altering γ has no influence on the ASE.
Fig. 6 plots the ASE vs. the cache size C or the backhaul capability B. From Fig. 6, we can
see that the ASE of all the schemes increases with the cache size and the backhaul capability
because the probability that a randomly requested file is cached at or delivered via the backhaul
increases. The increase of the cache size leads to the increase of the gap between the proposed
asymptotic optimal caching scheme and the MPC scheme. This is because when the cache size
is small, the ASEs of the proposed caching scheme and the MPC scheme mainly come from the
spectrum efficiency of the backhaul files, which is the same for the proposed caching scheme and the MPC scheme. The increase of the cache size can bring larger gains of caching diversity. However, the increase of the backhaul capability has no influence on the gap between the proposed caching
scheme and the previous caching schemes.
VII. CONCLUSION
In this paper, we consider the analysis and optimization of random caching in backhaul-limited multi-antenna networks. We propose a file allocation and cache placement design to effectively improve the network performance. We first derive an exact expression and an upper bound of the successful transmission probability, using tools from stochastic geometry. Then, we consider the area spectrum efficiency maximization problem with continuous and integer variables. We obtain a locally optimal solution with reduced complexity by exploiting its optimality properties, and we also solve an asymptotic optimization problem in the high user density regime, utilizing the upper bound as the objective function. Finally, we show that the proposed asymptotically optimal caching scheme achieves better performance than the existing caching schemes, and that the gains are larger when the number of antennas is larger and/or the Zipf parameter is smaller.
APPENDIX
A. Proof of Lemma 1
According to the thinning theory of the PPP, the density of the users that request file $f$ is $q_f \lambda_u$. We define $P_i$ as the probability that file $i$ is requested at the BS. Note that $P_i$ is equivalent to the probability that the number of users requesting file $i$ is nonzero, which is $P_i = 1 - \left(1 + \frac{q_i \lambda_u}{3.5\lambda_b}\right)^{-4.5}$ according to [33]. When file $f$ is requested by a certain BS, to calculate the pmf of $F_{b,1}^{r}$ we need to consider the remaining $F_b - 1$ files in $\{\mathcal{F}_b \setminus f\}$, because file $f$ is always requested by the BS. We call a file from the set $\{\mathcal{F}_b \setminus f\}$ a rest backhaul file and a file from the set $\{\mathcal{F}_{b,1}^{r} \setminus f\}$ a rest backhaul request file. To calculate the probability that the number of rest backhaul request files is $k$, i.e., $P_f^{\mathcal{F}_b}\left(F_{b,1}^{r} - 1 = k\right)$, we combine the probabilities of all cases in which a given set of $k$ rest backhaul files is requested and the remaining $F_b - 1 - k$ rest backhaul files are not requested. Therefore, $P_f^{\mathcal{F}_b}\left(F_{b,1}^{r} - 1 = k\right)$ is given by
$$P_f^{\mathcal{F}_b}\left(F_{b,1}^{r} - 1 = k\right) = \sum_{\mathcal{X} \subseteq \{\mathcal{F}_b \setminus f\} : |\mathcal{X}| = k}\; \prod_{i \in \mathcal{X}} P_i \prod_{i \in \{\mathcal{F}_b \setminus f\} \setminus \mathcal{X}} \left(1 - P_i\right), \quad k \in \{0, 1, \cdots, F_b - 1\}. \tag{44}$$
Therefore, we finish the proof of Lemma 1.
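Equation (44) is the pmf of a Poisson-binomial distribution (a sum of independent, non-identically distributed Bernoulli variables). A quick way to sanity-check the subset-sum form is to compare brute-force enumeration against the standard convolution recursion. The `probs` values below are illustrative placeholders, not values derived from the paper's expression for $P_i$.

```python
import itertools

def pmf_bruteforce(probs):
    """P(exactly k events occur), by enumerating subsets as in (44)."""
    n = len(probs)
    pmf = [0.0] * (n + 1)
    for k in range(n + 1):
        for X in itertools.combinations(range(n), k):
            Xs = set(X)
            p = 1.0
            for i in range(n):
                p *= probs[i] if i in Xs else (1.0 - probs[i])
            pmf[k] += p
    return pmf

def pmf_dp(probs):
    """Same pmf via the standard O(n^2) convolution recursion."""
    pmf = [1.0]
    for p in probs:
        new = [0.0] * (len(pmf) + 1)
        for k, v in enumerate(pmf):
            new[k] += v * (1.0 - p)      # event does not occur
            new[k + 1] += v * p          # event occurs
        pmf = new
    return pmf

# Illustrative per-file request probabilities P_i (hypothetical values)
probs = [0.9, 0.6, 0.3, 0.1]
bf, dp = pmf_bruteforce(probs), pmf_dp(probs)
assert all(abs(a - b) < 1e-12 for a, b in zip(bf, dp))
assert abs(sum(bf) - 1.0) < 1e-12   # a valid pmf sums to one
```

The brute-force form mirrors (44) directly, while the convolution form is how one would actually evaluate the pmf for a large file set.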
B. Proof of Theorem 1
The STP of $f \in \mathcal{F}_c$ is given by
$$P_s^{f,c}(t_f) = \mathbb{P}\left(\mathrm{SIR}_f > \tau\right) = \mathbb{P}\left(g_1 > \tau I \|x_1\|^{\beta}\right) \overset{(a)}{=} \mathbb{E}_{x_1}\left[\sum_{m=0}^{N-1} \frac{(-1)^m \tau^m \|x_1\|^{m\beta}}{m!}\, L_I^{(m)}\left(\tau \|x_1\|^{\beta}\right)\right], \tag{45}$$
where (a) follows from $g_1 \sim \mathrm{Gamma}(N, 1)$, $I \triangleq \sum_{i \in \{\Phi_b \setminus 1\}} \|x_i\|^{-\beta} g_i$, $L_I(s) \triangleq \mathbb{E}_I\left[\exp(-Is)\right]$ is the Laplace transform of $I$, and $L_I^{(m)}(s)$ is the $m$th derivative of $L_I(s)$.
Denote $s = \tau \|x_1\|^{\beta}$ and $y_m = \frac{(-1)^m s^m}{m!}\, L_I^{(m)}(s)$; we then have $P_s^{f,c}(t_f) = \mathbb{E}_{x_1}\left[\sum_{m=0}^{N-1} y_m\right]$. To distinguish the interference, we define $I^f = \sum_{i \in \Phi_b^f \setminus B_{f,0}} \|x_i\|^{-\beta} g_i$ and $I^{-f} = \sum_{i \in \Phi_b^{-f} \setminus B_{f,0}} \|x_i\|^{-\beta} g_i$. We then have $L_I\left(\tau \|x_1\|^{\beta}\right) = L_{I^f}\left(\tau \|x_1\|^{\beta}\right) L_{I^{-f}}\left(\tau \|x_1\|^{\beta}\right)$.
Therefore, we can derive the expression of $L_{I^f}\left(\tau \|x_1\|^{\beta}\right)$ as follows:
$$\begin{aligned} L_{I^f}\left(\tau \|x_1\|^{\beta}\right) &= \mathbb{E}\left[\exp\left(-\tau \|x_1\|^{\beta} \sum_{i \in \{\Phi_b^f \setminus 1\}} \|x_i\|^{-\beta} g_i\right)\right] \overset{(a)}{=} \mathbb{E}\left[\prod_{i \in \{\Phi_b^f \setminus 1\}} \frac{1}{1 + \tau \|x_1\|^{\beta} \|x_i\|^{-\beta}}\right] \\ &\overset{(b)}{=} \exp\left(-2\pi t_f \lambda_b \int_{\|x_1\|}^{\infty} \left(1 - \frac{1}{1 + \tau \|x_1\|^{\beta} r^{-\beta}}\right) r \,\mathrm{d}r\right) \\ &= \exp\left(-\pi t_f \lambda_b \frac{2\tau}{\beta - 2}\, {}_2F_1\left(1,\, 1 - \frac{2}{\beta};\, 2 - \frac{2}{\beta};\, -\tau\right) \|x_1\|^2\right), \end{aligned} \tag{46}$$
where (a) follows from $g_i \sim \exp(1)$ due to the random beamforming effect, and (b) follows from the probability generating functional (PGFL) of the PPP [27]. Similarly, we have
$$\begin{aligned} L_{I^{-f}}\left(\tau \|x_1\|^{\beta}\right) &= \prod_{i \in \Phi_b^{-f}} \mathbb{E}\left[\exp\left(-\tau \|x_1\|^{\beta} \|x_i\|^{-\beta} g_i\right)\right] \\ &= \exp\left(-2\pi (1 - t_f) \lambda_b \int_{0}^{\infty} \left(1 - \frac{1}{1 + \tau \|x_1\|^{\beta} r^{-\beta}}\right) r \,\mathrm{d}r\right) \\ &= \exp\left(-\pi (1 - t_f) \lambda_b \frac{2\pi}{\beta} \csc\left(\frac{2\pi}{\beta}\right) \tau^{2/\beta} \|x_1\|^2\right). \end{aligned} \tag{47}$$
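Step (b) of (46)–(47) evaluates the integral $\int_0^\infty \left(1 - \frac{1}{1+s r^{-\beta}}\right) r\,\mathrm{d}r$; substituting $w = r^2 s^{-2/\beta}$ reduces it to $\frac{s^{2/\beta}}{2}\int_0^\infty \frac{\mathrm{d}w}{1+w^{\beta/2}}$, and the remaining integral equals $\frac{2\pi}{\beta}\csc\left(\frac{2\pi}{\beta}\right)$ for $\beta > 2$. The sketch below spot-checks this identity numerically; $\beta = 4$ is an illustrative choice, not a parameter from the paper.

```python
import math

beta = 4.0   # illustrative path-loss exponent (must satisfy beta > 2)
closed = (2 * math.pi / beta) / math.sin(2 * math.pi / beta)

def integrand(w):
    return 1.0 / (1.0 + w ** (beta / 2))

# Composite Simpson's rule on [0, W], plus an analytic upper estimate of the
# tail: for large w the integrand behaves like w^{-beta/2}, so
# integral_W^inf ≈ W^{1 - beta/2} / (beta/2 - 1).
W, n = 1000.0, 200000          # n must be even for Simpson's rule
h = W / n
total = integrand(0.0) + integrand(W)
for i in range(1, n):
    total += (4 if i % 2 else 2) * integrand(i * h)
numeric = total * h / 3 + W ** (1 - beta / 2) / (beta / 2 - 1)

assert abs(numeric - closed) < 1e-3
```

For $\beta = 4$ both sides equal $\pi/2$, i.e. the classical $\int_0^\infty \mathrm{d}w/(1+w^2)$.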
Therefore, the expression of $L_I\left(\tau \|x_1\|^{\beta}\right)$ is given by
$$L_I\left(\tau \|x_1\|^{\beta}\right) = \exp\left(-\pi \lambda_b \left[t_f \frac{2\tau}{\beta - 2}\, {}_2F_1\left(1,\, 1 - \frac{2}{\beta};\, 2 - \frac{2}{\beta};\, -\tau\right) + (1 - t_f) \frac{2\pi}{\beta} \csc\left(\frac{2\pi}{\beta}\right) \tau^{2/\beta}\right] \|x_1\|^2\right).$$
To further calculate the STP, we need to calculate the $n$th derivative of the Laplace transform, $L_I^{(n)}(s)$. After some calculations, we can derive the following recursive relationship:
$$\begin{aligned} L_I^{(n)}(s) = &\sum_{i=0}^{n-1} \binom{n-1}{i} (-1)^{n-i} (n-i)!\, \pi t_f \lambda_b \int_{\|x_1\|^2}^{\infty} \frac{\left(v^{-\beta/2}\right)^{n-i}}{\left(1 + s v^{-\beta/2}\right)^{n-i+1}} \,\mathrm{d}v\; L_I^{(i)}(s) \\ &+ \sum_{i=0}^{n-1} \binom{n-1}{i} (-1)^{n-i} (n-i)!\, \pi (1 - t_f) \lambda_b \int_{0}^{\infty} \frac{\left(v^{-\beta/2}\right)^{n-i}}{\left(1 + s v^{-\beta/2}\right)^{n-i+1}} \,\mathrm{d}v\; L_I^{(i)}(s). \end{aligned} \tag{48}$$
According to the definitions of $y_n$ and $s$, we have
$$y_n = a \sum_{i=0}^{n-1} \frac{n-i}{n}\, l_{n-i}\, y_i, \tag{49}$$
where
$$l_i = (1 - t_f) \int_{0}^{\infty} \frac{\left(w^{-\beta/2}\right)^{i}}{\left(1 + w^{-\beta/2}\right)^{i+1}} \,\mathrm{d}w + t_f \int_{\tau^{-2/\beta}}^{\infty} \frac{\left(w^{-\beta/2}\right)^{i}}{\left(1 + w^{-\beta/2}\right)^{i+1}} \,\mathrm{d}w, \quad i \in \{1, 2, \cdots, N-1\},$$
and $a = \pi \lambda_b \|x_1\|^2 \tau^{2/\beta}$. Note that $l_i$ can be expressed as a combination of the Gauss hypergeometric function and the Beta function, as presented in Theorem 1. Let
$$l_0 = t_f \frac{2\tau}{\beta - 2}\, {}_2F_1\left(1,\, 1 - \frac{2}{\beta};\, 2 - \frac{2}{\beta};\, -\tau\right) + (1 - t_f) \frac{2\pi}{\beta} \csc\left(\frac{2\pi}{\beta}\right) \tau^{2/\beta};$$
we then have $y_0 = L_I\left(\tau \|x_1\|^{\beta}\right) = \exp\left(-\pi \lambda_b l_0 \|x_1\|^2\right)$.
Ä
ä
To get the expression of yn , we need to solve a series of linear equality. We then construct a
Toeplitz matrix as [21] and after some manipulations, we obtain Psf,c (tf ) as follows
Psf,c (tf )
= Ex1 y0
N
−1
X
i=0
1 i i
aD
,
i!
(50)
1
where kk1 is the l1 induced matrix norm and the expression of D is
l
1
l2
.
..
.
0
D=
0
l1
0
..
.
..
lN lM −2 · · ·
l1
.
å
2π
2π
+ (1 − tf ) csc
τ 2\β kx1 k2 .
β
β
(51)
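Because the matrix in (51) is strictly lower triangular, it is nilpotent ($\mathbf{D}^N = \mathbf{0}$), so the truncated series in (50) is exact rather than an approximation. A small self-contained sketch, with hypothetical $l_i$ values and an illustrative $a$:

```python
N = 4
l = [0.5, 0.3, 0.2]   # illustrative l_1, ..., l_{N-1} (hypothetical values)
a = 0.8               # illustrative value playing the role of pi*lambda_b*||x_1||^2*tau^{2/beta}

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

# Strictly lower triangular Toeplitz matrix D as in (51)
D = [[l[i - j - 1] if i > j else 0.0 for j in range(N)] for i in range(N)]

# Nilpotency: D^N = 0, so powers beyond N-1 contribute nothing to (50).
P = [[1.0 if i == j else 0.0 for j in range(N)] for i in range(N)]
for _ in range(N):
    P = matmul(P, D)
assert all(abs(x) < 1e-12 for row in P for x in row)

# Accumulate S = sum_{i=0}^{N-1} a^i D^i / i!  and take the l1-induced norm
# (the maximum absolute column sum).
S = [[0.0] * N for _ in range(N)]
term = [[1.0 if i == j else 0.0 for j in range(N)] for i in range(N)]  # D^0 = I
fact, apow = 1, 1.0
for i in range(N):
    for r in range(N):
        for c in range(N):
            S[r][c] += apow / fact * term[r][c]
    term = matmul(term, D)
    apow *= a
    fact *= (i + 1)
norm1 = max(sum(abs(S[r][c]) for r in range(N)) for c in range(N))
assert norm1 >= 1.0   # S has unit diagonal and nonnegative entries here
```

The nilpotency check makes explicit why the sum in (50) stops at $i = N-1$.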
According to the thinning of the PPP, the PDF of the distance from the closest $f$-cached BS to the typical user is $2\pi t_f \lambda_b \|x_1\| e^{-\pi t_f \lambda_b \|x_1\|^2}$. After taking the expectation over $x_1$ and utilizing the Taylor expansion, the STP is given by
$$P_s^{f,c}(t_f) = \frac{t_f}{t_f + l_0} \left\|\left[\mathbf{I} - \frac{\tau^{2/\beta}}{t_f + l_0}\, \mathbf{D}\right]^{-1}\right\|_1. \tag{52}$$
We can obtain $P_s^{b}$ for $f \in \mathcal{F}_b$ similarly; we omit the details due to space limitation.
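As a quick consistency check on the distance expectation used above: when $N = 1$ the matrix term drops out and the expectation over the nearest-BS distance can be done in closed form, $\int_0^\infty 2\pi t_f \lambda_b r\, e^{-\pi t_f \lambda_b r^2} e^{-\pi \lambda_b l_0 r^2}\,\mathrm{d}r = \frac{t_f}{t_f + l_0}$. The sketch below verifies this numerically for illustrative parameter values (not the paper's simulation setup).

```python
import math

t_f, lam_b, l0 = 0.4, 1.0, 0.7   # illustrative values

def integrand(r):
    # Rayleigh-type nearest-distance pdf times the N = 1 Laplace factor
    return (2 * math.pi * t_f * lam_b * r
            * math.exp(-math.pi * t_f * lam_b * r ** 2)
            * math.exp(-math.pi * lam_b * l0 * r ** 2))

# Composite Simpson's rule on [0, R]; the Gaussian tail beyond R is negligible.
R, n = 10.0, 100000
h = R / n
total = integrand(0.0) + integrand(R)
for i in range(1, n):
    total += (4 if i % 2 else 2) * integrand(i * h)
numeric = total * h / 3

assert abs(numeric - t_f / (t_f + l0)) < 1e-6
```

The same Gaussian-integral identity is what collapses the expectation into the rational form of (52).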
C. Proof of Lemma 3
Firstly, we prove property 1. Let $\mathbf{A} \triangleq \mathbf{I} - \frac{\tau^{2/\beta}}{t_f + l_0^{c,f}}\, \mathbf{D}^{c,f}$, so that $P_s^{f,c}(t_f) = \frac{t_f}{t_f + l_0^{c,f}} \left\|\mathbf{A}^{-1}\right\|_1$. We first derive a lower bound on $\left\|\mathbf{A}^{-1}\right\|_1$. For any $x$ and $y$ satisfying $y = \mathbf{A}^{-1} x$, we have $\left\|\mathbf{A}^{-1}\right\|_1 \geq \frac{\|y\|_1}{\|x\|_1}$ due to the inequality $\|y\|_1 \leq \left\|\mathbf{A}^{-1}\right\|_1 \|x\|_1$. Let $y = [1, 1, \cdots, 1]^T$; then we have
$$\left\|\mathbf{A}^{-1}\right\|_1 \geq \frac{\|y\|_1}{\|x\|_1} = \frac{N}{N - \frac{\tau^{2/\beta}}{t_f + l_0^{c,f}} \sum_{i=1}^{N-1} (N - i)\, l_i^{c,f}}. \tag{53}$$
We then derive an upper bound on $\left\|\mathbf{A}^{-1}\right\|_1$. Noticing that $\mathbf{A}^{-1} = (\mathbf{I} - \mathbf{A}) \mathbf{A}^{-1} + \mathbf{I}$ and using the triangle inequality, we have $\left\|\mathbf{A}^{-1}\right\|_1 \leq \left\|\mathbf{I} - \mathbf{A}\right\|_1 \left\|\mathbf{A}^{-1}\right\|_1 + \left\|\mathbf{I}\right\|_1$. Therefore, we obtain the upper bound
$$\left\|\mathbf{A}^{-1}\right\|_1 \leq \frac{\left\|\mathbf{I}\right\|_1}{1 - \left\|\mathbf{I} - \mathbf{A}\right\|_1} = \frac{1}{1 - \frac{\tau^{2/\beta}}{t_f + l_0^{c,f}} \sum_{i=1}^{N-1} l_i^{c,f}}. \tag{54}$$
Since $P_s^{f,c}(t_f) = \frac{t_f}{t_f + l_0^{c,f}} \left\|\mathbf{A}^{-1}\right\|_1$, we get the bounds on $P_s^{f,c}(t_f)$ in (35) after substituting the expressions of $l_i^{c,f}$ and $l_0^{c,f}$ from Theorem 1. It can be shown that $l_{i+1}^{c,f} \leq l_i^{c,f}$ for $i \in \mathbb{N}$ and $l_0^{c,f} = \sum_{i=1}^{\infty} l_i^{c,f}$ for all $t_f \in [0, 1]$. Moreover, $l_i^{c,f}$ decreases with $t_f$ for $i \in \mathbb{N}$. Therefore, after carefully checking the properties of $\nu_A$, $\mu_A$, $\nu_B$ and $\mu_B$, we obtain property 1.
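The sandwich (53)–(54) can be spot-checked numerically: build $\mathbf{A} = \mathbf{I} - c\mathbf{D}$ for a strictly lower triangular Toeplitz $\mathbf{D}$ with nonnegative decreasing coefficients, where $c$ plays the role of $\tau^{2/\beta}/(t_f + l_0^{c,f})$ and is small enough that the upper-bound denominator stays positive. All values below are illustrative.

```python
N = 5
l = [0.4, 0.2, 0.1, 0.05]   # illustrative l_1, ..., l_{N-1}, decreasing and nonnegative
c = 0.9                      # illustrative; c * sum(l) must be < 1 for the upper bound

A = [[(1.0 if i == j else 0.0) - (c * l[i - j - 1] if i > j else 0.0)
      for j in range(N)] for i in range(N)]

def solve_lower_triangular(A, b):
    """Forward substitution: A is lower triangular with nonzero diagonal."""
    x = [0.0] * len(b)
    for i in range(len(b)):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i))) / A[i][i]
    return x

# l1-induced norm of A^{-1} = max over columns j of sum_i |(A^{-1} e_j)_i|
cols = [solve_lower_triangular(A, [1.0 if i == j else 0.0 for i in range(N)])
        for j in range(N)]
norm_inv = max(sum(abs(v) for v in col) for col in cols)

lower = N / (N - c * sum((N - i) * l[i - 1] for i in range(1, N)))   # cf. (53)
upper = 1.0 / (1.0 - c * sum(l))                                     # cf. (54)
assert lower <= norm_inv + 1e-12
assert norm_inv <= upper + 1e-12
```

The lower bound is exactly the $\|y\|_1 / \|\mathbf{A}y\|_1$ argument with $y$ the all-ones vector, and the upper bound is the Neumann-series condition $\|\mathbf{I}-\mathbf{A}\|_1 < 1$.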
Secondly, we prove property 2. We define $\mathbf{B} \triangleq \left(t_f + l_0^{c,f}\right) \mathbf{A}$, so that $P_s^{f,c}(t_f) = \left\|t_f \mathbf{B}^{-1}\right\|_1$. Furthermore, the derivative of $\mathbf{B}$ w.r.t. $t_f$ is the lower triangular Toeplitz matrix
$$\frac{\partial \mathbf{B}}{\partial t_f} = \begin{pmatrix} 1 - k_0 & & & \\ k_1 & 1 - k_0 & & \\ \vdots & \ddots & \ddots & \\ k_{N-1} & \cdots & k_1 & 1 - k_0 \end{pmatrix}, \tag{55}$$
where $k_0$ and $k_i$, $i \in \{1, 2, \cdots, N-1\}$, are given by
$$k_0 = \frac{2\pi}{\beta} \csc\left(\frac{2\pi}{\beta}\right) \tau^{2/\beta} - \frac{2\tau}{\beta - 2}\, {}_2F_1\left(1,\, 1 - \frac{2}{\beta};\, 2 - \frac{2}{\beta};\, -\tau\right), \tag{56}$$
$$k_i = \frac{2\tau^{2/\beta}}{\beta}\, B\left(\frac{2}{\beta} + 1,\, i - \frac{2}{\beta}\right) - \frac{2\tau^i}{i\beta - 2}\, {}_2F_1\left(i + 1,\, i - \frac{2}{\beta};\, i + 1 - \frac{2}{\beta};\, -\tau\right), \quad 1 \leq i \leq N - 1. \tag{57}$$
Then we derive the derivative of $P_s^{f,c}(t_f)$ w.r.t. $t_f$ as follows:
$$\begin{aligned} \frac{\partial P_s^{f,c}(t_f)}{\partial t_f} &= \frac{\partial \left\|t_f \mathbf{B}^{-1}\right\|_1}{\partial t_f} = \left\|\mathbf{B}^{-1} - t_f \mathbf{B}^{-1} \frac{\partial \mathbf{B}}{\partial t_f} \mathbf{B}^{-1}\right\|_1 \\ &\geq \frac{1}{t_f} \left\|t_f \mathbf{B}^{-1}\right\|_1 \left(1 - \left\|\frac{\partial \mathbf{B}}{\partial t_f}\right\|_1 \left\|t_f \mathbf{B}^{-1}\right\|_1\right) \\ &= \frac{1}{t_f} \left\|t_f \mathbf{B}^{-1}\right\|_1 \left(1 - \left(1 - k_0 + \sum_{i=1}^{N-1} k_i\right) \left\|t_f \mathbf{B}^{-1}\right\|_1\right). \end{aligned} \tag{58}$$
Note that $1 - k_0 + \sum_{i=1}^{N-1} k_i \leq 1 - k_0 + \sum_{i=1}^{\infty} k_i = 1$; therefore we have
$$\frac{\partial P_s^{f,c}(t_f)}{\partial t_f} \geq \frac{1}{t_f} \left\|t_f \mathbf{B}^{-1}\right\|_1 \left(1 - \left\|t_f \mathbf{B}^{-1}\right\|_1\right) = \frac{1}{t_f}\, P_s^{f,c}(t_f) \left(1 - P_s^{f,c}(t_f)\right). \tag{59}$$
Since $P_s^{f,c}(t_f) \in [0, 1]$, the right-hand side of (59) is nonnegative, and we finish the proof of property 2.
Finally, we prove property 3. Note that $P_s^{f,c}(t_f) = P_s^{b}$ if and only if $t_f = 1$. According to property 2, we have $P_s^{f,c}(t_f) \leq P_s^{b}$. According to property 1, the upper bound and the lower bound of $P_s^{f,c}(t_f)$ are both 0 at $t_f = 0$; therefore, $P_s^{f,c}(0) = 0$.
D. Proof of Theorem 3
Proving property 1 is equivalent to proving that $B \leq |\mathcal{F}_b^*| \leq F - C$ when optimizing $P_s(\mathcal{F}_c, \mathbf{t})$. We first prove that $|\mathcal{F}_b^*| \leq F - C$. Suppose that there exists an optimal solution $(\mathcal{F}_b^*, \mathbf{t}^*)$ to Problem 1 satisfying $|\mathcal{F}_b^*| > F - C$. Now we construct a feasible solution $(\mathcal{F}_b', \mathbf{t}')$ to Problem 1, where $\mathcal{F}_b'$ is the set of the $F - C$ most popular files of $\mathcal{F}_b^*$, and the elements of $\mathbf{t}'$ are the same as those of $\mathbf{t}^*$ if $f \in \{\mathcal{F} \setminus \mathcal{F}_b^*\}$ and are one if $f \in \{\mathcal{F}_b^* \setminus \mathcal{F}_b'\}$. Due to the fact that $|\mathcal{F}_c'| \leq C$, $(\mathcal{F}_b', \mathbf{t}')$ is a feasible solution satisfying the constraints. Note that $P_s^{f,c}(t_f) = P_s^{b}$ if $t_f = 1$; we then have
$$P_s(\mathbf{t}', \mathcal{F} \setminus \mathcal{F}_b') - P_s(\mathbf{t}^*, \mathcal{F} \setminus \mathcal{F}_b^*) = \sum_{f \in \mathcal{F}_b^* \setminus \mathcal{F}_b'} q_f \left(P_s^{f,c}(1) - \sum_{k=1}^{F_b} \mathbb{P}\left(F_{b,1}^{r} = k\right) \frac{B}{\max(k, B)}\, P_s^{b}\right) > 0,$$
which contradicts the optimality of $(\mathcal{F}_b^*, \mathbf{t}^*)$. Therefore, we prove $|\mathcal{F}_b^*| \leq F - C$.
We then prove that $|\mathcal{F}_b^*| \geq B$. Suppose that there exists an optimal solution $(\mathcal{F}_b^*, \mathbf{t}^*)$ to Problem 1 satisfying $|\mathcal{F}_b^*| < B$. Now we construct a feasible solution $(\mathcal{F}_b', \mathbf{t}')$ to Problem 1, where $\mathcal{F}_b'$ is the union of $\mathcal{F}_b^*$ and any $B - |\mathcal{F}_b^*|$ files in $\{\mathcal{F} \setminus \mathcal{F}_b^*\}$, and the elements of $\mathbf{t}'$ are the same as those of $\mathbf{t}^*$ if $f \in \{\mathcal{F} \setminus \mathcal{F}_b'\}$. When $|\mathcal{F}_b| \leq B$, $P_s(\mathcal{F}_c, \mathbf{t}) = \sum_{f \in \mathcal{F}_c} q_f P_s^{f,c}(t_f) + \sum_{f \in \mathcal{F} \setminus \mathcal{F}_c} q_f P_s^{b}$. Note that $P_s^{b} \geq P_s^{f,c}(t_f)$; we then have $P_s(\mathbf{t}', \mathcal{F} \setminus \mathcal{F}_b') - P_s(\mathbf{t}^*, \mathcal{F} \setminus \mathcal{F}_b^*) = \sum_{f \in \mathcal{F}_b' \setminus \mathcal{F}_b^*} q_f \left(P_s^{b} - P_s^{f,c}(t_f)\right) \geq 0$, which contradicts the optimality of $(\mathcal{F}_b^*, \mathbf{t}^*)$. Therefore, we prove $|\mathcal{F}_b^*| \geq B$.
E. Proof of Lemma 4
Proving Lemma 4 is equivalent to proving that $P_s^{u,f,c}(t_f)$ is increasing w.r.t. $t_f$ for $f \in \mathcal{F}_c$. When $N = 1$, we have $P_s^{u,f,c}(t_f) = \frac{q_f t_f}{\zeta_1(\alpha\tau) t_f + \zeta_2(\alpha\tau)}$, which is increasing w.r.t. $t_f$. We then consider the scenario $N \geq 2$. According to the proof of the upper bound, $P_s^{u,f,c}(t_f) = 1 - \mathbb{E}_{I, x_1}\left[\left(1 - \exp\left(-\alpha\tau I \|x_1\|^{\beta}\right)\right)^N\right]$. After taking the expectation over $I$, we have
$$P_s^{u,f,c}(t_f) = 1 - \mathbb{E}_{x_1}\left[\left(1 - \exp\left(\pi\lambda_b \left(\left(1 - \zeta_1(\alpha\tau)\right) t_f - \zeta_2(\alpha\tau)\right) \|x_1\|^2\right)\right)^N\right], \tag{60}$$
where $\zeta_1(\alpha\tau)$ and $\zeta_2(\alpha\tau)$ are given in (23) and (24).
Let $U(t_f | x_1) \triangleq 1 - \exp\left(\pi\lambda_b \left(\left(1 - \zeta_1(\alpha\tau)\right) t_f - \zeta_2(\alpha\tau)\right) \|x_1\|^2\right)$; we then have
$$\frac{\partial U^N(t_f | x_1)}{\partial t_f} = N \pi\lambda_b \|x_1\|^2 \left(\zeta_1(\alpha\tau) - 1\right) U^{N-1}(t_f | x_1) \exp\left(\pi\lambda_b \left(\left(1 - \zeta_1(\alpha\tau)\right) t_f - \zeta_2(\alpha\tau)\right) \|x_1\|^2\right).$$
Note that $\zeta_1(\alpha\tau) < 1$ and $\zeta_2(\alpha\tau) > 0$; hence $\frac{\partial U^N(t_f | x_1)}{\partial t_f} < 0$ and $U^N(t_f | x_1)$ is a decreasing function of $t_f$ for any $x_1$. Therefore, $P_s^{u,f,c}(t_f) = 1 - \mathbb{E}_{x_1}\left[U^N(t_f | x_1)\right]$ is an increasing function of $t_f$. This is because $\mathbb{E}_{x_1}\left[U^N(t_f | x_1)\right]$ can be interpreted as a combination of a series of $U^N(t_f | x_1)$ with different $x_1$, and the monotonicity is preserved.
F. Proof of Theorem 4
Proving $\mathcal{F}_c^* = \{B+1, B+2, \cdots, F\}$ is equivalent to proving $\mathcal{F}_b^* = \{1, 2, \cdots, B\}$.
Firstly, we prove that $|\mathcal{F}_b^*| \leq B$. Suppose that there exists an optimal solution $(\mathcal{F}_b^*, \mathbf{t}^*)$ to Problem 1 satisfying $|\mathcal{F}_b^*| > B$, so that $\max(|\mathcal{F}_b^*|, B) = |\mathcal{F}_b^*|$. Now we construct a feasible solution $(\mathcal{F}_b', \mathbf{t}')$ to Problem 3, where $\mathcal{F}_b'$ is the set of the $B$ most popular files of $\mathcal{F}_b^*$, and the elements of $\mathbf{t}'$ are the same as those of $\mathbf{t}^*$ if $f \in \{\mathcal{F} \setminus \mathcal{F}_b^*\}$ and are zero if $f \in \{\mathcal{F}_b^* \setminus \mathcal{F}_b'\}$. Note that $P_s^{u,f,c}(t_f) = 0$ if and only if $t_f = 0$; we then have
$$R_{s,\infty}^{u}(\mathbf{t}', \mathcal{F} \setminus \mathcal{F}_b') - R_{s,\infty}^{u}(\mathbf{t}^*, \mathcal{F} \setminus \mathcal{F}_b^*) = \lambda_b \log_2(1 + \tau) \left(\sum_{f \in \mathcal{F}_b'} q_f P_s^{u,b}\, \frac{B}{B} - \sum_{f \in \mathcal{F}_b^*} q_f P_s^{u,b}\, \frac{B}{|\mathcal{F}_b^*|}\right) > 0,$$
because $P_s^{u,b} > 0$, which contradicts the optimality of $(\mathcal{F}_b^*, \mathbf{t}^*)$. Therefore, we can prove $|\mathcal{F}_b^*| \leq B$.
Secondly, we prove that $|\mathcal{F}_b^*| \geq B$. Note that $P_s^{u,f,c}(t_f)$ increases w.r.t. $t_f$ and $P_s^{u,f,c}(t_f) \leq P_s^{u,b}$; therefore we can prove $|\mathcal{F}_b^*| \geq B$ in the same way as in Appendix D.
Combining the results $|\mathcal{F}_b^*| \leq B$ and $|\mathcal{F}_b^*| \geq B$, we have $|\mathcal{F}_b^*| = B$. Finally, we prove $\mathcal{F}_b^* = \{1, 2, \cdots, B\}$. Suppose that there exists an optimal solution $(\mathcal{F}_b^*, \mathbf{t}^*)$ to Problem 1 satisfying $|\mathcal{F}_b^*| = B$ and $\mathcal{F}_b^* \neq \{1, 2, \cdots, B\}$. We construct a feasible solution $(\mathcal{F}_b', \mathbf{t}')$ to Problem 3, where $\mathcal{F}_b' = \{1, 2, \cdots, B\}$; the elements of $\mathbf{t}'$ are the same as those of $\mathbf{t}^*$ if $f \in \{\mathcal{F} \setminus (\mathcal{F}_b^* \cup \mathcal{F}_b')\}$ and are assigned in order for the rest of the files, i.e., $t'_{f_n} = t^*_{f_m}$ for $f_n \in \{\mathcal{F}_b^* \cap \mathcal{F}_c'\}$, $f_m \in \{\mathcal{F}_b' \cap \mathcal{F}_c^*\}$, $n = m$, where $n$ and $m$ denote the order of the file in $\{\mathcal{F}_b^* \cap \mathcal{F}_c'\}$ and $\{\mathcal{F}_b' \cap \mathcal{F}_c^*\}$, respectively. We then have
$$R_{s,\infty}^{u}(\mathbf{t}', \mathcal{F} \setminus \mathcal{F}_b') - R_{s,\infty}^{u}(\mathbf{t}^*, \mathcal{F} \setminus \mathcal{F}_b^*) = \lambda_b \log_2(1 + \tau) \sum_{f_m \in \{\mathcal{F}_b' \cap \mathcal{F}_c^*\}} q_{f_m} \left(P_s^{u,b} - P_s^{c,b}\left(t^*_{f_m}\right)\right) - \lambda_b \log_2(1 + \tau) \sum_{f_n \in \{\mathcal{F}_b^* \cap \mathcal{F}_c'\}} q_{f_n} \left(P_s^{u,b} - P_s^{c,b}\left(t'_{f_n}\right)\right).$$
Note that $P_s^{u,b} - P_s^{c,b}\left(t^*_{f_m}\right) = P_s^{u,b} - P_s^{c,b}\left(t'_{f_n}\right)$ if $m = n$; we then obtain
$$R_{s,\infty}^{u}(\mathbf{t}', \mathcal{F} \setminus \mathcal{F}_b') - R_{s,\infty}^{u}(\mathbf{t}^*, \mathcal{F} \setminus \mathcal{F}_b^*) = \lambda_b \log_2(1 + \tau) \sum_{m \in \{1, 2, \cdots, |\{\mathcal{F}_b' \cap \mathcal{F}_c^*\}|\}} \left(q_{f_m} - q_{f_n}\right) \left(P_s^{u,b} - P_s^{c,b}\left(t^*_{f_m}\right)\right) > 0.$$
The last inequality holds because $q_{f_m} > q_{f_n}$ when $n = m$ and $P_s^{u,b} > P_s^{c,b}\left(t^*_{f_m}\right)$.
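The final inequality above (and the "most popular files" constructions in Appendices D and F) rests on a rearrangement-type fact: with strictly decreasing popularities, the $B$ most popular files maximize the popularity mass over all size-$B$ subsets. A brute-force check with illustrative Zipf weights:

```python
import itertools

gamma, F, B = 0.8, 7, 3             # illustrative Zipf parameter and set sizes
raw = [n ** (-gamma) for n in range(1, F + 1)]
q = [r / sum(raw) for r in raw]     # Zipf popularities, q_1 > q_2 > ... > q_F

# Enumerate every size-B subset and find the one with maximum popularity mass
best = max(itertools.combinations(range(F), B),
           key=lambda S: sum(q[i] for i in S))
assert set(best) == set(range(B))   # the top-B most popular files win
```

Any pairwise swap of a less popular file for a more popular one strictly increases the objective, which is exactly the exchange argument the proofs use.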
REFERENCES
[1] N. Bhushan, J. Li, D. Malladi, R. Gilmore, D. Brenner, A. Damnjanovic, R. T. Sukhavasi, C. Patel, and S. Geirhofer,
“Network densification: the dominant theme for wireless evolution into 5G,” IEEE Communications Magazine, vol. 52,
no. 2, pp. 82–89, February 2014.
[2] J. G. Andrews, S. Buzzi, W. Choi, S. V. Hanly, A. Lozano, A. C. K. Soong, and J. C. Zhang, “What will 5G be?” IEEE
Journal on Selected Areas in Communications, vol. 32, no. 6, pp. 1065–1082, June 2014.
[3] X. Wang, M. Chen, T. Taleb, A. Ksentini, and V. C. M. Leung, “Cache in the air: exploiting content caching and delivery
techniques for 5G systems,” IEEE Communications Magazine, vol. 52, no. 2, pp. 131–139, February 2014.
[4] A. Liu and V. K. N. Lau, “Cache-enabled opportunistic cooperative MIMO for video streaming in wireless systems,” IEEE
Transactions on Signal Processing, vol. 62, no. 2, pp. 390–402, Jan 2014.
[5] ——, “Exploiting base station caching in MIMO cellular networks: Opportunistic cooperation for video streaming,” IEEE
Transactions on Signal Processing, vol. 63, no. 1, pp. 57–69, Jan 2015.
[6] M. Tao, E. Chen, H. Zhou, and W. Yu, “Content-centric sparse multicast beamforming for cache-enabled cloud RAN,”
IEEE Transactions on Wireless Communications, vol. 15, no. 9, pp. 6118–6131, Sept 2016.
[7] J. G. Andrews, F. Baccelli, and R. K. Ganti, “A tractable approach to coverage and rate in cellular networks,” IEEE
Transactions on Communications, vol. 59, no. 11, pp. 3122–3134, November 2011.
[8] E. Baştuğ, M. Bennis, M. Kountouris, and M. Debbah, “Cache-enabled small cell networks: Modeling and tradeoffs,”
EURASIP Journal on Wireless Communications and Networking, vol. 2015, no. 1, pp. 1–11, 2015.
[9] C. Yang, Y. Yao, Z. Chen, and B. Xia, “Analysis on cache-enabled wireless heterogeneous networks,” IEEE Transactions
on Wireless Communications, vol. 15, no. 1, pp. 131–145, Jan 2016.
[10] S. T. ul Hassan, M. Bennis, P. H. J. Nardelli, and M. Latva-Aho, “Modeling and analysis of content caching in wireless
small cell networks,” in 2015 International Symposium on Wireless Communication Systems (ISWCS), Aug 2015, pp.
765–769.
[11] B. B. Nagaraja and K. G. Nagananda, “Caching with unknown popularity profiles in small cell networks,” in 2015 IEEE
Global Communications Conference (GLOBECOM), Dec 2015, pp. 1–6.
[12] Y. Cui, D. Jiang, and Y. Wu, “Analysis and optimization of caching and multicasting in large-scale cache-enabled wireless
networks,” IEEE Transactions on Wireless Communications, vol. 15, no. 7, pp. 5101–5112, July 2016.
[13] Y. Cui and D. Jiang, “Analysis and optimization of caching and multicasting in large-scale cache-enabled heterogeneous
wireless networks,” IEEE Transactions on Wireless Communications, vol. 16, no. 1, pp. 250–264, Jan 2017.
[14] J. Wen, K. Huang, S. Yang, and V. O. K. Li, “Cache-enabled heterogeneous cellular networks: Optimal tier-level content
placement,” IEEE Transactions on Wireless Communications, vol. 16, no. 9, pp. 5939–5952, Sept 2017.
[15] Z. Chen, J. Lee, T. Q. S. Quek, and M. Kountouris, “Cooperative caching and transmission design in cluster-centric small
cell networks,” IEEE Transactions on Wireless Communications, vol. PP, no. 99, pp. 1–1, 2017.
[16] S. Kuang and N. Liu, “Cache-enabled base station cooperation for heterogeneous cellular network with dependence,” in
2017 IEEE Wireless Communications and Networking Conference (WCNC), March 2017, pp. 1–6.
[17] D. Liu and C. Yang, “Caching policy toward maximal success probability and area spectral efficiency of cache-enabled
HetNets,” IEEE Transactions on Communications, vol. PP, no. 99, pp. 1–1, 2017.
[18] X. Yu, C. Li, J. Zhang, and K. B. Letaief, “A tractable framework for performance analysis of dense multi-antenna
networks,” arXiv preprint arXiv:1702.04573, 2017.
[19] L. H. Afify, H. ElSawy, T. Y. Al-Naffouri, and M. S. Alouini, “A unified stochastic geometry model for MIMO cellular
networks with retransmissions,” IEEE Transactions on Wireless Communications, vol. 15, no. 12, pp. 8595–8609, Dec
2016.
[20] H. S. Dhillon, M. Kountouris, and J. G. Andrews, “Downlink MIMO HetNets: Modeling, ordering results and performance
analysis,” IEEE Transactions on Wireless Communications, vol. 12, no. 10, pp. 5208–5222, October 2013.
[21] C. Li, J. Zhang, and K. B. Letaief, “Throughput and energy efficiency analysis of small cell networks with multi-antenna
base stations,” IEEE Transactions on Wireless Communications, vol. 13, no. 5, pp. 2505–2517, May 2014.
[22] C. Li, J. Zhang, J. G. Andrews, and K. B. Letaief, “Success probability and area spectral efficiency in multiuser MIMO
HetNets,” IEEE Transactions on Communications, vol. 64, no. 4, pp. 1544–1556, April 2016.
[23] C. Li, J. Zhang, M. Haenggi, and K. B. Letaief, “User-centric intercell interference nulling for downlink small cell
networks,” IEEE Transactions on Communications, vol. 63, no. 4, pp. 1419–1431, April 2015.
[24] X. Yu, J. Zhang, M. Haenggi, and K. B. Letaief, “Coverage analysis for millimeter wave networks: The impact of directional
antenna arrays,” arXiv preprint arXiv:1702.04493, 2017.
[25] T. Bai and R. W. Heath, “Coverage and rate analysis for millimeter-wave cellular networks,” IEEE Transactions on Wireless
Communications, vol. 14, no. 2, pp. 1100–1114, Feb 2015.
[26] H. Alzer, “On some inequalities for the incomplete gamma function,” Mathematics of Computation of the American
Mathematical Society, vol. 66, no. 218, pp. 771–778, 1997.
[27] M. Haenggi, Stochastic geometry for wireless networks. Cambridge University Press, 2012.
[28] L. Breslau, P. Cao, L. Fan, G. Phillips, and S. Shenker, “Web caching and Zipf-like distributions: evidence and implications,”
in INFOCOM ’99. Eighteenth Annual Joint Conference of the IEEE Computer and Communications Societies. Proceedings.
IEEE, vol. 1, Mar 1999, pp. 126–134.
[29] A. M. Hunter, J. G. Andrews, and S. Weber, “Transmission capacity of ad hoc networks with spatial diversity,” IEEE
Transactions on Wireless Communications, vol. 7, no. 12, pp. 5058–5071, December 2008.
[30] F. Baccelli, B. Blaszczyszyn, and P. Muhlethaler, “An aloha protocol for multihop mobile wireless networks,” IEEE
Transactions on Information Theory, vol. 52, no. 2, pp. 421–436, Feb 2006.
[31] W. C. Cheung, T. Q. S. Quek, and M. Kountouris, “Throughput optimization, spectrum allocation, and access control in
two-tier femtocell networks,” IEEE Journal on Selected Areas in Communications, vol. 30, no. 3, pp. 561–574, April 2012.
[32] H. Holma and A. Toskala, LTE for UMTS-OFDMA and SC-FDMA based radio access. John Wiley & Sons, 2009.
[33] S. Singh, H. S. Dhillon, and J. G. Andrews, “Offloading in heterogeneous networks: Modeling, analysis, and design
insights,” IEEE Transactions on Wireless Communications, vol. 12, no. 5, pp. 2484–2497, May 2013.
arXiv:1612.05971v3 [] 21 Mar 2018
An Integrated Optimization + Learning Approach to
Optimal Dynamic Pricing for the Retailer with Multi-type
Customers in Smart Grids✩
Fanlin Menga,d,∗, Xiao-Jun Zengb, Yan Zhangc, Chris J. Dentd , Dunwei Gonge
a School of Engineering and Computing Sciences, Durham University, Durham DH1 3LE, UK
b School of Computer Science, The University of Manchester, Manchester M13 9PL, UK
c College of Information System and Management, National University of Defense Technology, Changsha 410073, China
d School of Mathematics, University of Edinburgh, Edinburgh EH9 3FD, UK
e School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
Abstract
In this paper, we consider a realistic and meaningful scenario in the context of smart
grids where an electricity retailer serves three different types of customers, i.e., customers with an optimal home energy management system embedded in their smart
meters (C-HEMS), customers with only smart meters (C-SM), and customers without
smart meters (C-NONE). The main objective of this paper is to support the retailer to
make optimal day-ahead dynamic pricing decisions in such a mixed customer pool. To
this end, we propose a two-level decision-making framework where the retailer, acting as the upper-level agent, first announces its electricity prices for the next 24 hours, and customers, acting as lower-level agents, subsequently schedule their energy usage accordingly. For the lower-level problem, we model the price responsiveness of different
customers according to their unique characteristics. For the upper level problem, we
optimize the dynamic prices for the retailer to maximize its profit subject to realistic
market constraints. The above two-level model is tackled by genetic algorithms (GA)
based distributed optimization methods, while its feasibility and effectiveness are confirmed via simulation results.
✩ ©2018. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/. Please cite this accepted article as: Fanlin Meng, Xiao-Jun Zeng, Yan Zhang, Chris J. Dent, Dunwei Gong, An Integrated Optimization + Learning Approach to Optimal Dynamic Pricing for the Retailer with Multi-type Customers in Smart Grids, Information Sciences (2018), doi: 10.1016/j.ins.2018.03.039
∗ Corresponding author. Tel.: +44 131 650 5069; Email: [email protected].
Preprint submitted to Information Sciences, March 22, 2018
Keywords:
Bilevel Modelling, Genetic Algorithms, Machine Learning, Dynamic Pricing,
Demand-side Management, Demand Response, Smart Grids
1. Introduction
With the large-scale deployment of smart meters and two-way communication infrastructures, dynamic pricing based demand response and demand-side management
programs [37] [12] have attracted enormous attention from both academia and industry and are expected to bring great benefits to the whole power system. Under dynamic pricing, the price changes between different time segments. Real-time pricing (RTP), time-of-use pricing (ToU) and critical-peak pricing (CPP) are commonly used dynamic pricing strategies [20]. A number of studies on designing optimal dynamic pricing strategies have emerged within the last decade. For instance, a residential implementation of CPP
was investigated in [18] and an optimal ToU pricing for residential load control was
proposed in [10]. An optimal RTP algorithm based on utility maximization for smart
grid was proposed in [36]. More recently, a game theory based dynamic pricing method
for demand-side management was proposed in [39] where different pricing strategies
such as RTP and ToU are evaluated for the residential and commercial sectors.
Many existing studies on dynamic pricing based demand response and demand-side management assume that customers are equipped with home energy management systems (HEMS) in their smart meters, i.e., optimization software which is able to
help customers maximize their benefits such as minimizing their payment bills. For
instance, references [5, 30–32, 44, 48] propose different HEMS for customers to help
them deal with the dynamic pricing signals. Instead of focusing on the single level
customer-side optimization problems, references [9, 28, 33, 43, 45, 46, 49] deal with
how a retailer determines retail electricity prices based on the expected responses of
customers where they model the interactions between a retailer and its customers as a
Stackelberg game or bilevel optimization problem where HEMS are assumed to have
been installed for all customers.
In contrast, [19] [29] investigate the electricity consumption behavior of customers
who have installed smart meters without HEMS embedded (C-SM). Even without
HEMS installed, these customers can easily get access to price information and their
electricity consumption data through smart meters, and are likely to respond to dynamic price signals. On the other hand, with smart meters and two-way communication, the retailer is able to identify each customer’s energy consumption patterns based
on historical smart meter data. For instance, [19] presents a stochastic regression based
approach to predict the hourly electricity consumption of each residential customer
from historical smart meter data in the context of dynamic pricing. [29] proposes two
appliance-level electricity consumption behavior models for each customer in the context of dynamic pricing with the premise that appliance-level electricity consumption
data at each hour can be disaggregated from household-level smart meter data using
non-intrusive appliance load monitoring (NILM) [17].
In addition to C-HEMS and C-SM, it is also unavoidable that some customers do
not have smart meters installed (C-NONE). Therefore, these customers do not have direct access to electricity prices 1 or to their historical consumption, and are more likely to
have a relatively low demand elasticity. On the other hand, without smart meters, the
retailer does not have accurate consumption data of each individual customer but only
the aggregated demand data of all customers. As a result, an aggregated demand model
is needed to forecast the total demand of C-NONE. Existing research on aggregated demand modelling in the context of dynamic pricing includes artificial intelligence based
approaches [21, 35, 47] and demand elasticity based approaches [15, 22, 26, 40].
Although the above and other unlisted studies have provided valuable insights on
how to model customers’ demand patterns in the context of dynamic pricing and smart
grids, they all consider scenarios where only one single type of customers exists. However, there will be situations when several types of customers with different levels of demand elasticity (e.g., C-HEMS, C-SM, and C-NONE) coexist in the market, especially during the transition phase of smart metering and smart grids (e.g., the smart meter roll-out in the UK is expected to finish by 2020 [2]).
1 Although customers of this type could not receive price signals directly due to the unavailability of smart meters, the electricity price information is usually open to the public through other sources (e.g., their retailer’s website) and customers of this type might take advantage of this.
Considering customer segmentation and differences in pricing has been extensively studied in retail sectors such
as broadband and mobile phone industry [14], but has not received much attention in
the energy and power system sector, mainly due to flat price regulation in the electricity retail market and the lack of demand flexibility among customers. Nonetheless, with the liberalization of the retail market and the development of smart grids and demand-side management, this situation has started to change recently. For instance, the importance of considering customer segments has been demonstrated in some recent studies [13] [7]. More
specifically, the benefit of capturing different customer groups with distinctive energy
consumption patterns to the electricity tariff design was firstly described in [13]. Further, [7] proposed an approach to group customers into different clusters (types) based
on their demand elasticity with the aim to design effective demand response programs.
When dealing with a customer pool consisting of different types of customers, the
aggregated demand behavior is a combination of behaviors from distinctive energy user
groups (e.g. C-HEMS, C-SM, and C-NONE considered in our paper) and will be very
complicated. For instance, there will be many energy-switching behaviors from C-HEMS and C-SM, who are very sensitive to price changes (i.e., the demand would be an implicit, discontinuous function of prices), whereas the demand of C-NONE is much less
sensitive to price signals. Therefore, it is very difficult for existing demand modelling
approaches which are mainly developed for a single type of users to handle the complicated demand behaviors of such a customer pool. To this end, in this paper we propose
a hybrid demand modelling approach by considering the behavior differences among
customers explicitly (i.e. customers of similar behavior patterns are categorized in the
same group). Our proposed approach, which captures more detailed utility information
of a mixed customer pool, can better reflect the demand behaviours in reality, and thus
provide more accurate and well-behavioural demand models for the retailer to make
right pricing decisions.
In terms of bilevel optimization, there are many existing solution methods such as
the commonly used single level reduction method which converts the bilevel problem
into a single level problem by replacing the lower level optimization problem with its
Karush–Kuhn–Tucker (KKT) conditions [24, 38]. However, for bilevel problems that
have non-convex and discontinuous lower level problems such as our considered hybrid
optimization + machine learning problem with mixed (both integer and continuous)
decision variables, the above conventional bilevel methods are infeasible [38] due to the
unavailability of derivative information and the non-convexity of lower level problems.
In such cases, metaheuristic optimization methods such as genetic algorithms, which
are easy to implement and do not require derivative information, are often employed
and have been widely used in energy system modelling studies [6, 25, 34, 41]. In
addition, the intrinsic parallelism of genetic algorithms could be exploited in our future
investigations e.g. by developing parallel genetic algorithms based solutions to take
advantage of distributed and parallel computing facilities. To this end, in this paper we
propose a GA-based two-level solution framework to solve the problem in a distributed
and coordinated manner.
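As an illustration of the two-level idea (not the paper's actual algorithm), the sketch below evolves a 24-hour price vector with a simple elitist evolutionary loop at the upper level, while a toy lower level responds to each candidate price vector through a hypothetical linear-elasticity demand function. The wholesale costs, price bounds, and demand model are all invented for the example; elitism guarantees the best fitness never degrades across generations.

```python
import random

random.seed(0)
H = 24
cost = [0.05 + 0.03 * abs(12 - h) / 12 for h in range(H)]   # hypothetical wholesale costs
P_MIN, P_MAX = 0.06, 0.25                                    # hypothetical price bounds

def demand(price):
    # Toy lower-level response: baseline demand with linear price elasticity
    return [max(0.0, 1.0 - 2.0 * (p - 0.1)) for p in price]

def profit(price):
    # Upper-level (retailer) objective on the induced demand
    d = demand(price)
    return sum((p - c) * x for p, c, x in zip(price, cost, d))

def mutate(price):
    # Perturb one hour's price, clipped to the feasible range
    child = price[:]
    h = random.randrange(H)
    child[h] = min(P_MAX, max(P_MIN, child[h] + random.uniform(-0.02, 0.02)))
    return child

pop = [[random.uniform(P_MIN, P_MAX) for _ in range(H)] for _ in range(20)]
history = []
for gen in range(50):
    pop.sort(key=profit, reverse=True)
    history.append(profit(pop[0]))
    elite = pop[:5]                                  # elitism: best candidates survive
    pop = elite + [mutate(random.choice(elite)) for _ in range(15)]

best = max(pop, key=profit)
assert all(P_MIN <= p <= P_MAX for p in best)
assert history[-1] >= history[0] - 1e-12             # elitism: no fitness regression
```

A full GA would add crossover and, as in the paper, replace the toy `demand` with the actual HEMS optimizations and learned consumption models evaluated in a distributed fashion.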
The main objective of this paper is to support the retailer in making the best dynamic
pricing decisions in such a mixed customer pool which takes all different types of customers (i.e., C-HEMS, C-SM, and C-NONE) into account at the same time. To the
best of our knowledge, this is the first paper to tackle such a realistic and meaningful demand response problem by considering potential responses of different types of
customers. The main contributions of this paper can be summarized as follows:
• We propose an integrated optimization + learning approach to optimal dynamic pricing for the retailer with a mixed customer pool, by considering the potential responses of different types of customers.
• A genetic algorithm (GA) based two-level distributed pricing optimization framework is proposed for the retailer to determine optimal electricity prices.
The rest of this paper is organized as follows. The system model framework is
presented in Section 2. An optimal home energy management system for C-HEMS is
given in Section 3 while two appliance-level learning models for C-SM are presented
in Section 4. In Section 5, an aggregated demand model for C-NONE is presented. A
pricing optimization model for the retailer is provided in Section 6 and GA based twolevel distributed pricing optimization algorithms are presented in Section 7. Numerical
results are presented in Section 8. This paper is concluded in Section 9.
Figure 1: Two-level pricing optimization framework with the retailer and its customers.
2. System Model
In this paper, we consider a practical situation where a retailer serves three different
groups of customers (i.e., C-HEMS, C-SM, and C-NONE). The numbers of customers in the above three groups are denoted as N1 , N2 , and N3 , respectively, with the total number
of customers denoted as N = N1 + N2 + N3 . The retailer procures electricity from
the wholesale market, and then determines optimal retail dynamic prices based on the
potential responses (when and how much energy is consumed) of customers,
which can be cast as a two-level decision making framework. The above interactions
between the retailer and its customers are further depicted in Figure 1.
As aforementioned, at the customer side, for C-HEMS, the installed HEMS is optimization software aiming, for example, to minimize customers’ bills or maximize their comfort. With the help of the two-way communication infrastructure, the retailer is
able to know these customers’ energy consumption responses to dynamic price signals
by interacting with the installed HEMS. As a result, for illustration purposes, we formulate the energy management optimisation problem for C-HEMS by modifying [28]. In
[28], we only consider shiftable and curtailable appliances where the scheduling problem of shiftable appliances is formulated as a linear program, and that of curtailable
appliances is represented by a predefined linear demand function. In contrast, in this paper we consider three types of appliances (i.e., interruptible, non-interruptible and curtailable appliances), with a more detailed problem formulation in which the consumption scheduling problems of interruptible and non-interruptible appliances are formulated as integer programming problems, while that of curtailable appliances is formulated as a linear program. Nevertheless, other existing HEMS methods in the literature, such as [32][48], should work equally well in this context.
For C-SM, such customers cannot always find the best energy usage schedule without the help of HEMS. However, with the help of non-intrusive appliance load monitoring (NILM) techniques [17], hourly or even minutely appliance-level energy consumption data of each appliance can be disaggregated from household-level smart meter data with high accuracy. As a result, appliance-level energy consumption patterns of each C-SM can be identified from historical price and consumption data using machine learning algorithms. To this end, we modify [29] to identify appliance-level energy consumption patterns for each C-SM. More specifically, in this paper we have removed the price elasticity of demand constraints considered in [29] from the learning model of curtailable appliances to simplify the model implementation, as such constraints seem to make no difference in this particular study.
For C-NONE, these customers usually manifest a relatively low demand elasticity due to their lack of direct access to real-time price signals. On the other hand, the retailer is unable to know the accurate energy consumption information of each individual C-NONE. As a result, to identify the electricity consumption patterns of the pool of C-NONE, an aggregated demand model is needed. To this end, we adopt the approach proposed in [26] to identify the aggregated energy consumption patterns of the whole C-NONE group. Different from [26], in this paper we add a detailed analysis of the adopted aggregated demand model, covering its capability of ensuring basic market behavior and of enabling the market operator or retailers to see the cross effect of usage switching.
At the retailer side, with the demand modelling for the different types of customers established, the pricing optimization problem for the retailer is formulated so as to maximize its profit. Such a two-level interaction model, with hybrid optimization (such as integer programming for C-HEMS) and machine learning problems (such as probabilistic Bayesian learning) at the customer side and a quadratic programming problem at the retailer side, is non-convex and discontinuous. To this end, we propose a GA based solution framework to solve the retailer-side and customer-side problems in a distributed and coordinated manner.
It is worth mentioning that we mainly focus on the pricing optimization problem from the perspective of the retailer in this paper. As a result, the benefits of customers are not discussed in depth in this specific study, and readers are referred to our previously published works [28, 29] [26] for more information. Furthermore, it should be noted that the obtained optimal dynamic prices will be applied to all types of customers. A possible extension to the present work is to introduce a differential pricing framework allowing the retailer to offer individualized prices to each type of customer. However, determining the 'right' pricing strategies for different types of customers requires further research and substantial numerical experiments, which is part of our future work.
3. HEMS Optimization Model for C-HEMS
In this section, we provide the mathematical representation of the optimization
model for customers with HEMS. We define N1 = {1, 2, ..., N1} as the set of such
consumers.
For each customer n ∈ N1 , we denote the set of all the considered appliances as An ,
interruptible appliances (e.g., electric vehicles) as In , non-interruptible appliances (e.g.,
washing machine) as NIn , and curtailable appliances (e.g., air conditioners) as Cn . As
a result, we have A_n = I_n ∪ NI_n ∪ C_n. Since both interruptible and non-interruptible appliances can be regarded as shiftable appliances, S_n is used to represent their union, i.e., S_n = I_n ∪ NI_n.
We define H = {1, 2, ..., H}, where H is the scheduling horizon for appliance operations. Further, let p^h denote the electricity price announced by the retailer at h ∈ H. For each appliance a ∈ A_n, a scheduling vector of energy consumption over H is denoted as x_{n,a} = [x^1_{n,a}, ..., x^h_{n,a}, ..., x^H_{n,a}], where x^h_{n,a} ≥ 0 represents the n-th customer's energy consumption of appliance a at time h.
3.1. Interruptible Appliances
For each interruptible appliance a ∈ I_n, the scheduling window can be set by each customer according to his/her preference and is defined as H_{n,a} ≜ {α_{n,a}, α_{n,a} + 1, ..., β_{n,a}}. Note that the operations of these appliances can be discrete, i.e., it is possible to charge the EV for one hour, stop charging for one or several hours, and then complete the charging after that. It is further assumed that the appliance consumes a constant amount of energy (denoted as x^rated_{n,a}) for each running time period.
Finally, the payment minimization problem of each interruptible appliance is modelled as follows, which can be solved using existing integer linear programming solvers.
min J_{I_n}(a) = min_{x^h_{n,a}} Σ_{h=α_{n,a}}^{β_{n,a}} p^h × x^h_{n,a}    (1)

s.t.  Σ_{h=α_{n,a}}^{β_{n,a}} x^h_{n,a} = E_{n,a},    (2)

x^h_{n,a} ∈ {0, x^rated_{n,a}}, ∀h ∈ H_{n,a}.    (3)
Constraint (2) represents that, for each appliance a, the total energy consumption needed to accomplish its operations within the scheduling window is fixed; this quantity can be found from the technical specification of the appliance and is denoted as E_{n,a}. Constraint (3) represents that appliance a consumes a constant amount of energy, x^rated_{n,a}, for each running time period. Note that the actual running time periods of an appliance a can be derived from the vector of optimal energy consumptions: the appliance is 'on' at hour h if the corresponding optimal energy consumption is x^rated_{n,a}, and 'off' otherwise.
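Because the appliance draws the fixed amount x^rated_{n,a} whenever it is on, problem (1)-(3) can in fact be solved without a generic integer programming solver: the bill is minimized by running the appliance in the E_{n,a}/x^rated_{n,a} cheapest hours of its window. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
def schedule_interruptible(prices, window, e_total, x_rated):
    """Greedy solution of (1)-(3): run in the cheapest hours of the window.

    prices  -- dict hour -> price p^h
    window  -- list of hours in H_{n,a}
    e_total -- total energy E_{n,a}; assumed to be a multiple of x_rated
    """
    n_on = round(e_total / x_rated)              # number of 'on' periods
    cheapest = sorted(window, key=lambda h: prices[h])[:n_on]
    return {h: (x_rated if h in cheapest else 0.0) for h in window}

# EV-style example: charge 2 kWh at 1 kWh per running hour inside 7PM-12AM.
prices = {19: 0.12, 20: 0.10, 21: 0.08, 22: 0.05, 23: 0.05}
plan = schedule_interruptible(prices, [19, 20, 21, 22, 23],
                              e_total=2.0, x_rated=1.0)
# The two cheapest hours, 22 and 23, are selected.
```

The greedy choice is optimal here because the objective (1) is separable over hours and every 'on' hour contributes the same energy.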
3.2. Non-interruptible Appliances
For each non-interruptible appliance a ∈ NI_n, the scheduling window is defined as H_{n,a} ≜ {α_{n,a}, α_{n,a} + 1, ..., β_{n,a}}. Different from interruptible appliances, the operation of each non-interruptible appliance must be continuous, i.e., once the appliance starts, it must operate continuously until it completes its task. Further, the appliance is assumed to consume a constant amount of energy (denoted as x^rated_{n,a}) for each running time period. The length of operation of each non-interruptible appliance is denoted as L_{n,a}.
Finally, the bill minimization problem of each non-interruptible appliance is modelled as follows, which can be solved using existing integer linear programming solvers.
min J_{NI_n}(a) = min_{δ^h_{n,a}} Σ_{h=α_{n,a}}^{β_{n,a}} p^h × x^rated_{n,a} × δ^h_{n,a}    (4)

s.t.  Σ_{h=α_{n,a}}^{β_{n,a}} δ^h_{n,a} = L_{n,a},    (5)

δ^h_{n,a} ∈ {0, 1}, ∀h ∈ H_{n,a},    (6)

δ^{α_{n,a}−1}_{n,a} = 0,    (7)

δ^h_{n,a} − δ^{h−1}_{n,a} ≤ δ^τ_{n,a}, τ = h, ..., h + L_{n,a} − 1; ∀h ∈ {α_{n,a}, ..., β_{n,a} − L_{n,a} + 1}.    (8)
Constraint (5) represents that the total time required to accomplish the operations
of each non-interruptible appliance is predetermined and denoted as Ln,a . (6) indicates
that the decision variable is a 0/1 variable representing the on/off operations of the
appliance and (7) illustrates that the initial state of the appliance is ‘off’. (8) is adopted
to guarantee the continuous operations of the appliance.
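Since a non-interruptible appliance runs at constant power for a contiguous block of L_{n,a} hours, problem (4)-(8) also admits a direct solution: enumerate all feasible start times and keep the cheapest contiguous window. A minimal sketch (names are illustrative):

```python
def schedule_noninterruptible(prices, alpha, beta, length, x_rated):
    """Solve (4)-(8) by scanning all contiguous runs of `length` hours."""
    best_start = min(
        range(alpha, beta - length + 2),          # all feasible start hours
        key=lambda s: sum(prices[h] for h in range(s, s + length)),
    )
    return {h: (x_rated if best_start <= h < best_start + length else 0.0)
            for h in range(alpha, beta + 1)}

# Washing-machine-style example: a 2-hour task in an 8AM-9PM window.
prices = [0.10] * 24
prices[13] = prices[14] = 0.04                    # cheapest pair of hours
plan = schedule_noninterruptible(prices, alpha=8, beta=21,
                                 length=2, x_rated=1.0)
# The run is placed at hours 13-14.
```

Enumeration is exact here because constraints (5)-(8) leave only the start hour as a free decision.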
3.3. Curtailable Appliances
For each curtailable appliance a ∈ C_n, we define the scheduling window H_{n,a} ≜ {α_{n,a}, α_{n,a} + 1, ..., β_{n,a}}. The key characteristics which make curtailable appliances fundamentally different from interruptible and non-interruptible appliances are: 1) their energy demand cannot be postponed or shifted; 2) but their energy consumption level can be adjusted. In view of this, we define the energy consumption at time slot h of each curtailable appliance as x^h_{n,a}. The minimum acceptable and maximum affordable consumption levels, which can be set in advance according to each individual customer's preferences, are defined as u̲^h_{n,a} and ū^h_{n,a} respectively.
Finally, the optimization problem of each curtailable appliance is proposed for each
customer to minimize his/her payment bill subject to an acceptable total energy consumption, which can be solved via existing linear programming solvers.
min J_{C_n}(a) = min_{x^h_{n,a}} Σ_{h=α_{n,a}}^{β_{n,a}} p^h × x^h_{n,a}    (9)

s.t.  u̲^h_{n,a} ≤ x^h_{n,a} ≤ ū^h_{n,a},    (10)

Σ_{h=α_{n,a}}^{β_{n,a}} x^h_{n,a} ≥ U^min_{n,a}.    (11)
Constraint (10) enforces that the energy consumption at each time slot lies between the minimum acceptable consumption level u̲^h_{n,a} and the maximum affordable consumption level ū^h_{n,a}. Constraint (11) indicates that, for each curtailable appliance, there is a minimum acceptable total energy consumption over the whole operation period that must be satisfied.
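The linear program (9)-(11) has a simple greedy optimum: start every hour at its lower bound, then raise consumption at the cheapest hours until the total in (11) is met. A minimal sketch (names are illustrative):

```python
def schedule_curtailable(prices, window, u_lo, u_hi, u_min_total):
    """Greedy optimum of (9)-(11); u_lo/u_hi are dicts hour -> bound."""
    x = {h: u_lo[h] for h in window}              # satisfy (10) at minimum
    need = u_min_total - sum(x.values())          # shortfall w.r.t. (11)
    for h in sorted(window, key=lambda h: prices[h]):
        if need <= 0:
            break
        add = min(u_hi[h] - u_lo[h], need)        # cheapest feasible top-up
        x[h] += add
        need -= add
    return x

# Air-conditioner-style toy example over three hours.
prices = {12: 0.10, 13: 0.05, 14: 0.20}
plan = schedule_curtailable(prices, [12, 13, 14],
                            u_lo={h: 1.0 for h in (12, 13, 14)},
                            u_hi={h: 2.0 for h in (12, 13, 14)},
                            u_min_total=4.0)
# The extra kWh is bought at the cheapest hour: {12: 1.0, 13: 2.0, 14: 1.0}.
```

The greedy rule is optimal because any feasible plan can be improved by moving consumption from a dearer hour to a cheaper one that still has headroom.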
4. Appliance-level Learning Models for C-SM
For each customer n ∈ N2 , we define the set of shiftable appliances S n and curtailable appliances Cn .
For notation simplicity, we use s to denote each shiftable appliance and c for each
curtailable appliance. Further, subscript n is omitted in the rest of this section.
4.1. Shiftable Appliances
Denote the scheduling window for each shiftable appliance s as H_s ≜ {a_s, ..., b_s}, where a_s is the earliest possible time to switch on appliance s and b_s is the latest possible time to switch it off. Let T_s = b_s − a_s + 1 denote the length of the scheduling window for s. Assume the available historical smart meter data for appliance s are electricity consumption scheduling vectors x_s(d) = [x^{a_s}_s(d), x^{a_s+1}_s(d), ..., x^{b_s}_s(d)] (d = 1, 2, ..., D), where x^h_s(d) (h = a_s, a_s + 1, ..., b_s) represents the electricity consumption during time slot h by appliance s on day d. Suppose each shiftable appliance runs at a constant power rate, and the total running time and electricity taken for appliance s to accomplish its operations are denoted as L_s and E_s respectively.
Based on the above historical data of a given customer showing when appliance
s has been used and the corresponding dynamic prices, the basic idea behind this
appliance-level learning model is to calculate the probabilities that appliance s was
used at the cheapest, second cheapest,..., or most expensive price. The above insights
can be represented as (12).
P^s_i(d) = f_i(d) / d,  d = 1, 2, ..., D    (12)
where f_i(d) represents the number of days on which appliance s is used i-th cheapest within the past d days. Note that the superscript s on the right-hand side of the equation has been omitted for notational simplicity. Let the current day be d; then f_i(d) and P^s_i(d) can be derived from the historical data up to day d and from (12) respectively.
Further, let δ_i(d + 1) (normally taking the value 1 or 0) represent the probability that appliance s is used i-th cheapest on day d + 1; it then becomes a new piece of information used to obtain P^s_i(d + 1). As a result, (12) can be rewritten in a recursive way as follows:
P^s_i(d + 1) = f_i(d + 1) / (d + 1) = P^s_i(d) + (1 / (d + 1)) [δ_i(d + 1) − P^s_i(d)].    (13)
The above recursive formula shows that when a new piece of information δi (d + 1)
is received, the updated probability, Pis (d + 1), is equal to the existing probability Pis (d)
plus an adjusting term. The adjusting term includes the adjusting coefficient 1/(d + 1)
and the prediction error term [δ_i(d + 1) − P^s_i(d)]. Recall that δ_i(d) only takes the value 1 or 0 in (13), which implicitly assumes that the costs (sums of hourly electricity prices) of the possible operation schedules for appliance s all differ from one another. However, under some circumstances many hourly prices are the same within a day, and possibly for many days, which can result in two or more operation schedules having the same cost, so that there is more than one i-th cheapest operation schedule. To overcome such uncertainties in the price signals, a systematic framework for obtaining δ_i(d + 1) under different cases is given below.
Firstly, suppose that there are in total k possible operation schedules for appliance
s, and that the cost of each schedule (i.e., the sum of its hourly prices) on day d + 1 is c_j(d + 1), where j = 1, ..., k is a unique index for each schedule. Secondly, the c_j(d + 1) are sorted in ascending order. For cases where two or more schedules have the same cost, these costs are ordered as they appear in c_j(d + 1) (j = 1, ..., k). As a result, the cost of the m-th cheapest schedule can be denoted as r_m(d + 1) (m = 1, ..., k).
Finally, when electricity prices and the usage data of each appliance are received at
the end of day d + 1, δi (d + 1) is calculated based on the following three cases.
• Case 1. s is not operated as the i-th cheapest schedule on day d + 1; then δ_i(d + 1) = 0.

• Case 2. s is operated using the i-th cheapest schedule on day d + 1 with cost r_i(d + 1) satisfying r_i(d + 1) ≠ r_m(d + 1) (∀m, m ≠ i); then δ_i(d + 1) = 1.

• Case 3. s is operated using the i-th cheapest schedule on day d + 1 with cost r_i(d + 1), but there are k′ additional operation schedules with r_{m_l}(d + 1) = r_i(d + 1) (l = 1, ..., k′); then

δ_i(d + 1) = P^s_i(d) / (P^s_i(d) + Σ_{l=1}^{k′} P^s_{m_l}(d)).
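The recursive update (13) together with the three cases above can be sketched as follows. Applying the Case 3 ratio to the tied indices as well as the observed one is an assumption of this sketch, and all names are illustrative:

```python
def update_probs(P, d, i_used, tied=()):
    """One day's update of the probabilities P^s_i via (13).

    P      -- list of current probabilities P^s_i(d)
    d      -- current day index, so the new observation has weight 1/(d + 1)
    i_used -- index of the schedule actually observed on day d + 1
    tied   -- indices whose schedule cost ties with the used one (Case 3)
    """
    pool = P[i_used] + sum(P[m] for m in tied)
    new_P = []
    for i, p in enumerate(P):
        if i == i_used or i in tied:              # Cases 2 and 3
            delta = p / pool if pool > 0 else 1.0 / (1 + len(tied))
        else:                                     # Case 1
            delta = 0.0
        new_P.append(p + (delta - p) / (d + 1))   # recursive update (13)
    return new_P

P = update_probs([0.5, 0.3, 0.2], d=2, i_used=0)  # no ties: deltas = (1, 0, 0)
# P remains a probability distribution and P[0] has increased.
```

Because the deltas always sum to one, the updated vector stays a valid probability distribution.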
Based on the above probabilistic usage model Pis , the expected bill of each shiftable
appliance s on a given day can be calculated from the perspective of retailer as follows.
B_s = Σ_i PC^s_i × (E_s / L_s) × P^s_i    (14)

where PC^s_i denotes the cost of the i-th cheapest schedule PT^s_i on that given day.
Furthermore, the expected hourly energy consumption of each shiftable appliance s can be calculated as follows:

y_{s,h} = Σ_i (E_s / L_s) × P^s_i × I^h_{i,s}    (15)

where I^h_{i,s} is defined as I^h_{i,s} = 1 if h ∈ PT^s_i, and I^h_{i,s} = 0 if h ∉ PT^s_i.
4.2. Curtailable Appliances
The scheduling window of appliance c ∈ C_n is defined as H_c ≜ {a_c, ..., b_c}. Let (y_{c,h}(d), p̄(d)), d = 1, ..., D, be the available historical input-output data of an unknown demand function, where the input data p̄(d) = [p^{a_c}(d), ..., p^{b_c}(d)] represent the price signals during the scheduling window H_c on day d, and the output data y_{c,h}(d) represent the energy consumption of appliance c at time slot h on day d.
We use a linear demand function to model how a customer responds to dynamic
price signals when using curtailable appliances, which is formulated as follows.
ŷ_{c,h} = α_{c,h,0} + β_{c,h,a_c} p^{a_c} + ... + β_{c,h,b_c} p^{b_c}    (16)

where ŷ_{c,h} represents the expected demand of appliance c at time h, p^h is the electricity price at time slot h, and α_{c,h,0}, β_{c,h,a_c}, ..., β_{c,h,b_c} are the parameters to be identified.
In this paper, least squares is adopted to estimate the model parameters β = [α_{c,h,0}, β_{c,h,a_c}, ..., β_{c,h,b_c}], where the best estimates β̂ are obtained by solving (17):

β̂ = argmin_β Σ_{d=1}^{D} (y_{c,h}(d) − α_{c,h,0} − β_{c,h,a_c} p^{a_c}(d) − ... − β_{c,h,b_c} p^{b_c}(d))².    (17)
Finally, the expected hourly energy consumption of appliance c for a given day, denoted as y_{c,h}, can be predicted based on the above established demand model. Further, the expected daily bill of c on that day can be represented as B_c = Σ_{h∈H_c} p^h × y_{c,h}.
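The least-squares fit (17) is an ordinary linear regression, which can be sketched with NumPy on synthetic data (the data and coefficient values below are illustrative, not from the paper's trials):

```python
import numpy as np

rng = np.random.default_rng(0)
D, W = 60, 4                               # days of history, window length
prices = rng.uniform(6.0, 14.0, size=(D, W))        # p^{a_c}(d)..p^{b_c}(d)
true_beta = np.array([5.0, -0.10, -0.02, -0.01, -0.02])  # [alpha, betas]

X = np.hstack([np.ones((D, 1)), prices])   # regression matrix for (16)
y = X @ true_beta + rng.normal(0.0, 0.01, size=D)   # noisy demand y_{c,h}(d)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)    # solves (17)
y_pred = X @ beta_hat                      # expected hourly demand, as in (16)
```

With enough history days relative to the window length, the recovered coefficients approach the generating ones.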
5. Aggregated Demand Modelling for C-NONE
In this section, an aggregated demand model is proposed to identify the demand
patterns of C-NONE.
Suppose the electricity prices and aggregated consumption data for all C-NONE over the last D days are available. We define the electricity price vector on day d ∈ D ≜ {1, ..., D} as p(d) = [p_1(d), ..., p_h(d), ..., p_H(d)], where p_h(d) represents the price at hour h ∈ H on day d. Furthermore, we define the aggregated electricity consumption vector on day d ∈ D as y(d) = [y_1(d), ..., y_h(d), ..., y_H(d)], where y_h(d) represents the aggregated consumption by all C-NONE at hour h.
It is believed that the aggregated electricity demand of C-NONE at hour h not only
depends on the price at h but also on prices at other hours due to the cross effect of
usage switching [22] [26]. Therefore the aggregated demand model at hour h can be
expressed as follows:
y_h = R_h(p_1, p_2, ..., p_H).    (18)
As the mathematical form of Rh (·) is usually unknown, we need to find an estimated
demand function R̂h (p1 , p2 , ..., pH ) that is as close to Rh (p1 , p2 , ..., pH ) as possible. For
this purpose, we use a linear demand function to represent R̂h (p1 , p2 , ..., pH ) as follows.
R̂_h(p_1, p_2, ..., p_H) = α_h + β_{h,1} p_1 + · · · + β_{h,h} p_h + · · · + β_{h,H} p_H,    (19)
where βh,h is called the self or direct price elasticity of demand, which measures the
responsiveness of electricity demand at hour h to changes in the electricity price at h.
When the price at hour h increases but prices at other times remain unchanged, the
demand at hour h typically decreases, and thus the self-elasticity is usually negative
[22] (see (20)). β_{h,l} (h ≠ l) is the cross-price elasticity, which measures the responsiveness of the demand for electricity at hour h to changes in the price at some other hour l ≠ h. When the price of electricity at hour l increases but prices at other times remain unchanged, some demand at hour l will typically be shifted to hour h, and therefore the demand at hour h increases. Thus, cross elasticities are usually positive [22] (see (21)).
Furthermore, we consider an important necessary and sufficient condition (see (22))
for the electricity to be a demand consistent retail product to ensure that the proposed
demand model follows a normal market behavior. That is, when the overall market
price is decreased, the overall market demand should increase or remain unchanged.
β_{h,h} < 0.    (20)

β_{h,l} > 0 if h ≠ l.    (21)

β_{h,h} + Σ_{l∈H, l≠h} β_{l,h} ≤ 0.    (22)
It should be emphasized that the important reasons behind using linear demand
models are: 1) As the prices of electricity are normally changing slowly with time, at a
given time, we only need to model the demand around a small price interval or locally.
Since any non-linear behavior can be well approximated by a linear model locally, this is the main reason that linear demand models are widely used in this research area [26] [4] and selected in this work; 2) conditions (20)-(22) ensure the basic market behavior that demand goes down when prices go up, and vice versa, during the pricing optimization. Nonlinear demand models often fail to maintain this basic market rule and can produce situations in which higher market prices lead to higher usage. As a result, using such nonlinear demand models in the pricing optimization could result in incorrect pricing for the retailer; 3) it enables the market operator or retailers to see the cross effect of usage switching, e.g., that customers usually shift their usage only to nearby hours and rarely to far-away hours.
Finally, the demand model parameters βh,l , l = 1, ..., H can be identified by solving
the following optimization problem.
min Σ_{d=1}^{D} Σ_{h=1}^{H} λ^{(D−d)} (α_h + β_{h,1} p_1(d) + · · · + β_{h,H} p_H(d) − y_h(d))²    (23)

subject to (20), (21), and (22),
where 0 ≤ λ ≤ 1 is a forgetting factor which exponentially discounts the influence of old data, so that the model will track behavioral changes of customers over time. (23) is a quadratic programming problem and can be solved using existing solvers.
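Setting the sign constraints (20)-(22) aside for a moment, the forgetting-factor objective in (23) is a weighted least-squares problem that can be solved in closed form; with the constraints included, the same objective would be handed to a QP solver instead. A minimal sketch on noise-free synthetic data (all names and values are illustrative):

```python
import numpy as np

def fit_aggregated_demand(P, Y, lam=0.95):
    """Unconstrained version of (23).

    P, Y -- (D, H) arrays of daily price and demand vectors.
    Returns (alpha, B) with Y[d] ~= alpha + B @ P[d], where B[h, l]
    plays the role of beta_{h,l}.
    """
    D, _ = P.shape
    w = np.sqrt(lam ** (D - np.arange(1, D + 1)))   # sqrt of lambda^(D-d)
    X = np.hstack([np.ones((D, 1)), P]) * w[:, None]
    coef, *_ = np.linalg.lstsq(X, Y * w[:, None], rcond=None)
    return coef[0], coef[1:].T

# Synthetic check: negative self-elasticities, positive cross-elasticities.
rng = np.random.default_rng(1)
alpha_true = np.array([10.0, 12.0, 11.0])
B_true = np.array([[-0.5, 0.1, 0.1], [0.1, -0.4, 0.1], [0.1, 0.1, -0.6]])
P = rng.uniform(6.0, 14.0, size=(50, 3))
Y = alpha_true + P @ B_true.T
alpha_hat, B_hat = fit_aggregated_demand(P, Y)
```

On noise-free data the fit recovers the generating coefficients exactly, which is a convenient sanity check before adding the constraints.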
6. Pricing Optimization for the Retailer
From this section onwards, we resume using the subscript n in all mathematical representations; it was omitted in Section 4.
We define a cost function Ch (Lh ) to represent the retailer’s cost of providing Lh
electricity to all customers at each hour h ∈ H. We make the same assumption as [31]
that the cost function Ch (Lh ) is convex increasing in Lh , which is designed as follows.
C_h(L_h) = a_h L_h² + b_h L_h + c_h    (24)
where ah > 0 and bh ≥ 0, ch ≥ 0 for each hour h ∈ H.
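As a quick illustration of (24), a convex quadratic supply cost with hypothetical coefficients has an increasing marginal cost, so serving extra load at an already loaded hour is progressively more expensive:

```python
def retailer_cost(load, a=0.01, b=0.05, c=0.0):
    """Quadratic supply cost C_h(L_h) of (24); coefficients are illustrative."""
    return a * load ** 2 + b * load + c

marginal_low = retailer_cost(11) - retailer_cost(10)   # cost of the 11th unit
marginal_high = retailer_cost(21) - retailer_cost(20)  # cost of the 21st unit
# Convexity: the marginal cost grows with the load level.
```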
We denote the minimum price (e.g., the wholesale price) that the retailer can offer as p^min_h and the maximum price (e.g., the retail price cap due to retail market competition and regulation) as p^max_h. As a result, we have:

p^min_h ≤ p^h ≤ p^max_h.    (25)
A maximum energy supply at each time slot, denoted as E^max_h, is imposed on the retailer to respect the power network capacity. Thus, we have

E_h = Σ_{n∈N1} Σ_{a∈A_n} x^h_{n,a} + Σ_{n∈N2} (Σ_{s∈S_n} y_{n,s,h} + Σ_{c∈C_n} y_{n,c,h}) + R̂_h(p_1, p_2, ..., p_H) ≤ E^max_h, ∀h ∈ H.    (26)
Due to the retail market regulation, we add a revenue constraint to ensure a sufficient number of low-price periods and thus to improve the acceptability of the retailer's pricing strategies. That is, there exists a total revenue cap, denoted as RE^max, for the retailer. Thus, we have the following constraint²:

RE = Σ_{h∈H} p^h × Σ_{n∈N1} Σ_{a∈A_n} x^h_{n,a} + Σ_{n∈N2} (Σ_{s∈S_n} B_{n,s} + Σ_{c∈C_n} B_{n,c}) + Σ_{h∈H} p^h × R̂_h(p_1, p_2, ..., p_H) ≤ RE^max.    (27)
Finally, the profit maximization problem for the retailer to optimize the electricity
prices for the next day is modelled as follows:
max_{p^h} { RE − Σ_{h∈H} C_h(E_h) }    (28)

subject to constraints (25), (26), and (27).
7. GA based Solution Algorithms to the above Two-level Model
7.1. Solution Framework
As the proposed two-level pricing optimization problem, consisting of the profit maximization problem for the retailer and the integrated optimization + learning based demand modelling problems for customers, is non-convex, non-differentiable and discontinuous, it is intractable for conventional non-linear optimization methods. As a result,
² It should be highlighted that such a revenue cap is necessary for the pricing optimization. Since electricity is a basic necessity of daily life and fundamentally less elastic, without such a revenue cap a retailer could lift its profit significantly by increasing its prices aggressively. However, such a pricing strategy would anger customers and could lead to political consequences [1]. For this reason, the revenue cap, which basically is a cap on the total customers' bill, is necessary to ensure a sensible pricing strategy for the retailer.
we adopt a genetic algorithm (GA) based solution method to solve the problems for the retailer and its customers in a distributed and coordinated manner.
In our proposed genetic algorithm, binary encoding and deterministic tournament selection without replacement are adopted. For the crossover and mutation operations, we employ uniform crossover and bit-flip mutation respectively. The constraints are handled by the approach proposed in [11]. Readers are referred to [28] for more details on our adopted GA.
Finally, the GA based distributed pricing optimization framework is given in Algorithm 1. Moreover, the optimal home energy management algorithm for C-HEMS is given in Algorithm 2, and the appliance-level learning algorithm for C-SM is presented in Algorithm 3. For C-NONE, the retailer can directly use the established demand models presented in Section 5 to forecast customers' consumption. It is worth mentioning that at the end of each day, the established demand models of C-SM and C-NONE will be updated based on the newly available price and usage data of that day. At the end, the optimal dynamic prices are found for the retailer.
Algorithm 1 GA based pricing optimization algorithm for (28), executed by the retailer
1: Population initialization: generate a population of PN chromosomes randomly; each chromosome represents a strategy (i.e., prices over H) of the retailer.
2: for i = 1 to PN do
3:   The retailer announces strategy i to customers.
4:   Receive the responsive demand from n ∈ N1 ∪ N2 (i.e., Algorithms 2 and 3). In addition, the responsive demands from C-NONE are estimated based on the aggregated demand model proposed in Section 5.
5:   Fitness evaluation and constraint handling [11] to satisfy constraints (25)-(27).
6: end for
7: A new generation is created using deterministic tournament selection without replacement, uniform crossover and bit-flip mutation.
8: Steps 2-7 are repeated until the stopping condition is reached, and the retailer announces the final prices to all customers.
Algorithm 2 HEMS executed by each smart meter for C-HEMS
1: Receive the price signals from the retailer via the smart meter.
2: The smart meter schedules energy consumption based on the prices by solving the optimization problems in Section 3.
3: The smart meter sends back only the aggregated hourly demand of the household to the retailer via the two-way communication infrastructure.
Algorithm 3 Appliance-level learning executed by each smart meter for C-SM
1: Receive the price signals from the retailer via the smart meter.
2: The smart meter calculates the expected hourly energy consumption and daily bill payment of each appliance based on the learning models proposed in Section 4.
3: The smart meter sends back the aggregated hourly demand and daily bill information of the household to the retailer via the two-way communication infrastructure.
7.2. Computational Aspects of the Model
The considered bilevel model has a hierarchical structure in which the retailer acts as the upper-level agent and customers act as decentralized lower-level agents (for C-NONE, the problem is solved directly at the retailer side). The proposed solution framework solves the problem in a distributed and coordinated manner: the retailer determines the prices first, and then the customers (C-HEMS and C-SM) simultaneously determine their consumption based on the price signal. Since each customer (C-HEMS and C-SM) is equipped with a smart meter which is assumed to have the computational capacity to solve its own consumption scheduling problem independently and simultaneously upon receiving the price signal, increasing the number of customers does not increase the total computation time significantly. Similar conclusions have been reported in [33], where a simulated annealing algorithm is adopted to solve an upper-level pricing optimization problem with independent customers. In addition, a solution procedure similar to ours, in which the upper-level problem is evaluated via metaheuristic optimization algorithms and the lower-level problems are tackled via standard solvers (e.g., integer programming and linear programming solvers), has recently been reported in [16] and proved to be effective in solving large scale bi-level problems.
Although promising, it should be noted that bilevel optimization problems are generally difficult to solve (e.g., for the simplest case where both upper and lower level
problems are linear programs, it is proved to be NP-hard [8]) and might face scalability issues in solving very large scale problems. One possible solution to overcome
the scalability issues is to find good initial solutions for the problem by utilizing and
learning from historical data using machine learning algorithms, which can therefore
greatly reduce the number of iterations needed. For instance, in our considered bilevel
problem, due to the daily pricing practice, there are many historical data of past daily
prices and customer responses (consumptions). By going through these data at the retailer side, some approximated optimal prices can be found, which can be chosen as
starting points for GA. As a result, the optimal prices for the next day are likely to be
found within a smaller number of iterations.
8. Simulation Results
In this section, we conduct simulations to evaluate the proposed pricing optimization model with different types of customers. Ideally, we would use observed data from relevant trials in the simulations; however, not all such data are available. For the data which are publicly available (e.g., the electricity price and demand data used for the aggregated demand modelling [3]), relevant links/references are cited in the text. For the data which are not publicly available, we either simulate the required data (e.g., the dynamic electricity price and consumption response data at the appliance level used for the customer behaviour learning model for C-SM) or provide the required data directly in the paper (e.g., the parameter settings of appliances for the C-HEMS models).
8.1. Simulation Set-up
We simulate a neighbourhood consisting of one retailer and three different types
of customers (i.e., C-HEMS, C-SM, C-NONE) where the total number of customers
is set to 100. It is assumed that each customer has 5 appliances: EV, dishwasher,
Table 1: Parameters for each interruptible appliance

Appliance Name | E_a     | H_a     | x^rated_{n,a}
Dishwasher     | 1.8 kWh | 8PM-7AM | 1 kWh
PHEV           | 10 kWh  | 7PM-7AM | 2.5 kWh

Table 2: Parameters for each non-interruptible appliance

Appliance Name  | E_a   | H_a     | x^rated_{n,a} | L_a
Washing machine | 2 kWh | 8AM-9PM | 1 kWh         | 2 hrs
Clothes Dryer   | 3 kWh | 8PM-6AM | 1.5 kWh       | 2 hrs

Table 3: Parameters for each curtailable appliance

Appliance Name  | U^min_a | H_a       | u̲^h_a | ū^h_a
Air-conditioner | 18 kWh  | 12PM-12AM | 1 kWh | 2 kWh
washing machine, clothes dryer and air conditioner. Further, a fixed amount (0.05 kWh) of background consumption at each hour is considered for each household. The scheduling window is set from 8AM to 8AM (the next day). It is worth mentioning that customers are assumed to be homogeneous due to hardware constraints, with the aim of simplifying the model implementation process. For details on how to implement a similar kind of model with heterogeneous customers, readers are referred to [33].
The parameter settings for the HEMS optimization models of C-HEMS are given in Tables 1, 2 and 3. Despite an extensive search, we have not found any publicly available real-world data for real-time prices and the consequent demand response at the individual appliance level, which are required for our simulation of C-SM. As a result, the historical usage data used in appliance-level customer behavior learning for shiftable appliances in C-SM are generated by tuning HEMS optimization models with waiting time costs [29], whereas the historical usage data for identifying the demand model of curtailable appliances are simulated based on Section 3.3. For C-NONE,
the aggregated demand model is learned from historical electricity price data and down-scaled energy consumption data (i.e., the daily consumption of each household is scaled down to the same amount as C-HEMS and C-SM) between 1 January 2012 and 21 December 2012 from ISO New England [3]. For the retailer, the minimum price and maximum price at each hour are set to 6.00 cents and 14.00 cents respectively. Further, the parameters of the GA are set as in Table 4. More specifically, the chromosome length Lg is determined by the value range (i.e., [6.00 cents, 14.00 cents] in our study) and the precision requirement (i.e., two digits after the decimal point) of each decision variable p^h, h = 1, 2, ..., 24. Since there are in total (14.00 − 6.00)/0.01 = 800 possible values for each variable p^h, at least 10 binary bits (2^9 < 800 and 2^10 > 800) are needed to satisfy the precision requirement of each variable. Therefore, the chromosome, which represents a vector of 24 decision variables, needs to be (at least) 240 binary bits long [28]. To set the mutation rate Pg, we start with the standard mutation rate setting (i.e., 1/Lg ≈ 0.0042 in our study) and finally find that a mutation rate of 0.005 works well for our problem. For the population size, which usually increases with the problem size (dimension), we start with the setting used in [11] (i.e., the population size set to 10 multiplied by the problem dimension, 240 in our study) and finally find that a population size of 300 works well for our problem. Considering the problem complexity, the termination generation is set to 300 to ensure sufficient evolution of the GA.

Table 4: Parameter settings of GA

Parameter Name         | Symbol | Value
Chromosome Length      | Lg     | 10 × 24
Population Size        | PN     | 300
Mutation Probability   | Pg     | 0.005
Termination Generation | Tg     | 300
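The binary price encoding described above can be sketched as a decoder mapping each 10-bit gene onto the range [6.00, 14.00] cents; the uniform mapping below is one reasonable choice, as the paper does not spell out the exact decoding:

```python
def decode_chromosome(bits, p_min=6.00, p_max=14.00, n_bits=10):
    """Map each n_bits-long gene to a price in [p_min, p_max] cents."""
    assert len(bits) % n_bits == 0
    prices = []
    for k in range(0, len(bits), n_bits):
        value = int("".join(map(str, bits[k:k + n_bits])), 2)  # 0..2^10 - 1
        p = p_min + (p_max - p_min) * value / (2 ** n_bits - 1)
        prices.append(round(p, 2))
    return prices

# An all-zero gene decodes to the price floor, an all-one gene to the cap;
# a full 240-bit chromosome yields 24 hourly prices.
prices = decode_chromosome([0] * 10 + [1] * 10)
# prices == [6.0, 14.0]
```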
8.2. Results and Analysis
The simulations are implemented in the Matlab R2016b software environment on a 64-bit Linux PC with an Intel Core i5-6500T CPU @ 2.50GHz and 16 GB RAM.
Table 5: Combinations of different types of customers for pricing optimization

Case study number | C-HEMS | C-SM | C-NONE
1  | 0   | 0   | 100
2  | 0   | 30  | 70
3  | 0   | 100 | 0
4  | 30  | 70  | 0
5  | 100 | 0   | 0
6  | 50  | 30  | 20
7  | 0   | 70  | 30
8  | 70  | 30  | 0
9  | 30  | 60  | 10
10 | 20  | 30  | 50
We implement the proposed pricing optimization under each case study listed in
Table 5 where the evolution processes of GA are shown in Figure 2. The revenue, cost
and profit of the retailer as well as the CPU computation time under different case studies are given in Table 6. Note that for case studies where C-HEMS and C-SM coexist
(C-NONE are directly handled at the retailer side), the energy consumption problems
of C-HEMS and C-SM under each retailer’s pricing strategy are solved sequentially
in this paper, which leads to a CPU time of around 2000 seconds. However, since in
real applications customers’ energy consumption problems are tackled simultaneously
by each smart meter in parallel, the total computation time is only constrained by the
customer energy consumption problem with the highest computation time, and will be
much shorter (i.e. around 1000 seconds for C-HEMS in this study). It is also worth
mentioning that our proposed optimal dynamic pricing for the retailer is updated every
24 hours, and therefore the solution method is sufficiently fast for this purpose.
In addition, it can be found from Table 6 that, if all customers are C-SM, the profit
of the retailer will be $120.12, compared with $92.12 under 100% C-NONE. Moreover,
the profit can be further increased to $124.05 if all customers become
C-HEMS. The above results show that, without increasing its revenue, the retailer can
Figure 2: The evolution process of GA with different penetration rates of smart meter and HEMS. (Fitness value, on the order of 10^4, versus GA generation number, 0-300, for Cases 1-10.)
Table 6: Comparison of revenue, cost and profit of the retailer as well as the CPU computation time under
different case studies

Case study number   Revenue ($)   Cost ($)   Profit ($)   CPU time (s)
1                   350           257.88     92.12        9
2                   350           249.25     100.75       838
3                   350           229.88     120.12       853
4                   350           228.89     121.11       2015
5                   350           225.95     124.05       1013
6                   350           233.47     116.53       1999
7                   350           238.14     111.86       838
8                   350           227.20     122.80       2013
9                   350           231.60     118.40       2010
10                  350           242.99     107.01       1997
actually gain more profit with the installation of smart meters and HEMS in households,
which indicates that the retailers are likely to have strong motivations to participate in
and promote the proposed pricing based demand response programs.
Furthermore, the optimized electricity prices under several selected case studies are
shown in Figure 3, where the corresponding energy consumption of customers is plotted in Figure 4. It can be found from Figures 3 and 4 that, compared with that of C-NONE,
the demands of C-HEMS and C-SM are more responsive to prices, and are effectively
shifted from peak-demand periods (higher prices) to off-peak demand periods (lower
prices). In the case where 3 different types of customers co-exist, the resulting prices
and demands exhibit a mixed and synthesized behavior. For instance, the demand of a
mixed pool of customers under Case 6 is still responsive to price signals but less sensitive than that of C-HEMS (Case 3). In other words, in a realistic scenario like Case 6,
our proposed pricing optimization model can generate profitable electricity prices for
the retailer and lead to a reasonable consumption behavior for customers.
Figure 3: Optimal prices for different case studies. (Electricity price in cents/kWh from 8AM to 5AM for Cases 1, 2, 3 and 6.)
8.3. Emerging Scenarios with PV Generation and Energy Storage
In the following, we consider the availability of PV generation for all types of
customers (i.e., C-HEMS, C-SM, C-NONE) and also battery energy storage for C-HEMS. The pricing optimization for the retailer under such emerging scenarios is
investigated in this subsection.
Figure 4: Electricity demand of customers under different case studies. (Electricity consumption in kWh from 8AM to 5AM for Cases 1, 2, 3 and 6.)
Suppose that a forecast of PV generation is known a priori at the beginning of
each day. It is further assumed that PV generation will first be used by customers,
whereas any surplus will be sold back to the grid at the same retail price at that time.
The PV generation data used in this simulation is adopted from [48]. Furthermore,
the revenue cap imposed on the retailer is reduced accordingly from $350 to $270 due
to the availability of PV generation and therefore the reduction of energy demand of
customers. In addition, we assume that C-HEMS use energy storage for price arbitrage
to minimize their energy cost. The energy storage model is adopted from [48] and
we consider perfect charging and discharging, i.e., the charging/discharging efficiency
factor is equal to 1. The battery capacity is considered as 10 kWh and the maximum
amount of energy to be charged/discharged in each time period is constrained to 2 kWh.
Furthermore, the initial and final state of charge (SoC) are set to 80% (i.e., 8 kWh in this
simulation). Note that a full battery energy storage model considering self-discharging
and degradation [42] is out of scope of this paper.
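The price-arbitrage behaviour described above can be sketched as a small linear program. This is an illustrative model only, not the paper's exact formulation: the load and price profiles below are hypothetical, while the storage parameters (10 kWh capacity, 2 kWh per hour limit, 8 kWh initial/final SoC, unit charging/discharging efficiency) follow the simulation settings.

```python
import numpy as np
from scipy.optimize import linprog

# Per hour h, choose charge c_h and discharge d_h (both >= 0) to minimise
# the energy bill price . (load + c - d), subject to the SoC staying in
# [0, cap] and returning to its initial level at the end of the day.
H = 24
rng = np.random.default_rng(1)
price = rng.uniform(0.06, 0.14, H)        # $/kWh, hypothetical profile
load = rng.uniform(0.2, 1.5, H)           # household appliance demand, kWh
cap, rate, soc0 = 10.0, 2.0, 8.0          # paper's storage parameters

c_obj = np.concatenate([price, -price])   # constant load cost dropped
L = np.tril(np.ones((H, H)))              # running sums give the SoC path
A_ub = np.vstack([
    np.hstack([L, -L]),                   # SoC never exceeds capacity
    np.hstack([-L, L]),                   # SoC never drops below zero
    np.hstack([-np.eye(H), np.eye(H)]),   # no selling back: d - c <= load
])
b_ub = np.concatenate([np.full(H, cap - soc0), np.full(H, soc0), load])
A_eq = np.hstack([np.ones(H), -np.ones(H)]).reshape(1, -1)  # final SoC = initial
res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[0.0],
              bounds=[(0.0, rate)] * (2 * H))
charge, discharge = res.x[:H], res.x[H:]
bill = float(price @ (load + charge - discharge))
baseline = float(price @ load)            # bill with no storage at all
```

Dropping the third block of inequality constraints yields the selling-back scenario, in which the net purchase load + c − d may go negative and the battery can cycle at its full rate, mirroring the behaviour discussed below.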
The impact of PV penetration on the retailer is investigated under case studies
shown in Table 5, where the simulation results can be found in Table 7. Compared
with those in Table 6, we can see that the retailer achieves similar profit levels (i.e.,
Table 7: Comparison of revenue, cost and profit of the retailer under different case studies with PV generation

Case study number   Revenue ($)   Cost ($)   Profit ($)
1                   270           198.64     71.36
3                   270           170.66     99.34
5                   270           167.29     102.71
6                   270           174.63     95.37
Table 8: Comparison of revenue, cost and profit of the retailer under C-HEMS with PV and energy storage

Scenario                                        Revenue ($)   Cost ($)   Profit ($)
C-HEMS with PV and storage (no selling back)    270           145.45     124.55
C-HEMS with PV and storage (selling back)       270           124.98     145.02
ratio of profit to revenue). The above finding indicates that the profit of the retailer is not
significantly influenced by the penetration of PV generation.
Additionally, we study the impact of battery energy storage on the retailer where
customers are C-HEMS. We consider two scenarios for C-HEMS (i.e., with/without the
capability of selling electricity back to the grid) where the profits of retailer are given
in Table 8. The above results reveal two-fold findings: 1) the retailer can improve its
profit with the energy storage penetration in households; 2) the retailer could gain even
more profit if customers (C-HEMS) have the capability of selling electricity back to the
grid. To be more specific, with battery energy storage, customers will have the demand
flexibility where some electricity can be bought at cheap price time periods beforehand and stored in the energy storage units for use in high price periods. As dynamic
prices reflect wholesale prices, the purchase cost of the retailer for such electricity in
the wholesale market will also be low. Without increasing its revenue, the retailer can
increase its profit. In addition, if customers can sell electricity back to the grid, the energy storage charging/discharging operations are no longer constrained by the amount
of electricity customers actually use (i.e. the energy storage units charge/discharge at
higher power rates), which gives customers more demand flexibility. Thus, more electricity can be bought at cheap price time periods beforehand, which on the other hand
leads to a lower wholesale electricity purchase cost for the retailer. As a result, without
increasing its revenue, the retailer can gain even more profit. Furthermore, the optimized prices and corresponding storage operations and appliance consumption profiles
in the above two scenarios are illustrated in Figures 5 and 6. It can be easily found out
that customers with the capability of selling electricity back often charge/discharge the
battery at the maximum rate in order to take full advantage of price arbitrage to minimize
their energy bills.
Figure 5: Storage operation and appliance consumption profile of one household (no selling back). (PV generation, storage charging/discharging, home appliance consumption, electricity supplied by the retailer, and optimized prices over the day.)
8.4. Solution Algorithms Comparison
In general, algorithms to solve bilevel problems can be approximately categorized
into three groups: iterative algorithms based on the definitions of bilevel/Stackelberg
equilibriums (e.g.,[27]), KKT based classical mathematical optimization algorithms
(i.e. KKT based single level reduction) and GA type of metaheuristic algorithms [38].
In this subsection, we conduct further simulations to compare our GA based solution
algorithm with the other two types of algorithms.
Firstly, we compare GA with a two-step iterative algorithm [27] on selected representative case studies listed in Table 5. Same as our GA based solution method,
the iterative algorithm also works in a distributed manner in which the retailer and
customers solve their optimization problems sequentially.
Figure 6: Storage operation and appliance consumption profile of one household (selling back).
The description of the algorithm
in the context of our problem setting is given as follows: (1) find an initial feasible price
vector for the retailer; (2) given a price vector announced by the retailer, each customer
solves their energy consumption problem and obtains optimal energy consumption in
response to the price vector; (3) the retailer receives energy consumptions of all customers and treats them as known parameters in the profit maximization model Eq. (28).
Therefore, Eq. (28) becomes a linear programming problem, from solving which the
retailer obtains a new price vector; (4) If the previous two price vectors satisfy the termination criterion (i.e. prices are close enough), the best solution is recorded and the
algorithm is terminated; otherwise, go to step (2).
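Steps (1)-(4) above can be condensed into a short fixed-point loop. The sketch below is a toy illustration, not the implementation from [27]: the linear demand rule and the re-pricing rule are hypothetical stand-ins for the customers' and the retailer's actual optimization problems.

```python
import numpy as np

def iterative_pricing(demand_response, reprice, p0, tol=1e-6, max_iter=100):
    """Two-step iteration: alternate customers' best-response demand (step 2)
    and the retailer's price update (step 3) until prices converge (step 4)."""
    p = np.asarray(p0, dtype=float)   # step 1: initial feasible price vector
    for _ in range(max_iter):
        d = demand_response(p)        # step 2: customers respond to prices
        p_new = reprice(d)            # step 3: retailer re-optimises prices
        if np.max(np.abs(p_new - p)) < tol:
            return p_new              # step 4: successive prices close enough
        p = p_new
    return p

# Toy example: demand d = 20 - p and a gentle price update p = 6 + 0.01 d,
# which contracts to the fixed point p* = 6.2 / 1.01.
prices = iterative_pricing(lambda p: 20.0 - p, lambda d: 6.0 + 0.01 * d, [10.0])
```

Because each step only follows the latest response, the loop is a local search; as discussed below, it can stall at poor fixed points when the lower-level problems are complex.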
The solutions obtained by GA and the iterative algorithm are reported in Table 9.
Note that the reported results of the iterative algorithm are based on the best initial conditions among several attempts for each case study. From the results, we can find that
GA outperforms the iterative algorithm in all cases. It is also observed that the iterative
algorithm performs worst in the case where customers are all C-HEMS (mathematical optimization models at the lower level) whereas it performs best in the case where
customers are all C-NONE (analytical electricity demand functions at the lower level).
The above can be explained by the fact that the iterative algorithm is a local search based
Table 9: Comparison of GA with the iterative algorithm

Case study number   GA Revenue ($)   GA Profit ($)   Iterative Revenue ($)   Iterative Profit ($)
1                   350              92.12           350                     89.98
3                   350              120.12          350                     80.05
5                   350              124.05          350                     73.48
6                   350              116.53          350                     80.50
heuristic and is more likely to be trapped in the local minima, especially for more complex problems. Instead, GA is a population based meta-heuristic algorithm with global
search characteristics and has more chances to find the global optimal/near-optimal
solutions.
Secondly, we are also interested in knowing how good the solutions achieved by our
proposed GA are compared with theoretical optimal or near-optimal solutions, which
are usually obtained by classical mathematical optimization methods such as the KKT
based single level reduction method. It should be noted that the KKT based method
solves the bilevel model in a centralized manner at the retailer’s side, which assumes that an optimistic bilevel model is adopted, i.e. customers are always expected
to make decisions that lead to the best possible profit of the retailer [38]. Clearly, the
above assumption is strong and therefore the best solutions obtained by the KKT based
centralized method (given that the real optimums are attained) are deemed to be better
than solutions obtained by distributed solution methods considering game behaviours
between the retailer and customers such as our GA based distributed method. Since
the KKT based method requires a well-defined convex and continuous lower level optimization problem, it cannot be directly used to solve our bilevel model. To make
the comparisons achievable, we modify our bilevel model such that only C-HEMS are
considered in the lower level problem. In addition, the integer linear programming
problems of interruptible and non-interruptible appliances of C-HEMS are modified to
linear programs. Finally, the modified bilevel model has a lower level with only linear
programming problems, and the KKT based method is implemented in Matlab with the
assistance of the YALMIP toolbox [23].
Table 10: Comparison of GA with KKT based single level reduction method

Test problem   Customer profiles (see footnote 3)   Number of lower level decision variables   GA Profit   KKT Profit   Optimality gap
1              1                                    63                                         124.05      124.05       0.0000%
2              1-2                                  131                                        96.76       96.80        0.0000%
3              1-3                                  221                                        127.47      127.53       0.0036%
4              1-4                                  296                                        135.98      135.96       0.0281%
5              1-5                                  351                                        141.62      141.58       0.0344%
6              1-6                                  460                                        134.19      133.74       0.4142%
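For reference, the single level reduction used here can be stated in its generic textbook form (not the paper's exact model): when the lower level problem min_x f(p, x) subject to g(p, x) ≤ 0 is convex and satisfies a constraint qualification, it can be replaced by its KKT conditions, turning the bilevel model into the single-level problem

```latex
\max_{p,\,x,\,\lambda}\; F(p,x)
\quad\text{s.t.}\quad
\nabla_x f(p,x) + \sum_i \lambda_i \nabla_x g_i(p,x) = 0,\qquad
g(p,x) \le 0,\qquad \lambda \ge 0,\qquad
\lambda_i\, g_i(p,x) = 0 \;\;\forall i.
```

The complementarity constraints λ_i g_i = 0 are nonconvex, which is why the reduced problem is typically tackled with big-M or integer reformulations, and why a well-defined convex and continuous lower level is required in the first place.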
In this particular simulation, we consider six different model instances (customer
profiles) of the relaxed C-HEMS model, which are named C-HEMS-1 to C-HEMS-6 respectively. Except for the first customer profile (C-HEMS-1), which is directly adapted from Tables 1, 2 and 3, the other five customer profiles are generated by
adding random numbers to the above settings. Finally, we compare GA with KKT
based method on six test problems as shown in Table 10. For all the test problems, GA
uses the same settings as Table 4. For the KKT based method, we set the relative optimality gap of the solvers to 0.01% and allow a large amount of computation time to achieve
the best solutions possible, providing a reliable benchmark.
The simulation results are reported in Table 10, from which we can find that for relatively small-scale test problems such as problems 1-3, the KKT based centralized method
could achieve the theoretic optimums or very close optimums with a small optimality
gap. On the other hand, the GA based distributed solution method can attain the theoretic
optimum for test problem 1 and could also achieve solutions close to theoretic optimums for test problems 2-3. For larger-scale problems such as test problems 4-6, KKT
based centralized method could not achieve an optimality gap of 0.01% within reasonably large computation times (the computation time limit is set to 12000 seconds)
and only near optimums are attained. (Footnote 3: the numbers 1 to 6 represent C-HEMS-1, C-HEMS-2, ..., C-HEMS-6 respectively.) It is also observed that our GA based solution
method could actually achieve better near optimums than the KKT based solution method
on these large scale test problems. As a result, from the above comparisons of GA with
the iterative algorithm and KKT based single level reduction method on different test
cases, it is reasonable to conclude that our proposed GA is good enough and efficient
in solving the proposed bilevel model.
It is also worth pointing out that the proposed GA based distributed solution method
is a feasible and effective solution for realistic cases of the energy pricing problem,
where there may be up to a few million customers. The reason is that, by simply
distributing the computation of all customers’ best responses to the retailer’s prices
across a small number (say, a few hundred) of cloud computing devices, in which each
device computes the best responses of a group of customers (say, a couple of thousand),
the corresponding cloud/parallel computing makes the time needed for computing all
customers’ best responses largely similar to the time needed for computing each group
of customers. In other words, the proposed GA based solution method enables parallel
computing and remains feasible and effective for realistic cases of energy pricing with
a large number of customers. This is in contrast with the complexity of KKT based
optimization algorithms in the same cases, which require centralized computing with
multi-millions of constraints in order to find feasible and optimal prices. As for the
definition based iterative algorithms, the local optimization feature of such algorithms
makes them a much weaker option compared with our proposed approach.
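The grouping argument above can be illustrated with a short sketch. Here `best_response` is a hypothetical stand-in for one customer's energy consumption problem, and threads stand in for the cloud computing devices that would host each group in a real deployment.

```python
from concurrent.futures import ThreadPoolExecutor

def best_response(customer_id, prices):
    # Toy demand rule in place of a real consumption optimization problem.
    return [max(0.0, 2.0 - 0.1 * p) for p in prices]

def group_responses(group, prices):
    return [best_response(c, prices) for c in group]

def all_responses(customers, prices, n_groups=4):
    """Split customers into n_groups and evaluate each group's best
    responses concurrently; total time tracks the slowest group."""
    groups = [customers[i::n_groups] for i in range(n_groups)]
    with ThreadPoolExecutor(max_workers=n_groups) as ex:
        futures = [ex.submit(group_responses, g, prices) for g in groups]
        return [r for f in futures for r in f.result()]

responses = all_responses(list(range(8)), [6.0] * 24)
```

With one device per group, wall-clock time is governed by the largest group rather than the total number of customers, which is the scalability property claimed above.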
9. Conclusion
In this paper, we study the dynamic pricing optimization problem in a realistic
scenario consisting of one retailer and three different types of customers (C-NONE,
C-SM, and C-HEMS). The interactions between retailer and customers are treated as
a two-level decision-making framework. Firstly, we propose an integrated optimization + machine learning based demand modelling framework for customers. Secondly,
we propose a profit-maximization based dynamic pricing model for the retailer subject to realistic market constraints. Finally, GA based distributed pricing optimization
algorithms are proposed to tackle the above two-level decision making problems. Simulation results indicate that our proposed pricing optimization model and solution algorithms are feasible and effective. However, to understand the computational aspect
of bilevel problems and their solution algorithms thoroughly, much work remains to be
done. In our future work, we plan a separate, dedicated contribution on that subject
considering both our and more generalized bilevel problems, by taking advantage of
e.g., machine learning algorithms and distributed and parallel computational facilities.
Acknowledgements
This work was partly supported by the National Nature Science Foundation of
China (Grant No. 71301133), Humanity and Social Science Youth Foundation of Ministry of Education, China (Grant No. 13YJC630033), and the Engineering and Physical
Sciences Research Council, UK (Grant No. EP/I031650/1).
References
References
[1] http://www.bbc.co.uk/news/uk-politics-24213366, (accessed
August 30, 2016).
[2] https://www.ofgem.gov.uk/electricity/retail-market/metering/transition-smart (accessed July 16, 2017).
[3] http://iso-ne.com/markets/hstdata/znl_info/hourly/index.html,
(accessed March 01, 2013).
[4] H. Aalami, M. P. Moghaddam, G. Yousefi, Modeling and prioritizing demand
response programs in power markets, Electric Power Systems Research 80 (4)
(2010) 426–435.
[5] C. O. Adika, L. Wang, Autonomous appliance scheduling for household energy
management, IEEE Transactions on Smart Grid 5 (2) (2014) 673–682.
[6] A. Arabali, M. Ghofrani, M. Etezadi-Amoli, M. S. Fadali, Y. Baghzouz, Geneticalgorithm-based optimization approach for energy management, IEEE Transactions on Power Delivery 28 (1) (2013) 162–170.
[7] A. Asadinejad, M. G. Varzaneh, K. Tomsovic, C.-f. Chen, R. Sawhney, Residential customers elasticity estimation and clustering based on their contribution at
incentive based demand response, in: Power and Energy Society General Meeting
(PESGM), 2016, IEEE, 1–5, 2016.
[8] O. Ben-Ayed, C. E. Blair, Computational difficulties of bilevel linear programming, Operations Research 38 (3) (1990) 556–560.
[9] B. Chai, J. Chen, Z. Yang, Y. Zhang, Demand response management with multiple
utility companies: A two-level game approach, IEEE Transactions on Smart Grid
5 (2) (2014) 722–731.
[10] S. Datchanamoorthy, S. Kumar, Y. Ozturk, G. Lee, Optimal time-of-use pricing
for residential load control, in: Smart Grid Communications (SmartGridComm),
2011 IEEE International Conference on, IEEE, 375–380, 2011.
[11] K. Deb, An efficient constraint handling method for genetic algorithms, Computer
Methods in Applied Mechanics and Engineering 186 (2) (2000) 311–338.
[12] B. P. Esther, K. S. Kumar, A survey on residential Demand Side Management
architecture, approaches, optimization models and methods, Renewable and Sustainable Energy Reviews 59 (2016) 342–351.
[13] C. Flath, D. Nicolay, T. Conte, C. van Dinther, L. Filipova-Neumann, Cluster
Analysis of Smart Metering Data-An Implementation in Practice, Business &
Information Systems Engineering 4 (1) (2012) 31–39.
[14] X. Fu, X.-J. Zeng, X. R. Luo, D. Wang, D. Xu, Q.-L. Fan, Designing an intelligent decision support system for effective negotiation pricing: A systematic and
learning approach, Decision Support Systems 96 (2017) 49–66.
[15] V. Gómez, M. Chertkov, S. Backhaus, H. J. Kappen, Learning price-elasticity of
smart consumers in power distribution systems, in: Smart Grid Communications
(SmartGridComm), 2012 IEEE Third International Conference on, IEEE, 647–
652, 2012.
[16] J. Han, G. Zhang, Y. Hu, J. Lu, A solution to bi/tri-level programming problems
using particle swarm optimization, Information Sciences 370 (2016) 519–537.
[17] G. W. Hart, Nonintrusive appliance load monitoring, Proceedings of the IEEE
80 (12) (1992) 1870–1891.
[18] K. Herter, Residential implementation of critical-peak pricing of electricity, Energy Policy 35 (4) (2007) 2121–2130.
[19] J. Hosking, R. Natarajan, S. Ghosh, S. Subramanian, X. Zhang, Short-term forecasting of the daily load curve for residential electricity usage in the Smart Grid,
Applied Stochastic Models in Business and Industry 29 (6) (2013) 604–620.
[20] A. R. Khan, A. Mahmood, A. Safdar, Z. A. Khan, N. A. Khan, Load forecasting,
dynamic pricing and DSM in smart grid: A review, Renewable and Sustainable
Energy Reviews 54 (2016) 1311–1322.
[21] A. Khotanzad, E. Zhou, H. Elragal, A neuro-fuzzy approach to short-term load
forecasting in a price-sensitive environment, IEEE Transactions on Power Systems 17 (4) (2002) 1273–1282.
[22] D. S. Kirschen, G. Strbac, P. Cumperayot, D. de Paiva Mendes, Factoring the
elasticity of demand in electricity prices, IEEE Transactions on Power Systems
15 (2) (2000) 612–617.
[23] J. Lofberg, YALMIP: A toolbox for modeling and optimization in MATLAB, in:
Computer Aided Control Systems Design, 2004 IEEE International Symposium
on, IEEE, 284–289, 2004.
[24] J. Lu, J. Han, Y. Hu, G. Zhang, Multilevel decision-making: a survey, Information
Sciences 346 (2016) 463–487.
[25] T. Lv, Q. Ai, Y. Zhao, A bi-level multi-objective optimal operation of gridconnected microgrids, Electric Power Systems Research 131 (2016) 60–70.
[26] Q. Ma, X.-J. Zeng, Demand modelling in electricity market with day-ahead dynamic pricing, in: 2015 IEEE International Conference on Smart Grid Communications (SmartGridComm), IEEE, 97–102, 2015.
[27] C. Mediwaththe, E. Stephens, D. Smith, A. Mahanti, Competitive Energy Trading Framework for Demand-side Management in Neighborhood Area Networks,
IEEE Transactions on Smart Grid (2017) (in press).
[28] F.-L. Meng, X.-J. Zeng, A Stackelberg game-theoretic approach to optimal realtime pricing for the smart grid, Soft Computing 17 (12) (2013) 2365–2380.
[29] F.-L. Meng, X.-J. Zeng, A Profit Maximization Approach to Demand Response
Management with Customers Behavior Learning in Smart Grid, IEEE Transactions on Smart Grid 7 (3) (2016) 1516–1529.
[30] A.-H. Mohsenian-Rad, A. Leon-Garcia, Optimal residential load control with
price prediction in real-time electricity pricing environments, IEEE Transactions
on Smart Grid 1 (2) (2010) 120–133.
[31] A.-H. Mohsenian-Rad, V. W. Wong, J. Jatskevich, R. Schober, A. Leon-Garcia,
Autonomous demand-side management based on game-theoretic energy consumption scheduling for the future smart grid, IEEE Transactions on Smart Grid
1 (3) (2010) 320–331.
[32] N. G. Paterakis, O. Erdinc, A. G. Bakirtzis, J. a. P. Catalão, Optimal household appliances scheduling under day-ahead pricing and load-shaping demand response
strategies, IEEE Transactions on Industrial Informatics 11 (6) (2015) 1509–1519.
[33] L. P. Qian, Y. J. A. Zhang, J. Huang, Y. Wu, Demand response management via
real-time electricity price control in smart grids, IEEE Journal on Selected areas
in Communications 31 (7) (2013) 1268–1280.
[34] B. Qu, J. Liang, Y. Zhu, Z. Wang, P. N. Suganthan, Economic emission dispatch
problems with stochastic wind power using summation based multi-objective evolutionary algorithm, Information Sciences 351 (2016) 48–66.
[35] Y. Ren, P. N. Suganthan, N. Srikanth, G. Amaratunga, Random vector functional
link network for short-term electricity load demand forecasting, Information Sciences 367 (2016) 1078–1093.
[36] P. Samadi, A.-H. Mohsenian-Rad, R. Schober, V. W. Wong, J. Jatskevich, Optimal real-time pricing algorithm based on utility maximization for smart grid, in:
Smart Grid Communications (SmartGridComm), 2010 First IEEE International
Conference on, IEEE, 415–420, 2010.
[37] P. Siano, Demand response and smart grids – A survey, Renewable and Sustainable Energy Reviews 30 (2014) 461–478.
[38] A. Sinha, P. Malo, K. Deb, A Review on Bilevel Optimization: From Classical to
Evolutionary Approaches and Applications, IEEE Transactions on Evolutionary
Computation (2017) (in press).
[39] D. Srinivasan, S. Rajgarhia, B. M. Radhakrishnan, A. Sharma, H. Khincha,
Game-Theory based dynamic pricing strategies for demand side management in
smart grids, Energy 126 (2017) 132–143.
[40] P. R. Thimmapuram, J. Kim, Consumers’ price elasticity of demand modeling
with economic effects on electricity markets using an agent-based model, IEEE
Transactions on Smart Grid 4 (1) (2013) 390–397.
[41] A. Trivedi, D. Srinivasan, S. Biswas, T. Reindl, A genetic algorithm–differential
evolution based hybrid framework: Case study on unit commitment scheduling
problem, Information Sciences 354 (2016) 275–300.
[42] J. Vetter, P. Novák, M. Wagner, C. Veit, K.-C. Möller, J. Besenhard, M. Winter, M. Wohlfahrt-Mehrens, C. Vogler, A. Hammouche, Ageing mechanisms in
lithium-ion batteries, Journal of Power Sources 147 (1) (2005) 269–281.
[43] W. Wei, F. Liu, S. Mei, Energy pricing and dispatch for smart grid retailers under
demand response and market price uncertainty, IEEE Transactions on Smart Grid
6 (3) (2015) 1364–1374.
[44] Z. Wu, S. Zhou, J. Li, X.-P. Zhang, Real-time scheduling of residential appliances via conditional risk-at-value, IEEE Transactions on Smart Grid 5 (3) (2014)
1282–1291.
[45] M. Yu, S. H. Hong, A real-time demand-response algorithm for smart grids: A
stackelberg game approach, IEEE Transactions on Smart Grid 7 (2) (2016) 879–
888.
[46] M. Yu, S. H. Hong, Supply–demand balancing for power management in smart
grid: A Stackelberg game approach, Applied Energy 164 (2016) 702–710.
[47] Z. Yun, Z. Quan, S. Caixin, L. Shaolan, L. Yuming, S. Yang, RBF neural network and ANFIS-based short-term load forecasting approach in real-time price
environment, IEEE Transactions on Power Systems 23 (3) (2008) 853–858.
[48] Y. Zhang, R. Wang, T. Zhang, Y. Liu, B. Guo, Model predictive control-based
operation management for a residential microgrid with considering forecast uncertainties and demand response strategies, IET Generation, Transmission & Distribution 10 (10) (2016) 2367–2378.
[49] M. Zugno, J. M. Morales, P. Pinson, H. Madsen, A bilevel model for electricity
retailers’ participation in a demand response market environment, Energy Economics 36 (2013) 182–197.
Utilizing Static Analysis and Code Generation to Accelerate Neural
Networks
Lawrence McAfee
Kunle Olukotun
Stanford University, 450 Serra Mall, Stanford, CA 94305
Abstract
As datasets continue to grow, neural network
(NN) applications are becoming increasingly
limited by both the amount of available computational power and the difficulty of developing
high-performance applications. Researchers
often must have expert systems knowledge
to make their algorithms run efficiently. Although available computing power increases
rapidly each year, algorithm efficiency is not
able to keep pace due to the use of general purpose compilers, which are not able
to fully optimize specialized application domains. Within the domain of NNs, we have
the added knowledge that network architecture remains constant during training, meaning the architecture’s data structure can be
statically optimized by a compiler. In this paper, we present SONNC, a compiler for NNs
that utilizes static analysis to generate optimized parallel code. We show that SONNC’s
use of static optimizations make it able to
outperform hand-optimized C++ code by up
to 7.8X, and MATLAB code by up to 24X.
Additionally, we show that use of SONNC
significantly reduces code complexity when
using structurally sparse networks.
1. Introduction
Neural networks (NN) have gained much renewed interest in recent years, as they have been shown to outperform many application-specific machine learning algorithms across several domains (Bengio, 2009). Given
their potential promise for helping to move the field of
machine learning towards true artificial intelligence,
Appearing in Proceedings of the 29 th International Conference on Machine Learning, Edinburgh, Scotland, UK, 2012.
Copyright 2012 by the author(s)/owner(s).
[email protected]
[email protected]
recent research trends have shown researchers’ eagerness to test NNs on larger datasets (Raina, 2009; Cai
et al., 2011). However, due to the core linear algebra
routines that compose most applications, NNs are becoming increasingly limited by the amount of available
computational power. In cases where large datasets
are desired, researchers typically resort to structurally
sparse networks, which commonly refers to networks
with either dense local receptive fields (Bengio & Lecun, 2007) or non-dense receptive fields (Coates & Ng,
2011). However, in order to make larger scale networks
run efficiently, researchers often find themselves needing to have expert systems knowledge to build their
applications.
To the benefit of algorithm efficiency, available computational power increases each year. This benefits many
applications in general, but the relative efficiency increase many applications see is nowhere near as fast as
the pace of hardware advancement. This is due to the
fact that for most applications, programmers utilize
general purpose compilers to perform much of the optimization work such that the programmer can continue
to focus on the higher-level issues that their domain
requires. However, general purpose compilers (e.g.,
GCC) are not capable of fully optimizing specialized
application domains. Some general purpose platforms
are more specialized to certain domains, such as MATLAB for linear algebra-based development, and are
better suited for many routines that are used to compose NNs. However, as NN applications get more complex, even a platform such as MATLAB is no longer
optimal due to a lack of domain specific knowledge
about the underlying data structures.
Within the domain of NNs, one piece of domain specific knowledge that can be used to increase efficiency
is knowing that most network architectures – where
the architecture is defined by the choices for layer sizes,
mini-batch size, and interlayer connectivity – do not
change during training. This means that the data
structures used to store the network architecture are
capable of being statically optimized, and then generated code can be made to run the specific architecture
as efficiently as possible.
In this paper, we present SONNC (pronounced
“sonic”), a Statically Optimizing Neural Network
Compiler1 . The main contributions of this paper are:
3. System Overview
– SONNC, a neural network compiler that focuses on
statically analyzing and optimizing NNs, and generates efficient parallel C++ code.
– We demonstrate analyses and optimizations which
use NN domain-specific knowledge.
– We demonstrate the conciseness of code utilizing
SONNC by using SONNC’s front-end interface to
MATLAB.
– We show that SONNC without any explicit performance tuning, outperforms hand-optimized C++ code
by 3.3X–7.8X, and MATLAB code by 9.2X–24X.
2. Related Work
Several machine learning (ML) development platforms
have been introduced recently to help with scaling to
larger applications. A few popular platforms are OptiML (Sujeeth et al., 2011), Theano (Bergstra et al.,
2010), and GraphLab (Low et al., 2010). OptiML
is a domain-specific language for ML built on top of
the heterogeneous computing platform Delite (Chafi
et al., 2011). It provides abstractions to allow a programmer to develop ML applications, while Delite implicitly takes care of parallelizing and running the application across multiple CPUs and GPUs. Theano
is a compiler for symbolic mathematical expressions.
Although meant to be a general symbolic compiler,
Theano is designed to handle ML applications. Programmers develop their applications using Python and
Numpy data types, and Theano implicity generates
C++ and CUDA. GraphLab provides a parallel abstraction similar to MapReduce for running large scale
machine learning applications on a cluster.
Similar to SONNC, each of these platforms provides
useful abstractions for programming large scale machine learning algorithms. These platforms are designed to optimize ML applications in general by using optimized routines and data structures. Unlike
SONNC, however, none of these other platforms focus
on statically optimizing NN data structures. Neural
networks are a quickly growing field within ML, and
many NN applications are composed of very computationally expensive operations. SONNC aims to provide
1
Code available at: http://github.com/sonnc/sonnc
SONNC makes it easier for an end user to continue
scaling applications without considering the complexities of tuning high performance code. As an example,
if a user wants to design a network that uses structurally sparse connectivity – either locally dense or
unstructured sparsity – a great deal of development
effort would need to go into developing a sparse data
structure that is efficient for indexing and updating
the nonzero values in the weight matrix. When using
SONNC, however, the user only needs to define the
network’s connectivity at the beginning of her code,
and the rest of the code remains unaffected by the underlying data structure. This way, the programmer
only needs to focus on algorithmic intent rather than
worry about the details of implementation.
In addition to optimizing for data structures of the NN
application, SONNC also optimizes the algorithm’s operations by transforming and condensing sequences of
routines into more efficient routines. NNs typically
have a very straightforward data flow, with minimal
high-level control structures. This makes it possible
to perform alterations on the execution graph to make
the algorithm more efficient by improving caching and
reducing overhead.
3.1. Compiler Stages
The following sections briefly overview each stage of
the compiler, which include building an execution
graph, analyzing and optimizing the data structures
and operations, and then generating efficient parallel
code.
3.1.1. Building an Execution Graph
SONNC is a standalone compiler, rather than a new
programming language. As such, supported data types
and operations must be embedded into an existing language to allow a user to use the system in a natural
way. (See Section 5.1 for a description of the currently
available data types in MATLAB.) Once an algorithm
is written, an additional compilation function must be
called to let SONNC perform its optimizations.
SONNC supports two high-level matrix data types: a
dense matrix type and a sparse matrix type. A dense
matrix is used to denote a variable where any element
can contain a nonzero value. A sparse matrix, however, denotes a variable whose sparsity structure does
not change after initialization. In practice, the sparse
type is typically only used for weight data in a NN. In
addition to these two matrix types, vector and scalar
types are supported. All of the standard linear algebra
operations between matrices and vectors are supported
that are commonly used in NN algorithms, including
multiplication, elemental (e.g., dot) operations, norms,
and non-linearity operations.
When SONNC’s data types are connected together via
the supported operations, an execution graph of the
NN application is implicitly built. This graph contains
the flow of operations necessary to compute the nodes
at the output (i.e., weight and bias updates) from the
nodes at the input (i.e., the training set).
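To make the implicit graph construction concrete, here is a minimal Python sketch (not SONNC's actual internals; the Node class, operation names, and traversal are ours) of how overloaded operators can record an execution graph instead of computing immediately:

```python
# Sketch: overloaded operators build an execution graph implicitly.

class Node:
    def __init__(self, op, inputs=()):
        self.op = op              # e.g. "input", "matmul", "add"
        self.inputs = list(inputs)

    def __mul__(self, other):
        return Node("matmul", (self, other))

    def __add__(self, other):
        return Node("add", (self, other))

def topo_order(node, seen=None, order=None):
    """Return the flow of operations needed to compute `node`
    from the input nodes (a depth-first topological sort)."""
    seen = seen if seen is not None else set()
    order = order if order is not None else []
    if id(node) in seen:
        return order
    seen.add(id(node))
    for inp in node.inputs:
        topo_order(inp, seen, order)
    order.append(node)
    return order

V = Node("input")     # training data
W = Node("input")     # weights
bias = Node("input")
H = V * W + bias      # no computation happens; the graph is recorded

print([n.op for n in topo_order(H)])
# -> ['input', 'input', 'matmul', 'input', 'add']
```

The compiler can then analyze and rewrite this recorded graph before any code is generated.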
3.1.2. Analysis and Optimization
Static graph optimizations. SONNC performs several common static compiler optimizations, including
dead code elimination, operation re-writing, subexpression elimination, and method fusion. An example
of subexpression elimination is the pre-computing of
all-constant-input operations. For example, in the iterative shrinkage thresholding algorithm (ISTA), the algorithm repeatedly runs the update expression:

$Z = h_{\alpha/L}\left( Z - \tfrac{1}{L}\,(Z W^{T} - X)\, W \right)$
The only variable being updated in this expression is
Z, the approximation to the sparse codes. Hence, to
speed up the update expression, we can expand out
the expression and precompute W T W and XW such
that time is not wasted repeatedly computing these
constant values.
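As an illustration of this optimization, the Python/NumPy sketch below (with h taken to be the usual soft-thresholding operator, and the names G and C ours) compares the naive update with the version that hoists the constant products W^T W and XW out of the loop:

```python
import numpy as np

def soft_threshold(Z, t):
    # h_t: elementwise shrinkage operator (assumed form of h in ISTA)
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def ista_naive(Z, W, X, L, alpha, iters):
    # Recomputes the constant products on every iteration.
    for _ in range(iters):
        Z = soft_threshold(Z - (Z @ W.T - X) @ W / L, alpha / L)
    return Z

def ista_precomputed(Z, W, X, L, alpha, iters):
    # Expand (Z W^T - X) W = Z (W^T W) - X W and hoist the
    # all-constant-input products out of the loop.
    G = W.T @ W   # constant across iterations
    C = X @ W     # constant across iterations
    for _ in range(iters):
        Z = soft_threshold(Z - (Z @ G - C) / L, alpha / L)
    return Z
```

Both functions produce the same iterates; the precomputed version simply avoids redoing the two constant multiplications every iteration.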
Method fusion is an important optimization for attaining high performance. The compiler scans the execution graph for recognized operation sequences, and
replaces them with more concise and efficient operations that typically have better caching and less overhead. This is a place where having domain specific
knowledge becomes very useful; there are many operation sequences that are shared between various NN
algorithms. For example, restricted Boltzmann machines, autoencoders, and backpropogation networks
all share an operation sequence of matrix multiplication followed by bias addition followed by a nonlinearity. SONNC would recognize this sequence and convert it into its own internal operation, which in the
case of a sigmoid nonlinearity would be called MultBiasSigm. Since the bias and nonlinearity operations
must be applied to each element of the preceding data
matrix, significant savings can be made if these operations can be performed while the data is still in the
CPU cache immediately following the matrix multipli-
cation. SONNC contains several operation sequences
that it can recognize and replace with more efficient
routines.
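A rough Python/NumPy sketch of what such a fusion buys (the real MultBiasSigm lives in generated C++; this only illustrates the idea of applying bias and nonlinearity blockwise while the freshly computed rows are still cache-resident):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def unfused(V, W, bias):
    # Three separate passes over the full result: matmul, bias
    # addition, then the nonlinearity.
    H = V @ W
    H = H + bias
    return sigmoid(H)

def mult_bias_sigm(V, W, bias, block=64):
    # Fused sketch: apply bias and sigmoid to each block of rows
    # immediately after it is produced, before moving on.
    n = V.shape[0]
    out = np.empty((n, W.shape[1]))
    for i in range(0, n, block):
        rows = V[i:i + block] @ W
        out[i:i + block] = sigmoid(rows + bias)
    return out
```

The two functions compute identical results; the fused variant touches each output block once instead of three times.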
Data structure optimization. Since the NN architecture does not change during training, we can parameterize the underlying data structures such that
the generated code is optimized to run as efficiently
as possible for the specific network architecture. The
number of threads is also chosen during this stage of
the compiler. The entire data structure optimization
process will be described in greater detail in Section 4.
3.1.3. Multithreading and Code Generation
Once the number of threads is chosen in the previous stage, the graph is expanded into a multithreaded
graph where each node represents an operation performed by a single thread. Thread synchronization
points are determined during this phase, and this is
the last internal representation of the application before code generation. C++ code is then generated to
perform the NN application.
4. Data Structure Optimization
Data structure optimization has the single biggest impact on performance in comparison to the other optimizations that SONNC performs. A network’s architecture, again, is defined by choices for the layer sizes,
mini-batch size, and interlayer connectivity. During
this stage, the underlying data structures are parameterized to run efficiently for the specific application.
The number of threads is also chosen during this stage
of optimization. Although not directly a parameter
that affects the underlying data structures, the number of threads must be chosen jointly with the matrix
blocking size (described below) in order to yield good
parallel performance. Choosing the right number of
threads can have a large impact on performance. The
optimal number of threads varies significantly based
on matrix dimensions and connectivity structures. For
example, even with the same matrix dimensions, the
optimal number of threads between a network that
uses dense local receptive fields and a network that
uses non-dense local receptive fields can vary by a factor of two or four.
4.1. Underlying Data Structure
Although SONNC contains the two high-level matrix
data types described in Section 3.1.1 (i.e., a dense type
and a sparse type), the system contains several underlying data structures, including a dense structure, a locally dense sparse structure, a few general sparse structures, and a hybrid sparse structure. Each of these
underlying structures are appropriate for different circumstances, and the compiler chooses which to use for
each matrix within a target application. The choice of
an underlying data structure is not always intuitive.
For example, when a user defines a network with dense
local receptive fields, a logical choice for the underlying
data structure might be to use a locally dense sparse
data structure, which stores information on the locations of rectangular dense blocks within a sparse matrix. In many cases, this is the best data structure to
use for the application. But this is only true when the
receptive field dimension is large enough. When the
receptive field is small (e.g., less than about 5 x 5),
the overhead of performing small dense matrix multiplications actually increases above the simpler general
sparse structure. With small enough receptive fields,
the general sparse structure can outperform the locally
dense structure by 1.5X–2X.
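The locally dense structure described above can be sketched as a list of dense blocks tagged with their upper-left corners (a simplified Python model for illustration, not SONNC's generated C++):

```python
import numpy as np

class LocallyDenseSparse:
    """Sketch of a locally dense sparse matrix: rectangular dense
    blocks, each stored with the location of its upper-left corner."""

    def __init__(self, shape):
        self.shape = shape
        self.blocks = []   # list of (row0, col0, dense_block)

    def add_block(self, row0, col0, block):
        self.blocks.append((row0, col0, np.asarray(block, dtype=float)))

    def matvec(self, x):
        # y = A @ x, computed as one small dense multiply per block
        y = np.zeros(self.shape[0])
        for r0, c0, b in self.blocks:
            rows, cols = b.shape
            y[r0:r0 + rows] += b @ x[c0:c0 + cols]
        return y

    def to_dense(self):
        A = np.zeros(self.shape)
        for r0, c0, b in self.blocks:
            A[r0:r0 + b.shape[0], c0:c0 + b.shape[1]] += b
        return A
```

As the text notes, the per-block dense multiplies only pay off when the blocks (receptive fields) are large enough; for tiny blocks a general sparse format wins.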
4.2. Data Structure Parameterization
In addition to choosing the correct underlying data
structure, each of these data structures is parameterizable, effectively making a wide range of different underlying structures to choose from. The two most important of these parameters are the matrix blocking
size and the data layout in memory. These parameters apply to both dense and sparse data structures.
The blocking size determines how the matrix’s data is
partitioned in memory by splitting up the matrix into
separate square blocks. Smaller block sizes increase
concurrency, but also increase overhead in reading and
writing the matrix data. The data layout parameter
sets whether matrix elements are stored in memory using row-major order, column-major order, or another
format. Both the blocking size and data format significantly impact cache reuse. While the block size is
typically set globally for all matrices, the data format
is set individually for each variable and depends heavily on the operations and neighboring variables (in the
execution graph) that directly interact with a variable.
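The blocking parameter can be pictured with a toy Python sketch of square tiling (SONNC's actual blocked structures live in generated C++; the function names here are ours):

```python
import numpy as np

def partition_into_blocks(A, bs):
    """Split a matrix into square bs-by-bs tiles (edge tiles may be
    smaller). Smaller tiles mean more independent work units for
    threads, but more per-tile bookkeeping overhead."""
    n, m = A.shape
    tiles = {}
    for i in range(0, n, bs):
        for j in range(0, m, bs):
            tiles[(i // bs, j // bs)] = A[i:i + bs, j:j + bs].copy()
    return tiles

def reassemble(tiles, shape, bs):
    """Inverse of partition_into_blocks."""
    A = np.zeros(shape)
    for (bi, bj), t in tiles.items():
        A[bi * bs:bi * bs + t.shape[0], bj * bs:bj * bs + t.shape[1]] = t
    return A
```

Within each tile, the data-layout parameter (row-major versus column-major) then determines the traversal order that best reuses the cache for the operations consuming that tile.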
One important point to note is that the compiler must
have knowledge of the L1 and L2 cache sizes in order
to properly set the blocking size. In SONNC’s current implementation, it implicitly discovers these values during installation, which is described in the next
section.
4.3. Joint Parameter Selection
SONNC’s ability to properly choose the underlying
data structure, matrix parameterization, and number of threads represent the most important aspect
of SONNC as a statically optimizing NN compiler.
Properly tuning these parameters can give up to two
orders of magnitude difference in performance. The
joint impact of these parameters is non-linear, and so
the heuristics used to optimize this stage are critical
to getting good performance. To perform this tuning process, SONNC initially must run several timing
tests during its installation in order to calibrate to the
CPU. SONNC times matrix multiplications for several
matrix dimensions, block sizes, and number of threads
in order to create a large lookup table. To keep this
lookup table from being too large, parameter values
are swept over exponentially, and matrix dimensions
are only tested up to 10,000, block sizes up to 1,000,
and number of threads up to 32. For applications with
matrix dimensions larger than this, timing becomes
more easily predictable from the lookup table. The
timing values in this table are generally nonlinear due
to caching. They are also nonconvex as a function of
the number of threads. Currently, SONNC uses linear
interpolation between data points in order to choose
parameter values for a specific application.
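A minimal sketch of the lookup-and-interpolate step (the calibration numbers below are invented for illustration; SONNC's real table covers matrix dimension, block size, and thread count jointly):

```python
import bisect

def interpolate(table_x, table_y, x):
    """Linear interpolation between calibration points, mirroring
    the lookup-table strategy: values are swept exponentially, and
    queries between points are linearly interpolated."""
    if x <= table_x[0]:
        return table_y[0]
    if x >= table_x[-1]:
        return table_y[-1]
    i = bisect.bisect_left(table_x, x)
    x0, x1 = table_x[i - 1], table_x[i]
    y0, y1 = table_y[i - 1], table_y[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Hypothetical calibration: measured ms per multiply at
# power-of-2 matrix dimensions (values are made up).
dims  = [256, 512, 1024, 2048, 4096]
times = [1.0, 3.5, 12.0, 51.0, 210.0]

print(interpolate(dims, times, 1536))  # -> 31.5, midway between 1024 and 2048
```

Queries beyond the swept range are clamped to the nearest calibration point, matching the observation that very large dimensions become easier to predict.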
This tuning process is also an ongoing area of active
research for the compiler. Future plans for the tuning
process include training a deep learning algorithm on
the parameter space in order to better learn the nonlinearities. For the current implementation, however,
linear interpolation has been shown to work very well when
using power-of-2 spacing when creating the lookup table.
One other important point to note is that this parameter selection operation is very fast. Using the parameter values mentioned previously during the installation
phase, SONNC builds a lookup table that is stored
as a 160MB file which is loaded into memory during
each use. When a new application is being optimized,
SONNC simply interpolates the neighboring matrix
settings from the lookup table to set the block size
and thread count. Since this operation only includes
linear array scanning and vector averaging, it required
around 2-2.5 seconds to perform parameter selection.
While more sophisticated and computationally expensive methods were tested, linear interpolation worked
well in practice.
5. Productivity
One of SONNC’s goals is to make it easier to write
concise and expressive code, while attaining the performance of optimized C++ code. This way, programmers can focus primarily on the algorithmic intent of
their applications. However, this is often not possible
with nontrivial data structures, such as when network
architectures contain sparse interlayer connectivities.
For example, if a LRF network is being defined, a
programmer has a choice to use either a custom data
structure or MATLAB’s sparse structure. Unfortunately, either option would require additional handcoded routines for efficient random indexing. This in
turn increases code complexity significantly.
When using SONNC, however, the only difference between whether a user would like to use dense, locally
dense, or unstructured sparse connectivity is a matter of how the matrix is initialized. The remainder of
the network’s algorithmic description would be data
structure independent. The following section gives an
example of how the SONNC compiler could be used in
practice.
5.1. MATLAB-Embedded Data Types and
Operations
Although SONNC’s main contribution is its powerful static optimization routines, a front-end interface
for MATLAB is provided to allow end users to easily
integrate the SONNC back-end into existing applications. This should in many cases automatically lead
to more concise code and much higher performance for
NN applications. SONNC embeds four data types into
MATLAB: a Vector type, a Scalar type, a DenseMatrix type, and a SparseMatrix type.
SONNC also overloads many common symbolic operators and other methods in MATLAB such that code
can be written using standard MATLAB syntax. Once
a user has declared her variables using SONNC data
types, much of the remainder of her code should be
identical to as it would be otherwise in MATLAB. The
main difference is that the body of the NN convergence
loop is separated from declaration of the convergence
loop construct. The body of the loop is written first,
followed by a declaration of the convergence loop with
its stopping criterion.
5.2. Example Code
Algorithm 1 shows an example use of SONNC data
types inside a MATLAB script that implements a single layer LRF backpropagation network. This example highlights the use of the four embedded data types (i.e., DenseMatrix, SparseMatrix, Vector, and Scalar), one control structure (untilConverged), and one other
method, runNN, used to compile and run the application. This example demonstrates the SparseMatrix
constructor being initialized with a dense MATLAB matrix structure (the LRFs are stored inside a mostly zero
‘dense’ matrix). SparseMatrix can additionally be initialized with either MATLAB’s sparse data structure,
or a cell array that contains information of the loca-
tions and data of submatrices within a larger sparse
matrix, which is useful for locally dense sparse variables.
untilConverged specifies the convergence stopping criterion. The convergence loop iterates until the normalized difference between successive values of the Scalar
type cost falls below the specified tolerance. The output of untilConverged is a data type that simply combines the information of the looping structure and the
execution graph, and is used as the input to the compiler. The code is then compiled and executed using
the runNN method.
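The stopping rule of untilConverged can be sketched as follows (the paper does not show its internals; the function and variable names here are ours):

```python
def until_converged(step, cost_fn, tol=1e-6, max_iters=100000):
    """Iterate until the normalized difference between successive
    cost values drops below `tol`, as untilConverged is described
    to do."""
    prev = cost_fn()
    for i in range(max_iters):
        step()                      # one pass over the loop body
        cur = cost_fn()
        if abs(prev - cur) / max(abs(prev), 1e-12) < tol:
            return i + 1, cur       # (iterations run, final cost)
        prev = cur
    return max_iters, prev
```

For example, driving a Newton iteration for sqrt(2) with this loop converges in a handful of iterations.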
As can be observed in this example code, the NN’s
routines are not dependent on the weight matrix data
structure. SONNC makes it simple to initialize data
structures as desired, without needing to worry about
tuning code for performance.
6. Performance Evaluation
This section presents performance results for a set of
NN applications written in MATLAB using SONNC
data types. We compare these results to handoptimized reference implementations written using
both MATLAB and C++ code. In addition, we analyze the performance improvements achievable due to
SONNC’s static optimizations that were overviewed in
Section 3.1.
6.1. Methodology
We compare the performance results for three different NN applications: the restricted Boltzmann machine (RBM), the autoencoder (AE), and the iterative
shrinkage thresholding algorithm (ISTA). For each of
these algorithms, we use two different sparsity patterns: local receptive fields (LRF) and unstructured
sparsity. These experiments were run on a machine
containing two quad-core Intel Xeon X5550 2.67GHz
processors and 24GB of RAM. The version of MATLAB used is R2011b 7.13. The SONNC applications
are algorithmically identical to the hand-optimized
MATLAB and C++ implementations. For the handoptimized versions, we made a reasonable effort to
write efficient sparse routines. To implement LRFs by
hand in both MATLAB and C++, we use an arraybased structure where each entry contains a dense submatrix and the index of its upper left corner. For unstructured sparsity, we use MATLAB’s builtin sparse
data structure for comparison. In C++ we use the
compressed sparse block format (Buluc et al., 2009),
which has several parallelization benefits. To parallelize the C++ code, we divide the work up evenly over the number of threads in the processor.

Algorithm 1 LRF Backprop Net (SONNC-based)

% MATLAB data type initialization
% Sparsity set inside a mostly-zero dense matrix
V_mat = getTrainingData();
T_mat = getTrainingTargets();
W_mat = setLocalRFs();
bias_mat = zeros(1, hidden_dim);
lr_mat = 0.01;

% SONNC data type initialization
V = DenseMatrix(V_mat);
T = DenseMatrix(T_mat);
W = SparseMatrix(W_mat);
bias = Vector(bias_mat);
lr = Scalar(lr_mat);

% Define a single backprop iteration
H = V * W;
H = bsxfun(@plus, H, bias);
H = sigmoid(H);
dH = (H - T) .* H .* (1 - H);
W_update = W - lr * (V' * dH);
bias_update = bias - lr * sum(dH, 1);
err = (T - H) .^ 2;
cost = sum(sum(err));

% Build map of updated variables
updates = containers.Map();
updates(W) = W_update;
updates(bias) = bias_update;

% Build execution graph
outputs = {W, bias, cost};
graph = buildGraph(outputs, updates);

% Declare convergence parameters
tol = 1e-6;
mainLoop = untilConverged(cost, tol, graph);

% Execute backprop net
runNN(mainLoop);
In the following discussion, the SONNC-based code,
hand-optimized MATLAB, and hand-optimized C++
code will be simply referred to as SONNC, MATLAB,
and C++ code, respectively.
One important point to note for using MATLAB’s
sparse data structure is that writing to only the
nonzero elements as the result of a matrix multiplication is an inefficient process. This significantly impacts performance for AEs and RBMs in the unstructured sparsity experiments. While these results are included for completeness, a fairer comparison is to
the C++ implementation in this case.
Timing was only performed between the lines of code
immediately before and after the convergence loop, so
as not to be affected by initialization procedures. Each
application was run 10 times using 100 iterations of the
convergence loop in order to smooth out any fluctuations due to caching and other variables. We present
here the averaged time of the last five executions.
6.2. Performance Comparison
Figures 1–4 show the performance comparison between
the SONNC, MATLAB, and C++ implementations.
The reported speedup is relative to the hand-optimized
version in each case. In each experiment, the SONNC
code runs significantly faster than either of the handcoded implementations.
SONNC shows the most benefit for the AE and
RBM with unstructured sparsity, attaining over 200X
speedup over MATLAB in some cases (Figure 1). This
is because, as mentioned above, updating the nonzeros
of the MATLAB sparse structure is an inefficient process. In the other tests, SONNC yields around 9X–24X
speedup over MATLAB. In contrast to the AE and
RBM, ISTA obtains relatively modest speedup (about
10X) over MATLAB when using unstructured sparsity because in ISTA the weight matrix never needs
to be updated. SONNC also performs better than
the C++ code, generally yielding around 4.2X–7.8X
speedup (Figure 2). The important thing to note here
is that optimizations performed by SONNC – i.e., using knowledge of the sparsity structure – allow it to
outperform C++ code that is optimized primarily for
load balance.
When using LRFs (Figures 3–4), however, the MATLAB implementation is able to run much more efficiently for the weight updates than when using unstructured sparsity. When using locally dense sparsity, MATLAB is still able to utilize its underlying
BLAS implementation to perform the matrix multiplications. Even when using LRF sparsity, however,
SONNC is still able to yield 15X-24X speedup over
MATLAB, and 3.3X–6.1X speedup over C++.
Additionally, as detailed in Section 5, SONNC is able
to yield these levels of performance with much more
succinct code. If the user ever wants to switch their network between dense, LRF, or unstructured sparse interlayer connectivity, it is just a matter of changing the
matrix’s initialization, and all the performance benefits will automatically be available due to the implicit
compiler optimizations.
[Figure 1. Speedup relative to hand-coded MATLAB using general sparse connectivity. Bars for ISTA, RBM, and AE at sparsity levels 1e-4 through 1e-7, for 100K x 100K and 100K x 500K dimensions; speedups range from roughly 9.2X up to 242X.]

[Figure 2. Speedup relative to hand-coded C++ using general sparse connectivity. Bars for ISTA, RBM, and AE; speedups range from roughly 4.2X to 7.8X.]

[Figure 3. Speedup relative to hand-coded MATLAB using local receptive field connectivity. Bars for ISTA, RBM, and AE at LRF dimensions 10x5 through 1x5; speedups range from roughly 15X to 24X.]

[Figure 4. Speedup relative to hand-coded C++ using local receptive field connectivity. Bars for ISTA, RBM, and AE; speedups range from roughly 3.3X to 6.1X.]
6.3. Impact of Optimizations

Figures 5 and 6 present the impact of two of the more important optimizations described in Section 3.1. Figure 5 shows the impact of proper matrix parameterization, which includes choosing the best underlying data structure format and number of threads. Tuning these parameters has the most impact of any optimization stage. This figure demonstrates the nonlinearity of jointly tuning the matrix blocking size and number of threads. For any given block size, there is typically a single optimal setting for the number of threads. However, each block size has a different optimal setting for the number of threads, since smaller block sizes can utilize more threads. But smaller block sizes also have increasingly more overhead. This figure shows that, in this case, selecting the correct combination of these parameters has a 1.5X impact on performance for the two different block sizes.

Figure 6 shows the impact of method fusion. SONNC has a large number of common and NN-specific operation sequences that it can recognize and replace with more efficient routines. The new routines typically optimize cache reuse by combining adjacent operations into the same loop. An example of this is the MultBiasSigm method described in Section 3.1.2. This figure shows that method fusion is able to attain a 1.9X to 2.6X speedup for the tested algorithms. Less speedup is attained for ISTA, which is algorithmically simpler than the AE or RBM, and so has fewer operation sequences that can be fused.

[Figure 5. Matrix parameterization is a difficult and nonlinear process. The matrix blocking size and number of threads must be chosen jointly in order to maximize performance. (x-axis: block size, 50 to 50,000; y-axis: execution time in ms, up to about 900; curves for 4 and 32 threads.)]

[Figure 6. Method fusion gives a 1.9X to 2.6X performance increase. ISTA, being a simpler algorithm, does not have as many fusable operations and therefore does not benefit as much from fusion as the AE and RBM. (y-axis: speedup relative to no fusion; bars for ISTA, RBM, and AE, with and without fusion.)]

7. Conclusion

Many promising neural network learning algorithms are facing computational challenges as they scale to larger datasets. Although the available computational power increases each year, the pace of neural network algorithmic efficiency does not advance as quickly due to the use of general purpose compilers that NN programmers rely on to optimize their applications. As NNs are applied to larger datasets, and algorithm complexity increases, application efficiency is becoming critical in order to continue advancing the field of research. In this paper, we presented SONNC, a compiler that performs static optimization of a NN application in order to generate high performance parallel code. In addition to standard compiler optimizations, SONNC relies on the domain-specific knowledge that the NN architecture does not change during training, which allows the compiler to optimize the underlying data structures used to store the network's architecture. We showed that SONNC was able to outperform MATLAB implementations by 9X–24X, and C++ implementations by 3.3X–7.8X. Additionally, we demonstrated how programmer productivity can be increased when using SONNC. SONNC abstracts the underlying data structure, which reduces code complexity, but still allows algorithms to attain performance better than optimized C++ code.

References

Bengio, Y. Learning deep architectures for AI. In Foundations and Trends in Machine Learning, 2009.

Bengio, Y. and Lecun, Y. Scaling learning algorithms towards AI. In Large-Scale Kernel Machines, 2007.

Bergstra, J., Breuleux, O., Bastien, F., Lamblin, P., Pascanu, R., Desjardins, G., Turian, J., Warde-Farley, D., and Bengio, Y. Theano: A CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference, 2010.

Buluc, A., Fineman, J., Frigo, M., Gilbert, J., and Leiserson, C. Parallel sparse matrix-vector and matrix-transpose-vector multiplication using compressed sparse blocks. In Parallelism in Algorithms and Architectures, 2009.

Cai, Z., Vagena, Z., Jermaine, C., and Haas, P. Very large scale Bayesian inference using MCDB. In Big Learn Workshop, Advances in Neural Information Processing Systems, 2011.

Chafi, H., Sujeeth, A., Brown, K., Lee, H., Atreya, A., and Olukotun, K. A domain-specific approach to heterogeneous parallelism. In Principles and Practice of Parallel Programming, 2011.

Coates, A. and Ng, A. Selecting receptive fields in deep networks. In Advances in Neural Information Processing Systems, 2011.

Low, Y., Gonzalez, J., Kyrola, A., Bickson, D., Guestrin, C., and Hellerstein, J. GraphLab: A new framework for parallel machine learning. In 26th Conference on Uncertainty in Artificial Intelligence, 2010.

Raina, R. Large-scale deep unsupervised learning using graphics processors. In Proceedings of the 26th Annual International Conference on Machine Learning, 2009.

Sujeeth, A., Lee, H., Brown, K., Chafi, H., Wu, M., Atreya, A., Olukotun, K., Rompf, T., and Odersky, M. OptiML: An implicitly parallel domain-specific language for machine learning. In Proceedings of the 28th International Conference on Machine Learning, 2011.
Evolutionary Approach for the Containers Bin-Packing
Problem
R. Kammarti (2), I. Ayachi(1), (2), M. Ksouri (2), P. Borne (1)
[email protected], [email protected], [email protected], [email protected]
1
2
LAGIS, Ecole Centrale de Lille, Villeneuve d’Ascq, France
LACS, Ecole Nationale des Ingénieurs de Tunis, Tunis - Belvédère. TUNISIE
Abstract: This paper deals with the resolution of combinatorial optimization problems, particularly those concerning
the maritime transport scheduling. We are interested in the management platforms in a river port and more specifically
in container organisation operations with a view to minimizing the number of container rehandlings. Subsequently, we
meet customers' delivery deadlines and we reduce ship stoppage time.
In this paper, we propose a genetic algorithm to solve this problem and we present some experiments and results.
Keywords: Bin-packing, genetic algorithm, transport scheduling, heuristic, optimization, container
Ryan Kammarti was born in Tunis, Tunisia in 1978. He received his M.E. and Master degree in Automatics and
Industrial Computing from the National Institute of Applied Sciences and Technology (INSAT) in 2003 and his Ph.D.
degree in automatics and industrial computing from the Central School of Lille and the National School of Engineers of
Tunis in 2006. He currently occupies an assistant teacher position with the University of Tunis EL Manar, Tunis,
Tunisia. He is also a team head in the ACS research laboratory in the National School of Engineers of Tunis (ENIT).
Imen Ayachi was born in Tunisia in 1980; she received her M.E and Master degree in Automatics and Industrial
Computing from the National Institute of Applied Science and Technology (INSAT) in 2006. Currently, she is a
doctoral student (in Electrical Engineering and automatics and industrial computing) at the National School of
Engineers of Tunis (ENIT - TUNISIA) and the Central School of Lille (EC LILLE - FRANCE). Presently she is a
contractual assistant at the High School of Commerce (TUNISIA).
Mekki Ksouri was born in Jendouba, Tunisia in 1948. He received his M.A. degree in physics in the FST in Tunis in
1973, the M.E. degree from the High School of Electricity in Paris in 1973, the D.Sc. degree, and the Ph.D.
degree from the University of Paris VI, Paris, France, in 1975 and 1977 respectively. He is Professor at the National
School of Engineers of Tunis (ENIT). He was principal of the National Institute of Applied Sciences and Technology
(INSAT) from 1999 to 2005, principal and founder of the High School of Statistics and Information Analysis from 2001
to 2005 and the High Institute of Technologic Studies from 1996 to 1999 and principal of The High Normal School of
Technological Education from 1978 to 1990. Pr. Ksouri is the author or coauthor of many journal articles, book
chapters, and communications in international conferences. He is also the author of 6 books.
Pierre Borne received the Master degree of Physics in 1967 and the Masters of Electronics, of Mechanics and of Applied
Mathematics in 1968. The same year he obtained the engineering diploma from the Industrial Institute of the North ("IDN").
He obtained the PhD in Automatic Control from the University of Lille in 1970 and the DSc from the same university in
1976. He became Doctor Honoris Causa of the Moscow Institute of Electronics and Mathematics (Russia) in 1999, of the
University of Waterloo (Canada) in 2006 and of the Polytechnic University of Bucharest (Romania). He is author or
co-author of about 200 journal articles and book chapters, of 38 plenary lectures and of more than 250 communications
in international conferences. He has been the supervisor of 71 PhD theses and is the author of 20 books. He is Fellow of
the IEEE, member of the Fellows Committee of the IEEE, and was President of the IEEE/SMC Society in 2000 and 2001.
He is presently Professor at the Central School of Lille.
1. Introduction
Containerization is the use of containers for
goods transport, especially in the maritime
domain. This process, which began in the
1960s and became widespread in the 1980s,
gave rise to a container logistics chain
deployed around the world. Indeed, major
ports have adapted to this new transport mode
by creating dedicated terminals for loading
and unloading container ships, storing
containers and transferring them to trains or
trucks.
The processes of loading and unloading
containers are among the most important
tasks that have to be considered in a container
terminal. Indeed, the determination of an
effective container organization reduces
material handling costs (i.e., the costs
associated with loading, unloading and
transporting cargo) and minimizes the time of
loading and unloading the containers.
Studies in Informatics and Control, Vol. 18, No. 4/2009, pages 315-324
This work addresses one of the dock
management issues in a port, and more
specifically the organization of the containers
at the port.
At each port of destination, some containers
are unloaded from the ship and stored in the
port to be delivered to their customers. Our
aim is to determine a valid container
arrangement in the port, in order to meet
customers' delivery deadlines and reduce the
loading/unloading time of these containers as
well as the number of rehandlings, and
accordingly to minimize the ship idle time.
When studying such optimization problems,
it is necessary to take into consideration two
main aspects: the on-time delivery of
containers to customers and the re-handling
operations. A re-handling is a container
movement made in order to permit access to
another container, or to improve the overall
stowage arrangement, and is considered a
product of poor planning [Wilson and col., 2001].
The problem studied in this work is classified
as a three-dimensional bin packing problem
where containers are the items and storage
spaces in the port are the bins. It falls into
the category of NP-hard problems.
To find solutions for the bin packing
problem, researchers have used heuristics
such as ant colony optimization, tabu search
and genetic algorithms.
In this paper, we propose a genetic algorithm
which consists in selecting two parent
chromosomes from an initially constructed
population using a roulette wheel technique.
Then, the two parents are combined using a
one-point crossover operator. Finally, a
mutation operator is applied.
Some experimental results are presented, in
addition to a study of the influence of the
numbers of containers and chromosomes on
this model.
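The roulette-wheel selection step mentioned above can be sketched in Python as follows (an illustrative sketch, not the authors' code; since the paper minimizes the fitness function, individuals are weighted here by inverse fitness, and all names are mine):

```python
import random

def roulette_select(population, fitnesses):
    """Pick one individual with probability inversely proportional
    to its fitness (the paper minimizes fitness, so lower is better)."""
    weights = [1.0 / f for f in fitnesses]   # invert: small fitness -> large weight
    total = sum(weights)
    r = random.uniform(0.0, total)
    acc = 0.0
    for individual, w in zip(population, weights):
        acc += w
        if acc >= r:
            return individual
    return population[-1]                    # guard against rounding

# choosing two parents for crossover
population = ["chrom_a", "chrom_b", "chrom_c"]
fitnesses = [54.18, 142.98, 369.88]
parents = [roulette_select(population, fitnesses) for _ in range(2)]
```

Fitter (lower-cost) chromosomes are thus selected more often, while every individual keeps a nonzero chance of being picked.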
The rest of this paper is organized as follows:
In section 2, a literature review on the bin
packing problem and some of its variants,
especially the container stowage planning
problem, is presented. Next in section 3, the
mathematical formulation of the problem is
given and the proposed GA is described.
Then, some experiments and results are
presented and discussed in section 4. Finally,
section 5 covers our conclusion.
2. Literature review
The bin packing problem is a basic problem
in operational research and combinatorial
optimization. It consists in finding a valid
arrangement of rectangular objects in
rectangular items called bins, in a way that
minimizes the number of bins used. A
solution to this problem determines the
number of bins used and places all the
objects in the different bins at well-defined
positions and orientations. The traditional
problem is defined in one dimension, but
there are many variants in two or three
dimensions.
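As a concrete illustration of the one-dimensional case just described, the classical first-fit heuristic can be sketched as follows (an illustrative sketch; the function names are mine, not from the paper):

```python
def first_fit(sizes, capacity):
    """First-fit heuristic for 1D bin packing: place each object into
    the first bin that still has room, opening a new bin if none fits."""
    bins = []  # each bin is the list of object sizes it holds
    for s in sizes:
        for b in bins:
            if sum(b) + s <= capacity:
                b.append(s)
                break
        else:
            bins.append([s])
    return bins

# every bin respects the capacity and all objects are placed
packing = first_fit([4, 8, 1, 4, 2, 1], capacity=10)
```

First-fit is a simple approximation; exact methods and metaheuristics such as the genetic algorithm used later in this paper aim to get closer to the minimum number of bins.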
The two-dimensional bin packing problem
(2BP) is a generalization of the one-dimensional
problem [Bansal and Sviridenko, 2007]: it has
the same objective, but all bins and boxes are
defined by their width and height. This
problem has many industrial applications,
especially in cutting optimisation (wood,
cloth, metal, glass) and packing
(transportation and warehousing)
[Lodi and col., 2002].
The three-dimensional bin packing problem
(3BP) is the least studied. It is very rare to find
work on 3D bin packing [Ponce-Pérez and
col., 2005]. In the three-dimensional bin
packing problem we are given a set of n
rectangular-shaped items, each one
characterised by width wj, height hj, and
depth dj (j ∈ J = {1, . . . , n}), and an unlimited
number of identical three-dimensional bins
having width W, height H, and depth D. 3BP
consists of orthogonally packing all items
into the minimum number of bins [Faroe and
col., 2003].
Three dimensional bin packing is applied in
many industrial applications such as filling
pallets [Bischoff and col., 1995], loading
trucks and especially in container loading and
container stowage planning.
Container loading problems can be
divided into two types. The first is the
three-dimensional bin packing problem,
whose aim is to minimise the cost of the
containers used [Bortfeldt and Mack, 2007],
[He and Cha, 2002]. The second is the
knapsack problem, whose target is to
maximise the volume stowed in the required
container [Bortfeldt and Gehring, 2001],
[Raidl, 1999].
The task of determining a viable container
organisation for container ships called
container stowage planning is among the
most important tasks that have to be
considered in a container terminal. Many
approaches have been developed to solve this
problem, rule based, mathematical model,
simulation based and heuristic methods.
[Wilson and col., 2001], [Wilson and Roach,
1999] and [Wilson and Roach, 2000]
developed a computer system that generates a
sub-optimal solution to the stowage pre-planning problem. The planning process of
this model is decomposed into two phases. In
the first phase, called the strategic process,
they use the branch and bound approach to
solve the problem of assigning generalized
containers (having the same characteristics)
to a blocked cargo-space in the ship. In the
second phase, called the tactical process, the
best generalised solution is progressively
refined until each container is specifically
allocated to a stowage location. These
calculations are performed using a tabu
search heuristic.
[Sciomachen and Tanfani, 2007] developed a
heuristic algorithm to solve the problem of
determining stowage plans for containers in a
ship, with the aim of minimising the total
loading time. This approach was compared to
a validated heuristic and the results showed
its effectiveness.
In [Avriel and Penn, 1993] and [Avriel and
col., 1998] a mathematical stowage planning
model for container ships is presented in
order to minimise the number of shifts,
without any consideration for the ship's
stability. Furthermore, [Imai and col., 2002]
applied a mathematical programming model,
but they proposed many simplifying
hypotheses which can make it inappropriate
for practical applications.
[Imai and col., 2006] proposed ship container
stowage and loading plans that satisfy two
conflicting criteria: the ship stability and the
minimum number of container rehandles
required. The problem is formulated as a
multi-objective integer program and they
implement a weighting method to come up
with a single objective function.
In [Bazzazi and col., 2009] a genetic
algorithm is developed to solve an extended
storage space allocation problem (SSAP) in a
container terminal when the type and the size
of containers are different.
We noted that the most studied problems
were ship container stowage and container
loading/unloading. In this paper, we present a
genetic algorithm to solve the container
stowage problem in the port. Our aim is to
determine a valid container arrangement, in
order to meet customers' delivery deadlines
and to reduce the loading/unloading time of
these containers as well as the re-handling
operations. The genetic algorithm is chosen
due to the relatively good results that have
been reported in many works on this problem
[Bazzazi and col., 2009], [Dubrovsky and
col., 2002].
3. Problem Formulation
In this section, we detail our evolutionary
approach by presenting the adopted
mathematical formulation and the
evolutionary algorithm, based on the
following assumptions.
3.1. Assumptions
In our work we suppose that:
- The containers are identical (weight, shape,
type) and each is waiting to be delivered to
its destination.
- Initially, containers are stored at the
platform edge or on the vessel.
- A container can be unloaded only if all the
containers above it are unloaded.
- The containers are loaded from floor to
ceiling.
We are given a set of cuboid containers
localised in a three-dimensional Cartesian
system, shown in Figure 1.
Figure 1. Cartesian coordinate system
3.2. Input parameters
Let us consider the following variables:
i: container index,
n1: maximum number of containers along the X axis,
n2: maximum number of containers along the Y axis,
n3: maximum number of containers along the Z axis,
Ncfloor: maximum number of containers per floor, Ncfloor = n1·n2,
Nfloor: total number of floors,
Ncfloor(j): number of containers on floor j,
Ncmax: maximum number of containers, with Ncmax = n1·n2·n3,
Nc: the number of containers.
3.3. Mathematical formulation
Let us consider that the space used to stow
containers at the port consists of a single
bay. Our fitness function aims to reduce the
number of container rehandlings and thus to
minimize the ship stoppage time. To do that
we use the following function:
Fitness function:
Minimise  Sum(i = 1..Nc) Pi · mi · Xi,(x,y,z)
with x = 1..n1, y = 1..n2, z = 1..n3
Where:
Pi: priority value depending on the delivery date di of container i to its customer, with Pi = 1/di;
mi: the minimum number of container rehandles needed to unload container i;
Xi,(x,y,z): the decision variable, equal to 1 if container i is in position (x, y, z) and 0 otherwise.
Subject to:
Ncfloor(j) ≥ Ncfloor(j+1), with j = 1..Nfloor    (1)
if Xi,(x,y,z) = 0 then Xi,(x,y,z−1) = 0    (2)
The constraint equations (1) and (2) ensure
that a lower-level floor contains more
containers than the floor directly above it.
They also express the fact that a container
can only occupy two kinds of position: either
on another container or on the ground.
4. Evolution procedure
We detail here the evolution procedure used
in our approach. The principle of the
selection procedure is the same as that used
by Kammarti in [Kammarti and col., 2004],
[Kammarti and col., 2005] and by Harbaoui
in [Harbaoui Dridi and col., 2009].
We create an initial population of size N. We
select parents using the roulette-wheel
method; the N new individuals generated by
crossover, mutation and copy after a
selection phase are added to the initial
population to form an intermediate
population, noted Pinter, of size 2N. Pinter is
sorted by fitness in increasing order, and its
first N individuals form population (i + 1),
where i is the iteration number. The principle
of this selection procedure is illustrated in
Figure 2.
Figure 2. Evolution procedure
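The fitness function of Section 3.3 can be sketched in Python as follows (a minimal sketch, not the authors' code: mi is taken here to be the number of containers stacked above container i in the same (x, y) column, and all names are mine):

```python
def fitness(positions, delivery_dates):
    """Fitness of one chromosome (lower is better): sum over containers
    of P_i * m_i, where P_i = 1/d_i and m_i is approximated as the number
    of containers stacked above container i in the same column."""
    total = 0.0
    for i, (x, y, z) in positions.items():
        p_i = 1.0 / delivery_dates[i]                 # priority: earlier date -> larger P_i
        m_i = sum(1 for (xj, yj, zj) in positions.values()
                  if xj == x and yj == y and zj > z)  # rehandles needed to reach i
        total += p_i * m_i
    return total

# two containers in one stack: container 1 sits under container 2
positions = {1: (0, 0, 0), 2: (0, 0, 1)}
dates = {1: 2.0, 2: 4.0}   # container 1 must leave first but is buried
print(fitness(positions, dates))  # 0.5: one rehandle weighted by P_1 = 1/2
```

Swapping the two containers in the stack halves the penalty, which is exactly the behaviour the genetic search exploits: urgent containers migrate towards the top.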
4.1. Solution representation: chromosome
The developed solution representation
consists in a three-dimensional matrix that
reproduces the real storage of the containers.
Figure 3 shows a solution representation.
Figure 3. Solution representation
4.2. Initial population (initial solution generation procedure)
To improve the quality of the solutions in the
initial population, we opted for the
construction of a heuristic representing the
different characteristics of the problem. The
heuristic principle is to always keep on top
the containers that will be unloaded first.
Let us consider the following:
cont = {cont[x][y][z] / 1 ≤ x ≤ n1, 1 ≤ y ≤ n2, 1 ≤ z ≤ n3},
which designates the container coordinates;
this is a chromosome as shown before. The
association of such chromosomes constructs
the initial population.
To create the initial population we randomly
build a column of a given number of
chromosomes. Each chromosome contains a
given number of containers (Nc). Figure 4
represents the chromosome creation
algorithm.
Begin create_chromosome
  container_number = 1
  While (container_number <= Nc)
    For z = 0 to n3
      For x = 0 to n1
        For y = 0 to n2
          cont[x][y][z] = container_number
          container_number++
        End
      End
    End
  End
  For i = 0 to container_number
    Permute two randomly selected containers
  End
End
Figure 4. Chromosome creation algorithm
4.3. Crossover operator
The crossover operator adopted chooses two
individuals I1 and I2 of the initial population
using roulette-wheel selection. Then we
generate randomly, according to the three
axes x, y and z, three crossover planes
respectively noted p-crois-x, p-crois-y and
p-crois-z.
The child E1 receives the same genes as I1
within this crossover region; the remaining
places are filled with the missing genes in the
order in which they appear in I2. Likewise,
the child E2 receives the same genes as I2
within this crossover region; the remaining
places are filled with the missing genes in the
order in which they appear in I1. The
crossover operation is applied randomly with
a probability Pc > 0.7.
Figure 5 shows the crossover of two parents
I1 and I2 giving two children E1 and E2. In
this example, p-crois-x = 2, p-crois-y = 2 and
p-crois-z = 1.
Figure 5. The crossover operation
4.4. Mutation operation
In order to allow the exploration of various
regions of the search space, it is necessary to
introduce random mutation operations in the
evolution process. Mutation operators prevent
the degeneration of the population; this
degeneration can lead to the convergence of
individuals to a local optimum.
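The crossover and mutation operators described in this section can be sketched in Python on a small 2x2x2 chromosome (an illustrative sketch, not the authors' code: the region-based reading of the three crossover planes is my interpretation, and all names are mine):

```python
import random

N1 = N2 = N3 = 2
CELLS = [(x, y, z) for x in range(N1) for y in range(N2) for z in range(N3)]

def crossover(i1, i2, px, py, pz):
    """One child of the three-plane crossover: cells inside the region
    x < px, y < py, z < pz keep parent 1's genes; the remaining cells
    are filled with the missing genes in the order they appear in
    parent 2 (region-based reading of the crossover planes)."""
    child = {}
    for c in CELLS:
        if c[0] < px and c[1] < py and c[2] < pz:
            child[c] = i1[c]
    kept = set(child.values())
    filler = [i2[c] for c in CELLS if i2[c] not in kept]
    for c in CELLS:
        if c not in child:
            child[c] = filler.pop(0)
    return child

def mutate(chrom):
    """Swap the containers held by two randomly chosen cells."""
    a, b = random.sample(CELLS, 2)
    chrom[a], chrom[b] = chrom[b], chrom[a]
```

Both operators preserve the key invariant of the encoding: a child or mutant is still a permutation of the container identifiers over the storage cells.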
The mutation operator randomly swaps two
containers. In Figure 6, the containers
selected to be switched are cont[0][0][2] and
cont[1][0][0].
Figure 6. The mutation operation
4.5. Evolutionary approach algorithm
The algorithm of our evolutionary approach
is shown in Figure 7.
Begin
  Create, evaluate and correct the initial population
  While (the end criterion is not satisfied) do
    o Copy the N best solutions from the present
      population to a new intermediate 2N-sized one
    o While (the intermediate population is not full) do
        According to the roulette principle, fill up the
        intermediate population with child solutions
        obtained by crossover, mutation or copy.
    o Sort the intermediate population solutions
      according to their fitness in increasing order
    o Copy the best present solutions to the following
      population (N-sized).
  Return the best solution
End
Figure 7. Evolutionary approach algorithm
5. Experimental Results
In this section, we present different
simulations according to the total number of
containers in a solution, Nc, as well as the
size N of the population.
We considered three problem sizes:
- Small sizes (up to 64 containers per solution)
- Medium sizes (between 125 and 750 containers per solution)
- Large sizes (1000 containers per solution).
For each problem size, we generate N
chromosomes per population. We consider
that:
- The single individual size will be between 27 and 1000 containers.
- The number of chromosomes in a population, N, varies between 10 and 250.
- The number of generations, Ngene, varies between 10 and 300.
- n1, n2 and n3, with n1 = n2 = n3, are defined by the user.
- The number of containers per chromosome is also defined by the user.
- The delivery date of each container is randomly generated.
5.1. The influence of the number of containers
In this example we select N = 50, we set the
number of generations to 20 and we compute
the fitness function value each time. The
results are presented in Table 1.
Nc      Fi         Ff
64      142.98     54.18
125     369.88     174.07
343     1332.76    677.13
729     2476.22    1734.79
1000    4524.98    2773.49
Table 1. Evolution of the fitness function
according to the number of containers
Fi denotes the fitness value of the best
solution in the first generation, and Ff the
fitness value of the best solution in the last
generation, i.e. once convergence is reached.
To show the convergence of our approach,
consider the case Nc = 64: in the first
generation the best individual has fitness
Fi = 142.98, while in the last generation the
best fitness is Ff = 54.18. In the case
Nc = 1000, Fi = 4524.98 and Ff = 2773.49.
So the smaller the number of containers, the
better the fitness value.
There is a relationship between the number
of iterations and the value of the fitness
function. In fact, we varied the number of
generations while keeping the same number
of containers (Nc = 64) and of chromosomes
(N = 50).
Ngene   Fi       Ff
20      142.98   54.18
50      128.45   45.799
100     124.76   43.68
150     134.81   38.23
175     127.23   38.40
200     138.21   38.47
Table 2. The influence of the number of generations
According to the results illustrated in Table 2,
we note that the higher the iteration number,
the better the quality of the fitness function.
We remark that, from 100 iterations on, the
fitness value stabilizes around 38. The curve
shown in Figure 8 confirms these results.
Figure 8. Evolution of the fitness function
according to the generation number
We also note that the convergence time, the
simulation time and the number of
generations needed to reach good solutions
all increase when the number of containers
grows (Figure 9).
Figure 9. Evolution of simulation time according
to container number with 20 generations
5.2. The influence of the number of chromosomes N
In this example, we fix the size of our
problem to 125 containers per chromosome
and we vary the number of solutions per
population to study the algorithm behaviour
over 100 generations. The results are
presented in Table 3 and Figure 10.
N     Fi       Ff       Tsimulation
20    346.12   144.14   61.92
40    349.63   128.51   67.48
50    326.33   124.65   84.21
75    340.70   121.53   110.27
100   280.11   115.87   144.93
125   270.64   107.36   174.68
Table 3. Evolution of the fitness function
according to the number of chromosomes per
population
According to the results, we note that the
higher the number of chromosomes per
population, the better the value of the fitness
function; however, the simulation time
increases.
Figure 10. Evolution of simulation time
according to chromosome number
6. Summary and Conclusions
In this work, we have presented an
evolutionary approach to solve the problem
of container organization at the port. Our
objective is to respect customers' delivery
deadlines and to reduce the number of
container rehandles.
We gave a brief literature review of the bin
packing problem and some of its variants,
especially the container stowage planning
problem. Then, we described the
mathematical formulation of the problem.
After that, we presented our optimization
approach, an evolutionary algorithm based
on genetic operators, and detailed how the
genetic algorithm improves the solutions.
The experimental results were then
presented, showing the influence of the
number of containers in a chromosome and
of the number of chromosomes per
population on the convergence and the
simulation time.
REFERENCES
1. AVRIEL, M. and PENN, M., Exact and approximate solutions of the container ship stowage problem, Computers and Industrial Engineering 25, pp. 271-274, 1993.
2. AVRIEL, M., PENN, M., SHPIRER, N. and WITTEBOON, S., Stowage planning for container ships to reduce the number of shifts, Annals of Operations Research 76, pp. 55-71, 1998.
3. BANSAL, N. and SVIRIDENKO, M., Two-dimensional bin packing with one-dimensional resource augmentation, Discrete Optimization, Volume 4, Issue 2, pp. 143-153, June 2007.
4. BAZZAZI, M., SAFAEI, N. and JAVADIAN, N., A genetic algorithm to solve the storage space allocation problem in a container terminal, Computers & Industrial Engineering, 2009.
5. BISCHOFF, E. E., JANETZ, F. and RATCLIFF, M. S. W., Loading pallets with non-identical items, European Journal of Operational Research, 1995.
6. BORTFELDT, A. and MACK, D., A heuristic for the three-dimensional strip packing problem, European Journal of Operational Research, 2007.
7. BORTFELDT, A. and GEHRING, H., A hybrid genetic algorithm for the container loading problem, European Journal of Operational Research, Volume 131, Issue 1, pp. 143-161, 16 May 2001.
8. DUBROVSKY, O., LEVITIN, G. and PENN, M., A genetic algorithm with compact solution encoding for the container ship stowage problem, Journal of Heuristics 8, pp. 585-599, 2002.
9. FAROE, O., PISINGER, D. and ZACHARIASEN, M., Guided local search for the three-dimensional bin packing problem, INFORMS Journal on Computing, Vol. 15, No. 3, 2003.
10. HARBAOUI DRIDI, I., KAMMARTI, R., KSOURI, M. and BORNE, P., A Genetic Algorithm for the Multi-Pickup and Delivery Problem with Time Windows, Studies in Informatics and Control, Volume 18, Number 2, June 2009.
11. HE, D. Y. and CHA, J. Z., Research on solution to complex container loading problem based on genetic algorithm, Proceedings of the First International Conference on Machine Learning and Cybernetics, 2002.
12. IMAI, A., NISHIMURA, E., PAPADIMITRIOU, S. and SASAKI, K., The containership loading problem, International Journal of Maritime Economics 4, pp. 126-148, 2002.
13. IMAI, A., SASAKI, K., NISHIMURA, E. and PAPADIMITRIOU, S., Multi-objective simultaneous stowage and load planning for a container ship with container rehandle in yard stacks, European Journal of Operational Research, 2006.
14. KAMMARTI, R., HAMMADI, S., BORNE, P. and KSOURI, M., A New Hybrid Evolutionary Approach for the Pickup and Delivery Problem With Time Windows, IEEE International Conference on Systems, Man and Cybernetics, Vol. 2, pp. 1498-1503, 2004.
15. KAMMARTI, R., HAMMADI, S., BORNE, P. and KSOURI, M., Lower Bounds in a Hybrid Evolutionary Approach for the Pickup and Delivery Problem With Time Windows, IEEE International Conference on Systems, Man and Cybernetics, Volume 2, pp. 1156-1161, October 2005.
16. LODI, A., MARTELLO, S. and VIGO, D., Recent advances on two-dimensional bin packing problems, Discrete Applied Mathematics, Volume 123, Issues 1-3, pp. 379-396, 15 November 2002.
17. PONCE-PÉREZ, A., PÉREZ-GARCIA, A. and AYALA-RAMIREZ, V., Bin-packing using genetic algorithms, Proceedings of the 15th International Conference on Electronics, Communications and Computers, 2005.
18. SCIOMACHEN, A. and TANFANI, E., A 3D-BPP approach for optimising stowage plans and terminal productivity, European Journal of Operational Research, Volume 183, Issue 3, pp. 1433-1446, 16 December 2007.
19. RAIDL, G. R., Weight-codings in a genetic algorithm for the multiconstraint knapsack problem, Proceedings of the 1999 IEEE Congress on Evolutionary Computation, Washington DC, 1999.
20. WILSON, I. D. and ROACH, P. A., Container stowage planning: a methodology for generating computerized solutions, Journal of the Operational Research Society, Vol. 51, pp. 1248-1255, 2000.
21. WILSON, I. D., ROACH, P. A. and WARE, J. A., Container stowage pre-planning: using search to generate solutions, a case study, Knowledge-Based Systems, Volume 14, Issues 3-4, pp. 137-145, June 2001.
22. WILSON, I. D. and ROACH, P. A., Principles of combinatorial optimization applied to container-ship stowage planning, Journal of Heuristics 5, pp. 403-418, 1999.
An O(log k log^2 n)-competitive Randomized Algorithm for the
k-Server Problem
arXiv:1510.07773v1 [] 27 Oct 2015
Wenbin Chen∗†‡
Abstract
In this paper, we show that there is an O(log k log^2 n)-competitive
randomized algorithm for the k-server problem on any metric space
with n points, which improves the previous best competitive ratio
of O(log^2 k log^3 n log log n) by Nikhil Bansal et al. (FOCS 2011,
pages 267-276).
Keywords: k-server problem; online algorithm; primal-dual method;
randomized algorithm
1 Introduction
The k-server problem is to schedule k mobile servers to serve a sequence of requests in a metric
space with the minimum possible movement distance. In 1990, Manasse et al. introduced the k-server
problem as a generalization of several important online problems such as the paging and caching
problems [29] (its conference version is [28]), in which they proposed a 2-competitive algorithm
for the 2-server problem and an (n − 1)-competitive algorithm for the (n − 1)-server problem in an n-point
metric space. They also showed that any deterministic online algorithm for the k-server problem has
competitive ratio at least k. They proposed the well-known k-server conjecture: for the k-server
problem on any metric space with more than k distinct points, there exists a deterministic online
algorithm with competitive ratio k.
It was shown in [29] that the k-server conjecture holds for two special cases: k = 2 and n = k + 1.
The k-server conjecture also holds for the k-server problem on a uniform metric. This special case
of the k-server problem on a uniform metric is called the paging (also known as caching) problem.
Sleator and Tarjan proposed a k-competitive algorithm for the paging problem [31]. For some
other special metrics such as the line and trees, k-competitive online algorithms exist. Yair Bartal
∗ Email: [email protected]
† Department of Computer Science, Guangzhou University, P.R. China
‡ State Key Laboratory for Novel Software Technology, Nanjing University, P.R. China
and Elias Koutsoupias showed that the Work Function Algorithm for the k-server problem is k-competitive
in the following special metric spaces: the line, the star, and any metric space
with k + 2 points [16]. Marek Chrobak and Lawrence L. Larmore proposed the k-competitive
Double-Coverage algorithm for the k-server problem on trees [21].
For the k-server problem on a general metric space, the k-server conjecture remains open. Fiat
et al. were the first to show that there exists an online algorithm whose competitive ratio depends
only on k for any metric space: its competitive ratio is Θ((k!)^3). The bound was later improved
by Grove, who showed that the harmonic algorithm has competitive ratio O(k·2^k) [25]. The result
was improved to O(2^k log k) by Y. Bartal and E. Grove [14]. Significant progress was achieved by
Koutsoupias and Papadimitriou, who proved that the work function algorithm has competitive
ratio 2k − 1 [27].
It is generally believed that randomized online algorithms can achieve better competitive
ratios than their deterministic counterparts. For example, there are several O(log k)-competitive
algorithms for the paging problem and an Ω(log k) lower bound on the competitive ratio [24, 30,
1, 4]. Despite much work [17, 13, 15], Ω(log k) is still the best known lower bound in the randomized
case. Recently, N. Bansal et al. proposed the first polylogarithmic-competitive
randomized algorithm for the k-server problem on a general metric space [3]. Their randomized
algorithm has competitive ratio O(log^2 k log^3 n log log n) for any metric space with n points, which
improves on the deterministic 2k − 1 competitive ratio of Koutsoupias and Papadimitriou whenever
n is sub-exponential in k.
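The O(log k)-competitive paging algorithms cited above are based on the randomized marking technique; the following is a minimal sketch of that idea (my code, not taken from any of the cited papers):

```python
import random

def marking_paging(requests, k, seed=0):
    """Randomized marking algorithm for paging: on a fault, evict a
    uniformly random unmarked page; when every cached page is marked,
    a new phase starts and all marks are cleared. Returns the number
    of faults. The algorithm is known to be O(log k)-competitive."""
    rng = random.Random(seed)
    cache, marked = set(), set()
    faults = 0
    for p in requests:
        if p not in cache:
            faults += 1
            if len(cache) == k:
                if marked == cache:              # phase ends: unmark everything
                    marked.clear()
                victim = rng.choice(sorted(cache - marked))
                cache.remove(victim)
            cache.add(p)
        marked.add(p)                            # the requested page is protected
    return faults
```

The uniform metric makes every eviction cost the same, which is exactly why paging is the simplest instance of the k-server problem.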
For the k-server problem on a general metric space, it is widely conjectured that there is an
O(log k)-competitive randomized algorithm; this is called the randomized k-server conjecture.
For the paging problem (which corresponds to the k-server problem on a uniform metric), there are
O(log k)-competitive algorithms [24, 30, 1]. For the weighted paging problem (which corresponds to the
k-server problem on a weighted star metric), there are also O(log k)-competitive algorithms
[4, 9], obtained via the online primal-dual method. More extensive literature on the k-server problem can be
found in [26, 18].
In this paper, we show that there exists an O(log k log^2 n)-competitive randomized algorithm
for the k-server problem on any metric space with n points, which improves the previous best
competitive ratio of O(log^2 k log^3 n log log n) by Nikhil Bansal et al. [3].
In order to obtain our results, we use the online primal-dual method, developed by Buchbinder,
Naor et al. in recent years, who have used it to design online algorithms for many online
problems such as covering and packing problems, the ad-auctions problem and so on [4, 5, 6, 7, 8, 10].
First, we propose a primal-dual formulation for the fractional k-server problem on a
weighted hierarchically well-separated tree (HST). Then, we design an O(ℓ log k)-competitive
online algorithm for the fractional k-server problem on a weighted HST with depth ℓ. Since any
HST with n leaves can be transformed into a weighted HST of depth O(log n) with any leaf-to-leaf
distance distorted by at most a constant factor [3], we obtain an O(log k log n)-competitive online
algorithm for the fractional k-server problem on an HST. Using the known relationship between
the fractional k-server problem and the randomized k-server problem, we conclude that there is an
O(log k log n)-competitive randomized algorithm for the k-server problem on an HST with n points.
By metric embedding theory [22], we conclude that there is an O(log k log^2 n)-competitive
randomized algorithm for the k-server problem on any metric space with n points.
2 Preliminaries
In this section, we give some basic definitions.
Definition 2.1. (Competitive ratio, adapted from [32]) A deterministic online algorithm DALG
is called r-competitive if there exists a constant c such that for any request sequence ρ, cost_DALG(ρ) ≤
r · cost_OPT(ρ) + c, where cost_DALG(ρ) and cost_OPT(ρ) are the costs of the online algorithm DALG
and of the optimal offline algorithm OPT respectively.
For a randomized online algorithm, we have a similar definition of competitive ratio:
Definition 2.2. (Adapted from [32]) A randomized online algorithm RALG is called r-competitive
if there exists a constant c such that for any request sequence ρ, E[cost_RALG(ρ)] ≤
r · cost_OPT(ρ) + c, where E[cost_RALG(ρ)] is the expected cost of the randomized online algorithm
RALG.
In order to analyze randomized algorithms for the k-sever problem, D. Türkouğlu introduce the
fractional k-sever problem [32]. On the fractional k-sever problem, severs are viewed as fractional
entities as opposed to units and an online algorithm can move fractions of servers to the requested
point.
Definition 2.3. (Fractional k-server problem, adapted from [32]) Suppose that there are a metric
space S and a total of k fractional servers located at the points of the metric space. Given a sequence
of requests, each request must be served by assembling one unit of server mass at the requested point,
by moving fractional servers there. The cost of an algorithm for serving a sequence
of requests is the cumulative distance moved by the servers, where moving a w-fraction
of a server over a distance δ costs wδ.
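To make the fractional cost model concrete, here is a minimal sketch; the helper name `serve_request` is illustrative and not from [32]:

```python
# Sketch of the fractional cost model: moving a w-fraction of a server
# over distance delta costs w * delta. One unit of mass must arrive at
# the requested point; names and values here are illustrative.

def serve_request(moves):
    """moves: list of (fraction_moved, distance) pairs used to bring
    one unit of server mass to the requested point."""
    assert abs(sum(w for w, _ in moves) - 1.0) < 1e-9  # one full unit arrives
    return sum(w * delta for w, delta in moves)

# Serve a request with half a server from distance 2 and half from distance 6:
cost = serve_request([(0.5, 2.0), (0.5, 6.0)])
print(cost)  # 0.5*2 + 0.5*6 = 4.0
```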
In [11, 12], Bartal introduced the Hierarchical Well-Separated Tree (HST), into
which a general metric can be embedded with respect to a probability distribution. For any internal node, the
distance from it to its parent node is σ times the distance from it to each of its child nodes. The number
σ is called the stretch of the HST, and an HST with stretch σ is called a σ-HST. In the following, we
give its formal definition.
Definition 2.4. (Hierarchically Well-Separated Trees (HSTs) [20]) For σ > 1, a σ-Hierarchically
Well-Separated Tree (σ-HST) is a rooted tree T = (V, E) whose edge length function d satisfies the
following properties:
(1) For any node v and any two children w1, w2 of v, d(v, w1) = d(v, w2).
(2) For any node v, d(p(v), v) = σ · d(v, w), where p(v) is the parent of v and w is a child of v.
(3) For any two leaves v1 and v2, d(p(v1), v1) = d(p(v2), v2).
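The defining properties fix all distances in the tree once the root-to-child distance D and the stretch σ are chosen. The following sketch (an illustrative instance, not from the paper) computes leaf-to-leaf distances in a σ-HST:

```python
# Minimal sketch of a sigma-HST (Definition 2.4): every edge from a node
# at depth j to its parent has length D / sigma^(j-1), so edge lengths
# shrink by a factor sigma at each level. Parameters are illustrative.

def edge_len(depth, D, sigma):
    # distance from a node at depth `depth` (>= 1) to its parent
    return D / sigma ** (depth - 1)

def leaf_to_leaf(depth_a, depth_b, meet_depth, D, sigma):
    # distance between two leaves whose lowest common ancestor sits at meet_depth
    up = sum(edge_len(j, D, sigma) for j in range(meet_depth + 1, depth_a + 1))
    down = sum(edge_len(j, D, sigma) for j in range(meet_depth + 1, depth_b + 1))
    return up + down

# In a 2-HST with root-to-child distance D = 8, two depth-3 leaves meeting
# at the root are 2 * (8 + 4 + 2) = 28 apart.
print(leaf_to_leaf(3, 3, 0, 8.0, 2.0))  # 28.0
```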
Fakcharoenphol et al. showed the following result [22].
Lemma 2.5. If there is a γ-competitive randomized algorithm for the k-server problem on a σ-HST
with all requests at the n leaves, then there exists an O(γσ log n)-competitive randomized online
algorithm for the k-server problem on any metric space with n points.
We still need the definition of a weighted hierarchically well-separated tree introduced in [3].
Definition 2.6. (Weighted Hierarchically Well-Separated Trees (weighted HSTs) [3]) A weighted
σ-HST is a rooted tree satisfying properties (1) and (3) of Definition 2.4, together with the property:
d(p(v), v) ≥ σ · d(v, w) for any node v that is neither a leaf nor the root, where p(v) is the parent of v and w is a child of v.
In [3], Bansal et al. showed that a σ-HST of arbitrary depth with n leaves can be embedded into
a weighted σ-HST of depth O(log n) with constant distortion, described as follows.
Lemma 2.7. Let T be a σ-HST with n leaves, of possibly arbitrary depth. Then T can
be transformed into a weighted σ-HST T̃ with depth O(log n) such that the leaves of T̃ and T are
the same, and every leaf-to-leaf distance in T is distorted in T̃ by a factor of at most 2σ/(σ − 1).
3 An O(log² k) Randomized Algorithm for the k-Server Problem on an HST when n = k + 1
In this paper, we view the k-server problem as a weighted caching problem in which the cost of
evicting a page from the cache in favor of another page satisfies the triangle inequality: a point is
viewed as a page; the set of k points currently occupied by the k servers is viewed as the cache, which holds
k pages; and the distance between two points i and j is viewed as the cost of evicting the corresponding page p_i
from the cache using the corresponding page p_j.
Let [n] = {p_1, ..., p_n} denote the set of n pages and d(p_i, p_j) denote the cost of evicting
the page p_i from the cache using the page p_j, for any p_i, p_j ∈ [n]; this cost satisfies the triangle
inequality: for any pages i, j, s, d(p_i, p_i) = 0; d(p_i, p_j) = d(p_j, p_i); d(p_i, p_j) ≤ d(p_i, p_s) + d(p_s, p_j).
Let p_1, p_2, ..., p_M be the sequence of requested pages up to time M, where p_t is the page requested at
time t. At each time step, if the requested page p_t is already in the cache, then no cost is incurred.
Otherwise, the page p_t must be fetched into the cache by evicting some other pages p from the cache,
and a cost of Σ_p d(p, p_t), summed over the evicted pages p, is incurred.
In this section, in order to describe our algorithm design idea clearly, we consider the case
n = k + 1.
First, we fix some notation. Let σ-HST denote a hierarchically well-separated tree with
stretch factor σ. Let N be the number of nodes in the σ-HST and let its leaves be p_1, p_2, ..., p_n. Let ℓ(v)
denote the depth of a node v and let r denote the root node, so ℓ(r) = 0. All leaves p have the same
depth, denoted ℓ, i.e., ℓ(p) = ℓ. Let p(v) denote the parent node of a node v and C(v) the set
of children of a node v. Let D denote the distance from the root to one of its children, and let D(v) denote the
distance from a node v to its parent, i.e., D(v) = d(v, p(v)). It is easy to see that D(v) = D/σ^{ℓ(v)−1}.
Let T_v denote the subtree rooted at v, L(T_v) the set of leaves in T_v, and |T_v| the
number of leaves in T_v. For a leaf p_i, let A(p_i, j) denote the ancestor node of p_i at depth
j; thus A(p_i, ℓ) is p_i, A(p_i, 0) is the root r, and so on. At time t, let the variable x_{p_i,t} denote the
fraction of p_i that is in the cache and u_{p_i,t} the fraction of p_i that is out of the cache. Obviously,
x_{p_i,t} + u_{p_i,t} = 1 and Σ_{p∈[n]} x_{p,t} = k. For a node v, let u_{v,t} = Σ_{p∈L(T_v)} u_{p,t}, i.e., the total fraction of
pages in the subtree T_v that is out of the cache. It is easy to see that u_{v,t} = Σ_{w∈C(v)} u_{w,t}. Suppose
that at time 0, the set of initial k pages in the cache is I = {p_{i_1}, ..., p_{i_k}}.
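The node masses u_{v,t} and the identity u_{v,t} = Σ_{w∈C(v)} u_{w,t} can be illustrated on a toy tree; the tree shape, node names, and values below are illustrative, not from the paper:

```python
# Sketch: u_v is the total out-of-cache mass of the leaves under v, so it
# aggregates bottom-up and automatically satisfies the Node Identity
# Property u_v = sum over children w of u_w. Toy tree, illustrative values.

children = {"r": ["a", "b"], "a": ["p1", "p2"], "b": ["p3", "p4"]}
u_leaf = {"p1": 0.0, "p2": 0.25, "p3": 0.75, "p4": 0.0}  # n - k = 1 total mass out

def u(v):
    if v not in children:               # v is a leaf
        return u_leaf[v]
    return sum(u(w) for w in children[v])  # Node Identity Property

print(u("a"), u("b"), u("r"))  # 0.25 0.75 1.0
```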
At time t, when the request p_t arrives, if a mass Δ(p_t, p) of page p_t is fetched into the cache by
evicting the page p from the cache, then the eviction cost is d(p, p_t) · Δ(p_t, p). For a σ-HST metric,
suppose the path from p_t to p is: p_t, v_j, ..., v_1, v, v′_1, ..., v′_j, p, where v is the first common
ancestor node of p_t and p. By the definition of a σ-HST, we have D(p_t) = D(p) and D(v_i) = D(v′_i)
for any 1 ≤ i ≤ j. Thus,

d(p, p_t) = D(p_t) + Σ_{i=1}^{j} D(v_i) + D(p) + Σ_{i=1}^{j} D(v′_i) = 2D(p_t) + Σ_{i=1}^{j} 2D(v_i).

So the eviction cost is (2D(p_t) + Σ_{i=1}^{j} 2D(v_i)) · Δ(p_t, p). Since p can be any page in [n] \ {p_t}, the eviction
cost incurred at time t is Σ_{v=1}^{N} 2D(v) max{0, u_{v,t−1} − u_{v,t}}. Thus, we give the LP formulation for the
fractional k-server problem on a σ-HST as follows.

(P) Minimize Σ_{t=1}^{M} Σ_{v=1}^{N} 2D(v) z_{v,t} + Σ_{t=1}^{M} ∞ · u_{p_t,t}
Subject to:
∀t > 0 and S ⊆ [n] with |S| > k: Σ_{p∈S} u_{p,t} ≥ |S| − k;   (3.1)
∀t > 0 and any subtree T_v (v ≠ r): z_{v,t} ≥ Σ_{p∈L(T_v)} (u_{p,t−1} − u_{p,t});   (3.2)
∀t > 0 and any node v: z_{v,t}, u_{v,t} ≥ 0;   (3.3)
For t = 0 and any leaf node p ∈ I: u_{p,0} = 0;   (3.4)
For t = 0 and any leaf node p ∉ I: u_{p,0} = 1.   (3.5)
The first primal constraint (3.1) states that at any time t, if we take any set S of vertices with
|S| > k, then Σ_{p∈S} u_{p,t} = |S| − Σ_{p∈S} x_{p,t} ≥ |S| − Σ_{p∈[n]} x_{p,t} = |S| − k, i.e., the total number of pages out
of the cache is at least |S| − k. The variable z_{v,t} denotes the total fractional mass of pages in T_v that
is moved out of the subtree T_v (obviously, no variable z_{r,t} is needed for the root
node). The fourth and fifth constraints ((3.4) and (3.5)) enforce that the initial k pages in the cache
are p_{i_1}, ..., p_{i_k}. The first term in the objective function is the total cost of moving mass out of the cache,
and the second term enforces the requirement that the page p_t must be in the cache at time t (i.e.,
u_{p_t,t} = 0).
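As a sanity check, constraint (3.1) can be verified by brute force on a tiny instance; since the cache always holds Σ_p x_{p,t} = k units, every set S with |S| > k satisfies it automatically. All values below are illustrative:

```python
# Brute-force check of primal constraint (3.1) on a toy instance:
# for every S with |S| > k, the out-of-cache mass inside S is >= |S| - k.
from itertools import combinations

n, k = 4, 3
u = {"p1": 0.0, "p2": 0.25, "p3": 0.75, "p4": 0.0}   # sums to n - k = 1
assert abs(sum(u.values()) - (n - k)) < 1e-9

for size in range(k + 1, n + 1):
    for S in combinations(u, size):
        assert sum(u[p] for p in S) >= size - k - 1e-9
print("constraint (3.1) holds for every S with |S| > k")
```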
Its dual formulation is as follows.

(D) Maximize Σ_{t=1}^{M} Σ_{S⊆[n], |S|>k} (|S| − k) a_{S,t} + Σ_{p∉I} γ_p
Subject to:
∀t and p ∈ [n] \ {p_t}: Σ_{S: p∈S} a_{S,t} − Σ_{j=1}^{ℓ} (b_{A(p,j),t+1} − b_{A(p,j),t}) ≤ 0;   (3.6)
For t = 0 and ∀p ∈ [n]: γ_p − Σ_{j=1}^{ℓ} b_{A(p,j),1} ≤ 0;   (3.7)
∀t > 0 and any subtree T_v: b_{v,t} ≤ 2D(v);   (3.8)
∀t > 0, any node v and any S with |S| > k: a_{S,t}, b_{v,t} ≥ 0.   (3.9)

In the dual formulation, the variable a_{S,t} corresponds to the constraints of type (3.1); the
variable b_{v,t} corresponds to the constraints of type (3.2); and the variable γ_p corresponds to the
constraints of types (3.4) and (3.5).
Based on the above primal-dual formulation, we extend the design idea of Bansal et al.'s primal-dual
algorithm for the metrical task system problem on a σ-HST [10] to the k-server problem on a
σ-HST. The design idea of our online algorithm is described as follows. During the execution of
our algorithm, it always maintains the following relation between the primal variable u_{v,t} and the dual
variable b_{v,t+1}:

u_{v,t} = f(b_{v,t+1}) = (|T_v|/k) (exp(b_{v,t+1} ln(1 + k)/(2D(v))) − 1).

When the request p_t arrives at time t, the page p_t is gradually fetched into the cache and other pages are gradually moved out of
the cache at certain rates until p_t is completely fetched into the cache (i.e., u_{p_t,t} is decreased at some
rate and u_{p,t} is increased at some rate for each p ∈ [n] \ {p_t}, until u_{p_t,t} becomes 0). This can be
viewed as moving the mass u_{p_t,t} out of leaf p_t through its ancestor nodes and distributing it to the other
leaves p ∈ [n] \ {p_t}. In order to compute the exact amount distributed to each page p ∈ [n] \ {p_t},
the online algorithm maintains the following invariants:
1. (Satisfying Dual Constraints:) All dual constraints of type (3.6) on the other
leaves [n] \ {p_t} are tight.
2. (Node Identity Property:) u_{v,t} = Σ_{w∈C(v)} u_{w,t} holds for each node v.
We now describe the online algorithm's process more precisely. At time t, when the request
p_t arrives, we initially set u_{p_t,t} = u_{p_t,t−1}. If u_{p_t,t} = 0, then we do nothing; the primal cost
and the dual profit are both zero, and all the invariants continue to hold. If u_{p_t,t} ≠ 0, then we start
to increase the variable a_S at rate 1. At each step, we would like to keep the dual constraints (3.6)
tight and maintain the node identity property. However, increasing the variable a_S violates the dual
constraints (3.6) on the leaves in [n] \ {p_t}. Hence, we increase other dual variables in order to keep
these dual constraints (3.6) tight. But increasing these variables may in turn violate the node identity
property, which forces us to update further dual variables. This process results in moving the initial mass
u_{p_t,t} from the leaf p_t to the leaves [n] \ {p_t}. We stop the updating process when u_{p_t,t} becomes 0.
In the following, we compute the exact rates at which the mass u_{p_t,t} should be moved from p_t
through its ancestor nodes to the other leaves in [n] \ {p_t} at time t in the σ-HST. Because of space
limits, we defer the proofs of some of the following claims to the Appendix. First, we show a property of
the function f.
Lemma 3.1. du_{v,t}/db_{v,t+1} = (ln(1 + k)/(2D(v))) · (u_{v,t} + |T_v|/k).

Proof. Since u_{v,t} = (|T_v|/k)(exp(b_{v,t+1} ln(1 + k)/(2D(v))) − 1), we take the derivative with respect to b_{v,t+1} and obtain the claim.
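The identity in Lemma 3.1 can be checked numerically against a finite-difference derivative; the parameter values below are illustrative:

```python
# Numerical check (illustrative parameters) that
#   f(b) = (|T_v|/k) * (exp(b * ln(1+k) / (2 D(v))) - 1)
# satisfies f'(b) = (ln(1+k) / (2 D(v))) * (f(b) + |T_v|/k), as in Lemma 3.1.
import math

k, Tv, D = 7, 4, 8.0
c = math.log(1 + k) / (2 * D)

def f(b):
    return (Tv / k) * (math.exp(c * b) - 1)

b, eps = 3.0, 1e-6
numeric = (f(b + eps) - f(b - eps)) / (2 * eps)   # central difference
closed = c * (f(b) + Tv / k)                      # Lemma 3.1 closed form
print(abs(numeric - closed) < 1e-6)  # True
```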
In order to maintain the Node Identity Property u_{v,t} = Σ_{w∈C(v)} u_{w,t} for each node v at any
time t, whenever u_{v,t} is increased or decreased, the children of v must also be increased or decreased
at suitable rates. The connection between these rates is given below.
Lemma 3.2. For a node v, if we increase the variable b_{v,t+1} at rate h, then we have the following
equality:

(1/σ)(u_{v,t} + |T_v|/k) · db_{v,t+1}/dh = Σ_{w∈C(v)} (db_{w,t+1}/dh) · (u_{w,t} + |T_w|/k).
We need one special case of Lemma 3.2: when the variable b_{v,t+1} is increased (decreased) at rate
h, and the increasing (decreasing) rate is required to be the same for all children of v. By the above
lemma, we get:
Lemma 3.3. For a node v, assume that we increase (or decrease) the variable b_{v,t+1} at rate h. If
the increasing (or decreasing) rate of every w ∈ C(v) is the same, then in order to keep the Node
Identity Property, we should set the increasing (or decreasing) rate of each child w ∈ C(v) as
follows:

db_{w,t+1}/dh = (1/σ) · db_{v,t+1}/dh.
Repeatedly applying this lemma, we get the following corollary.
Corollary 3.4. For a node v with ℓ(v) = j and a path P from a leaf p_i ∈ T_v to v, if b_{v,t+1} is increased
(or decreased) at rate h and the increasing (decreasing) rate of all children of any v′ ∈ P is the
same, then

Σ_{v′∈P} db_{v′,t+1}/dh = (db_{v,t+1}/dh) · ψ(j), where ψ(j) = 1 + 1/σ + 1/σ² + ··· + 1/σ^{ℓ−j} = 1 + Θ(1/σ).
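The factor ψ(j) is just a truncated geometric series; a small sketch (illustrative parameters) makes the bound 1 + Θ(1/σ) visible:

```python
# Sketch of psi(j) = 1 + 1/sigma + ... + 1/sigma^(l - j) from Corollary 3.4.
# Depth l and stretch sigma are illustrative.

def psi(j, l, sigma):
    return sum(sigma ** (-i) for i in range(l - j + 1))

l, sigma = 5, 4.0
rates = [psi(j, l, sigma) for j in range(1, l + 1)]
print(rates)  # decreasing toward 1 as j approaches l
# psi(j) is bounded by the full geometric series sigma / (sigma - 1):
assert all(1.0 <= r <= sigma / (sigma - 1) for r in rates)
```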
We still require the following special case of Lemma 3.2. Let w_1 be one child of the node
v. Assume that b_{w_1,t+1} is increased (or decreased) at some rate, and that b_{w′,t+1} is increased (or
decreased) at a common rate for every w′ ∈ C(v), w′ ≠ w_1. If b_{v,t+1} is unchanged, then the
following claim holds.
Lemma 3.5. Let w_1, ..., w_m be the children of a node v. Assume that we increase (or decrease)
b_{w_1,t+1} at rate h and also increase b_{w_2,t+1}, ..., b_{w_m,t+1} at a common rate, denoted db_{w′,t+1}/dh. If
we would like to keep the amount u_{v,t} unchanged, then we should have:

db_{w′,t+1}/dh = ((u_{w_1,t} + |T_{w_1}|/k) / (u_{v,t} + |T_v|/k)) · (−db_{w_1,t+1}/dh + db_{w′,t+1}/dh).
Theorem 3.6. When the request p_t arrives at time t, in order to keep the dual constraints tight
and maintain the node identity property, if a_{S,t} is increased at rate 1, we should decrease every b_{A(p_t,j),t+1}
(1 ≤ j ≤ ℓ) at rate:

db_{A(p_t,j),t+1}/da_{S,t} = ((2 + 1/(n−1))/ψ(j)) · [(u_{A(p_t,j),t} + |T_{A(p_t,j)}|/k)^{−1} − (u_{A(p_t,j−1),t} + |T_{A(p_t,j−1)}|/k)^{−1}].

For each sibling w of A(p_t, j), increase b_{w,t+1} at the following rate:

db_{w,t+1}/da_{S,t} = ((2 + 1/(n−1))/ψ(j)) · (u_{A(p_t,j−1),t} + |T_{A(p_t,j−1)}|/k)^{−1}.
Thus, we design an online algorithm for the fractional k-server problem as follows (see Algorithm
3.1).
1: At time t = 0, set b_{p,1} = γ_p = 0 for all p and set b_{A(p,j),1} = 0 for all 1 ≤ j ≤ ℓ.
2: At time t ≥ 1, when a request p_t arrives:
3:   Initially, set u_{p,t} = u_{p,t−1} for all p, and initialize b_{p,t+1} to b_{p,t}.
4:   If u_{p_t,t} = 0, then do nothing.
5:   Otherwise, do the following:
6:     Let S = {p : u_{p,t} < 1}. Since k = n − 1, |S| > Σ_{p∈S} x_{p,t} = k = n − 1; so S = [n].
7:     While u_{p_t,t} ≠ 0:
8:       Increase a_{S,t} at rate 1;
9:       For each 1 ≤ j ≤ ℓ, decrease every b_{A(p_t,j),t+1} at rate:
10:        db_{A(p_t,j),t+1}/da_{S,t} = ((2 + 1/(n−1))/ψ(j)) · [(u_{A(p_t,j),t} + |T_{A(p_t,j)}|/k)^{−1} − (u_{A(p_t,j−1),t} + |T_{A(p_t,j−1)}|/k)^{−1}];
11:      For each sibling w of A(p_t, j), increase b_{w,t+1} at rate:
12:        db_{w,t+1}/da_{S,t} = ((2 + 1/(n−1))/ψ(j)) · (u_{A(p_t,j−1),t} + |T_{A(p_t,j−1)}|/k)^{−1};
13:      For any node v′ on the path from w to a leaf in T_w, if w′ is a child of v′, set db_{w′,t+1}/dh = (1/σ) · db_{v′,t+1}/dh.

Algorithm 3.1: The online primal-dual algorithm for the fractional k-server problem on a σ-HST.
Theorem 3.7. The online algorithm for the fractional k-server problem on a σ-HST has competitive
ratio 15 ln²(1 + k).
In [32], Duru Türkoğlu studied the relationship between the fractional and randomized versions
of the k-server problem, given as follows.
Lemma 3.8. The fractional k-server problem is equivalent to the randomized k-server problem on
the line or circle, or when k = 2 or k = n − 1 on arbitrary metric spaces.
Thus, we get the following conclusion:
Theorem 3.9. There is a randomized algorithm with competitive ratio 15 ln²(1 + k) for the k-server
problem on a σ-HST when n = k + 1.
By Lemma 2.5, we get the following conclusion:
Theorem 3.10. There is an O(log² k log n)-competitive randomized algorithm for the k-server problem on any metric space when n = k + 1.
4 An O(ℓ log k)-Competitive Fractional Algorithm for the k-Server Problem on a Weighted HST with Depth ℓ
In this section, we first give an O(ℓ log k)-competitive fractional algorithm for the k-server problem
on a weighted σ-HST with depth ℓ.
We introduce some additional notation for a weighted HST. Let T̃ be a weighted σ-HST. For a node
v whose depth is j, let σ_j = D(v)/D(w) = d(p(v), v)/d(v, w), where w is a child of v. By the definition of a weighted σ-HST,
σ_j ≥ σ for all 1 ≤ j ≤ ℓ − 1. For a node v ∈ T̃, if every leaf p ∈ L(T_v) satisfies u_{p,t} = 1, we call
v a full node; by this definition, for a full node, Σ_{p∈L(T_v)} u_{p,t} = |L(T_v)|. Otherwise, we call v a non-full node.
Let NFC(v) be the set of non-full children of v, i.e., NFC(v) = {w | w ∈ C(v) and w is a non-full
node}. For a node v, let NL(T_v) denote the set of non-full leaf nodes in T̃_v. Let S = {p | u_{p,t} < 1}.
Let P′ denote the path from p_t to the root r: {A(p_t, ℓ) = p_t, A(p_t, ℓ−1), ..., A(p_t, 1), A(p_t, 0) = r}.
For a node v ∈ P′, if there exists a p ∈ S \ {p_t} such that v is the first common ancestor of p_t and p, we
call v a common ancestor node in P′. Let CA(p_t, S) denote the set of common ancestor nodes in P′.
Suppose that CA(p_t, S) = {A(p_t, ℓ_h), ..., A(p_t, ℓ_2), A(p_t, ℓ_1)}, where ℓ_1 < ℓ_2 < ... < ℓ_h. For a node
v, we now let u_{v,t} = Σ_{p∈NL(T_v)} u_{p,t}, the mass of the non-full leaves below v. It is easy to see that u_{v,t} = Σ_{w∈NFC(v)} u_{w,t}. Thus, for a full node v, u_{v,t} = 0.
For any ℓ_h < j < ℓ, u_{A(p_t,j),t} = u_{A(p_t,ℓ),t}. For any 0 ≤ j < ℓ_1, u_{A(p_t,0),t} = u_{A(p_t,j),t} = u_{A(p_t,ℓ_1),t}. For
any ℓ_{i−1} < j < ℓ_i, u_{A(p_t,j),t} = u_{A(p_t,ℓ_i),t}.
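The full / non-full classification can be sketched on a toy tree; the tree shape, node names, and values below are illustrative, not from the paper:

```python
# Sketch of the Section 4 classification: a node is full when every leaf
# below it has u_p = 1 (fully out of the cache); NFC(v) collects the
# non-full children of v. Toy tree, illustrative values.

children = {"r": ["a", "b"], "a": ["p1", "p2"], "b": ["p3", "p4"]}
u_leaf = {"p1": 1.0, "p2": 1.0, "p3": 0.5, "p4": 0.25}

def leaves(v):
    return [v] if v not in children else [p for w in children[v] for p in leaves(w)]

def full(v):
    return all(u_leaf[p] == 1.0 for p in leaves(v))

NFC = {v: [w for w in children[v] if not full(w)] for v in children}
print(full("a"), NFC["r"])  # a is full, so only b remains a non-full child of r
```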
The primal-dual formulation for the fractional k-server problem on a weighted HST is the same
as that on an HST in Section 3. Based on the primal-dual formulation, the design idea of our online
algorithm is similar to the design idea in Section 3. During the execution of our algorithm, it keeps
the following relation between the primal variable u_{v,t} and the dual variable b_{v,t+1}:

u_{v,t} = f(b_{v,t+1}) = (|NL(T_v)|/k)(exp(b_{v,t+1} ln(1 + k)/(2D(v))) − 1).

This relation determines how much of the mass u_{p_t,t} should be
gradually moved out of leaf p_t and how it should be distributed among the other leaves S \ {p_t} until
p_t is completely fetched into the cache, i.e., u_{p_t,t} = 0. Thus, at any time t, the algorithm maintains
a distribution (u_{p_1,t}, ..., u_{p_n,t}) on the leaves such that Σ_{p∈[n]} u_{p,t} = n − k.
In order to compute the exact rates at which the mass u_{p_t,t} should be moved from p_t through its
ancestor nodes to the other leaves S \ {p_t} in the weighted σ-HST at time t, we obtain the following
claims using arguments similar to those in Section 3. Because of space limits, we defer their proofs
to the Appendix.
Lemma 4.1. du_{v,t}/db_{v,t+1} = (ln(1 + k)/(2D(v))) · (u_{v,t} + |NL(T_v)|/k).

Proof. Since u_{v,t} = (|NL(T_v)|/k)(exp(b_{v,t+1} ln(1 + k)/(2D(v))) − 1), we take the derivative with respect to b_{v,t+1} and obtain the claim.
Lemma 4.2. For a node v with ℓ(v) = j, if we increase the variable b_{v,t+1} at rate h, then we have the
following equality:

(1/σ_j)(u_{v,t} + |NL(T_v)|/k) · db_{v,t+1}/dh = Σ_{w∈NFC(v)} (db_{w,t+1}/dh) · (u_{w,t} + |NL(T_w)|/k).
Lemma 4.3. For a node v with ℓ(v) = j, assume that we increase (or decrease) the variable b_{v,t+1}
at rate h. If the increasing (or decreasing) rate of every w ∈ NFC(v) is the same, then in order
to keep the Node Identity Property, we should set the increasing (or decreasing) rate of each
child w ∈ NFC(v) as follows:

db_{w,t+1}/dh = (1/σ_j) · db_{v,t+1}/dh.
Repeatedly applying this lemma, we get the following corollary.
Corollary 4.4. For a node v with ℓ(v) = j and a path P from a leaf p_i ∈ T_v to v, if b_{v,t+1} is increased
(or decreased) at rate h and the increasing (decreasing) rate of all children of any v′ ∈ P is the same,
then

Σ_{v′∈P} db_{v′,t+1}/dh = (db_{v,t+1}/dh) · φ(j), where φ(j) = 1 + 1/σ_j + 1/(σ_j σ_{j+1}) + ··· + 1/(σ_j σ_{j+1} ··· σ_{ℓ−1}) ≤ 1 + Θ(1/σ).
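Unlike ψ(j) in Section 3, φ(j) uses a possibly different stretch σ_j at each level; since every σ_j ≥ σ, it is still dominated by the geometric series for stretch σ. A small sketch (illustrative stretches and depth):

```python
# Sketch of phi(j) from Corollary 4.4 for a weighted sigma-HST, where each
# level j may have its own stretch sigma_j >= sigma. Values illustrative.

def phi(j, l, sigma_of):
    # phi(j) = 1 + 1/sigma_j + 1/(sigma_j*sigma_{j+1}) + ... + 1/(sigma_j*...*sigma_{l-1})
    total, prod = 1.0, 1.0
    for i in range(j, l):
        prod *= sigma_of[i]
        total += 1.0 / prod
    return total

l, sigma = 4, 4.0
sigma_of = {1: 4.0, 2: 6.0, 3: 5.0}     # per-level stretches, all >= sigma
for j in range(1, l + 1):
    p = phi(j, l, sigma_of)
    # bounded by the full geometric series for stretch sigma: 1 + 1/(sigma - 1)
    assert 1.0 <= p <= 1 + 1 / (sigma - 1) + 1e-9
print([phi(j, l, sigma_of) for j in range(1, l + 1)])
```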
Lemma 4.5. Let w_1, ..., w_m be the non-full children of a node v (i.e., each w_i ∈ NFC(v)).
Assume that we increase (or decrease) b_{w_1,t+1} at rate h and also increase b_{w_2,t+1}, ..., b_{w_m,t+1} at a
common rate, denoted db_{w′,t+1}/dh. If we would like to keep the amount u_{v,t} unchanged, then
we should have:

db_{w′,t+1}/dh = ((u_{w_1,t} + |NL(T_{w_1})|/k) / (u_{v,t} + |NL(T_v)|/k)) · (−db_{w_1,t+1}/dh + db_{w′,t+1}/dh).
Theorem 4.6. When the request p_t arrives at time t, in order to keep the dual constraints tight and
maintain the node identity property, if a_{S,t} is increased at rate 1, we should decrease every b_{A(p_t,j),t+1} for each
j ∈ {ℓ_1 + 1, ℓ_2 + 1, ..., ℓ_h + 1} at rate:

db_{A(p_t,j),t+1}/da_{S,t} = ((u_{r,t} + |S|/k)/φ(j)) · [(u_{A(p_t,j),t} + |NL(T_{A(p_t,j)})|/k)^{−1} − (u_{A(p_t,j−1),t} + |NL(T_{A(p_t,j−1)})|/k)^{−1}].

For each sibling w ∈ NFC(A(p_t, j−1)) of A(p_t, j), increase b_{w,t+1} at the following rate:

db_{w,t+1}/da_{S,t} = ((u_{r,t} + |S|/k)/φ(j)) · (u_{A(p_t,j−1),t} + |NL(T_{A(p_t,j−1)})|/k)^{−1}.
Thus, we design an online algorithm for the fractional k-server problem on a weighted σ-HST as
follows (see Algorithm 4.1).
Theorem 4.7. The online algorithm for the fractional k-server problem on a weighted σ-HST with
depth ℓ has competitive ratio 4ℓ ln(1 + k).
By Lemma 2.7, we get:
Theorem 4.8. There exists an O(log k log n)-competitive fractional algorithm for the k-server problem on any σ-HST.
In [3], Nikhil Bansal et al. showed the following conclusion.
1: At time t = 0, set b_{p,1} = γ_p = 0 for all p.
2: At time t ≥ 1, when a request p_t arrives:
3:   Initially, set u_{p,t} = u_{p,t−1} for all p, and initialize b_{p,t+1} to b_{p,t}.
4:   If u_{p_t,t} = 0, then do nothing.
5:   Otherwise, do the following:
6:     Let S = {p : u_{p,t} < 1}. Suppose that CA(p_t, S) = {A(p_t, ℓ_h), ..., A(p_t, ℓ_2), A(p_t, ℓ_1)}, where 0 ≤ ℓ_1 < ℓ_2 < ... < ℓ_h < ℓ.
7:     While u_{p_t,t} ≠ 0:
8:       Increase a_{S,t} at rate 1;
9:       For each j ∈ {ℓ_1 + 1, ℓ_2 + 1, ..., ℓ_h + 1}, decrease every b_{A(p_t,j),t+1} at rate:
10:        db_{A(p_t,j),t+1}/da_{S,t} = ((u_{r,t} + |S|/k)/φ(j)) · [(u_{A(p_t,j),t} + |NL(T_{A(p_t,j)})|/k)^{−1} − (u_{A(p_t,j−1),t} + |NL(T_{A(p_t,j−1)})|/k)^{−1}];
11:      For each sibling w ∈ NFC(A(p_t, j−1)) of A(p_t, j), increase b_{w,t+1} at rate:
12:        db_{w,t+1}/da_{S,t} = ((u_{r,t} + |S|/k)/φ(j)) · (u_{A(p_t,j−1),t} + |NL(T_{A(p_t,j−1)})|/k)^{−1};
13:      For any node v′ on the path from w to a leaf in NL(T_w), if w′ ∈ NFC(v′) and ℓ(v′) = j, set db_{w′,t+1}/dh = (1/σ_j) · db_{v′,t+1}/dh;
14:      For each p ∈ S \ {p_t}, if u_{p,t} reaches the value 1, then update S ← S \ {p} and the set NFC(v) for each ancestor node v of p.

Algorithm 4.1: The online primal-dual algorithm for the fractional k-server problem on a weighted σ-HST.
Lemma 4.9. Let T be a σ-HST with σ > 5. Then any online fractional k-server algorithm on T can
be converted into a randomized k-server algorithm on T with an O(1)-factor loss in the competitive
ratio.
Thus, we get the following conclusion by Theorem 4.8:
Theorem 4.10. Let T be a σ-HST with σ > 5. There is a randomized algorithm for the k-server
problem with a competitive ratio of O(log k log n) on T.
By Lemma 2.5, we get the following conclusion:
Theorem 4.11. For any metric space, there is a randomized algorithm for the k-server problem
with a competitive ratio of O(log k log² n).
5 Conclusion
In this paper, we show that for any metric space with n points there exists a randomized algorithm
with competitive ratio O(log k log² n) for the k-server problem, which improves the previous best
competitive ratio of O(log² k log³ n log log n).
Acknowledgments
We would like to thank the anonymous referees for their careful readings of the manuscripts and
many useful suggestions.
Wenbin Chen's research has been partly supported by the National Natural Science Foundation
of China (NSFC) under Grant No. 11271097, by the research projects of the Guangzhou Education Bureau
under Grant No. 2012A074, and by Project KFKT2012B01 of the State Key Laboratory for Novel
Software Technology, Nanjing University.
References
[1] Dimitris Achlioptas, Marek Chrobak, and John Noga. Competitive analysis of randomized
paging algorithms. Theoretical Computer Science, 234(1-2):203-218, 2000.
[2] Avrim Blum, Carl Burch, and Adam Kalai. Finely-competitive paging. Proceedings of the 40th
Annual Symposium on Foundations of Computer Science, page 450-456, 1999.
[3] Nikhil Bansal, Niv Buchbinder, Aleksander Madry, Joseph Naor. A Polylogarithmic-Competitive Algorithm for the k-Server Problem. FOCS 2011, pages 267-276.
[4] Nikhil Bansal, Niv Buchbinder, and Joseph (Seffi) Naor. A primal-dual randomized algorithm
for weighted paging. Proceedings of the 48th Annual IEEE Symposium on Foundations of
Computer Science, pages 507-517, 2007.
[5] N. Buchbinder, K. Jain, and J. Naor. Online primal-dual algorithms for maximizing ad-auctions
revenue. Proc. 14th European Symp. on Algorithms (ESA), pp. 253-264, 2007.
[6] N. Buchbinder and J. Naor. Online primal-dual algorithms for covering and packing problems.
Proc. 12th European Symp. on Algorithms (ESA), volume 3669 of Lecture Notes in Comput.
Sci., pages 689–701. Springer, 2005.
[7] N. Buchbinder and J. Naor. Improved bounds for online routing and packing via a primal-dual
approach. Proc. 47th Symp. Foundations of Computer Science, pages 293–304, 2006.
[8] Niv Buchbinder, Joseph Naor. The Design of Competitive Online Algorithms via a Primal-Dual
Approach. Foundations and Trends in Theoretical Computer Science 3(2-3): 93–263 (2009).
[9] Nikhil Bansal, Niv Buchbinder, and Joseph (Seffi) Naor. Towards the randomized k-server
conjecture: A primal-dual approach. Proceedings of the 21st Annual ACM- SIAM Symposium
on Discrete Algorithms, pp. 40-55, 2010.
[10] Nikhil Bansal, Niv Buchbinder, and Joseph (Seffi) Naor. Metrical task systems and the k-server problem on HSTs. In ICALP'10: Proceedings of the 37th International Colloquium on
Automata, Languages and Programming, 2010, pages 287-298.
[11] Yair Bartal. Probabilistic approximations of metric spaces and its algorithmic applications.
Proceedings of the 37th Annual IEEE Symposium on Foundations of Computer Science, pages
184-193, 1996.
[12] Yair Bartal. On approximating arbitrary metrices by tree metrics. Proceedings of the 30th
Annual ACM Symposium on Theory of Computing, pages 161-168, 1998.
[13] Yair Bartal, Bela Bollobas, and Manor Mendel. A Ramsey-type theorem for metric spaces and its
applications for metrical task systems and related problems. Proceedings of the 42nd Annual
IEEE Symposium on Foundations of Computer Science, pages 396-405, 2001.
[14] Yair Bartal and Eddie Grove. The harmonic k-server algorithm is competitive. Journal of the
ACM, 47(1):1-15, 2000.
[15] Yair Bartal, Nathan Linial, Manor Mendel, and Assaf Naor. On metric Ramsey-type phenomena. Proceedings of the 35th Annual ACM Symposium on Theory of Computing, pages
463-472, 2003.
[16] Yair Bartal, Elias Koutsoupias. On the competitive ratio of the work function algorithm for
the k-server problem. Theoretical Computer Science, 324(2-3): 337-345.
[17] Avrim Blum, Howard J. Karloff, Yuval Rabani, and Michael E. Saks. A decomposition theorem
and bounds for randomized server problems. Proceedings of the 31st Annual IEEE Symposium
on Foundations of Computer Science, pages 197-207, 1992.
[18] Allan Borodin and Ran El-Yaniv. Online computation and competitive analysis. Cambridge
University Press, 1998.
[19] M. Chrobak and L. Larmore. An optimal on-line algorithm for k-servers on trees. SIAM Journal
on Computing, 20(1): 144-148, 1991.
[20] A. Coté, A. Meyerson, and L. Poplawski. Randomized k-server on hierarchical binary trees.
Proceedings of the 40th Annual ACM Symposium on Theory of Computing, pages 227-234,
2008.
[21] B. Csaba and S. Lodha. A randomized on-line algorithm for the k-server problem on a line.
Random Structures and Algorithms, 29(1): 82-104, 2006.
[22] Jittat Fakcharoenphol, Satish Rao, and Kunal Talwar. A tight bound on approximating arbitrary metrics by tree metrics. Proceedings of the 35th Annual ACM Symposium on Theory of
Computing, pages 448-455, 2003.
[23] A. Fiat, Y. Rabani, and Y. Ravid. Competitive k-server algorithms. Journal of Computer and
System Sciences, 48(3): 410-428, 1994.
[24] Amos Fiat, Richard M. Karp, Michael Luby, Lyle A. McGeoch, Daniel Dominic Sleator, and
Neal E. Young. Competitive paging algorithms. Journal of Algorithms, 12(4): 685-699, 1991.
[25] Edward F. Grove. The harmonic online k-server algorithm is competitive. Proceedings of the
23rd Annual ACM Symposium on Theory of Computing, pages 260-266, 1991.
[26] Elias Koutsoupias. The k-server problem. Computer Science Review, 3(2): 105-118, 2009.
[27] Elias Koutsoupias and Christos H. Papadimitriou. On the k-server conjecture. Journal of the
ACM, 42(5): 971-983, 1995.
[28] M.S. Manasse, L.A. McGeoch, and D.D. Sleator. Competitive algorithms for online problems.
Proceedings of the 20th Annual ACM Symposium on Theory of Computing, pages 322-333,
1988.
[29] M. Manasse, L.A. McGeoch, and D. Sleator. Competitive algorithms for server problems.
Journal of Algorithms, 11: 208-230, 1990.
[30] Lyle A. McGeoch and Daniel D. Sleator. A strongly competitive randomized paging algorithm.
Algorithmica, 6(6): 816-825, 1991.
[31] Daniel D. Sleator and Robert E. Tarjan. Amortized efficiency of list update and paging rules.
Communications of the ACM, 28(2): 202-208, 1985.
[32] Duru Türkoğlu. The k-server problem and fractional analysis. Master's Thesis, The University
of Chicago, 2005. http://people.cs.uchicago.edu/∼duru/papers/masters.pdf.
Appendix
Proofs of claims in Section 3
The proof of Lemma 3.2 is as follows:

Proof. Since it is required to maintain u_{v,t} = Σ_{w∈C(v)} u_{w,t}, we take the derivative of both sides and
get:

(du_{v,t}/db_{v,t+1}) · (db_{v,t+1}/dh) = Σ_{w∈C(v)} (du_{w,t}/db_{w,t+1}) · (db_{w,t+1}/dh).

By Lemma 3.1, we get:

(ln(1 + k)/(2D(v)))(u_{v,t} + |T_v|/k) · (db_{v,t+1}/dh) = Σ_{w∈C(v)} (db_{w,t+1}/dh) · (ln(1 + k)/(2D(w)))(u_{w,t} + |T_w|/k).

Since D(v)/D(w) = σ, we get:

(1/σ)(u_{v,t} + |T_v|/k) · (db_{v,t+1}/dh) = Σ_{w∈C(v)} (db_{w,t+1}/dh) · (u_{w,t} + |T_w|/k).
The proof of Lemma 3.3 is as follows:

Proof. By Lemma 3.2, if the increasing (or decreasing) rate of every w ∈ C(v) is the same,
we get:

(1/σ)(u_{v,t} + |T_v|/k) · (db_{v,t+1}/dh) = (db_{w,t+1}/dh) · Σ_{w∈C(v)} (u_{w,t} + |T_w|/k) = (db_{w,t+1}/dh) · (u_{v,t} + |T_v|/k).

So we get:

db_{w,t+1}/dh = (1/σ) · db_{v,t+1}/dh.
The proof of Lemma 3.5 is as follows:

Proof. By Lemma 3.2, in order to keep the amount u_{v,t} unchanged, we get:

(db_{w_1,t+1}/dh) · (u_{w_1,t} + |T_{w_1}|/k) + (db_{w′,t+1}/dh) · Σ_{w∈C(v)\{w_1}} (u_{w,t} + |T_w|/k) = 0.

Thus,

(db_{w_1,t+1}/dh − db_{w′,t+1}/dh) · (u_{w_1,t} + |T_{w_1}|/k) + (db_{w′,t+1}/dh) · Σ_{w∈C(v)} (u_{w,t} + |T_w|/k) = 0.

So,

(db_{w_1,t+1}/dh − db_{w′,t+1}/dh) · (u_{w_1,t} + |T_{w_1}|/k) + (db_{w′,t+1}/dh) · (u_{v,t} + |T_v|/k) = 0.

Hence, we get the claim.
The proof for Theorem 3.6 is as follows:
Proof. When request pt arrives at time t, we move mass upt ,t from pt through its ancestor nodes
to other leaves [n] \ {pt }, i.e. upt ,t is decreased and up,t is increased for any p ∈ [n] \ {pt }. Since
these mass moves out of each subtree TA(pt ,j) for each 1 ≤ j ≤ ℓ, uA(pt ,j),t is decreased. By
b
t ,j),t+1
uA(pt ,j),t = f (bA(pt ,j),t+1 ) = |Tkv | (exp( A(p
ln(1 + k)) − 1) (we need to keep this relation during
2D(v)
the algorithm), bA(pt ,j),t+1 also decreases for each 1 ≤ j ≤ ℓ. On the other hand, up,t is increased
for each p ∈ [n] \ {pt }. Thus, for each node v whose Tv doesn’t contain pt , its mass uv,t is also
increased. For each node v whose Tv doesn’t contain pt , it must be a sibling of some node A(pt , j).
For each 1 ≤ j ≤ ℓ, we assume that all siblings v ′ of node A(pt , j) increase bv′ ,t+1 at the same rate.
In the following, we will compute the increasing (or decreasing) rate of all dual variables in the
b
t ,j),t+1
be the decreasing rate of bA(pt ,j),t+1
σ-HST regarding aS . For 1 ≤ j ≤ ℓ, let ∇bj = − A(pda
S
b
regarding aS . For 1 ≤ j ≤ ℓ, let ∇b′j = w,t+1
daS be the increasing rate of bw,t+1 for any siblings w of
A(pt , j) regarding aS .
Using from top to down method, we can get a set of equations about the quantities ∇bj and
∇b′j . First, we consider the siblings of A(pt , 1) ( i.e. those nodes are children of root r, but they
are not A(pt , 1)). Let w be one of these siblings. If bw,t+1 is raised by ∇b′1 , by Corollary 3.4, the
sum of ∇b′ on any path from a leaf in Tw to w must be ψ(1) · ∇b′1 . Since aS is increasing with rate
1, it forces ψ(1) · ∇b′1 = 1 in order to maintain the dual constraint (3) tight for leaves in Tw .
This considers the dual constraints for these leaves. Now, this increasing mass must be canceled
out by decreasing the mass in TA(pt ,1) since the mass ur,t in Tr is not changed. Thus, in order to
maintain the “Node Identity Property” of root, by Lemma 3.5, we must set ∇b1 such that:
∇b′1 = (∇b1 + ∇b′1 ) ·
|T(A(p ,1)) |
t
)
k
(uA(pt ,1),t +
n
1+ n−1
For siblings of node A(pt , 2), we use the similar argument. Let w be a sibling of A(pt , 2).
Consider a path from a leaf in Tw to the w. Their dual constraint (3) already grows at rate
1 + ψ(1)∇b1 . This must be canceled out by increasing bw,t+1 , and if bw,t+1 is raised by ∇b′2 , by
corollary 3.4, the sum of ∇b′ on any path from a leaf in Tw to w must be ψ(2) · ∇b′2 . Thus, ∇b′2
must be set such that: ψ(2) · ∇b′2 = 1 + ψ(1) · ∇b1
Again, this increasing mass must be canceled out by decreasing the mass in TA(pt ,2) . In order
to keep the “Node Identity Property” of A(pt , 1), by Lemma 3.5. we must set ∇b2 such that:
∇b′2 = (∇b2 + ∇b′2 ) ·
|TA(p ,2) |
t
)
k
|T(A(p ,1) |
t
(uA(pt ,1),t +
)
k
(uA(pt ,2),t +
Continuing this method, we obtain a system of linear equations in all ∇b_j and ∇b′_j (1 ≤ j ≤ ℓ). Keeping the dual constraints tight, we get the following equations:

ψ(1) · ∇b′_1 = 1
ψ(2) · ∇b′_2 = 1 + ψ(1) · ∇b_1
⋮
ψ(ℓ) · ∇b′_ℓ = 1 + Σ_{i=1}^{ℓ−1} ψ(i)∇b_i

Keeping the node identity property, we get the following equations:

∇b′_1 = (∇b_1 + ∇b′_1) · (u_{A(pt,1),t} + |T_{A(pt,1)}|/k) / (1 + n/(n−1))
∇b′_2 = (∇b_2 + ∇b′_2) · (u_{A(pt,2),t} + |T_{A(pt,2)}|/k) / (u_{A(pt,1),t} + |T_{A(pt,1)}|/k)
⋮
∇b′_ℓ = (∇b_ℓ + ∇b′_ℓ) · (u_{A(pt,ℓ),t} + |T_{A(pt,ℓ)}|/k) / (u_{A(pt,ℓ−1),t} + |T_{A(pt,ℓ−1)}|/k)
We now solve this system of linear equations. For each 1 ≤ j ≤ ℓ,

ψ(j) · ∇b′_j = 1 + Σ_{i=1}^{j−1} ψ(i)∇b_i
= 1 + Σ_{i=1}^{j−2} ψ(i)∇b_i + ψ(j − 1)∇b_{j−1}
= ψ(j − 1) · ∇b′_{j−1} + ψ(j − 1)∇b_{j−1}
= ψ(j − 1) · (∇b′_{j−1} + ∇b_{j−1}).

Since ∇b′_{j−1} = (∇b_{j−1} + ∇b′_{j−1}) · (u_{A(pt,j−1),t} + |T_{A(pt,j−1)}|/k) / (u_{A(pt,j−2),t} + |T_{A(pt,j−2)}|/k), we get:

ψ(j)∇b′_j = ψ(j − 1)∇b′_{j−1} · (u_{A(pt,j−2),t} + |T_{A(pt,j−2)}|/k) / (u_{A(pt,j−1),t} + |T_{A(pt,j−1)}|/k).

Solving the recursion, we get:

∇b′_j = (1 + n/(n−1)) / (ψ(j) · (u_{A(pt,j−1),t} + |T_{A(pt,j−1)}|/k)),

∇b_j = ((1 + n/(n−1)) / ψ(j)) · [(u_{A(pt,j),t} + |T_{A(pt,j)}|/k)^{−1} − (u_{A(pt,j−1),t} + |T_{A(pt,j−1)}|/k)^{−1}].
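As a sanity check on these closed forms, the 2ℓ equations above can be solved by forward substitution and compared against the formulas just derived. The following is a small numerical sketch; the values chosen for ψ(j) and for B_j = u_{A(pt,j),t} + |T_{A(pt,j)}|/k are arbitrary illustrative numbers, not taken from the algorithm:

```python
# Sketch: solve the "tight dual constraint" + "node identity" system by
# forward substitution, then compare with the closed-form solution.
# psi[1..l] and B[0..l] are arbitrary positive test values (B decreasing);
# B[0] plays the role of 1 + n/(n-1).
psi = [None, 2.0, 3.0, 5.0]
B = [2.5, 1.7, 0.9, 0.4]
l = 3

db = [0.0] * (l + 1)    # stands for grad b_j
dbp = [0.0] * (l + 1)   # stands for grad b'_j
for j in range(1, l + 1):
    # tight dual constraint: psi(j) * db'_j = 1 + sum_{i<j} psi(i) * db_i
    dbp[j] = (1 + sum(psi[i] * db[i] for i in range(1, j))) / psi[j]
    # node identity: db'_j = (db_j + db'_j) * B_j / B_{j-1}
    db[j] = dbp[j] * (B[j - 1] - B[j]) / B[j]

# closed forms derived in the text
for j in range(1, l + 1):
    assert abs(dbp[j] - B[0] / (psi[j] * B[j - 1])) < 1e-12
    assert abs(db[j] - B[0] / psi[j] * (1 / B[j] - 1 / B[j - 1])) < 1e-12
```

The forward substitution mirrors the top-down derivation: each tightness equation determines ∇b′_j from the already-known ∇b_i, and each node-identity equation then determines ∇b_j.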
The proof for Theorem 3.7 is as follows:
Proof. Let P denote the value of the objective function of the primal solution and D denote the value of the objective function of the dual solution. Initially, P = 0 and D = 0. In the following, we prove three claims:
(1) The primal solution produced by the algorithm is feasible.
(2) The dual solution produced by the algorithm is feasible.
(3) P ≤ 15 ln²(1 + k) · D.
By these three claims and the weak duality of linear programs, the theorem follows immediately.
First, we prove claim (1) as follows. At any time t, since S = [n], the algorithm keeps Σ_{p∈[n]} u_{p,t} = n − k = 1 = |S| − k. So, the primal constraints (3.1) are satisfied.
Second, we prove claim (2) as follows. By Theorem 3.6, the dual constraints (3.6) are satisfied. Obviously, the dual constraints (3.7) are satisfied. For any node v, if b_{v,t+1} = 0, then u_{v,t} = 0; if b_{v,t+1} = 2D(v), then u_{v,t} = |T_v|. Thus, the dual constraints (3.8) are satisfied.
Third, we prove claim (3) as follows. If the algorithm increases the variable a_{S,t} at some time t, then ∂D/∂a_{S,t} = |S| − k = n − (n − 1) = 1. Let us now compute the primal cost. At depth j (1 ≤ j ≤ ℓ), we compute the movement cost of our algorithm from the change ∇b′_j as follows.
Σ_{w∈C(A(pt,j−1))\{A(pt,j)}} 2D(w) · (du_{w,t}/db_{w,t+1}) · (db′_j/da_S)
= ∇b′_j · Σ_{w∈C(A(pt,j−1))\{A(pt,j)}} 2D(w) · (ln(1+k)/(2D(w))) · (u_{w,t} + |T_w|/k)
= ((2 + 1/(n−1)) / ψ(j)) · ln(1 + k) · [Σ_{w∈C(A(pt,j−1))\{A(pt,j)}} (u_{w,t} + |T_w|/k)] / (u_{A(pt,j−1),t} + |T_{A(pt,j−1)}|/k)
≤ (5/2) · ln(1 + k) · [Σ_{w∈C(A(pt,j−1))\{A(pt,j)}} (u_{w,t} + |T_w|/k)] / (u_{A(pt,j−1),t} + |T_{A(pt,j−1)}|/k)
Let B_j denote u_{A(pt,j),t} + |T_{A(pt,j)}|/k. Then Σ_{w∈C(A(pt,j−1))\{A(pt,j)}} (u_{w,t} + |T_w|/k) = B_{j−1} − B_j.
Hence, the total movement cost over all ℓ levels is

Σ_{j=1}^{ℓ} (5/2) ln(1 + k) · (B_{j−1} − B_j)/B_{j−1}
= (5/2) ln(1 + k) · Σ_{j=1}^{ℓ} (1 − B_j/B_{j−1})
≤ (5/2) ln(1 + k) · Σ_{j=1}^{ℓ} ln(B_{j−1}/B_j)
= (5/2) ln(1 + k) · ln(B_0/B_ℓ)
= (5/2) ln(1 + k) · ln(B_0/(u_{pt,t} + 1/k))
≤ (5/2) ln(1 + k) · ln(kB_0)
≤ (5/2) ln(1 + k) · 2 ln k · B_0
≤ (5/2) ln(1 + k) · 2 ln k · (2 + 1/(n−1))
≤ 15 ln²(1 + k),

where the first inequality holds since 1 − y ≤ −ln y for any 0 < y ≤ 1.
Thus, we get P ≤ 15 ln²(1 + k) · D.
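The telescoping step above relies only on the elementary bound 1 − y ≤ −ln y; a quick numerical sketch (with an arbitrary positive decreasing sequence standing in for B_0 ≥ … ≥ B_ℓ) confirms Σ_j (B_{j−1} − B_j)/B_{j−1} ≤ ln(B_0/B_ℓ):

```python
import math

# arbitrary positive decreasing sequence standing in for B_0 >= ... >= B_l
B = [3.0, 2.2, 1.4, 0.8, 0.3]

lhs = sum((B[j - 1] - B[j]) / B[j - 1] for j in range(1, len(B)))
rhs = math.log(B[0] / B[-1])  # telescoped sum of ln(B_{j-1} / B_j)

# 1 - y <= -ln(y) for 0 < y <= 1, applied with y = B_j / B_{j-1}
assert lhs <= rhs + 1e-12
```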
Let OPT be the cost of the best offline algorithm, let P_min be the optimal primal solution, and let D_max be the optimal dual solution. Then P_min ≤ OPT, since OPT is a feasible solution for the primal program. By weak duality, D_max ≤ P_min. Hence,

P/OPT ≤ P/P_min ≤ 15 ln²(1 + k) · D / P_min ≤ 15 ln²(1 + k) · D_max / P_min ≤ 15 ln²(1 + k) · P_min / P_min = 15 ln²(1 + k).

So, the competitive ratio of this algorithm is 15 ln²(1 + k).
Proofs for claims in section 4
The proof for Lemma 4.2 is as follows:
Proof. Since it is required to maintain u_{v,t} = Σ_{w∈NFC(v)} u_{w,t}, we take the derivative of both sides and get that:

du_{v,t}/dh = Σ_{w∈NFC(v)} du_{w,t}/dh.

By Lemma 3.1, we get:

(ln(1+k)/(2D(v))) · (u_{v,t} + |NL(T_v)|/k) · db_{v,t+1}/dh = Σ_{w∈NFC(v)} (db_{w,t+1}/dh) · (ln(1+k)/(2D(w))) · (u_{w,t} + |NL(T_w)|/k).

Since D(v)/D(w) = σ_j, we get:

(1/σ_j) · (u_{v,t} + |NL(T_v)|/k) · db_{v,t+1}/dh = Σ_{w∈NFC(v)} (db_{w,t+1}/dh) · (u_{w,t} + |NL(T_w)|/k).
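The derivative relation from Lemma 3.1 used above states that for u = f(b) = (|NL(T_v)|/k) · (exp(b · ln(1+k)/(2D(v))) − 1), one has du/db = (ln(1+k)/(2D(v))) · (u + |NL(T_v)|/k). A finite-difference sketch, with arbitrary test values for D(v), |NL(T_v)| and k:

```python
import math

D, L, k = 4.0, 6.0, 5.0   # arbitrary test values: D(v), |NL(T_v)|, k
c = math.log(1 + k) / (2 * D)

def f(b):
    # u_{v,t} as a function of b_{v,t+1}
    return (L / k) * (math.exp(c * b) - 1)

b = 1.3
u = f(b)
analytic = c * (u + L / k)                       # the Lemma 3.1 form
numeric = (f(b + 1e-6) - f(b - 1e-6)) / 2e-6     # central difference
assert abs(analytic - numeric) < 1e-6
```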
The proof for Lemma 4.3 is as follows:
Proof. By Lemma 4.2, if the increasing (or decreasing) rate of each w ∈ NFC(v) is the same, we get that:

(1/σ_j) · (u_{v,t} + |NL(T_v)|/k) · db_{v,t+1}/dh = (db_{w,t+1}/dh) · Σ_{w∈NFC(v)} (u_{w,t} + |NL(T_w)|/k) = (db_{w,t+1}/dh) · (u_{v,t} + |NL(T_v)|/k).

So, we get that:

db_{w,t+1}/dh = (1/σ_j) · db_{v,t+1}/dh.
The proof for Lemma 4.5 is as follows:
Proof. By Lemma 4.2, in order to keep the amount u_{v,t} unchanged, we get:

(db_{w1,t+1}/dh) · (u_{w1,t} + |NL(T_{w1})|/k) + Σ_{w∈NFC(v)\{w1}} (db_{w,t+1}/dh) · (u_{w,t} + |NL(T_w)|/k) = 0.

Thus,

(db_{w1,t+1}/dh − db_{w′,t+1}/dh) · (u_{w1,t} + |NL(T_{w1})|/k) + (db_{w′,t+1}/dh) · Σ_{w∈NFC(v)} (u_{w,t} + |NL(T_w)|/k) = 0.

So, (db_{w1,t+1}/dh − db_{w′,t+1}/dh) · (u_{w1,t} + |NL(T_{w1})|/k) + (db_{w′,t+1}/dh) · (u_{v,t} + |NL(T_v)|/k) = 0.
Hence, we get the claim.
The proof for Theorem 4.6 is as follows:
Proof. When request p_t arrives at time t, we move mass u_{pt,t} from p_t through its ancestor nodes to the other non-full leaf nodes S \ {p_t}; i.e., u_{pt,t} is decreased and u_{p,t} is increased for every p ∈ S \ {p_t}. Since this mass moves out of each subtree T_{A(pt,j)} for each 1 ≤ j ≤ ℓ, u_{A(pt,j),t} is decreased. By the relation u_{A(pt,j),t} = f(b_{A(pt,j),t+1}) = (|NL(T_v)|/k) · (exp(b_{A(pt,j),t+1} · ln(1 + k)/(2D(v))) − 1) (which we need to keep during the algorithm), b_{A(pt,j),t+1} also decreases for each 1 ≤ j ≤ ℓ. On the other hand, u_{p,t} is increased for each p ∈ S \ {p_t}. Thus, for each non-full node v whose T_v does not contain p_t, its mass u_{v,t} is also increased. Each such non-full node must be a sibling of some node A(pt, j) where j ∈ {ℓ_1, . . . , ℓ_h}. We assume that all siblings v′ of any node v increase b_{v′,t+1} at the same rate.
In the following, we compute the increasing (or decreasing) rate of all dual variables in the weighted σ-HST with respect to a_S. For 1 ≤ j ≤ ℓ, let ∇b_j = −db_{A(pt,j),t+1}/da_S be the decreasing rate of b_{A(pt,j),t+1} with respect to a_S. For each j ∈ {ℓ_1 + 1, ℓ_2 + 1, . . . , ℓ_h + 1}, let ∇b′_j = db_{w,t+1}/da_S be the increasing rate of b_{w,t+1} for any sibling w ∈ NFC(A(pt, j − 1)) of A(pt, j) with respect to a_S.
Using a top-down method, we can derive a set of equations for the quantities ∇b_j and ∇b′_j. First, we consider the siblings of A(pt, ℓ_1 + 1) (i.e., the children of A(pt, ℓ_1) other than A(pt, ℓ_1 + 1)). Let w be one of these siblings. If b_{w,t+1} is raised at rate ∇b′_{ℓ1+1}, by Corollary 4.4 the sum of ∇b′ along any path from a leaf in T_w to w must be φ(ℓ_1 + 1) · ∇b′_{ℓ1+1}. Since a_S is increasing at rate 1, this forces φ(ℓ_1 + 1) · ∇b′_{ℓ1+1} = 1 in order to keep the dual constraint (3.6) tight for non-full leaf nodes in T_w.
This takes care of the dual constraints for these non-full leaf nodes. Now, this increased mass must be canceled out by decreasing the mass in T_{A(pt,ℓ1+1)}, since the mass u_{A(pt,ℓ1),t} in T_{A(pt,ℓ1)} is not changed. Thus, in order to maintain the "Node Identity Property" of A(pt, ℓ_1), by Lemma 4.5, we must set ∇b_{ℓ1+1} such that:

∇b′_{ℓ1+1} = (∇b_{ℓ1+1} + ∇b′_{ℓ1+1}) · (u_{A(pt,ℓ1+1),t} + |NL(T_{A(pt,ℓ1+1)})|/k) / (u_{A(pt,ℓ1),t} + |S|/k)
= (∇b_{ℓ1+1} + ∇b′_{ℓ1+1}) · (u_{A(pt,ℓ2),t} + |NL(T_{A(pt,ℓ2)})|/k) / (u_r + |S|/k)
For the siblings of node A(pt, ℓ_2 + 1), we use a similar argument. Let w be a sibling of A(pt, ℓ_2 + 1), and consider a path from a non-full leaf node in T_w to w. Its dual constraint (3.6) already grows at rate 1 + φ(ℓ_1 + 1)∇b_{ℓ1+1}. This must be canceled out by increasing b_{w,t+1}; if b_{w,t+1} is raised at rate ∇b′_{ℓ2+1}, by Corollary 4.4 the sum of ∇b′ along any path from a leaf in T_w to w must be φ(ℓ_2 + 1) · ∇b′_{ℓ2+1}. Thus, ∇b′_{ℓ2+1} must be set such that φ(ℓ_2 + 1) · ∇b′_{ℓ2+1} = 1 + φ(ℓ_1 + 1) · ∇b_{ℓ1+1}.
Again, this increased mass must be canceled out by decreasing the mass in T_{A(pt,ℓ2)}. In order to keep the "Node Identity Property" of A(pt, ℓ_2), by Lemma 4.5, we must set ∇b_{ℓ2+1} such that:

∇b′_{ℓ2+1} = (∇b_{ℓ2+1} + ∇b′_{ℓ2+1}) · (u_{A(pt,ℓ2+1),t} + |NL(T_{A(pt,ℓ2+1)})|/k) / (u_{A(pt,ℓ2),t} + |NL(T_{A(pt,ℓ2)})|/k)
= (∇b_{ℓ2+1} + ∇b′_{ℓ2+1}) · (u_{A(pt,ℓ3),t} + |NL(T_{A(pt,ℓ3)})|/k) / (u_{A(pt,ℓ2),t} + |NL(T_{A(pt,ℓ2)})|/k)
Continuing this method, we obtain a system of linear equations in all ∇b_j and ∇b′_j (j ∈ {ℓ_1+1, ℓ_2+1, . . . , ℓ_h+1}). Keeping the dual constraints tight, we get the following equations:

φ(ℓ_1 + 1) · ∇b′_{ℓ1+1} = 1
φ(ℓ_2 + 1) · ∇b′_{ℓ2+1} = 1 + φ(ℓ_1 + 1) · ∇b_{ℓ1+1}
⋮
φ(ℓ_h + 1) · ∇b′_{ℓh+1} = 1 + Σ_{i=1}^{h−1} φ(ℓ_i + 1)∇b_{ℓi+1}

Keeping the node identity property, we get the following equations:

∇b′_{ℓ1+1} = (∇b_{ℓ1+1} + ∇b′_{ℓ1+1}) · (u_{A(pt,ℓ2),t} + |NL(T_{A(pt,ℓ2)})|/k) / (u_r + |S|/k)
∇b′_{ℓ2+1} = (∇b_{ℓ2+1} + ∇b′_{ℓ2+1}) · (u_{A(pt,ℓ3),t} + |NL(T_{A(pt,ℓ3)})|/k) / (u_{A(pt,ℓ2),t} + |NL(T_{A(pt,ℓ2)})|/k)
⋮
∇b′_{ℓh+1} = (∇b_{ℓh+1} + ∇b′_{ℓh+1}) · (u_{A(pt,ℓ),t} + |NL(T_{A(pt,ℓ)})|/k) / (u_{A(pt,ℓh),t} + |NL(T_{A(pt,ℓh)})|/k)
We now solve this system of linear equations. For each 1 ≤ j ≤ h,

φ(ℓ_j + 1) · ∇b′_{ℓj+1} = 1 + Σ_{i=1}^{j−1} φ(ℓ_i + 1)∇b_{ℓi+1}
= 1 + Σ_{i=1}^{j−2} φ(ℓ_i + 1)∇b_{ℓi+1} + φ(ℓ_{j−1} + 1)∇b_{ℓ_{j−1}+1}
= φ(ℓ_{j−1} + 1) · ∇b′_{ℓ_{j−1}+1} + φ(ℓ_{j−1} + 1)∇b_{ℓ_{j−1}+1}
= φ(ℓ_{j−1} + 1) · (∇b′_{ℓ_{j−1}+1} + ∇b_{ℓ_{j−1}+1}).

Since ∇b′_{ℓ_{j−1}+1} = (∇b_{ℓ_{j−1}+1} + ∇b′_{ℓ_{j−1}+1}) · (u_{A(pt,ℓj),t} + |NL(T_{A(pt,ℓj)})|/k) / (u_{A(pt,ℓ_{j−1}),t} + |NL(T_{A(pt,ℓ_{j−1})})|/k), we get:

φ(ℓ_j + 1)∇b′_{ℓj+1} = φ(ℓ_{j−1} + 1)∇b′_{ℓ_{j−1}+1} · (u_{A(pt,ℓ_{j−1}),t} + |NL(T_{A(pt,ℓ_{j−1})})|/k) / (u_{A(pt,ℓj),t} + |NL(T_{A(pt,ℓj)})|/k).

Solving the recursion, we get:

∇b′_{ℓj+1} = (u_r + |S|/k) / (φ(ℓ_j + 1) · (u_{A(pt,ℓj),t} + |NL(T_{A(pt,ℓj)})|/k)),

∇b_{ℓj+1} = ((u_r + |S|/k) / φ(ℓ_j + 1)) · [(u_{A(pt,ℓ_{j+1}),t} + |NL(T_{A(pt,ℓ_{j+1})})|/k)^{−1} − (u_{A(pt,ℓj),t} + |NL(T_{A(pt,ℓj)})|/k)^{−1}]
= ((u_r + |S|/k) / φ(ℓ_j + 1)) · [(u_{A(pt,ℓj+1),t} + |NL(T_{A(pt,ℓj+1)})|/k)^{−1} − (u_{A(pt,ℓj),t} + |NL(T_{A(pt,ℓj)})|/k)^{−1}].
The proof for Theorem 4.7 is as follows:
Proof. Let P denote the value of the objective function of the primal solution and D denote the value of the objective function of the dual solution. Initially, P = 0 and D = 0. In the following, we prove three claims:
(1) The primal solution produced by the algorithm is feasible.
(2) The dual solution produced by the algorithm is feasible.
(3) P ≤ 4ℓ · ln(1 + k) · D.
By these three claims and the weak duality of linear programs, the theorem follows immediately.
The proofs of claims (1) and (2) are similar to those of claims (1) and (2) in Section 3.7.
Third, we prove claim (3) as follows. If the algorithm increases the variable a_{S,t} at some time t, then ∂D/∂a_{S,t} = |S| − k. Let us now compute the primal cost. At depth j ∈ {ℓ_1 + 1, ℓ_2 + 1, . . . , ℓ_h + 1}, we compute the movement cost of our algorithm from the change ∇b′_j as follows.
To lighten notation, let R denote the ratio [Σ_{w∈NFC(A(pt,j−1))\{A(pt,j)}} (u_{w,t} + |NL(T_w)|/k)] / (u_{A(pt,j−1),t} + |NL(T_{A(pt,j−1)})|/k). Then the movement cost rate at depth j is

Σ_{w∈NFC(A(pt,j−1))\{A(pt,j)}} 2D(w) · (du_{w,t}/db_{w,t+1}) · (db′_j/da_S)
= ∇b′_j · Σ_{w∈NFC(A(pt,j−1))\{A(pt,j)}} 2D(w) · (ln(1+k)/(2D(w))) · (u_{w,t} + |NL(T_w)|/k)
= ((u_{r,t} + |S|/k) / φ(j)) · ln(1 + k) · R
= ((Σ_{p∈S} u_{p,t} + |S|/k) / φ(j)) · ln(1 + k) · R
< ((Σ_{p∈S\{pt}} u_{p,t} + (|S| − 1)/k + 2(|S| − k)) / φ(j)) · ln(1 + k) · R
≤ (3(|S| − k) + (|S| − 1)/k) · ln(1 + k) · R
≤ 4 · ln(1 + k) · (|S| − k) · R
≤ 4 · ln(1 + k) · (|S| − k),

since R ≤ 1, i.e., Σ_{w∈NFC(A(pt,j−1))\{A(pt,j)}} (u_{w,t} + |NL(T_w)|/k) ≤ u_{A(pt,j−1),t} + |NL(T_{A(pt,j−1)})|/k.
Here the strict inequality holds since Σ_{p∈S\{pt}} u_{p,t} < |S| − k; the reason is that, while the algorithm is increasing the variables, the condition Σ_{p∈S\{pt}} u_{p,t} = |S| − k is not yet satisfied at time t (since Σ_{p∈S\{pt}} u_{p,t} = |S| − k ⇔ u_{pt,t} = 0, i.e., the point at which the algorithm stops increasing the variables). In addition, when |S| ≥ k + 1, we have (|S| − 1)/k ≤ |S| − k.
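The last fact used above, (|S| − 1)/k ≤ |S| − k whenever |S| ≥ k + 1, rearranges to (k − 1)(|S| − (k + 1)) ≥ 0. A brute-force sketch over small values:

```python
# check (|S| - 1) / k <= |S| - k for all |S| >= k + 1, over small ranges;
# s plays the role of |S|
for k in range(1, 20):
    for s in range(k + 1, k + 40):
        assert (s - 1) / k <= s - k + 1e-12, (s, k)
```

Equality holds exactly at |S| = k + 1 (and for all |S| when k = 1), which is why the bound is tight in the worst case.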
Thus, the total cost over all depths j is at most 4ℓ · ln(1 + k) · (|S| − k). Hence, we get P ≤ 4ℓ · ln(1 + k) · D.
So, the competitive ratio of this algorithm is 4ℓ ln(1 + k).
Characterization and Inference of Graph Diffusion
Processes from Observations of Stationary Signals
arXiv:1605.02569v4 [] 6 Jun 2017
Bastien Pasdeloup, Vincent Gripon, Grégoire Mercier, Dominique Pastor, and Michael G. Rabbat
Abstract—Many tools from the field of graph signal processing
exploit knowledge of the underlying graph’s structure (e.g., as
encoded in the Laplacian matrix) to process signals on the
graph. Therefore, in the case when no graph is available, graph
signal processing tools cannot be used anymore. Researchers have
proposed approaches to infer a graph topology from observations
of signals on its nodes. Since the problem is ill-posed, these
approaches make assumptions, such as smoothness of the signals
on the graph, or sparsity priors. In this paper, we propose a
characterization of the space of valid graphs, in the sense that
they can explain stationary signals. To simplify the exposition in
this paper, we focus here on the case where signals were i.i.d.
at some point back in time and were observed after diffusion
on a graph. We show that the set of graphs verifying this
assumption has a strong connection with the eigenvectors of
the covariance matrix, and forms a convex set. Along with a
theoretical study in which these eigenvectors are assumed to be
known, we consider the practical case when the observations
are noisy, and experimentally observe how fast the set of valid
graphs converges to the set obtained when the exact eigenvectors
are known, as the number of observations grows. To illustrate
how this characterization can be used for graph recovery, we
present two methods for selecting a particular point in this set
under chosen criteria, namely graph simplicity and sparsity.
Additionally, we introduce a measure to evaluate how much
a graph is adapted to signals under a stationarity assumption.
Finally, we evaluate how state-of-the-art methods relate to this
framework through experiments on a dataset of temperatures.
I. I NTRODUCTION
In many applications, such as brain imaging [1] and hyperspectral imaging [2], it is convenient to model the relationships
among the entries of the signals studied using a graph. Tools
such as graph signal processing can then be used to help understand the studied signals, providing a spectral view of them.
However, there are many cases where a graph structure is not
readily available, making such tools not directly applicable.
Graph topology inference from only the knowledge of
signals observed on the vertices is a field that has received a lot
of interest recently. Classical methods to obtain such a graph
are generally based on estimators of the covariance matrix
using tools such as covariance selection [3] or thresholding of
This was supported by the European Research Council under the European
Union’s Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement n◦ 290901, by the Labex CominLabs Neural Communications, and by
the Natural Sciences and Engineering Research Council of Canada through
grant RGPAS 429296-12.
B. Pasdeloup, V. Gripon, G. Mercier, and D. Pastor are with UMR CNRS
Lab-STICC, Télécom Bretagne, 655 Avenue du Technopole, 29280, Plouzané,
France. Email: {name.surname}@telecom-bretagne.eu.
M.G. Rabbat is with the Department of Electrical and Computer Engineering, McGill University, 3480 University Street, Montréal, H3A 0E9, Canada.
Email: [email protected].
the empirical covariance matrix [4]. More recent approaches
make assumptions on the graph, and enforce properties such as
sparsity of the graph and/or smoothness of the signals [5]–[7].
A common aspect of all these techniques is that they
propose graph inference strategies that directly find a particular
topology from the signals based on some priors. Rather than
performing a direct graph inference, we explore an approach
that proceeds in two steps. First, we characterize the matrices
that may explain the relationships among signal entries. Then,
we introduce criteria to select a matrix from this set.
In this paper, we consider the case of stationary signals
[8]–[10]. These signals are such that their covariance matrix
has the same eigenvectors as the graph Fourier transform
operator. To simplify the exposition in this paper, we focus
here on the case of diffusion matrices, but the same ideas
and methods could work for general observations of stationary
signals on graphs. We assume that the signals were i.i.d. at
some point back in time. The relationships among entries of
the signals were then introduced by a diffusion matrix applied
a variable number of times on each signal. This matrix has
non-null entries only when a corresponding edge exists in
the underlying graph, and therefore is compliant with the
underlying graph structure, modeling a diffusion process on
it. Such matrices are referred to as graph shift operators [11],
[12], examples of which are the adjacency matrix or the graph
Laplacian. Under these settings, we address in this paper the
following question: How can one characterize an adapted
diffusion matrix from a set of observed signals?
To answer this question, we choose to focus in this paper on
a particular family of matrices to model the diffusion process
for the signals. Similar results can be obtained with other graph
shift operators by following the same development.
We show that retrieving a diffusion matrix from signals can
be done in two steps, by first characterizing the set of admissible candidate matrices, and then by introducing a selection
criterion to encourage desirable properties such as sparsity
or simplicity of the matrix to retrieve. This particular set of
admissible matrices is defined by a set of linear inequality
constraints. A consequence is that it is a convex polytope, in
which one can select a point by defining a criterion over the
set of admissible diffusion matrices and then maximizing or
minimizing the criterion.
We show that all candidate matrices share the same set of
eigenvectors, namely those of the covariance matrix. Along
with a theoretical study in which these eigenvectors are
assumed to be known, we consider the practical case when
only noisy observations of them are available, and observe
the speed of convergence of the approximate set of solutions
to the limit one, as the number of observed signals increases.
Two criteria for selecting a particular point in this set are
proposed. The first one aims to recover a graph that is simple,
and the second one encourages sparsity of the solution in the
sense of the L1,1 norm. Additionally, we propose a method to
obtain a diffusion matrix adapted to stationary signals, given
a graph inferred with other methods based on other priors.
This paper is organized as follows. First, Section II introduces the problem addressed in this article, and presents the
notions and vocabulary that are necessary for a full understanding of our work. Then, Section III reviews the work that has been done on graph recovery from observations of signals.
Section IV studies the desired properties that characterize
the admissible diffusion matrices, both in the ideal case and
in the approximate one. Section V introduces methods to
select an admissible diffusion matrix in the polytope based
on a chosen criterion. Then, in Section VI, these methods
are evaluated on synthetic data. Finally, in Section VII, a
dataset of temperatures in Brittany is studied, and additional
experiments are performed to establish whether current state-of-the-art methods can be used to infer a valid diffusion matrix.
II. P ROBLEM FORMULATION
A. Definitions
We consider a set of N random variables (vertices) of interest. Our objective is, given a set of M realizations (signals)
of these variables, to infer a diffusion matrix adapted to the
underlying graph topology on which unknown i.i.d. signals
could have evolved to generate the given M observations.
Definition 1 (Graph): A graph G is a pair (V, E) in which
V = {1, . . . , N } is a set of N vertices and E ⊆ V × V is a
set of edges. In the remainder of this document, we consider
positively weighted undirected graphs. Therefore, we make no
distinction between edges (u, v) and (v, u). We denote such
an edge by {u, v}. A convenient way to represent G is through
its adjacency matrix W:
αuv if {u, v} ∈ E
W(u, v) ,
; αuv ∈ R+ ; ∀u, v ∈ V .
0
otherwise
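As an illustrative sketch of Definition 1 (the graph and its weights below are arbitrary examples, not taken from the paper), the adjacency matrix of a small positively weighted undirected graph can be built as follows:

```python
import numpy as np

N = 4
# arbitrary undirected weighted edges {u, v} with weights alpha_uv > 0
edges = {(0, 1): 1.0, (1, 2): 2.0, (2, 3): 0.5, (0, 2): 1.5}

W = np.zeros((N, N))
for (u, v), a in edges.items():
    W[u, v] = W[v, u] = a   # no distinction between (u, v) and (v, u)

assert np.allclose(W, W.T)   # undirected graph => symmetric W
assert (W >= 0).all()        # positive weights only
```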
Graph shift operators are defined by Sandryhaila et al. [11],
[12] as local operations that replace a signal value at each
vertex of a graph with the linear combination of the signal
values at the neighbors of that vertex. The adjacency matrix
is an example of graph shift operator, as its entries are non-null
if and only if there exists a corresponding edge in E.
Graphs may have numerous properties that can be used as
priors when inferring an unknown graph. In this paper, we
are particularly interested in the sparsity of the graph, which measures the density of its edges, and in its simplicity. A graph
is said to be simple if no vertex is connected to itself, i.e., if the
diagonal entries of the associated adjacency matrix are null.
These two properties are often desired in application domains.
In the general case, we consider graphs that can have self-loops, i.e., non-null elements on the diagonal of W.
A signal on a graph can be seen as a vector that attaches a
value to every vertex in V.
Definition 2 (Signal): A signal x on a graph G of N vertices
is a function on V. For convenience, signals on graphs are
represented by vectors in RN , in which x(i) is the signal
component associated with the ith vertex of V.
A widely-considered matrix that allows the study of signals
on a graph G is the normalized Laplacian of G.
Definition 3 (Normalized Laplacian): The normalized
Laplacian Ł of a graph G with adjacency matrix W is a
differential operator on G, defined by Ł ≜ I − D^{−1/2} W D^{−1/2}, where D is the diagonal matrix of degrees of the vertices: D(u, u) ≜ Σ_{v∈V} W(u, v), ∀u ∈ V, and I is the identity matrix
of size N . Note that for Ł to be defined, D must contain only
non-null entries on its diagonal, which is the case when every
vertex has at least one neighbor.
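A minimal sketch of this definition on an arbitrary small graph (a 3-node path with unit weights), also checking the spectral facts recalled later in this section (Ł is symmetric and its eigenvalues lie in [0, 2]):

```python
import numpy as np

# arbitrary connected graph: path 0-1-2 with unit weights
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
d = W.sum(axis=1)                      # degrees D(u, u)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(3) - D_inv_sqrt @ W @ D_inv_sqrt   # normalized Laplacian

evals = np.linalg.eigvalsh(L)
assert np.allclose(L, L.T)
assert evals.min() >= -1e-9 and evals.max() <= 2 + 1e-9
```

Note that the path graph is bipartite, so 2 is actually attained as an eigenvalue here, consistent with the property cited from [20].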
An interpretation of this matrix is obtained by considering
the propagation of a signal x on G using Ł. By definition,
Łx = Ix − (D^{−1/2} W D^{−1/2})x. Therefore, the normalized Laplacian models the variation of a signal x when diffused through one step of a diffusion process represented by the graph shift operator T_Ł ≜ D^{−1/2} W D^{−1/2}. More generally, in this paper, we
define diffusion matrices as follows:
Definition 4 (Diffusion matrix): A diffusion matrix T is a
symmetric matrix such that
• ∀u, v ∈ V : T(u, v) ≥ 0;
• λ1 = 1;
• ∀i ∈ {2, . . . , N}: |λi| ≤ 1,
where λi are the eigenvalues of T, in descending order.
The idea behind these constraints is that we want to model a
diffusion process by a matrix. Such process propagates signal
components from vertex to vertex and consequently consists
of positive entries indicating what quantity of signal is sent
to the neighboring vertices. Enforcing all eigenvalues to have
their modulus be at most 1 imposes a scale factor, and has
the interesting consequence that the sequence (T^i x)_i is bounded, for any signal x.
Note that by construction, the largest eigenvalue of TŁ is 1.
In our experiments, we will use this particular matrix TŁ to
diffuse signals on the graph. Other popular matrices could be
used instead to diffuse signals. For example, any polynomial of
TŁ could be used [13]–[15].
B. Graph Fourier transform
One of the cornerstones of signal processing on graphs
is the analogy between the notion of frequency in classical
signal processing and the eigenvalues of the Laplacian. The
eigenvectors of the Laplacian of a binary ring graph correspond to the classical Fourier modes (see e.g., [16] for a
detailed explanation). The lowest eigenvalues are analogous
to low frequencies, while higher ones correspond to higher
frequencies. Using this analogy, researchers have successfully
been able to use graph signal processing techniques on nonring graphs (e.g., [17], [18]).
To be able to do so, the Laplacian matrix of the studied
graph must be diagonalizable. Although undirectedness is a sufficient, but not necessary, condition for diagonalizability, we only consider undirected graphs in this article (see Definition 1),
for which the normalized Laplacian as defined in Definition 3
is symmetric. Note that there also exist definitions of the
Laplacian matrix when the graphs are directed [19].
To understand the link between diffusion of signals on the
graph and the notion of smoothness on the graph introduced in
Section II-C, we need to introduce the graph Fourier transform
[16], [20], that transports a signal x defined on the graph into
its spectral representation x̂:
Definition 5 (Graph Fourier transform): Let Λ = (λ1, . . . , λN) be the set of eigenvalues of Ł, sorted by increasing value, and X = (χ1, . . . , χN) be the matrix of associated eigenvectors. The graph Fourier transform of a signal x is the projection of x in the spectral basis defined by X: x̂ ≜ X^⊤ x. x̂ is a vector in R^N, in which x̂(i) is the spectral component associated with χi.
This operator allows the transportation of signals into a
spectral representation defined by the graph. Note that there
exist other graph Fourier transform operators, based on the
eigenvectors of the non-normalized Laplacian L , D − W or
on those of the adjacency matrix [11].
An important property of the normalized Laplacian states
that the eigenvalues of Ł lie in the closed interval [0, 2], with
the multiplicity of eigenvalue 0 being equal to the number of
connected components in the graph, and 2 being an eigenvalue
for bipartite graphs only [20]. We obtain that the eigenvalues
of TŁ lie in the closed interval [-1, 1], with at least one of
them being equal to 1. Also, since TŁ and Ł only differ by an
identity, both matrices share the same set of eigenvectors. If
the graph is connected then TŁ has a single eigenvalue equal
to 1, being associated with a constant-sign eigenvector χ1 :
∀i ∈ {1, . . . , N}: χ1(i) = √(D(i, i)/Tr(D)),   (1)
where D is the matrix of degrees introduced in Definition 3,
and all other eigenvalues of TŁ are strictly less than 1.
Therefore, diffusing a signal x using TŁ shrinks the spectral
contribution of the eigenvectors of Ł associated with high
eigenvalues more than those associated with lower ones.
It is worth noting that since one of the eigenvalues of
TŁ is equal to 1, then the contribution of the associated
eigenvector χ1 does not change after diffusion. Therefore,
after numerous diffusion steps, (x̂(i))_{i∈[2;N]} become close to
null and x becomes stable on any non-bipartite graph. As a
consequence, we consider in our experiments signals that are
diffused a limited number of times.
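Equation (1) and the spectral bounds on T_Ł can be checked numerically on a small connected graph; a sketch on an arbitrary triangle graph with unit weights:

```python
import numpy as np

# triangle graph with unit weights: W = J - I, all degrees equal 2
W = np.ones((3, 3)) - np.eye(3)
d = W.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
T = D_inv_sqrt @ W @ D_inv_sqrt        # T_L = D^{-1/2} W D^{-1/2}

evals, evecs = np.linalg.eigh(T)       # ascending eigenvalues
assert abs(evals[-1] - 1.0) < 1e-9     # largest eigenvalue of T_L is 1
assert (np.abs(evals) <= 1 + 1e-9).all()   # spectrum inside [-1, 1]

chi1 = np.sqrt(d / d.sum())            # equation (1): sqrt(D(i,i)/Tr(D))
top = evecs[:, -1]
top = top * np.sign(top[0])            # fix the sign ambiguity of eigh
assert np.allclose(top, chi1, atol=1e-9)
```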
C. Smoothness of signals on the graph
A commonly desired property for signals on graphs is
smoothness. Informally, a signal is said to be smooth on the
graph if it has similar entries where the corresponding vertices
are adjacent in the graph. In more detail, given a diffusion
matrix T for a graph G, smoothness of a signal x can be
measured via the following quantity:
S(x) ≜ Σ_{{u,v}∈E} T(u, v) · (x(u) − x(v))².   (2)
From this equation, we can see that the lower S(x) is,
the more regular are the entries of x on the graph. When
using TŁ as a diffusion matrix, signals that are low-frequency,
i.e., that mostly have a spectral contribution of the lower
eigenvectors of the Laplacian, have a low value of S(x) and are
then smooth on the graph. As mentioned above, diffusion of
signals using TŁ shrinks the contribution of eigenvectors of Ł
associated with higher eigenvalues more than the contribution
of the ones associated with lower eigenvalues. Hence the
property that diffused signals become low-frequency after
some diffusion steps, and hence smooth on the graph. In
addition to seeing diffusion as a link between graphs and
signals naturally defined on them, this interesting property
justifies the assumption, made in many papers, that signals
should be smooth on a graph modeling their support [5]–[7].
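As a small illustration of equation (2) (on an arbitrary 4-node path graph with unit diffusion weights), a constant signal achieves S(x) = 0, while an alternating signal does not:

```python
import numpy as np

# path graph 0-1-2-3; T(u, v) = 1 on each edge (illustrative weights)
edges = [(0, 1), (1, 2), (2, 3)]

def S(x):
    # smoothness measure of equation (2), with unit edge weights
    return sum((x[u] - x[v]) ** 2 for u, v in edges)

smooth = np.array([1.0, 1.0, 1.0, 1.0])      # constant signal
rough = np.array([1.0, -1.0, 1.0, -1.0])     # alternating signal

assert S(smooth) == 0.0
assert S(rough) == 12.0      # 3 edges, each contributing (2)^2 = 4
assert S(smooth) < S(rough)  # smoother signal => lower S(x)
```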
D. Stationarity of signals on the graph
Considering stationary signals is a very classical framework
in traditional signal processing that facilitates the analysis of
signals. Analogously, stationary processes on graphs have been
recently defined to ease this analysis in the context of signal
processing on graphs [8]–[10].
A random process on a graph is said to be (wide-sense)
stationary if its first moment is constant over the vertex
set and its covariance matrix is invariant with respect to
the localization operator [10]. In particular, white noise is
stationary for any graph, and any number of applications
of a graph shift operator on such noise leaves the process
stationary. This implies that the covariance matrix of stationary
signals shares the same eigenvectors as this particular operator
(see Section III-D for details).
Diffusion of signals is a particular case of stationary processing. The example we develop in this article when studying
diffusion of signals through a matrix T can be generalized to
any stationary process and any graph shift operator, with only
few adaptations.
E. Problem formulation
Using the previously introduced notions, we can formulate
the problem we address in this paper as follows. Let X =
(x1 , . . . , xM ), xi ∈ RN , be a N ×M matrix of M observations,
one per column. Let Y = (y1 , . . . , yM ), yi ∈ RN , be a N ×M
unknown matrix of M i.i.d. signals; i.e., the entries Y(i, j) are
zero-mean, independent random variables. Let k ∈ R^M_+ be an
unknown vector of M positive numbers, corresponding to the
number of times each signal is diffused before observation.
Given X, we aim to characterize the set of all diffusion
matrices¹ T̃ such that there exist Y and k with:

∀i ∈ {1, . . . , M}: x_i = T̃^{k(i)} y_i.   (3)
This framework can be seen as a particular case of graph
filters [12], containing only a monomial of the diffusion
matrix. From a practical point of view, this corresponds to
the setup where all signals are observed at a given time t,
but have been initialized at various instants t − k(i). More
generally, all polynomials of the diffusion matrix share the
same eigenvectors. The key underlying assumption in our work
1 Throughout this article we will denote recovered/estimated quantities using
a tilde.
is that each observation is the result of passing white noise
through a graph filter whose eigenvectors are the same as those
of the normalized Laplacian. Consequently, our approach can
be applied to any graph filter
∀i ∈ {1, . . . , M}: x_i = Σ_{j=0}^{∞} (K_i)_j T̃^j y_i,   (4)
for M sequences K1 , . . . , KM .
To summarize the following sections, we infer a diffusion
matrix in two steps. First, we characterize the convex set of
solutions using the method in Section IV. Then, we select a
point from this set using some criteria on the matrix we want
to infer. The strategies we propose are given in Section V.
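Under model (3), if the y_i are white (E[y y^⊤] = I), then the covariance of x = T̃^k y is E[x x^⊤] = T̃^k I (T̃^k)^⊤ = T̃^{2k}, a polynomial in T̃ that therefore shares its eigenvectors — the key fact exploited in the rest of the paper. A sketch checking that this analytic covariance commutes with T̃ (the matrix T̃ and the count k are arbitrary examples):

```python
import numpy as np

# arbitrary symmetric, doubly stochastic "diffusion" matrix and count k
T = np.array([[0.5, 0.3, 0.2],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])
k = 3

# for white y (E[y y^T] = I): E[x x^T] = T^k (T^k)^T = T^(2k)
Sigma = np.linalg.matrix_power(T, 2 * k)

# Sigma is a polynomial in T, hence commutes with T and shares
# its eigenvectors
assert np.allclose(T @ Sigma, Sigma @ T)
```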
III. R ELATED WORK
While much effort has gone into inferring graphs from
signals, the problem of characterizing the set of admissible
graphs under diffusion priors is relatively new, and forms the
core of our work. In this section we review related work on
reconstructing graphs from the observation of diffused signals
and make connections to the approach we consider. Additional
approaches exist but consider different signal models such as
time series [13], [21], band-limited signals [22] or combinations of localized functions [14], [15].
A. Estimation of the covariance matrix
As stated in the introduction, obtaining the eigenvectors of
the covariance matrix is a cornerstone of our approach. They
allow us to define a polytope limiting the set of matrices that
can be used to model a diffusion process.
Since the covariance matrix Σ ≜ E[XX^⊤] is not obtainable in practical cases, a common approach involves estimating Σ using the sample covariance matrix Σ̃:

Σ̃ ≜ (1/(M − 1)) (X − M)(X − M)^⊤, (5)

where M(i, j) ≜ (1/M) Σ_{k=1}^{M} X(i, k) is an N × M matrix with each
row containing the mean signal value for the associated vertex.
An interesting property of this matrix is that its eigenvectors
converge to those of the covariance matrix as the number of
signals increases (see Section IV-B).
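As a rough numerical illustration (the graph, sizes, seed and variable names below are our own choices, not from the article), the following sketch diffuses white signals through a normalized-adjacency diffusion matrix and checks that the top eigenvector of the sample covariance (5) aligns with the constant-sign eigenvector χ1 of that matrix:

```python
import numpy as np

# Our own toy setup: T_L = D^{-1/2} W D^{-1/2} for a random symmetric W.
rng = np.random.default_rng(0)
N, M, K = 8, 50_000, 1
W = rng.uniform(size=(N, N))
W = (W + W.T) / 2
d = W.sum(axis=1)
T = W / np.sqrt(np.outer(d, d))

# chi_1: eigenvector of T for eigenvalue 1 (proportional to D^{1/2} 1)
chi1 = np.sqrt(d) / np.linalg.norm(np.sqrt(d))

Y = rng.standard_normal((N, M))          # white signals
X = np.linalg.matrix_power(T, K) @ Y     # diffused K times
Xc = X - X.mean(axis=1, keepdims=True)
S = Xc @ Xc.T / (M - 1)                  # sample covariance, as in (5)

chi1_hat = np.linalg.eigh(S)[1][:, -1]   # top eigenvector of S
print(abs(chi1 @ chi1_hat) > 0.99)       # close alignment for large M
```

Up to sign, the estimated eigenvector approaches χ1 as M grows; Section IV-B quantifies this convergence.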
Other methods exist to infer a covariance matrix [23]–[26]
and may be interesting to consider in place of the sample
covariance matrix. Methods for retrieving a sparse covariance
matrix based on properties of its spectral norm are described
in [23] and [24]. However, these works do not provide any
information on the convergence rate of the eigenvectors of
their solutions to the eigenvectors of the covariance matrix,
as the number of signals increases. Similarly, [25] and [26]
retrieve covariance matrices that converge in operator norm
or in distribution. An intensive study of covariance estimation
methods could be interesting to find techniques that improve
the convergence of eigenvectors. This paper focuses on the use
of the sample covariance matrix.
B. Graphical lasso for graph inference
A widely-used approach to provide a graph is the graphical lasso [27], which recovers a sparse precision matrix (i.e., inverse covariance matrix) Θ̃ under the assumption that the data are observations from a multivariate Gaussian distribution. The core of this method consists in solving the following problem:

Θ̃ = argmin_{Θ ≥ 0} Tr(Σ̃Θ) − log det(Θ) + λ‖Θ‖₁, (6)

where Σ̃ is the sample covariance matrix and λ is a regularization parameter controlling sparsity.
Numerous variations of this technique have been developed
[28]–[31], and several applications have been using graphical
lasso-based methods for inferring a sparse graph. Examples
can be found for instance in the fields of neuroimaging [32],
[33] or traffic modeling [34].
What makes this method interesting, in addition to its fast
convergence to a sparse solution, is a previous result from
Dempster [3]. In the covariance selection model, Dempster
proposes that the inverse covariance matrix should have numerous null off-diagonal entries. An additional result from
Wermuth [35] states that the non-null entries in the precision
matrix correspond to existing edges in a graph that is representative of the studied data.
Therefore, in our experiments, we evaluate whether considering the result of the graphical lasso as a graph makes it
admissible or not to model a diffusion process. However, when
considering (6), we can see that the method does not impose
any similarity between the eigenvectors of the covariance
matrix and those of the inferred solution. For this reason,
we do not expect this method to provide a solution that is
admissible in our settings.
Close to the graphical lasso, [36] and [37] propose an
algorithm to infer a precision matrix by adding generalized
Laplacian constraints. While this allows for good recovery of
the precision matrix, it proceeds in an iterative way by following a block descent algorithm that updates one row/column
per iteration. As for the graphical lasso, it does not force
the eigenvectors of the retrieved matrix to match those of the
covariance matrix, and therefore does not match our stationarity assumption. Interestingly, these methods could also be
mentioned in the next section, dedicated to smoothness-based
methods. In particular, [36] has pointed out that minimizing the quantity Tr(Σ̃Θ) promotes smoothness of the solution when Θ is a graph Laplacian. Additionally, [38] promotes
sparsity of the inferred graph by applying a soft threshold to
the precision matrix, and shows that the solution matches a
smoothness assumption on signals.
C. Smoothness-based methods for graph inference
Another approach to recover a graph is to assume that the
signal components should be similar when the vertices on
which they are defined are linked with a strong weight in
W, thus enforcing natural signals on this graph to be low-frequency (smooth). Using the definition of smoothness of
signals on a graph in (2), we can see that the smaller S(x),
the more regular the components of x on the graph.
A first work taking this approach has been proposed by
Lake and Tenenbaum [5], in which they solve a convex
optimization problem to recover a sparse graph from data
to learn the structure best representing some concepts. More
recently, Dong et al. [6] have proposed a similar method that
outperforms the one by Lake and Tenenbaum. In order to find a
graph Laplacian that minimizes S in (2) for a set of signals, the
authors propose an iterative algorithm that converges to a local
solution, based on the resolution of the following problem:
L* = arg min_{L,Y} ‖X − Y‖²_F + α Tr(Y^⊤ L Y) + β ‖L‖²_F
s.t. Tr(L) = N; L(i, j) = L(j, i) ≤ 0, i ≠ j; ∀i ∈ {1, . . . , N} : Σ_{j=1}^{N} L(i, j) = 0, (7)

where L* is the recovered non-normalized Laplacian, ‖·‖_F is the Frobenius norm, Y is a matrix in R^{N×M} that can be
considered as a noiseless version of signals X, and α and β
are regularization parameters controlling the distance between
X and Y, and the sparsity of the solution.
Kalofolias [7] proposes a unifying framework to improve
the previous solutions of Lake and Tenenbaum, and Dong et
al., by proposing a better prior and reformulating the problem
to optimize over entries of the (weighted) adjacency matrix
rather than the Laplacian. An efficient implementation of his
work is provided in the Graph Signal Processing Toolbox [39].
His approach consists in rewriting the problem as an ℓ1-minimization, which leads to naturally sparse solutions. Moreover,
the author has shown that the method from Dong et al. could
be encoded in his framework.
Graph inference with smoothness priors continues to receive
a lot of interest. Recently, Chepuri et al. [40] have proposed
to infer a sparse graph on which signals are smooth, using
an edge selection strategy. Finally, enforcing the smoothness
property for signals defined on a graph has also been considered by Shivaswamy and Jebara [41], where a method is
proposed to jointly learn the kernel of an SVM classifier and
optimize the spectrum of the Laplacian to improve this classification. Contrary to our approach, Shivaswamy and Jebara [41]
study a semi-supervised case, in which the spectrum of the
Laplacian is learned based on a set of labeled examples.
D. Diffusion based methods for graph inference
Recently we proposed a third approach to recover a graph
from diffused signals. In [42], we study a particular case of the
problem we consider here, namely when k is a known constant
vector. Let K denote the value in every entry of this vector. We
show in [42] that the covariance matrix of signals diffused K
times on the graph is equal to T^{2K}. This implies that we need
to recover a particular root of the covariance matrix to obtain
T. In more detail, if Y is a matrix of mutually independent signals with independent entries, X = T^K Y, and Σ is the covariance matrix of X, we have:

Σ = E[XX^⊤] = E[T^K YY^⊤ (T^K)^⊤] = T^{2K}, (8)
using the independence of Y and the symmetry of T.
Since K is known, one could then retrieve a matrix T̃ by diagonalizing Σ, taking the 2K-th root of the obtained eigenvalues, and solving a linear optimization problem to recover their missing signs. This reconstruction process was illustrated on synthetic cases, where a graph G is generated, and M i.i.d. signals are diffused on it using the associated matrix T^K to obtain X [42]. Experiments demonstrate that when using Σ̃ = T^{2K} (which is the limit case when M grows to infinity), we can successfully recover T̃ = T.
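A minimal sanity check of this root-taking idea (our own construction, not the code of [42]; we make T positive semi-definite so that the sign-recovery step becomes unnecessary):

```python
import numpy as np

# Toy check: when T is PSD with known K, the positive 2K-th root of
# Sigma = T^{2K} recovers T exactly, with no sign ambiguity to resolve.
rng = np.random.default_rng(0)
N, K = 5, 3

Q, _ = np.linalg.qr(rng.standard_normal((N, N)))   # random orthonormal basis
lam = np.linspace(0.1, 0.9, N)                     # well-separated eigenvalues
T = Q @ np.diag(lam) @ Q.T                         # symmetric PSD "diffusion"

Sigma = np.linalg.matrix_power(T, 2 * K)           # limit covariance, as in (8)

w, V = np.linalg.eigh(Sigma)
T_hat = V @ np.diag(np.clip(w, 0, None) ** (1 / (2 * K))) @ V.T

print(np.allclose(T_hat, T, atol=1e-8))            # True
```

In the general case treated by [42], eigenvalues may be negative, which is exactly why the extra sign-recovery optimization is needed.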
However, this previous work has two principal limitations:
1) The number of diffusion steps k is constant and known,
which is a limiting assumption since in practical applications signals may be obtained after a variable,
unknown number of diffusion steps. In this work, we
remove this assumption. Taking the 2K-th root of
the eigenvalues of Σ is therefore no longer possible.
2) The number of observations M is assumed to be infinite
so that we have a perfect characterization of the eigenvectors of the covariance matrix. We also address this
assumption in this paper and show that the higher M ,
the closer the recovered graph is to the ground truth.
Ongoing work by Segarra et al. [43], [44], initiated in
[45], takes a similar direction. The authors propose a two-step
approach, where they first retrieve the eigenvectors of a graph
shift operator, and then infer the missing eigenvalues based
on some criteria. They also study the case of stationary graph
processes, for which the covariance matrix shares the same
eigenbasis as the graph Fourier transform operator, and use
this information to infer a graph based on additional criteria.
However, while the characterization of the set of solutions
is identical to ours, our works differ in the matrix selection
strategy. Segarra et al. [43] focus on adjacency and Laplacian
inference, while we aim at recovering a matrix modeling
a diffusion process. Still, note that both of our works can
be easily extended to any graph shift operator, by setting
up the correct set of constraints. The authors of [43] solve
a slightly different problem, where they minimize the `1
norm of the inferred matrix under more constraints than ours,
which describe a valid Laplacian matrix. In particular, they
enforce the diagonal elements of the solution to be null,
thus considering graphs that do not admit self-loops. In more detail, they solve the following optimization problem:

S* = arg min_{S, λ1,...,λN} ‖S‖₁ s.t. S = Σ_{i=1}^{N} λi χi χi^⊤; S ∈ S, (9)
where S∗ is the inferred graph shift operator, S is the set
of admissible solutions delimited by their constraints, and
χ1 , . . . , χN are the eigenvectors of the covariance matrix.
Contrary to their approach, we aim at inferring a matrix that
can be simple (see Section V-A) or sparse (see Section V-B),
rather than selecting a sparse matrix from the set of simple
matrices. Among other differences, we propose in Section V-C
a method to approximate the solution of any graph inference
strategy to make it match our stationary assumption on signals.
Our work also explores how the polytope of solutions can be
used to evaluate which graph, among a set of given graphs, is
the most adapted to given signals.
E. Other related work
Shahrampour and Preciado [46], [47] study the context of
network inference from stimulation of its vertices with noise.
However, their method implies a series of node knockout
operations that need to individually intervene on the vertices.
Also, we note that there exist methods that aim to recover
a graph from the knowledge of its Laplacian spectrum [48].
However, we do not assume that such information is available.
Finally, a recent work by Shafipour et al. [49] has started
to explore the problem of graph inference from non-stationary
graph signals, which is a direct continuation of the work
presented in this article and of the work by Segarra et al.
Figure 1: All pairs (λ̃2, λ̃3) ∈ [−1, 1] × [−1, 1] (using a step of 10^−2) for which T̃ = X diag(1, λ̃2, λ̃3) X^⊤ is an admissible diffusion matrix (in red). The exact eigenvalues of the matrix T_Ł associated with W are located using a green dot.
IV. CHARACTERIZATION OF THE SET OF ADMISSIBLE DIFFUSION MATRICES
In this section, we show that the set of diffusion matrices
verifying the properties in Definition 4 is a convex polytope
delimited by linear constraints depending on the eigenvectors
of the covariance matrix of signals diffused on the graph. Then,
we study the impact of a limited number of observations on
the deformation of this polytope, due to imprecision in the estimation of these eigenvectors.
A. Characterization of the polytope of solutions
In the asymptotic case when M is infinite, the covariance
matrix Σ of the given signals X is equal to a (fixed) power K
of the diffusion matrix. Thus, under these asymptotic settings,
the eigenvectors X of the diffusion matrix can be obtained using Principal Component Analysis on X
[50]. In the more global case when k is a vector, the covariance
matrix of the signals is a linear combination of multiple powers
of T, and has therefore the same set of eigenvectors, since all
powers of a matrix share the same eigenvectors. This is also
the case when considering graph filters as in (4).
In more detail, if we consider signals x_i = T^{k(i)} y_i, we have the following development. We denote by X(i) the signal at the i-th column of X, and drop the constant factor and signal means from (5) for readability:

Σ̃ = Σ_{i=1}^{M} X(i) X(i)^⊤
  = Σ_{k∈k} Σ_{i s.t. k(i)=k} T^k Y(i) Y(i)^⊤ (T^k)^⊤
  = Σ_{k∈k} T^k ( Σ_{i s.t. k(i)=k} Y(i) Y(i)^⊤ ) (T^k)^⊤, (10)

Σ = Σ_{k∈k} T^k E_Y[ Σ_{i s.t. k(i)=k} Y(i) Y(i)^⊤ ] (T^k)^⊤
  = Σ_{k∈k} |{i, k(i) = k}| T^{2k},
which is a linear combination of various powers of T, all
having the same eigenvectors X .
Let us first consider the limit case when the eigenvectors X of Σ are available. Given the remarks in Section II-B, to recover an acceptable diffusion matrix T̃, we must find eigenvalues Λ̃ = (λ̃1, . . . , λ̃N) such that:
• ∀i, j ∈ {1, . . . , N}; j ≥ i : T̃(i, j) ≥ 0;
• λ̃1 = 1, where χ1 is the constant-sign eigenvector in X;
• ∀i ∈ {1, . . . , N} : λ̃i ∈ [−1, 1].
Note that these constraints are driven by the will to recover
a diffusion matrix as defined in Definition 4. If we were
considering other graph shift operators, these constraints would be different, yielding a different set of inequalities in (13). As an example, diffusion using
a Laplacian matrix would imply the definition of constraints
that enforce the diagonal entries to be positive and off-diagonal
ones to be negative (see [43]). Similarly, aiming to recover the diffusion matrix T_Ł ≜ D^{−1/2} W D^{−1/2} associated with the normalized Laplacian would imply additional constraints.
To illustrate how these properties translate into a set of admissible diffusion matrices, let us consider the randomly generated 3 × 3 symmetric adjacency matrix

W = [0.417 0.302 0.186; 0.302 0.147 0.346; 0.186 0.346 0.397].

We compute its associated matrix T_Ł and corresponding eigenvectors X. This simulates a perfect retrieval of the eigenvectors of the covariance matrix of signals diffused by T_Ł on the graph. For all pairs (λ̃2, λ̃3) ∈ [−1, 1] × [−1, 1] (using a step of 10^−2), Fig. 1 depicts those that allow the reconstruction of a diffusion matrix.
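The sweep behind Fig. 1 can be sketched as follows (a reimplementation under our own conventions, not the authors' code); each candidate pair is kept when the reconstructed matrix has only nonnegative entries:

```python
import numpy as np

# The 3x3 adjacency matrix from the text, its matrix T_L, and a grid sweep
# over candidate eigenvalue pairs (l2, l3) with l1 fixed to 1.
W = np.array([[0.417, 0.302, 0.186],
              [0.302, 0.147, 0.346],
              [0.186, 0.346, 0.397]])
d = W.sum(axis=1)
T = W / np.sqrt(np.outer(d, d))        # T_L = D^{-1/2} W D^{-1/2}

w, X = np.linalg.eigh(T)               # ascending eigenvalues
w, X = w[::-1], X[:, ::-1]             # eigenvalue 1 (constant-sign chi_1) first

def is_admissible(l2, l3):
    # small negative tolerance guards against floating-point round-off
    return bool((X @ np.diag([1.0, l2, l3]) @ X.T >= -1e-12).all())

# The true eigenvalue pair of T_L must lie in the admissible set:
print(is_admissible(w[1], w[2]))       # True

grid = np.linspace(-1.0, 1.0, 201)     # step 10^-2
count = sum(is_admissible(a, b) for a in grid for b in grid)
print(count > 0)                       # the admissible region is nonempty
```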
As we can see, the set of admissible matrices is convex (though not strictly convex), delimited by affine equations. To characterize these equations, let us consider any entry of index (i, j) in the upper triangular part of the matrix T̃ we want to recover. Since X is assumed to be known, by developing the matrix product T̃ ≜ X diag(λ̃1, . . . , λ̃N) X^⊤, we can write every entry T̃(i, j) as a linear combination of the variables λ̃1, . . . , λ̃N by developing the scalars in X. Let α_{ij1}, . . . , α_{ijN} be the factors associated with λ̃1, . . . , λ̃N for the equation associated with the entry T̃(i, j), i.e.:

T̃(i, j) = α_{ij1} λ̃1 + · · · + α_{ijN} λ̃N. (11)
As an example, let us consider a 3×3 matrix T with known
eigenvectors X . Using the decomposition of T, we can write
T(2, 3) as follows:
T(2, 3) = X(2, 1)X(3, 1) λ1 + X(2, 2)X(3, 2) λ2 + X(2, 3)X(3, 3) λ3, (12)

where the three products are respectively α_{231}, α_{232} and α_{233}.
Enforcing all entries of the matrix to be nonnegative thus defines the following set of N(N + 1)/2 inequalities:

∀i, j ∈ {1, . . . , N}; j ≥ i : α_{ij1} λ̃1 + · · · + α_{ijN} λ̃N ≥ 0, (13)
where j ≥ i comes from the symmetry property.
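The coefficients α_{ijn} are simply products of eigenvector entries, α_{ijn} = X(i, n)X(j, n), which the following sketch (hypothetical names, random stand-in eigenvectors) verifies against the direct matrix product:

```python
import numpy as np

# Verify that entry (i, j) of Q diag(lam) Q^T equals the linear form (11)
# with coefficients alpha[i, j, n] = Q(i, n) Q(j, n).
rng = np.random.default_rng(1)
N = 4
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))   # stand-in eigenvectors
lam = rng.uniform(-1.0, 1.0, size=N)

alpha = np.einsum('in,jn->ijn', Q, Q)              # all alpha coefficients

T_direct = Q @ np.diag(lam) @ Q.T
T_linear = alpha @ lam                             # entry-wise linear form (11)
print(np.allclose(T_direct, T_linear))             # True
```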
Our problem of recovering the correct set of eigenvalues to
reconstruct the diffusion matrix thus becomes a problem of
selecting a vector of dimension N − 1 (since one eigenvalue
is equal to 1 due to the imposed scale) in the convex polytope
delimited by (13). Since the number of possible solutions
is infinite, it is an ill-posed problem. To cope with this
issue, one then needs to incorporate additional information
or a selection criterion to enforce desired properties on the
reconstructed matrix. To illustrate the selection of a point in
the polytope, Section V presents strategies based on different
criteria, namely sparsity and simplicity.
Note that the polytope is in most cases not a singleton. The
covariance matrix belongs to the set of admissible matrices,
along with all its powers, which are different if the covariance
matrix is not the identity. When additional constraints delimit
the polytope, for example when enforcing the matrices to be
simple, there exist situations when the solution is unique [43].
B. Impact of the use of the sample covariance matrix on the
polytope definition
The results discussed above use the eigenvectors X of the
limit covariance matrix Σ as M tends to infinity, directly obtained by diagonalizing the diffusion matrix of a ground truth
graph under study. Next, we study the impact of using the sample covariance matrix Σ̃ of controlled signals respecting our assumptions on the estimation of these eigenvectors.
To understand the impact of using Σ̃ instead of Σ, let us
again consider an example 3 × 3 matrix and the associated
polytope, as we did in Section IV-A. We generate a random
graph with N = 3 vertices by drawing the entries of its
adjacency matrix uniformly, and compute its matrix TŁ . Using
TŁ , we diffuse M i.i.d. signals (entries are drawn uniformly)
a variable number of times (chosen uniformly in the interval
[2, 5]) to obtain X. Then, we compute the sample covariance matrix of X, Σ̃, and its matrix of eigenvectors X̃.
Fig. 2 depicts in white the ground truth polytope (i.e., the
one delimited by equations (13) using the eigenvectors of
Σ) and the polytopes associated with 10 different sample
covariance matrices Σ̃, obtained from different realizations of X. From each of the eigenvector sets of these empirical covariance matrices, we determine the pairs (λ̃2, λ̃3) that
satisfy the criteria in Section IV-A. Then, we plot a histogram
of the number of occurrences of these valid pairs.
As we can see, the recovered polytope more accurately
reflects the true one as M increases. This coincides with the
fact that the empirical covariance matrix converges to the real
one as M tends to infinity.
In more detail, we are interested in the convergence of the eigenvectors of the empirical covariance matrix X̃ = {χ̃1, . . . , χ̃N} to those of the actual covariance matrix X. Asymptotic results on this convergence are provided by Anderson [51], which extends earlier related results by Girshick [52] and Lawley [53]. Let e_i ≜ X^⊤ χ̃i be the vector of cosine similarities between χ̃i and all eigenvectors of the actual covariance matrix. Anderson [51] states that, as the number of observations tends to infinity, the entries of e_i have a Gaussian distribution with a known variance. In particular, when all eigenvalues are distinct, the inner product between the i-th (for all i) eigenvector of the covariance matrix, χi, and the j-th (for all j) eigenvector of its estimate, χ̃j, is asymptotically Gaussian with zero mean and variance

λi λ̃j / ((M − 1)(λi − λ̃j)²), λi ≠ λj, (14)
where λi is the eigenvalue associated with χi, and λ̃j is the eigenvalue associated with χ̃j. As a consequence, the variance decreases like 1/M, and it also depends on the squared difference between λi and λ̃j. Additionally, [51] shows that the maximum likelihood estimate λ̃i of λi (for all i) is

λ̃i = (1/Qi) · ((M − 1)/M) Σ_{j∈Li} λj, (15)
where Qi is the multiplicity of eigenvalue λi , and Li is the
set of integers {Q1 + · · · + Qi−1 + 1, . . . , Q1 + · · · + Qi },
containing all indices of equal eigenvalues. In the simple case
when all eigenvalues are distinct, (15) simplifies to

λ̃i = ((M − 1)/M) λi. (16)
The eigenvalues of the empirical covariance matrix thus converge to those of the actual covariance matrix as M increases.
As M tends to infinity, e_i thus tends to the i-th canonical vector, indicating collinearity between χ̃i and χi. Additionally, [51]
provides a similar result for the more general case when
eigenvalues may be repeated.
As we can see from (14), the convergence of the eigenvectors of the sample covariance matrix to the eigenvectors of the
true covariance matrix is impacted by the eigenvalues of the
matrix used to diffuse the signals. We know that diffusing a signal x K times using T amounts to computing T^K x. Rewriting this in the spectral basis, we obtain (X Λ^K X^⊤)x, where X and Λ are the eigenvectors and eigenvalues of T. The power distributes over the eigenvalues, and due to their location in the interval ]−1, 1] (with the notable exception of bipartite graphs), the term (λi − λ̃j)² in (14) gets smaller, and M must grow to achieve the same precision.
To illustrate the impact of the number of diffusions on the
convergence of the polytope, let us consider the following
experiment. We generate 10^4 occurrences of random adjacency matrices of N = 10 vertices, by drawing their entries
uniformly in [0, 1], and enforcing symmetry. Then, for each
adjacency matrix, we compute the associated matrix TŁ , and
Figure 2: Histogram representing the number of times a pair (λ̃2, λ̃3) is valid, in the sense of the criteria in Section IV-A, when used jointly with X̃ to recover a diffusion matrix. The ground truth polytope is represented by the inequality constraints in white. Results obtained for 10 instances of X on the same graph, for M = 10 (a), M = 100 (b) and M = 1000 (c).
Figure 3: Ratio of cases (inclusion ratio) when the eigenvalues of the ground truth matrix belong to the approximate polytope, as a function of the number of diffusions K (from 5 to 20), for various quantities of signals M ∈ {10, 10², 10³, 10⁴, 10⁵}. Tests were performed for 10^4 occurrences of random adjacency matrices of N = 10 vertices.
for various values of K and of M , we diffuse M randomly
generated signals K times using TŁ . From the diffused signals,
we compute the eigenvectors of the sample covariance matrix,
and check if the eigenvalues of TŁ are located in the polytope
defined by these eigenvectors. Fig. 3 depicts the ratio of times
it is the case, for each combination of K and M .
The figure demonstrates that the number of diffusions has
some importance in the process. Too small values of K encode
too little information on the diffusion matrix in the signals,
but values that are too high concentrate the eigenvalues too
much around 0. This corroborates that, as the eigenvalues
concentrate around 0, it is necessary to have a higher value of
M to achieve the same precision.
V. STRATEGIES FOR SELECTING A DIFFUSION MATRIX
As stated in Section IV-A, inferring a valid diffusion matrix,
in the sense that it can explain the relationships among signal
entries through a diffusion process, reduces to selecting a point
in the polytope. Since it contains an infinite number of possible
solutions, one needs to introduce additional selection criteria
in order to favor desired properties of the retrieved solution.
Note that the polytope describes a set of diffusion matrices as introduced in Definition 4. Given a diffusion matrix T̃ selected from the polytope, unless the degrees of the vertices are known, there is no way in the general case to retrieve the corresponding adjacency matrix. However, in the particular case when the associated adjacency matrix is binary, one can simply threshold T̃ at 0, setting its non-null entries to 1.
In this section, we first propose to illustrate the selection
of points in the polytope, using two criteria: simplicity of the
solution, and sparsity. In the first case, we aim at retrieving
a diffusion matrix that has an empty diagonal. In the second
case, we aim at recovering a sparse diffusion matrix. Additionally, we introduce a third method that performs differently
from the two other methods. Numerous graph inference techniques have been developed to obtain a graph from signals,
with various priors. While most of them do not require the
retrieved matrices to share the eigenvectors of the covariance
matrix, it may still be interesting to evaluate whether these
matrices are close enough to the polytope. If one can select
a point in the polytope that is close to the solution of a
given method, while keeping the properties enforced by the
associated priors, we obtain a new selection strategy. For this
reason, we introduce in this section a method to adapt the
solutions of other methods to stationary signals.
A. Selecting a diffusion matrix under a simplicity criterion
The first criterion we consider to select a point in the
polytope is simplicity of the solution. In other words, we want
to encourage the retrieval of a set Λ̃ of eigenvalues that, jointly with the eigenvectors X of the covariance matrix, produce a
diffusion matrix that has an empty diagonal. Such a matrix
represents a process that maximizes the diffusion of a signal
evolving on it, and does not retain any of its energy.
As shown in Section IV-A, since we are considering a
diffusion matrix as defined in Definition 4, the polytope of
solutions is defined by inequality constraints (13) that each
enforce the positivity of an entry in the matrix to recover. A
consequence is that if the matrix to be retrieved contains any
null entry, then the point we want to select lies on an edge or
a face of the polytope, since at least one inequality constraint
holds with equality. Enforcing simplicity of the solution is
therefore equivalent to selecting a point in the polytope that is
located at the intersection of at least N constraints. Using this
observation and the fact that the trace of a matrix is equal to the
sum of its eigenvalues, retrieving the eigenvalues that enforce
simplicity of the corresponding matrix reduces to solving a
linear programming problem, stated as follows:
(λ̃1, . . . , λ̃N) = arg min_{λ1,...,λN} Σ_{i=1}^{N} λi
s.t. (13); ∀i ∈ {1, . . . , N} : λi ∈ [−1, 1]; λ1 = 1, (17)
where the two last constraints impose a scale factor.
Equation (17) is a linear program, for which polynomial-time algorithms are known to exist. The main bottleneck of this method is the definition of the N(N + 1)/2 linear constraints in (13), which are computed in O(N³) time and space.
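A hedged sketch of (17) (our own setup, assuming scipy is available) on a small random graph; the inequality rows are exactly the α coefficients of (13):

```python
import numpy as np
from scipy.optimize import linprog

# Simplicity criterion (17): minimize the trace sum_i lambda_i over the
# polytope of eigenvalues defined by (13), lambda_1 = 1, lambda_i in [-1, 1].
rng = np.random.default_rng(2)
N = 5
W = rng.uniform(size=(N, N))
W = (W + W.T) / 2
d = W.sum(axis=1)
T = W / np.sqrt(np.outer(d, d))          # ground-truth T_L

w, X = np.linalg.eigh(T)
w, X = w[::-1], X[:, ::-1]               # eigenvalue 1 (chi_1) first

# constraints (13): each upper-triangular entry of X diag(l) X^T must be >= 0
alpha = np.einsum('in,jn->ijn', X, X)[np.triu_indices(N)]
res = linprog(c=np.ones(N),
              A_ub=-alpha, b_ub=np.zeros(alpha.shape[0]),
              A_eq=[[1] + [0] * (N - 1)], b_eq=[1],
              bounds=[(-1, 1)] * N)
# The true eigenvalues are feasible, so the optimum trace cannot exceed theirs.
print(res.success, res.fun <= w.sum() + 1e-6)
```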
B. Selecting a diffusion matrix under a sparsity criterion
In many applications one may believe the graph underlying
the observations is sparse. Similar to the case when trying to
recover a simple graph, finding a sparse admissible solution
can be formulated as finding a point at the intersection of multiple linear constraints. To find a sparse solution, we seek the
set of admissible eigenvalues for which the maximum number
of constraints in (13) are null. This reduces to minimizing the
ℓ0 norm of the solution, which is an NP-hard problem [54].
A common approach to circumvent this problem is to approximate the minimizer of the ℓ0 norm by minimizing the ℓ1 norm instead [55]–[57]. In our case, we use the L1,1
matrix norm, which is the sum of all entries, since they are all
positive. In this section, we adopt this approach and consider
again a linear programming problem as follows:
(λ̃1, . . . , λ̃N) = arg min_{λ1,...,λN} 1_N^⊤ X diag(λ1, . . . , λN) X^⊤ 1_N
s.t. (13); ∀i ∈ {1, . . . , N} : λi ∈ [−1, 1]; λ1 = 1, (18)

where 1_N is the vector of N entries all equal to one.
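Since 1_N^⊤ X diag(λ1, . . . , λN) X^⊤ 1_N = Σ_n (1_N^⊤ χn)² λn, (18) is again a linear program over the eigenvalues. A sketch under our own setup (assuming scipy):

```python
import numpy as np
from scipy.optimize import linprog

# Sparsity criterion (18): minimize the sum of all matrix entries, expressed
# as a linear cost c_n = (sum of entries of chi_n)^2 on each eigenvalue.
rng = np.random.default_rng(3)
N = 5
W = rng.uniform(size=(N, N))
W = (W + W.T) / 2
d = W.sum(axis=1)
T = W / np.sqrt(np.outer(d, d))

w, X = np.linalg.eigh(T)
w, X = w[::-1], X[:, ::-1]                      # eigenvalue 1 first

c = X.sum(axis=0) ** 2                          # cost of each eigenvalue
alpha = np.einsum('in,jn->ijn', X, X)[np.triu_indices(N)]
res = linprog(c, A_ub=-alpha, b_ub=np.zeros(alpha.shape[0]),
              A_eq=[[1] + [0] * (N - 1)], b_eq=[1],
              bounds=[(-1, 1)] * N)
T_sparse = X @ np.diag(res.x) @ X.T
# the linear cost equals the L_{1,1} norm of the (nonnegative) solution
print(res.success, np.isclose(T_sparse.sum(), c @ res.x))
```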
normalized Laplacian). Let X_{T_m} be the eigenvectors of T_m, and let X_Σ be the eigenvectors of the covariance matrix.
The idea here is to consider Tm as if it were expressed in
the eigenbasis of X Σ , to check whether it belongs or not to
the polytope of admissible matrices. In other words, we want
to find a matrix A_m such that T_m = X_Σ A_m X_Σ^⊤. Using the fact that X_Σ forms an orthonormal basis, we have A_m = X_Σ^⊤ T_m X_Σ. Unless X_{T_m} and X_Σ are the same, A_m is not necessarily a diagonal matrix. Therefore, A_m lies in a space of dimension N², while the polytope is defined by N variables. Let us call Λ_m = (λ_{m1}, . . . , λ_{mN}) the vector of elements on the diagonal of A_m. Since the polytope of admissible diffusion matrices is defined in R^N, Λ_m is the point of this set that forms the best estimate for A_m, defined in R^{N²}, after dimensionality reduction. In other words, Λ_m is the orthogonal projection of A_m in the polytope. If Λ_m does not belong to
the polytope of admissible diffusion matrices characterized by
X Σ , then the method m provides a solution that does not
satisfy the conditions to be a diffusion process. To find the
point in the polytope that is the closest to Λm in the sense of
the Euclidean norm, we solve the following problem:
Λ̃_m = arg min_{Λ∈R^N} ‖Λ − Λ_m‖₂
s.t. (13); ∀i ∈ {1, . . . , N} : Λ(i) ∈ [−1, 1]; Λ(1) = 1. (19)
The solution to (19) gives us a set of eigenvalues Λ̃_m = (λ̃_{m1}, . . . , λ̃_{mN}) that represents the best approximation of A_m when restricting the search to admissible diffusion matrices. Therefore, the matrix T̃_m = X_Σ diag(λ̃_{m1}, . . . , λ̃_{mN}) X_Σ^⊤ is the adaptation of the solution of method m to stationary signals.
We measure the distance between A_m, the projection of T_m in the space defined by X_Σ, and Λ̃_m, the closest point in the polytope, as follows:

d(A_m, Λ̃_m) ≜ ‖A_m − diag(λ̃_{m1}, . . . , λ̃_{mN})‖_F, (20)

where ‖·‖_F is the Frobenius norm.
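The whole correction step can be sketched as follows (our own construction and solver choice; a generic convex solver stands in for whatever implementation is used to solve (19)):

```python
import numpy as np
from scipy.optimize import minimize

# Express the output T_m of some method m in the covariance eigenbasis,
# project its diagonal onto the polytope via (19), then measure (20).
rng = np.random.default_rng(7)
N = 5

def t_matrix(W):
    d = W.sum(axis=1)
    return W / np.sqrt(np.outer(d, d))

W_true = rng.uniform(size=(N, N)); W_true = (W_true + W_true.T) / 2
W_m = rng.uniform(size=(N, N)); W_m = (W_m + W_m.T) / 2  # "method m" output
T_true, T_m = t_matrix(W_true), t_matrix(W_m)

_, X_S = np.linalg.eigh(np.linalg.matrix_power(T_true, 2))  # covariance basis
X_S = X_S[:, ::-1]                                          # chi_1 first
A_m = X_S.T @ T_m @ X_S                                     # T_m in that basis
Lam_m = np.diag(A_m)

alpha = np.einsum('in,jn->ijn', X_S, X_S)[np.triu_indices(N)]
x0 = np.concatenate([[1.0], np.clip(Lam_m[1:], -1, 1)])
res = minimize(lambda l: np.sum((l - Lam_m) ** 2), x0, method='SLSQP',
               bounds=[(1, 1)] + [(-1, 1)] * (N - 1),       # Lam(1) = 1
               constraints=[{'type': 'ineq', 'fun': lambda l: alpha @ l}])
dist = np.linalg.norm(A_m - np.diag(res.x))                 # distance (20)
print(res.success, dist >= 0)
```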
Fig. 4 provides a graphical illustration of the various steps to compute this distance.
C. Adaptation of other strategies to stationary signals
VI. NUMERICAL EXPERIMENTS
The two methods introduced before consist in selecting a
point in the polytope, given simplicity or sparsity priors. In
this section, we take a different point of view. Many graph
inference techniques exist in the literature (see Section III),
all enforcing different properties of the graph that is retrieved.
However, most of them do not impose the eigenvectors of the
inferred solution to match those of the covariance matrix. The
idea here is to adapt these solutions to stationary signals.
To do so, let us consider an inference method m providing
an adjacency or a Laplacian matrix from a set of signals X =
(x1 , . . . , xM ). Let Tm be a diffusion matrix associated with
the inferred matrix (for example, the one derived from the
To be able to evaluate reconstruction performance of the
methods presented in Section V, we first need to design
experimental settings. This section introduces a generative
model for graphs and signals, and evaluates the performance
of the methods Simple and Sparse. The regularization method
introduced in Section V-C is evaluated as a means to select a
matrix representing best some given signals, among a set of
possible matrices. Section VII presents additional experiments
and comparisons with other methods from the literature on a
non-synthetic dataset of temperatures in Brittany.
Our experiments show that the Simple method succeeds in
recovering the ground truth matrix from signals diffused on
Definition 7 (Erdős-Rényi graph): An Erdős-Rényi graph of
parameter P is a graph where each edge exists with probability
P independently from each other. The adjacency matrix W of
such a graph is defined by:
W(i, j) ≜ { 1 with probability P; 0 with probability 1 − P }. (22)
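A short sketch of Definition 7 (our own sampling code; self-loops are excluded here, consistent with graphs that have an empty diagonal):

```python
import numpy as np

# Sample a symmetric Erdos-Renyi adjacency matrix with edge probability P.
rng = np.random.default_rng(11)
N, P = 6, 0.5
U = (rng.uniform(size=(N, N)) < P).astype(int)
W = np.triu(U, k=1)                 # keep the strict upper triangle
W = W + W.T                         # symmetrize; diagonal stays 0
print(W.shape, bool((W == W.T).all()), int(np.trace(W)))  # (6, 6) True 0
```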
Figure 4: Correction of the result of an inference method m
to match the stationarity hypothesis on the observed signals.
The eigenvalues of the result of m are expressed in the space
defined by X Σ as a matrix Am . Then, Am is approximated
by Λm , its orthogonal projection in the space of the polytope.
The closest point in the polytope (in the sense of the Euclidean
norm), Λ̃_m, is then found by solving (19). Finally, the distance
between Am and its estimate in the polytope is given by the
measure in (20). This corresponds to the norm of the vector
in green.
it, provided that the number of signals is high enough. The
Sparse method, while not being able to retrieve the ground
truth matrix, infers a matrix that has a lower L1,1 norm than
the matrix yielding the polytope. Additionally, we show that
the regularization method allows the selection of the ground
truth diffusion matrix from a set of candidate matrices, even
for a small number of signals. Finally, comparison with other
inference methods on a dataset of temperatures show that
the methods Simple and Sparse return the best solutions with
respect to their objectives, and that the regularization strategy
applied to methods favorizing smoothness of signals yields a
diffusion matrix on which signals are relatively smooth.
Once a graph is generated using the model presented above,
and given a number M of signals to produce, signals verifying
our settings are created as follows:
1) Create Y a N × M matrix with i.i.d. entries. We denote
by Y(i) the ith column of Y. In these experiments,
entries of Y are drawn uniformly.
2) Create k a vector of M i.i.d. integer entries, comprised in
the interval {1, . . . , 10}. These values are chosen not too
high in reaction to a remark in Section II-B, not to obtain
signals that are already stable. In these experiments,
entries of k are also drawn uniformly.
3) Compute the diffusion matrix TŁ associated with G.
4) Create X a N × M matrix of signal as follows: ∀i ∈
k(i)
{1, . . . , M } : X(i) , TŁ Y(i).
Once these four steps are performed, the objective becomes:
given X and some criteria on the graph to retrieve, infer
e for the diffusion matrix of the signals. To
an estimate T
summarize the previous sections, we proceed as follows:
e , an estimate for the eigenvectors of the diffusion
1) Find X
matrix. Here, this is done by computing the eigenvectors
of the sample covariance matrix.
e , compute the constraints in (13) that define the
2) Using X
polytope of solutions.
3) Select a point from the polytope, using one of the
strategies in Section V.
A. Generative model for graphs and signals
In our experiments, we consider randomly generated graphs,
produced by a random geometric model. These are frequently
used to model connectivity in wireless networks [58].
Definition 6 (Random geometric graph): A random geometric graph of parameter R is a graph built from a set of N
uniformly distributed random points on the surface of a unit
2-dimensional torus, by adding an edge between those being
closer than R according to the geodesic distance d(i, j) on
the torus. We then add a weight on the existing edges that
is inversely proportional to the distance separating the points.
Here, we choose to use the inverse of d(i, j). The adjacency
matrix W of such a graph is defined by:
1
if d(i, j) < R and i 6= j
d(i,j)
.
(21)
W(i, j) ,
0
otherwise
Note that, by construction, such graphs are simple and
relatively sparse. Therefore, we expect methods using such
selection criteria to be able to retrieve them.
In some of our experiments, we also consider random graphs
generated by an Erdős-Rényi model [59], in which two vertices
are linked with a given probability independently from each
other. Such graphs are defined as follows:
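The graph and signal generation pipeline of this section, together with the covariance-based eigenvector estimate, can be sketched as follows. The exact form of TŁ is defined earlier in the paper and is not restated here; this sketch assumes the normalized adjacency D^{-1/2} W D^{-1/2}, which is consistent with the zero trace and unit largest eigenvalue used below.

```python
import numpy as np

def random_geometric_torus(N, R, rng):
    """Random geometric graph on the unit 2-D torus (Definition 6)."""
    pts = rng.random((N, 2))
    diff = np.abs(pts[:, None, :] - pts[None, :, :])
    diff = np.minimum(diff, 1.0 - diff)          # wrap-around (geodesic) offsets
    d = np.sqrt((diff ** 2).sum(axis=-1))
    close = (d < R) & (d > 0)
    # Weight existing edges by 1/d(i, j); keep the diagonal empty.
    return np.where(close, 1.0 / np.where(d > 0, d, 1.0), 0.0)

def diffusion_matrix(W):
    """Zero-trace diffusion matrix with largest eigenvalue 1 (assumed here
    to be the normalized adjacency D^{-1/2} W D^{-1/2})."""
    deg = W.sum(axis=1)
    dinv = np.where(deg > 0, deg ** -0.5, 0.0)
    return dinv[:, None] * W * dinv[None, :]

def generate_signals(T, M, kmax, rng):
    """Steps 1)-4): X(i) = T^k(i) Y(i) with uniform Y and uniform integer k."""
    Y = rng.random((T.shape[0], M))
    k = rng.integers(1, kmax + 1, size=M)
    return np.stack([np.linalg.matrix_power(T, int(ki)) @ Y[:, i]
                     for i, ki in enumerate(k)], axis=1)

rng = np.random.default_rng(0)
W = random_geometric_torus(10, 0.6, rng)
T = diffusion_matrix(W)
X = generate_signals(T, 1000, 10, rng)
# Step 1) of the inference summary: estimated eigenvectors of the
# diffusion matrix are those of the sample covariance of the signals.
_, X_hat = np.linalg.eigh(np.cov(X))
```

With R = 0.6 on the unit torus the generated graphs are dense enough that every vertex has neighbors, so the largest eigenvalue of the assumed diffusion matrix is exactly 1.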
B. Error metrics
To be able to evaluate the reconstruction error for our
techniques, we use multiple metrics. Let T be the ground truth
diffusion matrix, with eigenvalues Λ = (λ1, . . . , λN), and let
T̃ be the one that is recovered using the assessed technique,
with eigenvalues Λ̃ = (λ̃1, . . . , λ̃N).
The first metric we propose is the mean error per reconstructed entry (MEPRE):

MEPRE(T, T̃) ≜ (1/N) ‖ T/‖T‖F − T̃/‖T̃‖F ‖F . (23)

This quantity measures the mean error for all entries in the reconstructed matrix, where we first normalize T and T̃ using their Frobenius norm ‖·‖F to avoid biases related to scale.
The second metric we propose is the reconstruction error of the powered retrieved eigenvalues (REPRE). We define it as the Euclidean distance between the Kth power of the ground truth vector of eigenvalues ΛK and the recovered ones Λ̃, for the best value of K possible:

REPRE(Λ, Λ̃) ≜ min_{K∈ℝ} (1/N) ‖ ΛK/‖ΛK‖∞ − Λ̃/‖Λ̃‖∞ ‖2 . (24)

Here, the normalization using ‖·‖∞ comes from the constraint in Section IV-A that the highest eigenvalue should be equal to 1. Therefore, it imposes a scale on the set of eigenvalues. Also, we divide the error by N to make it independent of the number of vertices.

Since in the limit case Σ = T^{2K}, for an unknown K ∈ ℝ, the algorithms should be able to recover at least a power of T. Indeed, there is no way to distinguish, for example, one step of diffusion of a signal x using T^{2K} (i.e., T^{2K} x) from two diffusion steps of x using T^K (i.e., (T^K)² x). A consequence is that, if the algorithm cannot fully retrieve T, a power of T should also be an acceptable answer.
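A direct implementation of these two metrics can be sketched as follows; the minimum over K in (24) is approximated by a grid search, and eigenvalues are assumed nonnegative so that ΛK is well defined for real K (both are simplifications of this sketch).

```python
import numpy as np

def mepre(T, T_hat):
    """Mean error per reconstructed entry, eq. (23)."""
    N = T.shape[0]
    return np.linalg.norm(T / np.linalg.norm(T, "fro")
                          - T_hat / np.linalg.norm(T_hat, "fro"), "fro") / N

def repre(lam, lam_hat, Ks=np.linspace(0.1, 5.0, 491)):
    """Reconstruction error of the powered retrieved eigenvalues, eq. (24),
    with the minimum over K approximated on a grid (step 0.01)."""
    N = lam.size
    best = np.inf
    for K in Ks:
        lk = lam ** K
        err = np.linalg.norm(lk / np.abs(lk).max()
                             - lam_hat / np.abs(lam_hat).max()) / N
        best = min(best, err)
    return best
```

As expected from the normalizations, MEPRE is invariant to a global rescaling of either matrix, and REPRE is close to zero whenever the recovered eigenvalues are a power of the true ones.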
These two metrics provide information on the ability of
methods to infer a diffusion matrix that is close to the ground
truth one, or one of its powers. More classical metrics are also
considered to evaluate whether the most significant entries
of the inferred matrices correspond to existing edges in the
ground truth graph. Since such metrics are defined for binary,
relatively sparse graphs, we evaluate them on thresholded
versions of the inferred matrix. Entries of the inferred matrix
that are above this threshold t are set to 1, and others are set to
0. The resulting matrix is denoted in the following equations
as T̃t. To find the optimal value, we perform an exhaustive
search among all possible thresholds, and keep the one that
maximizes the F-measure, defined below.

The first metric we consider is the Precision, measuring the fraction of relevant edges among those retrieved:

precision(T, T̃t) ≜ #{(i, j) | T̃t(i, j) > 0 and T(i, j) > 0} / ‖T̃t‖0,1 , (25)

where ‖·‖0,1 denotes the L0,1 matrix norm, that counts the number of non-null entries.

A second metric we consider is the Recall, that measures the fraction of relevant edges effectively retrieved:

recall(T, T̃t) ≜ #{(i, j) | T̃t(i, j) > 0 and T(i, j) > 0} / ‖T‖0,1 . (26)

Both metrics are often combined into a single one, called F-measure, that can be considered as a harmonic mean of precision and recall:

F-measure(T, T̃t) ≜ 2 · precision(T, T̃t) · recall(T, T̃t) / (precision(T, T̃t) + recall(T, T̃t)) . (27)
Note that in practical cases, the optimal threshold is not
available, and depends on a desired sparsity of the inferred
matrix. In the following experiments, we show the compromise
between true positive edges and false positive edges for all
possible thresholds using ROC curves.
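The thresholding and the exhaustive threshold search can be sketched as follows; taking the candidate thresholds among the distinct entry values of the inferred matrix is an implementation choice of this sketch.

```python
import numpy as np

def prf(T, T_hat, t):
    """Precision, recall and F-measure of the thresholded matrix,
    eqs. (25)-(27)."""
    Tt = (T_hat > t).astype(float)           # binarized thresholded matrix
    tp = np.sum((Tt > 0) & (T > 0))          # relevant edges retrieved
    retrieved = np.sum(Tt > 0)               # ||Tt||_{0,1}
    relevant = np.sum(T > 0)                 # ||T||_{0,1}
    p = tp / retrieved if retrieved else 0.0
    r = tp / relevant if relevant else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

def best_threshold(T, T_hat):
    """Exhaustive search over candidate thresholds, keeping the one
    that maximizes the F-measure."""
    return max((prf(T, T_hat, t)[2], t) for t in np.unique(T_hat))
```

Sweeping t over all entry values also yields the (false positive, true positive) pairs used to draw the ROC curves mentioned above.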
C. Performance of the Simple method
In the situation when the eigenvectors are not available,
due to a limited number of signals, recovery methods must
use estimated eigenvectors. Linear programming problems
introduced in Section V must then be solved on a polytope
defined by noisy eigenvectors. We have previously shown in
Section IV-B that increasing the number of signals allows
this approximate polytope to be more precise. The following
experiments evaluate the quality of the solutions retrieved by
the two methods introduced in Section V-A and Section V-B.
Fig. 5 illustrates the convergence of the Simple method to a
solution, when the number of signals increases. In this experiment, we generate 1000 random geometric graphs (N = 10,
R = 0.6) and, for each of them, we diffuse M ∈ {10^i}_{1≤i≤6}
signals using TŁ , as described in Section VI-A. For each
configuration, we retrieve a diffusion matrix by solving the
problem in (17) using the CVX [60] package for MATLAB
[61], with default parameters. Then, we compute the mean
errors, and measure the distance to the ground truth solutions
in terms of trace value, which is the objective function in (17):
diff_simple(TŁ, T̃) ≜ (1/N) |Tr(T̃) − Tr(TŁ)| , (28)

which in our case simplifies to diff_simple(TŁ, T̃) = (1/N) Tr(T̃)
since by construction Tr(TŁ) = 0.
These results show that both error measurements decrease
— except for very low values of M, due to the high variance
— as M increases. As the approximate polytope converges
to the ground truth one, the solution of (17) converges to the
ground truth matrix TŁ used to produce the signals. This is
confirmed by performance metrics as well as the ROC curves,
which indicate that the ground truth edges are recovered more
successfully as the number of signals increases.
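A minimal sketch of the linear program behind the Simple method follows, under the assumption that the polytope of (13) amounts to entrywise nonnegativity of X̃ diag(λ) X̃⊤ together with eigenvalues in [−1, 1] and the largest one (last column returned by eigh) fixed to 1; the authors solve (17) with CVX, while scipy.optimize.linprog is used here instead.

```python
import numpy as np
from scipy.optimize import linprog

def simple_method(X_hat):
    """Sketch of the trace-minimization problem (17): minimize the sum of
    eigenvalues over the assumed polytope of admissible diffusion matrices."""
    N = X_hat.shape[0]
    # One inequality row per entry (i, j): -sum_k lam_k X[i,k] X[j,k] <= 0.
    A_ub = np.array([-(X_hat[i] * X_hat[j]) for i in range(N) for j in range(N)])
    A_eq = np.zeros((1, N))
    A_eq[0, -1] = 1.0                        # largest eigenvalue fixed to 1
    res = linprog(c=np.ones(N), A_ub=A_ub, b_ub=np.zeros(N * N),
                  A_eq=A_eq, b_eq=[1.0], bounds=[(-1.0, 1.0)] * N,
                  method="highs")
    return res.x

# Path graph on 4 vertices: its (zero-trace) normalized adjacency is a
# feasible point, so the optimal trace is at most the true trace.
W = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
deg = W.sum(axis=1)
T = W / np.sqrt(np.outer(deg, deg))
lam_true, X_eig = np.linalg.eigh(T)
lam_star = simple_method(X_eig)
```

When the exact eigenvectors are supplied, the recovered eigenvalues satisfy all polytope constraints and reach a trace no larger than that of the ground truth matrix.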
As stated in Section VI-A, matrices generated using the
random geometric model are by construction simple, and
have therefore null traces. When defining the polytope using
inequalities in (13), we enforce the positivity of all entries of
the admissible matrices. Therefore, TŁ is a matrix whose
eigenvalues lie on the hyperplane where λ1 + · · · + λN = 0. While
the optimal solution may not be unique (see below), counterexamples to this uniqueness must respect particular constraints
and are very unlikely to happen for random matrices as well
as for non-synthetic cases. As a consequence, minimizing the
sum of the eigenvalues as an objective function enforces the
retrieval of the correct result in nearly all cases. This implies
that the error measurements will certainly converge to 0 as M
grows to infinity. This can easily be verified by replacing X̃
by X — the eigenvectors of TŁ — in (13) for the resolution
of (17).
As stated above, the solution is not necessarily unique.
Multiple matrices with a minimum trace can be found when
there exists a frontier of the polytope along which all points
have a sum that is minimal among all admissible vectors of
eigenvalues. As an example, let us consider the 8×8 Hadamard
matrix as a matrix of eigenvectors X : When defining the
polytope associated with these eigenvectors using constraints
in (13), we obtain for T(2, 6) the following constraint:
λ2 + λ3 + C ≥ 0 ,
(29)
where C is some value that does not depend on λ2 and λ3 .
As a consequence, any point located on this particular plane
corresponds to a matrix with identical sum of eigenvalues,
leading to the same trace. When considering random matrices
or non-synthetic cases, the case when there exists such a plane
that is aligned with the objective is very unlikely.
Figure 5: Image (a) depicts the mean MEPRE and REPRE measurements of the solutions retrieved by the Simple method, for M ∈ {10^i}_{1≤i≤6} signals. Additionally, diff_simple shows the distance to the ground truth solutions in terms of trace value, which is the objective function of the problem presented in (17). Image (b) shows the results in terms of edge reconstruction by studying the recall, precision and F-measure using the binarized thresholded matrix T̃t. Finally, image (c) depicts the ROC curves, that show the compromise between true positive edges and false positive edges when varying the threshold. All tests were performed for 1000 occurrences of random geometric graphs with parameters N = 10 and R = 0.6.
D. Performance of the Sparse method
In this section, we perform the same experiment as for the
Simple method, but solving the problem in (18) instead of
(17). Here, since we minimize the L1,1 norm as an objective
function, we measure the mean difference between the sparsity
of the retrieved matrices and the sparsity of the ground truth
matrices TŁ , computed as follows:
diff_sparse(TŁ, T̃) ≜ (1/N²) (‖T̃‖1,1 − ‖TŁ‖1,1) . (30)
For space considerations, the results are not detailed here.
Contrary to the method for selecting a simple graph in
Section V-A, we have observed that the error and performance
measurements stay approximately constant for all values of M
(MEPRE ≈ 0.1, REPRE ≈ 6 × 10⁻², diff_sparse ≈ −2 × 10⁻²,
F-measure ≈ 0.76). Similar results were obtained for additional
experiments conducted on random geometric graphs with
different values of R to assess different levels of sparsity.
The results suggest that the method fails at recovering the
matrix that was used for diffusing the signals, even when
the number of observations is high. However, the negative
difference indicates that the method does not fail at recovering
a sparse graph, in the sense of the L1,1 norm. Graphs generated using the random geometric model, while being sparse
by construction, are not necessarily the sparsest within the
associated admissible set, especially when considering the L1,1
norm. This implies that, as the number of signals increases,
the method in fact converges to the sparsest solution in the
polytope, although it is in most cases not the matrix we started
from. Replacing X̃ by X — the eigenvectors of TŁ — in
(13) for the resolution of (18) confirms that there exist sparser
solutions than the ground truth graph.
Note that in their work, Segarra et al. [43] also use the L1,1
norm minimization as an objective, and succeed in retrieving
the ground truth graph. This is the case because they have
additional constraints that enforce the solution to be simple.
Therefore, the set of solutions is a lot smaller, and the solution
is most likely unique.
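Under the same assumed polytope as in the previous sketch, every entry of T = X̃ diag(λ) X̃⊤ is nonnegative, so the L1,1 objective of (18) reduces to a linear function of the eigenvalues, ‖T‖1,1 = Σ_k λ_k (Σ_i X̃(i, k))². This reduction is an assumption of the sketch, not necessarily the authors' formulation, and lets the same linear-programming solver apply.

```python
import numpy as np
from scipy.optimize import linprog

def sparse_method(X_hat):
    """Sketch of the L1,1-minimization problem (18) over the assumed
    polytope: entrywise nonnegativity makes the objective linear in the
    eigenvalues, with coefficient (sum_i X[i,k])^2 for eigenvalue k."""
    N = X_hat.shape[0]
    A_ub = np.array([-(X_hat[i] * X_hat[j]) for i in range(N) for j in range(N)])
    A_eq = np.zeros((1, N))
    A_eq[0, -1] = 1.0                        # largest eigenvalue fixed to 1
    c = X_hat.sum(axis=0) ** 2               # linearized L1,1 objective
    res = linprog(c=c, A_ub=A_ub, b_ub=np.zeros(N * N),
                  A_eq=A_eq, b_eq=[1.0], bounds=[(-1.0, 1.0)] * N,
                  method="highs")
    return res.x
```

As discussed above, the returned matrix is admissible and at least as sparse (in the L1,1 sense) as the ground truth one, without necessarily being equal to it.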
E. Impact of the parameters
The quality of the solutions inferred by the methods assessed above mostly depends on how close the eigenvectors
of the sample covariance matrix are to the ground truth
ones. Noticing that the density of eigenvalues in the interval
[−1, 1] increases with N for any diffusion matrix, it follows
that larger graphs have eigenvalues pairwise closer than for
smaller graphs. In this respect, Section IV-B tells us that the
number of signals necessary for a precise estimate of the
covariance matrix needs to be higher for larger graphs. This
raises the question of the scalability of this method.
For a fixed value of M = 10⁵ signals, we study the
performance of the Simple and Sparse methods on random
geometric graphs of orders ranging from N = 10 to N = 100
vertices. For each value of N, we set R ∝ 1/√N so that the value
of N does not impact the average neighborhood of each vertex.
This value is chosen in accordance with the experiments
performed earlier in this section. Fig. 6 depicts the F-measure
performance measurements obtained for the Simple and Sparse
methods, for 1000 occurrences of random geometric graphs
with the previously described settings. Additionally, we plot
in this figure the results obtained for other families of graphs,
namely Erdős-Rényi graphs with P ∝ log(N)/N, and the ring
graph of N vertices.
These measurements confirm that the inference methods
need a large quantity of signals to work properly. Additionally, it appears that the family of graphs has an impact on
the reconstruction performance. The ring graph has repeated
eigenvalues, which has a strong impact on the convergence of
the eigenvectors of the sample covariance matrix to those of
the real covariance matrix (see Section IV-B and [51]). However, this is a marginal case, since real-weighted graphs almost
surely have distinct eigenvalues, as illustrated by the studies on
random geometric and Erdős-Rényi graphs. Another parameter
that has importance on these experiments is the number of
diffusions of signals before observations, represented by the
vector k. This has been illustrated in Section IV-B.
Figure 6: F-measure scores obtained for the Simple and Sparse methods, for various families of graphs and for M = 10⁵ signals, as a function of the graph order. Tests were performed on 1000 occurrences of each family of graph, for each technique.

Figure 7: Ratio of times when the diffusion matrix Ti was chosen by the algorithm as the most adapted to signals Xi among a set of 20 possible diffusion matrices, for i ∈ {1, . . . , 20}. Mean results for 100 iterations of the experiment.
F. Application of regularization to graph hypothesis testing
To evaluate the practical interest of the regularization strategy introduced in Section V-C, let us consider the situation
where some signals are observed, and various diffusion matrices are provided by inference methods to explain these signals.
The objective is to determine which of the proposed solutions
matches the signals best under a stationarity assumption.
In the following experiment, we proceed as follows: let
(T1 . . . T20 ) be a set of 20 diffusion matrices corresponding
to graphs of N = 10 vertices, equally divided into random
geometric graphs (with R drawn uniformly in [0.2, 0.6]) and
Erdős-Rényi graphs (with P drawn uniformly in [0.2, 0.6]).
For each of these matrices, let us diffuse M random signals
as detailed in Section VI to obtain observations (X1 . . . X20 ),
where Xi is the set of signals obtained after diffusion by
Ti. From these sets, we can compute the eigenvectors of the
sample covariance matrices (X̃1, . . . , X̃20).
For each matrix of eigenvectors X̃i, i ∈ {1, . . . , 20}, and
for each diffusion matrix Tj, j ∈ {1, . . . , 20}, we compute
the distance between the polytope yielded by X̃i and the
projection of Tj in the space of the polytope (see Section V-C)
using (20). The graph that minimizes the distance is then
selected as the most appropriate. Fig. 7 depicts the ratio of
times Ti is selected as the most appropriate diffusion matrix
when considering signals Xi , for various values of M .
The results show that the regularization strategy selects the
matrix used to diffuse the signals in most cases, even when
M is low. Additional experiments were performed for larger
graphs, and similar results were observed. Also, increasing the
number of signals eventually leads to a selection of the correct
diffusion matrix in all cases. This experiment illustrates that
the regularization strategy introduced in Section V-C can be
successfully used to select the graph that is the most adapted
to given signals among a set of candidates.
An interesting direction for future work includes evaluation
of the performance of the method when considering selection
of the most adapted matrix from noisy versions of Ti .
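The selection criterion of this experiment can be sketched as follows, assuming (19) is the Euclidean projection of the candidate eigenvalues onto the polytope described in the earlier sketches (entrywise nonnegativity of X̃ diag(λ) X̃⊤, eigenvalues in [−1, 1], last eigenvalue fixed to 1); a generic SLSQP solver is used here rather than the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def distance_to_polytope(X_hat, T_cand):
    """Sketch of the regularization measure (20): express the candidate
    matrix in the basis X_hat, keep the diagonal as candidate eigenvalues
    (Lambda_m), solve the projection problem (19), and return the distance."""
    N = X_hat.shape[0]
    a = np.diag(X_hat.T @ T_cand @ X_hat)    # candidate eigenvalues Lambda_m
    cons = [{"type": "ineq",
             "fun": lambda lam, i=i, j=j: float(np.dot(X_hat[i] * X_hat[j], lam))}
            for i in range(N) for j in range(N)]
    cons.append({"type": "eq", "fun": lambda lam: lam[-1] - 1.0})
    res = minimize(lambda lam: np.sum((lam - a) ** 2),
                   x0=np.clip(a, -1.0, 1.0),
                   bounds=[(-1.0, 1.0)] * N, constraints=cons, method="SLSQP")
    return float(np.sqrt(res.fun))
```

For the matrix that actually generated the signals, the candidate eigenvalues already lie in the polytope and the distance is (numerically) zero; for a clearly incompatible candidate the distance is large, which is what makes the measure usable for hypothesis testing.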
VII. E VALUATION OF INFERENCE METHODS ON A DATASET
The two methods introduced in Section V-A and Section V-B present solutions to infer a graph from signals, while
ensuring that it is compliant with our diffusion prior. While
other methods from the literature do not clearly impose this
prior, evaluating whether they provide solutions that match
a diffusion assumption is interesting, as it would provide
additional selection strategies for admissible diffusion matrices
if it is the case. This section explores the application of
the regularization strategy in Section V-C to the method of
Kalofolias [7], and shows that it provides matrices that do not
belong to the polytope of solutions. The closest point in the
polytope is considered, and evaluation on a dataset shows that
the result has interesting similarities with the original matrix.
Throughout this section, we study an open dataset2 of
temperature observations from 37 weather stations located in
Brittany, France [62]. Our inference methods, as well as other
existing methods, are evaluated on this dataset in terms of
sparsity, trace of the solution, and smoothness.
A. Detailed evaluation of the method from Kalofolias
The method from Kalofolias has two major qualities: it recovers a graph in a very short amount of time, and encourages
smoothness of the solution, which can be a desirable property.
To evaluate whether the retrieved solution happens to match
a diffusion process, let us consider the following experiment:
1) Let G be a random geometric graph of N = 10
vertices (R = 0.6), and let TŁ be the diffusion matrix
associated with its normalized Laplacian. Using this
matrix, we diffuse M = 10⁶ i.i.d. signals as presented
in Section VI-A to obtain a matrix X. Using Principal
Component Analysis on X [50], we obtain X̃, an estimate
for the eigenvectors of TŁ. This set of eigenvectors
yields a polytope of admissible solutions.
2) Then, we use the method from Kalofolias to infer a
graph G K from X, and compute the associated matrix
TŁK . Since the log method from Kalofolias depends
on parameters α and β, we keep the minimal distance
obtained for values of α and β ranging from 0.01 to 2,
with a step of 10−2 . Equation (20) gives us the distance
between the polytope and the inferred solution.
2 In http://data.gouv.fr.
Figure 8: Number of times a distance to the ground truth polytope was observed using either the method from Kalofolias [7] (dK), or a random geometric graph (dR). Distances are grouped in bins of size 10⁻². Tests were performed for 10⁵ occurrences of graphs per method, with M = 10⁶ signals.
3) Additionally, we generate a random geometric graph G R
(independent from the ground truth one) from X, using
the same settings as for G (N = 10, R = 0.6). This
gives us a baseline of how close a random graph with
the same edges distribution can be to the ground truth
one, and gives information on whether the results of
Kalofolias are closer to the ground truth than a random
matrix. Again, (20) measures the distance between the
polytope and the associated matrix TŁR .
We perform these three steps for 10⁵ occurrences of random
geometric graphs. Let dK be the vector of distances to the
polytope obtained for each ground truth graph using the
method of Kalofolias, and dR the vector of distances to the
polytope for the baseline random graphs. In Fig. 8, we plot a
histogram of the number of times each distance was observed.
From these results, a first observation is that neither the
methods from Kalofolias nor the random method ever returned
a graph that was located in the polytope of solutions. Two
direct interpretations of this result can be made: first, it implies
that the set of admissible matrices per ground truth graph
is small relatively to the set of random graphs. Second, it
implies that the method from Kalofolias does not succeed in
recovering a graph that matches diffusion priors on the signals.
A Mann-Whitney U test [63] on dK and dR shows that the
distributions differ significantly (U = 9.9813 × 10⁷, P < 10⁻⁵,
two-tailed). This implies that the results obtained with the
method from Kalofolias are most of the time closer to an
admissible matrix than random solutions. This observation
can be explained by the remarks in Section II-C. Diffusion
of signals on a graph tends to smoothen them, as the low
frequencies are attenuated slower than higher ones. Since the
method of Kalofolias retrieves a graph on which signals are
smooth, the observation that it provides solutions that are
closer to the polytope than random solutions is quite natural.
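The significance test above can be reproduced with scipy.stats.mannwhitneyu; the vectors below are synthetic stand-ins for dK and dR (illustration only), not the experimental distance vectors.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
# Synthetic stand-ins: the inference method's distances concentrate
# below those of the random baseline.
dK = rng.normal(0.6, 0.05, size=1000)    # distances for the inference method
dR = rng.normal(0.9, 0.05, size=1000)    # distances for the random baseline
U, p = mannwhitneyu(dK, dR, alternative="two-sided")
# A small p-value indicates the two distance distributions differ.
```

The test is rank-based, so it compares the whole distributions of distances without assuming normality.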
The question is then whether the closest point to the
retrieved solution in the polytope has interesting properties.
Let us evaluate this solution on the dataset of temperatures.
Fig. 9 depicts the 10% most significant connections in the
adjacency matrix of the graph G K retrieved by Kalofolias, as
well as those of the matrix associated with the closest point in
the polytope, T̃ = X̃ Λ̃ X̃⊤, where Λ̃ is the solution of (19).
Figure 9: Most significant connections in the adjacency matrix
of the graph G K retrieved by the method from Kalofolias (a),
and most significant connections from the matrix associated
with the closest point in the polytope, T̃ (b).
The method from Kalofolias retrieves a matrix that has
stronger connections between stations with similar locations.
There is a strong connectivity among stations located on the
south coast of Brittany. Stations located more in the land also
tend to be linked to close inland stations. The regularized
matrix appears to keep these properties: the strong links on the
coasts still appear, and the result also still gives importance
on the coastal versus inland aspect of the stations. Still,
differences can be seen, as the regularized matrix appears to
give more importance to the relations between stations on the
north coasts. Such relations also exist in the original matrix,
but are not depicted due to the threshold.
When computing the total smoothness of the signals with
both matrices, we obtain that the solution from Kalofolias
has a higher value of smoothness (see Fig. 10) than the
closest point in the polytope. This implies that the signals are
smoother on the approximate matrix than on the one recovered
by the method of Kalofolias. This may seem counter-intuitive,
since the solutions of the method by Kalofolias are not restricted to the polytope. However, the method from Kalofolias
imposes inference of a matrix with an empty diagonal, which
is not the case of the approximate one. These measurements,
in addition to those below, suggest that inferring a graph using
the method from Kalofolias, and considering the closest point
in the polytope, is an interesting method to infer a valid graph
on which signals are smooth.
B. Evaluation of graph inference methods on the dataset
We have proposed in Section V-C a technique to find a
valid matrix in the polytope that approximates the solution
of any method. Therefore, it is possible to evaluate all methods in terms of properties, such as L1,1 sparsity, trace, or
smoothness. Since all methods do not impose the same scale
on the inferred matrices, these quantities are computed for the
inferred diffusion matrices after normalization such that their
first eigenvalue equals one, as in the constraints in Definition 4.
When applying our methods Simple (Section V-A) and
Sparse (Section V-B), as well as those of Kalofolias [7],
Segarra et al. [43] and the graphical lasso [27] on the dataset
of temperatures, we have obtained the results in Fig. 10.
Method                    polytope   L1,1      Tr            S(X)
Simple                    X          36.9974   0.0013        0.0551
Sparse                    X          36.9971   0.9093        0.0585
Kalofolias [7]            0.0313     36.9979   0             0.0751
Kalofolias closest        X          36.9974   0.0298        0.0548
Segarra et al. [43]       0.0062     36.9993   1.97 × 10⁻⁵   0.0245
Segarra et al. closest    X          36.9974   0.0046        0.0551
Graphical lasso [27]      1.3730     35.3539   13.4977       32.8421
Graphical lasso closest   X          36.9984   13.3584       0.0335

Figure 10: Sparsity, trace and smoothness obtained for the dataset of temperatures. Elements in bold denote the method performing best among those that return a solution located in the polytope. If a method provides a solution that does not belong to the polytope, the distance to the closest point is indicated in the first column. The last column indicates the total smoothness for all signals, i.e., S(X) ≜ Σ_{x∈X} S(x).
First, we notice that the method from Segarra et al. [43]
returns a matrix that is at a distance of 0.0062 from the
polytope. As the polytope description is the same for their
method and for ours, we would expect this distance to be
0. This small difference comes from their implementation. In
order to keep their equality constraints enforcing the elements
in the polytope to have an empty diagonal, while coping with
the noise in the eigenvectors, they allow small deviations from
the polytope. They do not return a matrix S∗ that shares the
eigenvectors of the covariance matrix, but a matrix Ŝ∗ such
that ‖S∗ − Ŝ∗‖F ≤ ε. Here, experiments were performed for
ε = 10⁻³. For this reason, the matrix they return is located
slightly outside of the polytope of solutions. When considering
the closest point to this result in the polytope, it appears to be
very close to the solution returned by the Simple method.
As expected, the Sparse method recovers the matrix with
the lowest L1,1 norm. It is also interesting to remark that
the projection of the solution of the graphical lasso on the
polytope is smoother than the projection of the solution
obtained by the method from Kalofolias. This echoes the
remark in Section III-B that minimization of the quantity
Tr(Σ̃Θ) in (6) tends to promote smoothness of the signals
on the graph when Θ is a Laplacian matrix, which appears
to be encouraged by the regularization algorithm. Note that
the method from Kalofolias infers a graph whose projection
on the polytope gets the second best smoothness score, while
having a small trace. On the other hand, the solution inferred
by the graphical lasso appears to have most of its energy on
the diagonal entries. This is confirmed by the traces of the
matrices, both for the original solution and its approximate in
the polytope. Therefore, these two solutions provide interesting
ways to find a graph on which stationary signals are smooth,
with different simplicity assumptions.
VIII. C ONCLUSIONS
In this article, we have proposed a method for characterizing
the set of matrices that may be used to explain the relationships
among signal entries assuming a diffusion process. We have
shown that they are part of a convex polytope, and have
illustrated how one could choose a point in this particular set,
given additional selection criteria such as sparsity or simplicity
of the graph to infer. Finally, we have shown that most
other existing methods do not infer matrices that belong to
the polytope of admissible solutions for stationary signals,
and have introduced a method to consider the closest valid
matrix. An experiment was performed to illustrate that this
particular method can be useful for graph hypothesis testing.
Future directions based on this work are numerous. First
of all, reviewing the covariance estimation techniques is an
interesting direction, as obtaining the eigenvectors of the
covariance matrix is a cornerstone of our approach, and some
techniques may provide such information more precisely than
the sample covariance. We could also explore new strategies
to select a point in the polytope, for example by enforcing
the reconstruction of a binary matrix. Another interesting
direction would then be to propose selection strategies that
do not imply the full definition of the N(N+1)/2 constraints
defining the polytope. Finally, our immediate next work will
be to complement our experiments on graph hypothesis testing,
considering noisy versions of the candidate diffusion matrices.
ACKNOWLEDGEMENTS
The authors would like to thank the reviewers of previous
versions of this paper, whose remarks helped improving the
quality of our work. Also, we would like to thank Xiaowen
Dong and Santiago Segarra for kindly providing their codes, as
well as Benjamin Girault for sharing his dataset. Additionally,
we would like to thank Pierre Vandergheynst and his group at
EPFL for the inspiring discussions that led to this work.
The Orthogonal Vectors Conjecture
for Branching Programs and Formulas

Daniel Kane
UCSD

Ryan Williams∗
MIT

arXiv:1709.05294v1 [cs.CC] 15 Sep 2017
Abstract
In the O RTHOGONAL V ECTORS (OV) problem, we wish to determine if there is an orthogonal pair
of vectors among n Boolean vectors in d dimensions. The OV Conjecture (OVC) posits that OV requires
n^{2−o(1)} time to solve, for all d = ω(log n). Assuming the OVC, optimal time lower bounds have been
proved for many prominent problems in P, such as Edit Distance, Frechet Distance, Longest Common
Subsequence, and approximating the diameter of a graph.
We prove that OVC is true in several computational models of interest:
• For all sufficiently large n and d, OV for n vectors in {0,1}^d has branching program complexity
Θ̃(n · min(n, 2^d)). In particular, the lower bounds match the upper bounds up to polylog factors.
• OV has Boolean formula complexity Θ̃(n · min(n, 2^d)), over all complete bases of O(1) fan-in.
• OV requires Θ̃(n · min(n, 2^d)) wires, in formulas comprised of gates computing arbitrary symmetric functions of unbounded fan-in.
Our lower bounds basically match the best known (quadratic) lower bounds for any explicit function
in those models. Analogous lower bounds hold for many related problems shown to be hard under OVC,
such as Batch Partial Match, Batch Subset Queries, and Batch Hamming Nearest Neighbors, all of which
have very succinct reductions to OV.
The proofs use a certain kind of input restriction that is different from typical random restrictions
where variables are assigned independently. We give a sense in which independent random restrictions
cannot be used to show hardness, in that OVC is false in the “average case” even for AC0 formulas:
• For every fixed p ∈ (0, 1) there is an ε_p > 0 such that for every n and d, OV instances where
input bits are independently set to 1 with probability p (and 0 otherwise) can be solved with AC0
formulas of size O(n^{2−ε_p}), on all but an o_n(1) fraction of instances. Moreover, ε_p → 1 as p → 1.
∗ Supported by an NSF CAREER.
1 Introduction
We investigate the following basic combinatorial problem:
ORTHOGONAL VECTORS (OV)
Given: n vectors v_1, …, v_n ∈ {0,1}^d
Decide: Are there i, j such that ⟨v_i, v_j⟩ = 0?
An instructive way of viewing the OV problem is that we have a collection of n sets over [d], and wish to
find two disjoint sets among them. The obvious algorithm runs in time O(n^2 · d), and log(n) factors can be
shaved [Pri99]. For d < log_2(n), stronger improvements are possible: there are folklore O(n · 2^d · d)-time
and Õ(n + 2^d)-time algorithms (for a reference, see [CST17]). Truly subquadratic-time algorithms have
recently been developed for even larger dimensionalities: the best known result in this direction is that for
all constants c ≥ 1, OV with d = c log n dimensions can be solved in n^{2−1/O(log c)} time [AWY15, CW16].
However, it seems inherent that, as the vector dimension d increases significantly beyond log n, the time
complexity of OV approaches the trivial n^2 bound.
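For concreteness, the first two algorithms mentioned above can be sketched in Python. This is an illustrative sketch, not code from the paper; the corner case of the all-zeros vector (the only vector orthogonal to itself) is handled explicitly so both routines agree on pairs with i ≠ j.

```python
from itertools import combinations

def ov_naive(vecs):
    """Trivial O(n^2 * d) check: test every pair of vectors for orthogonality."""
    return any(all(a & b == 0 for a, b in zip(u, v))
               for u, v in combinations(vecs, 2))

def ov_subset_dp(vecs, d):
    """Folklore O~(n + 2^d)-style check for small d: a subset-sum DP marks
    every mask containing the support of some input vector, then each vector
    is tested against the complement of its own support."""
    support = [sum(bit << k for k, bit in enumerate(v)) for v in vecs]
    covered = [False] * (1 << d)   # covered[m]: some support is a subset of m
    for s in support:
        covered[s] = True
    for k in range(d):             # standard subset-sum ("SOS") propagation
        bit = 1 << k
        for m in range(1 << d):
            if m & bit and covered[m ^ bit]:
                covered[m] = True
    full = (1 << d) - 1
    for s in support:
        if s == 0:                 # the zero vector is orthogonal to everything
            return len(vecs) >= 2
        if covered[full ^ s]:      # some other support avoids all ones of s
            return True
    return False
```

The second routine spends O(d · 2^d) time on the DP regardless of n, which is why it only pays off in the low-dimensional regime d < log_2(n).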
Over the last several years, a significant body of work has been devoted to understanding the following
plausible lower bound conjecture:
Conjecture 1.1 (Orthogonal Vectors Conjecture (OVC) [Wil04, AVW14, BI15, ABV15]). For every ε > 0,
there is a c ≥ 1 such that OV cannot be solved in n^{2−ε} time on instances with d = c log n.
In other words, OVC states that OV requires n^{2−o(1)} time on instances of dimension ω(log n). The
popular Strong Exponential Time Hypothesis [IP01, CIP09] (on the time complexity of CNF-SAT) implies OVC [Wil04]. For this reason, and the fact that the OV problem is very simple to work with,
the OVC has been the engine under the hood of many recent conditional lower bounds on classic problems solvable within P. For example, the OVC implies nearly-quadratic time lower bounds for Edit Distance [BI15], approximating the diameter of a graph [RV13], Frechet Distance [Bri14, BM16], Longest
Common Substring and Local Alignment [AVW14], Regular Expression Matching [BI16], Longest Common Subsequence, Dynamic Time Warping, and other string similarity measures [ABV15, BK15], Subtree Isomorphism and Largest Common Subtree [ABH+ 16], Curve Simplification [BBK+ 16], intersection
emptiness of two finite automata [Weh16], first-order properties on sparse finite structures [GIKW17] as well
as average-case hardness for quadratic-time [BRSV17]. Other works surrounding the OVC (or assuming it)
include [WY14, Wil16, AVW16, CDHL16, APRS16, ED16, IR16, CGR16, KPS17].
Therefore it is of strong interest to prove the OVC in reasonable computational models. Note that OV can
be naturally expressed as a depth-three formula with unbounded fan-in: an OR of n^2 NORs of d ANDs on
two input variables: an AC0 formula of size O(n^2 · d). Are there smaller formulas for OV?
1.1 OVC is True in Restricted Models
In this paper, we study how well OV can be solved in the Boolean formula and branching program models.
Among the aforementioned OV algorithms, only the first two seem to be efficiently implementable by
formulas and branching programs: for example, there are DeMorgan formulas for OV of size only O(n^2 · d)
and size O(n · d · 2^d), respectively (see Proposition 1).
The other algorithms do not seem to be implementable in small space, in particular with small-size
branching programs. Our first theorem shows that the simple constructions solving OV with O(n^2 · d)
and O(n · 2^d · d) work are essentially optimal for all choices of d and n:
Theorem 1.1 (OVC For Formulas of Bounded Fan-in). For every constant c ≥ 1, OV on n vectors in d dimensions does not have c-fan-in formulas of size O(min{n^2/(log d), n · 2^d/(d^{1/2} log d)}), for all sufficiently
large n, d.
Theorem 1.2 (OVC For Branching Programs). OV on n vectors in d dimensions does not have branching
programs of size O(min{n^2, n · 2^d/d^{1/2}}/(log(nd) log(d))), for all sufficiently large n, d.
As far as we know, size-s formulas of constant fan-in may be more powerful than size-s branching programs (but note that DeMorgan formulas can be efficiently simulated with branching programs). Thus
the two lower bounds are incomparable. These lower bounds are tight up to the (negligible) factor of
min{√(log n), d^{1/2}} · log(d) · log(nd), as the following simple construction shows:
Proposition 1. OV has AC0 formulas (and branching programs) of size O(dn · min(n, 2^d)).

Proof. The O(dn^2) bound is obvious: take an OR over all n^2 pairs of vectors, and use an AND ◦ OR of
O(d) size to determine orthogonality of the pair. For the O(dn2^d) bound, our strategy is to try all 2^d vectors
v, and look for a v that is equal to one input vector and is orthogonal to another input vector. To this end,
take an OR over all 2^d possible vectors w over [d], and take the AND of two conditions:

1. There is a vector v in the input such that v = w. This can be computed with an OR over all n
vectors of an O(d)-size formula, in O(nd) size.

2. There is a vector u in the input such that ⟨u, w⟩ = 0. This can be computed with a parallel OR over
all n vectors of an O(d)-size formula, in O(nd) size.

Note that the above formulas have constant depth, with unbounded fan-in AND and OR gates. Since DeMorgan formulas of size s can be simulated by branching programs of size O(s), the proof is complete.¹
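The logic of this construction is easy to check against a direct brute-force transcription. The following is our own Python sketch, mirroring the two conditions rather than building an actual formula:

```python
from itertools import product

def ov_enumerate(vecs, d):
    """Brute-force mirror of the O(d * n * 2^d) formula: for every w in {0,1}^d,
    check that (1) some input vector equals w and (2) some input vector is
    orthogonal to w."""
    for w in product((0, 1), repeat=d):
        equals_w = any(tuple(v) == w for v in vecs)              # condition 1
        orthogonal_to_w = any(all(a & b == 0 for a, b in zip(v, w))
                              for v in vecs)                     # condition 2
        if equals_w and orthogonal_to_w:
            return True
    return False
```

If some v_i is orthogonal to some v_j, the iteration with w = v_i witnesses both conditions; conversely, a witnessing w yields an orthogonal pair of input vectors.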
Formulas with symmetric gates. As mentioned above, OV can be naturally expressed as a depth-three
formula of unbounded fan-in: a depth-three AC0 formula with O(n^2 d) wires. We show that this wire bound is also
nearly optimal, even when we allow arbitrary symmetric Boolean functions as gates. Note this circuit model
subsumes both AC0 (made up of AND, OR, and NOT gates) and TC0 (made up of MAJORITY and NOT
gates).

Theorem 1.3. Every formula computing OV composed of arbitrary symmetric functions with unbounded
fan-in needs at least Ω(min{n^2/(log d), n · 2^d/(d^{1/2} log d)}) wires, for all n and d.
1.2 Lower Bounds for Batch Partial Match, Batch Subset Query, Batch Hamming Nearest
Neighbors, etc.
A primary reason for studying OV is its ubiquity as a “bottleneck” special case of many other basic search
problems. In particular, many problems have very succinct reductions from OV to them, and our lower
bounds extend to these problems.
¹ This should be folklore, but we couldn't find a reference; see Appendix B.
We say that a linear projection reduction from a problem A to problem B is a circuit family {C_n} where
each C_n has n inputs and O(n) outputs, each output of C_n depends on at most one input, and x ∈ A if and
only if C_{|x|}(x) ∈ B, for all possible inputs x. Under this constrained reduction notion, it is easy to see that
if OV has a linear projection reduction to B, then size lower bounds for OV (even in our restricted settings)
imply analogous lower bounds for B as well. Via simple linear projection reductions which preserve both n
and d (up to constant multiplicative factors), analogous lower bounds hold for many other problems which
have been commonly studied, such as:
BATCH PARTIAL MATCH
Given: n "database" vectors v_1, …, v_n ∈ {0,1}^d and n queries q_1, …, q_n ∈ {0,1,⋆}^d
Decide: Are there i, j such that v_i is a partial match of q_j, i.e. for all k, q_j[k] ∈ {v_i[k], ⋆}?

BATCH SUBSET QUERY
Given: n sets S_1, …, S_n ⊆ [d] and n queries T_1, …, T_n ⊆ [d]
Decide: Are there i, j such that S_i ⊆ T_j?

BATCH HAMMING NEAREST NEIGHBORS
Given: n points p_1, …, p_n ∈ {0,1}^d and n queries q_1, …, q_n ∈ {0,1}^d, integer k
Decide: Are there i, j such that p_i and q_j differ in at most k positions?
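As one illustration, the reduction to BATCH SUBSET QUERY can be sketched as follows. This is our own Python sketch, not the paper's formal circuit family; it rests on the identity ⟨v_i, v_j⟩ = 0 iff support(v_i) ⊆ [d] \ support(v_j), and it also admits the i = j pair, which only matters for the all-zeros vector.

```python
def ov_to_batch_subset_query(vecs, d):
    """Projection-reduction sketch: each output bit of the reduction copies or
    negates exactly one input bit, as a linear projection requires."""
    sets = [{k for k in range(d) if v[k]} for v in vecs]          # S_i = support(v_i)
    queries = [{k for k in range(d) if not v[k]} for v in vecs]   # T_j = [d] \ support(v_j)
    return sets, queries

def batch_subset_query(sets, queries):
    """Naive quadratic decision procedure for BATCH SUBSET QUERY."""
    return any(s <= t for s in sets for t in queries)
```

Because set membership of each element of S_i (resp. T_j) is a single input bit (resp. its negation), a size lower bound for OV transfers to BATCH SUBSET QUERY with n and d preserved up to constants.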
1.3 “Average-Case” OVC is False, Even for AC0
The method of proof in the above lower bounds is an input restriction method that does not assign variables
independently (to 0, 1, or ⋆) at random. (Our restriction method could be viewed as a random process, just
not one that assigns variables independently.) Does OV become easier under natural product distributions of
instances, e.g., with each bit of each vector being an independent random variable? Somewhat surprisingly,
we show that a reasonable parameterization of average-case OVC is false, even for AC0 formulas.
For p ∈ (0, 1), and for a given n and d, we call OV(p)_{n,d} the distribution of OV instances where all bits
of the n vectors are chosen independently, set to 1 with probability p and 0 otherwise. We would like to
understand when OV(p) can be efficiently solved on almost all instances (i.e., with probability 1 − o(1)).
We give formulas of truly sub-quadratic size for every p > 0:
Theorem 1.4. For every p ∈ (0, 1), and every n and d, there is an AC0 formula of size n^{2−ε_p} that correctly
answers all but an o_n(1) fraction of OV(p)_{n,d} instances on n vectors and d dimensions, for an ε_p > 0 such
that ε_p → 1 as p → 1.
Interestingly, our AC0 formulas have one-sided error, even in the worst case: if there is no orthogonal pair
in the instance, our formulas always output 0. However, they may falsely report that there is no orthogonal
pair, but this only occurs with probability o(1) on a random OV(p)_{n,d} instance, for any n and d.
1.4 Intuition
Our lower bounds give some insight into what makes OV hard to solve. There are two main ideas:
1. OV instances with n d-dimensional vectors can encode difficult Boolean functions on d inputs, requiring circuits of size Ω̃(min(2^d, n)). This can be accomplished by encoding those strings with "middle"
Hamming weight from the truth table of a hard function with the vectors in an OV instance, in such
a way that finding an orthogonal pair is equivalent to evaluating the hard Boolean function at a given
d-bit input. This is an inherent property of OV that is independent of the computational model.
2. Because we are working with simple computational models, we can generally make the following kind
of claim: given an algorithm for solving OV and given a partial assignment to all input vectors except
for one appropriately chosen vector, we can propagate this partial assignment through the algorithm,
and “shrink” the size of the algorithm by a factor of Ω(n). This sort of argument was first used by
Nechiporuk [Nec66] in the context of branching program lower bounds, and can be also applied to
formulas.
Combining the two ideas, if we can "shrink" our algorithm by a factor of n by restricting the inputs
appropriately, and argue that the remaining subfunction requires circuits of size Ω̃(min(2^d, n)), we can
conclude that the original algorithm for OV must have had size Ω̃(min(n2^d, n^2)). (Of course, there are
many details to verify, but this is the basic idea.)
The small AC0 formulas for OV(p) (the average-case setting) involve several ideas. First, given the
probability p ∈ (0, 1) of 1 and the number of vectors n, we observe a simple phase transition phenomenon:
there is only a particular range of dimensionality d in which the problem is non-trivial, and outside of this
range, almost all instances are either “yes” instances or “no” instances. Second, within this “hard” range
of d, the orthogonal vector pairs are expected to have a special property: with high probability, at least
one orthogonal pair in a “yes” instance has noticeably fewer ones than a typical vector in the distribution.
To obtain a sub-quadratic size AC0 formula from these observations, we partition the instance into small
groups such that the orthogonal pair (if it exists) is the only “sparse” vector in its group, whp. Over all pairs
of groups i, j in parallel, we take the component-wise OR of all sparse vectors in group i, and similarly for
group j. Then we test the two ORed vectors for orthogonality. By doing so, if our formula ever reports 1,
then there is some orthogonal pair in the instance (even in the worst case).
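A toy version of this one-sided test can be sketched as follows. This is our illustrative Python sketch; the paper's construction fixes the group size and the sparsity threshold as functions of p, n, and d, which we leave as free parameters here.

```python
def ov_sparse_groups(vecs, group_size, threshold):
    """One-sided-error sketch of the average-case idea: OR together the sparse
    vectors in each group, then test the ORed representatives pairwise.
    group_size and threshold are illustrative knobs, not the paper's settings."""
    groups = [vecs[i:i + group_size] for i in range(0, len(vecs), group_size)]
    reps = []
    for g in groups:
        sparse = [v for v in g if sum(v) <= threshold]
        # component-wise OR of the group's sparse vectors (None if there are none)
        reps.append([max(col) for col in zip(*sparse)] if sparse else None)
    for i in range(len(reps)):
        for j in range(i + 1, len(reps)):
            if reps[i] is not None and reps[j] is not None and \
               all(a & b == 0 for a, b in zip(reps[i], reps[j])):
                return True   # a reported 1 is always a true orthogonal pair
    return False              # pairs that are not sparse, or share a group, may be missed
```

If the two ORed representatives are orthogonal, then every sparse vector in group i is orthogonal to every sparse vector in group j, so an output of 1 certifies a genuine orthogonal pair even in the worst case; the o(1) failure probability only concerns missed pairs.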
2 Lower Bounds
Functions that are hard on the middle layer of the hypercube. In our lower bound proofs, we will use
functions on d inputs for which every small circuit fails to agree with the function on inputs of Hamming
weight about d/2. Let \binom{[d]}{k} denote the set of all d-bit vectors of Hamming weight k.
Lemma 2.1. Let d be even, let C be a set of Boolean functions, let N(d, s) be the number of functions in C
on d inputs of size at most s, and let s⋆ ∈ N satisfy log_2(N(d, s⋆)) < \binom{d}{d/2}.

Then there is a sequence of \binom{d}{d/2} pairs (x_i, y_i) ∈ \binom{[d]}{d/2} × {0,1}, such that every function f : {0,1}^d →
{0,1} satisfying f(x_i) = y_i (for all i = 1, …, \binom{d}{d/2}) requires C-size at least s⋆.

Proof. By definition, there are N(d, s) functions of size s on d inputs from C, and there are 2^{\binom{d}{d/2}} input/output sequences (x_i, y_i) ∈ \binom{[d]}{d/2} × {0,1} defined over all d-bit vectors of Hamming weight d/2. For
2^{\binom{d}{d/2}} > N(d, s), there is an input/output sequence that is not satisfied by any function in C of size s.
Note that it does not matter what is meant by size in the above lemma: it could be gates, wires, etc., and
the lemma still holds (as it is just counting). The above simple lemma applies to formulas, as follows:

Corollary 2.1. Let c ≥ 2 be a constant. There are \binom{d}{d/2} pairs (x_i, y_i) ∈ \binom{[d]}{d/2} × {0,1}, such that every
function f : {0,1}^d → {0,1} satisfying f(x_i) = y_i (for all i = 1, …, \binom{d}{d/2}) needs c-fan-in formulas of
size at least Ω(2^d/(d^{1/2} log d)).

Proof. There are N(d, s) ≤ d^{k_c · s} formulas of size s on d inputs, where the constant k_c depends only on c.
When 2^{\binom{d}{d/2}} > d^{k_c · s}, Lemma 2.1 says that there is an input/output sequence of length \binom{d}{d/2} that no formula
of size s can satisfy. Thus to satisfy that sequence, we need a formula of size s at least large enough that
2^{\binom{d}{d/2}} ≤ d^{k_c · s}, i.e., s ≥ Ω(\binom{d}{d/2}/log(d)) ≥ Ω(2^d/(d^{1/2} log d)).
2.1 Lower Bound for Constant Fan-in Formulas
We are now ready to prove the lower bound for Boolean formulas of constant fan-in:
Reminder of Theorem 1.1. For every constant c ≥ 1, OV on n vectors in d dimensions does not have
c-fan-in formulas of size O(min{n^2/(log d), n · 2^d/(d^{1/2} log d)}), for all sufficiently large n, d.
All of the lower bound proofs have a similar structure. We will give considerably more detail in the proof
of Theorem 1.1 to aid the exposition of the later lower bounds.
Proof. To simplify the calculations, assume d is even in the following. Let F_{n,d}(v_1, …, v_n) be a c-fan-in
formula of minimal size s computing OV on n vectors of dimension d, where each v_i denotes a sequence of
d Boolean variables (v_{i,1}, …, v_{i,d}).

Let ℓ be the number of leaves of F_{n,d}. Since F_{n,d} is minimal, each gate has fan-in at least two (gates of
fan-in 1 can be "merged" into adjacent gates). Therefore (by an easy induction on s) we have

    s ≥ ℓ ≥ s/2.    (1)
Observe there must be a vector v_{i⋆} (for some i⋆ ∈ [n]) whose d Boolean variables appear on at most ℓ/n
leaves of the formula F_{n,d}.

Case 1. Suppose \binom{d}{d/2} ≤ n − 1. Let {(x_i, y_i)} ⊆ \binom{[d]}{d/2} × {0,1} be a set of hard pairs from Corollary 2.1,
and let f : {0,1}^d → {0,1} be any function that satisfies f(x_i) = y_i, for all i. Let {x′_1, …, x′_t} ⊆ \binom{[d]}{d/2}
be those d-bit strings of Hamming weight d/2 such that f(x′_i) = 1, for some t ≤ \binom{d}{d/2} ≤ n − 1. By
Corollary 2.1, such an f needs c-fan-in formulas of size at least Ω(2^d/(d^{1/2} log d)).

Case 2. Suppose \binom{d}{d/2} ≥ n − 1. Then we claim there is a list of input/output pairs (x_1, y_1), …,
(x_{n−1}, y_{n−1}) ∈ \binom{[d]}{d/2} × {0,1} such that for every f : {0,1}^d → {0,1} satisfying f(x_i) = y_i, for all i, f
needs formulas of size at least Ω(n/log d). To see this, simply note that if we take n − 1 distinct strings
x_1, …, x_{n−1} from \binom{[d]}{d/2}, there are 2^{n−1} possible choices for the list of pairs. So when 2^{n−1} > d^{k_c·s}, there
is a list (x_1, y_1), …, (x_{n−1}, y_{n−1}) that no formula of size s satisfies. For any function f : {0,1}^d →
{0,1} such that f(x_i) = y_i for all i = 1, …, n − 1, its formula size is s ≥ Ω(n/log d) in this case. Let
{x′_1, …, x′_t} ⊆ \binom{[d]}{d/2} be those d-bit strings of Hamming weight d/2 such that (x′_i, 1) is on the list, for some
t ≤ n − 1.
For either of the two cases, we will use the list of t ≤ min{n − 1, \binom{d}{d/2}} strings {x′_1, …, x′_t} to make
assignments to the variables v_i of our OV formula, for all i ≠ i⋆. In particular, for all i = 1, …, t with
i ≠ i⋆, we substitute the d bits of x̄′_i (the complement of x′_i, obtained by flipping all the bits of x′_i) in place
of the d-bit input vector v_i. If t < n − 1 (which can happen in case 1), substitute all other input vectors with index j ≠ i⋆
with the all-ones vector ~1. Note that all of the pairs of vectors substituted so far are not orthogonal to each other: for all i ≠ i′,
we have ⟨x̄′_i, x̄′_{i′}⟩ ≠ 0, because both x′_i and x′_{i′} are distinct vectors each with d/2 ones, and for all i we have
⟨x̄′_i, ~1⟩ ≠ 0.
After these substitutions, the remaining formula F′_n is on only d inputs, namely the vector v_{i⋆}. Moreover,
F′_n is a formula with at most ℓ/n leaves labeled by literals: the rest of the leaves are labeled with 0/1
constants. After simplifying the formula (replacing all gates with some 0/1 inputs by equivalent functions
of smaller fan-in, and replacing gates of fan-in 1 by wires), the total number of leaves of F′_n is now at most
ℓ/n. Therefore by (1) we infer that

    size(F′_n) ≤ 2ℓ/n.    (2)

Since F_{n,d} computes OV, it follows that for every input vector y ∈ {0,1}^d of Hamming weight d/2, F′_n on
input y outputs 1 if and only if there is some i such that ⟨x̄′_i, y⟩ = 0. Note that since both x′_i and y have
Hamming weight exactly d/2, we have ⟨x̄′_i, y⟩ = 0 if and only if y = x′_i.
By our choice of the x′_i's, it follows that for all y ∈ {0,1}^d of Hamming weight d/2, F′_n(y) = 1 if and only
if f(y) = 1. By our choice of f (from Corollary 2.1 in case 1, and our claim in case 2), we must have

    size(F′_n) ≥ min{Ω(2^d/(d^{1/2} log d)), Ω(n/log d)},    (3)

depending on whether \binom{d}{d/2} ≤ n − 1 or not (case 1 or case 2). Combining (2) and (3), we infer that

    ℓ ≥ Ω(n · min{2^d/(d^{1/2} log d), n/log d}),    (4)

therefore the overall lower bound on formula size is s ≥ Ω(min{n^2/(log d), n · 2^d/(d^{1/2} log d)}).
Remark on a Red-Blue Variant of OV. In the literature, OV is sometimes posed in a different form,
where half of the vectors are colored red, half are colored blue, and we wish to find a red-blue pair which
is orthogonal. Calling this form OV’, we note that OV’ also exhibits the same lower bound up to constant
factors. Given an algorithm/formula/circuit A for computing OV’ on 2n vectors (n of which are red, and
n of which are blue), it is easy to verify that an algorithm/formula/circuit for OV on n vectors results by
simply putting two copies of the set of vectors in the red and blue parts. Thus our lower bounds hold for the
red-blue variant as well.
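The reduction in this remark is short enough to transcribe directly (an illustrative Python sketch with a naive stand-in solver; note that duplicating the set admits a pair of red and blue copies of the same vector, which is orthogonal only for the all-zeros vector):

```python
def red_blue_naive(red, blue):
    """Naive solver for the red-blue variant OV': scan all red-blue pairs."""
    return any(all(a & b == 0 for a, b in zip(u, v))
               for u in red for v in blue)

def ov_via_red_blue(vecs, red_blue_solver=red_blue_naive):
    """Reduction described above: place a copy of the input set in each color,
    then ask the (assumed black-box) OV' solver for an orthogonal red-blue pair."""
    return red_blue_solver(list(vecs), list(vecs))
```

Any algorithm, formula, or circuit for OV' on 2n vectors can be plugged in as `red_blue_solver`, which is why lower bounds for OV transfer to OV' up to constant factors.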
2.2 Lower Bound for Branching Programs
Recall that a branching program of size S on n variables is a directed acyclic graph G on S nodes, with
a distinguished start node s and exactly two sink nodes, labeled 0 and 1 respectively. All non-sink nodes
are labeled with a variable x_i from {x_1, …, x_n}, and have one outgoing edge labeled x_i = 1 and another
outgoing edge labeled x_i = 0. The branching program G evaluated at an input (a_1, …, a_n) ∈ {0,1}^n is
the subgraph obtained by only including edges of the form x_i = a_i, for all i = 1, …, n. Note that after such
an evaluation, the remaining subgraph has a unique path from the start node s to a sink; the sink reached on
this unique path (be it 0 or 1) is defined to be the output of G on (a_1, …, a_n).
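The evaluation rule can be sketched directly in code (an illustrative Python sketch of the definition, with a made-up two-node example program; the node names and dictionary encoding are our own):

```python
def eval_bp(nodes, start, x):
    """Evaluate a branching program.  nodes maps a node name to
    (variable index, successor on 0, successor on 1); the two sinks are the
    integers 0 and 1.  Follows the unique surviving path from start."""
    node = start
    while node not in (0, 1):
        var, succ0, succ1 = nodes[node]
        node = succ1 if x[var] else succ0
    return node

# A hypothetical 2-node program computing x0 AND x1.
and_bp = {"s": (0, 0, "t"), "t": (1, 0, 1)}
```

Restricting an input bit x_j to a constant a_j corresponds to deleting the unused outgoing edge of every node labeled x_j, which is exactly the edge-removal step used in the proof below.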
Reminder of Theorem 1.2. OV on n vectors in d dimensions does not have branching programs of size
O(min{n^2, n · 2^d/d^{1/2}}/(log(nd) log(d))), for all sufficiently large n, d.
Proof. (Sketch) The proof is similar to Theorem 1.1; here we focus on the steps of the proof that are
different. Let G be a branching program with S nodes computing OV on n vectors with d dimensions. Each
node of G reads a single input bit from one of the input vectors; thus there is an input vector v_{i⋆} that is read
only O(S/n) times in the entire branching program G.

We will assign all variables other than the d variables that are part of v_{i⋆}. Using the same encoding as
Theorem 1.1, by assigning the n − 1 other vectors, we can implement a function f : {0,1}^d → {0,1} that is
hard for branching programs to compute on the set of d-bit inputs in \binom{[d]}{d/2}. In particular, we substitute d-bit
vectors which represent inputs from f^{−1}(1) ∩ \binom{[d]}{d/2} for all n − 1 input vectors different from v_{i⋆}. For each
of these assignments, we can reduce the size of the branching program accordingly: for each input bit x_j
that is substituted with the bit a_j, we remove all edges with the label x_j = ¬a_j, so that every node labeled
x_j now has outdegree 1. After the substitution, two properties hold:
1. There is a hard function f, implemented by the assignment to the n − 1 other vectors, such that the minimum size T of a branching program computing f satisfies T log_2(T) ≥ Ω(min{\binom{d}{d/2}, n}/ log(d)). To see this is possible, first note there are d^T · 2^{Θ(T log(T))} branching programs of size T on d inputs (there are d^T choices for the node labels, and 2^{Θ(T log(T))} choices for the remaining graph on T nodes). In contrast, there are at least 2^{min{\binom{d}{d/2}, n−1}} choices for the hard function f's values on d-bit inputs of Hamming weight d/2. Therefore there is a function f such that d^T · 2^{Θ(T log(T))} ≥ 2^{min{\binom{d}{d/2}, n−1}}, or

   T + Θ(T log(T)) ≥ min{\binom{d}{d/2}, n − 1}/ log_2(d).
2. The minimum size of a branching program computing a function f : {0, 1}^d → {0, 1} on the remaining d bits of input is at most O(S/n). This follows because every node v with outdegree 1 can be removed from the branching program without changing its functionality: for every arc (u, v) in the graph, we can replace it with the arc (u, v′), where (v, v′) is the single edge out of v, removing the node v.
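The substitution-and-cleanup step from item 2 can be sketched in code; the following Python sketch (encoding and names ours, matching the dict representation used earlier) fixes one variable and splices out the resulting outdegree-1 nodes:

```python
def restrict_bp(nodes, i, a):
    """Fix variable x[i] := a in a branching program and remove the
    now-trivial nodes. `nodes` maps ids to ('sink', b) or (var, s0, s1).
    Every node reading x[i] keeps only the successor consistent with a."""
    # Step 1: nodes labeled x[i] now have a single forced successor.
    forced = {v: (spec[2] if a else spec[1])
              for v, spec in nodes.items() if spec[0] == i}

    def resolve(v):
        # Follow chains of forced (outdegree-1) nodes to a surviving node.
        while v in forced:
            v = forced[v]
        return v

    # Step 2: rebuild the program without the removed nodes, rerouting
    # every arc that pointed at a removed node (the (u, v) -> (u, v')
    # replacement described above).
    new_nodes = {}
    for v, spec in nodes.items():
        if v in forced:
            continue
        if spec[0] == 'sink':
            new_nodes[v] = spec
        else:
            var, s0, s1 = spec
            new_nodes[v] = (var, resolve(s0), resolve(s1))
    return new_nodes, resolve
```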
Combining these two points, we have (S/n) · log(S/n) ≥ Ω(min{\binom{d}{d/2}, n}/ log(d)), or

S ≥ Ω( min{n · \binom{d}{d/2}, n^2} / (log(S/n) · log(d)) ).
Since S ≤ n^2·d, we have

S ≥ Ω( min{n · \binom{d}{d/2}, n^2} / (log(nd) · log(d)) ) ≥ Ω( min{n · 2^d/d^{1/2}, n^2} / (log(nd) · log(d)) ).

This concludes the proof.
2.3 Formulas With Symmetric Gates
We will utilize a lower bound on the number of functions computable by symmetric-gate formulas with a
small number of wires:
Lemma 2.2. There are n^{O(w)} symmetric-gate formulas with w wires and n inputs.
Proof. There is an injective mapping from the set of trees of unbounded fan-in and w wires into the set of binary trees with at most 2w nodes: simply replace each node of fan-in k with a binary tree of at most 2k nodes. The number of such binary trees is O(4^{2w}) (by upper bounds on Catalan numbers). This counts the number of "shapes" for the symmetric formula; we also need to count the possible gate assignments. There are 2^{k+1} symmetric functions on k inputs. So for a symmetric-gate formula with g gates, where the ith gate has fan-in w_i for i = 1, . . . , g, the number of possible assignments of symmetric functions to its gates is ∏_{i=1}^{g} 2^{w_i + 1} = 2^{g + Σ_i w_i} = 2^{g+w}. There are at most w leaves, and there are n^w ways to choose the variables read at each leaf. Since g ≤ w, we conclude that there are at most 4^{2w} · 2^{2w} · n^w ≤ n^{O(w)} symmetric-gate formulas with w wires.
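The count of 2^{k+1} symmetric functions on k inputs used in this proof can be verified by brute force for small k (a sketch of ours; a symmetric function is determined by its value on each Hamming weight 0, . . . , k):

```python
from itertools import product

def count_symmetric_functions(k):
    """Count distinct Boolean functions on k inputs whose value depends
    only on the Hamming weight of the input (symmetric functions)."""
    inputs = list(product([0, 1], repeat=k))
    seen = set()
    # Choose a value for each possible weight 0..k, then record the
    # resulting truth table; distinct weight profiles give distinct tables.
    for values in product([0, 1], repeat=k + 1):
        table = tuple(values[sum(x)] for x in inputs)
        seen.add(table)
    return len(seen)
```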
Reminder of Theorem 1.3 Every formula computing OV composed of arbitrary symmetric functions with unbounded fan-in needs at least Ω(min{n^2/(log d), n · 2^d/(d^{1/2} log d)}) wires, for all n and d.
Proof. (Sketch) The proof is quite similar to the other lower bounds, given Lemma 2.2, so we just sketch the ideas. Let F be a symmetric-gate formula for computing OV with unbounded fan-in and w wires. Let w_i be the number of wires touching inputs and w_g be the number of wires that do not touch inputs. Since F is a formula, we have (by a simple induction argument) that w_i ≥ w_g, thus

w ≤ 2w_i.  (5)

As before, each leaf of the formula is labeled by an input bit from one of the n input vectors; in this way, every leaf is "owned" by one of the n input vectors. We will substitute a 0/1 variable assignment to all vectors, except the vector z^⋆ which owns the fewest leaves. This gives a 0/1 assignment to all but O(w_i/n) of the w_i wires that touch inputs.
After any such variable assignment, we can simplify F as follows: for every symmetric-function gate g with m input wires, k of which are assigned 0/1, we can replace g with a symmetric function g′ that has only m − k inputs and no input wires assigned 0/1 (a partial assignment to a symmetric function just yields another symmetric function on a smaller set of inputs). If g′ is equivalent to a constant function itself, then we remove it from the formula and substitute its output wire with that constant, repeating the process on the gates that use the output of g as input. When this process completes, our new formula F′ has d inputs and no wires that are assigned constants. So F′ has O(w_i/n) wires touching inputs, and therefore by (5) the total number of wires in F′ is O(w/n).
As described earlier, the n − 1 vectors we assign can implement 2^{min{n−1, \binom{d}{d/2}}} different functions on d-bit inputs, but there are at most d^{O(w/n)} functions computable by the remaining symmetric formula, by Lemma 2.2. Thus we need that the number of wires w satisfies d^{O(w/n)} ≥ 2^{min{n−1, \binom{d}{d/2}}}, or

w ≥ Ω(min{n^2, n · 2^d/d^{1/2}}/(log d)).
This completes the proof.
3 Small Formulas for OV in the Average Case
Recall that for p ∈ (0, 1) and for a fixed n and d, we say that OV(p)_{n,d} is the distribution of OV instances where all bits of the n vectors from {0, 1}^d are chosen independently, set to 1 with probability p and to 0 otherwise. We will often say that a vector is "sampled from OV(p)" if each of its bits is chosen independently in this way. We would like to understand how efficiently OV(p)_{n,d} can be solved on almost all instances (i.e., with probability 1 − o(1)), for every n and d.
Reminder of Theorem 1.4 For every p ∈ (0, 1), and every n and d, there is an AC0 formula of size n^{2−ε_p} that correctly answers all but an o(1) fraction of OV(p)_{n,d} instances on n vectors and d dimensions, for an ε_p > 0 such that ε_p → 1 as p → 1.
Proof. Let ε > 0 be sufficiently small in the following. First, we observe that OV(p)_{n,d} is very easy unless d is close to (2/ log_2(1/(1 − p^2))) log_2(n). In particular, for dimensionality d that is significantly smaller (or larger, respectively) than this quantity, all but an o(1) fraction of the OV(p)_{n,d} instances are "yes" (or "no", respectively). To see this, note that two randomly chosen d-dimensional vectors under the OV(p)_{n,d} distribution are orthogonal with probability (1 − p^2)^d. For d = (2/ log_2(1/(1 − p^2))) log_2(n), a random pair is orthogonal with probability

(1 − p^2)^{(2/ log_2(1/(1−p^2))) log_2(n)} = 1/n^2.

Thus an OV(p)_{n,d} instance with n vectors has nontrivial probability of being a yes instance for d approximately (2/ log_2(1/(1 − p^2))) log_2(n).
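The critical dimensionality and the 1/n^2 collision probability can be checked numerically; below is a small sketch (the function name is ours, and the returned d need not be an integer):

```python
import math

def critical_dimension(p, n):
    """Dimension d at which a random pair under OV(p) is orthogonal
    with probability exactly 1/n^2, i.e. the solution of
    (1 - p^2)^d = 1/n^2."""
    return 2.0 * math.log2(n) / math.log2(1.0 / (1.0 - p * p))
```

Plugging the returned d back into (1 − p^2)^d recovers 1/n^2 up to floating-point error.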
Therefore if d > (2/ log_2(1/(1 − p^2)) + ε) log_2(n), or d < (2/ log_2(1/(1 − p^2)) − ε) log_2(n), then the random instance is either almost surely a "yes" instance, or almost surely a "no" instance, respectively. These comparisons could be done with the quantities (2/ log_2(1/(1 − p^2)) − ε) log_2(n) and (2/ log_2(1/(1 − p^2)) + ε) log_2(n) (which can be hard-coded in the input) with a poly(d, log n)-size branching program, which can output 0 and 1 respectively if this is the case.²
From here on, assume that d ∈ [(2/ log_2(1/(1 − p^2)) − ε) log_2(n), (2/ log_2(1/(1 − p^2)) + ε) log_2(n)]. Note that for p sufficiently close to 1, the dimensionality d is δ log n for a small constant δ > 0 that is approaching 0. Thus in the case of large p, the AC0 formula given in Proposition 1 has sub-quadratic size. In particular, the size is

O(n · 2^d · d) ≤ n^{1 + 2/ log_2(1/(1−p^2)) + o(1)}.  (6)

For p ≥ 0.867 > sqrt(3/4), this bound is sub-quadratic. For smaller p, we will need a more complex argument.
Suppose u, v ∈ {0, 1}^d are randomly chosen according to the distribution of OV(p) (we will drop the n, d subscript, as we have fixed n and d at this point).

We now claim that, conditioned on the event that u, v is an orthogonal pair, both u and v are expected to have between (p/(1 + p) − ε)d and (p/(1 + p) + ε)d ones, with 1 − o(1) probability. For each coordinate i, the event u[i] = v[i] = 1 fails to hold with probability 1 − p^2; conditioned on this event never occurring, we have

Pr[u[i] = 0, v[i] = 0 | ¬(u[i] = v[i] = 1)] = (1 − p)^2/(1 − p^2),
Pr[u[i] = 1, v[i] = 0 | ¬(u[i] = v[i] = 1)] = p(1 − p)/(1 − p^2),
Pr[u[i] = 0, v[i] = 1 | ¬(u[i] = v[i] = 1)] = p(1 − p)/(1 − p^2).
2
As usual, poly(m) refers to an unspecified polynomial of m of fixed degree.
Hence the expected number of ones in u (and in v) is only p(1 − p)d/(1 − p^2) = pd/(1 + p), and the number of ones is within (−εd, εd) of this quantity with probability 1 − o(1). (For example, in the case of p = 1/2, the conditional expected number of ones is d/3, while a typical vector has d/2 ones.)
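The conditional expectation pd/(1 + p) can be confirmed by exact enumeration for small d (a sketch of ours, summing over all pairs of d-bit vectors):

```python
from itertools import product

def expected_ones_given_orthogonal(p, d):
    """E[#ones in u | u and v are orthogonal], where each bit of u and v
    is independently 1 with probability p, computed by exact enumeration."""
    total_prob, total_ones = 0.0, 0.0
    for u in product([0, 1], repeat=d):
        for v in product([0, 1], repeat=d):
            if any(a and b for a, b in zip(u, v)):
                continue  # not an orthogonal pair
            weight = sum(u) + sum(v)
            prob = p ** weight * (1 - p) ** (2 * d - weight)
            total_prob += prob
            total_ones += prob * sum(u)
    return total_ones / total_prob
```

The result matches pd/(1 + p) exactly, since conditioning on orthogonality factorizes over coordinates.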
Say that a vector u is light if it has at most (p/(1 + p) + ε)d ones. It follows from the above discussion that if an OV(p) instance is a "yes" instance, then there is an orthogonal pair consisting of two light vectors, with probability 1 − o(1). Since the unconditioned expected number of ones is pd, and (p/(1 + p) + ε)d = pd(1 − (p/(p + 1) − ε/p)), the probability that a randomly chosen u is light is

Pr[u has at most (p/(1 + p) + ε)d ones] ≤ e^{−(p/(p+1) − ε/p)^2 pd/2} = e^{−(p^3/(2(p+1)^2) − Θ_p(ε))d},
by a standard Chernoff tail bound (see Theorem A.1 in Appendix A). So with high probability, there are at most n · e^{−(p^3/(2(p+1)^2) − Θ_p(ε))d} = n^{1−α} light vectors in an OV(p) instance, where

α = p^3/((p + 1)^2 log_2(1/(1 − p^2))) − Θ_p(ε) · 2/ log_2(1/(1 − p^2)).
Divide the n vectors of the input arbitrarily into n^{1−α(1−ε)} groups G_1, . . . , G_{n^{1−α(1−ε)}}, of O(n^{α(1−ε)}) vectors each. WLOG, suppose an orthogonal pair u, v lies in different groups u ∈ G_i and v ∈ G_j, with i ≠ j (note that, conditioned on there being an orthogonal pair, this event also occurs with 1 − o(1) probability). Since every vector is independently chosen, and given that Pr_v[v is light] ≤ 1/n^α, note that

Pr_{v_1,...,v_{n^{α(1−ε)}}}[all v_i in group G_a are not light] ≥ (1 − 1/n^α)^{n^{α(1−ε)}} ≥ 1 − 1/n^{εα},

for every group G_a. Thus the groups G_i and G_j have at most one light vector with probability 1 − o(1).
We can now describe our formula for OV(p), in words. Let Light(v) be the function which outputs 1 if and only if the d-bit input vector v is light. Since every symmetric function has poly(d)-size formulas [Khr72], Light(v) also has poly(d)-size formulas. Here is the formula:

Take the OR over all n^{2−2α(1−ε)} pairs (i, j) ∈ [n^{1−α(1−ε)}]^2 with i < j:
  Take the ¬OR over all k = 1, . . . , d, of the AND of two items:
    1. The OR over all O(n^{α(1−ε)}) vectors u in group G_i of (Light(u) ∧ u[k]).
    2. The OR over all O(n^{α(1−ε)}) vectors v in group G_j of (Light(v) ∧ v[k]).
To see that this works, we observe:
• If there is an orthogonal pair u, v in the instance, then recall that with probability 1 − o(1), (a) u and
v are light, (b) u and v appear in different groups Gi and Gj , and (c) there are no other light vectors
in Gi and no other light vectors in Gj . Thus the inner ORs over the group Gi (and respectively Gj )
will only output the bits of the vector u (and respectively v). Thus the above formula, by guessing
the pair (i, j), and checking over all k = 1, . . . , d that (u[k] ∧ v[k]) is not true, will find that u, v are
orthogonal, and output 1.
• If there is no orthogonal pair, then we claim that the formula always outputs 0. Suppose the formula outputs 1. Then there is some (i, j) ∈ [n^{1−α(1−ε)}]^2 such that the inner product of two vectors V_i and W_j is 0, where V_i is the OR of all light vectors in group G_i and W_j is the OR of all light vectors in group G_j. But for these two vectors to have zero inner product, it must be that all pairs of light vectors (one from G_i and one from G_j) are orthogonal to each other. Thus there is an orthogonal pair in the instance.
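The soundness direction of this argument can also be checked with a direct algorithmic rendering of the grouping scheme; the following Python sketch uses our own names, and (as a choice of this sketch, not of the formula) simply skips groups that contain no light vectors:

```python
def grouped_ov(vectors, light_threshold, num_groups):
    """Group-based orthogonality check mirroring the formula's structure:
    OR together the light vectors within each group, then test pairs of
    group-ORs. If it returns True, an orthogonal pair of light vectors
    exists (soundness); it is complete when the two relevant groups each
    contain at most one light vector."""
    groups = [vectors[i::num_groups] for i in range(num_groups)]

    def group_or(group):
        light = [v for v in group if sum(v) <= light_threshold]
        if not light:
            return None  # no light vectors: skip this group
        return [max(bits) for bits in zip(*light)]

    ors = [group_or(g) for g in groups]
    for i in range(num_groups):
        for j in range(i + 1, num_groups):
            if ors[i] is not None and ors[j] is not None:
                if all(a * b == 0 for a, b in zip(ors[i], ors[j])):
                    return True
    return False
```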
Using the poly(d)-size formulas for Light, the DeMorgan formula has size

O(n^{2−2α(1−ε)} · d · n^{α(1−ε)} · poly(d)) ≤ O(n^{2−α(1−ε)} · poly(d)).  (7)
Substituting in the value for α, the exponent becomes

2 − p^3(1 − ε)/((p + 1)^2 log_2(1/(1 − p^2))) + Θ_p(ε) · 2/ log_2(1/(1 − p^2)).
Recalling that we are setting ε to be arbitrarily small (its value only affects the o(1) probability of error), the formula size is

n^{2 − p^3/(2(p+1)^2 log_2(1/(1−p^2))) + o(1)}.
Observe that our formula can in fact be made into an AC0 formula of similar size; this is easy to see except for the poly(d)-size formula for Light. But for d = O(log n), any formula of poly(log n)-size on O(log n) bits can be converted into an AC0 circuit of depth c/ε and size 2^{(log n)^ε}, for some constant c ≥ 1 and any desired ε > 0.

The final formula is the minimum of the formulas of (6) and (7). For every fixed p ∈ (0, 1], we obtain a bound of n^{2−ε_p} for an ε_p > 0.
4 Conclusion
It is important to note that the largest known lower bound for branching programs computing any explicit function is due to Neciporuk [Nec66] from 1966, and is only Ω(N^2/log^2 N) for inputs of length N. A similar statement holds for Boolean formulas over the full binary basis (see for example [Juk12]). Our lower bounds for OV match these bounds up to polylogarithmic factors. Thus it would be a significant breakthrough to generalize our results to other problems believed to require cubic time, such as:
3-ORTHOGONAL VECTORS (3-OV)
Given: n vectors v_1, . . . , v_n ∈ {0, 1}^d
Decide: Are there i, j, k such that Σ_{ℓ=1}^{d} v_i[ℓ] · v_j[ℓ] · v_k[ℓ] = 0?
It is known that the Strong Exponential Time Hypothesis also implies that 3-OV requires n^{3−o(1)} time for dimensionality d = ω(log n) [Wil04, AV14].
Acknowledgements. We are very grateful to Ramamohan Paturi for raising the question of whether the
OV conjecture is true for AC0 circuits.
References
[ABH+ 16] Amir Abboud, Arturs Backurs, Thomas Dueholm Hansen, Virginia Vassilevska Williams, and
Or Zamir. Subtree isomorphism revisited. In SODA, pages 1256–1271, 2016.
[ABV15]
Amir Abboud, Arturs Backurs, and Virginia Vassilevska Williams. Tight hardness results for
LCS and other sequence similarity measures. In FOCS, pages 59–78, 2015.
[APRS16] Thomas Dybdahl Ahle, Rasmus Pagh, Ilya P. Razenshteyn, and Francesco Silvestri. On the
complexity of inner product similarity join. In PODS, pages 151–164, 2016.
[AV14]
Amir Abboud and Virginia Vassilevska Williams. Popular conjectures imply strong lower
bounds for dynamic problems. In FOCS, pages 434–443, 2014.
[AVW14]
Amir Abboud, Virginia Vassilevska Williams, and Oren Weimann. Consequences of faster
alignment of sequences. In ICALP, pages 39–51, 2014.
[AVW16]
Amir Abboud, Virginia Vassilevska Williams, and Joshua Wang. Approximation and fixed
parameter subquadratic algorithms for radius and diameter in sparse graphs. In SODA, pages
377–391. Society for Industrial and Applied Mathematics, 2016.
[AWY15] Amir Abboud, Richard Ryan Williams, and Huacheng Yu. More applications of the polynomial
method to algorithm design. In SODA, pages 218–230, 2015.
[BBK+ 16] Kevin Buchin, Maike Buchin, Maximilian Konzack, Wolfgang Mulzer, and André Schulz. Fine-grained analysis of problems on curves. In EuroCG, Lugano, Switzerland, 2016.
[BI15]
Arturs Backurs and Piotr Indyk. Edit distance cannot be computed in strongly subquadratic
time (unless SETH is false). In STOC, pages 51–58, 2015.
[BI16]
Arturs Backurs and Piotr Indyk. Which regular expression patterns are hard to match? In FOCS, pages 457–466, 2016.
[BK15]
Karl Bringmann and Marvin Künnemann. Quadratic conditional lower bounds for string problems and dynamic time warping. In FOCS, pages 79–97, 2015.
[BM16]
Karl Bringmann and Wolfgang Mulzer. Approximability of the discrete Fréchet distance. JoCG,
7(2):46–76, 2016.
[Bri14]
Karl Bringmann. Why walking the dog takes time: Fréchet distance has no strongly subquadratic algorithms unless SETH fails. In FOCS, pages 661–670, 2014.
[BRSV17] Marshall Ball, Alon Rosen, Manuel Sabin, and Prashant Nalini Vasudevan. Average-case finegrained hardness. IACR Cryptology ePrint Archive, 2017:202, 2017.
[CDHL16] Krishnendu Chatterjee, Wolfgang Dvorák, Monika Henzinger, and Veronika Loitzenbauer.
Model and objective separation with conditional lower bounds: Disjunction is harder than conjunction. In LICS, pages 197–206, 2016.
[CGR16]
Massimo Cairo, Roberto Grossi, and Romeo Rizzi. New bounds for approximating extremal
distances in undirected graphs. In SODA, pages 363–376, 2016.
[CIP09]
Chris Calabro, Russell Impagliazzo, and Ramamohan Paturi. The complexity of satisfiability
of small depth circuits. In Parameterized and Exact Complexity (IWPEC), pages 75–85, 2009.
[CST17]
Pairwise comparison of bit vectors. https://cstheory.stackexchange.com/questions/37361/
January 20, 2017.
[CW16]
Timothy M. Chan and Ryan Williams. Deterministic APSP, Orthogonal Vectors, and more:
Quickly derandomizing Razborov-Smolensky. In SODA, pages 1246–1255, 2016.
[ED16]
Jacob Evald and Søren Dahlgaard. Tight hardness results for distance and centrality problems
in constant degree graphs. CoRR, abs/1609.08403, 2016.
[GIKW17] Jiawei Gao, Russell Impagliazzo, Antonina Kolokolova, and R. Ryan Williams. Completeness
for first-order properties on sparse structures with algorithmic applications. In SODA, pages
2162–2181, 2017.
[IP01]
Russell Impagliazzo and Ramamohan Paturi. On the complexity of k-SAT. J. Comput. Syst.
Sci., 62(2):367–375, 2001.
[IR16]
Costas S. Iliopoulos and Jakub Radoszewski. Truly subquadratic-time extension queries and
periodicity detection in strings with uncertainties. In 27th Annual Symposium on Combinatorial
Pattern Matching, CPM 2016, June 27-29, 2016, Tel Aviv, Israel, pages 8:1–8:12, 2016.
[Juk12]
Stasys Jukna. Boolean Function Complexity: Advances and Frontiers. Springer-Verlag, 2012.
[Khr72]
V. M. Khrapchenko. The complexity of the realization of symmetrical functions by formulae.
Mathematical notes of the Academy of Sciences of the USSR, 11(1):70–76, 1972.
[KPS17]
Marvin Künnemann, Ramamohan Paturi, and Stefan Schneider. On the fine-grained complexity
of one-dimensional dynamic programming. CoRR, abs/1703.00941, 2017.
[Nec66]
E. I. Nechiporuk. On a boolean function. Doklady of the Academy of Sciences of the USSR,
169(4):765–766, 1966. English translation in Soviet Mathematics Doklady 7:4, pages 999–
1000.
[Pri99]
Paul Pritchard. A fast bit-parallel algorithm for computing the subset partial order. Algorithmica, 24(1):76–86, 1999.
[RV13]
Liam Roditty and Virginia Vassilevska Williams. Fast approximation algorithms for the diameter and radius of sparse graphs. In STOC, pages 515–524, 2013.
[Weh16]
Michael Wehar. Intersection non-emptiness for tree-shaped finite automata. Available at
http://michaelwehar.com/documents/TreeShaped.pdf, February 2016.
[Wil16]
Richard Ryan Williams. Strong ETH breaks with merlin and arthur: Short non-interactive
proofs of batch evaluation. In 31st Conference on Computational Complexity, CCC 2016, May
29 to June 1, 2016, Tokyo, Japan, pages 2:1–2:17, 2016.
[Wil04]
Ryan Williams. A new algorithm for optimal 2-constraint satisfaction and its implications.
Theor. Comput. Sci., 348(2-3):357–365, 2005. See also ICALP’04.
[WY14]
Ryan Williams and Huacheng Yu. Finding orthogonal vectors in discrete structures. In SODA,
pages 1867–1877, 2014.
A  Chernoff Bound
We use the following standard tail bound:
Theorem A.1. Let p ∈ (0, 1) and let X_1, . . . , X_d ∈ {0, 1} be independent random variables, such that for all i we have Pr[X_i = 1] = p. Then for all δ ∈ (0, 1),

Pr[Σ_i X_i < (1 − δ)pd] ≤ e^{−δ^2 pd/2}.
B DeMorgan Formulas into Branching Programs
Here we describe at a high level how to convert a DeMorgan formula (over AND, OR, NOT) of size s into
a branching program of size O(s).
Our branching program will perform an in-order traversal of the DeMorgan formula, maintaining a counter
(from 1 to s) of the current node being visited in the formula. The branching program begins at the root
(output) of the formula. If the current node is a leaf, its value b is returned to the parent node. If the current
node is not a leaf, the branching program recursively evaluates its left child (storing no memory about the
current node).
The left child returns a value b. If the current node is an AND and b = 0, or the current node is an OR and
b = 1, the branching program propagates the bit b up the tree (moving up to the parent). If the current node
is a NOT then the branching program moves to the parent with the value ¬b.
If none of these cases hold, then the branching program erases the value b, and recursively evaluates the
right child, which returns a value b. This value is simply propagated up the tree (note the fact that we visited
the right child means that we know what the left child’s value was).
Observe that we only hold the current node of the formula in memory, as well as O(1) extra bits.
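The traversal can be prototyped as an explicit evaluator in which only the current position in the tree and the single value b being passed up are remembered, with no subtree results stored (a sketch of ours; in the actual branching program the "position" is simply the node identity):

```python
def eval_formula_traversal(formula, x):
    """Evaluate a DeMorgan formula given as nested tuples.

    Node forms: ('var', i), ('not', c), ('and', l, r), ('or', l, r).
    `path` (a tuple of child indices from the root) plays the role of the
    current node; b is the value moving up the tree, None while descending.
    """
    def node_at(path):
        node = formula
        for c in path:
            node = node[1 + c]
        return node

    path, b = (), None
    while True:
        if b is None:                        # descending
            node = node_at(path)
            if node[0] == 'var':
                b = x[node[1]]               # leaf returns its input bit
            else:
                path = path + (0,)           # visit left (or only) child
            continue
        if not path:                         # value reached the root
            return b
        parent, came_from = node_at(path[:-1]), path[-1]
        if parent[0] == 'not':
            b, path = 1 - b, path[:-1]       # negate and move up
        elif came_from == 1:
            path = path[:-1]                 # right child's value moves up
        elif (parent[0] == 'and' and b == 0) or (parent[0] == 'or' and b == 1):
            path = path[:-1]                 # short-circuit: propagate b up
        else:
            b, path = None, path[:-1] + (1,) # forget b, evaluate right child
```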
Identifying Reusable Macros for Efficient Exploration via
Policy Compression
Francisco M. Garcia
University of Massachusetts - Amherst
Amherst, Massachusetts, USA
[email protected]

Bruno C. da Silva
Universidade Federal do Rio Grande do Sul
Porto Alegre, Rio Grande do Sul, Brazil
[email protected]
arXiv:1711.09048v1 [] 24 Nov 2017
ABSTRACT
Reinforcement Learning agents often need to solve not a single task,
but several tasks pertaining to a same domain; in particular, each
task corresponds to an MDP drawn from a family of related MDPs
(a domain). An agent learning in this setting should be able to exploit
policies it has learned in the past, for a given set of sample tasks, in
order to more rapidly acquire policies for novel tasks. Consider, for
instance, a navigation problem where an agent may have to learn to
navigate different (but related) mazes. Even though these correspond
to distinct tasks (since the goal and starting locations of the agent
may change, as well as the maze configuration itself), their solutions
do share common properties—e.g. in order to reach distant areas of
the maze, an agent should not move in circles. After an agent has
learned to solve a few sample tasks, it may be possible to leverage
the acquired experience to facilitate solving novel tasks from the
same domain. Our work is motivated by the observation that trajectory samples from optimal policies for tasks belonging to a common
domain, often reveal underlying useful patterns for solving novel
tasks. We propose an optimization algorithm that characterizes the
problem of learning reusable temporally extended actions (macros).
We introduce a computationally tractable surrogate objective that is
equivalent to finding macros that allow for maximal compression of
a given set of sampled trajectories. We develop a compression-based
approach for obtaining such macros and propose an exploration
strategy that takes advantage of them. We show that meaningful behavioral patterns can be identified from sample policies over discrete
and continuous action spaces, and present evidence that the proposed
exploration strategy improves learning time on novel tasks.
KEYWORDS
Reinforcement Learning; Autonomous Agents; Exploration
1 INTRODUCTION
Reinforcement Learning (RL) is an active area of research concerned with the problem of an agent learning from interactions with its environment. In this framework, an agent is at a state s_t at time step t, takes an action a_t, receives reward r_t, and moves to state s_{t+1}. A policy π defines a mapping from states to actions and determines the behavior of an agent in its environment. The objective of an RL algorithm is to learn an optimal policy π* that achieves the maximum expected sum of rewards. A sample from a policy, from some initial state s_0 to some terminal state s_T, is referred to as a trajectory, τ, and corresponds to a sequence of decisions or actions: τ = {a_1, . . . , a_T}.
Under Review (AAMAS 2018), 2018
© 2018 All rights reserved.
At the core of RL is the problem of the exploration-exploitation
trade-off. Exploration is concerned with taking actions to gather information about the underlying problem, and exploitation deals with
making decisions based on the acquired knowledge. Even though in
the last few years an impressive number of successful applications
of RL to difficult problems has occurred [11, 14], these were not
due to advances on exploration methods, but rather due to better
approaches to approximating state values, V , or state-action values,
Q. Exploration strategies have a significant impact on how quickly a
given learning algorithm is capable of finding a solution or policy.
Many commonly-used exploration strategies, such as ϵ-greedy or
Boltzmann exploration, potentially spend large amounts of time exploring unnecessarily, because they are not informed by prior
information (about related problems) when deployed on a new task.
In many practical RL problems, however, such prior information
may be available—an agent may need to learn how to behave in different problem instances (tasks) that share similar properties. More
formally, we define tasks as MDPs drawn from a family of related
MDPs (a domain); we assume that tasks share a set of transition
rules when analyzed over an appropriate state abstraction (i.e., given
appropriate state features). For instance, let us say that tasks correspond to mazes with different configurations. We assume there
exists a (possibly unknown) state abstraction ϕ(s) that maps a state
to, e.g., its (x, y) coordinates on the maze. The states that may result
from executing action Right, when in a state with abstraction ϕ(s),
may vary from maze to maze, since that particular location may
or may not be next to a wall. However, those possible next states
belong to a same set that is shared by all tasks: either the agent
successfully moves to the right, thereby transitioning to a state with
abstraction (x + 1, y); or it hits a wall, thereby transitioning to a state
with abstraction (x, y). This implies that even though mazes may
differ, they do share common properties in terms of the possible
outcomes of executing an action in a given state: the dynamics of
the tasks, when seen through an appropriate state abstraction ϕ, are not
arbitrarily different, since tasks are assumed to be related. More formally, let us assume that there exists a (possibly unknown) transition
function f(a, ϕ(s)) → ϕ(s′) determining the abstract representation
of the state s ′ that results from executing a in state s. Since tasks
in a domain are assumed to be related, the states that may result from
executing an action in a state are not arbitrarily different and form a
set Φ(a, ϕ(s)) which is common to all tasks. This implies that tasks
in a domain do share similar underlying dynamics—in the case of
sample mazes, for instance, the possible transitions resulting from
collisions against a wall are common to all tasks. State abstractions similar to ϕ, which allow for different but related MDPs to be analyzed and compared, have been studied by others and shown to allow for option policies to be defined over abstract state representations—thereby making it possible for such options to be reused and deployed when tackling different but related tasks [8].
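For the maze example, the shared outcome set Φ(Right, (x, y)) = {(x + 1, y), (x, y)} can be made concrete; the following is an illustrative sketch of ours (the wall encoding is an assumption, not from the paper):

```python
def step_right(maze_walls, xy):
    """Abstract transition f(Right, (x, y)) for one task: the move either
    succeeds or is blocked by a wall. Across all mazes, the possible
    outcomes form the shared set {(x + 1, y), (x, y)}; only which outcome
    occurs is task-specific. `maze_walls` is the set of blocked cells."""
    x, y = xy
    return (x, y) if (x + 1, y) in maze_walls else (x + 1, y)
```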
In this work we propose to identify reusable options by exploiting
the observation that the possible consequences of executing an action
in a particular state are shared by many related tasks. This suggests
the existence of a (possibly unknown) common transition dynamics structure underlying all tasks. We propose to identify options
reflecting this common structure—in particular, options that encode
recurring temporally extended actions (also called macros, [20], [17],
[1]) that occur as parts of the solution to many related tasks in the
domain. We also propose an exploration strategy making use of
such recurring options/action patterns in order to facilitate learning of novel tasks. Whereas primitive actions last for one time-step, taking the agent from state s_t to s_{t+1}, a temporally extended action lasts for n > 1 time-steps, taking the agent from s_t to s_{t+n}.
In order to identify these macros, we propose to solve a surrogate
problem: to identify macro actions that allow for a given set of
sample trajectories to be compactly reconstructed. In particular, if
we represent different action sequences of length l (which may occur
as part of a trajectory τ ) by a unique symbol, we can borrow ideas
from the compression literature and find binary representations for
the possible corresponding macros. Given binary representations of
this type, we can then evaluate the expected number of bits required
to encode a set of trajectories (drawn from optimal policies to a set
of sample tasks) in a compact way. By construction, solving this
surrogate problem formulation implies that the identified macros will
be recurring behavioral patterns in the policies for different tasks;
they do, therefore, reflect common structures and action patterns in
the solutions to different (but related) problem instances drawn from
a same domain.
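The surrogate objective (finding macros that allow a set of sample trajectories to be reconstructed compactly) can be illustrated with an LZW-style dictionary pass over action sequences; this is a simplified sketch of the idea, not the paper's exact algorithm:

```python
def extract_macros(trajectories, max_len=4):
    """Build an LZW-style dictionary over action sequences: phrases that
    recur across trajectories become candidate macros. Returns the
    multi-action phrases seen more than once, with their counts."""
    dictionary = {}  # phrase (tuple of actions) -> frequency
    for traj in trajectories:
        phrase = ()
        for a in traj:
            candidate = phrase + (a,)
            if candidate in dictionary and len(candidate) < max_len:
                dictionary[candidate] += 1
                phrase = candidate       # keep extending a known phrase
            else:
                dictionary[candidate] = dictionary.get(candidate, 0) + 1
                phrase = ()              # start a new phrase
    return {m: c for m, c in dictionary.items() if len(m) > 1 and c > 1}
```

The counts of the returned phrases can serve as an empirical distribution over macros, which is the kind of quantity the exploration strategy below draws on.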
In this work we focus on identifying a set of open-loop temporally
extended actions which are used for exploration (when tackling novel
tasks) in order to bias learning efforts towards more promising action
sequences. Intuitively, such macros allow an agent to more quickly
reach parts of the state space that would be unlikely to be reached
by random or unguided exploration, and that are often encountered
when solving tasks drawn from the domain.
In this work, we make the following contributions:
• propose a compression-based method for obtaining reusable
macros from a set of sample trajectories;
• demonstrate how two different compression algorithms (Huffman coding and LZW) can be used in this framework;
• introduce an exploration strategy that leverages the empirical
distribution of obtained macros in both discrete and continuous action spaces;
• provide experiments that show the benefits of our compression-based approach for learning exploration strategies in standard
RL benchmarks.
2 RELATED WORK
The proposed work lies at the intersection between transfer learning
(TL), option discovery, and learned exploration strategies. We show
how to discover useful recurring macro-actions from previous experiences, and leverage this knowledge to derive useful exploration
strategies for solving new problems.
The literature on these related research areas is rich. [22] provides a detailed survey on existing TL methods and the different
metrics used to evaluate how much is gained from transferring
knowledge. From this survey, two main metrics are relevant to our
work: jumpstart (the improvement of the initial performance of an agent
when solving a novel task, over an agent without this knowledge),
and total reward (the total cumulative reward received by an agent
during training).
Our approach requires the action-set to be shared throughout tasks,
but no explicit state variable mapping between tasks is assumed or
given. In a similar manner, [9] proposed separating the problems
being solved by an agent into an agent-space and a problem-space,
such that options defined over the agent-space are transferable across
different tasks. In their experiments, options or macros shared similarities in that they had access to a same set of sensors; the authors
did not exploit the fact that they were also being deployed over
tasks with a shared set of abstract transition rules. Similarly, [7]
developed a framework for obtaining an approximation to the value
function V when defined in the problem-space, resulting in a method
to jumpstart learning. More recent work has focused on using neural
networks to learn problem-invariant feature representations to transfer knowledge between task instances [5], [16]. These approaches,
however, do not directly address the problem of learning transferable exploration strategies, and instead rely on standard exploration
techniques such as ϵ-greedy or Boltzmann.
Option extraction techniques have also been studied at length. One
such approach to learn options is based on proto-value functions
[13], [12]. Here, an agent builds a graph of the environment and
uses such a model to identify temporally extended behaviors that
allow promising areas of the state space to be reached. Another
recent approach to option discovery relies on learning a model of
the environment and using the graph Laplacian to cluster states into
abstract states [10]. These methods depend on fully exploring the
environment a priori in order to build an accurate model, which
implicitly assumes that an efficient exploration strategy is available.
The problem of learning efficient exploration strategies has not
been overlooked. Most existing approaches seek to find efficient
ways to use the knowledge gained about a given task that the agent
is currently solving, as opposed to leveraging existing knowledge
of policies learned in previously-experienced related tasks. [21] proposed a count-based exploration strategy by hashing frequently-encountered states and adding a corresponding exploration bonus to the reward function. In this work, high-dimensional states are mapped to a lower-dimensional representation, thus allowing estimates of the reward function to be transferable across tasks. Another
recently proposed work in learning to explore is that of [4], where a generative adversarial network (GAN) is trained to estimate state visitation frequencies, which are then used as a reward bonus. In both cases, the resulting change in agent behavior is implicitly determined by a new reward function, and is not part of an explicitly-derived exploration strategy. Furthermore, most of these methods have in common
the reuse of knowledge of the value function; they do not directly
analyze the behavior (policies) of the agent under optimal policies
for previously-solved tasks.
In this work, we aim to address these limitations. We analyze sample trajectories drawn from optimal policies for related tasks and use them to derive an exploration strategy that is agnostic to the state-space representation and which is transferable and applicable to novel tasks drawn from the same domain. The main requirement of our method is that the action space remains constant across tasks.
AAMAS'18, 2018

3 PROBLEM FORMULATION
We are interested in obtaining macro-actions that would allow an
agent to more rapidly acquire an optimal policy π ∗ for a given task
υ drawn from a domain ϒ. A particular task instance drawn from the
problem domain is an MDP. Let A = {a_1, a_2, . . . , a_k} be an action set containing all primitive actions available in all tasks in ϒ. We seek to find a set of macros M such that, if a new action set A′ = A ∪ M were to be used, it would allow a set of sample trajectories (drawn from optimal policies for sample tasks in the domain) to be expressed in a more compact manner. Given one candidate binary representation for each primitive action and macro in M, we define B_A(τ, υ) as the minimum number of bits needed to represent a given trajectory τ, sampled from the optimal policy for some task υ, using action set A. Let E_{υ∼ϒ}[B_A(τ_{π∗}, υ)] be the expected number of bits needed to represent a trajectory τ_{π∗} (sampled from an optimal policy π∗ for task υ) given an action set A and domain ϒ. We wish to find a new action set A′ = A ∪ M such that:

E_{υ∼ϒ}[B_{A′}(τ_{π∗}, υ)] ≪ E_{υ∼ϒ}[B_A(τ_{π∗}, υ)].
The reasoning underlying this objective is that finding a set of
macros that leads to compressed trajectory representations implies
that these macro-actions are encountered frequently as part of optimal policies in the domain. The action patterns emerging from such
trajectories allow us to determine which action sequences do occur
often in optimal policies, and which ones do not. Identifying macros
that maximize trajectory representation compression, therefore, allows us to capture the underlying action patterns in optimal policies
π ∗ from which trajectories were sampled; these can then be used
by the agent to bias exploration when tackling novel tasks from the
same domain. To achieve the goal above, we propose to minimize
the following objective function:
J(A′) = E_{υ∼ϒ}[B_{A′}(τ_{π∗}, υ)] + f_e(A′).    (1)
The first term in Equation (1) seeks to minimize the expected number of bits needed to encode trajectories sampled from their corresponding policies π∗, while the second term is a regularizer on the dimensionality of the extended action space; this dimensionality depends on the particular encoder e used to compress trajectories by re-writing them using a particular extended action set. On the one hand, if the extended action set A′ were to become too large, it could include useful macros but could make learning more challenging, since there would be too many actions whose utilities need to be estimated. On the other hand, if A′ is too small, the agent would forfeit the opportunity of deploying macros that encode useful recurring behaviors.
In practice, it may be infeasible or too expensive to find the set
A′ that minimizes this expression, since the agent can only sample
tasks from ϒ. We can, however, approximate the solution to J (A′ )
by sampling tasks and optimizing the following surrogate objective:
Ĵ(A′) = (1/|T|) Σ_{τ∈T} B_{A′}(τ_{π∗}, υ) + f_e(A′),    (2)

where T is a set of sampled tasks drawn from the domain. Next, we show how to use this formulation to obtain useful macro-actions from sample trajectories by using two different compression algorithms: Huffman Coding and LZW.
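As a concrete illustration, the surrogate objective above can be estimated in a few lines. This is a minimal sketch, not the authors' implementation; `bits_needed` and `regularizer` are hypothetical stand-ins for the encoder-specific quantities B_{A′} and f_e:

```python
def surrogate_objective(trajectories, bits_needed, regularizer):
    """Estimate the surrogate objective of Eq. (2): the mean number of
    bits needed to encode the sampled trajectories, plus a penalty on
    the size of the extended action set."""
    mean_bits = sum(bits_needed(tau) for tau in trajectories) / len(trajectories)
    return mean_bits + regularizer

# Toy usage: a flat cost of 2 bits per action and no regularization.
cost = surrogate_objective([[0, 1], [0, 1, 2]], lambda t: 2 * len(t), 0.0)
# cost == 5.0  (mean of 4 and 6 bits)
```

In practice `bits_needed` would be derived from the Huffman or LZW codebooks described next.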
3.1 Compression via Huffman Coding
Huffman coding [6] is a compression algorithm that assigns variable-length codes to each symbol in its codebook. The technique requires all possible symbols and their respective probabilities to be known in advance, which is commonly handled as a pre-processing step. Assuming that all macros have a fixed length l, we first identify all action sequences of length l that occur in the set of sampled trajectories
and compute their respective probabilities (i.e., the frequency with
which they occur in the sampled trajectories). For example, given
a trajectory τ = {a_1, a_2, a_3, a_1, a_2} and maximum macro length l = 2, we identify candidate macros m_1 = {a_1, a_2}, m_2 = {a_2, a_3}, m_3 = {a_3, a_1}; their respective probabilities are p_1 = 0.5, p_2 = 0.25, p_3 = 0.25. Equipped with this data, we use Huffman coding as follows. Let M = {m_1, m_2, . . . , m_n} be the set of available symbols corresponding to macros and P = {p_1, p_2, . . . , p_n} be a set of probabilities, where p_i is the probability of macro m_i appearing in sampled trajectories for tasks taken from ϒ. Based on M and P, we create a codebook C(M, P) that generates a binary encoding c_i for each macro m_i, that is, a string of 0s and 1s that uniquely identifies m_i;
Huffman Coding ensures that the more frequent a macro is in the
sampled trajectories, the shorter its encoding will be. In order to keep the extended action set from including all possible length-l macros, we may wish to keep only the n most frequently occurring ones—i.e., the n macros with the shortest binary encodings. For added flexibility, we
also extend the codebook to include the primitive action set A and
ensure (by construction) that their corresponding codes are longer
than those used to represent macros. This is done to represent our
preference for using macros over primitives. A given trajectory τ_π (drawn from a policy π for task υ) can then be re-expressed as a sequence of primitives and macros τ_π = {c_1, c_2, . . . , c_T}, where c_t is the codeword of the action or macro performed at time-step t. This minimum-length representation of a given trajectory has length B_{A′}(τ_π, υ) and can be identified via a simple dynamic programming algorithm, which we omit here due to space constraints.
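The macro-counting and codebook-construction steps above can be sketched as follows (an illustrative simplification, not the authors' code; the heap-based routine tracks only codeword lengths, which suffice for computing encoding costs):

```python
import heapq
from collections import Counter
from itertools import count

def candidate_macros(trajectories, l):
    """Count every length-l action window in the sampled trajectories
    and return each window's empirical probability of occurrence."""
    windows = Counter(tuple(tau[i:i + l]) for tau in trajectories
                      for i in range(len(tau) - l + 1))
    total = sum(windows.values())
    return {m: n / total for m, n in windows.items()}

def huffman_code_lengths(probs):
    """Return the Huffman codeword length of each symbol: repeatedly
    merge the two least probable subtrees, adding one bit of depth to
    every symbol inside the merged subtrees."""
    tie = count()  # tie-breaker so heap entries never compare dicts
    heap = [(p, next(tie), {sym: 0}) for sym, p in probs.items()]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate case: a lone symbol gets one bit
        return {sym: 1 for sym in probs}
    while len(heap) > 1:
        p1, _, d1 = heapq.heappop(heap)
        p2, _, d2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (p1 + p2, next(tie), merged))
    return heap[0][2]

# The worked example from the text: tau = (a1, a2, a3, a1, a2), l = 2.
probs = candidate_macros([("a1", "a2", "a3", "a1", "a2")], 2)
# (a1, a2) occurs in 2 of the 4 windows, so its probability is 0.5,
# and Huffman coding gives it the shortest code (1 bit here).
```

The resulting code lengths can then feed the dynamic program that computes B_{A′}(τ, υ).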
Note that when using this approach, we may wish to regularize A′ so as not to include highly unlikely macros—i.e., macros with large code lengths. Let c_max be the length of the longest code in M and let f_e(A′) = λc_max, where λ is a regularization parameter; our objective is now given by:
Ĵ(A′) = (1/|T|) Σ_{τ∈T} B_{A′}(τ_{π∗}, υ) + λc_max.    (3)
Assuming an upper bound on n (the number of macros we wish to obtain) and on l (the length of those macros), we can identify the particular values of n and l that minimize Ĵ(A′) by executing an iterative search over n = 1, . . . , n_max and l = l_min, l_min+1, . . . , l_max. This allows for recurring macros (and the probability distribution with which they occur as part of optimal solutions to related tasks) to be identified. However, a pre-processing step, which builds a candidate codebook for each possible length l
and estimates the corresponding macros’ probabilities, is required
and can be computationally expensive:
THEOREM 3.1. Let l_min and l_max be the minimum and maximum allowed length for a macro, |T| the number of sampled trajectories, and S = max_k(|τ^k| − l_min), where |τ^k| is the length of the k-th sampled trajectory. Let l_∆ = l_max − l_min. The pre-processing step has complexity O(l_∆|T|S).
PROOF. Proof given in Appendix A. □
Furthermore, we can show an upper bound on the number of bits
required to represent sample trajectories if using Huffman Coding—
which is only one possible compression scheme, but not necessarily
the one that achieves the true maximum compression according to
the proposed objective:
THEOREM 3.2. Assume a set of m macros, m > 1, and corresponding probabilities {p_1, . . . , p_m}, where p_i ≤ p_{i+1} for i = 1, . . . , m − 1. If a codebook constructed via Huffman Coding is used, the number of bits B_t needed to represent a set of |T| sampled trajectories is upper bounded by:

c × min( ⌊log_ρ((ρ + 1)/(ρp_1 + p_2))⌋, m − 1 ),

where c = (Σ_{k=1}^{|T|} |τ^k|)/l and ρ = (1 + √5)/2.

PROOF. The proof follows trivially from [3]. □
Huffman Coding constructs a codebook by analyzing (during an offline pre-processing step) all candidate macros of length l that occur in the sampled data. This results in the construction of binary representations that take into account the frequency of each macro in the entire data set, but which is (as discussed above) computationally expensive. To address this problem we show, in the next section, how to use an on-line compression method: LZW.
3.2 Compression via LZW

LZW [24] is another compression technique that can be used to identify binary encodings for each macro and primitive action in order to compress trajectories. Unlike Huffman coding, however, it does not require pre-processing the data to build codes for each symbol. LZW assumes that the same number of bits will be used to represent all symbols in its codebook, and it builds the codebook incrementally as it processes a given message, string, or set of trajectories. For example, when encoding the English language, it is standard practice to set the limit on the number of bits per symbol to 8; LZW is then able to represent/store 256 symbols in its codebook. The first symbols to be included in the codebook would be, in this case, characters a through z. The remainder of the codebook is then built incrementally, as new character combinations are found in the message, string, or trajectory. For instance, if the symbol a is already in the codebook and the subsequence ae is found in the input, the symbol ae is added to the codebook. In this sense, LZW is a greedy method: it always tries to create new, longer symbols that match the patterns observed in the data. This process continues until no more symbols can be represented by the allotted number of bits. If we consider primitive actions in trajectories analogous to single characters in an alphabet, we can easily extend this compression method to identify a codebook containing macros. We first define a limit, b_limit, on the number of bits we allow the codebook to have, and initialize the codebook with primitive actions. As we compress a set of sampled trajectories with LZW, recurring macro-actions are naturally identified and added to the codebook. As is the case with Huffman coding, we are also interested in computing P, the set of probabilities (frequencies of occurrence) for a given set of macros M; this will be used later when using the identified macros to construct an exploration strategy. P can be estimated directly by counting (in the sampled trajectories) the number of matches that each symbol in the codebook has and normalizing those values into a valid probability distribution.

As before, we may wish to regularize A′ so as to keep it from becoming too large; this can be achieved by penalizing the objective function based on b_limit. Let the regularization term be given by f_e(A′) = λb_limit, where λ is the regularization parameter; our objective now becomes:

Ĵ(A′) = (1/|T|) Σ_{τ∈T} B_{A′}(τ_{π∗}, υ) + λb_limit.    (4)

Since LZW does not require an expensive pre-processing step, it is possible to efficiently iterate over different values b_limit = 1, . . . , b_max in order to find the value of b_limit that minimizes the above objective. The number of bits needed to encode a given set of |T| sampled trajectories can be upper bounded by the following theorem:

THEOREM 3.3. Given |A| primitive actions and the number N of symbols stored in the codebook constructed by LZW, the total number of bits B_t needed to represent |T| sampled trajectories is upper bounded by:

Σ_{k=1}^{|T|} ( |τ^k| − Σ_{j=1}^{i−1} |A|^j ) × b_limit,

where i = ⌈log_{|A|}(1 − N(1 − |A|))⌉.

PROOF. The proof is given in Appendix A. □
A downside of using LZW to identify recurring macros is that the macros that incrementally populate the codebook depend on the order in which trajectories are processed. Unlike Huffman coding, therefore, and depending on the maximum size that the codebook can have, it is possible that some frequently-occurring macros will be excluded from the extended action set.
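A minimal sketch of this codebook-growth and probability-estimation procedure (illustrative only, not the authors' implementation; symbols are stored as action tuples rather than bit strings):

```python
def lzw_macro_codebook(trajectories, b_limit):
    """Grow an LZW-style codebook over action trajectories: start from
    the primitive actions and add each newly seen longer pattern until
    the 2**b_limit symbol slots are exhausted. Match counts are kept
    per symbol so the probabilities P can be estimated afterwards."""
    primitives = sorted({a for tau in trajectories for a in tau})
    book = {(a,): 0 for a in primitives}   # symbol -> match count
    capacity = 2 ** b_limit
    for tau in trajectories:
        w = ()
        for a in tau:
            if w + (a,) in book:
                w = w + (a,)               # extend the current match
            else:
                book[w] += 1               # emit the current match
                if len(book) < capacity:
                    book[w + (a,)] = 0     # learn the new pattern
                w = (a,)
        if w:
            book[w] += 1
    total = sum(book.values())
    return {sym: n / total for sym, n in book.items()}

P = lzw_macro_codebook([("r", "r", "r", "d"), ("r", "r", "d", "d")], 3)
# Recurring patterns such as ("r", "r") enter the codebook.
```

Counting matches while compressing is exactly the normalization step described above for estimating P.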
4 GUIDING EXPLORATION VIA RECURRING MACROS
In the previous sections we showed how to use different compression methods to approximately solve the proposed minimization objective. The proposed methods, however, implicitly assumed the existence of a process for determining whether two particular sequences of actions (or macros) are equal. This was needed, for instance, for LZW to check whether the current set of symbols in the input matched any of the existing macros in its codebook. It was also needed to count how many times a given sequence of actions/macro occurred within sampled trajectories when estimating its probability. In this section we discuss how such comparisons can be
done in both discrete and continuous action spaces, and then propose
an exploration strategy that leverages the macros identified by our
method.
4.1 Macros for Discrete-Action Task Exploration
In the case of discrete action spaces, equivalence between macros can be easily established. Let k_1 and k_2 be the lengths of macros m_1 and m_2, respectively. m_1 and m_2 are equivalent iff k_1 = k_2 and m_{1,t} = m_{2,t} for t = 1, . . . , k_1, where m_{1,t} and m_{2,t} refer to the actions taken at time-step t in macros m_1 and m_2, respectively. We can use this equivalence relation along with the methods presented in the previous section, whenever a comparison between two macros is needed, in order to identify a given set of recurring macros M and its corresponding probability distribution P.
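For sequences of hashable primitive actions this test is essentially tuple equality; the sketch below (illustrative, not from the paper) makes the definition explicit:

```python
def macros_equivalent(m1, m2):
    """Two discrete-action macros are equivalent iff they have the
    same length and take the same action at every time-step."""
    return len(m1) == len(m2) and all(a == b for a, b in zip(m1, m2))

assert macros_equivalent(("u", "u", "r"), ("u", "u", "r"))
assert not macros_equivalent(("u", "u"), ("u", "u", "r"))
```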
4.2 Macros for Continuous-Action Task Exploration
Continuous action spaces present unique challenges. Unlike discrete action spaces, they are infinite (i.e., contain an infinite number of primitive actions); for this reason, it is unlikely that any two sequences of actions will be identical. To deal with this situation, we could discretize such a space, but this would raise a new question: what discretization resolution should be selected? If the resolution is too coarse, the agent might not be able to execute particular actions that are part of an optimal policy. On the other hand, if the resolution is too fine-grained, it becomes unlikely that any two sequences will repeat in subsequent executions of a policy, which implies that identifying recurring action sequences becomes nontrivial. In
this setting, we propose to check for the equivalence between two
continuous-action trajectories by measuring the distance between
them according to the Dynamic Time Warping (DTW) method. DTW
[2] is an algorithm developed for measuring the similarity between
two continuous signals that vary with time. It produces a mapping
from one signal to the other, as well as a distance estimate entailed
by such a mapping. In particular, given two macros m_1 and m_2 of length k, we use DTW to define the following equivalence relation:
m_1 = m_2 if dtw(m_1, m_2) < α, and m_1 ≠ m_2 otherwise,
where dtw(m_1, m_2) is the mapping distance between m_1 and m_2, as given by DTW, and α is an environment-dependent similarity
threshold. We assume that the continuous-action trajectories being
analyzed are sampled in time according to some fixed frequency dt
and are stored in a vector τ whose i-th element is the continuous
action executed at the i-th time step. When using DTW to identify
recurring continuous macros of length l via Huffman Coding or
LZW, we interpret l as the desired time duration of the macros;
these macros then correspond to action subsequences formed by l/dt
contiguous elements in τ . This implies that candidate continuous
macros can be extracted from τ and computationally represented as
finite vectors of continuous actions. Whenever two candidate macros
are compared and deemed equivalent according to the DTW criterion,
they are clustered and the mean of the trajectories in the cluster is
used to represent the macro itself. For example, assume there are
two clusters of trajectories deemed equivalent, c 1 and c 2 , each of
which initially contains only one macro: c 1 = {m 1 } and c 2 = {m 2 }.
Assume that the representative macro associated with a cluster c_i is the mean of the trajectories in that cluster, denoted by m̄_i. When analyzing whether a new candidate macro m_3 is equivalent to the existing cluster representatives, three things can happen:
(1) dtw(m_3, m̄_1) < dtw(m_3, m̄_2) and dtw(m_3, m̄_1) < α: in this case we update c_1 = {m_1, m_3} and m̄_1 = (m_1 + m_3)/|c_1|;
(2) dtw(m_3, m̄_2) < dtw(m_3, m̄_1) and dtw(m_3, m̄_2) < α: in this case we update c_2 = {m_2, m_3} and m̄_2 = (m_2 + m_3)/|c_2|;
(3) dtw(m_3, m̄_1) > α and dtw(m_3, m̄_2) > α: in this case we create a new cluster c_3 = {m_3}, with m̄_3 = m_3,

where (m_i + m_j)/|c| is computed element-wise, i.e., its t-th entry is (m_{i,t} + m_{j,t})/|c| for t = 1, . . . , k, and m_{i,t} refers to the action taken at time-step t in macro m_i. When deploying Huffman Coding or LZW to operate over continuous-action trajectories, each symbol added to a codebook will represent one of the mean macros associated with a given trajectory cluster.
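The DTW comparison and the running cluster-mean update can be sketched as follows (an illustrative implementation assuming 1-D actions and equal-length macros; in practice a library implementation of DTW [2] could be used instead):

```python
def dtw(m1, m2):
    """Dynamic-time-warping distance between two 1-D action
    sequences, computed with the classic O(|m1||m2|) recurrence."""
    inf = float("inf")
    D = [[inf] * (len(m2) + 1) for _ in range(len(m1) + 1)]
    D[0][0] = 0.0
    for i in range(1, len(m1) + 1):
        for j in range(1, len(m2) + 1):
            cost = abs(m1[i - 1] - m2[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[-1][-1]

def assign_to_cluster(m, clusters, alpha):
    """Add macro m to the nearest cluster (by DTW distance to the
    cluster mean) if that distance is below alpha; otherwise start a
    new cluster. Each cluster is a pair (members, mean_macro)."""
    if clusters:
        dists = [dtw(m, mean) for _, mean in clusters]
        i = min(range(len(clusters)), key=dists.__getitem__)
        if dists[i] < alpha:
            members, _ = clusters[i]
            members.append(m)
            mean = [sum(v) / len(members) for v in zip(*members)]
            clusters[i] = (members, mean)
            return clusters
    clusters.append(([m], list(m)))
    return clusters

clusters = []
for m in ([0.1, 0.2, 0.3], [0.12, 0.21, 0.33], [2.0, 2.0, 2.0]):
    clusters = assign_to_cluster(m, clusters, alpha=0.5)
# The first two macros merge into one cluster; the third starts its own.
```

The element-wise mean generalizes the two-member update (m_i + m_j)/|c| described above to clusters of any size.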
4.3 Modified ϵ-greedy Exploration with Macros
We now propose a simple way of using the identified set of macros M
(and corresponding probability distributions P) to design a modified
ϵ-greedy exploration strategy:
π_exp(s) = argmax_{u∈A′} Q(s, u) with probability (1 − ϵ), and π_exp(s) = m ∼ P otherwise.
That is, with probability 1 − ϵ the agent behaves greedily with respect to the extended action set A′, and with probability ϵ it draws a macro m according to the probability distribution P. Selecting actions in
this manner biases exploration towards macros that occurred often
in similar tasks; these macros are selected according to the estimated
probability distribution with which they were part of optimal solutions to tasks sampled from the domain. The use of recurring macros
to bias exploration allows the agent to more easily reach states that
would be unlikely to be reached by random or unguided exploration,
and that are often encountered when solving tasks drawn from the
domain.
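This selection rule can be sketched in a few lines (a hypothetical toy illustration, not the authors' code; the Q-table, action names, and `probs` are made up for the example):

```python
import random

def explore_action(s, Q, primitives, macros, probs, eps):
    """Modified epsilon-greedy: with prob. 1 - eps act greedily over
    the extended action set A' = primitives + macros; otherwise draw
    a macro from the estimated probability distribution P."""
    extended = primitives + macros
    if random.random() >= eps:
        return max(extended, key=lambda u: Q.get((s, u), 0.0))
    return random.choices(macros, weights=probs, k=1)[0]

# Hypothetical toy usage: the macro ("r", "r") has the highest Q-value.
Q = {(0, "u"): 1.0, (0, ("r", "r")): 2.0}
a = explore_action(0, Q, ["u", "d"], [("r", "r")], probs=[1.0], eps=0.1)
```

A chosen macro would then be executed open-loop, one primitive action per time-step, before control returns to the selection rule.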
5 EXPERIMENTS
We carried out two sets of experiments (in discrete and in continuous action spaces) in order to evaluate the benefits of using the
macros identified by our method. In the discrete case, we evaluated our method in a navigation task involving maps taken from
the video game Dragon’s Age, whereas in the continuous case we
used mountain car [15]. Sample trajectories from optimal policies
were collected by using Q-Learning [23] and Deterministic Policy
Gradient (DPG) [18] to solve different tasks drawn from each corresponding domain. Our proposed macro-based exploration strategy
was tested using both Huffman Coding and LZW on novel tasks
against an agent with no macros and an agent with a set of macros
randomly defined and sampled uniformly during exploration. Note
that our method could, if necessary, be coupled with orthogonal
exploration strategies that do not make use of knowledge gained
when previously solving related tasks, such as techniques that rely
on reward bonuses.
5.1 The Dragon's Age Domain
For this set of experiments, we trained an agent to find optimal
policies on 20 different mazes corresponding to challenging environments drawn from the Dragon’s Age video game [19]. These
mazes have sizes varying between 400 and approximately 30,000
states. The agent receives a reward of -1 at each time-step and a
reward of +10 once it reaches a predefined goal. The test set (mazes
where the identified macros will be evaluated) was composed of
another 10 different environments with goal locations and starting
points placed at random. The agent has at its disposal four different primitive actions: moving right (r), down (d), left (l) and up
(u). The starting and goal locations are placed at random on these
maps. We used tabular Q-learning to acquire policies for each task
instance. In this experiment, Huffman Coding identified 11 macros
of length 3, while LZW filled a 4-bit codebook containing 16 macros
with lengths varying between 1 and 5. Table 1 shows the four most
commonly-occurring macros identified by Huffman Coding (H) and
LZW (L), and also their marginal probabilities. In this scenario, it
is easy to interpret what the extracted macros achieve when used
as exploration biases: they discourage the agent from repeatedly
moving back and forth between a small number of states. Figure 3
shows a performance comparison for 4 different selected test environments (tasks) over 500 episodes. We compared the performance of
an agent equipped only with primitive actions (black), an agent with
9 randomly-defined macros (green), an agent with macros extracted
via Huffman Coding (red), and an agent with macros extracted via
LZW (blue). The mean performance over the entire test set is shown
in Figure 1.
5.2 Mountain Car Domain
In these experiments, we used the mountain car problem to define a family of related continuous-action tasks and evaluate our macro-identification method in such a setting. The state variables in this problem are given by a 4-dimensional vector representing the current position and velocity of the car in the x and y axes. The action
space is limited to one-dimensional real values between [a min , a max ]
representing a range of accelerations the agent is able to produce,
where a min < 0 and a max > 0. The agent receives a reward of -1 at
each time step and a reward of +100 if it reaches the goal position
at the top of the mountain. To create different learning tasks, we
defined several variations of the basic mountain car problem; these
consist in changing the goal position, the maximum velocity in the
positive and negative x-axis, and the maximum acceleration that
the agent is capable of producing. The agent was trained for 200 episodes on 6 training tasks, and another 8 task variations were constructed as testing tasks to evaluate the learned macros. In this
setting, we obtained 7 macros of length 5 by using Huffman Coding
and 16 macros of lengths ranging from 2 to 9 via LZW. Figure 5
shows the three most frequently-occurring macros identified by each
method.
Figure 4: Mean performance on test tasks (Mountain Car
domain).
Figure 1: Mean performance of different types of macros on
test tasks (Dragon’s Age domain).
Table 1: Four most commonly-occurring macros identified via Huffman Coding (H) and LZW (L) and their probabilities.

       H (macro)   H (prob)   L (macro)   L (prob)
m1     (r,r,r)     0.19       (d)         0.11
m2     (d,d,d)     0.19       (u)         0.07
m3     (u,u,u)     0.11       (r,r)       0.05
m4     (l,l,l)     0.09       (d,d,d)     0.03
Figure 6 shows the performance of our method on two selected
sample tasks in the test set—tasks whose dynamics and corresponding optimal policies are different from the ones the agent trained
on. This figure compares an agent learning with macros obtained
via Huffman Coding (red), LZW (blue), randomly-defined macros
(green), and no macros (black). Figure 4 shows the mean performance across 8 variations of the problem. These results demonstrate
that our agent is capable of exploiting the recurring action patterns
it has observed in previously-solved related tasks, and that simply
creating temporally-extended actions with no guidance (e.g. random
macros) can actually be detrimental to learning novel tasks.
6
CONCLUSION
We have introduced a data-driven method for identifying recurring
macros (frequently-occurring behaviors) observed as part of the solutions to tasks drawn from a family of related RL problems. Such
macros were then used to define an exploration strategy for accelerating learning on novel tasks. Our method is based on identifying
macros that allow for maximal compression of a set of sampled
Figure 2: Sample Dragon's Age maps ((a) Maze 1, (b) Maze 2, (c) Maze 3, (d) Maze 4) used for macro evaluation. Obstacles are dark green and traversable terrain is in light color.

Figure 3: Performance of different types of macros on selected test tasks ((a)–(d): Mazes 1–4; Dragon's Age domain).
trajectories drawn from optimal policies for related tasks. A property
of macros that allow for such a compression is that they correspond
to recurring action patterns, and thus capture structure in the action
sequences underlying the solutions to tasks from the domain. We cast this as an optimization problem and developed a
sample-based approximation to its solution. We performed a series
of experiments demonstrating the usefulness of the action patterns
discovered by the method, and provided evidence of the benefits
obtained by leveraging previous experiences to bias exploration on
novel tasks, compared to using task-agnostic exploration methods.
As future work, we would like to extend our method to closed-loop
options; this could be achieved by identifying regions in the state
space where stochastic policies for different tasks share similar action probabilities, or by optimizing an exploration policy directly
(i.e., one that in expectation leads to rapid acquisition of near-optimal
policies on tasks drawn from a common distribution or domain). In
this work we focused on how an agent should behave when exploring by exploiting prior knowledge about related tasks. A natural
follow-up question is when the agent should explore, and whether we can design an experience-based approach to determining this
as well. Finally, we believe that exploration methods that leverage
previous experiences on related tasks may be useful in traditional
multitask learning problems, in which policies for different tasks are
transferred or reused.
REFERENCES
[1] Andrew G. Barto and Sridhar Mahadevan. 2003. Recent Advances in Hierarchical
Reinforcement Learning. Discrete Event Dynamic Systems 13, 4 (2003), 341–379.
[2] Donald J. Berndt and James Clifford. 1994. Using Dynamic Time Warping to
Find Patterns in Time Series. In Proceedings of the 3rd International Conference
on Knowledge Discovery and Data Mining (AAAIWS’94). AAAI Press, 359–370.
Figure 5: Sample continuous-action macros identified via Huffman Coding and LZW (mountain car domain). (a) Macros extracted through Huffman Coding; (b) macros extracted via LZW. Probabilities are shown inset in the legends.

Figure 6: Performance with no macros (black), random macros (green), and learned macros (blue, red) on selected mountain car tasks. (a) Mountain Car Task 1; (b) Mountain Car Task 2.
[3] Michael Buro. 1993. On the Maximum Length of Huffman Codes. Inform.
Process. Lett. 45 (1993), 219–223.
[4] Justin Fu, John D. Co-Reyes, and Sergey Levine. 2017. EX2: Exploration with
Exemplar Models for Deep Reinforcement Learning. CoRR abs/1703.01260
(2017). http://arxiv.org/abs/1703.01260
[5] Abhishek Gupta, Coline Devin, YuXuan Liu, Pieter Abbeel, and Sergey Levine.
2017. Learning Invariant Feature Spaces to Transfer Skills with Reinforcement
Learning. In Proceedings of the International Conference on Learning Representations (ICLR-2017).
[6] David A. Huffman. 1952. A Method for the Construction of MinimumRedundancy Codes. Proceedings of the Institute of Radio Engineers 40, 9 (September 1952), 1098–1101.
[7] George Konidaris and Andrew Barto. 2006. Autonomous shaping: knowledge
transfer in reinforcement learning. In Proceedings of the 23rd International Conference on Machine Learning (ICML-2006). 489–496.
[8] George Konidaris and Andrew Barto. 2007. Building portable options: Skill transfer in reinforcement learning. In Proceedings of the 20th International Joint Conference on Artificial Intelligence. 895–900. http://citeseerx.ist.psu.edu/viewdoc/
summary?doi=10.1.1.121.9726
[9] George Konidaris and Andrew G. Barto. 2007. Building Portable Options: Skill
Transfer in Reinforcement Learning. In Proceedings of the 20th International
Joint Conference on Artificial Intelligence (IJCAI-2007). 895–900.
[10] Ramnandan Krishnamurthy, Aravind S. Lakshminarayanan, Peeyush Kumar, and
Balaraman Ravindran. 2016. Hierarchical Reinforcement Learning using SpatioTemporal Abstractions and Deep Neural Networks. CoRR (2016).
[11] Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom
Erez, Yuval Tassa, David Silver, and Daan Wierstra. 2015. Continuous control
with deep reinforcement learning. CoRR (2015).
[12] Marlos C. Machado, Marc G. Bellemare, and Michael Bowling. 2017. A Laplacian
Framework for Option Discovery in Reinforcement Learning. CoRR (2017).
[13] Sridhar Mahadevan. 2005. Proto-value Functions: Developmental Reinforcement Learning. In Proceedings of the 22nd International Conference on Machine Learning (ICML-2005). ACM, 553–560.
[14] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis
Antonoglou, Daan Wierstra, and Martin Riedmiller. 2013. Playing Atari with
Deep Reinforcement Learning. (2013).
[15] Andrew William Moore. 1990. Efficient memory-based learning for robot control.
Technical Report. University of Cambridge, Computer Laboratory.
[16] Emilio Parisotto, Jimmy Ba, and Ruslan Salakhutdinov. 2017. Actor-Mimic:
Deep Multitask and Transfer Reinforcement Learning. In Proceedings of the
International Conference on Learning Representations (ICLR-2017).
[17] Doina Precup. 2000. Temporal Abstraction in Reinforcement Learning. Ph.D.
Dissertation. (2000).
[18] David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. 2014. Deterministic Policy Gradient Algorithms. In Proceedings
of the 31st International Conference on Machine Learning (ICML-2014), Tony
Jebara and Eric P. Xing (Eds.). JMLR Workshop and Conference Proceedings,
387–395.
[19] N. Sturtevant. 2012. Benchmarks for Grid-Based Pathfinding. Transactions on
Computational Intelligence and AI in Games 4, 2 (2012), 144 – 148.
[20] Richard S. Sutton, Doina Precup, and Satinder P. Singh. 1999. Between MDPs and
Semi-MDPs: A Framework for Temporal Abstraction in Reinforcement Learning.
Artificial Intelligence 112, 1-2 (1999), 181–211.
[21] Haoran Tang, Rein Houthooft, Davis Foote, Adam Stooke, Xi Chen, Yan Duan,
John Schulman, Filip De Turck, and Pieter Abbeel. 2016. Exploration: A Study of
Count-Based Exploration for Deep Reinforcement Learning. CoRR (2016).
[22] Matthew E. Taylor and Peter Stone. 2009. Transfer Learning for Reinforcement
Learning Domains: A Survey. Journal of Machine Learning Research 10 (Dec.
2009), 1633–1685.
[23] Christopher J. C. H. Watkins and Peter Dayan. 1992. Q-learning. In Machine
Learning. 279–292.
[24] T. A. Welch. 1984. A Technique for High-Performance Data Compression. Computer 17, 6 (June 1984), 8–19. https://doi.org/10.1109/MC.1984.1659158
Identifying Reusable Macros for Efficient Exploration via Policy Compression
AAMAS'18, 2018

A APPENDIX

A.1 Proof of Theorem 3.1

Given a specific macro length l, identifying all length-l action subsequences in a particular trajectory τ with length |τ| takes |τ| − l steps. Let τ_max be the longest trajectory in our sampled data. Since there are |T| trajectories and given that |τ^k| ≤ |τ_max| for k = 1, ..., |T|, identifying all candidate macros of length l that occur in the sampled data takes Σ_{k=1}^{|T|} (|τ^k| − l) = O(|T|(|τ_max| − l)) steps. The quantity |τ| − l is at its maximum when l = l_min, and the process of constructing codebooks for different candidate macro lengths l needs to be repeated l_Δ = l_max − l_min times. Therefore, the complete pre-processing process has complexity O(l_Δ |T| S).

A.2 Proof of Theorem 3.3

To prove the upper bound on the number of bits used by LZW to encode a set of |T| trajectories, we first define a lower bound on the maximum length of a macro represented by a codebook of N symbols and a primitive action set of size |A|. Assume, as an example, that A = {a_1, a_2} and N = 2. In this case, the longest macro considering all possible codebooks of length N = 2 would be at least of length 1, and the codebook C would be given by C = {a_1, a_2}. If N = 6, a lower bound on the largest possible macro over all possible codebooks of such size would be of length 4; and the codebook would be given by C = {a_1, a_2, {a_1, a_1}, {a_1, a_2}, {a_2, a_1}, {a_2, a_2}}. In general, determining such a lower bound on the maximum-length macro given a codebook of N symbols can be posed as an optimization objective:

    max_i  i + 1    s.t.  N − Σ_{j=1}^{i} |A|^j ≥ 0

First, note that the summation in the constraint above corresponds to a geometric series whose value is given by Σ_{j=1}^{i} |A|^j = |A|(1 − |A|^i) / (1 − |A|). Also note that the boundary of the constraint (i.e., the smallest value of i that does not violate it) can be found when N − Σ_{j=1}^{i} |A|^j = 0. Combining these two we obtain:

    0 = N − |A|(1 − |A|^i) / (1 − |A|)
    |A|^i = 1 − N (1 − |A|) / |A|
    i = log_{|A|} (1 − N (1 − |A|) / |A|)

Since the maximization requires returning the largest value of i that does not violate the constraint, plus one, we take the ceiling of the quantity above:

    i = ⌈ log_{|A|} (1 − N (1 − |A|) / |A|) ⌉

Note that in the worst case (in terms of the number of bits needed to encode trajectories) we could use macros of length greater than 1 just once, and encode the remaining parts of the trajectories using only primitive actions. The total number of symbols used, in this case, would then be given by:

    Σ_{k=1}^{|T|} ( |τ^k| − Σ_{j=1}^{i−1} |A|^j )

where |T| is the number of trajectories sampled and |τ^k| is the number of primitive actions in the k-th trajectory. Given that all symbols in the codebook have a binary encoding of length b_limit, the total number of bits, B_t, needed to encode all |T| sampled trajectories is upper bounded by:

    B_t ≤ ( Σ_{k=1}^{|T|} ( |τ^k| − Σ_{j=1}^{i−1} |A|^j ) ) × b_limit

where i = ⌈ log_{|A|} (1 − N (1 − |A|) / |A|) ⌉.
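The closed-form manipulations above can be sanity-checked numerically (an illustrative sketch, not part of the paper; all function names are ours):

```python
import math

def geometric_sum(A, i):
    """Direct sum of |A|^j for j = 1..i: the number of macros of length <= i."""
    return sum(A ** j for j in range(1, i + 1))

def closed_form(A, i):
    """Closed form |A|(1 - |A|^i) / (1 - |A|) of the same geometric series."""
    return A * (1 - A ** i) / (1 - A)

def macro_length_bound(A, N):
    """The bound i = ceil(log_|A|(1 - N(1 - |A|) / |A|)) from the derivation."""
    return math.ceil(math.log(1 - N * (1 - A) / A, A))

# The closed form agrees with the direct sum.
assert all(geometric_sum(2, i) == closed_form(2, i) for i in range(1, 10))

# Boundary check: with |A| = 2 and N = 6 the constraint is tight at i = 2
# (2 macros of length 1 plus 4 of length 2 fill the codebook exactly).
assert geometric_sum(2, 2) == 6
assert macro_length_bound(2, 6) == 2

# Worst-case bit count B_t for toy trajectory lengths, as in the final bound.
def bit_bound(traj_lengths, A, N, b_limit):
    i = macro_length_bound(A, N)
    overhead = geometric_sum(A, i - 1)
    return sum(t - overhead for t in traj_lengths) * b_limit

assert bit_bound([10, 8], A=2, N=6, b_limit=5) == 70  # (10-2 + 8-2) * 5
```

The toy trajectory lengths and b_limit value are arbitrary; the check only exercises the algebra of the bound.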
CAT(0) Groups and Acylindrical Hyperbolicity
arXiv:1610.08005v2 [] 9 Jan 2017
Burns Healy
Abstract
In this paper we take a result stating that rank one elements of a CAT(0) group are
generalized loxodromics and expand it to show the reverse implication. This gives us, in
particular, a complete classification of the intersection of CAT(0) and acylindrically hyperbolic groups, and demonstrates exactly which elements are generalized loxodromics.
We go on to apply this classification to the braid groups and Out(W_n) in order to learn about their potential CAT(0) structures, ruling out the cases of Euclidean buildings and symmetric spaces.
1 Introduction
As spaces, CAT(0) metric spaces have non-positive curvature, while hyperbolicity represents the property of strictly negative curvature. We will restrict our attention to
the groups that act geometrically on them. Generalizations of hyperbolicity, such as
relative hyperbolicity and acylindrical hyperbolicity, relax the strictness of this action.
Such a description leads us to the natural question: in what sense can these generalized hyperbolic metric actions also meet our criteria of non-positive curvature? A
more direct question: where does the class of CAT(0) groups intersect those which
are acylindrically hyperbolic? This question has generated a lot of interest, leading to
recent results in the category of CAT(0) cube groups and spaces in [CM16] and [Gen16].
This question is already answered in part in [Sis] and [Sis16], where Sisto gives results which tell us that rank one isometries of CAT(0) groups are generalized loxodromics, which implies that the groups containing them are acylindrically hyperbolic. We go one step further and make the claim that these classes of elements completely coincide.
Theorem 2.7 Let G be a group, which is not virtually cyclic, acting geometrically
on a CAT(0) space X. Then G is acylindrically hyperbolic if and only if it contains an
element g which acts as a rank one isometry on X.
This classification will be useful to us in order to help classify types of groups
suspected to be CAT(0). We begin by looking at the braid groups. Charney originally posed the question: are all finite type Artin groups CAT(0)? This is still an open question for the braid groups, although the answer is known for small index. For n ≤ 3, it is easy to see by looking at the algebraic description of the group. The cases of n = 4 and 5 were proved by Brady and McCammond in [BM10], and the n = 6 case was proved by Haettel, Kielak, and Schwer in [HKS16]. It was also proved by Huang, Jankiewicz and Przytycki that the 4-strand braid group, though CAT(0), is not CAT(0)-cubed [HJP16].
Our contribution here will be to combine Theorem 2.7 with a result of Bowditch
(see 3.1) to show:
Theorem 3.2 Let B_n be the braid group with n ≥ 4, and suppose that X is a CAT(0) space on which B_n acts geometrically. Then Y is a rank one CAT(0) space in the natural splitting X = Y × R. In particular, B_n does not act on a Euclidean building.
For more information about CAT(0) buildings, the reader is referred to [Dav98].
The fact that these two classes of CAT(0) spaces are disjoint is explored at length in
[BB08]. In particular, in all Euclidean buildings and symmetric spaces, any bi-infinite
geodesic bounds a half-flat.
There has also been some interest in whether the automorphism groups of universal right-angled Coxeter groups admit a geometric action on a CAT(0) space; see [Cun15]. Using a method similar to the one that shows the acylindrical hyperbolicity of Out(F_n), we have the following.

Theorem 4.6 Out(W_n) is acylindrically hyperbolic for n ≥ 4.
Then, in the same way as braid groups, we obtain a relevant CAT(0) result as a
corollary.
Theorem 4.7 Suppose Out(W_n) acts geometrically on a CAT(0) space X. Then X contains a rank one geodesic. In particular, Out(W_n) cannot act geometrically on a Euclidean building.
The author would like to thank his Ph.D. advisor, Genevieve Walsh, for her continued support, as well as Kim Ruane and Adam Piggott for helpful suggestions and
comments.
2 Classification
We begin with some definitions.
Definition 2.1. A metric space action G ↷ S is called acylindrical if for every ε > 0 there exist R, N > 0 such that for any two points x, y ∈ S with d(x, y) ≥ R, the set

    {g ∈ G | d(x, g.x) ≤ ε, d(y, g.y) ≤ ε}

has cardinality less than N.
We quickly note here that this property does not imply properness. This still allows
individual points to have infinite stabilizers.
Definition 2.2. A group G is called acylindrically hyperbolic if it admits an acylindrical
action on a hyperbolic space X which is not elementary; that is, the group has a limit
set inside the boundary of the space X of cardinality strictly greater than 2.
It’s worth pointing out that this means we do not wish to consider groups which are
finite or virtually cyclic as being acylindrically hyperbolic, despite the fact that they
are (elementary) hyperbolic. These are the only hyperbolic groups we exclude, as any
non-elementary hyperbolic group will satisfy this requirement by the natural action on its Cayley graph.
Remark 2.3. [Osi16] For any acylindrical group action on a hyperbolic space, no elements act as parabolics. This means for such an action, the effect of any particular
group element is either loxodromic or elliptic.
Definition 2.4. Let G be an acylindrically hyperbolic group. An element g ∈ G is called a generalized loxodromic if there is an acylindrical action G ↷ S, for S a hyperbolic metric space, such that g acts as a loxodromic.
The status of being a generalized loxodromic is a group theoretic property. While
one qualifying action might have a particular element acting loxodromically, in another
it may act elliptically. The existence of a generalized loxodromic can be taken as an
alternate definition of acylindrical hyperbolicity.
Osin asks under what conditions an acylindrically hyperbolic group might have a universal acylindrical action, which is one in which all generalized loxodromics act as loxodromics. Abbott gives an example of a finitely generated (but not finitely presented) group with no such action in [Abb16].
Before getting to our main theorem, we will recall a few definitions.
Definition 2.5. [BH99] Let X be a geodesic metric space, and ∆ a geodesic triangle. Let ∆̄ be a comparison triangle in E^2, i.e. a geodesic triangle with the same side lengths. We say that X is CAT(0) if, for all x, y ∈ ∆ and all comparison points x̄, ȳ ∈ ∆̄,

    d_X(x, y) ≤ d_{E^2}(x̄, ȳ),

and this holds for all ∆.
Definition 2.6. [CS15] A geodesic γ in a CAT(0) space X is said to be contracting if there exists a constant D > 0 such that for all x, y ∈ X,

    d_X(x, y) < d_X(x, π_γ(x))  ⟹  d_X(π_γ(x), π_γ(y)) < D.

Equivalently, any metric ball B that doesn't intersect γ projects to a segment of length < 2D on γ.
Theorem 2.7. Let G be a group, which is not virtually cyclic, acting geometrically on
a CAT(0) space X. Then G is acylindrically hyperbolic if and only if it contains an
element g which acts as a rank one isometry on X. Furthermore, the set of generalized
loxodromics is precisely the set of rank one elements.
Proof. (⇐) This follows from Theorem 5.4 in [BF09], where it is stated that a geodesic in a CAT(0) space is contracting exactly when it fails to bound a half flat, meaning rank one geodesics are contracting. Next, contracting elements satisfy a property labelled weakly contracting, shown in [Sis]. Sisto goes on to prove in Theorem 1.6 that this property is strong enough to show that such an element is contained in a virtually cyclic subgroup, labelled E(g), which is hyperbolically embedded in the group. This is one of four equivalent conditions for being a generalized loxodromic, listed in Theorem 1.4 of [Osi16].
(⇒) If G is acylindrically hyperbolic, then it contains at least one generalized loxodromic. This is because we can take the action on a hyperbolic space guaranteed by the definition of acylindrical hyperbolicity and, knowing that it is devoid of parabolic elements, invoke the non-elementary condition on the action to verify that at least one element must act as a loxodromic. Call this element g. We know by a result of Sisto that this element is Morse in G [Sis16]. An equivalence in the setting of CAT(0) groups, proved in [CS15], says that a (quasi-)geodesic in a CAT(0) space is contracting if and only if it is Morse, and if and only if it is rank one. The geometric nature of our action, which says our space is quasi-isometric to our group, guarantees that because our group element is Morse, its axes are as well. Therefore our element g has axes which are rank one, i.e. it acts as a rank one isometry.
This equivalence allows us to restate the Rank Rigidity Conjecture for CAT(0) groups, originally posited by Ballmann and Buyalo.
Conjecture 2.8 (Rank Rigidity Conjecture [BB08]). Let X be a locally compact geodesically complete CAT(0) space and G a discrete group acting geometrically on X. If X is irreducible, then either:
• X is a Euclidean building or a higher rank symmetric space, or
• G is acylindrically hyperbolic.
3 Braid Groups
Braid groups, which we will denote here by Bn , are an important example of groups
that are intermediate between hyperbolic and flat. They are not hyperbolic (nor even
relatively hyperbolic); indeed they have a number of flats. However, they do have
free subgroups and many properties shared by groups which are non-positively curved.
The following is obtained by Bowditch in [Bow08] by noting that B̄_n := B_n / Z(B_n) represents the mapping class group of a punctured surface.

Theorem 3.1. [Bow08] Let n ≥ 4. The group B̄_n := B_n / Z(B_n) is acylindrically hyperbolic.
A stronger statement holds. A result from [CW16] shows that all Artin-Tits groups
of spherical type, otherwise known as generalized braid groups, are acylindrically hyperbolic, after modding by their center.
Now this can be combined with 2.7 to show:
Theorem 3.2. Let B_n be the braid group with n ≥ 4, and suppose that X is a CAT(0) space on which B_n acts geometrically. Then Y is a rank one CAT(0) space in the natural splitting X = Y × R. In particular, B_n does not act on a Euclidean building.
This holds for all Artin-Tits groups of spherical type: by combining Theorem 2.7 with a result of Calvez and Wiest [CW16], we get that any CAT(0) space acted on geometrically by an Artin-Tits group of spherical type must be rank one.
To illustrate this theorem, we give an example from an explicit CAT(0) complex. In [Bra00], Brady constructs a 2-complex on which B_4/Z(B_4) acts, made up of equilateral triangles, which is CAT(0) in the standard piecewise Euclidean metric. This is obtained by 'projecting' down the infinite cyclic factor corresponding to

    Z(B_4) ≅ Z = ⟨σ_1 σ_2 σ_3⟩.
The link of any vertex in this projected complex looks as in Figure 1.
[Figure: a circuit of link vertices labelled a^+, b^-, c^+, c^-, d^+, a^-, f^+, d^-, e^+, e^-, c^+, b^+, f^-, a^+.]
Figure 1: An Arbitrary Link
Importantly, we note that the top right and bottom left vertices of this link are identified, as well as the top left and bottom right. We recognize this as a 1-skeleton of a CW-complex homeomorphic to a Möbius strip. Because this complex is CAT(0), this link has a standard CAT(1) metric on it [BH99]. Because the corresponding triangles are equilateral, this metric assigns each edge in this link a length of π/3. We now examine the vertices labelled d^- and b^-, specifically noting that

    d_lk(b^-, d^-) = 4π/3 > π.

Because this angle is larger than π, the path obtained in the space by concatenating the paths from x_0 to those points in the link is a local geodesic. We note this path is also a portion of an axis for the group element bd^{-1}. (Note: one of these elements must have an inverse, as both vertices have a plus, which means that group element takes us towards x_0.)
This space enjoys the property that all vertices have isometric links, so if we look at the link of the vertex b.x_0, we see the path the axis of bd^{-1} takes is in through b^+ and out through d^-, which also have distance greater than π. This implies our axis, which we will label γ := γ_{bd^{-1}}, is also a local geodesic at this vertex. Combining this with the fact that edges of triangles in our space are local geodesics, because the metric is CAT(0) and locally Euclidean, this tells us that γ is a geodesic axis for the action of the group element bd^{-1}.

Indeed this axis is rank one, as [BB08] proves the Rank Rigidity Conjecture in the case where the dimension of the complex is 2. Because X is two dimensional, if γ bounded a space isometric to E^2_+, then any vertices of links it went through would have to have diameter at most π, as the link of that vertex would contain faces forming a half-disk portion of the copy of E^2_+. As this axis is rank one, we know that our group element is a generalized loxodromic, by Theorem 2.7.
So what does it look like? If we translate bd^{-1} into our standard generating set, noting that σ_1 and σ_3 commute, we get the element

    σ_2 σ_1 σ_3 σ_2^{-1} σ_1^{-1} σ_3^{-1}

Figure 2: The element bd^{-1}
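A quick sanity check, outside the paper: under the standard projection B_4 → S_4 sending σ_i to the transposition (i, i+1), which kills the center, this word maps to a nontrivial permutation, so bd^{-1} is indeed a nontrivial element of B_4/Z(B_4). A minimal sketch:

```python
def transposition(i, n=4):
    """Image of the braid generator sigma_i under the projection B_n -> S_n."""
    p = {k: k for k in range(1, n + 1)}
    p[i], p[i + 1] = i + 1, i
    return p

def compose(p, q):
    """Permutation composition: apply p first, then q."""
    return {k: q[p[k]] for k in p}

# sigma_2 sigma_1 sigma_3 sigma_2^-1 sigma_1^-1 sigma_3^-1: each generator and
# its inverse project to the same transposition, so the image is the product
# (2 3)(1 2)(3 4)(2 3)(1 2)(3 4), applied left to right.
image = {k: k for k in range(1, 5)}
for i in [2, 1, 3, 2, 1, 3]:
    image = compose(image, transposition(i))

assert image == {1: 4, 2: 3, 3: 2, 4: 1}   # the involution (1 4)(2 3)
assert any(image[k] != k for k in image)   # nontrivial, as claimed
```

Since the full twist generating Z(B_4) projects to the identity permutation, a nontrivial image certifies nontriviality in the central quotient.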
4 Out(W_n)

Assume here that n ≥ 4. We will use the conventions in this section that

    F_n = ⟨x_i | −⟩,  W_n = ⟨w_i | w_i^2⟩.

As w_i^{-1} = w_i, we will suppress inverse notation when working in W_n.
We will make use of the following result, so it is helpful to list it here.

Theorem 4.1. [GPR12]

    Aut(W_n) = Aut⁰(W_n) ⋊ Σ_n = (W_n ⋊ Out⁰(W_n)) ⋊ Σ_n,

where the W_n factor is the whole of Inn(W_n), Aut⁰ is generated by the partial conjugations, Out⁰ is the image of Aut⁰ after quotienting out the inner automorphisms, and Σ_n is the full symmetric group on n letters, corresponding to permuting the generators.
Lemma 4.2. Consider the subgroup W_n > G := ⟨w_1 w_i | i ∈ {2, ..., n}⟩. Then G ≅ F_{n−1}.

Proof. It is clear that none of the n − 1 generators is redundant. Therefore, we are reduced to demonstrating that there are no relations. We begin by noting that

    (w_1 w_i)^{-1} = w_i w_1.

Next, note that cancellation can only happen in the form w_i w_i = 1, as these are the only relators in W_n. This is only the case if we have w_i w_1 w_1 w_j, in which case this is equal to w_i w_j, which is irreducible and simply another expression for the element x_{i−1}^{-1} x_{j−1}, or w_1 w_j w_j w_1, which translates to x_{j−1} x_{j−1}^{-1}, i.e. a free reduction in F_{n−1}.
This gives a natural map ι : Aut(W_n) → Aut(F_{n−1}), because an automorphism of W_n will induce a map on the elements w_1 w_i, noting that this subgroup is characteristic. The subgroup G above is sometimes referred to as the 'even subgroup'. Because all cancellation happens in pairs, it is well defined to speak of words of even length (including the empty word). Using this characterization, it is easy to see that this subgroup must be characteristic. Given our decomposition of Aut(W_n), we see that generators come in the form of graph automorphisms and partial conjugations. In the former case, all w_i are sent to words of length one, and in the latter, w_i are sent to either words of length one or three. In either case, after possible cancellation in pairs, words of even length will remain of even length. Because this subgroup consists precisely of the words of even length, a property preserved under automorphisms of W_n, it is a characteristic subgroup.
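Lemma 4.2 can be spot-checked mechanically for small word lengths (an illustrative sketch, not part of the paper; the encoding below is ours): using the normal form for W_n as a free product of copies of Z/2, i.e. reduced words with no adjacent repeated letter, distinct reduced words in the x_i give distinct elements of W_4.

```python
def reduce_w(word):
    # Normal form in W_n = Z/2 * ... * Z/2: repeatedly cancel adjacent equal
    # letters (w w = 1). A single stack pass suffices.
    out = []
    for g in word:
        if out and out[-1] == g:
            out.pop()
        else:
            out.append(g)
    return tuple(out)

def to_W(free_word):
    # x_j -> w_1 w_{j+1}, x_j^-1 -> w_{j+1} w_1; here 0 stands for w_1 and
    # j > 0 stands for w_{j+1}.
    w = []
    for j, sign in free_word:
        w.extend((0, j) if sign == 1 else (j, 0))
    return reduce_w(w)

# Example from the text: w_i w_1 w_1 w_j reduces to w_i w_j.
assert reduce_w((2, 0, 0, 3)) == (2, 3)

# Enumerate all freely reduced words of length <= 3 in F_3 = <x_1, x_2, x_3>.
letters = [(j, s) for j in (1, 2, 3) for s in (1, -1)]
words, frontier = [()], [()]
for _ in range(3):
    frontier = [w + (l,) for w in frontier for l in letters
                if not (w and w[-1] == (l[0], -l[1]))]
    words += frontier

# 1 + 6 + 6*5 + 6*5*5 = 187 reduced words, and their images in W_4 are all
# distinct: no relation among the w_1 w_i appears up to this length.
assert len(words) == 187
assert len({to_W(w) for w in words}) == 187
```

This is of course only evidence for bounded word length; the lemma's proof handles the general case.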
Lemma 4.3. This map is injective.

Proof. Let φ : W_n → W_n be an automorphism that fixes pointwise the set {w_1 w_i}. Our goal is then to show that it must be the case that φ = id. We do this by considering φ on each generator. If φ fixes each generator, it is necessarily the identity.

• Suppose φ(w_1) = z ≠ w_1, and assume z is fully reduced. We know that z ≠ id, as φ is an automorphism. Because φ fixes w_1 w_i, it must be the case that for all i, φ(w_i) = z^{-1} w_1 w_i. We invoke the fact that Aut(W_n) is generated only by conjugations and permutations, following from the decomposition above, to observe that the word we map to must begin and end with the same letter after being reduced. If z^{-1} does not end with the letter w_1 (meaning z does not start with the letter w_1), then z^{-1} must start with the letter w_i for each i. This is a contradiction.

Therefore we assume z starts with the letter w_1. The last letter of z will be the first letter of z^{-1} w_1 w_i, which again, because Aut(W_n) has no transvections, must be either w_i or empty. Because the former is impossible for all i simultaneously, it must be that it is empty. This tells us that z = w_1, which contradicts our assumption.
• Now suppose φ(w_i) = z_i ≠ w_i for some i ≠ 1. Then because φ(w_1 w_i) = w_1 w_i, we get that φ(w_1) = w_1 z_i^{-1}. This must be true for every i, meaning that for all i ≠ 1, φ(w_i) = z_i = z_j = φ(w_j). This contradicts our assumption that φ is an automorphism, because the image of the generators is no longer a generating set.
Due to the injectivity of this map, we will label it

    ι : Aut(W_n) ↪ Aut(F_{n−1})
Because our goal is to say something about Out(W_n), we look at what happens to elements of Inn(W_n). While it is not quite true that they map into Inn(F_{n−1}), we find this is almost the case.

Lemma 4.4. Let r ∈ Aut(F_{n−1}) be defined by r(x_i) = x_i^{-1}. Then

    ι(Inn(W_n)) ⊂ Inn(F_{n−1}) ⋊ ⟨r⟩.
Proof. Recall x_j := im(w_1 w_{j+1}). By abuse of notation, and because the map is injective, we will switch between these words in our computations. Denote by C_{x_i} conjugation by x_i in Aut(F_{n−1}).

We will consider the effect that an inner automorphism of W_n will have on {w_1 w_i}, recalling that this set of automorphisms is generated by elementary conjugations. These will come in two flavors, denoting by C^j the inner automorphism conjugating all generating elements by w_j:

• C^1(w_1 w_j) = w_j w_1 = (w_1 w_j)^{-1}. Therefore C^1 = r.
• C^i, i ≠ 1:

    C^i(w_1 w_j) = w_i (w_1 w_j) w_i
                 = w_i w_1 (w_1 w_1 w_j w_1) w_1 w_i
                 = w_i w_1 (w_j w_1) w_1 w_i
                 = w_i w_1 (w_1 w_j)^{-1} w_1 w_i
                 = (w_1 w_i)^{-1} (w_1 w_j)^{-1} (w_i w_1)^{-1}
                 = (r ∘ C_{x_{i−1}})(x_{j−1})
Finally, this gives us the following fact:

    ῑ(Out(W_n)) ⊂ Aut(F_{n−1}) / (Inn(F_{n−1}) ⋊ ⟨r⟩) ⊂ Out(F_{n−1}) / ⟨⟨R⟩⟩,

where R = ⟨r | r^2⟩ ≅ Z/2Z. This relationship is summarized in the following diagram, where q_r represents the quotient map killing the normal closure of R:
[Diagram: the horizontal maps are ι : Aut(W_n) → Aut(F_{n−1}) and the induced ῑ : Out(W_n) → Out(F_{n−1})/⟨⟨R⟩⟩; the vertical maps are the quotients q : Aut(W_n) → Out(W_n), q̄ : Aut(F_{n−1}) → Out(F_{n−1}), and q_r : Out(F_{n−1}) → Out(F_{n−1})/⟨⟨R⟩⟩.]
Figure 3: Diagrammatic Relations of the Groups
Lemma 4.5. For any φ ∈ Aut(W_n),

    [r, ι(φ)] ∈ Inn(F_{n−1}).
Proof. We break this into cases, depending on what kind of automorphism φ is. We need only consider the case where φ is a generator, because the inner automorphisms form a normal subgroup. To demonstrate this for a normal subgroup K ⊴ G and an element r ∈ G, assume that a, b ∈ G are such that [r, a] ∈ K and [r, b] ∈ K. We'd like to show [r, ab] ∈ K. Then, using r^2 = 1:

    [r, ab] ∈ K
    ⇕
    r a b r b^{-1} a^{-1} ∈ K
    ⇕   (write b r b^{-1} = k_1 r, where k_1 := b r b^{-1} r = [r, b]^{-1} ∈ K)
    r a k_1 r a^{-1} ∈ K
    ⇕   (conjugate by a^{-1}; K is normal)
    a^{-1} r a k_1 r ∈ K
    ⇕   (r k_1 r ∈ K, again by normality)
    a^{-1} r a r ∈ K
    ⇕   (conjugate by a)
    r a r a^{-1} ∈ K

and we note the last line is [r, a] ∈ K, which is true by hypothesis. Now, let's look at the cases:
• φ is a graph automorphism (i.e. it permutes the generators), so φ ∈ Σ_n. This subgroup is generated by transpositions of the w_i. We further break this into subcases:
    – φ = (w_i w_j), i ≠ 1 ≠ j. Then it is easy to see that ι(φ) permutes x_{i−1}, x_{j−1}, and that this map commutes with inverting every generator.
    – φ = (w_1 w_i). In this case, φ(w_1 w_j) = w_i w_j = w_i w_1 w_1 w_j for i ≠ j, and φ(w_1 w_i) = w_i w_1. Then ι(φ)(x_j) = x_{i−1}^{-1} x_j for j ≠ i − 1, and ι(φ)(x_{i−1}) = x_{i−1}^{-1}. The rest of the proof that ι(φ) commutes with r is left to the reader.
• φ is a partial conjugation. Once more, we are relegated to subcases:
    – Neither the acting letter nor the acted-on letter is w_1. Then for i ≠ 1 ≠ j, φ(w_j) = w_i w_j w_i and φ fixes all other generators. Then ι(φ) fixes all generators of F_{n−1} except x_{j−1}, and ι(φ)(x_{j−1}) = x_{i−1} x_{j−1}^{-1} x_{i−1}. Note that φ has order two, so ι(φ) is also order two in the codomain. This is borne out by performing the calculation on the right hand side. We see that [r, ι(φ)](x_i) = x_i for i ≠ j − 1, and

        ι(φ) ∘ r ∘ ι(φ) ∘ r (x_{j−1}) = ι(φ) ∘ r ∘ ι(φ) (x_{j−1}^{-1})
                                      = ι(φ) ∘ r (x_{i−1}^{-1} x_{j−1} x_{i−1}^{-1})
                                      = ι(φ) (x_{i−1} x_{j−1}^{-1} x_{i−1})
                                      = x_{j−1}

    – The acting letter is w_1. Then call the acted-on letter w_i, so that φ(w_i) = w_1 w_i w_1 and φ fixes the other generators. In this case, ι(φ) inverts x_{i−1} and fixes the other free generators. This automorphism commutes with inverting all generators.
    – The acted-on letter is w_1, so φ(w_1) = w_i w_1 w_i. Quick calculations show that ι(φ)(x_j) = x_{i−1}^{-2} x_j. Then

        ι(φ) ∘ r ∘ ι(φ) ∘ r (x_j) = ι(φ) ∘ r ∘ ι(φ) (x_j^{-1})
                                  = ι(φ) ∘ r (x_j^{-1} x_{i−1}^2)
                                  = ι(φ) (x_j x_{i−1}^{-2})
                                  = x_{i−1}^{-2} x_j x_{i−1}^2,

      which is the conjugate of x_j by x_{i−1}^2, an inner automorphism.
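These case computations can be machine-checked with elementary free-group word arithmetic (an illustrative sketch, not from the paper; all names are ours). For the last subcase, with m playing the role of i − 1, the composite φ∘r∘φ∘r (equal to the commutator [r, ι(φ)], since both maps are involutions) sends each x_j to its conjugate by x_m^2:

```python
def freduce(word):
    # Free reduction: cancel adjacent g, -g pairs (letters are nonzero ints,
    # with -g denoting the inverse of g); a single stack pass suffices.
    out = []
    for g in word:
        if out and out[-1] == -g:
            out.pop()
        else:
            out.append(g)
    return tuple(out)

def apply_map(images, word):
    # Extend a map on generators (j -> images[j], a word) to all of F_{n-1}.
    out = []
    for g in word:
        img = images[g] if g > 0 else [-h for h in reversed(images[-g])]
        out.extend(img)
    return freduce(out)

def compose(maps, word):
    # Apply the listed maps right-to-left, i.e. maps[0] is applied last.
    for f in reversed(maps):
        word = apply_map(f, word)
    return word

n1, m = 3, 2                                 # working in F_3; m plays i - 1
gens = range(1, n1 + 1)
r = {j: [-j] for j in gens}                  # r inverts every generator
phi = {j: [-m, -m, j] for j in gens}         # iota(phi): x_j -> x_m^-2 x_j

for j in gens:
    lhs = compose([phi, r, phi, r], (j,))    # phi . r . phi . r applied to x_j
    conj = freduce((-m, -m, j, m, m))        # x_m^-2 x_j x_m^2
    assert lhs == conj
```

Note that the formula ι(φ)(x_j) = x_{i−1}^{-2} x_j covers j = i − 1 as well, where it reduces to x_{i−1}^{-1}; the loop above checks that case too.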
This allows us to make the observation that

    R ⊴ ι ∘ q(Aut(W_n)) < Out(F_{n−1}).

In other words,

    ⟨⟨R⟩⟩ ∩ ι ∘ q(Aut(W_n)) = R.

More to the point, this allows us to replace Figure 3 with Figure 4.
[Diagram: as in Figure 3, but with the image subgroups made explicit: ι : Aut(W_n) → im(ι) ⊂ Aut(F_{n−1}), the quotients q and q̄ down to Out(W_n) and Out(F_{n−1}), im(ι ∘ q) ⊂ Out(F_{n−1}), and q_r : im(ι ∘ q) → im(ι ∘ q)/R.]
Figure 4: Involution Normality in the Image
It is shown in [BBF15] that Out(F_n) is acylindrically hyperbolic. This is proven by way of its action on the free factor complex, although it is unknown if this action is itself acylindrical. Hyperbolicity of this complex is demonstrated in [BF14]. In this action, fully irreducible elements of Out(F_n) act with Weak Proper Discontinuity (WPD), which tells us that they are generalized loxodromics (witnessed by an action on a different space). Theorem H in [BBF15] constructs a new action on a space quasi-isometric to a tree, which we will denote by Q, that satisfies the conditions required by acylindrical hyperbolicity, and in which these same fully irreducible elements act loxodromically. The fact that this action is acylindrical is stated in the discussion after Theorem I of [BBF15]. Furthermore, we are guaranteed, again by [BBF15], that all fully irreducible group elements act loxodromically in this action on Q.
Theorem 4.6. Out(W_n) is acylindrically hyperbolic for n ≥ 4.
Proof. We will abuse notation throughout this proof, letting r represent both the automorphism in Aut(F_{n−1}) and its image under the quotient map.

The first thing we will do is make a slight modification to Q. Unlike in uniquely geodesic spaces such as CAT(0) spaces, fixed point sets in arbitrary hyperbolic spaces aren't as nice as we would like, so we will add in a little extra structure. Let δ represent the constant of hyperbolicity for Q.

Define Q̂ := Q ∪ E, where E consists of combinatorial edges of length δ between any two points which are at distance at most δ in Q. We note that these two spaces are quasi-isometric: Q embeds into Q̂ in the natural way such that distances are not changed, and the embedding is δ-quasi-onto. The group Out(F_n) will act on Q̂ in the natural way on the embedded copy of Q, and permute the edges in E according to their endpoints.

Label Q̂^R the fixed point set of R = ⟨r⟩ acting on Q̂.

Now define an action Out(W_n) ↷ Q̂^R. We start by noting that there is a natural action M := ι ∘ q(Aut(W_n)) ↷ Q̂, because M is a subgroup of Out(F_{n−1}). Now, to say what an element ψ ∈ Out(W_n) does, we look at its image ῑ(ψ) ∈ q_r(M). Using the structure of Figure 4, ῑ(ψ) = gR for some element g ∈ Out(F_{n−1}). Now, for any f ∈ Q̂^R, we can define

    ῑ(ψ).f = g.f.

This is well-defined, because no matter which element of R we pick (i.e., either id or r), both have the same effect on f (i.e., r.f = f by definition of Q̂^R).

Finally, we claim the image of Out(W_n) leaves Q̂^R invariant set-wise; that is to say, it doesn't take it off itself. Let C ∈ im(ι ∘ q), which encompasses any element coming from Out(W_n), and let f ∈ Q̂^R. Note from Lemma 4.5 that [r, C] = 1 in Out(F_{n−1}). Then

    r C.f = C r.f = C (r.f) = C.f.

So the point f is moved to under C is indeed fixed by r, as r C.f = C.f.
Now in order to show that this action on Q̂R satisfies acylindrical hyperbolicity, we
must show three things:
1. Q̂R is hyperbolic.
2. This action satisfies acylindricity.
3. This action is non-elementary.
For the first task, we recall that hyperbolicity is a quasi-isometry invariant, so we know that Q̂ is hyperbolic. We claim that Q̂^R is quasi-convex in Q̂, making it also hyperbolic. To show quasi-convexity, let f_0, f_1 be points in Q̂^R such that a geodesic between them leaves Q̂^R. If no such points exist, our subspace is directly convex. Otherwise, label x_0, x_1 the points (possibly equal to the f_i) at which the chosen geodesic first leaves and then re-enters Q̂^R. Let λ = [x_0, x_1], which by assumption only intersects Q̂^R in its endpoints. If we take r.λ, we obtain another, distinct geodesic between x_0 and x_1. Let x be any point of λ. By the closeness of geodesics with the same endpoints in a hyperbolic space, the distance between x and r.x is bounded by δ. Therefore there is a combinatorial edge between them of length δ. Because r is order 2, it acts by inversion on this edge, and therefore fixes its midpoint. This means that every point on this geodesic is within distance at most 2δ of a fixed point. So this (and therefore any) geodesic between points in Q̂^R lies in a 2δ-neighborhood of Q̂^R. This means Q̂^R is 2δ-quasiconvex, and therefore hyperbolic.
For acylindricity, we begin by letting R(ε), N(ε) be constants depending on ε that demonstrate the acylindricity of the action Out(F_{n−1}) ↷ Q. We note that these same constants will work to demonstrate acylindricity of im(ι ∘ q), because the relevant set of elements will be a subset of the one we consider in the supergroup. Our claim is that these same constants will once again work for im(ι ∘ q)/R. We proceed by contradiction. Let ε > 0, and R(ε) as above. Then suppose

    |{φ ∈ im(ι ∘ q)/R | d(x, φ.x) ≤ ε, d(y, φ.y) ≤ ε}| ≥ N.

Now consider the set of pre-images {q_r^{-1}(φ)} of these elements. Because q_r is surjective, this set has no fewer elements than the original. Furthermore, because our quotient is by R, which acts trivially on Q^R, these elements also have the same induced action. Therefore, this violates the assumption that there are fewer than N(ε) elements in im(ι ∘ q) that satisfy this property. Finally, adding these combinatorial edges to Q^R does not change the property of acylindricity; it only slightly modifies the constants. This is because it does not change the distances of points in Q, and elements moving the new combinatorial edges close to themselves must bring those endpoints, which belong to Q, close to themselves. Specifically, for x, y ∈ Q̂ with distance d(x, y) ≥ R, the set

    {g ∈ G | d(x, g.x) ≤ ε, d(y, g.y) ≤ ε}

is contained in the set, for x, y ∈ Q with d(x, y) ≥ R(ε + 2δ),

    {g ∈ G | d(x, g.x) ≤ ε + 2δ, d(y, g.y) ≤ ε + 2δ},

which is finite by assumption.
Finally, we are tasked with showing this action is non-elementary. In order to demonstrate that the limit set is not elementary, we will show that there are two elements in the image of Aut(W_n) that act as loxodromics on Q̂. Due to quasi-convexity, any element which acts as a loxodromic on Q and fixes Q̂^R set-wise will also act as a loxodromic on Q̂^R, so establishing a loxodromic action on Q is sufficient. To find these elements, we recall that Q is designed such that any elements acting loxodromically and with WPD on the free factor complex also act as such on Q. To find these, we turn to [BF14], which tells us that the elements which act loxodromically are exactly those automorphisms that don't fix (or have any power which fixes) any individual free factor. It is clear why this is a necessary condition, but sufficiency is more intricate.
We'll fix some notation for partial conjugations in Aut(W_n). Let

    P_{i,j}(w_i) = w_j w_i w_j,    P_{i,j}(w_k) = w_k for k ≠ i.
Note that this element is order 2. To see what effect this has on the free group:
    P_{i,j}(x_{i−1}) = P_{i,j}(w_1 w_i)
                     = w_1 w_j w_i w_j
                     = w_1 w_j w_i (w_1 w_1) w_j
                     = (w_1 w_j)(w_i w_1)(w_1 w_j)
                     = x_{j−1} x_{i−1}^{-1} x_{j−1}
Now let's examine the product P_{i,j} ∘ P_{i,k} for k ≠ j:

    P_{i,k}(P_{i,j}(x_{i−1})) = P_{i,k}(x_{j−1} x_{i−1}^{-1} x_{j−1})
                              = x_{j−1} x_{k−1}^{-1} x_{i−1} x_{k−1}^{-1} x_{j−1}.
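This computation, too, can be checked by reducing words in W_n (an illustrative sketch, not from the paper; the helper names are ours):

```python
def wreduce(word):
    # Normal form in W_n (free product of Z/2's): cancel adjacent equal letters.
    out = []
    for g in word:
        if out and out[-1] == g:
            out.pop()
        else:
            out.append(g)
    return tuple(out)

def P(i, j, word):
    # Partial conjugation P_{i,j}: w_i -> w_j w_i w_j, fixing every other w_k.
    out = []
    for g in word:
        out.extend((j, i, j) if g == i else (g,))
    return wreduce(out)

i, j, k = 2, 3, 4                      # distinct acting indices, all != 1

x_im1 = (1, i)                         # x_{i-1} = w_1 w_i; w's labelled 1..n
lhs = P(i, k, P(i, j, x_im1))          # P_{i,k}(P_{i,j}(x_{i-1}))

# x_{j-1} x_{k-1}^-1 x_{i-1} x_{k-1}^-1 x_{j-1}, using (w_1 w_k)^-1 = w_k w_1
rhs = wreduce((1, j) + (k, 1) + (1, i) + (k, 1) + (1, j))

assert lhs == rhs == (1, 3, 4, 2, 4, 3)
```

Both sides reduce to the same normal form w_1 w_3 w_4 w_2 w_4 w_3 in W_n, confirming the displayed identity for these indices.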
We further see that P_{i,j} commutes with P_{k,n} exactly when i ≠ k. For the following element, consider all addition performed in the indices to be done mod n − 1. To find a desired loxodromic element, let

    μ = Π_{1≤i≤n−1} P_{i,i+1} P_{i,i+2}.

This is fully irreducible because it affects each generator, ranging over indices 1 ≤ i ≤ n − 1, sending each one to a word as in the computation above, which is not expressible as a conjugation.
We need one more loxodromic, one which does not commute with μ, to finish the proof. Let

    τ = Π_{1≤i≤n−1} P_{i,i+3} P_{i,i+2}.

In defining this element, we see why at the beginning of the section we required that n ≥ 4. Now that we have these two distinct loxodromics, we get a limit set of cardinality strictly greater than 2 (containing the endpoints of these elements), which completes the proof.
Theorem 4.7. Suppose Out(W_n) acts geometrically on a CAT(0) space X. Then X contains a rank one geodesic. In particular, Out(W_n) cannot act geometrically on a Euclidean building.

This follows from 2.7 and 4.6.
References

[Abb16] Carolyn R. Abbott. Not all finitely generated groups have universal acylindrical actions. Proc. Amer. Math. Soc., 144(10):4151–4155, 2016.

[BB08] Werner Ballmann and Sergei Buyalo. Periodic rank one geodesics in Hadamard spaces. In Geometric and probabilistic structures in dynamics, volume 469 of Contemp. Math., pages 19–27. Amer. Math. Soc., Providence, RI, 2008.

[BBF15] Mladen Bestvina, Ken Bromberg, and Koji Fujiwara. Constructing group actions on quasi-trees and applications to mapping class groups. Publ. Math. Inst. Hautes Études Sci., 122:1–64, 2015.

[BF09] Mladen Bestvina and Koji Fujiwara. A characterization of higher rank symmetric spaces via bounded cohomology. Geom. Funct. Anal., 19(1):11–40, 2009.

[BF14] Mladen Bestvina and Mark Feighn. Hyperbolicity of the complex of free factors. Adv. Math., 256:104–155, 2014.

[BH99] Martin R. Bridson and André Haefliger. Metric spaces of non-positive curvature, volume 319 of Grundlehren der Mathematischen Wissenschaften. Springer-Verlag, Berlin, 1999.

[BM10] Tom Brady and Jon McCammond. Braids, posets and orthoschemes. Algebr. Geom. Topol., 10(4):2277–2314, 2010.

[Bow08] Brian H. Bowditch. Tight geodesics in the curve complex. Invent. Math., 171(2):281–300, 2008.

[Bra00] Thomas Brady. Artin groups of finite type with three generators. Michigan Math. J., 47(2):313–324, 2000.

[CM16] Indira Chatterji and Alexandre Martin. A note on the acylindrical hyperbolicity of groups acting on CAT(0) cube complexes. 2016.

[CS15] Ruth Charney and Harold Sultan. Contracting boundaries of CAT(0) spaces. J. Topol., 8(1):93–117, 2015.

[Cun15] Charles Cunningham. On the automorphism groups of universal right-angled Coxeter groups. Dissertation, 2015.

[CW16] Matthieu Calvez and Bert Wiest. Acylindrical hyperbolicity and Artin–Tits groups of spherical type. 2016.

[Dav98] Michael W. Davis. Buildings are CAT(0). In Geometry and cohomology in group theory (Durham, 1994), volume 252 of London Math. Soc. Lecture Note Ser., pages 108–123. Cambridge Univ. Press, Cambridge, 1998.

[Gen16] Anthony Genevois. Acylindrical action on the hyperplanes of a CAT(0) cube complex. 2016.

[GPR12] Mauricio Gutierrez, Adam Piggott, and Kim Ruane. On the automorphisms of a graph product of abelian groups. Groups Geom. Dyn., 6(1):125–153, 2012.

[HJP16] Jingyin Huang, Kasia Jankiewicz, and Piotr Przytycki. Cocompactly cubulated 2-dimensional Artin groups. Comment. Math. Helv., 91(3):519–542, 2016.

[HKS16] Thomas Haettel, Dawid Kielak, and Petra Schwer. The 6-strand braid group is CAT(0). Geom. Dedicata, 182:263–286, 2016.

[Osi16] D. Osin. Acylindrically hyperbolic groups. Trans. Amer. Math. Soc., 368(2):851–888, 2016.

[Sis] Alessandro Sisto. Contracting elements and random walks. To appear in Crelle's Journal.

[Sis16] Alessandro Sisto. Quasi-convexity of hyperbolically embedded subgroups. Math. Z., 283(3-4):649–658, 2016.
International Journal of Material Forming

New method to characterize a machining system: application in turning

Claudiu F. Bisu(1,2), Jean-Yves K'nevez(2), Philippe Darnis(3), Raynald Laheurte(2,3), Alain Gérard(2)

(1) University Politehnica of Bucharest, 313 Splaiul Independentei, 060042 Bucharest, Romania (EU); email: [email protected]
(2) Université de Bordeaux, 351 cours de la Libération, 33405 Talence Cedex, France (EU); email: [email protected]
(3) Université de Bordeaux - IUT EA 496, 15 rue Naudet, 33175 Gradignan Cedex, France (EU)
Abstract

Many studies simulate the machining process by using a single degree of freedom spring-mass system to model the tool stiffness, the workpiece stiffness, or the combined tool-workpiece stiffness in 2D models. Others impose the tool action, or use more or less complex models of the efforts applied by the tool, taking into account the tool geometry. Thus, all these models remain two-dimensional, or sometimes only partially three-dimensional.

This paper aims at developing an experimental method to determine accurately the real three-dimensional behaviour of a machining system (machine tool, cutting tool, tool-holder and the associated force metrology system, a six-component dynamometer).

Within this model of the machining work-space, a new experimental procedure is implemented to determine the elastic behaviour of the machining system. An experimental study of the machining system is presented, and a static characterization of the machining system is proposed. The system "Workpiece-Tool-Machine" is decomposed into two distinct blocks. The block Tool and the block Workpiece are studied and characterized separately by stiffness and displacement matrices (three translations and three rotations). Castigliano's theorem allows us to calculate the total stiffness matrix and the total displacement matrix.

A stiffness center point and a plane of static displacement of the tool tip are presented, in agreement with the dynamic model of turning, especially during self-induced vibration. These results are necessary for a good three-dimensional dynamic characterization of the machining system (presented in a forthcoming paper).

Keywords: experimental model, displacement plane, self-excited vibrations, turning
Nomenclature

a : Distance between displacement transducers
BT : Block Tool
BW : Block Workpiece
[C] : Damping matrix
[CO] : Compliance matrix
Ci : Displacement transducer (i = 1 to 6)
CRBT : Block Tool (BT) stiffness center
D1 : Holding fixture diameter (mm)
D2 : Workpiece diameter (mm)
{D} : Small displacements torsor
Dij : Straight line along the displacement direction of the point Pi,j (i = x, y, z; j = 1, 2, 3)
dij : Displacement vectors of the points Pi,j (i = x, y, z; j = 1, 2, 3)
dx : Distance between the lines Dij
E : Young modulus (N/mm²)
ex, fx : Scale factors
Fi : Force vectors applied to obtain the BT stiffness center (i = x, y, z)
I : Inertial moment
[K] : Stiffness matrix (N/m)
[KC] : Stiffness matrix of rotation (Nm/rad)
[KF] : Stiffness matrix of displacement (N/m)
[KF,BT] : Stiffness matrix of BT displacement (N/m)
[KF,BW] : Stiffness matrix of BW displacement (N/m)
[KF,WAM] : Stiffness matrix of machining system displacement (N/m)
[Kerrors%] : Errors matrix for the matrix [K]
[KCF] : Stiffness matrix of rotation / displacement (Nm/m)
[KFC] : Stiffness matrix of displacement / rotation (N/rad)
L1 : Holding fixture length (mm)
L2 : Workpiece length (mm)
Mi : Intersection point of the straight lines Dij (i = x, y, z; j = 1, 2, 3)
m : Displacement measured at the charge point
[M] : Mass matrix
ni : Normal of the plane Pi
O : Tool tip point
OC : Cube center
P : Force (N)
Pi : Plane including the point Mi
PBT : Displacement plane considered at the tool point
Pij : Charge points (i = x, y, z; j = 1, 2, 3)
{T} : Mechanical actions torsor
[V] : Matrix of eigenvectors of [KF,BT]
v1 : Matrix of eigenvalues of [KF,BT]
WTM : Workpiece-Tool-Machine
x (z) : Cross (feed) direction
y : Cutting axis
δ : Displacement (mm)
δi : Displacement along i (i = 1, 2, 3)
θ : Measured angle at the force point
αi : Angular deviation of "co-planarity" between lines Dij (i = x, y, z; j = 1, 2, 3)
Δi : Minimal distance between straight lines Dij (i = x, y, z; j = 1, 2, 3)
θi : Rotation along i (i = x, y, z)
1 Introduction

Metal cutting is one of the most important manufacturing processes. The most common cutting processes are turning, milling, drilling and grinding.
During the cutting process of different materials, a whole set of physico-chemical and dynamic phenomena are involved. Elasto-plastic strains, friction and thermal phenomena are generated in the contact zone between workpiece, tool and chip. These phenomena are influenced by the physical properties of the material to be machined, the tool geometry, the cutting and lubrication conditions, and also the dynamic parameters of the machining system (stiffness, damping). Machine tool vibrations are generated by the interaction between the elastic machining system and the cutting process. The elastic system is composed of the moving parts of the machine tool, the workpiece and the tool. The actions of the machining process are usually forces and moments. These actions also generate relative displacements of the elements composing the elastic system. They occur, for example, between the tool and workpiece, the tool device and bed, etc. These displacements modify the cutting conditions and thereby the forces. Thus, knowledge of the elastic behaviour of the machining system is essential to understand the cutting process [6].
Some researchers developed a finite element beam model of a spinning stepped-shaft workpiece to perform stability analysis using the Nyquist criterion [38] or the traditional stability lobe diagram [16, 25]. This traditional stability analysis technique shows that the chatter instability depends on the structural damping in the system and on the spindle speed. Chen and Tsao presented a dynamic model of the cutting tool with [11] and without [10] a tailstock-supported workpiece, using beam theory. There, the effects of the workpiece parameters on the dynamic stability of the turning process are studied by treating the workpiece as a continuous system. Carrino et al. [8] present a model that takes into account both the workpiece deflection and the cutting force between tool and workpiece. The three components of the cutting force are functions of the cutting geometry. The effect of the workpiece-tool-machine deflections is a shift of the workpiece cross-section and a moving back of the tool holder in the radial and tangential directions (2D model).
In these processes, the measurement of cutting forces has important applications in industry and research alike. The estimation of cutting forces allows one to supervise tool wear evolution [36], establish material machinability, optimize cutting parameters, predict the machined workpiece surface quality, and study phenomena such as chip formation or the appearance of vibrations. Sekar and Yang propose a compliant two degree of freedom dynamic cutting force model by considering the relative motion of the workpiece with respect to the cutting tool. Tool and workpiece are modelled as two separate single degree of freedom spring-mass-damper systems [34].
In the literature, there are many studies concerning cutting force measurement, and many dynamometers have been developed for this purpose [5, 9, 12, 23]. Independently of the type of machining operation, methods for cutting force measurement can be divided into two general categories. The first uses the current or voltage signals emitted by the machine tool drive motor or control systems [27]. The second uses transducers mounted on the tool or the workpiece assemblies [20, 37, 40]. The cutting forces developed in machining operations may be estimated indirectly from the power consumed, or measured directly with metal cutting dynamometers: mechanical, hydraulic, pneumatic or several types of electro-mechanical dynamometers.
Knowing the cutting forces is essential to machine tool builders for calculating power requirements and frame rigidity. Cutting forces acting on the tool must be known at the design stage, so that tools are strong enough to remove chips at the desired rate from the workpiece, and so that the power of the tool drive system can be calculated. A typical dynamometer is able to measure three force components: the cutting force (Fc), the feed force (Fa) and the radial force (Fp), but not the torque at the tool tip. Axinte et al. [1] propose a procedure to account for both calibration and process errors in the uncertainty estimation for the specific situation of single cutting force measurements. Among the parameters contributing to the measurement uncertainty considered in their work, the workpiece, tool and machine were not included. Perez et al. [28] give a mechanistic model for the estimation of cutting forces in micromilling based on the specific cutting pressure. The model includes three parameters which allow one to control the entry of the cutter into the workpiece. The errors in the radial and angular position of the cutting edges of the tool are considered. The cutting forces are calculated on the basis of the engaged cut geometry, the undeformed chip thickness distribution along the cutting edges, and empirical relationships that relate the cutting forces to the undeformed chip geometry. This relation does not take into account the elasticity of the machining system. In the measurement of the cutting forces in [35], only the elastic deflections of the cutting tool due to the cutting forces were measured, by means of load cells located at suitable positions on the cutting tool.

However, a dynamometer can measure three perpendicular cutting force components and three torque components simultaneously during turning, and the measured numerical values can be stored in a computer by a data acquisition system [12]. This dynamometer was designed to measure up to 5,000 N maximum force and 350 N·m torque. The system sensitivity is ±4 % in force and ±8 % in torque.
During the cutting process, the cutting tool penetrates into the workpiece due to the relative motion between tool and workpiece, and the cutting forces and torques are measured on a measuring plane in a Cartesian coordinate system. The cutting forces have been measured by dynamometers designed on different working principles, such as strain gauge based devices [12]. Thus, it is necessary to have a good methodology to measure the workpiece-tool-machine rigidity before measuring forces and torques. This new methodology is precisely the purpose of this paper. In section 2 we present the experimental device. Next (section 3) we design the workpiece; the workpiece geometry and dimensions retained for these test-tubes were selected using the finite element method coupled to an optimization method, with the SAMCEF software. In section 4, a methodology based on virtual work (three translations and three rotations) is presented for the static study, characterizing the static equivalent stiffness values in order to identify the three-dimensional elastic behaviour of the machining system. The applied efforts are quantified with a force sensor. The small displacements torsor (3 linear displacements and 3 rotations) is measured by six displacement transducers. A global stiffness matrix is deduced from these results. The sum of the two displacement stiffness matrices of the block Tool and of the block Workpiece determines the displacement stiffness matrix of the machining system. By Castigliano's theorem we determine the angle that characterizes the principal direction of deformation. Before concluding, in section 5 the stiffness center is obtained using the least squares method, in the coordinate system based on the tool at the point O, the origin of the coordinate system.
2 General points

Today, machine tools are very rigid and are less and less geometrically faulty. The vibratory problems are strongly related to the cutting itself. Ideally, cutting conditions are chosen such that material removal is performed in a stable manner. However, sometimes chatter is unavoidable because of the geometry of the cutting tool and workpiece. In [7] the bulk of the motion during chatter comes from the workpiece, since it has a static stiffness that is up to 3.2 times less than that of the cutting tool. Since it is highly impractical to instrument the workpiece during cutting, the end goal is to develop an observer that can transform measurements made at the cutting tool into a prediction of the motion of the workpiece. Dassanayake [13] approaches, in the 1D case, the dynamic response of the tool holder to the excitation of the tool following a regenerative surface; only tool motions are considered, disregarding workpiece vibration. Insperger continues in the 2D case, keeping the workpiece rigid but taking into account the flexibility of the tool [18]. For a milling operation [33], the deflections of the machine tool, the toolholder and its clamping in the spindle, the tool clamping in the toolholder, and the tool itself were measured experimentally under the effect of known forces. The results of this study show that the stiffness of both the machine and the clampings in the machine-spindle-toolholder-tool system contribute to the displacement of the tool tip (subjected to a cutting force) with an importance similar to that of the deflection of the tool itself. Thus, it is necessary to identify the elastic behaviour of the machine parts [6]. These vibrations are generated and self-induced by the cutting process. A conventional lathe with high rigidity is used to study these dynamic phenomena. The Workpiece-Tool-Machine (WTM) system is presented in figure 1 for a turning operation.
Fig. 1 Workpiece-Tool-Machine considering dynamic cutting process.
The elastic structure of the WTM system has several degrees of freedom and many specific vibration modes. The vibrations of each element of the structure are characterized by its natural frequency, which depends on the stiffness matrix [K], the mass matrix [M] and the damping matrix [C]. At first, only the stiffness matrix [K] is studied.

Our experimental approach is based on the matrix development presented in [29]. To identify the static behaviour of the WTM system, the machining system is divided into two blocks, the block Tool (BT) and the block Workpiece (BW), figure 2. These two blocks are attached to the lathe bed, which is supposed to be infinitely rigid.

Fig. 2 Presentation of the experimental device.
3 Components of the system WTM
3.1 Block Workpiece: BW
As in many works [2, 27, 41], a cylindrical geometry of the workpiece is chosen. The BW represents the revolving part of the WTM system; it includes the holding fixture, the workpiece and the spindle (figures 3 a, b). To make the whole frame rigid, a very rigid unit (workpiece, holding fixture) is designed in front of the WTM elements (figure 4).
Fig. 3 BW representation.
The workpiece geometry and its holding fixture are selected with D1 = 60 mm, D2 = 120 mm and L2 = 30 mm (cf. figure 4). The dimensions retained for these test-tubes were selected using the finite element method coupled to an optimization method, with the SAMCEF software. It is necessary to determine the holding fixture length L1 giving a significant stiffness in flexion. The objective is to move the first BW vibration mode away from the fundamental natural vibration mode of the lathe (see [3]).

Fig. 4 Geometry of holding fixture / workpiece.
As in [33] and elsewhere, the stiffness is calculated on the basis of the displacement δ for a given force P:

    δ = P L³ / (3 E I)    (1)

with the moment of inertia:

    I = π D1⁴ / 64    (2)

Figure 5 shows the displacements and stiffness values as functions of the holding fixture / workpiece length, for a force P = 1,000 N, a Young modulus E = 21×10⁴ N/mm² and a holding fixture diameter D1 = 60 mm.

Fig. 5 Displacements according to the holding fixture length.

A holding fixture length L1 = 180 mm, corresponding to a stiffness in flexion of 7×10⁷ N/m, is retained. This value lies in the higher part of the interval of acceptable rigidity values for a conventional lathe (cf. figure 6) [19, 21, 22].
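The retained length can be checked numerically. The sketch below (illustrative code, assuming a steel Young modulus E = 21×10⁴ N/mm², consistent with the ~7×10⁷ N/m quoted above) evaluates equations (1) and (2) for the cantilevered holding fixture.

```python
import math

# Flexural stiffness of the holding fixture / workpiece as a cantilever:
# delta = P L^3 / (3 E I), with I = pi D1^4 / 64 (equations (1)-(2)).

def flexural_stiffness(E_mm2, D1_mm, L_mm):
    I = math.pi * D1_mm**4 / 64.0           # moment of inertia, mm^4
    k_N_per_mm = 3.0 * E_mm2 * I / L_mm**3  # stiffness, N/mm
    return k_N_per_mm * 1e3                 # convert to N/m

k = flexural_stiffness(21e4, 60.0, 180.0)
print(k)  # about 6.9e7 N/m, i.e. the ~7e7 N/m retained in the text
```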
Fig. 6 Representation of the acceptable area of workpiece deformation.
3.2 Block Tool: BT

In this case, the BT part includes the tool, the tool-holder, the dynamometer, and the fixing plate on the cross slide (figure 7a). The six-component dynamometer [12] is fixed between the cross slide and the tool-holder; this is necessary thereafter to measure the cutting mechanical actions. The stiffness of BT is evaluated in the next section.

Fig. 7 Block tool BT representation.
4 Static characterization of the machining system

The static study aims at characterizing the static equivalent stiffness values in order to identify the three-dimensional elastic behaviour of the machining system. Generally, static tests consist in loading the two blocks with known efforts and measuring only the associated displacement components [17, 39]. Here, the static tests also consist in loading the two blocks with known efforts, but measuring the full small displacements torsor (i.e. three linear displacements and three rotations). The applied efforts are quantified with a force sensor. The small displacements torsor is measured by six displacement transducers. A global stiffness matrix is deduced from these results; it is a real 3D pattern. For instance, Carrino et al. present a model that takes into account both the workpiece deflection and the cutting force between tool and workpiece; the three components of the cutting force depend on the cutting geometry, and the effect of the workpiece-tool-machine deflections is a shift of the workpiece cross-section and a moving back of the tool holder in the radial and tangential directions (2D model) [8].
4.1 Stiffness matrix

The experimental approach is based on the matrix development presented in [30]. The deformation of a structure element is represented by the displacements of the nodes determining this element. The "associated forces" correspond to the displacements and act at these nodes. The transformation matrix which connects the generalized displacements of an element to the "associated forces" is the rigidity matrix, or stiffness matrix, of the element. In the same way, the matrix which connects the generalized displacements of the structure to the applied generalized discrete forces is the stiffness matrix of the structure, simply named the "stiffness matrix" [K].

The relation between forces and displacements is given by [29]:

    {T} = [K] {D}    (3)

where {T} represents the mechanical actions torsor, [K] the stiffness matrix and {D} the small displacements torsor.

The general form of the square (6 × 6) stiffness matrix [K] is:

    [K]_{A,xyz} = | [KFC]  [KF]  |
                  | [KC]   [KCF] |_{A,xyz}    (4)

where [KF], [KC], [KCF] and [KFC] are square (3 × 3) matrices: respectively the displacement matrix, the rotation matrix, and the rotation / displacement and displacement / rotation coupling matrices, expressed at the point A in the x, y, z machine axes.
4.2 Experimental determination of the stiffness matrix

The elements of the small displacements torsor are identified thanks to the experimental device presented in figure 8. The considered system is a cube. Displacements are measured by six displacement transducers; two transducers are positioned symmetrically on each of the 3 directions. The force is applied in each x, y, z direction at two different levels by a screw-swivel system controlled by a force sensor. The coordinates of each loading point are known with respect to the cube center Oc. This allows, for each applied force, the determination of the moment and thus of the complete mechanical actions torsor {T}.
Fig. 8 Experimental device for the static characterization.
The induced displacements are solid body displacements, and it is noted that the rotations are small (of the order of 10⁻⁵ rad) but do exist. The existence of these rotations is important and in agreement with the torque, via the virtual work theory. The measurement principle is presented in figure 9; it is used to determine the components of the small displacements torsor {D}, which is composed of the three rotations θx, θy, θz and the three displacements δx, δy, δz.
Fig. 9 Position of the displacement transducers.
Thus, for each loading direction, the displacement m and the rotation θ are determined using the relations:

    m = (m1 + m2) / 2 ,    tan θ = (m2 − m1) / a    (5)
From these relations, considering the six measurement points, the six components of the torsor follow:

    (θx, θy, θz, δx, δy, δz)ᵀ = [A] (m1, m2, m3, m4, m5, m6)ᵀ    (6)

where the entries of the (6 × 6) transformation matrix [A] are 0, ±1/2 or ±1/a, obtained by applying the relations (5) to the transducer pair of each direction.
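The per-axis part of this transformation can be sketched as follows (illustrative code; the pairing of the six channels with the axes and the numerical readings are my own assumptions, not measured data from the paper).

```python
# Equation (5) applied per axis: each direction carries two transducers a
# apart; the mean gives the translation, the difference over a gives the
# (small-angle) rotation.

def pair_to_torsor(m1, m2, a):
    displacement = (m1 + m2) / 2.0
    rotation = (m2 - m1) / a   # small angle: tan(theta) ~ theta
    return displacement, rotation

# six synthetic readings (mm), grouped by axis, transducer spacing a = 100 mm
readings = {"x": (0.010, 0.014), "y": (0.020, 0.020), "z": (0.005, 0.009)}
torsor = {axis: pair_to_torsor(m1, m2, 100.0)
          for axis, (m1, m2) in readings.items()}
for axis, (d, r) in torsor.items():
    print(axis, d, r)
```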
The tests are carried out with specific assemblies designed for each measurement direction. The loading (respectively unloading) is carried out in steps of 30 daN (resp. −30 daN) up to (resp. from) a level of 200 daN; this procedure is used for each test along known directions. To check the repeatability and accuracy of the identifications, all tests and measurements are carried out five times, and the average is selected for each point in figure 10.
To exploit the measurements as well as possible, the displacement curves are plotted as functions of the applied force for each loading direction. A least squares line is fitted to determine the displacement component values for a given force. Thus, six small displacements torsors are identified for the six loading cases. The linear behaviour observed in loading is different from the linear behaviour noted in unloading. This difference in linearity between loading and unloading is due to the existence of clearances and friction forces at the contact surfaces of the assembly. These deviations, due to the installation of the parts of the associated assembly, and the friction forces are different in charge and in discharge.

When this difference in linear behaviour appears (hysteresis), we use the line (figure 10) which passes through the middle (point C) of the segment AB (charge-discharge). OC is the line whose slope, by assumption, corresponds to the "real" stiffness. The segment AB represents twice the friction forces and deviations for the deformation δ [24].

Fig. 10 Diagrammatic representation of linear behaviour in charge and discharge.
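The midline construction of figure 10 can be sketched numerically as follows (illustrative code on synthetic charge/discharge data, not the paper's measurements; the through-origin least squares fit is one simple way to extract the O-C slope).

```python
import numpy as np

# "Real" stiffness from a charge/discharge hysteresis loop: for each load
# level, take the midpoint C of the segment AB between the loading and
# unloading displacements, then fit the slope of the O-C line by least
# squares through the origin.

force = np.array([30.0, 60.0, 90.0, 120.0])          # load levels (daN)
charge = np.array([0.010, 0.020, 0.030, 0.040])      # mm, loading branch
discharge = np.array([0.014, 0.024, 0.034, 0.044])   # mm, unloading branch
midline = (charge + discharge) / 2.0                 # midpoints C

# least-squares slope through the origin: compliance, then stiffness
compliance = (force @ midline) / (force @ force)     # mm/daN
stiffness = 1.0 / compliance                         # daN/mm
print(stiffness)
```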
At this stage, the column matrices of the small displacements torsors and of the mechanical actions torsors are known. The flexibility matrix [C0] of the system is deduced by:

    [CO] = {T}⁻¹ {D}    (7)

The inversion of the experimental flexibility matrix gives the global stiffness matrix [K].
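This inversion step, equations (3) and (7), can be sketched as follows (illustrative code; the 3×3 compliance values are made up for the example and are not measured data).

```python
import numpy as np

# The measured flexibility (compliance) matrix [C0] maps loads to small
# displacements, {D} = [C0]{T}; inverting it recovers the stiffness
# matrix, [K] = [C0]^-1, so that {T} = [K]{D}.

C0 = np.array([[2.0e-7, 1.0e-8, 0.0],
               [1.0e-8, 3.0e-7, 0.0],
               [0.0,    0.0,    5.0e-8]])   # compliance, m/N (synthetic)

K = np.linalg.inv(C0)                       # stiffness, N/m
T = np.array([100.0, 0.0, 50.0])            # applied load (N)
D = C0 @ T                                  # resulting displacements (m)

# consistency check: K @ D recovers the applied load
assert np.allclose(K @ D, T)
print(K[2, 2])  # decoupled z term: 1 / 5e-8 = 2e7 N/m
```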
4.2.1 Stiffness matrix of BT

Figure 11 presents a loading example in the x direction of BT (see [3]). Similar loading experiments in the y and z directions of BT are carried out. For each x, y or z direction, measurements are taken at three points and captured by a data acquisition card installed in a PC. The stored data can be retrieved and used for analysis when required. Using the Labview software we obtain the experimental flexibility matrix [C0]. A simple inversion gives the global stiffness matrix [KBT].

Fig. 11 Example of loading in the x BT direction.
[KBT] =
    | 5×10⁶    2.7×10⁶  1.7×10⁶  6.7×10⁶  8.7×10⁵  3.4×10⁶ |
    | 7.7×10⁶  4×10⁶    2×10⁶    7.5×10⁶  3.3×10⁵  1.7×10⁶ |
    | 2.9×10⁶  5.8×10⁶  4.6×10⁶  1.4×10⁶  1.7×10⁶  1.3×10⁷ |
    | 7.8×10⁶  7×10⁶    1.4×10⁶  1.5×10⁶  1.6×10⁶  1.2×10⁷ |
    | 3×10⁶    1.8×10⁶  7.4×10⁵  1.5×10⁶  4.7×10⁵  5.8×10⁶ |
    | 5×10⁶    2.8×10⁶  1×10⁶    1.9×10⁶  2×10⁵    1.7×10⁶ |_{O,xyz}    (8)
In addition, in figure 10 we made a simplifying assumption by retaining, for each given deformation level (for example AB), the average load (C) between the value in charge (A) and the value in discharge (B). Consequently, it is advisable to check the validity of this assumption. This is done by estimating, at each loading level, the error made by using the median value between the charge and the discharge. The use of the least squares method allows the evaluation of the error made at each loading level, i.e. for each element of the matrix [KBT]. Thus, an error matrix can be built. This matrix, noted [Kerrors%], gives the error attached to each element of the matrix [KBT]; it is given in (9).
[Kerrors%] =
    | 0.1  0.7  3.8   2.3  5.7  1.7 |
    | 0.6  1.4  1.2   2.5  3.3  2.5 |
    | 4.3  1.2  0.05  4.4  0.1  4.7 |
    | 0.4  0.1  0.1   0.4  1    2.2 |
    | 0.1  0.1  0.05  0.7  0.1  0.3 |
    | 0.1  0.2  0.3   1.3  0.2  1.2 |    (9)
It is noted that the error does not exceed 6 %, which is largely acceptable.

In addition, for a "perfect" decoupled system [K] would be diagonal, with stiffness values in N/m as elements. The matrix [K] obtained here is a block matrix. Comparing this matrix to the matrix [K]_{A,xyz} (4), we establish the following correspondences between elements:
- the elements of the 3 × 3 matrix in the upper right corner correspond to the displacement stiffness values (N/m), noted [KF] in (4);
- the elements of the 3 × 3 matrix in the lower left corner correspond to the rotation stiffness values (N/rad), noted [KC] in (4);
- the elements of the 3 × 3 matrix in the upper left corner correspond to the "displacements / rotations" couplings, noted [KFC] in (4);
- the elements of the 3 × 3 matrix in the lower right corner correspond to the "rotations / displacements" couplings, noted [KCF] in (4).

These last two coupling blocks ([KFC], [KCF]) are not taken into account here. Only the displacement stiffness part (noted [KF,BT] below) is needed for our forthcoming dynamic model (not presented here).
[KF,BT] =
    | 6.7×10⁶  8.7×10⁵  3.4×10⁶ |
    | 7.5×10⁶  3.3×10⁵  1.7×10⁶ |
    | 1.4×10⁶  1.7×10⁶  1.3×10⁷ |_{O,xyz}    (10)
4.2.2 Stiffness matrix of BW

As the BW geometry is simpler, we limited ourselves to measuring the displacement stiffness values, using three displacement transducers along the three main directions. The loading was carried out with the dynamometer (cf. figure 12).
Fig. 12 Experimental device for the BW static characterization.
With this geometry, the BW has a very high rigidity along the z axis, and its deformation is very small compared with that along the principal stiffness direction. The BW behaviour is linear, with a nearly null hysteresis. At the loading and unloading points, the part is not influenced by friction phenomena or by the various fits generated by the assembled elements, such as the spindle or the ball bearings. The stiffness matrix [KF,BW] obtained along the three directions is:
[KF,BW] =
    | 1.4×10⁷  0      0        |
    | 0        2×10⁷  0        |
    | 0        0      2.85×10⁸ |_{O,xyz}    (11)
We may notice that the spindle and its ball bearings decrease the global BW rigidity compared with the rigidity calculated in section 3.1.
4.3 Experimental determination of the machining system stiffness matrix

In order to know the stiffness values of the machining system in the three directions, the elastic interaction BT-BW is modelled by static stiffnesses assembled in parallel (figure 13). The chip is the common point between deformation and force; it connects the two stiffnesses (BT / BW). The sum of the two displacement stiffness matrices [KF,BT] and [KF,BW] gives the displacement stiffness matrix of the machining system, [KF,WAM].

Fig. 13 Static stiffness assembled in parallel.
[KF,WAM] =
    | 2.7×10⁷  8.7×10⁵  3.4×10⁶ |
    | 7.5×10⁶  2.1×10⁷  1.7×10⁶ |
    | 1.4×10⁶  1.7×10⁶  2.9×10⁸ |_{O,xyz}    (12)
In this form, the matrix of the machining system shows that the main diagonal has elements of a higher order than the others. Moreover, the diagonalization of the matrix is possible and gives:

    [KF,WAM-d] =
    | 2.6×10⁷  0        0       |
    | 0        2.2×10⁷  0       |
    | 0        0        2.9×10⁸ |_{O,xyz}    (13)
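The parallel assembly and diagonalization steps can be sketched numerically with the values printed in (10) and (11) (illustrative code; since the measured matrix is slightly non-symmetric, the sketch diagonalizes its symmetric part, so the results approximate but do not exactly reproduce (13), which the authors obtained from the full measured data).

```python
import numpy as np

# Parallel assembly of the two blocks: [KF,WAM] = [KF,BT] + [KF,BW],
# then principal stiffnesses via eigenvalues of the symmetric part.

K_BT = np.array([[6.7e6, 8.7e5, 3.4e6],
                 [7.5e6, 3.3e5, 1.7e6],
                 [1.4e6, 1.7e6, 1.3e7]])    # N/m, matrix (10)
K_BW = np.diag([1.4e7, 2.0e7, 2.85e8])      # N/m, matrix (11)

K_WAM = K_BT + K_BW                         # stiffnesses add in parallel
S = (K_WAM + K_WAM.T) / 2.0                 # symmetric part
eigvals = np.sort(np.linalg.eigvalsh(S))    # principal stiffnesses, N/m
print(eigvals)  # two values of order 1e7 and one of order 1e8, as in (13)
```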
By Castigliano's theorem we determine the angle K. This angle characterizes the principal direction of deformation; it arises from the interaction of the two blocks BT and BW (figure 14). In figure 14, K-BT and K-BW correspond respectively to the principal directions of deformation of the block Tool and of the block Workpiece with respect to the machine axes.
Fig. 14 Diagram of K angle determination.
The stiffness values [KBT] and [KBW] of the whole elastic system, along the principal axes, are determined by deformation energy minimization, allowing the diagonalization of the matrix [KF,BT]:

    [KF,BT-d] =
    | 4.1×10⁵  0      0       |
    | 0        6×10⁶  0       |
    | 0        0      1.3×10⁷ |_{O,xyz}    (14)

In the plane (O, x, y), with K-BT = 52° and K-BW = 0°, we obtain an angle K = 76°. In the plane (O, y, z), with K-BT = 32° and K-BW = 0°, we obtain an angle K = 65°. Along this direction, the maximum deformation of the system is obtained.
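The diagonalization of [KF,BT] and the reading of a principal direction can be sketched as follows (illustrative code; it symmetrizes the measured matrix (10) before the eigendecomposition, so the angle it prints only approximates the 52° obtained by the paper's energy minimization).

```python
import numpy as np

# Principal deformation directions of the block Tool: diagonalize the
# symmetrized displacement stiffness [KF,BT] and read the in-plane angle
# of the most compliant principal axis.

K_BT = np.array([[6.7e6, 8.7e5, 3.4e6],
                 [7.5e6, 3.3e5, 1.7e6],
                 [1.4e6, 1.7e6, 1.3e7]])   # N/m, matrix (10)
S = (K_BT + K_BT.T) / 2.0
w, V = np.linalg.eigh(S)                   # ascending eigenvalues, axes

soft_axis = V[:, 0]                        # direction of minimum stiffness
angle_xy = np.degrees(np.arctan2(soft_axis[1], soft_axis[0])) % 180.0
print(w)        # principal stiffnesses (compare with (14))
print(angle_xy) # orientation of the softest axis in the (x, y) plane
```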
5 Rotation center

Deacu [14] and Kudinov [24] show that any machine tool is characterized by principal directions of deformation. These principal directions of deformation are functions of the machine structure, of its geometrical configuration and of the cutting parameters used. We can observe either a very stiff or a very elastic behaviour, according to the direction. Here, the tool is regarded as an integral part of the block BT. The aim is to determine the stiffness center CRBT of the elastic system BT [26]. This stiffness center corresponds to the rotation center of the block BT with respect to the bed. Obtaining CRBT consists in finding the intersection points of the perpendiculars to the displacements.
5.1 Experimental step
The stiffness center is expressed in the tool coordinate system, with origin at the point O
of the (x, y, z) frame. The procedure used to obtain the stiffness center is detailed in
figure 15; a known load Fi is applied successively along each of the three directions
(i = x, y, z).
Fig. 15 Experimental procedure to obtain stiffness center CRBT
Under the load, the displacement vector di,j is measured at two points Pi,j in each
direction (i = x, y, z; j = 1, 2). The line (Di,j) is then defined as the straight line
containing the vector di,j and passing through the point Pi,j.
As shown in figure 16, these lines are not exactly coplanar and do not intersect. It is
nevertheless possible to determine, for each direction, an intersection point, named Mi, of
the lines (Di,j), using the following relation for the x direction:
OMx = OPx,1 + λx dx,1    (15)
Using the method of least squares, we minimize the distance dx between the lines Dx,1 and
Dx,2 (figure 16); dx is calculated using the expression:
dx = ‖(OMx − OPx,2) ∧ dx,2‖ / ‖dx,2‖    (16)
and the angle αx corresponds to the angular deviation from coplanarity:
αx = arccos( (Px,1Mx · dx,1) / (‖Px,1Mx‖ ‖dx,1‖) )    (17)
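The pseudo-intersection Mi, the gap of Eq. (16) and the angular deviation of Eq. (17) amount to the classical closest-point computation between two skew lines. The Python sketch below illustrates this; the function name and the choice of the common-perpendicular midpoint as Mi are our assumptions, not notation from the paper:

```python
import numpy as np

def line_pseudo_intersection(p1, u1, p2, u2):
    """Closest-point ("pseudo-intersection") computation for two 3D lines.

    Line i passes through point pi with direction vector ui (numpy arrays).
    Returns M, the midpoint of the common perpendicular (cf. Mi in the
    text), the residual gap between the lines (cf. Eq. (16)), and the
    angle alpha in degrees between P1M and u1 (cf. Eq. (17)).
    """
    w0 = p1 - p2
    a, b, c = u1 @ u1, u1 @ u2, u2 @ u2
    d1, d2 = u1 @ w0, u2 @ w0
    denom = a * c - b * b              # zero only for parallel lines
    s = (b * d2 - c * d1) / denom      # parameter on line 1
    t = (a * d2 - b * d1) / denom      # parameter on line 2
    q1, q2 = p1 + s * u1, p2 + t * u2  # feet of the common perpendicular
    m = 0.5 * (q1 + q2)
    gap = np.linalg.norm(q1 - q2)
    v = m - p1                         # vector from P1 to M
    cos_a = abs(v @ u1) / (np.linalg.norm(v) * np.linalg.norm(u1))
    alpha = np.degrees(np.arccos(np.clip(cos_a, 0.0, 1.0)))
    return m, gap, alpha
```

With the measured lines Dx,1 and Dx,2 as input, this yields Mx, the gap dx and the deviation αx of the text.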
The two lines (Di,j) are separated by a distance of at most δi = 1.8 mm, with an angular
deviation of at most αi ≈ 2° (see table 1).
In the next step, three mean planes Pi are defined by approximation through the points Mi
and containing the lines Di,j.
We can draw the normal ni of each plane Pi passing through the point Mi (figure 17).
Fig. 16 Identifying the intersection point Mx
Writing the geometric closure loop along the three directions:
OCRBT = OMi + fi ni    (18)
where: i = x, y, z.
We obtain a linear system of three vector equations with three unknown factors; the
intersection point CRBT is then sought by the least-squares minimization method. In
practice, the three normal lines nearly intersect (within r = 1.2 mm) at a single point,
noted CRBT (see figure 17), which corresponds to the stiffness center.
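The closure relations of Eq. (18), one per direction, form an overdetermined system: CRBT is the point minimizing the squared distances to the three normal lines (Mi, ni). A least-squares sketch in Python; the helper name and the synthetic values in the test are ours, not the paper's data:

```python
import numpy as np

def nearest_point_to_lines(points, directions):
    """Least-squares point closest to a set of 3D lines.

    Each line passes through points[i] with direction directions[i].
    Solves  sum_i (I - n_i n_i^T) x = sum_i (I - n_i n_i^T) m_i,
    the normal equations of Eq. (18)-style closure relations.
    Singular only if all lines are parallel.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for m, n in zip(points, directions):
        n = n / np.linalg.norm(n)
        P = np.eye(3) - np.outer(n, n)   # projector orthogonal to the line
        A += P
        b += P @ m
    return np.linalg.solve(A, b)
```

Applied to the three normal lines (Mi, ni) of the text, the returned point is the stiffness center CRBT.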
Fig. 17 Determination of the block tool rotation center: CRBT
Charge   Coordinates      Displacement vector              Shift dist.  Angular       Intersection point       CRBT
point    (x, y, z) (mm)   di,j (m)                         δi (m)       shift αi (°)  Mi (m)                   coord. (m)
Px,1     (35, -20, 52)    (9.1×10⁻⁵, 1.7×10⁻⁵, 3.4×10⁻⁵)   1.8×10⁻³     0.28          (0.366, 0.042, 0.178)    0.56
Px,2     (35, -20, 117)   (0.8×10⁻⁵, 1.5×10⁻⁵, 1.5×10⁻⁵)   1.8×10⁻³     0.28          (0.366, 0.042, 0.178)    0.56
Py,1     (116, 15, 56)    (2×10⁻⁵, -2×10⁻⁵, -2×10⁻⁵)       1.7×10⁻³     1.97          (0.086, 0.045, 0.081)    -0.58
Py,2     (116, 15, 103)   (0.2×10⁻⁵, -1.3×10⁻⁵, 9.8×10⁻⁶)  1.7×10⁻³     1.97          (0.086, 0.045, 0.081)    -0.58
Pz,1     (45, -20, 17)    (5.5×10⁻⁶, 6.5×10⁻⁵, 5.5×10⁻⁵)   8.8×10⁻⁴     0.18          (0.033, -0.052, -0.227)  -0.08
Pz,2     (130, -20, 6)    (3×10⁻⁵, 1×10⁻⁵, 8.7×10⁻⁵)       8.8×10⁻⁴     0.18          (0.033, -0.052, -0.227)  -0.08

Table 1 Experimental values used to obtain the stiffness center CRBT (the last column gives, for each direction, the corresponding coordinate of CRBT)
The results obtained using this experimental approach are consistent with other findings in
the literature [24].
A verification is then made by measuring the displacement of the tool point O in the
work-space under load. The experimental results show that the tool point moves on a segment
of a sphere centered at CRBT, which, for small displacements, can be assimilated to a plane
normal to (O, CRBT) (figure 18).
Fig. 18 Determination of the block tool rotation center: CRBT
5.2 Comparison with the stiffness matrix
If we proceed with the diagonalization of the block tool stiffness matrix [KF,BT]
obtained in section 4.2.1, we obtain:
[KF,BT-d] =
| 4.1×10⁵  0       0        |
| 0        6×10⁶   0        |
| 0        0       1.3×10⁷  |  (O, xyz)    (19)
The associated eigenvectors are:
          v1       v2       v3
[V] =
| 0.0688   0.4103   0.6336 |
| 0.9896   0.3389   0.7713 |
| 0.1260   0.8467   0.0603 |  (O, xyz)    (20)
We note that the maximum stiffness is obtained along the direction of the third eigenvector
v3, which lies within about 4° of the direction (CRBT - O) (see figure 19); the minimum and
average stiffnesses lie along the directions of the eigenvectors v1 and v2, respectively.
Fig. 19 Eigenvectors.
6 Conclusion
In the literature, many authors consider only the tool rigidity [15, 31, 32, 40, 41] and,
more recently, the elastic behaviour of the machine [6]; the latter work considers the
rigidity of the tool-device system (BT). Others take into account both the workpiece
deflection and the cutting force between tool and workpiece [8]. In [34], tool and workpiece
are modelled as two separate single-degree-of-freedom spring-mass-damper systems. All these
models are 2D, or at best partially 3D, and thus do not allow a true 3D modeling of the cut.
The approach presented in this paper differs from previous studies in that we use the six
dimensions of the torsors (3 force components, 3 torque components). Consequently, this
places us directly in the most suitable framework for a true 3D model of the cut. Thanks to
the principle of virtual work, we determined the complete small-displacement torsor (3
translations and 3 rotations) associated with the mechanical action torsor (3 force
components, 3 torque components) via the stiffness matrix. The stiffness center and the
rotation center were obtained experimentally. The direction of minimal displacement was
defined on the basis of the experimental model. Using Castigliano's theorem, we determined
the angle K, which characterizes the principal deformation direction. The block workpiece
(BW) was then characterized by its stiffness values and its diagonal displacement matrix.
This diagonal matrix validates the hypothesis and confirms the good rigidity of the
workpiece. The analysis of the whole static behaviour of the system (BT-BW) is thus
validated. An application of this methodology in a truly 3D machining framework is
available in [4]. These results are required for a good 3D model of the cut, which will be
presented in a forthcoming paper.
Acknowledgements
The authors acknowledge Jean Pierre Larivière, Engineer at CNRS (Centre National de la
Recherche Scientifique, France), for the numerical simulations with the SAMCEF software,
and Professor Miron Zapciu for the helpful discussions on this subject.
References
1. Axinte D. A., Belluco W., De Chiffre L.,
Evaluation of cutting force uncertainty components in turning,
International Journal of Machine Tools and Manufacture, 41, pp. 719-730, (2001)
2. Benardos P. G., Mosialos S., Vosniakos G. C.,
Prediction of workpiece elastic deflections under cutting forces in turning,
Robotics and Computer-Integrated Manufacturing, 22, pp. 505-514, (2006)
3. Bisu C. F.,
Etude des vibrations auto-entretenues en coupe tridimensionnelle: nouvelle modélisation
appliquée au tournage,
Ph. D. Thesis, Université Bordeaux 1 and Universitatea Politehnica Bucharest, (2007)
4. Bisu C. F., Darnis P., Gérard A., K'nevez J-Y.,
Displacements analysis of self-excited vibrations in turning,
International Journal of Advanced Manufacturing Technology, 44, 1-2, pp. 1-16, (2009)
(doi: 10.1007/s00170-008-1815-8)
5. Buyuksagis I. S.,
Analysis of circular marble sawing using a block-cutter,
Ph. D. Thesis, Osmangazi University, Institute of Sciences and Technology, (1998)
6. Cano T., Chapelle F., Lavest J.-M., Ray P.,
A new approach to identifying the elastic behaviour of a manufacturing machine,
International Journal of Machine Tools and Manufacture, 48, pp. 1569-1577, (2008)
7. Cardi A. A., Firpi H. A., Bement M. T., Liang S. Y.,
Workpiece dynamic analysis and prediction during chatter of turning process,
Mechanical Systems and Signal Processing, 22, 1481-1494, (2008)
8. Carrino L., Giorleo G., Polini W., Prisco U.,
Dimensional errors in longitudinal turning based on the unified generalized mechanics of
cutting approach. Part I: Three-dimensional theory,
International Journal of Machine Tools and Manufacture, 42, pp. 1509-1515, (2002)
9. Castro L. R., Viéville P., Lipinski P.,
Correction of dynamic effects on force measurements made with piezoelectric
dynamometers,
International Journal of Machine Tools and Manufacture, 46, (14), pp. 1707-1715, (2006)
10. Chen C. K., Tsao Y. M.,
A stability analysis of regenerative chatter in turning process without using tailstock,
International Journal of Advanced Manufacturing Technology, 29, (7-8), pp. 648-654,
(2006)
11. Chen C. K., Tsao Y. M.,
A stability analysis of turning tailstock supported flexible work-piece,
International Journal of Machine Tools and Manufacture, 46, (1), pp. 18-25, (2006)
12. Couétard Y.,
Caractérisation et étalonnage des dynamomètres à six composantes pour torseur associé à
un système de forces,
Ph. D. Thesis, Université Bordeaux 1 Talence, (2000)
13. Dassanayake A. V., Suh C. S.,
On nonlinear cutting response and tool chatter in turning operation,
Communications in Nonlinear Science and Numerical Simulation, 13, (5), pp. 979-1001,
(2008)
14. Deacu I., Pavel G.,
Vibrations des Machines-Outils,
Dacia, Cluj Napoca, (1977)
15. Dimla Sr D. E.,
The impact of cutting conditions on cutting forces and vibration signals in turning with
plane face geometry inserts,
Journal of Materials Processing Technology, 155-156, pp. 1708-1715, (2004)
16. Ganguli A., Deraemaeker A., Preumont A.,
Regenerative chatter reduction by active damping control,
Journal of Sound and Vibration, 300, pp. 847-862, (2007)
17. Gorodetskii Y. I., Budankov A. S., Komarov V. N.,
A system for experimental studies of the dynamics of the process of cutting metal,
Journal of Machinery Manufacture and Reliability, 37, (1), pp. 68-73, (2008)
18. Insperger T., Barton D. A. W., Stepan G.,
Criticality of Hopf bifurcation in state-dependent delay model turning processes,
International Journal of Non-Linear Mechanics, 43, pp. 140-149, (2008)
19. Ispas C., Gheorghiu H., Parausanu I., Anghel V.,
Vibrations des systèmes technologiques,
Agir, Bucarest, (1999)
20. Karabay S.,
Design criteria for electro-mechanical transducers and arrangement for measurement
cutting forces acting on dynamometers,
Materials & Design, 28, pp. 496-506, (2007)
21. Koenigsberger F., Tlusty J.,
Machine Tools Structures, Pergamon Press, (1970)
22. Konig W., Sepulveda E., Lauer-Schmaltz H.,
Zweikomponenten-Schnittkraftmesser,
Industrie-Anzeiger, (1997)
23. Korkut I.,
Design and manufacturing of a dynamometer connected to computer which can do
measuring with strain gages on the lathe,
Ph. D. Thesis, University of Gazi, Institute of Science and Technology, (1996)
24. Kudinov V. A.,
Dinamica Masinilor Unelten,
Tehnicas, Bucarest, (1970)
25. Lapujoulade F., Coffignal G., Pimont, J.,
Cutting forces evaluation during high speed milling,
2nd IDMME'98, 2, pp. 541-549,
Compiègne, France, May, (1998)
26. Marinescu I., Ispas C., Boboc, D.,
Handbook of Machine Tool Analysis,
Dekker M., New York, (2002)
27. Mehdi K., Rigal J-F., Play D.,
Dynamic behavior of thin wall cylindrical workpiece during the turning process, Part 1:
Cutting process simulation,
J. Manuf. Sci. and Engng., 124, pp. 562-568, (2002)
28. Pérez H., Vizan A., Hernandez J. C., Guzman M.,
Estimation of cutting forces in micromilling through the determination of specific cutting
pressures,
Journal of Materials Processing Technology, 190, pp. 18-22, (2007)
29. Pestel E. C., Leckie F. A.,
Matrix methods in elastomechanics,
McGraw-Hill, New York, (1963)
30. Robinson J.,
Analyse matricielle des structures à l'usage des ingénieurs,
Dunod, Paris, (1971)
31. Saglam H., Unsacar F., Yaldiz S.,
Investigation of the effect of rake angle and approaching angle on main cutting force and
tool tip temperature,
International Journal of Machine Tools and Manufacture, 46, (2), pp. 132-141, (2006)
32. Saglam H., Yaldiz S., Unsacar F.,
The effect of tool geometry and cutting speed on main cutting force and tool tip
temperature,
Materials & Design, 28, pp. 355-360, (2002)
33. Salgado M. A., Lopez de Lacalle L.N., Lamikiz A., Munoa J., Sanchez J. A.,
Evaluation of the stiffness chain on the deflection of end-mills under cutting forces,
International Journal of Machine Tools and Manufacture, 45, pp. 727-739, (2005)
34. Seka, M., Srinivas J., Kotaiah K. R., Yang S. H.,
Stability analysis of turning process with tailstock-supported workpiece,
International Journal of Advanced Manufacturing Technology, doi: 10.1007/s001700008-1764-2, (2008)
35. Seker U., Kurt A., Ciftci I.,
Design and construction of a dynamometer for measurement of cutting forces during
machining with linear motion,
Materials & Design, 23, pp. 355-360, (2002)
36. Toh C. K.,
Static and dynamic cutting force analysis when high speed rough milling hardened steel,
Materials & Design, 25, pp. 41-50, (2004)
37. Toulouse D.,
Contribution à la modélisation et à la métrologie de la coupe dans le cas d'un usinage
tridimensionnel, Ph. D. Thesis, Université Bordeaux 1 Talence, (1998)
38. Wang Z. C., Cleghorn W. L.,
Stability analysis of spinning stepped-shaft workpieces in a turning process,
Journal of Sound and Vibration, 250, (2), pp. 356-367, (2002)
39. Yaldiz S., Ünsacar F.,
Design, development and testing of a turning dynamometer for cutting force
measurement,
Materials & Design, 27, 839-846, (2006)
40. Yaldiz S., Ünsacar F.,
A dynamometer design for measurement the cutting forces on turning,
Measurement, 39, pp. 80-89, (2006)
41. Yaldiz S., Ünsacar F., Saglam H.,
Comparison of experimental results obtained by designed dynamometer to fuzzy model
for predicting cutting forces in turning,
Materials & Design, 27, pp.1139-1147, (2006)
H-DenseUNet: Hybrid Densely Connected UNet for
Liver and Tumor Segmentation from CT Volumes
Xiaomeng Li1 , Hao Chen1,2 , Xiaojuan Qi1 , Qi Dou1 , Chi-Wing Fu1 , and Pheng-Ann Heng1
arXiv:1709.07330v2 [] 22 Nov 2017
1 Department of Computer Science and Engineering, The Chinese University of Hong Kong
2 Imsight Medical Technology, Inc
Abstract—Liver cancer is one of the leading causes of cancer
death. To assist doctors in hepatocellular carcinoma diagnosis and
treatment planning, an accurate and automatic liver and tumor
segmentation method is highly demanded in the clinical practice.
Recently, fully convolutional neural networks (FCNs), including
2D and 3D FCNs, serve as the backbone in many volumetric
image segmentation methods. However, 2D convolutions cannot fully
leverage the spatial information along the third dimension while
3D convolutions suffer from high computational cost and GPU
memory consumption. To address these issues, we propose a
novel hybrid densely connected UNet (H-DenseUNet), which
consists of a 2D DenseUNet for efficiently extracting intra-slice
features and a 3D counterpart for hierarchically aggregating
volumetric contexts under the spirit of the auto-context algorithm
for liver and tumor segmentation. We formulate the learning
process of H-DenseUNet in an end-to-end manner, where the
intra-slice representations and inter-slice features can be jointly
optimized through a hybrid feature fusion (HFF) layer. We
extensively evaluated our method on the dataset of MICCAI
2017 Liver Tumor Segmentation (LiTS) Challenge. Our method
outperformed other state-of-the-art methods on tumor
segmentation and achieved very competitive performance for liver
segmentation even with a single model.
Index Terms—CT, liver tumor segmentation, deep learning,
hybrid features
[Figure 1 panels, from left to right: raw image, ground truth, 3D display.]
Figure 1: Examples of contrast-enhanced CT scans showing
the large variations of shape, size, location of liver lesion. Each
row shows a CT scan acquired from individual patient. The
red regions denote the liver while the green ones denote the
lesions (see the black arrows above).
I. I NTRODUCTION
Liver cancer is one of the most common cancer diseases
in the world and causes massive deaths every year [1, 2].
The accurate measurements from CT, including tumor volume, shape, location and further functional liver volume, can
assist doctors in making accurate hepatocellular carcinoma
evaluation and treatment planning. Traditionally, the liver and
liver lesion are delineated by radiologists on a slice-by-slice
basis, which is time-consuming and prone to inter- and intra-rater variations. Therefore, automatic liver and liver tumor
segmentation methods are highly demanded in the clinical
practice.
Automatic liver segmentation from the contrast-enhanced
CT volumes is a very challenging task due to the low intensity
contrast between the liver and other neighboring organs (see
the first row in Figure 1). Moreover, radiologists usually enhance CT scans by an injection protocol for clearly observing
tumors, which may increase the noise inside the images on
the liver region [3]. Compared with liver segmentation, liver
tumor segmentation is considered to be a more challenging
task. First, the liver tumor has various size, shape, location
and numbers within one patient, which hinders the automatic
segmentation, as shown in Figure 1. Second, some lesions do
not have clear boundaries, limiting the performance of solely
edge based segmentation methods (see the lesions in the third
row of Figure 1). Third, many CT scans consist of anisotropic
dimensions with high variations along the z-axis direction (the
voxel spacing ranges from 0.45mm to 6.0mm), which further
poses challenges for automatic segmentation methods.
To tackle these difficulties, a strong capability for extracting visual features is required. Recently, fully convolutional neural networks (FCNs) have achieved great success
on a broad array of recognition problems [4–12]. Many
researchers advance this stream using deep learning methods
in the liver and tumor segmentation problem and the literature
can be classified into two categories broadly. (1) 2D FCNs,
such as UNet architecture [13], the multi-channel FCN [14],
and the FCN based on VGG-16 [15]. (2) 3D FCNs, where 2D
convolutions are replaced by 3D convolutions with volumetric
data input [16, 17].
In clinical diagnosis, an experienced radiologist usually
observes and segments tumors according to many adjacent
slices along the z-axis. However, 2D FCN based methods
ignore the contexts on the z-axis, which would lead to limited
segmentation accuracy. To be specific, single or three adjacent slices cropped from volumetric images are fed into 2D
FCNs [14, 15] and the 3D segmentation volume is generated
by simply stacking the 2D segmentation maps. Although
adjacent slices are employed, it is still not enough to probe
the spatial information along the third dimension, which may
degrade the segmentation performance. To solve this problem,
some researchers proposed to use tri-planar schemes [4], that
is, three 2D FCNs are applied on orthogonal planes (e.g.,
the xy, yz, and xz planes) and voxel prediction results are
generated by the average of these probabilities. However,
this approach still cannot probe sufficient volumetric contexts,
leading to limited representation ability.
Compared to 2D FCNs, 3D FCNs suffer from high computational cost and GPU memory consumption. The high
memory consumption limits the depth of the network as well
as the filter’s field-of-view, which are the two key factors
for performance gains [18]. The heavy computation of 3D
convolutions also impedes their application to large-scale datasets. Moreover, many researchers have demonstrated
the effectiveness of knowledge transfer (the knowledge learnt
from one source domain efficiently transferred to another
domain) for boosting the performance [19, 20]. Unfortunately,
very few 3D pre-trained models exist, which restricts
the performance and also the adoption of 3D FCNs.
To address the above problems, we propose a novel end-to-end system, called hybrid densely connected UNet (H-DenseUNet), where intra-slice features and 3D contexts are
effectively probed and jointly optimized for accurate liver
and lesion segmentation. Our H-DenseUNet pushes the limit
further than other works with technical contributions on the
following two key factors:
Increased network depth. First, to fully extract high-level
intra-slice features, we design a very deep and efficient training
network based on 2D convolutions, called 2D DenseUNet,
which inherits the advantages of both densely connected
path [21] and UNet connections [5]. Densely connected path
is derived from densely connected network (DenseNet), where
the improved information flow and parameters efficiency
make it easy for training a deep network. Different from
DenseNet [21], we add the UNet connections, i.e., long-range
skip connections, between the encoding part and the decoding
part in our architecture; hence, we can enable low-level spatial
feature preservation for better intra-slice context exploration.
Hybrid feature exploration. Second, to explore the volumetric feature representation, we design an end-to-end training
system, called H-DenseUNet, where intra-slice and inter-slice
features are effectively extracted and then jointly optimized
through the hybrid feature fusion (HFF) layer. Specifically, 3D
DenseUNet is integrated with the 2D DenseUNet by the way
of auto-context mechanism. With the guidance of semantic
probabilities from 2D DenseUNet, the optimization burden in
the 3D DenseUNet can be well alleviated, which contributes
to the training efficiency for 3D contexts extraction. Moreover,
with the end-to-end system, the hybrid feature, consisting of
volumetric features and the high-level representative intra-slice
features, can be automatically fused and jointly optimized
together for better liver and tumor recognition.
In summary, this work has the following achievements:
• We design a DenseUNet to effectively probe hierarchical
intra-slice features for liver and tumor segmentation,
where the densely connected path and UNet connections
are carefully integrated to improve the performance. To
our knowledge, this is the first work that pushes the depth
of 2D networks for liver and tumor segmentation to 167
layers.
• We propose a H-DenseUNet framework to explore hybrid
(intra-slice and inter-slice) features for liver and tumor
segmentation, which elegantly tackles the problems that
2D convolutions neglect the volumetric contexts and 3D
convolutions suffer from heavy computational cost.
• Our framework is an end-to-end system that jointly fuses
and optimizes the hybrid features through the HFF layer,
which can be served as a new paradigm for effectively
exploiting 3D contexts. Compared with other state-of-the-art methods, our method ranked the 1st on lesion
segmentation and achieved very competitive performance
on liver segmentation in the 2017 LiTS Leaderboard.
II. R ELATED W ORK
A. Hand-crafted feature based methods
In the past decades, a lot of algorithms, including thresholding [22, 23], region growing, deformable model based methods [24, 25] and machine learning based methods [26–30] have
been proposed to segment liver and liver tumor. Threshold-based methods classified foreground and background according to whether the intensity value is above a threshold. Variations of region growing algorithms were also popular in the
liver and lesion segmentation task. For example, Wong et al.
[24] segmented tumors by a 2D region growing method with
knowledge-based constraints. Level set methods also attracted
attentions from researchers with the advantages of numerical
computations involving curves and surfaces [31]. For example,
Jimenez-Carretero et al. [25] proposed to classify tumors by
a multi-resolution 3D level set method coupled with adaptive
curvature technique. A large variety of machine learning based
methods have also been proposed for liver tumor segmentation.
For example, Huang et al. [26] proposed to employ the random
feature subspace ensemble-based extreme learning machine
(ELM) for liver lesion segmentation. Vorontsov et al. [27]
proposed to segment tumors by support vector machine (SVM)
classifier and then refined the results by the omnidirectional
deformable surface model. Similarly, Kuo et al. [29] proposed
to learn SVM classifier with texture feature vector for liver
tumor segmentation. Le et al. [28] employed the fast marching
algorithm to generate initial regions and then classified tumors
by training a noniterative single hidden layer feedforward
network (SLFN). To speed up the segmentation algorithm,
Chaieb et al. [32] adopted a bootstrap sampling approach for
efficient liver tumor segmentation.
B. Deep learning based methods
Convolutional neural networks (CNNs) have achieved great
success in many object recognition problems in computer
vision community. Many researchers followed this trend and
proposed to utilize various CNNs for learning feature representations in the application of liver and lesion segmentation.
For example, Ben-Cohen et al. [15] proposed to use a FCN
for liver segmentation and liver-metastasis detection in CT
examinations. Christ et al. [13, 33] proposed a cascaded
FCN architecture and dense 3D conditional random fields
(CRFs) to automatically segment liver and liver lesions. In the
meanwhile, Sun et al. [14] designed a multi-channel FCN to
segment liver tumors from CT images, where the probability
maps were generated by the feature fusion from different
channels.
Recently, during the 2017 ISBI LiTS challenge, Han [34],
proposed a 2.5D 24-layer FCN model to segment liver tumors,
where the residual block was employed as the repetitive
building blocks and the UNet connection was designed across
the encoding part and decoding part. 2.5D refers to using 2D
convolutional neural network with the input of adjacent slices
from the volumetric images. Both Vorontsov et al. [35] and
Chlebus et al. [36] achieved the second place in the ISBI challenge. Vorontsov et al. [35] also employed ResNet-like residual
blocks and UNet connections with 21 convolutional layers,
which is a bit shallower and has fewer parameters compared
to that proposed by Han [34]. Chlebus et al. [36] designed
a 28-layer UNet architecture in two individual models and
subsequently filtered the false positives of tumor segmentation
results by a random forest classifier. Instead of using 3D
FCNs, all of the top results employed 2D FCNs with different
network depths, showing the efficacy of 2D FCNs regarding
the underlying volumetric segmentation problem. However, all
these networks are shallow and ignore the 3D contexts, which
limit the high-level feature extraction capability and restrict
the recognition performance.
III. M ETHOD
Figure 2 shows the pipeline of our proposed method for liver
and tumor segmentation. To reduce the overall computation
time, a simple ResNet architecture [34] is trained to get a
quick but coarse segmentation of liver. With the region of
interest (ROI), our proposed H-DenseUNet efficiently probes
intra-slice and inter-slice features through a 2D DenseUNet
f2d and a 3D counterpart f3d , followed by jointly optimizing
the hybrid features in the hybrid feature fusion (HFF) layer
for accurate liver and lesion segmentation.
A. Deep 2D DenseUNet for Intra-slice Feature Extraction
The intra-slice feature extraction part follows the structure
of DenseNet-161 [21], which is composed of repetitive densely
connected building blocks with different output dimensions.
In each densely connected building block, there are direct
connections from any layer to all subsequent layers, as shown
in Figure 2(c). Each layer produces k feature maps and k is
called growth rate. One advantage of the dense connectivity
between layers is that it has fewer output dimensions than
traditional networks, avoiding learning redundant features.
Moreover, the densely connected path ensures the maximum
information flow between layers, which improves the gradient
flow, and thus alleviates the burden in searching for the optimal
solution in a very deep neural network.
However, the original DenseNet-161 [21] is designed for
the object classification task while our problem belongs to
the segmentation topics. Moreover, a deep FCN network for
segmentation tasks actually contains several max-pooling and
upsampling operations, which may lead to the information loss
of low-level (i.e., high resolution) features. Given above two
considerations, we develop a 2D DenseUNet, which inherits
both advantages of densely connected path and UNet-like connections [5]. Specifically, the dense connection between layers
is employed within each micro-block to ensure the maximum
information flow while the UNet long range connection links
the encoding part and the decoding part to preserve low-level
information.
Let I ∈ Rn×224×224×12×1 denote the input training samples
(for 224 × 224 × 12 input volumes) with ground-truth labels
Y ∈ Rn×224×224×12×1 , where n denotes the batch size of
the input training samples and the last dimension denotes the
channel. Yi,j,k = c since each pixel (i, j, k) is tagged with
class c (background, liver and tumor). Let function F denote
the transformation from the volumetric data to three adjacent
slices. Specifically, every three adjacent slices along z-axis are
stacked together and the number of groups can be transformed
to the batch dimension. For example, I2d = F (I), where
I2d ∈ R12n×224×224×3 denotes the input samples of 2D
DesneUNet. The detailed transformation process is illustrated
in Figure 2(d). Because of the transformation, the 2D and 3D
DenseUNet can be jointly trained, which will be described in
detail in section B. For convenience, F −1 denotes the inverse
transformation from three adjacent slices to the volumetric
data. The 2D DenseUNet conducts liver and tumor segmentation,
X2d = f2d (I2d ; θ2d ), X2d ∈ R12n×224×224×64 ,
yˆ2d = f2dcls (X2d ; θ2dcls ), yˆ2d ∈ R12n×224×224×3
(1)
where X2d is the feature map from layer "upsample5_conv"
(see Table I) and yˆ2d is the predicted pixel-wise probabilities
corresponding to the input three adjacent slices.
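The transformation F described above can be sketched directly in numpy. The helper name and the replication padding at the volume borders are our assumptions (the paper does not specify how the first and last slices of a volume are handled):

```python
import numpy as np

def volume_to_adjacent_slices(vol):
    """F: (n, H, W, D, 1) volumes -> (n*D, H, W, 3) slice triplets.

    Each output sample stacks slices (z-1, z, z+1) along the channel
    axis; the z index is folded into the batch dimension, so a 2D
    network sees 12n samples for 12-slice input volumes.
    """
    n, H, W, D, _ = vol.shape
    v = vol[..., 0]                      # (n, H, W, D)
    z = np.arange(D)
    prev = np.clip(z - 1, 0, D - 1)      # replicate slices at the borders
    nxt = np.clip(z + 1, 0, D - 1)
    trip = np.stack([v[..., prev], v[..., z], v[..., nxt]], axis=-1)
    # (n, H, W, D, 3) -> (n, D, H, W, 3) -> (n*D, H, W, 3)
    return np.moveaxis(trip, 3, 1).reshape(n * D, H, W, 3)
```

The inverse transformation F⁻¹ is the corresponding reshape in the opposite direction, reading the z index back out of the batch axis to rebuild the volumetric tensor.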
The illustration and detailed structure of 2D DenseUNet
are shown in Figure 2(c) and Table I, respectively. The depth
of 2D DenseUNet is extended to 167 layers, referred as 2D
DenseUNet-167, which consists of convolution layers, pooling
layers, dense blocks, transition layers and upsampling layers.
The dense block denotes the cascade of several micro-blocks,
in which all layers are directly connected, see Figure 2(c).
To change the size of feature-maps, the transition layer is
employed, which consists of a batch normalization layer and
a 1 × 1 convolution layer followed by an average pooling
layer. A compression factor is included in the transition
layer to compress the number of feature-maps, preventing the
expanding of feature-maps (set as 0.5 in our experiments). The
upsampling layer is implemented by the bilinear interpolation,
followed by the summation with low-level features (i.e., UNet
connections) and a 3 × 3 convolutional layer. Before each
convolution layer, the batch normalization and the Rectified
Linear Unit (ReLU) are employed in the architecture.
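The channel bookkeeping implied by dense connectivity, the growth rate k and the transition-layer compression can be made concrete. The sketch below (helper names ours) uses standard DenseNet-161 settings consistent with the text: growth rate k = 48, 96 initial feature maps, 6 micro-blocks in the first dense block, and a compression factor of 0.5:

```python
def dense_block_channels(c_in, growth_rate, num_layers):
    """Channels entering each micro-block of a dense block.

    Micro-block l receives the concatenation of the block input and
    the outputs (growth_rate feature maps each) of all l preceding
    micro-blocks.
    """
    return [c_in + l * growth_rate for l in range(num_layers)]

def transition_channels(c, compression=0.5):
    """A transition layer compresses the feature-map count."""
    return int(c * compression)

# First dense block of a DenseNet-161-style network (k = 48):
per_layer = dense_block_channels(96, 48, 6)  # [96, 144, ..., 336]
c_out = per_layer[-1] + 48                   # channels leaving the block
c_next = transition_channels(c_out)          # after the transition layer
```

This linear growth in channels, followed by halving at each transition, is what keeps the output dimensions smaller than in traditional networks while preserving all earlier feature maps.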
[Figure 2 graphic: (a) 1st stage: coarse liver segmentation with a 2D ResNet; (b) 2nd stage: H-DenseUNet (2D DenseUNet, 3D DenseUNet and HFF layer) for accurate liver and tumor segmentation; (c) illustration of the 2D DenseUNet; (d) illustration of the transformation process.]
Figure 2: The illustration of the pipeline for liver and lesion segmentation. (a) A ResNet is employed for coarse liver
segmentation to reduce computation time. (b) The H-DenseUNet consists of a 2D DenseUNet and its 3D counterpart, which are
responsible for hierarchical feature extraction from intra-slice and inter-slice contexts, respectively. A hybrid feature fusion (HFF)
layer is proposed to fuse intra-slice and inter-slice features and optimize them together for better liver and tumor segmentation.
L(y, ŷ2d′) and L(y, ŷH) are jointly employed to supervise this end-to-end learning process. (c) The illustration of the 2D DenseUNet.
The structure in the orange block is a micro-block and k denotes the growth-rate. (d) The transformation of volumetric data
to three adjacent slices (best viewed in color).
B. H-DenseUNet for Hybrid Feature Exploration
2D DenseUNet with deep convolutions can produce high-level representative in-plane features but neglects the spatial
information along the z dimension, while 3D DenseUNet has
a large GPU computational cost and a limited kernel field-of-view as well as limited network depth. To address these issues,
we propose H-DenseUNet to jointly fuse and optimize the
learned intra-slice and inter-slice features for better liver tumor
segmentation.
To fuse the hybrid features from the 2D and 3D networks, the
feature volume sizes must be aligned. Therefore, the feature
maps and score maps from the 2D DenseUNet are transformed to
the volumetric shape as follows:
X2d′ = F⁻¹(X2d), X2d′ ∈ R^(n×224×224×12×64),
ŷ2d′ = F⁻¹(ŷ2d), ŷ2d′ ∈ R^(n×224×224×12×3),    (2)
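To illustrate the transformation F (volume to overlapping three-slice inputs, Figure 2(d)) and its inverse F⁻¹, here is a minimal pure-Python sketch in which each slice is modelled as a single value; replicating edge slices at the volume boundary is our assumption, since the exact padding is not specified in the text.

```python
# Sketch of F (volume -> overlapping 3-slice inputs) and F^{-1} (per-slice
# 2D outputs -> volume). Slices are modelled as plain values here.

def F(volume):
    d = len(volume)
    # Each slice i is paired with its neighbours; edges are replicated.
    return [(volume[max(i - 1, 0)], volume[i], volume[min(i + 1, d - 1)])
            for i in range(d)]

def F_inv(slice_outputs):
    # The 2D network emits one output per 3-slice input; stacking them
    # along the z axis restores the volumetric shape.
    return list(slice_outputs)

vol = [10, 20, 30, 40]
triples = F(vol)
assert triples[0] == (10, 10, 20)   # first slice replicated at the boundary
assert triples[2] == (20, 30, 40)
# Taking the centre slice of each triple recovers the original volume.
assert F_inv(t[1] for t in triples) == vol
```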
Then the 3D DenseUNet distills the visual features with 3D
contexts by concatenating the original volumes I with the
contextual information ŷ2d′ from the 2D network. Specifically,
the detectors in the 3D counterpart are trained not only
on the features probed from the original images, but also on
the probabilities of a large number of context pixels from the 2D
DenseUNet. With the guidance of these supporting context
pixels, the burden of searching for the optimal solution in the
3D counterpart is alleviated, which significantly improves the learning efficiency of the 3D network.
The learning process of 3D DenseUNet can be described as:
X3d = f3d(I, ŷ2d′; θ3d),
Z = X3d + X2d′,    (3)
where X3d denotes the feature volume from the layer "upsample5_conv" in 3D DenseUNet-65, and Z denotes the hybrid feature,
i.e., the sum of the intra-slice and inter-slice features
from the 2D and 3D networks, respectively. The hybrid feature
is then jointly learned and optimized in the HFF layer:
H = fHFF(Z; θHFF),
ŷH = fHFF_cls(H; θHFF_cls)    (4)
Table I: Architectures of the proposed H-DenseUNet, consisting of the 2D DenseUNet and the 3D counterpart. The symbol
[ ] denotes the long-range UNet summation connections. The second and fourth columns indicate the output size of the current
stage in the two architectures, respectively. Note that each "conv" corresponds to the sequence BN-ReLU-Conv.
Layer              | Feature size      | 2D DenseUNet-167 (k=48)
input              | 224 × 224         | –
convolution 1      | 112 × 112         | 7 × 7, 96, stride 2
pooling            | 56 × 56           | 3 × 3 max pool, stride 2
dense block 1      | 56 × 56           | [1 × 1, 192 conv; 3 × 3, 48 conv] × 6
transition layer 1 | 56 × 56 → 28 × 28 | 1 × 1 conv; 2 × 2 average pool
dense block 2      | 28 × 28           | [1 × 1, 192 conv; 3 × 3, 48 conv] × 12
transition layer 2 | 28 × 28 → 14 × 14 | 1 × 1 conv; 2 × 2 average pool
dense block 3      | 14 × 14           | [1 × 1, 192 conv; 3 × 3, 48 conv] × 36
transition layer 3 | 14 × 14 → 7 × 7   | 1 × 1 conv; 2 × 2 average pool
dense block 4      | 7 × 7             | [1 × 1, 192 conv; 3 × 3, 48 conv] × 24
upsampling layer 1 | 14 × 14           | 2 × 2 upsampling, [dense block 3], 768, conv
upsampling layer 2 | 28 × 28           | 2 × 2 upsampling, [dense block 2], 384, conv
upsampling layer 3 | 56 × 56           | 2 × 2 upsampling, [dense block 1], 96, conv
upsampling layer 4 | 112 × 112         | 2 × 2 upsampling, [convolution 1], 96, conv
upsampling layer 5 | 224 × 224         | 2 × 2 upsampling, 64, conv
convolution 2      | 224 × 224         | 1 × 1, 3
where H denotes the optimized hybrid features and ŷH refers
to the pixel-wise predicted probabilities generated from the
HFF layer fHFF_cls(·). In our experiments, the 3D counterpart
of H-DenseUNet took only 9 hours to converge, which is
significantly faster than training the 3D counterpart on the
original data alone (63 hours).
The detailed structure of the 3D counterpart, called 3D DenseUNet-65, is shown in Table I; it consists of 65
convolutional layers and its growth rate is 32. Compared
with the 2D DenseUNet counterpart, the number of micro-blocks
in each dense block is decreased due to the high memory
consumption of 3D convolutions and the limited GPU memory.
The rest of the network settings are the same as in the 2D
counterpart.
C. Loss Function, Training and Inference Schemes
In this section, we present more details regarding the loss
function, training and the inference schemes.
1) Loss Function: To train the networks, we employed the
weighted cross-entropy function as the loss function, which
is defined as:
L(y, ŷ) = −(1/N) Σ_{i=1}^{N} Σ_{c=1}^{3} w_i^c y_i^c log ŷ_i^c    (5)
where ŷ_i^c denotes the predicted probability that voxel i belongs to class
c (background, liver or lesion), w_i^c denotes the class weight, and y_i^c
indicates the ground-truth label of voxel i.
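A direct pure-Python reading of Eq. (5) — an illustrative sketch, not the actual Keras implementation:

```python
import math

# Weighted cross-entropy over N voxels and 3 classes, following Eq. (5).
# y is one-hot ground truth, y_hat predicted probabilities, w class weights.

def weighted_cross_entropy(y, y_hat, w):
    N = len(y)
    total = 0.0
    for i in range(N):
        for c in range(3):
            total += w[i][c] * y[i][c] * math.log(y_hat[i][c])
    return -total / N

# One voxel whose true class is 1 (liver), predicted with probability 0.5:
y = [[0, 1, 0]]
y_hat = [[0.25, 0.5, 0.25]]
w = [[1.0, 1.0, 1.0]]
loss = weighted_cross_entropy(y, y_hat, w)
assert abs(loss - math.log(2)) < 1e-9   # -log(0.5) = log 2
```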
2) Training Scheme: We first train the ResNet in the same
way as Han [34] to obtain the coarse liver segmentation results.
The parameters of the encoder part of the 2D DenseUNet f2d
are initialized with DenseNet's weights (trained on object
classification) [21], while the decoder part is randomly
initialized. Since the decoder weights start from a
random distribution, we first warm up the
Layer              | Feature size              | 3D DenseUNet-65 (k=32)
input              | 224 × 224 × 12            | –
convolution 1      | 112 × 112 × 6             | 7 × 7 × 7, 96, stride 2
pooling            | 56 × 56 × 3               | 3 × 3 × 3 max pool, stride 2
dense block 1      | 56 × 56 × 3               | [1 × 1 × 1, 128 conv; 3 × 3 × 3, 32 conv] × 3
transition layer 1 | 56 × 56 × 3 → 28 × 28 × 3 | 1 × 1 × 1 conv; 2 × 2 × 1 average pool
dense block 2      | 28 × 28 × 3               | [1 × 1 × 1, 128 conv; 3 × 3 × 3, 32 conv] × 4
transition layer 2 | 28 × 28 × 3 → 14 × 14 × 3 | 1 × 1 × 1 conv; 2 × 2 × 1 average pool
dense block 3      | 14 × 14 × 3               | [1 × 1 × 1, 128 conv; 3 × 3 × 3, 32 conv] × 12
transition layer 3 | 14 × 14 × 3 → 7 × 7 × 3   | 1 × 1 × 1 conv; 2 × 2 × 1 average pool
dense block 4      | 7 × 7 × 3                 | [1 × 1 × 1, 128 conv; 3 × 3 × 3, 32 conv] × 8
upsampling layer 1 | 14 × 14 × 3               | 2 × 2 × 1 upsampling, [dense block 3], 504, conv
upsampling layer 2 | 28 × 28 × 3               | 2 × 2 × 1 upsampling, [dense block 2], 224, conv
upsampling layer 3 | 56 × 56 × 3               | 2 × 2 × 1 upsampling, [dense block 1], 192, conv
upsampling layer 4 | 112 × 112 × 6             | 2 × 2 × 2 upsampling, [convolution 1], 96, conv
upsampling layer 5 | 224 × 224 × 12            | 2 × 2 × 2 upsampling, 64, conv
convolution 2      | 224 × 224 × 12            | 1 × 1 × 1, 3
network without UNet connections. After several iterations,
the UNet connections are added to jointly fine-tune the model.
To effectively train the H-DenseUNet, we first optimize
f2d(·) and f2dcls(·) with the cross-entropy loss L(y, ŷ2d′) on our
dataset. Secondly, we fix the parameters in f2d(·) and f2dcls(·),
and focus on training f3d(·), fHFF(·) and fHFF_cls(·) with the
cross-entropy loss L(y, ŷH), where these parameters are all randomly initialized. Finally, the whole network is jointly fine-tuned with the following combined loss:
Ltotal = λ L(y, ŷ2d′) + L(y, ŷH)    (6)
where λ is a balancing weight, empirically set to 0.5 in our experiments.
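Eq. (6) in code form — a trivial sketch, with λ = 0.5 as stated above:

```python
# Combined loss of Eq. (6): the 2D auxiliary loss is down-weighted by lambda.
def total_loss(loss_2d, loss_hybrid, lam=0.5):
    return lam * loss_2d + loss_hybrid

assert total_loss(4.0, 1.0) == 3.0   # 0.5 * 4.0 + 1.0
```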
3) Inference Scheme: In the test stage, we first obtain the
coarse liver segmentation result. Then H-DenseUNet generates accurate liver and tumor predicted probabilities within
the ROI. Thresholding is applied to obtain the liver tumor
segmentation result. To avoid holes in the liver, largest
connected component labeling is performed to refine the
liver result. After that, the final lesion segmentation result is
obtained by removing lesions outside the final liver region.
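The largest-connected-component refinement can be sketched in pure Python as a breadth-first flood fill (2D with 4-connectivity here for brevity; the actual implementation operates on 3D masks):

```python
from collections import deque

# Keep only the largest 4-connected component of a binary mask.
def largest_component(mask):
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    best = set()
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                comp, q = set(), deque([(r, c)])
                seen[r][c] = True
                while q:                      # BFS over the component
                    y, x = q.popleft()
                    comp.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    return [[1 if (r, c) in best else 0 for c in range(cols)]
            for r in range(rows)]

mask = [[1, 1, 0, 0],
        [1, 0, 0, 1],
        [0, 0, 0, 1]]
# The 3-pixel component survives; the 2-pixel one is removed.
assert largest_component(mask) == [[1, 1, 0, 0],
                                   [1, 0, 0, 0],
                                   [0, 0, 0, 0]]
```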
IV. E XPERIMENTS AND R ESULTS
A. Dataset and Pre-processing
We tested our method on the competitive dataset of the MICCAI
2017 LiTS Challenge, which contains 131 and 70 contrast-enhanced 3D abdominal CT scans for training and testing,
respectively. The dataset was acquired by different scanners
and protocols at six different clinical sites, with in-plane
resolutions varying widely from 0.55 mm to 1.0 mm and slice
spacing from 0.45 mm to 6.0 mm.
For image preprocessing, we truncated the image intensity
values of all scans to the range of [-200,250] HU to remove
the irrelevant details. For coarse liver segmentation, all the
Figure 3: Training losses of 2D DenseUNet with and without the pre-trained model, 2D DenseNet with the pre-trained model,
3D DenseUNet without the pre-trained model, as well as H-DenseUNet (best viewed in color).
training images were resampled to a fixed resolution of 0.69 ×
0.69 × 1.0 mm³. Since the liver lesions are extremely small
in some training cases, we trained the networks with the original
images to avoid possible artifacts from image resampling.
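The intensity truncation described above amounts to a per-voxel clamp; a minimal sketch, using the [−200, 250] HU window stated in the text:

```python
# Clamp CT intensities to the [-200, 250] HU window to remove irrelevant
# details (air, bone) before training.
def truncate_hu(scan, lo=-200, hi=250):
    return [min(max(v, lo), hi) for v in scan]

assert truncate_hu([-1000, 0, 100, 3000]) == [-200, 0, 100, 250]
```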
B. Evaluation Metrics
Following the evaluation protocol of the 2017 LiTS challenge, we
employed the Dice per case score and the Dice global score to evaluate the liver and tumor segmentation performance.
The Dice per case score refers to the average Dice score per
volume, while the Dice global score is the Dice score evaluated
by combining all datasets into one. Root mean square error
(RMSE) is also adopted to measure the tumor burden.
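The distinction between the two Dice variants can be made precise with a small pure-Python sketch: Dice per case averages the score over volumes, while Dice global pools all volumes before scoring, so the two can differ on the same predictions.

```python
# Dice per case vs. Dice global over binary masks flattened to lists.

def dice(pred, gt):
    inter = sum(p * g for p, g in zip(pred, gt))
    return 2.0 * inter / (sum(pred) + sum(gt))

def dice_per_case(preds, gts):
    # Average the Dice score over the individual volumes.
    return sum(dice(p, g) for p, g in zip(preds, gts)) / len(preds)

def dice_global(preds, gts):
    # Pool all volumes into one before scoring.
    return dice([v for p in preds for v in p],
                [v for g in gts for v in g])

preds = [[1, 1, 0], [1, 0, 0]]
gts   = [[1, 1, 0], [0, 1, 0]]
assert dice_per_case(preds, gts) == 0.5            # (1.0 + 0.0) / 2
assert abs(dice_global(preds, gts) - 2 / 3) < 1e-9
```

On this toy example the per-case score (0.5) and the global score (2/3) differ, because pooling lets the well-segmented volume dominate.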
C. Implementation Details
In this section, we present more details regarding the
implementation environment and data augmentation strategies.
The model was implemented using the Keras package [37]. The
initial learning rate was 0.01 and decayed according to the
polynomial schedule lr = base_lr × (1 − iterations/total_iterations)^0.9. We
used stochastic gradient descent with momentum.
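The quoted decay rule is the standard polynomial ("poly") schedule; one way to read it (our interpretation, with the base rate held fixed at 0.01 rather than updated in place):

```python
# Polynomial ("poly") learning-rate decay with power 0.9.
def poly_lr(base_lr, iteration, total_iterations, power=0.9):
    return base_lr * (1 - iteration / total_iterations) ** power

assert poly_lr(0.01, 0, 100) == 0.01          # start of training
assert poly_lr(0.01, 100, 100) == 0.0         # fully decayed
assert abs(poly_lr(0.01, 50, 100) - 0.01 * 0.5 ** 0.9) < 1e-15
```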
For data augmentation, we adopted random mirroring and
scaling between 0.8 and 1.2 for all training data to alleviate
overfitting. Training the 2D DenseUNet model
took about 21 hours using two NVIDIA Titan Xp GPUs with
12 GB memory, while the 3D counterpart took about 9
hours under the same settings. In other words, the total training
time for H-DenseUNet was about 30 hours. In the test phase,
the total processing time for one subject depends on the number
of slices, ranging from 30 seconds to 200 seconds.
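A 1D sketch of the augmentation described above (our illustration: mirroring flips the sample, and the scale factor is drawn uniformly from [0.8, 1.2]; in the actual pipeline the scaling is spatial):

```python
import random

# Random mirror plus a scale factor drawn uniformly from [0.8, 1.2].
# The scaling is applied to values here only to keep the sketch simple.
def augment(sample, rng):
    if rng.random() < 0.5:           # random mirror
        sample = sample[::-1]
    scale = rng.uniform(0.8, 1.2)    # random scaling factor
    return [v * scale for v in sample], scale

rng = random.Random(42)
out, scale = augment([1.0, 2.0, 3.0], rng)
assert 0.8 <= scale <= 1.2
assert len(out) == 3
```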
D. Ablation Analysis of H-DenseUNet
In this section, we conduct comprehensive experiments to
analyze the effectiveness of our proposed H-DenseUNet.
1) Effectiveness of the Pre-trained Model: One advantage
of the proposed method is that we can train the network by
transfer learning with a pre-trained model, which is crucial for
finding an optimal solution for the network. Here, we analyze
the learning behaviors of the 2D DenseUNet with and without
the pre-trained model. Both experiments were conducted
under the same experimental settings. From Figure 3, it is
clearly observed that with the pre-trained model, the 2D DenseUNet converges faster and achieves a lower loss value, which
shows the importance of utilizing a pre-trained model with
transfer learning. The test results in Table II demonstrate
that the pre-trained model consistently helps the network achieve better
performance. Our proposed H-DenseUNet inherits
this advantage, which plays an important role in achieving the
promising results.
2) Comparison of 2D and 3D DenseUNet: We compare the
inherent performance of the 2D DenseUNet and the 3D DenseUNet
to validate that using a 3D network alone may be suboptimal. The
number of parameters is one of the key elements in measuring
model representation capability, thus both 2D DenseUNet-167
and 3D DenseUNet-65 are designed with the same level of
model complexity (around 40M parameters).
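The ~40M figure can be sanity-checked from the layer configurations; the sketch below counts the weights of a single 2D micro-block (1×1 bottleneck to 4k channels, then 3×3 to k), ignoring batch-normalization parameters and biases, so it is a lower-bound illustration rather than the exact count.

```python
# Weight count of a convolution and of one 2D micro-block (k = 48).
def conv_params(kh, kw, c_in, c_out):
    return kh * kw * c_in * c_out

def micro_block_params(c_in, k=48):
    bottleneck = conv_params(1, 1, c_in, 4 * k)   # 1 x 1, 192 conv
    spatial    = conv_params(3, 3, 4 * k, k)      # 3 x 3, 48 conv
    return bottleneck + spatial

# First micro-block of dense block 1 (96 input channels):
assert micro_block_params(96) == 96 * 192 + 9 * 192 * 48   # 101376
```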
We compare the learning behaviors of the two networks
without using the pre-trained model. Figure 3 shows
that the 2D DenseUNet achieves better performance than
the 3D DenseUNet, which highlights the effectiveness and
efficiency of 2D convolutions with a deep architecture. This
is because a 3D kernel consumes so much GPU memory
that the network depth and width are limited, leading to weak
representation capability. In addition, the 3D DenseUNet took
much more training time (63 hours) to converge compared
to the 2D DenseUNet (21 hours).
Besides the heavy computational cost of the 3D network,
another drawback is that hardly any pre-trained models
exist for 3D networks. From Table II, compared with the
results generated by the 3D DenseUNet, the 2D DenseUNet with the pre-trained model achieved improvements of 8.9 and 3.0 (Dice: %)
on the lesion segmentation results, as measured by the Dice
per case and Dice global scores, respectively.
3) Effectiveness of UNet Connections: We analyze the
effectiveness of the UNet connections in our proposed framework.
Both the 2D DenseNet and the DenseUNet are trained with the same
pre-trained model and training strategies. The difference is
that the DenseUNet contains long-range connections between
the encoding part and the decoding part to preserve high-resolution features. As shown in Figure 3, DenseUNet
clearly achieves a lower loss value than
DenseNet, demonstrating that the UNet connections help
the network converge to a better solution. The experimental
results in Table II consistently demonstrate that the lesion
segmentation performance is boosted by a large margin
when UNet connections are embedded in the network.
4) Effectiveness of Hybrid Feature Fusion: To validate
the effectiveness of the hybrid architecture, we compare the
learning behaviors of H-DenseUNet and the 2D DenseUNet. It is
observed that the loss curve for H-DenseUNet begins at around
0.04. This is because we fine-tune the H-DenseUNet on the basis
of the 2D DenseUNet, which serves as a good initialization. Then
Table II: Segmentation results by ablation study of our methods on the test dataset (Dice: %).

Model                                  | Lesion: Dice per case | Lesion: Dice global | Liver: Dice per case | Liver: Dice global
3D DenseUNet without pre-trained model | 59.4 | 78.8 | 93.6 | 92.9
UNet [36]                              | 65.0 | –    | –    | –
ResNet [34]                            | 67.0 | –    | –    | –
2D DenseUNet without pre-trained model | 67.7 | 80.1 | 94.7 | 94.7
2D DenseNet with pre-trained model     | 68.3 | 81.8 | 95.3 | 95.9
2D DenseUNet with pre-trained model    | 70.2 | 82.1 | 95.8 | 96.3
H-DenseUNet                            | 72.2 | 82.4 | 96.1 | 96.5
Table III: Leaderboard of the 2017 Liver Tumor Segmentation (LiTS) Challenge (Dice: %, until 1st Nov. 2017)

Team          | Lesion: Dice per case | Lesion: Dice global | Liver: Dice per case | Liver: Dice global | Tumor burden RMSE
our           | 72.2 | 82.4 | 96.1 | 96.5 | 0.015
IeHealth      | 70.2 | 79.4 | 96.1 | 96.4 | 0.017
hans.meine    | 67.6 | 79.6 | 96.0 | 96.5 | 0.020
superAI       | 67.4 | 81.4 | 0.0  | 0.0  | 1251.447
Elehanx [34]  | 67.0 | –    | –    | –    | –
medical       | 66.1 | 78.3 | 95.1 | 95.1 | 0.023
deepX [38]    | 65.7 | 82.0 | 96.3 | 96.7 | 0.017
Njust768      | 65.5 | 76.8 | 4.10 | 13.5 | 0.920
Medical [35]  | 65.0 | –    | –    | –    | –
Gchlebus [36] | 65.0 | –    | –    | –    | –
predible      | 64.0 | 77.0 | 95.0 | 95.0 | 0.020
Lei [39]      | 64.0 | –    | –    | –    | –
ed10b047      | 63.0 | 77.0 | 94.0 | 94.0 | 0.020
chunliang     | 62.5 | 78.8 | 95.8 | 96.2 | 0.016
yaya          | 62.4 | 79.2 | 95.9 | 96.3 | 0.016

Note: – denotes that the team participated in the ISBI competition and the measurement was not evaluated.
Figure 4: Examples of segmentation results by 2D DenseUNet
and H-DenseUNet on the validation dataset. The red regions
denote the segmented liver while the green ones denote the
segmented lesions. The gray regions denote the true liver while
the white ones denote the true lesions.
the loss value decreases to nearly 0.02, which is attributed
to the hybrid feature fusion learning. Figure 3 shows that
H-DenseUNet converges to a smaller loss value than
the 2D DenseUNet, which indicates that the hybrid architecture contributes to the performance gains. Compared
with the 2D DenseUNet, our proposed H-DenseUNet consistently advances
the segmentation results on both measurements for liver
and tumor segmentation, as shown in Table II.
The performance gains indicate that contextual information
along the z dimension indeed contributes to the recognition
of lesion and liver, especially for lesions that have
blurred boundaries and are considered difficult to recognize.
Figure 5: Examples of liver and tumor segmentation results of
H-DenseUNet from the test dataset. The red regions denote
the liver and the green ones denote the tumors.
Figure 4 shows some segmentation results achieved by the 2D
DenseUNet and H-DenseUNet on the validation dataset. It can be
observed that H-DenseUNet achieves much better results
than the 2D DenseUNet. Moreover, we trained H-DenseUNet in
an end-to-end manner, where the 3D contexts can also help
extract more representative in-plane features. The end-to-end
system jointly optimizes the 2D and 3D networks, so that the
hybrid features can be fully explored. Figure 5 presents some
examples of liver and tumor segmentation results of our H-DenseUNet on the test dataset. We can observe that most small
targets as well as large objects are well segmented.
E. Comparison with Other Methods
There were more than 50 submissions to the 2017 ISBI and
MICCAI LiTS challenges. Both challenges employed the same
training and test datasets for fair performance comparison.
Different from the ISBI challenge, more evaluation metrics
were added in the MICCAI challenge for comprehensive
comparison. The detailed results of the top 15 teams on the
leaderboard, covering both the ISBI and MICCAI challenges, are
listed in Table III. Our method outperformed other state-of-the-art methods on the tumor segmentation results and achieved very
competitive performance for liver segmentation. For tumor
burden evaluation, our method achieved the lowest estimation
error and ranked 1st among all the teams. It is worth
mentioning that our result was achieved by a single model.
Most of the top teams in the challenges employed deep
learning based methods, demonstrating the effectiveness of
CNN based methods in medical image analysis. For example,
Han [34], Vorontsov et al. [35] and Bi et al. [39] all adopted 2D
deep FCNs, where ResNet-like residual blocks were employed
as the building blocks. In addition, Chlebus et al. [36] trained
the UNet architecture in two individual models, followed by
a random forest classifier. In comparison, our method with a
167-layer network consistently outperformed these methods,
which highlights the efficacy of the 2D DenseUNet with a pre-trained model. Our proposed H-DenseUNet further advanced
the segmentation accuracy for both liver and tumor, showing
the effectiveness of the hybrid feature learning process.
Our method achieved the 1st place among all state-of-the-art methods in lesion segmentation and a very competitive result
to deepX [38] for liver segmentation. Note that our method
surpassed deepX by a significant margin in the Dice per case
evaluation for lesions, which is considered notoriously
challenging. Moreover, our result was produced
by a single model while deepX [38] employed a multi-model
combination strategy to improve its results, showing the
efficiency of our method for clinical practice.
V. D ISCUSSION
Automatic liver and tumor segmentation plays an important
role in clinical diagnosis. It provides the precise contours of
the liver and any tumors inside the anatomical segments of
the liver, which assists doctors in the diagnosis process. In
this paper, we present an end-to-end training system to explore hybrid features for automatic liver lesion segmentation,
where we probe 3D contexts effectively under the auto-context
mechanism. Through the hybrid fusion learning of intra-slice and inter-slice features, the segmentation performance
for liver lesions has been improved, which demonstrates the
effectiveness of our H-DenseUNet. Moreover, compared with
other 3D networks [9, 16], our method probes 3D contexts
efficiently. This is crucial in clinical practice, especially
as huge numbers of 3D images, with large image sizes
and many slices, are increasingly accumulated at
clinical sites.
To better understand the performance gains,
we analyze the effectiveness of our method with respect to the
tumor size of each patient. Figure 6 shows the tumor size
Figure 6: Tumor size in each patient of the validation dataset.
Table IV: Effectiveness of our method with regard to the tumor size (Dice: %).

Model       | Total         | Large-tumor group | Small-tumor group
Baseline    | 43.56         | 58.24             | 41.08
H-DenseUNet | 45.04 (+1.48) | 60.59 (+2.35)     | 42.18 (+1.1)

Note: Baseline is the 2D DenseUNet with pre-trained model.
value of 40 CT volumes in the validation dataset, where
the tumor size is obtained by summing up the tumor voxels in each
ground-truth image. It can be observed that the dataset has large
variations in tumor size. For comparison, we divide the
dataset into a large-tumor group and a small-tumor group
by the orange line in Figure 6. From Table IV, we observe
that our method improves the segmentation accuracy by 1.48
(Dice: %) on the whole validation dataset. We also observe
that the large-tumor group achieves a 2.35 (Dice: %) accuracy
improvement, while the score for the small-tumor group advances
only slightly, by 1.1 (Dice: %). From this comparison,
we conclude that the performance gain is mainly attributed to
the improvement of the large-tumor segmentation results.
This is mainly because H-DenseUNet mimics the
diagnosis process of radiologists, where tumors are delineated
by observing several adjacent slices, especially tumors
with blurred boundaries. Once the blurred boundaries are well
segmented, the segmentation accuracy for the large-tumor data
can be improved by a large margin. Although the hybrid
features still contribute to the segmentation of small tumors,
the improvement is limited since small tumors usually occur in
fewer slices. In the future, we will focus on segmentation
of small tumors. Several potential directions will be taken into
consideration, e.g., multi-scale representation structures [40]
and deep supervision [16]. Recently, the perceptual generative
adversarial network (GAN) [41] has been proposed for small
object detection. It generates super-resolved representations
for small objects by discovering the intrinsic structural correlations between small-scale and large-scale objects, which
may also be a potential direction for handling this challenging
problem.
Another key aspect that should be explored in future study is
the potential depth of the H-DenseUNet. In our experiments,
we trained the network using data-parallel training, which is
an effective technique to speed up gradient descent by
parallelizing the computation of the gradient for a mini-batch
across the mini-batch elements. However, the model complexity
is restricted by the GPU memory. In the future, to exploit
the potential depth of the H-DenseUNet, we could train the
network using model-parallel training, where different portions
of the model computation are done on distributed computing
infrastructure for the same batch of examples; this is
another possible direction to further improve the performance.
VI. C ONCLUSION
We present an end-to-end training system, H-DenseUNet, for
liver and tumor segmentation from CT volumes, which is a
new paradigm to effectively probe high-level representative
intra-slice and inter-slice features, followed by optimizing
the features through the hybrid feature fusion layer. The
architecture gracefully addresses the problems that 2D convolutions ignore the volumetric contexts and that 3D convolutions
suffer from heavy computational cost. Extensive experiments
on the 2017 LiTS dataset demonstrated the superiority
of our proposed H-DenseUNet. On a single-model basis,
our method outperformed others by a large margin on lesion
segmentation and achieved a very competitive result on liver
segmentation on the leaderboard. In addition, our network
architecture is inherently general and can be easily extended
to other applications.
R EFERENCES
[1] J. Ferlay, H.-R. Shin, F. Bray, D. Forman, C. Mathers,
and D. M. Parkin, “Estimates of worldwide burden of
cancer in 2008: Globocan 2008,” International journal
of cancer, vol. 127, no. 12, pp. 2893–2917, 2010.
[2] R. Lu, P. Marziliano, and C. H. Thng, “Liver tumor volume estimation by semi-automatic segmentation
method,” in Engineering in Medicine and Biology Society, 2005. IEEE-EMBS 2005. 27th Annual International
Conference of the. IEEE, 2006, pp. 3296–3299.
[3] M. Moghbel, S. Mashohor, R. Mahmud, and M. I. B.
Saripan, “Review of liver segmentation and computer
assisted detection/diagnosis methods in computed tomography,” Artificial Intelligence Review, pp. 1–41, 2017.
[4] A. Prasoon, K. Petersen, C. Igel, F. Lauze, E. Dam,
and M. Nielsen, “Deep feature learning for knee cartilage segmentation using a triplanar convolutional neural
network,” in International conference on medical image
computing and computer-assisted intervention. Springer,
2013, pp. 246–253.
[5] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in
International Conference on Medical Image Computing
and Computer-Assisted Intervention. Springer, 2015, pp.
234–241.
[6] M. F. Stollenga, W. Byeon, M. Liwicki, and J. Schmidhuber, “Parallel multi-dimensional lstm, with application
to fast biomedical volumetric image segmentation,” in
Advances in Neural Information Processing Systems,
2015, pp. 2998–3006.
[7] H. R. Roth, L. Lu, A. Farag, H.-C. Shin, J. Liu, E. B.
Turkbey, and R. M. Summers, “Deeporgan: Multi-level
deep convolutional networks for automated pancreas
segmentation,” in International Conference on Medical
Image Computing and Computer-Assisted Intervention.
Springer, 2015, pp. 556–564.
[8] J. Wang, J. D. MacKenzie, R. Ramachandran, and D. Z.
Chen, “Detection of glands and villi by collaboration of
domain knowledge and deep learning,” in International
Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2015, pp. 20–27.
[9] Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and
O. Ronneberger, “3d u-net: learning dense volumetric
segmentation from sparse annotation,” in International
Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2016, pp. 424–432.
[10] M. Havaei, A. Davy, D. Warde-Farley, A. Biard,
A. Courville, Y. Bengio, C. Pal, P.-M. Jodoin, and
H. Larochelle, “Brain tumor segmentation with deep
neural networks,” Medical image analysis, vol. 35, pp.
18–31, 2017.
[11] H. Chen, Q. Dou, L. Yu, J. Qin, and P.-A. Heng,
“Voxresnet: Deep voxelwise residual networks for brain
segmentation from 3d mr images,” NeuroImage, 2017.
[12] X. Wang, Y. Zheng, L. Gan, X. Wang, X. Sang, X. Kong,
and J. Zhao, “Liver segmentation from ct images using a
sparse priori statistical shape model (sp-ssm),” PloS one,
vol. 12, no. 10, p. e0185249, 2017.
[13] P. F. Christ, M. E. A. Elshaer, F. Ettlinger, S. Tatavarty,
M. Bickel, P. Bilic, M. Rempfler, M. Armbruster, F. Hofmann, M. D’Anastasi et al., “Automatic liver and lesion
segmentation in ct using cascaded fully convolutional
neural networks and 3d conditional random fields,” in
International Conference on Medical Image Computing
and Computer-Assisted Intervention. Springer, 2016, pp.
415–423.
[14] C. Sun, S. Guo, H. Zhang, J. Li, M. Chen, S. Ma, L. Jin,
X. Liu, X. Li, and X. Qian, “Automatic segmentation
of liver tumors from multiphase contrast-enhanced ct
images based on fcns,” Artificial Intelligence in Medicine,
2017.
[15] A. Ben-Cohen, I. Diamant, E. Klang, M. Amitai, and
H. Greenspan, “Fully convolutional network for liver
segmentation and lesions detection,” in International
Workshop on Large-Scale Annotation of Biomedical Data
and Expert Label Synthesis. Springer, 2016, pp. 77–85.
[16] Q. Dou, H. Chen, Y. Jin, L. Yu, J. Qin, and P.-A. Heng,
“3d deeply supervised network for automatic liver segmentation from ct volumes,” in International Conference
on Medical Image Computing and Computer-Assisted
Intervention. Springer, 2016, pp. 149–157.
[17] F. Lu, F. Wu, P. Hu, Z. Peng, and D. Kong, “Automatic
3d liver location and segmentation via convolutional
neural network and graph cut,” International journal of
computer assisted radiology and surgery, vol. 12, no. 2,
pp. 171–182, 2017.
[18] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv
preprint arXiv:1409.1556, 2014.
[19] H. Chen, D. Ni, J. Qin, S. Li, X. Yang, T. Wang, and P. A.
Heng, “Standard plane localization in fetal ultrasound via
domain transferred deep neural networks,” IEEE journal
of biomedical and health informatics, vol. 19, no. 5, pp.
1627–1636, 2015.
[20] N. Tajbakhsh, J. Y. Shin, S. R. Gurudu, R. T. Hurst, C. B.
Kendall, M. B. Gotway, and J. Liang, “Convolutional
neural networks for medical image analysis: Full training
or fine tuning?” IEEE transactions on medical imaging,
vol. 35, no. 5, pp. 1299–1312, 2016.
[21] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in
Proceedings of the IEEE Conference on Computer Vision
and Pattern Recognition, 2017.
[22] L. Soler, H. Delingette, G. Malandain, J. Montagnat,
N. Ayache, C. Koehl, O. Dourthe, B. Malassagne,
M. Smith, D. Mutter et al., “Fully automatic anatomical,
pathological, and functional segmentation from ct scans
for hepatic surgery,” Computer Aided Surgery, vol. 6,
no. 3, pp. 131–142, 2001.
[23] J. H. Moltz, L. Bornemann, V. Dicken, and H. Peitgen,
“Segmentation of liver metastases in ct scans by adaptive
thresholding and morphological processing,” in MICCAI
workshop, vol. 41, no. 43, 2008, p. 195.
[24] D. Wong, J. Liu, Y. Fengshou, Q. Tian, W. Xiong,
J. Zhou, Y. Qi, T. Han, S. Venkatesh, and S.-c. Wang,
“A semi-automated method for liver tumor segmentation
based on 2d region growing with knowledge-based constraints,” in MICCAI workshop, vol. 41, no. 43, 2008, p.
159.
[25] D. Jimenez-Carretero, L. Fernandez-de Manuel, J. Pascau, J. M. Tellado, E. Ramon, M. Desco, A. Santos,
and M. J. Ledesma-Carbayo, “Optimal multiresolution
3d level-set method for liver segmentation incorporating
local curvature constraints,” in Engineering in medicine
and biology society, EMBC, 2011 annual international
conference of the IEEE. IEEE, 2011, pp. 3419–3422.
[26] W. Huang, Y. Yang, Z. Lin, G.-B. Huang, J. Zhou,
Y. Duan, and W. Xiong, “Random feature subspace
ensemble based extreme learning machine for liver tumor
detection and segmentation,” in Engineering in Medicine
and Biology Society (EMBC), 2014 36th Annual International Conference of the IEEE. IEEE, 2014, pp. 4675–
4678.
[27] E. Vorontsov, N. Abi-Jaoudeh, and S. Kadoury,
“Metastatic liver tumor segmentation using texture-based
omni-directional deformable surface models,” in International MICCAI Workshop on Computational and Clinical
Challenges in Abdominal Imaging. Springer, 2014, pp.
74–83.
[28] T.-N. Le, H. T. Huynh et al., “Liver tumor segmentation
from mr images using 3d fast marching algorithm and
single hidden layer feedforward neural network,” BioMed
research international, vol. 2016, 2016.
[29] C.-L. Kuo, S.-C. Cheng, C.-L. Lin, K.-F. Hsiao, and S.-H. Lee, “Texture-based treatment prediction by automatic
liver tumor segmentation on computed tomography,” in
Computer, Information and Telecommunication Systems
(CITS), 2017 International Conference on. IEEE, 2017,
pp. 128–132.
[30] P.-H. Conze, V. Noblet, F. Rousseau, F. Heitz, V. de Blasi,
R. Memeo, and P. Pessaux, “Scale-adaptive supervoxel-based random forests for liver tumor segmentation in
dynamic contrast-enhanced ct scans,” International journal of computer assisted radiology and surgery, vol. 12,
no. 2, pp. 223–233, 2017.
[31] A. Hoogi, C. F. Beaulieu, G. M. Cunha, E. Heba,
C. B. Sirlin, S. Napel, and D. L. Rubin, “Adaptive local
window for level set segmentation of ct and mri liver
lesions,” Medical image analysis, vol. 37, pp. 46–55,
2017.
[32] F. Chaieb, T. B. Said, S. Mabrouk, and F. Ghorbel, “Accelerated liver tumor segmentation in four-phase computed tomography images,” Journal of Real-Time Image
Processing, vol. 13, no. 1, pp. 121–133, 2017.
[33] P. F. Christ, F. Ettlinger, F. Grün, M. E. A. Elshaera,
J. Lipkova, S. Schlecht, F. Ahmaddy, S. Tatavarty,
M. Bickel, P. Bilic et al., “Automatic liver and tumor
segmentation of ct and mri volumes using cascaded
fully convolutional neural networks,” arXiv preprint
arXiv:1702.05970, 2017.
[34] X. Han, “Automatic liver lesion segmentation using
a deep convolutional neural network method,” arXiv
preprint arXiv:1704.07239, 2017.
[35] E. Vorontsov, G. Chartrand, A. Tang, C. Pal, and
S. Kadoury, “Liver lesion segmentation informed by joint
liver segmentation,” arXiv preprint arXiv:1707.07734,
2017.
[36] G. Chlebus, H. Meine, J. H. Moltz, and A. Schenk,
“Neural network-based automatic liver tumor segmentation with random forest-based candidate filtering,” arXiv
preprint arXiv:1706.00842, 2017.
[37] F. Chollet et al., “Keras,” https://github.com/fchollet/
keras, 2015.
[38] Y. Yuan, “Hierarchical convolutional-deconvolutional
neural networks for automatic liver and tumor segmentation,” arXiv preprint arXiv:1710.04540, 2017.
[39] L. Bi, J. Kim, A. Kumar, and D. Feng, “Automatic liver
lesion detection using cascaded deep residual networks,”
arXiv preprint arXiv:1704.02703, 2017.
[40] K. Kamnitsas, C. Ledig, V. F. Newcombe, J. P. Simpson,
A. D. Kane, D. K. Menon, D. Rueckert, and B. Glocker,
“Efficient multi-scale 3d cnn with fully connected crf
for accurate brain lesion segmentation,” Medical image
analysis, vol. 36, pp. 61–78, 2017.
[41] J. Li, X. Liang, Y. Wei, T. Xu, J. Feng, and S. Yan,
“Perceptual generative adversarial networks for small
object detection,” in IEEE CVPR, 2017.
| 1 |
Convex Optimization with Nonconvex Oracles
Oren Mangoubi1 and Nisheeth K. Vishnoi2
arXiv:1711.02621v1 [] 7 Nov 2017
¹,²École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
November 8, 2017
Abstract
In machine learning and optimization, one often wants to minimize a convex objective function F but can only evaluate a noisy approximation F̂ to it. Even though F is convex, the
noise may render F̂ nonconvex, making the task of minimizing F intractable in general. As a
consequence, several works in theoretical computer science, machine learning and optimization
have focused on coming up with polynomial time algorithms to minimize F under conditions
on the noise F (x) − F̂ (x) such as its uniform-boundedness, or on F such as strong convexity.
However, in many applications of interest these conditions do not hold. Here we show that, if
the noise has magnitude αF (x) + β for some α, β > 0, then there is a polynomial time algorithm to find an approximate minimizer of F . In particular, our result allows for unbounded
noise and generalizes those of [1, 17] who proved similar results for the bounded noise case, and
that of [2] who assume that the noise grows in a very specific manner and that F is strongly
convex. Turning our result on its head, one may also view our algorithm as minimizing a nonconvex function F̂ that is promised to be related to a convex function F as above. Technically,
Markov chains, such as the stochastic gradient Langevin dynamics, are deployed to arrive at
approximate solutions to these optimization problems. For the class of noise we consider, no
single temperature allows such a Markov chain to both mix quickly and concentrate near the
global minimizer. Consequently, our algorithm, which is a “simulated annealing” modification
of the stochastic gradient Langevin dynamics, gradually decreases the temperature of the chain
to approach the global minimizer. Analyzing such an algorithm for the unbounded noise model
and a general convex function turns out to be challenging and requires several technical ideas
that might be of independent interest in deriving non-asymptotic bounds for other simulated
annealing based algorithms.
Contents
1 Introduction
  1.1 Our contributions
  1.2 Short summary of techniques
  1.3 Organization of the rest of the paper
2 Overview of Our Contributions
3 Preliminaries
  3.1 Notation
  3.2 Assumptions on the convex objective function and the constraint set
  3.3 A smoothed oracle from a non-smooth one
4 Our Contribution
  4.1 Our Algorithm
  4.2 Statement of Our Main Theorem
5 Proofs
  5.1 Assumptions about the smooth oracle
  5.2 Conductance and bounding the Cheeger constant
  5.3 Bounding the escape probability
  5.4 Comparing noisy functions
  5.5 Bounding the error and running time: The smooth case
  5.6 The non-smooth case
  5.7 Rounding the domain of the Markov Chain
  5.8 Proof of Main Result (Theorem 4.1)
1 Introduction
A general problem that arises in machine learning, computational mathematics and optimization
is that of minimizing a convex objective function F : K → R, where K ⊆ Rd is convex, and one can
only evaluate F approximately. Let F̂ denote this “noisy” approximation to F . In this setting, even
though the function F is convex, we can no longer assume that F̂ is convex. However, if one does
not make any assumptions on the noise function, the problem of minimizing F can be shown to be
arbitrarily hard. Thus, in order to obtain algorithms for minimizing F with provable guarantees,
one must place some restrictions on the noise function.
A widely studied setting is that of “additively” bounded noise. Here, the noise N (x) := F (x) −
F̂(x) is assumed to have a uniform bound on K: sup_{x∈K} |N(x)| ≤ β for some β ≥ 0. This model
was first considered by Applegate and Kannan in [1] and has received attention in a recent work
of Zhang, Liang and Charikar [17].
In practice, however, the strongest bound we might have for the noise may not be uniform
on K. One such noise model is that of “multiplicative” noise where one assumes that F̂ (x) =
F (x)(1 + ψ(x)), where |ψ(x)| ≤ α for all x ∈ K and some α ≥ 0. In other words, N (x) =
F̂ (x) − F (x) = F (x) × ψ(x), which motivates the name. One situation where multiplicative noise
arises is when F decomposes into a sum of functions that are easier to compute, but these component
functions are computed via Monte Carlo integration with stopping criteria that depend on the
computed value of the component function [5].1 For other natural settings where multiplicative
noise arises see [6, 9, 10].
More generally, one can model the noisy F̂ by decomposing it into additive and multiplicative
components, in the following sense:
Definition 1.1. We say that F̂ has both additive and multiplicative noise of levels (β, α) if there
exist functions ϕ : K → R and ψ : K → R and α, β ≥ 0 with |ϕ(x)| ≤ β and |ψ(x)| ≤ α for all
x ∈ K such that
F̂(x) = F(x)(1 + ψ(x))  +  ϕ(x)    ∀x ∈ K.    (1)
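Definition 1.1 can be made concrete with a small numerical sketch. The snippet below is our own illustration (the quadratic F and the noise levels are made up, not from the paper): it wraps a convex F in random additive and multiplicative perturbations of levels (β, α) and checks that the resulting F̂ stays inside the implied envelope |F̂(x) − F(x)| ≤ αF(x) + β.

```python
import random

ALPHA, BETA = 0.1, 0.05  # multiplicative and additive noise levels (made up)

def F(x):
    """Convex objective: a simple quadratic, for illustration only."""
    return sum(xi * xi for xi in x)

def F_hat(x):
    """Noisy oracle per Definition 1.1: F(x)(1 + psi(x)) + phi(x)."""
    psi = random.uniform(-ALPHA, ALPHA)   # |psi(x)| <= alpha
    phi = random.uniform(-BETA, BETA)     # |phi(x)| <= beta
    return F(x) * (1.0 + psi) + phi

def noise_envelope_holds(x, trials=1000):
    """Check |F_hat(x) - F(x)| <= alpha*F(x) + beta over repeated calls."""
    fx = F(x)
    return all(abs(F_hat(x) - fx) <= ALPHA * fx + BETA + 1e-12
               for _ in range(trials))
```

Note that the envelope αF(x) + β widens as F(x) grows, which is exactly the sense in which this noise model allows unbounded noise far from the minimizer.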
To motivate this definition, we consider the problem of solving a system of noisy linear or nonlinear
black-box equations where one wishes to find a value of x such that hi (x) = 0 for each component
function hi [6]. Since each equation hi (x) = 0 must be satisfied simultaneously for a single value of
x, it is not enough to solve each equation individually. One way in which we may solve this system
of equations is by minimizing an objective function of the form
F(x) = (1/n) ∑_{i=1}^{n} (h_i(x))²    (2)
since any value of x that minimizes F (x) also solves the system of equations hi (x) = 0 for every i,
provided that such a solution exists. In practice, rather than having access to an exact computation
oracle of hi (x) one may instead only have access to a perturbed function
ĥi (x) = hi (x) + Ni (x).
¹Note that a function with additive noise of level β can instead be modeled as having multiplicative noise of level
β if we replace the objective function F (x) with F (x) − F (x? ) + β and the corresponding noisy function F̂ (x) with
F̂ (x) − F (x? ) + β, where x? is the unique minimizer of F . Conversely, if an objective function F is bounded on its
domain K and its corresponding noisy function F̂ has multiplicative noise of level α, then the noise can instead be
modeled with the additive noise model, with additive noise of level β = α × supx∈K |F (x)|. Nevertheless, even though
it may be possible to represent a noisy function as either having additive or multiplicative noise of finite level, the
bound on the noise function N (x) implied by the noise model is different depending on the model we use.
Here Ni (x) is a noise term that may have additive or multiplicative noise (or both), that is, |Ni (x)| ≤
b + ahi (x) for some a, b ≥ 0. Hence, instead of minimizing Equation (2), one must try to minimize
a noisy function of the form
F̂(x) = (1/n) ∑_{i=1}^{n} (ĥ_i(x))².    (3)
A straightforward calculation shows that the fact that |N_i(x)| ≤ b + a h_i(x) for all i implies that
|F̂(x) − F(x)| ≤ (2a + a² + 2b + 2ab) F(x) + ½(b + ab) + b².
Thus, F̂ can be modeled as having additive noise of level β = ½(b + ab) + b² together with multiplicative noise of level α = 2a + a² + 2b + 2ab. In particular, even if each component function only has additive noise (that is, if a = 0), F̂ will still have nonzero multiplicative noise α = 2b. Thus we arrive at the following general problem.
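As a sanity check on the calculation above, the sketch below (our own illustration; the component functions h_i are made up, and we read the per-component bound as |N_i| ≤ b + a|h_i|) builds F and F̂ from noisy equations ĥ_i = h_i + N_i and verifies the combined bound |F̂ − F| ≤ (2a + a² + 2b + 2ab)F + ½(b + ab) + b² empirically.

```python
import random

a, b = 0.05, 0.1  # per-component multiplicative and additive noise levels

def h(x):
    # Hypothetical component functions h_i; any smooth choices work here.
    return [x[0] - 1.0, x[0] * x[1], x[1] + 2.0]

def F(x):
    hs = h(x)
    return sum(hi * hi for hi in hs) / len(hs)        # Equation (2)

def F_hat(x):
    noisy = []
    for hi in h(x):
        bound = b + a * abs(hi)                        # |N_i| <= b + a|h_i|
        noisy.append(hi + random.uniform(-bound, bound))
    return sum(hi * hi for hi in noisy) / len(noisy)   # Equation (3)

def combined_bound_holds(x, trials=500):
    fx = F(x)
    alpha = 2 * a + a * a + 2 * b + 2 * a * b          # multiplicative level
    beta = 0.5 * (b + a * b) + b * b                   # additive level
    return all(abs(F_hat(x) - fx) <= alpha * fx + beta + 1e-9
               for _ in range(trials))
```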
Problem 1. Let K ⊆ R^d be a convex body and F : K → R be a convex function. Let x⋆ be a minimizer of F in K. Given access to a noisy oracle F̂ for F that has additive and multiplicative noise of levels (β, α), the problem is to find an approximate minimizer x̂ for F such that F(x̂) − F(x⋆) ≤ ε̂ for a given ε̂ > 0.
This problem was first studied (indirectly) by Applegate and Kannan [1] in the special case of
additive noise where α = 0. Specifically, they studied the related problem of sampling from the
canonical distribution
e^{−ξ F̂(x)} / ∫_K e^{−ξ F̂(y)} dy
when F̂ is an additively noisy version of a convex function. Roughly, their algorithm discretized
K with a grid and ran a simple random walk on this grid. Using their Markov chain one can
solve Problem 1 in the special case of α = 0 for some error ε̂ = Õ(dβ) with running time that is
polynomial in d and various other parameters as well.
Recently, in Belloni et al. [2], Problem 1 was studied in a special case where the noise decreases to zero near the global minimum² and F is m-strongly convex. Specifically, they study the situation where the noise is bounded by |N(x)| ≤ c‖x − x⋆‖^p, for some 0 < p < 2 and some c > 0. Roughly speaking, in this regime they show that one can obtain an approximate minimizer x̂ such that F(x̂) − F(x⋆) ≤ O((d/m)^{1/(2−p)}) in polynomial time. To find an approximate minimizer x̂,
they repeatedly run a simulated annealing Markov chain based on the “hit-and-run” algorithm.
They state that they are “not aware of optimization methods for such a problem” outside of their
work, and that “it is rather surprising that one may obtain provable guarantees through simulated
annealing” under noise with non-uniform bounds even in the special case of strong convexity.
Very recently, Problem 1 was also studied by Zhang, Liang and Charikar [17] in the special case
of additive noise (where α = 0 but β ≥ 0). The main component of their algorithm is the stochastic
gradient Langevin dynamics (SGLD) Markov chain that runs at a fixed “temperature” parameter
to find an approximate minimizer x̂. In particular, they show that one can solve Problem 1 in the
special case of α = 0 for some error ε̂ = Õ(dβ) with running time that is polynomial in d and β
and various smoothness parameters.
²The authors of [2] also study separately the special case of purely additive noise, but not simultaneously in the
presence of a non-uniformly bounded noise component.
Extending these results to the general case when both α, β > 0, and F is not necessarily strongly
convex, has been an important and challenging problem. The difficulty arises from the fact that, in
this setting, the noise can become unbounded and the prior Markov chain approaches do not seem
to work. Roughly, the Markov chains of [1, 17] run at a fixed temperature and, due to the fact that
the noise can be very different at different levels of F , would either get stuck in local minimum
or never come close to the minimzer; see Figure 2 for an illustration. The Markov chain of [2] on
the other hand varies the temperature but the strong convexity of F makes the task of estimating
progress significantly simpler.
1.1
Our contributions
The main result of this paper is the first polynomial time algorithm that solves Problem 1 when
α, β > 0 without assuming that F is strongly convex. Our algorithm combines simulated annealing
(as in [2]) with the stochastic gradient Langevin dynamics (as in [17]). We assume that ‖∇F‖ ≤ λ
and that K is contained in a bounding ball of radius R > 0, and that K = K0 + B(0, r0 ) for some
r0 > 0, where “+” denotes the Minkowski sum. Note that, given bounds λ and R, one can deduce
an upper bound of λR on the value of F in K.
Theorem 1.2. [Informal; see Section 4.2 for a formal description] For any desired accuracy
level ε̂, additive noise level β = O(ε̂), and a multiplicative noise level α that is a sufficiently small
constant, there exists an algorithm that solves Problem 1 and outputs x̂ with high probability such
that F(x̂) − F(x⋆) ≤ ε̂. The running time of the algorithm is polynomial in d, R, 1/r₀, and λ, whenever α ≤ Õ(1/d) and β ≤ Õ(ε̂/d).
In particular, our theorem recovers the result of [17] in the special case of no multiplicative noise (α = 0). Importantly, when the multiplicative noise coefficient satisfies α ≤ Õ(1/d), one can obtain an approximate minimizer x̂ such that F(x̂) − F(x⋆) ≤ ε̂ for arbitrarily small ε̂ in polynomial time. More generally, when α ≤ Õ(1/d) or β ≤ Õ(ε̂/d) may not be satisfied, the running time of our algorithm is roughly bounded by e^{dα + dβ/ε̂}.
The requirement that β ≤ Õ(ε̂/d) in order to get a polynomial running time can be shown to be necessary using results from [3] (as done in [17]): if the additive noise β were any larger than Ω(ε̂/d), the algorithm would take an exponentially long time to escape the local minima (Figure 1). We believe that the requirement that α ≤ Õ(1/d) in order to get a polynomial running time is also tight for a similar reason. This is because a sub-level set U of F of height ε̂, i.e., U = {x ∈ K : F(x) ≤ ε̂}, will have a uniform bound on the noise of size sup_{x∈U} αF(x) ≤ αε̂ in the presence of multiplicative noise of level α. This is equivalent to having additive noise of level Õ(ε̂/d), which is required for the Markov chain to quickly escape the local minima of that sub-level set.
Establishing this formally is an interesting open problem. While our algorithm’s running time is
polynomial in various parameters, we believe that it is not tight and can be improved with a more
thorough analysis of the underlying Markov chain, perhaps using tools developed in [12]. In [17] a
version of Problem 1 is solved for a class of nonconvex functions F in the special case that α = 0
(i.e., only additive noise). It would therefore be interesting to see if we can solve Problem 1 for
this class of nonconvex functions F but under the more general noise model where we have both
additive and multiplicative noise. Finally, we note the following interpretation of our main result for
nonconvex functions: Suppose we are given oracle access to a nonconvex function F̂ with a promise
that there is a convex function F and functions ψ and ϕ such that F̂ (x) = F (x)(1 + ψ(x)) + ϕ(x)
(as in Definition 1.1), then there is an algorithm to minimize F̂ .
Figure 1: The local minimum at x◦ has “depth” β so, roughly speaking, there is a path of maximum elevation F̂(x◦) + β between it and the neighboring minimum at x⋆ (which happens to be the global minimum in this example). However, the sub-level set U^{F̂(x◦)+β} has a very narrow bottleneck, so it will take a long time for a Markov chain running at temperature β to escape the region of attraction around the local minimum at x◦. On the other hand, the sub-level set U^{F̂(x◦)+dβ} does not have a narrow bottleneck, so a Markov chain running at temperature dβ will quickly escape the local minimum x◦.
1.2 Short summary of techniques
To find an approximate global minimum of the objective function F , we must try to find an
approximate global minimum of the noisy approximation F̂ . One method of optimizing a nonconvex
or approximately convex function F̂ is to generate a Markov chain with stationary distribution
approximating the canonical distribution
π̂^{(ξ)}(x) := e^{−ξ F̂(x)} / ∫_K e^{−ξ F̂(y)} dy,
where ξ is thought of as an “inverse temperature” parameter. If the “temperature” ξ −1 is small,
then π̂ (ξ) concentrates near the global minima of F̂ . On the other hand, to escape local minima of
“depth” β > 0 in polynomial time, one requires the temperature ξ −1 to be at least Ω(β) (Figure
1). Now consider the random variable Z ∼ N (0, ξ −1 Id ) with
π^{(ξ)}(x) := e^{−ξ ‖x‖²/2} / ∫_{R^d} e^{−ξ ‖y‖²/2} dy.
Then F(Z) concentrates near dξ⁻¹ with high probability. This suggests that for a noisy function F̂ where we are given a bound on the additive noise of level β > 0, the best we can hope to achieve in polynomial time is to find a point x̂ such that
|F(x̂) − F(x⋆)| ≤ Õ(dβ),
since there may be sub-optimal local minima in the vicinity of x⋆ that have depth O(β), requiring the temperature ξ⁻¹ to be at least Ω(β) (Figure 1).
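The claim that F(Z) concentrates near dξ⁻¹ for Z ∼ N(0, ξ⁻¹ I_d) is easy to check numerically: for F(x) = ½‖x‖² the expectation is exactly d/(2ξ), i.e., of order dξ⁻¹. A minimal Monte Carlo sketch (ours, not the paper's):

```python
import random

def mean_F_of_gaussian(d, xi, samples=20000, seed=0):
    """Monte Carlo estimate of E[F(Z)] with F(x) = 0.5*||x||^2
    and Z ~ N(0, xi^{-1} I_d); the exact value is d/(2*xi)."""
    rng = random.Random(seed)
    sigma = (1.0 / xi) ** 0.5
    total = 0.0
    for _ in range(samples):
        z = [rng.gauss(0.0, sigma) for _ in range(d)]
        total += 0.5 * sum(zi * zi for zi in z)
    return total / samples
```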
As mentioned earlier, optimization of a noisy function under additive noise is studied in [17],
who analyze the stochastic gradient Langevin dynamics (SGLD) Markov chain. The SGLD chain
approximates the Langevin diffusion, which has stationary distribution π̂ (ξ) . They show that by
running SGLD at a single fixed temperature ξ one can obtain an approximate global minimizer x̂
of F such that |F(x̂) − F(x⋆)| < Õ(ε̂) with high probability, with running time polynomial in d, e^{dβ/ε̂}, and various smoothness bounds on F. In particular, for the algorithm to get a polynomial running time in d and β one must choose ε̂ = Ω(dβ). Thus, the SGLD algorithm returns an approximate minimizer such that |F(x̂) − F(x⋆)| ≤ Õ(dβ) in polynomial time in the additive case.
More generally, if multiplicative noise is present one may have many local minima of very
different sizes, so our bound on the “depth” of the local minima is not uniform over K. In this case
the approach from [17] of using a single fixed temperature will lead to either a very long running
time or a very large error ε̂: if the temperature is hot enough to escape even the deepest local minima, then the Markov chain will not concentrate near the global minimum and the error ε̂
will be large (Figure 2(b)). If the temperature is chosen to be too cold, then the algorithm will
take a very long time to escape the deeper local minima (Figure 2(c)). Instead of using a fixed
temperature, we search for the global minimum by starting the Markov chain at a high temperature
and then slowly lowering the temperature at each successive step of the chain (Figure 2(d)). This
approach is referred to as “simulated annealing” in the literature [11].
The only non-asymptotic analysis we are aware of where the bound on the noise is not uniform
involves a simulated annealing technique based on the hit-and-run algorithm [2]. Specifically, the
authors of [2] show that if F is m-strongly convex, then one can compute an approximate global minimizer x̂ such that |F(x̂) − F(x⋆)| < (d/m)^{1/(2−p)} with running time Õ(d^{4.5}), as long as N(x) ≤ c‖x‖^p
for some 0 < p < 2 and some c > 0. The algorithm used in [2] runs a sequence of subroutine Markov
chains. Each of these subroutine Markov chains is restricted to a ball B(yk , rk ) centered at the
point yk returned by the subroutine chain from the last epoch. Crucially, for this algorithm to
work, rk must be chosen such that B(yk , rk ) contains the minimizer x? at each epoch k. Towards
this end, the authors of [2] show that since the temperature is decreased at each epoch, F (yk ) is
much smaller than F (yk−1 ) at each epoch k. Since F is assumed to be strongly convex, the authors
of [2] show that this decrease in F implies a contraction in the distance ‖y_k − x⋆‖ at each epoch k, allowing one to choose a sequence of radii r_k that contract as well at each step but still have the property that x⋆ ∈ B(y_k, r_k).
One obstacle in generalizing the results of [2] to the non-strongly convex case is that we do not
have an oracle for the sub-level sets of F , but only for F̂ , whose sub-level sets may not even be
connected. Instead, we show that the SGLD Markov chain concentrates inside increasingly smaller
sub-level sets of F as the temperature parameter is decreased. To analyze the behavior of the
SGLD Markov chain at each temperature, we build several new tools and use some from [17]. Our
results make important contributions to the growing body of work on non-asymptotic analysis of
simulated annealing, Langevin dynamics and their various combinations [4, 12, 14, 16].
1.3
Organization of the rest of the paper
We start with a detailed but informal primer of the algorithm and the key steps and ideas involved in
the proof of Theorem 1.2 in Section 2. Subsequently, we present the notation and other preliminaries
in Section 3. This is followed by a formal presentation of the algorithm and the statement of the
main results in Sections 4.1 and 4.2. Section 5 contains the detailed mathematical proof of our
main theorem.
Figure 2: (a) Optimization of a convex function F (green) with noisy oracle F̂ (black) under
bounded additive noise. Since the gap between the noise bounds (dashed lines) is constant, the
Markov chain (red) can be run at a single temperature that is both hot enough to quickly escape
any local minimum but also cold enough so that the Markov chain eventually concentrates near
the global minimum. (b) and (c) Optimization of a convex function F (green) with noisy oracle F̂
(black) when both additive and multiplicative noise are present, if we run the Markov chain at a single fixed temperature. If the temperature is hot enough to escape even the deepest local minima,
then it will not concentrate near the global minimum, leading to a large error. If instead the
Markov chain is run at a colder temperature, it will take a very long time to escape the deeper local
minima. (d) Optimization of a convex function F (green) with noisy oracle F̂ (black) under both
additive and multiplicative noise, when using a gradually decreasing temperature. If multiplicative
noise is present the local minima of F̂ are very deep for large values of F . To quickly escape the
deeper local minima, the Markov chain is started at a high temperature. As the Markov chain
concentrates in regions where F is smaller, the local minima become shallower, so the temperature
may be gradually decreased while still allowing the Markov chain to escape nearby local minima. As
the temperature is gradually decreased, the Markov chain concentrates in regions with successively
smaller values of F̂ .
2 Overview of Our Contributions
The model and the problem. Let K ⊆ Rd be the given convex body contained in a ball of
radius R > 0 and F : K → R the given convex function. We assume that F has gradient bounded
by some number λ > 0, and that K = K0 + B(0, r0 ) for some r0 > 0, where “+” denotes the
Minkowski sum and K0 is a convex body. Let x? be a minimizer of F over K. Recall that our goal
is to find an approximate minimizer x̂ of F on K, such that F (x̂) − F (x? ) ≤ ε̂ for a given ε̂ > 0.
We assume that we have access to a membership oracle for K and a noisy oracle F̂ for F . Recall
that in our model of noise, we assume that there exist functions ϕ : K → R and ψ : K → R and
numbers α, β ≥ 0, with |ϕ(x)| ≤ β and |ψ(x)| ≤ α for all x ∈ K, such that
F̂(x) = F(x)(1 + ψ(x))  +  ϕ(x)    ∀x ∈ K.    (4)
We say that F̂ has “additive noise” ϕ of level β and “multiplicative noise” ψ of level α. To simplify
our analysis, we assume that F ≥ 0 and that F has minimizer x⋆ ∈ K such that F(x⋆) = 0 (if not, we can always shift F and F̂ down by the constant F(x⋆) to satisfy this assumption). That way,
the multiplicative noise ψ has the convenient property that it goes to zero as we approach x? .
We first describe our algorithm and the proof assuming that F̂ is smooth and we have access
to the gradient of F̂. Specifically, we assume that ‖∇F̂‖ is bounded from above by some number
λ̃ > 0 and that the Hessian of F̂ has singular values bounded above by L > 0. This simplifies the
presentation considerably and we explain how to deal with the non-smooth case at the end of this
section.
Our algorithm. To find an approximate minimizer of F , we would like to design a Markov chain
whose stationary distribution concentrates in a subset of K where the values of F are small. The
optimal choice of parameters for this Markov chain will depend on the amount of noise present.
Since the bounds on the noise are not uniform, the choice of these parameters will depend on
the current state of the chain. To deal with this fact, we will run a sequence of Markov chains in
different epochs, where the parameters of the chain are fixed throughout each epoch. Our algorithm
runs for kmax epochs, with each epoch indexed by k.
In epoch k, we run a separate Markov chain {X_i^{(k)}}_{i=1}^{i_max} over K for the same number of iterations i_max. Each such Markov chain has parameters ξ_k and η_k that depend on k. We think of ξ_k^{-1} as the “temperature” of the Markov chain and η_k as the step size. At the beginning of each epoch, we decrease the temperature and step size, and keep them fixed throughout the epoch. We explain quantitatively how we set the temperature a bit later. Each Markov chain also has an initial point X_0^{(k)} ∈ K. This initial point is chosen from the uniform distribution on a small ball centered at the
point in the Markov chain of the previous epoch (k − 1) with the smallest value of F̂ . In the final
epoch, the algorithm outputs a solution x̂, where x̂ is chosen to be the point in the Markov chain
of the final epoch with the smallest value of F̂ .
Description of the Markov chain in a single epoch. We now describe how the Markov chain, at the point X_i^{(k)} in the k-th epoch, chooses the next point X_{i+1}^{(k)}. First, we compute the gradient ∇F̂(X_i^{(k)}). Then we compute a “proposal” X′_{i+1} for the next point as follows:
X′_{i+1} = X_i^{(k)} − η_k ∇F̂(X_i^{(k)}) + √(2η_k/ξ_k) P_i,    (5)
where P_i is sampled from N(0, I_d). If X′_{i+1} is inside the domain K, then we accept the proposal and set X_{i+1}^{(k)} = X′_{i+1}; otherwise we reject the proposal and set X_{i+1}^{(k)} = X_i^{(k)}, which is the old point.
The update rule in Equation (5) is called the Langevin dynamics. This is a version of gradient
descent injected with a random term. The amount of randomness is controlled by the temperature
ξk−1 and the step size ηk . This randomness allows the Markov chain to escape local minima when
F̂ is not convex. Although the stationary distribution of this Markov chain is not known exactly,
roughly speaking it is approximately proportional³ to e^{−ξ_k F̂}. This completes the description of our
algorithm in the smooth case and we now turn to explaining the steps involved in bounding its
running time for a given bound on the error ε̂.
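Putting the epoch structure and the update rule (5) together, the whole procedure can be sketched in code. This is our paraphrase, not the paper's algorithm verbatim: the temperature and step-size schedule below are placeholders (the paper's actual settings appear in Section 4), and the epoch initialization is simplified.

```python
import math
import random

def sgld_annealing(grad_F_hat, F_hat, in_K, x0,
                   k_max=5, i_max=200, xi0=1.0, seed=0):
    """Simulated-annealing SGLD sketch: one Markov chain per epoch,
    with the inverse temperature xi_k increased (temperature decreased)
    at the start of each epoch; schedule values are placeholders."""
    rng = random.Random(seed)
    best = list(x0)                          # point with smallest F_hat so far
    for k in range(k_max):
        xi = xi0 * (2.0 ** k)                # placeholder inverse temperature
        eta = 0.01 / xi                      # placeholder step size
        # Initialize near the best point of the previous epoch.
        x = [bi + rng.uniform(-0.01, 0.01) for bi in best]
        if not in_K(x):
            x = list(best)
        for _ in range(i_max):
            g = grad_F_hat(x)
            scale = math.sqrt(2.0 * eta / xi)
            prop = [xj - eta * gj + scale * rng.gauss(0.0, 1.0)
                    for xj, gj in zip(x, g)]
            if in_K(prop):                   # reject proposals outside K
                x = prop
            if F_hat(x) < F_hat(best):
                best = list(x)
    return best
```

For example, with F̂(x) = ‖x‖² (noiseless, for simplicity) on a ball of radius 2, the returned point never has a larger F̂-value than the starting point, since `best` only ever improves.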
Steps in bounding the running time. In every epoch, the algorithm makes multiplicative
progress so that the smallest value of F achieved by the Markov chain decreases by a factor of 1/10. To achieve an error ε̂, our algorithm therefore requires k_max = O(log(M/ε̂)) epochs, where M is the maximum value of F̂ on K (M ≤ λR). The running time of our algorithm is given by the number of epochs k_max multiplied by the number of steps i_max taken by the Markov chain within each epoch. For simplicity, we will run the Markov chain at each epoch for the same number of steps i_max. For the value of F to decrease by a factor of 1/10 in each epoch, we must set the number of steps i_max
taken by the Markov chain during each epoch to be no less than the hitting time of the Markov
chain for epoch k to a sub-level set Uk ⊆ K of F , where the “height” of Uk is one-tenth the value
of F at the initial point in this Markov chain. By the height of a sub-level set, we mean the largest
value of F achieved at any point on that sub-level set, that is the sub-level set {y ∈ K : F (y) ≤ h}
has height h. Thus, bounding the hitting time will allow us to bound the number of steps imax for
which we must run each Markov chain. Specifically, we should choose imax to be no less than the
greatest hitting time in any of the epochs with high probability.
This approach was used in the simpler setting of additive noise, and in a non-iterative way, by [17].
Thus, the running time is roughly the product of the number of epochs and the hitting time to the
sub-level set Uk , and having determined the number of epochs required for a given accuracy, we
proceed to bounding the hitting time.
Bounding the hitting time and the Cheeger constant. To bound the hitting time of the
Markov chain in a single epoch, we use the strategy of [17], who bound the hitting time of the
Langevin dynamics Markov chain in terms of the Cheeger constant. Since the Markov chain has
approximate stationary measure induced by e−ξk F̂ , we consider the Cheeger constant with respect
to this measure, defined as follows.
Given a probability measure µ on some domain, we consider the ratio of the measure of the
boundary of an arbitrary subset A of the domain to the measure of A itself. The Cheeger constant
of a set V is the infimum of these ratios over all subsets A ⊆ V (see Definition 5.1 in Section 5.2
for a formal definition). We use some of the results in [17] to show a bound on the hitting time
to the sub-level set U_k contained in a larger sub-level set U′_k in terms of the Cheeger constant Ĉ_k, with respect to the measure induced by e^{−ξ_k F̂(x)} on U′_k. Specifically, we set U′_k to be the sub-level set of height F̂(X_0^{(k)}) + ξ_k^{-1} d and U_k to be the sub-level set of height (1/10) F(X_0^{(k)}), and show that for a step size
η_k = (Ĉ_k)² / (d³ ((ξ_k λ̃)² + ξ_k L)²),
the hitting time to U_k is bounded by roughly R √(λ̃ ξ_k + d) / (η_k Ĉ_k / d);
see Section 5.5. Thus, to complete our bound on the
hitting time we need to bound the corresponding Cheeger constants.
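Definition 5.1 lies outside this excerpt; for orientation, the quantity described in words above (the infimum over subsets of the ratio of the boundary measure to the measure of the subset) is, in our paraphrase:

```latex
% Cheeger constant of a set V with respect to a probability measure \mu,
% as described in the text (Definition 5.1 gives the precise version):
\mathcal{C}(V) \;=\; \inf_{A \subseteq V} \frac{\mu(\partial A)}{\mu(A)}
```

Many references instead place $\min\{\mu(A),\mu(V\setminus A)\}$ in the denominator; we follow the in-text description here.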
³By this we mean that the density of this measure is e^{−ξ_k F̂} / ∫ e^{−ξ_k F̂}.
Bounding the Cheeger constant. We would like to bound the Cheeger constant of the measure
induced by e−ξk F̂ (x) . However, F̂ is not convex, so we cannot directly apply the usual approach of
Lovasz and Simonovits [13] for convex functions. Instead, we first apply their result to bound the
Cheeger constant of the convex function F . We then bound the Cheeger constant of the nonconvex
function F̂ in terms of the Cheeger constant of the convex function F , using a very useful stability
property satisfied by the Cheeger constant.
Roughly speaking, we show that the Cheeger constant of U′_k \ U_k is bounded below by 1/R (where R is the radius of the bounding ball for K) as long as the inverse temperature satisfies
ξ_k ≥ d / ((1/10) F(X_0^{(k)}))
(see Lemma 5.3). However, the difficulty is that U′_k may have sharp corners: the volume of U_k might be so small that U_k would have much smaller measure than U′_k \ U_k, leading to a very small
Cheeger constant. To get around this problem, we instead consider a slightly “rounded” version
of K, where we take K to be the Minkowski sum of another convex body with a ball of very small
radius r0 . The roundness allows us to show that Uk contains a ball of even smaller radius r̂ such
that the measure is much larger on this ball than at any point in Uk0 \Uk . This in turn allows us
to apply the results of [13] to show that the Cheeger constant is bounded below by 1/R (see Lemma
5.3). Note our Cheeger bound is more general (for convex functions) than that obtained in [17],
where the constraint set is assumed to be a ball.
The Cheeger constant has the following useful stability property that allows us to bound the
Cheeger constant of the nonconvex F̂ with respect to the convex F : if |F̂ (x) − F (x)| ≤ Nk for all
x ∈ Uk0 , then the Cheeger constant for the measures proportional to e−ξk F and e−ξk F̂ differ by a
factor of at most $e^{-2\xi_k N_k}$. For our choice of Uk0, we have
$$N_k \approx \alpha[F(X_0^{(k)}) + \xi_k^{-1}d] + \beta.$$
We can then use the stability property to show that the Cheeger constants of F̂ and F differ by a
factor of at most e−2ξk Nk , allowing us to get a large bound for the Cheeger constant of F̂ in terms
of our bound for the Cheeger constant of F as long as the bound on the noise Nk on Uk0 is not too
large; namely, we get that the Cheeger constant is bounded below by
$$\frac{1}{R}e^{-2\xi_k N_k} \approx \frac{1}{R}\exp\left(-\alpha d - \frac{d}{F(X_0^{(k)})}\beta\right)$$
if we choose $\xi_k = \frac{d}{\frac{1}{10}F(X_0^{(k)})}$.
At this point we mention the key difference between the approach of [17] and ours in bounding
the hitting time. As [17] assume a uniform bound on the noise they only consider the Cheeger
constant of K\Uk , where K is the entire constraint set and is assumed to be a ball. Since the noise
in our model depends on the “height” of the level sets, we instead need to bound the Cheeger constant of Uk0\Uk, where Uk0 is the level set of height $\hat{F}(X_0^{(k)}) + \xi_k^{-1}d$ and Uk is the level set of height $\frac{1}{10}F(X_0^{(k)})$.
In order to complete our bound for the Cheeger constant of F̂ , we still need to verify that we
can choose a temperature such that the Cheeger constant of F is large and the Cheeger constants
of F and F̂ are close at this same temperature.
Requirements on the temperature to bound the Cheeger constant. To get a large bound
for the Cheeger constant of F̂ , we need to use a temperature ξk−1 such that the following competing
requirements are satisfied:
1. We need the Cheeger constant of the convex objective function F to be not too small. This
requires the temperature to be low enough.
2. We need the Markov chain to stay inside a level set on which the upper bound Nk on the
noise is not too large, to show that the Cheeger constants of F and F̂ are close. That is, we
need to show that the ratio $e^{-2\xi_k N_k}$ of the Cheeger constants of F and F̂ is not too small. This requires the temperature to be low enough.
3. We also need to show that the ratio $e^{-2\xi_k N_k}$ is not too small. This requires the temperature to be high enough.
At some epoch k, the value of F becomes too low for all three of these requirements on the
temperature to be satisfied simultaneously. At this point the Cheeger constant and hitting time
to Uk become very large no matter what temperature we use, so that the minimum value of F
obtained by the Markov chain no longer decreases by a large factor in imax steps.
Quantitative error and running time bounds. We now give a more quantitative analysis to
determine at what point F stops decreasing. The value of F at this point determines the error ε̂
of the solution returned by our algorithm. Towards this end, we set the inverse temperature to be
$\xi_k = \frac{d}{\frac{1}{10}F(X_0^{(k)})}$ and check to what extent all 3 requirements above are satisfied.
1. We start by showing that if the temperature roughly satisfies $\xi_k \ge \frac{d}{\frac{1}{10}F(X_0^{(k)})}$, then the Cheeger constant for F on Uk0\Uk is bounded below by $\frac{1}{R}$ (see Lemma 5.3).
2. We then show that at each epoch the Markov chain remains with high probability in the level
set Uk0 of height $\hat{F}(X_0^{(k)}) + \xi_k^{-1}d$ (Lemma 5.7). The fact that the noise satisfies $|\hat{F}(x) - F(x)| \le \alpha F(x) + \beta$ (note that we assume F ≥ 0) implies that the noise is roughly bounded above by $N_k = \alpha[F(X_0^{(k)}) + \xi_k^{-1}d] + \beta$ on this level set.
3. Since we chose the temperature to be $\xi_k = \frac{d}{\frac{1}{10}F(X_0^{(k)})}$, we have that
$$\xi_k N_k \approx \alpha d + \frac{d}{F(X_0^{(k)})}\beta.$$
Combining these three facts, we get that the Cheeger constant is bounded below by
$$\frac{1}{R}e^{-2\xi_k N_k} \approx \frac{1}{R}\exp\left(-\alpha d - \frac{d}{F(X_0^{(k)})}\beta\right).$$
If we run the algorithm for enough epochs to reach $F(X_0^{(k)}) \le \hat{\varepsilon}$ for any desired error $\hat{\varepsilon} > 0$, the Cheeger constant will be roughly bounded below by $\frac{1}{R}\exp(-\alpha d - \frac{d}{\hat{\varepsilon}}\beta)$.
Recall that the hitting time is bounded by $\frac{R}{\eta_k \hat{C}_k d}\, q^{\tilde{\lambda}\xi_k + d}$, for stepsize $\eta_k \approx \frac{(\hat{C}_k)^2}{Rd^3((\xi_k\tilde{\lambda})^2 + \xi_k L)^2}$. Choosing imax to be equal to our bound on the hitting time, and recalling that kmax = Õ(1), we get a running time of roughly
$$\tilde{O}\left(\left[R^2 d^5 \frac{\tilde{\lambda}^3}{\hat{\varepsilon}^3} + d^{\frac{5}{2}}\frac{L^3}{\hat{\varepsilon}^3}\right]\exp\left(c\left[\alpha d + \beta\frac{d}{\hat{\varepsilon}}\right]\right)\right),$$
for some c = Õ(1).
Therefore, for our choice of inverse temperature $\xi_k = \frac{d}{\frac{1}{10}F(X_0^{(k)})}$, the running time is polynomial
in d, R, λ and λ̃ whenever the multiplicative noise level satisfies α ≤ Õ( d1 ) and the additive noise
level satisfies β ≤ Õ( dε̂ ). As discussed in the introduction, the requirements that α ≤ Õ( d1 ) and
β ≤ Õ( dε̂ ) are not an artefact of the analysis or algorithm and are in fact tight.
Drift bounds and initialization. So far we have been implicitly assuming that the Markov
chain does not leave Uk0 , so that we could analyze the Markov chain using the Cheeger constant
on Uk0 . We now show that this assumption is indeed true with high probability. This is important
to verify, since there are examples of Markov chains where the Markov chain may have a high
probability of escaping a level set Uk0 , even if this level set contains most of the stationary measure,
provided that the Markov chain is started far from the stationary distribution.
To get around this problem, at each epoch we choose the initial point $X_0^{(k)}$ from the
uniform distribution on a small ball of radius r centered at the point in the Markov chain of the
previous epoch k − 1 with the smallest value of F̂ . We then show that if the Markov chain is
initialized in this small ball, it has a low probability of leaving the level set Uk0 (see Propositions
5.5, 5.6 and Lemma 5.7).
Our method of initialization is another crucial difference between our algorithm and the algorithms in [17] and [2], since it allows us to effectively restrict the Markov chain to a sub-level set
of the objective function F , which we do not have direct oracle access to, rather than restricting
the Markov chain to a large ball as in [2] or the entire constraint set K as in [17] for which we have
a membership oracle. This in turn allows us to get a tighter bound on the multiplicative noise
than would otherwise be possible, since the amount of multiplicative noise depends, by definition,
on the sub-level set.
We still need to show that the chain X (k) does not leave the set Uk0 with high probability. To
bound the probability that X (k) leaves Uk0 , we would like to use the fact that most of its stationary
distribution is concentrated in Uk0 . However, the problem is that we do not know the stationary
distribution of X (k) . To get around this, we consider a related Markov chain Y (k) with known
stationary distribution. The chain Y (k) evolves according to the same update rules as X (k) , using
the same sequence of Gaussian random vectors P1 , P2 , . . . and the same starting point, except that it
performs a Metropolis “accept-reject” step that causes its stationary distribution to be proportional
to e−ξk F̂ . The fact that we know the stationary distribution of Y (k) is key to showing that Y (k)
stays in the subset Uk0 with high probability (see Proposition 5.6). We then argue that Y (k) = X (k)
with high probability, implying that X (k) also stays inside the set Uk0 with high probability (see
Lemma 5.7).
Another coupled toy chain. So far we have shown that the Markov chain X (k) stays inside the
set Uk0 with high probability. However, to use the stability property to bound the hitting time of the
Markov chain X (k) to the set Uk , we actually want the Markov chain to be restricted to the set Uk0
where the noise is not too large. In reality, however, the domain of X (k) is all of K, so we cannot
directly bound the hitting time of X (k) with the Cheeger constant of Uk0 \Uk . Instead, we consider
a Markov chain X̂ (k) that evolves according to the same rules as X (k) , except that it rejects any
proposal outside of Uk0 . Since X̂ (k) has domain Uk0 , we can use our bound on the Cheeger constant
of Uk0 \Uk to obtain a bound on the hitting time of X̂ (k) . Then, we argue that since X (k) stays in
Uk0 with high probability, and X̂ (k) and X (k) evolve according to the same update rules as long as
X (k) stays inside Uk0 , X̂ (k) = X (k) with high probability as well, implying a hitting time bound for
X (k) .
Rounding the sub-level sets. We must also show a bound on the roundness of the sets Uk0 , to
avoid the possibility of the Markov chain getting stuck in “corners”. The authors of [17] take this
as an assumption about the constraint set. However, since we must consider the Cheeger constant
on sub-level sets Uk0 rather than just on the entire constraint set, we must make sure that these
sub-level sets are “round enough”. Towards this end we consider “rounded” sub-level sets where we
take the Minkowski sum of Uk0 with a ball of a small radius r0 . We then apply the Hanson-Wright
inequality to show that any Gaussian random variable with center inside this rounded sub-level set
and small enough covariance remains inside the rounded sub-level set with high probability (see
Lemma 5.14).
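As an aside on the rounding construction, membership in the Minkowski sum K = K0 + B(0, r0) reduces to a Euclidean projection onto K0: a point lies in K exactly when its distance to K0 is at most r0. A minimal sketch (our own illustration; a box stands in for the convex body K0, whose projection is coordinate-wise clipping):

```python
import numpy as np

def in_rounded_body(x, r0, lo=-1.0, hi=1.0):
    """Membership test for K = K0 + B(0, r0), where K0 = [lo, hi]^d is a
    stand-in convex body whose Euclidean projection is coordinate clipping."""
    proj = np.clip(x, lo, hi)           # nearest point of K0 to x
    return np.linalg.norm(x - proj) <= r0

# Points of the box are trivially in K; the rounding adds a shell of width r0.
assert in_rounded_body(np.array([0.5, -0.2]), r0=0.1)
assert in_rounded_body(np.array([1.05, 0.0]), r0=0.1)     # inside the r0-shell
assert not in_rounded_body(np.array([1.2, 0.0]), r0=0.1)  # outside the shell
```

For a general convex K0 the clipping step would be replaced by a convex projection (itself a quadratic program), but the membership logic is the same.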
Smoothing a non-differentiable noisy oracle. Finally, so far we have considered the special
case where F̂ is smooth. However, F̂ may not be smooth or may not even be differentiable, so we
may not have access to a well-behaved gradient which we need to compute the Langevin dynamics
Markov chain (Equation 5). To get around this problem, we follow the approach of [7] and [17].
We define a smoothed function
f˜σ (x) := EZ [F̂ (x + Z)]
where Z ∼ N (0, σId ) and σ > 0 is a parameter we must fix. The smoothness of f˜σ comes from the
fact that f˜σ is a convolution of F̂ with a Gaussian distribution.
When choosing σ, we want σ to be small enough so that we get a good bound on the noise $|\tilde{f}_\sigma(x) - F(x)|$. Specifically, we need
$$\sigma = \tilde{O}\left(\min\left\{\frac{r}{\sqrt{d}},\ \frac{\beta + \hat{\varepsilon}\alpha + \hat{\varepsilon}/d}{\lambda\sqrt{d}}\right\}\right),$$
where λ is a bound on $\|\nabla F\|$. On the other hand, we also want σ not to be too small, so that we get a good bound on the smoothness of $\tilde{f}_\sigma$.
Further, so far we have also assumed that we have access to the full gradient of F̂ , but in general
F̂ may not even have a gradient. Instead, we would like to use the gradient of f˜σ to compute the
proposal for the Langevin dynamics Markov chain (Equation (5)). However, computing the full
gradient of f˜σ can be expensive, since we do not even have direct oracle access to f˜σ . Instead, we
compute a projection g(x) of $\nabla\tilde{f}_\sigma$, where
$$g(x) = \frac{Z\,(\hat{F}(x+Z) - \hat{F}(x))}{\sigma^2}.$$
Since g has the property that $\mathbb{E}[g(x)] = \nabla\tilde{f}_\sigma$, g is called a “stochastic gradient” of $\tilde{f}_\sigma$. We use this stochastic gradient g in place of the full gradient of F̂ when computing the proposal for the Langevin dynamics Markov chain (Equation 5). This gives rise to the following Markov chain proposal, also known as stochastic gradient Langevin dynamics (SGLD):
$$X'_{i+1} = X_i^{(k)} - \eta_k\, g(X_i^{(k)}) + \sqrt{\frac{2\eta_k}{\xi_k}}\, P_i.$$
To bound the running time of SGLD, we will need a bound on the magnitude of the gradient of $\tilde{f}_\sigma$ (see Lemma 5.10), bounds on the Hessian and tails of $\tilde{f}_\sigma$, which we obtain from [17] (see Lemma 5.12), and bounds on the noise of the smoothed function, $|\tilde{f}_\sigma - F(x)|$ (see Lemma 5.13).
Although in this technical overview we largely showed running time and error bounds assuming
access to a full gradient, in reality we prove Theorem 1.2 for the more general stochastic gradient
Langevin dynamics algorithm, where we only assume access to a stochastic gradient of a smooth
function. Therefore, the bounds on the noise and smoothness of f˜σ allow us to extend the error
and polynomial running time bounds shown in this overview to the more general case where F̂ may
not be differentiable.
3 Preliminaries
In this section we go over notation and assumptions that we use to state our algorithm and prove
our main result. We start by giving assumptions we make about the convex objective function F .
We then explain how to obtain an oracle for the gradient of the smoothed function f˜σ if we only
have access to the non-smooth oracle F̂ .
3.1 Notation
In this section we define the notation we use to prove our main result. For any set S ⊆ Rd and
t ≥ 0 define St := S + B(0, t) where “+” denotes the Minkowski sum. We denote the ℓ2-norm by
k · k, and the d × d identity matrix by Id . We denote by k · kop the operator norm of a matrix, that
is, its largest singular value. We define B(a, t) to be the closed Euclidean ball with center a and
radius t. Denote the multivariate normal distribution with mean m and covariance matrix Σ by
N (m, Σ). Let x? denote a minimizer of F on K.
3.2 Assumptions on the convex objective function and the constraint set
We make the following assumptions about the convex objective function F and K:
• K is contained in a ball, with K ⊆ B(c, R) for some c ∈ Rd .
• F (x? ) = 0.⁴
• There exist an r0 > 0 and a convex body K0 such that K = K0 + B(0, r0 ). (This assumption is necessary to ensure that our convex body does not have “pointy” edges, so that the Markov chain does not get stuck for a long time in a corner.)
• F is convex over Kr for some r > 0.
• k∇F (x)k ≤ λ for all x ∈ Kr , where λ > 0.
3.3 A smoothed oracle from a non-smooth one
In this section we show how to obtain a smooth noisy oracle for F if one only has access to a non-smooth and possibly non-continuous noisy oracle F̂ . Our goal is to find an approximate minimum
for F on the constraint set K. (We consider the thickened set Kr only to help us compute a smooth
oracle for F on K). We assume that we have access to a noisy function F̂ of the form
F̂ (x) = F (x)(1 + ψ(x)) + ϕ(x),
(6)
where |ψ(x)| < α, and |ϕ(x)| < β for every x ∈ Kr , for some α, β ≥ 0. We extend F̂ to values
outside Kr by setting F̂ (x) = 0 for all x ∉ Kr . Since F̂ need not be smooth, as in [7, 17] we will
instead optimize the following smoothed function
f˜σ (x) := EZ [F̂ (x + Z)]
(7)
⁴ If F (x? ) is nonzero, we can define a new objective function F ′ (x) = F (x) − F (x? ) and a new noisy function f˜′ (x) = f˜(x) − F (x? ). The noise N ′ (x) = f˜′ (x) − F ′ (x) can then be modeled as having additive noise of level β ′ = β + αF (x? ) and multiplicative noise of level α′ = α, if N (x) = f˜(x) − F (x) has additive noise of level β and multiplicative noise of level α.
where Z ∼ N (0, σId ), for some σ > 0. The parameter σ determines the smoothness of f˜σ ; a larger
value of σ will mean that f˜σ will be smoother. The gradient of f˜σ (x) can be computed using a
stochastic gradient g(x), where
$$g(x) \equiv g_Z(x) := \frac{1}{\sigma^2}\, Z\left(\hat{F}(x+Z) - \hat{F}(x)\right), \qquad \nabla\tilde{f}_\sigma(x) = \mathbb{E}_Z[g(x)].$$
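The unbiasedness of $g_Z$ can be checked by Monte Carlo. The sketch below is our own toy example, not code from the paper: it takes $\hat{F}(x) = \|x\|^2$ and draws Z with standard deviation σ per coordinate (so the estimator divides by σ²), in which case Gaussian smoothing only shifts the quadratic by a constant and $\nabla\tilde{f}_\sigma(x) = 2x$.

```python
import numpy as np

rng = np.random.default_rng(0)

def F_hat(x):
    """Toy noisy oracle standing in for F_hat: here just ||x||^2."""
    return np.dot(x, x)

def g(x, sigma, rng):
    """One draw of the stochastic gradient g_Z(x) = Z (F_hat(x+Z) - F_hat(x)) / sigma^2,
    with Z ~ N(0, sigma^2 I_d)."""
    Z = sigma * rng.standard_normal(x.shape)
    return Z * (F_hat(x + Z) - F_hat(x)) / sigma**2

x = np.array([1.0, -2.0, 0.5])
sigma = 0.5
est = np.mean([g(x, sigma, rng) for _ in range(200_000)], axis=0)

# The average approaches the smoothed gradient, which equals 2x here.
assert np.allclose(est, 2 * x, atol=0.2)
```

The same estimator works for a non-differentiable $\hat{F}$, which is the point of the construction: only zeroth-order evaluations of $\hat{F}$ are needed.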
4 Our Contribution
4.1 Our Algorithm
In this section we state our simulated annealing algorithm (Algorithm 2) that we use to obtain a
solution to Problem 1. At each epoch, our algorithm uses the SGLD Markov chain as a subroutine,
which we describe first in Algorithm 1. The SGLD Markov chain we describe here is the same
algorithm used in [17], except that we allow the user to specify the initial point.
Algorithm 1 Stochastic gradient Langevin dynamics (SGLD)
input: Convex constraint set K̂ ⊆ Rd , inverse temperature ξ > 0, step size η > 0, parameters imax ∈ N and D > 0, and a stochastic gradient oracle g for some f˜ : K → R.
input: Initial point X0 ∈ K̂.
1: for i = 0 to imax do
2: Sample Pi ∼ N (0, Id ).
3: Set $X'_{i+1} = X_i - \eta g(X_i) + \sqrt{\frac{2\eta}{\xi}}\, P_i$.
4: Set $X_{i+1} = X'_{i+1}$ if $X'_{i+1} \in \hat{K} \cap B(X_i, D)$. Otherwise, set $X_{i+1} = X_i$.
5: end for
output: $X_{i^\star}$, where $i^\star := \mathrm{argmin}_i\{\hat{F}(X_i)\}$
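A minimal Python rendering of Algorithm 1 (our own sketch; the gradient oracle, membership test, and parameter values in the toy run are illustrative stand-ins, not the paper's settings):

```python
import numpy as np

def sgld(g, F_hat, x0, xi, eta, i_max, D, in_K, rng):
    """Algorithm 1 sketch: SGLD that rejects proposals leaving K_hat or
    moving farther than D, and returns the iterate with the smallest
    noisy objective value."""
    X = np.array(x0, dtype=float)
    best, best_val = X.copy(), F_hat(X)
    for _ in range(i_max):
        P = rng.standard_normal(X.shape)
        prop = X - eta * g(X) + np.sqrt(2 * eta / xi) * P
        if in_K(prop) and np.linalg.norm(prop - X) <= D:
            X = prop
        if F_hat(X) < best_val:
            best, best_val = X.copy(), F_hat(X)
    return best

# Toy run: exact gradient oracle for F(x) = ||x||^2 on the unit ball.
rng = np.random.default_rng(1)
out = sgld(g=lambda x: 2 * x, F_hat=lambda x: x @ x,
           x0=[0.9, 0.0], xi=50.0, eta=0.01, i_max=2000, D=1.0,
           in_K=lambda x: np.linalg.norm(x) <= 1.0, rng=rng)
assert out @ out < 0.5  # best iterate moved well below the starting value 0.81
```

In the paper's setting, `g` would be the stochastic gradient of the smoothed oracle rather than an exact gradient.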
Using Algorithm 1 as a subroutine, we define the following simulated annealing algorithm:
Algorithm 2 Simulated annealing SGLD
input: Convex constraint set K̂ ⊆ Rd , initial point x0 ∈ K̂, inverse temperatures ξ0 , ξ1 , . . . , ξkmax ,
step sizes η0 , η1 , . . . , ηkmax , parameters kmax , imax ∈ N, D > 0 and r > 0, and a stochastic gradient
oracle g for some f˜ : K̂ → R.
1: Sample y0 from the uniform distribution on B(x0 , r) ∩ K̂.
2: for k = 0 to kmax do
3: Run Algorithm 1 on K̂, with inverse temperature ξ = ξk , step size ηk , parameter imax , the oracle g, and the initial point X0 = yk . Let xk+1 be the output of Algorithm 1.
4: Sample yk+1 from the uniform distribution on B(xk+1 , r) ∩ K̂.
5: end for
output: xkmax
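Algorithm 2 can likewise be sketched as a thin wrapper that re-runs the epoch subroutine, re-initializing each epoch uniformly from a small ball around the previous output (our own sketch; `run_epoch` is a stand-in for one call to Algorithm 1, and the sampling step assumes B(x, r) ∩ K̂ can be sampled by rejection):

```python
import numpy as np

def sample_ball_cap(center, r, in_K, rng):
    """Rejection-sample the uniform distribution on B(center, r) ∩ K_hat."""
    while True:
        u = rng.standard_normal(center.shape)
        u *= r * rng.uniform() ** (1.0 / len(center)) / np.linalg.norm(u)
        if in_K(center + u):
            return center + u

def annealed_sgld(run_epoch, x0, xis, etas, r, in_K, rng):
    """Algorithm 2 sketch: run_epoch(y, xi, eta) is one SGLD epoch."""
    x = np.array(x0, dtype=float)
    y = sample_ball_cap(x, r, in_K, rng)      # line 1 of Algorithm 2
    for xi, eta in zip(xis, etas):            # epochs k = 0 .. k_max
        x = run_epoch(y, xi, eta)             # run the subroutine from y_k
        y = sample_ball_cap(x, r, in_K, rng)  # re-initialize near the output
    return x

# Toy usage: plain gradient descent on F(x) = ||x||^2 stands in for Algorithm 1.
rng = np.random.default_rng(0)
in_K = lambda z: np.linalg.norm(z) <= 1.0

def run_epoch(y, xi, eta):
    x = y.copy()
    for _ in range(50):
        x = x - eta * 2 * x
    return x

out = annealed_sgld(run_epoch, [0.9, 0.0], xis=[1.0, 2.0, 4.0],
                    etas=[0.1, 0.1, 0.1], r=0.05, in_K=in_K, rng=rng)
```

The re-initialization from a small ball is the crucial design choice discussed in Section 2: it keeps each epoch's chain inside a sub-level set of the objective rather than letting it roam the whole constraint set.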
4.2 Statement of Our Main Theorem
We now formally state our main result, where we bound the error and running time when Algorithm 2 is used to solve Problem 1, assuming access to an oracle F̂ that may be non-smooth or even non-continuous.
Theorem 4.1. (Main Theorem: Error bounds and running time for Algorithm 2) Let F : K → R be a convex function, and K ⊆ Rd be a convex set, that satisfy the assumptions stated in Section 3.2. Let F̂ be a noisy oracle for F with multiplicative noise of level α ≤ O(1) and additive noise of level β, as in Equation (6). Let ε̂ ≥ 75β and δ ′ > 0. Then there exist parameters imax , kmax , $(\xi_k, \eta_k)_{k=0}^{k_{max}}$ , and σ, such that if we run Algorithm 2 with a smoothed version f˜σ of the oracle F̂ (as defined in Section 3.3), with probability at least 1 − δ ′ the algorithm outputs a point x̂ such that
$$F(\hat{x}) - F(x^\star) \le \hat{\varepsilon},$$
with running time that is polynomial in d, $e^{(d\alpha + d\frac{\beta}{\hat{\varepsilon}})c}$, R, λ, $\frac{1}{r_0}$, β, $\frac{1}{\hat{\varepsilon}}$, and $\log\frac{1}{\delta'}$, where c = O(log(R + λ)).
We give a proof of Theorem 4.1 in Section 5.8.
The precise values of the parameters in this theorem are quite involved and appear in the proofs at the following places: ξk appears in (41), ηk in (43), imax in (42), and the expression for kmax can be found in (40). Below we present their approximate magnitudes. The inverse temperature parameter ξk , the smoothing parameter σ, and the number of epochs kmax satisfy:
$$\tilde{\Omega}\left(\frac{1}{\lambda R}\right) \le \xi_k \sim \tilde{O}\left(d \cdot \max\left\{\frac{1}{\hat{F}(X_0^{(0)}) \cdot 10^{-k}},\ \frac{1}{\hat{\varepsilon}}\right\}\right) \le \tilde{O}\left(\frac{d}{\hat{\varepsilon}}\right),$$
$$\sigma = \min\left\{\frac{r}{2},\ \frac{\beta}{\lambda(1+\alpha)\sqrt{d}\sqrt{\log(\frac{1}{\alpha}) + d}}\right\},$$
$$k_{max} \sim \log\frac{R}{\hat{\varepsilon}}.$$
To make the expressions for ηk and imax understandable, assume that λR > 1, β, that r > β/λ, d > ε̂, and that R > λ. Then
$$\eta_k \sim \frac{\hat{\varepsilon}^4}{d^9 R^5 \lambda^8 \beta^4}\, e^{-d(\alpha + \frac{\beta}{\hat{\varepsilon}})c_0}\, 10^{-k},$$
$$1 \le i_{max} \le \frac{d^{6.5}}{\hat{\varepsilon}^3}\, R^{\frac{11}{2}} \lambda^6 \beta\, e^{d(\alpha + \frac{\beta}{\hat{\varepsilon}})c_0(1 + c''\alpha)},$$
where $c_0 \sim \log\frac{R^2}{r d \delta \min\{\hat{\varepsilon}/\lambda,\, r_0\}}$ and c′′ is a constant factor. In particular, the running time is given by imax × kmax .
5 Proofs
5.1 Assumptions about the smooth oracle
Assumptions about the smooth oracle
We first show how to optimize F if one has access to a smooth noisy objective function f˜ : K → R
(Sections 5.2, 5.3, 5.5). Then, in Section 5.6, we show how one can obtain a smooth noisy objective
function from a non-smooth and possibly non-continuos noisy objective function F̂ . We will make
the following assumptions (we prove in Section 5.6 that these assumptions hold for a smoothed
version f˜σ of a non-smooth noisy objective function F̂ ). We assume the following noise model for
f˜:
f˜(x) = F (x)(1 + ψ(x)) + ϕ(x),
for all x ∈ K where |ψ(x)| ≤ α and |ϕ(x)| ≤ β. Note that, with a slight abuse of notation, in
Section 3.3 we also used the letters α and β to denote the noise levels of the non-smooth oracle F̂ ,
even though typically F̂ will have lower noise levels than f˜. In this section, as well as in Sections
5.2-5.5 where we assume direct access to a stochastic gradient for the smooth oracle f˜, we will
instead refer to the noise levels of F̂ by “α̂” and “β̂”. In Section 4.2, on the other hand, “α” and
“β” will be used exclusively to denote the noise levels of F̂ . We also assume that
$$\alpha \ge \hat{\alpha} \quad \text{and} \quad \beta \ge \hat{\beta}. \tag{8}$$
We make the following assumptions about f˜:
• ψ(x) > −α† for some 0 ≤ α† < 1. This assumption is needed because otherwise we might have
ψ(x) = −1 for all x ∈ K, in which case f˜(x) would give no information about F .
• k∇f˜(x)k ≤ λ̃ for all x ∈ K.
• We assume that we have access to a stochastic gradient g such that ∇f˜(x) = E[g(x)] for every
x ∈ K. However, we do not assume that we have oracle access to f˜ itself.
Assumption 1. (Based on assumption A in [17]) Let f˜ : K → R be differentiable, and let g ≡ gW : K → Rd be such that ∇f˜(x) = E[gW (x)], where W is a random variable. We will assume that
1. There exists ζmax > 0 such that for every compact convex K̂ ⊆ Rd , every x ∈ K̂r0 , and every 0 ≤ ζ ≤ ζmax , the random variable Z ∼ N (x, 2ζId ) satisfies $\mathbb{P}(Z \in K) \ge \frac{1}{3}$. We prove this assumption in Lemma 5.14.
2. There exists L > 0 such that $|\tilde{f}(y) - \tilde{f}(x) - \langle y - x, \nabla\tilde{f}(x)\rangle| \le \frac{L}{2}\|y - x\|^2$ for all x, y ∈ K.
3. There exist bmax > 0 and G > 0 such that for any u ∈ Rd with ‖u‖ ≤ bmax the stochastic gradient g(x) satisfies $\mathbb{E}[e^{\langle u, g(x)\rangle^2} \mid x] \le e^{G^2\|u\|^2}$.
5.2 Conductance and bounding the Cheeger constant
To help us bound the convergence rate, we define the Cheeger constant of a distribution, as well as the conductance of a Markov chain. For any set K̂ and any function f : K̂ → R, define
$$\mu_f^{\hat{K}}(x) := \frac{e^{-f(x)}}{\int_{\hat{K}} e^{-f(x)}\,dx} \qquad \forall x \in \hat{K}.$$
Definition 5.1. (Cheeger constant) For all V ⊆ K̂, define the Cheeger constant to be
$$C_f^{\hat{K}}(V) := \liminf_{\varepsilon \downarrow 0}\ \inf_{A \subseteq V}\ \frac{\mu_f^{\hat{K}}(A_\varepsilon) - \mu_f^{\hat{K}}(A)}{\varepsilon\, \mu_f^{\hat{K}}(A)},$$
recalling that $A_\varepsilon = A + B(0, \varepsilon)$.
For a Markov chain Z0 , Z1 , . . . on K̂ with stationary distribution µZ and transition kernel QZ , we define the conductance on a subset V to be
$$\Phi_Z^{\hat{K}}(V) := \inf_{A \subseteq V} \frac{\int_A Q_Z(x, \hat{K}\backslash A)\,\mu_Z(x)\,dx}{\mu_Z(A)} \qquad \forall V \subseteq \hat{K},$$
and the hitting time
$$\tau_Z(A) := \inf\{i : Z_i \in A\} \qquad \forall A \subseteq \hat{K}.$$
Finally, we define the notion of two Markov chains being ε0 -close:
Definition 5.2. If W0 , W1 , . . . and Z0 , Z1 , . . . are Markov chains on a set K̂ with transition kernels
QW and QZ , respectively, we say that W is ε0 -close to Z with respect to a set U ⊆ K̂ if
QZ (x, A) ≤ QW (x, A) ≤ (1 + ε0 )QZ (x, A)
for every x ∈ K̂\U and A ⊆ K̂\{x}.
We now give a generalization of Proposition 2 in [17]:
Lemma 5.3. (Bounding the Cheeger constant) Assume that K̂ ⊆ K0 is convex, and that F is convex and λ-Lipschitz on K̂r0 . Then for every ε > 0 and all $\xi \ge \frac{4d\log(R/\min(\frac{\varepsilon}{2\lambda},\, r_0))}{\varepsilon}$ we have
$$C_{\xi F}^{\hat{K}_{r_0}}(\hat{K}_{r_0}\backslash U^\varepsilon) \ge \frac{1}{R}.$$
Proof. Let $\hat{x}^\star$ be a minimizer of F on K̂r0 . Let $\hat{r} = \min(\frac{\varepsilon}{2\lambda}, r_0)$. Then since K̂r0 = K̂ + B(0, r0 ), for some a ∈ K0 there is a closed ball B(a, r̂) ⊆ K̂r0 with $\hat{x}^\star \in B(a, \hat{r})$. By the Lipschitz property, we have
$$\sup\{F(x) : x \in B(a, \hat{r})\} \le F(\hat{x}^\star) + 2\hat{r}\lambda \le F(\hat{x}^\star) + \frac{\varepsilon}{2}.$$
Therefore,
$$\inf\left\{\frac{e^{-\xi F(x)}}{e^{-\xi F(y)}} : x \in B(a, \hat{r}),\ y \in K_0\backslash U^\varepsilon\right\} \ge e^{\xi\varepsilon/2}. \tag{9}$$
Then Equation (9) implies that
$$\frac{\mu_{\xi F}^{\hat{K}_{r_0}}(\hat{K}_{r_0}\backslash U_\varepsilon)}{\mu_{\xi F}^{\hat{K}_{r_0}}(U_\varepsilon)} \le e^{-\xi\varepsilon/2}\, \frac{\mathrm{Vol}(B(c, R))}{\mathrm{Vol}(B(a, \hat{r}))} = e^{-\xi\varepsilon/2}\left(\frac{R}{\hat{r}}\right)^d = e^{-\xi\varepsilon/2 + d\log(R/\hat{r})} \le \frac{1}{2},$$
which implies that
$$\mu_{\xi F}^{\hat{K}_{r_0}}(\hat{K}_{r_0}\backslash U_\varepsilon) \le \frac{1}{2}. \tag{10}$$
Then by Theorem 2.6 of [13], for all $A \subseteq \hat{K}_{r_0}\backslash U^\varepsilon$ and any 0 < δ < 2R we have
$$\begin{aligned}
\mu_{\xi F}^{\hat{K}_{r_0}}(A_\delta\backslash A) &\ge \frac{2\frac{\delta}{2R}}{1 - \frac{\delta}{2R}} \min\left(\mu_{\xi F}^{\hat{K}_{r_0}}(A),\ \mu_{\xi F}^{\hat{K}_{r_0}}(\hat{K}_{r_0}\backslash A_\delta)\right) \\
&= \frac{2\frac{\delta}{2R}}{1 - \frac{\delta}{2R}} \min\left(\mu_{\xi F}^{\hat{K}_{r_0}}(A),\ 1 - \mu_{\xi F}^{\hat{K}_{r_0}}(A_\delta)\right) \\
&= \frac{2\frac{\delta}{2R}}{1 - \frac{\delta}{2R}} \min\left(\mu_{\xi F}^{\hat{K}_{r_0}}(A),\ 1 - \mu_{\xi F}^{\hat{K}_{r_0}}(A_\delta\backslash A) - \mu_{\xi F}^{\hat{K}_{r_0}}(A)\right) \\
&\overset{\text{Eq. (10)}}{\ge} \frac{2\frac{\delta}{2R}}{1 - \frac{\delta}{2R}} \min\left(\mu_{\xi F}^{\hat{K}_{r_0}}(A),\ 1 - \mu_{\xi F}^{\hat{K}_{r_0}}(A_\delta\backslash A) - \frac{1}{2}\right) \\
&= \frac{2\frac{\delta}{2R}}{1 - \frac{\delta}{2R}} \min\left(\mu_{\xi F}^{\hat{K}_{r_0}}(A),\ \frac{1}{2} - \mu_{\xi F}^{\hat{K}_{r_0}}(A_\delta\backslash A)\right) \\
&= \frac{2\frac{\delta}{2R}}{1 - \frac{\delta}{2R}}\, \mu_{\xi F}^{\hat{K}_{r_0}}(A),
\end{aligned}$$
provided that 0 < δ < ∆A for some small enough value ∆A > 0 that depends on A. Therefore for every $A \subseteq \hat{K}_{r_0}\backslash U_\varepsilon$ there exists ∆A > 0 such that
$$\frac{\mu_{\xi F}^{\hat{K}_{r_0}}(A_\delta\backslash A)}{\delta\, \mu_{\xi F}^{\hat{K}_{r_0}}(A)} \ge \frac{2\frac{1}{2R}}{1 - \frac{\delta}{2R}} \qquad \forall\, 0 < \delta < \Delta_A.$$
Taking δ → 0, we get
$$C_{\xi F}^{\hat{K}_{r_0}}(\hat{K}_{r_0}\backslash U^\varepsilon) \ge \frac{1}{R}.$$
5.3 Bounding the escape probability
We will use the Lemma proved in this section (Lemma 5.7) to show that the SGLD chain X defined
in Algorithm 1 does not drift too far from its initial objective function value with high probability.
This will allow us to bound the noise, since the noise is proportional to the objective function F .
The organization of this section is as follows: we first define a “toy” algorithm and an associated
Markov chain Y that will allow us to prove Lemma 5.7 (and which we will later use to prove
Theorem 2). We then prove Propositions 5.5 and 5.6, and Lemma 5.7. Proposition 5.5 is used to
prove Proposition 5.6, which in turn is used to prove Lemma 5.7.
We begin by recalling the Metropolis-adjusted version of Algorithm 1 defined in [17], which defines a Markov chain Y0 , Y1 , . . . with stationary distribution $\mu_{\xi\tilde{f}}^K$. Note that this is a “toy” algorithm which is not meant to be implemented; rather, we state this algorithm only to define the Markov chain Y0 , Y1 , . . ., which we will use as a tool to prove Lemma 5.7 and Theorem 2.
Algorithm 3 Lazy Metropolis-adjusted SGLD
input: Convex constraint set K̂ ⊆ Rd , inverse temperature ξ > 0, step size η > 0, parameters imax ∈ N and D > 0, stochastic gradient oracle g for some f˜ : K → R.
input: Initial point Y0 ∈ Rd .
1: for i = 0 to imax do
2: Sample Pi ∼ N (0, Id ).
3: Set $Y'_{i+1} = Y_i - \eta g(Y_i) + \sqrt{\frac{2\eta}{\xi}}\, P_i$.
4: Set $Y''_{i+1} = Y'_{i+1}$ if $Y'_{i+1} \in \hat{K} \cap B(Y_i, D)$. Otherwise, set $Y''_{i+1} = Y_i$.
5: Set $Y'''_{i+1} = Y''_{i+1}$ with probability $\min\left(1,\ \frac{E\left[e^{-\frac{1}{4\eta}\|Y_i - Y''_{i+1} + \eta g(Y''_{i+1})\|^2}\right]}{E\left[e^{-\frac{1}{4\eta}\|Y''_{i+1} - Y_i + \eta g(Y_i)\|^2}\right]}\, e^{\tilde{f}(Y_i) - \tilde{f}(Y''_{i+1})}\right)$. Otherwise, set $Y'''_{i+1} = Y_i$.
6: Set Vi = 1 with probability $\frac{1}{2}$ and set Vi = 0 otherwise. Let $Y_{i+1} = Y'''_{i+1}$ if Vi = 1; otherwise, let $Y_{i+1} = Y_i$.
7: end for
output: $Y_{i^\star}$, where $i^\star := \mathrm{argmin}_i\{\tilde{f}(Y_i)\}$.
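For intuition about step 5, the following sketch (our own simplification, not the paper's algorithm) replaces the expectation-based acceptance ratio with the standard Metropolis-adjusted Langevin (MALA) ratio computed from the exact Gaussian proposal densities, and keeps the lazy coin of step 6; the target and parameters in the toy run are illustrative.

```python
import numpy as np

def lazy_mala_step(X, f, grad_f, xi, eta, rng):
    """One lazy Metropolis-adjusted Langevin step targeting exp(-xi * f).
    Simplified from Algorithm 3: exact Gaussian proposal densities replace
    the expectation-based ratio of step 5; step 6's lazy coin is kept."""
    if rng.uniform() < 0.5:  # lazy coin: stay put half the time
        return X
    noise = np.sqrt(2 * eta / xi) * rng.standard_normal(X.shape)
    prop = X - eta * grad_f(X) + noise

    def log_q(a, b):  # log density (up to constants) of proposing b from a
        return -xi * np.sum((b - a + eta * grad_f(a)) ** 2) / (4 * eta)

    log_acc = -xi * (f(prop) - f(X)) + log_q(prop, X) - log_q(X, prop)
    return prop if np.log(rng.uniform()) < log_acc else X

# Toy run: target N(0, 1) via f(x) = x^2/2 at xi = 1; the empirical
# variance of the chain should be close to 1.
rng = np.random.default_rng(0)
X = np.array([0.0])
samples = []
for i in range(20000):
    X = lazy_mala_step(X, lambda z: 0.5 * float(np.sum(z ** 2)),
                       lambda z: z, xi=1.0, eta=0.1, rng=rng)
    if i >= 2000:
        samples.append(X[0])
var_est = float(np.var(samples))
```

The Metropolis correction is what pins the stationary distribution to $e^{-\xi\tilde{f}}$ exactly, which is why the paper uses the chain Y as a reference in the coupling arguments below.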
We now define a coupling of three Markov chains. We will use this coupling to prove Lemma 5.7
and Theorem 2.
Definition 5.4. (Coupled Markov chains) Let X and X̂ be Markov chains generated by Algorithm 1 with constraint sets K and K̂r0 , respectively, where K̂ ⊆ K and K̂r0 = K̂ + B(0, r0 ). Let Y be the Markov chain generated by Algorithm 3. We define a coupling of the Markov chains X, X̂ and Y in the following way: Define recursively t(0) = 0,
$$t(i+1) = \min\{j \in \mathbb{N} : j > t(i),\ V_j = 1\}.$$
Let Q0 , Q1 , . . . ∼ N (0, Id ) be i.i.d. Let X0 = Y0 = X̂0 . Let Y be the chain in Algorithm 3 generated by setting Pi = Qi for all i ≥ 0 with constraint set K. Let X be the chain in Algorithm 1 generated by setting Pi = Qt(i) for all i ≥ 0 with constraint set K. Let X̂ be the chain in Algorithm 1 generated by setting Pi = Qt(i) for all i ≥ 0 with constraint set K̂r0 .
We now bound the escape probability of the Markov chain Y from a sub-level set of a given height,
assuming that it is initialized from its stationary distribution conditioned on a small ball.
Proposition 5.5. (Escape probability from stationary distribution on a small ball) Let r > 0 be such that r0 ≥ r > 0 and let ξ > 0. Let Y0 , Y1 , . . . be the Markov chain defined in Algorithm 3 with stationary distribution $\pi = \mu_{\xi\tilde{f}}^K$, and let Y0 be sampled from $\pi_0 := \mu_{\xi\tilde{f}}^{B(y,r)\cap K}$, where π0 is the distribution of π conditioned on B(y, r) ∩ K for some y ∈ K. Then for every i ≥ 0 we have
$$\mathbb{P}(\tilde{f}(Y_i) \ge h) \le e^{\xi[\tilde{f}(y) + \tilde{\lambda}r] - \xi h + d\log(\frac{2R}{r})} \qquad \forall h \ge 0.$$
Proof. Fix h ≥ 0. Define S1 := B(y, r) ∩ K and S2 := {x ∈ K : f˜(x) ≥ h}. Let $c_\pi = \left(\int_K e^{-\xi\tilde{f}(x)}\,dx\right)^{-1}$ be the normalizing constant of π. Since π is the stationary distribution of Y ,
$$\mathbb{P}(Y_i \in S_2) \le \frac{\pi(S_2)}{\pi(S_1)} \qquad \forall\, i \in \{0, \ldots, i_{max}\}. \tag{11}$$
But $\|\nabla\xi\tilde{f}\| = \|\xi\nabla\tilde{f}\| \le \xi\tilde{\lambda}$, implying that
$$\pi(S_1) = \pi(B(y, r)) \ge c_\pi e^{-[\xi\tilde{f}(y) + \xi\tilde{\lambda}r]} \times \mathrm{Vol}(B(y, r) \cap K) \ge \frac{1}{2}\, c_\pi e^{-\xi[\tilde{f}(y) + \tilde{\lambda}r]} \times \mathrm{Vol}(B(0, r)), \tag{12}$$
since B(y, r) ∩ K contains a ball of radius $\frac{1}{2}r$ because r ≤ r0 . Also,
$$\pi(S_2) = \pi(\{x : \tilde{f}(x) \ge h\}) \le c_\pi e^{-\xi h}\, \mathrm{Vol}(K) \le c_\pi e^{-\xi h}\, \mathrm{Vol}(B(0, R)). \tag{13}$$
Therefore,
$$\mathbb{P}(Y_i \in S_2) \overset{\text{Eq. (11)}}{\le} \frac{\pi(S_2)}{\pi(S_1)} \overset{\text{Eq. (12), (13)}}{\le} e^{\xi[\tilde{f}(y) + \tilde{\lambda}r] - \xi h} \times \left(\frac{R}{\frac{1}{2}r}\right)^d = e^{\xi[\tilde{f}(y) + \tilde{\lambda}r] - \xi h + d\log(\frac{2R}{r})}.$$
We now extend our bound for the escape probability of the Markov chain Y (Proposition 5.5) to
the case where Y is instead initialized from the uniform distribution on a small ball:
Proposition 5.6. (Escape probability from uniform distribution on a small ball) Let r > 0 be such that r0 ≥ r > 0 and let ξ > 0. Let ν0 be the uniform distribution on B(y, r) ∩ K for some y ∈ K. Let Y0 , Y1 , . . . be the Markov chain defined in Algorithm 3 with stationary distribution $\pi = \mu_{\xi\tilde{f}}^K$, and let Y0 be sampled from ν0 . Then for every i ≥ 0 we have
$$\mathbb{P}(\tilde{f}(Y_i) \ge h) \le e^{\xi[\tilde{f}(y) + \tilde{\lambda}r] - \xi h + d\log(\frac{2R}{r})} + 2r\tilde{\lambda}\xi \qquad \forall h \ge 0. \tag{14}$$
Moreover, for every A ⊆ K, we have
$$\nu_0(A) \le e^{2R\tilde{\lambda}\xi + d\log(\frac{2R}{r})}\, \pi(A). \tag{15}$$
Proof. Since $\|\nabla\xi\tilde{f}(x)\| \le \xi\tilde{\lambda}$,
$$\sup_{x \in B(y,r) \cap K} \xi\tilde{f}(x) - \inf_{x \in B(y,r) \cap K} \xi\tilde{f}(x) \le 2r\tilde{\lambda}\xi,$$
and hence
$$\frac{\inf_{x \in B(y,r) \cap K} \pi(x)}{\sup_{x \in B(y,r) \cap K} \pi(x)} \ge e^{-2r\tilde{\lambda}\xi}. \tag{16}$$
Define $\pi_0 := \mu_{\xi\tilde{f}}^{B(y,r)\cap K}$ to be the distribution of π conditioned on B(y, r) ∩ K. Let Z be sampled from the distribution π0 . Let Z ′ = Y0 with probability $\min\left(\frac{\pi_0(Y_0)}{\nu_0(Y_0)}, 1\right)$; otherwise let Z ′ = Z. Then Z ′ has distribution π0 . Moreover, by Equation (16), Z ′ = Y0 with probability at least $e^{-2r\tilde{\lambda}\xi}$. Therefore, by Proposition 5.5,
$$\mathbb{P}(\tilde{f}(Y_i) \ge h) \le e^{\xi[\tilde{f}(y) + \tilde{\lambda}r] - \xi h + d\log(\frac{2R}{r})} + 1 - e^{-2r\tilde{\lambda}\xi} \le e^{\xi[\tilde{f}(y) + \tilde{\lambda}r] - \xi h + d\log(\frac{2R}{r})} + 2r\tilde{\lambda}\xi \qquad \forall h \ge 0.$$
This proves Equation (14). Now, since $\|\xi\nabla\tilde{f}(x)\| \le \xi\tilde{\lambda}$ and K ⊆ B(c, R),
$$\sup_{x \in K} \xi\tilde{f}(x) - \inf_{x \in K} \xi\tilde{f}(x) \le 2R\tilde{\lambda}\xi,$$
and so
$$\frac{\inf_{x \in K} \pi(x)}{\sup_{x \in K} \pi(x)} \ge e^{-2R\tilde{\lambda}\xi}. \tag{17}$$
Therefore, for every z ∈ K we have
$$\begin{aligned}
\frac{\pi(z)}{\nu_0(z)} &= \mathrm{Vol}(B(y, r) \cap K) \times \pi(z) \\
&\ge \frac{1}{2}\mathrm{Vol}(B(0, r)) \times \pi(z) \\
&\ge \frac{1}{2}\mathrm{Vol}(B(0, r)) \times \frac{1}{\mathrm{Vol}(B(0, 2R))}\, \frac{\inf_{x \in K}\pi(x)}{\sup_{x \in K}\pi(x)} \\
&\overset{\text{Eq. (17)}}{\ge} \left(\frac{2R}{r}\right)^{-d} e^{-2R\tilde{\lambda}\xi} = e^{-2R\tilde{\lambda}\xi - d\log(\frac{2R}{r})}, \tag{18}
\end{aligned}$$
where the second inequality holds since r ≤ r0 . This proves Equation (15).
We are now ready to bound the escape probability of the SGLD Markov chain X defined in
Algorithm 1 when it is initialized from the uniform distribution on a small ball:
Lemma 5.7. (Escape probability for unadjusted SGLD chain) Let r > 0 be such that r0 ≥ r > 0 and let ξ > 0. Let ν0 be the uniform distribution on B(y, r) ∩ K for some y ∈ K, and let X0 be sampled from ν0 . Let X0 , X1 , . . . be the Markov chain generated by Algorithm 1 with constraint set K. Let $\delta \le \frac{1}{4}$ and let $0 < \eta \le \frac{\delta}{i_{max} \times 16d(G^2 + L)}$; then
$$\mathbb{P}(\tilde{f}(X_i) \ge h) \le e^{\xi[\tilde{f}(y) + \tilde{\lambda}r] - \xi h + d\log(\frac{2R}{r})} + 2r\tilde{\lambda}\xi + \delta \qquad \forall h \ge 0.$$
Proof. Let Y0 , Y1 , . . . be the Markov chain generated by Algorithm 3, and let X0 , X1 , . . . be the Markov chain defined in Algorithm 1, where both chains have constraint set K. Couple the Markov chains X and Y as in Definition 5.4. By Claim 2 in the proof of Lemma 13 of [17], for each i ≥ 0 the rejection probability $\mathbb{P}(Y_{i+1} = Y_i)$ is bounded above by $1 - e^{-16\eta d(G^2 + L)} \le 1 - e^{-\frac{\delta}{i_{max}}} \le \frac{\delta}{i_{max}}$. Hence, for all 0 ≤ i ≤ imax we have
$$\mathbb{P}(X_j = Y_j\ \forall\, 0 \le j \le i) \ge \left(1 - \frac{\delta}{i_{max}}\right)^i \ge 1 - \delta. \tag{19}$$
Thus,
$$\mathbb{P}(\tilde{f}(X_i) \ge h) \le \mathbb{P}(\tilde{f}(Y_i) \ge h) + \mathbb{P}(X_j \ne Y_j \text{ for some } 0 \le j \le i) \overset{\text{Eq. (19)}}{\le} \mathbb{P}(\tilde{f}(Y_i) \ge h) + \delta \overset{\text{Prop. 5.6}}{\le} e^{\xi[\tilde{f}(y) + \tilde{\lambda}r] - \xi h + d\log(\frac{2R}{r})} + 2r\tilde{\lambda}\xi + \delta \qquad \forall h \ge 0.$$
5.4 Comparing noisy functions
In this section we bound the ratio of F̂ to f˜. We use this bound to prove Theorem 2 in Section 5.5.
Lemma 5.8. (Bounding the ratio of two noisy objective functions) Fix x ∈ K and let t ≥ 5β. Define Ĥ = max{f˜(x), t} and let Jˆ = max{F̂ (x), t}. Then,
$$\frac{1}{5}\hat{H} \le \hat{J} \le 5\hat{H}.$$
Proof. By our assumption in Equation (8), we have that
$$|F(x) - \hat{F}(x)| \le \hat{\alpha}F(x) + \hat{\beta} \le \alpha F(x) + \beta.$$
Since $\alpha < \frac{1}{2}$, we have
$$F(x) \le 2\hat{F}(x) + 2\beta. \tag{20}$$
We also have that
$$|\tilde{f}(x) - F(x)| \le \alpha F(x) + \beta,$$
implying that
$$\tilde{f}(x) \le 2F(x) + \beta. \tag{21}$$
Therefore, combining Equations (20) and (21), we have
$$\tilde{f}(x) \le 4\hat{F}(x) + 5\beta, \tag{22}$$
implying that
$$\max(\tilde{f}(x), 5\beta) \le \max(4\hat{F}(x) + 5\beta, 20\beta).$$
Thus,
$$\max(\tilde{f}(x), 5\beta) \le 5\max(\hat{F}(x), 5\beta).$$
Thus, we have Ĥ ≤ 5Jˆ. By a similar argument as above, we can also show that Jˆ ≤ 5Ĥ.
5.5 Bounding the error and running time: The smooth case
In this section we will show how to bound the error and running time of Algorithm 1, if we
assume that we have access to a stochastic gradient oracle g for a smooth noisy function f˜, which
approximates the convex function F . In particular, we do not assume access to the smooth function
f˜ itself, only to g. We also assume access to a non-smooth oracle F̂ , which we use to determine the
temperature parameter for our Markov chain based on the value of $\hat{F}(X_0^{(k)})$ at the beginning of each
epoch. To prove the running time and error bounds, we will use the results of Sections 5.2 and 5.3.
Recall that in this section α and β refer exclusively to the multiplicative and additive noise
levels of f˜. We must first define parameters that will be needed to formally state and prove our
error and running time bounds:
• Fix $0 \le \varepsilon < \frac{1}{25}$ and δ > 0.
• Set parameters of Algorithms 1 and 2 as follows:
– Let y0 ∈ K and let H0 := f˜(y0 ).
– Fix $D \ge \frac{1}{\varepsilon}\beta$. For every 0 ≤ k ≤ kmax , let Hk := f˜(xk ) and define Ĥk := max(Hk , D).
– Assume, without loss of generality, that $r_0 \le \frac{D}{\lambda}$.⁵
– For every 0 ≤ k ≤ kmax , let Jk := F̂ (xk ). Define Jˆk := max(Jk , D).
– Set the number of epochs to be $k_{max} = \left\lceil \frac{\log(5J_0/D)}{\log(\frac{1}{25\varepsilon})} \right\rceil + 1$.
– At every k ≥ 0, set the temperature to be $\xi_k = \frac{4d\log(R/\min(\frac{\varepsilon}{2\lambda}D,\, r_0))}{\frac{1}{5}\varepsilon\hat{J}_k}$. Define $\bar{\xi} := \frac{4d\log(R/\min(\frac{\varepsilon}{2\lambda}D,\, r_0))}{\frac{1}{25}\varepsilon D}$.
– Set $r = \frac{\delta}{\bar{\xi}\tilde{\lambda}}$.
– Define
$$\bar{\eta} := c\min\left\{\zeta_{max},\ \frac{1}{\lambda},\ \frac{\omega^2 b_{max}^2}{d^2},\ \frac{1}{Rd^3((\bar{\xi}G)^2 + \bar{\xi}L)^2}\right\}$$
and
$$B' := \frac{d\log(2\frac{R}{r}) + \delta + 1 + \log(\frac{1}{\delta})}{2d\log(R/\min(\frac{\varepsilon}{2\lambda}D,\, r_0))}.$$
– Set the number of steps imax for which we run the Markov chain X in each epoch to be
$$i_{max} = \left\lceil \frac{8R\tilde{\lambda}\xi_k + 4d(1 + \log(1 + \bar{\xi}) + \log(\frac{2R\tilde{\lambda}}{\delta})) + 4\log(\frac{1}{\delta})}{(1 - \frac{\varepsilon}{150}\alpha)\,\frac{\bar{\eta}}{1536R}\sqrt{1/d}\; e^{-\frac{150d}{\varepsilon}\left[\frac{\alpha}{1-\alpha^\dagger}(3 + \varepsilon B' + \frac{\beta}{D}) + \frac{\beta}{D}\right]\log(R/\min(\frac{\varepsilon}{2\lambda}D,\, r_0))}} \right\rceil + 1.$$
– Define $B := \frac{d\log(2\frac{R}{r}) + \delta + \log(i_{max} + 1) + \log(\frac{1}{\delta})}{2d\log(R/\min(\frac{\varepsilon}{2\lambda}D,\, r_0))}$.
– For every ξ > 0 define
$$\eta(\xi) := c\min\left\{\zeta_{max},\ \frac{1}{\lambda},\ \frac{\omega^2 b_{max}^2}{d^2},\ \frac{\left(e^{-\frac{100d}{\varepsilon}\left[\frac{\alpha}{1-\alpha^\dagger}(3 + \varepsilon B + \frac{\beta}{D}) + \frac{\beta}{D}\right]\log(R/\min(\frac{\varepsilon}{2\lambda}D,\, r_0))}\right)^2}{Rd^3((\xi G)^2 + \xi L)^2}\right\},$$
where ω = εD, and c is the universal constant in Lemma 15 of [17]. Set the step size at each epoch to be ηk = η(ξk ). Also define η̄ = η(ξ̄).
– Set $D = \sqrt{2\bar{\eta}d}$.
We now state the error and running time bounds:
5 This is without loss of generality since if there exists a convex body K0 such that K0 + B(0, r0) = K, then for every 0 < ρ ≤ r0 there must also exist a convex body K00 such that K00 + B(0, ρ) = K, namely K00 = K0 + B(0, r0 − ρ).
Theorem 5.9. (Error and running time bounds when using a smooth noisy objective function) Assume that α ≤ ε/32. Then with probability at least 1 − 6δ(kmax + 1) Algorithm 2 returns a point x̂ = xkmax such that
F(x̂) − F(x⋆) ≤ (1/(1 − α))(D + β),
with running time that is polynomial in d, e^{(150d/ε)[(α/(1−α†))(3 + εB0 + β/D) + β/D] log(R/min((ε/(2λ))D, r0))}, R, λ, λ̃, L, G, ζmax, bmax, and log(1/δ).
Proof. Set notation as in Algorithms 1 and 2. Denote by X (k) the Markov chain generated by
Algorithm 1 as a subroutine in the k’th epoch of Algorithm 2 with constraint set K.
Set hk = Ĥk + ξk⁻¹(d log(2R/r) + δ + log(imax + 1) + log(1/δ)). Then by Lemma 5.7,
P(sup_{0≤i≤imax} f̃(X_i^{(k)}) ≥ hk) ≤ (imax + 1) · [e^{ξk[Ĥk + λ̃r] − ξk hk + d log(2R/r)}] + 2rλ̃ξk + δ
≤ e^{ξk Ĥk + δ − ξk hk + d log(2R/r) + log(imax + 1)} + 4δ = 5δ,  (23)
where the second inequality holds since r = δ/(ξ̄λ̃) and ξk ≤ ξ̄ for all k.
But f̃(X_i^{(k)}) ≥ hk if and only if
F(X_i^{(k)})(1 + ψ(X_i^{(k)})) + ϕ(X_i^{(k)}) ≥ hk,
if and only if
F(X_i^{(k)}) ≥ (1/(1 + ψ(X_i^{(k)})))(hk − ϕ(X_i^{(k)})),
since 1 + ψ(X_i^{(k)}) ≥ 0. Also,
(1/(1 + ψ(X_i^{(k)})))(hk − ϕ(X_i^{(k)})) ≤ (1/(1 − α†))(hk + β),
since ψ(X_i^{(k)}) ≥ −α† > −1 and |ϕ(X_i^{(k)})| < β.
Hence,
5δ ≥ P(sup_{0≤i≤imax} f̃(X_i^{(k)}) ≥ hk)    [by Eq. (23)]
≥ P(sup_{0≤i≤imax} F(X_i^{(k)}) ≥ (1/(1 − α†))(hk + β)).  (24)
Define K̂(k) := (K0 ∩ {x ∈ ℝ^d : F(x) ≤ (1/(1 − α†))(hk + β) + λr0}) + B(0, r0). Then
{x ∈ K : F(x) ≤ (1/(1 − α†))(hk + β)} ⊆ K̂(k),  (25)
since ‖∇F‖ ≤ λ. Thus, by Equations (24) and (25),
P(X_i^{(k)} ∈ K̂(k) for all 0 ≤ i ≤ imax) ≥ 1 − 5δ.  (26)
Also, for every x ∈ K̂(k), since r0 ≤ D/λ, we have
F(x) ≤ (1/(1 − α†))(hk + β) + 2λr0
≤ (1/(1 − α†))(hk + β) + 2D
= (1/(1 − α†))(Ĥk + ξk⁻¹(d log(4R/r) + δ + log(imax + 1) + log(1/δ)) + β) + 2D
= (1/(1 − α†))(Ĥk + (εĴk/5)·(d log(4R/r) + δ + log(imax + 1) + log(1/δ))/(4d log(R/min((ε/(2λ))D, r0))) + β) + 2D
≤ (1/(1 − α†))(Ĥk + εĤk B) + β + 2D,  (27)
where the last inequality uses Lemma 5.8.
Thus, for every x ∈ K̂(k),
|N(x)| ≤ αF(x) + β ≤ (α/(1 − α†))Ĥk(1 + εB) + β + 2D + β := Nk.  (28)
Define U_k^{ε00} := {x ∈ K̂(k) : F(x) ≤ ε00} for every ε00 > 0. Then by Lemma 5.3, C_{(ξF)}^{K̂(k)}(K̂(k)\U_k^{εĤk}) ≥ 1/R for any ξ ≥ 4d log(R/min((ε/(2λ))Ĥk, r0))/(εĤk). But by Lemma 5.8,
ξk = 4d log(R/min((ε/(2λ))D, r0))/(εĴk/5) ≥ 4d log(R/min((ε/(2λ))Ĥk, r0))/(εĤk),
so
C_{(ξk f̃)}^{K̂(k)}(K̂(k)\U_k^{εĤk}) ≥ e^{−2ξk Nk} C_{(ξk F)}^{K̂(k)}(K̂(k)\U_k^{εĤk})
≥ (1/R) e^{−2ξk Nk}    [by Lemma 5.3]
≥ (1/R) e^{−(4d Nk/((ε/25)Ĥk)) log(R/min((ε/(2λ))D, r0))}    [by Lemma 5.8]
= (1/R) e^{−(4d/((ε/25)Ĥk))[(α/(1−α†))(Ĥk(1 + εB) + 2D + β) + β] log(R/min((ε/(2λ))D, r0))}    [by Eq. (28)]
≥ (1/R) e^{−(100d/ε)[(α/(1−α†))(3 + εB + β/D) + β/D] log(R/min((ε/(2λ))D, r0))},  (29)
where the first inequality holds by the stability property of the Cheeger constant, and the last inequality is true since Ĥk ≥ D by definition.
Recall that
ηk = c min{ ζmax, ω²b²max/(λ²d), (e^{−(100d/ε)[(α/(1−α†))(3 + εB + β/D) + β/D] log(R/min((ε/(2λ))D, r0))})² / (Rd³((ξkG)² + ξkL)²) }  (30)
≤ c min{ ζmax, ω²b²max/(λ²d), (C_{(ξk f̃)}^{K̂(k)}(K̂(k)\U_k^{εĤk}))² / (d³((ξkG)² + ξkL)²) },    [by Eq. (29)]
where ω = εD.
Recall that X^{(k)} is the subroutine Markov chain described in Algorithm 1 with inputs specified by Algorithm 2 and constraint set K. Let X̂^{(k)} be the Markov chain generated by Algorithm 1 with constraint set K̂(k)_{r0} and initial point X̂_0^{(k)} = X_0^{(k)}. Let Y^{(k)} be the Markov chain generated by Algorithm 3 with constraint set K̂(k)_{r0}. Couple the Markov chains as in Definition 5.4.
Write
(U_k^{εĤk})_{ω/λ} := (U_k^{εĤk} + B(0, ω/λ)) ∩ K̂(k)
as shorthand. Then by Lemma 15 of [17] and by Equation (30), the Markov chain X̂^{(k)} is ε0-close to Y^{(k)} with ε0 ≤ (1/4)Φ_Y(K̂(k)\(U_k^{εĤk})_{ω/λ}) and
Φ_Y(K̂(k)\(U_k^{εĤk})_{ω/λ}) ≥ (1/1536) √(ηk/d) C_{(ξk f̃)}^{K̂(k)}(K̂(k)\(U_k^{εĤk})_{ω/λ})
≥ (1/(1536R)) √(ηk/d) e^{−(100d/ε)[(α/(1−α†))(3 + εB + β/D) + β/D] log(R/min((ε/(2λ))D, r0))}.  (31)    [by Eq. (29)]
Recall that by Equation (15) of Proposition 5.6, for every A ⊆ K0, we have
ν0(A) ≤ e^{2Rλ̃ξk + d log(2R/r)} μ_{ξk f̃}^{K̂(k)}(A).
Therefore, since X̂^{(k)} is ε0-close to Y^{(k)}, by Lemma 11 of [17], with probability at least 1 − δ we have
τ_{X̂^{(k)}}((U_k^{εĤk})_{ω/λ}) ≤ 4 log(e^{2Rλ̃ξk + d log(2R/r)}/δ) / Φ_Y²(K̂(k)\(U_k^{εĤk})_{ω/λ})  (32)
≤ [8Rλ̃ξk + 4d log(2R/r) + 4 log(1/δ)] / [(1/(1536R)) √(ηk/d) e^{−(100d/ε)[(α/(1−α†))(3 + εB + β/D) + β/D] log(R/min((ε/(2λ))D, r0))}]²    [by Eq. (31)]
= [8Rλ̃ξk + 4d(log(ξ̄) + log(2Rλ̃/δ)) + 4 log(1/δ)] / [(1/(1536R)) √(ηk/d) e^{−(100d/ε)[(α/(1−α†))(3 + εB + β/D) + β/D] log(R/min((ε/(2λ))D, r0))}]²
≤ [8Rλ̃ξk + 4d(1 + log(1 + ξ̄) + log(2Rλ̃/δ)) + 4 log(1/δ)] / [(1/(1536R)) √(ηk/d) e^{−(100d/ε)[(α/(1−α†))(3 + εB + β/D) + β/D] log(R/min((ε/(2λ))D, r0))}]²
≤ [8Rλ̃ξk + 4d(1 + log(1 + ξ̄) + log(2Rλ̃/δ)) + 4 log(1/δ)] / [(1/(1536R)) √(η̄/d) e^{−(100d/ε)[(α/(1−α†))(3 + εB + β/D) + β/D] log(R/min((ε/(2λ))D, r0))}]²
≤ [8Rλ̃ξk + 4d(1 + log(1 + ξ̄) + log(2Rλ̃/δ)) + 4 log(1/δ)] / [(1/(1536R)) √(η̄†/d) e^{−(150d/ε)[(α/(1−α†))(3 + εB0 + β/D) + β/D] log(R/min((ε/(2λ))D, r0)) − (75/ε)α log(imax + 1)}]²
= (imax + 1)^{(150/ε)α} · [8Rλ̃ξk + 4d(1 + log(1 + ξ̄) + log(2Rλ̃/δ)) + 4 log(1/δ)] / [(1/(1536R)) √(η̄†/d) e^{−(150d/ε)[(α/(1−α†))(3 + εB0 + β/D) + β/D] log(R/min((ε/(2λ))D, r0))}]²
≤ imax,
where the first equality is true since r = δ/(ξ̄λ̃), the fourth inequality is true by the definition of η̄, the fifth inequality is true by the definition of η̄†, and the last inequality is true by our choice of imax.
But by Equation (26), X_i^{(k)} = X̂_i^{(k)} for all 0 ≤ i ≤ imax with probability at least 1 − 5δ. Therefore, since Equation (32) holds with probability at least 1 − δ, we have that
τ_{X^{(k)}}((U_k^{εĤk})_{ω/λ}) ≤ imax  (33)
with probability at least 1 − 6δ.
Therefore, by Equation (33), with probability at least 1 − 6δ, for some 0 ≤ i◦k ≤ imax we have X_{i◦k}^{(k)} ∈ (U_k^{εĤk})_{ω/λ̃} and hence that
F(X_{i◦k}^{(k)}) ≤ εĤk + λ̃ × (ω/λ̃) = εĤk + εD ≤ 2εĤk,
and therefore, since 0 ≤ α < 1,
(1/5) f̃(xk+1) ≤ F̂(xk+1) = min_{0≤i≤imax} F̂(X_i^{(k)}) ≤ F̂(X_{i◦k}^{(k)}) ≤ 2F(X_{i◦k}^{(k)}) + β ≤ 4εĤk + β ≤ 5εĤk,    [by Lemma 5.8]
Hence, for every 0 ≤ k ≤ kmax we have
f̃(xk+1) = Hk+1 ≤ 25εĤk = 25ε max(Hk, D)  (34)
with probability at least 1 − 6δ.
Therefore, by induction on Equation (34), for every 0 ≤ k ≤ kmax, we have
Hk+1 ≤ 25ε × max((25ε)^k H0, D)  (35)
with probability at least 1 − 6δ(k + 1).
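The recursion in Equation (34) can be iterated numerically to see the geometric decay down to the noise floor D; the values of ε, D and H0 below are toy choices (ε = 1/50 as in Section 5.8):

```python
# Iterating H_{k+1} <= 25*eps*max(H_k, D): the objective halves each epoch
# (25*eps = 1/2 for eps = 1/50) until it reaches the noise floor D.
eps = 1.0 / 50
D = 0.01
H = [100.0]                     # toy initial value H_0
for k in range(20):
    H.append(25 * eps * max(H[-1], D))
assert H[-1] <= D               # the chain settles at 25*eps*D <= D, as in Eq. (36)
print(H[-1])
```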
By Lemma 5.8, we have kmax = ⌈log(5J0/D)/log(1/(25ε))⌉ + 1 ≥ ⌈log(H0/D)/log(1/(25ε))⌉ + 1. Then, with probability at least 1 − 6δ(kmax + 1),
f̃(xkmax) − F(x⋆) = f̃(xkmax)
= Hkmax
≤ 25ε × max((25ε)^{kmax−1} H0, D)    [by Eq. (35)]
≤ 25ε × D
≤ D,  (36)
since 0 ≤ ε < 1/25 implies that 0 ≤ 25ε < 1.
Hence,
F(xkmax) − F(x⋆) = F(xkmax)
≤ (1/(1 − α))(f̃(xkmax) + β)
≤ (1/(1 − α))(D + β),
where the first equality holds since F(x⋆) = 0.
5.6 The non-smooth case
In this section we bound the gradient, supremum, and smoothness of the smoothed function fσ
obtained from F (Propositions 5.10 and 5.11 and Lemma 5.12), where fσ is defined in Equation
(7). We also bound the noise |F (x) − fσ (x)| of fσ (Lemma 5.13). We use these bounds in Section
5.8 to prove our main result (Theorem 4.1).
Proposition 5.10. (Gradient bound for smoothed oracle)
For every x ∈ K we have
‖∇f̃σ(x)‖ ≤ (√(2d)/σ)(2λR(1 + 2α) + 2β).
Proof.
‖∇f̃σ(x)‖ ≤ E_Z[(1/σ²)‖Z‖ |F̂(x + Z) − F̂(x)|]
≤ E_Z[(1/σ²)‖Z‖ max_{y1,y2∈K} |F̂(y2) − F̂(y1)|]
≤ (1/σ) max_{y1,y2∈K} |F̂(y2) − F̂(y1)| E_Z[(1/σ)‖Z‖]
= (1/σ) max_{y1,y2∈K} |F̂(y2) − F̂(y1)| · √2 Γ((d+1)/2)/Γ(d/2)
≤ (√(2d)/σ) max_{y1,y2∈K} |F̂(y2) − F̂(y1)|
≤ (√(2d)/σ)(2λR(1 + 2α) + 2β),
where the equality is true since (1/σ)‖Z‖ has a χ distribution with d degrees of freedom, and the second-to-last inequality is true since Γ((d+1)/2)/Γ(d/2) ≤ √d. The last inequality is true because F is λ-Lipschitz, and because of our assumption on the noise (Equation (6)).
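The two Gamma-function facts used at the end of this proof can be checked numerically; the dimensions tried below are arbitrary:

```python
import math

for d in [1, 2, 5, 10, 100]:
    # E[(1/sigma)*||Z||] for Z ~ N(0, sigma^2 I_d): the mean of a chi distribution.
    chi_mean = math.sqrt(2) * math.gamma((d + 1) / 2) / math.gamma(d / 2)
    # The proof uses Gamma((d+1)/2)/Gamma(d/2) <= sqrt(d), i.e. chi_mean <= sqrt(2d);
    # the sharper bound chi_mean <= sqrt(d) (= sqrt(E[chi^2]), by Jensen) also holds.
    assert chi_mean <= math.sqrt(d)
print("Gamma-ratio bound verified for all tested d")
```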
Proposition 5.11. (Maximum value of non-smooth noisy oracle) For every x ∈ Kr, we have
F̂ (x) ≤ (1 + α)2λ(R + r) + β.
Proof. Since F is λ-Lipschitz and by our assumption that F (x? ) = 0,
F(x) ≤ 2λ(R + r)  ∀x ∈ Kr.  (37)
Thus,
F̂(x) ≤ (1 + α)F(x) + β ≤ (1 + α)2λ(R + r) + β.
We recall the following Lemma from [17]:
Lemma 5.12. (Lemma 17 in [17]) Suppose that M̂ > 0 is a number such that 0 ≤ F̂(x) ≤ M̂ for all x ∈ Kr. Then
1. E_Z[g_Z(x)] = ∇f̃σ(x) for all x ∈ K.
2. For every u ∈ ℝ^d, E_Z[e^{⟨u, g_Z(x)⟩(2M̂/σ)}] ≤ e^{(4M̂²/σ²)‖u‖²}.
3. ‖∇²f̃σ(x)‖op ≤ 2M̂/σ².
We show that the smoothed function is a good approximation of F for sufficiently small σ:
Lemma 5.13. (Noise of smoothed oracle) Let A ⊆ At ⊆ Kr for some A ⊆ Kr and some t > 0. Let H0 = sup_{y∈A} F(y). Then
|f̃σ(x) − F(x)| ≤ λσ(1 + α)√d + H0 · e^{−(t²/σ² − d)/8} + αH0 + β
for every x ∈ A.
Proof. Define N(x) := F̂(x) − F(x). For any function h : ℝ^d → ℝ, define
h̃σ(x) := E_Z[h(x + Z)],
where Z ∼ N(0, σ²Id). Then for every x ∈ A we have
|f̃σ(x) − F(x)| = |F̃σ(x) + Ñσ(x) − F(x)|
≤ |F̃σ(x) − F(x)| + |Ñσ(x)|
= E_Z[|F(x + Z) − F(x)|] + |E_Z[N(x + Z)]|
≤ E_Z[|F(x + Z) − F(x)|] + E_Z[α(H0 + λ‖Z‖) + β]
≤ E_Z[λ‖Z‖] + H0 · P(‖Z‖ ≥ t) + E_Z[α(H0 + λ‖Z‖) + β]
= λ(1 + α)E_Z[‖Z‖] + H0 · P(‖Z‖ ≥ t) + αH0 + β
= λσ(1 + α)E_Z[(1/σ)‖Z‖] + H0 · P((1/σ)‖Z‖ ≥ t/σ) + αH0 + β
≤ λσ(1 + α)√d + H0 · P((1/σ)‖Z‖ ≥ t/σ) + αH0 + β
≤ λσ(1 + α)√d + H0 · e^{−(t²/σ² − d)/8} + αH0 + β,
where the second inequality holds because F is λ-Lipschitz on Kr and also since F is defined to be zero outside Kr with x ∈ K ⊆ Kr. The third inequality holds by our assumption on the noise (Equation (6)), and since F is defined to be zero outside Kr. The fourth inequality holds because (1/σ)‖Z‖ is χ-distributed with d degrees of freedom. The last inequality holds by the Hanson-Wright inequality (see for instance [8], [15]).
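A small Monte Carlo experiment illustrates the σ-dependence of the smoothing bias bounded in Lemma 5.13; the test function F(x) = |x| in d = 1 and the noiseless setting α = β = 0 are our own simplifying assumptions:

```python
import random, math

random.seed(1)
lam, d = 1.0, 1          # F(x) = |x| is 1-Lipschitz in d = 1
F = abs

def f_sigma(x, sigma, n=200000):
    # Monte Carlo estimate of f_sigma(x) = E_Z[F(x + Z)], Z ~ N(0, sigma^2)
    return sum(F(x + random.gauss(0.0, sigma)) for _ in range(n)) / n

x = 0.3
for sigma in [0.5, 0.1]:
    bias = abs(f_sigma(x, sigma) - F(x))
    # With alpha = beta = 0 and the H'-term negligible, Lemma 5.13 predicts
    # bias <= lam * sigma * sqrt(d); the 0.01 slack absorbs Monte Carlo error.
    assert bias <= lam * sigma * math.sqrt(d) + 0.01
print("smoothing bias shrinks with sigma, as Lemma 5.13 predicts")
```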
5.7 Rounding the domain of the Markov Chain
We now show that our constraint set K̂ is sufficiently “rounded”. This roundness property is used
to show that the Markov chain does not get stuck for a long time in corners of the constraint set.
Lemma 5.14. (Roundness of constraint set) Let ζmax = (r0/(10√(2(d + 20))))². Let K̂ ⊆ K0 be a convex set. Then for any ζ ≤ ζmax and any x ∈ K̂r0, the random variable W ∼ N(0, Id) satisfies
P(√(2ζ) W + x ∈ K̂r0) ≥ 1/3.
Proof. Without loss of generality, we may assume that x is the origin and that K̂r0 contains the ball B(a, r0) where a = (r0, 0, . . . , 0)⊤ (since K̂r0 = K̂ + B(0, r0) implies that there is a ball contained in K̂r0 that also contains x on its boundary; we can then translate and rotate K̂r0 to put x and a in the desired position).
Since P(1/10 ≤ W1 ≤ 100) ≥ 0.45, with probability at least 0.45 we have that
1/10 ≤ W1 ≤ 100.
But our choice of ζmax = (r0/(10√(2(d + 20))))² then implies that, with probability at least 0.45,
(√(2ζ)/r0)(d + 20) ≤ W1 ≤ r0/√(2ζ).
Our choice of ζmax also implies that (r0)²/(2ζ) − (d + 20) > 0. But √a − t/√a ≤ √(a − t) for every t ∈ [0, a), which implies (taking a = (r0)²/(2ζ) and t = d + 20)
r0/√(2ζ) − √((r0)²/(2ζ) − (d + 20)) ≤ W1 ≤ r0/√(2ζ) + √((r0)²/(2ζ) − (d + 20)).
Therefore
r0 − √((r0)² − 2ζ(d + 20)) ≤ √(2ζ)W1 ≤ r0 + √((r0)² − 2ζ(d + 20)).
Hence,
(√(2ζ)W1 − r0)² ≤ (r0)² − 2ζ(d + 20),
which implies that
(√(2ζ)W1 − r0)² + 2ζ(d + 20) ≤ (r0)².  (38)
But by the Hanson-Wright inequality,
P(Σ_{j=2}^d Wj² ≥ d + 20) ≤ e^{−21/8} < 1/10.  (39)
Thus, Equations (39) and (38) imply that with probability at least 0.45 − 1/10 ≥ 1/3 we have
‖√(2ζ)W − a‖² = ‖√(2ζ)W − (r0, 0, . . . , 0)⊤‖²
= (√(2ζ)W1 − r0)² + 2ζ Σ_{j=2}^d Wj²
≤ (√(2ζ)W1 − r0)² + 2ζ(d + 20)
≤ (r0)²,    [by Eq. (38)]
implying that √(2ζ)W ∈ B(a, r0) ⊆ K̂r0 with probability at least 1/3.
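The lemma can also be checked by simulation in a simple concrete case; here K̂r0 is taken to be the unit ball with x on its boundary (an illustrative assumption, and intuitively the hardest position for x):

```python
import random, math

random.seed(2)
d, r0 = 10, 1.0
zeta_max = (r0 / (10 * math.sqrt(2 * (d + 20)))) ** 2
zeta = zeta_max
x = [r0] + [0.0] * (d - 1)       # boundary point of the ball B(0, r0) = K_hat_{r0}
step = math.sqrt(2 * zeta)

trials, hits = 20000, 0
for _ in range(trials):
    w = [random.gauss(0.0, 1.0) for _ in range(d)]
    y = [x[j] + step * w[j] for j in range(d)]
    if sum(v * v for v in y) <= r0 * r0:   # did sqrt(2*zeta)*W + x stay inside?
        hits += 1
frac = hits / trials
assert frac >= 1 / 3                        # the bound claimed by Lemma 5.14
print(frac)
```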
5.8 Proof of Main Result (Theorem 4.1)
In this section, we prove Theorem 4.1. We do so by applying the bounds on the smoothness of fσ
of Section 5.6 to Theorem 5.9.
We note that in this section we will use “α” and “β” exclusively to denote the multiplicative
and additive noise levels of F . We will then set the smooth oracle f˜ to be f˜ = f˜σ , where f˜σ is the
smooth function obtained from F , defined in Equation (7). As an intermediate step in proving the
main result, we show that f˜σ has multiplicative noise level 2α and additive noise level 2β.
Proof. We will assume that α < 1/800. This assumption is consistent with the statement of Theorem 4.1, which assumes that α = O(1).
Define the following constants: M = 2λR + 2β, M̂ = 6λR + β, L = 4M̂/σ², G = 2M̂/σ, bmax = 1, ζmax = (r0/(10√(2(d + 20))))², and λ̃ = (√(2d)/σ)(2λR(1 + 2α) + 2β).
We set σ = (1/2) min{ β/(λ(1 + α)√d), r/√(8 log(1/α) + d) }. Recall from Section 3.3 that σ determines the amount of smoothness in f̃σ. A larger value of σ means that f̃σ will be smoother, decreasing the running time of the algorithm. On the other hand, a smaller value of σ means that f̃σ will be a closer approximation to F, and consequently lead to a lower error. We choose σ in such a way that the error is bounded by the desired value ε̂.
Set the parameters of Algorithms 1 and 2 as follows:
• Fix ε = 1/50.
• Let D = (2/3)ε̂.
• Define J0 := F̂(x0) and set the number of epochs to be
kmax = ⌈log(5J0/D)/log(2)⌉ + 1.  (40)
• For every 0 ≤ k ≤ kmax, let Jk := F̂(xk), and define Ĵk := max(Jk, D).
• Fix δ = δ0/(6(kmax + 1)).
• At every k ≥ 0, set the temperature to be
ξk = 4d log(R/min((ε/(2λ))D, r0)) / (εĴk/5).  (41)
Define ξ̄ := 4d log(R/min((ε/(2λ))D, r0)) / (εD/25).
• Set r = δ/(ξ̄λ̃).
• Define
η̄† := c min{ ζmax, ω²b²max/(λ²d), 1/(Rd³((ξ̄G)² + ξ̄L)²) }
and
B0 := (d log(2R/r) + δ + 1 + log(1/δ)) / (2d log(R/min((ε/(2λ))D, r0))).
• Set the number of steps imax for which we run the Markov chain X in each epoch to be
imax = ⌈( [8Rλ̃ξk + 4d(1 + log(1 + ξ̄) + log(2Rλ̃/δ)) + 4 log(1/δ)] / [(1/(1536R)) √(η̄†/d) e^{−(150d/ε)[(α/(1−α†))(3 + εB0 + β/D) + β/D] log(R/min((ε/(2λ))D, r0))}]² )^{1/(1 − (150/ε)α)}⌉ + 1.  (42)
• Define B := (d log(2R/r) + δ + log(imax + 1) + log(1/δ)) / (2d log(R/min((ε/(2λ))D, r0))).
• For every ξ > 0 define
η(ξ) := c min{ ζmax, ω²b²max/(λ²d), (e^{−(100d/ε)[(α/(1−α†))(3 + εB + β/D) + β/D] log(R/min((ε/(2λ))D, r0))})² / (Rd³((ξG)² + ξL)²) },  (43)
where ω = εD, and c is the universal constant in Lemma 15 of [17]. We set the step size at each epoch k to be ηk = η(ξk). We also define η̄ := η(ξ̄).
• Set D = √(2η̄d).
We determine the constants for which f̃ = f̃σ satisfies the various assumptions of Theorem 5.9. Since σ = (1/2) min{ β/(λ(1 + α)√d), r/√(8 log(1/α) + d) }, by Lemma 5.13, we have that
|f̃σ(x) − F(x)| ≤ 2αF(x) + 2β  ∀x ∈ K.
So, with a slight abuse of notation, we may state that f̃ = f̃σ has multiplicative noise of level 2α and additive noise of level 2β, if we use “α” and “β” to denote the noise levels of F̂.
Hence, M = 2λR + 2β ≥ sup_{x∈K} f̃σ(x). By Lemma 5.14, part 1 of Assumption 1 is satisfied with constant ζmax. By Lemma 5.12 and Proposition 5.11, f̃σ satisfies parts 2 and 3 of Assumption 1 with constants L, G and bmax (recall that we defined these constants at the beginning of this proof). By Proposition 5.10, ‖∇f̃σ(x)‖ ≤ λ̃ for all x ∈ K. Therefore, applying Theorem 2 with the above constants and the smoothed function f̃σ, we have
F(x̂) − F(x⋆) ≤ (1/(1 − 2α))(D + 2β) ≤ ε̂,
with running time that is polynomial in d, e^{(8d/ε)[(α/(1−α†))(3 + εB0 + β/D) + β/D] log(R/min((ε/(2λ))D, r0))}, R, λ, λ̃, L, G, ζmax, bmax, and log(1/δ). This completes the proof of the Theorem.
References
[1] David Applegate and Ravi Kannan. Sampling and integration of near log-concave functions.
In Proceedings of the twenty-third annual ACM symposium on Theory of computing, pages
156–163. ACM, 1991.
[2] Alexandre Belloni, Tengyuan Liang, Hariharan Narayanan, and Alexander Rakhlin. Escaping
the local minima via simulated annealing: Optimization of approximately convex functions.
In Conference on Learning Theory, pages 240–265, 2015.
[3] Avrim Blum and Ronald L Rivest. Training a 3-node neural network is NP-complete. In
Advances in neural information processing systems, pages 494–501, 1989.
[4] Sébastien Bubeck, Ronen Eldan, and Joseph Lehec. Finite-time analysis of projected Langevin
Monte Carlo. In Advances in Neural Information Processing Systems 28: Annual Conference
on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec,
Canada, pages 1243–1251, 2015.
[5] Ruobing Chen. Stochastic derivative-free optimization of noisy functions. Ph.D. Thesis, Lehigh
University, 2015.
[6] Ruobing Chen, Matt Menickelly, and Katya Scheinberg. Stochastic optimization using a trustregion method and random models. Mathematical Programming, pages 1–41, 2015.
[7] John C Duchi, Michael I Jordan, Martin J Wainwright, and Andre Wibisono. Optimal rates
for zero-order convex optimization: The power of two function evaluations. IEEE Transactions
on Information Theory, 61(5):2788–2806, 2015.
[8] David Lee Hanson and Farroll Tim Wright. A bound on tail probabilities for quadratic forms
in independent random variables. The Annals of Mathematical Statistics, 42(3):1079–1083,
1971.
[9] Mohamed Jebalia and Anne Auger. On Multiplicative Noise Models for Stochastic Search,
pages 52–61. Springer Berlin Heidelberg, Berlin, Heidelberg, 2008.
[10] Mohamed Jebalia, Anne Auger, and Nikolaus Hansen. Log-linear convergence and divergence
of the scale-invariant (1 + 1)-ES in noisy environments. Algorithmica, 59(3):425–460, 2011.
[11] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi. Optimization by simulated annealing. Science,
220(4598):671–680, 1983.
[12] Yin Tat Lee and Santosh S Vempala. Convergence rate of Riemannian Hamiltonian Monte Carlo and faster polytope volume computation. arXiv preprint arXiv:1710.06261, 2017.
[13] László Lovász and Miklós Simonovits. Random walks in a convex body and an improved
volume algorithm. Random structures & algorithms, 4(4):359–412, 1993.
[14] Maxim Raginsky, Alexander Rakhlin, and Matus Telgarsky. Non-convex learning via stochastic
gradient Langevin dynamics: a nonasymptotic analysis. In Satyen Kale and Ohad Shamir,
editors, Proceedings of the 2017 Conference on Learning Theory, volume 65 of Proceedings
of Machine Learning Research, pages 1674–1703, Amsterdam, Netherlands, 07–10 Jul 2017.
PMLR.
[15] Mark Rudelson and Roman Vershynin. Hanson-Wright inequality and sub-Gaussian concentration. Electron. Commun. Probab., 18(82):1–9, 2013.
[16] Max Welling and Yee W Teh. Bayesian learning via stochastic gradient Langevin dynamics.
In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages
681–688, 2011.
[17] Yuchen Zhang, Percy Liang, and Moses Charikar. A hitting time analysis of stochastic gradient
Langevin dynamics. In Proceedings of the 30th Conference on Learning Theory, COLT 2017,
Amsterdam, The Netherlands, 7-10 July 2017, pages 1980–2022, 2017.
Ultra-Reliable Low Latency Cellular Networks:
Use Cases, Challenges and Approaches
He Chen, Rana Abbas1, Peng Cheng1, Mahyar Shirvanimoghaddam1, Wibowo Hardjawana1, Wei Bao1,
Yonghui Li, and Branka Vucetic
The University of Sydney, NSW 2006, Australia
Email: [email protected]
Abstract‐The fifth-generation cellular mobile networks are expected to support mission critical
ultra-reliable low latency communication (uRLLC) services in addition to the enhanced mobile
broadband applications. This article first introduces three emerging mission critical applications of
uRLLC and identifies their requirements on end-to-end latency and reliability. We then investigate
the various sources of end-to-end delay of current wireless networks by taking the 4G Long Term
Evolution (LTE) as an example. Subsequently, we propose and evaluate several techniques to
reduce the end-to-end latency from the perspectives of error control coding, signal processing, and
radio resource management. We also briefly discuss other network design approaches with the
potential for further latency reduction.
I. Introduction
The growth of wireless data traffic over the past three decades has been relentless. The upcoming
fifth-generation (5G) of wireless cellular networks is expected to carry 1000 times more traffic [1]
while maintaining high reliability. Another critical requirement of 5G is ultra-low latency – the
time required for transmitting a message through the network. The current fourth-generation (4G)
wireless cellular networks have a nominal latency of about 50ms; however, this is currently
unpredictable and can go up to several seconds [2]. Moreover, it is mainly optimized for mobile
broadband traffic with target block error rate (BLER) of 10-1 before re-transmission.
1
These authors contributed equally to this article.
There is a general consensus that the future of many industrial control, traffic safety, medical,
and internet services depends on wireless connectivity with guaranteed consistent latencies of
1ms or less and exceedingly stringent reliability of BLERs as low as 10-9 [3]. While the projected
enormous capacity growth is achievable through conventional methods of moving to higher parts
of the radio spectrum and network densifications, significant reductions in latency, while
guaranteeing an ultra-high reliability, will involve a departure from the underlying theoretical
principles of wireless communications.
II. Emerging uRLLC Applications
In this section, we introduce three emerging mission-critical applications and identify their
latency and reliability requirements.
A. Tele-surgery
The application of uRLLC in tele-surgery has two main use cases [4]: (1) remote surgical
consultations, and (2) remote surgery. The remote surgical consultations can occur during
complex life-saving procedures after serious accidents with patients having health emergency
that cannot wait to be transported to a hospital. In such cases, first-responders at an accident
venue may need to connect to surgeons in hospital to get advice and guidance to conduct
complex medical operations. On the other hand, in a remote surgery scenario, the entire
treatment procedure of patients is executed by a surgeon at a remote site, where hands are
replaced by robotic arms. In these two use cases, the communication networks should be able to
support the timely and reliable delivery of audio and video streaming. Moreover, the haptic
feedback enabled by various sensors located on the surgical equipment is also needed in remote
surgery such that the surgeons can feel what the robotic arms are touching for precise decision-
making. Among these three types of traffic, it is haptic feedback that requires the tightest delay
requirement with the end-to-end round trip times (RTTs) lower than 1ms [4]. In terms of
reliability, rare failures can be tolerated in remote surgical consultations, while the remote
surgery demands an extremely reliable system (BLER down to 10-9) since any noticeable error
can lead to catastrophic outcomes.
B. Intelligent Transportation
The realization of uRLLC can empower several technological transformations in transportation
industry [5], including automated driving, road safety and traffic efficiency services, etc. These
transformations will get cars fully connected such that they can react to increasingly complex
road situations by cooperating with others rather than relying on its local information. These
trends will require information to be disseminated among vehicles reliably within extremely
short time duration. For example, in fully automated driving with no human intervention,
vehicles can benefit by the information received from roadside infrastructure or other vehicles.
The typical use cases of this application are automated overtake, cooperative collision avoidance
and high density platooning, which require an end-to-end latency of 5–10ms and a BLER down
to 10-6 [5].
C. Industry Automation
uRLLC is one of the enabling technologies in the fourth industrial revolution [6]. In this new
industrial vision, industry control is automated by deploying networks in factories. Typical
industrial automation use cases requiring uRLLC include factory, process, and power system
automation. To enable these applications, an end-to-end latency lower than 0.5ms and an
exceedingly high reliability with BLER of 10-9 should be supported [3]. Traditionally, industrial
control systems are mostly based on wired networks because the existing wireless technologies
cannot meet the industrial latency and reliability requirements. Nevertheless, replacing the
currently used wires with radio links can bring substantial benefits: (1) reduced cost of
manufacturing, installation and maintenance; (2) higher long-term reliability as wired
connections suffer from wear and tear in motion applications; (3) inherent deployment flexibility.
Other possible applications of uRLLC include Tactile Internet, augmented/virtual reality, fault
detection, frequency and voltage control in smart grids.
Fig. 1. Architecture of 4G LTE network with representative mission-critical user equipment. The bottom part lists various
potential measures towards latency reduction in different parts.
III. Latency Sources in Cellular Networks
Cellular networks are complex systems with multiple layers and protocols, as depicted in Fig. 1.
The duration of a data block at the physical layer is a basic delay unit which gets multiplied over
higher layers and thus causes a considerable latency in a single link. On the other hand, protocols
at higher layers and their interactions are significant sources of delay in the whole network.
Latency varies significantly as a function of multiple parameters, including the transmitter–
receiver distance, wireless technology, mobility, network architecture, and the number of active
network users.
TABLE I
VARIOUS DELAY SOURCES OF AN LTE SYSTEM (RELEASE 8) IN THE UPLINK AND DOWNLINK
Delay Component | Description | Time (ms)
Grant acquisition | A user connected and aligned to a base station will send a Scheduling Request (SR) when it has data to transmit. The SR can only be sent in an SR-valid Physical Uplink Control Channel (PUCCH). This component characterizes the average waiting time for a PUCCH. | 5ms
Random Access | This procedure applies to the users not aligned with the base station. To establish a link, the user initiates an uplink grant acquisition process over the random access channel. This process includes preamble transmissions and detection, scheduling, and processing at both the user and the base station. | 9.5ms
Transmit time interval | The minimum time to transmit each packet of request, grant or data | 1ms
Signal processing | The time used for processing (e.g., encoding and decoding) data and control | 3ms
Packet retransmission in access network | The (uplink) hybrid automatic repeat request process delay for each retransmission | 8ms
Core network/Internet | Queueing delay due to congestion, propagation delay, packet retransmission delay caused by upper layer (e.g., TCP) | Vary widely
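As a sanity check on Table I, the components can be tallied in a few lines; the per-step split into TTIs and processing stages below is our own plausible reading of [7], while the component values and the 17 ms total come from the table and the text:

```python
# One plausible decomposition of the ~17 ms average uplink radio-access delay for an
# already-aligned user (component values from Table I / [7]; the exact per-step split
# into TTIs and processing stages is an illustrative assumption).
uplink_ms = {
    "grant acquisition (wait for SR-valid PUCCH)": 5.0,
    "scheduling request TTI":                      1.0,
    "base-station processing + grant":             3.0,
    "grant TTI":                                   1.0,
    "user-side processing":                        3.0,
    "data TTI":                                    1.0,
    "base-station decoding":                       3.0,
}
total = sum(uplink_ms.values())
print(total)  # -> 17.0, matching the figure quoted in the text (no retransmissions)
```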
The latency components of the LTE networks have been systematically evaluated and quantified
in [7]. Latencies for various radio access network algorithms and protocols in data transmission
from a user to the gateway (i.e., uplink) and back (i.e., downlink) are summarized in Table I. The
two most critical sources of delay in radio access networks are the link establishment (i.e., grant
acquisition or random access) and packet retransmissions caused by channel errors and
congestion. Another elementary delay component is the transmit time interval (TTI), defined as
the minimum data block length, which is involved in each transmission of grant, data, and
retransmission due to errors detected in higher layer protocols.
According to Table I, after a user is aligned with the base station, its total average radio access
delay for an uplink transmission can be up to 17ms excluding any retransmission. The delay for a
downlink transmission is around 7.5ms, which is lower than that of the uplink since no grant
acquisition process is needed in the downlink. The overall end-to-end latency in cellular
networks is dictated not only by the radio access network but also includes delays of the core
network, data center/cloud, Internet server and radio propagation. It increases with the
transmitter-receiver distance and the network load. As shown by the experiment conducted in [8],
at least 39ms is needed to contact the core network gateway, which connects the LTE system to
the Internet, while a minimum of 44ms is required to get response from the Google server. As
the number of users in the network rises, the delay goes up, due to more frequent collisions in
grant acquisition and retransmissions caused by inter-user interference.
In the subsequent sections, we will consider novel approaches that could be implemented at
various cellular network layers (as depicted in the bottom part of Fig. 1) to support ultra-low
latency services.
IV. Short Error Control Codes
In traditional communication systems, very long low-density parity check (LDPC) or turbo codes
are used to achieve near error-free transmissions, as long as the data rate is below the Shannon
channel capacity. Since the network latency is significantly affected by the size of data blocks,
short codes are a prerequisite for low delays; but the Shannon theoretical model breaks down for
short codes. A recent Polyanskiy-Poor-Verdú (PPV) analysis of channel capacity with finite
block lengths [9] has provided the tradeoffs between delays, throughput, and reliability on
Gaussian channels and fixed rate block codes, by introducing a new fundamental parameter
called ‘channel dispersion’; this analysis shows that there is a severe capacity loss at short blocklengths.
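The PPV finite-blocklength behavior can be made concrete with the normal ("channel dispersion") approximation R(n, ε) ≈ C − √(V/n)·Q⁻¹(ε) for the AWGN channel; the SNR, target error rate and blocklengths below are illustrative choices, not values from this article:

```python
import math

def Q(x):
    # Gaussian tail probability Q(x) = P(N(0,1) > x).
    return 0.5 * math.erfc(x / math.sqrt(2))

def Qinv(p):
    # Q is decreasing, so invert it by bisection on [-10, 10].
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if Q(mid) > p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

snr = 1.0                                      # 0 dB, illustrative
C = 0.5 * math.log2(1 + snr)                   # Shannon capacity, bits/channel use
V = snr * (snr + 2) / (2 * (snr + 1) ** 2) * math.log2(math.e) ** 2  # dispersion
for n in [100, 1000, 10000]:
    R = C - math.sqrt(V / n) * Qinv(1e-3)      # achievable rate at BLER 1e-3
    print(n, round(R / C, 3))                  # the gap to capacity closes like 1/sqrt(n)
```

The printed ratios illustrate the "severe capacity loss at short blocklengths" noted above: at n = 100 only about half of capacity is achievable at this reliability.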
There are no known codes that achieve the PPV limit. LDPC codes and polar codes have been reported to achieve almost 95% of the PPV bound at block error rates
as low as 10-7 for block lengths of a few hundred symbols [10]. However, their main drawback is
the large decoding latency. On the other hand, convolutional codes provide fast decoding as a
block can be decoded as it is being received and can achieve BLERs as low as 10-9. Note that as
the signal-to-noise ratio (SNR) in wireless channels varies over time and frequency due to fading,
these low BLERs can only be achieved at very high SNRs (as high as 90 dB) over point-to-point
channels. To address this issue, these error control codes need to be augmented by some form of
diversity such as implementing multi-antenna techniques.
As long fixed rate codes achieve the Shannon capacity limit for one SNR only, today’s wireless networks use adaptive schemes, which select a code from a large number
of fixed rate codes, to transmit data at the highest possible rate for a specified reliability and
estimated channel state information (CSI). The problem is the inevitable latency increase due to
complex encoding and decoding algorithms, the time required to estimate the CSI at the receiver,
the feedback of CSI back to the transmitter, code rate and modulation selection process in the
transmitter, and block length.
In this context, self-adaptive codes appear as a promising solution to uRLLC. Self-adaptive
codes, also known as rateless codes, can adapt the code rate to the channel variations by sending
an exact amount of coded symbols needed for successful decoding. This self-adaptation does not
require any channel state information at the transmitter side, thus eliminating the channel
estimation overhead and delay. While there are some research results on rateless codes for the
short block length regime, they are all on binary codes, and their extension to the real domain is
not straightforward.
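The rateless principle behind such codes can be illustrated with a toy binary fountain (LT-style) code over an erasure channel: the transmitter keeps emitting coded symbols, with no CSI, until the receiver has accumulated enough to decode. The degree distribution, message length, and erasure rate below are illustrative assumptions; AFC itself operates on non-binary (analog) symbols, so this is only a sketch of the principle, not of AFC.

```python
import random

def lt_encode_symbol(message, rng):
    """Emit one coded symbol: the XOR of a random subset of message bits."""
    degree = rng.choice([1, 1, 2, 2, 2, 3, 4])      # toy low-degree distribution
    idx = rng.sample(range(len(message)), degree)
    value = 0
    for i in idx:
        value ^= message[i]
    return idx, value

def peeling_decode(k, received):
    """Repeatedly resolve degree-1 coded symbols until no progress is possible."""
    known = [None] * k
    syms = [[set(idx), val] for idx, val in received]
    progress = True
    while progress:
        progress = False
        for sym in syms:
            idx, val = sym
            for i in list(idx):                     # substitute already-known bits
                if known[i] is not None:
                    val ^= known[i]
                    idx.discard(i)
            sym[1] = val
            if len(idx) == 1:                       # degree-1 symbol reveals one bit
                i = idx.pop()
                if known[i] is None:
                    known[i] = val
                    progress = True
    return known

rng = random.Random(1)
message = [rng.randint(0, 1) for _ in range(16)]

# Rateless loop: keep sending until the receiver can decode (no CSI needed).
received, sent = [], 0
decoded = [None] * len(message)
while sent < 1000 and not all(b is not None for b in decoded):
    idx, val = lt_encode_symbol(message, rng)
    sent += 1
    if rng.random() > 0.3:                          # symbol survives a 30% erasure channel
        received.append((idx, val))
        decoded = peeling_decode(len(message), received)

print(f"recovered after {sent} coded symbols:", decoded == message)
```

The number of symbols sent adapts automatically to the channel: a worse erasure rate simply means the loop runs longer, with no rate selection or feedback of channel estimates.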
Fig. 2. AFC with a 0.95-rate protograph-based LDPC precoder is used to encode a message of length 192 bits for a block error
rate of 10⁻⁴ over a wide range of SNRs for the AWGN channel.
Recently, an analog fountain code (AFC) [11] was proposed as a capacity-approaching rateless
code over a wide range of SNRs for asymptotically long codewords. AFC can be represented by
a single sparse non-binary generator matrix such that the optimization of the coding and
modulation can be performed jointly via specialized EXIT charts. The resulting performance is
seamless over a large range of SNRs with only linear encoding and decoding complexity with
respect to the block length. In Fig. 2, we show that AFC, even in the current sub-optimal design
for short codes, has a small gap to the PPV bound in the high SNR region. Moreover, we expect
that a much lower latency can be achieved when optimizing AFC for shorter block lengths. As
self-adaptive codes do not require any CSI to be available at the transmitter side, the channel
estimation overhead can be eliminated, which has been reported to require 7–8ms in the current
LTE standards. Finally, for completeness, it is worth mentioning that our simulations
over the Rayleigh fading channel showed that AFC can achieve BLERs as low as 10⁻⁶ for a wide
range of SNRs using space diversity with only 10 antennas and maximum ratio combining.
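As a hedged illustration of the space-diversity argument above, the following sketch simulates BPSK over independent Rayleigh branches and compares a single antenna against 10-branch maximum ratio combining (MRC). The SNR and symbol count are arbitrary choices for the sketch, not the simulation settings behind the AFC results.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ant, n_sym = 10, 10000                 # branches and symbols (illustrative)
noise_var = 1.0                          # per-symbol noise variance at 0 dB SNR

bits = rng.integers(0, 2, n_sym)
s = 2.0 * bits - 1.0                     # BPSK symbols

# Independent Rayleigh fading per antenna plus complex Gaussian noise
h = (rng.standard_normal((n_ant, n_sym)) + 1j * rng.standard_normal((n_ant, n_sym))) / np.sqrt(2)
w = (rng.standard_normal((n_ant, n_sym)) + 1j * rng.standard_normal((n_ant, n_sym))) * np.sqrt(noise_var / 2)
r = h * s + w

# MRC: weight each branch by its conjugate channel gain, then sum
combined = np.sum(np.conj(h) * r, axis=0)
ber_mrc = np.mean((combined.real > 0).astype(int) != bits)

# Single-antenna coherent detection as a reference
ber_single = np.mean(((np.conj(h[0]) * r[0]).real > 0).astype(int) != bits)

print("BER, single antenna:", ber_single)
print("BER, 10-branch MRC :", ber_mrc)
```

The point of the comparison is that combining across independently faded branches removes the deep fades that dominate single-antenna error rates, which is what makes very low BLERs reachable at moderate per-branch SNRs.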
V. Ultra-fast Signal Processing
The current LTE systems use system throughput as the main design target and performance
indicator. In contrast, signal processing latency has drawn far less attention in the design
process. Similar to Section III, valuable insights into the processing latency bottleneck in the
current LTE systems could be obtained by a breakdown of latencies contributed by each LTE
receiver module. To this end, we investigate the average computational time for the major
receiver modules of an LTE Release 8 system by implementing it on an Intel Core i5 computer.
The computational time, a practical indicator for relative latency, is presented in Table II for
three typical bandwidths. In the simulations, we have 4 transmit and 2 receive antennas, 16-QAM,
and a code rate of 0.3691 at a signal-to-noise ratio of 10 dB. The closed-loop spatial
multiplexing mode was implemented and the average computational time is based on one
subframe. It is clearly shown that MMSE-based channel estimation, MMSE-SIC-based MIMO
detection, and Turbo decoding consume the most computational resources and dominate the
computational time. To lower the processing latency, new ultra-fast signal processing techniques,
especially for the three identified functions, should be developed to strike a favorable tradeoff
between throughput and latency.
TABLE II
A COMPARISON OF COMPUTATIONAL TIME FOR DIFFERENT FUNCTION MODULES AT THE RECEIVER (ALL NUMBERS WITHOUT A UNIT ARE IN SECONDS)

Receiver Module               | B = 1.4 MHz | B = 5 MHz   | B = 10 MHz
------------------------------|-------------|-------------|------------
CFO Compensation              | 0.0010      | 0.0023      | 0.0037
FFT                           | 2.9004e-04  | 6.2917e-04  | 8.3004e-04
Disassemble Reference Signal  | 1.2523e-04  | 2.2708e-04  | 3.1685e-04
Channel Estimation (MMSE)     | 0.0015      | 0.0141      | 0.0878
Disassemble Symbols           | 0.0013      | 0.0045      | 0.0087
MIMO Detection (MMSE-SIC)     | 0.0028      | 0.0242      | 0.0760
SINR Calculation              | 2.4947e-04  | 6.6754e-04  | 0.0012
Layer Demapping               | 4.3253e-05  | 1.0988e-04  | 3.8987e-04
Turbo Decoding                | 0.0129      | 0.0498      | 0.1048
Obtained Throughput           | 2.2739 Mbps | 10.073 Mbps | 20.41 Mbps
In our simulation, we propose and implement an improved channel estimation approach to
reduce the channel estimation latency. The basic idea is to use the least square estimation to
extract the CSI associated with the reference symbols, and then employ an advanced low-complexity 2-D biharmonic interpolation method to obtain the CSI for the entire resource block.
Typically, the resulting curves from the biharmonic interpolation method are much smoother
than the linear and nearest neighbor methods. Our simulation results show that the proposed
channel estimation method can reduce around 60% of the computational time relative to the
MMSE-based method at B = 5MHz, while achieving almost the same system throughput.
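A minimal sketch of the two-step estimator described above, with separable linear interpolation standing in for the 2-D biharmonic method (which is not reproduced here). The resource-grid size, pilot layout, channel model, and noise level are all illustrative assumptions rather than LTE parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sc, n_sym = 48, 14                                   # toy grid: subcarriers x OFDM symbols

# Smooth "true" channel over the time-frequency grid (illustrative model)
f = np.linspace(0, 1, n_sc)[:, None]
t = np.linspace(0, 1, n_sym)[None, :]
H = np.exp(1j * 2 * np.pi * (1.0 * f + 0.3 * t))

pilot_sc = np.linspace(0, n_sc - 1, 9).round().astype(int)   # assumed pilot layout
pilot_sym = np.array([0, 4, 9, 13])
X_p = np.exp(1j * np.pi / 4)                           # known reference symbol

# Least-squares estimate at each pilot position: H_ls = Y / X
noise = 0.05 * (rng.standard_normal((len(pilot_sc), len(pilot_sym)))
                + 1j * rng.standard_normal((len(pilot_sc), len(pilot_sym))))
Y_p = H[np.ix_(pilot_sc, pilot_sym)] * X_p + noise
H_ls = Y_p / X_p

def interp_complex(x_new, x_known, y_known):
    return np.interp(x_new, x_known, y_known.real) + 1j * np.interp(x_new, x_known, y_known.imag)

# Separable interpolation: first along frequency, then along time
H_freq = np.stack([interp_complex(np.arange(n_sc), pilot_sc, H_ls[:, j])
                   for j in range(len(pilot_sym))], axis=1)
H_hat = np.stack([interp_complex(np.arange(n_sym), pilot_sym, H_freq[i, :])
                  for i in range(n_sc)], axis=0)

err = np.mean(np.abs(H_hat - H))
print("mean absolute estimation error:", err)
```

The latency advantage comes from the structure: the LS step is a single element-wise division at the pilot positions, and the interpolation avoids the matrix inversions that dominate MMSE estimation.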
It is also desirable to develop ultra-fast multilayer interference suppression technologies to
enable fast MIMO detection, especially for a large number of transmit and receive antennas.
Along this direction, a parallel interference cancellation (PIC) with decision statistical combining
(DSC) detection algorithm was developed in [12], which can significantly reduce the detection
latency compared with MMSE-SIC. The PIC detectors are equivalent to a bank of matched
filters, which avoid the time-consuming MMSE matrix inversion. A very small number of
iterations between the decoder and the matched filter are added to achieve the performance of
MMSE receivers. This algorithm was also applied to ICI cancellation for high-mobility MIMO-OFDM systems and was shown to achieve a very good performance/complexity tradeoff.
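The matched-filter-plus-PIC idea can be sketched as follows, without the decision statistical combining or decoder iterations of [12]. The antenna counts, noise level, and iteration count are illustrative assumptions; the point is that no matrix inversion appears anywhere in the detector.

```python
import numpy as np

rng = np.random.default_rng(3)
n_tx, n_rx, trials = 4, 8, 200                 # illustrative antenna counts
errors = 0

for _ in range(trials):
    H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
    x = 2.0 * rng.integers(0, 2, n_tx) - 1.0   # BPSK layers
    w = 0.1 * (rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx))
    y = H @ x + w

    # Bank of matched filters as the initial estimate -- no matrix inversion
    x_hat = np.sign((H.conj().T @ y).real)

    # Parallel interference cancellation: every layer simultaneously subtracts
    # the current estimates of all other layers, then re-detects.
    for _ in range(5):
        x_new = np.empty_like(x_hat)
        for k in range(n_tx):
            residual = y - H @ x_hat + H[:, k] * x_hat[k]
            x_new[k] = np.sign((H[:, k].conj() @ residual).real)
        x_hat = x_new

    errors += np.count_nonzero(x_hat != x)

ber = errors / (trials * n_tx)
print("PIC bit error rate over", trials, "channels:", ber)
```

Because the per-layer updates inside each PIC iteration are independent, they can be executed fully in parallel in hardware, which is where the latency reduction over sequential MMSE-SIC comes from.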
Parallel hardware implementation is another important measure to reduce signal processing
latency. For example, the recently proposed parallel turbo decoder architecture [13] eliminates
the serial data dependencies, realizes full parallel processing, offers a significantly higher
processing throughput, and finally achieves a 50% hardware resource reduction compared with
the original architecture. With uRLLC recently declared as one of the major goals in 5G
networks, we envisage more research activities in developing ultra-fast signal processing
techniques and architectures.
VI. Radio Resource Management
In this section, we will discuss two radio resource management techniques that have great
potential to reduce the latency caused by the medium access process.
A. Non-orthogonal Multiple Access
As shown in Table I, grant acquisition and random access procedures in current standards are
two major sources of delay. This calls for novel approaches and fundamental shifts from current
protocols and standards originally designed for human communication to meet the requirements
for ultra-low latency applications. Though optimal in terms of per user achievable rate,
orthogonal multiple access (OMA) techniques, such as OFDMA in current LTE, are major
causes of the latency associated with the link establishment and random access. More
specifically, in existing wireless systems, radio resources are orthogonally allocated to the users
to deliver their messages. This requires the base station to first identify the users through
contention-based random access. This strategy suffers from severe collisions and high latencies
when the number of users increases.
Non-orthogonal multiple access (NOMA) has recently gained considerable attention as an
effective alternative to conventional OMA. In general, NOMA allows the signals from various
users to overlap by exploiting power, code or interleaver pattern at the expense of receiver
complexity. In the power-domain NOMA, which has been shown to be optimal in terms of
spectral efficiency [14], signals from multiple users are superimposed and successive
interference cancellation (SIC) is used at the receiver to decode the messages. Users do not need
to be identified at the base station beforehand, thus eliminating random access delay which is
significantly high in medium to high load scenarios [14].
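A minimal sketch of power-domain superposition and SIC for two users sharing the same resource; the power split, channel gains, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20000
p_near, p_far = 0.2, 0.8                 # power split: the far user gets more power

b_near = rng.integers(0, 2, n)
b_far = rng.integers(0, 2, n)
# Superimpose both users' BPSK signals on the same time-frequency resource
s = np.sqrt(p_near) * (2 * b_near - 1) + np.sqrt(p_far) * (2 * b_far - 1)

y = s + 0.1 * rng.standard_normal(n)     # near user's observation (unit channel gain)

# SIC at the near user: decode the stronger far-user signal first...
b_far_hat = (y > 0).astype(int)
# ...cancel it, then decode the near user's own signal from the residual
residual = y - np.sqrt(p_far) * (2 * b_far_hat - 1)
b_near_hat = (residual > 0).astype(int)

print("far-user BER seen at the near receiver:", np.mean(b_far_hat != b_far))
print("near-user BER after SIC              :", np.mean(b_near_hat != b_near))
```

The extra work sits entirely at the receiver: neither user needed a grant or a contention phase to be separated, which is precisely the latency saving NOMA offers over orthogonal access.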
Fig. 3 shows a comparison between NOMA and OMA in an uncoordinated scenario, where the
devices randomly choose a subband for their transmission. The number of subbands is denoted
by Ns and the total available bandwidth is assumed to be W = 100MHz. The bandwidth is
assumed to be uniformly divided into Ns subbands, each of W/Ns bandwidth. As can be seen,
when the number of devices is small, OMA slightly outperforms NOMA in terms of delay,
which is expected as the collision probability in this case is small and the devices can achieve
higher spectral efficiency as they are transmitting orthogonally. However, when the number of
devices is large, NOMA outperforms OMA, as it can effectively exploit the interference and
enable the devices to be decoded at the base station. In other words, in high traffic load scenarios,
OMA is mainly dominated by the random access collision which leads to unavoidable high
latencies, while NOMA supports a large number of devices with the desired latency, by
eliminating the random access phase and enabling the users to share the same radio resources.
Fig. 3. Delay versus the number of devices for NOMA and OMA.
The main benefits of NOMA come from the fact that it does not need separate grant acquisition
and random access phase, as the devices can send their data whenever they want to send. This
becomes more beneficial when the number of devices grows large, which is the scenario of
interest for most internet-of-things use cases. NOMA can also be easily combined with AFC
codes [11] to improve the spectral efficiency for each user, therefore providing a cross-layer
solution for reducing the delay. One solution to better satisfy the latency requirements for
different applications is to further divide the radio resources between the different uRLLC
applications. This will be further discussed in the next subsection. In this way, NOMA can be
further tuned to service a larger number of devices with the same requirements.
B. Resource Reservation via Resource Block Slicing
In the current LTE network, the management of radio resource blocks (RBs) for multiple
services is jointly optimized. As such, the latencies of different services are interdependent [15].
A traffic overload generated by one service can negatively impact the latency performance of
other services. To address this issue, we propose to reserve radio resources for each service. The
reservation is done by slicing RBs and allocating a slice to each service based on the traffic
demand. Moreover, if RBs in a slice are not used, they will be shared by other services. This type
of resource reservation method can achieve a high spectral efficiency and eliminate the latency
problem caused by the traffic overload issues coming from other services.
To evaluate the benefit of the proposed RB slicing on an LTE network, we conduct a simulation to
compare its performance with a legacy LTE network by using NS-3. Two types of services with
different data rates and latency requirements, i.e., low latency intelligent transportation systems
(ITS) with average packet sizes of 100 bytes and average packet intervals of 100ms per user, and
smart grid (SG) with average packet sizes of 300 bytes and average packet intervals of 80ms per
user, respectively, are considered in our simulation. The devices for the above services are
distributed in 1 km2 area according to a Poisson Point Process (PPP) with averages of 400 and
600 devices for ITS and SG, respectively, served by 4 LTE base stations, operating with 20MHz
bandwidth. The proportion of traffic load for each slice is approximated based on the ratio of the
number of users in a service over the total number of users in all services. Thus, for the proposed
RB slicing, we allocate 40% of available RBs exclusively for ITS devices transmissions, leaving
the remaining 60% RBs for SG devices transmissions. Note that all available RBs are shared by
ITS and SG equally in the current LTE network.
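The reservation rule described above — slicing RBs in proportion to each service's user share, with idle RBs lent to overloaded slices — can be sketched as below. The function names and the lending policy are hypothetical illustrations, not the NS-3 implementation.

```python
def slice_allocation(total_rbs, users_per_service):
    """Reserve RBs for each service in proportion to its share of users."""
    total_users = sum(users_per_service.values())
    return {s: round(total_rbs * u / total_users) for s, u in users_per_service.items()}

def lend_unused(alloc, demand):
    """RBs left idle in one slice are lent to overloaded services."""
    spare = sum(max(alloc[s] - demand.get(s, 0), 0) for s in alloc)
    granted = {}
    for s in alloc:
        need = max(demand.get(s, 0) - alloc[s], 0)
        extra = min(need, spare)
        spare -= extra
        granted[s] = min(demand.get(s, 0), alloc[s]) + extra
    return granted

# 400 ITS and 600 SG devices -> 40%/60% reservation, matching the setup above
alloc = slice_allocation(100, {"ITS": 400, "SG": 600})
print(alloc)                                        # {'ITS': 40, 'SG': 60}
# An ITS burst borrows idle SG resources instead of being blocked
print(lend_unused(alloc, {"ITS": 55, "SG": 30}))    # {'ITS': 55, 'SG': 30}
```

The reservation isolates each service's worst-case latency from the other's traffic peaks, while the lending step recovers the spectral efficiency that strict partitioning would otherwise sacrifice.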
Fig. 4 shows the cumulative density function (CDF) for the end to end packet latencies under a
legacy LTE network and under the RB slicing regime that isolates the traffic demand of
intelligent transportation systems (ITS) sensor and smart grids (SG) from each other. By
performing RB slicing that reserves resources for each service, the latency is reduced from an
average of 10ms to 5ms and 6ms for ITS and SG devices, respectively, as shown in the small box
in Fig. 4. This simulation confirms the benefit of the proposed approach. Open future
research challenges include how to dynamically optimize the proportion of resources
reserved by multiple services under varying loads, as well as heterogeneous reliability and latency
requirements.
Fig. 4. The cumulative distribution function (CDF) of the end-to-end delay without and with radio resource block slicing.²
² The authors would like to thank Zhouyou Gu for his assistance in simulating this figure.
VII. Other Potential Techniques
In addition to the measures introduced in previous sections, there are other techniques that have
great potential to reduce the end-to-end latency of cellular systems. In what follows, we briefly
discuss the principles of four potential technologies and explain how they can reduce latency.
A. Cross-layer Error Control
Automatic Repeat reQuest (ARQ) is a commonly-used error control method for detecting packet
losses by using acknowledgements and timeouts. ARQ has been widely adopted in many
communication networks with the Transmission Control Protocol (TCP). However, it introduces high
and unpredictable delays in wireless networks due to the time varying channel and user
contention over a common radio link. On the other hand, User Datagram Protocols (UDP), with
no ARQ retransmissions and lower overheads than TCP, have been used for delay sensitive
applications with no stringent requirements for low error probabilities, such as Voice over
Internet Protocol (VoIP), Video on Demand (VoD), Internet Protocol Television (IPTV) etc.
For emerging mission-critical applications over wireless networks, lower overheads are desirable
to reduce overall end to end latency. However, in order for UDP to be suitable for uRLLC, its
reliability needs to be substantially improved. Research on this has focused on the design of error
control schemes with minimal error protection at the physical layer and rateless coding for
erasure channels in the application layer. The research problems have been in optimizing the
redundancy split between the physical and application layers to have reliable transmission. This
approach involves a significant loss in the decoding error performance due to hard decision
decoding at the application layer and weak codes at the physical layer. A promising solution to
resolve this is to use short AFC codes in both the physical and the network layer and form a
concatenated code with soft output decoding at the physical and soft input decoding at the
network layer. Furthermore, the decoding of both AFC codes can be highly parallelized for a low
decoding delay.
B. Device-to-Device Communication
Device-to-device (D2D) communication refers to a radio technology that enables direct
communication between two physically close terminals. D2D has recently been considered as a
key solution for ultra-low latency applications, as it provides a direct link between traffic
participants, without going through the network infrastructure. D2D communication is a good fit
for vehicle-to-vehicle (V2V) communications to enable real-time safety systems, cooperative
collision avoidance and automated overtake. However, it may be not applicable to many other
mission-critical services, such as power systems or remote surgery with communication nodes
separated at large distances. Due to the global spectrum shortage, D2D links are expected to
operate within the same spectrum used by existing infrastructure-based communication systems
(e.g., cellular systems). This calls for highly efficient interference management techniques to
ensure the harmonious coexistence between D2D links and conventional links. Otherwise, the
latency gain introduced by D2D communication can easily disappear.
C. Mobile Edge Computing
Mobile edge computing (MEC) is a promising approach to promptly process computationally
intensive jobs offloaded from mobile devices. Edge computing modules can be installed at base
stations which are closer to sensing devices than data servers/clouds. To decrease job-processing
delays, edge computing modules are operated in a Software as a Service (SaaS) fashion. In other
words, a set of data processing software is in an always-on status, ready to process offloaded jobs
from sensing devices. The offloaded jobs can be processed immediately without waiting for
computing resource allocation, software initiation, and environment parameter configuration.
The data transfer between the sensing device and the computing module in the base station relies
on the existing air interface. A multiplexer/de-multiplexer at the base station can distinguish if
transmitted data are for computation offloading purpose. If so, the data is redirected to the edge
computing modules instead of the mobile core network. In fact, the implementation of edge
computing technologies is not mature in cellular networks. The key barrier stems from the
incompatibility of computing services and the existing LTE protocol stack. Modifying the
existing stack to accommodate computing services may cause substantial network reconstruction
and reconfiguration. Therefore, smoothly merging edge computing into the protocol stack is a
key future research direction.
D. Mobile Caching for Content Delivery
Smart mobile caching schemes are also effective solutions for improving the delay performance
of data intensive applications, e.g., multimedia, augmented reality (AR) applications, etc. Mobile
caching enables content reuse, which leads to drastic delay reductions and backhaul efficiency
improvements. The mobile cache can be installed at each base station. Whenever a mobile
device’s request “hits” a cached content, the base station intercepts the request and directly
returns the cached content without resorting to a remote server. Each base station determines the
cached contents through learning their popularities. Caching policies such as geo-based caching
and least frequently used eviction, etc. can be employed. The selected contents are then
downloaded from remote servers. Downloading cached files is not a delay-sensitive task; hence,
it can be operated in a separate network without competing for network bandwidth with other
delay-sensitive data traffic. Despite the potential benefits of caching, it is still challenging to
realize these benefits in practice. This is because the cache size at the base station is limited, but
the number of possible contents can be unlimited. Thus, it is essential to determine how to wisely
cache a set of popular contents to maximize the hit rate.
VIII. Summary
This article has introduced the emerging applications, design challenges, and potential
approaches in the design of ultra-reliable low latency communications (uRLLC). We described
potential use cases of uRLLC in tele-surgery, smart transportation and industry automation and
presented the latency and reliability requirements for these applications. To pinpoint major
latency bottlenecks in current cellular networks, we showed a breakdown of the various delay
sources in an LTE system and found that a few orders of magnitude of end-to-end latency reduction
is required to support the mission critical applications. To achieve this, each latency component needs to be
reduced significantly. Our initial results showed that short analog fountain codes, ultra-fast signal
processing, non-orthogonal multiple access and resource reservation via resource block slicing
are essential to reduce latency in the physical and multiple access layers. Furthermore, other
potential latency reduction measures, including cross-layer error control, device-to-device
communication, mobile edge computing and mobile caching, were briefly discussed. We hope
this article can encourage more research efforts toward the realization of uRLLC.
REFERENCES
[1] METIS Project, Deliverable D8.4. [Online]. Available: https://www.metis2020.com/wp-content/uploads/deliverables/METIS_D8.4_v1.pdf
[2] N. Larson et al., "Investigating excessive delays in mobile broadband networks," in Proc. ACM SIGCOMM, 2015.
[3] Nokia, "5G for mission critical communication," White Paper. [Online]. Available: http://www.hit.bme.hu/~jakab/edu/litr/5G/Nokia_5G_for_Mission_Critical_Communication_White_Paper.pdf
[4] 5GPPP Association, "5G and e-health," White Paper, Oct. 2015. [Online]. Available: https://5gppp.eu/wp-content/uploads/2016/02/5G-PPP-White-Paper-on-eHealth-Vertical-Sector.pdf
[5] 5GPPP Association, "5G automotive vision," White Paper, Oct. 2015. [Online]. Available: https://5gppp.eu/wp-content/uploads/2014/02/5G-PPP-White-Paper-on-Automotive-Vertical-Sectors.pdf
[6] M. Luvisotto, Z. Pang, and D. Dzung, "Ultra high performance wireless control for critical applications: Challenges and directions," IEEE Transactions on Industrial Informatics, vol. 13, no. 3, pp. 1448-1459, June 2017.
[7] Study on Latency Reduction Techniques for LTE (Release 14), document TR 36.881, 3rd Generation Partnership Project (3GPP), 2016. [Online]. Available: http://www.3gpp.org/ftp//Specs/archive/36_series/36.881/
[8] P. Schulz et al., "Latency critical IoT applications in 5G: Perspective on the design of radio interface and network architecture," IEEE Communications Magazine, vol. 55, no. 2, pp. 70-78, February 2017.
[9] Y. Polyanskiy, H. V. Poor, and S. Verdú, "Channel coding rate in the finite blocklength regime," IEEE Transactions on Information Theory, vol. 56, no. 5, pp. 2307-2359, May 2010.
[10] G. Durisi, T. Koch, and P. Popovski, "Toward massive, ultrareliable, and low-latency wireless communication with short packets," Proceedings of the IEEE, vol. 104, no. 9, pp. 1711-1726, Sept. 2016.
[11] M. Shirvanimoghaddam, Y. Li, and B. Vucetic, "Near-capacity adaptive analog fountain codes for wireless channels," IEEE Communications Letters, vol. 17, no. 12, pp. 2241-2244, Dec. 2013.
[12] N. Aboutorab, W. Hardjawana, and B. Vucetic, "A new iterative Doppler-assisted channel estimation joint with parallel ICI cancellation for high-mobility MIMO-OFDM systems," IEEE Trans. Veh. Technol., vol. 61, no. 4, pp. 1577-1589, May 2012.
[13] A. Li, P. Hailes, R. G. Maunder, B. M. Al-Hashimi, and L. Hanzo, "1.5 Gbit/s FPGA implementation of a fully-parallel turbo decoder designed for mission-critical machine-type communication applications," IEEE Access, vol. 4, pp. 5452-5473, Aug. 2016.
[14] M. Shirvanimoghaddam, M. Dohler, and S. Johnson, "Massive non-orthogonal multiple access for cellular IoT: Potentials and limitations," IEEE Communications Magazine, accepted December 2016.
[15] O. Sallent, J. Perez-Romero, R. Ferrus, and R. Agusti, "On radio access network slicing from a radio resource management perspective," IEEE Wireless Communications, vol. PP, no. 99, pp. 2-10, 2017.
A review of heterogeneous data mining for brain disorders
arXiv:1508.01023v1 [cs.LG] 5 Aug 2015
Bokai Cao · Xiangnan Kong · Philip S. Yu
Abstract With rapid advances in neuroimaging techniques, the research on brain disorder identification has
become an emerging area in the data mining community. Brain disorder data poses many unique challenges
for data mining research. For example, the raw data
generated by neuroimaging experiments is in tensor representations, with typical characteristics of high dimensionality, structural complexity and nonlinear separability. Furthermore, brain connectivity networks can be
constructed from the tensor data, embedding subtle interactions between brain regions. Other clinical measures are usually available reflecting the disease status
from different perspectives. It is expected that integrating complementary information in the tensor data and
the brain network data, and incorporating other clinical
parameters will be potentially transformative for investigating disease mechanisms and for informing therapeutic interventions. Many research efforts have been
devoted to this area. They have achieved great success
in various applications, such as tensor-based modeling,
subgraph pattern mining, and multi-view feature analysis.
In this paper, we review some recent data mining methods that are used for analyzing brain disorders.
Keywords Data mining · Brain diseases · Tensor
analysis · Subgraph patterns · Feature selection
B. Cao · P.S. Yu
Department of Computer Science, University of Illinois at
Chicago, Chicago, IL 60607.
E-mail: [email protected], [email protected]
X. Kong
Department of Computer Science, Worcester Polytechnic Institute, Worcester, MA 01609.
E-mail: [email protected]
1 Introduction
Many brain disorders are characterized by ongoing injury that is clinically silent for prolonged periods and
irreversible by the time symptoms first present. New
approaches for detection of early changes in subclinical periods will afford powerful tools for aiding clinical diagnosis, clarifying underlying mechanisms and
informing neuroprotective interventions to slow or reverse neural injury for a broad spectrum of brain disorders, including bipolar disorder, HIV infection of the brain,
Alzheimer’s disease, Parkinson’s disease, etc. Early diagnosis has the potential to greatly alleviate the burden
of brain disorders and the ever increasing costs to families and society.
As the identification of brain disorders is extremely
challenging, many different diagnosis tools and methods
have been developed to obtain a large number of measurements from various examinations and laboratory
tests. Especially, recent advances in the neuroimaging
technology have provided an efficient and noninvasive
way for studying the structural and functional connectivity of the human brain, either normal or in a diseased
state [48]. This can be attributed in part to advances
in magnetic resonance imaging (MRI) capabilities [33].
Techniques such as diffusion MRI, also referred to as
diffusion tensor imaging (DTI), produce in vivo images
of the diffusion process of water molecules in biological
tissues. By leveraging the fact that the water molecule
diffusion patterns reveal microscopic details about tissue architecture, DTI can be used to perform tractography within the white matter and construct structural
connectivity networks [2, 36, 12, 38, 40]. Functional MRI
(fMRI) is a functional neuroimaging procedure that
identifies localized patterns of brain activation by detecting associated changes in the cerebral blood flow.
The primary form of fMRI uses the blood oxygenation
level dependent (BOLD) response extracted from the
gray matter [3, 42, 43]. Another neuroimaging technique
is positron emission tomography (PET). Using different radioactive tracers (e.g., fluorodeoxyglucose), PET
produces a three-dimensional image of various physiological, biochemical and metabolic processes [68].
A variety of data representations can be derived
from these neuroimaging experiments, which present
many unique challenges for the data mining community.
Conventional data mining algorithms are usually developed to tackle data in one specific representation, a majority of which are particularly for vector-based data.
However, the raw neuroimaging data is in the form
of tensors, from which we can further construct brain
networks connecting regions of interest (ROIs). Both
of them are highly structured considering correlations
between adjacent voxels in the tensor data and that
between connected brain regions in the brain network
data. Moreover, it is critical to explore interactions between measurements computed from the neuroimaging
and other clinical experiments which describe subjects
in different vector spaces. In this paper, we review some
recent data mining methods for (1) mining tensor imaging data; (2) mining brain networks; (3) mining multiview feature vectors.
2 Tensor Imaging Analysis
For brain disorder identification, the raw data generated by neuroimaging experiments are in tensor representations [15, 22, 68]. For example, in contrast to two-dimensional X-ray images, an fMRI sample corresponds
to a four-dimensional array by recording the sequential
changes of traceable signals in each voxel.¹
Tensors are higher order arrays that generalize the
concepts of vectors (first-order tensors) and matrices
(second-order tensors), whose elements are indexed by
more than two indices. Each index expresses a mode of
variation of the data and corresponds to a coordinate
direction. In an fMRI sample, the first three modes usually encode the spatial information, while the fourth
mode encodes the temporal information. The number
of variables in each mode indicates the dimensionality
of a mode. The order of a tensor is determined by the
number of its modes. An mth-order tensor can be represented as X = (x_{i_1,...,i_m}) ∈ R^{I_1×···×I_m}, where I_i is
the dimension of X along the i-th mode.
¹ A voxel is the smallest three-dimensional point volume referenced in a neuroimaging of the brain.

Fig. 1 Tensor factorization of a third-order tensor.

Definition 1 (Tensor product) The tensor product of three vectors a ∈ R^{I_1}, b ∈ R^{I_2} and c ∈ R^{I_3}, denoted by a ⊗ b ⊗ c, represents a third-order tensor with the elements (a ⊗ b ⊗ c)_{i_1,i_2,i_3} = a_{i_1} b_{i_2} c_{i_3}.
Tensor product is also referred to as outer product
in some literature. An mth-order tensor is a rank-one
tensor if it can be defined as the tensor product of m
vectors.
Definition 2 (Tensor factorization) Given a third-order tensor X ∈ R^{I_1×I_2×I_3} and an integer R, as illustrated in Figure 1, a tensor factorization of X can be expressed as

X = X_1 + X_2 + · · · + X_R = Σ_{r=1}^{R} a_r ⊗ b_r ⊗ c_r    (1)
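Definitions 1 and 2 can be checked numerically with a small example; the dimensions and rank below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(5)
I1, I2, I3, R = 4, 5, 6, 3                    # arbitrary dimensions and rank

# R triples of factor vectors; each outer product a_r ⊗ b_r ⊗ c_r is rank-one
A = rng.standard_normal((I1, R))
B = rng.standard_normal((I2, R))
C = rng.standard_normal((I3, R))

# Sum of R rank-one tensors, as in Eq. (1)
X = np.zeros((I1, I2, I3))
for r in range(R):
    X += np.einsum('i,j,k->ijk', A[:, r], B[:, r], C[:, r])

# Element-wise check of Definition 1: (a ⊗ b ⊗ c)[i,j,k] = a[i] * b[j] * c[k]
manual = sum(A[1, r] * B[2, r] * C[3, r] for r in range(R))
print(np.isclose(X[1, 2, 3], manual))          # True
```

Note how compact the factorized form is: the I_1·I_2·I_3 entries of X are generated by only R·(I_1 + I_2 + I_3) factor parameters, which is the source of the dimensionality reduction exploited below.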
One of the major difficulties brought by the tensor data is the curse of dimensionality. The total
number of voxels contained in a multi-mode tensor, say, X = (x_{i_1,...,i_m}) ∈ R^{I_1×···×I_m}, is
I_1 × · · · × I_m, which is exponential in the number of modes. If we unfold the tensor into a
vector, the number of features will be extremely high [69]. This makes traditional data mining
methods prone to overfitting, especially with a small sample size. Both the computational
scalability and the theoretical guarantees of traditional models are compromised by such high dimensionality [22].
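For a concrete sense of scale, consider a hypothetical but typically sized scan:

```python
import numpy as np

# Hypothetical fMRI sample: 64 x 64 x 32 spatial voxels recorded over 200 time points
shape = (64, 64, 32, 200)
n_features = int(np.prod(shape))
print(n_features)    # 26214400 features once the 4th-order tensor is unfolded into a vector
```

With study cohorts often numbering only tens of subjects, a vectorized representation thus has many orders of magnitude more features than samples, which is why naive unfolding invites overfitting.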
On the other hand, complex structural information
is embedded in the tensor data. For example, in the neuroimaging data, values of adjacent voxels are usually
correlated with each other [33]. Such spatial relationships among different voxels in a tensor image can be
very important in neuroimaging applications. Conventional tensor-based approaches focus on reshaping the
tensor data into matrices/vectors and thus the original
spatial relationships are lost. The integration of structural information is expected to improve the accuracy
and interpretability of tensor models.
2.1 Classification
Suppose we have a set of tensor data D = {(X_i, y_i)}_{i=1}^{n} for a classification problem,
where X_i ∈ R^{I_1×···×I_m} is the neuroimaging data represented as an mth-order tensor
and y_i ∈ {−1, +1} is the corresponding binary class label of X_i. For example, if the i-th subject has Alzheimer's
disease, the subject is associated with a positive label,
i.e., yi = +1. Otherwise, if the subject is in the control
group, the subject is associated with a negative label,
i.e., yi = −1.
Supervised tensor learning can be formulated as the
optimization problem of support tensor machines (STMs)
[55] which is a generalization of the standard support
vector machines (SVMs) from vector data to tensor
data. The objective of such learning algorithms is to
learn a hyperplane by which the samples with different
labels are separated as widely as possible. However, tensor data may not be linearly separable in the input
space. To achieve a better performance on finding the
most discriminative biomarkers or identifying infected
subjects from the control group, in many neuroimaging
applications, nonlinear transformation of the original
tensor data should be considered. He et al. study the
problem of supervised tensor learning with nonlinear
kernels which can preserve the structure of tensor data
[22]. The proposed kernel is an extension of kernels in
the vector space to the tensor space which can take the
multidimensional structure complexity into account.
2.2 Regression
Slightly different from classifying disease status (a discrete label), another family of problems uses tensor neuroimages to predict cognitive outcomes (a continuous label). These problems can be formulated in a regression
setup by treating clinical outcome as the real label, i.e.,
yi ∈ R, and treating tensor neuroimages as the input.
However, most classical regression methods take vectors as input features. Simply reshaping a tensor into a
vector is clearly an unsatisfactory solution.
Zhou et al. exploit the tensor structure in imaging
data and integrate tensor decomposition within a statistical regression paradigm to model multidimensional arrays [69]. By imposing a low rank approximation to the
extremely high dimensional complex imaging data, the
curse of dimensionality is greatly alleviated, thereby allowing development of a fast estimation algorithm and
regularization. Numerical analysis demonstrates its potential applications in identifying regions of interest in
brains that are relevant to a particular clinical response.
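The low-rank idea can be made concrete with a minimal sketch of the simplest case: matrix-valued covariates and a rank-1 coefficient B = u vᵀ, fit by alternating least squares. This is an illustration of the general principle only, not the actual estimator of Zhou et al.; all data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 200, 8, 6                               # n subjects, p x q "images"
u_true, v_true = rng.normal(size=p), rng.normal(size=q)
X = rng.normal(size=(n, p, q))
y = np.einsum('ipq,p,q->i', X, u_true, v_true)    # y_i = <u v^T, X_i>, noiseless

# Alternating least squares: with v fixed, y_i = <u, X_i v> is linear in u,
# and symmetrically for v -- so each step is an ordinary least-squares fit.
u, v = np.ones(p), np.ones(q)
for _ in range(50):
    u = np.linalg.lstsq(np.einsum('ipq,q->ip', X, v), y, rcond=None)[0]
    v = np.linalg.lstsq(np.einsum('ipq,p->iq', X, u), y, rcond=None)[0]

pred = np.einsum('ipq,p,q->i', X, u, v)
rel_err = np.linalg.norm(pred - y) / np.linalg.norm(y)
```

Constraining B to rank 1 reduces the number of free parameters from p × q to p + q, which is the sense in which the curse of dimensionality is alleviated; the CP model of [69] generalizes this to rank-R coefficient tensors of any order.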
2.3 Network Discovery
Modern imaging techniques have allowed us to study
the human brain as a complex system by modeling it
as a network [1]. For example, the fMRI scans consist of
activations of thousands of voxels over time embedding
a complex interaction of signals and noise [19], which
naturally presents the problem of eliciting the underlying network from brain activities in the spatio-temporal
tensor data. A brain connectivity network, also called a
connectome [52], consists of nodes (gray matter regions)
and edges (white matter tracts in structural networks
or correlations between two BOLD time series in functional networks).
Although the anatomical atlases in the brain have
been extensively studied for decades, task/subject-specific networks have still not been completely explored
with consideration of functional or structural connectivity information. An anatomically parcellated region
may contain subregions that are characterized by dramatically different functional or structural connectivity
patterns, thereby significantly limiting the utility of the
constructed networks. There are usually trade-offs between reducing noise and preserving utility in brain parcellation [33]. Thus investigating how to directly construct brain networks from tensor imaging data and
understanding how they develop, deteriorate and vary
across individuals will benefit disease diagnosis [15].
Davidson et al. pose the problem of network discovery from fMRI data which involves simplifying spatiotemporal data into regions of the brain (nodes) and relationships between those regions (edges) [15]. Here the
nodes represent collections of voxels that are known to
behave cohesively over time; the edges can indicate a
number of properties between nodes such as facilitation/inhibition (increases/decreases activity) or probabilistic (synchronized activity) relationships; the weight
associated with each edge encodes the strength of the
relationship.
A tensor can be decomposed into several factors.
However, unconstrained tensor decomposition results
of the fMRI data may not be good for node discovery
because each factor is typically not a spatially contiguous region nor does it necessarily match an anatomical region. That is to say, many spatially adjacent voxels in the same structure are not active in the same
factor, which is anatomically impossible. Therefore, to
achieve the purpose of discovering nodes while preserving anatomical adjacency, known anatomical regions in
the brain are used as masks and constraints are added
to enforce that the discovered factors should closely
match these masks [15].
Overall, current research on tensor imaging analysis
presents two directions: (1) supervised: for a particular
brain disorder, a classifier can be trained by modeling
the relationship between a set of neuroimages and their
associated labels (disease status or clinical response);
(2) unsupervised: regardless of brain disorders, a brain
network can be discovered from a given neuroimage.
3 Brain Network Analysis
We have briefly introduced that brain networks can be
constructed from neuroimaging data where nodes correspond to brain regions, e.g., insula, hippocampus, thalamus, and links correspond to the functional/structural
connectivity between brain regions. The linkage structure in brain networks can encode tremendous information about the mental health of human subjects. For example, in brain networks derived from functional magnetic resonance imaging (fMRI), functional connections
can encode the correlations between the functional activities of brain regions, while structural links in diffusion tensor imaging (DTI) brain networks can capture
the number of neural fibers connecting different brain
regions. The complex structures and the lack of vector
representations for the brain network data raise major
challenges for data mining.
Next, we will discuss different approaches on how
to conduct further analysis for constructed brain networks, which are also referred to as graphs hereafter.
Definition 3 (Binary graph) A binary graph is represented as G = (V, E), where V = {v_1, · · · , v_{n_v}} is the set of vertices and E ⊆ V × V is the set of deterministic edges.
3.1 Kernel Learning on Graphs
In the setting of supervised learning on graphs, the target is to train a classifier using a given set of graph data D = {(G_i, y_i)}_{i=1}^n, so that we can predict the label ŷ for a test graph G. With applications to brain networks,
it is desirable to identify the disease status for a subject based on his/her uncovered brain network. Recent
development of brain network analysis has made characterization of brain disorders at a whole-brain connectivity level possible, thus providing a new direction for
brain disease classification.
Due to the complex structures and the lack of vector
representations, graph data cannot be directly used as
the input for most data mining algorithms. A straightforward solution that has been extensively explored is
to first derive features from brain networks and then
construct a kernel on the feature vectors.
Wee et al. use brain connectivity networks for disease diagnosis on mild cognitive impairment (MCI),
which is an early phase of Alzheimer’s disease (AD)
and usually regarded as a good target for early diagnosis and therapeutic interventions [61, 62, 63]. In the
step of feature extraction, weighted local clustering coefficients of each ROI in relation to the remaining ROIs
are extracted from all the constructed brain networks to
B. Cao et al.
quantify the prevalence of clustered connectivity around
the ROIs. To select the most discriminative features for
classification, statistical t-test is performed and features
with p-values smaller than a predefined threshold are
selected to construct a kernel matrix. Through the employment of the multi-kernel SVM, Wee et al. integrate
information from DTI and fMRI and achieve accurate
early detection of brain abnormalities [63].
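The filter-then-kernel step described above can be sketched roughly as follows. This is a simplified stand-in for the actual pipeline of [61, 62, 63]: for self-containedness the threshold is placed on the absolute Welch t-statistic rather than on a p-value, and the data are synthetic.

```python
import numpy as np

def t_filter(F, y, t_min=2.0):
    """Rank each feature (column of F) by a two-sample Welch t-statistic
    between the two label groups, and keep columns with |t| > t_min."""
    a, b = F[y == +1], F[y == -1]
    ma, mb = a.mean(axis=0), b.mean(axis=0)
    va, vb = a.var(axis=0, ddof=1), b.var(axis=0, ddof=1)
    t = (ma - mb) / np.sqrt(va / len(a) + vb / len(b))
    return np.flatnonzero(np.abs(t) > t_min)

# toy check: feature 0 carries a group difference, the rest are noise
rng = np.random.default_rng(0)
y = np.repeat([+1, -1], 100)
F = rng.normal(size=(200, 5))
F[y == +1, 0] += 3.0          # shift feature 0 for the positive group
selected = t_filter(F, y)
```

The selected columns would then be used to build a kernel matrix for the (multi-kernel) SVM, as in the pipeline above.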
However, such a strategy simply treats a graph as a collection of nodes/links, and then extracts local measures (e.g., clustering coefficients) for each node or performs statistical analysis on each link, thereby ignoring the connectivity structure of brain networks. Motivated by the fact that some data in real-world applications are naturally represented by means of graphs, while compressing and converting them to vectorial representations would lose structural information, kernel methods for graphs have been extensively studied for a decade [5].
A graph kernel maps the graph data from the original graph space to the feature space and further measures the similarity between two graphs by comparing
their topological structures [49]. For example, product
graph kernel is based on the idea of counting the number of walks in product graphs [18]; marginalized graph
kernel works by comparing the label sequences generated by synchronized random walks of labeled graphs
[30]; cyclic pattern kernels for graphs count pairs of
matching cyclic/tree patterns in two graphs [23].
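To make one of these concrete, the product graph kernel admits a closed form: with A_× = A_1 ⊗ A_2 the adjacency matrix of the direct product graph, the geometrically weighted count of common walks of all lengths is 1ᵀ(I − λA_×)⁻¹1. The sketch below assumes λ is below the reciprocal of the spectral radius of A_× so that the series converges; the graphs are toy examples.

```python
import numpy as np

def random_walk_kernel(A1, A2, lam=0.05):
    """Geometric random-walk kernel: sum_k lam^k * (# common walks of
    length k), computed as 1^T (I - lam * A1 (x) A2)^(-1) 1."""
    Ax = np.kron(A1, A2)                  # adjacency of the direct product graph
    n = Ax.shape[0]
    M = np.linalg.solve(np.eye(n) - lam * Ax, np.ones(n))
    return float(np.ones(n) @ M)

# two toy graphs on 3 nodes: a triangle and a path
tri  = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
k12 = random_walk_kernel(tri, path)
```

The kernel is symmetric in its two arguments, and the k = 0 term alone contributes 1ᵀ1 = 9 here, so common walks only add to that baseline.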
To identify individuals with AD/MCI from healthy
controls, instead of using only a single property of brain
networks, Jie et al. integrate multiple properties of fMRI
brain networks to improve the disease diagnosis performance [27]. Two different yet complementary network
properties, i.e., local connectivity and global topological properties are quantified by computing two different
types of kernels, i.e., a vector-based kernel and a graph
kernel. As a local network property, weighted clustering coefficients are extracted to compute a vector-based
kernel. As a topology-based graph kernel, WeisfeilerLehman subtree kernel [49] is used to measure the topological similarity between paired fMRI brain networks.
It is shown that this type of graph kernel can effectively capture the topological information from fMRI
brain networks. The multi-kernel SVM is employed to
fuse these two heterogeneous kernels for distinguishing
individuals with MCI from healthy controls.
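A minimal sketch of the fusion step follows, using a linearly weighted combination of two precomputed kernel matrices. For self-containedness, kernel ridge regression stands in for the multi-kernel SVM solver, and both "views" are synthetic; the weight β would normally be tuned or learned.

```python
import numpy as np

def fuse(K_vec, K_graph, beta=0.5):
    # linearly weighted combination of two precomputed kernel matrices
    return beta * K_vec + (1.0 - beta) * K_graph

def fit_kernel_ridge(K, y, lam=0.1):
    # dual coefficients; a simple stand-in for an SVM trained on kernel K
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

# toy data: two views of the same 40 subjects, labels y in {-1, +1}
rng = np.random.default_rng(0)
y = np.repeat([+1.0, -1.0], 20)
V1 = y[:, None] + 0.1 * rng.normal(size=(40, 3))   # "vector-based" view
V2 = y[:, None] + 0.1 * rng.normal(size=(40, 4))   # stand-in for a graph view
K = fuse(V1 @ V1.T, V2 @ V2.T)                     # linear kernels per view
alpha = fit_kernel_ridge(K, y)
train_pred = np.sign(K @ alpha)
```

Any positive combination of valid kernels is again a valid kernel, which is what makes this simple fusion rule well-defined.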
3.2 Subgraph Pattern Mining
In brain network analysis, the ideal patterns we want
to mine from the data should take care of both local
and global graph topological information.

Fig. 2 An example of discriminative subgraph patterns in brain networks.

Fig. 3 An example of fMRI brain networks (left) and all possible instantiations of linkage structures between red nodes (right) [10].

Graph kernel methods seem promising; however, they are not interpretable. Subgraph patterns are more suitable for brain
networks, which can simultaneously model the network
connectivity patterns around the nodes and capture the
changes in local area [33].
Definition 4 (Subgraph) Let G′ = (V′, E′) and G = (V, E) be two binary graphs. G′ is a subgraph of G (denoted as G′ ⊆ G) iff V′ ⊆ V and E′ ⊆ E. If G′ is a subgraph of G, then G is a supergraph of G′.
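On fixed, aligned node sets (as is typical for brain networks, where each node corresponds to the same anatomical region in every subject), this containment test is a direct set comparison. The sketch below is minimal; for graphs whose nodes are not aligned one would instead need subgraph isomorphism testing, which is NP-complete.

```python
def is_subgraph(g_small, g_big):
    """g = (vertices, edges), with vertices a set and edges a set of
    frozenset pairs; tests V' <= V and E' <= E from Definition 4."""
    (v1, e1), (v2, e2) = g_small, g_big
    return v1 <= v2 and e1 <= e2

# a triangle on nodes {1, 2, 3} and a single-edge graph on {1, 2}
G  = ({1, 2, 3}, {frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})})
Gp = ({1, 2},    {frozenset({1, 2})})
```

Here `is_subgraph(Gp, G)` holds while `is_subgraph(G, Gp)` does not, matching the definition.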
A subgraph pattern, in a brain network, represents a
collection of brain regions and their connections. For example, as shown in Figure 2, three brain regions should
work collaboratively for normal people and the absence
of any connection between them can result in Alzheimer’s
disease to different degrees. Therefore, it is valuable to
understand which connections collectively play a significant role in disease mechanism by finding discriminative
subgraph patterns in brain networks.
Mining subgraph patterns from graph data has been
extensively studied by many researchers [29, 13, 56, 66].
In general, a variety of filtering criteria are proposed.
A typical evaluation criterion is frequency, which aims
at searching for frequently appearing subgraph features
in a graph dataset satisfying a prespecified threshold.
Most of the frequent subgraph mining approaches are
unsupervised. For example, Yan and Han develop a
depth-first search algorithm: gSpan [67]. This algorithm
builds a lexicographic order among graphs, and maps
each graph to a unique minimum DFS code as its
canonical label. Based on this lexicographic order, gSpan
adopts the depth-first search strategy to mine frequent
connected subgraphs efficiently. Many other approaches
for frequent subgraph mining have also been proposed,
e.g., AGM [26], FSG [34], MoFa [4], FFSM [24], and
Gaston [41].
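For brain networks with a common node set, the frequency criterion reduces to counting, for each candidate edge set, how many graphs in the dataset contain it. The sketch below shows only this naive support computation; gSpan and its relatives add canonical DFS codes, candidate generation, and search-space pruning on top of it.

```python
def support(candidate_edges, graphs):
    """Fraction of graphs (each represented as a set of edges over a
    shared node set) that contain every edge of the candidate."""
    return sum(candidate_edges <= g for g in graphs) / len(graphs)

# toy dataset of three graphs over regions A, B, C
graphs = [
    {('A', 'B'), ('B', 'C')},
    {('A', 'B'), ('B', 'C'), ('A', 'C')},
    {('A', 'C')},
]
freq = support({('A', 'B')}, graphs)   # edge (A, B) appears in 2 of 3 graphs
```

A pattern is then called frequent when its support meets the prespecified threshold.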
Moreover, the problem of supervised subgraph mining has been studied in recent work which examines
how to improve the efficiency of searching the discriminative subgraph patterns for graph classification. Yan
et al. introduce two concepts structural leap search and
frequency-descending mining, and propose LEAP [66]
which is one of the first work in discriminative subgraph mining. Thoma et al. propose CORK which can
yield a near-optimal solution using greedy feature selection [56]. Ranu and Singh propose a scalable approach,
called GraphSig, that is capable of mining discriminative subgraphs with a low frequency threshold [46].
Jin et al. propose COM which takes into account the
co-occurrences of subgraph patterns, thereby facilitating the mining process [28]. Jin et al. further propose
an evolutionary computation method, called GAIA, to
mine discriminative subgraph patterns using a randomized searching strategy [29]. Zhu et al. design a diversified discrimination score based on the log ratio which
can reduce the overlap between selected features by considering the embedding overlaps in the graphs [70].
Conventional graph mining approaches are best suited
for binary edges, where the structure of graph objects is
deterministic, and the binary edges represent the presence of linkages between the nodes [33]. In fMRI brain
network data however, there are inherently weighted
edges in the graph linkage structure, as shown in Figure 3 (left). A straightforward solution is to threshold
weighted networks to yield binary networks. However,
such simplification will result in great loss of information. Ideal data mining methods for brain network analysis should be able to overcome these methodological
problems by generalizing the network edges to positive
and negative weighted cases, e.g., probabilistic weights
in fMRI brain networks, integral weights in DTI brain
networks.
Definition 5 (Weighted graph) A weighted graph is represented as G̃ = (V, E, p), where V = {v_1, · · · , v_{n_v}} is the set of vertices, E ⊆ V × V is the set of nondeterministic edges, and p : E → (0, 1] is a function that assigns
a probability of existence to each edge in E.
fMRI brain networks can be modeled as weighted
graphs where each edge e ∈ E is associated with a
probability p(e) indicating the likelihood of whether
this edge should exist or not [32, 10]. It is assumed that
the p(e) of different edges in a weighted graph are independent of each other. Therefore, by enumerating the
possible existence of all edges in a weighted graph, we
can obtain a set of binary graphs. For example, in Fig. 3
(right), consider the three red nodes and links between
them as a weighted graph. There are 2^3 = 8 binary graphs that can be implied with different probabilities.
For a weighted graph G̃, the probability of G̃ containing a subgraph feature G′ is defined as the probability that a binary graph G implied by G̃ contains subgraph G′. Kong et al. propose a discriminative subgraph feature selection method based on dynamic programming
to compute the probability distribution of the discrimination scores for each subgraph pattern within a set of
weighted graphs [32].
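Under the edge-independence assumption, the implied distribution over binary graphs can be enumerated directly. The sketch below uses three illustrative edge probabilities; brute-force enumeration is only feasible for very small edge sets, which is one reason Kong et al. resort to dynamic programming.

```python
from itertools import product

def implied_graphs(edge_probs):
    """Enumerate all binary graphs implied by a weighted graph.
    edge_probs: dict mapping each edge to its probability of existence.
    Returns a list of (frozenset_of_present_edges, probability) pairs."""
    edges = list(edge_probs)
    worlds = []
    for bits in product([0, 1], repeat=len(edges)):
        present = frozenset(e for e, b in zip(edges, bits) if b)
        prob = 1.0
        for e, b in zip(edges, bits):
            prob *= edge_probs[e] if b else 1.0 - edge_probs[e]
        worlds.append((present, prob))
    return worlds

worlds = implied_graphs({'e1': 0.8, 'e2': 0.6, 'e3': 0.9})
total = sum(p for _, p in worlds)                            # sums to 1
p_contains = sum(p for g, p in worlds if {'e1', 'e2'} <= g)  # = 0.8 * 0.6
```

Because the edges are independent, the probability of containing a subgraph collapses to the product of its edge probabilities, as the enumeration confirms.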
For brain network analysis, usually we only have a
small number of graph instances [32]. In these applications, the graph view alone is not sufficient for mining
important subgraphs. Fortunately, the side information
is available along with the graph data for brain disorder identification. For example, in neurological studies,
hundreds of clinical, immunologic, serologic and cognitive measures may be available for each subject, apart
from brain networks. These measures compose multiple side views which contain a tremendous amount of
supplemental information for diagnostic purposes. It is
desirable to extract valuable information from a plurality of side views to guide the process of subgraph
mining in brain networks.
Figure 4(a) illustrates the process of selecting subgraph patterns in conventional graph classification approaches. Obviously, the valuable information embedded in side views is not fully leveraged in feature selection process. To tackle this problem, Cao et al. introduce an effective algorithm for discriminative subgraph selection using multiple side views [9], as illustrated in Figure 4(b). Side information consistency is
first validated via statistical hypothesis testing which
suggests that the similarity of side view features between instances with the same label should have higher
probability to be larger than that with different labels.
Based on such observations, it is assumed that the similarity/distance between instances in the space of subgraph features should be consistent with that in the
space of a side view. That is to say, if two instances
are similar in the space of a side view, they should also be close to each other in the space of subgraph features.

Fig. 4 Two strategies of leveraging side views in feature selection process for graph classification: (a) treating side views and subgraph patterns separately; (b) using side views as guidance for the process of selecting subgraph patterns.

Therefore the target is to minimize the distance
between subgraph features of each pair of similar instances in each side view [9]. In contrast to existing
subgraph mining approaches that focus on the graph
view alone, the proposed method can explore multiple
vector-based side views to find an optimal set of subgraph features for graph classification.
For graph classification, brain network analysis approaches can generally be put into three groups: (1)
extracting some local measures (e.g., clustering coefficient) to train a standard vector-based classifier; (2) directly adopting graph kernels for classification; (3) finding discriminative subgraph patterns. Different types of
methods model the connectivity embedded in brain networks in different ways.
4 Multi-view Feature Analysis
Medical science witnesses everyday measurements from
a series of medical examinations documented for each
subject, including clinical, imaging, immunologic, serologic and cognitive measures [7], as shown in Figure 5.
Each group of measures characterizes the health state of a subject from a different aspect. This type of data is known as multi-view data, and each group of measures forms a distinct view quantifying subjects in one specific feature space. Therefore, it is critical to combine the views to improve the learning performance; simply concatenating features from all views and transforming the multi-view data into single-view data, as method (a) shown in Figure 6, would fail to leverage the underlying correlations between different views.
Fig. 5 An example of multi-view learning in medical studies [6] (views: MRI sequence A, MRI sequence B, and clinical, immunologic, serologic and cognitive measures).
4.1 Multi-view Learning
Suppose we have a multi-view classification task with n labeled instances represented from m different views: D = {(x_i^(1), x_i^(2), · · · , x_i^(m), y_i)}_{i=1}^n, where x_i^(v) ∈ R^{I_v}, I_v is the dimensionality of the v-th view, and y_i ∈ {−1, +1} is the class label of the i-th instance.
Representative methods for multi-view learning can
be categorized into three groups: co-training, multiple
kernel learning, and subspace learning [65]. Generally,
the co-training style algorithm is a classic approach
for semi-supervised learning, which trains in alternation to maximize the mutual agreement on different
views. Multiple kernel learning algorithms combine kernels that naturally correspond to different views, either
linearly [35] or nonlinearly [58, 14] to improve learning performance. Subspace learning algorithms learn a
latent subspace, from which multiple views are generated. Multiple kernel learning and subspace learning are
generalized as co-regularization style algorithms [53],
where the disagreement between the functions of different views is taken as a part of the objective function to
be minimized. Overall, by exploring the consistency and
complementary properties of different views, multi-view
learning is more effective than single-view learning.
In the multi-view setting for brain disorders, or for
medical studies in general, a critical problem is that
there may be limited subjects available (i.e., a small n)
yet introducing a large number of measurements (i.e., a large Σ_{v=1}^m I_v). Within the multi-view data, not all
features in different views are relevant to the learning
task, and some irrelevant features may introduce unexpected noise. The irrelevant information can even be
exaggerated after view combinations thereby degrading performance. Therefore, it is necessary to take care
of feature selection in the learning process. Feature selection results can also be used by researchers to find
biomarkers for brain diseases. Such biomarkers are clinically imperative for detecting injury to the brain in the
earliest stage before it is irreversible. Valid biomarkers
can be used to aid diagnosis, monitor disease progression and evaluate effects of intervention [32].
Conventional feature selection approaches can be divided into three main directions: filter, wrapper, and
embedded methods [20]. Filter methods compute a discrimination score of each feature independently of the
other features based on the correlation between the
feature and the label, e.g., information gain, Gini index, Relief [44, 47]. Wrapper methods measure the usefulness of feature subsets according to their predictive
power, optimizing the subsequent induction procedure
that uses the respective subset for classification [21, 45,
50, 37, 6]. Embedded methods perform feature selection
in the process of model training based on sparsity regularization [17, 16, 59, 60]. For example, Miranda et al.
add a regularization term that penalizes the size of the
selected feature subset to the standard cost function of
SVM, thereby optimizing the new objective function to
conduct feature selection [39]. Essentially, the feature selection process and the learning algorithm interact in embedded methods, which means the learning part and the feature selection part cannot be separated, while
wrapper methods utilize the learning algorithm as a
black box.
However, directly applying these feature selection
approaches to each separate view would fail to leverage multi-view correlations. By taking into account the
latent interactions among views and the redundancy
triggered by multiple views, it is desirable to combine
multi-view data in a principled manner and perform
feature selection to obtain consensus and discriminative low dimensional feature representations.
4.2 Modeling View Correlations
Recent years have witnessed many research efforts devoted to the integration of feature selection and multi-view learning. Tang et al. study multi-view feature selection in the unsupervised setting by constraining that
similar data instances from each view should have similar pseudo-class labels [54]. Considering brain disorder
identification, different neuroimaging features may capture different but complementary characteristics of the
data. For example, the voxel-based tensor features convey the global information, while the ROI-based Automated Anatomical Labeling (AAL) [57] features summarize the local information from multiple representative brain regions. Incorporating these data and additional non-imaging data sources can potentially improve the prediction. For Alzheimer’s disease (AD) classification, Ye et al. propose a kernel-based method for
integrating heterogeneous data, including tensor and
AAL features from MRI images, demographic information and genetic information [68]. The kernel framework
is further extended for selecting features (biomarkers)
from heterogeneous data sources that play more significant roles than others in AD diagnosis.
Huang et al. propose a sparse composite linear discriminant analysis model for identification of disease-related brain regions of AD from multiple data sources
[25]. Two sets of parameters are learned: one represents
the common information shared by all the data sources
about a feature, and the other represents the specific
information only captured by a particular data source
about the feature. Experiments are conducted on the
PET and MRI data which measure structural and functional aspects, respectively, of the same AD pathology.
However, the proposed approach requires the input as
the same set of variables from multiple data sources.
Xiang et al. investigate multi-source incomplete data
for AD and introduce a unified feature learning model
to handle block-wise missing data which achieves simultaneous feature-level and source-level selection [64].
For modeling view correlations, in general, a coefficient is assigned for each view, either at the view-level
or feature-level. For example, in multiple kernel learning, a kernel is constructed from each view and a set of
kernel coefficients are learned to obtain an optimal combined kernel matrix. These approaches, however, fail to
explicitly consider correlations between features.
4.3 Modeling Feature Correlations
One of the key issues for multi-view classification is to
choose an appropriate tool to model features and their
correlations hidden in multiple views, since this directly
determines how information will be used. In contrast
to modeling on views, another direction for modeling
multi-view data is to directly consider the correlations
between features from multiple views. Since taking the
tensor product of their respective feature spaces corresponds to the interaction of features from multiple
views, the concept of tensor serves as a backbone for
incorporating multi-view features into a consensus representation by means of tensor product, where the complex multiple relationships among views are embedded
Fig. 6 Schematic view of the key differences among three strategies of multi-view feature selection [6].
within the tensor structures. By mining structural information contained in the tensor, knowledge of multiview features can be extracted and used to establish a
predictive model.
Smalter et al. formulate the problem of feature selection in the tensor product space as an integer quadratic
programming problem [51]. However, this method is
computationally intractable on many views, since it directly selects features in the tensor product space resulting in the curse of dimensionality, as the method (b)
shown in Figure 6. Cao et al. propose to use a tensorbased approach to model features and their correlations
hidden in the original multi-view data [6]. The operation of tensor product can be used to bring m-view
feature vectors of each instance together, leading to
a tensorial representation for common structure across
multiple views, and allowing us to adequately diffuse relationships and encode information among multi-view
features. In this manner, the multi-view classification
task is essentially transformed from an independent domain of each view to a consensus domain as a tensor
classification problem.
By using X_i to denote ⊗_{v=1}^m x_i^(v), the dataset of labeled multi-view instances can be represented as D = {(X_i, y_i)}_{i=1}^n. Note that each multi-view instance X_i is an mth-order tensor that lies in the tensor product space R^{I_1×···×I_m}. Based on the definitions of inner product and tensor norm, multi-view classification can be
formulated as a global convex optimization problem in
the framework of supervised tensor learning [55]. This
model is named as multi-view SVM [6], and it can be
solved with the use of optimization techniques developed for SVM.
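A useful consequence of this construction is that the inner product in the tensor product space factorizes over views, ⟨⊗_v x^(v), ⊗_v z^(v)⟩ = ∏_v ⟨x^(v), z^(v)⟩, so a kernel on the tensorial representation can be evaluated without ever forming the I_1 × · · · × I_m tensor explicitly. The snippet below is a quick numerical check of this identity; the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
dims = (3, 4, 5)                               # I_1, I_2, I_3
xs = [rng.normal(size=d) for d in dims]        # views of instance x
zs = [rng.normal(size=d) for d in dims]        # views of instance z

def tensor_product(views):
    # outer product of the view vectors, built mode by mode
    t = views[0]
    for v in views[1:]:
        t = np.tensordot(t, v, axes=0)
    return t

# Frobenius inner product of the full tensors ...
lhs = float(np.sum(tensor_product(xs) * tensor_product(zs)))
# ... equals the product of per-view inner products
rhs = float(np.prod([x @ z for x, z in zip(xs, zs)]))
```

This factorization is what keeps the multi-view SVM tractable despite the exponentially large consensus space.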
Furthermore, a dual method for multi-view feature
selection is proposed in [6] that leverages the relationship between original multi-view features and reconstructed tensor product features to facilitate the implementation of feature selection, as method (c) in
Figure 6. It is a wrapper model which selects useful
features in conjunction with the classifier and simultaneously exploits the correlations among multiple views.
Following the idea of SVM-based recursive feature elimination [21], multi-view feature selection is consistently
formulated and implemented in the framework of multi-view SVM. This idea can be extended to include lower-order feature interactions and to employ a variety of loss functions for classification or regression [11].
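The elimination loop at the core of recursive feature elimination can be sketched as follows. Ridge-regression weights stand in for the SVM weight vector of [21], purely for self-containedness; the ranking-and-dropping logic is analogous.

```python
import numpy as np

def rfe(X, y, n_keep, lam=1e-2):
    """Recursive feature elimination: repeatedly fit a linear model
    (ridge regression here) and drop the feature with the smallest
    absolute weight, until n_keep features remain."""
    feats = list(range(X.shape[1]))
    while len(feats) > n_keep:
        Xs = X[:, feats]
        w = np.linalg.solve(Xs.T @ Xs + lam * np.eye(len(feats)), Xs.T @ y)
        feats.pop(int(np.argmin(np.abs(w))))
    return sorted(feats)

# toy check: only features 0 and 1 determine the response
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
y = X[:, 0] + X[:, 1]
kept = rfe(X, y, n_keep=2)
```

Refitting after every elimination matters: a feature's weight, and hence its rank, can change once correlated features are removed.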
5 Future Work
Fig. 7 A bioinformatics heterogeneous information network schema.
The human brain is one of the most complicated biological structures in the known universe. While it is
very challenging to understand how it works, especially
when disorders and diseases occur, dozens of leading
technology firms, academic institutions, scientists, and
other key contributors to the field of neuroscience have
devoted themselves to this area and made significant
improvements in various dimensions2 . Data mining on
brain disorder identification has become an emerging
area and a promising research direction.
This paper provides an overview of data mining approaches with applications to brain disorder identification which have attracted increasing attention in both
data mining and neuroscience communities in recent
years. A taxonomy is built based upon data representations, i.e., tensor imaging data, brain network data
and multi-view data, following which the relationships
between different data mining algorithms and different
neuroimaging applications are summarized. We briefly
present some potential topics of interest in the future.
Bridging heterogeneous data representations.
As introduced in this paper, we can usually derive data
from neuroimaging experiments in three representations,
including raw tensor imaging data, brain network data
and multi-view vector-based data. It is critical to study
how to train a model on a mixture of data representations, although it is very challenging to combine data
that are represented in tensor space, vector space and
graph space, respectively. There is a straightforward
idea of defining different kernels on different feature
spaces and combining them through multi-kernel algorithms. However, it is usually hard to interpret the results. The concept of side view has been introduced to
facilitate the process of mining brain networks, which
may also be used to guide supervised tensor learning.
It is even more interesting if we can learn on tensors
and graphs simultaneously.
2 http://www.whitehouse.gov/BRAIN
Integrating multiple neuroimaging modalities.
There are a variety of neuroimaging techniques available characterizing subjects from different perspectives
and providing complementary information. For example, DTI contains local microstructural characteristics
of water diffusion; structural MRI can be used to delineate brain atrophy; fMRI records BOLD response related to neural activity; PET measures metabolic patterns [63]. Based on such multimodality representation,
it is desirable to find useful patterns with rich semantics. For example, it is important to know which connectivity between brain regions is significant in the sense
of both structure and functionality. On the other hand,
by leveraging the complementary information embedded in the multimodality representation, better performance on disease diagnosis can be expected.
Mining bioinformatics information networks.
A bioinformatics network is a rich source of heterogeneous
information involving disease mechanisms, as shown in
Figure 7. The problems of gene-disease association and
drug-target binding prediction have been studied in the
setting of heterogeneous information networks [8, 31].
For example, in gene-disease association prediction, different gene sequences can lead to certain diseases. Researchers would like to predict the association relationships between genes and diseases. Understanding the
correlations between brain disorders and other diseases
and the causality between certain genes and brain diseases can be transformative for yielding new insights
concerning risk and protective relationships, for clarifying disease mechanisms, for aiding diagnostics and
clinical monitoring, for biomarker discovery, for identification of new treatment targets and for evaluating
effects of intervention.
References
1. O. Ajilore, L. Zhan, J. GadElkarim, A. Zhang, J. D.
Feusner, S. Yang, P. M. Thompson, A. Kumar, and
A. Leow. Constructing the resting state structural connectome. Frontiers in neuroinformatics, 7, 2013.
2. P. J. Basser and C. Pierpaoli. Microstructural and physiological features of tissues elucidated by quantitative-diffusion-tensor MRI. Journal of Magnetic Resonance, Series B, 111(3):209–219, 1996.
3. B. Biswal, F. Zerrin Yetkin, V. M. Haughton, and J. S.
Hyde. Functional connectivity in the motor cortex of
resting human brain using echo-planar MRI. Magnetic
resonance in medicine, 34(4):537–541, 1995.
4. C. Borgelt and M. R. Berthold. Mining molecular fragments: Finding relevant substructures of molecules. In
ICDM, pages 51–58. IEEE, 2002.
5. F. Camastra and A. Petrosino. Kernel methods for
graphs: A comprehensive approach. In Knowledge-Based
Intelligent Information and Engineering Systems, pages
662–669. Springer, 2008.
6. B. Cao, L. He, X. Kong, P. S. Yu, Z. Hao, and A. B.
Ragin. Tensor-based multi-view feature selection with
applications to brain diseases. In ICDM, pages 40–49.
IEEE, 2014.
7. B. Cao, X. Kong, C. Kettering, P. S. Yu, and A. B. Ragin.
Determinants of HIV-induced brain changes in three different periods of the early clinical course: A data mining
analysis. NeuroImage: Clinical, 2015.
8. B. Cao, X. Kong, and P. S. Yu. Collective prediction
of multiple types of links in heterogeneous information
networks. In ICDM, pages 50–59. IEEE, 2014.
9. B. Cao, X. Kong, J. Zhang, P. S. Yu, and A. B. Ragin.
Mining brain networks using multiple side views for neurological disorder identification.
10. B. Cao, L. Zhan, X. Kong, P. S. Yu, N. Vizueta, L. L.
Altshuler, and A. D. Leow. Identification of discriminative subgraph patterns in fMRI brain networks in bipolar affective disorder. In Brain Informatics and Health.
Springer, 2015.
11. B. Cao, H. Zhou, and P. S. Yu. Multi-view machines.
arXiv, 2015.
12. T. L. Chenevert, J. A. Brunberg, and J. Pipe. Anisotropic
diffusion in human white matter: demonstration with MR
techniques in vivo. Radiology, 177(2):401–405, 1990.
13. H. Cheng, D. Lo, Y. Zhou, X. Wang, and X. Yan. Identifying bug signatures using discriminative graph mining.
In ISSTA, pages 141–152. ACM, 2009.
14. C. Cortes, M. Mohri, and A. Rostamizadeh. Learning
non-linear combinations of kernels. In NIPS, pages 396–
404, 2009.
15. I. Davidson, S. Gilpin, O. Carmichael, and P. Walker.
Network discovery via constrained tensor analysis of
fMRI data. In KDD, pages 194–202. ACM, 2013.
16. Z. Fang and Z. M. Zhang. Discriminative feature selection
for multi-view cross-domain learning. In CIKM, pages
1321–1330. ACM, 2013.
17. Y. Feng, J. Xiao, Y. Zhuang, and X. Liu. Adaptive unsupervised multi-view feature selection for visual concept
recognition. In ACCV, pages 343–357, 2012.
18. T. Gärtner, P. Flach, and S. Wrobel. On graph kernels:
Hardness results and efficient alternatives. In Learning
Theory and Kernel Machines, pages 129–143. Springer,
2003.
19. C. R. Genovese, N. A. Lazar, and T. Nichols. Thresholding of statistical maps in functional neuroimaging using
the false discovery rate. Neuroimage, 15(4):870–878, 2002.
B. Cao et al.
20. I. Guyon and A. Elisseeff. An introduction to variable
and feature selection. The Journal of Machine Learning
Research, 3:1157–1182, 2003.
21. I. Guyon, J. Weston, S. Barnhill, and V. Vapnik. Gene selection for cancer classification using support vector machines. Machine learning, 46(1-3):389–422, 2002.
22. L. He, X. Kong, P. S. Yu, A. B. Ragin, Z. Hao, and
X. Yang. DuSK: A dual structure-preserving kernel for
supervised tensor learning with applications to neuroimages. In SDM. SIAM, 2014.
23. T. Horváth, T. Gärtner, and S. Wrobel. Cyclic pattern
kernels for predictive graph mining. In KDD, pages 158–
167. ACM, 2004.
24. J. Huan, W. Wang, and J. Prins. Efficient mining of
frequent subgraphs in the presence of isomorphism. In
ICDM, pages 549–552. IEEE, 2003.
25. S. Huang, J. Li, J. Ye, T. Wu, K. Chen, A. Fleisher,
and E. Reiman. Identifying Alzheimer’s disease-related
brain regions from multi-modality neuroimaging data using sparse composite linear discrimination analysis. In
NIPS, pages 1431–1439, 2011.
26. A. Inokuchi, T. Washio, and H. Motoda. An aprioribased algorithm for mining frequent substructures from
graph data. In Principles of Data Mining and Knowledge
Discovery, pages 13–23. Springer, 2000.
27. B. Jie, D. Zhang, W. Gao, Q. Wang, C. Wee, and D. Shen.
Integration of network topological and connectivity properties for neuroimaging classification. Biomedical Engineering, 61(2):576, 2014.
28. N. Jin, C. Young, and W. Wang. Graph classification
based on pattern co-occurrence. In CIKM, pages 573–
582. ACM, 2009.
29. N. Jin, C. Young, and W. Wang. GAIA: graph classification using evolutionary computation. In SIGMOD, pages
879–890. ACM, 2010.
30. H. Kashima, K. Tsuda, and A. Inokuchi. Marginalized
kernels between labeled graphs. In ICML, volume 3, pages
321–328, 2003.
31. X. Kong, B. Cao, and P. S. Yu. Multi-label classification
by mining label and instance correlations from heterogeneous information networks. In KDD, pages 614–622.
ACM, 2013.
32. X. Kong, A. B. Ragin, X. Wang, and P. S. Yu. Discriminative feature selection for uncertain graph classification.
In SDM, pages 82–93. SIAM, 2013.
33. X. Kong and P. S. Yu. Brain network analysis: a data
mining perspective. ACM SIGKDD Explorations Newsletter, 15(2):30–38, 2014.
34. M. Kuramochi and G. Karypis. Frequent subgraph discovery. In ICDM, pages 313–320. IEEE, 2001.
35. G. R. Lanckriet, N. Cristianini, P. Bartlett, L. E. Ghaoui,
and M. I. Jordan. Learning the kernel matrix with
semidefinite programming. The Journal of Machine Learning Research, 5:27–72, 2004.
36. D. Le Bihan, E. Breton, D. Lallemand, P. Grenier, E. Cabanis, and M. Laval-Jeantet. MR imaging of intravoxel incoherent motions: application to diffusion and perfusion
in neurologic disorders. Radiology, 161(2):401–407, 1986.
37. S. Maldonado and R. Weber. A wrapper method for feature selection using support vector machines. Information
Sciences, 179(13):2208–2217, 2009.
38. M. J. McKeown, S. Makeig, G. G. Brown, T.-P. Jung,
S. S. Kindermann, A. J. Bell, and T. J. Sejnowski. Analysis of fMRI data by blind separation into independent
spatial components. Human Brain Mapping, 6:160–188,
1998.
A review of heterogeneous data mining for brain disorders
39. J. Miranda, R. Montoya, and R. Weber. Linear penalization support vector machines for feature selection. In
Pattern Recognition and Machine Intelligence, pages 188–
192. Springer, 2005.
40. M. E. Moseley, Y. Cohen, J. Kucharczyk, J. Mintorovitch,
H. Asgari, M. Wendland, J. Tsuruda, and D. Norman.
Diffusion-weighted MR imaging of anisotropic water diffusion in cat central nervous system. Radiology, 176(2):439–
445, 1990.
41. S. Nijssen and J. N. Kok. A quickstart in frequent structure mining can make a difference. In KDD, pages 647–
652. ACM, 2004.
42. S. Ogawa, T. Lee, A. Kay, and D. Tank. Brain magnetic
resonance imaging with contrast dependent on blood oxygenation. Proceedings of the National Academy of Sciences,
87(24):9868–9872, 1990.
43. S. Ogawa, T.-M. Lee, A. S. Nayak, and P. Glynn.
Oxygenation-sensitive contrast in magnetic resonance
image of rodent brain at high magnetic fields. Magnetic
resonance in medicine, 14(1):68–78, 1990.
44. H. Peng, F. Long, and C. Ding. Feature selection based
on mutual information criteria of max-dependency, maxrelevance, and min-redundancy. Pattern Analysis and Machine Intelligence, 27(8):1226–1238, 2005.
45. A. Rakotomamonjy. Variable selection using SVM-based criteria. The Journal of Machine Learning Research,
3:1357–1370, 2003.
46. S. Ranu and A. K. Singh. Graphsig: A scalable approach
to mining significant subgraphs in large graph databases.
In ICDE, pages 844–855. IEEE, 2009.
47. M. Robnik-Šikonja and I. Kononenko. Theoretical and
empirical analysis of ReliefF and RReliefF. Machine learning,
53(1-2):23–69, 2003.
48. M. Rubinov and O. Sporns. Complex network measures
of brain connectivity: uses and interpretations. Neuroimage, 52(3):1059–1069, 2010.
49. N. Shervashidze, P. Schweitzer, E. J. Van Leeuwen,
K. Mehlhorn, and K. M. Borgwardt. Weisfeiler-Lehman
graph kernels. The Journal of Machine Learning Research,
12:2539–2561, 2011.
50. M.-D. Shieh and C.-C. Yang. Multiclass SVM-RFE for
product form feature selection. Expert Systems with Applications, 35(1):531–541, 2008.
51. A. Smalter, J. Huan, and G. Lushington. Feature selection in the tensor product feature space. In ICDM, pages
1004–1009, 2009.
52. O. Sporns, G. Tononi, and R. Kötter. The human connectome: a structural description of the human brain. PLoS
computational biology, 1(4):e42, 2005.
53. S. Sun. A survey of multi-view machine learning. Neural
Computing and Applications, 23(7-8):2031–2038, 2013.
54. J. Tang, X. Hu, H. Gao, and H. Liu. Unsupervised feature
selection for multi-view data in social media. In SDM,
pages 270–278. SIAM, 2013.
55. D. Tao, X. Li, X. Wu, W. Hu, and S. J. Maybank. Supervised tensor learning. Knowledge and Information Systems,
13(1):1–42, 2007.
56. M. Thoma, H. Cheng, A. Gretton, J. Han, H.-P. Kriegel,
A. J. Smola, L. Song, S. Y. Philip, X. Yan, and K. M.
Borgwardt. Near-optimal supervised feature selection
among frequent subgraphs. In SDM, pages 1076–1087.
SIAM, 2009.
57. N. Tzourio-Mazoyer, B. Landeau, D. Papathanassiou,
F. Crivello, O. Etard, N. Delcroix, B. Mazoyer, and
M. Joliot. Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. Neuroimage,
15(1):273–289, 2002.
58. M. Varma and B. R. Babu. More generality in efficient
multiple kernel learning. In ICML, pages 1065–1072,
2009.
59. H. Wang, F. Nie, and H. Huang. Multi-view clustering
and feature learning via structured sparsity. In ICML,
pages 352–360, 2013.
60. H. Wang, F. Nie, H. Huang, and C. Ding. Heterogeneous
visual features fusion via sparse multimodal machine. In
CVPR, pages 3097–3102, 2013.
61. C.-Y. Wee, P.-T. Yap, K. Denny, J. N. Browndyke,
G. G. Potter, K. A. Welsh-Bohmer, L. Wang, and
D. Shen. Resting-state multi-spectrum functional connectivity networks for identification of MCI patients. PloS
one, 7(5):e37828, 2012.
62. C.-Y. Wee, P.-T. Yap, W. Li, K. Denny, J. N. Browndyke,
G. G. Potter, K. A. Welsh-Bohmer, L. Wang, and
D. Shen. Enriched white matter connectivity networks
for accurate identification of MCI patients. Neuroimage,
54(3):1812–1822, 2011.
63. C.-Y. Wee, P.-T. Yap, D. Zhang, K. Denny, J. N.
Browndyke, G. G. Potter, K. A. Welsh-Bohmer, L. Wang,
and D. Shen. Identification of MCI individuals using structural and functional connectivity networks. Neuroimage,
59(3):2045–2056, 2012.
64. S. Xiang, L. Yuan, W. Fan, Y. Wang, P. M. Thompson,
and J. Ye. Multi-source learning with block-wise missing
data for Alzheimer’s disease prediction. In KDD, pages
185–193. ACM, 2013.
65. C. Xu, D. Tao, and C. Xu. A survey on multi-view learning. arXiv, 2013.
66. X. Yan, H. Cheng, J. Han, and P. S. Yu. Mining significant graph patterns by leap search. In SIGMOD, pages
433–444. ACM, 2008.
67. X. Yan and J. Han. gSpan: Graph-based substructure
pattern mining. In ICDM, pages 721–724. IEEE, 2002.
68. J. Ye, K. Chen, T. Wu, J. Li, Z. Zhao, R. Patel, M. Bae,
R. Janardan, H. Liu, G. Alexander, et al. Heterogeneous
data fusion for Alzheimer’s disease study. In KDD, pages
1025–1033. ACM, 2008.
69. H. Zhou, L. Li, and H. Zhu. Tensor regression with applications in neuroimaging data analysis. Journal of the
American Statistical Association, 108(502):540–552, 2013.
70. Y. Zhu, J. X. Yu, H. Cheng, and L. Qin. Graph classification: a diversified discriminative feature selection approach. In CIKM, pages 205–214. ACM, 2012.
arXiv:1708.09527v1 [] 31 Aug 2017
APÉRY SETS OF SHIFTED NUMERICAL MONOIDS
CHRISTOPHER O’NEILL AND ROBERTO PELAYO
Abstract. A numerical monoid is an additive submonoid of the non-negative integers. Given a numerical monoid S, consider the family of “shifted” monoids Mn
obtained by adding n to each generator of S. In this paper, we characterize the Apéry
set of Mn in terms of the Apéry set of the base monoid S when n is sufficiently large.
We give a highly efficient algorithm for computing the Apéry set of Mn in this case,
and prove that several numerical monoid invariants, such as the genus and Frobenius
number, are eventually quasipolynomial as a function of n.
1. Introduction
The factorization theory of numerical monoids – co-finite, additive submonoids of the
non-negative integers – has enjoyed much recent attention; in particular, invariants such
as the minimum factorization length, delta set, and ω-primality have been studied in
much detail [8]. These measures of non-unique factorization for individual elements in
a numerical monoid all exhibit a common feature: eventual quasipolynomial behavior.
In many cases, this eventual behavior is periodic (i.e. quasiconstant) or quasilinear,
and this pattern always holds after some initial “noise” for small monoid elements.
In this paper, we describe quasipolynomial behavior of certain numerical monoid
invariants over parameterized monoid families. Unlike previous papers, which studied
how factorization invariants change element-by-element (e.g., minimum factorization
length [1]), we investigate how a monoid’s properties change as the generators vary by
a shift parameter. More specifically, we study “shifted” numerical monoids of the form
Mn = hn, n + r1 , . . . , n + rk i
for r1 < · · · < rk , and find explicit relationships between the Frobenius number, genus,
type, and other properties of Mn and Mn+rk when n > rk2 . As with the previous
element-wise investigations of invariant values, our monoid-wise analysis reveals eventual quasipolynomial behavior, this time with respect to the shift parameter n.
The main result of this paper is Theorem 3.3, which characterizes the Apéry set of Mn
(Definition 2.2) for large n in terms of the Apéry set of the monoid S = hr1 , . . . , rk i
at the base of the shifted family. Apéry sets are non-minimal generating sets that
concisely encapsulate much of the underlying monoid structure, and many properties
of interest can be recovered directly and efficiently from the Apéry set, making it a
Date: September 1, 2017.
sort of “one stop shop” for computation. We utilize these connections in Section 4 to
derive relationships between properties of Mn and Mn+rk when n is sufficiently large.
One of the main consequences of our results pertains to computation. Under our
definition of Mn above, every numerical monoid is a member of some shifted family of
numerical monoids. While Apéry sets of numerical monoids (and many of the properties derived from them) are generally more difficult to compute when the minimal
generators are large, our results give a way to more efficiently perform these computations by instead computing them for the numerical monoid S, which has both smaller
and fewer generators than Mn . In fact, one surprising artifact of the algorithm described in Remark 3.5 is that, in a shifted family {Mn } of numerical monoids, the
computation of the Apéry set of Mn for n > rk2 is typically significantly faster than for
Mn with n ≤ rk2 , even though the former has larger generators. We discuss this and
further computational consequences in Remark 4.11, including implementation of our
algorithm in the popular GAP package numericalsgps [5].
2. Background
In this section, we recall several definitions and results used in this paper. For more
background on numerical monoids, we direct the reader to [9].
Definition 2.1. A numerical monoid M is an additive submonoid of Z≥0 . When
we write M = hn1 , . . . , nk i, we assume n1 < · · · < nk . We say M is primitive if
gcd(n1 , . . . , nk ) = 1. A factorization of an element a ∈ M is an expression
a = z1 n1 + · · · + zk nk
of a as a sum of generators of M, which we often represent with the integer tuple
~z = (z1 , . . . , zk ) ∈ Zk≥0 . The length of a factorization ~z of a is the number
|~z | = z1 + · · · + zk
of generators appearing in z. The set of factorizations of a is denoted ZM (a) ⊂ Zk≥0 ,
and the set of factorization lengths is denoted LM (a) ⊂ Z≥0 .
Definition 2.2. Let M be a numerical monoid. Define the Apéry set of x ∈ M as
Ap(M; x) = {m ∈ M : m − x ∈ Z \ M}
and the Apéry set of M as Ap(M) = Ap(M; n1 ), where n1 is the smallest nonzero
element of M. Note that under this definition, |Ap(M; x)| = x/ gcd(M).
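As a quick illustration of Definition 2.2, the following brute-force sketch computes Ap(S; x) by dynamic programming on monoid membership; the function names and the search bound are our choices, and S = h6, 9, 20i is the monoid used later in Table 1.

```python
def monoid_membership(gens, bound):
    # in_S[m] is True iff m is a non-negative integer combination of gens.
    in_S = [False] * (bound + 1)
    in_S[0] = True
    for m in range(1, bound + 1):
        in_S[m] = any(m >= g and in_S[m - g] for g in gens)
    return in_S

def apery_set(gens, x, bound=10000):
    # Ap(M; x) = {m in M : m - x not in M} (Definition 2.2); when
    # gcd(M) = 1 this yields one element per residue class modulo x.
    in_S = monoid_membership(gens, bound)
    return sorted(m for m in range(bound + 1)
                  if in_S[m] and (m < x or not in_S[m - x]))

print(apery_set([6, 9, 20], 6))  # -> [0, 9, 20, 29, 40, 49]
```

Each element listed is the smallest element of S in its residue class modulo 6, in agreement with |Ap(S; 6)| = 6.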
Theorem 2.3 appeared in [1] for primitive, minimally generated numerical monoids,
and in [3] for general numerical monoids. We state the latter version here.
Theorem 2.3 ([1, 3]). Suppose M = hn1 , . . . , nk i is a numerical monoid. The function
m : M → Z≥0 sending each a ∈ M to its shortest factorization length satisfies
m(a + nk ) = m(a) + 1
for all a > nk−1 nk .
Notation. Through the remainder of this paper, r1 < · · · < rk and n are non-negative
integers, g = gcd(r1 , . . . , rk ), and
S = hr1 , . . . , rk i
and
Mn = hn, n + r1 , . . . , n + rk i
denote additive submonoids of Z≥0 . Unless otherwise stated, we assume n > rk and
gcd(n, g) = 1 so Mn is primitive and minimally generated as written, but we do not
make any such assumptions on S. Note that choosing n as the first generator of Mn
ensures that every numerical monoid falls into exactly one shifted family.
3. Apéry sets of shifted numerical monoids
The main results in this section are Theorem 3.3, which expresses Ap(Mn ) in terms
of Ap(S; n) for n sufficiently large, and Proposition 3.4, which characterizes Ap(S; n)
for large n in terms of Z≥0 \ S. In addition to the numerous consequences in Section 4,
these two results yield an algorithm to compute Ap(Mn ) for large n; see Remark 3.5.
Lemma 3.1 ([3, Theorem 3.4]). Suppose n > rk2 . If ~a and ~b are factorizations of an
element m ∈ Mn with |~a| < |~b|, then b0 > 0.
Remark 3.2. Lemma 3.1 was also proven in [12] and subsequently improved in [7],
both with strictly higher bounds on the starting value of n. The latter source proved
that such numerical monoids are homogeneous, meaning every element of the Apéry
set has a unique factorization length. This property appears as part of Theorem 3.3
along with a characterization of the unique length in terms of S.
Theorem 3.3. If n ∈ S satisfies n > rk2 , then
Ap(Mn ; n) = {i + mS (i) · n | i ∈ Ap(S; gn)}
where g = gcd(S) and mS denotes min factorization length in S. Moreover, we have
LMn (i + mS (i) · n) = {mS (i)}
for each i ∈ Ap(S; gn).
Proof. Let A = {i + mS (i) · n | i ∈ Ap(S; gn)}. Each element of Ap(S; gn) is distinct
modulo n, since each element of {i/g | i ∈ Ap(S; gn)} is distinct modulo n and
gcd(n, g) = 1. As such, each element of A is distinct modulo n, since
i + mS (i) · n ≡ i mod n
for i ∈ Ap(S; gn). This implies |A| = n, so it suffices to show A ⊆ Ap(Mn ; n).
Fix i ∈ Ap(S; gn), and let a = i + mS (i) · n. If ~s ∈ ZS (i) has minimal length, then

a = i + mS (i) · n = Σ_{i=1}^{k} si ri + |~s| · n = Σ_{i=1}^{k} si (n + ri ),
meaning a ∈ S. In this way, each minimal length factorization ~s for i ∈ S corresponds
to a factorization of a ∈ Mn with first component zero. More generally, for each ℓ ≥ 0,
there is a natural bijection
{~z ∈ ZMn (a) : |~z| = ℓ} −→ {~s ∈ ZS (a − ℓn) : |~s| ≤ ℓ}
(z0 , z1 , . . . , zk ) 7−→ (z1 , . . . , zk )
between the factorizations of a ∈ Mn of length ℓ and the factorizations of a − ℓn ∈ S
of length at most ℓ, obtained by writing
a = z0 n + Σ_{i=1}^{k} zi (n + ri ) = ℓn + Σ_{i=1}^{k} zi ri
and subsequently solving for a − ℓn.
Now, since a − mS (i) · n = i ∈ Ap(S; gn), whenever ℓ > mS (i) we have
a − ℓn = (a − mS (i) · n) − (ℓ − mS (i))n = i − (ℓ − mS (i))n ∉ S,
so a has no factorizations in Mn of length ℓ. Moreover, a can’t have any factorizations
in Mn with length strictly less than mS (i) since Lemma 3.1 would force every factorization of length mS (i) to have nonzero first coordinate. Putting these together, we see
LMn (a) = {mS (i)}, meaning every factorization of a in Mn has first coordinate zero.
As such, we conclude a ∈ Ap(Mn ; n).
Proposition 3.4. If n ∈ S and n > F (S), then Ap(S; n) = {a0 , . . . , an−1 }, where

ai = gi if i ∈ S, and ai = gi + n if i ∉ S,

and g = gcd(S). In particular, this holds whenever n > rk2 as in Theorem 3.3.
Proof. Clearly ai ∈ Ap(S; n) for each i ≤ n − 1. Moreover, the values a0 , . . . , an−1 are
distinct modulo n, so the claimed equality holds.
Remark 3.5. Theorem 3.3 and Proposition 3.4 yield an algorithm to compute the
Apéry set of numerical monoids M = hn1 , . . . , nk i that are sufficiently shifted (that is,
if n1 > (nk − n1 )2 ) by first computing the Apéry set for S = hni − n1 : 2 ≤ i ≤ ki.
Table 1 compares the runtime of this algorithm to the one currently implemented in
the GAP package numericalsgps [5]. The strict decrease in runtime halfway down the
last column corresponds to values of n where n > rk2 = 202 = 400, so we may use
Proposition 3.4 to express Ap(S; gn) directly in terms of the gaps of S. This avoids
performing extra modular arithmetic computations for each Apéry set element, as is
normally required to compute Ap(S; gn) from Ap(S).
An implementation will appear in the next release of the numericalsgps package,
and will not require any special function calls. In particular, computing the Apéry set
of a monoid M that is sufficiently shifted will automatically use the improved Apéry
set algorithm, and resort to the existing algorithm in all other cases.
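For illustration, the algorithm of Remark 3.5 can be sketched in Python for the special case g = gcd(S) = 1: compute minimum factorization lengths in S by dynamic programming, read off Ap(S; n) from the gaps of S via Proposition 3.4, and apply Theorem 3.3. This is not the numericalsgps implementation; the function names and the sample value n = 401 (which lies in S = h6, 9, 20i and exceeds rk2 = 400) are our choices.

```python
from math import inf

def min_lengths(gens, bound):
    # mS(m): minimum factorization length of m in S = <gens>,
    # or inf if m is not in S, computed by dynamic programming.
    m = [inf] * (bound + 1)
    m[0] = 0
    for a in range(1, bound + 1):
        m[a] = min((m[a - g] + 1 for g in gens if a >= g), default=inf)
    return m

def apery_shifted(r, n):
    # Ap(Mn; n) for Mn = <n, n+r1, ..., n+rk>, assuming gcd(r1,...,rk) = 1,
    # n in S = <r1,...,rk>, and n > rk^2 (Theorem 3.3 and Proposition 3.4).
    mS = min_lengths(r, 2 * n)
    # Proposition 3.4 with g = 1: Ap(S; n) = {i if i in S else i + n : 0 <= i < n}.
    ap_S = [i if mS[i] < inf else i + n for i in range(n)]
    # Theorem 3.3: Ap(Mn; n) = {i + mS(i) * n : i in Ap(S; n)}.
    return sorted(i + mS[i] * n for i in ap_S)

ap = apery_shifted([6, 9, 20], 401)  # Mn = <401, 407, 410, 421>
print(len(ap), ap[0])                # -> 401 0
```

Only the small monoid S is ever searched exhaustively, which is the source of the speedup visible in the last column of Table 1.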
n       Mn                              GAP [5]   Remark 3.5
50      h50, 56, 59, 70i                1 ms      1 ms
200     h200, 206, 209, 220i            30 ms     30 ms
400     h400, 406, 409, 420i            170 ms    170 ms
1000    h1000, 1006, 1009, 1020i        3 sec     1 ms
5000    h5000, 5006, 5009, 5020i        17 min    1 ms
10000   h10000, 10006, 10009, 10020i    3.6 hr    1 ms

Table 1. Runtime comparison for computing Apéry sets of the numerical monoids Mn with S = h6, 9, 20i. All computations performed using GAP and the package numericalsgps [5].
4. Applications
As Apéry sets can be used to easily compute other numerical monoid invariants, the
results of Section 3 can be applied to provide quick computations of and structural
results for the Frobenius number, genus (Definition 4.2), and type (Definition 4.8) of
Mn for n sufficiently large. In particular, we show that each of these are eventually
quasipolynomial functions of n (Corollaries 4.4, 4.3, and 4.10, respectively).
Definition 4.1. A function f : Z → R is an r-quasipolynomial of degree d if
f (n) = a_d(n) n^d + · · · + a_1(n) n + a_0(n)
for periodic functions a0 , . . . , ad , whose periods all divide r, with ad not identically 0.
Definition 4.2. Suppose S is a numerical monoid with gcd(S) = 1. The genus of
S is the number g(S) = |Z≥0 \ S| of positive integers that lie outside of S. The
largest integer F (S) = max(Z≥0 \ S) outside of S is the Frobenius number of S. For a
non-primitive monoid T = gS with g ≥ 1, define g(T ) = g · g(S) and F (T ) = g · F (S).
Corollary 4.3. For n > rk2 , the function n 7→ g(Mn ) is rk -quasiquadratic in n.
Proof. By counting the elements of Z≥0 \ Mn modulo n, we can write
g(Mn ) = Σ_{a ∈ Ap(Mn ;n)} ⌊a/n⌋.

Applying Theorem 3.3 and Proposition 3.4, a simple calculation shows that

g(Mn ) = Σ_{i ∈ Ap(S;gn)} ⌊i/n⌋ + Σ_{i ∈ Ap(S;gn)} mS (i)
       = Σ_{i=0}^{n−1} ⌊gi/n⌋ + g · g(S) + Σ_{0≤i<n, gi∈S} mS (gi) + Σ_{i≥0, gi∉S} mS (gi + gn).
Each of the four terms in the above expression is eventually quasipolynomial in n.
Indeed, the first term is g-quasilinear in n, the second term is independent of n, and
Theorem 2.3 implies the third and fourth terms are eventually rk -quasiquadratic and
rk -quasilinear in n, respectively. This completes the proof.
Corollary 4.4 is known in more general contexts [10, 11]. The proof given here yields a
fast algorithm for computing the Frobenius number of Mn for n > rk2 ; see Remark 4.11.
Corollary 4.4. For n > rk2 , the function n 7→ F (Mn ) is rk -quasiquadratic in n.
Proof. Let a denote the element of Ap(S; gn) for which mS (−) is maximal. Theorem 3.3
and Proposition 3.4 imply
F (Mn ) = max(Ap(Mn ; n)) − n = a − n + mS (a) · n.
Theorem 2.3 and Proposition 3.4 together imply a + rk is the element of Ap(S; gn + rk )
for which mS (−) is maximal, and quasilinearity of mS (−) completes the proof.
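The two standard identities used in the proofs of Corollaries 4.3 and 4.4, namely g(M) = Σ_{a ∈ Ap(M;n)} ⌊a/n⌋ and F(M) = max(Ap(M; n)) − n, can be checked concretely on S = h6, 9, 20i, whose Apéry set Ap(S; 6) = {0, 9, 20, 29, 40, 49}. The function name below is ours.

```python
def genus_and_frobenius(apery, n):
    # With Ap = Ap(M; n) for the smallest nonzero element n of M:
    #   g(M) = sum of floor(a/n) over a in Ap,   F(M) = max(Ap) - n.
    return sum(a // n for a in apery), max(apery) - n

ap = [0, 9, 20, 29, 40, 49]          # Ap(S; 6) for S = <6, 9, 20>
print(genus_and_frobenius(ap, 6))    # -> (22, 43)
```

The output matches the well-known values g(S) = 22 and F (S) = 43 for this monoid.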
Remark 4.5. Wilf’s conjecture [13], a famously open problem in the numerical monoids
literature, states that for any primitive numerical monoid S = hr1 , . . . , rk i,
F (S) + 1 ≤ k(F (S) + 1 − g(S)).
To date, Wilf’s conjecture has only been proven in a handful of special cases, and
remains open in general. Corollary 4.7 adds monoids of the form Mn for n > rk2 to the
list of special cases in which Wilf’s conjecture is known to hold.
Definition 4.6 ([4]). The Wilf number W (S) of a numerical monoid S with embedding
dimension k is given by
W (S) = k(F (S) + 1 − g(S)) − (F (S) + 1).
Corollary 4.7. For n > rk2 , the function n 7→ W (Mn ) is rk -quasiquadratic in n.
In particular, Mn satisfies Wilf ’s conjecture for n > rk2 .
Proof. Apply Corollaries 4.3 and 4.4, and note that the quadratic coefficients of the
maps n 7→ F (Mn ) and n 7→ g(Mn ) are constants g/rk and g/2rk , respectively.
We next examine the pseudo-Frobenius numbers of Mn .
Definition 4.8. An integer m ≥ 0 is a pseudo-Frobenius number of a numerical monoid
S if m ∉ S but m + n ∈ S for all positive n ∈ S. Denote by P F (S) the set of pseudo-Frobenius numbers of S, and by t(S) = |P F (S)| the type of S.
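A brute-force sketch of Definition 4.8 follows; the membership bound is an illustrative assumption, large enough for this example. Since S is generated by the listed elements, m ∈ P F (S) if and only if m ∉ S and m + g ∈ S for each generator g.

```python
def pseudo_frobenius(gens, bound=2000):
    # PF(S) = {m >= 0 : m not in S, but m + g in S for every generator g};
    # this is equivalent to Definition 4.8 because S is generated by gens.
    in_S = [False] * (bound + 1)
    in_S[0] = True
    for m in range(1, bound + 1):
        in_S[m] = any(m >= g and in_S[m - g] for g in gens)
    frob = max(m for m in range(bound + 1) if not in_S[m])
    return [m for m in range(frob + 1)
            if not in_S[m] and all(in_S[m + g] for g in gens)]

pf = pseudo_frobenius([6, 9, 20])
print(pf, len(pf))  # -> [43] 1
```

For S = h6, 9, 20i the only pseudo-Frobenius number is the Frobenius number 43, so t(S) = 1 and S is symmetric.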
Theorem 4.9. Given n ∈ Z≥0 , let Pn denote the set
Pn = {i ∈ Ap(S; gn) : a ≡ i mod n for some a ∈ P F (Mn )}.
For n > rk2 , the map Pn → Pn+rk given by

i 7→ i if i ≤ gn, and i 7→ i + rk if i > gn,
is a bijection. In particular, there is a bijection P F (Mn ) → P F (Mn+rk ).
Proof. Fix i ∈ Ap(S; gn) and write a = i + mS (i)n ∈ Ap(Mn ; n). First, if i ≤ gn,
then i ∈ Ap(S; gn + grk ) by Proposition 3.4, so by Theorem 3.3,
a′ = i + mS (i)(n + rk ) = a + mS (i)rk ∈ Ap(Mn+rk ; n + rk ).
Notice that if a + rj ∈ Mn , then the bijection in the proof of Theorem 3.3 implies a + rj
has a factorization of length mS (i) in Mn . As such, for each j we have a′ + rj ∈ Mn+rk
if and only if a + rj ∈ Mn . On the other hand, if i > gn, then i + rk ∈ Ap(S; gn + grk )
by Proposition 3.4, so by Theorem 3.3,
a′ = i + rk + mS (i + rk )(n + rk ) = a + (n + rk ) + (mS (i) + 1)rk ∈ Ap(Mn+rk ; n + rk ).
Once again, a′ + rj ∈ Mn+rk if and only if a + rj ∈ Mn for each j, thus proving the
first claim. The second claim is obtained using the composition
P F (Mn ) −→ Pn −→ Pn+rk −→ P F (Mn+rk )
of the above map with two bijections obtained via reduction modulo n.
Corollary 4.10. The function n 7→ t(Mn ) is eventually rk -periodic. In particular,
Mn is (pseudo)symmetric if and only if Mn+rk is (pseudo)symmetric.
Proof. Apply Theorem 4.9 and [9, Corollaries 3.11 and 3.16].
Remark 4.11. Each of the quantities and properties discussed in this section are
usually computed for a general numerical monoid M by first computing an Apéry set.
Indeed, computing each of these quantities for the monoids in Table 1 takes only
slightly longer than the corresponding Apéry set runtime. As such, computing these
values using the algorithm discussed in Remark 3.5 is also significantly faster for n > rk2 .
References
[1] T. Barron, C. O’Neill, and R. Pelayo, On the set of elasticities in numerical monoids,
Semigroup Forum 94 (2017), no. 1, 37–50.
[2] T. Barron, C. O’Neill, and R. Pelayo, On dynamic algorithms for factorization invariants in numerical monoids, Mathematics of Computation 86 (2017), 2429–2447.
[3] R. Conaway, F. Gotti, J. Horton, C. O’Neill, R. Pelayo, M. Williams, and B. Wissman, Minimal presentations of shifted numerical monoids, preprint. Available at
arXiv:1701.08555.
[4] M. Delgado, On a question of Eliahou and a conjecture of Wilf, to appear, Mathematische
Zeitschrift. Available at arXiv:math.CO/1608.01353
[5] M. Delgado, P. García-Sánchez, and J. Morais, NumericalSgps, a package for numerical semigroups, Version 0.980 dev (2013), (GAP package), http://www.fc.up.pt/cmup/mdelgado/numericalsgps/.
[6] J. Garcı́a-Garcı́a, M. Moreno-Frı́as, and A. Vigneron-Tenorio, Computation of Delta sets
of numerical monoids, Monatshefte für Mathematik 178 (3) 457–472.
[7] R. Jafari and Z. Armengou, Homogeneous numerical semigroups, preprint. Available at
arXiv:1603.01078.
[8] C. O’Neill and R. Pelayo, Factorization invariants in numerical monoids, Contemporary
Mathematics 685 (2017), 231–349.
[9] J. Rosales and P. Garcı́a-Sánchez, Numerical semigroups, Developments in Mathematics,
Vol. 20, Springer-Verlag, New York, 2009.
[10] B. Roune and K. Woods, The parametric Frobenius problem, Electron. J. Combin. 22
(2015), no. 2, Research Paper #P2.36.
[11] B. Shen, The parametric Frobenius problem and parametric exclusion, preprint. Available
at arXiv:math.CO/1510.01349.
[12] T. Vu, Periodicity of Betti numbers of monomial curves, Journal of Algebra 418 (2014)
66–90.
[13] H. Wilf, A circle-of-lights algorithm for the “money-changing problem”,
Amer. Math. Monthly 85 (1978), no. 7, 562–565.
Mathematics Department, University of California Davis, Davis, CA 95616
E-mail address: [email protected]
Mathematics Department, University of Hawai‘i at Hilo, Hilo, HI 96720
E-mail address: [email protected]
BOUNDARY CONVEX COCOMPACTNESS AND STABILITY OF
SUBGROUPS OF FINITELY GENERATED GROUPS
arXiv:1607.08899v1 [] 29 Jul 2016
MATTHEW CORDES AND MATTHEW GENTRY DURHAM
Abstract. A Kleinian group Γ < Isom(H3 ) is called convex cocompact if any orbit of Γ in H3 is
quasiconvex or, equivalently, Γ acts cocompactly on the convex hull of its limit set in ∂H3 .
Subgroup stability is a strong quasiconvexity condition in finitely generated groups which is intrinsic to the geometry of the ambient group and generalizes the classical quasiconvexity condition
above. Importantly, it coincides with quasiconvexity in hyperbolic groups and convex cocompactness in mapping class groups.
Using the Morse boundary, we develop an equivalent characterization of subgroup stability which
generalizes the above boundary characterization from Kleinian groups.
1. Introduction
There has been much recent interest in generalizing salient features of Gromov hyperbolic spaces
to more general contexts, including their boundaries and convexity properties of nicely embedded
subspaces. Among these are the Morse property, its generalization to subspaces, stability, and the
Morse boundary. In this article, we use the Morse boundary to prove that stability for subgroups of
finitely generated groups is naturally a convex cocompactness condition in the classical boundary
sense. We begin with some motivation from Kleinian groups and mapping class groups.
A nonelementary discrete (Kleinian) subgroup Γ < PSL2 (C) determines a minimal Γ-invariant
closed subspace Λ(Γ) of the Riemann sphere called its limit set and taking the convex hull of Λ(Γ)
determines a convex subspace of H3 with a Γ-action. A Kleinian group Γ is called convex cocompact
if it acts cocompactly on this convex hull or, equivalently, any Γ-orbit in H3 is quasiconvex. Another
equivalent characterization is that such a Γ has a compact Kleinian manifold; see [Mar74, Sul85].
Originally defined by Farb-Mosher [FM02] and later developed further by Kent-Leininger [KL08]
and Hamenstädt [Ham05], a subgroup H < Mod(S) is called convex cocompact if and only if any
H-orbit in T (S), the Teichmüller space of S with the Teichmüller metric, is quasiconvex, or H acts
cocompactly on the weak hull of its limit set Λ(H) ⊂ PML(S) in the Thurston compactification of
T (S). This notion is important because convex cocompact subgroups H < Mod(S) are precisely
those which determine Gromov hyperbolic surface group extensions.
In both of these examples, convex cocompactness is characterized equivalently by both a quasiconvexity condition and an asymptotic boundary condition. In [DT15b], Taylor and the second
author introduced stability in order to characterize convex cocompactness in Mod(S) by a quasiconvexity condition intrinsic to the geometry of Mod(S). In fact, stability naturally generalizes the
above quasiconvexity characterizations of convex cocompactness to any finitely generated group.
In this article, we use the Morse boundary to define an asymptotic property for subgroups
of finitely generated groups called boundary convex cocompactness which generalizes the classical
boundary characterization of convex cocompactness from Kleinian groups. Our main theorem is:
Theorem 1.1. Let G be a finitely generated group. Then H < G is boundary convex cocompact if
and only if H is stable in G.
Date: August 1, 2016.
Before moving on to the definitions, we discuss the situation in hyperbolic groups.
Let H be a quasiconvex subgroup of a hyperbolic group G. Then H has a limit set Λ(H) ⊂ ∂Gr G,
the Gromov boundary of G, and one can define the weak hull of Λ(H) to be the union of all geodesics
in G connecting distinct points in Λ(H). Swenson [Swe01] proved that H acts cocompactly on this
weak hull if and only H is quasiconvex in G. Hence a quasiconvex subgroup of a hyperbolic group
satisfies a boundary characterization of convex cocompactness intrinsic to the ambient geometry.
Stability generalizes quasiconvexity to any finitely generated group, and similarly the Morse
boundary generalizes the Gromov boundary. Thus Theorem 1.1 is a generalization of the hyperbolic
case to any finitely generated group.
Stability and the Morse boundary. We start with the definition of a Morse quasigeodesic:
Definition 1.2 (Morse quasigeodesic). A quasigeodesic γ in a geodesic metric space X is called N-Morse if there exists a function N : R≥1 × R≥0 → R≥0 such that if q is any (K, C)-quasigeodesic with endpoints on γ, then q ⊂ NN(K,C)(γ), the N(K, C)-neighborhood of γ.
We call N the Morse gauge of γ.
Note that if X is hyperbolic, then every geodesic in X is Morse with a uniform gauge.
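Conversely, in a flat space no geodesic is Morse. The following elementary computation, included here only for illustration and not part of this paper's arguments, makes this precise:

```latex
\begin{example}
In $X=\mathbb{R}^2$, no geodesic is Morse. Let $\gamma$ be the $x$-axis and,
for $n\ge 1$, let $q_n$ be the concatenation of the straight segments from
$(-n,0)$ to $(0,n)$ and from $(0,n)$ to $(n,0)$, parameterized by arclength.
For points $p,q$ on different segments at arclengths $s,t$ from the corner,
\[
d(p,q)^2 \;=\; s^2+t^2 \;\ge\; \tfrac{1}{2}(s+t)^2,
\]
so the arclength of $q_n$ between $p$ and $q$ is at most $\sqrt{2}\,d(p,q)$:
each $q_n$ is a $(\sqrt{2},0)$-quasigeodesic with endpoints on $\gamma$. But
its midpoint satisfies $d\bigl((0,n),\gamma\bigr)=n\to\infty$, so no single
constant $N(\sqrt{2},0)$ can bound these excursions, and $\gamma$ is not
$N$-Morse for any gauge $N$.
\end{example}
```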
If G is a finitely generated group, then we call g ∈ G a Morse element if its orbit in any Cayley
graph of G is a Morse quasigeodesic. Some examples of Morse elements include rank-1 elements of
CAT(0) groups [BC12], pseudo-Anosov elements of mapping class groups [Beh06], fully irreducible
elements of the outer automorphism groups of free groups [AK11], and rank-one automorphisms of
hierarchically hyperbolic spaces [DHS16].
We can now give a formal definition of stability:
Definition 1.3 (Stability). If f : X → Y is a quasiisometric embedding between geodesic metric
spaces, we say X is a stable subspace of Y if there exists a Morse gauge N such that every pair of
points in X can be connected by an N -Morse quasigeodesic in Y ; we call f a stable embedding.
If H < G are finitely generated groups, we say H is stable in G if the inclusion map i : H ,→ G
is a stable embedding.
We note that stable subgroups are always hyperbolic and quasiconvex regardless of the chosen
word metric on G—stability and quasiconvexity coincide in hyperbolic spaces—and stability is
invariant under quasiisometric embeddings [DT15b].
Introduced by the first author in [Cor15] for proper geodesic spaces and later refined and generalized to geodesic spaces in [CH16], the Morse boundary of a geodesic metric space X, denoted ∂M Xe ,
consists, roughly speaking, of asymptotic classes of sequences of points which can be connected to
a fixed basepoint e ∈ X by Morse geodesic rays; see Subsection 2.2 for the formal definition. Importantly, it is a visual boundary, generalizes the Gromov boundary when X is hyperbolic, and
quasiisometries induce homeomorphisms at the level of Morse boundaries.
Boundary convex cocompactness. Let G be a finitely generated group acting by isometries on
a proper geodesic metric space X. Fix a basepoint e ∈ X. One can define a limit set Λe (G) ⊂ ∂M Xe
as the set of points which can be represented by sequences of G-orbit points; note that Λe (G) is
obviously G-invariant. One then defines the weak hull He (G) of Λe (G) in X by taking all geodesics
with distinct endpoints in Λe (G). See Section 3 for the precise definitions.
Definition 1.4 (Boundary convex cocompactness). We say that G acts boundary convex cocompactly on X if the following conditions hold:
(1) G acts properly on X;
(2) For some (any) e ∈ X, Λe (G) is nonempty and compact;
(3) For some (any) e ∈ X, the action of G on He (G) is cocompact.
Definition 1.5 (Boundary convex cocompactness for subgroups). Let G be a finitely generated
group. We say H < G is boundary convex cocompact if H acts boundary convex cocompactly on
any Cayley graph of G with respect to a finite generating set.
Theorem 1.1 is an immediate consequence of the following stronger statement:
Theorem 1.6. Let G be a finitely generated group acting by isometries on a proper geodesic metric
space X. Then the action of G is boundary convex cocompact if and only if some (any) orbit of G
in X is a stable embedding.
In either case, G is hyperbolic and any orbit map orbe : G → X extends continuously and G-equivariantly to an embedding of ∂Gr G which is a homeomorphism onto its image Λe (G) ⊂ ∂M Xe .
We note that Theorem 1.1 and [DT15b, Proposition 3.2] imply that boundary convex cocompactness is invariant under quasiisometric embeddings.
Remark 1.7 (On the necessity of the conditions in Definition 1.4). The compactness assumption
on Λe (G) is essential: Consider the group G = Z^2 ∗ Z ∗ Z = ⟨a, b⟩ ∗ ⟨c⟩ ∗ ⟨d⟩ acting on its Cayley
graph with the subgroup H = ⟨a, b, c⟩. Since H is isometrically embedded and convex in G, it
follows that ∂M He ≅ Λe (H) ⊂ ∂M Ge and He (H) = H for any e ∈ G, whereas H is not hyperbolic
and thus not stable in G.
While compactness of Λe (G) does imply that He (G) is stable in X (Proposition 4.2), it is unclear
how to leverage this fact into proving properness or cocompactness of the G-action, even in the
presence of one or the other.
Remark 1.8 (Stability versus boundary convex cocompactness). We expect that stability will be
a much easier condition to check in practice than boundary convex cocompactness. Our purpose is to
prove that stability generalizes multiple classical notions of convex cocompactness and is thus the
correct generalization of convex cocompactness to finitely generated groups.
Remark 1.9 (Conical limit points). In addition to the above-discussed notions for Kleinian groups,
there is a characterization of convex cocompactness in terms of limit points, namely that every limit
point is conical. Kent-Leininger developed a similar characterization for subgroups of Mod(S)
acting on T (S)∪PML(S). We believe that there is a similar conicality characterization for subgroup
stability with respect to limit sets in the Morse boundary. However, we have chosen not to pursue
this here in the interest of brevity. Moreover, it is unclear how useful such a characterization would be.
Stability and boundary convex cocompactness in important examples. Since stability
is a strong property, it is interesting to characterize and produce stable subgroups of important
groups. There has been much recent work to do so, which we will now briefly overview.
(1) For a relatively hyperbolic group (G, P), Aougab, Taylor, and the second author [ADT16]
prove that if H < G is finitely generated and quasiisometrically embeds in the associated
coned space [Far98], then H is stable in G. Moreover, if we further assume that the
subgroups in P are one-ended and have linear divergence, then quasiisometrically embedding
in the coned space is equivalent to stability.
(2) For Mod(S), that subgroup stability and convex cocompactness in the sense of [FM02] are
equivalent was proven by Taylor and the second author [DT15b].
(3) For the right-angled Artin group A(Γ) of a finite simplicial graph Γ which is not a join,
Koberda-Mangahas-Taylor [KMT14] proved that stability for H < A(Γ) is equivalent to
H being finitely generated and purely loxodromic, i.e. each element acts loxodromically
on the associated extension graph Γe , a curve graph analogue. They also prove that stable
subgroups satisfy a strictly weaker condition called combinatorial quasiconvexity, sometimes
called convex cocompactness [Hag08]; we note that, unlike stable subgroups, combinatorially
quasiconvex subgroups need not be hyperbolic.
(4) For Out(F), the outer automorphism group of a free group on at least three letters, work
in [ADT16] proves that a subgroup H < Out(F) which has a quasiisometrically embedded
orbit in the free factor graph F is stable in Out(F). This is related to and builds on
others’ work as follows: Hamenstädt-Hensel [HH14] proved that a subgroup H < Out(F)
quasiisometrically embedding in the free factor graph is equivalent to a certain convex
cocompactness condition in the projectivization of the Culler-Vogtmann outer space and
its boundary which is analogous to the Kent-Leininger condition on T (S) ∪ PML(S).
Following [HH14], we shall call such subgroups convex cocompact. By work of Dowdall-Taylor [DT14, DT15a], quasiisometrically embedding in the free factor graph implies a
stability-like property for orbit maps in outer space; we note that when such a group is also
fully atoroidal, they prove the corresponding free group extension is hyperbolic. In [ADT16],
the authors prove this stability property pulls back to genuine stability in Out(F).
We summarize these results in the following theorem:
Theorem 1.10 ([DT15b, KMT14, ADT16]). Suppose that the pair H < G satisfies one of the
following conditions:
(1) H is a quasiconvex subgroup of a hyperbolic group G;
(2) G is relatively hyperbolic and H is finitely generated and quasiisometrically embeds in the
coned space associated to G in the sense of [Far98];
(3) G = A(Γ) for Γ a finite simplicial graph which is not a join and H is finitely generated and
H quasiisometrically embeds in the extension graph Γe [KMT14];
(4) G = Mod(S) and H is a convex cocompact subgroup in the sense of [FM02];
(5) G = Out(F) and H is a convex cocompact subgroup in the sense of [HH14].
Then H is stable in G. Moreover, for (1), (3), and (4), the reverse implication also holds.
As a corollary of Theorem 1.1, we have:
Corollary 1.11. Suppose H < G are as any of (1)–(5) in Theorem 1.10. Then H is boundary
convex cocompact.
Item (1) in Corollary 1.11 is originally due to Swenson [Swe01]. In examples (2)–(5), each of
the previously established boundary characterizations of convex cocompactness was in terms of
an external space. Corollary 1.11 provides a boundary characterization of convex cocompactness
which is intrinsic to the geometry of the ambient group.
We note that Hamenstädt has announced that there are stable subgroups of Out(F) which are
not convex cocompact in the sense of [HH14], but such subgroups would be convex cocompact both
in the quasiconvex and boundary senses.
Finally, we see that boundary convex cocompactness is generic in these main example groups.
For any such group G and probability measure µ thereon, consider k ≥ 2 independent random
walks (wn1 )n∈N , . . . , (wnk )n∈N whose increments are distributed according to µ. For each n, let
Γ(n) = hwn1 , . . . , wnk i ≤ G. Following Taylor-Tiozzo [TT16], we say a random subgroup of G has a
property P if
P[Γ(n) has P ] → 1.
In the following, (1) is a consequence of [TT16], (3) of [KMT14], and (2), (4)–(5) of [ADT16]:
Theorem 1.12 ([TT16, KMT14, ADT16]). Suppose that G is any of the groups in (1)–(5) of
Theorem 1.10. Then a k-generated random subgroup of G is stable.
Hence, Theorem 1.1 gives us:
Corollary 1.13. Suppose G is any of the groups in (1)–(5) of Theorem 1.10. Then a k-generated
random subgroup of G is boundary convex cocompact.
Acknowledgements. We would like to thank the organizers of the “Geometry of Groups in Montevideo” conference, where part of this work was accomplished. The first and second authors were
partially supported by NSF grants DMS-1106726 and DMS-1045119, respectively. We would like
to thank Ursula Hamenstädt and Samuel Taylor for interesting conversations, and also the latter
for useful comments on an earlier draft of this paper.
2. Background
We assume the reader is familiar with basics of δ-hyperbolic spaces and their Gromov boundaries.
In this subsection we recall some of the basic definitions. For more information, see [BH99, III.H].
2.1. Gromov boundaries.
Definition 2.1. Let X be a metric space and let x, y, z ∈ X. The Gromov product of x and y with
respect to z is defined as

(x · y)z = (1/2)(d(z, x) + d(z, y) − d(x, y)).
Let (xn ) be a sequence in X. We say (xn ) converges at infinity if (xn · xm )e → ∞ as n, m → ∞. Two
convergent sequences (xn ), (ym ) are said to be equivalent if (xn · ym )e → ∞ as n, m → ∞. We denote
the equivalence class of (xn ) by lim xn .
The sequential boundary of X, denoted ∂X, is defined to be the set of convergent sequences
considered up to equivalence.
Definition 2.2. [BH99, Definition 1.20] Let X be a (not necessarily geodesic) metric space. We
say X is δ–hyperbolic if for all w, x, y, z we have
(x · y)w ≥ min {(x · z)w , (z · y)w } − δ.
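As a quick illustration of Definitions 2.1 and 2.2, here is the standard elementary fact (not part of this paper's arguments) that in a simplicial tree the Gromov product is the distance to a geodesic, and the four-point condition holds with δ = 0:

```latex
\begin{example}
Let $T$ be a simplicial tree, let $x,y,w\in T$, and let $m$ be the median of
$x,y,w$, i.e.\ the unique point with $m\in[x,y]\cap[y,w]\cap[w,x]$. Then
$d(w,x)=d(w,m)+d(m,x)$, $d(w,y)=d(w,m)+d(m,y)$, and $d(x,y)=d(x,m)+d(m,y)$, so
\[
(x\cdot y)_w=\tfrac{1}{2}\bigl(d(w,x)+d(w,y)-d(x,y)\bigr)=d(w,m)=d\bigl(w,[x,y]\bigr).
\]
For any fourth point $z$, the geodesic $[x,y]$ is contained in $[x,z]\cup[z,y]$,
hence
\[
(x\cdot y)_w=d(w,[x,y])\;\ge\;\min\bigl\{d(w,[x,z]),\,d(w,[z,y])\bigr\}
=\min\bigl\{(x\cdot z)_w,(z\cdot y)_w\bigr\},
\]
so $T$ is $0$-hyperbolic in the sense of Definition 2.2.
\end{example}
```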
If X is δ–hyperbolic, we may extend the Gromov product to ∂X in the following way:

(x · y)e = sup lim inf_{m,n→∞} (xn · ym )e ,

where x, y ∈ ∂X and the supremum is taken over all sequences (xi ) and (yj ) in X such that
x = lim xi and y = lim yj .
2.2. (Metric) Morse boundary. Let X be a geodesic metric space, let e ∈ X and let N be a
Morse gauge. We define Xe(N) to be the set of all y ∈ X such that there exists an N–Morse geodesic
[e, y] in X.

Proposition 2.3 (Xe(N) are hyperbolic; Proposition 3.2 [CH16]). Xe(N) is 8N(3, 0)–hyperbolic in
the sense of Definition 2.2.

As each Xe(N) is hyperbolic we may consider its Gromov boundary, ∂Xe(N), and the associated
visual metric d(N). We call the collection of boundaries (∂Xe(N), d(N)) the metric Morse boundary
of X.

Instead of focusing on sequences which live in some Xe(N), we now consider the set of all Morse
geodesic rays in X (with basepoint p) up to asymptotic equivalence. We call this collection the
Morse boundary of X, and denote it by ∂M X.
To topologize the boundary, first fix a Morse gauge N and consider the subset of the Morse
boundary that consists of all rays in X with Morse gauge at most N:

∂M^N Xp = {[α] | ∃β ∈ [α] that is an N–Morse geodesic ray with β(0) = p}.

We topologize this set with the compact-open topology. This topology is equivalent to one defined
by a system of neighborhoods, {Vn (α) | n ∈ N}, at a point α in ∂M^N Xp, which are defined as follows:
the set Vn (α) is the set of geodesic rays γ with basepoint p and d(α(t), γ(t)) < δN for all t < n,
where δN is a constant that depends only on N.
Let M be the set of all Morse gauges. We put a partial ordering on M so that for two Morse
gauges N, N′ ∈ M, we say N ≤ N′ if and only if N(λ, ε) ≤ N′(λ, ε) for all λ, ε. We define the
Morse boundary of X to be

∂M Xp = lim−→_M ∂M^N Xp

with the induced direct limit topology, i.e., a set U is open in ∂M Xp if and only if U ∩ ∂M^N Xp is
open for all N.
In [CH16] the first author and Hume show that if X is a proper geodesic metric space, then there
is a natural homeomorphism between ∂Xe(N) and ∂M^N Xe.
e
2.3. Useful facts. In this subsection, we will collect a number of basic facts and definitions.
The following lemma states that a quasigeodesic with endpoints on a Morse geodesic stays
Hausdorff close:
Lemma 2.4 (Hausdorff close; Lemma 2.1 in [Cor15]). Let X be a geodesic space, let γ : [a, b] →
X be an N-Morse geodesic segment, and let σ : [a′, b′] → X be a continuous (K, C)-quasigeodesic
such that γ(a) = σ(a′) and γ(b) = σ(b′). Then the Hausdorff distance between γ and σ is bounded
by 2N(K, C).
The next lemma states that if a geodesic triangle ∆ has two Morse sides, then ∆ is slim and its
third side is also Morse:
Lemma 2.5 (Lemma 2.2-2.3 in [Cor15]). Let X be a geodesic space. For any Morse gauge N, there
exists a gauge N′ such that the following holds: Let γ1 , γ2 : [0, ∞) → X be N-Morse geodesics such
that γ1 (0) = γ2 (0) = e and let x1 = γ1 (v1 ), x2 = γ2 (v2 ) be points on γ1 and γ2 respectively. Let γ be
a geodesic between x1 and x2 . Then the geodesic triangle γ1 ([0, v1 ]) ∪ γ ∪ γ2 ([0, v2 ]) is 4N(3, 0)-slim.
Moreover, γ is N′-Morse.
The following is the standard fact that quasiisometries preserve the Gromov product:
Lemma 2.6. Suppose X and Y are proper Gromov hyperbolic metric spaces. If f : X → Y is a
quasiisometric embedding, then there exist A ≥ 1, B > 0 such that for any x, y, z ∈ X, we have
(1/A)(x · y)z − B ≤ (f(x) · f(y))f(z) ≤ A(x · y)z + B.
We have the following well-known consequence:
Lemma 2.7. Suppose that f : X → Y is a quasiisometric embedding between proper Gromov
hyperbolic spaces. Then the induced map ∂f : ∂X → ∂Y is a homeomorphism onto its image.
2.4. Stability. The following is [DT15b, Definition 3.1]:

Definition 2.8 (Stability 1). Let f : X → Y be a quasiisometric embedding between geodesic
metric spaces. We say X is stable in Y (and call f a stable embedding) if for any K ≥ 1, C ≥ 0,
there exists R = R(K, C) ≥ 0 such that if q1 : [a, b] → Y, q2 : [a′, b′] → Y are (K, C)-quasigeodesics
with q1 (a) = q2 (a′), q1 (b) = q2 (b′) ∈ f(X), then

dHaus (q1 , q2 ) < R.
We will use the following definition of stability, which is equivalent via Lemma 2.4:

Definition 2.9 (Stability 2). Let X, Y be geodesic metric spaces and let f : X → Y be a quasiisometric embedding. We say X is stable in Y if there exists a Morse gauge N such that any x, y ∈ f(X)
are connected by an N-Morse geodesic in Y. We say that f is a stable embedding.
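Two elementary examples, not taken from this paper, may help orient the reader:

```latex
\begin{example}
(1) Let $F_2=\langle a,b\rangle$ be free and $H=\langle a\rangle\cong\mathbb{Z}$.
The Cayley graph of $F_2$ is a tree, and $H$ sits along a biinfinite geodesic;
since every geodesic in a tree is $N$-Morse for a uniform gauge $N$, the
subgroup $H$ is stable in $F_2$.
(2) Let $H=\mathbb{Z}\times\{0\}<\mathbb{Z}^2$. The inclusion is an isometric
embedding with respect to the standard word metrics, but $H$ lies along a
geodesic of the flat plane, and no geodesic in $\mathbb{R}^2$ is Morse, so $H$
is not stable in $\mathbb{Z}^2$.
\end{example}
```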
2.5. Morse preserving maps. It will be useful to have a notion of when maps between metric
spaces preserve data encoded by the Morse boundary.

Definition 2.10 (Morse preserving maps). Let X, Y be proper geodesic metric spaces and e ∈
X, e′ ∈ Y. We say that g : ∂M Xe → ∂M Ye′ is Morse-preserving if for each Morse gauge N, there
exists another Morse gauge N′ such that g injectively maps ∂Xe(N) → ∂Ye′(N′).
The following is a consequence of the definitions:
Proposition 2.11. Let f : X → Y be a (K, C)-quasiisometric embedding. If f induces a Morse-preserving map ∂M f : ∂M X → ∂M Y, then ∂M f is a homeomorphism onto its image.
We note that if f : X → Y is a stable embedding, then all geodesics get sent to uniformly Morse
quasigeodesics. Hence the induced map ∂f : ∂M X → ∂M Y is clearly Morse-preserving and:
Corollary 2.12. Let X, Y be proper geodesic metric spaces. If f : X → Y is a stable embedding,
then the induced map ∂f : ∂M X → ∂M Y is a homeomorphism onto its image. Moreover, if f is
the orbit map of a finitely generated group G acting by isometries on Y , then ∂f is G-equivariant.
3. The action of G on ∂M X
In this section, we begin a study of the dynamics of the action of G on ∂M X.
3.1. Definition of the G-action. For the rest of this section, fix G a group acting by isometries
on a proper geodesic metric space X and a base point e ∈ X.
Lemma 3.1. Given any Morse gauge N and g ∈ G, there exists a Morse gauge N′ depending
only on N and g such that if (xn ), (yn ) ⊂ Xe(N) are asymptotic, then:
(1) (g · xn ), (g · yn ) ⊂ Xe(N′), and
(2) (g · yn ) ⊂ Xe(N′) is asymptotic to (g · xn ) ⊂ Xe(N′).
Proof. Since G is acting by isometries, we see that g · e and g · xi are connected by an N-Morse
geodesic. As in the proof of Proposition 3.15 in [CH16], we show that X_{g·e}(N) ⊂ Xe(N′) for some
N′ which depends only on N and d(g · e, e).
To prove the second part, we note that in the proof of Proposition 3.15 in [CH16] we also get
that there exists a constant D such that for all x, y ∈ ∂X_{g·e}(N),

(x ·N′ y)e − D ≤ (x ·N y)g·e ≤ (x ·N′ y)e + D.

Since G acts by isometries and (xn ), (yn ) ⊂ Xe(N) are asymptotic, it follows that (g · xn ), (g · yn ) ⊂
X_{g·e}(N) are asymptotic, which forces (g · xn ), (g · yn ) ⊂ Xe(N′) to be asymptotic.
We may naturally extend the action of G on X to an action on the whole Morse boundary, ∂M X,
as follows: Suppose (xn ) ⊂ Xe(N) is a sequence N-converging to a point λ ∈ ∂Xe(N). For any g ∈ G,
we define g · λ to be the asymptotic class of (g · xn ) in ∂Xe(N′), where N′ is the Morse gauge from
Lemma 3.1.
3.2. Definition of the limit set.

Definition 3.2 (Λ(G)). The limit set of the G-action on ∂M X is

Λe (G) = { λ ∈ ∂M X | ∃N and (gk ) ⊂ G such that (gk · e) ⊂ Xe(N) and lim gk · e = λ }.
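As a sanity check on this definition, consider the following special case, which is not treated in the paper but follows from the facts about hyperbolic groups recalled in the introduction:

```latex
\begin{example}
Let $G$ be a hyperbolic group acting on one of its own Cayley graphs $X$. Every
geodesic of $X$ is $N_0$-Morse for a single gauge $N_0$, so
$\partial_M X_e=\partial_{Gr}G$ and $X=X_e^{(N_0)}$. Since the orbit $G\cdot e$
is cobounded, every boundary point is the limit of a sequence $(g_k\cdot e)$
lying in $X_e^{(N_0)}$. Hence $\Lambda_e(G)=\partial_M X_e$, which is compact.
\end{example}
```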
The following lemma is an immediate consequence of the definitions and Lemma 3.1:
Lemma 3.3. For any e, f ∈ X, the natural change of basepoint homeomorphism φe,f : ∂M Xe →
∂M Xf satisfies φe,f (Λe (G)) = Λf (G). Moreover, we have G · Λe (G) ⊂ Λe (G), i.e.,
Λe (G) is G-invariant.
3.3. Limit geodesics and their properties. In Subsection 3.5, we will define the weak hull of
Λe (G) as all geodesics connecting points in Λe (G). For now, however, we will focus our attention
on a special class of visual geodesics arising naturally as limits of finite geodesics whose endpoint
sequences converge to the Morse boundary. These geodesics are easier to work with and more
obviously tied to the geometry of G.
Let (xn ), (yn ) ⊂ Xe(N) be sequences asymptotic to λ− , λ+ ∈ ∂Xe(N) respectively, with λ− ≠ λ+ .
Let γx,n = [e, xn ], γy,n = [e, yn ] be N-Morse geodesics. Since X is proper, the Arzelà-Ascoli theorem
implies that there exist geodesic rays γx , γy such that γx,n , γy,n subsequentially converge uniformly
on compact sets to γx , γy . Let γn be a geodesic joining γx (n) and γy (n) for each n ∈ N.
In fact, one can prove that (γn ) has a Morse subsequential limit:
Lemma 3.4. Using the above notation, there exists a Morse gauge N′ and a biinfinite N′-Morse
geodesic γ : (−∞, ∞) → X such that γn subsequentially converges uniformly on compact sets to γ.
Moreover, we have that γ ⊂ N4N(3,0) (γx ∪ γy ).
Proof. Let R ∈ N be so that d(γx (R), γy (t)) > 4N(3, 0) for all t ∈ [0, ∞). For each natural number
n > R, let γn be the geodesic between γx (n) and γy (n).
By Lemma 2.5, we know that the triangle γx ([0, n]) ∪ γn ∪ γy ([0, n]) is 4N(3, 0)-slim. From the
choice of R, it follows that each γn must intersect a compact ball of radius 4N(3, 0) centered at
γx (R). By Arzelà–Ascoli, there is a subsequence of {γn } which converges to a biinfinite geodesic γ.
Since every γn is in the 4N(3, 0)-neighborhood of γx ∪ γy , it follows that γ must be as well. From
this, a standard argument implies that γ((−∞, 0]), γ([0, ∞)) are a bounded Hausdorff distance from
γx , γy , respectively, where that bound depends on N.
Finally, it follows from the moreover statement of Lemma 2.5 that γ is N′-Morse, where N′
depends on N, because each γn is N′-Morse.
Definition 3.5 (Limit geodesics and triangles). Given (xn ), (yn ) ⊂ Xe(N) and λ− , λ+ ∈ ∂Xe(N) as
above, we call γx , γy as described above limit legs, and γ a limit geodesic for λ− , λ+ . We call the
triangle formed by γx ∪ γy ∪ γ a limit triangle based at e.
The next goal is to prove the following two propositions:

Proposition 3.6 (Limit triangles are slim). For any Morse gauge N, if (xn ), (yn ) ⊂ Xe(N) are
asymptotic to λ− ≠ λ+ ∈ ∂Xe(N), then any limit triangle is 4N(3, 0)-slim.

Proposition 3.7 (Limit geodesics are asymptotic). For any Morse gauge N, there exists a constant
K′ > 0 such that if γ, γ′ are limit geodesics with the same endpoints λ+ , λ− ∈ ∂Xe(N), then

dHaus (γ, γ′) < K′.
Proposition 3.7 is an immediate consequence of Proposition 3.6 and the following lemma, the proof
of which appears in the proof of Theorem 2.8 of [CH16]:

Lemma 3.8 (Limit geodesic rays are asymptotic). For any Morse gauge N and λ ∈ ∂Xe(N), if γ
and γ′ are N-Morse geodesic rays with endpoint λ, then dHaus (γ, γ′) < 14N(3, 0).
Proof of Proposition 3.6. By Lemma 3.4, γ ⊂ N4N(3,0) (γx ∪ γy ). Let w ∈ γy and assume that w
is not within 4N(3, 0) of a point in γx . It suffices to find a point on γ within 4N(3, 0) of w.
Let z ∈ γ be the closest point on γ to w. By Lemma 2.5, each γx ([0, n]) ∪ γn ∪ γy ([0, n]) is
4N(3, 0)-slim. Since the γn subsequentially converge on compact sets to γ, it follows that for all
ε ≥ 0, we have d(w, z) ≤ 4N(3, 0) + ε. Thus d(w, z) ≤ 4N(3, 0), and the proposition follows from a
symmetric argument with w ∈ γx .
3.4. Asymptoticity. In this subsection, we study the behavior of geodesics and geodesic rays
asymptotic to points in ∂Xe(N).

Definition 3.9 (Asymptotic rays). Let λ ∈ ∂Xe(N) and let γ : [0, ∞) → X with γ(0) = e be a limit
leg for λ based at e. We say that a geodesic γ′ : [0, ∞) → X with γ′(0) = e is asymptotic to λ if
there exists K > 0 such that

dHaus (γ, γ′) < K.
The following is an immediate consequence of Corollary 2.6 in [Cor15]:

Lemma 3.10. There exist K0 > 0 and a Morse gauge N′ depending only on N such that the
following holds: For any λ ∈ ∂Xe(N), if γ, γ′ : [0, ∞) → X are geodesic rays with γ(0) = γ′(0) which
are asymptotic to λ, then γ, γ′ are N′-Morse and

dHaus (γ, γ′) < K0 .
Definition 3.11 (Asymptotic, bi-asymptotic). Let γ : (−∞, ∞) → X be a biinfinite geodesic in
X with γ(0) a closest point to e along γ. Let λ ∈ ∂Xe(N). We say γ is forward asymptotic to λ if
for any N-Morse geodesic ray γλ : [0, ∞) → X with γλ (0) = e, there exists K > 0 such that

dHaus (γ([0, ∞)), γλ ([0, ∞))) < K.

We define backwards asymptotic similarly. If γ is forwards, backwards asymptotic to λ, λ′, respectively, then we say γ is bi-asymptotic to (λ, λ′).
We note that it is an immediate consequence of Proposition 3.6 and Lemma 3.10 that limit
geodesics are bi-asymptotic to their endpoints with a uniform asymptoticity constant.
Lemma 3.12. There exists K1 > 0 depending only on N such that the following holds: Let
γ+ , γ− be N-Morse geodesic rays with γ+ (0) = γ− (0) = e, and let γ be a bi-infinite geodesic such that
both
dHaus (γ− ([0, ∞)), γ((−∞, 0])) < K and dHaus (γ+ ([0, ∞)), γ([0, ∞))) < K
for some constant K > 0. Then the triangle γ− ∪ γ ∪ γ+ is K1 -slim. Furthermore, there exist
S, R ∈ [0, ∞) such that the following holds:
dHaus (γ− ([R, ∞)), γ((−∞, 0])), dHaus (γ+ ([S, ∞)), γ([0, ∞))), d(γ− (R), γ(0)), d(γ+ (S), γ(0)) < K1 .
Proof. Up to reparameterization we may assume that γ(0) is the point on γ closest to e. By
assumption we know that there is a constant K > 0 such that dHaus (γ([0, ∞)), γ+ ) < K. Let
t ∈ [6K, ∞) and let t′ ∈ [0, ∞) be such that γ(t′) is the closest point along γ to γ+ (t). We claim
that φ = [e, γ(0)] ∪ γ([0, t′]) ∪ [γ(t′), γ+ (t)] is a (5, 0)-quasigeodesic.
It suffices to check the standard inequality with vertices in different segments of φ. First suppose
that u ∈ [e, γ(0)] and v ∈ [γ(0), γ(t′)]; the case that u ∈ [γ(t′), γ+ (t)] and v ∈ [γ(0), γ(t′)] is similar.
We know that d(u, γ(0)) ≤ d(u, v) because γ(0) is a nearest point to e along γ. Note that
d(γ(0), v) ≤ d(u, v) + d(u, γ(0)) by the triangle inequality. Let dφ (u, v) denote the distance along
φ between u and v. We have:

d(u, v) ≤ dφ (u, v) = d(u, γ(0)) + d(γ(0), v) ≤ d(u, v) + (d(u, v) + d(u, γ(0))) ≤ 3d(u, v).
Figure 3.1. Lemma 3.12
Now assume that u ∈ [e, γ(0)] and v ∈ [γ(t′), γ+ (t)].
Let ξ = [u, γ(0)] ∪ γ([0, t′]) ∪ [γ(t′), v] and denote its arclength by ‖ξ‖. Since t ∈ [6K, ∞), we
know that d(u, v) ≥ t − 2K ≥ (2/3)t and d(γ(0), γ(t′)) = t′ < t + 2K < 2t. Putting these inequalities
together:

d(u, v) ≤ ‖ξ‖ = d(u, γ(0)) + d(γ(0), γ(t′)) + d(γ(t′), v) < t/6 + 2t + t/6 < 3t ≤ (9/2) d(u, v).

Thus φ is a (5, 0)-quasigeodesic.
By Lemma 2.4, it follows that dHaus (φ, [e, γ+ (t)]) < N(5, 0). Hence there exist S ≥ 0 and
t0 > t′ − K > 0 such that d(γ(0), γ+ (S)), d(γ(t′), γ+ (t0 )) < N(5, 0). Let [γ+ (S), γ(0)], [γ(t′), γ+ (t0 )]
be geodesics and φ′ = [γ+ (S), γ(0)] ∪ γ([0, t′]) ∪ [γ(t′), γ+ (t0 )]. Then φ′ is a (1, 2N(5, 0))-quasigeodesic
and Lemma 2.4 implies that dHaus (φ′, γ+ ([S, t0 ])) < N(1, 2N(5, 0)), and hence

dHaus (γ([0, t′]), γ+ ([S, t0 ])) < 2N(1, 2N(5, 0)) + 2N(5, 0).

Since t0 > t′ − K > 0 depends only on t′ and N, we have that t0 → ∞ as t′ → ∞, and hence
dHaus (γ([0, ∞)), γ+ ([S, ∞))) < 2N(1, 2N(5, 0)) + 2N(5, 0), as required. A similar argument with γ−
provides R ≥ 0 such that dHaus (γ((−∞, 0]), γ− ([R, ∞))) < 2N(1, 2N(5, 0)) + 2N(5, 0).
We know by construction that d(γ− (R), γ(0)), d(γ+ (S), γ(0)) < N (5, 0), so d(γ− (R), γ+ (S)) <
2N (5, 0). We also know by Lemma 2.5 that the triangle γ− ([0, R]) ∪ [γ− (R), γ+ (S)] ∪ γ+ ([0, S]) is
4N (3, 0)-slim. Thus if we set K1 = max{(4N (3, 0) + 2N (5, 0)), (2N (1, 2N (5, 0)) + N (5, 0))}, then
the triangle γ− ∪ γ ∪ γ+ is K1 -slim. This completes the proof.
The following proposition will allow us to define the weak hull in the next subsection:

Proposition 3.13. There exists K2 > 0 depending only on N such that the following holds: Let
λ− , λ+ ∈ ∂Xe(N) be distinct points. If γ, γ′ : (−∞, ∞) → X are geodesics with γ(0), γ′(0) closest
points to e along γ, γ′, respectively, such that γ, γ′ are bi-asymptotic to (λ− , λ+ ), then

dHaus (γ, γ′) < K2 .
Proof. First assume γ is a limit geodesic.
Since γ is a limit geodesic, Lemma 2.5 implies that γ is N′-Morse, where N′ depends only on N.
By Lemma 3.12, we know that there exist S, R ∈ [0, ∞) such that each of the following holds:

dHaus (γ([R, ∞)), γ′((−∞, 0])) < K1′, dHaus (γ([S, ∞)), γ′([0, ∞))) < K1′,

and

d(γ(R), γ′(0)), d(γ(S), γ′(0)) < K1′,

where K1′ depends only on N′.
We know then that d(γ(R), γ(S)) < 2K1′. Putting together these facts, we get that the Hausdorff
distance between γ and γ′ is less than 2K1′.
It follows that any two geodesics γ, γ′ have Hausdorff distance bounded by K2 = 4K1′.
The following is an immediate corollary of Lemma 3.12 and Proposition 3.13:

Corollary 3.14. There exists K1 such that for any distinct λ+ , λ− ∈ ∂Xe(N), any geodesic rays
γ+ , γ− asymptotic to λ+ , λ− respectively with γ+ (0) = γ− (0) = e, and any bi-infinite geodesic γ
bi-asymptotic to λ+ , λ− , the triangle γ+ ∪ γ ∪ γ− is K1 -slim.
3.5. The weak hull He (G). We are ready to define the weak hull of G in X.

Definition 3.15. The weak hull of G in X based at e ∈ X, denoted He (G), is the collection of all
biinfinite geodesics γ which are bi-asymptotic to (λ, λ′) for some λ ≠ λ′ ∈ Λe (G).
Note that Proposition 3.13 says that any two geodesics γ, γ′ ∈ He (G) that are both bi-asymptotic to
(λ, λ′) have bounded Hausdorff distance, where that bound depends on the stratum Xe(N) in which
the sequences defining λ, λ′ live. Also note that the proof of Lemma 3.4 implies that limit geodesics
are in He (G).
The following lemma is evident from the definitions:
Lemma 3.16. If |Λe (G)| ≥ 2, then He (G) is nonempty and G-invariant.
The following is an interesting question:
Question 3.17. If Λ(G) ≠ ∅, then must we have in fact |Λ(G)| ≥ 2?
In the case of CAT(0) spaces, this question has been affirmatively answered [Mur15, Lemma 4.9].
4. Boundary convex cocompactness and stability
In this section, we prove the main theorem, Theorem 1.6, namely that boundary convex cocompactness as in Definition 1.4 and stability as in Definition 2.9 are equivalent.
4.1. Compact limit sets have stable weak hulls. For the rest of this section, fix a group G
acting by isometries on a proper geodesic metric space X.
In this subsection, we will prove that if G has a compact limit set, then He (G) is stable.
Lemma 4.1. If K ⊂ ∂M X is nonempty and compact, then for any e ∈ X there exists N > 0 such
that K ⊂ ∂Xe^(N).
Proof. We will closely follow the proof of Lemma 3.3 in [Mur15].
Since X is proper, we use the fact that ∂Xe^(N) is homeomorphic to ∂M^N X [CH16, Theorem 3.14].
Assume that K is not contained in ∂M^N X for any Morse gauge N. Then we know that there is a
sequence (αi ) ⊂ K of Ni -Morse geodesics where Ni > Ni−1 + 1 for all i. Let An = {αi }i≥n+1 . We
note that for all n and N, An ∩ ∂M^N X is finite. Since singletons are closed in each ∂M^N X, each of
which is Hausdorff, it follows that singletons are closed in ∂M X. Thus An is closed in ∂M X by the
definition of the direct limit topology.
The collection {∂M X\An }n∈N is an open cover of K, but each ∂M X \ An only contains a finite
number of the αi , so that any finite subcollection of {∂M X\An }n∈N will only contain finitely many
αi . This contradicts the fact that K is compact, completing the proof.
The following proposition is the main technical statement of this section:
Proposition 4.2 (Compact limit sets have stable hulls). If Λe (G) ⊂ ∂Xe^(N) is compact for some
(any) e ∈ X, then for each e ∈ X there exists a Morse gauge N′ such that He (G) is N′-stable.
Figure 4.1. Proposition 4.2. [Figure omitted: it shows the basepoint e, the points x, x′, x″ and
y, y′, y″, and the geodesics γx , γx′ , γ′, γy′ , γy from the proof below.]
Proof. Let x, y ∈ He (G). By Lemma 3.4, we may assume that x, y do not lie on the same geodesic.
Let [x, y] be any geodesic between x and y.
Since He (G) is a subspace with the induced metric, it suffices to prove that [x, y] is uniformly
Morse. Moreover, since every hull geodesic lies within uniform Hausdorff distance of a limit geodesic
by Proposition 3.13 and Lemma 4.1, it suffices to consider the case when x and y lie on limit
geodesics by Lemma 2.4.
Let γx , γy be distinct limit geodesics on which x, y lie, respectively. By Proposition 3.6, there
exist limit legs γx′ , γy′ based at e and points x′ ∈ γx′ , y′ ∈ γy′ such that d(x, x′), d(y, y′) ≤ 4N(3, 0).
Let γ′ be the limit geodesic which forms a limit triangle with limit legs γx′ and γy′ . Note by
definition γ′ ∈ He (G). By Proposition 3.6, the limit triangle γ′ ∪ γx′ ∪ γy′ is 4N(3, 0)-thin, so there
exist x″, y″ ∈ γ′ with d(x′, x″), d(y′, y″) < 4N(3, 0), and hence d(x, x″), d(y, y″) < 8N(3, 0).
Let [x″, x], [y, y″] be any geodesics between x, x″ and y, y″, respectively. Then the concatenation
σ = [x″, x] ∪ [x, y] ∪ [y, y″] gives a (1, 16N(3, 0))-quasigeodesic with endpoints x″, y″ on the N′-Morse
geodesic γ′. If [x″, y″] ⊂ γ′ is the subsegment of γ′ between x″ and y″, then σ lies in the
N′(1, 16N(3, 0))-neighborhood of [x″, y″]. An easy argument then implies that σ is uniformly
Morse, from which it follows immediately that [x, y] is uniformly Morse.
4.2. Boundary convex cocompactness implies stability. We now prove the first direction of
the main theorem:
Theorem 4.3. Suppose G acts by isometries on a proper geodesic metric space X such that
(1) The action of G on X is proper,
(2) Λe (G) is nonempty and compact for any e ∈ X, and
(3) G acts cocompactly on He (G) for any e ∈ X.
Then any orbit of G is a stable subspace of X and the orbit map extends continuously and G-equivariantly to an embedding of ∂Gr G into ∂M Xe which is a homeomorphism onto its image Λe (G).
Proof. Since G acts properly and cocompactly on He (G), it follows that the orbit map g 7→ g · e
is a quasiisometry G → He (G), where we consider He (G) with the metric induced from X. In
particular, G · e is quasiisometrically embedded in X.
Let x ∈ He (G) and consider G · x ⊂ He (G) ⊂ X. Let y, z ∈ G · x. By Proposition 4.2,
there exists a Morse gauge N such that any geodesic [y, z] between y, z is N -Morse. Hence, any
(K, C)-quasigeodesic between y, z must stay within N (K, C) of [y, z], implying that G · x is a stable
subspace of X. In particular, G is hyperbolic, so ∂Gr G is defined.
By Corollary 2.12, the orbit map g 7→ g · e induces a topological embedding f : ∂Gr G → ∂M X
which gives a homeomorphism f : ∂Gr G → Λe (G). This completes the proof.
4.3. Stability implies boundary convex cocompactness. Finally, we prove the second direction:
Theorem 4.4. Suppose G acts by isometries on a proper geodesic metric space X such that any
orbit map of G is an infinite diameter stable subspace of X. Then:
(1) Λ(G) is nonempty and compact,
(2) He (G) is a stable subset of X,
(3) G acts cocompactly on He (G), and
(4) Any orbit map extends continuously and G-equivariantly to an embedding of ∂Gr G into
∂M Xe which is a homeomorphism onto its image Λe (G).
Proof. G is hyperbolic because its orbit is stable, so we know that ∂M G = ∂Gr G is compact. By
Corollary 2.12, any orbit map g 7→ g · e induces a homeomorphism ∂Gr G → Λe (G), giving us (1)
and (4).
Thus by Proposition 4.2, the weak hull He (G) is stable in X, proving (2). It remains to prove
that G acts cocompactly on He (G).
To see this, let z ∈ He (G). By Proposition 3.13, we may assume that z ∈ γ for some limit
geodesic γ with ends (gn · e), (hn · e). Let γg , γh be the limit legs of the corresponding limit triangle
for γ. By Lemma 3.6, there exists x ∈ γg ∪ γh such that d(x, z) < 4N(3, 0), where N is the Morse
gauge such that Λe (G) ⊂ ∂Xe^(N).
Without loss of generality, assume that x ∈ γg . By definition of γg , there exists a sequence of
N -Morse geodesics γn = [e, gn · e] which subsequentially converges to γg uniformly on compact sets.
Hence, by passing to a subsequence if necessary, for any ε > 0 and every T > 0, there exists M > 0
such that for any n ≥ M , we have
dHaus (γn ([0, T ]), γg ([0, T ])) < ε.
Taking T > 0 sufficiently large so that x ∈ γg ([0, T ]) and taking n > M for M corresponding to
this T , we may assume there exists y ∈ γn such that d(y, x) < ε.
Since G·e is quasiisometrically embedded, there exist K, C such that any geodesic in G between 1
and gn maps to a (K, C)-quasigeodesic qn between e and gn · e. Let qn′ denote the (K, C + (K + C))-quasigeodesic obtained by connecting successive vertices of qn by geodesics of length at most K + C.
Since [e, gn · e] is N-Morse, it follows that there exists w′ ∈ qn′ such that d(w′, y) < 2N(K, 2C + K),
and thus there exists w ∈ qn with d(w, y) < 2N(K, 2C + K) + (K + C)/2. It follows that
d(z, w) ≤ d(z, x) + d(x, y) + d(y, w) < 4N(3, 0) + ε + 2N(K, 2C + K) + (K + C)/2,
which, taking ε = 1 say, is a constant depending only on N and the quasiisometric embedding
constants of G in X.
Hence G acts cocompactly on He (G), as required.
References
[ADT16] Tarik Aougab, Matthew Gentry Durham, and Samuel J Taylor, Middle recurrence and pulling back stability
for proper actions, in preparation (2016).
[AK11] Yael Algom-Kfir, Strongly contracting geodesics in outer space, Geometry & Topology 15 (2011), no. 4,
2181–2233.
[BC12] J. Behrstock and R. Charney, Divergence and quasimorphisms of right-angled Artin groups, Math. Ann.
352 (2012), 339–356.
[Beh06] J. Behrstock, Asymptotic geometry of the mapping class group and Teichmüller space, Geometry & Topology 10 (2006), 2001–2056.
[BH99] Martin R Bridson and André Haefliger, Metric spaces of non-positive curvature, Vol. 319, Springer, 1999.
[CH16] Matthew Cordes and David Hume, Stability and the Morse boundary, arXiv preprint arXiv:1606.00129
(2016).
[Cor15] Matthew Cordes, Morse boundaries of proper geodesic metric spaces, arXiv:1502.04376 (2015).
[DHS16] Matthew G Durham, Mark F Hagen, and Alessandro Sisto, Boundaries and automorphisms of hierarchically
hyperbolic spaces, arXiv preprint arXiv:1604.01061 (2016).
[DT14] Spencer Dowdall and Samuel J Taylor, Hyperbolic extensions of free groups, arXiv preprint arXiv:1406.2567
(2014).
[DT15a] Spencer Dowdall and Samuel J Taylor, Contracting orbits in outer space, arXiv preprint arXiv:1502.04053 (2015).
[DT15b] Matthew Durham and Samuel J Taylor, Convex cocompactness and stability in mapping class groups,
Algebraic & Geometric Topology 15 (2015), no. 5, 2839–2859.
[Far98] Benson Farb, Relatively hyperbolic groups, Geometric and functional analysis 8 (1998), no. 5, 810–840.
[FM02] Benson Farb and Lee Mosher, Convex cocompact subgroups of mapping class groups, Geom. Topol. 6 (2002),
91–152 (electronic). MR1914566
[Hag08] Frédéric Haglund, Finite index subgroups of graph products, Geom. Dedicata 135 (2008), 167–209.
MR2413337
[Ham05] Ursula Hamenstädt, Word hyperbolic extensions of surface groups, arXiv preprint math (2005).
[HH14] Ursula Hamenstädt and Sebastian Hensel, Convex cocompact subgroups of Out(Fn), arXiv preprint
arXiv:1411.2281 (2014).
[KL08] Richard P. Kent IV and Christopher J. Leininger, Shadows of mapping class groups: capturing convex
cocompactness, Geom. Funct. Anal. 18 (2008), no. 4, 1270–1325. MR2465691
[KMT14] Thomas Koberda, Johanna Mangahas, and Samuel J Taylor, The geometry of purely loxodromic subgroups
of right-angled Artin groups, to appear in the Transactions of the American Mathematical Society; arXiv
preprint arXiv:1412.3663 (2014).
[Mar74] Albert Marden, The geometry of finitely generated Kleinian groups, Annals of Mathematics (1974), 383–462.
[Mur15] Devin Murray, Topology and dynamics of the contracting boundary of cocompact CAT(0) spaces, arXiv
preprint arXiv:1509.09314 (2015).
[Sul85] Dennis Sullivan, Quasiconformal homeomorphisms and dynamics I: Solution of the Fatou–Julia problem on
wandering domains, Annals of Mathematics 122 (1985), no. 2, 401–418.
[Swe01] Eric L Swenson, Quasi-convex groups of isometries of negatively curved spaces, Topology and its Applications 110 (2001), no. 1, 119–129.
[TT16] Samuel J Taylor and Giulio Tiozzo, Random extensions of free groups and surface groups are hyperbolic,
International Mathematics Research Notices 2016 (2016), no. 1, 294–310.
Department of Mathematics, Brandeis University, 415 South Street, Waltham, MA 02453, U.S.A.
E-mail address: [email protected]
Department of Mathematics, University of Michigan, 530 Church Street, Ann Arbor, MI 48105,
U.S.A.
E-mail address: [email protected]
Streaming Algorithms for k-Means Clustering with Fast Queries
Yu Zhang∗
Kanat Tangwongsan†
Srikanta Tirthapura‡
arXiv:1701.03826v1 [] 13 Jan 2017
January 17, 2017
Abstract
We present methods for k-means clustering on a stream with a focus on providing fast responses to
clustering queries. When compared with the current state-of-the-art, our methods provide a substantial
improvement in the time to answer a query for cluster centers, while retaining the desirable properties
of provably small approximation error, and low space usage. Our algorithms are based on a novel idea
of “coreset caching” that reuses coresets (summaries of data) computed for recent queries in answering the current clustering query. We present both provable theoretical results and detailed experiments
demonstrating their correctness and efficiency.
1 Introduction
Clustering is a fundamental method for understanding and interpreting data. The goal of clustering is to
partition input objects into groups or “clusters” such that objects within a cluster are similar to each other,
and objects in different clusters are not. A popular formulation of clustering is k-means clustering. Given a
set of points S in an Euclidean space and a parameter k, the goal is to partition S into k “clusters” in a way
that minimizes a cost metric based on the `2 distance between points. The k-means formulation is widely
used in practice.
We consider streaming k-means clustering, where the inputs to clustering are not all available at once,
but arrive as a continuous, possibly unending sequence. The algorithm needs to maintain enough state to
be able to incrementally update the clusters as more tuples arrive. When a query is posed, the algorithm is
required to return k cluster centers, one for each cluster within the data observed so far.
While there has been substantial prior work on streaming k-means clustering (e.g. [1, 2, 3, 4]), the
major focus of prior work has been on optimizing the memory used by the streaming algorithm. In this
respect, these works have been successful, and achieve a provable guarantee on the approximation quality
of clustering, while using space polylogarithmic in the size of the input stream [1, 2]. However, for all these
algorithms, when a query for cluster centers is posed, an expensive computation is needed at time of query.
This can be a serious problem for applications that need answers in (near) real-time, such as in network
monitoring and sensor data analysis. Our work aims at designing a streaming clustering algorithm that significantly improves the clustering query runtime compared to the current state-of-the-art, while maintaining
other desirable properties enjoyed by current algorithms, such as provable accuracy and limited memory.
To understand why current solutions have a high query runtime, let us review the framework used in
current solutions for streaming k-means clustering. At a high level, incoming data stream S is divided into
∗ Department of Electrical and Computer Engineering, Iowa State University, [email protected]
† Computer Science Program, Mahidol University International College, [email protected]
‡ Department of Electrical and Computer Engineering, Iowa State University, [email protected]
smaller “chunks” S1 , S2 , . . . . Each chunk is summarized using a “coreset” (for example, see [5]). The
resulting coresets may still not all fit into the memory of the processor, so multiple coresets are further
merged recursively into higher level coresets forming a hierarchy of coresets, or a “coreset tree”. When a
query arrives, all active coresets in the coreset tree are merged together, and a clustering algorithm such as
k-means++ [6] is applied on the result, outputting k cluster centers. The query runtime is proportional to the
number of coresets that need to be merged together. In prior algorithms, the total size of all these coresets
could be as large as the memory of the processor itself, and hence the query runtime can be very high.
1.1 Our Contributions
We present algorithms for streaming k-means whose query runtime is substantially smaller than the current state-of-the-art, while maintaining the desirable properties of a low memory footprint and provable
approximation guarantees on the result. Our algorithms are based on the idea of “coreset caching” that to
our knowledge, has not been used before in streaming clustering. The idea in coreset caching is to reuse
coresets that have been computed during previous (recent) queries to speedup the computation of a coreset
for the current query. This way, when a query arrives, it is not needed to combine all coresets currently
in memory; it is sufficient to only merge a coreset from a recent query (stored in the coreset cache) along
with coresets of points that arrived after this query. We show that this leads to substantial savings in query
runtime.
Name | Query cost (per point) | Update cost (per point) | Memory used | Coreset level returned at query after N batches
Coreset Tree (CT) | O((kdm/q) · (r log N/ log r)) | O(kd) | O(mdr · log N/ log r) | log_r N
Cached Coreset Tree (CC) | O((kdm/q) · r) | O(kd) | O(mdr · log N/ log r) | 2 log_r N
Recursive Cached Coreset Tree (RCC) | O((kdm/q) · log log N) | O(kd log log N) | O(md N^{1/8}) | O(1)
Online Coreset Cache (OnlineCC) | usually O(1); worst case O((kdm/q) · r) | O(kd) | O(mdr · log N/ log r) | 2 log_r N

Table 1: The accuracy and query cost of different clustering methods. k is the number of centers desired,
d is the dimension of a data point, m is the size of a coreset (in practice, this is a constant factor times k),
r is a parameter used for CC and CT, showing the “merge degree” of the coreset tree, and q is the number of
points between two queries. The “level” of a coreset is indicative of the number of recursive merges of
prior coresets to arrive at this coreset. The smaller the level, the more accurate the coreset. For example, a batch
algorithm that sees the entire input can return a coreset at level 0.
Let n denote the number of points observed in the stream so far. Let N = n/m, where m is the size of a
coreset, a parameter that is independent of n. Our main contributions are as follows:
• We present an algorithm “Cached Coreset Tree” (CC) whose query runtime is a factor of O(log N)
smaller than the query runtime of a state-of-the-art current method, “Coreset Tree” (CT),¹ while using
similar memory and maintaining the same quality of approximation.
¹CT is essentially the streamkm++ algorithm [1] and [2], except it has a more flexible rule for merging coresets.
• We present a recursive version of CC, “Recursive Cached Coreset Tree” (RCC), which provides more
flexible tradeoffs for the memory used, quality of approximation, and query cost. For instance, it is
possible to improve the query runtime by a factor of O(log N/ log log N), and improve the quality of
approximation, at the cost of greater memory.
• We present an algorithm OnlineCC, a combination of CC and a simple sequential streaming clustering
algorithm (due to [7]), which provides further improvements in clustering query runtime while still
maintaining the provable clustering quality, as in RCC and CC.
• For all algorithms, we present proofs showing that the k centers returned in response to a query form
an O(log k) approximation to the optimal k-means clustering cost. In other words, the quality is
comparable to what we will obtain if we simply stored all the points so far in memory, and ran an
(expensive) batch k-means++ algorithm at time of query.
• We present a detailed experimental evaluation. These show that when compared with streamkm++ [1],
a state-of-the-art method for streaming k-means clustering, our algorithms yield substantial speedups
(5x-100x) in query runtime and in total time, and match the accuracy, for a broad range of query
arrival frequencies.
Our theoretical results are summarized in Table 1.
1.2 Related Work
In the batch setting, when all input is available at the start of computation, Lloyd’s algorithm [8], also known
as the k-means algorithm, is a simple iterative algorithm for k-means clustering that has been widely used
for decades. However, it does not have a provable approximation guarantee on the quality of the clusters.
k-means++ [6] presents a method to determine the starting configuration for Lloyd’s algorithm that yields a
provable guarantee on the clustering cost. [9] proposes a parallelization of k-means++ called k-meansII.
The earliest streaming clustering method, Sequential k-means (due to [7]), maintains the current
cluster centers and applies one iteration of Lloyd’s algorithm for every new point received. Because it
is fast and easy to implement, Sequential k-means is commonly used in practice (e.g., Apache Spark
mllib [10]). However, it cannot provide any approximation guarantees [11] on the cost of clustering.
BIRCH [12] is a streaming clustering method based on a data structure called the “CF Tree”, and returns
cluster centers through agglomerative hierarchical clustering on the leaf nodes of the tree. CluStream[13]
constructs “microclusters” that summarize subsets of the stream, and further applies a weighted k-means
algorithm on the microclusters. STREAMLS [3] is a divide-and-conquer method based on repeated application of a bicriteria approximation algorithm for clustering. A similar divide-and-conquer algorithm based
on k-means++ is presented in [2]. However, these methods have a high cost of query processing, and are
not suitable for continuous maintenance of clusters, or for frequent queries. In particular, at the time of
query, these require merging of multiple data structures, followed by an extraction of cluster centers, which
is expensive.
[5] presents coresets of size O(k ε^{−d} log n) for summarizing n points for k-means, and also shows how to use the
merge-and-reduce technique based on the Bentley–Saxe decomposition [14] to derive a small-space streaming algorithm using coresets. Further work [15, 16] has reduced the size of a k-means coreset to O(kd/ε⁶).
streamkm++ [1] is a streaming k-means clustering algorithm that uses the merge-and-reduce technique
along with k-means++ to generate a coreset. Our work improves on streamkm++ w.r.t. query runtime.
Roadmap. We present preliminaries in Section 2, background for streaming clustering in Section 3 and
then the algorithms CC, RCC, and OnlineCC in Section 4, along with their proofs of correctness and quality
guarantees. We then present experimental results in Section 5.
2 Preliminaries
2.1 Model and Problem Description
We work with points from the d-dimensional Euclidean space Rd for integer d > 0. A point can have a
positive integral weight associated with it. If unspecified, the weight of a point is assumed to be 1. For
points x, y ∈ Rd , let D(x, y) = ‖x − y‖ denote the Euclidean distance between x and y. For point x and a point
set Ψ ⊆ Rd , the distance of x to Ψ is defined to be D(x, Ψ) = min_{ψ∈Ψ} ‖x − ψ‖.
Definition 1 (k-means clustering problem) Given a set P ⊆ Rd with n points and an associated weight function w : P → Z⁺, find a point set Ψ ⊆ Rd , |Ψ| = k, that minimizes the objective function
φΨ (P) = Σ_{x∈P} w(x) · D²(x, Ψ) = Σ_{x∈P} min_{ψ∈Ψ} w(x) · ‖x − ψ‖².
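As a concrete reading of this objective, the short Python sketch below (the function name `kmeans_cost` is ours, not from the paper) evaluates φΨ(P) for weighted points:

```python
def kmeans_cost(points, weights, centers):
    """Weighted k-means objective: sum over x of w(x) * min_c ||x - c||^2."""
    total = 0.0
    for x, w in zip(points, weights):
        # squared Euclidean distance from x to its nearest center
        d2 = min(sum((a - b) ** 2 for a, b in zip(x, c)) for c in centers)
        total += w * d2
    return total
```

For example, with P = {(0,0), (2,0)}, unit weights, and Ψ = {(0,0)}, the cost is 0 + 4 = 4.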
Streams: A stream S = e1 , e2 , . . . is an ordered sequence of points, where ei is the i-th point observed by the
algorithm. For t > 0, let S(t) denote the prefix e1 , e2 , . . . , et . For 0 < i ≤ j, let S(i, j) denote the substream
ei , ei+1 , . . . , e j . Let n denote the total number of points observed so far. Define S = S(1, n) to be the whole
stream observed up to and including en .
We have written our analysis as if a query for cluster centers arrives every q points. This parameter (q)
captures the query rate in the most basic terms; we note that our results on the amortized query processing
time still hold as long as the average number of points between two queries is q. The reason is that the cost
of answering each query does not relate to when the query arrives, and the total query cost is simply the
number of queries times the cost of each query. Suppose that the queries arrived according to a different
probability distribution, such that the expected interval between two queries is q points. Then, the same
results will hold in expectation.
In the theoretical analysis of our algorithms, we measure the performance in both terms of computational
runtime and memory consumed. The computational cost can be divided into two parts, the query runtime,
and the update runtime, which is the time to update internal data structures upon receiving new points. We
typically consider amortized processing cost, which is the average per-point processing cost, taken over the
entire stream. We express the memory cost in terms of words, while assuming that each point in Rd can be
stored in O(d) words.
2.2 The k-means++ Algorithm
Our algorithm uses as a subroutine the k-means++ algorithm [6], a batch algorithm for k-means clustering with provable guarantees on the quality of the objective function. The properties of the algorithm are
summarized below.
Theorem 2 (Theorem 3.1 in [6]) On an input set of n points P ⊆ Rd , the k-means++ algorithm returns a
set Ψ of k centers such that E[φΨ (P)] ≤ 8(ln k + 2) · φOPT (P), where φOPT (P) is the optimal k-means clustering
cost for P. The time complexity of the algorithm is O(nkd).
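To make the subroutine concrete, here is a hedged Python sketch of the k-means++ seeding step (D²-sampling): the first center is uniform, and each subsequent center is drawn with probability proportional to its squared distance from the chosen centers. The full algorithm of [6] follows this seeding with Lloyd iterations; this sketch shows only the seeding, and the function name is ours:

```python
import random

def kmeans_pp_seed(points, k, rng=None):
    """k-means++ seeding (D^2-sampling); Lloyd iterations would follow."""
    rng = rng or random.Random(0)
    centers = [rng.choice(points)]  # first center: uniform at random
    while len(centers) < k:
        # squared distance from each point to its nearest chosen center
        d2 = [min(sum((a - b) ** 2 for a, b in zip(x, c)) for c in centers)
              for x in points]
        r = rng.random() * sum(d2)
        acc = 0.0
        for x, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(x)
                break
    return centers
```

On well-separated data this tends to pick one seed per cluster, which is what drives the O(log k) guarantee of Theorem 2.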
2.3 Coresets
Our clustering method builds on the concept of a coreset, a small-space representation of a weighted set of
points that (approximately) preserves certain properties of the original set of points.
Algorithm 1: Stream Clustering Driver
def StreamCluster-Update(H, p)
    ▷ Insert points into D in batches of size m
    H.n ← H.n + 1
    Add p to H.C
    if (|H.C| = m) then
        H.D.Update(H.C, H.n/m)
        H.C ← ∅
def StreamCluster-Query()
    C1 ← H.D.Coreset()
    return k-means++(k, C1 ∪ H.C)
Definition 3 (k-means Coreset) For a weighted point set P ⊆ Rd , integer k > 0, and parameter 0 < ε < 1,
a weighted set C ⊆ Rd is said to be a (k, ε)-coreset of P for the k-means metric, if for any set Ψ of k points
in Rd , we have
(1 − ε) · φΨ (P) ≤ φΨ (C) ≤ (1 + ε) · φΨ (P)
When k is clear from the context, we simply say an ε-coreset. In this paper we use the term “coreset” to
mean a k-means coreset. For integer k > 0, parameter 0 < ε < 1, and weighted point set P ⊆ Rd , we use the
notation coreset(k, ε, P) to mean a (k, ε)-coreset of P. We use the following observations from [5].
Observation 4 ([5]) If C1 and C2 are each (k, ε)-coresets for disjoint multi-sets P1 and P2 respectively, then
C1 ∪ C2 is a (k, ε)-coreset for P1 ∪ P2 .
Observation 5 ([5]) If C1 is a (k, ε)-coreset for C2 , and C2 is a (k, δ)-coreset for P, then C1 is a (k, (1 + ε)(1 +
δ) − 1)-coreset for P.
While our algorithms can work with any method for constructing coresets, one concrete construction
due to [16] provides the following guarantees.
Theorem 6 (Theorem 15.7 in [16]) Let 0 < δ < 1/2 and let n denote the size of point set P. There exists an
algorithm to compute coreset(k, ε, P) with probability at least 1 − δ. The size of the coreset is O((kd + log(1/δ))/ε⁶),
and the construction time is O(ndk + log²(1/δ) log² n + |coreset(k, ε, P)|).
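For experimenting with the data structures that follow, it helps to have something with the interface of coreset(k, ε, ·) even without its guarantees. The Python sketch below (the name `toy_coreset` and its design are entirely ours) samples m representatives and moves each point's weight to its nearest representative; it preserves total weight but is only a toy stand-in, with none of the approximation guarantees of Theorem 6:

```python
import random

def toy_coreset(points, weights, m, rng=None):
    """Toy stand-in for coreset(k, eps, P): returns (reps, rep_weights).
    Weight-preserving summary, but NOT a provable (k, eps)-coreset."""
    rng = rng or random.Random(0)
    if len(points) <= m:
        return list(points), list(weights)
    reps = rng.sample(points, m)
    rep_w = [0.0] * m
    for x, w in zip(points, weights):
        # assign x's weight to its nearest representative
        j = min(range(m),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(x, reps[i])))
        rep_w[j] += w
    return reps, rep_w
```

Any real implementation would replace this with a construction such as the one in Theorem 6.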
3 Streaming Clustering and Coreset Trees
To provide context for how algorithms in this paper will be used, we describe a generic “driver” algorithm
for streaming clustering. We also discuss the coreset tree (CT) algorithm. This is both an example of how
the driver works with a specific implementation and a quick review of an algorithm from prior work that our
algorithms build upon.
Algorithm 2: Coreset Tree Update
def CT-Update(b)
    ▷ Input: bucket b
    Append b to Q0
    j ← 0
    while |Qj | ≥ r do
        U ← coreset(k, ε, ∪B∈Qj B)
        Append U to Qj+1
        Qj ← ∅
        j ← j + 1
def CT-Coreset()
    return ∪j ∪B∈Qj B
3.1 Driver Algorithm
The “driver” algorithm (presented in Algorithm 1) is initialized with a specific implementation of a clustering data structure D and a batch size m. It internally keeps state inside an object H. It groups arriving points
into batches of size m and inserts into the clustering data structure at the granularity of a batch. H stores
additional state, the number of points received so far in the stream H.n, and the current batch of points H.C.
Subsequent algorithms in this paper, including CT, are implementations for the clustering data structure D.
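A minimal Python rendering of the driver may help fix ideas. Method names are lowercased, and the interface assumed for D — `update(batch, index)` and `coreset()` — is our stand-in for the paper's D.Update and D.Coreset:

```python
class StreamClusterDriver:
    """Sketch of Algorithm 1: buffer points into size-m batches and feed
    each full batch to a clustering structure D."""
    def __init__(self, D, m):
        self.D, self.m = D, m
        self.n = 0    # points seen so far
        self.C = []   # current partial batch

    def update(self, p):
        self.n += 1
        self.C.append(p)
        if len(self.C) == self.m:
            self.D.update(self.C, self.n // self.m)
            self.C = []

    def query_points(self):
        # the real driver runs k-means++ on this union to obtain k centers
        return self.D.coreset() + self.C
```

Plugging in any structure with that interface (e.g., a coreset tree) yields a complete streaming clusterer.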
3.2 CT: r-way Merging Coreset Tree
The r-way coreset tree (CT) turns a traditional batch algorithm for coreset construction into a streaming algorithm that works in limited space. Although the basic ideas are the same, our description of CT generalizes
the coreset tree of Ackermann et al. [1], which is the special case when r = 2.
The Coreset Tree: A coreset tree Q maintains buckets at multiple levels. The buckets at level 0 are called
base buckets, which contain the original input points. The size of each base bucket is specified by a parameter
m. Each bucket above that is a coreset summarizing a segment of the stream observed so far. In an r-way
CT, level ℓ has between 0 and r − 1 (inclusive) buckets, each a summary of r^ℓ base buckets.
Initially, the coreset tree is empty. After observing n points in the stream, there will be N = ⌊n/m⌋
base buckets (level 0). Some of these base buckets may have been merged into higher-level buckets. The
distribution of buckets across levels obeys the following invariant:
If N is written in base r as N = (sq , sq−1 , . . . , s1 , s0 )r , with sq being the most significant digit
(i.e., N = Σ_{i=0}^{q} si r^i ), then there are exactly si buckets in level i.
How is a base bucket added? The process to add a base bucket is reminiscent of incrementing a base-r
counter by one, where merging is the equivalent of transferring the carry from one column to the next. More
specifically, CT maintains a sequence of sequences {Q j }, where Q j is the buckets at level j. To incorporate
a new bucket into the coreset tree, CT-Update, presented in Algorithm 2, first adds it at level 0. When the
number of buckets at any level i of the tree reaches r, these buckets are merged, using the coreset algorithm,
to form a single bucket at level (i + 1), and the process is repeated until there are fewer than r buckets at all
levels of the tree. An example of how the coreset tree evolves after the addition of base buckets is shown in
Figure 1.
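The base-r carry analogy can be made literal in a few lines of Python (a sketch with names of our choosing; `summarize` stands in for coreset(k, ε, ·), and using the identity function for it makes the bucket bookkeeping easy to check):

```python
def ct_update(Q, b, r, summarize):
    """Sketch of CT-Update: append base bucket b at level 0, then carry
    like a base-r counter, merging r buckets into one summary one level up.
    Q maps level -> list of buckets; each bucket is a list of points."""
    Q.setdefault(0, []).append(b)
    j = 0
    while len(Q.get(j, [])) >= r:
        merged = [p for bucket in Q[j] for p in bucket]
        Q.setdefault(j + 1, []).append(summarize(merged))
        Q[j] = []
        j += 1

def ct_coreset(Q):
    """Sketch of CT-Coreset: union of all active buckets."""
    return [p for level in Q.values() for bucket in level for p in bucket]
```

With r = 2 and five one-point base buckets, the tree ends with one bucket at level 0 and one at level 2, matching the invariant for 5 = (101)₂.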
How to answer a query? The algorithm simply unions all the (active) buckets together, specifically ∪j ∪B∈Qj B.
Notice that the driver will combine this with a partial base bucket before deriving the k-means centers.
We present lemmas stating the properties of the CT algorithm and we use the following definition in
proving clustering guarantees.
Definition 7 (Level-ℓ Coreset) For ℓ ∈ Z≥0 , a (k, ε, ℓ)-coreset of a point set P ⊆ Rd , denoted by coreset(k, ε, ℓ, P),
is as follows:
• The level-0 coreset of P is P.
• For ℓ > 0, a level-ℓ coreset of P is a coreset of the Ci ’s (i.e., coreset(k, ε, ∪_{i=1}^{t} Ci )), where each Ci is
a level-ℓi coreset, ℓi < ℓ, of Pi such that {Pj }_{j=1}^{t} forms a partition of P.
We first determine the number of levels in the coreset tree after observing N base buckets. Let the
maximum level of the tree be denoted by ℓ(Q) = max{ j | Qj ≠ ∅}.

Lemma 8 After observing N base buckets, ℓ(Q) ≤ log N/ log r.

Proof: As was pointed out earlier, for each level j ≥ 0, a bucket in Qj is a summary of r^j base buckets. Let
ℓ∗ = ℓ(Q). After observing N base buckets, the coverage of a bucket at level ℓ∗ cannot exceed N, so r^{ℓ∗} ≤ N,
which means ℓ(Q) = ℓ∗ ≤ log N/ log r.
Lemma 9 For a point set P, parameter ε > 0, and integer ℓ ≥ 0, if C = coreset(k, ε, ℓ, P) is a level-ℓ
coreset of P, then C = coreset(k, ε′, P) where ε′ = (1 + ε)^ℓ − 1.

Proof: We prove this by induction using the proposition P(ℓ): For a point set P, if C = coreset(k, ε, ℓ, P),
then C = coreset(k, ε′, P) where ε′ = (1 + ε)^ℓ − 1.
To prove the base case of ℓ = 0, consider that, by definition, coreset(k, ε, 0, P) = P, and coreset(k, 0, P) = P.
Now consider integer L > 0. Suppose that for each positive integer ℓ < L, P(ℓ) was true. The task is
to prove P(L). Suppose C = coreset(k, ε, L, P). Then there must be a partition of P into sets
P1 , P2 , . . . , Pq such that ∪_{i=1}^{q} Pi = P. For i = 1 . . . q, let Ci = coreset(k, ε, ℓi , Pi ) for ℓi < L. Then C must
be of the form coreset(k, ε, ∪_{i=1}^{q} Ci ).
By the inductive hypothesis, we know that Ci = coreset(k, εi , Pi ) where εi = (1 + ε)^{ℓi} − 1. By
the definition of a coreset and using ℓi ≤ (L − 1), it is also true that Ci = coreset(k, ε″, Pi ) where
ε″ = (1 + ε)^{L−1} − 1. Let C′ = ∪_{i=1}^{q} Ci . From Observation 4 and using P = ∪_{i=1}^{q} Pi , it must be true that C′ =
coreset(k, ε″, P). Since C = coreset(k, ε, C′) and using Observation 5, we get C = coreset(k, γ, P)
where γ = (1 + ε)(1 + ε″) − 1. Simplifying, we get γ = (1 + ε)(1 + (1 + ε)^{L−1} − 1) − 1 = (1 + ε)^L − 1. This
proves the inductive case for P(L), which completes the proof.
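As a sanity check of the error bound in the smallest nontrivial case ℓ = 2: applying Observation 5 with δ = ε gives

```latex
\varepsilon' \;=\; (1+\varepsilon)(1+\varepsilon) - 1 \;=\; 2\varepsilon + \varepsilon^2 \;=\; (1+\varepsilon)^2 - 1,
```

which is exactly the value (1 + ε)^ℓ − 1 claimed by Lemma 9 for ℓ = 2.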
The accuracy of a coreset is given by the following lemma, since it is clear that a level-ℓ bucket is a
level-ℓ coreset of its responsible range of base buckets.

Lemma 10 Let ε = (c log r)/ log N where c is a small enough constant. After observing stream S = S(1, n),
a clustering query StreamCluster-Query returns a set of k centers Ψ of S whose clustering cost is an
O(log k)-approximation to the optimal clustering for S.
Proof: After observing N base buckets, Lemma 8 indicates that all coresets in Q are at level no greater than
(log N/ log r). Using Lemma 9, the maximum level coreset in Q is an ε′-coreset where
ε′ = (1 + (c log r)/ log N)^{log N/ log r} − 1 ≤ e^c − 1 < 0.1,
for a small enough constant c. Consider that StreamCluster-Query computes k-means++ on the union of two sets, one
being the result of CT-Coreset and the other the partially-filled base bucket H.C. Hence, Θ = (∪j ∪B∈Qj B) ∪ H.C is the
coreset union that is given to k-means++. Using Observation 4, Θ is an ε′-coreset of S. Let Ψ be the final
k centers generated by running k-means++ on Θ, and let Ψ1 be the set of k centers which achieves the optimal
k-means clustering cost for S. From the definition of a coreset, when ε′ < 0.1, we have
0.9 φΨ (S) ≤ φΨ (Θ) ≤ 1.1 φΨ (S)    (1)
0.9 φΨ1 (S) ≤ φΨ1 (Θ) ≤ 1.1 φΨ1 (S)    (2)
Let Ψ2 denote the set of k centers which achieves the optimal k-means clustering cost for Θ. Using Theorem 2, we have

E[φ_Ψ(Θ)] ≤ 8(ln k + 2) · φ_Ψ2(Θ)    (3)

Since Ψ2 is the optimal set of k centers for the coreset Θ, we have

φ_Ψ2(Θ) ≤ φ_Ψ1(Θ)    (4)

Using Equations 2, 3 and 4 we get

E[φ_Ψ(Θ)] ≤ 9(ln k + 2) · φ_Ψ1(S)    (5)

Using Equations 1 and 5,

E[φ_Ψ(S)] ≤ 10(ln k + 2) · φ_Ψ1(S)    (6)

We conclude that Ψ is a set of clustering centers of S whose cost is within a factor of O(log k) of the optimal.
The following lemma quantifies the memory and time cost of CT.
Lemma 11 Let N be the number of buckets observed so far. Algorithm CT, including the driver, takes amortized O(kd) time per point, using O((mdr log N)/log r) memory. The amortized cost of answering a query is O((kdmr log N)/(q log r)) per point.
Proof: First, the cost of arranging n points into level-0 buckets is trivially O(n), resulting in N = n/m buckets. For j ≥ 1, a level-j bucket is created for every mr^j points, so the number of level-j buckets ever created is N/r^j. Hence, across all levels, the total number of buckets created is Σ_{j=1}^` N/r^j = O(N/r). Furthermore, when a bucket is created, CT merges rm points into m points. By Theorem 6, the total cost of creating these buckets is O((N/r) · (kdrm + log²(rm) + dk)) = O(nkd), hence O(kd) amortized time per point. In terms of space, each level must have fewer than r buckets, each with m points. Therefore, across ` ≤ log N/log r levels, the space required is O((log N/log r) · mdr). Finally, when answering a query, the union of all the buckets has at most O(mr · log N/log r) points, computable in the same time as the size. Therefore, k-means++, run on these points plus at most one base bucket, takes O(kdrm · log N/log r) time. The amortized bound immediately follows. This proves the theorem.
Figure 1: Illustration of Algorithm CC, showing the states of the coreset tree and cache after batches 1, 2, 3, 4, 7, 8, 15 and 16. The notation [l, r] denotes a coreset of all points in buckets l to r, both endpoints inclusive. The coreset tree consists of a set of coresets, each of which is a base bucket or has been formed by merging multiple coresets. Whenever a coreset is merged into another coreset (in the tree) or discarded (in the cache), the coreset is marked with an “X”. We suppose that a clustering query arrives after seeing each batch, and describe the actions taken to answer this query (1) if only CT was used, or (2) if CC was used along with CT.
As evident from the above lemma, answering a query using CT is expensive compared to the cost of adding a point. More precisely, when queries are made rather frequently, every q points with q < O(rm · log_r N) = Õ(rkd · log_r N), the cost of query processing is asymptotically greater than the cost of handling point arrivals. We address this issue in the next section.
4 Clustering Algorithms with Fast Queries
This section describes algorithms for streaming clustering with an emphasis on query time.
4.1 Algorithm CC: Coreset Tree with Caching
The CC algorithm uses the idea of “coreset caching” to speed up query processing by reusing coresets that
were constructed during prior queries. In this way, it can avoid merging a large number of coresets at query
time. When compared with CT, the CC algorithm can answer queries faster, while maintaining nearly the
same processing time per element.
In addition to the coreset tree CT, the CC algorithm also has an additional coreset cache, cache, that
stores a subset of coresets that were previously computed. When a new query has to be answered, CC
avoids the cost of merging coresets from multiple levels in the coreset tree. Instead, it reuses previously
cached coresets and retrieves a small number of additional coresets from the coreset tree, thus leading to
less computation at query time.
However, the level of the resulting coreset increases linearly with the number of merges a coreset is
involved in. For instance, suppose we recursively merged the current coreset with the next arriving batch
to get a new coreset, and so on, for N batches. The resulting coreset will have a level of Θ(N), which can
lead to a very poor clustering accuracy. Additional care is needed to ensure that the level of a coreset is
controlled while caching is used.
Details: Each cached coreset is a summary of base buckets 1 through some number u. We call this number u the right endpoint of the coreset and use it as the key/index into the cache. We call the interval [1, u] the “span” of the bucket. To explain which coresets are cached by the algorithm, we introduce the following definitions.
For integers n > 0 and r > 0, consider the unique decomposition of n according to powers of r as n = Σ_{i=0}^j βi r^{αi}, where 0 ≤ α0 < α1 < ... < αj and 0 < βi < r for each i. The βi's can be viewed as the non-zero digits in the representation of n as a number in base r. Let minor(n, r) = β0 r^{α0}, the smallest term in the decomposition, and major(n, r) = n − minor(n, r). Note that when n is of the form βr^α where 0 < β < r and α ≥ 0, major(n, r) = 0.
For κ = 1 ... j, let n_κ = Σ_{i=κ}^j βi r^{αi}; n_κ can be viewed as the number obtained by dropping the κ smallest non-zero digits in the representation of n as a number in base r. The set prefixsum(n, r) is defined as {n_κ | κ = 1 ... j}. When n is of the form βr^α where 0 < β < r, prefixsum(n, r) = ∅.
For instance, suppose n = 47 and r = 3. Since 47 = 1 · 3³ + 2 · 3² + 2 · 3⁰, we have minor(47, 3) = 2, major(47, 3) = 45, and prefixsum(47, 3) = {27, 45}.
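These definitions are easy to implement directly. The following is an illustrative sketch (not the paper's code; the function names mirror the notation above) that reproduces the worked example:

```python
def decompose(n: int, r: int):
    """Non-zero digits of n in base r, as pairs (beta_i, alpha_i), low to high."""
    terms, alpha = [], 0
    while n > 0:
        n, beta = divmod(n, r)
        if beta > 0:
            terms.append((beta, alpha))
        alpha += 1
    return terms

def minor(n: int, r: int) -> int:
    beta, alpha = decompose(n, r)[0]      # smallest term in the decomposition
    return beta * r ** alpha

def major(n: int, r: int) -> int:
    return n - minor(n, r)

def prefixsum(n: int, r: int):
    terms = decompose(n, r)
    # n_kappa drops the kappa smallest non-zero digits, kappa = 1 .. j
    return {sum(b * r ** a for b, a in terms[k:]) for k in range(1, len(terms))}

# Example from the text: 47 = 1*3^3 + 2*3^2 + 2*3^0
assert minor(47, 3) == 2
assert major(47, 3) == 45
assert prefixsum(47, 3) == {27, 45}
```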
We have the following fact on the prefixsum.
Fact 12 Let r ≥ 2. For each N ∈ Z+ , prefixsum(N + 1, r) ⊆ prefixsum(N, r) ∪ {N}.
Proof: There are three cases.
Case I: N ≢ (r − 1) (mod r). Consider N in r-ary representation, and let δ denote the least significant digit. Since δ < (r − 1), in going from N to (N + 1) the only digit changed is the least significant one, which changes from δ to δ + 1, and no carry propagation takes place. Hence every element y ∈ prefixsum(N + 1, r) is also in prefixsum(N, r). The only exception is when N ≡ 0 (mod r), in which case one element of prefixsum(N + 1, r) is N itself. In either case, it is still true that prefixsum(N + 1, r) ⊆ prefixsum(N, r) ∪ {N}.
Case II: N ≡ (r − 1) (mod r) and, in the r-ary representation of N, all digits are (r − 1). In this case, (N + 1) is a power of r and can be written as a single term r^α with α ≥ 0, so prefixsum(N + 1, r) is the empty set and the claim holds.
Case III: N ≡ (r − 1) (mod r), but Case II does not hold. Consider the r-ary representation of N; there must be at least one digit less than (r − 1). Adding 1 to N changes a streak of (r − 1) digits to 0, starting from the least significant digit, until it reaches the first digit that is not (r − 1), which must be less than (r − 1). We refer to this digit as βk. N can be expressed in r-ary form as N = (βj βj−1 ··· βk+1 βk βk−1 ··· β1 β0)_r. Correspondingly, N + 1 = (βj βj−1 ··· βk+1 (1 + βk) 00 ··· 00)_r. Comparing the prefixsums of (N + 1) and N, the digits βj βj−1 ··· βk+1 remain unchanged, thus prefixsum(N + 1, r) ⊂ prefixsum(N, r).
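The three-case argument can also be verified exhaustively for small parameters. The snippet below is an illustrative brute-force check (not from the paper), with `prefixsum` re-implemented compactly:

```python
def prefixsum(n: int, r: int):
    # Set of values obtained by dropping the k smallest non-zero base-r digits.
    terms, alpha = [], 0
    while n:
        n, beta = divmod(n, r)
        if beta:
            terms.append(beta * r ** alpha)   # terms stored low to high
        alpha += 1
    return {sum(terms[k:]) for k in range(1, len(terms))}

# Fact 12: prefixsum(N + 1, r) is contained in prefixsum(N, r) union {N}.
for r in (2, 3, 5):
    for N in range(1, 2000):
        assert prefixsum(N + 1, r) <= prefixsum(N, r) | {N}
```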
CC caches every coreset whose right endpoint is in prefixsum(N, r). When a query arrives after N batches, the task is to compute a coreset whose span is [1, N]. CC partitions [1, N] as [1, N1] ∪ [N1 + 1, N] where N1 = major(N, r). Of these two intervals, [1, N1] is already available in the cache, and [N1 + 1, N] is retrieved from the coreset tree as the union of no more than (r − 1) coresets. Overall, this needs a merge of no more than r coresets. This is in contrast with CT, which may need to merge as many as (r − 1) coresets at each level of the tree, resulting in a merge of up to (r − 1) log_r N coresets at query time. The algorithm
for maintaining the cache and answering clustering queries is shown in Algorithm 3. See Figure 1 for an example of how the CC algorithm updates the cache and answers queries using cached coresets.
Note that to keep the size of the cache small, as new base buckets arrive, CC-Update ensures that “stale” or unnecessary coresets are removed.
Algorithm 3: Coreset Tree with Caching: Algorithm Description

def CC-Init(r, k, ε)
    Remember the parameters r, k, and ε
    ▷ The coreset tree
    Q ← CT-Init(r, k, ε)
    cache ← ∅

def CC-Update(b, N)
    ▷ b is a batch and N is the number of batches so far
    Remember N
    Q.CT-Update(b, N)
    ▷ May need to insert a coreset into cache
    if r divides N then
        c ← CC-Coreset()
        Add coreset c to cache using key N
        Remove from cache each bucket whose key does not appear in prefixsum(N + 1, r)

def CC-Coreset()
    ▷ Return a coreset of points in buckets 1 till N
    N1 ← major(N, r) and N2 ← minor(N, r)
    Let N2 = βr^α where α ≥ 0 and 0 < β < r
    ▷ a is the coreset for buckets N1 + 1, N1 + 2, ..., (N1 + N2) = N, retrieved from the coreset tree
    a ← ∪_{B∈Q_α} B
    ▷ b is the coreset spanning [1, N1], retrieved from the cache
    if N1 = 0 then
        b ← ∅
    else
        b ← cache.lookup(N1)
    C ← coreset(k, ε, a ∪ b)
    return C
The following lemma relates what the cache stores with the number of base buckets observed so far,
guaranteeing that Algorithm 3 can find the required coreset.
Lemma 13 Immediately before base bucket N arrives, each y ∈ prefixsum(N, r) appears in the key set of
cache.
Proof: The proof is by induction on N. The base case N = 1 is trivially true, since prefixsum(1, r) is empty. For the inductive step, assume that at the beginning of batch N, each y ∈ prefixsum(N, r) appears in cache. By Fact 12, we know that prefixsum(N + 1, r) ⊆ prefixsum(N, r) ∪ {N}. Using this, every bucket with a right endpoint in prefixsum(N + 1, r) is present in cache at the beginning of batch (N + 1), except possibly the coreset with right endpoint N. But every element of prefixsum(N + 1, r) is divisible by r, and the algorithm adds the coreset with key N to the cache whenever r divides N. Hence, the inductive step is proved.
Since major(N, r) ∈ prefixsum(N, r) for each N, we can always retrieve the bucket with span [1, major(N, r)]
from cache.
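The caching rule of CC-Update and the invariant of Lemma 13 can be simulated directly. The following sketch (illustrative only, tracking cache keys rather than actual coresets) checks the invariant over a few thousand batches:

```python
def prefixsum(n: int, r: int):
    terms, alpha = [], 0
    while n:
        n, beta = divmod(n, r)
        if beta:
            terms.append(beta * r ** alpha)
        alpha += 1
    return {sum(terms[k:]) for k in range(1, len(terms))}

r = 3
cache = set()                    # keys (right endpoints) of cached coresets
for N in range(1, 3000):
    # Lemma 13: before batch N arrives, prefixsum(N, r) is in the cache.
    assert prefixsum(N, r) <= cache
    if N % r == 0:               # CC-Update caches a coreset with key N ...
        cache.add(N)
        # ... and evicts keys that will never be needed again
        cache &= prefixsum(N + 1, r)
```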
Lemma 14 When queried after inserting batch N, Algorithm CC-Coreset returns a coreset whose level is no more than 2⌈log N/log r⌉ − 1.
Proof: Let χ(N) denote the number of non-zero digits in the representation of N as a number in base r. We show that the level of the coreset returned by Algorithm CC-Coreset is no more than ⌈log N/log r⌉ + χ(N) − 1. Since χ(N) ≤ ⌈log N/log r⌉, the lemma follows.
The proof is by induction on χ(N). If χ(N) = 1, then major(N, r) = 0, and the coreset is retrieved directly from the coreset tree Q. By Lemma 8, each coreset in Q is at a level no more than ⌈log N/log r⌉, and the base case follows. Suppose the claim was true for all N such that χ(N) = t. Consider N such that χ(N) = (t + 1). The algorithm computes N1 = major(N, r), and retrieves the coreset with span [1, N1] from the cache. Note that χ(N1) = t. By the inductive hypothesis, b, the coreset for span [1, N1], is at a level no more than ⌈log N/log r⌉ + t − 1. The coresets for span [N1 + 1, N] are retrieved from the coreset tree; note there may be multiple such coresets, but each of them is at a level no more than ⌈log N/log r⌉, using Lemma 8. Their union is denoted by a. The level of the final coreset for span [1, N] is no more than ⌈log N/log r⌉ + t, proving the inductive case.
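The step χ(N) ≤ ⌈log N/log r⌉, which converts the proof's bound into the stated one, can be checked numerically. This is an illustrative sanity test (not the paper's code):

```python
import math

def chi(n: int, r: int) -> int:
    """Number of non-zero digits of n in base r."""
    count = 0
    while n:
        n, beta = divmod(n, r)
        count += beta > 0
    return count

# chi(N) <= ceil(log N / log r): the number of non-zero digits is at most
# the total number of digits, which is at most ceil(log_r N) for N >= 2.
for r in (2, 3, 5):
    for N in range(2, 5000):
        assert chi(N, r) <= math.ceil(math.log(N) / math.log(r))
```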
Let the accuracy parameter ε = (c log r)/(2 log N), where c < ln 1.1. We have the following lemma on the accuracy of clustering centers returned by CC.
Lemma 15 After observing N batches, Algorithm StreamCluster-Query when using clustering data
structure CC, returns a set of k points whose clustering cost is within a factor of O(log k) of the optimal
k-means clustering cost.
Proof: From Lemma 14, we know that the level of the coreset returned is no more than 2⌈log N/log r⌉ − 1. Following an argument similar to Lemma 10, we arrive at the result.
Lemma 16 Algorithm 3 processes a stream of points using amortized time O(kd) per point, using memory of O((mdr log N)/log r). The amortized cost of answering a query is O(kdmr/q) per point.
Proof: The runtime for Algorithm CC-Update is the sum of the times to update the coreset tree Q and to update cache. We know from Lemma 11 that the time to update the coreset tree is O(kd) per point. To update the cache, note that CC-Update inserts a new coreset into the cache every r batches. The cost of computing this coreset is O(kmdr). Averaged over the mr points in r batches, the cost of maintaining cache is O(kd) per point. The overall update time for Algorithm CC-Update is O(kd) per point.
The coreset tree Q uses space O((mdr log N)/log r). After processing batch N, cache only stores those buckets corresponding to prefixsum(N + 1, r). The number of such buckets is O(log N/log r), so that the space cost of cache is O(md log N/log r). The space complexity follows.
At query time, Algorithm CC-Coreset combines no more than r buckets, of which there is no more than one bucket from the cache, and no more than (r − 1) from the coreset tree. It is necessary to run k-means++ on O(mr) points, using time O(kdmr). Since there is a query every q points, the amortized query time per point is O(kdmr/q).
4.2 Algorithm RCC: Recursive Coreset Cache
There are a few issues with the CC data structure. One is that the level of the coreset finally generated is O(log_r N). Since theoretical guarantees on the approximation quality of clustering worsen with an increase in the level of the coreset, it is natural to ask if the level can be reduced further, to O(1). Further, the time taken to process a query is linearly proportional to r; it would be interesting to reduce the query time further.
While it is desirable to simultaneously reduce the level of the coreset as well as the query time, at first glance these two goals seem to be inversely related. If we decreased the level of a coreset, leading to better accuracy, then we would have to increase the merge degree, which would in turn increase the query time. For example, if we set r = √N, then the level of the resulting coreset is O(1), but the query time will increase to O(√N).
In the following, we present a solution RCC that uses the idea of coreset caching in a recursive manner
to achieve both a low level of the coreset, as well as a small query time. In our approach, we keep the
merge degree of nodes relatively high, thus keeping the levels of coresets low. At the same time, we use
coreset caching even within a single level of a coreset tree, so that it is not necessary to merge r coresets at
query time. The coreset caching has to be done carefully, so that the level of the coreset does not increase
significantly.
For instance, suppose we built another coreset tree with merge degree 2 for the O(r) coresets within a single level of the current coreset tree; this would lead to a level of log r. At query time, we would need to aggregate O(log r) coresets from this tree, in addition to a coreset from the coreset cache. So, this leads to a level of O(max{log N/log r, log r}) and a query time proportional to O(log r). This is an improvement over the coreset cache, which has a query time proportional to r and a level of O(log N/log r).
We can take this idea further by recursively applying the same idea to the O(r) buckets within a single
level of the coreset tree. Instead of having a coreset tree with merge degree 2, we use a tree with a higher
merge degree, and then have a coreset cache for this tree to reduce the query time, and apply this recursively
within each tree. This way we can approach the ideal of a small level and a small query time. We are able
to achieve interesting tradeoffs, as shown in Table 2. In order to keep the level of the resulting coreset low,
along with the coreset cache for each level, we also maintain a list of coresets at each level, like in the CT
algorithm. In merging coresets to a higher level, the list is used, rather than the recursive coreset cache.
The RCC data structure is defined inductively as follows. For integer i ≥ 0, the RCC data structure of order i is denoted by RCC(i). RCC(0) is a CC data structure with a merge degree of r0 = 2. For i > 0, RCC(i) consists of:
• cache(i), a coreset cache that stores previously computed coresets.
• For each level ` = 0, 1, 2, ..., there are two structures. One is a list of buckets L`, similar to the structure Q` in a coreset tree; the maximum length of a list is r_i = 2^(2^i). The other is an RCC` structure, which is an RCC structure of a lower order (i − 1) that stores the same information as L`, except in a way that can be quickly retrieved during a query.
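The doubly exponential merge degrees are what make the recursion bottom out quickly. The following illustrative check (not the paper's code) shows the first few values of r_i and the identity r_{i+1} = r_i², which is why an order-i structure is emptied exactly when its list structure would have received r_{i+1} buckets:

```python
# Merge degree of the order-i structure: r_i = 2**(2**i).
def merge_degree(i: int) -> int:
    return 2 ** (2 ** i)

# First few merge degrees: 2, 4, 16, 256, ...
assert [merge_degree(i) for i in range(4)] == [2, 4, 16, 256]

# r_{i+1} = r_i ** 2: squaring the merge degree at each order.
for i in range(6):
    assert merge_degree(i + 1) == merge_degree(i) ** 2
```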
The main data structure R is initialized as R = RCC-Init(ι), for a parameter ι, to be chosen. Note that ι
is the highest order of the recursive structure. This is also called the “nesting depth” of the structure.
Lemma 17 When queried after inserting N batches, Algorithm 6 using RCC(ι) returns a coreset whose level is O(log N/2^ι). The amortized time cost of answering a clustering query is O(kdmι/q) per point.
Proof: Algorithm 6 retrieves a few coresets from RCC structures of different orders. From the outermost structure Rι = RCC(ι), it retrieves one coreset bι from cache(ι). Using an analysis similar to Lemma 14, the level of bι is no more than 2 log N/log rι.
Algorithm 4: RCC-Init(i)

R.order ← i, R.cache ← ∅, R.r ← 2^(2^i)
/* N is the number of batches so far */
R.N ← 0
foreach ` = 0, 1, 2, . . . do
    R.L` ← ∅
    if R.order > 0 then
        R.R` ← RCC-Init(R.order − 1)
return R
Algorithm 5: R.RCC-Update(b)

/* b is a batch of points */
R.N ← R.N + 1
/* Insert b into R.L0 and merge if needed */
Append b to R.L0
if R.order > 0 then
    recursively update R.R0 by R.R0.RCC-Update(b)
` ← 0
while (|R.L`| = R.r) do
    b′ ← BucketMerge(R.L`)
    Append b′ to R.L`+1
    if R.order > 0 then
        recursively update R.R`+1 by R.R`+1.RCC-Update(b′)
    ▷ Empty the list of coresets
    R.L` ← ∅
    ▷ Empty the cache
    if R.order > 0 then
        R.R` ← RCC-Init(R.order − 1)
    ` ← ` + 1
if R.r divides R.N then
    Bucket b′ ← R.RCC-Coreset()
    Add b′ to R.cache with right endpoint R.N
    From R.cache, remove all buckets b′′ such that right(b′′) ∉ prefixsum(R.N + 1, R.r)
Note that for i < ι, the maximum number of coresets that will be inserted into RCC(i) is r_{i+1} = r_i². The reason is that inserting r_{i+1} buckets into RCC(i) will cause the corresponding list structure for RCC(i) to become full, at which point the list and the RCC(i) structure are emptied out in Algorithm 5. From each recursive call within RCC(i), it can similarly be seen that the level of a coreset retrieved from the cache is at level
Algorithm 6: R.RCC-Coreset()

B ← R.RCC-Getbuckets()
C ← coreset(k, ε, B)
return bucket (C, 1, R.N, 1 + max_{b∈B} level(b))

Algorithm 7: R.RCC-Getbuckets()

N1 ← major(R.N, R.r)
b1 ← retrieve bucket with right endpoint N1 from R.cache
Let `* be the lowest numbered non-empty level among R.Li, i ≥ 0
if R.order > 0 then
    B2 ← R.R`*.RCC-Getbuckets()
else
    B2 ← R.L`*
return {b1} ∪ B2
no more than 2 log r_i/log r_{i−1}, which is O(1). The algorithm returns a coreset formed by the union of all these coresets, followed by a further merge step. Overall, the level of the coreset is one more than the maximum of the levels of all the coresets returned. This is O(log N/log rι).
For the query cost, note that the number of coresets merged at query time is equal to the nesting depth ι of the structure. The query time equals the cost of running k-means++ on the union of all these coresets, for a total time of O(kdmι). The amortized per-point cost of a query follows.
Lemma 18 The memory consumed by RCC(ι) is O(mdrι ). The amortized processing time is O(kdι) per point.
Proof: First, we note that in RCC(i) for i < ι, there are O(1) lists L`. The reason is as follows: in order to get a single bucket in list L2 within RCC(i), it is necessary to insert r_i² = r_{i+1} buckets into RCC(i). Since this is the maximum number of buckets that will be inserted into RCC(i), there are no more than three levels of lists within each RCC(i) for i < ι.
We prove by induction on i that RCC(i) has no more than 6r_i buckets. For the base case i = 0, we have r_0 = 2. In this case, RCC(0) has three levels, each with no more than 2 buckets, and the number of buckets in the cache is also a constant, so the total memory is no more than 6 buckets due to the lists at different levels plus no more than 2 buckets in the cache, for a total of 8 = 4r_0 buckets. For the inductive case, consider that RCC(i) has no more than three levels. The list at each level has no more than r_i buckets. The recursive structures R` within RCC(i) themselves have no more than 6r_{i−1} buckets. Adding the constant number of buckets within the cache, the total number of buckets within RCC(i) is no more than 3r_i + 6r_{i−1} + 2 = 3r_i + 6√r_i + 2 ≤ 6r_i, for r_i ≥ 16, i.e. i ≥ 2. Thus if ι is the nesting depth of the structure, the total memory consumed is O(mdrι), since each bucket requires O(md) space.
For the processing cost, when a bucket is inserted into R = RCC(ι), it is added to list L0 within R.
The cost of maintaining these lists in R and R.cache, including merging into higher level lists, is amortized
O(kd) per point, similar to the analysis in Lemma 16. The bucket is also recursively inserted into a RCC(ι−1)
structure, and a further structure within, and the amortized time for each such structure is O(kd) per point.
The total time cost is O(kdι) per point.
Figure 2: Illustration of Algorithm OnlineCC.
Different tradeoffs are possible by setting ι to specific values; some examples are shown in Table 2.

ι                  Coreset level at query   Query cost (per point)    Update cost (per point)   Memory
log log N − 3      O(1)                     O((kdm/q) log log N)      O(kd log log N)           O(mdN^{1/8})
(log log N)/2      O(√(log N))              O((kdm/q) log log N)      O(kd log log N)           O(md · 2^{√(log N)})

Table 2: Possible tradeoffs for the RCC(ι) algorithm, based on the parameter ι, the nesting depth of the structure.
4.3 Online Coreset Cache: a Hybrid Approach of CC and Sequential k-means
If we break down the query runtime of the algorithms considered so far, we observe two major components. The first component is the construction of the coreset of all points seen so far, through merging stored coresets. The second component is the k-means++ algorithm applied on the resulting coreset. The algorithms discussed so far, CC and RCC, are focused on decreasing the runtime of the first component, coreset construction, by reducing the number of coresets to be merged at query time. But they still have to pay the cost of the second component, k-means++, which is substantial in itself, since the runtime of k-means++ is O(kdm) where m is the size of the coreset. To make further progress, we have to reduce this component. However, the difficulty in eliminating k-means++ at query time is that without an approximation algorithm such as k-means++ we do not have a way to guarantee that the returned clustering is an approximation to the optimal.
We present an algorithm, OnlineCC, which only occasionally uses k-means++ at query time and, most of the time, uses a much cheaper method of cost O(1) to compute the clustering centers. OnlineCC uses a
combination of CC and the Sequential k-means algorithm [7] (a.k.a. Online Lloyd's algorithm) to maintain the cluster centers quickly while also providing a guarantee on the quality of clustering. OnlineCC
continuously maintains cluster centers in a manner similar to [7], where each arriving point incrementally
updates the current set of cluster centers. While Sequential k-means can process incoming points (and
answer queries) extremely quickly, it cannot provide any guarantees on the quality of answers, and in some
cases, the clustering quality can be very poor when compared with say, k-means++. To guard against such
deterioration in clustering quality, our algorithm (1) falls back to a provably accurate clustering algorithm
CC occasionally, and (2) runs Sequential k-means only so long as the clustering cost does not get much
larger than the previous time CC was used. This ensures that our clusters always have a provable quality with
respect to the optimal.
In order to achieve the above, OnlineCC also processes incoming points using CC, thus maintaining coresets of substreams of the data seen so far. When a query arrives, it is typically answered in O(1) time using the centers maintained by Sequential k-means. If, however, the clustering cost is significantly higher (by more than a factor of α, for a parameter α > 1) than at the previous time the algorithm fell back to CC, then the query processing returns to CC to regenerate a coreset. One difficulty in implementing this idea is that (efficiently) maintaining an estimate of the current clustering cost is not easy,
implementing this idea is that (efficiently) maintaining an estimate of the current clustering cost is not easy,
since each change in cluster centers can affect the contribution of a number of points to the clustering cost.
To reduce the cost of maintenance, our algorithm maintains an upper bound on the clustering cost; as we
show further, this is sufficient to give a provable guarantee on the quality of clustering. Further details on
how the upper bound on the clustering cost is maintained, and how Sequential k-means and CC interact
are shown in Algorithm 8, with a schematic in Figure 2.
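The per-point work OnlineCC does between fallbacks can be sketched as follows. This is a minimal illustrative implementation (assumed helper name `seq_kmeans_update`, not the paper's code) of the Sequential k-means step: assign the point to its nearest center, move that center toward the point by a weighted centroid update, and add the point's assignment cost to the running upper bound EstCost:

```python
def seq_kmeans_update(centers, weights, est_cost, p):
    # Find the nearest center c_p to the new point p.
    j = min(range(len(centers)),
            key=lambda i: sum((a - b) ** 2 for a, b in zip(centers[i], p)))
    d2 = sum((a - b) ** 2 for a, b in zip(centers[j], p))
    # Weighted centroid update: c'_p = (w * c_p + p) / (w + 1).
    w = weights[j]
    centers[j] = tuple((w * a + b) / (w + 1) for a, b in zip(centers[j], p))
    weights[j] = w + 1
    # EstCost grows by the assignment cost of p.
    return est_cost + d2

centers = [(0.0, 0.0), (10.0, 10.0)]
weights = [1, 1]
est = seq_kmeans_update(centers, weights, 0.0, (2.0, 0.0))
assert centers[0] == (1.0, 0.0) and weights[0] == 2
assert est == 4.0
```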
We state the properties of Algorithm OnlineCC.
Lemma 19 In Algorithm 8, after observing point set P, if C is the current set of cluster centers, then
EstCost is an upper bound on φC (P).
Proof: Consider the value of EstCost between two consecutive switches to CC. Without loss of generality, suppose a switch happens at time 0, and let P0 denote the points observed until time 0 (including the points received at time 0). We proceed by induction on the number of points received after time 0, denoted i; thus Pi is P0 together with the i points received after time 0.
When i = 0, we compute C from the coreset CS. From the coreset definition,

cost0 = φ_C(CS) ≥ (1 − ε) · φ_C(P0),

where ε is the approximation factor of the coreset CS. So for dataset P0, the estimate EstCost = cost0/(1 − ε) is at least the k-means cost φ_C(P0).
At time i, let Ci denote the cluster centers maintained and EstCost_i the estimate of the k-means cost, and assume inductively that EstCost_i ≥ φ_{Ci}(Pi).
Consider the arrival of a new point p, and let c_p be the nearest center in Ci to p. We compute c′_p, the new position of the center c_p, and let C_{i+1} = Ci \ {c_p} ∪ {c′_p} denote the new center set. We know that

φ_{Ci}(Pi) + ‖p − c′_p‖² ≥ φ_{C_{i+1}}(Pi).

As c′_p is the weighted centroid of c_p and p, we have

‖p − c′_p‖ < ‖p − c_p‖ = min_{c∈Ci} ‖p − c‖.

So c′_p is the nearest center in C_{i+1} to p. Adding these together, we get

φ_{Ci}(Pi) + ‖p − c_p‖² ≥ φ_{C_{i+1}}(P_{i+1}).
Algorithm 8: The Online Coreset Cache: a hybrid of CC and Sequential k-means algorithms

def OnlineCC-Init(k, ε, α)
    Remember the coreset approximation factor ε, merge degree r, and parameter α > 1, the threshold to switch the query processing to CC
    ▷ C is the current set of cluster centers
    Initialize C by running k-means++ on set S0 consisting of the first O(k) points of the stream
    ▷ cost0 is the clustering cost during the previous “fallback” to CC; EstCost is an estimate of the clustering cost of C on the stream so far
    cost0 ← clustering cost of C on S0
    EstCost ← cost0
    Q ← CC-Init(r, k, ε)

def OnlineCC-Update(p)
    ▷ On receiving a new point p from the stream
    Assign p to the nearest center c_p in C
    EstCost ← EstCost + ‖p − c_p‖²
    ▷ c′_p is the weighted centroid of c_p and p, where w is the weight of c_p
    c′_p ← (w · c_p + p)/(w + 1)
    Update center c_p in C to c′_p
    Add p to the current batch b; if |b| = m, then execute Q.CC-Update(b)

def OnlineCC-Query()
    if EstCost > α · cost0 then
        CS ← Q.CC-Coreset() ∪ b, where b is the current batch that has not been inserted into Q
        C ← k-means++(k, CS)
        cost0 ← φ_C(CS), the k-means cost of C on CS
        EstCost ← cost0/(1 − ε)
    return C
From the assumption EstCost_i ≥ φ_{Ci}(Pi), and since EstCost_{i+1} = EstCost_i + ‖p − c_p‖², we conclude that EstCost_{i+1} ≥ φ_{C_{i+1}}(P_{i+1}).
Lemma 20 When queried after observing point set P, the OnlineCC algorithm (Algorithm 8) returns a
set of k points C whose clustering cost is within O(log k) of the optimal k-means clustering cost of P, in
expectation.
Proof: Let φ∗ (P) denote the optimal k-means cost for P. Our goal is to show that φC (P) = O(log k)φ∗ (P).
There are two cases for handling the query.
Case I: When C is directly retrieved from CC, this case is handled through the correctness of CC: using Lemma 15 we have E[φ_C(P)] ≤ O(log k) · φ*(P).
Case II: The query algorithm does not fall back to CC. We first note from Lemma 19 that φC (P) ≤
EstCost. Since the algorithm did not fall back to CC, we have EstCost ≤ α · cost0 . Since cost0 was the
result of applying CC to P0 , we have from Lemma 15 that cost0 ≤ O(log k) · φ∗ (P0 ). Since P0 ⊆ P, we know
that φ∗ (P0 ) ≤ φ∗ (P). Putting together the above four inequalities, we have φC (P) = O(log k) · φ∗ (P).
5 Experimental Evaluation
In this section, we present results from an empirical evaluation of the performance of algorithms proposed
in this paper. Our goals are twofold: to understand the relative clustering accuracy and running time of
different algorithms in the context of continuous queries, and to investigate how they behave under different
settings of parameters.
5.1 Datasets
We work with the following real-world or semi-synthetic datasets, based on data from the UCI Machine
Learning Repositories [17], all of which have been used in the past for benchmarking clustering algorithms.
A summary of the datasets used appears in Table 3.
The Covtype dataset models the forest cover type prediction problem from cartographic variables. The
dataset contains 581, 012 instances and 54 integer attributes. The Power dataset measures electric power
consumption in one household with a one-minute sampling rate over a period of almost four years. We
remove the instances with missing values, resulting in a dataset with 2, 049, 280 instances and 7 real attributes. The Intrusion dataset is the 10% subset of the KDD Cup 1999 data. The competition task was
to build a predictive model capable of distinguishing between normal network connections and intrusions.
We ignore symbolic attributes, resulting in a dataset with 494, 021 instances and 34 real attributes. For the
above datasets, to erase any potential special ordering within data, we randomly shuffle each dataset before
consuming it as a data stream.
The above datasets, as well as most datasets used in previous works on streaming clustering have been
static datasets that have been converted into streams by reading them in some sequence. To better model the
evolving nature of data streams and drift in location of centers, we generate a semi-synthetic dataset that we
call Drift based on the USCensus1990 dataset from [17]. The method of data generation is as follows, and
is inspired by [18]. We first cluster the USCensus1990 dataset to compute 20 cluster centers and for each
cluster, the standard deviation of the distances to the cluster center. Then we generate the synthetic dataset
using the Radial Basis Function (RBF) data generator from the MOA stream mining framework [19]. The
RBF generator moves the drifting centers with a user-given direction and speed. For each time step, the
RBF generator creates 100 random points around each center using a Gaussian distribution with the cluster
standard deviation. In total, the synthetic dataset contains 200,000 instances and 68 real attributes.
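The Drift generation steps above can be sketched as follows. This is a simplified stand-in for the MOA RBF generator, not its actual implementation; the function name, the drift model (a fixed random direction per center), and all defaults are our own illustrative assumptions:

```python
import random

def drift_stream(centers, stds, steps, drift=0.01, points_per_step=100, seed=0):
    """Yield a stream of points drawn around slowly drifting cluster centers.

    Each time step, every center moves by `drift` along its own fixed random
    direction, then `points_per_step` Gaussian points are emitted around it.
    """
    rng = random.Random(seed)
    d = len(centers[0])
    directions = [[rng.uniform(-1, 1) for _ in range(d)] for _ in centers]
    centers = [list(c) for c in centers]  # private copies we can mutate
    for _ in range(steps):
        for c, v in zip(centers, directions):
            for i in range(d):
                c[i] += drift * v[i]
        for c, s in zip(centers, stds):
            for _ in range(points_per_step):
                yield [rng.gauss(ci, s) for ci in c]
```

With 20 centers and the per-cluster standard deviations computed from USCensus1990, this would reproduce the shape (though not the exact contents) of the 200,000-point Drift stream.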
Dataset      Number of Points   Dimension
Covtype      581,012            54
Power        2,049,280          7
Intrusion    494,021            34
Drift        200,000            68
Table 3: Summary of Datasets.
5.2 Experimental Setup and Implementation Details
We implemented all the clustering algorithms using Java, and ran experiments on a desktop with Intel Core
i5-4460 3.2GHz processor and 8GB main memory.
Algorithms: For comparison, we used two prominent streaming clustering algorithms. (1) One is the
Sequential k-means algorithm due to [7], which is also implemented in clustering packages today. For
Sequential k-means clustering, we used the implementation in Apache Spark MLLib [10], except that
our implementation is sequential. The initial centers are set to the first k points in the stream instead of
being set by random Gaussians, because the latter may cause some clusters to be empty. (2) We also implemented
streamkm++ [1], a current state-of-the-art algorithm with good practical performance. streamkm++ can be
viewed as a special case of CT where the merge degree r is 2. The bucket size is set to be 10k where k is the
number of centers.²
For CC, we set the merge degree to 2, in line with streamkm++. For RCC, the maximum nesting depth is
3, so the merge degrees for the different structures are N^(1/2), N^(1/4) and N^(1/8) respectively, where N is the total number
of buckets. For OnlineCC, the threshold α is set to 1.2. To compute the EstCost after each fall back to CC,
we need to know the value of the cluster standard deviation D. This value is estimated using the coreset,
which represents all the points received so far.
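The fall-back logic can be read roughly as follows. The exact EstCost formula is not restated here, so the sketch below simply treats it as a given estimate and accumulates the SSQ of incoming points against the cached centers; the class and function names are hypothetical, not from the paper's implementation:

```python
def nearest_sq_dist(point, centers):
    # squared Euclidean distance from `point` to its nearest center
    return min(sum((p - c) ** 2 for p, c in zip(point, ctr)) for ctr in centers)

class OnlineCostMonitor:
    """Accumulate the cost of serving queries from cached centers and signal
    when it exceeds alpha times the estimated cost, triggering a fall back
    to a fresh coreset-based clustering (CC)."""

    def __init__(self, centers, est_cost, alpha=1.2):
        self.centers = centers
        self.est_cost = est_cost
        self.alpha = alpha
        self.cost = 0.0

    def observe(self, point):
        self.cost += nearest_sq_dist(point, self.centers)
        return self.cost > self.alpha * self.est_cost  # True => fall back
```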
Finally, as the baseline on the accuracy of stream clustering algorithms, we use the batch k-means++
algorithm, which is expected to outperform every streaming algorithm. We report the median error due to
five independent runs of each algorithm for each setting. The same applies to runtime as well.
To compute a coreset, we use the k-means++ algorithm (similar to [1, 2]). Note since the size of the
coreset is greater than k, k-means++ is used with multiple centers chosen in each iteration, to control the
number of iterations. We also use k-means++ as the final step to construct k centers from the coreset, and
take the best clustering out of five independent runs of k-means++; each instance of k-means++ is followed
by up to 20 iterations of Lloyd’s algorithm to further improve clustering quality. The number of clusters k
is chosen from the set {10, 15, 20, 25, 30}.
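For reference, k-means++ seeding works by D² sampling: after the first uniformly chosen center, each subsequent center is picked with probability proportional to the squared distance to the nearest center already chosen. A minimal single-center-per-iteration sketch (the paper's implementation picks multiple centers per iteration, which this simplification omits; all names are ours):

```python
import random

def kmeans_pp_seed(points, k, rng=None):
    """Choose k initial centers by k-means++ D^2 sampling."""
    rng = rng or random.Random(0)

    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    centers = [list(rng.choice(points))]
    while len(centers) < k:
        weights = [min(d2(p, c) for c in centers) for p in points]
        total = sum(weights)
        if total == 0:  # every point coincides with a chosen center
            centers.append(list(rng.choice(points)))
            continue
        r = rng.uniform(0, total)
        acc = 0.0
        for p, w in zip(points, weights):
            acc += w
            if acc >= r:
                centers.append(list(p))
                break
    return centers
```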
Metrics: We evaluate the clustering accuracy through the standard within cluster sum of squares (SSQ)
metric, which is also the k-means objective function. We also measure the runtime of each algorithm over
the entire dataset. Further, runtime is split into two parts: (1) update time, the
time required to update internal data structures upon receiving new data, and (2) query time, the time required to
answer the clustering queries. There is a query posed for cluster centers for every q points observed.
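Concretely, the SSQ of a set of centers over a dataset is computed as follows (a minimal sketch; the function name is ours):

```python
def ssq(points, centers):
    """Within-cluster sum of squares (the k-means objective): each point
    contributes the squared Euclidean distance to its nearest center."""
    return sum(
        min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers)
        for p in points
    )
```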
5.3 Discussion of Experimental Results
Accuracy (k-means cost): Consider Figures 3 and 4. Figure 4 shows the k-means cost versus k when the
query interval is 100 points. For the Intrusion data, the result of Sequential k-means is not shown
since its cost is much larger (by a factor of about 10^5) than that of the other methods. Not surprisingly, for all
algorithms studied, the clustering cost decreases with k. For all the datasets, Sequential k-means always
achieves the highest k-means cost, in some cases (such as Intrusion), much higher than other methods.
This shows that Sequential k-means is consistently worse than the other methods, when it comes to
clustering accuracy – this is as expected, since unlike the other methods, Sequential k-means does not
have a theoretical guarantee on clustering quality. A similar trend is also observed on the plot with the
k-means cost versus the number of points received, Figure 3.
The other algorithms, streamkm++, CC, RCC, and OnlineCC all achieve very similar clustering cost,
on all data sets. In Figure 4, we also show the cost of running a batch algorithm k-means++ (followed by
iterations of Lloyd’s algorithm). We found that the clustering costs of the streaming algorithms are nearly
the same as that of running the batch algorithm, which can see the input all at once! Indeed, we cannot
² A larger bucket size such as 200k can yield slightly better clustering quality, but this led to a high runtime for streamkm++, especially when queries are frequent, hence we stay with a smaller bucket size.
[Plots for Figure 3: k-means cost vs. number of points for Sequential, StreamKM++, CC, RCC and OnlineCC; panels (a) Covtype, (b) Power, (c) Intrusion, (d) Drift.]
Figure 3: k-means cost vs. number of points. The number of centers k = 20. The k-means cost of
Sequential k-means on Intrusion dataset is not shown in Figure (c), since it was orders of magnitude
larger than the other algorithms.
expect the streaming clustering algorithms to perform any better than this.
According to theory, as the merge degree r increases, the clustering accuracy increases. With this reasoning, we should see RCC achieve the highest clustering accuracy (lowest clustering cost), better than that
of CC and streamkm++; e.g. for Covtype when k is 20, the merge degree of RCC is 53, compared with 2
for streamkm++. But our experimental results do not show such behavior, and RCC and streamkm++ show
similar accuracy. Further, their accuracy matches that of batch k-means++. A possible reason for this may
be that our theoretical analyses of streaming clustering methods are too conservative, and/or there is structure
within real data that we can better exploit to predict clustering accuracy.
Update Time: Figures 5–7 show the results of running time versus the number of clusters, when the query
interval is 100 points. Due to the inferior accuracy of Sequential k-means, its runtime results are not
shown in these figures. The update runtimes of streamkm++, CC and OnlineCC all increase
linearly with the number of centers, as the amortized update time is proportional to k. The update time of
[Plots for Figure 4: k-means cost vs. number of clusters k for Sequential, StreamKM++, CC, RCC, OnlineCC and KMeans++; panels (a) Covtype, (b) Power, (c) Intrusion, (d) Drift.]
Figure 4: k-means cost vs. number of centers k for different algorithms. The cost is computed at the end of
observing all the points. The k-means cost of Sequential k-means on Intrusion dataset is not shown
in Figure (c), since it was orders of magnitude larger than the other algorithms.
OnlineCC is nearly the same as CC, since OnlineCC-Update calls CC-Update, with a small amount of
additional computation. Among the four algorithms that we compare, RCC has the largest update time, since
it needs to update multiple levels of the cache as well as the coreset tree.
Query Time: From Figure 6, we see that OnlineCC has the fastest query time, followed by RCC, and
CC, and finally by streamkm++. Note that the y-axis in Figure 6 is in log scale. We note that OnlineCC is
significantly faster than the rest of the algorithms. For instance, it is about two orders of magnitude faster
than streamkm++ for q = 100. This shows that the algorithm succeeds in achieving significantly faster
queries than streamkm++, while maintaining the same clustering accuracy.
Total Time: Figure 7 shows the total runtime, the sum of the update time and the query time, as a
function of k, for q = 100. For streamkm++, the query time dominates the update time, hence the total time is
close to its query time. For OnlineCC, however, the update time is greater than the query time, hence the
total time is substantially larger than its query time. Overall, the total time of OnlineCC is still nearly 5-10
[Plots for Figure 5: update time (seconds) vs. number of clusters k for StreamKM++, CC, RCC and OnlineCC; panels (a) Covtype, (b) Power, (c) Intrusion, (d) Drift.]
Figure 5: Update time (seconds) vs. number of centers k for different algorithms. The query interval q is
100 points.
times faster than streamkm++.
We next consider how the runtime varies with q, the query interval. Figure 8 shows the algorithm total
run time as a function of the query interval q. Note that the update time does not change with q, and is not
shown here. The trend for query time is similar to that shown for total time, except that the differences are
more pronounced. We note that the total time for OnlineCC is consistently the smallest, and does not change
with an increase in q. This is because OnlineCC essentially maintains the cluster centers on a continuous
basis, while occasionally falling back to CC to recompute coresets, to improve its accuracy. For the other
algorithms including CC, RCC, and streamkm++, the query time and the total time decrease as q increases
(and queries become less frequent). As q approaches 5000, the total time stabilizes, since at this point update
time dominates the query time for all algorithms.
[Plots for Figure 6: query time (seconds, log scale) vs. number of clusters k for StreamKM++, CC, RCC and OnlineCC; panels (a) Covtype, (b) Power, (c) Intrusion, (d) Drift.]
Figure 6: Query time (seconds) vs. number of centers k for different algorithms. The query interval q is
100 points.
6 Conclusion
We have presented fast algorithms for streaming k-means clustering that are capable of answering queries
quickly. When compared with prior methods, our method provides a significant speedup—both in theory
and practice—in query processing while offering provable guarantees on accuracy and memory cost. The
general framework that we present for “coreset caching” may be applicable to other streaming algorithms
that are built around the Bentley-Saxe decomposition. Many open questions remain, including (1) improved
handling of concept drift, through the use of time-decaying weights, and (2) clustering on distributed and
parallel data streams.
[Plots for Figure 7: total time (seconds) vs. number of clusters k for StreamKM++, CC, RCC and OnlineCC; panels (a) Covtype, (b) Power, (c) Intrusion, (d) Drift.]
Figure 7: Total time (seconds) vs. number of centers k. Total time is the sum of update time and the query
time. The query interval q is 100 points.
References
[1] M. R. Ackermann, M. Märtens, et al., “StreamKM++: A clustering algorithm for data
streams,” J. Exp. Algorithmics, vol. 17, no. 1, pp. 2.4:2.1–2.4:2.30, 2012.
[2] N. Ailon, R. Jaiswal, and C. Monteleoni, “Streaming k-means approximation,” in NIPS, 2009, pp.
10–18.
[3] S. Guha, A. Meyerson, et al., “Clustering data streams: Theory and practice,” IEEE TKDE,
vol. 15, no. 3, pp. 515–528, 2003.
[4] M. Shindler, A. Wong, and A. Meyerson, “Fast and accurate k-means for large datasets,” in NIPS,
2011, pp. 2375–2383.
[Plots for Figure 8: total time (seconds) vs. query interval q for StreamKM++, CC, RCC and OnlineCC; panels (a) Covtype, (b) Power, (c) Intrusion, (d) Drift.]
Figure 8: Total time as a function of the query interval q. For every q points, there is a query for the cluster
centers. The number of centers k is set to 20.
[5] S. Har-Peled and S. Mazumdar, “On coresets for k-means and k-median clustering,” in STOC, 2004,
pp. 291–300.
[6] D. Arthur and S. Vassilvitskii, “k-means++: The advantages of careful seeding,” in SODA, 2007, pp.
1027–1035.
[7] J. B. MacQueen, “Some methods for classification and analysis of multivariate observations,” in Proc.
of the fifth Berkeley Symposium on Mathematical Statistics and Probability, 1967, pp. 281–297.
[8] S. P. Lloyd, “Least squares quantization in PCM,” IEEE Trans. Information Theory, vol. 28, no. 2, pp.
129–136, 1982.
[9] B. Bahmani, B. Moseley, A. Vattani, R. Kumar, and S. Vassilvitskii, “Scalable k-means++,” PVLDB,
vol. 5, pp. 622–633, 2012.
[10] X. Meng, J. K. Bradley, et al., “MLlib: Machine learning in Apache Spark,” J. Mach. Learn.
Res., vol. 17, pp. 1235–1241, 2016.
[11] T. Kanungo, D. M. Mount, et al., “A local search approximation algorithm for k-means
clustering,” Computational Geometry, vol. 28, no. 2–3, pp. 89–112, 2004.
[12] T. Zhang, R. Ramakrishnan, and M. Livny, “Birch: An efficient data clustering method for very large
databases,” in SIGMOD, 1996, pp. 103–114.
[13] C. C. Aggarwal, J. Han, J. Wang, and P. S. Yu, “A framework for clustering evolving data streams,” in
PVLDB, 2003, pp. 81–92.
[14] J. L. Bentley and J. B. Saxe, “Decomposable searching problems I: Static-to-dynamic transformation,”
Journal of Algorithms, vol. 1, pp. 301–358, 1980.
[15] S. Har-Peled and A. Kushal, “Smaller coresets for k-median and k-means clustering,” Discrete Comput.
Geom., vol. 37, no. 1, pp. 3–19, 2007.
[16] D. Feldman and M. Langberg, “A unified framework for approximating and clustering data,” in STOC,
2011, pp. 569–578.
[17] M. Lichman, “UCI machine learning repository,” 2013. [Online]. Available: http://archive.ics.uci.edu/
ml
[18] J. P. Barddal, H. M. Gomes, F. Enembreck, and J.-P. Barths, “Sncstream+: Extending a high quality
true anytime data stream clustering algorithm,” Information Systems, vol. 62, pp. 60 – 73, 2016.
[19] A. Bifet, G. Holmes, R. Kirkby, and B. Pfahringer, “Moa: Massive online analysis,” J. Mach. Learn.
Res., vol. 11, pp. 1601–1604, 2010.
A Framework for Accurate Drought Forecasting System Using Semantics-Based Data Integration Middleware
arXiv:1706.07294v1 [cs.DB] 20 Jun 2017
Adeyinka K. Akanbi and Muthoni Masinde
Department of Information Technology
Central University of Technology, Free State, South Africa
{aakanbi,emasinde}@cut.ac.za
Abstract. Technological advancement in Wireless Sensor Networks (WSNs) has made them an invaluable component of reliable environmental monitoring systems; they form the 'digital skin' through which to 'sense' and collect the context of the surroundings, and they provide information on the processes leading to complex events such as drought. However, these environmental properties are measured by various heterogeneous sensors of different modalities in distributed locations making up the WSN, which in most cases use different abstruse terms and vocabulary to denote the same observed property, causing data heterogeneity. Adding semantics, understanding the relationships that exist between the observed properties, and augmenting them with local indigenous knowledge are necessary for an accurate drought forecasting system. In this paper, we propose a framework for the semantic representation of sensor data and its integration with indigenous knowledge on drought, using a middleware, for an efficient drought forecasting system.
Key words: middleware, internet of things, drought forecasting, semantic integration, ontology, interoperability, semantic technology
1 Introduction
The application of Semantic Technology for drought forecasting is a growing research area. Our work investigates the semantic representation and integration of
measured environmental entities with the local Indigenous knowledge (IK) using
an ontology to allow reasoning and generate inferences based on their interrelationships. We present a proposed model which outlines our research directions; [7] provides a further overview of the framework towards an accurate drought forecasting system.
In terms of negative impacts, droughts are currently ranked number one¹ (CRED 2012). Compared to other natural disasters such as floods, hurricanes,
earthquakes and epidemics, droughts are very difficult to predict; they creep
¹ The ranking is based on severity, length of event, total area affected, total loss of life, total economic loss, social effect, long-term impacts, suddenness and frequency [1].
slowly and last longest. The complex nature of drought onset and termination has earned it the title "the creeping disaster" [2]. The greatest challenge is
designing a framework which can track information about the ’what’, ’where’
and ’when’ of environmental phenomena and the representation of the various
dynamic aspects of the phenomena [3]. The representation of such phenomena
requires better understanding of the ’process’ that leads to the ’event’. For example, a soil moisture sensor provides sets of values for the observed property
soil moisture. The measured property can also be influenced by the temperature heat index measured over the observed period. This makes accurate prediction based on these sensor values almost impossible without understanding
the semantics and relationships that exist between these various properties. Hypothetically, drought prediction tools could be used to establish precise drought
development patterns as early as possible and provide sufficient information to
decision-makers to prepare for droughts long before they happen. This way, the predictions can be used to mitigate the effects of droughts.
The technological advancement in Wireless Sensor Networks (WSNs) has facilitated their use in monitoring environmental properties irrespective of the geographical location. In their current implementation, these properties
are measured using heterogeneous sensors that are mostly distributed in different locations. Further, different abstruse terms and vocabulary in most cases
are used to denote the same observed property, thereby leading to data heterogeneity. Moreover, research [4], [5] on indigenous knowledge (IK) on droughts
has pointed to the fact that IK on living and non-living things, e.g., sifennefene worms, peulwane birds, lehota frogs, and plants like the mutiga and mothokolo trees, can indicate drier or wetter conditions, which can imply the likely occurrence of a drought event over time [6]. This scenario shows that environmental events can
be inferred from sensor data augmented with IK, if proper semantics are attached to them based on some set of indicators. Therefore, a semantics-based data integration middleware is required to bridge the gap between heterogeneous sensor
data and IK for an accurate drought forecasting and prediction system.
2 Problem Statements
The following problems were identified as major bottlenecks for the utilization of semantic technologies for drought forecasting:
The current lack of an ontology-based middleware for the semantic representation of environmental processes: Ontological modeling of key concepts of environmental phenomena, such as object, state, process and event, ensures the drawing of accurate inferences from the sequence of processes that lead to an event. What is currently missing is an environmental ontology with well-defined vocabularies that allows explicit representation of processes and events and also attaches semantics to the participants in the environmental domain.
Lack of semantic integration of heterogeneous data sources with indigenous knowledge for accurate environmental forecasting: Studies reveal that over
80% of farmers in some parts of Kenya, Zambia, Zimbabwe and South Africa
Semantics-Based Data Integration Middleware
3
rely on Indigenous knowledge forecasts (IKF) for their agricultural practices [5].
An IoT-based environmental monitoring system made up of interconnected heterogeneous weather information sources such as sensors, mobile phones, conventional weather stations, and indigenous knowledge could improve the accuracy
of environmental forecasting.
Lack of IoT-based drought forecast communication and dissemination channels: There is a lack of effective dissemination channels for drought forecasting information, for example, the absence of smart billboards placed at strategic locations and of smart phones. Such output channels would ensure that farmers have access to drought forecasting information and know the spatial distribution of a drought vulnerability index.
3 Research Questions
To what extent does the adoption of knowledge representation and semantic technology in the development of a middleware enable seamless sharing and exchange
of data among heterogeneous IoT entities?
Several standards have been created to cope with the data heterogeneities. Examples are the Sensor Model Language (SensorML)², WaterML, and the American Federal Geographic Data Committee (FGDC) standard³. However, these standards
provide sensor data to a predefined application in a standardized format, and
hence do not generally solve data heterogeneity. Semantic technology solves this
by representing data in a machine-readable language such as the Resource Description Framework (RDF) and the Web Ontology Language (OWL), for seamless data
sharing irrespective of the domain.
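To illustrate the idea (this is not the proposed middleware, and the vocabulary mapping below is invented for illustration): heterogeneous sensor terms can be normalized onto one shared vocabulary and emitted as RDF-style subject-predicate-object triples, after which data from different sensors become directly comparable:

```python
# Hypothetical map from vendor-specific terms to a shared vocabulary.
TERM_MAP = {
    "soilMoist": "soil_moisture",
    "sm_pct": "soil_moisture",
    "tempC": "air_temperature",
    "t_air": "air_temperature",
}

def to_triples(sensor_id, readings):
    """Turn raw (term, value) readings into (subject, predicate, object)
    triples over the shared vocabulary; unknown terms are skipped."""
    triples = []
    for term, value in readings.items():
        prop = TERM_MAP.get(term)
        if prop is not None:
            triples.append((sensor_id, prop, value))
    return triples
```

In a real deployment this normalization would be driven by the ontology rather than a hard-coded dictionary, and the triples would be serialized as RDF/OWL.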
What are the main components of an implementation framework/architecture
that employs the middleware to implement an IoT-based Drought Early Warning System (DEWS)?
The existence of an ontology with well-defined vocabularies that allows an explicit representation of processes and events; the representation and integration of the inputs in machine-readable formats; and the availability of a reasoning engine (CEP engine) that generates inferences based on input parameters.
4 Methodology
The proposed semantic middleware is a software layer composed of a set of
various sub-layers interposed between the application layer and the physical
layer. It incorporates interface protocols, which liaise with the storage database
in the cloud for downloading the semi-processed sensory readings to be represented based on the ontology through a mediator device, as shown in Figure 3 of [7].
An environmental process-based ontology is required to overcome the problems
² http://www.opengeospatial.org/standards
³ https://www.fgdc.gov/metadata
associated with the dynamic nature of environmental data and the data heterogeneities. The study proposes to use the DOLCE top-level ontology for the modelling of the foundational entities needed to represent the dynamic phenomena.
Information from the sensor data streams is integrated with indigenous knowledge using a Complex Event Processing (CEP) engine, as proposed in Figure 1.
This will serve as the reasoning engine for inferring patterns leading to drought,
based on a set of rules derived from indigenous knowledge of the local people
on drought. Figure 2 depicts the overview of the middleware architecture. The
domain of this particular case study is Free State Province, South Africa - an
ongoing research project by AfriCRID⁴, Department of Information Technology,
Central University of Technology, Free State.
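As a toy illustration of the kind of rule such a CEP engine might evaluate when combining sensor readings with IK indicators, consider the sketch below; the thresholds, indicator names and scoring scheme are entirely hypothetical and are not taken from the proposed system:

```python
def drought_risk(obs, ik_indicators):
    """Score simple dryness evidence from sensor observations plus
    indigenous-knowledge indicators; two or more concurrent signals
    yield a 'high' risk inference."""
    score = 0
    if obs.get("soil_moisture", 100.0) < 15.0:   # very dry soil
        score += 1
    if obs.get("air_temperature", 0.0) > 32.0:   # sustained heat
        score += 1
    if any(ik_indicators.get(k) for k in ("dry_bird_sighting", "frog_silence")):
        score += 1                               # IK dry-season indicator
    return "high" if score >= 2 else "low"
```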
Fig. 1: The semantic middleware integration framework
Fig. 2: Overview of the middleware architecture
5 Results and Discussion
The study is expected to produce a semantics-based data integration middleware that semantically represents and integrates heterogeneous data sources
with indigenous knowledge based on a unified ontology for an accurate IoT-based drought forecasting system. With more integrated comprehensive services
that are based on semantic interoperability, our approach makes a unique contribution towards improving the accuracy of drought prediction and forecasting
systems.
⁴ http://africrid.com/
Semantics-Based Data Integration Middleware
5
References
1. D. Chester. Natural Hazards by E. A. Bryant. Cambridge University Press, 1991. ISBN 0-521-37295-X (hardback); 0-521-37889-3 (paperback), 1993.
2. A. K. Mishra and V. P. Singh. A review of drought concepts. Journal of Hydrology,
391(1):202-216, 2010.
3. D. J. Peuquet and N. Duan. An event-based spatiotemporal data model (estdm)
for temporal analysis of geographical data. International journal of geographical
information systems, 9(1):7-24, 1995.
4. F. Mugabe, C. Mubaya, D. Nanja, P. Gondwe, A. Munodawafa, E. Mutswangwa,
I. Chagonda, P. Masere, J. Dimes, and C. Murewi. Use of indigenous knowledge
systems and scientific methods for climate forecasting in southern zambia and north
western zimbabwe. Zimbabwe Journal of Technological Sciences, 1(1):19-30, 2010.
5. M. Masinde and A. Bagula. Itiki: bridge between african indigenous knowledge and
modern science of drought prediction. Knowledge Management for Development
Journal, 7(3):274-290, 2011.
6. P. Sillitoe. The development of indigenous knowledge: a new applied anthropology
1. Current anthropology, 39(2):223-252, 1998.
7. Akanbi, Adeyinka K., and Muthoni Masinde. ”Towards Semantic Integration of
Heterogeneous Sensor Data with Indigenous Knowledge for Drought Forecasting.”
Proceedings of the Doctoral Symposium of the 16th International Middleware Conference. ACM, 2015.
A Multi- or Many-Objective Evolutionary Algorithm with Global Loop Update
arXiv:1803.06282v1 [] 25 Jan 2018
Yingyu Zhang, Bing Zeng, Yuanzhen Li, Junqing Li
Abstract—Multi- or many-objective evolutionary algorithms (MOEAs), especially the decomposition-based MOEAs, have received wide attention in recent years. The
decomposition-based MOEAs emphasize convergence and
diversity in a simple model and have achieved great success in
dealing with theoretical and practical multi- or many-objective
optimization problems. In this paper, we focus on update
strategies of the decomposition-based MOEAs, and their
criteria for comparing solutions. Three disadvantages of the
decomposition-based MOEAs with local update strategies and
several existing criteria for comparing solutions are analyzed
and discussed, and a global loop update strategy and two
hybrid criteria are suggested. Subsequently, an evolutionary
algorithm with the global loop update is implemented and
compared to several of the best multi- or many-objective
optimization algorithms on two famous unconstrained test suites
with up to 15 objectives. Experimental results demonstrate that
unlike evolutionary algorithms with local update strategies,
the population of our algorithm does not degenerate at any
generation of its evolution, which guarantees the diversity
of the resulting population. In addition, our algorithm wins
in most instances of the two test suites, indicating that it
is very competitive in terms of convergence and diversity.
Running results of our algorithm with different criteria for
comparing solutions are also compared. Their differences
are very significant, indicating that the performance of our
algorithm is affected by the criterion it adopts.
Index Terms—evolutionary algorithms, many-objective optimization, global update strategy, Pareto optimality, decomposition.
I. INTRODUCTION
A lot of real-world problems such as electric power system
reconfiguration problems [1], water distribution system
design or rehabilitation problems [2], automotive engine calibration problems [3], land use management problems [4],
optimal design problems [5]–[7], and problems of balancing
between performance and cost in energy systems [8], etc.,
can be formulated into multi- or many-objective optimization
problems (MOPs) involving more than one objective function.
MOPs have attracted extensive attention in recent years and
different kinds of algorithms for solving them have been
This work was supported by the National Natural Science Foundation of
China under Grant 61773192.
Y. Zhang is with the School of Computer Science, Liaocheng University,
Liaocheng 252000, China (e-mail:[email protected]).
B. Zeng is with the School of Software Engineering, South China University
of Technology, Guangzhou 510006, China.
Y. Li is with the School of Computer Science, Liaocheng University,
Liaocheng 252000, China.
J. Li is with the School of information science and engineering, Shandong
Normal University, Jinan 250014, China, and also with the School of
Computer Science, Liaocheng University, Liaocheng 252000, China.
proposed. Although algorithms based on particle swarm optimization [9] and simulated annealing [10] developed to solve MOPs are not negligible, multi- or many-objective evolutionary algorithms (MOEAs) are more popular and representative in
solving MOPs, such as the non-dominated sorting genetic
algorithm-II (NSGA-II) [11], the strength Pareto evolutionary algorithm 2 (SPEA-2) [12], and the multi-objective evolutionary algorithm based on decomposition (MOEA/D) [13], etc. In general, MOEAs can be divided into three categories [14]. The first category is known as the indicator-based MOEAs. In an indicator-based MOEA, the fitness of an individual is usually
evaluated by a performance indicator such as hypervolume
[15]. Such a performance indicator is designed to measure the
convergence and diversity of the MOEA, and hence expected
to drive the population of the MOEA to converge to the Pareto
Front (PF) quickly with good distribution. The second category
is the domination-based MOEAs, in which the domination
principle plays a key role. However, in the domination-based
MOEAs, other measures have to be adopted to maintain the
population diversity. In NSGA-II, crowding distances of all
the individuals are calculated at each generation and used to
keep the population diversity, while reference points are used in NSGA-III [16]. The third category is the decomposition-based MOEAs. In a decomposition-based MOEA, a MOP is
decomposed into a set of subproblems and then optimized
simultaneously. A uniformly generated set of weight vectors
associated with a fitness assignment method such as the
weighted sum approach, the Tchebycheff approach and the
penalty-based boundary intersection (PBI) approach, is usually used to decompose a given MOP. Generally, a weight
vector determines a subproblem and defines a neighborhood.
Subproblems in a neighborhood are expected to own similar
solutions and might be updated by a newly generated solution.
The decomposition-based MOEA framework emphasizes the
convergence and diversity of the population in a simple model.
Therefore, it was studied extensively and improved from
different points of view [17]–[23] since it was first proposed
by Zhang and Li in 2007 [13].
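For instance, the Tchebycheff approach scalarizes a MOP for a weight vector w and ideal point z* as g(x | w, z*) = max_i w_i |f_i(x) − z*_i|; minimizing g over x solves the subproblem defined by w. A minimal sketch (the function name is ours):

```python
def tchebycheff(f, weight, z_star):
    """Tchebycheff scalarization g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|,
    where f is the objective vector of a solution x."""
    return max(w * abs(fi - zi) for w, fi, zi in zip(weight, f, z_star))
```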
Recently, some efforts have been made to blend different
ideas that have appeared in the domination-based MOEAs and the decomposition-based MOEAs. For example, an evolutionary
many-objective optimization algorithm based on dominance
and decomposition (MOEA/DD) is proposed in [21], and a
reference vector guided evolutionary algorithm is proposed
in [20]. In MOEA/DD, each individual is associated with
a subregion uniquely determined by a weight vector, and
each weight vector (or subregion) is assigned to a neighborhood. In an iterative step, mating parents are chosen from the neighboring subregions of the current weight vector with
a given probability δ, or the whole population with a low
probability 1 − δ. In case that no associated individual exists
in the selected subregions, mating parents are randomly chosen
from the whole population. Then several classical genetic operators, such as the simulated binary crossover (SBX) [24] and the polynomial mutation [25], are applied to the
chosen parents to generate an offspring. Subsequently, the
offspring is used to update the current population according to
a complicated but well-designed rule based on decomposition
and dominance.
In this paper, we focus on the update strategies of the decomposition-based evolutionary algorithms and the criteria for comparing solutions. Three disadvantages of the decomposition-based MOEAs with local update strategies are analyzed, and several existing criteria for comparing solutions are discussed. A global loop update (GLU) strategy and two hybrid criteria are then suggested. We also propose an evolutionary algorithm with the GLU strategy for solving multi- or many-objective optimization problems (MOEA/GLU). The
GLU strategy is designed to try to avoid the shortcomings of
the decomposition-based MOEAs with local update strategies
and eliminate bad solutions in the initial stage of the evolution,
which is expected to force the population to converge faster
to the PF.
The rest of the paper is organized as follows. In Section II, we provide some preliminaries used in MOEA/GLU and review several existing criteria for comparing solutions, i.e., the PBI criterion, the dominance criterion and the distance criterion. Two hybrid criteria for judging the quality of two given solutions are then suggested. The disadvantages of the decomposition-based MOEAs with local update strategies are also analyzed
in this section. In Section III, the algorithm MOEA/GLU is proposed: a general framework is first presented; subsequently, the initialization procedure, the reproduction procedure, and the GLU procedure are elaborated; some discussion of the advantages and disadvantages of the algorithm is also given. In Section IV, empirical results of MOEA/GLU on DTLZ1 to DTLZ4 and WFG1 to WFG9 are compared with those of several other MOEAs, i.e., NSGA-III, MOEA/D, MOEA/DD and GrEA. Running results of MOEA/GLU with different criteria are also compared in this section. The paper is concluded in Section V.
II. PRELIMINARIES AND MOTIVATIONS
A. MOP
Without loss of generality, a MOP can be formulated as a minimization problem as follows:

  Minimize  F(x) = (f1(x), f2(x), ..., fM(x))^T
  Subject to  x ∈ Ω,                                  (1)

where M ≥ 2 is the number of objective functions, x is a decision vector, Ω is the feasible set of decision vectors, and F(x) is composed of M conflicting objective functions. Eq. (1) is usually considered a many-objective optimization problem when M is greater than or equal to 4.
A solution x of Eq. (1) is said to dominate another solution y (denoted x ≼ y) if and only if fi(x) ≤ fi(y) for all i ∈ {1, ..., M} and fj(x) < fj(y) for at least one index j ∈ {1, ..., M}. Clearly, x and y are mutually non-dominated when neither x ≼ y nor y ≼ x holds. A solution x is Pareto-optimal to Eq. (1) if there is no solution y ∈ Ω such that y ≼ x. F(x) is then called a Pareto-optimal objective vector. The set of all the Pareto-optimal objective vectors is the PF [26]. The goal of a MOEA is to find a set of solutions whose corresponding objective vectors approximate the PF.
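The dominance relation defined above can be checked directly. The following is a minimal sketch under minimization (the function name is ours, not from the paper):

```python
def dominates(fx, fy):
    """Return True if objective vector fx dominates fy under minimization:
    fx is no worse in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(fx, fy))
            and any(a < b for a, b in zip(fx, fy)))

# (1, 2) dominates (1, 3); (1, 3) and (2, 1) are mutually non-dominated.
assert dominates((1.0, 2.0), (1.0, 3.0))
assert not dominates((1.0, 3.0), (2.0, 1.0))
assert not dominates((2.0, 1.0), (1.0, 3.0))
```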
B. Criteria for Comparing Solutions
1) Dominance criterion: Dominance is usually used to judge whether or not one solution is better than another in the dominance-based MOEAs. As a criterion for comparing two given solutions, dominance can be described as follows.
Dominance criterion: A solution x is considered to be better than the other solution y when x ≼ y.
As discussed in [27], the selection pressure exerted by the dominance criterion is weak in a dominance-based MOEA, and becomes weaker as the number of objective functions increases. This indicates that such a criterion alone is too stringent for MOEAs to choose the better of two given solutions.
Therefore, in practice, the dominance criterion is usually used
together with other measures.
2) PBI criterion: In a decomposition-based MOEA, the approaches used to decompose a MOP into subproblems, such as the weighted sum approach, the Tchebycheff approach and the PBI approach [13], can themselves be considered as criteria for comparing two solutions. Here, we describe the PBI approach as a criterion for comparing two given solutions.
PBI criterion: A solution x is considered to be better than the other solution y when PBI(x) < PBI(y), where PBI(·) is defined as PBI(x) = g^PBI(x|w, z*), w is a given weight vector, and z* is the ideal point.
The PBI function can be elaborated as [13]:

  Minimize  g^PBI(x|w, z*) = d1 + θ d2
  Subject to  x ∈ Ω,                                  (2)

where

  d1 = |(F(x) − z*)^T w| / ||w||,
  d2 = ||F(x) − (z* + d1 · w/||w||)||,                (3)
and θ is a user-defined constant penalty parameter. In a decomposition-based MOEA with the PBI criterion, the set of weight vectors is usually generated at the initialization stage by the systematic sampling approach and remains unchanged during the run of the algorithm. The ideal point is also set at the initialization stage, but can be updated by every newly generated offspring.
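As a concrete reading of Eqs. (2)–(3), the PBI value of a solution can be computed as below. This is a sketch; the function name is ours, and the default θ = 5.0 follows the parameter settings used later in the paper:

```python
import math

def pbi(fx, w, z, theta=5.0):
    """g^PBI(x | w, z*) = d1 + theta * d2 of Eq. (2), with d1, d2 as in Eq. (3):
    d1 is the distance from z* along the weight vector w to the projection
    of F(x), and d2 is the perpendicular distance of F(x) from that ray."""
    diff = [f - zi for f, zi in zip(fx, z)]
    norm_w = math.sqrt(sum(wi * wi for wi in w))
    d1 = abs(sum(d * wi for d, wi in zip(diff, w))) / norm_w
    d2 = math.sqrt(sum((d - d1 * wi / norm_w) ** 2 for d, wi in zip(diff, w)))
    return d1 + theta * d2

# A point on the ray along w has d2 = 0, so g^PBI reduces to d1:
assert abs(pbi([1.0, 1.0], [1.0, 1.0], [0.0, 0.0]) - math.sqrt(2)) < 1e-9
```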
3) Distance criterion: In [28], a criterion based on the two distances d1 and d2 defined by Eq. (3) is used to judge whether or not one solution is better than another. Denote the two distances of x and y as {d1x, d2x} and {d1y, d2y}, respectively. A criterion for comparing two
given solutions with respect to the two distances can be written
as follows.
Distance criterion: A solution x is considered to be better than the other solution y when d2x < d2y. In the case that d2x = d2y, x is considered to be better than y when d1x < d1y.
4) Two Hybrid Criteria: It has been shown that the dominance criterion can be a good criterion for choosing better solutions in conjunction with other measures [11], [16]. Likewise, the PBI criterion has achieved great success in MOEAs [13], [21]. However, two facts about these criteria cannot be ignored. The first is that dominance comparison alone cannot exert much selection pressure on the current population, and hence cannot drive the population to converge quickly to the PF of a given MOP. The second is that x ≼ y does not necessarily imply PBI(x) < PBI(y), and vice versa.
Therefore, it might be natural to combine these two criteria
in consideration of the two facts. Here, we suggest two hybrid
criteria.
H1 criterion: A solution x is considered to be better than the other solution y when x ≼ y. In the case that the two solutions are mutually non-dominated, x is considered to be better than y when PBI(x) < PBI(y).
H2 criterion: A solution x is considered to be better than the other solution y when x ≼ y. In the case that the two solutions are mutually non-dominated, x is considered to be better than y when d2x < d2y.
It is clear that the H1 criterion combines dominance with
the PBI criterion, while the H2 criterion associates dominance
with the Euclidean distance d2.
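The two hybrid criteria can be sketched as follows. Dominance and the Eq. (3) distances are restated here so the sketch is self-contained; all function names are ours:

```python
import math

def dominates(fx, fy):
    return (all(a <= b for a, b in zip(fx, fy))
            and any(a < b for a, b in zip(fx, fy)))

def d1_d2(fx, w, z):
    # The two distances of Eq. (3).
    diff = [f - zi for f, zi in zip(fx, z)]
    nw = math.sqrt(sum(wi * wi for wi in w))
    d1 = abs(sum(d * wi for d, wi in zip(diff, w))) / nw
    d2 = math.sqrt(sum((d - d1 * wi / nw) ** 2 for d, wi in zip(diff, w)))
    return d1, d2

def better_h1(fx, fy, w, z, theta=5.0):
    """H1: decide by dominance; fall back to PBI for non-dominated pairs."""
    if dominates(fx, fy):
        return True
    if dominates(fy, fx):
        return False
    d1x, d2x = d1_d2(fx, w, z)
    d1y, d2y = d1_d2(fy, w, z)
    return d1x + theta * d2x < d1y + theta * d2y

def better_h2(fx, fy, w, z):
    """H2: decide by dominance; fall back to the perpendicular distance d2."""
    if dominates(fx, fy):
        return True
    if dominates(fy, fx):
        return False
    return d1_d2(fx, w, z)[1] < d1_d2(fy, w, z)[1]

# (0.9, 0.9) and (1, 0) are mutually non-dominated; the first lies on the
# ray along w = (1, 1), so both criteria prefer it for that weight vector.
assert better_h1([0.9, 0.9], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0])
assert better_h2([0.9, 0.9], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0])
```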
C. The Systematic Sampling Approach
The systematic sampling approach proposed by Das and
Dennis [29] is usually used to generate weight vectors in
MOEAs. In this approach, weight vectors are sampled from
a unit simplex. Let ω = (ω1, ..., ωM)^T be a given weight vector, where ωj (1 ≤ j ≤ M) is the jth component of ω, δj is the uniform spacing between two consecutive ωj values, and 1/δj is an integer. The possible values of ωj are sampled from {0, δj, ..., Kj δj}, where Kj = (1 − Σ_{i=1}^{j−1} ωi)/δj. In a special case, all δj are equal to δ. To generate a weight vector, the systematic sampling approach starts by sampling from {0, δ, 2δ, ..., 1} to obtain the first component ω1, then from {0, δ, 2δ, ..., K2 δ} to get the second component ω2, and
so forth, until the M th component ωM is generated. Repeat
such a process until a total of

  N(D, M) = C(D + M − 1, M − 1)                       (4)

different weight vectors are generated, where D > 0 is the number of divisions considered along each objective coordinate.
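Eq. (4) is a binomial coefficient and is easy to evaluate directly (a small sketch; the function name is ours):

```python
from math import comb

def num_weight_vectors(D, M):
    """N(D, M) = C(D + M - 1, M - 1) of Eq. (4): the number of weight
    vectors produced by D divisions along each of M objectives."""
    return comb(D + M - 1, M - 1)

# delta = 0.5 corresponds to D = 2 divisions; with M = 3 objectives this
# gives the six vectors enumerated in Eq. (5):
assert num_weight_vectors(2, 3) == 6
# D = 12 divisions with M = 3 gives a population size of 91:
assert num_weight_vectors(12, 3) == 91
```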
The approach can be illustrated by Fig.1, in which each
level represents one component of ω, and each path from the
root to one of the leaves represents a possible weight vector.
Fig. 1. Generating weight vectors for δ = 0.5 and M = 3 using the
systematic sampling approach.
Therefore, all weight vectors included in the tree can be listed as follows:

  (0,   0,   1)
  (0,   0.5, 0.5)
  (0,   1,   0)
  (0.5, 0,   0.5)                                     (5)
  (0.5, 0.5, 0)
  (1,   0,   0)
A recursive algorithm for MOEAs to generate weight vectors using the systematic sampling approach can be found in Section III. Here, we consider the two cases of D taking a large value and a small value, respectively. As discussed in [21] and [29], a large D adds more computational burden to a MOEA, while a small D is harmful to the population diversity. To avoid this dilemma, [16] and [21] present a
two-layer weight vector generation method. At first, a set of
N1 weight vectors in the boundary layer and a set of N2
weight vectors in the inside layer are generated, according to
the systematic sampling approach described above. Then, the coordinates of the weight vectors in the inside layer are shrunk by the coordinate transformation

  vij = (1 − τ)/M + τ × ωij,                          (6)
where ωij is the ith component of the jth weight vector in the inside layer, and τ ∈ [0, 1] is a shrinkage factor, set as τ = 0.5 in [16] and [21]. At last, the two sets of weight vectors are combined to form the final set of weight vectors. Denote the numbers of divisions used in the boundary layer and the inside layer as D1 and D2, respectively. Then, the number of the weight vectors generated by the two-layer weight vector generation method is N(D1, M) + N(D2, M).
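The shrinkage of Eq. (6) can be sketched as follows. For simplicity this sketch shrinks a copy of the boundary-layer set, whereas in the paper the two layers are generated with their own division numbers D1 and D2; the function name is ours:

```python
def two_layer_weights(boundary, tau=0.5):
    """Two-layer weight vector generation: shrink a copy of the boundary-layer
    vectors into the inside layer via Eq. (6), v_ij = (1 - tau)/M + tau * w_ij,
    then combine the two layers."""
    M = len(boundary[0])
    inside = [[(1 - tau) / M + tau * wij for wij in w] for w in boundary]
    return boundary + inside

# The transformation keeps every vector on the unit simplex, since the
# components of each shrunk vector still sum to (1 - tau) + tau = 1:
ws = two_layer_weights([[0.0, 0.0, 1.0], [0.5, 0.5, 0.0]])
assert len(ws) == 4
assert all(abs(sum(w) - 1.0) < 1e-9 for w in ws)
```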
D. Local update and its advantages
Most of the decomposition-based MOEAs update the population with an offspring generated by the reproduction operators, replacing the individuals in the current neighborhood that are worse than the offspring. Such an update strategy can be called a local update (LU) strategy, since it involves only the individuals in the current neighborhood. The decomposition-based MOEAs with the LU strategy have at least two advantages. The first is that the LU strategy can help the algorithms converge to the PF faster than algorithms with non-local update strategies, which has helped them achieve great success on
a lot of MOPs in the past ten years. The second is that the time complexities of the decomposition-based MOEAs are usually lower than those of MOEAs with non-local update strategies. This gives them a great advantage in solving complicated problems or MOPs with many objectives, since the running time taken by a MOEA to solve a given MOP becomes much longer as the number of objective functions increases.
In spite of the above advantages, the decomposition-based
MOEAs with the LU strategy have their own disadvantages.
The first disadvantage is that when the algorithms deal with some problems such as DTLZ4, the population may lose its diversity. As we can see from Fig. 2, one running instance of MOEA/D on DTLZ4 with 3 objectives generates well-distributed results, while the solution set of the other instance degenerates nearly to an arc on a unit circle. What is worse, the solution set of some running instances of MOEA/D even degenerates to a few points on a unit circle in our experiments.
Fig. 2. Two running instances of MOEA/D on DTLZ4. The first one obtained a well-distributed set of solutions, and the second one obtained a degenerate set of solutions lying on an arc of a unit circle.

Notice that a call of the LU procedure replaces all individuals worse than the newly generated offspring within the current neighborhood, which might be the reason for the loss of the population diversity. Therefore, to avoid the loss of the population diversity, one can modify the LU procedure to replace at most one individual per call. But the problem is how to decide which individual is to be replaced when there are multiple individuals worse than the newly generated offspring. One of the simplest replacement policies is to randomly choose one individual in the current neighborhood and judge whether or not the offspring is better than it. If the selected individual is worse, it is replaced by the offspring. Otherwise, the offspring is abandoned.

Fig. 3 shows the results of the original MOEA/D and its modified version with the modified LU strategy described above on DTLZ1 with 3 objectives. As can be seen from Fig. 3, the modified LU strategy slows down the convergence of the algorithm, indicating that it is not a good update strategy.

Fig. 3. Running results of the original MOEA/D and its modified version with a modified LU on the DTLZ1 problem with 3 objectives. The red dots are the resulting solutions of the original MOEA/D, and the blue ones are the resulting solutions of its modified version. Rotate (a) around the Z axis by about 90 degrees to get (b).
The second disadvantage of the decomposition-based MOEAs with the LU strategy is that they do not consider the individuals beyond the current neighborhood. As we can see, such a LU strategy allows the MOEAs to update the population in less time, but it might ignore some important information leading to better convergence. Fig. 4 illustrates this viewpoint. Although the newly generated individual is better than individual A, it will be abandoned by the decomposition-based MOEAs with the LU strategy, since it is only compared to the individuals in the current neighborhood.
Fig. 4. Illustration of the second disadvantage of the decomposition-based MOEAs with the LU strategy. The black dots represent the individuals in the current population, and the red dot is a newly generated individual.
Asafuddoula et al. have noticed this disadvantage of the decomposition-based MOEAs with the LU strategy [28]. The update strategy of their algorithm involves all of the individuals in the current population, and has been demonstrated to be effective on the DTLZ and WFG test suites to some extent. We call such an update strategy a global update (GU) strategy, since each call of the update procedure considers all the individuals in the population and replaces at most one individual. In Fig. 4, individual A will be replaced by the newly generated individual if a decomposition-based MOEA adopts the GU strategy instead of the LU strategy.
The third disadvantage of the decomposition-based MOEAs with the LU strategy relates to the individuals and their attached weight vectors. As a simple example, consider the case where an individual x and a newly generated offspring c are attached to a weight vector ωx, and an individual y is attached to ωy, so that g^PBI(c|ωx, z*) < g^PBI(x|ωx, z*) and g^PBI(x|ωy, z*) < g^PBI(y|ωy, z*) are satisfied. In other words, c is better than x and x is better than y when the weight vectors ωx and ωy are taken as the respective reference weight vectors. Therefore, x will be replaced by c in a typical decomposition-based MOEA. But so far, no decomposition-based MOEA considers x as a replacement for y.
In order to deal with the three disadvantages of the decomposition-based MOEAs with the LU strategy, we propose a MOEA with the GLU strategy (i.e., MOEA/GLU) mentioned before, which is presented in Section III.
III. PROPOSED ALGORITHM: MOEA/GLU
A. Algorithm Framework
The general framework of MOEA/GLU is presented in Algorithm 1. As shown in the general framework, a while loop is executed after the initialization procedure, in which a for loop is included. In the for loop, the algorithm runs over the N weight vectors, generates an offspring for each weight vector in the reproduction procedure, and updates the population with the offspring in the GLU procedure.
B. Initialization Procedure
The initialization procedure includes four steps. In the first step, a set of uniformly distributed weight vectors is
Algorithm 1 General Framework of MOEA/GLU
Output: Final Population.
1: Initialization Procedure
2: while the stop condition is not satisfied do
3:   for i = 1 to N do
4:     Reproduction Procedure
5:     GLU Procedure
6:   end for
7: end while
generated using the systematic sampling approach proposed in [29]. A recursive algorithm for generating the weight vectors is presented in Algorithms 2 and 3.
Algorithm 2 The systematic sampling approach
Input: D: the number of divisions; M: the number of objectives.
Output: A set of uniform weight vectors.
1: ω = (0, ..., 0);
2: Gen_ith_Level(ω, 0, 0, D, M);
Algorithm 2 calls the recursive function Gen_ith_Level described in Algorithm 3 with ω = (0, ..., 0), K = 0, and i = 0, to generate weight vectors. At the ith level of the recursion, the ith component of a weight vector is generated. As discussed before, the value of each component of a weight vector ranges from 0 to 1 with step size 1/D, and all components of a weight vector sum up to 1. In other words, all components of a weight vector share D divisions. Therefore, if K = D (K is the number of divisions that have already been allocated), then the remaining components are all set to zero. In addition, if ω[i] is the last component, i.e., i = M − 1, then all the remaining divisions are assigned to it. Both cases terminate a recursive call, and a generated weight vector is output.
Algorithm 3 Gen_ith_Level(ω, K, i, D, M)
1: if K == D then
2:   ω[i], ..., ω[M − 1] ← 0;
3:   output(ω);
4:   return;
5: end if
6: if i == M − 1 then
7:   ω[i] ← (D − K)/D;
8:   output(ω);
9:   return;
10: end if
11: for j = 0 to D − K do
12:   ω[i] ← j/D;
13:   Gen_ith_Level(ω, K + j, i + 1, D, M);
14: end for
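Algorithms 2 and 3 can be transliterated into Python as follows (a sketch; the function names are ours, and vectors are collected in a list rather than output one by one):

```python
def gen_weight_vectors(D, M):
    """Transliteration of Algorithms 2 and 3: enumerate every weight vector
    whose M components are multiples of 1/D and sum to 1."""
    out = []

    def gen_ith_level(w, K, i):
        if K == D:                    # all D divisions allocated: pad with zeros
            out.append(w + [0.0] * (M - i))
            return
        if i == M - 1:                # last component takes the remainder
            out.append(w + [(D - K) / D])
            return
        for j in range(D - K + 1):
            gen_ith_level(w + [j / D], K + j, i + 1)

    gen_ith_level([], 0, 0)
    return out

# D = 2, M = 3 reproduces the six vectors of Eq. (5), in the same order:
assert gen_weight_vectors(2, 3) == [
    [0.0, 0.0, 1.0], [0.0, 0.5, 0.5], [0.0, 1.0, 0.0],
    [0.5, 0.0, 0.5], [0.5, 0.5, 0.0], [1.0, 0.0, 0.0],
]
```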
One of the main ideas of MOEA/GLU is that each individual is attached to a weight vector and a weight vector owns only one individual. Meanwhile, each weight vector determines a neighborhood. In the second step, the neighborhoods of all weight vectors are generated by calculating the Euclidean distances between the weight vectors. Subsequently, a population of N individuals is initialized randomly and attached to the N weight vectors in order of generation in the third step. Finally, the ideal point is initialized in the fourth step; it can be updated by every offspring in the course of evolution.
shortest perpendicular distance. Suppose that the minimum value of {di1, di2, ..., diN} is still dij at a certain generation, and the offspring c is better than P[i]. Then, P[i] will be replaced by c, and the replaced individual is considered as a candidate to take the place of the individual held by the jth weight vector, i.e., P[j].
C. Reproduction Procedure
The reproduction procedure can be described as follows. Firstly, a random number r between 0 and 1 is generated. If r is less than a given selection probability Ps, two individuals are chosen from the neighborhood of the current weight vector; otherwise, two individuals are chosen from the whole population. Secondly, the SBX operator is applied to the two individuals to generate two intermediate individuals. Notice that, if both intermediate individuals were evaluated and used to update the population, the number of individuals evaluated at each generation would be twice the number of individuals in the whole population. However, the number of individuals evaluated at each generation in many popular MOEAs, such as NSGA-III and MOEA/DD, is usually the same as the size of the population. Therefore, one of the two intermediate individuals is abandoned at random for the sake of fairness. Finally, the polynomial mutation operator is applied to the reserved intermediate individual to generate an offspring, which is then evaluated and used to update the current population in the following GLU procedure.
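The steps above can be sketched as follows. This is only an illustration, not the paper's implementation: the function name and the simplified boundary handling are ours, and the textbook forms of SBX and polynomial mutation are used:

```python
import random

def reproduce(p1, p2, bounds, eta_c=30.0, eta_m=20.0, pm=None):
    """Sketch of the reproduction procedure: SBX on two parents, keep one of
    the two children at random, then apply polynomial mutation to it.
    `bounds` is a list of (low, high) pairs for the decision variables."""
    n = len(p1)
    pm = 1.0 / n if pm is None else pm
    c1, c2 = [], []
    for x1, x2 in zip(p1, p2):                    # simulated binary crossover
        u = random.random()
        if u <= 0.5:
            beta = (2.0 * u) ** (1.0 / (eta_c + 1.0))
        else:
            beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta_c + 1.0))
        c1.append(0.5 * ((1 + beta) * x1 + (1 - beta) * x2))
        c2.append(0.5 * ((1 - beta) * x1 + (1 + beta) * x2))
    child = random.choice([c1, c2])               # the other child is discarded
    for i, (lo, hi) in enumerate(bounds):         # polynomial mutation
        if random.random() < pm:
            u = random.random()
            if u < 0.5:
                delta = (2.0 * u) ** (1.0 / (eta_m + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta_m + 1.0))
            child[i] += delta * (hi - lo)
        child[i] = min(max(child[i], lo), hi)     # clip to the variable bounds
    return child

random.seed(0)
child = reproduce([0.2, 0.4, 0.6], [0.3, 0.5, 0.7], [(0.0, 1.0)] * 3)
assert len(child) == 3 and all(0.0 <= v <= 1.0 for v in child)
```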
E. Discussion
This section gives a brief discussion of the similarities and differences among MOEA/GLU, MOEA/D, and MOEA/DD.
1) Similarities of MOEA/GLU, MOEA/D, and MOEA/DD. MOEA/GLU and MOEA/DD can be seen as two variants of MOEA/D to some extent, since all three algorithms employ the decomposition technique to deal with MOPs. In addition, a set of weight vectors is used to guide the selection procedure, and the concept of neighborhood plays an important role in all of them.
2) Differences between MOEA/GLU and MOEA/D. Firstly, MOEA/D uses a LU strategy, while MOEA/GLU employs the so-called GLU strategy, which considers all of the individuals in the current population at each call of the update procedure. Secondly, to judge whether or not an individual is better than another, MOEA/D compares their fitness values, while other criteria for comparing individuals can also be used in MOEA/GLU. Thirdly, once an individual is generated in MOEA/D, all the individuals in the current neighborhood that are worse than it will be replaced. In MOEA/GLU, however, each individual is attached to one weight vector, and a newly generated individual is only compared to the old one attached to the same weight vector. The replacement operation occurs only when the new individual is better than the old one.
3) Differences between MOEA/GLU and MOEA/DD. In the first place, a weight vector in MOEA/DD not only defines a subproblem, but also specifies a subregion that can be used to estimate the local density of a population. In principle, a subregion owns zero, one, or more individuals at any generation. In MOEA/GLU, each individual is attached to exactly one weight vector, and a weight vector holds only one individual. In the second place, although the dominance criterion can be taken into account in MOEA/GLU, the way it is used differs from that of MOEA/DD. In MOEA/GLU, the dominance between the newly generated individual and the old one attached to the same weight vector can be used to judge which of the two is better, while the dominance criterion is considered among all individuals within a subregion in MOEA/DD.
D. The GLU Procedure
Algorithm 4 The GLU procedure
Input: a new offspring c; the current population P.
1: bFlag = true;
2: while bFlag do
3:   Find_Attached_Weight(c) → i;
4:   if c is better than P[i] then
5:     Swap(c, P[i]);
6:   else
7:     bFlag = false;
8:   end if
9: end while
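Algorithm 4 can be sketched in Python as follows. This is an illustration under our own naming: `better` stands for any of the comparison criteria of Section II, and a displaced individual keeps cascading to its own attached weight vector until no further replacement occurs:

```python
import math

def perp_dist(fx, w, z):
    # d2 of Eq. (3): perpendicular distance of F(x) to the ray from z* along w.
    diff = [f - zi for f, zi in zip(fx, z)]
    nw = math.sqrt(sum(wi * wi for wi in w))
    d1 = abs(sum(d * wi for d, wi in zip(diff, w))) / nw
    return math.sqrt(sum((d - d1 * wi / nw) ** 2 for d, wi in zip(diff, w)))

def glu_update(offspring, population, weights, z, better):
    """Sketch of Algorithm 4. population[i] is the objective vector held by
    weight vector i; better(fx, fy, w) is any of the comparison criteria."""
    c = offspring
    while True:
        # Find_Attached_Weight: the weight vector nearest to c (smallest d2).
        i = min(range(len(weights)), key=lambda j: perp_dist(c, weights[j], z))
        if better(c, population[i], weights[i]):
            population[i], c = c, population[i]   # swap: old holder becomes the candidate
        else:
            break

# With a simple dominance-based `better`, the offspring replaces the
# individual attached to its nearest weight vector:
pop = [[2.0, 0.1], [0.1, 2.0]]
dom = lambda fx, fy, w: (all(a <= b for a, b in zip(fx, fy))
                         and any(a < b for a, b in zip(fx, fy)))
glu_update([1.0, 0.05], pop, [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0], dom)
assert pop[0] == [1.0, 0.05]
```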
The GLU procedure is illustrated in Algorithm 4, and can be described as follows. Each individual is attached to the weight vector to which it has the shortest perpendicular distance. Find_Attached_Weight(c) is designed to find the attached weight vector of c, in which the perpendicular distance is calculated by Eq. (3). Denote the perpendicular distance of the ith individual P[i] to the jth weight vector as dij. A given weight vector maintains only one slot, keeping the best individual attached to it generated so far from the beginning of the algorithm. The minimum value of {di1, di2, ..., diN} can be expected to be dii after the algorithm evolves for enough generations. However, in the initialization stage, all the individuals are generated randomly and attached to the weight vectors in order of generation. In other words, the ith weight vector may not be the one to which its attached individual P[i] has the shortest perpendicular distance.
F. Time Complexity
The function Find_Attached_Weight in the GLU procedure runs over all weight vectors, calculates the perpendicular distances between the newly generated offspring and all weight vectors, and finds the weight vector to which the offspring has the shortest perpendicular distance. Therefore, it takes O(MN) floating-point operations for the function Find_Attached_Weight to find the attached weight vector of the offspring, where M is the number of objective functions and N is the size of the population.
As indicated before, the while loop is designed to help the individuals in the initial stage of the algorithm find their attached weight vectors quickly. The fact that the individuals at a certain generation are not attached to their corresponding weight vectors causes extra entries into the function Find_Attached_Weight. However, once all of the individuals are attached to their corresponding weight vectors, the function Find_Attached_Weight is entered at most twice. Let the number of entries into the function Find_Attached_Weight be (1 + Ni) at each call of the GLU procedure, and denote the number of generations as G. Since Σi Ni ≤ N and the GLU procedure is called N·G times in the whole run of MOEA/GLU, the time complexity of the algorithm is O(MN²G), which is the same as that of MOEA/DD, but worse than that of MOEA/D.
IV. EXPERIMENTAL RESULTS
A. Performance Metrics
1) Inverted Generational Distance (IGD): Let S be a result solution set of a MOEA on a given MOP, and let R be a set of uniformly distributed representative points of the PF. The IGD value of S relative to R can be calculated as [30]

  IGD(S, R) = ( Σ_{r∈R} d(r, S) ) / |R|,              (7)

where d(r, S) is the minimum Euclidean distance between r and the points in S, and |R| is the cardinality of R. Note that the points in R should be well distributed and |R| should be large enough to ensure that the points in R represent the PF very well. This guarantees that the IGD value of S is able to measure both the convergence and diversity of the solution set. The lower the IGD value of S, the better its quality [21].
2) HyperVolume (HV): The HV value of a given solution set S is defined as [31]

  HV(S) = vol( ∪_{x∈S} [f1(x), z1] × ... × [fM(x), zM] ),   (8)

where vol(·) is the Lebesgue measure, and z^r = (z1, ..., zM)^T is a given reference point. As can be seen, the HV value of S is a measure of the size of the objective space dominated by the solutions in S and bounded by z^r.
As in [21], an algorithm based on Monte Carlo sampling proposed in [32] is applied to compute approximate HV values for the 15-objective test instances, and the WFG algorithm [33] is adopted to compute exact HV values for the other test instances, for the convenience of comparison. In addition, all the HV values are normalized to [0, 1] by dividing them by Π_{i=1}^{M} zi.
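A Monte Carlo HV estimate in the spirit of Eq. (8) can be sketched as follows. This is our own illustration, assuming the ideal point at the origin, not the algorithm of [32]; exact methods such as the WFG algorithm [33] are preferable when M is small:

```python
import random

def hv_monte_carlo(S, ref, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the HV of Eq. (8) for minimization: the volume
    of the region dominated by S and bounded above by the reference point."""
    rng = random.Random(seed)
    M = len(ref)
    hits = 0
    for _ in range(n_samples):
        p = [rng.uniform(0.0, ref[j]) for j in range(M)]
        # p lies in the dominated region if some s in S is <= p coordinate-wise.
        if any(all(s[j] <= p[j] for j in range(M)) for s in S):
            hits += 1
    box = 1.0
    for r in ref:
        box *= r
    return box * hits / n_samples

# A single solution at (0.5, 0.5) with reference point (1, 1) dominates a
# quarter of the unit box, so the estimate is close to 0.25:
assert abs(hv_monte_carlo([[0.5, 0.5]], [1.0, 1.0]) - 0.25) < 0.01
```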
B. Benchmark Problems
1) DTLZ test suite: Problems DTLZ1 to DTLZ4 from the DTLZ test suite proposed by Deb et al. [34] are chosen for our experimental studies in the first place. One can refer to [34] for their definitions. Here, we only summarize some of their features.
• DTLZ1: The global PF of DTLZ1 is the linear hyper-plane Σ_{i=1}^{M} fi = 0.5. The search space contains (11^k − 1) local PFs that can hinder a MOEA from converging to the hyper-plane.
• DTLZ2: The global PF of DTLZ2 satisfies Σ_{i=1}^{M} fi² = 1. Previous studies have shown that this problem is easier for existing MOEAs, such as NSGA-III and MOEA/DD, than DTLZ1, DTLZ3 and DTLZ4.
• DTLZ3: The definition of the global PF of DTLZ3 is the same as that of DTLZ2. It introduces (3^k − 1) local PFs. All local PFs are parallel to the global PF, and a MOEA can get stuck at any of them before converging to the global PF. It can be used to investigate a MOEA's ability to converge to the global PF.
• DTLZ4: The definition of the global PF of DTLZ4 is also the same as that of DTLZ2 and DTLZ3. This problem can be obtained by modifying DTLZ2 with a different meta-variable mapping, which is expected to introduce a biased density of solutions in the search space. Therefore, it can be used to investigate a MOEA's ability to maintain a good distribution of solutions.
To calculate the IGD value of a result set S of a MOEA on a MOP, a set R of representative points of the PF needs to be given in advance. For DTLZ1 to DTLZ4, we take the set of the intersecting points of the weight vectors and the PF surface as R. Let f* = (f1*, ..., fM*) be the intersecting point of a weight vector w = (w1, ..., wM)^T and the PF surface. Then fi* can be computed as [21]

  fi* = 0.5 × wi / Σ_{j=1}^{M} wj                     (9)

for DTLZ1, and

  fi* = wi / sqrt( Σ_{j=1}^{M} wj² )                  (10)

for DTLZ2, DTLZ3 and DTLZ4.
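Eqs. (9)–(10) can be sketched as follows (the function name is ours):

```python
import math

def pf_reference_point(w, problem):
    """Intersection of a weight vector w with the PF surface, Eqs. (9)-(10):
    the plane sum_i f_i = 0.5 for DTLZ1, the unit sphere for DTLZ2 to DTLZ4."""
    if problem == "DTLZ1":
        s = sum(w)
        return [0.5 * wi / s for wi in w]
    norm = math.sqrt(sum(wi * wi for wi in w))
    return [wi / norm for wi in w]

# The resulting points lie on the respective PF surfaces:
p1 = pf_reference_point([1.0, 1.0, 2.0], "DTLZ1")
assert abs(sum(p1) - 0.5) < 1e-9
p2 = pf_reference_point([1.0, 1.0, 2.0], "DTLZ2")
assert abs(sum(x * x for x in p2) - 1.0) < 1e-9
```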
2) WFG test suite [35], [36]: This test suite allows test problem designers to construct scalable test problems with any number of objectives, in which features such as modality and separability can be customized as required. As discussed in [35], [36], it exceeds the functionality of the DTLZ test suite. In particular, with the WFG test suite one can construct non-separable problems, deceptive problems, truly degenerate problems, problems with mixed-shape PFs, problems scalable in the number of position-related parameters, and problems with dependencies between position- and distance-related parameters.
In [36], several scalable problems, i.e., WFG1 to WFG9, are suggested for MOEA designers to test their algorithms, which can be described as follows.
  Minimize  F(X) = (f1(X), ..., fM(X))
  fi(X) = xM + 2i·hi(x1, ..., xM−1)                   (11)
  X = (x1, ..., xM)^T

where hi is a problem-dependent shape function determining the geometry of the fitness space, and X is derived from a vector of working parameters Z = (z1, ..., zn)^T, zi ∈ [0, 2i], by employing four problem-dependent transformation functions t1, t2, t3 and t4. Transformation functions must be designed carefully such that the underlying PF remains intact with a relatively easy-to-determine Pareto optimal set. The WFG Toolkit provides a series of predefined shape and transformation functions to help ensure this is the case. One can refer to [35], [36] for their definitions. Let

  Z′′ = (z1′′, ..., zm′′)^T = t4(t3(t2(t1(Z′))))
  Z′ = (z1/2, ..., zn/(2n))^T.                        (12)
TABLE I
POPULATION SIZES
  M    D1   D2   Population Size
  3    12   –    91
  5    6    –    210
  8    3    2    156
  10   3    2    275
  15   2    1    135
Then xi = zi′′(zi′′ − 0.5) + 0.5 for problem WFG3, whereas X = Z′′ for problems WFG1, WFG2 and WFG4 to WFG9.
The features of WFG1 to WFG9 can be summarized as follows.
• WFG1: A separable and uni-modal problem with a biased PF and a convex and mixed geometry.
• WFG2: A non-separable problem with a convex and disconnected geometry, i.e., the PF of WFG2 is composed of several disconnected convex segments. All of its objectives but fM are uni-modal.
• WFG3: A non-separable and uni-modal problem with a linear and degenerate PF shape, which can be seen as a connected version of WFG2.
• WFG4: A separable and multi-modal problem with large "hill sizes" and a concave geometry.
• WFG5: A separable and deceptive problem with a concave geometry.
• WFG6: A non-separable and uni-modal problem with a concave geometry.
• WFG7: A separable and uni-modal problem with parameter dependency and a concave geometry.
• WFG8: A non-separable and uni-modal problem with parameter dependency and a concave geometry.
• WFG9: A non-separable, deceptive and uni-modal problem with parameter dependency and a concave geometry.
As can be seen from the above, WFG1 and WFG7 are both separable and uni-modal, and WFG8 and WFG9 are non-separable, but the parameter dependency of WFG8 is much harder than that of WFG9. In addition, the deceptiveness of WFG5 is more difficult than that of WFG9, since WFG9 is deceptive only on its position parameters. However, when it comes to the non-separable reduction, WFG6 and WFG9 are more difficult than WFG2 and WFG3. Meanwhile, problems WFG4 to WFG9 share the same PF shape in the objective space, which is a part of a hyper-ellipse with radii ri = 2i, where i ∈ {1, ..., M}.
C. Parameter Settings
The parameter settings of MOEA/GLU are listed as follows.
1) Settings for Crossover Operator: The crossover probability is set as pc = 1.0 and the distribution index is ηc = 30.
2) Settings for Mutation Operator: The mutation probability is set as pm = 0.6/n, which differs from that of MOEA/DD, where it is 1/n. The distribution index is set as ηm = 20.
3) Population Size: The population size of MOEA/GLU is the same as the number of the weight vectors, which
TABLE II
N UMBER OF G ENERATIONS
problem
DTLZ1
DTLZ2
DTLZ3
DTLZ4
4)
5)
6)
7)
8)
9)
10)
M =3
400
250
1000
600
M =5
600
350
1000
1000
M =8
750
500
1000
1250
M = 10
1000
750
1500
2000
M = 15
1500
1000
2000
3000
can be calculated by Eq.(4). Since the divisions for 3and 5-objective instances are set to 12 and 6, and the
population sizes of them are 91 and 210, respectively. As
for 8-, 10- and 15-objective instances, two-layer weight
vector generation method is applied. The divisions and
the population sizes of them are listed in Table I.
Number of Runs:The algorithm is independently run 20
times on each test instance, which is the same as that of
other algorithms for comparison.
Number of Generations: All of the algorithms stopped
at a predefined number of generations. The number of
generations for DTLZ1 to DTLZ4 is listed in Table II,
and the number of generations for all the instances of
WFG1 to WFG9 is 3000.
Penalty Parameter in PBI: θ = 5.0.
Neighborhood Size: T = 20.
Selection Probability: The probability of selecting two
mating individuals from the current neighborhood is set
as ps = 0.9.
Settings for DTLZ1 to DTLZ4:As in papers [21],
[28], the number of the objectives are set as M ∈
{3, 5, 8, 10, 15} for comparative purpose. And the number of the decision variables is set as n = M + r − 1,
where r = 5 for DTLZ1, and r = 10 for DTLZ2,
DTLZ3 and DTLZ4. To calculate the HV value we
set the reference point to (1, ..., 1)T for DTLZ1, and
(2, ..., 2)T DTLZ2 to DTLZ4.
Settings for WFG1 to WFG9: The number of the decision variables is set as n = k + l, where k = 2 × (M − 1)
is the position-related variable and l = 20 is the
distance-related variable. To calculate the HV values for
problems WFG1 to WFG9, the reference point is set to
(3, ..., 2M + 1)T .
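The population sizes quoted in item 3) above (91 for 3 objectives with 12 divisions, 210 for 5 objectives with 6 divisions) follow from the simplex-lattice weight vector design commonly written as C(H + M − 1, M − 1); a minimal check, assuming Eq. (4) is this standard formula:

```python
from math import comb

def n_weight_vectors(m, divisions):
    """Number of simplex-lattice weight vectors for an m-objective
    problem with H divisions per objective: C(H + m - 1, m - 1)."""
    h = divisions
    return comb(h + m - 1, m - 1)

print(n_weight_vectors(3, 12))  # 91
print(n_weight_vectors(5, 6))   # 210
```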
D. Performance Comparisons on DTLZ1 to DTLZ4
We calculate the IGD and HV values of the solution sets found by MOEA/GLU and compare them with the results of MOEA/DD, NSGA-III, MOEA/D and GrEA reported in [21].
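The IGD indicator used in this comparison averages, over a set of points sampled from the true PF, the distance to the nearest obtained solution; lower is better. A minimal sketch (the reference-set sampling used in [21] is not reproduced here):

```python
import math

def igd(reference_set, solutions):
    """Inverted Generational Distance: mean Euclidean distance from
    each reference (true-front) point to its nearest obtained solution."""
    total = 0.0
    for r in reference_set:
        total += min(math.dist(r, s) for s in solutions)
    return total / len(reference_set)

front = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
print(igd(front, front))         # 0.0: the front itself has zero IGD
print(igd(front, [(0.5, 0.5)]))  # > 0: a single solution covers the front poorly
```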
TABLE III
BEST, MEDIAN AND WORST IGD VALUES BY MOEA/GLU, MOEA/DD, NSGA-III, MOEA/D AND GrEA ON DTLZ1, DTLZ2, DTLZ3 AND DTLZ4 INSTANCES WITH DIFFERENT NUMBER OF OBJECTIVES. THE VALUES IN RED ARE THE BEST, AND THE VALUES IN GRAY ARE THE SECOND BEST.

problem | m | MOEA/GLU | MOEA/DD | NSGA-III | MOEA/D | GrEA
(each cell: best / median / worst)
DTLZ1 | 3  | 1.073E-4 / 3.608E-4 / 1.669E-3 | 3.191E-4 / 5.848E-4 / 6.573E-4 | 4.880E-4 / 1.308E-3 / 4.880E-3 | 4.095E-4 / 1.495E-3 / 4.743E-3 | 2.759E-2 / 3.339E-2 / 1.351E-1
DTLZ1 | 5  | 1.732E-4 / 2.115E-4 / 2.395E-4 | 2.635E-4 / 2.916E-4 / 3.109E-4 | 5.116E-4 / 9.799E-4 / 1.979E-3 | 3.179E-4 / 6.372E-4 / 1.635E-3 | 7.369E-2 / 3.363E-1 / 4.937E-1
DTLZ1 | 8  | 1.457E-3 / 2.069E-3 / 3.388E-3 | 1.809E-3 / 2.589E-3 / 2.996E-3 | 2.044E-3 / 3.979E-3 / 8.721E-3 | 3.914E-3 / 6.106E-3 / 8.537E-3 | 1.023E-1 / 1.195E-1 / 3.849E-1
DTLZ1 | 10 | 1.429E-3 / 2.030E-3 / 3.333E-3 | 1.828E-3 / 2.225E-3 / 2.467E-3 | 2.215E-3 / 3.462E-3 / 6.896E-3 | 3.872E-3 / 5.073E-3 / 6.130E-3 | 1.176E-1 / 1.586E-1 / 5.110E-1
DTLZ1 | 15 | 2.261E-3 / 3.652E-3 / 6.111E-3 | 2.867E-3 / 4.203E-3 / 4.669E-3 | 2.649E-3 / 5.063E-3 / 1.123E-2 | 1.236E-2 / 1.431E-2 / 1.692E-2 | 8.061E-1 / 2.057E+0 / 6.307E+1
DTLZ2 | 3  | 4.418E-4 / 5.738E-4 / 7.510E-4 | 6.666E-4 / 8.073E-4 / 1.243E-3 | 1.262E-3 / 1.357E-3 / 2.114E-3 | 5.432E-4 / 6.406E-4 / 8.006E-4 | 6.884E-2 / 7.179E-2 / 7.444E-2
DTLZ2 | 5  | 9.513E-4 / 1.075E-3 / 1.231E-3 | 1.128E-3 / 1.291E-3 / 1.424E-3 | 4.254E-3 / 4.982E-3 / 5.862E-3 | 1.219E-3 / 1.437E-3 / 1.727E-3 | 1.411E-1 / 1.474E-1 / 1.558E-1
DTLZ2 | 8  | 2.553E-3 / 3.038E-3 / 3.375E-3 | 2.880E-3 / 3.291E-3 / 4.106E-3 | 1.371E-2 / 1.571E-2 / 1.811E-2 | 3.097E-3 / 3.763E-3 / 5.198E-3 | 3.453E-1 / 3.731E-1 / 4.126E-1
DTLZ2 | 10 | 2.917E-3 / 3.701E-3 / 4.104E-3 | 3.223E-3 / 3.752E-3 / 4.145E-3 | 1.350E-2 / 1.528E-2 / 1.697E-2 | 2.474E-3 / 2.778E-3 / 3.235E-3 | 4.107E-1 / 4.514E-1 / 5.161E-1
DTLZ2 | 15 | 4.394E-3 / 6.050E-3 / 7.623E-3 | 4.557E-3 / 5.863E-3 / 6.929E-3 | 1.360E-2 / 1.726E-2 / 2.114E-2 | 5.254E-3 / 6.005E-3 / 9.409E-3 | 5.087E-1 / 5.289E-1 / 5.381E-1
DTLZ3 | 3  | 1.598E-4 / 1.257E-3 / 8.138E-3 | 5.690E-4 / 1.892E-3 / 6.231E-3 | 9.751E-4 / 4.007E-3 / 6.665E-3 | 9.773E-4 / 3.426E-3 / 9.113E-3 | 6.770E-2 / 7.693E-2 / 4.474E-1
DTLZ3 | 5  | 2.965E-4 / 8.390E-4 / 2.543E-3 | 6.181E-4 / 1.181E-3 / 4.736E-3 | 3.086E-3 / 5.960E-3 / 1.196E-2 | 1.129E-3 / 2.213E-3 / 6.147E-3 | 5.331E-1 / 8.295E-1 / 1.124E+0
DTLZ3 | 8  | 1.987E-3 / 4.478E-3 / 1.759E-2 | 3.411E-3 / 8.079E-3 / 1.826E-2 | 1.244E-2 / 2.375E-2 / 9.649E-2 | 6.459E-3 / 1.948E-2 / 1.123E+0 | 7.518E-1 / 1.024E+0 / 1.230E+0
DTLZ3 | 10 | 2.173E-3 / 2.663E-3 / 4.795E-3 | 1.689E-3 / 2.164E-3 / 3.226E-3 | 8.849E-3 / 1.188E-2 / 2.082E-2 | 2.791E-3 / 4.319E-3 / 1.010E+0 | 8.656E-1 / 1.145E+0 / 1.265E+0
DTLZ3 | 15 | 5.299E-3 / 8.732E-3 / 1.912E-2 | 5.716E-3 / 7.461E-3 / 1.138E-2 | 1.401E-2 / 2.145E-2 / 4.195E-2 | 4.360E-3 / 1.664E-2 / 1.260E+0 | 9.391E+1 / 1.983E+2 / 3.236E+2
DTLZ4 | 3  | 9.111E-5 / 1.105E-4 / 1.385E-4 | 1.025E-4 / 1.429E-4 / 1.881E-4 | 2.915E-4 / 5.970E-4 / 4.286E-1 | 2.929E-1 / 4.280E-1 / 5.234E-1 | 6.869E-2 / 7.234E-2 / 9.400E-1
DTLZ4 | 5  | 7.218E-5 / 9.255E-5 / 1.115E-4 | 1.097E-4 / 1.296E-4 / 1.532E-4 | 9.849E-4 / 1.255E-3 / 1.721E-3 | 1.080E-1 / 5.787E-1 / 7.348E-1 | 1.422E-1 / 1.462E-1 / 1.609E-1
DTLZ4 | 8  | 3.540E-4 / 4.532E-4 / 5.823E-4 | 5.271E-4 / 6.699E-4 / 9.107E-4 | 5.079E-3 / 7.054E-3 / 6.051E-1 | 5.298E-1 / 8.816E-1 / 9.723E-1 | 3.229E-1 / 3.314E-1 / 3.402E-1
DTLZ4 | 10 | 8.397E-4 / 1.156E-3 / 1.482E-3 | 1.291E-3 / 1.615E-3 / 1.931E-3 | 5.694E-3 / 6.337E-3 / 1.076E-1 | 3.966E-1 / 9.203E-1 / 1.077E+0 | 4.191E-1 / 4.294E-1 / 4.410E-1
DTLZ4 | 15 | 9.325E-4 / 1.517E-3 / 2.427E-3 | 1.474E-3 / 1.881E-3 / 3.159E-3 | 7.110E-3 / 3.431E-1 / 1.073E+0 | 5.890E-1 / 1.133E+0 / 1.249E+0 | 4.975E-1 / 5.032E-1 / 5.136E-1
TABLE IV
BEST, MEDIAN AND WORST HV VALUES BY MOEA/GLU, MOEA/DD, NSGA-III, MOEA/D AND GrEA ON DTLZ1, DTLZ2, DTLZ3 AND DTLZ4 INSTANCES WITH DIFFERENT NUMBER OF OBJECTIVES. THE VALUES IN RED ARE THE BEST, AND THE VALUES IN GRAY ARE THE SECOND BEST.

problem | m | MOEA/GLU | MOEA/DD | NSGA-III | MOEA/D | GrEA
(each cell: best / median / worst)
DTLZ1 | 3  | 0.973657 / 0.973576 / 0.973279 | 0.973597 / 0.973510 / 0.973278 | 0.973519 / 0.973217 / 0.971931 | 0.973541 / 0.973380 / 0.972484 | 0.967404 / 0.964059 / 0.828008
DTLZ1 | 5  | 0.998981 / 0.998976 / 0.998970 | 0.998980 / 0.998975 / 0.998968 | 0.998971 / 0.998963 / 0.998673 | 0.998978 / 0.998969 / 0.998954 | 0.991451 / 0.844529 / 0.500179
DTLZ1 | 8  | 0.999948 / 0.999925 / 0.999888 | 0.999949 / 0.999919 / 0.999887 | 0.999975 / 0.993549 / 0.966432 | 0.999943 / 0.999866 / 0.999549 | 0.999144 / 0.997992 / 0.902697
DTLZ1 | 10 | 0.999991 / 0.999981 / 0.999971 | 0.999994 / 0.999990 / 0.999974 | 0.999991 / 0.999985 / 0.999969 | 0.999983 / 0.999979 / 0.999956 | 0.999451 / 0.998587 / 0.532348
DTLZ1 | 15 | 0.999986 / 0.999923 / 0.999826 | 0.999882 / 0.999797 / 0.999653 | 0.999731 / 0.999686 / 0.999574 | 0.999695 / 0.999542 / 0.999333 | 0.172492 / 0.000000 / 0.000000
DTLZ2 | 3  | 0.926698 / 0.926682 / 0.926652 | 0.926674 / 0.926653 / 0.926596 | 0.926626 / 0.926536 / 0.926359 | 0.926666 / 0.926639 / 0.926613 | 0.924246 / 0.923994 / 0.923675
DTLZ2 | 5  | 0.990545 / 0.990533 / 0.990513 | 0.990535 / 0.990527 / 0.990512 | 0.990459 / 0.990400 / 0.990328 | 0.990529 / 0.990518 / 0.990511 | 0.990359 / 0.990214 / 0.990064
DTLZ2 | 8  | 0.999341 / 0.999326 / 0.999296 | 0.999346 / 0.999337 / 0.999329 | 0.999320 / 0.978936 / 0.919680 | 0.999341 / 0.999329 / 0.999307 | 0.999991 / 0.999670 / 0.989264
DTLZ2 | 10 | 0.999921 / 0.999920 / 0.999919 | 0.999952 / 0.999932 / 0.999921 | 0.999918 / 0.999916 / 0.999915 | 0.999922 / 0.999921 / 0.999919 | 0.997636 / 0.996428 / 0.994729
DTLZ2 | 15 | 0.999998 / 0.999997 / 0.999993 | 0.999976 / 0.999954 / 0.999915 | 0.999975 / 0.999939 / 0.999887 | 0.999967 / 0.999951 / 0.999913 | 0.999524 / 0.999496 / 0.998431
DTLZ3 | 3  | 0.926717 / 0.926457 / 0.924931 | 0.926617 / 0.926346 / 0.924901 | 0.926480 / 0.925805 / 0.924234 | 0.926598 / 0.925855 / 0.923858 | 0.924652 / 0.922650 / 0.621155
DTLZ3 | 5  | 0.990565 / 0.990532 / 0.990451 | 0.990558 / 0.990515 / 0.990349 | 0.990453 / 0.990344 / 0.989510 | 0.990543 / 0.990444 / 0.990258 | 0.963021 / 0.808084 / 0.499908
DTLZ3 | 8  | 0.999345 / 0.999322 / 0.999252 | 0.999343 / 0.999311 / 0.999248 | 0.999300 / 0.924059 / 0.904182 | 0.999328 / 0.999303 / 0.508355 | 0.953478 / 0.791184 / 0.498580
DTLZ3 | 10 | 0.999922 / 0.999921 / 0.999919 | 0.999923 / 0.999922 / 0.999921 | 0.999921 / 0.999918 / 0.999910 | 0.999922 / 0.999920 / 0.999915 | 0.962168 / 0.735934 / 0.499676
DTLZ3 | 15 | 0.999996 / 0.999994 / 0.999990 | 0.999982 / 0.999951 / 0.999915 | 0.999910 / 0.999793 / 0.999780 | 0.999918 / 0.999792 / 0.999628 | 0.000000 / 0.000000 / 0.000000
DTLZ4 | 3  | 0.926731 / 0.926729 / 0.926725 | 0.926731 / 0.926729 / 0.926725 | 0.926659 / 0.926705 / 0.799572 | 0.926729 / 0.926725 / 0.500000 | 0.924613 / 0.924094 / 0.500000
DTLZ4 | 5  | 0.990570 / 0.990570 / 0.990569 | 0.990575 / 0.990573 / 0.990570 | 0.991102 / 0.990413 / 0.990156 | 0.990569 / 0.990568 / 0.973811 | 0.990514 / 0.990409 / 0.990221
DTLZ4 | 8  | 0.999364 / 0.999363 / 0.999362 | 0.999364 / 0.999363 / 0.998360 | 0.999363 / 0.999361 / 0.994784 | 0.999363 / 0.998497 / 0.995753 | 0.999102 / 0.999039 / 0.998955
DTLZ4 | 10 | 0.999919 / 0.999914 / 0.999910 | 0.999921 / 0.999920 / 0.999917 | 0.999915 / 0.999910 / 0.999827 | 0.999918 / 0.999907 / 0.999472 | 0.999653 / 0.999608 / 0.999547
DTLZ4 | 15 | 0.999990 / 0.999979 / 0.999959 | 0.999915 / 0.999762 / 0.999680 | 0.999910 / 0.999581 / 0.617313 | 0.999813 / 0.546405 / 0.502115 | 0.999561 / 0.999539 / 0.999521
1) DTLZ1: From the results listed in Table III and Table IV, it can be seen that MOEA/GLU and MOEA/DD perform better than the other three algorithms on all of the IGD values and most of the HV values. Specifically, MOEA/GLU wins the best and median IGD values of the 3-, 8-, 10- and 15-objective instances, and MOEA/DD wins the worst IGD values of the 3-, 8-, 10- and 15-objective instances. As for the 5-objective instance, MOEA/GLU wins all of the IGD values. When it comes to the HV values, MOEA/GLU performs the best on the 3-, 5- and 15-objective instances, and MOEA/DD shows the best performance on the 10-objective instance, as listed in Table IV. In addition, MOEA/GLU wins the median and worst HV values of the 8-objective instance, and NSGA-III wins its best HV value. Although all of the values obtained by MOEA/GLU and MOEA/DD are close, MOEA/GLU wins most of the IGD and HV values. Therefore, MOEA/GLU can be considered the best optimizer for DTLZ1.
2) DTLZ2: As can be seen from Table III, MOEA/D, MOEA/GLU and MOEA/DD are significantly better than the other two algorithms on all of the IGD values of DTLZ2. Among the IGD values, MOEA/GLU performs the best on the 3-, 5- and 8-objective instances, and MOEA/D performs the best on the 10-objective instance. In addition, MOEA/GLU wins the best value of the 15-objective instance, and MOEA/DD wins its median and worst values. When it comes to the HV values, MOEA/GLU performs the best on the 3-, 5- and 15-objective instances, MOEA/DD performs the best on the 10-objective instance and wins the worst value of the 8-objective instance, and GrEA wins the best and median values of the 8-objective instance. On the whole, the differences among MOEA/GLU, MOEA/DD and MOEA/D are not significant on DTLZ2, but MOEA/GLU wins more values than both MOEA/D and MOEA/DD. Therefore, MOEA/GLU can also be considered the best optimizer for DTLZ2.
3) DTLZ3: Again, MOEA/GLU and MOEA/DD are the best two optimizers for DTLZ3, and their performances are close. As for the IGD values, MOEA/GLU performs the best on the 5- and 8-objective instances, and MOEA/DD performs the best on the 10-objective instance; MOEA/GLU wins the best and median values of the 3-objective instance and the best value of the 15-objective instance, while MOEA/DD wins the median and worst values of the 15-objective instance and the worst value of the 3-objective instance. As far as the HV values are concerned, MOEA/GLU performs the best on the 3-, 5-, 8- and 15-objective instances, and MOEA/DD performs the best on the 10-objective instance. Since MOEA/GLU wins more values than the other four algorithms, it can be considered the best optimizer for DTLZ3.
4) DTLZ4: It is clear that MOEA/GLU performs the best on all of the IGD values of DTLZ4. However, it is hard to distinguish between MOEA/GLU and MOEA/DD when it comes to the HV values. Interestingly, their performances are so close that all of the HV values of the 3-objective instance and the best and median HV values of the 8-objective instance obtained by the two algorithms are equal to 6 significant digits. Taking the performance on the IGD values into consideration, MOEA/GLU is the best optimizer for DTLZ4.
E. Performance Comparisons on WFG1 to WFG9
The HV values of MOEA/GLU, MOEA/DD, MOEA/D and GrEA on WFG1 to WFG5 are listed in Table V, and the HV values on WFG6 to WFG9 are listed in Table VI. The comparison results can be summarized as follows.
1) WFG1: MOEA/DD wins all the values of WFG1 except the worst value of the 3-objective instance, and hence can be regarded as the best optimizer for WFG1.
2) WFG2: MOEA/GLU shows the best performance on the 3-objective instance, while MOEA/DD performs the best on the 8-objective instance. In addition, MOEA/GLU wins the best and median values of the 5- and 10-objective instances, while MOEA/DD wins their worst values. Obviously, MOEA/GLU and MOEA/DD are the best two optimizers for WFG2, but it is hard to tell which one is better, since the differences between them are not significant.
3) WFG3: MOEA/GLU performs the best on the 10-objective instance, MOEA/DD shows the best performance on the 3-objective instance, and GrEA wins on the 5- and 8-objective instances. The values obtained by the three algorithms are very close, and each has its own advantages.
4) WFG4: MOEA/GLU shows the best performance on all values of WFG4, and is considered the winner.
5) WFG5: As with WFG4, MOEA/GLU is the winner of WFG5, since it wins all values except the median and worst values of the 3-objective instance.
6) WFG6: MOEA/GLU and GrEA are the best two optimizers for WFG6. The differences between the values obtained by them are not significant, with ups and downs on both sides. Specifically, MOEA/GLU wins on the 3-objective instance, the best values of the 5- and 8-objective instances, and the median and worst values of the 10-objective instance. GrEA wins all the other values.
7) WFG7: MOEA/GLU wins all the values of WFG7, and is considered the best optimizer.
8) WFG8: MOEA/GLU wins all the values of WFG8 except the best value of the 5-objective instance and the median value of the 8-objective instance. Therefore, it can also be regarded as the best optimizer for WFG8.
9) WFG9: The situation of WFG9 is a little complicated, but it is clear that MOEA/GLU, MOEA/DD and GrEA are all better than MOEA/D. To be specific, GrEA wins on the 8-objective instance, and it might be said that MOEA/DD performs the best on the 3- and 5-objective instances, although its worst value on the 3-objective instance is slightly worse than that of MOEA/GLU. In
TABLE V
BEST, MEDIAN AND WORST HV VALUES BY MOEA/GLU, MOEA/DD, MOEA/D AND GrEA ON WFG1 TO WFG5 INSTANCES WITH DIFFERENT NUMBER OF OBJECTIVES. THE VALUES IN RED ARE THE BEST, AND THE VALUES IN GRAY ARE THE SECOND BEST.

problem | m | MOEA/GLU | MOEA/DD | MOEA/D | GrEA
(each cell: best / median / worst)
WFG1 | 3  | 0.937116 / 0.928797 / 0.915136 | 0.937694 / 0.933402 / 0.899253 | 0.932609 / 0.929839 / 0.815356 | 0.794748 / 0.692567 / 0.627963
WFG1 | 5  | 0.906874 / 0.899351 / 0.862874 | 0.963464 / 0.960897 / 0.959840 | 0.918652 / 0.915737 / 0.912213 | 0.876644 / 0.831814 / 0.790367
WFG1 | 8  | 0.839662 / 0.831208 / 0.781919 | 0.922284 / 0.913024 / 0.877784 | 0.918252 / 0.911586 / 0.808931 | 0.811760 / 0.681959 / 0.616006
WFG1 | 10 | 0.887565 / 0.843225 / 0.794202 | 0.926815 / 0.919789 / 0.864689 | 0.922484 / 0.915715 / 0.813928 | 0.866298 / 0.832016 / 0.757841
WFG2 | 3  | 0.959834 / 0.958155 / 0.808454 | 0.958287 / 0.952467 / 0.803397 | 0.951685 / 0.803246 / 0.796567 | 0.950084 / 0.942908 / 0.800186
WFG2 | 5  | 0.995169 / 0.993049 / 0.813859 | 0.986572 / 0.985129 / 0.980035 | 0.982796 / 0.978832 / 0.807951 | 0.980806 / 0.976837 / 0.808125
WFG2 | 8  | 0.978775 / 0.795215 / 0.778920 | 0.981673 / 0.967265 / 0.789739 | 0.963691 / 0.800333 / 0.787271 | 0.980012 / 0.840293 / 0.778291
WFG2 | 10 | 0.981398 / 0.978021 / 0.779176 | 0.968201 / 0.965345 / 0.961400 | 0.962841 / 0.957434 / 0.773474 | 0.964235 / 0.959740 / 0.956533
WFG3 | 3  | 0.700589 / 0.695748 / 0.689587 | 0.703664 / 0.702964 / 0.701624 | 0.697968 / 0.692355 / 0.679281 | 0.699502 / 0.672221 / 0.662046
WFG3 | 5  | 0.679497 / 0.675726 / 0.662165 | 0.673031 / 0.668938 / 0.662951 | 0.669009 / 0.662925 / 0.654729 | 0.695221 / 0.684583 / 0.671553
WFG3 | 8  | 0.572932 / 0.554256 / 0.526689 | 0.598892 / 0.565609 / 0.556725 | 0.529698 / 0.457703 / 0.439274 | 0.657744 / 0.649020 / 0.638147
WFG3 | 10 | 0.572593 / 0.554042 / 0.531208 | 0.552713 / 0.532897 / 0.504943 | 0.382068 / 0.337978 / 0.262496 | 0.543352 / 0.513261 / 0.501210
WFG4 | 3  | 0.731535 / 0.731180 / 0.730558 | 0.727060 / 0.726927 / 0.726700 | 0.724682 / 0.723945 / 0.723219 | 0.723403 / 0.722997 / 0.722629
WFG4 | 5  | 0.883419 / 0.881701 / 0.880210 | 0.876181 / 0.875836 / 0.875517 | 0.870868 / 0.862132 / 0.844219 | 0.881161 / 0.879484 / 0.877642
WFG4 | 8  | 0.939271 / 0.933853 / 0.926261 | 0.920869 / 0.910146 / 0.902710 | 0.784340 / 0.737386 / 0.718648 | 0.787287 / 0.784141 / 0.679178
WFG4 | 10 | 0.967623 / 0.963674 / 0.951068 | 0.913018 / 0.907040 / 0.888885 | 0.747485 / 0.712680 / 0.649713 | 0.896261 / 0.843257 / 0.840257
WFG5 | 3  | 0.698469 / 0.692607 / 0.685518 | 0.693665 / 0.693544 / 0.691173 | 0.693135 / 0.687378 / 0.681305 | 0.689784 / 0.689177 / 0.688885
WFG5 | 5  | 0.844325 / 0.841781 / 0.838402 | 0.833159 / 0.832710 / 0.830367 | 0.829696 / 0.826739 / 0.812225 | 0.836232 / 0.834726 / 0.832212
WFG5 | 8  | 0.892830 / 0.889458 / 0.884971 | 0.852838 / 0.846736 / 0.830338 | 0.779091 / 0.753486 / 0.705938 | 0.838183 / 0.641973 / 0.571933
WFG5 | 10 | 0.919163 / 0.916148 / 0.911875 | 0.848321 / 0.841118 / 0.829547 | 0.730990 / 0.715161 / 0.673789 | 0.791725 / 0.725198 / 0.685882

TABLE VI
BEST, MEDIAN AND WORST HV VALUES BY MOEA/GLU, MOEA/DD, MOEA/D AND GrEA ON WFG6 TO WFG9 INSTANCES WITH DIFFERENT NUMBER OF OBJECTIVES. THE VALUES IN RED ARE THE BEST, AND THE VALUES IN GRAY ARE THE SECOND BEST.

problem | m | MOEA/GLU | MOEA/DD | MOEA/D | GrEA
(each cell: best / median / worst)
WFG6 | 3  | 0.710228 / 0.701988 / 0.698358 | 0.708910 / 0.699663 / 0.689125 | 0.702840 / 0.695081 / 0.684334 | 0.699876 / 0.693984 / 0.685599
WFG6 | 5  | 0.858096 / 0.846655 / 0.840335 | 0.850531 / 0.838329 / 0.828315 | 0.846015 / 0.813844 / 0.754054 | 0.855839 / 0.847137 / 0.840637
WFG6 | 8  | 0.912150 / 0.901300 / 0.880581 | 0.876310 / 0.863087 / 0.844535 | 0.692409 / 0.661156 / 0.567108 | 0.912095 / 0.902638 / 0.885712
WFG6 | 10 | 0.938343 / 0.927854 / 0.914464 | 0.884394 / 0.859986 / 0.832299 | 0.643198 / 0.582342 / 0.409210 | 0.943454 / 0.927443 / 0.884145
WFG7 | 3  | 0.731908 / 0.731809 / 0.731691 | 0.727069 / 0.727012 / 0.726907 | 0.725252 / 0.724517 / 0.723449 | 0.723229 / 0.722843 / 0.722524
WFG7 | 5  | 0.888158 / 0.887856 / 0.887592 | 0.876409 / 0.876297 / 0.874909 | 0.859727 / 0.843424 / 0.811292 | 0.884174 / 0.883079 / 0.881305
WFG7 | 8  | 0.948854 / 0.947862 / 0.946082 | 0.920763 / 0.917584 / 0.906219 | 0.729953 / 0.708701 / 0.605900 | 0.918742 / 0.910023 / 0.901292
WFG7 | 10 | 0.976171 / 0.975644 / 0.974641 | 0.927666 / 0.923441 / 0.917141 | 0.706473 / 0.625828 / 0.596189 | 0.937582 / 0.902343 / 0.901477
WFG8 | 3  | 0.678825 / 0.677146 / 0.674987 | 0.672022 / 0.670558 / 0.668593 | 0.671355 / 0.669927 / 0.664120 | 0.671845 / 0.669762 / 0.667948
WFG8 | 5  | 0.806626 / 0.805050 / 0.803366 | 0.818663 / 0.795215 / 0.792900 | 0.808204 / 0.793773 / 0.771763 | 0.797496 / 0.792692 / 0.790693
WFG8 | 8  | 0.895652 / 0.845761 / 0.823666 | 0.876929 / 0.845975 / 0.730348 | 0.537772 / 0.446544 / 0.347990 | 0.803050 / 0.799986 / 0.775434
WFG8 | 10 | 0.961919 / 0.923244 / 0.881384 | 0.896317 / 0.844036 / 0.715250 | 0.508652 / 0.350409 / 0.270931 | 0.841704 / 0.838256 / 0.830394
WFG9 | 3  | 0.695369 / 0.642755 / 0.642240 | 0.707269 / 0.687401 / 0.638194 | 0.688940 / 0.681725 / 0.636355 | 0.702489 / 0.638103 / 0.636575
WFG9 | 5  | 0.809717 / 0.751592 / 0.749481 | 0.834616 / 0.797185 / 0.764723 | 0.798069 / 0.789998 / 0.727728 | 0.823916 / 0.753683 / 0.747315
WFG9 | 8  | 0.828505 / 0.809564 / 0.746497 | 0.772671 / 0.759369 / 0.689923 | 0.633476 / 0.604016 / 0.548119 | 0.842953 / 0.831775 / 0.765730
WFG9 | 10 | 0.843321 / 0.830062 / 0.803744 | 0.717168 / 0.717081 / 0.696061 | 0.572925 / 0.546451 / 0.516309 | 0.860676 / 0.706632 / 0.686917
addition, the median and worst values of MOEA/GLU on the 10-objective instance are far better than those of the other algorithms, while its best value is slightly worse than that of GrEA.
On the whole, MOEA/GLU shows a very competitive performance on the WFG test suite, especially on WFG4, WFG5, WFG7 and WFG8, where it wins almost all the HV values.
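The HV indicator compared throughout rewards both convergence and spread: it measures the objective-space volume dominated by the solution set and bounded by the reference point given in Section IV-C. A minimal 2-objective sketch (minimization; the many-objective experiments require dedicated HV algorithms, which this illustrative sweep does not replace):

```python
def hv_2d(points, ref):
    """2-D hypervolume (minimization): area dominated by `points`
    and bounded by the reference point `ref`."""
    # keep only points that strictly dominate the reference point
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    volume, prev_y = 0.0, ref[1]
    for x, y in pts:  # sweep in increasing f1; add each new strip of area
        if y < prev_y:
            volume += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return volume

# two mutually non-dominated points: 2x1 rectangle plus 1x1 rectangle
print(hv_2d([(1.0, 2.0), (2.0, 1.0)], ref=(3.0, 3.0)))  # 3.0
```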
F. Performance Comparisons of Algorithms with Different
Criteria for Comparison
In this subsection, we compare the HV values obtained by MOEA/GLU with different criteria for comparing solutions on WFG1 to WFG9 with different numbers of objectives. The HV values are listed in Table VII, and the comparison results can be summarized as follows. For the sake of convenience, we denote MOEA/GLU with the PBI, H1 and H2 criteria as MOEA/GLU-PBI, MOEA/GLU-H1 and MOEA/GLU-H2, respectively.
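For reference, the PBI criterion named above aggregates an objective vector F against a weight vector w as g = d1 + θ·d2, where d1 is the length of the projection of F − z* onto w, d2 is the perpendicular distance to the weight direction, and θ = 5.0 in our settings. A minimal sketch of PBI only (the hybrid criteria H1 and H2 proposed in this paper are not reproduced here):

```python
import math

def pbi(f, w, z_star, theta=5.0):
    """Penalty-based boundary intersection value of objective vector f
    for weight vector w and ideal point z_star: g = d1 + theta * d2."""
    diff = [fi - zi for fi, zi in zip(f, z_star)]
    w_norm = math.sqrt(sum(wi * wi for wi in w))
    # d1: projection of (f - z*) onto the weight direction
    d1 = abs(sum(di * wi for di, wi in zip(diff, w))) / w_norm
    # d2: perpendicular distance from (f - z*) to the weight direction
    d2 = math.sqrt(sum((di - d1 * wi / w_norm) ** 2
                       for di, wi in zip(diff, w)))
    return d1 + theta * d2

# a point exactly on the weight direction has d2 = 0
print(pbi([0.5, 0.5], w=[1.0, 1.0], z_star=[0.0, 0.0]))  # ≈ 0.7071 (= sqrt(2)/2)
```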
• WFG1: MOEA/GLU-H2 is the best optimizer, since it performs the best on all instances of WFG1.
• WFG2: MOEA/GLU-H2 wins all values of WFG2 except the best value of the 3-objective instance. Therefore, it is considered the best optimizer for WFG2.
• WFG3: the situation is a little complicated for WFG3. Specifically, MOEA/GLU-PBI wins the worst value of the 5-objective instance and the best value of the 8-objective instance, and MOEA/GLU-H1 wins the best and median values of the 3-objective instance, while MOEA/GLU-H2 is the best on the other values. Therefore, MOEA/GLU-H2 can be considered the best optimizer for WFG3.
• WFG4: MOEA/GLU-H1 is the best optimizer, winning all the values of WFG4.
• WFG5: MOEA/GLU-PBI has the best median value for the 3-objective instance of WFG5, and MOEA/GLU-H2 wins the worst value of the 3-objective instance, while MOEA/GLU-H1 performs the best on all the other values. Therefore, MOEA/GLU-H1 is considered the best optimizer for WFG5.
• WFG6: MOEA/GLU-PBI has the best median value for the 5-objective instance of WFG6, and MOEA/GLU-H2 wins the best value of the 8-objective instance and the worst value of the 10-objective instance, while MOEA/GLU-H1 performs the best on all other values. Therefore, MOEA/GLU-H1 is considered the best optimizer for WFG6.
• WFG7: since MOEA/GLU-H1 wins all the values of WFG7 except the worst value of its 3-objective instance, it is considered the best optimizer.
• WFG8: MOEA/GLU-PBI performs the worst on all the values of WFG8. MOEA/GLU-H1 wins on its 3-objective instance, while MOEA/GLU-H2 wins on the 10-objective instance. As for the 5- and 8-objective instances, MOEA/GLU-H1 wins the median and worst values, and MOEA/GLU-H2 wins the best values. It is clear that MOEA/GLU-PBI is the worst optimizer for WFG8; however, as for MOEA/GLU-H1 and MOEA/GLU-H2, it is still hard to say which of the two is better for WFG8.
• WFG9: MOEA/GLU-PBI wins the best value of the 5-objective instance, the worst value of the 8-objective instance, and the median value of the 10-objective instance of WFG9. MOEA/GLU-H2 wins only the worst value of the 10-objective instance, while MOEA/GLU-H1 wins all the other values of WFG9. Therefore, MOEA/GLU-H1 can be considered the best optimizer for WFG9.
On the whole, MOEA/GLU-H1 is the best optimizer for WFG4 to WFG7 and WFG9, and MOEA/GLU-H2 is the best for WFG1 to WFG3. As for WFG8, both MOEA/GLU-H1 and MOEA/GLU-H2 are better than MOEA/GLU-PBI, but it is hard to say which of the two is better. These results indicate that the performance of MOEA/GLU is affected by the criterion it adopts for comparing solutions.
V. CONCLUSION
In this paper, we propose a MOEA with the so-called GLU strategy, i.e., MOEA/GLU. The main ideas of MOEA/GLU can be summarized as follows. Firstly, MOEA/GLU employs a set of weight vectors to decompose a given MOP into a set of subproblems and optimizes them simultaneously, which is similar to other decomposition-based MOEAs. Secondly, each individual is attached to a weight vector and a weight vector owns only one individual in MOEA/GLU, which is the same as in MOEA/D but different from MOEA/DD. Thirdly, MOEA/GLU adopts a global update strategy, i.e., the GLU strategy. Our experiments indicate that the GLU strategy can overcome the disadvantages of MOEAs with local update strategies discussed in Section II, although it makes the time complexity of the algorithm higher than that of MOEA/D. These three main ideas make MOEA/GLU a different algorithm from other MOEAs such as MOEA/D, MOEA/DD and NSGA-III. Additionally, the GLU strategy is simpler than the update strategies of MOEA/DD and NSGA-III, and the time complexity of MOEA/GLU is the same as that of MOEA/DD, but worse than that of MOEA/D.
Our algorithm is compared with several other MOEAs, i.e., MOEA/D, MOEA/DD, NSGA-III and GrEA, on the 3-, 5-, 8-, 10- and 15-objective instances of DTLZ1 to DTLZ4 and the 3-, 5-, 8- and 10-objective instances of WFG1 to WFG9. The experimental results show that our algorithm wins in most of the instances. In addition, we suggest two hybrid criteria for comparing solutions and compare them with the PBI criterion. The empirical results show that the two hybrid criteria are very competitive on the 3-, 5-, 8- and 10-objective instances of WFG1 to WFG9.
Our future work can be carried out in the following three aspects. Firstly, it is interesting to study the performance of MOEA/GLU on other MOPs, such as the ZDT test problems, the CEC2009 test problems, the combinatorial optimization problems that appear in [37], [38], and especially some real-world problems with a large number of objectives. Secondly, it might be valuable to apply the two hybrid criteria for comparing solutions to other MOEAs. Thirdly, MOEA/GLU could be improved to overcome its shortcomings. As we can see, the algorithm has at least two shortcomings. One is that all of its experimental results on WFG1 are worse than those of MOEA/DD except for the best HV value of the 3-objective instance. The other is that its time complexity is worse than that of MOEA/D. Further research is needed to overcome these two shortcomings.
ACKNOWLEDGMENT
The authors would like to thank Qingfu Zhang and Ke Li for generously providing the Java codes of MOEA/D and
MOEA/DD.
REFERENCES
[1] S. Panda, "Multi-objective evolutionary algorithm for SSSC-based controller design," Electric Power Systems Research, vol. 79, no. 6, pp. 937–944, 2009.
[2] G. Fu, Z. Kapelan, J. R. Kasprzyk, and P. Reed, "Optimal design of water distribution systems using many-objective visual analytics," Journal of Water Resources Planning & Management, vol. 139, no. 6, pp. 624–633, 2013.
[3] R. J. Lygoe, M. Cary, and P. J. Fleming, "A real-world application of a many-objective optimisation complexity reduction process," in International Conference on Evolutionary Multi-Criterion Optimization, 2013, pp. 641–655.
[4] O. Chikumbo, E. Goodman, and K. Deb, "Approximating a multi-dimensional Pareto front for a land use management problem: A modified MOEA with an epigenetic silencing metaphor," in Evolutionary Computation, 2012, pp. 1–9.
[5] T. Ganesan, I. Elamvazuthi, and P. Vasant, "Multiobjective design optimization of a nano-CMOS voltage-controlled oscillator using game theoretic-differential evolution," Applied Soft Computing, vol. 32, pp. 293–299, 2015.
[6] T. Ganesan, I. Elamvazuthi, K. Z. K. Shaari, and P. Vasant, Hypervolume-Driven Analytical Programming for Solar-Powered Irrigation System Optimization. Heidelberg: Springer International Publishing, 2013, pp. 147–154.
TABLE VII
BEST, MEDIAN AND WORST HV VALUES BY MOEA/GLU WITH THREE DIFFERENT CRITERIA: PBI, H1 AND H2 ON INSTANCES OF WFG1 TO WFG9 WITH 3, 5, 8 AND 10 OBJECTIVES. THE VALUES IN RED FONT ARE THE BEST.

problem | m | PBI | H1 | H2
(each cell: best / median / worst)
WFG1 | 3  | 0.932478 / 0.919064 / 0.907255 | 0.937488 / 0.924179 / 0.903350 | 0.944690 / 0.939554 / 0.922585
WFG1 | 5  | 0.909324 / 0.897364 / 0.860576 | 0.908070 / 0.898754 / 0.873925 | 0.931487 / 0.922682 / 0.895859
WFG1 | 8  | 0.843381 / 0.831511 / 0.743512 | 0.844850 / 0.787144 / 0.706517 | 0.918863 / 0.867385 / 0.851531
WFG1 | 10 | 0.849143 / 0.844050 / 0.785389 | 0.876852 / 0.846479 / 0.746061 | 0.918984 / 0.877223 / 0.869973
WFG2 | 3  | 0.957614 / 0.955452 / 0.810162 | 0.958855 / 0.884332 / 0.810262 | 0.958309 / 0.955588 / 0.811369
WFG2 | 5  | 0.994983 / 0.993170 / 0.814648 | 0.994851 / 0.993575 / 0.815756 | 0.997409 / 0.996668 / 0.816768
WFG2 | 8  | 0.978952 / 0.961105 / 0.776725 | 0.976819 / 0.970539 / 0.773388 | 0.992986 / 0.983890 / 0.800754
WFG2 | 10 | 0.982992 / 0.978054 / 0.781965 | 0.980088 / 0.788371 / 0.770895 | 0.995465 / 0.993171 / 0.801428
WFG3 | 3  | 0.690390 / 0.685735 / 0.675661 | 0.700632 / 0.696145 / 0.684853 | 0.699646 / 0.692045 / 0.685778
WFG3 | 5  | 0.679563 / 0.673978 / 0.669298 | 0.681048 / 0.676910 / 0.666466 | 0.682644 / 0.676994 / 0.666298
WFG3 | 8  | 0.586041 / 0.550607 / 0.532548 | 0.578472 / 0.552667 / 0.540043 | 0.605893 / 0.592805 / 0.581368
WFG3 | 10 | 0.574988 / 0.549540 / 0.529822 | 0.565746 / 0.548960 / 0.508638 | 0.574100 / 0.555640 / 0.534696
WFG4 | 3  | 0.727187 / 0.726180 / 0.724841 | 0.731777 / 0.731550 / 0.731255 | 0.731411 / 0.730869 / 0.730343
WFG4 | 5  | 0.881501 / 0.879884 / 0.876368 | 0.885457 / 0.884489 / 0.882801 | 0.883788 / 0.881128 / 0.878642
WFG4 | 8  | 0.934740 / 0.930307 / 0.921402 | 0.940762 / 0.937541 / 0.930123 | 0.938517 / 0.928525 / 0.920050
WFG4 | 10 | 0.967500 / 0.961761 / 0.951755 | 0.971182 / 0.968851 / 0.964127 | 0.963828 / 0.959950 / 0.953305
WFG5 | 3  | 0.697911 / 0.695548 / 0.690822 | 0.698434 / 0.695167 / 0.685562 | 0.698400 / 0.693015 / 0.692078
WFG5 | 5  | 0.843774 / 0.840175 / 0.833143 | 0.845047 / 0.844616 / 0.843364 | 0.844654 / 0.842033 / 0.838127
WFG5 | 8  | 0.891617 / 0.888071 / 0.882510 | 0.895639 / 0.893686 / 0.887629 | 0.892399 / 0.888761 / 0.882556
WFG5 | 10 | 0.917412 / 0.916424 / 0.909849 | 0.920249 / 0.918867 / 0.916739 | 0.917481 / 0.915163 / 0.911581
WFG6 | 3  | 0.708928 / 0.699913 / 0.692277 | 0.713367 / 0.703288 / 0.699125 | 0.713228 / 0.702914 / 0.694282
WFG6 | 5  | 0.859456 / 0.847747 / 0.832130 | 0.856809 / 0.849112 / 0.840610 | 0.858598 / 0.848262 / 0.838766
WFG6 | 8  | 0.916292 / 0.900157 / 0.883341 | 0.913827 / 0.902256 / 0.893311 | 0.916850 / 0.897580 / 0.884989
WFG6 | 10 | 0.937329 / 0.923721 / 0.911045 | 0.941969 / 0.928654 / 0.915935 | 0.938397 / 0.926283 / 0.916056
WFG7 | 3  | 0.729737 / 0.728050 / 0.727450 | 0.731874 / 0.731773 / 0.731506 | 0.731854 / 0.731742 / 0.731573
WFG7 | 5  | 0.887632 / 0.887157 / 0.886747 | 0.888112 / 0.888013 / 0.887752 | 0.887789 / 0.887662 / 0.887201
WFG7 | 8  | 0.946957 / 0.945634 / 0.944550 | 0.948562 / 0.947616 / 0.945989 | 0.947672 / 0.946120 / 0.944323
WFG7 | 10 | 0.976190 / 0.975073 / 0.974186 | 0.976717 / 0.975950 / 0.975258 | 0.976109 / 0.975063 / 0.974119
WFG8 | 3  | 0.672998 / 0.671628 / 0.666242 | 0.678718 / 0.676451 / 0.674706 | 0.677100 / 0.675703 / 0.671803
WFG8 | 5  | 0.805790 / 0.804308 / 0.803095 | 0.808181 / 0.805537 / 0.804091 | 0.821287 / 0.804396 / 0.801026
WFG8 | 8  | 0.877491 / 0.844134 / 0.811508 | 0.889190 / 0.844953 / 0.824885 | 0.902643 / 0.842148 / 0.819766
WFG8 | 10 | 0.942152 / 0.905199 / 0.877100 | 0.934571 / 0.908858 / 0.885137 | 0.948360 / 0.924340 / 0.886750
WFG9 | 3  | 0.689751 / 0.641612 / 0.638641 | 0.697593 / 0.642693 / 0.642119 | 0.693049 / 0.642552 / 0.641827
WFG9 | 5  | 0.806133 / 0.750934 / 0.748504 | 0.799793 / 0.753767 / 0.749479 | 0.790106 / 0.750096 / 0.746307
WFG9 | 8  | 0.816273 / 0.803022 / 0.761689 | 0.824253 / 0.803785 / 0.740563 | 0.823161 / 0.797496 / 0.741695
WFG9 | 10 | 0.837137 / 0.825754 / 0.762914 | 0.839769 / 0.825240 / 0.757464 | 0.835042 / 0.823727 / 0.764740
[7] F. Domingo-Perez, J. L. Lazaro-Galilea, A. Wieser, E. Martin-Gorostiza, D. Salido-Monzu, and A. de la Llana, "Sensor placement determination for range-difference positioning using evolutionary multi-objective optimization," Expert Systems with Applications, vol. 47, pp. 95–105, 2016.
[8] B. Najafi, A. Shirazi, M. Aminyavari, F. Rinaldi, and R. A. Taylor, "Exergetic, economic and environmental analyses and multi-objective optimization of an SOFC-gas turbine hybrid cycle coupled with an MSF desalination system," Desalination, vol. 334, no. 1, pp. 46–59, 2014.
[9] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Neural Networks, 1995. Proceedings., IEEE International Conference on, vol. 4, Nov 1995, pp. 1942–1948.
[10] B. Suman and P. Kumar, "A survey of simulated annealing as a tool for single and multiobjective optimization," Journal of the Operational Research Society, vol. 57, no. 10, pp. 1143–1160, 2006.
[11] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, Apr 2002.
[12] E. Zitzler, M. Laumanns, and L. Thiele, "SPEA2: Improving the strength Pareto evolutionary algorithm for multiobjective optimization," in Evolutionary Methods for Design, Optimisation, and Control. CIMNE, Barcelona, Spain, 2002, pp. 95–100.
[13] Q. Zhang and H. Li, "MOEA/D: A multiobjective evolutionary algorithm based on decomposition," IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712–731, Dec 2007.
[14] A. Trivedi, D. Srinivasan, K. Sanyal, and A. Ghosh, "A survey of multiobjective evolutionary algorithms based on decomposition," IEEE Transactions on Evolutionary Computation, vol. PP, no. 99, pp. 1–1, 2016.
[15] M. Emmerich, N. Beume, and B. Naujoks, An EMO Algorithm Using the Hypervolume Measure as Selection Criterion. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005, pp. 62–76.
[16] K. Deb and H. Jain, "An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: Solving problems with box constraints," IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, pp. 577–601, Aug 2014.
[17] R. Carvalho, R. R. Saldanha, B. N. Gomes, A. C. Lisboa, and A. X. Martins, "A multi-objective evolutionary algorithm based on decomposition for optimal design of Yagi-Uda antennas," IEEE Transactions on Magnetics, vol. 48, no. 2, pp. 803–806, Feb 2012.
[18] T. Ray, M. Asafuddoula, and A. Isaacs, "A steady state decomposition based quantum genetic algorithm for many objective optimization," in 2013 IEEE Congress on Evolutionary Computation, June 2013, pp. 2817–2824.
[19] H. H. Tam, M. F. Leung, Z. Wang, S. C. Ng, C. C. Cheung, and A. K. Lui, "Improved adaptive global replacement scheme for MOEA/D-AGR," in 2016 IEEE Congress on Evolutionary Computation (CEC), July 2016, pp. 2153–2160.
[20] R. Cheng, Y. Jin, M. Olhofer, and B. Sendhoff, "A reference vector guided evolutionary algorithm for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 20, no. 5, pp. 773–791, Oct 2016.
[21] K. Li, K. Deb, Q. Zhang, and S. Kwong, "An evolutionary many-objective optimization algorithm based on dominance and decomposition," IEEE Transactions on Evolutionary Computation, vol. 19, no. 5, pp. 694–716, Oct 2015.
[22] J. Chen, J. Li, and B. Xin, "DMOEA-εC: Decomposition-based multiobjective evolutionary algorithm with the ε-constraint framework," IEEE Transactions on Evolutionary Computation, vol. 21, no. 5, pp. 714–730, Oct 2017.
[23] A. Trivedi, D. Srinivasan, K. Sanyal, and A. Ghosh, "A survey of multiobjective evolutionary algorithms based on decomposition," IEEE Transactions on Evolutionary Computation, vol. 21, no. 3, pp. 440–462, June 2017.
[24] K. Deb and R. B. Agrawal, "Simulated binary crossover for continuous search space," vol. 9, no. 3, pp. 115–148, 2000.
[25] K. Deb and M. Goyal, "A combined genetic adaptive search (GeneAS) for engineering design," 1999, pp. 30–45.
[26] K. Miettinen, Nonlinear Multiobjective Optimization. Norwell, MA: Kluwer, 1999.
[27] H. Ishibuchi, N. Tsukamoto, and Y. Nojima, "Evolutionary many-objective optimization: A short review," pp. 2419–2426, 2008.
[28] M. Asafuddoula, T. Ray, and R. Sarker, "A decomposition-based evolutionary algorithm for many objective optimization," IEEE Transactions on Evolutionary Computation, vol. 19, no. 3, pp. 445–460, June 2015.
[29] I. Das and J. E. Dennis, "Normal-boundary intersection: A new method for generating the Pareto surface in nonlinear multicriteria optimization problems," SIAM Journal on Optimization, vol. 8, no. 3, pp. 631–657, 2006.
[30] P. A. N. Bosman and D. Thierens, "The balance between proximity and diversity in multiobjective evolutionary algorithms," IEEE Transactions on Evolutionary Computation, vol. 7, no. 2, pp. 174–188, April 2003.
E. Zitzler and L. Thiele, “Multiobjective evolutionary algorithms: a comparative case study and the strength pareto approach,” IEEE Transactions
on Evolutionary Computation, vol. 3, no. 4, pp. 257–271, Nov 1999.
J. Bader and E. Zitzler, “Hype: An algorithm for fast hypervolume-based
many-objective optimization,” Evolutionary Computation, vol. 19, no. 1,
pp. 45–76, 2011.
L. While, L. Bradstreet, and L. Barone, “A fast way of calculating
exact hypervolumes,” IEEE Transactions on Evolutionary Computation,
vol. 16, no. 1, pp. 86–95, Feb 2012.
K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, “Scalable test problems
for evolutionary multiobjective optimization,” pp. 105–145, 2001.
S. Huband, L. Barone, L. While, and P. Hingston, “A scalable multiobjective test problem toolkit,” Lecture Notes in Computer Science, vol.
3410, pp. 280–295, 2005.
S. Huband, P. Hingston, L. Barone, and L. While, “A review of
multiobjective test problems and a scalable test problem toolkit,” IEEE
Transactions on Evolutionary Computation, vol. 10, no. 5, pp. 477–506,
Oct 2006.
E. Zitzler and L. Thiele, “Multiobjective evolutionary algorithms: a comparative case study and the strength pareto approach,” IEEE Transactions
on Evolutionary Computation, vol. 3, no. 4, pp. 257–271, 1999.
[38] H. Ishibuchi, Y. Hitotsuyanagi, N. Tsukamoto, and Y. Nojima, “Manyobjective test problems to visually examine the behavior of multiobjective evolution in a decision space,” in International Conference on
Parallel Problem Solving From Nature, 2010, pp. 91–100.
GENERATION OF UNSTRUCTURED MESHES IN 2-D, 3-D, AND
SPHERICAL GEOMETRIES WITH EMBEDDED HIGH
RESOLUTION SUB-REGIONS∗
arXiv:1711.06333v1 [cs.CG] 14 Nov 2017

JORGE M. TARAMÓN†, JASON P. MORGAN†‡, CHAO SHI‡, AND JÖRG HASENCLEVER§
Abstract. We present 2-D, 3-D, and spherical mesh generators for the Finite Element Method
(FEM) using triangular and tetrahedral elements. The mesh nodes are treated as if they were linked
by virtual springs that obey Hooke’s law. Given the desired length for the springs, the FEM is used to
solve for the optimal nodal positions for the static equilibrium of this spring system. A 'guide-mesh'
approach allows the user to create embedded high resolution sub-regions within a coarser mesh. The
method converges rapidly. For example, in 3-D, the algorithm is able to refine a specific region within
an unstructured tetrahedral spherical shell so that the edge-length factor l0r /l0c = 1/33 within a few
iterations, where l0r and l0c are the desired spring length for elements inside the refined and coarse
regions respectively. One use for this type of mesh is to model regional problems as a fine region
within a global mesh that has no fictitious boundaries, at only a small additional computational cost.
The algorithm also includes routines to locally improve the quality of the mesh and to avoid badly
shaped 'sliver-like' tetrahedra.
Key words. Finite Element Method, Unstructured tetrahedral mesh, Embedded high resolution
sub-region
AMS subject classifications. 65D18, 68U01, 68W05
1. Introduction. Mesh generation and (adaptive) refinement are essential ingredients for computational modelling in various scientific and industrial fields. A
particular design metric or goal is the quality of the generated mesh, because low-quality meshes can potentially lead to larger numerical approximation errors. A high-quality mesh would consist of triangles (in 2-D) or tetrahedra (in 3-D) that have
aspect ratios near 1, i.e. their sides should have similar lengths. The techniques to
generate meshes can be crudely classified into three groups: (1) The advancing front
method [20, 24, 9, 15] starts from the boundary of the domain. New elements are
created one-by-one from an existing front of elements towards the interior until the
region is filled. Advancing front methods generally create high-quality meshes close
to the domain boundaries but can have difficulties in regions where advancing fronts
merge. (2) Octree-based methods [21, 18, 16] produce graded meshes through recursive subdivision of the domain. The simplicity of these methods makes them very
efficient. However, poorly shaped elements can be introduced near region boundaries.
(3) Delaunay Triangulation ensures that the circumcircle/circumsphere associated to
each triangle/tetrahedron does not contain any other point in its interior. This feature makes Delaunay-based methods [7, 23, 8, 26] robust and efficient. However, in
3-D they can generate very poorly shaped tetrahedra with four almost coplanar vertex
nodes. These so-called 'sliver' elements have a volume near zero. Several techniques to
remove slivers have been proposed [6, 19, 5] although some slivers near the boundaries
can typically persist [13].
Current mesh generation algorithms oriented to engineering such as Gmsh [14],
∗ Submitted to the editors 14 Nov 2017.
Funding: This work was funded by the COMPASS Consortium.
† Department of Earth Sciences, Royal Holloway University of London, Egham, UK ([email protected]).
‡ EAS Dept., Snee Hall, Cornell University, Ithaca, NY, USA
§ Institute of Geophysics, Hamburg University, Hamburg, Germany
GiD (https://www.gidhome.com) or TetGen [28] are based on the methods described
above. Variational methods [1] rely on energy minimization to optimize the mesh
during the generation procedure in order to create higher-quality meshes. A widely
used open access community-code for 2-D mesh generation is Triangle [25], however
there is no 3-D version of this mesh generator. DistMesh [22] is an elegant and simple
spring-based method that allows the user to create 2D and 3D unstructured meshes
based on the distance from any point to the boundary of the domain. However this
algorithm is often slow, requiring many steps to converge.
Any ’good’ mesh should be able to meet the following requirements [3]: (1) It
conforms to the boundary; (2) It is fine enough in those regions where the problem
to be solved demands higher accuracy; (3) Its total number of elements is as small
as possible to reduce the size of the problem and the computational costs to solve
it; (4) It has well-shaped elements to improve the performance of iterative methods
such as the conjugate gradient method [27]. Frequently used mesh generators in 3-D
geodynamic problems are the ones included in the ASPECT [17], Rhea [4] and Fluidity
[11] codes. ASPECT and Rhea are written in C++ with adaptive mesh refinement
(AMR). However, their regular hexahedral elements create so-called "hanging nodes"
in regions where the resolution changes, and cannot be directly applied to create well-formed tetrahedral elements. Fluidity is another example of AMR for a tetrahedral
mesh. However it has very limited mesh generation capabilities, and in this context
mesh-generation should not be confused with mesh adaptivity.
Here we present a new unstructured mesh generator that is based on a finite
element implementation of the DistMesh approach using virtual springs between nodes
and solving for the equilibrium positions of the nodes. We modify the DistMesh
solution procedure to directly solve for static equilibrium. Our method is considerably
faster than the DistMesh code. It also allows the user to create tetrahedral meshes
without hanging nodes. The user can also create embedded high resolution sub-regions
within a global coarse mesh. This approach becomes very useful when the goal is to
create a mesh that minimizes the number of fictitious internal boundaries within a
computational problem.
A key design goal is the generation of a Delaunay mesh using a built-in MATLAB
triangulation function called 'delaunay'. Throughout the algorithm, this 'delaunay'
function is called to generate the spring connectivity matrix that relates nodes to
triangles or tetrahedra. We have also developed and tested techniques for adding or
rejecting nodes in regions where the mesh resolution is too high or too low respectively.
A smooth variation in the element size between high resolution and low resolution
regions is achieved by using a guide-mesh approach. These local operations improve
the quality of the relatively few poorly shaped elements that can result from the
fictitious spring algorithm used to determine good nodal locations. The mesh-generation
code is written in vectorized MATLAB, and can be easily used within the MATLAB
working environment.
We will present this approach first in its simplest form for making a mesh in a
well-defined rectangular 2-D region (Section 2). In Section 3 we show how a 2-D
cylindrical annulus mesh can be generated with small modifications to the previous
rectangular mesh generator algorithm. In Section 4 we present the modifications
needed to create the 3-D spherical shell mesh that we are using to solve for mantle
flow.
2. 2-D Rectangular work flow. This mesh generation algorithm has its simplest form as a program to create a 2-D rectangular mesh with an embedded high
Fig. 1. Flow chart for the mesh generator iterative process. Yellow, orange and green boxes
represent the routines exclusively used for creating 2-D rectangular meshes, 2-D cylindrical annulus
meshes and 3-D spherical shell meshes, respectively. White boxes represent the shared routines to
all mesh generators. µ is the mean of the misfit spring lengths (equation (16)) and q is the quality
factor of the elements (equations (13) and (31) for triangular and tetrahedral elements respectively).
Tolerance parameters µt , qt and q̄t are listed in Table 1.
resolution sub-region. The white and yellow boxes in Figure 1 show the flowchart
that describes this algorithm.
Step 1: Definition of preferred nodal distances and initial placement
of the nodes. The first step in this recipe is to define the preferred nodal distances
within the refined (l0r ) and coarse (l0c ) regions as well as the dimensions of the regions.
In order to avoid poor quality elements, an appropriate smooth transition for the mesh
refinement should be specified. Here we choose a preferred spring-length function that
is defined on a so-called 'guide-mesh'. This approach is very similar to the background
grid approach created by [20]. The generation of a refined rectangular mesh using the
guide-mesh approach involves the following steps. First, create a (coarse) mesh to
serve as a guide-mesh with only a small number of nodes defining the boundaries of
the domain and the internal boundaries of the embedded high resolution and transition
sub-regions. Second, create the design function l0(x, y) for each node of the guide-mesh. This function defines the desired length for the springs around those points.
Third, the function l0 (x, y) is evaluated at the midpoint of all springs using linear
Finite Element shape functions. We find that a coarse guide-mesh is a simple and
flexible way to control nodal spacing during the generation of a Finite Element mesh.
Figure 2a shows the guide-mesh for a rectangular mesh example whose parameters are
listed in Table 1. Red and blue dots represent nodes in the guide-mesh with defined
Fig. 2. (a) Guide-mesh defined by a few nodes in Cartesian coordinates for a rectangular mesh.
The parameters for this mesh are listed in Table 1. Each node is assigned a value for the desired
spring length, being l0r for red dots and l0c for blue dots. The length of the springs within the refined
region (in red) is approximately equal to l0r . The length of the springs within the transition region
(in green) varies smoothly from l0r to l0c . The length of springs within the coarse region (in blue)
is approximately equal to l0c . (b) Initial guess for the rectangular mesh. (c) Zoom around the left
boundary of the refined region for the initial guess (yellow line in (b)). The guide-mesh defining
refined (red) and transition (green) regions is shown in white dashed lines.
l0r and l0c , respectively. The red region represents the refined region of the mesh
with spring length approximately equal to l0r . The green region defines the transition
region where the length of the springs smoothly varies from l0r to l0c . The blue region
represents the coarse region of the mesh with an approximate spring length of l0c.
The next step is to create a starting guess for the locations of the nodes. Computational work is reduced considerably with a good initial guess for the density of the
nodes. Nodes on the boundary and within the domain are created taking into account
both the location of the refined region and the desired spring lengths for elements inside the refined and coarse regions. Boundary nodes in the refined and coarse regions
are created using l0r and l0c respectively for the spacing between the nodes. The
interior nodes within the refined and coarse regions are created using a circle packing
lattice with radius equal to l0r /2 and l0c /2 respectively. This fills each region with
an equilateral triangular tiling. In the transition region the size of the elements is
expected to change smoothly between l0r and l0c . The initial placement for boundary
and interior nodes in the transition region is created using l0r as explained above.
After this step, the rejection method described in [22] is used to discard points and
create a 'balanced' initial distribution of nodes. After performing a Delaunay triangulation, a quasi-regular mesh of triangles within the refined and coarse regions, with
a poorly structured transition region between them is created (Figure 2b). Figure 2c
shows a zoom of the initial mesh with the guide-mesh also shown.
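The linear interpolation of the desired spring length l0 on the guide-mesh can be sketched as follows. This is a minimal Python/NumPy illustration (the paper's code is vectorized MATLAB); the triangle coordinates, nodal l0 values, and function names are hypothetical and only stand in for the guide-mesh machinery described above.

```python
import numpy as np

def barycentric_coords(tri, p):
    """Barycentric coordinates of point p inside triangle tri (3x2 array)."""
    a, b, c = tri
    m = np.column_stack((b - a, c - a))      # 2x2 local Jacobian
    l2, l3 = np.linalg.solve(m, p - a)       # local coordinates of p
    return np.array([1.0 - l2 - l3, l2, l3])

def interp_l0(tri, l0_nodes, p):
    """Evaluate the desired spring length l0 at p by linear FE interpolation
    of the nodal values attached to the enclosing guide-mesh triangle."""
    return barycentric_coords(tri, p) @ l0_nodes

# hypothetical guide-mesh triangle spanning the transition region,
# with l0r = 1.0 at the first node and l0c = 5.0 at the other two
tri = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
l0_nodes = np.array([1.0, 5.0, 5.0])
print(interp_l0(tri, l0_nodes, np.array([5.0, 0.0])))   # edge midpoint -> 3.0
```

In the actual algorithm this evaluation would be performed at the midpoint of every spring, which is how the smooth l0r-to-l0c transition is imposed.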
Step 2: Spring-based solver. Inspired by [22], to generate an unstructured
mesh we link the future locations of finite element nodes with virtual elastic springs.
The spring length is used to define the desired nodal distance within any mesh region,
i.e., short springs lead to mesh regions with higher resolution and longer springs lead
to lower resolution mesh regions. Nodal positions are solved for so that the global
network of virtual springs is in static equilibrium. The behaviour of each fictitious
Fig. 3. (a) Virtual spring in the 2-D space. Both global reference system (X, Y) and local reference system (X′, Y′) are shown. (b) Virtual spring in the 3-D space. Both global reference system (X, Y, Z) and local reference system (X′, Y′, Z′) are shown. Grey dots represent two nodes
linked by the virtual spring. Red arrows represent the forces acting at each end of the spring.
spring is described by Hooke’s law
(1)  F = −kδs
where F is the force acting at each end of spring, k is the stiffness of the spring, and
δs is the distance the spring is stretched or compressed from its equilibrium length
l0 . Forces and nodal positions are expressed in x, y coordinates in 2-D (Figure 3a).
Because Hooke's law is formulated along the spring direction, it is necessary to introduce the X′ axis as the local 1-D reference system to solve for the nodal positions.
Hooke’s law for each spring in the local 1-D reference system is given by
(2a)  f1′ =  kδs =  k(x2′ − x1′ − l0)
(2b)  f2′ = −kδs = −k(x2′ − x1′ − l0)
where f′ and x′ are the force and position of the ends of the spring given by the
subscripts 1 and 2, respectively. Writing equations (2a) and (2b) in matrix form, and
moving the force terms to the left hand side yields
(3)  [f1′]       [−1   1] [ 0]       [−1   1] [x1′]
     [f2′]  + k  [ 1  −1] [l0]  =  k [ 1  −1] [x2′]
In order to solve for the nodal positions in 2-D, a change from local coordinates (x1′, 0; x2′, 0) to global coordinates (x1, y1; x2, y2) is needed. This change of coordinates
is described in matrix form as
(4)  R2D = [ cos α   sin α     0       0    ]
           [   0       0     cos α   sin α  ]
where α is the angle of the X′ axis measured from the X axis in the counterclockwise
direction (Figure 3a). Applying equation (4) to equation (3) (see section 5 for further
details), equation (3) becomes
(5)  k [ −cα²     −sα cα    cα²      sα cα ] [x1]   [f1,x]          [ cα]
       [ −sα cα   −sα²      sα cα    sα²   ] [y1] = [f1,y] + k l0   [ sα]
       [  cα²      sα cα   −cα²     −sα cα ] [x2]   [f2,x]          [−cα]
       [  sα cα    sα²     −sα cα   −sα²   ] [y2]   [f2,y]          [−sα]
Fig. 4. Implementation of boundary conditions along a straight tilted segment (yellow dashed
line) for one triangle. A rotation is needed for node 2 in order to pass from the global reference
system (X, Y) to the local reference system (X′, Y′) where y2′ = 0 is the constrained boundary
condition.
where sα ≡ sin α and cα ≡ cos α. Equation (5) can be written in matrix form as
(6)  Kx = f + fl0
where K is the stiffness matrix, x is the nodal displacement vector, f is the element
force vector and fl0 is the force-term created by the fact that the springs would
have zero-force at their desired length. Because the system of equations is solved
for its equilibrium steady state, f = 0. A vectorized ’blocking’ technique based
on the MATLAB methodology described in the MILAMIN code [10] is employed to
speed up the assembly of the stiffness matrix. The solution to this problem is the
’optimal’ position of each node obtained from the inversion of the system of static
force equilibrium equations
(7)  x = K⁻¹ fl0
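The assembly-and-solve loop can be sketched as below. This is a minimal Python/NumPy illustration, not the authors' MATLAB code: the element matrix is written in the standard positive semi-definite sign convention (equation (5) as printed is this system multiplied by −1), the spring angles are frozen at the current positions and the solve is repeated, and the three-node test configuration is invented for the example.

```python
import numpy as np

def assemble(x, springs, l0, k=1.0):
    """Assemble the global stiffness K and rest-length force fl0 (eqs (5)-(6)),
    with spring directions frozen at the current nodal positions."""
    n = x.size // 2
    K = np.zeros((2 * n, 2 * n))
    f = np.zeros(2 * n)
    pts = x.reshape(-1, 2)
    for (i, j), L0 in zip(springs, l0):
        dx, dy = pts[j] - pts[i]
        L = np.hypot(dx, dy)
        c, s = dx / L, dy / L                      # cos/sin of the spring angle
        ke = k * np.array([[ c*c,  c*s, -c*c, -c*s],
                           [ c*s,  s*s, -c*s, -s*s],
                           [-c*c, -c*s,  c*c,  c*s],
                           [-c*s, -s*s,  c*s,  s*s]])
        fe = k * L0 * np.array([-c, -s, c, s])     # rest-length force term
        d = [2*i, 2*i + 1, 2*j, 2*j + 1]
        K[np.ix_(d, d)] += ke
        f[d] += fe
    return K, f

def solve_positions(nodes, springs, l0, fixed, iters=30):
    """Repeat the static solve x = K^(-1) fl0 (eq (7)), pinning 'fixed' DOFs."""
    x = nodes.astype(float).flatten()
    free = np.setdiff1d(np.arange(x.size), fixed)
    for _ in range(iters):
        K, f = assemble(x, springs, l0)
        rhs = f[free] - K[np.ix_(free, fixed)] @ x[fixed]
        x[free] = np.linalg.solve(K[np.ix_(free, free)], rhs)
    return x.reshape(-1, 2)

# two springs of rest length 1.25 joining a free middle node to two pinned
# end nodes; the middle node settles where both springs are unstretched
nodes = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 0.0]])
pos = solve_positions(nodes, [(0, 1), (1, 2)], l0=[1.25, 1.25],
                      fixed=np.array([0, 1, 4, 5]))
print(np.round(pos[1], 4))   # -> [1.   0.75]
```

Because the angles α depend on the unknown positions, a single linear solve is only exact for a frozen geometry; re-assembling and re-solving a few times converges to the true equilibrium, which mirrors the outer iteration of Figure 1.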
Straight line Boundary Conditions. Boundary conditions are necessary to
constrain the mesh to the desired domain boundaries, and to differentiate between
boundary and interior nodes. In the simple case of a rectangular mesh, a boundary
node is free to slide along a domain edge parallel to the X- or Y-axis. We achieve
this by setting one of its yi or xi values to be fixed and letting the other value vary
so that the node is free to move along the boundary segment. In the case of a
general line that is not parallel to the X- or Y -axes, this requires a transformation
from global coordinates to a new local coordinate system in which the constraint
direction is parallel to a local coordinate axis. In other words, the new local axes
have to be parallel to and perpendicular to the boundary segment. For simplicity,
the mathematical implementation is shown for one triangle (Figure 4). Node 2 is
free to slide along the tilted segment (yellow dashed line in Figure 4) since y2′ = 0
defines the boundary constraint. The boundary condition is imposed by a rotation
of coordinate system for node 2 given by the transformation matrix T that relates
global coordinates x to local coordinates x′ by

(8)  [x1]   [ 1                                  ] [x1 ]
     [y1]   [     1                              ] [y1 ]
     [x2]   [        cos α2   −sin α2            ] [x2′]
     [y2] = [        sin α2    cos α2            ] [ 0 ]
     [x3]   [                            1       ] [x3 ]
     [y3]   [                                1   ] [y3 ]

where the 6×6 matrix is T, the left-hand vector is x, and the right-hand vector is x′ (in which y2′ = 0).
Applying the transformation matrix to the stiffness matrix and force vector

(9)   K′ = Tᵀ K T
(10)  fl0′ = Tᵀ fl0

the new system of equations is given by

(11)  K′ x′ = fl0′

which is solved for x′. When desired, the original global coordinates are recovered through the transformation matrix

(12)  x = T x′
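For a single 2-DOF node the rotation trick of equations (8)-(12) reduces to a few lines. The sketch below is a hypothetical Python/NumPy illustration (function name and the test stiffness K = 2I are invented): it rotates the system so the constraint becomes y′ = 0, solves the reduced equation, and rotates back.

```python
import numpy as np

def solve_on_tilted_boundary(K, f, theta):
    """Constrain one 2-DOF node to slide along a line at angle theta through
    the origin: rotate to local coordinates (eq (8)), transform the system
    (eqs (9)-(10)), impose y' = 0 and solve (eq (11)), rotate back (eq (12))."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.array([[c, -s],
                  [s,  c]])                   # x = T x'
    Kp = T.T @ K @ T                          # K'  = T^T K T
    fp = T.T @ f                              # f'  = T^T f
    xp = np.array([fp[0] / Kp[0, 0], 0.0])    # y' = 0; reduced 1x1 solve
    return T @ xp                             # recover global coordinates

# with K = 2I the result is the projection of the unconstrained solution
# f/2 onto the boundary direction (cos(theta), sin(theta))
K = 2.0 * np.eye(2)
f = np.array([2.0, 1.0])
x = solve_on_tilted_boundary(K, f, np.pi / 6)
print(np.round(x, 4))   # lies on the 30-degree line
```

The same pattern extends to a full mesh by applying the rotation block only to the rows and columns of the boundary nodes, exactly as the triangle example of Figure 4 suggests.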
Step 3: Mesh refinement. In this algorithm we refine a mesh by decreasing
the element size in the region of interest. One common issue in the refinement process
arises from the size contrast between large and small elements within a short spatial
interval so that poorly-shaped elements with short and long edges may form. In order
to mitigate this issue a transition region surrounding the refined region is defined
using the guide-mesh approach described above (see Figure 2a).
Quality factor for triangles. The ’quality’ of a mesh is determined by assessing the quality of its individual elements. This usually involves measures of angles,
edge lengths, areas (in 2-D), volumes (in 3-D), or the radius of its inscribed and circumscribed circles/spheres, see e.g., [12, 27]. Here we use a normalized quality factor,
which in 2-D is given by
(13)  q2D = 2rc / Rc
where rc is the radius of the element’s inscribed circle and Rc is the radius of its
circumscribed circle. Rc and rc can be expressed as
(14)  rc = (1/2) √[ (b + c − a)(c + a − b)(a + b − c) / (a + b + c) ]

(15)  Rc = abc / √[ (a + b + c)(b + c − a)(c + a − b)(a + b − c) ]
where a, b and c are the side lengths of the triangle. A fair criterion to evaluate the
quality of a mesh is to provide the minimum and mean values of the quality factor,
cf. [1]. Here both are used as control parameters to determine when the iterative
algorithm has reached the desired mesh quality tolerances (Figure 1).
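The quality factor of equations (13)-(15) is straightforward to compute from the three side lengths. A small Python sketch (function name ours, not from the paper's code):

```python
import math

def quality_2d(a, b, c):
    """Normalized triangle quality q = 2*rc/Rc from side lengths a, b, c
    (eqs (13)-(15)); q = 1 for an equilateral triangle, q -> 0 as the
    triangle degenerates."""
    s = (b + c - a) * (c + a - b) * (a + b - c)
    rc = 0.5 * math.sqrt(s / (a + b + c))           # inradius,    eq (14)
    Rc = a * b * c / math.sqrt((a + b + c) * s)     # circumradius, eq (15)
    return 2.0 * rc / Rc

print(quality_2d(1.0, 1.0, 1.0))   # equilateral -> 1.0
print(quality_2d(3.0, 4.0, 5.0))   # right triangle -> 0.8
print(quality_2d(1.0, 1.0, 1.9))   # nearly degenerate -> about 0.19
```

For the 3-4-5 triangle the inradius is 1 and the circumradius 2.5, so q = 0.8, which is a handy sanity check for the formulas.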
Step 4: Local mesh improvements. So far the above algorithm would only
move nodes within the domain to meet the desired spring lengths/internodal distances.
However, in general we do not know a priori how many nodes are needed for a mesh.
Therefore we use algorithms to locally add and remove nodes where the spacing is too
loose or tight in the equilibrium configuration. After solving for nodal positions, we
check if the mesh has reached the expected nodal density by determining the mean
of the misfit in spring lengths (Figure 1). This is given by
(16)  µ = (1/N) Σ_{i=1}^{N} (li − l0i) / l0i
where l is the actual spring length, l0 is the desired spring length and N is the total
number of springs in the mesh. Nodes are added or rejected (see below) if µ ≥ µt .
When µ < µt the expected nodal density is achieved and element shape improvements
(see below) are applied to obtain higher quality elements. After some experimentation
we found it appropriate to use 0.02 < µt < 0.05 for 2-D meshes.
Add/reject nodes. In the iterative process of mesh generation the possibility
to either add or reject nodes plays an important local role. This feature is especially
relevant when the goal is to create a global coarse mesh with an embedded high
resolution sub-region. The logic for adding or rejecting nodes is based on the relative
length change of the springs connecting nodes
(17)  ε = (l − l0) / l0
indicating whether springs are stretched (ε > 0) or compressed (ε < 0) with respect
to their desired lengths. A new node is created at the midpoint of those springs
with ε > 0.5, i.e., springs stretched more than 50% greater than their desired length.
One node at the end of a spring is rejected when ε < −0.5, i.e., springs compressed
more than 50% below their desired length. In order to save computational time,
the add/reject nodes routine is called as a sub-iteration within the main iteration in
which nodal positions are found. Sub-iterations are performed until the percentage
of springs with |ε| > 0.5 in the sub-iteration j + 1 is higher than in the sub-iteration
j. This implementation is especially useful when a large fraction of nodes need to be
either added or rejected within a particular region of the mesh, e.g., when a relatively
poor initial guess is used.
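The misfit of equation (16) and the ε-based add/reject rule of equation (17) can be sketched as follows. This Python/NumPy fragment is illustrative only (names and the two-spring example are ours); the paper's implementation additionally re-triangulates and sub-iterates as described above.

```python
import numpy as np

def spring_lengths(nodes, springs):
    return np.array([np.linalg.norm(nodes[j] - nodes[i]) for i, j in springs])

def mean_misfit(nodes, springs, l0):
    """Mean relative misfit of the spring lengths, eq (16)."""
    l = spring_lengths(nodes, springs)
    return np.mean((l - l0) / l0)

def add_reject(nodes, springs, l0):
    """eq (17): split springs stretched beyond 50% at their midpoint and
    flag one end node of springs compressed by more than 50%."""
    eps = (spring_lengths(nodes, springs) - l0) / l0
    new_nodes = [0.5 * (nodes[i] + nodes[j])
                 for (i, j), e in zip(springs, eps) if e > 0.5]
    reject = {springs[k][1] for k in np.flatnonzero(eps < -0.5)}
    return np.array(new_nodes), reject

# one badly stretched spring (length 2, rest length 1) and one badly
# compressed spring (length 0.1, rest length 1)
nodes = np.array([[0.0, 0.0], [2.0, 0.0], [2.1, 0.0]])
springs = [(0, 1), (1, 2)]
l0 = np.array([1.0, 1.0])
new_nodes, reject = add_reject(nodes, springs, l0)
print(new_nodes)   # midpoint of the over-stretched spring: [[1. 0.]]
print(reject)      # end node of the over-compressed spring: {2}
```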
Smooth positions of the interior nodes. Good quality meshes are directly
related to the generation of isotropic elements [1]. A Laplacian smoothing criteria, cf.
[9], is used to improve the shape of poorly shaped elements, i.e., to make elements as
close to equilateral triangles or regular tetrahedra as possible. This method is only
applied to interior nodes. The routine repositions interior nodes towards the mean of
the barycentres of their surrounding elements, i.e.,
(18)  xs = ( Σ_{i=1}^{N} xbi ) / N
where xs are the new coordinates of the interior node, N is the number of elements
surrounding the interior node and xbi are the barycentre coordinates of the i-th
surrounding element. Figure 13 shows an example of smoothing positions of interior
nodes for a 2-D mesh.
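The barycentre-averaging smoother of equation (18) can be sketched in a few lines of Python/NumPy. The function and the one-interior-node square mesh below are illustrative assumptions, not the paper's MATLAB routine; repeated passes pull the perturbed centre node back to the centroid of the patch.

```python
import numpy as np

def smooth_interior(nodes, triangles, interior):
    """Move each interior node to the mean of the barycentres of the
    elements surrounding it (eq (18))."""
    nodes = nodes.copy()
    bary = nodes[triangles].mean(axis=1)     # barycentre of every element
    for n in interior:
        ring = np.flatnonzero((triangles == n).any(axis=1))
        nodes[n] = bary[ring].mean(axis=0)
    return nodes

# unit square triangulated around one interior node, initially off-centre
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.4, 0.6]])
tris = np.array([[0, 1, 4], [1, 2, 4], [2, 3, 4], [3, 0, 4]])
for _ in range(20):                          # repeated smoothing passes
    nodes = smooth_interior(nodes, tris, [4])
print(np.round(nodes[4], 6))                 # -> [0.5 0.5]
```

In this symmetric configuration each pass shrinks the distance to the centroid by a factor of 3, so a handful of passes is enough in practice.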
Table 1
Mesh Parameters.

| Symbol | Meaning | Rectangular box | Cylindrical annulus | Spherical shell |
| d | Depth | 2900 km | - | - |
| l | Length | 40000 km | - | - |
| ri | Inner radius | - | 3471 km | 3471 km |
| ro | Outer radius | - | 6371 km | 6371 km |
| x0 | x-coordinate centre of refined region | 0 km | - | - |
| z0 | z-coordinate centre of refined region | 0 km | - | - |
| θ0 | Colatitude centre of refined region | - | 90° | 90° |
| φ0 | Longitude centre of refined region | - | - | 90° |
| r0 | Radial distance centre of refined region | - | 6371 km | 6371 km |
| l0c | Desired spring length for elements inside the coarse region | 1500 km | 2000 km | 2000 km |
| l0r | Desired spring length for elements inside the refined region | 7.5 km | 10 km | 60 km |
| dt | Transition region depth | 2900 km | 2900 km | 2900 km |
| lt | Transition region length | 8000 km | 8000 km | 6800 km |
| wt | Transition region width | - | - | 9600 km |
| dr | Refined region depth | 300 km | 300 km | 300 km |
| lr | Refined region length | 3333 km | 3333 km | 2200 km |
| wr | Refined region width | - | - | 5000 km |
| qt | Tolerance for minimum quality factor | 0.45 | 0.30 | 0.23 |
| q̄t | Tolerance for mean quality factor | 0.89 | 0.93 | 0.86 |
| µt | Tolerance for mean misfit spring length | 0.025 | 0.04 | 0.11 |
Example: Rectangular mesh with an embedded high resolution region. Several tests have been performed with the above implementations in order to
demonstrate the robustness of this mesh-generation recipe. As an example, we show
the results for a rectangular box with an embedded high-resolution sub-region (code
available in section 5). The input parameters that control the algorithm are listed
in Table 1. The algorithm created the mesh in 9 s (all tests in this study have been
performed using MATLAB R2015a (8.5.0.197613) on a 3.2 GHz Intel Core i5 (MacOSX 10.12.5) with 24 GB of 1600 MHz DDR3 memory) after eight outermost loop
iterations (cf. Figure 1). Figure 5a shows the final mesh (top) and a zoom around
the left boundary of the refined region (bottom) for iteration 8 (see Figure 14
for iterations 0 (initial mesh) and 1). The final mesh has 22000 nodes forming 43000
triangles (Table 2) with an edge-length factor l0r /l0c = 1/200. The percentage of
triangles within the coarse, transition and refined regions is 0.3%, 6.3% and 93.4%
respectively. The lowest quality factor for an element is 0.51 (red line in Figure 5b)
and the mean quality factor for all elements is 0.99 (blue line in Figure 5b). Only
0.12% of the triangles have a quality factor lower than 0.6 (green line in Figure 5b).
Figure 5c shows the fraction of elements as a function of quality factor for the final
mesh.
Fig. 5. (a) Final mesh (top) for a rectangular box with an embedded high resolution sub-region
and a zoom around the left boundary of the refined region (bottom). (b) Minimum quality factor (red
line), mean quality factor for all elements (blue line) and percentage of elements having a quality
factor lower than 0.6% (green line) as a function of iteration number. (c) Histogram of the fraction
of elements as a function of quality factor for the final mesh.
Table 2
Information on example meshes.

| Mesh | nodes | elements | time (s) | iterations | time per node (s) | time per element (s) |
| Rectangular box | 22000 | 43000 | 9 | 8 | 4.1·10⁻⁴ | 2.1·10⁻⁴ |
| Cylindrical annulus | 12000 | 23000 | 17 | 5 | 1.4·10⁻³ | 7.4·10⁻⁴ |
| Spherical shell | 27000 | 150000 | 224 | 10 | 8.3·10⁻³ | 1.5·10⁻³ |
3. 2-D Cylindrical annulus work flow. The algorithm presented above needs
to be slightly modified to generate a cylindrical annulus mesh. The white and orange
boxes in Figure 1 show the flowchart that describes this modified algorithm. Since
the general algorithm is the same, in this section we only discuss the parts that differ
from the rectangular mesh generator described previously.
Cylindrical annulus guide-mesh. The generation of a refined cylindrical annulus mesh using the guide-mesh involves the same steps as for a rectangular mesh
except that the function l0 (x, y) becomes l0 (θ, r). In this case the guide-mesh is a
coarse cylindrical annulus mesh defined in polar coordinates. Figure 6a shows the
guide-mesh (white dashed lines) defining the refined (red), transition (green) and
coarse (blue) regions and the parameters are listed in Table 1. Red and blue dots represent l0 r and l0 c respectively. The initial triangulation is shown in black solid lines.
Figure 6c shows a zoom of the guide-mesh defined in polar coordinates. Green dots
represent the points where the function l0 (θ, r) is interpolated. The use of a guide-
Fig. 6. (a) Guide-mesh (white dashed lines) defined by a few nodes (red and blue dots represent
l0 r and l0 c respectively) in polar coordinates for a cylindrical annulus mesh (initial guess is shown
in black solid lines). Red, green and blue colours represent the refined, transition and coarse regions
respectively. (b) Guide-mesh defined in Cartesian coordinates. Same colours as in (a). (c) Zoom
around an edge of the transition region in polar coordinates. The function l0 (θ, r) can be interpolated
at green dots with maximum precision since both boundaries – the cylindrical annulus mesh and its
guide-mesh – are overlapping. (d) Zoom around an edge of the transition region in Cartesian
coordinates. The function l0 (x, y) cannot be interpolated at magenta dots since they lie outside of
the outer boundary of a Cartesian guide-mesh. The precision of the interpolated l0 values at yellow
dots is reduced since both boundaries – the cylindrical annulus mesh and its guide-mesh – do not
overlap.
mesh defined in polar coordinates (white dashed lines in Figure 6a and Figure 6c)
instead of Cartesian coordinates (white dashed lines in Figure 6b and Figure 6d)
takes advantage of higher precision when l0 values are interpolated at points both
close to and on the boundaries (green dots in Figure 6c). This is because the shapes of
the outer and inner boundaries of any cylindrical annulus mesh defined in Cartesian
coordinates are not perfectly circular (Figure 6b). Therefore, some
boundary points (magenta dots in Figure 6d) may lie outside of the boundaries of
a Cartesian guide-mesh (which can be a very coarse mesh) preventing accurate interpolation for the desired length at those points. Furthermore, the fact that both
boundaries – the cylindrical annulus mesh and its guide-mesh – would not overlap in
a Cartesian geometry would reduce the precision of the interpolated l0 values (yellow
dots in Figure 6d).
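The interpolation of the preferred length from the guide-mesh can be sketched as follows. This is an illustrative Python/NumPy fragment, not the paper's own code: it assumes the guide values sit on a regular (θ, r) grid, whereas the actual guide-mesh is a coarse triangulation interpolated element-wise, and the function name and signature are ours.

```python
import numpy as np

def interp_l0_polar(theta, r, theta_grid, r_grid, l0_grid):
    """Bilinear interpolation of the preferred length l0(theta, r) from coarse
    guide values on a regular polar grid (simplifying assumption; the paper's
    guide-mesh is a coarse triangulation)."""
    # Locate the grid cell containing (theta, r), clamping at the boundaries.
    i = int(np.clip(np.searchsorted(theta_grid, theta) - 1, 0, len(theta_grid) - 2))
    j = int(np.clip(np.searchsorted(r_grid, r) - 1, 0, len(r_grid) - 2))
    # Normalized coordinates inside the cell.
    t = (theta - theta_grid[i]) / (theta_grid[i + 1] - theta_grid[i])
    u = (r - r_grid[j]) / (r_grid[j + 1] - r_grid[j])
    # Bilinear blend of the four surrounding guide values.
    return ((1 - t) * (1 - u) * l0_grid[i, j] + t * (1 - u) * l0_grid[i + 1, j]
            + (1 - t) * u * l0_grid[i, j + 1] + t * u * l0_grid[i + 1, j + 1])
```

Because the polar guide-mesh boundaries coincide exactly with the annulus boundaries, no query point falls outside the grid, which is the precision advantage discussed above.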
12
J. M. TARAMÓN, J. P. MORGAN, C. SHI, AND J. HASENCLEVER
Fig. 7. (a) Conceptual diagram for circular boundary conditions. The motion of boundary
nodes is first restricted to be along the tangent line to the circle. Then they are ’pulled back’ to the
circle by projecting in the radial direction. (b) Implementation of circular boundary conditions for
one triangle. A rotation is needed for the node 2 in order to pass from the global reference system
(X, Y) to the local surface-parallel reference system (X', Y') where y2' = |r| is the constrained
boundary condition.
Circular Boundary Conditions. Boundary conditions for a cylindrical annulus mesh are a generalization of the treatment for a straight-sided boundary line segment. We denote the inner and outer boundaries Σ of the cylindrical annulus
mesh as radii r = rinner and r = router respectively. Ω is the interior region confined
between both boundaries. A useful boundary condition is to prescribe nodes on Σ
that are free to move along the circular boundary. This nodal motion is generated by
two independent steps (Figure 7a): 1) The node is allowed to move along the tangent
line to the circle at its current location, and 2) the node is placed onto the circle by projecting its new location in the radial direction. This approximation assumes that the
radial distance needed to put the node back onto the circle is small compared to the
distance moved along the tangent line. For simplicity, the mathematical implementation is presented here only for one triangle (Figure 7b). The boundary condition for
node 2 is that it slides along its tangent line (dashed line in Figure 7b) since y2' = |r|,
where r is the radial distance from the centre of the cylindrical annulus mesh to the
boundary. The boundary condition is imposed by a rotation of the coordinate system
for node 2 given by the transformation matrix T that relates global coordinates x
with local coordinates x' (local surface-parallel reference system (X', Y') in green in
Figure 7b) by
(19)
$$
\underbrace{\begin{pmatrix} x_1 \\ y_1 \\ x_2 \\ y_2 \\ x_3 \\ y_3 \end{pmatrix}}_{x}
=
\underbrace{\begin{pmatrix}
1 & & & & & \\
 & 1 & & & & \\
 & & \cos\theta_2 & \sin\theta_2 & & \\
 & & -\sin\theta_2 & \cos\theta_2 & & \\
 & & & & 1 & \\
 & & & & & 1
\end{pmatrix}}_{T}
\underbrace{\begin{pmatrix} x_1 \\ y_1 \\ x_2' \\ |r| \\ x_3 \\ y_3 \end{pmatrix}}_{x'}
$$
where θ2 is the angle of node 2 measured from the Y axis in the clockwise direction.
After applying the transformation matrix to the stiffness matrix and force vector
(20) $K' = T^{T} K T$
Fig. 8. (a) Final mesh for a cylindrical annulus with an embedded high resolution sub-region.
(b) Zoom around an edge of the refined region. (c) Minimum quality factor (red line), mean quality
factor for all elements (blue line) and percentage of elements having a quality factor lower than
0.6 (green line) as a function of iteration number. (d) Histogram of the fraction of elements as a
function of quality factor for the final mesh.
(21) $f_{l_0}' = T^{T} f_{l_0}$

the new system of equations is given by

(22) $K' x' = f_{l_0}'$

which is then solved for $x'$. Global coordinates are recovered through the transformation matrix

(23) $x = T x'$
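The constrained solve of equations (19)–(23) can be sketched as follows; this is an illustrative Python/NumPy fragment (the paper's code is in section 5 and is not reproduced here), with a hypothetical helper that builds T for one boundary node of a triangle.

```python
import numpy as np

def circular_bc_T(theta2, n_nodes=3, node=1):
    """Transformation matrix T of eq. (19): identity for free nodes, plus a
    2x2 rotation block for the boundary node (0-based index `node`) whose
    local y' axis points radially, so that y2' = |r| pins it to the circle.
    Helper name and layout are ours."""
    T = np.eye(2 * n_nodes)
    c, s = np.cos(theta2), np.sin(theta2)
    i = 2 * node
    T[i:i + 2, i:i + 2] = [[c, s], [-s, c]]
    return T

# Rotate stiffness matrix and force vector (eqs. (20)-(21)), solve, map back:
#   Kp = T.T @ K @ T;  fp = T.T @ f;  solve Kp @ xp = fp;  x = T @ xp
```

With x2' = 0 and y2' = |r|, T maps the local coordinates of node 2 to a global point on the circle at clockwise angle θ2 from the Y axis, as in Figure 7b.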
Add/reject nodes in cylindrical annulus meshes. The routine to add or
reject nodes for a cylindrical annulus mesh works like the one explained above for
a rectangular mesh. The only difference appears when a new node is added on a
boundary spring. In this case, the new boundary node needs to be projected onto the
surface along the radial direction.
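The radial projection of a newly added boundary node can be written in a few lines; a minimal sketch (pure Python, function name ours), assuming the mesh centre is at the origin.

```python
import math

def project_radially(p, radius):
    """Project a 2-D or 3-D point onto the circle/sphere of the given radius
    along the radial direction from the mesh centre (assumed at the origin).
    Used when a node added at the midpoint of a boundary spring must be
    returned to the curved boundary."""
    norm = math.sqrt(sum(c * c for c in p))
    return tuple(c * radius / norm for c in p)
```

For example, the midpoint of two boundary nodes lies slightly inside the circle; projecting it restores |p| = radius while keeping its angular position.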
Example: Cylindrical annulus mesh with an embedded high resolution
region. We show the results for a cylindrical annulus mesh with an embedded high-resolution sub-region (code available in section 5). The input generation parameters
are listed in Table 1. The algorithm created the mesh in 17 s after 5 iterations.
Figure 8a shows the final mesh (top) and a zoom around an edge of the refined region
(bottom) for iteration 5 (see Figure 15 for iterations 0 (initial mesh) and 1). The final
Fig. 9. (a) Guide-mesh defined by a few nodes (red and blue dots represent l0r and l0c respectively) in spherical coordinates for a spherical shell. The length of the springs within the refined
region (red) is approximately equal to l0r . The length of the springs within the transition region
(green) smoothly varies from l0r to l0c . Outside the transition region the length of the springs is
approximately equal to l0c . (b) Model domain representing a 3-D spherical shell with an embedded
high resolution sub-region.
mesh has 12000 nodes forming 23000 triangular elements (Table 2) with an edge-length
factor l0r /l0c = 1/200. The percentage of triangles within the coarse, transition and
refined regions is 0.2%, 6.0% and 93.8% respectively. The worst quality factor for an
element is 0.44 (red line in Figure 8b) and the mean quality factor of all elements
is 0.98 (blue line in Figure 8b). Only 0.13% of the triangles have a quality factor
lower than 0.6 (green line in Figure 8b). Figure 8c shows the fraction of elements as
a function of their quality factor for the final mesh.
4. 3-D Spherical shell work flow. The algorithm presented above was developed as an intermediate step towards the generation of 3-D spherical shell meshes that
include an embedded high resolution sub-region. The white and green backgrounds in
Figure 1 show the flowchart that describes the 3-D spherical algorithm. In this section
we discuss those parts of the algorithm that differ from the cylindrical annulus mesh
generator.
Initial placement of the nodes in 3-D. The boundary nodes in the refined and
coarse regions are created by recursively splitting an initial dodecahedron according
to l0r and l0c respectively. This gives a uniform distribution of equilateral triangles
on the spherical surface. In contrast to equilateral triangles in 2-D, which are able to
fill up the plane, regular tetrahedra do not fill up the entire space. However, there
do exist some compact lattices, e.g., the hexagonal close packing (hcp) lattice, that
create a distribution of nodes that leads to well shaped tetrahedra. The interior nodes
within the refined and coarse regions are created by a close-packing of equal spheres
with radii equal to l0r /2 and l0c /2 respectively. The initial placement for boundary
and interior nodes in the transition region is created using l0r as explained above.
Then the rejection method described in [22] is used to discard points and create a
weighted distribution of nodes.
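A close-packed seeding of interior nodes can be sketched as below. The lattice orientation and indexing are our own choice; the paper only specifies a close-packing of equal spheres with radii l0/2 (i.e. nearest-neighbour spacing l0).

```python
import numpy as np

def hcp_points(nx, ny, nz, d):
    """Interior-node seeds on a hexagonal close-packed (hcp) lattice with
    nearest-neighbour spacing d (= 2 * sphere radius). Illustrative sketch."""
    pts = []
    for k in range(nz):          # layers (ABAB... stacking)
        for j in range(ny):      # rows of a triangular lattice
            for i in range(nx):
                # Alternate rows/layers are shifted so every node has
                # 12 neighbours at distance d.
                x = d * (i + 0.5 * ((j + k) % 2))
                y = d * (np.sqrt(3.0) / 2.0) * (j + (k % 2) / 3.0)
                z = d * np.sqrt(2.0 / 3.0) * k
                pts.append((x, y, z))
    return np.array(pts)
```

Connecting nearest neighbours of such a lattice yields well-shaped tetrahedra, which is why it is preferred over a simple cubic arrangement.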
Spherical shell guide-mesh. The generation of a refined spherical shell mesh
using the guide-mesh involves steps similar to those described above except that the
preferred length function l0 (x, y) is now l0 (θ, φ, r). In this case the guide-mesh is a
coarse spherical shell mesh defined in spherical coordinates (Figure 9a).
Spring-based solver in 3-D. The spring-based solver described above naturally
extends to 3-D. Forces and nodal positions are expressed in x, y and z coordinates
(Figure 3b). In order to solve for nodal positions in 3-D, a change from local coordinates (x1', 0, 0; x2', 0, 0) to global coordinates (x1, y1, z1; x2, y2, z2) is needed. This
change of coordinates consists of a 3-D rotation described by the rotation matrix
(24)
$$
R_{3D} = \begin{pmatrix}
\cos\alpha\cos\beta & \cos\alpha\sin\beta & \sin\alpha & 0 & 0 & 0 \\
0 & 0 & 0 & \cos\alpha\cos\beta & \cos\alpha\sin\beta & \sin\alpha
\end{pmatrix}
$$
where α and β are angles equivalent to latitude and longitude, respectively (Figure 3b). Applying equation (24) to equation (3) (see section 5 for details), equation
(3) becomes
(25)
$$
k\begin{pmatrix}
-c_\alpha^2 c_\beta^2 & -c_\alpha^2 s_\beta c_\beta & -s_\alpha c_\alpha c_\beta & c_\alpha^2 c_\beta^2 & c_\alpha^2 s_\beta c_\beta & s_\alpha c_\alpha c_\beta \\
-c_\alpha^2 s_\beta c_\beta & -c_\alpha^2 s_\beta^2 & -s_\alpha c_\alpha s_\beta & c_\alpha^2 s_\beta c_\beta & c_\alpha^2 s_\beta^2 & s_\alpha c_\alpha s_\beta \\
-s_\alpha c_\alpha c_\beta & -s_\alpha c_\alpha s_\beta & -s_\alpha^2 & s_\alpha c_\alpha c_\beta & s_\alpha c_\alpha s_\beta & s_\alpha^2 \\
c_\alpha^2 c_\beta^2 & c_\alpha^2 s_\beta c_\beta & s_\alpha c_\alpha c_\beta & -c_\alpha^2 c_\beta^2 & -c_\alpha^2 s_\beta c_\beta & -s_\alpha c_\alpha c_\beta \\
c_\alpha^2 s_\beta c_\beta & c_\alpha^2 s_\beta^2 & s_\alpha c_\alpha s_\beta & -c_\alpha^2 s_\beta c_\beta & -c_\alpha^2 s_\beta^2 & -s_\alpha c_\alpha s_\beta \\
s_\alpha c_\alpha c_\beta & s_\alpha c_\alpha s_\beta & s_\alpha^2 & -s_\alpha c_\alpha c_\beta & -s_\alpha c_\alpha s_\beta & -s_\alpha^2
\end{pmatrix}
\begin{pmatrix} x_1 \\ y_1 \\ z_1 \\ x_2 \\ y_2 \\ z_2 \end{pmatrix}
=
\begin{pmatrix} f_{1,x} \\ f_{1,y} \\ f_{1,z} \\ f_{2,x} \\ f_{2,y} \\ f_{2,z} \end{pmatrix}
+ k l_0
\begin{pmatrix} c_\alpha c_\beta \\ c_\alpha s_\beta \\ s_\alpha \\ -c_\alpha c_\beta \\ -c_\alpha s_\beta \\ -s_\alpha \end{pmatrix}
$$
where sα ≡ sin α, cα ≡ cos α, sβ ≡ sin β and cβ ≡ cos β. The system of equations is
solved as described above (see equation (7)).
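In compact form, equation (25) is k R3Dᵀ D R3D x = f + k R3Dᵀ D (0, l0)ᵀ with D = [[−1, 1], [1, −1]]; the sketch below builds the local 6×6 system that way. This is an illustrative Python/NumPy fragment, not the paper's implementation, and the function name is ours.

```python
import numpy as np

def spring_system_3d(alpha, beta, k, l0):
    """Local 6x6 stiffness and preferred-length force term of one 3-D spring,
    built from the 2x6 rotation R3D of eq. (24); multiplying out K reproduces
    the 6x6 matrix of eq. (25)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    R = np.array([[ca * cb, ca * sb, sa, 0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0, ca * cb, ca * sb, sa]])
    D = np.array([[-1.0, 1.0], [1.0, -1.0]])
    K = k * R.T @ D @ R                        # 6x6 stiffness of eq. (25)
    f_l0 = k * R.T @ D @ np.array([0.0, l0])   # k*l0*(u, -u) term of eq. (25)
    return K, f_l0
```

A spring whose two nodes sit exactly l0 apart along its axis is in equilibrium: K x − f_l0 = 0, so no net nodal force remains.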
Spherical Boundary Conditions. For 3-D applications, we currently focus on
developing unstructured spherical meshes. Using a notation similar to that for 2-D
circular boundary conditions, we denote by Σ the inner and outer boundaries of the
spherical shell with radii r = rinner and r = router respectively. Ω is the interior
region between the boundaries. A useful boundary condition consists in prescribing
boundary nodes that are free to slide along the local tangent plane to the spherical
surface. Nodal sliding is generated in two independent steps (Figure 10a): 1) The
node is allowed to move along the local tangent plane to the sphere, and 2) the
node is returned to the sphere’s surface by projecting in the radial direction. This
approximation assumes that the radial distance needed to pull the node back to the
surface of the sphere is small compared to the distance moved along the tangent plane.
For simplicity, the mathematical implementation of the spherical boundary conditions
is presented here only for one tetrahedron (Figure 10b). Node 2 is free to slide along
the tangent plane since the boundary condition is z2'' = |r|, where r is the radial
distance from the centre of the sphere to the surface. This boundary condition is
imposed by two rotations of the coordinate system for node 2. The first rotation is
around the Z axis by an angle φ2 , which is the longitude of node 2 (local reference
system (X', Y', Z') in blue in Figure 10b). The second rotation is around the Y' axis
by an angle θ2, which is the colatitude for node 2 (local reference system (X'', Y'', Z'')
in green in Figure 10b). The complete rotation is given by the transformation matrix
Fig. 10. (a) Conceptual diagram for spherical boundary conditions. The motion of boundary
nodes is first restricted to be along the tangent plane to the sphere. Then, they are ’pulled back’ to
the sphere’s surface by projecting in the radial direction. (b) Implementation of spherical boundary
conditions for one tetrahedron. Two rotations are needed for node 2 to pass from the global reference
system (X, Y, Z) to the local reference system (X'', Y'', Z''), where z2'' = |r| is the boundary
condition.
T that relates global coordinates x with local coordinates x'' as follows
(26)
$$
\underbrace{\begin{pmatrix} x_1 \\ y_1 \\ z_1 \\ x_2 \\ y_2 \\ z_2 \\ x_3 \\ y_3 \\ z_3 \\ x_4 \\ y_4 \\ z_4 \end{pmatrix}}_{x}
=
\underbrace{\begin{pmatrix}
I_3 & & & \\
 & R(\phi_2, \theta_2) & & \\
 & & I_3 & \\
 & & & I_3
\end{pmatrix}}_{T}
\underbrace{\begin{pmatrix} x_1 \\ y_1 \\ z_1 \\ x_2'' \\ y_2'' \\ |r| \\ x_3 \\ y_3 \\ z_3 \\ x_4 \\ y_4 \\ z_4 \end{pmatrix}}_{x''},
\qquad
R(\phi_2, \theta_2) = \begin{pmatrix}
\cos\phi_2\cos\theta_2 & -\sin\phi_2 & \cos\phi_2\sin\theta_2 \\
\sin\phi_2\cos\theta_2 & \cos\phi_2 & \sin\phi_2\sin\theta_2 \\
-\sin\theta_2 & 0 & \cos\theta_2
\end{pmatrix}
$$

where $I_3$ is the 3 × 3 identity.
This transformation matrix contains a θ and φ angle for each node on the spherical
boundary. Applying the transformation matrix to the stiffness matrix and force vector

(27) $K'' = T^{T} K T$

(28) $f_{l_0}'' = T^{T} f_{l_0}$

the new system of equations is given by

(29) $K'' x'' = f_{l_0}''$

which is solved for $x''$. Global Cartesian coordinates are recovered through the transformation matrix

(30) $x = T x''$
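The two-rotation block for one spherical-boundary node can be written out as below; an illustrative Python/NumPy sketch (function name ours), showing why the constraint z2'' = |r| pins the node to the sphere.

```python
import numpy as np

def node_rotation(phi, theta):
    """Rotation block of eq. (26) for one boundary node: a rotation about Z
    by the longitude phi followed by one about Y' by the colatitude theta.
    Its third column is the outward radial direction, so z'' = |r| places the
    node on the sphere."""
    cp, sp = np.cos(phi), np.sin(phi)
    ct, st = np.cos(theta), np.sin(theta)
    Rz = np.array([[cp, -sp, 0.0],
                   [sp,  cp, 0.0],
                   [0.0, 0.0, 1.0]])
    Ry = np.array([[ct, 0.0, st],
                   [0.0, 1.0, 0.0],
                   [-st, 0.0, ct]])
    return Rz @ Ry
```

Applying this block to the local vector (0, 0, r) recovers the usual spherical-to-Cartesian map with colatitude θ and longitude φ.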
Quality factor for tetrahedra. The 3-D quality factor for a tetrahedron is
defined by
(31) $q_{3D} = \dfrac{3 r_s}{R_s}$
Fig. 11. (a) Tetrahedron with vertices OABC. R and r are the radii of the circumscribed
and inscribed spheres respectively. (b) Number of tetrahedra as a function of the quality factor q3D
(green) and the shape measure s (red) for the same mesh.
where rs is the radius of the tetrahedron’s inscribed sphere and Rs is the radius of its
circumscribed sphere. Rs and rs are given by
(32) $r_s = \dfrac{|a \cdot (b \times c)|}{|a \times b| + |b \times c| + |c \times a| + |(a \times b) + (b \times c) + (c \times a)|}$

(33) $R_s = \dfrac{\left| \, |a|^2 (b \times c) + |b|^2 (c \times a) + |c|^2 (a \times b) \, \right|}{2 \, |a \cdot (b \times c)|}$
where a, b and c are vectors pointing from one node, O, to the three other nodes
of the tetrahedron A, B and C respectively (Figure 11a). This quality factor is
normalized to be 0 for degenerate tetrahedra and 1 for regular tetrahedra. Note that
different definitions for normalized aspect ratios can lead to different estimators for
the global quality of a mesh. For example, [2] define a shape measure s that depends
on tetrahedral volume and the lengths of its edges. Computing q3D and s for the same
mesh gives differences of up to 0.1 for the worst element (Figure 11b). The quality
factor q3D that we choose to use is a more restrictive aspect ratio than the shape
factor measure s.
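Equations (31)–(33) translate directly into code; a short Python/NumPy sketch (function name ours), which evaluates to 1 for a regular tetrahedron and approaches 0 for a degenerate one.

```python
import numpy as np

def quality_3d(O, A, B, C):
    """Quality factor q3D = 3*rs/Rs of eqs. (31)-(33) for a tetrahedron with
    vertices O, A, B, C."""
    a, b, c = (np.asarray(P, float) - np.asarray(O, float) for P in (A, B, C))
    axb, bxc, cxa = np.cross(a, b), np.cross(b, c), np.cross(c, a)
    vol6 = abs(np.dot(a, bxc))                        # 6 * volume
    rs = vol6 / (np.linalg.norm(axb) + np.linalg.norm(bxc)
                 + np.linalg.norm(cxa)
                 + np.linalg.norm(axb + bxc + cxa))   # eq. (32)
    Rs = np.linalg.norm(a.dot(a) * bxc + b.dot(b) * cxa
                        + c.dot(c) * axb) / (2.0 * vol6)  # eq. (33)
    return 3.0 * rs / Rs
```

The vertices (0,0,0), (1,1,0), (1,0,1), (0,1,1) form a regular tetrahedron of edge √2, for which the function returns exactly 1.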
Element shape improvements. In 3-D, even when the expected nodal density
is achieved (µ < µt ) by adding or rejecting nodes, a considerable number of poorly
shaped tetrahedra can still persist. Local improvements are needed to ensure that the
mesh is robust enough to perform optimal FEM calculations. After some experimentation, we found it appropriate to use µt = 0.11 although this can vary from 0.1 to
0.2 depending on the degree of mesh refinement. The value of µt for 2-D meshes is
smaller than for 3-D meshes due to the shape compactness that can be achieved on a
2-D planar surface.
Methods based on swapping edges or faces to improve element quality can possibly
generate non-Delaunay triangulations, which will cause problems in algorithms that
rely on a mesh created by a Delaunay triangulation (e.g. point search algorithms).
Hence, as an alternative and in addition to smoothing the position of interior nodes, we
recommend two additional operations to improve the quality of tetrahedral elements.
Improvement of badly shaped tetrahedra. Unstructured 3-D meshes are
composed of irregular tetrahedra. Some may be quite poor in terms of their shape
and quality factor (see [6] for a complete categorization of badly shaped tetrahedra).
The first improvement for tetrahedral shapes acts locally and only modifies one node
of each badly shaped tetrahedron. For each badly shaped tetrahedron, identified by
q < qbad, where 0.2 ≤ qbad ≤ 0.3, we select the spring with the maximum absolute
distortion. If the distortion is positive, a new node is created at the midpoint of the selected spring,
while a node at one end of the selected spring is removed if the distortion is negative. A new connectivity
is then created by another Delaunay triangulation. The new connectivity is only
modified in the surroundings of nodes that have been added or removed, keeping the
rest of the connectivity the same as in the old triangulation. Figure 16 illustrates a
simple example that improves badly shaped tetrahedra when meshing the unit cube.
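The local improvement step can be sketched as follows. This is an illustrative Python/NumPy fragment: the relative distortion (L − l0)/l0 used to rank the springs is our assumption (the symbol was lost in the text above), and the function name is ours.

```python
import numpy as np
from itertools import combinations

def fix_bad_tet(verts, l0):
    """Local improvement for one badly shaped tetrahedron (4x3 vertex array):
    pick the spring with the largest relative distortion; propose a new node
    at its midpoint if the spring is too long, or removal of one endpoint if
    it is too short."""
    best = None
    for i, j in combinations(range(4), 2):   # the 6 springs of the tetrahedron
        L = float(np.linalg.norm(verts[i] - verts[j]))
        eps = (L - l0) / l0                  # assumed distortion measure
        if best is None or abs(eps) > abs(best[0]):
            best = (eps, i, j)
    eps, i, j = best
    if eps > 0:
        return 'add', 0.5 * (verts[i] + verts[j])   # new node at the midpoint
    return 'remove', j                              # index of the node to drop
```

The returned action then drives a local Delaunay re-triangulation around the added or removed node, as described above.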
Removing slivers. Slivers are degenerate tetrahedra whose vertices are well-spaced and near the equator of their circumsphere, hence their quality factor and
enclosed volume are close to zero. We define a sliver as a tetrahedron with q < 0.1.
Our routine for removing slivers is purely geometrical, i.e., it does not take into
account the actual or desired length of the springs. The four vertices of each sliver
are replaced by the three mesh points of the best potential triangle that can be
generated from all permutations of its vertices and potential new nodes created at the
midpoints of its springs (Figure 17). Delaunay triangulation is called afterwards to
create the connectivity matrix around the changed nodes.
Example: Spherical shell mesh with an embedded high resolution region. We show the results for a spherical shell mesh with an embedded high-resolution
sub-region (code available in section 5). The input mesh parameters are listed in Table 1. We recommend setting the point around which the refined region is created
far from the polar axis since the guide-mesh can have difficulties in interpolating the
desired spring lengths near the polar axis.
For this example, the domain of the mesh is a spherical shell whose boundaries
represent the core-mantle boundary and the Earth’s surface (Figure 9b). The smallest
tetrahedra with quasi-uniform size lie inside the high resolution region (red tesseroid
in Figure 9b). This region is embedded within a coarser global mesh. A transition
region (green tesseroid in Figure 9b) guarantees a gradual change in tetrahedral size
from the high resolution region to the coarse region. The algorithm created the mesh
in 224 s after 10 iterations (see Figure 12a for a cross section of the final mesh).
Figure 12b shows a detail of the mesh around the northern boundary of the refined
region. The mesh has 27000 nodes forming 150000 tetrahedra (Table 2) with an edge-length factor l0r/l0c = 1/33. The fraction of tetrahedra within the coarse, transition
and refined regions is 0.8%, 20.0% and 79.2% respectively (see Figure 18). The worst
quality factor for an element is 0.23 (red line in Figure 12c) and the mean of the quality
factor for all elements is 0.87 (blue line in Figure 12c). Only 1% of the tetrahedra
have a quality factor lower than 0.4. Figure 12d shows the fraction of elements as a
function of their quality factor for the final mesh.
5. Summary. We have developed the tools for generating unstructured meshes
in 2-D, 3-D, and spherical geometries that can contain embedded high resolution
sub-regions. While we do not discuss the recipe for the (simpler) generation of a
Cartesian 3-D mesh, only small modifications to the 3-D spherical code are needed
to assign boundary points to lie along small sets of linear boundary edges and planar
boundary surfaces. The algorithm employs the FEM to solve for the optimal nodal
positions of a spring-like system of preferred nodal positions. Straight line, circular
and spherical boundary conditions are imposed to constrain the shape of the mesh.
We use a guide-mesh approach to smoothly refine the mesh around regions of interest.
Methods for achieving the expected nodal density and improving the element shape
and quality are also introduced to give robustness to the mesh. These make the resulting
finite element meshes capable of higher computational accuracy and faster iterative
Fig. 12. (a) Cross section of the final mesh with an embedded high resolution sub-region after
refinement using the guide-mesh. (b) Zoom around the boundary of the refined region. (c) Minimum
quality factor (red line), mean quality factor for all elements (blue line) and fraction of elements
having a quality factor lower than 0.4 (green line) as a function of iteration number. (d) Histogram
of the fraction of elements as a function of quality factor for the final mesh.
convergence. This approach could also be extended to be used as part or all of an
adaptive mesh refinement routine.
Acknowledgments. We thank Cornell University for supporting the initial work
on this problem by Morgan and Shi, and the COMPASS Consortium for the Ph.D.
support for Jorge M. Taramón.
REFERENCES
[1] P. Alliez, D. Cohen-Steiner, M. Yvinec, and M. Desbrun, Variational tetrahedral meshing,
ACM Trans. Graph., 24 (2005), p. 617, https://doi.org/10.1145/1073204.1073238.
[2] A. Anderson, X. Zheng, and V. Cristini, Adaptive unstructured volume remeshing - I: The
method, J. Comput. Phys., 208 (2005), pp. 616–625, https://doi.org/10.1016/j.jcp.2005.02.
023.
[3] M. Bern, D. Eppstein, and J. Gilbert, Provably good mesh generation, J. Comput. Syst.
Sci., 48 (1994), pp. 384–409, https://doi.org/10.1016/S0022-0000(05)80059-5.
[4] C. Burstedde, O. Ghattas, M. Gurnis, G. Stadler, Eh Tan, T. Tu, L. C. Wilcox, and
S. Zhong, Scalable adaptive mantle convection simulation on petascale supercomputers, in
2008 SC - Int. Conf. High Perform. Comput. Networking, Storage Anal., 2008, pp. 1–15,
https://doi.org/10.1109/SC.2008.5214248.
[5] S. Cheng and T. Dey, Quality meshing with weighted Delaunay refinement, Proc. Thirteen.
Annu. ACM-SIAM Symp. Discret. algorithms, 33 (2002), pp. 137–146, https://doi.org/10.
1137/S0097539703418808.
[6] S.-W. Cheng, T. K. Dey, H. Edelsbrunner, M. A. Facello, and S.-H. Teng, Sliver exudation, J. ACM, 47 (2000), pp. 883–904, https://doi.org/10.1145/355483.355487.
[7] L. P. Chew, Guaranteed-Quality Triangular Meshes, tech. report, Department of Computer
Science, Cornell University, Ithaca, New York, 1989.
[8] L. P. Chew, Guaranteed-Quality Delaunay Meshing in 3D (short version), Proc. Thirteen. Annu. Symp. Comput. Geom., (1997), pp. 391–393, https://doi.org/10.1145/262839.
263018.
[9] W.-Y. Choi, D.-Y. Kwak, I.-H. Son, and Y.-T. Im, Tetrahedral mesh generation based on
advancing front technique and optimization scheme, Int. J. Numer. Methods Eng., 58
(2003), pp. 1857–1872, https://doi.org/10.1002/nme.840.
[10] M. Dabrowski, M. Krotkiewski, and D. W. Schmid, MILAMIN: MATLAB-based finite
element method solver for large problems, Geochem. Geophys. Geosyst., 9 (2008), https:
//doi.org/10.1029/2007GC001719.
[11] D. R. Davies, C. R. Wilson, and S. C. Kramer, Fluidity: A fully unstructured anisotropic
adaptive mesh computational modeling framework for geodynamics, Geochem. Geophys.
Geosyst., 12 (2011), https://doi.org/10.1029/2011GC003551.
[12] J. Dompierre, P. Labbé, F. Guibault, and R. Camarero, Proposal of benchmarks for 3D
unstructured tetrahedral mesh optimization, in 7th Int. Meshing Roundtable, 1998, pp. 525–
537.
[13] H. Edelsbrunner and D. Guoy, An Experimental Study of Sliver Exudation, Eng. Comput.,
18 (2002), pp. 229–240, https://doi.org/10.1007/s003660200020.
[14] C. Geuzaine and J.-F. Remacle, Gmsh: a three-dimensional finite element mesh generator
with built-in pre- and post-processing facilities, Int. J. Numer. Methods Eng., 79 (2009),
pp. 1309–1331, https://doi.org/10.1002/nme.2579.
[15] Y. Ito, A. M. Shih, and B. K. Soni, Reliable Isotropic Tetrahedral Mesh Generation Based
on an Advancing Front Method, in 13th Int. Meshing Roundtable, 2004, pp. 95–105.
[16] Y. Ito, A. M. Shih, and B. K. Soni, Octree-based reasonable-quality hexahedral mesh generation using a new set of refinement templates, Int. J. Numer. Methods Eng., 77 (2009),
pp. 1809–1833, https://doi.org/10.1002/nme.2470.
[17] M. Kronbichler, T. Heister, and W. Bangerth, High accuracy mantle convection simulation through modern numerical methods, Geophys. J. Int., 191 (2012), pp. 12–29,
https://doi.org/10.1111/j.1365-246X.2012.05609.x.
[18] F. Labelle and J. R. Shewchuk, Isosurface stuffing, ACM Trans. Graph., 26 (2007), p. 57,
https://doi.org/10.1145/1276377.1276448.
[19] X.-Y. Li and S.-H. Teng, Generating well-shaped Delaunay meshes in 3D, in 12th Annu.
ACM-SIAM Symp. Discret. algorithms, Washington, D. C., 2001, pp. 28–37.
[20] R. Löhner and P. Parikh, Generation of three-dimensional unstructured grids by the
advancing-front method, Int. J. Numer. Methods Fluids, 8 (1988), pp. 1135–1149, https:
//doi.org/10.1002/fld.1650081003.
[21] S. A. Mitchell and S. Vavasis, Quality mesh generation in three dimensions, in Proc. Eighth
Annu. Symp. Comput. Geom. ACM, 1992, pp. 212–221, https://doi.org/10.1145/142675.
142720.
[22] P.-O. Persson and G. Strang, A Simple Mesh Generator in MATLAB, SIAM Rev., 46
(2004), pp. 329–345, https://doi.org/10.1137/S0036144503429121.
[23] J. Ruppert, A Delaunay Refinement Algorithm for Quality 2-Dimensional Mesh Generation,
J. Algorithms, 18 (1995), pp. 548–585, https://doi.org/10.1006/jagm.1995.1021.
[24] J. Schöberl, An advancing front 2D/3D-mesh generator based on abstract rules, Comput.
Vis. Sci., 1 (1997), pp. 41–52, https://doi.org/10.1007/s007910050004.
[25] J. R. Shewchuk, Triangle: Engineering a 2D quality mesh generator and Delaunay triangulator, in Appl. Comput. Geom. Towar. Geom. Eng., M. C. Lin and D. Manocha,
eds., vol. 1148, Springer Berlin Heidelberg, 1996, pp. 203–222, https://doi.org/10.1007/
BFb0014497.
[26] J. R. Shewchuk, Tetrahedral mesh generation by Delaunay refinement, in 14th Annu. Symp.
Comput. Geom. - SCG ’98, 1998, pp. 86–95, https://doi.org/10.1145/276884.276894.
[27] J. R. Shewchuk, What is a Good Linear Element? Interpolation, Conditioning, and Quality
Measures, in Elev. Int. Meshing Roundtable, 2002, pp. 115–126, https://doi.org/10.1.1.68.
8538.
[28] H. Si, TetGen, a Delaunay-Based Quality Tetrahedral Mesh Generator, ACM Trans. Math.
Softw., 41 (2015), p. 36, https://doi.org/10.1145/2629697.
SUPPLEMENTARY MATERIALS.
SM1. Derivation of equation (5). The 2-D development of equation (3),
rewritten here for convenience
(34)
$$
k \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}
\begin{pmatrix} x_1' \\ x_2' \end{pmatrix}
=
\begin{pmatrix} f_1' \\ f_2' \end{pmatrix}
+ k \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}
\begin{pmatrix} 0 \\ l_0 \end{pmatrix}
$$
involves two steps. First, develop the right hand side of equation (34) by writing
local coordinates as a function of global coordinates (see Figure 3a)
$$
\begin{aligned}
k \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}
\begin{pmatrix} x_1' \\ x_2' \end{pmatrix}
&= k \begin{pmatrix} x_2' - x_1' \\ -(x_2' - x_1') \end{pmatrix}
= k \begin{pmatrix} (x_2 - x_1) c_\alpha + (y_2 - y_1) s_\alpha \\ -\left[ (x_2 - x_1) c_\alpha + (y_2 - y_1) s_\alpha \right] \end{pmatrix} \\
&= k \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}
\begin{pmatrix} x_1 c_\alpha + y_1 s_\alpha \\ x_2 c_\alpha + y_2 s_\alpha \end{pmatrix}
= k \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}
\begin{pmatrix} c_\alpha & s_\alpha & 0 & 0 \\ 0 & 0 & c_\alpha & s_\alpha \end{pmatrix}
\begin{pmatrix} x_1 \\ y_1 \\ x_2 \\ y_2 \end{pmatrix}
\end{aligned}
\tag{35}
$$
where sα ≡ sin α and cα ≡ cos α. Second, express the global coordinates of the force
vector as a function of local coordinates (see Figure 3a)
(36)
$$
\begin{pmatrix} f_{1,x} \\ f_{1,y} \\ f_{2,x} \\ f_{2,y} \end{pmatrix}
=
\begin{pmatrix} c_\alpha & 0 \\ s_\alpha & 0 \\ 0 & c_\alpha \\ 0 & s_\alpha \end{pmatrix}
\begin{pmatrix} f_1' \\ f_2' \end{pmatrix}
$$
Combining equations (34) and (35) gives
(37)
$$
\begin{pmatrix} f_1' \\ f_2' \end{pmatrix}
= k \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}
\begin{pmatrix} c_\alpha & s_\alpha & 0 & 0 \\ 0 & 0 & c_\alpha & s_\alpha \end{pmatrix}
\begin{pmatrix} x_1 \\ y_1 \\ x_2 \\ y_2 \end{pmatrix}
- k \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}
\begin{pmatrix} 0 \\ l_0 \end{pmatrix}
$$
Substituting equation (37) into equation (36) and reordering gives
(38)
$$
k \begin{pmatrix} c_\alpha & 0 \\ s_\alpha & 0 \\ 0 & c_\alpha \\ 0 & s_\alpha \end{pmatrix}
\begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}
\begin{pmatrix} c_\alpha & s_\alpha & 0 & 0 \\ 0 & 0 & c_\alpha & s_\alpha \end{pmatrix}
\begin{pmatrix} x_1 \\ y_1 \\ x_2 \\ y_2 \end{pmatrix}
=
\begin{pmatrix} f_{1,x} \\ f_{1,y} \\ f_{2,x} \\ f_{2,y} \end{pmatrix}
+ k \begin{pmatrix} c_\alpha & 0 \\ s_\alpha & 0 \\ 0 & c_\alpha \\ 0 & s_\alpha \end{pmatrix}
\begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}
\begin{pmatrix} 0 \\ l_0 \end{pmatrix}
$$
which is equivalent to equation (5).
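The assembled 2-D form can be checked numerically; a Python/NumPy sketch (function name ours) that evaluates the global nodal forces from the matrix products of (38), under the sign conventions used in the derivation above.

```python
import numpy as np

def global_spring_force(x, alpha, k, l0):
    """Nodal forces of one 2-D spring from the assembled form (38):
    f = k R^T D R x - k R^T D (0, l0)^T, with the 2x4 rotation R and
    D = [[-1, 1], [1, -1]]."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    R = np.array([[ca, sa, 0.0, 0.0],
                  [0.0, 0.0, ca, sa]])
    D = np.array([[-1.0, 1.0], [1.0, -1.0]])
    return k * (R.T @ D @ R @ x) - k * (R.T @ D @ np.array([0.0, l0]))
```

For a spring of current length L lying along its axis, node 1 feels the tension k(L − l0) pulling it toward node 2, and the two nodal forces cancel, as they must for an internal spring.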
SM2. Derivation of equation (25). The 3-D development of equation (3),
rewritten here for convenience
(39)
$$
k \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}
\begin{pmatrix} x_1' \\ x_2' \end{pmatrix}
=
\begin{pmatrix} f_1' \\ f_2' \end{pmatrix}
+ k \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}
\begin{pmatrix} 0 \\ l_0 \end{pmatrix}
$$
also involves two steps. First, develop the right hand side of equation (39) by writing
local coordinates as a function of global coordinates (see Figure 3b)
$$
\begin{aligned}
k \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}
\begin{pmatrix} x_1' \\ x_2' \end{pmatrix}
&= k \begin{pmatrix} x_2' - x_1' \\ -(x_2' - x_1') \end{pmatrix}
= k \begin{pmatrix} (x_2 - x_1) c_\alpha c_\beta + (y_2 - y_1) c_\alpha s_\beta + (z_2 - z_1) s_\alpha \\ -\left[ (x_2 - x_1) c_\alpha c_\beta + (y_2 - y_1) c_\alpha s_\beta + (z_2 - z_1) s_\alpha \right] \end{pmatrix} \\
&= k \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}
\begin{pmatrix} x_1 c_\alpha c_\beta + y_1 c_\alpha s_\beta + z_1 s_\alpha \\ x_2 c_\alpha c_\beta + y_2 c_\alpha s_\beta + z_2 s_\alpha \end{pmatrix}
= k \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}
\begin{pmatrix} c_\alpha c_\beta & c_\alpha s_\beta & s_\alpha & 0 & 0 & 0 \\ 0 & 0 & 0 & c_\alpha c_\beta & c_\alpha s_\beta & s_\alpha \end{pmatrix}
\begin{pmatrix} x_1 \\ y_1 \\ z_1 \\ x_2 \\ y_2 \\ z_2 \end{pmatrix}
\end{aligned}
\tag{40}
$$
where sα ≡ sin α, cα ≡ cos α, sβ ≡ sin β and cβ ≡ cos β. Second, express the global
coordinates of the force vector as a function of local coordinates (see Figure 3b)
(41)
$$
\begin{pmatrix} f_{1,x} \\ f_{1,y} \\ f_{1,z} \\ f_{2,x} \\ f_{2,y} \\ f_{2,z} \end{pmatrix}
=
\begin{pmatrix} c_\alpha c_\beta & 0 \\ c_\alpha s_\beta & 0 \\ s_\alpha & 0 \\ 0 & c_\alpha c_\beta \\ 0 & c_\alpha s_\beta \\ 0 & s_\alpha \end{pmatrix}
\begin{pmatrix} f_1' \\ f_2' \end{pmatrix}
$$
Combining equations (39) and (40) gives
(42)
$$
\begin{pmatrix} f_1' \\ f_2' \end{pmatrix}
= k \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}
\begin{pmatrix} c_\alpha c_\beta & c_\alpha s_\beta & s_\alpha & 0 & 0 & 0 \\ 0 & 0 & 0 & c_\alpha c_\beta & c_\alpha s_\beta & s_\alpha \end{pmatrix}
\begin{pmatrix} x_1 \\ y_1 \\ z_1 \\ x_2 \\ y_2 \\ z_2 \end{pmatrix}
- k \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}
\begin{pmatrix} 0 \\ l_0 \end{pmatrix}
$$
Substituting equation (42) into equation (41) and reordering gives
(43)
$$
k \begin{pmatrix} c_\alpha c_\beta & 0 \\ c_\alpha s_\beta & 0 \\ s_\alpha & 0 \\ 0 & c_\alpha c_\beta \\ 0 & c_\alpha s_\beta \\ 0 & s_\alpha \end{pmatrix}
\begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}
\begin{pmatrix} c_\alpha c_\beta & c_\alpha s_\beta & s_\alpha & 0 & 0 & 0 \\ 0 & 0 & 0 & c_\alpha c_\beta & c_\alpha s_\beta & s_\alpha \end{pmatrix}
\begin{pmatrix} x_1 \\ y_1 \\ z_1 \\ x_2 \\ y_2 \\ z_2 \end{pmatrix}
=
\begin{pmatrix} f_{1,x} \\ f_{1,y} \\ f_{1,z} \\ f_{2,x} \\ f_{2,y} \\ f_{2,z} \end{pmatrix}
+ k \begin{pmatrix} c_\alpha c_\beta & 0 \\ c_\alpha s_\beta & 0 \\ s_\alpha & 0 \\ 0 & c_\alpha c_\beta \\ 0 & c_\alpha s_\beta \\ 0 & s_\alpha \end{pmatrix}
\begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}
\begin{pmatrix} 0 \\ l_0 \end{pmatrix}
$$
which is equivalent to equation (25).
SM3. Code for Rectangular mesh generation. Code to reproduce the
example shown in Figure 5 and Figure 14
SM4. Code for Cylindrical annulus mesh generation. Code to reproduce
the example shown in Figure 8 and Figure 15
SM5. Code for Spherical shell mesh generation. Code to reproduce the
example shown in Figure 12 and Figure 18
SM6. Additional Figures.
Fig. 13. (a) Initial 2-D mesh. (b) Mesh after applying the Laplacian correction to smooth
positions of its interior nodes. Blue points are the barycentres of the triangles. Green and black
crosses are the nodal positions before and after smoothing, respectively. Red arrows indicate the
motions of interior nodes.
Fig. 14. (a) Initial mesh (top) for a rectangular box with an embedded high resolution sub-region and a zoom around the left boundary of the refined region (bottom). (b) Mesh (top) and zoom
(bottom) after the first iteration.
Fig. 15. (a) Initial mesh (top) for a cylindrical annulus with an embedded high resolution sub-region and a zoom around an edge of the refined region (bottom). (b) Mesh (top) and zoom (bottom)
after the first iteration.
Fig. 16. (a) Initial mesh with badly shaped tetrahedra (in blue). Rejected nodes in red. (b)
Badly shaped tetrahedra. (c) Mesh after improving badly shaped tetrahedra contains no badly shaped
tetrahedra. (d) Fraction of tetrahedra for a given quality factor for both before (dashed line) and
after (solid line) local improvements to the shape of badly shaped tetrahedra. The minimum quality
factor for the initial mesh is 0.04 and for the final mesh is 0.39.
Fig. 17. Removing a sliver (represented by black lines and dashed grey line for hidden edge).
Possible triangles (grey and green colours) created from permutations of the vertices and midpoints
of the edges of a sliver. Black, red and green points represent unaltered, removed and added nodes,
respectively. qtri is the quality factor for each triangle. The four vertices of the sliver are replaced
by the three mesh points of the potential triangle with the best quality factor (green colour).
Fig. 18. (a) Tetrahedra within the coarse region. (b) Tetrahedra within the transition region.
(c) Tetrahedra within the refined region.
An Analysis on the Influence of Network Topologies on
Local and Global Dynamics of Metapopulation Systems
Daniela Besozzia Paolo Cazzanigab
Dario Pescinib Giancarlo Maurib
a Università degli Studi di Milano
Dipartimento di Informatica e Comunicazione
Via Comelico 39, 20135 Milano, Italy
[email protected]
b Università degli Studi di Milano-Bicocca
Dipartimento di Informatica, Sistemistica e Comunicazione
Viale Sarca 336, 20126 Milano, Italy
cazzaniga/pescini/[email protected]
Metapopulations are models of ecological systems, describing the interactions and the behavior of
populations that live in fragmented habitats. In this paper, we present a model of metapopulations
based on the multivolume simulation algorithm tau-DPP, a stochastic class of membrane systems,
that we utilize to investigate the influence that different habitat topologies can have on the local and
global dynamics of metapopulations. In particular, we focus our analysis on the migration rate of
individuals among adjacent patches, and on their capability of colonizing the empty patches in the
habitat. We compare the simulation results obtained for each habitat topology, and conclude the
paper with some proposals for other research issues concerning metapopulations.
1
Introduction
The field of metapopulation ecology deals with the study of spatial systems describing the behavior of
interacting populations that live in fragmented habitats [17]. The purpose of these models is to understand
how the local and global dynamics of metapopulation systems, usually balanced between local extinctions and new colonizations of unoccupied patches, depend on the spatial arrangement of the habitat.
Consequently, relevant insights into related fields of ecological research, such as evolutionary ecology or
conservation and landscape management, can be achieved. Indeed, the topology of fragmented habitats
potentially holds relevant implications for the persistence of populations, and their robustness against
natural or anthropogenic disturbance [36].
Recently, in addition to ever increasing applications of graph-based methods for the analysis of complex networks in cell biology [1, 2], graph theory has also been applied to the study of metapopulation
systems. In graph models of metapopulations, nodes are used to represent habitat patches, and graph
edges are used to denote some functional connections between patches (typically related to the dispersal
of individuals). Attributes can be associated to nodes, describing the quality or dimension of patches,
while different types of edges can be exploited to represent the distance between connected patches, the
rate of dispersal between a couple of patches, or simply whether two patches are connected or not.
Metapopulation models using graph-based methods [36, 15] are simple to implement and require
relatively few data for their definition, while individual-based models implement more detailed aspects,
P. Milazzo and M.J. Pérez Jiménez (Eds.): Applications of Membrane Computing,
Concurrency and Agent-based Modelling in Population Biology (AMCA-POP 2010)
EPTCS 33, 2010, pp. 1–17, doi:10.4204/EPTCS.33.1
© D. Besozzi et al.
concerning the nature and the interaction of populations [34, 4]. Both types of modeling approaches are
useful for the analysis of specific features of metapopulations but, while the first focuses on the properties
of the habitat topology, the second is more concerned with the emergent dynamics. In this paper, we
present a stochastic multivolume model of metapopulations, which integrates the explicit representation
of interactions between the individuals of the populations – and therefore allows us to simulate the emergent
local and global dynamics – with a graph description of the habitat topology – which allows to investigate
the influence of distinct spatial structures on the dynamics.
This model, which represents a simplified extension of a previous metapopulation model that we
introduced in [7, 6], is based on the multivolume stochastic simulation algorithm tau-DPP [11, 8], a
stochastic class of membrane systems. Membrane systems, or P systems, were introduced in [27] as a
class of unconventional computing devices of distributed, parallel and nondeterministic type, inspired
by the compartmental structure and the functioning of living cells. The basic model consists of a membrane structure where multisets of objects evolve according to given evolution rules. A comprehensive
overview of P systems and of its many applications in various research areas, ranging from Biology to
Linguistics to Computer Science, can be found in [28, 12, 29].
In tau-DPP, the distinct compartments of any multivolume model can be arranged according to a
specified hierarchy (e.g., a membrane structure), under the additional assumption that the topological
structure and the volume dimensions do not change during the system evolution (each volume is assumed
to satisfy the standard requirements of the classical stochastic simulation algorithm, see [16] and [5] for
more details). Inside each volume, two different types of rules can be defined: the internal rules, which
modify the objects contained inside the volume where they take place (in the case of metapopulation,
they describe the growth and death of population individuals according to the Lotka-Volterra model of
preys and predators), and the communication rules, which are used to move the objects between adjacent
volumes (in the case of metapopulation, they describe the migration of population individuals).
In this paper, tau-DPP is exploited to analyze the emergent dynamics of metapopulation systems,
where the focus is on the influence that the topology of patches has on the migration of individuals, and
their capability to colonize other patches in the habitat. To this purpose, we consider six different habitat
topologies, formally described by graph structures, and analyze how the topological structure of patch-to-patch connections, and the rate of individual dispersal between connected patches, influence the local
and global dynamics of a metapopulation. In particular, we will first consider how a given topology and
a fixed dispersal rate between patches can influence the prey-predators dynamics, and then we will focus
on the colonization of empty patches, starting from the dispersal of predators that live in a few patches
which occupy peculiar positions in the given network topology.
The paper is structured as follows: in Section 2 we present the concept of metapopulations in Ecology, and then describe the multivolume model of metapopulations by focusing, in particular, on the
different habitat topologies. In Section 3 we will show the simulation results concerning the influence of
these habitat topologies on the emergent dynamics of metapopulations, considering the effects of predators dispersal and colonization. Finally, in Section 4 we conclude the paper with some final remarks and
several proposals for further research issues concerning metapopulations.
2
Metapopulations
In this section, we first provide a brief introduction to the most relevant features of metapopulations,
concerning both the topology of the habitats and the emergent dynamics. Then, we describe the modeling
approach used in this paper, which is based on a stochastic class of membrane systems, and which will be used
in Section 3 to analyze the influence of different network topologies on the dynamics of metapopulations.
2.1 Dynamical models of interacting populations in Ecology
Since its introduction in [22], the concept of metapopulations (also called multi-patch systems) has been
extensively applied in Ecology to analyze the behavior of interacting populations, with the purpose of determining how fragmented habitats can influence various aspects of these systems, such as local and global
population persistence, or the evolution of species [18]. Lately, this concept has also been widely employed for
other population species, living in both natural and artificial/theoretical fragmented landscapes [17].
A metapopulation consists of local populations, living in spatially separated habitats called patches
– which can be characterized by different areas, quality or isolation – connected to each other through a
dispersal pool, which is the spatial place where individuals from a population spend some lifetime during
the migration among patches. In multi-patch systems, two principal types of dynamics exist: on the one
hand, the individuals of the different populations can have local interactions inside each patch (according
to a given dynamical model, e.g., the Lotka-Volterra system of interaction between preys and predators
[25]); on the other hand, the dispersal of individuals among mutually connected patches can influence
the global behavior of the whole system [20, 21, 33, 37]. The dispersal of individuals, which is usually
dependent on the distance between patches, may reduce the local population growth, and thus increase
the extinction risk, which can be due also to environmental and demographical stochasticity. Hence,
the persistence of populations is assumed to be balanced between local extinctions and the process of
colonization, that is, the establishment of new populations in empty patches [17].
Several theoretical frameworks for metapopulation analysis have been defined up to now, remarking
specific properties of multi-patch systems which have been either explicitly or implicitly considered in
these modeling methods (see, e.g., [14, 17, 24, 19] for further details). For instance, referring to the
landscape, most theoretical models take care of the spatial structure of the habitat, the local quality of
the environment, the patch areas and their mutual connectivity (or isolation), in order to capture the
effect of habitat fragmentation on species persistence. In fact, good local conditions can determine the
growth and the survival of populations inside the patches, and high patch connectivity can decrease local
extinction risk. Moreover, as dispersal and colonization are distance-dependent elements, they can be
used to account for the importance of real landscape structures. Referring to population interactions and
dynamics, colonization can depend or not on the cooperation of migrating individuals (in the first case, it
is called “Allee effect”). Models not accounting for within-patch dynamics – but only assuming whether
a patch is occupied or not – usually consider local dynamics on a faster time scale with respect to the
global dynamics, and also neglect the dependence of colonization and extinction rates on population
sizes. Finally, regional stochasticity can account for “bad” or “good” years over the local environmental
quality, which depends on, e.g., the weather conditions which affect sustenance resource availability and,
once more, they can influence the growth and survival of populations.
Recently, graph-based models for metapopulations have become more and more common because of the intuitive and visual way in which they represent these ecological systems (see
[36, 23, 35] and references therein). In these models, nodes represent habitat patches and graph edges
denote functional connections between patches (typically related to the dispersal of individuals). In addition, attributes can be associated to nodes, describing the quality or dimension of patches, and different
types of edges can be adopted to represent the distance between connected patches, the rate of dispersal
between a couple of patches, or simply whether two patches are connected or not. These models provide
insights into the features of habitat distribution, such as the predominant importance of some
nodes or clusters of nodes with respect to other characteristics of metapopulation, like their dynamics, the
vulnerability to disturbance, the persistence of populations according to dispersal, and so on. These results open promising perspectives in related research fields such as evolutionary ecology, conservation biology,
epidemiology, and the management and design of natural reserves.
2.2 A P system–based model of metapopulations: focusing on network topologies
Most of the issues discussed in Section 2.1 were explicitly considered in our previous model for metapopulations [6, 7]. In those works, metapopulation models were based on a class of membrane systems
called DPP [31, 30], which were used to execute qualitative stochastic simulations of the local and
global dynamics of metapopulations. In particular, in [7] we introduced a model of metapopulations
with predator-prey dynamics, where additional features were used in order to catch and better describe
relevant properties of the modeled system. For instance, the regions of the membrane structure were
represented as nodes of a weighted graph with attributes, where the weight associated to edges corresponds to the “distance” among connected regions, while attributes specify their surface dimension.
These new features are necessary in order to outline the spatial distribution of patches and the relevant
additional features associated to them: the dimension of a patch is needed to define the density of the
populations living inside that patch, while the distance is needed to identify isolated patches, as well as to
define the dispersal rates of migrating individuals. Moreover, by using some rules which do not modify
the objects on which they act (the so-called “mute rules”), we modified the classical view of maximal
parallelism, by allowing the maximal application of rules but, at the same time, reducing the maximal
consumption of objects. The model was applied to investigate some emergent metapopulation behaviors,
such as the influence of patch dimension, patch-to-patch distance, stochastic breeding, the dynamics underlying migration and colonization, the effects due to isolated patches, etc. Then, in [6] we extended
the analysis of that model by focusing on periodic resource feeding strategies, and compared different
systems where either increasing, decreasing, stationary or purely feeding stochastic phases were defined
inside each patch. We have shown there, for instance, how the seasonal variance can transform the basic
Lotka-Volterra dynamics inside each patch into a more complex dynamics, where the different phases of
a feeding cycle can be identified through the effect that they have on the standard oscillations of preys
and predators.
In this section, we present a simplified model of metapopulations, which exploits the multivolume
stochastic simulation algorithm tau-DPP [11, 5]. With respect to the previous model, here we will not
need to use the concept of mute rules, as the probabilistic choice and application of rules is already
embedded in the tau leaping algorithm [10], on which tau-DPP is based. Moreover, we will not consider
the presence of the dispersal pool, but we will instead focus our analysis on the direct communication
of individuals among interconnected patches, according to some fixed network topologies. In order to
compare the influence of each network, we have decided to perform our analysis on a total of 6 patches,
spatially arranged in different ways. Namely, we assume that these network topologies can be described
by graphs having the same number of nodes, but distinct connections, such as the chain, grid, star, ring,
complete or random structure (see graphs a, b, c, d, e, f , respectively, in Fig. 1). From now on, we will
refer to the formal data structure by using the term ‘graph’, and use the term ‘network’ to denote the
topological relationship on each graph.
Formally, each network topology ν ∈ {a, b, c, d, e, f } can be generally described by a weighted
undirected graph Gν = (N∆ν , E ν , wν ) where:
• N∆ν is the set of nodes, such that each node pi ∈ N∆ν , i=1, . . ., 6, is characterized by a value δ (pi ) ∈ ∆
(with ∆ being a set of attributes of some kind);
• E ν ⊆ {(pi , p j ) | pi , p j ∈ N∆ν } is the set of (undirected) edges between nodes;
Figure 1: Network topologies.
• wν : E ν → R+ is the weight function associating a cost to each edge.
In the case of metapopulations, the set of nodes N∆ν coincides with the set of patches, the attribute
of a node represents the area of the patch, the edges characterize which patches are directly reachable
from any patch (self-edges might exist as well but will not be considered in this work), and the weight
wνi, j of an edge (pi , p j ) represents a cost to measure the effort that individuals have to face when moving
from patch pi to p j . Given a network topology ν , we denote by Ad j(pi )ν the set of nodes that are
directly connected to any node pi , that is, Ad j(pi )ν = {p j ∈ N∆ν | ∃ (pi , p j ) ∈ E ν }. We also denote
by deg(pi )ν the degree of patch pi , that is, the number of patches directly connected to pi (formally,
deg(pi )ν = card(Ad j(pi )ν )). We point out that, in what follows, we will assume that: (1) wνi, j = 1 for
each (pi , p j ) ∈ E ν and each ν ∈ {a, b, c, d, e, f }, that is, all edges have the same cost; (2) δ (pi ) = 1 for
each pi ∈ N∆ν and each ν ∈ {a, b, c, d, e, f }, that is, all patches have the same dimension. The rationale
behind this is that, in this paper, we focus our attention on the influence that different topologies of the
habitat network can have on the local and global dynamics of metapopulations, regardless of the local
features of each patch, or of the distances between patches. These features might be naturally added
in further works related to this model, where real data can be used to define a specific model of some
metapopulation systems.
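The graphs Gν and the Adj/deg notation above can be sketched in a few lines of Python. This is an illustrative sketch, not part of the original model: the edge sets of the chain, ring, star and complete graphs follow the text (p0 is the center of the star, as stated in Section 3.1), the 2×3 grid layout is an assumption consistent with the degrees reported there, and the random graph is omitted because its edges are only given in Fig. 1.

```python
# Sketch of the unit-weight, unit-attribute graphs G^nu = (N, E, w) of
# Section 2.2. Grid layout (2x3) is an assumption; random graph omitted.
from itertools import combinations

NODES = list(range(6))  # patches p0..p5

TOPOLOGIES = {
    "chain": [(i, i + 1) for i in range(5)],
    "ring": [(i, (i + 1) % 6) for i in range(6)],
    "star": [(0, j) for j in range(1, 6)],                 # p0 is the center
    "complete": list(combinations(NODES, 2)),
    "grid": [(0, 1), (1, 2), (3, 4), (4, 5),               # assumed 2x3 layout
             (0, 3), (1, 4), (2, 5)],
}

def adjacency(edges):
    """Adj(p_i): set of nodes directly connected to each node."""
    adj = {i: set() for i in NODES}
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    return adj

def degrees(edges):
    """deg(p_i) = card(Adj(p_i))."""
    return {i: len(ns) for i, ns in adjacency(edges).items()}
```

With the assumed grid layout, `degrees(TOPOLOGIES["grid"])` gives degree 3 to p1 and p4 and degree 2 to the remaining patches, matching the two dynamics classes discussed for the grid in Section 3.1.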
In addition to the chosen network topology, this model of metapopulations also considers the presence of species individuals, which locally interact according to a chosen dynamics, and give rise to global
dynamics thanks to the dispersal processes. To this purpose, in this paper we assume that each patch is
characterized by the Lotka-Volterra (LV) model describing the interaction between the individuals of two
populations, namely preys and predators. Inside each patch, the LV model is described by the following
set of internal rules:
r1 : A X → X X
r2 : X Y → Y Y
r3 : Y → λ
where X denotes the preys, Y denotes the predators, A denotes the sustenance resources and λ is the
empty symbol. Rules r1 and r2 model the growth of preys and predators, respectively, while rule r3
models the death of predators. Each rule is also characterized by a stochastic constant (expressed in
time−1 ), which is used – together with the current amounts of individuals occurring in the patch – to evaluate
its application probability step by step, according to the tau leaping algorithm (see [10, 11, 8] for more
details). All the simulations shown hereafter have been executed using the following values of stochastic
constants and of initial amount of preys, predators, and sustenance resources: c1 =0.1, c2 =0.01, c3 =10,
X0 =Y0 =1000, A0 =200 (the value of A is fixed for the entire duration of each simulation). The simulations
have been performed with the software BioSimWare [5], that implements different stochastic simulation
algorithms for both single and multivolume systems. The software is available for free download at
http://bimib.disco.unimib.it/index.php/Software.
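To illustrate how the stochastic constants and the current amounts of individuals determine the rule probabilities, here is a minimal Gillespie-style (direct method) sketch of the single-patch LV model with the parameters above. This is a simplified stand-in written for this text, not the tau-leaping/tau-DPP implementation of BioSimWare.

```python
# Minimal SSA (direct method) sketch of the single-patch LV rules r1, r2, r3.
# A is held fixed, as in the paper (the value of A does not change).
import random

def lv_step(X, Y, A=200, c1=0.1, c2=0.01, c3=10.0, rng=random):
    """Fire one stochastic event; return the updated (X, Y, dt)."""
    a1 = c1 * A * X        # r1: A X -> X X   (prey growth)
    a2 = c2 * X * Y        # r2: X Y -> Y Y   (predation)
    a3 = c3 * Y            # r3: Y -> lambda  (predator death)
    a0 = a1 + a2 + a3
    if a0 == 0.0:
        return X, Y, float("inf")   # absorbing state: nothing can fire
    dt = rng.expovariate(a0)
    r = rng.uniform(0.0, a0)
    if r < a1:
        X += 1
    elif r < a1 + a2:
        X -= 1
        Y += 1
    else:
        Y -= 1
    return X, Y, dt

def simulate(X=1000, Y=1000, steps=10000, seed=1):
    """Run a fixed number of SSA events from the paper's initial state."""
    rng = random.Random(seed)
    t = 0.0
    for _ in range(steps):
        X, Y, dt = lv_step(X, Y, rng=rng)
        t += dt
        if X == 0 and Y == 0:
            break
    return X, Y, t
```

With these constants the deterministic equilibrium is X* = c3/c2 = 1000 and Y* = c1·A/c2 = 2000, so the stochastic trajectories oscillate around these values, as in the phase space of Fig. 2.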
In Fig. 2 we show the oscillating dynamics (left side) of preys and predators in the single patch,
obtained with this choice of parameters, and the corresponding phase space (right side). These figures
can be considered as reference to compare and discuss the dynamics obtained in the multi-patch model,
as described in Section 3.
Figure 2: The Lotka-Volterra dynamics in the single patch: oscillations in preys, X , and predators, Y (left
side), and corresponding phase space (right side).
The single patch model is then extended to a multi-patch model where, inside each patch pi of each
network topology ν , we add as many communication rules as the number of patches connected to pi
(that is, a total of deg(pi )ν rules inside each patch). These rules are needed to move population individuals among the various patches of the network, thus allowing us to analyze the effects of migration and
colonization in the metapopulation. This is done by attaching a destination target to each communication
rule, specifying the destination patch, as it is usually done in P systems. Formally, in each patch pi of
network ν , we add the so-called dispersal rules
rd p j : Y → (Y,target(p j )),
for each p j ∈ Ad j(pi )ν . Similarly to the local rules r1 , r2 , r3 , the probability of applying each dispersal
rule is determined by using its stochastic constant cd p j , whose values will be given in the next section to
consider different migration rates.
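The dispersal rules can be sketched analogously to the internal rules. The following hypothetical helper (written for this text, not taken from the original software) selects and fires one dispersal event across the whole network, with propensity cd_{p_j} · Y_i for the rule rd_{p_j} defined inside patch p_i:

```python
# Sketch of the dispersal rules rd_{p_j}: Y -> (Y, target(p_j)); firing a
# rule moves one predator from patch i to the adjacent patch j.
import random

def dispersal_event(Y, adj, cd, rng=random):
    """Fire one dispersal rule, chosen with probability proportional to
    its propensity cd[(i, j)] * Y[i]; returns the fired edge or None.
    Y   : dict patch -> predator count (modified in place)
    adj : dict patch -> iterable of adjacent patches
    cd  : dict (i, j) -> stochastic constant of rd_{p_j} in patch i
    """
    props = {(i, j): cd[(i, j)] * Y[i] for i in adj for j in adj[i]}
    total = sum(props.values())
    if total == 0.0:
        return None                 # no predators anywhere: nothing to move
    r = rng.uniform(0.0, total)
    for (i, j), a in props.items():
        r -= a
        if r <= 0.0:
            Y[i] -= 1
            Y[j] += 1
            return (i, j)
    return None
```

In a full multi-patch simulation these dispersal propensities would simply be added to the internal-rule propensities of each patch, so that migration competes stochastically with growth, predation and death.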
3
The influence of network topologies on metapopulation dynamics
In this section we analyze how the topological structure of patch-to-patch connections, and the rate of
individual dispersal between connected patches, influence the local and global dynamics of a metapopulation. In particular, in Section 3.1 we consider how a given topology and a fixed dispersal rate can
influence the prey-predators dynamics, while in Section 3.2 we focus on the capability of colonization
of empty patches, starting from the dispersal of predators living in a few patches which occupy peculiar
positions in the given network topology.
3.1 Network topologies and migration
In this section, we analyze the role of migration and compare the six network topologies with respect to
four different conditions for the dispersal rules. Namely, we assume that each patch of each topology is
initialized with a complete LV model as given in Section 2.2, where the value of the stochastic constant
cd p j for the dispersal of predators, in each patch pi ∈ N∆ν , can assume one of the following values:
1. cd p j =1, for each p j ∈ Ad j(pi )ν ;
2. cd p j =10, for each p j ∈ Ad j(pi )ν ;
3. cd p j =20, for each p j ∈ Ad j(pi )ν ;
4. cd p j = 10/deg(pi ), for each p j ∈ Ad j(pi )ν .
By considering the first condition as reference, the power of dispersal in the second (third) condition is
ten-fold (twenty-fold) the first one, irrespective of the position that patch pi occupies in the considered
network. In other terms, the flux of dispersal from each patch, in the first three conditions, is
amplified by the number of connections that each patch has with respect to the other patches in the
network. On the contrary, the fourth condition corresponds to the situation where the sum of the values
of the constants of the dispersal rules in each patch pi is always equal to 10, but the rate
of dispersal along each edge from pi to p j depends on the degree of pi . For instance, in the network
topology a (Fig. 1), the value of cd p j in patches p0 and p5 is equal to 10, while the value of cd p j in
patches p1 , . . ., p4 is equal to 5; in the network topology c (Fig. 1), the value of cd p j in patch p0 is equal
to 2, while the value of cd p j in all other patches is equal to 10, and so on. In this way, we can weigh the
dispersal of predators according to the position of each patch in the network, and simulate a situation
where the flux of dispersal from each patch towards its adjacent patches is uniform throughout the whole
network.
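The fourth condition can be made concrete in a few lines. In this sketch the chain values (10 for p0 and p5, 5 for p1, ..., p4) match those quoted in the text, and each patch's outgoing constants always sum to 10:

```python
# Sketch of the fourth dispersal condition: cd_{p_j} = 10 / deg(p_i), so the
# total outgoing dispersal constant of every patch is 10 regardless of degree.
def condition4_constants(adj):
    """adj: dict patch -> list of adjacent patches; returns cd[(i, j)]."""
    return {(i, j): 10.0 / len(adj[i]) for i in adj for j in adj[i]}

# Chain topology (graph a in Fig. 1): p0 - p1 - p2 - p3 - p4 - p5.
chain_adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
cd = condition4_constants(chain_adj)
# p0 and p5 have degree 1 -> cd = 10; p1..p4 have degree 2 -> cd = 5
```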
Due to space limits, in Fig. 3 we present the phase spaces of all network topologies, obtained from
simulations of the fourth condition only. For each network, in particular, we show the phase space of the
local dynamics of each patch. The graphics show that, in the case of the chain graph (phase space (a)), the
patches having different degrees are characterized by different dynamics: in fact, patches p0 and p5 show
a different behavior with respect to the other patches. In addition to the role of patch degree, we can see
that the position of patches in the graph also plays a central role: despite the fact that patches p1 , p2 , p3
and p4 have all the same degree, the dynamics inside p1 and p4 differs from that of patches p2 and p3 .
This is due to the different power of dispersal rules of their two neighbors, namely cd p j = 10 in patches
p0 , p5 , while cd p j = 5 in patches p2 , p3 , which causes a larger flux of predator dispersal towards patches
p1 and p4 . The global effect is the presence of three different dynamics (one in p0 , p5 , another one in p1 ,
p4 , and a third one in p2 , p3 ), all of which are characterized by oscillations in X and Y with no regular
amplitudes (compare these phase spaces with the standard LV phase space in the single patch model
[Phase-space plots (X vs. Y , one surface per patch) for the six network topologies: (a) chain, (b) grid, (c) star, (d) ring, (e) complete, (f) random.]
Figure 3: The power of migration: LV dynamics in the phase space of each network topology.
given in Fig. 2, right side, and also with the phase spaces in Fig. 3, graphics (d) and (e)). Furthermore,
we can see that these oscillations initially have a wider amplitude, which is reduced over time.
Similarly, the dynamics of the patches in the grid graph (phase space (b)) is influenced only by the
number of edges; in this phase space, we can identify two different types of dynamics: one for the patches
with three edges (p1 , p4 ) and another one for those with two connections.
In the star graph (phase space (c)), the LV dynamics persists in all patches apart from p0 , where the
number of preys X collapses to an attractor at zero, and no oscillations according to the LV dynamics
in both X and Y can be established. In this patch, the number of predators fluctuates in a certain range,
because of their dispersal from/to the other patches. Basically, in this condition patch p0 , which represents
the center of the star, becomes a local area of the habitat where only dispersal occurs.
The simulations for the ring and complete graphs (phase spaces (d), (e)) show very similar results:
in both cases, all patches in each graph have the same degree (two in the first configuration and five in
the second one), leading to regular oscillations in X and Y with almost constant amplitude.
The results concerning the last configuration, the random graph (phase space (f)), show a combination
of the effects described above. In particular, the dynamics of the patches differ from each other depending on
the degree of the patches themselves; moreover, in p4 , which is characterized by the highest degree, the
high number of incoming predators (migrating from the four adjacent patches) leads to the extinction of
preys (similarly to what happens in patch p0 of the star graph).
We also tested, for each network topology, the other three conditions listed above. In these cases,
the results have shown that the amplification of the power of dispersal with respect to the patch degree
gives rise to a balance between the incoming and migrating individuals, which leads to comparable LV
dynamics for all networks, with regular oscillations inside each patch (data not shown).
3.2 Network topologies and colonization
In this section, we compare the six network topologies with respect to the capability of colonizing the
empty patches that each network contains, starting from the patches that contain a complete LV model
and that occupy a peculiar position in that network. We recall that in this work we are considering only
the migration of predators, hence the empty patches are hereby assumed to contain no predators but only
an initial amount of preys. In each network ν , the set of patches initialized with the complete LV model
will be denoted as pνLV . To test the feature of colonization, we consider four different initial conditions,
hereby denoted as ICk, k=1, . . . , 4, where Y0 =0 and:
1. IC1 is characterized by cd p j =1 and X0 =10;
2. IC2 is characterized by cd p j =1 and X0 =100;
3. IC3 is characterized by cd p j =10 and X0 =10;
4. IC4 is characterized by cd p j =10 and X0 =100.
In each given network, all empty patches are initialized with the same chosen condition ICk, except for the
patches in the set pνLV , which are initialized with a standard LV model, having the communication constant
cd p j equal to the one given in the chosen ICk, and all other parameters as given in Section 2.2.
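The colonization set-up can be sketched as follows. This is a hedged illustration: the text fixes Y0 = 0 and the (cd, X0) pairs of each ICk, while the assumption that empty patches also start with A0 = 200 sustenance resources (needed for prey growth) is ours.

```python
# Sketch of the colonization initial conditions IC1..IC4 of Section 3.2.
IC = {  # k -> (cd, X0) for the empty patches; Y0 = 0 in all of them
    1: (1.0, 10),
    2: (1.0, 100),
    3: (10.0, 10),
    4: (10.0, 100),
}

def initial_state(patches, lv_patches, k):
    """Build the per-patch initial state for condition ICk.
    Patches in lv_patches get the full LV state of Section 2.2; the others
    start empty of predators. A0 = 200 everywhere is an assumption.
    """
    cd, X0_empty = IC[k]
    state = {}
    for p in patches:
        if p in lv_patches:
            state[p] = {"X": 1000, "Y": 1000, "A": 200, "cd": cd}
        else:
            state[p] = {"X": X0_empty, "Y": 0, "A": 200, "cd": cd}
    return state
```

For example, `initial_state(range(6), {0, 5}, 2)` reproduces the first chain experiment of Fig. 4 under IC2: LV patches p0 and p5, and empty patches with 100 preys, no predators, and dispersal constant 1.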
With this type of analysis, we expect to determine which features of the network topologies are more
relevant with respect to the colonization of empty patches, under a given initial condition. All conditions
have been tested for each network and, for each fixed initial condition, different sets of pνLV have been
considered. In the following, due to space limits, we present only some results of these simulations, and
briefly discuss the results obtained in the other analyzed conditions. In each of the following graphs,
preys (X ) are represented with solid lines, while predators (Y ) are represented with dashed lines.
We start by considering the network ν = a, that is, the chain graph. In this case, we present the
results obtained in all the initial conditions IC1, IC2, IC3, IC4, considering three sets of LV patches,
namely paLV ={p0 , p5 }, paLV ={p2 } and paLV ={p0 }. In the first case (paLV ={p0 , p5 }, shown in Fig. 4) we
can see that, when the power of dispersal is low (IC1, IC2), the time required by the predators to reach
the patches p2 and p3 , which are at the highest distance from p0 and p5 , allows an initial uncontrolled
growth of the preys in p2 and p3 , which subsequently undergo extinction as soon as the predators enter
the patch. Such “delay” in the local establishment of a population of predators is the effect that prevents
the formation of the LV dynamics; this effect, as shown hereafter, is a common aspect of all network
topologies. Concerning the chain network, this is more evident in condition IC2, where the initial amount
of preys inside the empty patches is higher than IC1: in this case, the LV dynamics can be established
only in four of the six patches. On the other hand, with the initial conditions IC3 and IC4, the power
of dispersal is sufficient to colonize all of the patches, irrespective of the number of preys that are
initially present in the empty patches and of the position of the LV complete patch. Similar results for
the chain network have been obtained in the second analyzed case (paLV ={p2 }, shown in Fig. 5) and in
the third case (paLV ={p0 }, data not shown).
Figure 4: Colonization in the chain topology, with paLV ={p0 , p5 } and initial conditions IC1 (top left), IC2
(top right), IC3 (bottom left), IC4 (bottom right).
Figure 5: Colonization in the chain topology, with paLV ={p2 } and initial conditions IC1 (top left), IC2
(top right), IC3 (bottom left), IC4 (bottom right).
For the network topology ν = b, that is, the grid graph, we show the results obtained in the cases IC1,
when pbLV ={p0 } (Fig. 6, left side) and pbLV ={p1 } (Fig. 6, right side). According to the position of the LV
complete patches in this network topology, we can see that, in the first case, the predators are capable of
colonizing patches p1 and p3 , which are directly connected to p0 , and patch p4 , which is directly connected to
both p1 and p3 . However, patches p2 and p5 cannot be colonized. In the second case, the higher degree
of the LV complete patch p1 , allows the colonization of all patches. With the initial condition IC2 (data
not shown), in the other tested cases pbLV ={p0 } and pbLV ={p1 }, only the patches directly connected to p0
and p1 , respectively, are colonized by the predators.
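The role of graph distance from the LV complete patch, and of the number of distinct shortest paths to it, can be made concrete with a small breadth-first search. The 2×3 grid below is a hypothetical reconstruction of the adjacencies implied by the text (p0 adjacent to p1 and p3, p4 adjacent to p1, p3 and p5), not the exact graph of the paper:

```python
from collections import deque

def bfs_paths(adj, src):
    """Distance and number of distinct shortest paths from src to every node."""
    dist, npaths = {src: 0}, {src: 1}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:                # first time v is reached
                dist[v] = dist[u] + 1
                npaths[v] = npaths[u]
                q.append(v)
            elif dist[v] == dist[u] + 1:     # another shortest path into v
                npaths[v] += npaths[u]
    return dist, npaths

# Hypothetical 2x3 grid reconstructed from the adjacencies in the text:
#   p0 - p1 - p2
#    |    |    |
#   p3 - p4 - p5
grid = {0: [1, 3], 1: [0, 2, 4], 2: [1, 5],
        3: [0, 4], 4: [1, 3, 5], 5: [2, 4]}

dist, npaths = bfs_paths(grid, 0)
# p4 sits at distance 2 from p0 but is reached along two shortest paths
# (via p1 and via p3); p2 is also at distance 2 but along a single path,
# and p5 is at distance 3.
```

On this reconstruction, a patch reachable along several short paths (p4) receives migrants from more directions than one at the same distance with a single path (p2), which is consistent with the colonization patterns reported above.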
For the network topology ν = c, that is, the star graph, we show the results obtained in the cases
IC1, when pcLV ={p1 } (Fig. 7, left side) and pcLV ={p1 , p3 } (Fig. 7, right side). According to the position
of the LV complete patches in this network topology, we can see that, in the first case, no patches are
colonized because of the high degree of p0 (which is the only patch connected to p1 ) that spreads the
predators over the other patches, thus preventing the formation of the LV dynamics. In the second case,
the combined effect of migration from p1 and p3 allows the colonization of patch p0 , which is directly
connected with both of them. We then performed other simulations starting with conditions IC3 and
IC4: in these cases, the higher value of cdpj allows the colonization of every patch (except for patch p0)
independently of the initial position of the LV complete patch (data not shown). On the contrary, when
The influence of network topologies on metapopulations dynamics
12
[Figure 6: two panels plotting Individuals against Time [a.u.] for patches 0–5; plots omitted.]
Figure 6: Colonization in the grid topology, with initial condition IC1 and pbLV = {p0} (left), pbLV = {p1} (right).
[Figure 7: two panels plotting Individuals against Time [a.u.] for patches 0–5; plots omitted.]
Figure 7: Colonization in the star topology, with initial condition IC1 and pcLV = {p1} (left), pcLV = {p1, p3} (right).
[Figure 8: two panels plotting Individuals against Time [a.u.] for patches 0–5; plots omitted.]
Figure 8: Colonization in the ring topology, with pdLV = {p0} and initial condition IC1 (left) and IC2 (right).
we assume pcLV = {p0}, that is, the center of the star, then all patches are fully colonized, independently
of the considered initial condition.
For the network topology ν = d, that is, the circular graph, we show the results obtained in the cases
IC1 and IC2, when pdLV ={p0 } (Fig. 8, left and right sides, respectively). Starting with the initial condition
IC2, the predators are capable of colonizing only the patches directly connected to the LV complete patch
p0 , while in the case IC1, also patch p4 (being at distance 2 from the LV complete patch) is colonized.
These results highlight, in particular, another aspect that was more marginal in the other simulations: the
stochastic nature of the communication process and of the growth of preys, which leads to the extinction
of preys in patch p2 , while in patch p4 it drives the local behavior to an oscillatory dynamics.
For the network topology ν = e, that is, the complete graph, we show the results obtained in the cases
IC1, when peLV ={p0 } (Fig. 9, left side) and peLV ={p0 , p3 } (Fig. 9, right side). While in the second case
– where the LV dynamics is initially placed in two patches – the predators can colonize all patches, in
the first case the colonization of all empty patches fails. Once more, this is an effect of the stochastic
noise combined with the low amounts of predators, which is in turn caused by the fact that the higher the
number of adjacent patches, the lower the number of predators that persist inside each patch. In all other
simulations performed with initial conditions IC3 and IC4, all patches have always been colonized, as
the higher values of the dispersal rules ensure a more uniform spread of predators throughout the network,
and thus flatten the influence of the migration delay (data not shown).
For the network topology ν = f, that is, the random graph, we show the results obtained in the cases
IC1, when pfLV = {p0} (Fig. 10, left side) and pfLV = {p2} (Fig. 10, right side). According to the position
of the LV complete patches in this network topology, we can see that, in the first case, all patches are
colonized by predators (similar results are obtained by placing the LV complete model in patch p4 – data
not shown). In the second case, patch p5 is not colonized because there is only one path of length 2 which
connects it to the initial complete LV patch p2 ; the same holds for patch p3 , which has distance from p2
equal to 3. For similar reasons, considering the case of initial condition IC1, with the LV complete model
in patch p3 , the only patch that is not colonized by predators is p2 (data not shown). In all the simulations
performed with the initial condition IC2, some of the patches have not been colonized because of the high
amount of preys initially occurring in the patches. On the other hand, with the initial conditions IC3, IC4,
the power of dispersal allows the colonization of all patches (data not shown).
[Figure 9: two panels plotting Individuals against Time [a.u.] for patches 0–5; plots omitted.]
Figure 9: Colonization in the complete topology, with initial condition IC1 and peLV = {p0} (left), peLV = {p0, p3} (right).
[Figure 10: two panels plotting Individuals against Time [a.u.] for patches 0–5; plots omitted.]
Figure 10: Colonization in the random topology, with initial condition IC1 and pfLV = {p0} (left), pfLV = {p2} (right).
4 Discussion
The fragmented habitats of real metapopulations are usually characterized by complex network topologies. In this paper, we have analyzed six small topologies that can be considered representative of local
areas in a structured habitat, and we have investigated the influence that the degree and the position of
each patch in the topology can have on the migration of individuals, as well as on the capability of colonizing empty patches. Our analysis suggests that, with respect to the power of migration (Section 3.1),
we can identify different behaviours that depend on two characteristics of the topology: on a first level,
the local behaviour inside each patch is influenced by its degree. This is especially evident if we compare
the network topology described by the circular or complete graphs, with the topology described by the
star graph: while in the first case (where all nodes have the same degree) all patches are characterized by
a similar (regular) oscillating dynamics, in the second case the most critical node is the center of the star
(which has a much higher degree than all other nodes in the same graph). In the latter case, this patch is
likely to undergo a local modification of its initial dynamics, due to a much higher incoming migration
of individuals from all other adjacent patches. On a second level, assuming in this case that the degree
of nodes is equal, then also the position of each patch in the topology matters: for instance, we have
seen that in the network topology described by the chain graph – where all nodes, besides the ones at
the extremes of the chain, have the same degree – the local dynamics is also influenced by the dynamics
of the adjacent patches in the graph. Therefore, in hypothetical habitats where there exist many patches
connected in a linear way, our results suggest that the length of the chain might have a negative role in
the establishment and in the maintenance of local dynamics.
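A minimal sketch of the degree argument above, on illustrative 6-patch versions of four of the topologies (the patch labels and edge lists are assumptions for illustration, not necessarily the paper's exact graphs):

```python
# Illustrative 6-patch versions of four of the topologies discussed above
# (patch labels and edge lists are assumptions, not the paper's exact graphs).
topologies = {
    "chain":    [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)],
    "ring":     [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)],
    "star":     [(0, 1), (0, 2), (0, 3), (0, 4), (0, 5)],
    "complete": [(i, j) for i in range(6) for j in range(i + 1, 6)],
}

def degrees(edges, n=6):
    """Degree of each of the n patches in an undirected edge list."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

# chain:         [1, 2, 2, 2, 2, 1] -- regular except at the extremes
# ring/complete: regular, so every patch sees similar incoming migration
# star:          [5, 1, 1, 1, 1, 1] -- the center p0 concentrates all migration
```

The regular topologies (ring, complete) give every patch the same number of neighbors, while the star concentrates all incoming migration on its center, matching the perturbed local dynamics observed there.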
Considering the feature of colonization (Section 3.2), we have evidenced that, in most network
topologies, the lack of colonization can be due to the delay of migrating predators with respect to the
(uncontrolled) local growth of preys, which then leads to the extinction of preys and prevents the establishment of
the LV dynamics. To measure how strong the effect of this delay is, it would be interesting
to understand whether the local growth of preys can be controlled by inducing their death, thus potentially allowing the establishment of oscillations. Besides this aspect, which deserves further investigation,
our analysis has shown that the colonization of empty patches occurs more easily in those patches
that are adjacent to the patch(es) initialized with the LV complete model. Once more, this highlights the
relevance of the position of the patch(es) where standard oscillations in preys and predators are already
settled at the beginning of the simulation. Indeed, the power of colonization is stronger in the circular
and complete networks – where the position of the LV complete patch is irrelevant (as the spread of migrating individuals throughout the network is uniform), and it is weaker in the star network – where
the position of the LV complete patch is of primary importance (as the spread of migrating individuals
throughout the network strongly depends on whether the patch is placed at the center or at the tips of the
star).
In addition to the investigations that we have presented in this work, further types of analysis that
we plan to perform on metapopulation systems concern, for instance, the study of the aspects considered
in this paper (migration, colonization, network topologies, etc.) by assuming other local and global
dynamics, e.g., the population growth according to the logistic function. Moreover, an interesting issue
that might be investigated is the synchronization of local population dynamics (e.g. by considering the
establishment and decay of oscillations in preys and predators) during migration through a given network
topology, or in the process of colonization.
Concerning the use of graphs, other relevant questions regard the analysis of the dynamics with
respect to graph properties, such as different measures of habitat connectivity (centrality indexes) [13,
26]. In this context, for example, the star graph can resemble the notion of hub (a node with high degree)
in a typical scale-free network, a structure that is known to be robust to random disturbances but highly
vulnerable to deliberate attacks on the hubs [32, 3].
Another topic of interest concerns the fact that various populations can coexist in a common habitat,
but have distinct (inter)species dynamics or different dispersal capabilities in that habitat [9]. In cases
like this, it would be interesting to construct and analyze different metapopulation models, one for each
target species, according to both the patch-to-patch connections and to the specific population dynamics.
By comparing and intersecting the results obtained on the distinct network topologies of the common
habitat derived in this way, it would be possible to determine the locations of the habitat that are most
important for each species, and thus aid the design of natural reserve systems where we can have the
most appropriate solution for all species in terms of the maximal improvement of dispersal (reduction
of species isolation) and the minimal spread of disturbances (diseases, pathogens, invasive species, etc.)
[36].
We believe that our modeling approach opens interesting perspectives and can represent a useful tool
for the investigation of a wide range of properties in metapopulation systems. We expect that applications
of this model to real cases – characterized by complex habitat networks (where each patch possesses its
own features of quality, occupancy, connectivity) and different population dynamics – will aid in the
achievement of important results and new perspectives in Ecology.
References
[1] T. Aittokallio & B. Schwikowski (2006): Graph-based methods for analysing networks in cell biology. Briefings in bioinformatics 7(3), pp. 243–255. Available at http://dx.doi.org/10.1093/bib/bbl022.
[2] R. Albert (2005): Scale-free networks in cell biology. Journal of Cell Science 118(21), pp. 4947–4957.
Available at http://dx.doi.org/10.1242/jcs.02714.
[3] R. Albert & A. L. Barabási (2002): Statistical mechanics of complex networks. Reviews of Modern Physics
74(1), pp. 47–97. Available at http://dx.doi.org/10.1103/RevModPhys.74.47.
[4] L. Berec (2002): Techniques of spatially explicit individual-based models: construction, simulation, and
mean-field analysis. Ecological Modelling 150(1-2), pp. 55–81. Available at http://dx.doi.org/10.
1016/S0304-3800(01)00463-X.
[5] D. Besozzi, P. Cazzaniga, G. Mauri & D. Pescini (2010): BioSimWare: A P Systems–based Simulation
Environment for Biological Systems. Accepted for presentation at CMC11, Jena, Germany, 2010.
[6] D. Besozzi, P. Cazzaniga, D. Pescini & G. Mauri (2007): Seasonal variance in P system models for metapopulations. Progress in Natural Science 17(4), pp. 392–400. Available at http://www.informaworld.com/
smpp/content~db=all~content=a790271810~tab=linking.
[7] D. Besozzi, P. Cazzaniga, D. Pescini & G. Mauri (2008): Modelling metapopulations with stochastic
membrane systems. BioSystems 91(3), pp. 499 – 514. Available at http://www.sciencedirect.com/
science/article/B6T2K-4PD4XHR-1/2/0903081b39759345708d5fc97aeee6bc.
[8] D. Besozzi, P. Cazzaniga, D. Pescini & G. Mauri (2009): Algorithmic Bioprocesses, chapter A Multivolume Approach to Stochastic Modelling with Membrane Systems, pp. 519–542. Springer Verlag. Available at http://www.springer.com/computer/theoretical+computer+science/book/
978-3-540-88868-0.
[9] A. Bunn (2000): Landscape connectivity: A conservation application of graph theory. Journal of Environmental Management 59(4), pp. 265–278. Available at http://dx.doi.org/10.1006/jema.2000.0373.
[10] Y. Cao, D. T. Gillespie & L. R. Petzold (2006): Efficient step size selection for the tau-leaping simulation
method. Journal of Chemical Physics 124, p. 044109. Available at http://www.ncbi.nlm.nih.gov/
pubmed/16460151.
[11] P. Cazzaniga, D. Pescini, D. Besozzi & G. Mauri (2006): Tau leaping stochastic simulation method in P
systems. In: H. J. Hoogeboom, G. Păun, G. Rozenberg & A. Salomaa, editors: Proc. of the 7th International
Workshop on Membrane Computing, 4361. LNCS, pp. 298–313. Available at http://www.springerlink.
com/content/y055172665v12t2k/?p=0a193a302bda4824b742ec37b8cda9d9&pi=18.
[12] G. Ciobanu, G. Păun & M. J. Pérez-Jiménez, editors (2005): Applications of Membrane Computing. SpringerVerlag, Berlin.
[13] S. N. Dorogovtsev & J. F. F. Mendes (2002): Evolution of networks. Advances in Physics 51(4), pp.
1079–1187. Available at http://www.informaworld.com/smpp/content~db=all~content=a713801291~tab=linking.
[14] J. B. Dunning Jr., D. J. Stewart, B. J. Danielson, B. R. Noon, T. L. Root, R. H. Lamberson & E.E. Stevens
(1995): Spatially Explicit Population Models: Current Forms and Future Uses. Ecological Applications 5(1),
pp. 3–11. Available at http://www.esajournals.org/doi/abs/10.2307/1942045.
[15] A. Fall, M. Fortin, M. Manseau & D. O’brien (2007): Spatial Graphs: Principles and Applications
for Habitat Connectivity. Ecosystems 10, pp. 448–461. Available at http://dx.doi.org/10.1007/
s10021-007-9038-7.
[16] D. T. Gillespie (1977): Exact stochastic simulation of coupled chemical reactions. Journal of Physical Chemistry 81(25), pp. 2340–2361. Available at http://dx.doi.org/10.1021/j100540a008.
[17] I. Hanski (1998): Metapopulation dynamics. Nature 396, pp. 41–49. Available at http://www.nature.
com/nature/journal/v396/n6706/abs/396041a0.html.
[18] A. Hastings & S. Harrison (1994): Metapopulation dynamics and genetics. Annual Review of Ecology
and Systematics 25, pp. 167–188. Available at http://arjournals.annualreviews.org/doi/abs/10.
1146/annurev.es.25.110194.001123.
[19] A. Hastings & C. L. Wolin (1989): Within-patch dynamics in a metapopulation. Ecology 70(5), pp. 1261–
1266. Available at http://www.esajournals.org/doi/abs/10.2307/1938184.
[20] V. A. A. Jansen (2001): The dynamics of two diffusively coupled predator-prey populations. Theoretical
Population Biology 59, pp. 119–131. Available at http://dx.doi.org/10.1006/tpbi.2000.1506.
[21] V. A. A. Jansen & A. L. Lloyd (2000): Local stability analysis of spatially homogeneous solutions of
multi-patch systems. Journal of Mathematical Biology 41, pp. 232–252. Available at http://www.
springerlink.com/content/px0an76xh7551jew/.
[22] R. Levins (1969): Some demographic and genetic consequences of environmental heterogeneity for biological
control. Bulletin of the Entomological Society of America 71, pp. 237–240.
[23] E. S. Minor & D. L. Urban (2008): A Graph-Theory Framework for Evaluating Landscape Connectivity and
Conservation Planning. Conservation Biology 22(2), pp. 297–307. Available at http://dx.doi.org/10.
1111/j.1523-1739.2007.00871.x.
[24] A. Moilanen (2004): SPOMSIM: software for stochastic patch occupancy models of metapopulation dynamics. Ecological Modelling 179, pp. 533–550. Available at http://dx.doi.org/10.1016/j.ecolmodel.
2004.04.019.
[25] J. D. Murray (2002): Mathematical Biology. I: An introduction. Springer-Verlag, New York.
[26] M. E. J. Newman (2003): The Structure and Function of Complex Networks. SIAM Review 45(2), pp. 167–
256. Available at http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&
id=SIREAD000045000002000167000001&idtype=cvips&gifs=yes.
[27] G. Păun (2000): Computing with membranes. Journal of Computer and System Sciences 61(1), pp. 108–143.
Available at http://dx.doi.org/10.1006/jcss.1999.1693.
[28] G. Păun (2002): Membrane Computing. An Introduction. Springer-Verlag, Berlin.
[29] G. Păun, G. Rozenberg & A. Salomaa, editors (2010): The Oxford Handbook of Membrane Computing.
Oxford University Press.
[30] D. Pescini, D. Besozzi, G. Mauri & C. Zandron (2006): Dynamical probabilistic P systems. International
Journal of Foundations of Computer Science 17(1), pp. 183–204. Available at http://dx.doi.org/10.
1142/S0129054106003760.
[31] D. Pescini, D. Besozzi, C. Zandron & G. Mauri (2006): Analysis and simulation of dynamics in probabilistic
P systems. In: A. Carbone & N. Pierce, editors: Proc. of 11th International Workshop on DNA Computing,
DNA11, 3892. LNCS, London, ON, Canada, pp. 236–247. Available at http://www.springerlink.com/
content/7p7653442111r273/?p=832da88ca13a4d75be122f548c3b0df6&pi=18.
[32] S. H. Strogatz (2001): Exploring complex networks. Nature 410, pp. 268–276. Available at http://tam.
cornell.edu/SS_exploring_complex_networks.pdf.
[33] A. D. Taylor (1990): Metapopulations, dispersal, and predator-prey dynamics: an overview. Ecology 71(2),
pp. 429–433. Available at http://www.esajournals.org/doi/abs/10.2307/1940297.
[34] J. M. J. Travis & C. Dytham (1998): The evolution of dispersal in a metapopulation: a spatially explicit, individual-based model. Proceedings of the Royal Society of London. Series B: Biological Sciences
265(1390), pp. 17–23. Available at http://dx.doi.org/10.1098/rspb.1998.0258.
[35] D. Urban & T. Keitt (2001): Landscape Connectivity: A Graph-Theoretic Perspective. Ecology 82(5), pp.
1205–1218. Available at http://dx.doi.org/10.2307/2679983.
[36] D. L. Urban, E. S. Minor, E. A. Treml & R. S. Schick (2009): Graph models of habitat mosaics. Ecology
Letters 12, pp. 260–273. Available at http://dx.doi.org/10.1111/j.1461-0248.2008.01271.x.
[37] W. W. Weisser, V. A. A. Jansen & M. P. Hassell (1997): The effects of a pool of dispersers on host-parasitoid
systems. Journal of Theoretical Biology 189, pp. 413–425. Available at http://dx.doi.org/10.1006/
jtbi.1997.0529.
BCOL RESEARCH REPORT 15.03
arXiv:1705.05920v1 [math.OC] 16 May 2017
Industrial Engineering & Operations Research
University of California, Berkeley, CA 94720–1777
PATH COVER AND PATH PACK INEQUALITIES FOR THE
CAPACITATED FIXED-CHARGE NETWORK FLOW PROBLEM
ALPER ATAMTÜRK, BIRCE TEZEL AND SIMGE KÜÇÜKYAVUZ
Abstract. Capacitated fixed-charge network flows are used to model a variety of problems in telecommunication, facility location, production planning and supply chain management. In this paper, we investigate capacitated path substructures and derive strong
and easy-to-compute path cover and path pack inequalities. These inequalities are based
on an explicit characterization of the submodular inequalities through a fast computation
of parametric minimum cuts on a path, and they generalize the well-known flow cover
and flow pack inequalities for the single-node relaxations of fixed-charge flow models. We
provide necessary and sufficient facet conditions. Computational results demonstrate the
effectiveness of the inequalities when used as cuts in a branch-and-cut algorithm.
July 2015; October 2016; May 2017
1. Introduction
Given a directed multigraph with demand or supply on the nodes, and capacity, fixed and
variable cost of flow on the arcs, the capacitated fixed-charge network flow (CFNF) problem
is to choose a subset of the arcs and route the flow on the chosen arcs while satisfying the
supply, demand and capacity constraints, so that the sum of fixed and variable costs is
minimized.
There are numerous polyhedral studies of the fixed-charge network flow problem. In a
seminal paper Wolsey (1989) introduces the so-called submodular inequalities, which subsume almost all valid inequalities known for capacitated fixed-charge networks. Although
the submodular inequalities are very general, their coefficients are defined implicitly through
value functions. In this paper, we give explicit valid inequalities that simultaneously make
use of the path substructures of the network as well as the arc capacities.
For the uncapacitated fixed-charge network flow problem, van Roy and Wolsey (1985)
give flow path inequalities that are based on path substructures. Rardin and Wolsey (1993)
A. Atamtürk: Department of Industrial Engineering & Operations Research, University of California, Berkeley, CA 94720. [email protected]
B. Tezel: Department of Industrial Engineering & Operations Research, University of California, Berkeley,
CA 94720. [email protected]
S. Küçükyavuz: Department of Industrial and Systems Engineering, University of Washington, Seattle, WA
98195. [email protected] .
introduce a new family of dicut inequalities and show that they describe the projection of
an extended multicommodity formulation onto the original variables of fixed-charge network
flow problem. Ortega and Wolsey (2003) present a computational study on the performance
of path and cut-set (dicut) inequalities.
For the capacitated fixed-charge network flow problem, almost all known valid inequalities
are based on single-node relaxations. Padberg et al. (1985), van Roy and Wolsey (1986)
and Gu et al. (1999) give flow cover, generalized flow cover and lifted flow cover inequalities.
Stallaert (1997) introduces the complement class of generalized flow cover inequalities and
Atamtürk (2001) describes lifted flow pack inequalities. Both uncapacitated path inequalities and capacitated flow cover inequalities are highly valuable in solving a host of practical
problems and are part of the suite of cutting planes implemented in modern mixed-integer
programming solvers.
The path structure arises naturally in network models of the lot-sizing problem. Atamtürk
and Muñoz (2004) introduce valid inequalities for the capacitated lot-sizing problems with
infinite inventory capacities. Atamtürk and Küçükyavuz (2005) give valid inequalities for
the lot-sizing problems with finite inventory and infinite production capacities. Van Vyve
(2013) introduces path-modular inequalities for the uncapacitated fixed charge transportation problems. These inequalities are derived from a value function that is neither globally
submodular nor supermodular but that exhibits sub or supermodularity under certain set
selections. Van Vyve and Ortega (2004) and Gade and Küçükyavuz (2011) give valid inequalities and extended formulations for uncapacitated lot-sizing with fixed charges on stocks.
For uncapacitated lot-sizing with backlogging, Pochet and Wolsey (1988) and Pochet and
Wolsey (1994) provide valid inequalities and Küçükyavuz and Pochet (2009) provide an
explicit description of the convex hull.
Contributions. In this paper we consider a generic path relaxation, with supply and/or
demand nodes and capacities on incoming and outgoing arcs. By exploiting the path substructure of the network and introducing notions of path cover and path pack we provide
two explicitly-described subclasses of the submodular inequalities of Wolsey (1989). The
most important consequence of the explicit derivation is that the coefficients of the submodular inequalities on a path can be computed efficiently. In particular, we show that the
coefficients of such an inequality can be computed by solving max-flow/min-cut problems
parametrically over the path. Moreover, we show that all of these coefficients can be computed with a single linear-time algorithm. For a path with a single node, the inequalities
reduce to the well-known flow cover and flow pack inequalities. Moreover, we show that
the path cover and path pack inequalities dominate flow cover and flow pack inequalities
for the corresponding single node relaxation of a path obtained by merging the path into a
single node. We give necessary and sufficient facet-defining conditions. Finally, we report
on computational experiments demonstrating the effectiveness of the proposed inequalities
when used as cuts in a branch-and-cut algorithm.
Outline. The remainder of this paper is organized as follows: In Section 2, we describe
the capacitated fixed-charge flow problem on a path, its formulation and the assumptions
we make. In Section 3, we review the submodular inequalities, discuss their computation
on a path, and introduce two explicit subclasses: path cover inequalities and path pack
inequalities. In Section 4, we analyze sufficient and necessary facet-defining conditions. In
Section 5, we present computational experiments showing the effectiveness of the path cover
and path pack inequalities compared to other network inequalities.
2. Capacitated fixed-charge network flow on a path
Let G = (N 0 , A) be a directed multigraph with nodes N 0 and arcs A. Let sN and tN be
the source and the sink nodes of G. Let N := N 0 \ {sN , tN }. Without loss of generality,
we label N := {1, . . . , n} such that a directed forward path arc exists from node i to node
i + 1 and a directed backward path arc exists from node i + 1 to node i for each node
i = 1, . . . , n − 1 (see Figure 1 for an illustration). In Remarks 1 and 2, we discuss how to
obtain a “path” graph G from a more general directed multigraph.
Let E + = {(i, j) ∈ A : i = sN , j ∈ N } and E − = {(i, j) ∈ A : i ∈ N, j = tN }.
Moreover, let us partition the sets E+ and E− such that Ek+ = {(i, j) ∈ A : i ∉ N, j = k}
and Ek− = {(i, j) ∈ A : i = k, j ∉ N} for k ∈ N. We refer to the arcs in E+ and E− as
non-path arcs. Finally, let E := E + ∪ E − be the set of all non-path arcs. For convenience,
we generalize this set notation scheme. Given an arbitrary subset of non-path arcs Y ⊆ E,
let Yj+ = Y ∩ Ej+ and Yj− = Y ∩ Ej− .
Remark 1. Given a directed multigraph G̃ = (Ñ , Ã) with nodes Ñ , arcs à and a path that
passes through nodes N , we can construct G as described above by letting E + = {(i, j) ∈
à : i ∈ Ñ \ N, j ∈ N } and E − = {(i, j) ∈ à : i ∈ N, j ∈ Ñ \ N } and letting all the arcs in
E + be the outgoing arcs from a dummy source sN and all the arcs in E − to be incoming
to a dummy sink tN .
Remark 2. If there is an arc t = (i, j) from node i ∈ N to j ∈ N , where |i − j| > 1,
then we construct a relaxation by removing arc t, and replacing it with two arcs t− ∈ Ei−
and t+ ∈ Ej+ . If there are multiple arcs from node i to node j, one can repeat the same
procedure.
Throughout the paper, we use the following notation: Let [k, j] = {k, k + 1, . . . , j} if k ≤ j
and ∅ otherwise, c(S) = Σ_{t∈S} ct, y(S) = Σ_{t∈S} yt, (a)+ = max{0, a} and dkj = Σ_{t=k}^{j} dt if
j ≥ k and 0 otherwise. Moreover, let dim(A) denote the dimension of a polyhedron A and
conv(S) be the convex hull of a set S.
The capacitated fixed-charge network flow problem on a path can be formulated as a
mixed-integer optimization problem. Let dj be the demand at node j ∈ N . We call a node
j ∈ N a demand node if dj ≥ 0 and a supply node if dj < 0. Let the flow on forward path
arc (j, j + 1) be represented by ij with an upper bound uj for j ∈ N \ {n}. Similarly, let
the flow on backward path arc (j + 1, j) be represented by rj with an upper bound bj for
j ∈ N \ {n}. Let yt be the amount of flow on arc t ∈ E with an upper bound ct . Define
binary variable xt to be 1 if yt > 0, and zero otherwise for all t ∈ E. An arc t is closed if
xt = 0 and open if xt = 1. Moreover, let ft be the fixed cost and pt be the unit flow cost
of arc t. Similarly, let hj and gj be the costs of unit flow, on forward and backward arcs
(j, j + 1) and (j + 1, j) respectively for j ∈ N \ {n}. Then, the problem is formulated as
min Σ_{t∈E} (ft xt + pt yt) + Σ_{j∈N} (hj ij + gj rj)   (1a)
[Figure 1: diagram of the path network with source sN, sink tN, nodes 1, . . . , n with demands dj, forward arc capacities uj, backward arc capacities bj, and non-path arc capacities ct; image omitted.]
Figure 1. Fixed-charge network representation of a path.
(F1)
s.t.  ij−1 − rj−1 + y(Ej+) − y(Ej−) − ij + rj = dj,   j ∈ N,   (1b)
      0 ≤ yt ≤ ct xt,   t ∈ E,   (1c)
      0 ≤ ij ≤ uj,   j ∈ N,   (1d)
      0 ≤ rj ≤ bj,   j ∈ N,   (1e)
      xt ∈ {0, 1},   t ∈ E,   (1f)
      i0 = in = r0 = rn = 0.   (1g)
Let P be the set of feasible solutions of (F1). Figure 1 shows an example network representation of (F1).
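As a quick sanity check of the flow-balance equalities (1b), the following sketch verifies a candidate flow on a hypothetical three-node path instance; all numerical data below are made up for illustration:

```python
# Minimal sanity check of the flow-balance equalities (1b) on a hypothetical
# 3-node path; all instance data below are made up for illustration.
d = [2, 1, 3]        # demands d_1..d_3
i = [3, 2]           # forward path flows i_1, i_2 (i_0 = i_n = 0)
r = [0, 0]           # backward path flows r_1, r_2
y_in = [5, 0, 1]     # aggregated inflow y(E_j^+) at each node
y_out = [0, 0, 0]    # aggregated outflow y(E_j^-) at each node

def balanced(d, i, r, y_in, y_out):
    """Check i_{j-1} - r_{j-1} + y(E_j^+) - y(E_j^-) - i_j + r_j = d_j for all j."""
    n = len(d)
    ii = [0] + i + [0]   # pad path flows with the boundary zeros of (1g)
    rr = [0] + r + [0]
    return all(
        ii[j] - rr[j] + y_in[j] - y_out[j] - ii[j + 1] + rr[j + 1] == d[j]
        for j in range(n)
    )
```

Here 5 units enter node 1, of which 2 satisfy d1 and 3 flow forward; node 2 keeps 1 unit and passes 2 on to node 3, which together with its inflow of 1 meets d3 = 3.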
Throughout we make the following assumptions on (F1):
(A.1) The set Pt = {(x, y, i, r) ∈ P : xt = 0} ≠ ∅ for all t ∈ E,
(A.2) ct > 0, uj > 0 and bj > 0 for all t ∈ E and j ∈ N,
(A.3) ct ≤ d1n + c(E−) for all t ∈ E+,
(A.4) ct ≤ bj−1 + uj + (dj)+ + c(Ej−) for all j ∈ N, t ∈ Ej+,
(A.5) ct ≤ bj + uj−1 + (−dj)+ + c(Ej+) for all j ∈ N, t ∈ Ej−.
Assumptions (A.1)–(A.2) ensure that dim conv(P) = 2|E| + |N| − 2. If (A.1) does not hold
for some t ∈ E, then xt = 1 for all points in P. Similarly, if (A.2) does not hold, the flow
on such an arc can be fixed to zero. Finally, assumptions (A.3)–(A.5) are without loss of
generality. An upper bound on yt can be obtained directly from the flow balance equalities
(1b) by using the upper and lower bounds of the other flow variables that appear in the
same constraint. As a result, the flow values on arcs t ∈ E cannot exceed the capacities
implied by (A.3)–(A.5).
Next, we review the submodular inequalities introduced by Wolsey (1989) that are valid
for any capacitated fixed-charge network flow problem. Furthermore, using the path structure, we provide an O(|E| + |N|) time algorithm to compute their coefficients explicitly.
3. Submodular inequalities on paths
Let S+ ⊆ E+ and L− ⊆ E−. Wolsey (1989) shows that the value function of the following
optimization problem is submodular:

(F2)  v(S+, L−) = max Σ_{t∈E} at yt   (2a)
      s.t.  ij−1 − rj−1 + y(Ej+) − y(Ej−) − ij + rj ≤ dj,   j ∈ N,   (2b)
            0 ≤ ij ≤ uj,   j ∈ N,   (2c)
            0 ≤ rj ≤ bj,   j ∈ N,   (2d)
            0 ≤ yt ≤ ct,   t ∈ E,   (2e)
            i0 = in = r0 = rn = 0,   (2f)
            yt = 0,   t ∈ (E+ \ S+) ∪ L−,   (2g)
where at ∈ {0, 1} for t ∈ E + and at ∈ {0, −1} for t ∈ E − . The set of feasible solutions of
(F2) is represented by Q.
We call the sets S + and L− that are used in the definition of v(S + , L− ) the objective sets.
For ease of notation, we also represent the objective sets as C := S + ∪ L− . Following this
notation, let v(C) := v(S + , L− ), v(C \ {t}) = v(S + \ {t}, L− ) for t ∈ S + and v(C \ {t}) =
v(S + , L− \ {t}) for t ∈ L− . Similarly, let v(C ∪ {t}) = v(S + ∪ {t}, L− ), for t ∈ S + and
v(C ∪ {t}) = v(S + , L− ∪ {t}) for t ∈ L− . Moreover, let
ρt (C) = v(C ∪ {t}) − v(C)
be the marginal contribution of adding an arc t to C with respect to the value function v.
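As a quick sanity check on these definitions, the toy snippet below (not from the paper; the capacities, the demand, and the single-node value function v(S) = min{c(S), d} are illustrative assumptions) verifies by brute force that the marginals ρt are diminishing, which is exactly the submodularity used next:

```python
# Toy illustration of the marginal rho_t(C) = v(C ∪ {t}) - v(C) for a simple
# submodular value function v(S) = min{c(S), d} (a hypothetical single-node case).
from itertools import combinations

c = {1: 4, 2: 7, 3: 2}   # hypothetical arc capacities
d = 9                    # hypothetical demand
v = lambda S: min(sum(c[t] for t in S), d)
rho = lambda t, S: v(S | {t}) - v(S)

arcs = set(c)
# diminishing marginals: rho_t(A) >= rho_t(B) whenever A is a subset of B
for t in arcs:
    rest = arcs - {t}
    for r in range(len(rest) + 1):
        for B in combinations(rest, r):
            B = set(B)
            for s in range(len(B) + 1):
                for A in combinations(B, s):
                    assert rho(t, set(A)) >= rho(t, B)
```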
Wolsey (1989) shows that the following inequalities are valid for P:
Σt∈E at yt + Σt∈C ρt(C \ {t})(1 − x̄t) ≤ v(C) + Σt∈E\C ρt(∅) x̄t,   (3)

Σt∈E at yt + Σt∈C ρt(E \ {t})(1 − x̄t) ≤ v(C) + Σt∈E\C ρt(C) x̄t,   (4)

where the variable x̄t is defined as

x̄t = xt if t ∈ E+, and x̄t = 1 − xt if t ∈ E−.
In fact, inequalities (3) and (4) are also valid for fixed-charge network flow formulations
where the flow balance constraints (1b) are replaced with constraints (2b). However, in this
paper, we focus on formulations with flow balance equalities (1b).
We refer to submodular inequalities (3) and (4) derived for path structures as path inequalities. In this paper, we consider sets S + and L− such that (F2) is feasible for all
objective sets C and C \ {t} for all t ∈ C.
3.1. Equivalence to the maximum flow problem. Define sets K + and K − such that
the coefficients of the objective function (2a) are:
at = 1 if t ∈ K+,   at = −1 if t ∈ K−,   and at = 0 otherwise,   (5)
where S + ⊆ K + ⊆ E + and K − ⊆ E − \ L− . We refer to the sets K + and K − as coefficient
sets. Let the set of arcs with zero coefficients in (2a) be represented by K̄ + = E + \ K + and
K̄ − = E − \ K − . Given a selection of coefficients as described in (5), we claim that (F2)
can be transformed to a maximum flow problem. We first show this result assuming dj ≥ 0
for all j ∈ N . Then, in Appendix A, we show that the nonnegativity of demand is without
loss of generality for the derivation of the inequalities.
Proposition 1. Let S + ⊆ E + and L− ⊆ E − be the objective sets in (F2) and let Y be the
nonempty set of optimal solutions of (F2). If dj ≥ 0 for all j ∈ N , then there exists at least
one optimal solution (y∗ , r∗ , i∗ ) ∈ Y such that yt∗ = 0 for t ∈ K̄ + ∪ K − ∪ L− .
Proof. Observe that yt∗ = 0 for all t ∈ E + \S + , due to constraints (2g). Since K̄ + ⊆ E + \S + ,
yt∗ = 0, for t ∈ K̄ + from feasibility of (F2). Similarly, yt∗ = 0 for all t ∈ L− by constraints
(2g).
Now suppose that yt∗ = ε > 0 for some t ∈ Kj− (i.e., at = −1 for arc t in (F2)). Let the
slack value of constraint (2b) for node j be

sj = dj − i∗j−1 + r∗j−1 − y∗(Ej+) + y∗(Ej− \ {t}) + yt∗ + i∗j − r∗j.

If sj ≥ ε, then decreasing yt∗ by ε both improves the objective function value and conserves
the feasibility of flow balance inequality (2b) for node j, since sj − ε ≥ 0.
If sj < ε, then decreasing yt∗ by ε violates the flow balance inequality since sj − ε < 0. In this
case, there must exist a simple directed path P from either the source node sN or a node
k ∈ N \ {j} to node j where all arcs have at least a flow of (ε − sj). This is guaranteed
because sj < ε implies that, without the outgoing arc t, there is more incoming flow to
node j than outgoing. Then, notice that decreasing the flow on arc t and on all arcs in path
P by ε − sj conserves feasibility. Moreover, the objective function value either remains the
same or increases, because decreasing yt by ε − sj increases the objective function value by
ε − sj and decreasing the flow on arcs in P decreases it by at most ε − sj. At the end
of this transformation, the slack value sj does not change; however, the flow on arc t is now
yt∗ = sj, which is equivalent to the first case discussed above. As a result, we obtain
a new solution to (F2) where yt∗ = 0 and the objective value is at least as large.
Proposition 2. If dj ≥ 0 for all j ∈ N , then (F2) is equivalent to a maximum flow problem
from source sN to sink tN on graph G.
Proof. At an optimal solution of problem (F2) with objective set (S+, L−), the decision
variables yt for t ∈ (E+ \ S+) ∪ K− ∪ L− can be assumed to be zero due to Proposition
1 and constraints (2g). Then, these variables can be dropped from (F2), since the value
v(S+, L−) does not depend on them, and formulation (F2) reduces to

v(S+, L−) = max { y(S+) : ij−1 − rj−1 + y(Sj+) − y(K̄j−) − ij + rj ≤ dj, j ∈ N, (2c)–(2f) }.   (6)
Now, we reformulate (6) by representing the left-hand side of the flow balance constraint by
a new nonnegative decision variable zj that has an upper bound of dj for each j ∈ N:

max { y(S+) : ij−1 − rj−1 + y(Sj+) − y(K̄j−) − ij + rj = zj, 0 ≤ zj ≤ dj, j ∈ N, (2c)–(2f) }.

The formulation above is equivalent to the maximum flow formulation from the source node
sN to the sink node tN for the path structures we are considering in this paper.
Under the assumption that dj ≥ 0 for all j ∈ N , Proposition 1 and Proposition 2 together
show that the optimal objective function value v(S + , L− ) can be computed by solving a
maximum flow problem from source sN to sink tN . We generalize this result in Appendix
A for node sets N such that dj < 0 for some j ∈ N . As a result, obtaining the explicit
coefficients of submodular inequalities (3) and (4) reduces to solving |E| + 1 maximum flow
problems. For a general underlying graph, solving |E| + 1 maximum flow problems would
take O(|E|2 |N |) time (e.g., see King et al. (1994)), where |E| and |N | are the number of
arcs and nodes, respectively. In the following subsection, by utilizing the equivalence of
maximum flow and minimum cuts and the path structure, we show that all coefficients of
(3) and (4) can be obtained in O(|E| + |N |) time using dynamic programming.
3.2. Computing the coefficients of the submodular inequalities. Throughout the
paper, we use minimum cut arguments to find the explicit coefficients of inequalities (3)
and (4). Figure 2a illustrates an example where N = [1, 5], E + = [1, 5], E − = [6, 10]
and in Figure 2b, we give an example of an sN − tN cut for S + = {2, 4, 5}, L− = {10}
and K̄ − = {7, 9, 10}. The dashed line in Figure 2b represents a cut that corresponds to
the partition {sN , 2, 5} and {tN , 1, 3, 4} with a value of b1 + d2 + c7 + u2 + c4 + b4 + d5 .
Moreover, we say that a cut passes below node j if j is in the source partition and passes
above node j if j is in the sink partition.
Let αju and αjd be the minimum value of a cut on nodes [1, j] that passes above and below
node j, respectively. Similarly, let βju and βjd be the minimum values of cuts on nodes [j, n]
that passes above and below node j respectively. Finally, let
S − = E − \ (K − ∪ L− ),
where K − is defined in (5). Recall that S + and L− are the given objective sets. Given the
notation introduced above, all of the arcs in sets S − and L− have a coefficient zero in (F2).
Therefore, dropping an arc from L− is equivalent to adding that arc to S − . We compute
αj{u,d} by a forward recursion and βj{u,d} by a backward recursion:

αju = min{αj−1d + uj−1, αj−1u} + c(Sj+),   (7)
αjd = min{αj−1d, αj−1u + bj−1} + dj + c(Sj−),   (8)

where α0u = α0d = 0, and

βju = min{βj+1u, βj+1d + bj} + c(Sj+),   (9)
βjd = min{βj+1u + uj, βj+1d} + dj + c(Sj−),   (10)
[Figure 2. An example of an sN − tN cut. (a) A path graph with E+ = [1, 5] and E− = [6, 10]. (b) An sN − tN cut for set S+ = {2, 4, 5}, L− = {10} and K̄− = {7, 9, 10}.]
where βn+1u = βn+1d = 0.
Let muj and mdj be the values of minimum cuts for nodes [1, n] that pass above and below
node j, respectively. Notice that

muj = αju + βju − c(Sj+)   (11)

and

mdj = αjd + βjd − dj − c(Sj−).   (12)
For convenience, let
mj := min{muj , mdj }.
Notice that mj is the minimum of the minimum cut values that pass above and below
node j. Since the minimum cut corresponding to v(C) has to pass either above or below
node j, mj is equal to v(C) for all j ∈ N . As a result, the minimum cut (or maximum flow)
value for the objective set C = S + ∪ L− is
v(C) = m1 = · · · = mn .
(13)
Proposition 3. All values mj , for j ∈ N , can be computed in O(|E| + |N |) time.
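The recursions can be implemented directly. Below is a sketch of the dynamic program, assuming the boundary convention u0 = b0 = un = bn = 0 (an implementation convention, not stated explicitly above); the data reproduces the lot-sizing instance of Figure 3 and the path-cover objective set S+ = {2, 3} of Example 1 below:

```python
# Sketch of the O(|E| + |N|) dynamic program behind Proposition 3, using
# recursions (7)-(10) and definitions (11)-(12). Boundary terms u_0, b_0,
# u_n, b_n are taken as 0 via dict.get (an assumption of this sketch).

def min_cut_values(n, d, u, b, cSp, cSm):
    """Return (mu, md): minimum cut values passing above/below each node."""
    au, ad = {0: 0}, {0: 0}                      # alpha^u_j, alpha^d_j
    for j in range(1, n + 1):
        au[j] = min(ad[j-1] + u.get(j-1, 0), au[j-1]) + cSp.get(j, 0)          # (7)
        ad[j] = min(ad[j-1], au[j-1] + b.get(j-1, 0)) + d[j] + cSm.get(j, 0)   # (8)
    bu, bd = {n + 1: 0}, {n + 1: 0}              # beta^u_j, beta^d_j
    for j in range(n, 0, -1):
        bu[j] = min(bu[j+1], bd[j+1] + b.get(j, 0)) + cSp.get(j, 0)            # (9)
        bd[j] = min(bu[j+1] + u.get(j, 0), bd[j+1]) + d[j] + cSm.get(j, 0)     # (10)
    mu = {j: au[j] + bu[j] - cSp.get(j, 0) for j in range(1, n + 1)}           # (11)
    md = {j: ad[j] + bd[j] - d[j] - cSm.get(j, 0) for j in range(1, n + 1)}    # (12)
    return mu, md

# Lot-sizing instance of Figure 3, path-cover objective set S+ = {2, 3}:
d = {1: 10, 2: 10, 3: 5, 4: 15}
u = {1: 10, 2: 10, 3: 20}
b = {1: 15, 2: 15, 3: 10}
cSp = {2: 35, 3: 30}                 # c(Sj+) for S+ = {2, 3}; S- and L- empty
mu, md = min_cut_values(4, d, u, b, cSp, {})
vC = min(min(mu[j], md[j]) for j in mu)
# mu = {1: 45, 2: 65, 3: 60, 4: 45}, md = {1: 40, 2: 40, 3: 40, 4: 40},
# vC = 40, matching Example 1 and equality (13).
```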
Obtaining the explicit coefficients of inequalities (3) and (4) also requires finding v(C \{t})
for t ∈ C and v(C ∪ {t}) for t ∉ C in addition to v(C). It is important to note that we do
not need to solve the recursions above repeatedly. Once the values muj and mdj are obtained
for the set C, the marginals ρt (C \ {t}) and ρt (C) can be found in O(1) time for each t ∈ E.
We use the following observation while providing the marginal values ρt (C) and ρt (C \{t})
in closed form.
Observation 1. Let c ≥ 0 and d := (b − a)+ . Then,
1. min{a + c, b} − min{a, b} = min{c, d},
2. min{a, b} − min{a, b − c} = (c − d)+ .
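Observation 1 is easy to confirm numerically; the brute-force check below (a sketch over a small integer grid, not a proof) exercises both identities:

```python
# Brute-force check of Observation 1: with c >= 0 and d := (b - a)+,
#   min{a+c, b} - min{a, b} = min{c, d}   and   min{a, b} - min{a, b-c} = (c - d)+.
pos = lambda v: max(v, 0)
for a in range(-5, 6):
    for b in range(-5, 6):
        for c in range(0, 6):
            d = pos(b - a)
            assert min(a + c, b) - min(a, b) == min(c, d)
            assert min(a, b) - min(a, b - c) == pos(c - d)
```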
In the remainder of this section, we give a linear-time algorithm to compute the coefficients ρt for inequalities (3) and (4) explicitly for paths.
Coefficients of inequality (3): Path cover inequalities. Let S + and L− be the objective sets in (F2) and S − ⊆ E − \ L− . We select the coefficient sets in (5) as K + = S + and
K − = E − \ (L− ∪ S − ) to obtain the explicit form of inequality (3). As a result, the set
definition of S − = E − \ (K − ∪ L− ) is conserved.
Definition 1. Let the coefficient sets in (5) be selected as above and (S + , L− ) be the
objective set. The set (S + , S − ) is called a path cover for the node set N if
v(S + , L− ) = d1n + c(S − ).
For inequality (3), we assume that the set (S + , S − ) is a path cover for N . Then, by
definition,
v(C) = m1 = · · · = mn = d1n + c(S − ).
After obtaining the values muj and mdj for a node j ∈ N using the recursions in (7)–(10), it is
trivial to find the minimum cut value after dropping an arc t from Sj+:

v(C \ {t}) = min{muj − ct, mdj},   t ∈ Sj+,  j ∈ N.

Similarly, dropping an arc t ∈ Lj− results in the minimum cut value

v(C \ {t}) = min{muj, mdj + ct},   t ∈ Lj−,  j ∈ N.

Using Observation 1, we obtain the marginal values

ρt(C \ {t}) = (ct − λj)+,   t ∈ Sj+

and

ρt(C \ {t}) = min{λj, ct},   t ∈ Lj−,

where

λj = (muj − mdj)+,   j ∈ N.
On the other hand, all the coefficients ρt (∅) = 0 for arcs t ∈ E \ C. First, notice that,
for t ∈ E + \ S + , v({t}) = 0, because the coefficient at = 0 for t ∈ E + \ S + . Furthermore,
v({t}) = 0 for t ∈ E − \ L− , since all incoming arcs would be closed for an objective set
(∅, {t}). As a result, inequality (3) for the objective set (S+, L−) can be written as

y(S+) + Σj∈N Σt∈Sj+ (ct − λj)+ (1 − xt) ≤ d1n + c(S−) + Σj∈N Σt∈Lj− min{ct, λj} xt + y(E− \ (L− ∪ S−)).   (14)

We refer to inequalities (14) as path cover inequalities.
Remark 3. Observe that for a path consisting of a single node N = {j} with demand
d := dj > 0, the path cover inequalities (14) reduce to the flow cover inequalities (Padberg
et al., 1985, van Roy and Wolsey, 1986). Suppose that the path consists of a single node
N = {j} with demand d := dj > 0. Let S + ⊆ E + and S − ⊆ E − . The set (S + , S − ) is a
flow cover if λ := c(S + ) − d − c(S − ) > 0 and the resulting path cover inequality
y(S+) + Σt∈S+ (ct − λ)+ (1 − xt) ≤ d + c(S−) + λ x(L−) + y(E− \ L−)   (15)
is a flow cover inequality.
Proposition 4. Let (S + , S − ) be a path cover for the node set N . The path cover inequality
for node set N is at least as strong as the flow cover inequality for the single node relaxation
obtained by merging the nodes in N .
Proof. Flow cover and path cover inequalities differ in the coefficients of variables xt for
t ∈ S + and t ∈ L− . Therefore, we compare the values λj , j ∈ N of path cover inequalities
(14) to the value λ of flow cover inequalities (15) and show that λj ≤ λ, for all j ∈ N . The
merging of node set N in graph G is equivalent to relaxing the values uj and bj to be infinite
for j ∈ [1, n − 1]. As a result, the value of the minimum cut that goes above the merged
node is m̄u = c(S + ) and the value of the minimum cut that goes below the merged node is
m̄d = d1n + c(S − ). Now, observe that the recursions in (7)-(10) imply that the minimum
cut values for the original graph G are smaller:
muj = αju + βju − c(Sj+ ) ≤ c(S + ) = m̄u
and
mdj = αjd + βjd − dj − c(Sj− ) ≤ d1n + c(S − ) = m̄d
for all j ∈ N . Recall that the coefficient for the flow cover inequality is λ = (m̄u − m̄d )+
and the coefficients for path cover inequality are λj = (muj − mdj )+ for j ∈ N . The fact that
(S + , S − ) is a path cover implies that mdj = d1n + c(S − ) for all j ∈ N . Since m̄d = mdj and
muj ≤ m̄u for all j ∈ N , we observe that λj ≤ λ for all j ∈ N . Consequently, the path cover
inequality (14) is at least as strong as the flow cover inequality (15).
[Figure 3. A lot-sizing instance with backlogging: N = [1, 4]; incoming arc capacities c1 = 15, c2 = 35, c3 = 30, c4 = 10; demands d1 = 10, d2 = 10, d3 = 5, d4 = 15; forward capacities u1 = 10, u2 = 10, u3 = 20; backward capacities b1 = 15, b2 = 15, b3 = 10.]
Example 1. Consider the lot-sizing instance in Figure 3 where N = [1, 4], S + = {2, 3},
L− = ∅. Observe that mu1 = 45, md1 = 40, mu2 = 65, md2 = 40, mu3 = 60, md3 = 40, and
mu4 = 45, md4 = 40. Then, λ1 = 5, λ2 = 25, λ3 = 20, and λ4 = 5 leading to coefficients 10
and 10 for (1 − x2 ) and (1 − x3 ), respectively. Furthermore, the maximum flow values are
v(C) = 40, v(C \ {2}) = 30, and v(C \ {3}) = 30. Then, the resulting path cover inequality
(14) is
y2 + y3 + 10(1 − x2 ) + 10(1 − x3 ) ≤ 40,
(16)
and it is facet-defining for conv(P) as will be shown in Section 4. Now, consider the relaxation obtained by merging the nodes in [1, 4] into a single node with incoming arcs {1, 2, 3, 4}
and demand d = 40. As a result, the flow cover inequalities can be applied to the merged
node set. The excess value for the set S + = {2, 3} is λ = c(S + ) − d = 25. Then, the
resulting flow cover inequality (15) is
y2 + y3 + 10(1 − x2 ) + 5(1 − x3 ) ≤ 40,
and it is weaker than the path cover inequality (16).
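The computations in Example 1 can be replayed with the recursions (7)–(10). The sketch below (assuming the boundary convention u0 = b0 = un = bn = 0) recovers the values λj, v(C), the path cover coefficients of (16), and the weaker merged-node flow cover coefficients of (15):

```python
# Reproducing the numbers of Example 1 from recursions (7)-(10); boundary
# terms u_0, b_0, u_n, b_n are taken as 0 (an assumption of this sketch).
pos = lambda v: max(v, 0)
n = 4
d = {1: 10, 2: 10, 3: 5, 4: 15}
u = {1: 10, 2: 10, 3: 20}
b = {1: 15, 2: 15, 3: 10}
c = {1: 15, 2: 35, 3: 30, 4: 10}         # incoming arc capacities (Figure 3)
Sp = {2, 3}                              # S+ = {2, 3}; S- and L- empty
cSp = {j: c[j] if j in Sp else 0 for j in range(1, n + 1)}

au, ad = {0: 0}, {0: 0}
for j in range(1, n + 1):
    au[j] = min(ad[j-1] + u.get(j-1, 0), au[j-1]) + cSp[j]       # (7)
    ad[j] = min(ad[j-1], au[j-1] + b.get(j-1, 0)) + d[j]         # (8)
bu, bd = {n + 1: 0}, {n + 1: 0}
for j in range(n, 0, -1):
    bu[j] = min(bu[j+1], bd[j+1] + b.get(j, 0)) + cSp[j]         # (9)
    bd[j] = min(bu[j+1] + u.get(j, 0), bd[j+1]) + d[j]           # (10)

mu = {j: au[j] + bu[j] - cSp[j] for j in range(1, n + 1)}        # (11)
md = {j: ad[j] + bd[j] - d[j] for j in range(1, n + 1)}          # (12)
lam = {j: pos(mu[j] - md[j]) for j in mu}
vC = min(min(mu[j], md[j]) for j in mu)
cover_coef = {j: pos(c[j] - lam[j]) for j in Sp}   # coefficients of (1 - x_t) in (14)
# lam = {1: 5, 2: 25, 3: 20, 4: 5}, vC = 40, cover_coef = {2: 10, 3: 10}:
# inequality (16). The merged node gives lam_merged = 65 - 40 = 25 and the
# weaker flow cover coefficients {2: 10, 3: 5} of inequality (15).
lam_merged = sum(c[j] for j in Sp) - sum(d.values())
flow_coef = {j: pos(c[j] - lam_merged) for j in Sp}
```

Note that lam[j] ≤ lam_merged for every j, which is the content of Proposition 4 on this instance.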
Coefficients of inequality (4): Path pack inequalities. Let S + and L− be the objective
sets in (F2) and let S − ⊆ E − \ L− . We select the coefficient sets in (5) as K + = E + and
K − = E − \ (S − ∪ L− ) to obtain the explicit form of inequality (4). As a result, the set
definition of S − = E − \ (K − ∪ L− ) is conserved.
Definition 2. Let the coefficients in (5) be selected as above and (S + , L− ) be the objective
set. The set (S + , S − ) is called a path pack for node set N if
v(S + , L− ) = c(S + ).
For inequality (4), we assume that the set (S + , S − ) is a path pack for N and L− = ∅
for simplicity. Now, we need to compute the values of v(C), v(E), v(E \ {t}) for t ∈ C and
v(C ∪ {t}) for t ∈ E \ C. The value of v(C ∪ {t}) can be obtained using the values muj and
mdj that are given by recursions (7)-(10). Then,
v(C ∪ {t}) = min{muj + ct, mdj},   t ∈ Ej+ \ Sj+,  j ∈ N

and

v(C ∪ {t}) = min{muj, mdj + ct},   t ∈ Sj−,  j ∈ N.

Then, using Observation 1, we compute the marginal values

ρt(C) = min{ct, µj},   t ∈ Ej+ \ Sj+

and

ρt(C) = (ct − µj)+,   t ∈ Sj−,

where

µj = (mdj − muj)+,   j ∈ N.
Next, we compute the values v(E) and v(E \{t}) for t ∈ C. The feasibility of (F1) implies
that (E + , ∅) is a path cover for N . By Assumption (A.1), (E + \ {t}, ∅) is also a path cover
for N for each t ∈ S + . Then v(E) = v(E \ {t}) = d1n and
ρt (E \ {t}) = 0,
t ∈ S + ∪ L− .
Then, inequality (4) can be explicitly written as
y(S+) + Σt∈E+\S+ (yt − min{ct, µj} xt) + Σt∈S− (ct − µj)+ (1 − xt) ≤ c(S+) + y(E− \ S−).   (17)
We refer to inequalities (17) as path pack inequalities.
Remark 4. Observe that for a path consisting of a single node N = {j} with demand
d := dj > 0, the path pack inequalities (17) reduce to the flow pack inequalities (Atamtürk,
2001). Let (S+, S−) be a flow pack and µ := d − c(S+) + c(S−) > 0. Moreover, the maximum
flow that can be sent through S+ for demand d and arcs in S− is c(S+). Then, the value
function v(S+) = c(S+) and the resulting path pack inequality

y(S+) + Σt∈E+\S+ (yt − min{ct, µ} xt) + Σt∈S− (ct − µ)+ (1 − xt) ≤ c(S+) + y(E− \ S−)   (18)

is equivalent to the flow pack inequality.
Proposition 5. Let (S + , S − ) be a path pack for the node set N . The path pack inequality
for N is at least as strong as the flow pack inequality for the single node relaxation obtained
by merging the nodes in N .
Proof. The proof is similar to that of Proposition 4. Flow pack and path pack inequalities
only differ in the coefficients of variables xt for t ∈ E + \ S + and t ∈ S − . Therefore, we
compare the values µj , j ∈ N of path pack inequalities (17) to the value µ of flow pack
inequalities (18) and show that µj ≤ µ for all j ∈ N . For the single node relaxation, the
values of the minimum cuts that pass above and below the merged node are m̄u = c(S + )
and m̄d = d1n + c(S − ), respectively. The recursions in (7)-(10) imply that
muj = αju + βju − c(Sj+ ) ≤ c(S + ) = m̄u
and
mdj = αjd + βjd − dj − c(Sj− ) ≤ d1n + c(S − ) = m̄d .
The coefficient for flow pack inequality is µ = (m̄d − m̄u )+ and for path pack inequality
µj = (mdj − muj )+ . Since (S + , S − ) is a path pack, the minimum cut passes above all nodes
in N and muj = c(S + ) for all j ∈ N . As a result, muj = m̄u for all j ∈ N and mdj ≤ m̄d .
Then, observe that µj ≤ µ for all j ∈ N.
Example 1 (continued). Recall the lot-sizing instance with backlogging given in Figure 3.
Let the node set N = [1, 4] with E − = ∅ and S + = {3}. Then, mu1 = 30, md1 = 40,
mu2 = 30, md2 = 40, mu3 = 30, md3 = 30, mu4 = 30, md4 = 30, leading to µ1 = 10, µ2 = 10,
µ3 = 0 and µ4 = 0. Moreover, the maximum flow values are v(C) = 30, v(C ∪ {1}) = 40,
v(C ∪ {2}) = 40, v(C ∪ {4}) = 30, v(E) = 40, and v(E \ {3}) = 40. Then the resulting path
pack inequality (17) is
y1 + y2 + y3 + y4 ≤ 30 + 10x1 + 10x2
(19)
and it is facet-defining for conv(P) as will be shown in Section 4. Now, suppose that the
nodes in [1, 4] are merged into a single node with incoming arcs {1, 2, 3, 4} and demand
d = 40. For the same set S + , we get µ = 40 − 30 = 10. Then, the corresponding flow pack
inequality (18) is
y1 + y2 + y3 + y4 ≤ 30 + 10x1 + 10x2 + 10x4 ,
which is weaker than the path pack inequality (19).
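The merged-node comparison can be sketched directly; the snippet below recomputes the flow pack coefficient µ and the coefficients min{ct, µ} appearing in the merged-node inequality above:

```python
# Sketch of the merged-node relaxation in the example above: collapsing
# N = [1, 4] into a single node with demand 40, the flow pack coefficient is
# mu = d - c(S+), and each arc t in E+ \ S+ receives min{c_t, mu} in (18).
pos = lambda v: max(v, 0)
c = {1: 15, 2: 35, 3: 30, 4: 10}     # incoming arc capacities of Figure 3
Sp = {3}                             # S+ = {3}; S- empty
d = 40                               # merged demand d_14
mu_pack = pos(d - sum(c[t] for t in Sp))
pack_coef = {t: min(c[t], mu_pack) for t in c if t not in Sp}
# mu_pack = 10 and pack_coef = {1: 10, 2: 10, 4: 10} reproduce the flow pack
# inequality y1 + y2 + y3 + y4 <= 30 + 10x1 + 10x2 + 10x4.
```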
Proposition 6. If |E + \S + | ≤ 1 and S − = ∅, then inequalities (14) and (17) are equivalent.
Proof. If E + \ S + = ∅ and S − = ∅, then it is easy to see that the coefficients of inequality
(17) are the same as those of (14). Moreover, if |E+ \ S+| = 1 (and wlog E+ \ S+ = {j}), then the
resulting inequality (17) is

y(E+) − y(E−) ≤ v(C) + ρj(C) xj = v(C) + (v(C ∪ {j}) − v(C)) xj = v(C ∪ {j}) − ρj(C)(1 − xj),
which is equivalent to path cover inequality (14) with the objective set (E + , ∅).
4. The strength of the path cover and pack inequalities
The capacities of the forward and the backward path arcs play an important role in
finding the coefficients of the path cover and pack inequalities (14) and (17). Recall that
K + and K − are the coefficient sets in (5), (S + , L− ) is the objective set for (F2) and
S − = E − \ (K − ∪ L− ).
Definition 3. A node j ∈ N is called backward independent for set (S+, S−) if

αju = αj−1d + uj−1 + c(Sj+)   or   αjd = αj−1u + bj−1 + dj + c(Sj−).
Definition 4. A node j ∈ N is called forward independent for set (S+, S−) if

βju = βj+1d + bj + c(Sj+)   or   βjd = βj+1u + uj + dj + c(Sj−).
Intuitively, backward independence of node j ∈ N implies that the minimum cut either
passes through the forward path arc (j − 1, j) or through the backward path arc (j, j − 1).
Similarly, forward independence of node j ∈ N implies that the minimum cut either passes
through the forward path arc (j, j + 1) or through the backward path arc (j + 1, j). In
Lemmas 7 and 8 below, we further explain how forward and backward independence affect
the coefficients of the path cover and pack inequalities. First, let Sjk+ = ∪_{i=j}^{k} Si+, Sjk− = ∪_{i=j}^{k} Si−
and Ljk− = ∪_{i=j}^{k} Li− if j ≤ k, and ∅ otherwise.
Lemma 7. If a node j ∈ N is backward independent for set (S+, S−), then the values λj
and µj do not depend on the sets S1j−1+, S1j−1− and the value d1j−1.
Proof. If a node j is backward independent, then either αju = αj−1d + uj−1 + c(Sj+) or
αjd = αj−1u + bj−1 + dj + c(Sj−). If αju = αj−1d + uj−1 + c(Sj+), then the equality in (7)
implies αj−1d + uj−1 ≤ αj−1u. As a result, the equality in (8) gives αjd = αj−1d + dj + c(Sj−).
Following the definitions in (11)–(12), the difference

wj := muj − mdj

is βju − βjd + uj−1, which only depends on the sets Sk+ and Sk− for k ∈ [j, n], the value djn and
the capacity of the forward path arc (j − 1, j).
If αjd = αj−1u + bj−1 + dj + c(Sj−), then the equality in (8) implies αj−1u + bj−1 ≤ αj−1d. As a
result, the equality in (7) gives αju = αj−1u + c(Sj+). Then, the difference wj = βju − βjd − bj−1,
which only depends on the sets Sk+ and Sk− for k ∈ [j, n], the value djn and the capacity of the
backward path arc (j, j − 1).
Since the values λj and µj are defined as (wj)+ and (−wj)+ respectively, the result
follows.
Remark 5. Let wj := muj − mdj. If a node j ∈ N is backward independent for a set (S+, S−),
then we observe the following: (1) If αju = αj−1d + uj−1 + c(Sj+), then

wj = βju − βjd + uj−1,

and (2) if αjd = αj−1u + bj−1 + dj + c(Sj−), then

wj = βju − βjd − bj−1.
Lemma 8. If a node j ∈ N is forward independent for set (S+, S−), then the values λj
and µj do not depend on the sets Sj+1n+, Sj+1n− and the value dj+1n.

Proof. Forward independence implies either βju = βj+1d + bj + c(Sj+) and βjd = βj+1d +
dj + c(Sj−), or βju = βj+1u + c(Sj+) and βjd = βj+1u + uj + dj + c(Sj−). Then, the difference
wj = muj − mdj is either αju − αjd + bj or αju − αjd − uj, and in both cases it is independent
of the sets Sk+, Sk− for k ∈ [1, j − 1] and the value d1j−1.
Remark 6. Let wj := muj − mdj. If a node j ∈ N is forward independent for a set (S+, S−),
then we observe the following: (1) If βju = βj+1d + bj + c(Sj+), then

wj = αju − αjd + bj,

and (2) if βjd = βj+1u + uj + dj + c(Sj−), then

wj = αju − αjd − uj.
Corollary 9. If a node j ∈ N is backward independent for set (S+, S−), then the values
λk and µk for k ∈ [j, n] are also independent of the sets S1j−1+, S1j−1− and the value d1j−1.
Similarly, if a node j ∈ N is forward independent for set (S+, S−), then the values λk and
µk for k ∈ [1, j] are also independent of the sets Sj+1n+, Sj+1n− and the value dj+1n.
Proof. The proof follows from the recursions in (7)–(10). If a node j is backward independent,
we write αj+1u and αj+1d in terms of αj−1u and αj−1d and observe that the difference wj+1 =
mj+1u − mj+1d depends on neither αj−1u nor αj−1d, which implies independence of the sets S1j−1+,
S1j−1− and the value d1j−1. We can repeat the same argument for wk, k ∈ [j + 2, n], to show
independence.
We show the same result for forward independence by writing βj−1u and βj−1d in terms of
βj+1u and βj+1d; we observe that wj depends on neither βj+1u nor βj+1d. Then, it is clear that
wj−1 is also independent of the sets Sj+1n+, Sj+1n− and the value dj+1n. We can repeat the
same argument for wk, k ∈ [1, j − 1], to show independence.
Proving the necessary facet conditions frequently requires a partition of the node set N
into two disjoint sets. Suppose, N is partitioned into N1 = [1, j − 1] and N2 = [j, n] for
some j ∈ N. Let EN1 and EN2 be the sets of non-path arcs associated with the node sets N1
and N2. We consider the forward and backward path arcs (j − 1, j) and (j, j − 1) to be in
the sets of non-path arcs EN1 and EN2, since they join node j − 1 ∈ N1 to node j ∈ N2. In
particular, EN1+ := (j, j − 1) ∪ E1j−1+, EN1− := (j − 1, j) ∪ E1j−1− and EN2+ := (j − 1, j) ∪ Ejn+,
EN2− := (j, j − 1) ∪ Ejn−, where Ekℓ+ and Ekℓ− are defined as ∪_{i=k}^{ℓ} Ei+ and ∪_{i=k}^{ℓ} Ei− if k ≤ ℓ,
respectively, and as the empty set otherwise. Since the path arcs for N do not have associated
fixed-charge variables, one can assume that there exist auxiliary binary variables x̃k = 1 for
k ∈ {(j − 1, j), (j, j − 1)}. Moreover, we partition the sets S+, S− and L− into SN1+ ⊇ S1j−1+,
SN1− ⊇ S1j−1−, LN1− := L1j−1− and SN2+ ⊇ Sjn+, SN2− ⊇ Sjn−, LN2− := Ljn−. Then, let v1 and
v2 be the value functions defined in (F2) for the node sets N1 and N2 and the objective
sets (SN1+, LN1−) and (SN2+, LN2−). Moreover, let αju, αjd, βju and βjd be defined for j ∈ N by the
recursions (7)–(10) for the set (S+, S−), and recall that S− = E− \ (K− ∪ L−).
Lemma 10. Let (S+, L−) be the objective set for the node set N = [1, n]. If αju =
αj−1d + uj−1 + c(Sj+) or βj−1u = βjd + bj−1 + c(Sj−1+), then

v(S+, L−) = v1(SN1+, LN1−) + v2(SN2+, LN2−),

where N1 = [1, j − 1], N2 = [j, n] and the arc sets are SN1+ = (j, j − 1) ∪ S1j−1+, SN2+ =
(j − 1, j) ∪ Sjn+, SN1− = S1j−1−, SN2− = Sjn−.

Proof. See Appendix B.1.
Lemma 11. Let (S+, L−) be the objective set for the node set N = [1, n]. If αjd =
αj−1u + bj−1 + dj + c(Sj−) or βj−1d = βju + uj−1 + dj−1 + c(Sj−1−), then

v(S+, L−) = v1(SN1+, LN1−) + v2(SN2+, LN2−),

where N1 = [1, j − 1], N2 = [j, n] and the arc sets are SN1+ = S1j−1+, SN2+ = Sjn+, SN1− =
(j − 1, j) ∪ S1j−1−, SN2− = (j, j − 1) ∪ Sjn−.

Proof. See Appendix B.2.
Lemma 12. Let (S+, L−) be the objective set for the node set N = [1, n]. If αju =
αj−1d + uj−1 + c(Sj+) and βj−1d = βju + uj−1 + dj−1 + c(Sj−1−), then

v(S+, L−) = v1(SN1+, LN1−) + v2(SN2+, LN2−),

where N1 = [1, j − 1], N2 = [j, n] and the arc sets are SN1+ = S1j−1+, SN2+ = (j − 1, j) ∪ Sjn+,
SN1− = S1j−1−, SN2− = (j, j − 1) ∪ Sjn−.

Proof. See Appendix B.3.
Lemma 13. Let (S+, L−) be the objective set for the node set N = [1, n]. If αjd =
αj−1u + bj−1 + dj + c(Sj−) and βj−1u = βjd + bj−1 + c(Sj−1+), then

v(S+, L−) = v1(SN1+, LN1−) + v2(SN2+, LN2−),

where N1 = [1, j − 1], N2 = [j, n] and the arc sets are SN1+ = (j, j − 1) ∪ S1j−1+, SN2+ = Sjn+,
SN1− = (j − 1, j) ∪ S1j−1−, SN2− = Sjn−.

Proof. See Appendix B.4.
In the remainder of this section, we give necessary and sufficient conditions for path cover
and pack inequalities (14) and (17) to be facet-defining for the convex hull of P.
Theorem 14. Let N = [1, n], and dj ≥ 0 for all j ∈ N. If L− = ∅ and the set (S+, S−)
is a path cover for N, then the following conditions are necessary for path cover inequality
(14) to be facet-defining for conv(P):
(i) ρt(C \ {t}) < ct, for all t ∈ C,
(ii) maxt∈S+ ρt(C \ {t}) > 0,
(iii) if a node j ∈ [2, n] is forward independent for set (S+, S−), then node j − 1 is not
backward independent for set (S+, S−),
(iv) if a node j ∈ [1, n − 1] is backward independent for set (S+, S−), then node j + 1 is
not forward independent for set (S+, S−),
(v) if maxt∈Si+ (ct − λi)+ = 0 for i = p, . . . , n for some p ∈ [2, n], then the node p − 1 is
not forward independent for (S+, S−),
(vi) if maxt∈Si+ (ct − λi)+ = 0 for i = 1, . . . , q for some q ∈ [1, n − 1], then the node q + 1
is not backward independent for (S+, S−).
Proof. (i) If for some t0 ∈ S + , ρt0 (C \ {t0 }) ≥ ct0 , then the path cover inequality with the
objective set S + \ {t0 } summed with yt0 ≤ ct0 xt0 results in an inequality at least as
strong. Rewriting the path cover inequality using the objective set S + , we obtain
Σt∈S+\{t0} (yt + ρt(S+ \ {t})(1 − xt)) + yt0 ≤ v(S+) − ρt0(S+ \ {t0})(1 − xt0) + y(E− \ S−)
= v(S+ \ {t0}) + ρt0(S+ \ {t0}) xt0 + y(E− \ S−).
Now, consider summing the path cover inequality for the objective set S + \ {t0 }
Σt∈S+\{t0} (yt + ρt(S+ \ {t, t0})(1 − xt)) ≤ v(S+ \ {t0}) + y(E− \ S−),
and yt0 ≤ ct0 xt0 . The resulting inequality dominates inequality (3) because ρt (S + \
{t}) ≤ ρt (S + \ {t, t0 }), from the submodularity of the set function v. If the assumption
of L− = ∅ is dropped, this condition extends for arcs t ∈ L− as ρt (C \ {t}) > −ct with
a similar proof.
(ii) If L− = ∅ and maxt∈S + ρt (C \ {t}) = 0, then summing flow balance equalities (1b) for
all nodes j ∈ N gives an inequality at least as strong.
(iii) Suppose a node j is forward independent for (S + , S − ) and the node j − 1 is backward
independent for (S + , S − ) for some j ∈ [2, n]. Lemmas 10–13 show that the nodes N
and the arcs C = S + ∪ L− can be partitioned into N1 = [1, j − 1], N2 = [j, n] and C1 ,
C2 such that the sum of the minimum cut values for N1 , N2 is equal to the minimum
cut for N . From Remarks 5 and 6 and Corollary 9, it is easy to see that λi for i ∈ N
will not change by the partition procedures described in Lemmas 10–13. We examine
the four cases for node j − 1 to be forward independent and node j to be backward
independent for the set (S + , S − ).
(a) Suppose αju = αj−1d + uj−1 + c(Sj+) and βj−1u = βjd + bj−1 + c(Sj−1+). Consider
the partition procedure described in Lemma 10, where SN1+ = (j, j − 1) ∪ S1j−1+,
SN2+ = (j − 1, j) ∪ Sjn+, SN1− = S1j−1−, SN2− = Sjn−. Then, the path cover inequalities
for nodes N1 and N2,

rj−1 + Σ_{i=1}^{j−1} Σt∈Si+ (yt + (ct − λi)+ (1 − xt)) ≤ v1(SN1+) + Σ_{i=1}^{j−1} y(Ei− \ Si−) + ij−1

and

ij−1 + Σ_{i=j}^{n} Σt∈Si+ (yt + (ct − λi)+ (1 − xt)) ≤ v2(SN2+) + Σ_{i=j}^{n} y(Ei− \ Si−) + rj−1,

summed give

Σ_{i=1}^{n} Σt∈Si+ (yt + (ct − λi)+ (1 − xt)) ≤ v(S+) + y(E− \ S−),

which is the path cover inequality for N with the objective set S+.
(b) Suppose αjd = αj−1u + bj−1 + dj + c(Sj−) and βj−1d = βju + uj−1 + dj−1 + c(Sj−1−).
Consider the partition described in Lemma 11, where SN1+ = S1j−1+, SN2+ = Sjn+,
SN1− = (j − 1, j) ∪ S1j−1−, SN2− = (j, j − 1) ∪ Sjn−. The path cover inequalities for
nodes N1 and N2,

Σ_{i=1}^{j−1} Σt∈Si+ (yt + (ct − λi)+ (1 − xt)) ≤ v1(SN1+) + Σ_{i=1}^{j−1} y(Ei− \ Si−)

and

Σ_{i=j}^{n} Σt∈Si+ (yt + (ct − λi)+ (1 − xt)) ≤ v2(SN2+) + Σ_{i=j}^{n} y(Ei− \ Si−),

summed give the path cover inequality for nodes N and arcs C.
(c) Suppose αju = αj−1d + uj−1 + c(Sj+) and βj−1d = βju + uj−1 + dj−1 + c(Sj−1−). Consider
the partition described in Lemma 12, where SN1+ = S1j−1+, SN2+ = (j − 1, j) ∪ Sjn+,
SN1− = S1j−1−, SN2− = (j, j − 1) ∪ Sjn−. The path cover inequalities for nodes N1 and
N2,

Σ_{i=1}^{j−1} Σt∈Si+ (yt + (ct − λi)+ (1 − xt)) ≤ v1(SN1+) + Σ_{i=1}^{j−1} y(Ei− \ Si−) + ij−1

and

ij−1 + Σ_{i=j}^{n} Σt∈Si+ (yt + (ct − λi)+ (1 − xt)) ≤ v2(SN2+) + Σ_{i=j}^{n} y(Ei− \ Si−),

summed give the path cover inequality for nodes N and arcs C.
(d) Suppose αjd = αj−1u + bj−1 + dj + c(Sj−) and βj−1u = βjd + bj−1 + c(Sj−1+). Consider
the partition described in Lemma 13, where SN1+ = (j, j − 1) ∪ S1j−1+, SN2+ = Sjn+,
SN1− = (j − 1, j) ∪ S1j−1−, SN2− = Sjn−. The path cover inequalities for nodes N1 and
N2,

rj−1 + Σ_{i=1}^{j−1} Σt∈Si+ (yt + (ct − λi)+ (1 − xt)) ≤ v1(SN1+) + Σ_{i=1}^{j−1} y(Ei− \ Si−)

and

Σ_{i=j}^{n} Σt∈Si+ (yt + (ct − λi)+ (1 − xt)) ≤ v2(SN2+) + Σ_{i=j}^{n} y(Ei− \ Si−) + rj−1,

summed give the path cover inequality for nodes N and arcs C.
(iv) The same argument for condition (iii) above also proves the desired result here.
(v) Suppose (ct − λi)+ = 0 for all t ∈ Si+ and i ∈ [p, n], and the node p − 1 is forward
independent for some p ∈ [2, n]. Then, we partition the node set N = [1, n] into
N1 = [1, p − 1] and N2 = [p, n]. We follow Lemma 10 if βp−1u = βpd + bp−1 + c(Sp−1+)
and follow Lemma 11 if βp−1d = βpu + up−1 + dp−1 + c(Sp−1−) to define SN1+, SN1−, SN2+
and SN2−. Remark 6 along with the partition procedure described in Lemma 10 or 11
implies that λi remains unchanged for i ∈ N1. The path cover inequality for nodes
N and arcs C is

y(S+) + Σ_{i=1}^{p−1} Σt∈Si+ (ct − λi)+ (1 − xt) ≤ v(S+) + y(E− \ S−).

If βp−1u = βpd + bp−1 + c(Sp−1+), then the path cover inequality for nodes N1 and arcs
SN1+, SN1− described in Lemma 10 is

rp−1 + Σ_{i=1}^{p−1} Σt∈Si+ (yt + (ct − λi)+ (1 − xt)) ≤ v1(SN1+) + Σ_{i=1}^{p−1} y(Ei− \ Si−) + ip−1.

Moreover, let m̄up and m̄dp be the values of the minimum cuts that go above and below
node p for the node set N2 and arcs SN2+, SN2−, and observe that

m̄up = βpu + up−1 and m̄dp = βpd.

Then, comparing the difference λ̄p := (m̄up − m̄dp)+ = (βpu − βpd + up−1)+ to λp =
(mup − mdp)+ = (βpu − βpd + αpu − αpd − c(Sp+) + dp + c(Sp−))+, we observe that λ̄p ≥ λp,
since αpu − αpd − c(Sp+) + dp + c(Sp−) ≤ up−1 from (7)–(8). Since (ct − λp)+ = 0, then
(ct − λ̄p)+ = 0 as well. Using the same technique, it is easy to observe that λ̄i ≥ λi
for i ∈ [p + 1, n] as well. As a result, the path cover inequality for N2 with sets SN2+,
SN2− is

ip−1 + Σ_{i=p}^{n} y(Si+) ≤ v2(SN2+) + Σ_{i=p}^{n} y(Ei− \ Si−) + rp−1.

The path cover inequalities for N1, SN1+, SN1− and for N2, SN2+, SN2− summed give the
path cover inequality for N, S+, S−.
Similarly, if βp−1d = βpu + up−1 + dp−1 + c(Sp−1−), the proof follows the previous
argument using Lemma 11. Letting m̄up and m̄dp be the values of the minimum
cuts that go above and below node p for the node set N2 and arcs SN2+, SN2−, we get

m̄up = βpu and m̄dp + bp−1 = βpd

in this case. Now, notice that αpu − αpd − c(Sp+) + dp + c(Sp−) ≥ −bp−1 from
(7)–(8), which leads to λ̄p ≥ λp. Then the proof follows as above.
(vi) The proof is similar to that of the necessary condition (v). We use Lemmas 12 and 13 and Remark 6 to partition the node set $N$ and arcs $S^+$, $S^-$ into node sets $N_1 = [1, q]$ and $N_2 = [q+1, n]$ for $q \in [2, n]$ and arcs $S_{N_1}^+$, $S_{N_1}^-$ and $S_{N_2}^+$, $S_{N_2}^-$. Next, we check the values of the minimum cut that goes above and below node $q$ for the node set $N_1$ and arcs $S_{N_1}^+$, $S_{N_1}^-$. Then, observing $-u_q \le \beta_q^u - \alpha_q^d + c(S_q^+) - d_q - c(S_q^-) \le b_q$ from (9)–(10), it is easy to show that the coefficients of $x_t$ for $t \in S_{N_1}^+$ are equal to zero in the path cover inequality for node set $N_1$. As a result, the path cover inequalities for $N_1$, $S_{N_1}^+$, $S_{N_1}^-$ and for $N_2$, $S_{N_2}^+$, $S_{N_2}^-$ summed give the path cover inequality for $N$, $S^+$, $S^-$.
Remark 7. If the node set N consists of a single node, then the conditions (i) and (ii)
of Theorem 14 reduce to the sufficient facet conditions of flow cover inequalities given in
(Padberg et al., 1985, Theorem 4) and (van Roy and Wolsey, 1986, Theorem 6). In this
setting, conditions (iii)–(vi) are no longer relevant.
Theorem 15. Let N = [1, n], E − = ∅, dj > 0 and |Ej+ | = 1, for all j ∈ N and let the set
S + be a path cover. The necessary conditions in Theorem 14 along with
(i) (ct − λj )+ > 0 for all t ∈ Sj+ , j ∈ N ,
(ii) (ct − λj )+ < c(E + \ S + ) for all t ∈ Sj+ , j ∈ N
are sufficient for path cover inequality (14) to be facet-defining for conv(P).
Proof. Recall that $\dim \operatorname{conv}(P) = 2|E| + n - 2$. In this proof, we provide $2|E| + n - 2$ affinely independent points that lie on the face
$$ F = \Big\{(x, y, i, r) \in P : y(S^+) + \sum_{t\in S^+} (c_t - \lambda_j)^+(1-x_t) = d_{1n}\Big\}. $$
First, we provide Algorithm 1 which outputs an initial feasible solution (x̄, ȳ, ī, r̄), where
all the arcs in S + have non-zero flow. Let d¯j be the effective demand on node j, that is,
the sum of dj and the minimal amount of flow that needs to be sent from the arcs in Sj+
to ensure v(S + ) = d1n . In Algorithm 1, we perform a backward pass and a forward pass
on the nodes in N . This procedure is carried out to obtain the minimal amounts of flow on
the forward and backward path arcs to satisfy the demands. For each node j ∈ N , these
minimal outgoing flow values added to the demand dj give the effective demand d¯j .
Algorithm 1 ensures that at most one of the path arcs $(j-1, j)$ and $(j, j-1)$ has non-zero flow for all $j \in [2, n]$. Moreover, note that sufficient condition (i) ensures that all the arcs in $S^+$ have nonzero flow. In addition, for at least one node $i \in N$, it is guaranteed that $c(S_i^+) > \bar{d}_i$; otherwise, $\rho_t(C) = c_t$ for all $t \in S^+$, which contradicts the necessary condition (i). Necessary conditions (iii) and (iv) ensure that $\bar{i}_j < u_j$ and $\bar{r}_j < b_j$ for all $j = 1, \ldots, n-1$.

Algorithm 1
Initialization: Let $\bar{d}_j = d_j$ for $j \in N$.
for $j = n-1$ to $1$ do
    Let $\Delta = \min\big\{u_j,\ (\bar{d}_{j+1} - c(S_{j+1}^+))^+\big\}$,
    $\bar{d}_j = \bar{d}_j + \Delta$, $\bar{d}_{j+1} = \bar{d}_{j+1} - \Delta$,
    $\bar{i}_j = \Delta$.
end for
for $j = 2$ to $n$ do
    Let $\Delta = (\bar{d}_{j-1} - c(S_{j-1}^+))^+$,
    $\bar{d}_j = \bar{d}_j + \Delta$, $\bar{d}_{j-1} = \bar{d}_{j-1} - \Delta$,
    $\bar{r}_{j-1} = \Delta - \min\{\Delta, \bar{i}_{j-1}\}$,
    $\bar{i}_{j-1} = \bar{i}_{j-1} - \min\{\Delta, \bar{i}_{j-1}\}$.
end for
$\bar{y}_j = \bar{d}_j$ for all $j \in S^+$; $\bar{x}_j = 1$ if $j \in S^+$, $0$ otherwise; $\bar{y}_j = \bar{x}_j = 0$ for all $j \in E^-$.

Let
$$ e := \arg\max_{i\in N}\{c(S_i^+) - \bar{d}_i\} $$
be the node with the largest excess capacity. Also let $\mathbf{1}_j$ be the unit vector with $1$ at position $j$.
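To make the two passes concrete, the core of Algorithm 1 can be sketched in Python as below. This is only an illustration of the update rules above: the names `d`, `u` and `cap_S` (the demands $d_j$, forward path arc capacities $u_j$ and capacities $c(S_j^+)$) are ours, and the assignment of $\bar{y}$, $\bar{x}$ is omitted.

```python
def effective_demands(d, u, cap_S):
    """Sketch of Algorithm 1's backward and forward passes.

    d[j]     : demand of node j                    (illustrative names)
    u[j]     : capacity of forward path arc (j, j+1)
    cap_S[j] : total capacity c(S_j^+) of the non-path arcs into node j
    Returns the effective demands d_bar and initial path flows i_bar, r_bar.
    """
    n = len(d)
    d_bar = list(d)
    i_bar = [0] * n  # i_bar[j]: flow on forward arc (j, j+1)
    r_bar = [0] * n  # r_bar[j]: flow on backward arc (j+1, j)
    # Backward pass: shift unmet demand of node j+1 back to node j.
    for j in range(n - 2, -1, -1):
        delta = min(u[j], max(d_bar[j + 1] - cap_S[j + 1], 0))
        d_bar[j] += delta
        d_bar[j + 1] -= delta
        i_bar[j] = delta
    # Forward pass: push remaining unmet demand of node j-1 forward.
    for j in range(1, n):
        delta = max(d_bar[j - 1] - cap_S[j - 1], 0)
        d_bar[j] += delta
        d_bar[j - 1] -= delta
        m = min(delta, i_bar[j - 1])
        r_bar[j - 1] = delta - m
        i_bar[j - 1] -= m
    return d_bar, i_bar, r_bar
```

On a three-node example with ample path capacity, the backward pass shifts the part of node 2's demand that exceeds $c(S_2^+)$ to node 1, as the text describes.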
Next, we give $2|S^+|$ affinely independent points, represented by $\bar{w}^t = (\bar{x}^t, \bar{y}^t, \bar{i}^t, \bar{r}^t)$ and $\tilde{w}^t = (\tilde{x}^t, \tilde{y}^t, \tilde{i}^t, \tilde{r}^t)$ for $t \in S^+$:
(i) Select $\bar{w}^e = (\bar{x}, \bar{y}, \bar{i}, \bar{r})$ given by Algorithm 1. Let $\varepsilon > 0$ be a sufficiently small value. We define $\bar{w}^t$ for $e \ne t \in S^+$ as $\bar{y}^t = \bar{y}^e + \varepsilon\mathbf{1}_e - \varepsilon\mathbf{1}_t$, $\bar{x}^t = \bar{x}^e$. If $t < e$, then $\bar{i}^t = \bar{i}^e$, $\bar{r}_j^t = \bar{r}_j^e$ for $j < t$ and for $j \ge e$, and $\bar{r}_j^t = \bar{r}_j^e + \varepsilon$ for $t \le j < e$.
(ii) In this class of affinely independent solutions, we close the arcs in $S^+$ one at a time and open all the arcs in $E^+ \setminus S^+$: $\tilde{x}^t = \bar{x} - \mathbf{1}_t + \sum_{j\in E^+\setminus S^+}\mathbf{1}_j$. Next, we send an additional $\bar{y}_t - (c_t - \lambda_j)^+$ amount of flow from the arcs in $S^+ \setminus \{t\}$. This is a feasible operation because $v(C \setminus \{t\}) = d_{1n} - (c_t - \lambda_j)^+$. Let $(y^*, i^*, r^*)$ be the optimal solution of (F2) corresponding to $v(S^+ \setminus \{t\})$. Then let $\tilde{y}_j^t = y_j^*$ for $j \in S^+ \setminus \{t\}$. Since $v(C \setminus \{t\}) < d_{1n}$, additional flow must be sent through nodes in $E^+ \setminus S^+$ to satisfy the flow balance equations (1b). This is also a feasible operation because of assumption (A.1). Then, the forward and backward path flows $\tilde{i}^t$ and $\tilde{r}^t$ are calculated using the flow balance equations.
In the next set of solutions, we give $2|E^+\setminus S^+| - 1$ affinely independent points represented by $\hat{w}^t = (\hat{x}^t, \hat{y}^t, \hat{i}^t, \hat{r}^t)$ and $\check{w}^t = (\check{x}^t, \check{y}^t, \check{i}^t, \check{r}^t)$ for $t \in E^+ \setminus S^+$.
(iii) Starting with solution $\bar{w}^e$, we open the arcs in $E^+ \setminus S^+$ one by one: $\hat{y}^t = \bar{y}^e$, $\hat{x}^t = \bar{x}^e + \mathbf{1}_t$, $\hat{i}^t = \bar{i}^e$, $\hat{r}^t = \bar{r}^e$.
(iv) If $|E^+ \setminus S^+| \ge 2$, then we can send a sufficiently small $\varepsilon > 0$ amount of flow from arc $t \in E^+ \setminus S^+$ to $t \ne k \in E^+ \setminus S^+$. Let this set of affinely independent points be represented by $\check{w}^t$ for $t \in E^+ \setminus S^+$. While generating $\check{w}^t$, we start with the solution $\tilde{w}^e$, where the non-path arc in $S_e^+$ is closed. The feasibility of this operation is guaranteed by the sufficiency condition (ii) and necessary conditions (iii) and (iv).
(a) If $\tilde{y}_t^e = c_t$, then there exists at least one arc $t \ne m \in E^+ \setminus S^+$ such that $0 \le \tilde{y}_m^e < c_m$ due to sufficiency assumption (ii). Then, for each $t \in E^+ \setminus S^+$ such that $\tilde{y}_t^e = c_t$, let $\check{y}^t = \tilde{y}^e - \varepsilon\mathbf{1}_t + \varepsilon\mathbf{1}_m$, $\check{x}^t = \tilde{x}^e$. If $t < m$, then $\check{i}^t = \tilde{i}^e$ and $\check{r}^t = \tilde{r}^e + \varepsilon\sum_{i=t}^{m-1}\mathbf{1}_i$. If $t > m$, then $\check{i}^t = \tilde{i}^e + \varepsilon\sum_{i=m}^{t-1}\mathbf{1}_i$ and $\check{r}^t = \tilde{r}^e$.
(b) If $\tilde{y}_t^e < c_t$ and there exists at least one arc $t \ne m \in E^+ \setminus S^+$ such that $\tilde{y}_m^e = 0$, then the same point described in (a) is feasible.
(c) If $\tilde{y}_t^e < c_t$ and there exists at least one arc $t \ne m \in E^+ \setminus S^+$ such that $\tilde{y}_m^e = c_m$, then we send $\varepsilon$ amount of flow from $t$ to $m$: $\check{y}^t = \tilde{y}^e + \varepsilon\mathbf{1}_t - \varepsilon\mathbf{1}_m$, $\check{x}^t = \tilde{x}^e$. If $t < m$, then $\check{i}^t = \tilde{i}^e + \varepsilon\sum_{i=t}^{m-1}\mathbf{1}_i$ and $\check{r}^t = \tilde{r}^e$. If $t > m$, then $\check{i}^t = \tilde{i}^e$ and $\check{r}^t = \tilde{r}^e + \varepsilon\sum_{i=m}^{t-1}\mathbf{1}_i$.
Finally, we give $n-1$ points that perturb the flow on the forward path arcs $(j, j+1)$ for $j = 1, \ldots, n-1$, represented by $\breve{w}^j = (\breve{x}^j, \breve{y}^j, \breve{i}^j, \breve{r}^j)$. Let $k = \min\{i \in N : S_i^+ \ne \emptyset\}$ and $\ell = \max\{i \in N : S_i^+ \ne \emptyset\}$. The solution given by Algorithm 1 guarantees $\bar{i}_j < u_j$ and $\bar{r}_j < b_j$ for $j = 1, \ldots, n-1$ due to necessary conditions (iii) and (iv).
(v) For $j = 1, \ldots, n-1$, we send an additional $\varepsilon$ amount of flow on the forward path arc $(j, j+1)$ and the backward path arc $(j+1, j)$. Formally, the solution $\breve{w}^j$ can be obtained by: $\breve{y}^j = \bar{y}^e$, $\breve{x}^j = \bar{x}^e$, $\breve{i}^j = \bar{i}^e + \varepsilon\mathbf{1}_j$ and $\breve{r}^j = \bar{r}^e + \varepsilon\mathbf{1}_j$.
Next, we identify conditions under which path pack inequality (17) is facet-defining for
conv(P).
Theorem 16. Let $N = [1, n]$, $d_j \ge 0$ for all $j \in N$, let the set $(S^+, S^-)$ be a path pack and $L^- = \emptyset$. The following conditions are necessary for path pack inequality (17) to be facet-defining for $\operatorname{conv}(P)$:
(i) $\rho_j(S^+) < c_j$ for all $j \in E^+ \setminus S^+$,
(ii) $\max_{t\in S^-} \rho_t(C) > 0$,
(iii) if a node $j \in [2, n]$ is forward independent for the set $(S^+, S^-)$, then node $j-1$ is not backward independent for the set $(S^+, S^-)$,
(iv) if a node $j \in [1, n-1]$ is backward independent for the set $(S^+, S^-)$, then node $j+1$ is not forward independent for the set $(S^+, S^-)$,
(v) if $\max_{t\in E_i^+\setminus S_i^+} \rho_t(C) = 0$ and $\max_{t\in S_i^-} \rho_t(C) = 0$ for $i = p, \ldots, n$ for some $p \in [2, n]$, then the node $p-1$ is not forward independent for $(S^+, S^-)$,
(vi) if $\max_{t\in E_i^+\setminus S_i^+} \rho_t(C) = 0$ and $\max_{t\in S_i^-} \rho_t(C) = 0$ for $i = 1, \ldots, q$ for some $q \in [1, n-1]$, then the node $q+1$ is not backward independent for $(S^+, S^-)$.
Proof. (i) Suppose that for some $k \in E^+ \setminus S^+$, $\rho_k(S^+) = c_k$. Then, recall that the implicit form of path pack inequality (17) is
$$ y(E^+\setminus\{k\}) + y_k + \sum_{t\in S^-}\rho_t(S^+)(1-x_t) \le v(S^+) + \sum_{k\ne t\in E^+\setminus S^+}\rho_t(S^+)x_t + c_k x_k + y(E^-\setminus S^-). $$
Now, if we select $a_k = 0$ in (F2), then the coefficients of $x_k$ and $y_k$ become zero, and summing the path pack inequality
$$ y(E^+\setminus\{k\}) + \sum_{t\in S^-}\rho_t(S^+)(1-x_t) \le v(S^+) + \sum_{k\ne t\in E^+\setminus S^+}\rho_t(S^+)x_t + y(E^-\setminus S^-) $$
with $y_k \le c_k x_k$ gives the first path pack inequality.
(ii) Suppose that $\rho_j(S^+) = 0$ for all $j \in S^-$. Then the path pack inequality is
$$ y(E^+) \le v(S^+) + \sum_{t\in E^+\setminus S^+}\rho_t(S^+)x_t + y\big(E^-\setminus(L^-\cup S^-)\big), $$
where $L^- = \emptyset$. If an arc $j$ is dropped from $S^-$ and added to $L^-$, then $v(S^+) = v(S^+\cup\{j\})$ since $\rho_j(S^+) = 0$ for $j \in S^-$. Consequently, the path pack inequality with $S^- = S^-\setminus\{j\}$ and $L^- = \{j\}$ is
$$ y(E^+) + \sum_{t\in S^-}\rho_t(S^+\cup\{j\})(1-x_t) \le v(S^+) + \sum_{t\in E^+\setminus S^+}\rho_t(S^+\cup\{j\})x_t + y\big(E^-\setminus(L^-\cup S^-)\big). $$
But since $0 \le \rho_t(S^+\cup\{j\}) \le \rho_t(S^+)$ from submodularity of $v$ and $\rho_t(S^+) = 0$ for all $t \in S^-$, we observe that the path pack inequality above reduces to
$$ y(E^+) \le v(S^+) + \sum_{t\in E^+\setminus S^+}\rho_t(S^+\cup\{j\})x_t + y\big(E^-\setminus(L^-\cup S^-)\big), $$
and it is at least as strong as the first pack inequality for $S^+$, $S^-$ and $L^- = \emptyset$.
(iii)–(iv) We repeat the same argument as in the proof of condition (iii) of Theorem 14. Suppose a node $j$ is forward independent for $(S^+, S^-)$ and the node $j-1$ is backward independent for $(S^+, S^-)$ for some $j \in [2, n]$. Lemmas 10–13 show that the nodes $N$ and the arcs $C = S^+ \cup L^-$ can be partitioned into $N_1 = [1, j-1]$, $N_2 = [j, n]$ and $C_1$, $C_2$ such that the sum of the minimum cut values for $N_1$, $N_2$ is equal to the minimum cut for $N$. From Remarks 5 and 6 and Corollary 9, it is easy to see that $\mu_i$ for $i \in N$ will not change by the partition procedures described in Lemmas 10–13. We examine the four cases for node $j-1$ to be forward independent and node $j$ to be backward independent for the set $(S^+, S^-)$. For ease of notation, let
$$ Q_{jk}^+ := \sum_{i=j}^{k}\sum_{t\in E_i^+\setminus S_i^+}(y_t - \min\{\mu_i, c_t\}x_t) \quad\text{and}\quad Q_{jk}^- := \sum_{i=j}^{k}\sum_{t\in S_i^-}(c_t-\mu_i)^+(1-x_t) $$
for $j \le k$ and $j \in N$, $k \in N$ (and zero if $j > k$), where the values $\mu_i$ are the coefficients that appear in the path pack inequality (17). As a result, the path pack inequality can be written as
$$ y(S^+) + Q_{1n}^+ \le v(C) + Q_{1n}^- + y(E^-\setminus S^-). \tag{20} $$
(a) Suppose $\alpha_j^u = \alpha_{j-1}^d + u_{j-1} + c(S_j^+)$ and $\beta_{j-1}^u = \beta_j^d + b_{j-1} + c(S_{j-1}^+)$. Consider the partition procedure described in Lemma 10, where $S_{N_1}^+ = (j, j-1)\cup S_{1j-1}^+$, $S_{N_2}^+ = (j-1, j)\cup S_{jn}^+$, $S_{N_1}^- = S_{1j-1}^-$, $S_{N_2}^- = S_{jn}^-$. Then, the path pack inequality for nodes $N_1$ is
$$ r_{j-1} + y(S_{1j-1}^+) + Q_{1j-1}^+ \le v_1(C_1) + Q_{1j-1}^- + y(E_{1j-1}^-\setminus S_{1j-1}^-) + i_{j-1}. \tag{21} $$
Similarly, the path pack inequality for $N_2$ is
$$ i_{j-1} + y(S_{jn}^+) + Q_{jn}^+ \le v_2(C_2) + Q_{jn}^- + y(E_{jn}^-\setminus S_{jn}^-) + r_{j-1}. \tag{22} $$
Inequalities (21)–(22) summed give the path pack inequality (20).
(b) Suppose $\alpha_j^d = \alpha_{j-1}^u + b_{j-1} + d_j + c(S_j^-)$ and $\beta_{j-1}^d = \beta_j^u + u_{j-1} + d_{j-1} + c(S_{j-1}^-)$. Consider the partition described in Lemma 11, where $S_{N_1}^+ = S_{1j-1}^+$, $S_{N_2}^+ = S_{jn}^+$, $S_{N_1}^- = (j-1, j)\cup S_{1j-1}^-$, $S_{N_2}^- = (j, j-1)\cup S_{jn}^-$. The submodular inequality (4) for nodes $N_1$, where the objective coefficients of (F2) are selected as $a_t = 1$ for $t \in E_{1j-1}^+$, $a_t = 0$ for $t = (j, j-1)$, $a_t = -1$ for $t \in E_{N_1}^-\setminus S_{N_1}^-$ and $a_t = 0$ for $t \in S_{N_1}^-$, is
$$ y(S_{1j-1}^+) + \sum_{t\in S_{N_1}^+} k_t(1-x_t) + Q_{1j-1}^+ \le v_1(C_1) - Q_{1j-1}^- + y(E_{1j-1}^-\setminus S_{1j-1}^-), \tag{23} $$
where $k_t$ for $t \in S_{N_1}^+$ are some nonnegative coefficients. Similarly, the submodular inequality (4) for nodes $N_2$, where the objective coefficients of (F2) are selected as $a_t = 1$ for $t \in E_{jn}^+$, $a_t = 0$ for $t = (j-1, j)$, $a_t = -1$ for $t \in E_{N_2}^-\setminus S_{N_2}^-$ and $a_t = 0$ for $t \in S_{N_2}^-$, is
$$ y(S_{jn}^+) + \sum_{t\in S_{N_2}^+} k_t(1-x_t) + Q_{jn}^+ \le v_2(C_2) - Q_{jn}^- + y(E_{jn}^-\setminus S_{jn}^-), \tag{24} $$
where $k_t$ for $t \in S_{N_2}^+$ are some nonnegative coefficients. The sum of inequalities (23)–(24) is at least as strong as the path pack inequality (20).
(c) Suppose $\alpha_j^u = \alpha_{j-1}^d + u_{j-1} + c(S_j^+)$ and $\beta_{j-1}^d = \beta_j^u + u_{j-1} + d_{j-1} + c(S_{j-1}^-)$. Consider the partition described in Lemma 12, where $S_{N_1}^+ = S_{1j-1}^+$, $S_{N_2}^+ = (j-1, j)\cup S_{jn}^+$, $S_{N_1}^- = S_{1j-1}^-$, $S_{N_2}^- = (j, j-1)\cup S_{jn}^-$. The submodular inequality (4) for nodes $N_1$, where the objective coefficients of (F2) are selected as $a_t = 1$ for $t \in E_{1j-1}^+$, $a_t = 0$ for $t = (j, j-1)$, $a_t = -1$ for $t \in E_{N_1}^-\setminus S_{N_1}^-$ and $a_t = 0$ for $t \in S_{N_1}^-$, is
$$ y(S_{1j-1}^+) + \sum_{t\in S_{N_1}^+} k_t(1-x_t) + Q_{1j-1}^+ \le v_1(C_1) - Q_{1j-1}^- + y(E_{1j-1}^-\setminus S_{1j-1}^-) + i_{j-1}, \tag{25} $$
where $k_t$ for $t \in S_{N_1}^+$ are some nonnegative coefficients. The path pack inequality for $N_2$ is
$$ i_{j-1} + y(S_{jn}^+) + Q_{jn}^+ \le v_2(C_2) + Q_{jn}^- + y(E_{jn}^-\setminus S_{jn}^-). \tag{26} $$
The sum of inequalities (25)–(26) is at least as strong as inequality (20).
(d) Suppose $\alpha_j^d = \alpha_{j-1}^u + b_{j-1} + d_j + c(S_j^-)$ and $\beta_{j-1}^u = \beta_j^d + b_{j-1} + c(S_{j-1}^+)$. Consider the partition described in Lemma 13, where $S_{N_1}^+ = (j, j-1)\cup S_{1j-1}^+$, $S_{N_2}^+ = S_{jn}^+$, $S_{N_1}^- = (j-1, j)\cup S_{1j-1}^-$, $S_{N_2}^- = S_{jn}^-$. The path pack inequality for nodes $N_1$ is
$$ r_{j-1} + y(S_{1j-1}^+) + Q_{1j-1}^+ \le v_1(C_1) + Q_{1j-1}^- + y(E_{1j-1}^-\setminus S_{1j-1}^-). \tag{27} $$
The submodular inequality (4) for nodes $N_2$, where the objective coefficients of (F2) are selected as $a_t = 1$ for $t \in E_{jn}^+$, $a_t = 0$ for $t = (j-1, j)$, $a_t = -1$ for $t \in E_{N_2}^-\setminus S_{N_2}^-$ and $a_t = 0$ for $t \in S_{N_2}^-$, is
$$ y(S_{jn}^+) + \sum_{t\in S_{N_2}^+} k_t(1-x_t) + Q_{jn}^+ \le v_2(C_2) - Q_{jn}^- + y(E_{jn}^-\setminus S_{jn}^-) + r_{j-1}, \tag{28} $$
where $k_t$ for $t \in S_{N_2}^+$ are some nonnegative coefficients. The sum of inequalities (27)–(28) is at least as strong as the path pack inequality (20).
(v) Suppose $(c_t - \mu_i)^+ = 0$ for all $t \in S_i^-$ and $i \in [p, n]$ and the node $p-1$ is forward independent. Then, we partition the node set $N = [1, n]$ into $N_1 = [1, p-1]$ and $N_2 = [p, n]$. We follow Lemma 10 if $\beta_{p-1}^u = \beta_p^d + b_{p-1} + c(S_{p-1}^+)$ and follow Lemma 11 if $\beta_{p-1}^d = \beta_p^u + u_{p-1} + d_{p-1} + c(S_{p-1}^-)$ to define $S_{N_1}^+$, $S_{N_1}^-$, $S_{N_2}^+$ and $S_{N_2}^-$. Remark 6 along with the partition procedure described in Lemma 10 or 11 implies that $\mu_i$ will remain unchanged for $i \in N_1$.
If $\beta_{p-1}^u = \beta_p^d + b_{p-1} + c(S_{p-1}^+)$, then the coefficients $\mu_i$ of the path pack inequality for nodes $N_1$ and arcs $S_{N_1}^+$, $S_{N_1}^-$ described in Lemma 10 are the same as the coefficients of the path pack inequality for nodes $N$ and arcs $S^+$, $S^-$. Moreover, let $\bar{m}_p^u$ and $\bar{m}_p^d$ be the values of the minimum cut that goes above and below node $p$ for the node set $N_2$ and arcs $S_{N_2}^+$, $S_{N_2}^-$, and observe that
$$ \bar{m}_p^u = \beta_p^u + u_{p-1} \quad\text{and}\quad \bar{m}_p^d = \beta_p^d. $$
Then, comparing the difference $\bar{\mu}_p := (\bar{m}_p^d - \bar{m}_p^u)^+ = (\beta_p^d - \beta_p^u - u_{p-1})^+$ to $\mu_p = (m_p^d - m_p^u)^+ = (\beta_p^d - \beta_p^u + \alpha_p^d - \alpha_p^u - c(S_p^+) + d_p + c(S_p^-))^+$, we observe that $\bar{\mu}_p \ge \mu_p$ since $\alpha_p^d - \alpha_p^u - c(S_p^+) + d_p + c(S_p^-) \ge -u_{p-1}$ from (7)–(8). Since $(c_t - \mu_p)^+ = 0$, then $(c_t - \bar{\mu}_p)^+ = 0$ as well. Using the same technique, it is easy to observe that $\bar{\mu}_i \ge \mu_i$ for $i \in [p+1, n]$ as well. As a result, the path pack inequality for $N_2$ with sets $S_{N_2}^+$, $S_{N_2}^-$, summed with the path pack inequality for nodes $N_1$ and arcs $S_{N_1}^+$, $S_{N_1}^-$, gives the path pack inequality for nodes $N$ and arcs $S^+$, $S^-$.
Similarly, if $\beta_{p-1}^d = \beta_p^u + u_{p-1} + d_{p-1} + c(S_{p-1}^-)$, the proof follows the previous argument closely using Lemma 11. Letting $\bar{m}_p^u$ and $\bar{m}_p^d$ be the values of the minimum cut that goes above and below node $p$ for the node set $N_2$ and arcs $S_{N_2}^+$, $S_{N_2}^-$, we get
$$ \bar{m}_p^u = \beta_p^u \quad\text{and}\quad \bar{m}_p^d + b_{p-1} = \beta_p^d $$
under this case. Now, notice that $\alpha_p^d - \alpha_p^u - c(S_p^+) + d_p + c(S_p^-) \le b_{p-1}$ from (7)–(8), which leads to $\bar{\mu}_p \ge \mu_p$. Then the proof follows the same as above.
(vi) The proof is similar to that of the necessary condition (v) above. We use Lemmas 12–13 and Remark 6 to partition the node set $N$ and arcs $S^+$, $S^-$ into node sets $N_1 = [1, q]$ and $N_2 = [q+1, n]$ and arcs $S_{N_1}^+$, $S_{N_1}^-$ and $S_{N_2}^+$, $S_{N_2}^-$. Next, we check the values of the minimum cut that goes above and below node $q$ for the node set $N_1$ and arcs $S_{N_1}^+$, $S_{N_1}^-$. Then, observing $-b_q \le \beta_q^d - \alpha_q^u - c(S_q^+) + d_q + c(S_q^-) \le u_q$ from (9)–(10), it is easy to see that the coefficients of $x_t$ for $t \in S_{N_1}^-$ and $t \in E_{N_1}^+\setminus S_{N_1}^+$ are equal to zero in the path pack inequality for node set $N_1$. As a result, the path pack inequalities for $N_1$, $S_{N_1}^+$, $S_{N_1}^-$ and for $N_2$, $S_{N_2}^+$, $S_{N_2}^-$ summed give the path pack inequality for $N$, $S^+$, $S^-$.
Remark 8. If the node set N consists of a single node, then the conditions (i) and (ii) of
Theorem 16 reduce to the necessary and sufficient facet conditions of flow pack inequalities
given in (Atamtürk, 2001, Proposition 1). In this setting, conditions (iii)–(vi) are no longer
relevant.
Theorem 17. Let N = [1, n], E − = ∅, dj > 0 and |Ej+ | = 1, for all j ∈ N and let the
objective set S + be a path pack for N . The necessary conditions in Theorem 16 along with
(i) for each j ∈ E + \ S + , either S + ∪ {j} is a path cover for N or ρj (S + ) = 0,
(ii) for each t ∈ S + , there exists jt ∈ E + \ S + such that S + \ {t} ∪ {jt } is a path cover for
N,
(iii) for each j ∈ [1, n − 1], there exists kj ∈ E + \ S + such that the set S + ∪ {kj } is a path
cover and neither node j is backward independent nor node j +1 is forward independent
for the set S + ∪ {kj }
are sufficient for path pack inequality (17) to be facet-defining for conv(P).
Proof. We provide $2|E| + n - 2$ affinely independent points that lie on the face
$$ F = \Big\{(x, y, i, r) \in P : y(S^+) + \sum_{t\in E^+\setminus S^+}(y_t - \min\{\mu_j, c_t\}x_t) = c(S^+)\Big\}. $$
Let (y∗ , i∗ , r∗ ) ∈ Q be an optimal solution to (F2). Since S + is a path pack and E − = ∅,
v(S + ) = c(S + ). Then, notice that yt∗ = ct for all t ∈ S + . Moreover, let e be the arc
with largest capacity in S + , ε > 0 be a sufficiently small value and 1j be the unit vector
with 1 at position j. First, we give 2|E + \ S + | affinely independent points represented by
z̄ t = (x̄t , ȳt , īt , r̄t ) and z̃ t = (x̃t , ỹt , ĩt , r̃t ) for t ∈ E + \ S + .
(i) Let t ∈ E + \ S + , where S + ∪ {t} is a path cover for N . The solution z̄ t has arcs
in S + ∪ {t} open, x̄tj = 1 for j ∈ S + ∪ {t}, 0 otherwise, ȳjt = yj∗ for j ∈ S + and
ȳtt = ρt (S + ), 0 otherwise. The forward and backward path arc flow values ītj and r̄jt
can then be calculated using flow balance equalities (1b) where at most one of them
can be nonzero for each j ∈ N . Sufficiency condition (i) guarantees the feasibility of
z̄ t .
(ii) Let t ∈ E + \ S + , where ρt (S + ) = 0 and let t 6= ` ∈ E + \ S + , where S + ∪ {`} is a path
cover for N . The solution z̄ t has arcs in S + ∪{t, `} open, x̄tj = 1 for j ∈ S + ∪{t, `}, and
0 otherwise, ȳjt = yj∗ for j ∈ S + , ȳtt = 0, ȳ`t = ρ` (S + ), and 0 otherwise. The forward
and backward path arc flow values ītj and r̄jt can then be calculated using flow balance
equalities (1b) where at most one of them can be nonzero for each j ∈ N . Sufficiency
condition (i) guarantees the feasibility of z̄ t .
(iii) The necessary condition (i) ensures that $\rho_t(S^+) < c_t$, therefore $\bar{y}_t^t < c_t$. In solution $\tilde{z}^t$, starting with $\bar{z}^t$, we send a flow of $\varepsilon$ from arc $t \in E^+\setminus S^+$ to $e \in S^+$. Let $\tilde{y}^t = \bar{y}^t + \varepsilon\mathbf{1}_t - \varepsilon\mathbf{1}_e$ and $\tilde{x}^t = \bar{x}^t$. If $e < t$, then $\tilde{r}^t = \bar{r}^t + \varepsilon\sum_{i=e}^{t-1}\mathbf{1}_i$, $\tilde{i}^t = \bar{i}^t$, and if $e > t$, then $\tilde{i}^t = \bar{i}^t + \varepsilon\sum_{i=t}^{e-1}\mathbf{1}_i$, $\tilde{r}^t = \bar{r}^t$.
Next, we give $2|S^+| - 1$ affinely independent feasible points $\hat{z}^t$ and $\check{z}^t$ corresponding to $t \in S^+$ that are on the face $F$. Let $k$ be the arc in $E^+\setminus S^+$ with the largest capacity.
(iv) In the feasible solutions $\hat{z}^t$ for $e \ne t \in S^+$, we open the arcs in $S^+\cup\{k\}$ and send an $\varepsilon$ flow from arc $k$ to arc $t$. Let $\hat{y}^t = \bar{y}^k + \varepsilon\mathbf{1}_k - \varepsilon\mathbf{1}_t$ and $\hat{x}^t = \bar{x}^k$. If $t < k$, then $\hat{r}^t = \bar{r}^k + \varepsilon\sum_{i=t}^{k-1}\mathbf{1}_i$, $\hat{i}^t = \bar{i}^k$, and if $t > k$, then $\hat{i}^t = \bar{i}^k + \varepsilon\sum_{i=k}^{t-1}\mathbf{1}_i$, $\hat{r}^t = \bar{r}^k$.
(v) In the solutions ž t for t ∈ S + , we close arc t and open arc jt ∈ E + \ S + that is
introduced in the sufficient condition (ii). Then, $\check{x}_j^t = 1$ if $j \in S^+\setminus\{t\}$ or $j = j_t$, and $\check{x}_j^t = 0$ otherwise. From sufficient condition (ii), there exist values $\check{y}_j^t$ that satisfy
the flow balance equalities (1b). Moreover, these y̌jt values satisfy inequality (17) at
equality since both S + ∪ {jt } and S + \ {t} ∪ {jt } are path covers for N . Then, the
forward and backward path arc flows are found using flow balance equalities where at
most one of ǐtj and řjt are nonzero for each j ∈ N .
Finally, we give $n-1$ points $\breve{z}^j$ corresponding to the forward and backward path arcs connecting nodes $j$ and $j+1$.
(vi) In the solution set $\breve{z}^j$ for $j = 1, \ldots, n-1$, starting with solution $\bar{z}^{k_j}$, where $k_j$ is introduced in the sufficient condition (iii), we send a flow of $\varepsilon$ on both the forward and the backward path arc between nodes $j$ and $j+1$. Since the sufficiency condition (iii) ensures that $\bar{r}_j^{k_j} < b_j$ and $\bar{i}_j^{k_j} < u_j$, the operation is feasible. Let $\breve{y}^j = \bar{y}^{k_j}$, $\breve{x}^j = \bar{x}^{k_j}$, $\breve{i}^j = \bar{i}^{k_j} + \varepsilon\mathbf{1}_j$ and $\breve{r}^j = \bar{r}^{k_j} + \varepsilon\mathbf{1}_j$.
5. Computational study
We test the effectiveness of path cover and path pack inequalities (14) and (17) by embedding them in a branch-and-cut framework. The experiments were run on a Linux workstation with a 2.93 GHz Intel Core i7 CPU and 8 GB of RAM, with a one-hour limit on elapsed time and a 1 GB limit on memory usage. The branch-and-cut algorithm is implemented in C++ using IBM's Concert Technology of CPLEX (version 12.5). The number of threads is set to one and dynamic search is disabled. We also turn off heuristics and preprocessing, as the purpose is to see the impact of the inequalities by themselves.
Instance generation. We use a capacitated lot-sizing model with backlogging, where constraints (1b) reduce to
$$ i_{j-1} - r_{j-1} + y_j - i_j + r_j = d_j, \qquad j \in N. $$
Let $n$ be the total number of time periods and $f$ be the ratio of the fixed cost to the variable cost associated with a non-path arc. The parameter $c$ controls how large the non-path arc capacities are with respect to average demand. All parameters are generated from a discrete uniform distribution. The demand for each node is drawn from the range $[0, 30]$ and non-path arc capacities are drawn from the range $[0.75 \times c \times \bar{d},\ 1.25 \times c \times \bar{d}]$, where $\bar{d}$ is the average demand over all time periods. Forward and backward path arc capacities are drawn from $[1.0 \times \bar{d},\ 2.0 \times \bar{d}]$ and $[0.3 \times \bar{d},\ 0.8 \times \bar{d}]$, respectively. The variable costs $p_t$, $h_t$ and $g_t$ are drawn from the ranges $[1, 10]$, $[1, 10]$ and $[1, 20]$, respectively. Finally, fixed costs $f_t$ are set equal to $f \times p_t$. Using these parameters, we generate five random instances for each combination of $n \in \{50, 100, 150\}$, $f \in \{100, 200, 500, 1000\}$ and $c \in \{2, 5, 10\}$.
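A minimal sketch of this generation scheme is given below. The function and field names are illustrative (the paper does not specify an implementation), and continuous draws stand in for the discrete uniform draws on the capacity ranges.

```python
import random

def generate_instance(n, f, c, seed=0):
    """Sketch of the random lot-sizing instance generation described above.

    n : number of time periods, f : fixed-to-variable cost ratio,
    c : capacity-to-average-demand factor. Names are illustrative.
    """
    rng = random.Random(seed)
    d = [rng.randint(0, 30) for _ in range(n)]            # demands
    d_avg = sum(d) / n
    # non-path arc capacities in [0.75*c*d_avg, 1.25*c*d_avg]
    cap = [rng.uniform(0.75 * c * d_avg, 1.25 * c * d_avg) for _ in range(n)]
    # forward and backward path arc capacities
    u = [rng.uniform(1.0 * d_avg, 2.0 * d_avg) for _ in range(n - 1)]
    b = [rng.uniform(0.3 * d_avg, 0.8 * d_avg) for _ in range(n - 1)]
    # variable production, holding and backlogging costs
    p = [rng.randint(1, 10) for _ in range(n)]
    h = [rng.randint(1, 10) for _ in range(n)]
    g = [rng.randint(1, 20) for _ in range(n)]
    fixed = [f * pt for pt in p]                          # f_t = f * p_t
    return {"d": d, "cap": cap, "u": u, "b": b,
            "p": p, "h": h, "g": g, "fixed": fixed}
```

Calling `generate_instance(50, 100, 2)` then yields one instance of the smallest configuration used in the experiments.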
Finding violated inequalities. Given a feasible solution $(x^*, y^*, i^*, r^*)$ to a linear programming (LP) relaxation of (F1), the separation problem aims to find sets $S^+$ and $L^-$ that maximize the difference
$$ y^*(S^+) - y^*(E^-\setminus L^-) + \sum_{t\in S^+}(c_t-\lambda_j)^+(1-x_t^*) - \sum_{t\in L^-}\min\{\lambda_j, c_t\}x_t^* - d_{1n} - c(S^-) $$
for path cover inequality (14), and sets $S^+$ and $S^-$ that maximize
$$ y^*(S^+) - y^*(E^-\setminus S^-) - \sum_{t\in E^+\setminus S^+}\min\{c_t, \mu_j\}x_t^* + \sum_{t\in S^-}(c_t-\mu_j)^+(1-x_t^*) - c(S^+) $$
for path pack inequality (17). We use the knapsack-relaxation-based heuristic separation strategy described in (Wolsey and Nemhauser, 1999, pg. 500) for flow cover inequalities to choose the objective set $S^+$ with a knapsack capacity $d_{1n}$. Using $S^+$, we obtain the values $\lambda_j$ and $\mu_j$ for each $j \in N$ and let $S^- = \emptyset$ for path cover and path pack inequalities (14) and (17). For path cover inequalities (14), we add an arc $t \in E^-$ to $L^-$ if $\lambda_j x_t^* < y_t^*$ and $\lambda_j < c_t$. We repeat the separation process for all subsets $[k, \ell] \subseteq [1, n]$.
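Once a candidate set $S^+$ and the coefficients $\lambda_j$ are fixed, the first expression above can be evaluated directly at the LP point. The sketch below assumes arcs are dictionary keys and `node_of[t]` gives the node associated with arc $t$; these names are ours, not the paper's.

```python
def cover_violation(y, x, lam, node_of, S_plus, S_minus, L_minus,
                    E_minus, c, d1n):
    """Violation of a path cover inequality at an LP point (sketch).

    y, x    : dicts mapping arcs to the LP values y*_t, x*_t
    lam     : dict mapping nodes to the coefficients lambda_j
    node_of : dict mapping each arc to its node
    c       : dict of arc capacities; d1n : total demand d_{1n}
    A positive return value means the inequality is violated.
    """
    lhs = sum(y[t] for t in S_plus)
    lhs += sum(max(c[t] - lam[node_of[t]], 0.0) * (1.0 - x[t]) for t in S_plus)
    lhs -= sum(y[t] for t in E_minus if t not in L_minus)
    lhs -= sum(min(lam[node_of[t]], c[t]) * x[t] for t in L_minus)
    return lhs - d1n - sum(c[t] for t in S_minus)
```

For example, a single arc with $y^*_t = 7$, $x^*_t = 0.5$, $c_t = 10$, $\lambda_j = 4$ and $d_{1n} = 8$ gives a violation of $7 + 6 \cdot 0.5 - 8 = 2$.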
Results. We report multiple performance measures. Let $z_{\mathrm{INIT}}$ be the objective function value of the initial LP relaxation and $z_{\mathrm{ROOT}}$ be the objective function value of the LP relaxation after all the valid inequalities are added. Moreover, let $z_{\mathrm{UB}}$ be the objective function value of the best feasible solution found within the time/memory limit among all experiments for an instance. Let init gap $= 100 \times (z_{\mathrm{UB}} - z_{\mathrm{INIT}})/z_{\mathrm{UB}}$ and root gap $= 100 \times (z_{\mathrm{UB}} - z_{\mathrm{ROOT}})/z_{\mathrm{UB}}$. We compute the improvement of the relaxation due to adding valid inequalities as gap imp $= 100 \times (\text{init gap} - \text{root gap})/\text{init gap}$. We also measure the optimality gap at termination as end gap $= (z_{\mathrm{UB}} - z_{\mathrm{LB}})/z_{\mathrm{UB}}$, where $z_{\mathrm{LB}}$ is the value of the best lower bound given by CPLEX. We report the average number of valid inequalities added at the root node under column cuts, the average elapsed time in seconds under time, and the average number of branch-and-bound nodes explored under nodes. If there are instances that are not solved to optimality within the time/memory limit, we report the average end gap and the number of unsolved instances under unslvd next to the time results. All numbers except init gap, end gap and time are rounded to the nearest integer.
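The four measures can be computed from the bound values as follows; end gap is expressed here as a percentage to match the tables, which is an assumption on our part.

```python
def gap_measures(z_init, z_root, z_ub, z_lb):
    """Performance measures reported in the tables (sketch).

    z_init : initial LP relaxation value, z_root : root LP value after cuts,
    z_ub   : best feasible objective,     z_lb   : best lower bound.
    """
    init_gap = 100.0 * (z_ub - z_init) / z_ub
    root_gap = 100.0 * (z_ub - z_root) / z_ub
    gap_imp = 100.0 * (init_gap - root_gap) / init_gap
    end_gap = 100.0 * (z_ub - z_lb) / z_ub  # percentage, our convention
    return init_gap, root_gap, gap_imp, end_gap
```

For instance, bounds $(z_{\mathrm{INIT}}, z_{\mathrm{ROOT}}, z_{\mathrm{UB}}, z_{\mathrm{LB}}) = (50, 95, 100, 100)$ give an init gap of 50, a root gap of 5, and hence a gap improvement of 90%.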
In Tables 1, 2 and 3, we present the performance with the path cover (14) and path pack
(17) inequalities under columns spi. To understand how the forward and backward path
arc capacities affect the computational performance, we also apply them to the single node
relaxations obtained by merging a path into a single node, where the capacities of forward
and backward path arcs within a path are considered to be infinite. In this case, the path
inequalities reduce to the flow cover and flow pack inequalities. These results are presented
under columns mspi.
Table 1. Effect of path size on the performance.
[Table body garbled in extraction. For each $(n = 50, f, c)$ combination it reports the initial gap and, for the path-size limits $p = 1$, $p \le 5$, $p \le 0.5 \times n$ and $p \le n$, the gap improvement and number of cuts for spi and mspi. Average row: init gap 45.5; gap imp 36% at $p = 1$, then 92% vs 51% ($p \le 5$) and 97% vs 52% ($p \le 0.5 \times n$ and $p \le n$) for spi vs mspi.]
In Table 1, we focus on the impact of path size on the gap improvement of the path
cover and path pack inequalities for instances with n = 50. In the columns under p = 1,
we obtain the same results for both mspi and spi since the paths are singleton nodes. We present these results under (m)spi. In columns p ≤ q, we add valid inequalities for paths of
size 1, . . . , q and observe that as the path size increases, the gap improvement of the path
inequalities increase rapidly. On average 97% of the initial gap is closed as longer paths
are used. On the other hand, flow cover and pack inequalities from merged paths reduce
about half of the initial gap. These results underline the importance of exploiting path
arc capacities on strengthening the formulations. We also observe that the increase in gap
improvement diminishes as path size grows. We choose a conservative maximum path size
limit of 0.75 × n for the experiments reported in Tables 2, 3 and 4.
In Table 2, we investigate the computational performance of path cover and path pack
inequalities independently. We present the results for path cover inequalities under columns
titled cov, for path pack inequalities under pac and for both of them under the columns
titled spi. On average, path cover and path pack inequalities independently close the gap
by 63% and 53%, respectively. However, when used together, the gap improvement is 96%,
which shows that the two classes of inequalities complement each other very well.
In Table 3, we present other performance measures as well for instances with 50, 100,
and 150 nodes. We observe that the forward and backward path arc capacities have a large
impact on the performance level of the path cover and pack inequalities. Compared to flow
cover and pack inequalities added from merged paths, path cover and path pack inequalities
reduce the number of nodes and solution times by orders of magnitude. This is mainly due
to better integrality gap improvement (50% vs 95% on average).
In Table 4, we examine the incremental effect of path cover and path pack inequalities
over the fixed-charge network cuts of CPLEX, namely flow cover, flow path and multicommodity flow cuts. Under cpx, we present the performance of flow cover, flow path and
multi-commodity flow cuts added by CPLEX, and under cpx spi, we add path cover and path pack inequalities in addition to these cuts. We observe that with the addition of path
cover and pack inequalities, the gap improvement increases from 86% to 95%. The number
of branch and bound nodes explored is reduced about 900 times. Moreover, with path cover
and path pack inequalities the average elapsed time is reduced to almost half and the total
number of unsolved instances reduces from 13 to 6 out of 180 instances.
Tables 1, 2, 3 and 4 show that submodular path inequalities are quite effective in tackling lot-sizing problems with finite arc capacities. When added to the LP relaxation, they
improve the optimality gap by 95% and the number of branch and bound nodes explored
decreases by a factor of 1000. In conclusion, our computational experiments indicate that
the use of path cover and path pack inequalities is beneficial in improving the performance
of the branch-and-cut algorithms.
Table 2. Effect of path cover (cov) and path pack (pac) inequalities when used separately and together (spi).
[Table body garbled in extraction. For each $(n = 50, f, c)$ combination it reports the initial gap and, for cov, pac and spi, the gap improvement, number of branch-and-bound nodes, number of cuts and solution time. Average row: init gap 45.5; gap imp 63% (cov), 53% (pac), 96% (spi).]
Table 3. Comparison of path inequalities applied to paths (spi) versus applied to merged paths (mspi).
[Table body garbled in extraction. For each $(n \in \{50, 100, 150\}, f, c)$ combination it reports the initial gap and, for spi and mspi, the gap improvement, number of branch-and-bound nodes, number of cuts and solution time, with the end gap and number of unsolved instances in parentheses. Average row: init gap 45.2; gap imp 95% (spi) vs 50% (mspi); nodes 416 vs 2,504,012; time 56.3 s vs 903.0 s (1.6:36).]
Table 4. Effectiveness of the path inequalities when used together with CPLEX's network cuts.
[Table body not recoverable from the text extraction; columns reported: initial gap, gap improvement (cpx alone and cpx with spi), nodes, and time with end gap and unsolved counts, for each (n, f, c) combination.]
Acknowledgements
A. Atamtürk and Birce Tezel are supported, in part, by the National Science Foundation
grant #0970180 and by grant FA9550-10-1-0168 from the Office of the Assistant Secretary
of Defense for Research and Engineering. Simge Küçükyavuz is supported, in part, by
the National Science Foundation grant #1055668. The authors are also thankful to the
Associate Editor and anonymous referees for their constructive feedback that improved the
paper substantially.
Appendix A. Equivalency of (F2) to the maximum flow problem
In Section 3, we showed the maximum flow equivalency of $v(S^+, L^-)$ under the assumption that $d_j \ge 0$ for all $j \in N$. In this section, we generalize the equivalency to paths where $d_j < 0$ for some $j \in N$.
Observation 2. If $d_j < 0$ for some $j \in N$, one can represent the supply amount as a dummy arc incoming to node $j$ (i.e., added to $E_j^+$) with a fixed flow and capacity of $-d_j$, and set the modified demand of node $j$ to $d_j = 0$.
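As a minimal illustration of Observation 2, the transformation can be sketched in Python; the function name and the dictionary encoding of the dummy supply arcs are ours, not from the paper:

```python
def transform_supply_nodes(d):
    """Observation 2 sketch: each negative demand (a supply) becomes a
    dummy incoming arc with fixed flow and capacity -d_j, and the node's
    modified demand is set to zero."""
    dummy_arcs = {}  # node index -> capacity of its dummy supply arc
    d_mod = []
    for j, dj in enumerate(d):
        if dj < 0:
            dummy_arcs[j] = -dj  # dummy arc carries the supply amount
            d_mod.append(0)
        else:
            d_mod.append(dj)
    return d_mod, dummy_arcs
```

For example, a path with demands `[3, -5, 2]` becomes `[3, 0, 2]` with a dummy supply arc of capacity 5 into node 1.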
Given the node set N with at least one supply node, let T (N ) be the transformed path
using Observation 2. Transformation T ensures that the dummy supply arcs are always
open. As a result, they are always in the set S + . We refer to the additional constraints
that fix the flow to the supply value on the dummy supply arcs as fixed-flow constraints. Notice
that $v(S^+, L^-)$ computed for $T(N)$ does not take the fixed-flow constraints into account. In
the next proposition, for a path structure, we show that there exists at least one optimal
solution to (F2) such that the fixed-flow constraints are satisfied.
Proposition 18. Suppose that dj < 0 for some j ∈ N . If (F2) for the node set N is
feasible, then it has at least one optimal solution that satisfies the fixed-flow constraints.
Proof. We need to show that v(S + , L− ) has an optimal solution where the flow at the
dummy supply arcs is equal to the supply values. The transformation T makes Proposition
1 applicable to the modified path T (N ). Let Y be the set of optimal solutions of (F2). Then,
there exists a solution $(y^*, i^*, r^*) \in Y$ where $y_t^* = 0$ for $t \in E^- \setminus (S^- \cup L^-)$. Let $p \in S_j^+$ represent the index of the dummy supply arc with $c_p = -d_j$. If $y_p^* < c_p$, then satisfying the fixed-flow constraints requires pushing flow through the arcs in $E^- \setminus L^-$. We use Algorithm 2 to construct an optimal solution with $y_p^* = c_p$. Note that the arcs in $E_k^- \setminus L_k^-$ for $k \in N$ appear in (F2) with the same coefficients; therefore, we merge these outgoing arcs into one in Algorithm 2. We represent the merged flow and capacity by
$$\bar{Y}_k^- = \sum_{t \in E_k^- \setminus (S_k^- \cup L_k^-)} y_t^* \quad \text{and} \quad \bar{C}_k = c\big(E_k^- \setminus (S_k^- \cup L_k^-)\big)$$
for $k \in N$.
Proposition 18 shows that, in the presence of supply nodes, transformation $T$ both captures the graph's structure and does not affect the validity of (F1). As a result, Propositions 1 and 2 become applicable to the transformed path, and the submodular path inequalities (14) and (17) are also valid for paths where $d_j < 0$ for some $j \in N$.
Algorithm 2
J : Set of supply nodes in N , sorted with respect to their order in N .
(y∗ , i∗ , r∗ ) ∈ Y: yt∗ = 0 for all t ∈ E − \ (S − ∪ L− ).
for q ∈ J do
  Let p be the dummy supply arc in Sq+
  ∆ = cp − yp∗ and set yp∗ = cp
  for j = q to n do
    δ = min{C̄j − Ȳj− , ∆}
    Ȳj− = Ȳj− + δ
    ∆=∆−δ
    i∗j = i∗j + ∆
    if i∗j > uj then
      ∆ = i∗j − uj
      i∗j = uj
      Let k := j
      break inner loop
    end if
  end for
  if ∆ > 0 then
    for j = k downto 1 do
      δ = min{C̄j − Ȳj− , ∆}
      Ȳj− = Ȳj− + δ
      ∆=∆−δ
      rj∗ = rj∗ + ∆
      if rj∗ > bj then
        ∆ = rj∗ − bj
        rj∗ = bj
        break inner loop
      end if
    end for
  end if
  if ∆ > 0 then
    (F2) is infeasible for the node set N .
  end if
end for
Appendix B. Proofs
B.1. Proof of Lemma 10. Recall that $C = S^+ \cup L^-$ and let $C_1 = S^+_{N_1} \cup L^-_{N_1}$ and $C_2 = S^+_{N_2} \cup L^-_{N_2}$. In (13), we showed that the value of the minimum cut is
$$v(C) = m_i = \min\{\alpha_i^u + \beta_i^u - c(S_i^+),\ \alpha_i^d + \beta_i^d - d_i - c(S_i^-)\}$$
for all $i \in N$. For node set $N_1$ and the arc set $C_1$, the value of the minimum cut is
$$v_1(C_1) = \min\{\alpha_{j-1}^u + b_{j-1},\ \alpha_{j-1}^d\}.$$
This is because of three observations: (1) the values $\alpha_i^{\{u,d\}}$ for $i \in [1, j-2]$ are the same for the node sets $N_1$ and $N$, (2) for the arc set $C_1$ the set $S_{j-1}^+$ now includes the backward path arc $(j, j-1)$, and (3) node $j-1$ is the last node of the first path. Similarly, for node set $N_2$ and the arc set $C_2$, the value of the minimum cut is
$$v_2(C_2) = \min\{\beta_j^u + u_{j-1},\ \beta_j^d\}.$$
This is because (1) the values $\beta_i^{\{u,d\}}$ for $i \in [j+1, n]$ are the same for the node sets $N_2$ and $N$, (2) for the arc set $C_2$ the set $S_j^+$ now includes the forward path arc $(j-1, j)$, and (3) node $j$ is the first node of the second path.
Now, if $\alpha_j^u = \alpha_{j-1}^d + u_{j-1} + c(S_j^+)$, then $\alpha_j^d = \alpha_{j-1}^d + d_j + c(S_j^-)$ from the equations in (7)–(8). Then, rewriting $v(C) = m_j$ and $v_1(C_1)$ in terms of $\alpha_{j-1}^d$:
$$v(C) = \alpha_{j-1}^d + \min\{\beta_j^u + u_{j-1},\ \beta_j^d\} \quad \text{and} \quad v_1(C_1) = \alpha_{j-1}^d.$$
As a result, $v_1(C_1) + v_2(C_2) = \alpha_{j-1}^d + \min\{\beta_j^u + u_{j-1},\ \beta_j^d\} = v(C)$ under the assumption on the value of $\alpha_j^u$.
Similarly, if $\beta_{j-1}^u = \beta_j^d + b_{j-1} + c(S_{j-1}^+)$, then $\beta_{j-1}^d = \beta_j^d + d_{j-1} + c(S_{j-1}^-)$ from the equations in (9)–(10). Then, rewriting $v(C) = m_{j-1}$ and $v_2(C_2)$ in terms of $\beta_j^d$:
$$v(C) = \beta_j^d + \min\{\alpha_{j-1}^u + b_{j-1},\ \alpha_{j-1}^d\} \quad \text{and} \quad v_2(C_2) = \beta_j^d.$$
As a result, $v_1(C_1) + v_2(C_2) = \beta_j^d + \min\{\alpha_{j-1}^u + b_{j-1},\ \alpha_{j-1}^d\} = v(C)$ under the assumption on the value of $\beta_{j-1}^u$.
B.2. Proof of Lemma 11. The proof closely follows that of Lemma 10. Let $C = S^+ \cup L^-$, $C_1 = S^+_{N_1} \cup L^-_{N_1}$ and $C_2 = S^+_{N_2} \cup L^-_{N_2}$. For node set $N_1$ and the arc set $C_1$, the value of the minimum cut is
$$v_1(C_1) = \min\{\alpha_{j-1}^u,\ \alpha_{j-1}^d + u_{j-1}\},$$
where $u_{j-1}$ is added because $c(S^-_{N_1}) = c(S^-_{1,j-1}) + u_{j-1}$. Similarly, for node set $N_2$ and the arc set $C_2$, the value of the minimum cut is
$$v_2(C_2) = \min\{\beta_j^u,\ \beta_j^d + b_{j-1}\},$$
where $b_{j-1}$ is added because $c(S^-_{N_2}) = c(S^-_{j,n}) + b_{j-1}$.
Now, if $\alpha_j^d = \alpha_{j-1}^u + b_{j-1} + d_j + c(S_j^-)$, then $\alpha_j^u = \alpha_{j-1}^u + c(S_j^+)$ from the equations in (7)–(8). Then, rewriting $v(C) = m_j$ and $v_1(C_1)$ in terms of $\alpha_{j-1}^u$:
$$v(C) = \alpha_{j-1}^u + \min\{\beta_j^u,\ \beta_j^d + b_{j-1}\} \quad \text{and} \quad v_1(C_1) = \alpha_{j-1}^u.$$
As a result, $v_1(C_1) + v_2(C_2) = v(C)$ under the assumption on the value of $\alpha_j^d$.
Similarly, if $\beta_{j-1}^d = \beta_j^u + u_{j-1} + d_{j-1} + c(S_{j-1}^-)$, then $\beta_{j-1}^u = \beta_j^u + c(S_{j-1}^+)$ from the equations in (9)–(10). Then, rewriting $v(C) = m_{j-1}$ and $v_2(C_2)$ in terms of $\beta_j^u$:
$$v(C) = \beta_j^u + \min\{\alpha_{j-1}^u,\ \alpha_{j-1}^d + u_{j-1}\} \quad \text{and} \quad v_2(C_2) = \beta_j^u.$$
As a result, $v_1(C_1) + v_2(C_2) = v(C)$ under the assumption on the value of $\beta_{j-1}^d$.
B.3. Proof of Lemma 12. The proof closely follows those of Lemmas 10 and 11. Let $C = S^+ \cup L^-$, $C_1 = S^+_{N_1} \cup L^-_{N_1}$ and $C_2 = S^+_{N_2} \cup L^-_{N_2}$. For node set $N_1$ and the arc set $C_1$, the value of the minimum cut is
$$v_1(C_1) = \min\{\alpha_{j-1}^u,\ \alpha_{j-1}^d\}$$
and for node set $N_2$ and the arc set $C_2$, the value of the minimum cut is
$$v_2(C_2) = \min\{\beta_j^u + u_{j-1},\ \beta_j^d + b_{j-1}\}.$$
Now, if $\alpha_j^u = \alpha_{j-1}^d + u_{j-1} + c(S_j^+)$ and $\beta_{j-1}^d = \beta_j^u + u_{j-1} + d_{j-1} + c(S_{j-1}^-)$, then $\alpha_j^d = \alpha_{j-1}^d + d_j + c(S_j^-)$ and $\beta_{j-1}^u = \beta_j^u + c(S_{j-1}^+)$. Then, rewriting $v(C) = m_j$, $v_1(C_1)$ and $v_2(C_2)$:
$$v(C) = \alpha_{j-1}^d + \min\{u_{j-1} + \beta_j^u,\ \beta_j^d\} = \alpha_{j-1}^d + u_{j-1} + \beta_j^u,$$
$$v_1(C_1) = \alpha_{j-1}^d \quad \text{and} \quad v_2(C_2) = \beta_j^u + u_{j-1}.$$
As a result, $v_1(C_1) + v_2(C_2) = v(C)$ under the assumptions on the values of $\alpha_j^u$ and $\beta_{j-1}^d$.
B.4. Proof of Lemma 13. The proof closely follows those of Lemmas 10 and 11. Let $C = S^+ \cup L^-$, $C_1 = S^+_{N_1} \cup L^-_{N_1}$ and $C_2 = S^+_{N_2} \cup L^-_{N_2}$. For node set $N_1$ and the arc set $C_1$, the value of the minimum cut is
$$v_1(C_1) = \min\{\alpha_{j-1}^u + b_{j-1},\ \alpha_{j-1}^d + u_{j-1}\}$$
and for node set $N_2$ and the arc set $C_2$, the value of the minimum cut is
$$v_2(C_2) = \min\{\beta_j^u,\ \beta_j^d\}.$$
Now, if $\alpha_j^d = \alpha_{j-1}^u + b_{j-1} + d_j + c(S_j^-)$ and $\beta_{j-1}^u = \beta_j^d + b_{j-1} + c(S_{j-1}^+)$, then $\alpha_j^u = \alpha_{j-1}^u + c(S_j^+)$ and $\beta_{j-1}^d = \beta_j^d + d_{j-1} + c(S_{j-1}^-)$. Then, rewriting $v(C) = m_j$, $v_1(C_1)$ and $v_2(C_2)$:
$$v(C) = \alpha_{j-1}^u + \min\{\beta_j^u,\ \beta_j^d + b_{j-1}\} = \alpha_{j-1}^u + \beta_j^d + b_{j-1},$$
$$v_1(C_1) = \alpha_{j-1}^u + b_{j-1} \quad \text{and} \quad v_2(C_2) = \beta_j^d.$$
As a result, $v_1(C_1) + v_2(C_2) = v(C)$ under the assumptions on the values of $\alpha_j^d$ and $\beta_{j-1}^u$.
NEAREST NEIGHBOUR RADIAL BASIS FUNCTION SOLVERS FOR DEEP NEURAL NETWORKS
Benjamin J. Meyer, Ben Harwood, Tom Drummond
ARC Centre of Excellence for Robotic Vision, Monash University
{benjamin.meyer,ben.harwood,tom.drummond}@monash.edu
arXiv:1705.09780v2 [] 29 Oct 2017
ABSTRACT
We present a radial basis function solver for convolutional neural networks that
can be directly applied to both distance metric learning and classification problems.
Our method treats all training features from a deep neural network as radial basis
function centres and computes loss by summing the influence of a feature’s nearby
centres in the embedding space. Having a radial basis function centred on each
training feature is made scalable by treating it as an approximate nearest neighbour
search problem. End-to-end learning of the network and solver is carried out,
mapping high dimensional features into clusters of the same class. This results in a
well formed embedding space, where semantically related instances are likely to
be located near one another, regardless of whether or not the network was trained
on those classes. The same loss function is used for both the metric learning and
classification problems. We show that our radial basis function solver outperforms
state-of-the-art embedding approaches on the Stanford Cars196 and CUB-200-2011 datasets. Additionally, we show that when used as a classifier, our method
outperforms a conventional softmax classifier on the CUB-200-2011, Stanford
Cars196, Oxford 102 Flowers and Leafsnap fine-grained classification datasets.
1 INTRODUCTION
The solver of a neural network is vital to its performance, as it defines the objective and drives the
learning. We define a solver as the layers of the network that are aware of the class labels of the
data. In the domain of image classification, a softmax solver is conventionally used to transform
activations into a distribution across class labels (Krizhevsky et al., 2012; Simonyan and Zisserman,
2014; Szegedy et al., 2015; He et al., 2016). While in the domain of distance metric learning, a
Siamese (Chopra et al., 2005) or triplet (Hoffer and Ailon, 2015; Schroff et al., 2015; Kumar et al.,
2017) solver, with contrastive or hinge loss, is commonly used to pull embeddings of the same class
together and push embeddings of different classes apart. The two tasks of classification and metric
learning are related but distinct. Conventional classification learning is generally used when the
objective is to associate data with a pre-defined set of classes and there is sufficient data to train or
fine-tune a network to do so. Distance metric learning, or embedding space building, aims to learn
an embedding space where samples with similar semantic meaning are located near one another.
Applications for learning such effective embeddings include transfer learning, retrieval, clustering
and weakly supervised or self-supervised learning.
In this paper, we present a deep neural network solver that can be applied to both embedding space
building and classification problems. The solver defines training features in the embedding space as
radial basis function (RBF) centres, which are used to push or pull features in a local neighbourhood,
depending on the labels of the associated training samples. The same loss function is used for both
classification and metric learning problems. This means that a network trained for the classification
task results in feature embeddings of the same class being located near one another and similarly,
a network trained for metric learning results in embeddings that can be well classified by our RBF
solver. Fast approximate nearest neighbour search is used to provide an efficient and scalable solution.
The best success on embedding building tasks has been achieved by deep metric learning methods
(Hoffer and Ailon, 2015; Schroff et al., 2015; Song et al., 2016b; Sohn, 2016; Kumar et al., 2017),
which make use of deep neural networks. Such approaches may indiscriminately pull samples of
the same class together, regardless of whether the two samples were already within well defined
local clusters of like samples. These methods aim to form a single cluster per class. In contrast,
our approach pushes a feature around the embedding space based only on the local neighbourhood
of that feature. This means that the current structure of the space is considered, allowing multiple
clusters to form for a single class, if that is appropriate. Our radial basis function solver is able to
learn embeddings that result in samples of similar semantic meaning being located near one another.
Our experiments show that the RBF solver is able to do this better than existing deep metric learning
methods.
Softmax solvers have been a mainstay of the standard classification problem (Krizhevsky et al.,
2012; Simonyan and Zisserman, 2014; Szegedy et al., 2015; He et al., 2016). Such an approach is
inefficient as classes must be axis-aligned and the number of classes is baked into the network. Our
RBF approach is free to position clusters such that the intrinsic structure of the data can be better
represented. This may involve multiple clusters forming for a single class. The nearest neighbour
RBF solver outperforms conventional softmax solvers in our experiments and provides additional
adaptability and flexibility, as new classes can be added to the problem with no updates to the network
weights required to obtain reasonable results. This performance improvement is obtained despite
smaller model capacity. The RBF solver by its very nature is a classifier, but learns the classification
problem in the exact same way it learns the embedding space building problem.
The main advantages of our novel radial basis function solver for neural networks can be summarised
as follows:
• Our solver can be directly applied to two previously separate problems; classification and
embedding space learning.
• End-to-end learning can be made scalable by leveraging fast approximate nearest neighbour
search (as seen in Section 3.2).
• Our approach outperforms current state-of-the-art deep metric learning algorithms on the
Stanford Cars196 and CUB-200-2011 datasets (as seen in Section 4.1).
• Finally, our radial basis function classifier outperforms a conventional softmax classifier
on the fine-grained classification datasets CUB-200-2011, Stanford Cars196, Oxford 102
Flowers and Leafsnap (as seen in Section 4.2).
2 RELATED WORK
Radial Basis Functions in Neural Networks Radial basis function networks were introduced by
Broomhead and Lowe (1988). The networks formulate activation functions as RBFs, resulting in
an output that is a sum of radial basis function values between the input and network parameters.
In contrast to these radial basis function networks, our approach uses RBFs in the solver of a deep
convolutional neural network and our radial basis function centres are coupled to high dimensional
embeddings of training samples, rather than being network parameters. Radial basis functions have
been used as neural network solvers in the form of support vector machines. In one such formulation,
a neural network is used as a fixed feature extractor and separate support vector machines are trained
to classify the features (Razavian et al., 2014; Donahue et al., 2014). No joint training occurs between
the solver (classifier) and network. Such an approach is often used for transfer learning, where the
network is trained on vast amounts of data and the support vector machines are trained for problems
in which labelled training data is scarce. Tang (2013) replaces the typical softmax classifier with
linear support vector machines. In this case, the solver and network are trained jointly, meaning the
loss that is minimised is margin based.
Metric Learning Early methods in the domain of metric learning include those that use Siamese
networks (Bromley et al., 1993) and contrastive loss (Hadsell et al., 2006; Chopra et al., 2005).
The objective of these approaches is to pull pairwise samples of the same class together and push
pairwise samples of different classes apart. Such methods work on absolute distances, while triplet
networks with hinge loss (Weinberger et al., 2006) work on relative distance. Triplet loss approaches
take a trio of inputs; an anchor, a positive sample of the same class as the anchor and a negative
sample of a different class. Triplet loss aims to pull the positive sample closer to the anchor than the
negative sample. Several deep metric learning approaches make use of, or generalise deep triplet
Figure 1: Overview of our radial basis function solver.
neural networks (Hoffer and Ailon, 2015; Wang et al., 2014; Schroff et al., 2015; Song et al., 2016b;
Sohn, 2016; Kumar et al., 2017). Schroff et al. (2015) perform semi-hard mining within a mini-batch,
while Song et al. (2016b) propose a lifted structured embedding with efficient computation of the
full distance matrix within a mini-batch. This allows comparisons between all positive and negative
pairs in the batch. Similarly, Sohn (2016) proposes an approach that allows multiple intra-batch
distance comparisons, but optimises a generalisation of triplet loss, named N-pair loss, rather than a
max-margin based objective, as in Song et al. (2016b). The global embedding structure is considered
in Song et al. (2016a) by directly minimising a global clustering metric, while a combination of
global and triplet loss is shown to be beneficial in Kumar et al. (2016). Finally, Kumar et al. (2017)
introduce a smart mining technique that mines for triplets over the entire dataset. A Fast Approximate
Nearest Neighbour Graph (FANNG) (Harwood and Drummond, 2016) is leveraged for computational
efficiency. Beyond triplet loss, Rippel et al. (2016) introduce a loss function that allows multiple
clusters to form per class. Rather than only penalising a single triplet at a time, the neighbourhood
densities are considered and overlaps between classes penalised.
3 RADIAL BASIS FUNCTION SOLVERS
A radial basis function returns a value that depends only on the distance between two points, one of
which is commonly referred to as a centre. Although several radial basis functions exist, in this paper
we use RBF to refer to a Gaussian radial basis function, which returns a value based on the Euclidean
distance between a point x and the RBF centre c. The radial basis function, f , is calculated as:
$$f(x, c) = \exp\left(\frac{-\lVert x - c \rVert^2}{2\sigma^2}\right) \qquad (1)$$
where σ is the standard deviation, which controls the width of the Gaussian curve, that is, the region around the RBF centre deemed to be of importance.
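The Gaussian RBF of Equation 1 can be sketched directly in a few lines of Python; the helper name is ours, not from the paper's code:

```python
import math

def gaussian_rbf(x, c, sigma):
    """Gaussian radial basis function of Eq. (1).
    x and c are equal-length vectors; sigma controls the window width."""
    # Squared Euclidean distance between the feature x and the centre c
    sq_dist = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return math.exp(-sq_dist / (2 * sigma ** 2))
```

The value is 1 when the feature coincides with the centre and decays towards 0 as the feature moves outside the Gaussian window.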
In the context of our neural network solver, we define the deep feature embeddings of each training
set sample as radial basis function centres. Specifically, we take the layer in a network immediately
before the solver as the embedding layer. For example, in a VGG architecture, this may be FC7 (fully
connected layer 7), forming a 4096 dimension embedding. In general, however, the embedding may
be of any size. An overview of this approach is seen in Figure 1.
3.1 CLASSIFIER AND LOSS FUNCTION
A radial basis function classifier can be formed by the weighted sum of the RBF distance calculations
between a sample feature embedding and the centres. Classification of a sample is achieved by
passing the input through the network, resulting in a feature embedding in the same space as the RBF
centres. A probability distribution over class labels is found by summing the influence of each centre
and normalising. A centre contributes only to the probability of the ground truth label of the training
sample coupled to that centre. For example, the probability that the feature embedding x has class
label Q is:
$$\Pr(x \in \text{class } Q) = \frac{\sum_{i \in Q} w_i f(x, c_i)}{\sum_{j=1}^{m} w_j f(x, c_j)}, \qquad (2)$$
where f is the RBF, i ∈ Q are the centres with label Q, m is the number of training samples and
wi is a learnable weight for RBF centre i. Of course, if a sample is in the training set and has a
corresponding RBF centre, the distance calculation to itself is omitted during the computation of the
classification distribution, the loss function and the derivatives.
The loss function used for optimisation is simply the summed negative logarithm of the probabilities
of the true class labels. For example, the loss L for sample x with ground truth label R is:
$$L(x) = -\ln\big(\Pr(x \in \text{class } R)\big). \qquad (3)$$
The same loss function is used regardless of whether the network is being trained for classification,
as above, or for embedding space building (distance metric learning). This is possible since the RBF
classifier is directly computed from distances between features in the embedding space. This means
that a network trained for classification will result in features of the same class being located near
one another, and similarly a network trained for metric learning will result in an embedding space in
which features can be well classified using RBFs.
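As an illustrative sketch of Equations 2 and 3, a brute-force NumPy version follows; the function names and the use of `np.bincount` to accumulate per-class contributions are our assumptions, not the paper's implementation:

```python
import numpy as np

def rbf_class_probs(x, centres, labels, weights, sigma, n_classes):
    """Eq. (2): class distribution from weighted Gaussian RBF sums."""
    # Squared distances from the query embedding to every RBF centre
    d2 = np.sum((centres - x) ** 2, axis=1)
    vals = weights * np.exp(-d2 / (2 * sigma ** 2))
    # Each centre contributes only to the class of its own training sample
    probs = np.bincount(labels, weights=vals, minlength=n_classes)
    return probs / probs.sum()

def rbf_loss(x, centres, labels, weights, sigma, n_classes, true_label):
    """Eq. (3): negative log-probability of the ground-truth label."""
    p = rbf_class_probs(x, centres, labels, weights, sigma, n_classes)
    return -np.log(p[true_label])
```

A query embedding lying inside a cluster of same-class centres receives a probability near 1 for that class and hence a loss near 0.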
3.2 NEAREST NEIGHBOUR RBF SOLVER
In Equation 2, the distribution is calculated by summing over all RBF centres. However, since these
centres are attached to training samples, of which there could be any large number, computing that
sum is both intractable and unnecessary. The majority of RBF values for a given feature embedding
will be effectively zero, as the sample feature will lie only within a subset of the RBF centres’
Gaussian windows. As such, only the local neighbourhood around a feature embedding should be
considered. Operating on the set of the nearest RBF centres to a feature ensures that most of the
distance values computed are pertinent to the loss calculation. The classifier equation becomes:
$$\Pr(x \in \text{class } Q) = \frac{\sum_{i \in Q \cap N} w_i f(x, c_i)}{\sum_{j \in N} w_j f(x, c_j)}, \qquad (4)$$
where N is the set of approximate nearest neighbours for sample x and therefore i ∈ Q ∩ N is the
set of approximate nearest neighbours that have label Q. Again, we note that training set samples
exclude their own RBF centre from their nearest neighbour list.
In the interest of providing a scalable solution, we use approximate nearest neighbour search to obtain
candidate nearest neighbour lists. This allows for a trade off between precision and computational
efficiency. Specifically, we use a Fast Approximate Nearest Neighbour Graph (FANNG) (Harwood
and Drummond, 2016), as it provides the most efficiency when needing a high probability of finding
the true nearest neighbours of a query point. Importantly, FANNG provides scalability in terms of the
number of dimensions and the number of training samples.
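A minimal sketch of Equation 4, with a brute-force neighbour search standing in for FANNG (names are illustrative, and a real deployment would use an approximate index):

```python
import numpy as np

def knn_rbf_probs(x, centres, labels, weights, sigma, n_classes, k):
    """Eq. (4): restrict the RBF sums to the k nearest centres."""
    d2 = np.sum((centres - x) ** 2, axis=1)
    nn = np.argsort(d2)[:k]  # brute-force stand-in for approximate NN search
    vals = weights[nn] * np.exp(-d2[nn] / (2 * sigma ** 2))
    # Sum contributions per class label over the neighbourhood only
    probs = np.bincount(labels[nn], weights=vals, minlength=n_classes)
    return probs / probs.sum()
```

Centres outside the neighbourhood contribute exactly zero here, which is a reasonable surrogate since their Gaussian values are vanishingly small anyway.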
3.3 END-TO-END LEARNING
The network and solver weights are learned end-to-end. As the weights are constantly being updated
during training, the locations of the RBF centres are changing. This leads to complications in the
computation of the derivatives of the loss with respect to the embeddings. This calculation requires
dimension by dimension differences between the training embeddings and the RBF centres. The
centres are moving as the network is being updated, but computing the current RBF centre locations
online is intractable. For example, if considering 100 nearest neighbours, 101 samples would need to
be forward propagated through the network for each training sample. However, we find that it is not
necessary for the RBF centres to be up to date at all times in order for the model to converge. A bank
of the RBF centres is stored and updated at a fixed interval.
A further consequence of the RBF centres moving during training is that the nearest neighbours also
change. It is intractable to find the correct nearest neighbours each time the weights are updated. This
is simply remedied by considering a larger number of nearest neighbours than would be required if
all centres and neighbour lists were up-to-date at all times. The embedding space changes slowly
enough that it is highly likely many of the previously neighbouring RBF centres will remain relevant.
Since the Gaussian RBF decays to zero as the distance between the points becomes large, it does not
matter if an RBF centre that is no longer near the sample remains a candidate nearest neighbour.
We call the frequency at which the RBF centres are updated and the nearest neighbours found the
update interval. During training, at a fixed number of epochs we forward pass the entire training set
through the network, storing the new RBF centres. The up-to-date nearest neighbours can now be
found. If FANNG is used, a rebuild of the graph is required. Note that the stored RBF centres do not
have dropout (Srivastava et al., 2014) applied, but the current training embeddings may. The effects of
the number of nearest neighbours considered and the update interval are discussed in Section 4.2.
Radial Basis Function Parameters A global standard deviation parameter σ is shared amongst
the RBFs. This ensures that the assumption made about samples only being influenced by their
nearest RBF centres holds. Although the parameter is learnable, we find that fixing the standard
deviation value before training is a suitable approach. We treat the standard deviation as an additional
hyperparameter to tune, however it can also be learned independently before full network training
commences. As seen in Equation 4, each RBF centre has a weight, which is learned end-to-end with
the network weights. These weights are initialised at values of one. Note that in our experiments we
only tune the RBF weights for the classification task; they remain fixed for metric learning problems.
4 EXPERIMENTS
We detail our experimental results in two tasks; distance metric learning and image classification.
4.1 DISTANCE METRIC LEARNING
Experimental Set-up We evaluate our approach on two datasets; Stanford Cars196 (Krause et al.,
2013) and CUB-200-2011 (Birds200) (Welinder et al., 2010). Cars196 consists of 16,185 images
of 196 different car makes and models, while Birds200 consists of 11,788 images of 200 different
bird species. In this problem, the network is trained and evaluated on different sets of classes. We
follow the experimental set-up used in Song et al. (2016b); Sohn (2016); Song et al. (2016a); Kumar
et al. (2017). For the Cars196 dataset, we train the network on the first 98 classes and evaluate on the
remaining 98. For the Birds200 dataset we train on the first 100 classes and evaluate on the remaining
100. Stochastic gradient descent optimisation is used. All images are first resized to be 256x256 and
data is augmented by random cropping and horizontal mirroring. Note that we do not crop the images
using the provided bounding boxes.
Our method is compared to state-of-the-art approaches on the considered datasets; semi-hard mining
for triplet networks (Schroff et al., 2015), lifted structured feature embedding (Song et al., 2016b),
N-pair loss (Sohn, 2016), clustering (Song et al., 2016a), global loss with triplet networks (Kumar
et al., 2016) and smart mining for triplet networks (Kumar et al., 2017). For fair comparison to these
methods, we use the same base architecture for our experiments; GoogLeNet (Szegedy et al., 2015).
Network weights are initialised from ImageNet (Russakovsky et al., 2015) pre-trained weights. We
use 100 nearest neighbours and an update interval of 10 epochs. RBF weights are fixed at a value of
one for this task. We train for 50 epochs on Cars196 and 30 epochs on Birds200. A batch size of 20,
base learning rate of 0.00001 and weight decay of 0.0002 are used. The RBF standard deviation used
depends on the size of the embedding dimension. We find values between 10 and 30 work well for this
task.
Evaluation Metrics Following Song et al. (2016b), we evaluate the embedding space using two
metrics; Normalised Mutual Information (NMI) (Manning et al., 2008) and Recall@K. The NMI
score is the ratio of mutual information and average entropy of a set of clusters and labels. It evaluates
only for the number of clusters equal to the number of classes. As discussed in Section 1, a good
embedding space does not necessarily have only one cluster per class, but may have multiple well
formed clusters in the space. This means that our mutual information may be higher than reported
with this metric. Nevertheless, we present results on the NMI score in the interest of comparing
to existing methods that evaluate on this metric. The Recall@K (R@K) metric is better suited for
evaluating an embedding space. A true positive is defined as a sample that has at least one of its true
nearest K neighbours in the embedding space with the same class as itself.
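The Recall@K definition above can be sketched directly; the helper below is our own minimal implementation, assuming raw embedding vectors and integer class labels.

```python
import numpy as np

def recall_at_k(embeddings, labels, k):
    """Recall@K: fraction of samples having at least one neighbour of the
    same class among their K nearest neighbours (query itself excluded)."""
    emb = np.asarray(embeddings, dtype=float)
    labels = np.asarray(labels)
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)          # a sample is not its own neighbour
    hits = 0
    for i in range(len(labels)):
        knn = np.argsort(dists[i])[:k]
        hits += bool(np.any(labels[knn] == labels[i]))
    return hits / len(labels)

# toy usage: two tight clusters, one per class
emb = [[0, 0], [0.1, 0], [5, 5], [5.1, 5]]
lab = [0, 0, 1, 1]
r1 = recall_at_k(emb, lab, k=1)
```

Note the metric rewards any same-class neighbour, so multiple well-formed clusters per class are not penalised, unlike NMI with a fixed cluster count.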
Embedding Space Dimension We investigate the importance of the embedding dimension. A
similar study in Song et al. (2016b) suggests that the number of dimensions is not important for
triplet networks, in fact, increasing the number of dimensions can be detrimental to performance.
We compare our method with increasing dimension size against triplet loss (Weinberger et al., 2006;
Figure 2: Effect of embedding size on NMI score on the test set of Cars196 (left) and Birds200
(right). The NMI of our RBF approach improves with increasing embedding size, while performance
degrades or oscillates for triplet (Weinberger et al., 2006; Schroff et al., 2015) and lifted structured
embedding (Song et al., 2016b).
Figure 3: Recall of our RBF solver at 1, 2, 4 and 8 nearest neighbours on the test set of Cars196 (left)
and Birds200 (right). Recall performance of our approach increases with embedding size.
Schroff et al., 2015) and lifted structured embedding (Song et al., 2016b), both taken from the
study in Song et al. (2016b). Figure 2 shows the effect of the embedding size on NMI score. It’s
clear that while increasing the number of dimensions does not necessarily improve performance for
triplet-based networks, the dimensionality is important for our RBF approach. The NMI score for our
approach improves with increasing numbers of dimensions. Similar behaviour is seen in Figure 3,
which shows the Recall@K metric for our RBF method with varying numbers of dimensions. Again,
this shows that the dimensionality is an important factor for our approach.
Comparison of Results Our approach is compared to the state-of-the-art in Table 1, with the
compared results taken from Song et al. (2016a) and Kumar et al. (2017). Since, as discussed above,
the number of embedding dimensions does not have much impact on the other approaches, all results
in Song et al. (2016a) and Kumar et al. (2017) are reported using 64 dimensions. For fair comparison,
we report our results at 64 dimensions, but also at the better performing higher dimensions. Our
approach outperforms the other methods in both the NMI and Recall@K measures, at all embedding
sizes presented. Our approach is able to produce better compact embeddings than existing methods,
but can also take advantage of a larger embedding space. Figure 4 shows a t-SNE (van der Maaten
and Hinton, 2008) visualisation of the Birds200 test set embedding space. Despite the test classes
being withheld during training, bird species are well clustered.
Table 1: Embedding results on Cars196 and Birds200. The test set is comprised of classes on which
the network was not trained. Our approach is compared with state-of-the-art approaches; Semi-hard
(Schroff et al., 2015), LiftStruct (Song et al., 2016b), N-pairs (Sohn, 2016), Triplet/Gbl (Kumar et al.,
2016), Clustering (Song et al., 2016a) and SmartMine (Kumar et al., 2017).
                    ---------- Cars196 Dataset ----------   --------- Birds200 Dataset ----------
Method       Dims   NMI    R@1    R@2    R@4    R@8          NMI    R@1    R@2    R@4    R@8
Semi-hard      64   53.35  51.54  63.78  73.52  82.41        55.38  42.59  55.03  66.44  77.23
LiftStruct     64   56.88  52.98  65.70  76.01  84.27        56.50  43.57  56.55  68.59  79.63
N-pairs        64   57.79  53.90  66.76  77.75  86.35        57.24  45.37  58.41  69.51  79.49
Triplet/Gbl    64   58.20  61.41  72.51  81.75  88.39        58.61  49.04  60.97  72.33  81.85
Clustering     64   59.04  58.11  70.64  80.27  87.81        59.23  48.18  61.44  71.83  81.92
SmartMine      64   59.50  64.65  76.20  84.23  90.19        59.90  49.78  62.34  74.05  83.31
RBF (Ours)     64   62.15  71.05  80.74  88.06  92.79        61.26  51.15  64.64  75.57  84.72
RBF (Ours)    128   63.35  73.52  83.37  89.80  93.76        61.72  52.08  64.69  76.05  84.86
RBF (Ours)    256   63.76  77.35  85.49  91.10  94.81        62.18  54.74  67.18  77.53  86.09
RBF (Ours)    512   64.68  78.39  86.91  92.06  95.52        63.50  55.91  68.26  78.63  86.38
RBF (Ours)   1024   65.30  79.65  87.33  92.36  95.65        63.95  57.22  68.75  79.12  87.14
Figure 4: Visualisation of the Birds200 test set embedding space, using the t-SNE algorithm (van der
Maaten and Hinton, 2008). Despite not being trained on the test classes, bird species are well
clustered. Best viewed in colour and zoomed in on a monitor.
4.2 IMAGE CLASSIFICATION
Experimental Set-up We evaluate our solver in the domain of image classification, comparing
performance with conventional softmax loss. For all experiments, images are resized to 256x256 and
random cropping and horizontal mirroring is used for data augmentation. Unlike in Section 4.1, we
crop Birds200 and Cars196 images using the provided bounding boxes before resizing. The same
Figure 5: Effect of the number of training samples per class on the test set accuracy of Birds200,
using a VGG16 architecture. Note that the final data point in the plot refers to the entire training set;
while most classes have 24 training samples per class, some have only 23.
Table 2: Birds200 test set accuracy.
Solver        AlexNet   VGG16   ResNet50
Softmax         62.41   75.37      78.05
RBF (Ours)      66.95   78.63      78.98
78.98
classes are used for training and testing. All datasets are split into training, validation and test sets.
We select softmax and RBF hyperparameters that minimise the validation loss. The FC7 layer (4096
dimensions), with dropout and without a ReLU, is used as the embedding layer for our RBF solver
when using a VGG (Simonyan and Zisserman, 2014) or AlexNet (Krizhevsky et al., 2012) architecture.
For a ResNet architecture (He et al., 2016), we use the final pooling layer (2048 dimensions). We find
that following the ResNet embedding layer with a dropout layer results in a small performance gain
for both RBF and softmax solvers. A batch size of 20 is used and an update interval of 10 epochs,
unless otherwise noted. We use stochastic gradient descent optimisation. In general, we find a base
learning rate of 0.00001 to be appropriate for our approach. A standard deviation of around 100 for
the RBFs is found to be suitable for the 4096 dimension VGG16 embeddings on Birds200. Networks
are initialised with ImageNet (Russakovsky et al., 2015) pre-trained weights.
Evaluation on Birds200 We carry out detailed evaluation of our approach on the Birds200 dataset.
Since there is no standard validation set for this dataset, we take 20% of the training data as validation
data. In Table 2, we evaluate with three network architectures; AlexNet (Krizhevsky et al., 2012),
VGG16 (Simonyan and Zisserman, 2014) and ResNet50 (He et al., 2016). Our approach outperforms
the softmax counterpart for each network. The performance gain over softmax is larger for AlexNet
and VGG than for ResNet. This is likely because ResNet has significantly more non-linear activation
function layers, meaning there is less improvement seen when using the highly non-linear RBF solver.
The effect of the number of training samples per class is shown in Figure 5. Our RBF approach
outperforms softmax loss at all numbers of training images, with a particularly large gain when
training data is scarce.
Results from ablation experiments on our RBF approach are shown in Table 3. The importance of
the following components of learning are shown; tuning the RBF standard deviation σ, learning the
RBF weights and fine-tuning the network weights. Figure 6a shows the impact of the number of
nearest neighbours used for each sample during training. There is a clear lower bound required for
good performance. As discussed in Section 3.3, this is because the network weights are constantly
Table 3: Ablation study on Birds200.
Initial Network   Tune   Learn RBF   Fine-tune Network   Test
Weights           σ      Weights     Weights             Accuracy
Random            Yes    No          No                   1.35
ImageNet          Yes    No          No                  47.32
ImageNet          Yes    Yes         No                  49.22
ImageNet          Yes    No          Yes                 77.94
ImageNet          Yes    Yes         Yes                 78.63
Figure 6: (a) The effect of the number of nearest neighbours considered during training. (b) The
average distance from training samples to their nearest RBF centres. (c) The average RBF value
between training samples and their nearest RBF centres.
being updated, but the stored RBF centres are not. As such, we need to consider a larger number of
neighbours than if the centres were always up-to-date. Figure 6b shows the average distance from
each training sample to its nearest RBF centres at different points during training. Similarly, Figure
6c shows the average radial basis function values between training samples and their nearest centres.
These experiments use a VGG16 architecture.
When training with softmax loss on a VGG16 architecture, validation loss plateaus at around 7000
iterations. For our RBF solver, the number of iterations taken for validation loss to stop improving
depends on the update interval, that is, the interval at which the RBF centres are updated and the
nearest neighbours computed. For update intervals of 1, 5 and 10, validation loss stops improving
at around 8500, 12000 and 15000 iterations, respectively. Since nearest neighbour search becomes
the bottleneck as the dataset size increases, a less frequent update interval should be used for large
datasets, allowing for a faster overall training time. The softmax solver is able to converge in fewer
iterations than our approach. This is likely due to the RBF centres not being up-to-date at all times,
leading to weight updates that are less effective than in the ideal scenario. However, as discussed in
Section 3.3, keeping the RBF centres up-to-date at all times is intractable.
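The interaction between update interval and training cost described above can be sketched with a toy loop. The model class and its methods are placeholders we introduce for illustration; in the real system each refresh embeds the whole dataset and runs a nearest-neighbour search.

```python
class DummyModel:
    """Stand-in that only counts work; a real model would embed samples,
    rebuild the RBF centre store and run a nearest-neighbour search."""
    def __init__(self):
        self.refreshes = 0
        self.steps = 0

    def refresh_centres(self, dataset, k):
        self.refreshes += 1   # expensive: embed dataset + k-NN search

    def step(self, batch):
        self.steps += 1       # cheap: one SGD step against stored (stale) centres

def train(model, dataset, epochs, update_interval, batches_per_epoch):
    """Refresh RBF centres and neighbour lists only every `update_interval`
    epochs; between refreshes the stored centres grow stale, which is why
    more neighbours per sample must be considered (Section 3.3)."""
    for epoch in range(epochs):
        if epoch % update_interval == 0:
            model.refresh_centres(dataset, k=100)
        for _ in range(batches_per_epoch):
            model.step(None)

m = DummyModel()
train(m, dataset=[], epochs=30, update_interval=10, batches_per_epoch=5)
```

Doubling the update interval halves the number of expensive refreshes at the cost of staler centres and slower convergence, which matches the iteration counts reported above.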
Our RBF approach allows clusters to position themselves freely in the embedding space, such that
the intrinsic structure of the data can be represented. As a result, we expect the embeddings to be
co-located based not only in terms of class, but also in terms of more fine-grained information, such
as attributes. We use the 312 binary attributes of Birds200 to confirm this expectation. For each
4096 dimension VGG16 test set embedding, we propagate attributes by computing the density of
each attribute label present in the neighbouring test embeddings. This is done using Gaussian radial
basis functions, treating each attribute as a binary classification problem. We find the best Gaussian
standard deviation for softmax and our RBF learned embeddings separately. A precision and recall
curve, shown in Figure 7, is generated by sweeping the classification discrimination threshold from
zero to one. We find that for a given precision, the RBF solver results in an embedding space with
better attribute recall than softmax. Note that we do not train the models using the attribute labels.
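The propagation step can be sketched as a kernel-weighted vote per attribute. The helper below is our own illustration for a single binary attribute with a fixed σ, not the authors' implementation.

```python
import numpy as np

def propagate_attribute(embeddings, attr, sigma):
    """Kernel-density estimate of a binary attribute at each embedding:
    the Gaussian-RBF-weighted fraction of neighbours carrying the attribute."""
    emb = np.asarray(embeddings, dtype=float)
    attr = np.asarray(attr, dtype=float)
    d2 = ((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(w, 0.0)                      # exclude the sample itself
    return (w @ attr) / np.maximum(w.sum(1), 1e-12)

# toy usage: attribute present only in the left-hand cluster
emb = [[0, 0], [0.2, 0], [6, 6], [6.2, 6]]
attr = [1, 1, 0, 0]
density = propagate_attribute(emb, attr, sigma=1.0)
pred = density > 0.5                              # one point on the threshold sweep
```

Sweeping the threshold from zero to one over `density` traces out the precision-recall curve described above.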
Other Datasets We further evaluate our approach on three other fine-grained classification datasets;
Oxford 102 Flowers (Nilsback and Zisserman, 2008), Stanford Cars196 (Krause et al., 2013) and
Figure 7: Attribute precision and recall on the 312 binary attributes of Birds200. The attributes
are propagated from neighbouring test embeddings and the curves are generated by sweeping the
classification discrimination threshold. The ideal standard deviation is found for the RBF and softmax
approaches separately. No training was carried out on the attribute labels.
Table 4: Test accuracy on fine-grained classification datasets.
Dataset              Softmax   RBF (Ours)
Oxford 102 Flowers     82.79        86.26
Stanford Cars196       85.67        86.52
Leafsnap Field         73.80        75.96
Leafsnap (Kumar et al., 2012). We use the standard training, validation and test splits for Oxford
102 Flowers. For Stanford Cars196, we take 30% of the training set as validation data. We use the
challenging field images from Leafsnap, which are taken in uncontrolled conditions. The dataset
contains 185 classes of leaf species and we split the data into 50%, 20% and 30% for training,
validation and testing, respectively. Again, hyperparameters are selected based on validation loss and
a VGG16 architecture is used. Results are shown in Table 4.
5 DISCUSSION AND CONCLUSION
Our approach is designed to address two problems; metric space learning and classification. The use
of RBFs arises very naturally in the context of the first problem because metric spaces are defined
and measured in terms of Euclidean distance. It is perhaps more surprising that the classification
problem also benefits from using a metric space kernel density approach, rather than softmax. This
appears to hold independently of the base network architecture (Table 2) and the improvement is
particularly strong when limited quantities of training data are available (Figure 5).
Metric learning inherently pulls samples together into high density regions of the embedding space,
whereas softmax is content to allow samples to fill a very large region of space, provided that the
logit dimension corresponding to the correct class is larger than the others. This suggests that metric
learning is able to provide some regularisation, because classification is driven by multiple nearby
samples, whereas samples may be well separated in logit space for softmax. In turn, this leads to
increased robustness for the metric space approach, particularly when training data is impoverished.
Additionally, softmax is constrained to push samples into regions of space determined by the locations
of the logit axes, whereas our metric learning approach is free to position clusters in a way that may
more naturally reflect the intrinsic structure of the data. Finally, our approach is also free to create
multiple clusters for each class, if this is appropriate. As a result of these factors, our RBF solver
is able to outperform state-of-the-art approaches in the metric learning problem, as well as provide
benefit over softmax in the classification problem.
ACKNOWLEDGMENTS
This research was supported by the Australian Research Council Centre of Excellence for Robotic
Vision (project number CE140100016).
REFERENCES
J. Bromley, I. Guyon, Y. Lecun, E. Sackinger, and R. Shah. Signature verification using a Siamese
time delay neural network. In Advances in neural information processing systems (NIPS 1993),
1993.
D. S. Broomhead and D. Lowe. Radial basis functions, multi-variable functional interpolation and
adaptive networks. Technical report, DTIC Document, 1988.
S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application
to face verification. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern
Recognition (CVPR’05), volume 1, pages 539–546, 2005.
J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: A Deep
Convolutional Activation Feature for Generic Visual Recognition. In ICML, volume 32, pages
647–655, 2014.
R. Hadsell, S. Chopra, and Y. LeCun. Dimensionality Reduction by Learning an Invariant Mapping. In
2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06),
volume 2, pages 1735–1742, 2006.
B. Harwood and T. Drummond. FANNG: Fast Approximate Nearest Neighbour Graphs. In 2016
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5713–5722, 2016.
K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. In 2016 IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, 2016.
E. Hoffer and N. Ailon. Deep metric learning using triplet network. In International Workshop on
Similarity-Based Pattern Recognition, pages 84–92, 2015.
J. Krause, M. Stark, J. Deng, and L. Fei-Fei. 3d object representations for fine-grained categorization.
In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages
554–561, 2013.
A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet Classification with Deep Convolutional
Neural Networks. In Advances in Neural Information Processing Systems, pages 1097–1105. 2012.
N. Kumar, P. N. Belhumeur, A. Biswas, D. W. Jacobs, W. J. Kress, I. Lopez, and J. V. B. Soares.
Leafsnap: A Computer Vision System for Automatic Plant Species Identification. In The 12th
European Conference on Computer Vision (ECCV), 2012.
V. B. G. Kumar, G. Carneiro, and I. Reid. Learning Local Image Descriptors with Deep Siamese and
Triplet Convolutional Networks by Minimizing Global Loss Functions. In 2016 IEEE Conference
on Computer Vision and Pattern Recognition (CVPR), pages 5385–5394, 2016.
V. B. G. Kumar, B. Harwood, G. Carneiro, I. Reid, and T. Drummond. Smart Mining for Deep Metric
Learning. arXiv preprint arXiv:1704.01285, 2017.
C. D. Manning, P. Raghavan, and H. Schütze. Introduction to information retrieval, volume 1.
Cambridge university press Cambridge, 2008.
M.-E. Nilsback and A. Zisserman. Automated Flower Classification over a Large Number of Classes.
In Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing,
2008.
A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson. CNN Features Off-the-Shelf: An
Astounding Baseline for Recognition. In Proceedings of the 2014 IEEE Conference on Computer
Vision and Pattern Recognition Workshops, pages 512–519, 2014.
O. Rippel, M. Paluri, P. Dollar, and L. Bourdev. Metric learning with adaptive density discrimination.
International Conference on Learning Representations, 2016.
O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla,
M. Bernstein, A. C. Berg, and L. Fei-Fei. Imagenet large scale visual recognition challenge.
International Journal of Computer Vision, 115(3):211–252, 2015.
F. Schroff, D. Kalenichenko, and J. Philbin. FaceNet: A unified embedding for face recognition and
clustering. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages
815–823, 2015.
K. Simonyan and A. Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv preprint arXiv:1409.1556, 2014.
K. Sohn. Improved Deep Metric Learning with Multi-class N-pair Loss Objective. In Advances in
Neural Information Processing Systems 29, pages 1857–1865. 2016.
H. O. Song, S. Jegelka, V. Rathod, and K. Murphy. Learnable Structured Clustering Framework for
Deep Metric Learning. arXiv preprint arXiv:1612.01213, 2016a.
H. O. Song, Y. Xiang, S. Jegelka, and S. Savarese. Deep Metric Learning via Lifted Structured
Feature Embedding. In 2016 IEEE Conference on Computer Vision and Pattern Recognition
(CVPR), pages 4004–4012, 2016b.
N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simple
way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):
1929–1958, 2014.
C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and
A. Rabinovich. Going deeper with convolutions. In 2015 IEEE Conference on Computer Vision
and Pattern Recognition (CVPR), pages 1–9, 2015.
Y. Tang. Deep learning using linear support vector machines. arXiv preprint arXiv:1306.0239, 2013.
L. van der Maaten and G. Hinton. Visualizing data using t-SNE. Journal of Machine Learning
Research, 9(Nov):2579–2605, 2008.
J. Wang, Y. Song, T. Leung, C. Rosenberg, J. Wang, J. Philbin, B. Chen, and Y. Wu. Learning
Fine-Grained Image Similarity with Deep Ranking. In 2014 IEEE Conference on Computer Vision
and Pattern Recognition, pages 1386–1393, 2014.
K. Q. Weinberger, J. Blitzer, and L. Saul. Distance metric learning for large margin nearest neighbor
classification. Advances in neural information processing systems, 2006.
P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSD
Birds 200. Technical Report CNS-TR-2010-001, California Institute of Technology, 2010.
Recognizing Union-Find trees is NP-complete✩
Kitti Gelle, Szabolcs Iván
University of Szeged, Hungary
arXiv:1510.07462v2 [cs.CC] 5 Jan 2017
Abstract
Disjoint-Set forests, consisting of Union-Find trees, are data structures with widespread practical application due to their efficiency. Despite being well known, no exact structural characterization of these
trees is known (such a characterization exists for Union trees which are constructed without using path
compression). In this paper we provide such a characterization by means of a simple push operation and
show that the decision problem whether a given tree is a Union-Find tree is NP-complete.
1. Introduction
Disjoint-Set forests, introduced in [10], are fundamental data structures in many practical algorithms where one has to maintain a partition of
some set, which support three operations: creating a partition consisting of singletons, querying
whether two given elements are in the same class of
the partition (or equivalently: finding a representative of a class, given an element of it) and merging
two classes. Practical examples include e.g. building a minimum-cost spanning tree of a weighted
graph [4], unification algorithms [17] etc.
To support these operations, even a linked list
representation suffices, but to achieve an almost-constant amortized time cost per operation,
Disjoint-Set forests are used in practice. In this
data structure, sets are represented as directed
trees with the edges directed towards the root; the
create operation creates n trees having one node
each (here n stands for the number of the elements in the universe), the find operation takes
a node and returns the root of the tree in which the
node is present (thus the same-class(x, y) operation is implemented as find(x) == find(y)), and
the merge(x, y) operation is implemented by merging the trees containing x and y, i.e. making one of
the root nodes to be a child of the other root node
(if the two nodes are in different classes).
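In code, the data structure just described is commonly realised with a parent array; the sketch below combines the union-by-size strategy with path compression (identifier names are ours, not from the paper).

```python
class DisjointSet:
    """Disjoint-Set forest with union-by-size and path compression."""
    def __init__(self, n):
        self.parent = list(range(n))   # create: n singleton trees
        self.size = [1] * n

    def find(self, x):
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:                     # path compression:
            self.parent[x], x = root, self.parent[x]      # reattach ancestors to the root
        return root

    def merge(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return                                        # already in the same class
        if self.size[rx] < self.size[ry]:                 # union-by-size:
            rx, ry = ry, rx                               # smaller root goes below larger
        self.parent[ry] = rx
        self.size[rx] += self.size[ry]

ds = DisjointSet(6)
ds.merge(0, 1); ds.merge(2, 3); ds.merge(0, 2)
same = ds.find(1) == ds.find(3)   # same-class(1, 3) as find(1) == find(3)
```

The trees reachable in such a run, with both merging and compression allowed, are exactly the Union-Find trees whose recognition problem this paper studies.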
In order to achieve near-constant efficiency, one has
to keep the (average) height of the trees small.
✩ This research was supported by NKFI grant number
K108448.
Preprint submitted to Elsevier
There are two “orthogonal” methods to do that:
first, during the merge operation it is advisable to
attach the “smaller” tree below the “larger” one. If
the “size” of a tree is the number of its nodes, we
say the trees are built up according to the union-by-size strategy; if it is the depth of a tree, then we talk
about the union-by-rank strategy. Second, during
a find operation invoked on some node x of a tree,
one can apply the path compression method, which
reattaches each ancestor of x directly to the root
of the tree in which they are present. If one applies both the path compression method and either
one of the union-by-size or union-by-rank strategies,
then any sequence of m operations on a universe
of n elements has worst-case time cost O(mα(n))
where α is the inverse of the extremely fast growing (not primitive recursive) Ackermann function
for which α(n) ≤ 5 for each practical value of
n (say, below 2^65535), hence it has an amortized
almost-constant time cost [22]. Since it’s proven [9]
that any data structure has worst-case time cost
Ω(mα(n)), the Disjoint-Set forests equipped with
a strategy and path compression offer a theoretically optimal data structure which performs exceptionally well also in practice. For more details see
standard textbooks on data structures, e.g. [4].
Due to these facts, it is certainly interesting both
from the theoretical as well as the practical point of
view to characterize those trees that can arise from
a forest of singletons after a number of merge and
find operations, which we call Union-Find trees in
this paper. One could e.g. test Disjoint-Set implementations since if at any given point of execution
January 6, 2017
a tree of a Disjoint-Set forest is not a valid Union-Find tree, then it is certain that there is a bug in
the implementation of the data structure (though
we note at this point that this data structure is
sometimes regarded as one of the “primitive” data
structures, in the sense that it is possible to implement a correct version of them that need not be
certifying [20]). Nevertheless, only the characterization of Union trees is known up till now [2], i.e.
which corresponds to the case when one uses one of the union-by-size or union-by-rank strategies but not path compression.
Since in that case the data structure offers only a
theoretic bound of Θ(log n) on the amortized time
cost, in practice all implementations imbue path
compression as well, so for a characterization to be
really useful, it has to cover this case as well.
In this paper we show that the recognition problem of Union-Find trees is NP-complete when the
union-by-size strategy is used (and leave open the
case of the union-by-rank strategy). This confirms
the statement from [2] that the problem “seems
to be much harder” than recognizing Union trees
(which in turn can be done in low-degree polynomial time).
Related work. There is an increasing interest in determining the complexity of the recognition problem of various data structures. The
problem was considered for suffix trees [16, 21],
(parametrized) border arrays [14, 19, 8, 13, 15], suffix arrays [1, 7, 18], KMP tables [6, 12], prefix tables [3], cover arrays [5], and directed acyclic wordand subsequence graphs [1].
2. Notation

A tree is a tuple t = (Vt, roott, parentt) with Vt being the finite set of its nodes, roott ∈ Vt its root and parentt : (Vt − {roott}) → Vt mapping each non-root node to its parent (so that the graph of parentt is a directed acyclic graph, with edges being directed towards the root).

For a tree t and a node x ∈ Vt, let children(t, x) stand for the set {y ∈ Vt : parentt(y) = x} of its children and children(t) stand as a shorthand for children(t, roott), the set of depth-one nodes of t. Two nodes are siblings in t if they have the same parent. Also, let x ⪯t y denote that x is an ancestor of y in t, i.e. x = parentt^k(y) for some k ≥ 0, and let size(t, x) = |{y ∈ Vt : x ⪯t y}| stand for the number of descendants of x (including x itself). Let size(t) stand for size(t, roott), the number of nodes in the tree t. For x ∈ Vt, let t|x stand for the subtree (Vx = {y ∈ Vt : x ⪯t y}, x, parentt|Vx) of t rooted at x. When x, y ∈ Vt, we say that x is lighter than y in t (or y is heavier than x) if size(t, x) < size(t, y).

Two operations on trees are that of merging and collapsing. Given two trees t = (Vt, roott, parentt) and s = (Vs, roots, parents) with Vt and Vs being disjoint, their merge merge(t, s) (in this order) is the tree (Vt ∪ Vs, roott, parent) with parent(x) = parentt(x) for x ∈ Vt, parent(roots) = roott and parent(y) = parents(y) for each non-root node y ∈ Vs of s. Given a tree t = (V, root, parent) and a node x ∈ V, then collapse(t, x) is the tree (V, root, parent′) with parent′(y) = root if y is a non-root ancestor of x in t, and parent′(y) = parent(y) otherwise. For examples, see Figure 1.

The class of Union trees is the least class of trees satisfying the following two conditions: every singleton tree (having exactly one node) is a Union tree, and if t and s are Union trees with size(t) ≥ size(s), then merge(t, s) is a Union tree as well. Analogously, the class of Union-Find trees is the least class of trees satisfying the following three conditions: every singleton tree is a Union-Find tree; if t and s are Union-Find trees with size(t) ≥ size(s), then merge(t, s) is a Union-Find tree as well; and if t is a Union-Find tree and x ∈ Vt is a node of t, then collapse(t, x) is also a Union-Find tree.

We'll frequently sum the size of "small enough" children of nodes, so we introduce a shorthand also for that: for a tree t, a node x of t, and a threshold W ≥ 0, let sumsize(t, x, W) stand for Σ{size(t, y) : y ∈ children(t, x), size(t, y) ≤ W}. We say that a node x of a tree t satisfies the Union condition if for each child y of x we have sumsize(t, x, W) ≥ W where W = size(t, y) − 1. Otherwise, we say that x violates the Union condition (at child y). Then, the characterization of Union trees from [2] can be formulated in our terms as follows:

Theorem 1. A tree t is a Union tree if and only if each node x of t satisfies the Union condition. Equivalently, x satisfies the Union condition if and only if whenever x1, . . . , xk is an enumeration of children(t, x) such that size(t, xi) ≤ size(t, xi+1) for each i, then Σj<i size(t, xj) ≥ size(t, xi) − 1 for each i = 1, . . . , k. (In particular, each non-leaf node has to have a leaf child.)
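Theorem 1 immediately yields a polynomial-time recognition procedure for Union trees. A sketch, assuming trees are given as parent lists with the root pointing to itself (our own encoding, not the paper's):

```python
def is_union_tree(parent):
    """Check the Union condition of Theorem 1 at every node of a tree
    given as a parent list (the root points to itself)."""
    n = len(parent)

    def depth(x):
        d = 0
        while parent[x] != x:
            x, d = parent[x], d + 1
        return d

    # accumulate subtree sizes bottom-up, processing nodes by decreasing depth
    size = [1] * n
    for x in sorted(range(n), key=depth, reverse=True):
        if parent[x] != x:
            size[parent[x]] += size[x]

    children = [[] for _ in range(n)]
    for x in range(n):
        if parent[x] != x:
            children[parent[x]].append(x)

    for x in range(n):
        total = 0
        for y in sorted(children[x], key=lambda c: size[c]):  # lightest child first
            if total < size[y] - 1:      # Union condition violated at child y
                return False
            total += size[y]
    return True
```

For example, a star on three nodes passes, while a path on three nodes fails at its root (the root's only child has size two but no lighter siblings), matching the remark that every non-leaf node must have a leaf child.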
3. Structural characterization of Union-Find trees
nodes, x has to have depth at least two in s as well. Let {x1, . . . , xk} stand for X − Y. Now for any node xi ∈ X − Y there exists a unique node yi ∈ Y such that yi ⪯s xi. Let us define the trees t0 = t, ti = push(ti−1, xi, yi). Then t ⊢∗ tk, children(tk) = Y = children(s) and for each y ∈ Y, tk|y ⪯ s|y. Applying the induction hypothesis we get that tk|y ⊢∗ s|y for each y ∈ Y, hence the immediate subtrees of tk can be transformed into the immediate subtrees of s by repeatedly applying push operations, hence t ⊢∗ s as well.
Suppose t and s are trees on the same set V of nodes and with the same root r. We write t ⪯ s if x ⪯t y implies x ⪯s y for each x, y ∈ V. For an example, consult Figure 1. There, t′′′ ⪯ t′′ (e.g. r ⪯t′′′ x and also r ⪯t′′ x); the reverse direction does not hold since e.g. y ⪯t′′ z but y ̸⪯t′′′ z. Also, the reader is encouraged to verify that t′′′ ⪯ t′ ⪯ t′′ also holds. Clearly, ⪯ is a partial order on any set of trees (i.e. ⪯ is a reflexive, transitive and antisymmetric relation). It is also clear that t ⪯ s if and only if parentt(x) ⪯s x holds for each x ∈ V − {r}, which is further equivalent to requiring parentt(x) ⪯s parents(x) since parentt(x) cannot be x.

Another notion we define is the (partial) operation push on trees as follows: when t is a tree and x ≠ y ∈ Vt are siblings in t, then push(t, x, y) is defined as the tree (Vt, roott, parent′) with parent′(z) = y if z = x, and parent′(z) = parentt(z) otherwise; that is, we "push" the node x one level deeper in the tree, just below its former sibling y. (See Figure 1.) We write t ⊢ t′ when t′ = push(t, x, y) for some x and y, and as usual, ⊢∗ denotes the reflexive-transitive closure of ⊢. Observe that when t′ = push(t, x, y), then size(t′, y) = size(t, y) + size(t, x) > size(t, y) and size(t′, z) = size(t, z) for each z ≠ y, hence ⊢∗ is also a partial ordering on trees. This is not a mere coincidence:

The relations ⪯ and ⊢∗ are introduced due to their intimate relation to Union-Find and Union trees:

Theorem 2. A tree t is a Union-Find tree if and only if t ⊢∗ s for some Union tree s.
Proof. Let t be a Union-Find tree. We show the
claim by structural induction. For singleton trees
the claim holds since any singleton tree is a Union
tree as well. Suppose t = merge(t1 , t2 ). Then by
the induction hypothesis, t1 ⊢∗ s1 and t2 ⊢∗ s2
for the Union trees s1 and s2 . Then, for the tree
s = merge(s1 , s2 ) we get that t ⊢∗ s. Finally,
assume t = collapse(t′ , x) for some node x. Let
x = x1 ≻ x2 ≻ . . . ≻ xk = roott′ be the ancestral
sequence of x in t′ . Then, defining t0 = t, ti =
push(ti−1 , xi , xi+1 ) we get that t ⊢∗ tk−1 = t′ and
t′ ⊢∗ s for some Union tree s by the induction hypothesis, thus t ⊢∗ s also holds.
Now assume t ⊢∗ s (equivalently, t ⊑ s by Proposition 1) for some Union tree s. Let X stand for the set children(t) of depth-one nodes of t and Y stand for children(s). By t ⊑ s we get that Y ⊆ X. Let {x1, . . . , xk} stand for the set X − Y. Then for each xi there exists a unique yi ∈ Y with yi ⪯s xi. Let us define the sequence t = t0 ⊢ t1 ⊢ . . . ⊢ tk with ti = push(ti−1, xi, yi). Then, tk ⊑ s holds. Moreover, as children(tk) = children(s) = Y, we get that tk|y ⊑ s|y for each
y ∈ Y . Applying the induction hypothesis we have
that each subtree tk |y is a Union-Find tree. Now
let us define the sequence t′0 , t′1 , . . . , t′k of trees as
follows: t′0 is the singleton tree with root roott ,
and t′i = merge(t′i−1, tk|y′i), where Y = {y′1, . . . , y′ℓ} is an enumeration of the members of Y such that size(s, y′i) ≤ 1 + Σ_{j<i} size(s, y′j) for each i = 1, . . . , ℓ.
Proposition 1. For any pair s and t of trees, t ⊑ s if and only if t ⊢∗ s.
Proof. For ⊢∗ implying ⊑ it suffices to show that ⊢ implies ⊑, since the latter is a partial order. So let t = (V, r, parent), and x ≠ y ∈ V be siblings in t with the common parent z, and s = push(t, x, y). Then, since parentt(x) = z = parents(y) = parents(parents(x)), we get parentt(x) ⪯s x, and by parentt(w) = parents(w) for each node w ≠ x, we have t ⊑ s.
It is clear that ⊑ is equality on singleton trees, thus ⊑ implies ⊢∗ for trees of size 1. We apply induction on the size of t = (V, r, parent) to show that whenever t ⊑ s = (V, r, parent′), then t ⊢∗ s as well. Let X stand for the set children(t) of depth-one nodes of t and Y stand for children(s). Clearly, Y ⊆ X since by t ⊑ s, any node x of t having depth at least two has to satisfy parent(x) ⪯s parent′(x), and since parent(x) ≠ r for such
(Such an ordering exists as s is a Union tree.) Since size(t′i−1) = 1 + Σ_{j<i} size(s, y′j), we get that each t′i is a Union-Find tree as well.
Finally, the tree t results from the tree t′k
constructed above by applying successively one
collapse operation on each node in X − Y , hence
t is a Union-Find tree as well.
an ancestor of y in t′ (allowing y = z), thus by
size(t′ , z) ≥ size(t′ , y) ≥ size(t, y) we get that z is
heavy in t′ hence x is a basket of t′ as well.
We will frequently sum the sizes of the light children of nodes, so for a tree t and node x of t, let
sumlight(t, x) stand for sumsize(t, x, H), the total size of the light children of x in t.
We introduce a charge function c as follows: for a
tree t and node x ∈ Vt , let the charge of x in t,
denoted c(t, x), be
0 if x is a light node of t, and sumlight(t, x) − H otherwise.
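The charge of every node can be computed from subtree sizes; a sketch (the parent-dictionary representation and the helper names are assumptions of this example, not the paper's):

```python
from collections import defaultdict

def sizes(parent, nodes):
    """Subtree sizes in a parent-dict tree: each node contributes 1 to
    itself and to every proper ancestor."""
    size = {v: 1 for v in nodes}
    for v in nodes:                 # quadratic walk-to-root; fine for a sketch
        w = v
        while w in parent:
            w = parent[w]
            size[w] += 1
    return size

def charge(parent, nodes, H):
    """Charge c(t, x) of every node for heaviness threshold H: light nodes
    carry charge 0, heavy nodes the total size of their light children minus H."""
    size = sizes(parent, nodes)
    children = defaultdict(list)
    for v, p in parent.items():
        children[p].append(v)
    return {x: 0 if size[x] <= H
            else sum(size[y] for y in children[x] if size[y] <= H) - H
            for x in nodes}

# Root r with a leaf child a and a child b carrying two leaves c, d.
t = {"a": "r", "b": "r", "c": "b", "d": "b"}
```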
4. Complexity
In this section we show that the recognition of
Union-Find trees is NP-complete. Note that membership in NP is clear by Theorem 2 and that the
possible number of pushes is bounded above by n²: upon pushing x below y, the size of y increases, while the size of the other nodes remains the same. Since the size of any node is at most n, the sum of the sizes of all the nodes is at most n² in any tree.
Let H > 0 be a parameter (the “heaviness threshold parameter”), to be specified later, and t = (V, root, parent) be a tree.
We call a node x light if size(t, x) ≤ H, heavy if
size(t, x) > H, and a basket if it has a heavy child
in t. Note that every basket is heavy. A particular
type of basket is an empty basket which is a node
having a single child of size H + 1. (Thus empty
baskets have size H + 2.)
Let us call a tree t0 flat if it satisfies all the following
conditions:
It’s worth observing that when x is a non-basket
heavy node of t, then sumlight(t, x) = size(t, x)−
1 (since by being non-basket, each child of x is
light).
Note that in particular the charge of a node is computable from the multiset of the sizes of its children
(or, from the multiset of the sizes of its light children along with its own size). Let t′ = push(t, x, y)
and let z = parentt (y) be the common parent
of x and y in t. Then, c(t, w) = c(t′ , w) for each
node w ∉ {y, z}, since for those nodes this multiset does not change. We now list how c(t′, y) and
c(t′ , z) vary depending on whether y and x are light
or heavy nodes. Note that size(t′ , z) = size(t, z)
(thus the charge of z can differ in t and t′ only if
z is heavy and sumlight(t′ , z) 6= sumlight(t, z))
and size(t′ , y) = size(t, x) + size(t, y).
1. There are K > 0 depth-one nodes of t0 which
are empty baskets.
2. There is exactly one non-basket heavy depth-one node of t0, with size H + 1.
3. The total size of light depth-one nodes is (K +
1)·H. That is, sumsize(t0 , roott0 , H) = (K +
1) · H.
4. The light nodes and non-basket heavy nodes
have only direct children as descendants, thus
in particular, the subtrees rooted at these
nodes are Union trees.
i) If x and y are both light nodes with size(t, x)+
size(t, y) ≤ H (i.e. y remains light in t′ after
x is pushed below y) then sumlight(t′ , z) =
sumlight(t, z) hence c(t, z) = c(t′ , z) and
c(t′ , y) = c(t, y) = 0 since y is still light in
t′ .
ii) If x and y are both light nodes with size(t, x)+
size(t, y) > H (i.e. y becomes heavy due to
the pushing), then z is heavy as well (both in
t and in t′ since size(t, z) = size(t′ , z)). Then
since sumlight(t′ , z) = sumlight(t, z) −
(size(t, x) + size(t, y)), we get c(t′ , z) =
c(t, z) − (size(t, x) + size(t, y)) and c(t′ , y) =
size(t, y) + size(t, x) − (H + 1) since y is heavy
in t′ having only light children (by the assumption, x is still light, and each child of y already present in t has to be light as well, since y itself is light in t).
iii) If x is heavy and y is light in t, then after pushing, y becomes heavy as well (and z
See Figure 2.
The fact that the push operation cannot decrease
the size of a node has a useful corollary:
Proposition 2. If t ⊢∗ t′ , and x is a heavy (basket,
resp.) node of t, then x is a heavy (basket, resp.)
node of t′ as well. Consequently, if x is light in t′ ,
then x is light in t as well.
Proof. Retaining heaviness simply comes from
size(t, x) ≤ size(t′ , x). When x is a basket node
of t, say having a heavy child y, then by x ⪯t y and t ⊢∗ t′ (that is, t ⊑ t′) we get x ⪯t′ y
as well, hence x has a (unique) child z which is
is also heavy). Then, by sumlight(t′ , z) =
sumlight(t, z) − size(t, y) (since the child y
of z loses its status of being light) we have
c(t′ , z) = c(t, z) − size(t, y) and in t′ , y is a
heavy node with light children of total size
size(t, y) − 1, hence c(t′ , y) = size(t, y) − (H +
1) (while c(t, y) = 0 since y is light in t).
iv) If x is light and y is heavy in t, then z is heavy
as well, z loses its light child x while y gains the
very same light child, hence c(t′ , z) = c(t, z) −
size(t, x) and c(t′ , y) = c(t, y) + size(t, x).
v) Finally, if both x and y are heavy, then z is
heavy as well, and since neither z nor y loses
or gains a light child, c(t′ , z) = c(t, z) and
c(t′ , y) = c(t, y).
thus sumlight(t, x) ≥ H yielding c(t, x) =
sumlight(t, x) − H ≥ 0 and the statement is
proved.
Thus, since a flat tree has total charge 0 while in a
Union tree each node has nonnegative charge, and
the push operation either decreases the total charge
in Cases ii) and iii) above, or leaves the total charge
unchanged in the other cases, we get the following:
Lemma 1. Suppose t0 ⊢∗ t for a flat tree t0 and a
Union tree t. Then for any sequence of push operations transforming t0 into t, each push operation is
of type i), iv) or v) above, i.e.
• either a light node is pushed into a light node,
yielding still a light node;
• or some node (either heavy or light) is pushed into a heavy node.
Observe that in Cases i), iv) and v) we have Σ_{x∈Vt} c(t, x) = Σ_{x∈Vt} c(t′, x), while in Cases ii) and iii) it holds that Σ_{x∈Vt} c(t, x) > Σ_{x∈Vt} c(t′, x) (namely, the total charge decreases by H + 1 in these two cases).
Now for a flat tree t0 it is easy to check that Σ_{x∈Vt0} c(t0, x) is zero: we have
Moreover, the charge of each node of t has to be
zero.
In particular, a heavy node x has charge 0 in a tree
t if and only if sumlight(t, x) = H yielding also
that each non-basket heavy node of t has to be of
size exactly H + 1. It’s also worth observing that
by applying the above three operations one does
not create a new heavy node: light nodes remain
light all the time. Moreover, if for a basket node
sumlight(t, x) = H in a Union tree t, then there
has to exist a heavy child of x in t of size exactly
H + 1 (by x being a basket node, there exists a
heavy child and if the size of the lightest heavy child
is more than H + 1, then the Union condition gets
violated).
Recall that in a flat tree t0 with K empty baskets,
there are K + 1 baskets in total (the depth-one
empty baskets and the root node), and there are
K + 1 non-basket heavy nodes (one in each basket,
initially), each having size H + 1, and all the other
nodes are light.
Thus if t0 ⊢∗ t for some Union tree t, then the set
of non-basket heavy nodes coincide in t0 and in t,
and also in t, the size of each such node x is still
H + 1. In particular, one cannot increase the size
of x by pushing anything into x.
Summing up the result of the above reasoning we
have:
sumlight(t0, roott0) = (K + 1) · H by assumption, hence the root node is in particular heavy and c(t0, roott0) = K · H; the empty baskets (having no light node at all) have charge −H each and there are K of them; finally, all the heavy nodes of size H + 1 have charge zero, as does each light node, making the total sum of charges zero.
The charge function we introduced turns out to be
particularly useful due to the following fact:
Proposition 3. If t is a Union tree, then c(t, x) ≥
0 for each node of t.
Proof. If x is a light node of t, then c(t, x) = 0
and we are done.
If x is a heavy node which is not a basket, then all of
its children are light nodes, thus sumlight(t, x) =
size(t, x) − 1, making c(t, x) = size(t, x) − (H +
1) which is nonnegative since size(t, x) > H by x
being heavy.
Finally, if x is a basket node of t, then it has at
least one heavy child. Let y be a lightest heavy
child of x. Since t is a Union tree, we have
sumsize(t, x, size(t, y) − 1) ≥ size(t, y) − 1. By
the choice of y, every child of x which is lighter
than y is light itself, thus sumsize(t, x, size(t, y) −
1) = sumsize(t, x, H) = sumlight(t, x), moreover, size(t, y) − 1 ≥ H since y is heavy,
Lemma 2. Suppose t0 ⊢∗ t for some flat tree t0
and some Union tree t. Then for any pushing sequence transforming t0 into t, each step ti ⊢ ti+1 of
basket) and each child of x also has the same
size in t′ as in t, hence the Union condition is
still violated. Now assume z and w are children of x. Upon pushing, x loses a child of size
size(t, z) and a child of size size(t, w) and gains a
child of size size(t, z) + size(t, w). It is clear that
sumsize(t, x, W ) ≥ sumsize(t′ , x, W ) for each possible W then (equality holds when W ≥ size(t, z)+
size(t, w) or W < min{size(t, w), size(t, z)} and
strict inequality holds in all the other cases). It
is also clear that there is at least one child y ′
of x in t′ such that size(t, y) ≤ size(t′ , y ′ ): if
y ≠ z then y′ = y suffices, otherwise y′ = w
is fine. Now let y0 be such a child with W =
size(t′ , y0 ) being the minimum possible. Then
sumsize(t′ , x, W − 1) = sumsize(t′ , x, size(t, y) −
1) ≤ sumsize(t, x, size(t, y) − 1) < size(t, y) − 1 ≤
size(t′ , y0 ) − 1 and thus x still violates the Union
condition in t′ as well.
the push chain with ti+1 = push(ti , x, y) has to be
one of the following two forms:
• x and y are light in ti and in ti+1 as well.
• y is a basket in ti (and in ti+1 as well).
We can even restrict the order in which the above
operations are applied.
Lemma 3. If t0 ⊢∗ t for some flat tree t0 and
Union tree t, then there is a push sequence t0 ⊢
t1 ⊢ . . . ⊢ tk = t with ti+1 = push(ti , xi , yi ) and an
index ℓ such that for each i ≤ ℓ, yi is a basket node
and for each i > ℓ, both xi and yi are light nodes
with size(ti , xi ) + size(ti , yi ) ≤ H.
Proof. Indeed, assume ti+1 = push(ti , x, y) for
the light nodes x and y of ti with size(ti , x) +
size(ti , y) ≤ H and ti+2 = push(ti+1 , z, w) for the
basket w of ti+1 . Then we can modify the sequence
as follows:
Hence we have:
• if z = y, then we can get ti+2 from ti by pushing x and y into the basket w first, then pushing x into y;
Proposition 5. Suppose t0 is a flat tree. Then, t0
is a Union-Find tree if and only if t0 ⊢ t1 ⊢ . . . ⊢
tk for some Union tree tk such that at each step
ti ⊢ ti+1 of the chain with ti+1 = push(ti , x, y), the
node y is a basket node in ti (and consequently, in
all the other members of the sequence as well).
• otherwise we can simply swap the two push operations since w cannot be either x or y (since
light nodes are not baskets), nor descendants
of x or y, thus z and w are already siblings in
ti as well, hence z can be pushed into w also
in ti , and afterwards since x and y are siblings
in the resulting tree, x can be pushed into y.
Proof. Observe that initially in any flat tree t0 ,
only basket nodes violate the Union condition.
Moreover, by pushing arbitrary nodes into baskets
only the baskets’ Union status can change (namely,
upon pushing into some basket x, the status of x
and its parent can change, which is also a basket).
Thus, after pushing nodes into baskets we either already have a Union tree, or not, but in the latter
case we cannot transform the tree into a Union tree
by pushing light nodes into light nodes by Proposition 4.
Applying the above modification finitely many
times we arrive to a sequence of trees satisfying the
conditions of the Lemma. Thus if t0 ⊢∗ t for some
flat tree t0 and Union tree t, we can assume that
first we push nodes into baskets, then light nodes
into light nodes, yielding light nodes.
However, it turns out the latter type of pushing
cannot fix the Union status of the trees we consider.
Hence we arrive to the following characterization:
Proposition 4. Suppose t is a tree with a basket
node x violating the Union condition, i.e. for some
child y of x it holds that sumsize(t, x, size(t, y) −
1) < size(t, y) − 1. Then for any tree t′ =
push(t, z, w) with z and w being light nodes with
total size at most H we have that x still violates
the Union condition in t′ .
Proposition 6. Assume t0 is a flat tree having K
empty baskets. Then, t0 is a Union-Find tree if and
only if the set L of its depth-one light nodes can
be partitioned into sets L1, . . . , LK+1 such that for each 1 ≤ i ≤ K + 1, Σ_{x∈Li} size(t0, x) = H, and for each y ∈ Li, Σ_{z∈Li, size(t0,z)<size(t0,y)} size(t0, z) ≥ size(t0, y) − 1.
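Proposition 6 turns recognition of flat Union-Find trees into a partition problem on the depth-one light sizes. A brute-force check of this characterization (exponential time, for illustration only; the names are this sketch's own):

```python
from itertools import product

def union_ok(group):
    """A multiset of sizes satisfies the Union condition if each element a
    has at least a - 1 total weight of strictly smaller elements."""
    return all(sum(b for b in group if b < a) >= a - 1 for a in group)

def flat_is_union_find(light_sizes, K, H):
    """Proposition 6, checked by exhaustion: can light_sizes be split into
    K + 1 groups, each summing to H and satisfying the Union condition?"""
    for assignment in product(range(K + 1), repeat=len(light_sizes)):
        groups = [[s for s, g in zip(light_sizes, assignment) if g == i]
                  for i in range(K + 1)]
        if all(sum(g) == H and union_ok(g) for g in groups):
            return True
    return False
```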
Proof. If z (and w) are not children of x, then
children(t, x) = children(t′ , x) (since in particular w 6= x by w being light and x being a
Proof. Recall that non-basket nodes of a flat tree
satisfy the Union condition. Assume the set L
offset c ≥ 0 and the instance I′ = a′1, . . . , a′3m with a′i = ai + c for each i and with B′ = B + 3c, the sets of solutions of I and I′ coincide. Indeed, the instance I′ still satisfies
of the depth-one light nodes of t0 can be partitioned into sets Li , i = 1, . . . , K + 1 as above. Let
y1 , . . . , yK be the empty basket nodes of t0 . Then
by pushing every member of Li into the basket yi
(and leaving members of LK+1 at depth one) we
arrive to a tree t whose basket nodes satisfy the
Union condition (their light children do not violate
the condition due to the assumption on Li , and
their only heavy child having size H + 1 does not
violate the condition either since sumlight(t, x) = Σ_{x∈Li} size(t0, x) = H, due also to the assumption on Li). Finally, the root node has K children (the
initially empty baskets) of size 2H + 2 but since it
has light children of total size H and a heavy child
of size H + 1, their sizes summing up to 2H + 1,
the depth-one baskets also do not violate the Union
condition. Hence t0 ⊢∗ t for some Union tree t, thus t0 is a Union-Find tree.
For the other direction, assume t0 is a Union-Find
tree. Then since t0 is also a flat tree, it can be transformed into a Union tree t by repeatedly pushing
nodes into baskets only. Since the pushed nodes’
original parents are baskets as well (since they are
parents of the basket into which we push), a node
is a child of a basket node in t if and only if it is
a child of a (possibly other) basket node in t0 . We
also know that the charge of each node in the Union
tree we gain at the end has to be zero, in particular, each basket x still has to have a heavy child of
size exactly H + 1 and sumlight(t, x) = H has to
hold. Let y1 , . . . , yK+1 stand for the basket nodes
of t0 (and of t as well): then, defining Li as the set
of light children of yi in t suffices.
B′/4 = (B + 3c)/4 = B/4 + 3c/4 ≤ B/4 + c < ai + c = a′i and a′i = ai + c < B/2 + c ≤ B/2 + 3c/2 = (B + 3c)/2 = B′/2, hence each solution B′ = {B′1, . . . , B′k} of I′ still contains triplets, and Σ_{i∈B′j} a′i = 3c + Σ_{i∈B′j} ai, which is B′ if and only if Σ_{i∈B′j} ai = B; thus any solution of I′ is also a solution of I, the other direction being also straightforward to check.
Thus, by setting the above offset to c = ⌈(1 + max{2^⌈log B⌉, 2^4} − B)/3⌉ (in which case B′ = 2^D + d for some suitable integers D > 3 and d ∈ {1, 2, 3}) we get that the following problem is also strongly NP-complete:
Definition 1 (3-Partition′). Input: A sequence a1, . . . , a3m of positive integers such that B = (Σ_{i=1}^{3m} ai)/m is a positive integer of the form 2^D + d for some integer D > 3 and d ∈ {1, 2, 3}, and B/4 < ai < B/2 for each i = 1, . . . , 3m.
Output: Does there exist a partition B = {B1, . . . , Bk} of the set {1, . . . , 3m} satisfying Σ_{i∈Bj} ai = B for each j ∈ {1, . . . , k}?
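For experimenting with small instances, a brute-force solver for 3-Partition′ can be sketched as follows (exponential time, illustration only; not part of the reduction):

```python
from itertools import combinations

def three_partition(a):
    """Brute-force 3-Partition' solver: split the index set {0,...,3m-1}
    into triples, each summing to the target B; returns the triples or None."""
    m = len(a) // 3
    B = sum(a) // m
    def solve(remaining):
        if not remaining:
            return []
        first = min(remaining)                 # the lowest unused index ...
        rest = remaining - {first}
        for j, k in combinations(sorted(rest), 2):
            if a[first] + a[j] + a[k] == B:    # ... must land in some triple
                tail = solve(rest - {j, k})
                if tail is not None:
                    return [(first, j, k)] + tail
        return None
    return solve(set(range(len(a))))

# The instance used in Figure 3 has B = 17; one solution groups the values
# as {5, 5, 7}, {5, 5, 7}, {5, 6, 6}.
```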
Observe that in any solution, k = m and each Bj
consists of exactly three elements.
We are now ready to show the NP-completeness of
the recognition of Union-Find trees via a (logspace,
many-to-one) reduction from the 3-Partition’
problem to it. To each instance I = a1 , . . . , a3m
of the 3-Partition’ problem we associate the following flat tree t(I):
At this point we recall that the following problem 3-Partition is NP-complete in the strong sense [11]: given a list a1, . . . , a3m of positive integers with the value B = (Σ_{i=1}^{3m} ai)/m being an integer, such that for each 1 ≤ i ≤ 3m we have B/4 < ai < B/2, does there exist a partition B = {B1, . . . , Bk} of the set {1, . . . , 3m} satisfying Σ_{i∈Bj} ai = B for each 1 ≤ j ≤ k?
(Here “in the strong sense” means that the problem remains NP-complete even if the numbers are
encoded in unary.)
Observe that by the condition B/4 < ai < B/2 each set Bj has to contain exactly three elements (the sum of any two of the input numbers is less than B and the sum of any four of them is larger than B), thus in particular in any solution above k = m holds.
Note also that for any given instance I =
a1 , . . . , a3m of the above problem and any given
• The number K of empty baskets in t(I) is m −
1.
• The heaviness threshold parameter H of the tree is B + 2^(D−1) − 1, where B = 2^D + d is the target sum (Σ_{i=1}^{3m} ai)/m with D > 3 being an integer and d ∈ {1, 2, 3}.
• There are 3m + (D − 1)m light nodes of depth one in t(I): first, to each member ai of the input a light node xi of size ai is associated, and to each index 1 ≤ i ≤ m and 0 ≤ j < D − 1, a light node y_i^j of size w_i^j = 2^j is associated.
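The parameters of t(I) described above can be computed directly from an instance; a sketch (the helper names are this example's own):

```python
def reduction_parameters(a):
    """Given a 3-Partition' instance a (length 3m), compute the parameters
    of the flat tree t(I): the number K of empty baskets, the heaviness
    threshold H, and the multiset of depth-one light node sizes."""
    m = len(a) // 3
    B = sum(a) // m
    D = B.bit_length() - 1              # B = 2**D + d with d in {1, 2, 3}
    d = B - 2 ** D
    assert D > 3 and d in (1, 2, 3), "input must be a 3-Partition' instance"
    H = B + 2 ** (D - 1) - 1            # heaviness threshold parameter
    K = m - 1                           # number of empty baskets
    # the a_i's plus, for each of the m groups, the small weights 2^0..2^(D-2)
    light_sizes = list(a) + [2 ** j for _ in range(m) for j in range(D - 1)]
    return K, H, light_sizes
```

On the instance of Figure 3, I = (5, 5, 5, 5, 5, 6, 6, 7, 7), this yields K = 2, H = 24, and light sizes totalling (K + 1) · H = 72.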
2^(D−2) ≥ 2 if D ≥ 3). Thus by Proposition 6, t(I) is indeed a Union-Find tree. (See Figure 4.)
For the other direction, suppose t(I) is a Union-Find tree. Then by Proposition 6, the multiset of the sizes of the light nodes can be partitioned into sets L = {L1, . . . , Lm} such that each Li sums up exactly to H = B + 2^(D−1) − 1 and each Li satisfies the Union condition.
First we show that each Li contains exactly one “small” weight w_k^j for each j = 0, . . . , D − 2. (Note that each aj exceeds 2^(D−2), hence the name of these weights.) We prove this by induction on j. The claim holds for j = 0 since in a Union-Find tree each inner node has a leaf child. By induction, the smallest j members of Li have sizes 2^0, 2^1, . . . , 2^(j−1), summing up to 2^j − 1. Since all the weights ak are larger than 2^(D−2), as well as the weights w_k^(j′) for j′ > j, none of these can be the (j + 1)th smallest integer in Li without violating the Union condition. Thus, each set Li has to have some w_(ki)^j as its (j + 1)th smallest element, and since both the number of these sets and the number of the small weights of size 2^j are m, we get that each set Li contains exactly one small weight of size 2^j.
Thus, the small weights sum up exactly to 2^(D−1) − 1 in each of the sets Li, hence the weights ai have to sum up exactly to H − (2^(D−1) − 1) = 2^D + d = B in each of these sets, yielding a solution to the instance (a1, . . . , a3m) of the 3-Partition′ problem and concluding the proof.
Note that since the 3-Partition’ problem is
strongly NP-complete we can assume that the input numbers ai are encoded in unary, thus the tree
t(I) can be indeed built in logspace (and in polytime).
(See Figure 3 for an example.)
In order to ease notation we say that a member ai
of some sequence (or multiset) a1 , . . . , an satisfies
the Union condition if the sum of the members of
the sequence that are less than ai is at least ai − 1, i.e. Σ_{aj<ai} aj ≥ ai − 1. In addition we say that the
sequence itself satisfies the Union condition if each
of its elements does so. It is clear that a node x of
a tree t satisfies the Union condition if and only if
the multiset {size(t, y) : y ∈ children(t, x)} does
so.
The following lemma states that the above construction is indeed a reduction:
Lemma 4. For any instance I of the 3Partition’ problem, I has a solution iff t(I)
is a Union-Find tree.
Proof. For one direction, assume {B1 , . . . , Bm } is
a solution of the instance I = a1 , . . . , a3m . Then
the multiset of the sizes of the light nodes can be
partitioned into sets L = (L1, . . . , Lm) as Li = {aj : j ∈ Bi} ∪ {w_i^j : 0 ≤ j < D − 1}. It is clear that Σ_{ℓ∈Li} ℓ = B + 2^(D−1) − 1 = H. Thus, by Proposition 6 we only have to show that each Li satisfies the Union condition. For the elements w_i^j of size 2^j it is clear since Σ_{j′=0}^{j−1} w_i^(j′) = 2^j − 1. Now
let a be the smallest element of Bi. Then, since Bi consists of three integers summing up to B, we get that a ≤ B/3. On the other hand, B/4 < a by the definition of the 3-Partition′ problem. Recall that B = 2^D + d for some d ∈ {1, 2, 3}: we get that 2^(D−2) < a, thus in particular each weight w_i^j is smaller than a. Summing up all these weights we get Σ_{j=0}^{D−2} w_i^j = 2^(D−1) − 1. We claim that a ≤ B/3 = (2^D + d)/3 < 2^(D−1) − 1.
Thus, since the strongly NP-complete 3Partition’ problem reduces to the problem
of deciding whether a flat tree is a Union-Find
tree, and moreover, if a flat tree is a Union-Find
tree, then it can be constructed from a Union tree
by applying finitely many path compressions (all
one has to do is to “move out” the light nodes from
the baskets by calling a find operation on each of
them successively), we have proven the following:
Theorem 3. It is NP-complete already for flat
trees to decide whether
Indeed, multiplying a ≤ (2^D + d)/3 < 2^(D−1) − 1 by 3 we get 2^D + d < 2^D + 2^(D−1) − 3; subtracting 2^D and adding 3 yields d + 3 < 2^(D−1), which holds since d ≤ 3 and D > 3. Thus we have that a satisfies the Union condition as well. Then, if b > a is also a member of Bi, then it suffices to show a + Σ_{j=0}^{D−2} w_i^j ≥ b − 1; but since a ≥ B/4 > 2^(D−2) and Σ_{j=0}^{D−2} w_i^j = 2^(D−1) − 1, we get that the left sum exceeds 2^(D−1) + 2^(D−2) − 1, which is at least b − 1 since b < B/2 = (2^D + d)/2 ≤ 2^(D−1) + 2^(D−2) (since d ≤ 3 and
i) a given (flat) tree is a Union-Find tree and
ii) whether a given (flat) tree can be constructed
from a Union tree by applying a number of path
compression operations.
5. Conclusion, future directions
We have shown that unless P = NP, there is no
efficient algorithm to check whether a given tree
8
is a valid Union-Find tree, assuming union-by-size
strategy and usage of path compression, since the
problem is NP-complete. A natural question is
whether the recognition problem remains NP-hard
under the union-by-rank strategy (and, of course, path compression). Since the heights of merged
trees do not add up, only increase by at most one,
there is no “obvious” way to encode arithmetic into the construction of these trees, and it is even unclear whether the characterization via the push operation holds in that case (since in that setting, path compression can alter the order of the subsequent merges).
References
[1] H. Bannai, S. Inenaga, A. Shinohara, and M. Takeda.
Inferring strings from graphs and arrays. Lecture
Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes
in Bioinformatics), 2747:208–217, 2003.
[2] Leizhen Cai. The recognition of union trees. Inf. Process. Lett., 45(6):279–283, 1993.
[3] Julien Clément, Maxime Crochemore, and Giuseppina
Rindone. Reverse engineering prefix tables. In Susanne
Albers and Jean-Yves Marion, editors, 26th International Symposium on Theoretical Aspects of Computer
Science, STACS 2009, February 26-28, 2009, Freiburg,
Germany, Proceedings, volume 3 of LIPIcs, pages 289–
300. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, Germany, 2009.
[4] Thomas H. Cormen, Clifford Stein, Ronald L. Rivest,
and Charles E. Leiserson. Introduction to Algorithms.
McGraw-Hill Higher Education, 2nd edition, 2001.
[5] M. Crochemore, C.S. Iliopoulos, S.P. Pissis, and G. Tischler. Cover array string reconstruction. Lecture
Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes
in Bioinformatics), 6129 LNCS:251–259, 2010.
[6] J.-P. Duval, T. Lecroq, and A. Lefebvre. Efficient validation and construction of border arrays and validation
of string matching automata. RAIRO - Theoretical Informatics and Applications, 43(2):281–297, 2009.
[7] J.-P. Duval and A. Lefebvre. Words over an ordered
alphabet and suffix permutations. Theoretical Informatics and Applications, 36(3):249–259, 2002.
[8] Jean-Pierre Duval, Thierry Lecroq, and Arnaud Lefebvre. Border array on bounded alphabet. J. Autom.
Lang. Comb., 10(1):51–60, January 2005.
[9] M. Fredman and M. Saks. The cell probe complexity of
dynamic data structures. In Proceedings of the Twentyfirst Annual ACM Symposium on Theory of Computing, STOC ’89, pages 345–354, New York, NY, USA,
1989. ACM.
[10] Bernard A. Galler and Michael J. Fisher. An improved
equivalence algorithm. Commun. ACM, 7(5):301–303,
May 1964.
[11] Michael R. Garey and David S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman & Co., New York, NY, USA, 1979.
[12] P. Gawrychowski, A. Jez, and L. Jez. Validating the
knuth-morris-pratt failure function, fast and online.
Theory of Computing Systems, 54(2):337–372, 2014.
[13] Tomohiro I, Shunsuke Inenaga, Hideo Bannai, and
Masayuki Takeda. Counting parameterized border arrays for a binary alphabet. In Language and automata
theory and applications, volume 5457 of Lecture Notes
in Comput. Sci., pages 422–433. Springer, Berlin, 2009.
[14] Tomohiro I, Shunsuke Inenaga, Hideo Bannai, and
Masayuki Takeda. Verifying and enumerating parameterized border arrays. Theoretical Computer Science,
412(50):6959 – 6981, 2011.
[15] Tomohiro I, Shunsuke Inenaga, Hideo Bannai, and
Masayuki Takeda.
Verifying and enumerating parameterized border arrays. Theoret. Comput. Sci.,
412(50):6959–6981, 2011.
[16] Tomohiro I, Shunsuke Inenaga, Hideo Bannai, and
Masayuki Takeda. Inferring strings from suffix trees
and links on a binary alphabet. Discrete Applied Mathematics, 163, Part 3:316 – 325, 2014. Stringology Algorithms.
[17] Kevin Knight. Unification: A multidisciplinary survey.
ACM Comput. Surv., 21(1):93–124, March 1989.
[18] Gregory Kucherov, Lilla Tóthmérész, and Stéphane
Vialette. On the combinatorics of suffix arrays. Information Processing Letters, 113(22–24):915 – 920, 2013.
[19] Weilin Lu, P. J. Ryan, W. F. Smyth, Yu Sun, and
Lu Yang. Verifying a border array in linear time. J.
Comb. Math. Comb. Comput, 42:223–236, 2000.
[20] Ross M. McConnell, Kurt Mehlhorn, Stefan Näher, and
Pascal Schweitzer. Certifying algorithms. Computer
Science Review, 5(2):119–161, 2011.
[21] Tatiana Starikovskaya and Hjalte Wedel Vildhøj. A
suffix tree or not a suffix tree? Journal of Discrete
Algorithms, 32:14 – 23, 2015. StringMasters 2012 2013
Special Issue (Volume 2).
[22] Robert Endre Tarjan. Efficiency of a good but not linear set union algorithm. J. ACM, 22(2):215–225, April
1975.
Figure 1: Merge, collapse and push. (a) Trees s and t; (b) t′ = merge(s, t); (c) t′′ = push(t′, x, y); (d) t′′′ = collapse(t′′, z).
Figure 2: A flat tree with H = 9, K = 3. The six depth-one light nodes’ sizes sum up to 4 × 9 = 36.
Figure 3: Illustrating the reduction: t(I) for I = (5, 5, 5, 5, 5, 6, 6, 7, 7), m = 3, B = 17. Then D = 4, d = 1 and H = 24. Small weights w_i^j are of size 1, 2, 4.
Figure 4: The Union tree corresponding to a solution of the instance on Figure 3.
Secure Channel for Molecular Communications
S. M. Riazul Islam
Dept. of Computer Engineering
Sejong University
Seoul, South Korea
[email protected]
Farman Ali
UWB Wireless Research Center
Inha University
Incheon, South Korea
[email protected]
Abstract—Molecular communication in nanonetworks is an
emerging communication paradigm that uses molecules as
information carriers. Achieving a secure information exchange is
one of the practical challenges that must be addressed to realize the potential of molecular communications in
nanonetworks. In this article, we introduce a secure channel into molecular communications to prevent eavesdropping. First,
we propose a Diffie–Hellman algorithm-based method by which
communicating nanomachines can exchange a secret key through
molecular signaling. Then, we use this secret key to perform
ciphering. Also, we present both the algorithm for secret key
exchange and the secured molecular communication system. The
proposed secured system is found effective in terms of energy
consumption.
Keywords—Nanonetworks, molecular communications, security, secure channel, eavesdropping.
Hyeonjoon Moon
Dept. of Computer Engineering
Sejong University
Seoul, South Korea
[email protected]
Kyung-Sup Kwak
UWB Wireless Research Center
Inha University
Incheon, South Korea
[email protected]
key (a key only known to them). Then, information is encrypted
using the key before transmission. However, in molecular
communications, the main challenge is how to generate a private
key so that the adversary cannot learn the exchanged key.
Moreover, both key exchange process and encryption
algorithms should be cost-effective in terms of computational
complexity, and energy consumption. Authors in [8] make use
of radio-frequency identification (RFID) noisy tags, similar to
blocker tag suggested by authors in [9], to exchange secret key.
These tags intentionally generate noise on the channel so that
intruders in RFID communications can’t understand the key.
The same idea has been tailored to security requirements in near
field communication (NFC) [10]. However, these works have
been performed in the context of EM communications. In this
article, we try to establish a secure channel in molecular
communications to defend against eavesdropping.
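As background for the method proposed later, the textbook Diffie–Hellman exchange can be sketched with toy-sized parameters (the modulus and generator below are illustrative assumptions, far too small for real security):

```python
import secrets

# Publicly agreed (toy-sized) group parameters: prime modulus P and generator G.
P, G = 2087, 5

def dh_keypair():
    """A private exponent and the public value to be signaled over the channel."""
    private = secrets.randbelow(P - 2) + 1
    public = pow(G, private, P)
    return private, public

# Nanomachine A and nanomachine B each generate a keypair ...
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()

# ... exchange only the public values, and derive the same shared secret;
# an eavesdropper observing a_pub and b_pub cannot feasibly reconstruct it
# for realistically sized P.
k_a = pow(b_pub, a_priv, P)   # A's view: (g^b)^a mod P
k_b = pow(a_pub, b_priv, P)   # B's view: (g^a)^b mod P
assert k_a == k_b
```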
I. INTRODUCTION
II. SYSTEM MODEL
Molecular communication is promising to provide
appropriate solutions for a wide range of applications including
biomedical, industry, and environmental areas [1], [2]. Even, it
can be integrated with the Internet of Things (IoT) to provide
IoT-based healthcare services [3], [4] by implementing the
concept of Internet of NanoThings [5]. This is an
interdisciplinary research field and is significantly different
from the electromagnetic (EM) communication system, since it
utilizes molecules as carriers of information. However,
molecular communication has a number of technical challenges
to overcome. Introducing security into molecular
communications is a fundamental challenge for researchers.
There exist very few papers that focus on security aspects of this
promising technology [6], [7]. These papers discuss several
security and privacy issues and challenges in the context of
molecular communications, in general. To the best of our
knowledge, there exists no prominent work which attempts to
mitigate any particular security risk.
We consider that the underlying system is time-slotted with
specific slot duration and the participating nanomachines are in
a stationary fluidic medium at any distance within the network
coverage. We also consider that these nanomachines are
perfectly synchronized and communicate with each other using
same types of messenger molecules. Symbols are supposed to
be transmitted upon on-off keying (OOK) modulation through
the memoryless channel. In this scheme, information bit 1 is
conveyed by liberating an impulse of 𝑧1 number of molecules at
the start of the slot, whereas no molecule is released for
information 0. For the simplicity of presentation, we consider a
full-duplex system; participating nanomachines can transmit
and receive information at the same time. However, the
proposed method, with a slight modification, can also equally be
applied in a half-duplex system.
In this article, we deal with eavesdropping, a special class of
threats, that might exploit vulnerabilities to breach security in
molecular communications. It is the act of secretly listening to
an ongoing communication between two nanomachines. In EM
communications, a secure channel is established to prevent
eavesdropping. To do this, two communicating parties follow
Diffie–Hellman key exchange protocol and generate a private
This work was supported in part by National Research Foundation of
Korea-Grant funded by the Korean Government (Ministry of Science and ICT)NRF-2017R1A2B2012337) and in part by Sejong University Faculty Research
Fund.
Fig. 1. Molecular communication with eavesdropping (A sends violet molecules, C sends green molecules).
In case of reception, a nanomachine counts the total number
of messenger molecules received during the time slot. This
received number of molecules, denoted by 𝑧2 , is then compared
to z, a threshold number of molecules. If 𝑧2 is less than z, the
machine considers the received bit to be 0; otherwise, it decodes
the received bit to be 1. We further assume that there exists at
least a malicious nanomachine which uses a suitable detector to
receive the transmitted molecules with the purpose of
eavesdropping (see Fig. 1).
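The threshold rule described above can be sketched in a few lines (a minimal illustration; `ook_detect` is a hypothetical helper name, and the default threshold z = 20 is the value used later in the simulations):

```python
def ook_detect(z2, z=20):
    """Decode one OOK symbol from a molecule count.

    z2 is the number of messenger molecules counted during the slot;
    if it is below the threshold z the bit is decoded as 0, otherwise 1.
    """
    return 0 if z2 < z else 1

# A count below the threshold decodes to 0, at or above it to 1.
assert ook_detect(5) == 0
assert ook_detect(35) == 1
```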
III. EAVESDROPPING DISTANCE
It is worth noting that eavesdropping is a significant concern, since molecular communication is in practice a wireless approach. A question that may arise is how a malicious nanomachine can decode the transmitted data from the received molecules. This can be achieved in two ways. First, the intruder can perform the required experiments prior to an attack. Second, the attacker can gain the prerequisite knowledge from the literature. Also, it is not unusual for the intruder to have the required detector to receive the molecules and the hardware/software arrangements to decode the received molecules, since this does not require any special equipment.
Molecular communication typically occurs between two nanomachines in close vicinity. A reasonable question is how close the malicious machine needs to be in order to detect
the transmitted molecules. However, there is no exact answer to
this question. The reason we cannot answer this question accurately is that a number of factors determine
by the following factors, among others: molecular
characteristics of the given transmitting nanomachine, number
of molecules sent by transmitting nanomachine, detector
characteristics of the intruder, quality of the intruder’s
nanomachine itself and location of the attack. Thus, any
particular distance specified would only be usable for a definite
set of the aforementioned parameters and cannot be utilized to
develop common security strategies. Nevertheless, we can arguably say that eavesdropping can be performed from beyond the typical distance between two authentic nanomachines. In other words, a powerful attacker can still decode the information even when it operates at a relatively larger distance than the intended receiving nanomachine.
IV. PROPOSED SECURE CHANNEL
The idea is that both nanomachine A and nanomachine C transmit random data simultaneously. This is conceivable as nanomachines can launch and collect the molecules at the same time.
Fig. 2. The secret key exchange in molecular communication.
In the setup phase, these two nanomachines synchronize
on the exact timing of the bits and also on the energies (number
of molecules) of the transmitted molecular signal. After the
synchronization phase, machines A and C are able to send
molecules at the same time with the same number of molecules.
At the time of transmitting random bits of 0 (sending no
molecules) or 1 (sending some predetermined number of
molecules), both nanomachines also listen to molecular signal.
Now, we consider all the possible cases below:
Case 1: When both machines transmit a zero molecular
signal, the sum of these two molecular signals is zero and a
malicious nanomachine that is eavesdropping would recognize that both nanomachines sent a zero. This case does not help the malicious machine B to understand which machine is sending a bit of the secret key.
Case 2: When both machines transmit a one molecular
signal, the sum of these two molecular signals is double (two times the number of molecules set for sending a one) and the malicious nanomachine would recognize that both nanomachines sent a one. This case also does not help the malicious machine B to recognize which machine is sending a bit of the secret key.
Case 3: An interesting case happens when nanomachine
A sends a one whereas nanomachine C sends a zero or when
machine A sends a zero whereas machine C sends a one. In this
situation, both nanomachines can find what the other machine
has transmitted, since both machines are aware of what
information they themselves have just sent. Conversely, the
malicious node B only observes the sum of the two molecular signals and cannot determine which machine sent the one and which machine sent the zero.
This concept is demonstrated in Fig. 2. The top part of the figure shows the molecules released (violet color) by nanomachine A, whereas the middle part shows the molecules released (green color) by C. Machine A randomly transmits the
eight bits: 1, 0, 1, 1, 0, 1, 0, and 1. Machine C randomly transmits the eight bits: 0, 1, 0, 1, 1, 1, 0, and 1. The bottom part of the
figure displays the sum of the molecules released by both
machines. This is the ultimate signal as observed by the
malicious machine B. It clearly shows that the resultant signal when A transmits 0 and C transmits 1 is the same as when A transmits 1 and C transmits 0. Therefore, the malicious machine B cannot differentiate between these two cases. The two authentic machines then discard all bits for which they transmitted the same number of molecules (both A and C sent 0, or both sent 1). However, both machines keep all bits for which they transmitted different numbers of molecules (A sent 1 and C sent 0, or A sent 0 and C sent 1). They can consider either the bits
transmitted by machine A or the bits transmitted by machine C
as the secret key. This is basically a prior agreement as per
security policies. In this fashion, both machines can exchange an
encryption key of any desired length. Thus, in this example, the bits transmitted by the selected nanomachine at bit indexes 0, 1, 2, and 4 constitute the secret key, since the participating machines sent different numbers of molecules at these bit indexes. Now, if a security scheme selects the nanomachine C
for this purpose, the encryption key will be 0101. However, if
the machine A is selected, the key is 1010. The key we have just
obtained is 4-bits in length. Note that a secret key of any desired
length can be achieved if both machines continue their
operations until the target number of bits are stored. The
flowchart in Fig. 3 shows the generalized algorithm of our
proposed technique as described above. In this article, we will
assume the desired key is 8 bits in length.
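The bit-selection rule described above can be sketched as follows (a minimal illustration of the flowchart's logic; `exchange_key` is a hypothetical helper name, and the bit sequences are the worked example from the text):

```python
def exchange_key(bits_a, bits_c, selected="C"):
    """Keep only the slots in which A and C released different numbers
    of molecules, and read the key bits from the machine chosen by
    prior agreement (here "A" or "C")."""
    key = []
    for a, c in zip(bits_a, bits_c):
        if a != c:  # the eavesdropper sees the ambiguous one-unit sum
            key.append(c if selected == "C" else a)
    return key

# The worked example from the text:
bits_a = [1, 0, 1, 1, 0, 1, 0, 1]
bits_c = [0, 1, 0, 1, 1, 1, 0, 1]
print(exchange_key(bits_a, bits_c, "C"))  # [0, 1, 0, 1] -> key 0101
print(exchange_key(bits_a, bits_c, "A"))  # [1, 0, 1, 0] -> key 1010
```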
The next step after the private key generation is to encrypt the information bits before they go onto the channel through molecular modulation. For this purpose, we use the Exclusive OR (XOR) cipher, since it is simple to implement and the XOR operation is computationally inexpensive. The encrypted bits are obtained by the following logical operation:

𝑥𝑗 = 𝑏𝑘𝑗 ⊕ 𝑏𝑖𝑗 for 𝑗 = 0, 1, … , 7    (1)

where 𝑥𝑗 , 𝑏𝑘𝑗 and 𝑏𝑖𝑗 are the j-th bit of the encrypted block to be modulated for molecular transmission, the secret key, and the 8-bit information block, respectively. There will be many applications of molecular communications where stringent security is not required. In those cases, a simple hiding operation is sufficient to conceal the information from unauthorized parties. This in turn ensures that frequent changes of the private key are not mandatory. Conversely, security-sensitive applications will require relatively frequent changes of secret keys. At the receiving side, the decryption operation in (2) is performed on the information bits after molecular demodulation, using the same logical operation as in (1):

𝑦𝑗 = 𝑏𝑘𝑗 ⊕ 𝑏𝑒𝑗 for 𝑗 = 0, 1, … , 7    (2)

where 𝑦𝑗 , 𝑏𝑘𝑗 and 𝑏𝑒𝑗 are the j-th bit of the decrypted block, the secret key, and the demodulated 8-bit information block, respectively. If the communication is error-free, 𝑦𝑗 should be the same as 𝑏𝑖𝑗 . Our proposed secured molecular communication system thus eventually takes the form of Fig. 4. The serial-to-parallel (S/P) and parallel-to-serial (P/S) converters are used because the hardware XOR cipher performs 8-bit parallel XOR operations. However, this is a design choice; the same XOR cipher can alternatively be implemented with a single XOR gate. It is worth mentioning that the processing time to place secured molecular information onto the channel is negligible compared to the usual baseband information processing time, since the ciphering operations involve no complicated computation and the parallel-to-serial and serial-to-parallel operations also occur almost instantly. Moreover, the use of hardware ciphering instead of its software counterpart further reduces the associated time. Thus, our secured molecular communication system is effective in terms of information processing time.
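The ciphering step in (1) and its inverse in (2) amount to one XOR per bit, which can be sketched as follows (illustrative; the data block and key here are made up, only the 8-bit block size follows the text):

```python
def xor_block(bits, key):
    """Apply equations (1)/(2): XOR an 8-bit block with the 8-bit key.

    The same operation both encrypts and decrypts, since (b ^ k) ^ k == b.
    """
    return [b ^ k for b, k in zip(bits, key)]

info = [1, 1, 0, 0, 1, 0, 1, 0]   # b_i (example data, not from the paper)
key  = [0, 1, 0, 1, 0, 1, 0, 1]   # b_k
enc  = xor_block(info, key)        # x_j, eq. (1)
dec  = xor_block(enc, key)         # y_j, eq. (2)
assert dec == info                 # error-free channel: y_j == b_ij
```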
V. ENERGY CONSUMPTION ANALYSIS
Let 𝐸𝑏𝑇 and 𝐸𝑏𝐶 be the energy required to transmit one bit of information and the energy required to compute one bit of information, respectively. Also, 𝑁 denotes the total number of information bits to be transmitted and the key length is designated by 𝐾. The number of key generations needed to complete the information transmission is 𝑀. Since information bits (1’s or 0’s) are randomly generated on an equiprobable basis and transmissions of two out of four possible cases (during key exchange) are discarded, the participating nodes should exchange, on average, 2𝑛 bits to share 𝑛 bits of key. Therefore, the energy required to exchange the desired key once is 𝐸𝐾 = 2 × 𝐾 × 𝐸𝑏𝑇 . Ciphering and deciphering 𝑁 information bits using XOR logical operations require 𝐸𝐶 = 2 × 𝑁 × 𝐸𝑏𝐶 energy. Thus, the total energy required to transmit the information with security becomes

𝐸𝑇𝑆 = Energy for Information Transmission + 𝑀 × 𝐸𝐾 + 𝐸𝐶
    = 𝑁 × 𝐸𝑏𝑇 + 𝑀 × (2 × 𝐾 × 𝐸𝑏𝑇 ) + 2 × 𝑁 × 𝐸𝑏𝐶 .    (3)

Considering the fact that the energy required to transmit one bit of information is analogous to the energy required to carry out 1000 logical operations [11], we use 𝐸𝑏𝐶 = 0.001 × 𝐸𝑏𝑇 in (3) and get

𝐸𝑇𝑆 = (1.002𝑁 + 2𝐾𝑀)𝐸𝑏𝑇 .    (4)

Fig. 3. The algorithm for secret key exchange.
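Equations (3)-(5) can be checked numerically (a sketch assuming the simulation parameters used later in the paper: N = 4096 information bits, an 8-bit key generated M = 2 times, and E_bT = 125 molecules/bit; the function names are hypothetical):

```python
def secure_energy(N, K, M, E_bT):
    """Total energy with security, eq. (3), using E_bC = 0.001 * E_bT."""
    E_bC = 0.001 * E_bT   # 1 bit transmitted ~ 1000 logical operations [11]
    E_K = 2 * K * E_bT    # energy of one key exchange
    E_C = 2 * N * E_bC    # XOR ciphering + deciphering
    return N * E_bT + M * E_K + E_C

def plain_energy(N, E_bT):
    """Energy without security, eq. (5)."""
    return N * E_bT

# 4K information bits, an 8-bit key generated twice, 125 molecules/bit:
E_TS = secure_energy(N=4096, K=8, M=2, E_bT=125)
# The closed form of eq. (4) gives the same value:
assert abs(E_TS - (1.002 * 4096 + 2 * 8 * 2) * 125) < 1e-6
```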
In case of no security, the total energy required to transmit the information is

𝐸𝑇0 = Energy for Information Transmission = 𝑁 × 𝐸𝑏𝑇 .    (5)

Fig. 4. The molecular communication system with the secure channel.

VI. PERFORMANCE EVALUATION
To assess the proposed method, we perform computer simulations. We consider the same type of molecules of the same size with the OOK modulation scheme in a 2-dimensional confined space, and 125 molecules/bit (𝐸𝑏𝑇 ) is used on average. The sizes of the messenger molecules are assumed to be comparable to those of the fluid molecules. The threshold number of molecules is set to 𝑧 = 20. Transmitter and receiver are synchronized [12] so that the starting time of each molecular symbol at each communicating node is the same and the track of secret key sharing is maintained. The 4K information bits, divided into 4 frames, are transmitted. The frequent change of the secret key is implemented by generating new keys after every 2 frames.
Fig. 5 presents the total energy requirements of the secured molecular communication system under varying key length. It clearly shows that no additional energy is required to transmit secured molecular information in the proposed secure channel fashion. This is because the number of bits needed to send the same amount of information remains unchanged. However, the secured molecular communication system needs energy for exchanging the secret keys and for processing the information for encryption. In the case of a simple information-hiding operation, where the key needs to be exchanged only once, prior to information transmission, the same key can be used for a relatively long time. As a result, the amount of additional energy is negligible compared to the total energy requirement for information transmission. Even for the cases where there is a provision for frequent changes of shared keys (a different key after a few frames), the amount of extra energy consumption is still very low, since the key is very short compared to the total length of the several frames of interest. Thus, the proposed method is found to be energy efficient. Both simulated and analytical results are well matched in every case.

Fig. 5. Comparison of energy consumption.
VII. CONCLUDING REMARKS
In this article, we have proposed a secured molecular communication system to defend against eavesdropping. The participating nanomachines exchange a secret key through molecular signaling in such a way that an adversary cannot learn the key. This key exchange mechanism does not require significant additional energy. Also, the use of XOR ciphering to encrypt and decrypt the data with the generated secret key makes the system simple and effective in terms of energy consumption. The proposed system can effectively be used in molecular communication systems where simple hiding operations are sufficient or where more stringent security is required.
REFERENCES
[1] I.F. Akyildiz, F. Brunetti, and C. Blazques, “Nanonetworks: A New
Communication Paradigm,” Computer Networks, vol. 52, no. 12, pp.
2260-2279, Aug. 2008.
[2] T. Nakano, M.J. Moore, W. Fang, A.V. Vasilakos, and S. Jianwei,
"Molecular Communication and Networking: Opportunities and
Challenges," IEEE Transactions on NanoBioscience, vol. 11, no.2,
pp.135-148, Jun. 2012.
[3] S. M. R. Islam, D. Kwak, M. H. Kabir, M. Hossain, and K. S. Kwak, "The
Internet of Things for Healthcare: A Comprehensive Survey," IEEE
Access, vol. 3, pp. 678-708, Jun. 2015.
[4] S. M. R. Islam, M. N. Uddin, K. S. Kwak, "The IoT: Exciting Possibilities
for Bettering Lives," IEEE Consumer Electronics Magazine, vol. 5, no.
2, pp. 49-57, Apr. 2016.
[5] I. F. Akyildiz, M. Pierobon, S. Balasubramaniam and Y. Koucheryavy,
"The internet of Bio-Nano things," IEEE Communications Magazine, vol.
53, no. 3, pp. 32-40, March 2015.
[6] V. Loscri, C. Marchal, N. Mitton, G. Fortino, and A.V. Vasilakos,
"Security and Privacy in Molecular Communication and Networking:
Opportunities and Challenges," IEEE Transactions on NanoBioscience,
vol. 13, no. 3, pp. 198-207, Sep. 2014.
[7] F. Dressler, and F. Kargl, “Security in Nano Communication: Challenges
and Open Research Issues," in Proc. IEEE Int. Conf. on Communications,
pp. 6183-6187, Jun. 2012.
[8] C. Castelluccia, and G. Avoine, "Noisy Tags: A Pretty Good Key
Exchange Protocol for RFID Tags", in Proceedings of CARDIS 2006,
LNCS 3928, pp. 289-299, 2006.
[9] A. Juels, R. Rivest, and M. Szydlo, “The Blocker Tag: Selective Blocking
of RFID Tags for Consumer Privacy,” In: Atluri, V. (ed.) Conf. on
Computer and Communications Security, Washington, DC, USA, pp.
103–111. ACM Press, New York, Oct. 2003.
[10] E. Haselsteiner, and K. Breitfuß, “Security in Near Field Communication
(NFC): Strengths and Weaknesses,” In Workshop on RFID Security,
2006.
[11] H. Karl, and A. Willing, “Protocols and Architectures for Wireless Sensor
Networks,” John Wiley & Sons Ltd, Chichester, England, 2005.
[12] M. H. Kabir, S. M. R. Islam, and K. S. Kwak, "D-MoSK Modulation in
Molecular Communications," IEEE Transactions on NanoBioscience,
vol. 14, no. 6, pp. 680-683, Jun. 2015.
WHEN IS R ⋉ I AN ALMOST GORENSTEIN LOCAL RING?
arXiv:1704.05961v1 [] 20 Apr 2017
SHIRO GOTO AND SHINYA KUMASHIRO
Abstract. Let (R, m) be a Gorenstein local ring of dimension d > 0 and let I be an
ideal of R such that (0) ≠ I ⊊ R and R/I is a Cohen-Macaulay ring of dimension d.
A complete answer is given to the question of when the idealization A = R ⋉ I of I over R is an almost Gorenstein local ring.
1. Introduction
Let (R, m) be a Gorenstein local ring of dimension d > 0 with infinite residue class
field. Assume that R is a homomorphic image of a regular local ring. With this notation
the purpose of this paper is to prove the following theorem.
Theorem 1.1. Let I be a non-zero ideal of R and suppose that R/I is a Cohen-Macaulay
ring of dimension d. Let A = R⋉I denote the idealization of I over R. Then the following
conditions are equivalent.
(1) A = R ⋉ I is an almost Gorenstein local ring.
(2) R has the presentation R = S/[(X) ∩ (Y )] where S is a regular local ring of
dimension d + 1 and X, Y is a part of a regular system of parameters of S such
that I = XR.
The notion of almost Gorenstein local ring (AGL ring for short) is one of the generalizations of Gorenstein rings, which originated in the paper [1] of V. Barucci and R. Fröberg
in 1997. They introduced the notion for one-dimensional analytically unramified local
rings and developed a beautiful theory, investigating the semigroup rings of numerical
semigroups. In 2013 the first author, N. Matsuoka, and T. T. Phuong [5] extended the
notion to arbitrary Cohen-Macaulay local rings but still of dimension one. The research of
[5] has been succeeded by two works [11] and [3] in 2015 and 2017, respectively. In [3] one
can find the notion of 2-almost Gorenstein local ring (2-AGL ring for short) of dimension
one, which is a generalization of AGL rings. Using the Sally modules of canonical ideals,
the authors show that 2-AGL rings behave well as if they were twins of AGL rings. The research [11] of the first author, R. Takahashi, and N. Taniguchi started in a different direction. They have extended the notion of AGL ring to higher dimensional
Cohen-Macaulay local/graded rings, using the notion of Ulrich modules ([2]). Here let us
briefly recall their definition for the local case.
The first author was partially supported by JSPS Grant-in-Aid for Scientific Research (C) 25400051.
Definition 1.2. Let (R, m) be a Cohen-Macaulay local ring of dimension d, possessing
the canonical module KR . Then we say that R is an AGL ring, if there exists an exact
sequence
0 → R → KR → C → 0
of R-modules such that either C = (0) or C ≠ (0) and µR (C) = e0m (C), where µR (C)
denotes the number of elements in a minimal system of generators of C and
e0m (C) = limn→∞ (d − 1)! · ℓR (C/mn+1 C)/nd−1
denotes the multiplicity of C with respect to the maximal ideal m (here ℓR (∗) stands for
the length).
We explain a little about Definition 1.2. Let (R, m) be a Cohen-Macaulay local ring
of dimension d and assume that R possesses the canonical module KR . The condition
of Definition 1.2 requires that R is embedded into KR and even though R ≠ KR , the
difference C = KR /R between KR and R is an Ulrich R-module ([2]) and behaves well.
In particular, the condition is equivalent to saying that mC = (0), when dim R = 1 ([11,
Proposition 3.4]). In general, if R is an AGL ring of dimension d > 0, then Rp is a
Gorenstein ring for every p ∈ Ass R, because dimR C ≤ d − 1 ([11, Lemma 3.1]).
The research on almost Gorenstein local/graded rings is still in progress, exploring, e.g.,
the problem of when the Rees algebras of ideals/modules are almost Gorenstein rings (see
[6, 7, 8, 9, 10, 15]) and the reader can consult [11] for several basic results on almost
Gorenstein local/graded rings. For instance, non-Gorenstein AGL rings are G-regular in
the sense of [14] and all the known Cohen-Macaulay local rings of finite Cohen-Macaulay
representation type are AGL rings. Besides, the authors explored the question of when
the idealization A = R ⋉ M is an AGL ring, where (R, m) is a Cohen-Macaulay local ring
and M is a maximal Cohen-Macaulay R-module. Because A = R ⋉ M is a Gorenstein
ring if and only if M ≅ KR as an R-module ([13]), this question seems quite natural and
in [11, Section 6] the authors actually gave a complete answer to the question in the case
where M is a faithful R-module, that is the case (0) :R M = (0). However, the case
where M is not faithful has been left open, which our Theorem 1.1 settles in the special
case where R is a Gorenstein local ring and M = I is an ideal of R such that R/I is
a Cohen-Macaulay ring with dim R/I = dim R. For the case where dim R/I = d but
depth R/I = d − 1 the question remains open (see Remark 2.6).
2. Proof of Theorem 1.1
The purpose of this section is to prove Theorem 1.1. To begin with, let us fix our
notation. Unless otherwise specified, throughout this paper let (R, m) be a Gorenstein
local ring with d = dim R > 0. Let I be a non-zero ideal of R such that R/I is a
Cohen-Macaulay ring with dim R/I = d. Let A = R ⋉ I be the idealization of I over R.
Therefore, A = R ⊕ I as an R-module and the multiplication in A is given by
(a, x)(b, y) = (ab, bx + ay)
where a, b ∈ R and x, y ∈ I. Hence A is a Cohen-Macaulay local ring with dim A = d,
because I is a maximal Cohen-Macaulay R-module.
For each R-module N let N ∨ = HomR (N, R). We set L = I ∨ ⊕ R and consider L to
be an A-module under the following action of A
(a, x) ◦ (f, y) = (af, f (x) + ay),
where (a, x) ∈ A and (f, y) ∈ L. Then it is standard to check that the map
A∨ → L, α ↦ (α ◦ j, α(1))
is an isomorphism of A-modules, where j : I → A, x 7→ (0, x) and 1 = (1, 0) denotes the
identity of the ring A. Hence by [12, Satz 5.12] we get the following.
Fact 2.1. KA = L, where KA denotes the canonical module of A.
We set J = (0) :R I. Let ι : I → R denote the embedding. Then taking the R-dual of
the exact sequence
0 → I −ι→ R → R/I → 0,
we get the exact sequence
0 → (R/I)∨ → R∨ −ι∨→ I ∨ → 0 = Ext1R (R/I, R) → · · ·
of R-modules, which shows I ∨ = R·ι. Hence J = (0) :R I ∨ because I = I ∨∨ ([12, Korollar
6.8]), so that I ∨ = R·ι ≅ R/J as an R-module. Hence I ≅ (R/J)∨ = KR/J ([12, Satz 5.12]). Therefore, taking again the R-dual of the exact sequence
0 → J → R∨ −ι∨→ I ∨ → 0,
we get the exact sequence 0 → I −ι→ R → J ∨ → 0 of R-modules, whence J ∨ ≅ R/I, so that J ≅ (R/I)∨ = KR/I . Summarizing the arguments, we get the following.
Fact 2.2. I ≅ (R/J)∨ = KR/J and J ≅ (R/I)∨ = KR/I .
Notice that r(A) = 2 by [12, Satz 6.10] where r(A) denotes the Cohen-Macaulay type
of A, because A is not a Gorenstein ring (as I ≇ R; see [13]) but KA is generated by two
elements; KA = R·(ι, 0) + R·(0, 1).
We denote by M = m × I the maximal ideal of A. Let us begin with the following.
Lemma 2.3. Let d = 1. Then the following conditions are equivalent.
(1) A is an AGL ring.
(2) I + J = m.
When this is the case, I ∩ J = (0).
Proof. (2) ⇒ (1) We set f = (ι, 1) ∈ KA and C = KA /Af . Let α ∈ m and β ∈ I. Let us
write α = a + b with a ∈ I and b ∈ J. Then because
(α, 0)(0, 1) = (0, α) = (bι, a + b) = (b, a)(ι, 1), (0, β)(0, 1) = (0, 0),
we get MC = (0), whence A is an AGL ring.
(1) ⇒ (2) We have I ∩ J = (0). In fact, let p ∈ Ass R and set P = p × I. Hence
P ∈ Min A. Assume that IRp ≠ (0). Then since AP = Rp ⋉ IRp and AP is a Gorenstein local ring, IRp ≅ Rp ([13]), so that JRp = (0). Therefore, (I ∩ J)Rp = (0) for every
p ∈ Ass R, whence I ∩ J = (0).
Now consider the exact sequence
0 → A −ϕ→ KA → C → 0
of A-modules such that MC = (0). We set f = ϕ(1). Then f ∉ MKA by [11, Corollary 3.10], because A is not a discrete valuation ring (DVR for short). We identify KA = I ∨ × R (Fact 2.1) and write f = (aι, b) with a, b ∈ R. Then a ∉ m or b ∉ m, since f = (a, 0)(ι, 0) + (b, 0)(0, 1) ∉ MKA .
Firstly, assume that a ∉ m. Without loss of generality, we may assume a = 1, whence
f = (ι, b). Let α ∈ m. Then since (α, 0)(0, 1) ∈ Af , we can write (α, 0)(0, 1) = (r, x)(ι, b)
with some r ∈ R and x ∈ I. Because
(0, α) = (α, 0)(0, 1) = (r, x)(ι, b) = (rι, x + rb),
we get
r ∈ (0) :R ι = J,
α = x + rb ∈ I + J.
Therefore, m = I + J.
Now assume that a ∈ m. Then since b ∉ m, we may assume b = 1, whence f = (aι, 1).
Let α ∈ m and write (α, 0)(ι, 0) = (r, x)(aι, 1) with r ∈ R and x ∈ I. Then since
(αι, 0) = ((ra)ι, ax + r), we get
α − ra ∈ J,
r = −xa ∈ (a),
so that α ∈ J + (a2 ) ⊆ J + m2 , whence m = J. Because I ∩ J = (0), this implies I = (0),
which is absurd. Therefore, a ∉ m, whence I + J = m.
Corollary 2.4. Let d = 1. Assume that A = R ⋉ I is an AGL ring. Then both R/I
and R/J are discrete valuation rings and µR (I) = µR (J) = 1. Consequently, if R is a
homomorphic image of a regular local ring, then R has the presentation
R = S/[(X) ∩ (Y )]
for some two-dimensional regular local ring (S, n) with n = (X, Y ), so that I = (x) and
J = (y), where x, y respectively denote the images of X, Y in R.
Proof of Corollary 2.4. Since I + J = m and I ∩ J = (0), KR/I ≅ J ≅ m/I by Fact 2.2. Hence R/I is a DVR by Burch’s Theorem (see, e.g., [4, Theorem 1.1 (1)]), because idR/I m/I = idR/I KR/I = 1 < ∞, where idR/I (∗) denotes the injective dimension. We similarly get that R/J is a DVR, since KR/J ≅ I ≅ m/J. Consequently, µR (I) = µR (J) = 1. We write I = (x) and J = (y). Hence m = I + J = (x, y). Since xy = 0, we have
m2 = (x2 , y 2) = (x + y)m. Therefore, v(R) = e(R) = 2 because R is not a DVR, where
v(R) (resp. e(R)) denotes the embedding dimension of R (resp. the multiplicity e0m (R)
of R with respect to m). Suppose now that R is a homomorphic image of a regular local
ring. Let us write R = S/a where a is an ideal in a two-dimensional regular local ring
(S, n) and choose X, Y ∈ n so that x, y are the images of X, Y in R, respectively. Then
n = (X, Y ), since a ⊆ n2 . We consider the canonical epimorphism
ϕ : S/[(X) ∩ (Y )] → R
and get that ϕ is an isomorphism, because
ℓS (S/(XY, X + Y )) = 2 = ℓR (R/(x + y)R) .
Thus a = (X) ∩ (Y ) and R = S/[(X) ∩ (Y )].
We note the following.
Proposition 2.5. Let S be a regular local ring of dimension d + 1 (d > 0) and let X, Y
be a part of a regular system of parameters of S. We set R = S/[(X) ∩ (Y )] and I = (x),
where x denotes the image of X in R. Then I 6= (0), R/I is a Cohen-Macaulay ring with
dim R/I = d, and the idealization A = R ⋉ I is an AGL ring.
Proof. Let y be the image of Y in R. Then (y) = (0) :R x and we have the presentation
0 → (y) → R → (x) → 0
of the R-module I = (x), whence A = R[T ]/(yT, T 2), where T is an indeterminate.
Therefore
A = S[T ]/(XY, Y T, T 2 ).
Notice that (XY, Y T, T 2 ) is equal to the ideal generated by the 2 × 2 minors of the matrix M with rows (X, Y, T ) and (T, Y, 0), and we readily get by [11, Theorem 7.8] that A = R ⋉ I is an AGL ring,
because X, Y, T is a part of a regular system of parameters of the regular local ring S[T ]P ,
where P = nS[T ] + (T ).
We are now ready to prove Theorem 1.1.
Proof of Theorem 1.1. By Proposition 2.5 we have only to show the implication (1) ⇒ (2).
Consider the exact sequence
0 → A → KA → C → 0
6
SHIRO GOTO AND SHINYA KUMASHIRO
of A-modules such that C is an Ulrich A-module. Let M = m × I stand for the maximal
ideal of A. Then since mA ⊆ M ⊆ mA (here mA denotes the integral closure of mA)
and the field R/m is infinite, we can choose a superficial sequence f1 , f2 , . . . , fd−1 ∈ m for
C with respect to M so that f1 , f2 , . . . , fd−1 is also a part of a system of parameters for
both R and R/I. We set q = (f1 , f2 , . . . , fd−1 ) and R̄ = R/q. Let Ī = (I + q)/q and J̄ = (J + q)/q. Then since f1 , f2 , . . . , fd−1 is a regular sequence for R/I, by the exact
sequence
0 → I → R → R/I → 0
we get the exact sequence
0 → I/qI → R → R/(I + q) → 0,
so that I/qI ≅ Ī as an R-module. Hence A/qA = R̄ ⋉ (I/qI) ≅ R̄ ⋉ Ī.
Remember that A/qA is an AGL ring by [11, Theorem 3.7], because f1 , f2 , . . . , fd−1 is a superficial sequence of C with respect to M and f1 , f2 , . . . , fd−1 is an A-regular sequence.
Consequently, thanks to Corollary 2.4, R̄/Ī is a DVR and µR̄ (Ī) = 1. Hence R/I is a regular local ring and µR (I) = 1, because I/qI ≅ Ī. Let I = (x). Then R/J ≅ I = (x),
since J = (0) :R I. Because f1 , f2 , . . . , fd−1 is a regular sequence for the R-module I,
f1 , f2 , . . . , fd−1 is a regular sequence for R/J, so that we get the exact sequence
0 → J/qJ → R̄ → R/(J + q) → 0.
Therefore, J̄ ≅ J/qJ and since R/(J + q) ≅ I/qI ≅ Ī, we have J̄ = (0) :R̄ Ī. Hence R/J is a regular local ring and µR (J) = 1, because R̄/J̄ is a DVR and µR̄ (J̄) = 1 by Corollary 2.4.
Let J = (y) and let m̄ = m/q. Then by Lemma 2.3 we have m̄ = Ī + J̄, whence m = (x, y, f1 , f2 , . . . , fd−1 ). Therefore µR (m) = d + 1, since R is not a regular local
ring. On the other hand, since both R/I and R/J are regular local rings, considering the
canonical exact sequence
0 → R → R/I ⊕ R/J → R/(I + J) → 0
(notice that I ∩ J = (0) for the same reason as in the proof of Lemma 2.3), we readily
get e(R) = 2. We now choose a regular local ring (S, n) of dimension d + 1 and an ideal
a of S so that R = S/a. Let X, Y, Z1 , Z2 , . . . , Zd−1 be the elements of n whose images in
R are equal to x, y, f1, f2 , . . . , fd−1 , respectively. Then n = (X, Y, Z1 , Z2 , . . . , Zd−1 ), since
a ⊆ n2 . Because (X) ∩ (Y ) ⊆ a as xy = 0, we get a surjective homomorphism
S/[(X) ∩ (Y )] → R
of rings, which has to be an isomorphism, because both the Cohen-Macaulay local rings
S/[(X) ∩ (Y )] and R have the same multiplicity 2. This completes the proof of Theorem
1.1.
Remark 2.6. Let (S, n) be a two-dimensional regular local ring and let X, Y be a regular
system of parameters of S. We set R = S/[(X) ∩(Y )]. Let x, y denote the images of X, Y
in R, respectively. Let n ≥ 2 be an integer. Then dim R/(xn ) = 1 but depth R/(xn ) = 0.
We have xn = xn−1 (x + y), whence (xn ) ≅ (x) as an R-module because x + y is a nonzerodivisor of R, so that R ⋉ (xn ) is an AGL ring (Proposition 2.5). This example shows
that there are certain ideals I in Gorenstein local rings R of dimension d > 0 such that
dim R/I = d and depth R/I = d − 1, for which the idealizations R ⋉ I are AGL rings.
However, we do not know how to control them.
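The isomorphism (x^n) ≅ (x) invoked in the remark rests on the following computation in R, using xy = 0 and n ≥ 2:

```latex
x^{\,n-1}(x+y) \;=\; x^{n} + x^{\,n-1}y \;=\; x^{n} + x^{\,n-2}(xy) \;=\; x^{n} \qquad (n \ge 2,\ xy = 0).
```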
Department of Mathematics, School of Science and Technology, Meiji University,
1-1-1 Higashi-mita, Tama-ku, Kawasaki 214-8571, Japan
E-mail address: [email protected]
Department of Mathematics and Informatics, Graduate School of Science and Technology, Chiba University, Chiba-shi 263, Japan
E-mail address: [email protected]
arXiv:1508.04753v1 [] 18 Aug 2015
Cold Object Identification in the Java Virtual
Machine
Kim T. Briggs∗, Baoguo Zhou†, Gerhard W. Dueck‡
August 20, 2015
Abstract
Many Java applications instantiate objects within the Java heap that are persistent but seldom if ever referenced by the application. Examples include strings, such as error messages, and collections of value objects that are preloaded for fast access but may thereafter be seldom referenced. This paper describes a stack-based framework for detecting these “cold” objects at runtime, with a view to marshaling and sequestering them in designated regions of the heap where they may be preferentially paged out to a backing store, thereby freeing physical memory pages for occupation by more active objects. Furthermore, we evaluate the correctness and efficiency of the stack-based approach with an Access Barrier. The experimental results from a series of SPECjvm2008 benchmarks are presented.
For submission to ‘Software: Practice and Experience’
1 Introduction
Long-running Java applications [7], such as web servers and network security
monitors, may preload and retain large numbers of objects within the Java
heap [10] in order to allow fast application access at runtime. In many cases,
some of these objects are infrequently referenced by the application. We refer
to these objects, which are persistent in the Java heap but seldom referenced,
as cold objects.
The presence of cold objects in the heap is problematic insofar as they may be
collocated in virtual memory [12] with more active objects. Any page of virtual
memory that contains all or part of a cold object may also contain parts of more
active objects, and application references to the active objects will prevent the
∗ IBM Canada, 770 Palladium Drive, Ottawa, ON, Canada, E-mail: [email protected]
† Faculty of Computer Science, University of New Brunswick, Fredericton, E3B 5A3, N.B., Canada, E-mail: [email protected]
‡ Faculty of Computer Science, University of New Brunswick, Fredericton, E3B 5A3, N.B., Canada, E-mail: [email protected]
page from being swapped out of virtual memory. As a result, large applications
that commit most or all of the available physical memory may experience undue
memory pressure. Additionally, when active objects and cold objects are co-located, accessing an active object may load neighboring cold objects into the cache at the same time, even though the cold objects will never be accessed. Cold objects therefore degrade cache-hit performance.
If cold objects are collected and moved to cold regions, both page-fault and cache-miss rates could be reduced. Furthermore, cold regions can be
excluded from Garbage Collection (GC) [10], if they contain only leaf objects
and primitive arrays. Therefore, pause times caused by the GC can be reduced
as well.
As the cold area becomes populated with cold objects, operating system
primitives such as madvise() [11] may be used to inform the operating system
that pages mapped to the cold area may preferentially be swapped out of resident memory, thereby freeing those pages for occupation by more active objects.
The cold area can then be monitored, for example, with continuous reference
sampling and periodic calls to mincore(), to detect when presumed cold objects
become active and to take appropriate action.
Management of cold objects is most relevant in the context of long-running
applications with large heap requirements. For that reason, the balanced garbage
collection [2, 1, 15] (GC) framework was selected as a basis for investigating cold
object management. The balanced collector improves application performance
by partitioning the heap into a large number of regions of equal size and limiting
the number of regions that are included for collection in partial GC cycles. This
reduces the frequency of more time-consuming global GC cycles, which involve
all heap regions.
In this paper, we present a stack-based framework to identify cold objects. Cold objects have been identified and harvested successfully in many
SPECjvm2008 [16, 17] applications. At the same time, we evaluate the correctness and efficiency of the stack-based solution with an Access Barrier mechanism [14]. All experiments are performed on IBM’s J9 Virtual Machine (JVM) [3].
2 Stack-Based Cold Object Identification Framework
A stack-based framework is used to identify and harvest cold objects. The main idea is to periodically walk thread-local stacks
and mark active references. Whenever a Java method is invoked, a new stack
frame will be generated. Since local variables and passed arguments [7] will be
stored in the stack frame, object references corresponding to local variables and
passed arguments in the current stack frame are considered to be active. After
a period of time when no new active objects are being discovered, subtraction
of the collection of objects found to be active from the collection of live objects
reveals the collection of live, inactive (cold) objects. Once they have been identified, cold objects can be harvested from the main heap and sequestered in a
designated cold area.
2.1 Cold Region Reservation and Instrumentation
When the JVM [10] starts, a preset number of contiguous regions are reserved
for the cold area. Cold regions are excluded from copy forward and compaction
during partial GC [15] cycles, except to receive objects that have been identified
as cold and copied from pinned regions, as described below.
In order to preclude the need to traverse cold regions during mark/sweep [9,
18] actions, only arrays of primitive data (e.g., char[]) and leaf objects (objects
with no reference-valued fields) are considered as collectible cold objects. This
constraint, in conjunction with the leaf object optimization feature available in
the IBM Java virtual machine [3], ensures that objects within cold regions can
be correctly marked but not touched by the marking scheme. This constraint
can be relaxed to include Java objects that contain only primitive-valued fields.
Objects that have been sequestered in cold regions are monitored for reference activity. To that end, each cold region is instrumented with an activity
map containing one bit per potential object location within the region, as for
the collector’s mark map. The stacks of all mutator threads are periodically
walked to collect active heap references. Any mutator reference to an object
within a cold region is marked by setting the corresponding bit in the activity
map.
The number and total size of objects that are sequestered in the cold area
and the incidence of activity involving these objects are the main outcomes of
interest for this paper.
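The activity map described above can be sketched as follows. This is an illustrative model only; the class and method names are ours, not taken from the J9 sources, and the slot size stands in for the collector's minimum object alignment:

```java
import java.util.BitSet;

// Illustrative activity map for a heap region: one bit per potential object
// location, mirroring the collector's mark map. Names are our own, not J9's.
public class ActivityMap {
    private final BitSet bits;
    private final long regionBase;
    private final int slotSize; // minimum object alignment in bytes (assumed)

    public ActivityMap(long regionBase, int regionSize, int slotSize) {
        this.regionBase = regionBase;
        this.slotSize = slotSize;
        this.bits = new BitSet(regionSize / slotSize);
    }

    // Called when a mutator stack reference into this region is sampled.
    public void markActive(long objectAddress) {
        bits.set((int) ((objectAddress - regionBase) / slotSize));
    }

    public boolean isActive(long objectAddress) {
        return bits.get((int) ((objectAddress - regionBase) / slotSize));
    }
}
```

Setting a bit twice is idempotent, which is why redundant samples from repeated stack walks are harmless.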
2.2 Pinned Region Selection and Instrumentation
Because objects might be moved by copy forward or compaction [5] while active objects are being marked on the heap, some regions are selected to be pinned. Pinned regions are excluded from partial GC collection sets [10]. That
means objects in the pinned region will not move, which facilitates tracking and
marking active objects.
The balanced garbage collector [1] assigns a logical age to each heap region,
which reflects the number of GC cycles that the contained objects have survived.
Allocations of new objects occur in the youngest regions (age 0), and persistent
objects are progressively copied forward into increasingly older regions until
they are copied into tenured regions (age 24). In the balanced GC, tenured
regions are excluded from partial GC collection sets.
In order to enable detection of cold objects, a number of tenured regions
are first pinned so that they are excluded from partial GC collection sets [15]. This ensures that objects contained within pinned regions maintain
a fixed location within the mutator address space. Pinned regions are also
instrumented with activity maps to record which objects have been sampled
from mutator stacks. Cold objects can be identified within a pinned region,
after a preset amount of time Tcold has elapsed since the most recent setting (0
→ 1) of an activity bit, by subtracting the activity map from the mark map.
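The map subtraction just described is a straightforward bitwise operation; a minimal sketch (class name ours) using `java.util.BitSet`:

```java
import java.util.BitSet;

// Sketch of cold object identification: subtract the activity map from the
// mark map, leaving set bits only for live objects never sampled as active.
public class ColdIdentification {
    public static BitSet coldBits(BitSet markMap, BitSet activityMap) {
        BitSet cold = (BitSet) markMap.clone();
        cold.andNot(activityMap); // live AND not recently referenced => cold
        return cold;
    }
}
```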
2.2.1 Pinned Region Selection.
The region pinning framework partitions the regions of the balanced GC heap
into four collections:
1. Young regions (age < 24)
2. Unpinned regions (age 24, not pinned)
3. Pinned regions (age 24, pinned)
4. Cold regions (the cold area)
Unpinned regions are considered for pinning at the end of each partial GC
cycle. They are selectable if they have an allocation density d (ratio of allocated
bytes to region size R) exceeding a preset threshold Dhi . Additionally, the
total size of potentially collectible cold objects contained in the region must be
greater than 0.01R. Selectable regions are ranked at the end of each partial GC
cycle according to the region pinning metric value P that reflects the volume of
activity in each region:
P = mma(r) ∗ d    (1)
where r is the number of mutator references to contained objects since the end of the previous partial GC cycle; r reflects object activity in the region, since any reference found on a stack frame is considered to be active. The
mma(r) is the modified moving average of r with a smoothing factor of 0.875:

mma(r0) = 0;  mma(rn) = (7 ∗ mma(rn−1) + rn) / 8,  n > 0    (2)
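Equations (1) and (2) can be computed incrementally, one update per partial GC cycle. The following is a sketch under that reading; the class is illustrative and not from the actual JVM sources:

```java
// Sketch of the region pinning metric from equations (1) and (2).
// One update() call per partial GC cycle; names are illustrative.
public class PinningMetric {
    private double mma = 0.0; // mma(r0) = 0

    // Fold in the reference count r observed during the latest cycle.
    public double update(long r) {
        mma = (7.0 * mma + r) / 8.0; // smoothing factor 0.875
        return mma;
    }

    // P = mma(r) * d, where d is the region's allocation density.
    public double metric(double density) {
        return mma * density;
    }
}
```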
The maximum number of regions that may be pinned at any time is determined by a preset parameter Pmax. At the end of every partial GC cycle, if the number of currently pinned regions n is less than Pmax, up to Pmax − n selectable regions may be pinned.
Two strategies for selecting regions for pinning were implemented. The
pinning strategy is determined by a JVM parameter that is interpreted when the
JVM starts and remains fixed while the mutator runs. With selective pinning,
only the most active selectable regions are pinned. An active selectable region
must satisfy mma(r) > r > 0 and sum(r) > R, where sum(r) is the sum of
r over all previous partial GC cycles. The average pinning metric value Pavg
from the collection of all selectable regions with non-zero pinning metric value is
computed and only regions satisfying P > Pavg are selectable for pinning. The
activity maps for these regions should converge more quickly and cold object
identification should be more accurate after a period (Tcold ) of quiescence.
The alternative pinning strategy, unselective pinning, pins selectable tenured regions in decreasing order of pinning metric value, up to a preset maximum (Pmax). Cold objects will be found only in tenured regions that persist in the
heap. Unselective pinning should converge to a pinned region collection that
contains all of these regions.
In either case, pinned regions are unpinned, at the end of every partial GC
cycle, if their density falls below a preset low density threshold Dlo , the total
mass of eligible objects (primitive arrays) falls below 0.01R, or they survive a
period of inactivity > Tcold and the contained collectible cold objects are moved
into the cold area.
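The unpinning conditions above amount to a simple end-of-cycle predicate. A sketch, with illustrative names and the thresholds (Dlo, 0.01R, Tcold) taken from the text:

```java
// Illustrative end-of-cycle unpinning test for a pinned region.
public class UnpinPolicy {
    public static boolean shouldUnpin(double density, double dLo,
                                      long collectibleBytes, long regionSize,
                                      long now, long tInactive, long tCold) {
        return density < dLo                        // region has drained
            || collectibleBytes < 0.01 * regionSize // too few eligible objects
            || (now - tInactive) > tCold;           // quiescent: cold-collect it
    }
}
```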
All pinned regions are unpinned at the start of a global GC cycle, or when
the cold area becomes full. Pinning is resumed after the global GC completes
or when space becomes available in the cold area.
2.2.2 Pinned Region Instrumentation.
When a region is pinned, it is instrumented with an activity map to track
reference activity within the region. The activity map contains an activity bit
for each mark bit. Activity bits are initially 0 and are set when a reference to the
corresponding object is sampled. The region is also walked to assess the number
of marked objects nmarked , the number of marked collectible objects ncollectible ,
and the respective total sizes mmarked and mcollectible of these collections.
Three timestamps are maintained to record the time tpinned at which the
region was pinned, the time tinactive of the most recent setting of an activity bit,
and the time twalked that the region was most recently walked. Pinned regions
are walked, and twalked is updated to the current time t, whenever t − twalked >
Tcold /4. Current values for nmarked, ncollectible, mmarked, mcollectible, and d are
obtained each time the region is walked.
Over time, the rate at which activity bits are set will diminish, until a period
of time > Tcold has elapsed with no new activity bits set. The collectible cold
objects in this region can then be identified and copied into the cold area.
2.2.3 Mutator Thread Instrumentation.
The primary sources for reference collection are the mutator stacks, which are
periodically walked down from the top-most frame until a frame is reached that has not been active since the most recent stack walk. Frame equality is determined on
the basis of the frame base pointer and a hash of the stack contents between
the frame base and stack pointers. Each mutator thread is instrumented with a
fixed-length buffer for reference collection and two arrays of stack frame traces–
one to hold traces from the most recent stack walk and one to hold traces from
the current stack walk.
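The frame-equality test described above can be modeled as a small value object; this is a sketch with our own names, not the framework's actual types:

```java
// Illustrative stack frame trace: frame equality is decided by the frame
// base pointer plus a hash of the stack contents between the frame base
// and stack pointers.
public class FrameTrace {
    private final long basePointer;
    private final int contentHash;

    public FrameTrace(long basePointer, int contentHash) {
        this.basePointer = basePointer;
        this.contentHash = contentHash;
    }

    // True if this frame appears unchanged since the previous stack walk.
    public boolean sameAs(FrameTrace previous) {
        return previous != null
            && basePointer == previous.basePointer
            && contentHash == previous.contentHash;
    }
}
```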
References from each active frame are added to the mutator’s reference
buffer. Stack walks are discontinued if the reference buffer overflows (collected
references are retained for activity map updates). If the stack frame buffer for
the current stack overflows, the mutator continues to walk the stack and collect
references while matching current frame base pointer and hash against the previous stack until a match is found. The next previous stack is then composed
from the head of the current stack and the tail of the previous stack. Any missing frames between these stack segments are of little consequence–if the next
stack walk continues past the end of the head segment it will fall through to
the tail segment and eventually find a match, and setting the activity bits for
redundant samples collected from frames with missing traces is idempotent.
In addition, two timestamps wstart and wend are maintained for each mutator thread to record the start and end times of the most recent stack walk.
References collected from stack walks started before the most recent GC cycle
are discarded.
2.2.4 Activity Sampling Daemon.
When the JVM is initialized, a thread activity sampling daemon is started to
control reference activity sampling when mutator threads are executing and to
harvest references collected by mutator threads.
The daemon thread remains in a paused state during GC cycles. Between
GC cycles the daemon interacts with mutator threads by polling each active
mutator thread at approximately 1 millisecond intervals. During each polling
cycle, the daemon instruments each previously uninstrumented mutator thread
and signals it to start a stack walk. It harvests collected reference samples from
previously instrumented mutator threads that have completed a stack walk and
signals these threads to start a new stack walk. Mutator threads receive these
signals and commence the stack walk at their safe point [8].
For each harvested reference sample the daemon increments the reference
activity counter r for the containing region (young, pinned, unpinned, or cold).
Additionally, if the referenced object is contained in a pinned or cold region,
the daemon sets the corresponding bit in the region’s activity map. No explicit
synchronization is required to set or test the activity bits, since they are set
only on the daemon thread and tested only on the master GC thread, and these
threads never access region activity maps concurrently.
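One polling cycle of the daemon can be sketched as below. This is a simplified, assumption-laden model (the Mutator stub, method names, and harvesting order are ours; the real daemon also updates region counters and activity maps, and pauses during GC cycles):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of one polling cycle of the activity sampling daemon (~1 ms period).
public class SamplingDaemon {
    // Minimal stand-in for a mutator thread's sampling state.
    public static class Mutator {
        public boolean instrumented = false;
        public boolean walkComplete = false;
        public final List<Long> samples = new ArrayList<>();
    }

    // Instruments new threads; harvests samples from threads that finished a
    // stack walk and signals them to start another (honored at a safe point).
    public static List<Long> pollOnce(List<Mutator> mutators) {
        List<Long> harvested = new ArrayList<>();
        for (Mutator t : mutators) {
            if (!t.instrumented) {
                t.instrumented = true;       // instrument and start first walk
            } else if (t.walkComplete) {
                harvested.addAll(t.samples); // collect completed walk's samples
                t.samples.clear();
                t.walkComplete = false;      // signal a new stack walk
            }
        }
        return harvested;
    }
}
```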
2.2.5 Cold Object Collection.
The region pinning framework attempts to pin a collection of tenured regions
that contains as much of the mutator’s active working set as possible. This
may seem counterintuitive, given that we are attempting to identify persistent
objects that are almost never in the working set. Cold object identification is
like looking for shadows in a windowless room—they are easier to see when the
lights are turned on. Pinning the most active regions is expected to reduce the likelihood of identifying as cold those objects that are merely dormant in the context of current mutator activity. For pinned regions that receive few active references, all or most objects would be identified as cold under a fixed Tcold threshold.
In the presence of high reference activity, the activity map of a pinned region
can be expected to converge more quickly to a stable state where no new activity
bits are being set. After a fixed time Tcold has elapsed since the last change
to the state of the activity map, the pinned region will be included in the copy
forward collection set for the next partial GC cycle, if the pinned region has an
accurate remembered set card list and no critical regions in use. When a pinned
region is included in the copy forward collection set, collectible cold objects are
copied into the next available cold region while all other objects are copied into
other unpinned regions. After all objects have been copied out the region is
unpinned.
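The destination choice during this copy forward reduces to the eligibility rule stated in Section 2.1. A sketch (names and enum are illustrative):

```java
// Sketch of destination selection during copy forward of a quiescent
// pinned region: collectible cold objects go to the cold area, all other
// objects go to ordinary unpinned regions.
public class CopyForwardPolicy {
    public enum Destination { COLD_AREA, UNPINNED }

    public static Destination destinationFor(boolean isPrimitiveArrayOrLeaf,
                                             boolean identifiedCold) {
        // Only primitive arrays and leaf objects are collectible as cold.
        return (isPrimitiveArrayOrLeaf && identifiedCold)
                ? Destination.COLD_AREA : Destination.UNPINNED;
    }
}
```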
Cold regions are instrumented as for pinned regions in order to allow reference activity to be tracked. At present, all objects that are copied to cold
regions remain in the cold area, without compaction or copying, until they are
collected as garbage or the mutator terminates.
3 JVM PROFILING CONFIGURATIONS
All JVMs were compiled with gcc version 4.4.7 with optimization enabled (-O3).
Two JVMs were produced for profiling:
1. linux: generic JVM with no reference sampling.
2. ssd-stack: stack sampling enabled.
The JVM run configurations are shown in Table 1.

Table 1: Running parameters

JVM        Pinning strategy   JIT       Run time(s)   Tcold(s)
linux      none               enabled   9600 x 2      900
ssd-stack  selective          enabled   9600 x 2      900
ssd-stack  unselective        enabled   9600 x 2      900
The linux and ssd-stack JVMs were each run twice with the Just-in-Time [4,
6] compiler (JIT) enabled. The only data of interest from the second runs were
the SPECjvm2008 benchmark scores, which were the basis for comparison of
overall performance of the linux versus ssd-stack JVMs. The ssd-stack JVM
was run in two modes – selective or unselective pinning – in order to permit
comparison of cold object identification between these region pinning strategies.
Four SPECjvm2008 [16] benchmarks (compiler.compiler, derby, xml.transform,
xml.validation) were selected for profiling. The linux JVM was executed twice
for each benchmark. The ssd-stack JVMs ran each benchmark twice with selective pinning and twice with unselective pinning. Each profiling run executed a
single iteration of one benchmark for 9600 seconds. The sampling interval was 1 ms, and Tcold was 15 minutes.
All profiling runs were performed on a 1.8GHz 8 Core/16 Thread (Xeon)
server running CentOS version 6.4. No other user applications were active
during any of the profiling runs.
4 RESULTS
The first three JVM run configurations from Table 1 (linux, ssd-stack/selective
pinning, ssd-stack/unselective pinning) were used to determine the runtime heap
characteristics of each SPECjvm2008 benchmark and to allow performance comparisons between the linux and ssd-stack JVMs. Heap characteristics most
salient to cold object identification with activity tracking are the numbers of
tenured regions and the distributions of mutator activity within young, unpinned, and pinned regions.
4.1 SPECjvm2008 Scores versus Linux
Runtime performance of the ssd-stack JVMs (selective and unselective pinning) versus the linux JVM was assessed using SPECjvm2008 [16] scores for
two runs of each benchmark: compiler.compiler, derby, xml.transform, and
xml.validation. The resulting benchmark scores are plotted in Figure 1.
Figure 1: Running performance of the ssd-stack JVMs. (a) Compiler.compiler score; (b) Derby score; (c) xml.transform score; (d) xml.validate score.
Performance degradation was calculated as the ratio of the difference between the linux and ssd-stack scores to the linux score. The overall average performance degradation for the ssd-stack JVMs was 0.04 (4%). Performance degradation ratios versus linux for all runs of the ssd-stack JVMs are listed in Table 2.
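The degradation ratio defined above is simply:

```java
// Performance degradation as defined in the text:
// (linux score − ssd-stack score) / linux score.
public class Degradation {
    public static double ratio(double linuxScore, double ssdStackScore) {
        return (linuxScore - ssdStackScore) / linuxScore;
    }
}
```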
Table 2: Overhead caused by the cold object feature

Benchmark run       selective, run 1   selective, run 2   unselective, run 1   unselective, run 2
compiler.compiler   0.02               0.05               0.02                 0.06
derby               0.04               0.02               0.03                 0.03
xml.transform       0.06               0.06               0.05                 0.07
xml.validation      0.04               0.03               0.06                 0.03

4.2 Garbage Collection Metrics versus Linux
Summary garbage collector metrics are shown in Table 3 for all benchmark
runs with the linux JVM and the ssd-stack JVM with unselective and selective
pinning. Compaction times were significantly lower for all of the ssd-stack JVM
runs with the compiler.compiler benchmark. Considering the average total GC
times for each pair of runs, compiler.compiler had slightly better total GC times
for ssd-stack unselective and selective pinning compared to linux (4% and 2%
lower, respectively), as did xml.transform (4% and 5%). Average total GC times
were slightly worse for derby (4% and <1% higher than linux) and xml.validation
(2%, 2%). Much of the ssd-stack GC overhead is incurred in compiling pinned
region and cold collection statistics and streaming them to a printable log file.
These statistics are informative only and can be suppressed to reduce overhead.
4.3 Region Age and Activity
Figures 2a-2b represent the relative region counts (greyscale, left axis, percentage of total marked region count) and reference counts (colored lines, right axis,
proportion of total reference count) for the young, unpinned, and pinned parts
of the heap after each partial GC cycle. Plots are presented for unselective and
selective pinning for each benchmark executed with the ssd-stack JVM, using
data collected from the first of two runs. Partial GC cycle counts are represented
on the horizontal axes.
The compiler.compiler benchmark (Figures 2a, 2b) was atypical in that it
showed a predominance of young regions (over 90% of marked regions) that
receive a relatively high proportion (almost 0.2) of reference activity. The other
benchmarks (Figures 3a − 5b) all showed a predominance of unpinned and
pinned regions that receive almost all reference activity. Regardless of pinning
strategy, they also showed a tendency for an initially high concentration of
reference activity within pinned regions that diminished over time. This is
not surprising since both pinning strategies favor selection of regions receiving
high reference activity. The activity maps of pinned regions with high reference
activity tend to converge relatively quickly. Most regions are pinned early in the
mutator lifecycle and remain pinned until they are cold collected, their contents
Table 3: GC metrics (in ms)

compiler.compiler   Compact   Copyforward   Glb. mrk   Incr. mark   Sweep    Total
linux-r1            244       3,517,715     33         1,166,619    54,731   4,739,401
linux-r2            339       3,442,073     38         1,157,656    55,105   4,655,271
unselective-r1      31        3,401,260     35         1,112,495    52,078   4,565,965
unselective-r2      99        3,314,364     31         1,088,048    51,770   4,454,374
selective-r1        35        3,445,347     38         1,128,614    52,869   4,626,962
selective-r2        37        3,405,924     33         1,112,630    51,835   4,570,519

derby               Compact   Copyforward   Glb. mrk   Incr. mark   Sweep    Total
linux-r1            12,147    587,707       40         2,884        2,552    605,641
linux-r2            8,252     581,722       35         2,916        2,051    595,282
unselective-r1      12,998    608,385       36         2,856        2,569    627,179
unselective-r2      13,222    598,226       34         2,768        2,507    617,069
selective-r1        10,879    570,283       38         3,030        2,483    587,038
selective-r2        10,000    598,301       44         2,860        2,217    613,744

xml.transform       Compact   Copyforward   Glb. mrk   Incr. mark   Sweep    Total
linux-r1            37        375,820       40         761          342      377,126
linux-r2            31        376,827       39         773          327      378,127
unselective-r1      30        362,166       35         799          328      363,486
unselective-r2      31        357,149       32         814          360      358,513
selective-r1        31        359,097       32         847          362      360,498
selective-r2        39        353,823       34         813          372      355,211

xml.validation      Compact   Copyforward   Glb. mrk   Incr. mark   Sweep    Total
linux-r1            35        707,832       33         872          283      709,135
linux-r2            97        726,825       38         952          306      728,299
unselective-r1      33        742,721       41         611          236      743,722
unselective-r2      36        725,050       29         786          269      726,251
selective-r1        30        730,070       39         958          292      731,468
selective-r2        35        725,577       36         907          274      726,909
become dereferenced, or the mutator ends. When they are cold collected the
remaining objects, active or not collectible, are redistributed to other unpinned
regions, so that active objects tend to become more diffusely scattered over
time.
Most of the abrupt drops in reference activity in Figures 3a and 3b coincide
with cold collection, while increases tend to be associated with region pinning
events. For the compiler.compiler benchmark most of the variability in region
counts involved young regions. For the other benchmarks most of the variability
involved unpinned regions. For all benchmarks the pinned region count was
relatively stable, although more replacement occurred with unselective pinning.
Ideally, pinned region selection should result in a higher proportion of reference activity within pinned regions. Also, this should be realized by pinning
as few regions as possible. By that measure, selective pinning outperformed unselective pinning for the compiler.compiler and xml.validation benchmarks and
slightly underperformed for derby and xml.transform.
Figure 2: Compiler.compiler Activity. (a) Compiler.compiler, Unselective Pinning; (b) Compiler.compiler, Selective Pinning.

The compiler.compiler workload mainly involves younger regions. It produced on average about 49 tenured regions, and most of these persisted for the
duration of the run. With unselective pinning (Figure 2a) about 32 regions were
typically pinned and they received about 33% of reference activity on average.
With selective pinning (Figure 2b) only 2 regions were typically pinned and they
received about 45% of reference activity on average.
Figure 3: Derby Activity. (a) Derby, Unselective Pinning; (b) Derby, Selective Pinning.
The derby benchmark produced a greater and more stable population of
tenured regions, with a large number (>500) persisting over the course of the
run. This is not surprising since derby loads an entire database into the heap
before the benchmarking iteration starts and retains these objects for the duration of the run. With unselective pinning (Figure 3a) the maximum number
Pmax of regions (256) were typically pinned at any time and they received about
69% of reference activity on average. With selective pinning (Figure 3b) only
5 - 6 regions were typically pinned and they received about 53% of reference
activity on average. If the Pmax limit had been removed for unselective pinning
the number of pinned regions would have risen to include more of the regions
containing portions of the derby database content. This in turn would have reduced the number of unpinned regions available for compaction and tail filling,
forcing allocation of new regions to receive aging heap objects. Selective pinning
performed almost as well for derby with at most 6 pinned regions.
Figure 4: xml.transform Activity. (a) xml.transform, Unselective Pinning; (b) xml.transform, Selective Pinning.
The xml.transform benchmark used an average of about 290 tenured regions,
of which only about 40 persisted throughout the run. With unselective pinning
(Figure 4a) about 23 regions were pinned, on average, and they received about
38% of reference activity on average. With selective pinning (Figure 4b) only 6
regions were typically pinned and they received about 31% of reference activity
on average.
Figure 5: xml.validate Activity. (a) xml.validate, Unselective Pinning; (b) xml.validate, Selective Pinning.
The xml.validation benchmark used about 200 tenured regions, with about
13 persisting for the duration of the run. With unselective pinning (Figure 5a)
only about 7 regions were pinned at any time, and they received about 72%
of reference activity on average. With selective pinning (Figure 5b) only 2
regions were typically pinned and they received about 78% of reference activity
on average.
4.4 Cold Object Collection
Cold objects are collected into the cold area when their containing pinned regions
have passed a period (Tcold) during which no new references into the region are sampled. The
number and total size of cold objects collected into the cold area, and the
number of references into the cold area, are summarized in Table 4 for all runs
of each SPECjvm2008 [16] benchmark profiled. The statistics for cold references
include the total reference count and the number of distinct objects referenced.
For all benchmark runs unselective pinning produced the greatest collection of
cold objects, but it also tended to result in a higher count of references into
the cold area, especially for xml.transform. In all cases, the number of distinct
objects referenced was small, regardless of pinning strategy.
Table 4: The number and the size of cold objects. The Cold References column shows the total reference count, with the number of distinct objects referenced in parentheses.

compiler.compiler     Cold Objects   Cold Bytes    Cold References
unselective, run1     24,452         4,498,952     3 (2)
unselective, run2     15,717         8,009,752     0
selective, run1       0              0             0
selective, run2       0              0             0

derby                 Cold Objects   Cold Bytes    Cold References
unselective, run1     79,383         6,868,440     0
unselective, run2     40,861         10,958,816    1 (1)
selective, run1       9,039          1,379,888     0
selective, run2       3,284          279,736       0

xml.transform         Cold Objects   Cold Bytes    Cold References
unselective, run1     27,603         16,749,392    716,850 (3)
unselective, run2     29,995         17,394,688    635,768 (2)
selective, run1       14,486         8,928,248     0
selective, run2       12,961         5,734,144     0

xml.validation        Cold Objects   Cold Bytes    Cold References
unselective, run1     16,188         2,910,520     0
unselective, run2     14,926         2,733,280     0
selective, run1       3,698          474,904       0
selective, run2       4,889          582,880       16 (3)
Figures 6a- 6b show the cold collections for the first runs with unselective
and selective pinning for each benchmark. The left axes represent total byte
count; the right axes represent object count. Partial GC cycle counts at the
time of cold collection are represented on the horizontal axes.
(a) Compiler.compiler, Unselective Pinning (b) Compiler.compiler, Selective Pinning
Figure 6: Compiler.compiler, Cold objects
Unselective pinning for compiler.compiler resulted in a collection of over 30
pinned regions, six of which were cold collected. Three references to two cold
objects were subsequently sampled. Selective pinning resulted in two regions
that remained pinned for most of the compiler.compiler benchmark run. One
of these went cold (no new activity for > Tcold seconds) about halfway through the run and remained cold until the end, but was not collectible because its remembered card set was in a persistent overflow state.
(a) Derby, Unselective Pinning
(b) Derby, Selective Pinning
Figure 7: Derby, Cold objects
Unselective pinning for derby resulted in a maximal collection of pinned
regions (256 regions) and 15 cold collections. No activity was recorded in the
cold area. Selective pinning pinned only six very active regions, two of which
were cold collected early in the run. There was no activity in the cold area.
Unselective pinning for xml.transform resulted in a collection of about 23
pinned regions, 19 of which were cold collected. However, there were a high
number of references to three objects in the cold area. Selective pinning pinned
at most six active regions at any time but eight regions were cold collected.
There was no subsequent activity in the cold area.
(a) xml.transform, Unselective Pinning (b) xml.transform, Selective Pinning
Figure 8: xml.transform, Cold objects
(a) xml.validate, Unselective Pinning
(b) xml.validate, Selective Pinning
Figure 9: xml.validate, Cold objects
Unselective pinning for xml.validation resulted in a collection of about seven
pinned regions, five of which were cold collected. There were no references to
objects in the cold area. Selective pinning pinned at most two active regions at
any time and cold collected one region. There was no subsequent activity in the
cold area.
5 Evaluation of the Stack-Based Solution with an Access Barrier
Since stack sampling is intermittent, with each mutator thread walking its stack about once per millisecond, and can occur only at safe points, there is a concern that the reference sampling rate may not be high enough to support reliable cold object identification. An Access Barrier can capture all read/write access operations when Java runs in interpreted mode. Since the Access Barrier does not miss any access information, it is used as a benchmark to evaluate the correctness and efficiency of the stack-based solution.
5.1 Evaluation Metrics
Two key metrics are used to evaluate the stack-based solution: FalseInactivity is used to verify its reliability, and ConvergenceTime is used to evaluate its efficiency.
• FalseInactivity is the number of objects that are considered inactive by the stack-based solution but are marked active by the Access Barrier. Because of non-continuous sampling, the stack-based solution misses some objects' activity and considers those objects inactive, even though their activity is captured by the Access Barrier. FalseInactivity therefore reflects how many active objects are missed; smaller values mean a better stack-based solution. FalseInactivity is the ratio described in the following formula:

  FalseInactivity = (Number of false inactive objects) / (Number of all inactive objects)   (3)
• ConvergenceTime is the time span that a region is pinned before it is determined to have identified all active objects. ConvergenceTime reflects how quickly a pinned region is ready for cold object collection: the lower the ConvergenceTime, the more efficient the identification of cold objects.
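Concretely, the two metrics can be computed from per-object observations, as in this illustrative Python sketch (all names are ours, not from the paper):

```python
def false_inactivity(stack_inactive, barrier_active):
    """FalseInactivity (formula (3)): the fraction of objects classified
    inactive by stack sampling that the Access Barrier saw as active."""
    false_inactive = stack_inactive & barrier_active  # missed active objects
    return len(false_inactive) / len(stack_inactive)

def convergence_time(pin_time, identified_time):
    """Seconds a region stays pinned before all active objects are identified."""
    return identified_time - pin_time

# Derby numbers from Table 6: 1,117 false-inactive out of 68,940 inactive.
print(f"{false_inactivity(set(range(68940)), set(range(1117))):.2%}")  # -> 1.62%
```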
5.2 Experiments
Two evaluation experiments were performed with Java executing in interpreted mode instead of just-in-time compiled mode.
5.3 Case 1 - SPECjvm2008 Derby
Running parameters are as follows.
1. Running period: 60 hours
2. Cold threshold: 6 hours
3. Sampling interval: 100 ms
The Derby benchmark was run for 60 hours, and active objects were identified with the stack-based solution and the Access Barrier at the same time. Experimental results are presented in Table 5. It is not surprising that the Access Barrier harvests more cold objects than the stack-based solution, because the Access Barrier captures all read/write accesses to objects, while stack sampling is intermittent. For example, the number of collectible pinned regions with the Access Barrier is 5 times larger than with the stack-based solution; the number of cold objects is 11.78 times larger; and the size of cold objects is 10.42 times larger.
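The reported ratios can be reproduced directly from the Table 5 values, e.g. with a few lines of Python:

```python
# Values taken from Table 5 (Derby, Access Barrier vs. stack-based solution).
access_barrier = {"regions": 85, "cold_objects": 812_259, "cold_bytes": 42_800_368}
stack_based    = {"regions": 17, "cold_objects": 68_940,  "cold_bytes": 4_108_496}

ratios = {k: round(access_barrier[k] / stack_based[k], 2) for k in access_barrier}
print(ratios)  # -> {'regions': 5.0, 'cold_objects': 11.78, 'cold_bytes': 10.42}
```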
Table 5: Evaluation with SPECjvm2008 Derby

Items                              AccessBarrier  stack-based  Ratio
Collectible pinned regions         85             17           5:1
ConvergenceTime (in Seconds)       27,721.62      69,497.71    1:2.50
All Objects                        1,485,531      72,290
Active Objects                     673,272        3,350
Cold Objects                       812,259        68,940       11.78:1
Size of All Objects (in Bytes)     172,021,784    32,882,056
Size of Active Objects (in Bytes)  129,221,416    28,773,560
Size of Cold Objects (in Bytes)    42,800,368     4,108,496    10.42:1

5.3.1 Reliability of the stack-based solution
The FalseInactivity ratio is 1.62% (see Table 6), which is quite low. It reflects the fact that few objects are incorrectly classified as cold. The data supports the hypothesis that cold objects can be identified by the stack-based approach, which is encouraging.
Table 6: FalseInactivity Results

Inactive Objects   FalseInactivity Objects   FalseInactivity Ratio
68,940             1,117                     1.62%

5.3.2 Efficiency of the stack-based marking approach
The Access Barrier found 85 collectible pinned regions, while the stack-based solution found 17. Although the Access Barrier has more collectible pinned regions than the stack-based solution, the 17 collectible pinned regions found by the stack-based solution are completely included in the Access Barrier's collectible pinned regions.
Figure 10 shows a convergence time comparison between the stack-based solution and the Access Barrier. The X-axis represents the 17 common collectible pinned regions; the Y-axis represents convergence time. With the Access Barrier, the maximum convergence time is less than 500 minutes, while with the stack-based solution the maximum convergence time reaches more than 2,500 minutes.
5.4 Case 2 - SPECjvm2008 Compiler.compiler
Figure 10: Convergence time in Derby

Running parameters are as follows.
1. Running period: 12 hours
2. Cold threshold: 72 minutes
3. Sampling interval: 15 ms
After Compiler.compiler had run for 12 hours, the results shown in Table 7 were obtained. The Access Barrier still harvests more cold objects than the stack-based solution.
Table 7: Evaluation with Compiler.compiler

Items                              AccessBarrier  stack-based  Ratio
Collectible pinned regions         64             28           2.29:1
AverageColdDuration (in Seconds)   4440.64        6311.96      1:1.42
All Objects                        1,390,739      376,824
Active Objects                     141,772        279
Cold Objects                       1,248,967      376,545      3.32:1
Size of All Objects (in Bytes)     129,565,792    53,923,840
Size of Active Objects (in Bytes)  20,090,576     1,280,616
Size of Cold Objects (in Bytes)    109,475,216    52,643,224   2.08:1

5.4.1 Reliability of the stack-based solution
The FalseInactivity ratio is 0.32% (see Table 8), which is still quite low. The data confirms that cold objects can be identified by the stack-based approach.
Table 8: FalseInactivity

Inactive Objects   FalseInactivity Objects   FalseInactivity Ratio
376,545            1,191                     0.32%

5.4.2 Convergence time analysis
The Access Barrier has 64 collectible pinned regions and the stack-based solution has 28. Although the Access Barrier has more collectible pinned regions than the stack-based solution, the 28 collectible pinned regions found by the stack-based solution are completely included in the Access Barrier's collectible pinned regions.
Figure 11 shows a convergence time comparison between the stack-based solution and the Access Barrier. With the Access Barrier, the maximum convergence time is less than 100 minutes. With the stack-based solution, the convergence time for the majority of collectible pinned regions is also less than 100 minutes; only 3 collectible pinned regions have a high convergence time.
Figure 11: Convergence time in Compiler.compiler
6 DISCUSSION
During the ssd-stack JVM benchmarking runs, mutator threads walked their stacks to harvest heap references once every 1-2 ms on average. For cold object identification to be reliable and effective, the rate of reference sampling must be such that any active object within a pinned region is likely to appear on a mutator stack at a stack-walking safe point at least once while the region is pinned. The FalseInactivity results for Derby and Compiler.compiler show that a sampling interval of 1-2 ms satisfies this requirement.
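A simple probability model (our illustration, with assumed numbers, not a measurement from the paper) shows why this sampling rate suffices: if an active object appears on a sampled stack with probability p at each walk, the chance of never observing it across n independent walks is (1 - p)^n.

```python
def miss_probability(p_on_stack, samples):
    """Chance an active object is never observed across independent stack walks."""
    return (1 - p_on_stack) ** samples

# Assumed numbers: one walk every 2 ms over a 60 s pinning window = 30,000
# samples; even a 0.1% per-sample appearance chance makes a miss negligible.
print(miss_probability(0.001, 30_000))
```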
Very few cold objects were referenced during any of the ssd-stack JVM runs, and with the exception of xml.transform, cold objects were referenced very infrequently. Most benchmark runs collected a few tens of thousands of cold objects. In the exceptional case of compiler.compiler with selective pinning, one pinned region went cold after 1,757 partial GC cycles but was not collectible due to a persistently overflowed remembered set; none of the 3,354 objects that were cold at that point received references for the remainder of the benchmark run (8,055 subsequent partial GC cycles).
The ssd-stack JVM, with selective or unselective pinning, consistently resulted in increased memory pressure, timeslicing, and kernel CPU usage compared to the Linux JVM. The singleton thread activity sampling daemon minimizes writes to pinned region activity maps but must test the activity bit for every sampled reference into a pinned region, making activity maps frequent contenders for available cache lines. Although cache misses were not profiled for these runs, it is likely that high-frequency access to pinned region activity maps from the thread activity sampling daemon had a significant effect on memory bandwidth. Since the daemon thread is bound to a specific node, this effect may be limited, especially in larger multicore systems. However, the daemon does present a multicore scalability problem, since it must handle proportionately larger loads as the number of available cores increases.
Most of the ssd-stack benchmarking runs yielded a few tens of megabytes in the cold area and consumed 4-6% of the available CPU bandwidth, which is a relatively high price to pay for the amount of cold data collected. In this paper, only primitive arrays and leaf objects are considered as cold objects. If cold objects were not limited to primitive arrays and leaf objects, the amount of cold data collected would increase, but the possibility of references from cold objects to active objects would then require marking to traverse into the cold area, which is undesirable.
The region pinning framework attempts to pin the most active regions because they highlight mutator activity and allow cold objects to be identified
more quickly and with greater confidence. Selective pinning sets a high bar on
the activity metric for selectable regions and tends to pin only a few regions,
without replacement. It tended to collect a relatively small but very stable set
of cold objects. Unselective pinning attempts to maximize the number of pinned
regions and selects the most active regions in batches, but tends to unpin and
replace these over time. This strategy produced more substantial cold collections and the cold area typically received a small amount of reference activity
that was confined to a small number of distinct objects.
7 CONCLUSION
In this paper, we presented a stack-based cold object identification framework, which samples the mutator thread stacks, marks the active objects, and harvests the cold objects. Stack-based reference sampling was effective in identifying inactive objects for the SPECjvm2008 [16] benchmarks studied here, as evidenced by the stability of the cold areas established during the benchmark runs. A few tens of megabytes of cold objects were identified and harvested into cold regions. Furthermore, we evaluated the correctness and efficiency of the stack-based solution against an Access Barrier implementation; the results support the conclusion that the stack-based solution is an acceptable cold object identification approach.
The runtime overhead for walking mutator stacks and maintaining pinned region activity maps offset any gains that accrued from establishing the cold area and marshalling cold objects out of resident memory, but there is still room to reduce this overhead through optimization.
8 FUTURE WORK
The focus of this effort so far has been on cold object identification and sequestration. In further development, a mechanism for managing frequently active objects in the cold area should be added. For example, if madvise() [13] is used to sequester pages in the cold area, pages containing parts of active objects can be excluded. This would be effective if such objects are relatively rare, and it is simpler than providing specialized methods to copy active cold objects back into tenured regions.
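For illustration, this kind of page-level sequestration could look like the following Python sketch using the standard mmap module (a stand-in for the JVM's native mechanism; the advice flag chosen here is an assumption):

```python
import mmap

# Map an anonymous region standing in for the cold area.
PAGE = mmap.PAGESIZE
cold_area = mmap.mmap(-1, 16 * PAGE)

# Hint the kernel that the first half of the cold area will not be needed
# soon; pages containing parts of active objects would simply be skipped.
if hasattr(mmap, "MADV_DONTNEED"):  # platform-dependent (Linux, Python 3.8+)
    cold_area.madvise(mmap.MADV_DONTNEED, 0, 8 * PAGE)
```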
Only limited amount of work has been done to verify that the JVM and GC
do not reference objects in the cold area. Some JIT and GC (concurrent mark)
activity in cold areas was detected by erecting a partial memory protection
(read/write) barrier around the cold area between GC cycles. A similar mechanism can be used to detect GC incursions into the cold area during GC cycles,
but this has not been investigated to date. The sources of these incursions will
need to be modified to suppress activity in the cold area. For example, objects
being marked in the root set, where the leaf marking optimization is not available, can be tested for inclusion in cold regions and treated as leaf objects in
that case. The JIT peeks into cold regions to determine the length of arrays but
it may be possible to apply defaults or forego optimizations for arrays located
within the cold area.
Acknowledgement
The authors would like to acknowledge the funding support provided by IBM
and the Atlantic Canada Opportunities Agency (ACOA) through the Atlantic
Innovation Fund (AIF) program. Furthermore, we would also like to thank the
New Brunswick Innovation Fund for contributing to this project. Finally, we
would like to thank the Centre for Advanced Studies - Atlantic for access to the
resources for conducting our research.
References
[1] Balanced garbage collection policy. http://www-01.ibm.com/support/
knowledgecenter/SSYKE2_7.0.0/com.ibm.java.aix.70.doc/diag/
understanding/mm_gc_balanced.html. Accessed: 2015-05-12.
[2] IBM Garbage Collection policies. http://www-01.ibm.com/support/
knowledgecenter/SSYKE2_7.0.0/com.ibm.java.zos.71.doc/diag/
appendixes/cmdline/xgcpolicy.html?lang=en. Accessed date: 201505-12.
[3] J9 Virtual Machine (JVM).
https://www-01.ibm.com/support/
knowledgecenter/#!/SSYKE2_7.0.0/com.ibm.java.win.70.doc/user/
java_jvm.html. Accessed: 2015-05-12.
[4] JIT compiler overview.
https://www-01.ibm.com/support/
knowledgecenter/#!/SSYKE2_7.0.0/com.ibm.java.lnx.70.doc/diag/
understanding/jit_overview.html. Accessed: 2015-05-12.
[5] Diab Abuaiadh, Yoav Ossia, Erez Petrank, and Uri Silbershtein. An efficient parallel heap compaction algorithm. In OOPSLA, pages 224–236,
2004.
[6] Ali-Reza Adl-Tabatabai, Michal Cierniak, Guei-Yuan Lueh, Vishesh M. Parikh, and James M. Stichnoth. Fast, effective code generation in a just-in-time Java compiler. SIGPLAN Not., 33(5):280–290, May 1998.
[7] James Gosling, Bill Joy, Guy L. Steele, Jr., Gilad Bracha, and Alex Buckley. The Java Language Specification, Java SE 7 Edition. Addison-Wesley
Professional, 1st edition, 2013.
[8] Richard E. Jones and Andy C. King. A fast analysis for thread-local
garbage collection with dynamic class loading. In 5th IEEE International
Workshop on Source Code Analysis and Manipulation (SCAM), pages 129–
138, Budapest, September 2005.
[9] Toshiaki Kurokawa. A new fast and safe marking algorithm. Lisp Bull.,
(3):9–35, December 1979.
[10] Tim Lindholm and Frank Yellin. Java Virtual Machine Specification.
Addison-Wesley Longman Publishing Co., Inc., 1999.
[11] Robert Love. Linux System Programming: Talking Directly to the Kernel
and C Library. O’Reilly Media, Inc., 2007.
[12] Jeremy Manson, William Pugh, and Sarita V. Adve. The java memory
model. SIGPLAN Not., 40(1):378–391, January 2005.
[13] Marshall Kirk Mckusick and Michael J. Karels. A new virtual memory
implementation for Berkeley UNIX. In EUUG Conference Proceedings,
pages 451–458, 1986.
[14] Pekka P. Pirinen. Barrier techniques for incremental tracing. SIGPLAN
Not., 34(3):20–25, October 1998.
[15] Ryan Sciampacone, Peter Burka, and Aleksandar Micic. Garbage collection
in websphere application server v8, part 2: Balanced garbage collection as
a new option. In IBM WebSphere Developer Technical Journal., 2011.
[16] Kumar Shiv, Kingsum Chow, Yanping Wang, and Dmitry Petrochenko.
Specjvm2008 performance characterization. In Proceedings of the 2009
SPEC Benchmark Workshop on Computer Performance Evaluation and
Benchmarking, pages 17–35, 2009.
[17] Kumar Shiv, Kingsum Chow, Yanping Wang, and Dmitry Petrochenko.
Specjvm2008 performance characterization. In Proceedings of the 2009
SPEC Benchmark Workshop on Computer Performance Evaluation and
Benchmarking, pages 17–35, Berlin, Heidelberg, 2009. Springer-Verlag.
[18] David Ungar. Generation scavenging: A non-disruptive high performance
storage reclamation algorithm. SIGSOFT Softw. Eng. Notes, 9(3):157–167,
April 1984.
Weak Memory Models: Balancing Definitional Simplicity and Implementation
Flexibility
arXiv:1707.05923v1 [] 19 Jul 2017
Sizhuo Zhang, Muralidaran Vijayaraghavan, Arvind
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
{szzhang, vmurali, arvind}@csail.mit.edu
Abstract—The memory model for RISC-V, a newly developed
open source ISA, has not been finalized yet and thus, offers an
opportunity to evaluate existing memory models. We believe
RISC-V should not adopt the memory models of POWER or
ARM, because their axiomatic and operational definitions are
too complicated. We propose two new weak memory models:
WMM and WMM-S, which balance definitional simplicity and
implementation flexibility differently. Both allow all instruction
reorderings except overtaking of loads by a store. We show
that this restriction has little impact on performance and it
considerably simplifies operational definitions. It also rules
out the out-of-thin-air problem that plagues many definitions.
WMM is simple (it is similar to the Alpha memory model),
but it disallows behaviors arising due to shared store buffers
and shared write-through caches (which are seen in POWER
processors). WMM-S, on the other hand, is more complex and
allows these behaviors. We give the operational definitions of
both models using Instantaneous Instruction Execution (I2 E),
which has been used in the definitions of SC and TSO. We also
show how both models can be implemented using conventional
cache-coherent memory systems and out-of-order processors,
and encompasses the behaviors of most known optimizations.
commercial processors and then constructing models to fit
these observations [7]–[10], [12]–[16].
The newly designed open-source RISC-V ISA [17] offers a
unique opportunity to reverse this trend by giving a clear definition with understandable implications for implementations.
The RISC-V ISA manual only states that its memory model
is weak in the sense that it allows a variety of instruction
reorderings [18]. However, so far no detailed definition has
been provided, and the memory model is not fixed yet.
In this paper we propose two weak memory models for
RISC-V: WMM and WMM-S, which balance definitional
simplicity and implementation flexibility differently. The difference between the two models is regarding store atomicity,
which is often classified into the following three types [19]:
•
•
Keywords-weak memory model
•
I. I NTRODUCTION
A memory model for an ISA is the specification of all
legal multithreaded program behaviors. If microarchitectural
changes conform to the memory model, software remains
compatible. Leaving the meanings of corner cases to be
implementation dependent makes the task of proving the
correctness of multithreaded programs, microarchitectures
and cache protocols untenable. While strong memory models
like SC and SPARC/Intel-TSO are well understood, weak
memory models of commercial ISAs like ARM and POWER
are driven too much by microarchitectural details, and
inadequately documented by manufacturers. For example, the
memory model in the POWER ISA manual [11] is “defined”
as reorderings of events, and an event refers to performing
an instruction with respect to a processor. While reorderings
capture some properties of memory models, it does not
specify the result of each load, which is the most important
information to understand program behaviors. This forces
the researchers to formalize these weak memory models by
empirically determining the allowed/disallowed behaviors of
Single-copy atomic: a store becomes visible to all processors at the same time, e.g., in SC.
Multi-copy atomic: a store becomes visible to the issuing
processor before it is advertised simultaneously to all other
processors, e.g., in TSO and Alpha [5].
Non-atomic (or non-multi-copy-atomic): a store becomes
visible to different processors at different times, e.g., in
POWER and ARM.
Multi-copy atomic stores are caused by the store buffer
or write-through cache that is private to each processor.
Non-atomic stores arise (mostly) because of the sharing
of a store buffer or a write-through cache by multiple
processors, and such stores considerably complicate the
formal definitions [7], [8]. WMM is an Alpha-like memory
model which permits only multi-copy atomic stores and
thus, prohibits shared store buffers or shared write-through
caches in implementations. WMM-S is an ARM/POWER-like memory model which admits non-atomic stores. We will present the implementations of both models using out-of-order (OOO) processors and cache-coherent memory systems.
In particular, WMM and WMM-S allow the OOO processors
in multicore settings to use all speculative techniques which
are valid for uniprocessors, including even the load-value
speculation [20]–[24], without additional checks or logic.
Figure 1. Summary of different memory models

                           Definition                                     Model properties / Implementation flexibility
Model           Operational model           Axiomatic model          Store atomicity     Shared write-through   Instruction reorderings      Ordering of data-
                                                                                         cache / store buffer                                dependent loads
SC              Simple; I2 E [1]            Simple [1]               Single-copy atomic  No                     None                         Yes
TSO             Simple; I2 E [2]            Simple [3]               Multi-copy atomic   No                     Only St-Ld reordering        Yes
RMO             Doesn't exist               Simple; needs fix [4]    Multi-copy atomic   No                     All four                     Yes
Alpha           Doesn't exist               Medium [5]               Multi-copy atomic   No                     All four                     No
RC              Doesn't exist               Medium [6]               Unclear             No                     All four                     Yes
ARM and POWER   Complex; non I2 E [7], [8]  Complex [9], [10]        Non-atomic          Yes                    All four                     Yes
WMM             Simple; I2 E                Simple                   Multi-copy atomic   No                     All except Ld-St reordering  No
WMM-S           Medium; I2 E                Doesn't exist            Non-atomic          Yes                    All except Ld-St reordering  No

We give operational definitions of both WMM and WMM-S. An operational definition specifies an abstract machine, and the legal behaviors of a program under the memory
model are those that can result by running the program
on the abstract machine. We observe a growing interest
in operational definitions: memory models of x86, ARM
and POWER have all been formalized operationally [2], [7],
[8], [15], [25], and researchers are even seeking operational
definitions for high-level languages like C++ [26]. This is
perhaps because all possible program results can be derived
from operational definitions mechanically while axiomatic
definitions require guessing the whole program execution
at the beginning. For complex programs with dependencies,
loops and conditional branches, guessing the whole execution
may become prohibitive.
Unfortunately, the operational models of ARM and
POWER are too complicated because their abstract machines
involve microarchitectural details like reorder buffers (ROBs),
partial and speculative instruction execution, instruction
replay on speculation failure, etc. The operational definitions
of WMM and WMM-S are much simpler because they are
described in terms of Instantaneous Instruction Execution
(I2 E), which is the style used in the operational definitions of
SC [1] and TSO [2], [25]. An I2 E abstract machine consists
of n atomic processors and an n-ported atomic memory.
The atomic processor executes instructions instantaneously
and in order, so it always has the up-to-date architectural
(register) state. The atomic memory executes loads and
stores instantaneously. Instruction reorderings and store
atomicity/non-atomicity are captured by including different
types of buffers between the processors and the atomic
memory, like the store buffer in the definition of TSO. In
the background, data moves between these buffers and the
memory asynchronously, e.g., to drain a value from a store
buffer to the memory.
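As a toy illustration of the I2 E style (our sketch, not a definition from this paper), the following Python model captures a TSO-like machine: in-order atomic processors, per-processor FIFO store buffers, an atomic memory, and a background rule that drains buffered stores:

```python
from collections import deque

class TSOMachine:
    """Toy I2E abstract machine: n in-order processors, each with a FIFO
    store buffer, sharing an atomic memory (a TSO-style sketch)."""

    def __init__(self, n):
        self.memory = {}
        self.buffers = [deque() for _ in range(n)]

    def store(self, pid, addr, val):
        # Instantaneous execution: the store enters the local store buffer.
        self.buffers[pid].append((addr, val))

    def load(self, pid, addr):
        # Forward the youngest buffered store to addr, else read atomic memory.
        for a, v in reversed(self.buffers[pid]):
            if a == addr:
                return v
        return self.memory.get(addr, 0)

    def drain_one(self, pid):
        # Background rule: the oldest buffered store moves to atomic memory.
        if self.buffers[pid]:
            addr, val = self.buffers[pid].popleft()
            self.memory[addr] = val

# Dekker-style litmus: both processors read 0 because stores sit in buffers.
m = TSOMachine(2)
m.store(0, "a", 1); m.store(1, "b", 1)
print(m.load(0, "b"), m.load(1, "a"))  # -> 0 0 (St-Ld reordering visible)
```

WMM's invalidation buffers and WMM-S's extra structures play analogous roles in the models defined later.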
I2 E definitions free programmers from reasoning about partially executed instructions, which is unavoidable with the ARM and POWER operational definitions. One key tradeoff to achieve I2 E is to forbid a store to overtake a load, i.e., to disallow Ld-St reordering. Allowing such reordering requires each processor in the abstract machine to maintain multiple unexecuted instructions in order to see the effects of future stores, and the abstract machine has to contain complicated ROB-like structures. Ld-St reordering also complicates axiomatic definitions because it creates the possibility of "out-of-thin-air" behaviors [27], which are impossible in any real implementation and consequently must be disallowed. We also offer evidence, based on simulation experiments, that disallowing Ld-St reordering has no discernible impact on performance.
For a quick comparison, we summarize the properties of
common memory models in Figure 1. SC and TSO have
simple definitions but forbid Ld-Ld and St-St reorderings,
and consequently, are not candidates for RISC-V. WMM is
similar to RMO and Alpha but neither has an operational
definition. Also WMM has a simple axiomatic definition,
while Alpha requires a complicated axiom to forbid out-of-thin-air behaviors (see Section V-B), and RMO has
an incorrect axiom about data-dependency ordering (see
Section X).
ARM, POWER, and WMM-S are similar models in the
sense that they all admit non-atomic stores. While the
operational models of ARM and POWER are complicated,
WMM-S has a simpler I2 E definition and allows competitive
implementations (see Section IX-B). The axiomatic models
of ARM and POWER are also complicated: four relations in
the POWER axiomatic model [10, Section 6] are defined in
a fixed point manner, i.e., their definitions mutually depend
on each other.
Release Consistency (RC) is often conflated with the concept of "SC for data-race-free (DRF) programs" [28]. It should be noted that "SC for DRF" is inadequate as an ISA memory model, which must specify the behaviors of all programs. The original RC definition [6] attempts to specify all program behaviors, and is more complex and subtle than the "SC for DRF" concept. We show in Section X that the RC definition fails a litmus test for non-atomic stores and forbids shared write-through caches in implementation.
This paper makes the following contributions:
1) WMM, the first weak memory model that is defined
in I2 E and allows Ld-Ld reordering, and its axiomatic
definition;
2) WMM-S, an extension on WMM that admits non-atomic
stores and has an I2 E definition;
3) WMM and WMM-S implementations based on OOO processors that admit all uniprocessor speculative techniques
(such as load-value prediction) without additional checks;
4) Introduction of invalidation buffers in the I2 E definitional
framework to model Ld-Ld and other reorderings.
Paper organization: Section II presents the related work.
Section III gives litmus tests for distinguishing memory
models. Section IV introduces I2 E. Section V defines
WMM. Section VI shows the WMM implementation using
OOO processors. Section VII evaluates the performance of
WMM and the influence of forbidding Ld-St reordering.
Section VIII defines WMM-S. Section IX presents the
WMM-S implementations with non-atomic stores. Section X
shows the problems of RC and RMO. Section XI offers the
conclusion.
II. RELATED WORK

SC [1] is the simplest model, but naive implementations of SC suffer from poor performance. Although researchers have proposed aggressive techniques to preserve SC [29]–[38], they are rarely adopted in commercial processors, perhaps due to their hardware complexity. Instead the manufacturers and researchers have chosen to present weaker memory models, e.g., TSO [2], [3], [25], [39], PSO [4], RMO [4], Alpha [5], Processor Consistency [40], Weak Consistency [41], RC [6], CRF [42], Instruction Reordering + Store Atomicity [43], POWER [11] and ARM [44]. The tutorials by Adve et al. [45] and by Maranget et al. [46] provide relationships among some of these models.
A large amount of research has also been devoted to specifying the memory models of high-level languages: C++ [26], [47]–[50], Java [51]–[53], etc. We will provide compilation schemes from C++ to WMM and WMM-S.
Recently, Lustig et al. have used Memory Ordering Specification Tables (MOSTs) to describe memory models, and proposed a hardware scheme to dynamically convert programs across memory models described in MOSTs [19]. A MOST specifies the ordering strength (e.g., locally ordered, multi-copy atomic) of two instructions from the same processor under different conditions (e.g., data dependency, control dependency). Our work is orthogonal in that we propose new memory models with operational definitions.

III. MEMORY MODEL LITMUS TESTS

Here we offer two sets of litmus tests to highlight the differences between memory models regarding store atomicity and instruction reorderings, including enforcement of dependency ordering. All memory locations are initialized to 0.

A. Store Atomicity Litmus Tests

Figure 2 shows four litmus tests to distinguish between these three types of stores. We have deliberately added data dependencies and Ld-Ld fences (FENCELL ) to these litmus tests to prevent instruction reordering, e.g., the data dependency between I2 and I3 in Figure 2a. Thus the resulting behaviors can arise only because of different store atomicity properties. We use FENCELL for memory models that can reorder data-dependent loads, e.g., I5 in Figure 2b would be the MB fence for Alpha. For other memory models that order data-dependent loads (e.g., ARM), FENCELL could be replaced by a data dependency (like the data dependency between I2 and I3 in Figure 2a). The Ld-Ld fences only stop Ld-Ld reordering; they do not affect store atomicity in these tests.

Proc. P1                   Proc. P2
I1 : St a 1                I4 : St b 1
I2 : r1 = Ld a             I5 : r3 = Ld b
I3 : r2 = Ld (b+r1 −1)     I6 : r4 = Ld (a+r3 −1)
SC forbids but TSO allows: r1 = 1, r2 = 0, r3 = 1, r4 = 0
(a) SBE: test for multi-copy atomic stores

Proc. P1       Proc. P2             Proc. P3
I1 : St a 2    I2 : r1 = Ld a       I4 : r2 = Ld b
               I3 : St b (r1 − 1)   I5 : FENCELL
                                    I6 : r3 = Ld a
TSO, RMO and Alpha forbid, but RC, ARM and POWER allow: r1 = 2, r2 = 1, r3 = 0
(b) WRC: test for non-atomic stores [7]

Proc. P1       Proc. P2             Proc. P3
I1 : St a 2    I2 : r1 = Ld a       I4 : r2 = Ld b
               I3 : St b (r1 − 1)   I5 : St a r2
TSO, RMO, Alpha and RC forbid, but ARM and POWER allow: r1 = 2, r2 = 1, m[a] = 2
(c) WWC: test for non-atomic stores [46], [54]

Proc. P1      Proc. P2          Proc. P3     Proc. P4
I1 : St a 1   I2 : r1 = Ld a    I5 : St b 1  I6 : r3 = Ld b
              I3 : FENCELL                   I7 : FENCELL
              I4 : r2 = Ld b                 I8 : r4 = Ld a
TSO, RMO and Alpha forbid, but RC, ARM and POWER allow: r1 = 1, r2 = 0, r3 = 1, r4 = 0
(d) IRIW: test for non-atomic stores [7]

Figure 2. Litmus tests for store atomicity
SBE: In a machine with single-copy atomic stores (e.g., an
SC machine), when both I2 and I5 have returned value 1,
stores I1 and I4 must have been globally advertised. Thus
r2 and r4 cannot both be 0. However, a machine with store
buffers (e.g., a TSO machine) allows P1 to forward the value
of I1 to I2 locally without advertising I1 to other processors,
violating the single-copy atomicity of stores.
WRC: Assuming the store buffer is private to each processor
(i.e., multi-copy atomic stores), if one observes r1 = 2 and
r2 = 1 then r3 must be 2. However, if an architecture allows
a store buffer to be shared by P1 and P2 but not P3, then P2
can see the value of I1 from the shared store buffer before
I1 has updated the memory, allowing P3 to still see the old
value of a. A write-through cache shared by P1 and P2 but
not P3 can cause this non-atomic store behavior in a similar
way, e.g., I1 updates the shared write-through cache but has
not invalidated the copy in the private cache of P3 before I6
is executed.
WWC: This litmus test is similar to WRC but replaces the
load in I6 with a store. The behavior is possible if P1 and
P2 share a write-through cache or store buffer. However, RC
forbids this behavior (see Section X).
IRIW: This behavior is possible if P1 and P2 share a write-through cache or a store buffer and so do P3 and P4.
B. Instruction Reordering Litmus Tests
Although processors fetch and commit instructions in order,
speculative and out-of-order execution causes behaviors as if
instructions were reordered. Figure 3 shows the litmus tests
on these reordering behaviors.
Proc. P1         Proc. P2
I1 : St a 1      I3 : St b 1
I2 : r1 = Ld b   I4 : r2 = Ld a
SC forbids, but TSO allows: r1 = 0, r2 = 0
(a) SB: test for St-Ld reordering [46]

Proc. P1         Proc. P2
I1 : St a 1      I3 : r1 = Ld b
I2 : St b 1      I4 : r2 = Ld a
TSO forbids, but Alpha and RMO allow: r1 = 1, r2 = 0
(b) MP: test for Ld-Ld and St-St reorderings [7]

Proc. P1         Proc. P2
I1 : r1 = Ld b   I3 : r2 = Ld a
I2 : St a 1      I4 : St b 1
TSO forbids, but Alpha, RMO, RC, POWER and ARM allow: r1 = r2 = 1
(c) LB: test for Ld-St reordering [7]

Proc. P1         Proc. P2
I1 : St a 1      I4 : r1 = Ld b
I2 : FENCE       I5 : if (r1 ≠ 0) exit
I3 : St b 1      I6 : r2 = Ld a
Alpha, RMO, RC, ARM and POWER allow: r1 = 1, r2 = 0
(d) MP+Ctrl: test for control-dependency ordering

Proc. P1         Proc. P2
I1 : St a 1      I4 : r1 = Ld b
I2 : FENCE       I5 : St (r1 + a) 42
I3 : St b 100    I6 : r2 = Ld a
Alpha, RMO, RC, ARM and POWER allow: r1 = 100, r2 = 0
(e) MP+Mem: test for memory-dependency ordering

Proc. P1         Proc. P2
I1 : St a 1      I4 : r1 = Ld b
I2 : FENCE       I5 : r2 = Ld r1
I3 : St b a
RMO, RC, ARM and POWER forbid, but Alpha allows: r1 = a, r2 = 0
(f) MP+Data: test for data-dependency ordering

Figure 3. Litmus tests for instruction reorderings
SB: A TSO machine can execute I2 and I4 while I1 and I3
are buffered in the store buffers. The resulting behavior is as
if the store and the load were reordered on each processor.
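SC's verdict on SB can be confirmed by brute force. The following Python sketch (the instruction encoding is our own, not from the paper) enumerates every SC interleaving of two straight-line threads and collects the possible final register values:

```python
from itertools import combinations

def sc_outcomes(p1, p2):
    """Enumerate every SC interleaving of two straight-line threads and
    collect final register values. Instructions are ('St', addr, val)
    or ('Ld', addr, reg); all memory locations start at 0."""
    n1, n2 = len(p1), len(p2)
    results = set()
    for pos in combinations(range(n1 + n2), n1):  # slots taken by thread 1
        mem, regs, i1, i2 = {}, {}, 0, 0
        for k in range(n1 + n2):
            if k in pos:
                inst, i1 = p1[i1], i1 + 1
            else:
                inst, i2 = p2[i2], i2 + 1
            if inst[0] == 'St':
                mem[inst[1]] = inst[2]
            else:
                regs[inst[2]] = mem.get(inst[1], 0)
        results.add(tuple(sorted(regs.items())))
    return results
```

Running it on the SB test shows that r1 = r2 = 0 appears in no SC interleaving, while outcomes such as r1 = r2 = 1 do.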
MP: In an Alpha machine, I1 and I2 may be drained from
the store buffer of P1 out of order; I3 and I4 in the ROB of
P2 may be executed out of order. This is as if P1 reordered
the two stores and P2 reordered the two loads.
LB: Some machines may enter a store into the memory before
all older instructions have been committed. This results in the
Ld-St reordering shown in Figure 3c. Since instructions are
committed in order and stores are usually not on the critical
path, the benefit of the eager execution of stores is limited.
In fact we will show by simulation that Ld-St reordering
does not improve performance (Section VII).
MP+Ctrl: This test is a variant of MP. The two stores in
P1 must update memory in order due to the fence. Although
the execution of I6 is conditional on the result of I4 , P2 can
issue I6 speculatively by predicting branch I5 to be not taken.
The execution order I6 , I1 , I2 , I3 , I4 , I5 results in r1 = 1 and
r2 = 0.
MP+Mem: This test replaces the control dependency in
MP+Ctrl with a (potential) memory dependency, i.e., the
unresolved store address of I5 may be the same as the load
address of I6 before I4 is executed. However, P2 can execute
I6 speculatively by predicting the addresses are not the same.
This results in having I6 overtake I4 and I5 .
MP+Data: This test replaces the control dependency in
MP+Ctrl with a data dependency, i.e., the load address
of I5 depends on the result of I4 . A processor with load-value prediction [20]–[24] may guess the result of I4 before
executing it, and issue I5 speculatively. If the guess fails to
match the real execution result of I4 , then I5 would be killed.
But, if the guess is right, then essentially the execution of
the two data-dependent loads (I4 and I5 ) has been reordered.
C. Miscellaneous Tests
All programmers expect memory models to obey per-location SC [55], i.e., all accesses to a single address appear
to execute in a sequential order which is consistent with the
program order of each thread (Figure 4).
Proc. P1         Proc. P2
I1 : r1 = Ld a   I3 : St a 1
I2 : r2 = Ld a
Models with per-location SC forbid: r1 = 1, r2 = 0

Figure 4. Per-location SC

Proc. P1         Proc. P2
I1 : r1 = Ld b   I3 : r2 = Ld a
I2 : St a r1     I4 : St b r2
All models forbid: r1 = r2 = 42

Figure 5. Out-of-thin-air read
Out-of-thin-air behaviors (Figure 5) are impossible in real
implementations. Sometimes such behaviors are permitted
by axiomatic models due to incomplete axiomatization.
IV. DEFINING MEMORY MODELS IN I2E
Figure 6 shows the I2E abstract machines for SC,
TSO/PSO and WMM models. All abstract machines consist
of n atomic processors and an n-ported atomic memory m.
Each processor contains a register state s, which represents
all architectural registers, including both the general purpose
registers and special purpose registers, such as PC. The
abstract machines for TSO/PSO and WMM also contain a
store buffer sb for each processor, and the one for WMM
also contains an invalidation buffer ib for each processor as
shown in the figure. In the abstract machines all buffers are
unbounded. The operations of these buffers will be explained
shortly.
The operations of the SC abstract machine are the simplest:
in one step we can select any processor to execute the
next instruction on that processor atomically. That is, if
the instruction is a non-memory instruction (e.g., ALU or
branch), it just modifies the register states of the processor; if
it is a load, it reads from the atomic memory instantaneously
[Figure: the three I2E abstract machines, drawn side by side. Each consists of n processors, each holding a register state s, on top of an n-ported atomic memory m. The TSO/PSO machine adds a store buffer sb to each processor; the WMM machine adds both a store buffer sb and an invalidation buffer ib to each processor. (a) SC, (b) TSO/PSO, (c) WMM]
Figure 6. I2E abstract machines for different models
and updates the register state; and if it is a store, it updates
the atomic memory instantaneously and increments the PC.
A. TSO Model
The TSO abstract machine proposed in [2], [25] (Figure
6b) contains a store buffer sb for each processor. Just like SC,
any processor can execute an instruction atomically, and if
the instruction is a non-memory instruction, it just modifies
the local register state. A store is executed by inserting its
haddress, valuei pair into the local sb instead of writing the
data in memory. A load first looks for the load address in the
local sb and returns the value of the youngest store for that
address. If the address is not in the local sb, then the load
returns the value from the atomic memory. TSO can also
perform a background operation, which removes the oldest
store from a sb and writes it into the atomic memory. As
we discussed in Section III, the store buffer allows TSO to do
St-Ld reordering, i.e., pass the SB litmus test (Figure 3a).
In order to enforce ordering in accessing the memory and
to rule out non-SC behaviors, TSO has a fence instruction,
which we refer to as Commit. When a processor executes
a Commit fence, it gets blocked unless its sb is empty.
Eventually, any sb will become empty as a consequence
of the background operations that move data from the sb to
the memory. For example, we need to insert a Commit fence
after each store in Figure 3a to forbid the non-SC behavior
in TSO.
We summarize the operations of the TSO abstract machine
in Figure 7. Each operation consists of a predicate and an
action. The operation can be performed by taking the action
only when the predicate is true. Each time we perform only
one operation (either instruction execution or sb dequeue)
atomically in the whole system (e.g., no two processors can
execute instructions simultaneously). The choice of which
operation to perform is nondeterministic.
Enabling St-St reordering: We can extend TSO to PSO by
changing the background operation to dequeue the oldest
store for any address in sb (see the PSO-DeqSb operation in
Figure 7). This extends TSO by permitting St-St reordering.
V. WMM MODEL
WMM allows Ld-Ld reordering in addition to the reorderings allowed by PSO. Since a reordered load may read a stale
TSO-Nm (non-memory execution)
Predicate: The next instruction of a processor is a non-memory
instruction.
Action: Instruction is executed by local computation.
TSO-Ld (load execution)
Predicate: The next instruction of a processor is a load.
Action: Assume the load address is a. The load returns the value
of the youngest store for a in sb if a is present in the sb of
the processor, otherwise, the load returns m[a], i.e., the value of
address a in the atomic memory.
TSO-St (store execution)
Predicate: The next instruction of a processor is a store.
Action: Assume the store address is a and the store value is v.
The processor inserts the store ha, vi into its sb.
TSO-Com (Commit execution)
Predicate: The next instruction of a processor is a Commit and
the sb of the processor is empty.
Action: The Commit fence is executed simply as a NOP.
TSO-DeqSb (background store buffer dequeue)
Predicate: The sb of a processor is not empty.
Action: Assume the haddress, valuei pair of the oldest store in
the sb is ha, vi. Then this store is removed from sb, and the
atomic memory m[a] is updated to v.
PSO-DeqSb (background store buffer dequeue)
Predicate: The sb of a processor is not empty.
Action: Assume the value of the oldest store for some address a
in the sb is v. Then this store is removed from sb, and the atomic
memory m[a] is updated to v.
Figure 7. Operations of the TSO/PSO abstract machine
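The operations in Figure 7 are small enough to execute directly. The following minimal Python sketch (class and method names are ours) models the TSO machine, with the PSO dequeue variant included:

```python
class TSOMachine:
    """Minimal I2E-style TSO/PSO abstract machine: n processors, each
    with a FIFO store buffer (sb), sharing one atomic memory (m)."""

    def __init__(self, n):
        self.m = {}                        # atomic memory, defaults to 0
        self.sb = [[] for _ in range(n)]   # per-processor (addr, val) FIFO

    def st(self, p, a, v):
        # TSO-St: buffer the store instead of writing memory
        self.sb[p].append((a, v))

    def ld(self, p, a):
        # TSO-Ld: youngest buffered store to a wins, else atomic memory
        for addr, val in reversed(self.sb[p]):
            if addr == a:
                return val
        return self.m.get(a, 0)

    def commit_fence(self, p):
        # TSO-Com: a Commit may execute only once the local sb is empty
        assert not self.sb[p], "Commit blocks until sb drains"

    def deq_sb(self, p):
        # TSO-DeqSb: background op moves the oldest store to memory
        a, v = self.sb[p].pop(0)
        self.m[a] = v

    def pso_deq_sb(self, p, a):
        # PSO-DeqSb variant: dequeue the oldest store *for address a*
        i = next(i for i, e in enumerate(self.sb[p]) if e[0] == a)
        _, v = self.sb[p].pop(i)
        self.m[a] = v
```

Replaying the SB test (Figure 3a) on this model, both stores sit buffered while both loads read the atomic memory, giving r1 = r2 = 0.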
value, we introduce a conceptual device called invalidation
buffer, ib, for each processor in the I2E abstract machine (see
Figure 6c). ib is an unbounded buffer of haddress, valuei
pairs, each representing a stale memory value for an address
that can be observed by the processor. Multiple stale values
for an address in ib are kept ordered by their staleness.
The operations of the WMM abstract machine are similar
to those of PSO except for the background operation and
the load execution. When the background operation moves
a store from sb to the atomic memory, the original value
in the atomic memory, i.e., the stale value, enters the ib of
every other processor. A load first searches the local sb. If
the address is not found in sb, it either reads the value in
the atomic memory or any stale value for the address in the
local ib, the choice between the two being nondeterministic.
The abstract machine operations maintain the following
invariant: once a processor observes a store, it cannot observe
invariants: once a processor observes a store, it cannot observe
any staler store for that address. Therefore, (1) when a store
is executed, values for the store address in the local ib are
purged; (2) when a load is executed, values staler than
the load result are flushed from the local ib; and (3) the
background operation does not insert the stale value into
the ib of a processor if the sb of the processor contains the
address.
Just like introducing the Commit fence in TSO, to prevent
loads from reading the stale values in ib, we introduce the
Reconcile fence to clear the local ib. Figure 8 summarizes
the operations of the WMM abstract machine.
WMM-Nm (non-memory execution): Same as TSO-Nm.
WMM-Ld (load execution)
Predicate: The next instruction of a processor is a load.
Action: Assume the load address is a. If a is present in the sb
of the processor, then the load returns the value of the youngest
store for a in the local sb. Otherwise, the load is executed in
either of the following two ways (the choice is arbitrary):
1) The load returns the atomic memory value m[a], and all values
for a in the local ib are removed.
2) The load returns some value for a in the local ib, and all
values for a older than the load result are removed from the
local ib. (If there are multiple values for a in ib, the choice
of which one to read is arbitrary).
WMM-St (store execution)
Predicate: The next instruction of a processor is a store.
Action: Assume the store address is a and the store value is v.
The processor inserts the store ha, vi into its sb, and removes all
values for a from its ib.
WMM-Com (Commit execution): Same as TSO-Com.
WMM-Rec (execution of a Reconcile fence)
Predicate: The next instruction of a processor is a Reconcile.
Action: All values in the ib of the processor are removed.
WMM-DeqSb (background store buffer dequeue)
Predicate: The sb of a processor is not empty.
Action: Assume the value of the oldest store for some address
a in the sb is v. First, the stale haddress, valuei pair ha, m[a]i
is inserted to the ib of every other processor whose sb does not
contain a. Then this store is removed from sb, and m[a] is set
to v.
Figure 8. Operations of the WMM abstract machine
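Figure 8's operations can likewise be sketched in Python (names ours; for brevity the background dequeue always takes the overall-oldest store, a special case of the oldest-store-for-some-address rule):

```python
class WMMMachine:
    """Minimal I2E-style WMM machine: per-processor store buffer (sb)
    plus invalidation buffer (ib) of stale (addr, val) pairs, fresher
    values at higher indices."""

    def __init__(self, n):
        self.n = n
        self.m = {}                        # atomic memory, defaults to 0
        self.sb = [[] for _ in range(n)]
        self.ib = [[] for _ in range(n)]

    def st(self, p, a, v):
        # WMM-St: buffer the store; purge stale values for a from ib
        self.sb[p].append((a, v))
        self.ib[p] = [e for e in self.ib[p] if e[0] != a]

    def ld(self, p, a, from_ib=None):
        # WMM-Ld: local sb first; else atomic memory, or (if from_ib
        # gives an index) one of the stale values in the local ib
        for addr, val in reversed(self.sb[p]):
            if addr == a:
                return val
        stale = [i for i, e in enumerate(self.ib[p]) if e[0] == a]
        if from_ib is None or not stale:
            self.ib[p] = [e for e in self.ib[p] if e[0] != a]
            return self.m.get(a, 0)
        i = stale[from_ib]
        val = self.ib[p][i][1]
        # values staler than the result are flushed from the local ib
        self.ib[p] = [e for j, e in enumerate(self.ib[p])
                      if not (e[0] == a and j < i)]
        return val

    def reconcile(self, p):
        # WMM-Rec: clear the local ib
        self.ib[p] = []

    def commit_fence(self, p):
        # WMM-Com: blocks until the local sb is empty
        assert not self.sb[p]

    def deq_sb(self, p):
        # WMM-DeqSb: oldest store goes to memory; the overwritten value
        # becomes stale in every other ib whose sb lacks the address
        a, v = self.sb[p].pop(0)
        old = self.m.get(a, 0)
        for q in range(self.n):
            if q != p and all(e[0] != a for e in self.sb[q]):
                self.ib[q].append((a, old))
        self.m[a] = v
```

Replaying MP (Figure 3b) on this model: after P1 drains both stores, P2 can load b = 1 from memory and still read the stale a = 0 from its ib, i.e., the Ld-Ld reordering; a Reconcile clears the ib and restores the fresh value.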
A. Properties of WMM
Similar to TSO/PSO, WMM allows St-Ld and St-St
reorderings because of sb (Figures 3a and 3b). To forbid
the behavior in Figure 3a, we need to insert a Commit
followed by a Reconcile after the store in each processor.
Reconcile is needed to prevent loads from getting stale values
from ib. The I2E definition of WMM automatically forbids
Ld-St reordering (Figure 3c) and out-of-thin-air behaviors
(Figure 5).
Ld-Ld reordering: WMM allows the behavior in Figure 3b
due to St-St reordering. Even if we insert a Commit between
the two stores in P1, the behavior is still allowed because
I4 can read the stale value 0 from ib. This is as if the two
loads in P2 were reordered. Thus, we also need a Reconcile
between the two loads in P2 to forbid this behavior in WMM.
No dependency ordering: WMM does not enforce any dependency ordering. For example, WMM allows the behaviors
of litmus tests in Figures 3d, 3e and 3f (I2 should be Commit
in case of WMM), because the last load in P2 can always
get the stale value 0 from ib in each test. Thus, it requires
Reconcile fences to enforce dependency ordering in WMM.
In particular, WMM can reorder the data-dependent loads
(i.e., I4 and I5 ) in Figure 3f.
Multi-copy atomic stores: Stores in WMM are multi-copy atomic, and WMM allows the behavior in Figure 2a
even when Reconcile fences are inserted between Ld-Ld
pairs hI2 , I3 i and hI5 , I6 i. This is because a store can be
read by a load from the same processor while the store
is in sb. However, if the store is ever pushed from sb
to the atomic memory, it becomes visible to all other
processors simultaneously. Thus, WMM forbids the behaviors
in Figures 2b, 2c and 2d (FENCELL should be Reconcile in
these tests).
Per-location SC: WMM enforces per-location SC (Figure 4),
because both sb and ib enforce FIFO on same address entries.
B. Axiomatic Definition of WMM
Based on the above properties of WMM, we give a simple
axiomatic definition for WMM in Figure 9 in the style of
the axiomatic definitions of TSO and Alpha. A True entry
in the order-preserving table (Figure 9b) indicates that if
instruction X precedes instruction Y in the program order
(X <po Y ) then the order must be maintained in the global
memory order (<mo ). <mo is a total order of all the memory
and fence instructions from all processors. The notation
S −rf→ L means a load L reads from a store S. The notation
max<mo {set of stores} means the youngest store in the
set according to <mo . The axioms are self-explanatory: the
program order must be maintained if the order-preserving
table says so, and a load must read from the youngest
store among all stores that precede the load in either the
memory order or the program order. (See Appendix A for
the equivalence proof of the axiomatic and I2E definitions.)
These axioms also hold for Alpha with a slightly different
order-preserving table, which marks the (Ld,St) entry as
a = b. (Alpha also merges Commit and Reconcile into a
single fence). However, allowing Ld-St reordering creates
the possibility of out-of-thin-air behaviors, and Alpha uses an
additional complicated axiom to disallow such behaviors [5,
Chapter 5.6.1.7]. This axiom requires considering all possible
execution paths to determine if a store is ordered after a load
by dependency, while normal axiomatic models only examine
a single execution path at a time. Allowing Ld-St reordering
also makes it difficult to define Alpha operationally.
Axiom Inst-Order (preserved instruction ordering):
X <po Y ∧ order(X, Y) ⇒ X <mo Y
Axiom Ld-Val (the value of a load):
St a v −rf→ Ld a ⇒ St a v =
max<mo {St a v' | St a v' <mo Ld a ∨ St a v' <po Ld a}
(a) Axioms for WMM

X \ Y     | Ld b  | St b v' | Reconcile | Commit
Ld a      | a = b | True    | True      | True
St a v    | False | a = b   | False     | True
Reconcile | True  | True    | True      | True
Commit    | False | True    | True      | True
(b) WMM order-preserving table, i.e., order(X, Y) where X <po Y

Figure 9. Axiomatic definition of WMM
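One way to make axiom Inst-Order executable is to encode order(X, Y) directly. The entries below are our reading of Figure 9b ('same' meaning ordered only when the two addresses match, which captures per-location SC); the instruction encoding is ours:

```python
# Instructions are (uid, kind, addr); kind in {'Ld', 'St', 'Rec', 'Com'},
# addr is None for fences.
TABLE = {
    ('Ld', 'Ld'): 'same', ('Ld', 'St'): True,   ('Ld', 'Rec'): True,  ('Ld', 'Com'): True,
    ('St', 'Ld'): False,  ('St', 'St'): 'same', ('St', 'Rec'): False, ('St', 'Com'): True,
    ('Rec', 'Ld'): True,  ('Rec', 'St'): True,  ('Rec', 'Rec'): True, ('Rec', 'Com'): True,
    ('Com', 'Ld'): False, ('Com', 'St'): True,  ('Com', 'Rec'): True, ('Com', 'Com'): True,
}

def order(k1, a1, k2, a2):
    """order(X, Y): must program order X <po Y be kept in memory order?"""
    e = TABLE[(k1, k2)]
    return a1 == a2 if e == 'same' else e

def inst_order_ok(programs, mo):
    """Check axiom Inst-Order for one execution: programs is a list of
    per-processor instruction lists, mo a list of uids giving the total
    memory order."""
    pos = {uid: k for k, uid in enumerate(mo)}
    for prog in programs:
        for i, (u1, k1, a1) in enumerate(prog):
            for (u2, k2, a2) in prog[i + 1:]:
                if order(k1, a1, k2, a2) and pos[u1] > pos[u2]:
                    return False
    return True
```

For instance, a memory order placing both loads of the SB test before both stores passes the check (St-Ld is relaxed), whereas reordering two same-address loads, or a load past a preceding Reconcile, fails.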
C. Compiling C++11 to WMM
C++ primitives [47] can be mapped to WMM instructions
in an efficient way as shown in Figure 10. For the purpose
of comparison, we also include a mapping to POWER [56].
C++ operations    | WMM instructions                 | POWER instructions
Non-atomic Load   | Ld                               | Ld
Load Relaxed      | Ld                               | Ld
Load Consume      | Ld; Reconcile                    | Ld
Load Acquire      | Ld; Reconcile                    | Ld; cmp; bc; isync
Load SC           | Commit; Reconcile; Ld; Reconcile | sync; Ld; cmp; bc; isync
Non-atomic Store  | St                               | St
Store Relaxed     | St                               | St
Store Release     | Commit; St                       | lwsync; St
Store SC          | Commit; St                       | sync; St

Figure 10. Mapping C++ to WMM and POWER
The Commit; Reconcile sequence in WMM is the same as
a sync fence in POWER, and Commit is similar to lwsync.
The cmp; bc; isync sequence in POWER serves as a Ld-Ld
fence, so it is similar to a Reconcile fence in WMM. In case
of Store SC in C++, WMM uses a Commit while POWER
uses a sync, so WMM effectively saves one Reconcile. On
the other hand, POWER does not need any fence for Load
Consume in C++, while WMM requires a Reconcile.
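The WMM column of Figure 10 can be restated as a lookup table; the snake_case operation names below are our shorthand, not C++ syntax:

```python
# C++ atomic operations -> WMM instruction sequences (Figure 10).
CPP_TO_WMM = {
    'load_nonatomic':  ['Ld'],
    'load_relaxed':    ['Ld'],
    'load_consume':    ['Ld', 'Reconcile'],
    'load_acquire':    ['Ld', 'Reconcile'],
    'load_sc':         ['Commit', 'Reconcile', 'Ld', 'Reconcile'],
    'store_nonatomic': ['St'],
    'store_relaxed':   ['St'],
    'store_release':   ['Commit', 'St'],
    'store_sc':        ['Commit', 'St'],
}

def compile_to_wmm(ops):
    """Expand a sequence of C++ atomic operations into WMM instructions."""
    return [inst for op in ops for inst in CPP_TO_WMM[op]]
```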
Besides the C++ primitives, a common programming
paradigm is the well-synchronized program, in which all
critical sections are protected by locks. To maintain SC
behaviors for such programs in WMM, we can add a
Reconcile after acquiring the lock and a Commit before
releasing the lock.
For any program, if we insert a Commit before every store
and insert a Commit followed by a Reconcile before every
load, then the program behavior in WMM is guaranteed to
be sequentially consistent. This provides a conservative way
for inserting fences when performance is not an issue.
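The conservative scheme just described can be written as a small rewriting pass; the string-based instruction encoding here is our own simplification:

```python
def insert_conservative_fences(program):
    """Conservative SC restoration under WMM, per the rule above: a
    Commit before every store, and a Commit followed by a Reconcile
    before every load. Instructions are strings starting with 'St'
    or 'Ld' (a simplifying assumption of this sketch)."""
    out = []
    for inst in program:
        if inst.startswith('St'):
            out.append('Commit')
        elif inst.startswith('Ld'):
            out += ['Commit', 'Reconcile']
        out.append(inst)
    return out
```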
VI. WMM IMPLEMENTATION
WMM can be implemented using conventional OOO
multiprocessors, and even the most aggressive speculative
techniques cannot step beyond WMM. To demonstrate this,
we describe an OOO implementation of WMM, and show
simultaneously how the WMM model (i.e., the I2E abstract
machine) captures the behaviors of the implementation. The
implementation is described abstractly to skip unrelated
details (e.g., ROB entry reuse). The implementation consists
of n OOO processors and a coherent write-back cache
hierarchy which we discuss next.
[Figure: each OOO processor Pi contains a reorder buffer (ROB) and a store buffer. Load requests from the ROB and store requests from the store buffer are issued through port i into a memory request buffer mrb[i] of the write-back cache hierarchy (CCM), which contains the atomic memory m; load responses (possibly delayed) return to the ROB, and store responses dequeue the store buffer.]
Figure 11. CCM+OOO: implementation of WMM
A. Write-Back Cache Hierarchy (CCM)
We describe CCM as an abstraction of a conventional
write-back cache hierarchy to avoid too many details. In the
following, we explain the function of such a cache hierarchy,
abstract it to CCM, and relate CCM to the WMM model.
Consider a real n-ported write-back cache hierarchy with
each port i connected to processor Pi. A request issued to
port i may be from a load instruction in the ROB of Pi or
a store in the store buffer of Pi. In conventional coherence
protocols, all memory requests can be serialized, i.e., each
request can be considered as taking effect at some time point
within its processing period [57]. For example, consider the
non-stalling MSI directory protocol in the Primer by Sorin
et al. [58, Chapter 8.7.2]. In this protocol, a load request
takes effect immediately if it hits in the cache; otherwise, it
takes effect when it gets the data at the directory or a remote
cache with M state. A store request always takes effect at
the time of writing the cache, i.e., either when it hits in the
cache, or when it has received the directory response and all
invalidation responses in case of a miss. We also remove the
requesting store from the store buffer when a store request
takes effect. Since a cache cannot process multiple requests
to the same address simultaneously, we assume requests to
the same address from the same processor are processed in
the order that the requests are issued to the cache.
CCM (Figure 11) abstracts the above cache hierarchy
by operating as follows: every new request from port i is
inserted into a memory request buffer mrb[i], which keeps
requests to the same address in order; at any time we can
remove the oldest request for an address from a mrb, let
the request access the atomic memory m, and either send
the load result to the ROB (which may experience a delay)
or immediately dequeue the store buffer. m represents the
coherent memory state. Removing a request from mrb and
accessing m captures the moment when the request takes
effect.
It is easy to see that the atomic memory in CCM
corresponds to the atomic memory in the WMM model,
because they both hold the coherent memory values. We will
show shortly how WMM captures the combination of
CCM and OOO processors. Thus any coherence protocol that
can be abstracted as CCM can be used to implement WMM.
B. Out-of-Order Processor (OOO)
The major components of an OOO processor are the ROB
and the store buffer (see Figure 11). Instructions are fetched
into and committed from ROB in order; a load can be issued
(i.e., search for data forwarding and possibly request CCM)
as soon as its address is known; a store is enqueued into the
store buffer only when the store commits (i.e., entries in a
store buffer cannot be killed). To maintain the per-location
SC property of WMM, when a load L is issued, it kills
younger loads which have been issued but do not read from
stores younger than L. Next we give the correspondence
between OOO and WMM.
Store buffer: The state of the store buffer in OOO is
represented by the sb in WMM. Entry into the store buffer
when a store commits in OOO corresponds to the WMM-St
operation. In OOO, the store buffer only issues the oldest
store for some address to CCM. The store is removed from
the store buffer when the store updates the atomic memory
in CCM. This corresponds to the WMM-DeqSb operation.
ROB and eager loads: Committing an instruction from
ROB corresponds to executing it in WMM, and thus the
architectural register state in both WMM and OOO must
match at the time of commit. Early execution of a load L to
address a with a return value v in OOO can be understood by
considering where ha, vi resides in OOO when L commits.
Reading from sb or atomic memory m in the WMM-Ld
operation covers the cases that ha, vi is, respectively, in the
store buffer or the atomic memory of CCM when L commits.
Otherwise ha, vi is no longer present in CCM+OOO at the
time of load commit and must have been overwritten in the
atomic memory of CCM. This case corresponds to having
performed the WMM-DeqSb operation to insert ha, vi into
ib previously, and now using the WMM-Ld operation to read
v from ib.
Speculations: OOO can issue a load speculatively by
aggressive predictions, such as branch prediction (Figure 3d),
memory dependency prediction (Figure 3e) and even load-value prediction (Figure 3f). As long as all predictions related
to the load eventually turn out to be correct, the load result
obtained from speculative execution can be preserved. No
further check is needed. Speculations effectively reorder
dependent instructions, e.g., load-value speculation reorders
data-dependent loads. Since WMM does not require preserving any dependency ordering, speculations will neither break
WMM nor affect the above correspondence between OOO
and WMM.
Fences: Fences never go into store buffers or CCM in the
implementation. In OOO, a Commit can commit from ROB
only when the local store buffer is empty. Reconcile plays a
different role; at the time of commit it is a NOP, but while
it is in the ROB, it stalls all younger loads (unless the load
can bypass directly from a store which is younger than the
Reconcile). The stall prevents younger loads from reading
values that would become stale when the Reconcile commits.
This corresponds to clearing ib in WMM.
Summary: For any execution in the CCM+OOO implementation, we can operate the WMM model following the
above correspondence. Each time CCM+OOO commits an
instruction I from ROB or dequeues a store S from a store
buffer to memory, the atomic memory of CCM, store buffers,
and the results of committed instructions in CCM+OOO
are exactly the same as those in the WMM model when the
WMM model executes I or dequeues S from sb, respectively.
VII. PERFORMANCE EVALUATION OF WMM
We evaluate the performance of implementations of WMM,
Alpha, SC and TSO. All implementations use OOO cores
and a coherent write-back cache hierarchy. Since Alpha allows
Ld-St reordering, the comparison of WMM and Alpha will
show whether such reordering affects performance.
A. Evaluation Methodology
We ran SPLASH-2x benchmarks [59], [60] on an 8-core
multiprocessor using the ESESC simulator [61]. We ran all
benchmarks except ocean_ncp, which allocates too much
memory and breaks the original simulator. We used sim-medium inputs except for cholesky, fft and radix, where we
used sim-large inputs. We ran all benchmarks to completion
without sampling.
The configuration of the 8-core multiprocessor is shown
in Figures 12 and 13. We do not use load-value speculation
in this evaluation. The Alpha implementation can mark a
younger store as committed when instruction commit is
stalled, as long as the store can never be squashed and
the early commit will not affect single-thread correctness. A
committed store can be issued to memory or merged with
another committed store in WMM and Alpha. SC and TSO
issue loads speculatively and monitor L1 cache evictions to
kill speculative loads that violate the consistency model. We
also implement store prefetch as an optional feature for SC
and TSO; we use SC-pf and TSO-pf to denote the respective
implementations with store prefetch.
Cores    | 8 cores (@2GHz) with private L1 and L2 caches
L3 cache | 4MB shared, MESI coherence, 64-byte cache line;
         | 8 banks, 16-way, LRU replacement, max 32 req per bank;
         | 3-cycle tag, 10-cycle data (both pipelined);
         | 5 cycles between cache bank and core (pipelined)
Memory   | 120-cycle latency, max 24 requests

Figure 12. Multiprocessor system configuration

Frontend       | fetch + decode + rename, 7-cycle pipelined latency in all;
               | 2-way superscalar, hybrid branch predictor
ROB            | 128 entries, 2-way issue/commit
Function units | 2 ALUs, 1 FPU, 1 branch unit, 1 load unit, 1 store unit;
               | 32-entry reservation station per unit
Ld queue       | Max 32 loads
St queue       | Max 24 stores, containing speculative and committed stores
L1 D cache     | 32KB private, 1 bank, 4-way, 64-byte cache line;
               | LRU replacement, 1-cycle tag, 2-cycle data (pipelined);
               | Max 32 upgrade and 8 downgrade requests
L2 cache       | 128KB private, 1 bank, 8-way, 64-byte cache line;
               | LRU replacement, 2-cycle tag, 6-cycle data (both pipelined);
               | Max 32 upgrade and 8 downgrade requests

Figure 13. Core configuration
[Figure: stacked bar chart. For each benchmark (barnes, cholesky, fft, fmm, lu_cb, lu_ncb, ocean_cp, radix, radiosity, raytrace, volrend, water_nsq, water_sp), bars for WMM, Alpha, TSO-pf, TSO, SC-pf and SC show normalized execution time broken down into active, exe, flushLS, flushInv, flushRep, pendSt and empty components.]
Figure 14. Normalized execution time and its breakdown at the commit slot of ROB
B. Simulation Results
A common way to study the performance of memory
models is to monitor the commit of instructions at the commit
slot of ROB (i.e., the oldest ROB entry). Here are some
reasons why an instruction may not commit in a given cycle:
• empty: The ROB is empty.
• exe: The instruction at the commit slot is still executing.
• pendSt: The load (in SC) or Commit (in TSO, Alpha and
WMM) cannot commit due to pending older stores.
• flushLS: ROB is being flushed because a load is killed
by another older load (only in WMM and Alpha) or older
store (in all models) to the same address.
• flushInv: ROB is being flushed after cache invalidation
caused by a remote store (only in SC or TSO).
• flushRep: ROB is being flushed after cache replacement
(only in SC or TSO).
Figure 14 shows the execution time (normalized to WMM)
and its breakdown at the commit slot of ROB. The total
height of each bar represents the normalized execution time,
and stacks represent different types of stall times added to
the active committing time at the commit slot.
WMM versus SC: WMM is much faster than both SC and
SC-pf for most benchmarks, because a pending older store
in the store queue can block SC from committing loads.
WMM versus TSO: WMM never does worse than TSO or
TSO-pf, and in some cases it shows up to 1.45× speedup over
TSO (in radix) and 1.18× over TSO-pf (in lu_ncb). There
are two disadvantages of TSO compared to WMM. First,
load speculation in TSO is subject to L1 cache eviction, e.g.,
in benchmark ocean_cp. Second, TSO requires prefetch to
reduce store miss latency, e.g., a full store queue in TSO stalls
issue to ROB and makes ROB empty in benchmark radix.
However, prefetch may sometimes degrade performance due
to interference with load execution, e.g., TSO-pf has more
commit stalls due to unfinished loads in benchmark lu_ncb.
WMM versus Alpha: Figure 15 shows the average number
of cycles that a store in Alpha can commit before it reaches
the commit slot. However, the early commit (i.e., Ld-St
reordering) does not make Alpha outperform WMM (see
Figure 14), because store buffers can already hide the store
miss latency. Note that ROB is typically implemented as
a FIFO (i.e., a circular buffer) for register renaming (e.g.,
freeing physical registers in order), precise exceptions, etc.
Thus, if the early committed store is in the middle of ROB, its
ROB entry cannot be reused by a newly fetched instruction,
i.e., the effective size of the ROB will not increase. In
summary, the Ld-St reordering in Alpha does not increase
performance but complicates the definition (Section V-B).
[Figure: bar chart of the average number of cycles, per benchmark, by which stores commit early in Alpha.]
Figure 15. Average cycles to commit stores early in Alpha
VIII. WMM-S MODEL
Unlike the multi-copy atomic stores in WMM, stores in
some processors (e.g., POWER) are non-atomic due to shared
write-through caches or shared store buffers. If multiple
processors share a store buffer or write-through cache, a
store by any of these processors may be seen by all these
processors before other processors. Although we could tag
stores with processor IDs in the store buffer, it is infeasible
to separate values stored by different processors in a cache.
In this section, we introduce a new I2E model, WMM-S,
which captures the non-atomic store behaviors in a way
independent from the sharing topology. WMM-S is derived
from WMM by adding a new background operation. We will
show later in Section IX why WMM-S can be implemented
using memory systems with non-atomic stores.
A. I2E Definition of WMM-S
The structure of the abstract machine of WMM-S is the
same as that of WMM. To model non-atomicity of stores,
i.e., to make a store by one processor readable by another
processor before the store updates the atomic memory,
WMM-S introduces a new background operation that copies a store
from one store buffer into another. However, we need to
ensure that all stores for an address can still be put in a total
order (i.e., the coherence order), and the order seen by any
processor is consistent with this total order (i.e., per-location
SC).
To identify all the copies of a store in various store buffers,
we assign a unique tag t when a store is executed (by being
inserted into sb), and this tag is copied when a store is
copied from one store buffer to another. When a background
operation dequeues a store from a store buffer to the memory,
all its copies must be deleted from all the store buffers which
have them. This requires that all copies of the store are the
oldest for that address in their respective store buffers.
All the stores for an address in a store buffer can be strictly
ordered as a list, where the youngest store is the one that
entered the store buffer last. We make sure that all ordered
lists (of all store buffers) can be combined transitively to
form a partial order (i.e., no cycle), which has now to be
understood in terms of the tags on stores because of the
copies. We refer to this partial order as the partial coherence
order (<co ), because it is consistent with the coherence order.
Consider the states of store buffers shown in Figure 16
(primes are copies). A, B, C and D are different stores
to the same address, and their tags are tA, tB, tC and tD,
respectively. A′ and B′ are copies of A and B respectively,
created by the background copy operation. Ignoring C′, the
partial coherence order contains: tD <co tB <co tA (D is
older than B, and B is older than A′ in P2), and tC <co tB
(C is older than B′ in P3). Note that tD and tC are not
related here.
At this point, if we copied C in P3 as C′ into P1, we
would add a new edge tA <co tC, breaking the partial order
by introducing the cycle tA <co tC <co tB <co tA. Thus
copying of C into P1 should be forbidden in this state.
Similarly, copying a store with tag tA into P1 or P2 should
be forbidden because it would immediately create a cycle:
tA <co tA. In general, the background copy operation must
be constrained so that the partial coherence order is still
acyclic after copying.
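As a concrete illustration, the acyclicity constraint on the background copy can be sketched as a cycle check on the tag graph. The following Python fragment is only a sketch with a hypothetical encoding (store buffers are oldest-first lists of (tag, address) pairs; a duplicate tag in one buffer is treated as the self-cycle tA <co tA); it is not part of the WMM-S definition:

```python
from itertools import combinations

def coherence_edges(buffers):
    """Collect partial coherence edges (older_tag, younger_tag) from the
    per-address store order inside every store buffer."""
    edges = set()
    for sb in buffers.values():          # sb: oldest-first list of (tag, addr)
        for (t1, a1), (t2, a2) in combinations(sb, 2):
            if a1 == a2:
                edges.add((t1, t2))      # t1 entered earlier, so it is older
    return edges

def acyclic(edges):
    """DFS cycle check on the tag graph (a self-edge counts as a cycle)."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, set()).add(v)
        graph.setdefault(v, set())
    color = dict.fromkeys(graph, 0)      # 0 = unvisited, 1 = on stack, 2 = done
    def dfs(u):
        color[u] = 1
        for w in graph[u]:
            if color[w] == 1 or (color[w] == 0 and not dfs(w)):
                return False
        color[u] = 2
        return True
    return all(color[n] != 0 or dfs(n) for n in graph)

def can_copy(buffers, store, dest):
    """WMM-S-Copy predicate: copying `store` into the youngest end of
    `dest`'s store buffer must leave the coherence order acyclic."""
    trial = {p: list(sb) for p, sb in buffers.items()}
    trial[dest].append(store)
    return acyclic(coherence_edges(trial))

# Figure 16 scenario (oldest first); copies share the tag of the original.
bufs = {'P1': [('tA', 'a')],
        'P2': [('tD', 'a'), ('tB', 'a'), ('tA', 'a')],
        'P3': [('tC', 'a'), ('tB', 'a')]}
print(can_copy(bufs, ('tC', 'a'), 'P1'))  # False: cycle tA < tC < tB < tA
print(can_copy(bufs, ('tA', 'a'), 'P1'))  # False: self-cycle tA < tA
print(can_copy(bufs, ('tA', 'a'), 'P3'))  # True: no cycle is created
```

The last three calls reproduce the forbidden and allowed copies discussed above for Figure 16.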
Figure 16. Example states of store buffers (youngest at top; stores enqueued later are younger): P1 sb: C′, A; P2 sb: A′, B, D; P3 sb: B′, C. Primes denote copies.
Figure 17 shows the background operations of the WMM-S
abstract machine. The operations that execute instructions
in WMM-S are the same as those in WMM, so we do not
show them again. (The store execution operation in WMM-S
also needs to insert the tag of the store into sb.)
Binding background copy with load execution: If the
WMM-S-Copy operation is restricted to always happen right
before a load execution operation that reads from the newly
created copy, it is not difficult to prove that the WMM-S
model remains the same, i.e., legal behaviors do not
change. In the rest of the paper, we will only consider this
"restricted" version of WMM-S. In particular, all WMM-S-Copy
operations in the following analysis of litmus tests fall
into this pattern.
WMM-S-DeqSb (background store buffer dequeue)
Predicate: There is a store S in a store buffer, and all copies of
S are the oldest store for that address in their respective store
buffers.
Action: Assume the ⟨address, value, tag⟩ tuple of store S is
⟨a, v, t⟩. First, the stale ⟨address, value⟩ pair ⟨a, m[a]⟩ is inserted
into the ib of every processor whose sb does not contain a. Then
all copies of S are removed from their respective store buffers,
and the atomic memory m[a] is updated to v.
WMM-S-Copy (background store copy)
Predicate: There is a store S that is in the sb of some processor
i but not in the sb of some other processor j. Additionally, the
partial coherence order will still be acyclic if we insert a copy of
S into the sb of processor j.
Action: Insert a copy of S into the sb of processor j, and remove
all values for the store address of S from the ib of processor j.
Figure 17. Background operations of WMM-S
B. Properties of WMM-S
WMM-S enforces per-location SC (Figure 4), because it
prevents cycles in the order of stores to the same address. It
also allows the same instruction reorderings as WMM does
(Figure 3). We focus on the store non-atomicity of WMM-S.
Non-atomic stores and cumulative fences: Consider the
litmus tests for non-atomic stores in Figures 2b, 2c and
2d (FENCELL should be Reconcile in these tests). WMM-S
allows the behavior in Figure 2b by copying I1 into the sb of
P2 and then executing I2 , I3 , I4 , I5 , I6 sequentially. I1 will
not be dequeued from sb until I6 returns value 0. To forbid
this behavior, a Commit is required between I2 and I3 in
P2 to push I1 into memory. Similarly, WMM-S allows the
behavior in Figure 2c (i.e., we copy I1 into the sb of P2
to satisfy I2 , and I1 is dequeued after I5 has updated the
atomic memory), and we need a Commit between I2 and
I3 to forbid the behavior. In both litmus tests, the inserted
fences have a cumulative global effect in ordering I1 before
I3 and the last instruction in P3.
WMM-S also allows the behavior in Figure 2d by copying
I1 into the sb of P2 to satisfy I2 , and copying I5 into the sb
of P4 to satisfy I6 . To forbid the behavior, we need to add a
Commit right after the first load in P2 and P4 (but before the
FENCELL /Reconcile that we added to stop Ld-Ld reordering).
As we can see, Commit and Reconcile are similar to release
and acquire respectively. Cumulation is achieved by globally
advertising observed stores (Commit) and preventing later
loads from reading stale values (Reconcile).
Programming properties: WMM-S is the same as WMM
in the properties described in Section V-C, including the
compilation of C++ primitives, maintaining SC for
well-synchronized programs, and the conservative way of inserting
fences.
IX. WMM-S IMPLEMENTATIONS
Since WMM-S is strictly more relaxed than WMM, any
WMM implementation is a valid WMM-S implementation.
However, we are more interested in implementations with
non-atomic memory systems. Instead of discussing each
specific system one by one, we explain how WMM-S can be
implemented using the ARMv8 flowing model, which is a
general abstraction of non-atomic memory systems [8]. We
first describe the adapted flowing model (FM) which uses
fences in WMM-S instead of ARM, and then explain how it
obeys WMM-S.
A. The Flowing Model (FM)
FM consists of a tree of segments s[i] rooted at the
atomic memory m. For example, Figure 19 shows four OOO
processors (P1. . .P4) connected to a 4-ported FM which has
six segments (s[1 . . . 6]). Each segment is a list of memory
requests, (e.g., the list of blue nodes in s[6], whose head is
at the bottom and the tail is at the top).
OOO interacts with FM in a slightly different way than
CCM. Every memory request from a processor is appended to
the tail of the list of the segment connected to the processor
(e.g., s[1] for P1). OOO no longer contains a store buffer;
after a store is committed from ROB, it is directly sent to
FM and there is no store response. When a Commit fence
reaches the commit slot of ROB, the processor sends a
Commit request to FM, and the ROB will not commit the
Commit fence until FM sends back the response for the
Commit request.
Inside FM, there are three background operations: (1) Two
requests in the same segment can be reordered in certain
cases; (2) A load can bypass from a store in the same segment;
(3) The request at the head of the list of a segment can flow
into the parent segment (e.g., flow from s[1] into s[5]) or the
atomic memory (in case the parent of the segment, e.g., s[6],
is m). Details of these operations are shown in Figure 18.
It is easy to see that FM abstracts non-atomic memory
systems, e.g., Figure 19 abstracts a system in which P1
and P2 share a write-through cache while P3 and P4 share
another.
Two properties of FM+OOO: First, FM+OOO enforces
per-location SC because the segments in FM never reorder
requests to the same address. Second, stores for the same
address, which lie on the path from a processor to m in
the tree structure of FM, are strictly ordered based on their
distance to the tree root m; and the combination of all
such orderings will not contain any cycle. For example, in
Figure 19, stores in segments s[3] and s[6] are on the path
from P3 to m; a store in s[6] is older than any store (for the
same address) in s[3], and stores (for the same address) in
the same segment are ordered from bottom to top (bottom
is older).
FM-Reorder (reorder memory requests)
Predicate: The list of segment s[i] contains two consecutive
requests rnew and rold (rnew is above rold in s[i]); and neither
of the following is true:
1) rnew and rold are memory accesses to the same address.
2) rnew is a Commit and rold is a store.
Action: Reorder rnew and rold in the list of s[i].
FM-Bypass (store forwarding)
Predicate: The list of segment s[i] contains two consecutive
requests rnew and rold (rnew is above rold in s[i]). rnew is a
load, rold is a store, and they are for the same address.
Action: Send the load response for rnew using the store value
of rold, and remove rnew from the segment.
FM-Flow (flow request)
Predicate: A segment s[i] is not empty.
Action: Remove the request r which is the head of the list of s[i].
If the parent of s[i] in the tree structure is another segment s[j],
we append r to the tail of the list of s[j]. Otherwise, the parent
of s[i] is m, and we take the following actions according to the
type of r:
• If r is a load, we send a load response using the value in m.
• If r is a store ⟨a, v⟩, we update m[a] to v.
• If r is a Commit, we send a response to the requesting processor
and the Commit fence can then be committed from ROB.
Figure 18.
Background operations of FM
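To make the segment tree and its operations concrete, here is a minimal Python sketch of FM. This is a hypothetical encoding, not the paper's formalism: requests are tuples ('St', addr, val), ('Ld', addr) or ('Commit',), and responses are omitted. It implements the FM-Reorder predicate and FM-Flow:

```python
class FM:
    """Toy flowing model: a tree of request segments rooted at atomic memory."""

    def __init__(self, parents):
        self.parents = parents                    # segment -> parent segment or 'm'
        self.segs = {s: [] for s in parents}      # head at index 0, tail at the end
        self.m = {}                               # atomic memory

    def append(self, seg, req):
        """A processor (or a child segment) appends a request at the tail."""
        self.segs[seg].append(req)

    def can_reorder(self, seg, i):
        """FM-Reorder predicate for the consecutive pair r_old = segs[i]
        (older, closer to the head) and r_new = segs[i+1] (younger)."""
        r_old, r_new = self.segs[seg][i], self.segs[seg][i + 1]
        same_addr = (r_old[0] != 'Commit' and r_new[0] != 'Commit'
                     and r_old[1] == r_new[1])
        commit_after_store = r_new[0] == 'Commit' and r_old[0] == 'St'
        return not (same_addr or commit_after_store)

    def flow(self, seg):
        """FM-Flow: the head request flows to the parent segment, or is
        applied to atomic memory when the parent is m."""
        req = self.segs[seg].pop(0)
        parent = self.parents[seg]
        if parent != 'm':
            self.segs[parent].append(req)
        elif req[0] == 'St':
            _, addr, val = req
            self.m[addr] = val
        # A Ld reaching m would answer from self.m, and a Commit would send
        # a response to the requesting processor (responses omitted here).
        return req

fm = FM({'s1': 's3', 's2': 's3', 's3': 'm'})   # two leaf segments, one parent
fm.append('s1', ('St', 'a', 1))
fm.flow('s1')                                   # the store flows into s3
fm.flow('s3')                                   # the store reaches atomic memory
print(fm.m)                                     # {'a': 1}
```

Because requests to the same address are never reordered inside a segment, this sketch preserves per-location SC; the tree shape determines which processors may observe a store before it reaches m.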
B. Relating FM+OOO to WMM-S
WMM-S can capture the behaviors of any program
execution in implementation FM+OOO in almost the same
way that WMM captures the behaviors of CCM+OOO. When
a store updates the atomic memory in FM+OOO, WMM-S
performs a WMM-S-DeqSb operation to dequeue that
store from store buffers to memory. When an instruction
is committed from a ROB in FM+OOO, WMM-S executes
that instruction. The following invariants hold after each
operation in FM+OOO and the corresponding operation in
WMM-S:
1) For each instruction committed in FM+OOO, the execution results in FM+OOO and WMM-S are the same.
2) The atomic memories in FM+OOO and WMM-S match.
3) The sb of each processor Pi in WMM-S holds exactly all
the stores in FM+OOO that are observed by the commits
of Pi but have not updated the atomic memory. (A store
is observed by the commits of Pi if it has been either
committed by Pi or returned by a load that has been
committed by Pi.)
4) The order of stores for the same address in the sb of any
processor Pi in WMM-S is exactly the order of those stores
on the path from Pi to m in FM+OOO.
It is easy to see how the invariants are maintained when
the atomic memory is updated or a non-load instruction is
committed in FM+OOO. To understand the commit of a load
L to address a with result v in processor Pi in FM+OOO, we
still consider where ⟨a, v⟩ resides when L commits. Similar
to WMM, reading atomic memory m or local ib in the load
execution operation of WMM-S covers the cases that ⟨a, v⟩
is still in the atomic memory of FM or has already been
overwritten by another store in the atomic memory of FM,
respectively. In case ⟨a, v⟩ is a store that has not yet updated
the atomic memory in FM, ⟨a, v⟩ must be on the path from
Pi to m. In this case, if ⟨a, v⟩ has been observed by the
commits of Pi before L is committed, then L can be executed
by reading the local sb in WMM-S. Otherwise, on the path
from Pi to m, ⟨a, v⟩ must be younger than any other store
observed by the commits of Pi. Thus, WMM-S can copy
⟨a, v⟩ into the sb of Pi without breaking any invariant. The
copy will not create any cycle in <co because of invariants 3
and 4 as well as the second property of FM+OOO mentioned
above. After the copy, WMM-S can have L read v from the
local sb.
Performance comparison with ARM and POWER: As
we have shown that WMM-S can be implemented using the
generalized memory system of ARM, we can turn an ARM
multicore into a WMM-S implementation by stopping Ld-St
reordering in the ROB. Since Section VII already shows
that Ld-St reordering does not affect performance, we can
conclude qualitatively that there is no discernible performance
difference between WMM-S and ARM implementations. The
same arguments apply to the comparison against POWER
and RC.
Figure 19. OOO+FM (a tree of segments rooted at the atomic memory; within each segment, requests flow from the list tail toward the list head)
Proc. P1:
I1: St a 1
I2: MEMBAR
I3: St b 1
Proc. P2:
I4: r1 = Ld b
I5: if (r1 ≠ 1) exit
I6: St c 1
I7: r2 = Ld c
I8: r3 = a + r2 − 1
I9: r4 = Ld r3
RMO forbids: r1 = 1, r2 = 1, r3 = a, r4 = 0
Figure 20. RMO dependency order
X. PROBLEMS OF RC AND RMO
Here we elaborate the problems of RC (both RCsc and
RCpc ) and RMO, which have been pointed out in Section I.
RC: Although the RC definition [6] allows the behaviors of
WRC and IRIW (Figures 2b and 2d), it disallows the behavior
of WWC (Figure 2c). In WWC, when I2 reads the value of
store I1 , the RC definition says that I1 is performed with
respect to (w.r.t) P2. Since store I5 has not been issued due
to the data dependencies in P2 and P3, I1 must be performed
w.r.t P2 before I5 . The RC definition says that “all writes
to the same location are serialized in some order and are
performed in that order with respect to any processor” [6,
Section 2]. Thus, I1 is before I5 in the serialization order
of stores for address a, and the final memory value of a
cannot be 2 (the value of I1 ), i.e., RC forbids the behavior
of WWC and thus forbids shared write-through caches in
implementations.
RMO: The RMO definition [4, Section D] is incorrect in
enforcing dependency ordering. Consider the litmus test in
Figure 20 (MEMBAR is the fence in RMO). In P2, the
execution of I6 is conditional on the result of I4 , I7 loads
from the address that I6 stores to, and I9 uses the results
of I7. According to the definition of dependency ordering in
RMO [4, Section D.3.3], I9 depends on I4 transitively. Then
the RMO axioms [4, Section D.4] dictate that I9 must be
after I4 in the memory order, and thus forbid the behavior
in Figure 20. However, this behavior is possible in hardware
with speculative load execution and store forwarding, i.e., I7
first speculatively bypasses from I6 , and then I9 executes
speculatively to get 0. Since most architects will not be
willing to give up on these two optimizations, RISC-V should
not adopt RMO.
XI. CONCLUSION
We have proposed two weak memory models, WMM and
WMM-S, for RISC-V with different tradeoffs between definitional simplicity and implementation flexibility. However,
RISC-V can have only one memory model. Since there is
no obvious evidence that restricting to multi-copy atomic
stores affects performance or increases hardware complexity,
RISC-V should adopt WMM in favor of simplicity.
XII. ACKNOWLEDGMENT
We thank all the anonymous reviewers of the different
versions of this paper over the past two years. We have also
benefited from the discussions with Andy Wright, Thomas
Bourgeat, Joonwon Choi, Xiangyao Yu, and Guowei Zhang.
This work was done as part of the Proteus project under the
DARPA BRASS Program (grant number 6933218).
APPENDIX A.
PROOF OF EQUIVALENCE BETWEEN WMM I2E MODEL
AND WMM AXIOMATIC MODEL
Here we present the equivalence proof for the I2E
definition and the axiomatic definition of WMM.
Theorem 1 (Soundness). WMM I2E model ⊆ WMM axiomatic model.
Proof: The goal is that for any execution in the WMM
I2E model, we can construct relations ⟨<po, <mo, →rf⟩ that
have the same program behavior and satisfy the WMM
axioms. To do this, we first introduce the following ghost
states to the I2E model:
• Field source in the atomic memory: For each address a,
we add state m[a].source to record the store that writes
the current memory value.
• Fields source and overwrite in the invalidation buffer: For
each stale value ⟨a, v⟩ in an invalidation buffer, we add
state v.source to denote the store of this stale value, and
add state v.overwrite to denote the store that overwrites
v.source in the memory.
• Per-processor list <po-i2e: For each processor, <po-i2e is
the list of all the instructions that have been executed by
the processor. The order in <po-i2e is the same as the
execution order in the processor. We also use <po-i2e to
represent the ordering relation in the list (the head of the
list is the oldest/minimum in <po-i2e).
• Global list <mo-i2e : <mo-i2e is a list of all the executed
loads, executed fences, and stores that have been dequeued
from the store buffers. <mo-i2e contains instructions from
all processors. We also use <mo-i2e to represent the
ordering relation in the list (the head of the list is the
oldest/minimum in <mo-i2e ).
• Read-from relation →rf-i2e: →rf-i2e is a set of edges. Each
edge points from a store to a load, indicating that the load
had read from the store in the I2E model.
m[a].source initially points to the initialization store, and
<po-i2e, <mo-i2e, →rf-i2e are all initially empty. We now show
how these states are updated in the operations of the WMM
I2E model.
1) WMM-Nm, WMM-Com, WMM-Rec, WMM-St: Assume
the operation executes an instruction I in processor i. We
append I to the tail of list <po-i2e of processor i. If I is
a fence (i.e., the operation is WMM-Com or WMM-Rec),
then we also append I to the tail of list <mo-i2e .
2) WMM-DeqSb: Assume the operation dequeues a store S
for address a. In this case, we update m[a].source to be S.
Let S0 be the original m[a].source before this operation is
performed. Then for each new stale value ⟨a, v⟩ inserted
into any invalidation buffer, we set v.source = S0 and
v.overwrite = S. We also append S to the tail of list
<mo-i2e.
3) WMM-Ld: Assume the operation executes a load L for
address a in processor i. We append L to the tail of list
<po-i2e of processor i. The remaining actions depend
on how L gets its value in this operation:
• If L reads from a store S in the local store buffer, then
we add edge S →rf-i2e L, and append L to the tail of
list <mo-i2e.
• If L reads the atomic memory m[a], then we add edge
m[a].source →rf-i2e L, and append L to the tail of list
<mo-i2e.
• If L reads a stale value ⟨a, v⟩ in the local invalidation
buffer, then we add edge v.source →rf-i2e L, and we
insert L to be right before v.overwrite in list <mo-i2e
(i.e., L is older than v.overwrite, but is younger than
any other instruction which is older than v.overwrite).
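The update rules above can be animated with a small sketch. The Python fragment below uses a hypothetical encoding (a single address; stores and loads as strings) to build <mo-i2e and the →rf-i2e edges, including the insert-before-overwrite rule for loads that read a stale value:

```python
class Ghost:
    """Sketch of the ghost states used in the soundness proof
    (hypothetical names; tracks only one address for brevity)."""

    def __init__(self, init_store='S_init'):
        self.source = init_store   # m[a].source
        self.mo = []               # <mo-i2e, oldest first
        self.po = {}               # processor -> <po-i2e list
        self.rf = []               # (store, load) read-from edges

    def deq_sb(self, store, stale_readers):
        """WMM-DeqSb: `store` is written to memory; processors in
        `stale_readers` get the old value in their ib."""
        old = self.source
        self.source = store
        self.mo.append(store)      # dequeued store goes to the tail
        return {p: {'source': old, 'overwrite': store}
                for p in stale_readers}

    def ld_stale(self, proc, load, ib_entry):
        """WMM-Ld reading a stale value: the load is inserted right
        before v.overwrite in <mo-i2e."""
        self.po.setdefault(proc, []).append(load)
        self.rf.append((ib_entry['source'], load))
        i = self.mo.index(ib_entry['overwrite'])
        self.mo.insert(i, load)

g = Ghost()
ib = g.deq_sb('S1', ['P2'])        # S1 overwrites S_init; P2 keeps the stale value
g.deq_sb('S2', [])                  # S2 overwrites S1
g.ld_stale('P2', 'L1', ib['P2'])    # L1 reads the stale S_init value
print(g.mo)                         # ['L1', 'S1', 'S2']
```

The final order places L1 before its overwrite S1 (and hence before S2), exactly as the stale-read case requires.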
As we will see later, at the end of the I2E execution, <po-i2e,
<mo-i2e and →rf-i2e will become the ⟨<po, <mo, →rf⟩ relations that satisfy the WMM axioms. Before getting there, we
show that the I2E model has the following invariants after
each operation is performed:
1) For each address a, m[a].source in the I2E model is the
youngest store for a in <mo-i2e.
2) All loads and fences that have been executed in the I2E
model are in <mo-i2e.
3) An executed store is either in <mo-i2e or in a store buffer,
i.e., for each processor i, the store buffer of processor i
contains exactly every store that has been executed in the
I2E model but is not in <mo-i2e.
4) For any two stores S1 and S2 for the same address in the
store buffer of any processor i in the I2E model, if S1 is
older than S2 in the store buffer, then S1 <po-i2e S2.
5) For any processor i and any address a, address a cannot
be present in the store buffer and invalidation buffer of
processor i at the same time.
6) For any stale value v for any address a in the invalidation
buffer of any processor i in the I2E model, the following
invariants hold:
a) v.source and v.overwrite are in <mo-i2e, and
v.source <mo-i2e v.overwrite, and there is no other
store for a between them in <mo-i2e.
b) For any Reconcile fence F that has been executed by
processor i in the I2E model, F <mo-i2e v.overwrite.
c) For any store S for a that has been executed by
processor i in the I2E model, S <mo-i2e v.overwrite.
d) For any load L for a that has been executed by
processor i in the I2E model, if store S →rf-i2e L,
then S <mo-i2e v.overwrite.
7) For any two stale values v1 and v2 for the same address
in the invalidation buffer of any processor i in the I2E
model, if v1 is older than v2 in the invalidation buffer,
then v1.source <mo-i2e v2.source.
8) For any instructions I1 and I2 , if I1 <po-i2e I2 and
order(I1 , I2 ) and I2 is in <mo-i2e , then I1 <mo-i2e I2 .
9) For any load L and store S, if S →rf-i2e L, then the
following invariants hold:
a) If S is not in <mo-i2e, then S is in the store buffer of
the processor of L, and S <po-i2e L, and there is no
store S′ for the same address in the same store buffer
such that S <po-i2e S′ <po-i2e L.
b) If S is in <mo-i2e, then S = maxmo-i2e {S′ |
S′.addr = L.addr ∧ (S′ <po-i2e L ∨ S′ <mo-i2e L)},
and there is no other store S″ for the same address
in the store buffer of the processor of L such that
S″ <po-i2e L.
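The maxmo-i2e formula in invariant 9b can be read off directly as code. The sketch below uses a hypothetical encoding (oldest-first lists for the orders, an addr map, store names prefixed with 'St') and picks the <mo-i2e-youngest qualifying store:

```python
def max_mo_i2e(mo, po, L, addr):
    """Sketch of invariant 9b's formula: the youngest store (in <mo-i2e)
    among stores S' to L's address with S' <po-i2e L or S' <mo-i2e L.
    Returns None if the set is empty."""
    def before(seq, x, y):
        # x precedes y in the oldest-first list seq
        return x in seq and y in seq and seq.index(x) < seq.index(y)
    cands = [s for s in mo
             if s.startswith('St') and addr.get(s) == addr[L]
             and (before(po, s, L) or before(mo, s, L))]
    # cands preserves <mo-i2e order, so the last element is the youngest
    return cands[-1] if cands else None

mo = ['St1', 'St2', 'L1']            # global memory order, oldest first
po = ['St1', 'L1']                    # program order of L1's processor
addr = {'St1': 'a', 'St2': 'a', 'L1': 'a'}
print(max_mo_i2e(mo, po, 'L1', addr))  # St2
```

Here St2 wins: both St1 and St2 qualify, and St2 is younger in <mo-i2e.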
We now prove inductively that all invariants hold after each
operation R is performed in the I2 E model, i.e., we assume
all invariants hold before R is performed. In case performing
R changes some states (e.g., <mo-i2e ), we use superscript 0
to denote the state before R is performed (e.g., <0mo-i2e ) and
use superscript 1 to denote the state after R is performed
(e.g., <1mo-i2e ). Now we consider the type of R:
1) WMM-Nm: All invariants still hold.
2) WMM-St: Assume R executes a store S for address a
in processor i. R changes the states of the store buffer,
invalidation buffer, and <po-i2e of processor i. Now we
consider each invariant.
• Invariant 1, 2: These are not affected.
• Invariant 3: This invariant still holds for the newly
executed store S.
• Invariant 4: Since S becomes the youngest store in the
store buffer of processor i, this invariant still holds.
• Invariant 5: Since R will clear address a from the
invalidation buffer of processor i, this invariant still
holds.
• Invariant 6: Invariants 6a, 6b, 6d are not affected.
Invariant 6c still holds because there is no stale value
for a in the invalidation buffer of processor i after R
is performed.
• Invariant 7: This is not affected, because R can only
remove values from the invalidation buffer.
• Invariant 8: This is not affected because R is not in
<mo-i2e .
• Invariant 9: Consider load L∗ and store S∗ for address a
such that S∗ →rf-i2e L∗ and L∗ is from processor i. We
need to show that this invariant still holds for L∗ and
S∗. Since L∗ has been executed, we have L∗ <1po-i2e S.
Thus this invariant cannot be affected.
3) WMM-Com: Assume R executes a Commit fence F
in processor i. R adds F to the end of the <po-i2e of
processor i and adds F to the end of <mo-i2e . Now we
consider each invariant.
• Invariants 1, 3, 4, 5, 6, 7, 9: These are not affected.
• Invariant 2: This still holds because F is added to
<mo-i2e .
• Invariant 8: Consider instruction I in processor i such
that I <po-i2e F and order(I, F ). We need to show that
I <1mo-i2e F . Since order(I, F ), I can be a load, or
store, or fence. If I is a load or fence, since I has been
executed, invariant 2 says that I is in <0mo-i2e before R
is performed. Since F is added to the end of <mo-i2e ,
I <1mo-i2e F . If I is a store, the predicate of R says
that I is not in the store buffer. Then invariant 3 says
that I must be in <0mo-i2e , and we have I <1mo-i2e F .
4) WMM-Rec: Assume R executes a Reconcile fence F
in processor i. R adds F to the end of the <po-i2e of
processor i, adds F to the end of <mo-i2e, and clears the
invalidation buffer of processor i. Now we consider each
invariant.
invariant.
• Invariants 1, 3, 4, 9: These are not affected.
• Invariant 2: This still holds because F is added to
<mo-i2e .
• Invariants 5, 6, 7: These invariants still hold because the
invalidation buffer of processor i is now empty.
• Invariant 8: Consider instruction I in processor i such
that I <po-i2e F and order(I, F ). We need to show that
I <1mo-i2e F . Since order(I, F ), I can be a load or
fence. Since I has been executed, I must be in <0mo-i2e
before R is performed according to invariant 2. Thus,
I <1mo-i2e F .
5) WMM-DeqSb: Assume R dequeues a store S for address
a from the store buffer of processor i. R changes the
store buffer of processor i, the atomic memory m[a], and
invalidation buffers of other processors. R also adds S
to the end of <mo-i2e . Now we consider each invariant.
• Invariant 1: This invariant still holds, because
m[a].source1 = S and S becomes the youngest store
for a in <1mo-i2e .
• Invariant 2: This is not affected.
• Invariant 3: This invariant still holds, because S is
removed from store buffer and added to <mo-i2e .
• Invariant 4: This is not affected because we only
remove stores from the store buffer.
• Invariant 5: The store buffer and invalidation buffer of
processor i cannot be affected. The store buffer and
invalidation buffer of processor j (≠ i) may be affected,
because m[a]0 may be inserted into the invalidation
buffer of processor j. The predicate of R ensures that
the insertion will not happen if the store buffer of
processor j contains address a, so the invariant still
holds.
• Invariant 6: We need to consider the influence on both
existing stale values and the newly inserted stale values.
a) Consider stale value ⟨a, v⟩ which is in the invalidation buffer of processor j both before and after
operation R is performed. This implies j ≠ i,
because the store buffer of processor i contains
address a before R is performed, and invariant 5
says that the invalidation buffer of processor i
cannot have address a before R is performed. Now
we show that each invariant still holds for ⟨a, v⟩.
– Invariant 6a: This still holds because S is the
youngest in <mo-i2e.
– Invariant 6b: This is not affected.
– Invariant 6c: This is not affected because S is
not executed by processor j.
– Invariant 6d: Since S is not in <0mo-i2e, invariant 9a says that any load that has read S must
be from processor i. Since i ≠ j, this invariant
cannot be affected.
b) Consider the new stale value ⟨a, v⟩ inserted into
the invalidation buffer of processor j (≠ i). According to WMM-DeqSb, v = m[a]0, v.source =
m[a].source0, and v.overwrite = S. Now we check
each invariant.
– Invariant 6a: According to invariant 1,
v.source = m[a]0.source is the youngest store
for a in <0mo-i2e. Since S (i.e., v.overwrite) is
appended to the tail of <0mo-i2e, this invariant
still holds.
– Invariant 6b: According to invariant 2, any
Reconcile fence F executed by processor j
must be in <0mo-i2e. Thus, F <1mo-i2e S =
v.overwrite, and the invariant still holds.
– Invariant 6c: The predicate of R says that the
store buffer of processor j cannot contain address
a. Therefore, according to invariant 3, any store
S′ for a executed by processor j must be in
<0mo-i2e. Thus, S′ <1mo-i2e S = v.overwrite, and
the invariant still holds.
– Invariant 6d: Consider load L for address a
that has been executed by processor j. Assume
store S′ →rf-i2e L. The predicate of R says that
the store buffer of processor j cannot contain
address a. Thus, S′ must be in <0mo-i2e according
to invariant 9a. Therefore, S′ <1mo-i2e S =
v.overwrite, and the invariant still holds.
• Invariant 8: Consider instruction I such that I <po-i2e S
and order(I, S). Since order(I, S), I can be a load,
fence, or store for a. If I is a load or fence, then
invariant 2 says that I is in <0mo-i2e, and thus I <1mo-i2e
S, i.e., the invariant holds. If I is a store for a, then
the predicate of R and invariant 4 imply that I is not
in the store buffer of processor i. Then invariant 3 says
that I must be in <0mo-i2e, and thus I <1mo-i2e S, i.e.,
the invariant holds.
• Invariant 9: We need to consider the influence on both
loads that read S and loads that read stores other than
S.
a) Consider load L for address a that reads from S,
i.e., S →rf-i2e L. Since S is not in <0mo-i2e before
R is performed, invariant 9a says that L must be
executed by processor i, S <po-i2e L, and there is
no store S′ for a in the store buffer of processor
i such that S <po-i2e S′ <po-i2e L. Now we show
that both invariants still hold for S →rf-i2e L.
– Invariant 9a: This is not affected because S is in
<1mo-i2e after R is performed.
– Invariant 9b: Since S <po-i2e L and S is the
youngest in <1mo-i2e, S satisfies the maxmo-i2e
formula. We prove the rest of this invariant by
contradiction, i.e., we assume there is a store S′
for a in the store buffer of processor i after R
is performed such that S′ <po-i2e L. Note that
<po-i2e is not changed by R. The predicate of
R ensures that S is the oldest store for a in the
store buffer. Invariant 4 says that S <po-i2e S′.
Now we have S <po-i2e S′ <po-i2e L (before R
is performed), contradicting invariant 9a.
b) Consider load L for address a from processor j that
reads from store S∗ (≠ S), i.e., S ≠ S∗ →rf-i2e L.
Now we show that both invariants still hold for
S∗ →rf-i2e L.
– Invariant 9a: This invariant cannot be affected,
because performing R can only remove a store
from a store buffer.
– Invariant 9b: This invariant can only be affected
when S∗ is in <0mo-i2e. Since R can only remove
a store from a store buffer, the second half of
this invariant is not affected (i.e., no store S″
in the store buffer and so on). We only need
to focus on the maxmo-i2e formula, i.e., S∗ =
maxmo-i2e {S′ | S′.addr = a ∧ (S′ <po-i2e
L ∨ S′ <mo-i2e L)}. Since L <1mo-i2e S, this
formula can only be affected when S <po-i2e L
and i = j. In this case, before R is performed, S
is in the store buffer of processor i, and S <po-i2e
L, and L reads from S∗ ≠ S. This contradicts
invariant 9b, which is assumed to hold before
R is performed. Thus, the maxmo-i2e formula
cannot be affected either, i.e., the invariant holds.
6) WMM-Ld that reads from local store buffer: Assume R
executes a load L for address a in processor i, and L
reads from store S in the local store buffer. R appends
L to the <po-i2e of processor i, appends L to <mo-i2e,
and adds edge S →rf-i2e L. Note that R does not change any
invalidation buffer or store buffer. Now we consider each
invariant.
• Invariants 1, 3, 4, 5, 7: These are not affected.
• Invariant 2: This still holds because L is added to
<mo-i2e .
• Invariant 6: We consider each invariant.
– Invariants 6a, 6b, 6c: These are not affected.
– Invariant 6d: L can only influence stale values for
a in the invalidation buffer of processor i. However,
since S is in the store buffer of processor i before
R is performed, invariant 5 says that the invalidation
buffer of processor i cannot contain address a.
Therefore this invariant still holds.
• Invariant 8: We consider instruction I such that
I <1po-i2e L and order(I, L). Since order(I, L), I can
only be a Reconcile fence or a load for a. In either
case, invariant 2 says that I is in <0mo-i2e . Since L is
appended to the end of <mo-i2e , I <1mo-i2e L, i.e., the
invariant still holds.
• Invariant 9: Since R does not change any store buffer
or any load/store already in <0mo-i2e , R cannot affect
this invariant for loads other than L. We only need to
show that S −rf-i2e→1 L satisfies this invariant. Since
S is in the store buffer, invariant 3 says that S is
not in <mo-i2e . Therefore we only need to consider
invariant 9a. We prove by contradiction, i.e., we assume
there is store S 0 for a in the store buffer of processor i
and S <1po-i2e S 0 <1po-i2e L. Since R does not change
store buffer states, S and S 0 are both in the store buffer
before R is performed. We also have S <0po-i2e S 0
(because the only change in <po-i2e is to append L to
the end). According to the predicate of R, S should
be younger than S 0 , so S 0 <0po-i2e S (according to
invariant 4), contradicting with the previous conclusion.
Therefore, the invariant still holds.
7) WMM-Ld that reads from atomic memory: Assume R
executes a load L for address a in processor i, and
L reads from atomic memory m[a]. R appends L to
the <po-i2e of processor i, appends L to <mo-i2e , adds
m[a].source −rf-i2e→1 L, and may remove stale values
from the invalidation buffer of processor i. Now we
consider each invariant.
• Invariants 1, 3, 4: These are not affected.
• Invariant 2: This still holds because L is added to
<mo-i2e .
• Invariants 5, 7: These are not affected because R only
removes values from an invalidation buffer.
• Invariant 6: We consider each invariant.
– Invariants 6a, 6b, 6c: These are not affected because
R only removes values from an invalidation buffer.
– Invariant 6d: L can only influence stale values for
a in the invalidation buffer of processor i. However,
R will remove address a from the invalidation
buffer of processor i. Therefore this invariant still
holds.
• Invariant 8: We consider instruction I such that
I <1po-i2e L and order(I, L). Since order(I, L), I can
only be a Reconcile fence or a load for a. In either
case, invariant 2 says that I is in <0mo-i2e . Since L is
appended to the end of <mo-i2e , I <1mo-i2e L, i.e., the
invariant still holds.
• Invariant 9: Since R does not change any store buffer
or any load/store already in <0mo-i2e , R cannot affect
this invariant for loads other than L. We only need
to show that m[a].source −rf-i2e→1 L satisfies this
invariant (m[a] is not changed before and after R is
performed). According to invariant 1, m[a].source is
the youngest store for a in <0mo-i2e . Therefore we
only need to consider invariant 9b. Since we also have
m[a].source <1po-i2e L, maxmo-i2e {S 0 | S 0 .addr =
a ∧ (S 0 <po-i2e L ∨ S 0 <mo-i2e L)} will return
m[a].source, i.e., the first half of the invariant holds. The
predicate of R ensures that there is no store for a in
the store buffer of processor i, so the second half of the
invariant also holds.
8) WMM-Ld that reads from the invalidation buffer: Assume
R executes a load L for address a in processor i, and L
reads from the stale value ha, vi in the local invalidation
buffer. R appends L to the <po-i2e of processor i, appends
L to <mo-i2e , adds v.source −rf-i2e→1 L, and may remove
stale values from the invalidation buffer of processor i.
Now we consider each invariant.
• Invariants 1, 3, 4: These are not affected.
• Invariant 2: This still holds because L is added to
<mo-i2e .
• Invariants 5, 7: These are not affected because R can
only remove values from an invalidation buffer.
• Invariant 6: We consider each invariant.
– Invariants 6a, 6b, 6c: These are not affected because
R can only remove values from an invalidation buffer.
– Invariant 6d: Only stale values in the invalidation
buffer of processor i can be affected. Consider
stale value ha, v 0 i in the invalidation buffer of
processor i after R is performed. We need to
show that v.source <1mo-i2e v 0 .overwrite. Since
v 0 is not removed in R, v 0 must be either v or
younger than v in the invalidation buffer before
R is performed. According to invariant 7, either
v 0 .source = v.source or v.source <0mo-i2e v 0 .source.
Since v 0 .source <0mo-i2e v 0 .overwrite according to
invariant 6a, v.source <1mo-i2e v 0 .overwrite.
• Invariant 8: We consider instruction I such that
I <1po-i2e L and order(I, L), and we need to show
that I <1mo-i2e L. Since order(I, L), I can only be
a Reconcile fence or a load for a. If I is a Reconcile
fence, then invariant 6b says that I <0mo-i2e v.overwrite.
Since we insert L right before v.overwrite, we still have
I <1mo-i2e L. If I is a load for a, then invariant 6d
says that I <0mo-i2e v.overwrite, and thus we have
I <1mo-i2e L.
• Invariant 9: Since R does not change any store buffer
or any load/store already in <0mo-i2e , R cannot affect
this invariant for loads other than L. We only need to
show that v.source −rf-i2e→1 L satisfies this invariant.
Since v.source is in <0mo-i2e , we only need to consider
invariant 9b. The predicate of R ensures that the store
buffer of processor i cannot contain address a, so the
second half of the invariant holds (i.e., there is no S 00
and so on).
Now we prove the first half of the invariant, i.e., consider maxmo-i2e {S 0 | S 0 .addr = a ∧ (S 0 <1po-i2e L ∨
S 0 <1mo-i2e L)}. First note that since v.source <0mo-i2e
v.overwrite, v.source <1mo-i2e L. Thus v.source is in
set {S 0 | S 0 .addr = a ∧ (S 0 <1po-i2e L ∨ S 0 <1mo-i2e
L)}. Consider any store S that is also in this set,
then S <1po-i2e L ∨ S <1mo-i2e L must be true.
If S <1po-i2e L, S is executed in processor i before
R is performed. Invariant 6c says that S <0mo-i2e
v.overwrite ⇒ S <1mo-i2e v.overwrite. If S <1mo-i2e L,
then S <1mo-i2e L <1mo-i2e v.overwrite. In either
case, we have S <mo-i2e v.overwrite. Since we have
proved invariant 6a holds after R is performed, either
S = v.source or S <mo-i2e v.overwrite. Therefore,
maxmo-i2e will return v.source.
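The maxmo-i2e condition used throughout the invariant 9 arguments above (and, at the end of execution, the Ld-Val axiom) can be sketched concretely. This is a minimal sketch only, assuming the total orders <po-i2e and <mo-i2e are encoded as Python lists; all names here are hypothetical:

```python
from collections import namedtuple

Store = namedtuple("Store", "name addr")
Load = namedtuple("Load", "name addr")

def before(order, a, b):
    """True if a precedes b in `order` (a list encoding a total order)."""
    return a in order and b in order and order.index(a) < order.index(b)

def ld_val_source(mo, po, load):
    """The store selected by the max_mo formula: the <mo-maximum store
    to the load's address S with S <po load or S <mo load."""
    cands = [s for s in mo if isinstance(s, Store) and s.addr == load.addr
             and (before(po, s, load) or before(mo, s, load))]
    # the <mo-maximum candidate is the one appearing last in mo
    return max(cands, key=mo.index) if cands else None
```

For example, with two stores to a preceding the load in <mo, the younger store is selected; a store reachable only through <po is selected when no younger store precedes the load in <mo.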
It is easy to see that at the end of the I2 E execution
(of a program), there is no instruction to execute in each
processor and all store buffers are empty (i.e., all executed
loads, stores, and fences are in <mo-i2e ). At that time, if we
define axiomatic relations <po , <mo , −rf→ as <po-i2e ,
<mo-i2e , −rf-i2e→ respectively, then invariants 8 and 9b become
the Inst-Order and Ld-Val axioms respectively. That is,
h<po-i2e , <mo-i2e , −rf-i2e→i are the relations that satisfy the
WMM axioms and have the same program behavior as the
I2 E execution.
Theorem 2 (Completeness). WMM axiomatic model ⊆
WMM I2 E model.
Proof: The goal is to show that for any axiomatic relations
h<po , <mo , −rf→i that satisfy the WMM axioms, we can
run the same program in the I2 E model and get the same
program behavior. We will devise an algorithm to operate
the I2 E model to get the same program behavior as in
axiomatic relations h<po , <mo , −rf→i. In the algorithm, for
each instruction in the I2 E model, we need to find its
corresponding instruction in the <po in axiomatic relations.
Note that this mapping should be a one-to-one mapping, i.e.,
one instruction in the I2 E model will exactly correspond to
one instruction in the axiomatic relations and vice versa, so
we do not distinguish between the directions of the mapping.
The algorithm will create this mapping incrementally. Initially
(i.e., before the I2 E model performs any operation), for each
processor i, we only map the next instruction to execute
in processor i of the I2 E model to the oldest instruction in
the <po of processor i in the axiomatic relations. After the
algorithm starts to operate the I2 E model, whenever we have
executed an instruction in a processor in the I2 E model, we
map the next instruction to execute in that processor in the
I2 E model to the oldest unmapped instruction in the <po
of that processor in the axiomatic relations. The mapping
scheme obviously has the following two properties:
• The k-th executed instruction in a processor in the I2 E
model is mapped to the k-th oldest instruction in the <po
of that processor in the axiomatic relations.
• In the I2 E model, when a processor has executed x
instructions, only the first x + 1 instructions (i.e., the
executed x instructions and the next instruction to
execute) of that processor are mapped to instructions in
the axiomatic relations.
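Under the stated assumptions (each processor's <po encoded as a list, and a counter of that processor's executed instructions), the two properties can be written as a small sketch; the function names are hypothetical:

```python
def mapped_instructions(po, executed):
    """Property 2: when a processor has executed `executed` instructions,
    exactly the first executed+1 instructions of its <po are mapped."""
    return po[:executed + 1]

def mapping_of(po, k):
    """Property 1: the k-th executed instruction (1-based) is mapped to
    the k-th oldest instruction in the processor's <po."""
    return po[k - 1]
```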
Of course, later in the proof, we will show that the two corresponding instructions (one in the I2 E model and the other
in the axiomatic relations) have the same instruction types,
same load/store addresses (if they are memory accesses),
same store data (if they are stores), and same execution
results. In the following, we will assume the action of adding
new instruction mappings as an implicit procedure in the
algorithm, so we do not state it over and over again when
we explain the algorithm. When there is no ambiguity, we
do not distinguish an instruction in the I2 E model and an
instruction in the axiomatic relations if these two instructions
correspond to each other (i.e., the algorithm has built the
mapping between them).
Now we give the details of the algorithm. The algorithm
begins with the I2 E model (in initial state), an empty set Z,
and a queue Q which contains all the memory and fence
instructions in <mo . The order of instructions in Q is the
same as <mo , i.e., the head of Q is the oldest instruction
in <mo . The instructions in Q and Z are all considered as
instructions in the axiomatic relations. In each step of the
algorithm, we perform one of the following actions:
1) If the next instruction of some processor in the I2 E model
is a non-memory instruction, then we perform the WMM-Nm operation to execute it in the I2 E model.
2) Otherwise, if the next instruction of some processor in
the I2 E model is a store, then we perform the WMM-St
operation to execute that store in the I2 E model.
3) Otherwise, if the next instruction of some processor in
the I2 E model is mapped to a load L in set Z, then we
perform the WMM-Ld operation to execute L in the I2 E
model, and we remove L from Z.
4) Otherwise, we pop out instruction I from the head of Q
and process it in the following way:
a) If I is a store, then I must have been mapped to a
store in some store buffer (we will prove this), and
we perform the WMM-DeqSb operation to dequeue
I from the store buffer in the I2 E model.
b) If I is a Reconcile fence, then I must have been
mapped to the next instruction to execute in some
processor (we will prove this), and we perform the
WMM-Rec operation to execute I in the I2 E model.
c) If I is a Commit fence, then I must have been mapped
to the next instruction to execute in some processor
(we will prove this), and we perform the WMM-Com
operation to execute I in the I2 E model.
d) I must be a load in this case. If I has been mapped,
then it must be mapped to the next instruction to
execute in some processor in the I2 E model (we will
prove this), and we perform the WMM-Ld operation
to execute I in the I2 E model. Otherwise, we just
add I into set Z.
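The priority among actions 1 through 4d above can be summarized as a small dispatcher. This is a sketch only, assuming instructions carry a `kind` tag ('Nm', 'St', 'Ld', 'Rec', 'Com'), `next_instrs` holds each processor's next instruction to execute, and Q is a list ordered as <mo ; all names are hypothetical:

```python
from collections import namedtuple

Instr = namedtuple("Instr", "name kind")

def choose_action(next_instrs, Z, Q):
    """Return which action (1, 2, 3, or '4a'..'4d') the algorithm takes."""
    if any(i.kind == "Nm" for i in next_instrs):
        return 1                      # action 1: non-memory instruction
    if any(i.kind == "St" for i in next_instrs):
        return 2                      # action 2: execute a store
    if any(i in Z for i in next_instrs):
        return 3                      # action 3: execute a load waiting in Z
    head = Q[0]                       # otherwise pop the head of Q
    return {"St": "4a", "Rec": "4b", "Com": "4c", "Ld": "4d"}[head.kind]
```

The dict lookup in the last line mirrors the case split of action 4 on the type of the popped instruction.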
For proof purposes, we introduce the following ghost states
to the I2 E model:
• Field source in atomic memory: For each address a, we
add state m[a].source to record the store that writes the
current memory value.
• Field source in invalidation buffer: For each stale value
ha, vi in an invalidation buffer, we add state v.source to
denote the store of this stale value.
These two fields are set when a WMM-DeqSb operation is
performed. Assume the WMM-DeqSb operation dequeues a
store S for address a. In this case, we update m[a].source
to be S. Let S0 be the original m[a].source before this
operation is performed. Then for each new stale value ha, vi
inserted into any invalidation buffer, we set v.source = S0 .
It is obvious that memory value m[a] is equal to the value
of m[a].source, and stale value v is equal to the value of
v.source.
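The ghost-field update on WMM-DeqSb can be sketched as follows. It is reconstructed from how the proof later uses it (in particular, the dequeuing processor's own invalidation buffer is left unchanged, per the invariant 12 argument for processor i); the encodings `mem`, `inv_bufs`, and `sbuf_addrs` and all names are hypothetical:

```python
from collections import namedtuple

Store = namedtuple("Store", "name addr val proc")

def deq_sb_ghost(mem, inv_bufs, sbuf_addrs, S):
    """Ghost-state update when WMM-DeqSb dequeues store S.
    mem: addr -> (value, source); inv_bufs: processor -> list of stale
    (addr, value, source) entries; sbuf_addrs: processor -> addresses
    currently in that processor's store buffer."""
    old_val, old_src = mem[S.addr]        # S0 = old m[a].source
    mem[S.addr] = (S.val, S)              # update m[a] and m[a].source
    for p, buf in inv_bufs.items():
        # the overwritten value becomes stale in other processors'
        # invalidation buffers, unless their store buffer holds a
        if p != S.proc and S.addr not in sbuf_addrs[p]:
            buf.append((S.addr, old_val, old_src))   # v.source = S0
```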
For proof purposes, we define a function overwrite. For
each store S in <mo , overwrite(S) returns the store for the
same address such that
• S <mo overwrite(S), and
• there is no store S 0 for the same address such that
S <mo S 0 <mo overwrite(S).
In other words, overwrite(S) returns the store that overwrites
S in <mo . (overwrite(S) does not exist if S is the last store
for its address in <mo .)
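Assuming <mo is encoded as a list of stores, overwrite can be transcribed directly (the encoding and names are hypothetical):

```python
from collections import namedtuple

Store = namedtuple("Store", "name addr")

def overwrite(mo, S):
    """The store that overwrites S in <mo: the first store to the same
    address after S, or None if S is the last store to its address."""
    for T in mo[mo.index(S) + 1:]:
        if T.addr == S.addr:
            return T
    return None
```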
Also for proof purposes, at each time in the algorithm,
we use Vi to represent the set of every store S in <mo that
satisfies all the following requirements:
1) The store buffer of processor i does not contain the
address of S.
2) overwrite(S) exists and overwrite(S) has been popped
from Q.
3) For each Reconcile fence F that has been executed by
processor i in the I2 E model, F <mo overwrite(S).
4) For each store S 0 for the same address that has been
executed by processor i in the I2 E model, S 0 <mo
overwrite(S).
5) For each load L for the same address that has been
executed by processor i in the I2 E model, if store
S 0 −rf→ L in the axiomatic relations, then S 0 <mo
overwrite(S).
With the above definitions and new states, we introduce the
invariants of the algorithm. After each step of the algorithm,
we have the following invariants for the states of the I2 E
model, Z and Q:
1) For each processor i, the execution order of all executed
instructions in processor i in the I2 E model is a prefix
of the <po of processor i in the axiomatic relations.
2) The predicate of any operation performed in this step is
satisfied.
3) If we perform an operation to execute an instruction in
the I2 E model in this step, the operation is able to get
the same instruction result as that of the corresponding
instruction in the axiomatic relations.
4) The instruction type, load/stores address, and store data of
every mapped instruction in the I2 E model are the same
as those of the corresponding instruction in the axiomatic
relations.
5) All loads that have been executed in the I2 E model are
mapped exactly to all loads in <mo but not in Q or Z.
6) All fences that have been executed in processor i are
mapped exactly to all fences in <mo but not in Q.
7) All stores that have been executed and dequeued from
the store buffers in the I2 E model are mapped exactly to
all stores in <mo but not in Q.
8) For each address a, m[a].source in the I2 E model is
mapped to the youngest store for a in <mo that has
been popped from Q.
9) For each processor i, the store buffer of processor i
contains exactly every store that has been executed in the
I2 E model but is still in Q.
10) For any two stores S1 and S2 for the same address in
the store buffer of any processor i in the I2 E model, if
S1 is older than S2 in the store buffer, then S1 <po S2 .
11) For any processor i and any address a, address a cannot
be present in the store buffer and invalidation buffer of
processor i at the same time.
12) For any processor i, for each store S in Vi , the invalidation
buffer of processor i contains an entry whose source field
is mapped to S.
13) For any stale value ha, vi in any invalidation buffer,
v.source has been mapped to a store in <mo , and
overwrite(v.source) exists, and overwrite(v.source) is not
in Q.
14) For any two stale values v1 and v2 for the same address
in the invalidation buffer of any processor i in the I2 E
model, if v1 is older than v2 in the invalidation buffer,
then v1 .source <mo v2 .source.
These invariants guarantee that the algorithm will operate
the I2 E model to produce the same program behavior as the
axiomatic model. We now prove inductively that all invariants
hold after each step of the algorithm, i.e., we assume all
invariants hold before the step. In case a state is changed in
this step, we use superscript 0 to denote the state before this
step (e.g., Q0 ) and use superscript 1 to denote the state after
this step (e.g., Q1 ). We consider which action is performed
in this step.
• Action 1: We perform a WMM-Nm operation that executes
a non-memory instruction in the I2 E model. All the
invariants still hold after this step.
• Action 2: We perform a WMM-St operation that executes
a store S for address a in processor i in the I2 E model.
We consider each invariant.
– Invariants 1, 2, 4: These invariants obviously hold.
– Invariants 3, 5, 6, 7, 8: These are not affected.
– Invariant 9: Note that S is mapped before this step. Since
S cannot be dequeued from the store buffer before this
step, invariant 7 says that S is still in Q. Thus, this
invariant holds.
– Invariant 10: Since S is the youngest store in store buffer
and invariant 1 holds after this step, this invariant also
holds.
– Invariant 11: Since the WMM-St operation removes
all stale values for a from the invalidation buffer of
processor i, this invariant still holds.
– Invariant 12: For any processor j (j ≠ i), the action
in this step cannot change Vj or the invalidation buffer
of processor j. We only need to consider processor i.
The action in this step cannot introduce any new store
into Vi , i.e., Vi1 ⊆ Vi0 . Also notice that Vi1 does not
contain any store for a due to requirement 1. Since the
action in this step only removes values for address a
from the invalidation buffer of processor i, this invariant
still holds for i.
– Invariants 13, 14: These still hold, because we can only
remove values from the invalidation buffer in this step.
• Action 3: We perform a WMM-Ld operation that executes
a load L in Z. (Note that L has been popped from Q
before.) We assume L is in processor i (both the axiomatic
relations and the I2 E model agree on this because of
the way we create mappings). We also assume that L has
address a in the axiomatic relations, and that store S −rf→ L
in the axiomatic relations. According to invariant 4, L also
has load address a in the I2 E model. We first consider
several simple invariants:
– Invariants 1, 2, 5: These invariants obviously hold.
– Invariants 6, 7, 8, 9, 10: These are not affected.
– Invariant 11: Since the WMM-Ld operation does not
change store buffers and can only remove values from
the invalidation buffers, this invariant still holds.
– Invariants 13, 14: These still hold, because we can only
remove values from the invalidation buffer in this step.
We now consider the remaining invariants, i.e., 3, 4 and
12, according to the current state of Q (note that Q is not
changed in this step):
1) S is in Q: We show that the WMM-Ld operation can
read the value of S from the store buffer of processor
i in the I2 E model. We first show that S is in the store
buffer of processor i. Since L is not in Q, we have
L <mo S. According to the Ld-Val axiom, we know
S <po L, so S must have been executed. Since S is in
Q, invariant 7 says that S cannot be dequeued from the
store buffer, i.e., S is in the store buffer of processor i.
Now we prove that S is the youngest store for a in
the store buffer of processor i by contradiction, i.e.,
we assume there is another store S 0 for a which is in
the store buffer of processor i and is younger than S.
Invariant 10 says that S <po S 0 . Since S and S 0 are
stores for the same address, the Inst-Order axiom says
that S <mo S 0 . Since S 0 is in the store buffer, it is
executed before L. According to invariant 1, S 0 <po L.
Then S −rf→ L contradicts with the Ld-Val axiom.
Now we can prove the invariants:
– Invariant 3: This holds because the WMM-Ld operation reads S from the store buffer.
– Invariant 4: This holds because invariant 3 holds after
this step.
– Invariant 12: The execution of L in this step cannot
introduce new stores into Vj for any j, i.e., Vj1 ⊆ Vj0 .
Since there is no change to any invalidation buffer
when WMM-Ld reads from the store buffer, this
invariant still holds.
2) S is not in Q but overwrite(S) is in Q: We show that
the WMM-Ld operation can read the value of S from
the atomic memory. Since S has been popped from Q
while overwrite(S) is not, S is the youngest store for
a in <mo that has been popped from Q. According to
invariant 8, the current m[a].source in the I2 E model is
S. To let WMM-Ld read m[a], we only need to show
that the store buffer of processor i does not contain any
store for a. We prove by contradiction, i.e., we assume
there is a store S 0 for a in the store buffer of processor
i. According to invariant 9, S 0 has been executed in
the I2 E model, and S 0 is still in Q. Thus, we have
S 0 <po L (according to invariant 1), and S <mo S 0 .
Then S −rf→ L contradicts with the Ld-Val axiom.
Now we can prove the invariants:
– Invariant 3: This holds because the WMM-Ld operation reads S from the atomic memory m[a].
– Invariant 4: This holds because invariant 3 holds after
this step.
– Invariant 12: The execution of L in this step cannot
introduce new stores into Vj for any j, i.e., Vj1 ⊆ Vj0 .
Since there is no change to any invalidation buffer of
any processor other than i, we only need to consider
processor i. The WMM-Ld removes all values for
a from the invalidation buffer of processor i, so
the goal is to show that there is no store for a in
Vi1 . We prove by contradiction, i.e., assume there
is store S 0 for a in Vi1 . Requirement 5 for Vi says
that S <mo overwrite(S 0 ). Since overwrite(S) is in
Q, overwrite(S 0 ) is also in Q. This contradicts with
requirement 2. Therefore, there is no store for a in
Vi1 , and this invariant holds.
3) Both S and overwrite(S) are not in Q: We show that
the WMM-Ld operation can read the value of S from
the invalidation buffer of processor i. That is, we need
to show S ∈ Vi0 . We now prove that S satisfies all the
requirements for Vi0 :
– Requirement 1: We prove by contradiction, i.e., we
assume there is store S 0 for a in the store buffer of
processor i. Invariant 9 says that S 0 has been executed
but not in Q. Then we have S 0 <po L (invariant 1)
and S <mo S 0 . Then S −rf→ L contradicts with the
Ld-Val axiom.
– Requirement 2: This is satisfied because we assume
overwrite(S) is not in Q.
– Requirement 3: We prove by contradiction, i.e., we
assume that Reconcile fence F has been executed
by processor i, and overwrite(S) <mo F . Since F
is executed before L, invariant 1 says that F <po L.
Since order(F, L), the Inst-Order axiom says that
F <mo L. Now we have S <mo overwrite(S) <mo
F <mo L. Thus, S −rf→ L contradicts with the Ld-Val
axiom.
– Requirement 4: We prove by contradiction, i.e.,
we assume that store S 0 for a has been executed
by processor i, and either S 0 = overwrite(S) or
overwrite(S) <mo S 0 . According to the definition
of overwrite, we have S <mo S 0 . Since S 0 has
been executed, invariant 1 says that S 0 <po L. Then
S −rf→ L contradicts with the Ld-Val axiom.
– Requirement 5: We prove by contradiction, i.e., we
assume that store S 0 and load L0 are both for address
a, L0 has been executed by processor i, S 0 −rf→ L0 ,
and either S 0 = overwrite(S) or overwrite(S) <mo
S 0 . According to the definition of overwrite, we have
S <mo S 0 . Since L0 has been executed, invariant 1
says that L0 <po L. Since order(L0 , L), the Inst-Order
axiom says that L0 <mo L. Since S 0 −rf→ L0 , we have
S 0 <po L0 or S 0 <mo L0 . Since L0 <po L and L0 <mo L,
we have S 0 <po L or S 0 <mo L. Since S <mo S 0 ,
S −rf→ L contradicts with the Ld-Val axiom.
Now we can prove the invariants:
– Invariant 3: This holds because the WMM-Ld operation reads S from the invalidation buffer of processor
i.
– Invariant 4: This holds because invariant 3 holds after
this step.
– Invariant 12: The execution of L in this step cannot
introduce new stores into Vj for any j, i.e., Vj1 ⊆ Vj0 .
Since there is no change to any invalidation buffer
of any processor other than i, we only need to
consider processor i. Assume the invalidation buffer
entry read by the WMM-Ld operation is ha, vi, and
v.source = S. The WMM-Ld rule removes any stale
value ha, v 0 i that is older than ha, vi from the invalidation buffer of processor i. The goal is to show that
v 0 .source cannot be in Vi1 . We prove by contradiction,
i.e., we assume that v 0 .source ∈ Vi1 . Since L has
been executed after this step, requirement 5 says that
S <mo overwrite(v 0 .source). Since v 0 is older than v
in the invalidation buffer before this step, invariant 14
says that v 0 .source <mo v.source = S. The above
two statements contradict with each other. Therefore,
this invariant still holds.
• Action 4a: We pop a store S from the head of Q, and
we perform a WMM-DeqSb operation to dequeue S from
the store buffer. Assume that S is for address a, and
in processor i in the axiomatic relations. We first prove
that S has been mapped before this step. We prove by
contradiction, i.e., we assume S has not been mapped to
any instruction in the I2 E model before this step. Consider
the state right before this step. Let I be the next instruction
to execute in processor i in the I2 E model. We know I
is mapped and I <po S. The condition for performing
action 4a in this step says that I can only be a fence or
load, and we have order(I, S). According to the Inst-Order
axiom, I <mo S, so I has been popped from Q0 .
1) If I is a fence, since I is in <mo but not in Q0 ,
invariant 6 says that I must be executed, contradicting
our assumption that I is the next instruction to execute.
2) If I is a load, since I is not executed, and I is in <mo ,
and I is not in Q0 , invariant 5 says that I must be
in Z 0 . Then this algorithm step should use action 3
instead of action 4a.
Due to the contradictions, we know S must have been
mapped. Note that the next instruction to execute in
processor i cannot be a store, because otherwise this step
will use action 2. According to invariant 4, S cannot be
mapped to the next instruction to execute in processor
i. Therefore S must have been executed in processor i in
the I2 E model before this step.
Also according to invariant 4, the address and data of S
in the I2 E model are the same as those in the axiomatic
relations. Now we consider each invariant.
– Invariants 1, 4, 7, 9, 10: These invariants obviously hold.
– Invariants 3, 5, 6: These are not affected.
– Invariant 2: We prove by contradiction, i.e., we assume
there is a store S 0 for a that is older than S in the store
buffer of processor i (before this step). According to
invariant 9, S 0 is in Q. Since S is the head of Q0 ,
S <mo S 0 . According to invariant 10, S 0 <po S. Since
order(S 0 , S), S 0 <mo S, contradicting with the previous
statement. Thus, the predicate of the WMM-DeqSb
operation is satisfied, and the invariant holds.
– Invariant 8: S is the youngest instruction in <mo that
has been popped from Q, and m[a].source is updated
to S. Thus, this invariant still holds.
– Invariant 11: Since the WMM-DeqSb operation will not
insert the stale value into an invalidation buffer of a
processor if the store buffer of that processor contains
the same address, this invariant still holds.
– Invariant 12: For any processor j, the action in this step
will not remove stores from Vj but may introduce new
stores to Vj , i.e., Vj0 ⊆ Vj1 . We consider the following
two types of processors.
1) Processor i: We show that Vi0 = Vi1 . We prove
by contradiction, i.e., we assume there is store S 0
such that S 0 ∈ Vi1 but S 0 ∉ Vi0 . Since S 0 satisfies
requirements 3, 4, 5 after this step, it also satisfies
these three requirements before this step. Then S 0
must fail to meet at least one of requirements 1 and
2 before this step.
a) If S 0 does not meet requirement 1 before this step,
then S 0 .addr is in the store buffer of processor i
before this step and S 0 .addr is not in this store
buffer after this step. Thus, S 0 .addr must be a.
Since S 0 meets requirement 4 before this step and
S has been executed by processor i before this
step, we know S <mo overwrite(S 0 ). Since S 0
meets requirement 2 after this step, overwrite(S 0 )
is not in Q1 . Since Q1 is derived by popping the
oldest store from Q0 , we know S is not in Q0 .
Since S is in the store buffer before this step,
this contradicts invariant 9. Therefore this case
is impossible.
b) If S 0 does not meet requirement 2 before this step,
then overwrite(S 0 ) is in Q0 but not in Q1 . Then
overwrite(S 0 ) = S. Since S has been executed
by processor i, S 0 will fail to meet requirement 4
after this step. This contradicts with S 0 ∈ Vi1 , so
this case is impossible either.
Now we have proved that Vi0 = Vi1 . Since the WMM-DeqSb operation does not change the invalidation
buffer of processor i, this invariant holds for processor i.
2) Processor j (≠ i): We consider any store S 0 such
that S 0 ∈ Vj1 but S 0 ∉ Vj0 . Since S 0 satisfies
requirements 1, 3, 4, 5 after this step, it also satisfies
these four requirements before this step. Then S 0
must fail to meet requirement 2 before
this step, i.e., overwrite(S 0 ) is in Q0 but not in Q1 .
Then overwrite(S 0 ) = S. According to invariant 8,
we know S 0 is m[a].source. Consider the following
two cases.
a) The store buffer of processor j contains address
a: Since S 0 cannot meet requirement 1, Vj1 = Vj0 .
Since WMM-DeqSb cannot remove any value
from the invalidation buffer of processor j, this
invariant holds.
b) The store buffer of processor j does not contain
address a: In this case, the WMM-DeqSb operation will insert stale value ha, m[a]0 i into the
invalidation buffer of processor j, so the invariant
still holds.
– Invariant 13: The source fields of all the newly inserted
stale values in this step are equal to m[a].source0 .
According to invariant 8, m[a].source0 is the youngest
store for a in <mo that is not in Q0 . Since S is the head
of Q0 , we know overwrite(m[a].source0 ) = S. Since S
is not in Q1 , this invariant still holds.
– Invariant 14: Assume ha, vi is the new stale value
inserted into the invalidation buffer of processor j in
this step. We need to show that for any stale value
ha, v 0 i that is in this invalidation buffer before this
step, v 0 .source <mo v.source. According to invariant 13,
overwrite(v 0 .source) is not in Q0 . Since v.source =
m[a].source0 , according to invariant 8, v.source is the
youngest store for a in <mo that is not in Q0 . Therefore
v 0 .source <mo v.source.
• Action 4b: We pop a Reconcile fence F from Q, and
perform a WMM-Rec operation to execute it in the I2 E
model. Assume F is in processor i in the axiomatic
relations. We first prove that F has been mapped before
this step. We prove by contradiction, i.e., we assume F
is not mapped before this step. Consider the state right
before this step. Let I be the next instruction to execute in
processor i in the I2 E model. We know I is mapped and
I <po F . The condition for performing action 4b in this
step says that I can only be a fence or load, so we have
order(I, F ). According to the Inst-Order axiom, I <mo F ,
so I has been popped from Q0 .
1) If I is a fence, since I is in <mo but not in Q0 ,
invariant 6 says that I must be executed, contradicting
our assumption that I is the next instruction to execute.
2) If I is a load, since I is not executed, and I is in <mo ,
and I is not in Q0 , invariant 5 says that I must be
in Z 0 . Then this algorithm step should use action 3
instead of action 4b.
Due to the contradictions, we know F must have been
mapped before this step. According to invariant 6, since F
is in Q0 , F must have not been executed in the I2 E model.
Thus, F is mapped to the next instruction to execute in
processor i in the I2 E model.
Now we consider each invariant:
– Invariants 1, 2, 4, 6: These obviously hold.
– Invariants 3, 5, 7, 8, 9, 10: These are not affected.
– Invariants 11, 13, 14: These invariants hold, because
the invalidation buffer of any processor j (≠ i) is not
changed, and the invalidation buffer of processor i is
empty after this step.
– Invariant 12: For any processor j (≠ i), Vj1 = Vj0 and
the invalidation buffer of processor j is not changed in
this step. Thus, this invariant holds for any processor j
(6= i). We now consider processor i. The invalidation
buffer of processor i is empty after this step, so we need
to show that Vi1 is empty. We prove by contradiction, i.e.,
we assume there is a store S ∈ Vi1 . Since F has been
executed in processor i after this step, requirement 3
says that F <mo overwrite(S). Since F is the head of
Q0 , overwrite(S) must be in Q1 . Then S fails to meet
requirement 2 after this step, contradicting S ∈ Vi1.
Therefore Vi1 is empty, and this invariant also holds for
processor i.
Action 4c: We pop a Commit fence F from Q, and perform
a WMM-Com operation to execute it in the I2E model.
Assume F is in processor i in the axiomatic relations.
Using the same argument as in the previous case, we can
prove that F is mapped to the next instruction to execute
in processor i in the I2E model (before this step). Now
we consider each invariant:
– Invariants 1, 4, 6: These obviously hold.
– Invariants 3, 5, 7, 8, 9, 10, 11, 12, 13, 14: These are
not affected.
– Invariant 2: We prove by contradiction, i.e., we assume
there is a store S in the store buffer of processor i before
this step. According to invariant 9, S has been executed
in processor i and is in Q0. Thus, we have S <po F.
Since order(S, F), the Inst-Order axiom says that S <mo F.
Then F is not the head of Q0, contradicting the
fact that we pop F from the head of Q0.
Action 4d: We pop a load L from Q. Assume that L
is for address a and is in processor i in the axiomatic
relations. If we add L to Z, then all invariants obviously
hold. We only need to consider the case that we perform
a WMM-Ld operation to execute L in the I2E model. In
this case, L is mapped before this step. Since L is in
Q0, according to invariant 5, L must be mapped to an
unexecuted instruction in the I2E model. That is, L is
mapped to the next instruction to execute in processor i
in the I2E model. Invariant 4 ensures that L has the same
load address in the I2E model. We first consider several
simple invariants:
– Invariants 1, 2, 5: These invariants obviously hold.
– Invariants 6, 7, 8, 9, 10: These are not affected.
– Invariant 11: Since the WMM-Ld operation does not
change store buffers and can only remove values from
the invalidation buffers, this invariant still holds.
– Invariants 13, 14: These still hold, because we can only
remove values from the invalidation buffer in this step.
Assume store S −rf→ L in the axiomatic relations. We prove
the remaining invariants (i.e., 3, 4 and 12) according to
the position of S in <mo .
1) L <mo S: We show that the WMM-Ld operation can read S
from the store buffer of processor i in the I2E model.
The Ld-Val axiom says that S <po L. Then S must
have been executed in processor i in the I2E model
according to invariant 1. Since S is in Q0, invariant 9
ensures that S is in the store buffer of processor i
before this step.
To let WMM-Ld read S from the store buffer, we
now only need to prove that S is the youngest store
for a in the store buffer of processor i. We prove by
contradiction, i.e., we assume there is another store S′
for a which is in the store buffer of processor i and
is younger than S. Invariant 10 says that S <po S′.
Since S and S′ are stores for the same address, the
Inst-Order axiom says that S <mo S′. Since S′ is in
the store buffer, it is executed before L. According to
invariant 1, S′ <po L. Then S −rf→ L contradicts
the Ld-Val axiom.
Now we can prove the invariants:
– Invariant 3: This holds because the WMM-Ld operation reads S from the store buffer.
– Invariant 4: This holds because invariant 3 holds after
this step.
– Invariant 12: The execution of L in this step cannot
introduce new stores into Vj for any j, i.e., Vj1 ⊆ Vj0 .
Since there is no change to any invalidation buffer
when WMM-Ld reads from the store buffer, this
invariant still holds.
2) S <mo L: We show that the WMM-Ld operation can
read the value of S from the atomic memory. Since L
is the head of Q0 , S is not in Q0 . According to the
Ld-Val axiom, there cannot be any store for a between
S and L in <mo . Thus, S is the youngest store for a
in <mo that has been popped from Q. According to
invariant 8, the current m[a].source in the I2 E model
is S.
To let WMM-Ld read m[a], we only need to show
that the store buffer of processor i does not contain
any store for a. We prove by contradiction, i.e., we
assume there is a store S′ for a in the store buffer of
processor i. According to invariant 9, S′ has been executed
in the I2E model, and S′ is in Q0. Thus, we have
S′ <po L (according to invariant 1), and S <mo S′.
Then S −rf→ L contradicts the Ld-Val axiom.
Now we can prove the invariants:
– Invariant 3: This holds because the WMM-Ld operation reads S from the atomic memory m[a].
– Invariant 4: This holds because invariant 3 holds after
this step.
– Invariant 12: The execution of L in this step cannot
introduce new stores into Vj for any j, i.e., Vj1 ⊆ Vj0 .
Since there is no change to any invalidation buffer of
any processor other than i, we only need to consider
processor i. The WMM-Ld removes all values for
a from the invalidation buffer of processor i, so
the goal is to show that there is no store for a in
Vi1 . We prove by contradiction, i.e., assume there
is a store S′ for a in Vi1. Requirement 5 for Vi says
that S <mo overwrite(S′). Since S is the youngest
store for a that is in <mo but not in Q0, overwrite(S′)
must be in Q0. This contradicts requirement 2.
Therefore, there is no store for a in Vi1 , and this
invariant holds.
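The WMM-Ld read rule invoked throughout this case analysis (a load takes the youngest store for its address from the local store buffer if one exists; otherwise it reads the atomic memory m[a] and drops all stale values for that address from the local invalidation buffer) can be sketched as follows. This is an illustrative model with assumed data structures, not the paper's formal definition.

```python
def wmm_ld(addr, store_buffer, memory, invalidation_buffer):
    """Return the value a load of `addr` observes on this processor.

    store_buffer: list of (addr, value) pairs, oldest first.
    memory: dict addr -> value (the atomic memory m[a]).
    invalidation_buffer: list of (addr, value) stale values.
    All structures and names are illustrative.
    """
    # Case 1: the youngest store for addr in the local store buffer wins.
    for a, v in reversed(store_buffer):
        if a == addr:
            return v
    # Case 2: read the atomic memory; clear stale values for addr so that
    # a later load of addr cannot observe something older than m[addr].
    invalidation_buffer[:] = [(a, v) for a, v in invalidation_buffer
                              if a != addr]
    return memory[addr]
```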
By combining Theorems 1 and 2, we prove the equivalence
between the I2E model and the axiomatic model of WMM.
Theorem 3 (Equivalence). WMM I2E model ≡ WMM
axiomatic model.
A divide and conquer method for symbolic regression ✩
Changtong Luo a,∗, Chen Chen a,b, Zonglin Jiang a,b

arXiv:1705.08061v2, 27 Jun 2017

a State Key Laboratory of High Temperature Gas Dynamics, Institute of Mechanics, Chinese Academy of Sciences, Beijing 100190, China
b School of Engineering Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
Abstract
Symbolic regression aims to find a function that best explains the relationship between independent variables and the objective value based on a given
set of sample data. Genetic programming (GP) is usually considered as an
appropriate method for the problem since it can optimize functional structure and coefficients simultaneously. However, the convergence speed of GP
might be too slow for large scale problems that involve a large number of
variables. Fortunately, in many applications, the target function is separable
or partially separable. This feature motivated us to develop a new method,
divide and conquer (D&C), for symbolic regression, in which the target function is divided into a number of sub-functions and the sub-functions are then
determined by any GP algorithm. The separability is probed by a newly
proposed technique, the bi-correlation test (BiCT). D&C-powered GP has been
tested on some real-world applications, and the study shows that D&C can
help GP to get the target function much more rapidly.
Keywords: Mathematical modeling, Genetic programming, Symbolic
regression, Artificial intelligence, Divide and conquer
✩ This work has been supported by the National Natural Science Foundation of China
(Grant No. 11532014).
∗ Corresponding author
Email addresses: [email protected] (Changtong Luo), [email protected]
(Chen Chen), [email protected] (Zonglin Jiang)
Preprint submitted to Expert Systems with Applications
June 28, 2017
1. Introduction
Symbolic regression (SR) is a data-driven modeling method which aims to
find a function that best explains the relationship between independent variables and the objective value based on a given set of sample data (Schmidt and Lipson,
2009). Genetic programming (GP) is usually considered a good candidate for SR since it does not impose a priori assumptions and can optimize
function structure and coefficients simultaneously. However, the convergence
speed of GP might be too slow for large scale problems that involve a large
number of variables.
Many efforts have been devoted to improving the original GP (Koza,
2008) in several ways. Some works suggest replacing its tree-based representation
with an integer string (Grammatical Evolution) (O’Neill and Ryan, 2001) or
a parse matrix (Parse-Matrix Evolution) (Luo and Zhang, 2012). These
techniques can simplify the coding and decoding process but do little to
improve the convergence speed. Some other works suggest confining the
search space to a generalized linear space, for example, Fast Function eXtraction (McConaghy, 2011) and Elite Bases Regression (Chen et al., 2017).
These techniques can accelerate the convergence of GP, even by orders
of magnitude. However, the speed is gained at the cost of generality;
that is, the result might be only a linear approximation of the
target function.
Fortunately, in many applications, the target function is separable or
partially separable (see section 2 for definitions). For example, in gas dynamics (Anderson, 2006), the heat flux coefficient St of a flat plate could be
formulated as
St = 2.274 √(sin(θ) cos(θ)) / √Rex ,    (1)
and the heat flux qs at the stagnation point of a sphere as
qs = 1.83 × 10⁻⁴ v³ √(ρ/R) (1 − hw/hs).    (2)
In equation (1), the two independent variables, θ and Rex , are both separable.
In equation (2), the first three variables v, ρ, and R are all separable, and
the last two variables, hw and hs , are not separable, but their combination
(hw , hs ) is separable. The function in equation (2) is considered partially
separable in this paper.
The feature of separability will be used in this paper to accelerate the
optimization process of symbolic regression. Some basic concepts on function separability are defined in Section 2. Section 3 describes the overall
work flow of the proposed method, divide and conquer. Section 4 presents
a special technique, bi-correlation test (BiCT), to determine the separability
of a function. Numerical results are given in Section 5, and the concluding
remarks are drawn in Section 6.
2. Basic concepts
The proposed method in this paper is based on a new concept referred to
as partial separability. It has something in common with existing separability
definitions such as those of (Berenguel et al., 2013) and (d’Avezac et al.,
2011), but is not exactly the same. To make it clear and easy to understand,
we begin with some illustrative examples. The functions as follows could all
be regarded as partially separable:
z = 0.8 + 0.6 ∗ (u² + cos(u)) + sin(v + w) ∗ (v − w);    (3)

z = 0.8 + 0.6 ∗ (u² + cos(u)) − sin(v + w) ∗ (v − w);    (4)

z = 0.8 + 0.6 ∗ (u² + cos(u)) ∗ sin(v + w) ∗ (v − w);    (5)
where the sub-functions are u² + cos(u) and sin(v + w) ∗ (v − w); u is separable with
respect to z, while v and w themselves are not separable, but their combination (v, w) is considered separable. A simple example of a non-separable
function is f(x) = sin(x1 + x2 + x3).
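To make the decomposition concrete, here is a small sketch of equations (3)-(5) written as compositions of the two sub-functions; the names phi1 and phi2 are ours, not from the text.

```python
import math

# The two sub-functions shared by equations (3)-(5): u is separable,
# while v and w are only separable as the combination (v, w).
def phi1(u):
    return u ** 2 + math.cos(u)

def phi2(v, w):
    return math.sin(v + w) * (v - w)

def z_plus(u, v, w):   # equation (3)
    return 0.8 + 0.6 * phi1(u) + phi2(v, w)

def z_minus(u, v, w):  # equation (4)
    return 0.8 + 0.6 * phi1(u) - phi2(v, w)

def z_times(u, v, w):  # equation (5)
    return 0.8 + 0.6 * phi1(u) * phi2(v, w)
```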
More precisely, the separability could be defined as follows.
Definition 1. A scalar function of n continuous variables f(x) (f : R^n → R, x ∈ R^n) is said to be partially separable if and only if it can be rewritten
as

f(x) = c0 ⊗1 ϕ1(I1 x) ⊗2 ϕ2(I2 x) ⊗3 · · · ⊗m ϕm(Im x)    (6)

where the binary operator ⊗i could be plus (+), minus (−), or times (×). Ii is a
sub-matrix of the identity matrix, Ii ∈ R^{ni×n}. The set {I1, I2, · · · , Im}
is a partition of the identity matrix I ∈ R^{n×n}, with ∑_{i=1}^{m} ni = n. The sub-function
ϕi is a scalar function such that ϕi : R^{ni} → R. Otherwise the function is
said to be non-separable.
In this definition, the binary operator division (/) is not included in ⊗
for simplicity. However, this does not affect much of its generality, since the
sub-functions are not preset and can be transformed as ϕ̃i(·) = 1/ϕi(·) provided
ϕi(·) ≠ 0.
A special case is that all variables are separable, which could be defined
as follows.
Definition 2. A scalar function of n continuous variables f(x) (f : R^n → R, x ∈ R^n) is said to be completely separable if and only if it can be rewritten
as equation (6) with ni = 1 for all i = 1, 2, · · · , m.
3. Divide and conquer method
As mentioned above, many practical problems have the feature of separability. To make use of this feature to accelerate the optimization process of
genetic programming, a new method, divide and conquer (D&C), is proposed.
It works as follows.
First, a separability detection process is carried out to find out whether
the concerned problem is separable (or at least partially separable). The
variables are tested one by one, and then their combinations. Once a variable (or
variable combination) is identified as separable, a sub-function ϕi(xi) (or
ϕi(Ii x) for a variable combination) is assigned. In this way, the structure
of the target function f(x) can be divided into a set of sub-functions based on
the separability information: ϕi(Ii x), i = 1, 2, · · · , m.
Then, the sub-functions ϕi(Ii x) (i = 1, 2, · · · , m) are optimized and
determined one by one, using any genetic programming algorithm, including classical GP, Grammatical Evolution (O’Neill and Ryan, 2001) and
Parse-Matrix Evolution (Luo and Zhang, 2012). When optimizing one sub-function, the variables not involved ((I − Ii)x) are fixed as constants. That is,
only a small number of variables (Ii x, a subset of {x1, x2, · · · , xn})
need to be considered. This means that determining a sub-function should be
much easier than evolving the target function f(x) directly.
For example, in Equation (3), Equation (4), or Equation (5), the
sub-function ϕ1(u) = u² + cos(u) (or ϕ2(v, w) = sin(v + w) ∗ (v − w)) has
fewer variables and lower complexity than the original function.
Thus, optimizing the sub-functions one by one is much easier for GP.
Finally, these sub-functions are properly combined to form the target
function, which is referred to as the function recovery process. This process
Figure 1: Work flow of divide and conquer for symbolic regression: Separability Detection → Function Division → Sub-Function Determination → Function Recovery
is rather simple, and any traditional regression algorithm can
accomplish it.
The work flow of D&C is summarized in Figure 1.
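The four-stage work flow can be sketched as a driver loop. The three callables below are stand-ins (our assumptions) for BiCT-based detection, a GP sub-solver, and the final regression step, none of which are specified at this point in the text.

```python
def divide_and_conquer(f, bounds, detect_separable, fit_subfunction, recover):
    """Illustrative D&C driver for symbolic regression.

    f: the sampled target function; bounds: per-variable (lo, hi) ranges.
    detect_separable, fit_subfunction, recover are hypothetical stand-ins
    for BiCT detection, a GP sub-solver, and the final regression step.
    """
    # 1. Separability detection: partition variable indices into groups.
    groups = detect_separable(f, bounds)          # e.g. [[0], [1, 2]]
    # 2-3. Function division + sub-function determination: fit each group,
    # with the remaining variables fixed as constants inside the sub-solver.
    subfits = [fit_subfunction(f, bounds, g) for g in groups]
    # 4. Function recovery: combine the sub-functions into the target model.
    return recover(subfits)
```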
4. Bi-correlation test
4.1. Description
The main idea of the proposed divide and conquer (D&C) method is to
make use of the separability feature to simplify the search process. Therefore,
the most important and fundamental step (the separability detection process
in Fig. 1) is to determine whether the concerned problem is separable (or at
least partially separable). To fulfill this task, a special technique, the bi-correlation test (BiCT), is provided in this section.
Consider independent variables x1, x2, · · · , xn and the dependent variable f as
n+1 random variables, and the known data as sample points. Recall that
linear dependence and the correlation coefficient of two random variables are related as follows.

Lemma 1. Two random variables ξ and η are linearly related with correlation coefficient of absolute value 1 (i.e., |ρξη| = 1) if and only if there exist two constants
a, b (b ≠ 0) such that P{η = a + bξ} = 1.
The correlation coefficient ρ could be estimated by the sample correlation
coefficient r, which is defined as follows:

r = (1/(N − 1)) ∑_{i=1}^{N} ((ξi − ξ̄)/σξ) · ((ηi − η̄)/ση)

where N is the number of observations in the sample set, ∑ is the summation
symbol, ξi is the ξ value for observation i, ξ̄ is the sample mean of ξ, ηi is
the η value for observation i, η̄ is the sample mean of η, σξ is the sample
standard deviation of ξ, and ση is the sample standard deviation of η.
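The definition of r above is straightforward to compute directly; here is a minimal sketch in plain Python (the function name and argument conventions are ours). By Lemma 1, an exact linear relation η = a + bξ yields |r| = 1, with the sign of r matching the sign of b.

```python
import math

def sample_corr(xs, ys):
    """Sample correlation coefficient r of two equal-length observation lists."""
    n = len(xs)
    mx = sum(xs) / n                      # sample mean of xs
    my = sum(ys) / n                      # sample mean of ys
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / (n - 1))  # sample std dev
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / (n - 1))
    return sum((x - mx) * (y - my)
               for x, y in zip(xs, ys)) / ((n - 1) * sx * sy)
```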
Only continuous model functions are considered in this paper. As a result,
the conclusion of Lemma 1 could be simplified as follows.
• The two random variables f_A and f_B are linearly related (f_B = a +
b·f_A, b ≠ 0) if and only if the sample correlation coefficient satisfies |r| = 1 for any given
sample set.
Studies show that the functional separability defined in the above section (see equation (6)) can be detected with random sampling and linear
correlation techniques.
Without loss of generality, a simple function with three variables
(f (x) = f (x1 , x2 , x3 ), xi ∈ [ai , bi ], i = 1, 2, 3) is considered to illustrate the
implementation of the bi-correlation test. To find out whether the first variable x1 is separable, two correlation tests are needed.
The first correlation test is carried out as follows. A set of random sample points in [a1 , b1 ] are generated, then these points are extended to three
dimensional space with the rest variables (x2 and x3 ) fixed to a point A. We
get a vector of function values f (A) = f (x1 , A) = (f1A , f2A , · · · , fNA ), where
N is the number of sample points. Then these points are extended to three
dimensional space with fixed x2 and x3 to another point B, We get another
vector f (B) = f (x1 , B) = (f1B , f2B , · · · , fNB ). It is obviously that the two
vectors f (A) and f (B) will be linearly related if x1 is separable. However, it
is easy to show that this linear relation could NOT ensure its separability.
Then comes the second correlation test. Another set of random
sample points in [a2, b2] × [a3, b3] is generated; these points are extended
to three-dimensional space with the remaining variable (x1 in this case) fixed to
a point C, giving a vector f(C) = f(C, x2, x3). Similarly, another vector
f(D) = f(D, x2, x3) is obtained. Again, the two vectors f(C) and f(D) need
to be linearly related to ensure the separability of x1.
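The two correlation tests described above can be sketched as follows. The function `bict`, the helper `pearson`, the sample size n, and the tolerance are all our illustrative choices, not the paper's exact procedure.

```python
import math
import random

def pearson(a, b):
    """Sample correlation coefficient of two equal-length vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (sa * sb)

def bict(f, bounds, idx, n=30, tol=1e-6):
    """Bi-correlation test sketch: is the variable group `idx` separable in f?

    f takes a full list of variable values; bounds holds per-variable
    (lo, hi) ranges.  Both tests must find |r| = 1 (up to tolerance).
    """
    rest = [i for i in range(len(bounds)) if i not in idx]

    def corr(varying, fixed):
        # One test: sample the varying group once, evaluate f against two
        # random fixings of the other group, and correlate the two vectors.
        pts = [[random.uniform(*bounds[i]) for i in varying]
               for _ in range(n)]
        vecs = []
        for _ in range(2):
            fix = [random.uniform(*bounds[i]) for i in fixed]
            x = [0.0] * len(bounds)
            for i, v in zip(fixed, fix):
                x[i] = v
            col = []
            for p in pts:
                for i, v in zip(varying, p):
                    x[i] = v
                col.append(f(list(x)))
            vecs.append(col)
        return pearson(vecs[0], vecs[1])

    return (abs(abs(corr(idx, rest)) - 1.0) < tol
            and abs(abs(corr(rest, idx)) - 1.0) < tol)
```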
4.2. Proposition
Without loss of generality, suppose we have a scalar function f(x)
of n continuous variables (f : R^n → R, x ∈ Ω ⊂ R^n, and Ω = [a1, b1] ×
[a2, b2] × · · · × [an, bn]), and we need to find out whether the first m variable
combination (x1, x2, · · · , xm) is separable. Let the matrix X1 hold a set of N
random sample points from the subset [a1, b1] × [a2, b2] × · · · × [am, bm] ⊂ R^m:

X1 = [ x1^(1)  x2^(1)  · · ·  xm^(1)
       x1^(2)  x2^(2)  · · ·  xm^(2)
       · · ·
       x1^(N)  x2^(N)  · · ·  xm^(N) ].
The remaining variables xm+1, xm+2, · · · , xn are fixed to two given points A and
B in the subset [am+1, bm+1] × [am+2, bm+2] × · · · × [an, bn] ⊂ R^{n−m}, i.e.,
xA = (xm+1^(A), xm+2^(A), · · · , xn^(A)) and xB = (xm+1^(B), xm+2^(B), · · · , xn^(B)).
Let the matrix

X2^(A) = 1 xA = [ xm+1^(A)  xm+2^(A)  · · ·  xn^(A)
                  xm+1^(A)  xm+2^(A)  · · ·  xn^(A)
                  · · ·
                  xm+1^(A)  xm+2^(A)  · · ·  xn^(A) ]

(N identical rows, 1 being the N × 1 column of ones), and X2^(B) = 1 xB.
Let the extended matrices be XA = [X1  X2^(A)] and XB = [X1  X2^(B)].
Let f_A be the vector whose i-th element is the function value at
the i-th row of matrix XA, i.e., f_A = f(XA), and f_B is similarly defined,
f_B = f(XB).
Lemma 2. The two vectors f_A and f_B are linearly related if the function
f(x) is separable with respect to the first m variable combination (x1, x2, · · · , xm).

Proof. Since the first m variable combination (x1, x2, · · · , xm) is separable,
from Definition 1 we have f(x) = ϕ1(x1, x2, · · · , xm) ⊗ ϕ2(xm+1, xm+2, · · · , xn).
Accordingly, the vector f_A = f(XA) = ϕ1(X1) ⊗ ϕ2(X2^(A)) = ϕ1(X1) ⊗ kA,
where ⊗ is a component-wise binary operation, and kA = ϕ2(xA) is a scalar.
Similarly, the vector f_B = ϕ1(X1) ⊗ kB. As a result,

f_A = (kA/kB) · f_B    if ⊗ is times
f_A = kA − kB + f_B    if ⊗ is plus
f_A = kB − kA + f_B    if ⊗ is minus

which means the two vectors f_A and f_B are linearly related.
On the other hand, suppose the first m variables are fixed to two given points
C and D, and the remaining n − m variables are randomly sampled. A similar
proposition can be stated as follows. Let
X_1^{(C)} = \mathbf{1}\, x_C = \begin{pmatrix} x_1^{(C)} & x_2^{(C)} & \cdots & x_m^{(C)} \\ \vdots & \vdots & & \vdots \\ x_1^{(C)} & x_2^{(C)} & \cdots & x_m^{(C)} \end{pmatrix},

and X_1^{(D)} = \mathbf{1}\, x_D , where x_C = (x_1^{(C)} , x_2^{(C)} , · · · , x_m^{(C)} ) and x_D = (x_1^{(D)} , x_2^{(D)} , · · · , x_m^{(D)} ).
Let

X_2 = \begin{pmatrix} x_{m+1}^{(1)} & x_{m+2}^{(1)} & \cdots & x_n^{(1)} \\ x_{m+1}^{(2)} & x_{m+2}^{(2)} & \cdots & x_n^{(2)} \\ \vdots & \vdots & & \vdots \\ x_{m+1}^{(N)} & x_{m+2}^{(N)} & \cdots & x_n^{(N)} \end{pmatrix},

the N × n matrix X_C = [X_1^{(C)} \; X_2 ] , and X_D = [X_1^{(D)} \; X_2 ] . Let f_C be the vector whose i-th element is
the function value of the i-th row of matrix X_C , i.e., f_C = f (X_C ), and f_D
is similarly defined, f_D = f (X_D ).
Lemma 3. The two vectors f_C and f_D are linearly related if the function
f (x) is separable with respect to the first m variables (x1 , x2 , · · · , xm ).
Proof. Since the first m variables (x1 , x2 , · · · , xm ) are separable,
from Definition 1 we have f (x) = ϕ1 (x1 , x2 , · · · , xm ) ⊗ ϕ2 (x_{m+1} , x_{m+2} , · · · , x_n ).
Accordingly, the vector f_C = f (X_C ) = ϕ1 (X_1^{(C)} ) ⊗ ϕ2 (X_2 ) = k_C ⊗ ϕ2 (X_2 ),
where ⊗ is a component-wise binary operation and the scalar k_C = ϕ1 (x_C ).
Similarly, the vector f_D = k_D ⊗ ϕ2 (X_2 ). As a result,

f_C = \begin{cases} (k_C / k_D) \cdot f_D & \text{if } \otimes \text{ is times} \\ k_C - k_D + f_D & \text{if } \otimes \text{ is plus or minus} \end{cases}

which means the two vectors f_C and f_D are linearly related.
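The additive case of Lemma 3 can be checked the same way (again an invented toy target, not taken from the paper): the two vectors differ by the constant k_C − k_D, hence they are linearly related.

```python
import numpy as np

# For f(x) = phi1(x1) + phi2(x2, x3), fixing x1 at two values C and D gives
# f_C = k_C + phi2(...) and f_D = k_D + phi2(...), so f_C - f_D is constant.

def f(x1, x2, x3):
    return (x1**3) + np.sin(x2) * x3   # toy additively separable target

rng = np.random.default_rng(1)
x23 = rng.uniform(-3, 3, size=(50, 2))  # random samples of the remaining variables
C, D = 0.5, 2.0                          # two fixed values for x1

f_C = f(C, x23[:, 0], x23[:, 1])
f_D = f(D, x23[:, 0], x23[:, 1])

# the difference equals k_C - k_D = C**3 - D**3 for every sample point
print(np.allclose(f_C - f_D, C**3 - D**3))
```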
The above lemmas show that two function-value vectors must be linearly
related if the target function has the separability feature, when the separable
variables (or their complement variables) are fixed. These are necessary
conditions for identifying the separability of the target function. The necessary
and sufficient conditions are given as follows.
Theorem 1. The function f (x) is separable with respect to the first m variables (x1 , x2 , · · · , xm ) if and only if both of the following statements are true:
(1) Any two function-value vectors with fixed (x1 , x2 , · · · , xm ) are linearly related;
(2) Any two function-value vectors with fixed (x_{m+1} , x_{m+2} , · · · , x_n ) are linearly related.
Proof. From Lemma 2 and Lemma 3, the necessity of the two conditions follows.
Sufficiency can be proved by contradiction. Suppose the separable form f (x) = ϕ1 (x1 , x2 , · · · , xm ) ⊗
ϕ2 (x_{m+1} , x_{m+2} , · · · , x_n ) cannot be derived from the above two conditions.
Then there is at least one non-separable variable present in both sub-functions, ϕ1 and ϕ2 . Without loss of generality, we assume xm to be this
non-separable variable. That is,

f (x) = ϕ1 (x1 , x2 , · · · , xm ) ⊗ ϕ2 (xm , x_{m+1} , x_{m+2} , · · · , xn ) .    (7)
Similarly, the sampling for the first correlation test can be given as

X_1 = \begin{pmatrix} x_1^{(1)} & x_2^{(1)} & \cdots & x_m^{(1)} \\ x_1^{(2)} & x_2^{(2)} & \cdots & x_m^{(2)} \\ \vdots & \vdots & & \vdots \\ x_1^{(N)} & x_2^{(N)} & \cdots & x_m^{(N)} \end{pmatrix},    (8)
\tilde{X}_2^{(A)} = \begin{pmatrix} x_m^{(1)} & x_{m+1}^{(A)} & x_{m+2}^{(A)} & \cdots & x_n^{(A)} \\ x_m^{(2)} & x_{m+1}^{(A)} & x_{m+2}^{(A)} & \cdots & x_n^{(A)} \\ \vdots & \vdots & \vdots & & \vdots \\ x_m^{(N)} & x_{m+1}^{(A)} & x_{m+2}^{(A)} & \cdots & x_n^{(A)} \end{pmatrix},    (9)
and

\tilde{X}_2^{(B)} = \begin{pmatrix} x_m^{(1)} & x_{m+1}^{(B)} & x_{m+2}^{(B)} & \cdots & x_n^{(B)} \\ x_m^{(2)} & x_{m+1}^{(B)} & x_{m+2}^{(B)} & \cdots & x_n^{(B)} \\ \vdots & \vdots & \vdots & & \vdots \\ x_m^{(N)} & x_{m+1}^{(B)} & x_{m+2}^{(B)} & \cdots & x_n^{(B)} \end{pmatrix}.    (10)

Let the extended matrix X'_A = [X_1 \; \tilde{X}_2^{(A)}] , and X'_B = [X_1 \; \tilde{X}_2^{(B)}] .
Thus,

f'_A = f (X'_A ) = ϕ1 (X_1 ) ⊗ ϕ2 (\tilde{X}_2^{(A)} ) = ϕ1 (X_1 ) ⊗ α,    (11)

and

f'_B = f (X'_B ) = ϕ1 (X_1 ) ⊗ ϕ2 (\tilde{X}_2^{(B)} ) = ϕ1 (X_1 ) ⊗ β,    (12)

where α and β are the function-value vectors of ϕ2 (\tilde{X}_2^{(A)} ) and ϕ2 (\tilde{X}_2^{(B)} ), respectively. As a result,

f'_A = \begin{cases} γ \cdot f'_B & \text{if } \otimes \text{ is times} \\ α − β + f'_B & \text{if } \otimes \text{ is plus} \\ β − α + f'_B & \text{if } \otimes \text{ is minus} \end{cases}    (13)
where γ = (α1 /β1 , α2 /β2 , · · · , αN /βN ). From the lemmas we know that two
vectors f'_A and f'_B are linearly related if they satisfy f'_A = k1 · f'_B + k2 ,
where k1 and k2 are constant scalars, k1 ≠ 0. But from the above
discussion, the components of all three vectors γ, α − β and β − α are
not constant, due to the randomness of the sample points x_m^{(1)} , x_m^{(2)} , · · · , x_m^{(N)} .
This contradicts the supposition that the two vectors f'_A and f'_B are linearly
related, and so equation (7) cannot hold.
4.3. Notes on BiCT
The proposed technique is called bi-correlation test (BiCT) since two
complementary correlation tests are simultaneously carried out to determine
whether a variable or a variable-combination is separable.
The above process is illustrated with two sub-functions, and it can be
extended to determine the separability of a function with more sub-functions.
However, if the binary operators ⊗1 , ⊗2 , · · · , ⊗m in equation (6) are mutually
different, with times mixed with plus or minus, the extension might be
difficult. This issue is left for future work. Hereafter, for simplicity, we assume
that the binary operators in equation (6) are all the same, i.e., ⊗i = times or
⊗i = plus or minus for all i = 1, 2, · · · , m. In this case, the
extension is straightforward and omitted here.
To enhance the stability and efficiency of the algorithm, the distribution
of sample points should be as uniform as possible. Therefore, controlled sampling methods such as Latin hypercube sampling (Beachkofski and Grandhi,
2002) and orthogonal sampling (Steinberg and Lin, 2006) are preferred for
sample generation.
For the correlation test, any standard correlation method can be used:
Pearson's r, Spearman's rank-order correlation, and Kendall's τ
are all effective for BiCT.
Take the function f (x) = 0.8 + 0.6 ∗ (x1² + cos(x1 )) ∗ sin(x2 + x3 ) ∗ (x2 −
x3 ), x ∈ [−3, 3]³ , as an example. The first sample set consists of 13 uniformly distributed points in [−3, 3], and the second sample set consists of
169 uniformly distributed points in [−3, 3]² . The correlation tests are
illustrated in Fig. 2. As can be seen, the function-value vectors f_A
and f_B are linearly related (Fig. 2(b)), where the variables x2 and x3 are
fixed while considering the first variable x1 (Fig. 2(a)). Similarly, to find
out the separability of the variable combination (x2 , x3 ), the first variable x1 is
fixed (Fig. 2(c)). The corresponding function-value vectors f_C and f_D are
linearly related (Fig. 2(d)).
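The full BiCT check on this example can be sketched as follows (an illustrative implementation, not the authors' code; the fixed points, sample sizes, and the tolerance are arbitrary choices of ours). A variable block is judged separable when both complementary tests report |Pearson r| ≈ 1; Pearson's r also tolerates the affine offset introduced by the constant 0.8:

```python
import numpy as np

# BiCT sketch for f(x) = 0.8 + 0.6*(x1^2 + cos x1) * sin(x2 + x3) * (x2 - x3)

def f(x1, x2, x3):
    return 0.8 + 0.6 * (x1**2 + np.cos(x1)) * np.sin(x2 + x3) * (x2 - x3)

def linearly_related(u, v, tol=1e-8):
    # linear relation u = k1*v + k2 (k1 != 0)  <=>  |Pearson r| = 1
    return abs(np.corrcoef(u, v)[0, 1]) > 1 - tol

rng = np.random.default_rng(2)

# test 1: vary x1, fix (x2, x3) at two points
x1 = rng.uniform(-3, 3, 100)
t1 = linearly_related(f(x1, 1.0, 0.5), f(x1, 2.0, 0.5))

# test 2: vary (x2, x3), fix x1 at two values
x2, x3 = rng.uniform(-3, 3, (2, 100))
t2 = linearly_related(f(0.5, x2, x3), f(2.0, x2, x3))

print(t1 and t2)   # True: x1 is separable from (x2, x3)
```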
The D&C method with the BiCT technique has been described for functions with
explicit expressions, while in practical applications no explicit expression is
available. In this case, some modifications are needed to adapt the D&C method.
In fact, for data-driven modeling problems, a surrogate model of black-box
type can be established in advance as the underlying target function (Forrester et al.,
2008). The remaining steps are then the same as discussed above.
5. Numerical results
5.1. Analysis on Computing time
The computing time (t) of a genetic programming run with the proposed
divide and conquer (D&C) method consists of three parts:

t = t1 + t2 + t3    (14)

where t1 is for the separability detection, t2 for sub-function determination,
and t3 for function recovery. Note that both the separability detection and
[Figure 2 appears here; only the caption and panel labels are recoverable: (a) (x2 , x3 ) are fixed; (b) f_A vs f_B ; (c) x1 is fixed; (d) f_C vs f_D .]
Figure 2: Demo of separability detection process of BiCT
Table 1: A mapping table for parse-matrix evolution

a·1   T     |  a·2 , a·3   expr  |  a·4   f → fk
-5    √·    |  -5          λ2    |  -1    skip
-4    ln    |  -4          λ1    |   0    f1
-3    cos   |  -3          f1    |   1    f2
-2    /     |  -2          f2    |
-1    −     |  -1          f1    |
 0    skip  |   0          1.0   |
 1    +     |   1          x1    |
 2    *     |   2          x2    |
 3    sin   |  ···         ···   |
 4    exp   |   d          xd    |
 5    (·)²  |                    |
function recovery processes are double-precision operations and thus cost much
less time than the sub-function determination process. That is, t ≈ t2 .
It is obvious that the CPU time for determining all sub-functions (t2 )
is much less than that of determining the target function directly (td ). Next,
a typical genetic programming method, parse-matrix evolution (PME), is taken as
the optimization driver (other GP algorithms should also work) to show the
performance of D&C.
Suppose that the dimension of the target function is d, the height of
the parse-matrix is h, and the mapping table is as in Table 1. Then the parse-matrix entries satisfy a·1 ∈ {−5, −4, · · · , 4, 5}, a·j ∈ {−5, −4, · · · , d} (j = 2, 3), and
a·4 ∈ {−1, 0, 1}. Thus the parse-matrix (aij )h×4 has as many as (11 ∗ (6 + d) ∗
(6 + d) ∗ 3)^h possible combinations, and the CPU time of determining the
target function directly satisfies

td ∼ (11 ∗ (6 + d) ∗ (6 + d) ∗ 3)^h .    (15)
This means that the search time for determining a target function will
increase exponentially with model complexity. Using D&C, only the sub-functions need to be determined, and each sub-function has a lower dimension d and lower complexity h. Therefore, it will cost much less CPU time.
In fact, with D&C-powered GP, the CPU time will increase only linearly with
the dimension, provided that the target function is completely separable.
Take equation (2) in Section 1 as an example. To search directly without D&C,
the control parameters of PME should be set as follows: d = 5,
h ≥ 9. From equation (15), the order of the required CPU time is td = O(2.58 · 10³²).
Using D&C-powered PME, the required CPU time will be much less.
In fact, after the separability detection, the function is divided into four sub-functions as follows:

qs = 1.83 × 10⁻⁴ ∗ v³ ∗ √ρ ∗ (1/√R) ∗ (1 − hw /hs ).

For the sub-functions v³ , 1/√R, and 1 − hw /hs , the control parameters of
PME should be set as d = 1, h ≥ 2. For the sub-function √ρ, d = 1, h ≥ 1.
As a result, t2 ≈ 4 ∗ O(2.61 · 10⁶ ) = O(10⁷ ) by equation (15), which means the
D&C method can reduce the computational effort by orders of magnitude.
5.2. Program timing
Next, two illustrative examples are presented to show how much time the
D&C technique can save in practical applications.
Again, equation (1) and equation (2) are set as the target functions, respectively. For equation (1), the sample set consists of 100 observations uniformly distributed in [1, 10] degrees and [1000, 10000] (i.e., θ = 1:10; Rex =
1000:1000:10000). The angle θ is fixed to 5 degrees while detecting the sub-function f1 (Rex ), and the Reynolds number Rex is fixed to 5000 while detecting
the sub-function f2 (θ).
For equation (2), the sample set consists of 30000 observations uniformly
distributed in a box in R⁵ (i.e., v = 500 : 100 : 1000; ρ = 0.0001 : 0.0001 :
0.001; R = 0.01 : 0.01 : 0.1; hw = 10000 : 10000 : 50000; hs = 100000 :
100000 : 1000000). The free-stream velocity v, air density ρ, nose radius R,
wall enthalpy hw , and total enthalpy hs are fixed to 800 m/s, 0.0005
kg/m³ , 0.05 m, 20000 J/kg, and 200000 J/kg, respectively, while detecting
the sub-functions.
In both tests, the program stops when the current model is believed good
enough: 1 − R² < 1.0 · 10⁻¹⁰ , where R² = 1 − SSE/SST can be regarded as the
fraction of the total sum of squares that is explained by the model. To
suppress the effect of the randomness of PME, 10 runs are carried out for each
target function, and the averaged CPU time on a PC using a single CPU
core (Intel(R) Core(TM) i7-4790 CPU @ 3.60 GHz) is recorded to show its
performance. The test results (see Table 2 and Table 3) show that the D&C
technique can save CPU time remarkably. For equation (1), PME needs
about 12 minutes and 26 seconds to get an alternative of the target function
without the D&C technique, while the D&C-powered PME needs only about 11.2
seconds, which is much faster than the original algorithm. A similar conclusion
can also be drawn from the test results of equation (2) (see Table 3). Note that
the total time of D&C-powered PME includes t1 , t2 and t3 (see Equ. (14)),
and t1 + t3 ≈ 0.2 s for Equ. (1) and 0.3 s for Equ. (2).
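The stopping criterion above is simple enough to state in a few lines of code (a sketch of the quoted formula, with made-up data for illustration):

```python
import numpy as np

# Stopping criterion: stop when 1 - R^2 < 1e-10, with R^2 = 1 - SSE/SST.

def r_squared(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    sse = np.sum((y_true - np.asarray(y_pred, dtype=float)) ** 2)
    sst = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - sse / sst

y = np.array([1.0, 2.0, 3.0, 4.0])
good_enough = 1.0 - r_squared(y, y) < 1e-10   # an exact model gives R^2 = 1
print(good_enough)
```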
Table 2: Performance of PME on detecting Equ. (1) (with and without D&C)

D&C-powered PME:
  f1 (Rex )    CPU time 3 s        Result: St = 0.1978/√Rex
  f2 (θ)       CPU time 8 s*       Result: St = 0.03215 ∗ θ − 0.01319 ∗ θ³
  Total time 11.2 s
PME without D&C:
  f (Rex , θ)  CPU time 12 m 26 s* Result: St = 2.274 ∗ θ ∗ cos(0.9116 ∗ θ)/√Rex
  Total time 746 s

* PME failed to get the exact result, but always resulted in an alternative
function with fitting error of zero in double precision (i.e., 1 − R² = 0.0).
Table 3: Performance of PME on detecting target Equ. (2) (with and without D&C)

D&C-powered PME:
  f1 (v)        CPU time 3 s        Result: qs = 1.647 · 10⁻⁵ ∗ v³
  f2 (ρ)        CPU time 2 s        Result: qs = 3.77 · 10⁵ ∗ √ρ
  f3 (R)        CPU time 4 s        Result: qs = 6.49 · 10³ ∗ 0.08442/R
  f4 (hw , hs ) CPU time 9 s        Result: qs = 9370 − 9370 ∗ hw /hs
  Total time 18.3 s
PME without D&C:
  f (v, ρ, R, hw , hs )  CPU time 85 m 43 s   Result: qs = 0.000183 ∗ v³ ∗ √(ρ/R) ∗ (1 − hw /hs )
  Total time 5143 s
6. Conclusion
The divide and conquer (D&C) method for symbolic regression has been
presented. The main idea is to make use of the separability feature of the
underlying target function to simplify the search process. In D&C, the target
function is divided into a number of sub-functions based on the information
from the separability detection, and the sub-functions are then determined by
a genetic programming (GP) algorithm.
The most important and fundamental step in D&C is to identify the
separability feature of the concerned system. To fulfill this task, a special
algorithm, the bi-correlation test (BiCT), is also provided for separability detection in this paper.
The study shows that D&C can accelerate the convergence speed of GP
by orders of magnitude without loss of generality, provided that the target
function has the feature of separability, which is usually the case in practical
engineering applications.
References
Anderson, J., 2006. Hypersonic and High-Temperature Gas Dynamics (2nd
ed.). American Institute of Aeronautics and Astronautics, Inc., Virginia.
Beachkofski, B., Grandhi, R., April 2002. Improved distributed hypercube
sampling. In: 43rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural
Dynamics, and Materials Conference. AIAA paper no. 2002-1274.
Denver, Colorado.
URL https://doi.org/10.2514/6.2002-1274
Berenguel, L., Casado, L. G., García, I., Hendrix, E. M. T., Messine, F.,
2013. On interval branch-and-bound for additively separable functions with
common variables. Journal of Global Optimization 56 (3), 1101–1121.
Chen, C., Luo, C., Jiang, Z., 2017. Elite bases regression: A real-time algorithm for symbolic regression.
URL https://arxiv.org/abs/1704.07313
16
d’Avezac, M., Botts, R., Mohlenkamp, M. J., Zunger, A., 2011. Learning
to predict physical properties using sums of separable functions. SIAM
Journal on Scientific Computing 33 (6), 3381–3401.
Forrester, A., Sobester, A., Keane, A., 2008. Engineering design via surrogate
modelling: a practical guide. John Wiley & Sons, Hoboken, New Jersey.
Koza, J. R., 2008. Genetic programming: on the programming of computers
by means of natural selection. MIT Press, Cambridge, MA.
Luo, C., Zhang, S.-L., 2012. Parse-matrix evolution for symbolic regression.
Engineering Applications of Artificial Intelligence 25 (6), 1182–1193.
McConaghy, T., 2011. FFX: Fast, Scalable, Deterministic Symbolic Regression Technology. Springer New York, New York, NY, pp. 235–260.
O’Neill, M., Ryan, C., Aug. 2001. Grammatical evolution. IEEE Trans. Evol.
Comp 5 (4), 349–358.
URL http://dx.doi.org/10.1109/4235.942529
Schmidt, M., Lipson, H., 2009. Distilling free-form natural laws from experimental data. Science 324 (5923), 81–85.
Steinberg, D. M., Lin, D. K. J., 2006. A construction method for orthogonal
latin hypercube designs. Biometrika 93 (2), 279–288.
Leverage Financial News to Predict Stock Price Movements
Using Word Embeddings and Deep Neural Networks
Yangtuo Peng and Hui Jiang
Department of Electrical Engineering and Computer Science
York University, 4700 Keele Street, Toronto, Ontario, M3J 1P3, Canada
emails: [email protected], [email protected]
arXiv:1506.07220v1 [] 24 Jun 2015
Abstract
Financial news contains useful information on public companies and the market.
In this paper we apply the popular word
embedding methods and deep neural networks to leverage financial news to predict stock price movements in the market.
Experimental results have shown that our
proposed methods are simple but very effective, and can significantly improve
the stock prediction accuracy on a standard financial database over a baseline
system using only the historical price information.
1 Introduction
In the past few years, deep neural networks
(DNNs) have achieved huge successes in many
data modeling and prediction tasks, ranging from
speech recognition and computer vision to natural
language processing. In this paper, we are interested in applying these powerful deep learning methods to financial data modeling to predict stock
price movements.
Traditionally, neural networks have been used to
model stock prices as time series for forecasting purposes, such as in (Kaastra and Boyd, 1991;
Adya and Collopy, 1991; Chan et al., 2000; Skabar and Cloete, 2002; Zhu et al., 2008). In this
earlier work, due to the limited training data and
computing power available back then, normally
shallow neural networks were used to model various types of features extracted from stock price
data sets, such as historical prices, trading volumes, etc., in order to predict future stock yields
and market returns. More recently, in the community of natural language processing, many methods have been proposed to explore additional information (mainly online text data) for stock forecasting, such as financial news (Xie et al., 2013;
Ding et al., 2014), Twitter sentiments (Si et al.,
2013; Si et al., 2014), and microblogs (Bar-Haim et al.,
2011). For example, (Xie et al., 2013) propose to
use semantic frame parsers to generalize from sentences to scenarios to detect the (positive or negative) roles of specific companies, where support
vector machines with tree kernels are used as predictive models. On the other hand, (Ding et al.,
2014) propose to use various lexical and syntactic constraints to extract event features for stock
forecasting, where they investigate both linear classifiers and deep neural networks as predictive models.
In this paper, we propose to use the recent word
embedding methods (Mikolov et al., 2013b) to select features from on-line financial news corpora,
and employ deep neural networks (DNNs) to predict the future stock movements based on the extracted features. Experimental results have shown
that the features derived from financial news are
very useful and they can significantly improve the
prediction accuracy over the baseline system that
only relies on the historical price information.
2 Our Approach
In this paper, we use deep neural networks (DNNs)
as our predictive model, which takes as input the
features extracted from both historical price information and on-line financial news to predict the
stock movements in the future (either up or down).
2.1 Deep Neural Networks
The structure of the DNNs used in this paper is a conventional multi-layer perceptron with many hidden layers: an L-layer DNN consists of L − 1
hidden nonlinear layers and one output layer. The
output layer is used to model the posterior probability of each output target. In this paper, we use
the rectified linear activation function, i.e., f (x) =
max(0, x), to compute the outputs from the activations in each hidden layer, which are in turn fed to the
next layer as inputs. For the output layer, we use
the softmax function to compute posterior probabilities over two nodes, standing for stock-up
and stock-down.
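This architecture can be sketched as a plain numpy forward pass (illustrative only: the weights here are random rather than trained, and the layer sizes are our own placeholders, not the configuration tuned in the experiments):

```python
import numpy as np

# Multi-layer perceptron with ReLU hidden layers and a 2-way softmax output.

rng = np.random.default_rng(0)
sizes = [12, 1024, 1024, 2]          # input, two hidden layers, output
params = [(rng.standard_normal((m, n)) * 0.01, np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x, params):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(0.0, x)   # ReLU: f(x) = max(0, x)
    e = np.exp(x - x.max())          # numerically stable softmax
    return e / e.sum()               # posteriors for stock-up / stock-down

probs = forward(rng.standard_normal(12), params)
print(probs.sum())                   # posteriors sum to 1
```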
2.2 Features from historical price data
In this paper, for each target stock on a target date,
we choose the previous five days’ closing prices
and concatenate them to form an input feature vector for DNNs: P = (pt−5 , pt−4 , pt−3 , pt−2 , pt−1 ),
where t denotes the target date and pm denotes the
closing price on date m. We then normalize all
prices by the mean and variance calculated from
all closing prices of this stock in the training set.
In addition, we also compute first- and second-order
differences among the five days’ closing prices,
which are appended as extra feature vectors.
For example, we compute the first-order difference as follows: ∆P = (pt−4 , pt−3 , pt−2 , pt−1 )
−(pt−5 , pt−4 , pt−3 , pt−2 ). In the same way, the
second-order difference is calculated by taking the
difference between two adjacent values in ∆P .
Finally, for each target stock on a particular
date, the feature vector representing the historical
price information consists of P , ∆P and ∆∆P .
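The price-feature construction above can be sketched as follows (normalization by the training-set mean and variance is omitted for brevity; the sample prices are invented):

```python
import numpy as np

# Build the 12-dimensional price feature: P (5 values), first differences
# dP (4 values), and second differences ddP (3 values), concatenated.

def price_features(closing_prices):
    P = np.asarray(closing_prices, dtype=float)   # (p_{t-5}, ..., p_{t-1})
    assert P.shape == (5,)
    dP = np.diff(P)        # first-order differences
    ddP = np.diff(dP)      # second-order differences
    return np.concatenate([P, dP, ddP])

feat = price_features([10.0, 10.5, 10.2, 10.8, 11.0])
print(feat.shape)   # (12,)
```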
2.3 Financial news features
In order to extract fixed-size features suitable for
DNNs from financial news corpora, we need to
pre-process the text data. For all financial articles,
we first split them into sentences. We only keep
those sentences that mention at least one stock
name or one public company. Each sentence is
labelled by the publication date of the original article and the mentioned stock name. It is possible that multiple stocks are mentioned in one sentence; in this case, the sentence is labelled several
times, once for each mentioned stock. We then group
these sentences by publication date and
underlying stock name to form the samples. Each
sample contains a list of sentences that were published on the same date and mentioned the same
stock or company. Moreover, each sample is labelled as positive (“price-up”) or negative (“price-down”) based on the next day’s closing price consulted from the CRSP financial database (Booth,
2012). In the following, we introduce our method
to extract three types of features from each sample.
(1) Bag of keywords (BoK): We first select the
keywords based on the recent word embedding
methods in (Mikolov et al., 2013a; Mikolov et
al., 2013b). Using the popular word2vec method
from Google¹, we first compute the vector representations for all words occurring in the training set. Secondly, we manually select a small set
of seed words, namely the nine words {surge, rise,
shrink, jump, drop, fall, plunge, gain, slump},
which are believed to be strong indicators of stock price movements. Next, these
seed words are used to search for other useful keywords based on the cosine distances calculated between the word vector of each seed word and that
of other words occurring in the training set. For
example, based on the pre-calculated word vectors, we have found other words, such as rebound,
decline, tumble, slowdown, climb, which are very
close to at least one of the seed words. In this way,
we have searched all words occurring in training
set and kept the top 1,000 words (including the
nine seed words) as the keywords for our prediction task. Finally, a 1000-dimension feature vector, called bag-of-keywords or BoK, is generated
for each sample. Each dimension of the BoK vector is the TFIDF score computed for each selected
keyword from the whole training corpus.
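The seed-word expansion step can be illustrated with a few lines of code (a sketch: the tiny hand-made vectors below stand in for real word2vec embeddings, and the ranking rule — best cosine similarity to any seed — is our reading of the description above):

```python
import numpy as np

# Rank candidate words by their best cosine similarity to any seed word
# and keep the top k as additional keywords.

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

vectors = {                       # toy stand-ins for word2vec vectors
    "surge":  np.array([0.9, 0.1]),
    "drop":   np.array([-0.8, 0.2]),
    "climb":  np.array([0.85, 0.15]),
    "tumble": np.array([-0.7, 0.3]),
    "banana": np.array([0.0, 1.0]),
}
seeds = ["surge", "drop"]

def expand(seeds, vectors, k):
    candidates = [w for w in vectors if w not in seeds]
    score = lambda w: max(cosine(vectors[w], vectors[s]) for s in seeds)
    return sorted(candidates, key=score, reverse=True)[:k]

print(expand(seeds, vectors, k=2))   # ['climb', 'tumble']
```

In the paper the same ranking is done over all words in the training set, keeping the top 1,000 (seed words included).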
(2) Polarity score (PS): We further compute
so-called polarity scores (Turney and Littman,
2003; Turney and Pantel, 2010) to measure how
each keyword is related to stock movements and
how each keyword applies to a target stock in
each sentence. To do this, we first compute the
point-wise mutual information for each keyword w:
PMI(w, pos) = log [ freq(w, pos) × N / (freq(w) × freq(pos)) ], where
freq(w, pos) denotes the frequency of the keyword w occurring in all positive samples, N denotes the total number of samples in the training set, freq(w) denotes the total number of keyword w occurring in the whole training set and
freq(pos) denotes the total number of positive
samples in the training set. Furthermore, we calculate the polarity score for each keyword w as:
PS(w) = PMI(w, pos) − PMI(w, neg). Obviously, the above polarity score PS(w) measures
how (either positively or negatively) each keyword
is related to stock movements and by how much.
Next, for each sentence in all samples, we need
to detect how each keyword is related to the mentioned stock. To do this, we use the Stanford
parser (Marneffe et al., 2006) to detect whether the
target stock is a subject of the keyword or not. If
the target stock is not the subject of the keyword
in the sentence, we assume the keyword is oppositely related to the underlying stock. As a result,
(¹ https://code.google.com/p/word2vec/)
we need to flip the sign of the polarity score. Otherwise, if the target stock is the subject of the keyword, we keep the keyword’s polarity score as it is.
For example, in a sentence like “Apple slipped behind Samsung and Microsoft in a 2013 customer
experience survey from Forrester Research”, we
first identify the keyword slipped, based on the
parsing result, we know Apple is the subject while
Samsung and Microsoft are not. Therefore, if this
sentence is used as a sample for Apple, the above
polarity score of “slipped” is directly used. However, if this sentence is used as a sample for Samsung or Microsoft, the polarity score of “slipped”
is flipped by multiplying −1.
Finally, the resultant polarity scores are multiplied by the TFIDF scores to generate another
1000-dimension feature vector for each sample.
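The polarity score PS(w) = PMI(w, pos) − PMI(w, neg) is straightforward to compute from counts (a sketch with invented counts; the negative-class PMI is defined symmetrically to the positive one given above):

```python
import math

# PS(w) = PMI(w, pos) - PMI(w, neg) from sample counts.

def pmi(freq_w_class, freq_w, freq_class, n_samples):
    return math.log(freq_w_class * n_samples / (freq_w * freq_class))

def polarity(freq_w_pos, freq_w_neg, n_pos, n_neg):
    freq_w = freq_w_pos + freq_w_neg
    n = n_pos + n_neg
    return (pmi(freq_w_pos, freq_w, n_pos, n)
            - pmi(freq_w_neg, freq_w, n_neg, n))

# e.g. "surge" appearing in 80 positive and 20 negative samples
# of a balanced 1000-sample corpus
ps = polarity(freq_w_pos=80, freq_w_neg=20, n_pos=500, n_neg=500)
print(ps > 0)   # positively related to price-up
```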
(3) Category tag (CT): We further define a list
of categories that may indicate a specific event or
activity of a public company, which we call category tags. In this paper, the defined category
tags include: new-product, acquisition, price-rise, price-drop, law-suit, fiscal-report, investment, bankrupt, government, analyst-highlights.
Each category is first manually assigned with a
few words that are closely related to the category.
For example, we have chosen released, publish,
presented, unveil as a list of seed words for the category new-product, which indicates the company’s
announcement of new products. Similarly, we use
the above word embedding model to automatically
expand the above word list by searching for more
words that have closer cosine distances with the
selected seed words. In this paper, we choose the
top 100 words to assign to each category.
After we have collected all keywords for
each category, for each sample we count the
total number of occurrences of all words under each category, and then take the logarithm to obtain a feature vector V =
(log N1 , log N2 , log N3 , ..., log Nc ), where Nc denotes the total number of times the words in category c appear in the sample.
2.4 Predicting Unseen Stocks via Correlation Graph
There are a large number of stocks trading in the
market. However, we normally can only find a
fraction of them mentioned in daily financial news.
Hence, for each date, the above method can only
Figure 1: Illustration of a part of correlation graph
predict those stocks mentioned in the news. In this
section, we propose a new method to extend the predictions to more stocks that may not be directly mentioned in the financial news. Here we propose to
use a stock correlation graph, shown in Figure 1, to
predict those unseen stocks. The stock correlation
graph is an undirected graph, where each node represents a stock and the arc between two nodes represents the correlation between these two stocks.
For example, if some stocks in the graph are mentioned in the news on a particular day, we first
use the above method to predict these mentioned
stocks. Afterwards, the predictions are propagated
along the arcs in the graph to generate predictions
for those unseen stocks.
(1) Build the graph: We choose the top 5,000
stocks from the CRSP database (Booth, 2012) to
construct the correlation graph. At each time, any
two stocks in the collection are selected to align
their closing prices based on the related dates (between 2006/01/01 - 2012/12/31). Then we calculate the correlation coefficient between the closing
prices of these two stocks. The computed correlation coefficient (between −1 and 1) is attached to
the arc connecting these two stocks in the graph,
indicating their price correlation. The correlation
coefficients are calculated for every pair of stocks
from the collection of 5,000 stocks. In this paper
we only keep the arcs with an absolute correlation
value greater than 0.8; all other edges are considered unreliable and are pruned from the graph. A
tiny fraction of the resulting graph is shown in Figure 1.
(2) Predict unseen stocks: In order to predict
price movements of unseen stocks, we first take
the prediction results of those mentioned stocks
from the DNN outputs, by which we construct a
5000-dimension vector x. Each dimension of x
corresponds to one stock and we set zeros for all
unseen stocks. The above graph propagation process can be represented mathematically as a matrix multiplication: x′ = Ax, where A is a symmetric matrix denoting all correlation weights in
the graph. Of course, the graph propagation, i.e.,
the matrix multiplication, may be repeated several
times until the prediction x′ converges.
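The propagation step can be sketched as follows (a toy 4-stock graph invented for illustration; the real A is 5000 × 5000 with |correlation| > 0.8 on the kept arcs, and the 0.5 pruning threshold below is an arbitrary choice):

```python
import numpy as np

# One step of the correlation-graph propagation x' = A x, with thresholding.

A = np.array([
    [0.0,  0.9,  0.0,  0.0],   # stock 0 strongly co-moves with stock 1
    [0.9,  0.0, -0.85, 0.0],   # stock 1 anti-correlates with stock 2
    [0.0, -0.85, 0.0,  0.0],
    [0.0,  0.0,  0.0,  0.0],   # stock 3 is disconnected
])

x = np.array([1.0, 0.0, 0.0, 0.0])   # DNN predicts "up" for stock 0 only

x_next = A @ x                        # one propagation step
confident = np.abs(x_next) > 0.5      # prune low-confidence predictions
print(x_next, confident)
```

Repeating `x_next = A @ x_next` spreads the prediction further along the graph, as described above.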
3 Dataset
The financial news data used in this paper are
provided by (Ding et al., 2014), which contain
106,521 articles from Reuters and 447,145 from
Bloomberg. The news articles were published in
the period from October 2006 to December
2013. The historical stock security data are obtained from the Centre for Research in Security
Prices (CRSP) database (Booth, 2012). We only
use the security data from 2006 to 2013 to match
the time period of the financial news. Based on
the samples’ publication dates, we split the dataset
into three sets: a training set (all samples between 2006-10-01 and 2012-12-31), a validation
set (2013-01-01 to 2013-06-15) and a test set
(2013-06-16 to 2013-12-31). The training set contains 65,646 samples, the validation set 10,941
samples, and the test set 9,911 samples.
4 Experiments

4.1 Stock Prediction using DNNs
In the first set of experiments, we use DNNs to
predict each stock’s price movement based on a variety of features, i.e., producing a binary prediction of the price movement on the next day (either
price-up or price-down). Here we have trained a
set of DNNs using different combinations of feature vectors and found that a DNN structure of
4 hidden layers (with 1024 hidden nodes in each
layer) yields the best performance on the validation set. We use the historical price features alone
to create the baseline, and the various features derived
from the financial news are added on top of it. We
measure the final performance by calculating the
error rate on the test set. As shown in Table 1,
the features derived from financial news can significantly improve the prediction accuracy, and we
have obtained the best performance (an error rate
of 43.13%) by using all the features discussed in
Sections 2.2 and 2.3.
Table 1: Stock prediction error rates on the test set.

feature combination        error rate
price                      48.12%
price + BoK                46.02%
price + BoK + PS           43.96%
price + BoK + CT           45.86%
price + PS                 45.00%
price + CT                 46.10%
price + PS + CT            46.03%
price + BoK + PS + CT      43.13%
Figure 2: Predict unseen stocks via correlation
4.2 Predict Unseen Stocks via Correlation
Here we group all outputs from the DNNs based on
the dates of all samples in the test set. For each
date, we create a vector x based on the DNN prediction results for all observed stocks and zeros
for all unseen stocks, as described in Section 2.4.
Then the vector is propagated through the correlation graph to generate another set of stock movement predictions. We may apply a threshold on the
propagated vector to prune all low-confidence predictions; the remaining ones may be used to predict some stocks unseen in the test set. The predictions for all unseen stocks are compared with the
actual stock movements on the next day. Experimental
results are shown in Figure 2, where the left y-axis denotes the prediction accuracy and the right
y-axis denotes the percentage of stocks predicted
out of all 5000 per day under each pruning threshold. For example, using a large threshold (0.9), we
may predict with an accuracy of 52.44% on 354
extra unseen stocks per day, in addition to predicting only 110 stocks per day on the test set.
5 Conclusion
In this paper, we have proposed a simple method
to leverage financial news to predict stock movements based on the popular word embedding and
deep learning techniques. Our experiments have
shown that financial news is very useful in
stock prediction and that the proposed methods can
significantly improve the prediction accuracy on
a standard financial data set.
Acknowledgments
This work was supported in part by an NSERC Engage grant from the Canadian federal government.
PICARD GROUPS FOR TROPICAL TORIC VARIETIES
arXiv:1709.03130v1 [math.AG] 10 Sep 2017
JAIUNG JUN, KALINA MINCHEVA, AND JEFFREY TOLLIVER
A BSTRACT. From any monoid scheme X (also known as an F1 -scheme) one can pass to a semiring
scheme (a generalization of a tropical scheme) XS by scalar extension to an idempotent semifield S.
In this paper, we investigate the relationship between the Picard groups of X and XS . We prove that
for a given irreducible monoid scheme X (with some mild conditions) and an idempotent semifield S,
the Picard group Pic(X) of X is stable under scalar extension to S. In other words, we show that the
two groups Pic(X) and Pic(XS ) are isomorphic. Moreover, each of these groups can be computed by
considering the correct sheaf cohomology groups. We also construct the group CaCl(XS ) of Cartier divisors modulo principal Cartier divisors for a cancellative semiring scheme XS and prove that CaCl(XS )
is isomorphic to Pic(XS ).
1.
Introduction
In recent years, there has been a growing interest in developing a notion of algebraic geometry
over more general algebraic structures than commutative rings or fields. The search for such a theory
is interesting in its own right, however, the current work relates to two actively growing sub-fields of
that study.
The first one is motivated by the search for “absolute geometry” (commonly known as F1 -geometry
or algebraic geometry in characteristic one), first mentioned by J. Tits in [Tit56]. Tits hints at the existence of a mysterious field of “characteristic one” by observing a degenerate case of an incidence geometry Γ(K) associated to a Chevalley group G(K) over a field K: when K = Fq (a field with q elements) and q → 1, the algebraic structure of K completely degenerates, unlike the geometric structure of Γ(K), so Tits suggests that limq→1 Γ(K) should be a geometry over the field of characteristic one.
In [Man95], Y. Manin considers the field of characteristic one from a completely different perspective: as a geometric approach to the Riemann hypothesis. Shortly after, in [Sou04], C. Soulé
first introduced a notion of algebraic geometry over the field F1 with one element. Since then
A. Connes and C. Consani have worked to find a geometric framework which could allow one to
adapt the Weil proof of the Riemann hypothesis for function fields to the Riemann zeta function
(cf. [CC10a], [CC10b], [CC11], [CC14], [CC17a], [CC17b]).
The second field to which this work contributes is a new branch of algebraic geometry called tropical geometry. It studies an algebraic variety X over a valued field k by means of its “combinatorial
shadow”, called the (set-theoretic) tropicalization of X and denoted trop(X ). This is a degeneration of
the original variety to a polyhedral complex obtained from X and a valuation on k. The combinatorial
shadow retains a lot of information about the original variety and encodes some of its invariants. Algebraically, trop(X ) is described by polynomials in an idempotent semiring, which is a more general
object than a ring – a semiring satisfies the same axioms that a ring does except invertibility of addition. However, the tropical variety trop(X ) has no scheme structure. In [GG16], J. Giansiracusa and
N. Giansiracusa combine F1 -geometry and tropical geometry in an elegant way to introduce a notion
of tropical schemes. This is an enrichment to the structure of trop(X ), since a tropical variety is seen
as the set of geometric points of the corresponding tropical scheme. The tropical scheme structure
2010 Mathematics Subject Classification. 14T05(primary), 14C22(primary), 16Y60(secondary), 12K10 (secondary),
06F05 (secondary).
Key words and phrases. tropical geometry, tropical schemes, idempotent semirings, Picard groups.
is encoded in a “bend congruence” (cf. [GG16]) or equivalently in a “tropical ideal” or a tower of
valuated matroids (cf. [MR14], [MR16]).
A natural question and central motivation for this work, is whether one can find a scheme-theoretic
tropical Riemann-Roch theorem using the notions from J. Giansiracusa and N. Giansiracusa’s theory.
The problem of finding such an analogue in the context of tropical varieties, that is the product of
set-theoretic tropicalization, has already received a lot of attention. In particular it has been solved
in the case of tropical curves. We recall that a tropical curve (the set-theoretical tropicalization of a
curve) is a connected metric graph. One can define divisors, linear equivalence and genus of a graph
in a highly combinatorial way. With these notions M. Baker and S. Norine [BN07] prove a RiemannRoch theorem for finite graphs while G. Mikhalkin and I. Zharkov [MZ08] and later A. Gathmann and
M. Kerber [GK08] solve the problem for metric graphs. Later the problem is revisited by O. Amini
and L. Caporaso [AC13] in the context of weighted graphs. A generalization of the work of M. Baker
and S. Norine to higher dimensions has been carried out by D. Cartwright in [Car15a] and [Car15b].
In [CC17a] A. Connes and C. Consani prove a Riemann-Roch statement for the points of a certain
Grothendieck topos over (the image of) Spec(Z), which can be thought of as tropical elliptic curves.
An important ingredient to the project of A. Connes and C. Consani to attack the Riemann Hypothesis
is to develop an adequate version of the Riemann-Roch theorem for geometry defined over idempotent semifields. Notably, there are several fundamental differences between their statement and the
tropical Riemann-Roch of M. Baker and S. Norine.
Recently, in [FRTU16] T. Foster, D. Ranganathan, M. Talpo and M. Ulirsch investigate the logarithmic Picard group (which is a quotient of the algebraic Picard group by lifts of relations on the tropical
curve) and solve the Riemann-Roch in the context of logarithmic curves (metrized curve complexes).
The solution of a scheme-theoretic tropical Riemann-Roch problem requires several ingredients, such
as a proper framework for “tropical sheaf cohomology” and a notion of divisors on tropical schemes,
in particular, ranks of divisors. In this note, we investigate Picard groups of tropical (toric) schemes as
the first step towards building scheme-theoretic tropical divisor theory and a Riemann-Roch theorem.
To do that, we look at an F1 -model of the tropical scheme, i.e., a monoid scheme.
Monoid schemes are related to tropical (more generally semiring) schemes and usual schemes via
scalar extension. More precisely, if X is a monoid scheme, say X = Spec M for a monoid M, then
XK = Spec K[M] is a scheme if K is a field and a semiring scheme if K is a semifield. Note that XC is
a toric variety if M is integral and finitely generated and XK is a tropical scheme (the scheme theoretic
tropicalization of XC ) if K is the semifield of tropical numbers. When K is a field, the relation between
Pic(X ) and Pic(XK ) has been extensively investigated by J. Flores and C. Weibel in [FW14]. They
show that in this case Pic(X ) is isomorphic to Pic(XK ).
In this paper, our main interest lies in the tropical context, i.e., the case when K is an idempotent
semifield. We investigate the relation between the Picard group Pic(X ) of a monoid scheme X and
Pic(XK ) for an idempotent semifield K. More precisely, when XK is the lift of an F1 -model X (with
an open cover satisfying a mild finiteness condition), we prove that the Picard group is stable under
scalar extension.
Theorem A. Let X be an irreducible monoid scheme and K be an idempotent semifield. Suppose that
X has an open cover satisfying Condition 3.4. Then we have that
Pic(X ) = Pic(XK ) = H1 (X , OX∗ ).
Remark 1.1. One may combine the above result with that of J. Flores and C. Weibel to conclude that
for an irreducible monoid scheme X , the Picard group is stable under scalar extension to a field or an
idempotent semifield.
Recall that a cancellative semiring scheme XK over an idempotent semifield K is a semiring scheme
such that for each open subset U of XK , the semiring OXK (U ) of sections is cancellative. We construct
the group of Cartier divisors modulo principal Cartier divisors, which we denote by CaCl(XK ).
These are the naive principal Cartier divisors - the ones defined by a single global section. Then we
can show that
Theorem B. Let XK be a cancellative semiring scheme over an idempotent semifield K. Then
CaCl(XK ) is isomorphic to Pic(XK ).
We remark that the Picard group of a curve is rarely equal to the Picard group of its set-theoretic
tropicalization.
Acknowledgments This work was started at the Oberwolfach workshop ID: 1525 on Non-commutative
Geometry. The authors are grateful for the Institute’s hospitality and would like to thank the organizers for providing the opportunity. K.M. would also like to thank Dhruv Ranganathan for many helpful
conversations.
2.
Preliminaries
In this section, we review some basic definitions and properties of monoid and semiring schemes.
We also recall the notion of Picard groups for monoid schemes developed in [FW14] and for semiring
schemes introduced in [Jun17].
2.1 Picard groups for monoid schemes. In what follows, by a monoid we always mean a commutative monoid M with an absorbing element 0M , i.e., 0M · m = 0M for all m ∈ M. Note that if M
is a monoid without an absorbing element, one can always embed M into M0 = M ∪ {0M } by letting
0M · m = 0M for all m ∈ M.
Remark 2.1. We will use the term “monoid schemes” instead of “F1 -schemes” to emphasize that
we are employing the minimalistic definition of F1 -schemes based on monoids following A. Deitmar
[Dei05], instead of any of the more general constructions that exist in the literature (cf. [CC10b]
or [Lor12]).
We recall some important notions which will be used throughout the paper. For the details, we
refer the reader to [Dei05], [Dei08], [CLS12].
Definition 2.2. [Dei05, §1.2 and §2.2] Let M be a monoid.
(1) An ideal I is a nonempty subset of M such that MI ⊆ I. In particular, 0M ∈ I. An ideal I is
said to be prime if M\I is a multiplicative nonempty subset of M.
(2) A maximal ideal of M is a proper ideal which is not contained in any other proper ideal.
(3) The prime spectrum Spec M of a monoid M is the set of all prime ideals of M equipped with
the Zariski topology.
(4) For any f ∈ M, we define the following set:
D( f ) := {p ∈ Spec M | f ∉ p}.
Then {D( f )} f ∈M forms a basis of Spec M.
One can mimic the construction of the structure sheaf on a scheme to define the structure sheaf
(of monoids) on the prime spectrum Spec M of a monoid M. A prime spectrum Spec M together
with a structure sheaf is called an affine monoid scheme. A monoidal space is a topological space
together with a sheaf of monoids. A monoid scheme is a monoidal space which is locally isomorphic
to an affine monoid scheme. As in the case of schemes, we call a monoid scheme irreducible if the
underlying topological space is irreducible.
Remark 2.3.
(1) Let M be a monoid. Then M has a unique maximal ideal m := M\M × .
(2) Analogously to the classical case, the category of affine monoid schemes is equivalent to the
opposite of the category of monoids. A monoid scheme, in this case, is a functor which is
locally representable by monoids. In other words, one can understand a monoid scheme as a
functor of points. For details, see [PL11].
Next, we briefly recall the definition of invertible sheaves on a monoid scheme X . We refer the
readers to [FW14] and [CLS12] for details.
Definition 2.4. [FW14, §5] Let M be a monoid and X be a monoid scheme.
(1) By an M-set, we mean a set with an M-action.
(2) An invertible sheaf L on X is a sheaf of OX -sets which is locally isomorphic to OX .
Let M be a monoid and A, B be M-sets. One may define the tensor product A ⊗M B. Furthermore,
if A and B are monoids, A ⊗M B becomes a monoid in a canonical way. See, [CLS12, §2.2, 3.2] for
details.
Remark 2.5. We would like to warn the reader that in the literature the same terminology is used to
denote different things. In [FW14], the authors use the term “smash product” for tensor product of
monoids, whereas in [CLS12], the authors use the term “smash product” only for tensor product of
monoids over F1 , the initial object in the category of monoids. We will use the term tensor product to
stay compatible with the language of schemes and semiring schemes.
Let Pic(X ) be the set of the isomorphism classes of invertible sheaves on X . Suppose that L1 and
L2 are invertible sheaves on X . The tensor product L1 ⊗OX L2 is the sheafification of the presheaf (of
monoids) sending an open subset U of X to L1 (U ) ⊗OX (U) L2 (U ). It is well–known that L1 ⊗OX L2
is an invertible sheaf on X . Also, as in the classical case, the sheafification L1−1 of the presheaf (of
monoids) sending an open subset U of X to HomOX (U) (L1 (U ), OX (U )) becomes an inverse of L1
with respect to the tensor product in the sense that L1 ⊗OX L1−1 ≃ OX . Therefore, Pic(X ) is a group.
One can use the classical argument to prove that Pic(X ) ≃ H1 (X , OX∗ ) (for instance, see [FW14,
Lemma 5.3.]). Recall that for any topological space X and a sheaf F of abelian groups on X , sheaf
cohomology Hi (X , F ) and Čech cohomology Ȟi (X , F ) agree for i = 0, 1. Therefore, we have
Pic(X ) ≃ H1 (X , OX∗ ) ≃ Ȟ1 (X , OX∗ ).
(1)
2.2 Picard groups for semiring schemes. A semiring is a set (with two binary operations - addition and multiplication) that satisfies the same axioms that a ring does, except invertibility of addition.
In this paper by a semiring we mean a commutative semiring with a multiplicative identity. A semifield is a semiring in which every non-zero element has a multiplicative inverse. A semiring A is
idempotent if a + a = a, for all elements a ∈ A. An example of an (idempotent) semifield is the tropical semifield which we denote by T. It is defined on the set R ∪ {−∞}, with operations maximum
and addition. As the name suggests, this semifield is central to tropical geometry.
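As a concrete illustration (ours, not the paper's), the tropical operations just described can be modeled directly: addition is max, multiplication is ordinary addition of reals, the additive identity is −∞, and the multiplicative identity is 0.

```python
import math

T_ZERO = -math.inf  # additive identity of T
T_ONE = 0.0         # multiplicative identity of T

def t_add(a, b):
    return max(a, b)   # tropical addition

def t_mul(a, b):
    return a + b       # tropical multiplication

a, b, c = 3.0, -1.0, 5.0
assert t_add(a, a) == a                                        # idempotency: a + a = a
assert t_mul(a, t_add(b, c)) == t_add(t_mul(a, b), t_mul(a, c))  # distributivity
assert t_mul(a, -a) == T_ONE                                   # every nonzero a is invertible
assert t_add(T_ZERO, c) == c                                   # -inf is the zero
```

The last assertion is why T is a semifield rather than a field: there is a multiplicative inverse for every nonzero element, but no additive inverses at all.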
Definition 2.6. Let A be a semiring.
(1) An ideal I of A is an additive submonoid I of A such that AI ⊆ I.
(2) An ideal I is said to be prime if I is proper and, whenever ab ∈ I, either a ∈ I or b ∈ I.
(3) A proper ideal I is maximal if the only ideal strictly containing I is A.
Since there are several definitions of semiring schemes and some of them are not equivalent, we
present the following definition which we will use in this paper.
Definition 2.7. Let A be a semiring and X = Spec A be the set of all prime ideals of A. We endow X
with the Zariski topology. The topology on X is generated by the sets of the form D( f ) := {p ∈ X | f ∉ p} for all f ∈ A.
Let A be a semiring and S be a multiplicative subset of A. We recall the construction of localization
S−1 A of A at S from [Gol99, §11]. Let M = A × S. We impose an equivalence relation on M in such
a way that (a, s) ∼ (a′ , s′ ) if and only if ∃s′′ ∈ S such that s′′ as′ = s′′ a′ s. The underlying set of S−1 A
is the set of equivalence classes of M under ∼. We let a/s denote the equivalence class of (a, s). Then one
can define the following binary operations + and · on S−1 A:
a/s + a′/s′ := (as′ + sa′)/(ss′),    (a/s) · (a′/s′) := aa′/(ss′).
It is well-known that S−1 A is a semiring with the above operations. Furthermore, we have a canonical
homomorphism A → S−1 A sending a to a/1. When S = A − p for some prime ideal p, we denote
the localization S−1 A by Ap .
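When A is cancellative the witness s′′ is unnecessary and the equivalence reduces to as′ = a′s. A minimal sketch of the fraction arithmetic above, taking A = (N, +, ·) and S = N − {0}; the class name Frac and the choice of A are ours.

```python
class Frac:
    """Element a/s of S^{-1}A for the cancellative semiring A = N,
    S = N - {0}. In the cancellative case the equivalence
    (a, s) ~ (a', s') reduces to a*s' == a'*s (no witness s'' needed)."""

    def __init__(self, a, s):
        self.a, self.s = a, s

    def __eq__(self, other):
        # cross-multiplication test for equivalence of fractions
        return self.a * other.s == other.a * self.s

    def __add__(self, other):
        # a/s + a'/s' = (as' + sa')/(ss')
        return Frac(self.a * other.s + self.s * other.a, self.s * other.s)

    def __mul__(self, other):
        # (a/s)(a'/s') = aa'/(ss')
        return Frac(self.a * other.a, self.s * other.s)

x = Frac(3, 2)
y = Frac(6, 4)
assert x == y               # (3,2) ~ (6,4) since 3*4 == 6*2
assert x + y == Frac(3, 1)  # 3/2 + 3/2 = 3
assert x * y == Frac(9, 4)
```

Note that addition here has no inverses, matching the semiring axioms: the construction never needs subtraction.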
Let A be a semiring and X = Spec A be the prime spectrum of A. For each Zariski open subset U
of X , we define the following set:
OX (U ) := { s : U → ∏p∈U Ap },
where s is a function such that s(p) ∈ Ap and is locally representable by fractions. One can easily see
that OX is a sheaf of semirings on X . An affine semiring scheme is the prime spectrum X = Spec A
equipped with a structure sheaf OX . Next, by directly generalizing the classical notion of locally
ringed spaces, one can define locally semiringed spaces. A semiring scheme is a locally semiringed
space which is locally isomorphic to an affine semiring scheme.
A special case of semiring schemes are the so-called tropical schemes. They are introduced in
[GG16]. These schemes locally are isomorphic to the prime spectrum of a quotient of the polynomial
semiring over the tropical semifield (denoted T[x1 , . . . , xn ]) by a particular equivalence relation, called
a “bend congruence”.
We note that every scheme is a semiring scheme, but never a tropical scheme. The reason is that
the structure sheaf of a tropical scheme is a sheaf of additively idempotent semirings which are never
rings.
One can extend the familiar notions of invertible sheaves and Picard group in the context of
schemes or monoid schemes to the semiring schemes setting. In fact, Čech cohomology for semiring
schemes is introduced in [Jun17] and the following is proved.
Theorem 2.8. [Jun17] Let X be a semiring scheme. Then Pic(X ) is a group and can be computed
via H1 (X , OX∗ ).
We note that OX∗ is a sheaf of abelian groups and thus we can define H1 (X , OX∗ ) in the usual way
and H1 (X , OX∗ ) = Ȟ1 (X , OX∗ ) as in (1).
Example 2.9. In [Jun17], the author also proves that for a projective space PnS over an idempotent
semifield S, one obtains that Pic(PnS ) ≃ Z, as in the classical case. In [FW14], a similar result is
proven for a projective space over a monoid. These two results motivate (among others) the authors
of the current note to study relations among Picard groups of schemes, monoid schemes, and semiring
schemes under “scalar extensions”.
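To illustrate Example 2.9 in the simplest case n = 1 (a toy computation of ours, not from the paper): cover P1 over an idempotent semifield S by two affine charts. The units on each chart are the constants a ∈ S×, while the units on the overlap are the monomials a·x^n with n ∈ Z. A Čech 1-cocycle for this two-set cover is a single unit on the overlap, and only its exponent n survives modulo coboundaries, recovering Pic(P1S) ≅ Z. Writing tropical coefficients additively (so division is subtraction):

```python
# A unit on the overlap is a pair (a, n), standing for the monomial a * x^n
# with a in S^x. On each of the two charts, units are constants: n = 0.

def coboundary(u0, u1):
    """d(u0, u1) = u1 / u0 restricted to the overlap; with additive
    tropical coefficients this is coefficient-wise subtraction."""
    (a0, n0), (a1, n1) = u0, u1
    return (a1 - a0, n1 - n0)

def cocycle_class(c):
    """Class of a 1-cocycle in H^1 for the two-set cover: modulo
    coboundaries only the exponent survives, so H^1 = Z."""
    _a, n = c
    return n

# every coboundary of chart units (exponent 0 on each chart) is trivial:
assert cocycle_class(coboundary((2.0, 0), (5.0, 0))) == 0
# the cocycle x^n represents the class n, e.g. the class of O(1):
assert cocycle_class((0.0, 1)) == 1
```

The structure of the unit groups used here is exactly what Proposition 3.2 below computes in general: K[M]× ≅ K× × M×.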
3.
Picard groups of tropical toric schemes
In this section, we prove the main result which states that the Picard group Pic(X ) of an irreducible
monoid scheme X (with some mild conditions) is stable under scalar extension to an idempotent
semifield. Let us first recall the definition of scalar extension of a monoid scheme to a field or an
idempotent semifield.
Let X be a monoid scheme and K a field or an idempotent semifield. Suppose that X is affine, i.e.,
X = Spec M, for some monoid M. Then the scalar extension is defined as:
XK = X ×Spec F1 Spec K := Spec K[M],
where K[M] is a monoid semiring (when K is an idempotent semifield) or a monoid ring (when K is a
field). This construction can be globalized to define the base change functors from monoid schemes
to semiring schemes (or schemes). See, [GG16, §3.2.].
Remark 3.1. In [Dei08], Deitmar proves that we can obtain every toric variety from an integral monoid scheme X of finite type via extension of scalars. In other words, if K = C is the field of complex
numbers, then XK is a toric variety.
Since the Picard group Pic(X ) of a semiring scheme X can be computed by the cohomology of
the sheaf OX∗ of multiplicative units, we will need to understand K[M]× , the group of multiplicatively
invertible elements of K[M].
Recall that by a cancellative monoid, we mean a monoid M such that if ab = ac, for a ∈ M − {0M }
and b, c ∈ M, then b = c.
Proposition 3.2. Let M be a cancellative monoid and K be an idempotent semifield. Then, we have
K[M]× ≅ K× × M×.
Proof. For x ∈ K[M], let φ(x) ∈ N be the smallest natural number such that x has a representation of the form
x = ∑_{k=1}^{φ(x)} ak mk ,
where ak ∈ K and mk ∈ M. Such a representation has minimal length if and only if each ak ≠ 0 and the elements mk are nonzero and distinct. Since K is an idempotent semifield, two nonzero elements of K cannot sum to zero, and hence we have φ(x) ≤ φ(x + y) for all x, y ∈ K[M].
Let m ∈ M − {0M } and x ∈ K[M]. Write x = ∑_{k=1}^{φ(x)} ak mk , so we have that mx = ∑_{k=1}^{φ(x)} ak (mmk ). This is the shortest such expression for mx because each ak ≠ 0 and M is cancellative. Hence one has φ(mx) = φ(x). One also has φ(ax) = φ(x) for a ∈ K×.
Now let y ∈ K[M] be nonzero and x ∈ K[M] be arbitrary. Write y = ∑_{k=1}^{φ(y)} ak mk with ak ≠ 0 for all k = 1, . . . , φ(y). In particular, a1 ∈ K× since K is a semifield. Also, since K is idempotent, we have that y = a1 m1 + y and hence
φ(xy) = φ(xy + a1 m1 x) ≥ φ(a1 m1 x) = φ(x).
Finally suppose x ∈ K[M]× and let y = x−1. Then, we have
1 = φ(1) = φ(xy) ≥ φ(x).
Since φ(x) ≤ 1, we have x = am for a ∈ K and m ∈ M. Similarly y = bm′ for b ∈ K and m′ ∈ M. One easily sees that ab = 1 and mm′ = 1, and hence x ∈ K× × M×.
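The length function φ in the proof can be made concrete when K = T and M is the free monoid on one generator, so that K[M] is the semiring of tropical polynomials in one variable. A quick numerical check (our illustration, with polynomials stored as exponent-to-coefficient maps, so that the size of the support is the minimal length) confirms that multiplication never decreases φ; hence a unit must be a single monomial a·x^m, as the proof shows in general.

```python
import itertools
import math
import random

def t_poly_mul(p, q):
    """Multiply tropical polynomials stored as {exponent: coefficient};
    tropical addition of coefficients is max."""
    r = {}
    for (e1, c1), (e2, c2) in itertools.product(p.items(), q.items()):
        e = e1 + e2
        r[e] = max(r.get(e, -math.inf), c1 + c2)
    return r

def phi(p):
    """Minimal length of a tropical polynomial: the size of its support."""
    return sum(1 for c in p.values() if c != -math.inf)

rng = random.Random(0)
for _ in range(100):
    p = {rng.randrange(5): rng.uniform(-3, 3) for _ in range(rng.randrange(1, 4))}
    q = {rng.randrange(5): rng.uniform(-3, 3) for _ in range(rng.randrange(1, 4))}
    # phi never decreases under multiplication: phi(pq) >= phi(p), phi(q)
    assert phi(t_poly_mul(p, q)) >= max(phi(p), phi(q))

# hence a unit has phi = 1, i.e. it is a monomial a * x^m:
unit, inverse = {2: 1.5}, {-2: -1.5}
assert t_poly_mul(unit, inverse) == {0: 0.0}  # the tropical 1
```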
We can compute Čech cohomology given the existence of an appropriate open cover. To be compatible with the notation in Proposition 3.2, we use OX× for OX∗ . The following result provides a link
between Pic(XK ) and Čech cohomology of the sheaf K × × OX × (of abelian groups) on X , where
K × is the constant sheaf on X associated to the abelian group K × .
Lemma 3.3. Let X be an irreducible monoid scheme, U be an open subset of X , and K be an idempotent semifield. Then
UK = U ×Spec F1 Spec K
is an open subset of XK .
Proof. We may assume that X is affine, say X = Spec M, and U = D( f ) for some f ∈ M. Then one
can easily check that
UK = {p ∈ Spec K[M] | 1M · f ∉ p}.
In other words, UK is nothing but D( f ) when f is considered as an element of K[M].
In what follows, we will assume the following condition for an irreducible monoid scheme.
Condition 3.4. Let X be an irreducible monoid scheme. Suppose that X has an open affine cover
U = {Uα } such that any finite intersection of the sets Uα is isomorphic to the prime spectrum of a
cancellative monoid.
Let X be an irreducible monoid scheme and U = {Ui } be an open cover of X . Let
UK := {Ui ×Spec F1 Spec K}.
It follows from Lemma 3.3 that UK is an open cover of XK for an idempotent semifield K.
Now, under the assumption of Condition 3.4, we have the following.
Theorem 3.5. Let X be an irreducible monoid scheme and U be an open cover of X satisfying
Condition 3.4. Let K be an idempotent semifield, XK = X ×Spec F1 Spec K, and UK = {Uα ×Spec F1
Spec K}. Then, we have
H1 (UK , OX×K ) ≅ H1 (U , K× × OX×).
Proof. Let UαK = Uα ×Spec F1 Spec K. Write Uα1 ...αn = Uα1 ∩ . . . ∩ Uαn , and similarly for UαK1 ...αn . Let F = K× × OX× and G = OX×K . Note that by irreducibility of X , we have F (U ) = K× × OX (U )× for any open subset U ⊆ X .
Fix α1 , . . . , αn . Then we have Uα1 ...αn ≅ Spec M and UαK1 ...αn ≅ Spec K[M] for some cancellative monoid M. Then F (Uα1 ...αn ) = K× × M× ≅ K[M]× = G (UαK1 ...αn ) by Proposition 3.2. These isomorphisms induce isomorphisms of Čech cochains Ck (U , F ) ≅ Ck (UK , G ). The result will follow if these isomorphisms are compatible with the differentials. However, this reduces to checking that the maps F (Uα1 ...αn ) → F (Uα1 ...αn αn+1 ) → G (UαK1 ...αn αn+1 ) and F (Uα1 ...αn ) → G (UαK1 ...αn ) → G (UαK1 ...αn αn+1 ) agree, which is readily checked.
Our next goal is to show that Pic XK can be computed by using a cover that satisfies the assumption
of Condition 3.4.
Lemma 3.6. Let K be an idempotent semifield. Suppose that a, b ∈ K and a ≠ 0 or b ≠ 0. Then a + b ≠ 0.
Proof. Suppose that a + b = 0. Since K is idempotent we have
a + b = a + a + b = a + (a + b) = a + 0 = a.
Similarly, we obtain that a + b = b. It follows that a = b and hence a + b = a = b as K is idempotent.
This implies that a = b = 0, which contradicts the initial assumption.
Lemma 3.7. Let K be an idempotent semifield and M be a cancellative monoid. Let {Uα } be an open
cover of X = Spec K[M]. Then Uα = X for some α .
Proof. We may assume without loss of generality that each open set in the cover is nonempty. We
first consider the case where Uα = D( fα ) for all α . Then for each p ∈ Spec K[M], we have fα ∉ p for some α .
Let I be the ideal generated by { fα }. Then I ⊈ p for any prime p of Spec K[M]. As in the classical
case, since each proper ideal is contained in a maximal ideal and each maximal ideal is prime, we
have I = K[M]. Hence 1 ∈ I so we can write 1 = ∑α fα gα for some fα ∈ K and gα ∈ M. Since K is
idempotent and M is cancellative, it follows from Lemma 3.6 that 1 = fα gα for some α and hence
fα ∈ K × . Therefore, we have that X = D( fα ).
For the general case, we may refine the open cover to a cover by sets of the form D( fα ). Then one
of the open sets in the refinement is X and hence this is true for the original cover.
Remark 3.8. A special case of Lemma 3.7, when M is a free monoid generated by n elements, is
proved in [Jun17, Lemma 4.20] and referred to as a “tropical partition of unity”.
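The key step of Lemma 3.7, that 1 = ∑α fα gα forces a single term fα gα to equal 1, reflects the absence of cancellation in an idempotent semifield: over T, a maximum of terms ≤ 0 equals 0 (the tropical 1) exactly when some individual term does. A toy check of this (our illustration):

```python
import math
import random

def t_sum(terms):
    """Tropical sum of a list of elements of T: max, with -inf (the
    tropical zero) as the value of the empty sum."""
    return max(terms, default=-math.inf)

rng = random.Random(42)
for _ in range(200):
    # random lists of tropical terms, each <= the tropical 1 (= 0.0)
    terms = [rng.choice([0.0, rng.uniform(-5.0, -0.1)])
             for _ in range(rng.randrange(1, 6))]
    # the sum equals the tropical 1 if and only if some single term does:
    assert (t_sum(terms) == 0.0) == (0.0 in terms)
```

In the lemma, the terms are the products fα gα in K[M], and the surviving term exhibits fα as a unit.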
Theorem 3.9. Let K be an idempotent semifield, M be a cancellative monoid, and X = Spec K[M].
Then Pic(X ) = 0, and more generally, we have
Hk (X , OX× ) = 0,
for k ≥ 1.
Proof. We in fact show that the sheaf cohomology of any sheaf of abelian groups on X vanishes. It
suffices to show that, for any sheaves F and G of abelian groups, if f : F → G is surjective then
the corresponding map F (X ) → G (X ) is surjective. From the surjectivity of f : F → G , it follows
that for every open set U ⊆ X , and for every s ∈ G (U ), there is a covering {Ui } of U , and elements
ti ∈ F (Ui ) such that f (ti ) = s|Ui for all i. From Lemma 3.7 the result follows.
Corollary 3.10. Let X be an irreducible monoid scheme and U = {Uα } be an affine open cover
satisfying Condition 3.4. Let K be an idempotent semifield and let XK = X ×Spec F1 Spec K. Let
UK = {Uα ×Spec F1 Spec K} be a cover of XK coming from U . Then, we have
H1 (XK , OX×K ) = H1 (UK , OX×K ).
Proof. Combine the previous result with Serre’s version ([Ser55, Théorème 1 of n◦ 29]) of Leray’s theorem.
We now consider similar results for the cohomology of K× × OX×. The following lemma is an analogue of Lemma 3.7 for cancellative monoids.
Lemma 3.11. Let M be a cancellative monoid. Let {Uα } be an open cover of X = Spec M. Then
Uα = X for some α .
Proof. As in Lemma 3.7, we may assume that each Uα is nonempty and satisfies Uα = D( fα ) for
some fα ∈ M. Let I be the ideal of M generated by { fα }. Then I is not contained in any prime
ideal. Since the maximal ideal of any monoid is prime, I is not contained in the maximal ideal m = M − M×. Hence 1 ∈ I, so 1 = fα g for some α and some g ∈ M, and fα is a unit, as desired.
Corollary 3.12. Let M be a cancellative monoid and X = Spec M. Then, we have
Hk (X , K× × OX×) = 0, for k ≥ 1.
Proof. A similar argument to the one of Theorem 3.9 yields the desired result.
Corollary 3.13. Let X be an irreducible monoid scheme and U = {Uα } be an affine open cover of
X satisfying Condition 3.4. Then, we have
H1 (X , K× × OX×) = H1 (U , K× × OX×).
Proof. Combine the previous result with Serre’s version ([Ser55, Théorème 1 of n◦ 29]) of Leray’s theorem.
We are now able to express Pic XK in terms of X .
Proposition 3.14. Let X be an irreducible monoid scheme and U = {Uα } be an affine open cover satisfying Condition 3.4. Let K be an idempotent semifield and let XK = X ×Spec F1 Spec K. Then, we have
Pic(XK ) ≅ H1 (X , K× × OX×).
Proof. Let UK = {Uα ×Spec F1 Spec K}. Then, by Theorem 3.5 and Corollaries 3.10 and 3.13, we have
Pic(XK ) ≅ H1 (UK , OX×K ) ≅ H1 (U , K× × OX×) ≅ H1 (X , K× × OX×).
Proposition 3.15. Let X be an irreducible monoid scheme. Then, we have
Pic(X ) = H1 (X , K× × OX×).
Proof. Since the constant sheaf on an irreducible space is flasque, we have that H1 (X , K×) = 0. Then, it follows that
H1 (X , K× × OX×) = H1 (X , K×) × H1 (X , OX×) = 0 × Pic(X ) = Pic(X ).
Combining the two previous propositions gives the following theorem.
Theorem 3.16. Let X be an irreducible monoid scheme and U = {Uα } be an affine open cover satisfying Condition 3.4. Let K be an idempotent semifield and let XK = X ×Spec F1 Spec K. Then,
Pic(XK ) ≅ Pic(X ).
4.
Cartier divisors on cancellative semiring schemes
In this section, we construct Cartier divisors on a cancellative semiring scheme X , following the idea of Flores and Weibel [FW14]. We show that the Picard group Pic(X ) is isomorphic to the group CaCl(X ) of Cartier divisors modulo principal Cartier divisors. In what follows, by an integral semiring, we mean a semiring without zero divisors.
Let A be an integral semiring and p ∈ Spec A. We will call an element f ∈ A (multiplicatively) cancellable if the following condition holds:
a f = b f implies a = b, for all a, b ∈ A. (2)
By a cancellative semiring, we mean a semiring A such that any nonzero element a ∈ A is cancellable.
Definition 4.1. By an integral semiring scheme, we mean a semiring scheme X such that for any
open subset U of X , OX (U ) is an integral semiring. Moreover, if OX (U ) is cancellative for any open
subset U of X , we call X a cancellative semiring scheme.
Remark 4.2. Classically, a ring without zero divisors is multiplicatively cancellative and vice versa. However, a semiring without zero divisors is not necessarily cancellative; the polynomial semiring T[x1 , . . . , xn ] over the tropical semifield T is an example. Nonetheless, any cancellative semiring is integral, and hence any cancellative semiring scheme is an integral semiring scheme.
Lemma 4.3. Let X be an integral semiring scheme. Then X has a unique generic point.
Proof. As in the classical case, if X is an integral semiring scheme, then X is irreducible and any
irreducible topological space has a unique generic point.
Lemma 4.4. Let X be an integral semiring scheme with a generic point η . Then, for any affine open
subset U = Spec A ⊆ X , we have
OX,η ≃ A(0) ,
where A(0) := Frac(A). In particular, OX,η is a semifield.
Proof. Since X is integral, A is an integral semiring. In particular, p = (0) ∈ Spec A is the generic
point of Spec A. Again, since X is integral, it follows that p is also the generic point of X . Therefore
we have OX,η = A(0) .
Definition 4.5. Let X be an integral semiring scheme and U = Spec A be any affine open subset. We
define the function field K(X ) of X as follows:
K(X ) := OX,η .
Example 4.6. Let X = Spec T[x], an affine line, and let A = T[x]. One can easily see that A is an integral semiring and hence the generic point is η = (0). Therefore, we have
K(X ) = OX,η := { g/f | g ∈ A, f ∈ A\{0} } = T(x).
Proposition 4.7. Let X be a cancellative semiring scheme with the function field K. Let K be the
constant sheaf associated to K on X . Then OX× is a subsheaf (of abelian groups) of K × .
Proof. First, suppose that X is affine, i.e., X = Spec A for some cancellative semiring A. Notice that, since A is cancellative, for any f ∈ A, we have a canonical injection i f : A f → Frac(A). Therefore, as in the classical case, for each open subset U of X , we have
OX (U ) = ∩D( f )⊆U A f ⊆ Frac(A) = K (U ). (3)
It follows that OX× (U ) ⊆ K × (U ). In general, one can cover X with affine open subsets and the argument reduces to the case when X is affine; this is essentially due to Lemma 4.4.
Thanks to Proposition 4.7, we can define a Cartier divisor on a cancellative semiring scheme X as
follows:
Definition 4.8. Let X be a cancellative semiring scheme with the function field K. Let K be the
constant sheaf associated to K on X . A Cartier divisor on X is a global section of the sheaf of abelian
groups K × /OX× .
We recall the notion of Cartier divisors on a cancellative monoid scheme. Let X be a cancellative and irreducible monoid scheme with a generic point η . Denote by K := OX,η the stalk at η and by K the associated constant sheaf.
Definition 4.9. [FW14, §6]
(1) A Cartier divisor is a global section of the sheaf K × /OX× of abelian groups. We let Cart(X ) be the group of Cartier divisors on X .
(2) A principal Cartier divisor is a Cartier divisor which is represented by some a ∈ OX,η × . Let P(X ) be the subgroup of Cart(X ) consisting of principal Cartier divisors.
(3) We let CaCl(X ) := Cart(X )/P(X ) be the group of Cartier divisors modulo principal Cartier divisors.
Flores and Weibel prove the following.
Theorem 4.10. [FW14, Proposition 6.1.] Let X be a cancellative monoid scheme. Then
Pic(X ) ≃ CaCl(X ).
Now, as in the classical case, for any Cartier divisor D, one can associate an invertible sheaf L (D)
on X .
In fact, the same argument as in the proof of Theorem 4.10 shows the following:
Proposition 4.11. Let X be a cancellative semiring scheme. Then the map ϕ : CaCl(X ) → Pic(X )
sending [D] to [L (D)] is an isomorphism, where [D] is the equivalence class of D ∈ Cart(X ) and
[L (D)] is the equivalence class of L (D) in Pic(X ).
Proof. The argument is similar to [FW14, Proposition 6.1]; however, we include a proof for the sake of completeness.
We have the following short exact sequence of sheaves of abelian groups:
0 −→ OX× −→ K × −→ K × /OX× −→ 0. (4)
Since X is cancellative, X is irreducible. Furthermore, K × is a constant sheaf on X and hence K × is flasque. In particular, H1 (X , K × ) = 0. Therefore, the cohomology sequence induced by (4) becomes
0 −→ OX× (X ) −→ K × (X ) −→f (K × /OX× )(X ) −→ H1 (X , OX× ) −→ 0, (5)
where f denotes the map K × (X ) −→ (K × /OX× )(X ). But we have H1 (X , OX× ) = Pic(X ), (K × /OX× )(X ) = Cart(X ), and f (K × (X )) = P(X ), and hence Pic(X ) ≃ CaCl(X ) as claimed.
Example 4.12. Let A := T[x1 , ..., xn ] be the polynomial semiring over T. Recall that A is integral, but not cancellative. Consider B := A/∼, where ∼ is the congruence relation on A such that f (x1 , ..., xn ) ∼ g(x1 , ..., xn ) if and only if f and g are the same as functions on Tn . It is well known (see for example [Jun17] or [JM17]) that in this case B is cancellative. Now let X = Spec B. It follows from Proposition 4.11 that Pic(X ) = CaCl(X ). But we know from [Jun17, Corollary 4.23] that Pic(X ) is the trivial group, and hence so is CaCl(X ). In tropical geometry, one may obtain a "reduced model" of a tropical scheme as above, which could be used to compute CaCl(X ); see [Jun17, Remark 4.27].
Remark 4.13. One can easily see (cf. [FW14]) that if X is a cancellative monoid scheme and K is a
field, then CaCl(X ) is isomorphic to CaCl(XK ), where CaCl(XK ) is the group of Cartier divisors of
the scheme XK modulo principal Cartier divisors.
Remark 4.14. In the case of set-theoretic tropicalizations of curves, there is a well-developed theory of divisors. Let C be an algebraic curve defined over a valued field and let G be the set-theoretic tropicalization of C; we remind the reader that G is a graph. There exists a notion of a Picard group of G (cf. [BN07]). Moreover, there is a well-defined map Pic(C) → Pic(G), which one may think of as the tropicalization map. If C has genus 0, then the Picard groups Pic(C) and Pic(G) agree and are isomorphic to Z. However, in general Pic(C) and Pic(G) are rarely the same. For example, consider an elliptic curve E degenerating to a cycle of P1 's. Note that Pic0 (E) = E but Pic0 (G) = S1 , the unit circle.
Example 4.15 (Computing the Picard group of the tropicalization of P1 × P1 ). We consider the quadric surface XC = P1 × P1 , which is a toric variety. We compute explicitly the Picard group of the tropical scheme XT . Note that the calculation is analogous to the classical one.
We can do the computation on P1 × P1 or identify it with its image under the Segre embedding into P3 , i.e., XT is defined by a congruence generated by a single "bend relation" x0 x1 ∼ x2 x3 . The two computations are analogous.
Let U = ∪4i=1Ui be an open cover for XT = (P1 × P1 )T , where
U1 = Spec T[x, y], U2 = Spec T[x, y−1 ], U3 = Spec T[x−1 , x−1 y−1 ], U4 = Spec T[x−1 , xy].
Now we can see which sections over each Ui are units in Γ(Ui , OXT ), namely,
Ai := OX×T (Ui ) = R, ∀i = 1, 2, 3, 4. (6)
For instance, for i = 1, OX×T (U1 ) is the group of (tropically) multiplicatively invertible elements in OXT (U1 ) = T[x, y]. Since the only tropical polynomials which are multiplicatively invertible are the nonzero constants, we have (6).
Let Ui j := Ui ∩U j . Now we have
A12 := OX×T (U12 ) = (T[x, y±1 ])× ∼= R × Z,
and similarly A14 , A23 , A34 are isomorphic to R × Z. Note, however, that
A13 ∼= A24 = (T[x±1 , y±1 ])× ∼= R × Z2 .
Similarly, for Ui jk := Ui ∩U j ∩Uk one may also check that
Ai jk := OX×T (Ui jk ) = (T[x±1 , y±1 ])× ∼= R × Z2 .
In particular, we get that
Č0 (XT , OX×T ) ∼= R4 , Č1 (XT , OX×T ) ∼= (R × Z)4 × (R × Z2 )2 , Č2 (XT , OX×T ) ∼= (R × Z2 )4 . (7)
(7)
We can proceed with the computation using the usual Čech complex, or we can do a "tropical" computation as in [Jun17], considering the following cochain complex, in which each arrow stands for a pair of maps (di+ , di− ):
A1 × A2 × A3 × A4 −→ A12 × A13 × A23 × A14 × A24 × A34 −→ A123 × A124 × A134 × A234 −→ · · · (8)
The complex (8) was introduced to deal with the lack of subtraction in tropical geometry; instead
of one differential we consider pairs of morphisms (di+ , di− ). Also, instead of the difference of two
functions f − g, we will have a tuple ( f , g) and we replace the kernel condition f − g = 0 with
( f , g) ∈ ∆, where ∆ is the diagonal. However, since OX× is a sheaf of abelian groups, we can simply
use the classical cochain complex.
Either way, we can see that the image of d0 is generated by elements of the form ( f1 , f2 , f3 , f4 ), where fi ∈ R for all i and ( f1 , f2 , f3 , f4 ) ≠ λ (1, 1, 1, 1) for λ ≠ 0, and thus the image of d0 is isomorphic to R3 . The kernel of d1 is generated by elements of the following form:
( (b/c) yk , bxl yk , cxk , axl , (ac/b) xl yk , (a/b) yk ), for a, b, c ∈ R and k, l ∈ Z. (9)
(9)
Now, for the choice of ( f1 , f2 , f3 , f4 ) = (b, c, 1T , b/a), one can easily see that any element as in (9) defines the same equivalence class as an element (yk , xl yk , xk , xl , xl yk , yk ) in Ȟ1 (XT , OX×T ). Now it is easy to see that
H1 (XT , OX×T ) = Ȟ1 (XT , OX×T ) ∼= Z2 .
References
[AC13] Omid Amini and Lucia Caporaso. Riemann-Roch theory for weighted graphs and tropical curves. Advances in Mathematics, 240:1–23, 2013.
[BN07] Matthew Baker and Serguei Norine. Riemann-Roch and Abel-Jacobi theory on a finite graph. Advances in Mathematics, 215(2):766–788, 2007.
[Car15a] Dustin Cartwright. Combinatorial tropical surfaces. arXiv preprint arXiv:1506.02023, 2015.
[Car15b] Dustin Cartwright. Tropical complexes. arXiv preprint arXiv:1308.3813, 2015.
[CC10a] Alain Connes and Caterina Consani. From monoids to hyperstructures: in search of an absolute arithmetic. Casimir Force, Casimir Operators and the Riemann Hypothesis, de Gruyter, pages 147–198, 2010.
[CC10b] Alain Connes and Caterina Consani. Schemes over F1 and zeta functions. Compos. Math., 146(6):1383–1415, 2010.
[CC11] Alain Connes and Caterina Consani. On the notion of geometry over F1. Journal of Algebraic Geometry, 20(3):525–557, 2011.
[CC14] Alain Connes and Caterina Consani. The arithmetic site. Comptes Rendus Mathematique, Ser. I 352, 971–975, 2014.
[CC17a] Alain Connes and Caterina Consani. Geometry of the scaling site. Sel. Math. New Ser., doi:10.1007/s00029-017-0313-y, 2017.
[CC17b] Alain Connes and Caterina Consani. Homological algebra in characteristic one. arXiv preprint arXiv:1703.02325, 2017.
[CLS12] Chenghao Chu, Oliver Lorscheid, and Rekha Santhanam. Sheaves and K-theory for F1-schemes. Advances in Mathematics, 229(4):2239–2286, 2012.
[Dei05] Anton Deitmar. Schemes over F1. In Number fields and function fields: two parallel worlds, pages 87–100. Springer, 2005.
[Dei08] Anton Deitmar. F1-schemes and toric varieties. Contributions to Algebra and Geometry, 49(2):517–525, 2008.
[FRTU16] Tyler Foster, Dhruv Ranganathan, Mattia Talpo, and Martin Ulirsch. Logarithmic Picard groups, chip firing, and the combinatorial rank. arXiv preprint arXiv:1611.10233, 2016.
[FW14] Jaret Flores and Charles Weibel. Picard groups and class groups of monoid schemes. Journal of Algebra, 415:247–263, 2014.
[GG16] Jeffrey Giansiracusa and Noah Giansiracusa. Equations of tropical varieties. Duke Mathematical Journal, 165(18):3379–3433, 2016.
[GK08] Andreas Gathmann and Michael Kerber. A Riemann-Roch theorem in tropical geometry. Mathematische Zeitschrift, 259:217–230, 2008.
[Gol99] Jonathan Golan. Semirings and their applications. Updated and expanded version of The theory of semirings, with applications to mathematics and theoretical computer science, 1999.
[JM17] Dániel Joó and Kalina Mincheva. Prime congruences of idempotent semirings and a Nullstellensatz for tropical polynomials. Sel. Math. New Ser., doi:10.1007/s00029-017-0322-x, 2017.
[Jun17] Jaiung Jun. Čech cohomology of semiring schemes. Journal of Algebra, 483:306–328, 2017.
[Lor12] Oliver Lorscheid. The geometry of blueprints: Part I: Algebraic background and scheme theory. Advances in Mathematics, 229(3):1804–1846, 2012.
[Man95] Yuri Manin. Lectures on zeta functions and motives (according to Deninger and Kurokawa). Astérisque, 228(4):121–163, 1995.
[MR14] Diane Maclagan and Felipe Rincón. Tropical schemes, tropical cycles, and valuated matroids. arXiv preprint arXiv:1401.4654, 2014.
[MR16] Diane Maclagan and Felipe Rincón. Tropical ideals. arXiv preprint arXiv:1609.03838, 2016.
[MZ08] Grigory Mikhalkin and Ilia Zharkov. Tropical curves, their Jacobians and theta functions. Curves and abelian varieties, 465:203–230, 2008.
[PL11] Javier López Peña and Oliver Lorscheid. Mapping F1-land: An overview of geometries over the field with one element. Noncommutative geometry, arithmetic, and related topics, pages 241–265, Johns Hopkins Univ. Press, 2011.
[Ser55] Jean-Pierre Serre. Faisceaux algébriques cohérents. Annals of Mathematics, pages 197–278, 1955.
[Sou04] Christophe Soulé. Les variétés sur le corps à un élément. Mosc. Math. J., 4(1):217–244, 2004.
[Tit56] Jacques Tits. Sur les analogues algébriques des groupes semi-simples complexes. In Colloque d'algèbre supérieure, Bruxelles, pages 261–289, 1956.
B INGHAMTON U NIVERSITY, B INGHAMTON , NY 13902, USA
E-mail address: [email protected]
YALE U NIVERSITY, N EW H AVEN , CT 06511, USA
E-mail address: [email protected]
E-mail address: [email protected]
Maximizing Expected Utility for Stochastic Combinatorial Optimization Problems∗
Jian Li†1 and Amol Deshpande‡2
arXiv:1012.3189v7 [] 10 Aug 2016
1Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, P.R. China
2Department of Computer Science, University of Maryland, College Park, USA
Abstract
We study the stochastic versions of a broad class of combinatorial problems where the weights of the
elements in the input dataset are uncertain. The class of problems that we study includes shortest paths,
minimum weight spanning trees, and minimum weight matchings, and other combinatorial problems like
knapsack. We observe that the expected value is inadequate in capturing different types of risk-averse
or risk-prone behaviors, and instead we consider a more general objective which is to maximize the
expected utility of the solution for some given utility function, rather than the expected weight (expected
weight becomes a special case). Under the assumption that there is a pseudopolynomial time algorithm
for the exact version of the problem (This is true for the problems mentioned above), 1 we can obtain the
following approximation results for several important classes of utility functions:
1. If the utility function µ is continuous, upper-bounded by a constant and limx→+∞ µ(x) = 0, we
show that we can obtain a polynomial time approximation algorithm with an additive error for
any constant > 0.
2. If the utility function µ is a concave increasing function, we can obtain a polynomial time approximation scheme (PTAS).
3. If the utility function µ is increasing and has a bounded derivative, we can obtain a polynomial
time approximation scheme.
Our results recover or generalize several prior results on stochastic shortest path, stochastic spanning
tree, and stochastic knapsack. Our algorithm for utility maximization makes use of the separability
of exponential utility and a technique to decompose a general utility function into exponential utility
functions, which may be useful in other stochastic optimization problems.
1
Introduction
The most common approach to dealing with optimization problems in the presence of uncertainty is to optimize the expected value of the solution. However, expected value is inadequate in expressing diverse people's
∗A preliminary version of the paper appeared in the Proceedings of the 52nd Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2011.
†[email protected]
‡[email protected]
1Following the literature [55], we differentiate between the exact version and the deterministic version of a problem; in the exact version of the problem, we are given a target value and asked to find a solution (e.g., a path) with exactly that value (i.e., path length).
preferences towards decision-making under uncertain scenarios. In particular, it fails at capturing different
risk-averse or risk-prone behaviors that are commonly observed. Consider the following simple example
where we have two lotteries L1 and L2 . In L1 , the player could win 1000 dollars with probability 1.0, while
in L2 the player could win 2000 dollars with probability 0.5 and 0 dollars otherwise. It is easy to see that
both have the same expected payoff of 1000 dollars. However, many, if not most, people would treat L1
and L2 as two completely different choices. Specifically, a risk-averse player is likely to choose L1 and a
risk-prone player may prefer L2 (Consider a gambler who would like to spend 1000 dollars to play doubleor-nothing). A more involved but also more surprising example is the St. Petersburg paradox (see e.g., [45])
which has been widely used in the economics literature as a criticism of expected value. The paradox is
named from Daniel Bernoulli’s presentation of the problem, published in 1738 in the Commentaries of the
Imperial Academy of Science of Saint Petersburg. Consider the following game: you pay a fixed fee X to
enter the game. In the game, a fair coin is tossed repeatedly until a tail appears ending the game. The payoff
of the game is 2k where k is the number of heads that appear, i.e., you win 1 dollar if a tail appears on the
first toss, 2 dollars if a head appears on the first toss and a tail on the second, 4 dollars if a head appears on
the first two tosses and a tail on the third and so on. The question is what would be a fair fee X to enter the
game? First, it is easy to see that the expected payoff is
E[payoff] = ∑_{k=1}^{∞} 2^{−k} · 2^{k−1} = (1/2) · 1 + (1/4) · 2 + (1/8) · 4 + (1/16) · 8 + · · · = 1/2 + 1/2 + 1/2 + 1/2 + · · · = ∞.
If we use the expected payoff as a criterion for decision making, we should therefore play the game at any finite price X (no matter how large X is), since the expected payoff is always larger. However, researchers have conducted extensive surveys and found that not many people would pay even 25 dollars to play the game [45], which significantly deviates from what the expected value criterion predicts. In fact, the paradox can be resolved by expected utility theory with a logarithmic utility function, suggested by Bernoulli himself [7]. We refer the interested reader to [59, 45] for more information. These observations and criticisms have led researchers, especially in economics, to study the problem from a more fundamental perspective and to directly maximize user satisfaction, often called utility. The uncertainty present in the problem instance naturally leads us to optimize the expected utility.
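Bernoulli's logarithmic resolution of the paradox can be checked numerically. A sketch (the truncation at k = 200 is an implementation choice; the tail of the series is negligible): under u(x) = log x, the expected utility of the game is finite, and its certainty equivalent is only about 2 dollars.

```python
import math

# Payoff 2^(k-1) occurs with probability 2^(-k), k = 1, 2, ...
# Expected log-utility: sum of 2^(-k) * log(2^(k-1)) = (log 2) * sum (k-1)/2^k.
expected_log = sum(2.0 ** (-k) * math.log(2.0 ** (k - 1)) for k in range(1, 200))
print(round(expected_log, 6))             # 0.693147, i.e. log 2
print(round(math.exp(expected_log), 6))   # certainty equivalent: 2.0 dollars
```

So a logarithmic-utility player values the game at about 2 dollars, consistent with the survey evidence that almost no one pays 25.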
Let F be the set of feasible solutions to an optimization problem. Each solution S ∈ F is associated
with a random weight w(S). For instance, F could be a set of lotteries and w(S) is the (random) payoff of
lottery S. We model the risk awareness of a user by a utility function µ : R → R: the user obtains µ(x)
units of utility if the outcome is x, i.e., w(S) = x. Formally, the expected utility maximization principle is
simply stated as follows: the most desirable solution S is the one that maximizes the expected utility, i.e.,
S = arg max_{S′ ∈ F} E[µ(w(S′ ))].
Indeed, expected utility theory is a branch of utility theory that studies "betting preferences" of people with regard to uncertain outcomes (gambles). The theory was formally initiated by von Neumann and Morgenstern in the 1940s [65, 24],2 who gave an axiomatization of the theory (known as the von Neumann-Morgenstern expected utility theorem). The theory is well known to be versatile in expressing diverse risk-averse or risk-prone behaviors.
In this paper, we focus on the following broad class of combinatorial optimization problems. The deterministic version of the problem has the following form: we are given a ground set of elements U =
2Daniel Bernoulli also developed many ideas, such as risk aversion and utility, in his work Specimen theoriae novae de mensura sortis (Exposition of a New Theory on the Measurement of Risk) in 1738 [8].
{ei }i=1...n ; each element e is associated with a weight we ; each feasible solution is a subset of the elements satisfying some property. Let F denote the set of feasible solutions. The objective for the deterministic problem is to find a feasible solution S with the minimum (or maximum) total weight w(S) = ∑e∈S we . We can see that many combinatorial problems such as shortest path, minimum spanning tree, and minimum weight matching belong to this class. In the stochastic version of the problem, the weight we of each element e is a nonnegative random variable. We assume all the we 's are independent of each other. We use pe (·) to denote the probability density function for we (or the probability mass function in the discrete case). We are also given a utility function µ : R+ → R+ which maps a weight value to a utility value. By the expected utility maximization principle, our goal here is to find a feasible solution S ∈ F that maximizes the expected utility, i.e., E[µ(w(S))]. We call this problem the expected utility maximization (EUM) problem.
Let us use the following toy example to illustrate the rationale behind EUM. There is a graph with two nodes s and t and two parallel links e1 and e2 . Edge e1 has a fixed length 1, while the length of e2 is 0.9 with probability 0.9 and 1.9 with probability 0.1 (so the expected value is also 1). We want to choose one edge to connect s and t. It is not hard to imagine that a risk-averse user would choose e1 , since e2 may turn out to have a much larger length with a nontrivial probability. We can capture such behavior using the utility function (1) (defined in Section 1.1). Similarly, we can capture the risk-prone behavior by using, for example, the utility function µ(x) = 1/(x + 1). It is easy to see that e1 maximizes the expected utility in the former case, and e2 in the latter.
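The two-edge example can be checked directly. A small sketch (illustrative only: the simple threshold utility 1{x ≤ 1} stands in for the risk-averse utility (1), and µ(x) = 1/(x + 1) is the risk-prone utility from the text):

```python
# Risk-averse utility: threshold chi(x) = 1 if x <= 1, else 0.
# Risk-prone utility: mu(x) = 1 / (x + 1).
chi = lambda x: 1.0 if x <= 1 else 0.0
mu = lambda x: 1.0 / (x + 1.0)

e1 = [(1.0, 1.0)]                  # (length, probability): deterministic length 1
e2 = [(0.9, 0.9), (1.9, 0.1)]      # same expected length 1, but random

def expected_utility(edge, u):
    return sum(p * u(x) for x, p in edge)

print(expected_utility(e1, chi), expected_utility(e2, chi))  # 1.0 vs 0.9 -> e1 wins
print(expected_utility(e1, mu), expected_utility(e2, mu))    # 0.5 vs ~0.508 -> e2 wins
```

The same two distributions thus yield opposite optimal choices depending on the utility function, which is exactly the behavior expected value alone cannot express.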
1.1
Our Contributions
In order to state our contribution, we first recall some standard terminology. A polynomial time approximation scheme (PTAS) is an algorithm which takes an instance of a minimization problem (resp. a maximization problem) and a parameter ε > 0 and produces a solution whose cost is at most (1 + ε)OPT (resp. at least (1 − ε)OPT), and whose running time, for any fixed constant ε > 0, is polynomial in the size of the input, where OPT is the value of the optimal solution. We use A to denote the deterministic combinatorial optimization problem under consideration, and EUM(A) the corresponding expected utility maximization problem. The exact version of A asks whether there is a feasible solution of A with weight exactly equal to a given integer K. We say an algorithm runs in pseudopolynomial time for the exact version of A if the running time is polynomial in n and K. For many combinatorial problems, a pseudopolynomial algorithm for the exact version is known. Examples include shortest path, spanning tree, matching and knapsack.
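For concreteness, here is the standard weight-indexed dynamic program for the exact version of subset-sum/knapsack (a sketch; it answers feasibility only, but witnesses can be recovered with back-pointers). Its O(nK) running time is polynomial in n and the target K, i.e., pseudopolynomial:

```python
def exact_subset_exists(weights, K):
    """Is there a subset of `weights` with total weight exactly K?
    Runs in O(n*K) time: pseudopolynomial in the target value K."""
    reachable = [True] + [False] * K
    for w in weights:
        # Iterate downward so each element is used at most once.
        for t in range(K, w - 1, -1):
            if reachable[t - w]:
                reachable[t] = True
    return reachable[K]

print(exact_subset_exists([3, 5, 7], 12))  # True  (5 + 7)
print(exact_subset_exists([3, 5, 7], 11))  # False
```

The exact versions of shortest path, spanning tree and matching cited above admit similar dynamic programs indexed by the target weight.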
We discuss in detail our results for EUM. We start with a theorem which underpins our other results. We denote ‖µ‖∞ = supx≥0 |µ(x)|. We say a function µ̃(x) is an ε-approximation of µ(x) if |µ̃(x) − µ(x)| ≤ ε‖µ‖∞ for all x ≥ 0. We allow µ̃(x) to be a complex-valued function and let |µ̃(x)| denote its absolute value (as we will see shortly, µ̃(x) takes the form of a finite sum of complex exponentials).3

Theorem 1 Assume that there is a pseudopolynomial algorithm for the exact version of A. Further assume that, given any constant ε > 0, we can find an ε-approximation of the utility function µ of the form µ̃(x) = ∑_{k=1}^{L} ck φk^x , where |φk | ≤ 1 for all 1 ≤ k ≤ L (the φk may be complex numbers). Let τ = maxk |ck |/‖µ‖∞ . Then, there is an algorithm that runs in time (nτ/ε)^{O(L)} and finds a feasible solution S ∈ F such that E[µ(w(S))] ≥ OPT − ε‖µ‖∞ .

3In practice, the user only needs to specify a real utility function µ(x). The complex function µ̃(x) is used to approximate the real utility function µ(x).
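The reason exponential approximations are so useful is separability: since the element weights are independent, for µ̃(x) = ∑k ck φk^x we get E[µ̃(w(S))] = ∑k ck ∏e∈S E[φk^we ], a product over individual elements that a dynamic program can maintain. A quick numerical check of the factorization (a sketch with arbitrary made-up discrete distributions and an arbitrary φ with |φ| ≤ 1):

```python
import cmath, itertools

phi = cmath.exp(2j * cmath.pi / 7) * 0.8      # any complex number with |phi| <= 1

# Independent discrete weights: lists of (value, probability) pairs.
weights = [[(1, 0.5), (2, 0.5)], [(0, 0.3), (3, 0.7)], [(2, 1.0)]]

# Factored form: product of per-element expectations E[phi^(w_e)].
factored = 1.0
for dist in weights:
    factored *= sum(p * phi ** v for v, p in dist)

# Direct form: expectation of phi^(w_1 + w_2 + w_3) over all joint outcomes.
direct = sum(
    p1 * p2 * p3 * phi ** (v1 + v2 + v3)
    for (v1, p1), (v2, p2), (v3, p3) in itertools.product(*weights)
)
print(abs(direct - factored) < 1e-12)  # True
```

The identity E[φ^(X+Y)] = E[φ^X] E[φ^Y] for independent X, Y is what lets the algorithm track a single product per exponential term instead of the full distribution of w(S).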
From the above theorem, we can see that if we can ε-approximate the utility function µ by a short sum of exponentials, we can obtain good approximation algorithms for EUM. In this paper, we consider three important classes of utility functions.
1. (Class Cbounded ) Consider the case where the deterministic problem A is a minimization problem, i.e., we would like the cost of our solution to be as small as possible. In the corresponding stochastic version of A, we assume that any utility function µ(x) ∈ Cbounded is nonnegative, bounded, continuous and satisfies limx→∞ µ(x) = 0 (please see below for the detailed technical assumptions). The last condition captures the fact that if the cost of a solution is too large, it becomes almost useless for us. We denote the class of such utility functions by Cbounded .
2. (Class Cconcave ) Consider the case where the deterministic problem A is a maximization problem. In other words, we want the value of our solution to be as large as possible. In the corresponding stochastic version of A, we assume that µ(x) is a nonnegative, monotone nondecreasing and concave function. Note that concave functions are extensively used to model risk-averse behaviors in the economics literature. We denote the class of such utility functions by Cconcave .
3. (Class Cincreasing ) Consider a deterministic maximization problem A. In the corresponding stochastic version of A, we assume that µ(x) is a nonnegative, differentiable and increasing function. We assume (d/dx) µ(x) ∈ [L, U] for x ≥ 0, where L, U > 0 are constants. We denote the class of such utility functions by Cincreasing . Functions in Cincreasing can be concave, nonconcave, convex or nonconvex. Convex functions are often associated with risk-prone behaviors, while nonconvex, nonconcave utility functions have also been observed in various settings [36, 23].
Now, we state in detail our assumptions and results for the above classes of utility functions.
Class Cbounded : Since µ is bounded, by scaling, without loss of generality we can assume ‖µ‖∞ = 1. Since limx→∞ µ(x) = 0, for any ε > 0 there exists a point Tε such that µ(x) ≤ ε for x > Tε . We assume that Tε is a constant depending only on ε. We further assume that the continuous utility function µ satisfies the α-Hölder condition, i.e., |µ(x) − µ(y)| ≤ C|x − y|^α , for some constant C and some constant α > 1/2. We say f is C-Lipschitz if f satisfies the 1-Hölder condition with coefficient C. Under the above conditions, we can prove Theorem 2.
Theorem 2 If the utility function µ belongs to Cbounded , then, for any ε > 0, we can obtain a function µ̃(x) = ∑_{k=1}^{L} ck φk^x such that |µ̃(x) − µ(x)| ≤ ε for x ≥ 0, where
L = 2^{O(Tε)} poly(1/ε), |ck | ≤ 2^{O(Tε)} poly(1/ε), |φk | ≤ 1 for all k = 1, . . . , L.
To show the above theorem, we use the Fourier series technique. However, the technique cannot be
used directly since it works only for periodic functions with bounded periodicities. In order to get a good
approximation for x ∈ [0, ∞), we leverage the fact that limx→∞ µ(x) = 0 and develop a general framework
that uses the Fourier series decomposition as a subroutine.
Now, we state some implications of the above results. Consider the utility function
χ̃(x) = { 1, x ∈ [0, 1];  −x/δ + 1/δ + 1, x ∈ [1, 1 + δ];  0, x > 1 + δ }  (1)
Figure 1: (1) The utility function χ̃(x), a continuous variant of the threshold function χ(x); (2) a smoother variant of χ(x); (3) the utility function χ̃2 (x), a continuous variant of the 2-d threshold function χ2 (x).
where δ > 0 is a small constant (see Figure 1(1)). It is easy to verify that χ̃ is 1/δ-Lipschitz and Tε = 2 for any δ < 1. Therefore, Theorem 2 is applicable. This example is interesting since χ̃ can be viewed as a continuous variant of the threshold function
χ(x) = { 1, x ∈ [0, 1];  0, x > 1 },  (2)
for which maximizing the expected utility is equivalent to maximizing Pr(w(S) ≤ 1). We first note that even the problem of computing the probability Pr(w(S) ≤ 1) exactly for a fixed set S is #P-hard [38], and that there is an FPTAS [42]. Designing approximation algorithms for such special cases has been considered several times in the literature for various combinatorial problems, including stochastic shortest path [52], stochastic spanning tree [35, 26], stochastic knapsack [27] and some other stochastic problems [2, 50].
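The reason maximizing E[χ̃(w(S))] yields bi-criterion guarantees for Pr(w(S) ≤ 1) is a simple sandwich: χ̃ lies pointwise between the indicators of [0, 1] and [0, 1 + δ], so Pr(w ≤ 1) ≤ E[χ̃(w)] ≤ Pr(w ≤ 1 + δ) for any nonnegative random variable w. A quick check on an arbitrary made-up discrete distribution (a sketch):

```python
delta = 0.2

def chi_tilde(x):
    if x <= 1:
        return 1.0
    if x <= 1 + delta:
        return -x / delta + 1 / delta + 1   # the linear ramp from (1)
    return 0.0

dist = [(0.8, 0.35), (1.1, 0.40), (1.5, 0.25)]   # (value, probability)

p_le_1  = sum(p for v, p in dist if v <= 1)
p_le_1d = sum(p for v, p in dist if v <= 1 + delta)
eu      = sum(p * chi_tilde(v) for v, p in dist)
print(p_le_1 <= eu <= p_le_1d)   # True: 0.35 <= 0.55 <= 0.75
```

So a solution that (approximately) maximizes E[χ̃(w(S))] cannot lose much against the best Pr(w(S) ≤ 1), at the price of relaxing the threshold from 1 to 1 + δ.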
It is interesting to compare our result with the result for the stochastic shortest path problem considered by Nikolova et al. [52, 50]. In [52], they show that there is an exact O(n^{log n}) time algorithm for maximizing the probability that the length of the path is at most 1, i.e., Pr(w(S) ≤ 1), assuming all edges are normally distributed and there is a path with mean at most 1. Later, Nikolova [50] extended the result to an FPTAS for any problem under the same assumptions, provided the deterministic version of the problem has a polynomial time exact algorithm. We can see that under such assumptions, the optimal probability is at least 1/2.4 Therefore, under the same assumption, and further assuming that Pr(we < 0) is minuscule,5 our algorithm is a PTAS for maximizing E[χ̃(w(S))], which can be thought of as a variant of the problem of maximizing E[χ(w(S))]. Indeed, we can translate this result into a bi-criterion approximation of the following form: for any fixed constants δ, ε > 0, we can find in polynomial time a solution S such that
Pr(w(S) ≤ 1 + δ) ≥ (1 − ε) Pr(w(S∗ ) ≤ 1),
where S∗ is the optimal solution (Corollary 2). We note that such a bi-criterion approximation was previously known only for exponentially distributed edges [52].
Let us consider another application of our results, to the stochastic knapsack problem defined in [27]. Given a set U of independent random variables {x1 , . . . , xn } with associated profits {v1 , . . . , vn } and an overflow probability γ, we are asked to pick a subset S of U such that
Pr( ∑i∈S xi ≥ 1 ) ≤ γ
4The sum of multiple Gaussians is also a Gaussian. Hence, if we assume the mean of the length of a path (which is a Gaussian) is at most 1, the probability that the length of the path is at most 1 is at least 1/2.
5Our technique can only handle distributions with positive supports. Thus, we have to assume that the probability that a negative value appears is minuscule (e.g., less than 1/n2 ) and can be safely ignored (because the probability that there is any realized negative value is at most 1/n).
and the total profit ∑i∈S vi is maximized. Goel and Indyk [27] showed that, for any constant ε > 0, there is a polynomial time algorithm that can find a solution S with profit at least the optimum and Pr(∑i∈S xi ≥ 1 + ε) ≤ γ(1 + ε) for exponentially distributed variables. They also gave a quasi-polynomial time approximation scheme for Bernoulli distributed random variables. Quite recently, in parallel with our work, Bhalgat et al. [13] obtained the same result for arbitrary distributions under the assumption that γ = Θ(1). Their technique is based on discretizing the distributions and is quite involved.6 Our result, applied to stochastic knapsack, matches that of Bhalgat et al. under the same assumption. Our algorithm is arguably simpler and has a much better running time (Theorem 7).
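On tiny instances, the stochastic knapsack objective can be evaluated exactly by enumeration, which is handy for sanity-checking approximation algorithms. A sketch (the items, Bernoulli size distributions and γ below are made-up test data, not from the paper):

```python
import itertools

# Items: Bernoulli sizes (size s with probability q, else 0) and fixed profits.
items = [
    (0.6, 0.5, 3.0),   # (s, q, profit)
    (0.5, 0.4, 2.0),
    (0.9, 0.5, 4.0),
]
gamma = 0.3            # allowed overflow probability

def overflow_prob(subset):
    """Exact Pr(sum of realized sizes >= 1) by enumerating all realizations."""
    total = 0.0
    for outcome in itertools.product([0, 1], repeat=len(subset)):
        p, size = 1.0, 0.0
        for realized, (s, q, _) in zip(outcome, subset):
            p *= q if realized else 1.0 - q
            size += s if realized else 0.0
        if size >= 1:
            total += p
    return total

best = max(
    (sub for r in range(len(items) + 1)
         for sub in itertools.combinations(items, r)
         if overflow_prob(sub) <= gamma),
    key=lambda sub: sum(it[2] for it in sub),
)
print(sum(it[2] for it in best))   # 7.0: items 1 and 3, overflow prob 0.25 <= gamma
```

The enumeration is exponential in n, of course; the point of the results above is precisely to avoid it while nearly matching its output.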
Equally importantly, we can extend our basic approximation scheme to handle generalizations such as
multiple utility functions and multidimensional weights. Interesting applications of these extensions include
various generalizations of stochastic knapsack, such as stochastic multiple knapsack (Theorem 10) and
stochastic multidimensional knapsack (stochastic packing) (Theorem 11).
Class Cconcave : We assume the utility function µ : [0, ∞) → [0, ∞) is a concave, monotone nondecreasing
function. This is a popular class of utility functions used to model risk-averse behaviors. For this class of
utility functions, we can obtain the following theorem in Section 5.
Theorem 3 Assume the utility function µ belongs to Cconcave , and there is a pseudopolynomial algorithm
for the exact version of A. Then, there is a PTAS for EUM(A).
Theorem 3 is also obtained by an application of Theorem 1. However, instead of approximating the
original utility function µ using a short sum of exponentials, which may not be possible in general (see footnote 7), we try to
approximate a truncated version of µ. Theorem 3 recovers the recent result of [14]. Finally, we remark that the
technique of [14] strongly relies on the concavity of µ, and seems difficult to extend to handle non-concave
utility functions.
Class Cincreasing : We assume the utility function µ : [0, ∞) → [0, ∞) is a positive, differentiable, and
increasing function. For technical reasons, we assume dµ(x)/dx ∈ [L, U] for some constants L, U > 0 and all
x ≥ 0. For this class of utility functions, we can obtain the following theorem in Section 6.
Theorem 4 Assume the utility function µ belongs to Cincreasing , and there is a pseudopolynomial algorithm
for the exact version of A. Then, there is a PTAS for EUM(A).
Again, it may not be possible in general to approximate such an increasing function using a finite sum
of exponentials. Instead, we approximate a truncated version of µ, similar to the concave case. We note this
is the first such result for general increasing utility functions. Removing the bounded derivative assumption
remains an interesting open problem.
We believe our technique can be used to handle other classes of utility functions or other stochastic
optimization problems.
1.2 Related Work
In recent years, stochastic optimization problems have drawn much attention from the computer science community, and stochastic versions of many classical combinatorial optimization problems have been studied.
Footnote 6: They also obtain several results related to stochastic knapsack, using their discretization technique together with
other ideas. Notably, they obtained a bi-criteria PTAS for the adaptive stochastic knapsack problem [13].

Footnote 7: Suppose µ is a finite sum of exponentials. When x approaches infinity, |µ(x)| either is periodic, or approaches
infinity, or approaches 0.
In particular, a significant portion of the efforts has been devoted to the two-stage stochastic optimization
problem. In such a problem, in a first stage, we are given probabilistic information about the input but the
cost of selecting an item is low; in a second stage, the actual input is revealed but the costs for the elements
are higher. We are asked to make decisions after each stage and minimize the expected cost. Some general
techniques have been developed [31, 60]. We refer the interested reader to [64] for a comprehensive survey.
Another widely studied type of problem considers designing adaptive probing policies for stochastic optimization problems where the existence or the exact weight of an element can only be known upon a probe.
There is typically a budget for the number of probes (see e.g., [30, 19]), or we require an irrevocable decision whether to include the probed element in the solution right after the probe (see e.g., [22, 17, 4, 21, 13]).
However, most of those works focus on optimizing the expected value of the solution. There is also sporadic
work on optimizing the overflow probability or some other objectives subject to the overflow probability
constraints. In particular, a few recent works have explicitly motivated such objectives as a way to capture
the risk-averse type of behaviors [2, 50, 63]. Besides those works, there has been little work on optimizing
more general utility functions for combinatorial stochastic optimization problems from an approximation
algorithms perspective.
The most related work to ours is the stochastic shortest path problem (Stoch-SP), which was also the
initial motivation for this work. The problem has been studied extensively for several special utility functions
in the operations research community. Sigal et al. [61] studied the problem of finding the path with the greatest
probability of being the shortest path. Loui [44] showed that Stoch-SP reduces to the shortest path (and
sometimes longest path) problem if the utility function is linear or exponential. Nikolova et al. [51] identified
more specific utility and distribution combinations that can be solved optimally in polynomial time. Much
work considered dealing with more general utility functions, such as piecewise linear or concave functions,
e.g., [48, 49, 6]. However, these algorithms are essentially heuristics and the worst case running times are
still exponential. Nikolova et al. [52] studied the problem of maximizing the probability that the length of the
chosen path is less than some given parameter. Besides the result we mentioned before, they also considered
Poisson and exponential distributions. Despite much effort on this problem, no algorithm is known to run
in polynomial time and have provable performance guarantees, especially for more general utility functions
or more general distributions. This is perhaps because the hardness comes from different sources, as also
noted in [52]: the shortest path selection per se is combinatorial; the distribution of the length of a path is
the convolution of the distributions of its edges; the objective is nonlinear; to list a few.
Kleinberg et al. [38] first considered the stochastic knapsack problem with Bernoulli-type distributions
and provided a polynomial-time O(log 1/γ) approximation where γ is the given overflow probability. In
the same paper, they noticed that even computing the overflow probability for a fixed set of items is #P-hard.
Li and Shi [42] provided an FPTAS for computing the overflow probability (or the threshold probability
for a sum of random variables). For item sizes with exponential distributions, Goel and Indyk [27] provided a bi-criterion PTAS, and for Bernoulli-distributed items they gave a quasi-polynomial approximation
scheme. Chekuri and Khanna [16] pointed out that a PTAS can be obtained for the Bernoulli case using
their techniques for the multiple knapsack problem. Goyal and Ravi [29] showed a PTAS for Gaussian distributed sizes. Bhalgat, Goel and Khanna [13] developed a general discretizaton technique that reduces the
distributions to a small number of equivalent classes which we can efficiently enumerate for both adaptive
and nonadaptive versions of stochastic knapsack. They used this technique to obtain improved results for
several variants of stochastic knapsack, notably a bi-criterion PTAS for the adaptive version of the problem.
In a recent work [43], the bi-criterion PTAS was further simplified and extended to the more general case
where the profit and size of an item can be correlated and an item can be cancelled in the middle. Dean et
al. [22] gave the first constant approximation for the adaptive version of stochastic knapsack. The adaptive
version of stochastic multidimensional knapsack (or equivalently stochastic packing) has been considered
in [21, 13] where constant approximations and a bi-criterion PTAS were developed.
This work is partially inspired by our prior work on top-k and other queries over probabilistic datasets [39,
41]. In fact, we can show that both the consensus answers proposed in [39] and the parameterized ranking
functions proposed in [41] follow the expected utility maximization principle where the utility functions
are materialized as distance metrics for the former and the weight functions for the latter. Our technique
for approximating the utility functions is also similar to the approximation scheme used in [41] in spirit.
However, no performance guarantees are provided in that work.
Recently, Li and Yuan [43] showed that an additive PTAS for µ ∈ Cbounded can be obtained using a
completely different approach, called the Poisson approximation technique. Roughly speaking, the Poisson
approximation technique allows us to extract a constant (depending on ε) number of features from each distribution (called a signature in [43]) and reduce the stochastic problem to a constant-dimensional deterministic
optimization problem, which is similar to the algorithm presented in this paper. We suspect that besides this
superficial similarity, there may be deeper connections between two different techniques.
There is a large volume of work on approximating functions using short exponential sums over a
bounded domain, e.g., [54, 9, 10, 11]. Some works also consider using linear combinations of Gaussians or
other kernels to approximate functions with finite support over the entire real axis (−∞, +∞) [18]. This
is however impossible using exponentials since α^x is either periodic (if |α| = 1) or approaches infinity
when x → +∞ or x → −∞ (if |α| ≠ 1).
2 An Overview of Our Approach
The high level idea of our approach is very simple and consists of the following steps:
1. We first observe that the problem is easy if the utility function is an exponential function. Specifically,
consider the exponential utility function µ(x) = φ^x for some complex number φ ∈ C. Fix an arbitrary
solution S. Due to the independence of the elements, we can see that

E[φ^{w(S)}] = E[φ^{Σ_{e∈S} w_e}] = E[ ∏_{e∈S} φ^{w_e} ] = ∏_{e∈S} E[φ^{w_e}].

Taking log on both sides, we get log E[φ^{w(S)}] = Σ_{e∈S} log E[φ^{w_e}]. If φ is a positive real number
and E[φ^{w_e}] ≤ 1 (or equivalently, −log E[φ^{w_e}] ≥ 0), this reduces to the deterministic optimization
problem.
P
2. In light of P
the above observation, we -approximate the utility function µ(x) by a short exponential
x
sum, i.e., L
i=1 ci φi with L being a small value (only depending on ), where (ci and φi may be
P
w(S)
complex numbers. Hence, E[µ(w(S))] can be approximated by L
].
i=1 ci E[φi
w(S)
3. Consider the following multi-criterion version of the problem with L objectives {E[φi ]}i=1,...,L :
w(S)
given L complex numbers v1 , . . . , vL , we want to find a solution S such that E[φi ] ≈ vi for i =
1, . . . , L. We achieve this by utilizing the pseudopolynomial time algorithm for the exact version of
the problem. We argue that we only need to consider a polynomial number of v1 , . . . , vL combinations
(which we call configurations) to find out the approximate optimum.
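The factorization in step 1 can be checked numerically. Below is a minimal sketch (the discrete weight distributions and the base φ are made up for illustration) confirming that E[φ^{w(S)}] equals the product of the per-element moments E[φ^{w_e}].

```python
import cmath
import itertools

# Sanity check of step 1 on made-up data: for independent integer-valued
# weights w_e, E[phi^{w(S)}] factors into the product of E[phi^{w_e}].
dists = [
    {0: 0.5, 1: 0.5},        # distribution of w_{e1}: value -> probability
    {1: 0.3, 2: 0.7},        # w_{e2}
    {0: 0.2, 3: 0.8},        # w_{e3}
]
phi = 0.9 * cmath.exp(0.4j)  # an arbitrary complex base with |phi| <= 1

def moment(dist, base):
    """E[base^w] for one discrete weight distribution."""
    return sum(p * base ** v for v, p in dist.items())

# Product of per-element moments.
prod = 1
for d in dists:
    prod *= moment(d, phi)

# Brute-force expectation over the joint support.
direct = 0
for combo in itertools.product(*(d.items() for d in dists)):
    pr = 1.0
    for _, p in combo:
        pr *= p
    direct += pr * phi ** sum(v for v, _ in combo)

assert abs(prod - direct) < 1e-12
```

The same `moment` routine is what the algorithm needs for each element; the product (or sum of logs) then handles any solution S.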
In Section 3, we show how to solve the multi-criterion problem provided that a short exponential sum
approximation of µ is given. In particular, we prove Theorem 1. Then, we show how to approximate
µ ∈ Cbounded by a short exponential sum by proving Theorem 2 in Section 4.1 and Section 4.2. For µ ∈
Cconcave or µ ∈ Cincreasing , it may not be possible to approximate µ directly by an exponential sum, and
some additional ideas are required. The details are provided in Section 5 and Section 6.
We still need to show how to compute E[φ^{w_e}]. If w_e is a discrete random variable with a polynomial
size support, we can easily compute E[φ^{w_e}] in polynomial time. If w_e has an infinite discrete or continuous
support, we cannot compute E[φ^{w_e}] directly and need to approximate it. We briefly discuss this issue and
its implications in Appendix A.
3 Proof of Theorem 1
Now, we prove Theorem 1. We start with some notation. We use |c| and arg(c) to denote the absolute
value and the argument of the complex number c ∈ C, respectively. In other words, c = |c| · (cos(arg(c)) +
i sin(arg(c))) = |c| e^{i arg(c)}. We always require arg(c) ∈ [0, 2π) for any c ∈ C. Recall that we say the
exponential sum Σ_{i=1}^L c_i φ_i^x is an ε-approximation for µ(x) if the following holds:

|µ(x) − Σ_{i=1}^L c_i φ_i^x| ≤ ε‖µ‖∞   for x ≥ 0.
We first show that if the utility function can be decomposed exactly into a short exponential sum, we can
approximate the optimal expected utility well.
Theorem 5 Assume that µ̃(x) = Σ_{k=1}^L c_k φ_k^x is the utility function, where |φ_k| ≤ 1 for 1 ≤ k ≤ L. Let
τ = max_k |c_k|/‖µ‖∞. We also assume that there is a pseudopolynomial algorithm for the exact version of
A. Then, for any ε > 0, there is an algorithm that runs in time (n/ε)^{O(L)} and finds a solution S such that

|E[µ̃(w(S))] − E[µ̃(w(S̃))]| < ε‖µ‖∞,

where S̃ = arg max_{S′} |E[µ̃(w(S′))]|.
We use the scaling and rounding technique that has been used often in multi-criterion optimization
problems (e.g., [58, 55]). Since our objective function is not additive and not monotone, the general results
for multi-criterion optimization [55, 46, 58, 1] do not directly apply here. We provide the details of the
algorithm here. We use the following parameters:
γ = ε/(Lnτ),   J = max( −ln(ε/(Lτ)) · n/γ , 2πn/γ ).

Let V be the set of all 2L-dimensional integer vectors of the form v = ⟨x_1, y_1, . . . , x_L, y_L⟩ where 1 ≤ x_i ≤
J and 1 ≤ y_i ≤ J for i = 1, . . . , L.
For each element e ∈ U, we associate with it a 2L-dimensional integer vector

Ft(e) = ⟨α_1(e), β_1(e), . . . , α_L(e), β_L(e)⟩,

where

α_i(e) = min( ⌊−ln|E[φ_i^{w_e}]| / γ⌋ , ⌊J/n⌋ )   and   β_i(e) = ⌊arg(E[φ_i^{w_e}]) / γ⌋.   (3)
We call Ft(e) the feature vector of e. Since |φ_i| ≤ 1, we can see that α_i(e) ≥ 0 for any e ∈ U. It is easy
to see that Ft(e) ∈ V for all e ∈ U and Σ_{e∈S} Ft(e) ∈ V for all S ⊆ U. Intuitively, α_i(e) and β_i(e) can be
thought of as the scaled and rounded versions of −ln|E[φ_i^{w_e}]| and arg(E[φ_i^{w_e}]), respectively.
We maintain J^{2L} = (nτ/ε)^{O(L)} configurations (a configuration is just like a state in a dynamic program).
Each configuration Cf(v) is indexed by a 2L-dimensional vector v ∈ V and takes a 0/1 value. In particular,
the value of Cf(v) for each v ∈ V is defined as follows: For each vector v ∈ V,

1. Cf(v) = 1 if and only if there is a feasible solution S ∈ F such that Σ_{e∈S} Ft(e) = v.
2. Cf(v) = 0 otherwise.
For any v = ⟨x_1, y_1, . . . , x_L, y_L⟩, define the value of v to be

Val(v) = Σ_{k=1}^L c_k e^{−x_k γ + i y_k γ}.
Lemma 1 tells us that the value of a configuration is close to the expected utility of the corresponding solution. Lemma 2 shows that we can compute those configurations in polynomial time.
Lemma 1 Suppose µ̃(x) = Σ_{k=1}^L c_k φ_k^x, where |φ_k| ≤ 1 for all k = 1, . . . , L. Let τ = max_k |c_k|/‖µ‖∞.
For any vector v = ⟨x_1, y_1, . . . , x_L, y_L⟩ ∈ V, Cf(v) = 1 if and only if there is a feasible solution S ∈ F
such that

|E[µ̃(w(S))] − Val(v)| = |E[µ̃(w(S))] − Σ_{k=1}^L c_k e^{−x_k γ + i y_k γ}| ≤ O(ε‖µ‖∞).
Proof: We first notice that E[µ̃(w(S))] = E[Σ_{k=1}^L c_k φ_k^{w(S)}] = Σ_{k=1}^L c_k E[φ_k^{w(S)}]. Therefore, it suffices
to show that for all k = 1, . . . , L, |E[φ_k^{w(S)}] − e^{−x_k γ + i y_k γ}| ≤ O(ε/(Lτ)). Since Cf(v) = 1, we know that
Σ_{e∈S} Ft(e) = v for some feasible solution S ∈ F. In other words, we have Σ_{e∈S} α_k(e) = x_k and
Σ_{e∈S} β_k(e) = y_k for all 1 ≤ k ≤ L.
Fix an arbitrary 1 ≤ k ≤ L. First, we can see that the arguments of E[φ_k^{w(S)}] and e^{−x_k γ + i y_k γ} are close:

|arg(E[φ_k^{w(S)}]) − y_k γ| ≤ Σ_{e∈S} |arg(E[φ_k^{w_e}]) − β_k(e)γ| ≤ Σ_{e∈S} γ ≤ nγ = ε/(Lτ),
where we use arg(c) to denote the argument of the complex number c. Now, we show that the magnitudes of
E[φ_k^{w(S)}] and e^{−x_k γ + i y_k γ} are also close. We distinguish two cases:

1. Recall that α_i(e) = min( ⌊−ln|E[φ_i^{w_e}]|/γ⌋ , ⌊J/n⌋ ). If there is some e ∈ S such that
−ln|E[φ_k^{w_e}]|/γ > J/n (which implies that α_k(e) = ⌊J/n⌋), we know that

−ln(|E[φ_k^{w(S)}]|) = Σ_{e∈S} (−ln(|E[φ_k^{w_e}]|)) > Jγ/n.

In this case, we also have x_k = Σ_{e∈S} α_k(e) ≥ ⌊J/n⌋. Thus, both |E[φ_k^{w(S)}]| and |e^{−x_k γ}| are at
most e^{γ} e^{−Jγ/n} ≤ e^{γ} · ε/(Lτ), and hence

| |E[φ_k^{w(S)}]| − |e^{−x_k γ}| | < e^{γ} e^{−Jγ/n} = O(ε/(Lτ)).
2. On the other hand, if α_k(e) = ⌊−ln|E[φ_k^{w_e}]|/γ⌋ for all e ∈ S, we can see that

|−ln(|E[φ_k^{w(S)}]|) − x_k γ| = |Σ_{e∈S} (−ln(|E[φ_k^{w_e}]|) − α_k(e)γ)| ≤ Σ_{e∈S} γ ≤ nγ ≤ ε/(Lτ).

Since the derivative of e^x is less than 1 for x < 0, we can get that

| |E[φ_k^{w(S)}]| − |e^{−x_k γ}| | ≤ |e^{−x_k γ − ε/(Lτ)} − e^{−x_k γ}| ≤ ε/(Lτ).
For any two complex numbers a, b with |a| ≤ 1 and |b| ≤ 1, if ||a| − |b|| < ε and |∠ab| = |arg(a) −
arg(b)| < ε, we can see that

|a − b|² = |a|² + |b|² − 2|a||b| cos(∠ab)
        = (|a| − |b|)² + 2|a||b|(1 − cos(∠ab))
        ≤ ε² + 2(1 − cos²(∠ab))
        = ε² + 2 sin²(∠ab)
        ≤ ε² + 2|arg(a) − arg(b)|² ≤ 3ε².

In the third inequality, we use the fact that sin x < x for all x > 0. The proof is completed. □
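This bound is easy to sanity-check numerically. The snippet below (parameters arbitrary, not from the paper) samples pairs a, b whose magnitudes and arguments differ by less than ε and confirms |a − b| ≤ √3 · ε.

```python
import cmath
import math
import random

# Numeric spot-check (not part of the proof) of the bound: if |a|,|b| <= 1,
# | |a| - |b| | < eps and |arg(a) - arg(b)| < eps, then |a - b| <= sqrt(3)*eps.
random.seed(0)
eps = 0.05
for _ in range(10000):
    r = random.uniform(eps, 1 - eps)            # keep magnitudes in (0, 1)
    theta = random.uniform(eps, 2 * math.pi - eps)
    dr = random.uniform(-eps, eps) * 0.99       # magnitude perturbation
    dt = random.uniform(-eps, eps) * 0.99       # argument perturbation
    a = r * cmath.exp(1j * theta)
    b = (r + dr) * cmath.exp(1j * (theta + dt))
    assert abs(a - b) <= math.sqrt(3) * eps
```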
Lemma 2 Suppose there is a pseudopolynomial time algorithm for the exact version of A, which runs in
time polynomial in n and t (t is the maximum integer in the instance of A). Then, we can compute the values
of all configurations {Cf(v)}_{v∈V} in time (nτ/ε)^{O(L)}.

Proof: For each vector v ∈ V, we can encode it as a nonnegative integer I(v) upper bounded by J^{2L} =
(nτ/ε)^{O(L)}. In particular, each coordinate of v takes the position of a specific digit in the integral representation,
and the base is chosen large enough that no carry can occur when we add at most n feature vectors. Then,
determining the value of a configuration Cf(v) is equivalent to determining whether there is a feasible solution
S ∈ F such that the total weight of S (i.e., Σ_{e∈S} I(Ft(e))) is exactly the given value I(v). Suppose the
pseudopolynomial time algorithm for the exact version of A runs in time P_A(n, t) for some polynomial P_A.
Therefore, the value of each such Cf(v) can also be computed in time P_A(n, I(v)) = P_A(n, (nτ/ε)^{O(L)}) =
(nτ/ε)^{O(L)}. Since J is bounded by (nτ/ε)^{O(1)}, the number of configurations is (nτ/ε)^{O(L)}. The total
running time is (nτ/ε)^{O(L)} × (nτ/ε)^{O(L)} = (nτ/ε)^{O(L)}. □
Now, everything is ready to prove Theorem 5.

Proof of Theorem 5: We first use the algorithm in Lemma 2 to compute the values for all configurations.
Then, we find the configuration Cf(⟨x_1, y_1, . . . , x_L, y_L⟩) that has value 1 and that maximizes the quantity
|Val(v)| = |Σ_{k=1}^L c_k e^{−x_k γ + i y_k γ}|. The feasible solution S corresponding to this configuration is our final
solution. It is easy to see that the theorem follows from Lemma 1. □
Theorem 1 can be readily obtained from Theorem 5 and the fact that µ̃ is an ε-approximation of µ.

Proof of Theorem 1: Suppose S is our solution and S* is the optimal solution for the utility function µ. Recall
µ̃(x) = Σ_{k=1}^L c_k φ_k^x. From Theorem 5, we know that

|E[µ̃(w(S))]| ≥ |E[µ̃(w(S*))]| − O(ε‖µ‖∞).

Since µ̃ is an ε-approximation of µ, we can see that

|E[µ(w(S))] − E[µ̃(w(S))]| = |∫ (µ(x) − µ̃(x)) dP_S(x)| ≤ ∫ ε‖µ‖∞ dP_S(x) ≤ ε‖µ‖∞

for any solution S, where P_S is the probability measure of w(S). Therefore, we have

|E[µ(w(S))]| ≥ |E[µ̃(w(S))]| − ε‖µ‖∞ ≥ |E[µ̃(w(S*))]| − O(ε‖µ‖∞)
           ≥ |E[µ(w(S*))]| − O(ε‖µ‖∞).

This completes the proof of Theorem 1. □
4 Class Cbounded
The main goal of this section is to prove Theorem 2. In Section 4.1, we develop a generic algorithm that
takes as a subroutine an algorithm FOURIER for approximating functions over a bounded interval domain,
and approximates µ(x) ∈ Cbounded on the infinite domain [0, +∞). In Section 4.2, we use the Fourier
series expansion as the choice of FOURIER and show that important classes of utility functions can be
approximated well.
4.1 Approximating the Utility Function
There are many works on approximating functions using short exponential sums, e.g., the Fourier decomposition approach [62], Prony’s method [54], and many others [9, 10]. However, their approximations are
done over a finite interval domain, say [−π, π] or over a finite number of discrete points. No error bound
can be guaranteed outside the domain. Our algorithm is a generic procedure that turns an algorithm that can
approximate functions over [−π, π] into one that can approximate our utility function µ over [0, +∞), by
utilizing the fact that limx→∞ µ(x) = 0.
Recall that for µ ∈ Cbounded, we assume that for any constant ε > 0, there exists a constant Tε such that
µ(x) ≤ ε for x > Tε. We also assume there is an algorithm FOURIER that, for any function f (under some
conditions specified later), can produce an exponential sum f̂(x) = Σ_{i=1}^L c_i φ_i^x which is an ε-approximation
of f(x) in [−π, π] such that |φ_i| ≤ 1 and L depends only on ε and f. In fact, we can assume w.l.o.g. that
FOURIER can approximate f(x) over [−B, B] for any B = O(1). This is because we can apply FOURIER to
the scaled version g(x) = f(x · B/π) (which is defined on [−π, π]) and then scale the obtained approximation
ĝ(x) back to [−B, B], i.e., the final approximation is f̂(x) = ĝ(π/B · x). Scaling a function by a constant
factor B/π typically does not affect the smoothness of f in any essential way and we can still apply FOURIER.
Recall that our goal is to produce an exponential sum that is an ε-approximation for µ(x) in [0, +∞). We
denote this procedure by EXPSUM-APPROX.
Algorithm: EXPSUM-APPROX(µ)

1. Initially, we slightly change the function µ(x) to a new function µ̂(x) as follows: We require µ̂(x) to be a
"smooth" function in [−2Tε, 2Tε] such that µ̂(x) = µ(x) for all x ∈ [0, Tε] and µ̂(x) = 0 for |x| ≥ 2Tε.
We choose µ̂(x) in [−2Tε, 0] and [Tε, 2Tε] such that µ̂(x) is smooth. We do not specify the exact
smoothness requirements now since they may depend on the choice of FOURIER. Note that there may
be many ways to interpolate µ such that the above conditions are satisfied (see Example 1 below).
The only properties we need are: (1) µ̂ is amenable to algorithm FOURIER; (2) |µ̂(x) − µ(x)| ≤ ε for
x ≥ 0.

2. We apply FOURIER to g(x) = η^x µ̂(x) over the domain [−hTε, hTε] (η ≥ 1 and h ≥ 2 are constants to
be determined later). Suppose the resulting exponential sum is ĝ(x) = Σ_{i=1}^L c_i φ_i^x, such that |ĝ(x) −
g(x)| ≤ ε for all x ∈ [−hTε, hTε].

3. Let µ̃(x) = Σ_{i=1}^L c_i (φ_i/η)^x, which is our final approximation of µ(x) on [0, ∞).
Example 1 Consider the utility function µ(x) = 1/(x + 1). Let Tε = 1/ε − 1, so µ(x) < ε for all x > Tε.
Now we create the function µ̂(x) according to the first step of EXPSUM-APPROX. If we only require µ̂(x) to be
continuous, then we can use, for instance, the following piecewise function: µ̂(x) = 1/(x + 1) for x ∈ [0, Tε];
µ̂(x) = −εx/Tε + 2ε for x ∈ [Tε, 2Tε]; µ̂(x) = 0 for x > 2Tε; µ̂(x) = µ̂(−x) for x < 0. It is easy to see that
µ̂ is continuous and ε-approximates µ. □
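Example 1 translates directly into code. The sketch below (with an arbitrary ε) implements the interpolation and checks continuity at the breakpoints as well as the ε-approximation property.

```python
# Example 1's interpolation, written out with an illustrative eps: mu_hat
# agrees with mu on [0, T], decays linearly to 0 on [T, 2T], vanishes for
# |x| >= 2T, and is even.
eps = 0.1
T = 1 / eps - 1

def mu(x):
    return 1 / (x + 1)

def mu_hat(x):
    x = abs(x)                      # mu_hat(-x) = mu_hat(x)
    if x <= T:
        return mu(x)
    if x <= 2 * T:
        return -eps * x / T + 2 * eps
    return 0.0

# Continuity at the breakpoints, and |mu_hat - mu| <= eps on [0, +inf).
assert abs(mu_hat(T) - mu(T)) < 1e-12          # mu(T) = eps
assert abs(mu_hat(2 * T)) < 1e-12
xs = [i * 0.01 for i in range(5000)]
assert max(abs(mu_hat(x) - mu(x)) for x in xs) <= eps + 1e-12
```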
By setting η = 2 and

h ≥ max( 2, log( Σ_{i=1}^L |c_i| / ε ) / Tε ),   (4)

we can show the following theorem.
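Reading the log in (4) as base 2 (an inference from η = 2), this choice of h makes the tail of the exponential sum negligible: Σ_i |c_i| / 2^{hTε} ≤ ε. A quick sanity check with hypothetical numbers:

```python
import math

# Toy check of condition (4) with eta = 2: choosing h this way guarantees
# sum_i |c_i| / 2^(h*T) <= eps, the tail bound used in Lemma 3. The values
# of eps, T and the coefficients c_i are hypothetical.
eps, T = 0.01, 3.0
cs = [0.8, -1.5, 2.2, 0.4]
h = max(2.0, math.log2(sum(abs(c) for c in cs) / eps) / T)
assert sum(abs(c) for c in cs) / 2 ** (h * T) <= eps * (1 + 1e-9)
```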
Lemma 3 µ̃(x) is a 2ε-approximation of µ(x).
Proof: We know that |ĝ(x) − g(x)| ≤ ε for x ∈ [0, hTε]. Therefore, we have that

|µ̃(x) − µ̂(x)| = |ĝ(x)/η^x − g(x)/η^x| ≤ ε/η^x ≤ ε.

Combining this with |µ̂(x) − µ(x)| ≤ ε, we obtain |µ̃(x) − µ(x)| ≤ 2ε for x ∈ [0, hTε]. For x > hTε, we can
see that

|µ̃(x)| = |Σ_{i=1}^L c_i (φ_i/η)^x| ≤ Σ_{i=1}^L |c_i| |φ_i/η|^x ≤ (1/2^x) Σ_{i=1}^L |c_i| ≤ (1/2^{hTε}) Σ_{i=1}^L |c_i| ≤ ε.

Since µ(x) < ε for x > hTε, the proof is complete. □
Remark: Since we do not know the c_i before applying FOURIER, we need to set h to be a quantity (only
depending on ε and Tε) such that (4) is always satisfied. In particular, we need to provide an upper bound
for Σ_{i=1}^L |c_i|. In the next subsection, we use the Fourier series decomposition as the choice for FOURIER,
which allows us to provide such a bound for a large class of functions.
4.2 Implementing FOURIER
Now, we discuss the choice of the algorithm FOURIER and the conditions that f(x) needs to satisfy so that it
is possible to approximate f(x) by a short exponential sum over a bounded interval. In fact, if we know in
advance that there is a short exponential sum that can approximate f, we can use the algorithms developed
in [10, 11] (for the continuous case) and [9] (for the discrete case). However, those works do not provide an
easy characterization of the class of functions. From now on, we restrict ourselves to the classic Fourier
series technique, which has been studied extensively and allows such characterizations.
Suppose from now on that f(x) is a real periodic function defined on [−π, π]. Consider the partial sum
of the Fourier series of the function f(x):

(S_N f)(x) = Σ_{k=−N}^{N} c_k e^{ikx},

where the Fourier coefficient c_k = (1/2π) ∫_{−π}^{π} f(x) e^{−ikx} dx. It has L = 2N + 1 terms. Since f(x) is a real
function, we have c_k = c̄_{−k} and the partial sum is also real. We are interested in the question under which
conditions the function S_N f converges to f (as N increases) and what the convergence rate is. Roughly
speaking, the "smoother" f is, the faster S_N f converges to f. In the following, we need one classic result
about the convergence of Fourier series and show how to use it in our problem.
We need a few more definitions. We say f satisfies the α-Hölder condition if |f(x) − f(y)| ≤ C|x − y|^α
for some constants C and α > 0 and any x and y. The constant C is called the Hölder coefficient of f, also
denoted as |f|_{C^{0,α}}. We say f is C-Lipschitz if f satisfies the 1-Hölder condition with coefficient C.
Example 2 It is easy to check that the utility function µ in Example 1 is 1-Lipschitz since |dµ(x)/dx| ≤ 1 for
x ≥ 0. We can also see that χ̃(x) (defined in (1)) is (1/δ)-Lipschitz.
We need the following classic result of Jackson.
Theorem 6 (See e.g., [56]) Suppose that f(x) is a real periodic function defined on [−π, π]. If f satisfies
the α-Hölder condition, it holds that

|f(x) − (S_N f)(x)| ≤ O( |f|_{C^{0,α}} ln N / N^α ).
We are ready to spell out the details of FOURIER. Recall that g(x) is obtained in step 2 of Algorithm EXPSUM-APPROX. By construction, g(−hTε) = g(hTε) = 0 for h ≥ 2. Hence, it can be considered as a periodic
function with period 2hTε. Note that in Jackson's theorem, the periodic function f is defined on [−π, π]. In
order to apply Jackson's theorem to g(x) over [−hTε, hTε], we consider the following function f, which is
a scaled version of g:

f(x) = g(xhTε/π).

Then, FOURIER returns the following function ĝ, which is a sum of exponential functions:

ĝ(x) = (S_N f)(xπ/(hTε)).

Now, we show that |ĝ(x) − g(x)| ≤ ε for all x ∈ [−hTε, hTε]. For the later parts of the analysis, we
need a few simple lemmas. The proofs of these lemmas are straightforward and thus omitted here.
Lemma 4 Suppose f : [a, c] → R is a continuous function which consists of two pieces f1 : [a, b] → R
and f2 : [b, c] → R. If both f1 and f2 satisfy the α-Hölder condition with Hölder coefficient C, then
|f |C 0,α ≤ 2C.
Lemma 5 Suppose g : [a, c] → R is a continuous function satisfying the α-Hölder condition with Hölder
coefficient C. Then, for f (x) = g(tx) for some t > 0, we have |f |C 0,α ≤ Ctα .
By Lemma 4, we know that the piecewise function µ̂ (defined in step 1 of EXPSUM-APPROX) satisfies the
α-Hölder condition with coefficient 2C. Therefore, we can easily see that g(x) = µ̂(x)η^x satisfies the α-
Hölder condition with coefficient at most 2^{1+2Tε} C on [−hTε, hTε] (this is because µ̂ is non-zero only in
[−2Tε, 2Tε]). According to Lemma 5, we have |f(x)|_{C^{0,α}} = |g(xhTε/π)|_{C^{0,α}} ≤ 2^{1+2Tε}(hTε/π)^α C. Using
Theorem 6, we obtain the following corollary.
Corollary 1 Suppose µ ∈ Cbounded satisfies the α-Hölder condition with |µ|_{C^{0,α}} = O(1). For

N = 2^{O(Tε)} (h/ε)^{1+1/α},

it holds that |g(x) − ĝ(x)| ≤ ε for x ∈ [−hTε, hTε].

Proof: Applying Theorem 6 to f and plugging in the given value of N, we can see that |f(x) − (S_N f)(x)| ≤ ε
for x ∈ [−π, π]. Hence, we have that |g(x) − ĝ(x)| = |f(xπ/(hTε)) − (S_N f)(xπ/(hTε))| ≤ ε for x ∈ [−hTε, hTε]. □
How to Choose h: Now, we discuss the issue left open in Section 4.1, that is, how to choose h (the value should
be independent of the c_i and L) to satisfy (4), when µ satisfies the α-Hölder condition for some α > 1/2. We
need the following result about the absolute convergence of Fourier coefficients: if f satisfies the α-Hölder
condition for some α > 1/2, then Σ_{i=−∞}^{+∞} |c_i| ≤ |f|_{C^{0,α}} · c_α, where c_α only depends on α [62]. We can
see that in order to ensure (4), it suffices to set the value of h such that

hTε ≥ log( 2^{1+2Tε} (hTε/π)^α C c_α / ε ) = 2Tε + O(log(hTε/ε)).

We can easily verify that the above condition can be satisfied by letting h = max( O((1/Tε) log(1/ε)), 2 ).
Proof of Theorem 2: Everything is in place to prove Theorem 2. First, we bound L by Corollary 1:

L = 2N + 1 = 2^{O(Tε)} poly(1/ε).

Next, we bound the magnitude of each c_k. Recall that c_k is the Fourier coefficient c_k = (1/2π) ∫_{−π}^{π} f(x) e^{−ikx} dx,
where f(x) = g(xhTε/π) = η^{xhTε/π} µ̂(xhTε/π) for x ∈ [−π, π]. Since hTε = max(O(Tε), O(log 1/ε)),
we can see that |f(x)| ≤ 2^{O(Tε)} poly(1/ε) for x ∈ [−π, π]. Therefore,

|c_k| ≤ (1/2π) ∫_{−π}^{π} |f(x)| dx ≤ 2^{O(Tε)} poly(1/ε).

Finally, combining Corollary 1 and Lemma 3, we complete the proof of Theorem 2. □
Figure 2: (1) The concave utility function µ(x) and ν(x) = H − µ(x). (2) The piecewise linear function
ν̄(x). (3)-(5) Decomposing ν̄(x) into three scaled copies of ρ(x).
5 Class Cconcave
In this section, we handle the case where the utility function µ : [0, ∞) → [0, ∞) is a concave nondecreasing
function, and our goal is to prove Theorem 3.

We use OPT to denote the optimal value of our problem EUM(A). We can assume without loss of
generality that we know OPT, modulo a multiplicative factor of (1 ± ε). This can be done by guessing all
powers of (1 + ε) between max_{e∈U} E[µ(w_e)] and E[µ(w(U))] (see footnote 8), and running our algorithm
for each guess. For ease of notation, we assume that our current guess is exactly OPT. Let

H = OPT/ε²,   T1 = µ^{−1}(OPT/ε)   and   T2 = µ^{−1}(OPT/ε²).   (5)
We first make the following simplifying assumption and show how to remove it later:

S1. We assume µ(0) = 0 and µ(x) = µ(T2) = H for all x > T2.

Lemma 6 If the utility function µ ∈ Cconcave satisfies the additional assumption S1, then, for any ε > 0, we
can obtain an exponential sum µ̃(x) = Σ_{k=1}^L c_k φ_k^x such that |µ̃(x) − µ(x)| ≤ O(ε·OPT) for all x > 0,
where L = poly(1/ε), |c_k| ≤ poly(1/ε)·H and |φ_k| ≤ 1 for all k = 1, . . . , L.
Proof: Consider the function ν(x) = H − µ(x). We can see that ν is a nonincreasing convex function and
ν(x) = 0 for all x > T2. We first approximate ν by a piecewise linear function ν̄ as follows. Let N = 1/ε³.
For all 0 ≤ i ≤ N, let

x_i = µ^{−1}(iH/N) = ν^{−1}((N − i)H/N)   and   x_{N+1} = ∞.

Let h_i = (ν(x_{i+1}) − ν(x_i))/(x_{i+1} − x_i) for 0 ≤ i ≤ N. The piecewise linear function ν̄ is defined by
ν̄(x_i) = ν(x_i) for all 0 ≤ i ≤ N and

ν̄(x) = ν(x_i) + (x − x_i)h_i,   for x ∈ [x_i, x_{i+1}].

It is easy to see that ν̄ is also a convex function (see Figure 2) and |ν̄(x) − ν(x)| ≤ H/N ≤ ε·OPT.
Now we show that ν̄ can be written as a linear sum of N scaled copies of the following function ρ:

ρ(x) = 1 − x for 0 ≤ x ≤ 1, and ρ(x) = 0 for x > 1.

Footnote 8: We can assume every e ∈ U is in at least one feasible solution S ∈ F. Otherwise, we can simply remove those
irrelevant elements. Then, OPT is at least max_{e∈U} E[µ(w_e)]. We can test whether an item is an irrelevant element by using the
pseudopolynomial time algorithm as follows: we assign the item weight 1 and all other items weight 0, and ask whether there is
a feasible solution with weight exactly 1.

We let ρ_{h,a}(x) = (−ha) ρ(x/a). It is easy to see that the first piece of ρ_{h,a} has slope h and ends at x = a.
Define

ρ_i(x) = ρ_{h_i − h_{i+1}, x_{i+1}}(x) = x_{i+1}(h_{i+1} − h_i) ρ(x/x_{i+1})   for 0 ≤ i ≤ N − 1.

It is not hard to verify that ν̄(x) = Σ_{i=0}^{N−1} ρ_i(x) (see Figure 2).
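The decomposition ν̄ = Σ_i ρ_i can be verified on a small instance. In the sketch below (H, µ and N are toy choices, not the paper's), the sum of scaled copies of ρ reproduces the linear interpolation of ν at the breakpoints x_i.

```python
import math

# Toy instance of the decomposition in Lemma 6's proof: the piecewise linear
# nu_bar equals a sum of scaled copies rho_i of rho(x) = max(1 - x, 0).
H, N = 1.0, 4
mu = lambda x: min(math.sqrt(x), 1.0)      # concave, nondecreasing, capped at H
nu = lambda x: H - mu(x)                   # convex, nonincreasing, 0 for x >= 1

xs = [(i / N) ** 2 for i in range(N + 1)]  # x_i = mu^{-1}(i * H / N)
hs = [(nu(xs[i + 1]) - nu(xs[i])) / (xs[i + 1] - xs[i]) for i in range(N)]
hs.append(0.0)                             # h_N = 0: nu is flat beyond x_N

rho = lambda x: max(1.0 - x, 0.0)

def nu_bar(x):
    """Sum of the copies rho_i(x) = x_{i+1} (h_{i+1} - h_i) rho(x / x_{i+1})."""
    return sum(xs[i + 1] * (hs[i + 1] - hs[i]) * rho(x / xs[i + 1])
               for i in range(N))

def nu_bar_direct(x):
    """Linear interpolation of nu at the breakpoints x_i (0 beyond x_N)."""
    for i in range(N):
        if x <= xs[i + 1]:
            return nu(xs[i]) + (x - xs[i]) * hs[i]
    return 0.0

for t in range(101):
    x = 1.2 * t / 100
    assert abs(nu_bar(x) - nu_bar_direct(x)) < 1e-9
```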
By Theorem 2, we can find a function τ̃(x) = Σ_{k=1}^D d_k ψ_k^x with D = poly(N/ε²) = poly(1/ε),
|d_k| = poly(1/ε) and |ψ_k| ≤ 1 for k = 1, . . . , D, such that |τ̃(x) − ρ(x)| ≤ ε²/N for x ≥ 0 (see footnote 9).
Consider the function

ν̃(x) = Σ_{i=0}^{N−1} x_{i+1}(h_{i+1} − h_i) τ̃(x/x_{i+1}) = Σ_{i=0}^{N−1} Σ_{k=1}^{D} x_{i+1}(h_{i+1} − h_i) d_k (ψ_k^{1/x_{i+1}})^x.

Clearly, ν̃ is the summation of ND exponentials. It is not difficult to see that the magnitude of each
coefficient, |x_{i+1}(h_{i+1} − h_i)d_k|, is at most −x_{i+1}h_i|d_k| ≤ poly(1/ε)·H. We can also see that

|ν̃(x) − ν(x)| ≤ |ν̃(x) − ν̄(x)| + |ν̄(x) − ν(x)| ≤ O(Hε³) ≤ O(ε·OPT)   for x ≥ 0.

Finally, letting µ̃(x) = H − ν̃(x) finishes the proof. □
Since ‖µ‖∞ = OPT/ε², Lemma 6 implies that µ̃ is an ε³-approximation of µ. Then, applying Theorem 1, we can immediately obtain a polynomial time algorithm that runs in time (n/ε)^{poly(1/ε)} and finds a
solution S ∈ F such that OPT − E[µ(w(S))] ≤ O(ε³)‖µ‖∞ = O(ε)·OPT, i.e., a PTAS.
Now, we show how to get rid of the assumption S1. From now on, the utility function µ is a general
increasing concave utility function with µ(0) = 0 (see footnote 10). Let µ_H(x) = min(µ(x), H). We can see that
µ_H(x) satisfies S1. We say a value p ∈ R+ is huge if p > T2; otherwise, we call it normal. We use
Huge to denote the set of huge values. For each element e, let w_e^{nm} be the random variable which has the
same distribution as w_e in the normal value region, and zero probability elsewhere. For any S ⊆ U, let
w^{nm}(S) = Σ_{e∈S} w_e^{nm}. In the following lemma, we show that µ_H is a good approximation for µ for normal
values.
Lemma 7 For any S ∈ F, we have that

E[µ(w^nm(S))] − O(ε)OPT ≤ E[µH(w^nm(S))] ≤ E[µ(w^nm(S))].

Proof: It is obvious that E[µH(w^nm(S))] ≤ E[µ(w^nm(S))]. So, we only need to prove the first inequality. For any S ∈ F, we have E[µ(w(S))] ≤ OPT. By Markov's inequality, Pr[w(S) ≥ T1] ≤ ε, which implies Pr[w^nm(S) ≥ T1] ≤ ε. Now, we claim that for any integer k ≥ 1,

Pr[w^nm(S) ≥ (k + 2)T1] ≤ ε · Pr[w^nm(S) ≥ kT1].   (6)
Consider the following stochastic process. Suppose the weights of the elements in S are realized one by one (say w^nm_{e_1}, . . . , w^nm_{e_n}). Let Z_t be the sum of the first t realized values. Let t_1 be the first time such that Z_{t_1+1} ≥ kT1. If this never happens, let t_1 = ∞ and Z_{t_1} = Z_n. Let event E_1 be t_1 ≤ n and E_2 be
⁹ It suffices to let T = 1 (i.e., ρ(x) = 0 for x ≥ 1).
¹⁰ The assumption that µ(0) = 0 is without loss of generality. If µ(0) > 0, we can solve the problem with the new utility function µ(x) − µ(0). It is easy to verify that a PTAS for the new problem is a PTAS for the original problem.
Z_n ≥ (k + 2)T1. Consider the random value Z_n − Z_{t_1} = Σ_{t=t_1+1}^{n} w^nm_{e_t}. As w^nm(S) = Z_n = Σ_{t=1}^{n} w^nm_{e_t} and all w^nm_{e_t} are nonnegative, we can see that

Pr[Z_n − Z_{t_1} > T1 | E_1] = Pr[Σ_{t=t_1+1}^{n} w^nm_{e_t} > T1 | E_1] ≤ Pr[Σ_{t=1}^{n} w^nm_{e_t} > T1] ≤ ε.

Moreover, we can see that event E_1 ∧ (Z_n − Z_{t_1} > T1) is a necessary condition for event E_2. Hence, the claim holds because

Pr[E_2] ≤ Pr[E_1] · Pr[Z_n − Z_{t_1} > T1 | E_1] ≤ ε · Pr[E_1].
From (6), we can see that Pr[w^nm(S) ≥ 3T1] ≤ ε², Pr[w^nm(S) ≥ 5T1] ≤ ε³, and so on. Furthermore, we can see that

E[µ(w^nm(S))] − E[µH(w^nm(S))] = ∫_H^∞ Pr[µ(w^nm(S)) ≥ x] dx = ∫_H^∞ Pr[w^nm(S) ≥ µ^{−1}(x)] dx
= (OPT/ε) ∫_0^∞ Pr[w^nm(S) ≥ µ^{−1}(H + kOPT/ε)] dk
≤ (OPT/ε) ∫_0^∞ Pr[w^nm(S) ≥ T2 + kT1] dk
≤ (2OPT/ε) Σ_{k=2/ε}^{∞} ε^k ≤ O(εOPT).

The first inequality holds due to the concavity of µ (or equivalently, the convexity of µ^{−1}):

µ^{−1}(H + kOPT/ε) ≥ T2 + µ^{−1}(kOPT/ε) ≥ T2 + k·µ^{−1}(OPT/ε) = T2 + kT1   for k ≥ 0. □
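The first equality in the display above is the layer-cake (tail-integral) identity E[µ(W)] − E[min(µ(W), H)] = ∫_H^∞ Pr[µ(W) ≥ x] dx. A minimal numerical check on a made-up discrete weight distribution:

```python
import numpy as np

# Illustrative discrete distribution and concave utility; not from the paper.
vals = np.array([0.5, 2.0, 9.0, 30.0])
probs = np.array([0.4, 0.3, 0.2, 0.1])
mu = lambda x: np.sqrt(x)      # increasing concave with mu(0) = 0
H = 2.0                        # truncation level; mu_H = min(mu, H)

# left side: E[mu(W)] - E[mu_H(W)], computed exactly
lhs = np.sum(probs * mu(vals)) - np.sum(probs * np.minimum(mu(vals), H))

# right side: int_H^inf Pr[mu(W) >= x] dx by a left Riemann sum (the
# integrand vanishes beyond max mu(vals))
xs = np.linspace(H, mu(vals).max(), 20001)
tail = np.array([probs[mu(vals) >= x].sum() for x in xs])
rhs = np.sum(tail[:-1] * np.diff(xs))

assert abs(lhs - rhs) < 1e-2
```

The two sides agree up to the discretization error of the Riemann sum.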
Now, we handle the contribution from the huge values. Let Huge = {p | p ≥ T2} and

Hg(e) = Σ_{p∈Huge} Pr[w_e = p] µ(p).¹¹

Hg(e) can be thought of as the expected contribution of the huge values of e. We need the following observation from [14]: the contribution of the huge values can be essentially linearized and separated from the contribution of the normal values, in the sense of the following lemma. We note that this simple insight has been used in a variety of contexts in stochastic optimization problems (e.g., [47, 33, 34]).

Lemma 8 (The first half of Theorem 2 in [14]) For any S ∈ F, we have that

E[µ(w(S))] ∈ (1 ± O(ε)) ( E[µ(w^nm(S))] + Σ_{e∈S} Hg(e) ).
¹¹ If w_e is continuously distributed, we let Hg(e) = ∫_{T2}^{∞} µ(x) p_e(x) dx. The algorithm and analysis for the continuous case are exactly the same as in the discrete case.
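For a single element, the split into normal and huge values is exact, since w_e^nm maps huge realizations to 0 and µ(0) = 0, so E[µ(w_e)] = E[µ(w_e^nm)] + Hg(e). A small check with illustrative numbers:

```python
import numpy as np

# Discrete distribution of one element's weight; T2 and the numbers are
# illustrative, not from the paper.
vals = np.array([1.0, 3.0, 50.0, 80.0])
probs = np.array([0.5, 0.3, 0.15, 0.05])
mu = lambda x: np.log1p(x)     # increasing concave with mu(0) = 0
T2 = 10.0                      # huge threshold

huge = vals > T2
Hg = np.sum(probs[huge] * mu(vals[huge]))

# w^nm keeps normal realizations and maps huge ones to 0 (where mu is 0)
E_mu = np.sum(probs * mu(vals))
E_mu_nm = np.sum(probs[~huge] * mu(vals[~huge]))

assert np.isclose(E_mu, E_mu_nm + Hg)
```

Lemma 8 states that, up to a (1 ± O(ε)) factor, the same additive split holds for a whole solution S.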
Now, we are ready to state our algorithm, which is an extension of the algorithm in Section 3. Using Lemma 6, we first obtain a function µ̃H(x) = Σ_{k=1}^{L} c_k φ_k^x such that |µ̃H(x) − µH(x)| ≤ εOPT. The feature vector Ft(e) is a (2L + 1)-dimensional integer vector

Ft(e) = ⟨α_1(e), β_1(e), . . . , α_L(e), β_L(e), ⌊n·Hg(e)/εOPT⌋⟩,

where α_i(e), β_i(e) are defined as in (3) with respect to w_e^nm. In other words, we extend the original feature vector by one more coordinate which represents the (scaled and rounded) contribution of the huge values. Similarly, each configuration Cf(v) is indexed by such a (2L + 1)-dimensional vector v. The last coordinate of v is at most n²/ε. As before, we let Cf(v) = 1 if and only if there is a feasible solution S ∈ F such that Σ_{e∈S} Ft(e) = v. We slightly modify the definition of Val(v) to incorporate the contribution of the huge values, as follows:

Val(⟨x_1, y_1, . . . , x_L, y_L, z⟩) = Σ_{k=1}^{L} c_k e^{−x_k γ + i y_k δ} + z · (εOPT/n).

Using the same technique as in Lemma 2 and the pseudopolynomial time algorithm for A, we can compute the values of all configurations in time (n/ε)^{poly(1/ε)}. Then, we return the solution whose corresponding configuration Cf(⟨x_1, y_1, . . . , x_L, y_L, z⟩) takes value 1 and maximizes |Val(v)|.
Proof of Theorem 3: The proof is similar to that of Theorem 1. For any S ⊆ U, let v_S = Σ_{e∈S} Ft(e). Using the same proof as in Lemma 1 and the fact that |µ̃H(x) − µH(x)| ≤ εOPT, we can see that for any S ∈ F,

|Val(v_S)| = E[µH(w^nm(S))] + Σ_{e∈S} Hg(e) ± O(εOPT).

Combining with Lemma 7 and Lemma 8, we can further see that for any S ∈ F,

|Val(v_S)| = (1 ± O(ε)) E[µ(w(S))] ± O(εOPT).

Suppose S is our solution and S* is the optimal solution for the utility function µ. From our algorithm, we know that |Val(v_S)| ≥ |Val(v_{S*})|, which implies E[µ(w(S))] ≥ (1 − O(ε))OPT and completes the proof. □
6 Class C_increasing

Recall that µ(x) ∈ C_increasing is a positive, differentiable and increasing function with (d/dx)µ(x) ∈ [L, U] for some constants L, U > 0 and all x ≥ 0. By scaling, we can assume without loss of generality that L ≤ 1 ≤ U. Our algorithm is almost the same as the one in Section 5, except that we use a slightly different set of parameters:

H = (1/ε²)·(OPT·L/U),   T1 = (1/ε)·(OPT·L/U),   T2 = µ^{−1}((1/ε²)·(OPT·L/U)),   and Huge = {p | p ≥ T2}.

Let µH(x) = min(µ(x), H). So, µH satisfies assumption S1. However, we cannot use Lemma 6, since it requires concavity. Nevertheless, we can still approximate µH by a short exponential sum, as in Lemma 9. The remaining algorithm is exactly the same as the one in Section 5. To prove the performance guarantee, we only need to prove analogues of Lemma 7 and Lemma 8. Now, we prove the aforementioned lemmas.
Lemma 9 For any ε > 0, we can obtain an exponential sum µ̃H(x) = Σ_{k=1}^{L} c_k φ_k^x such that |µ̃H(x) − µH(x)| ≤ O(εOPT) for all x > 0, where L = poly(1/ε), |c_k| ≤ poly(1/ε)H and |φ_k| ≤ 1 for all k.

Proof: Since (d/dx)µ(x) ∈ [L, U], we can see that H/U ≤ T2 ≤ H/L. Consider ν(x) = H − µH(x). We can see that ν is a decreasing, differentiable function and ν(x) = 0 for all x > T2. Consider the function ν̄(x) = (1/H)·ν(xH). First, let T = 1/L ≥ T2/H; we can see that ν̄(x) = 0 for x > T. Hence, ν̄ ∈ C_bounded and satisfies the U-Lipschitz condition. By Theorem 2, we can compute a function ν̃(x) = Σ_{k=1}^{L} d_k ψ_k^x which is an ε²-approximation of ν̄, with L = poly(1/ε), |ψ_k| ≤ 1 and d_k = poly(1/ε) for all k. Therefore,

µ̃H(x) = H − H·ν̃(x/H) = H − Σ_{k=1}^{L} (H·d_k)(ψ_k^{1/H})^x

is the desired approximation. □
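The bound H/U ≤ T2 ≤ H/L used in the proof follows from the derivative bounds alone. A quick check with an illustrative µ whose derivative 2 + cos(x) lies in [L, U] = [1, 3] (the function and H are made up for illustration):

```python
from math import sin

# Illustrative strictly increasing mu with derivative 2 + cos(x) in [1, 3].
mu = lambda x: 2.0 * x + sin(x)
L, U = 1.0, 3.0
H = 10.0

# T2 = mu^{-1}(H) by bisection; mu(0) = 0 < H < mu(H / L) brackets the root
lo, hi = 0.0, H / L
for _ in range(100):
    mid = (lo + hi) / 2.0
    if mu(mid) < H:
        lo = mid
    else:
        hi = mid
T2 = (lo + hi) / 2.0

assert H / U <= T2 <= H / L
```

Intuitively, a steep µ (derivative near U) reaches the cap H early, and a flat one (derivative near L) reaches it late, which is exactly the two-sided bound.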
The following lemma is an analogue of Lemma 7.

Lemma 10 For any S ∈ F, we have that

E[µH(w^nm(S))] ∈ (1 ± O(ε)) E[µ(w^nm(S))].

Proof: The proof is almost the same as that of Lemma 7, except that the last line makes use of the bounded derivative assumption (instead of concavity):

µ^{−1}(H + kOPT/ε) ≥ T2 + kOPT/(εU) ≥ T2 + kT1   for k > 0. □

We handle the contribution from the huge values in the same way. Recall that Hg(e) = Σ_{p∈Huge} Pr[w_e = p] µ(p). The following lemma is an analogue of Lemma 8.
Lemma 11 For any S ∈ F, we have that

E[µ(w(S))] ∈ (1 ± O(ε)) ( E[µ(w^nm(S))] + Σ_{e∈S} Hg(e) ).
Proof: We can use exactly the same proof as that of Theorem 2 in [14] to show that E[µ(w(S))] ≥ (1 − O(ε))(E[µ(w^nm(S))] + Σ_{e∈S} Hg(e)), as that proof holds even without the concavity assumption. The other direction requires a different argument, which goes as follows. Let E_0 be the event that no w_e is realized to a huge value, and let E_{e,p} be the event that w_e is realized to value p ∈ Huge. By Markov's inequality, we have Pr[E_0] ≥ 1 − εL/U. Moreover, using the fact that e^{−x} ≥ 1 − x, we have that

exp(− Σ_{e∈S, p∈Huge} Pr[E_{e,p}]) = Π_{e∈S} exp(− Σ_{p∈Huge} Pr[E_{e,p}]) ≥ Π_{e∈S} (1 − Σ_{p∈Huge} Pr[E_{e,p}]) = Pr[E_0] ≥ 1 − εL/U ≥ 1 − ε.

Hence, Σ_{e∈S, p∈Huge} Pr[E_{e,p}] ≤ −ln(1 − ε) ≤ 2ε for ε < 1/2.

Next, we can see that E[µ(w(S)) | E_0] Pr[E_0] ≤ E[µ(w^nm(S))] (for each realization of {w_e}_{e∈S} satisfying E_0, there is a corresponding realization of {w_e^nm}_{e∈S}). From the bounded derivative assumption, we can also see that E[µ(w(S)) | E_{e,p}] ≤ µ(p) + U · E[w(S)]. By inclusion-exclusion, we have that

E[µ(w(S))] ≤ E[µ(w(S)) | E_0] Pr[E_0] + Σ_{e∈S, p∈Huge} E[µ(w(S)) | E_{e,p}] Pr[E_{e,p}]
≤ E[µ(w^nm(S))] + Σ_{e∈S, p∈Huge} Pr[E_{e,p}] µ(p) + Σ_{e∈S, p∈Huge} Pr[E_{e,p}] · U · E[w(S)]
≤ E[µ(w^nm(S))] + Σ_{e∈S} Hg(e) + O(ε) E[µ(w(S))].

The last inequality holds since E[w(S)] ≤ E[µ(w(S))]/L = O(E[µ(w(S))]). □
7 Applications
We first consider the two utility functions χ(x) and χ̃(x) presented in the introduction. Note that maximizing E[χ(w(S))] is equivalent to maximizing Pr(w(S) ≤ 1). The following lemma is straightforward.

Lemma 12 For any solution S,

Pr(w(S) ≤ 1) ≤ E[χ̃(w(S))] ≤ Pr(w(S) ≤ 1 + δ).
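The lemma follows from the pointwise sandwich χ(x) ≤ χ̃(x) ≤ 1[x ≤ 1 + δ] by taking expectations. A sketch of the piecewise-linear χ̃ (the softening width δ is illustrative):

```python
import numpy as np

delta = 0.2   # illustrative softening width

def chi(x):
    # hard threshold: 1 iff x <= 1
    return (x <= 1.0).astype(float)

def chi_tilde(x):
    # piecewise linear: 1 on [0, 1], decreasing to 0 at 1 + delta, 0 beyond
    return np.clip((1.0 + delta - x) / delta, 0.0, 1.0)

x = np.linspace(0.0, 3.0, 301)
# chi <= chi_tilde <= indicator(x <= 1 + delta); taking expectations of the
# three functions at w(S) yields exactly the chain in Lemma 12
assert np.all(chi(x) <= chi_tilde(x) + 1e-12)
assert np.all(chi_tilde(x) <= (x <= 1.0 + delta) + 1e-12)
```

Unlike χ, the function χ̃ is continuous, which is what makes the Fourier-based approximation applicable.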
Corollary 2 Suppose there is a pseudopolynomial time algorithm for the exact version of A. Then, for any fixed constants ε > 0 and δ > 0, there is an algorithm that runs in time (n/ε)^{poly(1/ε)} and produces a solution S ∈ F such that

Pr(w(S) ≤ 1 + δ) + ε ≥ max_{S′∈F} Pr(w(S′) ≤ 1).

Proof: By Theorem 1, Theorem 2 and Lemma 12, we can easily obtain the corollary. Note that we can choose T = 2 for any δ ∈ (0, 1) and ε > 0. Thus L = poly(1/ε). □
Now, let us see some applications of our general results to specific problems.
Stochastic Shortest Path: Finding a path with the exact target length (we allow non-simple paths)12 can be
easily done in pseudopolynomial time by dynamic programming.
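The dynamic program for the exact version can be sketched as follows (a minimal variant for nonnegative integer edge lengths; the graph below is a made-up toy instance):

```python
# Pseudopolynomial DP for "is there a (possibly non-simple) s-t walk of total
# length exactly T?" with positive integer edge lengths.
def exact_length_walk(n, edges, s, t, T):
    # edges: list of directed (u, v, w) with integer w >= 1
    reach = [[False] * n for _ in range(T + 1)]
    reach[0][s] = True
    for length in range(1, T + 1):
        for (u, v, w) in edges:
            if w <= length and reach[length - w][u]:
                reach[length][v] = True
    return reach[T][t]

edges = [(0, 1, 2), (1, 2, 3), (1, 1, 1)]  # the self-loop pads walk lengths
assert exact_length_walk(3, edges, 0, 2, 5)       # 0->1->2: 2 + 3
assert exact_length_walk(3, edges, 0, 2, 6)       # 0->1->1->2: 2 + 1 + 3
assert not exact_length_walk(3, edges, 0, 2, 4)   # no walk of length exactly 4
```

Allowing non-simple walks is what makes this DP valid: states only track (length, vertex), not the set of visited vertices.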
Stochastic Spanning Tree: We are given a graph G, where the weight of each edge e is an independent,
nonnegative random variable. Our objective is to find a spanning tree T in G, such that Pr(w(T ) ≤ 1)
is maximized. Polynomial time algorithms have been developed for Gaussian distributed edges [35, 26].
To the best of our knowledge, no approximation algorithm with provable guarantee is known for other
distributions. Noticing there exists a pseudopolynomial time algorithm for the exact spanning tree problem
[5], we can directly apply Corollary 2.
Stochastic k-Median on Trees: The problem asks for a set S of k nodes in the given probabilistic tree G such that Pr(Σ_{v∈V(G)} dis(v, S) ≤ 1) is maximized, where dis(v, S) is the minimum distance from v to any node in S in the tree metric. The k-median problem can be solved optimally in polynomial time on trees by dynamic programming [37]. It is straightforward to modify the dynamic program to get a pseudopolynomial time algorithm for the exact version.
¹² The exact version for simple paths is NP-hard, since it includes the Hamiltonian path problem as a special case.
Stochastic Knapsack with Random Sizes: We are given a set U of n items. Each item i has a random size w_i and a deterministic profit v_i. We are also given a constant 0 ≤ γ ≤ 1. The goal is to find a subset S ⊆ U such that Pr(w(S) ≤ 1) ≥ γ and the total profit v(S) = Σ_{i∈S} v_i is maximized.
If the profits of the items are polynomially bounded integers, we can see that the optimal profit is also a polynomially bounded integer. We can first guess the optimal profit. For each guess g, we solve the following problem: find a subset S of items such that the total profit of S is exactly g and E[χ̃(w(S))] is maximized. The exact version of the deterministic problem is to find a solution S with a given total size and a given total profit, which can be easily solved in pseudopolynomial time by dynamic programming. Therefore, by Corollary 2, we can easily show that we can find in polynomial time a set S of items such that the total profit v(S) is at least the optimum and Pr(w(S) ≤ 1 + ε) ≥ (1 − ε)γ for any constants ε and γ.
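The exact version mentioned above — a subset with a prescribed total size and total profit — admits a simple pseudopolynomial DP over (size, profit) states; a minimal sketch on a toy instance:

```python
# DP for: is there a subset with total (integer) size exactly s and total
# profit exactly g? The items below are illustrative.
def exact_size_profit(items, s, g):
    # items: list of (size, profit) pairs with nonnegative integers
    feasible = {(0, 0)}
    for (w, v) in items:
        # extend every reachable (size, profit) state by the current item
        feasible |= {(a + w, b + v) for (a, b) in feasible
                     if a + w <= s and b + v <= g}
    return (s, g) in feasible

items = [(2, 3), (3, 4), (4, 5)]
assert exact_size_profit(items, 5, 7)       # take (2, 3) and (3, 4)
assert not exact_size_profit(items, 5, 8)   # no subset has size 5, profit 8
```

The number of states is at most (s + 1)(g + 1), hence the running time is pseudopolynomial in the size and profit bounds.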
If the profits are general integers, we can use the standard scaling technique to get a (1 − ε)-approximation for the total profit. We first make a guess of the optimal profit, rounded down to the nearest power of (1 + ε). There are at most log_{1+ε}(n·max_i v_i / min_i v_i) guesses. For each guess g, we solve the following problem. We discard all items with a profit larger than g. Let ∆ = εg/n². For each item with a profit smaller than εg/n, we set its new profit to be v̄_i = 0. Then, we scale each of the remaining profits v_i to v̄_i = ∆⌊v_i/∆⌋. Now, we define the feasible set

F(g) = { S | (1 − 2ε)g ≤ Σ_{i∈S} v̄_i ≤ (1 + 2ε)g }.
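The scaling step for one guess g can be sketched as follows (ε, g and the profits below are illustrative):

```python
import math

# Profit scaling for a single guess g; all numbers are made up.
eps = 0.1
g = 100.0
profits = [0.4, 7.0, 33.0, 61.0, 150.0]

Delta = eps * g / len(profits) ** 2

def scale(v):
    if v > g:                          # discarded for this guess
        return None
    if v < eps * g / len(profits):     # tiny profits are zeroed out
        return 0.0
    return Delta * math.floor(v / Delta)

scaled = [scale(v) for v in profits]
# each kept item loses less than Delta, and only tiny profits are zeroed
for v, vb in zip(profits, scaled):
    if vb is None:
        assert v > g
    elif vb == 0.0:
        assert v < eps * g / len(profits)
    else:
        assert 0.0 <= v - vb < Delta
```

Since every surviving profit is a multiple of ∆ in [0, g], there are only about n²/ε distinct scaled values, which is what makes the subsequent DP pseudopolynomial.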
Since there are at most n²/ε distinct v̄ values, we can easily show that finding a solution S in F(g) with a given total size can be solved in pseudopolynomial time by dynamic programming.
Denote the optimal solution by S* and the optimal profit by OPT. Suppose g is the right guess, i.e., (1/(1+ε))OPT ≤ g ≤ OPT. We can easily see that for any solution S, we have that

(1 − 1/n) Σ_{i∈S} v_i − εg ≤ Σ_{i∈S} v̄_i ≤ Σ_{i∈S} v_i,

where the first inequality is due to v_i ≥ εg/n for the items that are not zeroed out, and the fact that we set at most εg profit to zero in total. Therefore, we can see that S* ∈ F(g). Applying Corollary 2, we obtain a solution S such that Pr(w(S) ≤ 1 + δ) + ε ≥ Pr(w(S*) ≤ 1). Moreover, the profit of this solution is v(S) = Σ_{i∈S} v_i ≥ Σ_{i∈S} v̄_i ≥ (1 − 2ε)g ≥ (1 − O(ε))OPT.
In sum, we have obtained the following result.
Theorem 7 For any constants ε > 0 and γ > 0, there is a polynomial time algorithm to compute a set S of items such that the total profit v(S) is within a 1 − ε factor of the optimum and Pr(w(S) ≤ 1 + ε) ≥ (1 − ε)γ.

Bhalgat et al. [13, Theorem 8.1] obtained the same result, with a running time of n^{2^{poly(1/ε)}}, while our running time is n^{poly(1/ε)}.
Moreover, we can easily extend our algorithm to generalizations of the knapsack problem as long as the corresponding exact version has a pseudopolynomial time algorithm. For example, we can get the same result for the partial-ordered knapsack problem with tree constraints [25, 58]. In this problem, items must be chosen in accordance with specified precedence constraints: the precedence constraints form a partial order whose underlying undirected graph is a tree (or forest). A pseudopolynomial algorithm for this problem is presented in [58].
Stochastic Knapsack with Random Profits: We are given a set U of n items. Each item i has a deterministic size wi and a random profit vi . The goal is to find a subset of items that can be packed into a
knapsack with capacity 1 and the probability that the profit is at least a given threshold T is maximized.
Henig [32] and Carraway et al. [15] studied this problem for normally distributed profits and presented dynamic programming and branch-and-bound heuristics to solve the problem optimally.
We can solve the equivalent problem of minimizing the probability that the profit is at most the given threshold, subject to the capacity constraint. We first show that relaxing the capacity constraint is necessary. Consider the following deterministic knapsack instance. The profit of each item is the same as its size. The given threshold is 1. We can see that the optimal probability is 1 if and only if there is a subset of items of total size exactly 1; otherwise, the optimal probability is 0. However, determining whether there is a subset of items with total size exactly 1 is NP-complete. Therefore, it is NP-hard to approximate the original problem within any additive error less than 1 without violating the capacity constraint.
The corresponding exact version of the deterministic problem is to find a set of items S such that w(S) ≤ 1 and v(S) is equal to a given target value. In fact, there is no pseudopolynomial time algorithm for this problem, since otherwise we could get an additive approximation without violating the capacity constraint, contradicting the lower bound argument above. Note that a pseudopolynomial time algorithm here should run in time polynomial in the profit values (not the sizes). However, if the sizes can be encoded in O(log n) bits (so that there are only a polynomial number of different sizes), we can solve the problem in time polynomial in n and the largest profit value by standard dynamic programming.
For general sizes, we can round the size of each item down to the nearest multiple of δ/n. Then, we can solve the exact version in pseudopolynomial time poly(max_i v_i, n, 1/δ) by dynamic programming. It is easy to show that for any subset of items, its total size is at most its total rounded size plus δ. Therefore, the total size of our solution is at most 1 + δ. We summarize the above discussion in the following theorem.
Theorem 8 If the optimal probability is Ω(1), we can find in time (n/δ)^{poly(1/ε)} a subset S of items such that Pr(v(S) > (1 − ε)T) ≥ (1 − ε)OPT and w(S) ≤ 1 + δ, for any constant ε > 0.
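The rounding argument behind the size guarantee can be checked numerically (δ and the sizes below are illustrative):

```python
import math

# Rounding item sizes down to multiples of delta / n; the instance is made up.
delta = 0.3
sizes = [0.17, 0.52, 0.33, 0.41]
n = len(sizes)
unit = delta / n

rounded = [unit * math.floor(w / unit) for w in sizes]
# any subset's true size exceeds its rounded size by less than n * unit = delta
for subset in range(1 << n):
    chosen = [i for i in range(n) if subset >> i & 1]
    true_sz = sum(sizes[i] for i in chosen)
    rd_sz = sum(rounded[i] for i in chosen)
    assert 0.0 <= true_sz - rd_sz < delta + 1e-12
```

Each item loses less than δ/n, so even a subset containing all n items overshoots its rounded size by less than δ, which is exactly the 1 + δ capacity violation in Theorem 8.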
8 Extensions
In this section, we discuss some extensions to our basic approximation scheme. We first consider optimizing
a constant number of utility functions in Section 8.1. Then, we study the problem where the weight of each
element is a random vector in Section 8.2.
8.1 Multiple Utility Functions
The problem we study in this section contains a set U of n elements. Each element e has a random weight
we . We are also given d utility functions µ1 , . . . , µd and d positive numbers λ1 , . . . , λd . We assume d is a
constant. A feasible solution consists of d subsets of elements that satisfy some property. Our objective is
to find a feasible solution S1 , . . . , Sd such that E[µi (w(Si ))] ≥ λi for all 1 ≤ i ≤ d.
We can easily extend our basic approximation scheme to the multiple utility function case as follows. We decompose these utility functions into short exponential sums using EXPSUM-APPROX as before. Then, for each utility function, we maintain (n/ε)^{O(L)} configurations. Therefore, we have (n/ε)^{O(dL)} configurations in total, and we would like to compute the values of all these configurations. We denote the deterministic version of the problem under consideration by A. The exact version of A asks for a feasible solution S_1, . . . , S_d such that the total weight of S_i is exactly the given number t_i for all i. Following an argument similar to Lemma 2, we can easily get the following generalization of Theorem 1.

Theorem 9 Assume that there is a pseudopolynomial algorithm for the exact version of A. Further assume that, given any ε > 0, we can ε-approximate each utility function by an exponential sum with at most L terms. Then, there is an algorithm that runs in time (n/ε)^{O(dL)} and finds a feasible solution S_1, . . . , S_d such that E[µ_i(w(S_i))] ≥ λ_i − ε for 1 ≤ i ≤ d, if there is a feasible solution for the original problem.
Now let us consider two simple applications of the above theorem.
Stochastic Multiple Knapsack: In this problem we are given a set U of n items, d knapsacks with capacity 1, and d constants 0 ≤ γ_i ≤ 1. We assume d is a constant. Each item i has a random size w_i and a deterministic profit v_i. Our objective is to find d disjoint subsets S_1, . . . , S_d such that Pr(w(S_i) ≤ 1) ≥ γ_i for all 1 ≤ i ≤ d and Σ_{i=1}^{d} v(S_i) is maximized. The exact version of the problem is to find a packing such that the load of each knapsack i is exactly a given value t_i. It is not hard to show that this problem can be solved in pseudopolynomial time by standard dynamic programming. If the profits are general integers, we also need the scaling technique, as in stochastic knapsack with random sizes. In sum, we can get the following generalization of Theorem 7.

Theorem 10 For any constants d ∈ N, ε > 0 and 0 ≤ γ_i ≤ 1 for 1 ≤ i ≤ d, there is a polynomial time algorithm to compute d disjoint subsets S_1, . . . , S_d such that the total profit Σ_{i=1}^{d} v(S_i) is within a 1 − ε factor of the optimum and Pr(w(S_i) ≤ 1 + ε) ≥ (1 − ε)γ_i for 1 ≤ i ≤ d.
Stochastic Multidimensional Knapsack: In this problem we are given a set U of n items and a constant 0 ≤ γ ≤ 1. Each item i has a deterministic profit v_i and a random size which is a random d-dimensional vector w_i = (w_{i1}, . . . , w_{id}). We assume d is a constant. Our objective is to find a subset S of items such that Pr(∧_{j=1}^{d}(Σ_{i∈S} w_{ij} ≤ 1)) ≥ γ and the total profit is maximized. This problem can also be thought of as the fixed set version of the stochastic packing problem considered in [21, 13]. We first assume the components of each size vector are independent. The correlated case will be addressed in the next subsection.
For ease of presentation, we assume d = 2 from now on. Extension to general constant d is straightforward. We can solve the problem by casting it into a multiple utility problem as follows. For each item i, we create two copies i1 and i2. The copy ij has a random weight w_{ij}. A feasible solution consists of two sets S_1 and S_2 such that S_1 (resp. S_2) only contains the first (resp. second) copies of the elements, and S_1 and S_2 correspond to exactly the same subset of original elements. We enumerate all pairs (γ_1, γ_2) such that γ_1γ_2 ≥ γ and each γ_i ∈ [γ, 1] is a power of 1 − ε for i = 1, 2. Clearly, there are a polynomial number of such pairs. For each pair (γ_1, γ_2), we solve the following problem: find a feasible solution S_1, S_2 such that Pr(Σ_{i∈S_j} w_{ij} ≤ 1) ≥ γ_j for j = 1, 2 and the total profit is maximized. Using the scaling technique and Theorem 9 for optimizing multiple utility functions, we can get a (1 − ε)-approximation for the optimal profit and Pr(∧_{j=1}^{2}(Σ_{i∈S_j} w_{ij} ≤ 1)) = Π_{j=1}^{2} Pr(Σ_{i∈S_j} w_{ij} ≤ 1) ≥ (1 − O(ε))γ_1γ_2 ≥ (1 − O(ε))γ.
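The enumeration of the threshold pairs (γ_1, γ_2) can be sketched as follows (ε and γ below are illustrative):

```python
# Enumerate candidate thresholds: powers of (1 - eps) in [gamma, 1].
eps = 0.1
gamma = 0.5

powers = []
p = 1.0
while p >= gamma:
    powers.append(p)
    p *= 1.0 - eps

# keep only pairs whose product still meets the overall target gamma
pairs = [(g1, g2) for g1 in powers for g2 in powers if g1 * g2 >= gamma]

assert (1.0, 1.0) in pairs
# polynomially many candidates: |powers| = O(log(1/gamma) / eps)
assert len(pairs) <= len(powers) ** 2
```

Since any achievable pair of probabilities can be rounded down to the nearest powers of 1 − ε at a multiplicative loss of (1 − ε)² per coordinate, trying all such pairs only costs a polynomial factor.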
We note that the same result for independent components can also be obtained by using the discretization technique developed for the adaptive version of the problem in [13].¹³ If the components of each size vector are correlated, we cannot decompose the problem into two 1-dimensional utilities as in the independent case. Now, we introduce a new technique to handle the correlated case.
8.2 Multidimensional Weight

The general problem we study contains a set U of n elements. Each element e has a random weight vector w_e = (w_{e1}, . . . , w_{ed}). We assume d is a constant. We are also given a utility function µ : R^d → R+. A feasible solution is a subset of elements satisfying some property. We use w(S) as a shorthand notation
¹³ With some changes to the discretization technique, the correlated case can also be handled [12].
for the vector (Σ_{e∈S} w_{e1}, . . . , Σ_{e∈S} w_{ed}). Our objective is to find a feasible solution S such that E[µ(w(S))] is maximized.
From now on, x and k denote d-dimensional vectors, and kx (or k · x) denotes the inner product of k and x. As before, we assume µ(x) ∈ [0, 1] for all x ≥ 0 and lim_{|x|→+∞} µ(x) = 0, where |x| = max(x_1, . . . , x_d). Our algorithm is almost the same as in the one-dimensional case, and we briefly sketch it here. We first notice that expected utilities decompose for exponential utility functions, i.e., E[φ^{k·w(S)}] = Π_{e∈S} E[φ^{k·w_e}]. Then, we attempt to ε-approximate the utility function µ(x) by a short exponential sum Σ_{|k|≤N} c_k φ_k^{kx} (there are O(N^d) terms). If this can be done, E[µ(w(S))] can be approximated by Σ_{|k|≤N} c_k E[φ_k^{k·w(S)}]. Using the same argument as in Theorem 1, we can show that there is a polynomial time algorithm that can find a feasible solution S with E[µ(w(S))] ≥ OPT − ε for any ε > 0, provided that a pseudopolynomial algorithm exists for the exact version of the deterministic problem.
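The product decomposition E[φ^{k·w(S)}] = Π_{e∈S} E[φ^{k·w_e}] for independent items can be verified exactly on small discrete distributions (the distributions, φ, and k below are made up):

```python
import itertools
import numpy as np

# Two independent items with discrete 2-dimensional weight vectors.
supports = [
    [((0.0, 1.0), 0.3), ((2.0, 0.0), 0.7)],   # item 1: (vector, probability)
    [((1.0, 1.0), 0.5), ((0.0, 3.0), 0.5)],   # item 2
]
phi = 0.8
k = np.array([0.6, 0.4])

def term(vec):
    return phi ** float(k @ np.array(vec))

# left side: enumerate the joint distribution of w(S) = w_1 + w_2
lhs = 0.0
for (v1, p1), (v2, p2) in itertools.product(*supports):
    s = np.array(v1) + np.array(v2)
    lhs += p1 * p2 * phi ** float(k @ s)

# right side: product of per-item expectations
rhs = 1.0
for dist in supports:
    rhs *= sum(p * term(v) for v, p in dist)

assert np.isclose(lhs, rhs)
```

The identity holds because φ^{k·(u+v)} = φ^{k·u} · φ^{k·v} and the items are independent; this is exactly what lets the algorithm evaluate each exponential term element by element.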
To approximate the utility function µ(x), we need the multidimensional Fourier series expansion of a function f : C^d → C (assuming f is 2π-periodic in each axis): f(x) ∼ Σ_{k∈Z^d} c_k e^{ikx}, where c_k = (1/(2π)^d) ∫_{x∈[−π,π]^d} f(x) e^{−ikx} dx. The rectangular partial sum is defined to be

S_N f(x) = Σ_{|k_1|≤N} · · · Σ_{|k_d|≤N} c_k e^{ikx}.

It is known that the rectangular partial sum S_N f(x) converges uniformly to f(x) in [−π, π]^d for many function classes as N tends to infinity. In fact, a generalization of Theorem 6 to [−π, π]^d also holds [3]: if f satisfies the α-Hölder condition, then

|f(x) − (S_N f)(x)| ≤ O( |f|_{C^{0,α}} · ln^d N / N^α )   for x ∈ [−π, π]^d.
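The rectangular partial sum can be illustrated numerically. For f(x, y) = cos(x)cos(y) the Fourier series is finite, so S_N f reproduces f exactly already for N = 1 (the quadrature grid size below is an implementation choice):

```python
import numpy as np

# f(x, y) = cos(x)cos(y) has only the four frequencies (+-1, +-1).
f = lambda x, y: np.cos(x) * np.cos(y)
N = 1
M = 256                                    # quadrature points per axis
t = -np.pi + 2 * np.pi * np.arange(M) / M  # uniform grid on [-pi, pi)
X, Y = np.meshgrid(t, t, indexing="ij")
F = f(X, Y)

def coeff(k1, k2):
    # c_k = (2*pi)^{-2} integral of f(x) e^{-ikx} over [-pi, pi]^2; on a
    # periodic uniform grid this is just the mean (exact for trig polynomials)
    return np.mean(F * np.exp(-1j * (k1 * X + k2 * Y)))

def partial_sum(x, y):
    s = 0.0 + 0.0j
    for k1 in range(-N, N + 1):
        for k2 in range(-N, N + 1):
            s += coeff(k1, k2) * np.exp(1j * (k1 * x + k2 * y))
    return s.real

pts = [(0.3, -1.2), (2.0, 0.5), (-3.0, 3.0)]
assert all(abs(partial_sum(x, y) - f(x, y)) < 1e-8 for x, y in pts)
```

For functions with infinitely many Fourier coefficients, the same code exhibits the ln^d N / N^α decay of the error bound above instead of exact recovery.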
Now, we have an algorithm FOURIER that can approximate a function in a bounded domain. It is also straightforward to extend EXPSUM-APPROX to the multidimensional case. Hence, we can ε-approximate µ by a short exponential sum in [0, +∞)^d, thereby proving the multidimensional generalization of Theorem 2. Let us consider an application of our result.

Stochastic Multidimensional Knapsack (Revisited): We consider the case where the components of each weight vector can be correlated. Note that the utility function χ_2 corresponding to this problem is the two-dimensional threshold function: χ_2(x, y) = 1 if x ≤ 1 and y ≤ 1; χ_2(x, y) = 0 otherwise. As in the one-dimensional case, we need to consider a continuous version χ̃_2 of χ_2 (see Figure 1(3)). By the result in this section and a generalization of Lemma 12 to higher dimensions, we can get the following.

Theorem 11 For any constants d ∈ N, ε > 0 and 0 ≤ γ ≤ 1, there is a polynomial time algorithm for finding a set S of items such that the total profit v(S) is within a 1 − ε factor of the optimum and Pr(∧_{j=1}^{d}(Σ_{i∈S} w_{ij} ≤ 1 + ε)) ≥ (1 − ε)γ.
9 A Few Remarks

Convergence of Fourier series: The convergence of the Fourier series of a function is a classic topic in harmonic analysis. Whether the Fourier series converges to the given function, and the rate of the convergence, typically depend on a variety of smoothness conditions on the function. We refer the readers to [62] for a more comprehensive treatment of this topic. We note that we could obtain a smoother version of χ (e.g., see Figure 1(2)), instead of the piecewise linear χ̃, and then use Theorem 6 to obtain a better bound for L. This would result in an even better running time. Our choice is simply for ease of presentation.
Discontinuous utility functions: If the utility function µ is discontinuous, e.g., the threshold function, then
the partial Fourier series behaves poorly around the discontinuity (this is known as the Gibbs phenomenon).
However, informally speaking, as the number of Fourier terms increases, the poorly-behaved strip around
the edge becomes narrower. Therefore, if the majority of the probability mass of our solution lies outside
the strip, we can still guarantee a good approximation of the expected utility. There are also techniques to
reduce the effects of the Gibbs phenomenon (see, e.g., [28]). However, these techniques are not sufficient to handle discontinuous utility functions in our setting. We note that, very recently, Daskalakis et al. [20] obtained a true additive PTAS (instead of a bi-criterion additive PTAS) for a closely related problem, called the fault tolerant storage problem, under certain technical assumptions.¹⁴ It is not clear how to use their technique to obtain a true additive PTAS for our expected utility maximization problem. We leave this as an interesting open problem.
10 Conclusion
We study the problem of maximizing expected utility for several stochastic combinatorial problems, such as
shortest path, spanning tree and knapsack, and several classes of utility functions. A key ingredient in our
algorithm is to decompose the utility function into a short exponential sum, using the Fourier series decomposition. Our general approximation framework may be useful for other stochastic optimization problems.
We leave the problems of obtaining a true additive PTAS, or nontrivial multiplicative approximation factors
for Cbounded as interesting open problems.
11 Acknowledgments

We would like to thank Evdokia Nikolova for providing an extended version of [52] and for many helpful discussions. We also would like to thank Chandra Chekuri for pointing us to the work [13] and Anand Bhalgat for clarifications of the same work.
References
[1] H. Ackermann, A. Newman, H. Röglin, and B. Vöcking. Decision making based on approximate and
smoothed pareto curves. Algorithms and Computation, pages 675–684, 2005.
[2] S. Agrawal, A. Saberi, and Y. Ye. Stochastic Combinatorial Optimization under Probabilistic Constraints. Arxiv preprint arXiv:0809.0460, 2008.
[3] S. Alimov, R. Ashurov, and A. Pulatov. Multiple fourier series and fourier integrals, in commutative
harmonic analysis. IV: Harmonic analysis in Rn . Encyclopedia of Mathematical Science, 42, 1992.
¹⁴ In the fault tolerant storage problem, we are given n real numbers 0 < p_1 ≤ . . . ≤ p_n < 1, and an additional number 0 < θ < 1. Our goal is to partition 1 into n positive values x_1, . . . , x_n (i.e., Σ_{i=1}^{n} x_i = 1), such that Pr[Σ_{i=1}^{n} X_i ≥ θ] is maximized, where X_i is the Bernoulli random variable which takes value x_i with probability p_i. In order to obtain an additive PTAS, Daskalakis et al. [20] assumed that all p_i's are bounded below by a constant.
[4] N. Bansal, A. Gupta, J. Li, J. Mestre, V. Nagarajan, and A. Rudra. When LP is the Cure for Your
Matching Woes: Improved Bounds for Stochastic Matchings. European Symposium on Algorithms,
pages 218–229, 2010.
[5] F. Barahona and W. Pulleyblank. Exact arborescences, matchings and cycles. Discrete Applied Mathematics, 16(2):91–99, 1987.
[6] J. Bard and J. Bennett. Arc reduction and path preference in stochastic acyclic networks. Management
Science, 37(2):198–215, 1991.
[7] D. Bernoulli. Exposition of a new theory on the measurement of risk. Econometrica: Journal of
the Econometric Society, pages 23–36, 1954. Originally published in 1738; translated by Dr. Louise
Sommer.
[8] D. Bernoulli. Exposition of a new theory on the measurement of risk. Econometrica, 22(1):22–36, 1954. Originally published in 1738; translated by Dr. Louise Sommer.
[9] G. Beylkin and L. Monzón. On generalized Gaussian quadratures for exponentials and their applications. Applied and Computational Harmonic Analysis, 12(3):332–373, 2002.
[10] G. Beylkin and L. Monzón. On approximation of functions by exponential sums. Applied and Computational Harmonic Analysis, 19(1):17–48, 2005.
[11] G. Beylkin and L. Monzón. Approximation by exponential sums revisited. Applied and Computational
Harmonic Analysis, 28(2):131–149, 2010.
[12] A. Bhalgat, 2011. Personal Communication.
[13] A. Bhalgat, A. Goel, and S. Khanna. Improved approximation results for stochastic knapsack problems. In ACM-SIAM symposium on Discrete algorithms, 2011.
[14] A. Bhalgat and S. Khanna. A utility equivalence theorem for concave functions. In Integer Programming and Combinatorial Optimization, pages 126–137. Springer, 2014.
[15] R. Carraway, R. Schmidt, and L. Weatherford. An algorithm for maximizing target achievement in the
stochastic knapsack problem with normal returns. Naval research logistics, 40(2):161–173, 1993.
[16] C. Chekuri and S. Khanna. A PTAS for the multiple knapsack problem. In ACM-SIAM symposium on
Discrete algorithms, pages 213–222, 2000.
[17] N. Chen, N. Immorlica, A. Karlin, M. Mahdian, and A. Rudra. Approximating matches made in
heaven. International Colloquium on Automata, Languages and Programming, pages 266–278, 2009.
[18] W. Cheney and W. Light. A Course in Approximation Theory. Brook/Cole Publishing Company, 2000.
[19] R. Cheng, J. Chen, and X. Xie. Cleaning uncertain data with quality guarantees. Proceedings of the
VLDB Endowment, 1(1):722–735, 2008.
[20] C. Daskalakis, A. De, I. Diakonikolas, A. Moitra, and R. A. Servedio. A polynomial-time approximation scheme for fault-tolerant distributed storage. In SODA, pages 628–644. SIAM, 2014.
[21] B. Dean, M. Goemans, and J. Vondrák. Adaptivity and approximation for stochastic packing problems.
In ACM-SIAM symposium on Discrete algorithms, pages 395–404, 2005.
[22] B. Dean, M. Goemans, and J. Vondrak. Approximating the Stochastic Knapsack Problem: The Benefit
of Adaptivity. Mathematics of Operations Research, 33(4):945, 2008.
[23] M. Fazel and M. Chiang. Network utility maximization with nonconcave utilities using sum-of-squares
method. In Decision and Control, 2005 and 2005 European Control Conference. CDC-ECC’05. 44th
IEEE Conference on, pages 1867–1874. IEEE, 2005.
[24] P. Fishburn. Utility Theory and Decision Making. John Wiley & Sons, Inc, 1970.
[25] M. Garey and D. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman, 1979.
[26] S. Geetha and K. Nair. On stochastic spanning tree problem. Networks, 23(8):675–679, 1993.
[27] A. Goel and P. Indyk. Stochastic load balancing and related problems. In Annual Symposium on
Foundations of Computer Science, page 579, 1999.
[28] D. Gottlieb and C. Shu. On the Gibbs phenomenon and its resolution. SIAM review, 39(4):644–668,
1997.
[29] V. Goyal and R. Ravi. Chance constrained knapsack problem with random item sizes. To appear in
Operations Research Letters, 2009.
[30] S. Guha and K. Munagala. Adaptive Uncertainty Resolution in Bayesian Combinatorial Optimization
Problems. To appear in ACM Transactions on Algorithms, 2008.
[31] A. Gupta, M. Pál, R. Ravi, and A. Sinha. Boosted sampling: approximation algorithms for stochastic
optimization. In ACM Symposium on Theory of Computing, pages 417–426. ACM, 2004.
[32] M. Henig. Risk criteria in a stochastic knapsack problem. Operations Research, 38(5):820–825, 1990.
[33] L. Huang and J. Li. Approximating the expected values for combinatorial optimization problems over
stochastic points. In Automata, Languages, and Programming, pages 910–921. Springer, 2015.
[34] L. Huang, J. Li, J. M. Phillips, and H. Wang. ε-kernel coresets for stochastic points. arXiv preprint
arXiv:1411.0194, 2014.
[35] H. Ishii, S. Shiode, and T. Nishida Yoshikazu. Stochastic spanning tree problem. Discrete Applied
Mathematics, 3(4):263–273, 1981.
[36] D. Kahneman and A. Tversky. Prospect theory: An analysis of decision under risk. Econometrica:
Journal of the Econometric Society, pages 263–291, 1979.
[37] O. Kariv and S. Hakimi. An algorithmic approach to network location problems. II: The p-medians.
SIAM Journal on Applied Mathematics, 37(3):539–560, 1979.
[38] J. Kleinberg, Y. Rabani, and É. Tardos. Allocating bandwidth for bursty connections. In ACM Symposium on Theory of Computing, page 673, 1997.
[39] J. Li and A. Deshpande. Consensus answers for queries over probabilistic databases. In ACM
SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, 2009.
[40] J. Li and A. Deshpande. Ranking continuous probabilistic datasets. Proceedings of the VLDB Endowment, 3(1), 2010.
[41] J. Li, B. Saha, and A. Deshpande. A unified approach to ranking in probabilistic databases. In Proceedings of the VLDB Endowment, 2009.
[42] J. Li and T. Shi. A fully polynomial-time approximation scheme for approximating a sum of random
variables. Operations Research Letters, 42(3):197–202, 2014.
[43] J. Li and W. Yuan. Stochastic combinatorial optimization via poisson approximation. In ACM Symposium on Theory of Computing, 2013.
[44] R. Loui. Optimal paths in graphs with stochastic or multidimensional weights. Communications of the
ACM, 26(9):670–676, 1983.
[45] R. Martin. The St. Petersburg Paradox. The Stanford Encyclopedia of Philosophy, 2004. http://plato.stanford.edu/archives/fall2004/entries/paradox-stpetersburg.
[46] S. Mittal and A. Schulz. A general framework for designing approximation schemes for combinatorial
optimization problems with many objectives combined into one. Approximation, Randomization and
Combinatorial Optimization. Algorithms and Techniques, pages 179–192, 2008.
[47] A. Munteanu, C. Sohler, and D. Feldman. Smallest enclosing ball for probabilistic data. In Proceedings
of the thirtieth annual symposium on Computational geometry, page 214. ACM, 2014.
[48] I. Murthy and S. Sarkar. Exact algorithms for the stochastic shortest path problem with a decreasing
deadline utility function. European Journal of Operational Research, 103(1):209–229, 1997.
[49] I. Murthy and S. Sarkar. Stochastic shortest path problems with piecewise-linear concave utility functions. Management Science, 44(11):125–136, 1998.
[50] E. Nikolova. Approximation Algorithms for Reliable Stochastic Combinatorial Optimization. International Workshop on Approximation Algorithms for Combinatorial Optimization Problems, pages
338–351, 2010.
[51] E. Nikolova, M. Brand, and D. Karger. Optimal route planning under uncertainty. In Proceedings of
International Conference on Automated Planning and Scheduling, 2006.
[52] E. Nikolova, J. Kelner, M. Brand, and M. Mitzenmacher. Stochastic shortest paths via quasi-convex
maximization. In European Symposium on Algorithms, pages 552–563, 2006.
[53] F. Oberhettinger. Fourier transforms of distributions and their inverses: a collection of tables. Academic press, 1973.
[54] M. R. Osborne and G. K. Smyth. A modified prony algorithm for fitting sums of exponential functions.
SIAM Journal of Scientific Computing, 1995.
[55] C. Papadimitriou and M. Yannakakis. On the approximability of trade-offs and optimal access of web
sources. In Annual Symposium on Foundations of Computer Science, 2000.
[56] M. J. D. Powell. Approximation theory and methods. Cambridge University Press, 1981.
[57] A. Ralston and P. Rabinowitz. A First Course in Numerical Analysis. 2001.
[58] H. Safer, J. B. Orlin, and M. Dror. Fully polynomial approximation in multi-criteria combinatorial
optimization, 2004. MIT Working Paper.
[59] P. A. Samuelson. St. petersburg paradoxes: Defanged, dissected, and historically described. Journal
of Economic Literature, 15(1):24–55, 1977.
[60] D. Shmoys and C. Swamy. An approximation scheme for stochastic linear programming and its application to stochastic integer programs. J. ACM, 53(6):1012, 2006.
[61] C. Sigal, A. Pritsker, and J. Solberg. The stochastic shortest route problem. Operations Research,
28(5):1122–1129, 1980.
[62] E. Stein and R. Shakarchi. Fourier analysis: an introduction. Princeton University Press, 2003.
[63] C. Swamy. Risk-Averse Stochastic Optimization: Probabilistically-Constrained Models and Algorithms for Black-Box Distributions. ACM-SIAM symposium on Discrete algorithms, 2010.
[64] C. Swamy and D. Shmoys. Approximation algorithms for 2-stage stochastic optimization problems.
ACM SIGACT News, 37(1):46, 2006.
[65] J. von Neumann and O. Morgenstern. Theory of Games and Economic Behavior. Princeton Univ.
Press, 2nd edition, 1947.
A Computing $\mathbb{E}[\phi^{w_e}]$
If $X$ is a random variable, then the characteristic function of $X$ is defined as
$$G(z) = \mathbb{E}[e^{izX}].$$
We can see that $\mathbb{E}[\phi^{w_e}]$ is nothing but the value of the characteristic function of $w_e$ evaluated at $-i\ln\phi$ (here $\ln$ is the complex logarithm function). For many important distributions, including negative binomial, Poisson, exponential, Gaussian, Chi-square and Gamma, a closed-form characteristic function is known. See [53] for a more comprehensive list.
Example 3 Consider the Poisson distributed $w_e$ with mean $\lambda$, i.e., $\Pr(w_e = k) = \lambda^k e^{-\lambda}/k!$. Its characteristic function is known to be $G(z) = e^{\lambda(e^{iz}-1)}$. Therefore,
$$\mathbb{E}[\phi^{w_e}] = G(-i\ln\phi) = e^{\lambda(\phi-1)}.$$
Example 4 For the Gaussian distribution $N(\mu, \sigma^2)$, we know its characteristic function is $G(z) = e^{iz\mu - \frac{1}{2}\sigma^2 z^2}$. Therefore,
$$\mathbb{E}[\phi^{w_e}] = G(-i\ln\phi) = \phi^{\mu + \frac{1}{2}\sigma^2 \ln\phi}.$$
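These closed forms are easy to sanity-check numerically. The sketch below is ours, not part of the paper: it compares the closed-form values of $\mathbb{E}[\phi^{w_e}]$ derived above against direct evaluation of the expectation (a truncated series for the Poisson case, midpoint-rule integration for the Gaussian case).

```python
import math

def poisson_phi_moment(lam, phi):
    # Closed form from the characteristic function: E[phi^w] = exp(lam * (phi - 1)).
    return math.exp(lam * (phi - 1.0))

def poisson_phi_moment_direct(lam, phi, terms=200):
    # Direct evaluation of E[phi^w] = sum_k phi^k * lam^k * e^{-lam} / k!.
    total, term = 0.0, math.exp(-lam)  # term holds Pr(w = k), starting at k = 0
    for k in range(terms):
        total += term * phi ** k
        term *= lam / (k + 1)
    return total

def gaussian_phi_moment(mu, sigma, phi):
    # Closed form: E[phi^w] = phi^{mu + sigma^2 * ln(phi) / 2}.
    return phi ** (mu + 0.5 * sigma ** 2 * math.log(phi))

def gaussian_phi_moment_direct(mu, sigma, phi, n=20000, width=10.0):
    # Midpoint-rule integration of phi^x against the N(mu, sigma^2) density.
    a, b = mu - width * sigma, mu + width * sigma
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        pdf = math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
        total += phi ** x * pdf * h
    return total

print(poisson_phi_moment(3.0, 0.8), poisson_phi_moment_direct(3.0, 0.8))
print(gaussian_phi_moment(5.0, 1.0, 0.9), gaussian_phi_moment_direct(5.0, 1.0, 0.9))
```

Both pairs agree to high precision, matching the closed forms of Examples 3 and 4.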
For some continuous distributions, no closed-form characteristic function is known and we need a proper numerical approximation method. If the support of the distribution is bounded, we can use, for example, Gauss-Legendre quadrature [57]. If the support is infinite, we can truncate the distribution and approximate the integral over the remaining finite interval. Generally speaking, a quadrature method approximates $\int_a^b f(x)\,dx$ by a linear sum $\sum_{i=1}^{k} c_i f(x_i)$, where the $c_i$ and $x_i$ are constants independent of the function $f$. A typical practice is to use a composite rule, that is, to partition $[a,b]$ into $M$ subintervals and approximate the integral using some quadrature formula over each subinterval. For example, for composite Gauss-Legendre quadrature, assuming continuity of the $2k$th derivative of $f(x)$ for some constant $k$, if we partition $[a,b]$ into $M$ subintervals and apply Gauss-Legendre quadrature of degree $k$ to each subinterval, the approximation error is
$$\text{Error} = \frac{(b-a)^{2k+1}}{M^{2k}} \cdot \frac{(k!)^4}{(2k+1)[(2k)!]^3}\, f^{(2k)}(\xi)$$
where $\xi$ is some point in $(a,b)$ [57, pp. 116]. Let $\Delta = \frac{b-a}{M}$. If we treat $k$ as a constant, the behavior of the error (in terms of $\Delta$) is $\text{Error}(\Delta) = O(\Delta^{2k} \max_\xi f^{(2k)}(\xi))$. Therefore, if the support and $\max_\xi f^{(2k)}(\xi)$ are bounded by a polynomial, we can approximate the integral, in polynomial time, such that the error is $O(1/n^\beta)$ for any fixed integer $\beta$.
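As an illustration, the composite rule described above can be assembled from NumPy's Gauss-Legendre nodes and weights (`numpy.polynomial.legendre.leggauss`). This is a sketch, not the paper's implementation; the exponential test distribution and the truncation point 40 are our own choices.

```python
import numpy as np

def composite_gauss_legendre(f, a, b, num_subintervals, degree):
    # Partition [a, b] into equal subintervals and apply Gauss-Legendre
    # quadrature of the given degree on each one.
    nodes, weights = np.polynomial.legendre.leggauss(degree)  # nodes/weights on [-1, 1]
    edges = np.linspace(a, b, num_subintervals + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mid, half = 0.5 * (lo + hi), 0.5 * (hi - lo)
        total += half * np.dot(weights, f(mid + half * nodes))
    return total

# E[phi^w] for w ~ Exponential(1), truncated to [0, 40] (the tail beyond is negligible).
phi = 0.7
f = lambda x: phi ** x * np.exp(-x)
approx = composite_gauss_legendre(f, 0.0, 40.0, 50, 8)
exact = 1.0 / (1.0 - np.log(phi))  # for Exp(1), E[phi^w] = 1 / (1 - ln phi)
print(approx, exact)
```

With a smooth integrand like this one, fifty subintervals of degree 8 already reproduce the exact value to near machine precision.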
The next lemma shows that we do not lose too much even though we can only get an approximation of $\mathbb{E}[\phi^{w_e}]$.
Lemma 13 Suppose in Theorem 5, we can only compute an approximate value of $\mathbb{E}[\phi_i^{w_e}]$, denoted by $E_{e,i}$, for each $e$ and $i$, such that $|\mathbb{E}[\phi_i^{w_e}] - E_{e,i}| \leq O(n^{-\beta})$ for some positive integer $\beta$. Denote $E(S) = \sum_{k=1}^{L} c_k \prod_{e\in S} E_{e,k}$. For any solution $S$, we have that
$$|\mathbb{E}[\widetilde{\mu}(w(S))] - E(S)| \leq O(n^{1-\beta}).$$
Proof: We need the following simple result (see [40] for a proof): suppose $a_1, \ldots, a_n$ and $e_1, \ldots, e_n$ are complex numbers such that $|a_i| \leq 1$ and $|e_i| \leq n^{-\beta}$ for all $i$ and some $\beta > 1$. Then, we have
$$\Big|\prod_{i=1}^{n}(a_i + e_i) - \prod_{i=1}^{n} a_i\Big| \leq O(n^{1-\beta}).$$
Since $|\phi_i| \leq 1$, we can see that
$$|\mathbb{E}[\phi_i^{w_e}]| = \Big|\int_{x\geq 0} \phi_i^x\, p_e(x)\,dx\Big| \leq 1.$$
The lemma simply follows by applying the above result and noticing that $L$ and all the $c_k$'s are constants. □
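The product-perturbation fact quoted from [40] can also be checked numerically. The sketch below is ours: it takes unit-modulus $a_i$ and perturbations of modulus exactly $n^{-\beta}$, and observes that the gap between the two products is on the order of $n^{1-\beta}$.

```python
import cmath
import random

random.seed(0)
n, beta = 1000, 3
delta = n ** (-beta)

# |a_i| = 1 and |e_i| = n^{-beta}, as in the hypothesis of the quoted result.
a = [cmath.exp(1j * random.uniform(0.0, 2.0 * cmath.pi)) for _ in range(n)]
e = [delta * cmath.exp(1j * random.uniform(0.0, 2.0 * cmath.pi)) for _ in range(n)]

exact = perturbed = 1 + 0j
for ai, ei in zip(a, e):
    exact *= ai
    perturbed *= ai + ei

gap = abs(perturbed - exact)
print(gap)  # at most about n * n^{-beta} = n^{1-beta} = 1e-6 here
```

Indeed $|\prod (1 + e_i/a_i) - 1| \leq (1 + n^{-\beta})^n - 1 \approx n^{1-\beta}$, which is what the experiment shows.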
We can show that Theorem 1 still holds even though we only have approximations of the $\mathbb{E}[\phi^{w_e}]$ values. The proof is straightforward and omitted.
STRATIFICATION AND π-COSUPPORT: FINITE GROUPS
arXiv:1505.06628v2 [math.RT] 6 Jan 2017
DAVE BENSON, SRIKANTH B. IYENGAR, HENNING KRAUSE
AND JULIA PEVTSOVA
Abstract. We introduce the notion of π-cosupport as a new tool for the stable module category of a finite group scheme. In the case of a finite group,
we use this to give a new proof of the classification of tensor ideal localising subcategories. In a sequel to this paper, we carry out the corresponding
classification for finite group schemes.
Contents
Introduction
1. Finite group schemes
2. π-points
3. π-cosupport and π-support
4. Finite groups
5. Cohomological support and cosupport
6. Stratification
References
Introduction
The theory of support varieties for finitely generated modules over the group
algebra of a finite group began back in the nineteen eighties with the work of Alperin
and Evens [1], Avrunin and Scott [2], Carlson [14, 15], among others. An essential
ingredient in its development was Carlson’s anticipation that for elementary abelian
p-groups the cohomological definition of support, which takes its roots in Quillen’s
fundamental work on mod p group cohomology [22], gave the same answer as the
“rank” definition through restriction to cyclic shifted subgroups.
To deal with infinitely generated modules, Benson, Carlson and Rickard [6] found
that they had to introduce cyclic shifted subgroups for generic points of subvarieties,
defined over transcendental extensions of the field of coefficients. Correspondingly,
all the homogeneous prime ideals in the cohomology ring are involved, not just the
maximal ones. This enabled them to classify the tensor ideal thick subcategories
of the stable category stmod(kG) of finitely generated kG-modules [7].
It seemed plausible that one should be able to use modifications of the same
techniques to classify the tensor ideal localising subcategories of the stable category
StMod(kG) of all kG-modules, but there were formidable technical obstructions to
realising this, and it was not until more than a decade later that this was achieved by
Date: 4th January 2016.
2010 Mathematics Subject Classification. 16G10 (primary); 20C20, 20G10 20J06 (secondary).
Key words and phrases. cosupport, stable module category, finite group scheme, localising
subcategory, support, thick subcategory.
the first three authors of this paper [10], using a rather different set of ideas than the
ones in [7]. The basic strategy was a series of reductions, via changes of categories,
that reduced the problem to that of classifying the localising subcategories of the
derived category of differential graded modules over a polynomial ring, where it was
solved using methods from commutative algebra. A series of papers [8, 9, 10, 11],
established machinery required to execute this strategy.
In this paper we give an entirely new, and conceptually different, proof of the
classification of the tensor ideal localising subcategories of StMod(kG) from [10]. It
is closer in spirit to the proof of the classification of the tensor ideal thick subcategories of stmod(kG) from [7], and rooted essentially in linear algebra. The crucial
new idea is to introduce and study π-cosupports for representations.
The inspiration for this comes from two sources. The first is the theory of
π-points developed by Friedlander and the fourth author [18] as a suitable generalisation of cyclic shifted subgroups. Whereas Carlson’s original construction only
applied to elementary abelian p-groups, and required an explicit choice of generators of the group algebra, π-points allow for a “rank variety” description of the
cohomological support for any finite group scheme; see [18, Theorem 3.6] and Section 2 of this paper. Based on this, in [18] the π-support of a module over a finite
group scheme G defined over a field k is introduced and used to classify the tensor
ideal thick subcategories of stmod(kG) (with an error corrected in a sequel [12] to
this paper). But the tensor ideal localising subcategories of StMod(kG) remained
inaccessible by these techniques alone, even for finite groups.
What is required is a π-version of the notion of cosupports introduced in [11].
The relevance of π-cosupport is through the following formula for the module of
homomorphisms, proved in Section 3. For any finite group scheme G over k, and
kG-modules M and N , there is an equality
π- cosuppG (Homk (M, N )) = π- suppG (M ) ∩ π- cosuppG (N ).
To be able to apply this formula to cohomological cosupport developed in [9] one
needs to identify the two notions of cosupport. Our strategy for making this identification is to prove that π-cosupport detects projectivity: a kG-module is projective
if and only if its π-cosupport is empty. The desired classification result would then
follow from general techniques developed in [12].
In Section 4 we prove such a detection theorem for projectivity for finite groups.
Besides yielding the desired classification theorem for StMod(kG) from [10], it implies that cohomological support and cosupport coincide with π-support and πcosupport, respectively. This remarkable fact is a vast generalisation of Carlson’s
original anticipation. The different origins of the two notions are reflected in the
fact that phenomena that are transparent, or at least easy to detect, for one may
be rather opaque and difficult to verify for the other. See Section 5 for illustrations.
The corresponding detection theorem for arbitrary finite group schemes has
turned out to be more challenging, and is dealt with in the sequel to this paper [12],
using a different approach, where again π-support and π-cosupport play a crucial
role. This brings us to the second purpose of this paper: To lay the groundwork for
the proof in [12]. For this reason parts of this paper are written in the language of
finite group schemes. However, we have attempted to present it in such a way that
the reader only interested in finite groups can easily ignore the extra generality.
1. Finite group schemes
This section summarises basic property of modules over affine group schemes;
for details we refer the reader to Jantzen [20] and Waterhouse [24].
Let k be a field. An affine group scheme G over k is a functor from commutative
k-algebras to groups, with the property that, considered as a functor to sets, it is
representable as Homk-alg (R, −). The commutative k-algebra R has a comultiplication coming from the multiplicative structure of G, and an antipode coming from
the inverse. This makes R into a commutative Hopf algebra called the coordinate
algebra k[G] of G. Conversely, if k[G] is a commutative Hopf algebra over k then
Homk-alg (k[G], −) is an affine group scheme. This work concerns only affine group
schemes so henceforth we drop the qualifier “affine”.
A group scheme G over k is finite if k[G] is finite dimensional as a k-vector space.
The k-linear dual of k[G] is then a cocommutative Hopf algebra, called the group
algebra of G, and denoted kG.
We identify modules over a finite group scheme G with modules over its group
algebra kG; this is justified by [20, I.8.6]. Thus, we will speak of kG-modules (rather
than G-modules), and write Mod kG for the category of kG-modules.
Extending the base field. Let G be a finite group scheme over a field k. If K is
a field extension of k, we write K[G] for K ⊗k k[G], which is a commutative Hopf
algebra over K. This defines a group scheme over K denoted GK , and we have a
natural isomorphism KGK ∼
= K ⊗k kG.
For each kG-module M , we set
MK := K ⊗k M
and M K := Homk (K, M ),
viewed as KGK -modules. When K or M is finite dimensional over k, these are
related as follows.
Remark 1.1. For any kG-module M , there is a natural map of KGK -modules
Homk (K, k) ⊗k M −→ Homk (K, M ) .
This is a bijection when K or M is finite dimensional over k. Then M K is a direct
sum of copies of MK as a KGK -module, for Homk (K, k) is a direct sum of copies
of K, as a K-vector space.
The assignments M 7→ MK and M 7→ M K define functors from Mod kG to
Mod KGK that are left and right adjoint, respectively, to restriction of scalars along
the homomorphism of rings kG → KGK . The result below collects some basic facts
concerning how these functors interact with tensor products and homomorphisms.
In what follows, the submodule of G-invariants of a kG-module M is denoted M G ;
see [20, I.2.10] for the construction.
Lemma 1.2. Let G be a finite group scheme over k and K an extension of the
field k. Let M and N be kG-modules.
There are natural isomorphisms of $KG_K$-modules:
(i) $(M \otimes_k N)_K \cong M_K \otimes_K N_K$.
(ii) $(M \otimes_k N)^K \cong M_K \otimes_K N^K$ when $M$ is finite dimensional over $k$.
(iii) $\operatorname{Hom}_k(M,N)^K \cong \operatorname{Hom}_K(M_K, N^K)$.
There are also natural isomorphisms of $K$-vector spaces:
(iv) $\operatorname{Hom}_{kG}(M,N)^K \cong \operatorname{Hom}_{KG_K}(M_K, N^K)$.
(v) $(M^G)^K \cong (M^K)^{G_K}$.
Proof. The isomorphisms in (i), (iii) and (iv) are standard whilst (v) is the special
case M = k and N = M of (iv). The isomorphism in (ii) can be realised as the
composition of natural maps
$$(M \otimes_k K) \otimes_K \operatorname{Hom}_k(K,N) \xrightarrow{\ \sim\ } M \otimes_k \operatorname{Hom}_k(K,N) \xrightarrow{\ \sim\ } \operatorname{Hom}_k(K, M \otimes_k N)$$
where the last map is an isomorphism as $M$ is finite dimensional over $k$. □
Examples of finite group schemes. We recall some important classes of finite
group schemes relevant to this work.
Example 1.3 (Finite groups). A finite group G defines a finite group scheme
over any field k. More precisely, the group algebra kG is a finite dimensional
cocommutative Hopf algebra, and hence its dual is a commutative Hopf algebra
which defines a group scheme over k; it is also denoted G. A finite group E is an elementary abelian p-group if it is isomorphic to $(\mathbb{Z}/p)^r$, for some prime number p. The integer r is then the rank of E. Over a field k of characteristic p, there are isomorphisms of k-algebras
$$k[E] \cong k^{\times r} \quad\text{and}\quad kE \cong k[z_1,\ldots,z_r]/(z_1^p,\ldots,z_r^p).$$
The comultiplication on $kE$ is determined by the map $z_i \mapsto z_i \otimes 1 + z_i \otimes z_i + 1 \otimes z_i$ and the antipode is determined by the map $z_i \mapsto (z_i + 1)^{p-1} - 1$.
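As a quick sanity check (ours, not from the paper), one can verify on the generator $z$ that the stated comultiplication and antipode are compatible: the antipode axiom gives $m(S \otimes \mathrm{id})\Delta(z) = S(z) + z + S(z)z$, which must equal $\varepsilon(z) = 0$ in $kE$. The sketch below does the computation in $\mathbb{F}_p[z]/(z^p)$ with $p = 5$.

```python
def polymul(f, g, p):
    # Multiply coefficient lists modulo (p, z^p), i.e. in F_p[z]/(z^p).
    h = [0] * p
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            if i + j < p:
                h[i + j] = (h[i + j] + fi * gj) % p
    return h

def polypow(f, k, p):
    r = [1] + [0] * (p - 1)
    for _ in range(k):
        r = polymul(r, f, p)
    return r

p = 5
z = [0, 1] + [0] * (p - 2)
# Antipode on the generator: S(z) = (z + 1)^{p-1} - 1.
s = polypow([1, 1] + [0] * (p - 2), p - 1, p)
s[0] = (s[0] - 1) % p
# With Delta(z) = z(x)1 + 1(x)z + z(x)z, the antipode axiom on z reads
# S(z) + z + S(z) * z = 0 in F_p[z]/(z^p).
lhs = [(si + zi + ti) % p for si, zi, ti in zip(s, z, polymul(s, z, p))]
print(lhs)  # [0, 0, 0, 0, 0]
```

The identity boils down to $S(z)(1+z) + z = (1+z)^p - (1+z) + z = z^p = 0$ in $kE$, which the computation confirms.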
Example 1.4 (Additive groups). Fix a positive integer r and let $\mathbb{G}_{a(r)}$ denote the finite group scheme whose coordinate algebra is
$$k[\mathbb{G}_{a(r)}] = k[t]/(t^{p^r})$$
with comultiplication defined by $t \mapsto t \otimes 1 + 1 \otimes t$ and antipode $t \mapsto -t$. There is an isomorphism of k-algebras
$$k\mathbb{G}_{a(r)} \cong k[u_0,\ldots,u_{r-1}]/(u_0^p,\ldots,u_{r-1}^p).$$
We note that $\mathbb{G}_{a(r)}$ is the $r$th Frobenius kernel of the additive group scheme $\mathbb{G}_a$ over k; see, for instance, [20, I.9.4].
Example 1.5 (Quasi-elementary group schemes). Following Bendel [3], a group
scheme over a field k of positive characteristic p is said to be quasi-elementary if it
is isomorphic to $\mathbb{G}_{a(r)} \times (\mathbb{Z}/p)^s$. Its group algebra structure is the same as that of
an elementary abelian p-group.
A finite group scheme G over a field k is unipotent if its group algebra kG is
local. Quasi-elementary group schemes are unipotent. Also, the group scheme over
a field of positive characteristic p defined by a finite p-group is unipotent.
2. π-points
In the rest of this paper G denotes a finite group scheme defined over a field k of
positive characteristic p. We recall the notion of π-points and basic results about
them. The primary references are the papers of Friedlander and Pevtsova [17, 18].
π-points. A π-point of G, defined over a field extension K of k, is a morphism of
K-algebras
α : K[t]/(tp ) −→ KGK
that factors through the group algebra of a unipotent abelian subgroup scheme C
of GK , and such that KGK is flat when viewed as a left (equivalently, as a right)
module over K[t]/(tp ) via α. It should be emphasised that C need not be defined
over k; see Examples 2.6. Restriction along α defines a functor
α∗ : Mod KGK −→ Mod K[t]/(tp ) .
The result below extends [18, Theorem 4.6], that dealt with MK .
Theorem 2.1. Let α : K[t]/(tp ) → KGK and β : L[t]/(tp ) → LGL be π-points of
G. Then the following conditions are equivalent.
(i) For any finite dimensional kG-module M , the module α∗ (MK ) is projective
if and only if β ∗ (ML ) is projective.
(ii) For any kG-module M , the module α∗ (MK ) is projective if and only if
β ∗ (ML ) is projective.
(iii) For any finite dimensional kG-module M , the module α∗ (M K ) is projective
if and only if β ∗ (M L ) is projective.
(iv) For any kG-module M , the module α∗ (M K ) is projective if and only if
β ∗ (M L ) is projective.
Proof. The equivalence of (i) and (ii) is proved in [18, Theorem 4.6]. The equivalence of (iii) and (iv) can be proved in exactly the same way.
(i) ⇐⇒ (iii) Since M is finite dimensional, M K is a direct sum of copies of MK ,
by Remark 1.1. Hence α∗ (M K ) is projective if and only if α∗ (MK ) is projective.
The same is true of β ∗ (M L ) and β ∗ (ML ).
Definition 2.2. When π-points α and β satisfy the conditions of Theorem 2.1,
they are said to be equivalent, and denoted α ∼ β.
For ease of reference, we list some basic properties of π-points.
Remark 2.3. (1) Let α : K[t]/(tp ) → KGK be a π-point and L a field extension of
K. Then L ⊗K α : L[t]/(tp ) → LGL is a π-point and it is easy to verify, say from
condition (i) of Theorem 2.1, that α ∼ L ⊗K α.
(2) Every π-point of a subgroup scheme H of G is naturally a π-point of G. This
follows from the fact that an embedding of group schemes always induces a flat
map of group algebras.
(3) Every π-point is equivalent to one that factors through a quasi-elementary
subgroup scheme over the same field extension; see [17, Proposition 4.2].
π-points and cohomology. The cohomology of G with coefficients in a kG-module M is denoted $H^*(G, M)$. It can be identified with $\operatorname{Ext}^*_{kG}(k, M)$. Recall that
H ∗ (G, k) is a k-algebra that is graded-commutative (because kG is a Hopf algebra)
and finitely generated, as was proved by Friedlander and Suslin [19, Theorem 1.1].
Let Proj H ∗ (G, k) denote the set of homogeneous prime ideals H ∗ (G, k) that are
properly contained in the maximal ideal of positive degree elements.
Given a π-point $\alpha : K[t]/(t^p) \to KG_K$ we write $H^*(\alpha)$ for the composition of homomorphisms of $k$-algebras
$$H^*(G,k) = \operatorname{Ext}^*_{kG}(k,k) \xrightarrow{\ K \otimes_k -\ } \operatorname{Ext}^*_{KG_K}(K,K) \longrightarrow \operatorname{Ext}^*_{K[t]/(t^p)}(K,K),$$
where the second map is induced by restriction. By Frobenius reciprocity and
the theorem of Friedlander and Suslin recalled above, Ext∗K[t]/(tp ) (K, K) is finitely
generated as a module over $\operatorname{Ext}^*_{KG_K}(K,K)$. Since the former is nonzero, it follows that the radical of the kernel of the map $\operatorname{Ext}^*_{KG_K}(K,K) \to \operatorname{Ext}^*_{K[t]/(t^p)}(K,K)$ is a prime ideal different from $\operatorname{Ext}^{\geq 1}_{KG_K}(K,K)$, and hence that the radical of $\operatorname{Ker} H^*(\alpha)$ is a prime ideal in $H^*(G,k)$, different from $H^{\geq 1}(G,k)$.
Remark 2.4. Fix a point $\mathfrak{p}$ in $\operatorname{Proj} H^*(G,k)$. There exists a field $K$ and a π-point
$$\alpha_{\mathfrak{p}} : K[t]/(t^p) \longrightarrow KG_K$$
such that $\sqrt{\operatorname{Ker} H^*(\alpha_{\mathfrak{p}})} = \mathfrak{p}$. In fact, there is such a $K$ that is a finite extension of the degree zero part of the homogeneous residue field at $\mathfrak{p}$; see [18, Theorem 4.2]. It is shown in [18, Corollary 2.11] that $\alpha \sim \beta$ if and only if there is an equality
$$\sqrt{\operatorname{Ker} H^*(\alpha)} = \sqrt{\operatorname{Ker} H^*(\beta)}.$$
In this way, the equivalence classes of π-points are in bijection with Proj H ∗ (G, k).
Theorem 2.5 ([18, Theorem 3.6]). Let G be a finite group scheme over a field k.
Taking a π-point α to the radical of Ker H ∗ (α) induces a bijection between the set
of equivalence classes of π-points of G and Proj H ∗ (G, k).
We illustrate these ideas on the Klein four group that will be the running example
in this work.
Example 2.6. Let V = Z/2 × Z/2 and k a field of characteristic two. The group
algebra kV is isomorphic to k[x, y]/(x2 , y 2 ), where x+ 1 and y + 1 correspond to the
generators of V . Let J = (x, y) denote the Jacobson radical of kV . It is well-known
that H ∗ (V, k) is the symmetric algebra on the k-vector space Homk (J/J 2 , k); see,
for example, [4, Corollary 3.5.7]. Thus H ∗ (V, k) is a polynomial ring over k in two
variables in degree one and $\operatorname{Proj} H^*(V,k) \cong \mathbb{P}^1_k$.
The π-point corresponding to a rational point [a, b] ∈ P1k (using homogeneous
coordinates) is represented by the map of k-algebras
k[t]/(tp ) −→ k[x, y]/(x2 , y 2 ) where t 7→ ax + by.
More generally, for each closed point p ∈ P1k there is some finite field extension
K of k such that P1K contains a rational point [a′ , b′ ] over p (with Aut(K/k) acting
transitively on the finite set of points over p). Then the π-point corresponding to
p is represented by the map of K-algebras
K[t]/(tp ) −→ K[x, y]/(x2 , y 2 ) where t 7→ a′ x + b′ y.
Now let K denote the field of rational functions in a variable s. The generic
point of P1k then corresponds to the map of K-algebras
K[t]/(tp ) −→ K[x, y]/(x2 , y 2 ) where t 7→ x + sy.
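As an illustrative check (ours, not part of the paper), linear algebra over $\mathrm{GF}(2)$ confirms two things for each rational point $[a,b]$: the element $ax + by$ squares to zero in $k[x,y]/(x^2,y^2)$, so $t \mapsto ax + by$ gives a well-defined map from $K[t]/(t^2)$; and $kV$ is free of rank $2$ over $K[t]/(t^2)$ (equivalently, multiplication by $ax + by$ has rank $\dim_k kV / 2$), which is the flatness required of a π-point.

```python
def mul(f, g):
    # Multiply in k[x, y]/(x^2, y^2) over GF(2); monomials are (i, j) exponent pairs.
    h = {}
    for (i1, j1), c1 in f.items():
        for (i2, j2), c2 in g.items():
            i, j = i1 + i2, j1 + j2
            if i < 2 and j < 2:
                h[(i, j)] = (h.get((i, j), 0) + c1 * c2) % 2
    return {m: c for m, c in h.items() if c}

def gf2_rank(rows):
    # Row rank of a 0/1 matrix via Gaussian elimination over GF(2).
    rows = [list(r) for r in rows]
    rank = 0
    for col in range(len(rows[0]) if rows else 0):
        pivot = next((r for r in rows[rank:] if r[col]), None)
        if pivot is None:
            continue
        idx = rows.index(pivot, rank)
        rows[rank], rows[idx] = rows[idx], rows[rank]
        for r in rows[rank + 1:]:
            if r[col]:
                for c in range(len(r)):
                    r[c] ^= rows[rank][c]
        rank += 1
    return rank

basis = [(0, 0), (1, 0), (0, 1), (1, 1)]        # 1, x, y, xy
for a, b in [(1, 0), (0, 1), (1, 1)]:            # the rational points [a, b] of P^1 over GF(2)
    u = {m: c for m, c in (((1, 0), a), ((0, 1), b)) if c}
    assert mul(u, u) == {}                       # (ax + by)^2 = 0: t -> ax + by is well defined
    rows = [[mul(u, {m: 1}).get(mb, 0) for mb in basis] for m in basis]
    assert gf2_rank(rows) == 2                   # kV is free of rank 2 over K[t]/(t^2)
print("all three rational pi-points are flat")
```

For a module over $K[t]/(t^2)$, freeness is equivalent to the $t$-action having rank equal to half the dimension, which is what the rank-2 assertion checks.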
3. π-cosupport and π-support
As before, G is a finite group scheme over a field k of positive characteristic p.
In this section, we introduce a notion of π-cosupport of a kG-module, by analogy
with the notion of π-support introduced in [18, §5]. The main result, Theorem 3.4,
is a formula that computes the π-cosupport of a function object, in terms of the
π-support and π-cosupport of its component modules.
Definition 3.1. The π-cosupport of a kG-module M is the subset of Proj H ∗ (G, k)
defined by
π- cosuppG (M ) := {p ∈ Proj H ∗ (G, k) | α∗p (Homk (K, M )) is not projective}.
Here αp : K[t]/(tp ) → KGK denotes a representative of the equivalence class of
π-points corresponding to p; see Remark 2.4. The definition is modelled on that of
the π-support of M , introduced in [18] as the subset
π- suppG (M ) := {p ∈ Proj H ∗ (G, k) | α∗p (K ⊗k M ) is not projective}.
This is denoted Π(G)M in [18]; our notation is closer to the one used in [8] for
cohomological support.
Projectivity. For later use, we record the well-known property that the module
of homomorphisms preserves and detects projectivity.
Lemma 3.2. Let M and N be kG-modules.
(i) If M or N is projective, then so is Homk (M, N ).
(ii) M is projective if and only if Endk (M ) is projective.
Proof. We repeatedly use the fact that a kG-module is projective if and only if it
is injective; see, for example, [20, Lemma I.3.18].
(i) The functor Homk (M, −) takes injective kG-modules to injective kG-modules
because it is right adjoint to an exact functor. Thus, when N injective, so is
Homk (M, N ). The same conclusion follows also from the projectivity of M because
Homk (−, N ) takes projective modules to injective modules, as follows from the
natural isomorphisms:
$$\operatorname{Hom}_{kG}(-, \operatorname{Hom}_k(M,N)) \cong \operatorname{Hom}_{kG}(- \otimes_k M, N) \cong \operatorname{Hom}_{kG}(M \otimes_k -, N) \cong \operatorname{Hom}_{kG}(M, \operatorname{Hom}_k(-,N)).$$
The first and the last isomorphisms are by adjunction, and the one in the middle
holds because kG is cocommutative.
(ii) When M is projective, so is Endk (M ), by (i). For the converse, observe
that when Endk (M ) is projective, so is Endk (M ) ⊗k M , since − ⊗k M preserves
projectivity being the left adjoint of an exact functor. It remains to note that M is a
direct summand of Endk (M )⊗k M , because the composition of the homomorphisms
$$M \xrightarrow{\ \nu\ } \operatorname{End}_k(M) \otimes_k M \xrightarrow{\ \varepsilon\ } M, \quad\text{where } \nu(m) = \operatorname{id}_M \otimes\, m \text{ and } \varepsilon(f \otimes m) = f(m),$$
of $kG$-modules equals the identity on $M$. □
We now work towards Theorem 3.4 that gives a formula for the cosupport of a
function object, and the support of a tensor product. These are useful for studying
modules over finite group schemes, as will become clear in Section 5; see also [12].
Function objects and tensor products. The proof of Theorem 3.4 is complicated by the fact that, in general, a π-point α : K[t]/(tp ) → KGK does not
preserve Hopf structures, so restriction along α does not commute with taking tensor products, or the module of homomorphisms. To deal with this situation, we
adapt an idea from the proof of [17, Lemma 3.9]—see also [15, Lemma 6.4] and [23,
Lemma 6.4]—where the equivalence of (i) and (iii) in the following result is proved.
The hypothesis on the algebra A is motivated by Remark 2.3(3) and Example 1.5.
Lemma 3.3. Let K be a field of positive characteristic p and A a cocommutative
Hopf K-algebra that is isomorphic as an algebra to $K[t_1,\ldots,t_r]/(t_1^p,\ldots,t_r^p)$. Let
α : K[t]/(tp ) −→ A
be a flat homomorphism of K-algebras. For any A-modules M and N , the following
conditions are equivalent:
(i) α∗ (M ⊗K N ) is projective.
(ii) α∗ (HomK (M, N )) is projective.
(iii) α∗ (M ) or α∗ (N ) is projective.
Proof. As noted before, (i) ⇐⇒ (iii) is [17, Lemma 3.9]; the hypotheses of op. cit.
includes that M and N are finite dimensional, but that is not used in the proof.
We employ a similar argument to verify that (ii) and (iii) are equivalent.
Let σ : A → A be the antipode of A, ∆ : A → A ⊗K A its comultiplication, and
set I = Ker(A → K), the augmentation ideal of A. Identifying t with its image in
A, one has
(1 ⊗ σ)∆(t) = t ⊗ 1 − 1 ⊗ t + w with w ∈ I ⊗K I;
see [20, I.2.4]. Recall that the action of a ∈ A on HomK (M, N ) is given by multiplication with (1 ⊗ σ)∆(a), so that for f ∈ HomK (M, N ) and m ∈ M one has
$$(a \cdot f)(m) = \sum a' f(\sigma(a'')m) \quad\text{where}\quad \Delta(a) = \sum a' \otimes a''.$$
Given a module over A ⊗K A, we consider two K[t]/(tp )-structures on it: One
where t acts via multiplication with (1 ⊗ σ)∆(t) and another where it acts via
multiplication with t ⊗ 1 − 1 ⊗ t. We claim that these two K[t]/(tp )-modules are
both projective or both not projective. This follows from a repeated use of [17,
Proposition 2.2] because w can be represented as a sum of products of nilpotent
elements of A ⊗K A, and each nilpotent element x of A ⊗K A satisfies xp = 0.
We may thus assume that t acts on HomK (M, N ) via t ⊗ 1 − 1 ⊗ t. There is then
an isomorphism of K[t]/(tp )-modules
$$\alpha^*(\operatorname{Hom}_K(M,N)) \cong \operatorname{Hom}_K(\alpha^*(M), \alpha^*(N)),$$
where the action of K[t]/(tp ) on the right hand side is the one obtained by viewing
it as a Hopf K-algebra with comultiplication defined by t 7→ t ⊗ 1 + 1 ⊗ t and
antipode t 7→ −t. It remains to observe that for any K[t]/(tp )-modules U, V , the
module HomK (U, V ) is projective if and only if one of U or V is projective.
Indeed, if $U$ or $V$ is projective, so is $\operatorname{Hom}_K(U,V)$, by Lemma 3.2. As to the converse, every $K[t]/(t^p)$-module is a direct sum of cyclic modules, so when $U$ and $V$ are not projective, they must have direct summands isomorphic to $K[t]/(t^u)$ and $K[t]/(t^v)$, respectively, for some $1 \leq u, v < p$. Then $\operatorname{Hom}_K(K[t]/(t^u), K[t]/(t^v))$ is a direct summand of $\operatorname{Hom}_K(U,V)$. The former cannot be projective as a $K[t]/(t^p)$-module because its dimension as a $K$-vector space is $uv$, while the dimension of any projective module must be divisible by $p$: projective $K[t]/(t^p)$-modules are free. □
The first part of the result below is [18, Proposition 5.2].
Theorem 3.4. Let M and N be kG-modules. Then there are equalities
(i) π- suppG (M ⊗k N ) = π- suppG (M ) ∩ π- suppG (N ),
(ii) π- cosuppG (Homk (M, N )) = π- suppG (M ) ∩ π- cosuppG (N ).
Proof. We prove part (ii). Part (i) can be proved in the same fashion; see [18,
Proposition 5.2].
Fix a π-point α : K[t]/(tp ) → KGK . By Remark 2.3(3), we can assume α factors
as K[t]/(tp ) → KC → KGK , where C is a quasi-elementary subgroup scheme of
GK . As noted in Lemma 1.2(iii), there is an isomorphism of KGK -modules
Homk (M, N )K ≅ HomK (MK , N K ) .
We may restrict a KGK -module to K[t]/(tp ) by first restricting to KC, and
Homk (−, −) commutes with this operation. Thus the desired result follows from
the equivalence of (ii) and (iii) in Lemma 3.3, applied to the map K[t]/(tp ) → KC,
keeping in mind the structure of KC; see Example 1.5.
STRATIFICATION AND π-COSUPPORT: FINITE GROUPS
9
Basic computations. Next we record, for later use, some elementary observations
concerning support and cosupport. The converse of (i) in the result below also holds.
For finite groups this is proved in Theorem 4.4 below, and for finite group schemes
this is one of the main results of [12].
Lemma 3.5. Let M be a kG-module.
(i) π- suppG (M ) = ∅ = π- cosuppG (M ) when M is projective.
(ii) π- cosuppG (M ) = π- suppG (M ) when M is finite dimensional.
(iii) π- suppG (k) = Proj H ∗ (G, k) = π- cosuppG (k).
Proof. Part (ii) is immediate from definitions, given Remark 1.1. For the rest, fix
a π-point α : K[t]/(tp ) → KGK .
(i) When M is projective, so are the KGK -modules K ⊗k M and Homk (K, M ),
and restriction along α preserves projectivity, as α is a flat map. This justifies (i).
(iii) Evidently, kK equals K and α∗ (K) is non-projective. Since α was arbitrary,
one deduces that π- suppG (k) is all of Proj H ∗ (G, k). That this also equals the
π-cosupport of k now follows from (ii).
Corollary 3.6. If M is a kG-module, then there is an equality
π- cosuppG (Homk (M, k)) = π- suppG (M ).
Proof. This follows from Theorem 3.4 (ii) and Lemma 3.5 (iii).
The equality of π-support and π-cosupport for finite dimensional modules, which
holds by Lemma 3.5(ii), may fail for infinite dimensional modules. We describe an
example over the Klein four group; see Example 5.5 for a general construction.
Example 3.7. Let V be the Klein four group Z/2 × Z/2 and k a field of characteristic two. The π-points of kV were described in Example 2.6. We keep the notation
from there. Let M be the infinite dimensional kV -module with basis elements ui
and vi for i ≥ 0 and kV -action defined by
xui = vi ,  yui = vi−1 ,  xvi = 0,  yvi = 0 ,
where v−1 is interpreted as the zero vector. Diagrammatically, M is an infinite
zigzag: the ui form the top row, the vi the bottom row, with x mapping ui down
to vi and y mapping ui down to vi−1 .
Claim. The π-support of M is the closed point {[0, 1]} of P1k whilst its π-cosupport
contains also the generic point.
Given a finite field extension K of k, it is not hard to verify that for any rational
point [a, b] ∈ P1K the image of multiplication by ax + by on MK is its socle, which
is spanned by the elements {vi }i≥0 . For [a, b] ≠ [0, 1], this is also the kernel of
ax + by, whilst for [a, b] = [0, 1] it contains, in addition, the element u0 . In view of
Remark 1.1, this justifies the assertions about the closed points of P1k .
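These kernel/image computations can be spot-checked on a finite truncation of M. The sketch below is ours (all helper names are illustrative); it works over the prime field GF(2), whose only rational points of P1 are [1, 0], [0, 1] and [1, 1], and verifies that among these the kernel of ax + by exceeds the image only at [0, 1]. The full statement about P1K of course requires field extensions; this is merely a sanity check.

```python
def gf2_rank(vectors):
    # Rank over GF(2); each vector is an int bitmask (xor-basis trick).
    basis = []
    for v in vectors:
        for b in basis:
            v = min(v, v ^ b)  # clear the leading bit of b from v, if set
        if v:
            basis.append(v)
    return len(basis)

def op_images(n, a, b):
    """Images of the basis of the truncated module M_n = span(u_0..u_{n-1},
    v_0..v_{n-1}) under a*x + b*y, over GF(2). Basis vector i < n is u_i;
    basis vector n + i is v_i. Each image is a bitmask over the 2n vectors."""
    imgs = []
    for i in range(n):            # u_i -> a*v_i + b*v_{i-1}  (v_{-1} = 0)
        img = 0
        if a:
            img ^= 1 << (n + i)
        if b and i > 0:
            img ^= 1 << (n + i - 1)
        imgs.append(img)
    imgs.extend(0 for _ in range(n))  # x and y kill every v_i
    return imgs

def ker_minus_im(n, a, b):
    imgs = op_images(n, a, b)
    r = gf2_rank(imgs)            # dimension of the image
    return (2 * n - r) - r        # dim kernel - dim image (rank-nullity)
```

Running this with n = 6 gives kernel = image at [1, 0] and [1, 1], while at [0, 1] the kernel is strictly larger (the image misses u0, and the truncation loses one more socle element at the boundary).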
For the generic point let K denote the field of rational functions in a variable s.
It is again easy to check that the kernel of x + sy and the image of multiplication
by x + sy on MK are equal to its socle. So the generic point is not in π- suppV (M ).
For cosupport, consider the k-linear map f : K → M defined as follows. Given
a rational function φ(s), its partial fraction expansion consists of a polynomial
ψ(s) plus the negative parts of the Laurent expansions at the poles. If ψ(s) =
α0 + α1 s + · · · , we define f (φ) to be α0 u0 + α1 u1 + · · · . By definition, ((x + sy)f )(a) =
xf (a) + yf (sa); using this it is easy to calculate that f is in the kernel of x + sy.
On the other hand, any function in the image of x + sy lands in the socle of M . It
follows that the kernel of x + sy is strictly larger than the image, and so the generic
point is in the π-cosupport of M .
4. Finite groups
The focus of the rest of the paper is on finite groups. In this section we prove that
a module over a finite group is projective if and only if it has empty π-cosupport.
The key ingredient is a version of Dade’s lemma for elementary abelian groups. This
in turn is based on an analogue of the Kronecker quiver lemma [6, Lemma 4.1].
Lemma 4.1. Let k be an algebraically closed field, ℓ a non-trivial extension field of
k, and let V, W be k-vector spaces. If there exist k-linear maps f, g : V → W with
the property that for every pair of scalars λ and µ in ℓ, not both zero, the linear
map λf + µg : Homk (ℓ, V ) → Homk (ℓ, W ) is an isomorphism, then V = 0 = W .
Remark 4.2. Note that λf + µg here really means Homk (λ, f ) + Homk (µ, g) since
this is the way ℓ acts on homomorphisms.
Proof. Since k is algebraically closed, we may as well assume that ℓ = k(s), a simple
transcendental extension of k, since further extending the field only strengthens the
hypothesis without changing the conclusion.
Use g to identify V and W . Then f is a k-endomorphism of V with the property
that for all µ ∈ ℓ the endomorphism Homk (1, f ) + Homk (µ, id) of Homk (ℓ, V ) is
invertible. The action of f makes V into a k[t]-module, with t acting as f does.
Since f + µ · id is invertible for all µ ∈ k, and k is algebraically closed, V is a
k(t)-module.
Consider the homomorphism k(t) ⊗k k(s) → k(t) of rings that is induced by the
assignment p(t) ⊗ q(s) 7→ p(t)q(t). It is not hard to verify that its kernel is the ideal
generated by t − s, so there is an exact sequence of k(t) ⊗k k(s)-modules
0 −→ k(t) ⊗k k(s) −−(t−s)−−→ k(t) ⊗k k(s) −→ k(t) −→ 0 .
Applying Homk(t) (−, k(t)) and using adjunction yields an exact sequence
0 −→ k(t) −→ Homk (k(s), k(t)) −−(t−s)−−→ Homk (k(s), k(t)) −→ 0 .
Thus t− s is not invertible on Homk (k(s), k(t)). If V is nonzero then it has k(t) as a
summand as a k(t)-module, so that t − s, that is to say, Homk (1, t) + Homk (−s, id),
is not invertible. This contradicts the hypothesis.
Dade’s lemma for cosupport. The result below, and its proof, are modifications
of [6, Theorem 5.2]. Given a k-algebra R and an extension field K of k, we write
RK for the K-algebra K ⊗k R and M K for the RK -module Homk (K, M ).
Theorem 4.3. Let k be a field of positive characteristic p and set
R = k[t1 , . . . , tr ]/(tp1 , . . . , tpr ).
Let K be an algebraically closed field of transcendence degree at least r − 1 over k.
Then an R-module M is projective if and only if for all flat maps α : K[t]/(tp ) → RK
the module α∗ (M K ) is projective.
Proof. If M is projective as an R-module, then M K is projective as an RK -module,
and because α is flat, it follows that α∗ (M K ) is projective.
Assume that α∗ (M K ) is projective for all α as in the statement of the theorem.
We verify that M is projective as an R-module by induction on r, the case r = 1
being trivial. Assume r ≥ 2 and that the theorem is true with r replaced by r − 1.
It is easy to verify that the hypothesis and the conclusion of the result are
unchanged if we pass from k to any extension field of k contained in K. In particular, replacing
k by its algebraic closure in K we may assume that k is itself algebraically closed.
The plan is to use Lemma 4.1 to prove that Ext1R (k, M ) = 0. Since R is Artinian
it would then follow that M is injective, and hence also projective, because R is a
self-injective algebra.
We first note that for any extension field ℓ of k, tensoring with ℓ gives a one-to-one
map Ext∗R (k, k) → Ext∗Rℓ (ℓ, ℓ), which we view as an inclusion, and a natural
isomorphism of ℓ-vector spaces
ExtiR (k, M )ℓ ≅ ExtiRℓ (ℓ, M ℓ ) for i ∈ Z.
These remarks will be used repeatedly in the argument below. They imply, for
example, that ExtiR (k, M ) = 0 if and only if ExtiRℓ (ℓ, M ℓ ) = 0.
Let β : Ext1R (k, k) → Ext2R (k, k) be the Bockstein map; see [5, §4.3] or, for a
slightly different approach, [20, I.4.22]. Recall that this map is semilinear through
the Frobenius map, in the sense that
β(λε) = λp β(ε) for ε ∈ Ext1R (k, k) and λ ∈ k.
Fix an extension field ℓ of k in K that is algebraically closed and of transcendence
degree 1. Choose linearly independent elements ε and γ of Ext1R (k, k). The elements
β(ε) and β(γ) of Ext2R (k, k) induce k-linear maps
f, g : Ext1R (k, M ) −→ Ext3R (k, M ) .
Let λ and µ be elements in ℓ, not both zero, and consider the element
(4.1)   λ1/p ε + µ1/p γ ∈ Ext1Rℓ (ℓ, ℓ) ≅ Homℓ (J/J 2 , ℓ) ,
where J is the radical of the ring Rℓ . It defines a linear subspace of codimension
one in the ℓ-linear span of t1 , . . . , tr in Rℓ . Let S be the subalgebra of Rℓ generated
by this subspace and view M ℓ as an S-module, by restriction of scalars.
Claim. As an S-module, M ℓ is projective.
Indeed, S is isomorphic to ℓ[z1 , . . . , zr−1 ]/(z1p , . . . , zpr−1 ), as an ℓ-algebra. It is
not hard to verify that the hypotheses of the theorem apply to the S-module M ℓ
and the extension field ℓ ⊂ K. Since the transcendence degree of K over ℓ is at least r − 2,
the induction hypothesis yields that the S-module M ℓ is projective, as claimed.
We give Rℓ the structure of a Hopf algebra by making the generators ti primitive:
that is, comultiplication is determined by the map ti 7→ ti ⊗1+1⊗ti and the antipode
is determined by the map ti 7→ −ti . Note that Rℓ ⊗S ℓ has a natural structure of
an Rℓ -module, with action induced from the left hand factor.
Claim. The Rℓ -module (Rℓ ⊗S ℓ) ⊗ℓ M ℓ , with the diagonal action, is projective.
Indeed, it is not hard to see that the comultiplication on Rℓ induces one on S, so the
latter is a sub Hopf-algebra of the former. As M ℓ is projective as an S-module, by
the previous claim, so is Homℓ (M ℓ , N ) for any S-module N ; see Lemma 3.2. Since
projective S-modules are injective, the desired claim is then a consequence of the
following standard isomorphisms of functors
HomRℓ ((Rℓ ⊗S ℓ) ⊗ℓ M ℓ , −) ≅ HomRℓ (Rℓ ⊗S ℓ, Homℓ (M ℓ , −)) ≅ HomS (ℓ, Homℓ (M ℓ , −))
on the category of Rℓ -modules.
The Bockstein of the element (4.1) is
λβ(ε) + µβ(γ) ∈ Ext2Rℓ (ℓ, ℓ),
and is represented by an exact sequence of the form
(4.2)   0 −→ ℓ −→ Rℓ ⊗S ℓ −→ Rℓ ⊗S ℓ −→ ℓ −→ 0 .
For any Rℓ -module N the Hopf algebra structure on Rℓ gives a map of ℓ-algebras
Ext∗Rℓ (ℓ, ℓ) → Ext∗Rℓ (N, N ) such that the two actions of Ext∗Rℓ (ℓ, ℓ) on Ext∗Rℓ (ℓ, N )
coincide, up to the usual sign [4, Corollary 3.2.2]. What this entails is that the map
λf + µg : Ext1Rℓ (ℓ, M ℓ ) −→ Ext3Rℓ (ℓ, M ℓ )
may be described as splicing with the extension
0 −→ M ℓ −→ (Rℓ ⊗S ℓ) ⊗ℓ M ℓ −→ (Rℓ ⊗S ℓ) ⊗ℓ M ℓ −→ M ℓ −→ 0 ,
which is obtained from the exact sequence (4.2) by applying − ⊗ℓ M ℓ . By the
preceding claim, the modules in the middle are projective, and so the element
λβ(ε) + µβ(γ) induces a stable isomorphism
Ω2 (M ℓ ) −∼→ M ℓ .
It follows that λf + µg is an isomorphism for all λ, µ in ℓ, not both zero. Thus
Lemma 4.1 applies and yields Ext1R (k, M ) = 0, as desired.
Support and cosupport detect projectivity. The theorem below is the main
result of this work. Several consequences are discussed in the subsequent sections.
Theorem 4.4. Let k be a field and G a finite group. For any kG-module M , the
following conditions are equivalent.
(i) M is projective.
(ii) π- suppG (M ) = ∅
(iii) π- cosuppG (M ) = ∅.
Proof. We may assume that the characteristic of k, say p, divides the order of G.
The implications (i) =⇒ (ii) and (i) =⇒ (iii) are by Lemma 3.5.
(iii) =⇒ (i) Let E be an elementary abelian p-subgroup of G and let M ↓E
denote M viewed as a kE-module. The hypothesis implies π- cosuppE (M ↓E ) = ∅,
by Remark 2.3(2), and then it follows from Theorem 4.3 that M ↓E is projective.
Chouinard’s theorem [16, Theorem 1] thus implies that M is projective.
(ii) =⇒ (i) When π- suppG (M ) = ∅, it follows from Theorem 3.4 that
π- cosuppG Endk (M ) = ∅ .
The already settled implication (iii) =⇒ (i) now yields that Endk (M ) is projective.
Hence M is projective, by Lemma 3.2.
5. Cohomological support and cosupport
The final part of this paper is devoted to applications of Theorem 4.4. We proceed in several steps and derive global results about the module category of a finite
group from local properties, including a comparison of π-support and π-cosupport
with cohomological support and cosupport. In the next section we consider the
classification of thick and localising subcategories of the stable module category.
From now on G denotes a finite group, k a field of positive characteristic dividing
the order of G. Let StMod(kG) be the stable module category of all (meaning,
also infinite dimensional) kG-modules modulo the projectives; see, for example, [4,
§2.1]. This is not an abelian category; rather, it has the structure of a compactly
generated tensor triangulated category and comes equipped with a natural action
of the cohomology ring H ∗ (G, k); see [8, Section 10]. This yields the notion of
cohomological support and cosupport developed in [8, 11]. More precisely, for each
homogeneous prime ideal p of H ∗ (G, k) that is different from the maximal ideal of
positive degree elements, there is a distinguished object Γp k. Using this one defines
for each kG-module M its cohomological support
suppG (M ) = {p ∈ Proj H ∗ (G, k) | Γp k ⊗k M is not projective}
and its cohomological cosupport
cosuppG (M ) = {p ∈ Proj H ∗ (G, k) | Homk (Γp k, M ) is not projective}.
The result below reconciles these notions with the corresponding notions defined
in terms of π-points.
Theorem 5.1. Let G be a finite group and M a kG-module. Then
cosuppG (M ) = π- cosuppG (M )
and
suppG (M ) = π- suppG (M ) ,
regarded as subsets of Proj H ∗ (G, k).
Proof. We use the fact that π- suppG (Γp k) = {p}; see [18, Proposition 6.6]. Then
using Theorems 4.4 and 3.4 (ii) one gets
p ∈ cosuppG (M ) ⇐⇒ Homk (Γp k, M ) is not projective   (by definition)
⇐⇒ π- cosuppG (Homk (Γp k, M )) ≠ ∅   (by Theorem 4.4)
⇐⇒ π- suppG (Γp k) ∩ π- cosuppG (M ) ≠ ∅   (by Theorem 3.4(ii))
⇐⇒ p ∈ π- cosuppG (M ).
This gives the equality involving cosupports.
In the same vein, using Theorems 4.4 and 3.4 (i) one gets
p ∈ suppG (M ) ⇐⇒ Γp k ⊗k M is not projective   (by definition)
⇐⇒ π- suppG (Γp k ⊗k M ) ≠ ∅   (by Theorem 4.4)
⇐⇒ π- suppG (Γp k) ∩ π- suppG (M ) ≠ ∅   (by Theorem 3.4(i))
⇐⇒ p ∈ π- suppG (M ).
This gives the equality involving supports.
Here is a first consequence of this result; we are unable to verify it directly,
except for closed points in the π-support and π-cosupport.
Corollary 5.2. For any kG-module M the maximal elements, with respect to inclusion, in π- cosuppG (M ) and π- suppG (M ) coincide.
Proof. Given Theorem 5.1, this follows from [11, Theorem 4.13].
We continue with two useful formulas for computing cohomological supports and
cosupports; they are known from previous work [6, 11] and are now accessible from
the perspective of π-points.
Corollary 5.3. For all kG-modules M and N there are equalities
(i) suppG (M ⊗k N ) = suppG (M ) ∩ suppG (N ).
(ii) cosuppG (Homk (M, N )) = suppG (M ) ∩ cosuppG (N ).
Proof. This follows from Theorems 3.4 and 5.1.
We wrap up this section with a couple of examples. The first one shows that the
π-support of a module M may be properly contained in that of Endk (M ).
Example 5.4. Let V be the Klein four group Z/2 × Z/2 and k a field of characteristic two. Thus, Proj H ∗ (V, k) = P1k , and a realisation of points of P1k as π-points
of kV was given in Example 2.6. Let M be the infinite dimensional kV -module
described in Example 3.7. As noted there, the π-support of M consists of a single
point, namely, the closed point [0, 1]. We claim
π- suppV (Endk (M )) = {[0, 1]} ∪ {generic point of P1k } .
Indeed, since the π-cosupport of M contains [0, 1], by Example 3.7, it follows from
Theorem 3.4 that the π-cosupport of Endk (M ) is exactly {[0, 1]}. Corollary 5.2
then implies that [0, 1] is the only closed point in the π-support of Endk (M ). It
remains to verify that the latter contains also the generic point.
Let K be the field of rational functions in a variable s. The π-point defined by
K[t]/(tp ) → K[x, y]/(x2 , y 2 ) with t 7→ x + sy corresponds to the generic point of
P1k ; see Example 2.6. The desired result follows once we verify that the element
1 ⊗ idM of Endk (M )K is in the kernel of x + sy but not in its image. It is in
the kernel because idM is a kV -module homomorphism. Suppose there exists an
f in Endk (M )K with (x + sy)f = 1 ⊗ idM . Then for each n ≥ 0, the identity
((x + sy)f )(un ) = un yields
f (vn ) + sf (vn−1 ) = un + (x + sy)f (un ) .
Noting that v−1 = 0, by convention, it follows that
f (vn ) ≡ un + sun−1 + · · · + sn u0
modulo the submodule K(v0 , v1 , · · · ) of MK . This cannot be, as f is in Endk (M )K .
In Example 3.7 it is proved that, for M as above, π- suppV (M ) ≠ π- cosuppV (M ).
The example below gives a conceptual explanation of this phenomenon, since M is of
the form T (Ip ) for p = [0, 1] in P1k .
Example 5.5. Given p ∈ Proj H ∗ (G, k), there is a kG-module T (Ip ) which is
defined in terms of the following natural isomorphism
HomH ∗ (G,k) (Ĥ ∗ (G, −), Ip ) ≅ HomkG (−, T (Ip )) ,
where Ip denotes the injective envelope of H ∗ (G, k)/p, Ĥ ∗ (G, −) is Tate cohomology,
and HomkG (−, −) is the set of homomorphisms in StMod kG; see [13, §3].
The cohomological support and cosupport of this module have been computed in
[10, Lemma 11.10] and [11, Proposition 5.4], respectively. Combining this with
Theorem 5.1 gives
π- suppG (T (Ip )) = {p} and π- cosuppG (T (Ip )) = {q ∈ Proj H ∗ (G, k) | q ⊆ p} .
6. Stratification
The results of this section concern the triangulated category structure of the
stable module category, StMod(kG). Recall that a full subcategory C of StMod(kG)
is localising if it is a triangulated subcategory and is closed under arbitrary direct
sums. In a different vein, C is tensor ideal if for all C in C and arbitrary M , the
kG-module C ⊗k M is in C.
Following [10, §3], we say that the stable module category StMod(kG) is stratified
by H ∗ (G, k) if for each homogeneous prime ideal p of H ∗ (G, k) that is different from
the maximal ideal of positive degree elements the localising subcategory
{M ∈ StMod(kG) | suppG (M ) ⊆ {p}}
admits no proper non-zero tensor ideal localising subcategory.
We are now in the position to give a simplified proof of [10, Theorem 10.3]. We
refer the reader to [10, Introduction] for a version of this result dealing entirely with
the (abelian) category of kG-modules.
Theorem 6.1. Let k be a field and G a finite group. Then the stable module
category StMod(kG) is stratified as a tensor triangulated category by the natural
action of the cohomology ring H ∗ (G, k). Therefore the assignment
(6.1)   C 7−→ ⋃M ∈C suppG (M )
induces a one to one correspondence between the tensor ideal localising subcategories
of StMod(kG) and the subsets of Proj H ∗ (G, k).
Proof. It suffices to show that HomkG (M ⊗k −, N ) ≠ 0 whenever M, N are kG-modules with suppG (M ) = {p} = suppG (N ); see [10, Lemma 3.9]. By adjointness,
this is equivalent to Homk (M, N ) being non-projective. Thus the assertion follows
from Corollary 5.3, once we observe that suppG (N ) = {p} implies p ∈ cosuppG (N ).
But this is again a consequence of Corollary 5.3, since Endk (N ) is non-projective
by Lemma 3.2.
The second part of the assertion is a formal consequence of the first; see [10, Theorem 3.8]. The inverse map sends a subset V of Proj H ∗ (G, k) to the subcategory
consisting of all kG-modules M such that suppG (M ) ⊆ V.
The next results concern stmod(kG), the full subcategory of StMod G consisting
of finite dimensional modules. A tensor ideal thick subcategory C of stmod(kG) is a
triangulated subcategory that is closed under direct summands and has the property
that for any C in C and finite dimensional kG-module M , the kG-module C ⊗k M
is in C. The classification of the tensor ideal thick subcategories of stmod(kG) is
the main result of [7] and can be deduced from the classification of the tensor ideal
localising subcategories of StMod(kG). This is based on the following lemma.
Lemma 6.2. Let M be a finite dimensional kG-module. Then suppG (M ) is a
Zariski-closed subset of Proj H ∗ (G, k). Conversely, each Zariski-closed subset of
Proj H ∗ (G, k) is of this form.
Proof. The first statement follows from [8, Theorem 5.5] and the second from [9,
Lemma 2.6].
Theorem 6.3. Let G be a finite group and k a field. Then the assignment (6.1)
induces a one to one correspondence between the tensor ideal thick subcategories of
stmod(kG) and the specialisation closed subsets of Proj H ∗ (G, k).
Proof. Let σ be the assignment (6.1). By Lemma 6.2, when C is a tensor ideal thick
subcategory of stmod(kG), the subset σ(C) of Proj H ∗ (G, k) is a union of Zariski-closed subsets, and hence specialisation closed. Thus σ restricted to stmod G has the
desired image. Let τ be the map from specialisation closed subsets of Proj H ∗ (G, k)
to subcategories of stmod G that assigns to V the subcategory with objects
{M ∈ stmod G | suppG M ⊆ V} .
This is readily seen to be a tensor ideal thick subcategory. The claim is that σ and
τ are inverses of each other.
Indeed for any V ⊆ Proj H ∗ (G, k) there is an inclusion στ (V) ⊆ V; equality holds
if V is closed, by Lemma 6.2, and hence also if V is specialisation closed.
Fix a tensor ideal thick subcategory C of stmod G. Evidently, there is an inclusion
C ⊆ τ σ(C). To prove that equality holds, it suffices to prove that if M is a finite
dimensional kG-module with suppG (M ) ⊆ σ(C), then M is in C. Let C′ be the tensor
ideal localising subcategory of StMod(kG) generated by C. From the properties of
support, it is easy to verify that σ(C′ ) = σ(C), and then Theorem 6.1 implies M is
in C′ . Since M is compact when viewed as an object in StMod G, it follows by an
argument analogous to the proof of [21, Lemma 2.2] that M is in C, as desired.
Acknowledgements. Part of this article is based on work supported by the National Science Foundation under Grant No. 0932078000, while DB, SBI, and HK
were in residence at the Mathematical Sciences Research Institute in Berkeley, California, during the 2012–2013 Special Year in Commutative Algebra. The authors
thank the Centre de Recerca Matemàtica, Barcelona, for hospitality during a visit in
April 2015 that turned out to be productive and pleasant. SBI and JP were partly
supported by NSF grants DMS-1503044 and DMS-0953011, respectively. We are
grateful to Eric Friedlander for comments on an earlier version of this paper.
References
1. J. L. Alperin and L. Evens, Varieties and elementary abelian groups, J. Pure Appl. Algebra
26 (1982), 221–227.
2. G. S. Avrunin and L. L. Scott, Quillen stratification for modules, Invent. Math. 66 (1982),
277–286.
3. C. Bendel, Cohomology and projectivity of modules for finite group schemes, Math. Proc.
Camb. Phil. Soc. 131 (2001), 405–425.
4. D. J. Benson, Representations and Cohomology I: Basic representation theory of finite groups
and associative algebras, Cambridge Studies in Advanced Mathematics, vol. 30, Second Edition, Cambridge University Press, 1998.
5. D. J. Benson, Representations and Cohomology II: Cohomology of groups and modules, Cambridge Studies in Advanced Mathematics, vol. 31, Second Edition, Cambridge University
Press, 1998.
6. D. J. Benson, J. F. Carlson, and J. Rickard, Complexity and varieties for infinitely generated
modules, II, Math. Proc. Camb. Phil. Soc. 120 (1996), 597–615.
7. D. J. Benson, J. F. Carlson, and J. Rickard, Thick subcategories of the stable module category,
Fundamenta Mathematicae 153 (1997), 59–80.
8. D. J. Benson, S. B. Iyengar, and H. Krause, Local cohomology and support for triangulated
categories, Ann. Scient. Éc. Norm. Sup. (4) 41 (2008), 575–621.
9. D. J. Benson, S. B. Iyengar, and H. Krause, Stratifying triangulated categories, J. Topology
4 (2011), 641–666.
10. D. J. Benson, S. B. Iyengar, and H. Krause, Stratifying modular representations of finite
groups, Ann. of Math. 174 (2011), 1643–1684.
11. D. J. Benson, S. B. Iyengar, and H. Krause, Colocalising subcategories and cosupport, J. Reine
& Angew. Math. 673 (2012), 161–207.
12. D. J. Benson, S. B. Iyengar, H. Krause, and J. Pevtsova, Stratification for module categories
of finite group schemes, preprint 2015. arXiv:1510.06773
13. D. J. Benson and H. Krause, Pure injectives and the spectrum of the cohomology ring of a
finite group, J. Reine & Angew. Math. 542 (2002), 23–51.
14. J. F. Carlson, The complexity and varieties of modules, Integral representations and their
applications, Oberwolfach, 1980, Lecture Notes in Mathematics, vol. 882, Springer-Verlag,
Berlin/New York, 1981, pp. 415–422.
15. J. F. Carlson, The varieties and cohomology ring of a module, J. Algebra 85 (1983), 104–143.
16. L. Chouinard, Projectivity and relative projectivity over group rings, J. Pure & Applied Algebra 7 (1976), 278–302.
17. E. M. Friedlander and J. Pevtsova, Representation theoretic support spaces for finite group
schemes, Amer. J. Math. 127 (2005), 379–420, correction: AJM 128 (2006), 1067–1068.
18. E. M. Friedlander and J. Pevtsova, Π-supports for modules for finite group schemes, Duke
Math. J. 139 (2007), 317–368.
19. E. M. Friedlander and A. Suslin, Cohomology of finite group schemes over a field, Invent.
Math. 127 (1997), 209–270.
20. J. C. Jantzen, Representations of algebraic groups, American Math. Society, 2003, 2nd ed.
21. A. Neeman, The connection between the K-theory localization theorem of Thomason, Trobaugh
and Yao and the smashing subcategories of Bousfield and Ravenel, Ann. Scient. Éc. Norm.
Sup. (4) 25 (1992), 547–566.
22. D. Quillen, The spectrum of an equivariant cohomology ring: I, II, Ann. of Math. 94 (1971),
549–572, 573–602.
23. A. Suslin, E. Friedlander, and C. Bendel, Support varieties for infinitesimal group schemes,
J. Amer. Math. Soc. 10 (1997), 729–759.
24. W. C. Waterhouse, Introduction to affine group schemes, Graduate Texts in Mathematics,
vol. 66, Springer-Verlag, Berlin/New York, 1979.
Dave Benson, Institute of Mathematics, University of Aberdeen, King’s College,
Aberdeen AB24 3UE, Scotland U.K.
Srikanth B. Iyengar, Department of Mathematics, University of Utah, Salt Lake
City, UT 84112, U.S.A.
Henning Krause, Fakultät für Mathematik, Universität Bielefeld, 33501 Bielefeld,
Germany.
Julia Pevtsova, Department of Mathematics, University of Washington, Seattle, WA
98195, U.S.A.
arXiv:1505.06607v2 [] 26 May 2015
Stochastic Block Coordinate Frank-Wolfe Algorithm for
Large-Scale Biological Network Alignment
Yijie Wang1 and Xiaoning Qian1
1 Department of Electrical & Computer Engineering, Texas A&M University,
College Station, Texas, USA, 77843
(Dated: May 23, 2015)
Abstract
With increasingly “big” data available in biomedical research, deriving accurate and reproducible
biology knowledge from such big data imposes enormous computational challenges. In this paper, motivated by recently developed stochastic block coordinate algorithms, we propose a highly scalable randomized block coordinate Frank-Wolfe algorithm for convex optimization with general compact convex
constraints, which has diverse applications in analyzing biomedical data for better understanding cellular
and disease mechanisms. We focus on implementing the derived stochastic block coordinate algorithm
to align protein-protein interaction networks for identifying conserved functional pathways based on the
IsoRank framework. Our derived stochastic block coordinate Frank-Wolfe (SBCFW) algorithm has a
convergence guarantee and naturally leads to reduced computational cost (in both time and space)
per iteration. Our experiments for querying conserved functional protein complexes in yeast networks
confirm the effectiveness of this technique for analyzing large-scale biological networks.
1  Introduction
First-order methods in convex optimization have attracted significant attention in statistical learning in recent years. They are appealing for many learning problems, such as LASSO regression and matrix completion,
which have diverse applications in analyzing large-scale biological systems and high-dimensional biomedical
measurement profiles [13, 7]. These first-order optimization methods scale well with the current “big” data in
many biomedical applications because they have a low computational burden per iteration
and are easy to implement on parallel computational resources.
In this paper, we focus on the Frank-Wolfe algorithm, which is also known as the conditional gradient
method. One of its advantages is that at each iteration it decomposes a complex constrained optimization problem into sub-problems that are easier to solve. Additionally, it is a projection-free algorithm,
which avoids solving the projection problem for constrained optimization as done in many other algorithms.
The original Frank-Wolfe algorithm, developed for smooth convex optimization on a polytope, dates back
to Frank and Wolfe [4]. Dunn and Harshbarger [2, 3] have generalized the algorithm to solve the optimization for more general smooth convex objective functions over bounded convex feasible regions. Recently,
researchers [9] have proposed stochastic optimization ideas to scale up the original Frank-Wolfe algorithm.
Based on these previous seminal efforts, our main contribution in this paper is that we generalize the
stochastic block coordinate Frank-Wolfe algorithm proposed in [9], previously with block separable constraints, to solve more general optimization problems with any convex compact constraints, including the
problems with block inseparable constraints. Such a generalized algorithm has a broader range of biomedical
applications, including biological network alignment. We prove the convergence of our generalized stochastic block coordinate Frank-Wolfe algorithm and evaluate the algorithm performance for querying conserved
functional protein complexes in real-world protein-protein interaction (PPI) networks.
In the following sections, we first describe the model formulation of the optimization problems in which
we are generally interested. Specifically, to address the potential difficulty arising from more general convex compact
constraints, we derive a new stochastic block coordinate Frank-Wolfe algorithm and provide a convergence
proof. Then, we formulate the IsoRank problem for network alignment [11] as a convex programming
problem and develop a SBCFW-IsoRank algorithm based on our new stochastic block coordinate Frank-Wolfe algorithm. Finally, in our experiments, we show the efficiency and effectiveness of our algorithm for
solving the PPI network query problem.
2  Stochastic Block Coordinate Descent Frank-Wolfe Algorithm
Consider the minimization problem:
min f (x)  s.t. x ∈ D ,   (1)
where the objective function f (x) is convex and differentiable on RN , and the domain D is a compact convex
subset of RN . Without loss of generality, we assume that the set of optimal solutions x∗ to the above problem
is non-empty and bounded.
Assume that we can decompose the solution space RN into n equal-size subspaces:
RN = ⊕ni=1 RNi ,  N = Σni=1 Ni ,   (2)
where N1 = · · · = Nn and RNi denotes the ith equal-size subspace along the corresponding coordinates. This decomposition enables scalable stochastic optimization algorithms.
Based on this decomposition, we introduce matrices Ui , which sum up to the identity matrix, IN = Σni=1 Ui ;
here Ui is the N × N diagonal matrix with Ui (t, t) = 1 for the indices t of the ith block and all other entries
equal to zero. In typical stochastic optimization algorithms [12], instead of computing the full gradient ∇f (x)
at each iteration, the partial gradient of f (x)
on a randomly selected subspace RNi is used:
∇i f (x) = Ui ∇f (x) .   (3)
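In coordinates, Ui is just a 0/1 diagonal mask, so the partial gradient keeps the entries of the chosen block and zeroes out the rest. A minimal sketch (our own illustrative names, pure Python):

```python
def partial_gradient(grad, block, block_size):
    """U_i * grad: keep the entries of the chosen block, zero out the rest.
    Blocks are contiguous and of equal size, as in the decomposition (2)."""
    lo, hi = block * block_size, (block + 1) * block_size
    return [g if lo <= j < hi else 0.0 for j, g in enumerate(grad)]
```

Because the masks Ui sum to the identity, summing the partial gradients over all blocks recovers the full gradient ∇f (x).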
Now we generalize the stochastic block coordinate Frank-Wolfe algorithm derived in [9] to solve more general optimization problems with an arbitrary compact convex constraint set D. The new generalized stochastic block coordinate Frank-Wolfe (SBCFW) algorithm is illustrated in Algorithm 1. In the pseudo code, the operation i = C_k randomly selects one of the n equal-size subspaces, each with the same probability, for the partial gradient update at each iteration. In addition, U_j × s = U_j × x^k denotes the condition that the elements of the jth block of s equal the elements of the jth block of x^k.
Algorithm 1: Generalized SBCFW Algorithm
1  Let x^0 ∈ D, k = 0.
2  While stopping criteria not satisfied do
3    Randomly divide R^N into n blocks R^N = ⊕_{i=1}^{n} R^{N_i};
4    Choose i = C_k;
5    Find s_i^k such that
6      s_i^k := arg min_{s ∈ D; U_j×s = U_j×x^k, ∀j ≠ i} ∇_i f(x^k)^T (s − x^k);
7    Determine the step size γ:
8      γ := arg min_{γ ∈ [0,1]} f((1 − γ)x^k + γ s_i^k);
9    Update x^{k+1} := (1 − γ)x^k + γ s_i^k;
10   k = k + 1;
11 Endwhile
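The loop above can be sketched concretely. The following is a minimal, hypothetical NumPy implementation for the special case f(x) = (1/2) xᵀMx over the unit simplex, a concrete compact convex D; for the simplex, the per-block linear sub-problem has a simple closed form (all mass of the chosen block moves to its smallest partial-gradient coordinate), and the exact line search for a quadratic replaces line 8:

```python
import numpy as np

def sbcfw_simplex(M, n_blocks, iters=500, seed=0):
    """Sketch of Algorithm 1 for f(x) = 0.5 x^T M x on the unit simplex
    (a concrete compact convex D). M is assumed symmetric PSD.

    Per-block linear sub-problem on the simplex: move the whole mass of
    block i onto its smallest partial-gradient entry, which leaves every
    other block, and the total sum, unchanged.
    """
    rng = np.random.default_rng(seed)
    N = M.shape[0]
    size = N // n_blocks                  # equal-size blocks assumed
    x = np.full(N, 1.0 / N)               # feasible start
    for _ in range(iters):
        i = rng.integers(n_blocks)        # i = C_k: uniform block choice
        blk = slice(i * size, (i + 1) * size)
        g = M @ x                         # gradient of the quadratic
        s = x.copy()
        mass = s[blk].sum()
        s[blk] = 0.0
        s[blk.start + np.argmin(g[blk])] = mass
        d = s - x                         # feasible direction, zero off block i
        dMd = d @ M @ d
        if dMd > 0:
            # exact line search: minimize f(x + gamma * d) over [0, 1]
            gamma = float(np.clip(-(g @ d) / dMd, 0.0, 1.0))
        else:
            gamma = 1.0 if g @ d < 0 else 0.0
        x = x + gamma * d
    return x

M_demo = np.diag([1.0, 2.0, 3.0, 4.0])    # a toy PSD objective matrix
x_star = sbcfw_simplex(M_demo, 2, iters=200)
```

With the exact line search, the iterates stay feasible and the objective is non-increasing, which is exactly the guarantee Theorem 1 below establishes.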
Note that our generalized SBCFW algorithm is similar to the algorithm in [9], which aims to solve optimization problems with block-separable constraints and has a sub-linear convergence rate. However, our algorithm provides a more general framework, which can handle any convex and compact constraints, whether or not they are block separable. Because the setup of our algorithm is more general, without any specific structure, it is difficult to obtain theoretical convergence rate guarantees. In this paper, we only prove that our SBCFW algorithm converges to the global optimum. The convergence guarantee of the generalized SBCFW algorithm is provided by Theorem 1 below, which is based on Lemma 1.
Lemma 1: At each iteration of the SBCFW algorithm, the following inequality holds:

∇f(x^k)^T (E_i[s_i^k] − x^k) ≤ 0,   (4)

where E_i[s_i^k] is the expectation of s_i^k with respect to the random selection of the ith coordinate block.
Proof. Assume that at the kth iteration we solve the following optimization problem:

min: Z_i^k(s) := ∇_i f(x^k)^T (s − x^k)
s.t. U_j × s = U_j × x^k, ∀j ≠ i,   (5)
     s ∈ D.

The solution to (5) is s_i^k. Since s_i^k achieves the minimum of (5), we have

Z_i^k(s_i^k) ≤ Z_i^k(x^k) = ∇_i f(x^k)^T (x^k − x^k) = 0.   (6)

Therefore,

Z_i^k(s_i^k) = ∇_i f(x^k)^T (s_i^k − x^k) ≤ 0.   (7)
Taking the expectation on both sides of the above inequality with respect to the random block selection, we obtain

E_i[∇_i f(x^k)^T (s_i^k − x^k)] ≤ 0
⇒ (1/n) Σ_i ∇_i f(x^k)^T (s_i^k − x^k) ≤ 0
⇒ (Σ_i ∇_i f(x^k))^T ((1/n) Σ_i (s_i^k − x^k)) ≤ 0
⇒ (Σ_i ∇_i f(x^k))^T ((1/n) Σ_i s_i^k − x^k) ≤ 0
⇒ ∇f(x^k)^T (E_i[s_i^k] − x^k) ≤ 0.   (8)

The inequality in the third line follows from the fact that s_i^k − x^k is a vector whose only non-zero values lie in its ith coordinate block, with all other parts being zero. With that, the summation in the second line can be written as the inner product between the vectors Σ_i ∇_i f(x^k) and (1/n) Σ_i (s_i^k − x^k).
We now analyze the convergence of the new SBCFW algorithm based on Lemma 1, considering two cases. The first case is when

∇f(x^k)^T (E_i[s_i^k] − x^k) = 0.   (9)

This simply means that x^k is a stationary point. Because the original objective function f(x) is convex, we can conclude that x^k is the global minimum. The other case is when

∇f(x^k)^T (E_i[s_i^k] − x^k) < 0,   (10)

indicating that E_i[s_i^k] − x^k is a descent direction by definition [10]. Hence, moving along this direction brings the iterate closer to the global minimum in expectation. Furthermore, we compute the optimal step size at each iteration, so the objective function values are guaranteed to be non-increasing. With that, we present Theorem 1 as follows:
Theorem 1: The sequence f(x^1), f(x^2), . . . , f(x^k), . . . generated by the SBCFW algorithm is non-increasing:

f(x^1) ≥ f(x^2) ≥ · · · ≥ f(x^k) ≥ f(x^{k+1}) ≥ · · ·   (11)
3 Biological Network Alignment

3.1 Optimization Model Formulation
In this section, we re-formulate the optimization problem of the network alignment algorithm IsoRank [11] to address the potential computational challenges of aligning multiple large-scale networks. The new formulation has the same mathematical programming structure as problem (1). Let Ga and Gb be two biological networks to align, with Na and Nb vertices respectively.
We define B ∈ R^{(Na×Nb)×(Na×Nb)} as the Cartesian product network from Ga and Gb: B = Ga ⊗ Gb. Denote the all-one vector 1 ∈ R^{Na×Nb} and

B̄ = B · Diag(B1)^{−1},   (12)

where Diag(B1) can be considered a degree matrix with B1 on its diagonal and all other entries equal to zero. B̄ contains the transition probabilities of the underlying Markov random walk in IsoRank [11]. It is well known that if Ga and Gb are connected networks and neither of them is a bipartite graph, then the corresponding Markov chain represented by B̄ is irreducible and ergodic, and there exists a unique stationary distribution for the underlying state transition probability matrix B̄. The goal of the IsoRank algorithm is to find the maximal right eigenvector of the matrix B̄: B̄x = x with 1^T x = 1, x ≥ 0, which encodes the best correspondence relationships between vertices across the two networks. When the two networks are of reasonable size, spectral methods as well as power methods can be used to solve the IsoRank problem [11]. However, with large-scale networks, the transition probability matrix B̄ can be extremely large (quadratic in Na × Nb) and spectral methods can be computationally prohibitive. In this paper, we re-formulate this problem of searching for the maximal right eigenvector as a constrained optimization problem:
min: f(x) := (1/2) ‖B̄x − x‖²   (13)
s.t. 1^T x = 1, x ≥ 0.   (H)
After expanding the objective function, we obtain f(x) = (1/2) x^T M x, where M = B̄^T B̄ − B̄ − B̄^T + I. Therefore the equivalent optimization problem is

min: f(x) := (1/2) x^T M x   (14)
s.t. 1^T x = 1, x ≥ 0.   (H)
The gradient of f(x) is easily computed as ∇f(x) = Mx. Furthermore, the Hessian matrix of f(x) is M, which is positive semi-definite, as proven by Lemma 2.

Lemma 2: M = B̄^T B̄ − B̄ − B̄^T + I is positive semi-definite.

Proof. M can be written as M = (B̄ − I)^T (B̄ − I), which proves the lemma.
With Lemma 2, it is clear that the objective function f(x) is convex. Also, the constraint set H = {x | x^T 1 = 1, x ≥ 0} is the unit simplex, which is convex and compact. Hence, the IsoRank problem (13) has the same problem structure as (1), and our generalized SBCFW algorithm can be used to solve (14) with much better scalability and efficiency, due to the efficiency of the randomized partial gradient computation at each iteration. As in [11], in addition to network topology, we can incorporate other information in the formulation for more biologically significant alignment results by replacing B̄ with B̂ = αB̄ + (1 − α)S̄1^T, α ∈ [0, 1]. Here S̄ = S/|S| is a normalized similarity vector of size Na × Nb, concatenated from the doubly indexed similarity estimates S([u, v]) based on the sequence or function similarity between vertices u in Ga and v in Gb.
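As a sanity check on this formulation, the following toy sketch (hypothetical variable names, NumPy assumed) builds B = Ga ⊗ Gb for two small undirected graphs, forms B̄ = B · Diag(B1)^{−1}, and verifies the factorization M = (B̄ − I)^T (B̄ − I) from Lemma 2 numerically:

```python
import numpy as np

# Toy adjacency matrices for two small undirected networks (assumed connected).
Ga = np.array([[0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]], dtype=float)   # a triangle
Gb = np.array([[0, 1],
               [1, 0]], dtype=float)      # a single edge

B = np.kron(Ga, Gb)                        # product network B = Ga (x) Gb
deg = B @ np.ones(B.shape[0])              # B1: degree vector of the product graph
B_bar = B @ np.diag(1.0 / deg)             # B_bar = B Diag(B1)^{-1}: columns sum to 1

M = B_bar.T @ B_bar - B_bar - B_bar.T + np.eye(B.shape[0])

# Lemma 2 in factored form: M = (B_bar - I)^T (B_bar - I), hence PSD.
E = B_bar - np.eye(B.shape[0])
```

Because both toy graphs are regular here, every product-graph degree is positive and the normalization is well defined.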
3.2 SBCFW-IsoRank Algorithm

As shown in Section 3.1, f(x) in (13) is convex and the constraint set H in (14) is a convex compact set. Therefore, we can apply the generalized SBCFW algorithm proposed in Section 2 to solve the corresponding optimization problem (14). The detailed algorithm is illustrated in Algorithm 2. Here we want to emphasize
that, in each iteration of our SBCFW-IsoRank algorithm, both the time complexity and the space complexity are O(N²/n), which is achieved by tracking the vectors p_k = Ex_k and q_k = Es_i^k at lines 7 and 10 of each iteration in Algorithm 2, respectively. The stopping criterion is ‖B̄x − x‖ ≤ ξ‖x‖, which can be efficiently estimated by

‖B̄x − x‖² = x^T M x = (Ex)^T (Ex) = p_k^T p_k,   (15)

which is evaluated in line 11 of the SBCFW-IsoRank algorithm.
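The identity (15) can be checked numerically. The sketch below assumes E = B̄ − I, consistent with the factorization M = (B̄ − I)^T (B̄ − I) in Lemma 2 (the definition of E is not restated in the text, so this is an inference):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6
B_bar = rng.random((N, N))
B_bar /= B_bar.sum(axis=0, keepdims=True)   # columns sum to 1, like a transition matrix
E = B_bar - np.eye(N)                        # assumed definition of E (matches M = E^T E)

x = rng.random(N)
x /= x.sum()                                 # a point on the simplex H

p = E @ x                                    # p_k = E x_k, tracked across iterations
residual_sq = p @ p                          # equals ||B_bar x - x||^2, so the stopping
                                             # rule needs no extra matrix-vector product
```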
Algorithm 2: SBCFW-IsoRank Algorithm
Input: ξ, n and E
1  for k = 0, . . . , ∞ do
2    randomly divide R^N into n equal-size parts
3    choose i = C_k
4    if (k == 0)
5      initialize the ith block of x^0 with n/N
6    endif
7    compute p_k = Ex_k and ∇_i f(x^k) = [E^T]_i p_k
8    solve the sub-problem:
9      s_i^k := arg min_{s ∈ H; U_j×s = U_j×x^k, ∀j ≠ i} ∇_i f(x^k)^T (s − x^k)
10   compute q_k = Es_i^k
11   if p_k^T p_k < ξ‖x_k‖
12     break;
13   endif
14   compute the optimal step size γ_k*:
15     γ_k* = min{γ̂, 1} if γ̂ = (p_k^T p_k − p_k^T q_k) / (p_k^T p_k − 2 p_k^T q_k + q_k^T q_k) > 0, and γ_k* = 0 otherwise
16   x^{k+1} = x^k + γ_k* (s_i^k − x^k)
17 endfor
Output: x^k
3.3 Initialization
In order to guarantee both the time and space complexity to be O(N²/n) at each iteration, we cannot initialize the algorithm with a randomly generated x^0, which would require multiplying an N × N matrix with a vector of size N, with O(N²) time and space complexity. We propose to initialize x^0 in the following way: first, randomly divide R^N into n parts of equal size and randomly pick the ith part; then, initialize every element of the ith part with n/N, which places x^0 in the feasible space defined by the constraint set H. Using the above initialization strategy, the time and space complexity of computing ∇_i f(x^0), p_0 = Ex^0 and q_0 = Es^0 are all within O(N²/n), which is easy to verify.
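This initialization can be sketched in a few lines (hypothetical helper name, NumPy assumed):

```python
import numpy as np

def init_x0(N, n, rng):
    """Block-sparse feasible start: one random block gets all the mass.

    Each of the N/n entries in the chosen block is set to n/N, so the
    entries sum to 1 and x0 lies in the simplex H.
    """
    x0 = np.zeros(N)
    size = N // n
    i = rng.integers(n)                      # randomly pick the ith part
    x0[i * size:(i + 1) * size] = n / N
    return x0

x0 = init_x0(12, 3, np.random.default_rng(0))
```

Because x0 has only N/n non-zero entries, the products Ex^0 and [E^T]_i p_0 touch only O(N²/n) matrix entries.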
3.4 Algorithm to Solve the Sub-problem
As shown in the SBCFW-IsoRank algorithm, at each iteration we need to solve a sub-problem. Fortunately, the sub-problem can be solved in a straightforward manner for the optimization problem (14). For the following sub-problem at iteration k:

min: ∇_i f(x^k)^T (s − x^k)
s.t. s ∈ H,   (16)
     U_j × s = U_j × x^k, ∀j ≠ i,
the optimal solution is s* = x^k − U_i x^k + L e_j, where e_j is an all-zero vector except that the jth element is 1, and L = Σ_{l ∈ R^{N_i}} x^k(l). Here, j is the index of the coordinate with the smallest value in the ith block of ∇_i f(x^k):

j = arg min_{l ∈ R^{N_i}} [∇_i f(x^k)](l).   (17)
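The closed form above can be sketched as follows (hypothetical helper name, NumPy assumed); the example places block i at indices 2 and 3 of a 4-dimensional point on the simplex:

```python
import numpy as np

def solve_subproblem(x, grad_blk, blk_start, blk_size):
    """Closed-form minimizer of (16): keep every block of x except block i,
    and move that block's mass L onto the coordinate with the smallest
    partial-gradient value, i.e. s* = x - U_i x + L e_j."""
    s = x.copy()
    L = s[blk_start:blk_start + blk_size].sum()
    s[blk_start:blk_start + blk_size] = 0.0
    j = blk_start + np.argmin(grad_blk)        # index j from (17)
    s[j] = L
    return s

x = np.array([0.1, 0.2, 0.3, 0.4])
s = solve_subproblem(x, np.array([5.0, -1.0]), 2, 2)   # block i covers indices 2, 3
```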
3.5 Optimal Step Size
To obtain the optimal step size at each iteration, we need to solve the following optimization problem:

min: (x^k + γ(s^k − x^k))^T M (x^k + γ(s^k − x^k))
s.t. 0 ≤ γ ≤ 1,   (18)

which is the classic quadratic form with respect to γ. If γ̂ = (p_k^T p_k − p_k^T q_k) / (p_k^T p_k − 2 p_k^T q_k + q_k^T q_k) > 0, which is the solution to (18) without any constraints, the optimal solution γ* is the minimum of 1 and γ̂; otherwise γ* = 0. The definitions of p_k and q_k are given in lines 7 and 10 of Algorithm 2.
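A minimal sketch of this step-size rule, with hypothetical names and NumPy assumed:

```python
import numpy as np

def optimal_step(p, q):
    """Closed-form minimizer of (18) on [0, 1].

    With p = E x_k and q = E s_k, the image of the iterate under E moves
    along p + gamma * (q - p), so the unconstrained minimizer is
    gamma_hat = (p.p - p.q) / (p.p - 2 p.q + q.q); clip it to [0, 1].
    """
    denom = p @ p - 2 * (p @ q) + q @ q        # ||q - p||^2 >= 0
    if denom <= 0:                             # s_k == x_k: no move possible
        return 0.0
    gamma_hat = (p @ p - p @ q) / denom
    return float(np.clip(gamma_hat, 0.0, 1.0)) if gamma_hat > 0 else 0.0

g = optimal_step(np.array([1.0, 0.0]), np.array([0.0, 0.0]))
```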
3.6 Time and Space Complexity
At each iteration, the most computationally expensive operations are the updates of p_k and q_k (lines 7 and 10 of SBCFW-IsoRank) and the calculation of the partial gradient ∇_i f(x^k) (line 7 of SBCFW-IsoRank). The calculations of p_k and q_k are similar. From line 10 of Algorithm 2, we know

p_k = Ex_k = E(x_{k−1} + γ_{k−1}(s_{k−1} − x_{k−1})) = p_{k−1} + γ_{k−1} E(s_{k−1} − x_{k−1}).   (19)
The second equality is derived by replacing x_k with the equation in line 16 of our SBCFW-IsoRank algorithm. Because we keep tracking p_k at each iteration, we do not need to recompute p_{k−1}. Therefore, we only need to compute E(s_{k−1} − x_{k−1}), which takes O(N²/n) operations because (s_{k−1} − x_{k−1}) is a vector with only its ith block being non-zero and all the other parts zero. Additionally, the memory consumption is also O(N²/n) by a similar argument. Similarly, we can compute q_k:
q_k = Es_k = E(x_k + (L e_j − U_i x_k)) = p_k + E(L e_j − U_i x_k),   (20)
where (L e_j − U_i x_k) is also a vector with only the ith block having non-zero values. Therefore, the computation of q_k also takes O(N²/n) operations and consumes O(N²/n) memory.
The equation for calculating ∇_i f(x^k) is as follows:

∇_i f(x^k) = [E^T]_i p_k,   (21)

where the operator [·]_i extracts the rows of the matrix corresponding to the ith coordinate block. Hence, it is easy to verify that the time complexity and space complexity of computing ∇_i f(x^k) are O(N²/n).
In summary, based on the above analyses, both the time complexity and the space complexity of our SBCFW-IsoRank at each iteration are O(N²/n).
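The recursions (19) and (20) can be illustrated with a toy stand-in for E (hypothetical names, NumPy assumed); the point of the sketch is that only the N × (N/n) column block E[:, blk] is touched per iteration, so the invariant p_k = Ex_k is maintained at O(N²/n) cost:

```python
import numpy as np

rng = np.random.default_rng(2)
N, n = 8, 4
E = rng.random((N, N)) - np.eye(N)        # toy stand-in for E = B_bar - I
x = np.full(N, 1.0 / N)
p = E @ x                                 # p_0 computed once, then maintained

size = N // n
for k in range(5):
    i = rng.integers(n)
    blk = slice(i * size, (i + 1) * size)
    d = np.zeros(N)                       # s_k - x_k: nonzero only on block i
    d[blk] = rng.random(size) - x[blk]
    gamma = 0.5                           # any step size; the update rule is the point
    # recursion (19): only N/n columns of E are needed, not the whole matrix
    p = p + gamma * (E[:, blk] @ d[blk])
    x = x + gamma * d
```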
Figure 1: Query subnetwork and its aligned result in the target network: (a) the query subnetwork in the Krogan yeast PPI network; (b) the aligned result in the Collins yeast PPI network.
4 Experiments
In this section, we apply our SBCFW-IsoRank algorithm to two network query problems. For the first set of experiments, we take a known protein complex in an archived yeast protein-protein interaction (PPI) network in one database [8] as the query to search for the corresponding subnetwork in another yeast PPI network [5] with different archived interactions. We call this the yeast-yeast network query problem. The goal of this set of experiments is to check the correctness of our algorithm, as we have the ground truth for the target subnetwork. With that, we aim to test the convergence property of our algorithm under different partitions and the relationship between the number of iterations and the number of partitions. The second experiment queries a large-scale yeast PPI network in IntAct [6] to find similar subnetworks of proteins with similar cellular functionalities for a known protein complex in a human PPI network. The aim of this experiment is to show that our new algorithm can help transfer biological knowledge from model organisms to study potential functionalities of molecules in different organisms.
4.1 Yeast-Yeast Network Query Problem
We test our SBCFW-IsoRank algorithm on the yeast-yeast PPI network query problem by solving the optimization problem introduced in the previous section. We take a subnetwork with 6 proteins (Fig. 1(a)) from the Krogan yeast PPI network [8] as the query example to search for the conserved functional complex in a target network, the Collins network [5], with 1,622 proteins and 9,074 interactions. The query subnetwork is the transcription factor TFIIIC complex in the Krogan network, and we are interested in testing whether we can find the same subnetwork in the Collins network. The dimension of our optimization problem is 6 × 1,622 = 9,732. We run this preliminary example so that we can compare our stochastic optimization results with the results from the power method, which is typically used in the original IsoRank algorithm [11]. Theoretically, the time and space complexity of our SBCFW-IsoRank at each iteration are both O(N²/n), based on the analysis in Section 3.6. Compared to the O(N²) time and space complexity of the power method in IsoRank [11], our SBCFW-IsoRank algorithm can scale better with properly selected n. As both the query example and the target network contain interactions among proteins from the same organism (yeast), we can easily check the correctness of the query result. We define the accuracy as the number of correctly aligned proteins divided by the total number of proteins in the query subnetwork. We run the SBCFW-IsoRank algorithm for different numbers of partitions n but use the same stopping criterion ‖B̂x − x‖ ≤ ξ‖x‖, ξ = 0.1. In Table 1, we find that our stochastic optimization algorithm obtains
the same biologically meaningful results as the power method.

Figure 2: The change of the objective function values with the increasing number of iterations for different numbers of partitions.
Fig. 2 shows the changes of the objective function values with respect to the increasing number of iterations. As illustrated in Fig. 2, our algorithm converges for all different n. Additionally, we find that the larger the number of partitions n, the more iterations the algorithm needs to converge to the global optimum under the same stopping criterion. This clearly demonstrates the tradeoff between the efficiency and scalability of stochastic optimization algorithms. Interestingly, we notice that for n = 10, 30, and 50, the number of iterations does not increase much, which indicates that we may achieve fast computation with reasonably large n, because our algorithm is more efficient for larger n at each iteration.
Table 1: Comparison on different decompositions with ξ = 0.1.

#. of partitions (n)   Computational time (s)   #. Iterations   Accuracy
2                      11.60                    648             100%
5                      8.53                     1,045           100%
10                     7.44                     1,742           100%
30                     5.05                     1,880           100%
50                     4.93                     2,070           100%
100                    7.06                     2,942           100%
200                    13.05                    4,478           100%
To further investigate the performance with different n, we run our algorithm 10 times for each n and show the average computational time, the average number of iterations, and the average accuracy score in Table 1. From Table 1, we observe that for all different n, our algorithm obtains 100% accuracy, which again demonstrates the effectiveness and convergence of our generalized SBCFW algorithm. Also, we notice that with increasing n, the number of iterations increases; however, the computational time first decreases and then increases. For example, when n = 2, our algorithm converges with the smallest number of iterations, but the computational time is not the best because at each iteration the algorithm takes O(N²/2) operations.
In contrast, when n = 50, though the number of iterations is larger, the algorithm reaches the global optimum with the least computation time, which is indeed twice as fast as n = 2.

Figure 3: Querying a human protein complex in a yeast PPI network. The proteins are annotated by their gene names. The solid lines are protein interactions and the dashed lines denote orthologous relationships based on protein sequence similarity by BLAST between the proteins in different organisms. (a) Human proteasome core complex; (b) the aligned proteasome core complex in yeast found by SBCFW-IsoRank.

The trend of the computational time
implies that there may exist a best number of partitions n*. Empirically, the computational time decreases when n < n* and can increase when n > n*. However, it is difficult to provide a theoretical proof for this observed phenomenon. Finally, for the scalability of the algorithm, we always prefer larger n to make the memory requirement as low as possible.
4.2 Human-Yeast Network Query Problem
We further study the biological significance of network query results obtained by our SBCFW-IsoRank algorithm. We extract a subnetwork as a query example from a human PPI network archived in IntAct [6]. The query subnetwork is the proteasome core complex, with induced interactions among the corresponding proteins from IntAct [6]. The proteasome core complex in human consists of 14 proteins in total, as shown in Fig. 3(a). The target network is the yeast PPI network, also obtained from IntAct [6], which has 6,392 proteins and 77,065 interactions. Our goal is to find the subnetwork most similar to the human proteasome core complex in the target yeast PPI network, based on both the interaction topology and protein sequence similarity, which is computed by BLAST [14]. We first construct the alignment network, which has N = 14 × 6,392 = 89,488 vertices. With our SBCFW-IsoRank algorithm and n = 300, instead of operating on a matrix of size 89,488 × 89,488 as the power method would, we only need to handle a matrix of size 298 × 89,488. At each iteration, the computational time as well as the memory requirement are reduced 300-fold. Our Matlab implementation of SBCFW-IsoRank on a MacPro notebook with 8GB RAM takes only around 750 seconds to converge, reaching the stopping criterion (ξ = 0.1).
The subnetwork identified in the target yeast PPI network by our algorithm is illustrated in Fig. 3(b). To evaluate the biological significance of the obtained subnetwork, we check the p-value based on GO (Gene Ontology) enrichment analysis using GOTermFinder [1]. The identified subnetwork is significantly enriched in the GO term GO:0005839, which is in fact the same proteasome core complex, with p-value 9.552e−36. This experiment demonstrates that our algorithm can find biologically consistent groups of proteins with the same cellular functionalities as the proteins in the query subnetwork, and hence has the capability of transferring existing biological knowledge in model organisms (yeast, for example) to less-studied organisms when the proteins in the query subnetwork require better understanding of their cellular functionalities.
5 Conclusions
In this paper, we generalize the block coordinate Frank-Wolfe algorithm to solve general convex optimization problems with any convex and compact constraint set. Our generalized SBCFW algorithm has a convergence guarantee. We re-formulate the IsoRank problem as such a convex programming problem and solve the biological network alignment problem with our SBCFW-IsoRank algorithm, which scales better with the size of the networks under study. The scalability, efficiency, and effectiveness of our algorithm in solving IsoRank are demonstrated on real-world PPI network query problems. In future work, we will consider the derivation of the optimal partition number for a better tradeoff between computational efficiency and scalability.
6 Acknowledgements
The authors would like to thank Simon Lacoste-Julien for pointing out the error in the original conference
paper. This work was partially supported by Awards #1447235 and #1244068 from the National Science
Foundation; as well as Award R21DK092845 from the National Institute Of Diabetes And Digestive And
Kidney Diseases, National Institutes of Health.
References
[1] Boyle, E., Elizabeth, I., Weng, S., Gollub, J., Jin, H., Botstein, D., Cherry, J., and
Sherlock, G. GO::TermFinder—open source software for accessing Gene Ontology information and
finding significantly enriched Gene Ontology terms associated with a list of genes. Bioinformatics 20
(2004), 3710–3715.
[2] Dunn, J. Convergence rates for conditional gradient sequences generated by implicit step length rules.
SIAM Journal on Control and Optimization 5, 473-487 (1980).
[3] Dunn, J., and Harshbarger, S. Conditional gradient algorithms with open loop step size rules.
Journal of Mathematical Analysis and Applications 62, 432-444 (1978).
[4] Frank, M., and Wolfe, P. An algorithm for quadratic programming. Naval Research Logistics
Quarterly 3, 95-110 (1956).
[5] Hasty, J., McMillen, D., Issacs, F., and Collins, J. Computational studies of gene regulatory
networks: in numero molecular biology. Nat Rev Genet 2 (2001), 268–279.
[6] Kerrien, S., Aranda, B., Breuza, L., and et al. The intact molecular interaction database in
2012. Nucleic Acids Research 40, D1 (2012), D841–D846.
[7] Klau, G. A new graph-based method for pairwise global network alignment. BMC Bioinformatics 10,
Suppl 1 (2009), S59.
[8] Krogan, N., et al. Global landscape of protein complexes in the yeast Saccharomyces cerevisiae.
Nature 440 (2006), 4412–4415.
[9] Lacoste-Julien, S., Jaggi, M., Schmidt, M., and Pletscher, P. Block-coordinate Frank-Wolfe optimization for structural SVMs. In International Conference on Machine Learning (2013).
[10] Ortega, J., and Rheinboldt, W. C. Iterative Solution of Nonlinear Equations in Several Variables. Society for Industrial and Applied Mathematics, 1970.
[11] Singh, R., Xu, J., and Berger, B. Global alignment of multiple protein interaction networks with
application to functional orthology detection. Proc. Natl Acad. Sci. 105 (2008), 12763–12768.
[12] Uryasev, S., and Pardalos, P. M., Eds. Stochastic Optimization: Algorithm and Application.
Springer, 2001.
[13] Zaslavskiy, M., Bach, F., and Vert, J. Global alignment of protein-protein interaction networks
by graph matching methods. Bioinformatics 25 (2009), 259–267.
[14] Zhang, Z., Schwartz, S., Wagner, L., and Miller, W. A greedy algorithm for aligning dna
sequences. J Comput Biol 7(1-2) (2000), 203–214.
arXiv:1105.4204v3 [] 27 Jul 2011
Fast O(1) bilateral filtering using trigonometric range kernels
Kunal Narayan Chaudhury, Daniel Sage, and Michael Unser ∗
July 28, 2011
Abstract
It is well-known that spatial averaging can be realized (in space or
frequency domain) using algorithms whose complexity does not scale
with the size or shape of the filter. These fast algorithms are generally
referred to as constant-time or O(1) algorithms in the image processing
literature. Along with the spatial filter, the edge-preserving bilateral
filter [12] involves an additional range kernel. This is used to restrict
the averaging to those neighborhood pixels whose intensities are similar
or close to that of the pixel of interest. The range kernel operates by
acting on the pixel intensities. This makes the averaging process nonlinear and computationally intensive, especially when the spatial filter
is large. In this paper, we show how the O(1) averaging algorithms
can be leveraged for realizing the bilateral filter in constant-time, by
using trigonometric range kernels. This is done by generalizing the
idea in [10] of using polynomial kernels. The class of trigonometric
kernels turns out to be sufficiently rich, allowing for the approximation
of the standard Gaussian bilateral filter. The attractive feature of our
approach is that, for a fixed number of terms, the quality of approximation achieved using trigonometric kernels is much superior to that
obtained in [10] using polynomials.
∗
Correspondence: Kunal N. Chaudhury ([email protected]). Kunal N. Chaudhury is currently part of the Program in Applied and Computational Mathematics (PACM),
Princeton University, Princeton, NJ 08544-1000, USA. Michael Unser and Daniel Sage are
with the Biomedical Imaging Group, École Polytechnique Fédérale de Lausanne, Station-17,
CH-1015 Lausanne, Switzerland. This work was supported by the Swiss National Science
Foundation under grant 200020-109415.
1 Introduction
The bilateral filtering of an image f(x) in the general setting is given by

f̃(x) = η^{−1} ∫ w(x, y) φ(f(x), f(y)) f(y) dy

where

η = ∫ w(x, y) φ(f(x), f(y)) dy.
In this formula, w(x, y) measures the geometric proximity between the pixel
of interest x and a nearby pixel y. Its role is to localize the averaging to
a neighborhood of x. On the other hand, the function φ(u, v) measures
the similarity between the intensity of the pixel of interest f (x) and its
neighbor f (y). The normalizing factor η is used to preserve constants, and
in particular the local mean.
In this paper, we consider the so-called unbiased form of the bilateral
filter [12], where w(x, y) is translation-invariant, that is, w(x, y) = w(x − y),
and where the range filter is symmetric and depends on the difference of
intensity, φ(f (x), f (y)) = φ(f (x) − f (y)). In this case, the filter is given by
f̃(x) = η^{−1} ∫_Ω w(y) φ(f(x − y) − f(x)) f(x − y) dy   (1)

where

η = ∫_Ω w(y) φ(f(x − y) − f(x)) dy.   (2)
We call w(x) the spatial kernel, and φ(s) the range kernel. The local support Ω
of the spatial kernel specifies the neighborhood over which the averaging
takes place. A popular form of the bilateral filter is one where both w(x)
and φ(s) are Gaussian [12, 10, 2, 16].
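Before the fast algorithms, a brute-force implementation of (1)-(2) with Gaussian w(x) and φ(s) is a useful baseline; its per-pixel cost grows with the window radius, which is exactly what the O(1) methods below avoid. A minimal NumPy sketch (hypothetical function name):

```python
import numpy as np

def bilateral_filter(f, sigma_s, sigma_r, radius):
    """Direct (brute-force) Gaussian bilateral filter of (1)-(2) on a 2D image.

    O(radius^2) work per pixel: the baseline that constant-time methods avoid.
    """
    H, W = f.shape
    out = np.zeros_like(f, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))          # spatial kernel on Omega
    pad = np.pad(f.astype(float), radius, mode='edge')
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2*radius + 1, j:j + 2*radius + 1]
            r = np.exp(-(patch - f[i, j])**2 / (2 * sigma_r**2))  # range kernel phi
            k = w * r
            out[i, j] = (k * patch).sum() / k.sum()               # eta normalization
    return out

# A vertical edge: dark left region, bright right column.
img = np.array([[0., 0., 100.], [0., 0., 100.], [0., 0., 100.]])
res = bilateral_filter(img, sigma_s=1.0, sigma_r=10.0, radius=1)
```

With a small sigma_r, pixels across the 0-to-100 jump receive negligible range weight, so the edge survives the averaging.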
The edge-preserving bilateral filter was originally introduced by Tomasi
et al. in [12] as a simple, non-iterative alternative to anisotropic diffusion
[8]. This was motivated by the observation that while standard spatial
averaging performs well in regions with homogenous intensities, it tends
to perform poorly in the vicinity of sharp transitions, such as edges. For
the bilateral filter in (1), the difference f (x − y) − f (x) is close to zero in
homogenous regions, and hence φ(f (x − y) − f (x)) ≈ 1. In this case, (1)
simply results in the averaging of pixels in the neighborhood of the pixel of
interest. On the other hand, if the pixel of interest x is in the vicinity of an
edge, φ(f (x − y) − f (x)) is large when x − y belongs to the same side of
the edge as x, and is small when x − y is on the other side of the edge. As
a result, the averaging is restricted to neighborhood pixels that are on the
same side of the edge as the pixel of interest. This is the basic idea which
allows one to perform smoothing while preserving edges at the same time.
Since its inception, the bilateral filter has found widespread use in several
image processing, computer graphics, and computer vision applications.
This includes denoising [1], video abstraction [14], demosaicing [11], optical-flow estimation [15], and stereo matching [17], to name a few. More recently, the bilateral filter has been extended by Buades et al. [2] to realize the
popular non-local neighborhood filter, where the similarity between pixels
is measured using patches centered around the pixels.
The direct implementation of (1) turns out to be rather computationally
intensive for real time applications. Several efficient numerical schemes
have been proposed in the past for implementing the filter in real time,
even at video rates [5, 13, 9, 7]. These algorithms (with the exception of
[5]), however, do not scale well with the size of the spatial kernel, and this
limits their usage in high resolution applications. A significant advance
was obtained when Porikli [10] proposed a constant-time implementation
of the bilateral filter (for arbitrary spatial kernels) using polynomial range
kernels. The O(1) algorithm was also extended to include Gaussian φ(s) by
locally approximating it using polynomials. More recently, Yang et al. [16]
have proposed an O(1) algorithm for arbitrary range and spatial kernels by
extending the bilateral filtering method of Durand et al. [5]. Their algorithm
is based on a piecewise-linear approximation of the bilateral filter obtained
by quantizing φ(s).
In this paper, we extend the O(1) algorithm of Porikli to provide an exact
implementation of the bilateral filter, using trigonometric range kernels. Our
main observation is that trigonometric functions share a common property
of polynomials which allows one to “linearize” the otherwise non-linear
bilateral filter. The common property is that the translate of a polynomial
(resp. trigonometric function) is again a polynomial (resp. trigonometric
function), and importantly, of the same degree. By fixing φ(s) to be a
trigonometric function, we show how this self-shiftable property can be
used to (locally) linearize the bilateral filter. This is the crux of the idea that
was used for deriving the O(1) algorithm for polynomial φ(s) in [10].
2 Constant-time bilateral filter

2.1 The main idea
It is the presence of the term φ(f (x − y) − f (x)) in (1) that makes the filter
non-linear. In the absence of this term, that is, when φ(s) is constant, the
filter is simply given by the averaging
f(x) = ∫_Ω w(y) f(x − y) dy,   (3)
where we assume w(x) to have a total mass of unity. It is well-known
that (3) can be implemented in constant-time, irrespective of the size and
shape of the filter, using the convolution-multiplication property of the
(fast) Fourier transform. The number of computations required per pixel,
however, depends on the size of the image in this case [18]. On the other
hand, it is known that (3) can be realized at the cost of a constant number
of operations per pixel (independent of the size of the image and the filter)
using recursive algorithms. These O(1) recursive algorithms are based on
specialized kernels, such as the box and the hat function [6, 4, 19], and the
more general class of Gaussian-like box splines [3].
Our present idea is to leverage these fast averaging algorithms by expressing (1) in terms of (3), where the averaging is performed on the image
and its simple pointwise transforms. Our observation is that we can do so if
the range kernel is of the form

φ(s) = cos(γs)   (−T ≤ s ≤ T).   (4)

By plugging (4) into (1), we can write the integral as

cos(γf(x)) ∫_Ω w(y) cos(γf(x − y)) f(x − y) dy + sin(γf(x)) ∫_Ω w(y) sin(γf(x − y)) f(x − y) dy.
This is clearly seen to be the linear combination of two spatial averages,
performed on the images cos(γf (x))f (x) and sin(γf (x))f (x). Similarly, we
can write the integral in (2) as
$$\cos(\gamma f(x)) \int_{\Omega} w(y)\cos(\gamma f(x-y))\, dy \;+\; \sin(\gamma f(x)) \int_{\Omega} w(y)\sin(\gamma f(x-y))\, dy.$$
In this case, the averaging is on the images cos(γf (x)) and sin(γf (x)). This
is the trick that allows us to express (1) in terms of linear convolution filters
applied to pointwise transforms of the image.
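This decomposition is easy to verify numerically. The following sketch is our illustration (not part of the paper's implementation): it applies the identity $\cos(a-b) = \cos a \cos b + \sin a \sin b$ to a 1D signal; the spatial kernel, the random signal, and the edge padding are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.uniform(0, 255, size=64)         # 1D stand-in for an image, range [0, T]
T = 255.0
gamma = np.pi / (2 * T)
w = np.array([1.0, 2.0, 3.0, 2.0, 1.0])  # arbitrary symmetric spatial kernel
w = w / w.sum()                          # unit mass, as assumed for (3)
r = len(w) // 2

def direct_numerator(f):
    # direct evaluation of the numerator integral with phi(s) = cos(gamma * s)
    out = np.zeros_like(f)
    for i in range(len(f)):
        for k, wk in enumerate(w):
            j = min(max(i + k - r, 0), len(f) - 1)   # clamp at the borders
            out[i] += wk * np.cos(gamma * (f[j] - f[i])) * f[j]
    return out

def average(g):
    # spatial averaging (3) of a pointwise-transformed signal
    return np.convolve(np.pad(g, r, mode="edge"), w, mode="valid")

# shiftability: cos(a - b) = cos(a)cos(b) + sin(a)sin(b)
shiftable = (np.cos(gamma * f) * average(np.cos(gamma * f) * f)
             + np.sin(gamma * f) * average(np.sin(gamma * f) * f))

assert np.allclose(direct_numerator(f), shiftable)
```

The direct double loop costs O(size of kernel) per sample, while the shiftable form only needs two spatial averages of pointwise transforms, which is what makes an O(1) recursive averaging algorithm applicable.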
Note that the domain of φ(s) is [−T, T ] in (4). We assume here (without
loss of generality) that the dynamic range of the image is within [0, T ]. The
maximum of |f (x)−f (y)| over all x and y such that x−y ∈ Ω is within T in
this case. Therefore, by letting γ = π/2T , we can guarantee the argument γs
of the cosine function to be within the range [−π/2, π/2]. The crucial point
here is that the cosine function is oscillating and can assume negative values
over (−∞, ∞). However, its restriction over the half-period [−π/2, π/2] has
two essential properties of a range kernel—it is non-negative and has a
bump shape (cf. the outermost curve in Figure 1). Note that, in practice, the
bound on the local variations of intensity could be much lower than T .
2.2
General trigonometric kernels
The above idea can easily be extended to more general trigonometric functions of the form $\phi(s) = a_0 + a_1\cos(\gamma s) + \cdots + a_N\cos(N\gamma s)$. This is most conveniently done by writing $\phi(s)$ in terms of complex exponentials, namely as
$$\phi(s) = \sum_{|n| \leq N} c_n \exp(jn\gamma s). \qquad (5)$$
The coefficients $c_n$ must be real and symmetric, since $\phi(s)$ is real and symmetric. Now, using the addition-multiplication property of exponentials, we can write
$$\phi(f(x-y) - f(x)) = \sum_{|n| \leq N} d_n(x) \exp(jn\gamma f(x-y)),$$
where $d_n(x) = c_n \exp(-jn\gamma f(x))$. Plugging this into (1), we immediately
see that
$$\tilde{f}(x) = \frac{\sum_{|n| \leq N} d_n(x)\, \bar{g}_n(x)}{\sum_{|n| \leq N} d_n(x)\, \bar{h}_n(x)}, \qquad (6)$$
where $h_n(x) = \exp(jn\gamma f(x))$, $g_n(x) = f(x)h_n(x)$, and the overbar denotes the spatial average in (3). We refer to $h_n(x)$
and $g_n(x)$ as the auxiliary images, and $N$ as the degree of the kernel.
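As a concrete check of (6), here is a hedged NumPy sketch (our illustration; the coefficients, spatial kernel, and signal are arbitrary choices) that evaluates a degree-2 trigonometric range kernel both directly and through averages of the auxiliary images:

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.uniform(0, 255, size=48)
gamma = np.pi / (2 * 255.0)
c = {-2: 0.1, -1: 0.25, 0: 0.3, 1: 0.25, 2: 0.1}   # real, symmetric c_n (N = 2)
w = np.ones(7) / 7.0                                # unit-mass box spatial kernel
r = len(w) // 2

def average(g):
    return np.convolve(np.pad(g, r, mode="edge"), w, mode="valid")

def phi(s):
    # the range kernel (5); real-valued because the c_n are real and symmetric
    return sum(cn * np.exp(1j * n * gamma * s) for n, cn in c.items()).real

def direct_bilateral(f):
    out = np.zeros_like(f)
    for i in range(len(f)):
        num = den = 0.0
        for k, wk in enumerate(w):
            j = min(max(i + k - r, 0), len(f) - 1)
            num += wk * phi(f[j] - f[i]) * f[j]
            den += wk * phi(f[j] - f[i])
        out[i] = num / den
    return out

# evaluation via (6): averages of the auxiliary images h_n and g_n = f * h_n
num = np.zeros(len(f), dtype=complex)
den = np.zeros(len(f), dtype=complex)
for n, cn in c.items():
    h = np.exp(1j * n * gamma * f)          # auxiliary image h_n
    d = cn * np.exp(-1j * n * gamma * f)    # coefficient d_n
    num += d * average(h * f)
    den += d * average(h)
fast = (num / den).real

assert np.allclose(direct_bilateral(f), fast)
```

Only the spatial averages depend on the neighborhood, so replacing `average` with any O(1) recursive filter turns this into a constant-time-per-pixel algorithm.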
The above analysis gives us the following O(1) algorithm for the bilateral
filter: We first set up the auxiliary images and the coefficients dn (x) from
the input image. We then average each of the auxiliary images using an O(1)
algorithm (this can be done in parallel). The samples of the filtered image
are then given by the simple sum and division in (6). In particular, for an
image of size M × M , we can compute the spatial averages for any arbitrary
w(x) at the cost of O(M 2 log2 M ) operations using the Fourier transform. As
Figure 1: The family of raised cosines $g(s) = [\cos(\gamma s)]^N$ over the dynamic
range $-T \leq s \leq T$ as $N$ goes from 1 to 5 (outer to inner curves). We set
T = 255 corresponding to the maximum dynamic range of a grayscale
image, and γ = π/2T . They satisfy the two essential properties required
to qualify as a valid range kernel of the bilateral filter—non-negativity and
monotonicity (decay). Moreover, they have the remarkable property that
they converge to a Gaussian (after appropriate normalization) as N gets
large; see (7).
Figure 2: Approximation of the Gaussian $\exp(-x^2/2\sigma^2)$ (dashed black curve)
over the interval $[-255, 255]$ using the Taylor polynomial (solid red curve)
and the raised cosine (solid blue curve). We set $\sigma = 80$, and use $N = 4$ for the
raised cosine in (7). The raised cosine is of the form $a_0 + a_1\cos(2\theta) + a_2\cos(4\theta)$
in this case. We use a 3-term Taylor polynomial of the form $b_0 + b_1 x^2 + b_2 x^4$.
It is clear that the raised cosine offers a much better approximation than its
polynomial counterpart. In particular, note how the polynomial blows up
beyond |x| > 100.
mentioned earlier, this can further be reduced to a total of O(M 2 ) operations
using specialized spatial kernels [6, 4, 3].
2.3 Raised cosines
We now address the fact that φ(s) must have some additional properties to
qualify as a valid range kernel (besides being symmetric). Namely, φ(s) must
be non-negative, and must be monotonic in that φ(s1 ) ≤ φ(s2 ) whenever
|s1 | > |s2 |. In particular, it must have a peak at the origin. This ensures that
large differences in intensity get penalized more than small differences, and
that (1) behaves purely as a spatial filter in a region having uniform intensity.
Moreover, one must also have some control on the variance (effective width)
of φ(s). We now address these design problems in order.
The properties of symmetry, non-negativity, and monotonicity are simultaneously enjoyed by the family of raised cosines of the form
$$\phi(s) = [\cos(\gamma s)]^N \qquad (-T \leq s \leq T).$$
Writing $\cos\theta = (e^{j\theta} + e^{-j\theta})/2$, and applying the binomial theorem, we see that
$$\phi(s) = \sum_{n=0}^{N} 2^{-N} \binom{N}{n} \exp\big(j(2n - N)\gamma s\big).$$
This expresses the raised cosines as in (5), though we have used a slightly
different summation. Since φ(s) has a total of (N + 1) terms, this gives a total
of 2(N + 1) auxiliary images in (6). The central term n = N/2 is constant
when N is even, and we have one less auxiliary image to process in this
case.
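The binomial expansion above can be checked in a few lines (our illustration; the degree and range are arbitrary):

```python
import numpy as np
from math import comb

N, T = 4, 255.0
gamma = np.pi / (2 * T)
s = np.linspace(-T, T, 101)

raised = np.cos(gamma * s) ** N
expanded = sum(2.0**-N * comb(N, n) * np.exp(1j * (2 * n - N) * gamma * s)
               for n in range(N + 1)).real

assert np.allclose(raised, expanded)
```

Note that for even N the term n = N/2 has a constant exponential, consistent with the remark that one fewer auxiliary image needs to be processed in that case.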
2.4 Approximation of Gaussian kernels
Figure 1 shows the raised cosines of degree N = 1 to N = 5. It is seen
that φ(s) becomes more Gaussian-like over the half-period [−π, π] with the
increase in N. The fact, however, is that φ(s) converges pointwise to zero at
all points as N gets large, except at the node points 0, ±π, ±2π, . . .. This
problem can nevertheless be addressed by suitably scaling the raised cosine.
The precise result is given by the following pointwise convergence:
$$\lim_{N \to \infty} \left[\cos\left(\frac{\gamma s}{\sqrt{N}}\right)\right]^N = \exp\left(-\frac{\gamma^2 s^2}{2}\right). \qquad (7)$$
Proof. Note that Taylor's theorem with remainder tells us that if $f(x)$ is sufficiently smooth, then $f(x) = \sum_{k=0}^{m-1} x^k f^{(k)}(0)/k! + x^m f^{(m)}(\theta)/m!$, where $\theta$ is some number between $0$ and $x$. Applied to the cosine function, we have $\cos(x) = 1 - x^2/2 + x^4\cos(\theta)/24$. In other words, $\cos(x) = 1 - x^2/2 + r(x)$, where $|r(x)| \lesssim x^4$ (we write $f(x) \lesssim g(x)$ to signify that $f(x) \leq C g(x)$ for some absolute constant $C$, where $C$ is independent of $x$). Using this estimate,
along with the binomial theorem, we can write
$$\left[\cos\left(\frac{\gamma s}{\sqrt{N}}\right)\right]^N = \left(1 - \frac{\gamma^2 s^2}{2N}\right)^{N} + \sum_{k=1}^{N} \binom{N}{k}\, r(s, N)^k \left(1 - \frac{\gamma^2 s^2}{2N}\right)^{N-k},$$
where $|r(s, N)| \lesssim s^4/N^2$. We are almost done since it is well-known that $(1 + x/N)^N$ approaches $\exp(x)$ as $N$ gets large. To establish (7), all we need
to show is that, for any fixed s, the residual terms can be made negligibly
small simply by setting N large.
Now note that if $|s| \lesssim N^{1/2}$, then the magnitude of $(1 - \gamma^2 s^2/2N)$ is
within unity, and, on the other hand, $s^4/N < 1$ when $|s| < N^{1/4}$. Thus,
given any fixed $s$, we set $N$ to be large enough so that $s$ satisfies the above
bounds. Then, following the trivial inequality $\binom{N}{k} < N^k$, we see that the
modulus of the residual is
$$\sum_{k=1}^{N} N^k \left(\frac{s^4}{N^2}\right)^{k} \;\lesssim\; N \left(\frac{s^4}{N}\right)^{N} \;\lesssim\; \frac{1}{N},$$
provided that $|s| < L_N = (N^{1-2/N})^{1/4}$. This can clearly be achieved by
increasing $N$, since $L_N$ is monotonic in $N$.
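The limit (7) is also easy to probe numerically. This short check (ours, not from the paper) measures the maximum deviation from the target Gaussian over the full dynamic range for a few values of N:

```python
import numpy as np

T = 255.0
gamma = np.pi / (2 * T)
s = np.linspace(-T, T, 201)
target = np.exp(-0.5 * gamma**2 * s**2)

def scaled_raised_cosine(s, N):
    # the rescaled raised cosine appearing on the left of (7)
    return np.cos(gamma * s / np.sqrt(N)) ** N

errors = [np.max(np.abs(scaled_raised_cosine(s, N) - target)) for N in (1, 5, 25, 125)]
assert errors[0] > errors[-1]      # the approximation improves with N
assert errors[-1] < 1e-2           # and is already quite tight at N = 125
```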
We have seen that raised cosines of sufficiently large order provide arbitrarily close approximations of the Gaussian. The crucial feature about
(7) is that the rate of convergence is much faster than that of Taylor polynomials, which were used to approximate the Gaussian range kernel in [10].
In particular, we can obtain an approximation comparable to that achieved
using polynomials with fewer terms. This is important from
the practical standpoint. In Figure 2, we consider the target Gaussian kernel
exp(−s2 /2σ 2 ), where σ = 80. We approximate this using the raised cosine
of degree 4, which has 3 terms. We also plot the polynomial corresponding
to the 3-term Taylor expansion of the Gaussian, which was used for approximating the Gaussian in [10]. It is clear that the approximation quality of
the raised cosine is superior to that offered by a Taylor polynomial having
equal number of terms. In particular, note that the Taylor approximation
does not automatically offer the crucial monotonic property.
Table 1: $N_0$ is the minimum degree of the raised cosine required to approximate a Gaussian of standard deviation $\sigma$ on the interval $[-255, 255]$. The
estimate $\lceil(\gamma\sigma)^{-2}\rceil$ is also shown.

σ              200  150  100  80  60  50  40
N0               1    2    3   4   5   7   9
⌈(γσ)^{-2}⌉      1    2    3   5   8  11  17
2.5 Control of the width of range kernel
The approximation in (7) also suggests a means of controlling the variance of
the raised cosine, namely, by controlling the variance of the target Gaussian.
The target Gaussian (with normalization) has a fixed variance of $\gamma^{-2}$. This
can be increased simply by rescaling the argument of the cosine in (7) by
some $\rho > 1$. In particular, for sufficiently large $N$,
$$\left[\cos\left(\frac{\gamma s}{\rho\sqrt{N}}\right)\right]^N \approx \exp\left(-\frac{s^2}{2\rho^2\gamma^{-2}}\right). \qquad (8)$$
The variance of the target Gaussian (again with normalization) has now
increased to $\rho^2\gamma^{-2}$. A fairly accurate estimate of the variance of the raised
cosine is therefore $\sigma^2 \approx \rho^2\gamma^{-2}$. In particular, we can increase the variance
simply by setting $\rho = \gamma\sigma$ for all $\sigma > \gamma^{-2}$, provided $N$ is large enough.
Bringing down the variance below $\gamma^{-2}$, on the other hand, is more subtle.
This cannot be achieved simply by rescaling with $\rho < 1$ on account of the
oscillatory nature of the cosine. For instance, setting $\rho < 1$ can cause $\phi(s)$ to
become negative, or lose its monotonicity. The only way of doing so is
by increasing the degree of the cosine (cf. Figure 1). In particular, N must
be large enough so that the argument of cos(·) is within [−π/2, π/2] for all
$-T \leq s \leq T$. This is the case if
$$N \geq \rho^{-2} \approx (\gamma\sigma)^{-2}.$$
In other words, to approximate a Gaussian having a small width $\sigma$, $N$
must at least be as large as $N_0 \approx (\gamma\sigma)^{-2}$. The bound is quite tight for large
$\sigma$, but is loose when $\sigma$ is small. We empirically determined $N_0$ for certain
values of $\sigma$ for the case $T = 255$, some of which are given in Table 1. It
turned out to be much lower than the estimate $(\gamma\sigma)^{-2}$ when $\sigma$ is small. For
a fixed setting of T (e.g., for grayscale images), this suggests the use of a
lookup table for determining N0 for small σ on-the-fly.
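The estimate row of Table 1 can be reproduced directly; a one-liner (ours) for T = 255:

```python
import math

T = 255.0
gamma = math.pi / (2 * T)
estimate = {s: math.ceil((gamma * s) ** -2) for s in (200, 150, 100, 80, 60, 50, 40)}
print(estimate)   # {200: 1, 150: 2, 100: 3, 80: 5, 60: 8, 50: 11, 40: 17}
```

These are the estimated degrees; as noted above, the empirically determined minimum degrees N0 are smaller for small σ, which is what motivates the lookup table.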
The above analysis leads us to an O(1) algorithm for approximating
the Gaussian bilateral filtering, where both the spatial and range filters are
Gaussians. The steps are summarized in Algorithm 1.
3 Experiments
We implemented the proposed algorithm for Gaussian bilateral filtering in
Java on a Mac OS X 2× Quad core 2.66 GHz machine, as an ImageJ plugin.
We used multi-threading for computing the spatial averages of the auxiliary
images in parallel. A recursive O(1) algorithm was used for implementing
the Gaussian filter in space domain [18]. The average times required for
processing a 720 × 540 grayscale image using our algorithm are shown in
Algorithm 1 Fast O(1) bilateral filtering for the Gaussian kernel
Input: Image $f(x)$, dynamic range $[-T, T]$, $\sigma_s^2$ and $\sigma_r^2$ for the spatial and range filters.
1. Set $\gamma = \pi/2T$, and $\rho = \gamma\sigma_r$.
2. If $\sigma_r > \gamma^{-2}$, pick any large $N$. Else, set $N = (\gamma\sigma_r)^{-2}$, or use a look-up table to fix $N$.
3. Set $h_n(x) = \exp\big(j\gamma(2n - N)f(x)/\rho\sqrt{N}\big)$ and $g_n(x) = f(x)h_n(x)$, and the coefficients $d_n(x) = 2^{-N}\binom{N}{n}\exp\big(-j\gamma(2n - N)f(x)/\rho\sqrt{N}\big)$.
4. Use an O(1) algorithm to filter $h_n(x)$ and $g_n(x)$ with a Gaussian of variance $\sigma_s^2$ to get $\bar{h}_n(x)$ and $\bar{g}_n(x)$.
5. Set $\tilde{f}(x)$ as the ratio of $\sum_{n=0}^{N} d_n(x)\bar{g}_n(x)$ and $\sum_{n=0}^{N} d_n(x)\bar{h}_n(x)$.
Return: Filtered image $\tilde{f}(x)$.
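Below is a hedged 1D sketch of Algorithm 1 (ours; the paper's actual implementation is the Java/ImageJ plugin described in this section). Since the raised cosine is the kernel the algorithm realizes exactly, we validate it against a direct evaluation of that same raised-cosine kernel rather than against a true Gaussian:

```python
import numpy as np
from math import comb

def bilateral_raised_cosine(f, w, T, sigma_r, N):
    # Algorithm 1 with the range kernel [cos(gamma*s/(rho*sqrt(N)))]^N;
    # `w` is a 1D unit-mass spatial kernel, borders are edge-padded
    gamma = np.pi / (2 * T)
    rho = gamma * sigma_r
    r = len(w) // 2

    def average(g):
        return np.convolve(np.pad(g, r, mode="edge"), w, mode="valid")

    num = np.zeros(len(f), dtype=complex)
    den = np.zeros(len(f), dtype=complex)
    for n in range(N + 1):
        omega = gamma * (2 * n - N) / (rho * np.sqrt(N))
        h = np.exp(1j * omega * f)                          # auxiliary image h_n
        d = 2.0**-N * comb(N, n) * np.exp(-1j * omega * f)  # coefficient d_n
        num += d * average(h * f)                           # g_n = f * h_n
        den += d * average(h)
    return (num / den).real

rng = np.random.default_rng(2)
f = rng.uniform(0, 255, size=40)
w = np.ones(5) / 5.0
T, sigma_r, N = 255.0, 80.0, 4
gamma, rho = np.pi / (2 * T), np.pi / (2 * T) * 80.0

def direct(f):
    # brute-force bilateral filter with the same raised-cosine range kernel
    r = len(w) // 2
    out = np.zeros_like(f)
    for i in range(len(f)):
        num = den = 0.0
        for k, wk in enumerate(w):
            j = min(max(i + k - r, 0), len(f) - 1)
            phi = np.cos(gamma * (f[j] - f[i]) / (rho * np.sqrt(N))) ** N
            num += wk * phi * f[j]
            den += wk * phi
        out[i] = num / den
    return out

assert np.allclose(direct(f), bilateral_raised_cosine(f, w, T, sigma_r, N))
```

Swapping the box `average` for a recursive O(1) Gaussian (as in [18, 19]) gives the constant-time-per-pixel behavior reported in the experiments.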
Table 2. We repeated the experiment for different variances of the Gaussian
range kernel, and at different spatial variances. As seen from the table,
the processing time is quite fast compared to a direct implementation of
the bilateral filter, which requires considerable time depending on the size
of the spatial filter. For instance, a direct implementation of the filter on
a 512 × 512 image required 4 seconds for σs as low as 3 on our machine
(using discretized Gaussians supported on $[-3\sigma, 3\sigma]^2$), and this climbed up
to almost 10 seconds for σs = 10. As is seen from Table 2, the processing time
of our algorithm, however, suddenly shoots up for narrow Gaussians with
σr < 15. This is due to the large N required to approximate the Gaussian in
this regime (cf. Table 1). We have figured out an approximation scheme for
further accelerating the processing for very small σr, without appreciably
degrading the final output. Discussion of this method is, however, beyond
the scope of the present paper.
We next tried a visual comparison of the output of our algorithm with the
algorithm in [10]. In Figure 3, we compare the outputs of the two algorithms
with the direct implementation, on a natural grayscale image. As is clearly
seen from the processed images, our result resembles the exact output very
closely. The result obtained using the polynomial kernel, on the other
hand, shows strange artifacts. The difference is also clear from the standard
deviation of the error between the exact output and the approximations. We
note, however, that the execution time of the polynomial method is slightly
lower than that of our method, since it requires half the number of auxiliary
images for a given degree.
We also tested our implementation of the Gaussian bilateral filter on
color (RGB) images. We tried a naive processing, where each of the three
color channels was processed independently. The results on a couple of
images are shown in Figure 4. The Java source code can be downloaded
from the web at http://bigwww.epfl.ch/algorithms/bilateral-filter.
Table 2: The time in milliseconds required for processing a grayscale image
of size 720 × 540 pixels using our algorithm. The processing was done on a
Mac OS X, 2× quad core 2.66 GHz machine, using multithreading.

σr →        10   20   30   40  50  60  70  80  90  100
σs = 10   3604  452  195  120  74  61  49  34  32   27
σs = 100  3755  482  217  127  89  69  54  43  37   28
4 Discussion
We presented a general method of computing the bilateral filter in constant-time using trigonometric range kernels. Within this framework, we showed
how feasible range kernels can be realized using the family of raised cosines.
The highlights of our approach are the following:
• Accuracy. Our method is exact, at least for the family of raised cosines. It
does not require the quantization of the range kernel, as is the case in [5, 16].
Moreover, note that the auxiliary images in (6) have the same dynamic range
as the input image irrespective of the degree N . This is unlike the situation
in [10], where the dynamic range of the auxiliary images grows exponentially
with N. This makes the computations susceptible to numerical errors
for large N .
• Speed. Besides having O(1) complexity, our algorithm can also be implemented in parallel, which allows us to accelerate it further.
• Approximation property. Trigonometric functions yield better (local)
approximation of Gaussians than polynomials. In particular, we showed
that by using a particular class of raised cosines, we can obtain much better
approximations of the Gaussian range kernel than that offered by the Taylor
polynomials in [10]. The final output is artifact-free and resembles the true
output very closely. The only flip side of our approach (this is also the case
with [10], as noted in [16]) is that a large number of terms are required to
approximate very narrow Gaussians over large intervals.
• Space-variant extension. The spatial kernel in (1) can be changed from
point-to-point within the image to control the amount of smoothing (particularly in homogeneous regions), while the range kernel is kept fixed. Thanks
Figure 3: Comparison of various implementations of the Gaussian bilateral
filter on the grayscale image Isha of size 600 × 512. The filter settings are
σs = 15 and σr = 80. (a) Original image; (b) Direct implementation of
the bilateral filter; (c) Output obtained using polynomial kernel [10]; and
(d) Output of our algorithm. Note the strange artifacts in (c), particularly
around the right eye (see zoomed insets). This is on account of the distortion
caused by the polynomial approximation shown in Figure 2. The standard
deviation of the error between (b) and (c) is 6.5, while that between (b) and
(d) is 1.2.
Figure 4: Results on the color images Greekdome and Tulip, using our implementation of the Gaussian bilateral filter. The original image is on the
left, and the processed image is on the right. In either case, the red, green,
and blue channels were processed independently. We used σs = 10 and
σr = 20 for Greekdome, and σs = 20 and σr = 60 for Tulip. (Images courtesy
of Sylvain Paris and Frédo Durand).
to (6), this can be done simply by computing the space-variant averages of
each auxiliary image. The good news is that this can also be realized for
a M × M image at the cost of O(M 2 ) operations, using particular spatial
kernels. This includes the two-dimensional box and hat filter [6, 4], and the
more general class of Gaussian-like box splines in [3].
5 Acknowledgement
The authors thank Ayush Bhandari for his help with the insets in Figure 3,
and also Sagnik Sanyal for providing the image used in the same figure.
References
[1] E.P. Bennett, J.L. Mason, and L. McMillan. Multispectral bilateral video
fusion. IEEE Transactions on Image Processing, 16:1185–1194, 2007.
[2] A. Buades, B. Coll, and J.M. Morel. A review of image denoising
algorithms, with a new one. Multiscale Modeling and Simulation, 4:490–
530, 2005.
[3] K.N. Chaudhury, A. Muñoz-Barrutia, and M. Unser. Fast space-variant
elliptical filtering using box splines. IEEE Transactions on Image Processing, 19:2290–2306, 2010.
[4] F. C. Crow. Summed-area tables for texture mapping. ACM Siggraph,
18:207–212, 1984.
[5] F. Durand and J. Dorsey. Fast bilateral filtering for the display of
high-dynamic-range images. ACM Siggraph, 21:257–266, 2002.
[6] P.S. Heckbert. Filtering by repeated integration. International Conference
on Computer Graphics and Interactive Techniques, 20(4):315–321, 1986.
[7] S. Paris and F. Durand. A fast approximation of the bilateral filter using
a signal processing approach. European Conference on Computer Vision,
pages 568–580, 2006.
[8] P. Perona and J. Malik. Scale-space and edge detection using anisotropic
diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence,
12(7):629–639, 1990.
[9] T.Q. Pham and L.J. van Vliet. Separable bilateral filtering for fast video
preprocessing. IEEE International Conference on Multimedia and Expo,
pages 1–4, 2005.
[10] F. Porikli. Constant time O(1) bilateral filtering. IEEE Conference on
Computer Vision and Pattern Recognition, pages 1–8, 2008.
[11] R. Ramanath and W. E. Snyder. Adaptive demosaicking. Journal of
Electronic Imaging, 12:633–642, 2003.
[12] C. Tomasi and R. Manduchi. Bilateral filtering for gray and color
images. IEEE International Conference on Computer Vision, pages 839–846,
1998.
[13] B. Weiss. Fast median and bilateral filtering. ACM Siggraph, 25:519–526,
2006.
[14] H. Winnemöller, S. C. Olsen, and B. Gooch. Real-time video abstraction.
ACM Siggraph, pages 1221–1226, 2006.
[15] J. Xiao, H. Cheng, H. Sawhney, C. Rao, and M. Isnardi. Bilateral
filtering-based optical flow estimation with occlusion detection. European Conference on Computer Vision, pages 211–224, 2006.
[16] Q. Yang, K.-H. Tan, and N. Ahuja. Real-time O(1) bilateral filtering.
IEEE Conference on Computer Vision and Pattern Recognition, pages 557–
564, 2009.
[17] Q. Yang, L. Wang, R. Yang, H. Stewenius, and D. Nister. Stereo matching with color-weighted correlation, hierarchical belief propagation and
occlusion handling. IEEE Transaction on Pattern Analysis and Machine
Intelligence, 31:492–504, 2009.
[18] I. Young, J. Gerbrands, and L. van Vliet. Fundamentals of Image Processing. Delft PH Publications, 1995.
[19] I.T. Young and L.J. van Vliet. Recursive implementation of the Gaussian
filter. Signal Processing, 44(2):139–151, 1995.
Deep Learning with Topological Signatures

arXiv:1707.04041v3 [] 16 Feb 2018

Christoph Hofer
Department of Computer Science
University of Salzburg, Austria
[email protected]

Roland Kwitt
Department of Computer Science
University of Salzburg, Austria
[email protected]

Marc Niethammer
UNC Chapel Hill, NC, USA
[email protected]

Andreas Uhl
Department of Computer Science
University of Salzburg, Austria
[email protected]
Abstract
Inferring topological and geometrical information from data can offer an alternative
perspective on machine learning problems. Methods from topological data analysis,
e.g., persistent homology, enable us to obtain such information, typically in the form
of summary representations of topological features. However, such topological
signatures often come with an unusual structure (e.g., multisets of intervals) that is
highly impractical for most machine learning techniques. While many strategies
have been proposed to map these topological signatures into machine learning
compatible representations, they suffer from being agnostic to the target learning
task. In contrast, we propose a technique that enables us to input topological
signatures to deep neural networks and learn a task-optimal representation during
training. Our approach is realized as a novel input layer with favorable theoretical
properties. Classification experiments on 2D object shapes and social network
graphs demonstrate the versatility of the approach and, in case of the latter, we
even outperform the state-of-the-art by a large margin.
1
Introduction
Methods from algebraic topology have only recently emerged in the machine learning community,
most prominently under the term topological data analysis (TDA) [7]. Since TDA enables us to
infer relevant topological and geometrical information from data, it can offer a novel and potentially
beneficial perspective on various machine learning problems. Two compelling benefits of TDA
are (1) its versatility, i.e., we are not restricted to any particular kind of data (such as images,
sensor measurements, time-series, graphs, etc.) and (2) its robustness to noise. Several works have
demonstrated that TDA can be beneficial in a diverse set of problems, such as studying the manifold
of natural image patches [8], analyzing activity patterns of the visual cortex [28], classification of 3D
surface meshes [27, 22], clustering [11], or recognition of 2D object shapes [29].
Currently, the most widely-used tool from TDA is persistent homology [15, 14]. Essentially¹,
persistent homology allows us to track topological changes as we analyze data at multiple “scales”.
As the scale changes, topological features (such as connected components, holes, etc.) appear and
disappear. Persistent homology associates a lifespan to these features in the form of a birth and
a death time. The collection of (birth, death) tuples forms a multiset that can be visualized as a
persistence diagram or a barcode, also referred to as a topological signature of the data. However,
leveraging these signatures for learning purposes poses considerable challenges, mostly due to their
¹ We will make these concepts more concrete in Sec. 2.
[Figure 1 schematic: (1) rotate points in D by π/4; (2) transform & project each rotated point via the structure elements.]

Figure 1: Illustration of the proposed network input layer for topological signatures. Each signature, in the form of a persistence diagram $D \in \mathcal{D}$ (left), is projected w.r.t. a collection of structure elements. The layer's learnable parameters $\theta$ are the locations $\mu_i$ and the scales $\sigma_i$ of these elements; $\nu \in \mathbb{R}^+$ is set a-priori and meant to discount the impact of points with low persistence (and, in many cases, of low discriminative power). The layer output $y$ is a concatenation of the projections. In this illustration, $N = 2$ and hence $y = (y_1, y_2)^\top$.
unusual structure as a multiset. While there exist suitable metrics to compare signatures (e.g., the
Wasserstein metric), they are highly impractical for learning, as they require solving optimal matching
problems.
Related work. In order to deal with these issues, several strategies have been proposed. In [2] for
instance, Adcock et al. use invariant theory to “coordinatize” the space of barcodes. This allows to
map barcodes to vectors of fixed size which can then be fed to standard machine learning techniques,
such as support vector machines (SVMs). Alternatively, Adams et al. [1] map barcodes to so-called
persistence images which, upon discretization, can also be interpreted as vectors and used with
standard learning techniques. Along another line of research, Bubenik [6] proposes a mapping
of barcodes into a Banach space. This has been shown to be particularly viable in a statistical
context (see, e.g., [10]). The mapping outputs a representation referred to as a persistence landscape.
Interestingly, under a specific choice of parameters, barcodes are mapped into L2 (R2 ) and the
inner-product in that space can be used to construct a valid kernel function. Similar kernel-based
techniques have also recently been studied by Reininghaus et al. [27], Kwitt et al. [20] and Kusano
et al. [19].
While all previously mentioned approaches retain certain stability properties of the original representation with respect to common metrics in TDA (such as the Wasserstein or Bottleneck distances),
they also share one common drawback: the mapping of topological signatures to a representation that
is compatible with existing learning techniques is pre-defined. Consequently, it is fixed and therefore
agnostic to any specific learning task. This is clearly suboptimal, as the eminent success of deep
neural networks (e.g., [18, 17]) has shown that learning representations is a preferable approach.
Furthermore, techniques based on kernels [27, 20, 19], for instance, additionally suffer from scalability
issues, as training typically scales poorly with the number of samples (e.g., roughly cubic in case of
kernel-SVMs). In the spirit of end-to-end training, we therefore aim for an approach that allows to
learn a task-optimal representation of topological signatures. We additionally remark that, e.g., Qi et
al. [25] or Ravanbakhsh et al. [26] have proposed architectures that can handle sets, but only with
fixed size. In our context, this is impractical as the capability of handling sets with varying cardinality
is a requirement to handle persistent homology in a machine learning setting.
Contribution. To realize this idea, we advocate a novel input layer for deep neural networks that
takes a topological signature (in our case, a persistence diagram), and computes a parametrized
projection that can be learned during network training. Specifically, this layer is designed such that
its output is stable with respect to the 1-Wasserstein distance (similar to [27] or [1]). To demonstrate
the versatility of this approach, we present experiments on 2D object shape classification and the
classification of social network graphs. On the latter, we improve the state-of-the-art by a large
margin, clearly demonstrating the power of combining TDA with deep learning in this context.
2
Background
For space reasons, we only provide a brief overview of the concepts that are relevant to this work and
refer the reader to [16] or [14] for further details.
Homology. The key concept of homology theory is to study the properties of some object X by
means of (commutative) algebra. In particular, we assign to X a sequence of modules C0 , C1 , . . .
which are connected by homomorphisms ∂n : Cn → Cn−1 such that im ∂n+1 ⊆ ker ∂n . A structure
of this form is called a chain complex and by studying its homology groups Hn = ker ∂n / im ∂n+1
we can derive properties of X.
A prominent example of a homology theory is simplicial homology. Throughout this work, it is
the used homology theory and hence we will now concretize the already presented ideas. Let K
be a simplicial complex and Kn its n-skeleton. Then we set Cn (K) as the vector space generated (freely) by Kn over Z/2Z2 . The connecting homomorphisms ∂n : Cn (K) → Cn−1 (K) are
called
∂n (σ) =
Pn boundary operators. For a simplex σ = [x0 , . . . , xn ] ∈ Kn , we definePthem asP
σi ) =
∂n (σi ).
i=0 [x0 , . . . , xi−1 , xi+1 , . . . , xn ] and linearly extend this to Cn (K), i.e., ∂n (
Persistent homology. Let K be a simplicial complex and (K i )m
i=0 a sequence of simplicial complexes such that ∅ = K 0 ⊆ K 1 ⊆ · · · ⊆ K m = K. Then, (K i )m
i=0 is called a filtration of K. If we
use the extra information provided by the filtration of K, we obtain the following sequence of chain
complexes (left),
···
∂3
C22
···
∂3
∂2
C11
∂2
C12
ι
C01
∂1
C02
ι
ι
C2m
∂1
C1m
∂0
0
ι
∂1
C0m
C01 = [[v1 ], [v2 ]]Z2
K1
0
ι
ι
∂2
∂0
∂0
Example
C21
C11 = 0
0
C02 = [[v1 ], [v2 ], [v3 ]]Z2
K2
⊆
∂3
⊆
···
C12 = [[v1 , v3 ], [v2 , v3 ]]Z2
v2
K3
v3
v4
v1
C02 = [[v1 ], [v2 ], [v3 ], [v4 ]]Z2
C12 = [[v1 , v3 ], [v2 , v3 ], [v3 , v4 ]]Z2
C21 = 0
C22 = 0
C23 = 0
where Cni = Cn (Kni ) and ι denotes the inclusion. This then leads to the concept of persistent
homology groups, defined by
$$H_n^{i,j} = \ker \partial_n^i \big/ \left(\operatorname{im} \partial_{n+1}^{j} \cap \ker \partial_n^i\right) \quad \text{for } i \leq j.$$
The ranks, $\beta_n^{i,j} = \operatorname{rank} H_n^{i,j}$, of these homology groups (i.e., the $n$-th persistent Betti numbers),
capture the number of homological features of dimensionality $n$ (e.g., connected components for
$n = 0$, holes for $n = 1$, etc.) that persist from $i$ to (at least) $j$. In fact, according to [14, Fundamental
Lemma of Persistent Homology], the quantities
$$\mu_n^{i,j} = (\beta_n^{i,j-1} - \beta_n^{i,j}) - (\beta_n^{i-1,j-1} - \beta_n^{i-1,j}) \quad \text{for } i < j \qquad (1)$$
encode all the information about the persistent Betti numbers of dimension n.
Topological signatures. A typical way to obtain a filtration of $K$ is to consider sublevel sets of a
function $f : C_0(K) \to \mathbb{R}$. This function can be easily lifted to higher-dimensional chain groups of
$K$ by
$$f([v_0, \dots, v_n]) = \max\{f([v_i]) : 0 \leq i \leq n\}.$$
Given $m = |f(C_0(K))|$, we obtain $(K_i)_{i=0}^{m}$ by setting $K_0 = \emptyset$ and $K_i = f^{-1}((-\infty, a_i])$ for
$1 \leq i \leq m$, where $a_1 < \cdots < a_m$ is the sorted sequence of values of $f(C_0(K))$. If we construct
a multiset such that, for $i < j$, the point $(a_i, a_j)$ is inserted with multiplicity $\mu_n^{i,j}$, we effectively
encode the persistent homology of dimension $n$ w.r.t. the sublevel set filtration induced by $f$. Upon
adding diagonal points with infinite multiplicity, we obtain the following structure:
adding diagonal points with infinite multiplicity, we obtain the following structure:
Definition 1 (Persistence diagram). Let ∆ = {x ∈ R2∆ : mult(x) = ∞} be the multiset of the
diagonal R2∆ = {(x0 , x1 ) ∈ R2 : x0 = x1 }, where mult denotes the multiplicity function and let
R2? = {(x0 , x1 ) ∈ R2 : x1 > x0 }. A persistence diagram, D, is a multiset of the form
D = {x : x ∈ R2? } ∪ ∆ .
We denote by D the set of all persistence diagrams of the form |D \ ∆| < ∞ .
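To make the birth/death bookkeeping concrete, here is a small self-contained example (ours, not from the paper): 0-dimensional persistence of the sublevel-set filtration of a function sampled on a path graph, computed with union-find and the elder rule. The globally oldest component never dies, giving one essential feature.

```python
def persistence_0d(values):
    # (birth, death) pairs of connected components for the sublevel-set
    # filtration of a function on a path graph; elder rule for merges
    order = sorted(range(len(values)), key=lambda i: values[i])
    parent, birth, pairs = {}, {}, []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i in order:
        parent[i] = i
        birth[i] = values[i]
        for j in (i - 1, i + 1):
            if j in parent:
                ri, rj = find(i), find(j)
                if ri != rj:
                    # elder rule: the younger component dies here
                    young, old = (ri, rj) if birth[ri] >= birth[rj] else (rj, ri)
                    pairs.append((birth[young], values[i]))
                    parent[young] = old
    pairs = [(b, d) for b, d in pairs if b < d]   # drop zero-persistence pairs
    pairs += [(birth[i], float("inf")) for i in parent if find(i) == i]
    return sorted(pairs)

# two local minima (values 0 and 1); the component born at 1 dies when
# the saddle at value 3 merges it into the older component born at 0
assert persistence_0d([0, 3, 1, 5]) == [(0, float("inf")), (1, 3)]
```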
For a given complex $K$ of dimension $n_{\max}$ and a function $f$ (of the discussed form), we can interpret
persistent homology as a mapping $(K, f) \mapsto (D_0, \dots, D_{n_{\max}-1})$, where $D_i$ is the diagram of
dimension $i$ and $n_{\max}$ the dimension of $K$. We can additionally add a metric structure to the space of
persistence diagrams by introducing the notion of distances.
² Simplicial homology is not specific to $\mathbb{Z}/2\mathbb{Z}$, but it is a typical choice, since it allows us to interpret $n$-chains as sets of $n$-simplices.
Definition 2 (Bottleneck, Wasserstein distance). For two persistence diagrams $D$ and $E$, we define
their Bottleneck ($w_\infty$) and Wasserstein ($w_q^p$) distances by
$$w_\infty(D, E) = \inf_\eta \sup_{x \in D} \|x - \eta(x)\|_\infty \quad \text{and} \quad w_q^p(D, E) = \inf_\eta \Bigg(\sum_{x \in D} \|x - \eta(x)\|_q^p\Bigg)^{\!1/p}, \qquad (2)$$
where $p, q \in \mathbb{N}$ and the infimum is taken over all bijections $\eta : D \to E$.
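For intuition, the infimum in (2) can be brute-forced for tiny diagrams. The sketch below (ours) uses the standard augmentation trick: each diagram is padded with the diagonal projections of the other's off-diagonal points, so unmatched points can be "matched to the diagonal"; diagonal-to-diagonal matches cost nothing. Practical implementations use assignment or optimal-transport solvers instead of enumerating permutations.

```python
from itertools import permutations

def wasserstein_1(D, E):
    # brute-force 1-Wasserstein distance (p = q = 1) between two small
    # persistence diagrams given as lists of (birth, death) points
    proj = lambda p: ((p[0] + p[1]) / 2,) * 2      # nearest diagonal point
    A = list(D) + [proj(q) for q in E]             # augment with projections
    B = list(E) + [proj(p) for p in D]

    def cost(p, q):
        if p[0] == p[1] and q[0] == q[1]:          # diagonal-to-diagonal: free
            return 0.0
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    return min(sum(cost(a, b) for a, b in zip(A, perm))
               for perm in permutations(B))

D = [(0.0, 4.0)]
E = [(0.0, 4.0), (2.0, 3.0)]
# the extra point (2, 3) is matched to the diagonal at (2.5, 2.5), cost 0.5 + 0.5
assert wasserstein_1(D, E) == 1.0
```

This also illustrates why these metrics are "highly impractical for learning": every evaluation requires solving a matching problem.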
Essentially, this facilitates studying stability/continuity properties of topological signatures w.r.t.
metrics in the filtration or complex space; we refer the reader to [12],[13], [9] for a selection of
important stability results.
Remark. By setting $\mu_n^{i,\infty} = \beta_n^{i,m} - \beta_n^{i-1,m}$, we extend Eq. (1) to features which never disappear, also
referred to as essential. This change can be lifted to $D$ by setting $\mathbb{R}^2_\star = \{(x_0, x_1) \in \mathbb{R} \times (\mathbb{R} \cup \{\infty\}) : x_1 > x_0\}$. In Sec. 5, we will see that essential features can offer discriminative information.
3 A network layer for topological signatures
In this section, we introduce the proposed (parametrized) network layer for topological signatures
(in the form of persistence diagrams). The key idea is to take any D and define a projection w.r.t. a
collection (of fixed size N ) of structure elements.
In the following, we set R⁺ := {x ∈ R : x > 0} and R⁺₀ := {x ∈ R : x ≥ 0}, resp., and start by
rotating points of D such that points on R²∆ lie on the x-axis, see Fig. 1. The y-axis can then be
interpreted as the persistence of features. Formally, we let b0 and b1 be the unit vectors in directions
(1, 1)⊤ and (−1, 1)⊤ and define a mapping ρ : R²⋆ ∪ R²∆ → R × R⁺₀ such that x ↦ (⟨x, b0⟩, ⟨x, b1⟩).
This rotates points in R²⋆ ∪ R²∆ clockwise by π/4. We will later see that this construction is beneficial
for a closer analysis of the layers’ properties. Similar to [27, 19], we choose exponential functions as
structure elements, but other choices are possible (see Lemma 1). Differently to [27, 19], however,
our structure elements are not at fixed locations (i.e., one element per point in D), but their locations
and scales are learned during training.
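A direct transcription of ρ (the helper name is ours):

```python
import math

def rho(x):
    """Rotate a diagram point clockwise by pi/4 so the diagonal R^2_Delta
    lands on the x-axis; the second coordinate then reflects persistence."""
    b0 = (1 / math.sqrt(2), 1 / math.sqrt(2))    # unit vector in direction (1, 1)
    b1 = (-1 / math.sqrt(2), 1 / math.sqrt(2))   # unit vector in direction (-1, 1)
    return (x[0] * b0[0] + x[1] * b0[1],         # <x, b0>
            x[0] * b1[0] + x[1] * b1[1])         # <x, b1>
```

Any diagonal point (a, a) maps to (a·√2, 0), while a point with birth b and death d maps to a point whose second coordinate is (d − b)/√2.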
Definition 3. Let µ = (µ0, µ1)⊤ ∈ R × R⁺, σ = (σ0, σ1) ∈ R⁺ × R⁺ and ν ∈ R⁺. We define
s_{µ,σ,ν} : R × R⁺₀ → R as follows:

$$s_{\mu,\sigma,\nu}(x_0, x_1) = \begin{cases} e^{-\sigma_0^2 (x_0 - \mu_0)^2 - \sigma_1^2 (x_1 - \mu_1)^2}, & x_1 \in [\nu, \infty) \\[2pt] e^{-\sigma_0^2 (x_0 - \mu_0)^2 - \sigma_1^2 \left(\ln\left(\frac{x_1}{\nu}\right)\nu + \nu - \mu_1\right)^2}, & x_1 \in (0, \nu) \\[2pt] 0, & x_1 = 0 \end{cases} \quad (3)$$

A persistence diagram D is then projected w.r.t. s_{µ,σ,ν} via

$$S_{\mu,\sigma,\nu} : D \to \mathbb{R}, \quad D \mapsto \sum_{x \in D} s_{\mu,\sigma,\nu}(\rho(x)) . \quad (4)$$
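Definition 3 and Eq. (4) translate directly into code; the following is a plain-Python sketch with our variable names (`mu`, `sigma`, `nu` for µ, σ, ν):

```python
import math

def s(mu, sigma, nu, x0, x1):
    """Structure element of Eq. (3): Gaussian above nu, log-transformed below."""
    if x1 <= 0:
        return 0.0
    t1 = x1 if x1 >= nu else math.log(x1 / nu) * nu + nu  # log-transform branch
    return math.exp(-sigma[0] ** 2 * (x0 - mu[0]) ** 2
                    - sigma[1] ** 2 * (t1 - mu[1]) ** 2)

def S(mu, sigma, nu, rotated_points):
    """Projection of Eq. (4): sum s over the (already rotated) diagram points."""
    return sum(s(mu, sigma, nu, x0, x1) for x0, x1 in rotated_points)
```

Since the two exponential branches agree at x1 = ν and the value tends to 0 as x1 → 0, the function is continuous on R × R⁺₀.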
Remark. Note that s_{µ,σ,ν} is continuous in x1 as

$$\lim_{x \to \nu} x = \lim_{x \to \nu} \ln\left(\frac{x}{\nu}\right)\nu + \nu \quad \text{and} \quad \lim_{x_1 \to 0} s_{\mu,\sigma,\nu}(x_0, x_1) = 0 = s_{\mu,\sigma,\nu}(x_0, 0)$$

and e^(·) is continuous. Further, s_{µ,σ,ν} is differentiable on R × R⁺, since

$$1 = \lim_{x \to \nu^+} \frac{\partial x_1}{\partial x_1}(x) \quad \text{and} \quad \lim_{x \to \nu^-} \frac{\partial \left(\ln\left(\frac{x_1}{\nu}\right)\nu + \nu\right)}{\partial x_1}(x) = \lim_{x \to \nu^-} \frac{\nu}{x} = 1 .$$
x
Also note that we use the log-transform in Eq. (4) to guarantee that sµ,σ,ν satisfies the conditions of
Lemma 1; this is, however, only one possible choice. Finally, given a collection of structure elements
Sµi ,σi ,ν , we combine them to form the output of the network layer.
Definition 4. Let N ∈ N, θ = (µi, σi)_{i=0}^{N−1} ∈ ((R × R⁺) × (R⁺ × R⁺))^N and ν ∈ R⁺. We define

$$S_{\theta,\nu} : D \to (\mathbb{R}^+_0)^N, \quad D \mapsto \big( S_{\mu_i,\sigma_i,\nu}(D) \big)_{i=0}^{N-1}$$

as the concatenation of all N mappings defined in Eq. (4).
Importantly, a network layer implementing Def. 4 is trainable via backpropagation, as (1) sµi ,σi ,ν is
differentiable in µi , σ i , (2) Sµi ,σi ,ν (D) is a finite sum of sµi ,σi ,ν and (3) Sθ,ν is just a concatenation.
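To illustrate point (1), the snippet below checks the gradient of one layer output w.r.t. µ0 by central finite differences against the closed form 2σ0²(x0 − µ0) · s; for simplicity it uses only the x1 ≥ ν branch of the structure element (a plain 2D Gaussian). This is our own numerical check, not part of the paper:

```python
import math

def gauss(mu, sigma, x):
    # x1 >= nu branch of the structure element (plain 2D Gaussian)
    return math.exp(-sigma[0] ** 2 * (x[0] - mu[0]) ** 2
                    - sigma[1] ** 2 * (x[1] - mu[1]) ** 2)

def layer(theta, diagram):
    """Def. 4 as a list: one summed projection per structure element."""
    return [sum(gauss(mu, sigma, x) for x in diagram) for mu, sigma in theta]

def grad_mu0(mu, sigma, diagram, eps=1e-6):
    """Central finite difference of one output coordinate w.r.t. mu_0 --
    backpropagation computes the same quantity analytically."""
    hi = sum(gauss((mu[0] + eps, mu[1]), sigma, x) for x in diagram)
    lo = sum(gauss((mu[0] - eps, mu[1]), sigma, x) for x in diagram)
    return (hi - lo) / (2 * eps)
```

In a deep-learning framework the same layer would be expressed with differentiable tensor operations so that the parameters µi, σi are updated by SGD.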
4 Theoretical properties

In this section, we demonstrate that the proposed layer is stable w.r.t. the 1-Wasserstein distance w_1^q,
see Eq. (2). In fact, this claim will follow from a more general result, stating sufficient conditions on
functions s : R²⋆ ∪ R²∆ → R⁺₀ such that a construction in the form of Eq. (3) is stable w.r.t. w_1^q.

Lemma 1. Let s : R²⋆ ∪ R²∆ → R⁺₀ have the following properties:

(i) s is Lipschitz continuous w.r.t. ‖·‖q with constant Ks
(ii) s(x) = 0, for x ∈ R²∆

Then, for two persistence diagrams D, E ∈ D, it holds that

$$\left| \sum_{x \in D} s(x) - \sum_{y \in E} s(y) \right| \le K_s \cdot w_1^q(D, E) . \quad (5)$$
Proof. see Appendix B
Remark. At this point, we want to clarify that Lemma 1 is not specific to sµ,σ,ν (e.g., as in Def. 3).
Rather, Lemma 1 yields sufficient conditions to construct a w1 -stable input layer. Our choice of
sµ,σ,ν is just a natural example that fulfils those requirements and, hence, Sθ,ν is just one possible
representative of a whole family of input layers.
With the result of Lemma 1 in mind, we turn to the specific case of S_{θ,ν} and analyze its stability
properties w.r.t. w_1^q. The following lemma is important in this context.
Lemma 2. sµ,σ,ν has absolutely bounded first-order partial derivatives w.r.t. x0 and x1 on R × R+ .
Proof. see Appendix B
Theorem 1. S_{θ,ν} is Lipschitz continuous with respect to w_1^q on D.

Proof. Lemma 2 immediately implies that s_{µ,σ,ν} from Eq. (3) is Lipschitz continuous w.r.t. ‖·‖q.
Consequently, s = s_{µ,σ,ν} ◦ ρ satisfies property (i) from Lemma 1; property (ii) from Lemma 1 is
satisfied by construction. Hence, S_{µ,σ,ν} is Lipschitz continuous w.r.t. w_1^q. Consequently, S_{θ,ν} is
Lipschitz in each coordinate and therefore Lipschitz continuous.
Interestingly, the stability result of Theorem 1 is comparable to the stability results in [1] or [27]
(which are also w.r.t. w_1^q and in the setting of diagrams with finitely-many points). However, contrary
to previous works, if we detach the input layer after network training, we obtain a mapping S_{θ,ν} of
persistence diagrams that is specifically tailored to the learning task on which the network was trained.
Figure 2: Height function filtration of a “clean” (left, green points) and a “noisy” (right, blue points) shape
along direction d = (0, −1)> . This example demonstrates the insensitivity of homology towards noise, as the
added noise only (1) slightly shifts the dominant points (upper left corner) and (2) produces additional points
close to the diagonal, which have little impact on the Wasserstein distance and the output of our layer.
5 Experiments
To demonstrate the versatility of the proposed approach, we present experiments with two totally
different types of data: (1) 2D shapes of objects, represented as binary images and (2) social network
graphs, given by their adjacency matrix. In both cases, the learning task is classification. In each
experiment we ensured a balanced group size (per label) and used a 90/10 random training/test
split; all reported results are averaged over five runs with fixed ν = 0.1. In practice, points in input
diagrams were thresholded at 0.01 for computational reasons. Additionally, we conducted a reference
experiment on all datasets using simple vectorization (see Sec. 5.3) of the persistence diagrams in
combination with a linear SVM.
Implementation. All experiments were implemented in PyTorch³, using DIPHA⁴ and Perseus [23].
Source code is publicly-available at https://github.com/c-hofer/nips2017.
5.1 Classification of 2D object shapes
We apply persistent homology combined with our proposed input layer to two different datasets of
binary 2D object shapes: (1) the Animal dataset, introduced in [3] which consists of 20 different
animal classes, 100 samples each; (2) the MPEG-7 dataset which consists of 70 classes of different
object/animal contours, 20 samples each (see [21] for more details).
Filtration. The requirements to use persistent homology on 2D shapes are twofold: First, we need
to assign a simplicial complex to each shape; second, we need to appropriately filtrate the complex.
While, in principle, we could analyze contour features, such as curvature, and choose a sublevel set
filtration based on that, such a strategy requires substantial preprocessing of the discrete data (e.g.,
smoothing). Instead, we choose to work with the raw pixel data and leverage the persistent homology
transform, introduced by Turner et al. [29]. The filtration in that case is based on sublevel sets of
the height function, computed from multiple directions (see Fig. 2). Practically, this means that we
directly construct a simplicial complex from the binary image. We set K0 as the set of all pixels
which are contained in the object. Then, a 1-simplex [p0 , p1 ] is in the 1-skeleton K1 iff p0 and p1
are 4-neighbors on the pixel grid. To filtrate the constructed complex, we denote by b the barycenter
of the object and by r the radius of its bounding circle around b. Finally, we define, for [p] ∈ K0
and d ∈ S¹, the filtration function by f([p]) = 1/r · ⟨p − b, d⟩. Function values are lifted to K1 by
taking the maximum, cf. Sec. 2. Let di be the 32 equidistantly distributed directions in S¹,
starting from (1, 0). For each shape, we get a vector of persistence diagrams (Di)_{i=1}^{32}, where Di is the
0-th diagram obtained by filtration along di. As most objects do not differ in homology groups of
higher dimensions (> 0), we did not use the corresponding persistence diagrams.
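The construction just described can be sketched as follows; `height_filtration` is our illustrative helper (pixels as (row, col) pairs, `d` a unit direction), representing the complex by its vertex and edge value maps:

```python
def height_filtration(pixels, d):
    """Height-function filtration of a binary shape (Sec. 5.1 sketch).

    pixels : set of (row, col) object pixels (the 0-simplices K0)
    d      : unit direction in S^1

    Returns vertex values f([p]) = <p - b, d> / r and edge values,
    lifted to K1 by taking the maximum over the two endpoints.
    """
    n = len(pixels)
    b = (sum(p[0] for p in pixels) / n, sum(p[1] for p in pixels) / n)   # barycenter
    r = max(((p[0] - b[0]) ** 2 + (p[1] - b[1]) ** 2) ** 0.5 for p in pixels)  # bounding radius
    f = {p: ((p[0] - b[0]) * d[0] + (p[1] - b[1]) * d[1]) / r for p in pixels}
    edges = {}
    for (i, j) in pixels:  # 1-simplices between 4-neighbors
        for q in ((i + 1, j), (i, j + 1)):
            if q in pixels:
                edges[((i, j), q)] = max(f[(i, j)], f[q])
    return f, edges
```

Normalizing by r makes the filtration values scale-invariant, so shapes of different sizes yield comparable diagrams.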
Network architecture. While the full network is listed in the supplementary material (Fig. 6), the
key architectural choices are: 32 independent input branches, i.e., one for each filtration direction.
Further, the i-th branch gets, as input, the vector of persistence diagrams from directions di−1 , di
and di+1 . This is a straightforward approach to capture dependencies among the filtration directions.
We use cross-entropy loss to train the network for 400 epochs, using stochastic gradient descent
(SGD) with mini-batches of size 128 and an initial learning rate of 0.1 (halved every 25-th epoch).
³ https://github.com/pytorch/pytorch
⁴ https://bitbucket.org/dipha/dipha
                        MPEG-7   Animal
 Skeleton paths ‡        86.7     67.9
 Class segment sets ‡    90.9     69.7
 ICS †                   96.6     78.4
 BCF †                   97.2     83.4
 Ours                    91.8     69.5
Figure 3: Left: some examples from the MPEG-7 (bottom) and Animal (top) datasets. Right: Classification
results, compared to the two best (†) and two worst (‡) results reported in [30].
Results. Fig. 3 shows a selection of 2D object shapes from both datasets, together with the obtained
classification results. We list the two best (†) and two worst (‡) results as reported in [30]. While,
on the one hand, using topological signatures is below the state-of-the-art, the proposed architecture
is still better than other approaches that are specifically tailored to the problem. Most notably, our
approach does not require any specific data preprocessing, whereas all other competitors listed in
Fig. 3 require, e.g., some sort of contour extraction. Furthermore, the proposed architecture readily
generalizes to 3D with the only difference that in this case di ∈ S2 . Fig. 4 (Right) shows an exemplary
visualization of the position of the learned structure elements for the Animal dataset.
5.2 Classification of social network graphs
In this experiment, we consider the problem of graph classification, where vertices are unlabeled
and edges are undirected. That is, a graph G is given by G = (V, E), where V denotes the set of
vertices and E denotes the set of edges. We evaluate our approach on the challenging problem of
social network classification, using the two largest benchmark datasets from [31], i.e., reddit-5k
(5 classes, 5k graphs) and reddit-12k (11 classes, ≈12k graphs). Each sample in these datasets
represents a discussion graph and the classes indicate subreddits (e.g., worldnews, video, etc.).
Filtration. The construction of a simplicial complex from G = (V, E) is straightforward: we set
K0 = {[v] ∈ V } and K1 = {[v0 , v1 ] : {v0 , v1 } ∈ E}. We choose a very simple filtration based on
the vertex degree, i.e., the number of incident edges to a vertex v ∈ V . Hence, for [v0 ] ∈ K0 we get
f ([v0 ]) = deg(v0 )/ maxv∈V deg(v) and again lift f to K1 by taking the maximum. Note that chain
groups are trivial for dimension > 1, hence, all features in dimension 1 are essential.
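This degree filtration amounts to a few lines; `degree_filtration` is our illustrative helper:

```python
def degree_filtration(V, E):
    """Graph filtration by normalized vertex degree (Sec. 5.2 sketch)."""
    deg = {v: 0 for v in V}
    for u, v in E:
        deg[u] += 1
        deg[v] += 1
    m = max(deg.values())
    f0 = {v: deg[v] / m for v in V}                # values on K0
    f1 = {e: max(f0[e[0]], f0[e[1]]) for e in E}   # lift to K1 by max
    return f0, f1
```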
Network architecture. Our network has four input branches: two for each dimension (0 and 1) of
the homological features, split into essential and non-essential ones, see Sec. 2. We train the network
for 500 epochs using SGD and cross-entropy loss with an initial learning rate of 0.1 (reddit_5k), or
0.4 (reddit_12k). The full network architecture is listed in the supplementary material (Fig. 7).
Results. Fig. 5 (right) compares our proposed strategy to state-of-the-art approaches from the
literature. In particular, we compare against (1) the graphlet kernel (GK) and deep graphlet kernel
(DGK) results from [31], (2) the Patchy-SAN (PSCN) results from [24] and (3) a recently reported
graph-feature + random forest approach (RF) from [4]. As we can see, using topological signatures
in our proposed setting considerably outperforms the current state-of-the-art on both datasets. This is
an interesting observation, as PSCN [24] for instance, also relies on node degrees and an extension of
the convolution operation to graphs. Further, the results reveal that including essential features is key
to these improvements.
5.3 Vectorization of persistence diagrams
Here, we briefly present a reference experiment we conducted following Bendich et al. [5]. The idea
is to directly use the persistence diagrams as features via vectorization. For each point (b, d) in a
persistence diagram D we calculate its persistence, i.e., d − b. We then sort the calculated persistences
by magnitude from high to low and take the first N values. Hence, we get, for each persistence
diagram, a vector of dimension N (if |D \ ∆| < N, we pad with zeros). We used this technique
on all four datasets. As can be seen from the results in Fig. 4 (left), averaged over 10 cross-validation
runs, vectorization performs poorly on MPEG-7 and Animal but can lead to competitive rates on
reddit-5k and reddit-12k. Nevertheless, the obtained performance is considerably inferior to our
proposed approach.
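The vectorization step amounts to sorting and padding (our sketch):

```python
def vectorize(diagram, N):
    """Top-N persistence vector (Sec. 5.3): persistences d - b, sorted
    descending, truncated to N entries and zero-padded if needed."""
    pers = sorted((d - b for b, d in diagram), reverse=True)
    return (pers + [0.0] * N)[:N]
```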
   N     MPEG-7   Animal   reddit-5k   reddit-12k
   5      81.8     48.8       37.1        24.2
  10      82.3     50.0       38.2        24.6
  20      79.7     46.2       39.7        27.9
  40      74.5     42.4       42.1        29.8
  80      68.2     39.3       43.8        31.5
 160      64.4     36.0       45.2        31.6
Ours      91.8     69.5       54.5        44.5
Figure 4: Left: Classification accuracies for a linear SVM trained on vectorized (in RN ) persistence diagrams
(see Sec. 5.3). Right: Exemplary visualization of the learned structure elements (in 0-th dimension) for the
Animal dataset and filtration direction d = (−1, 0)> . Centers of the learned elements are marked in blue.
[Figure 5, left: a graph G = (V, E) with vertex degrees as filtration values, together with the sublevel
sets f⁻¹((−∞, 2]), f⁻¹((−∞, 3]) and f⁻¹((−∞, 5]).]

                         reddit-5k   reddit-12k
GK [31]                    41.0        31.8
DGK [31]                   41.3        32.2
PSCN [24]                  49.1        41.3
RF [4]                     50.9        42.7
Ours (w/o essential)       49.1        38.5
Ours (w/ essential)        54.5        44.5
Figure 5: Left: Illustration of graph filtration by vertex degree, i.e., f ≡ deg (for different choices of ai, see
Sec. 2). Right: Classification results as reported in [31] for GK and DGK, Patchy-SAN (PSCN) as reported in
[24], and feature-based random-forest (RF) classification from [4].
Finally, we remark that in both experiments, tests with the kernel of [27] turned out to be computationally
impractical: (1) on shape data, due to the need to evaluate the kernel for all filtration directions,
and (2) on graphs, due to the large number of samples and the number of points in each diagram.
6 Discussion
We have presented, to the best of our knowledge, the first approach towards learning task-optimal
stable representations of topological signatures, in our case persistence diagrams. Our particular
realization of this idea, i.e., as an input layer to deep neural networks, not only enables us to learn with
topological signatures, but also to use them as additional (and potentially complementary) inputs to
existing deep architectures. From a theoretical point of view, we remark that the presented structure
elements are not restricted to exponential functions, so long as the conditions of Lemma 1 are met.
One drawback of the proposed approach, however, is the artificial bending of the persistence axis (see
Fig. 1) by a logarithmic transformation; in fact, other strategies might be possible and better suited
in certain situations. A detailed investigation of this issue is left for future work. From a practical
perspective, it is also worth pointing out that, in principle, the proposed layer could be used to handle
any kind of input that comes in the form of multisets (of Rn ), whereas previous works only allow
to handle sets of fixed size (see Sec. 1). In summary, we argue that our experiments show strong
evidence that topological features of data can be beneficial in many learning tasks, not necessarily to
replace existing inputs, but rather as a complementary source of discriminative information.
A Technical results

Lemma 3. Let α ∈ R⁺, β ∈ R, γ ∈ R⁺. We have

(i) $\lim_{x \to 0} \frac{\ln(x)}{x} \cdot e^{-\alpha(\ln(x)\gamma + \beta)^2} = 0$

(ii) $\lim_{x \to 0} \frac{1}{x} \cdot e^{-\alpha(\ln(x)\gamma + \beta)^2} = 0$ .
Proof. We omit the proof for brevity (see supplementary material for details), but remark that only
(i) needs to be shown as (ii) follows immediately.
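The limit in Lemma 3 (i) can also be probed numerically; the helper below is our own check, showing the Gaussian factor in ln(x) dominating the growth of ln(x)/x as x → 0:

```python
import math

def g(x, alpha=1.0, beta=0.0, gamma=1.0):
    # the function of Lemma 3 (i)
    return math.log(x) / x * math.exp(-alpha * (math.log(x) * gamma + beta) ** 2)
```

Evaluating |g| at x = 10⁻¹, 10⁻², 10⁻⁴, 10⁻⁸ yields a rapidly decreasing sequence, consistent with the claimed limit 0.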
B Proofs

Proof of Lemma 1. Let ϕ be a bijection between D and E which realizes w_1^q(D, E) and let D₀ =
D \ ∆, E₀ = E \ ∆. To show the result of Eq. (5), we consider the following decomposition:

$$D = \varphi^{-1}(E_0) \cup \varphi^{-1}(\Delta) = \underbrace{(\varphi^{-1}(E_0) \setminus \Delta)}_{A} \cup \underbrace{(\varphi^{-1}(E_0) \cap \Delta)}_{B} \cup \underbrace{(\varphi^{-1}(\Delta) \setminus \Delta)}_{C} \cup \underbrace{(\varphi^{-1}(\Delta) \cap \Delta)}_{D} \quad (6)$$

Except for the term D, all sets are finite. In fact, ϕ realizes the Wasserstein distance w_1^q, which implies
ϕ|_D = id. Therefore, s(x) = s(ϕ(x)) = 0 for x ∈ D since D ⊂ ∆. Consequently, we can ignore D
in the summation and it suffices to consider E = A ∪ B ∪ C. It follows that

$$\begin{aligned} \left|\sum_{x \in D} s(x) - \sum_{y \in E} s(y)\right| &= \left|\sum_{x \in D} s(x) - \sum_{x \in D} s(\varphi(x))\right| = \left|\sum_{x \in E} s(x) - s(\varphi(x))\right| \\ &\le \sum_{x \in E} |s(x) - s(\varphi(x))| \\ &\le K_s \cdot \sum_{x \in E} \|x - \varphi(x)\|_q = K_s \cdot \sum_{x \in D} \|x - \varphi(x)\|_q = K_s \cdot w_1^q(D, E) . \end{aligned}$$
Proof of Lemma 2. Since s_{µ,σ,ν} is defined differently for x1 ∈ [ν, ∞) and x1 ∈ (0, ν), we need to
distinguish these two cases. In the following, x0 ∈ R.

(1) x1 ∈ [ν, ∞): The partial derivative w.r.t. xi is given as

$$\frac{\partial}{\partial x_i} s_{\mu,\sigma,\nu}(x_0, x_1) = C \cdot \frac{\partial}{\partial x_i} e^{-\sigma_i^2 (x_i - \mu_i)^2}(x_0, x_1) = C \cdot e^{-\sigma_i^2 (x_i - \mu_i)^2} \cdot (-2\sigma_i^2)(x_i - \mu_i) , \quad (7)$$

where C is just the part of exp(·) which is not dependent on xi. For all cases, i.e., x0 → ∞, x0 →
−∞ and x1 → ∞, it holds that Eq. (7) → 0.

(2) x1 ∈ (0, ν): The partial derivative w.r.t. x0 is similar to Eq. (7) with the same asymptotic
behaviour for x0 → ∞ and x0 → −∞. However, for the partial derivative w.r.t. x1 we get

$$\begin{aligned} \frac{\partial}{\partial x_1} s_{\mu,\sigma,\nu}(x_0, x_1) &= C \cdot \frac{\partial}{\partial x_1} e^{-\sigma_1^2 \left(\ln\left(\frac{x_1}{\nu}\right)\nu + \nu - \mu_1\right)^2}(x_0, x_1) \\ &= C \cdot e^{(\cdots)} \cdot (-2\sigma_1^2) \cdot \left(\ln\left(\frac{x_1}{\nu}\right)\nu + \nu - \mu_1\right) \cdot \frac{\nu}{x_1} \\ &= \underbrace{C' \cdot e^{(\cdots)} \cdot \ln\left(\frac{x_1}{\nu}\right) \cdot \frac{1}{x_1}}_{(a)} + \underbrace{(\nu - \mu_1) \cdot e^{(\cdots)} \cdot \frac{1}{x_1}}_{(b)} \end{aligned} \quad (8)$$

As x1 → 0, we can invoke Lemma 4 (i) to handle (a) and Lemma 4 (ii) to handle (b); conclusively,
Eq. (8) → 0. As the partial derivatives w.r.t. xi are continuous and their limits are 0 on R, R⁺, resp.,
we conclude that they are absolutely bounded.
References
[1] H. Adams, T. Emerson, M. Kirby, R. Neville, C. Peterson, P. Shipman, S. Chepushtanova, E. Hanson,
F. Motta, and L. Ziegelmeier. Persistence images: A stable vector representation of persistent homology.
JMLR, 18(8):1–35, 2017.
[2] A. Adcock, E. Carlsson, and G. Carlsson. The ring of algebraic functions on persistence bar codes. CoRR,
2013. https://arxiv.org/abs/1304.0530.
[3] X. Bai, W. Liu, and Z. Tu. Integrating contour and skeleton for shape classification. In ICCV Workshops,
2009.
[4] I. Barnett, N. Malik, M.L. Kuijjer, P.J. Mucha, and J.-P. Onnela. Feature-based classification of networks.
CoRR, 2016. https://arxiv.org/abs/1610.05868.
[5] P. Bendich, J.S. Marron, E. Miller, A. Pieloch, and S. Skwerer. Persistent homology analysis of brain artery
trees. Ann. Appl. Stat, 10(2), 2016.
[6] P. Bubenik. Statistical topological data analysis using persistence landscapes. JMLR, 16(1):77–102, 2015.
[7] G. Carlsson. Topology and data. Bull. Amer. Math. Soc., 46:255–308, 2009.
[8] G. Carlsson, T. Ishkhanov, V. de Silva, and A. Zomorodian. On the local behavior of spaces of natural
images. IJCV, 76:1–12, 2008.
[9] F. Chazal, D. Cohen-Steiner, L. J. Guibas, F. Mémoli, and S. Y. Oudot. Gromov-Hausdorff stable signatures
for shapes using persistence. Comput. Graph. Forum, 28(5):1393–1403, 2009.
[10] F. Chazal, B.T. Fasy, F. Lecci, A. Rinaldo, and L. Wassermann. Stochastic convergence of persistence
landscapes and silhouettes. JoCG, 6(2):140–161, 2014.
[11] F. Chazal, L.J. Guibas, S.Y. Oudot, and P. Skraba. Persistence-based clustering in Riemannian manifolds.
J. ACM, 60(6):41–79, 2013.
[12] D. Cohen-Steiner, H. Edelsbrunner, and J. Harer. Stability of persistence diagrams. Discrete Comput.
Geom., 37(1):103–120, 2007.
[13] D. Cohen-Steiner, H. Edelsbrunner, J. Harer, and Y. Mileyko. Lipschitz functions have Lp -stable persistence.
Found. Comput. Math., 10(2):127–139, 2010.
[14] H. Edelsbrunner and J. L. Harer. Computational Topology : An Introduction. American Mathematical
Society, 2010.
[15] H. Edelsbrunner, D. Letscher, and A. Zomorodian. Topological persistence and simplification. Discrete
Comput. Geom., 28(4):511–533, 2002.
[16] A. Hatcher. Algebraic Topology. Cambridge University Press, Cambridge, 2002.
[17] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
[18] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural
networks. In NIPS, 2012.
[19] G. Kusano, K. Fukumizu, and Y. Hiraoka. Persistence weighted Gaussian kernel for topological data
analysis. In ICML, 2016.
[20] R. Kwitt, S. Huber, M. Niethammer, W. Lin, and U. Bauer. Statistical topological data analysis - a kernel
perspective. In NIPS, 2015.
[21] L. Latecki, R. Lakamper, and T. Eckhardt. Shape descriptors for non-rigid shapes with a single closed
contour. In CVPR, 2000.
[22] C. Li, M. Ovsjanikov, and F. Chazal. Persistence-based structural recognition. In CVPR, 2014.
[23] K. Mischaikow and V. Nanda. Morse theory for filtrations and efficient computation of persistent homology.
Discrete Comput. Geom., 50(2):330–353, 2013.
[24] M. Niepert, M. Ahmed, and K. Kutzkov. Learning convolutional neural networks for graphs. In ICML,
2016.
[25] C.R. Qi, H. Su, K. Mo, and L.J. Guibas. PointNet: Deep learning on point sets for 3D classification and
segmentation. In CVPR, 2017.
[26] S. Ravanbakhsh, S. Schneider, and B. Póczos. Deep learning with sets and point clouds. In ICLR, 2017.
[27] J. Reininghaus, U. Bauer, S. Huber, and R. Kwitt. A stable multi-scale kernel for topological machine
learning. In CVPR, 2015.
[28] G. Singh, F. Memoli, T. Ishkhanov, G. Sapiro, G. Carlsson, and D.L. Ringach. Topological analysis of
population activity in visual cortex. J. Vis., 8(8), 2008.
[29] K. Turner, S. Mukherjee, and D. M. Boyer. Persistent homology transform for modeling shapes and
surfaces. Inf. Inference, 3(4):310–344, 2014.
[30] X. Wang, B. Feng, X. Bai, W. Liu, and L.J. Latecki. Bag of contour fragments for robust shape classification.
Pattern Recognit., 47(6):2116–2125, 2014.
[31] P. Yanardag and S.V.N. Vishwanathan. Deep graph kernels. In KDD, 2015.
This supplementary material contains technical details that were left-out in the original submission
for brevity. When necessary, we refer to the submitted manuscript.
C Additional proofs

In the manuscript, we omitted the proof for the following technical lemma. For completeness, the
lemma is repeated and its proof is given below.

Lemma 4. Let α ∈ R⁺, β ∈ R and γ ∈ R⁺. We have

(i) $\lim_{x \to 0} \frac{\ln(x)}{x} \cdot e^{-\alpha(\ln(x)\gamma + \beta)^2} = 0$

(ii) $\lim_{x \to 0} \frac{1}{x} \cdot e^{-\alpha(\ln(x)\gamma + \beta)^2} = 0$ .

Proof. We only need to prove the first statement, as the second follows immediately. Hence, consider

$$\begin{aligned} \lim_{x \to 0} \frac{\ln(x)}{x} \cdot e^{-\alpha(\ln(x)\gamma+\beta)^2} &= \lim_{x \to 0} \ln(x) \cdot e^{-\ln(x)} \cdot e^{-\alpha(\ln(x)\gamma+\beta)^2} \\ &= \lim_{x \to 0} \ln(x) \cdot e^{-\alpha(\ln(x)\gamma+\beta)^2 - \ln(x)} \\ &= \lim_{x \to 0} \ln(x) \cdot \left( e^{\alpha(\ln(x)\gamma+\beta)^2 + \ln(x)} \right)^{-1} \\ &\overset{(*)}{=} \lim_{x \to 0} \frac{\partial}{\partial x} \ln(x) \cdot \left( \frac{\partial}{\partial x} e^{\alpha(\ln(x)\gamma+\beta)^2 + \ln(x)} \right)^{-1} \\ &= \lim_{x \to 0} \frac{1}{x} \cdot \left( e^{\alpha(\ln(x)\gamma+\beta)^2 + \ln(x)} \cdot \left( 2\alpha(\ln(x)\gamma+\beta)\frac{\gamma}{x} + \frac{1}{x} \right) \right)^{-1} \\ &= \lim_{x \to 0} \left( e^{\alpha(\ln(x)\gamma+\beta)^2 + \ln(x)} \cdot \big( 2\alpha(\ln(x)\gamma+\beta)\gamma + 1 \big) \right)^{-1} \\ &= 0 , \end{aligned}$$

where we use de l'Hôpital's rule in (∗).
D Network architectures

2D object shape classification. Fig. 6 illustrates the network architecture used for 2D object shape
classification in [Manuscript, Sec. 5.1]. Note that the persistence diagrams from three consecutive
filtration directions di share one input layer. As we use 32 directions, we have 32 input branches.
The convolution operation operates with kernels of size 1 × 1 × 3 and a stride of 1. The max-pooling
operates along the filter dimension. For better readability, we have added the output size of certain
layers. We train the network with stochastic gradient descent (SGD) and a mini-batch size of
128 for 300 epochs. Every 20th epoch, the learning rate (initially set to 0.1) is halved.
Graph classification. Fig. 7 illustrates the network architecture used for graph classification in
Sec. 5.2. In detail, we have 3 input branches: first, we split 0-dimensional features into essential
and non-essential ones; second, since there are only essential features in dimension 1 (see Sec. 5.2,
Filtration) we do not need a branch for non-essential features. We train the network using SGD with
mini-batches of size 128 for 300 epochs. The initial learning rate is set to 0.1 (reddit_5k) and 0.4
(reddit_12k), resp., and halved every 20th epoch.
D.1 Technical handling of essential features

In the case of 2D object shapes, the death times of essential features are mapped to the max. filtration
value and kept in the original persistence diagrams. In fact, for Animal and MPEG-7, there is always
only one connected component and consequently only one essential feature in dimension 0 (i.e., it
does not make sense to handle this one point in a separate input branch).
In case of social network graphs, essential features are mapped to the real line (using their birth time)
and handled in separate input branches (see Fig. 7) with 1D structure elements. This is in contrast to
the 2D object shape experiments, as we might have many essential features (in dimensions 0 and 1)
that require handling in separate input branches.
[Figure 6, described: for each of the 32 filtration directions, the persistence diagrams from directions
d_{i−1}, d_i and d_{i+1} form the input to one branch (input layer, N=75, output 3 × 75). Each branch:
convolution (filters 3→16, kernel 1, stride 1; output 16 × 75) → convolution (filters 16→4, kernel 1,
stride 1; output 4 × 75) → max-pooling (output 1 × 75) → linear 75→25 → batch normalization →
linear 25→25 → ReLU. The 32 branch outputs are concatenated, followed by linear 32·25→100 →
batch normalization → linear 100→70 (in case of Animal, 100→20) → dropout → cross-entropy loss.]
Figure 6: 2D object shape classification network architecture.
[Figure 7, described: input data is obtained by filtrating the graph by vertex degree. Branch 1 (0-dim.
homology, non-essential features): 2D input layer (N=150) → linear 150→75 → batch normalization →
linear 75→75 → ReLU. Branch 2 (0-dim. homology, essential features): 1D input layer (N=50) →
linear 50→25 → batch normalization → linear 25→25 → ReLU. Branch 3 (1-dim. homology, essential
features): 1D input layer (N=50) → linear 50→25 → batch normalization → linear 25→25 → ReLU.
The branch outputs (75 + 25 + 25 = 125) are concatenated, followed by linear 125→200 → batch
normalization + ReLU → linear 200→100 → batch normalization → ReLU → linear 100→50 → batch
normalization + ReLU → linear 100→5 → batch normalization → dropout → cross-entropy loss.]
Figure 7: Graph classification network architecture.
FoodNet: Recognizing Foods Using Ensemble of
Deep Networks
arXiv:1709.09429v1 [] 27 Sep 2017
Paritosh Pandey∗, Akella Deepthi∗ , Bappaditya Mandal and N. B. Puhan
Abstract—In this work we propose a methodology for an
automatic food classification system which recognizes the contents
of the meal from the images of the food. We developed a multilayered deep convolutional neural network (CNN) architecture
that takes advantage of the features from other deep networks
and improves the efficiency. Numerous classical handcrafted
features and approaches are explored, among which CNNs are
chosen as the best performing features. Networks are trained and
fine-tuned using preprocessed images and the filter outputs are
fused to achieve higher accuracy. Experimental results on the
largest real-world food recognition database ETH Food-101 and
newly contributed Indian food image database demonstrate the
effectiveness of the proposed methodology as compared to many
other benchmark deep learned CNN frameworks.
Index Terms—Deep CNN, Food Recognition, Ensemble of
Networks, Indian Food Database.
I. INTRODUCTION AND CURRENT APPROACHES

There has been a clear-cut increase in the health consciousness of the global urban community in the previous
few decades. Given the rising number of cases of health
problems attributed to obesity and diabetes reported every
year, people (including elderly, blind or semi-blind or dementia
patients) are forced to record, recognize and estimate calories
in their meals. Also, in the emerging social networking photo
sharing, food constitutes a major portion of these images.
Consequently, there is a rise in the market potential for such
fitness apps products which cater to the demand of logging
and tracking the amount of calories consumed, such as [1],
[2]. Food items generally tend to show intra-class variation
depending upon the method of preparation, which in turn is
highly dependent on the local flavors as well as the ingredients used. This causes large variations in terms of shape,
size, texture, and color. Food items also do not exhibit any
distinctive spatial layout. Variable lighting conditions and the
point of view also lead to intra-class variations, thus making
the classification problem even more difficult [3], [4], [5].
Hence food recognition is a challenging task, one that needs
addressing.
In the existing literature, numerous methodologies assume
that the texture, color and shape of food items are well
defined [6], [7], [8], [9]. This may not be true because of
the local variations in the method of food preparation, as
well as the ingredients used. Feature descriptors like histogram
P. Pandey, A. Deepthi and N. B. Puhan are with the School of Electrical
Science, Indian Institute of Technology (IIT), Bhubaneswar, Odisha 751013,
India. E-mail: {pp20, da10, nbpuhan}@iitbbs.ac.in
B. Mandal is with the Kingston University, London, Surrey KT1 2EE,
United Kingdom. Email: [email protected]
∗ Represents equal contribution from the authors.
of gradient, color correlogram, bag of scale-invariant feature
transform, local binary pattern, spatial pyramidal pooling,
speeded up robust features (SURF), etc, have been applied
with some success on small datasets [9]. Hoashi et al. in
[10], and Joutou et al. in [11] propose multiple kernel learning
methods to combine various feature descriptors. The features
extracted have generally been used to train an SVM [12],
with a combination of these features being used to boost the
accuracy.
A rough estimation of the region in which targeted food
item is present would help to raise the accuracy for cases
with non-uniform background, presence of other objects and
multiple food items [13]. Two such approaches use standard
segmentation and object detection methods [14] or asking the
user to input a bounding box providing this information [15].
Kawano et al. [15], [16] proposed a semi-automated approach
for bounding box formation around the image and developed
a real-time recognition system. It is tedious, unmanageable
and does not cater to the need of full automation. Automatic
recognition of dishes would not only help users effortlessly
organize their extensive photo collections but would also help
online photo repositories make their content more accessible.
Lukas et al. in [17] have used a random forest to find
discriminative regions in an image, and their method has been shown to underperform the
convolutional neural network (CNN) feature based
method [18].
In order to improve the accuracy, Bettadapura et al. in
[19] used geotagging to identify the restaurant and search
for matching food item in its menu. Matsuda et al. in [20]
employed co-occurrence statistics to classify multiple food
items in an image by eliminating improbable combinations.
There has been certain progress in using ingredient level
features [21], [22], [23] to identify the food item. A variant of
this method is the usage of pairwise statistics of local features
[24]. In the recent years CNN based classification has shown
promise producing excellent results even on large and diverse
databases with non-uniform background. Notably, deep CNN
based transferred learning using fine-tuned networks is used
in [25], [26], [27], [28], [29] and cascaded CNN networks
are used in [30]. In this work, we extend the CNN based
approaches towards combining multiple networks and extract
robust food discriminative features that would be resilient
against large variations in food shape, size, color and texture.
We have prepared a new Indian food image database for this purpose, the largest to our knowledge, and experimented on two large databases, demonstrating the effectiveness of the proposed framework. We will make all the developed models and the Indian food database available online to the public [31].
Section II describes our proposed methodology and Section III
provides the experimental results before drawing conclusions
in Section IV.
II. P ROPOSED M ETHOD
Our proposed framework is based on recent emerging very
large deep CNNs. We selected CNNs because of their strong ability to learn representations from visual data and their record of achieving ever higher accuracies on challenges involving large scale image data [32]. We have
performed extensive experiments using different handcrafted features (such as bag of words, SURF, etc.) and CNN feature descriptors. Experimental results show that CNNs outperform all the other methods by a huge margin, similar to the results reported in [9], as shown in Table I. It can be seen that the CNN based methods (SELC and CNN) perform much better than the others.
TABLE I
ACCURACY (%) OF HANDCRAFTED & CNN FEATURES ON ETH FOOD-101 DATABASE. THE METHODS ARE SURF + BAG OF WORDS 1024 (BW1024), SURF + INDEPENDENT FISCHER VECTOR 64 (SURF+IFV64), BAG OF WORDS (BW), INDEPENDENT FISCHER VECTORS (IFV), MID-LEVEL DISCRIMINATIVE SUPERPIXEL (MDS), RANDOM FOREST DISCRIMINATIVE COMPONENT (RFDC), SUPERVISED EXTREME LEARNING COMMITTEE (SELC) AND ALEXNET TRAINED FROM SCRATCH (CNN).

Methods:  BW1024  SURF+IFV64  BW     IFV    MDS    RFDC   SELC   CNN
Top-1:    33.47   44.79       28.51  38.88  42.63  50.76  55.89  56.40
A. Proposed Ensemble Network Architecture
We choose AlexNet architecture by Krizhevsky et al. [18]
as our baseline because it requires significantly less computational time than any other state-of-the-art CNN classifier. GoogLeNet architecture
by Szegedy et al. [33] uses the sparsity of the data to create
dense representations that give information about the image
with finer details. It yields a network that is deep enough to increase accuracy, yet has significantly fewer parameters to train. This network is an approximation of the sparse structure of a convolution network by dense components. The building blocks, called Inception modules, are concatenations of filter banks with mask sizes
of 1 × 1, 3 × 3 and 5 × 5. If the network is too deep, the
inception modules lead to an unprecedented rise in the cost of
computation. Therefore, 1 × 1 convolutions are used to embed
the data output from the previous layers.
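To make the computational benefit of these 1 × 1 embeddings concrete, the following sketch (illustrative channel counts only, not taken from the paper) counts multiply-accumulate operations for a 3 × 3 path with and without a 1 × 1 reduction:

```python
def conv_macs(in_ch, out_ch, k, h, w):
    """Multiply-accumulates of a k x k convolution over an h x w feature map."""
    return in_ch * out_ch * k * k * h * w

# A 3x3 convolution over a 28x28 map, 192 input channels, 128 output channels.
direct = conv_macs(192, 128, 3, 28, 28)

# Same path with a 1x1 reduction to 96 channels first (illustrative numbers).
reduced = conv_macs(192, 96, 1, 28, 28) + conv_macs(96, 128, 3, 28, 28)

print(direct, reduced)  # the reduced path needs far fewer operations
assert reduced < direct
```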
ResNet architecture by He et al. [34] addresses the problem
of degradation of learning in networks that are very deep. In
essence a ResNet is learning on residual functions of the input
rather than unreferenced functions. The idea is to reformulate
the learning problem into one that is easier for the network to
learn. Here the original problem of learning a function H(x)
gets transformed into learning non-linearly by various layers
fitting the functional form H(x) = Γ(x) + x, which is easier
to learn, where the layers have already learned Γ(x) and the
original input is x. These CNN networks are revolutionary
in the sense that they were at the top of the leader board
of ImageNet classification at one time or another [32], with ResNet being the network with maximum accuracy at the time of writing this paper.

Fig. 1. Our proposed CNN based ensemble network architecture.

The main idea behind employing these
networks is to compare the increment in accuracies with the
depth of the network and the number of parameters involved in
training. Our idea is to create an ensemble of these classifiers
using another CNN on the lines of a Siamese network [35]
and other deep network combinations [36].
In a Siamese network [35], two or more identical subnetworks are contained within a larger network. These subnetworks have the same configuration and weights. It has been
used to find comparisons or relationships between the two
input objects or patches. In our architecture, we use this idea
to develop a three layered structure to combine the feature
outputs of three different subsections (or subnetworks) as
shown in Fig. 1. We hypothesize that these subnetworks with
proper fine-tuning would individually contribute to extract
better discriminative features from the food images. However,
the parameters and the subnetwork architectures are different, and the task is not comparison (as in the case of a Siamese network [35]) but classification of food images. Our proposition is that the features, once combined with appropriate weights, give better classification accuracies.
Let I(w, h, c) represent a pre-processed input image of size w × h pixels to each of the three fine-tuned networks, where c is the number of channels of the image. Color images are used in our case. We denote by C(m, n, q) the convolutional layer, where m is the side length of the receptive field, n is the stride and q is the number of filter banks. Pooling layer is
denoted by P (s, r), where r is the side length of the pooling
receptive field and s is the number of strides used in our
CNN model. In our ensemble net we did not use pooling. But
in our fine-tuned networks pooling is employed with variable
parameters. GoogLeNet for example uses overlapping pooling
in the inception module. All convolution layers are followed
by ReLU layers (see the text in Sec II-B) considered as an inbuilt activation. L represents the local response normalization
layer. Fully connected layer is denoted by F (e), where e is
the number of neurons. Hence, the AlexNet CNN model after
fine-tuning is represented as:
ΦA ≡ I(227, 227, 3) −→ C(11, 4, 96) −→ L −→ P(2, 3) −→ C(5, 1, 256) −→ L −→ P(2, 3) −→ C(3, 1, 384) −→ C(3, 1, 384) −→ C(3, 1, 256) −→ P(2, 3) −→ F(4096) −→ F(4096) −→ F(e).   (1)
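As a sanity check of this notation, the spatial sizes along ΦA can be traced with the usual convolution arithmetic (a sketch; the paddings are assumed from the standard AlexNet configuration, since they are not part of the C(m, n, q) notation):

```python
def conv_out(size, k, stride, pad=0):
    """Spatial output side of a convolution or pooling layer."""
    return (size - k + 2 * pad) // stride + 1

s = 227                       # input side, I(227, 227, 3)
s = conv_out(s, 11, 4)        # C(11, 4, 96)  -> 55
s = conv_out(s, 3, 2)         # P(2, 3)       -> 27
s = conv_out(s, 5, 1, pad=2)  # C(5, 1, 256)  -> 27
s = conv_out(s, 3, 2)         # P(2, 3)       -> 13
for _ in range(3):            # C(3,1,384), C(3,1,384), C(3,1,256)
    s = conv_out(s, 3, 1, pad=1)
s = conv_out(s, 3, 2)         # P(2, 3)       -> 6
assert s == 6 and s * s * 256 == 9216  # flattened size feeding F(4096)
```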
AlexNet is trained in a parallel fashion, referred to as a depth
of 2. Details of the architecture can be found in [18]. For
GoogLeNet we need to define the inception module as:
D(c1, cr3, c3, cr5, c5, crM ), where c1, c3 and c5 represent
number of filter of size 1 × 1, 3 × 3 and 5 × 5, respectively. cr3
and cr5 represent number of 1 × 1 filters used in the reduction
layer prior to 3 × 3 and 5 × 5 filters, and crM represents the
number of 1 × 1 filters used as reduction after the built in max
pool layer. Hence GoogLeNet is fine-tuned as:
ΦG ≡ I(224, 224, 3) −→ C(7, 2, 64) −→ P(2, 3) −→ L −→ C(1, 1, 64) −→ C(3, 1, 192) −→ L −→ P(2, 3) −→ D(64, 96, 128, 16, 32, 32) −→ D(128, 128, 192, 32, 96, 64) −→ P(2, 3) −→ D(192, 96, 208, 16, 48, 64) −→ D(160, 112, 224, 24, 64, 64) −→ D(128, 128, 256, 24, 64, 64) −→ D(112, 144, 288, 32, 64, 64) −→ D(256, 160, 320, 32, 128, 128) −→ P(2, 3) −→ D(256, 160, 320, 32, 128, 128) −→ D(384, 192, 384, 48, 128, 128) −→ P∗(1, 7) −→ F(e),   (2)

where P∗ refers to average pooling rather than the max pooling used everywhere else. For fine-tuned ResNet, each repetitive residual
unit is presented inside as R and it is defined as:
ΦR ≡ I(224, 224, 3) −→ C(7, 2, 64) −→ P(2, 3) −→ 3 × R(C(1, 1, 64) −→ C(3, 1, 64) −→ C(1, 1, 256)) −→ R(C(1, 2, 128) −→ C(3, 2, 128) −→ C(1, 2, 512)) −→ 3 × R(C(1, 1, 128) −→ C(3, 1, 128) −→ C(1, 1, 512)) −→ R(C(1, 2, 256) −→ C(3, 2, 256) −→ C(1, 2, 1024)) −→ 5 × R(C(1, 1, 256) −→ C(3, 1, 256) −→ C(1, 1, 1024)) −→ R(C(1, 2, 512) −→ C(3, 2, 512) −→ C(1, 2, 2048)) −→ 2 × R(C(1, 1, 512) −→ C(3, 1, 512) −→ C(1, 1, 2048)) −→ P∗(1, 7) −→ F(e).   (3)
Batch norm is used after every convolution layer in ResNet.
The summations at the end of each residual unit are followed
by a ReLU unit. For all cases, the length of F (e) depends
on the number of categories to classify. In our case, e is the
number of classes. Let Fi denote the features from each of the
fine-tuned deep CNNs given by (1)-(3), where i ∈ {A, G, R}.
Let the concatenated features be represented by Ω(O, c), where O denotes the output features from all networks, given by:

O = concatenate(wi Fi) ∀ i,   (4)

where wi is the weight given to the features from each of the networks, subject to the constraint Σi wi = 1. We define
the developed ensemble net as the following:
ΦE ≡ Ω(e · η, c) −→ ReLU −→ F(e) −→ SoftMax,   (5)

where η is the number of fine-tuned networks. The SoftMax function, or normalized exponential function, is defined as:

S(F)j = exp(Fj) / Σ_{k=1}^{e} exp(Fk), for j = 1, 2, . . . , e,   (6)

where exp is the exponential. The final class prediction D ∈ {1, 2, . . . , e} is obtained by finding the maximum of the values of S(F)j:

D = arg max_j S(F)j, for j = 1, 2, . . . , e.   (7)
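The fusion and decision steps of Eqs. (4)-(7) can be sketched as follows (toy scores; for brevity the weighted features are fused by summation instead of concatenation followed by F(e), and the softmax is shifted by its maximum for numerical stability):

```python
import math

def softmax(scores):
    """Normalized exponential of Eq. (6); shifted for numerical stability."""
    m = max(scores)
    exps = [math.exp(v - m) for v in scores]
    total = sum(exps)
    return [v / total for v in exps]

# Per-network class scores F_i (toy values) and weights w_i with sum(w) == 1.
features = {"A": [1.0, 2.0, 0.5], "G": [0.8, 2.5, 0.1], "R": [1.2, 1.9, 0.3]}
weights = {"A": 0.3, "G": 0.4, "R": 0.3}
assert abs(sum(weights.values()) - 1.0) < 1e-12

# Weighted features as in Eq. (4), here fused by summation over the networks.
fused = [sum(weights[i] * features[i][j] for i in features) for j in range(3)]
probs = softmax(fused)
prediction = max(range(len(probs)), key=probs.__getitem__)  # Eq. (7)
assert abs(sum(probs) - 1.0) < 1e-9
```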
B. Network Details
The ensemble net we designed consists of three layers as
shown in Fig. 1. Preprocessed food images are used to fine-tune all three CNN networks: AlexNet, GoogLeNet and ResNet. The first new layer concatenates the features obtained from these networks and passes the result through a rectified linear unit (ReLU) non-linear activation. The outputs
are then passed to a fully connected (fc) layer that convolves
the outputs to the desired length of the number of classes
present. This is followed by a softmax layer which computes
the scores obtained by each class for the input image.
The pre-trained models are used to extract features and train
a linear kernel support vector machine (SVM). The feature
outputs of the fully connected layers and max-pool layers of
AlexNet and GoogLeNet are chosen as features for training
and testing the classifiers. For feature extraction, the images
are resized and normalized as per the requirement of the
networks. For AlexNet we used the last fully connected layer
to extract features (fc7) and for GoogLeNet we used last max
pool layer (cls3 pool). On the ETH Food 101 database, the
top-1 accuracy obtained remained in the range of 39.6% for
AlexNet to 44.06% for GoogLeNet, with a feature size varying
from a minimum of 1000 features per image to 4096 features
per image. The features extracted from the last layer have length 1000, those from the penultimate layer of AlexNet have length 4096, and those from GoogLeNet have length 1024. All
the three networks are fine-tuned using the ETH Food-101
database. The last layer of filters is removed from the network
and replaced with an equivalent filter giving an output of the
size 1 × 1 × 101, i.e., a single value for 101 channels. These
numbers are interpreted as scores for each of the food class
in the dataset. Consequently, we see a decrease in the feature
size from 1 × 1000 for each image to 1 × 101 for each image.
AlexNet is trained for a total of 16 epochs.
We choose the MatConvNet [37] implementation of
GoogLeNet with maximum depth and maximum number of
blocks. The implementation consists of 100 layers and 152
blocks, with 9 Inception modules (very deep!). To train
GoogLeNet, the deepest softmax layer is chosen to calculate
objective while the other two are removed. The training ran
for a total of 20 epochs. ResNet’s smallest MatConvNet model
with 50 layers and 175 blocks is used. The capacity to use any
deeper model is limited by the capacity of our hardware. The
batch size is reduced to 32 images for the same reason. ResNet
is trained with the data for 20 epochs. The accuracy obtained
increased with the depth of the network. The ensemble net
is trained with normalized features/outputs of the above three
networks. The weights for each network's features are decided parametrically by running the experiments multiple times. A total of
30 epochs are performed. A similar approach is followed while
fine-tuning the network for Indian dataset. As the number of
images is not very high, jitters are introduced in the network
to make sure the network remains robust to changes. Same
depth and parameters are used for the networks. The output
feature has a length of 1 × 1 × 50 implying a score for each
of the 50 classes.
III. E XPERIMENTAL S ETUP AND R ESULTS
The experiments are performed on a high end server with
128GB of RAM equipped with an NVIDIA Quadro K4200 with
4GB of memory and 1344 CUDA cores. We performed the
experiments on MATLAB 14a using the MatConvNet library
offered by vlFeat [38]. Caffe’s pre-trained network models
imported in MatConvNet are used. We perform experiments
on two databases: the ETH Food-101 Database and our own
newly contributed Indian Food Database.
A. Results on ETH Food-101 Database
ETH Food-101 [17] is the largest real-world food recognition database consisting of 1000 images per food class
Fig. 2. Top row: 10 sample Indian food images. Bottom two rows: one of the food samples (1 class) variations (20 images).

Fig. 3. Rank vs Accuracy plots using various CNN frameworks, (a) for ETH Food 101 Database and (b) for Indian Food Database. (The plots compare AlexNet, GoogLeNet, ResNet and the Ensemble Net over ranks 1-10 on an accuracy (%) axis.)
picked randomly from foodspotting.com, comprising 101 different classes of food. So there are 101,000 food images in total; sample images can be seen in [17]. The top 101
most popular and consistently named dishes are chosen and
randomly sampled 750 training images per class are extracted.
Additionally, 250 test images are collected for each class, and
are manually cleaned. Purposefully, the training images are not
cleaned, and thus contain some amount of noise. This comes mostly in the form of intense colors and occasionally wrong labels, which increases the robustness of the data. All images are
rescaled to have a maximum side length of 512 pixels. In
all our experiments we follow the same training and testing
protocols as that in [17], [9].
All the real-world RGB food images are converted to HSV format and histogram equalization is applied only to the intensity channel. The result is then converted back to RGB format. This is done to ensure that the color characteristics of the image do not change because of the operation and to alleviate any bias that could have been present in the data due to intensity/illumination variations.
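This preprocessing step can be sketched in plain Python on the flattened intensity (V) channel (a minimal sketch of standard histogram equalization, not the exact implementation used in the paper):

```python
def equalize(intensity, levels=256):
    """Histogram-equalize a flat list of 8-bit intensity values (V channel)."""
    hist = [0] * levels
    for v in intensity:
        hist[v] += 1
    cdf, total = [], 0
    for count in hist:          # cumulative distribution of intensities
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(intensity)
    scale = (levels - 1) / max(n - cdf_min, 1)
    return [round((cdf[v] - cdf_min) * scale) for v in intensity]

# A dark, low-contrast patch gets stretched across the full range.
patch = [50, 50, 52, 52, 53, 60, 60, 61]
out = equalize(patch)
assert min(out) == 0 and max(out) == 255
```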
TABLE II
ACCURACY (%) FOR ETH FOOD-101 AND COMPARISON WITH OTHER METHODS AFTER FINE-TUNING.

Network/Features      Top-1   Top-5   Top-10
AlexNet               42.42   69.46   80.26
GoogLeNet             53.96   80.11   88.04
Lukas et al. [17]     50.76   -       -
Kawano et al. [15]    53.50   81.60   89.70
Martinel et al. [9]   55.89   80.25   89.10
ResNet                67.59   88.76   93.79
Ensemble Net          72.12   91.61   95.95

TABLE III
ACCURACY (%) FOR INDIAN FOOD DATABASE AND COMPARISON WITH OTHER METHODS AFTER FINE-TUNING.

Network/Features   Top-1   Top-5   Top-10
AlexNet            60.40   90.50   96.20
GoogLeNet          70.70   93.40   97.60
ResNet             43.90   80.60   91.50
Ensemble Net       73.50   94.40   97.60
Table II shows the Top-1, Top-5 and Top-10 accuracies
using numerous current state-of-the-art methodologies on this
database. We tried to feed outputs from the three networks into
the SVM classifier but the performance was not good. We
have noted only the highest performers, many more results
can be found in [9]. It is evident that with fine-tuning the
network performance has increased to a large extent. Fig. 3
(a) shows the rank versus accuracy plot up to rank 10, where the rank r : r ∈ {1, 2, . . . , 10} gives the accuracy of retrieving at least one correct image among the top r retrieved images. Such plots show the overall performance of the system for different numbers of retrieved images. From Table
II and Fig. 3 (a), it is evident that our proposed ensemble net
has outperformed consistently all the current state-of-the-art
methodologies on this largest real-world food database.
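The rank-r accuracy used in these plots can be computed as follows (a sketch with toy scores):

```python
def rank_r_accuracy(score_lists, true_labels, r):
    """Fraction of samples whose true class is among the top-r scored classes."""
    hits = 0
    for scores, label in zip(score_lists, true_labels):
        top_r = sorted(range(len(scores)), key=scores.__getitem__,
                       reverse=True)[:r]
        hits += label in top_r
    return hits / len(true_labels)

# Toy scores for 3 test images over 4 classes.
scores = [[0.1, 0.7, 0.1, 0.1], [0.4, 0.3, 0.2, 0.1], [0.2, 0.2, 0.5, 0.1]]
labels = [1, 1, 2]
assert rank_r_accuracy(scores, labels, 1) == 2 / 3   # rank-1 misses sample 2
assert rank_r_accuracy(scores, labels, 2) == 1.0     # rank-2 recovers it
```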
B. Results on Indian Food Database
One of the contributions of this paper is the setting up of
an Indian food database, the first of its kind. It consists of 50
food classes having 100 images each. Some sample images
are shown in Fig. 2. The classes are selected keeping in mind
the varied nature of Indian cuisine. They differ in terms of
color, texture, shape and size as the Indian food lacks any
kind of generalized layout. We have ensured a healthy mix
of dishes from all parts of the country giving this database a
true representative nature. Because of the varied nature of the
classes present in the database, it offers the best option to test
a protocol and classifier for its robustness and accuracy. We
collected images from online sources like foodspotting.com,
Google search, as well as our own captured images using handheld mobile devices. Extreme care was taken to remove any
kind of watermarking from the images. Images with textual
patterns are cropped, most of the noisy images discarded and
a clean dataset is prepared. We also ensured that all the images
are of a minimum size. No upper bound on image size has
been set. Similar to the ETH Food-101 database protocol, we
have randomly selected 80 food images per class for each of the 50 food classes as the training set, with the remaining images forming the test set.
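A minimal sketch of this per-class random split (hypothetical file names; any deterministic seed works):

```python
import random

def split_per_class(images_by_class, n_train, seed=0):
    """Randomly pick n_train images per class for training; the rest for test."""
    rng = random.Random(seed)
    train, test = {}, {}
    for cls, images in images_by_class.items():
        picked = set(rng.sample(range(len(images)), n_train))
        train[cls] = [im for i, im in enumerate(images) if i in picked]
        test[cls] = [im for i, im in enumerate(images) if i not in picked]
    return train, test

# 50 classes x 100 images, 80 training images per class as in the protocol.
data = {c: [f"img_{c}_{i}.jpg" for i in range(100)] for c in range(50)}
train, test = split_per_class(data, 80)
assert all(len(train[c]) == 80 and len(test[c]) == 20 for c in data)
```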
Fig. 3 (b) shows the rank versus accuracy plot up to rank 10
and Table III shows the Top-1, Top-5 and Top-10 accuracies
using some of the current state-of-the-art methodologies on
this database. Both these depict that our proposed ensemble
of the networks (Ensemble Net) is better at recognizing food
images as compared to the individual networks. ResNet underperforms GoogLeNet and AlexNet, probably because of the lack of sufficient training images to train the network parameters. In summary, as is evident from these figures (Fig. 3 (a) and (b)) and tables (Tables II and III), there is no single second best method that outperforms all other methods on both databases; however, our proposed approach (Ensemble Net) outperforms all other methods consistently for all ranks on both databases.
IV. C ONCLUSIONS
Food recognition is a very crucial step for calorie estimation
in food images. We have proposed a multi-layered ensemble of networks that takes advantage of three fine-tuned deep CNN subnetworks. We have shown that these subnetworks, with proper fine-tuning, individually contribute to extracting better discriminative features from the food images, even though their parameters, architectures and tasks differ. Our proposed
ensemble architecture outputs robust discriminative features
as compared to the individual networks. We have contributed
a new Indian Food Database, that would be made available
to public for further evaluation and enrichment. We have
conducted experiments on the largest real-world food images
ETH Food-101 Database and Indian Food Database. The
experimental results show that our proposed ensemble net
approach outperforms consistently all other current state-of-the-art methodologies for all ranks in both databases.
R EFERENCES
[1] MealSnap, "Magical meal logging for iPhone," 2017. [Online]. Available: http://mealsnap.com
[2] Eatly, "Eat smart (snap a photo of your meal and get health ratings)," 2017. [Online]. Available:
[3] A. Myers, N. Johnston, V. Rathod, A. Korattikara, A. N. Gorban,
N. Silberman, S. Guadarrama, G. Papandreou, J. Huang, and K. Murphy,
“Im2calories: Towards an automated mobile vision food diary,” in IEEE
International Conference on Computer Vision (ICCV), 2015, pp. 1233–
1241.
[4] B. Mandal and H.-L. Eng., “3-parameter based eigenfeature regularization for human activity recognition,” in 35th IEEE International
Conference on Acoustics Speech and Signal Processing (ICASSP), Mar
2010, pp. 954–957.
[5] B. Mandal, L. Li, V. Chandrasekhar, and J. H. Lim, “Whole space subclass discriminant analysis for face recognition,” in IEEE International
Conference on Image Processing (ICIP), Quebec city, Canada, Sep 2015,
pp. 329–333.
[6] S. Sasano, X. H. Han, and Y. W. Chen, “Food recognition by combined bags of color features and texture features,” in 9th International
Congress on Image and Signal Processing, BioMedical Engineering and
Informatics (CISP-BMEI), Oct 2016, pp. 815–819.
[7] C. Pham and T. N. T. Thanh, “Fresh food recognition using feature
fusion,” in International Conference on Advanced Technologies for
Communications, Oct 2014, pp. 298–302.
[8] B. Mandal, W. Zhikai, L. Li, and A. Kassim, “Evaluation of descriptors
and distance measures on benchmarks and first-person-view videos
for face identification,” in International Workshop on Robust Local
Descriptors for Computer Vision, ACCV, Singapore, Nov 2014, pp. 585–
599.
[9] N. Martinel, C. Piciarelli, and C. Micheloni, “A supervised extreme
learning committee for food recognition,” Computer Vision and Image
Understanding, vol. 148, pp. 67–86, 2016.
[10] H. Hajime, T. Joutou, and K. Yanai, “Image recognition of 85 food
categories by feature fusion,” in IEEE International Symposium on
Multimedia (ISM), Dec 2010.
[11] T. Joutou and K. Yanai, “A food recognition system with multiple kernel
learning,” in IEEE International Conference on Image Processing, Nov
2009.
[12] C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning,
vol. 20, no. 3, pp. 273–297, 1995.
[13] S. Liu, D. He, and X. Liang, “An improved hybrid model for automatic
salient region detection,” IEEE Signal Processing Letters, vol. 19, no. 4,
pp. 207–210, Apr 2012.
[14] M. Bolaños and P. Radeva, “Simultaneous food localization and
recognition,” CoRR, vol. abs/1604.07953, 2016. [Online]. Available:
http://arxiv.org/abs/1604.07953
[15] Y. Kawano and K. Yanai, “Real-time mobile food recognition system,”
in Computer Vision and Pattern Recognition Workshops (CVPRW), Jun
2013.
[16] K. Yanai and Y. Kawano, “Food image recognition using deep convolutional network with pre-training and fine-tuning,” in 2015 IEEE
International Conference on Multimedia Expo Workshops (ICMEW),
June 2015, pp. 1–6.
[17] L. Bossard, M. Guillaumin, and L. Van Gool, “Food-101 – mining discriminative components with random forests,” in European Conference
on Computer Vision, 2014, pp. 446–461.
[18] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification
with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, 2012.
[19] V. Bettadapura, E. Thomaz, A. Parnami, G. D. Abowd, and I. Essa,
“Leveraging context to support automated food recognition in restaurants,” in Proceedings of the IEEE Winter Conference on Applications
of Computer Vision, 2015, pp. 580–587.
[20] M. Yuji and K. Yanai, “Multiple-food recognition considering cooccurrence employing manifold ranking,” in Proceedings of the 21st
International Conference on Pattern Recognition, Nov 2012.
[21] X. Wang, D. Kumar, N. Thorne, M. Cord, and F. Precioso, “Recipe
recognition with large multimodal food dataset,” in IEEE International
Conference on Multimedia & Expo Workshops (ICMEW), Jul 2015.
[22] J. Chen and C.-w. Ngo, “Deep-based ingredient recognition for cooking
recipe retrieval,” in Proceedings of the ACM on Multimedia Conference,
2016, pp. 32–41.
[23] J. Baxter, “Food recognition using ingredient-level features.” [Online].
Available: http://jaybaxter.net/6869 food project.pdf
[24] S. Yang, M. Chen, D. Pomerleau, and R. Sukthankar, “Food recognition
using statistics of pairwise local features,” in Proceedings of IEEE
Conference on Computer Vision and Pattern Recognition, Jun 2010.
[25] S. Song, V. Chandrasekhar, B. Mandal, L. Li, J. Lim, G. S. Babu,
P. P. San, and N. Cheung, “Multimodal multi-stream deep learning for
egocentric activity recognition,” in 2016 IEEE Conference on Computer
Vision and Pattern Recognition Workshops, CVPR Workshops 2016, Las
Vegas, NV, USA, 2016, pp. 378–385.
[26] S. Zhang, H. Yang, and Z.-P. Yin, “Transferred deep convolutional neural
network features for extensive facial landmark localization,” IEEE Signal
Processing Letters, vol. 23, no. 4, pp. 478–482, Apr 2016.
[27] H. Park and K. M. Lee, “Look wider to match image patches with
convolutional neural networks,” IEEE Signal Processing Letters, vol. PP,
no. 99, pp. 1–1, Dec 2016.
[28] A. G. del Molino, B. Mandal, J. Lin, J. Lim, V. Subbaraju, and
V. Chandrasekhar, “Vc-i2r@imageclef2017: Ensemble of deep learned
features for lifelog video summarization,” in Working Notes of CLEF
2017 - Conference and Labs of the Evaluation Forum, Dublin, Ireland,
Sep 2017.
[29] T. Gan, Y. Wong, B. Mandal, V. Chandrasekhar, and M. Kankanhalli,
“Multi-sensor self-quantification of presentations,” in ACM Multimedia,
Brisbane, Australia, Oct 2015, pp. 601–610.
[30] K. Zhang, Z. Zhang, Z. Li, and Y. Qiao, “Joint face detection and
alignment using multitask cascaded convolutional networks,” IEEE
Signal Processing Letters, vol. 23, no. 10, pp. 1499–1503, Oct 2016.
[31] B. Mandal, “Publication and database,” 2017. [Online]. Available:
https://sites.google.com/site/bappadityamandal/publications-1
[32] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma,
Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. FeiFei, “Imagenet large scale visual recognition challenge,” Int. J. Comput.
Vision, vol. 115, no. 3, pp. 211–252, Dec 2015.
[33] C. Szegedy and et al., “Going deeper with convolutions,” in IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[34] H. Kaiming and et al., “Deep residual learning for image recognition,”
in Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition (CVPR), 2016.
[35] J. Bromley, I. Guyon, Y. LeCun, E. Säckinger, and R. Shah, “Signature verification using a “siamese” time delay neural network,” in
Proceedings of the 6th International Conference on Neural Information
Processing Systems (NIPS), 1993, pp. 737–744.
[36] H. Zuo, H. Fan, E. Blasch, and H. Ling, “Combining convolutional
and recurrent neural networks for human skin detection,” IEEE Signal
Processing Letters, vol. 24, no. 3, pp. 289–293, Mar 2017.
[37] VLFEAT, “Vlfeat open source,” 2017. [Online]. Available:
http://www.vlfeat.org/matconvnet/
[38] ——, “Vlfeat open source,” 2017. [Online]. Available: www.vlfeat.org
arXiv:1608.07839v1 [math.PR] 28 Aug 2016
Non-Linear Wavelet Regression and Branch & Bound
Optimization for the Full Identification of Bivariate
Operator Fractional Brownian Motion
Jordan Frecon∗, Gustavo Didier†, Nelly Pustelnik∗ , and Patrice Abry∗‡
Abstract
Self-similarity is widely considered the reference framework for modeling the scaling
properties of real-world data. However, most theoretical studies and their practical
use have remained univariate. Operator Fractional Brownian Motion (OfBm) was
recently proposed as a multivariate model for self-similarity. Yet it has remained seldom
used in applications because of serious issues that appear in the joint estimation of
its numerous parameters. While the univariate fractional Brownian motion requires
the estimation of two parameters only, its mere bivariate extension already involves
7 parameters which are very different in nature. The present contribution proposes
a method for the full identification of bivariate OfBm (i.e., the joint estimation of all
parameters) through an original formulation as a non-linear wavelet regression coupled
with a custom-made Branch & Bound numerical scheme. The estimation performance
(consistency and asymptotic normality) is mathematically established and numerically
assessed by means of Monte Carlo experiments. The impact of the parameters defining
OfBm on the estimation performance as well as the associated computational costs are
also thoroughly investigated.
∗ Jordan Frecon (Corresponding author), Nelly Pustelnik and Patrice Abry are with Univ Lyon,
Ens de Lyon, Univ Claude Bernard, CNRS, Laboratoire de Physique, F-69342 Lyon, France (e-mail:
[email protected]).
† Gustavo Didier is with Mathematics Department, Tulane University, New Orleans, LA 70118, USA,
(e-mail: [email protected]).
‡ Work supported by French ANR grant AMATIS #112432, 2010-2014 and by the prime award no.
W911NF-14-1-0475 from the Biomathematics subdivision of the Army Research Office, USA. G.D. gratefully
acknowledges the support of Ens de Lyon.
1
Introduction
Scale invariance and self-similarity. Scale invariance, or scaling, is now recognized as
an ubiquitous property in a variety of real-world applications which are very different in
nature (cf. e.g., [25] and references therein for reviews). The so-named scale invariance
paradigm is based on the assumption that temporal dynamics in data are not driven by
one, or a few, representative time scales, but by a large continuum of them. Self-similar
stochastic processes provide the basal mathematical framework for the modeling of scaling
phenomena. In essence, self-similarity states that a signal X cannot be distinguished from
any of its dilated copies (cf. e.g., [22]):
{X(t)}t∈R =fdd {a^H X(t/a)}t∈R, ∀a > 0,   (1)

where =fdd stands for the equality of finite dimensional distributions. The key information
on scale-free dynamics is summed up under a single parameter 0 < H < 1, called the Hurst
exponent, whose estimation is the main goal in scaling analysis. Amongst the numerous
estimators of H proposed in the literature (cf. e.g., [5] for a review), one popular methodology draws upon the computation of the sample variance of a set of multiscale quantities
(e.g., wavelet coefficients) TX (a, t) that behave like a power law with respect to the scale a:
Σt T_X^2(a, t) ≃ a^(2H+1).   (2)
In view of the relation (2), H can be estimated by means of a linear regression in log–log
coordinates (cf. e.g., [24]).
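A sketch of this log–log regression on a noise-free power law (Python used here purely for illustration; on an exact power law the slope 2H+1 is recovered up to floating-point error):

```python
import math

def estimate_hurst(scales, structure):
    """Least-squares slope of log2(structure) vs log2(scale); H = (slope-1)/2."""
    xs = [math.log2(a) for a in scales]
    ys = [math.log2(s) for s in structure]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return (slope - 1) / 2

# Noise-free power law S(a) = C * a^(2H+1) with H = 0.7 is recovered exactly.
H = 0.7
scales = [2 ** j for j in range(1, 7)]
structure = [3.0 * a ** (2 * H + 1) for a in scales]
assert abs(estimate_hurst(scales, structure) - H) < 1e-10
```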
Fractional Brownian motion (fBm) BH(t) – i.e., the only Gaussian, self-similar, stationary-increment process – has massively been used as a reference process in the modeling of scaling properties in univariate real-world signals.
Multivariate scaling. Notwithstanding its theoretical and practical importance, fBm falls
short of providing an encompassing modeling framework for scaling because most modern
contexts of application involve the recording of multivariate time series that hence need to be
jointly analyzed. The construction of a comprehensive multivariate estimation paradigm is
still an open problem in the literature. The so-named Operator fractional Brownian motion
(OfBm), henceforth denoted by B W,H (t), is a natural extension of fBm. It was recently
defined and studied in [3, 10, 9, 7] as the only Gaussian, multivariate self-similar process
with stationary increments. Multivariate self-similarity translates into the relation:
{B W,H(t)}t∈R =fdd {a^H B W,H(t/a)}t∈R, ∀a > 0,   (3)
where the scaling exponent consists of a Hurst matrix H = W diag(H) W^(−1). In the latter expression, W represents a P × P invertible matrix, and H is a P-dimensional vector of Hurst eigenvalues, where a^H := Σ_{k=0}^{+∞} log^k(a) H^k / k!. The full parametrization of OfBm
further requires a P × P point-covariance matrix Σ. OfBm remains so far rarely used in
applications, mostly because its actual use requires, in a general setting, the estimation of
P + P 2 + P (P − 1)/2 parameters which are very different in nature (cf. Section 2). In
particular, Eq. (2) above results in a mixture of power laws (cf. Eqs. (13)-(15) in Section 2).
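The matrix power a^H defined above can be evaluated numerically by truncating the series; in the diagonal case it must reduce to entry-wise powers (a sketch for P = 2, not taken from the paper):

```python
import math

def mat_mul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_power_aH(a, H, terms=40):
    """a^H = sum_k log(a)^k H^k / k! for a 2x2 matrix H (truncated series)."""
    result = [[1.0, 0.0], [0.0, 1.0]]   # k = 0 term: identity
    Hk = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        Hk = mat_mul(Hk, H)
        coef = math.log(a) ** k / math.factorial(k)
        for i in range(2):
            for j in range(2):
                result[i][j] += coef * Hk[i][j]
    return result

# Diagonal H = diag(h1, h2): a^H must reduce to diag(a^h1, a^h2).
a, h1, h2 = 4.0, 0.3, 0.8
M = mat_power_aH(a, [[h1, 0.0], [0.0, h2]])
assert abs(M[0][0] - a ** h1) < 1e-9 and abs(M[1][1] - a ** h2) < 1e-9
```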
However, the identification of OfBm has been thoroughly studied in the entry-wise scaling
case (corresponding to a diagonal mixing matrix W ) [3] and often used in applications (cf.
e.g., [2, 6]). Identification has also been recently achieved under a non-diagonal mixing
matrix W , yet with more restrictive assumptions on Σ [8]. Even more recently, [1] proposed a general estimator for the vector of Hurst eigenvalues H in the bivariate setting,
yet requiring additional assumptions for the estimation of the extra parameters W and Σ.
The full identification of OfBm without parametric assumptions has remained an open issue.
Goals, contributions and outline. In this work, our contribution is two-fold. First,
the full identification of bivariate (P = 2) OfBm (Biv-OfBm) is formulated as a non-linear
wavelet domain regression. Second, an algorithmic solution for the associated optimization
problem is devised by means of a Branch & Bound procedure, which is essential in view
of the highly non-convex nature of the latter. To this end, definitions and properties of
Biv-OfBm are recapped in Section 2. A parsimonious parametrization of the process is
also proposed, which prevents the potential parametric under-determination of Biv-OfBm
[10]. In Section 3, the properties of the wavelet coefficients and of the wavelet spectrum of
Biv-OfBm are explicitly laid out and computed. This provides a mathematical framework
for the proposed estimation method. The full identification of Biv-OfBm is formulated as
a minimization problem whose solution is developed based on a Branch & Bound strategy
(cf. Section 4). The consistency and asymptotic normality of the proposed estimator are
mathematically established in the general multivariate setting P ≥ 2 (cf. Section 5), and
numerically assessed in the bivariate setting P = 2 by means of Monte Carlo experiments
conducted on large numbers of synthetic Biv-OfBm paths (cf. Section 6). Comparisons
with the Hurst eigenvalues estimators proposed in [1] are also reported. The routines for
the identification and synthesis of OfBm will be made publicly available at the time of
publication.
2 Bivariate Operator fractional Brownian motion
2.1 Definitions
2.1.1 Preamble
The most general definitions of OfBm were formulated in [3, 10, 9, 7] as the only multivariate
Gaussian, self-similar (i.e., satisfying Eq. (3)) process with stationary increments. Targeting
real-world data and applications, the present contribution is restricted to the (slightly)
narrower class of time reversible OfBm (cf. [3, 10, 9, 7]) whose scaling exponent matrix H
can be diagonalized as H = W diag(H) W^{−1}, where W is an invertible matrix. The definitions
and properties of OfBm are stated only in the bivariate setting.
2.1.2 Entry-wise scaling OfBm
The entry-wise scaling, time-reversible OfBm {X(t)}_{t∈R} ≡ B^{Id,H}(t) is defined by the condition W = Id. Hence, the Hurst exponent has the form H = diag(h1, h2), 0 < h1 ≤ h2 < 1.
Let Σ_X ≡ E X(1)X(1)^* denote the point covariance matrix of X, with entries σ_{xm} σ_{xn} ρ_{xm,xn},
where σ²_{xm} is the variance of component m and ρ_{xm,xn} is the correlation between components
m and n. It was shown in [4], [9] that the process X is well-defined (i.e., that its covariance
matrix EX(t)X(s)∗ is always positive definite) if and only if (with ρx ≡ ρx1 ,x2 ):
g(h1, h2, ρx) ≡ Γ(2h1 + 1) Γ(2h2 + 1) sin(πh1) sin(πh2) − ρx² Γ(h1 + h2 + 1)² sin²(π(h1 + h2)/2) > 0.  (4)
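For numerical work it is handy to evaluate this condition directly; a minimal sketch (the function name `g` mirrors Eq. (4)):

```python
import math

def g(h1, h2, rho):
    """Existence condition of Eq. (4): the entry-wise scaling OfBm is
    well-defined if and only if g(h1, h2, rho) > 0."""
    return (math.gamma(2 * h1 + 1) * math.gamma(2 * h2 + 1)
            * math.sin(math.pi * h1) * math.sin(math.pi * h2)
            - rho ** 2 * math.gamma(h1 + h2 + 1) ** 2
            * math.sin(math.pi * (h1 + h2) / 2) ** 2)

print(g(0.4, 0.8, 0.1) > 0)   # True: admissible parameter triplet
```

For instance, the case h1 = h2 = 0.5 with ρx = 1 sits exactly on the boundary g = 0.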
For entry-wise scaling OfBm, self-similarity in Eq. (3) simplifies to:
{X1(at), X2(at)}_{t∈R} =^{fdd} {a^{h1} X1(t), a^{h2} X2(t)}_{t∈R}, ∀a > 0.  (5)
The estimation of the parameters (h1 , h2 , ρx , σx1 , σx2 ), which fully characterize the process,
can thus be conducted by following univariate-type strategies, i.e., by making use of extensions of Eq. (2) to all auto- and cross-components (cf. [3, 7] for a theoretical study of estimation performance, or [6] for wavelet-based estimation on real-world data).
2.1.3 Mixing
Let W denote a 2 × 2 invertible matrix, hereinafter called the mixing matrix. OfBm is
defined as {B^{W,H}(t) ≡ Y(t)}_{t∈R} = {W B^{Id,H}(t) ≡ W X(t)}_{t∈R}. Following [3, 9], it is
straightforward to show that Y is self-similar as in Eq. (3), with H = W diag(H) W^{−1}. When
W is not diagonal, OfBm is no longer entry-wise scaling. Instead, the entry-wise scaling
behavior of OfBm consists of mixtures of univariate power laws (cf. Eqs. (13)-(15)). For
this reason, the construction of estimators in the bivariate setting cannot rely on a direct
extension of a univariate procedure.
2.2 Properties
2.2.1 Under-determination
Because W is invertible, one can show that, for Σy (t) ≡ EY (t)Y (t)∗ ,
Σy(t) = W E X(t)X(t)^* W^* ≡ W Σx(t) W^*,  (6)
which reveals three forms of under-determination in the parametrization of OfBm:
i) Writing T_X = diag(σ_{x1}, σ_{x2}) and Σ_X = T_X C_X T_X^*, where C_X ≡ [1, ρx ; ρx, 1] is the correlation matrix of X, one cannot discriminate between Y = W X and Y = W′X′, where W′ = W T_X and X′ = T_X^{−1} X.
ii) Let Π denote a 2 × 2 permutation matrix, i.e., a matrix with only one non-zero entry (equal to 1) per row and per column. Then, Y = W X = W′X′, where W′ = W Π and X′ = Π^T X.
iii) Let S be a diagonal matrix with entries ±1 and X′ = SX; then Y = W X = W′X′, where W′ ≡ W S.
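These factorization invariances are easy to verify numerically; a small sketch of form ii), with arbitrary illustrative matrices:

```python
import numpy as np

# Arbitrary illustrative mixing matrix and "signal" (rows: components)
W = np.array([[0.8, 0.6], [-0.6, 0.8]])
X = np.array([[1.0, 2.0], [3.0, 4.0]])
P = np.array([[0.0, 1.0], [1.0, 0.0]])   # 2x2 permutation matrix Pi

Y1 = W @ X                 # original factorization
Y2 = (W @ P) @ (P.T @ X)   # permuted factorization: W' = W Pi, X' = Pi^T X

print(np.allclose(Y1, Y2))  # True: both factorizations yield the same Y
```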
2.2.2 Parametrization
To fix the parametric under-determination of OfBm, we adopt the following conventions:
i) the columns of W are normalized to 1; ii) h1 ≤ h2 ; iii) the diagonal entries of W are
positive. This leads us to propose the following generic 7-dimensional parametrization Θ =
(h1 , h2 , ρx , σx1 , σx2 , β, γ) of Biv-OfBm {Y (t)}t∈R :
W = [ 1/√(1+γ²) , β/√(1+β²) ; −γ/√(1+γ²) , 1/√(1+β²) ],  Σ_X = [ σ_{x1}² , σ_{x1}σ_{x2}ρx ; σ_{x1}σ_{x2}ρx , σ_{x2}² ].  (7)
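The mapping from Θ to (W, Σ_X) in Eq. (7) can be coded directly; a sketch (function name ours):

```python
import numpy as np

def parametrize(h1, h2, rho, s1, s2, beta, gamma):
    """Build (W, Sigma_X) from Theta following Eq. (7)."""
    W = np.array([[1.0, beta], [-gamma, 1.0]])
    W = W / np.sqrt([1 + gamma ** 2, 1 + beta ** 2])   # unit-norm columns
    Sigma = np.array([[s1 ** 2, rho * s1 * s2],
                      [rho * s1 * s2, s2 ** 2]])
    return W, Sigma

W, Sigma = parametrize(0.4, 0.8, 0.1, 1.0, 1.0, 0.5, 0.5)
print(np.allclose(np.linalg.norm(W, axis=0), 1.0))  # True: convention i)
```

The column normalization and the positive diagonal of W implement conventions i) and iii) above.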
3 Wavelet Analysis of OfBm
3.1 Multivariate discrete wavelet transform (DWT)
Let ψ0 be a mother wavelet, namely, ψ0 ∈ L²(R) and ∫_R t^k ψ0(t) dt ≡ 0, k = 0, 1, . . . , Nψ − 1.
Let {ψ_{j,k}(t) = 2^{−j/2} ψ0(2^{−j}t − k)}_{(j,k)∈Z²} denote the collection of dilated and translated templates of ψ0 that forms an orthonormal basis of L²(R). The multivariate DWT coefficients
of {Y (t)}t∈R are defined as (Dy (j, k)) ≡ (Dy1 (j, k), Dy2 (j, k)), where
D_{ym}(j, k) = ∫_R 2^{−j/2} ψ0(2^{−j}t − k) Y_m(t) dt,  m = 1, 2.  (8)
For a detailed introduction to wavelet transforms, interested readers are referred to, e.g.,
[17].
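For illustration, the analysis pyramid behind Eq. (8) can be sketched with the Haar wavelet; note that Haar has a single vanishing moment, whereas the asymptotic theory below assumes Nψ ≥ 2 (in practice a wavelet library, e.g. PyWavelets, would be used):

```python
import math

def haar_dwt(x, levels):
    """Minimal Haar analysis pyramid: returns the detail coefficients for
    octaves j = 1..levels. Haar has a single vanishing moment, so this is
    only an illustration; the paper assumes N_psi >= 2."""
    a = list(x)
    details = []
    for _ in range(levels):
        n = len(a) // 2
        d = [(a[2 * k] - a[2 * k + 1]) / math.sqrt(2.0) for k in range(n)]
        a = [(a[2 * k] + a[2 * k + 1]) / math.sqrt(2.0) for k in range(n)]
        details.append(d)
    return details

# A constant signal has vanishing detail coefficients at every octave
print(all(abs(c) < 1e-12 for d in haar_dwt([1.0] * 16, 3) for c in d))  # True
```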
3.2 Wavelet spectrum
The properties of the wavelet coefficients of OfBm in a P -variate setting were studied in
detail in [1]. Here, we only recall basic properties and expand on what is needed for actual
full identification (i.e., the estimation of all parameters entering its definition) of Biv-OfBm.
3.2.1 Mixture of power laws
From Eq. (5) and Y (t) = W X(t), it can be shown that the wavelet spectrum reads:
E D_y(j, k) D_y(j, k)^* = W 2^{j(H + Id/2)} E_0 2^{j(H^* + Id/2)} W^*,  (9)

with

E_0 ≡ E D_x(0, k) D_x(0, k)^* = [ σ_{x1}² η_{h1} , ρx σ_{x1}σ_{x2} η_{(h1+h2)/2} ; ρx σ_{x1}σ_{x2} η_{(h1+h2)/2} , σ_{x2}² η_{h2} ],  (10)

and

η_h = −(1/2) ∫_R ∫_R |u|^{2h} ψ0(v) ψ0(v − u)^* dv du > 0.  (11)

The OfBm parametrization proposed above yields the following explicit form of the wavelet spectrum:

E D_y(j, k) D_y(j, k)^* ≡ E(2^j, Θ) = [ E11(2^j, Θ) , E12(2^j, Θ) ; E12(2^j, Θ) , E22(2^j, Θ) ],  (12)
with
E11(2^j, Θ) = (1+γ²)^{−1} σ_{x1}² η_{h1} 2^{j(2h1+1)} + 2β(1+β²)^{−1/2}(1+γ²)^{−1/2} ρx σ_{x1}σ_{x2} η_{(h1+h2)/2} 2^{j(h1+h2+1)} + β²(1+β²)^{−1} σ_{x2}² η_{h2} 2^{j(2h2+1)},  (13)
E12(2^j, Θ) = −γ(1+γ²)^{−1} σ_{x1}² η_{h1} 2^{j(2h1+1)} + (1−βγ)(1+β²)^{−1/2}(1+γ²)^{−1/2} ρx σ_{x1}σ_{x2} η_{(h1+h2)/2} 2^{j(h1+h2+1)} + β(1+β²)^{−1} σ_{x2}² η_{h2} 2^{j(2h2+1)},  (14)
E22(2^j, Θ) = γ²(1+γ²)^{−1} σ_{x1}² η_{h1} 2^{j(2h1+1)} − 2γ(1+β²)^{−1/2}(1+γ²)^{−1/2} ρx σ_{x1}σ_{x2} η_{(h1+h2)/2} 2^{j(h1+h2+1)} + (1+β²)^{−1} σ_{x2}² η_{h2} 2^{j(2h2+1)}.  (15)
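Eqs. (13)-(15) translate directly into code; a sketch (function name ours, with η passed as a callable since it depends on the chosen wavelet):

```python
import math

def model_spectrum(j, h1, h2, rho, s1, s2, beta, gamma, eta):
    """Model wavelet spectrum E(2^j, Theta) of Eqs. (13)-(15); eta(h) stands
    for the wavelet-dependent constant eta_h of Eq. (11)."""
    cb = 1.0 / math.sqrt(1 + beta ** 2)
    cg = 1.0 / math.sqrt(1 + gamma ** 2)
    p1 = s1 ** 2 * eta(h1) * 2.0 ** (j * (2 * h1 + 1))
    p12 = rho * s1 * s2 * eta((h1 + h2) / 2) * 2.0 ** (j * (h1 + h2 + 1))
    p2 = s2 ** 2 * eta(h2) * 2.0 ** (j * (2 * h2 + 1))
    e11 = cg ** 2 * p1 + 2 * beta * cb * cg * p12 + beta ** 2 * cb ** 2 * p2
    e12 = (-gamma * cg ** 2 * p1 + (1 - beta * gamma) * cb * cg * p12
           + beta * cb ** 2 * p2)
    e22 = gamma ** 2 * cg ** 2 * p1 - 2 * gamma * cb * cg * p12 + cb ** 2 * p2
    return [[e11, e12], [e12, e22]]
```

With β = γ = 0 the three entries reduce to the pure power laws of the entry-wise scaling case.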
3.2.2 Further under-determination
Eqs. (13), (14) and (15) reveal that E11 (2j , Θ), |E12 (2j , Θ)| and E22 (2j , Θ) are invariant
under the transformation (β, γ, ρx ) → −(β, γ, ρx ). Therefore, the definition of ρx can be
restricted to ρx ≥ 0.
3.3 Empirical wavelet spectrum
The goal is to estimate the Biv-OfBm parameters Θ = (h1 , h2 , ρx , σx1 , σx2 , β, γ) starting from
the wavelet spectrum EDy (j, ·)Dy (j, ·)∗ . The plug-in estimator of the ensemble variance
EDy (j, ·)Dy (j, ·)∗ is the sample variance
S(2^j) = (1/K_j) Σ_{k=1}^{K_j} D_y(j, k) D_y(j, k)^*,  K_j = N / 2^j,
where N denotes the sample size. Fig. 1 illustrates the fact that S(2j ) is a satisfactory
estimator for EDy (j, ·)Dy (j, ·)∗ .
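The sample variance S(2^j) is a plain average of outer products; a minimal sketch:

```python
import numpy as np

def sample_spectrum(D1, D2):
    """Sample wavelet spectrum S(2^j): average of the outer products
    D_y(j,k) D_y(j,k)^* over the K_j coefficients available at octave j."""
    D = np.vstack([D1, D2])          # shape (2, K_j)
    return (D @ D.T) / D.shape[1]

S = sample_spectrum(np.array([1.0, 1.0, -1.0, -1.0]),
                    np.array([1.0, -1.0, 1.0, -1.0]))
print(np.allclose(S, np.eye(2)))  # True for these orthogonal toy sequences
```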
4 Non-linear regression based estimation and Branch and Bound algorithm
4.1 Identification procedure as a minimization problem
Identification procedure as a minimization problem
The estimation of the parameter vector Θ of Biv-OfBm is challenging because its entry-wise
wavelet (or Fourier) spectrum is a mixture of power laws (cf. Eqs. (13)-(15)). This precludes
the direct extension of classical univariate techniques, based on the scalar relation Eq. (2)
[24]. For this reason, we formulate the full identification of Biv-OfBm (i.e., the estimation of Θ) as a minimization problem¹:
Θ̂^M_N = argmin_{Θ∈Q0} C_N(Θ),  where  (16)

C_N(Θ) ≡ Σ_{i1,i2=1, i1≤i2}^{P=2} Σ_{j=j1}^{j2} ( log2 |S_{i1,i2}(2^j)| − log2 |E_{i1,i2}(2^j, Θ)| )².  (17)
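The objective (17) is straightforward to evaluate once S(2^j) and E(2^j, Θ) are available; a bivariate sketch (function name ours):

```python
import math

def objective(S_by_scale, E_by_scale):
    """C_N(Theta) of Eq. (17) in the bivariate case: squared log2 deviations
    between empirical and model spectra, over i1 <= i2 and octaves j."""
    c = 0.0
    for S, E in zip(S_by_scale, E_by_scale):   # one 2x2 matrix per octave
        for i1 in range(2):
            for i2 in range(i1, 2):
                c += (math.log2(abs(S[i1][i2]))
                      - math.log2(abs(E[i1][i2]))) ** 2
    return c

S = [[[4.0, 1.0], [1.0, 8.0]]]
print(objective(S, S))  # 0.0
```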
The use of log2 ensures that the scales 2^j, j = j1, . . . , j2, contribute equally to C_N. The search space incorporates prior information in the shape of constraints: Sections 2.2.1 and 3.2.2 impose h1 ≤ h2 and ρx ∈ [0, 1] ; feasible solutions must satisfy the constraint g(h1, h2, ρx) > 0 (cf. Eq. (4)) and (β, γ) ∈ [−1, 1]². For the sake of feasibility, we further restrict (σx1, σx2) ∈
¹ The superscript ·M has been added in order to refer to the M-estimator whose theoretical details are given in Section 5.
Figure 1: Wavelet Spectrum. Superimposition of log2 |E_{p,p′}(2^j, Θ)| (red '+') and log2 |S_{p,p′}(2^j)| (solid black line) (with (p, p′) = (1, 1), (1, 2), (2, 2) from left to right) for a single realization of Biv-OfBm with Θ = (h1 = 0.4, h2 = 0.8, ρx = 0.1, σx1 = 1, σx2 = 1, β = 0.5, γ = 0.5) (absolute difference between data and model is shown in dashed blue).
[0, σmax]², with σmax = (σ̂²_{y1} + σ̂²_{y2})^{1/2}, where σ̂²_{ym} denotes the sample variance estimate of the increments of Y_m. We arrive at the parameter space

Q0 = { Θ = (h1, h2, ρx, σx1, σx2, β, γ) ∈ R⁷ | Θ ∈ [0, 1]³ × [0, σmax]² × [−1, 1]², g(h1, h2, ρx) > 0, h1 ≤ h2 }.  (18)
The minimization of CN (Θ) is an intricate task for two reasons. First, because it involves disentangling a mixture of power laws, which yields a highly non-convex function.
Second, because the parameters to be estimated in Θ (scaling exponents, mixing coefficients,
variances and correlation) are very different in nature. The present contribution proposes the original approach of searching for the global minimum of Eq. (16) by means of a Branch & Bound procedure, detailed in the next section.
4.2 Global minimization via a Branch & Bound strategy
Branch & Bound algorithms consist of smart enumeration methods, which were shown to
solve a variety of constrained global non-convex optimization problems [14, 16, 18, 21]. In
the context of the estimation problem (16), it amounts to partitioning (branching) the search
space Q0 into smaller and smaller subregions, bounding the range of the objective function
CN in each subregion, and then identifying the region containing the global minimum. This
can be rephrased as 4 steps which are repeated until a stopping criterion is reached:
- Selecting: Choose any region R from the search space and relax it into a closed convex
set, i.e. an interval, as illustrated by the dashed line in Fig. 2 (left plot).
- Partitioning: Divide R into two smaller regions Ra and Rb .
- Bounding: Compute lower and upper bounds of CN on Ra and Rb . Upper bounds can be
obtained by evaluating CN anywhere in the region at hand. Lower bounds are computed by
resorting to interval arithmetic techniques (cf. Appendix A and [19, 15, 20]), which combine
elementary operations to produce rough lower bounds for the range of a given function, here
CN , on any interval.
- Pruning: Pruning is driven by three mechanisms: discard regions that do not satisfy
constraints (infeasibility) ; discard regions whose lower bound is larger than the smallest
upper bound as they cannot contain the global minimum (bound ) ; discard regions whose
size (for all parameters) has reached the targeted precision (size).
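The four steps above can be sketched generically; a minimal best-first Branch & Bound loop (all names ours; the paper's version additionally handles infeasibility pruning and records size-pruned regions):

```python
import heapq

def branch_and_bound(lower, value, box, delta):
    """Generic best-first Branch & Bound sketch of the four steps above.
    `lower(box)` returns a lower bound of the objective on a box,
    `value(point)` evaluates it (an upper bound), `box` is a list of
    (lo, hi) intervals and `delta` the targeted edge precision."""
    center = lambda b: [(lo + hi) / 2 for lo, hi in b]
    best_x = center(box)
    best_u = value(best_x)                        # initial upper bound
    heap = [(lower(box), box)]
    while heap:
        lb, b = heapq.heappop(heap)               # select lowest lower bound
        if lb > best_u:                           # prune by bound
            continue
        if max(hi - lo for lo, hi in b) < delta:  # prune by size
            continue
        i = max(range(len(b)), key=lambda k: b[k][1] - b[k][0])
        lo, hi = b[i]
        mid = (lo + hi) / 2                       # cut longest edge in half
        for piece in ((lo, mid), (mid, hi)):
            nb = b[:i] + [piece] + b[i + 1:]
            x = center(nb)
            u = value(x)                          # upper bound at the center
            if u < best_u:
                best_u, best_x = u, x
            heapq.heappush(heap, (lower(nb), nb))
    return best_x, best_u

# 1-D demo: minimize (x - 0.3)^2 with an interval-arithmetic-style bound
lb = lambda b: (0.0 if b[0][0] <= 0.3 <= b[0][1]
                else min((b[0][0] - 0.3) ** 2, (b[0][1] - 0.3) ** 2))
x, u = branch_and_bound(lb, lambda p: (p[0] - 0.3) ** 2, [(0.0, 1.0)], 0.01)
print(abs(x[0] - 0.3) < 0.02)  # True
```

The key guarantee is that a box containing the global minimizer always has a lower bound no larger than the incumbent upper bound, so it is never pruned by bound before being refined to the target precision.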
Figure 2: Schematic view of the proposed approximation S0 ⊆ Q0 .
4.3 Branch & Bound procedure for Biv-OfBm identification
4.3.1 Convex relaxation
Convex relaxation
By nature, interval arithmetic techniques apply only to intervals, i.e., to convex sets. Therefore, in most Branch & Bound procedures, a convex relaxation of the search space Q0 is
required at initialization, as sketched in Fig. 2 (left). However, in the present case, a convex
relaxation of Q0 is not feasible because of the constraint g(h1 , h2 , ρx ) > 0.
Instead, we propose to approximate Q0 by an inner convex relaxation S0, consisting of the union of ∆² separable convex sets C_i, i.e., S0 = ∪_{i=1}^{∆²} C_i ⊂ Q0, as illustrated in Fig. 2 (right). In practice, (h1, h2, ρx) ∈ [0, 1]³ is approximated by a union of ∆² non-overlapping parallelepipedic sets {P_i}_{1≤i≤∆²}, denoted P ⊂ [0, 1]³. They are obtained by dividing (h1, h2) ∈ [0, 1]² into squares T_i with a discretization step ∆^{−1} and defining P_i = T_i × [0, ρ_i], where ρ_i is the largest value such that g(h1, h2, ρ_i) > 0 for all (h1, h2) ∈ T_i:

C_i = { Θ = (h1, h2, ρx, σx1, σx2, β, γ) ∈ R⁷ | Θ ∈ P_i × [0, σmax]² × [−1, 1]², h1 ≤ h2 }.  (19)
In this inner convex relaxation strategy, the constraint g(h1 , h2 , ρx ) > 0 is necessarily
satisfied, a major practical benefit as infeasible regions need not be explored.
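The per-square bound ρ_i can be approximated numerically; a sketch that evaluates g on a grid over T_i (the exact construction takes the worst case over all of T_i):

```python
import math

def g(h1, h2, rho):
    # Existence condition g of Eq. (4)
    return (math.gamma(2 * h1 + 1) * math.gamma(2 * h2 + 1)
            * math.sin(math.pi * h1) * math.sin(math.pi * h2)
            - rho ** 2 * math.gamma(h1 + h2 + 1) ** 2
            * math.sin(math.pi * (h1 + h2) / 2) ** 2)

def rho_i(square, n=33):
    """Approximate the largest rho with g(h1, h2, rho) > 0 over the square
    T_i, on an n x n grid. Writes g = A - rho^2 B, decreasing in rho."""
    (a1, b1), (a2, b2) = square
    pts = lambda a, b: [a + (b - a) * k / (n - 1) for k in range(n)]
    rmax = 1.0
    for x in pts(a1, b1):
        for y in pts(a2, b2):
            A = g(x, y, 0.0)
            B = A - g(x, y, 1.0)        # coefficient of rho^2
            if B > 1e-15:
                rmax = min(rmax, math.sqrt(max(A, 0.0) / B))
    return rmax

print(0.0 < rho_i(((0.25, 0.5), (0.5, 0.75))) <= 1.0)  # True
```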
4.3.2 Algorithm
The full identification of Biv-OfBm is achieved via the following proposed sequence of operations:
Inputs.
- From data, compute the wavelet spectrum S(2j ), j = j1 , . . . , j2 ;
- Pick the Biv-OfBm model (i.e., Eqs. (13)-(15)) ;
- Set the precision δ for each parameter ;
- Set the inner convex relaxation parameter ∆ in order to approximate the set Q0 by S0 = ∪_{i=1}^{∆²} C_i ;
Initialization.
- Set Ŝ = ∅ and k = 0.
- Compute lower bounds l_i of C_N on C_i (∀ i = 1, . . . , ∆²) ;
- Compute upper bounds u_i of C_N on C_i (∀ i = 1, . . . , ∆²) ;
- Set U = min(u1, . . . , u_{∆²}) and L = min(l1, . . . , l_{∆²}).
Iteration. Let Sk denote the partitioning at the step k.
- Selecting: Select the region R ⊂ S_k with lowest lower bound L (best-first-search strategy)².
² At step k = 0, this amounts to choosing one C_i. More generally, it consists in selecting one element of the partitioning S_k.
- Cutting: Divide R into R_a and R_b such that R = R_a ∪ R_b and R_a ∩ R_b = ∅, along its longest edge, in half, where the length of an edge is defined relative to the maximum accuracy δ prescribed by the practitioner.
- Lower bound: Compute lower bounds of Ra and Rb , using interval arithmetic.
- Upper bound: Compute upper bound of Ra (resp. Rb ), by evaluating CN (Θ) for Θ chosen
at the center of Ra (resp. Rb ).
- Branching: Update the partitioning Sk+1 = (Sk \R) ∪ Ra ∪ Rb , and update U and L on
this new partition.
- Pruning: Discard regions R* of S_{k+1} either by bound, infeasibility or size, i.e., S_{k+1} ← S_{k+1}\R*. Append to Ŝ the regions of S_{k+1} discarded by size ; discard regions in Ŝ by bound.
- Set k ← k + 1.
Stop and Output. Stop iterations when S_k is empty. Output Ŝ as the list of potential solutions at targeted precision δ. Output the region in Ŝ with lowest upper bound and the corresponding best estimate at the targeted precision,

Θ̂^{M,BB}_N.  (20)

5 Asymptotic theory in the multivariate setting
In this section, the asymptotic properties of the exact solution Θ̂^M_N (see (16)) are studied theoretically for a general multivariate OfBm [3, 10, 9]. In other words, the results encompass, but are not restricted to, the bivariate framework of Section 2. Regarding the OfBm B^{W,H}, it is assumed that: (OFBM1) B^{W,H} is an R^P-valued OfBm with Hurst parameter H, not necessarily diagonalizable, where the eigenvalues of H satisfy 0 < ℜ(h_k) < 1, k = 1, . . . , n ; (OFBM2) E B^{W,H}(t) B^{W,H}(t)^*, t ≠ 0, is a full rank matrix (properness) ; (OFBM3) B^{W,H} is a time reversible stochastic process. Regarding ψ0 ∈ L¹(R), it is assumed that: (W1) Nψ ≥ 2 ; (W2) supp(ψ0) is a compact interval ; (W3) sup_{t∈R} |ψ0(t)| (1 + |t|)^α < ∞ for α > 1.
The summation range in the objective function (17) is generalized to i1 , i2 = 1, . . . , P ≥ 2.
The proofs of the statements in this section can be found in Appendix B.
5.1 The asymptotic normality of the wavelet spectrum
The asymptotic behavior of the estimator Θ̂^M_N draws upon the asymptotic normality of the wavelet variance for fixed scales. Under the assumptions (OFBM1-3) and (W1-3), the latter property can be established by an argument that is almost identical to that in [1, Theorem 3.1]. For the reader's convenience, we reproduce the claim here.
Theorem 5.1. Suppose the assumptions (OFBM1-3) and (W1-3) hold. Let F ∈ S(P(P+1)/2 · m, R)³ be the asymptotic covariance matrix described in [1, Proposition 3.3]. Then,

( √K_j (vecS S(2^j) − vecS E(2^j, Θ)) )_{j=j1,...,j2} →^d Z,  (21)

as N → ∞, where j1 < . . . < j2 and m = j2 − j1 + 1 and Z =^d N_{P(P+1)/2 × m}(0, F), where vecS defines the operator that vectorizes the upper triangular entries of a symmetric matrix:

vecS(S) = (s11, . . . , s1P ; . . . ; s_{P−1,P−1}, s_{P−1,P} ; s_{P,P})^*.

³ S(n, R) is the space of real symmetric matrices of size n × n.
5.2 Consistency of Θ̂^M_N
Let Θ0 be the true parameter value. To prove the consistency of Θ̂^M_N, the following additional assumptions on the parametrization Θ are required:
i) The parameter space Ξ ⊆ Q0 (see (18)) is a finite-dimensional compact set and

Θ0 ∈ int Ξ ;  (22)

ii) For some j* = j1, . . . , j2,

Θ ≠ Θ0 ⇒ |E_{i1*,i2*}(2^{j*}, Θ)| ≠ |E_{i1*,i2*}(2^{j*}, Θ0)|  (23)

for some matrix entry (i1*, i2*) ;

iii) ∀ i1, i2 = 1, . . . , P, j = j1, . . . , j2,

E_{i1,i2}(2^j, Θ0) ≠ 0 ;  (24)

iv) The mapping Θ ↦ E(2^j, Θ)  (25)

is three times continuously differentiable on int Ξ.
Under (24), the functions log2 |E_{i1,i2}(2^j, Θ)|, i1, i2 = 1, . . . , P, are well-defined. This fact and Theorem 5.1 then imply that the functions log2 |S_{i1,i2}(2^j)| are well-defined with probability going to 1. In turn, condition (23) implies that the (entry-wise) absolute value of the target matrix E(2^{j*}, Θ) is (parametrically) identifiable, namely, there is an injective function Ξ ∋ Θ ↦ |E(2^{j*}, Θ)|.
The objective function C_N(Θ) is a function of N, and so is S(2^j), j = j1, . . . , j2. Since C_N(·) is continuous and Ξ is compact, for all N a minimum Θ̂^M_N is attained (a.s.), whence we can form one such sequence

{Θ̂^M_N}_{N∈N}.  (26)
Any sequence (26) defines an M -estimator of Θ0 , e.g., [23, chapter 5]. The next theorem
shows that (26) is consistent.
Theorem 5.2. Under the assumptions of Theorem 5.1, suppose in addition that conditions i) to iv) hold. Then, the sequence of minima (26) is consistent for Θ0, namely,

Θ̂^M_N → Θ0 in probability.  (27)
Remark 5.3. The uniqueness of Θ̂^M_N for a given N is not ensured by conditions i) to iv), but it is not needed in Theorem 5.2.
5.3 Asymptotic normality of Θ̂^M_N
By comparison to consistency, showing asymptotic normality will require an additional assumption, laid out next.
v) det( Σ_{j=j1}^{j2} Σ_{1≤i1≤i2≤n} Λ_{i1,i2}(2^j, Θ0) Λ_{i1,i2}(2^j, Θ0)^* ) > 0,  (28)

where we define the score-like vector Λ_{i1,i2}(2^j, Θ)^* = ∇_Θ log2 |E_{i1,i2}(2^j, Θ)|.
Theorem 5.4. Under the assumptions of Theorem 5.1, suppose in addition that condition v) holds. Let {Θ̂^M_N}_{N∈N} be a consistent sequence of minima of {C_N}_{N∈N}. Then,

√N (Θ̂^M_N − Θ0) →^d W,  N → ∞,  (29)

where

W =^d ( Σ_{j=j1}^{j2} Σ_{1≤i1≤i2≤P} Λ_{i1,i2}(2^j, Θ) Λ_{i1,i2}(2^j, Θ)^* )^{−1} Σ_{j=j1}^{j2} Σ_{1≤i1≤i2≤P} (2^{j/2} / log 2) Λ_{i1,i2}(2^j, Θ) Z_{i1,i2}(2^j) / E_{i1,i2}(2^j, Θ0),

where Z = (Z_{i1,i2}(2^j))_j is a random vector whose distribution is obtained in the weak limit (21).
The next result is a corollary to Theorems 5.2 and 5.4.
Corollary 5.5. Under the assumptions of Theorem 5.4, let {Θ̂^M_N}_{N∈N} be a sequence of minima of the objective function (17). Also, let {Θ̂^{M,BB}_N}_{N∈N} be an estimator of the form (20) which satisfies

‖Θ̂^M_N − Θ̂^{M,BB}_N‖ ≤ C / N^{1/2+ε}  a.s.  (30)

for constants C, ε > 0. Then,

√N (Θ̂^{M,BB}_N − Θ0) →^d W,  N → ∞,

where the random vector W is given in (29).
Remark 5.6. The condition (30) is easily satisfied in practice, since over a compact set and at a low computational cost a Branch and Bound algorithm is guaranteed to yield a solution which lies at a controllable distance from the true minimum.
Remark 5.7. The technical condition (28) should be satisfied in many cases of interest, as discussed in Appendix D.
6 Estimation performance: empirical study
6.1 Numerical simulation setting
Numerical simulation setting
Monte Carlo experiments were performed to empirically quantify the finite size performance
of the estimator Θ̂^{M,BB}_N. To examine the influence of Θ on the estimation performance, 9
different values of Θ were used, obtained essentially by varying the strength of the correlation
amongst components (ρx = 0.1, 0.45 and 0.8) and of the mixing factor (no mixing, β = γ =
0 ; orthogonal mixing, β = γ = 0.5, and β = −γ = 0.5, referred to as anti-orthogonal).
Three different sample sizes (short, N = 210 , medium, N = 214 and large, N = 218 ) were
investigated. Results are reported here for (h1 , h2 ) = (0.4, 0.8). Equivalent conclusions
are obtained for other choices of (h1 , h2 ). For each set of parameters Θ, the estimation
performance was assessed by means of box plots computed from 100 independent copies.
The synthesis of OfBm is achieved by using the multivariate toolbox devised in [12, 13], cf.
www.hermir.org. The computational loads are also quantified as a percentage of the number
of iterations that would be required by a systematic greedy grid search. The wavelet analysis
was based on least asymmetric orthonormal Daubechies wavelets [17]. All available scales
were used to compute CN : j1 = 1 ≤ j ≤ j2 = log2 N − Nψ − 1. The results are reported
Figure 3: Estimation performance of h2 as function of log2 N .
for Nψ = 2; it was found that further increasing Nψ did not improve the performance. The proposed Branch & Bound procedure was run with S0 = ∪_{i=1}^{∆²} C_i, for ∆ = 50. The impact of varying the requested precision δ was also investigated.
The performance of Θ̂^{M,BB}_N, re-labelled Θ̂^M for simplicity, is compared against other existing estimation procedures. The scaling exponents (h1, h2) are estimated by means of the univariate wavelet-based estimator of the Hurst parameter, (ĥ^U_1, ĥ^U_2), as described in [24] and applied to each component independently: the univariate estimate of h1 (resp. h2) is obtained by taking the minimum (resp. maximum) of the linear regression coefficients of log2 S11(2^j) and log2 S22(2^j) versus j ∈ J^fs (resp. j ∈ J^cs), with J^fs = {j1, . . . , ⌊(j1+j2)/2⌋} (fine scales) and J^cs = {⌊(j1+j2)/2⌋ + 1, . . . , j2} (coarse scales). The parameters (h1, h2, β) are also estimated using the multivariate semiparametric estimator (ĥ^W_1, ĥ^W_2, β̂^W) proposed in [1], which relies on the multiscale eigenstructure of S(2^j). The statistics Θ̂^M, (ĥ^W_1, ĥ^W_2, β̂^W) and (ĥ^U_1, ĥ^U_2) are compared in Figs. 4 to 7 in yellow, blue and magenta colors, respectively.
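The univariate regression estimate described above can be sketched as an unweighted least-squares fit of log2 S(2^j) on j (the estimator of [24] is more elaborate; this only illustrates the reading h = (slope − 1)/2 of the power law 2^{j(2h+1)}):

```python
import math

def hurst_regression(S_diag, octaves):
    """Unweighted least-squares fit of log2 S(2^j) on j; since
    E S(2^j) ~ 2^{j(2h+1)}, the Hurst estimate is h = (slope - 1) / 2."""
    ys = [math.log2(s) for s in S_diag]
    n = len(octaves)
    jm = sum(octaves) / n
    ym = sum(ys) / n
    slope = (sum((j - jm) * (y - ym) for j, y in zip(octaves, ys))
             / sum((j - jm) ** 2 for j in octaves))
    return (slope - 1) / 2

# Noise-free sanity check on a pure power law with h = 0.7
octaves = list(range(1, 9))
S11 = [2.0 ** (j * (2 * 0.7 + 1)) for j in octaves]
print(round(hurst_regression(S11, octaves), 6))  # 0.7
```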
Though the full parametrization of Biv-OfBm requires a 7-dimensional vector parameter
Θ = (h1 , h2 , ρx , σx1 , σx2 , β, γ),
for ease of exposition we focus only on the 5 most interesting parameters (h1 , h2 , ρx , β, γ).
This follows the univariate literature that focuses on the estimation of H for fBm, while
neglecting the less interesting parameter σ 2 .
6.2 Estimation performance
Estimation of the dominant scaling exponent h2 (Fig. 3). As expected, all procedures yield accurate estimates of the largest scaling exponent h2 . While all methods show
comparable performance for large sample sizes, it is interesting that ĥ^M_2 displays better performance, with lower bias and dispersion, for small sample sizes by comparison to ĥ^W_2 and ĥ^U_2.
Figure 4: Estimation performance of h1 as function of log2 N .
The impact of the correlation ρx or the mixing parameters (β, γ) on the performance is weak.
Estimation of the non-dominant scaling exponent h1 (Fig. 4). Estimating the lowest scaling exponent h1 is intrinsically more difficult because the mixture of power laws
masks the non-dominant Hurst eigenvalue. As expected, univariate-type analysis fails to estimate h1 correctly (except when there is no mixing, β = γ = 0). While ĥ^M_1 and ĥ^W_1 show essentially the same performance for large sample sizes, it is interesting to note that ĥ^M_1 displays a far superior performance, with lower bias and dispersion, for small sample sizes. However, a bias of ĥ^M_1 for β = −γ and small correlation ρx = 0.1 is observed, showing that ĥ^M_1 is more strongly affected by the conjunction of low correlation amongst components and anti-orthogonal mixing than ĥ^M_2.
Estimation of β (Fig. 5). A significant benefit of β̂^M is its robustness to small sample sizes, where β̂^W is not robust. While the mixing parameters seem not to impact the performance of β̂^M, a low correlation value ρx hurts its performance. Other results, not reported here for reasons of space, also show that the performance of β̂^M is robust to a decrease of h2 − h1, while that of β̂^W drastically deteriorates when h2 − h1 → 0.
Estimation of γ (Fig. 6). The performance of γ̂^M is very satisfactory, yet it is observed to be affected by low correlation.
Estimation of ρx (Fig. 7). The parameter ρx appears to be the most difficult to estimate, with significant bias for low correlation and anti-orthogonal mixing, a result consistent with [3].
Figure 5: Estimation performance of β as function of log2 N .
6.3 Computational costs
The computational costs of the proposed identification algorithm described in Section 4.3.2
are reported in Fig. 8 (top plots) as a function of log2 N for each parameter setting. They
are significantly smaller than those required by a systematic greedy grid search. Fig. 8
also clearly shows that the stronger the correlation amongst components, the easier the
minimization of the functional CN , thus indicating that the cross terms play a significant
role in the identification of Biv-OfBm. Though surprising at first, the clear decrease of the
computational costs with the increase of the sample size may be interpreted as the fact that
it is obviously far easier to disentangle three different power laws when a large number of
scales 2j is available, which then requires large sample sizes. It is also worth noting that
the orthogonal mixing, which may intuitively be thought of as the easiest, appears to be the
most demanding in terms of iterations to minimize CN . Unsurprisingly, the computational
cost increases when the requested precision δ on the estimates is increased (δ → 0), cf. Fig. 9 (left).
6.4 Sample size versus precision
Figs. 3 to 7 show that increasing the sample size N improves the performance of (ĥ^U_1, ĥ^U_2) and (ĥ^W_1, ĥ^W_2, β̂^W), as both their median-bias and variance decrease with N. Fig. 9 (right) indicates that the impact of N is slightly more involved for Θ̂^M. As long as the dispersions
of the estimates remain above the desired precision δ (targeted independently of N ), one
observes an expected decrease in the bias and dispersion when N is increased. Because the
minimization is stopped when the prescribed accuracy δ is reached, increasing N without
decreasing δ does not bring about any performance improvement, showing that the precision
should be decreased when the sample size increases (empirically as N −1/4 ) to improve the
performance.
Figure 6: Estimation performance of γ as function of log2 N .
6.5 Asymptotic normality of Θ̂
For a thorough study of normality, Θ is restricted to (h1 , h2 ), while all other parameters are
fixed a priori and known. Averages are obtained over 1000 independent realizations of Biv-OfBm for each sample size. Fig. 10 (left) visually compares the empirical distributions of ĥ^M_1, ĥ^M_2 to their best Gaussian fits. Fig. 10 (right) measures the Kullback-Leibler divergence between the empirical distributions of ĥ^M_1, ĥ^M_2 and their best Gaussian fits, as a function of the sample size N. Fig. 10 confirms the asymptotic normality of the estimates ĥ^M_1, ĥ^M_2, as theoretically predicted in Section 5. It further shows that the higher the requested precision δ, the faster the convergence to normality. Moreover, asymptotic normality is reached for far smaller sample sizes for the largest Hurst exponent estimate ĥ^M_2 than for the smallest one ĥ^M_1.
7 Conclusion
To the best of our knowledge, this contribution proposes the first full identification procedure
for Biv-OfBm. Its originality is to formulate identification as a non-linear regression as well
as to propose a Branch & Bound procedure to provide efficient and elegant solutions to the
corresponding non-convex optimization problem.
Consistency and asymptotic normality of the estimates are shown theoretically in a general multivariate setting. The estimation performance is assessed for finite sample sizes by Monte Carlo simulations and found globally satisfactory for all parameters. The parameters γ (mixing) and ρx (correlation amongst components) remain the most difficult to estimate, though no other estimation procedure for them has yet been proposed in the literature. However, including the estimation of γ and ρx in the non-linear regression formulation makes it possible to outperform the state-of-the-art method, (ĥ^W_1, ĥ^W_2, β̂^W), for estimating h1, h2 and β, at the price of massively increased computational costs. The proposed Branch & Bound procedure is nevertheless shown to have a significantly lower computational
Figure 7: Estimation performance of ρx as function of log2 N .
cost compared to the infeasible greedy grid search strategy. The estimation performance is satisfactory and controlled enough that application to real-world data can be investigated.
This estimation procedure, together with its performance assessment, paves the way for hypothesis testing, where, e.g., testing the absence of mixing (i.e., W is diagonal) or of correlation amongst components (i.e., ρx ≡ 0) are obviously interesting issues in practice. Routines permitting both the identification and synthesis of OfBm will be made publicly available at the time of publication. This methodology has also been applied to real Internet traffic data [11].
A Interval arithmetic
Interval arithmetic is classically used to compute lower bounds for an objective criterion CN
on a convex set R [19, 15, 20]. It relies on the explicit decomposition of CN into several
elementary functions such as sum, product, inverse, square, logarithm, exponential,. . . ,
referred to as the calculus tree. For the sake of readability, we do not detail the calculus
tree associated with the full C_N, but only that of its first term (13), sketched in Fig. 11 (left). The leaves of the tree, i.e., the bottom line, consist of occurrences of the variables involved in Θ. Each node of the graph corresponds to an elementary function applied to its children. The intervals are composed and propagated from bottom to top to obtain a bound for C_N within
R. The literature on interval arithmetic provides boundaries for most of the elementary
functions on intervals:
• [a, b] + [c, d] = [a + c, b + d]
• [a, b] − [c, d] = [a − d, b − c]
• [a, b] × [c, d] = [min{ac, ad, bc, bd}, max{ac, ad, bc, bd}]
• [a, b] ÷ [c, d] = [a, b] × [1/d, 1/c], if 0 ∉ [c, d]
Figure 8: Computational costs as functions of log2 N . Top: Average execution time
(in seconds). Bottom: percentage of performed iterations (compared to a greedy grid search
algorithm with same accuracy).
Figure 9: Precision. Computational costs (left) and dispersion of the estimates, as functions of targeted precision δ
Figure 10: Asymptotic normality. Left: empirical distributions of $\hat{h}^M_1$, $\hat{h}^M_2$ and best Gaussian fit, $N = 2^{20}$, $\delta = 10^{-3}$. Right: Kullback–Leibler divergence between the empirical distributions of $\hat{h}^M_1$, $\hat{h}^M_2$ and their best Gaussian fits, as functions of the sample size $N$.
• $\log([\underline{x},\overline{x}]) = [\log(\underline{x}),\ \log(\overline{x})]$, if $\underline{x} > 0$
• $\exp([\underline{x},\overline{x}]) = [\exp(\underline{x}),\ \exp(\overline{x})]$
Figure 11: Interval arithmetic and function ηh. Left: calculus tree associated with the first term in Eq.(13). Right: function ηh.
The only non-elementary function involved in CN is h ↦ ηh, defined in (11) and illustrated in Fig. 11 (right), for a least asymmetric orthogonal Daubechies wavelet ψ0 with Nψ = 2. From the study of its monotonicity, we devise the following empirical bounding scheme:
$$(\forall\,[\underline{h},\overline{h}] \subseteq [0,1])\qquad \eta_{[\underline{h},\overline{h}]} = \begin{cases}[\eta_{\underline{h}},\ \eta_{\overline{h}}], & \text{if } \overline{h} < 0.3,\\ [\eta_{\overline{h}},\ \eta_{\underline{h}}], & \text{if } \underline{h} \geq 0.3,\\ [\min(\eta_{\underline{h}},\eta_{\overline{h}}),\ 0.071], & \text{otherwise.}\end{cases} \qquad (31)$$
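A sketch of this monotonicity-based bounding scheme is below. Since the true function ηh depends on the chosen wavelet, the code uses a hypothetical stand-in (unimodal with maximum 0.071 at h = 0.3, mimicking Fig. 11); only the three-case dispatch reflects the scheme above:

```python
# Sketch of the empirical bounding scheme (31) for eta over an interval
# [h_lo, h_hi]. The function eta() below is a hypothetical stand-in for
# the wavelet-dependent eta_h: unimodal, peaking at 0.071 when h = 0.3.

def eta(h):
    return 0.071 * (1.0 - ((h - 0.3) / 0.7) ** 2)  # hypothetical stand-in

def eta_interval(h_lo, h_hi):
    if h_hi < 0.3:                                  # eta increasing on [h_lo, h_hi]
        return (eta(h_lo), eta(h_hi))
    if h_lo >= 0.3:                                 # eta decreasing on [h_lo, h_hi]
        return (eta(h_hi), eta(h_lo))
    return (min(eta(h_lo), eta(h_hi)), 0.071)       # interval straddles the peak
```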
B Proof of Theorem 5.2
Consider any sequence {ΘN}N∈N ∈ Ξ (i.e., not necessarily composed of minima). We claim that
$$C_N(\Theta_N) \xrightarrow{P} 0 \;\Longrightarrow\; \Theta_N \xrightarrow{P} \Theta_0. \qquad (32)$$
By contradiction, assume that we can choose a subsequence $\{\Theta_{N(r)}\}_{r\in\mathbb{N}}$ such that, with positive probability, $C_{N(r)}(\Theta_{N(r)}) < r^{-1}$ and $\|\Theta_{N(r)} - \Theta_0\| \geq C_0 > 0$. Then, the conditions (23) and (25) imply that there are indices $j^*$, $i_1^*$ and $i_2^*$, a constant $\delta > 0$ and a sequence of sets $E_{\delta,N(r)} = \{\omega : |\log_2|E_{i_1^*,i_2^*}(2^{j^*},\Theta_{N(r)})| - \log_2|E_{i_1^*,i_2^*}(2^{j^*},\Theta_0)|| \geq \delta\}$ such that $P(E_{\delta,N(r)}) \geq C_1 > 0$ for some $C_1 > 0$. So, choose $\varepsilon \in (0,\delta)$. By Theorem 5.1, for some $C_2$,
$$\frac{1}{r} > \sum_{j=j_1}^{j_2}\sum_{1\leq i_1\leq i_2\leq P}\{\log_2|S_{i_1,i_2}(2^j)| - \log_2|E_{i_1,i_2}(2^j,\Theta_0)| + \log_2|E_{i_1,i_2}(2^j,\Theta_0)| - \log_2|E_{i_1,i_2}(2^j,\Theta_{N(r)})|\}^2$$
$$\geq \big\{|\log_2|E_{i_1^*,i_2^*}(2^{j^*},\Theta_0)| - \log_2|E_{i_1^*,i_2^*}(2^{j^*},\Theta_{N(r)})|| - \varepsilon\big\}^2 \geq C_2 > 0$$
with non-vanishing probability (contradiction). Thus, (32) holds. Now consider the sequence
(26), and note that
$$0 \leq C_N(\widehat{\Theta}^M_N) = \inf_{\Theta\in\Xi} C_N(\Theta) \leq C_N(\Theta_0) \xrightarrow{P} 0, \qquad (33)$$
by Theorem 5.1. Therefore, by (32), the limit (27) holds.
C Proof of Theorem 5.4
Rewrite
$$C_N(\Theta) = \sum_{j=j_1}^{j_2}\sum_{1\leq i_1\leq i_2\leq P} (f_N)_{i_1,i_2,j}(\Theta) \in \mathbb{R},$$
where $(f_N)_{i_1,i_2,j}(\Theta) = \{\log_2|S_{i_1,i_2}(2^j)| - \log_2|E_{i_1,i_2}(2^j,\Theta)|\}^2$.
It is clear that for $k \in \mathbb{N}^*$,
$$C_N^{(k)}(\Theta) = \sum_{j=j_1}^{j_2}\sum_{1\leq i_1\leq i_2\leq P} (f_N)^{(k)}_{i_1,i_2,j}(\Theta). \qquad (34)$$
Fix a triple (i1 , i2 , j). By (22), (24) and (25), the first three derivatives of (fN )i1 ,i2 ,j (Θ)
with respect to Θ are well-defined in int Ξ. The first two derivatives at Θ can be expressed
as
$$(f_N)'_{i_1,i_2,j}(\Theta) = \nabla_\Theta (f_N)_{i_1,i_2,j}(\Theta)^* \qquad (35)$$
$$= -2\{\log_2|S_{i_1,i_2}(2^j)| - \log_2|E_{i_1,i_2}(2^j,\Theta)|\}\,\Lambda_{i_1,i_2}(2^j,\Theta), \qquad (36)$$
$$(f_N)''_{i_1,i_2,j}(\Theta) = \nabla_\Theta[\nabla_\Theta (f_N)_{i_1,i_2,j}(\Theta)^*] \qquad (37)$$
$$= 2\big\{\Lambda_{i_1,i_2}(2^j,\Theta)\Lambda_{i_1,i_2}(2^j,\Theta)^* - [\log_2|S_{i_1,i_2}(2^j)| - \log_2|E_{i_1,i_2}(2^j,\Theta)|]\,\nabla_\Theta\Lambda_{i_1,i_2}(2^j,\Theta)\big\}. \qquad (38)$$
Similarly, $(f_N)'''_{i_1,i_2,j}(\Theta)$ consists of sums and products of $\log_2|S(2^j)| - \log_2|E(2^j,\Theta)|$ and derivatives of $\Lambda_{i_1,i_2}(2^j,\Theta)$.
Rewrite $C'_N(\Theta) = \{(C'_N)_l(\Theta)\}_{l=1,\dots,\dim\Xi}$. By a second order Taylor expansion of $(C'_N)_l(\Theta)$ with Lagrange remainder,
$$\mathbb{R} \ni (C'_N)_l(\widehat{\Theta}_N) - (C'_N)_l(\Theta_0) = \nabla_\Theta (C'_N)_l(\Theta_0)(\widehat{\Theta}_N - \Theta_0) + \frac{1}{2!}(\widehat{\Theta}_N - \Theta_0)^*\,\nabla_\Theta[\nabla_\Theta (C'_N)_l((\widetilde{\Theta}_N)_l)^*]\,(\widehat{\Theta}_N - \Theta_0), \qquad (39)$$
where each entry $(\widetilde{\Theta}_N)_l$, $l = 1,\dots,\dim\Xi$, is a parameter value lying in a segment between $\Theta_0$ and $\widehat{\Theta}_N$. Thus,
$$\mathbb{R}^{\dim\Xi} \ni C'_N(\widehat{\Theta}_N) - C'_N(\Theta_0) = C''_N(\Theta_0)(\widehat{\Theta}_N - \Theta_0) + \Big\{\frac{1}{2!}(\widehat{\Theta}_N - \Theta_0)^*\,\nabla_\Theta[\nabla_\Theta (C'_N)_l((\widetilde{\Theta}_N)_l)^*]\Big\}_{l=1,\dots,\dim\Xi}(\widehat{\Theta}_N - \Theta_0), \qquad (40)$$
where each entry $(\widehat{\Theta}_N - \Theta_0)^*\nabla_\Theta[\nabla_\Theta (C'_N)_l((\widetilde{\Theta}_N)_l)^*]$, $l = 1,\dots,\dim\Xi$, is a row vector. By (22), $\widehat{\Theta}_N \in \operatorname{int}\Xi$ for large $N$ with probability going to 1. Thus, $C'_N(\widehat{\Theta}_N) = 0$. Solving (40) for $\widehat{\Theta}_N - \Theta_0$ yields
$$\sqrt{N}(\widehat{\Theta}_N - \Theta_0) = -\Big\{C''_N(\Theta_0) + \Big[\frac{1}{2!}(\widehat{\Theta}_N - \Theta_0)^*\,\nabla_\Theta[\nabla_\Theta (C'_N)_l((\widetilde{\Theta}_N)_l)^*]\Big]_{l=1,\dots,\dim\Xi}\Big\}^{-1}\sqrt{N}\,C'_N(\Theta_0). \qquad (41)$$
Under (25), by the consistency of $\widehat{\Theta}_N$ for $\Theta_0$ and the expression for $(f_N)'''_{i_1,i_2,j}(\Theta)$, we have for $l = 1,\dots,\dim\Xi$,
$$\nabla_\Theta[\nabla_\Theta (C'_N)_l((\widetilde{\Theta}_N)_l)^*] \xrightarrow{P} \nabla_\Theta[\nabla_\Theta (C'_N)_l(\Theta_0)^*]. \qquad (42)$$
The invertibility of the matrix between braces in (41) is ensured with probability going to 1 by the condition (28) and Theorem 5.1, since the expression (37) entails
$$C''_N(\Theta_0) \xrightarrow{P} 2\sum_{j=j_1}^{j_2}\sum_{1\leq i_1\leq i_2\leq P} \Lambda_{i_1,i_2}(2^j,\Theta_0)\Lambda_{i_1,i_2}(2^j,\Theta_0)^*. \qquad (43)$$
Let $\|\cdot\|_{\ell^1}$ be the entry-wise $\ell^1$ matrix norm. By the relations (34) and (35), as well as a first order Taylor expansion of $\log_2|\cdot|$ around $E_{i_1,i_2}(2^j,\Theta_0)$ under the condition (24), we can reexpress $\sqrt{N}\,C'_N(\Theta_0)$ as
$$-2\sum_{j=j_1}^{j_2} 2^{j/2}\sqrt{K_j}\sum_{1\leq i_1\leq i_2\leq P} \frac{S_{i_1,i_2}(2^j) - E_{i_1,i_2}(2^j,\Theta_0)}{(\log 2)\,E_{i_1,i_2}(2^j,\Theta_0)}\,\Lambda_{i_1,i_2}(2^j,\Theta_0) + o\Big(\sum_{j=j_1}^{j_2}\sqrt{K_j}\,\|S(2^j) - E(2^j,\Theta_0)\|_{\ell^1}\Big) \in \mathbb{R}^{\dim\Xi}. \qquad (44)$$
Then, the weak limit (29) is a consequence of (41), (42), (43), (44), Theorem 5.1 and the
Cramér-Wold device.
D Discussion of Remark 5.7
The condition (30) amounts to requiring the full rank of a sum of rank 1 terms. To fix ideas, consider a sum of the form vv∗ + ww∗, where v, w ∈ R²\{0}. Then, the sum has deficient rank 1 if and only if v and w are collinear. Indeed, assume that there is a vector u ≠ 0 such that u∗{vv∗ + ww∗}u = 0. Then u∗vv∗u = 0 = u∗ww∗u, so both v and w are orthogonal to u in R², whence collinearity follows.
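This rank argument is easy to verify numerically; the following sketch uses hypothetical vectors and is only illustrative:

```python
# Numeric check of the rank argument: vv* + ww* has rank 1 exactly when the
# nonzero vectors v, w in R^2 are collinear.
import numpy as np

def rank_of_sum(v, w):
    v = np.asarray(v, dtype=float).reshape(2, 1)
    w = np.asarray(w, dtype=float).reshape(2, 1)
    return np.linalg.matrix_rank(v @ v.T + w @ w.T)
```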
So, for notational simplicity, we assume that we can rewrite the wavelet spectrum as
$$|E_{i_1,i_2}(2^j,\Theta)| = a_{i_1,i_2}2^{j2h_1} + b_{i_1,i_2}2^{j(h_1+h_2)} + c_{i_1,i_2}2^{j2h_2},$$
where $i_1, i_2 = 1,\dots,P$ (c.f. [1], Lemma 4.2). Further assume that the only parameters to be estimated are $h_1 < h_2$. Then, for a fixed $j$ and a pair of indices $(i_1,i_2)$,
$$\frac{\partial}{\partial h_1}\log_2|E_{i_1,i_2}(2^j,\Theta)| = \frac{1}{\log 2}\,\frac{a_{i_1,i_2}\log(2^{2j})2^{j2h_1} + b_{i_1,i_2}\log(2^j)2^{j(h_1+h_2)}}{a_{i_1,i_2}2^{j2h_1} + b_{i_1,i_2}2^{j(h_1+h_2)} + c_{i_1,i_2}2^{j2h_2}},$$
$$\frac{\partial}{\partial h_2}\log_2|E_{i_1,i_2}(2^j,\Theta)| = \frac{1}{\log 2}\,\frac{b_{i_1,i_2}\log(2^j)2^{j(h_1+h_2)} + c_{i_1,i_2}\log(2^{2j})2^{j2h_2}}{a_{i_1,i_2}2^{j2h_1} + b_{i_1,i_2}2^{j(h_1+h_2)} + c_{i_1,i_2}2^{j2h_2}}.$$
This suggests that for at least two triplets $(i_1,i_2,j)$, the vectors
$$\Big(\frac{\partial}{\partial h_1}\log_2|E_{i_1,i_2}(2^j,\Theta)|,\ \frac{\partial}{\partial h_2}\log_2|E_{i_1,i_2}(2^j,\Theta)|\Big)^*$$
will not be collinear for most parametrizations in practice.
References
[1] P. Abry and G. Didier. Wavelet estimation for operator fractional Brownian motion.
Bernoulli, to appear, 2015.
[2] S. Achard, D. Bassett, A. Meyer-Lindenberg, and E. Bullmore. Fractal connectivity of
long-memory networks. Phys. Rev. E, 77(3):036104, 2008.
[3] P.-O. Amblard and J.-F. Coeurjolly. Identification of the Multivariate Fractional Brownian Motion. IEEE Trans. Signal Process., 59(11):5152–5168, Nov. 2011.
[4] P.-O. Amblard, J.-F. Coeurjolly, F. Lavancier, and A. Philippe. Basic properties of
the multivariate fractional Brownian motion. Bulletin de la Société Mathématique de
France, Séminaires et Congrès, 28:65–87, 2012.
[5] J.-M. Bardet, G. Lang, G. Oppenheim, A. Philippe, S. Stoev, and M. Taqqu.
Semi-parametric estimation of the long-range dependence parameter: a survey. In
P. Doukhan, G. Oppenheim, and M. S. Taqqu, editors, Theory and applications of
Long-range dependence, pages 557–577, Boston, 2003. Birkhäuser.
[6] P. Ciuciu, P. Abry, and B. J. He. Interplay between functional connectivity and scalefree dynamics in intrinsic fMRI networks. NeuroImage, 95:248–263, 2014.
[7] J.-F. Coeurjolly, P.-O. Amblard, and S. Achard. Wavelet analysis of the multivariate
fractional Brownian motion. ESAIM: Probability and Statistics, 17:592–604, Aug. 2013.
[8] G. Didier, H. Helgason, and P. Abry. Demixing multivariate-operator self-similar processes. In Proc. Int. Conf. Acoust., Speech Signal Process., Brisbane, Australia, 2015.
[9] G. Didier and V. Pipiras. Integral representations and properties of operator fractional
Brownian motions. Bernoulli, 17(1):1–33, 2011.
[10] G. Didier and V. Pipiras. Exponents, symmetry groups and classification of operator
fractional Brownian motions. Journal of Theoretical Probability, 25(2):353–395, 2012.
[11] J. Frecon, R. Fontugne, G. Didier, N. Pustelnik, K. Fukuda, and P. Abry. Non-linear
regression for bivariate self-similarity identification - application to anomaly detection
in Internet traffic based on a joint scaling analysis of packet and byte counts. In Proc.
Int. Conf. Acoust., Speech Signal Process., Shanghai, China, March 2016.
[12] H. Helgason, V. Pipiras, and P. Abry. Fast and exact synthesis of stationary multivariate
Gaussian time series using circulant embedding. Signal Process., 91(5):1123 – 1133,
2011.
[13] H. Helgason, V. Pipiras, and P. Abry. Synthesis of multivariate stationary series with
prescribed marginal distributions and covariance using circulant matrix embedding.
Signal Process., 91(8):1741 – 1758, 2011.
[14] K. Ichida and Y. Fujii. An interval arithmetic method for global optimization. Computing, 23(1):85–97, 1979.
[15] L. Jaulin, M. Kieffer, and O. Didrit. Applied interval analysis : with examples in
parameter and state estimation, robust control and robotics. Springer, London, 2001.
[16] R. B. Kearfott. An interval branch and bound algorithm for bound constrained optimization problems. J. Global Optim., 2(3):259–280, 1992.
[17] S. Mallat. A Wavelet Tour of Signal Processing, Third Edition: The Sparse Way.
Academic Press, 3rd edition, 2008.
[18] R. Moore, E. Hansen, and A. Leclerc. Rigorous methods for global optimization. In Recent Advances in Global Optimization, pages 321–342. Princeton University Press, Princeton, NJ, USA, 1992.
[19] R. E. Moore. Interval analysis. Prentice-Hall Inc., Englewood Cliffs, N.J., 1966.
[20] R. E. Moore, R. B. Kearfott, and M. J. Cloud. Introduction to Interval Analysis. Society
for Industrial and Applied Mathematics, 2009.
[21] H. Ratschek and J. Rokne. New Computer Methods for Global Optimization. Halsted
Press, New York, NY, USA, 1988.
[22] G. Samorodnitsky and M. Taqqu. Stable non-Gaussian random processes. Chapman
and Hall, New York, 1994.
[23] A. Van der Vaart. Asymptotic Statistics, volume 3. Cambridge University Press, 2000.
[24] D. Veitch and P. Abry. A wavelet-based joint estimator of the parameters of long-range
dependence. IEEE Trans. Inform. Theory, 45(3):878–897, 1999.
[25] H. Wendt, P. Abry, and S. Jaffard. Bootstrap for empirical multifractal analysis. IEEE
Signal Process. Mag., 24(4):38–48, 2007.
Multiobjective Optimization in a Quantum Adiabatic Computer∗
Benjamı́n Barán†
Marcos Villagra‡
arXiv:1605.03152v3 [] 30 Jun 2017
Universidad Nacional de Asunción
Núcleo de Investigación y Desarrollo Tecnológico (NIDTEC)
Abstract
In this work we present a quantum algorithm for multiobjective combinatorial optimization. We show how to map a convex combination of objective functions onto a Hamiltonian and then use that Hamiltonian to prove that the quantum adiabatic algorithm of Farhi et al. [FGGS00] can find Pareto-optimal solutions in finite time, provided certain convex combinations of objectives are used and the underlying multiobjective problem meets certain restrictions.
Keywords: quantum computation, multiobjective optimization, quantum adiabatic evolution.
1 Introduction
Optimization problems are pervasive in everyday applications like logistics, communication networks, artificial intelligence and many other areas. Consequently, there is a high demand for efficient algorithms for these problems. Many algorithmic and engineering techniques are being developed to make an efficient use of computational resources in optimization problems. In fact, several engineering applications are multiobjective optimization problems, where several objectives must be optimized at the same time. For a survey on multiobjective optimization see for example Refs. [EG00, vLBB14]. In this work, we present what we consider the first algorithm for multiobjective optimization using a quantum adiabatic computer.
Quantum computation is a promising paradigm for the design of highly efficient algorithms based on the principles of quantum mechanics. Researchers have studied the computational power of quantum computers by showing the advantages they present over classical computers in many applications. Two of the most well-known applications are in unstructured search and the factoring of composite numbers. In unstructured search, Grover's algorithm can find a single marked element among n elements in time O(√n), whereas any classical algorithm requires time at least proportional to n [Gro96]. Shor's algorithm can factor composite numbers in polynomial time—any other known classical algorithm can find factors of composite numbers in subexponential time (it is open whether a classical algorithm can find factors in polynomial time) [Sho94].
Initially, before the year 2000, optimization problems were not easy to solve using quantum computers. This was because most studied models of quantum computers were based on quantum circuits, which presented difficulties for the design of optimization algorithms. The first paper reporting on solving an optimization problem was Ref. [DH99]. Their algorithm finds a minimum inside an array of n numbers in time O(√n). More recently, Baritompa et al. [BBW05] presented an improved algorithm based on Ref. [DH99]; this latter algorithm, however, does not have a proof of convergence in finite time. The algorithms of Refs. [DH99] and [BBW05] are based on Grover's search, and hence, on the quantum circuit model.
Quantum Adiabatic Computing was introduced by Farhi et al. [FGGS00] as a new quantum algorithm and computation paradigm more friendly to optimization problems. This new paradigm is based on the natural phenomenon of quantum annealing [DC08]; analogously to classical annealing, optimization problems are mapped onto a natural optimization phenomenon, and thus, optimal solutions are found by just letting this phenomenon take place.
∗ An extended abstract of this paper appeared in Ref. [BV16]
† Email: [email protected]
‡ Email: [email protected]; corresponding author
The algorithms of Refs. [DH99] and [BBW05] are difficult to extend to multiobjective optimization and
to prove convergence in finite time. Hence, quantum adiabatic computing presents itself as a more suitable
model to achieve the following two goals: (i) to propose a quantum algorithm for multiobjective optimization
and (ii) prove convergence in finite time of the algorithm.
In this work, as our main contribution, we show that the quantum adiabatic algorithm of Farhi et al. [FGGS00] can be used to find Pareto-optimal solutions in finite time provided certain restrictions are met. In Theorem 4.1, we identify two structural features that any multiobjective optimization problem must have in order to use the aforementioned adiabatic algorithm.
The outline of this paper is the following. In Section 2 we present a brief overview of multiobjective
combinatorial optimization and introduce the notation used throughout this work; in particular, several new
properties of multiobjective combinatorial optimization are also presented that are of independent interest. In
Section 3 we explain the quantum adiabatic theorem, which is the basis of the adiabatic algorithm. In Section
4 we explain the adiabatic algorithm and its application to combinatorial multiobjective optimization. In
Section 5 we prove our main result of Theorem 4.1. In Section 6 we show how to use the adiabatic algorithm
in a concrete problem. Finally, in Section 7 we present a list of challenging open problems.
2 Multiobjective Combinatorial Optimization
In this section we introduce the notation used throughout this paper and the main concepts of multiobjective
optimization. The set of natural numbers (including 0) is denoted N, the set of integers is Z, the set of real
numbers is denoted R and the set of positive real numbers is R+ . For any i, j ∈ N, with i < j, we let [i, j]Z
denote the discrete interval {i, i + 1, . . . , j − 1, j}. The set of binary words of length n is denoted {0, 1}n .
We also let poly(n) = O(nc ) be a polynomial in n.
A multiobjective combinatorial optimization problem (or MCO) is an optimization problem involving
multiple objectives over a finite set of feasible solutions. These objectives typically present trade-offs among
solutions and in general there is no single optimal solution. In this work, we follow the definition of Ref.
[KLP75]. Furthermore, with no loss of generality, all optimization problems considered in this work are
minimization problems.
Let S1, . . . , Sd be totally ordered sets and let ≤i be an order on the set Si for each i ∈ [1, d]Z. We also let ni be the cardinality of Si. Define the natural partial order relation ≺ over the cartesian product S = S1 × · · · × Sd in the following way. For any u = (u1, . . . , ud) and v = (v1, . . . , vd) in S, we write u ≺ v if and only if for every i ∈ [1, d]Z it holds that ui ≤i vi; otherwise we write u ⊀ v. An element u ∈ S is a minimal element if there is no v ∈ S such that v ≺ u and v ≠ u. Moreover, we say that u is non-comparable with v if u ⊀ v and v ⊀ u, and succinctly write u ∼ v. In the context of multiobjective optimization, the relation ≺ as defined here is often referred to as the Pareto-order relation [KLP75].
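For concreteness, the Pareto-order relation and minimal elements just defined can be sketched as follows (an illustrative snippet, not part of the paper):

```python
# Illustrative sketch of the Pareto-order relation: u precedes v iff
# u_i <= v_i in every coordinate; u ~ v (non-comparable) iff neither
# dominates the other.

def pareto_leq(u, v):
    return all(ui <= vi for ui, vi in zip(u, v))

def non_comparable(u, v):
    return not pareto_leq(u, v) and not pareto_leq(v, u)

def minimal_elements(points):
    # u is minimal if no distinct v satisfies v precedes u
    return [u for u in points
            if not any(pareto_leq(v, u) and v != u for v in points)]
```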
Definition 2.1 A multiobjective combinatorial optimization problem (or shortly, MCO) is defined as a tuple Π = (D, R, d, F, ≺) where D is a finite set called the domain, R ⊆ R+ is a set of values, d is a positive integer, F is a finite collection of functions {fi}i∈[1,d]Z where each fi maps from D to R, and ≺ is the Pareto-order relation on Rd (here Rd is the d-fold cartesian product of R). Define a function f that maps D to Rd as f(x) = (f1(x), . . . , fd(x)), referred to as the objective vector of Π. If f(x) is a minimal element of the image f(D) we say that x is a Pareto-optimal solution of Π. For any two elements x, y ∈ D, if f(x) ≺ f(y) we write x ≺ y; similarly, if f(x) ∼ f(y) we write x ∼ y. For any x, y ∈ D, if x ≺ y and y ≺ x we say that x and y are equivalent and write x ≡ y. The set of all Pareto-optimal solutions of Π is denoted P(Π).
A canonical example in multiobjective optimization is the Two-Parabolas problem. In this problem we
have two objective functions defined by two parabolas that intersect in a single point, see Fig.1. In this work,
we will only be concerned with a combinatorial version of the Two-Parabolas problem where each objective
function only takes values on a finite set of numbers.
Considering that the set of Pareto-optimal solutions can be very large, we are mostly concerned with finding
a subset of the Pareto-optimal solutions. Optimal query algorithms to find all Pareto-optimal solutions for
d = 2, 3 and almost tight upper and lower bounds for any d ≥ 4 up to polylogarithmic factors were discovered
by [KLP75]; [PY00] showed how to find an approximation to all Pareto-optimal solutions in polynomial time.
For the remaining of this work, ≺ will always be the Pareto-order relation and will be omitted from the
definition of any MCO. Furthermore, for convenience, we will often write Πd = (D, R, F ) as a short-hand
for Π = (D, R, d, F ). In addition, we will assume for this work that each function fi ∈ F is computable in
polynomial time and each fi (x) is bounded by a polynomial in the number of bits of x.
Figure 1: The Two-Parabolas Problem. The first objective function f1 is represented by the bold line and
the second objective function f2 by the dashed line. For MCOs, each objective function takes values only on
the natural numbers. Note that there are no equivalent elements in the domain. In this particular example,
all the solutions between 7 and 15 are Pareto-optimal.
Definition 2.2 An MCO Πd is well-formed if for each fi ∈ F there is a unique x ∈ D such that fi (x) = 0.
An MCO Πd is normal if it is well-formed and fi (x) = 0 and fj (y) = 0, for i 6= j, implies x 6= y.
In a normal MCO, the value of an optimal solution in each fi is 0, and all optimal solutions are different. In Fig.1, solutions 7 and 15 are optimal solutions of f1 and f2 with value 0, respectively; hence, the Two-Parabolas problem of Fig.1 is normal.
Definition 2.3 An MCO Πd is collision-free if given λ = (λ1, . . . , λd), with each λi ∈ R+, for any i ∈ [1, d]Z and any pair of distinct x, y ∈ D it holds that |fi(x) − fi(y)| > λi. If Πd is collision-free we write succinctly Πλd.
The Two-Parabolas problem of Fig.1 is not collision-free; for example, for solutions 5 and 9 we have that
f1 (5) = f1 (9). In Section 6 we show how to turn the Two-Parabolas problem into a collision-free MCO.
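An illustrative predicate for Definition 2.3, with hypothetical objective vectors (not taken from the paper):

```python
# Illustrative check of Definition 2.3: an MCO is collision-free w.r.t.
# the tolerance vector lam if every pair of distinct solutions differs by
# more than lam[i] in every objective i.

def collision_free(objective_vectors, lam):
    return all(abs(u[i] - v[i]) > lam[i]
               for k, u in enumerate(objective_vectors)
               for v in objective_vectors[k + 1:]
               for i in range(len(lam)))
```

Two solutions with equal values in some objective, like solutions 5 and 9 of Fig.1, immediately violate the condition.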
Definition 2.4 A Pareto-optimal solution x is trivial if x is an optimal solution of some fi ∈ F .
In Fig.1, solutions 7 and 15 are trivial Pareto-optimal solutions, whereas any x between 7 and 15 is
non-trivial.
Lemma 2.5 For any normal MCO Πd, if x and y are trivial Pareto-optimal solutions of Πd, then x and y are not equivalent.
Proof. Let x, y be two trivial Pareto-optimal solutions of Πd. There exist i ≠ j such that fi(x) = 0 and fj(y) = 0. Since Πd is normal we have that x ≠ y, fi(y) > 0 and fj(x) > 0; hence, x ∼ y and they are not equivalent.
Let Wd be a set of normalized vectors in [0, 1)d (the half-open interval between 0 and 1), defined as
$$W_d = \Big\{w = (w_1, \dots, w_d) \in [0,1)^d \ \Big|\ \sum_{i=1}^d w_i = 1\Big\}. \qquad (1)$$
For any w ∈ Wd, define ⟨f(x), w⟩ = ⟨w, f(x)⟩ = w1 f1(x) + · · · + wd fd(x).
Lemma 2.6 Given Πd = (D, R, F), any two elements x, y ∈ D are equivalent if and only if for all w ∈ Wd it holds that ⟨f(x), w⟩ = ⟨f(y), w⟩.
Proof. Assume that x ≡ y. Hence f(x) = f(y). If we pick any w ∈ Wd we have that
⟨f(x), w⟩ = w1 f1(x) + · · · + wd fd(x) = w1 f1(y) + · · · + wd fd(y) = ⟨f(y), w⟩.
Now suppose that for all w ∈ Wd it holds that ⟨f(x), w⟩ = ⟨f(y), w⟩. By contradiction, assume that x ≢ y. With no loss of generality, assume further that there is exactly one i ∈ [1, d]Z such that fi(x) ≠ fi(y). Hence
$$w_i(f_i(x) - f_i(y)) = \sum_{j\neq i} w_j(f_j(y) - f_j(x)). \qquad (2)$$
The right hand side of Eq.(2) is 0 because for all j ≠ i we have that fj(x) = fj(y). The left hand side of Eq.(2), however, is not 0 by our assumption; hence, a contradiction. Therefore, it must be that x and y are equivalent.
Lemma 2.7 Let Πd = (D, R, F). For any w ∈ Wd there exists x ∈ D such that if ⟨f(x), w⟩ = min_{y∈D}{⟨f(y), w⟩}, then x is a Pareto-optimal solution of Πd.
Proof. Fix w ∈ Wd and let x ∈ D be such that ⟨f(x), w⟩ is minimum among all elements of D. For any y ∈ D, with y ≠ x, we need to consider two cases: (1) ⟨f(y), w⟩ = ⟨f(x), w⟩ and (2) ⟨f(y), w⟩ > ⟨f(x), w⟩.
Case (1). Here we have another two subcases: either fi(y) = fi(x) for all i, or there exists at least one pair i, j ∈ {1, . . . , d} such that wi fi(x) < wi fi(y) and wj fj(y) < wj fj(x). When fi(x) = fi(y) for each i = 1, . . . , d we have that x and y are equivalent. On the contrary, if wi fi(x) < wi fi(y) and wj fj(y) < wj fj(x), we have that fi(x) < fi(y) and fj(y) < fj(x), and hence, x ∼ y.
Case (2). In this case, there exists i ∈ {1, . . . , d} such that wi fi(x) < wi fi(y), and hence, fi(x) < fi(y). Thus, f(y) ⊀ f(x) and y ⊀ x for any y ≠ x.
We conclude from Case (1) that x ≡ y or x ∼ y, and from Case (2) that y ⊀ x. Therefore, x is Pareto-optimal.
In this work, we will concentrate on finding non-trivial Pareto-optimal solutions. Finding trivial elements can be done by letting wi = 1 for some i ∈ [1, d]Z and then running an optimization algorithm for fi; consequently, in Eq.(1) we do not allow any wi to be 1. The process of mapping several objectives to a single-objective optimization problem is sometimes referred to as a linearization of the MCO [EG00].
From Lemma 2.7, we know that some Pareto-optimal solutions may not be optimal solutions for any linearization w ∈ Wd. We define the set of non-supported Pareto-optimal solutions as the set N(Π) of all Pareto-optimal solutions x such that ⟨f(x), w⟩ is not optimal for any w ∈ Wd. We also define the set of supported Pareto-optimal solutions as S(Π) = P(Π) \ N(Π) [EG00].
Note that there may be Pareto-optimal solutions x and y that are non-comparable and ⟨f(x), w⟩ = ⟨f(y), w⟩ for some w ∈ Wd. That is equivalent to saying that the objective function obtained from a linearization of an MCO is not injective.
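The supported/non-supported distinction can be illustrated by scanning weights w ∈ W2: only the supported Pareto-optimal solutions are ever recovered. The data below are hypothetical, chosen so that one Pareto-optimal point lies in a non-convex region of the front:

```python
# Hypothetical sketch: scanning linearizations w in W_2 recovers only the
# supported Pareto-optimal solutions (those minimizing some <f(x), w>).

def weighted_sum(fx, w):
    return sum(wi * fi for wi, fi in zip(w, fx))

def supported(objs, weights):
    # objs: objective vectors f(x); returns indices that minimize
    # <f(x), w> for at least one scanned weight vector w.
    found = set()
    for w in weights:
        vals = [weighted_sum(fx, w) for fx in objs]
        m = min(vals)
        found |= {i for i, v in enumerate(vals) if v == m}
    return found

objs = [(0.0, 4.0), (1.0, 1.0), (2.5, 0.9), (4.0, 0.0)]
ws = [(k / 10, 1 - k / 10) for k in range(1, 10)]
# (2.5, 0.9) is Pareto-optimal but never minimizes a weighted sum:
# it is a non-supported Pareto-optimal solution.
```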
Definition 2.8 Any two elements x, y ∈ D are weakly-equivalent if and only if there exists w ∈ Wd such that ⟨f(x), w⟩ = ⟨f(y), w⟩.
By Lemma 2.6, any two equivalent solutions x, y are also weakly-equivalent; the converse, however, does not hold in general. For example, consider two objective vectors f(x) = (1, 2, 3) and f(y) = (1, 3, 2). Clearly, x and y are not equivalent; however, if w = (1/3, 1/3, 1/3) we can see that x and y are indeed weakly-equivalent. In Fig.1, points 10 and 12 are weakly-equivalent.
3 Quantum Adiabatic Computation
Starting from this section we assume basic knowledge of quantum computation. For a thorough treatment of quantum information science we refer the reader to the book by Nielsen and Chuang [NC00].
Let H be a Hilbert space with a finite basis $\{|u_i\rangle\}_i$. For any vector $|v\rangle = \sum_i \alpha_i|u_i\rangle$, the ℓ2-norm of $|v\rangle$ is defined as $\|v\| = \sqrt{\sum_i |\alpha_i|^2}$. For any matrix A acting on H, we define the operator norm of A induced by the ℓ2-norm as $\|A\| = \max_{\|v\|=1} \|A|v\rangle\|$.
The Hamiltonian of a quantum system gives a complete description of its time evolution, which is governed by the well-known Schrödinger equation
$$i\hbar\,\frac{d}{dt}|\Psi(t)\rangle = H(t)|\Psi(t)\rangle, \qquad (3)$$
where H is a Hamiltonian, $|\Psi(t)\rangle$ is the state of the system at time t, Planck's constant is denoted by ℏ and $i = \sqrt{-1}$. For simplicity, we will omit ℏ and i from now on. If H is time-independent, it is easy to see that a solution to Eq.(3) is simply $|\Psi(t)\rangle = U(t)|\Psi(0)\rangle$ where $U(t) = e^{-itH}$, using $|\Psi(0)\rangle$ as a given initial condition. When the Hamiltonian depends on time, however, Eq.(3) is not in general easy to solve and much research is devoted to it; nevertheless, there are a few known special cases.
Say that a closed quantum system is described by a time-dependent Hamiltonian H(t). If $|\Psi(t)\rangle$ is the minimum energy eigenstate of H(t), adiabatic time evolution keeps the system in its lowest energy eigenstate as long as the rate of change of the Hamiltonian is "slow enough." This natural phenomenon is formalized in the Adiabatic Theorem, first proved in Ref. [BF26]. Different proofs were given over the years, see for example Refs. [Kat50, Mes62, SWL04, Rei04, AR04]. In this work we make use of a version of the theorem presented in Ref. [AR04].
Consider a time-dependent Hamiltonian H(s), for 0 ≤ s ≤ 1, where s = t/T so that T controls the rate of change of H for t ∈ [0, T]. We denote by H′ and H′′ the first and second derivatives of H.
Theorem 3.1 (Adiabatic Theorem [BF26, Kat50, AR04]) Let H(s) be a nondegenerate Hamiltonian, let $|\psi(s)\rangle$ be one of its eigenvectors and γ(s) the corresponding eigenvalue. For any λ ∈ R+ and s ∈ [0, 1], assume that for any other eigenvalue γ̂(s) it holds that |γ(s) − γ̂(s)| > λ. Consider the evolution given by H on initial condition $|\psi(0)\rangle$ for time T and let $|\phi\rangle$ be the state of the system at T. For any nonnegative δ ∈ R, if
$$T \geq \frac{10^5}{\delta^2}\,\max\Big\{\frac{\|H'\|^3}{\lambda^4},\ \frac{\|H'\|\cdot\|H''\|}{\lambda^3}\Big\},$$
then $\|\phi - \psi(1)\| \leq \delta$.
The Adiabatic Theorem was used in [FGGS00] to construct a quantum algorithm for optimization problems and introduced a new paradigm in quantum computing known as quantum adiabatic computing. In the
following section, we briefly explain the quantum adiabatic algorithm and use it to solve MCOs.
4 The Quantum Adiabatic Algorithm
Consider a function f : {0,1}^n → R ⊆ R+ whose optimal solution x̄ gives f(x̄) = 0. Let H1 be a Hamiltonian defined as
$$H_1 = \sum_x f(x)\,|x\rangle\langle x|. \qquad (4)$$
Notice that $H_1|\bar{x}\rangle = 0$, and hence, $|\bar{x}\rangle$ is an eigenvector with minimum eigenvalue. Thus, an optimization problem reduces to finding an eigenstate with minimum eigenvalue [FGGS00]. For any s ∈ [0, 1], let H(s) = (1 − s)H0 + sH1, where H0 is an initial Hamiltonian chosen accordingly. If we initialize the system in the lowest energy eigenstate $|\psi(0)\rangle$, the adiabatic theorem guarantees that a running time T polynomial in 1/λ suffices to obtain a quantum state close to $|\psi(1)\rangle$, and hence, to our desired optimal solution. We call H1 and H0 the final and initial Hamiltonians, respectively.
After defining the initial and final Hamiltonians, the adiabatic theorem guarantees that we can find an optimal solution in finite time using the following procedure, known as the Quantum Adiabatic Algorithm. Let H(s) = (1 − s)H0 + sH1. Prepare the system in the ground-state $|\psi(0)\rangle$ of H(0). Then let the system evolve for time t close to T. Finally, after time t, read out the result by measuring the system in the computational basis. The only requirements, in order to make any use of the adiabatic algorithm, are that H0 and H1 must not commute and the total Hamiltonian H(s) must be nondegenerate in its minimum eigenvalue [FGGS00].
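As a toy numeric illustration of this procedure (not from the paper), the sketch below builds a 2-qubit interpolation H(s) = (1 − s)H0 + sH1 with hypothetical objective values and checks that the spectral gap stays positive along the path:

```python
# Toy 2-qubit sketch of the interpolation H(s) = (1-s)H0 + sH1.
# The objective values f and the penalty function h below are hypothetical.
import numpy as np

f = np.array([3.0, 1.0, 2.0, 4.0])        # f(x) over x in {00, 01, 10, 11}
H1 = np.diag(f - f.min())                 # final Hamiltonian; optimum has eigenvalue 0
Had = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
F2 = np.kron(Had, Had)                    # 2-fold Walsh-Hadamard operation
h = np.array([0.0, 1.0, 1.0, 2.0])        # h(0^n) = 0, h(x) >= 1 otherwise
H0 = F2 @ np.diag(h) @ F2                 # initial Hamiltonian, diagonal in the Hadamard basis

def gap(s):
    """Difference between the two smallest eigenvalues of H(s)."""
    e = np.linalg.eigvalsh((1.0 - s) * H0 + s * H1)
    return e[1] - e[0]

min_gap = min(gap(s) for s in np.linspace(0.0, 1.0, 101))
```

A strictly positive minimum gap is what allows the Adiabatic Theorem to bound the running time T.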
In this section we show how to construct the initial and final Hamiltonians for MCOs. Given any normal
and collision-free MCO Πλd = (D, R, F ) we will assume with no loss of generality that D = {0, 1}n, that is,
D is a set of binary words of length n.
For each i ∈ [1, d]Z define a Hamiltonian $H_{f_i} = \sum_{x\in\{0,1\}^n} f_i(x)\,|x\rangle\langle x|$. The minimum eigenvalue of each $H_{f_i}$ is nondegenerate and 0 because Πλd is normal and collision-free. For any w ∈ Wd, the final Hamiltonian Hw is defined as
$$H_w = w_1 H_{f_1} + \cdots + w_d H_{f_d} = \sum_{x\in\{0,1\}^n} \big(w_1 f_1(x) + \cdots + w_d f_d(x)\big)\,|x\rangle\langle x| = \sum_{x\in\{0,1\}^n} \langle f(x), w\rangle\,|x\rangle\langle x|. \qquad (5)$$
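Numerically, Eq.(5) is just a diagonal matrix whose entries are the linearized objective values. The sketch below (with hypothetical objectives f1, f2) also illustrates Lemma 2.7, since the minimum-eigenvalue eigenstate labels a minimizer of ⟨f(x), w⟩:

```python
# Sketch of the final Hamiltonian H_w of Eq.(5) for d = 2 and n = 2.
# The objective values below are hypothetical.
import numpy as np

f1 = np.array([0.0, 2.0, 3.0, 5.0])   # f1(x) over x in {00, 01, 10, 11}
f2 = np.array([5.0, 3.0, 1.0, 0.0])   # f2(x)
w = (0.5, 0.5)                        # a linearization w in W_2
Hw = np.diag(w[0] * f1 + w[1] * f2)   # H_w = w1*H_f1 + w2*H_f2
ground = int(np.argmin(np.diag(Hw)))  # index of the minimum-eigenvalue eigenstate
```

Here the ground state labels x = 10, whose objective vector (3, 1) is non-comparable with every other solution, i.e., a Pareto-optimal point.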
Following the work of [FGGS00], we choose as initial Hamiltonian one that does not diagonalize in the computational basis. Let $|\hat{0}\rangle = (|0\rangle + |1\rangle)/\sqrt{2}$ and $|\hat{1}\rangle = (|0\rangle - |1\rangle)/\sqrt{2}$. A quantum state $|\hat{x}\rangle$, for any x ∈ {0,1}^n, is obtained by applying the n-fold Walsh-Hadamard operation $F^{\otimes n}$ on $|x\rangle$. The set $\{|\hat{x}\rangle\}_{x\in\{0,1\}^n}$ is known as the Hadamard basis. The initial Hamiltonian is thus defined over the Hadamard basis as
$$H_0 = \sum_{x\in\{0,1\}^n} h(x)\,|\hat{x}\rangle\langle\hat{x}|, \qquad (6)$$
where h(0^n) = 0 and h(x) ≥ 1 for all x ≠ 0^n. It is easy to see that the minimum eigenvalue is nondegenerate∗ with corresponding eigenstate $|\hat{0}^n\rangle = \frac{1}{\sqrt{2^n}}\sum_{x\in\{0,1\}^n}|x\rangle$.
For any vector w in Euclidean space we define the ℓ1-norm of w as ‖w‖1 = |w1| + · · · + |wd|.
Theorem 4.1 Let Πλd be any normal and collision-free MCO. If there are no equivalent Pareto-optimal solutions, then for any w ∈ Wd there exists w′ ∈ Wd, satisfying ‖w − w′‖1 ≤ 1/poly(n), such that the quantum adiabatic algorithm, using Hw′ as final Hamiltonian, can find a Pareto-optimal solution x corresponding to w in finite time.
Note that if a linearization w gives a nondegenerate Hamiltonian H(s), we can directly use the adiabatic algorithm to find a Pareto-optimal solution. In the case of a degenerate Hamiltonian H(s), Theorem 4.1 tells us that we can still find a Pareto-optimal solution using the adiabatic algorithm, provided we choose a new w′ sufficiently close to w.
5 Eigenspectrum of the Final Hamiltonian
In this section we prove Theorem 4.1. Note that if the initial Hamiltonian does not commute with the
final Hamiltonian, it suffices to prove that the final Hamiltonian is nondegenerate in its minimum eigenvalue
[FGGS00]. For the remainder of this work, we let σw and αw be the smallest and second smallest eigenvalues
of Hw corresponding to a normal and collision-free MCO Πλd = (D, R, F ).
Lemma 5.1 Let x be a non-trivial Pareto-optimal solution of Πλd. For any w ∈ Wd it holds that σw > ⟨w, λ⟩.
Proof. Let σw = w1 f1(x) + · · · + wd fd(x), where x is a non-trivial Pareto-optimal element. Since Πλd is normal and collision-free and x is non-trivial, for each i ∈ [1, d]Z we have fi(x) = |fi(x) − 0| > λi, and hence
σw = w1 f1(x) + · · · + wd fd(x) > w1 λ1 + · · · + wd λd = ⟨w, λ⟩.
Lemma 5.2 For any w ∈ Wd, let Hw be a Hamiltonian with a nondegenerate minimum eigenvalue. The eigenvalue gap between the smallest and second smallest eigenvalues of Hw is at least ⟨λ, w⟩.
Proof. Let σw be the unique minimum eigenvalue of Hw. We have that σw = ⟨f(x), w⟩ for some x ∈ {0,1}^n. Now let αw = ⟨f(y), w⟩ be a second smallest eigenvalue of Hw for some y ∈ {0,1}^n where y ≠ x. Hence,
αw − σw = ⟨f(y), w⟩ − ⟨f(x), w⟩ = w1 f1(y) − w1 f1(x) + w2 f2(y) − w2 f2(x) > w1 λ1 + w2 λ2 = ⟨λ, w⟩.
Lemma 5.3 If there are no weakly-equivalent Pareto-optimal solutions in Πλd, then the Hamiltonian Hw is
nondegenerate in its minimum eigenvalue.
Proof. We prove the contrapositive. Suppose Hw is degenerate in its minimum eigenvalue σw. Take any two
degenerate minimal eigenstates |x⟩ and |y⟩, with x ≠ y, such that

    w1 f1(x) + · · · + wd fd(x) = w1 f1(y) + · · · + wd fd(y) = σw.

Then it holds that x and y are weakly-equivalent.
We further show that even if Πλd has weakly-equivalent Pareto-optimal solutions, we can find a nondegenerate Hamiltonian. Let m = max_{x,i} fi(x).
* In quantum physics, a Hamiltonian is degenerate when one of its eigenvalues has multiplicity greater than one.
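The correspondence used in the proof of Lemma 5.3 is easy to check numerically: the minimum eigenvalue of Hw is degenerate exactly when two distinct bit strings attain the same scalarized value ⟨w, f(x)⟩. A toy check (all numbers invented):

```python
import numpy as np

def min_degenerate(fs, w, tol=1e-12):
    """Return the indices of bit strings attaining the minimum of
    <w, f(x)>.  A result of size > 1 witnesses a degenerate minimum
    eigenvalue of H_w, i.e. weakly-equivalent solutions."""
    vals = np.asarray(w) @ np.asarray(fs, dtype=float)
    return np.flatnonzero(vals <= vals.min() + tol)

# Two solutions with equal scalarization under w = (0.5, 0.5):
f1 = [1.0, 2.0, 5.0, 6.0]
f2 = [3.0, 2.0, 1.0, 6.0]
argmins = min_degenerate([f1, f2], [0.5, 0.5])
print(argmins)  # x = 0 and x = 1 both give 2.0 -> degenerate minimum
```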
Lemma 5.4 For any Πλd, let x1, . . . , xℓ ∈ D be Pareto-optimal solutions that are not pairwise equivalent.
If there exists w ∈ Wd and σw ∈ R+ such that ⟨f(x1), w⟩ = · · · = ⟨f(xℓ), w⟩ = σw is minimum among
all y ∈ D, then there exists w′ ∈ Wd and i ∈ [1, ℓ]Z such that for all j ∈ [1, ℓ]Z, with j ≠ i, it holds that
⟨f(xi), w′⟩ < ⟨f(xj), w′⟩. Additionally, if the linearization w′ satisfies ‖w − w′‖1 ≤ ⟨λ, w⟩/(md), then ⟨f(xi), w′⟩ is
unique and minimum among all ⟨f(y), w′⟩ for y ∈ D.
Proof. We prove the lemma by induction on ℓ. Let ℓ = 2; then ⟨f(x1), w⟩ = ⟨f(x2), w⟩, and hence,

    w1 f1(x1) + · · · + wd fd(x1) = σw
    w1 f1(x2) + · · · + wd fd(x2) = σw    (7)

for some σw ∈ R+. From linear algebra we know that there is an infinite number of elements of Wd that
simultaneously satisfy Eq.(7). With no loss of generality, fix w3, . . . , wd and set b1 = w3 f3(x1) + · · · + wd fd(x1)
and b2 = w3 f3(x2) + · · · + wd fd(x2). We have that

    w1 f1(x1) + w2 f2(x1) = σw − b1
    w1 f1(x2) + w2 f2(x2) = σw − b2.    (8)

Again, by linear algebra, we know that Eq.(8) has a unique solution w1 and w2; it suffices to note that the
determinant of the coefficient matrix of Eq.(8) is not 0.
Choose any w′1 ≠ w1 and w′2 ≠ w2 satisfying w′1 + w′2 + w3 + · · · + wd = 1 and let w′ = (w′1, w′2, w3, . . . , wd).
Then we have that ⟨f(x1), w′⟩ ≠ ⟨f(x2), w′⟩ because w′1 and w′2 are not solutions to Eq.(8). Hence, either
⟨f(x1), w′⟩ or ⟨f(x2), w′⟩ must be smaller than the other.
Suppose that ⟨f(x1), w′⟩ < ⟨f(x2), w′⟩. We now claim that ⟨f(x1), w′⟩ is minimum and unique among all
y ∈ D. In addition to the constraint of the preceding paragraph that w′ must satisfy, in order for ⟨f(x1), w′⟩
to be minimum, we must choose w′ such that ‖w − w′‖1 ≤ ⟨λ, w⟩/(md).
Assume for the sake of contradiction the existence of y ∈ D such that ⟨f(y), w′⟩ ≤ ⟨f(x1), w′⟩. Hence,

    ⟨f(y), w′⟩ ≤ ⟨f(x1), w⟩ < ⟨f(y), w⟩.

From Lemma 5.1, we know that |⟨f(x1), w⟩ − ⟨f(y), w⟩| > ⟨λ, w⟩, and thus,

    |⟨f(y), w′⟩ − ⟨f(y), w⟩| > ⟨λ, w⟩.    (9)

Using the Cauchy-Schwarz inequality we have that

    |⟨f(y), w′⟩ − ⟨f(y), w⟩| = |⟨f(y), w′ − w⟩|
                            ≤ ‖f(y)‖1 · ‖w′ − w‖1
                            ≤ ⟨λ, w⟩,

where the last line follows from ‖f(y)‖1 ≤ md and ‖w − w′‖1 ≤ ⟨λ, w⟩/(md); from Eq.(9), however, we have that
|⟨f(y), w′ − w⟩| > ⟨λ, w⟩, which is a contradiction. Therefore, we conclude that ⟨f(y), w′⟩ > ⟨f(x1), w′⟩ for
any y ∈ D; the case for ⟨f(x1), w′⟩ > ⟨f(x2), w′⟩ can be proved similarly. The base case of the induction is
thus proved.
Now suppose the statement holds for ℓ. Let x1, . . . , xℓ, xℓ+1 be Pareto-optimal solutions that are not
pairwise equivalent. Let w ∈ Wd be such that ⟨f(x1), w⟩ = · · · = ⟨f(xℓ+1), w⟩ holds. By our induction
hypothesis, there exists w′ ∈ Wd and i ∈ [1, ℓ]Z such that ⟨f(xi), w′⟩ < ⟨f(y), w′⟩ for any other y ∈ D.
If ⟨f(xi), w′⟩ ≠ ⟨f(xℓ+1), w′⟩ then we are done, because either one must be smaller. Suppose, however,
that ⟨f(xℓ+1), w′⟩ = ⟨f(xi), w′⟩ = σw′ for some σw′ ∈ R+. From the base case of the induction we know
there exists w′′ ≠ w′ that makes ⟨f(xi), w′′⟩ < ⟨f(xℓ+1), w′′⟩, and hence, ⟨f(xi), w′′⟩ < ⟨f(y), w′′⟩ for any
y ∈ D.
The premise in Lemma 5.4 that each x1, . . . , xℓ must be a Pareto-optimal solution is needed: if one of the
solutions were not Pareto-optimal, the statement would contradict Lemma 2.7.
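The tie-breaking idea of Lemma 5.4 can be illustrated for d = 2: scan small perturbations w′ of w along the weight simplex until the scalarized minimum becomes unique. The instance and the brute-force scan below are illustrative only, not the constructive argument of the proof:

```python
import numpy as np

def perturb_to_unique(fs, w, radius, steps=2001):
    """Search a 1-D family of perturbations w' of w (d = 2 case) with
    ||w - w'||_1 <= radius and w'_1 + w'_2 = 1, returning one that makes
    the minimum of <w', f(x)> unique."""
    fs = np.asarray(fs, dtype=float)
    for eps in np.linspace(-radius / 2, radius / 2, steps):
        wp = np.array([w[0] + eps, w[1] - eps])   # ||w - w'||_1 = 2|eps|
        vals = wp @ fs
        order = np.sort(vals)
        if order[1] - order[0] > 1e-9:            # unique minimum found
            return wp
    return None

f1 = [1.0, 2.0, 5.0, 6.0]
f2 = [3.0, 2.0, 1.0, 6.0]
w = (0.5, 0.5)                 # ties x = 0 and x = 1 at value 2.0
wp = perturb_to_unique([f1, f2], w, radius=0.01)
print(wp, int(np.argmin(wp @ np.asarray([f1, f2]))))
```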
We now apply Lemma 5.4 to find a Hamiltonian with a nondegenerate minimum eigenvalue.
Lemma 5.5 Let Πλd be an MCO with no equivalent Pareto-optimal solutions and let Hw be a Hamiltonian
that is degenerate in its minimum eigenvalue, with corresponding minimum eigenstates |x1⟩, . . . , |xℓ⟩. There exists
w′ ∈ Wd, satisfying ‖w − w′‖1 ≤ ⟨λ, w⟩/(md), and i ∈ [1, ℓ]Z such that Hw′ is nondegenerate in its smallest
eigenvalue with corresponding eigenvector |xi⟩.
Proof. From Lemma 5.3, we know that if Πλd has no weakly-equivalent Pareto-optimal solutions, then for
any w the Hamiltonian Hw is nondegenerate.
We consider now the case when the minimum eigenvalue of Hw is degenerate with ℓ Pareto-optimal
solutions that are weakly-equivalent. Let x1, . . . , xℓ be such weakly-equivalent Pareto-optimal solutions that
are non-trivial and xi ≢ xj for all i ≠ j. By Lemma 5.4 there exists w′ ∈ Wd, where w ≠ w′, such that
⟨f(xi), w′⟩ is minimum among all y ∈ D.
If we consider our assumption from Section 2 that m = poly(n), where n is the maximum number of bits
of any element in D, we have that any such w′ must satisfy ‖w − w′‖1 ≤ 1/poly(n). Theorem 4.1 then follows
immediately from Lemmas 2.7 and 5.5.
To see that the adiabatic evolution takes finite time, let Δmax = max_s ‖dH(s)/ds‖ and gmin = min_s g(s),
where g(s) is the eigenvalue gap of H(s). Letting T = O(Δmax / gmin²) suffices to find a supported solution
corresponding to w. Since gmin > 0 and ‖dH(s)/ds‖ = poly(n), we conclude that T is finite.
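These quantities can be estimated numerically for a toy instance: since dH(s)/ds = H1 − H0 is constant in s, Δmax is simply its spectral norm, and gmin can be approximated on a grid of s values (the matrices below are invented for illustration):

```python
import numpy as np

def adiabatic_schedule_stats(H0, H1, num_s=201):
    """Estimate g_min = min_s gap(H(s)) for H(s) = (1 - s) H0 + s H1 on
    a grid of s values, and Delta_max = ||H1 - H0|| (the spectral norm
    of the constant derivative dH/ds)."""
    gaps = []
    for s in np.linspace(0.0, 1.0, num_s):
        ev = np.linalg.eigvalsh((1 - s) * H0 + s * H1)  # ascending order
        gaps.append(ev[1] - ev[0])
    return min(gaps), np.linalg.norm(H1 - H0, 2)

# Tiny made-up instance on 2 qubits.
N = 4
u = np.full(N, 1 / np.sqrt(N))            # uniform superposition
H0 = np.eye(N) - np.outer(u, u)           # projector-style initial Hamiltonian
H1 = np.diag([2.0, 1.6, 1.8, 3.8])        # some diagonal final Hamiltonian
g_min, d_max = adiabatic_schedule_stats(H0, H1)
T = d_max / g_min ** 2                    # runtime scale T = O(Delta_max / g_min^2)
print(g_min, d_max, T)
```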
6 Application of the Adiabatic Algorithm to the Two-Parabolas Problem
To make use of the adiabatic algorithm of Section 4 in the Two-Parabolas problem we need to consider
a collision-free version of the problem. Let TP2λ = (D, R, F) be a normal and collision-free MCO where
λ = (λ1, λ2) ∈ R+ × R+, D = {0, 1}n, R ⊆ R+ and F = {f1, f2}. Let x0 and x′0 be the optimal solutions of
f1 and f2, respectively. We will use xi to indicate the i-th solution of f1 and x′i for f2. Moreover, we assume
that |x0 − x′0| > 1. This latter assumption ensures that there is at least one non-trivial Pareto-optimal
solution.
To make TP2λ a Two-Parabolas problem, we impose the following conditions.
1. For each x ∈ [0, x0 ], the functions f1 and f2 are decreasing;
2. for each x ∈ [x′0 , 2n − 1], the functions f1 and f2 are increasing;
3. for each x ∈ [x0 + 1, x′0 − 1] , the function f1 is increasing and the function f2 is decreasing.
The final and initial Hamiltonians are as in Eq.(5) and Eq.(6), respectively. In particular, in Eq.(6), we
define the initial Hamiltonian as

    Ĥ0 = Σ_{x ∈ {0,1}n \ {0n}} |x̂⟩⟨x̂|.    (10)
Thus, the Hamiltonian of the entire system for TP2λ is

    H(s) = (1 − s)Ĥ0 + sHw.    (11)
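Eq.(10) sums projectors onto all Hadamard-basis states except |0̂n⟩, so Ĥ0 equals the identity minus the projector onto the uniform superposition; a small sketch verifying this equivalence:

```python
import numpy as np

def initial_hamiltonian(n):
    """H0_hat = sum over x != 0^n of |x_hat><x_hat|, where |x_hat> is
    the n-fold Hadamard transform of the computational basis state |x>."""
    N = 2 ** n
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    Hn = H
    for _ in range(n - 1):
        Hn = np.kron(Hn, H)              # H^{otimes n}
    H0 = np.zeros((N, N))
    for x in range(1, N):                # skip x = 0^n
        v = Hn[:, x]                     # |x_hat>
        H0 += np.outer(v, v)
    return H0

H0 = initial_hamiltonian(3)
plus = np.full(8, 1 / np.sqrt(8))        # |0_hat^n>, the uniform superposition
# Ground state is the uniform superposition with eigenvalue 0; every
# other eigenvalue equals 1, i.e. H0 = I - |plus><plus|.
print(np.allclose(H0 @ plus, 0), np.allclose(H0, np.eye(8) - np.outer(plus, plus)))
```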
From the previous section we know that T = O(Δmax / gmin²) suffices to find a supported solution corresponding
to w [vDMV01]. The quantity Δmax is usually easy to estimate. The eigenvalue gap gmin is, however, very
difficult to compute; indeed, determining for any Hamiltonian whether gmin > 0 is undecidable [CPGW15].
We present a concrete example of the Two-Parabolas problem on seven qubits and numerically estimate the
eigenvalue gap. In Fig.2 we show a discretized instance of the Two-Parabolas problem; Table 1 presents a
complete specification of all points.
For this particular example we use as initial Hamiltonian 8H0 , that is, Eq.(10) multiplied by 8. Thus,
the minimum eigenvalue of 8H0 is 0, whereas any other eigenvalue is 8.
In Fig.3 we present the eigenvalue gap of TP2λ for w = 0.57, where we let w1 = w and w2 = 1 − w1;
for this particular value of w the Hamiltonian Hw has a unique minimum eigenstate, which corresponds
to the Pareto-optimal solution 59. The two smallest eigenvalues never touch, and exactly at s = 1 the gap is
|⟨w, f(x0)⟩ − ⟨w, f(x1)⟩|, where x0 = 59 and x1 = 60 are the smallest and second smallest solutions with
respect to w, which agrees with Lemmas 5.1 and 5.2.
Similar results can be observed for different values of w and a different number of qubits. This
experimental evidence leads us to conjecture that in the Two-Parabolas problem gmin ≥ |⟨w, f(x)⟩ − ⟨w, f(y)⟩|,
where x and y are the smallest and second smallest solutions with respect to w.
Figure 2: A discrete Two-Parabolas problem on seven qubits. Each objective function, f1 and f2, is represented
by the round points and the square points, respectively. The gap vector λ = (0.2, 0.4). The trivial Pareto-optimal points are 40 and 80.
Table 1: Complete definition of the Two-Parabolas example of Fig.2 for seven qubits.
  x  f1(x)   f2(x)   |   x  f1(x)   f2(x)   |   x  f1(x)   f2(x)   |   x  f1(x)   f2(x)
  1  36.14   214.879 |   2  34.219  208.038 |   3  32.375  201.354 |   4  30.606  194.825
  5  28.91   188.449 |   6  27.285  182.224 |   7  25.729  176.148 |   8  24.24   170.219
  9  22.816  164.435 |  10  21.455  158.794 |  11  20.155  153.294 |  12  18.914  147.933
 13  17.73   142.709 |  14  16.601  137.62  |  15  15.525  132.664 |  16  14.5    127.839
 17  13.524  123.143 |  18  12.595  118.574 |  19  11.711  114.13  |  20  10.87   109.809
 21  10.07   105.609 |  22  9.309   101.528 |  23  8.585   97.564  |  24  7.896   93.715
 25  7.24    89.979  |  26  6.615   86.354  |  27  6.019   82.838  |  28  5.45    79.429
 29  4.906   76.125  |  30  4.385   72.924  |  31  3.885   69.824  |  32  3.404   66.823
 33  2.94    63.919  |  34  2.491   61.11   |  35  2.055   58.394  |  36  1.63    55.769
 37  1.214   53.233  |  38  0.805   50.784  |  39  0.401   48.42   |  40  0       46.139
 41  0.801   43.939  |  42  1.205   41.818  |  43  1.614   39.774  |  44  2.03    37.805
 45  2.455   35.909  |  46  2.891   34.084  |  47  3.34    32.328  |  48  3.804   30.639
 49  4.285   29.015  |  50  4.785   27.454  |  51  5.306   25.954  |  52  5.85    24.513
 53  6.419   23.129  |  54  7.015   21.8    |  55  7.64    20.524  |  56  8.296   19.299
 57  8.985   18.123  |  58  9.709   16.994  |  59  10.47   15.91   |  60  11.27   14.869
 61  12.111  13.869  |  62  12.995  12.908  |  63  13.924  11.984  |  64  14.9    11.095
 65  15.925  10.239  |  66  17.001  9.414   |  67  18.13   8.618   |  68  19.314  7.849
 69  20.555  7.105   |  70  21.855  6.384   |  71  23.216  5.684   |  72  24.64   5.003
 73  26.129  4.339   |  74  27.685  3.69    |  75  29.31   3.054   |  76  31.006  2.429
 77  32.775  1.813   |  78  34.619  1.204   |  79  36.54   0.6     |  80  38.54   0
 81  40.621  1.2     |  82  42.785  1.804   |  83  45.034  2.413   |  84  47.37   3.029
 85  49.795  3.654   |  86  52.311  4.29    |  87  54.92   4.939   |  88  57.624  5.603
 89  60.425  6.284   |  90  63.325  6.984   |  91  66.326  7.705   |  92  69.43   8.449
 93  72.639  9.218   |  94  75.955  10.014  |  95  79.38   10.839  |  96  82.916  11.695
 97  86.565  12.584  |  98  90.329  13.508  |  99  94.21   14.469  | 100  98.21   15.469
101  102.331 16.51   | 102  106.575 17.594  | 103  110.944 18.723  | 104  115.44  19.899
105  120.065 21.124  | 106  124.821 22.4    | 107  129.71  23.729  | 108  134.734 25.113
109  139.895 26.554  | 110  145.195 28.054  | 111  150.636 29.615  | 112  156.22  31.239
113  161.949 32.928  | 114  167.825 34.684  | 115  173.85  36.509  | 116  180.026 38.405
117  186.355 40.374  | 118  192.839 42.418  | 119  199.48  44.539  | 120  206.28  46.739
121  213.241 49.02   | 122  220.365 51.384  | 123  227.654 53.833  | 124  235.11  56.369
125  242.735 58.994  | 126  250.531 61.71   | 127  258.5   64.519  | 128  266.644 67.423
Figure 3: Eigenvalue gap (in gray) of the Two-Parabolas problem of Fig.2 for w = 0.57. The eigenvalue gap
at s = 1 is exactly |⟨w, f(x)⟩ − ⟨w, f(y)⟩|, where x = 59 and y = 60 are the smallest and second smallest
solutions with respect to w.
7 Concluding Remarks and Open Problems
In this work we showed that the quantum adiabatic algorithm of [FGGS00] can be used for multiobjective
combinatorial optimization problems. In particular, a simple linearization of the objective functions suffices
to guarantee convergence to a Pareto-optimal solution, provided the linearized single-objective problem has
a unique optimal solution. Moreover, even if a given linearization does not yield a unique optimal solution,
it is always possible to choose a nearby linearization that does.
We end this paper by listing a few promising and challenging open problems.
1. To make any practical use of Theorem 4.1 we need to choose w ∈ Wd in such a way that the
linearization of the MCO has a unique optimal solution. It is very difficult, however, to know a
priori which w to choose in order to use the adiabatic algorithm. Therefore, more research is necessary
to learn how to select these linearizations. One approach could be to constrain the domain of an MCO in
order to minimize the number of weakly-equivalent solutions.
2. Another related issue is learning how to solve multiobjective problems in the presence of equivalent
solutions. Mapping an MCO with equivalent solutions to a Hamiltonian seems very difficult, owing
to the fact that the smallest eigenvalue must be unique in order to apply the adiabatic theorem.
3. According to Theorem 4.1, we can only find supported solutions. Other works showed that the
number of non-supported solutions can be much larger than the number of supported solutions [EG00].
Hence, it would be interesting to construct a quantum algorithm that finds an approximation to all
Pareto-optimal solutions.
4. Prove our conjecture of Section 6 that the eigenvalue gap of the Hamiltonian of Eq.(11), corresponding
to the Two-Parabolas problem, is at least the difference between the smallest and second smallest
objective values for any given linearization of the objective functions.
References
[AR04]
Andris Ambainis and Oded Regev. An elementary proof of the quantum adiabatic theorem.
arXiv:quant-ph/0411152, 2004.
[BBW05]
W. P. Baritompa, D. W. Bulger, and G. R. Wood. Grover’s quantum algorithm applied to
global optimization. SIAM Journal on Optimization, 15(4):1170–1184, 2005.
[BF26]
Max Born and Vladimir Fock. Beweis des Adiabatensatzes. Zeitschrift für Physik, 51(3–4):165–180, 1926.
[BV16]
Benjamin Barán and Marcos Villagra. Multiobjective optimization in a quantum adiabatic computer. Electronic Notes in Theoretical Computer Science, 329:27–38, 2016. arXiv:1605.03152.
[CPGW15] Toby S. Cubitt, David Perez-Garcia, and Michael M. Wolf. Undecidability of the spectral gap.
Nature, 528:207–211, 2015.
[DC08]
Arnab Das and Bikas K. Chakrabarti. Quantum annealing and quantum computation. Reviews
of Modern Physics, 80:1061, 2008.
[DH99]
Christoph Dürr and Peter Høyer. A quantum algorithm for finding the minimum. arXiv:quant-ph/9607014, 1999.
[EG00]
Matthias Ehrgott and Xavier Gandibleux. A survey and annotated bibliography of multiobjective combinatorial optimization. OR Spektrum, 22(4):425–460, 2000.
[FGGS00]
Edward Farhi, Jeffrey Goldstone, Sam Gutmann, and Michael Sipser. Quantum computation by
adiabatic evolution. arXiv:quant-ph/0001106, 2000.
[Gro96]
Lov Grover. A fast quantum mechanical algorithm for database search. In Proceedings of the
28th Annual ACM Symposium on the Theory of Computing (STOC), pages 212–219, 1996.
[Kat50]
Tosio Kato. On the adiabatic theorem of quantum mechanics. Journal of the Physical Society
of Japan, 5(6):435, 1950.
[KLP75]
H. T. Kung, F. Luccio, and F. P. Preparata. On finding the maxima of a set of vectors. Journal
of the ACM, 22(4):469–476, 1975.
[Mes62]
Albert Messiah. Quantum Mechanics, Volume II. North Holland, 1962.
[NC00]
Michael Nielsen and Isaac Chuang. Quantum Computation and Quantum Information. Cambridge University Press, 2000.
[PY00]
Christos Papadimitriou and Mihalis Yannakakis. On the approximability of trade-offs and
optimal access of web sources. In Proceedings of the 41st Annual Symposium on Foundations
of Computer Science (FOCS), pages 86–92, 2000.
[Rei04]
Ben Reichardt. The quantum adiabatic optimization algorithm and local minima. In Proceedings
of the 36th Annual ACM Symposium on Theory of Computing (STOC), pages 502–510, 2004.
[Sho94]
Peter Shor. Algorithms for quantum computation: discrete logarithms and factoring. In Proceedings of the 35th Annual Symposium on Foundations of Computer Science (FOCS), pages
124–134, 1994.
[SWL04]
Marcelo Silva Sarandy, Lian-Ao Wu, and Daniel A. Lidar. Consistency of the adiabatic theorem.
Quantum Information Processing, 3(6):331–349, 2004.
[vDMV01]
Wim van Dam, Michele Mosca, and Umesh Vazirani. How powerful is adiabatic quantum
computation? In Proceedings of the 42nd IEEE Symposium on Foundations of Computer
Science (FOCS), pages 279–287. IEEE, 2001.
[vLBB14]
Christian von Lücken, Benjamin Barán, and Carlos Brizuela. A survey on multi-objective evolutionary algorithms for many-objective problems. Computational Optimization and Applications,
58(3):707–756, 2014.
Ref: ACM Genetic and Evolutionary Computation Conference (GECCO), pp. 189–190, Berlin, Germany, July 2017.
A Two-Phase Genetic Algorithm for Image Registration
Sarit Chicotay
Bar-Ilan University
Ramat-Gan, Israel
[email protected]
Eli (Omid) David1
Bar-Ilan University
Ramat-Gan, Israel
[email protected]
ABSTRACT
Image Registration (IR) is the process of aligning two (or more)
images of the same scene taken at different times, different
viewpoints and/or by different sensors. It is a crucial step in various image analysis tasks where multiple data sources are
integrated/fused in order to extract high-level information.
Registration methods usually assume a relevant transformation
model for a given problem domain. The goal is to search for the
"optimal" instance of the transformation model assumed with
respect to a similarity measure in question.
In this paper we present a novel genetic algorithm (GA)-based
approach for IR. Since GA performs effective search in various
optimization problems, it could prove useful also for IR. Indeed,
various GAs have been proposed for IR. However, most of them
assume certain constraints, which simplify the transformation
model, restrict the search space or make additional preprocessing
requirements. In contrast, we present a generalized GA-based
solution for an almost fully affine transformation model, which
achieves competitive results without such limitations using a two-phase method and a multi-objective optimization (MOO) approach.
We present good results for multiple datasets and demonstrate the
robustness of our method in the presence of noisy data.
CCS CONCEPTS
• Computing methodologies~Search methodologies • Computing methodologies~Computer vision
KEYWORDS
Computer Vision, Genetic Algorithms, Image Registration, Multi-Objective Optimization, Normalized Cross Correlation
1 INTRODUCTION
Image registration (IR) is an important component
in many practical problem domains. Due to the enormous diversity
of IR applications and methodologies, automatic IR remains a
challenge to this day. A broad range of registration techniques has
been developed for various types of datasets and problem domains
[1], where typically, domain-specific knowledge is taken into
account and certain a priori assumptions are made to simplify the
model in question.
1 www.elidavid.com
2 Nathan Netanyahu is also with the Center for Automation Research, University of Maryland, College Park, MD 20742.
Nathan S. Netanyahu2
Bar-Ilan University
Ramat-Gan, Israel
[email protected]
An affine transformation is one of the most commonly used
models. Since the search space is too large for a feasible exhaustive
search through the entire parameter space, the major challenge is to
avoid getting stuck at a local optimum when there are multiple
extrema in the similarity metric search space.
In order to overcome this problem, we present in this paper a
novel two-phase genetic algorithm (GA)-based approach for IR.
We devise a GA-based framework coupled with image processing
techniques to search efficiently for an optimal transformation with
respect to a given similarity measure. Due to our two-phase strategy
and a unique simultaneous optimization of two similarity measures
based on a multi-objective optimization (MOO) strategy, we obtain
good results over a relatively large search space assuming an almost
fully affine transformation model.
2
TWO-PHASE GA-BASED IR
This section describes briefly our two-phase GA-based
approach to optimize the devised similarity measures by utilizing
common IR tools. For a detailed presentation of this work see [2].
Our IR scheme searches for a transformation that generates a
maximal match in the overlap between the reference image and the
transformed sensed image, thus, the GA chromosome is defined by
six genes reflecting the effects represented by an affine
transformation; translation along the -and- axis, rotation, scale
factor, and shear along the -and- axis.
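As an illustrative sketch of decoding such a chromosome, the six genes can be assembled into a 2×3 affine matrix; the gene names and the composition order are our own choices, not necessarily those of the implementation in [2]:

```python
import numpy as np

def decode_chromosome(tx, ty, theta, scale, shx, shy):
    """Build a 2x3 affine matrix from the six genes: translation,
    rotation, uniform scale, and shear (illustrative decoding)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    Sh = np.array([[1.0, shx], [shy, 1.0]])
    A = scale * R @ Sh
    return np.hstack([A, [[tx], [ty]]])

def warp(points, M):
    """Apply the affine transformation M to a (k, 2) array of points."""
    pts = np.asarray(points, dtype=float)
    return pts @ M[:, :2].T + M[:, 2]

M = decode_chromosome(tx=5.0, ty=-3.0, theta=0.0, scale=2.0, shx=0.0, shy=0.0)
print(warp([[1.0, 1.0]], M))   # pure scale + translation -> [[7., -1.]]
```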
Two similarity measures are used in this work: (1) Euclidean
distance measure, which is applied to geometric feature points
extracted from both images, and (2) normalized cross correlation
(NCC) [3], which is an intensity-based measure.
The Euclidean distance measure computes the similarity between two feature point sets extracted from the
reference and sensed images. We first tested our scheme using manually selected features and showed that,
without assuming correspondences, our algorithm gives good registration results. We then applied the measure
to wavelet features obtained in a fully-automatic mode from Simoncelli's steerable filters [4] based on a
wavelet transform. The Euclidean distance measure is calculated for the two extracted feature sets as follows:
First, the feature points extracted from the sensed image are warped according to the transformation assumed.
For each warped point we determine its corresponding point among the still-unassigned reference feature
points by finding its nearest neighbor with respect to the Euclidean distance. Finally, the similarity value is
the median Euclidean distance among the correspondences found.
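A greedy sketch of this measure (the one-at-a-time nearest-neighbor assignment is an illustrative simplification of the matching described above):

```python
import numpy as np

def median_nn_distance(warped, reference):
    """Similarity of two point sets: for each warped sensed-image point,
    take its nearest still-unassigned reference point, then return the
    median of those Euclidean distances."""
    ref = [np.asarray(r, dtype=float) for r in reference]
    dists = []
    for p in np.asarray(warped, dtype=float):
        d = [np.linalg.norm(p - r) for r in ref]
        j = int(np.argmin(d))
        dists.append(d[j])
        ref.pop(j)                      # each reference point used once
    return float(np.median(dists))

ref = [[0, 0], [10, 0], [0, 10]]
warped = [[0, 1], [10, 2], [0, 13]]
print(median_nn_distance(warped, ref))  # distances 1, 2, 3 -> median 2
```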
The second measure used is the normalized cross correlation (NCC), which has been commonly applied
to evaluate the degree of similarity between two images. For an image I and the warped image W it is
defined as

    NCC(I, W) = Σi,j (I(i,j) − Ī)(W(i,j) − W̄) / √( Σi,j (I(i,j) − Ī)² · Σi,j (W(i,j) − W̄)² ),

where Ī and W̄ are the average gray levels of I and W.
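A direct implementation of this definition, for equally sized images:

```python
import numpy as np

def ncc(I, W):
    """Normalized cross correlation between two equally sized images."""
    I = np.asarray(I, dtype=float)
    W = np.asarray(W, dtype=float)
    a, b = I - I.mean(), W - W.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

img = np.arange(16.0).reshape(4, 4)
print(ncc(img, img))                 # identical images -> 1.0
print(ncc(img, 2 * img + 5))         # invariant to affine gray-level changes
print(ncc(img, -img))                # inverted contrast -> -1.0
```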
Having performed several tests using each of these measures,
independently, as the fitness function, we noticed that the GA fails
to obtain consistent results with a single measure. In an attempt to
obtain a more robust IR scheme, we combined, therefore, the two
measures as part of our two-phase strategy.
2.1 Phase 1: Coarse Estimation
The goal of the first phase is to obtain an initial coarse estimate.
This is achieved using the Euclidean distance measure which is
expected to yield consistent candidate solutions that are "relatively"
close to the optimal solution.
The first phase completes when there is no "significant" update
for a predefined number of changes or when converging to some
predefined minimal distance measure.
2.2 Phase 2: Multi-Objective Optimization
The second phase starts with the population at the end of the
first phase. The Euclidean distance measure is combined with the
NCC measure which makes use of the full image data.
Ideally, we would like to optimize simultaneously the two
objective functions, however, in practice, they may not be
optimized simultaneously. Thus, we use a multi-objective
optimization approach that gives a partial ordering of solutions
based on Pareto dominance.
The second phase completes when there is no "significant"
update. We select among the Pareto-optimal set the individual with
the best NCC value as the suggested solution.
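The Pareto-dominance filtering used here can be sketched as follows, with each individual scored by the pair (median distance, −NCC) so that lower is better in both coordinates (the scores are invented):

```python
def pareto_front(scores):
    """Indices of non-dominated individuals; each score tuple is
    interpreted as lower-is-better in every coordinate (sketch of the
    MOO selection used in phase 2)."""
    front = []
    for i, a in enumerate(scores):
        dominated = any(
            all(b[k] <= a[k] for k in range(len(a))) and b != a
            for j, b in enumerate(scores) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# (median distance, -NCC): individual 0 dominates 2; 0 and 1 trade off.
scores = [(1.0, -0.90), (2.0, -0.95), (1.5, -0.80)]
print(pareto_front(scores))  # -> [0, 1]
```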
(affected by contrast relationships). Table 1 and Figure 1 present
several results in a fully-automatic mode.
We also compared our results on multiple datasets to other methods assuming a simpler transformation
model, and performed additional tests on real images from the INRIA database [5] that underwent affine
transformations. See [2] for full details.
4 CONCLUSIONS
In this paper we presented a novel two-phase GA-based image
registration algorithm, whose main advantage over existing
evolutionary IR techniques is that it provides a robust and
automatic solution for a (quasi) fully affine transformation which
is one of the most commonly used models for image registration.
We used the Euclidean distance measure and the NCC measure as
part of a two-phase strategy, supported by a novel MOO design,
which is used to optimize the two similarity measures
simultaneously.
We have tested extensively the proposed scheme, and
demonstrated its robustness to noisy data and consistency on
multiple datasets over successive runs.
Further research should be done to achieve a robust, fully-automatic registration in more challenging scenarios.
REFERENCES
[1] J. Santamaría, O. Cordón, and S. Damas. A comparative study of state-of-the-art evolutionary image registration methods for 3D modeling, Computer
Vision and Image Understanding, vol. 115(9), pp. 1340–1354, 2011.
[2] http://u.cs.biu.ac.il/~nathan/GAImgReg/Two-Phase_GA_for_IR.pdf
[3] Y.R. Rao, N. Prathapani, and E. Nagabhooshanam. Application of
normalized cross correlation to image registration, International Journal of
Research in Engineering and Technology, vol. 3(5), pp. 12-16, 2014.
[4] A. Karasaridis and E.P. Simoncelli. A filter design technique for steerable
pyramid image transforms. Proceedings of the IEEE International Conference
on Acoustics, Speech and Signal Processing, vol. 4(7-10), pp. 2387–2390,
1996.
[5] https://lear.inrialpes.fr/people/mikolajczyk/Database, INRIA, Mikolajczyk
evaluation database.
Table 1: Fully-automatic registration results of the images in
Figure 1 (RMSE in pixels).
Image   Avg. RMSE   σ RMSE
Boat    1.37        0.2
House   1.34        0.24
3 EMPIRICAL RESULTS
We tested our algorithm on a few dozen synthetic and real
image datasets, including various satellite and outdoor scenes, in
both a semi-automatic and a fully-automatic mode.
The correctness of the final transformation is evaluated by the
root mean square error (RMSE) for manually selected points. We
consider RMSE value < 1.5 pixels as a good registration.
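The evaluation criterion is straightforward to compute from corresponding point pairs:

```python
import numpy as np

def rmse(pred_points, true_points):
    """Root mean square error (in pixels) between registered points and
    manually selected ground-truth points."""
    p = np.asarray(pred_points, dtype=float)
    t = np.asarray(true_points, dtype=float)
    return float(np.sqrt(np.mean(np.sum((p - t) ** 2, axis=1))))

pred = [[1.0, 1.0], [4.0, 5.0]]
true = [[1.0, 2.0], [4.0, 4.0]]
err = rmse(pred, true)
print(err, err < 1.5)   # 1.0 -> counts as a good registration
```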
The semi-automatic mode yields good results in all of the cases
considered. The tests in a fully-automatic mode achieved
successful registration in RMSE terms in about 75% of the test
cases. Some of the failed cases can be recovered, though, if
additional measures/constraints are applied to the transformation's
parameters, e.g., using mutual information (MI) instead of NCC
Figure 1: (a), (d) Reference, (b), (e) sensed, and (c), (f)
registered images of "boat" and "house" pairs from [5].
arXiv:1705.08804v2 [cs.IR] 30 Nov 2017
Beyond Parity:
Fairness Objectives for Collaborative Filtering
Bert Huang
Department of Computer Science
Virginia Tech
Blacksburg, VA 24061
[email protected]
Sirui Yao
Department of Computer Science
Virginia Tech
Blacksburg, VA 24061
[email protected]
Abstract
We study fairness in collaborative-filtering recommender systems, which are
sensitive to discrimination that exists in historical data. Biased data can lead
collaborative-filtering methods to make unfair predictions for users from minority
groups. We identify the insufficiency of existing fairness metrics and propose four
new metrics that address different forms of unfairness. These fairness metrics can
be optimized by adding fairness terms to the learning objective. Experiments on
synthetic and real data show that our new metrics can better measure fairness than
the baseline, and that the fairness objectives effectively help reduce unfairness.
1 Introduction
This paper introduces new measures of unfairness in algorithmic recommendation and demonstrates
how to optimize these metrics to reduce different forms of unfairness. Recommender systems study
user behavior and make recommendations to support decision making. They have been widely applied in various fields to recommend items such as movies, products, jobs, and courses. However,
since recommender systems make predictions based on observed data, they can easily inherit bias
that may already exist. To address this issue, we first formalize the problem of unfairness in recommender systems and identify the insufficiency of demographic parity for this setting. We then
propose four new unfairness metrics that address different forms of unfairness. We compare our fairness measures with non-parity on biased, synthetic training data and prove that our metrics can better
measure unfairness. To improve model fairness, we provide five fairness objectives that can be optimized, each adding unfairness penalties as regularizers. Experimenting on real and synthetic data,
we demonstrate that each fairness metric can be optimized without much degradation in prediction
accuracy, but that trade-offs exist among the different forms of unfairness.
We focus on a frequently practiced approach for recommendation called collaborative filtering,
which makes recommendations based on the ratings or behavior of other users in the system. The
fundamental assumption behind collaborative filtering is that other users’ opinions can be selected
and aggregated in such a way as to provide a reasonable prediction of the active user’s preference
[7]. For example, if a user likes item A, and many other users who like item A also like item B, then
it is reasonable to expect that the user will also like item B. Collaborative filtering methods would
predict that the user will give item B a high rating.
With this approach, predictions are made based on co-occurrence statistics, and most methods assume that the missing ratings are missing at random. Unfortunately, researchers have shown that
sampled ratings have markedly different properties from the users’ true preferences [21, 22]. Sampling is heavily influenced by social bias, which results in more missing ratings in some cases than
others. This non-random pattern of missing and observed rating data is a potential source of unfairness. For the purpose of improving recommendation accuracy, there are collaborative filtering mod31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
els [2, 21, 25] that use side information to address the problem of imbalanced data, but in this work,
to test the properties and effectiveness of our metrics, we focus on the basic matrix-factorization
algorithm first. Investigating how these other models could reduce unfairness is one direction for
future research.
Throughout the paper, we consider a running example of unfair recommendation. We consider
recommendation in education, and unfairness that may occur in areas with current gender imbalance, such as science, technology, engineering, and mathematics (STEM) topics. Due to societal
and cultural influences, fewer female students currently choose careers in STEM. For example, in
2010, women accounted for only 18% of the bachelor’s degrees awarded in computer science [3].
The underrepresentation of women causes historical rating data of computer-science courses to be
dominated by men. Consequently, the learned model may underestimate women’s preferences and
be biased toward men. We consider the setting in which, even if the ratings provided by students
accurately reflect their true preferences, the bias in which ratings are reported leads to unfairness.
The remainder of the paper is organized as follows. First, we review previous relevant work in
Section 2. In Section 3, we formalize the recommendation problem, and we introduce four new
unfairness metrics and give justifications and examples. In Section 4, we show that unfairness
occurs as data gets more imbalanced, and we present results that successfully minimize each form
of unfairness. Finally, Section 5 concludes the paper and proposes possible future work.
2 Related Work
As machine learning is more widely applied in modern society, researchers have begun to recognize the critical importance of algorithmic fairness. Various studies have considered algorithmic fairness
in problems such as supervised classification [20, 23, 28]. When aiming to protect algorithms from
treating people differently for prejudicial reasons, removing sensitive features (e.g., gender, race, or
age) can help alleviate unfairness but is often insufficient. Features are often correlated, so other
unprotected attributes can be related to the sensitive features and therefore still cause the model to
be biased [17, 29]. Moreover, in problems such as collaborative filtering, algorithms do not directly
consider measured features and instead infer latent user attributes from their behavior.
Another frequently practiced strategy for encouraging fairness is to enforce demographic parity,
which is to achieve statistical parity among groups. The goal is to ensure that the overall proportion
of members in the protected group receiving positive (or negative) classifications is identical to
the proportion of the population as a whole [29]. For example, in the case of a binary decision
Ŷ ∈ {0, 1} and a binary protected attribute A ∈ {0, 1}, this constraint can be formalized as [9]
Pr{Ŷ = 1 | A = 0} = Pr{Ŷ = 1 | A = 1} .     (1)
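As a concrete illustration (not from the paper), the demographic-parity gap of Eq. (1) can be estimated from samples; the function name and toy data below are hypothetical:

```python
import numpy as np

def demographic_parity_gap(y_hat, a):
    """Absolute difference in positive-prediction rates between the
    two values of a binary protected attribute (Eq. 1)."""
    y_hat, a = np.asarray(y_hat), np.asarray(a)
    rate0 = y_hat[a == 0].mean()  # Pr{Y_hat = 1 | A = 0}
    rate1 = y_hat[a == 1].mean()  # Pr{Y_hat = 1 | A = 1}
    return abs(rate0 - rate1)

# Example: 3/4 positive predictions for A=0 vs 1/4 for A=1 -> gap of 0.5.
y_hat = [1, 1, 1, 0, 1, 0, 0, 0]
a     = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(y_hat, a))
```

A gap of zero means the classifier satisfies demographic parity exactly on this sample.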
Kamishima et al. [13–17] evaluate model fairness based on this non-parity unfairness concept, or try
to solve the unfairness issue in recommender systems by adding a regularization term that enforces
demographic parity. The objective penalizes the differences among the average predicted ratings of
user groups. However, demographic parity is only appropriate when preferences are unrelated to
the sensitive features. In tasks such as recommendation, user preferences are indeed influenced by
sensitive features such as gender, race, and age [4, 6]. Therefore, enforcing demographic parity may
significantly damage the quality of recommendations.
To address the issue of demographic parity, Hardt et al. [9] propose to measure unfairness with the
true positive rate and true negative rate. This idea encourages what they refer to as equal opportunity
and no longer relies on the implicit assumption of demographic parity that the target variable is
independent of sensitive features. They propose that, in a binary setting, given a decision Ŷ ∈ {0, 1},
a protected attribute A ∈ {0, 1}, and the true label Y ∈ {0, 1}, the constraints are equivalent to [9]
Pr{Ŷ = 1 | A = 0, Y = y} = Pr{Ŷ = 1 | A = 1, Y = y},  y ∈ {0, 1} .     (2)
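A sample-based sketch of the constraint in Eq. (2) (the helper below is hypothetical, not from the paper): the per-label gaps are zero exactly when equalized odds holds.

```python
import numpy as np

def equalized_odds_gaps(y_hat, a, y):
    """|Pr{Y_hat=1 | A=0, Y=label} - Pr{Y_hat=1 | A=1, Y=label}| per label (Eq. 2)."""
    y_hat, a, y = map(np.asarray, (y_hat, a, y))
    gaps = {}
    for label in (0, 1):
        r0 = y_hat[(a == 0) & (y == label)].mean()
        r1 = y_hat[(a == 1) & (y == label)].mean()
        gaps[label] = abs(r0 - r1)
    return gaps

# Toy sample with a visible gap for both true labels.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
a_attr = [0, 0, 0, 0, 1, 1, 1, 1]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
print(equalized_odds_gaps(y_pred, a_attr, y_true))
```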
This constraint upholds fairness and simultaneously respects group differences. It penalizes models
that only perform well on the majority groups. This idea is also the basis of the unfairness metrics
we propose for recommendation.
Our running example of recommendation in education is inspired by the recent interest in using
algorithms in this domain [5, 24, 27]. Student decisions about which courses to study can have
significant impacts on their lives, so the usage of algorithmic recommendation in this setting has
consequences that will affect society for generations. Coupling the importance of this application
with the issue of gender imbalance in STEM [1] and challenges in retention of students with backgrounds underrepresented in STEM [8, 26], we find this setting a serious motivation to advance
scientific understanding of unfairness—and methods to reduce unfairness—in recommendation.
3 Fairness Objectives for Collaborative Filtering
This section introduces fairness objectives for collaborative filtering. We begin by reviewing the
matrix factorization method. We then describe the various fairness objectives we consider, providing
formal definitions and discussion of their motivations.
3.1 Matrix Factorization for Recommendation
We consider the task of collaborative filtering using matrix factorization [19]. We have a set of users
indexed from 1 to m and a set of items indexed from 1 to n. For the ith user, let gi be a variable
indicating which group the ith user belongs to. For example, it may indicate whether user i identifies
as a woman, a man, or with a non-binary gender identity. For the jth item, let hj indicate the item
group that it belongs to. For example, hj may represent a genre of a movie or topic of a course. Let
rij be the preference score of the ith user for the jth item. The ratings can be viewed as entries in a
rating matrix R.
The matrix-factorization formulation builds on the assumption that each rating can be represented
as the product of vectors representing the user and item. With additional bias terms for users and
items, this assumption can be summarized as follows:
rij ≈ pi⊤ qj + ui + vj ,     (3)
where pi is a d-dimensional vector representing the ith user, q j is a d-dimensional vector representing the jth item, and ui and vj are scalar bias terms for the user and item, respectively. The
matrix-factorization learning algorithm seeks to learn these parameters from observed ratings X,
typically by minimizing a regularized, squared reconstruction error:
J(P, Q, u, v) = (λ/2) (||P||F² + ||Q||F²) + (1/|X|) Σ_{(i,j)∈X} (yij − rij)² ,     (4)
where u and v are the vectors of bias terms, || · ||F represents the Frobenius norm, and
yij = pi⊤ qj + ui + vj .     (5)
Strategies for minimizing this non-convex objective are well studied, and a general approach is to
compute the gradient and use a gradient-based optimizer. In our experiments, we use the Adam
algorithm [18], which combines adaptive learning rates with momentum.
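The following minimal NumPy sketch implements the objective of Eq. (4) and a full-gradient update. For brevity it uses plain gradient descent rather than Adam, and the tiny rating list is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, d, lam = 4, 5, 2, 1e-3

# Observed ratings as (i, j, r_ij) triples (toy data).
X = [(0, 0, 1.0), (0, 2, -1.0), (1, 1, 1.0), (2, 3, -1.0), (3, 4, 1.0)]

P = rng.normal(scale=0.1, size=(m, d))  # user vectors p_i
Q = rng.normal(scale=0.1, size=(n, d))  # item vectors q_j
u = np.zeros(m)                         # user biases
v = np.zeros(n)                         # item biases

def objective(P, Q, u, v):
    # J from Eq. (4): mean squared error plus Frobenius-norm regularization.
    err = sum((P[i] @ Q[j] + u[i] + v[j] - r) ** 2 for i, j, r in X) / len(X)
    return err + lam / 2 * ((P ** 2).sum() + (Q ** 2).sum())

def step(P, Q, u, v, lr=0.1):
    # One full-gradient descent update (the paper uses Adam instead).
    gP, gQ = lam * P, lam * Q
    gu, gv = np.zeros(m), np.zeros(n)
    for i, j, r in X:
        e = 2 * (P[i] @ Q[j] + u[i] + v[j] - r) / len(X)
        gP[i] += e * Q[j]; gQ[j] += e * P[i]; gu[i] += e; gv[j] += e
    return P - lr * gP, Q - lr * gQ, u - lr * gu, v - lr * gv

before = objective(P, Q, u, v)
for _ in range(200):
    P, Q, u, v = step(P, Q, u, v)
print(before, "->", objective(P, Q, u, v))
```

Running this drives the objective down from roughly 1 (the predictions start near zero while the ratings are ±1) to a small value.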
3.2 Unfair Recommendations from Underrepresentation
In this section, we describe a process through which matrix factorization leads to unfair recommendations, even when rating data accurately reflects users’ true preferences. Such unfairness can
occur with imbalanced data. We identify two forms of underrepresentation: population imbalance
and observation bias. We later demonstrate that either leads to unfair recommendation, and both
forms together lead to worse unfairness. In our discussion, we use a running example of course
recommendation, highlighting effects of underrepresentation in STEM education.
Population imbalance occurs when different types of users occur in the dataset with varied frequencies. For example, we consider four types of users defined by two aspects. First, each individual
identifies with a gender. For simplicity, we only consider binary gender identities, though in this
example, it would also be appropriate to consider men as one gender group and women and all
non-binary gender identities as the second group. Second, each individual is either someone who
enjoys and would excel in STEM topics or someone who does not and would not. Population imbalance
occurs in STEM education when, because of systemic bias or other societal problems, there may be
significantly fewer women who succeed in STEM (WS) than those who do not (W), and because of
converse societal unfairness, there may be more men who succeed in STEM (MS) than those who
do not (M). This four-way separation of user groups is not available to the recommender system,
which instead may only know the gender group of each user, but not their proclivity for STEM.
Observation bias is a related but distinct form of data imbalance, in which certain types of users
may have different tendencies to rate different types of items. This bias is often part of a feedback
loop involving existing methods of recommendation, whether by algorithms or by humans. If an
individual is never recommended a particular item, they will likely never provide rating data for that
item. Therefore, algorithms will never be able to directly learn about this preference relationship.
In the education example, if women are rarely recommended to take STEM courses, there may be
significantly less training data about women in STEM courses.
We simulate these two types of data bias with two stochastic block models [11]. We create one block
model that determines the probability that an individual in a particular user group likes an item in a
particular item group. The group ratios may be non-uniform, leading to population imbalance. We
then use a second block model to determine the probability that an individual in a user group rates
an item in an item group. Non-uniformity in the second block model will lead to observation bias.
Formally, let matrix L ∈ [0, 1]|g|×|h| be the block-model parameters for rating probability. For the
ith user and the jth item, the probability of rij = +1 is L(gi,hj), and otherwise rij = −1. Moreover,
let O ∈ [0, 1]|g|×|h| be such that the probability of observing rij is O(gi ,hj ) .
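A sketch of this two-stage sampling, with hypothetical group sizes and probabilities (smaller than the paper's experimental setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small instance: 2 user groups x 2 item groups.
L = np.array([[0.8, 0.2],    # P(r_ij = +1 | g_i, h_j)
              [0.2, 0.8]])
O = np.array([[0.5, 0.1],    # P(r_ij is observed | g_i, h_j)
              [0.1, 0.5]])

g = rng.integers(0, 2, size=100)   # user group labels
h = rng.integers(0, 2, size=80)    # item group labels

# Preferences: r_ij = +1 with probability L[g_i, h_j], else -1.
R = np.where(rng.random((100, 80)) < L[g][:, h], 1, -1)
# Observation mask: entry (i, j) is observed with probability O[g_i, h_j].
mask = rng.random((100, 80)) < O[g][:, h]
print(R.shape, mask.mean())
```

Non-uniform rows of L produce population-dependent preferences, and non-uniform rows of O produce observation bias, matching the two imbalance types described above.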
3.3 Fairness Metrics
In this section, we present four new unfairness metrics for preference prediction, all measuring a
discrepancy between the prediction behavior for disadvantaged users and advantaged users. Each
metric captures a different type of unfairness that may have different consequences. We describe the
mathematical formulation of each metric, its justification, and examples of consequences the metric
may indicate. We consider a binary group feature and refer to disadvantaged and advantaged groups,
which may represent women and men in our education example.
The first metric is value unfairness, which measures inconsistency in signed estimation error across
the user types, computed as
Uval = (1/n) Σ_{j=1}^{n} | (Eg[y]j − Eg[r]j) − (E¬g[y]j − E¬g[r]j) | ,     (6)
where Eg [y]j is the average predicted score for the jth item from disadvantaged users, E¬g [y]j is
the average predicted score for advantaged users, and Eg [r]j and E¬g [r]j are the average ratings for
the disadvantaged and advantaged users, respectively. Precisely, the quantity Eg [y]j is computed as
Eg[y]j := (1 / |{i : ((i, j) ∈ X) ∧ gi}|) Σ_{i : ((i,j)∈X) ∧ gi} yij ,     (7)
and the other averages are computed analogously.
Value unfairness occurs when one class of user is consistently given higher or lower predictions
than their true preferences. If the errors in prediction are evenly balanced between overestimation
and underestimation or if both classes of users have the same direction and magnitude of error, the
value unfairness becomes small. Value unfairness becomes large when predictions for one class are
consistently overestimated and predictions for the other class are consistently underestimated. For
example, in a course recommender, value unfairness may manifest in male students being recommended STEM courses even when they are not interested in STEM topics and female students not
being recommended STEM courses even if they are interested in STEM topics.
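A direct NumPy translation of Eqs. (6)-(7) (the function name and toy matrices are hypothetical):

```python
import numpy as np

def value_unfairness(Y, R, mask, g):
    """U_val from Eq. (6). Y: predicted scores, R: true ratings (m x n),
    mask: boolean matrix of observed entries, g: True for disadvantaged users.
    Assumes every item has observed ratings from both groups."""
    n = Y.shape[1]
    total = 0.0
    for j in range(n):
        dis = mask[:, j] & g
        adv = mask[:, j] & ~g
        diff_g = Y[dis, j].mean() - R[dis, j].mean()    # E_g[y]_j - E_g[r]_j
        diff_ng = Y[adv, j].mean() - R[adv, j].mean()   # E_~g[y]_j - E_~g[r]_j
        total += abs(diff_g - diff_ng)
    return total / n

# Toy example: the disadvantaged user is overestimated on item 0 and the
# advantaged user on item 1, so both items contribute unfairness.
Y = np.array([[1.0, 0.0], [0.0, 1.0]])
R = np.array([[0.5, 0.0], [0.0, 0.5]])
mask = np.ones((2, 2), dtype=bool)
g = np.array([True, False])
print(value_unfairness(Y, R, mask, g))
```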
The second metric is absolute unfairness, which measures inconsistency in absolute estimation error
across user types, computed as
Uabs = (1/n) Σ_{j=1}^{n} | |Eg[y]j − Eg[r]j| − |E¬g[y]j − E¬g[r]j| | .     (8)
Absolute unfairness is unsigned, so it captures a single statistic representing the quality of prediction
for each user type. If one user type has small reconstruction error and the other user type has large
reconstruction error, one type of user has the unfair advantage of good recommendation, while
the other user type has poor recommendation. In contrast to value unfairness, absolute unfairness
does not consider the direction of error. For example, if female students are given predictions 0.5
points below their true preferences and male students are given predictions 0.5 points above their
true preferences, there is no absolute unfairness. Conversely, if female students are given ratings
that are off by 2 points in either direction while male students are rated within 1 point of their true
preferences, absolute unfairness is high, while value unfairness may be low.
The third metric is underestimation unfairness, which measures inconsistency in how much the
predictions underestimate the true ratings:
Uunder = (1/n) Σ_{j=1}^{n} | max{0, Eg[r]j − Eg[y]j} − max{0, E¬g[r]j − E¬g[y]j} | .     (9)
Underestimation unfairness is important in settings where missing recommendations are more critical than extra recommendations. For example, underestimation could lead to a top student not being
recommended to explore a topic they would excel in.
Conversely, the fourth new metric is overestimation unfairness, which measures inconsistency in
how much the predictions overestimate the true ratings:
Uover = (1/n) Σ_{j=1}^{n} | max{0, Eg[y]j − Eg[r]j} − max{0, E¬g[y]j − E¬g[r]j} | .     (10)
Overestimation unfairness may be important in settings where users may be overwhelmed by recommendations, so providing too many recommendations would be especially detrimental. For example,
if users must invest large amounts of time to evaluate each recommended item, overestimating essentially costs the user time. Thus, uneven amounts of overestimation could cost one type of user
more time than the other.
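Given precomputed per-item group averages, Eqs. (8)-(10) reduce to a few vectorized lines. This sketch (names hypothetical) reproduces the example from the absolute-unfairness discussion, where symmetric ±0.5 errors give zero absolute unfairness but nonzero directional unfairness:

```python
import numpy as np

def direction_metrics(Eg_y, Eg_r, En_y, En_r):
    """U_abs, U_under, U_over (Eqs. 8-10) from length-n arrays of
    per-item group averages (Eg_* disadvantaged, En_* advantaged)."""
    u_abs = np.mean(np.abs(np.abs(Eg_y - Eg_r) - np.abs(En_y - En_r)))
    u_under = np.mean(np.abs(np.maximum(0, Eg_r - Eg_y)
                             - np.maximum(0, En_r - En_y)))
    u_over = np.mean(np.abs(np.maximum(0, Eg_y - Eg_r)
                            - np.maximum(0, En_y - En_r)))
    return u_abs, u_under, u_over

# One item: disadvantaged group predicted 0.5 below truth, advantaged group
# 0.5 above -> no absolute unfairness, but both directional metrics fire.
Eg_y, Eg_r = np.array([0.0]), np.array([0.5])
En_y, En_r = np.array([1.0]), np.array([0.5])
print(direction_metrics(Eg_y, Eg_r, En_y, En_r))
```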
Finally, a non-parity unfairness measure based on the regularization term introduced by Kamishima
et al. [17] can be computed as the absolute difference between the overall average ratings of disadvantaged users and those of advantaged users:
Upar = |Eg [y] − E¬g [y]| .
Each of these metrics has a straightforward subgradient and can be optimized by various subgradient
optimization techniques. We augment the learning objective by adding a smoothed variation of a
fairness metric based on the Huber loss [12], where the outer absolute value is replaced with the
squared difference if it is less than 1. We solve for a local minimum, i.e.,

min_{P,Q,u,v}  J(P, Q, u, v) + U .     (11)
The smoothed penalty removes gradient discontinuities from the objective, making optimization more efficient. It is also straightforward to add a scalar trade-off term to weight the fairness against the loss.
In our experiments, we use equal weighting, so we omit the term from Eq. (11).
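The smoothing described above can be realized with the standard Huber form. This is a sketch: the paper only specifies that the outer absolute value becomes a squared difference below 1, so the exact constants here are an assumption.

```python
def huber_smooth(x, delta=1.0):
    """Smooth surrogate for |x|: quadratic inside [-delta, delta],
    linear (with matching value and slope) outside."""
    if abs(x) <= delta:
        return 0.5 * x * x / delta
    return abs(x) - 0.5 * delta

# The penalized objective of Eq. (11) would then sum J and the smoothed U.
print(huber_smooth(0.5), huber_smooth(2.0))
```

Replacing each outer absolute value in Eqs. (6)-(10) with huber_smooth keeps the penalty differentiable everywhere while preserving its behavior for large discrepancies.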
4 Experiments
We run experiments on synthetic data based on the simulated course-recommendation scenario and
real movie rating data [10]. For each experiment, we investigate whether the learning objectives
augmented with unfairness penalties successfully reduce unfairness.
4.1 Synthetic Data
In our synthetic experiments, we generate simulated course-recommendation data from a block
model as described in Section 3.2. We consider four user groups g ∈ {W, WS, M, MS} and three
item groups h ∈ {Fem, STEM, Masc}. The user groups can be thought of as women who do not
enjoy STEM topics (W), women who do enjoy STEM topics (WS), men who do not enjoy STEM
topics (M), and men who do (MS). The item groups can be thought of as courses that tend to appeal
[Figure 1: six bar-chart panels (Error, Value, Absolute, Underestimation, Overestimation, Parity), each showing average scores under the four sampling schemes U, O, P, and O+P.]
Figure 1: Average unfairness scores for standard matrix factorization on synthetic data generated from different
underrepresentation schemes. For each metric, the four sampling schemes are uniform (U), biased observations
(O), biased populations (P), and both biases (O+P). The reconstruction error and the first four unfairness metrics
follow the same trend, while non-parity exhibits different behavior.
to most women (Fem), STEM courses, and courses that tend to appeal to most men (Masc). Based
on these groups, we consider the rating block model
         Fem   STEM  Masc
     W   0.8   0.2   0.2
L =  WS  0.8   0.8   0.2                (12)
     MS  0.2   0.8   0.8
     M   0.2   0.2   0.8
We also consider two observation block models: one with uniform observation probability across all groups, O^uni = [0.4]_{4×3}, and one with unbalanced observation probability inspired by how students are often encouraged to take certain courses:

              Fem   STEM  Masc
          W   0.6   0.2   0.1
O^bias =  WS  0.3   0.4   0.2           (13)
          MS  0.1   0.3   0.5
          M   0.05  0.5   0.35
We define two different user group distributions: one in which each of the four groups is exactly a
quarter of the population, and an imbalanced setting where 0.4 of the population is in W, 0.1 in WS,
0.4 in MS, and 0.1 in M. This heavy imbalance is inspired by some of the severe gender imbalances
in certain STEM areas today.
For each experiment, we select an observation matrix and user group distribution, generate 400 users
and 300 items, and sample preferences and observations of those preferences from the block models.
Training on these ratings, we evaluate on the remaining entries of the rating matrix, comparing the
predicted rating against the true expected rating, 2L(gi ,hj ) − 1.
4.1.1 Unfairness from different types of underrepresentation
Using standard matrix factorization, we measure the various unfairness metrics under the different
sampling conditions. We average over five random trials and plot the average score in Fig. 1. We
label the settings as follows: uniform user groups and uniform observation probabilities (U), uniform groups and biased observation probabilities (O), biased user group populations and uniform
observations (P), and biased populations and biased observations (O+P).
The statistics demonstrate that each type of underrepresentation contributes to various forms of unfairness. For all metrics except parity, there is a strict order of unfairness: uniform data is the most
Table 1: Average error and unfairness metrics for synthetic data using different fairness objectives. The best
scores and those that are statistically indistinguishable from the best are printed in bold. Each row represents a
different unfairness penalty, and each column is the measured metric on the expected value of unseen ratings.
Unfairness   Error            Value            Absolute         Underestimation  Overestimation   Non-Parity
None         0.317 ± 1.3e-02  0.649 ± 1.8e-02  0.443 ± 2.2e-02  0.107 ± 6.5e-03  0.544 ± 2.0e-02  0.362 ± 1.6e-02
Value        0.130 ± 1.0e-02  0.245 ± 1.4e-02  0.177 ± 1.5e-02  0.063 ± 4.1e-03  0.199 ± 1.5e-02  0.324 ± 1.2e-02
Absolute     0.205 ± 8.8e-03  0.535 ± 1.6e-02  0.267 ± 1.3e-02  0.135 ± 6.2e-03  0.400 ± 1.4e-02  0.294 ± 1.0e-02
Under        0.269 ± 1.6e-02  0.512 ± 2.3e-02  0.401 ± 2.4e-02  0.060 ± 3.5e-03  0.456 ± 2.3e-02  0.357 ± 1.6e-02
Over         0.130 ± 6.5e-03  0.296 ± 1.2e-02  0.172 ± 1.3e-02  0.074 ± 6.0e-03  0.228 ± 1.1e-02  0.321 ± 1.2e-02
Non-Parity   0.324 ± 1.3e-02  0.697 ± 1.8e-02  0.453 ± 2.2e-02  0.124 ± 6.9e-03  0.573 ± 1.9e-02  0.251 ± 1.0e-02
fair; biased observations is the next most fair; biased populations is worse; and biasing the populations and observations causes the most unfairness. The squared rating error also follows this same
trend. In contrast, non-parity behaves differently: it is heavily amplified by biased observations but seems unaffected by biased populations. Note that when the observations are imbalanced, one should actually expect non-parity in the labeled ratings, so a high non-parity score does not necessarily indicate an
unfair situation. The other unfairness metrics, on the other hand, describe examples of unfair behavior by the rating predictor. These tests verify that unfairness can occur with imbalanced populations
or observations, even when the measured ratings accurately represent user preferences.
4.1.2 Optimization of unfairness metrics
As before, we generate rating data using the block model under the most imbalanced setting: The
user populations are imbalanced, and the sampling rate is skewed. We provide the sampled ratings
to the matrix factorization algorithms and evaluate on the remaining entries of the expected rating
matrix. We again use two-dimensional vectors to represent the users and items, a regularization term
of λ = 10−3 , and optimize for 250 iterations using the full gradient. We generate three datasets each
and measure squared reconstruction error and the six unfairness metrics.
The results are listed in Table 1. For each metric, we print in bold the best average score and
any scores that are not statistically significantly distinct according to paired t-tests with threshold
0.05. The results indicate that the learning algorithm successfully minimizes the unfairness penalties,
generalizing to unseen, held-out user-item pairs. Moreover, reducing any unfairness metric does not lead
to a significant increase in reconstruction error.
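The bolding rule can be reproduced with a paired t statistic over per-trial scores. This standalone sketch (toy numbers, not the paper's raw trial data) compares |t| against the two-sided 5% critical value for the appropriate degrees of freedom:

```python
import math

def paired_t(xs, ys):
    """t statistic of a paired t-test on two equal-length score lists."""
    d = [x - y for x, y in zip(xs, ys)]
    n = len(d)
    mean = sum(d) / n
    var = sum((di - mean) ** 2 for di in d) / (n - 1)
    return mean / math.sqrt(var / n)

# Hypothetical per-trial errors of two objectives; with n = 3 trials, the
# two-sided 5% critical value for n - 1 = 2 degrees of freedom is 4.303.
a = [0.317, 0.320, 0.314]
b = [0.130, 0.128, 0.131]
t = paired_t(a, b)
print(round(t, 2))
```

Here |t| far exceeds the critical value, so the two objectives' scores would be considered statistically distinguishable and only the better one would be bolded.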
The complexity of computing the unfairness metrics is similar to that of the error computation,
which is linear in the number of ratings, so adding the fairness term approximately doubles the
training time. In our implementation, learning with fairness terms takes longer because loops and
backpropagation introduce extra overhead. For example, with synthetic data of 400 users and 300
items, it takes 13.46 seconds to train a matrix factorization model without any unfairness term and
43.71 seconds for one with value unfairness.
While optimizing each metric leads to improved performance on itself (see the highlighted entries
in Table 1), a few trends are worth noting. Optimizing any of our new unfairness metrics almost
always reduces the other forms of unfairness. An exception is that optimizing absolute unfairness
leads to an increase in underestimation. Value unfairness is closely related to underestimation and
overestimation, since optimizing value unfairness is even more effective at reducing underestimation
and overestimation than directly optimizing them. Also, optimizing value and overestimation are
more effective in reducing absolute unfairness than directly optimizing it. Finally, optimizing parity
unfairness leads to increases in all unfairness metrics except absolute unfairness and parity itself.
These relationships among the metrics suggest a need for practitioners to decide which types of
fairness are most important for their applications.
4.2 Real Data
We use the Movielens Million Dataset [10], which contains ratings (from 1 to 5) by 6,040 users of
3,883 movies. The users are annotated with demographic variables including gender, and the movies
are each annotated with a set of genres. We manually selected genres that feature different forms of
Table 2: Gender-based statistics of movie genres in Movielens data.
                         Romance  Action  Sci-Fi  Musical  Crime
Count                        325     425     237       93    142
Ratings per female user    54.79   52.00   31.19    15.04  17.45
Ratings per male user      36.97   82.97   50.46    10.83  23.90
Average rating by women     3.64    3.45    3.42     3.79   3.65
Average rating by men       3.55    3.45    3.44     3.58   3.68
Table 3: Average error and unfairness metrics for movie-rating data using different fairness objectives.
Unfairness   Error            Value            Absolute         Underestimation  Overestimation   Non-Parity
None         0.887 ± 1.9e-03  0.234 ± 6.3e-03  0.126 ± 1.7e-03  0.107 ± 1.6e-03  0.153 ± 3.9e-03  0.036 ± 1.3e-03
Value        0.886 ± 2.2e-03  0.223 ± 6.9e-03  0.128 ± 2.2e-03  0.102 ± 1.9e-03  0.148 ± 4.9e-03  0.041 ± 1.6e-03
Absolute     0.887 ± 2.0e-03  0.235 ± 6.2e-03  0.124 ± 1.7e-03  0.110 ± 1.8e-03  0.151 ± 4.2e-03  0.023 ± 2.7e-03
Under        0.888 ± 2.2e-03  0.233 ± 6.8e-03  0.128 ± 1.8e-03  0.102 ± 1.7e-03  0.156 ± 4.2e-03  0.058 ± 9.3e-04
Over         0.885 ± 1.9e-03  0.234 ± 5.8e-03  0.125 ± 1.6e-03  0.112 ± 1.9e-03  0.148 ± 4.1e-03  0.015 ± 2.0e-03
Non-Parity   0.887 ± 1.9e-03  0.236 ± 6.0e-03  0.126 ± 1.6e-03  0.110 ± 1.7e-03  0.152 ± 3.9e-03  0.010 ± 1.5e-03
gender imbalance and only consider movies that list these genres. Then we filter the users to only
consider those who rated at least 50 of the selected movies.
The genres we selected are action, crime, musical, romance, and sci-fi. We selected these genres
because they each have a noticeable gender effect in the data. Women rate musical and romance
films higher and more frequently than men. Women and men both score action, crime, and sci-fi
films about equally, but men rate these films much more frequently. Table 2 lists these statistics in
detail. After filtering by genre and rating frequency, we have 2,953 users and 1,006 movies in the
dataset.
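The genre-and-frequency filtering can be sketched as below. The records and the threshold of 2 (the paper uses 50) are hypothetical, and we treat "lists these genres" as "lists at least one selected genre":

```python
from collections import Counter

GENRES = {"Action", "Crime", "Musical", "Romance", "Sci-Fi"}
MIN_RATINGS = 2  # the paper requires 50; scaled down for this toy example

# (user, movie, genres-of-movie, rating) -- hypothetical toy records.
ratings = [
    (1, "m1", {"Action"}, 4), (1, "m2", {"Sci-Fi"}, 5),
    (2, "m3", {"Drama"}, 3),  (2, "m1", {"Action"}, 2),
    (3, "m2", {"Sci-Fi"}, 4),
]

# Keep only ratings of movies that list at least one selected genre.
selected = [r for r in ratings if r[2] & GENRES]
# Keep only users with enough ratings among the selected movies.
counts = Counter(u for u, *_ in selected)
filtered = [r for r in selected if counts[r[0]] >= MIN_RATINGS]
print(sorted({u for u, *_ in filtered}))
```

Only user 1 survives here: user 2 has a single in-genre rating and user 3 only one rating overall, so both fall below the threshold.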
We run five trials in which we randomly split the ratings into training and testing sets, train each
objective function on the training set, and evaluate each metric on the testing set. The average scores
are listed in Table 3, where bold scores again indicate being statistically indistinguishable from the
best average score. On real data, the results show that optimizing each unfairness metric leads to the
best performance on that metric without a significant change in the reconstruction error. As in the
synthetic data, optimizing value unfairness leads to the most decrease on under- and overestimation.
Optimizing non-parity again causes an increase or no change in almost all the other unfairness
metrics.
5 Conclusion
In this paper, we discussed various types of unfairness that can occur in collaborative filtering. We
demonstrate that these forms of unfairness can occur even when the observed rating data is correct,
in the sense that it accurately reflects the preferences of the users. We identify two forms of data
bias that can lead to such unfairness. We then demonstrate that augmenting matrix-factorization objectives with these unfairness metrics as penalty functions enables a learning algorithm to minimize
each of them. Our experiments on synthetic and real data show that minimization of these forms of
unfairness is possible with no significant increase in reconstruction error.
We also demonstrate a combined objective that penalizes both overestimation and underestimation.
Minimizing this objective leads to small unfairness penalties for the other forms of unfairness. Using
this combined objective may be a good approach for practitioners. However, no single objective was
the best for all unfairness metrics, so it remains necessary for practitioners to consider precisely
which form of fairness is most important in their application and optimize that specific objective.
Future Work While our work in this paper focused on improving fairness among users so that
the model treats different groups of users fairly, we did not address fair treatment of different item
groups. The model could be biased toward certain items, e.g., performing better at prediction for
some items than others in terms of accuracy or over- and underestimation. Achieving fairness for
both users and items may be important when considering that the items may also suffer from discrimination or bias, for example, when courses are taught by instructors with different demographics.
Our experiments demonstrate that minimizing empirical unfairness generalizes, but this generalization is dependent on data density. When ratings are especially sparse, the empirical fairness does not
always generalize well to held-out predictions. We are investigating methods that are more robust to
data sparsity in future work.
Moreover, our fairness metrics assume that users rate items according to their true preferences. This
assumption is likely to be violated in real data, since ratings can also be influenced by various
environmental factors. For example, in education, a student's rating for a course also depends on whether
the course has an inclusive and welcoming learning environment. However, addressing this type of
bias may require additional information or external interventions beyond the provided rating data.
Finally, we are investigating methods to reduce unfairness by directly modeling the two-stage sampling process we used to generate synthetic, biased data. We hypothesize that by explicitly modeling
the rating and observation probabilities as separate variables, we may be able to derive a principled,
probabilistic approach to address these forms of data imbalance.
References
[1] D. N. Beede, T. A. Julian, D. Langdon, G. McKittrick, B. Khan, and M. E. Doms. Women in
STEM: A gender gap to innovation. U.S. Department of Commerce, Economics and Statistics
Administration, 2011.
[2] A. Beutel, E. H. Chi, Z. Cheng, H. Pham, and J. Anderson. Beyond globally optimal: Focused
learning for improved recommendations. In Proceedings of the 26th International Conference
on World Wide Web, pages 203–212. International World Wide Web Conferences Steering
Committee, 2017.
[3] S. Broad and M. McGee. Recruiting women into computer science and information systems.
Proceedings of the Association Supporting Computer Users in Education Annual Conference,
pages 29–40, 2014.
[4] O. Chausson. Who watches what? Assessing the impact of gender and personality on film
preferences. http://mypersonality.org/wiki/doku.php?id=movie_tastes_and_personality, 2010.
[5] M.-I. Dascalu, C.-N. Bodea, M. N. Mihailescu, E. A. Tanase, and P. Ordoñez de Pablos. Educational recommender systems and their application in lifelong learning. Behaviour & Information Technology, 35(4):290–297, 2016.
[6] T. N. Daymont and P. J. Andrisani. Job preferences, college major, and the gender gap in
earnings. Journal of Human Resources, pages 408–428, 1984.
[7] M. D. Ekstrand, J. T. Riedl, J. A. Konstan, et al. Collaborative filtering recommender systems.
Foundations and Trends in Human-Computer Interaction, 4(2):81–173, 2011.
[8] A. L. Griffith. Persistence of women and minorities in STEM field majors: Is it the school that
matters? Economics of Education Review, 29(6):911–922, 2010.
[9] M. Hardt, E. Price, N. Srebro, et al. Equality of opportunity in supervised learning. In Advances
in Neural Information Processing Systems, pages 3315–3323, 2016.
[10] F. M. Harper and J. A. Konstan. The Movielens datasets: History and context. ACM Transactions on Interactive Intelligent Systems (TiiS), 5(4):19, 2016.
[11] P. W. Holland and S. Leinhardt. Local structure in social networks. Sociological Methodology,
7:1–45, 1976.
[12] P. J. Huber. Robust estimation of a location parameter. The Annals of Mathematical Statistics,
pages 73–101, 1964.
[13] T. Kamishima, S. Akaho, H. Asoh, and J. Sakuma. Enhancement of the neutrality in recommendation. In Decisions@ RecSys, pages 8–14, 2012.
[14] T. Kamishima, S. Akaho, H. Asoh, and J. Sakuma. Efficiency improvement of neutrality-enhanced recommendation. In Decisions@ RecSys, pages 1–8, 2013.
[15] T. Kamishima, S. Akaho, H. Asoh, and J. Sakuma. Correcting popularity bias by enhancing
recommendation neutrality. In RecSys Posters, 2014.
[16] T. Kamishima, S. Akaho, H. Asoh, and I. Sato. Model-based approaches for independence-enhanced recommendation. In Data Mining Workshops (ICDMW), 2016 IEEE 16th International Conference on, pages 860–867. IEEE, 2016.
[17] T. Kamishima, S. Akaho, and J. Sakuma. Fairness-aware learning through regularization approach. In 11th International Conference on Data Mining Workshops (ICDMW), pages 643–
650. IEEE, 2011.
[18] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[19] Y. Koren, R. Bell, and C. Volinsky. Matrix factorization techniques for recommender systems.
Computer, 42(8), 2009.
[20] K. Lum and J. Johndrow. A statistical framework for fair predictive algorithms. arXiv preprint
arXiv:1610.08077, 2016.
[21] B. Marlin, R. S. Zemel, S. Roweis, and M. Slaney. Collaborative filtering and the missing at
random assumption. arXiv preprint arXiv:1206.5267, 2012.
[22] B. M. Marlin and R. S. Zemel. Collaborative prediction and ranking with non-random missing
data. In Proceedings of the third ACM conference on Recommender systems, pages 5–12. ACM,
2009.
[23] D. Pedreshi, S. Ruggieri, and F. Turini. Discrimination-aware data mining. In Proceedings of
the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,
pages 560–568. ACM, 2008.
[24] C. V. Sacin, J. B. Agapito, L. Shafti, and A. Ortigosa. Recommendation in higher education
using data mining techniques. In Educational Data Mining, 2009.
[25] S. Sahebi and P. Brusilovsky. It takes two to tango: An exploration of domain pairs for cross-domain collaborative filtering. In Proceedings of the 9th ACM Conference on Recommender
Systems, pages 131–138. ACM, 2015.
[26] E. Smith. Women into science and engineering? Gendered participation in higher education
STEM subjects. British Educational Research Journal, 37(6):993–1014, 2011.
[27] N. Thai-Nghe, L. Drumond, A. Krohn-Grimberghe, and L. Schmidt-Thieme. Recommender
system for predicting student performance. Procedia Computer Science, 1(2):2811–2819,
2010.
[28] M. B. Zafar, I. Valera, M. Gomez Rodriguez, and K. P. Gummadi. Fairness constraints: Mechanisms for fair classification. arXiv preprint arXiv:1507.05259, 2017.
[29] R. Zemel, Y. Wu, K. Swersky, T. Pitassi, and C. Dwork. Learning fair representations. In
Proceedings of the 30th International Conference on Machine Learning, pages 325–333, 2013.
New Integrality Gap Results for the Firefighters Problem on
Trees
Parinya Chalermsook1 and Daniel Vaz1,2
arXiv:1601.02388v3 [cs.DM] 29 Jul 2016
1 Max-Planck-Institut für Informatik, Saarbrücken, Germany
2 Graduate School of Computer Science, Saarland University, Saarbrücken, Germany
{parinya, ramosvaz}@mpi-inf.mpg.de
1st August 2016
Abstract
In the firefighter problem on trees, we are given a tree G = (V, E) together with a vertex
s ∈ V where the fire starts spreading. At each time step, the firefighters can pick one vertex
while the fire spreads from burning vertices to all their neighbors that have not been picked.
The process stops when the fire can no longer spread. The objective is to find a strategy
that maximizes the total number of vertices that do not burn. This is a simple mathematical
model, introduced in 1995, that abstracts the spreading nature of, for instance, fire, viruses,
and ideas. The firefighter problem is NP-hard and admits a (1 − 1/e) approximation based
on rounding the canonical LP. Recently, a PTAS was announced [1].¹
The goal of this paper is to develop better understanding on the power of LP relaxations
for the firefighter problem. We first show a matching lower bound of (1 − 1/e + ε) on the
integrality gap of the canonical LP. This result relies on a powerful combinatorial gadget that
can be used to derive integrality gap results in other related settings. Next, we consider the
canonical LP augmented with simple additional constraints (as suggested by Hartke). We
provide several evidences that these constraints improve the integrality gap of the canonical
LP: (i) Extreme points of the new LP are integral for some known tractable instances and
(ii) A natural family of instances that are bad for the canonical LP admits an improved
approximation algorithm via the new LP. We conclude by presenting a 5/6 integrality gap
instance for the new LP.
1 Introduction
Consider the following graph-theoretic model that abstracts the fire spreading process: We are
given graph G = (V, E) together with the source vertex s where the fire starts. At each time
step, we are allowed to pick some vertices in the graph to be saved, and the fire spreads from
burning vertices to their neighbors that have not been saved so far. The process terminates
when the fire cannot spread any further. This model was introduced in 1995 [13] and has been
used extensively by researchers in several fields as an abstraction of epidemic propagation.
There are two important variants of the firefighters problem. (i) In the maximization variant
(Max-FF), we are given graph G and source s, and we are allowed to pick one vertex per time
step. The objective is to maximize the number of vertices that do not burn. And (ii) In
the minimization variant (Min-FF), we are given a graph G, a source s, and a terminal set
¹ The (1 − 1/e) approximation remained the best until very recently, when Adjiashvili et al. [1] showed a PTAS. Their PTAS does not bound the LP gap.
X ⊆ V (G), and we are allowed to pick b vertices per time step. The goal is to save all terminals
in X , while minimizing the budget b.
In this paper, we focus on the Max-FF problem. The problem is n^{1−ε}-hard to approximate
in general graphs [2], so there is no hope to obtain any reasonable approximation guarantee.
Past research, however, has focused on sparse graphs such as trees or grids. Much better
approximation algorithms are known on trees: The problem is NP-hard [15] even on trees of
degree at most three, but it admits a (1 − 1/e) approximation algorithm. For more than a
decade [2, 6, 5, 10, 14, 15], there was no progress on the approximability status of this problem,
until a PTAS was recently discovered [1].
Besides the motivation of studying epidemic propagation, the firefighter problem and its
variants are interesting due to their connections to other classical optimization problems:
• (Set cover) The firefighter problem is a special case of the maximum coverage problem with
group budget constraints (MCG) [7]: Given a collection of sets S = {S_1, . . . , S_m}, S_i ⊆ X,
together with group constraints, i.e. a partition of S into groups G_1, . . . , G_ℓ, we are
interested in choosing one set from each group in a way that maximizes the total number
of elements covered, i.e. a feasible solution is a subset S′ ⊆ S where |S′ ∩ G_j| ≤ 1 for all j,
and |⋃_{S_i∈S′} S_i| is maximized. It is not hard to see that Max-FF is a special case of MCG.
We refer the readers to the discussion by Chekuri and Kumar [7] for more applications of
MCG.
• (Cut) In a standard minimum node-cut problem, we are given a graph G together with
a source-sink pair s, t ∈ V (G). Our goal is to find a collection of nodes V 0 ⊆ V (G) such
that G \ V 0 has s and t in distinct connected components. Anshelevich et al. [2] discussed
that the firefighters’ solution can be seen as a “cut-over-time” in which the cut must be
produced gradually over many timesteps. That is, in each time step t, the algorithm is
allowed to choose a vertex set V_t′ to remove from the graph G, and again the final goal
is to “disconnect” s from t². This cut-over-time problem is exactly equivalent to the
minimization variant of the firefighter problem. We refer to [2] for more details about this
equivalence.
1.1 Our Contributions
In this paper, we are interested in developing a better understanding of the Max-FF problem
from the perspective of LP relaxation. The canonical LP relaxation has been used to obtain
the known (1 − 1/e) approximation algorithm via straightforward independent LP rounding
(each node is picked independently with the probability proportional to its LP-value). So far, it
was not clear whether an improvement was possible via this LP, for instance, via sophisticated
dependent rounding schemes³. Indeed, for the corresponding minimization variant, Min-FF,
Chalermsook and Chuzhoy designed a dependent rounding scheme for the canonical LP in
order to obtain an O(log* n) approximation algorithm, improving upon an O(log n) approximation
obtained via independent LP rounding. In this paper, we are interested in studying this potential
improvement for Max-FF.
Our first result refutes such a possibility for Max-FF. In particular, we show that the integrality gap of the standard LP relaxation can be arbitrarily close to (1 − 1/e).
Theorem 1. For any ε > 0, there is an instance (G, s) (whose size depends on ε) such that the
ratio between the optimal integral solution and the fractional one is at most (1 − 1/e + ε).
² The notion of disconnecting the vertices here is slightly non-standard.
³ Cai, Verbin, and Yang [5] claimed an LP-respecting integrality gap of (1 − 1/e), but many natural rounding algorithms in the context of this problem are not LP-respecting, e.g. in [6].
Our techniques rely on a powerful combinatorial gadget that can be used to prove integrality
gap results in some other settings studied in the literature. In particular, in the b-Max-FF
problem, the firefighters can pick up to b vertices per time step, and the goal is to maximize the
number of saved vertices. We provide an integrality gap of (1 − 1/e) for the b-Max-FF problem
for all constant b ∈ N, thus matching the algorithmic result of [9]. In the setting where an input
tree has degree at most d ∈ [4, ∞), we show an integrality gap result of (1 − 1/e + O(1/√d)). The
best known algorithmic result in this setting was previously a (1 − 1/e + Ω(1/d)) approximation
due to [14].
Motivated by the aforementioned negative results, we search for a stronger LP relaxation for
the problem. We consider adding a set of valid linear inequalities, as suggested by Hartke [12].
We show the following evidences that the new LP is a stronger relaxation than the canonical
LP.
• Any extreme point of the new LP is integral for the tractable instances studied by Finbow
et al. [11]. In contrast, we argue that the canonical LP does not satisfy this integrality
property of extreme points.
• A family of instances which captures the bad integrality gap instances given in Theorem 1,
admits a better than (1 − 1/e) approximation algorithm via the new LP.
• When the LP solution is near-integral, e.g. for half-integral solutions, the new LP is
provably better than the old one.
Our results are the first rigorous evidences that Hartke’s constraints lead to improvements
upon the canonical LP. All the aforementioned algorithmic results exploit the new LP constraints
in dependent LP rounding procedures. In particular, we propose a two-phase dependent rounding algorithm, which can be used in deriving the second and third results. We believe the new
LP has an integrality gap strictly better than (1 − 1/e), but we are unable to analyze it.
Finally, we show a limitation of the new LP by presenting a family of instances, whose
integrality gap can be arbitrarily close to 5/6. This improves the known integrality gap ratio [12],
and puts the integrality gap answer somewhere between (1 − 1/e) and 5/6. Closing this gap is,
in our opinion, an interesting open question.
Organization: In Section 2, we formally define the problem and present the LP relaxation.
In Section 3, we present the bad integrality gap instances. We present the LP augmented with
Hartke’s constraints in Section 4 and discuss the relevant evidences of its power in comparison to
the canonical LP. Some proofs are omitted for space constraint, and are presented in Appendix.
Related results: King and MacGillivray showed that the firefighter problem on trees is
solvable in polynomial time if the input tree has degree at most three, with the fire starting at
a degree-2 vertex. From the exponential-time algorithms perspective, Cai et al. showed a
2^{O(√n log n)}-time exact algorithm. The discrete mathematics community pays particularly
high attention to the firefighter problem on grids [16, 10], and there has also been some work
on infinite graphs [13].
The problem has also received a lot of attention from the parameterized complexity
perspective [8, 3, 5] and on many special cases, e.g., when the tree has bounded pathwidth [8]
and on bounded degree graphs [8, 4].
Recent update: Very recently, Adjiashvili et al. [1] showed a polynomial time approximation scheme (PTAS) for the Max-FF problem, therefore settling the approximability status.
Their results, however, do not bound the LP integrality gap. We believe that the integrality
gap questions are interesting despite the known approximation guarantees.
2 Preliminaries
A formal definition of the problem is as follows. We are given a graph G and a source vertex s
where the fire starts spreading. A strategy is described by a collection of vertices U = {u_t}_{t=1}^n
where ut ∈ V (G) is the vertex picked by firefighters at time t. We say that a vertex u ∈ V (G)
is saved by the strategy U if, for every path P = (s = v_0, . . . , v_z = u) from s to u, we have
vi ∈ {u1 , . . . , ui } for some i = 1, . . . , z. A vertex v not saved by U is said to be a burning vertex.
The objective of the problem is to compute U so as to maximize the total number of saved
vertices. Denote by OPT(G, s) the number of vertices saved by an optimal solution.
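To make the process concrete, here is a minimal Python sketch of the fire-spreading dynamics (the graph encoding and the example tree are ours, not from the paper); it returns the number of vertices a given strategy saves:

```python
def saved_count(adj, s, strategy):
    """Simulate the firefighter process on a graph.

    adj      -- dict mapping each vertex to its list of neighbors
    s        -- the vertex where the fire starts
    strategy -- strategy[t] is the vertex picked at time step t + 1
    Returns the number of vertices that do not burn."""
    burning, protected = {s}, set()
    t = 0
    while True:
        # The firefighters move first: protect the next pick if it is not burning.
        if t < len(strategy) and strategy[t] not in burning:
            protected.add(strategy[t])
        # The fire spreads to all unprotected neighbors of burning vertices.
        frontier = {w for v in burning for w in adj[v]
                    if w not in burning and w not in protected}
        if not frontier:  # the fire can no longer spread
            return len(adj) - len(burning)
        burning |= frontier
        t += 1

# A 7-vertex binary tree rooted at 0: protecting 1, then 5, saves {1, 3, 4, 5}.
adj = {0: [1, 2], 1: [0, 3, 4], 2: [0, 5, 6], 3: [1], 4: [1], 5: [2], 6: [2]}
print(saved_count(adj, 0, [1, 5]))  # -> 4
```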
When G is a tree, we think of G as being partitioned into layers L1 , . . . , Lλ where λ is the
height of the tree, and Li contains vertices whose distance is exactly i from s. Every strategy
has the following structure.
Proposition 2. Consider the firefighters problem’s instance (G, s) where G is a tree. Let
U = {u1 , . . . , un } be any strategy. Then there is another strategy U 0 = {u0t } where u0t belongs to
layer t in G, and U 0 saves at least as many vertices as U does.
We remark that this structural result only holds when an input graph G is a tree.
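Proposition 2 also yields a tiny brute-force solver for OPT(T, s) on trees: restrict attention to strategies that pick one vertex from layer t at time t, and note that a vertex is then saved exactly when some vertex on its root path at depth t is the time-t pick. A sketch (our own, for small instances only, since the search is exponential):

```python
from itertools import product

def brute_force_opt(children, s):
    """Compute OPT(T, s) on a small tree by trying every layer-respecting
    strategy (Proposition 2). `children` maps a vertex to its child list."""
    # Collect the layers L_1, L_2, ... by breadth-first traversal from s.
    layers, current = [], children.get(s, [])
    while current:
        layers.append(current)
        current = [c for v in current for c in children.get(v, [])]

    def saved(v, depth, safe, choice):
        # A vertex is saved iff some vertex on its root path at depth t
        # equals the strategy's pick for time t.
        safe = safe or (depth >= 1 and choice[depth - 1] == v)
        return safe + sum(saved(c, depth + 1, safe, choice)
                          for c in children.get(v, []))

    return max(saved(s, 0, False, choice)
               for choice in product(*layers)) if layers else 0

# Example: a complete binary tree of height 2 rooted at 0.
children = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
print(brute_force_opt(children, 0))  # -> 4
```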
LP Relaxation: This paper focuses on the linear programming aspect of the problem. For
any vertex v, let Pv denote the (unique) path from s to v, and let Tv denote the subtree rooted
at v. A natural LP relaxation is denoted by (LP-1): We have variable xv indicating whether v
is picked by the solution, and yv indicating whether v is saved.
(LP-1)
max Σ_{v∈V} y_v
s.t. Σ_{v∈L_j} x_v ≤ 1 for every layer j
     y_v ≤ Σ_{u∈P_v} x_u for all v ∈ V
     x_v, y_v ∈ [0, 1] for all v

(LP-2)
max Σ_{v∈X} y_v
s.t. Σ_{v∈L_j} x_v ≤ 1 for every layer j
     y_v ≤ Σ_{u∈P_v} x_u for all v ∈ X
     x_v, y_v ∈ [0, 1] for all v
Let LP(T, s) denote the optimal fractional LP value for an instance (T, s). The integrality
gap gap(T, s) of the instance (T, s) is defined as gap(T, s) = OPT(T, s)/LP(T, s). The integrality
gap of the LP is defined as inf T gap(T, s).
Firefighters with terminals: We consider a more general variant of the problem, where
we are only interested in saving a subset X of vertices, which we call terminals. The goal is
now to maximize the number of saved terminals. An LP formulation of this problem, given an
instance (T, v, X ), is denoted by (LP-2). The following lemma argues that these two variants
are “equivalent” from the perspectives of LP relaxation.
Lemma 3. Let (T, X, s), with |X| > 0, be an input for the terminal firefighter problem that
gives an integrality gap of γ for (LP-2), and such that the value of the fractional optimal solution is
at least 1. Then, for any ε > 0, there is an instance (T′, s′) that gives an integrality gap of γ + ε
for (LP-1).
Proof. Let M = 2|V(T)|/ε. Starting from (T, X, s), we construct an instance (T′, s′) by adding
M children to each vertex in X, so the number of vertices in T′ is |V(T′)| = |V(T)| + M|X|.
We denote the copies of X in T′ by X′ and the set of their added children by X′′. The root of
the new tree, s′, is the same as s (the root of T). Now we argue that the instance (T′, s′) has
the desired integrality gap, i.e. we argue that OPT(T′, s′) ≤ (γ + ε) LP(T′, s′).
Let (x′, y′) be an integral solution to the instance (T′, s′). We upper bound the number of
vertices saved by this solution, i.e. we upper bound Σ_{v∈V(T′)} y′_v. We analyze three cases:

• For a vertex v ∈ V(T′) \ X′′, we upper bound the term y′_v by 1, and so the sum
Σ_{v∈V(T′)\X′′} y′_v by |V(T)|.

• Now define X̃ ⊆ X′′ as the set of vertices v for which y′_v = 1 but y′_u = 0 for the parent u of
v. This means that x′_v = 1 for all vertices in X̃. Notice that Σ_{v∈X̃} y′_v ≤ |X|: We break the
set X̃ into {X̃_u}_{u∈V(T′)} where X̃_u = {v ∈ X̃ : u is the parent of v}. The LP constraint
guarantees that Σ_{v∈X̃_u} y′_v = Σ_{v∈X̃_u} x′_v ≤ 1 (all vertices in X̃_u belong to the same layer.)
Summing over all such u ∈ X′, we get the desired bound.

• Finally, consider the term Σ_{v∈X′′\X̃} y′_v. Let (x*, y*) be an optimal fractional solution
to (T, X, s) for (LP-2). We only need to care about the vertices v such that y′_u = 1
for the parent u of v. This term is upper bounded by M Σ_{u∈X′} y′_u, which is at most
Mγ (Σ_{v∈X} y*_v), due to the fact that the solution (x′, y′) induces an integral solution on
the instance (T, X, s).

Combining the three cases, we get |V(T)| + |X| + Mγ (Σ_{v∈X} y*_v) ≤ εM + γM (Σ_{v∈X} y*_v) ≤
M(γ + ε) (Σ_{v∈X} y*_v), if Σ_{v∈X} y*_v ≥ 1. Now, notice that the fractional solution (x*, y*) for (LP-2)
on the instance (T, X, s) is also feasible for (LP-1) on (T′, s′) with at least a multiplicative factor
of M times the objective value: Each fractional saving of u ∈ X′ contributes to saving all M
children of u. Therefore, M (Σ_{v∈X} y*_v) ≤ M · LP(T′, s′), thus concluding the proof.
We will, from now on, focus on studying the integrality gap of (LP-2).
3 Integrality Gap of (LP-2)
We first discuss the integrality gap of (LP-2) for a general tree. We use the following combinatorial gadget.

Gadget: An (M, k, δ)-good gadget is a collection of trees T = {T_1, . . . , T_M}, with roots
r_1, . . . , r_M where r_i is the root of T_i, and a subset S ⊆ ⋃_i V(T_i) that satisfy the following properties:

• (Uniform depth) We think of these trees as having layers L_0, L_1, . . . , L_h, where L_j is the
union over all trees of all vertices at layer j and L_0 = {r_1, . . . , r_M}. All leaves are in the
same layer L_h.

• (LP-friendly) For any layer L_j, j ≥ 1, we have |S ∩ L_j| ≤ k. Moreover, for any tree T_i
and a leaf v ∈ V(T_i), the unique path from r_i to v must contain at least one vertex in S.

• (Integrally adversarial) Let B ⊆ {r_1, . . . , r_M} be any subset of roots. Consider a subset of
vertices U = {u_j}_{j=1}^h such that u_j ∈ L_j. For r_i ∈ B and a leaf v ∈ L_h ∩ V(T_i), we say that
v is (U, B)-risky if the unique path from r_i to v does not contain any vertex in U. There
must be at least (1 − 1/k − δ)(|B|/M)|L_h| vertices in L_h that are (U, B)-risky, for all choices of
B and U.

We say that vertices in S are special and all other vertices are regular.
Lemma 4. For any integers k ≥ 2, M ≥ 1, and any real number δ > 0, an (M, k, δ)-good gadget
exists. Moreover, the gadget contains at most (k/δ)^{O(M)} vertices.
We first show how to use this lemma to derive our final construction. The proof of the
lemma follows later.
Construction: Our construction proceeds in k phases, and we will define it inductively.
The first phase of the construction is simply a (1, k, δ)-good gadget. Now, assume that we
have constructed the instance up to phase q. Let l_1, . . . , l_{M_q} ∈ L_{α_q} be the leaves after the
construction of phase q, all of which lie in layer α_q. In phase q + 1, we take the (M_q, k, δ)-good
gadget (T_q, {r_q}, S_q); recall that such a gadget consists of M_q trees. For each i = 1, . . . , M_q, we
unify each root r_i with the leaf l_i. This completes the description of the construction.
Denote by S̄_q = ⋃_{q′≤q} S_{q′} the set of all special vertices in the first q phases. After phase q,
we argue that our construction satisfies the following properties:
• All leaves are in the same layer α_q.
• For every layer L_j, |L_j ∩ S̄_q| ≤ k. For every path P from the root to v ∈ L_{α_q}, |P ∩ S̄_q| = q.
• For any integral solution U, at least |L_{α_q}| ((1 − 1/k)^q − qδ) vertices of L_{α_q} burn.
It is clear from the construction that the leaves after phase q are all in the same layer. As
to the second property, the properties of the gadget ensure that there are at most k special
vertices per layer. Moreover, consider each path P from the root to some vertex v ∈ L_{α_{q+1}}. We
can split this path into two parts P = P′ ∪ P′′ where P′ starts from the root and ends at some
v′ ∈ L_{α_q}, and P′′ starts at v′ and ends at v. By the induction hypothesis, |P′ ∩ S̄_q| = q and the
second property of the gadget guarantees that |P′′ ∩ S_{q+1}| = 1.
To prove the final property, consider a solution U = {u_1, . . . , u_{α_{q+1}}}, which can be seen as
U′ ∪ U′′ where U′ = {u_1, . . . , u_{α_q}} and U′′ = {u_{α_q+1}, . . . , u_{α_{q+1}}}. By the induction hypothesis,
we have that at least ((1 − 1/k)^q − qδ) |L_{α_q}| vertices in L_{α_q} burn; denote these burning vertices
by B. The third property of the gadget ensures that at least (1 − 1/k − δ)(|B|/M_q) |L_{α_{q+1}}| vertices
in L_{α_{q+1}} must be (U′′, B)-risky. For each risky vertex v ∈ L_{α_{q+1}}, the unique path from the root
to its ancestor v′ ∈ B does not contain any vertex in U′, and the path from v′ to v does not contain a
vertex in U′′ (due to the fact that v is (U′′, B)-risky.) This implies that such a vertex v must burn.
Therefore, the fraction of burning vertices in layer L_{α_{q+1}} is at least (1 − 1/k − δ)|B|/M_q ≥
(1 − 1/k − δ)((1 − 1/k)^q − qδ), by the induction hypothesis. This number is at least
(1 − 1/k)^{q+1} − (q + 1)δ, maintaining the invariant.
After the construction of all k phases, the leaves are designated as the terminals X. Also,
M_{q+1} ≤ (k/δ)^{2M_q}, which means that, after k phases, M_k is at most a tower function of (k/δ)²,
that is, (k/δ)^{2(k/δ)^{2(k/δ)^{···}}} with k − 1 such exponentiations. The total size of the construction is
Σ_q (k/δ)^{2M_q} ≤ (k/δ)^{2M_k} = O(M_{k+1}).
An example construction, when k = 2, is presented in Figure 3 (in Appendix).
Theorem 5. A fractional solution that assigns x_v = 1/k to each special vertex v saves every
terminal. On the other hand, any integral solution can save at most a 1 − (1 − 1/k)^k + ε fraction.
Proof. We assign the LP solution x_v = 1/k to all special vertices (those vertices in S̄_k), and
x_v = 0 to regular vertices. Since the construction ensures that there are at most k special vertices
per layer, we have Σ_{v∈L_j} x_v ≤ 1 for every layer L_j. Moreover, every terminal is fractionally
saved: For any t ∈ X, the path P_t satisfies |P_t ∩ S̄_k| = k, so we have Σ_{v∈P_t} x_v = 1.
For the integral solution analysis, set δ = ε/k. The proof follows immediately from the
properties of the instance.
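A quick numerical sanity check (ours, not part of the paper) of the bound in Theorem 5: the saved fraction 1 − (1 − 1/k)^k decreases toward 1 − 1/e ≈ 0.632 as the number of phases k grows, with error on the order of 1/(2ek) (cf. footnote 4):

```python
import math

limit = 1 - 1 / math.e          # ~0.6321, the claimed integrality gap
for k in (2, 5, 10, 100, 1000):
    print(k, 1 - (1 - 1 / k) ** k)

# The bound always exceeds the limit and converges to it.
assert all(1 - (1 - 1 / k) ** k > limit for k in range(2, 1000))
assert abs((1 - (1 - 1 / 1000) ** 1000) - limit) < 1e-3
```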
3.1 Proof of Lemma 4
We now show that an (M, k, δ)-good gadget exists for any values M ∈ N, k ∈ N with k ≥ 2, and
δ ∈ R_{>0}. We first describe the construction and then show that it has the desired properties.
Construction: Throughout the construction, we use a structure which we call spider. A
spider is a tree in which every node except the root has at most one child. If a node has no
children (i.e. a leaf), we call it a foot of the spider. We call the paths from the root to each
foot the legs of the spider.
Let D = ⌈4/δ⌉. For each i = 1, . . . , M, the tree T_i is constructed as follows. We have a
spider rooted at r_i that contains kD^{i−1} legs. Its feet are in D^{i−1} consecutive layers, starting at
layer α_i = 1 + Σ_{j<i} D^{j−1}; each such layer has k feet. Denote by S^{(i)} the feet of these spiders.
Next, for each vertex v ∈ S^{(i)}, we have a spider rooted at v, having D^{2M−i+1} feet, all of which
belong to layer α = 1 + Σ_{j≤M} D^{j−1}. The set S is defined as S = ⋃_{i=1}^M S^{(i)}. This concludes the
construction. We will use the following observation:

Observation 6. For each root r_i, the number of leaves of T_i is kD^{2M}.
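Observation 6 is a direct product computation: each of the kD^{i−1} feet of the first spider roots a second spider with D^{2M−i+1} feet, so the exponent i cancels. A one-line check (ours):

```python
def leaves_of_Ti(k, D, M, i):
    # k * D^(i-1) feet, each the root of a spider with D^(2M-i+1) feet.
    return (k * D ** (i - 1)) * D ** (2 * M - i + 1)

# The count is k * D^(2M), independently of i.
k, D, M = 3, 5, 4
assert all(leaves_of_Ti(k, D, M, i) == k * D ** (2 * M) for i in range(1, M + 1))
```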
Analysis: We now prove that the above gadget is (M, k, δ)-good. The construction ensures
that all leaves are in the same layer Lα .
The second property also follows directly from the construction: For i ≠ i′, we have that
S^{(i)} ∩ S^{(i′)} = ∅, and each layer contains exactly k vertices from S^{(i)}. Moreover, any path
from r_i to a leaf of T_i must go through a vertex in S^{(i)}.
The third and final property is established by the following two lemmas.
Lemma 7. For any r_i ∈ B and any subset of vertices U = {u_j}_{j=1}^h such that u_j ∈ L_j, a fraction
of at least (1 − 1/k − 2/D) of S^{(i)} is (U, B)-risky.
Proof. Notice that a vertex v is (U, B)-risky if U is not a vertex cut separating v from B. There
are kD^{i−1} vertex-disjoint paths from r_i ∈ B to vertices in S^{(i)}. But the cut U induced on these
paths contains at most Σ_{i′≤i} D^{i′−1} vertices (because all vertices in S^{(i)} are contained in the
first Σ_{i′≤i} D^{i′−1} ≤ D^{i−1} + 2D^{i−2} layers.) Therefore, at most a (1/k + 2/D) fraction of the vertices in
S^{(i)} can be disconnected by U, and those that are not cut remain (U, B)-risky.
Lemma 8. Let v ∈ S^{(i)} be (U, B)-risky. Then at least a (1 − 2/D) fraction of the descendants
of v in L_α must be (U, B)-risky.

Proof. Consider each v ∈ S^{(i)} that is (U, B)-risky and the collection L_v of leaves that are
descendants of vertex v. Notice that a leaf u ∈ L_v is (U, B)-risky if removing U does not
disconnect vertex v from u.
There are D^{2M−i+1} ≥ D^{M+1} vertex-disjoint paths connecting vertex v with leaves in L_v,
while the cut set U contains at most 2D^M vertices. Therefore, removing U can disconnect at
most a 2/D fraction of the vertices in L_v from v.
Combining the above two lemmas, for each r_i ∈ B, the fraction of leaves of T_i that are
(U, B)-risky is at least (1 − 1/k − 2/D)(1 − 2/D) ≥ (1 − 1/k − 4/D). Therefore, the total
number of such leaves, over all trees in T, is at least (1 − 1/k − δ)|B||L_α|/M.
We extend the construction to other settings in Section 3.1 (in Appendix).
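The inequality (1 − 1/k − 2/D)(1 − 2/D) ≥ 1 − 1/k − 4/D combining the two lemmas is elementary (expanding leaves a surplus of 2/(kD) + 4/D²); a quick grid check (ours):

```python
# Verify (1 - 1/k - 2/D)(1 - 2/D) >= 1 - 1/k - 4/D over a range of parameters.
assert all((1 - 1 / k - 2 / D) * (1 - 2 / D) >= 1 - 1 / k - 4 / D
           for k in range(2, 50) for D in range(4, 200))
```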
Arbitrary number of firefighters: Let b ∈ N. In the b-firefighter problem, at each time
step, the firefighters may choose up to b vertices, and the fire spreads from the burning vertices
to vertices that have not been chosen so far. The goal is to maximize the number of saved
vertices. In this section, we show the following:
Theorem 9. For any integer b ∈ N (independent of |V (G)|), the integrality gap of the canonical
LP can be arbitrarily close to (1 − 1/e).
Proof. To prove this theorem, one only needs to design a slightly different good gadget. That
is, an (M, k, δ)-good gadget is now a collection of trees T with roots r_1, . . . , r_M together with
S ⊆ ⋃_i V(T_i) that satisfy the following properties:
• All leaves of T_i are in the same layer L_h.
• For each layer L_j, we have |S ∩ L_j| ≤ kb. Moreover, for any tree T_i and a leaf v ∈ V(T_i),
the unique path from r_i to v must contain at least one vertex in S.
• For any subset B ⊆ {r_1, . . . , r_M} of roots and for any strategy U, at least (1 − 1/k − δ)|B||L_h|/M
vertices in L_h are (U, B)-risky.
It is not hard to see that these gadgets can be used to construct the integrality gap in the
same way as in the previous section. Details are omitted.
Bounded degrees: Iwakawa et al. showed a (1 − 1/e + Ω(1/d)) approximation algorithm for
instances that have degree at most d. We show an instance where this dependence on 1/d is
almost the best possible that can be obtained from this LP.

Theorem 10. For all d ≥ 4, the integrality gap of (LP-1) on degree-d graphs is (1 − 1/e + O(1/√d)).
Proof. To prove this theorem, we construct a “bounded degree” analogue of our good gadgets.
That is, the (M, k)-good gadget in this setting guarantees that
• All leaves of T_i are in the same layer L_h.
• For each layer L_j, we have |S ∩ L_j| ≤ k. For each tree T_i and each leaf v ∈ V(T_i), the
unique path from r_i to v contains one vertex in S.
• For any subset B ⊆ {r_1, . . . , r_M} and for any strategy U, at least (1 − 1/k − O(1/d))|B||L_h|/M
vertices in L_h are (U, B)-risky.
This gadget can be used to recursively construct the instance in k phases. The final instance
guarantees an integrality gap of 1 − (1 − 1/k)^k + O(k/d). By setting k = √d, we get the integrality
gap of (1 − 1/e + O(1/√d)) as desired⁴.
4 Hartke’s Constraints
Due to the integrality gap result in the previous section, there is no hope to improve the best
known algorithms via the canonical LP relaxation. Hartke [12] suggested adding the following
constraints to narrow down the integrality gap of the LP.
Σ_{u∈P_v ∪ (T_v∩L_j)} x_u ≤ 1 for every vertex v ∈ V(T) and every layer L_j below the layer of v
We write the new LP with these constraints below:
⁴ By analyzing the Taylor series expansion of 1/e − (1 − 1/k)^k, we get the term 1/(2ek) + O(1/k²).
(LP’)
max Σ_{v∈V} y_v
s.t. Σ_{u∈P_v ∪ (T_v∩L_j)} x_u ≤ 1 for all v ∈ V and all layers j below v
     y_v ≤ Σ_{u∈P_v} x_u for all v ∈ V
     x_v, y_v ∈ [0, 1] for all v
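Hartke's constraints can be enumerated mechanically from the tree structure: one constraint per vertex v and layer L_j strictly below v, with support P_v ∪ (T_v ∩ L_j). The following sketch (our own helper; we take P_v to exclude the root s, whose x-value is immaterial) lists those supports:

```python
def hartke_constraints(children, s):
    """Return the Hartke constraint supports for a rooted tree: one frozenset
    P_v ∪ (T_v ∩ L_j) per vertex v and layer j strictly below v's layer.
    The x-values indexed by each set must sum to at most 1."""
    parent, depth, order = {s: None}, {s: 0}, [s]
    for v in order:                      # BFS computing depths and parents
        for c in children.get(v, []):
            parent[c], depth[c] = v, depth[v] + 1
            order.append(c)
    height = max(depth.values())

    def path(v):                         # root path, excluding s itself
        out = []
        while v != s:
            out.append(v)
            v = parent[v]
        return out

    def subtree(v):
        out, stack = [], [v]
        while stack:
            u = stack.pop()
            out.append(u)
            stack.extend(children.get(u, []))
        return out

    cons = set()
    for v in order[1:]:                  # skip the root
        sub = subtree(v)
        for j in range(depth[v] + 1, height + 1):
            layer_part = {u for u in sub if depth[u] == j}
            if layer_part:
                cons.add(frozenset(path(v)) | frozenset(layer_part))
    return cons

# On the height-2 binary tree, vertex 1 with layer 2 yields {1, 3, 4}.
cons = hartke_constraints({0: [1, 2], 1: [3, 4], 2: [5, 6]}, 0)
assert frozenset({1, 3, 4}) in cons
```

Adding a ≤ 1 row for each returned support to (LP-1) gives the strengthened relaxation.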
Proposition 11. Given values {x_v}_{v∈V(T)} that satisfy the first set of constraints, the solution
(x, y) defined by y_v = Σ_{u∈P_v} x_u is feasible for (LP’) and at least as good as any other
feasible (x, y′).
In this section, we study the power of this LP and provide three pieces of evidence that it may be
stronger than (LP-1).
4.1 New properties of extreme points
In this section, we show that the tractable instances of Finbow et al. [11] admit a polynomial time
exact algorithm via (LP’) (in fact, any optimal extreme point of (LP’) is integral). In contrast,
we show that (LP-1) has an extreme point that is not integral.
We first present the following structural lemma.
Lemma 12. Let (x, y) be an optimal extreme point for (LP’) on an instance T rooted at s. Suppose
s has two children, denoted by a and b. Then x_a, x_b ∈ {0, 1}.
Proof. Suppose that x_a, x_b ∈ (0, 1). We will define two solutions (x′, y′) and (x′′, y′′) and derive
that (x, y) can be written as a convex combination of (x′, y′) and (x′′, y′′), a contradiction.
First, we define (x′, y′) by setting x′_b = 1 and x′_a = 0. For each vertex v ∈ T_b, we set x′_v = 0.
For each vertex v ∈ T_a, we define x′_v = x_v/(1 − x_a). We verify that x′ is feasible for (LP’):
For each v ∈ T_a and any layer L_j below v,

Σ_{u∈P_v} x′_u + Σ_{u∈T_v∩L_j} x′_u = (Σ_{u∈P_v} x_u − x_a)/(1 − x_a) + (Σ_{u∈T_v∩L_j} x_u)/(1 − x_a) ≤ (Σ_{u∈P_v∪(T_v∩L_j)} x_u − x_a)/(1 − x_a) ≤ 1

(the last inequality is due to the fact that x is feasible). The constraint is obviously satisfied
for all v ∈ T_b. For the root node v = s, we have Σ_{u∈L_j} x′_u = (Σ_{u∈L_j∩T_a} x_u)/(1 − x_a) ≤ 1, since
Hartke’s constraint for a and layer L_j gives Σ_{u∈L_j∩T_a} x_u ≤ 1 − x_a.
We define (x′′, y′′) analogously: x′′_b = 0, x′′_a = 1. For each vertex v ∈ T_a, we set x′′_v = 0,
and for each v ∈ T_b, we define x′′_v = x_v/(1 − x_b). It can be checked similarly that (x′′, y′′) is a
feasible solution.
Claim 13. If x is an optimal extreme point, then x_a + x_b = 1.
Proof. Observe that, for each v ∈ T_b, y′_v = 1 and, for each v ∈ T_a, y′_v = (y_v − x_a)/(1 − x_a). The
objective value of x′ is |T_b| + Σ_{v∈T_a} y′_v = |T_b| + (1/(1 − x_a)) Σ_{v∈T_a} (y_v − x_a) =
|T_b| + (Σ_{v∈T_a} y_v)/(1 − x_a) − (x_a/(1 − x_a)) |T_a|.
Similarly, the objective value of solution x′′ is |T_a| + (1/(1 − x_b)) Σ_{v∈T_b} (y_v − x_b) =
|T_a| + (Σ_{v∈T_b} y_v)/(1 − x_b) − (x_b/(1 − x_b)) |T_b|.
Consider the convex combination ((1 − x_a)/(2 − x_a − x_b)) x′ + ((1 − x_b)/(2 − x_a − x_b)) x′′. This
solution is feasible and has the objective value of

(1/(2 − x_a − x_b)) · [ (1 − x_a − x_b)(|T_a| + |T_b|) + Σ_{v∈V(T)} y_v ]

If x_a + x_b < 1, we apply the fact that |T_a| + |T_b| > Σ_{v∈V(T)} y_v to get an objective of strictly
more than Σ_{v∈V(T)} y_v, contradicting the fact that (x, y) is optimal.

Finally, we define the convex combination z = (1 − x_a)x′ + x_a x′′. It can be verified easily
that z_v = x_v for all v ∈ V(T).
Finbow et al. instances: In these instances, the tree has degree at most 3 and the root has
degree 2. Finbow et al. [11] showed that this case is polynomial time solvable.
Theorem 14. Let (T, s) be an input instance where T has degree at most 3 and s has degree
two. Let (x, y) be a feasible fractional solution for (LP’). Then there is a polynomial time
algorithm that saves at least Σ_{v∈V(T)} y_v vertices.
Proof. We prove, by induction on the number of nodes in the tree, that for any tree (T′, s′)
that is a Finbow et al. instance and any fractional solution (x, y) for (LP’), there is an integral
solution (x′, y′) such that Σ_{v∈T′\{s′}} y′_v = Σ_{v∈T′\{s′}} y_v. Let a and b be the children of the
root s. From Lemma 12, assume w.l.o.g. that x_a = 1, so we have Σ_{v∈T_a} y_v = |T_a|. By
the induction hypothesis, there is an integral solution (x′, y′) for the subtree T_b such that
Σ_{v∈T_b\{b}} y′_v = Σ_{v∈T_b} y′_v = Σ_{v∈T_b} y_v. The solution (x′, y′) can be extended to the instance T
by defining x′_a = 1. This solution has the objective value of |T_a| + Σ_{v∈T_b} y′_v = |T_a| + Σ_{v∈T_b} y_v,
completing the proof.
Bad instance for (LP-1): We show in Figure 1 a Finbow et al. instance as well as a solution
for (LP-1) that is optimal and an extreme point, but not integral.

[Figure 1: Instance with a non-integral extreme point for (LP-1). Gray vertices: x_v = 1/2; otherwise: x_v = 0.]

Claim 15. The solution (x, y) represented in Figure 1, with y defined according to Proposition 11,
is an extreme point of this instance for (LP-1).

Proof. Suppose (for contradiction) that (x, y) is not an extreme point. Then, there are distinct
solutions (x′, y′), (x′′, y′′) and α ∈ (0, 1) such that (x, y) = α(x′, y′) + (1 − α)(x′′, y′′). Since
y_c = 1 and y′_c, y′′_c ≤ 1, then y′_c = y′′_c = 1, and likewise, y′_d = y′′_d = 1. Combining
x′_a + x′_c = y′_c = 1 with x′_a + x′_d = y′_d = 1 and x′_c + x′_d ≤ 1, we conclude that x′_a ≥ 1/2.
Similarly, we get that x′′_a ≥ 1/2, which implies that x′_a = x′′_a = 1/2.
Similar reasoning using x′_a + x′_b ≤ 1 allows us to conclude that x′_b = x′′_b = 1/2, and
thus, (x′, y′) = (x′′, y′′) = (x, y), which contradicts our assumption.
4.2 Rounding 1/2-integral Solutions
We say that the LP solution (x, y) is (1/k)-integral if, for all v, we have x_v = r_v/k for some
integer r_v ∈ {0, . . . , k}. By standard LP theory, one can assume that the LP solution is
(1/k)-integral for some polynomially large integer k.
In this section, we consider the case when k = 2 (1/2-integral LP solutions). From Theorem 5,
(LP-1) is not strong enough to obtain a (3/4 + ε) approximation algorithm, for any ε > 0.
Here, we show a 5/6 approximation algorithm based on rounding (LP’).
Theorem 16. Given a solution (x, y) for (LP’) that is 1/2-integral, there is a polynomial time
algorithm that produces a solution of cost (5/6) Σ_{v∈V(T)} y_v.
We believe that the extreme points in some interesting special cases will be 1/2-integral.
Algorithm’s Description: Initially, U = ∅. Our algorithm considers the layers L1 , . . . , Ln
in this order. When the algorithm looks at layer Lj , it picks a vertex uj and adds it to U, as
follows. Consider Aj ⊆ Lj , where Aj = {v ∈ Lj : xv > 0}. Let A0j ⊆ Aj contain vertices v such
that there is no ancestor of v that belongs to Aj 0 for some j 0 < j, and A00j = Aj \ A0j , i.e. for
each v ∈ A00j , there is another vertex u ∈ Aj 0 for some j 0 < j such that u is an ancestor of v.
We choose the vertex uj based on the following rules:
• If there is only one v ∈ Aj , such that v is not saved by U so far, choose uj = v.
• Otherwise, if |A0j | = 2, pick uj at random from A0j with uniform probability. Similarly, if
|A00j | = 2, pick uj at random from A00j .
• Otherwise, we have the case |A0j | = |A00j | = 1. In this case, we pick vertex uj from A0j with
probability 1/3; otherwise, we take from A00j .
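For concreteness, the three rules can be read as the following sketch; `A_prime`, `A_dprime` and `unsaved` stand for A'_j, A''_j and the not-yet-saved part of A_j, and all names are ours rather than the paper's:

```python
import random

def pick_vertex(A_prime, A_dprime, unsaved):
    """Pick u_j in layer L_j following the three rules of the 5/6 rounding."""
    if len(unsaved) == 1:                       # rule 1: a single unsaved candidate
        return next(iter(unsaved))
    if len(A_prime) == 2:                       # rule 2: two vertices without fractional ancestors
        return random.choice(sorted(A_prime))
    if len(A_dprime) == 2:                      # rule 2, second half
        return random.choice(sorted(A_dprime))
    (u,), (v,) = tuple(A_prime), tuple(A_dprime)
    return u if random.random() < 1 / 3 else v  # rule 3: take A'_j with probability 1/3

random.seed(0)
picks = [pick_vertex({"u"}, {"v"}, {"u", "v"}) for _ in range(10000)]
# the A'_j vertex is chosen roughly one third of the time
assert 0.30 < picks.count("u") / 10000 < 0.37
```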
Analysis: Below, we argue that each vertex v ∈ V (T ) : xv > 0 is saved with probability
at least (5/6)yv . It is clear that this implies the theorem: Consider a vertex v 0 : xv0 = 0. If
yv0 = 0, we are immediately done. Otherwise, consider the bottommost ancestor v of v 0 such
that xv > 0. Since yv = yv0 , the probability that v 0 is saved is the same as that of v, which is
at least (5/6)yv .
We analyze a number of cases. Consider a layer Lj such that |Aj | = 1. Such a vertex v ∈ Aj
is saved with probability 1.
Next, consider a layer Lj such that |A0j | = 2. Each vertex v ∈ A0j is saved with probability
1/2 and yv = 1/2. So, in this case, the probability of saving v is more than (5/6)yv .
Lemma 17. Let L_j be a layer such that |A'_j| = |A''_j| = 1. Then the vertex u ∈ A'_j is saved with probability at least 2/3 ≥ (5/6)y_u, and the vertex v ∈ A''_j is saved with probability at least 5/6.
Proof. Let v 0 ∈ Aj 0 be the ancestor of v in some layer above Aj . The fact that v has not been
saved means that v 0 is not picked by the algorithm, when it processed Aj 0 .
We prove the lemma by induction on the value of j. For the base case, let Lj be the first
layer such that |A0j | = |A00j | = 1. This means that the layer Lj 0 must have |A0j 0 | = 2, and
therefore the probability of v 0 being saved is at least 1/2. Vertex u is not saved only if both
v 0 and u are not picked, and this happens with probability 1/2 · 2/3 = 1/3. Hence, vertex u is
saved with probability 2/3 as desired. Consider now the base case for vertex v, which is not
saved only if v 0 is not saved and u is picked by the algorithm among {u, v}. This happens with
probability 1/2 · 1/3 = 1/6, thus completing the proof of the base case.
For the purpose of induction, we now assume that, for all layer Li above Lj such that
|A0i | = |A00i | = 1, the probability that the algorithm saves the vertex in A0i is at least 2/3. Since
the vertex u is not saved only if v 0 is not saved, this probability is either 1/2 or 1/3 depending
on the layer to which v 0 belongs. If it is 1/3, we are done; otherwise, the probability is at most
1/2 · 2/3 = 1/3. Now consider vertex v, which is not saved only if v 0 is not saved and u is picked
at Lj . This happens with probability at most 1/2 · 1/3 = 1/6.
Lemma 18. Let Lj be a layer such that A00j = {u, v} (containing two vertices). Then each such
vertex is saved with probability at least 5/6.
Proof. Let u' and v' be the ancestors of u and v in some sets A'_i and A'_k above the layer L_j. There are two possibilities: either both u' and v' are in layers with |A'_i| = |A'_k| = 2 (possibly i = k), or u' is in a layer with |A'_i| = |A''_i| = 1. We remark that u' ≠ v': otherwise, the LP constraint for v' and L_j would not be satisfied. For u or v to be unsaved, both u' and v' must be unsaved by the algorithm. Otherwise, if, say, u' is saved, then u is also saved, and the algorithm would have picked v.
P[u is not saved] = P[u not picked ∧ u' is not saved ∧ v' is not saved]
                 = P[u not picked] · P[u' is not saved ∧ v' is not saved]
                 ≤ 1/2 · 1/4 = 1/8
P[u is saved] ≥ 7/8 ≥ 5/6
It must be that P[u' burns ∧ v' burns] ≤ 1/4, since either u' and v' are in different layers or they are in the same layer. If they are in different layers, picking each of them is independent, and the probability of neither being saved is at most 1/4. If they are in the same layer, one of them is necessarily picked, which implies that the probability of neither being saved is 0. In either case, the probability is at most 1/4.
In the second case, at least one of the vertices u', v' is in a layer with |A'| = |A''| = 1. W.l.o.g. let u' be in such a layer. By Lemma 17, we know that the probability that u' is not saved is at most 1/3. Therefore,
P[u burns] = P[u not picked ∧ u' burns ∧ v' burns]
           = P[u not picked] · P[u' burns ∧ v' burns]
           ≤ P[u not picked] · P[u' burns]
           ≤ 1/2 · 1/3 = 1/6
P[u is saved] ≥ 5/6
The proof for both cases works analogously for v.
4.3
Ruling Out the Gap Instances in Section 3
In this section, we show that the integrality gap instances for (LP-1) presented in the previous
section admit a better than (1 − 1/e) approximation via (LP’). To this end, we introduce the
concept of well-separable LP solutions and show an improved rounding algorithm for solutions
in this class.
Let η ∈ (0, 1). Given an LP solution (x, y) for (LP-1) or (LP'), we say that a vertex v is η-light if Σ_{u∈P_v\{v}} x_u < η; if a vertex v is not η-light, we say that it is η-heavy. A fractional solution is said to be η-separable if, for every layer j, either all vertices in L_j are η-light or they are all η-heavy. For an η-separable LP solution (x, y), each layer L_j is thus either an η-light layer that contains only η-light vertices, or an η-heavy layer that contains only η-heavy vertices.
Observation 19. The LP solution in Section 3 is η-separable for all values of η ∈ {1/k, 2/k, . . . , 1}.
Theorem 20. If the LP solution (x, y) is η-separable for some η, then there is an efficient algorithm that produces an integral solution of cost (1 − 1/e + f(η)) Σ_v y_v, where f(η) is some function depending only on η.
Algorithm: Let T be an input tree, and (x, y) be a solution for (LP') on T that is η-separable for some constant η ∈ (0, 1). Our algorithm proceeds in two phases. In the first phase, it performs randomized rounding independently for each η-light layer. Denote by V_1 the (random) collection of vertices selected in this phase. Then, in the second phase, our algorithm performs randomized rounding conditioned on the solutions of the first phase. In particular, when we process an η-heavy layer L_j, let L̃_j be the collection of vertices of L_j that have not yet been saved by V_1. We sample one vertex v ∈ L̃_j from the distribution (x_v / x(L̃_j))_{v∈L̃_j}. Let V_2 be the set of vertices chosen in the second phase. This completes the description of our algorithm.
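A minimal sketch of the two phases follows; `layers`, `x`, `light` and `ancestors` are our own illustrative encodings of the tree (the paper does not fix a data structure):

```python
import random

def two_phase_round(layers, x, light, ancestors):
    """Sketch of the two-phase rounding for an eta-separable LP solution.
    layers: list of layers, top-down; x: LP values; light[j]: is layer j eta-light;
    ancestors[v]: set of strict ancestors of v. All encodings are illustrative."""
    V1 = set()
    for j, L in enumerate(layers):              # phase 1: independent rounding, light layers
        if light[j]:
            V1.update(v for v in L if random.random() < x[v])
    V2 = set()
    for j, L in enumerate(layers):              # phase 2: heavy layers, conditioned on V1
        if light[j]:
            continue
        L_tilde = [v for v in L if not (ancestors[v] & V1)]   # vertices not yet saved by V1
        total = sum(x[v] for v in L_tilde)
        if total > 0:                           # sample v with probability x_v / x(L_tilde)
            r, acc = random.random() * total, 0.0
            for v in L_tilde:
                acc += x[v]
                if r <= acc:
                    V2.add(v)
                    break
    return V1, V2

random.seed(0)
V1, V2 = two_phase_round([["a", "b"]], {"a": 0.5, "b": 0.5}, [False],
                         {"a": set(), "b": set()})
assert V1 == set() and len(V2) == 1 and V2 <= {"a", "b"}
```

Phase 2 implements the conditional distribution by inverse-transform sampling over the surviving vertices of each heavy layer.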
For notational simplification, we present the proof when η = 1/2. It will be relatively
obvious that the proof can be generalized to work for any η. Now we argue that each terminal
t ∈ X is saved with probability at least (1 − 1/e + δ)yt for some universal constant δ > 0
that depends only on η. We will need the following simple observation that follows directly by
standard probabilistic analysis.
Proposition 21. For each vertex v ∈ V(T), the probability that v is not saved is at most Π_{u∈P_v} (1 − x_u) ≤ e^{−y_v}; equivalently, v is saved with probability at least 1 − e^{−y_v}.
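The proposition rests on the elementary bound 1 − x ≤ e^{−x}; a quick numerical sanity check (ours, not from the paper):

```python
import math, random
from functools import reduce
from operator import mul

random.seed(1)
for _ in range(1000):
    xs = [random.random() for _ in range(random.randint(1, 8))]
    prod = reduce(mul, (1 - x for x in xs), 1.0)
    # probability of surviving independent rounding vs. its exponential upper bound
    assert prod <= math.exp(-sum(xs)) + 1e-12
```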
We start by analyzing two easy cases.
Lemma 22. Consider t ∈ X. If y_t < 0.9, or if there is some ancestor v ∈ P_t such that x_v > 0.2, then the probability that t is saved by the algorithm is at least (1 − 1/e + δ)y_t.

Proof. First, let us consider the case where y_t < 0.9. By the straightforward analysis, the probability of t being saved is at least 1 − e^{−y_t}. If y_t < 0.9, we have (1 − e^{−y_t})/y_t > 1.04(1 − 1/e), as desired.

Consider now the second case, when x_v > 0.2 for some ancestor v ∈ P_t. The bound used typically in the analysis is only tight when the values are all small, and, therefore, we get an advantage when one of the values is relatively big. In particular,

Pr[t is saved] ≥ 1 − Π_{u∈P_t} (1 − x_u)
             ≥ 1 − (1 − x_v) e^{−(y_t − x_v)}
             ≥ 1 − (1 − 0.2) e^{−(y_t − 0.2)}
             ≥ 1.01(1 − 1/e) y_t
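Both numerical claims in the proof can be checked directly; the grids below sweep the relevant ranges (the constants 1.04 and 1.01 are the ones used in the proof):

```python
import math

one_minus_inv_e = 1 - 1 / math.e
# Case y_t < 0.9: (1 - e^{-y}) / y stays above 1.04 (1 - 1/e)
assert all((1 - math.exp(-y)) / y > 1.04 * one_minus_inv_e
           for y in (i / 10000 for i in range(1, 9001)))
# Case y_t >= 0.9 with an ancestor of LP value > 0.2:
# 1 - 0.8 e^{-(y - 0.2)} stays above 1.01 (1 - 1/e) y for y in [0.9, 1]
assert all(1 - 0.8 * math.exp(-(y - 0.2)) > 1.01 * one_minus_inv_e * y
           for y in (0.9 + i / 10000 for i in range(0, 1001)))
```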
From now on, we only consider those terminals t ∈ X such that yt ≥ 0.9 and xv < 0.2, for
all v ∈ Pt . We remark here that if the value of η is not 1/2, we can easily pick other suitable
thresholds instead of 0.9 and 0.2.
Let X_1 ⊆ X be the set of terminals that are saved by V_1, i.e. a terminal t ∈ X_1 if and only if t is a descendant of some vertex in V_1. Let X_2 ⊆ X \ X_1 contain the set of terminals that are not saved by the first phase, but are saved by the second phase, i.e. t ∈ X_2 if and only if t has some ancestor in V_2.
Pr_{V_1,V_2}[t ∉ X_1 ∪ X_2] = Pr_{V_1,V_2}[t ∉ X_1] · Pr_{V_1,V_2}[t ∉ X_2 | t ∉ X_1]
For any terminal t, let S'_t and S''_t be the sets of ancestors of t that are η-light and η-heavy respectively, i.e. ancestors in S'_t and S''_t are considered by the algorithm in Phase 1 and Phase 2 respectively. By Proposition 21, we can upper bound the first term by e^{−x(S'_t)}. In the rest of this section, we show that the second term is upper bounded by c · e^{−x(S''_t)} for some c < 1, and therefore Pr[t ∉ X_1 ∪ X_2] ≤ c · e^{−x(S'_t)−x(S''_t)} ≤ c · e^{−y_t}, as desired.
The following lemma is the main technical tool we need in the analysis. We remark that
this lemma is the main difference between (LP’) and (LP-2).
Lemma 23. Let t ∈ X and L_j be a layer containing some η-heavy ancestor of t. Then

E_{V_1}[x(L̃_j) | t ∉ X_1] ≤ α   for α = 1/2 + (1 − e^{−1/2}) ≤ 0.9.
Intuitively, this lemma says that any terminal that is still not saved by the result of the first
phase will have a relatively “sparse” layer above it. We defer the proof of this lemma to the
next subsection. Now we proceed to complete the analysis.
For each vertex v, denote by ℓ(v) the layer to which vertex v belongs. For a fixed choice of V_1, we say that terminal t is partially protected by V_1 if Σ_{v∈S''_t} x_v x(L̃_{ℓ(v)}) ≤ C x(S''_t) (we will choose the value of C ∈ (α, 1) later). Let X' ⊆ X \ X_1 denote the subset of terminals that are partially protected by V_1.

Claim 24. For any t ∈ X, Pr_{V_1}[t ∈ X' | t ∉ X_1] ≥ 1 − α/C.
Proof. By linearity of expectation and Lemma 23,

E_{V_1}[ Σ_{v∈S''_t} x_v x(L̃_{ℓ(v)}) | t ∉ X_1 ] = Σ_{v∈S''_t} x_v E_{V_1}[ x(L̃_{ℓ(v)}) | t ∉ X_1 ] ≤ α x(S''_t).

Using Markov's inequality,

Pr_{V_1}[ Σ_{v∈S''_t} x_v x(L̃_{ℓ(v)}) ≤ C x(S''_t) | t ∉ X_1 ]
  = 1 − Pr[ Σ_{v∈S''_t} x_v x(L̃_{ℓ(v)}) > C x(S''_t) | t ∉ X_1 ]
  ≥ 1 − (α x(S''_t)) / (C x(S''_t))
  = 1 − α/C.
We can now rewrite the probability of a terminal t ∈ X not being saved by the solution after the second phase:

Pr_{V_1,V_2}[t ∉ X_2 | t ∉ X_1]
  = Pr[t ∈ X' | t ∉ X_1] Pr[t ∉ X_2 | t ∈ X'] + Pr[t ∉ X' | t ∉ X_1] Pr[t ∉ X_2 | t ∉ X']
  ≤ (1 − α/C) Pr_{V_1,V_2}[t ∉ X_2 | t ∈ X'] + (α/C) · e^{−x(S''_t)}

The last inequality holds because Pr_{V_1,V_2}[t ∉ X_2 | t ∉ X'] is at most e^{−x(S''_t)} from Proposition 21.
It remains to provide a better upper bound for Pr[t ∉ X_2 | t ∈ X']. Consider a vertex v ∈ S''_t that is involved in the second phase rounding. We say that vertex v is good for t and V_1 if x(L̃_{ℓ(v)}) ≤ C' (we will choose the value C' ∈ (C, 1) later). Denote by S^{good}_t ⊆ S''_t the set of good ancestors of t. The following claim ensures that good ancestors have large LP-weight in total.

Claim 25. For any terminal t ∈ X', x(S^{good}_t) = Σ_{v∈S^{good}_t} x_v ≥ (1 − C/C') x(S''_t).
v∈Stgood
Proof. Suppose (for contradiction) that x(S^{good}_t) < (1 − C/C') x(S''_t). This means that x(S''_t \ S^{good}_t) > (C/C') x(S''_t). For each such v ∈ S''_t \ S^{good}_t, we have x(L̃_{ℓ(v)}) > C'. Then, Σ_{v∈S''_t} x_v x(L̃_{ℓ(v)}) > Σ_{v∈S''_t \ S^{good}_t} x_v C' ≥ C x(S''_t). This contradicts the assumption that t is partially protected, and concludes our proof.
Now the following lemma follows.

Lemma 26. Pr_{V_1,V_2}[t ∉ X_2 | t ∈ X'] ≤ e^{−x(S''_t)} · e^{−(1−C/C') x(S''_t)(1/C' − 1)}.
Proof.

Pr_{V_1,V_2}[t ∉ X_2 | t ∈ X'] = Σ_{V'_1 : t∈X'} Pr_{V_1}[V_1 = V'_1] · Pr_{V_2}[t ∉ X_2 | V_1 = V'_1]
  ≤ Σ_{V'_1 : t∈X'} Pr_{V_1}[V_1 = V'_1] · Π_{bad v ∈ S''_t} (1 − x_v) · Π_{good v ∈ S''_t} (1 − x_v/C')
  ≤ Σ_{V'_1 : t∈X'} Pr_{V_1}[V_1 = V'_1] · Π_{bad v ∈ S''_t} e^{−x_v} · Π_{good v ∈ S''_t} e^{−x_v/C'}
  = Σ_{V'_1 : t∈X'} Pr_{V_1}[V_1 = V'_1] · e^{−x(S''_t \ S^{good}_t) − x(S^{good}_t)/C'}
  ≤ e^{−x(S''_t) − (1−C/C') x(S''_t)(1/C' − 1)} · Σ_{V'_1 : t∈X'} Pr_{V_1}[V_1 = V'_1]
  ≤ e^{−x(S''_t)} · e^{−(1−C/C') x(S''_t)(1/C' − 1)}

For the second line, a good vertex v is picked in its layer with probability x_v/x(L̃_{ℓ(v)}) ≥ x_v/C', while a bad vertex is picked with probability at least x_v, since x(L̃_{ℓ(v)}) ≤ 1. For the second-to-last inequality, write the exponent as −x(S''_t) − x(S^{good}_t)(1/C' − 1); since 1/C' > 1, it is maximized when x(S^{good}_t) attains its minimum value (1 − C/C') x(S''_t) given by Claim 25.
Now we choose the parameters C and C' such that C = (1 + δ)α, C' = (1 + δ)C, and (1 + δ)C' = 1, where (1 + δ)^3 = 1/α. Notice that this choice of parameters satisfies our previous requirements that α < C < C' < 1. The above lemma then gives the upper bound e^{−x(S''_t)} · e^{−(δ²/(1+δ)) x(S''_t)}, which is at most e^{−(1+δ²/2) x(S''_t)}. Since δ > 0 is a constant, notice that we do have an advantage over the standard LP rounding in this case. Now we plug in all the parameters to obtain the final result.
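Numerically, with α = 1/2 + (1 − e^{−1/2}) from Lemma 23, these parameters are readily computed and satisfy all the stated constraints (a sanity check, not part of the proof):

```python
import math

alpha = 0.5 + (1 - math.exp(-0.5))     # the bound of Lemma 23, about 0.8935
delta = (1 / alpha) ** (1 / 3) - 1     # from (1 + delta)^3 = 1/alpha
C = (1 + delta) * alpha
C_prime = (1 + delta) * C
assert abs((1 + delta) * C_prime - 1) < 1e-12      # (1 + delta) C' = 1
assert alpha < C < C_prime < 1                     # the required ordering
assert delta ** 2 / (1 + delta) >= delta ** 2 / 2  # holds since delta < 1
```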
Pr_{V_1,V_2}[t ∉ X_1 ∪ X_2] = Pr_{V_1,V_2}[t ∉ X_1] · Pr_{V_1,V_2}[t ∉ X_2 | t ∉ X_1]
  ≤ e^{−x(S'_t)} ( (1 − α/C) Pr_{V_1,V_2}[t ∉ X_2 | t ∈ X'] + (α/C) e^{−x(S''_t)} )
  ≤ e^{−x(S'_t)} ( (1 − α/C) e^{−x(S''_t)} e^{−(δ²/2) x(S''_t)} + (α/C) e^{−x(S''_t)} )
  ≤ e^{−y_t} ( (1 − α/C) e^{−(δ²/2) x(S''_t)} + α/C )
  = e^{−y_t} ( (δ/(1+δ)) e^{−(δ²/2) x(S''_t)} + 1/(1+δ) )
Since we assume that y_t ≥ 0.9 and x_v ≤ 0.2 for all v ∈ P_t, we must have x(S''_t) ≥ 0.2, and therefore the above term can be seen as e^{−y_t} · δ' for some δ' < 1. Overall, the approximation factor we get is (1 − δ'/e) for some universal constant δ' ∈ (0, 1).
4.3.1 Proof of Lemma 23
For each u, let E_u denote the event that u is not saved by V_1. First we break the expectation term into Σ_{u∈L_j} x_u Pr[E_u | t ∉ X_1]. Let v ∈ L_j be the ancestor of t in layer L_j. We break down the sum further based on the "LP coverage" of the least common ancestor of u and v, as follows:

Σ_{i=0}^{k/2} Σ_{u∈L_j : q'(lca(u,v))=i} x_u Pr[E_u | t ∉ X_1]
Here, q'(u) denotes k · x(P_u); this term is integral since we consider a 1/k-integral solution (x, y). The rest of this section is devoted to upper bounding the term Pr[E_u | t ∉ X_1]. The following claim gives a bound based on the level i to which the least common ancestor belongs.

Claim 27. For each u ∈ L_j such that q'(lca(u, v)) = i,

Pr[E_u | t ∉ X_1] ≤ e^{−(1/2 − i/k)}
Proof. First, we recall that y_u ≥ 1/2 and q'(u) ≥ k/2, since u is in the 1/2-heavy layer L_j. Let w = lca(u, v) and let P' be the path that connects w to u. Moreover, denote by S ⊆ P' the set of light vertices on the path P', i.e. S = S'_t ∩ P'. Notice that x(S) ≥ Σ_{a∈S'_t∩P_u} x_a − Σ_{a∈P_w} x_a ≥ 1/2 − i/k.

For each w' ∈ S, Pr[w' ∉ V_1 | t ∉ X_1] ∈ {1 − x_{w'}, 1 − x_{w'}/(1 − x_{v'})}, depending on whether there is a vertex v' in P_v that shares a layer with w'. In any case, it holds that Pr[w' ∉ V_1 | t ∉ X_1] ≤ 1 − x_{w'}. This implies that
Pr[E_u | t ∉ X_1] ≤ Π_{w'∈S} Pr[w' ∉ V_1 | t ∉ X_1]
                 ≤ Π_{w'∈S} (1 − x_{w'})
                 ≤ Π_{w'∈S} e^{−x_{w'}}
                 ≤ e^{−(1/2 − i/k)}
Claim 28. Let i be an integer and L' ⊆ L_j be the set of vertices u such that q'(lca(u, v)) is at least i. Then x(L') ≤ (k − i)/k.

Proof. This claim is a consequence of Hartke's constraints. Let v' be the topmost ancestor of v such that q'(v') ≥ i. We remark that all vertices in L' must be descendants of v', so it must be that Σ_{w∈P_{v'}} x_w + x(L') ≤ 1. The first term is at least i/k, implying that x(L') ≤ (k − i)/k.
Let L^i_j ⊆ L_j denote the set of vertices u whose least common ancestor lca(u, v) satisfies q'(lca(u, v)) = i. As a consequence of Claim 28, Σ_{i'≥i} x(L^{i'}_j) ≤ (k − i)/k. Combining this inequality with Claim 27, we get that

E[x(L̃_j) | t ∉ X_1] ≤ Σ_{i=0}^{k/2} x(L^i_j) e^{−1/2+i/k}
This term is maximized when x(L^{k/2}_j) = 1/2 and x(L^i_j) = 1/k for all other i = 0, 1, . . . , k/2 − 1. This implies that

E[x(L̃_j) | t ∉ X_1] ≤ 1/2 + Σ_{i=0}^{k/2−1} e^{−1/2+i/k}/k

Finally, using some algebraic manipulation and the fact that 1 + x ≤ e^x, we get

E[x(L̃_j) | t ∉ X_1] ≤ 1/2 + Σ_{i=0}^{k/2−1} e^{−1/2+i/k}/k
  = 1/2 + (1/k) e^{−1/k} (1 − e^{−1/2})/(1 − e^{−1/k})
  = 1/2 + (1 − e^{−1/2}) · (1/k)/(e^{1/k}(1 − e^{−1/k}))
  = 1/2 + (1 − e^{−1/2}) · (1/k)/(e^{1/k} − 1)
  ≤ 1/2 + (1 − e^{−1/2})
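The chain above can be verified numerically for concrete even values of k (an independent check, not part of the proof):

```python
import math

bound = 0.5 + (1 - math.exp(-0.5))      # about 0.8935
for k in (2, 10, 100, 1000):
    s = 0.5 + sum(math.exp(-0.5 + i / k) / k for i in range(k // 2))
    assert s <= bound <= 0.9            # the geometric sum never exceeds 1 - e^{-1/2}
```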
4.4 Integrality Gap for (LP')

In this section, we present an instance where (LP') has an integrality gap of 5/6 + ε, for any ε > 0. Interestingly, this instance admits an optimal 1/2-integral LP solution.
Gadget: The motivation of our construction is a simple gadget represented in Figure 2.
In this instance, vertices are either special (colored gray) or regular. This gadget has three
properties of our interest:
• If we assign an LP-value of xv = 1/2 to every special vertex, then this is a feasible LP
solution that ensures yu = 1 for all leaf u.
• For any integral solution U that does not pick any vertex in the first layer of this gadget,
at most 2 out of 3 leaves of the gadget are saved.
• No pair of special vertices in the same layer has a common ancestor inside this gadget.
Our integrality gap instance is constructed by creating partially overlapping copies of this gadget. We describe it formally below.

Figure 2: Gadget used to get the 5/6 integrality gap. Special vertices are colored gray.

Construction: The first layer of this instance, L_1, contains 4 nodes: two special nodes, which we name a(1) and a(2), and two regular nodes, which we name b(1) and b(2). We recall the definition of spider from Section 3.1.

Let α = 5⌈1/ε⌉. The nodes b(1) and b(2) are the roots of two spiders. Specifically, the spider Z_1 rooted at b(1) has α feet, with one foot per layer, in consecutive layers L_2, . . . , L_{α+1}. For each j ∈ [α], denote by b'(1, j) the j-th foot of spider Z_1. The spider Z_2, rooted at b(2), has α² feet, with one foot per layer, in layers L_{α+2}, . . . , L_{α²+α+1}. For each j ∈ [α²], denote by b'(2, j) the j-th foot of spider Z_2. All the feet of spiders Z_1 and Z_2 are special vertices.
For each j ∈ [α], the node b'(1, j) is also the root of a spider Z'_{1,j} with α² feet, lying in the α² consecutive layers L_{2+α+jα²}, . . . , L_{1+α+(j+1)α²} (one foot per layer). For j' ∈ [α²], let b''(1, j, j') denote the j'-th foot of spider Z'_{1,j}, which lies in layer L_{1+α+jα²+j'}. Notice that we have α³ such feet of the spiders {Z'_{1,j}}_{j=1}^{α}, lying in layers L_{2+α+α²}, . . . , L_{1+α+α²+α³}. Similarly, for each j ∈ [α²], the node b'(2, j) is the root of a spider Z'_{2,j} with α² feet, lying in consecutive layers L_{2+α+α³+jα²}, . . . , L_{1+α+α³+(j+1)α²}. We denote by b''(2, j, j') the j'-th foot of this spider.
The special node a(1) is also the root of a spider W_1, which has α + α³ feet: The first α feet, denoted by a'(1, j) for j ∈ [α], are aligned with the nodes b'(1, j), i.e. for each j ∈ [α], the foot a'(1, j) of spider W_1 is in the same layer as the foot b'(1, j) of Z_1. For each j ∈ [α], j' ∈ [α²], we also have a foot a''(1, j, j') which is placed in the same layer as b''(1, j, j'). Similarly, the special node a(2) is the root of a spider W_2 having α² + α⁴ feet. For j ∈ [α²], spider W_2 has a foot a'(2, j) placed in the same layer as b'(2, j). For j ∈ [α²], j' ∈ [α²], W_2 also has a foot a''(2, j, j') in the layer of b''(2, j, j'). All the feet of both W_1 and W_2 are special vertices.

Finally, for i ∈ {1, 2} and j ∈ [α^i], each node a'(i, j) has α^{5−i} children, which are leaves of the instance. For j ∈ [α^i], j' ∈ [α²], the nodes b''(i, j, j') and a''(i, j, j') have α^{3−i} children each, which are also leaves of the instance. The set of terminals X is simply the set of leaves.
Proposition 29. We have |X| = 6α⁵. Moreover, (i) the number of terminals in the subtrees T_{a(1)} ∪ T_{b(1)} is 3α⁵, and (ii) the number of terminals in the subtrees T_{a(2)} ∪ T_{b(2)} is 3α⁵.

Proof. Each node a'(1, j) has α⁴ children, and there are α such nodes. Similarly, each node a'(2, j) has α³ children, and there are α² such nodes. This accounts for 2α⁵ terminals. For i ∈ {1, 2}, each node a''(i, j, j') has α^{3−i} children, and there are α^{i+2} such nodes. This accounts for another 2α⁵ terminals. Finally, each node b''(i, j, j') also has α^{3−i} children, and there are α^{2+i} such nodes, accounting for the remaining 2α⁵ terminals.
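The counting in the proof is easy to replay programmatically; the function below just multiplies the node counts by the children counts given in the construction:

```python
def terminal_count(alpha):
    """Total number of leaves of the gap instance, following the construction."""
    n = alpha * alpha**4          # children of the a'(1, j), j in [alpha]
    n += alpha**2 * alpha**3      # children of the a'(2, j), j in [alpha^2]
    for i in (1, 2):
        n += alpha**(i + 2) * alpha**(3 - i)   # children of the a''(i, j, j')
        n += alpha**(i + 2) * alpha**(3 - i)   # children of the b''(i, j, j')
    return n

assert terminal_count(5) == 6 * 5**5
assert terminal_count(10) == 6 * 10**5
```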
Fractional Solution: Our construction guarantees that any path from root to leaf contains 2
special vertices: For a leaf child of a0 (i, j), its path towards the root must contain a0 (i, j) and
a(i). For a leaf child of a00 (i, j, j 0 ), its path towards the root contains a00 (i, j, j 0 ) and a(i). For a
leaf child of b00 (i, j, j 0 ), the path towards the root contains b00 (i, j, j 0 ) and b0 (i, j).
Lemma 30. For each special vertex v, and for each layer L_j below v, the set L_j ∩ T_v contains at most one special vertex.

Proof. Each layer contains two special vertices, of the form {a'(i, j), b'(i', j')} or {a''(i, j, j'), b''(i', k, k')}. In either case, the least common ancestor of two special vertices in the same layer is always the root s (since one vertex is in T_{a(i)} while the other is in T_{b(i')}). This implies that, for any non-root vertex v, the set L_j ∩ T_v can contain at most one special vertex.
Notice that, there are at most two special vertices per layer. We define the LP solution x,
with xv = 1/2 for all special vertices v and xv = 0 for all other vertices. It is easy to verify that
this is a feasible solution.
We now check the constraint at v and layer L_j below v: If Σ_{u∈P_v} x_u = 0, then the constraint is immediately satisfied, because Σ_{u∈L_j∩T_v} x_u ≤ 1. If Σ_{u∈P_v} x_u = 1/2, let v' be the special vertex ancestor of v. Lemma 30 guarantees that Σ_{u∈L_j∩T_v} x_u ≤ Σ_{u∈L_j∩T_{v'}} x_u ≤ 1/2, and therefore the constraint at v and L_j is satisfied. Finally, if Σ_{u∈P_v} x_u = 1, there can be no special vertex below v and therefore Σ_{u∈L_j∩T_v} x_u = 0.
Integral Solution: We argue that any integral solution cannot save more than (1 + 5/α)·5α⁵ terminals. The following lemma is the key to our analysis.

Lemma 31. Any integral solution U with U ∩ {a(1), b(1)} = ∅ saves at most (1 + 5/α)·5α⁵ terminals.
Proof. Consider the set Q = {a'(1, j)}_{j=1}^{α} ∪ {b'(1, j)}_{j=1}^{α}, and a collection of paths from {a(1), b(1)} to the vertices in Q. These paths are contained in the layers L_1, . . . , L_{α+1}, so the strategy U
induces a cut of size at most α + 1 on them. This implies that at most α + 1 vertices (out of the 2α vertices in Q) can be saved by U. Let Q̃ ⊆ Q denote the set of vertices that have not been saved by U. We remark that |Q̃| ≥ α − 1. We write Q̃ = Q̃_a ∪ Q̃_b, where Q̃_a contains the set of vertices a'(1, j) that are not saved, and Q̃_b = Q̃ \ Q̃_a. For each vertex in Q̃_a, at least α⁴ − 1 of its children cannot be saved, so we have at least (α⁴ − 1)|Q̃_a| ≥ α⁴|Q̃_a| − α unsaved terminals that are descendants of Q̃_a. If |Q̃_b| ≤ 3, we are immediately done: We have |Q̃_a| ≥ α − 4, so there are at least (α⁴ − 1)(α − 4) ≥ α⁵ − 5α⁴ unsaved terminals.
Consider the set

R = ∪_{j∈[α], j'∈[α²]} {a''(1, j, j')} ∪ ∪_{j : b'(1,j)∈Q̃_b} ∪_{j'∈[α²]} {b''(1, j, j')}
This set satisfies |R| = α³ + |Q̃_b|α², and the paths connecting the vertices in R to Q̃_b ∪ {a(1)} lie in layers L_1, . . . , L_{α³+α²+α+1}. So the strategy U induced on these paths disconnects at most α³ + α² + α + 1 vertices. Let R̃ ⊆ R contain the vertices in R that are not saved by U, so we have |R̃| ≥ (|Q̃_b| − 1)α² − α − 1, which is at least (|Q̃_b| − 2)α². Each vertex in R̃ has α² children, of which at least α² − 1 cannot be saved, resulting in a total of at least (α² − 1)(|Q̃_b| − 2)α² ≥ α⁴|Q̃_b| − 4α⁴ unsaved terminals that are descendants of a(1) or b(1).

In total, by summing the two cases, at least (α⁴|Q̃_a| − α) + (α⁴|Q̃_b| − 4α⁴) ≥ (|Q̃_a| + |Q̃_b|)α⁴ − 5α⁴ ≥ α⁵ − 5α⁴ terminals are not saved by U, thus concluding the proof.
Lemma 32. Any integral solution U with U ∩ {a(2), b(2)} = ∅ saves at most (1 + 5/α)·5α⁵ terminals.

Since the nodes a(1), a(2), b(1), b(2) are all in the first layer, it is only possible to save one of them. Therefore, either Lemma 31 or Lemma 32 applies, which concludes the analysis.
5 Conclusion and Open Problems

In this paper, we settled the integrality gap question for the standard LP relaxation. Our results rule out the hope of using the canonical LP to obtain better approximation results. While a recent paper settled the approximability status of the problem [1], the question of whether an improvement over (1 − 1/e) can be achieved via an LP relaxation is of independent interest. We provide some evidence that Hartke's LP is a promising candidate for doing so. Another interesting question is to find a more general graph class that admits a constant approximation algorithm. We believe that this is possible for bounded treewidth graphs.
References
[1] Adjiashvili, D., Baggio, A., Zenklusen, R.: Firefighting on Trees Beyond Integrality Gaps.
ArXiv e-prints (Jan 2016)
[2] Anshelevich, E., Chakrabarty, D., Hate, A., Swamy, C.: Approximability of the firefighter
problem - computing cuts over time. Algorithmica 62(1-2), 520–536 (2012)
[3] Bazgan, C., Chopin, M., Cygan, M., Fellows, M.R., Fomin, F.V., van Leeuwen, E.J.:
Parameterized complexity of firefighting. JCSS 80(7) (2014)
[4] Bazgan, C., Chopin, M., Ries, B.: The firefighter problem with more than one firefighter
on trees. Discrete Applied Mathematics 161(7-8), 899–908 (2013)
[5] Cai, L., Verbin, E., Yang, L.: Firefighting on trees: (1-1/e)-approximation, fixed parameter
tractability and a subexponential algorithm. In: ISAAC (2008)
[6] Chalermsook, P., Chuzhoy, J.: Resource minimization for fire containment. In: SODA
(2010)
[7] Chekuri, C., Kumar, A.: Maximum coverage problem with group budget constraints and
applications. In: APPROX 2004. pp. 72–83 (2004)
[8] Chlebíková, J., Chopin, M.: The firefighter problem: A structural analysis. In: Cygan, M.,
Heggernes, P. (eds.) IPEC. Lecture Notes in Computer Science, vol. 8894, pp. 172–183.
Springer (2014)
[9] Costa, V., Dantas, S., Dourado, M.C., Penso, L., Rautenbach, D.: More fires and more
fighters. Discrete Applied Mathematics 161(16–17), 2410 – 2419 (2013)
[10] Develin, M., Hartke, S.G.: Fire containment in grids of dimension three and higher. Discrete
Applied Mathematics 155(17), 2257–2268 (2007)
[11] Finbow, S., MacGillivray, G.: The firefighter problem: a survey of results, directions and
questions. Australas. J. Combin 43, 57–77 (2009)
[12] Hartke, S.G.: Attempting to narrow the integrality gap for the firefighter problem on trees.
Discrete Methods in Epidemiology 70, 179–185 (2006)
[13] Hartnell, B.: Firefighter! an application of domination. In: Manitoba Conference on Combinatorial Mathematics and Computing (1995)
[14] Iwaikawa, Y., Kamiyama, N., Matsui, T.: Improved approximation algorithms for firefighter problem on trees. IEICE Transactions 94-D(2), 196–199 (2011)
[15] King, A., MacGillivray, G.: The firefighter problem for cubic graphs. Discrete Mathematics
310(3), 614–621 (2010)
[16] Wang, P., Moeller, S.A.: Fire control on graphs. Journal of Combinatorial Mathematics
and Combinatorial Computing 41, 19–34 (2002)
A Omitted Figures

Figure 3: Simplified example of the instance used to achieve integrality gap of 1 − 1/e, when k = 2 and D = 2. The labels in the figure indicate, in general, the number of edges in that location, in terms of k and D. Special vertices are colored gray.

Figure 4: Simplified example of the instance with low integrality gap for 1/2-integral solutions. Special vertices are colored gray.
Approximation Algorithms for the Open Shop
Problem with Delivery Times
arXiv:1706.02019v1 [] 7 Jun 2017
Imed KACEM∗ and Christophe RAPINE†
Abstract
In this paper we consider the open shop scheduling problem where the jobs have delivery times. The minimization criterion is the maximum lateness of the jobs. This problem is known to be NP-hard, even when restricted to only 2 machines. We establish that any list scheduling algorithm has a performance ratio of 2. For a fixed number of machines, we design a polynomial time approximation scheme (PTAS), which is the best possible result given the strong NP-hardness of the problem.

Keywords: Scheduling; Open Shop; Maximum Lateness; Approximation; PTAS
1 Introduction
Problem description. We consider the open shop problem with delivery times. We have a set J = {1, 2, ..., n} of n jobs to be performed on a set of m machines M_1, M_2, ..., M_m. Each job j consists of exactly m operations O_{i,j} (i ∈ {1, 2, ..., m}) and has a delivery time q_j, which we assume to be non-negative. For every job j and every index i, operation O_{i,j} must be performed on machine M_i. The processing time of each operation O_{i,j} is denoted by p_{i,j}. At any time, a job can be processed by at most one machine. Moreover, any machine can process only one job at a time. Preemption of operations is not allowed. We denote by C_{i,j} the completion time of operation O_{i,j}. For every job j, its completion time C_j is defined as the completion time of its last operation. The lateness L_j of job j is equal to C_j + q_j. The objective is to find a feasible schedule that minimizes the maximum lateness L_max, where
∗ LCOMS, Université de Lorraine, Ile du Saulcy, Metz 57000, France. Contact: [email protected]
† LGIPM, Université de Lorraine, Ile du Saulcy, Metz 57000, France. Contact: [email protected]
L_max = max_{1≤j≤n} {L_j}     (1)
For any feasible schedule π, we denote the resulting maximum lateness by L_max(π). Moreover, L*_max denotes the maximum lateness of an optimal solution π*, that is, L*_max = L_max(π*). According to the three-field notation, the problem is denoted by O||L_max.
Recall that a constant approximation algorithm of performance ratio γ ≥ 1 (or a γ-approximation) is a polynomial time algorithm that provides a schedule with maximum lateness no greater than γL*_max for every instance. A polynomial time approximation scheme (PTAS) is a family of (1 + ε)-approximation algorithms of polynomial time complexity for any fixed ε > 0. If this time complexity is polynomial in 1/ε and in the input size, then we have a fully polynomial time approximation scheme (FPTAS).
Related approximation results. To the best of our knowledge, the design of approximation algorithms has not yet been addressed for problem O||L_max. However, some inapproximability results have been established in the literature. For a fixed number of machines, unless P=NP, problem Om||L_max cannot admit an FPTAS, since it is NP-hard in the strong sense already on two machines [8], [9]. The existence of a PTAS for a fixed m is an open question, which we answer positively in this paper. If the number m of machines is part of the input, Williamson et al. [11] proved that no polynomial time approximation algorithm with a performance guarantee lower than 5/4 can exist, unless P=NP, which precludes the existence of a PTAS. Several interesting results exist for some related problems, mainly for the minimization of the makespan:
• Lawler et al. [8, 9] presented a polynomial algorithm for problem O2|pmtn|L_max. In contrast, when preemption is not allowed, they proved that problem O2||L_max is strongly NP-hard, as mentioned above.

• Gonzalez and Sahni [4] proved that problem Om||C_max is polynomial for m = 2 and becomes NP-hard when m ≥ 3.

• Sevastianov and Woeginger [10] established the existence of a PTAS for problem Om||C_max when m is fixed.

• Kononov and Sviridenko [7] proposed a PTAS for problem Oq(Pm)|r_{ij}|C_max when q and m are fixed.

• Approximation algorithms have been recently proposed for other variants such as the two-machine routing open shop problem. A sample of them includes Chernykh et al. [2] and Averbakh et al. [1].
Finally, we refer to the state-of-the-art survey on scheduling problems under maximum lateness minimization by Kellerer [6].

Contribution. Unless P=NP, problem Om||L_max cannot admit an FPTAS, since it is NP-hard in the strong sense already on two machines. Hence, the best possible approximation algorithm is a PTAS. In this paper, we prove the existence of such an algorithm for a fixed number of machines, and thus give a positive answer to this open problem. Moreover, we provide the analysis of some simple constant approximation algorithms for the case where the number of machines is part of the input.

Organization of the paper. Section 2 presents some simple preliminary approximation results on list scheduling algorithms. In Section 3, we describe our PTAS and provide its analysis. Finally, we give some concluding remarks in Section 4.
2 Approximation Ratio of List Scheduling Algorithms
List scheduling algorithms are popular methods in scheduling theory. Recall that a list scheduling algorithm relies on a greedy allocation of the operations to the resources that prevents any machine from being idle while an operation is available to be performed. If several operations are concurrently available, ties are broken using a priority list. We call a list schedule the solution produced by a list scheduling algorithm. We establish that any list scheduling algorithm has a performance guarantee of 2, whatever its priority rule. Our analysis relies on two immediate lower bounds, namely the conservation of work and the critical path. Let us denote
work and the critical path. Let us denote
n
m
X
X
P = max {
pij } and Q = max {
pij + qj }
i=1,...,m
j=1,...,n
j=1
i=1
Clearly L∗max ≥ P and L∗max ≥ Q. We have the following result :
Proposition 1 Any list scheduling algorithm is a 2-approximation algorithm for problem O||Lmax . More precisely, for any list schedule π, Lmax (π) ≤ P + Q.
Proof. Consider a list schedule π, and let u be a job such that Lu =
Lmax (π). Without loss of generality, we can assume that the last operation
of u is scheduled on the first machine. We consider two cases: either an idle time occurs on M1 before the completion of job u, or not. If there is no
idle time on M1 , then Lu ≤ P + qu ≤ P + Q. Otherwise, let us denote by
I the total idle time occurring on M1 before the completion time of job u.
We have Lu ≤ P + I + qu . Notice that job u could not have been available
on machine M1 at any idle instant, otherwise, due to the principle of list
scheduling algorithms, it would have been scheduled. As a consequence, an
operation of job u is performed on another machine at every idle instant of
M1 before Cu . Hence, we can bound the idle time I by the total processing
time of job u. We have :
Lu ≤ P + I + qu ≤ P + Σ_{i=1}^{m} p_{iu} + qu ≤ P + Q
We can conclude that in any case Lmax (π) ≤ P + Q ≤ 2L∗max .
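The argument above can be checked on small instances with a minimal discrete-event simulator of non-preemptive open-shop list scheduling. The Python sketch below (our own code and naming, not from the paper) builds the list schedule greedily and evaluates the bound Lmax (π) ≤ P + Q on the two-machine instance used later in Proposition 2, with a = 5.

```python
def list_schedule(p, q, priority):
    """Non-preemptive open-shop list scheduling.

    p[i][j]: processing time of job j on machine i (0 = no operation),
    q[j]: delivery time of job j, priority: job indices, most critical first.
    Returns the maximum lateness Lmax of the resulting list schedule."""
    m, n = len(p), len(p[0])
    done = [[False] * n for _ in range(m)]
    machine_free = [0] * m          # machine i is busy until this time
    job_free = [0] * n              # job j is busy until this time
    completion = [0] * n
    remaining = sum(p[i][j] > 0 for i in range(m) for j in range(n))
    t = 0
    while remaining:
        for i in range(m):
            if machine_free[i] > t:
                continue
            # first available operation on machine i, in priority order
            cand = [j for j in priority
                    if p[i][j] > 0 and not done[i][j] and job_free[j] <= t]
            if cand:
                j = cand[0]
                done[i][j] = True
                machine_free[i] = job_free[j] = t + p[i][j]
                completion[j] = max(completion[j], t + p[i][j])
                remaining -= 1
        # advance time to the next machine or job release
        nxt = [x for x in machine_free + job_free if x > t]
        t = min(nxt) if nxt else t
    return max(completion[j] + q[j] for j in range(n))

# Instance of Proposition 2 with a = 5: Jobs 0 and 1 have a single
# operation of length a, Job 2 has a unit operation on each machine
# and delivery time a.  Jackson's order lists Job 2 first.
a = 5
p = [[a, 0, 1],   # machine M1
     [0, a, 1]]   # machine M2
q = [0, 0, a]
lmax = list_schedule(p, q, priority=[2, 0, 1])
P = max(sum(row) for row in p)
Q = max(sum(p[i][j] for i in range(2)) + q[j] for j in range(3))
assert lmax <= P + Q          # bound of Proposition 1
assert lmax == 2 * a + 1      # lower bound shown in Proposition 2
```

With this priority list the simulator reaches exactly the lateness 2a + 1 analyzed below, comfortably within P + Q.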
Notice that good a posteriori performance can be achieved by a list
scheduling algorithm, for instance if the workload P is large compared with
the critical path Q. One natural question is whether better approximation ratios can be obtained with particular lists. It is folklore that
minimizing the maximum lateness on one resource can be achieved by sequencing the tasks in non-increasing order of their delivery times. This
sequence is known as Jackson's order. One can wonder whether a list scheduling
algorithm using Jackson's order as its list performs better in the worst case.
The answer is negative. The following proposition states that the analysis
of Proposition 1 is tight whatever the list.
Proposition 2 No list scheduling algorithm can have a performance ratio
less than 2 for problem O2||Lmax .
Proof. Consider the following instance: we have 3 jobs to schedule on
2 machines. Jobs 1 and 2 have only one (non null) operation to perform,
respectively on machine M1 and M2 . The duration of the operation is equal
to a time units, where a ≥ 1 is a parameter of the instance. Both delivery
times are null. Job 3 has one unit operation to perform on each machine,
and its delivery time is q3 = a.
An optimal schedule sequences first Job 3 on both machines, creating an
idle time at the first time slot, and then performs Jobs 1 and 2. That is, the
optimal sequence is (3, 1) on M1 and (3, 2) on M2 . The maximum lateness
is equal to L∗max = a + 2. Notice that this schedule cannot be obtained by a
list scheduling algorithm, since an idle time occurs at the first instant while
a job (either 1 or 2) is available. Indeed, it is easy to see that, whatever
the list, either Job 1 or Job 2 is scheduled at time 0 by a list scheduling
algorithm. As a consequence, Job 3 cannot complete before time a + 1 in
a list schedule π. Hence, Lmax (π) ≥ 2a + 1. The ratio for this instance is
(2a + 1)/(a + 2), which tends to 2 when a tends to +∞.
As a conclusion, Jackson's list does not perform better than any other
list in the worst case. Nevertheless, we use it extensively in the PTAS that
we present in the next section.
Figure 1. Illustration of the worst case of Jackson's rule: the optimal schedule and Jackson's schedule for the instance of Proposition 2 on machines M1 and M2 (Gantt charts omitted).
3 PTAS
In this section, we present the first PTAS for problem Om||Lmax , that is,
when the number of machines is fixed. Our algorithm considers three classes
of jobs as introduced by Sevastianov and Woeginger [10] and used by several
authors for a variety of makespan minimization problems in shops (see for instance
the extension by Jansen et al. for the job shop [5]). Notice that our approximation algorithm does not require solving any linear program.
3.1 Description of the Algorithm
Let ε be a fixed positive number. We describe how to design an algorithm,
polynomial in the size of the input, with a performance ratio of (1 + ε)
for problem Om||Lmax . As a shorthand, we redefine ε as ε/(2m(m + 1)) in the
remainder of this section. Recall that P = max_{i=1,...,m} Σ_{j=1}^{n} p_{ij} is
the maximal workload of a machine. For a given
integer k, we introduce the following subsets of jobs B, S and T :
B = { j ∈ J | max_{i=1,...,m} p_{i,j} ≥ ε^k P }        (2)

S = { j ∈ J | ε^k P > max_{i=1,...,m} p_{i,j} ≥ ε^{k+1} P }        (3)

T = { j ∈ J | ε^{k+1} P > max_{i=1,...,m} p_{i,j} }        (4)
By construction, for any integer k, the sets B, S and T define a partition of
the jobs. For ease of understanding, the jobs of B will often be called
the big jobs, the jobs of S the small jobs, and the jobs of T the tiny jobs.
Notice that the duration of any operation of a small job is less than ε^k P ,
and less than ε^{k+1} P for a tiny job. The choice of k relies on the following
proposition, which comes from Sevastianov and Woeginger [10]:
Proposition 3 [10] There exists an integer k ≤ ⌈m/ε⌉ such that

p(S) ≤ εP        (5)

where p(S) = Σ_{j∈S} Σ_{i=1}^{m} p_{ij} is the total amount of work to perform for the
jobs of S. Moreover, for the big jobs, we have:

|B| ≤ m/ε^k        (6)
Proof. Let us denote z = ⌈m/ε⌉. Observe that for a given value k, the
duration of the largest operation of any small job belongs to the interval
Ik = [ε^{k+1} P, ε^k P [. Assume for the sake of contradiction that, for all values
k = 1, . . . , z, the corresponding set Sk does not verify Condition (5). As
a consequence, p(Sk ) > εP for each k = 1, . . . , z. Since these sets are
disjoint, it results that the total processing time of the
jobs whose largest operation has a duration in [ε^{z+1} P, P [ is
strictly greater than zεP . However, this amount of work is bounded by the
total work of the instance. We have:

zεP < Σ_{i=1}^{m} Σ_{j=1}^{n} p_{ij} ≤ mP
Thus z < m/ε, which contradicts our definition of z. It follows that at least
one interval Ik = [ε^{k+1} P, ε^k P [ with 1 ≤ k ≤ z defines a subset S of small jobs
such that p(S) ≤ εP .
To prove Inequality (6), we can observe that the total processing time
of the operations of B is bounded by mP . Thus, |B| εk P ≤ mP must hold
and Inequality (6) follows.
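The search for k in Proposition 3 is straightforward to implement. Here is a sketch in Python (our own code and naming, not from the paper): it tries each candidate k ≤ ⌈m/ε⌉ and returns the first one whose small-job set carries at most εP work.

```python
import math

def choose_k(p, eps):
    """Return an integer k <= ceil(m/eps) with p(S_k) <= eps * P,
    following the counting argument of Proposition 3.

    p[i][j] is the processing time of job j on machine i."""
    m, n = len(p), len(p[0])
    P = max(sum(row) for row in p)        # maximal machine workload
    z = math.ceil(m / eps)
    for k in range(1, z + 1):
        # small jobs: largest operation in [eps^(k+1) P, eps^k P)
        S = [j for j in range(n)
             if eps ** (k + 1) * P
             <= max(p[i][j] for i in range(m)) < eps ** k * P]
        if sum(p[i][j] for i in range(m) for j in S) <= eps * P:
            return k
    raise RuntimeError("impossible by Proposition 3")

# two-machine example with P = 5 and eps = 0.5: no job's largest
# operation lies in [1.25, 2.5), so k = 1 already works
assert choose_k([[4, 1, 0], [0, 1, 4]], eps=0.5) == 1
```

Since only ⌈m/ε⌉ candidate values are scanned and each scan is linear in the instance size, this matches the linear-time claim made after the proposition.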
Notice that, for a fixed number m of machines, only a constant number
⌈m/ε⌉ of values must be considered for k. Hence, an integer k verifying
the conditions of Proposition 3 can be found in linear time. Assume from
now on that k has been chosen according to Proposition 3. In order to present
our approach, let us explain how the different sets S, B and T of jobs
are scheduled in our PTAS. Since set S represents a very small work, we
can schedule it first. Clearly, its last operation cannot complete after time
t(S) ≤ εP in a list schedule. Since set B has a fixed number of jobs, we can
afford to consider all the ways to sequence them. For that, we discretize the
time, considering a time step δ = εk+1 P . Finally, for each assignment of
the big jobs, we schedule the tiny jobs using simply Jackson’s list scheduling
algorithm. One originality of our approach is the possibility for a tiny job
to push a big job in order to fit before it. More precisely, if the tiny job that the
list scheduling algorithm is considering cannot complete before the start of
the next big job on its machine, say b, then we force its schedule by shifting
the operation of job b to the right as much as necessary. This shifting is special
in two ways: first, we also shift right by the same amount of time all the
operations of the big jobs starting after job b. Second, the operation of job
b is then frozen, that is, it cannot be pushed again by a tiny job. Hence, an
operation of a big job can be pushed at most once by a tiny job, but can be
shifted right many times, due to the pushes of other operations of some big
jobs. A more formal description of our algorithm can be given as follows:
ALGORITHM PTAS
1. First schedule the jobs of S using any list scheduling algorithm between
time 0 and time p(S) (the cost factor of this simplification will not be
more than 1 + ε).
2. Let δ = εk+1 P . Consider all the time intervals between p(S) and mP
of length δ (the number of these intervals is a constant for a fixed ε).
3. Enumerate all the schedules of jobs in B between p(S) and mP . Here,
a schedule is reduced to an assignment of the operations to starting
times of the time intervals defined in the previous step (the cost factor
of this simplification will not be more than 1 + ε).
4. Complete every partial schedule generated in the previous step by adding
the jobs of T . The operations of T are added by applying a list scheduling algorithm using Jackson's order (i.e., when several operations are
available to be performed, we start with the one with the largest delivery
time). Note that if an operation cannot fit in front of a big job b, then
we translate b and all the following big jobs by the duration necessary
to make the schedule feasible. The operation of job b is then frozen,
and cannot be shifted any more.
5. Return the best feasible schedule found by the algorithm.
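Phase (3) can be sketched as a plain Cartesian-product enumeration (a hypothetical helper with our own naming, not from the paper): each big-job operation is assigned one of the discretized starting times, and infeasible assignments are discarded afterwards.

```python
import itertools

def big_job_assignments(num_ops, num_grid_points):
    """Phase (3): enumerate every assignment of the big-job operations
    to the discretized starting times of the grid.  There are
    num_grid_points ** num_ops candidates, a constant once m and
    epsilon are fixed; infeasible assignments are filtered out later."""
    return itertools.product(range(num_grid_points), repeat=num_ops)

# e.g. 2 big-job operations on a grid with 3 starting times: 9 candidates
assert len(list(big_job_assignments(2, 3))) == 9
```

This brute force is what makes the scheme polynomial only for fixed m and ε, as quantified in the complexity analysis of Section 3.2.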
3.2 Analysis of the Algorithm
We start by introducing some useful notation. Consider a schedule π. For
each machine i, we denote respectively by sir and eir the start time and completion time of the rth operation of a big job on machine i, for r = 1, . . . , |B|.
For convenience we introduce ei0 = 0. For short, we call the grid the set of all
the pairs (resource × starting time) defined in Phase (2) of the algorithm.
Recall that in the grid the starting times are discretized to the multiples of
δ. Notice that our algorithm enumerates in Phase (3) all the assignments
of big job operations to the grid. Phase (4) consists in scheduling all the
tiny jobs in between the big jobs. In the following, we call a time interval
on a machine corresponding to the processing of a big job a hole, since the
machine is not available there to perform the tiny jobs. The duration of the rth
hole on machine i, that is eir − sir , is denoted by hir . By analogy with packing,
we call a bin the time interval between two holes. The duration of the rth
bin on machine i, that is sir − ei,r−1 , is denoted by air . We also introduce
Hir = hi1 + · · · + hir and Air = ai1 + · · · + air , that is, the overall durations
of the r first holes and bins, respectively, on machine i.
Now consider an optimal schedule π ∗ . With immediate notations, let
s∗ir be the start time of the rth operation of a big job on machine i,
and let A∗ir be the overall duration of the r first bins. For ease of
presentation, we assume in the remainder, without loss of generality, that
we have no small jobs to schedule: indeed, Phase (1) does not increase the
length of the schedule by more than εP ≤ εL∗max . We say that an assignment
to the grid is feasible if it defines a feasible schedule for the big jobs. The
next lemma shows that there exists a feasible assignment such that each
operation of the big jobs is delayed, compared to an optimal schedule, by at
least 2mδ time units and by at most (2 + |B|)mδ time units.
Lemma 4 There exists a feasible assignment s̄ to the grid such that the operations of the big jobs are sequenced in the same order, and for every machine
i and index r we have:

s∗ir + 2mδ ≤ s̄ir ≤ s∗ir + (2 + |B|)mδ
Proof. Among all the possible assignments enumerated in Phase (3) for
the big jobs, certainly we consider the following one, which corresponds to
a shift of the optimal schedule π ∗ restricted to the big jobs :
• Insert 2mδ extra time units at the beginning of π ∗ , that is delay all
the operations by 2mδ.
• Align the big jobs to the grid (shifting them to the right)
• Define the assignment s̄ as the current starting times of the operations
of the big jobs.
More precisely, to align the big jobs to the grid, we consider the operations
sequentially in non-decreasing order of their starting times. We then shift
the current operation right to the next point of the grid and translate the
subsequent operations by the same amount of time. This translation ensures
that the schedule remains feasible for the big jobs.
By construction each operation is shifted right by at least 2mδ time
units, which implies that s̄ir ≥ s∗ir + 2mδ. The alignment of an operation
to the grid again shifts it right, together with all the subsequent operations,
by at most δ time units. Thus, the last operation is not shifted more than
m|B|δ time units by the alignment. The result follows.
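The alignment procedure in this proof can be sketched in a few lines (a toy version with our own naming, not from the paper): walk the operations in non-decreasing start order, round each start up to the next grid point, and translate all subsequent operations by the same amount, so every operation moves right by at most δ per alignment step.

```python
import math

def align_to_grid(starts, delta, offset):
    """Shift all start times right by `offset`, then align them to
    multiples of `delta`, translating the subsequent operations so
    that the relative schedule of the big jobs stays feasible."""
    order = sorted(range(len(starts)), key=lambda r: starts[r])
    s = [t + offset for t in starts]
    for pos, r in enumerate(order):
        shift = math.ceil(s[r] / delta) * delta - s[r]
        for later in order[pos:]:      # translate r and everything after it
            s[later] += shift
    return s

# three operations, grid step 2, initial delay of 1 time unit
assert align_to_grid([1, 4, 7], delta=2, offset=1) == [2, 6, 10]
```

In the example each start moves right by at least the offset and by at most the offset plus one grid step per preceding alignment, mirroring the two inequalities of Lemma 4.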
Now consider the schedule π obtained by applying Jackson's list
scheduling algorithm to pack the tiny jobs between the holes, starting from
the feasible assignment of Lemma 4. Notice that, due to the shift procedure
in Phase (4), the starting time of the big jobs (the holes) can change between
the assignment s̄ and the schedule π. However a hole can be shifted at most
m|B| times since each operation of a big job is shifted at most once by
a tiny job. Moreover the length of a shift is bounded by the duration of
an operation of a tiny job, that is by δ. In addition, as we shift all the
operations belonging to the big jobs, the length of the bins cannot decrease
in the schedule π. Hence, we have the two following properties for the
schedule π, which are direct consequences of Lemma 4 and of the previous
discussion :
1. Any operation of a big job is only slightly delayed compared to the
optimal schedule π ∗ : sir ≤ s∗ir + 2(|B| + 1)mδ
2. Each bin is larger in π than in the optimal schedule. More precisely,
we have Air ≥ A∗ir + 2mδ for every machine i and every index r.
In other words, in the schedule π we have slightly delayed the big jobs to
give more room in each bin for the tiny jobs. We say that a job y is more
critical than a job x if y has a higher priority in Jackson's order. By
convention a job is as critical as itself. We have the following lemma:
Lemma 5 In schedule π, for every job x, there exists a job y such that:

qy ≥ qx and Cx ≤ Cy∗ + 2(|B| + 1)mδ
Proof. Let x be a job and Cx its completion time in schedule π. Without
loss of generality we can assume that the last operation of the job x is
processed on the first machine. If x is a big job, that is x ∈ B, we have
already noticed that we have Cx ≤ Cx∗ + 2(|B| + 1)mδ, due to our choice
of the big jobs assignment on the grid. Hence, the inequality of Lemma 5
holds for x. Thus consider in the remainder of the proof the case of a tiny
job x. We denote by Tx the subset of tiny jobs that are more critical than
x and such that their operation on the first machine is completed by time
Cx , that is:
Tx = {y ∈ T | Cy1 ≤ Cx1 and qy ≥ qx }
Observe that our definition implies in particular that x ∈ Tx . We first
establish that in schedule π, almost all the tiny jobs processed before x on
the first machine are more critical than x. That is, the schedule π essentially
follows Jackson's sequence for the tiny jobs. Let r be the index of the
bin where x completes in schedule π. For short we denote by A1 (x) the
overall time available for processing tiny jobs on the first machine over the
time interval [0, Cx ], that is A1 (x) = Cx − H1,r . We also denote by p1 (Tx )
the total processing time of the operations of Tx on the first machine. We
claim that:

p1 (Tx ) ≥ A1 (x) − 2(m − 1)δ        (7)
If at every available instant on the first machine until the completion of x
an operation of Tx is processed in π, then clearly we have A1 (x) = p1 (Tx )
and Inequality (7) holds. Otherwise, consider a time interval I = [t, t′ ], included in a bin, during which no task of Tx is processed. We call such an interval
non-critical for x. It means that during I, either some idle time appears
on the first machine, and/or some jobs less critical than x are processed. However, due to the shift procedure and Jackson's list used by
the algorithm, the only reason for not scheduling x during I is that this job
is not available by the time another less critical job z is started. Notice that
in an open shop environment, a job x is not available on the first machine
only if one of its operations is being processed on another machine. As a
consequence, the interval I necessarily starts during the processing of x on
another machine, that is t ∈ [ti,x , Ci,x ] for some machine i. This holds for
any idle instant and any time an operation is started in interval I. As a
consequence, the interval I cannot finish later than the completion of x on
another machine i′ , plus the duration of a less critical (tiny) job z possibly
started on the first machine during the time interval [ti′ ,x , Ci′ ,x ]. Since all the
jobs are tiny, the overall duration of the non-critical intervals for x is thus
bounded by Σ_{i=2}^{m} (pi,x + δ), which is at most equal to 2(m − 1)δ. Inequality (7)
follows.
Now let y be the job of Tx that completes last on the first machine
in the optimal schedule π ∗ . Let r∗ be the index of the bin where y is
processed on the first machine in π ∗ , and let A∗1 (y) be the total available
time for tiny jobs in π ∗ before time C∗1,y , that is A∗1 (y) = C∗1,y − H∗1,r∗ .
Recall that r is the number of bins used in schedule π to process all the
operations of Tx on the first machine. We prove that the optimal schedule
also uses (at least) this number of bins, that is r∗ ≥ r. Indeed, by the
conservation of work, we have that A∗1 (y) ≥ p1 (Tx ). Using Inequality (7) we
obtain that A∗1 (y) ≥ A1 (x) − 2(m − 1)δ. By definition of r and r∗ , we also
have A1,r−1 ≤ A1 (x) and A∗1 (y) ≤ A∗1,r∗ . Hence, the following inequality
must hold:

A1,r−1 ≤ A∗1,r∗ + 2(m − 1)δ
However, we have observed that our choice of the assignment of the big jobs
to the grid ensures that for any index l, A∗1,l +2mδ ≤ A1,l , which implies that
we have A1,r−1 + 2δ ≤ A1,r∗ . As a consequence, inequality A1,r−1 < A1,r∗
must hold. Since A1,l represents the total length of the l first bins in π,
which is obviously non-decreasing with l, it implies that r ≤ r∗ .
It means that in π ∗ , task y cannot complete its operation on the first
machine before the first r big tasks. We can conclude the proof of Lemma 5
by writing that, on one hand, Cy∗ ≥ p1 (Tx ) + H1,r , and, on the other hand,
Cx ≤ p1 (Tx ) + 2(m − 1)δ + H1,r . As a consequence, x does not complete
in π later than 2(m − 1)δ time units after the completion time of y in π ∗ .
Since by definition y is more critical than x, Lemma 5 follows.
Finally, we can conclude that the following theorem holds:
Theorem 6 Problem Om||Lmax admits a PTAS.
Proof. We first establish that the maximum lateness of the schedule returned by our algorithm is bounded by (1 + ε)L∗max . In schedule π defined
in Lemma 5, let u be a job such that Lmax (π) = Cu + qu . If job u is a small
job, then it completes before time p(S). Due to our choice of the partition
(see Proposition 3), we have Lu ≤ εP + qu ≤ (1 + ε)L∗max . Hence, in the
following, we restrict to the case where u ∉ S, that is, job u is either a big
or a tiny job. According to Lemma 5, there exists a job y such that:

qu ≤ qy and Cu ≤ Cy∗ + 2(|B| + 1)mδ

We have:

Lmax (π) = Cu + qu ≤ Cy∗ + 2(|B| + 1)mδ + qy ≤ L∗max + 2(|B| + 1)mδ
As a consequence, using Proposition 3, we can write that for any fixed
ε ≤ 1:

Lmax (π) − L∗max ≤ 2(m/ε^k + 1)mε^{k+1} P ≤ 2((m + 1)/ε^k )mε^{k+1} P = 2(m + 1)mεP ≤ εL∗max
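The last step collapses the shorthand ε = ε/(2m(m + 1)) introduced in Section 3.1 back into the original accuracy parameter. Writing ε0 for that original parameter (our notation, not the paper's), a quick numerical sanity check:

```python
m = 4          # number of machines (fixed)
eps0 = 0.1     # original accuracy parameter
eps = eps0 / (2 * m * (m + 1))   # the shorthand used in the analysis
# 2(m+1) * m * eps collapses back to eps0, giving the (1 + eps0) guarantee
assert abs(2 * (m + 1) * m * eps - eps0) < 1e-12
```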
Hence, our algorithm has a performance guarantee of (1 + ε). Let us
now check its time complexity. First, the identification of k and the three
subsets B, S and T can be done in O(m²n/ε). Second, the scheduling of the
jobs of S can clearly be performed in polynomial time (in fact, in linear time
in n for m fixed). Now, let us consider the scheduling of the big jobs. The
number ∆ of points in the grid is bounded by:
∆ ≤ m × mP/δ = m²/ε^{k+1} ≤ m²/ε^{2+m/ε} ≤ m² (2m(m + 1)/ε)^{2+m/ε}
The second inequality comes from the fact that k ≤ ⌈m/ε⌉, due to Proposition 3. The last bound is clearly a constant for m and ε fixed. The number
of possible assignments of jobs of B in Phase (3) is bounded by

∆^{m|B|} ≤ ∆^{m²/ε^k}
Hence, only a constant number of assignments to the grid are to be considered. Phase (4) completes every feasible assignment in polynomial time.
Phase (5) outputs the best solution in time linear in the number of feasible assignments. Overall, the algorithm is polynomial in the size of the
instance for a fixed ε and a fixed m.
4 Conclusion
In this paper we considered an open question related to the existence of
a PTAS for the m-machine open shop problem where m is fixed and the jobs
have different delivery times. We answered this important
question in the affirmative. This represents the best possible result we can expect, due to the
strong NP-hardness of the studied problem.
Our perspectives will focus on the study of other extensions. In particular, the problem with release dates seems very challenging.
References
[1] Averbakh I., Berman O., Chernykh I., 2005. A 6/5-approximation algorithm for the two-machine routing open shop problem on a 2-node
network. European Journal of Operational Research 166(1): 3-24.
[2] Chernykh I., Kononov A., Sevastianov S., 2012. Efficient approximation
algorithms for the routing open shop problem. Computers & Operations
Research.
[3] Hall L.A., Shmoys D.B., 1992. Jackson's rule for single machine scheduling: making a good heuristic better. Mathematics of Operations Research 17: 22-35.
[4] Gonzalez T., Sahni S., 1976. Open shop scheduling to minimize finish
time. Journal of the ACM 23: 665-679.
[5] Jansen K., Solis-Oba R., Sviridenko M., 2003. Makespan minimization
in job-shops: A linear time approximation scheme. SIAM Journal on
Discrete Mathematics 16(2): 288-300.
[6] Kellerer H., 2004. Minimizing the maximum lateness. In: Leung J.Y.T.
(ed), Handbook of Scheduling: Algorithms, Models and Performance Analysis, CRC Press, chap. 10.
[7] Kononov A., Sviridenko M., 2002. A linear time approximation scheme
for makespan minimization in an open shop with release dates. Operations Research Letters 30: 276-280.
[8] Lawler E.L., Lenstra J.K., Rinnooy Kan A.H.G., 1981. Minimizing
Maximum Lateness in a Two-Machine Open Shop. Mathematics of Operations Research 6:153-158.
[9] Lawler E.L., Lenstra J.K., Rinnooy Kan A.H.G., 1982. Erratum. Mathematics of Operations Research 7(4): 635-635.
http://dx.doi.org/10.1287/moor.7.4.635
[10] Sevastianov S.V., Woeginger G.J., 1998. Makespan minimization in
open shops: A polynomial time approximation scheme. Mathematical
Programming 82: 191-198.
[11] Williamson D.P., Hall L.A., Hoogeveen J.A., Hurkens C.A.J., Lenstra
J.K., Sevastianov S.V., Shmoys D.B., 1997. Short shop schedules. Operations Research 45: 288-294.
Linear Sketching over F2

Sampath Kannan∗, Elchanan Mossel†, Grigory Yaroslavtsev‡

arXiv:1611.01879v2 [] 11 Nov 2016

November 14, 2016
Abstract
We initiate a systematic study of linear sketching over F2 . For a given Boolean function
f : {0, 1}n → {0, 1} a randomized F2 -sketch is a distribution M over d×n matrices with elements
over F2 such that Mx suffices for computing f (x) with high probability. We study a connection
between F2 -sketching and a two-player one-way communication game for the corresponding
XOR-function. Our results show that this communication game characterizes F2 -sketching under
the uniform distribution (up to dependence on error). Implications of this result include: 1)
a composition theorem for F2 -sketching complexity of a recursive majority function, 2) a tight
relationship between F2 -sketching complexity and Fourier sparsity, 3) lower bounds for a certain
subclass of symmetric functions. We also fully resolve a conjecture of Montanaro and Osborne
regarding one-way communication complexity of linear threshold functions by designing an F2 sketch of optimal size.
Furthermore, we show that (non-uniform) streaming algorithms that have to process random
updates over F2 can be constructed as F2 -sketches for the uniform distribution with only a minor
loss. In contrast with the previous work of Li, Nguyen and Woodruff (STOC’14) who show an
analogous result for linear sketches over integers in the adversarial setting our result doesn’t
require the stream length to be triply exponential in n and holds for streams of length Õ(n)
constructed through uniformly random updates. Finally, we state a conjecture that asks whether
optimal one-way communication protocols for XOR-functions can be constructed as F2 -sketches
with only a small loss.
∗ University of Pennsylvania, [email protected]
† Massachusetts Institute of Technology, [email protected]. E.M. acknowledges the support of grant N00014-16-12227 from Office of Naval Research and of NSF award CCF 1320105 as well as support from Simons Think Tank on Geometry & Algorithms.
‡ Indiana University, Bloomington, [email protected]
Contents
1 Introduction . . . 1
2 Preliminaries . . . 5
  2.1 Communication complexity . . . 6
  2.2 Fourier analysis . . . 6
3 F2-sketching over the uniform distribution . . . 7
4 Applications . . . 13
  4.1 Composition theorem for majority . . . 13
  4.2 Address function and Fourier sparsity . . . 17
  4.3 Symmetric functions . . . 18
5 Turnstile streaming algorithms over F2 . . . 19
  5.1 Random streams . . . 19
  5.2 Adversarial streams . . . 19
6 Linear threshold functions . . . 21
7 Towards the proof of Conjecture 1.3 . . . 23
Appendix . . . 29
A Deterministic F2-sketching . . . 29
  A.1 Disperser argument . . . 29
  A.2 Composition and convolution . . . 29
B Randomized F2-sketching . . . 30
  B.1 Extractor argument . . . 31
  B.2 Existential lower bound for arbitrary distributions . . . 32
  B.3 Random F2-sketching . . . 32
C Tightness of Theorem 3.4 for the Majority function . . . 33
1 Introduction
Linear sketching is the underlying technique behind many of the biggest algorithmic breakthroughs
of the past two decades. It has played a key role in the development of streaming algorithms
since [AMS99] and most recently has been the key to modern randomized algorithms for numerical
linear algebra (see survey [Woo14]), graph compression (see survey [McG14]), dimensionality reduction, etc. Linear sketching is robust to the choice of a computational model and can be applied in
settings as seemingly diverse as streaming, MapReduce as well as various other distributed models
of computation [HPP+ 15], allowing one to save computational time and space and to reduce communication
in distributed settings. This remarkable versatility is based on properties of linear sketches enabled by linearity: simple and fast updates and mergeability of sketches computed on distributed
data. Compatibility with fast numerical linear algebra packages makes linear sketching particularly
attractive for applications.
Even more surprisingly, linear sketching over the reals is known to be the best possible algorithmic approach (unconditionally) in certain settings. Most notably, under some mild conditions
linear sketches are known to be almost space optimal for processing dynamic data streams [Gan08,
LNW14, AHLW16]. Optimal bounds for streaming algorithms for a variety of computational
problems can be derived through this connection by analyzing linear sketches rather than general algorithms. Examples include approximate matchings [AKLY16], additive norm approximation [AHLW16] and frequency moments [LNW14].
In this paper we study the power of linear sketching over F2 .¹ To the best of our knowledge no
such systematic study currently exists, as prior work focuses on sketching over the field of reals (or
large finite fields, as reals are represented as word-size bounded integers). Formally, given a function
f : {0, 1}n → {0, 1} that needs to be evaluated over an input x = (x1 , . . . , xn ), we are looking for
a distribution over k subsets S1 , . . . , Sk ⊆ [n] such that the following holds: for any input x, given
the parities computed over these sets and denoted as χS1 (x), χS2 (x), . . . , χSk (x)², it should be possible
to compute f (x) with probability 1 − δ. In matrix form, sketching corresponds to multiplication
over F2 of the row vector x by a random n × k matrix whose i-th column is the characteristic vector
of the random parity χSi :
[ x1 x2 . . . xn ] [ χS1 | χS2 | . . . | χSk ] = [ χS1 (x) χS2 (x) . . . χSk (x) ]
This sketch alone should then be sufficient for computing f with high probability for any input x.
This motivates us to define the randomized linear sketch complexity of a function f over F2 as the
smallest k which allows to satisfy the above guarantee.
Definition 1.1 (F2 -sketching). For a function f : Fn2 → F2 we define its randomized linear sketch
complexity³ over F2 with error δ (denoted as Rδlin (f )) as the smallest integer k such that there
¹ It is easy to see that sketching over finite fields can be significantly better than linear sketching over integers for certain computations. As an example, consider the function (x mod 2) (for an integer input x), which can be trivially sketched with 1 bit over the field of two elements, while any linear sketch over the integers requires word-size memory.
² Here we use the notation χS (x) = ⊕i∈S xi .
³ In the language of decision trees this can be interpreted as randomized non-adaptive parity decision tree complexity. We are unaware of any systematic study of this quantity either. Since heavy decision tree terminology seems excessive for our applications (in particular, sketching is done in one shot so there isn't a decision tree involved) we prefer to use a shorter and more descriptive name.
exists a distribution χS1 , χS2 , . . . , χSk over k linear functions over F2 and a postprocessing function
g : Fk2 → F2 ⁴ which satisfies:
∀x ∈ Fn2 : Pr_{S1 ,...,Sk} [f (x1 , x2 , . . . , xn ) = g(χS1 (x), χS2 (x), . . . , χSk (x))] ≥ 1 − δ.
As we show in this paper the study of Rδlin (f ) is closely related to a certain communication
complexity problem. For f : Fn2 → F2 define the XOR-function f + : Fn2 × Fn2 → F2 as f + (x, y) =
f (x + y) where x, y ∈ Fn2 . Consider a communication game between two players Alice and Bob
holding inputs x and y respectively. Given access to a shared source of random bits Alice has to
send a single message to Bob so that he can compute f + (x, y). This is known as the one-way
communication complexity problem for XOR-functions.
Definition 1.2 (Randomized one-way communication complexity of XOR function). For a function f : Fn2 → F2 the randomized one-way communication complexity with error δ (denoted as
Rδ→ (f + )) of its XOR-function is defined as the smallest size⁵ (in bits) of the (randomized using
public randomness) message M (x) from Alice to Bob which allows Bob to evaluate f + (x, y) for any
x, y ∈ Fn2 with error probability at most δ.
Communication complexity of XOR-functions has recently been studied extensively
in the context of the log-rank conjecture (see e.g. [SZ08, ZS10, MO09, LZ10, LLZ11, SW12, LZ13,
TWXZ13, Lov14, HHL16]). However, such studies mostly either focus on deterministic communication complexity or are specific to the two-way communication model. We discuss implications of
this line of work for our F2 -sketching model in our discussion of prior work.
It is easy to see that Rδ→ (f + ) ≤ Rδlin (f ) as using shared randomness Alice can just send k bits
χS1 (x), χS2 (x), . . . , χSk (x) to Bob who can for each i ∈ [k] compute χSi (x + y) = χSi (x) + χSi (y),
which is an F2 -sketch of f on x + y and hence suffices for computing f + (x, y) with probability
1 − δ. The main open question raised in our work is whether the reverse inequality holds (at least
approximately), thus implying the equivalence of the two notions.
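The reduction R_δ^→(f^+) ≤ R_δ^{lin}(f) described above can be simulated directly. The following Python snippet (all names and the concrete parity family are ours, chosen for illustration) shows Alice sending her sketch bits and Bob completing them to an F2-sketch of x + y:

```python
import random

def parity(S, x):
    # chi_S(x) = XOR of the coordinates of x indexed by S
    return sum(x[i] for i in S) % 2

def one_way_protocol(x, y, S_list, g):
    # Alice sends k bits chi_S(x); Bob adds chi_S(y) to obtain chi_S(x + y),
    # an F2-sketch of x + y, and applies the postprocessing function g.
    message = [parity(S, x) for S in S_list]           # Alice -> Bob
    sketch_of_sum = [(a + parity(S, y)) % 2            # Bob's local work
                     for a, S in zip(message, S_list)]
    return g(sketch_of_sum)

# Toy example: f(z) = z_0 XOR z_2 is sketched exactly by the single parity {0, 2}.
n = 4
S_list = [{0, 2}]
g = lambda bits: bits[0]
x = [random.randint(0, 1) for _ in range(n)]
y = [random.randint(0, 1) for _ in range(n)]
z = [(a + b) % 2 for a, b in zip(x, y)]
assert one_way_protocol(x, y, S_list, g) == (z[0] + z[2]) % 2
```

Here the sketch is deterministic because the toy f is itself a parity; in general the S_i are drawn from the shared random source.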
Conjecture 1.3. Is it true that R_δ^→(f^+) = Θ̃(R_δ^{lin}(f)) for every f : F_2^n → F2 and 0 < δ < 1/2?
In fact all known one-way protocols for XOR-functions can be seen as F2 -sketches so it is natural
to ask whether this is always true. In this paper we further motivate this conjecture through a
number of examples of classes of functions for which it holds. One important such example from the
previous work is the function Ham_{≥k} which evaluates to 1 if and only if the Hamming weight of the input string is at least k. The corresponding XOR-function Ham_{≥k}^+ can be seen to have one-way communication complexity of Θ(k log k) via the small set disjointness lower bound of [DKS12] and a basic upper bound based on random parities [HSZZ06]. Conjecture 1.3 would imply that in order to prove a one-way disjointness lower bound it suffices to only consider F2-sketches.
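The random-parity upper bound mentioned here can be illustrated as follows (a simplified sketch of ours, not the exact protocol of [HSZZ06]): if each coordinate joins S independently with probability p, then E[(−1)^{χ_S(z)}] = (1 − 2p)^{‖z‖_0}, so the empirical bias of a few such parities separates small Hamming weights from large ones.

```python
import random

def parity_bias(w, p, trials=20000):
    # Empirical estimate of E[(-1)^{chi_S(z)}] over random S, where each of the
    # w coordinates with z_i = 1 joins S independently with probability p.
    total = 0
    for _ in range(trials):
        bit = sum(random.random() < p for _ in range(w)) % 2
        total += (-1) ** bit
    return total / trials

# The bias decays as (1 - 2p)^w, so it distinguishes weight 0 from weight >= k.
p = 0.1
for w in [0, 5, 20]:
    assert abs(parity_bias(w, p) - (1 - 2 * p) ** w) < 0.05
```

Choosing p ≈ 1/k makes the bias constant for weight below k and small above it, which is the regime relevant to Ham_{≥k}.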
In the discussion below, using Yao's principle, we switch to the equivalent notion of distributional complexity of the above problems, denoted D_δ^→ and D_δ^{lin} respectively. For the formal definitions we refer the reader to Section 2.1 and a standard textbook on communication complexity [KN97].
⁴ If a random family of functions is used here then the definition is changed accordingly. In this paper all g are deterministic.
⁵ Formally the minimum here is taken over all possible protocols, where for each protocol the size of the message M(x) refers to the largest size (in bits) of such a message taken over all inputs x ∈ F_2^n. See [KN97] for a formal definition.
Equivalence between randomized and distributional complexities allows us to restate Conjecture 1.3
as Dδ→ = Θ̃(Dδlin ).
For a fixed distribution µ over F_2^n we define D_δ^{lin,µ}(f) to be the smallest dimension of an F2-sketch that correctly outputs f with probability 1 − δ over µ. Similarly, for a distribution µ over (x, y) ∈ F_2^n × F_2^n we denote the distributional one-way communication complexity of f with error δ as D_δ^{→,µ}(f^+) (see Section 2 for a formal definition). Our first main result is an analog of Conjecture 1.3 for the uniform distribution U over (x, y) that matches the statement of the conjecture up to the dependence on the error probability:
Theorem 1.4. For any f : F_2^n → F2 it holds that D_{Θ(1/n)}^{→,U}(f^+) ≥ D_{1/3}^{lin,U}(f).
A deterministic analog of Definition 1.1 requires that f(x) = g(χ_{α_1}(x), χ_{α_2}(x), . . . , χ_{α_k}(x)) for a fixed choice of α_1, . . . , α_k ∈ F_2^n. The smallest value of k which satisfies this definition is known to be equal to the Fourier dimension of f, denoted dim(f). It corresponds to the smallest dimension of a linear subspace of F_2^n that contains the entire spectrum of f (see Section 2.2 for a formal definition). In order to keep the notation uniform we also denote it as D^{lin}(f). Most importantly, as shown in [MO09] an analog of Conjecture 1.3 holds without any loss in the deterministic case, i.e. D^→(f^+) = dim(f) = D^{lin}(f), where D^→ denotes the deterministic one-way communication complexity. This striking fact is one of the reasons why we suggest Conjecture 1.3 as an open problem.
In order to prove Theorem 1.4 we introduce a notion of approximate Fourier dimension (Definition 3.2) that relaxes the exact Fourier dimension by requiring that only a 1 − ǫ fraction of the total “energy” in f's spectrum be contained in the linear subspace. The key ingredient in the proof is a structural result, Theorem 3.4, that characterizes both D_δ^{lin,U}(f) and D_δ^{→,U}(f^+) in terms of f's approximate Fourier dimension.
Previous work and our results
Using Theorem 3.4 we derive a number of results that confirm Conjecture 1.3 for specific classes of
functions.
Recursive majority For an odd integer n the majority function Maj_n is defined to be equal to 1 if and only if the Hamming weight of the input is greater than n/2. Of particular interest is the recursive majority function Maj_3^{∘k} that corresponds to the k-fold composition of Maj_3 for k = log_3 n. This function was introduced by Boppana [SW86] and serves as an important example for various properties of Boolean functions, most importantly in randomized decision tree complexity ([SW86, JKS03, MNSX11, Leo13, MNS+13]) and most recently deterministic parity decision tree complexity [BTW15].
In Section 4.1 we show how to use Theorem 3.4 to obtain the following result:
Theorem 1.5. For any ǫ ∈ [0, 1], γ < 1/2 − ǫ and k = log_3 n it holds that:

D_{(1/4 − ǫ²)/n}^{→,U}((Maj_3^{∘k})^+) ≥ ǫ²n + 1.
In particular, this confirms Conjecture 1.3 for Maj_3^{∘k} with at most a logarithmic gap, as for constant ǫ we get D_{Θ(1/n)}^{→,U}((Maj_3^{∘k})^+) = Ω(n). By Yao's principle R_{Θ(1/n)}^{→}((Maj_3^{∘k})^+) = Ω(n). Using standard error reduction [KN97] for randomized communication this implies that R_δ^{→}((Maj_3^{∘k})^+) = Ω̃(n) for constant δ < 1/2, almost matching the trivial upper bound.
Address function and Fourier sparsity The number s of non-zero Fourier coefficients of f (known as Fourier sparsity) is one of the key quantities in the analysis of Boolean functions. It also plays an important role in the recent work on the log-rank conjecture for XOR-functions [TWXZ13, STlV14]. A remarkable recent result by Sanyal [San15] shows that for Boolean functions dim(f) = O(√s log s), namely all non-zero Fourier coefficients are contained in a subspace of polynomially smaller dimension. This bound is almost tight, as the address function (see Section 4.2 for a definition) exhibits a quadratic gap. A direct implication of Sanyal's result is a deterministic F2-sketching upper bound of O(√s log s) for any f with Fourier sparsity s. As we show in Section 4.2 this dependence on sparsity can't be improved even if randomization is allowed.
Symmetric functions A function f is symmetric if it only depends on the Hamming weight of its input. In Section 4.3 we show that Conjecture 1.3 holds (approximately) for symmetric functions which are not too close to a constant function or the parity function ∑_i x_i, where the sum is taken over F2.
Applications to streaming In the turnstile streaming model of computation a vector x of dimension n is updated through a sequence of additive updates applied to its coordinates, and the goal of the algorithm is to be able to output f(x) at any point during the stream while using space that is sublinear in n. In the real-valued case we have either x ∈ [0, m]^n or x ∈ [−m, m]^n for some universal upper bound m, and updates can be increments or decrements to x's coordinates of arbitrary magnitude.
For x ∈ Fn2 additive updates have a particularly simple form as they always flip the corresponding
coordinate of x. As we show in Section 5.2 it is easy to see based on the recent work of [Gan08,
LNW14, AHLW16] that in the adversarial streaming setting the space complexity of turnstile
streaming algorithms over F2 is determined by the F2 -sketch complexity of the function of interest.
However, this proof technique only works for very long streams, which are unrealistic in practice: the length of the adversarial stream has to be triply exponential in n in order to enforce linear behavior. The large stream length requirement is inherent in the proof structure in this line of work, and while one might expect to improve the triply exponential dependence on n, at least an exponential dependence appears necessary, which is a major limitation of this approach.
As we show in Section 5.1, it follows directly from our Theorem 1.4 that turnstile streaming algorithms that achieve low error probability under random F2 updates might as well be F2-sketches. For two natural choices of the random update model, short streams of length either O(n) or O(n log n) suffice for our reduction. We stress that our lower bounds are also stronger than worst-case adversarial lower bounds, as they hold under an average-case scenario. Furthermore, our Conjecture 1.3 would imply that space optimal turnstile streaming algorithms over F2 have to be linear sketches for adversarial streams of length only 2n.
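The basic fact that an F2-sketch processes turnstile F2 updates directly can be illustrated with a toy implementation (naming is ours): flipping coordinate i of x flips exactly those stored bits χ_S(x) with i ∈ S, so the sketch is maintained in O(k) time per update without storing x.

```python
import random

class F2Sketch:
    """Maintains chi_S(x) for each S in S_list under single-bit-flip updates."""
    def __init__(self, S_list):
        self.S_list = S_list
        self.bits = [0] * len(S_list)   # sketch of x = 0

    def update(self, i):
        # Turnstile update over F2: x_i <- x_i + 1. Only parities containing i flip.
        for j, S in enumerate(self.S_list):
            if i in S:
                self.bits[j] ^= 1

n, k = 16, 4
S_list = [set(random.sample(range(n), 5)) for _ in range(k)]
sketch = F2Sketch(S_list)
x = [0] * n
for _ in range(100):                    # a random update stream
    i = random.randrange(n)
    x[i] ^= 1
    sketch.update(i)

# The maintained bits agree with direct evaluation of the parities on x.
assert sketch.bits == [sum(x[i] for i in S) % 2 for S in S_list]
```

The space used is k bits plus the description of the S_i, matching the sketch dimension rather than n.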
Linear Threshold Functions Linear threshold functions (LTFs) are one of the most studied
classes of Boolean functions as they play a central role in circuit complexity, learning theory and
machine learning (See Chapter 5 in [O’D14] for a comprehensive introduction to properties of
LTFs). Such functions are parameterized by two parameters θ and m known as threshold and
margin respectively (see Definition 6.1 for a formal definition). We design an F2-sketch for LTFs with complexity O(θ/m log(θ/m)). By applying the sketch in the one-way communication setting this fully resolves an open problem posed in [MO09]. Our work shows that the dependence on n is not necessary, which is an improvement over the previously best known protocol due to [LZ13], which achieves communication O(θ/m log n). Our communication bound is optimal due to [DKS12]. See Section 6 for details.
Other previous work Closely related to ours is the work on communication protocols for XOR-functions started in [SZ08, MO09]. In particular, [MO09] presents two basic one-way communication protocols based on random parities. The first one, stated as Fact B.7, generalizes the classic protocol for equality. The second one uses the result of Grolmusz [Gro97] and implies that ℓ1-sampling of Fourier characters gives a randomized F2-sketch of size O(‖fˆ‖_1²) (for constant error). Another line of work that is closely related to ours is the study of the two-player simultaneous message passing model (SMP). This model can also be used to prove lower bounds on F2-sketching complexity. However, in the context of our work there is no substantial difference, as for product distributions the two models are essentially equivalent. Recent results in the SMP model include [MO09, LLZ11, LZ13].
While the decision tree literature is not directly relevant to us, since our model doesn't allow adaptivity, we remark that there has been recent interest in the study of (adaptive) deterministic parity decision trees [BTW15] and non-adaptive deterministic parity decision trees [STlV14, San15]. As mentioned above, our model can be interpreted as non-adaptive randomized parity decision trees and to the best of our knowledge it hasn't been studied explicitly before. Another related model is that of parity kill numbers. In this model a composition theorem has recently been shown by [OWZ+14], but the key difference is again adaptivity.
Organization The rest of this paper is organized as follows. In Section 2 we introduce the
required background from communication complexity and Fourier analysis of Boolean functions.
In Section 3 we prove Theorem 1.4. In Section 4 we give applications of this theorem for recursive majority (Theorem 1.5), the address function and symmetric functions. In Section 5 we describe
applications to streaming. In Section 6 we describe our F2 -sketching protocol for LTFs. In Section 7
we show a lower bound for one-bit protocols making progress towards resolving Conjecture 1.3.
In Appendix A we give some basic results about deterministic F2 -sketching (or Fourier dimension) of composition and convolution of functions. We also present a basic lower bound argument
based on affine dispersers. In Appendix B we give some basic results about randomized F2 -sketching
including a lower bound based on extractors and a classic protocol based on random parities which
we use as a building block in our sketch for LTFs. We also present evidence for why an analog of
Theorem 3.4 doesn’t hold for arbitrary distributions. In Appendix C we argue that the parameters
of Theorem 3.4 can’t be substantially improved.
2 Preliminaries
For an integer n we use notation [n] = {1, . . . , n}. For integers n ≤ m we use notation [n, m] =
{n, . . . , m}. For an arbitrary domain D we denote the uniform distribution over this domain as
U(D). For a vector x and p ≥ 1 we denote the p-norm of x as ‖x‖_p and reserve the notation ‖x‖_0 for the Hamming weight.
2.1 Communication complexity
Consider a function f : F_2^n × F_2^n → F2 and a distribution µ over F_2^n × F_2^n. The one-way distributional complexity of f with respect to µ, denoted D_δ^{→,µ}(f), is the smallest communication cost of a one-way deterministic protocol that outputs f(x, y) with probability at least 1 − δ over the inputs (x, y) drawn from the distribution µ. The one-way distributional complexity of f, denoted D_δ^→(f), is defined as D_δ^→(f) = sup_µ D_δ^{→,µ}(f). By Yao's minimax theorem [Yao83] it follows that R_δ^→(f) = D_δ^→(f). One-way communication complexity over product distributions is defined as D_δ^{→,×}(f) = sup_{µ=µ_x×µ_y} D_δ^{→,µ}(f), where µ_x and µ_y are distributions over F_2^n.
With every two-party function f : F_2^n × F_2^n → F2 we associate the communication matrix M^f ∈ F_2^{2^n × 2^n} with entries M^f_{x,y} = f(x, y). A deterministic protocol in which Alice sends Bob a message M(x) of length t partitions the rows of this matrix into 2^t combinatorial rectangles, where each rectangle contains all rows of M^f corresponding to the same fixed message in {0, 1}^t.
2.2 Fourier analysis
We consider functions from F_2^n to ℝ.⁶ For any fixed n ≥ 1, the space of these functions forms an inner product space with the inner product ⟨f, g⟩ = E_{x∼U(F_2^n)}[f(x)g(x)] = (1/2^n) ∑_{x∈F_2^n} f(x)g(x). The ℓ2 norm of f : F_2^n → ℝ is ‖f‖_2 = √⟨f, f⟩ = √(E_x[f(x)²]), and the ℓ2 distance between two functions f, g : F_2^n → ℝ is the ℓ2 norm of the function f − g. In other words,

‖f − g‖_2 = √⟨f − g, f − g⟩ = (1/√|F_2^n|) · √(∑_{x∈F_2^n} (f(x) − g(x))²).
For x, y ∈ F_2^n we denote the inner product as x · y = ∑_{i=1}^n x_i y_i. For α ∈ F_2^n, the character χ_α : F_2^n → {+1, −1} is the function defined by χ_α(x) = (−1)^{α·x}. Characters form an orthonormal basis, as ⟨χ_α, χ_β⟩ = δ_{αβ} where δ is the Kronecker symbol. The Fourier coefficient of f : F_2^n → ℝ corresponding to α is fˆ(α) = E_x[f(x)χ_α(x)]. The Fourier transform of f is the function fˆ : F_2^n → ℝ that returns the value of each Fourier coefficient of f. We use the notation Spec(f) = {α ∈ F_2^n : fˆ(α) ≠ 0} to denote the set of all non-zero Fourier coefficients of f.
The set of Fourier transforms of functions mapping F_2^n → ℝ forms an inner product space with inner product ⟨fˆ, ĝ⟩ = ∑_{α∈F_2^n} fˆ(α)ĝ(α). The corresponding ℓ2 norm is ‖fˆ‖_2 = √⟨fˆ, fˆ⟩ = √(∑_{α∈F_2^n} fˆ(α)²). Note that the inner product and ℓ2 norm are weighted differently for a function f : F_2^n → ℝ and its Fourier transform fˆ : F_2^n → ℝ.
Fact 2.1 (Parseval's identity). For any f : F_2^n → ℝ it holds that ‖f‖_2 = ‖fˆ‖_2 = √(∑_{α∈F_2^n} fˆ(α)²). Moreover, if f : F_2^n → {+1, −1} then ‖f‖_2 = ‖fˆ‖_2 = 1.
We use notation A ≤ Fn2 to denote the fact that A is a linear subspace of Fn2 .
Definition 2.2 (Fourier dimension). The Fourier dimension of f : Fn2 → {+1, −1} denoted as
dim(f ) is the smallest integer k such that there exists A ≤ Fn2 of dimension k for which Spec(f ) ⊆
A.
⁶ In all Fourier-analytic arguments Boolean functions are treated as functions of the form f : F_2^n → {+1, −1}, where 0 is mapped to 1 and 1 is mapped to −1. Otherwise we use these two notations interchangeably.
We say that A ≤ Fn2 is a standard subspace if it has a basis v1 , . . . , vd where each vi has Hamming
weight equal to 1. An orthogonal subspace A⊥ is defined as:
A⊥ = {γ ∈ Fn2 : ∀x ∈ A γ · x = 0}.
An affine subspace (or coset) of F_2^n of the form A = H + a for some H ≤ F_2^n and a ∈ F_2^n is defined as:

A = {γ ∈ F_2^n : ∀x ∈ H^⊥, γ · x = a · x}.
We now introduce notation for restrictions of functions to affine subspaces.
Definition 2.3. Let f : F_2^n → ℝ and z ∈ F_2^n. We define f^{+z} : F_2^n → ℝ as f^{+z}(x) = f(x + z).

Fact 2.4. The Fourier coefficients of f^{+z} are given as (f^{+z})ˆ(γ) = (−1)^{γ·z} fˆ(γ), and hence:

f^{+z} = ∑_{S∈F_2^n} fˆ(S)χ_S(z)χ_S.
Definition 2.5 (Coset restriction). For f : F_2^n → ℝ, z ∈ F_2^n and H ≤ F_2^n we write f_H^{+z} : H → ℝ for the restriction of f to H + z.

Definition 2.6 (Convolution). For two functions f, g : F_2^n → ℝ their convolution (f ∗ g) : F_2^n → ℝ is defined as (f ∗ g)(x) = E_{y∼U(F_2^n)}[f(y)g(x + y)].

For S ∈ F_2^n the corresponding Fourier coefficient of the convolution is given as (f ∗ g)ˆ(S) = fˆ(S)ĝ(S).
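Both Fact 2.4 and the multiplicative behavior of convolution coefficients can be verified exhaustively for small n (brute-force illustration code of ours):

```python
from itertools import product

def fhat(f, n, alpha):
    # Fourier coefficient f_hat(alpha) = E_x[f(x) * chi_alpha(x)].
    return sum(f(x) * (-1) ** sum(a * b for a, b in zip(alpha, x))
               for x in product([0, 1], repeat=n)) / 2 ** n

n = 3
f = lambda x: 1 if sum(x) <= 1 else -1            # Maj_3 in +-1 form
g = lambda x: (-1) ** x[0]                        # the character chi_{(1,0,0)}
z = (1, 0, 1)
shift = lambda x: f(tuple((a + b) % 2 for a, b in zip(x, z)))        # f^{+z}
conv = lambda x: sum(f(y) * g(tuple((a + b) % 2 for a, b in zip(x, y)))
                     for y in product([0, 1], repeat=n)) / 2 ** n    # f * g

for alpha in product([0, 1], repeat=n):
    chi_alpha_z = (-1) ** sum(a * b for a, b in zip(alpha, z))
    # Fact 2.4: (f^{+z})^(alpha) = chi_alpha(z) * f_hat(alpha)
    assert abs(fhat(shift, n, alpha) - chi_alpha_z * fhat(f, n, alpha)) < 1e-9
    # Definition 2.6: (f * g)^(alpha) = f_hat(alpha) * g_hat(alpha)
    assert abs(fhat(conv, n, alpha) - fhat(f, n, alpha) * fhat(g, n, alpha)) < 1e-9
```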
3 F2-sketching over the uniform distribution
We use the following definition of Fourier concentration, which plays an important role in learning theory [KM93].

Definition 3.1 (Fourier concentration). The spectrum of a function f : F_2^n → {+1, −1} is ǫ-concentrated on a collection of Fourier coefficients Z ⊆ F_2^n if ∑_{S∈Z} fˆ²(S) ≥ ǫ.
For a function f : F_2^n → {+1, −1} and a parameter ǫ > 0 we introduce a notion of approximate Fourier dimension as the smallest integer d for which f is ǫ-concentrated on some linear subspace of dimension d.
Definition 3.2 (Approximate Fourier dimension). Let A_k be the set of all linear subspaces of F_2^n of dimension k. For f : F_2^n → {+1, −1} and ǫ > 0 the approximate Fourier dimension dim_ǫ(f) is defined as:

dim_ǫ(f) = min_k { k : ∃A ∈ A_k such that ∑_{S∈A} fˆ(S)² ≥ ǫ }.
Definition 3.3 (Approximate Fourier dimension gap). For f : F_2^n → {+1, −1} and 1 ≤ d ≤ n we define:

ǫ_d(f) = max { ǫ : dim_ǫ(f) = d },   ∆_d(f) = ǫ_d(f) − ǫ_{d−1}(f),

where we refer to ∆_d(f) as the approximate Fourier dimension gap of dimension d.
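For very small n, dim_ǫ(f) can be computed by exhaustive search over all subspaces (exponential-time illustration code of ours):

```python
from itertools import product, combinations

n = 3
points = list(product([0, 1], repeat=n))
f = lambda x: 1 if sum(x) <= 1 else -1        # Maj_3 in the +-1 convention
fhat = {a: sum(f(x) * (-1) ** sum(i * j for i, j in zip(a, x))
               for x in points) / 2 ** n for a in points}

def span(vecs):
    # All F2-linear combinations of the given vectors.
    S = {(0,) * n}
    for v in vecs:
        S |= {tuple((a + b) % 2 for a, b in zip(v, s)) for s in S}
    return S

def dim_eps(eps):
    # Smallest k such that some k-dimensional subspace holds >= eps of the energy.
    nonzero = [p for p in points if any(p)]
    for k in range(n + 1):
        for basis in combinations(nonzero, k):
            if sum(fhat[s] ** 2 for s in span(basis)) >= eps - 1e-9:
                return k

# Maj_3 puts weight 1/4 on each of the singletons and on (1,1,1), so the
# approximate dimension grows in steps of 1/4:
assert dim_eps(0.25) == 1 and dim_eps(0.5) == 2 and dim_eps(0.75) == 3
```

Linearly dependent "bases" only produce subspaces of smaller dimension, which the ascending search over k has already examined, so the minimum returned is correct.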
The following theorem shows that (up to some slack in the dependence on the probability of
error) the one-way communication complexity under the uniform distribution matches the linear
sketch complexity. We note that the theorem can be applied to all possible values of d and show
how to pick specific values of d of interest in Corollary 3.10. We illustrate tightness of Part 3 of this
theorem in Appendix C. We also note that the lower bounds given by this theorem are stronger
than the basic extractor lower bound given in Appendix B.1. See Remark B.5 for further discussion.
Theorem 3.4. For any f : F_2^n → {+1, −1}, 1 ≤ d ≤ n, ǫ_1 = ǫ_d(f) and γ < (1 − √ǫ_1)/2:

1. D_{(1−ǫ_1)/2}^{→,U}(f^+) ≤ D_{(1−ǫ_1)/2}^{lin,U}(f) ≤ d,

2. D_γ^{lin,U}(f) ≥ d + 1,

3. for δ = ∆_d(f)/4: D_δ^{→,U}(f^+) ≥ d.
Proof. Part 1.⁷ By the assumptions of the theorem we know that there exists a d-dimensional subspace A ≤ F_2^n which satisfies ∑_{S∈A} fˆ²(S) ≥ ǫ_1. Let g : F_2^n → ℝ be a function defined by its Fourier transform as follows:

ĝ(S) = fˆ(S) if S ∈ A, and ĝ(S) = 0 otherwise.
Consider drawing a random variable θ from the distribution with p.d.f. 1 − |θ| over [−1, 1].
Proposition 3.5. For all t such that −1 ≤ t ≤ 1 and z ∈ {+1, −1} the random variable θ satisfies:

Pr_θ[sgn(t − θ) ≠ z] ≤ (1/2)(z − t)².
Proof. W.l.o.g. we can assume z = 1, as the case z = −1 is symmetric. Then we have:

Pr_θ[sgn(t − θ) ≠ 1] = ∫_t^1 (1 − |γ|) dγ ≤ ∫_t^1 (1 − γ) dγ = (1/2)(1 − t)².
Define a family of functions g_θ : F_2^n → {+1, −1} as g_θ(x) = sgn(g(x) − θ). Then we have:

E_θ Pr_{x∼U(F_2^n)}[g_θ(x) ≠ f(x)] = E_{x∼U(F_2^n)} Pr_θ[sgn(g(x) − θ) ≠ f(x)]
  ≤ E_{x∼U(F_2^n)}[(1/2)(f(x) − g(x))²]   (by Proposition 3.5)
  = (1/2)‖f − g‖_2².
Using the definition of g and Parseval we have:

(1/2)‖f − g‖_2² = (1/2)‖(f − g)ˆ‖_2² = (1/2)‖fˆ − ĝ‖_2² = (1/2) ∑_{S∉A} fˆ²(S) ≤ (1 − ǫ_1)/2.

Thus, there exists a choice of θ such that g_θ achieves error at most (1 − ǫ_1)/2. Clearly g_θ can be computed based on the d parities forming a basis of A and hence D_{(1−ǫ_1)/2}^{lin,U}(f) ≤ d.
⁷ This argument is a refinement of the standard “sign trick” from learning theory which approximates a Boolean function by taking the sign of its real-valued approximation under ℓ2.
Part 2. Fix any deterministic sketch that uses d functions χ_{S_1}, . . . , χ_{S_d} and let S = (S_1, . . . , S_d). For fixed values of these sketches b = (b_1, . . . , b_d), where b_i = χ_{S_i}(x), we denote the restriction of f to the resulting coset as f|_{(S,b)}. Using the standard expression for the Fourier coefficients of an affine restriction, the constant Fourier coefficient of the restricted function is given as:

f|_{(S,b)}ˆ(∅) = ∑_{Z⊆[d]} (−1)^{∑_{i∈Z} b_i} fˆ(∑_{i∈Z} S_i).
Thus, we have:

f|_{(S,b)}ˆ(∅)² = ∑_{Z⊆[d]} fˆ²(∑_{i∈Z} S_i) + ∑_{Z_1≠Z_2⊆[d]} (−1)^{∑_{i∈Z_1∆Z_2} b_i} fˆ(∑_{i∈Z_1} S_i) fˆ(∑_{i∈Z_2} S_i).
Taking the expectation over a uniformly random b ∼ U(F_2^d) we have:

E_{b∼U(F_2^d)}[f|_{(S,b)}ˆ(∅)²] = ∑_{Z⊆[d]} fˆ²(∑_{i∈Z} S_i) + ∑_{Z_1≠Z_2⊆[d]} E_{b∼U(F_2^d)}[(−1)^{∑_{i∈Z_1∆Z_2} b_i}] fˆ(∑_{i∈Z_1} S_i) fˆ(∑_{i∈Z_2} S_i)
  = ∑_{Z⊆[d]} fˆ²(∑_{i∈Z} S_i),

since for Z_1 ≠ Z_2 the expectation of (−1)^{∑_{i∈Z_1∆Z_2} b_i} over a uniform b is 0.
The latter sum is the sum of squared Fourier coefficients over a linear subspace of dimension at most d and hence is at most ǫ_1 by the assumption of the theorem. Using Jensen's inequality:

E_{b∼U(F_2^d)}[|f|_{(S,b)}ˆ(∅)|] ≤ √(E_{b∼U(F_2^d)}[f|_{(S,b)}ˆ(∅)²]) ≤ √ǫ_1.

For a fixed restriction (S, b), if |f|_{(S,b)}ˆ(∅)| ≤ α then |Pr[f|_{(S,b)} = 1] − Pr[f|_{(S,b)} = −1]| ≤ α, and hence no algorithm can predict the value of the restricted function on this coset with probability greater than (1 + α)/2. Thus no algorithm can predict f|_{(S_1,b_1),...,(S_d,b_d)} for a uniformly random choice of (b_1, . . . , b_d), and hence also on a uniformly at random chosen x, with probability greater than (1 + √ǫ_1)/2.

Part 3. Let ǫ_2 = ǫ_{d−1}(f) and recall that ǫ_1 = ǫ_d(f).
Definition 3.6. We say that A ≤ F_2^n distinguishes x_1, x_2 ∈ F_2^n if ∃S ∈ A : χ_S(x_1) ≠ χ_S(x_2).
We first prove the following auxiliary lemma.

Lemma 3.7. Fix ǫ_1 > ǫ_2 ≥ 0 and x_1, x_2 ∈ F_2^n. If there exists a subspace A_d ≤ F_2^n of dimension d which distinguishes x_1 and x_2 such that f : F_2^n → {+1, −1} is ǫ_1-concentrated on A_d but is not ǫ_2-concentrated on any (d − 1)-dimensional linear subspace, then:

Pr_{z∼U(F_2^n)}[f^{+x_1}(z) ≠ f^{+x_2}(z)] ≥ ǫ_1 − ǫ_2.
Proof. Note that for a fixed x ∈ F_2^n (by Fact 2.4) the Fourier expansion of f^{+x} can be given as:

f^{+x}(z) = ∑_{S∈F_2^n} fˆ(S)χ_S(z + x) = ∑_{S∈F_2^n} fˆ(S)χ_S(z)χ_S(x).
Thus we have:

Pr_{z∼U(F_2^n)}[f^{+x_1}(z) ≠ f^{+x_2}(z)] = (1/2)(1 − ⟨f^{+x_1}, f^{+x_2}⟩)
  = (1/2)(1 − ⟨∑_{S_1∈F_2^n} fˆ(S_1)χ_{S_1}χ_{S_1}(x_1), ∑_{S_2∈F_2^n} fˆ(S_2)χ_{S_2}χ_{S_2}(x_2)⟩)
  = (1/2)(1 − ∑_{S∈F_2^n} fˆ(S)²χ_S(x_1)χ_S(x_2))   (by orthogonality of characters).
We now analyze the expression ∑_{S∈F_2^n} fˆ(S)²χ_S(x_1)χ_S(x_2). Breaking the sum into two parts we have:

∑_{S∈F_2^n} fˆ(S)²χ_S(x_1)χ_S(x_2) = ∑_{S∈A_d} fˆ(S)²χ_S(x_1)χ_S(x_2) + ∑_{S∉A_d} fˆ(S)²χ_S(x_1)χ_S(x_2)
  ≤ ∑_{S∈A_d} fˆ(S)²χ_S(x_1)χ_S(x_2) + (1 − ǫ_1).
To give a bound on the first term we will use the fact that A_d distinguishes x_1 and x_2. We will need the following simple fact.

Proposition 3.8. If A_d distinguishes x_1 and x_2 then there exists a basis S_1, S_2, . . . , S_d of A_d such that χ_{S_1}(x_1) ≠ χ_{S_1}(x_2) while χ_{S_i}(x_1) = χ_{S_i}(x_2) for all i ≥ 2.

Proof. Since A_d distinguishes x_1 and x_2 there exists S ∈ A_d such that χ_S(x_1) ≠ χ_S(x_2). Fix S_1 = S and consider an arbitrary basis of A_d of the form (S_1, T_2, . . . , T_d). For i ≥ 2, if χ_{T_i}(x_1) = χ_{T_i}(x_2) then we let S_i = T_i. Otherwise, we let S_i = T_i + S_1, which preserves the basis and ensures that:

χ_{S_i}(x_1) = χ_{T_i}(x_1)χ_{S_1}(x_1) = χ_{T_i}(x_2)χ_{S_1}(x_2) = χ_{S_i}(x_2),

since both factors flip sign between x_1 and x_2.
Fix the basis (S_1, S_2, . . . , S_d) of A_d with the properties given by Proposition 3.8. Let A_{d−1} = span(S_2, . . . , S_d), so that for all S ∈ A_{d−1} it holds that χ_S(x_1) = χ_S(x_2). Then we have:

∑_{S∈A_d} fˆ(S)²χ_S(x_1)χ_S(x_2) = ∑_{S∈A_{d−1}} fˆ(S)²χ_S(x_1)χ_S(x_2) + ∑_{S∈A_{d−1}} fˆ(S + S_1)²χ_{S+S_1}(x_1)χ_{S+S_1}(x_2)
  = ∑_{S∈A_{d−1}} fˆ(S)² − ∑_{S∈A_{d−1}} fˆ(S + S_1)².

The first term in the above summation is at most ǫ_2 since f is not ǫ_2-concentrated on any (d − 1)-dimensional linear subspace. The second is at least ǫ_1 − ǫ_2 since f is ǫ_1-concentrated on A_d. Thus, putting things together we have that

∑_{S∈F_2^n} fˆ(S)²χ_S(x_1)χ_S(x_2) ≤ ǫ_2 − (ǫ_1 − ǫ_2) + (1 − ǫ_1) = 1 − 2(ǫ_1 − ǫ_2).

This completes the proof, showing that Pr_{z∼U(F_2^n)}[f^{+x_1}(z) ≠ f^{+x_2}(z)] ≥ ǫ_1 − ǫ_2.
We are now ready to complete the proof of the third part of Theorem 3.4. We can always assume that the protocol that Alice uses is deterministic, since for randomized protocols one can fix their randomness to obtain the deterministic protocol with the smallest error. Fix a (d − 1)-bit deterministic protocol that Alice is using to send a message to Bob. This protocol partitions the rows of the communication matrix into t = 2^{d−1} rectangles corresponding to different messages. We denote the sizes of these rectangles as r_1, . . . , r_t and the rectangles themselves as R_1, . . . , R_t ⊆ F_2^n respectively. Let the outcome of the protocol be P(x, y). Then the error is given as:

Pr_{x,y∼U(F_2^n)}[P(x, y) ≠ f(x + y)] = ∑_{i=1}^t (r_i/2^n) · Pr_{x∼U(R_i),y∼U(F_2^n)}[P(x, y) ≠ f(x + y)]
  ≥ ∑_{i : r_i>2^{n−d}} (r_i/2^n) · Pr_{x∼U(R_i),y∼U(F_2^n)}[P(x, y) ≠ f(x + y)],

where we only restricted attention to rectangles of size greater than 2^{n−d}. Our next lemma shows that in such rectangles the protocol makes a significant error:
Lemma 3.9. If r_i > 2^{n−d} then:

Pr_{x∼U(R_i),y∼U(F_2^n)}[P(x, y) ≠ f(x + y)] ≥ (1/2) · ((r_i − 2^{n−d})/r_i) · (ǫ_1 − ǫ_2).
Proof. For y ∈ F_2^n let p_y(R_i) = min(Pr_{x∼U(R_i)}[f(x + y) = 1], Pr_{x∼U(R_i)}[f(x + y) = −1]). We have:

Pr_{x∼U(R_i),y∼U(F_2^n)}[P(x, y) ≠ f(x + y)] = E_{y∼U(F_2^n)} Pr_{x∼U(R_i)}[P(x, y) ≠ f(x + y)]
  ≥ E_{y∼U(F_2^n)}[p_y(R_i)]
  ≥ E_{y∼U(F_2^n)}[p_y(R_i)(1 − p_y(R_i))]
  = E_{y∼U(F_2^n)}[(1/2) Pr_{x_1,x_2∼U(R_i)}[f(x_1 + y) ≠ f(x_2 + y)]]
  = (1/2) E_{x_1,x_2∼U(R_i)}[Pr_{y∼U(F_2^n)}[f(x_1 + y) ≠ f(x_2 + y)]].
Fix a d-dimensional linear subspace A_d such that f is ǫ_1-concentrated on A_d. There are 2^{n−d} vectors which have the same inner products with all vectors in A_d. Thus with probability at least (r_i − 2^{n−d})/r_i two random vectors x_1, x_2 ∼ U(R_i) are distinguished by A_d. Conditioning on this event we have:

(1/2) E_{x_1,x_2∼U(R_i)}[Pr_{y∼U(F_2^n)}[f(x_1 + y) ≠ f(x_2 + y)]]
  ≥ (1/2) · ((r_i − 2^{n−d})/r_i) · E_{x_1,x_2∼U(R_i)}[Pr_{y∼U(F_2^n)}[f(x_1 + y) ≠ f(x_2 + y)] | A_d distinguishes x_1, x_2]
  ≥ (1/2) · ((r_i − 2^{n−d})/r_i) · (ǫ_1 − ǫ_2),

where the last inequality follows by Lemma 3.7.
Using Lemma 3.9 we have:

Pr_{x,y∼U(F_2^n)}[P(x, y) ≠ f(x + y)] ≥ ((ǫ_1 − ǫ_2)/2^{n+1}) ∑_{i : r_i>2^{n−d}} (r_i − 2^{n−d})
  = ((ǫ_1 − ǫ_2)/2^{n+1}) (∑_{i=1}^t (r_i − 2^{n−d}) − ∑_{i : r_i≤2^{n−d}} (r_i − 2^{n−d}))
  ≥ ((ǫ_1 − ǫ_2)/2^{n+1}) (2^n − 2^{n−1})
  = (ǫ_1 − ǫ_2)/4,

where the inequality follows since ∑_{i=1}^t r_i = 2^n, t = 2^{d−1} and all the terms in the second sum are non-positive.
An important question that arises when applying Theorem 3.4 is the choice of the value of d. The following corollaries of Theorem 3.4 give one particularly simple way of choosing this value for any function in such a way that we obtain a non-trivial lower bound for O(1/n)-error.
Corollary 3.10. For any f : F_2^n → {+1, −1} such that fˆ(∅) ≤ θ for some constant θ < 1 there exists an integer d ≥ 1 such that:

D_{Θ(1/n)}^{→,U}(f^+) ≥ d ≥ D_{1/3}^{lin,U}(f).
Proof. We have ǫ_0(f) < θ and ǫ_n(f) = 1. Let d* = arg max_{d=1}^n ∆_d(f) and ∆(f) = ∆_{d*}(f). Consider two cases:

Case 1. ∆(f) ≥ (1 − θ)/3. By Part 3 of Theorem 3.4 we have that D_{(1−θ)/(12n)}^{→,U}(f^+) ≥ d*. Furthermore, ǫ_{d*}(f) ≥ θ + ǫ_{d*}(f) − ǫ_{d*−1}(f) = θ + ∆(f) ≥ (1 + 2θ)/3. By Part 1 of Theorem 3.4 we have D_{(1−θ)/3}^{lin,U}(f) ≤ d*.

Case 2. ∆(f) < (1 − θ)/3. In this case there exists d_1 ≥ 1 such that ǫ_{d_1}(f) ∈ [θ_1, θ_2], where θ_1 = θ + (1 − θ)/3 and θ_2 = θ + 2(1 − θ)/3. By averaging there exists d_2 > d_1 such that ∆_{d_2}(f) = ǫ_{d_2}(f) − ǫ_{d_2−1}(f) ≥ (1 − θ_2)/n = Θ(1/n). Applying Part 3 of Theorem 3.4 we have that D_{Θ(1/n)}^{→,U}(f^+) ≥ d_2. Furthermore, we have ǫ_{d_2}(f) ≥ θ_1 and hence (1 − ǫ_{d_2}(f))/2 ≤ (1 − θ_1)/2 ≤ (1 − θ)/3. By Part 1 of Theorem 3.4 we have D_{(1−θ)/3}^{lin,U}(f) ≤ d_2.
The proof of Theorem 1.4 follows directly from Corollary 3.10. If θ ≤ 1/3 then the statement of the theorem holds. If θ ≥ 1/3 then ǫ_0(f) ≥ 1/3, so by Part 1 of Theorem 3.4 we have D_{1/3}^{lin,U}(f) ≤ 0 and the inequality holds trivially.
Furthermore, using the same averaging argument as in the proof of Corollary 3.10 we obtain
the following generalization of the above corollary that will be useful for our applications.
Corollary 3.11. For any f : F_2^n → {+1, −1} and d such that ǫ_{d−1}(f) ≤ θ it holds that:

D_{(1−θ)/(4(n−d))}^{→,U}(f^+) ≥ d.
4 Applications

4.1 Composition theorem for majority
In this section, using Theorem 3.4, we give a composition theorem for F2-sketching of the composed Maj_3 function. Unlike in the deterministic case, for which the composition theorem is easy to show (see Lemma A.6), in the randomized case composition results require more work.
Definition 4.1 (Composition). For f : F_2^n → F2 and g : F_2^m → F2 their composition f ∘ g : F_2^{mn} → F2 is defined as:

(f ∘ g)(x) = f(g(x_1, . . . , x_m), g(x_{m+1}, . . . , x_{2m}), . . . , g(x_{m(n−1)+1}, . . . , x_{mn})).
Consider the recursive majority function Maj_3^{∘k} ≡ Maj_3 ∘ Maj_3 ∘ · · · ∘ Maj_3, where the composition is taken k times.
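Both Definition 4.1 and the Fourier expansion of Maj_3 used in the proof below can be checked directly (illustration code of ours, with inputs in the ±1 convention):

```python
from itertools import product

def maj3(a, b, c):
    # Majority of three +-1 values.
    return 1 if a + b + c > 0 else -1

def compose(f, g, m):
    # (f o g) applied blockwise as in Definition 4.1 (here over +-1 inputs).
    return lambda xs: f(*[g(*xs[i * m:(i + 1) * m]) for i in range(len(xs) // m)])

# Fourier expansion: Maj_3(x1, x2, x3) = (x1 + x2 + x3 - x1*x2*x3) / 2.
for a, b, c in product([1, -1], repeat=3):
    assert maj3(a, b, c) == (a + b + c - a * b * c) // 2

maj9 = compose(maj3, maj3, 3)     # Maj_3 composed with itself: 9 inputs
assert maj9([1, 1, -1] * 3) == 1  # every block evaluates to 1
```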
Theorem 4.2. For any d ≤ n and k = log_3 n it holds that ǫ_d(Maj_3^{∘k}) ≤ 4d/n.
First, we show a slightly stronger result for standard subspaces and then extend this result to arbitrary subspaces with a loss of a constant factor. Fix any set S ⊆ [n] of variables. We associate this set with the collection of standard unit vectors corresponding to these variables. Hence in this notation ∅ corresponds to the all-zero vector.
Lemma 4.3. For any standard subspace whose basis consists of singletons from the set S ⊆ [n] it holds that:

∑_{Z∈span(S)} (Maj_3^{∘k})ˆ(Z)² ≤ |S|/n.
Proof. The Fourier expansion of Maj_3 is given as Maj_3(x_1, x_2, x_3) = (1/2)(x_1 + x_2 + x_3 − x_1x_2x_3). For i ∈ {1, 2, 3} let N_i = {(i − 1)n/3 + 1, . . . , in/3} and let S_i = S ∩ N_i. Let α_i be defined as:

α_i = ∑_{Z∈span(S_i)} (Maj_3^{∘k−1})ˆ(Z)².
Then we have:

∑_{Z∈span(S)} (Maj_3^{∘k})ˆ(Z)² = ∑_{i=1}^3 ∑_{Z∈span(S_i)} (Maj_3^{∘k})ˆ(Z)² + ∑_{Z∈span(S)−∪_{i=1}^3 span(S_i)} (Maj_3^{∘k})ˆ(Z)².
For each S_i we have:

∑_{Z∈span(S_i)} (Maj_3^{∘k})ˆ(Z)² = (1/4) ∑_{Z∈span(S_i)} (Maj_3^{∘k−1})ˆ(Z)² = α_i/4.
Moreover, for each Z ∈ span(S) − ∪_{i=1}^3 span(S_i) we have:

(Maj_3^{∘k})ˆ(Z) = −(1/2)(Maj_3^{∘k−1})ˆ(Z_1)(Maj_3^{∘k−1})ˆ(Z_2)(Maj_3^{∘k−1})ˆ(Z_3) if Z ∈ ×_{i=1}^3 (span(S_i) \ ∅), and 0 otherwise.
Thus, we have:

∑_{Z∈(span(S_1)\∅)×(span(S_2)\∅)×(span(S_3)\∅)} (Maj_3^{∘k})ˆ(Z)²
  = (1/4) ∑_{Z_1∈span(S_1)\∅} (Maj_3^{∘k−1})ˆ(Z_1)² · ∑_{Z_2∈span(S_2)\∅} (Maj_3^{∘k−1})ˆ(Z_2)² · ∑_{Z_3∈span(S_3)\∅} (Maj_3^{∘k−1})ˆ(Z_3)²
  = (1/4) α_1α_2α_3,

where the last equality holds since (Maj_3^{∘k−1})ˆ(∅) = 0. Putting this together we have:
∑_{Z∈span(S)} (Maj_3^{∘k})ˆ(Z)² = (1/4)(α_1 + α_2 + α_3 + α_1α_2α_3)
  ≤ (1/4)(α_1 + α_2 + α_3 + (1/3)(α_1 + α_2 + α_3)) = (1/3)(α_1 + α_2 + α_3).
Applying this argument recursively to each α_i for k − 1 times we have:

∑_{Z∈span(S)} (Maj_3^{∘k})ˆ(Z)² ≤ (1/3^k) ∑_{i=1}^{3^k} γ_i,

where γ_i = 1 if i ∈ S and 0 otherwise. Thus, ∑_{Z∈span(S)} (Maj_3^{∘k})ˆ(Z)² ≤ |S|/n.
To extend the argument to arbitrary linear subspaces we show that any such subspace has less
Fourier weight than a collection of three carefully chosen standard subspaces. First we show how
to construct such subspaces in Lemma 4.4.
For a linear subspace L ≤ Fn2 we denote the set of all vectors in L of odd Hamming weight as
O(L) and refer to it as the odd set of L. For two vectors v1 , v2 ∈ Fn2 we say that v1 dominates v2 if the
set of non-zero coordinates of v1 is a (not necessarily proper) subset of the set of non-zero coordinates
of v2 . For two sets of vectors S1 , S2 ⊆ Fn2 we say that S1 dominates S2 (denoted as S1 ≺ S2 ) if
there is a matching M between S1 and S2 of size |S2 | such that for each (v1 ∈ S1 , v2 ∈ S2 ) ∈ M
the vector v1 dominates v2 .
Lemma 4.4 (Standard subspace domination lemma). For any linear subspace L ≤ Fn2 of dimension
d there exist three standard linear subspaces S1 , S2 , S3 ≤ Fn2 such that:
O(L) ≺ O(S1 ) ∪ O(S2 ) ∪ O(S3 ),
and dim(S1 ) = d − 1, dim(S2 ) = d, dim(S3 ) = 2d.
Proof. Let A ∈ F_2^{d×n} be the matrix with rows corresponding to the basis of L. We will assume that A is normalized in a way described below. First, we apply Gaussian elimination to ensure that
14
A = (I, M ) where I is a d × d identity matrix. If all rows of A have even Hamming weight then the
lemma holds trivially since O(L) = ∅. By reordering rows and columns of A we can always assume
that for some k ≥ 1 the first k rows of A have odd Hamming weight and the last d − k have even
Hamming weight. Finally, we add the first column to each of the last d − k rows, which makes all
rows have odd Hamming weight. This results in A of the following form:
$$A = \begin{pmatrix} 1 & 0\cdots0 & 0\cdots0 & a \\ 0 & I & 0 & M_1 \\ \mathbf{1} & 0 & I & M_2 \end{pmatrix},$$
where the middle block consists of the $k-1$ remaining rows of odd Hamming weight and the bottom block consists of the $d-k$ rows which now carry a 1 in the first column.
We use the following notation for submatrices: A[i1, j1; i2, j2] refers to the submatrix of A with rows between i1 and j1 and columns between i2 and j2 inclusive. We denote the first row by v, the submatrix A[2, k; 1, n] by A and the submatrix A[k + 1, d; 1, n] by B. Each $x \in O(L)$ can be represented as $\sum_{i\in S} A_i$ where the set S is of odd size and the sum is over $F_2^n$. We consider the following three cases corresponding to different types of the set S.
Case 1. S ⊆ rows(A) ∪ rows(B). This corresponds to all odd size linear combinations of
the rows of A that don’t include the first row. Clearly, the set of such vectors is dominated by
O(S1 ) where S1 is the standard subspace corresponding to the span of the rows of the submatrix
A[2, d; 2, d].
Case 2. S contains the first row, |S ∩ rows(A)| and |S ∩ rows(B)| are even. All such linear combinations have their first coordinate equal to 1. Hence, they are dominated by the standard subspace corresponding to the span of the rows of the d × d identity matrix, which we refer to as S2.
Case 3. S contains the first row, |S ∩ rows(A)| and |S ∩ rows(B)| are odd. All such linear
combinations have their first coordinate equal 0. This implies that the Hamming weight of the first
d coordinates of such linear combinations is even and hence the other coordinates can’t be all equal
to 0. Consider the submatrix M = A[1, d; d + 1, n] corresponding to the last n − d columns of A.
Since the rank of this matrix is at most d by running Gaussian elimination on M we can construct
a matrix M ′ containing as rows the basis for the row space of M of the following form:
$$M' = \begin{pmatrix} I_t & M_1 \\ 0 & 0 \end{pmatrix},$$
where t = rank(M ). This implies that any non-trivial linear combination of the rows of M contains
1 in one of the first t coordinates. We can reorder the columns of A in such a way that these t
coordinates have indices from d+ 1 to d+ t. Note that now the set of vectors spanned by the rows of
the (d+t)×(d+t) identity matrix Id+t dominates the set of linear combinations we are interested in.
Indeed, each such linear combination has even Hamming weight in the first d coordinates and has
at least one coordinate equal to 1 in the set {d + 1, . . . , d + t}. This gives a vector of odd Hamming
weight that dominates such linear combination. Since this mapping is injective we have a matching.
We denote the standard linear subspace constructed this way as S3 and clearly dim(S3 ) ≤ 2d.
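The normalization used throughout this proof (row-reducing the basis matrix so that, after a column permutation, it takes the form (I, M)) is plain Gaussian elimination over F2. A minimal sketch, with rows encoded as Python ints (bit i = column i); `rref_f2` is a hypothetical helper of ours, not a routine from the paper:

```python
def rref_f2(rows, n):
    """Reduced row echelon form over F2; rows are ints with bit i = column i.
    Returns (pivot columns, reduced rows). Permuting columns so the pivots
    come first puts the matrix in the (I, M) form used in the proof."""
    rows = [r for r in rows if r]
    pivots, reduced = [], []
    for col in range(n):
        cand = [r for r in rows if r >> col & 1]
        if not cand:
            continue
        p = cand[0]
        # clear this column from every other remaining and reduced row
        rows = [r ^ p if r != p and r >> col & 1 else r for r in rows]
        rows.remove(p)
        reduced = [r ^ p if r >> col & 1 else r for r in reduced]
        reduced.append(p)
        pivots.append(col)
    return pivots, reduced

basis = [0b10110, 0b01101, 0b10011]  # toy basis of a 3-dim subspace of F2^5
piv, red = rref_f2(basis, 5)
print(piv)  # → [0, 1, 3]; each pivot column appears in exactly one reduced row
```

Row operations preserve the span, so the reduced rows generate the same subspace L.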
The following proposition shows that the spectrum of $Maj_3^{\circ k}$ is monotone decreasing under inclusion if restricted to odd size sets only:

Proposition 4.5. For any two sets $Z_1 \subseteq Z_2$ of odd size it holds that:
$$\left|\widehat{Maj_3^{\circ k}}(Z_1)\right| \ge \left|\widehat{Maj_3^{\circ k}}(Z_2)\right|.$$
Proof. The proof is by induction on k. Consider the Fourier expansion of $Maj_3(x_1, x_2, x_3) = \frac{1}{2}(x_1 + x_2 + x_3 - x_1x_2x_3)$. The case k = 1 holds since all Fourier coefficients have absolute value 1/2. Since $Maj_3^{\circ k} = Maj_3 \circ (Maj_3^{\circ k-1})$ all Fourier coefficients of $Maj_3^{\circ k}$ result from substituting either a linear or a cubic term in the Fourier expansion by the multilinear expansions of $Maj_3^{\circ k-1}$. This leads to four cases.
Case 1. Z1 and Z2 both arise from linear terms. In this case if Z1 and Z2 aren’t disjoint then
they arise from the same linear term and thus satisfy the statement by the inductive hypothesis.
Case 2. If Z1 arises from a cubic term and Z2 from the linear term then it can’t be the case
that Z1 ⊆ Z2 since Z2 contains some variables not present in Z1 .
Case 3. If Z1 and Z2 both arise from the cubic term then we have $(Z_1 \cap N_i) \subseteq (Z_2 \cap N_i)$ for each i. By the inductive hypothesis we then have $\left|\widehat{Maj_3^{\circ k-1}}(Z_1 \cap N_i)\right| \ge \left|\widehat{Maj_3^{\circ k-1}}(Z_2 \cap N_i)\right|$. Since for $j = 1, 2$ we have $\widehat{Maj_3^{\circ k}}(Z_j) = -\frac{1}{2}\prod_i \widehat{Maj_3^{\circ k-1}}(Z_j \cap N_i)$ the desired inequality follows.
Case 4. If Z1 arises from the linear term and Z2 from the cubic term then w.l.o.g. assume that
Z1 arises from the x1 term. Note that Z1 ⊆ (Z2 ∩ N1 ) since Z1 ∩ (N2 ∪ N3 ) = ∅. By the inductive
hypothesis applied to Z1 and Z2 ∩ N1 the desired inequality holds.
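For small k the proposition can be exercised exhaustively. The check below (k = 2, n = 9) restricts attention to pairs where both sets lie in the Fourier support of $Maj_3^{\circ 2}$, which is the situation the case analysis above reasons about; all helper names are ours:

```python
import math
from itertools import product

n, k = 9, 2
pts = list(product([1, -1], repeat=n))

def maj3k(x, k):
    """Recursive majority Maj3^{ok} on 3**k inputs in {-1, +1}."""
    vals = list(x)
    for _ in range(k):
        vals = [1 if a + b + c > 0 else -1
                for a, b, c in zip(vals[0::3], vals[1::3], vals[2::3])]
    return vals[0]

vals = [maj3k(x, k) for x in pts]
c = {}
for mask in range(2 ** n):  # brute-force Fourier transform
    idx = [i for i in range(n) if mask >> i & 1]
    c[mask] = sum(v * math.prod(x[i] for i in idx)
                  for v, x in zip(vals, pts)) / 2 ** n

odd = [m for m in range(2 ** n) if bin(m).count('1') % 2 == 1]
support = {m for m in odd if abs(c[m]) > 1e-12}
for z2 in support:
    for z1 in support:
        if z1 | z2 == z2:  # z1 is a subset of z2
            assert abs(c[z1]) >= abs(c[z2]) - 1e-12
print(len(support))  # → 76 odd sets with non-zero coefficients
```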
We can now complete the proof of Theorem 4.2.

Proof of Theorem 4.2. By combining Proposition 4.5 and Lemma 4.3 we have that any set T of vectors that is dominated by O(S) for some standard subspace S satisfies $\sum_{Z\in T}\widehat{Maj_3^{\circ k}}(Z)^2 \le \frac{\dim(S)}{n}$. By the standard subspace domination lemma (Lemma 4.4) any subspace $L \le F_2^n$ of dimension d has O(L) dominated by a union of three standard subspaces of dimension 2d, d and d − 1 respectively. Thus, we have $\sum_{Z\in O(L)}\widehat{Maj_3^{\circ k}}(Z)^2 \le \frac{2d}{n} + \frac{d}{n} + \frac{d-1}{n} \le \frac{4d}{n}$.
We have the following corollary of Theorem 4.2 that proves Theorem 1.5.

Corollary 4.6. For any $\epsilon \in [0, 1]$, $\gamma < \frac{1}{2} - \epsilon$ and $k = \log_3 n$ it holds that:
$$D^{lin,U}_{\gamma}(Maj_3^{\circ k}) \ge \epsilon^2 n + 1, \qquad D^{\to,U}_{\frac{1}{n}\cdot\frac{1}{4}\left(\frac{1}{2}-\epsilon\right)}\left((Maj_3^{\circ k})^+\right) \ge \epsilon^2 n + 1.$$

Proof. Fix $d = \epsilon^2 n$. For this choice of d Theorem 4.2 implies that $\epsilon_d(Maj_3^{\circ k}) \le 4\epsilon^2$. The first part follows from Part 2 of Theorem 3.4. The second part is by Corollary 3.11 as by taking $\epsilon = \sqrt{d/n}$ we can set $\theta = 4\epsilon^2 \ge \epsilon_d(Maj_3^{\circ k})$ and hence:
$$\epsilon^2 n + 1 \le D^{\to,U}_{\frac{1-\theta}{4(n-d)}}\left((Maj_3^{\circ k})^+\right) = D^{\to,U}_{\frac{1-4\epsilon^2}{4n(1-\epsilon^2)}}\left((Maj_3^{\circ k})^+\right) \le D^{\to,U}_{\frac{1}{n}\cdot\frac{1}{4}\left(\frac{1}{2}-\epsilon\right)}\left((Maj_3^{\circ k})^+\right).$$
4.2 Address function and Fourier sparsity

Consider the addressing function $Add_n : \{0,1\}^{\log n + n} \to \{0,1\}$ defined as follows:⁸
$$Add_n(x, y_1, \dots, y_n) = y_x, \quad \text{where } x \in \{0,1\}^{\log n},\ y_i \in \{0,1\},$$
i.e. the value of $Add_n$ on an input (x, y) is given by the x-th bit of the vector y where x is treated as a binary representation of an integer between 1 and n. The addressing function has only $n^2$ non-zero Fourier coefficients. In fact, as shown by Sanyal [San15] the Fourier dimension, and hence by Fact A.1 also the deterministic sketch complexity, of any Boolean function with Fourier sparsity s is $O(\sqrt{s}\log s)$.

Below, using the addressing function, we show that this relationship is tight (up to a logarithmic factor) even if randomization is allowed, i.e. even for a function with Fourier sparsity s an $F_2$-sketch of size $\Omega(\sqrt{s})$ might be required.
Theorem 4.7. For the addressing function $Add_n$ and values $1 \le d \le n$ and $\epsilon = d/n$ it holds that:
$$D^{lin,U}_{\frac{1-\sqrt{\epsilon}}{2}}(Add_n) \ge d, \qquad D^{\to,U}_{\Theta\left(\frac{1-\epsilon}{n}\right)}(Add_n^+) \ge d.$$
Proof. If we apply the standard Fourier notation switch where we replace 0 with 1 and 1 with −1 in the domain and the range of the function then the addressing function $Add_n(x, y)$ can be expressed as the following multilinear polynomial:
$$Add_n(x, y) = \sum_{i\in\{0,1\}^{\log n}} y_i \prod_{j\colon i_j = 1}\frac{1-x_j}{2}\prod_{j\colon i_j = 0}\frac{1+x_j}{2},$$
which makes it clear that the only non-zero Fourier coefficients correspond to the sets that contain a single variable from the addressee block and an arbitrary subset of variables from the address block. This expansion also shows that the absolute value of each Fourier coefficient is equal to $\frac{1}{n}$.
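The claimed spectrum can be confirmed by brute force for n = 4 (so log n = 2, six input bits). The encoding of the address bits below is one arbitrary choice of ours; the support size and coefficient magnitudes do not depend on it:

```python
import math
from itertools import product

n, logn = 4, 2
N = logn + n

def addr(p):
    """Addressing function in the ±1 convention; p = (x_1, x_2, y_1, ..., y_4)."""
    x, y = p[:logn], p[logn:]
    i = sum(((1 - b) // 2) << j for j, b in enumerate(x))  # -1 encodes bit 1
    return y[i]

pts = list(product([1, -1], repeat=N))
vals = [addr(p) for p in pts]
nonzero = []
for mask in range(2 ** N):  # brute-force Fourier transform over 6 variables
    idx = [i for i in range(N) if mask >> i & 1]
    co = sum(v * math.prod(p[i] for i in idx)
             for v, p in zip(vals, pts)) / 2 ** N
    if abs(co) > 1e-12:
        nonzero.append((mask, co))
print(len(nonzero))  # → 16, i.e. n^2 non-zero coefficients
assert all(abs(abs(co) - 1 / n) < 1e-12 for _, co in nonzero)
# every non-zero set uses exactly one variable from the addressee block
assert all(bin(mask >> logn).count('1') == 1 for mask, _ in nonzero)
```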
Fix any d-dimensional subspace $A^d$ and consider the matrix $M \in F_2^{d\times(\log n + n)}$ composed of the basis vectors as rows. We add to M extra log n rows which contain an identity matrix in the first log n coordinates and zeros everywhere else. This gives us a new matrix $M' \in F_2^{(d+\log n)\times(\log n + n)}$. Applying Gaussian elimination to M′ we can assume that it is of the following form:
$$M' = \begin{pmatrix} I_{\log n} & 0 & 0 \\ 0 & I_{d'} & M \\ 0 & 0 & 0 \end{pmatrix},$$
where $d' \le d$. Thus, the total number of non-zero Fourier coefficients spanned by the rows of M′ equals $nd'$. Hence, the total sum of squared Fourier coefficients in $A^d$ is at most $\frac{d'}{n} \le \frac{d}{n}$, i.e. $\epsilon_d(Add_n) \le \frac{d}{n}$. By Part 2 of Theorem 3.4 and Corollary 3.11 the statement of the theorem follows.
⁸ In this section it will be more convenient to represent both domain and range of the function using {0, 1} rather than F2.
4.3 Symmetric functions

A function $f : F_2^n \to F_2$ is symmetric if it can be expressed as $g(\|x\|_0)$ for some function $g : [0, n] \to F_2$. We give the following lower bound for symmetric functions:

Theorem 4.8 (Lower bound for symmetric functions). For any symmetric function $f : F_2^n \to F_2$ that isn't $(1-\epsilon)$-concentrated on $\{\emptyset, \{1, \dots, n\}\}$:
$$D^{lin,U}_{\epsilon/8}(f) \ge \frac{n}{2e}, \qquad D^{\to,U}_{\Theta\left(\frac{1-\epsilon}{n}\right)}(f^+) \ge \frac{n}{2e}.$$
Proof. First we prove an auxiliary lemma. Let $W_k$ be the set of all vectors in $F_2^n$ of Hamming weight k.

Lemma 4.9. For any $d \in [n/2]$, $k \in [n-1]$ and any d-dimensional subspace $A^d \le F_2^n$:
$$\frac{|W_k \cap A^d|}{|W_k|} \le \left(\frac{ed}{n}\right)^{\min(k,\, n-k,\, d)} \le \frac{ed}{n}.$$
Proof. Fix any basis in $A^d$ and consider the matrix $M \in F_2^{d\times n}$ composed of the basis vectors as rows. W.l.o.g. we can assume that this matrix is diagonalized and is in the standard form $(I_d, M')$ where $I_d$ is a d × d identity matrix and M′ is a d × (n − d) matrix. Clearly, any linear combination of more than k rows of M has Hamming weight greater than k just from the contribution of the first d coordinates. Thus, we have $|W_k \cap A^d| \le \sum_{i=0}^{k}\binom{d}{i}$.

For any $k \le d$ it is a standard fact about binomials that $\sum_{i=0}^{k}\binom{d}{i} \le \left(\frac{ed}{k}\right)^k$. On the other hand, we have $|W_k| = \binom{n}{k} \ge (n/k)^k$. Thus, we have $\frac{|W_k \cap A^d|}{|W_k|} \le \left(\frac{ed}{n}\right)^k$ and hence for $1 \le k \le d$ the desired inequality holds.

If $d < k$ then consider two cases. Since $d \le n/2$ the case $n - d \le k \le n - 1$ is symmetric to $1 \le k \le d$. If $d < k < n - d$ then we have $|W_k| > |W_d| \ge (n/d)^d$ and $|W_k \cap A^d| \le 2^d$ so that the desired inequality follows.
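A quick numeric check of the weaker (rightmost) bound of Lemma 4.9 for one concrete 3-dimensional subspace of $F_2^{10}$, with vectors encoded as bitmasks; the basis below is an arbitrary example of ours:

```python
import math

n, d = 10, 3
basis = [0b1110000000, 0b0001111000, 0b1010101010]  # arbitrary example basis
span = {0}
for b in basis:  # enumerate all 2^d vectors of the subspace
    span |= {s ^ b for s in span}
assert len(span) == 2 ** d  # the basis is linearly independent

for k in range(1, n):
    ratio = sum(1 for v in span if bin(v).count('1') == k) / math.comb(n, k)
    assert ratio <= math.e * d / n  # the ed/n bound of Lemma 4.9
```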
Any symmetric function has its spectrum distributed uniformly over Fourier coefficients of any fixed weight. Let $w_i = \sum_{S\in W_i}\hat f^2(S)$. By the assumption of the theorem we have $\sum_{i=1}^{n-1} w_i \ge \epsilon$. Thus, by Lemma 4.9 any linear subspace $A^d$ of dimension at most $d \le n/2$ satisfies:
$$\sum_{S\in A^d}\hat f^2(S) \le \hat f^2(\emptyset) + \hat f^2(\{1,\dots,n\}) + \sum_{i=1}^{n-1} w_i\,\frac{|W_i \cap A^d|}{|W_i|} \le \hat f^2(\emptyset) + \hat f^2(\{1,\dots,n\}) + \frac{ed}{n}\sum_{i=1}^{n-1} w_i \le (1-\epsilon) + \epsilon\,\frac{ed}{n}.$$
Thus, f isn’t 1 − ǫ(1 − ed
n )-concentrated on any d-dimensional linear subspace, i.e. ǫd (f ) <
ed
1 − ǫ(1 − n ). By Part 2 of Theorem 3.4 this implies that f doesn’t have randomized sketches of
dimension at most d which err with probability less than:
q
1 − ǫ(1 − ed
1
ǫ
ed
ǫ
n)
−
≥
1−
≥
2
2
4
n
8
18
where the last inequality follows by the assumption that $d \le \frac{n}{2e}$. The communication complexity lower bound follows by Corollary 3.11 by taking $\theta = \epsilon/8$.

5 Turnstile streaming algorithms over F2
Let $e_i$ be the standard unit vector in $F_2^n$. In the turnstile streaming model the input $x \in F_2^n$ is represented as a stream $\sigma = (\sigma_1, \sigma_2, \dots)$ where $\sigma_i \in \{e_1, \dots, e_n\}$. For a stream σ the resulting vector x corresponds to its frequency vector $\mathrm{freq}\,\sigma \equiv \sum_i \sigma_i$. Concatenation of two streams σ and τ is denoted as $\sigma \circ \tau$.
5.1
Random streams
We consider the following two natural models of random streams over F2 :
Model 1. In the first model we start with x ∈ Fn2 that is drawn from the uniform distribution
over Fn2 and then apply a uniformly random update y ∼ U (Fn2 ) obtaining x + y. In the streaming
language this corresponds to a stream σ = σ1 ◦ σ2 where freq σ1 ∼ U (Fn2 ) and freq σ2 ∼ U (Fn2 ). A
specific example of such stream would be one where for both σ1 and σ2 we flip an unbiased coin to
decide whether or not to include a vector ei in the stream for each value of i. The expected length
of the stream in this case is n.
Model 2. In the second model we consider a stream σ which consists of uniformly random
updates. Let σi = er(i) where r(i) ∼ U ([n]). This corresponds to each update being a flip in a
coordinate of x chosen uniformly at random. This model is equivalent to the previous model but
requires longer streams to mix. Using a coupon collector's argument such streams of length Θ(n log n)
can be divided into two substreams σ1 and σ2 such that with high probability both freq σ1 and
freq σ2 are uniformly distributed over Fn2 and σ = σ1 ◦ σ2 .
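Both models rely on the fact that frequency vectors add over F2 under concatenation, i.e. freq(σ1 ∘ σ2) = freq σ1 + freq σ2. A minimal simulation of Model 2 style updates (coordinates as bitmask indices; all names are ours):

```python
import random

def freq(stream):
    """Frequency vector over F2 of a stream of unit updates, as a bitmask."""
    v = 0
    for i in stream:
        v ^= 1 << i  # each update flips one coordinate
    return v

rng = random.Random(0)
n = 16
sigma1 = [rng.randrange(n) for _ in range(100)]  # Model 2 style updates
sigma2 = [rng.randrange(n) for _ in range(100)]
# concatenation adds frequency vectors over F2
assert freq(sigma1 + sigma2) == freq(sigma1) ^ freq(sigma2)
```

This partition of a stream into two substreams with independent frequency vectors is exactly what the reduction in Theorem 5.1 below exploits.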
Theorem 5.1. Let $f : F_2^n \to F_2$ be an arbitrary function. In the two random streaming models for generating σ described above any algorithm that computes $f(\mathrm{freq}\,\sigma)$ with probability at least $1 - \Theta(1/n)$ at the end of the stream has to use space that is at least $D^{lin,U}_{1/3}(f)$.
Proof. The proof follows directly from Theorem 1.4 as in both models we can partition the stream into σ1 and σ2 such that freq σ1 and freq σ2 are both distributed uniformly over $F_2^n$. We treat these two frequency vectors as the inputs of Alice and Bob in the communication game. Since communication $D^{\to,U}_{\Theta(1/n)}(f^+) \ge D^{lin,U}_{1/3}(f)$ is required, no streaming algorithm with less space exists as otherwise Alice would transfer its state to Bob with less communication.
5.2 Adversarial streams

We now show that any randomized turnstile streaming algorithm for computing $f : F_2^n \to F_2$ with error probability δ under adversarial sequences of updates has to use space that is at least $R^{lin}_{6\delta}(f) - O(\log n + \log(1/\delta))$. The proof is based on the recent line of work that shows that this relationship holds for real-valued sketches [Gan08, LNW14, AHLW16]. The proof framework developed by [Gan08, LNW14, AHLW16] for real-valued sketches consists of two steps. First, a turnstile streaming algorithm is converted into a path-independent stream automaton (Definition 5.3). Second, using the theory of modules and their representations it is shown that such automata can always be represented as linear sketches. We observe that the first step of this framework can be left unchanged over F2. However, as we show, the second step can be significantly simplified as path-independent automata over F2 can be directly seen as linear sketches without using module theory. Furthermore, since we are working over F2 we also avoid the O(log m) factor loss in the reduction between path-independent automata and linear sketches that is present in [Gan08].
We use the following abstraction of a stream automaton from [Gan08, LNW14, AHLW16]
adapted to our context to represent general turnstile streaming algorithms over F2 .
Definition 5.2 (Deterministic Stream Automaton). A deterministic stream automaton A is a
Turing machine that uses two tapes, a unidirectional read-only input tape and a bidirectional work
tape. The input tape contains the input stream σ. After processing the input, the automaton writes
an output, denoted as φA (σ), on the work tape. A configuration (or state) of A is determined by
the state of its finite control, head position, and contents of the work tape. The computation of
A can be described by a transition function ⊕A : C × F2 → C, where C is the set of all possible
configurations. For a configuration c ∈ C and a stream σ, we denote by c ⊕A σ the configuration of
A after processing σ starting from the initial configuration c. The set of all configurations of A that
are reachable via processing some input stream σ is denoted as C(A). The space of A is defined as
S(A) = log |C(A)|.
We say that a deterministic stream automaton computes a function $f : F_2^n \to F_2$ over a distribution Π if $\Pr_{\sigma\sim\Pi}[\phi_A(\sigma) = f(\mathrm{freq}\,\sigma)] \ge 1 - \delta$.
Definition 5.3 (Path-independent automaton). An automaton A is said to be path-independent
if for any configuration c and any input stream σ, c ⊕A σ depends only on freq σ and c.
Definition 5.4 (Randomized Stream Automaton). A randomized stream automaton A is a deterministic automaton with an additional tape for the random bits. This random tape is initialized
with a random bit string R before the automaton is executed. During the execution of the automaton
this bit string is used in a bidirectional read-only manner while the rest of the execution is the same
as in the deterministic case. A randomized automaton A is said to be path-independent if for each
possible fixing of its randomness R the deterministic automaton AR is path-independent. The space
complexity of A is defined as S(A) = maxR (|R| + S(AR )).
Theorems 5 and 9 of [LNW14] combined with the observation in Appendix A of [AHLW16] that guarantees path independence yield the following:

Theorem 5.5 (Theorems 5 and 9 in [LNW14] + [AHLW16]). Suppose that a randomized stream automaton A computes f on any stream with probability at least 1 − δ. For an arbitrary distribution Π over streams there exists a deterministic⁹ path-independent stream automaton B that computes f with probability 1 − 6δ over Π such that S(B) ≤ S(A) + O(log n + log(1/δ)).
The rest of the argument below is based on the work of Ganguly [Gan08] adapted to our needs. Since we are working over a finite field we also avoid the O(log m) factor loss in the reduction between path-independent automata and linear sketches that is present in Ganguly's work.

Let $A_n$ be a path-independent stream automaton over F2 and let ⊕ abbreviate $\oplus_{A_n}$. Define the function $* : F_2^n \times C(A_n) \to C(A_n)$ as $x * a = a \oplus \sigma$, where $\mathrm{freq}\,\sigma = x$. Let o be the initial configuration of $A_n$. The kernel $M_{A_n}$ of $A_n$ is defined as $M_{A_n} = \{x \in F_2^n : x * o = 0^n * o\}$.
⁹ We note that [LNW14] construct B as a randomized automaton in their Theorem 9 but it can always be made deterministic by fixing the randomness that achieves the smallest error.
Proposition 5.6. The kernel MAn of a path-independent automaton An is a linear subspace of Fn2 .
Proof. For x, y ∈ MAn by path independence (x + y) ∗ o = x ∗ (y ∗ o) = 0n ∗ o so x + y ∈ MAn .
Since MAn ≤ Fn2 the kernel partitions Fn2 into cosets of the form x + MAn . Next we show that
there is a one to one mapping between these cosets and the states of An .
Proposition 5.7. For x, y ∈ Fn2 and a path independent automaton An with a kernel MAn it holds
that x ∗ o = y ∗ o if and only if x and y lie in the same coset of MAn .
Proof. By path independence $x * o = y * o$ iff $x * (x * o) = x * (y * o)$ or equivalently $0^n * o = (x + y) * o$. The latter condition holds iff $x + y \in M_{A_n}$ which is equivalent to x and y lying in the same coset of $M_{A_n}$.
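A toy illustration of Propositions 5.6 and 5.7: for an automaton whose reachable state is the F2 sketch Ax (which is path-independent), the kernel is the null space of A, and two inputs reach the same state exactly when they lie in the same coset. The matrix below is a hypothetical example of ours, not a construction from the paper:

```python
def sketch(A, x):
    """State reached on input x by an automaton computing the F2 sketch Ax;
    rows of A and x are bitmasks, output is the tuple of parities."""
    return tuple(bin(row & x).count('1') % 2 for row in A)

A = [0b10110, 0b01011]  # a 2-row sketch matrix over F2^5
n = 5
kernel = {x for x in range(2 ** n) if sketch(A, x) == sketch(A, 0)}
# Proposition 5.6: the kernel is closed under addition over F2
assert all((x ^ y) in kernel for x in kernel for y in kernel)
# Proposition 5.7: states coincide exactly on cosets of the kernel
for x in range(2 ** n):
    for y in range(2 ** n):
        assert (sketch(A, x) == sketch(A, y)) == ((x ^ y) in kernel)
print(len(kernel))  # → 8, a subspace of dimension n - rank(A) = 3
```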
The same argument implies that the transition function of a path-independent automaton has to be linear since (x + y) ∗ o = x ∗ (y ∗ o). Combining these facts together we conclude that a path-independent automaton has at least as many states as the best deterministic F2-sketch for f that succeeds with probability at least 1 − 6δ over Π (and hence the best randomized sketch as well). Putting things together we get:
Theorem 5.8. Any randomized streaming algorithm that computes $f : F_2^n \to F_2$ under arbitrary updates over F2 with error probability δ has space complexity at least $R^{lin}_{6\delta}(f) - O(\log n + \log(1/\delta))$.
6 Linear threshold functions

In this section it will be convenient to represent the domain as $\{0,1\}^n$ rather than $F_2^n$. We define the sign function $\mathrm{sign}(x)$ to be 1 if $x \ge 0$ and 0 otherwise.

Definition 6.1. A monotone linear threshold function (LTF) $f : \{0,1\}^n \to \{0,1\}$ is defined by a collection of weights $w_1 \ge w_2 \ge \dots \ge w_n \ge 0$ as follows:
$$f(x_1, \dots, x_n) = \mathrm{sign}\left(\sum_{i=1}^{n} w_i x_i - \theta\right),$$
where θ is called the threshold of the LTF. The margin of the LTF is defined as:
$$m = \min_{x\in\{0,1\}^n}\left|\sum_{i=1}^{n} w_i x_i - \theta\right|.$$
P
W.l.o.g we can assume that LTFs normalized so that ni=1 wi = 1. The monotonicity in the
above definition is also without loss of generality as for negative weights we can achieve monotonicity
by complementing individual bits.
θ 2
).
Theorem 6.2. [MO09] There is a randomized linear sketch for LTFs of size O( m
Below we prove the following conjecture.

Conjecture 6.3 ([MO09]). There is a randomized linear sketch for LTFs of size $O\left(\frac{\theta}{m}\log\frac{\theta}{m}\right)$.
In fact, all weights which are below the margin can be completely ignored when evaluating the LTF.

Lemma 6.4. Let f be a monotone LTF with weights $w_1 \ge w_2 \ge \dots \ge w_n$, threshold θ and margin m. Let $f^{\ge 2m}$ be an LTF with the same threshold and margin but restricted to the weights $w_1 \ge w_2 \ge \dots \ge w_t$, where t is the largest integer such that $w_t \ge 2m$. Then $f = f^{\ge 2m}$.

Proof. For the sake of contradiction assume there exists an input $(x_1, \dots, x_n)$ such that $f(x_1, \dots, x_n) = 1$ while $f^{\ge 2m}(x_1, \dots, x_t) = 0$. Fix the largest $t^* \ge t$ such that $\mathrm{sign}\left(\sum_{i=1}^{t^*} w_i x_i - \theta\right) = 0$ while $\mathrm{sign}\left(\sum_{i=1}^{t^*+1} w_i x_i - \theta\right) = 1$. Clearly $w_{t^*+1} \ge 2m$, a contradiction.
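Lemma 6.4 can be exercised on a small integer-weight example (the weights and threshold below are ours): the margin turns out to be 1, so only the single weight below 2m = 2 is dropped, and the truncated LTF agrees with the original on every input.

```python
from itertools import product

def ltf(w, theta, x):
    """Monotone LTF with sign(t) = 1 if t >= 0 else 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) - theta >= 0 else 0

w = [40, 25, 15, 10, 6, 4, 1]   # example: integer weights, sorted
theta = 33
m = min(abs(sum(wi * xi for wi, xi in zip(w, x)) - theta)
        for x in product([0, 1], repeat=len(w)))
print(m)  # → 1, the margin of this LTF
w_kept = [wi for wi in w if wi >= 2 * m]  # drop weights below 2m
assert w_kept == [40, 25, 15, 10, 6, 4]
# Lemma 6.4: the truncated LTF agrees with f on every input
for x in product([0, 1], repeat=len(w)):
    assert ltf(w, theta, x) == ltf(w_kept, theta, x[:len(w_kept)])
```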
The above lemma implies that after dropping the weights which are below 2m together with the corresponding variables and reducing the value of n accordingly we can also make the margin equal to $w_n/2$. This observation also gives the following straightforward corollary that proves Conjecture 6.3 about LTFs (up to a logarithmic factor in n).

Corollary 6.5. There is a randomized linear sketch for LTFs of size $O\left(\frac{\theta}{m}\log n\right)$.
Proof. We will give a bound on $|\{x : f(x) = 0\}|$. If $f(x) = 0$ then $\sum_{i=1}^{n} w_i x_i < \theta$. Since all weights are at least $w_n$ the total number of such inputs is at most $\sum_{i < \theta/w_n}\binom{n}{i} = \sum_{i < \theta/2m}\binom{n}{i} \le (n+1)^{\theta/2m}$. Thus applying the random F2-sketching bound (Fact B.7) we get a sketch of size $O\left(\frac{\theta}{m}\log n\right)$ as desired.
Combined with Theorem 6.2 the above corollary proves Conjecture 6.3 except in the case when $\log^\beta(\theta/m) < \theta/m < n^\alpha$ for all $\alpha > 0$ and $\beta < \infty$. This matches the result of [LZ13].

A full proof of Conjecture 6.3 can be obtained by using hashing to reduce the size of the domain from n down to $\mathrm{poly}(\theta/m)$.

Theorem 6.6. There is a randomized linear sketch for LTFs of size $O\left(\frac{\theta}{m}\log\frac{\theta}{m}\right)$ that succeeds with any constant probability.
Proof. It suffices to only consider the case when $\theta/m > 100$ since otherwise the bound follows trivially from Theorem 6.2. Consider computing a single linear sketch $\sum_{i\in S} x_i$ where S is a random vector in $F_2^n$ with each coordinate set to 1 independently with probability $10m^2/\theta^2$. This sketch lets us distinguish the two cases $\|x\|_0 > \theta^2/m^2$ vs. $\|x\|_0 \le \theta/m$ with constant probability. Indeed:

Case 1. $\|x\|_0 > \theta^2/m^2$. The probability that the set S contains a non-zero coordinate of x in this case is at least:
$$1 - \left(1 - \frac{10m^2}{\theta^2}\right)^{\theta^2/m^2} \ge 1 - (1/e)^{10} > 0.9.$$
Conditioned on this event the parity evaluates to 1 with probability at least 1/2. Hence, overall in this case the parity evaluates to 1 with probability at least 0.4.

Case 2. $\|x\|_0 \le \theta/m$. In this case the probability that S contains a non-zero coordinate and hence the parity can evaluate to 1 is at most:
$$1 - \left(1 - \frac{10m^2}{\theta^2}\right)^{\theta/m} < 1 - (1/2e)^{1/10} < 0.2.$$
Thus, a constant number of such sketches allows us to distinguish the two cases above with constant probability. If the test above declares that $\|x\|_0 > \theta^2/m^2$ then we output 1 and terminate. Note that conditioned on the test above being correct it never declares that $\|x\|_0 > \theta^2/m^2$ while $\|x\|_0 \le \theta/m$. Indeed in all such cases, i.e. when $\|x\|_0 > \theta/m$, we can output 1 since if $\|x\|_0 > \theta/m$ then $\sum_{i=1}^{n} w_i x_i \ge \|x\|_0 w_n \ge \frac{\theta w_n}{m} = 2\theta$, where we used the fact that by Lemma 6.4 we can set $m = w_n/2$.

For the rest of the proof we thus condition on the event that $\|x\|_0 \le \theta^2/m^2$. By hashing the domain [n] randomly into $O(\theta^4/m^4)$ buckets we can ensure that no non-zero entries of x collide with any constant probability that is arbitrarily close to 1. This reduces the input length from n down to $O(\theta^4/m^4)$ and we can apply Corollary 6.5 to complete the proof.¹⁰
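The two cases can be evaluated exactly rather than simulated: $|S \cap \mathrm{supp}(x)|$ is distributed as $\mathrm{Binomial}(k, p)$, and $\Pr[\mathrm{Binomial}(k, p)\ \text{is odd}] = (1 - (1-2p)^k)/2$, a standard identity. Computing both cases for $\theta/m = 100$ (our choice of parameter) reproduces the separation used above:

```python
theta_over_m = 100            # the proof assumes theta/m > 100; we take 100 as an example
p = 10 / theta_over_m ** 2    # coordinate inclusion probability 10 m^2 / theta^2

def pr_parity_one(k):
    """Pr that the random parity is 1 on an input with k non-zero coordinates:
    |S ∩ supp(x)| ~ Binomial(k, p), and Pr[Binomial(k, p) odd] = (1-(1-2p)^k)/2."""
    return (1 - (1 - 2 * p) ** k) / 2

heavy = pr_parity_one(theta_over_m ** 2 + 1)  # case ||x||_0 > theta^2/m^2
light = pr_parity_one(theta_over_m)           # case ||x||_0 <= theta/m
print(heavy, light)
assert heavy >= 0.4 and light <= 0.2          # the separation used in the proof
```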
This result is also tight as follows from the result of Dasgupta, Kumar and Sivakumar [DKS12] discussed in the introduction. Consider the Hamming weight function $Ham_{\ge d}(x) \equiv \|x\|_0 \ge d$. This function satisfies $\theta = d/n$ and $m = \frac{1}{2n}$. A straightforward reduction from small set disjointness shows that the one-way communication complexity of the XOR-function $Ham_{\ge d}(x \oplus y)$ is $\Omega(d \log d)$. This shows that the bound in Theorem 6.6 can't be improved without any further assumptions about the LTF.
7 Towards the proof of Conjecture 1.3

We call a function $f : F_2^n \to \{+1, -1\}$ non-linear if for all $S \in F_2^n$ there exists $x \in F_2^n$ such that $f(x) \ne \chi_S(x)$. Furthermore, we say that f is ǫ-far from being linear if:
$$\max_{S\in F_2^n}\ \Pr_{x\sim U(F_2^n)}[\chi_S(x) = f(x)] = 1 - \epsilon.$$
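The distance to linearity from the definition above can be computed by brute force for small functions, working over {0, 1} with parities in place of the ±1 characters; e.g. Maj3 is exactly 1/4-far from linear (its best linear approximation is a single variable, agreeing on 6 of 8 inputs). The helper below is ours:

```python
from itertools import product

def dist_from_linear(f, n):
    """1 - max_S Pr_x[chi_S(x) = f(x)], computed by brute force over {0,1}^n."""
    pts = list(product([0, 1], repeat=n))
    best = 0
    for S in range(2 ** n):
        agree = sum(1 for x in pts
                    if f(x) == sum(x[i] for i in range(n) if S >> i & 1) % 2)
        best = max(best, agree / len(pts))
    return 1 - best

maj3 = lambda x: 1 if sum(x) >= 2 else 0
print(dist_from_linear(maj3, 3))  # → 0.25: Maj3 is exactly 1/4-far from linear
```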
The following theorem is our first step towards resolving Conjecture 1.3. Since non-linear functions don't admit 1-bit linear sketches we show that the same is also true for the corresponding communication complexity problem, namely no 1-bit communication protocol for such functions can succeed with a small constant error probability.

Theorem 7.1. For any non-linear function f that is at most 1/10-far from linear, $D^{\to}_{1/200}(f^+) > 1$.
Proof. Let $S = \arg\max_T \Pr_{x\in F_2^n}[\chi_T(x) = f(x)]$. Pick $z \in F_2^n$ such that $f(z) \ne \chi_S(z)$. Let the distribution over the inputs (x, y) be as follows: $y \sim U(F_2^n)$ and $x \sim D_y$ where $D_y$ is defined as:
$$D_y = \begin{cases} y + z & \text{with probability } 1/2, \\ U(F_2^n) & \text{with probability } 1/2.\end{cases}$$
Fix any deterministic Boolean function M(x) that is used by Alice to send a one-bit message based on her input. For a fixed Bob's input y he outputs $g_y(M(x))$ for some function $g_y$ that can depend on y. Thus, the error that Bob makes at predicting f for a fixed y is at least:
$$\frac{1 - \mathbb{E}_{x\sim D_y}[g_y(M(x))\,f(x+y)]}{2}.$$
The key observation is that since Bob only receives a single bit message there are only four possible functions $g_y$ to consider for each y: the constants −1/1 and ±M(x).

¹⁰ We note that random hashing doesn't interfere with the linearity of the sketch as it corresponds to treating collections of variables that have the same hash as a single variable representing their sum over F2. Assuming no collisions this sum evaluates to 1 if and only if a variable of interest is present in the collection.
Bounding error for constant estimators. For both constant functions we introduce the notation $B_y^c = \left|\mathbb{E}_{x\sim D_y}[g_y(M(x))\,f(x+y)]\right|$ and have:
$$B_y^c = \left|\mathbb{E}_{x\sim D_y}[f(x+y)]\right| = \left|\frac{1}{2}f(z) + \frac{1}{2}\,\mathbb{E}_{w\sim U(F_2^n)}[f(w)]\right|.$$
If $\chi_S$ is not constant then $\left|\mathbb{E}_{w\sim U(F_2^n)}[f(w)]\right| \le 2\epsilon$ and we have:
$$\left|\frac{1}{2}f(z) + \frac{1}{2}\,\mathbb{E}_{w}[f(w)]\right| \le \frac{1}{2}|f(z)| + \frac{1}{2}\left|\mathbb{E}_{w}[f(w)]\right| \le 1/2 + \epsilon.$$
If $\chi_S$ is a constant then w.l.o.g. $\chi_S = 1$ and $f(z) = -1$. Also $\mathbb{E}_{w}[f(w)] \ge 1 - 2\epsilon$. Hence we have:
$$\left|\frac{1}{2}f(z) + \frac{1}{2}\,\mathbb{E}_{w}[f(w)]\right| = \left|\frac{1}{2}\left(-1 + \mathbb{E}_{w}[f(w)]\right)\right| \le \epsilon.$$
Since $\epsilon \le 1/10$, in both cases $B_y^c \le \frac{1}{2} + \epsilon$, which is the bound we will use below.
Bounding error for message-based estimators. For the functions ±M(x) we need to bound $\left|\mathbb{E}_{x\sim D_y}[M(x)f(x+y)]\right|$. We denote this expression as $B_y^M$. Proposition 7.2 shows that $\mathbb{E}_y[B_y^M] \le \frac{\sqrt{2}}{2}(1+\epsilon)$.

Proposition 7.2. $\mathbb{E}_{y\sim U(F_2^n)}\left|\mathbb{E}_{x\sim D_y}[M(x)f(x+y)]\right| \le \frac{\sqrt{2}}{2}(1+\epsilon)$.

Proof. We have:
$$\mathbb{E}_y\left|\mathbb{E}_{x\sim D_y}[M(x)f(x+y)]\right| = \mathbb{E}_y\left|\frac{1}{2}M(y+z)f(z) + \frac{1}{2}(M*f)(y)\right| = \frac{1}{2}\,\mathbb{E}_y\left[|M(y+z)f(z) + (M*f)(y)|\right]$$
$$\le \frac{1}{2}\left[\mathbb{E}_y\left((M(y+z)f(z) + (M*f)(y))^2\right)\right]^{1/2} = \frac{1}{2}\left[\mathbb{E}_y\left((M(y+z)f(z))^2 + ((M*f)(y))^2 + 2M(y+z)f(z)(M*f)(y)\right)\right]^{1/2}$$
$$= \frac{1}{2}\left[\mathbb{E}_y(M(y+z)f(z))^2 + \mathbb{E}_y((M*f)(y))^2 + 2\,\mathbb{E}_y[M(y+z)f(z)(M*f)(y)]\right]^{1/2}.$$
We have $(M(y+z)f(z))^2 = 1$ and also by Parseval, the expression for the Fourier spectrum of a convolution and Cauchy–Schwarz:
$$\mathbb{E}_y[((M*f)(y))^2] = \sum_{S\in F_2^n}\widehat{M*f}(S)^2 = \sum_{S\in F_2^n}\widehat{M}(S)^2\hat f(S)^2 \le \|M\|_2^2\,\|f\|_2^2 = 1.$$
Thus, it suffices to give a bound on $\mathbb{E}_y[M(y+z)f(z)(M*f)(y)]$. First we give a bound on $(M*f)(y)$:
$$(M*f)(y) = \mathbb{E}_x[M(x)f(x+y)] \le \mathbb{E}_x[M(x)\chi_S(x+y)] + 2\epsilon.$$
Plugging this in we have:
$$\mathbb{E}_y[M(y+z)f(z)(M*f)(y)] = -\chi_S(z)\,\mathbb{E}_y[M(y+z)(M*f)(y)] \le -\chi_S(z)\,\mathbb{E}_y[M(y+z)(M*\chi_S)(y)] + 2\epsilon$$
$$= -\chi_S(z)(M*(M*\chi_S))(z) + 2\epsilon = -\chi_S(z)^2\widehat{M}(S)^2 + 2\epsilon \le 2\epsilon,$$
where we used the fact that the Fourier spectrum of $M*(M*\chi_S)$ is supported on S only and $\widehat{M*(M*\chi_S)}(S) = \widehat{M}^2(S)$ and thus $(M*(M*\chi_S))(z) = \widehat{M}^2(S)\chi_S(z)$.

Thus, overall, we have:
$$\mathbb{E}_y\left|\mathbb{E}_{x\sim D_y}[M(x)f(x+y)]\right| \le \frac{1}{2}\sqrt{2 + 4\epsilon} \le \frac{\sqrt{2}}{2}(1+\epsilon).$$
Putting things together. We have that the error that Bob makes is at least:
$$\mathbb{E}_y\left[\frac{1 - \max(B_y^c, B_y^M)}{2}\right] = \frac{1 - \mathbb{E}_y[\max(B_y^c, B_y^M)]}{2}.$$
Below we bound $\mathbb{E}_y[\max(B_y^c, B_y^M)]$ from above by 99/100 which shows that the error is at least 1/200.
$$\mathbb{E}_y[\max(B_y^c, B_y^M)] = \Pr[B_y^M \ge 1/2+\epsilon]\,\mathbb{E}[B_y^M \mid B_y^M \ge 1/2+\epsilon] + \Pr[B_y^M < 1/2+\epsilon]\left(\frac{1}{2}+\epsilon\right)$$
$$= \mathbb{E}_y[B_y^M] + \Pr[B_y^M < 1/2+\epsilon]\left(\frac{1}{2}+\epsilon - \mathbb{E}[B_y^M \mid B_y^M < 1/2+\epsilon]\right).$$
Let $\delta = \Pr[B_y^M < 1/2+\epsilon]$. Then the first of the expressions above gives the following bound:
$$\mathbb{E}_y[\max(B_y^c, B_y^M)] \le (1-\delta) + \delta\left(\frac{1}{2}+\epsilon\right) = 1 - \frac{\delta}{2} + \epsilon\delta \le 1 - \frac{\delta}{2} + \epsilon.$$
The second expression gives the following bound:
$$\mathbb{E}_y[\max(B_y^c, B_y^M)] \le \frac{\sqrt{2}}{2}(1+\epsilon) + \delta\left(\frac{1}{2}+\epsilon\right) \le \frac{\sqrt{2}}{2} + \frac{\delta}{2} + \frac{\sqrt{2}}{2}\epsilon + \epsilon.$$
These two bounds are equal for $\delta = 1 - \frac{\sqrt{2}}{2}(1+\epsilon)$ and hence the best of the two bounds is always at most $\left(\frac{\sqrt{2}}{4} + \frac{1}{2}\right) + \epsilon\left(\frac{\sqrt{2}}{4} + 1\right) \le \frac{99}{100}$, where the last inequality uses the fact that $\epsilon \le \frac{1}{10}$.
References
[AHLW16] Yuqing Ai, Wei Hu, Yi Li, and David P. Woodruff. New Characterizations in Turnstile
Streams with Applications. In Ran Raz, editor, 31st Conference on Computational
Complexity (CCC 2016), volume 50 of Leibniz International Proceedings in Informatics (LIPIcs), pages 20:1–20:22, Dagstuhl, Germany, 2016. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik.
[AKLY16] Sepehr Assadi, Sanjeev Khanna, Yang Li, and Grigory Yaroslavtsev. Maximum matchings in dynamic graph streams and the simultaneous communication model. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms,
SODA 2016, Arlington, VA, USA, January 10-12, 2016, pages 1345–1364, 2016.
[AMS99]
Noga Alon, Yossi Matias, and Mario Szegedy. The space complexity of approximating
the frequency moments. J. Comput. Syst. Sci., 58(1):137–147, 1999.
[BTW15]
Eric Blais, Li-Yang Tan, and Andrew Wan. An inequality for the fourier spectrum of
parity decision trees. CoRR, abs/1506.01055, 2015.
[DKS12]
Anirban Dasgupta, Ravi Kumar, and D. Sivakumar. Sparse and lopsided set disjointness via information theory. In Approximation, Randomization, and Combinatorial
Optimization. Algorithms and Techniques - 15th International Workshop, APPROX
2012, and 16th International Workshop, RANDOM 2012, Cambridge, MA, USA, August 15-17, 2012. Proceedings, pages 517–528, 2012.
[Gan08]
Sumit Ganguly. Lower bounds on frequency estimation of data streams (extended abstract). In Computer Science - Theory and Applications, Third International Computer
Science Symposium in Russia, CSR 2008, Moscow, Russia, June 7-12, 2008, Proceedings, pages 204–215, 2008.
[GKdW04] Dmitry Gavinsky, Julia Kempe, and Ronald de Wolf. Quantum communication cannot
simulate a public coin. CoRR, quant-ph/0411051, 2004.
[GOS+ 11] Parikshit Gopalan, Ryan O’Donnell, Rocco A. Servedio, Amir Shpilka, and Karl Wimmer. Testing fourier dimensionality and sparsity. SIAM J. Comput., 40(4):1075–1100,
2011.
[Gro97]
Vince Grolmusz. On the power of circuits with gates of low l1 norms. Theor. Comput.
Sci., 188(1-2):117–128, 1997.
[HHL16]
Hamed Hatami, Kaave Hosseini, and Shachar Lovett. Structure of protocols for XOR
functions. Electronic Colloquium on Computational Complexity (ECCC), 23:44, 2016.
[HPP+ 15] James W. Hegeman, Gopal Pandurangan, Sriram V. Pemmaraju, Vivek B. Sardeshmukh, and Michele Scquizzato. Toward optimal bounds in the congested clique: Graph
connectivity and MST. In Proceedings of the 2015 ACM Symposium on Principles
of Distributed Computing, PODC 2015, Donostia-San Sebastián, Spain, July 21 - 23,
2015, pages 91–100, 2015.
[HSZZ06]
Wei Huang, Yaoyun Shi, Shengyu Zhang, and Yufan Zhu. The communication complexity of the hamming distance problem. Inf. Process. Lett., 99(4):149–153, 2006.
[JKS03]
T. S. Jayram, Ravi Kumar, and D. Sivakumar. Two applications of information complexity. In Proceedings of the 35th Annual ACM Symposium on Theory of Computing,
June 9-11, 2003, San Diego, CA, USA, pages 673–682, 2003.
[KM93]
Eyal Kushilevitz and Yishay Mansour. Learning decision trees using the fourier spectrum. SIAM J. Comput., 22(6):1331–1348, 1993.
[KN97]
Eyal Kushilevitz and Noam Nisan. Communication complexity. Cambridge University
Press, 1997.
[Leo13]
Nikos Leonardos. An improved lower bound for the randomized decision tree complexity
of recursive majority. In Automata, Languages, and Programming - 40th International
Colloquium, ICALP 2013, Riga, Latvia, July 8-12, 2013, Proceedings, Part I, pages
696–708, 2013.
[LLZ11]
Ming Lam Leung, Yang Li, and Shengyu Zhang. Tight bounds on the randomized
communication complexity of symmetric XOR functions in one-way and SMP models.
CoRR, abs/1101.4555, 2011.
[LNW14]
Yi Li, Huy L. Nguyen, and David P. Woodruff. Turnstile streaming algorithms might
as well be linear sketches. In Symposium on Theory of Computing, STOC 2014, New
York, NY, USA, May 31 - June 03, 2014, pages 174–183, 2014.
[Lov14]
Shachar Lovett. Recent advances on the log-rank conjecture in communication complexity. Bulletin of the EATCS, 112, 2014.
[LZ10]
Troy Lee and Shengyu Zhang. Composition theorems in communication complexity. In
Automata, Languages and Programming, 37th International Colloquium, ICALP 2010,
Bordeaux, France, July 6-10, 2010, Proceedings, Part I, pages 475–489, 2010.
[LZ13]
Yang Liu and Shengyu Zhang. Quantum and randomized communication complexity of
XOR functions in the SMP model. Electronic Colloquium on Computational Complexity
(ECCC), 20:10, 2013.
[McG14]
Andrew McGregor. Graph stream algorithms: a survey. SIGMOD Record, 43(1):9–20,
2014.
[MNS+ 13] Frédéric Magniez, Ashwin Nayak, Miklos Santha, Jonah Sherman, Gábor Tardos, and
David Xiao. Improved bounds for the randomized decision tree complexity of recursive
majority. CoRR, abs/1309.7565, 2013.
[MNSX11] Frédéric Magniez, Ashwin Nayak, Miklos Santha, and David Xiao. Improved bounds for
the randomized decision tree complexity of recursive majority. In Automata, Languages
and Programming - 38th International Colloquium, ICALP 2011, Zurich, Switzerland,
July 4-8, 2011, Proceedings, Part I, pages 317–329, 2011.
[MO09]
Ashley Montanaro and Tobias Osborne. On the communication complexity of XOR
functions. CoRR, abs/0909.3392, 2009.
27
[O’D14]
Ryan O’Donnell. Analysis of Boolean Functions. Cambridge University Press, 2014.
[OWZ+ 14] Ryan O’Donnell, John Wright, Yu Zhao, Xiaorui Sun, and Li-Yang Tan. A composition
theorem for parity kill number. In IEEE 29th Conference on Computational Complexity,
CCC 2014, Vancouver, BC, Canada, June 11-13, 2014, pages 144–154, 2014.
[San15]
Swagato Sanyal. Near-optimal upper bound on fourier dimension of boolean functions
in terms of fourier sparsity. In Automata, Languages, and Programming - 42nd International Colloquium, ICALP 2015, Kyoto, Japan, July 6-10, 2015, Proceedings, Part
I, pages 1035–1045, 2015.
[STlV14]
Amir Shpilka, Avishay Tal, and Ben lee Volk. On the structure of boolean functions
with small spectral norm. In Innovations in Theoretical Computer Science, ITCS’14,
Princeton, NJ, USA, January 12-14, 2014, pages 37–48, 2014.
[SW86]
Michael E. Saks and Avi Wigderson. Probabilistic boolean decision trees and the complexity of evaluating game trees. In 27th Annual Symposium on Foundations of Computer Science, Toronto, Canada, 27-29 October 1986, pages 29–38, 1986.
[SW12]
Xiaoming Sun and Chengu Wang. Randomized communication complexity for linear
algebra problems over finite fields. In 29th International Symposium on Theoretical
Aspects of Computer Science, STACS 2012, February 29th - March 3rd, 2012, Paris,
France, pages 477–488, 2012.
[SZ08]
Yaoyun Shi and Zhiqiang Zhang. Communication complexities of symmetric xor functions. Quantum Inf. Comput, pages 0808–1762, 2008.
[TWXZ13] Hing Yin Tsang, Chung Hoi Wong, Ning Xie, and Shengyu Zhang. Fourier sparsity,
spectral norm, and the log-rank conjecture. In 54th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2013, 26-29 October, 2013, Berkeley, CA, USA,
pages 658–667, 2013.
[Woo14]
David P. Woodruff. Sketching as a tool for numerical linear algebra. Foundations and
Trends in Theoretical Computer Science, 10(1-2):1–157, 2014.
[Yao83]
Andrew Chi-Chih Yao. Lower bounds by probabilistic arguments (extended abstract).
In 24th Annual Symposium on Foundations of Computer Science, Tucson, Arizona,
USA, 7-9 November 1983, pages 420–428, 1983.
[ZS10]
Zhiqiang Zhang and Yaoyun Shi. On the parity complexity measures of boolean functions. Theor. Comput. Sci., 411(26-28):2612–2618, 2010.
Appendix

A Deterministic F2-sketching
In the deterministic case it will be convenient to represent an F2-sketch of a function f : F_2^n → F_2 as a d × n matrix M_f ∈ F_2^{d×n} that we call the sketch matrix. The d rows of M_f correspond to the vectors α_1, ..., α_d used in the deterministic sketch, so that the sketch can be computed as M_f x. W.l.o.g. below we will assume that the sketch matrix M_f has linearly independent rows and that the number of rows in it is the smallest possible among all sketch matrices (ties in the choice of the sketch matrix are broken arbitrarily).
The following fact is standard (see e.g. [MO09, GOS+ 11]):
Fact A.1. For any function f : F_2^n → F_2 it holds that D^lin(f) = dim(f) = rank(M_f). Moreover, the set of rows of M_f forms a basis for a subspace A ≤ F_2^n containing all non-zero coefficients of f.
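Fact A.1 makes D^lin computable by brute force for small n: enumerate the Fourier support of f and row-reduce it over F_2. A minimal sketch of this computation (the function names are ours, not from the paper):

```python
def fourier_support(f, n):
    """Characteristic vectors (as n-bit integers) of the non-zero
    Fourier coefficients of f: F_2^n -> F_2, computed by brute force."""
    support = []
    for alpha in range(2 ** n):
        # correlation of (-1)^f(x) with the character chi_alpha(x)
        corr = sum((-1) ** (f(x) ^ (bin(x & alpha).count("1") % 2))
                   for x in range(2 ** n))
        if corr != 0:
            support.append(alpha)
    return support


def rank_f2(vectors):
    """Rank over F_2 of bit-vectors encoded as integers
    (Gaussian elimination on the highest set bit)."""
    pivots = {}
    for v in vectors:
        while v:
            h = v.bit_length() - 1
            if h in pivots:
                v ^= pivots[h]
            else:
                pivots[h] = v
                break
    return len(pivots)


def d_lin(f, n):
    """D^lin(f) = dim(f): dimension of the span of the Fourier support."""
    return rank_f2(fourier_support(f, n))
```

For example, the 2-bit XOR has the single support vector (1,1) and D^lin = 1, while the 2-bit AND has full support and D^lin = 2.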
A.1 Disperser argument
We show that the following basic relationship holds between deterministic linear sketching complexity and the property of being an affine disperser. For randomized F2-sketching an analogous statement holds for affine extractors, as shown in Lemma B.2.
Definition A.2 (Affine disperser). A function f is an affine disperser of dimension at least d if for any affine subspace of F_2^n of dimension at least d the restriction of f to it is a non-constant function.
Lemma A.3. Any function f : F_2^n → F_2 which is an affine disperser of dimension at least d has deterministic linear sketching complexity at least n − d + 1.
Proof. Assume for the sake of contradiction that there exists a linear sketch matrix M_f with k ≤ n − d rows and a deterministic function g such that g(M_f x) = f(x) for every x ∈ F_2^n. For any vector b ∈ F_2^k which is in the span of the columns of M_f, the set of vectors x which satisfy M_f x = b forms an affine subspace of dimension at least n − k ≥ d. Since f is an affine disperser for dimension at least d, the restriction of f to this subspace is non-constant. However, the function g(M_f x) = g(b) is constant on this subspace, and thus there exists x such that g(M_f x) ≠ f(x), a contradiction.
A.2 Composition and convolution
In order to prove a composition theorem for D^lin we introduce the following operation on matrices which, for lack of a better term, we call the matrix super-slam.
Definition A.4 (Matrix super-slam). For two matrices A ∈ F_2^{a×n} and B ∈ F_2^{b×m} their super-slam A † B ∈ F_2^{a·b^n × nm} is a block matrix consisting of a blocks (A † B)_i. The i-th block (A † B)_i ∈ F_2^{b^n × nm} is constructed as follows: for every vector j ∈ {1, ..., b}^n the corresponding row of (A † B)_i is defined as (A_{i,1} B_{j_1}, A_{i,2} B_{j_2}, ..., A_{i,n} B_{j_n}), where B_k denotes the k-th row of B.
Proposition A.5. rank(A † B) ≥ rank(A)rank(B).
(The name "super-slam" was suggested by Chris Ramsey.)
Proof. Consider the matrix C which is a subset of rows of A † B, where from each block (A † B)_i we select only the b rows corresponding to the vectors j of the form α^n for all α ∈ {1, ..., b}. Note that C ∈ F_2^{ab×mn} and C_{(i,k),(j,l)} = A_{i,j} B_{k,l}. Hence, C is a Kronecker product of A and B and we have:

rank(A † B) ≥ rank(C) = rank(A) rank(B).
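The matrix C extracted in the proof is exactly the Kronecker product of A and B, and over any field the Kronecker product multiplies ranks. For small 0/1 matrices this can be checked exhaustively over F_2 (an illustrative sketch; the names are ours):

```python
def kron_f2(A, B):
    """Kronecker product over F_2 of 0/1 matrices given as lists of rows.
    Entry ((i,k),(j,l)) equals A[i][j] * B[k][l]."""
    return [[ai * bj for ai in ra for bj in rb] for ra in A for rb in B]


def rank_f2(M):
    """Rank of a 0/1 matrix over F_2 via Gaussian elimination,
    with each row packed into an integer."""
    pivots = {}
    for row in M:
        v = int("".join(map(str, row)), 2)
        while v:
            h = v.bit_length() - 1
            if h in pivots:
                v ^= pivots[h]
            else:
                pivots[h] = v
                break
    return len(pivots)
```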
The following composition theorem for D^lin holds as long as the inner function is balanced:

Lemma A.6. For f : F_2^n → F_2 and g : F_2^m → F_2, if g is a balanced function then:

D^lin(f ∘ g) ≥ D^lin(f) D^lin(g).
Proof. The multilinear expansions of f and g are given as f(y) = Σ_{S∈F_2^n} f̂(S)χ_S(y) and g(y) = Σ_{S∈F_2^m} ĝ(S)χ_S(y). The multilinear expansion of f ∘ g can be obtained as follows. For each monomial f̂(S)χ_S(y) in the multilinear expansion of f and each variable y_i, substitute y_i by the multilinear expansion of g on the set of variables x_{m(i−1)+1}, ..., x_{mi}. Multiplying all these multilinear expansions corresponding to the term f̂(S)χ_S gives a polynomial which is a sum of at most b^n monomials, where b is the number of non-zero Fourier coefficients of g. Each such monomial is obtained by picking one monomial from the multilinear expansions corresponding to different variables in χ_S and multiplying them. Note that there are no cancellations between the monomials corresponding to a fixed χ_S. Moreover, since g is balanced and thus ĝ(∅) = 0, all monomials corresponding to different characters χ_S and χ_{S′} are distinct, since S and S′ differ on some variable and substituting g into that variable doesn't have a constant term but introduces new variables. Thus, the characteristic vectors of non-zero Fourier coefficients of f ∘ g are the same as the set of rows of the super-slam of the sketch matrices M_f and M_g (note that in the super-slam some rows can be repeated multiple times, but after removing duplicates the set of rows of the super-slam and the set of characteristic vectors of non-zero Fourier coefficients of f ∘ g are exactly the same). Using Proposition A.5 and Fact A.1 we have:

D^lin(f ∘ g) = rank(M_{f∘g}) = rank(M_f † M_g) ≥ rank(M_f) rank(M_g) = D^lin(f) D^lin(g).
Deterministic F2-sketch complexity of convolution satisfies the following property:

Proposition A.7. D^lin(f ∗ g) ≤ min(D^lin(f), D^lin(g)).

Proof. The Fourier spectrum of the convolution is given as (f ∗ g)ˆ(S) = f̂(S)ĝ(S). Hence, the set of non-zero Fourier coefficients of f ∗ g is the intersection of the sets of non-zero coefficients of f and g. Thus by Fact A.1 we have D^lin(f ∗ g) ≤ min(rank(M_f), rank(M_g)) = min(D^lin(f), D^lin(g)).
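The identity (f ∗ g)ˆ(S) = f̂(S)ĝ(S) can be verified exactly with brute-force Fourier transforms, using the {+1, −1} range and exact rational arithmetic (a small sketch under our own naming):

```python
from fractions import Fraction


def fourier(h, n):
    """Fourier coefficients of h, given as a list of 2^n values indexed
    by x; returns exact Fractions indexed by alpha."""
    N = 2 ** n
    return [Fraction(sum(h[x] * (-1) ** bin(x & a).count("1")
                         for x in range(N)), N)
            for a in range(N)]


def convolve(f, g, n):
    """(f * g)(x) = 2^{-n} sum_y f(y) g(x + y), where x + y is XOR."""
    N = 2 ** n
    return [Fraction(sum(f[y] * g[x ^ y] for y in range(N)), N)
            for x in range(N)]
```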
B Randomized F2-sketching
We represent randomized F2 -sketches as distributions over d × n matrices over F2 . For a fixed such
distribution Mf the randomized sketch is computed as Mf x. If the set of rows of Mf satisfies
Definition 1.1 for some reconstruction function g then we call it a randomized sketch matrix for f .
B.1 Extractor argument
We now establish a connection between randomized F2-sketching and affine extractors, which will be used to show that the converse of Part 1 of Theorem 3.4 doesn't hold for arbitrary distributions.
Definition B.1 (Affine extractor). A function f : F_2^n → F_2 is an affine δ-extractor of dimension at least d if for any affine subspace A of F_2^n of dimension at least d it satisfies:

min_{z∈{0,1}} Pr_{x∼U(A)}[f(x) = z] > δ.
Lemma B.2. For any f : F_2^n → F_2 which is an affine δ-extractor of dimension at least d it holds that:

R_δ^lin(f) ≥ n − d + 1.
Proof. For the sake of contradiction assume that there exists a randomized linear sketch with a reconstruction function g : F_2^k → F_2 and a randomized sketch matrix M_f which is a distribution over matrices with k ≤ n − d rows. First, we show that:

Pr_{x∼U(F_2^n), M∼M_f}[g(M x) ≠ f(x)] > δ.

Indeed, fix any matrix M ∈ supp(M_f). For any affine subspace S of the form S = {x ∈ F_2^n | M x = b} of dimension at least n − k ≥ d we have that min_{z∈{0,1}} Pr_{x∼U(S)}[f(x) = z] > δ. This implies that Pr_{x∼U(S)}[f(x) ≠ g(M x)] > δ. Summing over all subspaces corresponding to the fixed M and all possible choices of b, we have that Pr_{x∼U(F_2^n)}[f(x) ≠ g(M x)] > δ. Since this holds for any fixed M the bound follows.

Using the above observation it follows by averaging over x ∈ {0,1}^n that there exists x* ∈ {0,1}^n such that:

Pr_{M∼M_f}[g(M x*) ≠ f(x*)] > δ.

This contradicts the assumption that M_f and g form a randomized linear sketch of dimension k ≤ n − d.
Fact B.3. The inner product function IP(x_1, ..., x_n) = Σ_{i=1}^{n/2} x_{2i−1} ∧ x_{2i} is a (1/2 − ε)-extractor for affine subspaces of dimension ≥ (1/2 + α)n, where ε = exp(−αn).

Corollary B.4. The randomized linear sketching complexity of the inner product function is at least n/2 − O(1).
Remark B.5. We note that the extractor argument of Lemma B.2 is often much weaker than the arguments we give in Part 2 and Part 3 of Theorem 3.4 and wouldn't suffice for our applications in Section 4. In fact, the extractor argument is too weak even for the majority function Maj_n. If the first 100√n variables of Maj_n are fixed to 0 then the resulting restriction has value 0 with probability 1 − e^{−Ω(n)}. Hence for constant error Maj_n isn't an extractor for dimension greater than 100√n. However, as shown in Section 4.3, for constant error the F2-sketch complexity of Maj_n is linear.
B.2 Existential lower bound for arbitrary distributions
Now we are ready to show that an analog of Part 1 of Theorem 3.4 doesn’t hold for arbitrary
distributions, i.e. concentration on a low-dimensional linear subspace doesn’t imply existence of
randomized linear sketches of small dimension.
Lemma B.6. For any fixed constant ε > 0 there exists a function f : F_2^n → {+1, −1} with R_{ε/8}^lin(f) ≥ n − 3 log n such that f is (1 − 2ε)-concentrated on the 0-dimensional linear subspace.
Proof. The proof is based on the probabilistic method. Consider a distribution over functions from F_2^n to {+1, −1} which independently assigns to each x the value 1 with probability 1 − ε/4 and the value −1 with probability ε/4. By a Chernoff bound, with probability 1 − e^{−Ω(ε2^n)} a random function f drawn from this distribution has at most an ε/2-fraction of −1 values and hence f̂(∅) = (1/2^n) Σ_{x∈F_2^n} f(x) ≥ 1 − ε. This implies that f̂(∅)² ≥ (1 − ε)² ≥ 1 − 2ε, so f is (1 − 2ε)-concentrated on a linear subspace of dimension 0. However, as we show below, the randomized sketching complexity of some functions in the support of this distribution is large.
The total number of affine subspaces of codimension d is at most (2 · 2^n)^d = 2^{(n+1)d}, since each such subspace can be specified by d vectors in F_2^n and a vector in F_2^d. The number of vectors in each such affine subspace is 2^{n−d}. The probability that less than an ε/8 fraction of inputs in a fixed subspace have value −1 is by a Chernoff bound at most e^{−Ω(ε2^{n−d})}. By a union bound, the probability that a random function takes value −1 on less than an ε/8 fraction of the inputs in some affine subspace of codimension d is at most e^{−Ω(ε2^{n−d})} 2^{(n+1)d}. For d ≤ n − 3 log n this probability is less than e^{−Ω(εn)}. By a union bound, the probability that a random function is either not an ε/8-extractor or isn't (1 − 2ε)-concentrated on f̂(∅) is at most e^{−Ω(εn)} + e^{−Ω(ε2^n)} ≪ 1. Thus, there exists a function f in the support of our distribution which is an ε/8-extractor for any affine subspace of dimension at least 3 log n while at the same time being (1 − 2ε)-concentrated on a linear subspace of dimension 0. By Lemma B.2 there is no randomized linear sketch of dimension less than n − 3 log n for f which errs with probability less than ε/8.
B.3 Random F2-sketching
The following result is folklore as it corresponds to multiple instances of the communication protocol
for the equality function [KN97, GKdW04] and can be found e.g. in [MO09] (Proposition 11). We
give a proof for completeness.
Fact B.7. A function f : F_2^n → F_2 such that min_{z∈{0,1}} Pr_x[f(x) = z] ≤ ε satisfies:

R_δ^lin(f) ≤ log(ε2^{n+1}/δ).
Proof. We assume that argmin_{z∈{0,1}} Pr_x[f(x) = z] = 1, as the other case is symmetric. Let T = {x ∈ F_2^n | f(x) = 1}. For every two inputs x ≠ x′ ∈ T, a random F2-sketch χ_α for α ∼ U(F_2^n) satisfies Pr[χ_α(x) ≠ χ_α(x′)] = 1/2. If we draw t such sketches χ_{α_1}, ..., χ_{α_t} then Pr[χ_{α_i}(x) = χ_{α_i}(x′), ∀i ∈ [t]] = 1/2^t. For any fixed x ∈ T we have:

Pr[∃x′ ≠ x ∈ T ∀i ∈ [t] : χ_{α_i}(x) = χ_{α_i}(x′)] ≤ (|T| − 1)/2^t ≤ ε2^n/2^t ≤ δ/2.

Conditioned on the negation of the event above for a fixed x ∈ T, the domain of f is partitioned by the linear sketches into affine subspaces such that x is the only element of T in the subspace that contains it. We only need to ensure that we can sketch f on this subspace, which we denote as A. On this subspace f is isomorphic to an OR function (up to taking negations of some of the variables) and hence can be sketched using O(log 1/δ) uniformly random sketches with probability 1 − δ/2. For the OR function existence of the desired protocol is clear, since we just need to verify whether there exists at least one coordinate of the input that is set to 1. In case it does exist, a random sketch contains this coordinate with probability 1/2 and hence evaluates to 1 with probability at least 1/4. Repeating O(log 1/δ) times, the desired guarantee follows.
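The key step of the proof — two distinct inputs x ≠ x′ collide under a uniformly random parity χ_α with probability exactly 1/2 — can be checked exhaustively for small n (a sketch; the names are ours):

```python
def parity(alpha, x):
    """chi_alpha(x) as a bit: the inner product <alpha, x> over F_2."""
    return bin(alpha & x).count("1") % 2


def collision_fraction(x, xp, n):
    """Fraction of all 2^n sketches chi_alpha with chi_alpha(x) ==
    chi_alpha(x'). For x != x' this holds iff <alpha, x + x'> = 0,
    which is true for exactly half of all alpha."""
    N = 2 ** n
    return sum(parity(a, x) == parity(a, xp) for a in range(N)) / N
```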
C Tightness of Theorem 3.4 for the Majority function
An important question is whether Part 3 of Theorem 3.4 is tight. In particular, one might ask
whether the dependence on the error probability can be improved by replacing ∆d (f ) with a larger
quantity. As we show below this is not the case and hence Part 3 of Theorem 3.4 is tight.
We consider the majority function Maj_n where n is an odd number. The total Fourier weight on Fourier coefficients corresponding to vectors of Hamming weight k is denoted as W^k(f) = Σ_{α : ‖α‖_0 = k} f̂(α)². For the majority function it is well-known (see e.g. [O'D14]) that for ξ = (2/π)^{3/2} and odd k it holds that:

W^k(Maj_n) = ξ k^{−3/2} (1 ± O(1/k)).
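For small odd n the level weights W^k(Maj_n) can be computed exactly by brute force: only odd levels carry weight (Maj_n is an odd function) and the weights sum to 1 by Parseval. A sketch (the names are ours):

```python
from fractions import Fraction


def maj_level_weights(n):
    """Fourier weight W^k(Maj_n) at each Hamming level k, computed by
    brute force over the {+1,-1} encoding of majority (n odd)."""
    N = 2 ** n
    maj = [1 if bin(x).count("1") > n // 2 else -1 for x in range(N)]
    W = [Fraction(0)] * (n + 1)
    for alpha in range(N):
        coef = Fraction(sum(maj[x] * (-1) ** bin(x & alpha).count("1")
                            for x in range(N)), N)
        W[bin(alpha).count("1")] += coef * coef
    return W
```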
Since Maj_n is a symmetric function whose spectrum decreases monotonically with the Hamming weight of the corresponding Fourier coefficient, by a normalization argument as in Lemma 4.9, among all linear subspaces of dimension d the maximum Fourier weight is achieved by the standard subspace S_d which spans d unit vectors. Computing the Fourier weight of S_{n−1} we have:

Σ_{α∈S_{n−1}} M̂aj_n(α)² = 1 − Σ_{α∉S_{n−1}} M̂aj_n(α)²
= 1 − Σ_{i=0}^{n/2−1} W^{2i+1}(Maj_n) · C(n−1, 2i)/C(n, 2i+1)
= 1 − Σ_{i=0}^{n/2−1} ξ (2i+1)^{−3/2} (1 ± O(1/(2i+1))) · (2i+1)/n
= 1 − γ/√n ± O(1/n^{3/2}),
where γ > 0 is an absolute constant. Thus, we can set ε_n(Maj_n) = 1 and ε_{n−1}(Maj_n) = 1 − γ/√n − O(1/n^{3/2}) in Part 3 of Theorem 3.4. This gives the following corollary:

Corollary C.1. It holds that D_δ^{→,U}(Maj_n^+) ≥ n, where δ = γ/√n + O(1/n^{3/2}) for some constant γ > 0.
Tightness follows from the fact that error O(1/√n) for Maj_n can be achieved using a trivial (n − 1)-bit protocol in which Alice sends the first n − 1 bits of her input x_1, ..., x_{n−1} and Bob outputs Maj_{n−1}(x_1 + y_1, x_2 + y_2, ..., x_{n−1} + y_{n−1}). The only inputs on which this protocol can make an error are inputs where there is an equal number of zeros and ones among x_1 + y_1, ..., x_{n−1} + y_{n−1}. It follows from the standard approximation of binomials that such inputs are an O(1/√n) fraction under the uniform distribution.
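The fraction of potentially erroneous inputs of this (n − 1)-bit protocol is exactly the probability of a tie among n − 1 independent uniform bits, which can be evaluated directly (a sketch; the function name is ours, and this counts the inputs on which the protocol *can* err, i.e. an upper bound on its error):

```python
from math import comb


def maj_one_way_error_bound(n):
    """Fraction of uniform inputs (x, y) on which the (n-1)-bit protocol
    can err: the probability of a tie among the n-1 independent uniform
    bits x_1 + y_1, ..., x_{n-1} + y_{n-1} (n odd). This is
    C(n-1, (n-1)/2) / 2^{n-1} = Theta(1/sqrt(n))."""
    assert n % 2 == 1
    return comb(n - 1, (n - 1) // 2) / 2 ** (n - 1)
```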
Is Compare-and-Swap Really Necessary?
arXiv:1802.03844v1 [] 12 Feb 2018
Pankaj Khanchandani1 and Roger Wattenhofer1
1
ETH Zurich, Switzerland
[email protected], [email protected]
Abstract
The consensus number of a synchronization primitive, such as compare-and-swap or
fetch-and-add, is the maximum number of processes n among which binary consensus
can be solved by using read-write registers and registers supporting the synchronization primitive. As per Herlihy’s seminal result, any synchronization primitive with
consensus number n can be used to construct a wait-free and linearizable implementation of any non-trivial concurrent object or a data structure that is shared among
n processes. As the compare-and-swap primitive has infinite consensus number and is
widely supported by multi-processors, synchronization tasks have been typically solved
using the compare-and-swap synchronization primitive.
In this paper, we show that having compare-and-swap as the quintessential synchronization primitive for solving wait-free synchronization is not necessary. It is
not necessary as we give an O(1) time wait-free and linearizable implementation of
a compare-and-swap register for processes 1, 2, . . . , n using registers that support the
two weak synchronization primitives half-max and max-write, each with consensus
number one. Thus, any algorithm that uses powerful compare-and-swap registers to
solve some arbitrary synchronization problem among processes 1, 2, . . . , n can be transformed into an algorithm that has the same asymptotic time complexity and only uses
weak consensus number one primitives.
1 Introduction
Any modern multi-processor chip supports the compare-and-swap synchronization primitive which is widely used for coordination among multiple processes. The reason dates
back to 1991, when Maurice Herlihy [6] introduced consensus numbers in his seminal work
to quantify the power of a synchronization primitive. The consensus number of a synchronization primitive is defined as the maximum number of processes among which binary
consensus can be solved by using read-write registers and registers supporting the synchronization primitive. Herlihy showed that if a synchronization primitive has consensus
number n, then it can be used to design a linearizable and wait-free implementation of any
concurrent object that is shared among n processes (such as concurrent queues, stacks,
etc.). The wait-free property implies that a process completes its operation on the object
in a bounded number of steps irrespective of the schedule of the other processes. Linearizability implies that each operation appears to take effect instantaneously at some point
during the execution of the operation. As the compare-and-swap primitive has infinite
consensus number, it can be used to design a linearizable and wait-free implementation
of any concurrent object. So, compare-and-swap is often considered the most powerful synchronization primitive, and as such is widely supported by hardware. As a result, concurrent algorithms and data structures have traditionally been designed using the compare-and-swap primitive, e.g., [11, 7, 10].
Recently, Ellen et al. [2] showed that combining two simple primitives of consensus
number one each can also achieve consensus number infinity. Thus, they showed that a set of simple consensus number one primitives is sufficient to design a linearizable and wait-free implementation of any shared object, just like compare-and-swap is. However, this is
only a sufficiency argument and utilizes Herlihy’s universal wait-free construction of any
shared object, which is not very efficient and can take up to O(n) time per operation in a
system with n processes. This brings us to the following natural question: Is there a set of
simple (low consensus number) primitives that is not only sufficient like compare-and-swap
but also equally efficient?
One way to approach this question is to solve a specific synchronization problem with
some low consensus number primitives and compare the solution with a solution that
uses compare-and-swap. For example, Gelashvili et al. [4] implement a log data structure
using primitives of consensus number at most two, with the same performance as the log
data structure implemented using compare-and-swap. However, this approach answers the
efficiency question only partially, i.e., with respect to a specific synchronization problem.
Instead, we answer the question in affirmative for any synchronization task by giving an
O(1) time linearizable and wait-free implementation of a compare-and-swap register for
processes 1, 2, . . . , n using two consensus number one primitives that we call half-max and
max-write. Thus, we show that it is possible to transform any algorithm among processes
1, 2, . . . , n that uses compare-and-swap registers and takes O(T ) time into an O(T ) time
algorithm that only uses consensus number one primitives!
2 Related Work
Herlihy’s result [6] tells us that it is not possible to use low consensus number synchronization primitives to achieve higher consensus number. However, it relies on the assumption
that the individual synchronization primitives are not applied on the same register. Ellen
et al. [2] question this assumption by obtaining a higher consensus number using a collection of low consensus number primitives that are applied on the same register. One of
their examples solves binary consensus among any given number of processes using the
consensus number one primitives multiply and decrement.
This was followed by a couple of works that highlight the use of combining low consensus
number primitives for specific synchronization tasks. Gelashvili et al. [4] give an efficient
implementation of a log data structure by just using low consensus number primitives.
In [9], we use some low consensus number primitives along with the compare-and-swap primitive to improve the time bound of a wait-free queue from O(n) to O(√n), where n is the number of processes. Earlier, Golab et al. [5] gave a blocking implementation of comparison primitives using just read-write registers and a constant number of remote
memory references. As the implementation is blocking, it cannot be used for wait-free
synchronization tasks. So, there is no prior work that shows that a collection of low
consensus number primitives is both sufficient and efficient like compare-and-swap registers
for an arbitrary synchronization task.
3 An Overview of the Method
Our method is based on the observation that if several compare-and-swap operations attempt to simultaneously change the value in the register, only one of them succeeds. So,
instead of updating the final value of the register for each operation, we first determine the
single operation that succeeds and update the final value accordingly. This is achieved by
using two consensus number one primitives: max-write and half-max.
The max-write primitive takes two arguments. If the first argument is greater than
or equal to the value in the first half of the register, then the first half of the register is
replaced with the first argument and the second half is replaced with the second argument.
Otherwise, the register is left unchanged. This primitive helps in keeping a version number
along with a value.
The half-max primitive takes a single argument and replaces the first half of the register
with that argument if the argument is larger. Otherwise, the register remains unchanged.
This primitive is used along with the max-write primitive to determine the single successful
compare-and-swap operation out of several concurrent ones. The task of determining the
successful compare-and-swap operation can be viewed as a variation of tree-based combining (as in [3, 8] for example). The difference is that we do not use a tree as it would incur
Θ(log n) time overhead. Instead, our method does the combining in constant time as we
will see later.
In the following section, we formalize the model and the problem. In Section 5, we give
an implementation of the compare-and-swap operation using registers that support the
half-max, max-write, read and write operations. In Section 6, we prove its correctness and
show that the compare-and-swap operation runs in O(1) time. In Section 7, we argue that
the consensus numbers of the max-write and half-max primitives are both one. Finally, we
conclude and discuss some extensions in Section 8.
4 Model
A sequential object is defined by the tuple (S, O, R, T ). Here, S is the set of all possible
states of the object, O is the set of operations that can be performed on the object, R
is the set of possible return values of all the operations and T : S × O → S × R is the
transition function that specifies the next state of the object and the return value given a
state of the object and an operation applied on it.
A register is a sequential object and supports the operations read, write, half-max and
max-write. The read() operation returns the current value (state) of the register. The
write(v) operation updates the value of the register to v. The half-max(x) operation replaces the value in the first half of the register, say a, with max{x, a}. The max-write(x | y)
operation replaces the first half of the register, say a, with x and second half of the register
with y if and only if x ≥ a. The register operations are atomic, i.e., if different processes
execute them simultaneously, then they execute sequentially in some order. In general,
atomicity is implied whenever we use the word operation in the rest of the text.
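The register semantics above can be modeled in Python for experimentation, with a lock standing in for hardware atomicity (an illustrative sketch; the class and the underscored method names are ours):

```python
import threading


class Register:
    """A two-field register (first | second) supporting atomic read,
    write, half-max and max-write; a lock models atomicity."""

    def __init__(self, first=0, second=0):
        self._lock = threading.Lock()
        self._first, self._second = first, second

    def read(self):
        with self._lock:
            return (self._first, self._second)

    def write(self, first, second):
        with self._lock:
            self._first, self._second = first, second

    def half_max(self, x):
        # replace the first half with max(x, first); second half untouched
        with self._lock:
            self._first = max(self._first, x)

    def max_write(self, x, y):
        # overwrite both halves if and only if x >= first
        with self._lock:
            if x >= self._first:
                self._first, self._second = x, y
```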
An implementation of a sequential object is a collection of functions, one for each
operation defined by the object. A function specifies a sequence of instructions to be
executed when the function is executed. An instruction is an operation on a register or a
computation on local variables, i.e., variables exclusive to a process.
A process is a sequence of functions to be executed. Thus, a process defines a sequence
of instructions to be executed as determined by the individual functions in its sequence.
The processes have identifiers 1, 2, . . . , n. When a process executes a function, it is said to
call that function. A schedule is a sequence of process identifiers. Given a schedule S, an
execution E(S) is the sequence of instructions obtained by replacing each process identifier
in the schedule with the next instruction to be executed by the corresponding process.
Given an execution and a function called by a process, the start of the function call is
the point in the execution when the first register operation of the function call appears.
Similarly, the end of the function call is the point in the execution when the last register
operation of the function call appears. A function call A is said to occur before another
function call B, if the call A ends before call B starts. Thus, the function calls of an
implementation of an object O form a partial order PO (E) with respect to an execution
E. An implementation on an object O is linearizable if there is a total order TO (E) that
extends the partial order PO (E) for any given execution E so that the actual return value of
every function call in the order TO (E) is the same as the return value determined by applying
the specification of the object to the order TO (E). The total order TO (E) is usually defined
by associating a linearization point with each function call, which is a specific point in the
execution when the call takes effect. An implementation is wait-free if every function call
returns within a bounded number of steps of the calling process irrespective of the schedule
of the other processes.
Our goal is to develop a wait-free and linearizable implementation of the compare-and-swap register. It supports read and compare-and-swap operations. The read() operation
returns the current value of the register. The compare-and-swap(a, b) operation returns
true and updates the value of the register to b if the value in the register is a. Otherwise,
it returns false and does not change the value.
5 Algorithm
Figure 1 shows the (shared) registers that are used by the algorithm. There are arrays A
and R of size n each. The ith entry of the array A consists of two fields: the field c keeps a
count of the number of compare-and-swap operations executed by the process i, the field
val is used to store or announce the second argument of the compare-and-swap operation
that the process i is executing. The ith entry of the array R consists of the fields c and ret.
The field ret is used for storing the return value of the cth compare-and-swap operation
executed by the process i. The register V stores the current value of the compare-and-swap
object in the field val along with its version number in the field seq. The fields seq, pid and
c of the register P respectively store the next version number, the process identifier of the
process that executed the latest successful compare-and-swap operation and the count of
compare-and-swap operations issued by that process. For all the registers, the individual
fields are of equal sizes except for the register P . The first half of this register stores the
field seq where as the second half stores the other two fields, pid and c.
[Figure 1: An overview of data structures used by Algorithm 1 — the arrays A (fields c and val) and R (fields c and ret), the register V (fields seq and val), and the register P (fields seq, pid and c).]
Algorithm 1 gives an implementation of the compare-and-swap register. To execute
the read function, a process simply reads and returns the current value of the object as
stored in the register V (Lines 20 and 21). To execute the compare-and-swap function, a
process starts by reading the current value of the object (Line 2). If the first argument of
the function is not equal to the current value, then it returns false (Lines 3 and 4). If both the arguments are the same as the current value, then it can simply return true as the new value is the same as the initial one (Lines 5 and 6).
Otherwise, the process competes with the other processes executing the compare-and-swap function concurrently. First, the process increments its local counter (Line 7). Then,
the new value to be written by the process is announced in the respective entry of the
array A (Line 8) and the return value of the function is initialized to false by writing to
the respective entry in the array R (Line 9). The process starts competing with the other
concurrent processes by writing its identifier to the register P (Line 10). The competition is
finished by writing a version number larger than used by the competing processes (Line 11).
Algorithm 1: The compare-and-swap and the read functions. The symbol | is a field separator; a blank field denotes a variable that is not used. The variable id is the identifier of the process executing the function. At initialization, we have c = 0 and V = (0 | x), where x is the initial value of the compare-and-swap object.

 1  compare-and-swap(a, b)
 2      (seq | val) ← V.read();
 3      if a ≠ val then
 4          return false;
 5      if a = b then
 6          return true;
 7      c ← c + 1;
 8      A[id].write(c | b);
 9      R[id].write(c | false);
10      P.max-write(seq + 1 | id | c);
11      P.half-max(seq + 2);
12      (seq | pid | cp) ← P.read();
13      (ca | val) ← A[pid].read();
14      if seq is even and cp = ca then
15          R[pid].max-write(ca | true);
16          V.max-write(seq | val);
17      ( | ret) ← R[id].read();
18      return ret;

19  read()
20      ( | val) ← V.read();
21      return val;
Once the winner of the competing processes is determined, the winner and the value announced by it are read (Lines 12 and 13), the winner is informed that it won after appropriate checks (Lines 14 and 15), and the current value is updated (Line 16). The value to be returned is then read from the designated entry of the array R (Line 17).
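To make the control flow of Algorithm 1 concrete, here is a single-threaded Python sketch of it. The Reg class, the lexicographic interpretation of max-write (whole contents compared, first field most significant), and the modelling of half-max as a max on the first field alone are assumptions of this sketch; it only exercises the sequential behaviour and proves nothing about concurrent interleavings.

```python
# Single-threaded sketch of Algorithm 1 (illustrative only).
class Reg:
    def __init__(self, fields):
        self.fields = list(fields)

    def read(self):
        return tuple(self.fields)

    def write(self, *vals):
        self.fields = list(vals)

    def max_write(self, *vals):
        # Overwrite only if the new contents are lexicographically larger.
        if list(vals) > self.fields:
            self.fields = list(vals)

    def half_max(self, s):
        # Raise only the first field (the "first half") to s.
        if s > self.fields[0]:
            self.fields[0] = s

N = 3
A = [Reg((0, None)) for _ in range(N)]   # (c | val) per process
R = [Reg((0, False)) for _ in range(N)]  # (c | ret) per process
V = Reg((0, 'x'))                        # (seq | val), initial value 'x'
P = Reg((0, 0, 0))                       # (seq | pid | c)
c = [0] * N                              # per-process operation counters

def cas(me, a, b):
    seq, val = V.read()                  # Line 2
    if a != val:
        return False                     # Lines 3-4
    if a == b:
        return True                      # Lines 5-6
    c[me] += 1                           # Line 7
    A[me].write(c[me], b)                # Line 8
    R[me].write(c[me], False)            # Line 9
    P.max_write(seq + 1, me, c[me])      # Line 10
    P.half_max(seq + 2)                  # Line 11
    seq, pid, cp = P.read()              # Line 12
    ca, val = A[pid].read()              # Line 13
    if seq % 2 == 0 and cp == ca:        # Line 14
        R[pid].max_write(ca, True)       # Line 15
        V.max_write(seq, val)            # Line 16
    _, ret = R[me].read()                # Line 17
    return ret                           # Line 18

def read():
    _, val = V.read()                    # Lines 20-21
    return val

assert cas(0, 'x', 'y') and read() == 'y'  # successful CAS
assert not cas(1, 'x', 'z')                # stale expected value fails
```

Running successive calls against this sketch shows the expected sequential CAS semantics: a successful compare-and-swap bumps V.seq by 2 and installs the announced value, while a failed one leaves V untouched.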
In the following section, we analyze Algorithm 1 and show that it is a linearizable and
O(1) time wait-free implementation of the compare-and-swap object.
6 Analysis
Let us first define some notation. We refer to a field f of a register X by X.f. The term X.f ki is the value of the field X.f just after process i executes Line k during a call (the superscript names the process and the subscript the line number). We omit the call identifier from the notation as it will always be clear from the context. Similarly, v ki is the value of a variable v that is local to the process i, just after it executes Line k during a call. The term X.f e is the value of a field X.f at the end of an execution.
To prove that our implementation is linearizable, we first need to define the linearization
points. The linearization point of the compare-and-swap function executed by a process i
is given by Definition 1. There are four main cases. If the process returns from Line 4 or
Line 6, then the linearization point is the read operation in Line 2 as such an operation does
not change the value of the object (Cases 1 and 2). Otherwise, we look for the execution of
Line 16 that wrote the sequence number V.seq i2 +2 to the field V.seq for the first time. This
is the linearization point of the process i if its compare-and-swap operation was successful
as determined by the value of P.pid (Case 3a). Otherwise, the failed compare-and-swap
operations are linearized just after the successful one (Case 3b). The calls that have not
yet taken effect are linearized at the end (Case 4).
Definition 1. The compare-and-swap call by a process i is linearized as follows.
1. If V.val i2 ≠ ai1 , then the linearization point is the point when i executes Line 2.
2. If V.val i2 = ai1 = bi1 , then the linearization point is the point when i executes Line 2.
3. If V.val i2 = ai1 ≠ bi1 and V.seq e ≥ V.seq i2 + 2, then let p be the point when Line 16 is
executed by a process j so that V.seq j16 = V.seq i2 + 2 for the first time.
(a) If pid j12 = i, then the linearization point is p.
(b) If pid j12 ≠ i, then the linearization point is just after p.
4. If V.val i2 = ai1 ≠ bi1 and V.seq e < V.seq i2 + 2, then the linearization point is at the
end, after all the other linearization points.
Note that we assume in Case 3 that if V.seq e ≥ V.seq i2 + 2, then there is an execution
of Line 16 by a process j with the value V.seq j16 = V.seq i2 + 2. So, we first show in the
following lemmas that this is indeed true.
Lemma 1. The value of V.seq is always even.
Proof. We have V.seq = 0 at initialization. The modification only happens in Line 16 with
an even value.
Lemma 2. Whenever V.seq changes, it increases by 2.
Proof. Say that the value of the field was changed to V.seq i16 when a process i executed
Line 16. Then, the value seq i14 is even and so is the value V.seq i16 . Thus, the value V.seq i16 was written to P.seq by a process j with V.seq j2 = V.seq i16 − 2. As the field
V.seq is only modified by a max-write operation so it only increases. Thus, we have
V.seq ≥ V.seq i16 − 2 just before i modifies it. As V.seq is even by Lemma 1 and i modifies
it, we have V.seq = V.seq i16 − 2 before the modification. So, the value increases by 2.
Lemma 3. The linearization point as given by Definition 1 is well-defined.
Proof. The linearization point as given by Definition 1 clearly exists for all the cases except
for Case 3. Here, we only need to show that if V.seq e ≥ V.seq i2 + 2, then there exists an
execution of Line 16 by a process j so that V.seq j16 = V.seq i2 + 2. As V.seq i2 is even by
Lemma 1 and the value of V.seq only increases in steps of 2 by Lemma 2, it follows from
V.seq e ≥ V.seq i2 + 2 that V.seq i2 + 2 was written to V.seq at some point.
To show that the implementation is linearizable, we need to prove two main statements.
First, the linearization point is within the start and end of the corresponding function
call. Second, the value returned by a finished call is the same as that defined by the sequence of
linearization points up to the linearization point of the call. In the following two lemmas,
we show the first of these statements.
Lemma 4. If it holds that V.val i2 = ai1 ≠ bi1 for a compare-and-swap call by a process i,
then the value of V.seq is at least V.seq i2 + 2 at the end of the call.
Proof. We define a set of processes S = {j : V.seq j2 = V.seq i2 }. Consider the process k ∈ S
that is the first one to execute Line 14. As the field P.seq is always modified by a max
operation and process k writes V.seq i2 + 2 to that field, we have seq k14 = seq k12 ≥ V.seq i2 + 2.
If seq k12 > V.seq i2 + 2, then V.seq k12 ≥ V.seq i2 + 2 and we are done.
So, we only need to check the case when seq k12 = V.seq i2 + 2. As V.seq i2 is even by
Lemma 1, so is seq k14 = seq k12 . Moreover, the process pid k12 ∈ S as some process(es)
(including k) executed Line 10. As A[pid k12 ].c always increases whenever modified (Line 8),
we have ca k13 ≥ cp k12 . But, if ca k13 > cp k12 , then the process pid k12 finished even before the
process k, a contradiction. So, it holds that ca k13 = cp k12 and the process k executes Line 16.
Now, the execution of Line 16 by the process k either changes the value of V.seq or does
not. If it does, then V.seq k16 = V.seq i2 + 2 and we are done. Otherwise, someone already
changed the value of V.seq to at least V.seq i2 + 2 because of Lemma 2.
Lemma 5. The linearization point as given by Definition 1 is within the corresponding
call duration.
Proof. The statement is true for Cases 1 and 2 as the instruction corresponding to the
linearization point is executed by the process i itself.
For Case 3, we analyze the case of finished and unfinished call separately. Say that
the call is unfinished. As V.seq e ≥ V.seq i2 + 2 and V.seq i2 is the value of V.seq at the start
of the call, the linearization point as given by Definition 1 is after the call starts. Now,
assume that the call is finished. We know from Lemma 4 that the value of V.seq is at least
V.seq i2 + 2 when the call ends. So, the point when Line 16 writes V.seq i2 + 2 to V.seq is
within the call duration.
We know from Lemma 4 that if the call finishes, then we have V.seq e ≥ V.seq i2 + 2. So,
if V.seq e < V.seq i2 + 2, then the call is unfinished and it is fine to linearize it at the end as
done for Case 4.
Now, we need to show that the value returned by the calls is the same as the value determined by the order of linearization points. We show this in the following lemmas.
Lemma 6. Assume that x = seq i12 = seq j12 for two distinct processes i and j and that x is
even. Then, it holds that pid i12 = pid j12 and cp i12 = cp j12 .
Proof. W.l.o.g. assume that the process i executes Line 12 before the process j does so.
As x = seq i12 = seq j12 by assumption, the only way in which the field P.pid can change
until the process j executes Line 12, is by a max-write operation on P with the value x as
the first field. This is not possible as x is even and the max-write on P is only executed
with odd value as the first field (Line 10). So, it holds that pid i12 = pid j12 . Similarly, we
have cp i12 = cp j12 .
Lemma 7. As long as the value of V.seq remains the same, the value of V.val does not change.
Proof. Say that a process i is the first one to write a value x to V.seq. The value written
to the field V.val by the process i is val i13 . To have a different value of V.val with x as the
value of V.seq, another process j must execute Line 16 with seq j12 = x but val i13 ≠ val j13 .
As seq j12 = x = seq i12 , it follows from Lemma 6 that pid j12 = pid i12 and cp i12 = cp j12 . As the
condition in Line 14 is true for both the processes i and j, it then follows that ca i13 = ca j13 .
As the field A[pid j12 ].val is updated only once for a given value of A[pid j12 ].c (Line 8), it
holds that val i13 = val j13 and the claim follows.
Lemma 8. Say that seq i12 = x is even and pid i12 = j during a call by a process i. Then it holds for a call by the process j that V.seq j2 = x − 2.
Proof. As seq i12 = x, some process h modified P by executing Line 10 or Line 11 with x as
the first argument. As x is even and V.seq h2 is even by Lemma 1, the process h modified P by executing Line 11. So, it holds that V.seq h2 = x − 2. Also, process h executed Line 10
with x − 1 as the first field. As pid i12 = j, the process j also executed Line 10 with x − 1
as the first field after the process h did so. So, it holds that V.seq j2 = x − 2.
Lemma 9. For every even value x ∈ [2, V.seq e ], there is an execution of Line 16 by a
process i so that seq i12 = x and the first such execution is the linearization point of some
call.
Proof. Consider an even value x ∈ [2, V.seq e ]. Then, we know from Lemma 2 that x is
written to V.seq by an execution of Line 16. Let p be the point of first execution of Line 16
by a process j so that seq j12 = x. So, it holds for the process pid j12 = h that V.seq h2 = x − 2
using Lemma 8. As point p is the first time when x is written to the field V.seq, it holds
that V.seq j16 = x. Thus, p is the linearization point of the process h by Definition 1.
Lemma 10. The value V.val is only modified at a Case 3a linearization point.
Proof. Let q be a Case 3a linearization point. Say that the value of V.seq is updated to x at
q. Let p be the first point in the execution when the value of V.seq is x−2. Using Lemma 9,
we conclude that p is either a linearization point (for x − 2 ≥ 2) or the initialization point
(for x − 2 = 0). Using Lemma 7, the value of V.val is not modified between p and q.
We want to use the above lemma in an induction argument on the linearization points
to show that the values returned by the corresponding calls are correct. First, we introduce
some notations for k ≥ 1. The term L.val k is the value of the abstract compare-and-swap
object after the k th linearization point. The terms V.seq k and V.val k , respectively, are
the values of V.seq and V.val after the k th linearization point. All these notations refer
to the respective values just after initialization for k = 0. For k ≥ 1, the term L.ret k is
the expected return value of the call corresponding to the k th linearization point. Due to
space constraints, we only give the statement of the following two lemmas without proof.
The proof is an induction on the linearization points and checks the different linearization
point cases separately.
Lemma 11. After k ≥ 0 linearization points, it holds that L.val k = V.val k except for
Case 4 linearization points. For k ≥ 1, the L.ret k values are false for Case 1, true for
Case 2, true for Case 3a and false for Case 3b.
Lemma 12. If the k th linearization point for k ≥ 1 corresponds to a finished call by a
process i, then the value returned by the call is L.ret k .
We can now state the following main theorem about Algorithm 1.
Theorem 13. Algorithm 1 is a wait-free and linearizable implementation of the compare-and-swap register where both the compare-and-swap and read functions take O(1) time.
Proof. We conclude that the compare-and-swap function as given by Algorithm 1 is linearizable by using Lemma 5 and Lemma 12. The read operation is linearized at the point
of execution of Line 20. Clearly, this is within the duration of the call. To check the return
value, let LP k be the linearization point of the read operation and LP k0 be the linearization point previous to LP k . Then, we have V.val k = V.val k0 using Lemma 10. So, it holds
that V.val k = L.val k0 using Lemma 11. Moreover, both the compare-and-swap and read
functions end after executing O(1) steps and the implementation is wait-free.
7 Consensus Numbers
In this section, we prove that each of the max-write and the half-max primitives has consensus number one. Trivially, both operations can solve binary consensus for a single process
(itself) by just deciding on the input value. To show that these operations cannot solve
consensus for more than one process, we use the standard valency and indistinguishability
argument. We give a short proof for the reader familiar with such an argument. The
detailed proof can be found in the appendix.
We know that the initial configuration is a bivalent configuration (both 0 and 1 outputs
are possible) whereas the terminating configuration is univalent (either 0 or 1 only is a
possible output). Thus, a wait-free algorithm to solve binary consensus reaches a critical
configuration C where an operation by any process changes the configuration to a univalent
one. Say that next operation sa by process A changes the configuration to a 0-valent
configuration and the next operation sb by process B changes the configuration to a 1valent configuration. The operations sa and sb cannot be operations on different registers
as the configuration Csa sb is indistinguishable from the configuration Csb sa . It is also
not possible that one of sa or sb , say sa , is a read operation as the configurations Csa sb
and Csb are indistinguishable to process B. The operations sa and sb cannot both be write operations
as the configurations Csa and Csb sa are indistinguishable to A. If both the operations are
max-write, then let sa be the operation with the greater (or equal) first argument. Then,
the configurations Csa and Csb sa are indistinguishable to A as sa overwrites sb .
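The overwriting step of this argument can be checked mechanically. In the sketch below (plain Python, with a register modelled as a lexicographically ordered tuple, which is an assumption of this sketch), two max-writes on the same register yield the same final state regardless of order, so the resulting configurations are indistinguishable; a compare-and-swap register, by contrast, is order-sensitive, which is exactly the asymmetry that lets it solve consensus.

```python
# max-write modelled on a register holding a lexicographically ordered tuple.
def max_write(reg, args):
    return max(reg, args)

r0 = (0, 0)
sa, sb = (5, 1), (7, 2)  # sb has the greater (or equal) first argument
# Applying sa then sb gives the same state as sb then sa:
assert max_write(max_write(r0, sa), sb) == max_write(max_write(r0, sb), sa)

# compare-and-swap is not order-independent:
def cas(reg, a, b):
    return b if reg == a else reg

# 0 -> (cas 0->1, cas 0->2) ends in 1; the other order ends in 2.
assert cas(cas(0, 0, 1), 0, 2) != cas(cas(0, 0, 2), 0, 1)
```

This is only an illustration of the indistinguishability step, not a replacement for the full valency argument.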
8 Conclusion
The algorithm that we presented uses O(n) space to implement a single compare-and-swap
register due to the arrays A and R . If an algorithm uses m compare-and-swap registers,
then we can run m separate instances of the presented algorithm. In that case, we use
O(mn) space. But, we can save space if we observe that the algorithm uses the arrays A
and R to store the information about the latest pending call per process. As there is at
most one pending call per process even while implementing m compare-and-swap registers,
we can conceptually run m instances of the presented algorithm by using a single array A
and a single array R. In that case, the space required is O(m + n).
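The space-sharing observation above can be sketched as follows. The class name and layout are illustrative assumptions, not from the paper: m instances share a single array A and a single array R (there is at most one pending call per process), while each instance keeps its own V and P registers.

```python
# Sketch of the O(m + n) register layout for m CAS instances, n processes.
class MultiCAS:
    def __init__(self, m, n, init):
        self.A = [None] * n          # shared across all m instances: O(n)
        self.R = [None] * n          # shared across all m instances: O(n)
        self.V = [(0, init)] * m     # one V register per instance:   O(m)
        self.P = [(0, 0, 0)] * m     # one P register per instance:   O(m)

mc = MultiCAS(m=4, n=8, init=0)
# 2n shared + 2m per-instance registers: O(m + n) instead of O(mn).
assert len(mc.A) + len(mc.R) + len(mc.V) + len(mc.P) == 2 * 8 + 2 * 4
```

The correctness of the sharing rests on the observation quoted above: A[i] and R[i] only ever describe the latest pending call of process i, whichever instance that call targets.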
The half-max and max-write primitives have consensus number one each, just like read-write registers. Moreover, these primitives do not return any value and are based on the max
max operation which is commutative. So, they can potentially yield a simpler and scalable
hardware implementation unlike compare-and-swap, which is known to not scale well [1].
Using our result, one can transform an algorithm that uses compare-and-swap registers into
an algorithm that uses the half-max and max-write primitives. The transformation takes
constant time per operation and works for any algorithm among processes 1, 2, . . . , n in general, even if the algorithm is not wait-free. Therefore, our result gives us an opportunity to rethink, from scratch, the primitives to implement in a modern multi-processor.
References
[1] Tudor David, Rachid Guerraoui, and Vasileios Trigonakis. Everything you always
wanted to know about synchronization but were afraid to ask. In 24th ACM Symposium on Operating Systems Principles (SOSP), Farmington, Pennsylvania, Nov 2013.
[2] Faith Ellen, Rati Gelashvili, Nir Shavit, and Leqi Zhu. A Complexity-Based Hierarchy
for Multiprocessor Synchronization: [Extended Abstract]. In Proceedings of the 2016
ACM Symposium on Principles of Distributed Computing (PODC), Chicago, IL, USA,
Jul 2016.
[3] Faith Ellen and Philipp Woelfel. An Optimal Implementation of Fetch-and-Increment.
In 27th International Symposium on Distributed Computing (DISC), Jerusalem, Israel,
Oct 2013.
[4] Rati Gelashvili, Idit Keidar, Alexander Spiegelman, and Roger Wattenhofer. Brief
Announcement: Towards Reduced Instruction Sets for Synchronization. In 31st International Symposium on Distributed Computing (DISC), Vienna, Austria, Oct 2017.
[5] Wojciech Golab, Vassos Hadzilacos, Danny Hendler, and Philipp Woelfel. Constant-RMR Implementations of CAS and Other Synchronization Primitives Using Read and
Write Operations. In 26th Annual ACM SIGACT-SIGOPS Symposium on Principles
of Distributed Computing (PODC), Portland, Oregon, Aug 2007.
[6] Maurice Herlihy. Wait-free Synchronization. ACM Transactions on Programming
Languages and Systems (TOPLAS), 1991.
[7] Prasad Jayanti and Srdjan Petrovic. Efficient and Practical Constructions of LL/SC
Variables. In 22nd Annual Symposium on Principles of Distributed Computing
(PODC), Boston, Massachusetts, Jul 2003.
[8] Pankaj Khanchandani and Roger Wattenhofer. Brief Announcement: Fast Shared
Counting using O(n) Compare-and-Swap Registers. In ACM Symposium on Principles
of Distributed Computing (PODC), Washington, DC, USA, Jul 2017.
[9] Pankaj Khanchandani and Roger Wattenhofer. On the Importance of Synchronization Primitives with Low Consensus Numbers. In 19th International Conference on
Distributed Computing and Networking (ICDCN), Varanasi, India, Jan 2018.
[10] Alex Kogan and Erez Petrank. Wait-free Queues with Multiple Enqueuers and Dequeuers. In 16th ACM Symposium on Principles and Practice of Parallel Programming
(PPoPP), San Antonio, TX, USA, Feb 2011.
[11] Maged M. Michael. Practical Lock-Free and Wait-Free LL/SC/VL Implementations Using 64-Bit CAS. In 18th International Symposium on Distributed Computing
(DISC), Amsterdam, Netherlands, Oct 2004.
A Proof of Lemma 11
Proof. We prove the claim by induction on k. For the base case of k = 0, the claim is true
as V.val is initialized with the initial value of the compare-and-swap object. Let LP k be the k th
linearization point for k ≥ 1 and say that it corresponds to a call by a process i. We have
the following cases.
Case 1: Let LP k0 be the linearization point previous to LP k . By induction hypothesis,
it holds that L.val k0 = V.val k0 . By Lemma 10, the value of V.val does not change until LP k .
As we have a read operation at LP k , it holds that V.val k0 = V.val k . By Definition 1, we
know that V.val k ≠ a1i . So, it holds that L.val k0 = V.val k0 = V.val k ≠ a1i . Thus, it follows
from the specification of the compare-and-swap object that L.val k = L.val k0 = V.val k .
Moreover, we have L.ret k = false as L.val k0 = V.val k ≠ a1i .
Case 2: Again, we let LP k0 to be the linearization point previous to LP k . As argued in
the previous case, it holds that V.val k0 = V.val k . By Definition 1, we know that V.val k =
a1i = b1i . So, it holds that L.val k0 = V.val k0 = V.val k = a1i . Thus, it follows from the object’s
specification that L.val k = b1i = V.val k . Further, we have L.ret k = true as L.val k0 = a1i .
Case 3a: Consider the point LP k0 when the value V.seq k − 2 was written to V.seq for
the first time. As V.seq k is even by Lemma 1, it follows from Lemma 9 that LP k0 is a
linearization point or the initialization point. Using definition of Case 3a, LP k is the first
point when the value V.seq k was written to the field V.seq. So, we have V.seq i2 = V.seq k0 .
Thus, it holds that V.val i2 = V.val k0 by Lemma 7. Therefore, V.val i2 = L.val k0 as L.val k0 =
V.val k0 by induction hypothesis. Using definition of Case 3a, it also holds that a1i = V.val i2 .
Thus, we have a1i = L.val k0 and L.val k = b1i .
Now, assume that the instruction at LP k was executed by a process j. Using definition
of Case 3a, we have i = pid j12 . As LP k is the first time when the value of V.seq is
V.seq k = V.seq i2 + 2, we conclude that the process i is not finished until LP k by using Lemma 4. As seq j12 = V.seq k = V.seq i2 + 2, it is true that some process i0 has V.seq i02 = V.seq i2 and that this process executed Line 10 until LP k . As i = pid j12 , the process i0 = i. Moreover,
the process i did this during the call corresponding to the linearization point LP k as it
follows from Lemma 4 that there is a unique call for any process h given a fixed value of
V.seq h2 . Thus, the process i already executed Line 8 with b1i as the value of the second
field. This field has not changed as the call by process i is not finished until LP k . So, we
have val j13 = b1i and that V.val k = b1i as well. Because a1i = L.val k0 as shown before, we
also have L.ret k = true.
Case 3b: Let LP k0 and LP k00 be the first points when the values V.seq k and V.seq k − 2 are
written to V.seq respectively (LP k0 is just before the point LP k as defined by Case 3b). Let
i and j be the processes that execute the calls corresponding to the points LP k and LP k0
respectively. By definition of Case 3b, we have V.seq i2 = V.seq k0 − 2. As process j wrote
V.seq k0 to V.seq, we have V.seq j2 = V.seq k0 − 2 as well. So, we have V.val i2 = V.val j2 using
Lemma 7. Using definition of Case 3a and Case 3b, respectively, we have a1j = V.val j2 ≠ b1j and a1i = V.val i2 . So, we have a1i ≠ b1j . We have b1j = L.val k0 as argued in the previous
case, so it holds that L.val k = L.val k0 . By induction hypothesis, we have L.val k0 = V.val k0 .
Moreover, there are no operations after LP k0 and until LP k by definition of Case 3b. So, we have V.val k0 = V.val k and thus L.val k = V.val k . Also, we have L.ret k = false as a1i ≠ b1j = L.val k0 .
B Proof of Lemma 12
Proof. Say the k th linearization point is a Case 1 point. Using its definition, the value
returned by the corresponding call is false as the condition in Line 3 holds true. Using
Lemma 11, we have L.ret k = false as well for Case 1. Next, assume that the k th linearization point is a Case 2 point. Then, the value returned by the corresponding call is true as
the condition in Line 5 is true by definition. Using Lemma 11, we have L.ret k = true as
well for Case 2.
Now, consider that the k th linearization point is a Case 3a point. Say that the process
j executes the operation at the linearization point. As pid j12 = i by definition of Case 3a,
the process i already executed Line 10 with the first field as V.seq k − 1. So, the process
i also initialized R[i] to (cp j12 | false) in Line 9. Moreover, the process j wrote the value
(cp j12 |true) to R[i] afterwards using a max-write operation. Thus, the value of R[i].ret after
LP k is true. This field is not changed by i until it returns. And, other processes only write
true to the field. So, the call returns true which is same as the value of L.ret k given by
Lemma 11.
Next, consider that the k th linearization point is a Case 3b point. Let p be the point
when the process i initializes R[i] to a value (x | false) during the call (Line 9). Consider
a process j that tries to write true to R[i].ret after p (by executing Line 15). So, it holds
that pid j12 = i and that seq j12 is even. Now, we consider three cases depending on the
relation between seq j12 and V.seq k . First, consider that seq j12 > V.seq k . As pid j12 = i and
seq j12 is even, we have V.seq i2 = seq j12 − 2 using Lemma 8. So, we have V.seq i2 > V.seq k − 2.
This cannot happen until i finishes as V.seq i2 = V.seq k − 2 for the current call by i using
definition of Case 3b. Second, consider that seq j12 = V.seq k . Using definition of Case 3b,
there is a process h so that pid h12 ≠ i and seq h12 = V.seq k . As seq j12 = V.seq k by assumption, we have pid j12 ≠ i using Lemma 6. This contradicts our assumption that pid j12 = i. Third,
consider that seq j12 < V.seq k . As pid j12 = i and seq j12 is even, we have V.seq i2 = seq j12 − 2
using Lemma 8. So, we have V.seq i2 < V.seq k − 2. This corresponds to a previous call by
the process i as V.seq i2 = V.seq k − 2 for the current call by i. So, it holds that ca j13 < x
and execution of Line 15 has no effect. Thus, the process i returns false for Case 3b which
matches the L.ret k value given by Lemma 11.
If the k th linearization point is a Case 4 point, then we know from Lemma 4 that the
call is unfinished and we need not consider it.
C Proof of Consensus Numbers
First, we define some terms. A configuration of the system is the value of the local variables
of each process and the value of the shared registers. The initial configuration is the input
0 or 1 for each process and the initial values of the shared registers. A configuration
is called a bivalent configuration if there are two possible executions starting from the
configuration so that in one of them all the processes terminate and decide 0 and in the
other all the processes terminate and decide 1. A configuration is called 0-valent if in all
the possible executions starting from the configuration, the processes terminate and decide
0. Similarly, a configuration is called 1-valent if in all the possible executions starting
from the configuration, the processes terminate and decide 1. A configuration is called a
univalent configuration if it is either 0-valent or 1-valent. A bivalent configuration is called
critical if the next step by any process changes it to a univalent configuration. Consider
an initial configuration in which there is a process X with the input 0 and a process Y
with the input 1. This configuration is bivalent as X outputs 0 if it is made to run until
it terminates and Y outputs 1 if it is made to run until it terminates. As the terminating
configuration is univalent, a critical configuration is reached assuming that the processes
solve wait-free binary consensus.
Assume that the max-write operation can solve consensus between two processes A and
B. Then, a critical configuration C is reached. W.l.o.g., say that the next step sa by the
process A leads to a 0-valent configuration C0 and that the next step sb by the process B
leads to a 1-valent configuration C1 . In a simple notation, C0 = Csa and C1 = Csb . We
have the following cases.
1. sa and sb are operations on different registers: The configuration C0 sb is indistinguishable from the configuration C1 sa . Thus, the process B decides the same value
if it runs until termination from the configurations C0 sb and C1 sa , a contradiction.
2. sa and sb are operations on the same register and at least one of them is a read
operation: W.l.o.g., assume that sa is a read operation. Then, the configuration
C0 sb is indistinguishable to C1 with respect to B as the read operation by A only
changes its local state. Thus, the process B decides the same value if it runs until
termination from the configurations C0 sb and C1 , a contradiction.
3. sa and sb are write operations on the same register: Then, the configuration C0 sb
is indistinguishable from the configuration C1 as sb overwrites the value written by
sa . Thus, the process B will decide the same value from these configurations, a
contradiction.
4. sa and sb are max-write operations on the same register: Say that the arguments of
these operations are a | x and b | y for A and B respectively. W.l.o.g., assume that
b ≥ a. Then, the configuration C0 sb is indistinguishable from C1 as the operation
sb will overwrite both the fields of the register. Thus, the process B will decide the
same value from these configurations, a contradiction.
So, the critical configuration cannot be reached and the processes A and B cannot solve
consensus using the max-write primitive. For the half-max primitive, observe that it also
cannot solve consensus between the processes A and B as otherwise max-write primitive
would as well. Thus, the consensus number is one for the max-write as well as the half-max
primitive.
DESINGULARIZATION OF REGULAR ALGEBRAS
MOHSEN ASGHARZADEH
arXiv:1310.1862v2 [] 14 Aug 2017
Abstract. We identify families of commutative rings that can be written as a direct limit of a directed
system of noetherian regular rings and investigate the homological properties of such rings.
1. Introduction
The goal of this work is to identify rings R that can be realized as a direct limit of a directed system
{Ri : i ∈ Γ} of noetherian regular rings (which we then call a desingularization of R), and to investigate
the homological properties of such an R. We emphasize that the poset Γ is filtered. A paradigm for this, and one of the motivations for this work, is a result of Zariski [29] (and Popescu [24]):
Theorem 1.1. (Zariski-Popescu) Let (V, m) be a valuation domain containing a field k of characteristic zero. Then V has a desingularization.
It may be interesting to mention that the construction of desingularizations goes back to Akizuki [1]
and Nagata [20]. Recall from [6] that a ring is said to be regular, if each finitely generated ideal has finite
projective dimension. A ring is called coherent, if its finitely generated ideals are finitely presented. Our
first result in Section 2 is:
Proposition 1.2. Let R be a ring that has a desingularization and p a finitely generated prime ideal in
R. If Rp is coherent, then Rp is regular.
Also, Section 2 is devoted to computing the homological dimensions of an ideal I of a ring with a
desingularization {Ri : i ∈ Γ}. We do this by imposing some additional assumptions both on the ideal I,
the rings Ri and the poset Γ.
A quasilocal ring is a ring with a unique maximal ideal. A local ring is a noetherian quasilocal ring.
There are many definitions for the regularity condition in non-noetherian rings (see e.g. [15]). One of
these is the notion of super regularity. This notion was first introduced by Vasconcelos [28]. A coherent
quasilocal ring is called super regular if its global dimension is finite and equal to its weak dimension.
Section 3 deals with a desingularization of super regular rings. Our first result in this direction is
Proposition 3.3:
Proposition 1.3. Let {(Ri , mi )} be a directed system of local rings with the property that mi² = mi ∩ mi+1². If R := lim Ri is coherent and super regular, then each Ri is regular.
We present a nice application of the notion of super regularity: we compute the global dimension of
certain perfect algebras. To this end, suppose R is a complete local domain which is not a field and
suppose that its perfect closure R∞ is coherent. In Proposition 3.4 we show that
gl. dim(R∞ ) = dim R + 1.
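For orientation, recall the standard description (stated here for context, not taken from this paper) of the perfect closure of a domain R of prime characteristic p as a direct limit along the Frobenius endomorphism, which makes R∞ itself an instance of the direct-limit constructions studied here:

```latex
% Perfect closure as a direct limit along the Frobenius F(r) = r^p:
R_{\infty} \;=\; \varinjlim \bigl( R \xrightarrow{\;F\;} R
  \xrightarrow{\;F\;} R \xrightarrow{\;F\;} \cdots \bigr),
\qquad F(r) = r^{p}.
```

Note, however, that each term of this system is R itself, which is typically not regular, so this particular presentation is not a desingularization in the sense of this paper.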
2010 Mathematics Subject Classification. Primary 13H05; Secondary 18A30.
Key words and phrases. Coherent rings; direct limits; homological dimensions; perfect rings; regular rings.
1
2
Let {Ri } be a pure directed system of local rings and suppose that the maximal ideal of R := lim Ri has a finite free resolution. In Proposition 4.1, we show R is noetherian and regular. Let F be a finite field. In Proposition 5.3 we present the desingularization of the product ∏N F . This has some applications. For example, ∏N F is stably coherent.
We cite [13] as a reference book on commutative coherent rings.
2. Homological properties of a desingularization
We start by introducing some notation. By p. dimR (−) (resp. fl. dimR (−)), we mean projective
dimension (resp. flat dimension) of an R-module. Denote the ith Koszul homology module of R with
respect to x := x1 , . . . , xn by Hi (x; R).
Remark 2.1. Let {Ri : i ∈ Γ} be a directed system of rings and let x := x1, . . . , xn be in R := lim Ri. Let i0 be such that x ⊂ Ri0. Then lim_{i≥i0} H•(x, Ri) ≃ H•(x, R).
Proof. This is straightforward and we leave it to the reader.
Definition 2.2. Let (R, m) be a quasilocal ring. Suppose m is generated by a finite sequence of elements x := x1, . . . , xn. Recall from Kabele [15] that R is H1-regular if H1(x, R) = 0. Also, R is called Koszul regular if Hi(x, R) = 0 for all i > 0.
In general, H1 -regular rings are not Koszul regular, see [15].
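To fix ideas, here is a minimal illustration (ours, not from the source). Let (R, m) be a discrete valuation ring with uniformizer x, so m = (x). The Koszul complex of the single element x is

```latex
0 \longrightarrow R \xrightarrow{\ \cdot x\ } R \longrightarrow 0,
\qquad\text{so}\qquad
H_1(x, R) = \operatorname{Ann}_R(x) = 0,
```

and there is nothing in higher homological degrees, so R is both H1-regular and Koszul regular. Kabele's examples in [15] show that for longer generating sequences the vanishing of H1 alone does not force the higher Koszul homologies to vanish.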
Lemma 2.3. Let (R, m) be a coherent H1 -regular ring. Then R is Koszul regular.
Proof. Coherent regular local rings are integral domains. Recall from Definition 2.2 that m is finitely generated. Let x := x1, . . . , xn be a generating set for m. Since R is coherent and in view of [3, Lemma 3.7], the R-module Hi(y, R) is finitely generated whenever y is a finite sequence of elements. By using basic properties of Koszul homology and an easy induction, one may show that Hi(x1, . . . , xj; R) = 0 for all i > 0 and all j. We leave the routine details to the reader; please see [19, pages 127-128].
Proposition 2.4. Let R be a ring with a desingularization and let p ∈ Spec(R) be finitely generated. If
Rp is coherent, then Rp is regular.
Proof. Let {Ri : i ∈ Γ} be a directed system of noetherian regular rings such that R := lim Ri. To simplify the notation, we replace Rp with (R, m) and (Ri)_{p∩Ri} with (Ri, mi). In view of the natural isomorphism Rp ≃ lim (Ri)_{p∩Ri}, we may do such a replacement. Let x := x1, . . . , xn be a generating set for m. Without loss of generality, we can assume that xi ∈ Rj for all i and all j. Set Ai := Ri/(x). In the light of [17, Lemma 2.5.1], there is an exact sequence
0 → H2(Ri, Ai, Ai) → H1(x, Ri) → Ai^n → (x)/(x)^2 → 0,
where H∗(−, −, −) is the André-Quillen homology. By Remark 2.1, Koszul homology behaves well with respect to direct limits. Recall from [17, Proposition 1.4.8] that André-Quillen homology behaves well with respect to direct limits. These induce the following exact sequence
0 −→ H2(R, k, k) −→ H1(x, R) −→ R^n/mR^n −π→ m/m^2 −→ 0,
where the map π is induced from the natural surjective homomorphism that sends the canonical basis of Ai^n to x. We view π as a surjective map of finite-dimensional vector spaces of the same dimension. In particular, π is an isomorphism.
Recall that k (resp. ki) is the residue field of R (resp. Ri). In the light of [17, Proposition 1.4.8, Corollary 2.5.3], H2(R, k, k) ≃ lim H2(Ri, ki, ki) = 0. Thus H1(x, R) = 0, because H2(R, k, k) = 0 and π is an isomorphism. Due to Lemma 2.3, fl. dimR(k) < ∞. Again, as R is coherent and in view of [13, Corollary 2.5.10], any finitely generated ideal of R has finite projective dimension, i.e., R is regular.
By w. dim(R), we mean the weak dimension of R. By definition
w. dim(R) := sup{fl. dim(M ) : M is an R-module},
see [13, Page 20].
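Two standard values may help calibrate this invariant (our remark, not from the source): w. dim(R) = 0 exactly when R is von Neumann regular, and over a principal ideal domain every module has flat dimension at most 1, so

```latex
\operatorname{w.dim}(\mathbb{Z}) = 1, \qquad
0 \longrightarrow \mathbb{Z} \xrightarrow{\ \cdot 2\ } \mathbb{Z}
\longrightarrow \mathbb{Z}/2\mathbb{Z} \longrightarrow 0,
```

since Z/2Z is not flat (Tor_1^Z(Z/2Z, Z/2Z) ≅ Z/2Z ≠ 0).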
Lemma 2.5. Let {Ri : i ∈ Γ} be a directed system of rings whose weak dimensions are bounded by an integer n. Set R := lim Ri. The following assertions hold:
(i) The flat dimension of any R-module is bounded above by n.
(ii) If R is coherent, then the projective dimension of any finitely presented R-module is bounded above by n.
Proof. (i): Let M and N be two R-modules. By [12, VI, Exercise 17], Tor_j^R(M, N) ≃ lim Tor_j^{Ri}(M, N), which is zero for all j > n; this is what we were looking for.
(ii): This follows by [26, Corollary 11.5].
Let a be an ideal of a ring R and M an R-module. Let Σ be the family of all finitely generated subideals b of a. The Koszul grade of a finitely generated ideal a := (x1, . . . , xn) on M is defined by
K. gradeR(a, M) := inf{i ∈ N ∪ {0} | H^i(HomR(K•(x), M)) ≠ 0}.
Note that by [9, Corollary 1.6.22] and [9, Proposition 1.6.10 (d)], this does not depend on the choice of generating sets of a. For an ideal a (not necessarily finitely generated), the Koszul grade of a on M can be defined by K. gradeR(a, M) := sup{K. gradeR(b, M) : b ∈ Σ}. By using [9, Proposition 9.1.2 (f)], this definition coincides with the original definition for finitely generated ideals.
Corollary 2.6. Let {Ri : i ∈ Γ} be a directed system of coherent regular quasilocal rings whose Krull dimensions are bounded by an integer. Suppose each Ri is noetherian and Γ is countable, or R is coherent. Then R := lim Ri is regular.
Proof. First, suppose that each Ri is noetherian and Γ is countable. Any ideal of R is countably generated. It follows by the proof of [23, Corollary 2.47] that p. dim(−) ≤ fl. dim(−) + 1. It remains to recall that w. dim(Ri) = dim(Ri), because Ri is noetherian.
Now, suppose that R is coherent. Let I be a finitely generated ideal of R generated by x := x1, . . . , xn. There is j ∈ Γ such that x ⊆ Ri for all i ≥ j. Denote xRi by Ii and define mi := m ∩ Ri. In view of [3, Lemma 3.2], K. gradeRi(mi, Ri) ≤ dim Ri. Note that Ri/Ii has a finite free resolution. By [21, Chap. 6, Theorem 2],
fl. dim(Ri/Ii) ≤ p. dim(Ri/Ii) = K. grade(mi, Ri) − K. grade(mi, Ri/Ii) ≤ K. grade(mi, Ri) ≤ dim Ri.
Thus,
sup{w. dim Ri : i ∈ Γ} ≤ sup{dim Ri : i ∈ Γ} < ∞.
So, Lemma 2.5 completes the proof.
Proposition 2.7. Let R be a quasilocal ring with a desingularization {Ri : i ∈ Γ}. The following holds:
(i) Any two-generated ideal of R has flat dimension bounded by 1.
(ii) If Γ is countable, then any two-generated ideal of R has projective dimension bounded by 2.
Proof. (i): Let I = (a, b) be a two-generated ideal of R. Without loss of generality, we may assume that each Ri is local. There is i0 ∈ Γ such that {a, b} ⊂ Ri for all i > i0. We now apply an idea of Buchsbaum-Eisenbud. As Ri is a unique factorization domain and in view of [10, Corollary 5.3], {a, b} has a greatest common divisor c such that {a/c, b/c} is a regular sequence. Thus, p. dimRi((a/c, b/c)Ri) < 2. Multiplication by c shows that (a/c, b/c) ≃ (a, b). Conclude that p. dimRi((a, b)Ri) < 2. Then, by the same reasoning as Lemma 2.5(i), fl. dimR(I) < 2.
(ii): In view of part (i), the claim follows by the argument of Corollary 2.6.
We will use the following result several times.
Lemma 2.8. (See [19, Theorem 23.1]) Let ϕ be a local map from a regular local ring (R, m) to a Cohen-Macaulay local ring (S, n). Suppose dim R + dim S/mS = dim S. Then ϕ is flat.
Example 2.9. The conclusion of Proposition 2.7 cannot be carried over to three-generated ideals.
Proof. Let k be any field. For each n ≥ 3, set Rn := k[x1, . . . , x2n−4]. Define
In := (x1, x2) ∩ . . . ∩ (x2n−5, x2n−4),
fn := x1x3 · · · x2n−5, and gn := x2x4 · · · x2n−4. Let hn be such that ((fn, gn) : hn) = In. It is proved in [11] that
p. dimRn(Rn/(fn, gn, hn)) = n. (∗)
The assignments x1 ↦ x1x2n−3, x2 ↦ x2x2n−2, and xi ↦ xi (for i ≠ 1, 2) define a ring homomorphism ϕn,n+1 : Rn −→ Rn+1. This has the following properties: ϕn,n+1(fn) = fn+1, ϕn,n+1(gn) = gn+1, and ϕn,n+1(In) ⊆ In+1. By using this, we can choose hn+1 such that ϕn,n+1(hn) = hn+1.
Look at the directed system {Rn, ϕn,n+1} and denote the natural map from Rn to R := lim Rn by ϕn. In view of the commutative triangle ϕn+1 ∘ ϕn,n+1 = ϕn, relating ϕn,n+1 : Rn → Rn+1, ϕn : Rn → R and ϕn+1 : Rn+1 → R, the ideal I := (ϕn(fn), ϕn(gn), ϕn(hn))R is independent of n.
Claim A. The extension Rn → Rn+1 is flat.
Indeed, denote the unique graded maximal ideal of Rn by mn. Set An := (Rn)mn. In view of [19, Page 178], Rn → Rn+1 is flat provided the induced map ψn : An → An+1 is flat. In order to prove ψn is flat, we note that dim(An) = 2n − 4 and dim(An+1) = 2n − 2. Let m be the unique graded maximal ideal of S := k[x1, x2, x2n−3, x2n−2]. Then
An+1/mnAn+1 ≃ Sm/(x1x2n−3, x2x2n−2)Sm.
Since x2x2n−2 ∉ ∪_{p∈Ass(Sm/(x1x2n−3))} p, the sequence x1x2n−3, x2x2n−2 is regular over Sm. In particular, dim(An+1/mnAn+1) = 2. Thus, dim(An+1) = dim(An) + dim(An+1/mnAn+1). In view of Lemma 2.8 we observe that An → An+1 is flat. This finishes the proof of the claim.
Set Tn := Tor_n^{Rn}(Rn/(fn, gn, hn), k). Due to (∗), Tn ≠ 0. Since Tn is graded (see [9, Page 33]), one has (Tn)mn ≠ 0 (see e.g. [9, Proposition 1.5.15(c)]). By [19, Exercise 7.7], Tor-modules commute with localization. In the light of the rigidity property of Tor-modules over equal-characteristic regular local rings (Auslander-Lichtenbaum), we see that Tor_{n−i}^{Rn}(Rn/(fn, gn, hn), k)mn ≠ 0 for all i ≥ 0. For more details, please see [5]. In particular, Tor_{n−i}^{Rn}(Rn/(fn, gn, hn), k) ≠ 0 for all i ≥ 0. The map ϕn,n+1 induces the following map:
τ_{n,n+1}^ℓ : Tor_ℓ^{Rn}(Rn/(fn, gn, hn), Rn/mn) → Tor_ℓ^{Rn+1}(Rn+1/(fn+1, gn+1, hn+1), Rn+1/mnRn+1).
Claim B. The map τ_{n,n+1}^ℓ is one to one.
Indeed, in view of Claim A, the extension Rn → Rn+1 is flat. Set T := Tor_ℓ^{Rn}(Rn/(fn, gn, hn), Rn/mn). This is a graded module over Rn. Recall from [19, Exercise 7.7] that
Tor_ℓ^{Rn+1}(Rn+1/(fn+1, gn+1, hn+1), Rn+1/mnRn+1) ≃ T ⊗Rn Rn+1.
Due to the proof of [19, Theorem 7.4(i)], T ↪ T ⊗Rn Rn+1 is one to one. Thus, τ_{n,n+1}^ℓ is one to one. This finishes the proof of the claim.
Let ℓ ≤ n. We combine Claim B along with [12, VI, Exercise 17] to construct the following injection:
Rn/mℓRn ≃ (Rℓ/mℓ) ⊗Rℓ Rn ↪ Tor_ℓ^{Rn}(Rn/(fn, gn, hn), Rn/mℓRn) ↪ lim Tor_ℓ^{Rn}(Rn/(fn, gn, hn), Rn/mℓRn) ≃ Tor_ℓ^R(R/I, R/mℓR).
Thus, 0 ≠ R/mℓR ↪ Tor_ℓ^R(R/I, R/mℓR). So p. dimR(R/I) = ∞, as claimed.
3. Desingularization via super regularity
We will use the following result several times.
Lemma 3.1. (See [28]) Let (R, m) be a super regular ring. Then m can be generated by a regular sequence.
In particular, m is finitely generated.
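For instance (our remark, not from the source), any noetherian regular local ring (R, m) of dimension d is super regular:

```latex
\operatorname{gl.dim}(R) = \operatorname{w.dim}(R) = d
```

by Serre's characterization of regularity, and R is coherent since it is noetherian; in this case Lemma 3.1 recovers the classical fact that m is generated by a regular system of parameters.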
The notation SymR(−) stands for the symmetric algebra of an R-module. Also, we set GrR(I) := ⊕_{i=0}^∞ I^i/I^{i+1}, where I is an ideal of R.
Lemma 3.2. Let {(Ri, mi, ki) : i ∈ Γ} be a directed system of local rings. Set R := lim Ri, m = lim mi and k = lim ki. The following holds:
i) GrR(m) ≃ lim_i GrRi(mi).
ii) Symk(k^{⊕µ(m)}) ≃ lim_i Symki(ki^{⊕µ(mi)}).
Proof. i) Taking the colimit of the following exact sequence of directed systems
0 −→ {m_i^{n+1}}_i −→ {m_i^n}_i −→ {m_i^n/m_i^{n+1}}_i −→ 0,
and using the 5-lemma yields that lim_i m_i^n/m_i^{n+1} ≃ m^n/m^{n+1}. In particular, GrR(m) ≃ lim_i GrRi(mi).
ii) This is in [13, 8.3.3].
Proposition 3.3. Let {(Ri, mi) : i ∈ Γ} be a directed system of local rings with the property that m_i^2 = m_i ∩ m_{i+1}^2. If R := lim Ri is coherent and super regular, then each Ri is regular.
Proof. Denote the maximal ideal of R by m and denote the residue field of R (resp. Ri) by k (resp. ki). In view of Lemma 3.1, m is generated by a regular sequence. Thus we are equipped with the following isomorphism
θ : Symk(k^{⊕µ(m)}) −→ GrR(m) := ⊕_{i=0}^∞ m^i/m^{i+1}.
Look at the natural epimorphism
θi : Symki(ki^{⊕µ(mi)}) ↠ GrRi(mi),
and the natural map ϕi : Vi := mi/m_i^2 ↪ Vi+1 := mi+1/m_{i+1}^2. We claim:
Claim A. The map ϑi := Sym(ϕi) is a monomorphism.
Indeed, we look at the chain of maps
Symki(Vi) −f→ Symki(Vi+1) −g→ Symki(Vi+1) ⊗ki ki+1 ≃ Symki+1(Vi+1 ⊗ki ki+1) ≃ Symki+1(⊕_{dimki(ki+1)} Vi+1),
together with the natural map h : Symki(Vi+1) ⊗ki ki+1 → Symki+1(Vi+1) and the natural map i : Symki+1(Vi+1) → Symki+1(⊕_{dimki(ki+1)} Vi+1); these fit into a commutative diagram in which ϑi = h ∘ g ∘ f and i ∘ h is the composite of the two isomorphisms above. Remark that:
1) Since Vi is a direct summand of Vi+1 as a ki-vector space, f is a monomorphism.
2) The map g is a monomorphism, because ki is a field.
3) The first isomorphism follows by [13, 8.3.2].
4) The second isomorphism follows by ⊕_{dimki(ki+1)} Vi+1 ≃ Vi+1 ⊗ki ki+1.
5) Since Vi+1 is a direct summand of ⊕_{dimki(ki+1)} Vi+1 as a ki+1-vector space, i is a monomorphism.
By these, we conclude that the map h is a monomorphism. So ϑi : Symki(Vi) → Symki+1(Vi+1) is a monomorphism. This completes the proof of Claim A.
Set Ki := ker θi. Also, remark that ϑi(Ki) ⊆ Ki+1. Again we denote the restriction map by ϑi : Ki ↪ Ki+1. Recall that lim ki ≃ k and lim(m_i^n/m_i^{n+1}) ≃ m^n/m^{n+1}. In view of Lemma 3.2, Sym(−) and Gr(−) behave well with respect to direct limits. Hence, θ = lim θi. Put all of these together to observe
Ki ↪ lim Ki ≃ ker θ = 0.
So, θi is an isomorphism. This means that mi is generated by a regular sequence. The regularity of Ri follows by this, because Ri is noetherian.
By gl. dim(R) we mean the global dimension of R. Let R be a noetherian local domain of prime characteristic p. Recall that the perfect closure R∞ of R is obtained by adjoining to R all higher p-power roots of all elements of R.
Proposition 3.4. Let R be a local domain of prime characteristic which is either excellent or a homomorphic image of a Gorenstein local ring, and suppose that its perfect closure is coherent (e.g., R is regular). If R is not a field, then gl. dim(R∞) = dim R + 1.
Proof. Let x := x1, . . . , xd be a system of parameters for R and set p := char R.
Claim A. The sequence x is regular on R∞.
Indeed, this is in [25, Theorem 3.10] when R is a homomorphic image of a Gorenstein ring. The argument of [25, Theorem 3.10] is based on Almost Ring Theory. The claim in the excellent case is in [4, Lemma 3.1]. This uses non-noetherian Tight Closure Theory.
In particular, fl. dim(R∞/xR∞) = d. Combining this with [2, Theorem 1.2], w. dim(R∞) = dim R < ∞. The same citation yields that gl. dim(R∞) ≤ dim R + 1. Suppose on the contrary that
gl. dim(R∞) ≠ dim R + 1.
This says that gl. dim(R∞) = w. dim(R∞). Note that R∞ is coherent and quasilocal. Denote its maximal ideal by mR∞. By definition, R∞ is super regular. In the light of Lemma 3.1, mR∞ is finitely generated. We bring the following claim.
Claim B. One has mR∞ = (mR∞)^p.
Indeed, clearly (mR∞)^p ⊂ mR∞. Conversely, let r ∈ mR∞. Since R∞ is perfect, any polynomial such as f(X) := X^p − r has a root. Let r^{1/p} ∈ R∞ be a root of f. We have (r^{1/p})^p ∈ mR∞. Since mR∞ is prime, r^{1/p} ∈ mR∞. We conclude from this that r = (r^{1/p})^p ∈ (mR∞)^p, as claimed.
By Nakayama's Lemma, mR∞ = 0, i.e., R∞ is a field. So, R is a field. This is a contradiction.
Question 3.5. Let R be a local domain of prime characteristic. What is gl. dim(R∞ )?
Proposition 3.6. Let R be a quasilocal ring containing a field of prime characteristic p which is an integral and purely inseparable extension of an F-finite regular local ring R0. If R contains all roots of R0, then R has a desingularization with respect to a flat directed system of noetherian regular rings.
Proof. Note that R0 contains a field. Write R as a directed union of a filtered system {Ri } of its subrings
which are finitely generated algebras over R0 .
Claim A. The ring Ri is local.
Indeed, by assumption R0 is local. Denote its maximal ideal by m0. Let mi and ni be two maximal ideals of Ri. Both of them lie over m0. Let r ∈ mi. Since the extension R0 → Ri is integral and purely inseparable, we observe that r^{p^n} ∈ R0 for some n ∈ N. Thus r^{p^n} ∈ m0 = ni ∩ R0. Since ni is prime and r ∈ Ri, we deduce that r ∈ ni. Hence mi ⊂ ni. Therefore mi = ni, because mi is maximal. So, Ri is local, as claimed.
We denote the unique maximal ideal of Ri by mi. Since R0 → R1 is integral, d := dim R0 = dim R1. Remark that if y ∈ R1, there is n1 ∈ N such that y^{p^{n1}} ∈ R0. Since R0 → R1 is integral, R1 is finitely generated as an R0-module. From this we can pick a uniform n such that y^{p^n} ∈ R0 for any y ∈ R1. After adding R0^{1/p^n} to R1 and denoting the new ring again by R1, we may assume that R0^{1/p^n} ⊂ R1; here is a place where we use the assumptions that R0^∞ ⊆ R and that R0^{1/p^n} is finite over R0. Now, let x be a minimal generating set for m0. In particular, x is a regular system of parameters on R0.
Claim B. Let n be as in the above paragraph. Then m1 = (x1^{1/p^n}, . . . , xd^{1/p^n})R1.
Indeed, let y ∈ m1. Then y^{p^n} ∈ R0. In particular, y^{p^n} ∈ m0. Then y^{p^n} = Σ ri xi where ri ∈ R0. We are in a situation to take p^n-th roots in R1; this is due to the choice of n. Taking p^n-th roots, we have y = Σ ri^{1/p^n} xi^{1/p^n}, where ri^{1/p^n} ∈ R1 and xi^{1/p^n} ∈ m1. Thus y ∈ (x1^{1/p^n}, . . . , xd^{1/p^n})R1. Therefore, m1 ⊂ (x1^{1/p^n}, . . . , xd^{1/p^n})R1 ⊊ R1. The reverse inclusion follows, because m1 is maximal. So, m1 = (x1^{1/p^n}, . . . , xd^{1/p^n})R1, as claimed.
In view of the claim, R1 is regular. Since m0 R1 is m1 -primary, the extension R0 → R1 is flat, please
see Lemma 2.8. Repeating this, one may observe that {Ri } is a desingularization for R and that Ri → Rj
is flat.
Lemma 3.7. One has lim_{i∈Γ} Ri[X] ≃ (lim_{i∈Γ} Ri)[X].
Proof. This is straightforward and we leave it to the reader.
Corollary 3.8. Adopt the notation of Proposition 3.6. Then R is stably coherent.
Proof. There is a flat directed system {Ri } of noetherian regular rings such that its direct limit is R. In
particular, Ri [X] → Rj [X] is flat. In view of [13, Theorem 2.3.3] and Lemma 3.7, R is stably coherent.
The assumption R0∞ ⊆ R in Proposition 3.6 is really needed:
Example 3.9. Let F be a field of characteristic 2 with [F : F^2] = ∞. Let R̂0 = F[[x, y]] be the ring of formal power series in the variables {x, y} and look at R0 := F^2[[x, y]][F]. Let {bi : i ∈ N} ⊂ F be an infinite set of 2-independent elements. Set
en := Σ_{i=n}^∞ bi(xy)^i / y^n,
fn := Σ_{i=n}^∞ bi(xy)^i / x^n.
Define R := R0[ei, fi : i ∈ N]. This is quasilocal. Denote its unique maximal ideal by m. Recall from [20, Page 206] that R0 → R̂0 is integral and purely inseparable. Since R0 ⊂ R ⊂ R̂0, we get that R0 ⊂ R is integral and purely inseparable. By [15, Example 1], p. dimR(m) = ∞ and m is finitely generated. In particular, R is not regular. We conclude from Lemma 2.3 that R is not coherent. So, R has no desingularization with respect to its noetherian regular subrings.
4. Desingularization via purity
We begin by recalling the notion of purity. Let M ⊂ N be modules over a ring R. Recall that M is pure in N if M ⊗R L → N ⊗R L is a monomorphism for every R-module L. We say a directed system {Ri : i ∈ Γ} is pure if Ri −→ Rj is pure for all i, j ∈ Γ with i ≤ j.
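Two quick examples may help (ours, not from the source). If M is a direct summand of N, then M is pure in N, since tensoring preserves split exactness. By contrast, the inclusion 2Z ⊂ Z is not pure: applying − ⊗_Z Z/2Z gives

```latex
2\mathbb{Z} \otimes_{\mathbb{Z}} \mathbb{Z}/2\mathbb{Z} \;\cong\; \mathbb{Z}/2\mathbb{Z}
\xrightarrow{\ 0\ } \mathbb{Z}/2\mathbb{Z} \;\cong\; \mathbb{Z} \otimes_{\mathbb{Z}} \mathbb{Z}/2\mathbb{Z},
```

the zero map (the generator 2 ⊗ 1 maps to 2 · (1 ⊗ 1) = 0), which is not injective.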
Proposition 4.1. Let {(Ri, mi) : i ∈ Γ} be a pure directed system of local rings such that the maximal ideal of (R, m) := lim_{i∈Γ} Ri has a finite free resolution. Then the following assertions are true:
(i) There exists an i ∈ Γ such that Rj is regular for all i ≤ j.
(ii) There exists an i ∈ Γ such that Rj → Rk is flat for all i ≤ j ≤ k.
(iii) The ring R is noetherian and regular.
Proof. (i): Look at the following finite free resolution of m:
0 → FN → · · · → Fj+1 −fj→ Fj → · · · → F0 → m → 0,
where Fj is finite free and fj is given by a matrix with finitely many rows and columns. By It(fj), we mean the ideal generated by the t × t minors of fj. Let rj be the expected rank of fj; see [9, Section 9.1] for its definition. By [9, Theorem 9.1.6], K. gradeR(Irj(fj), R) ≥ j. There is an index i ∈ Γ such that all of the components of {fj} are in Ri. Let Fj(i) be the free Ri-module with the same rank as Fj. Consider fj as a matrix over Ri, and denote it by fj(i). Recall that m is finitely generated. Choosing i sufficiently large, we may assume that m = miR. For the convenience of the reader, we bring the following claim.
Claim A. Let A be a subring of a commutative ring B. Let X ∈ Mat_{r×s}(A) and Y ∈ Mat_{s×t}(A) be matrices. Look at X and Y as matrices whose entries come from B. If XY ∈ Mat_{r×t}(B) is zero as a matrix over B, then XY ∈ Mat_{r×t}(A) is zero as a matrix over A.
Indeed, since A is a subring of B, the claim is trivial.
Thus, fj(i)fj+1(i) = 0. Look at the following complex of finite free modules:
0 → FN(i) → · · · → Fj+1(i) −fj(i)→ Fj(i) → · · · → F0(i) → mi → 0.
We are going to show that this is exact. Recall that It(fj(i)) is the ideal generated by the t × t minors of fj(i). Clearly, rj is the expected rank of fj(i). Let z := z1, . . . , zs be a generating set for It(fj(i)). In view of the purity, there are monomorphisms 0 −→ Hj(z, Ri) −→ Hj(z, R) for all i and j; see [9, Exercise 10.3.31]. Then,
K. gradeR(Irj(fj), R) ≤ K. gradeRi(Irj(fj(i)), Ri).
Thus, K. gradeRi(Irj(fj(i)), Ri) ≥ j. Again, due to [9, Theorem 9.1.6],
0 −→ FN(i) −→ · · · −→ F0(i)
is acyclic. Thus, p. dim(Ri/mi) < ∞. By the Local-Global Principle (please see [9, Theorem 2.2.7]), Ri is regular.
(ii): By purity, dim Rm ≥ dim Rn for all n ≤ m; see [8, Remark 4 and Corollary 5]. Again, in the light of purity,
mm = (mmR) ∩ Rm = (mnR) ∩ Rm = (mnRm)R ∩ Rm = mnRm,
for all n ≤ m. Thus mnRm = mm. Denote the minimal number of elements required to generate the ideal mm by µ(mm). Consequently, µ(mm) ≤ µ(mn). By part (i), (Ri, mi) is regular. Hence
dim Rm = µ(mm) ≤ µ(mn) = dim Rn.
Therefore, dim Rm = dim Rn. In view of Lemma 2.8, Rn → Rm is flat for all n ≤ m, as claimed.
(iii): Recall from (ii) that mm = mnRm for all n < m. In view of [22], R is noetherian. The ring R is regular, because p. dimR(R/m) < ∞.
Example 4.2. Here, we present a desingularization with respect to a non-pure directed system. To this end, let R := {n + Σ_{i=1}^ℓ ni t^i : ℓ ∈ N, n ∈ Z, ni ∈ Z[1/2]}. Then R has a desingularization {(Ri, φi,j)} where φi,j : Ri → Rj is not pure.
Proof. For each i ∈ N, set Ri := Z[t/2^i]. Note that Ri is a noetherian regular ring. The system {Ri}_{i∈N} is directed with respect to the inclusion. Let f ∈ R. Then f = n + Σ_{i=1}^ℓ ni t^i where ni ∈ Z[1/2]. There is k ∈ Z such that ni = mi/2^{ik} for some mi ∈ Z and for all i. Deduce by this that f ∈ Rk. Thus, {Ri} gives a desingularization for R. Now, we look at the equation 2X = t/2^i. Clearly, t/2^{i+1} ∈ Ri+1 is a solution. The equation has no solution in Ri. In the light of [19, Theorem 7.13], the map Ri → Ri+1 is not pure, as claimed.
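The two membership tests used above are elementary enough to check mechanically. The following Python sketch (our illustration; the function name is ours) encodes an element of Q[t] by its list of rational coefficients, tests membership in Ri = Z[t/2^i] by checking that the degree-k coefficient times 2^{ik} is an integer, and confirms that t/2^{i+1}, the unique solution of 2X = t/2^i in the domain Q[t], lies in Ri+1 but not in Ri.

```python
from fractions import Fraction

def in_R(i, coeffs):
    """Membership in R_i = Z[t/2^i]: the degree-k coefficient must lie in Z * 2^(-i*k)."""
    return all((c * 2**(i * k)).denominator == 1 for k, c in enumerate(coeffs))

i = 3
rhs = [Fraction(0), Fraction(1, 2**i)]        # the element t/2^i
x = [Fraction(0), Fraction(1, 2**(i + 1))]    # candidate solution t/2^(i+1)

# 2*x equals t/2^i, so x solves 2X = t/2^i; since Q[t] is a domain,
# x is the only possible solution in any subring of Q[t].
assert [2 * c for c in x] == rhs
# x lies in R_{i+1} but not in R_i, so the inclusion R_i -> R_{i+1} is not pure.
assert in_R(i + 1, x) and not in_R(i, x)
print("2X = t/2^i is solvable in R_{i+1} but not in R_i")
```

The same test also shows f ∈ Rk for any f ∈ R once k is large enough, matching the directedness argument in the proof.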
Also, we present the following example.
Example 4.3. Let D be a noetherian regular integral domain with fraction field Q. Let R := {f ∈ Q[X1, . . . , Xn] : f(0, . . . , 0) ∈ D}. Then R has a desingularization.
Proof. Without loss of generality, we may assume that D ≠ Q. Recall that
R = {f ∈ Q[X1, . . . , Xn] : f(0, . . . , 0) ∈ D} ≃ ⊗_{1≤i≤n} {f ∈ Q[Xi] : f(0) ∈ D}.
Let F := Q^{⊕n}. This is flat as a D-module. Under the identification SymQ(Q) = Q[X], the image of SymD(Q) in the natural map SymD(Q) → SymQ(Q) is D + XQ[X]. So
SymD(F) ≃ ⊗_n SymD(Q) ≃ ⊗_n {f ∈ Q[Xi] : f(0) ∈ D}.
Due to Lazard's theorem, there is a directed system of finitely generated free modules {Fi : i ∈ Γ} with direct limit F. In view of [13, 8.3.3],
R ≃ SymD(F) ≃ lim_{i∈Γ} SymD(Fi).
Since Ri := SymD(Fi) is a noetherian regular ring, R can be realized as a direct limit of the directed system {Ri : i ∈ Γ} of noetherian regular rings.
5. Desingularization of products
The following definition is taken from [17].
Definition 5.1. A ring is called DLFPF if it is a direct limit of finite products of fields.
Question 5.2. (See [17, Question 10]) Let E be a field and let R be a maximal DLFPF subring of ∏_N E. Does R contain a field isomorphic to E?
The following answers Question 5.2 in the finite-field case.
Proposition 5.3. Let F be a finite field. Then ∏_N F ≃ lim (⊕_{finite} Fi) where each Fi is a field.
Proof. Let S ⊂ ∏_N F be the subring consisting of all elements that have only finitely many distinct coordinates. Since F is finite, S = ∏_N F. By [14, Proposition 5.2], S = ∪ Aj where Aj is an artinian regular subring of ∏_N F. It remains to note that any artinian regular ring is isomorphic to a finite direct product of fields.
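The decomposition behind Proposition 5.3 can be made concrete on a finite window of coordinates. The sketch below (our illustration, not from the paper) takes a tuple over F5, forms the indicator tuples e_a of the coordinates where each value a occurs, and checks that these are orthogonal idempotents with x = Σ a·e_a; an element with finitely many distinct coordinates therefore lies in a subring isomorphic to a finite product of copies of F.

```python
p = 5  # work over the finite field F_p (p prime, so arithmetic is just mod p)

def idempotents(x):
    """For each value a occurring in x, the indicator tuple e_a; these are
    orthogonal idempotents of the product ring F_p x ... x F_p."""
    return {a: tuple(1 if c == a else 0 for c in x) for a in set(x)}

x = (2, 0, 2, 3, 0, 3, 3, 2)
es = idempotents(x)

# x = sum over a of a * e_a, coordinatewise mod p
recon = tuple(sum(a * e[j] for a, e in es.items()) % p for j in range(len(x)))
assert recon == x

# the e_a are orthogonal idempotents: e_a * e_a = e_a and e_a * e_b = 0 for a != b
for a, ea in es.items():
    for b, eb in es.items():
        prod = tuple(u * v % p for u, v in zip(ea, eb))
        assert prod == (ea if a == b else tuple(0 for _ in x))
print("x decomposes as a sum of value * idempotent")
```

Since F is finite, every element of ∏_N F admits such a decomposition with at most |F| idempotents, which is why S above is the whole product.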
Corollary 5.4. Let F be a finite field. Then ∏_N F is stably coherent.
Proof. We adopt the notation of Proposition 5.3 and we denote a finite family of variables by X. Let Rγ := ⊕_{1≤i≤nγ} Fi. Let γ ≤ δ. Since Rγ is of zero weak dimension, Rγ → Rδ is flat. It turns out that Rγ[X] → Rδ[X] is flat. In view of Lemma 3.7 and by Proposition 5.3,
(∏_N F)[X] ≃ (lim_{γ∈Γ} Rγ)[X] ≃ lim_{γ∈Γ}(Rγ[X])
is a flat direct limit of noetherian regular rings. By [13, Theorem 2.3.3], ∏_N F is stably coherent.
In the following item we collect some homological properties of products of rings.
Fact 5.5. i) Let R be a noetherian local ring. Combining [13, Theorem 6.1.2] and [13, Theorem 6.1.20] yields that ∏_N R is coherent if and only if dim R < 3.
ii) Let {Rn : n ∈ N} be a family of rings such that R := ∏ Rn is coherent. Then w. dim R = sup{w. dim Rn}; please see [13, Theorem 6.3.6].
iii) (See the proof of [23, Corollary 2.47]) Let a be an ideal of an ℵn-noetherian ring A. Then p. dimA(A/a) ≤ fl. dimA(A/a) + n + 1.
iv) (See [17, Introduction]) The ring ∏_N Q cannot be written as a direct limit of finite products of fields.
Example 5.6. The ring R := ∏_N Q is coherent and regular. But R has no desingularization with respect to its noetherian regular subrings.
Proof. By Fact 5.5 i), ∏_N Q is coherent. In the light of Fact 5.5 ii), R is von Neumann regular, i.e., R is of zero weak dimension. We deduce from Fact 5.5 iii) that gl. dim(R) < 3. Therefore, R is coherent and regular. Suppose on the contrary that R can be written as a direct limit of its noetherian regular subrings {Ri : i ∈ I}. Due to [9, Corollary 2.2.20], Ri ≃ Ri1 × . . . × Rini, where Rij is a noetherian regular domain. By Qij we mean the fraction field of Rij. Recall that lim_i lim_j Qij is a direct limit of finite products of its subfields. This is a consequence of the fact that any double direct limit is a direct limit; please see [7, III, Proposition 9]. Recall from [17, Corollary 4] that a von Neumann regular subring of a DLFPF ring is DLFPF. We conclude by this that ∏_N Q can be written as a direct limit of finite products of fields. This is a contradiction; please see Fact 5.5 iv).
Acknowledgement. I would like to thank M. Dorreh, S. M. Bhatwadekar and H. Brenner for discussions on direct limits. I thank the anonymous referee for various suggestions.
References
[1] Y. Akizuki, Eigene Bemerkungen über primäre Integritätsbereiche mit Teilerkettensatz, Proc. Phys.-Math. Soc. Japan 17 (1935), 327-336.
[2] M. Asgharzadeh, Homological properties of the perfect and absolute integral closure of Noetherian domains, Math.
Annalen 348 (2010), 237-263.
[3] M. Asgharzadeh and M. Tousi, On the notion of Cohen-Macaulayness for non-Noetherian rings, Journal of Algebra,
322 (2009), 2297–2320.
[4] M. Asgharzadeh and R. Bhattacharyya, Some remarks on big Cohen-Macaulay algebras via closure operations, Journal
of algebra and its applications, 11, No. 4 (2012).
[5] M. Auslander, Modules over unramified regular local rings, Ill. J. Math. 5, 631-647 (1961).
[6] J. Bertin, Anneaux cohérents réguliers, C. R. Acad. Sci. Paris, Sér A-B, 273, (1971).
[7] N. Bourbaki, Theory of sets, Reprint of the 1968, English translation elements of mathematics (Berlin), Springer-Verlag,
Berlin, (2004).
[8] H. Brenner, Lifting chains of prime ideals, J. Pure Appl. Algebra 179, (2003), 1-5.
[9] W. Bruns and J. Herzog, Cohen-Macaulay rings, Cambridge University Press 39, Cambridge, (1998).
[10] D. A. Buchsbaum and D. Eisenbud, Some structure theorems for finite free resolutions, Advances in Math. 12 (1974),
84-139.
[11] L. Burch, A note on the homology of ideals generated by three elements in local rings, Proc. Cambridge Philos. Soc.
64, (1968), 949-952.
[12] H. Cartan and S. Eilenberg, Homological algebra, Princeton University Press, 1956.
[13] S. Glaz, Commutative coherent rings, Springer LNM 1371, Springer-Verlag, (1989).
[14] R. Gilmer and W. Heinzer, Products of commutative rings and zero-dimensionality, Trans. Amer. Math. Soc. 331
(1992), no. 2, 663-680.
[15] T. Kabele, Regularity conditions in nonnoetherian rings, Trans. AMS., 155, (1971), 363–374.
[16] R.E. MacRae, On an application of fitting invariants, J. Algebra 2 (1965), 153–169.
[17] J. Majadas and A.G. Rodicio, Smoothness, regularity and complete intersection, London Mathematical Society Lecture
Note Series, 373, Cambridge University Press, Cambridge, (2010).
[18] A.R. Magid, Direct limits of finite products of fields, in: Zero-dimensional commutative rings (Knoxville, TN, 1994),
299-305, Lecture Notes in Pure and Appl. Math., 171, Dekker, New York, (1995).
[19] H. Matsumura, Commutative ring theory, Cambridge Studies in Advanced Math, 8, (1986).
[20] M. Nagata, Local rings, Interscience Tracts in Pure and Appl. Math., no. 13, Interscience, New York, (1962).
[21] D.G. Northcott, Finite free resolutions, Cambridge Tracts Math., vol. 71, Cambridge Univ. Press, Cambridge, (1976).
[22] T. Ogoma, Noetherian property of inductive limits of Noetherian local rings, Proc. Japan Acad., Ser. A 67(3), 68–69
(1991).
[23] B.L. Osofsky, Homological dimensions of modules, CBMS, 12, 1971.
[24] D. Popescu, On Zariski’s uniformization theorem, Algebraic geometry, Bucharest 1982, Lecture Notes in Math,
Springer, Berlin, 1056, 1984, 264–296.
[25] K. Shimomoto, F-coherent rings with applications to tight closure theory, Journal of Algebra 338, (2011), 24–34.
[26] B. Stenström, Rings of quotients, Die Grundlehren der Mathematischen Wissenschaften, 217, Springer-Verlag, New
York-Heidelberg, (1975).
[27] R. G. Swan, Néron-Popescu desingularization, Algebra and geometry (Taipei, 1995), 135-192, Lect. Algebra Geom., 2,
Int. Press, Cambridge, MA, (1998).
[28] W.V. Vasconcelos, Super-regularity in local rings, Journal of Pure and Applied Algebra 7, (1976), 231–233.
[29] O. Zariski, Local uniformization on algebraic varieties, Ann. of Math. (2) 41 (1940), 852-896.
E-mail address: [email protected]
Enhanced Discrete Particle Swarm Optimization Path Planning for UAV Vision-based Surface Inspection
Manh Duong Phung (a), Cong Hoang Quach (a), Tran Hiep Dinh (b), Quang Ha (b,∗)
arXiv:1706.04399v1 [cs.RO] 14 Jun 2017
(a) Vietnam National University, 144 Xuan Thuy, Cau Giay, Hanoi, Vietnam
(b) University of Technology Sydney, 15 Broadway, Ultimo NSW 2007, Australia
Abstract
In built infrastructure monitoring, an efficient path planning algorithm is essential for robotic inspection of large surfaces using computer vision. In this
work, we first formulate the inspection path planning problem as an extended
travelling salesman problem (TSP) in which both the coverage and obstacle
avoidance were taken into account. An enhanced discrete particle swarm optimisation (DPSO) algorithm is then proposed to solve the TSP, with performance
improvement by using deterministic initialisation, random mutation, and edge
exchange. Finally, we take advantage of parallel computing to implement the
DPSO in a GPU-based framework so that the computation time can be significantly reduced while keeping the hardware requirement unchanged. To show
the effectiveness of the proposed algorithm, experimental results are included
for datasets obtained from UAV inspection of an office building and a bridge.
Keywords: Path planning, infrastructure monitoring, bridge inspection,
vision-based inspection, particle swarm optimization, unmanned aerial vehicle
1. Introduction
For robotics inspection of built infrastructure, computer vision can be used
to detect most surface deficiencies such as cracking, spalling, rusting, distortion,
∗ Corresponding author.
Email addresses: [email protected] (Manh Duong Phung), [email protected] (Cong Hoang Quach), [email protected] (Tran Hiep Dinh), [email protected] (Quang Ha)
Preprint submitted to arXiv, June 15, 2017
misalignment, and excessive movements. Over the last decade, much research
effort has been devoted to this theme with computer vision becoming an important component of modern Structural Health Monitoring (SHM) systems for
built infrastructure such as rust detection of steel bridges [1], crack detection of
concrete bridges [2, 3, 4], or bridge condition assessment [5]. In this regard, it is
promising to integrate a computer vision system into mobile inspection robots,
such as unmanned aerial vehicles (UAVs) [6, 7] or ubiquitous robots [8, 9], especially when dealing with large and hardly accessible structures like tunnels
[10]. For this purpose, an efficient inspection path planning (IPP) algorithm is
therefore of crucial importance.
In vision-based inspection path planning, it is required to find a trajectory
that is informative enough to collect data from different views of a given structure so that the inspection robot can carry out the data acquisition of the region
of interest. Depending on the size of the inspected region, the trajectory can be
planned for multiple robots to coordinately conduct the data collection [11]. To
be visually processed at a later time, the data collected often come from a sensor
of the time-of-flight (optical, sonar or radar) or passive optical (CCD camera)
type. Since the computational time for IPP rapidly increases with the area of
the region of interest, an IPP algorithm should meet the following criteria:
(i) capability of viewing/covering every surface of the region of interest via
at least one planned viewpoint of the inspection sensor,
(ii) obstacle avoidance for the robot,
(iii) generation of an "optimal" path under the available conditions, and
(iv) effectiveness in terms of processing time (for online re-planning and large
structure inspection).
Studies on IPP, in general, can be categorised into three groups, namely
cell decomposition, sub-problem separation, and other methods. In cell decomposition, the target space is decomposed in sub-regions called cells. The cell
shape can be trapezoidal, square, cubic, or customised depending on critical
points of Morse functions, often with a uniform size [12, 13, 14]. An exhaustive
path connecting each cell is then computed for the coverage, using typically a
heuristic algorithm such as wavefront [15] or spiral spanning tree [16]. Methods based on cell decomposition yield good results in terms of coverage and
obstacle avoidance. As the path generated, however, may not be optimal, it
is worth seeking a better and more feasible alternative. In this context, the
IPP separation approach, which tackles non-deterministic polynomial-time (NP) hard problems, can be divided into two: the art gallery problem, which finds the smallest set of viewpoints to cover the whole gallery, and the travelling salesman problem (TSP), which finds the shortest path to visit a set of given cities
[17, 18, 19, 20, 21, 22]. Each problem can be solved separately using known
methods such as the randomised, incremental algorithm for the art gallery problem [23, 24] and the chained Lin-Kernighan heuristics for the TSP [25]. Other
approaches have focused on sampling the configuration space [26], using submodular objective function [27], or employing genetic algorithms [28] but they
often require constraining the robot to certain dynamic models or end with
near-optimal solutions. The requirements remain, however: not only a shorter path but also a collision-free one.
In this paper, the IPP problem is addressed by first formulating it as an
extended TSP. The enhanced discrete particle swarm optimisation (DPSO) is
then employed to solve the IPP. Finally, parallel computing based on graphical
processing units (GPU) is deployed to obtain the real-time performance. The
contributions of our approach are threefold: (i) By formulating the IPP as
an extended TSP, both the coverage and obstacle avoidance are simultaneously
taken into account. In addition, constraints related to the kinematic and dynamic models of the robot are separated from the DPSO solution so that this
solution can be applied to a broad range of robots. (ii) Three techniques including deterministic initialisation, random mutation, and edge exchange have been
proposed to improve the accuracy of DPSO. (iii) Parallel computation has been
implemented to significantly improve the time performance of DPSO. By utilising GPU, the parallel implementation does not add additional requirements to
the hardware, i.e. the developed software can run on popular laptop computers.
The rest of this paper is structured as follows. Section 2 introduces the
steps to formulate the IPP as an extended TSP. Section 3 presents the proposed
DPSO and its deployment for solving the IPP. Section 4 provides experimental
results. Finally, a conclusion is drawn to end our paper.
2. Problem formulation
Our ultimate goal is to design a path planning system for a UAV used for inspecting planar surfaces of large built structures like buildings or bridges.
The sensor used for the inspection is a CCD camera attached to a controllable
gimbal. We suppose that the 3D model of the structure and the environment
are known prior to planning, for example, by using laser scanners. Here, the
IPP objective is to find the shortest path for the UAV’s navigation and taking
photos of the target surfaces so that the images captured can be later processed
to detect potential defects or damages. We first consider the IPP as an extended
TSP and then solve it using the developed DPSO. This section presents the
computation of viewpoint selection and point-to-point pathfinding, which are
fundamental to formulate the extended TSP problem.
2.1. Viewpoint selection
The viewpoint selection involves finding a set of camera configurations that
together cover the whole surfaces of interest. Let P be a finite set of geometric
primitives pi comprising the surfaces to be covered. Each geometric primitive
pi corresponds to a surface patch within the field of view of the camera. Let C
be the configuration space such that every feasible configuration ci ∈ C maps
to a subset of P . Each configuration ci corresponds to a position (xi , yi , zi ) and
an orientation (ϕi , θi , ψi ) of the camera. Given a finite set of configurations C,
the viewpoint selection problem on one hand calls generally for the minimum
number of configurations ci such that all elements pi ∈ P are covered. On the
other hand, for image stitching and defect detection, the following requirements
are added to the system: (i) image capturing moment is when the camera is
perpendicular to the inspected surface, (ii) sufficiently high resolution to distinguish the smallest feature, sf , and (iii) overlapping of images to a percentage op
specified by the stitching algorithm.

Figure 1: Camera for inspection: (a) Camera setup in the field; (b) Relation between parameters of the camera and the field (smallest feature, field of view, working distance, lens, sensor size).

It turns out that those requirements simplify our selection problem. The perpendicular requirement confines the camera
orientation to the normal of the inspected surface. The resolution requirement
suggests the computation of the field of view of the camera as:
a_fov = (1/2) r_c s_f,    (1)
where r_c is the camera resolution (see Fig. 1). Taking the overlapping percentage into account, the geometric primitive p_i is then:

p_i = (1 − o_p) a_fov.    (2)
The working distance from the camera to the surface can also be computed as:
d_k = a_fov f / s_s,    (3)

where f and s_s are respectively the focal length and sensor size of the camera.
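As a quick sanity check, the three quantities in (1)–(3) can be computed directly. The following sketch simply transcribes the equations; the function name and the numeric values in the example are ours, chosen for illustration only:

```python
def camera_coverage(r_c, s_f, o_p, f, s_s):
    """Viewpoint geometry per Eqs. (1)-(3) (illustrative sketch)."""
    a_fov = 0.5 * r_c * s_f      # Eq. (1): field of view from resolution and smallest feature
    p_i = (1 - o_p) * a_fov      # Eq. (2): primitive size after image overlap
    d_k = a_fov * f / s_s        # Eq. (3): working distance from focal length and sensor size
    return a_fov, p_i, d_k
```

For instance, a 4000-pixel sensor resolving a 1 mm smallest feature with 20% overlap, a 50 mm lens and a 25 mm sensor gives a 2 m field of view, 1.6 m primitives, and a 4 m working distance.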
From (2) and (3), it is possible to determine configurations ci to cover the set
Figure 2: Generation of inspection viewpoints (inspection surfaces, primitive geometry p_i, and viewpoints).
of primitives P, as illustrated in Fig. 2. Specifically, for each surface Pk ⊂ P, a grid with cell size p_i is first established to cover Pk. A working surface Pk*, parallel to Pk and at distance d_k from it, is then created. Projecting the centre of each cell of Pk onto Pk* gives the position component of viewpoint ci. The normal of Pk defines the orientation component of ci, which is assumed to be fully controllable by the inspecting UAV, so it can be omitted from our computation.
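The viewpoint-generation step above can be sketched as follows. This is a simplified, hypothetical version that assumes a rectangular surface lying in its own x-y plane, with the working distance d applied along the surface normal; the function name and layout are ours:

```python
def surface_viewpoints(width, height, p, d):
    """Grid of viewpoints over a width x height surface (illustrative sketch):
    one viewpoint above the centre of each p x p cell, offset by the
    working distance d along the surface normal (the z axis here)."""
    def centres(length):
        c, out = p / 2.0, []
        while c < length:          # cell centres at p/2, 3p/2, ...
            out.append(c)
            c += p
        return out
    return [(x, y, d) for y in centres(height) for x in centres(width)]
```

A 4 m × 2 m surface with 1 m primitives and a 3 m working distance yields a 4 × 2 grid of eight viewpoints.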
2.2. Point-to-point pathfinding
Given the viewpoints, the shortest, obstacle-free path between every pair
of them needs to be found to form a graph for later processing. Without loss of
generality, different motion planning approaches such as roadmap, decoupling,
potential field and mathematical programming can be used here depending on
the UAV model and dynamic constraints [29, 30]. In this work, the hierarchical
decoupled approach is employed, in which open- and closed-loop controllers operating at a variety of rates are linked together from top to bottom [29, 31, 32]. Since the majority of UAVs currently in production are often already equipped with
an inner-loop tracking controller and a waypoint following system, this approach
can be simplified to a discrete search that produces a set of waypoints connecting
two viewpoints while avoiding obstacles. For this, the workspace is first divided
into a grid of voxels. Each voxel has the free or occupied status corresponding
to the presence or absence of an object in that voxel. In order to consider the
UAV as a particle moving without collision between voxels, all the free voxels
in a sphere of a radius equal to the largest dimension of the UAV are marked
as occupied. Thus, the A* algorithm [33] can be used to find the shortest path
between viewpoints. In each step, the cost to move from one voxel to another
surrounding neighbour is computed as:
L(α, β, γ) = a_1 α² + a_2 β² + a_3 γ²,    (4)
where coordinates α, β, γ ∈ {−1, 0, 1} indicate the relative position of the neighbour, and coefficients a_1, a_2 and a_3 assign a particular weight to each direction. The total
cost to move from a voxel p to the viewpoint g at step n is given by:

f(p) = Σ_{k=1}^{n} L_k + ‖p − g‖²,    (5)

where L_k is the motion cost at step k.
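A minimal sketch of this voxel-grid search is given below, using the step cost of Eq. (4) and the squared-distance term of Eq. (5) as the A* priority. Note this follows (5) verbatim, so the heuristic is the squared (not classical, admissible) distance; the function names and the `occupied`-set interface are our assumptions:

```python
import heapq

def move_cost(alpha, beta, gamma, a=(1.0, 1.0, 1.0)):
    # Eq. (4): weighted squared displacement along each axis
    return a[0] * alpha**2 + a[1] * beta**2 + a[2] * gamma**2

def astar(start, goal, occupied):
    """A* over a voxel grid (sketch); 'occupied' is a set of blocked voxels."""
    steps = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
             for dz in (-1, 0, 1) if (dx, dy, dz) != (0, 0, 0)]
    h = lambda p: sum((p[i] - goal[i]) ** 2 for i in range(3))  # squared-distance term of Eq. (5)
    frontier = [(h(start), 0.0, start, [start])]                # (f, g, voxel, path)
    seen = set()
    while frontier:
        _, g, p, path = heapq.heappop(frontier)
        if p == goal:
            return path
        if p in seen:
            continue
        seen.add(p)
        for s in steps:
            q = (p[0] + s[0], p[1] + s[1], p[2] + s[2])
            if q in occupied or q in seen:
                continue
            g_q = g + move_cost(*s)
            heapq.heappush(frontier, (g_q + h(q), g_q, q, path + [q]))
    return None  # no obstacle-free path exists
```

In an empty workspace the search walks straight toward the goal one voxel at a time.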
2.3. Modelling the IPP as a TSP
For given viewpoints and paths between them, a graph can be built to model
the IPP as an extended TSP. We define each viewpoint as a node, i, and the
path between two viewpoints as an edge, eij . The length, lij , of edge eij is the
cost to travel from node i to node j determined by (5). If the path between
node i and node j is blocked due to obstacles, a virtual path between them is
defined and a very large cost is assigned to the path. Denoting the set of all
nodes by V and the set of all edges by E, we restrict motion of the UAV to the
graph G = (V, E). The IPP task is now to find a tour, with a minimum cost,
that visits each node (viewpoint) exactly once, including the way back to the
initial node. Let T be the set of these nodes.
By associating a binary variable

λ_ij = 1 if edge e_ij ∈ E is in the tour, and 0 otherwise,    (6)
with each edge in the graph, the IPP is then formulated as follows:

min Σ_{e_ij ∈ E} l_ij λ_ij    (7)

subject to:

Σ_{j ∈ V, i ≠ j} λ_ij = 2  ∀i ∈ V    (8)

Σ_{i,j ∈ T, i ≠ j} λ_ij ≤ |T| − 1  ∀T ⊂ V, T ≠ ∅    (9)

λ_ij ∈ {0, 1},    (10)
where |T | is the number of nodes in the tour. The objective function in (7)
defines the shortest tour. The constraint in (8) implies that each node in the graph
has exactly one incoming edge and one outgoing edge, i.e., the tour passes
through each node once, while condition (9) ensures no sub-tours, i.e., the tour
returns to the original node after visiting all other nodes.
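For very small graphs, the formulation (7)–(10) can be checked against exact enumeration. The sketch below is only a sanity check of the objective and tour constraints, not the DPSO method developed in this paper; the function names are ours:

```python
from itertools import permutations

def tour_cost(tour, l):
    # Objective (7): sum of edge lengths along the closed tour
    return sum(l[tour[i]][tour[i + 1]] for i in range(len(tour) - 1))

def brute_force_tsp(l):
    """Exact solver for tiny cost matrices l (sanity check only)."""
    n = len(l)
    best = None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)   # closed tour: satisfies (8) and (9) by construction
        c = tour_cost(tour, l)
        if best is None or c < best[0]:
            best = (c, tour)
    return best
```

On a 4-node instance whose cheap edges form the ring 0-1-2-3-0, the solver returns that ring with cost 4.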
3. Enhanced Discrete Particle Swarm Optimization for Inspection
Path Planning
Particle swarm optimization (PSO), inspired by social behavior of bird flocking or fish schooling, is a population-based stochastic technique designed for
solving optimization problems [34]. In PSO, a finite set of particles is generated; each particle seeks the global optimum by moving and evolving through
generations. Initially, each particle is assigned to a random position and velocity. It then moves by updating its best previous position, Pk , and the best
position of the swarm, Gk . Let xk and vk be respectively the position and velocity of a particle at generation k. The position and velocity of that particle
in the next generation are given by:
v_{k+1} ← w.v_k + ϕ1 r1.(P_k − x_k) + ϕ2 r2.(G_k − x_k)    (11)

x_{k+1} ← x_k + v_{k+1},    (12)
where w is the inertial coefficient, ϕ1 is the cognitive coefficient, ϕ2 is the social
coefficient, and r1 , r2 are random samples of a uniform distribution in the range
[0,1]. Equations (11) and (12) imply that the motion of a given particle is
a compromise between three possible choices including following its own way,
moving toward its best previous position, or toward the swarm’s best position.
The ratio between choices is determined by the coefficients w, ϕ1 , and ϕ2 .
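For a scalar particle, one update of Eqs. (11)–(12) can be sketched as follows; the function name and default coefficient values are ours, for illustration only:

```python
import random

def pso_step(x, v, p_best, g_best, w=0.7, phi1=1.5, phi2=1.5, rng=random):
    """One continuous-PSO update per Eqs. (11)-(12), scalar case (sketch)."""
    r1, r2 = rng.random(), rng.random()     # uniform samples in [0, 1]
    v_new = w * v + phi1 * r1 * (p_best - x) + phi2 * r2 * (g_best - x)
    return x + v_new, v_new                 # Eq. (12): move by the new velocity
```

Starting at rest at x = 0 with both best positions at 1, the particle is pulled toward 1 on the first step.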
3.1. DPSO approach to the IPP
Since the IPP defined in (7) – (10) is a discrete optimization problem, enhanced algorithms for discrete particle swarm optimisation (DPSO) will be developed
for our problem, motivated by [35]. For this, let us begin with an outline of our
approach to solve the IPP problem using DPSO with improvements in initialization, mutation, edge exchange and parallel implementation.
First, let us define the position of a particle as a sequence of N + 1 nodes, all distinct, except that the last node must be equal to the first one:

x = (n_1, n_2, ..., n_N, n_{N+1}),  n_i ∈ V,  n_1 = n_{N+1},    (13)
where N is the number of nodes, N = |V |. Since each sequence is a feasible
tour satisfying (8) and (9), to minimise the objective function (7) according to
(11) and (12), we need to define the velocity and numerical operators for the
particles’ motion.
From (12), it can be seen that a new position of a particle can be evolved
from the position of its current generation via the velocity operator, considered
here as a list of node transpositions:

v = ((n_{i,1}, n_{j,1}), (n_{i,2}, n_{j,2}), ..., (n_{i,‖v‖}, n_{j,‖v‖})),    (14)

where n_i, n_j ∈ V and ‖v‖ is the length of the transposition list.
In DPSO, particle velocities and positions are updated by using the following
operations:
• The addition between a position x and a velocity v is found by applying
the first transposition of v to x, then the second one to the result, etc.
For example, with x = (1, 4, 2, 3, 5, 1) and v = ((1, 2), (2, 3)), by applying
the first transposition of v to x and keeping in mind the equality between
the first and last nodes, we obtain (2,4,1,3,5,2). Then applying the second
transposition of v to that result gives (3,4,1,2,5,3), which is the final result
of x + v.
• The subtraction between a position x2 and a position x1 is defined as the
velocity v, i.e., x2 − x1 = v, such that by applying v to x1 we obtain back
x2 .
• The addition between a velocity v1 and a velocity v2 is defined as a new
velocity, v1 ⊕ v2 = v, which contains the transpositions of v1 followed by
the transpositions of v2 .
• The multiplication between a real coefficient c with a velocity v is a new
velocity, c.v, defined as follows:
– For c = 0, c.v = ∅.
– For 0 < c ≤ 1, c.v = ((n_{i,1}, n_{j,1}), (n_{i,2}, n_{j,2}), ..., (n_{i,⌊c‖v‖⌋}, n_{j,⌊c‖v‖⌋})), i.e., only the first ⌊c‖v‖⌋ transpositions are kept.
– The cases c < 0 and c > 1 are omitted since they do not occur in our DPSO.
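The position-plus-velocity and coefficient-times-velocity operators can be sketched as follows. The function names are ours; a transposition (a, b) is applied by swapping the node values a and b wherever they occur, which keeps the first and last nodes equal, matching the worked example above:

```python
def apply_velocity(x, v):
    """x + v: apply each transposition (a, b) of v to the tour x in order."""
    x = list(x)
    for a, b in v:
        # swap node values a and b everywhere (both endpoints change together)
        x = [b if n == a else a if n == b else n for n in x]
    return x

def scale_velocity(c, v):
    """c.v for 0 <= c <= 1: keep only the first floor(c * ||v||) transpositions."""
    return [] if c == 0 else v[:int(c * len(v))]
```

With x = (1, 4, 2, 3, 5, 1) and v = ((1, 2), (2, 3)), the first swap gives (2, 4, 1, 3, 5, 2) and the second (3, 4, 1, 2, 5, 3), reproducing the example in the text.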
3.2. Augmentations to the DPSO
In order to speed up the convergence and avoid getting stuck in local minima, we propose to enhance the optimisation performance of the DPSO as
follows.
3.2.1. Deterministic initialization
The swarm in DPSO, having no prior knowledge of the search space, is
initialized with its particles at random positions. This initialization works well
for a relatively small search space.
For large structures, the search result depends, to a great extent, on the
initial positions of the particles. Therefore, in order to increase the probability
Figure 3: Initialization of particle using back-and-forth path.
of reaching the global optimum, we propose to exploit features of viewpoints
to generate several seeding particles to facilitate the evolution of the swarm
in the search space. In our application, viewpoints are generated based on
a grid decomposition. Consequently, a back-and-forth tour would generate a
near-optimal path, as shown in Fig. 3, if no obstacles occur. From this observation, positions are deterministically assigned for several particles during the
initialization process.
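A back-and-forth (boustrophedon) seed tour over a rows × cols viewpoint grid can be generated as in this sketch; the row-major node indexing is an assumption for illustration:

```python
def back_and_forth(rows, cols):
    """Boustrophedon seed tour over a rows x cols viewpoint grid (sketch).
    Viewpoints are indexed row-major; alternate rows are traversed in
    reverse, and the tour is closed back to its first node."""
    order = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        order += [r * cols + c for c in cs]
    return order + [order[0]]   # close the tour, per Eq. (13)
```

On a 2 × 3 grid this produces the tour 0, 1, 2, 5, 4, 3, 0, i.e., the snake pattern of Fig. 3.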
3.2.2. Random mutation
Similar to other evolutionary optimisation techniques such as the genetic
algorithm or ant colony system, the PSO performs both exploration and exploitation of the search space. Initially, particles are far from each other so they
explore different regions in the search space. After evolving through generations, the swarm converges and starts to perform more exploitation. At this stage,
distances between particles will gradually reduce to the size termed "swarm collapse" [34], whereby many particles will become almost identical.
In order to avoid the collapse situation and keep the balance between exploration and exploitation, random mutations for particles are employed. After
Figure 4: DPSO augmentation using edge exchange: edges (2,6) and (3,7) are exchanged.
every i generations, identical particles are filtered out. The remaining particles are then sorted according to their cost values. Finally, only the one-third of particles with the smallest costs are kept for the next generation. All others are perturbed, each in different, randomly chosen dimensions.
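One possible realisation of this mutation step is sketched below. The duplicate filtering, the one-third retention rule, and the random perturbation follow the description above; perturbing each remaining particle by a single random swap (with the tour endpoints left fixed) is our simplification:

```python
import random

def mutate(swarm, fitness, rng=random):
    """Random-mutation step (illustrative sketch)."""
    # filter identical particles, keeping one copy of each tour
    unique = list({tuple(p): list(p) for p in swarm}.values())
    unique.sort(key=fitness)                      # sort by cost value
    keep = max(1, len(unique) // 3)               # keep the best third unchanged
    out = unique[:keep]
    for p in unique[keep:]:
        q = list(p)
        i, j = rng.sample(range(1, len(q) - 1), 2)  # endpoints stay fixed
        q[i], q[j] = q[j], q[i]                     # one random transposition
        out.append(q)
    return out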
3.2.3. Edge exchange
This enhancement is based on the geometric observation that crossing edges can be exchanged to yield a shorter tour. Here, as 3D cross checking may be difficult, a complete search similar to the 2-opt algorithm is employed to compare each valid combination of the edge-swapping mechanism [36]. In this search, every possible exchange of edges is evaluated and the one with the most improvement is chosen. Figure 4 illustrates a case in which the edges (2,6) and (3,7) are exchanged to shorten the tour. Since this augmentation is computationally demanding, it should be used only when the random mutation does not make any difference.
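A single pass of this best-improvement, 2-opt style search can be sketched as follows (the function name is ours; `l` is the edge-cost matrix of Section 2.3):

```python
def two_opt(tour, l):
    """One best-improvement pass of 2-opt edge exchange (sketch)."""
    best_gain, best = 0.0, None
    for i in range(1, len(tour) - 2):
        for j in range(i + 1, len(tour) - 1):
            # gain from replacing edges (i-1, i) and (j, j+1)
            # with (i-1, j) and (i, j+1), i.e., reversing tour[i..j]
            old = l[tour[i - 1]][tour[i]] + l[tour[j]][tour[j + 1]]
            new = l[tour[i - 1]][tour[j]] + l[tour[i]][tour[j + 1]]
            if old - new > best_gain:
                best_gain, best = old - new, (i, j)
    if best is None:
        return tour                     # no improving exchange exists
    i, j = best
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
```

On the 4-node example used earlier, the crossed tour 0-2-1-3-0 is repaired to the optimal ring 0-1-2-3-0 in one pass.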
3.2.4. Parallel implementation on GPU
Owing to the rapidly increasing performance afforded by thousands of cores, a graphics processing unit (GPU) can outperform traditional CPUs on problems that are suitable for SIMD (single instruction, multiple data) processing. As our optimisation algorithm is also SIMD-based, we can take advantage of this to implement the proposed DPSO in parallel on a GPU to reduce computation time.
The diagram and pseudo code for parallel implementation are shown in Fig. 5
Figure 5: Parallel implementation of the DPSO on GPU (initialization, then per-particle velocity and position updates, fitness evaluation, local-best update, random mutation, edge exchange, and global-best update, each particle in a separate thread).
/* Host: */
Load w, ϕ1, ϕ2, swarm size to global memory;
Load graph of shortest paths and travelling costs to global memory;
Copy initialized particles from CPU to global memory;    /* see Fig. 7, lines 1-8 */
Set threads per block = swarm size;
Call kernels to evolve each particle in a separate thread;

/* Device: */
Kernel move particle(*particles){
    Get *particle corresponding to thread id;
    Update position and fitness of particle;             /* see Fig. 7, lines 10-17 */
    Update global best to global memory and synchronize threads;
}
Kernel random mutation(*particles){
    Sort particles using thrust library;
    Randomize 2/3 worst particles;
    Update global best to global memory and synchronize threads;
}
Kernel edge exchange(*graph){
    foreach i < (number of nodes − 2) do
        foreach i < j < (number of nodes − 2) do
            Swap nodes(i, j) and evaluate fitness;
        end
    end
    Update global best to global memory and synchronize threads;
}

Figure 6: Pseudo code for parallel computation of DPSO on GPU.
and Fig. 6 respectively. After initialization, parameters of a particle such as
the velocity, position, and fitness are computed in parallel, each particle in a
different thread. At the end of each generation, the results are saved to the
global memory to update these particle parameters and then a new parallel
computation round starts.
In UAVs, parallel programs can be implemented in recent onboard computers
having good GPU capability and low power consumption such as Jetson TK1
with 192 CUDA Cores (Kepler GPU), 5 W [37]. The board can be configured as
either the main or a supplemental board communicating with other components
via standard communications protocols like MAVLink. However, if the battery
power is highly limited as in some micro UAVs, an alternative solution is to
stream the sensory data to the ground control station (GCS) and utilise the
GPU of a laptop to conduct the path planning. The result is then uploaded to
the UAV via GCS for planning/re-planning and navigation.
3.3. Enhanced DPSO Pseudo Code
For vision-based inspection, to take into account obstacle avoidance of the
UAV, a selected combination of random and deterministic initialization for each
particle in the swarm is performed on a CPU while its evolutions, including
computation of updated particles’ velocity and position, random mutation and
edge exchange, are implemented in parallel on a GPU.
By making use of all advantages of the enhanced DPSO algorithm, the
pseudo code for our proposed algorithm incorporating the above-mentioned augmentations is shown in Fig. 7.
4. Experimental results
Experiments have been carried out on two real datasets recorded by laser
scanners mounted on a UAV for inspection of an office building and a concrete
bridge. The first dataset represents a floor of the building with a size of 25 m
× 12 m × 8 m. The second dataset represents a part of the bridge including
/* ------------------------ Computation on CPU ------------------------- */
/* Initialization: */
1   Set swarm parameters w, ϕ1, ϕ2, swarm size;
2   foreach particle in swarm do
3       Initialize particle's position with 10% specific and 90% random;
4       Compute fitness value of each particle;
5       Set local best value of each particle to itself;
6       Set velocity of each particle to zero;
7   end
8   Set global best to the best fit particle;
/* ------------------------ Computation on GPU ------------------------- */
/* Evolutions: */
9   repeat
10      foreach particle in swarm do
11          Compute new velocity;                /* using Eq. (11) */
12          Compute new position;                /* using Eq. (12) */
13          Update fitness of new position;
14          if new fitness < local best then
15              local best = new fitness;
16          end
17      end
        /* Random mutation */
18      if current generation reaches collapsed cycle then
19          Sort all particles by fitness;
20          Randomize 2/3 worst particles;
21      end
22      Find the particle with the best fitness and update global best;
        /* Edge exchange */
23      if global best not improved then
24          foreach particle in swarm do
25              Swap each pair of nodes and evaluate fitness;
26              Choose the swap with best fit;
27          end
28      end
29  until max generation is reached or
30  global best remains unchanged for a pre-specified number of generations;

Figure 7: Pseudo code of the enhanced DPSO algorithm.
piers and surfaces with a size of 22 m × 10 m × 4.5 m. Figures 8a and 9a show
the datasets in point cloud representation. In order to apply the IPP algorithm
to the datasets, planar surfaces and boundaries need to be extracted from them.
For this task, we have developed software for the automatic interpretation of unordered point cloud data, described in detail in [38]. The software uses the
Random sample consensus (RANSAC) algorithm combined with data obtained
from an inertial measurement unit (IMU) to detect planar surfaces. The convex
hull algorithm is then employed to determine their boundaries. The remaining
point cloud is clustered into obstacle objects by finding the nearest neighbour
in a 3D Kd-tree structure. Through the software, users are able to select the
surfaces they want to inspect, as shown in Fig. 8b and Fig. 9b, respectively.
In all experiments, coefficients w = 1, ϕ1 = 0.4, ϕ2 = 0.4 are chosen for the
DPSO. The number of particles is set to 100. The random mutation is executed
every three generations, and the edge exchange is carried out if the random
mutation does not improve the result. The parallel implementation is developed
based on the CUDA platform. The programs, including both serial and parallel
versions, are executed on a laptop computer with a Core i7 CPU and a GeForce GTX 960M GPU.
4.1. Path Generation and DPSO Convergence
Figures 8c and 9c show the paths generated to inspect three selected surfaces
of each dataset. Figures 8d and 9d show the paths in the presence of obstacles. It can be seen that the back-and-forth pattern is dominant in those paths, except for necessary deviations when avoiding obstacles or switching between surfaces.
Figures 8e and 8f present the front and side views of a zoom-in part of the
inspection path showing that obstacles were avoided. Figures 9e and 9f show
similar results for the bridge dataset.
Figure 10 shows the graphs of the fitness value as a function of
the generation number for the two inspection cases of a building and a bridge.
In each graph, the fitness represents the cost to traverse the inspection path.
For the dataset of the office building, the DPSO, by solving the extended TSP, improves the travelling cost by 22.2% and converges within 60 generations. For the second dataset of the bridge, those numbers are 37.9% and 80, respectively.
The difference is accounted for by the variation in size of the inspection surfaces
and the structural complexity of the environments. That is to say, in terms of algorithms, care should be given when considering parameters for the exploration (number of particles) and exploitation (number of generations).

Figure 8: Experiment with the dataset recording one floor of an office building: (a) Raw data recorded by laser scanners; (b) Detected planar surfaces and their boundaries; (c) Path planned to inspect the surfaces of the ceiling, left wall, and back wall; (d) Inspection path in the presence of obstacles; (e) Part of the inspection path avoiding an obstacle (front view); (f) Part of the inspection path avoiding an obstacle (side view).
4.2. Effect of the augmentations on the DPSO
Figure 9: Experiment with the dataset recording a part of a bridge: (a) Raw data recorded by laser scanners; (b) Detected planar surfaces and their boundaries; (c) Path planned to inspect the piers, top surface, and slope surface; (d) Inspection path in the presence of obstacles; (e) Part of the inspection path avoiding obstacles at the surface bottom; (f) Part of the inspection path avoiding obstacles at the surface top.

Table 1 presents the effect of the augmentations, expressed as the percentage improvement over the baseline DPSO obtained by applying our enhanced algorithm. Here, for the dataset obtained from the building inspection, the deterministic initialization significantly improves the processing time by 2.8 times and slightly improves
the travelling cost by 1.4 %. Notably, the computational efficiency in terms of
fast convergence actually comes from the improvement of evolving generations
of the swarm by means of initialization. On the other hand, it is not surprising that the edge exchange introduces some enhancement of the travelling cost, as it uses brute-force transpositions. Likewise, the parallel implementation has the most significant impact on the computation time, thanks to the parallel
processing capability taking advantage of the SIMD feature of the DPSO.
Figure 10: DPSO Convergence from datasets of: (a) Office building; (b) Concrete bridge.

Table 1: Percent improvement of the DPSO by augmentations

                        Building dataset                Bridge dataset
Algorithm           Time (%)  Travelling cost (%)   Time (%)  Travelling cost (%)
Initialization         280           1.4               310           1.7
Random mutation         x            5.0                x            5.5
Edge exchange           x           13.8                x           15.7
Parallel on GPU       6570            x               6720            x

x: not applicable

To show consistency in the effectiveness of the proposed approach, we compare our enhanced DPSO algorithm not only with the conventional DPSO but
also with an ant colony system (ACS), where the ACS is implemented as in
[39]. In the comparison, each algorithm was executed over 15 trials. Table 2
shows the results expressed in the average value and the standard deviation of
the processing time and the travelling cost. Compared with the ACS algorithm,
our enhanced DPSO for the bridge inspection dataset has shown, on average, an improvement of 15% in the travelling cost and a speed-up of 87 times in the computation time.
Owing to a significant improvement in processing time, the enhanced DPSO can
be applied for real-time automated inspection.
Table 2: Comparison between the enhanced DPSO, DPSO and ACS algorithms.

                        Building dataset                  Bridge dataset
Algorithm          Time (s)       Travelling cost     Time (s)      Travelling cost
Enhanced DPSO      32.9±1.2       2490.2±38.9         41.6±1.5      3358.5±59.7
DPSO               2253.8±25.2    2998.1±44.3         2928.4±33.8   4130.7±65.8
ACS                2560.1±16.3    2763.6±56.8         3617.2±23.4   3862.3±87.4
5. Conclusion
In this paper, we have presented an enhanced discrete particle swarm optimisation (DPSO) algorithm for solving the inspection path planning (IPP) problem
that is formulated as an extended travelling salesman problem (TSP) considering simultaneously the coverage and obstacle avoidance. By augmenting with
deterministic initialization, random mutation, edge exchange and parallel implementation on GPU, the proposed DPSO can greatly improve its performance
in both time and travelling cost. The validity and effectiveness of the proposed
technique are verified in successful experiments with two real-world datasets
collected by UAV inspection of an office building and a concrete bridge. In future work, the algorithm will be extended to the inspection of non-planar surfaces and will incorporate online re-planning strategies to deal with the inspection of built infrastructure of irregular shape.
Acknowledgments
The first author would like to acknowledge an Endeavour Research Fellowship (ERF-PDR-142403-2015) provided by the Australian Government. This
work is supported by the University of Technology Sydney Data Arena Research
Exhibit Grant 2016 and Vietnam National University Grant QG.16.29.
20
References
[1] K.-W. Liao, Y.-T. Lee, Detection of rust defects on steel bridge coatings via
digital image recognition, Automation in Construction 71 (Part 2) (2016)
294 – 306. doi:10.1016/j.autcon.2016.08.008.
[2] C. M. Yeum, S. J. Dyke, Vision-based automated crack detection for bridge
inspection, Comp.-Aided Civil and Infrastruct. Engineering 30 (10) (2015)
759–770. doi:10.1111/mice.12141.
[3] G. Li, S. He, Y. Ju, K. Du, Long-distance precision inspection method for
bridge cracks with image processing, Automation in Construction 41 (2014)
83 – 95. doi:10.1016/j.autcon.2013.10.021.
[4] R. Adhikari, O. Moselhi, A. Bagchi, Image-based retrieval of concrete crack
properties for bridge inspection, Automation in Construction 39 (2014) 180
– 194. doi:10.1016/j.autcon.2013.06.011.
[5] R. Zaurin, R. Catbas, Integration of computer imaging and sensor data for
structural health monitoring of bridges, Smart Materials and Structures
19 (1). doi:10.1088/0964-1726/19/1/015019.
[6] A. A. Woods, H. M. La, Q. Ha, A novel extended potential field controller
for use on aerial robots, in: Proceedings of the 12th IEEE International
Conference on Automation Science and Engineering (CASE), 2016, pp.
286–291. doi:10.1109/COASE.2016.7743420.
[7] A. Ellenberg, A. Kontsos, F. Moon, I. Bartoli, Bridge deck delamination
identification from unmanned aerial vehicle infrared imagery, Automation
in Construction 72 (Part 2) (2016) 155 – 165. doi:10.1016/j.autcon.
2016.08.024.
[8] T. Bock, The future of construction automation: Technological disruption
and the upcoming ubiquity of robotics, Automation in Construction 59
(2015) 113 – 121. doi:10.1016/j.autcon.2015.07.022.
21
[9] Y. Yu, N. Kwok, Q. Ha, Color tracking for multiple robot control using a
system-on-programmable-chip, Automation in Construction 20 (2011) 669
– 676. doi:10.1016/j.autcon.2011.04.013.
[10] R. Montero, J. Victores, S. Martínez, A. Jardón, C. Balaguer, Past, present
and future of robotic tunnel inspection, Automation in Construction 59
(2015) 99 – 112. doi:10.1016/j.autcon.2015.02.003.
[11] B. Zhang, W. Liu, Z. Mao, J. Liu, L. Shen, Cooperative and geometric
learning algorithm (CGLA) for path planning of UAVs with limited information, Automatica 50 (3) (2014) 809 – 820. doi:10.1016/j.automatica.
2013.12.035.
[12] H. Choset, Coverage for robotics - a survey of recent results, Annals of
Mathematics and Artificial Intelligence 31 (2001) 113–126. doi:10.1023/
A:1016639210559.
[13] H. Choset, K. Lynch, S. Hutchinson, G. Kantor, W. Burgard, L. Kavraki,
S. Thrun (Eds.), Principles of Robot Motion: Theory, Algorithms, and
Implementation, The MIT Press, 2005.
[14] E. Acar, H. Choset, A. Rizzi, P. Atkar, D. Hull, Morse decompositions for
coverage tasks, International Journal of Robotics Research 21 (4) (2002)
331–344. doi:10.1177/027836402320556359.
[15] V. Shivashankar, R. Jain, U. Kuter, D. Nau, Real-time planning for covering an initially-unknown spatial environment, in: Proceedings of the
Twenty-Fourth International Florida Artificial Intelligence Research Society Conference, 2011, pp. 63–68, isbn: 978-1-57735-501-4.
[16] Y. Gabriely, E. Rimon, Spiral-stc: an on-line coverage algorithm of grid
environments by a mobile robot, in: Proceedings of the IEEE International
Conference in Robotics and Automation (ICRA), Vol. 1, 2002, pp. 954–960.
doi:10.1109/ROBOT.2002.1013479.
22
[17] B. Englot, F. Hover, Three-dimensional coverage planning for an underwater inspection robot, International Journal of Robotics Research 32 (9-10)
(2013) 1048–1073. doi:10.1177/0278364913490046.
[18] G. A. Hollinger, B. Englot, F. S. Hover, U. Mitra, G. S. Sukhatme, Active planning for underwater inspection and the benefit of adaptivity,
The International Journal of Robotics Research 32 (1) (2013) 3 – 18.
doi:10.1177/0278364912467485.
[19] P. Janousek, J. Faigl, Speeding up coverage queries in 3d multi-goal
path planning, in: Proceedings of the IEEE International Conference on
Robotics and Automation (ICRA), 2013, pp. 5082–5087. doi:10.1109/
ICRA.2013.6631303.
[20] P. Wang, K. Gupta, R. Krishnamurti, Some complexity results for metric view planning problem with traveling cost and visibility range, IEEE
Transactions on Automation Science and Engineering 8 (3) (2011) 654 –
659. doi:10.1109/TASE.2011.2123888.
[21] P. S. Blaer, P. K. Allen, View planning and automated data acquisition
for three-dimensional modeling of complex sites, Journal of Field Robotics
26 (11-12) (2009) 865–891. doi:10.1002/rob.20318.
[22] M. Saha, T. Roughgarden, J.-C. Latombe, G. Snchez-Ante, Planning tours of robotic arms among partitioned goals, The International
Journal of Robotics Research 25 (3) (2006) 207–223.
doi:10.1177/
0278364906061705.
[23] T. Danner, L. E. Kavraki, Randomized planning for short inspection paths,
in: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2000, pp. 971–976. doi:10.1109/ROBOT.2000.844726.
[24] B. Englot, F. Hover, Planning complex inspection tasks using redundant
roadmaps, in: H. I. Christensen, O. Khatib (Eds.), Robotics Research,
23
Vol. 100 of Springer Tracts in Advanced Robotics, Springer International
Publishing, 2016, pp. 327–343. doi:10.1007/978-3-319-29363-9_19.
[25] D. Applegate, W. Cook, A. Rowe, Chained lin-kernighan for large traveling
salesman problems, INFORMS Journal on Computing 15 (1) (2003) 82–92.
doi:10.1287/ijoc.15.1.82.15157.
[26] G. Papadopoulos, H. Kurniawat, N. Patrikalakis, Asymptotically optimal
inspection planning using systems with differential constraints, in: Proceedings of the IEEE International Conference in Robotics and Automation
(ICRA), 2013, pp. 4126–4133. doi:10.1109/ICRA.2013.6631159.
[27] G. Hollinger, B. Englot, F. Hover, U. Mitra, G. Sukhatme, Uncertaintydriven view planning for underwater inspection, in: Proceedings of the
IEEE International Conference in Robotics and Automation (ICRA), 2012,
pp. 4884–4891. doi:10.1109/ICRA.2012.6224726.
[28] P. Jimenez, B. Shirinzadeh, A. Nicholson, G. Alici, Optimal area covering
using genetic algorithms, in: Proceedings of the IEEE/ASME International
Conference on Advanced Intelligent Mechatronics, 2007, pp. 1–5. doi:
10.1109/AIM.2007.4412480.
[29] C. Goerzen, Z. Kong, B. Mettler, A survey of motion planning algorithms
from the perspective of autonomous uav guidance, Journal of Intelligent
and Robotic Systems 57 (1) (2009) 65. doi:10.1007/s10846-009-9383-1.
[30] N. Dadkhah, B. Mettler, Survey of motion planning literature in the
presence of uncertainty: Considerations for uav guidance, Journal of
Intelligent & Robotic Systems 65 (1) (2012) 233–246.
doi:10.1007/
s10846-011-9642-9.
[31] S. Scherer, S. Singh, L. Chamberlain, M. Elgersma, Flying fast and
low among obstacles: Methodology and experiments., The International
Journal of Robotics Research 27 (5) (2008) 549–574.
0278364908090949.
24
doi:10.1177/
[32] D. Jung, P. Tsiotras, On-line path generation for unmanned aerial vehicles
using b-spline path templates, Journal of Guidance, Control, and Dynamics
36 (6) (2013) 1642–1653. doi:10.2514/1.60780.
[33] P. Hart, N. Nilsson, B. Raphael, A formal basis for the heuristic determination of minimum cost paths, IEEE Transactions on Systems Science and
Cybernetics 4 (2) (1968) 100–107. doi:10.1109/TSSC.1968.300136.
[34] J. Kennedy, R. Eberhart, Y. Shi (Eds.), Swarm Intelligence, Morgan Kaufmann, 2001.
[35] M. Clerc, Discrete particle swarm optimization, illustrated by the traveling salesman problem, in: G. Onwubolu, B. Babu (Eds.), New Optimization Techniques in Engineering, Vol. 141 of Studies in Fuzziness and
Soft Computing, Springer Berlin Heidelberg, 2004, pp. 219–239. doi:
10.1007/978-3-540-39930-8_8.
[36] G. Gutin, A. Punnen (Eds.), The Traveling Salesman Problem and Its
Variations, Springer US, 2007. doi:10.1007/b101971.
[37] Y. Ukidave, D. Kaeli, U. Gupta, K. Keville, Performance of the nvidia jetson tk1 in hpc, in: 2015 IEEE International Conference on Cluster Computing (CLUSTER), 2015, pp. 533–534. doi:10.1109/CLUSTER.2015.147.
[38] M. Phung, C. Quach, D. Chu, N. Nguyen, T. Dinh, Q. Ha, Automatic interpretation of unordered point cloud data for UAV navigation in construction,
in: Proceedings of the 14th International Conference on Control, Automation, Robotics and Vision (ICARCV), 2016. doi:10.1109/ICARCV.2016.
7838683.
[39] M. Dorigo, L. M. Gambardella, Ant colony system: a cooperative learning
approach to the traveling salesman problem, IEEE Trans. Evolutionary
Computation 1 (1) (1997) 53–66. doi:10.1109/4235.585892.
25
| 2 |
ACD Term Rewriting
arXiv:cs/0608016v1 [] 3 Aug 2006
Gregory J. Duck, Peter J. Stuckey, and Sebastian Brand
NICTA Victoria Laboratory
Department of Computer Science & Software Engineering,
University of Melbourne, Australia
Abstract. In this paper we introduce Associative Commutative Distributive Term Rewriting (ACDTR), a rewriting language for rewriting
logical formulae. ACDTR extends AC term rewriting by adding distribution of conjunction over other operators. Conjunction is vital for expressive term rewriting systems since it allows us to require that multiple
conditions hold for a term rewriting rule to be used. ACDTR uses the
notion of a “conjunctive context”, which is the conjunction of constraints
that must hold in the context of a term, to enable the programmer to
write very expressive and targeted rewriting rules. ACDTR can be seen
as a general logic programming language that extends Constraint Handling Rules and AC term rewriting. In this paper we define the semantics
of ACDTR and describe our prototype implementation.
1 Introduction
Term rewriting is a powerful instrument to specify computational processes. It is
the basis of functional languages; it is used to define the semantics of languages
and it is applied in automated theorem proving, to name only a few application
areas.
One difficulty faced by users of term rewriting systems is that term rewrite
rules are local, that is, the term to be rewritten occurs in a single place. This
means in order to write precise rewrite rules we need to gather all relevant
information in a single place.
Example 1. Imagine we wish to “program” an overloaded ordering relation for integer variables, real variables and pair variables. In order to write this, the “type” of the variable must be encoded in the term1 as in:
int(x) ≤ int(y) → intleq(int(x), int(y))
real(x) ≤ real(y) → realleq(real(x), real(y))
pair(x1 , x2 ) ≤ pair(y1 , y2 ) → x1 ≤ y1 ∨ x1 = y1 ∧ x2 ≤ y2
In a more standard language, the type information for variables (and other
information) would be kept separate and “looked up” when required.
⊓⊔
1 Operator precedences used throughout this paper are: ∧ binds tighter than ∨, and all other operators, e.g. ¬, =, bind tighter than ∧.
Term rewriting systems such as constraint handling rules (CHRs) [5] and
associative commutative (AC) term rewriting [3] allow “look up” to be managed
straightforwardly for a single conjunction.
Example 2. In AC term rewriting the above example could be expressed as:
int (x) ∧ int (y) ∧ x ≤ y → int(x) ∧ int(y) ∧ intleq(x, y)
real (x) ∧ real (y) ∧ x ≤ y → real (x) ∧ real (y) ∧ realleq (x, y)
pair (x, x1 , x2 ) ∧ pair (y, y1 , y2 ) ∧ x ≤ y → pair (x, x1 , x2 ) ∧ pair (y, y1 , y2 )∧
(x1 ≤ y1 ∨ x1 = y1 ∧ x2 ≤ y2 )
where each rule replaces the x ≤ y by an appropriate specialised version, in the
conjunction of constraints. The associativity and commutativity of ∧ is used to
easily collect the required type information from a conjunction.
⊓⊔
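The “look up” that associativity and commutativity make possible can be sketched operationally: a rule head that is a conjunction is matched against the goal's conjuncts in any order. The following Python sketch is our own illustration (ac_match and the atom encoding are not part of any ACDTR implementation); it finds the matching substitution for the first rule of Example 2, with leq standing for ≤:

```python
# Goal and rule head as multisets (lists) of atoms (name, arg, ...);
# strings starting with an uppercase letter are pattern variables.
def ac_match(head, goal):
    """Match every head conjunct against some goal atom, in any order
    (commutativity) and grouping (associativity), with backtracking."""
    def unify(pat, atom, sub):
        if len(pat) != len(atom) or pat[0] != atom[0]:
            return None
        sub = dict(sub)
        for p, a in zip(pat[1:], atom[1:]):
            if p[0].isupper():            # pattern variable
                if sub.get(p, a) != a:
                    return None
                sub[p] = a
            elif p != a:                  # constant must match exactly
                return None
        return sub

    def go(pending, sub):
        if not pending:
            return sub
        for atom in goal:
            s = unify(pending[0], atom, sub)
            if s is not None:
                r = go(pending[1:], s)
                if r is not None:
                    return r
        return None

    return go(head, {})

goal = [('int', 'x1'), ('int', 'y1'), ('leq', 'x1', 'y1')]
head = [('int', 'X'), ('int', 'Y'), ('leq', 'X', 'Y')]
print(ac_match(head, goal))  # {'X': 'x1', 'Y': 'y1'}
```

The backtracking over goal atoms is what the congruence class modulo AC buys: the type atoms can sit anywhere in the conjunction.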
One difficulty remains with both AC term rewriting and CHRs. The “look
up” is restricted to be over a single large conjunction.
Example 3. Consider the term int(x1 ) ∧ int(y1 ) ∧ pair (x, x1 , x2 ) ∧ pair (y, y1 , y2 ) ∧ x ≤ y. After rewriting x ≤ y to (x1 ≤ y1 ∨ x1 = y1 ∧ x2 ≤ y2 ) we could not rewrite x1 ≤ y1 since the types for x1 , y1 appear at a different level.
In order to push the type information inside the disjunction we need to
distribute conjunction over disjunction.
⊓⊔
Simply adding distribution rules like
A ∧ (B ∨ C) → A ∧ B ∨ A ∧ C   (1)
A ∧ B ∨ A ∧ C → A ∧ (B ∨ C)   (2)
does not solve the problem. Rule (1) creates two copies of term A, which increases
the size of the term being rewritten. Adding Rule (2) to counter this effect results
in a non-terminating rewriting system.
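The size blow-up can be observed concretely. In the following Python sketch (our own encoding: terms as nested tuples ('and', A, B), ('or', A, B) or atom strings), exhaustively applying rule (1) duplicates the conjunct A into both branches of the disjunction:

```python
def size(t):
    """Number of symbols in a term."""
    if isinstance(t, str):
        return 1
    return 1 + sum(size(a) for a in t[1:])

def distribute(t):
    """Apply rule (1), A ∧ (B ∨ C) → A ∧ B ∨ A ∧ C, exhaustively."""
    if isinstance(t, str):
        return t
    op, *args = t
    args = [distribute(a) for a in args]
    if (op == 'and' and len(args) == 2
            and not isinstance(args[1], str) and args[1][0] == 'or'):
        a, (_, b, c) = args
        return ('or', distribute(('and', a, b)), distribute(('and', a, c)))
    return (op, *args)

A = ('and', 'p', ('and', 'q', 'r'))        # a conjunction of size 5
t = ('and', A, ('or', 'u', 'v'))           # A ∧ (u ∨ v), size 9
print(size(distribute(t)))                 # 15: A now occurs twice
```

With nested disjunctions the copies of A multiply, which is exactly why ACDTR handles distribution at the language level instead of by rules (1) and (2).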
1.1 Conjunctive context
We address the non-termination vs. size explosion problem due to distributivity
rewrite rules in a similar way to how commutativity is dealt with: by handling
distributivity on the language level. We restrict ourselves to dealing with expanding distributivity of conjunction ∧ over any other operator, and we account
for idempotence of conjunction.2 Thus we are concerned with distribution rules
of the form
P ∧ f (Q1 , . . . , Qn ) → P ∧ f (P ∧ Q1 , . . . , P ∧ Qn ).   (3)
2 This means that conjunction is distributive over any function f in the presence of a redundant copy of P , i.e. P ∧ (P ∧ f (Q1 , . . . , Qn )) → P ∧ f (P ∧ Q1 , . . . , P ∧ Qn ). We use idempotence to simplify the RHS and derive (3).
Let us introduce the conjunctive context of a term and its use in rewrite
rules, informally for now. Consider a term T and the conjunction C ∧ T modulo
idempotence of ∧ that would result from exhaustive application of rule (3) to
the superterm of T . By the conjunctive context of T we mean the conjunction C.
Example 4. The conjunctive context of the boxed occurrence of x in the term
(x = 3) ∧ (x2 > y ∨ ( x = 4) ∧ U ∨ V ) ∧ W,
is (x = 3) ∧ U ∧ W .
⊓⊔
We allow a rewrite rule P → T to refer to the conjunctive context C of the rule
head P . We use the following notation:
C \ P ⇐⇒ T.
This facility provides ∧-distributivity without the undesirable effects of rule (3)
on the term size.
Example 5. We can express that an equality can be used anywhere “in its scope”
by viewing the equality as a conjunctive context:
x = a \ x ⇐⇒ a.
Using this rule on the term of Example 4 results in
(x = 3) ∧ (32 > y ∨ (3 = 4) ∧ U ∨ V ) ∧ W
without dissolving the disjunction. ⊓⊔
1.2 Motivation and Applications
Constraint Model Simplification. Our concrete motivation behind associative commutative distributive term rewriting (ACDTR) is constraint model
mapping as part of the G12 project [7]. A key aim of G12 is the mapping of solver
independent models to efficient solver dependent models. We see ACDTR as
the basis for writing these mappings. Since models are not flat conjunctions of
constraints we need to go beyond AC term rewriting or CHRs.
Example 6. Consider the following simple constraint model inspired by the Social Golfers problem. For two groups g1 and g2 playing in the same week there can be no overlap in players: maxOverlap(g1 , g2 , 0). The aim is to maximise the number of times the overlap between two groups is less than 2; in other words, minimise the number of times two players play together in a group.

constraint   ⋀_{∀w∈Weeks, ∀g1,g2∈weeks[w], g1<g2}  maxOverlap(g1 , g2 , 0)

maximise   Σ_{∀w1,w2∈Weeks, ∀g1∈weeks[w1], ∀g2∈weeks[w2], g1<g2}  holds(maxOverlap(g1 , g2 , 1))
Consider the following ACDTR program for optimising this constraint model.
maxOverlap(a, b, c1 ) \ maxOverlap(a, b, c2 ) ⇐⇒ c2 ≥ c1 | true
holds(true) ⇐⇒ 1
holds (false) ⇐⇒ 0
The first rule removes redundant maxOverlap constraints. The next two rules
implement partial evaluation of the holds auxiliary function which coerces a
Boolean to an integer.
By representing the constraint model as a giant term, we can optimise the
model by applying the ACDTR program. For example, consider the trivial case
with one week and two groups G1 and G2 . The model becomes
maxOverlap(G1 , G2 , 0) ∧ maximise(holds (maxOverlap(G1 , G2 , 1))).
The subterm holds(maxOverlap(G1 , G2 , 1)) simplifies to 1 using the conjunctive
context maxOverlap(G1 , G2 , 0).
⊓⊔
It is clear that pure CHRs are insufficient for constraint model mapping for
at least two reasons, namely
– a constraint model, e.g. Example 6, is typically not a flattened conjunction;
– some rules rewrite functions, e.g. rules (2) and (3) rewriting function holds,
which is outside the scope of CHRs (which rewrite constraints only).
Global Definitions. As we have seen conjunctive context matching provides
a natural mechanism for making global information available. In a constraint
model, structured data and constraint definitions are typically global, i.e. on the
top level, while access to the data and the use of a defined constraint is local, e.g.
the type information from Example 1. Another example is partial evaluation.
Example 7. The solver independent modelling language has support for arrays.
Take a model having an array a of given values. It could be represented as the
top-level term array (a, [3, 1, 4, 1, 5, 9, 2, 7]). Deeper inside the model, accesses to
the array a occur, such as in the constraint x > y + lookup(a, 3). The following
rules expand such an array lookup:
array(A, Array) \ lookup(A, Index ) ⇐⇒ list element(Array, Index )
list element([X|Xs], 0) ⇐⇒ X
list element ([X|Xs], N ) ⇐⇒ N > 0 | list element (Xs, N − 1)
Referring to the respective array of the lookup expression via its conjunctive
context allows us to ignore the direct context of the lookup, i.e. the concrete
constraint or expression in which it occurs.
⊓⊔
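The three lookup rules above can be traced in code. This Python sketch is only an approximation of the semantics: the conjunctive context array(A, Array), which in ACDTR is found at the top level of the term, is stood in for by a global table, and ARRAYS, lookup and list_element are our own names:

```python
# Top-level array definition, standing in for the conjunctive context
# array(a, [3, 1, 4, 1, 5, 9, 2, 7]).
ARRAYS = {'a': [3, 1, 4, 1, 5, 9, 2, 7]}

def list_element(xs, n):
    # list_element([X|Xs], 0) <=> X
    # list_element([X|Xs], N) <=> N > 0 | list_element(Xs, N - 1)
    x, *rest = xs
    return x if n == 0 else list_element(rest, n - 1)

def lookup(a, index):
    # array(A, Array) \ lookup(A, Index) <=> list_element(Array, Index)
    return list_element(ARRAYS[a], index)

# the constraint x > y + lookup(a, 3) would simplify to x > y + 1
print(lookup('a', 3))
```

As in the text, the concrete constraint surrounding the lookup is irrelevant; only the globally available array definition is consulted.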
Propagation rules. When processing a logical formula, it is often useful to be
able to specify that a new formula Q can be derived from an existing formula
P without consuming P . In basic term rewriting, the obvious rule P ⇐⇒ P ∧ Q
causes trivial non-termination. This issue is recognised in CHRs, which provide
support for inference or propagation rules. We account for this fact and use rules
of the form P =⇒ Q to express such circumstances.
Example 8. The following is the classic CHR leq program reimplemented for
ACD term rewriting (we omit the basic rules for logical connectives):
leq(X, X) ⇐⇒ true   (reflexivity)
leq(X, Y ) \ leq(Y, X) ⇐⇒ X = Y   (antisymmetry)
leq(X, Y ) \ leq(X, Y ) ⇐⇒ true   (idempotence)
leq(X, Y ) ∧ leq(Y, Z) =⇒ leq(X, Z)   (transitivity)
These rules are almost the same as the CHR version, with the exception of the second and third rules (antisymmetry and idempotence), which generalise their originals by using conjunctive context matching.
⊓⊔
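The effect of the transitivity propagation rule, and the role the propagation history plays in keeping it from firing twice on the same premises, can be approximated on a flat set of leq facts. This Python sketch (saturate and its flat-set encoding are ours; the real semantics works on annotated terms, as defined later) derives the contradiction from the goal leq(x, y) ∧ leq(y, z) ∧ ¬leq(x, z):

```python
def saturate(leqs, neg):
    """Close a set of leq pairs under transitivity; each premise pair may
    fire at most once (the history), then check the negated literal."""
    leqs = set(leqs)
    history = set()
    changed = True
    while changed:
        changed = False
        for (a, b) in list(leqs):
            for (c, d) in list(leqs):
                if b == c and ('trans', a, b, d) not in history:
                    history.add(('trans', a, b, d))
                    if (a, d) not in leqs:
                        leqs.add((a, d))
                        changed = True
    # a derived leq(X, Z) turns ¬leq(X, Z) into ¬true, i.e. false
    return 'false' if neg in leqs else 'unknown'

print(saturate({('x', 'y'), ('y', 'z')}, ('x', 'z')))  # false
```

Without the history set, transitivity would keep re-firing on the same two premises forever, which is exactly the trivial non-termination that propagation rules are designed to avoid.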
Propagation rules are also used for adding redundant information during model
mapping.
The rest of the paper is organised as follows. Section 2 covers the standard
syntax and notation of term rewriting. Section 3 defines the declarative and operational semantics of ACDTR. Section 4 describes a prototype implementation
of ACDTR as part of the G12 project. Section 5 compares ACDTR with related
languages. Finally, in Section 6 we conclude.
2 Preliminaries
In this section we briefly introduce the notation and terminology used in this
paper. Much of this is borrowed from term rewriting [3].
We use T (Σ, X) to represent the set of all terms constructed from a set of
function symbols Σ and set of variables X (assumed to be countably infinite).
We use Σ (n) ⊆ Σ to represent the set of function symbols of arity n.
A position is a string (sequence) of integers that uniquely determines a subterm of a term T , where ǫ represents the empty string. We define function T |p ,
which returns the subterm of T at position p as
T |ǫ = T
f (T1 , . . . , Ti , . . . , Tn )|ip = Ti |p
We similarly define a function T [S]p which replaces the subterm of T at position
p with term S. We define the set Pos(T ) to represent the set of all positions of
subterms in T .
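The functions T |p and T [S]p transcribe directly into code. Below is a Python sketch with terms as tuples whose component 0 is the function symbol, so the paper's 1-based argument index i is the tuple index i, and the empty tuple plays the role of ǫ:

```python
def subterm(t, p):
    """T|p: follow the position string into the term."""
    return t if not p else subterm(t[p[0]], p[1:])

def replace(t, p, s):
    """T[S]p: rebuild the spine with the subterm at p replaced by s."""
    if not p:
        return s
    i = p[0]
    return t[:i] + (replace(t[i], p[1:], s),) + t[i + 1:]

t = ('f', ('g', 'a', 'b'), 'c')
print(subterm(t, (1, 2)))        # b
print(replace(t, (1, 2), 'x'))   # ('f', ('g', 'a', 'x'), 'c')
```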
An identity is a pair (s, t) ∈ T (Σ, X) × T (Σ, X), which is usually written as
s ≈ t. Given a set of identities E, we define ≈E to be the set of identities closed
under the axioms of equational logic [3], i.e. symmetry, transitivity, etc.
We define the congruence class [T ]≈E = {S ∈ T (Σ, X)|S ≈E T } as the set
of terms equal to T with respect to E.
Finally, we define function vars(T ) to return the set of variables in T .
3 Syntax and Semantics
The syntax of ACDTR closely resembles that of CHRs. There are three types of
rules of the following form:
(simplification)
(propagation)
(simpagation)
r @ H ⇐⇒ g | B
r @ H =⇒ g | B
r @ C \ H ⇐⇒ g | B
where r is a rule identifier, and head H, conjunctive context C, guard g and body
B are arbitrary terms. The rule identifier is assumed to uniquely determine the
rule. A program P is a set of rules.
We assume that vars(g) ⊆ vars(H) or vars(g) ⊆ vars(H) ∪ vars(C) (for
simpagation rules). The rule identifier can be omitted. If g = true then the guard
can be omitted.
We present the declarative semantics of ACDTR based on equational logic.
First we define the set of operators that ACDTR treats specially.
Definition 1 (Operators). We define the set of associative-commutative operators as AC. The set AC must satisfy AC ⊆ Σ (2) and (∧) ∈ AC.
For our examples we assume that AC = {∧, ∨, +, ×}. We also treat the operator
∧ as distributive as explained below.
ACDTR supports a simple form of guards.
Definition 2 (Guards). A guard is a term. We denote the set of all “true”
guards as G, i.e. a guard g is said to hold iff g ∈ G. We assume that true ∈ G
and false ∉ G.
We can now define the declarative semantics for ACDTR. In order to do so
we employ a special binary operator where to explicitly attach a conjunctive
context to a term. Intuitively, the meaning of T where C is equivalent to that of
T provided C is true, otherwise the meaning of T where C is unconstrained. For
Boolean expressions, it is useful to interpret where as conjunction ∧, therefore
where-distribution, i.e. identity (6) below, becomes equivalent to ∧-distribution
(3). The advantage of distinguishing where and ∧ is that we are not forced to
extend the definition of ∧ to arbitrary (non-Boolean) functions.
We denote by B the following set of built-in identities:
A ◦ B ≈ B ◦ A   (1)
(A ◦ B) ◦ C ≈ A ◦ (B ◦ C)   (2)
T ≈ (T where true)   (3)
A ∧ B ≈ (A where B) ∧ B   (4)
T where (W1 ∧ W2 ) ≈ (T where W1 ) where W2   (5)
f (A1 , ..., Ai , ..., An ) where W ≈ f (A1 , ..., Ai where W, ..., An ) where W   (6)
for all ◦ ∈ AC, functions f ∈ Σ (n) , and i ∈ {1, . . . , n}.
Definition 3 (Declarative Semantics for ACDTR). The declarative semantics for an ACDTR program P (represented as a multiset of rules) is given
by the function JK defined as follows:
JP K
= {Jθ(R)K | ∀R, θ . R ∈ P ∧ θ(guard(R)) ∈ G} ∪ B
JH ⇐⇒ g | BK
= ∃vars(B)−vars(H) (H ≈ B)
JC \ H ⇐⇒ g | BK = ∃vars(B)−vars(C,H) (H where C ≈ B where C)
JH =⇒ g | BK
= ∃vars(B)−vars(H) (H ≈ H ∧ B)
where function guard(R) returns the guard of a rule.
The function JK maps ACDTR rules to identities between the head and the
body terms, where body-only variables are existentially quantified.3 Note that
there is a new identity for each possible binding of guard(R) that holds in G.
A propagation rule is equivalent to a simplification rule that (re)introduces the
head H (in conjunction with the body B) in the RHS. This is analogous to
propagation rules under CHRs.
A simpagation rule is equivalent to a simplification rule provided the conjunctive context is satisfied.
The built-in rules B from Definition 3 contain identities for creating/destroying (3) and (4), combining/splitting (5), and distributing downwards/upwards (6) a conjunctive context in terms of the where operator.
The set B also contains identities (1) and (2) for the associative/commutative
properties of the AC operators.
Example 9. Consider the following ACDTR rule and the corresponding identity.
JX = Y \ X ⇐⇒ Y K = (Y where X = Y ) ≈ (X where X = Y )   (7)
Under this identity and using the rules in B, we can show that f (A)∧(A = B) ≈
f (B) ∧ (A = B), as follows.
f (A) ∧ (A = B)
≈(4)  (f (A) where (A = B)) ∧ (A = B)
≈(6)  (f (A where (A = B)) where (A = B)) ∧ (A = B)
≈(7)  (f (B where (A = B)) where (A = B)) ∧ (A = B)
≈(6)  (f (B) where (A = B)) ∧ (A = B)
≈(4)  f (B) ∧ (A = B)
⊓⊔
3.1 Operational Semantics
In this section we describe the operational semantics of ACDTR. It is based
on the theoretical operational semantics of CHRs [1,4]. This includes support
for identifiers and propagation histories, and conjunctive context matching for
simpagation rules.
3 All other variables are implicitly universally quantified, where the universal quantifiers appear outside the existential ones.
Propagation history. The CHR concept of a propagation history, which prevents trivial non-termination of propagation rules, needs to be generalised over
arbitrary terms for ACDTR. A propagation history is essentially a record of all
propagation rule applications, which is checked to ensure a propagation rule is
not applied twice to the same (sub)term.
In CHRs, each constraint is associated with a unique identifier. If multiple
copies of the same constraint appear in the CHR store, then each copy is assigned
a different identifier. We extend the notion of identifiers to arbitrary terms.
Definition 4 (Identifiers). An identifier is an integer associated with each
(sub)term. We use the notation T #i to indicate that term T has been associated
with identifier i. A term T is annotated if T and all subterms of T are associated
with an identifier. We also define function ids(T ) to return the set of identifiers
in T , and term(T ) to return the non-annotated version of T .
For example, T = f (a#1, b#2)#3 is an annotated term, where ids(T ) = {1, 2, 3}
and term(T ) = f (a, b).
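A possible transcription of annotation in Python (the (node, id) pair encoding and the bottom-up numbering are our own choices); it reproduces the example term f (a#1, b#2)#3 above:

```python
import itertools

def annotate(t, counter=None):
    """Attach a fresh identifier to every subterm, bottom-up."""
    counter = counter or itertools.count(1)
    if isinstance(t, str):                       # atom
        return (t, next(counter))
    args = tuple(annotate(a, counter) for a in t[1:])
    return ((t[0],) + args, next(counter))

def ids(ta):
    node, i = ta
    out = {i}
    if not isinstance(node, str):
        for a in node[1:]:
            out |= ids(a)
    return out

def term(ta):
    """Strip the identifiers again."""
    node, _ = ta
    if isinstance(node, str):
        return node
    return (node[0],) + tuple(term(a) for a in node[1:])

ta = annotate(('f', 'a', 'b'))                   # f(a#1, b#2)#3
print(ta, ids(ta), term(ta))
```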
Identifiers are considered separate from the term. We could be more precise
by separating the two, i.e. explicitly maintain a map between Pos(T ) and the
identifiers for T . We do not use this approach for space reasons. We extend
and overload all of the standard operations over terms (e.g. from Section 2) to
annotated terms in the obvious manner. For example, the subterm relation T |p
over annotated terms returns the annotated term at position p. The exception
are elements of the congruence class [T ]≈AC , formed by the AC relation ≈AC ,
which we assume satisfies the following constraints.
A#i ◦ B#j ≈AC B#j ◦ A#i
A#i ◦ (B#j ◦ C#k) ≈AC (A#i ◦ B#j) ◦ C#k
We have neglected to mention the identifiers over AC operators. These identifiers
will be ignored later, so we leave them unconstrained.
A propagation history is a set of entries defined as follows.
Definition 5 (Entries). A propagation history entry is of the form (r @ E),
where r is a propagation rule identifier, and E is a string of identifiers. We
define function entry(r, T ) to return the propagation history entry of rule r for
annotated term T as follows.
entry(r, T ) = (r @ entry(T ))
entry(T1 ◦ T2 ) = entry(T1 ) entry(T2 )   (◦ ∈ AC)
entry(f (T1 , ..., Tn )#i) = i entry(T1 ) ... entry(Tn )   (otherwise)
This definition means that propagation history entries are unaffected by associativity, but are affected by commutativity.
Example 10. Consider the annotated term T = f ((a#1 ∧ b#2)#3)#4. We have
that T ∈ [T ]≈AC and T ′ = f ((b#2 ∧ a#1)#3)#4 ∈ [T ]≈AC . Although T
and T ′ belong to [T ]≈AC they have different propagation history entries, e.g.
entry(r, T ) = (r @ (4 1 2)) while entry(r, T ′ ) = (r @ (4 2 1)).
⊓⊔
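The entry function transcribes as follows (a Python sketch, reusing a (node, id) pair encoding of annotated terms that is our own choice): identifiers of AC operators are dropped, so the entries for the two commuted terms of Example 10 come out different, while re-associating an AC operator would leave them unchanged.

```python
AC = {'and', 'or'}       # ids of these operators are skipped in entries

def entry_ids(ta):
    node, i = ta
    if isinstance(node, str):                    # atom
        return (i,)
    sym, args = node[0], node[1:]
    inner = tuple(j for a in args for j in entry_ids(a))
    return inner if sym in AC else (i,) + inner

def entry(r, ta):
    """(r @ identifier string) for the annotated term ta."""
    return (r,) + entry_ids(ta)

# T = f((a#1 ∧ b#2)#3)#4 and its commuted variant from Example 10
T  = (('f', (('and', ('a', 1), ('b', 2)), 3)), 4)
Tc = (('f', (('and', ('b', 2), ('a', 1)), 3)), 4)
print(entry('r', T))     # ('r', 4, 1, 2)
print(entry('r', Tc))    # ('r', 4, 2, 1)
```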
When a (sub)term is rewritten into another, the new term is assigned a set
of new unique identifiers. We define the auxiliary function annotate(P, T ) = Ta
to map a set of identifiers P and un-annotated term T to an annotated term Ta
such that ids(Ta ) ∩ P = ∅ and |ids(Ta )| = |Pos(T )|. These conditions ensure that
all identifiers are new and unique.
When a rule is applied the propagation history must be updated accordingly
to reflect which terms are copied from the matching. For example, the rule
f (X) ⇐⇒ g(X, X) essentially clones the term matching X. The identifiers,
however, are not cloned. If a term is cloned, we expect that both copies will
inherit the propagation history of the original. Likewise, terms can be merged,
e.g. g(X, X) ⇐⇒ f (X) merges two instances of the term matching X. In this
case, the propagation histories of the copies are also merged.
To achieve this we duplicate entries in the propagation history for each occurrence of a variable in the body that also appeared in the head.
Definition 6 (Updating History). Define function
update(H, Ha , B, Ba , T0 ) = T1
where H and B are un-annotated terms, Ha and Ba are annotated terms, and T0
and T1 are propagation histories. T1 is a minimal propagation history satisfying
the following conditions:
– T0 ⊆ T1 ;
– ∀p ∈ Pos(H) such that H|p = V ∈ X (where X is the set of variables), and
∃q ∈ Pos(B) such that B|q = V , then define identifier renaming ρ such that
ρ(Ha |p ) and Ba |q are identical annotated terms. Then if E ∈ T0 we have
that ρ(E) ∈ T1 .
Example 11. Consider rewriting the term Ha = f ((a#1 ∧ b#2)#3)#4 with a propagation history of T0 = {(r @ (1 2))} using the rule f (X) ⇐⇒ g(X, X). The resulting term is Ba = g((a#5 ∧ b#6)#7, (a#8 ∧ b#9)#10)#11 and the new propagation history is T1 = {(r @ (1 2)), (r @ (5 6)), (r @ (8 9))}. ⊓⊔
Conjunctive context. According to the declarative semantics, a term T with
conjunctive context C is represented as (T where C). Operationally, we will
never explicitly build a term containing a where clause. Instead we use the
following function to compute the conjunctive context of a subterm on demand.
Definition 7 (Conjunctive Context). Given an (annotated) term T and a
position p ∈ Pos(T ), we define function cc(T, p) to return the conjunctive context
at position p as follows.
cc(T, ǫ) = true
cc(A ∧ B, 1p) = B ∧ cc(A, p)
cc(A ∧ B, 2p) = A ∧ cc(B, p)
cc(f (T1 , . . . , Ti , . . . , Tn ), ip) = cc(Ti , p)   (f ≠ ∧)
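Definition 7 also transcribes directly. A Python sketch (our encoding: binary ∧ as ('and', A, B), the context returned as a list of conjuncts with the empty list standing for true), checked against the term of Example 4:

```python
def cc(t, p):
    """Conjunctive context of the subterm of t at position p."""
    if not p:
        return []                      # true: the empty conjunction
    i, rest = p[0], p[1:]
    if t[0] == 'and':                  # collect the sibling conjunct
        other = t[2] if i == 1 else t[1]
        return [other] + cc(t[i], rest)
    return cc(t[i], rest)              # f ≠ ∧: pass straight through

# (x = 3) ∧ ((x² > y ∨ ((x = 4) ∧ U ∨ V)) ∧ W), the term of Example 4,
# with both ∧ and ∨ right-associated into binary nodes
eq4 = ('=', 'x', '4')
t = ('and', ('=', 'x', '3'),
     ('and',
      ('or', ('>', ('sq', 'x'), 'y'), ('or', ('and', eq4, 'U'), 'V')),
      'W'))
print(cc(t, (2, 1, 2, 1, 1)))   # context of x = 4: [(x = 3), W, U]
```

The result collects exactly the conjuncts (x = 3), U and W, matching the answer given in Example 4.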
States and transitions. The operational semantics are defined as a set of
transitions on execution states.
Definition 8 (Execution States). An execution state is a tuple of the form
hG, T, V, Pi, where G is a term (the goal), T is the propagation history, V is
the set of variables appearing in the initial goal and P is a set of identifiers.
We also define initial and final states as follows.
Definition 9 (Initial and Final States). Given an initial goal G for program
P , the initial state of G is
hGa , ∅, vars(G), ids(Ga )i
where Ga = annotate(∅, G). A final state is a state where no more rules are
applicable to the goal G.
We can now define the operational semantics of ACDTR as follows.
Definition 10 (Operational Semantics).
hG0 , T0 , V, P0 i ⇝ hG1 , T1 , V, P1 i
1. Simplify: There exists a (renamed) rule from P
H ⇐⇒ g | B
such that there exists a matching substitution θ and a term G′0 such that
– G0 ≈AC G′0
– ∃p ∈ Pos(G′0 ) . G′0 |p = θ(H)
– θ(g) ∈ G
– Ba = annotate(P0 , θ(B))
Then G1 = G′0 [Ba ]p , P1 = P0 ∪ ids(G1 ) and T1 = update(H, G′0 |p , B, Ba , T0 ).
2. Propagate: There exists a (renamed) rule from P
r @ H =⇒ g | B
such that there exists a matching substitution θ and a term G′0 such that
– G0 ≈AC G′0
– ∃p ∈ Pos(G′0 ) . G′0 |p = θ(H)
– θ(g) ∈ G
– entry(r, G′0 |p ) ∉ T0
– Ba = annotate(P0 , θ(B))
Then G1 = G′0 [G′0 |p ∧ Ba ]p , T1 = update(H, G′0 |p , B, Ba , T0 ) ∪ {entry(r, G′0 |p )}
and P1 = P0 ∪ ids(G1 ).
3. Simpagate: There exists a (renamed) rule from P
C \ H ⇐⇒ g | B
such that there exists a matching substitution θ and a term G′0 such that
h(leq(X1 , Y2 )3 ∧4 leq(Y5 , Z6 )7 ∧8 ¬9 leq(X10 , Z11 )12 ), ∅i
⇝trans h(leq(X1 , Y2 )3 ∧4 leq(Y5 , Z6 )7 ∧13 leq(X15 , Z16 )14 ∧8 ¬9 leq(X10 , Z11 )12 ), T i
⇝idemp h(leq(X1 , Y2 )3 ∧4 leq(Y5 , Z6 )7 ∧13 leq(X15 , Z16 )14 ∧8 ¬9 true17 ), T i
⇝simplify h(leq(X1 , Y2 )3 ∧4 leq(Y5 , Z6 )7 ∧13 leq(X15 , Z16 )14 ∧8 false18 ), T i
⇝simplify h(leq(X1 , Y2 )3 ∧4 leq(Y5 , Z6 )7 ∧13 false19 ), T i
⇝simplify h(leq(X1 , Y2 )3 ∧4 false20 ), T i
⇝simplify h(false21 ), T i
Fig. 1. Example derivation for the leq program.
– G0 ≈AC G′0
– ∃p ∈ Pos(G′0 ) . G′0 |p = θ(H)
– ∃D . θ(C) ∧ D ≈AC cc(G′0 , p)
– θ(g) ∈ G
– Ba = annotate(P0 , θ(B))
Then G1 = G′0 [Ba ]p , T1 = update(H, G′0 |p , B, Ba , T0 ) and P1 = P0 ∪ ids(G1 ).
Example. Consider the leq program from Example 8 with the goal
leq(X, Y ) ∧ leq(Y, Z) ∧ ¬leq(X, Z)
Figure 1 shows one possible derivation of this goal to the final state representing
f alse. For brevity, we omit the V and P fields, and represent identifiers as subscripts, i.e. T #i = Ti . Also we substitute T = {transitivity @ (3 2 1 7 5 6)}.
We can state a soundness result for ACDTR.
Theorem 1 (Soundness). If hG0 , T0 , V, Pi ⇝∗ hG′ , T ′ , V, P ′ i with respect to a program P , then JP K |= ∃vars(G′ )−V G0 ≈ G′ .
This means that for all algebras A that satisfy JP K, G0 and G′ are equivalent for some assignment of the fresh variables in G′ .
4 Implementation
We have implemented a prototype version of ACDTR as part of the mapping
language of the G12 project, called Cadmium. In this section we give an overview
of the implementation details. In particular, we will focus on the implementation
of conjunctive context matching, which is the main contribution of this paper.
Cadmium constructs normalised terms from the bottom up. Here, a normalised term is one that cannot be reduced further by an application of a rule.
Given a goal f (t1 , ..., tn ), we first must recursively normalise all of t1 , ..., tn (to
say s1 , ..., sn ), and then attempt to find a rule that can be applied to the top-level
of f (s1 , ..., sn ). This is the standard execution algorithm used by many TRSs
implementations.
This approach of normalising terms bottom up is complicated by the consideration of conjunctive context matching. This is because the conjunctive context
of the current term appears “higher up” in the overall goal term. Thus conjunctive context must be passed top down, yet we are normalising bottom up. This
means there is no guarantee that the conjunctive context is normalised.
Example 12. Consider the following ACDTR program that uses conjunctive context matching.
X = V \ X ⇐⇒ var(X) ∧ nonvar(V ) | V.
one(X) ⇐⇒ X = 1.
not one(1) ⇐⇒ false.
Consider the goal not one(A) ∧ one(A), which we expect should be normalised to false. Assume that the sub-term not one(A) is selected for normalisation first.
The conjunctive context for not one(A) (and its subterm A) is one(A). No rule
is applicable, so not one(A) is not reduced.
Next the subterm one(A) is reduced. The second rule will fire resulting in the new term A = 1. Now the conjunctive context for the first term not one(A) has changed to A = 1, so we expect that A should be rewritten to the number 1. However not one(A) has already been considered for normalisation. ⊓⊔
The current Cadmium prototype solves this problem by re-normalising terms
when and if the conjunctive context “changes”. For example, when the conjunctive context one(A) changes to A = 1, the term not one(A) will be renormalised to not one(1) by the first rule.
The general execution algorithm for Cadmium is shown in Figure 2. Function
normalise takes a term T , a substitution θ, a conjunctive context CC and a
Boolean value Ch which keeps track of when the conjunctive context of the
current subterm has changed. If Ch = false, then we can assume the substitution
θ maps variables to normalised terms. For the initial goal, we assume θ is empty;
otherwise, if we are executing the body of a rule, θ is the matching substitution.
Operationally, normalise splits into three cases depending on what T is. If
T is a variable and the conjunctive context has changed (i.e. Ch = true),
then θ(T) is no longer guaranteed to be normalised. In this case we return the
result of renormalising θ(T) with respect to CC. Otherwise, if Ch = false, we
simply return θ(T), which must already be normalised. If T is a conjunction
T1 ∧ T2, we repeatedly call normalise on each conjunct with the other added
to the conjunctive context. This is repeated until a fixed point is reached (i.e.
further normalisation does not change either conjunct), and then we return the
result of apply rule on the resulting conjunction, which we will discuss below. This fixed
point calculation accounts for the case where the conjunctive context of a term
changes, as shown in Example 12. Otherwise, if T is any other term of the form
f (T1 , ..., Tn ), construct the new term T ′ by normalising each argument. Finally
we return the result of apply rule applied to T ′ .
The function call apply rule(T ′ ,CC) will attempt to apply a rule to normalised
term T ′ with respect to conjunctive context CC. If a matching rule is found, then
normalise(T, θ, CC, Ch)
    if is var(T)
        if Ch
            return normalise(θ(T), θ, CC, false)
        else
            return θ(T)
    else if T = T1 ∧ T2
        do
            T1′ := T1
            T2′ := T2
            T1 := normalise(T1′, θ, T2′ ∧ CC, true)
            T2 := normalise(T2′, θ, T1′ ∧ CC, true)
        while T1 ≠ T1′ ∨ T2 ≠ T2′
        return apply rule(T1′ ∧ T2′, CC)
    else
        T = f(T1, ..., Tn)
        T′ := f(normalise(T1, θ, CC, Ch), ..., normalise(Tn, θ, CC, Ch))
        return apply rule(T′, CC)

Fig. 2. Pseudo code of the Cadmium execution algorithm.
the result of normalise(B, θ, CC, false) is returned, where B is the (renamed) rule
body and θ is the matching substitution. Otherwise, T′ is simply returned.
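The conjunction case of Figure 2 — cross-normalise the two conjuncts, each with the other added to its conjunctive context, until a fixed point is reached — can be sketched as follows (illustrative Python; `normalise_one` and `apply_rule` are hypothetical stand-ins for the corresponding functions of Figure 2):

```python
def normalise_conj(t1, t2, normalise_one, apply_rule):
    """Normalise T1 /\ T2: each conjunct is renormalised with the other
    added to its conjunctive context until neither changes, then a rule
    is tried on the whole conjunction."""
    while True:
        old1, old2 = t1, t2
        t1 = normalise_one(old1, cc=old2)   # T2 joins T1's conjunctive context
        t2 = normalise_one(old2, cc=old1)   # and vice versa
        if t1 == old1 and t2 == old2:       # fixed point reached
            return apply_rule(('and', t1, t2))

# Toy normaliser: 'a' rewrites to 'b' whenever 'b' is in its context.
demo = lambda t, cc: 'b' if (t == 'a' and cc == 'b') else t
result = normalise_conj('a', 'b', demo, lambda t: t)   # ('and', 'b', 'b')
```

The fixed-point loop is what lets a change in one conjunct (as in Example 12) trigger renormalisation of the other.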
5 Related Work
ACDTR is closely related to both TRS and CHRs, and in this section we compare
the three languages.
5.1 AC Term Rewriting Systems
The problem of dealing with associative commutative operators in TRS is well
studied. A popular solution is to perform the rewriting modulo some permutation
of the AC operators. Although this complicates the matching algorithm, the
problem of trivial non-termination (e.g. by continually rewriting with respect to
commutativity) is solved.
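A common way to realise rewriting modulo AC is to keep terms in a canonical form — flatten nested applications of an AC operator and sort its arguments — so that commutative permutations never have to be rewritten explicitly. A sketch of that idea (illustrative only, not the matching algorithm of an actual ACTRS engine):

```python
AC_OPS = {'and', 'or', '+'}

def ac_canon(term):
    """Flatten nested AC operators and sort their arguments, so terms that
    are equal modulo AC get an identical representation."""
    if not isinstance(term, tuple):
        return term
    f, *args = term
    args = [ac_canon(a) for a in args]
    if f in AC_OPS:
        flat = []
        for a in args:                       # flatten f(f(x,y),z) -> f(x,y,z)
            if isinstance(a, tuple) and a[0] == f:
                flat.extend(a[1:])
            else:
                flat.append(a)
        args = sorted(flat, key=repr)        # canonical argument order
    return (f, *args)

# x /\ (y /\ x) and (x /\ x) /\ y are equal modulo AC:
assert ac_canon(('and', 'x', ('and', 'y', 'x'))) == \
       ac_canon(('and', ('and', 'x', 'x'), 'y'))
```

With such a canonical form, syntactic equality of canonical terms decides AC equality, at the price of a more involved matching algorithm, as noted above.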
ACDTR subsumes ACTRS (Associative Commutative TRS) in that we have
introduced distributivity (via simpagation rules), and added some “CHR-style”
concepts such as identifiers and propagation rules.
Given an ACTRS program, we can map it to an equivalent ACDTR program
by interpreting each ACTRS rule H → B as the ACDTR rule H ⇐⇒ B. We
can now state the theorem relating ACTRS and ACDTR.
Theorem 2. Let P be an ACTRS program and T a ground term; then
T →∗ S under P iff ⟨Ta, ∅, ∅, ids(Ta)⟩ ↣∗ ⟨Sa, ∅, ∅, P⟩ under α(P) (where
Ta = annotate(∅, T)) for some P and term(Sa) = S.
5.2 CHRs and CHR∨
ACDTR has been deliberately designed to be an extension of CHRs. Several
CHR concepts, e.g. propagation rules, etc., have been adapted.
There are differences between CHRs and ACDTR. The main difference is
that ACDTR does not have a “built-in” or “underlying” solver, i.e. ACDTR is
not a constraint programming language. However it is possible to encode solvers
directly as rules, e.g. the simple leq solver from Example 8. Another important
difference is that CHRs is based on predicate logic, where there exists a distinction between predicate symbols (i.e. the names of the constraints) and functions
(used to construct terms). ACDTR is based on equational logic between terms,
hence there is no distinction between predicates and functions (a predicate is
just a Boolean function). To overcome this, we assume the existence of a set
Pred, which contains the set of function symbols that are Boolean functions.
We assume that AC ∩ Pred = {∧(2) }.
The mapping between a CHR program and an ACDTR program is simply
α(P ) = P ∪ {X ∧ true ⇐⇒ X}.4 However, we assume program P is restricted
as follows:
– rules have no guards apart from implicit equality guards; and
– the only built-in constraint is true
and the initial goal G is also restricted:
– G must be of the form G0 ∧ ... ∧ Gn for n > 0;
– Each Gi is of the form fi (A0 , ..., Am ) for m ≥ 0 and fi ∈ Pred;
– For all p ∈ Pos(Aj ), 0 ≤ j ≤ m we have that if Aj |p = g(B0 , ..., Bq ) then
g (q) 6∈ AC and g (q) 6∈ Pred.
These conditions disallow predicate symbols from appearing as arguments in
CHR constraints.
Theorem 3. Let P be a CHR program, and G an initial goal, both satisfying
the above conditions. Then ⟨G, ∅, true, ∅⟩_1^V ↣∗ ⟨∅, S, true, T⟩_i^V (for some T, i,
and V = vars(G)) under the theoretical operational semantics [4] for CHRs
iff ⟨Ga, ∅, V, ids(Ga)⟩ ↣∗ ⟨Sa, T′, V, P⟩ (for some T′, P) under ACDTR, where
term(Sa) = S1 ∧ ... ∧ Sn and S = {S1 #i1 , ..., Sn #in } for some identifiers i1 , ..., in .
We believe that Theorem 3 could be extended to include CHR programs that
extend an underlying solver, provided the rules for handling tell constraints are
added to the ACDTR program. For example, we can combine rules for rational
tree unification with the leq program from Example 8 to get a program equivalent
to the traditional leq program under CHRs.
ACDTR generalises CHRs by allowing other operators besides conjunction
inside the head or body of rules. One such extension of CHRs has been studied
before, namely CHR∨ [2] which allows disjunction in the body. Unlike ACDTR,
4 There is one slight difference in syntax: CHRs use ',' to represent conjunction, whereas ACDTR uses '∧'.
which manipulates disjunction syntactically, CHR∨ typically finds solutions using
backtracking search.
One notable implementation of CHR∨ is [6], which has an operational semantics described as an and/or (∧/∨) tree rewriting system. A limited form of
conjunctive context matching is used, similar to that used by ACDTR, based
on the knowledge that conjunction ∧ distributes over disjunction ∨. ACDTR
generalises this by distributing over all functions.
6 Future Work and Conclusions
We have presented a powerful new rule-based programming language, ACDTR,
that naturally extends both AC term rewriting and CHRs. The main contribution is the ability to match a rule against the conjunctive context of a (sub)term,
taking advantage of the distributive property of conjunction over all possible
functions. We have shown this is a natural way of expressing some problems,
and by building the distributive property into the matching algorithm, we avoid
non-termination issues that arise from naively implementing distribution (e.g.
as rewrite rules).
We intend that ACDTR will become the theoretical basis for the Cadmium
constraint mapping language as part of the G12 project [7]. Work on ACDTR
and Cadmium is ongoing, and there is a wide scope for future work, such as
confluence, termination and implementation/optimisation issues.
References
1. S. Abdennadher. Operational semantics and confluence of constraint propagation
rules. In Gert Smolka, editor, Proceedings of the Third International Conference
on Principles and Practice of Constraint Programming, LNCS 1330, pages 252–266.
Springer-Verlag, 1997.
2. S. Abdennadher and H. Schütz. CHR∨ : A flexible query language. In International
conference on Flexible Query Answering Systems, number 1495 in LNCS, pages
1–14, Roskilde, Denmark, 1998. Springer-Verlag.
3. F. Baader and T. Nipkow. Term rewriting and all that. Cambridge Univ. Press,
1998.
4. G. Duck, P. Stuckey, M. Garcia de la Banda, and C. Holzbaur. The refined operational semantics of constraint handling rules. In B. Demoen and V. Lifschitz,
editors, Proceedings of the 20th International Conference on Logic Programming,
LNCS 3132, pages 90–104. Springer-Verlag, September 2004.
5. T. Frühwirth. Theory and practice of constraint handling rules. Journal of Logic
Programming, 37:95–138, 1998.
6. L. Menezes, J. Vitorino, and M. Aurelio. A High Performance CHR∨ Execution
Engine. In Second Workshop on Constraint Handling Rules, Sitges, Spain, 2005.
7. P.J. Stuckey, M. Garcia de la Banda, M. Maher, K. Marriott, J. Slaney, Z. Somogyi,
M. Wallace, and T. Walsh. The G12 project: Mapping solver independent models
to efficient solutions. In M. Gabrielli and G. Gupta, editors, Proceedings of the
21st International Conference on Logic Programming, number 3668 in LNCS, pages
9–13. Springer-Verlag, 2005.
A Examples
A.1 Further Motivating Examples
Example 13 (Conjunctive Normal Form). One of the roles of mapping models
is to convert a model written in an expressive language into a restricted language which is easy to solve. Many standard approaches to solving propositional
formulae require that the formulae are in conjunctive normal form (CNF). Disjunction ∨ is distributive over ∧, which can be used to establish CNF in a direct
way, using the oriented rule
P ∨ Q ∧ R → (P ∨ Q) ∧ (P ∨ R).
CNF conversion based on this rule can exponentially increase the size of the formula. This undesirable circumstance means that in practice CNF conversions are
preferred that replace subformulae by new propositional atoms, which increases
the formula size at most linearly.
Let us formulate this approach in rewrite rules. To keep this example simple,
we assume that the non-CNF subformula P ∨ Q ∧ R occurs in a positive context
(for example by a preprocessing into negation normal form). We replace Q ∧ R
by a new atom s defined by the logical implication s ⇒ (Q ∧ R). In rewrite rule
form, we have
P ∨ Q ∧ R → (P ∨ s) ∧ (¬s ∨ Q) ∧ (¬s ∨ R).   (8)
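Rule (8) is the classic Tseitin-style transformation: the offending subformula is named by a fresh atom, so the formula grows only linearly. A small sketch of the idea (hypothetical clause representation; the fresh atoms s0, s1, ... are invented as needed):

```python
import itertools

fresh = (f's{i}' for i in itertools.count())   # stream of fresh atoms s0, s1, ...

def name_conjunction(p, q, r):
    """Rewrite P \/ (Q /\ R) into (P \/ s) /\ (~s \/ Q) /\ (~s \/ R)
    for a fresh atom s, as in rule (8); assumes a positive context."""
    s = next(fresh)
    return [(p, s), (('not', s), q), (('not', s), r)]

clauses = name_conjunction('P', 'Q', 'R')
# three clauses: linear growth, instead of distributing P into Q /\ R
```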
Unit resolution and unit subsumption can be formalised in rewrite rules. Here
are two versions, one using conjunctive context and a regular one:
with conj. context:
P \ P ⇐⇒ true
P \ ¬P ⇐⇒ false

regular:
P ∧ P → P
P ∧ (P ∨ Q) → P
P ∧ ¬P → false
P ∧ (¬P ∨ Q) → P ∧ Q
We furthermore assume rules eliminating the logical constants true and false
from conjunctions and disjunctions in the obvious way. Let us contrast the two
rule sets for the formula (a∨b∧(c∨d))∧d. The following is a terminating rewrite
history:
with conj. context:
(a ∨ b ∧ (c ∨ d)) ∧ d
(a ∨ b ∧ (c ∨ true)) ∧ d
(a ∨ b ∧ true) ∧ d
(a ∨ b) ∧ d

regular:
(a ∨ b ∧ (c ∨ d)) ∧ d
(a ∨ s) ∧ (¬s ∨ b) ∧ (¬s ∨ c ∨ d) ∧ d
(a ∨ s) ∧ (¬s ∨ b) ∧ true ∧ d
(a ∨ s) ∧ (¬s ∨ b) ∧ d
To obtain the simple conjunct (a ∨ b) using the regular rule format, a rule expressing binary resolution, i.e. from (P ∨ S) ∧ (¬S ∨ Q) follows (P ∨ Q), would be
required. However, such a rule is undesirable as it would create arbitrary binary
resolvents, increasing formula size. Moreover, the superfluous atom s remains in
the formula. ⊓⊔
Example 14 (Type remapping). One of the main model mappings we are interested in expressing is where the type of a variable is changed from a high level
type easy for modelling to a low level type easy to solve. A prime example of this
is mapping a set variable x ranging over finite subsets of some fixed set s to an array x′ of 0/1 variables indexed by s. So for variable x we have e ∈ x ⇔ x′ [e] = 1.
For this example we use the more concrete modelling syntax: t : x indicates that
variable x has type t. The types we are interested in are: l..u, an integer in the
range l to u; set of S, a set ranging over elements in S; and array[I] of E, an
array indexed by the set I of elements of type E. We use forall and sum looping
constructs which iterate over sets. This is expressed in ACDTR as follows.
set of s : x ⇐⇒ array[s] of 0..1 : x′ ∧ map(x, x′)   (typec)
map(x, x′) \ x ⇐⇒ x′   (vsubs)
array[s] of 0..1 : x \ card(x) ⇐⇒ sum(e in s) x[e]   (card)
array[s] of 0..1 : x ∧ array[s] of 0..1 : y \ x ∩ y ⇐⇒
    z :: (array[s] of 0..1 : z ∧ forall(e in s) z[e] = x[e] && y[e])   (cap)
array[s] of 0..1 : x ∧ array[s] of 0..1 : y \ x ∪ y ⇐⇒
    z :: (array[s] of 0..1 : z ∧ forall(e in s) z[e] = x[e] || y[e])   (cup)
array[s] of 0..1 : x \ x = ∅ ⇐⇒ forall(e in s) x[e] = 0   (emptyset)
card(t :: c) ⇐⇒ card(t) :: c   (↑card)
(t1 :: c) ∪ t2 ⇐⇒ t1 ∪ t2 :: c   (↑cupl)
t1 ∪ (t2 :: c) ⇐⇒ t1 ∪ t2 :: c   (↑cupr)
(t1 :: c) ∩ t2 ⇐⇒ t1 ∩ t2 :: c   (↑capl)
t1 ∩ (t2 :: c) ⇐⇒ t1 ∩ t2 :: c   (↑capr)
(t1 :: c) = t2 ⇐⇒ t1 = t2 ∧ c   (↑eql)
t1 = (t2 :: c) ⇐⇒ t1 = t2 ∧ c   (↑eqr)
(t1 :: c) ≤ t2 ⇐⇒ t1 ≤ t2 ∧ c   (↑leql)
t1 ≤ (t2 :: c) ⇐⇒ t1 ≤ t2 ∧ c   (↑leqr)
(t :: c1) :: c2 ⇐⇒ t :: (c1 ∧ c2)   (↑cc)
maxOverlap(x, y, c) ⇐⇒ card(x ∩ y) ≤ c   (maxO)
The :: constructor adds some local conjunctive context to an arbitrary term (like
where), and the last 11 rules bar one move this context outwards to the nearest
predicate scope. The last rule defines the maxOverlap predicate. The :: annotations
are used to introduce new variables z together with their types and the constraints
upon them. As
an example, consider the following derivation:

set of 1..n : x ∧ set of 1..n : y ∧ maxOverlap(x, y, 1)
↣maxO set of 1..n : x ∧ set of 1..n : y ∧ card(x ∩ y) ≤ 1
↣typec array[1..n] of 0..1 : x′ ∧ map(x, x′) ∧ set of 1..n : y ∧ card(x ∩ y) ≤ 1
↣vsubs array[1..n] of 0..1 : x′ ∧ map(x, x′) ∧ set of 1..n : y ∧ card(x′ ∩ y) ≤ 1
↣typec array[1..n] of 0..1 : x′ ∧ map(x, x′) ∧ array[1..n] of 0..1 : y′ ∧ map(y, y′) ∧ card(x′ ∩ y) ≤ 1
↣vsubs array[1..n] of 0..1 : x′ ∧ map(x, x′) ∧ array[1..n] of 0..1 : y′ ∧ map(y, y′) ∧ card(x′ ∩ y′) ≤ 1
↣cap array[1..n] of 0..1 : x′ ∧ map(x, x′) ∧ array[1..n] of 0..1 : y′ ∧ map(y, y′) ∧
    card(z :: (array[1..n] of 0..1 : z ∧ forall(e in 1..n) z[e] = x′[e] && y′[e])) ≤ 1
↣↑card array[1..n] of 0..1 : x′ ∧ map(x, x′) ∧ array[1..n] of 0..1 : y′ ∧ map(y, y′) ∧
    card(z) :: (array[1..n] of 0..1 : z ∧ forall(e in 1..n) z[e] = x′[e] && y′[e]) ≤ 1
↣↑leql array[1..n] of 0..1 : x′ ∧ map(x, x′) ∧ array[1..n] of 0..1 : y′ ∧ map(y, y′) ∧
    card(z) ≤ 1 ∧ array[1..n] of 0..1 : z ∧ forall(e in 1..n) z[e] = x′[e] && y′[e]
The final goal is a flat conjunction of constraints and types. It can be similarly
translated into a conjunction of pseudo-Boolean constraints that can be sent to
a finite domain solver, by unrolling forall and replacing the arrays by sequences
of n variables. ⊓⊔
Example 15 (Rational Tree Unification). We can directly express the rational
tree unification algorithm of Colmerauer5 as an ACD term rewriting system.
f(s1, . . . , sn) = f(t1, . . . , tn) ⇐⇒ s1 = t1 ∧ · · · ∧ sn = tn   (split)
f(s1, . . . , sn) = g(t1, . . . , tm) ⇐⇒ false   (fail)
The (split) rule must be defined for each constructor f /n and the (fail) rule for
each pair of different constructors f /n and g/m. The remaining rules are:
x = x ⇐⇒ var(x) | true   (id)
t = x ⇐⇒ var(x) ∧ nonvar(t) | x = t   (flip)
x = s \ x = t ⇐⇒ var(x) ∧ nonvar(s) ∧ size(s) ≤ size(t) | s = t   (tsubs)
x = y \ x ⇐⇒ var(x) ∧ var(y) ∧ x ≢ y | y   (vsubs)
where size(t) is the size of the term t in terms of number of symbols, and ≡ is
syntactic identity. Even though the goals are a single conjunction of constraints,
ACD is used for succinctly expressing the (vsubs) rule which replaces one variable
by another in any other position.
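The effect of these rules — orient equations, substitute, and decompose — can be mimicked by a small unification procedure without an occurs check; memoising pairs of terms already being unified (an addition of this sketch, not one of the rules above) makes it terminate on cyclic, rational-tree bindings. A rough Python sketch, not the ACDTR machinery itself:

```python
def unify(eqs):
    """Rational-tree unification (no occurs check). Terms are tuples
    ('f', args...) or variable strings; returns bindings, or None on clash."""
    subst, seen = {}, set()

    def walk(t):
        while isinstance(t, str) and t in subst:
            t = subst[t]
        return t

    while eqs:
        s, t = eqs.pop()
        s, t = walk(s), walk(t)
        if s == t or (s, t) in seen:
            continue                          # (id), or already being unified
        seen.add((s, t))
        if not isinstance(s, str) and isinstance(t, str):
            s, t = t, s                       # (flip): orient to var = term
        if isinstance(s, str):
            subst[s] = t                      # (tsubs)/(vsubs): bind variable
        elif s[0] == t[0] and len(s) == len(t):
            eqs.extend(zip(s[1:], t[1:]))     # (split): decompose
        else:
            return None                       # (fail): constructor clash
    return subst

# x = y /\ f(f(x)) = x /\ y = f(f(f(y))) succeeds over rational trees:
bindings = unify([('x', 'y'), (('f', ('f', 'x')), 'x'),
                  ('y', ('f', ('f', ('f', 'y'))))])
```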
5 A. Colmerauer. Prolog and Infinite Trees. Logic Programming, APIC Studies in Data Processing (16). Academic Press. 1992.
The following derivation illustrates the unification process in action; the
underlined parts show the matching elements.
x = y ∧ f(f(x)) = x ∧ y = f(f(f(y)))
↣flip x = y ∧ x = f(f(x)) ∧ y = f(f(f(y)))
↣vsubs x = y ∧ y = f(f(x)) ∧ y = f(f(f(y)))
↣vsubs x = y ∧ y = f(f(y)) ∧ y = f(f(f(y)))
↣tsubs x = y ∧ y = f(f(y)) ∧ f(f(y)) = f(f(f(y)))
↣split x = y ∧ y = f(f(y)) ∧ f(y) = f(f(y))
↣split x = y ∧ y = f(f(y)) ∧ y = f(y)
↣tsubs x = y ∧ f(y) = f(f(y)) ∧ y = f(y)
↣split x = y ∧ y = f(y) ∧ y = f(y)
↣tsubs x = y ∧ y = f(y) ∧ f(y) = f(y)
↣split x = y ∧ y = f(y) ∧ y = y
↣id x = y ∧ y = f(y) ∧ true
⊓⊔
A.2 Expanded Examples
The purpose of this section is to show some example derivations under the operational semantics of ACDTR, rather than high-level descriptions. We allow for
some shorthand, namely T #i = Ti .
Identifiers and conjunctive context. In this section we explain parts of the
derivation from Example 15 in more detail. The initial goal is
x = y ∧ f (f (x)) = x ∧ y = f (f (f (y)))
which corresponds to the initial state:
⟨(((x1 = y2)3 ∧ (f(f(x4)5)6 = x7)8)9 ∧ (y10 = f(f(f(y11)12)13)14)15)16, ∅,
{x, y}, {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}⟩
The initial state is a quadruple containing an annotated version of the goal, an
empty propagation history, the set of variables in the goal, and a set of "used"
identifiers.
The first derivation step is a Simplify transition with the flip rule:
⟨(((x1 = y2)3 ∧ (f(f(x4)5)6 = x7)8)9 ∧ (y10 = f(f(f(y11)12)13)14)15)16, ∅,
{x, y}, {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}⟩
↣ ⟨(((x1 = y2)3 ∧ (x17 = f(f(x18)19)20)21)9 ∧ (y10 = f(f(f(y11)12)13)14)15)16, ∅,
{x, y}, {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21}⟩
We have replaced the annotated subterm (f(f(x4)5)6 = x7)8 with
(x17 = f(f(x18)19)20)21 (i.e. flipped the operands of the equality) and reannotated the
new term with fresh identifiers. These were also added to the set of used identifiers. Since the propagation history is empty, it remains unchanged.
The next derivation step is a Simpagate transition with the vsubs rule.
⟨(((x1 = y2)3 ∧ (x17 = f(f(x18)19)20)21)9 ∧ (y10 = f(f(f(y11)12)13)14)15)16, ∅,
{x, y}, {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21}⟩
↣ ⟨(((x1 = y2)3 ∧ (y22 = f(f(x18)19)20)21)9 ∧ (y10 = f(f(f(y11)12)13)14)15)16, ∅,
{x, y}, {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22}⟩
The conjunctive context for subterm x17 is
cc(Ga , p) = (x1 = y2 )3 ∧ (y10 = f (f (f (y11 )12 )13 )14 )15 ∧ true
where Ga is the current goal and p is the position of x17. The first conjunct
matches the conjunctive context of the vsubs rule, thus subterm x17 is replaced
with y22. Identifier 22 is added to the set of used identifiers.
Execution proceeds until the final state
⟨(x = y ∧ y = f(y)) ∧ true, ∅, {x, y}, P⟩
is reached, for some annotation of the goal and some set of identifiers P. This is
a final state because no more rules are applicable to it.
AC matching and propagation histories. Consider the propagation rule
from the leq program:
trans @ leq(X, Y) ∧ leq(Y, Z) =⇒ X ≢ Y ∧ Y ≢ Z | leq(X, Z)
and the initial state
⟨leq(A1, B2)3 ∧4 leq(B5, A6)7, ∅, {A, B}, {1, 2, 3, 4, 5, 6, 7}⟩.
We can apply Propagate directly (i.e. without permuting the conjunction)
to arrive at the state:
⟨(leq(A1, B2)3 ∧4 leq(B5, A6)7) ∧8 leq(A9, A10)11,
{trans @ (3 1 2 7 6 5)}, {A, B}, {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11}⟩.
The propagation history prevents the rule from firing on the same terms
again, however we can permute the terms to find a new matching. Namely, we
can permute the annotated goal (which we call Ga )
(leq(A1 , B2 )3 ∧4 leq(B5 , A6 )7 ) ∧8 leq(A9 , A10 )11
to
(leq(B5 , A6 )7 ∧4 leq(A1 , B2 )3 ) ∧8 leq(A9 , A10 )11 .
The latter is an element of [Ga ]AC , and the identifiers have been preserved in the
correct way. The entry trans @ (7 6 5 3 1 2) is not in the propagation history,
so we can apply Propagate again to arrive at:
⟨((leq(B5, A6)7 ∧4 leq(A1, B2)3) ∧12 leq(B13, B14)15) ∧8 leq(A9, A10)11,
{trans @ (3 1 2 7 6 5), trans @ (7 6 5 3 1 2)}, {A, B}, {1...15}⟩.
Now the propagation history prevents the rule trans being applied to the
first two leq constraints. The guard also prevents the trans rule firing on either
of the two new constraints,6 thus we have reached a final state.
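The bookkeeping involved — fire a propagation rule only when the tuple of matched identifiers is absent from the history, recording each firing — can be sketched as follows (illustrative only; the real transition system also handles AC permutation and identifier renaming):

```python
def try_propagate(rule_name, matched_ids, history, fire):
    """Apply a propagation rule at most once per tuple of matched term
    identifiers, recording each firing in the propagation history."""
    entry = (rule_name, tuple(matched_ids))
    if entry in history:
        return False            # already fired on these exact terms
    history.add(entry)
    fire()
    return True

history = set()
fired = []
try_propagate('trans', [3, 1, 2, 7, 6, 5], history, lambda: fired.append('leq(A,A)'))
try_propagate('trans', [3, 1, 2, 7, 6, 5], history, lambda: fired.append('dup'))
try_propagate('trans', [7, 6, 5, 3, 1, 2], history, lambda: fired.append('leq(B,B)'))
assert fired == ['leq(A,A)', 'leq(B,B)']   # the duplicate match is blocked
```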
Updating propagation histories. Consider a modified version of the previous
example, now with two rules:
X ∧ X ⇐⇒ X
trans @ leq(X, Y ) ∧ leq(Y, Z) =⇒ leq(X, Z)
The first rule enforces idempotence of conjunction.
Consider the initial state:
⟨leq(A1, A2)3 ∧4 leq(A5, A6)7 ∧8 leq(A9, A10)11, ∅, {A}, {1...11}⟩
We apply the trans rule to the first two copies of the leq constraint (with identifiers 3 and 7).
⟨leq(A1, A2)3 ∧4 leq(A5, A6)7 ∧8 leq(A9, A10)11 ∧12 leq(A13, A14)15,
{trans @ (3 1 2 7 5 6)}, {A}, {1...15}⟩
Next we apply idempotence to leq constraints with identifiers 7 and 11.
⟨leq(A1, A2)3 ∧4 leq(A16, A17)18 ∧12 leq(A13, A14)15,
{trans @ (3 1 2 7 5 6), trans @ (3 1 2 18 16 17)}, {A}, {1...18}⟩
An extra entry (trans @ (3 1 2 18 16 17)) is added to the propagation history
in order to satisfy the requirements of Definition 6. This is because we have
replaced the annotated constraint leq(A5 , A6 )7 with the newly annotated term
leq(A16 , A17 )18 , which defines an identifier renaming
ρ = {5 ↦ 16, 6 ↦ 17, 7 ↦ 18}.
Since E = (trans @ (3 1 2 7 5 6)) is an element of the propagation history, we
have that ρ(E) = (trans @ (3 1 2 18 16 17)) must also be an element, and hence
the history is expanded.
6 Without the guard, both ACDTR and CHRs are not guaranteed to terminate.
Fisher-Rao Metric, Geometry, and Complexity of Neural Networks

arXiv:1711.01530v1 [cs.LG] 5 Nov 2017

Tengyuan Liang∗, University of Chicago
Tomaso Poggio†, Massachusetts Institute of Technology
Alexander Rakhlin‡, University of Pennsylvania
James Stokes§, University of Pennsylvania
Abstract.
We study the relationship between geometry and capacity measures for
deep neural networks from an invariance viewpoint. We introduce a new
notion of capacity — the Fisher-Rao norm — that possesses desirable invariance properties and is motivated by Information Geometry. We discover
an analytical characterization of the new capacity measure, through which
we establish norm-comparison inequalities and further show that the new
measure serves as an umbrella for several existing norm-based complexity
measures. We discuss upper bounds on the generalization error induced
by the proposed measure. Extensive numerical experiments on CIFAR-10
support our theoretical findings. Our theoretical analysis rests on a key
structural lemma about partial derivatives of multi-layer rectifier networks.
Key words and phrases: deep learning, statistical learning theory, information geometry, Fisher-Rao metric, invariance, ReLU activation, natural
gradient, capacity control, generalization error.
1. INTRODUCTION
Beyond their remarkable representation and memorization ability, deep neural
networks empirically perform well in out-of-sample prediction. This intriguing
out-of-sample generalization property poses two fundamental theoretical questions:
• What are the complexity notions that control the generalization aspects of
neural networks?
∗ (e-mail: [email protected])
† (e-mail: [email protected])
‡ (e-mail: [email protected])
§ (e-mail: [email protected])
file: paper_arxiv.tex date: November 7, 2017
• Why does stochastic gradient descent, or other variants, find parameters
with small complexity?
In this paper we approach the generalization question for deep neural networks
from a geometric invariance vantage point. The motivation behind invariance is
twofold: (1) The specific parametrization of the neural network is arbitrary and
should not impact its generalization power. As pointed out in [Neyshabur et al.,
2015a], for example, there are many continuous operations on the parameters of
ReLU nets that will result in exactly the same prediction and thus generalization
can only depend on the equivalence class obtained by identifying parameters
under these transformations. (2) Although flatness of the loss function has been
linked to generalization [Hochreiter and Schmidhuber, 1997], existing definitions
of flatness are neither invariant to nodewise re-scalings of ReLU nets nor general
coordinate transformations [Dinh et al., 2017] of the parameter space, which calls
into question their utility for describing generalization.
It is thus natural to argue for a purely geometric characterization of generalization that is invariant under the aforementioned transformations and additionally
resolves the conflict between flat minima and the requirement of invariance. Information geometry is concerned with the study of geometric invariances arising in
the space of probability distributions, so we will leverage it to motivate a particular geometric notion of complexity — the Fisher-Rao norm. From an algorithmic
point of view the steepest descent induced by this geometry is precisely the natural gradient [Amari, 1998]. From the generalization viewpoint, the Fisher-Rao
norm naturally incorporates distributional aspects of the data and harmoniously
unites elements of flatness and norm which have been argued to be crucial for
explaining generalization [Neyshabur et al., 2017].
Statistical learning theory equips us with many tools to analyze out-of-sample
performance. The Vapnik-Chervonenkis dimension is one possible complexity notion, yet it may be too large to explain generalization in over-parametrized models, since it scales with the size (dimension) of the network. In contrast, under
additional distributional assumptions of a margin, the Perceptron (a one-layer network) enjoys a dimension-free error guarantee, with an ℓ2 norm playing the role
of "capacity". These observations (going back to the 60's) have led to the theory
of large-margin classifiers, applied to kernel methods, boosting, and neural networks [Anthony and Bartlett, 1999]. In particular, the analysis of Koltchinskii and
Panchenko [2002] combines the empirical margin distribution (quantifying how
well the data can be separated) and the Rademacher complexity of a restricted
subset of functions. This in turn raises the capacity control question: what is
a good notion of the restrictive subset of parameter space for neural networks?
Norm-based capacity control provides a possible answer and is being actively
studied for deep networks [Krogh and Hertz, 1992, Neyshabur et al., 2015b,a,
Bartlett et al., 2017, Neyshabur et al., 2017], yet the invariances are not always
reflected in these capacity notions. In general, it is very difficult to answer the
question of which capacity measure is superior. Nevertheless, we will show that
our proposed Fisher-Rao norm serves as an umbrella for the previously considered
norm-based capacity measures, and it appears to shed light on possible answers
to the above question.
Much of the difficulty in analyzing neural networks stems from their unwieldy
recursive definition interleaved with nonlinear maps. In analyzing the Fisher-Rao
norm, we proved an identity for the partial derivatives of the neural network that
appears to open the door to some of the geometric analysis. In particular, we
prove that any stationary point of the empirical objective with hinge loss that
perfectly separates the data must also have a large margin. Such an automatic
large-margin property of stationary points may link the algorithmic facet of the
problem with the generalization property. The same identity gives us a handle
on the Fisher-Rao norm and allows us to prove a number of facts about it.
Since we expect that the identity may be useful in deep network analysis, we
start by stating this result and its implications in the next section. In Section
3 we introduce the Fisher-Rao norm and establish through norm-comparison
inequalities that it serves as an umbrella for existing norm-based measures of
capacity. Using these norm-comparison inequalities we bound the generalization
error of various geometrically distinct subsets of the Fisher-Rao ball and provide
a rigorous proof of generalization for deep linear networks. Extensive numerical
experiments are performed in Section 5 demonstrating the superior properties of
the Fisher-Rao norm.
2. GEOMETRY OF DEEP RECTIFIED NETWORKS
Definition 1. The function class HL realized by the feedforward neural network architecture of depth L with coordinate-wise activation functions σl : R → R
is defined as the set of functions fθ : X → Y (X ⊆ Rp and Y ⊆ RK)1 with

(2.1) fθ(x) = σL+1(σL(. . . σ2(σ1(xT W 0)W 1)W 2) . . .)W L),

where the parameter vector θ ∈ ΘL ⊆ Rd (d = pk1 + Σ_{i=1}^{L−1} ki ki+1 + kL K) and

ΘL = {W 0 ∈ R^{p×k1}, W 1 ∈ R^{k1×k2}, . . . , W L−1 ∈ R^{kL−1×kL}, W L ∈ R^{kL×K}}.
For simplicity of calculations, we have set all bias terms to zero2. We also
assume throughout the paper that

(2.2) σ(z) = σ′(z)z

for all the activation functions, which includes ReLU σ(z) = max{0, z}, "leaky"
ReLU σ(z) = max{αz, z}, and linear activations as special cases.
To make the exposition of the structural results concise, we define the following
intermediate functions in the definition (2.1). The output value of the t-th layer
hidden node is denoted as O^t(x) ∈ R^{kt}, and the corresponding input value as
N^t(x) ∈ R^{kt}, with O^t(x) = σt(N^t(x)). By definition, O^0(x) = x ∈ Rp, and the
final output O^{L+1}(x) = fθ(x) ∈ RK. For any N^t_i, O^t_i, the subscript i denotes the
i-th coordinate of the vector.
Given a loss function ℓ : Y × Y → R, the statistical learning problem can be
phrased as optimizing the unobserved population loss:

(2.3) L(θ) := E_{(X,Y)∼P} ℓ(fθ(X), Y),
1 It is possible to generalize the above architecture to include linear pre-processing operations such as zero-padding and average pooling.
2 In practice, we found that setting the bias to zero does not significantly impact results on image classification tasks such as MNIST and CIFAR-10.
based on i.i.d. samples {(Xi, Yi)}_{i=1}^N from the unknown joint distribution P. The
unregularized empirical objective function is denoted by

(2.4) L̂(θ) := Ê ℓ(fθ(X), Y) = (1/N) Σ_{i=1}^{N} ℓ(fθ(Xi), Yi).
We first establish the following structural result for neural networks. It will
be clear in the later sections that the lemma is motivated by the study of the
Fisher-Rao norm, formally defined in Eqn. (3.1) below, and information geometry.
For the moment, however, let us provide a different viewpoint. For linear
functions fθ(x) = ⟨θ, x⟩, we clearly have that ⟨∂f/∂θ, θ⟩ = fθ(x). Remarkably, a
direct analogue of this simple statement holds for neural networks, even if over-parametrized.

Lemma 2.1 (Structure in Gradient). Given a single data input x ∈ Rp, consider the feedforward neural network in Definition 1 with activations satisfying
(2.2). Then for any 0 ≤ t ≤ s ≤ L, one has the identity

(2.5) Σ_{i∈[kt], j∈[kt+1]} (∂O^{s+1}/∂W^t_{ij}) W^t_{ij} = O^{s+1}(x).

In addition, it holds that

(2.6) Σ_{t=0}^{L} Σ_{i∈[kt], j∈[kt+1]} (∂O^{L+1}/∂W^t_{ij}) W^t_{ij} = (L + 1) O^{L+1}(x).
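Identity (2.6) says that fθ is, in effect, positively homogeneous of degree L + 1 in θ, so ⟨∇θ fθ(x), θ⟩ = (L + 1)fθ(x) follows by Euler's theorem. A quick numerical sanity check of that homogeneity for a small bias-free ReLU network (a pure-Python sketch, not code from the paper):

```python
import random

random.seed(0)
L = 2                            # depth-L net has L + 1 weight matrices W0..WL
sizes = [3, 4, 4, 1]
Ws = [[[random.gauss(0, 1) for _ in range(b)] for _ in range(a)]
      for a, b in zip(sizes, sizes[1:])]
x = [random.gauss(0, 1) for _ in range(sizes[0])]

def forward(weights):
    """Bias-free net with ReLU at every layer, as in Definition 1."""
    h = x
    for W in weights:
        h = [max(sum(hi * W[i][j] for i, hi in enumerate(h)), 0.0)
             for j in range(len(W[0]))]      # ReLU(h @ W)
    return h

# Scaling every weight by c > 0 scales the output by c^(L+1); Euler's
# theorem for homogeneous functions then yields identity (2.6).
c = 1.7
scaled = forward([[[c * w for w in row] for row in W] for W in Ws])
expected = [c ** (L + 1) * v for v in forward(Ws)]
assert all(abs(a - b) < 1e-9 for a, b in zip(scaled, expected))
```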
Lemma 2.1 reveals the structural constraints in the gradients of rectified networks. In particular, even though the gradients lie in an over-parametrized high-dimensional space, many equality constraints are induced by the network architecture. Before we unveil the surprising connection between Lemma 2.1 and the
proposed Fisher-Rao norm, let us take a look at a few immediate corollaries of
this result. The first corollary establishes a large-margin property of stationary
points that separate the data.
Corollary 2.1 (Large Margin Stationary Points). Consider the binary classification problem with Y = {−1, +1}, and a neural network where the output layer
has only one unit. Choose the hinge loss ℓ(f, y) = max{0, 1 − yf}. If a certain
parameter θ satisfies two properties:
1. θ is a stationary point for L̂(θ) in the sense ∇θ L̂(θ) = 0;
2. θ separates the data in the sense that Yi fθ(Xi) > 0 for all i ∈ [N],
then it must be that θ is a large margin solution: for all i ∈ [N],
Yi fθ(Xi) ≥ 1.
The same result holds for the population criterion L(θ), in which case (2) is stated
as P(Y fθ(X) > 0) = 1, and the conclusion is P(Y fθ(X) ≥ 1) = 1.
Proof. Observe that $\partial\ell(f, Y)/\partial f = -Y$ if $Yf < 1$, and $\partial\ell(f, Y)/\partial f = 0$ if $Yf \ge 1$. Using Eqn. (2.6) when the output layer has only one unit, we find
$$\langle \nabla_\theta \widehat{L}(\theta), \theta\rangle = (L+1)\, \widehat{\mathbb{E}}\left[ \frac{\partial \ell(f_\theta(X), Y)}{\partial f_\theta(X)}\, f_\theta(X) \right] = (L+1)\, \widehat{\mathbb{E}}\left[ -Y f_\theta(X)\, \mathbf{1}_{Y f_\theta(X) < 1} \right].$$
For a stationary point $\theta$, we have $\nabla_\theta \widehat{L}(\theta) = 0$, which implies the LHS of the above equation is 0. Now recall that the second condition, that $\theta$ separates the data, implies $-Y f_\theta(X) < 0$ for every point in the data set. In this case, the RHS equals zero if and only if $Y f_\theta(X) \ge 1$.
Granted, the above corollary can be proved from first principles without the
use of Lemma 2.1, but the proof reveals a quantitative statement about stationary
points along arbitrary directions θ.
In the second corollary, we consider linear networks.
Corollary 2.2 (Stationary Points for Deep Linear Networks). Consider linear neural networks with $\sigma(x) = x$ and the square loss function. Then all stationary points $\theta = \{W^0, W^1, \ldots, W^L\}$ that satisfy
$$\nabla_\theta \widehat{L}(\theta) = \nabla_\theta\, \widehat{\mathbb{E}}\left[ \frac{1}{2}\big(f_\theta(X) - Y\big)^2 \right] = 0$$
must also satisfy
$$\big\langle w(\theta),\ \mathbf{X}^T \mathbf{X}\, w(\theta) - \mathbf{X}^T \mathbf{Y} \big\rangle = 0\,,$$
where $w(\theta) = \prod_{t=0}^{L} W^t \in \mathbb{R}^p$, and $\mathbf{X} \in \mathbb{R}^{N\times p}$, $\mathbf{Y} \in \mathbb{R}^N$ are the data matrices.
Proof. The proof follows from applying Lemma 2.1:
$$0 = \theta^T \nabla_\theta \widehat{L}(\theta) = -(L+1)\, \widehat{\mathbb{E}}\left[ \Big(Y - X^T \prod_{t=0}^{L} W^t\Big)\, X^T \prod_{t=0}^{L} W^t \right],$$
which means $\langle w(\theta), \mathbf{X}^T \mathbf{X}\, w(\theta) - \mathbf{X}^T \mathbf{Y} \rangle = 0$.
Remark 2.1. This simple lemma is not quite asserting that all stationary points are global optima, since global optima satisfy $\mathbf{X}^T\mathbf{X}\, w(\theta) - \mathbf{X}^T\mathbf{Y} = 0$, while we only proved that stationary points satisfy $\langle w(\theta), \mathbf{X}^T\mathbf{X}\, w(\theta) - \mathbf{X}^T\mathbf{Y}\rangle = 0$.
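A minimal numeric illustration of this gap, under the assumption of a two-factor linear network (the factorization, data, and sizes here are hypothetical, not from the paper): taking both factors to be zero yields a stationary point that satisfies the inner-product condition trivially while violating the normal equations.

```python
import numpy as np
rng = np.random.default_rng(1)

N, p, k = 50, 3, 4
X = rng.standard_normal((N, p))
Y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(N)

# two-factor linear net f(x) = x^T W0 W1; W0 = W1 = 0 is a stationary point
W0, W1 = np.zeros((p, k)), np.zeros((k, 1))
w = (W0 @ W1).ravel()
resid = X @ w - Y

gW0 = X.T @ resid[:, None] @ W1.T / N    # dL/dW0 = X^T r W1^T / N
gW1 = W0.T @ X.T @ resid[:, None] / N    # dL/dW1 = W0^T X^T r / N
print(np.abs(gW0).max(), np.abs(gW1).max())   # both exactly 0: stationary

# yet the normal equations fail: not a global optimum
print(np.abs(X.T @ X @ w - X.T @ Y).max())    # far from 0
# while the weaker condition of Corollary 2.2 holds trivially since w = 0
print(w @ (X.T @ X @ w - X.T @ Y))            # 0.0
```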
3. FISHER-RAO NORM AND GEOMETRY
In this section, we propose a new notion of complexity for neural networks, motivated by geometric invariance considerations, specifically the Fisher-Rao metric of information geometry. We postpone this motivation to Section 3.3 and instead start with the definition and some properties. A detailed comparison with the known norm-based capacity measures and generalization results is deferred to Section 4.
3.1 An analytical formula
Definition 2. The Fisher-Rao norm for a parameter $\theta$ is defined as the following quadratic form
$$(3.1)\qquad \|\theta\|_{\mathrm{fr}}^2 := \langle \theta,\, I(\theta)\,\theta\rangle\,, \quad \text{where } I(\theta) = \mathbb{E}\big[\nabla_\theta \ell(f_\theta(X), Y) \otimes \nabla_\theta \ell(f_\theta(X), Y)\big]\,.$$
The underlying distribution for the expectation in the above definition has been left ambiguous, because it will be useful to specialize to different distributions depending on the context. Even though we call the above quantity the "Fisher-Rao norm," it should be noted that it does not satisfy the triangle inequality.
The following theorem unveils a surprising identity for the Fisher-Rao norm.
Theorem 3.1 (Fisher-Rao norm). Assume the loss function $\ell(\cdot,\cdot)$ is smooth in the first argument. The following identity holds for a feedforward neural network (Definition 1) with $L$ hidden layers and activations satisfying (2.2):
$$(3.2)\qquad \|\theta\|_{\mathrm{fr}}^2 = (L+1)^2\, \mathbb{E}\left[ \Big\langle \frac{\partial \ell(f_\theta(X), Y)}{\partial f_\theta(X)},\, f_\theta(X) \Big\rangle^2 \right].$$
The proof of the Theorem relies mainly on the geometric Lemma 2.1 that
describes the gradient structure of multi-layer rectified networks.
Remark 3.1. In the case when the output layer has only one node, Theorem 3.1 reduces to the simple formula
$$(3.3)\qquad \|\theta\|_{\mathrm{fr}}^2 = (L+1)^2\, \mathbb{E}\left[ \left( \frac{\partial \ell(f_\theta(X), Y)}{\partial f_\theta(X)} \right)^2 f_\theta(X)^2 \right].$$
Proof of Theorem 3.1. Using the definition of the Fisher-Rao norm,
$$\|\theta\|_{\mathrm{fr}}^2 = \mathbb{E}\big[ \langle \theta, \nabla_\theta \ell(f_\theta(X), Y)\rangle^2 \big] = \mathbb{E}\left[ \Big\langle \frac{\partial \ell(f_\theta(X), Y)}{\partial f_\theta(X)}\, \nabla_\theta f_\theta(X),\, \theta \Big\rangle^2 \right] = \mathbb{E}\left[ \Big\langle \frac{\partial \ell(f_\theta(X), Y)}{\partial f_\theta(X)},\, \nabla_\theta f_\theta(X)^T \theta \Big\rangle^2 \right].$$
By Lemma 2.1,
$$\nabla_\theta f_\theta(X)^T \theta = \nabla_\theta O^{L+1}(x)^T \theta = \sum_{t=0}^{L}\ \sum_{i\in[k_t],\, j\in[k_{t+1}]} \frac{\partial O^{L+1}}{\partial W^t_{ij}}\, W^t_{ij} = (L+1)\, O^{L+1} = (L+1)\, f_\theta(X)\,.$$
Combining the above equalities, we obtain
$$\|\theta\|_{\mathrm{fr}}^2 = (L+1)^2\, \mathbb{E}\left[ \Big\langle \frac{\partial \ell(f_\theta(X), Y)}{\partial f_\theta(X)},\, f_\theta(X) \Big\rangle^2 \right].$$
Before illustrating how the explicit formula in Theorem 3.1 can be viewed as a unified "umbrella" for many of the known norm-based capacity measures, let us point out one simple invariance property of the Fisher-Rao norm, which follows as a direct consequence of Thm. 3.1. This property is not satisfied by the $\ell_2$ norm, spectral norm, path norm, or group norm.
Corollary 3.1 (Invariance). If two parameters $\theta_1, \theta_2 \in \Theta^L$ are equivalent, in the sense that $f_{\theta_1} = f_{\theta_2}$, then their Fisher-Rao norms are equal, i.e.,
$$\|\theta_1\|_{\mathrm{fr}} = \|\theta_2\|_{\mathrm{fr}}\,.$$
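Corollary 3.1 can be seen concretely on the nodewise rescalings discussed in Section 3.3. The sketch below (a toy bias-free ReLU network with the absolute loss, so that Eqn. (3.3) reduces to $(L+1)^2\,\widehat{\mathbb{E}}[f_\theta^2]$; sizes and seed are arbitrary choices) scales one hidden unit's incoming weights by $c$ and its outgoing weights by $1/c$: the Fisher-Rao norm is unchanged while the $\ell_2$ norm is not.

```python
import numpy as np
rng = np.random.default_rng(2)

def forward(Ws, x):
    h = x
    for W in Ws[:-1]:
        h = np.maximum(h @ W, 0.0)
    return float(h @ Ws[-1])

def fr_norm_sq(Ws, X):
    # empirical version of Eqn. (3.3) with absolute loss, so (dl/df)^2 = 1
    L = len(Ws) - 1
    return (L + 1) ** 2 * np.mean([forward(Ws, x) ** 2 for x in X])

Ws = [rng.standard_normal((4, 5)), rng.standard_normal((5, 1))]
X = rng.standard_normal((100, 4))

# nodewise rescaling: scale unit 0's incoming weights by c, outgoing by 1/c;
# the function f_theta is unchanged, hence so is the Fisher-Rao norm
c = 10.0
Ws2 = [Ws[0].copy(), Ws[1].copy()]
Ws2[0][:, 0] *= c
Ws2[1][0, :] /= c

a, b = fr_norm_sq(Ws, X), fr_norm_sq(Ws2, X)
l2a = sum(float((W ** 2).sum()) for W in Ws)
l2b = sum(float((W ** 2).sum()) for W in Ws2)
print(a, b)        # equal
print(l2a, l2b)    # different: the l2 norm is not rescaling-invariant
```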
3.2 Norms and geometry
In this section we will employ Theorem 3.1 to reveal the relationship among different norms and their corresponding geometries. Norm-based capacity control is an active field of research for understanding why deep learning generalizes well, including the $\ell_2$ norm (weight decay) in [Krogh and Hertz, 1992, Krizhevsky et al., 2012], the path norm in [Neyshabur et al., 2015a], the group norm in [Neyshabur et al., 2015b], and the spectral norm in [Bartlett et al., 2017]. All these norms are closely related to the Fisher-Rao norm, despite the fact that they capture distinct inductive biases and different geometries.
For simplicity, we will showcase the derivation with the absolute loss function $\ell(f, y) = |f - y|$ and when the output layer has only one node ($k_{L+1} = 1$). The argument can be readily adapted to the general setting. We will show that the Fisher-Rao norm serves as a lower bound for all the norms considered in the literature, with some pre-factor whose meaning will become clear in Section 4.1. In addition, the Fisher-Rao norm enjoys an interesting umbrella property: by considering a more constrained geometry (motivated by algebraic norm comparison inequalities), the Fisher-Rao norm motivates new norm-based capacity control methods.
The main theorem we will prove is informally stated as follows.
Theorem 3.2 (Norm comparison, informal). Denoting by $|\!|\!|\cdot|\!|\!|$ any one of: (1) spectral norm, (2) matrix induced norm, (3) group norm, or (4) path norm, we have
$$\frac{1}{L+1}\, \|\theta\|_{\mathrm{fr}} \le |\!|\!|\theta|\!|\!|\,,$$
for any $\theta \in \Theta^L = \{W^0, W^1, \ldots, W^L\}$. The specific norms (1)-(4) are formally introduced in Definitions 3-6.
The detailed proof of the above theorem will be the main focus of Section 4.1.
Here we will give a sketch on how the results are proved.
Lemma 3.1 (Matrix form).
$$(3.4)\qquad f_\theta(x) = x^T W^0 D^1(x) W^1 D^2(x) \cdots D^L(x) W^L D^{L+1}(x)\,,$$
where $D^t(x) = \mathrm{diag}[\sigma'(N^t(x))] \in \mathbb{R}^{k_t \times k_t}$ for $0 < t \le L+1$. In addition, each $D^t(x)$ is a diagonal matrix with diagonal elements equal to either 0 or 1.
Proof of Lemma 3.1. Since $O^0(x) W^0 = x^T W^0 = N^1(x) \in \mathbb{R}^{1\times k_1}$, we have $N^1(x) D^1(x) = N^1(x)\, \mathrm{diag}(\sigma'(N^1(x))) = O^1(x)$. The proof is completed by induction.
For the absolute loss, one has $\big(\partial \ell(f_\theta(X), Y)/\partial f_\theta(X)\big)^2 = 1$, and therefore Theorem 3.1 simplifies to
$$(3.5)\qquad \|\theta\|_{\mathrm{fr}}^2 = (L+1)^2\, \mathbb{E}_{X\sim P}\big[ v(\theta, X)^T X X^T v(\theta, X) \big]\,,$$
where $v(\theta, x) := W^0 D^1(x) W^1 D^2(x) \cdots D^L(x) W^L D^{L+1}(x) \in \mathbb{R}^p$. The norm comparison results are then established through a careful decomposition of the data-dependent vector $v(\theta, X)$, in distinct ways according to the norm/geometry being compared.
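The factorization in Lemma 3.1 and the vector $v(\theta, x)$ of Eqn. (3.5) are easy to check numerically. The sketch below (an arbitrary small ReLU network with a single linear output, so $D^{L+1} = 1$; not from the paper's code) builds $v$ from the 0/1 activation patterns and confirms $f_\theta(x) = x^T v(\theta, x)$.

```python
import numpy as np
rng = np.random.default_rng(3)

Ws = [rng.standard_normal((4, 5)), rng.standard_normal((5, 3)),
      rng.standard_normal((3, 1))]

def forward_and_v(Ws, x):
    # activation pattern matrices D^t(x) = diag[1{N^t(x) > 0}]
    h, Ds = x, []
    for W in Ws[:-1]:
        n = h @ W
        Ds.append(np.diag((n > 0).astype(float)))
        h = np.maximum(n, 0.0)
    f = float(h @ Ws[-1])
    # v(theta, x) = W^0 D^1 W^1 D^2 ... W^L  (single output, D^{L+1} = 1)
    v = Ws[0]
    for D, W in zip(Ds, Ws[1:]):
        v = v @ D @ W
    return f, v.ravel()

x = rng.standard_normal(4)
f, v = forward_and_v(Ws, x)
print(f, float(x @ v))   # agree: f_theta(x) = x^T v(theta, x)
```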
3.3 Motivation and invariance
In this section, we will provide the original intuition and motivation for our
proposed Fisher-Rao norm from the viewpoint of geometric invariance.
Information geometry and the Fisher-Rao metric
Information geometry provides a window into geometric invariances when we adopt a generative framework in which the data-generating process belongs to the parametric family $P \in \{P_\theta \mid \theta \in \Theta^L\}$ indexed by the parameters of the neural network architecture. The Fisher-Rao metric on $\{P_\theta\}$ is defined in terms of a local inner product for each value of $\theta \in \Theta^L$ as follows. For each $\alpha, \beta \in \mathbb{R}^d$, define the corresponding tangent vectors $\bar\alpha := dp_{\theta+t\alpha}/dt|_{t=0}$, $\bar\beta := dp_{\theta+t\beta}/dt|_{t=0}$. Then for all $\theta \in \Theta^L$ and $\alpha, \beta \in \mathbb{R}^d$ we define the local inner product
$$(3.6)\qquad \langle \bar\alpha, \bar\beta\rangle_{p_\theta} := \int_M \frac{\bar\alpha}{p_\theta}\, \frac{\bar\beta}{p_\theta}\, p_\theta\,,$$
where $M = \mathcal{X} \times \mathcal{Y}$. The above inner product extends to a Riemannian metric on the space of positive densities $\mathrm{Prob}(M)$ called the Fisher-Rao metric.³ The relationship between the Fisher-Rao metric and the Fisher information matrix $I(\theta)$ in the statistics literature follows from the identity
$$(3.7)\qquad \langle \bar\alpha, \bar\beta\rangle_{p_\theta} = \langle \alpha,\, I(\theta)\,\beta\rangle\,.$$
Notice that the Fisher information matrix induces a semi-inner product $(\alpha, \beta) \mapsto \langle \alpha, I(\theta)\beta\rangle$, unlike the Fisher-Rao metric, which is non-degenerate.⁴ If we make the additional modeling assumption that $p_\theta(x, y) = p(x)\, p_\theta(y\,|\,x)$, then the Fisher information becomes
$$(3.8)\qquad I(\theta) = \mathbb{E}_{(X,Y)\sim P_\theta}\big[ \nabla_\theta \log p_\theta(Y\,|\,X) \otimes \nabla_\theta \log p_\theta(Y\,|\,X) \big]\,.$$
If we now identify our loss function as $\ell(f_\theta(x), y) = -\log p_\theta(y\,|\,x)$, then the Fisher-Rao metric coincides with the Fisher-Rao norm when $\alpha = \beta = \theta$. In fact, our Fisher-Rao norm encompasses the Fisher-Rao metric and generalizes it to the case when the model is misspecified, $P \notin \{P_\theta\}$.
Flatness
³ Bauer et al. [2016] showed that it is essentially the unique metric that is invariant under the diffeomorphism group of $M$.
⁴ The null space of $I(\theta)$ is mapped to the origin under $\alpha \mapsto dp_{\theta+t\alpha}/dt|_{t=0}$.
Having identified the geometric origin of the Fisher-Rao norm, let us study the implications for generalization via flat minima. Dinh et al. [2017] argued by way of counter-example that the existing measures of flatness are inadequate for explaining the generalization capability of multi-layer neural networks. Specifically, by utilizing the invariance property of multi-layer rectified networks under non-negative nodewise rescalings, they proved that the Hessian eigenvalues of the loss function can be made arbitrarily large, thereby weakening the connection between flat minima and generalization. They also identified a more general problem which afflicts Hessian-based measures of generalization for any network architecture and activation function: the Hessian is sensitive to network parametrization, whereas generalization should be invariant under general coordinate transformations. Our proposal can be motivated from the following fact⁵ which relates flatness to geometry (under appropriate regularity conditions):
$$(3.9)\qquad \mathbb{E}_{(X,Y)\sim P_\theta} \big\langle \theta,\, \mathrm{Hess}_\theta[\ell(f_\theta(X), Y)]\, \theta\big\rangle = \mathbb{E}_{(X,Y)\sim P_\theta} \big\langle \theta,\, \nabla_\theta \ell(f_\theta(X), Y)\big\rangle^2 = \|\theta\|_{\mathrm{fr}}^2\,.$$
In other words, the Fisher-Rao norm evades the nodewise-rescaling issue because it is exactly invariant under linear re-parametrizations. The Fisher-Rao norm moreover possesses an "infinitesimal invariance" property under non-linear coordinate transformations, which can be seen by passing to the infinitesimal form, where non-linear coordinate invariance is realized exactly by the infinitesimal line element
$$(3.10)\qquad ds^2 = \sum_{i,j\in[d]} [I(\theta)]_{ij}\, d\theta^i\, d\theta^j\,.$$
Comparing $\|\theta\|_{\mathrm{fr}}$ with the above line element reveals the geometric interpretation of the Fisher-Rao norm as the approximate geodesic distance from the origin. It is important to realize that our definition of flatness (3.9) differs from [Dinh et al., 2017], who employed the Hessian of the loss, $\mathrm{Hess}_\theta\big[\widehat{L}(\theta)\big]$. Unlike the Fisher-Rao norm, the norm induced by the Hessian of the loss does not enjoy the infinitesimal invariance property (it only holds at critical points).
Natural gradient
There exists a close relationship between the Fisher-Rao norm and the natural gradient. In particular, natural gradient descent is simply the steepest descent direction induced by the Fisher-Rao geometry of $\{P_\theta\}$. Indeed, the natural gradient can be expressed as a semi-norm-penalized iterative optimization scheme as follows:
$$(3.11)\qquad \theta_{t+1} = \operatorname*{arg\,min}_{\theta\in\mathbb{R}^d}\ \langle \theta - \theta_t,\, \nabla \widehat{L}(\theta_t)\rangle + \frac{1}{2\eta_t}\, \|\theta - \theta_t\|_{I(\theta_t)}^2 = \theta_t - \eta_t\, I(\theta_t)^{+}\, \nabla \widehat{L}(\theta_t)\,.$$
We remark that the positive semi-definite matrix $I(\theta_t)$ changes with $t$. We emphasize an "invariance" property of the natural gradient under re-parametrization and an "approximate invariance" property under over-parametrization, which is not satisfied by classic gradient descent. The formal statement and its proof are deferred to Lemma 6.1 in Section 6.2. The invariance property is desirable:
⁵ Set $\ell(f_\theta(x), y) = -\log p_\theta(y\,|\,x)$ and recall that the Fisher information can be viewed as a variance as well as a curvature.
in multi-layer ReLU networks, there are many equivalent re-parametrizations of
the problem, such as nodewise rescalings, which may slow down the optimization process. The advantage of natural gradient is also illustrated empirically in
Section 5.5.
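A minimal instance of the update (3.11), assuming a linear model with Gaussian noise (a toy setting chosen for illustration, not from the paper): there the Fisher matrix is $\theta$-independent, $I = X^TX/N$, and a single natural-gradient step with $\eta = 1$ is a Newton step that lands on the least-squares solution regardless of conditioning.

```python
import numpy as np
rng = np.random.default_rng(4)

# toy linear model y ~ N(<theta, x>, 1), so the Fisher matrix is X^T X / N
X = rng.standard_normal((200, 3))
Y = X @ np.array([1.0, 2.0, -1.0]) + 0.1 * rng.standard_normal(200)
N = len(Y)

theta = np.zeros(3)
grad = X.T @ (X @ theta - Y) / N          # gradient of 0.5 * mean residual^2
I = X.T @ X / N                           # Fisher information (theta-free here)
theta_ng = theta - np.linalg.solve(I, grad)   # one natural-gradient step

# the step doubles as a Newton step: it reaches the least-squares solution
theta_ls = np.linalg.lstsq(X, Y, rcond=None)[0]
print(np.allclose(theta_ng, theta_ls))    # True
```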
4. CAPACITY CONTROL AND GENERALIZATION
In this section, we discuss in full detail the questions of geometry, capacity measures, and generalization. First, let us define the empirical Rademacher complexity for the parameter space $\Theta$, conditioned on the data $\{X_i, i \in [N]\}$, as
$$(4.1)\qquad \widehat{\mathcal{R}}_N(\Theta) = \mathbb{E} \sup_{\theta\in\Theta} \frac{1}{N} \sum_{i=1}^{N} \epsilon_i\, f_\theta(X_i)\,,$$
where the $\epsilon_i$, $i\in[N]$, are i.i.d. Rademacher random variables.
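Definition (4.1) can be estimated by Monte Carlo over the Rademacher signs whenever the inner supremum is computable. For the simple linear class $\{x \mapsto \langle\theta, x\rangle : \|\theta\|_2 \le B\}$ (a stand-in chosen for illustration, not a neural class) the sup has a closed form, and the estimate sits just below the classical bound $B\big(\sum_i\|X_i\|^2\big)^{1/2}/N$:

```python
import numpy as np
rng = np.random.default_rng(5)

# for the l2 ball of linear predictors, the sup in (4.1) is attained in
# closed form: sup = (B/N) * || sum_i eps_i X_i ||_2
N, p, B = 100, 5, 1.0
X = rng.standard_normal((N, p))

draws = [B / N * np.linalg.norm(rng.choice([-1.0, 1.0], size=N) @ X)
         for _ in range(2000)]
estimate = float(np.mean(draws))
bound = B * np.sqrt((X ** 2).sum()) / N   # Jensen: E||.|| <= sqrt(sum ||X_i||^2)
print(estimate, bound)                    # estimate sits just below the bound
```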
4.1 Norm Comparison
Let us collect some definitions before stating each norm comparison result. For a vector $v$, the vector $\ell_p$ norm is denoted $\|v\|_p := \big(\sum_i |v_i|^p\big)^{1/p}$, $p > 0$. For a matrix $M$, $\|M\|_\sigma := \max_{v\ne0} \|v^T M\|/\|v\|$ denotes the spectral norm; $\|M\|_{p\to q} := \max_{v\ne0} \|v^T M\|_q/\|v\|_p$ denotes the matrix induced norm, for $p, q \ge 1$; and $\|M\|_{p,q} := \big[\sum_j \big(\sum_i |M_{ij}|^p\big)^{q/p}\big]^{1/q}$ denotes the matrix group norm, for $p, q \ge 1$.
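These matrix norms are straightforward to compute or lower-bound numerically. The helpers below (illustrative sketches; the induced norm uses a crude Monte Carlo lower bound) check two easy facts: the $(2,2)$ group norm is the Frobenius norm, and the $2\to2$ induced norm equals the spectral norm.

```python
import numpy as np
rng = np.random.default_rng(1)

def spectral(M):
    return np.linalg.svd(M, compute_uv=False)[0]

def induced_lb(M, p, q, trials=5000, rng=rng):
    # crude Monte Carlo lower bound on ||M||_{p->q} = max_v ||v^T M||_q / ||v||_p
    best = 0.0
    for _ in range(trials):
        v = rng.standard_normal(M.shape[0])
        best = max(best, np.linalg.norm(v @ M, q) / np.linalg.norm(v, p))
    return best

def group(M, p, q):
    # ||M||_{p,q} = ( sum_j ( sum_i |M_ij|^p )^{q/p} )^{1/q}
    return float(np.sum(np.sum(np.abs(M) ** p, axis=0) ** (q / p)) ** (1 / q))

M = rng.standard_normal((4, 3))
print(abs(group(M, 2, 2) - np.linalg.norm(M)) < 1e-9)   # (2,2) group = Frobenius
print(induced_lb(M, 2, 2) <= spectral(M) + 1e-9)        # MC never exceeds the sup
```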
4.1.1 Spectral norm.
Definition 3 (Spectral norm). Define the following "spectral norm" ball via
$$(4.2)\qquad \|\theta\|_\sigma := \left[ \mathbb{E}\left( \|X\|^2 \prod_{t=1}^{L+1} \|D^t(X)\|_\sigma^2 \right) \right]^{1/2} \prod_{t=0}^{L} \|W^t\|_\sigma\,.$$
We have the following norm comparison Lemma.
Lemma 4.1 (Spectral norm).
$$\frac{1}{L+1}\, \|\theta\|_{\mathrm{fr}} \le \|\theta\|_\sigma\,.$$
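Lemma 4.1 is easy to check empirically: with the absolute loss, $\frac{1}{L+1}\|\theta\|_{\mathrm{fr}}$ is $\sqrt{\widehat{\mathbb{E}}\,f_\theta^2}$, and $\|D^t(x)\|_\sigma$ is 0 or 1 for ReLU activation patterns. The sketch below (an arbitrary small network, not from the paper) confirms the inequality on random data.

```python
import numpy as np
rng = np.random.default_rng(6)

Ws = [rng.standard_normal((4, 6)), rng.standard_normal((6, 1))]
X = rng.standard_normal((200, 4))
L = len(Ws) - 1

fs, pre = [], []
for x in X:
    h, dprod = x, 1.0
    for W in Ws[:-1]:
        n = h @ W
        dprod *= 1.0 if (n > 0).any() else 0.0   # ||D^t(x)||_sigma for a 0/1 diag
        h = np.maximum(n, 0.0)
    fs.append(float(h @ Ws[-1]))
    pre.append(float(x @ x) * dprod ** 2)

lhs = np.sqrt(np.mean(np.square(fs)))    # (1/(L+1)) ||theta||_fr, absolute loss
rhs = np.sqrt(np.mean(pre)) * np.prod([np.linalg.norm(W, 2) for W in Ws])
print(lhs <= rhs + 1e-9)                 # True, as Lemma 4.1 predicts
```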
Remark 4.1. The spectral norm as a capacity control has been considered in [Bartlett et al., 2017]. Lemma 4.1 shows that the spectral norm serves as a more stringent constraint than the Fisher-Rao norm. Let us provide an explanation of the pre-factor $\big[\mathbb{E}\big(\|X\|^2 \prod_{t=1}^{L+1} \|D^t(X)\|_\sigma^2\big)\big]^{1/2}$ here. Define the set of parameters induced by the Fisher-Rao norm geometry
$$B_{\mathrm{fr}}(1) := \Big\{\theta :\ \mathbb{E}\big[v(\theta, X)^T X X^T v(\theta, X)\big] \le 1\Big\} = \Big\{\theta :\ \frac{1}{L+1}\, \|\theta\|_{\mathrm{fr}} \le 1\Big\}\,.$$
From Lemma 4.1, if the expectation is over the empirical measure $\widehat{\mathbb{E}}$, then, because $\|D^t(X)\|_\sigma \le 1$, we obtain
$$\frac{1}{L+1}\, \|\theta\|_{\mathrm{fr}} \le \left[ \widehat{\mathbb{E}}\left( \|X\|^2 \prod_{t=1}^{L+1} \|D^t(X)\|_\sigma^2 \right) \right]^{1/2} \prod_{t=0}^{L} \|W^t\|_\sigma \le \big[\widehat{\mathbb{E}}\|X\|^2\big]^{1/2} \prod_{t=0}^{L} \|W^t\|_\sigma\,,$$
which implies
$$\left\{\theta :\ \prod_{t=0}^{L} \|W^t\|_\sigma \le \frac{1}{\big[\widehat{\mathbb{E}}\|X\|^2\big]^{1/2}}\right\} \subset B_{\mathrm{fr}}(1)\,.$$
From Theorem 1.1 in [Bartlett et al., 2017], we know that a subset of $B_{\mathrm{fr}}(1)$ characterized by the spectral norm enjoys the following upper bound on the Rademacher complexity under mild conditions: for any $r > 0$,
$$(4.3)\qquad \widehat{\mathcal{R}}_N\left( \left\{\theta :\ \prod_{t=0}^{L} \|W^t\|_\sigma \le r\right\} \right) \lesssim r \cdot \frac{\big[\widehat{\mathbb{E}}\|X\|^2\big]^{1/2} \cdot \mathrm{Polylog}}{N}\,.$$
Plugging in $r = \frac{1}{[\widehat{\mathbb{E}}\|X\|^2]^{1/2}}$, we have
$$(4.4)\qquad \widehat{\mathcal{R}}_N\left( \left\{\theta :\ \prod_{t=0}^{L} \|W^t\|_\sigma \le \frac{1}{\big[\widehat{\mathbb{E}}\|X\|^2\big]^{1/2}}\right\} \right) \lesssim \frac{1}{\big[\widehat{\mathbb{E}}\|X\|^2\big]^{1/2}} \cdot \frac{\big[\widehat{\mathbb{E}}\|X\|^2\big]^{1/2} \cdot \mathrm{Polylog}}{N} \to 0\,.$$
Interestingly, the additional factor $\big[\widehat{\mathbb{E}}\|X\|^2\big]^{1/2}$ in Theorem 1.1 of [Bartlett et al., 2017] exactly cancels with our pre-factor in the norm comparison. The above calculations show that a subset of $B_{\mathrm{fr}}(1)$, induced by the spectral norm geometry, has good generalization error.
4.1.2 Group norm.
Definition 4 (Group norm). Define the following "group norm" ball, for $p \ge 1$, $q > 0$:
$$(4.5)\qquad \|\theta\|_{p,q} := \left[ \mathbb{E}\left( \|X\|_{p^*}^2 \prod_{t=1}^{L+1} \|D^t(X)\|_{q\to p^*}^2 \right) \right]^{1/2} \prod_{t=0}^{L} \|W^t\|_{p,q}\,,$$
where $\frac{1}{p} + \frac{1}{p^*} = 1$. Here $\|\cdot\|_{q\to p^*}$ denotes the matrix induced norm.
Lemma 4.2 (Group norm). It holds that
$$(4.6)\qquad \frac{1}{L+1}\, \|\theta\|_{\mathrm{fr}} \le \|\theta\|_{p,q}\,.$$
Remark 4.2. The group norm as a capacity measure has been considered in [Neyshabur et al., 2015b]. Lemma 4.2 shows that the group norm serves as a more stringent constraint than the Fisher-Rao norm. Again, let us provide an explanation of the pre-factor $\big[\mathbb{E}\big(\|X\|_{p^*}^2 \prod_{t=1}^{L+1} \|D^t(X)\|_{q\to p^*}^2\big)\big]^{1/2}$ here. Note that for all $X$,
$$\prod_{t=1}^{L+1} \|D^t(X)\|_{q\to p^*} \le \prod_{t=1}^{L+1} k_t^{[\frac{1}{p^*} - \frac{1}{q}]_+}\,,$$
because
$$\|D^t(X)\|_{q\to p^*} = \max_{v\ne0} \frac{\|v^T D^t(X)\|_{p^*}}{\|v\|_q} \le \max_{v\ne0} \frac{\|v\|_{p^*}}{\|v\|_q} \le k_t^{[\frac{1}{p^*} - \frac{1}{q}]_+}\,.$$
From Lemma 4.2, if the expectation is over the empirical measure $\widehat{\mathbb{E}}$, we know that in the case when $k_t = k$ for all $0 < t \le L$,
$$\frac{1}{L+1}\, \|\theta\|_{\mathrm{fr}} \le \left[ \widehat{\mathbb{E}}\left( \|X\|_{p^*}^2 \prod_{t=1}^{L+1} \|D^t(X)\|_{q\to p^*}^2 \right) \right]^{1/2} \prod_{t=0}^{L} \|W^t\|_{p,q} \le \left( \max_i \|X_i\|_{p^*}^2 \right)^{1/2} \left( k^{[\frac{1}{p^*} - \frac{1}{q}]_+} \right)^{L} \prod_{t=0}^{L} \|W^t\|_{p,q}\,,$$
which implies
$$\left\{\theta :\ \prod_{t=0}^{L} \|W^t\|_{p,q} \le \frac{1}{\big(k^{[\frac{1}{p^*} - \frac{1}{q}]_+}\big)^{L} \max_i \|X_i\|_{p^*}}\right\} \subset B_{\mathrm{fr}}(1)\,.$$
By Theorem 1 in [Neyshabur et al., 2015b], we know that a subset of $B_{\mathrm{fr}}(1)$ (different from the subset induced by the spectral geometry), characterized by the group norm, satisfies the following upper bound on the Rademacher complexity: for any $r > 0$,
$$(4.7)\qquad \widehat{\mathcal{R}}_N\left( \left\{\theta :\ \prod_{t=0}^{L} \|W^t\|_{p,q} \le r\right\} \right) \lesssim r \cdot \frac{2^L \big(k^{[\frac{1}{p^*} - \frac{1}{q}]_+}\big)^{L} \max_i \|X_i\|_{p^*} \cdot \mathrm{Polylog}}{\sqrt{N}}\,.$$
Plugging in $r = \frac{1}{(k^{[\frac{1}{p^*} - \frac{1}{q}]_+})^{L} \max_i \|X_i\|_{p^*}}$, we have
$$(4.8)\qquad \widehat{\mathcal{R}}_N\left( \left\{\theta :\ \prod_{t=0}^{L} \|W^t\|_{p,q} \le \frac{1}{\big(k^{[\frac{1}{p^*} - \frac{1}{q}]_+}\big)^{L} \max_i \|X_i\|_{p^*}}\right\} \right) \lesssim \frac{2^L\, \mathrm{Polylog}}{\sqrt{N}} \to 0\,.$$
Once again, we point out that the intriguing combinatorial factor in Theorem 1 of Neyshabur et al. [2015b] exactly cancels with our pre-factor in the norm comparison. The above calculations show that another subset of $B_{\mathrm{fr}}(1)$, induced by the group norm geometry, has good generalization error (without additional factors).
4.1.3 Path norm.
Definition 5 (Path norm). Define the following "path norm" ball, for $q \ge 1$:
$$(4.9)\qquad \|\pi(\theta)\|_q := \left[ \mathbb{E}\left( \sum_{i_0, i_1, \ldots, i_L} \Big| X_{i_0} \prod_{t=1}^{L+1} D^t_{i_t}(X) \Big|^{q^*} \right)^{2/q^*} \right]^{1/2} \cdot \left( \sum_{i_0, i_1, \ldots, i_L} \prod_{t=0}^{L} |W^t_{i_t i_{t+1}}|^q \right)^{1/q}\,,$$
where $\frac{1}{q} + \frac{1}{q^*} = 1$, and the index sets are $i_0 \in [p]$, $i_1 \in [k_1]$, $\ldots$, $i_L \in [k_L]$, $i_{L+1} = 1$. Here $\pi(\theta)$ is a notation for all the paths (from input to output) of the weights $\theta$.
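The path sum inside Definition 5 never needs to be enumerated explicitly: summing $\prod_t |W^t_{i_t i_{t+1}}|^q$ over all paths is a chained matrix product of the entrywise powers $|W^t|^q$. The sketch below (arbitrary small weights, not from the paper) checks this against brute-force path enumeration.

```python
import numpy as np
from itertools import product
rng = np.random.default_rng(7)

Ws = [rng.standard_normal((3, 4)), rng.standard_normal((4, 2)),
      rng.standard_normal((2, 1))]

def path_norm(Ws, q=1.0):
    # sum over all paths of prod_t |W^t_{i_t i_{t+1}}|^q, via chained products
    v = np.ones(Ws[0].shape[0])
    for W in Ws:
        v = v @ (np.abs(W) ** q)
    return float(np.sum(v)) ** (1 / q)

# brute force over every index path agrees
dims = [W.shape[0] for W in Ws] + [Ws[-1].shape[1]]
total = 0.0
for path in product(*[range(d) for d in dims]):
    total += np.prod([abs(Ws[t][path[t], path[t + 1]]) for t in range(len(Ws))])
print(abs(path_norm(Ws, 1.0) - total) < 1e-9)    # True
```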
Lemma 4.3 (Path-q norm). The following inequality holds for any $q \ge 1$:
$$(4.10)\qquad \frac{1}{L+1}\, \|\theta\|_{\mathrm{fr}} \le \|\pi(\theta)\|_q\,.$$
Remark 4.3. The path norm has been investigated in [Neyshabur et al., 2015a], where the definition is
$$\left( \sum_{i_0, i_1, \ldots, i_L} \prod_{t=0}^{L} |W^t_{i_t i_{t+1}}|^q \right)^{1/q}.$$
Again, let us provide an intuitive explanation for our pre-factor
$$\left[ \mathbb{E}\left( \sum_{i_0, i_1, \ldots, i_L} \Big| X_{i_0} \prod_{t=1}^{L+1} D^t_{i_t}(X) \Big|^{q^*} \right)^{2/q^*} \right]^{1/2},$$
here for the case $q = 1$. Due to Lemma 4.3, when the expectation is over the empirical measure,
$$\frac{1}{L+1}\, \|\theta\|_{\mathrm{fr}} \le \left[ \widehat{\mathbb{E}}\left( \sum_{i_0, i_1, \ldots, i_L} \Big| X_{i_0} \prod_{t=1}^{L+1} D^t_{i_t}(X) \Big|^{q^*} \right)^{2/q^*} \right]^{1/2} \cdot \left( \sum_{i_0, i_1, \ldots, i_L} \prod_{t=0}^{L} |W^t_{i_t i_{t+1}}|^q \right)^{1/q} \le \max_i \|X_i\|_\infty \cdot \left( \sum_{i_0, i_1, \ldots, i_L} \prod_{t=0}^{L} |W^t_{i_t i_{t+1}}| \right),$$
which implies
$$\left\{\theta :\ \sum_{i_0, i_1, \ldots, i_L} \prod_{t=0}^{L} |W^t_{i_t i_{t+1}}| \le \frac{1}{\max_i \|X_i\|_\infty}\right\} \subset B_{\mathrm{fr}}(1)\,.$$
By Corollary 7 in [Neyshabur et al., 2015b], we know that for any $r > 0$, the Rademacher complexity of the path-1 norm ball satisfies
$$\widehat{\mathcal{R}}_N\left( \left\{\theta :\ \sum_{i_0, i_1, \ldots, i_L} \prod_{t=0}^{L} |W^t_{i_t i_{t+1}}| \le r\right\} \right) \lesssim r \cdot \frac{2^L \max_i \|X_i\|_\infty \cdot \mathrm{Polylog}}{\sqrt{N}}\,.$$
Plugging in $r = \frac{1}{\max_i \|X_i\|_\infty}$, we find that the subset of the Fisher-Rao norm ball $B_{\mathrm{fr}}(1)$ induced by the path-1 norm geometry satisfies
$$\widehat{\mathcal{R}}_N\left( \left\{\theta :\ \sum_{i_0, i_1, \ldots, i_L} \prod_{t=0}^{L} |W^t_{i_t i_{t+1}}| \le \frac{1}{\max_i \|X_i\|_\infty}\right\} \right) \lesssim \frac{1}{\max_i \|X_i\|_\infty} \cdot \frac{2^L \max_i \|X_i\|_\infty \cdot \mathrm{Polylog}}{\sqrt{N}} \to 0\,.$$
Once again, the additional factor appearing in the Rademacher complexity bound in [Neyshabur et al., 2015b] cancels with our pre-factor in the norm comparison.
4.1.4 Matrix induced norm.
Definition 6 (Induced norm). Define the following "matrix induced norm" ball, for $p, q > 0$, as
$$(4.11)\qquad \|\theta\|_{p\to q} := \left[ \mathbb{E}\left( \|X\|_p^2 \prod_{t=1}^{L+1} \|D^t(X)\|_{q\to p}^2 \right) \right]^{1/2} \prod_{t=0}^{L} \|W^t\|_{p\to q}\,.$$
Lemma 4.4 (Matrix induced norm). For any $p, q > 0$, the following inequality holds:
$$\frac{1}{L+1}\, \|\theta\|_{\mathrm{fr}} \le \|\theta\|_{p\to q}\,.$$
Remark that $\|D^t(X)\|_{q\to p}^2$ may contain dependence on $k$ when $p \ne q$. This motivates us to consider the following generalization of the matrix induced norm, where the norm for each $W^t$ can be different.
Definition 7 (Chain of induced norm). Define the following "chain of induced norm" ball, for a chain $\mathcal{P} = (p_0, p_1, \ldots, p_{L+1})$, $p_i > 0$:
$$(4.12)\qquad \|\theta\|_{\mathcal{P}} := \left[ \mathbb{E}\left( \|X\|_{p_0}^2 \prod_{t=1}^{L+1} \|D^t(X)\|_{p_t\to p_t}^2 \right) \right]^{1/2} \prod_{t=0}^{L} \|W^t\|_{p_t\to p_{t+1}}\,.$$
Lemma 4.5 (Chain of induced norm). It holds that
$$\frac{1}{L+1}\, \|\theta\|_{\mathrm{fr}} \le \|\theta\|_{\mathcal{P}}\,.$$
Remark 4.4. Lemma 4.5 exhibits a new flexible norm that dominates the Fisher-Rao norm. The example shows that one can motivate a variety of new norms (and their corresponding geometries) as subsets of the Fisher-Rao norm ball.
We conclude this section with two geometric observations about the Fisher-Rao norm with the absolute loss function $\ell(f, y) = |f - y|$ and one output node. In this case, even though $\{\theta : \frac{1}{L+1}\|\theta\|_{\mathrm{fr}} \le 1\}$ is non-convex, it is star-shaped.
Lemma 4.6 (Star shape). For any $\theta \in \Theta$, let $\{r\theta,\ r > 0\}$ denote the ray connecting 0 and $\theta$, extended to infinity. Then one has
$$\frac{d}{dr}\, \|r\theta\|_{\mathrm{fr}}^2 = \frac{2(L+1)}{r}\, \|r\theta\|_{\mathrm{fr}}^2\,.$$
This also implies
$$\|r\theta\|_{\mathrm{fr}} = r^{L+1}\, \|\theta\|_{\mathrm{fr}}\,.$$
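The homogeneity conclusion of Lemma 4.6 is exact for bias-free ReLU networks and easy to confirm numerically (using the empirical Fisher-Rao norm for the absolute loss, as in Eqn. (3.3); the sizes and seed below are arbitrary choices):

```python
import numpy as np
rng = np.random.default_rng(8)

def forward(Ws, x):
    h = x
    for W in Ws[:-1]:
        h = np.maximum(h @ W, 0.0)
    return float(h @ Ws[-1])

def fr(Ws, X):
    # empirical Fisher-Rao norm for the absolute loss, Eqn. (3.3)
    L = len(Ws) - 1
    return (L + 1) * np.sqrt(np.mean([forward(Ws, x) ** 2 for x in X]))

Ws = [rng.standard_normal((4, 5)), rng.standard_normal((5, 1))]
X = rng.standard_normal((50, 4))
r, L = 3.0, len(Ws) - 1
lhs = fr([r * W for W in Ws], X)
rhs = r ** (L + 1) * fr(Ws, X)
print(lhs, rhs)   # equal: ||r theta||_fr = r^{L+1} ||theta||_fr
```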
Despite the non-convexity of the set of parameters with a bound on the Fisher-Rao norm, there is a certain "convexity" in function space:
Lemma 4.7 (Convexity in $f_\theta$). For any $\theta_1, \theta_2 \in \Theta^L$ that satisfy
$$\frac{1}{L+1}\, \|\theta_i\|_{\mathrm{fr}} \le 1\,, \quad \text{for } i = 1, 2\,,$$
and for any $0 < \lambda < 1$, the convex combination $\lambda f_{\theta_1} + (1-\lambda) f_{\theta_2}$ can be realized by a parameter $\theta' \in \Theta^{L+1}$, in the sense
$$f_{\theta'} = \lambda f_{\theta_1} + (1-\lambda) f_{\theta_2}\,,$$
which satisfies
$$\frac{1}{(L+1)+1}\, \|\theta'\|_{\mathrm{fr}} \le 1\,.$$
4.2 Generalization
In this section, we investigate the generalization puzzle for deep learning through the lens of the Fisher-Rao norm. We first give a rigorous proof, in the case of multi-layer linear networks, that capacity control with the Fisher-Rao norm ensures good generalization. We then provide a heuristic argument for why the Fisher-Rao norm seems to be the right norm-based capacity control for rectified neural networks, via the norm comparisons in Section 4.1. We complement our heuristic argument with extensive numerical investigations in Section 5.
Theorem 4.1 (Deep Linear Networks). Consider multi-layer linear networks with $\sigma(x) = x$, $L$ hidden layers, input dimension $p$ and a single output unit, and parameters $\theta \in \Theta^L = \{W^0, W^1, \ldots, W^L\}$. Define the Fisher-Rao norm ball as in Eqn. (3.5):
$$B_{\mathrm{fr}}(\gamma) = \Big\{\theta :\ \frac{1}{L+1}\, \|\theta\|_{\mathrm{fr}} \le \gamma\Big\}\,.$$
Then we have
$$(4.13)\qquad \mathbb{E}\, \widehat{\mathcal{R}}_N\big(B_{\mathrm{fr}}(\gamma)\big) \le \gamma \sqrt{\frac{p}{N}}\,,$$
assuming the Gram matrix $\mathbb{E}[XX^T] \in \mathbb{R}^{p\times p}$ is full rank.⁶
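For the linear case this claim can be checked directly. With a full-rank Gram matrix $\Sigma$ (here the empirical one, as a stand-in for $\mathbb{E}[XX^T]$ -- an assumption of this sketch), the sup over $B_{\mathrm{fr}}(\gamma)$ has the closed form $(\gamma/N)\,\|\Sigma^{-1/2}\sum_i \epsilon_i X_i\|_2$, whose average over sign draws is at most $\gamma\sqrt{p/N}$:

```python
import numpy as np
rng = np.random.default_rng(10)

N, p, gamma = 200, 4, 1.0
X = rng.standard_normal((N, p)) * np.array([1.0, 2.0, 0.5, 1.5])  # anisotropic

Sigma = X.T @ X / N                       # Gram matrix (full rank)
evals, V = np.linalg.eigh(Sigma)
Sinvh = V @ np.diag(evals ** -0.5) @ V.T  # Sigma^{-1/2}

# sup over {theta : empirical E (x^T w(theta))^2 <= gamma^2}, in closed form
draws = [gamma / N *
         np.linalg.norm(Sinvh @ (rng.choice([-1.0, 1.0], size=N) @ X))
         for _ in range(3000)]
estimate = float(np.mean(draws))
print(estimate, gamma * np.sqrt(p / N))   # estimate below gamma * sqrt(p/N)
```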
Remark 4.5. Combining the above theorem with classic symmetrization and margin bounds [Koltchinskii and Panchenko, 2002], one can deduce that for binary classification, the following generalization guarantee holds (for any margin parameter $\alpha > 0$):
$$\mathbb{E}\, \mathbf{1}\big[f_\theta(X) Y < 0\big] \le \frac{1}{N} \sum_i \mathbf{1}\big[f_\theta(X_i) Y_i \le \alpha\big] + \frac{C}{\alpha}\, \widehat{\mathcal{R}}_N\big(B_{\mathrm{fr}}(\gamma)\big) + C \sqrt{\frac{\log 1/\delta}{N}}\,,$$
for any $\theta \in B_{\mathrm{fr}}(\gamma)$, with probability at least $1 - \delta$, where $C > 0$ is some constant.
We would like to emphasize that, in order to explain generalization in this over-parametrized multi-layer linear network, it is indeed desirable that the generalization error in Theorem 4.1 depends only on the Fisher-Rao norm and the intrinsic input dimension $p$, without additional dependence on other network parameters (such as width or depth) or on $X$-dependent factors.
In the case of ReLU networks, it turns out that bounding $\widehat{\mathcal{R}}_N(B_{\mathrm{fr}}(\gamma))$ in terms of the Fisher-Rao norm is a very challenging task. Instead, we provide heuristic arguments by bounding the Rademacher complexity of various subsets of $B_{\mathrm{fr}}(\gamma)$. As discussed in Remarks 4.1-4.3, the norms considered (spectral, group, and path norm) can be viewed as subsets of the unit Fisher-Rao norm ball induced by distinct
⁶ This assumption is made to simplify the exposition, and can be removed.
geometry. To remind ourselves, we have shown
$$\text{spectral norm:}\quad B_{\|\cdot\|_\sigma} := \left\{\theta :\ \prod_{t=0}^{L} \|W^t\|_\sigma \le \frac{\gamma}{\big[\widehat{\mathbb{E}}\|X\|^2\big]^{1/2}}\right\} \subset B_{\mathrm{fr}}(\gamma)\,,$$
$$\text{group-}(p,q)\text{ norm:}\quad B_{\|\cdot\|_{p,q}} := \left\{\theta :\ \prod_{t=0}^{L} \|W^t\|_{p,q} \le \frac{\gamma}{\big(k^{[\frac{1}{p^*} - \frac{1}{q}]_+}\big)^{L} \max_i \|X_i\|_{p^*}}\right\} \subset B_{\mathrm{fr}}(\gamma)\,,$$
$$\text{path-1 norm:}\quad B_{\|\pi(\cdot)\|_1} := \left\{\theta :\ \sum_{i_0, i_1, \ldots, i_L} \prod_{t=0}^{L} |W^t_{i_t i_{t+1}}| \le \frac{\gamma}{\max_i \|X_i\|_\infty}\right\} \subset B_{\mathrm{fr}}(\gamma)\,,$$
from which the following bounds follow:
$$\widehat{\mathcal{R}}_N\big(B_{\|\cdot\|_\sigma}\big) \le \gamma \cdot \frac{\mathrm{Polylog}}{N}\,, \qquad \widehat{\mathcal{R}}_N\big(B_{\|\cdot\|_{p,q}}\big) \le \gamma \cdot \frac{2^L\, \mathrm{Polylog}}{\sqrt{N}}\,, \qquad \widehat{\mathcal{R}}_N\big(B_{\|\pi(\cdot)\|_1}\big) \le \gamma \cdot \frac{2^L\, \mathrm{Polylog}}{\sqrt{N}}\,.$$
The surprising fact is that, despite the distinct geometry of the subsets $B_{\|\cdot\|_\sigma}$, $B_{\|\cdot\|_{p,q}}$ and $B_{\|\pi(\cdot)\|_1}$ (which are described by different norms), the Rademacher complexities of these sets all depend explicitly on the "enveloping" Fisher-Rao norm, without either the intriguing combinatorial factor or the $X$-dependent factor. We believe this envelope property sheds light on how to compare different norm-based capacity measures.
Before concluding this section, we present contour plots of the Fisher-Rao norm and the path-2 norm for a simple two-layer ReLU network in Fig. 1, to better illustrate the geometry of the Fisher-Rao norm and the subsets induced by other norms. We choose two weights as the $x$- and $y$-axes and plot the level sets of the norms.
5. EXPERIMENTS
5.1 Experimental details
In the realistic $K$-class classification context there is no activation function on the $K$-dimensional output layer of the network ($\sigma_{L+1}(x) = x$), and we focus on the ReLU activation $\sigma(x) = \max\{0, x\}$ for the intermediate layers. The loss function is taken to be the cross entropy $\ell(y', y) = -\langle e_y, \log g(y')\rangle$, where $e_y \in \mathbb{R}^K$ denotes the one-hot-encoded class label and $g(z)$ is the softmax function defined by
$$g(z) = \left( \frac{\exp(z_1)}{\sum_{k=1}^{K} \exp(z_k)}, \ldots, \frac{\exp(z_K)}{\sum_{k=1}^{K} \exp(z_k)} \right)^T.$$
It can be shown that the gradient of the loss function with respect to the output of the neural network is $\nabla \ell(f, y) = -\nabla \langle e_y, \log g(f)\rangle = g(f) - e_y$, so plugging into the general expression for the Fisher-Rao norm we obtain
$$(5.1)\qquad \|\theta\|_{\mathrm{fr}}^2 = (L+1)^2\, \mathbb{E}\big[ \{\langle g(f_\theta(X)), f_\theta(X)\rangle - f_\theta(X)_Y\}^2 \big]\,.$$
Fig 1. The level sets of the Fisher-Rao norm (solid) and the path-2 norm (dotted). The color denotes the value of the norm.
In practice, since we do not have access to the population density $p(x)$ of the covariates, we estimate the Fisher-Rao norm by sampling from a test set of size $m$, leading to our final formulas
$$(5.2)\qquad \|\theta\|_{\mathrm{fr}}^2 = (L+1)^2\, \frac{1}{m} \sum_{i=1}^{m} \sum_{y=1}^{K} g(f_\theta(x_i))_y\, \big[ \langle g(f_\theta(x_i)), f_\theta(x_i)\rangle - f_\theta(x_i)_y \big]^2\,,$$
$$(5.3)\qquad \|\theta\|_{\mathrm{fr,emp}}^2 = (L+1)^2\, \frac{1}{m} \sum_{i=1}^{m} \big[ \langle g(f_\theta(x_i)), f_\theta(x_i)\rangle - f_\theta(x_i)_{y_i} \big]^2\,.$$
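Both estimators translate directly into code. The sketch below (random logits standing in for network outputs, not trained models) implements Eqns. (5.2) and (5.3):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fr_sq_model(F, L):
    # Eqn. (5.2): inner expectation over the model's label distribution g(f)
    tot = 0.0
    for f in F:
        g = softmax(f)
        tot += float(np.sum(g * (g @ f - f) ** 2))
    return (L + 1) ** 2 * tot / len(F)

def fr_sq_emp(F, ys, L):
    # Eqn. (5.3): expectation over the observed labels y_i
    tot = sum((softmax(f) @ f - f[y]) ** 2 for f, y in zip(F, ys))
    return (L + 1) ** 2 * tot / len(F)

rng = np.random.default_rng(9)
F = rng.standard_normal((100, 10))      # logits f_theta(x_i) on m = 100 points
ys = rng.integers(0, 10, size=100)      # observed labels
print(fr_sq_model(F, L=2), fr_sq_emp(F, ys, L=2))
```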
5.2 Over-parametrization with Hidden Units
In order to understand the effect of network over-parametrization, we investigated the relationship between the different proposals for capacity control and the number of parameters $d = pk_1 + \sum_{i=1}^{L-1} k_i k_{i+1} + k_L K$ of the neural network. For simplicity we focused on a fully connected architecture consisting of $L$ hidden layers with $k$ neurons per hidden layer, so that the expression simplifies to $d = k[p + k(L-1) + K]$. The network parameters were learned by minimizing the cross-entropy loss on the CIFAR-10 image classification dataset with no explicit regularization or data augmentation. The cross-entropy loss was optimized using 200 epochs of minibatch gradient descent, utilizing minibatches of size 50 and otherwise identical experimental conditions to those described in [Zhang et al., 2016]. The same experiment was repeated using minibatch natural gradient descent employing the Kronecker-factored approximate curvature (K-FAC) method [Martens and Grosse, 2015] with the same learning rate and momentum schedules. The first fact we observe is that the Fisher-Rao norm remains approximately constant (or decreasing) when the network is over-parametrized by increasing the width $k$ at fixed depth $L = 2$ (see Fig. 2). If we vary the depth $L$ of the network at fixed
width $k = 500$, then we find that the Fisher-Rao norm is essentially constant when measured in its "natural units" of $L + 1$ (see Fig. 3). Finally, if we compare each proposal based on its absolute magnitude, the Fisher-Rao norm is distinguished as the minimum-value norm, and becomes $O(1)$ when evaluated using the model distribution. This self-normalizing property can be understood as a consequence of the relationship to flatness discussed in Section 3.3, which holds when the expectation is taken with respect to the model.
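For concreteness, the parameter-count formula above evaluated at a width used in this section (assuming CIFAR-10 inputs, so $p = 32\cdot32\cdot3 = 3072$ and $K = 10$):

```python
def n_params(p, k, L, K):
    # d = k * (p + k*(L-1) + K): fully connected, L hidden layers of width k
    return k * (p + k * (L - 1) + K)

# CIFAR-10 inputs are 32*32*3 = 3072-dimensional with K = 10 classes
print(n_params(p=3072, k=500, L=2, K=10))   # 1791000
```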
5.3 Corruption with Random Labels
Over-parametrized neural networks tend to exhibit good generalization despite perfectly fitting the training set [Zhang et al., 2016]. In order to pinpoint the "correct" notion of complexity which drives generalization error, we conducted a series of experiments in which we varied both the network size and the signal-to-noise ratio of the datasets. In particular, we focus on the set of neural architectures obtained by varying the hidden layer width $k$ at fixed depth $L = 2$; moreover, for each training/test example we assign a random label with probability $\alpha$.
It can be seen from the last two panels of Figs. 5 and 4 that for non-random labels ($\alpha = 0$) the empirical Fisher-Rao norm actually decreases with increasing $k$, in tandem with the generalization error, and this correlation seems to persist when we vary the label randomization. Overall the Fisher-Rao norm is distinguished from other measures of capacity by the fact that its empirical version seems to track the generalization gap, and this trend does not appear to be sensitive to the choice of optimizer.
It is also interesting to note that the Fisher-Rao norm has a stability property with respect to increasing $k$, which suggests that a formal $k \to \infty$ limit might exist. Finally, we note that unlike the vanilla gradient, the natural gradient differentiates the different architectures by their Fisher-Rao norm. Although we do not completely understand this phenomenon, it is likely a consequence of the fact that the natural gradient iteratively minimizes the Fisher-Rao semi-norm.
5.4 Margin Story
Bartlett et al. [2017] adopted the margin story to explain generalization. They investigated the spectrally-normalized margin to explain why CIFAR-10 with random labels is a harder dataset (generalizes poorly) than the uncorrupted CIFAR-10 (which generalizes well). We adopt the same idea in this experiment, where we plot the margin normalized by the empirical Fisher-Rao norm, in comparison to the spectral norm, based on models trained either by vanilla gradient or natural gradient. It can be seen from Fig. 6 that the Fisher-Rao-normalized margin also accounts for the generalization gap between random and original CIFAR-10. In addition, Table 1 shows that the empirical Fisher-Rao norm improves the normalized margin relative to the spectral norm. These results were obtained by optimizing with the natural gradient, but are not sensitive to the choice of optimizer.
5.5 Natural Gradient and Pre-conditioning
It was shown in [Shalev-Shwartz et al., 2017] that multi-layer networks struggle to learn certain piecewise-linear curves because the problem instances are poorly conditioned. The failure was attributed to the fact that simply using a black-box model, without a deeper analytical understanding of the problem structure, could
                         | alpha = 0 | alpha = 1 | Ratio
    Model Fisher-Rao     |      1.61 |      2.12 |  0.76
    Empirical Fisher-Rao |     22.68 |     35.98 |  0.63
    Spectral             |    136.67 |    205.56 |  0.66

Table 1. Comparison of the Fisher-Rao norm and the spectral norm after training with natural gradient, using the original dataset ($\alpha = 0$) and with random labels ($\alpha = 1$). Qualitatively similar results hold for GD+momentum.
be computationally sub-optimal. Our results suggest that the problem can be overcome within the confines of black-box optimization by using the natural gradient. In other words, the natural gradient automatically pre-conditions the problem, and appears to achieve performance similar to that attained by hard-coded convolutions [Shalev-Shwartz et al., 2017], within the same number of iterations (see Fig. 7).
Fig 2. Dependence of different norms on width k of hidden layers (L = 2) after optimizing with vanilla gradient descent (red) and natural gradient descent (blue).
[Figure 3 panels: l2 norm, Path norm, Spectral norm, Model Fisher-Rao norm (normalized), and Empirical Fisher-Rao norm (normalized), plotted against the number of hidden layers.]
Fig 3. Dependence of different norms on depth L (k = 500) after optimizing with vanilla gradient descent (red) and natural gradient descent (blue). The Fisher-Rao norms are normalized by L + 1.
[Figure 4 panels: l2 norm, Spectral norm, Path norm, Model Fisher-Rao norm, Empirical Fisher-Rao norm, and generalization gap, plotted against label randomization.]
Fig 4. Dependence of capacity measures on label randomization after optimizing with vanilla gradient descent. The colors show the effect of varying network width from k = 200 (red) to k = 1000 (blue) in increments of 100.
[Figure 5 panels: l2 norm, Spectral norm, Path norm, Model Fisher-Rao norm, Empirical Fisher-Rao norm, and generalization gap, plotted against label randomization.]
Fig 5. Dependence of capacity measures on label randomization after optimizing with the natural gradient descent. The colors show the effect of varying network width from k = 200 (red) to k = 1000 (blue) in increments of 100. The natural gradient optimization clearly distinguishes the network architectures according to their Fisher-Rao norm.
Fig 6. Distribution of margins found by natural gradient (top) and vanilla gradient (bottom)
before rescaling (left) and after rescaling by spectral norm (center) and empirical Fisher-Rao
norm (right).
Fig 7. Reproduction of the conditioning experiment from [Shalev-Shwartz et al., 2017] after 10^4 iterations of Adam (dashed) and K-FAC (red).
6. FURTHER DISCUSSION
In this paper we studied the generalization puzzle of deep learning from an invariance point of view. The notions of invariance come from several angles: invariance in information geometry, invariance under non-linear local transformations, invariance under function equivalence, algorithmic invariance to parametrization, invariance of "flat" minima under linear transformations, among many others. We proposed a new non-convex capacity measure using the Fisher-Rao norm, and we demonstrated its good properties as a capacity measure from both the theoretical and the empirical side.
6.1 Parameter identifiability
Let us briefly discuss the aforementioned parameter identifiability issue in deep networks. The function classes considered in this paper admit various group actions which leave the function output invariant⁷. This means that our hypothesis class is in bijection with the equivalence class H_L ≅ Θ/∼, where we identify θ ∼ θ′ if and only if f_θ ≡ f_{θ′}. Unlike previously considered norms, the capacity measure introduced in this paper respects all of the symmetries of H_L.
Non-negative homogeneity. In the example of deep linear networks, where σ(x) = x, we have the following non-Abelian Lie group symmetry acting on the network weights: for all (Λ_1, …, Λ_L) ∈ GL(k_1, R) × ⋯ × GL(k_L, R),

θ → (W^0 Λ_1, Λ_1^{−1} W^1 Λ_2, …, Λ_l^{−1} W^l Λ_{l+1}, …, Λ_L^{−1} W^L).

It is convenient to express these transformations in terms of the Lie algebra of real-valued matrices (M_1, …, M_L) ∈ M_{k_1}(R) × ⋯ × M_{k_L}(R),

θ → (W^0 e^{M_1}, e^{−M_1} W^1 e^{M_2}, …, e^{−M_l} W^l e^{M_{l+1}}, …, e^{−M_L} W^L).

If σ(x) = max{0, x} (deep rectified network), then the symmetry is broken to the abelian subalgebra (v_1, …, v_L) ∈ R^{k_1} × ⋯ × R^{k_L},

Λ_1 = e^{diag(v_1)}, …, Λ_L = e^{diag(v_L)}.
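These continuous symmetries are easy to verify numerically. Below is a minimal sketch (toy weights, not from the paper): a deep linear network's output is unchanged when a factor Λ, Λ^{−1} is inserted between layers, while for ReLU only positive diagonal rescalings survive.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 4
W0, W1, W2 = rng.normal(size=(k, k)), rng.normal(size=(k, k)), rng.normal(size=(k, 1))
x = rng.normal(size=(1, k))

# GL(k) symmetry for the linear activation: insert Lambda, Lambda^{-1} between layers
Lam = np.eye(k) + 0.1 * rng.normal(size=(k, k))  # small perturbation of I, hence invertible
f = x @ W0 @ W1 @ W2
f_transformed = x @ (W0 @ Lam) @ (np.linalg.inv(Lam) @ W1) @ W2
assert np.allclose(f, f_transformed)

# for ReLU, the symmetry is broken to positive diagonal rescalings e^{diag(v)}
relu = lambda z: np.maximum(z, 0.0)
D = np.diag(np.exp(rng.normal(size=k)))  # entries > 0
g = relu(x @ W0) @ W1
g_transformed = relu(x @ (W0 @ D)) @ (np.linalg.inv(D) @ W1)
assert np.allclose(g, g_transformed)
print("linear and rectified symmetries verified")
```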
Dead neurons. For certain choices of activation function, the symmetry group is enhanced at some θ ∈ Θ. For example, if σ(x) = max{0, x} and the parameter vector θ is such that all the weights and biases feeding into some hidden unit v ∈ V are negative, then f_θ is invariant with respect to all of the outgoing weights of v.

⁷ In the statistics literature this is referred to as a non-identifiable function class.
Permutation symmetry. In addition to the continuous symmetries there is a discrete group of permutation symmetries. In the case of a single hidden layer with k units, this discrete symmetry gives rise to k! equivalent weights for a given θ. If in addition the activation function satisfies σ(−x) = −σ(x) (such as tanh), then we obtain an additional degeneracy factor of 2^k.
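A toy spot-check of these discrete degeneracies on a single-hidden-layer network (hypothetical weights): permuting the hidden units, or flipping signs when the activation is odd (tanh), leaves the output unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
k = 3
W0, w1 = rng.normal(size=(2, k)), rng.normal(size=(k, 1))
x = rng.normal(size=(1, 2))
f = np.tanh(x @ W0) @ w1

# one of the k! permutation symmetries
perm = np.array([2, 0, 1])
assert np.allclose(f, np.tanh(x @ W0[:, perm]) @ w1[perm])

# one of the 2^k sign-flip symmetries, using tanh(-z) = -tanh(z)
s = np.array([1.0, -1.0, -1.0])
assert np.allclose(f, np.tanh(x @ (W0 * s)) @ (w1 * s[:, None]))
print("permutation and sign-flip degeneracies verified")
```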
6.2 Invariance of natural gradient
Consider the continuous-time analog of the natural gradient flow,

(6.1)  dθ_t = −I(θ_t)^{−1} ∇_θ L(θ_t) dt,

where θ ∈ R^p. Consider a differentiable transformation from one parametrization to another, θ ↦ ξ ∈ R^q, denoted ξ(θ) : R^p → R^q. Denote the Jacobian J_ξ(θ) = ∂(ξ_1, ξ_2, …, ξ_q)/∂(θ_1, θ_2, …, θ_p) ∈ R^{q×p}. Define the loss function L̃ : ξ → R that satisfies

L(θ) = L̃(ξ(θ)) = L̃ ∘ ξ(θ),

and denote by Ĩ(ξ) the Fisher information on ξ associated with L̃. Consider also the natural gradient flow in the ξ parametrization,

(6.2)  dξ_t = −Ĩ(ξ_t)^{−1} ∇_ξ L̃(ξ_t) dt.
Intuitively, one can show that the natural gradient flow is “invariant” to the
specific parametrization of the problem.
Lemma 6.1 (Parametrization invariance). Let θ ∈ R^p, and let ξ(θ) : R^p → R^q denote a differentiable transformation from one parametrization to another, θ ↦ ξ ∈ R^q. Assume I(θ) and Ĩ(ξ) are invertible, and consider the two natural gradient flows {θ_t, t > 0} and {ξ_t, t > 0} defined in Eqns. (6.1) and (6.2) on θ and ξ, respectively.
(1) Re-parametrization: if q = p and J_ξ(θ) is invertible, then the natural gradient flows on the two parametrizations satisfy

ξ(θ_t) = ξ_t,  ∀t,

provided the initial locations θ_0, ξ_0 are equivalent in the sense that ξ(θ_0) = ξ_0.
(2) Over-parametrization: if q > p and ξ_t = ξ(θ_t) at some fixed time t, then the infinitesimal change satisfies

ξ(θ_{t+dt}) − ξ(θ_t) = M_t (ξ_{t+dt} − ξ_t),

where M_t = Ĩ(ξ_t)^{−1/2} (I_q − U_⊥ U_⊥^T) Ĩ(ξ_t)^{1/2} has eigenvalues either 0 or 1, and the columns of U_⊥ span the null space of Ĩ(ξ)^{1/2} J_ξ(θ).
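Claim (1) can be sanity-checked in the simplest setting, a linear reparametrization ξ = Aθ with an arbitrary symmetric positive-definite matrix playing the role of Ĩ(ξ) (all values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
p = 3
A = np.eye(p) + 0.1 * rng.normal(size=(p, p))       # invertible Jacobian of xi(theta) = A theta
M = rng.normal(size=(p, p))
I_xi = M @ M.T + p * np.eye(p)                      # SPD Fisher information on xi
grad_xi = rng.normal(size=p)                        # gradient of L-tilde at xi

# pull back to theta: grad_theta = A^T grad_xi, I_theta = A^T I_xi A
grad_theta = A.T @ grad_xi
I_theta = A.T @ I_xi @ A

step_theta = -np.linalg.solve(I_theta, grad_theta)  # natural gradient step in theta
step_xi = -np.linalg.solve(I_xi, grad_xi)           # natural gradient step in xi

# the theta-step, mapped through the Jacobian, matches the xi-step
assert np.allclose(A @ step_theta, step_xi)
print("natural gradient step is parametrization invariant")
```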
7. PROOFS
Proof of Lemma 2.1. Recall the property of the activation function in (2.2). Let us prove that, for any 0 ≤ t ≤ s ≤ L and any l ∈ [k_{s+1}],

(7.1)  ∑_{i∈[k_t], j∈[k_{t+1}]} (∂O_l^{s+1}/∂W_{ij}^t) W_{ij}^t = O_l^{s+1}(x).
We prove this statement via induction on the non-negative gap s − t. Starting with s − t = 0, we have

∂O_l^{t+1}/∂W_{il}^t = (∂O_l^{t+1}/∂N_l^{t+1}) (∂N_l^{t+1}/∂W_{il}^t) = σ′(N_l^{t+1}(x)) O_i^t(x),
∂O_l^{t+1}/∂W_{ij}^t = 0  for j ≠ l,

and, therefore,

(7.2)  ∑_{i∈[k_t], j∈[k_{t+1}]} (∂O_l^{t+1}/∂W_{ij}^t) W_{ij}^t = ∑_{i∈[k_t]} σ′(N_l^{t+1}(x)) O_i^t(x) W_{il}^t = σ′(N_l^{t+1}(x)) N_l^{t+1}(x) = O_l^{t+1}(x).

This settles the base case s − t = 0.
Assume the induction hypothesis holds for all gaps s − t ≤ h (h ≥ 0), and let us prove it for s − t = h + 1. By the chain rule in the back-propagation updates,

(7.3)  ∂O_l^{s+1}/∂W_{ij}^t = (∂O_l^{s+1}/∂N_l^{s+1}) ∑_{k∈[k_s]} (∂N_l^{s+1}/∂O_k^s) (∂O_k^s/∂W_{ij}^t).

Using the induction hypothesis on ∂O_k^s/∂W_{ij}^t, since (s − 1) − t = h,

(7.4)  ∑_{i∈[k_t], j∈[k_{t+1}]} (∂O_k^s/∂W_{ij}^t) W_{ij}^t = O_k^s(x),

and, therefore,

∑_{i∈[k_t], j∈[k_{t+1}]} (∂O_l^{s+1}/∂W_{ij}^t) W_{ij}^t = (∂O_l^{s+1}/∂N_l^{s+1}) ∑_{k∈[k_s]} (∂N_l^{s+1}/∂O_k^s) ∑_{i∈[k_t], j∈[k_{t+1}]} (∂O_k^s/∂W_{ij}^t) W_{ij}^t
  = σ′(N_l^{s+1}(x)) ∑_{k∈[k_s]} W_{kl}^s O_k^s(x) = O_l^{s+1}(x).

This completes the induction argument. In other words, we have proved that, for any t ≤ s and any hidden unit l in layer s + 1,

(7.5)  ∑_{i,j ∈ dim(W^t)} (∂O_l^{s+1}/∂W_{ij}^t) W_{ij}^t = O_l^{s+1}(x).
Remark that in the case when there are hard-coded zero weights, the proof still goes through exactly. The reason is that, for the base case s = t,

∑_{i∈[k_t], j∈[k_{t+1}]} (∂O_l^{t+1}/∂W_{ij}^t) W_{ij}^t = ∑_{i∈[k_t]} σ′(N_l^{t+1}(x)) O_i^t(x) W_{il}^t 1(W_{il}^t ≠ 0) = σ′(N_l^{t+1}(x)) N_l^{t+1}(x) = O_l^{t+1}(x),
and for the induction step,

∑_{i∈[k_t], j∈[k_{t+1}]} (∂O_l^{s+1}/∂W_{ij}^t) W_{ij}^t = σ′(N_l^{s+1}(x)) ∑_{k∈[k_s]} W_{kl}^s O_k^s(x) 1(W_{kl}^s ≠ 0) = O_l^{s+1}(x). ∎
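The layer-wise identity (7.5) is an Euler-homogeneity statement, and for a single-hidden-layer ReLU network the gradients are available in closed form, so it can be checked directly (a toy sketch, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(3)
relu = lambda z: np.maximum(z, 0.0)
k0, k1 = 4, 5
W0, w1 = rng.normal(size=(k0, k1)), rng.normal(size=(k1,))
x = rng.normal(size=k0)

h = x @ W0
f = relu(h) @ w1

# exact gradients for the single-hidden-layer case
dW0 = np.outer(x, (h > 0) * w1)  # df/dW0_{ij} = x_i 1{h_j > 0} w1_j
dw1 = relu(h)                    # df/dw1_j = relu(h)_j

# sum_{ij} (df/dW^t_{ij}) W^t_{ij} = f(x) for each layer t
assert np.isclose(np.sum(dW0 * W0), f)
assert np.isclose(np.sum(dw1 * w1), f)
print("layer-wise Euler identity verified")
```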
Proof of Lemma 4.1. The proof follows from a peeling argument from the right-hand side. Recall O^t ∈ R^{1×k_t}; one has

(1/(L+1)²) ‖θ‖²_fr = E[|O^L W^L D^{L+1}|²]
  ≤ E[‖W^L‖²_σ ‖O^L‖²_2 |D^{L+1}(X)|²]   (because |O^L W^L| ≤ ‖W^L‖_σ ‖O^L‖_2)
  = E[|D^{L+1}(X)|² ‖W^L‖²_σ ‖O^{L−1} W^{L−1} D^L‖²_2]
  ≤ E[|D^{L+1}(X)|² ‖W^L‖²_σ ‖O^{L−1} W^{L−1}‖²_2 ‖D^L‖²_σ]
  ≤ E[‖D^L‖²_σ |D^{L+1}(X)|² ‖W^L‖²_σ ‖W^{L−1}‖²_σ ‖O^{L−1}‖²_2]
  ≤ … (repeat the process to bound ‖O^{L−1}‖_2)
  ≤ E[‖X‖² ∏_{t=1}^{L+1} ‖D^t(X)‖²_σ] ∏_{t=0}^{L} ‖W^t‖²_σ = ‖θ‖²_σ. ∎
Proof of Lemma 4.2. The proof again follows a peeling argument from the right. We know that

(1/(L+1)²) ‖θ‖²_fr = E[|O^L W^L D^{L+1}|²]
  ≤ E[‖W^L‖²_{p,q} ‖O^L‖²_{p*} |D^{L+1}(X)|²]   (use (7.6))
  = E[|D^{L+1}(X)|² ‖W^L‖²_{p,q} ‖O^{L−1} W^{L−1} D^L‖²_{p*}]
  ≤ E[|D^{L+1}(X)|² ‖W^L‖²_{p,q} ‖O^{L−1} W^{L−1}‖²_q ‖D^L‖²_{q→p*}]
  ≤ E[‖D^L‖²_{q→p*} |D^{L+1}(X)|² ‖W^L‖²_{p,q} ‖W^{L−1}‖²_{p,q} ‖O^{L−1}‖²_{p*}]   (use (7.8))
  ≤ … (repeat the process to bound ‖O^{L−1}‖_{p*})
  ≤ E[‖X‖²_{p*} ∏_{t=1}^{L+1} ‖D^t(X)‖²_{q→p*}] ∏_{t=0}^{L} ‖W^t‖²_{p,q} = ‖θ‖²_{p,q}.

In the first inequality of the proof we use Hölder's inequality

(7.6)  ⟨w, v⟩ ≤ ‖w‖_p ‖v‖_{p*},  where 1/p + 1/p* = 1.

Let us prove that, for v ∈ R^n and M ∈ R^{n×m},

(7.7)  ‖v^T M‖_q ≤ ‖v‖_{p*} ‖M‖_{p,q}.

Denoting each column of M by M_{·j}, for 1 ≤ j ≤ m,

(7.8)  ‖v^T M‖_q = (∑_{j=1}^{m} |v^T M_{·j}|^q)^{1/q} ≤ (∑_{j=1}^{m} ‖v‖^q_{p*} ‖M_{·j}‖^q_p)^{1/q} = ‖v‖_{p*} ‖M‖_{p,q}. ∎
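Inequality (7.7) can be spot-checked numerically for generic v and M; here ‖M‖_{p,q} is computed as the q-norm of the vector of column-wise p-norms, exactly as in the display above (illustrative values):

```python
import numpy as np

rng = np.random.default_rng(4)
p, q = 2.0, 3.0
p_star = p / (p - 1.0)  # conjugate exponent: 1/p + 1/p* = 1
v = rng.normal(size=6)
M = rng.normal(size=(6, 4))

lhs = np.linalg.norm(v @ M, ord=q)
col_p_norms = np.linalg.norm(M, ord=p, axis=0)  # column-wise p-norms
rhs = np.linalg.norm(v, ord=p_star) * np.linalg.norm(col_p_norms, ord=q)
assert lhs <= rhs + 1e-12
print(f"{lhs:.3f} <= {rhs:.3f}")
```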
Proof of Lemma 4.3. The proof is due to Hölder's inequality: for any x ∈ R^p,

|∑_{i_0,i_1,…,i_L} x_{i_0} W^0_{i_0 i_1} D^1_{i_1}(x) W^1_{i_1 i_2} ⋯ D^L_{i_L}(x) W^L_{i_L} D^{L+1}(x)|
  ≤ (∑_{i_0,i_1,…,i_L} |x_{i_0} D^1_{i_1}(x) ⋯ D^L_{i_L}(x) D^{L+1}(x)|^{q*})^{1/q*} (∑_{i_0,i_1,…,i_L} |W^0_{i_0 i_1} W^1_{i_1 i_2} W^2_{i_2 i_3} ⋯ W^L_{i_L}|^q)^{1/q}.

Therefore we have

(1/(L+1)²) ‖θ‖²_fr = E |∑_{i_0,i_1,…,i_L} X_{i_0} W^0_{i_0 i_1} D^1_{i_1}(X) W^1_{i_1 i_2} ⋯ W^L_{i_L} D^{L+1}(X)|²
  ≤ E[(∑_{i_0,i_1,…,i_L} |X_{i_0} D^1_{i_1}(X) ⋯ D^L_{i_L}(X) D^{L+1}(X)|^{q*})^{2/q*}] (∑_{i_0,i_1,…,i_L} |W^0_{i_0 i_1} W^1_{i_1 i_2} W^2_{i_2 i_3} ⋯ W^L_{i_L}|^q)^{2/q},

which is

(1/(L+1)) ‖θ‖_fr ≤ [E (∑_{i_0,i_1,…,i_L} |X_{i_0} ∏_{t=1}^{L+1} D^t_{i_t}(X)|^{q*})^{2/q*}]^{1/2} (∑_{i_0,i_1,…,i_L} ∏_{t=0}^{L} |W^t_{i_t i_{t+1}}|^q)^{1/q} = ‖π(θ)‖_q. ∎
Proof of Lemma 4.4. The proof follows from the recursive use of the inequality

‖v^T M‖_q ≤ ‖M‖_{p→q} ‖v‖_p.

We have

‖θ‖²_fr = E[|O^L W^L D^{L+1}|²]
  ≤ E[‖W^L‖²_{p→q} ‖O^L‖²_p |D^{L+1}(X)|²]
  ≤ E[|D^{L+1}(X)|² ‖W^L‖²_{p→q} ‖O^{L−1} W^{L−1} D^L‖²_p]
  ≤ E[|D^{L+1}(X)|² ‖W^L‖²_{p→q} ‖O^{L−1} W^{L−1}‖²_q ‖D^L‖²_{q→p}]
  ≤ E[‖D^L‖²_{q→p} ‖D^{L+1}(X)‖²_{q→p} ‖W^L‖²_{p→q} ‖W^{L−1}‖²_{p→q} ‖O^{L−1}‖²_p]
  ≤ … (repeat the process to bound ‖O^{L−1}‖_p)
  ≤ E[‖X‖²_p ∏_{t=1}^{L+1} ‖D^t(X)‖²_{q→p}] ∏_{t=0}^{L} ‖W^t‖²_{p→q} = ‖θ‖²_{p→q},

where the third-to-last line holds because D^{L+1}(X) ∈ R^1, so |D^{L+1}(X)| = ‖D^{L+1}(X)‖_{q→p}. ∎
Proof of Lemma 4.5. The proof follows from a different strategy of peeling the terms from the right-hand side, as follows:

‖θ‖²_fr = E[|O^L W^L D^{L+1}|²]
  ≤ E[‖W^L‖²_{p_L→p_{L+1}} ‖O^L‖²_{p_L} |D^{L+1}(X)|²]
  ≤ E[|D^{L+1}(X)|² ‖W^L‖²_{p_L→p_{L+1}} ‖O^{L−1} W^{L−1} D^L‖²_{p_L}]
  ≤ E[|D^{L+1}(X)|² ‖W^L‖²_{p_L→p_{L+1}} ‖O^{L−1} W^{L−1}‖²_{p_L} ‖D^L‖²_{p_L→p_L}]
  ≤ E[‖D^L‖²_{p_L→p_L} |D^{L+1}(X)|² ‖W^L‖²_{p_L→p_{L+1}} ‖W^{L−1}‖²_{p_{L−1}→p_L} ‖O^{L−1}‖²_{p_{L−1}}]
  ≤ … ≤ E[‖X‖²_{p_0} ∏_{t=1}^{L+1} ‖D^t(X)‖²_{p_t→p_t}] ∏_{t=0}^{L} ‖W^t‖²_{p_t→p_{t+1}} = ‖θ‖²_P. ∎
Proof of Lemma 4.6. Using Lemma 2.1,

(d/dr) ‖rθ‖²_fr = E[2 θ^T ∇f_{rθ}(X) f_{rθ}(X)] = E[(2(L+1)/r) f_{rθ}(X) f_{rθ}(X)] = (2(L+1)/r) ‖rθ‖²_fr.

The last claim can be proved by solving this simple ODE. ∎
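The resulting scaling ‖rθ‖_fr = r^{L+1} ‖θ‖_fr reflects the positive homogeneity f_{rθ}(x) = r^{L+1} f_θ(x) of rectified networks, which can be checked directly (a toy network with L + 1 = 3 weight matrices; values illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
relu = lambda z: np.maximum(z, 0.0)
W0, W1, w2 = rng.normal(size=(3, 4)), rng.normal(size=(4, 4)), rng.normal(size=(4,))
x = rng.normal(size=3)

# f evaluated at the rescaled parameter c * theta
f = lambda c: relu(relu(x @ (c * W0)) @ (c * W1)) @ (c * w2)

r = 1.7
# homogeneity: f_{r theta} = r^{L+1} f_theta with L + 1 = 3 weight layers
assert np.isclose(f(r), r**3 * f(1.0))
print("positive homogeneity verified")
```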
Proof of Lemma 4.7. Let us first construct θ′ ∈ Θ_{L+1} that realizes λ f_{θ1} + (1−λ) f_{θ2}. The idea is very simple: we put the θ1 and θ2 networks side by side, then construct an additional output layer with weights λ and 1−λ on the outputs of f_{θ1} and f_{θ2}, where the final output layer is passed through σ(x) = x. One can easily see that our key Lemma 2.1 still holds for this network: the interaction weights between f_{θ1} and f_{θ2} are always hard-coded as 0. Therefore we have constructed a θ′ ∈ Θ_{L+1} that realizes λ f_{θ1} + (1−λ) f_{θ2}.

Now recall that

(1/(L+2)) ‖θ′‖_fr = (E f²_{θ′})^{1/2} = (E (λ f_{θ1} + (1−λ) f_{θ2})²)^{1/2} ≤ λ (E f²_{θ1})^{1/2} + (1−λ) (E f²_{θ2})^{1/2} ≤ 1,

because E[f_{θ1} f_{θ2}] ≤ (E f²_{θ1})^{1/2} (E f²_{θ2})^{1/2}. ∎
Proof of Theorem 4.1. Due to Eqn. (3.5), one has

(1/(L+1)²) ‖θ‖²_fr = E[v(θ, X)^T X X^T v(θ, X)] = v(θ)^T E[X X^T] v(θ),

because in the linear case v(θ, X) = W^0 D^1(x) W^1 D^2(x) ⋯ D^L(x) W^L D^{L+1}(x) = ∏_{t=0}^{L} W^t =: v(θ) ∈ R^p. Therefore

R_N(B_fr(γ)) = E sup_{θ∈B_fr(γ)} (1/N) ∑_{i=1}^{N} ε_i f_θ(X_i)
  = E sup_{θ∈B_fr(γ)} (1/N) ∑_{i=1}^{N} ε_i X_i^T v(θ)
  = E sup_{θ∈B_fr(γ)} ⟨(1/N) ∑_{i=1}^{N} ε_i X_i, v(θ)⟩
  ≤ γ E ‖(1/N) ∑_{i=1}^{N} ε_i X_i‖_{[E(XX^T)]^{−1}}
  ≤ γ (1/√N) (E ‖(1/√N) ∑_{i=1}^{N} ε_i X_i‖²_{[E(XX^T)]^{−1}})^{1/2}
  = γ (1/√N) (E ⟨(1/N) ∑_{i=1}^{N} X_i X_i^T, [E(XX^T)]^{−1}⟩)^{1/2}.

Therefore

E R_N(B_fr(γ)) ≤ γ (1/√N) (E ⟨(1/N) ∑_{i=1}^{N} X_i X_i^T, [E(XX^T)]^{−1}⟩)^{1/2} = γ √(p/N). ∎
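The final equality uses E⟨(1/N)∑_i X_i X_i^T, [E(XX^T)]^{−1}⟩ = tr(I_p) = p, which a quick simulation confirms (illustrative covariance):

```python
import numpy as np

rng = np.random.default_rng(5)
p, N = 3, 200_000
A = rng.normal(size=(p, p))
Sigma = A @ A.T + np.eye(p)                 # population covariance E[X X^T]
X = rng.multivariate_normal(np.zeros(p), Sigma, size=N)

Sigma_hat = X.T @ X / N
# <Sigma_hat, Sigma^{-1}> = trace(Sigma_hat Sigma^{-1}) -> trace(I_p) = p
val = np.trace(Sigma_hat @ np.linalg.inv(Sigma))
assert abs(val - p) < 0.1
print(val)
```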
Proof of Lemma 6.1. From basic calculus, one has

∇_θ L(θ) = J_ξ(θ)^T ∇_ξ L̃(ξ),   I(θ) = J_ξ(θ)^T Ĩ(ξ) J_ξ(θ).

Therefore, plugging these expressions into the natural gradient flow in θ,

dθ_t = −I(θ_t)^{−1} ∇_θ L(θ_t) dt = −[J_ξ(θ_t)^T Ĩ(ξ(θ_t)) J_ξ(θ_t)]^{−1} J_ξ(θ_t)^T ∇_ξ L̃(ξ(θ_t)) dt.

In the re-parametrization case, J_ξ(θ) is invertible, and assuming ξ_t = ξ(θ_t),

dθ_t = −[J_ξ(θ_t)^T Ĩ(ξ(θ_t)) J_ξ(θ_t)]^{−1} J_ξ(θ_t)^T ∇_ξ L̃(ξ(θ_t)) dt = −J_ξ(θ_t)^{−1} Ĩ(ξ(θ_t))^{−1} ∇_ξ L̃(ξ(θ_t)) dt,

so that

J_ξ(θ_t) dθ_t = −Ĩ(ξ(θ_t))^{−1} ∇_ξ L̃(ξ(θ_t)) dt,
dξ(θ_t) = −Ĩ(ξ(θ_t))^{−1} ∇_ξ L̃(ξ(θ_t)) dt = −Ĩ(ξ_t)^{−1} ∇_ξ L̃(ξ_t) dt.

What we have shown is that, under ξ_t = ξ(θ_t), ξ(θ_{t+dt}) = ξ_{t+dt}. Therefore, if ξ_0 = ξ(θ_0), we have ξ_t = ξ(θ_t) for all t.
In the over-parametrization case, J_ξ(θ) ∈ R^{q×p} is a non-square matrix. For simplicity of derivation, abbreviate B := J_ξ(θ) ∈ R^{q×p}. We have

dθ_t = θ_{t+dt} − θ_t = −I(θ_t)^{−1} ∇_θ L(θ_t) dt = −[B^T Ĩ(ξ) B]^{−1} B^T ∇_ξ L̃(ξ(θ_t)) dt,
B(θ_{t+dt} − θ_t) = −B [B^T Ĩ(ξ) B]^{−1} B^T ∇_ξ L̃(ξ(θ_t)) dt.

Via the Sherman-Morrison-Woodbury formula, for ε > 0,

[ε I_q + Ĩ(ξ)^{1/2} B B^T Ĩ(ξ)^{1/2}]^{−1} = (1/ε) [I_q − Ĩ(ξ)^{1/2} B (ε I_p + B^T Ĩ(ξ) B)^{−1} B^T Ĩ(ξ)^{1/2}].

Denoting Ĩ(ξ)^{1/2} B B^T Ĩ(ξ)^{1/2} = U Λ U^T, we have rank(Λ) ≤ p < q. Therefore the LHS satisfies

ε [ε I_q + Ĩ(ξ)^{1/2} B B^T Ĩ(ξ)^{1/2}]^{−1} = U [I_q + Λ/ε]^{−1} U^T,   lim_{ε→0} ε [ε I_q + Ĩ(ξ)^{1/2} B B^T Ĩ(ξ)^{1/2}]^{−1} = U_⊥ U_⊥^T,

where U_⊥ corresponds to the space associated with the zero eigenvalues of Ĩ(ξ)^{1/2} B B^T Ĩ(ξ)^{1/2}. Therefore, taking ε → 0,

Ĩ(ξ)^{−1/2} U_⊥ U_⊥^T Ĩ(ξ)^{−1/2} = lim_{ε→0} ε Ĩ(ξ)^{−1/2} [ε I_q + Ĩ(ξ)^{1/2} B B^T Ĩ(ξ)^{1/2}]^{−1} Ĩ(ξ)^{−1/2}
  = lim_{ε→0} [Ĩ(ξ)^{−1} − B (ε I_p + B^T Ĩ(ξ) B)^{−1} B^T] = Ĩ(ξ)^{−1} − B (B^T Ĩ(ξ) B)^{−1} B^T,

where only the last step uses the fact that Ĩ(ξ) is invertible. Therefore

ξ(θ_{t+dt}) − ξ(θ_t) = B(θ_{t+dt} − θ_t) = −B [B^T Ĩ(ξ) B]^{−1} B^T ∇_ξ L̃(ξ) dt
  = −Ĩ(ξ)^{−1/2} (I_q − U_⊥ U_⊥^T) Ĩ(ξ)^{−1/2} ∇_ξ L̃(ξ) dt
  = {Ĩ(ξ)^{−1/2} (I_q − U_⊥ U_⊥^T) Ĩ(ξ)^{1/2}} (−Ĩ(ξ)^{−1} ∇_ξ L̃(ξ) dt)
  = M_t (ξ_{t+dt} − ξ_t).

The above claim asserts that running natural gradient in the over-parametrized setting is nearly "invariant" in the following sense: if ξ(θ_t) = ξ_t, then

ξ(θ_{t+dt}) − ξ(θ_t) = M_t (ξ_{t+dt} − ξ_t),   M_t = Ĩ(ξ_t)^{−1/2} (I_q − U_⊥ U_⊥^T) Ĩ(ξ_t)^{1/2},

and M_t has eigenvalues either 1 or 0. In the case when p = q and J_ξ(θ) has full rank, M_t = I is the identity matrix, reducing the problem to the re-parametrized case. ∎
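The Sherman-Morrison-Woodbury step and the ε → 0 limit used above can be checked numerically, with a generic tall matrix U standing in for Ĩ(ξ)^{1/2}B (illustrative sizes):

```python
import numpy as np

rng = np.random.default_rng(8)
q, p, eps = 5, 2, 0.3
U = rng.normal(size=(q, p))

# Woodbury: (eps I_q + U U^T)^{-1} = (1/eps)(I_q - U (eps I_p + U^T U)^{-1} U^T)
lhs = np.linalg.inv(eps * np.eye(q) + U @ U.T)
rhs = (np.eye(q) - U @ np.linalg.inv(eps * np.eye(p) + U.T @ U) @ U.T) / eps
assert np.allclose(lhs, rhs)

# eps -> 0: eps * (eps I_q + U U^T)^{-1} tends to the projector onto null(U^T)
P_null = np.eye(q) - U @ np.linalg.inv(U.T @ U) @ U.T
small = 1e-9
approx = small * np.linalg.inv(small * np.eye(q) + U @ U.T)
assert np.allclose(approx, P_null, atol=1e-5)
print("Woodbury identity and limiting projector verified")
```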
REFERENCES
Shun-Ichi Amari. Natural gradient works efficiently in learning. Neural computation, 10(2):
251–276, 1998.
Martin Anthony and Peter L Bartlett. Neural network learning: Theoretical foundations. Cambridge University Press, 1999.
Peter Bartlett, Dylan J Foster, and Matus Telgarsky. Spectrally-normalized margin bounds for
neural networks. arXiv preprint arXiv:1706.08498, 2017.
Martin Bauer, Martins Bruveris, and Peter W Michor. Uniqueness of the fisher–rao metric on
the space of smooth densities. Bulletin of the London Mathematical Society, 48(3):499–506,
2016.
Laurent Dinh, Razvan Pascanu, Samy Bengio, and Yoshua Bengio. Sharp minima can generalize
for deep nets. arXiv preprint arXiv:1703.04933, 2017.
Sepp Hochreiter and Jürgen Schmidhuber. Flat minima. Neural Computation, 9(1):1–42, 1997.
Vladimir Koltchinskii and Dmitry Panchenko. Empirical margin distributions and bounding the
generalization error of combined classifiers. Annals of Statistics, pages 1–50, 2002.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep
convolutional neural networks. In Advances in neural information processing systems, pages
1097–1105, 2012.
Anders Krogh and John A Hertz. A simple weight decay can improve generalization. In Advances
in neural information processing systems, pages 950–957, 1992.
James Martens and Roger Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. In International Conference on Machine Learning, pages 2408–2417,
2015.
Behnam Neyshabur, Ruslan R Salakhutdinov, and Nati Srebro. Path-sgd: Path-normalized
optimization in deep neural networks. In Advances in Neural Information Processing Systems,
pages 2422–2430, 2015a.
Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. Norm-based capacity control in neural
networks. In Conference on Learning Theory, pages 1376–1401, 2015b.
Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, and Nathan Srebro. Exploring
generalization in deep learning. arXiv preprint arXiv:1706.08947, 2017.
Shai Shalev-Shwartz, Ohad Shamir, and Shaked Shammah. Failures of deep learning. arXiv
preprint arXiv:1703.07950, 2017.
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding
deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016.
arXiv:1312.7186v4 [] 23 Jun 2016
Valid Post-Selection Inference in
High-Dimensional Approximately Sparse
Quantile Regression Models
Alexandre Belloni
The Fuqua School of Business, Duke University
and
Victor Chernozhukov
Department of Economics, Massachusetts Institute of Technology
and
Kengo Kato
Graduate School of Economics, University of Tokyo
June 24, 2016
Abstract
This work proposes new inference methods for a regression coefficient of interest in
a (heterogeneous) quantile regression model. We consider a high-dimensional model
where the number of regressors potentially exceeds the sample size but a subset
of them suffices to construct a reasonable approximation to the conditional quantile
function. The proposed methods are (explicitly or implicitly) based on orthogonal
score functions that protect against moderate model selection mistakes, which are
often inevitable in the approximately sparse model considered in the present paper.
We establish the uniform validity of the proposed confidence regions for the quantile
regression coefficient. Importantly, these methods directly apply to more than one
variable and a continuum of quantile indices. In addition, the performance of the
proposed methods is illustrated through Monte-Carlo experiments and an empirical
example, dealing with risk factors in childhood malnutrition.
Keywords: quantile regression, confidence regions post model selection, orthogonal score
functions
1 Introduction
Many applications of interest require the measurement of the distributional impact of a
policy (or treatment) on the relevant outcome variable. Quantile treatment effects have
emerged as an important concept for measuring such distributional impacts (see, e.g.,
[30, 22]). In this work we focus on the quantile treatment effect ατ of a policy/treatment variable d on an outcome of interest y in the (heteroskedastic) partially linear model:
τ -quantile(y | z, d) = dατ + gτ (z),
where gτ is the (unknown) confounding function of the other covariates z which can be
well approximated by a linear combination of p technical controls. Large p arises due to
the existence of many features (as in genomic studies and econometric applications) and/or
the use of basis expansions in non-parametric approximations. When p is comparable or
larger than the sample size n, this brings forth the need to perform model selection or
regularization.
We propose methods to construct estimates and confidence regions for the coefficient
of interest ατ based upon robust post-selection procedures. We establish the (uniform)
validity of the proposed methods in a non-parametric setting. Model selection in those
settings generically leads to a moderate mistake and traditional arguments based on perfect
model selection do not apply. Therefore, the proposed methods are developed to be robust
to such model selection mistakes. Furthermore, they are directly applicable to construction
of simultaneous confidence bands when d is multivariate and a continuum of quantile indices
is of interest.
Broadly speaking, the main obstacle to constructing confidence regions with (asymptotically) correct coverage is the estimation of the confounding function gτ, which is (generically) non-regular because of the high dimensionality. To overcome this difficulty we construct
(explicitly or implicitly) an orthogonal score function which leads to a moment condition
that is immune to first-order mistakes in the estimation of the confounding function gτ .
The construction is based on preliminary estimation of the confounding function and properly partialling out the confounding factors z from the policy/treatment variable d. The
former can be achieved via ℓ1 -penalized quantile regression [2, 19] or post-selection quantile
regression based on ℓ1 -penalized quantile regression [2]. The latter is carried out by heteroscedastic post-Lasso [35, 1] applied to a density-weighted equation. Then we propose two
estimators for ατ based on: (i) a moment condition based on an orthogonal score function;
(ii) a density-weighted quantile regression with all the variables selected in the previous
steps. The latter method is reminiscent of the “post-double selection” method proposed
in [5, 6]. Explicitly or implicitly the last step estimates ατ by minimizing a Neyman-type
score statistic [31].1
Under mild moment conditions and approximate sparsity assumptions, we establish that the estimator α̌τ, as defined by either method (see Algorithms 2.1 and 2.2 below), is root-n consistent and asymptotically normal,

(1.1)  σ_n^{−1} √n (α̌τ − ατ) ⇝ N(0, 1),

where ⇝ denotes convergence in distribution; in addition the estimator α̌τ admits a (pivotal) linear representation. Hence the confidence region defined by

(1.2)  C_{ξ,n} := {α ∈ R : |α − α̌τ| ≤ σ̂_n Φ^{−1}(1 − ξ/2)/√n}

has asymptotic coverage probability of 1 − ξ provided that the estimate σ̂²_n is consistent for σ²_n, namely, σ̂²_n/σ²_n = 1 + o_P(1). In addition, we establish that a Neyman-type score
¹ We mostly focus on selection as a means of regularization, but certainly other regularizations (e.g., the use of ℓ1-penalized fits per se) are possible, although they perform no better than the methods we focus on.
statistic Ln (α) is asymptotically distributed as the chi-squared distribution with one degree
of freedom when evaluated at the true value α = ατ , namely,
(1.3)  n L_n(ατ) ⇝ χ²(1),

which in turn allows the construction of another confidence region:

(1.4)  I_{ξ,n} := {α ∈ A_τ : n L_n(α) ≤ (1 − ξ)-quantile of χ²(1)},
which has asymptotic coverage probability of 1 − ξ. These convergence results hold under
array asymptotics, permitting the data-generating process P = Pn to change with n, which
implies that these convergence results hold uniformly over large classes of data-generating
processes. In particular, our results do not require separation of regression coefficients away
from zero (the so-called “beta-min” conditions) for their validity. Importantly, we discuss
how the procedures naturally allow for construction of simultaneous confidence bands for
many parameters based on a (pivotal) linear representation of the proposed estimator.
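As a small illustration, the interval C_{ξ,n} in (1.2) is straightforward to compute once α̌τ and σ̂_n are available (the numbers below are hypothetical):

```python
from statistics import NormalDist

def confidence_region(alpha_hat: float, sigma_hat: float, n: int, xi: float = 0.05):
    """Two-sided interval alpha_hat +/- sigma_hat * Phi^{-1}(1 - xi/2) / sqrt(n)."""
    z = NormalDist().inv_cdf(1 - xi / 2)  # standard normal quantile
    half = sigma_hat * z / n ** 0.5
    return alpha_hat - half, alpha_hat + half

# hypothetical estimates: alpha_hat = 0.5, sigma_hat = 2.0 from a sample of n = 400
lo, hi = confidence_region(0.5, 2.0, 400, xi=0.05)
print(lo, hi)  # roughly 0.304 and 0.696
```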
Several recent papers study the problem of constructing confidence regions after model
selection while allowing p ≫ n. In the case of linear mean regression, [5] proposes a
double selection inference in a parametric setting with homoscedastic Gaussian errors; [6]
studies a double selection procedure in a non-parametric setting with heteroscedastic errors;
[40] and [37] propose methods based on ℓ1 -penalized estimation combined with “one-step”
correction in parametric models. Going beyond mean regression models, [37] provides high
level conditions for the one-step estimator applied to smooth generalized linear problems,
[7] analyzes confidence regions for a parametric homoscedastic LAD regression model under
primitive conditions based on the instrumental LAD regression, and [9] provides two post-selection procedures to build confidence regions for the logistic regression. None of the aforementioned papers deals with the problem of the present paper.
Although related in spirit to our previous work [6, 7, 9], new tools and major departures from the previous works are required. The first is the need to accommodate the
non-differentiability of the loss function (which translates into discontinuity of the score
function) and the non-parametric setting. In particular, we establish new finite sample
bounds for the prediction norm on the estimation error of ℓ1 -penalized quantile regression
in nonparametric models that extend results of [2, 19]. Perhaps more importantly, the use
of post-selection methods in order to reduce bias and improve finite sample performance
requires sparsity of the estimates. Although sharp sparsity bounds for ℓ1 -penalized methods are available for smooth loss functions, those are not available for quantile regression
precisely due to the lack of differentiability. This led us to develop sparse estimates with provable guarantees via suitable truncation, while preserving the good rates of convergence despite possible additional model selection mistakes. To handle heteroscedasticity, which is a common feature in many applications, consistent estimation of the conditional density is necessary, whose analysis is new in high dimensions. Those estimates are used as weights
in the weighted Lasso estimation for the auxiliary regression ((2.7) below). In addition,
we develop new finite sample bounds for Lasso with estimated weights, as the zero-mean condition is not assumed to hold for each observation but rather for the average across all observations. Because the estimation of the conditional density function occurs at a slower rate, it affects penalty choices, rates of convergence, and the sparsity of the Lasso estimates.
This work and some of the papers cited above achieve an important uniformity guarantee with respect to the (unknown) values of the parameters. These uniform properties
translate into more reliable finite sample performance of the proposed inference procedures
because they are robust with respect to (unavoidable) model selection mistakes. There
is now substantial theoretical and empirical evidence on the potential poor finite sample
performance of inference methods that rely on perfect model selection when applied to
models without separation from zero of the coefficients (i.e., small coefficients). Most of
the criticism of these procedures is a consequence of negative results established in [27],
[29], and the references therein.
Notation. We work with triangular array data {ω_{i,n} : i = 1, …, n; n = 1, 2, 3, …}, where for each n, {ω_{i,n} : i = 1, …, n} is defined on the probability space (Ω, S, P_n). Each ω_{i,n} = (y_{i,n}, z′_{i,n}, d′_{i,n})′ is a vector, and these vectors are i.n.i.d., that is, independent across i but not necessarily identically distributed. Hence all parameters that characterize the distribution of {ω_{i,n} : i = 1, …, n} are implicitly indexed by P_n and thus by n. We omit this dependence from the notation for the sake of simplicity. We use E_n to abbreviate the notation n^{−1} ∑_{i=1}^{n}; for example, E_n[f] := E_n[f(ω_i)] := n^{−1} ∑_{i=1}^{n} f(ω_i). We also use the following notation: Ē[f] := E[E_n[f]] = E[E_n[f(ω_i)]] = n^{−1} ∑_{i=1}^{n} E[f(ω_i)]. The ℓ2-norm is denoted by ‖·‖; the ℓ0-"norm" ‖·‖_0 denotes the number of non-zero components of a vector; and the ℓ∞-norm ‖·‖_∞ denotes the maximal absolute value among the components of a vector. Given a vector δ ∈ R^p and a set of indices T ⊂ {1, …, p}, we denote by δ_T ∈ R^p the vector with δ_{Tj} = δ_j if j ∈ T and δ_{Tj} = 0 if j ∉ T.
2 Setting
For a quantile index τ ∈ (0, 1), we consider a partially linear conditional quantile model
(2.5)  y_i = d_i ατ + gτ(z_i) + ǫ_i,  τ-quantile(ǫ_i | d_i, z_i) = 0,  i = 1, …, n,
where yi is the outcome variable, di is the policy/treatment variable, and confounding
factors are represented by the variables zi which impact the equation through an unknown
function gτ . We shall use a large number p of technical controls xi = X(zi ) to achieve an
accurate approximation to the function gτ in (2.5) which takes the form:
(2.6)  gτ(z_i) = x′_i βτ + r_{τi},  i = 1, …, n,
where rτ i denotes an approximation error. We view βτ and rτ as nuisance parameters while
the main parameter of interest is ατ which describes the impact of the treatment on the
conditional quantile (i.e., quantile treatment effect).
In order to perform inference that is robust with respect to model selection mistakes, we construct a moment condition based on a score function that satisfies an additional orthogonality property, which makes it immune to first-order changes in the values of the nuisance parameters. Letting f_i = f_{ǫ_i}(0 | d_i, z_i) denote the conditional density at 0 of the disturbance term ǫ_i in (2.5), the construction of the orthogonal moment condition is based on the linear projection of the regressor of interest d_i weighted by f_i on the variables x_i weighted by f_i:

(2.7)  f_i d_i = f_i x′_i θ_{0τ} + v_i,  i = 1, …, n,  Ē[f_i x_i v_i] = 0,

where θ_{0τ} ∈ arg min_θ Ē[f²_i (d_i − x′_i θ)²]. The orthogonal score function ψ_i(α) := (1{y_i ≤ d_i α + x′_i βτ + r_{τi}} − τ) v_i leads to a moment condition to estimate ατ,

(2.8)  E[(1{y_i ≤ d_i ατ + x′_i βτ + r_{τi}} − τ) v_i] = 0,

and satisfies the following orthogonality condition with respect to first-order changes in the values of the nuisance parameters βτ and θ_{0τ}:

(2.9)  ∂_β Ē[(1{y_i ≤ d_i ατ + x′_i β + r_{τi}} − τ) f_i (d_i − x′_i θ_{0τ})] |_{β=βτ} = 0, and
       ∂_θ Ē[(1{y_i ≤ d_i ατ + x′_i βτ + r_{τi}} − τ) f_i (d_i − x′_i θ)] |_{θ=θ_{0τ}} = 0.
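A minimal Monte-Carlo sketch of this orthogonality (a deliberately simplified homoscedastic Gaussian design, so that f_i is constant and θ_{0τ} reduces to an OLS projection; this is an illustration, not the paper's estimator). Perturbing βτ moves a naive score at first order, but leaves the orthogonal score essentially unchanged:

```python
import numpy as np

rng = np.random.default_rng(6)
n, tau, t = 200_000, 0.5, 0.2
z = rng.normal(size=n)            # single control
u = rng.normal(size=n)
d = z + u                         # treatment, confounded through z
eps = rng.normal(size=n)          # tau-quantile of eps given (d, z) is 0
alpha, beta = 1.0, 1.0
y = d * alpha + z * beta + eps

# homoscedastic Gaussian errors: f_i is constant, so theta_0 is just the
# OLS projection coefficient of d on z, with projection residual v
v = d - z * (d @ z / (z @ z))

ind = (y <= d * alpha + z * (beta + t)) - tau  # score at a perturbed beta + t
m_orth = np.mean(ind * v)   # orthogonal score: insensitive to the perturbation
m_naive = np.mean(ind * d)  # naive score: moves at first order in t
assert abs(m_orth) < 0.02 and abs(m_naive) > 0.04
print(m_orth, m_naive)
```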
In order to handle the high-dimensional setting, we assume that βτ and θ_{0τ} are approximately sparse, namely, it is possible to choose sparse vectors βτ and θτ such that:

(2.10)  ‖θτ‖_0 ≤ s, ‖βτ‖_0 ≤ s, Ē[(x′_i θ_{0τ} − x′_i θτ)²] ≲ s/n, and Ē[(gτ(z_i) − x′_i βτ)²] ≲ s/n.
The latter equation requires that it is possible to choose the sparsity index s so that the
mean squared approximation error is of no larger order than the variance of the oracle
estimator for estimating the coefficients in the approximation. See [13] for a detailed
discussion of this notion of approximate sparsity.
2.1 Methods
The methodology based on the orthogonal score function (2.8) can be used to construct many different estimators that have the same first-order asymptotic properties but potentially different finite-sample behaviors. In the main part of the paper we present two such procedures in detail (additional variants are discussed in Subsection 1.1 of the Supplementary Appendix). Our procedures use ℓ1-penalized quantile regression and ℓ1-penalized weighted least squares as intermediate steps (we collect the recommended choices of the user-chosen parameters in Comment 2.1 below). The first procedure, stated in Algorithm 2.1, is based on the explicit construction of the orthogonal score function.
Algorithm 2.1 (Orthogonal score function.)
Step 1. Compute (α̂τ, β̂τ) from ℓ1-penalized quantile regression of y on d and x.
Step 2. Compute (α̃τ, β̃τ) from quantile regression of y on d and {xj : |β̂τj| > λτ/{En[x²ij]}^{1/2}}.
Step 3. Estimate the conditional density fi by f̂i via (2.15) or (2.16).
Step 4. Compute θ̃τ from the post-Lasso estimator of f̂d on f̂x.
Step 5. Construct the score function ψ̂i(α) = (τ − 1{yi ≤ di α + x′i β̃τ})f̂i(di − x′i θ̃τ).
Step 6. For Ln(α) = |En[ψ̂i(α)]|²/En[ψ̂i²(α)], set β̌τ = β̃τ and α̌τ^OS ∈ arg min α∈Aτ Ln(α).
Step 6 of Algorithm 2.1 solves the empirical analog of (2.8). We will show the validity of the confidence regions for ατ defined in (1.2) and (1.4). We note that the truncation in Step 2 applied to the solution of the penalized quantile regression (provably) induces a sparse solution with the same rate of convergence as the original estimator. Sparsity is required for the post-selection step, which we use because post-selection methods exhibit better finite-sample behavior in our simulations.
The second algorithm is based on selecting relevant variables from equations (2.5) and
(2.7), and running a weighted quantile regression.
Algorithm 2.2 (Weighted Double Selection.)
Step 1. Compute (α̂τ, β̂τ) from ℓ1-penalized quantile regression of y on d and x.
Step 2. Estimate the conditional density fi by f̂i via (2.15) or (2.16).
Step 3. Compute θ̂τ from the Lasso estimator of f̂d on f̂x.
Step 4. Compute (α̌τ^DS, β̌τ) from quantile regression of f̂y on f̂d and {f̂xj : j ∈ supp(θ̂τ)} ∪ {f̂xj : |β̂τj| > λτ/{En[x²ij]}^{1/2}}.
Although the orthogonal score function is not explicitly constructed in Algorithm 2.2,
inspection of the proof reveals that an orthogonal score function is constructed implicitly
via the optimality conditions of the weighted quantile regression in Step 4.
Comment 2.1 (Choices of User-Chosen Parameters) For γ = 0.05/n, we set the penalty levels for the heteroscedastic Lasso and the ℓ1-penalized quantile regression as
λ := 1.1·2√n Φ⁻¹(1 − γ/2p)  and  λτ := 1.1√(nτ(1 − τ)) Φ⁻¹(1 − γ/2p).  (2.11)
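For concreteness, the penalty levels in (2.11) are easy to evaluate numerically; the sample sizes below are hypothetical, chosen only to show the scale of the resulting penalties.

```python
import numpy as np
from scipy.stats import norm

# Numerical illustration of (2.11); n and p are hypothetical.
n, p, tau = 500, 1000, 0.5
gamma = 0.05 / n
q = norm.ppf(1 - gamma / (2 * p))                 # normal quantile term
lam = 1.1 * 2 * np.sqrt(n) * q                    # heteroscedastic Lasso
lam_tau = 1.1 * np.sqrt(n * tau * (1 - tau)) * q  # l1-penalized quantile reg.
```

Both penalties grow like √n times a slowly varying log(p/γ) factor through the normal quantile, which is what keeps the regularization bias small relative to the noise.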
The penalty loading Γ̂τ = diag[Γ̂τjj, j = 1, . . . , p] is a diagonal matrix defined by the following procedure: (1) Compute the post-Lasso estimator θ̃τ⁰ based on λ and initial values Γ̂τjj = max1≤i≤n ‖f̂i xi‖∞ {En[f̂i² di²]}^{1/2}. (2) Compute the residuals v̂i = f̂i(di − x′i θ̃τ⁰) and update
Γ̂τjj = √(En[f̂i² x²ij v̂i²]),  j = 1, . . . , p.  (2.12)
In Algorithm 2.1 we have used the following parameter space for α:
Aτ = {α ∈ R : |α − α̃τ| ≤ 10{En[di²]}^{-1/2}/log n}.  (2.13)
Comment 2.2 (Estimating Standard Errors) There are different possible choices of estimators for σn:
σ̂²1n := τ(1 − τ){En[ṽi²]}⁻¹,  σ̂²2n := τ(1 − τ)[{En[f̂i²(di, x′iŤ)′(di, x′iŤ)]}⁻¹]11 (the (1,1) element),
σ̂²3n := {En[f̂i di ṽi]}⁻² En[(1{yi ≤ di α̌τ + x′i β̌τ} − τ)² ṽi²],  (2.14)
where Ť = supp(β̌τ) ∪ supp(θ̂τ) is the set of controls used in the double selection quantile regression. Although all three estimators are consistent under similar regularity conditions, their finite-sample behaviors might differ. Based on the small-sample performance in computational experiments, we recommend the use of σ̂3n for the orthogonal score estimator and σ̂2n for the double selection estimator.
2.2 Estimation of Conditional Density Function
The implementation of the algorithms in Section 2.1 requires an estimate of the conditional density function fi, which is typically unknown under heteroscedasticity. Following [22], we shall use the observation that 1/fi = ∂Q(τ | di, zi)/∂τ to estimate fi, where Q(· | di, zi) denotes the conditional quantile function of the outcome. Let Q̂(u | zi, di) denote an estimate of the conditional u-quantile function Q(u | zi, di), based on either ℓ1-penalized quantile regression or an associated post-selection method, and let h = hn → 0 denote a bandwidth parameter. Then an estimator of fi can be constructed as
f̂i = 2h / {Q̂(τ + h | zi, di) − Q̂(τ − h | zi, di)}.  (2.15)
When the conditional quantile function is three times continuously differentiable, this estimator, which is based on the first-order central difference of the estimated conditional quantile function, has a bias of order h². Under additional smoothness assumptions, an estimator that has a bias of order h⁴ is given by
f̂i = h / [ (2/3){Q̂(τ + h | zi, di) − Q̂(τ − h | zi, di)} − (1/12){Q̂(τ + 2h | zi, di) − Q̂(τ − 2h | zi, di)} ].  (2.16)
Comment 2.3 (Implementation of the estimates f̂i) There are several possible choices of tuning parameters to construct the estimates f̂i. In particular, the bandwidth choices set in the R package 'quantreg' from [23] exhibit good empirical behavior. In our theoretical analysis we coordinate the bandwidth choice with the choice of the penalty level of the density-weighted Lasso. In Subsection 1.4 of the Supplementary Appendix we discuss in more detail the requirements associated with different choices for the penalty level λ and bandwidth h. Together with the recommendations made in Comment 2.1, we suggest constructing f̂i as in (2.15) with bandwidth h := min{n^{-1/6}, τ(1 − τ)/2}.
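The two difference quotients can be checked numerically. In the sketch below the true standard normal quantile function stands in for the estimated Q̂ (an assumption made purely to compare the construction against a known density); with the recommended bandwidth, the higher-order quotient is visibly closer to the true density at the median.

```python
import numpy as np
from scipy.stats import norm

# Check of (2.15)-(2.16) with the exact N(0,1) quantile function Q in
# place of the estimated Q-hat (illustrative assumption).
tau, n = 0.5, 1000
h = min(n ** (-1 / 6), tau * (1 - tau) / 2)   # recommended bandwidth
Q = norm.ppf

f2 = 2 * h / (Q(tau + h) - Q(tau - h))                      # (2.15): bias O(h^2)
f4 = h / ((2 / 3) * (Q(tau + h) - Q(tau - h))
          - (1 / 12) * (Q(tau + 2 * h) - Q(tau - 2 * h)))   # (2.16): bias O(h^4)

f_true = norm.pdf(Q(tau))   # density at the tau-quantile, 1/sqrt(2*pi) here
```

With h = 0.125 this gives an error of roughly 7e-3 for the first quotient and about 1e-3 for the second, consistent with the stated bias orders.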
3 Theoretical Analysis
3.1 Regularity Conditions
In this section we provide regularity conditions that are sufficient for validity of the main
estimation and inference results. In what follows, let c, C, and q be given (fixed) constants
with c > 0, C > 1 and q > 4, and let ℓn ↑ ∞, δn ↓ 0, and ∆n ↓ 0 be given sequences of
positive constants. We assume that the following condition holds for the data generating
process P = Pn for each n.
Condition AS(P). (i) Let {(yi, di, xi = X(zi)) : i = 1, . . . , n} be independent random vectors that obey the model described in (2.5) and (2.7) with ‖θ0τ‖ + ‖βτ‖ + |ατ| ≤ C. (ii) There exist s ≥ 1 and vectors βτ and θτ such that x′i θ0τ = x′i θτ + rθτi, ‖θτ‖0 ≤ s, Ē[r²θτi] ≤ Cs/n, ‖θ0τ − θτ‖1 ≤ s√(log(pn)/n), and gτ(zi) = x′i βτ + rτi, ‖βτ‖0 ≤ s, Ē[r²τi] ≤ Cs/n. (iii) The conditional distribution function of ǫi is absolutely continuous with continuously differentiable density fǫi|di,zi(· | di, zi) such that 0 < f ≤ fi ≤ supt fǫi|di,zi(t | di, zi) ≤ f̄ ≤ C and supt |f′ǫi|di,zi(t | di, zi)| ≤ f̄′ ≤ C.
Condition AS(i) imposes the setting discussed in Section 2 in which the error term ǫi has
zero conditional τ -quantile. The approximate sparsity on the high-dimensional parameters
is stated in Condition AS(ii). Condition AS(iii) is a standard assumption on the conditional
density function in the quantile regression literature (see [22]) and the instrumental quantile
regression literature (see [15]). Next we summarize the moment conditions we impose.
Condition M(P). (i) We have Ē[{(di, x′i)ξ}²] ≥ c‖ξ‖² and Ē[{(di, x′i)ξ}⁴] ≤ C‖ξ‖⁴ for all ξ ∈ R^{p+1}, and c ≤ min1≤j≤p Ē[|fi xij vi − E[fi xij vi]|²]^{1/2} ≤ max1≤j≤p Ē[|fi xij vi|³]^{1/3} ≤ C. (ii) The approximation error satisfies |Ē[fi vi rτi]| ≤ δn n^{-1/2} and Ē[(x′i ξ)² r²τi] ≤ C‖ξ‖² Ē[r²τi] for all ξ ∈ R^p. (iii) Suppose that Kq = E[max1≤i≤n ‖(di, vi, x′i)′‖∞^q]^{1/q} is finite and satisfies (Kq² s² + s³) log³(pn) ≤ nδn and Kq⁴ s log(pn) log³ n ≤ δn n.
Condition M(i) imposes moment conditions on the variables. Condition M(ii) imposes requirements on the approximation error. Condition M(iii) imposes growth conditions on s, p, and n. In particular, these conditions imply that the population eigenvalues of the design matrix are bounded away from zero and from above. They ensure that sparse eigenvalues and restricted eigenvalues are well behaved, which is used in the analysis of the penalized estimators, and that the sparsity properties needed for the post-selection estimators hold.
Comment 3.1 (Handling Approximately Sparse Models) To handle approximately sparse models representing gτ in (2.6), we assume a near orthogonality between rτ and f v, namely Ē[fi vi rτi] = o(n^{-1/2}). This condition is automatically satisfied if the orthogonality condition in (2.7) can be strengthened to E[fi vi | zi] = 0. However, it can also be satisfied under weaker conditions, as discussed in Subsection 1.2 of the Supplementary Appendix.
Our last set of conditions pertains to the estimation of the conditional density function (fi)ⁿi=1, which has a non-trivial impact on the analysis. We denote by U the finite set of quantile indices used in the estimation of the conditional density. Under mild regularity conditions the estimators (2.15) and (2.16) achieve
f̂i − fi = O( h^k̄ + (1/h) Σu∈U { |Q̂(τ + u | di, zi) − Q(τ + u | di, zi)| + |Q̂(τ − u | di, zi) − Q(τ − u | di, zi)| } ),  (3.17)
where k̄ = 2 for (2.15) and k̄ = 4 for (2.16). Condition D summarizes sufficient conditions to account for the impact of density estimation via (post-selection) ℓ1-penalized quantile regression estimators.
Condition D. (i) For u ∈ U, assume that u-quantile(yi | zi, di) = di αu + x′i βu + rui, fui = fyi|di,zi(di αu + x′i βu + rui | zi, di) ≥ c, where Ē[r²ui] ≤ δn n^{-1/2} and |rui| ≤ δn h for all i = 1, . . . , n, and the vector βu satisfies ‖βu‖0 ≤ s. (ii) For s̃θτ = s + ns log(n∨p)/(h²λ²) + nh^{2k̄}/λ, suppose h^k̄ √(s̃θτ log(pn)) ≤ δn, h⁻² Kq² s log(pn) ≤ δn n, λKq² s ≤ δn n, h⁻² √(s s̃θτ) log(pn) ≤ δn n, λ√(s s̃θτ) log(pn) ≤ δn n, and Kq² s̃θτ log²(pn) log³(n) ≤ δn n.
Condition D(i) imposes the approximate sparsity assumption on the u-conditional quantile function for quantile indices u in a neighborhood of the quantile index τ. Condition D(ii) provides growth conditions relating s, p, n, h and λ. Subsection 1.4 in the Supplementary Appendix discusses specific choices of the penalty level λ and of the bandwidth h together with the implied conditions on the triple (s, p, n). In particular, they imply that sparse eigenvalues of order s̃θτ are well behaved.
3.2 Main Results
In this section we state our theoretical results. We establish the first-order equivalence of the proposed estimators. We construct the estimators as defined in Algorithms 2.1 and 2.2 with parameters λτ as in (2.11), Γ̂τ as in (2.12), and Aτ as in (2.13). The choices of λ and h satisfy Condition D.
Theorem 1 Let {Pn} be a sequence of data-generating processes. Assume that Conditions AS(P), M(P) and D(P) are satisfied with P = Pn for each n. Then the orthogonal score estimator α̌τ^OS based on Algorithm 2.1 and the double selection estimator α̌τ^DS based on Algorithm 2.2 are first-order equivalent, √n(α̌τ^OS − α̌τ^DS) = oP(1). Moreover, either estimator satisfies
σn⁻¹ √n(α̌τ − ατ) = Un(τ) + oP(1)  and  Un(τ) ⇝ N(0, 1),
where σn² = τ(1 − τ){Ē[vi²]}⁻¹, Un(τ) := {τ(1 − τ)Ē[vi²]}^{-1/2} n^{-1/2} Σⁿi=1 (τ − 1{Ui ≤ τ})vi, and U1, . . . , Un are i.i.d. uniform random variables on (0, 1) independent from v1, . . . , vn. Furthermore,
nLn(ατ) = U²n(τ) + oP(1)  and  U²n(τ) ⇝ χ²(1).
The result continues to apply if σn² is replaced by any of the estimators in (2.14), namely, σ̂kn/σn = 1 + oP(1) for k = 1, 2, 3.
The asymptotically correct coverage of the confidence regions Cξ,n and Iξ,n as defined in (1.2) and (1.4) follows immediately. Theorem 1 relies on post-model-selection estimators, which in turn rely on achieving sparse estimates β̂τ and θ̂τ. The sparsity of θ̂τ is derived in Section 2.2 of the Supplementary Appendix under the recommended penalty choices. The sparsity of β̂τ is not guaranteed under the recommended choice of the penalty level λτ, which leads to sharp rates. We bypass that by truncating small components to zero (as in Step 2 of Algorithm 2.1), which (provably) preserves the same rate of convergence and ensures sparsity.
In addition to the asymptotic normality, Theorem 1 establishes that the rescaled estimation error σn⁻¹√n(α̌τ − ατ) is approximately equal to the process Un(τ), which is pivotal conditional on v1, . . . , vn. Such a property is very useful since it is easy to simulate Un(τ) conditional on v1, . . . , vn. Thus this representation provides us with another procedure to construct confidence intervals without relying on asymptotic normality, which is useful for the construction of simultaneous confidence bands; see Section 3.3.
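The simulation just described is a few lines of code: conditional on the residuals, one draws fresh uniforms and recomputes Un(τ). In the sketch below the residuals vi are hypothetical stand-ins for the estimated ones.

```python
import numpy as np

# Simulating the pivotal statistic U_n(tau) conditional on v_1,...,v_n to
# obtain a critical value without the normal approximation.
rng = np.random.default_rng(2)
tau, n, B = 0.5, 500, 2000
v = rng.standard_normal(n)        # stand-in for the residuals v_i

scale = (tau * (1 - tau) * np.mean(v ** 2)) ** -0.5
U = rng.uniform(size=(B, n))                         # i.i.d. uniforms U_i
Un = scale * ((tau - (U <= tau)) @ v) / np.sqrt(n)   # B draws of U_n(tau)
crit = np.quantile(np.abs(Un), 0.95)   # simulated two-sided critical value
```

By construction each draw of Un(τ) has conditional mean zero and variance one, so the simulated critical value is close to the normal value 1.96 here, but the procedure itself never invokes the normal approximation.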
Importantly, the results in Theorem 1 allow for the data generating process to depend on
the sample size n and have no requirements on the separation from zero of the coefficients.
In particular these results allow for sequences of data generating processes for which perfect
model selection is not possible. In turn this translates into uniformity properties over a
large class of data generating processes. Next we formalize these uniform properties. We let
Pn denote the collection of distributions P for the data {(yi , di , zi′ )′ }ni=1 such that Conditions
AS, M and D are satisfied for given n. This is the collection of all approximately sparse
models where the above sparsity conditions, moment conditions, and growth conditions are
satisfied. Note that the uniformity results for the approximately sparse and heteroscedastic
case are new even under fixed p asymptotics.
Corollary 1 (Uniform Validity of Confidence Regions) Let Pn be the collection of all distributions of {(yi, di, zi′)′}ⁿi=1 for which Conditions AS, M, and D are satisfied for given n ≥ 1. Then the confidence regions Cξ,n and Iξ,n defined based on either the orthogonal score estimator or the double selection estimator are asymptotically uniformly valid:
limn→∞ supP∈Pn |P(ατ ∈ Cξ,n) − (1 − ξ)| = 0  and  limn→∞ supP∈Pn |P(ατ ∈ Iξ,n) − (1 − ξ)| = 0.
3.3 Simultaneous Inference over τ and Many Coefficients
In some applications we are interested in building confidence intervals that are simultaneously valid for many coefficients as well as for a range of quantile indices τ ∈ T ⊂ (0, 1), a fixed compact set. The proposed methods directly extend to the case of d ∈ R^K and τ ∈ T,
τ-quantile(y | z, d) = Σ^K j=1 dj ατj + g̃τ(z).
Indeed, for each τ ∈ T and each k = 1, . . . , K, estimates can be obtained by applying the methods to the model (2.5) as
τ-quantile(y | z, d) = dk ατk + gτ(z),  where gτ(z) := g̃τ(z) + Σj≠k dj ατj.
For each τ ∈ T , Step 1 and the conditional density function fi , i = 1, . . . , n, are the same for
all k = 1, . . . , K. However, Steps 2 and 3 adapt to each quantile index and each coefficient
of interest. The uniform validity of ℓ1 -penalized methods for a continuum of problems
(indexed by T in our case) has been established for quantile regression in [2] and for least
squares in [10]. The conclusions of Theorem 1 are uniformly valid over k = 1, . . . , K and
τ ∈ T (in the ℓ∞ -norm).
Simultaneous confidence bands are constructed by defining the following critical value:
c*(1 − ξ) = inf{ t : P( supτ∈T, k=1,...,K |Un(τ, k)| ≤ t | {di, zi}ⁿi=1 ) ≥ 1 − ξ },
where the random variable Un(τ, k) is pivotal conditional on the data, namely,
Un(τ, k) := {τ(1 − τ)Ē[v²τki]}^{-1/2} n^{-1/2} Σⁿi=1 (τ − 1{Ui ≤ τ})vτki,
where Ui are i.i.d. uniform random variables on (0, 1) independent from {di, zi}ⁿi=1, and vτki is the error term in the decomposition (2.7) for the pair (τ, k). Therefore c*(1 − ξ) can be estimated since estimates of vτki and σnτk, τ ∈ T and k = 1, . . . , K, are available. Uniform confidence bands can be defined as
[α̌τk − σnτk c*(1 − ξ)/√n,  α̌τk + σnτk c*(1 − ξ)/√n]  for τ ∈ T, k = 1, . . . , K.
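The critical value c*(1 − ξ) can itself be simulated. In the sketch below the residual vectors vτki are hypothetical stand-ins, T is replaced by a finite grid of quantile indices, and, as in the definition above, the same uniforms Ui are shared across all pairs (τ, k), which is what induces the correct joint dependence of the statistics.

```python
import numpy as np

# Simulated simultaneous critical value c*(1 - xi) over a grid of
# (tau, k) pairs; v_{tau k i} are hypothetical stand-in residuals.
rng = np.random.default_rng(3)
n, B, xi, K = 500, 2000, 0.05, 2
taus = [0.25, 0.5, 0.75]          # finite grid standing in for T
v = rng.standard_normal((len(taus), K, n))

U = rng.uniform(size=(B, n))      # one set of uniforms, shared by all pairs
sups = np.zeros(B)
for t, tau in enumerate(taus):
    w = tau - (U <= tau)          # B x n weights, common across k
    for k in range(K):
        scale = (tau * (1 - tau) * np.mean(v[t, k] ** 2)) ** -0.5
        sups = np.maximum(sups, np.abs(scale * (w @ v[t, k]) / np.sqrt(n)))

c_star = np.quantile(sups, 1 - xi)   # simultaneous critical value
```

The simultaneous critical value exceeds the pointwise normal value 1.96, reflecting the multiplicity over the six (τ, k) pairs in this toy grid.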
4 Empirical Performance
4.1 Monte-Carlo Experiments
Next we provide a simulation study to assess the finite-sample performance of the proposed estimators and confidence regions. We focus our discussion on the double selection estimator defined in Algorithm 2.2, which exhibits the better performance. We consider the median regression case (τ = 1/2) under the following data generating process:
y = dατ + x′(cy ν0) + ǫ,  ǫ ~ N(0, {2 − µ + µd²}/2),  (4.18)
d = x′(cd ν0) + ṽ,  ṽ ~ N(0, 1),  (4.19)
where ατ = 1/2, ν0j = 1/j², j = 1, . . . , p, x = (1, z′)′ consists of an intercept and covariates z ~ N(0, Σ), and the errors ǫ and ṽ are independent. The dimension p of the covariates x is 300, and the sample size n is 250. The regressors are correlated, with Σij = ρ^|i−j| and ρ = 0.5. In this case, fi = 1/√(π(2 − µ + µd²)), so that the coefficient µ ∈ {0, 1} makes the conditional density function of ǫ homoscedastic if µ = 0 and heteroscedastic if µ = 1. The coefficients cy and cd are used to control the R² in the equations y − dατ = x′(cy ν0) + ǫ and d = x′(cd ν0) + ṽ; we denote the values of R² in each equation by R²y and R²d. We consider values (R²y, R²d) in the set {0, .1, .2, . . . , .9} × {0, .1, .2, . . . , .9}. Therefore we have 100 different designs and perform 500 Monte-Carlo repetitions for each design. For each repetition we draw new vectors xi and errors ǫi and ṽi.
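One draw of this design can be generated as follows; the values of cy and cd below are illustrative rather than the ones calibrated to a target R² pair.

```python
import numpy as np

# One draw of the Monte-Carlo design (4.18)-(4.19), heteroscedastic case.
rng = np.random.default_rng(4)
n, p, rho, mu, alpha_tau = 250, 300, 0.5, 1, 0.5
idx = np.arange(p - 1)
Sigma = rho ** np.abs(idx[:, None] - idx[None, :])   # Sigma_ij = rho^|i-j|
z = rng.multivariate_normal(np.zeros(p - 1), Sigma, size=n)
x = np.column_stack([np.ones(n), z])                 # x = (1, z')'
nu0 = 1.0 / np.arange(1, p + 1) ** 2
c_y, c_d = 0.5, 0.5                                  # illustrative values
d = x @ (c_d * nu0) + rng.standard_normal(n)                    # (4.19)
eps = rng.standard_normal(n) * np.sqrt((2 - mu + mu * d ** 2) / 2)
y = d * alpha_tau + x @ (c_y * nu0) + eps                       # (4.18)
```

With µ = 1 the error variance (2 − µ + µd²)/2 grows with d², producing the heteroscedastic design; setting µ = 0 recovers the homoscedastic case.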
We perform the estimation of the fi via (2.15) even in the homoscedastic case (µ = 0), since we do not want to rely on whether the assumption of homoscedasticity is valid or not. We use σ̂2n as the standard error estimate for the post-double-selection estimator based on Algorithm 2.2. As a benchmark we consider the standard (naive) post-selection procedure that applies ℓ1-penalized median regression of y on d and x to select a subset of covariates that have predictive power for y, and then runs median regression of y on d and the selected covariates, omitting the covariates that were not selected. We report the rejection frequency of the confidence intervals with the nominal coverage probability of 95%. Ideally we should see a rejection rate of 5%, the nominal level, regardless of the underlying data generating process P ∈ Pn. This is the so-called uniformity or honesty property of the confidence regions (see, e.g., [33], [32], and [28]).
In the homoscedastic case, reported in the left column of Figure 1, the first row shows the empirical rejection probabilities for the naive post-selection procedure. These empirical rejection probabilities deviate strongly from the nominal level of 5%, demonstrating the striking lack of robustness of this standard method. This is perhaps expected, since the Monte-Carlo design has regression coefficients not well separated from zero (that is, the "beta min" condition does not hold here). In sharp contrast, we see that the proposed procedure performs substantially better, yielding empirical rejection probabilities close to the desired nominal level of 5%. In the right column of Figure 1 we report the results for the heteroscedastic case (µ = 1). Here too we see the striking lack of robustness of the naive post-selection procedure. We also see that the confidence region based on the post-double-selection method significantly outperforms the standard method, yielding empirical rejection probabilities close to the nominal level of 5%.
4.2 Inference on Risk Factors in Childhood Malnutrition
The purpose of this section is to examine the practical usefulness of the new methods and contrast them with standard post-selection inference (which assumes perfect selection).
We will assess the statistical significance of socio-economic and biological factors on children's malnutrition, providing a methodological follow-up to the previous studies by [17] and [21]. The measure of malnutrition is represented by the child's height, which will
Figure 1: The left column is the homoscedastic design (µ = 0), and the right column is the heteroscedastic design (µ = 1). The figure displays the rejection probabilities, as functions of (R²y, R²d), of the following confidence regions with nominal coverage of 95%: (a) the naive post-selection procedure (first row), and (b) C0.05,n based on the post-double-selection estimator (second row). The ideal rejection probability is 5%, so ideally we should see a flat surface at height 5%.
be our response variable y. The socio-economic and biological factors will be our regressors
x, which we shall describe in more detail below. We shall estimate the conditional first
decile function of the child’s height given the factors (that is, we set τ = .1). We would
like to perform inference on the size of the impact of the various factors on the conditional
decile of the child’s height. The problem has material significance, so it is important to
conduct statistical inference for this problem responsibly.
The data come originally from the Demographic and Health Surveys (DHS) conducted regularly in more than 75 countries; we employ the same selected sample of 37,649 as in Koenker (2012). All children in the sample are between the ages of 0 and 5. The response variable y is the child's height in centimeters. The regressors x include the child's age, breast feeding in months, mother's body-mass index (BMI), mother's age, mother's education, father's education, the number of living children in the family, and a large number of categorical variables, with each category coded as binary (zero or one): child's gender (male or female), twin status (single or twin), the birth order (first, second, third, fourth, or fifth), the mother's employment status (employed or unemployed), mother's religion (Hindu, Muslim, Christian, Sikh, or other), mother's residence (urban or rural), family's wealth (poorest, poorer, middle, richer, richest), electricity (yes or no), radio (yes or no), television (yes or no), bicycle (yes or no), motorcycle (yes or no), and car (yes or no).
Although the number of covariates (p = 30) is substantial, the sample size (n = 37,649) is much larger than the number of covariates. Therefore, the dataset is very interesting from a methodological point of view, since it gives us an opportunity to compare various methods for performing inference to an "ideal" benchmark of standard inference based on the standard quantile regression estimator without any model selection. This benchmark was justified theoretically in [18] and in [4] under the p → ∞, p³/n → 0 regime. This is also the general option recommended by [22] and [27] in the fixed-p regime. Note that this "ideal" option does not apply in practice when p is relatively large; however, it certainly applies in the present example.
We will compare the "ideal" option with two procedures. The first is the standard post-selection inference method, which performs standard inference on the post-model-selection estimator, "assuming" that the model selection has worked perfectly. The second is the double selection estimator defined in Algorithm 2.2. (The orthogonal score estimator performs similarly, so it is omitted due to space constraints.) The proposed methods do not assume perfect selection, but rather build in protection against (moderate) model selection mistakes.
We now compare our proposal to the "ideal" benchmark and to the standard post-selection method. We report the empirical results in Table 1. The first column reports results for the ideal option: the estimates, with standard errors enclosed in brackets. The second column reports results for the standard post-selection method, specifically the point estimates resulting from the post-penalized quantile regression, reporting the standard errors as if there had been no model selection. The last column reports the results for the double selection estimator (point estimate and standard error). Note that Algorithm 2.2 is applied sequentially to each of the variables. Similarly, in order to provide estimates and confidence intervals for all variables using the naive approach, if a covariate was not selected by the ℓ1-penalized quantile regression, it was included in the post-model-selection quantile regression for that variable.
What we see is very interesting. First of all, let us compare the "ideal" option (column 1) and the naive post-selection (column 2). The Lasso selection method removes 16 out of 30 variables, many of which are highly significant, as judged by the "ideal" option. (To judge significance we use normal approximations and a critical value of 3, which allows us to maintain a 5% significance level after testing up to 50 hypotheses.) In particular, we see that the following highly significant variables were dropped by Lasso: mother's BMI, mother's age, twin status, birth orders one and two, and the indicator of the other religion. The standard post-model-selection inference then makes the assumption that these are true zeros, which leads to misleading conclusions about these effects. The standard
Table 1: Empirical Results

Variable             Quantile regression   Naive post selection   Double Selection α̌τ
cage                 0.6456 (0.0030)       0.6458 (0.0027)        0.6449 (0.0032)
mbmi                 0.0603 (0.0159)       0.0663 (0.0139)        0.0582 (0.0173)
breastfeeding        0.0691 (0.0036)       0.0689 (0.0038)        0.0700 (0.0044)
mage                 0.0684 (0.0090)       0.0454 (0.0147)        0.0685 (0.0126)
medu                 0.1590 (0.0136)       0.1870 (0.0145)        0.1566 (0.0154)
edupartner           0.0175 (0.0125)       0.0460 (0.0148)        0.0348 (0.0143)
deadchildren         -0.0680 (0.1124)      -0.2121 (0.0978)       -0.1546 (0.1121)
csexfemale           -1.4625 (0.0948)      -1.5084 (0.0897)       -1.5299 (0.1019)
ctwintwin            -1.7259 (0.3741)      -1.8683 (0.2295)       -1.9248 (0.7375)
cbirthorder2         -0.7256 (0.1073)      -0.2230 (0.0983)       -0.6818 (0.1337)
cbirthorder3         -1.2367 (0.1315)      -0.5751 (0.1423)       -1.1326 (0.1719)
cbirthorder4         -1.7455 (0.2244)      -0.7910 (0.1938)       -1.5819 (0.2193)
cbirthorder5         -2.4014 (0.1639)      -1.1747 (0.1686)       -2.3041 (0.2564)
munemployed          0.0409 (0.1025)       0.0077 (0.1077)        0.0379 (0.1124)
motorcycleyes        0.6104 (0.1783)       0.5883 (0.1334)        0.5154 (0.1625)
mreligionhindu       -0.4351 (0.2232)      -0.2423 (0.1080)       -0.5680 (0.1771)
mreligionmuslim      -0.3736 (0.2417)      0.0294 (0.1438)        -0.5119 (0.2176)
mreligionother       -1.1448 (0.3296)      -0.6977 (0.3219)       -1.1539 (0.3577)
mreligionsikh        -0.5575 (0.2969)      0.3692 (0.1897)        -0.3408 (0.3889)
mresidencerural      0.1545 (0.0994)       0.1085 (0.1363)        0.1678 (0.1311)
wealthpoorer         0.2732 (0.1761)       -0.1946 (0.1231)       0.2648 (0.1877)
wealthmiddle         0.8699 (0.1719)       0.9197 (0.2236)        0.9173 (0.2158)
wealthricher         1.3254 (0.2244)       0.5754 (0.1408)        1.4040 (0.2505)
wealthrichest        2.0238 (0.2596)       1.2967 (0.2263)        2.1133 (0.3318)
electricityyes       0.3866 (0.1581)       0.7555 (0.1398)        0.4582 (0.1577)
radioyes             -0.0385 (0.1218)      0.1363 (0.1214)        0.0640 (0.1207)
televisionyes        -0.1633 (0.1191)      -0.0774 (0.1234)       -0.0880 (0.1386)
refrigeratoryes      0.1544 (0.1774)       0.2451 (0.2081)        0.2001 (0.1891)
bicycleyes           0.1438 (0.1048)       0.1314 (0.1016)        0.1438 (0.1121)
caryes               0.2741 (0.2058)       0.5805 (0.2378)        0.5470 (0.2896)
post-selection inference then proceeds to judge the significance of the other variables, in some cases deviating sharply and significantly from the "ideal" benchmark. For example, there is sharp disagreement on the magnitudes of the impact of the birth order variables and the wealth variables (for the "richer" and "richest" categories). Overall, for the naive post-selection, 8 out of 30 coefficients were more than 3 standard errors away from the coefficients of the "ideal" option.
We now proceed to comparing our proposed options to the "ideal" option. We see approximate agreement in terms of magnitudes, signs of coefficients, and standard errors. In a few instances, for example for the car ownership regressor, the disagreements in magnitude may appear large, but they become insignificant once we account for the standard errors.
The main conclusion from our study is that the standard/naive post-selection inference can give misleading results, confirming our expectations and the predictions of [27]. Moreover, the proposed inference procedure is able to deliver inference of high quality, which is very much in agreement with the "ideal" benchmark.
SUPPLEMENTARY MATERIAL
Supplementary Material. The supplemental appendix contains the proofs, additional
discussions (variants, approximately sparse assumption) and technical results. (pdf)
References
[1] A. Belloni, D. Chen, V. Chernozhukov, and C. Hansen. Sparse models and methods for optimal instruments with an application to eminent domain. Econometrica,
80(6):2369–2430, November 2012.
[2] A. Belloni and V. Chernozhukov. ℓ1 -penalized quantile regression for high dimensional
sparse models. Ann. Statist., 39(1):82–130, 2011.
[3] A. Belloni and V. Chernozhukov. Least squares after model selection in high-dimensional sparse models. Bernoulli, 19(2):521–547, 2013.
[4] A. Belloni, V. Chernozhukov, and I. Fernandez-Val. Conditional quantile processes
based on series or many regressors. arXiv:1105.6154, may 2011.
[5] A. Belloni, V. Chernozhukov, and C. Hansen. Inference for high-dimensional sparse
econometric models. Advances in Economics and Econometrics: The 2010 World
Congress of the Econometric Society, 3:245–295, 2013.
[6] A. Belloni, V. Chernozhukov, and C. Hansen. Inference on treatment effects after
selection amongst high-dimensional controls. Rev. Econ. Stud., 81:608–650, 2014.
[7] A. Belloni, V. Chernozhukov, and K. Kato. Uniform post model selection inference
for LAD regression models. accepted at Biometrika, 2014.
[8] A. Belloni, V. Chernozhukov, and L. Wang. Square-root-lasso: Pivotal recovery of
sparse signals via conic programming. Biometrika, 98(4):791–806, 2011.
[9] A. Belloni, V. Chernozhukov, and Y. Wei. Honest confidence regions for logistic
regression with a large number of controls. ArXiv:1304.3969, 2013.
[10] Alexandre Belloni, Victor Chernozhukov, Iván Fernández-Val, and Chris Hansen. Program evaluation with high-dimensional data. arXiv preprint arXiv:1311.2645, 2013.
[11] Alexandre Belloni, Victor Chernozhukov, and Lie Wang. Pivotal estimation via squareroot lasso in nonparametric regression. The Annals of Statistics, 42(2):757–788, 2014.
[12] P. J. Bickel, Y. Ritov, and A. B. Tsybakov. Simultaneous analysis of lasso and Dantzig
selector. Ann. Statist., 37(4):1705–1732, 2009.
[13] X. Chen. Large sample sieve estimation of semi-nonparametric models. Handbook of Econometrics, 6:5559–5632, 2007.
[14] Victor Chernozhukov, Denis Chetverikov, and Kengo Kato. Gaussian approximation
of suprema of empirical processes. arXiv preprint arXiv:1212.6885, 2012.
[15] Victor Chernozhukov and Christian Hansen. Instrumental variable quantile regression:
A robust inference approach. J. Econometrics, 142:379–398, 2008.
[16] Victor H. de la Peña, Tze Leung Lai, and Qi-Man Shao. Self-normalized Processes:
Limit Theory and Statistical Applications. Springer, New York, 2009.
[17] N. Fenske, T. Kneib, and T. Hothorn. Identifying risk factors for severe childhood malnutrition by boosting additive quantile regression. Journal of the American Statistical Association, 106:494–510, 2011.
[18] Xuming He and Qi-Man Shao. On parameters of increasing dimensions. J. Multivariate
Anal., 73(1):120–135, 2000.
[19] K. Kato. Group Lasso for high dimensional sparse quantile regression models. arXiv:1103.1458, 2011.
[20] K. Knight. Limiting distributions for L1 regression estimators under general conditions. The Annals of Statistics, 26:755–770, 1998.
[21] R. Koenker. Additive models for quantile regression: Model selection and confidence
bandaids. Brazilian Journal of Probability and Statistics, 25(3):239–262, 2011.
[22] Roger Koenker. Quantile Regression. Cambridge University Press, Cambridge, 2005.
[23] Roger Koenker. quantreg: Quantile regression. R package version 5.24, 2016. Available at: http://CRAN.R-project.org/package=quantreg.
[24] Michael R. Kosorok. Introduction to Empirical Processes and Semiparametric Inference. Springer, New York, 2008.
[25] M. Ledoux and M. Talagrand. Probability in Banach Spaces (Isoperimetry and Processes). Ergebnisse der Mathematik und ihrer Grenzgebiete, Springer-Verlag, 1991.
[26] Sokbae Lee. Efficient semiparametric estimation of a partially linear quantile regression
model. Econometric Theory, 19:1–31, 2003.
[27] Hannes Leeb and Benedikt M. Pötscher. Model selection and inference: facts and
fiction. Econometric Theory, 21:21–59, 2005.
[28] Hannes Leeb and Benedikt M. Pötscher. Can one estimate the conditional distribution
of post-model-selection estimator? The Annals of Statistics, 34(5):2554–2591, 2006.
[29] Hannes Leeb and Benedikt M. Pötscher. Sparse estimators and the oracle property,
or the return of Hodges’ estimator. J. Econometrics, 142(1):201–211, 2008.
[30] E. L. Lehmann. Theory of Point Estimation. New York: Wiley, 1983.
[31] J. Neyman. C(α) tests and their use. Sankhya, 41:1–21, 1979.
[32] Joseph P. Romano and Azeem M. Shaikh. On the uniform asymptotic validity of
subsampling and the bootstrap. Ann. Statist., 40(6):2798–2822, 2012.
[33] Joseph P. Romano and Michael Wolf. Control of generalized error rates in multiple
testing. Ann. Statist., 35(4):1378–1408, 2007.
[34] Mark Rudelson and Roman Vershynin. On sparse reconstruction from Fourier and
Gaussian measurements. Communications on Pure and Applied Mathematics, 61:1025–
1045, 2008.
[35] R. J. Tibshirani. Regression shrinkage and selection via the Lasso. J. R. Statist. Soc.
B, 58:267–288, 1996.
[36] A. Tsybakov. Introduction to nonparametric estimation. Springer, 2008.
[37] Sara van de Geer, Peter Bühlmann, Ya’acov Ritov, and Ruben Dezeure. On asymptotically optimal confidence regions and tests for high-dimensional models. Annals of
Statistics, 42:1166–1202, 2014.
[38] A. W. van der Vaart and J. A. Wellner. Weak Convergence and Empirical Processes:
With Applications to Statistics. Springer-Verlag, New York, 1996.
[39] Aad W. van der Vaart and Jon A. Wellner. Empirical processes indexed by estimated
functions. IMS Lecture Notes–Monograph Series, 55:234–252, 2007.
[40] Cun-Hui Zhang and Stephanie S. Zhang. Confidence intervals for low-dimensional
parameters with high-dimensional data. J. R. Statist. Soc. B, 76:217–242, 2014.
Supplementary Appendix for
“Valid Post-Selection Inference in
High-dimensional Approximately Sparse
Quantile Regression Models”
The supplemental appendix contains the proofs of the main results, additional
discussions and technical results. Section 1 collects the notation. Section 2
has additional discussions on variants of the proposed methods, assumptions
of approximately sparse functions, minimax efficiency, and the choices of bandwidth and penalty parameters and their implications for the growth of s, p
and n. Section 3 provides new results for ℓ1 -penalized quantile regression with
approximation errors, Lasso with estimated weights under (weaker) aggregated
zero mean condition, and for the solution of the zero of the moment condition
associated with the orthogonal score function. The proof of the main result of
the main text is provided in Section 4. Section 5 collects auxiliary technical
inequalities used in the proofs. Section 6 provides the proofs and technical lemmas for ℓ1-penalized quantile regression. Section 7 provides proofs and technical lemmas for Lasso with estimated weights. Section 8 provides the proof for the orthogonal moment condition estimation problem. Finally, Section 9 provides rates of convergence for the estimates of the conditional density function.
Notation
In what follows, we work with triangular array data {ωi,n : i = 1, . . . , n; n = 1, 2, 3, . . . }
where for each n, {ωi,n ; i = 1, . . . , n} is defined on the probability space (Ω, S, Pn ). Each
ωi,n = (yi,n, z′i,n, d′i,n)′ is a vector, and these vectors are i.n.i.d., that is, independent across i but not
necessarily identically distributed. Hence all parameters that characterize the distribution
of {ωi,n : i = 1, . . . , n} are implicitly indexed by Pn and thus by n. We omit this dependence from the notation for the sake of simplicity. We use En to abbreviate the notation n^{−1}∑_{i=1}^{n}; for example, En[f] := En[f(ωi)] := n^{−1}∑_{i=1}^{n} f(ωi). We also use the following notation: Ē[f] := E[En[f]] = E[En[f(ωi)]] = n^{−1}∑_{i=1}^{n} E[f(ωi)]. The ℓ2-norm is denoted by ‖·‖; the ℓ0-“norm” ‖·‖0 denotes the number of non-zero components of a vector; and the ℓ∞-norm ‖·‖∞ denotes the maximal absolute value in the components of a vector. Given a vector δ ∈ R^p and a set of indices T ⊂ {1, . . . , p}, we denote by δT ∈ R^p the vector in which δTj = δj if j ∈ T and δTj = 0 if j ∉ T. We also denote by δ^{(k)} the vector with k non-zero components corresponding to k of the largest components of δ in absolute value. We use the notation (a)+ = max{a, 0}, a ∨ b = max{a, b}, and a ∧ b = min{a, b}. We also use the notation a ≲ b to denote a ≤ cb for some constant c > 0 that does not depend on n, and a ≲P b to denote a = OP(b). For an event E, we say that E wp → 1 when E occurs with probability approaching one as n grows. Given a p-vector b, we denote support(b) = {j ∈ {1, . . . , p} : bj ≠ 0}. We also use ρτ(t) = t(τ − 1{t ≤ 0}) and ϕτ(t1, t2) = (τ − 1{t1 ≤ t2}).
Define the minimal and maximal m-sparse eigenvalues of a symmetric positive semidefinite matrix M as

φmin(m)[M] := min_{1≤‖δ‖0≤m} δ′Mδ/‖δ‖²  and  φmax(m)[M] := max_{1≤‖δ‖0≤m} δ′Mδ/‖δ‖².   (0.20)

For notational convenience we write x̃i = (di, x′i)′, φmin(m) := φmin(m)[En[x̃i x̃′i]], and φmax(m) := φmax(m)[En[x̃i x̃′i]].
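As a concrete illustration of definition (0.20), the m-sparse eigenvalues of a small empirical Gram matrix can be computed by brute force over supports. This is only a sketch: the enumeration is exponential in p, so it is feasible only in toy dimensions; the data here are simulated and not part of the text.

```python
from itertools import combinations

import numpy as np

def sparse_eigenvalues(M, m):
    """min / max of (d' M d) / ||d||^2 over all d with 1 <= ||d||_0 <= m."""
    p = M.shape[0]
    lo, hi = np.inf, -np.inf
    for k in range(1, m + 1):
        for support in combinations(range(p), k):
            sub = M[np.ix_(support, support)]
            eigs = np.linalg.eigvalsh(sub)  # spectrum of the principal submatrix
            lo, hi = min(lo, eigs[0]), max(hi, eigs[-1])
    return lo, hi

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 6))
M = X.T @ X / X.shape[0]                 # En[x_i x_i']
phi_min, phi_max = sparse_eigenvalues(M, m=3)
full = np.linalg.eigvalsh(M)
```

By Cauchy interlacing, the m-sparse eigenvalues are sandwiched between the extreme eigenvalues of the full matrix, which is a quick sanity check on the implementation.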
1 Additional Discussions
1.1 Variants of the Proposed Algorithms
There are several different ways to implement the sequence of steps underlying the two
procedures outlined in Algorithms 2.1 and 2.2. The estimation of the control function
gτ can be done through other regularization methods like ℓ1 -penalized quantile regression
instead of the post-ℓ1-penalized quantile regression. The estimation of the error term v in Step 2 can be carried out with the Dantzig selector or the square-root Lasso, or the associated post-selection methods, instead of Lasso or Post-Lasso. Solving for the zero
of the moment condition induced by the orthogonal score function can be substituted
by a one-step correction from the ℓ1 -penalized quantile regression estimator α
bτ , namely,
vi ].
α̌τ = α
bτ + (En [b
v 2 ])−1 En [(τ − 1{yi 6 α
bτ di + x′ βbτ })b
i
i
Other variants can be constructed via alternative orthogonal score functions. This can be achieved by changing the weights in equation (2.7) to alternative weights, say f̃i, that lead to different error terms ṽi satisfying E[f̃i xi ṽi] = 0. The orthogonal score function is then constructed as ψi(α) = (τ − 1{yi ≤ di α + gτ(zi)}) ṽi (f̃i/fi). It turns out that the choice f̃i = fi minimizes the asymptotic variance of the estimator of ατ based upon the empirical analog of (2.8), among all the score functions satisfying (2.9). An example is to set f̃i = 1, which leads to ṽi = di − E[di | zi]. Although such a choice leads to a less efficient estimator, the estimation of E[di | zi] and fi can then be carried out separately, which can lead to weaker regularity conditions.
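As a toy numerical sketch of the one-step correction discussed above (all ingredients here — the simulated design, the deliberately biased pilot estimate, the zero control function, and the unweighted instrument v = d — are illustrative assumptions, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(1)
n, tau = 5000, 0.5
d = rng.standard_normal(n)
v = d                                   # toy instrument: here E[d | z] = 0, so v = d
y = 1.0 * d + rng.standard_normal(n)    # true alpha_tau = 1 at the median

alpha_pilot = 0.8                       # deliberately biased pilot estimate of alpha_tau
g_hat = np.zeros(n)                     # control-function estimate (true g = 0 here)

# one-step correction: alpha_pilot + En[v^2]^{-1} En[(tau - 1{y <= alpha d + g}) v]
score = (tau - (y <= alpha_pilot * d + g_hat)) * v
alpha_check = alpha_pilot + score.mean() / np.mean(v ** 2)
```

The correction moves the pilot estimate toward the truth; iterating the step, or rescaling by a conditional density estimate as in the text, sharpens it further.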
1.2 Handling Approximately Sparse Functions
As discussed in Remark 3.1, in order to handle approximately sparse models to represent
gτ in (2.6) an approximate orthogonality condition is assumed, namely
Ē[fi vi rτ i ] = o(n−1/2 ).
(1.21)
In the literature such a condition has been (implicitly) used before. For example, (1.21)
holds if the function gτ is an exactly sparse linear combination of the covariates so that
all the approximation errors are exactly zero, namely, rτ i = 0, i = 1, . . . , n. An alternative
assumption in the literature that implies (1.21) is to have E[fi di | zi ] = fi {x′i θτ + rθτ i },
where θτ is sparse and rθτ i is suitably small, which implies orthogonality to all functions of
zi since we have E[fi vi | zi ] = 0.
The high-dimensional setting makes the condition (1.21) less restrictive as p grows. Our
discussion is based on the assumption that the function gτ belongs to a well behaved class
of functions. For example, suppose gτ belongs to a Sobolev space S(α, L) for some α > 1 and L > 0 with respect to the basis {xj = Pj(z), j ≥ 1}. As in [36], a Sobolev space of functions consists of functions g(z) = ∑_{j=1}^{∞} θj Pj(z) whose Fourier coefficients θ satisfy

θ ∈ Θ(α, L) = { θ ∈ ℓ2(N) : ∑_{j=1}^{∞} |θj| < ∞, ∑_{j=1}^{∞} j^{2α} θj² ≤ L² }.
More generally, we can consider functions in a p-Rearranged Sobolev space RS(α, p, L), which allows permutations in the first p components, as in [11]. Formally, this is the class of functions g(z) = ∑_{j=1}^{∞} θj Pj(z) such that

θ ∈ ΘR(α, p, L) = { θ ∈ ℓ2(N) : ∑_{j=1}^{∞} |θj| < ∞, ∃ permutation Υ of {1, . . . , p} : ∑_{j=1}^{p} Υ(j)^{2α} θj² + ∑_{j=p+1}^{∞} j^{2α} θj² ≤ L² }.
It follows that S(α, L) ⊂ RS(α, p, L) and p-Rearranged Sobolev space reduces substantially
the dependence on the ordering of the basis.
Under mild conditions, it was shown in [11] that for functions in RS(α, p, L) the rate-optimal choice for the size of the support of the oracle model obeys s ≲ n^{1/(2α+1)}. It follows that

Ē[rτ²]^{1/2} = Ē[{∑_{j>s} θ(j) P(j)(zi)}²]^{1/2} ≲ n^{−α/(1+2α)}.

However, this bound cannot guarantee convergence to zero faster than the √n-rate, which would be needed to potentially
imply (1.21). Fortunately, to establish relation (1.21) one can exploit orthogonality with respect to all p components of xi. Indeed, we have

|Ē[fi vi rτi]| = |Ē[fi vi {∑_{j=s+1}^{p} θj Pj(zi) + ∑_{j≥p+1} θj Pj(zi)}]|
= |∑_{j≥p+1} Ē[fi vi θj Pj(zi)]| ≤ ∑_{j≥p+1} |θj| {Ē[fi² vi²] E[Pj²(zi)]}^{1/2}
≤ {Ē[fi² vi²] max_{j≥p+1} E[Pj²(zi)]}^{1/2} (∑_{j≥p+1} |θj|² j^{2α})^{1/2} (∑_{j≥p+1} j^{−2α})^{1/2}
= O(p^{−α+1/2}).
Therefore, condition (1.21) holds if n = o(p^{2α−1}); in particular, for any α > 1, n = o(p) suffices.
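The last factor in the display uses ∑_{j>p} j^{−2α} = O(p^{1−2α}), which is easy to verify numerically. A small sketch (the infinite tail is truncated far beyond p, and α is an illustrative choice):

```python
import numpy as np

alpha = 2.0
for p in (10, 100, 1000):
    js = np.arange(p + 1, p + 200001, dtype=float)
    tail = np.sum(js ** (-2 * alpha))                    # sum_{j > p} j^{-2 alpha}
    integral = p ** (1 - 2 * alpha) / (2 * alpha - 1)    # int_p^infty x^{-2 alpha} dx
    ratio = tail / integral
    assert 0.5 < ratio < 1.5                             # tail is of order p^{1 - 2 alpha}
```

Hence (∑_{j>p} j^{−2α})^{1/2} is of order p^{1/2−α}, matching the O(p^{−α+1/2}) bound above.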
1.3 Minimax Efficiency
In this section we make some connections to the (local) minimax efficiency analysis from semiparametric efficiency theory. For the sake of exposition we assume that (yi, xi, di)ⁿ_{i=1} are i.i.d., that the models are exactly sparse, rθτi = rτi = 0, i = 1, . . . , n, and we consider the median case (τ = .5). [26] derives an efficient score function for the partially linear median regression model:
Si = 2ϕτ (yi , di ατ + x′i βτ )fi [di − m∗τ (z)],
where m∗τ(xi) is given by

m∗τ(xi) = E[fi² di | xi] / E[fi² | xi].
Using the assumption m∗τ(xi) = x′i θτ∗, where ‖θτ∗‖0 ≤ s ≪ n is sparse, we have that
Si = 2ϕτ (yi , di ατ + x′i βτ )vi∗ ,
where vi∗ = fi di − fi m∗τ (xi ) would correspond to vi in (2.7). It follows that the estimator
based on vi∗ is actually efficient in the minimax sense (see Theorem 18.4 in [24]), and
inference about ατ based on this estimator provides best minimax power against local
alternatives (see Theorem 18.12 in [24]).
The claim above is formal as long as, given a law Qn , the least favorable submodels are
permitted as deviations that lie within the overall model. Specifically, given a law Qn , we
shall need to allow for a certain neighborhood Qδn of Qn such that Qn ∈ Qδn ⊂ Qn , where the
overall model Qn is defined similarly as before, except now permitting heteroscedasticity
(or we can keep homoscedasticity fi = fǫ to maintain formality). To allow for this we
consider a collection of models indexed by a parameter t = (t1 , t2 ):
yi = di (ατ + t1 ) + x′i (βτ + t2 θτ∗ ) + ǫi ,
fi di = fi x′i θτ∗ + vi∗ , E[fi vi∗ |xi ] = 0,
‖t‖ ≤ δ,
(1.22)
(1.23)
where ‖βτ‖0 ∨ ‖θτ∗‖0 ≤ s/2 and conditions as in Section 2 hold. The case with t = 0
generates the model Qn ; by varying t within δ-ball, we generate models Qδn , containing
the least favorable deviations. By [26], the efficient score for the model given above is Si ,
so we cannot have a better regular estimator than the estimator whose influence function
is J −1 Si , where J = E[Si2 ]. Since our model Qn contains Qδn , all the formal conclusions
about (local minimax) optimality of our estimators hold from theorems cited above (using
subsequence arguments to handle models changing with n). Our estimators are regular, since under Q^t_n with t = (O(1/√n), o(1)) their first-order asymptotics do not change, as a consequence of the theorems in Section 2. (Though our theorems actually prove more than this.)
1.4 Choice of Bandwidth h and Penalty Level λ
The proof of Theorem 1 provides a detailed analysis for a generic choice of the bandwidth h and the penalty level λ in Step 2 under Condition D. Here we discuss two particular choices for λ, for γ = 0.05/n:

(i) λ = h^{−1} √n Φ^{−1}(1 − γ/2p)  and  (ii) λ = 1.1 · 2√n Φ^{−1}(1 − γ/2p).
The choice (i) for λ leads to a sparser estimator by adjusting to the slower rate of convergence of f̂i, see (3.17). The choice (ii) for λ corresponds to the standard choice of penalty level in the literature for Lasso. Indeed, we have the following sparsity guarantees for θ̃τ under each choice:

(i) s̃θτ ≲ s + n h^{2k̄+2}/log(pn)  and  (ii) s̃θτ ≲ h^{−2} s + n h^{2k̄}/log(pn).
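A small numerical sketch of the two penalty levels, using only the standard library; the sample size, dimension, and bandwidth below are illustrative choices, not recommendations from the text:

```python
from math import sqrt
from statistics import NormalDist

n, p = 1000, 200
gamma = 0.05 / n
h = n ** (-1 / 6)                                  # an illustrative bandwidth

q = NormalDist().inv_cdf(1 - gamma / (2 * p))      # Phi^{-1}(1 - gamma / 2p)
lam_i = (1 / h) * sqrt(n) * q                      # choice (i)
lam_ii = 1.1 * 2 * sqrt(n) * q                     # choice (ii), the standard Lasso level
```

Choice (i) is larger here because h^{−1} = n^{1/6} exceeds the constant 2.2 of choice (ii), which is what produces the sparser estimator.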
In addition to the requirements in Condition M, (Kq² s² + s³) log³(pn) ≤ δn n and Kq⁴ s log(pn) log³ n ≤ δn n, which are independent of λ and h, we have that Condition D simplifies to

(i) h^{2k̄+1} √n ≤ δn, h^{−2} Kq⁴ s log(pn) ≤ δn n, h^{2k̄} s log(pn) ≤ δn, h^{−2} s² log²(pn) ≤ δn n, h^{2k̄+2} Kq² log(pn) log³ n ≤ δn;

(ii) h^{2k̄−2} s log(pn) ≤ δn, h^{2k̄} √n ≤ δn, (h^{−2} log(pn) log³ n + Kq²) Kq² s log(pn) ≤ δn n, {h^{−2} + log(pn)} h^{−2} s² log(pn) ≤ δn n, h^{2k̄} Kq² log(pn) log³ n ≤ δn.
For example, using the choice of f̂i as in (2.16), so that k̄ = 4, the following growth conditions suffice for the conditions above:

(i) Kq³ s³ log³(pn) ≤ δn n, Kq³ ≤ n^{1/3}, and h = n^{−1/6};
(ii) (s + Kx³) s³ log³(pn) ≤ δn n, Kq³ ≤ n^{1/3}, and h = n^{−1/8}.
2 Analysis of the Estimators
This section contains the main tools used in establishing the main inferential results. The high-level conditions here are intended to be applicable in a variety of settings, and they are implied by the regularity conditions provided in the previous sections. The results provided here are of independent interest (e.g. properties of Lasso under estimated weights).
We establish the inferential results (1.1) and (1.3) in Section 2.3 under high-level conditions. To verify these high-level conditions we need rates of convergence for the estimated residuals v̂ and the estimated confounding function ĝτ(z) = x′β̂τ, which are established in Sections 2.2 and 2.1, respectively. The main design condition relies on the restricted eigenvalue proposed in [12], namely, for x̃i = [di, x′i]′,

κc̄ = inf_{‖δ_{T^c}‖1 ≤ c̄‖δ_T‖1} ‖x̃′i δ‖2,n / ‖δT‖   (2.24)

where c̄ = (c + 1)/(c − 1) for the slack constant c > 1, see [12]. When c is bounded, it is well known that κc̄ is bounded away from zero provided sparse eigenvalues of order larger than s are well behaved, see [12].
2.1 ℓ1-Penalized Quantile Regression
In this section for a quantile index u ∈ (0, 1), we consider the equation
ỹi = x̃′i ηu + rui + ǫi , u-quantile of (ǫi | x̃i , rui ) = 0
(2.25)
where we observe {(ỹi, x̃i) : i = 1, . . . , n}, which are independent across i. To estimate ηu we consider the ℓ1-penalized u-quantile regression estimate

η̂u ∈ arg min_η En[ρu(ỹi − x̃′i η)] + (λu/n) ‖η‖1,

and the associated post-model selection estimate: that is, given an estimator η̄u,

η̃u ∈ arg min_η { En[ρu(ỹi − x̃′i η)] : ηj = 0 if η̄uj = 0 }.   (2.26)

We will typically be concerned with η̂u^μ, thresholded versions of the ℓ1-penalized quantile regression, defined as η̂uj^μ = η̂uj 1{|η̂uj| > μ/En[x̃ij²]^{1/2}}.
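A self-contained toy solver for the ℓ1-penalized quantile regression problem above. This is a crude subgradient sketch for illustration only — practical implementations use linear programming or interior-point methods, as in the quantreg package [23] — and the data-generating process and tuning constants are made up:

```python
import numpy as np

def rho(u, r):
    """Check function rho_u(r) = r (u - 1{r <= 0})."""
    return r * (u - (r <= 0))

def l1_qr(X, y, u, lam, steps=2000, lr=0.05):
    """Subgradient descent on  En[rho_u(y - X eta)] + (lam / n) ||eta||_1."""
    n, p = X.shape
    eta = np.zeros(p)

    def objective(e):
        return np.mean(rho(u, y - X @ e)) + lam / n * np.abs(e).sum()

    best, best_obj = eta.copy(), objective(eta)
    for t in range(steps):
        r = y - X @ eta
        grad = -X.T @ (u - (r <= 0)) / n + (lam / n) * np.sign(eta)
        eta = eta - lr / np.sqrt(t + 1) * grad          # diminishing step size
        obj = objective(eta)
        if obj < best_obj:                              # track the best iterate
            best, best_obj = eta.copy(), obj
    return best, best_obj

rng = np.random.default_rng(2)
n, p = 400, 10
X = rng.standard_normal((n, p))
y = X[:, 0] + rng.standard_normal(n)        # a single non-zero coefficient
lam = np.sqrt(n * np.log(p * n))            # rough lambda_u ~ sqrt(n log(pn))
eta_hat, obj = l1_qr(X, y, u=0.5, lam=lam)
```

Tracking the best iterate is important because subgradient steps on this non-smooth objective are not monotone.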
As established in [2] for sparse models and in [19] for approximately sparse models, under the event that

λu/n ≥ c ‖En[(u − 1{ỹi ≤ x̃′i ηu + rui}) x̃i]‖∞   (2.27)

the estimator above achieves good theoretical guarantees under mild design conditions. Although ηu is unknown, we can set λu so that the event in (2.27) holds with high probability. In particular, the pivotal rule proposed in [2] and generalized in [19] sets λu := c n Λu(1 − γ | x̃) for c > 1, where

Λu(1 − γ | x̃) = (1 − γ)-quantile of ‖En[(u − 1{Ui ≤ u}) x̃i]‖∞   (2.28)

and the Ui ∼ U(0, 1) are independent random variables conditional on x̃i, i = 1, . . . , n. This quantity can be easily approximated via simulations. Below we summarize the high-level conditions we require.
Condition PQR. Let Tu = supp(ηu) and normalize En[x̃ij²] = 1, j = 1, . . . , p. Assume that for some s ≥ 1, ‖ηu‖0 ≤ s and ‖rui‖2,n ≤ C√(s log(p)/n). Further, the conditional distribution function of ǫi is absolutely continuous with continuously differentiable density fǫi|x̃i,rui(· | x̃i, rui) such that 0 < f ≤ fi ≤ sup_t fǫi|x̃i,rui(t | x̃i, rui) ≤ f̄ and sup_t |f′ǫi|x̃i,rui(t | x̃i, rui)| < f̄′ for fixed constants f, f̄ and f̄′.
Condition PQR is implied by Condition AS. The conditions on the approximation error and the near-orthogonality conditions follow from choosing a model ηu that optimally balances the bias/variance trade-off. The assumption on the conditional density is standard in the quantile regression literature, both in the fixed-p case developed in [22] and in the case of p increasing slower than n studied in [4].
Next we present bounds on the prediction norm of the ℓ1 -penalized quantile regression
estimator.
Lemma 1 (Estimation Error of ℓ1-Penalized Quantile Regression) Under Condition PQR, setting λu ≥ c n Λu(1 − γ | x̃), we have with probability 1 − 4γ for n large enough

‖x̃′i(η̂u − ηu)‖2,n ≲ N := λu√s/(n κ2c) + (1/κ2c) √(s log(p/γ)/n)

and η̂u − ηu ∈ Au := ∆2c ∪ {v : ‖x̃′i v‖2,n = N, ‖v‖1 ≤ 8Ccs log(p/γ)/λu}, provided that

N sup_{δ̄∈Au} En[|x̃′i δ̄|³]/En[|x̃′i δ̄|²]^{3/2} + sup_{δ̄∈Au} En[|rui| |x̃′i δ̄|²]/En[|x̃′i δ̄|²] → 0.
Lemma 1 establishes the rate of convergence in the prediction norm for the ℓ1-penalized quantile regression estimator. Exact constants are derived in the proof. The extra growth condition required for identification is mild. For instance, we typically have λu ∼ √(n log(np)), and for many designs of interest

inf_{δ∈∆c} ‖x̃′i δ‖2,n³ / En[|x̃′i δ|³]

is bounded away from zero (see [2]). For more general designs we have

inf_{δ∈Au} ‖x̃′i δ‖2,n³ / En[|x̃′i δ|³] ≥ inf_{δ∈Au} ‖x̃′i δ‖2,n / (‖δ‖1 max_{i≤n}‖x̃i‖∞) ≥ (1/max_{i≤n}‖x̃i‖∞) { κ2c/(√s(1 + 2c)) ∧ λu N/(8Ccs log(p/γ)) }.
Lemma 2 (Estimation Error of Post-ℓ1-Penalized Quantile Regression) Assume Condition PQR holds, and that the post-ℓ1-penalized quantile regression is based on an arbitrary vector η̂u. Let r̄u ≥ ‖rui‖2,n, ŝu ≥ |supp(η̂u)| and Q̂ ≥ En[ρu(ỹi − x̃′i η̂u)] − En[ρu(ỹi − x̃′i ηu)] hold with probability 1 − γ. Then we have for n large enough, with probability 1 − γ − ε − o(1),

‖x̃′i(η̃u − ηu)‖2,n ≲ Ñ := √((ŝu + s) log(p/ε) / (n φmin(ŝu + s))) + f̄ r̄u + Q̂^{1/2}

provided that

Ñ sup_{‖δ̄‖0≤ŝu+s} En[|x̃′i δ̄|³]/En[|x̃′i δ̄|²]^{3/2} + sup_{‖δ̄‖0≤ŝu+s} En[|rui| |x̃′i δ̄|²]/En[|x̃′i δ̄|²] → 0.
Lemma 2 provides the rate of convergence in the prediction norm for the post-model-selection estimator despite possibly imperfect model selection. In the current nonparametric setting it is unlikely for the coefficients to exhibit a large separation from zero. The rates rely on the overall quality of the model selected by ℓ1-penalized quantile regression and the overall number of components ŝu. Once again the extra growth condition required for identification is mild. For more general designs we have

inf_{‖δ‖0≤ŝu+s} ‖x̃′i δ‖2,n³ / En[|x̃′i δ|³] ≥ inf_{‖δ‖0≤ŝu+s} ‖x̃′i δ‖2,n / (‖δ‖1 max_{i≤n}‖x̃i‖∞) ≥ √(φmin(ŝu + s)) / (√(ŝu + s) max_{i≤n}‖x̃i‖∞).

2.2 Lasso with Estimated Weights
In this section we consider the equation

fi di = fi x′i θτ + fi rθτi + vi, i = 1, . . . , n, Ē[fi vi xi] = 0,   (2.29)

where we observe {(di, zi, xi = X(zi)) : i = 1, . . . , n}, which are independent across i. We do not observe {fi = fτ(di, zi)}_{i=1}^{n} directly; only estimates {f̂i}_{i=1}^{n} are available. Importantly, we only require that Ē[fi vi xi] = 0 and not E[fi xi vi] = 0 for every i = 1, . . . , n. Also, Tθτ = supp(θτ) is unknown, but a sparsity condition holds, namely |Tθτ| ≤ s. To estimate θτ and vi, we compute

θ̂τ ∈ arg min_θ En[f̂i²(di − x′i θ)²] + (λ/n) ‖Γ̂τ θ‖1 and set v̂i = f̂i(di − x′i θ̂τ), i = 1, . . . , n,   (2.30)
where λ and Γ̂τ are the associated penalty level and loadings specified below. A difficulty is to account for the impact of the estimated weights f̂i while also only using Ē[fi vi xi] = 0. We will establish bounds on the penalty parameter λ so that with high probability the following regularization event occurs:

λ/n ≥ 2c ‖Γ̂τ^{−1} En[fi xi vi]‖∞.   (2.31)

As discussed in [12, 3, 8], the event above allows one to exploit the restricted set condition ‖θ̂τT_{θτ}^c‖1 ≤ c̃ ‖θ̂τT_{θτ} − θτ‖1 for some c̃ ≥ 1. Thus rates of convergence for θ̂τ and the v̂i defined in (2.30) can be established based on the restricted eigenvalue κc̃ defined in (2.24) with x̃i = xi.
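A minimal coordinate-descent sketch of the weighted Lasso step (2.30). This is an illustration under simplifying assumptions: the weights f̂i are taken as given (constant here), and the penalty loadings use d in place of the unobserved residual v̂ — a stand-in for the feasible iterated loadings Γ̂τ of the text:

```python
import numpy as np

def weighted_lasso(d, X, f_hat, lam, n_iter=500):
    """Coordinate descent for
       min_theta En[f_hat^2 (d - x'theta)^2] + (lam/n) ||Gamma theta||_1,
       with crude loadings Gamma_jj = {En[f_hat^2 x_j^2 d^2]}^{1/2}."""
    n, p = X.shape
    W = f_hat ** 2
    gamma_load = np.sqrt((W * d ** 2) @ (X ** 2) / n)
    theta = np.zeros(p)
    r = d - X @ theta
    for _ in range(n_iter):
        for j in range(p):
            r = r + X[:, j] * theta[j]            # remove coordinate j from residual
            rho_j = (W * r) @ X[:, j] / n         # En[f^2 x_j (d - x_{-j}' theta_{-j})]
            z_j = (W * X[:, j] ** 2).mean()       # En[f^2 x_j^2]
            thr = lam * gamma_load[j] / (2 * n)   # soft-threshold level
            theta[j] = np.sign(rho_j) * max(abs(rho_j) - thr, 0.0) / z_j
            r = r - X[:, j] * theta[j]
    return theta

rng = np.random.default_rng(4)
n, p = 300, 40
X = rng.standard_normal((n, p))
d = 2.0 * X[:, 0] + rng.standard_normal(n)   # sparse projection: one non-zero coefficient
f_hat = np.full(n, 0.5)                      # toy estimated density weights
theta_hat = weighted_lasso(d, X, f_hat, lam=np.sqrt(n * np.log(p * n)))
```

On this toy design the signal coordinate survives the penalization while the noise coordinates are set to zero, which is the selection behavior the analysis below quantifies.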
However, the estimation error in the estimate fbi of fi could slow the rates of convergence.
The following are sufficient high-level conditions. In what follows c, c̄, C, f , f¯ are strictly
positive constants independent of n.
Condition WL. For the model (2.29) suppose that:
(i) for s ≥ 1 we have ‖θτ‖0 ≤ s and Φ^{−1}(1 − γ/2p) ≤ δn n^{1/6};
(ii) f ≤ fi ≤ f̄ and c ≤ min_{j≤p} {Ē[|fi xij vi − E[fi xij vi]|²]}^{1/2} ≤ max_{j≤p} {Ē[|fi xij vi|³]}^{1/3} ≤ C;
(iii) with probability 1 − ∆n we have En[f̂i² rθτi²] ≤ cr², max_{j≤p} |(En − Ē)[fi² xij² vi²]| + |(En − Ē)[{fi xij vi − E[fi xij vi]}²]| ≤ δn, max_{j≤p} En[(f̂i² − fi²)² xij² vi²] ≤ δn, and En[{(f̂i² − fi²)²/fi²} vi²] + En[{(f̂i² − fi²)²/(f̂i² fi²)} vi²] ≤ cf²;
(iv) ℓ Γ̂τ0 ≤ Γ̂τ ≤ u Γ̂τ0 for Γ̂τ0,jj = {En[f̂i² xij² vi²]}^{1/2}, with 1 − δn ≤ ℓ ≤ u ≤ C with probability 1 − ∆n.
Comment 2.1 Condition WL(i) is a standard condition on the approximation error that
yields the optimal bias variance trade-off (see [3]) and imposes a growth restriction on p
relative to n, in particular log p = o(n1/3 ). Condition WL(ii) imposes conditions on the
conditional density function and mild moment conditions which are standard in quantile
regression models even with fixed dimensions, see [22]. Condition WL(iii) requires high-level rates of convergence for the estimate f̂i. Several primitive moment conditions imply the first requirement in Condition WL(iii). These conditions allow the use of self-normalized moderate deviation theory to control heteroscedastic non-Gaussian errors, similarly to [1], where there are no estimated weights. Condition WL(iv) corresponds to the asymptotically valid penalty loadings in [1], which are satisfied by the proposed choice of Γ̂τ in (2.12).
Next we present results on the performance of the estimators generated by Lasso with estimated weights. In what follows, κ̂c is defined with f̂i xi in place of x̃i in (2.24), so that κ̂c ≥ κc min_{i≤n} f̂i.
Lemma 3 (Rates of Convergence for Lasso) Under Condition WL and setting λ ≥ 2c′√n Φ^{−1}(1 − γ/2p) for c′ > c > 1, we have for n large enough, with probability 1 − γ − o(1),

‖f̂i x′i(θ̂τ − θτ)‖2,n ≤ 2{cf + cr} + (λ√s/(n κ̂c̃))(u + 1/c)  and

‖θ̂τ − θτ‖1 ≤ 2√s{cf + cr}/κ̂2c̃ + (1 + 1/(2c̃))(λs/(n κ̂c̃ κ̂2c̃))(u + 1/c) + (2c‖Γ̂τ0^{−1}‖∞ n/((ℓc − 1)λ)){cf + cr}²,

where c̃ = ‖Γ̂τ0‖∞ ‖Γ̂τ0^{−1}‖∞ (uc + 1)/(ℓc − 1).
Lemma 3 above establishes the rate of convergence for Lasso with estimated weights. This automatically leads to bounds on the estimated residuals v̂i obtained with Lasso through the identity

v̂i − vi = (f̂i − fi)(vi/fi) + f̂i x′i(θτ − θ̂τ) + f̂i rθτi.   (2.32)
The Post-Lasso estimator applies the least squares estimator to the model selected by the Lasso estimator (2.30):

θ̃τ ∈ arg min_{θ∈R^p} { En[f̂i²(di − x′i θ)²] : θj = 0 if θ̂τj = 0 }, and set ṽi = f̂i(di − x′i θ̃τ).
It aims to remove the bias towards zero induced by the ℓ1-penalty function, which is used to select components. Sparsity properties of the Lasso estimator θ̂τ under estimated weights follow similarly to the standard Lasso analysis derived in [1]. By combining such sparsity properties with the rates in the prediction norm, we can establish rates for the post-model-selection estimator under estimated weights. The following result summarizes the properties of the Post-Lasso estimator.
Lemma 4 (Model Selection Properties of Lasso and Properties of Post-Lasso) Suppose that Condition WL holds, and that κ′ ≤ φmin({s + (n²/λ²){cf² + cr²}}/δn) ≤ φmax({s + (n²/λ²){cf² + cr²}}/δn) ≤ κ′′ for some positive and bounded constants κ′, κ′′. Then the data-dependent model T̂θτ selected by the Lasso estimator with λ ≥ 2c′√n Φ^{−1}(1 − γ/2p) for c′ > c > 1 satisfies, with probability 1 − γ − o(1),

‖θ̃τ‖0 = |T̂θτ| ≲ s + (n²/λ²){cf² + cr²}.   (2.33)

Moreover, the corresponding Post-Lasso estimator obeys, with probability 1 − γ − o(1),

‖x′i(θ̃τ − θτ)‖2,n ≲P cf + cr + √(|T̂θτ| log(p ∨ n)/n) + λ√s/(n κc).
2.3 Moment Condition based on Orthogonal Score Function
Next we turn to the analysis of the estimator α̌τ obtained from the orthogonal moment condition. In this section we assume that

Ln(α̌τ) ≤ min_{α∈Aτ} Ln(α) + δn n^{−1}.

This setting is related to the instrumental quantile regression method proposed in [15]. However, in this application we need to account for the estimation of the noise v that acts
as the instrument which is known in the setting in [15]. Condition IQR below suffices to
make the impact of the estimation of instruments negligible to the first order asymptotics
of the estimator α̌τ . Primitive conditions that imply Condition IQR are provided and
discussed in the main text.
Let {(yi , di , zi ) : i = 1, . . . , n} be independent observations satisfying
yi = di ατ + gτ(zi) + ǫi, τ-quantile(ǫi | di, zi) = 0; fi di = fi x′i θ0τ + vi, Ē[fi xi vi] = 0.   (2.34)
Letting D × Z denote the domain of the random variables (d, z), for h̃ = (g̃, ι̃), where g̃ is a function of the variable z and the instrument ι̃ is a function that maps (d, z) 7→ ι̃(d, z), we write

ψα,h̃(yi, di, zi) = ψα,g̃,ι̃(yi, di, zi) = (τ − 1{yi ≤ g̃(zi) + di α}) ι̃(di, zi) = (τ − 1{yi ≤ g̃i + di α}) ι̃i.
We denote h0 = (gτ, ι0), where ι0i := vi = fi(di − x′i θ0τ). For some sequences δn → 0 and ∆n → 0, we let F denote a set of functions such that each element h̃ = (g̃, ι̃) ∈ F satisfies

Ē[(1 + |ι0i| + |ι̃i − ι0i|)(gτi − g̃i)²] ≤ δn n^{−1/2}, Ē[(ι̃i − ι0i)²] ≤ δn,
Ē[|gτi − g̃i| |ι̃i − ι0i|] ≤ δn n^{−1/2}, |Ē[fi ι0i {g̃i − gτi}]| ≤ δn n^{−1/2},   (2.35)

and with probability 1 − ∆n we have

sup_{|α−ατ|≤δn, h̃∈F} |(En − Ē)[ψα,h̃(yi, di, zi) − ψα,h0(yi, di, zi)]| ≤ δn n^{−1/2}.   (2.36)

We assume that the estimated functions ĝ and ι̂ satisfy the following condition.
Condition IQR. Let {(yi, di, zi) : i = 1, . . . , n} be random variables independent across i satisfying (2.34). Suppose that there are positive constants 0 < c ≤ C < ∞ such that:
(i) fyi|di,zi(y | di, zi) ≤ f̄ and |f′yi|di,zi(y | di, zi)| ≤ f̄′; c ≤ |Ē[fi di ι0i]|; and Ē[ι0i⁴] + Ē[di⁴] ≤ C;
(ii) {α : |α − ατ| ≤ n^{−1/2}/δn} ⊂ Aτ, where Aτ is a (possibly random) compact interval;
(iii) with probability at least 1 − ∆n, the estimated functions ĥ = (ĝ, ι̂) ∈ F and

|α̌τ − ατ| ≤ δn and |En[ψα̌τ,ĥ(yi, di, zi)]| ≤ δn n^{−1/2};   (2.37)

(iv) with probability at least 1 − ∆n, the estimated functions ĥ = (ĝ, ι̂) satisfy ‖ι̂i − ι0i‖2,n ≤ δn and ‖1{|ǫi| ≤ |di(ατ − α̌τ) + gτi − ĝi|}‖2,n ≤ δn².
Lemma 5 Under Condition IQR(i,ii,iii) we have

σ̄n^{−1} √n (α̌τ − ατ) = Un(τ) + oP(1), Un(τ) ⇝ N(0, 1),

where σ̄n² = Ē[fi di ι0i]^{−1} Ē[τ(1 − τ) ι0i²] Ē[fi di ι0i]^{−1} and

Un(τ) = {Ē[ψατ,h0²(yi, di, zi)]}^{−1/2} √n En[ψατ,h0(yi, di, zi)].

Moreover, if IQR(iv) also holds, we have

n Ln(ατ) = Un(τ)² + oP(1), Un(τ)² ⇝ χ²(1),

and the variance estimator is consistent, namely

En[f̂i di ι̂i]^{−1} En[(τ − 1{yi ≤ ĝi + di α̌τ})² ι̂i²] En[f̂i di ι̂i]^{−1} →P Ē[fi di ι0i]^{−1} Ē[τ(1 − τ) ι0i²] Ē[fi di ι0i]^{−1}.
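As a toy numerical illustration of estimating ατ from the orthogonal moment condition: the sketch below uses oracle nuisance functions, a grid search in place of a proper optimizer, and a simplified criterion Ln equal to the squared sample moment — all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(6)
n, tau = 2000, 0.5
z = rng.standard_normal(n)
d = z + rng.standard_normal(n)
y = 0.7 * d + z + rng.standard_normal(n)    # alpha_tau = 0.7, g_tau(z) = z

g_hat = z                                   # oracle control function for illustration
iota = d - z                                # instrument: d minus E[d | z]

def Ln(alpha):
    """Squared sample moment of the orthogonal score at alpha."""
    psi = (tau - (y <= g_hat + d * alpha)) * iota
    return psi.mean() ** 2

grid = np.linspace(-1, 2, 601)
alpha_check = grid[np.argmin([Ln(a) for a in grid])]
```

With these oracle inputs the minimizer lands near the true ατ = 0.7; the results above show that the same behavior is retained when ĝ and ι̂ are estimated at the stated rates.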
3 Proofs for Section 3 of Main Text (Main Result)
Proof. (Proof of Theorem 1) The first-order equivalence between the two estimators
follows from establishing the same linear representation for each estimator. In Part 1 of
the proof we consider the orthogonal score estimator. In Part 2 we consider the double
selection estimator.
Part 1. Orthogonal score estimator. We will verify Condition IQR and the result follows
by Lemma 5 applied with ι0i = vi = fi (di − x′i θ0τ ) and noting that 1{yi 6 di ατ + gτ (zi )} =
1{Ui 6 τ } for some uniform (0, 1) random variable (independent of {di , zi }) by the definition
of the conditional quantile function.
Condition IQR(i) requires conditions on the probability density function that are assumed in Condition AS(iii). The fourth moment conditions are implied by Condition M(i) using ξ = (1, 0′)′ and ξ = (1, −θ′0τ)′, since Ē[vi⁴] ≤ f̄⁴ Ē[{(di, x′i)ξ}⁴] ≤ C′(1 + ‖θ0τ‖)⁴, and ‖θ0τ‖ ≤ C is assumed in Condition AS(i). Finally, by relation (2.7), namely E[fi xi vi] = 0, we have Ē[fi di vi] = Ē[vi²] ≥ f Ē[(di − x′i θ0τ)²] ≥ c f ‖(1, θ′0τ)′‖² by Condition AS(iii) and Condition M(i).
Next we construct the estimates for the orthogonal score function, which are based on post-ℓ1-penalized quantile regression and Post-Lasso with an estimated conditional density function. We will show that with probability 1 − o(1) the estimated nuisance parameters belong to F.

To establish the rates of convergence for β̃τ, the post-ℓ1-penalized quantile regression based on the thresholded estimator β̂τ^{λτ}, we first provide rates of convergence for the ℓ1-penalized quantile regression estimator β̂τ. We will apply Lemma 1 with γ = 1/n. Condition PQR holds by Condition AS with probability 1 − o(1), using the Markov inequality and Ē[rτi²] ≤ s/n.
Since population eigenvalues are bounded above and bounded away from zero by Condition M(i), by Lemma 8 (where δ̄n → 0 under Kq² Cs log²(1 + Cs) log(pn) log n = o(n)) we have that sparse eigenvalues of order ℓn s are bounded away from zero and from above with probability 1 − o(1) for some slowly increasing function ℓn. In turn, the restricted eigenvalue κ2c is also bounded away from zero for bounded c and n sufficiently large. Since λτ ≲ √(n log(p ∨ n)), we will take N = C√(s log(n ∨ p)/n) in Lemma 1. To establish the side conditions note that
sup_{δ∈Aτ} En[|x̃′i δ|³]/‖x̃′i δ‖2,n³ ≤ sup_{δ∈Aτ} max_{i≤n}‖x̃i‖∞ ‖δ‖1/‖x̃′i δ‖2,n ≤ max_{i≤n}‖x̃i‖∞ {√s(1 + 2c)/κ2c ∨ 8Ccs log(pn)/(λu N)} ≲P Kq √(s log(pn)),

which implies that N sup_{δ∈Aτ} En[|x̃′i δ|³]/‖x̃′i δ‖2,n³ → 0 with probability 1 − o(1) under Kq² s² log²(p ∨ n) ≤ δn n.
Moreover, for the second part of the side condition,

sup_{δ∈Aτ} En[|rτi| |x̃′i δ|²]/‖x̃′i δ‖2,n² ≤ sup_{δ∈Aτ} En[|rτi| |x̃′i δ|] max_{i≤n}‖x̃i‖∞ ‖δ‖1/‖x̃′i δ‖2,n² ≤ sup_{δ∈Aτ} (‖rτi‖2,n/‖x̃′i δ‖2,n) max_{i≤n}‖x̃i‖∞ ‖δ‖1 ≤ max_{i≤n}‖x̃i‖∞ ‖rτi‖2,n {√s(1 + 2c)/κ2c ∨ 8Ccs log(pn)/(λu N)} ≲P Kq √(s log(pn)/n) √(s log(pn)).
Under Kq² s² log²(p ∨ n) ≤ δn n, the side condition holds with probability 1 − o(1). Therefore, by Lemma 1 we have ‖x̃′i(β̂τ − βτ)‖2,n ≲ √(s log(pn)/n) and ‖β̂τ − βτ‖1 ≲ s√(log(pn)/n) with probability 1 − o(1).
With the same probability, by Lemma 6 with μ = λτ/n, since φmax(Cs) is uniformly bounded with probability 1 − o(1), we have that the thresholded estimator β̂τ^μ satisfies the following bounds with probability 1 − o(1): ‖β̂τ^μ − βτ‖1 ≲ s√(log(pn)/n), ‖β̂τ^μ‖0 ≲ s, and ‖x̃′i(β̂τ^μ − βτ)‖2,n ≲ √(s log(pn)/n). We use the support of β̂τ^μ to construct the refitted estimator β̃τ.
We will apply Lemma 2. By Lemma 12 and the rate of $\hat\beta_\tau^\mu$, we can take $\hat Q = Cs\log(pn)/n$. Since sparse eigenvalues of order $Cs$ are bounded away from zero, we will use $\tilde N = C\sqrt{s\log(pn)/n}$ and $\varepsilon = 1/n$. Therefore, $\|x_i'(\tilde\beta_\tau - \beta_\tau)\|_{2,n} \lesssim \sqrt{s\log(np)/n}$ with probability $1-o(1)$ provided the side conditions of the lemma hold. To verify the side conditions
note that
$$\tilde N \sup_{\|\delta\|_0\le Cs} \frac{E_n[|\tilde x_i'\delta|^3]}{\|\tilde x_i'\delta\|_{2,n}^3} \le \tilde N \sup_{\|\delta\|_0\le Cs} \frac{\max_{i\le n}\|\tilde x_i\|_\infty\|\delta\|_1}{\|\tilde x_i'\delta\|_{2,n}} \le \tilde N \sup_{\|\delta\|_0\le Cs} \frac{\max_{i\le n}\|\tilde x_i\|_\infty\sqrt{Cs}\,\|\delta\|}{\sqrt{\phi_{\min}(Cs)}\,\|\delta\|} \lesssim_P K_q\sqrt{s}\sqrt{\frac{s\log(pn)}{n}}$$
and
$$\sup_{\|\delta\|_0\le Cs} \frac{E_n[|r_{\tau i}|\,|\tilde x_i'\delta|^2]}{\|\tilde x_i'\delta\|_{2,n}^2} \le \|r_{\tau i}\|_{2,n}\max_{i\le n}\|\tilde x_i\|_\infty \sup_{\|\delta\|_0\le Cs}\frac{\|\delta\|_1}{\|\tilde x_i'\delta\|_{2,n}} \le \|r_{\tau i}\|_{2,n}\max_{i\le n}\|\tilde x_i\|_\infty\sqrt{\frac{Cs}{\phi_{\min}(Cs)}} \lesssim_P K_q\sqrt{\frac{s\log(pn)}{n}}\sqrt{s\log(pn)}$$
and the side condition holds with probability 1 − o(1) under Kq2 s log2 (p ∨ n) 6 δn n.
Next we proceed to construct the estimator for $v_i$. Note that by the same arguments above, under Condition D, we have the same rates of convergence for the post-selection (after truncation) estimators $(\tilde\alpha_u, \tilde\beta_u)$, $u\in U$: $\|\tilde\beta_u\|_0 \lesssim s$ and $\|(\tilde\alpha_u, \tilde\beta_u) - (\alpha_u, \beta_u)\| \lesssim \sqrt{s\log(pn)/n}$, to estimate the conditional density function via (2.15) or (2.16). Thus with probability $1-o(1)$
$$\|f_i - \hat f_i\|_{2,n} \lesssim \frac{1}{h}\sqrt{\frac{s\log(n\vee p)}{n}} + h^{\bar k} \quad\text{and}\quad \max_{i\le n}|\hat f_i - f_i| \lesssim \delta_n \qquad (3.38)$$
where k̄ depends on the estimator. (See relation (3.41) below.) Note that the last relation
implies that maxi6n fbi 6 2f¯ is automatically satisfied with probability 1 − o(1) for n large
enough. Let U denote the (finite) set of quantile indices used in the calculation of fbi .
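The exact estimators (2.15)–(2.16) are not reproduced in this section; a common construction of this type is a difference quotient of two estimated conditional quantiles, $\hat f \approx 2h/\{\hat q(\tau+h) - \hat q(\tau-h)\}$, whose bias is of order $h^{\bar k}$ as in (3.38). The following minimal Python sketch of that rule is illustrative only (the difference-quotient formula and the standard normal example are assumptions, not the paper's exact definition):

```python
import math

def density_from_quantiles(q_plus, q_minus, h):
    # Difference-quotient density estimate at the tau-th quantile:
    # f(q(tau)) ~ 2h / (q(tau + h) - q(tau - h)).
    return 2 * h / (q_plus - q_minus)

def norm_ppf(p, lo=-10.0, hi=10.0):
    # Standard normal quantile function via bisection on the CDF.
    cdf = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    for _ in range(200):
        mid = (lo + hi) / 2
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

tau, h = 0.5, 0.01
f_hat = density_from_quantiles(norm_ppf(tau + h), norm_ppf(tau - h), h)
f_true = 1 / math.sqrt(2 * math.pi)  # N(0,1) density at its median
```

Shrinking the bandwidth $h$ reduces the smoothing bias but, once the quantiles themselves are estimated, inflates the variance; this trade-off is exactly the $1/h$ factor appearing in the rates above.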
Next we verify Condition WL to invoke Lemmas 3 and 4 with $c_r = C\sqrt{s\log(pn)/n}$ and $c_f = C\{(1/h)\sqrt{s\log(n\vee p)/n} + h^{\bar k}\}$. The sparsity condition in Condition WL(i) is
implied by Condition AS(ii) and Φ−1 (1 −γ/2p) 6 δn n1/6 is implied by log(1/γ) . log(p ∨n)
and log3 p 6 δn n from Condition M(iii). Condition WL(ii) follows from the assumption on
the density function in Condition AS(iii) and the moment conditions in Condition M(i).
The first requirement in Condition WL(iii) holds with $c_r^2 \lesssim s\log(pn)/n$ since
$$E_n[\hat f_i^2 r_{\theta\tau i}^2] \le \max_{i\le n}\hat f_i^2\, E_n[r_{\theta\tau i}^2] \lesssim \bar f^2 s\log(pn)/n$$
from $\max_{i\le n}\hat f_i \le C$ holding with probability $1-o(1)$, and Markov's inequality under $\bar E[r_{\theta\tau i}^2] \le Cs/n$ by Condition AS(ii).
The second requirement, $\max_{j\le p}|(E_n-\bar E)[f_i^2x_{ij}^2v_i^2]| + |(E_n-\bar E)[\{f_ix_{ij}v_i - E[f_ix_{ij}v_i]\}^2]| \le \delta_n$ with probability $1-o(1)$, is implied by Lemma 8 with $k=1$ and $K = \{E[\max_{i\le n}\|f_ix_iv_i\|_\infty^2]\}^{1/2} \le \bar f\{E[\max_{i\le n}\|(x_i,v_i)\|_\infty^4]\}^{1/2} \le \bar fK_q^2$, under $K_q^4\log(pn)\log n \le \delta_n n$.
The third requirement of Condition WL(iii) follows from uniform consistency in (3.38)
and the second part of Condition WL(ii) since
$$\max_{j\le p} E_n[(\hat f_i - f_i)^2 x_{ij}^2 v_i^2] \le \max_{i\le n}\frac{|\hat f_i - f_i|^2}{f_i^2}\left\{\max_{j\le p}(E_n-\bar E)[f_i^2x_{ij}^2v_i^2] + \max_{j\le p}\bar E[f_i^2x_{ij}^2v_i^2]\right\} \lesssim \delta_n$$
with probability 1−o(1) by (3.38), the second requirement, and the bounded fourth moment
assumption in Condition M(i).
To show the fourth part of Condition WL(iii), because both $\hat f_i$ and $f_i$ are bounded away from zero and from above with probability $1-o(1)$, uniformly over $u\in U$ (the finite set of quantile indices used to estimate the density), and $\hat f_i^2 - f_i^2 = (\hat f_i - f_i)(\hat f_i + f_i)$, it follows that with probability $1-o(1)$
$$E_n[(\hat f_i^2 - f_i^2)^2v_i^2/f_i^2] + E_n[(\hat f_i^2 - f_i^2)^2v_i^2/\{\hat f_i^2f_i^2\}] \lesssim E_n[(\hat f_i - f_i)^2v_i^2/f_i^2].$$
Next let $\delta_u = (\tilde\alpha_u - \alpha_u, \tilde\beta_u' - \beta_u')'$ where the estimators satisfy Condition D. We have that with probability $1-o(1)$
$$E_n[(\hat f_i - f_i)^2v_i^2/f_i^2] \lesssim h^{2\bar k}E_n[v_i^2] + h^{-2}\sum_{u\in U}E_n[v_i^2(\tilde x_i'\delta_u)^2 + v_i^2r_{ui}^2].$$
The following relations hold for all $u\in U$:
$$E_n[v_i^2r_{ui}^2] \lesssim_P \bar E[v_i^2r_{ui}^2] \le \bar f^2\bar E[(d_i - x_i'\theta_{0\tau})^2r_{ui}^2] \lesssim s/n$$
$$E_n[v_i^2(\tilde x_i'\delta_u)^2] = \bar E[(\tilde x_i'\delta_u)^2v_i^2] + (E_n-\bar E)[v_i^2(\tilde x_i'\delta_u)^2] \le C\|\delta_u\|^2 + \|\delta_u\|^2\sup_{\|\delta\|_0\le\|\delta_u\|_0,\|\delta\|=1}|(E_n-\bar E)[\{v_i\tilde x_i'\delta\}^2]| \qquad (3.39)$$
where $\|\delta_u\| \lesssim \sqrt{s\log(n\vee p)/n}$ and $\|\delta_u\|_0 \lesssim s$ with probability $1-o(1)$. Then
we apply Lemma 8 with $X_i = v_i\tilde x_i$. Thus, we can take $K = \{E[\max_{i\le n}\|X_i\|_\infty^2]\}^{1/2} \le \{E[\max_{i\le n}\|(v_i,\tilde x_i')\|_\infty^4]\}^{1/2} \le K_q^2$, and $\bar E[(\delta'X_i)^2] \le \bar E[v_i^2(\tilde x_i'\delta)^2] \le C\|\delta\|^2$ by the fourth moment assumption in Condition M(i) and $\|\theta_{0\tau}\| \le C$. Therefore,
$$\sup_{\|\delta\|_0\le Cs,\|\delta\|=1}\left|(E_n-\bar E)\left[\{v_i\tilde x_i'\delta\}^2\right]\right| \lesssim_P \frac{K_q^2s\log^3 n\log(p\vee n)}{n} + \sqrt{\frac{K_q^2s\log^3 n\log(p\vee n)}{n}}.$$
Under Kq4 s log3 n log(p ∨ n) 6 δn n, with probability 1 − o(1) we have
$$c_f^2 \lesssim \frac{s\log(n\vee p)}{h^2n} + h^{2\bar k}.$$
Condition WL(iv) pertains to the penalty loadings, which are estimated iteratively. In the first iteration the loadings are constant across components as $\varpi := \max_{i\le n}\hat f_i\max_{i\le n}\|x_i\|_\infty\{E_n[\hat f_i^2d_i^2]\}^{1/2}$. Thus the solution of the optimization problem is the same if we use penalty parameters $\tilde\lambda$ and $\tilde\Gamma$ defined as $\tilde\lambda := \lambda\varpi/\max_{j\le p}\{E_n[\hat f_i^2x_{ij}^2v_i^2]\}^{1/2}$ and $\tilde\Gamma_{jj} = \max_{j\le p}\{E_n[\hat f_i^2x_{ij}^2v_i^2]\}^{1/2}$. By construction $\hat\Gamma_{0\tau jj} \le \tilde\Gamma_{jj} \le C\hat\Gamma_{0\tau jj}$ for some bounded $C$ with probability $1-o(1)$, as the $\hat\Gamma_{0\tau jj}$ are bounded away from zero and from above with probability $1-o(1)$. Since Condition WL holds for $(\tilde\lambda, \tilde\Gamma)$, and $\tilde\lambda \lesssim \lambda\max_{i\le n}\|x_i\|_\infty$, by Lemma 3 we have with probability $1-o(1)$
$$\|x_i'(\hat\theta_\tau - \theta_\tau)\|_{2,n} \lesssim \frac{1}{h}\sqrt{\frac{s\log(n\vee p)}{n}} + h^{\bar k} + \frac{\lambda\sqrt{s}\max_{i\le n}\|x_i\|_\infty}{n}.$$
The iterative choice of penalty loadings satisfies
$$\begin{aligned}|E_n[\hat f_i^2x_{ij}^2v_i^2]^{1/2} - E_n[\hat f_i^2x_{ij}^2\hat v_i^2]^{1/2}| &\le \max_{i\le n}\hat f_i\|x_i\|_\infty E_n[\{f_i(d_i - x_i'\theta_{0\tau}) - \hat f_i(d_i - x_i'\hat\theta_\tau)\}^2]^{1/2}\\ &\le \max_{i\le n}\hat f_i\|x_i\|_\infty E_n[(\hat f_i - f_i)^2v_i^2/f_i^2]^{1/2} + \max_{i\le n}\hat f_i\|x_i\|_\infty E_n[\hat f_i^2\{x_i'(\hat\theta_\tau - \theta_{0\tau})\}^2]^{1/2}\\ &\lesssim_P \frac{K_q}{h}\sqrt{\frac{s\log(n\vee p)}{n}} + K_qh^{\bar k} + \frac{\lambda K_q^2\sqrt{s}}{n}\end{aligned}$$
uniformly in j 6 p. Thus the iterated penalty loadings are uniformly consistent with
probability 1 − o(1) and also satisfy Condition WL(iv) under h−2 Kq2 s log(pn) 6 δn n and
Kq4 s log(pn) 6 δn n. Therefore, in the subsequent iterations, by Lemma 3 and Lemma 4 we
have that the post-Lasso estimator satisfies with probability 1 − o(1)
$$\tilde s_{\theta\tau} := \|\tilde\theta_\tau\|_0 \lesssim s + \frac{n^2\{c_f^2 + c_r^2\}}{\lambda^2} \lesssim s + \frac{ns\log(n\vee p)}{h^2\lambda^2} + \left(\frac{nh^{\bar k}}{\lambda}\right)^2$$
and
$$\|x_i'(\tilde\theta_\tau - \theta_\tau)\|_{2,n} \lesssim \frac{1}{h}\sqrt{\frac{s\log(n\vee p)}{n}} + h^{\bar k} + \frac{\lambda\sqrt{s}}{n},$$
where we used that $\phi_{\max}(\tilde s_{\theta\tau}/\delta_n) \le C$ implied by Condition D and Lemma 8, and that $\lambda \ge \sqrt{n}\,\Phi^{-1}(1-\gamma/2p) \sim \sqrt{n\log(pn)}$ so that $\sqrt{\tilde s_{\theta\tau}\log(pn)/n} \lesssim (1/h)\sqrt{s\log(pn)/n} + h^{\bar k}$.
Next we construct a class of functions that satisfies the conditions required for $\mathcal F$ that is used in Lemma 5. Define the following classes of functions:
$$\mathcal M = \{x_i'\theta : \|\theta - \theta_{\tau 0}\| \le C\{\tfrac{1}{h}\sqrt{s\log(pn)/n} + h^{\bar k} + \tfrac{\lambda\sqrt s}{n}\},\ \|\theta\|_0 \le C\tilde s_{\theta\tau}\}$$
$$\mathcal G = \{x_i'\beta : \|\beta\|_0 \le Cs,\ \|\beta - \beta_\tau\| \le C\sqrt{s\log(pn)/n}\}$$
$$\mathcal J = \{\tilde f_i \le 2\bar f : \|\tilde\eta_u\|_0 \le Cs,\ \|\tilde\eta_u - \eta_u\| \le C\sqrt{s\log(pn)/n},\ u\in U\} \qquad (3.40)$$
where $\tilde f_i = f(d_i, z_i, \{\tilde\eta_u : u\in U\})$, $\tilde{\tilde f}_i = f(d_i, z_i, \{\tilde{\tilde\eta}_u : u\in U\}) \in \mathcal J$ are functions that satisfy
$$|\tilde f_i - f_i| \le \frac{4\bar f}{h}\sum_{u\in U}|\tilde x_i'(\eta_u - \tilde\eta_u) + r_{ui}| + 4\bar fh^{\bar k} \quad\text{and}\quad |\tilde f_i - \tilde{\tilde f}_i| \le \frac{4\bar f}{h}\sum_{u\in U}|\tilde x_i'(\tilde\eta_u - \tilde{\tilde\eta}_u)|. \qquad (3.41)$$
In particular, take $\tilde f_i := \hat f_i \wedge 2\bar f$ where $\hat f_i$ is defined in (2.15) or (2.16). (Due to uniform consistency (3.38) the minimum with $2\bar f$ is not binding for $n$ large.) Therefore we have
$$\bar E[(\tilde f_i - f_i)^2] \lesssim h^{2\bar k} + (1/h^2)\sum_{u\in U}\bar E[|\tilde x_i'(\eta_u - \tilde\eta_u)|^2] + \bar E[r_{ui}^2] \lesssim h^{2\bar k} + h^{-2}s\log(pn)/n. \qquad (3.42)$$
We define the function class F as
F = {(g̃, ṽ := f˜(d − m̃)) : g̃ ∈ G, m̃ ∈ M, f˜ ∈ J }
The rates of convergence and sparsity guarantees for $\tilde\beta_\tau$ and $\tilde\theta_\tau$, and Condition D, imply that the estimates of the nuisance parameters $\hat g_i := x_i'\tilde\beta_\tau$ and $\hat v_i = \hat f_i(d_i - x_i'\tilde\theta_\tau)$ belong to the proposed $\mathcal F$ with probability $1-o(1)$.
We will proceed to verify that relations (2.35) and (2.36) hold under Conditions AS, M and D. We begin with (2.35). We have
$$\bar E[|\tilde g_i - g_{\tau i}|^2] \le 2\bar E[\{x_i'(\tilde\beta - \beta_\tau)\}^2] + 2\bar E[r_{\tau i}^2] \lesssim s\log(pn)/n \lesssim \delta_n n^{-1/2}$$
$$\begin{aligned}\bar E[|\tilde v_i|\,|\tilde g_i - g_{\tau i}|^2] &\le 2\bar E[|\tilde v_i|\,|x_i'(\tilde\beta - \beta_\tau)|^2] + 2\bar E[|\tilde v_i|r_{\tau i}^2]\\ &\le 2\{\bar E[\tilde v_i^2\{x_i'(\tilde\beta-\beta_\tau)\}^2]\bar E[\{x_i'(\tilde\beta-\beta_\tau)\}^2]\}^{1/2} + 2\{\bar E[\tilde v_i^2r_{\tau i}^2]\bar E[r_{\tau i}^2]\}^{1/2}\\ &\lesssim \{s\log(pn)/n\}^{1/2}\bar f\{\bar E[(d_i - x_i'\tilde\theta)^2\{x_i'(\tilde\beta-\beta_\tau)\}^2] + \bar E[(d_i - x_i'\tilde\theta)^2r_{\tau i}^2]\}^{1/2}\\ &\lesssim s\log(pn)/n \lesssim \delta_n n^{-1/2}\end{aligned}$$
since k(1, −θ)k 6 C and fi ∨ f˜i 6 2f¯, Ē[(x̃′i ξ)4] 6 Ckξk4, Ē[(x̃′i ξ)2 rτ2i ] 6 Ckξk2Ē[rτ2i ] and
Ē[rτ2i ] . s/n which hold by Conditions AS and M. Similarly we have
Ē[|ṽi − vi |{g̃i − gτ i }2 ] 6 Ē[|f˜i − fi ||di − x′i θτ 0 |{g̃i − gτ i }2 ] + Ē[f˜i |x′i (θ̃ − θτ 0 )|{g̃i − gτ i }2 ]
. s log(pn)/n . δn n−1/2 .
as $|\tilde f_i - f_i| \le 2\bar f$. Moreover we have that
$$\begin{aligned}\bar E[(\tilde v_i - v_i)^2] &\le 2\bar E[(\tilde f_i - f_i)^2(d_i - x_i'\theta_{\tau 0})^2 + \tilde f_i^2\{x_i'(\tilde\theta - \theta_{\tau 0})\}^2]\\ &\le 4\bar f\,\bar E[(\tilde f_i - f_i)^2]^{1/2}\bar E[(d_i - x_i'\theta_{\tau 0})^4]^{1/2} + 2\bar f^2\bar E[\{x_i'(\tilde\theta - \theta_{\tau 0})\}^2]\\ &\lesssim \tfrac{1}{h}\sqrt{s\log(pn)/n} + h^{\bar k} + \lambda\sqrt s/n \lesssim \delta_n\end{aligned}$$
under (3.42), $h^{-2}s\log(pn) \le \delta_n^2n$, $h \le \delta_n$, and $\lambda\sqrt s \le \delta_n n$. The next relation follows from
$$\begin{aligned}\bar E[|\tilde v_i - v_i|\,|\tilde g_i - g_{\tau i}|] &\le \bar E[|\tilde f_i - f_i|\,|d_i - x_i'\theta_{\tau 0}|\,|\tilde g_i - g_{\tau i}|] + \bar E[\tilde f_i|x_i'(\tilde\theta - \theta_{\tau 0})|\,|\tilde g_i - g_{\tau i}|]\\ &\le \bar E[|\tilde f_i - f_i|^2]^{1/2}\bar E[|d_i - x_i'\theta_{\tau 0}|^2|\tilde g_i - g_{\tau i}|^2]^{1/2} + \bar f\,\bar E[|x_i'(\tilde\theta - \theta_{\tau 0})|^2]^{1/2}\bar E[|\tilde g_i - g_{\tau i}|^2]^{1/2}\\ &\lesssim \{s\log(pn)/n\}^{1/2}\{\tfrac{1}{h}\sqrt{s\log(pn)/n} + h^{\bar k}\} \lesssim \delta_n n^{-1/2}\end{aligned}$$
under $h^{-2}s^2\log^2(pn) \le \delta_n^2n$ and $h^{\bar k}\sqrt{s\log(pn)} \le \delta_n$.
Finally, since |Ē[fi vi rτ i ]| 6 δn n−1/2 from Condition M and Ē[fi vi xi ] = 0 from (2.7), we
have
|Ē[fi vi {g̃i − gτ i }]| 6 |Ē[fi vi x′i (β̃ − βτ )]| + |Ē[fi vi rτ i ]| 6 δn n−1/2 .
Next we verify relation (2.36). By the triangle inequality we have
$$\begin{aligned}&\sup_{|\alpha-\alpha_\tau|\le\delta_n^2,\,\tilde h\in\mathcal F}\left|(E_n-\bar E)[(\tau - 1\{y_i\le d_i\alpha + \tilde g_i\})\tilde v_i - (\tau - 1\{y_i\le d_i\alpha + g_{\tau i}\})v_i]\right|\\ &\le \sup_{|\alpha-\alpha_\tau|\le\delta_n^2,\,\tilde h\in\mathcal F}\left|(E_n-\bar E)[(1\{y_i\le d_i\alpha + g_{\tau i}\} - 1\{y_i\le d_i\alpha + \tilde g_i\})v_i]\right| \qquad (3.43)\\ &\quad+ \sup_{|\alpha-\alpha_\tau|\le\delta_n^2,\,\tilde h\in\mathcal F}\left|(E_n-\bar E)[(\tau - 1\{y_i\le d_i\alpha + \tilde g_i\})(\tilde v_i - v_i)]\right|\end{aligned}$$
Consider the first term of the right hand side in (3.43). Note that $F_1 := \{(1\{y_i\le d_i\alpha + g_{\tau i}\} - 1\{y_i\le d_i\alpha + \tilde g_i\})v_i : \tilde g\in\mathcal G, |\alpha-\alpha_\tau|\le\delta_n^2\} \subset F_{1a} - F_{1b}$, where $F_{1a} := \{1\{y_i\le d_i\alpha + g_{\tau i}\}v_i : |\alpha-\alpha_\tau|\le\delta_n^2\}$ is the product of a VC class of dimension 1 with the random variable $v$, and $F_{1b} := \{1\{y_i\le d_i\alpha + \tilde g_i\}v_i : \tilde g\in\mathcal G, |\alpha-\alpha_\tau|\le\delta_n^2\}$ is the product of $v$ with the union of $\binom{p}{Cs}$ VC classes of dimension $Cs$. Therefore, its entropy number satisfies $N(\epsilon\|F_1\|_{Q,2}, F_1, \|\cdot\|_{Q,2}) \le (A/\epsilon)^{C's}$, where we can take the envelope $F_1(y,d,x) = 2|v|$. Since for any function $m\in F_1$ we have
$$\bar E[m^2] = \bar E[(1\{y_i\le d_i\alpha + g_{\tau i}\} - 1\{y_i\le d_i\alpha + \tilde g_i\})^2v_i^2] \le \bar f\,\bar E[|g_{\tau i} - \tilde g_i|v_i^2] \le \bar f\,\bar E[|g_{\tau i} - \tilde g_i|^2]^{1/2}\bar E[v_i^4]^{1/2} \lesssim \{s\log(pn)/n\}^{1/2},$$
from the conditional density function being uniformly bounded and the bounded fourth
moment assumption in Condition M(i), by Lemma 9 with σ := C{s log(pn)/n}1/4 , a = pn,
kF1 kP,2 = Ē[vi2 ]1/2 6 C, and kMkP,2 6 Kq we have with probability 1 − o(1)
$$\sup_{m\in F_1}|(E_n-\bar E)[m]| \lesssim \frac{\{s\log(pn)/n\}^{1/4}}{n^{1/2}}\sqrt{s\log(pn)} + n^{-1}sK_q\log(pn) \lesssim \delta_n n^{-1/2}$$
under s3 log3 (pn) 6 δn4 n and Kq2 s2 log2 (pn) 6 δn2 n.
Next consider the second term of the right hand side in (3.43). Note that $F_2 := \{(\tau - 1\{y_i\le d_i\alpha + \tilde g_i\})(\tilde v_i - v_i) : (\tilde g,\tilde v)\in\mathcal F, |\alpha-\alpha_\tau|\le\delta_n^2\} \subseteq F_{2a}\cdot F_{2b}$, where $F_{2a} := \{\tau - 1\{y_i\le d_i\alpha + \tilde g_i\} : \tilde g\in\mathcal G\}$ is a constant minus the union of $\binom{p}{Cs}$ VC classes of dimension $Cs$, and $F_{2b} := \{\tilde v_i - v_i : (\tilde g,\tilde v)\in\mathcal F\}$. Note that standard entropy calculations yield
$$N(\epsilon\|F_{2a}F_{2b}\|_{Q,2}, F_2, \|\cdot\|_{Q,2}) \le N(\tfrac{\epsilon}{2}\|F_{2a}\|_{Q,2}, F_{2a}, \|\cdot\|_{Q,2})\,N(\tfrac{\epsilon}{2}\|F_{2b}\|_{Q,2}, F_{2b}, \|\cdot\|_{Q,2}).$$
Furthermore, since $\tilde v_i - v_i = (\tilde f_i - f_i)v_i/f_i + \tilde f_ix_i'(\theta_{\tau 0} - \tilde\theta)$, we have
$$F_{2b} \subset F_{2b}' + F_{2b}'' := (\mathcal J - \{f_i\})\cdot\{v_i/f_i\} + \mathcal J\cdot(\{x_i'\theta_{\tau 0}\} - \mathcal M)$$
and $F_2 \subset F_{2a}\cdot F_{2b}' + F_{2a}\cdot F_{2b}''$. By (3.41), a covering for $\mathcal J$ can be constructed based on a covering for $B_u := \{\tilde\eta_u : \|\tilde\eta_u\|_0\le Cs, \|\tilde\eta_u - \eta_u\|\le C\sqrt{s\log(pn)/n}\}$, which is the union of $\binom{p}{Cs}$ sparse balls of dimension $Cs$. It follows that for the envelope $F_{\mathcal J} := 2\bar f\vee K_q^{-1}\|\tilde x_i\|_\infty$ we have
$$N(\epsilon\|F_{\mathcal J}\|_{Q,2}, \mathcal J, \|\cdot\|_{Q,2}) \le \prod_{u\in U}N\left(\frac{\epsilon h/4\bar f}{\sqrt{|U|}K_q\sqrt{2Cs}}, B_u, \|\cdot\|_2\right).$$
For any $m\in F_{2a}\cdot F_{2b}'$ we have
$$\bar E[m^2] \le \bar E[(\tilde f_i - f_i)^2v_i^2/f_i^2] \lesssim h^{-2}\sum_{u\in U}\bar E[\{|\tilde x_i'(\tilde\eta_u - \eta_u)|^2 + r_{ui}^2\}v_i^2/f_i^2] + h^{2\bar k}\bar E[v_i^2/f_i^2] \lesssim h^{-2}s\log(pn)/n + h^{2\bar k}.$$
Therefore, by Lemma 9 with $\sigma := \tfrac{1}{h}\sqrt{s\log(pn)/n} + h^{\bar k}$, $a = pn$, $\|F_2'\|_{P,2} = \|(2\bar f\vee K_q^{-1}\|\tilde x_i\|_\infty)v_i/f_i\|_{P,2} \le (1+2\bar f)K_q$, and $\|M\|_{P,2} \le (1+2\bar f)K_q$, we have with probability $1-o(1)$
$$\sup_{m\in F_{2a}\cdot F_{2b}'}|(E_n-\bar E)(m)| \lesssim \frac{\tfrac{1}{h}\sqrt{s\log(pn)/n} + h^{\bar k}}{n^{1/2}}\sqrt{s\log(pn)} + n^{-1}sK_q\log(pn) \lesssim \delta_n n^{-1/2}$$
under $h^{-2}s^2\log^2(pn)\le\delta_n^2n$, $h^{\bar k}\sqrt{s\log(pn)}\le\delta_n$ and $K_q^2s^2\log^2(pn)\le\delta_n^2n$.
Moreover, $\mathcal M$ is the union of $\binom{p}{C\tilde s_{\theta\tau}}$ VC subgraph classes of dimension $C\tilde s_{\theta\tau}$, and for any $m\in F_{2a}\cdot F_{2b}''$ we have
$$\bar E[m^2] \le \bar E[\tilde f_i^2|x_i'(\theta_{\tau 0} - \tilde\theta)|^2] \le 4\bar f^2\bar E[|x_i'(\theta_{\tau 0} - \tilde\theta)|^2] \lesssim h^{-2}s\log(pn)/n + h^{2\bar k} + (\lambda/n)^2s.$$
Therefore, by Lemma 9 with $\sigma := C\{\tfrac{1}{h}\sqrt{s\log(pn)/n} + h^{\bar k} + \lambda\sqrt s/n\}$, $a = pnK_qs$, $\|F_2''\|_{P,2} = \|(2\bar f\vee K_q^{-1}\|\tilde x_i\|_\infty)\|x_i\|_\infty\|_{P,2} \le (1+2\bar f)K_q$, and $\|M\|_{P,2} \le (1+2\bar f)K_q$, we have with probability $1-o(1)$
$$\sup_{m\in F_{2a}\cdot F_{2b}''}|(E_n-\bar E)(m)| \lesssim \frac{\tfrac{1}{h}\sqrt{s\log(pn)/n} + h^{\bar k} + \lambda\sqrt s/n}{n^{1/2}}\sqrt{\tilde s_{\theta\tau}\log(pn)} + n^{-1}\tilde s_{\theta\tau}K_q\log(pn) \lesssim \delta_n n^{-1/2}$$
under the conditions $h^{-2}s\tilde s_{\theta\tau}\log^2(pn)\le\delta_n^2n$, $h^{\bar k}\sqrt{\tilde s_{\theta\tau}\log(pn)}\le\delta_n$, $\lambda\sqrt{\tilde s_{\theta\tau}\log(pn)}\le\delta_n n$ and $K_q^2\tilde s_{\theta\tau}^2\log^2(pn)\le\delta_n^2n$ assumed in Condition D.
Next we verify the second part of Condition IQR(iii), namely (2.37). Note that since $A_\tau \subset \{\alpha : |\alpha - \alpha_\tau| \le C/\log n + |\tilde\alpha_\tau - \alpha_\tau|\}$, $|\tilde\alpha_\tau - \alpha_\tau| \le C\sqrt{s\log(pn)/n}$, $\check\alpha_\tau\in A_\tau$ and $s\log(pn)\le\delta_n^2n$ imply the required consistency $|\check\alpha_\tau - \alpha_\tau|\le\delta_n$. To show the other relation in (2.37), which is equivalent to showing that with probability $1-o(1)$
$$|E_n[\psi_{\check\alpha_\tau,\hat h}(y_i,d_i,z_i)]| \le \delta_n^{1/2}n^{-1/2},$$
note that for any $\alpha\in A_\tau$ (since it implies $|\alpha - \alpha_\tau|\le\delta_n$) and $\hat h\in\mathcal F$, we have with probability $1-o(1)$
$$|E_n[\psi_{\alpha,\hat h}(y_i,d_i,z_i)]| \lesssim |E_n[\psi_{\alpha_\tau,h_0}(y_i,d_i,z_i)] + \bar E[f_id_iv_i](\alpha - \alpha_\tau)| + O(\delta_n|\alpha - \alpha_\tau|\bar E[d_i^2|v_i|] + \delta_nn^{-1/2})$$
from relations (7.64) with $\alpha$ instead of $\check\alpha_\tau$, (7.68), (7.69), and (7.71). Letting $\alpha^* = \alpha_\tau - \{\bar E[f_id_iv_i]\}^{-1}E_n[\psi_{\alpha_\tau,h_0}(y_i,d_i,z_i)]$, we have $|E_n[\psi_{\alpha^*,\hat h}(y_i,d_i,z_i)]| = O(\delta_n^{1/2}n^{-1/2})$ with probability $1-o(1)$ since $|\alpha^* - \alpha_\tau| \lesssim_P n^{-1/2}$. Thus, with probability $1-o(1)$
$$\frac{\{E_n[\psi_{\check\alpha_\tau,\hat h}(y_i,d_i,z_i)]\}^2}{E_n[\hat v_i^2]} \le L_n(\check\alpha_\tau) \le \frac{\min_{\alpha\in A_\tau}\{E_n[\psi_{\alpha,\hat h}(y_i,d_i,z_i)]\}^2}{\tau^2(1-\tau)^2E_n[\hat v_i^2]} \lesssim \delta_nn^{-1}$$
as $E_n[\hat v_i^2]$ is bounded away from zero with probability $1-o(1)$.
Next we verify Condition IQR(iv). The first condition follows from the uniform consistency of $\hat f_i$ and $\max_{i\le n}\|x_i\|_\infty\|\tilde\theta_\tau - \theta_{0\tau}\|_1 \lesssim_P K_qs\sqrt{\log(pn)/n} \to 0$ under $K_q^2s^2\log^2(pn)\le\delta_nn$. The second condition in IQR(iv) also follows since
$$\begin{aligned}\|1\{|\epsilon_i|\le|d_i(\alpha_\tau - \check\alpha_\tau) + g_{\tau i} - \hat g_i|\}\|_{2,n}^2 &\le E_n[1\{|\epsilon_i|\le|d_i(\alpha_\tau - \check\alpha_\tau)| + |x_i'(\tilde\beta_\tau - \beta_\tau)| + |r_{\tau i}|\}]\\ &\le E_n[1\{|\epsilon_i|\le3|d_i(\alpha_\tau - \check\alpha_\tau)|\}] + E_n[1\{|\epsilon_i|\le3|x_i'(\tilde\beta_\tau - \beta_\tau)|\}] + E_n[1\{|\epsilon_i|\le3|r_{\tau i}|\}]\\ &\lesssim_P \bar fK_q\sqrt{s\log(pn)/n}\end{aligned}$$
which implies the result under $K_q^2s^2\log(pn)\le\delta_nn$.
The consistency of $\hat\sigma_{1n}$ follows from $\|\hat v_i - v_i\|_{2,n}\to_P0$ and the moment conditions. The consistency of $\hat\sigma_{3n}^2$ follows from Lemma 5. Next we show the consistency of $\hat\sigma_{2n}^2 = \{E_n[\check f_i^2(d_i, x_{i\check T}')'(d_i, x_{i\check T}')]\}_{11}^{-1}$. Because $f_i \ge f$, sparse eigenvalues of size $\ell_ns$ are bounded away from zero and from above with probability $1-\Delta_n$, and $\max_{i\le n}|\hat f_i - f_i| = o_P(1)$ by Condition D, we have
$$\{E_n[\hat f_i^2(d_i, x_{i\check T}')'(d_i, x_{i\check T}')]\}_{11}^{-1} = \{E_n[f_i^2(d_i, x_{i\check T}')'(d_i, x_{i\check T}')]\}_{11}^{-1} + o_P(1).$$
So that $\hat\sigma_{2n}^2 - \tilde\sigma_{2n}^2 \to_P 0$ for
$$\tilde\sigma_{2n}^2 = \{E_n[f_i^2(d_i, x_{i\check T}')'(d_i, x_{i\check T}')]\}_{11}^{-1} = \{E_n[f_i^2d_i^2] - E_n[f_i^2d_ix_{i\check T}']\{E_n[f_i^2x_{i\check T}x_{i\check T}']\}^{-1}E_n[f_i^2x_{i\check T}d_i]\}^{-1}.$$
Next define $\check\theta_\tau[\check T] = \{E_n[f_i^2x_{i\check T}x_{i\check T}']\}^{-1}E_n[f_i^2x_{i\check T}d_i]$, which is the least squares estimator of regressing $f_id_i$ on $f_ix_{i\check T}$. Let $\check\theta_\tau$ denote the associated $p$-dimensional vector. By definition $f_ix_i'\theta_\tau = f_id_i - f_ir_{\theta\tau i} - v_i$, so that
$$\begin{aligned}\tilde\sigma_{2n}^{-2} &= E_n[f_i^2d_i^2] - E_n[f_i^2d_ix_i'\check\theta_\tau]\\ &= E_n[f_i^2d_i^2] - E_n[f_id_i\,f_ix_i'\theta_\tau] - E_n[f_id_i\,f_ix_i'(\check\theta_\tau - \theta_\tau)]\\ &= E_n[f_id_iv_i] + E_n[f_id_i\,f_ir_{\theta\tau i}] - E_n[f_id_i\,f_ix_i'(\check\theta_\tau - \theta_\tau)]\\ &= E_n[v_i^2] + E_n[v_i\{f_id_i - v_i\}] + E_n[f_id_i\,f_ir_{\theta\tau i}] - E_n[f_id_i\,f_ix_i'(\check\theta_\tau - \theta_\tau)].\end{aligned}$$
We have that $|E_n[v_i\{f_id_i - v_i\}]| = |E_n[f_iv_ix_i'\theta_{0\tau}]| = o_P(\delta_n)$ since
$$\bar E[(v_if_ix_i'\theta_{0\tau})^2] \le \bar f^2\{\bar E[v_i^4]\bar E[(x_i'\theta_{0\tau})^4]\}^{1/2} \le C$$
and $\bar E[f_iv_ix_i'\theta_{0\tau}] = 0$. Moreover, $E_n[f_id_i\,f_ir_{\theta\tau i}] \le \bar f^2\|d_i\|_{2,n}\|r_{\theta\tau i}\|_{2,n} = o_P(\delta_n)$ and $|E_n[f_id_i\,f_ix_i'(\check\theta_\tau - \theta_\tau)]| \le \bar f\|d_i\|_{2,n}\|f_ix_i'(\check\theta_\tau - \theta_\tau)\|_{2,n} = o_P(\delta_n)$ since $|\check T| \lesssim \tilde s_{\theta\tau} + s$ with probability $1-o(1)$ and $\mathrm{supp}(\hat\theta_\tau)\subset\check T$.
Part 2. Proof of the Double Selection. The analysis of $\hat\theta_\tau$ and $\hat\beta_\tau$ is identical to the corresponding analysis for Part 1. Let $\hat T_\tau^*$ denote the set of variables used in the last step, namely $\hat T_\tau^* = \mathrm{supp}(\hat\beta_\tau^\lambda)\cup\mathrm{supp}(\hat\theta_\tau)$, where $\hat\beta_\tau^\lambda$ denotes the thresholded estimator. Using the same arguments as in Part 1, we have with probability $1-o(1)$
$$|\hat T_\tau^*| \lesssim \hat s_\tau^* = s + \frac{ns\log p}{h^2\lambda^2} + \left(\frac{nh^{\bar k}}{\lambda}\right)^2.$$
Next we establish preliminary rates for $\check\eta_\tau := (\check\alpha_\tau, \check\beta_\tau')'$ that solves
$$\check\eta_\tau \in \arg\min_\eta E_n[\hat f_i\rho_\tau(y_i - (d_i, x_{i\hat T_\tau^*}')\eta)] \qquad (3.44)$$
where $\hat f_i = \hat f_i(d_i, x_i) > 0$ is a positive function of $(d_i, z_i)$. We will apply Lemma 2 as the problem above is a (post-selection) refitted quantile regression for $(\tilde y_i, \tilde x_i) = (\hat f_iy_i, \hat f_i(d_i, x_i')')$.
Indeed, conditional on $\{d_i, z_i\}_{i=1}^n$, the quantile moment condition holds as
$$\begin{aligned}E[(\tau - 1\{\hat f_iy_i\le\hat f_id_i\alpha_\tau + \hat f_ix_i'\beta_\tau + \hat f_ir_{\tau i}\})\hat f_ix_i] &= E[(\tau - 1\{y_i\le d_i\alpha_\tau + x_i'\beta_\tau + r_{\tau i}\})\hat f_ix_i]\\ &= \bar E[(\tau - F_{y_i|d_i,z_i}(d_i\alpha_\tau + x_i'\beta_\tau + r_{\tau i}))\hat f_ix_i]\\ &= 0.\end{aligned}$$
Since $\max_{i\le n}\hat f_i\vee\hat f_i^{-1} \lesssim 1$ with probability $1-o(1)$, the required side conditions of Lemma 2 hold as in Part 1. We can take $\bar r_\tau = C\sqrt{s\log(pn)/n}$ with probability $1-o(1)$. Finally, to provide the bound $\hat Q$, let $\eta_\tau = (\alpha_\tau, \beta_\tau')'$ and $\hat\eta_\tau = (\hat\alpha_\tau, \hat\beta_\tau^{\lambda\prime})'$, which are $Cs$-sparse vectors. By definition $\mathrm{supp}(\hat\beta_\tau^\lambda)\subset\hat T_\tau^*$ so that
$$E_n[\hat f_i\{\rho_\tau(y_i - (d_i,x_i')\check\eta_\tau) - \rho_\tau(y_i - (d_i,x_i')\eta_\tau)\}] \le E_n[\hat f_i\{\rho_\tau(y_i - (d_i,x_i')\hat\eta_\tau) - \rho_\tau(y_i - (d_i,x_i')\eta_\tau)\}].$$
By Lemma 12 we have with probability $1-o(1)$
$$E_n[\hat f_i\{\rho_\tau(y_i - (d_i,x_i')\hat\eta_\tau) - \rho_\tau(y_i - (d_i,x_i')\eta_\tau)\}] \le \hat Q := C\,\frac{s\log(pn)}{n}.$$
Thus, with probability $1-o(1)$, Lemma 2 implies
$$\|\hat f_i(d_i,x_i')(\check\eta_\tau - \eta_\tau)\|_{2,n} \lesssim \sqrt{\frac{(s+\hat s_\tau^*)\log(pn)}{n\,\phi_{\min}(Cs + C\hat s_\tau^*)}}.$$
Since $s\le\hat s_\tau^*$ and $1/\phi_{\min}(Cs + C\hat s_\tau^*)\le C'$ by Condition D, we have $\|\check\eta_\tau - \eta_\tau\| \lesssim \sqrt{\hat s_\tau^*\log p/n}$.
Next we construct an orthogonal score function based on the solution of the weighted quantile regression (3.44). By the first order condition for $(\check\alpha_\tau, \check\beta_\tau)$ in (3.44), we have for $s_i\in\partial\rho_\tau(y_i - d_i\check\alpha_\tau - x_i'\check\beta_\tau)$ that
$$E_n\left[s_i\hat f_i\begin{pmatrix}d_i\\ x_{i\hat T_\tau^*}\end{pmatrix}\right] = 0.$$
By taking a linear combination of the selected covariates via $(1, -\tilde\theta_\tau)$ and defining $\hat v_i = \hat f_i(d_i - x_{i\hat T_\tau^*}'\tilde\theta_\tau)$, we have $\psi_{\alpha,\hat h}(y_i,d_i,z_i) = (\tau - 1\{y_i\le d_i\alpha + x_i'\check\beta_\tau\})\hat v_i$. Since $s_i = \tau - 1\{y_i\le d_i\check\alpha_\tau + x_i'\check\beta_\tau\}$ if $y_i\ne d_i\check\alpha_\tau + x_i'\check\beta_\tau$,
$$\begin{aligned}|E_n[\psi_{\check\alpha_\tau,\hat h}(y_i,d_i,z_i)]| &\le |E_n[s_i\hat v_i]| + E_n[1\{y_i = d_i\check\alpha_\tau + x_i'\check\beta_\tau\}|\hat v_i|]\\ &\le E_n[1\{y_i = d_i\check\alpha_\tau + x_i'\check\beta_\tau\}|\hat v_i - v_i|] + E_n[1\{y_i = d_i\check\alpha_\tau + x_i'\check\beta_\tau\}|v_i|].\end{aligned}$$
Since $\hat s_\tau^* := |\hat T_\tau^*| \lesssim \tilde s_{\theta\tau}$ with probability $1-o(1)$, and $y_i$ has no point mass, $y_i = d_i\check\alpha_\tau + x_i'\check\beta_\tau$ for at most $C\tilde s_{\theta\tau}$ indices in $\{1,\ldots,n\}$. Therefore, we have with probability $1-o(1)$
$$E_n[1\{y_i = d_i\check\alpha_\tau + x_i'\check\beta_\tau\}|v_i|] \le n^{-1}C\tilde s_{\theta\tau}\max_{i\le n}|v_i| \lesssim_P n^{-1}\tilde s_{\theta\tau}K_q\delta_n^{-1/6} \lesssim \delta_n^{1/3}n^{-1/2} \qquad (3.45)$$
under $K_q^2\tilde s_{\theta\tau}^2\le\delta_nn$. Moreover,
$$E_n[1\{y_i = d_i\check\alpha_\tau + x_i'\check\beta_\tau\}|\hat v_i - v_i|] \le \sqrt{(1+|\hat T_\tau^*|)/n}\,\|\hat v_i - v_i\|_{2,n} \lesssim \delta_nn^{-1/2} \qquad (3.46)$$
with probability $1-o(1)$ under $\sqrt{\hat s_\tau^*}\|\hat v_i - v_i\|_{2,n}\le\delta_n$ holding with probability $1-o(1)$.
Therefore, the orthogonal score function implicitly created by the double selection estimator $\check\alpha_\tau$ approximately minimizes
$$\tilde L_n(\alpha) = \frac{|E_n[(\tau - 1\{y_i\le d_i\alpha + x_i'\check\beta_\tau\})\hat v_i]|^2}{E_n[(\tau - 1\{y_i\le d_i\alpha + x_i'\check\beta_\tau\})^2\hat v_i^2]},$$
and we have $\tilde L_n(\check\alpha_\tau) \lesssim \delta_n^{1/3}n^{-1}$ with probability $1-o(1)$ by (3.45) and (3.46). Thus the conditions of Lemma 5 hold and the double selection estimator has the stated linear representation.
4 Auxiliary Inequalities
In this section we collect auxiliary inequalities that we use in our analysis.
Lemma 6 Consider $\hat\eta_u$ and $\eta_u$ where $\|\eta_u\|_0\le s$. Denote by $\hat\eta_u^\mu$ the vector obtained by thresholding $\hat\eta_u$ as follows: $\hat\eta_{uj}^\mu = \hat\eta_{uj}1\{|\hat\eta_{uj}| > \mu/E_n[\tilde x_{ij}^2]^{1/2}\}$. We have that
$$\|\hat\eta_u^\mu - \eta_u\|_1 \le \|\hat\eta_u - \eta_u\|_1 + s\mu/\min_{j\le p}E_n[\tilde x_{ij}^2]^{1/2}$$
$$|\mathrm{supp}(\hat\eta_u^\mu)| \le s + \|\hat\eta_u - \eta_u\|_1\max_{j\le p}E_n[\tilde x_{ij}^2]^{1/2}/\mu$$
$$\|\tilde x_i'(\hat\eta_u^\mu - \eta_u)\|_{2,n} \le \|\tilde x_i'(\hat\eta_u - \eta_u)\|_{2,n} + \sqrt{\phi_{\max}(s)}\left\{\frac{2\sqrt s\,\mu}{\min_{j\le p}E_n[\tilde x_{ij}^2]^{1/2}} + \frac{2\|\hat\eta_u - \eta_u\|_1}{\sqrt s}\right\}$$
where $\phi_{\max}(m) = \sup_{1\le\|\theta\|_0\le m}\|\tilde x_i'\theta\|_{2,n}^2/\|\theta\|^2$.
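The thresholding rule of Lemma 6 and its first two bounds are easy to check numerically, since both are deterministic consequences of the definition. A small self-contained sketch (all data here are synthetic, and the choices of `n`, `p`, `s`, `mu` are arbitrary illustrative values):

```python
import random, math

random.seed(0)
n, p, s = 200, 50, 5

X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
eta = [1.0] * s + [0.0] * (p - s)                       # s-sparse target eta_u
eta_hat = [e + 0.05 * random.gauss(0, 1) for e in eta]  # noisy estimate

# En[x_j^2]^{1/2}, column by column
col_norms = [math.sqrt(sum(X[i][j] ** 2 for i in range(n)) / n) for j in range(p)]
mu = 0.2
# Thresholding rule of Lemma 6: keep eta_hat_j only if |eta_hat_j| > mu / En[x_j^2]^{1/2}
eta_mu = [e if abs(e) > mu / cn else 0.0 for e, cn in zip(eta_hat, col_norms)]

l1_err = sum(abs(e - t) for e, t in zip(eta_hat, eta))
bound_l1 = l1_err + s * mu / min(col_norms)      # first bound of Lemma 6
bound_supp = s + l1_err * max(col_norms) / mu    # second bound of Lemma 6
l1_err_mu = sum(abs(e - t) for e, t in zip(eta_mu, eta))
support_size = sum(1 for e in eta_mu if e != 0.0)
```

Because the bounds hold deterministically, the checks below succeed for any draw of the data, not just on average.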
Proof. (Proof of Lemma 6) Let $T_u = \mathrm{supp}(\eta_u)$. The first relation follows from the triangle inequality:
$$\begin{aligned}\|\hat\eta_u^\mu - \eta_u\|_1 &= \|(\hat\eta_u^\mu - \eta_u)_{T_u}\|_1 + \|(\hat\eta_u^\mu)_{T_u^c}\|_1\\ &\le \|(\hat\eta_u^\mu - \hat\eta_u)_{T_u}\|_1 + \|(\hat\eta_u - \eta_u)_{T_u}\|_1 + \|(\hat\eta_u)_{T_u^c}\|_1\\ &\le \mu s/\min_{j\le p}E_n[\tilde x_{ij}^2]^{1/2} + \|(\hat\eta_u - \eta_u)_{T_u}\|_1 + \|(\hat\eta_u - \eta_u)_{T_u^c}\|_1\\ &= \mu s/\min_{j\le p}E_n[\tilde x_{ij}^2]^{1/2} + \|\hat\eta_u - \eta_u\|_1.\end{aligned}$$
To show the second result note that
$$\|\hat\eta_u - \eta_u\|_1 \ge \{|\mathrm{supp}(\hat\eta_u^\mu)| - s\}\mu/\max_{j\le p}E_n[\tilde x_{ij}^2]^{1/2}.$$
Therefore, we have $|\mathrm{supp}(\hat\eta_u^\mu)| - s \le \|\hat\eta_u - \eta_u\|_1\max_{j\le p}E_n[\tilde x_{ij}^2]^{1/2}/\mu$, which yields the result.
To show the third bound, we start from the triangle inequality
$$\|\tilde x_i'(\hat\eta_u^\mu - \eta_u)\|_{2,n} \le \|\tilde x_i'(\hat\eta_u^\mu - \hat\eta_u)\|_{2,n} + \|\tilde x_i'(\hat\eta_u - \eta_u)\|_{2,n}.$$
Without loss of generality assume the components are ordered so that $|(\hat\eta_u^\mu - \hat\eta_u)_j|$ is decreasing. Let $T_1$ be the set of $s$ indices corresponding to the largest values of $|(\hat\eta_u^\mu - \hat\eta_u)_j|$. Similarly define $T_k$ as the set of $s$ indices corresponding to the largest values of $|(\hat\eta_u^\mu - \hat\eta_u)_j|$ outside $\cup_{m=1}^{k-1}T_m$. Therefore, $\hat\eta_u^\mu - \hat\eta_u = \sum_{k=1}^{\lceil p/s\rceil}(\hat\eta_u^\mu - \hat\eta_u)_{T_k}$. Moreover, given the monotonicity of the components, $\|(\hat\eta_u^\mu - \hat\eta_u)_{T_k}\| \le \|(\hat\eta_u^\mu - \hat\eta_u)_{T_{k-1}}\|_1/\sqrt s$. Then, we have
$$\begin{aligned}\|\tilde x_i'(\hat\eta_u^\mu - \hat\eta_u)\|_{2,n} &= \Big\|\tilde x_i'\sum_{k=1}^{\lceil p/s\rceil}(\hat\eta_u^\mu - \hat\eta_u)_{T_k}\Big\|_{2,n}\\ &\le \|\tilde x_i'(\hat\eta_u^\mu - \hat\eta_u)_{T_1}\|_{2,n} + \sum_{k\ge2}\|\tilde x_i'(\hat\eta_u^\mu - \hat\eta_u)_{T_k}\|_{2,n}\\ &\le \sqrt{\phi_{\max}(s)}\|(\hat\eta_u^\mu - \hat\eta_u)_{T_1}\| + \sqrt{\phi_{\max}(s)}\sum_{k\ge2}\|(\hat\eta_u^\mu - \hat\eta_u)_{T_k}\|\\ &\le \sqrt{\phi_{\max}(s)}\,\mu\sqrt s/\min_{j\le p}E_n[\tilde x_{ij}^2]^{1/2} + \sqrt{\phi_{\max}(s)}\sum_{k\ge1}\|(\hat\eta_u^\mu - \hat\eta_u)_{T_k}\|_1/\sqrt s\\ &= \sqrt{\phi_{\max}(s)}\,\mu\sqrt s/\min_{j\le p}E_n[\tilde x_{ij}^2]^{1/2} + \sqrt{\phi_{\max}(s)}\,\|\hat\eta_u^\mu - \hat\eta_u\|_1/\sqrt s\\ &\le \sqrt{\phi_{\max}(s)}\{2\sqrt s\,\mu/\min_{j\le p}E_n[\tilde x_{ij}^2]^{1/2} + 2\|\hat\eta_u - \eta_u\|_1/\sqrt s\}\end{aligned}$$
where the last inequality follows from the first result and the triangle inequality.
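The block decomposition used above (sorting coordinates by magnitude, grouping them into blocks of size $s$, and bounding each block's Euclidean norm by the previous block's scaled $\ell_1$ norm) can be checked directly. A small sketch with synthetic data (the particular `p`, `s`, and random vector are arbitrary):

```python
import random, math

random.seed(2)
p, s = 40, 5
delta = [random.gauss(0, 1) for _ in range(p)]

# Order indices so |delta_j| is decreasing, then split into blocks T_1, T_2, ... of size s.
order = sorted(range(p), key=lambda j: -abs(delta[j]))
blocks = [order[k:k + s] for k in range(0, p, s)]

ok = True
for k in range(1, len(blocks)):
    l2_k = math.sqrt(sum(delta[j] ** 2 for j in blocks[k]))
    l1_prev = sum(abs(delta[j]) for j in blocks[k - 1])
    # Every entry in block k is below the average magnitude of block k-1,
    # hence ||delta_{T_k}|| <= ||delta_{T_{k-1}}||_1 / sqrt(s).
    ok = ok and (l2_k <= l1_prev / math.sqrt(s) + 1e-12)
```

Since each entry of block $k$ is at most the minimum entry of block $k-1$, which is at most its average $\|\delta_{T_{k-1}}\|_1/s$, the inequality holds deterministically for every draw.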
The following result follows from Theorem 7.4 of [16] and the union bound.
Lemma 7 (Moderate Deviation Inequality for Maximum of a Vector) Suppose that
$$S_j = \frac{\sum_{i=1}^n U_{ij}}{\sqrt{\sum_{i=1}^n U_{ij}^2}},$$
where the $U_{ij}$ are independent variables across $i$ with mean zero. We have that
$$P\left(\max_{1\le j\le p}|S_j| > \Phi^{-1}(1-\gamma/2p)\right) \le \gamma\left(1 + \frac{A}{\ell_n^3}\right),$$
where $A$ is an absolute constant, provided that for $\ell_n > 0$
$$0 \le \Phi^{-1}(1-\gamma/(2p)) \le \frac{n^{1/6}}{\ell_n}\min_{1\le j\le p}M[U_j] - 1, \qquad M[U_j] := \frac{\big(\frac{1}{n}\sum_{i=1}^n EU_{ij}^2\big)^{1/2}}{\big(\frac{1}{n}\sum_{i=1}^n E|U_{ij}|^3\big)^{1/3}}.$$

Lemma 8 Let $X_i$, $i = 1,\ldots,n$, be independent random vectors in $\mathbb R^p$. Let
$$\bar\delta_n := 2\sqrt{\bar C}\,K\sqrt{k}\,\log(1+k)\sqrt{\log(p\vee n)}\sqrt{\log n}/\sqrt{n},$$
where $K \ge \{E[\max_{1\le i\le n}\|X_i\|_\infty^2]\}^{1/2}$ and $\bar C$ is a universal constant. Then we have
$$E\left[\sup_{\|\alpha\|_0\le k,\|\alpha\|=1}\left|E_n[(\alpha'X_i)^2] - \bar E[(\alpha'X_i)^2]\right|\right] \le \bar\delta_n^2 + \bar\delta_n\sup_{\|\alpha\|_0\le k,\|\alpha\|=1}\sqrt{\bar E[(\alpha'X_i)^2]}.$$
Proof. It follows from Theorem 3.6 of [34], see [6] for details.
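The self-normalized statistic of Lemma 7 is easy to simulate. The following Monte Carlo sketch uses i.i.d. standard normal $U_{ij}$ (an illustrative assumption under which no moderate-deviation correction is needed), and checks that the exceedance frequency of the threshold $\Phi^{-1}(1-\gamma/2p)$ is of order $\gamma$:

```python
import math, random

random.seed(1)
n, p, gamma, reps = 500, 20, 0.05, 200

def norm_ppf(q, lo=-10.0, hi=10.0):
    # Standard normal quantile via bisection on the CDF.
    cdf = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    for _ in range(200):
        mid = (lo + hi) / 2
        if cdf(mid) < q:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

threshold = norm_ppf(1 - gamma / (2 * p))
exceed = 0
for _ in range(reps):
    max_s = 0.0
    for _j in range(p):
        u = [random.gauss(0, 1) for _ in range(n)]
        s_j = sum(u) / math.sqrt(sum(x * x for x in u))  # self-normalized sum S_j
        max_s = max(max_s, abs(s_j))
    exceed += max_s > threshold
rate = exceed / reps
```

With independent columns the exceedance rate concentrates near $\gamma$; the value of Lemma 7 is that the bound $\gamma(1 + A/\ell_n^3)$ holds without Gaussianity, as long as the moment ratio $M[U_j]$ controls the normal approximation.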
We will also use the following result of [14].
Lemma 9 (Maximal Inequality) Work with the setup above. Suppose that $F \ge \sup_{f\in\mathcal F}|f|$ is a measurable envelope for $\mathcal F$ with $\|F\|_{P,q} < \infty$ for some $q > 2$. Let $M = \max_{i\le n}F(W_i)$ and let $\sigma^2 > 0$ be any positive constant such that $\sup_{f\in\mathcal F}\|f\|_{P,2}^2 \le \sigma^2 \le \|F\|_{P,2}^2$. Suppose that there exist constants $a \ge e$ and $v \ge 1$ such that
$$\log\sup_QN(\epsilon\|F\|_{Q,2}, \mathcal F, \|\cdot\|_{Q,2}) \le v\log(a/\epsilon), \quad 0 < \epsilon\le1.$$
Then
$$E_P\left[\sup_{f\in\mathcal F}|G_n(f)|\right] \le K\left(\sqrt{v\sigma^2\log\frac{a\|F\|_{P,2}}{\sigma}} + \frac{v\|M\|_{P,2}}{\sqrt n}\log\frac{a\|F\|_{P,2}}{\sigma}\right),$$
where $K$ is an absolute constant. Moreover, for every $t\ge1$, with probability $> 1 - t^{-q/2}$,
$$\sup_{f\in\mathcal F}|G_n(f)| \le (1+\alpha)E_P\left[\sup_{f\in\mathcal F}|G_n(f)|\right] + K(q)\left[(\sigma + n^{-1/2}\|M\|_{P,q})\sqrt t + \alpha^{-1}n^{-1/2}\|M\|_{P,2}t\right], \quad \forall\alpha>0,$$
where $K(q) > 0$ is a constant depending only on $q$. In particular, setting $a\ge n$ and $t = \log n$, with probability $> 1 - c(\log n)^{-1}$,
$$\sup_{f\in\mathcal F}|G_n(f)| \le K(q,c)\left(\sigma\sqrt{v\log\frac{a\|F\|_{P,2}}{\sigma}} + \frac{v\|M\|_{P,q}}{\sqrt n}\log\frac{a\|F\|_{P,2}}{\sigma}\right), \qquad (4.47)$$
where $\|M\|_{P,q} \le n^{1/q}\|F\|_{P,q}$ and $K(q,c) > 0$ is a constant depending only on $q$ and $c$.
5 Proofs for Section 2.1 of Supplementary Material
Proof. (Proof of Lemma 1) Let $\delta = \hat\eta_u - \eta_u$ and define
$$\hat R(\eta) = E_n[\rho_u(\tilde y_i - \tilde x_i'\eta)] - E_n[\rho_u(\tilde y_i - \tilde x_i'\eta_u - r_{ui})] - E_n[(u - 1\{\tilde y_i\le\tilde x_i'\eta_u + r_{ui}\})(\tilde x_i'\eta - \tilde x_i'\eta_u - r_{ui})].$$
By Lemma 11, $\hat R(\eta)\ge0$, $\bar E[\hat R(\eta_u)]\le\bar f\|r_{ui}\|_{2,n}^2/2$ and, with probability at least $1-\gamma$, $\hat R(\eta_u)\le\bar R_\gamma := 4\max\{\bar f\|r_{ui}\|_{2,n}^2, \|r_{ui}\|_{2,n}\sqrt{\log(8/\gamma)/n}\}\le4Cs\log(p/\gamma)/n$ from Condition PQR. By definition of $\hat\eta_u$ we have
$$\hat R(\hat\eta_u) - \hat R(\eta_u) + E_n[(u - 1\{\tilde y_i\le\tilde x_i'\eta_u + r_{ui}\})\tilde x_i']\delta = E_n[\rho_u(\tilde y_i - \tilde x_i'\hat\eta_u)] - E_n[\rho_u(\tilde y_i - \tilde x_i'\eta_u)] \le \frac{\lambda_u}{n}\|\eta_u\|_1 - \frac{\lambda_u}{n}\|\hat\eta_u\|_1. \qquad (5.48)$$
Let
$$N = \sqrt{8c\bar R_\gamma/f} + \frac{10}{f}\left\{\bar f\|r_{ui}\|_{2,n} + \frac{3c\lambda_u\sqrt s}{n\kappa_{2c}} + \frac{8(1+2c)\sqrt{s\log(16p/\gamma)}}{\sqrt n\,\kappa_{2c}} + \frac{8c\sqrt n\,\bar R_\gamma\sqrt{\log(16p/\gamma)}}{\lambda_u\{s\log(p/\gamma)/n\}^{1/2}}\right\}$$
denote the upper bound in the rate of convergence. Note that $N > \{s\log(p/\gamma)/n\}^{1/2}$.
Suppose that the result is violated, so that $\|\tilde x_i'\delta\|_{2,n} > N$. Then by convexity of the objective function in (2.26), there is also a vector $\tilde\delta$ such that $\|\tilde x_i'\tilde\delta\|_{2,n} = N$ and
$$E_n[\rho_u(\tilde y_i - \tilde x_i'(\tilde\delta + \eta_u))] - E_n[\rho_u(\tilde y_i - \tilde x_i'\eta_u)] \le \frac{\lambda_u}{n}\|\eta_u\|_1 - \frac{\lambda_u}{n}\|\tilde\delta + \eta_u\|_1. \qquad (5.49)$$
Next we will show that with high probability such $\tilde\delta$ cannot exist, implying that $\|\tilde x_i'\delta\|_{2,n}\le N$.
By the choice of $\lambda_u \ge c\Lambda_u(1-\gamma\mid\tilde x)$ the event $\Omega_1 := \{\frac{\lambda_u}{n} \ge c\|E_n[(u - 1\{\tilde y_i\le\tilde x_i'\eta_u + r_{ui}\})\tilde x_i]\|_\infty\}$ occurs with probability at least $1-\gamma$. The event $\Omega_2 := \{\hat R(\eta_u)\le\bar R_\gamma\}$ also holds with probability at least $1-\gamma$. Under $\Omega_1\cap\Omega_2$, and since $\hat R(\eta)\ge0$, we have
$$\begin{aligned}-\hat R(\eta_u) - \frac{\lambda_u}{cn}\|\tilde\delta\|_1 &\le \hat R(\eta_u + \tilde\delta) - \hat R(\eta_u) + E_n[(u - 1\{\tilde y_i\le\tilde x_i'\eta_u + r_{ui}\})\tilde x_i']\tilde\delta\\ &= E_n[\rho_u(\tilde y_i - \tilde x_i'(\tilde\delta + \eta_u))] - E_n[\rho_u(\tilde y_i - \tilde x_i'\eta_u)]\\ &\le \frac{\lambda_u}{n}\|\eta_u\|_1 - \frac{\lambda_u}{n}\|\tilde\delta + \eta_u\|_1\end{aligned} \qquad (5.50)$$
so that, redefining $c$ as $(c+1)/(c-1)$,
$$\|\tilde\delta_{T_u^c}\|_1 \le c\|\tilde\delta_{T_u}\|_1 + \frac{nc}{\lambda_u(c-1)}\hat R(\eta_u).$$
To establish that $\tilde\delta\in A_u := \Delta_{2c}\cup\{v : \|\tilde x_i'v\|_{2,n} = N, \|v\|_1\le2cn\bar R_\gamma/\lambda_u\}$ we consider two cases. If $\|\tilde\delta_{T_u^c}\|_1 > 2c\|\tilde\delta_{T_u}\|_1$ we have
$$\frac{1}{2}\|\tilde\delta_{T_u^c}\|_1 \le \frac{nc}{\lambda_u(c-1)}\hat R(\eta_u)$$
and consequently
$$\|\tilde\delta\|_1 \le \{1 + 1/(2c)\}\|\tilde\delta_{T_u^c}\|_1 \le \frac{2nc}{\lambda_u}\hat R(\eta_u).$$
Otherwise $\|\tilde\delta_{T_u^c}\|_1 \le 2c\|\tilde\delta_{T_u}\|_1$, and we have
$$\|\tilde\delta\|_1 \le (1+2c)\|\tilde\delta_{T_u}\|_1 \le (1+2c)\sqrt s\,\|\tilde x_i'\tilde\delta\|_{2,n}/\kappa_{2c}.$$
Thus with probability 1 − 2γ, δ̃ ∈ Au .
Therefore, under $\Omega_1\cap\Omega_2$, from (5.49), applying Lemma 13 (parts (1) and (3) to cover $\tilde\delta\in A_u$), for $\|\tilde x_i'\tilde\delta\|_{2,n} = N$ with probability at least $1-4\gamma$ we have
$$\begin{aligned}&\bar E[\rho_u(\tilde y_i - \tilde x_i'(\tilde\delta + \eta_u))] - \bar E[\rho_u(\tilde y_i - \tilde x_i'\eta_u)]\\ &\le \frac{\lambda_u}{n}\|\tilde\delta\|_1 + \frac{\|\tilde x_i'\tilde\delta\|_{2,n}}{\sqrt n}\left\{\frac{8(1+2c)\sqrt{s\log(16p/\gamma)}}{\kappa_{2c}} + \frac{8cn\bar R_\gamma\sqrt{\log(16p/\gamma)}}{\lambda_uN}\right\}\\ &\le 2c\bar R_\gamma + \|\tilde x_i'\tilde\delta\|_{2,n}\left\{\frac{3c\lambda_u\sqrt s}{n\kappa_{2c}} + \frac{8(1+2c)\sqrt{s\log(16p/\gamma)}}{\sqrt n\,\kappa_{2c}} + \frac{8c\sqrt n\,\bar R_\gamma\sqrt{\log(16p/\gamma)}}{\lambda_uN}\right\}\end{aligned}$$
where we used the bound $\|\tilde\delta\|_1 \le (1+2c)\sqrt s\,\|\tilde x_i'\tilde\delta\|_{2,n}/\kappa_{2c} + \frac{2nc}{\lambda_u}\bar R_\gamma$.
Using Lemma 10, since by assumption $\sup_{\bar\delta\in A_u}\frac{E_n[|r_{ui}|\,|\tilde x_i'\bar\delta|^2]}{E_n[|\tilde x_i'\bar\delta|^2]} \to 0$, we have
$$\bar E[\rho_u(\tilde y_i - \tilde x_i'(\eta_u + \tilde\delta)) - \rho_u(\tilde y_i - \tilde x_i'\eta_u)] \ge -\bar f\|r_{ui}\|_{2,n}\|\tilde x_i'\tilde\delta\|_{2,n} + \frac{f\|\tilde x_i'\tilde\delta\|_{2,n}^2}{4}\wedge\bar q_{A_u}f\|\tilde x_i'\tilde\delta\|_{2,n}.$$
Note that $N < 4\bar q_{A_u}$ for $n$ sufficiently large by the assumed side condition, so that the minimum on the right hand side is achieved by the quadratic part. Therefore we have
$$\frac{f\|\tilde x_i'\tilde\delta\|_{2,n}^2}{4} \le 2c\bar R_\gamma + \|\tilde x_i'\tilde\delta\|_{2,n}\left\{\bar f\|r_{ui}\|_{2,n} + \frac{3c\lambda_u\sqrt s}{n\kappa_{2c}} + \frac{8(1+2c)\sqrt{s\log(16p/\gamma)}}{\sqrt n\,\kappa_{2c}} + \frac{8c\sqrt n\,\bar R_\gamma\sqrt{\log(16p/\gamma)}}{\lambda_uN}\right\}$$
which implies that
$$\|\tilde x_i'\tilde\delta\|_{2,n} \le \sqrt{8c\bar R_\gamma/f} + \frac{8}{f}\left\{\bar f\|r_{ui}\|_{2,n} + \frac{3c\lambda_u\sqrt s}{n\kappa_{2c}} + \frac{8(1+2c)\sqrt{s\log(16p/\gamma)}}{\sqrt n\,\kappa_{2c}} + \frac{8c\sqrt n\,\bar R_\gamma\sqrt{\log(16p/\gamma)}}{\lambda_uN}\right\},$$
which violates the assumed condition that $\|\tilde x_i'\tilde\delta\|_{2,n} = N$ since $N > \{s\log(p/\gamma)/n\}^{1/2}$.
Proof. (Proof of Lemma 2) Let δbu = ηbu − ηu . By optimality of ηeu in (2.26) we have
with probability 1 − γ
b
En [ρu (ỹi − x̃′i ηeu )] − En [ρu (ỹi − x̃′i ηu )] 6 En [ρu (ỹi − x̃′i ηbu )] − En [ρu (ỹi − x̃′i ηu )] 6 Q.
(5.51)
b1/2 denote the upper bound in the rate of convergence where
Let N = 2f¯r̄u + Aε,n + 2Q
ηu − ηu )k2,n > N.
Aε,n is defined below. Suppose that the result is violated, so that kx̃′i (e
Then by convexity of the objective function in (2.26), there is also a vector δeu such that
kx̃′i δeu k2,n = N, kδeu k0 = ke
ηu − ηu k0 6 sbu + s and
b
En [ρu (ỹi − x̃′i (ηu + δeu ))] − En [ρu (ỹi − x̃′i ηu )] 6 Q.
(5.52)
Next we will show that with high probability such δeu cannot exist implying that kx̃′i (e
ηu −
ηu )k2,n 6 N with high probability.
By Lemma 13, with probability at least 1 − ε, we have
s
′
′
e
|(En − Ē)[ρu (ỹi − x̃i (ηu + δu )) − ρu (ỹi − x̃i ηu )]|
(b
su + s) log(16p/ε)
68
=: Aε,n . (5.53)
′e
nφmin (b
su + s)
kx̃i δu k2,n
Thus combining relations (5.51) and (5.53), we have
b
Ē[ρu (ỹi − x̃′i (ηu + δeu ))] − Ē[ρu (ỹi − x̃′i ηu )] 6 kx̃′i δeu k2,n Aε,n + Q
with probability at least 1 − ε. Invoking the sparse identifiability relation of Lemma 10,
E [|r | |x̃′ θ|2 ]
with the same probability, since supkδk0 6bsu +s nEnui[|x̃′ θ|2i ] → 0 by assumption,
i
b
(f kx̃′i δeu k22,n /4) ∧ qesbu f kx̃′i δeu k2,n 6 kx̃′i δeu k2,n f¯krui k2,n + Aε,n + Q.
36
where qesbu :=
f 3/2
2f¯′
inf kδk0 6bsu +s
kx̃′i θk32,n
.
En [|x̃′i θ|3 ]
Under the assumed growth condition, we have N < 4e
qsbu for n sufficiently large and the
minimum is achieved in the quadratic part. Therefore, for n sufficiently large, we have
b1/2 < N
kx̃′i δeu k2,n 6 f¯krui k2,n + Aε,n + 2Q
Thus with probability at least 1 − ε − γ − o(1) we have kx̃′i δeu k2,n < N which contradicts
its definition. Therefore, kx̃′i (e
ηu − ηu )k2,n 6 N with probability at least 1 − γ − ε − o(1).
5.1
Technical Lemmas for Quantile Regression
Lemma 10 For a subset A ⊂ Rp let
3/2
q̄A = (1/2) · (f 3/2 /f¯′ ) · inf En |x̃′i δ|2
/En |x̃′i δ|3
δ∈A
and assume that for all δ ∈ A
Then, we have
Ē[ρu (ỹi −
x̃′i (ηu
Ē |rui | · |x̃′i δ|2 6 (f /[4f¯′ ])Ē[|x̃′i δ|2 ].
+ δ))] − Ē[ρu (ỹi −
x̃′i ηu )]
f kx̃′i δk22,n
>
∧ q̄A f kx̃′i δk2,n − f¯krui k2,n kx̃′i δk2,n .
4
Proof. (Proof of Lemma 10) Let T = supp(ηu ), Qu (η) := Ē[ρu (ỹi − x̃′i η)], Ju =
1/2
(1/2)En [fi x̃i x̃′i ] and define kδku = kJu δk. The proof proceeds in steps.
Step 1. (Minoration). Define the maximal radius over which the criterion function can
be minorated by a quadratic function
1
′
2
¯
rA = sup r : Qu (ηu + δ) − Qu (ηu ) + f krui k2,n kx̃i δk2,n > kδku , ∀δ ∈ A, kδku 6 r .
2
r
37
Step 2 below shows that rA > q̄A . By construction of rA and the convexity of Qu (·) and
k · ku ,
Qu (ηu + δ) − Qu (ηu ) + f¯krui k2,n kx̃′i δk2,n >
n
o
kδk2u
kδku
′
¯
> 2 ∧ rA · inf δ̃∈A,kδ̃ku >rAQu (ηu + δ̃) − Qu (ηu ) + f krui k2,n kx̃i δ̃k2,n
o
n
2
2
2
u rA
> kδk2 u ∧ {q̄A kδku } .
> kδk2 u ∧ kδk
rA 4
Step 2. (rA > q̄A ) Let Fỹ|x̃ denote the conditional distribution of ỹ given x̃. From [20],
for any two scalars w and v we have that
ρu (w − v) − ρu (w) = −v(u − 1{w 6 0}) +
Z
0
v
(1{w 6 z} − 1{w 6 0})dz.
(5.54)
We will use (5.54) with w = ỹi − x̃′i ηu and v = x̃′i δ. Using the law of iterated expectations
and mean value expansion, we obtain for t̃x̃i ,t ∈ [0, t]
Qu (ηu + δ) − Qu (ηu ) + f¯krui k2,n kx̃′i δk2,n >
Qu (ηu + δ) − Qu (ηu ) + Ē [(u − 1{ỹi 6 x̃′i ηu })x̃′i δ] =
hR ′
i
x̃i δ
′
′
= Ē 0 Fỹi |x̃i (x̃i ηu + t) − Fỹi |x̃i (x̃i ηu )dt
hR ′
i
2
x̃ δ
= Ē 0 i tfỹi |x̃i (x̃′i ηu ) + t2 fỹ′ i |x̃i (x̃′i ηu + t̃x̃,t )dt
hR ′
i
x̃ δ
> kδk2u − 61 f¯′ Ē[|x̃′i δ|3 ] − Ē 0 i t[fỹi |x̃i (x̃′i ηu ) − fỹi |x̃i (gui )]dt
(5.55)
> 21 kδk2u + 14 f Ē[|x̃′i δ|2 ] − 61 f¯′ Ē[|x̃′i δ|3 ] − (f¯′ /2)Ē [|x̃′i ηu − gui | · |x̃′i δ|2 ] .
where the first inequality follows noting that Fỹi |x̃i (x̃′i ηu + rui ) = u and |Fỹi |x̃i (x̃′i ηu + rui ) −
Fỹ |x̃ (x̃′ ηu )| 6 f¯|rui |.
i
i
i
Moreover, by assumption we have
Ē [|x̃′i ηu − gui | · |x̃′i δ|2 ] = Ē [|rui | · |x̃′i δ|2 ]
6 (f /8)(2/f¯′)Ē[|x̃′ δ|2 ]
i
38
(5.56)
Note that for any δ such that kδku 6 q̄A we have kδku 6 q̄A 6 (1/2) · (f 3/2 /f¯′ ) ·
3/2
Ē [|x̃′i δ|2 ] /Ē [|x̃′i δ|3 ], it follows that (1/6)f¯′Ē[|x̃′i δ|3 ] 6 (1/8)f Ē[|x̃′i δ|2 ]. Combining this
with (5.56) we have
1
1
f Ē[|x̃′i δ|2 ] − f¯′ Ē[|x̃′i δ|3 ] − (f¯′ /2)Ē |x̃′i ηu − gui | · |x̃′i δ|2 > 0.
4
6
(5.57)
Combining (5.55) and (5.57) we have rA > q̄A .
b u )] 6 f¯krui k2 /2, R(η
b u ) > 0 and
Lemma 11 Under Condition PQR we have Ē[R(η
2,n
p
b u ) > 4 max{f¯krui k2 , krui k2,n log(8/γ)/n}) 6 γ.
P (R(η
2,n
b u ) > 0 by convexity of ρu . Let ǫui =
Proof. (Proof of Lemma 11) We have that R(η
R
b u ) = −En [rui 1 1{ǫui 6 −trui }−1{ǫui 6 0} dt > 0.
ỹi − x̃′ ηu −rui . By Knight’s identity, R(η
i
0
b u )] = En [rui
Ē[R(η
6
R1
0
R1
En [rui 0
Fyi |x̃i (x̃′i ηu + (1 − t)rui ) − Fyi |x̃i (x̃′i ηu + rui ) dt]
f¯trui dt] 6 f¯krui k2 /2.
2,n
b u ) 6 2f¯krui k2 ) > 1/2 by Markov’s inequality.
Therefore P (R(η
2,n
Define $z_{ui} := -\int_0^1 1\{\epsilon_{ui} \le -t r_{ui}\} - 1\{\epsilon_{ui} \le 0\}\,dt$, so that $\hat R(\eta_u) = \mathrm E_n[r_{ui}z_{ui}]$. We have $P(\mathrm E_n[r_{ui}z_{ui}] \le 2\bar f\,\|r_{ui}\|_{2,n}^2) \ge 1/2$, so that for $t \ge 4\bar f\,\|r_{ui}\|_{2,n}^2$ we have by Lemma 2.3.7 in [39]
\[
P\left(|\mathrm E_n[r_{ui}z_{ui}]| > t\right) \le 2P\left(|\mathrm E_n[r_{ui}z_{ui}\epsilon_i]| > t/4\right).
\]
Since $r_{ui}z_{ui}\epsilon_i$ is a symmetric random variable and $|z_{ui}| \le 1$, by Theorem 2.15 in [16] we have
\[
P\left(\sqrt n\,|\mathrm E_n[r_{ui}z_{ui}\epsilon_i]| > \bar t\sqrt{\mathrm E_n[r_{ui}^2]}\right) \le P\left(\sqrt n\,|\mathrm E_n[r_{ui}z_{ui}\epsilon_i]| > \bar t\sqrt{\mathrm E_n[r_{ui}^2 z_{ui}^2]}\right) \le 2\exp(-\bar t^2/2) \le \gamma/8
\]
for $\bar t \ge \sqrt{2\log(8/\gamma)}$. Setting $t = 4\max\{\bar f\,\|r_{ui}\|_{2,n}^2,\ \|r_{ui}\|_{2,n}\sqrt{\log(8/\gamma)/n}\}$ we have
\[
P\left(\mathrm E_n[r_{ui}z_{ui}] > t\right) \le 4P\left(\mathrm E_n[r_{ui}z_{ui}\epsilon_i] > t/4\right) \le \gamma.
\]
Lemma 12 Under Condition PQR, conditionally on $\{\tilde x_i, i=1,\dots,n\}$, for $\|\hat\eta_u\|_0 \le k$ and $N \le \|\tilde x_i'(\hat\eta_u - \eta_u)\|_{2,n} \le \bar N$, we have with probability $1-\gamma$
\[
\mathrm E_n[\rho_u(\tilde y_i - \tilde x_i'\hat\eta_u)] - \mathrm E_n[\rho_u(\tilde y_i - \tilde x_i'\eta_u)] \le \frac{\|\tilde x_i'(\hat\eta_u-\eta_u)\|_{2,n}}{\sqrt n}\left(4 + 4\sqrt{\frac{(k+s)\log(16p\{1+3\sqrt n\log(\bar N/N)\}/\gamma)}{\phi_{\min}(k+s)}}\right) + \bar f\,\|\tilde x_i'(\hat\eta_u-\eta_u)\|_{2,n}^2 + \bar f\,\|r_{ui}\|_{2,n}\|\tilde x_i'(\hat\eta_u-\eta_u)\|_{2,n}.
\]
Proof. (Proof of Lemma 12) By the triangle inequality we have
\[
\mathrm E_n[\rho_u(\tilde y_i - \tilde x_i'\hat\eta_u) - \rho_u(\tilde y_i - \tilde x_i'\eta_u)] \le |(\mathrm E_n - \bar{\mathrm E})[\rho_u(\tilde y_i - \tilde x_i'\hat\eta_u) - \rho_u(\tilde y_i - \tilde x_i'\eta_u)]| + |\bar{\mathrm E}[\rho_u(\tilde y_i - \tilde x_i'\hat\eta_u) - \rho_u(\tilde y_i - \tilde x_i'\eta_u)]|.
\]
The first term is bounded by Lemma 13. The second term is bounded using the identity (5.54) with $w = \tilde y_i - \tilde x_i'\eta_u$ and $v = \tilde x_i'\delta$, similarly to the argument in (5.55). Using the law of iterated expectations and a mean value expansion, we obtain for $\tilde t_{\tilde x_i,t} \in [0,t]$
\[
Q_u(\eta_u+\delta) - Q_u(\eta_u) - \bar f\,\|r_{ui}\|_{2,n}\|\tilde x_i'\delta\|_{2,n} \le Q_u(\eta_u+\delta) - Q_u(\eta_u) + \bar{\mathrm E}[(u - 1\{\tilde y_i \le \tilde x_i'\eta_u\})\tilde x_i'\delta] = \bar{\mathrm E}\left[\textstyle\int_0^{\tilde x_i'\delta} F_{\tilde y_i|\tilde x_i}(\tilde x_i'\eta_u+t) - F_{\tilde y_i|\tilde x_i}(\tilde x_i'\eta_u)\,dt\right] \tag{5.58}
\]
and noting that
\[
\mathrm E_n\left[\textstyle\int_0^{\tilde x_i'\delta} F_{\tilde y_i|\tilde x_i}(\tilde x_i'\eta_u+t) - F_{\tilde y_i|\tilde x_i}(\tilde x_i'\eta_u)\,dt\right] \le \bar f\,\mathrm E_n\left[\textstyle\int_0^{\tilde x_i'\delta} t\,dt\right] \le \bar f\,\|\tilde x_i'\delta\|_{2,n}^2.
\]
Lemma 13 Let $w_i(b) = \rho_u(\tilde y_i - \tilde x_i'\eta_u - b) - \rho_u(\tilde y_i - \tilde x_i'\eta_u)$. Then, conditional on $\{\tilde x_1,\dots,\tilde x_n\}$, we have with probability $1-\gamma$ that for vectors in the restricted set
\[
\sup_{\delta \in \Delta_c,\ N \le \|\tilde x_i'\delta\|_{2,n} \le \bar N} \mathbb G_n\left(\frac{w_i(\tilde x_i'\delta)}{\|\tilde x_i'\delta\|_{2,n}}\right) \le 4 + \frac{4(1+c)}{\kappa_c}\sqrt{s\log(16p\{1+3\sqrt n\log(\bar N/N)\}/\gamma)}.
\]
Similarly, for sparse vectors
\[
\sup_{1 \le \|\delta\|_0 \le k,\ N \le \|\tilde x_i'\delta\|_{2,n} \le \bar N} \mathbb G_n\left(\frac{w_i(\tilde x_i'\delta)}{\|\tilde x_i'\delta\|_{2,n}}\right) \le 4 + 4\sqrt{\frac{k\log(16p\{1+3\sqrt n\log(\bar N/N)\}/\gamma)}{\phi_{\min}(k)}}.
\]
Similarly, for $\ell_1$-bounded vectors
\[
\sup_{\|\delta\|_1 \le R_1,\ N \le \|\tilde x_i'\delta\|_{2,n} \le \bar N} \mathbb G_n\left(\frac{w_i(\tilde x_i'\delta)}{\|\tilde x_i'\delta\|_{2,n}}\right) \le 4 + 4\frac{R_1}{N}\sqrt{\log(16p\{1+3\sqrt n\log(\bar N/N)\}/\gamma)}.
\]
Proof. (Proof of Lemma 13) Let $w_i(b) = \rho_u(\tilde y_i - \tilde x_i'\eta_u - b) - \rho_u(\tilde y_i - \tilde x_i'\eta_u) \le |b|$. Note that $w_i(b) - w_i(a) \le |b-a|$. For any $\delta \in \mathbb R^p$, since $\rho_u$ is 1-Lipschitz, we have
\[
\mathrm{var}\left(\mathbb G_n\frac{w_i(\tilde x_i'\delta)}{\|\tilde x_i'\delta\|_{2,n}}\right) \le \frac{\mathrm E_n[\{w_i(\tilde x_i'\delta)\}^2]}{\|\tilde x_i'\delta\|_{2,n}^2} \le \frac{\mathrm E_n[|\tilde x_i'\delta|^2]}{\|\tilde x_i'\delta\|_{2,n}^2} \le 1.
\]
Then, by Lemma 2.3.7 in [38] (Symmetrization for Probabilities) we have for any $M > 1$
\[
P\left(\sup_{\delta\in\Delta_c} \mathbb G_n\frac{w_i(\tilde x_i'\delta)}{\|\tilde x_i'\delta\|_{2,n}} > M\right) \le \frac{2}{1-M^{-2}}\,P\left(\sup_{\delta\in\Delta_c} \mathbb G_n^o\frac{w_i(\tilde x_i'\delta)}{\|\tilde x_i'\delta\|_{2,n}} > M/4\right)
\]
where $\mathbb G_n^o$ is the symmetrized process.
Consider $F_t = \{\delta \in \Delta_c : \|\tilde x_i'\delta\|_{2,n} = t\}$. We will consider the families of $F_t$ for $t \in [N,\bar N]$. For any $\delta \in F_t$, $t \le \tilde t$, we have
\[
\begin{aligned}
\left|\mathbb G_n^o\frac{w_i(\tilde x_i'\delta)}{t} - \mathbb G_n^o\frac{w_i(\tilde x_i'\delta(\tilde t/t))}{\tilde t}\right|
&\le \frac1t\left|\mathbb G_n^o\left(w_i(\tilde x_i'\delta) - w_i(\tilde x_i'\delta[\tilde t/t])\right)\right| + \left|\mathbb G_n^o\left(w_i(\tilde x_i'\delta(\tilde t/t))\right)\right|\cdot\left|\frac1t - \frac1{\tilde t}\right| \\
&\le \sqrt n\,\mathrm E_n\left[\frac{|\tilde x_i'\delta|}{t}\right]\frac{|t-\tilde t|}{t} + \sqrt n\,\mathrm E_n(|\tilde x_i'\delta|)\,\frac{\tilde t}{t}\cdot t\left|\frac1t - \frac1{\tilde t}\right| \\
&= 2\sqrt n\,\mathrm E_n\left[\frac{|\tilde x_i'\delta|}{t}\right]\frac{|t-\tilde t|}{\tilde t} \le 2\sqrt n\,\frac{|t-\tilde t|}{\tilde t}.
\end{aligned}
\]
Let $T$ be an $\varepsilon$-net $\{N =: t_1, t_2, \dots, t_K := \bar N\}$ of $[N,\bar N]$ such that $|t_k - t_{k+1}|/t_k \le 1/[2\sqrt n]$. Note that we can achieve this with $|T| \le 3\sqrt n\,\log(\bar N/N)$.
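As a quick sanity check on this cardinality claim (our sketch, not part of the original proof): taking the geometric grid $t_{k+1} = t_k(1 + 1/(2\sqrt n))$ gives
\[
K \le \frac{\log(\bar N/N)}{\log\left(1 + \frac{1}{2\sqrt n}\right)} \le \frac{8}{3}\sqrt n\,\log(\bar N/N) \le 3\sqrt n\,\log(\bar N/N),
\]
using $\log(1+x) \ge \tfrac34 x$ for $0 \le x \le \tfrac12$.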
Therefore we have
\[
\sup_{\delta\in\Delta_c} \mathbb G_n^o\frac{w_i(\tilde x_i'\delta)}{\|\tilde x_i'\delta\|_{2,n}} \le 1 + \sup_{t\in T}\ \sup_{\delta\in\Delta_c,\ \|\tilde x_i'\delta\|_{2,n}=t} \mathbb G_n^o\frac{w_i(\tilde x_i'\delta)}{t} =: 1 + A^o.
\]
\[
P(A^o > K) \le \min_{\psi>0} \exp(-\psi K)\,\mathrm E[\exp(\psi A^o)] \le 8p|T|\min_{\psi>0}\exp(-\psi K)\exp\left(8\psi^2\tfrac{s(1+c)^2}{\kappa_c^2}\right) \le 8p|T|\exp\left(-K^2\Big/\Big[16\,\tfrac{s(1+c)^2}{\kappa_c^2}\Big]\right)
\]
where we set $\psi = K/[16\,s(1+c)^2/\kappa_c^2]$ and bounded
\[
\begin{aligned}
\mathrm E[\exp(\psi A^o)]
&\le_{(1)} 2|T|\sup_{t\in T} \mathrm E\left[\exp\left(\psi \sup_{\delta\in\Delta_c,\|\tilde x_i'\delta\|_{2,n}=t} \mathbb G_n^o\left(\frac{w_i(\tilde x_i'\delta)}{t}\right)\right)\right] \\
&\le_{(2)} 2|T|\sup_{t\in T} \mathrm E\left[\exp\left(2\psi \sup_{\delta\in\Delta_c,\|\tilde x_i'\delta\|_{2,n}=t} \mathbb G_n^o\left(\frac{\tilde x_i'\delta}{t}\right)\right)\right] \\
&\le_{(3)} 2|T|\sup_{t\in T} \mathrm E\left[\exp\left(2\psi \sup_{\delta\in\Delta_c,\|\tilde x_i'\delta\|_{2,n}=t} \frac{\|\delta\|_1}{t}\max_{j\le p}|\mathbb G_n^o(\tilde x_{ij})|\right)\right] \\
&\le_{(4)} 2|T|\,\mathrm E\left[\exp\left(4\psi\frac{\sqrt s(1+c)}{\kappa_c}\max_{j\le p}|\mathbb G_n^o(\tilde x_{ij})|\right)\right] \\
&\le_{(5)} 4p|T|\max_{j\le p} \mathrm E\left[\exp\left(4\psi\frac{\sqrt s(1+c)}{\kappa_c}\mathbb G_n^o(\tilde x_{ij})\right)\right] \\
&\le_{(6)} 8p|T|\exp\left(8\psi^2\frac{s(1+c)^2}{\kappa_c^2}\right)
\end{aligned}
\]
where (1) follows by $\exp(\max_{i\in I}|z_i|) \le 2|I|\max_{i\in I}\exp(z_i)$, (2) by the contraction principle (Theorem 4.12 [25]), (3) by $|\mathbb G_n^o(\tilde x_i'\delta)| \le \|\delta\|_1\|\mathbb G_n^o(\tilde x_i)\|_\infty$, (4) by $\sqrt s(1+c)\|\tilde x_i'\delta\|_{2,n}/\|\delta\|_1 \ge \kappa_c$, and (6) by $\mathrm E_n[x_{ij}^2] = 1$ and $\exp(z) + \exp(-z) \le 2\exp(z^2/2)$.
The second result follows similarly by noting that
\[
\sup_{1\le\|\delta\|_0\le k,\ \|\tilde x_i'\delta\|_{2,n}=t} \frac{\|\delta\|_1}{t} \le \sup_{1\le\|\delta\|_0\le k,\ \|\tilde x_i'\delta\|_{2,n}=t} \frac{\sqrt k\,\|\tilde x_i'\delta\|_{2,n}}{\sqrt{\phi_{\min}(k)}\,t} = \frac{\sqrt k}{\sqrt{\phi_{\min}(k)}}.
\]
The third result follows similarly by noting that for any $t \in [N,\bar N]$
\[
\sup_{\|\delta\|_1\le R_1,\ \|\tilde x_i'\delta\|_{2,n}=t} \frac{\|\delta\|_1}{t} \le \frac{R_1}{N}.
\]

6 Proofs for Section 2.2 of Supplementary Material
Lemma 14 (Choice of λ) Suppose Condition WL holds, let $c' > c > 1$, $\gamma \le 1/n^{1/3}$, and $\lambda \ge 2c'\sqrt n\,\Phi^{-1}(1-\gamma/2p)$. Then for $n \ge n_0(\delta_n, c', c)$ large enough
\[
P\left(\lambda/n \ge 2c\,\|\hat\Gamma_{\tau 0}^{-1}\mathrm E_n[f_i x_i v_i]\|_\infty\right) \ge 1 - \gamma\{1+o(1)\} - 4\Delta_n.
\]
Proof. (Proof of Lemma 14) Since $\hat\Gamma_{\tau 0jj} = \sqrt{\mathrm E_n[\hat f_i^2 x_{ij}^2 v_i^2]}$ and $\Gamma_{\tau 0jj} = \sqrt{\mathrm E_n[f_i^2 x_{ij}^2 v_i^2]}$, with probability at least $1-\Delta_n$ we have
\[
\max_{j\le p}|\hat\Gamma_{\tau 0jj} - \Gamma_{\tau 0jj}| \le \max_{j\le p}\sqrt{\mathrm E_n[(\hat f_i - f_i)^2 x_{ij}^2 v_i^2]} \le \delta_n^{1/2}
\]
by Condition WL(iii). Further, Condition WL implies that $\Gamma_{\tau 0jj}$ is bounded away from zero and from above uniformly in $j = 1,\dots,p$ and $n$. Thus we have $\|\hat\Gamma_{\tau 0}^{-1}\Gamma_{\tau 0}\|_\infty \to_P 1$, so that $\|\hat\Gamma_{\tau 0}^{-1}\Gamma_{\tau 0}\|_\infty \le \sqrt[4]{c'/c}$ with probability $1-\Delta_n$ for $n \ge n_0(\delta_n, c', c, \Gamma_{\tau 0})$. By the triangle inequality
\[
\|\hat\Gamma_{\tau 0}^{-1}\mathrm E_n[f_i x_i v_i]\|_\infty \le \|\hat\Gamma_{\tau 0}^{-1}\Gamma_{\tau 0}\|_\infty\,\|\Gamma_{\tau 0}^{-1}\mathrm E_n[f_i x_i v_i]\|_\infty. \tag{6.59}
\]
Next we will apply Lemma 7, which is based on self-normalized moderate deviation theory. Define $U_{ij} = f_i x_{ij} v_i - \mathrm E[f_i x_{ij} v_i]$, which is zero mean by construction, and $\mathrm E_n[f_i x_{ij} v_i] = \mathrm E_n[U_{ij}]$ since $\bar{\mathrm E}[f_i x_{ij} v_i] = 0$. Moreover, we have $\min_{j\le p}\bar{\mathrm E}[U_{ij}^2] \ge c$ by Condition WL(ii) and $\max_{j\le p}\bar{\mathrm E}[|U_{ij}|^3] \lesssim \max_{j\le p}\bar{\mathrm E}[|f_i x_{ij} v_i|^3] \le C$ since $U_{ij}$ is demeaned, the last bound following from Condition WL(ii). Using that by Condition WL(iii), with probability $1-\Delta_n$ we have $\max_{j\le p}|(\mathrm E_n - \bar{\mathrm E})[U_{ij}^2]| \le \delta_n$, and by Condition WL(ii) $\min_{j\le p}\bar{\mathrm E}[U_{ij}^2] \ge c$, we have that $\sqrt{\mathrm E_n[U_{ij}^2]} \le \sqrt[4]{c'/c}\,\sqrt{\mathrm E_n[f_i^2 x_{ij}^2 v_i^2]}$ with probability $1-\Delta_n$ for $n$ sufficiently large. Therefore,
\[
\begin{aligned}
P\left(\lambda/n \ge 2c\,\|\hat\Gamma_{\tau 0}^{-1}\mathrm E_n[f_i x_i v_i]\|_\infty\right)
&\ge P\left(\Phi^{-1}(1-\gamma/2p) \ge \tfrac{c}{c'}\sqrt[4]{\tfrac{c'}{c}}\,\sqrt n\,\|\Gamma_{\tau 0}^{-1}\mathrm E_n[f_i x_i v_i]\|_\infty\right) - \Delta_n \\
&= P\left(\Phi^{-1}(1-\gamma/2p) \ge \tfrac{c}{c'}\sqrt[4]{\tfrac{c'}{c}}\,\max_{j\le p}\frac{\sqrt n\,\mathrm E_n[f_i x_{ij} v_i]}{\sqrt{\mathrm E_n[f_i^2 x_{ij}^2 v_i^2]}}\right) \\
&\ge P\left(\Phi^{-1}(1-\gamma/2p) \ge \tfrac{c}{c'}\sqrt{\tfrac{c'}{c}}\,\max_{j\le p}\frac{\sqrt n\,\mathrm E_n[U_{ij}]}{\sqrt{\mathrm E_n[U_{ij}^2]}}\right) - 2\Delta_n \\
&\ge P\left(\Phi^{-1}(1-\gamma/2p) \ge \max_{j\le p}\frac{\sqrt n\,\mathrm E_n[U_{ij}]}{\sqrt{\mathrm E_n[U_{ij}^2]}}\right) - 2\Delta_n \\
&\ge 1 - 2p\,\bar\Phi(\Phi^{-1}(1-\gamma/2p))(1+o(1)) - 2\Delta_n \ge 1 - \gamma\{1+o(1)\} - 2\Delta_n,
\end{aligned}
\]
where the last relation holds by Condition WL.
Proof. (Proof of Lemma 3) Let $\hat\delta = \hat\theta_\tau - \theta_\tau$. By definition of $\hat\theta_\tau$ we have
\[
\begin{aligned}
\mathrm E_n[\hat f_i^2(x_i'\hat\delta)^2] - 2\mathrm E_n[\hat f_i^2(d_i - x_i'\theta_\tau)x_i]'\hat\delta
&= \mathrm E_n[\hat f_i^2(d_i - x_i'\hat\theta_\tau)^2] - \mathrm E_n[\hat f_i^2(d_i - x_i'\theta_\tau)^2] \\
&\le \tfrac\lambda n\|\hat\Gamma_\tau\theta_\tau\|_1 - \tfrac\lambda n\|\hat\Gamma_\tau\hat\theta_\tau\|_1
\le \tfrac\lambda n\|\hat\Gamma_\tau\hat\delta_{T_{\theta\tau}}\|_1 - \tfrac\lambda n\|\hat\Gamma_\tau\hat\delta_{T_{\theta\tau}^c}\|_1 \\
&\le \tfrac\lambda n u\|\hat\Gamma_{\tau 0}\hat\delta_{T_{\theta\tau}}\|_1 - \tfrac\lambda n \ell\|\hat\Gamma_{\tau 0}\hat\delta_{T_{\theta\tau}^c}\|_1. \tag{6.60}
\end{aligned}
\]
Therefore, using that $c_f^2 \ge \mathrm E_n[(\hat f_i^2 - f_i^2)^2 v_i^2/\{\hat f_i^2 f_i^2\}]$ and $c_r^2 \ge \mathrm E_n[\hat f_i^2 r_{\theta\tau i}^2]$, we have
\[
\begin{aligned}
\mathrm E_n[\hat f_i^2(x_i'\hat\delta)^2]
&\le 2\mathrm E_n[(\hat f_i^2 - f_i^2)v_i x_i/f_i]'\hat\delta + 2\mathrm E_n[\hat f_i^2 r_{\theta\tau i}x_i]'\hat\delta + 2(\hat\Gamma_0^{-1}\mathrm E_n[f_i v_i x_i])'(\hat\Gamma_{\tau 0}\hat\delta) + \tfrac\lambda n u\|\hat\Gamma_{\tau 0}\hat\delta_{T_{\theta\tau}}\|_1 - \tfrac\lambda n \ell\|\hat\Gamma_{\tau 0}\hat\delta_{T_{\theta\tau}^c}\|_1 \\
&\le 2\{c_f + c_r\}\{\mathrm E_n[\hat f_i^2(x_i'\hat\delta)^2]\}^{1/2} + 2\|\hat\Gamma_0^{-1}\mathrm E_n[f_i^2(d_i - x_i'\theta_\tau)x_i]\|_\infty\|\hat\Gamma_{\tau 0}\hat\delta\|_1 + \tfrac\lambda n u\|\hat\Gamma_{\tau 0}\hat\delta_{T_{\theta\tau}}\|_1 - \tfrac\lambda n \ell\|\hat\Gamma_{\tau 0}\hat\delta_{T_{\theta\tau}^c}\|_1 \\
&\le 2\{c_f + c_r\}\{\mathrm E_n[\hat f_i^2(x_i'\hat\delta)^2]\}^{1/2} + \tfrac{\lambda}{cn}\|\hat\Gamma_{\tau 0}\hat\delta\|_1 + \tfrac\lambda n u\|\hat\Gamma_{\tau 0}\hat\delta_{T_{\theta\tau}}\|_1 - \tfrac\lambda n \ell\|\hat\Gamma_{\tau 0}\hat\delta_{T_{\theta\tau}^c}\|_1 \\
&\le 2\{c_f + c_r\}\{\mathrm E_n[\hat f_i^2(x_i'\hat\delta)^2]\}^{1/2} + \tfrac\lambda n\left(u + \tfrac1c\right)\|\hat\Gamma_{\tau 0}\hat\delta_{T_{\theta\tau}}\|_1 - \tfrac\lambda n\left(\ell - \tfrac1c\right)\|\hat\Gamma_{\tau 0}\hat\delta_{T_{\theta\tau}^c}\|_1. \tag{6.61}
\end{aligned}
\]
Let $\tilde c = \frac{cu+1}{c\ell-1}\|\hat\Gamma_{\tau 0}\|_\infty\|\hat\Gamma_{\tau 0}^{-1}\|_\infty$. If $\hat\delta \notin \Delta_{\tilde c}$ we have $\left(u + \frac1c\right)\|\hat\Gamma_{\tau 0}\hat\delta_{T_{\theta\tau}}\|_1 \le \left(\ell - \frac1c\right)\|\hat\Gamma_{\tau 0}\hat\delta_{T_{\theta\tau}^c}\|_1$, so that
\[
\{\mathrm E_n[\hat f_i^2(x_i'\hat\delta)^2]\}^{1/2} \le 2\{c_f + c_r\}.
\]
Otherwise assume $\hat\delta \in \Delta_{\tilde c}$. In this case (6.61) yields
\[
\begin{aligned}
\mathrm E_n[\hat f_i^2(x_i'\hat\delta)^2]
&\le 2\{c_f+c_r\}\{\mathrm E_n[\hat f_i^2(x_i'\hat\delta)^2]\}^{1/2} + \tfrac\lambda n\left(u+\tfrac1c\right)\|\hat\Gamma_{\tau 0}\hat\delta_{T_{\theta\tau}}\|_1 \\
&\le 2\{c_f+c_r\}\{\mathrm E_n[\hat f_i^2(x_i'\hat\delta)^2]\}^{1/2} + \tfrac\lambda n\left(u+\tfrac1c\right)\frac{\sqrt s\,\{\mathrm E_n[\hat f_i^2(x_i'\hat\delta)^2]\}^{1/2}}{\hat\kappa_{\tilde c}}
\end{aligned}
\]
which implies
\[
\{\mathrm E_n[\hat f_i^2(x_i'\hat\delta)^2]\}^{1/2} \le 2\{c_f+c_r\} + \left(u+\tfrac1c\right)\frac{\lambda\sqrt s}{n\hat\kappa_{\tilde c}}.
\]
To establish the $\ell_1$-bound, first assume that $\hat\delta \in \Delta_{2\tilde c}$. In that case
\[
\|\hat\delta\|_1 \le (1+2\tilde c)\|\hat\delta_{T_{\theta\tau}}\|_1 \le (1+2\tilde c)\frac{\sqrt s\,\{\mathrm E_n[\hat f_i^2(x_i'\hat\delta)^2]\}^{1/2}}{\hat\kappa_{2\tilde c}} \le (1+2\tilde c)\left[\frac{2\sqrt s\,\{c_f+c_r\}}{\hat\kappa_{2\tilde c}} + \left(u+\tfrac1c\right)\frac{\lambda s}{n\hat\kappa_{\tilde c}\hat\kappa_{2\tilde c}}\right].
\]
Otherwise note that $\hat\delta \notin \Delta_{2\tilde c}$ implies that $\left(u+\frac1c\right)\|\hat\Gamma_{\tau 0}\hat\delta_{T_{\theta\tau}}\|_1 \le \frac12\left(\ell-\frac1c\right)\|\hat\Gamma_{\tau 0}\hat\delta_{T_{\theta\tau}^c}\|_1$, so that (6.61) gives
\[
\frac12\cdot\frac\lambda n\left(\ell-\frac1c\right)\|\hat\Gamma_{\tau 0}\hat\delta_{T_{\theta\tau}^c}\|_1 \le \{\mathrm E_n[\hat f_i^2(x_i'\hat\delta)^2]\}^{1/2}\left(2\{c_f+c_r\} - \{\mathrm E_n[\hat f_i^2(x_i'\hat\delta)^2]\}^{1/2}\right) \le \{c_f+c_r\}^2.
\]
Therefore
\[
\|\hat\delta\|_1 \le \left(1+\frac1{2\tilde c}\right)\|\hat\delta_{T_{\theta\tau}^c}\|_1 \le \left(1+\frac1{2\tilde c}\right)\|\hat\Gamma_{\tau 0}^{-1}\|_\infty\|\hat\Gamma_{\tau 0}\hat\delta_{T_{\theta\tau}^c}\|_1 \le \left(1+\frac1{2\tilde c}\right)\frac{2c\,\|\hat\Gamma_{\tau 0}^{-1}\|_\infty\,n}{(\ell c-1)\lambda}\{c_f+c_r\}^2.
\]
Proof. (Proof of Lemma 4) Note that $\|\hat f\|_\infty^2$ and $\|\hat\Gamma_0^{-1}\|_\infty$ are uniformly bounded with probability going to one. Under the assumption on the design, for $\mathcal M$ defined in Lemma 18 we have that $\min_{m\in\mathcal M}\phi_{\max}(m\wedge n)$ is uniformly bounded. Thus by Lemma 18 with probability $1-\gamma-o(1)$ we have
\[
\hat s \lesssim \left[\frac{n\{c_f+c_r\}}{\lambda} + \sqrt s\right]^2.
\]
The bound then follows from Lemma 15.
6.1
Technical Results for Post-Lasso with Estimated Weights
Lemma 15 (Performance of the Post-Lasso) Under Condition WL, let $\hat T_{\theta\tau}$ denote the support selected by $\hat\theta_\tau$, and $\tilde\theta_\tau$ be the Post-Lasso estimator based on $\hat T_{\theta\tau}$. Then we have for $\hat s_{\theta\tau} = |\hat T_{\theta\tau}|$, with probability $1-o(1)$,
\[
\|\hat f_i(x_i'\theta_\tau + r_{\theta\tau i} - x_i'\tilde\theta_\tau)\|_{2,n} \lesssim \frac{c_f\sqrt{\phi_{\max}(\hat s_{\theta\tau})}}{\sqrt{\phi_{\min}(\hat s_{\theta\tau})}\min_{i\le n}\hat f_i} + \frac{1}{\sqrt{\phi_{\min}(\hat s_{\theta\tau})}\min_{i\le n}\hat f_i}\sqrt{\frac{\hat s_{\theta\tau}\log p}{n}} + \min_{\mathrm{supp}(\theta)\subseteq\hat T_{\theta\tau}}\|\hat f_i(x_i'\theta_\tau + r_{\theta\tau i} - x_i'\theta)\|_{2,n}.
\]
Moreover, if in addition $\lambda$ satisfies (2.31), and $\ell\hat\Gamma_{\tau 0} \le \hat\Gamma_\tau \le u\hat\Gamma_{\tau 0}$ with $u \ge 1 \ge \ell \ge 1/c$ in the first stage for Lasso, then we have with probability $1-\gamma-o(1)$
\[
\min_{\mathrm{supp}(\theta)\subseteq\hat T_{\theta\tau}}\|\hat f_i(x_i'\theta_\tau + r_{\theta\tau i} - x_i'\theta)\|_{2,n} \le 3\{c_f+c_r\} + \left(u+\frac1c\right)\frac{\lambda\sqrt s}{n\kappa_{\tilde c}\min_{i\le n}\hat f_i} + 3\bar fC\sqrt{s/n}.
\]
Proof. (Proof of Lemma 15) Let $F = \mathrm{diag}(f)$, $\hat F = \mathrm{diag}(\hat f)$, $X = [x_1;\dots;x_n]'$, $m_\tau = X\theta_\tau + r_{\theta\tau}$, and for a set of indices $S \subset \{1,\dots,p\}$ define the projection matrix on the columns associated with the indices in $S$ as $P_S = FX[S](FX[S]'FX[S])^{-1}FX[S]'$ and $\hat P_S = \hat FX[S](X[S]'\hat F'\hat FX[S])^{-1}\hat FX[S]'$. Since $f_id_i = f_im_{\tau i} + v_i$ we have that $\hat f_id_i = \hat f_im_{\tau i} + v_i\hat f_i/f_i$ and we have
\[
\hat Fm_\tau - \hat FX\tilde\theta_\tau = (I - \hat P_{\hat T_{\theta\tau}})\hat Fm_\tau - \hat P_{\hat T_{\theta\tau}}\hat FF^{-1}v \tag{6.62}
\]
where $I$ is the identity operator. Therefore
\[
\|\hat Fm_\tau - \hat FX\tilde\theta_\tau\| \le \|(I - \hat P_{\hat T_{\theta\tau}})\hat Fm_\tau\| + \|\hat P_{\hat T_{\theta\tau}}\hat FF^{-1}v\|.
\]
Since $\|\hat FX[\hat T_{\theta\tau}]/\sqrt n\,(X[\hat T_{\theta\tau}]'\hat F'\hat FX[\hat T_{\theta\tau}]/n)^{-1}\| \le \|\hat F^{-1}\|_\infty\sqrt{1/\phi_{\min}(\hat s_{\theta\tau})}$, the last term in (6.62) satisfies
\[
\begin{aligned}
\|\hat P_{\hat T_{\theta\tau}}\hat FF^{-1}v\|
&\le \|\hat F^{-1}\|_\infty\sqrt{1/\phi_{\min}(\hat s_{\theta\tau})}\,\|X[\hat T_{\theta\tau}]'\hat F^2F^{-1}v/\sqrt n\| \\
&\le \|\hat F^{-1}\|_\infty\sqrt{1/\phi_{\min}(\hat s_{\theta\tau})}\left\{\|X[\hat T_{\theta\tau}]'\{\hat F^2 - F^2\}F^{-1}v/\sqrt n\| + \|X[\hat T_{\theta\tau}]'Fv/\sqrt n\|\right\} \\
&\le \|\hat F^{-1}\|_\infty\sqrt{1/\phi_{\min}(\hat s_{\theta\tau})}\left\{\|X[\hat T_{\theta\tau}]'\{\hat F^2 - F^2\}F^{-1}v/\sqrt n\| + \sqrt{\hat s_{\theta\tau}}\,\|X'Fv/\sqrt n\|_\infty\right\}.
\end{aligned}
\]
Condition WL(iii) implies that
\[
\|X[\hat T_{\theta\tau}]'\{\hat F^2-F^2\}F^{-1}v/\sqrt n\| \le \sup_{\|\alpha\|_0\le\hat s_{\theta\tau},\,\|\alpha\|\le 1}|\alpha'X[\hat T_{\theta\tau}]'\{\hat F^2-F^2\}F^{-1}v/\sqrt n| \le \sqrt n\,\sqrt{\phi_{\max}(\hat s_{\theta\tau})}\,c_f.
\]
Under Condition WL(iv), by Lemma 14 we have with probability $1-o(1)$
\[
\|X'Fv/\sqrt n\|_\infty \lesssim_P \sqrt{\log(pn)}\max_{1\le j\le p}\sqrt{\mathrm E_n[f_i^2x_{ij}^2v_i^2]}.
\]
Moreover, Condition WL(iv) also implies $\max_{1\le j\le p}\sqrt{\mathrm E_n[f_i^2x_{ij}^2v_i^2]} \lesssim 1$ with probability $1-o(1)$ since $\max_{1\le j\le p}|(\mathrm E_n-\bar{\mathrm E})[f_i^2x_{ij}^2v_i^2]| \le \delta_n$ with probability $1-\Delta_n$, and $\max_{1\le j\le p}\bar{\mathrm E}[f_i^2x_{ij}^2v_i^2] \le \max_{1\le j\le p}\{\bar{\mathrm E}[|f_ix_{ij}v_i|^3]\}^{2/3} \lesssim 1$.
The last statement follows from noting that the Lasso solution provides an upper bound to the approximation of the best model based on $\hat T_{\theta\tau}$, and the application of Lemma 3.
Lemma 16 (Empirical pre-sparsity for Lasso) Let $\hat T_{\theta\tau}$ denote the support selected by the Lasso estimator, $\hat s_{\theta\tau} = |\hat T_{\theta\tau}|$, assume $\lambda/n \ge c\,\|\mathrm E_n[\hat\Gamma_{\tau 0}^{-1}f_ix_iv_i]\|_\infty$, and $\ell\hat\Gamma_{\tau 0} \le \hat\Gamma_\tau \le u\hat\Gamma_{\tau 0}$ with $u \ge 1 \ge \ell \ge 1/c$. Then, for $c_0 = (uc+1)/(\ell c-1)$ and $\tilde c = (uc+1)/(\ell c-1)\,\|\hat\Gamma_{\tau 0}\|_\infty\|\hat\Gamma_{\tau 0}^{-1}\|_\infty$ we have
\[
\sqrt{\hat s_{\theta\tau}} \le 2\sqrt{\phi_{\max}(\hat s_{\theta\tau})}\,(1+3\|\hat f\|_\infty)\|\hat\Gamma_0^{-1}\|_\infty c_0\left[\frac{n\{c_f+c_r\}}{\lambda} + \frac{\sqrt s\,\|\hat\Gamma_{\tau 0}\|_\infty}{\kappa_{\tilde c}\min_{i\le n}\hat f_i}\right].
\]
Proof. (Proof of Lemma 16) Let $\hat F = \mathrm{diag}(\hat f)$, $R_{\theta\tau} = (r_{\theta\tau 1},\dots,r_{\theta\tau n})'$, and $X = [x_1;\dots;x_n]'$. We have from the optimality conditions that the Lasso estimator $\hat\theta_\tau$ satisfies
\[
2\mathrm E_n[\hat\Gamma_j^{-1}\hat f_i^2x_{ij}(d_i - x_i'\hat\theta_\tau)] = \mathrm{sign}(\hat\theta_{\tau j})\lambda/n \quad\text{for each } j \in \hat T_{\theta\tau}.
\]
Therefore, noting that $\|\hat\Gamma^{-1}\hat\Gamma_0\|_\infty \le 1/\ell$, we have
\[
\begin{aligned}
\sqrt{\hat s_{\theta\tau}}\,\lambda
&= 2\|(\hat\Gamma^{-1}X'\hat F^2(D - X\hat\theta_\tau))_{\hat T_{\theta\tau}}\| \\
&\le 2\|(\hat\Gamma^{-1}X'FV)_{\hat T_{\theta\tau}}\| + 2\|(\hat\Gamma^{-1}X'(\hat F^2-F^2)F^{-1}V)_{\hat T_{\theta\tau}}\| + 2\|(\hat\Gamma^{-1}X'\hat F^2R_{\theta\tau})_{\hat T_{\theta\tau}}\| + 2\|(\hat\Gamma^{-1}X'\hat F^2X(\theta_\tau - \hat\theta_\tau))_{\hat T_{\theta\tau}}\| \\
&\le \sqrt{\hat s_{\theta\tau}}\,\|\hat\Gamma^{-1}\hat\Gamma_0\|_\infty\,n\|\hat\Gamma_{\tau 0}^{-1}\mathrm E_n[f_ix_iv_i]\|_\infty + 2n\sqrt{\phi_{\max}(\hat s_{\theta\tau})}\,\|\hat\Gamma^{-1}\|_\infty\{c_f + \|\hat F\|_\infty c_r\} + 2n\sqrt{\phi_{\max}(\hat s_{\theta\tau})}\,\|\hat F\|_\infty\|\hat\Gamma^{-1}\|_\infty\|\hat f_ix_i'(\hat\theta_\tau - \theta_\tau)\|_{2,n} \\
&\le \sqrt{\hat s_{\theta\tau}}\,\tfrac1\ell\,n\|\hat\Gamma_{\tau 0}^{-1}\mathrm E_n[f_ix_iv_i]\|_\infty + 2n\sqrt{\phi_{\max}(\hat s_{\theta\tau})}\,\tfrac{\|\hat\Gamma_0^{-1}\|_\infty}{\ell}\left(c_f + \|\hat F\|_\infty c_r + \|\hat F\|_\infty\|\hat f_ix_i'(\hat\theta_\tau - \theta_\tau)\|_{2,n}\right),
\end{aligned}
\]
where we used that
\[
\begin{aligned}
\|(X'\hat F^2X(\theta_\tau-\hat\theta_\tau))_{\hat T_{\theta\tau}}\|
&\le \sup_{\|\delta\|_0\le\hat s_{\theta\tau},\,\|\delta\|\le 1}|\delta'X'\hat F^2X(\theta_\tau-\hat\theta_\tau)| \le \sup_{\|\delta\|_0\le\hat s_{\theta\tau},\,\|\delta\|\le 1}\|\delta'X'\hat F'\|\,\|\hat FX(\theta_\tau-\hat\theta_\tau)\| \\
&\le \sup_{\|\delta\|_0\le\hat s_{\theta\tau},\,\|\delta\|\le 1}\{\delta'X'\hat F^2X\delta\}^{1/2}\|\hat FX(\theta_\tau-\hat\theta_\tau)\| \le n\sqrt{\phi_{\max}(\hat s_{\theta\tau})}\,\|\hat f_i\|_\infty\|\hat f_ix_i'(\theta_\tau-\hat\theta_\tau)\|_{2,n},
\end{aligned}
\]
\[
\|(X'(\hat F^2-F^2)F^{-1}V)_{\hat T_{\theta\tau}}\| \le \sup_{\|\delta\|_0\le\hat s_{\theta\tau},\,\|\delta\|\le 1}|\delta'X'(\hat F^2-F^2)F^{-1}V| \le \sup_{\|\delta\|_0\le\hat s_{\theta\tau},\,\|\delta\|\le 1}\|X\delta\|\,\|(\hat F^2-F^2)F^{-1}V\| \le n\sqrt{\phi_{\max}(\hat s_{\theta\tau})}\,c_f.
\]
Since $\lambda/c \ge n\|\hat\Gamma_{\tau 0}^{-1}\mathrm E_n[f_ix_iv_i]\|_\infty$, and by Lemma 3, $\|\hat f_ix_i'(\hat\theta_\tau-\theta_\tau)\|_{2,n} \le 2\{c_f+c_r\} + \left(u+\frac1c\right)\frac{\lambda\sqrt s\,\|\hat\Gamma_{\tau 0}\|_\infty}{n\kappa_{\tilde c}\min_{i\le n}\hat f_i}$, we have
\[
\sqrt{\hat s_{\theta\tau}} \le \frac{2\sqrt{\phi_{\max}(\hat s_{\theta\tau})}\,\frac{\|\hat\Gamma_0^{-1}\|_\infty}{\ell}\left[\frac{nc_f}{\lambda}(1+2\|\hat F\|_\infty) + \frac{nc_r}{\lambda}3\|\hat F\|_\infty + \|\hat F\|_\infty\left(u+\frac1c\right)\frac{\sqrt s\,\|\hat\Gamma_{\tau 0}\|_\infty}{\kappa_{\tilde c}\min_{i\le n}\hat f_i}\right]}{1 - \frac1{c\ell}}.
\]
The result follows by noting that $(u+[1/c])/(1-1/[\ell c]) = c_0\ell$ by definition of $c_0$.
Lemma 17 (Sub-linearity of maximal sparse eigenvalues) Let $M$ be a positive semi-definite matrix. For any integer $k \ge 0$ and constant $\ell \ge 1$ we have $\phi_{\max}(\lceil\ell k\rceil)(M) \le \lceil\ell\rceil\phi_{\max}(k)(M)$.
Lemma 18 (Sparsity for Estimated Lasso under data-driven penalty) Consider the Lasso estimator $\hat\theta_\tau$, let $\hat s_{\theta\tau} = |\hat T_{\theta\tau}|$, and assume that $\lambda/n \ge c\,\|\mathrm E_n[\hat\Gamma_{\tau 0}^{-1}f_ix_iv_i]\|_\infty$. Consider the set
\[
\mathcal M = \left\{m \in \mathbb N : m > 8\phi_{\max}(m)(1+3\|\hat f\|_\infty)^2\|\hat\Gamma_0^{-1}\|_\infty^2c_0^2\left[\frac{n\{c_f+c_r\}}{\lambda} + \frac{\sqrt s\,\|\hat\Gamma_{\tau 0}\|_\infty}{\kappa_{\tilde c}\min_{i\le n}\hat f_i}\right]^2\right\}.
\]
Then,
\[
\hat s_{\theta\tau} \le 4\min_{m\in\mathcal M}\phi_{\max}(m)(1+3\|\hat f\|_\infty)^2\|\hat\Gamma_0^{-1}\|_\infty^2c_0^2\left[\frac{n\{c_f+c_r\}}{\lambda} + \frac{\sqrt s\,\|\hat\Gamma_{\tau 0}\|_\infty}{\kappa_{\tilde c}\min_{i\le n}\hat f_i}\right]^2.
\]
Proof. (Proof of Lemma 18) Let $L_n = 2(1+3\|\hat f\|_\infty)\|\hat\Gamma_0^{-1}\|_\infty c_0\left[\frac{n\{c_f+c_r\}}{\lambda} + \frac{\sqrt s\,\|\hat\Gamma_{\tau 0}\|_\infty}{\kappa_{\tilde c}\min_{i\le n}\hat f_i}\right]$. Rewriting the conclusion in Lemma 16 we have
\[
\hat s_{\theta\tau} \le \phi_{\max}(\hat s_{\theta\tau})L_n^2. \tag{6.63}
\]
Consider any $M \in \mathcal M$, and suppose $\hat s_{\theta\tau} > M$. Then by the sub-linearity of the maximum sparse eigenvalue (see Lemma 17)
\[
\hat s_{\theta\tau} \le \left\lceil\frac{\hat s_{\theta\tau}}{M}\right\rceil\phi_{\max}(M)L_n^2.
\]
Thus, since $\lceil k\rceil \le 2k$ for any $k \ge 1$, we have
\[
M \le 2\phi_{\max}(M)L_n^2
\]
which violates the condition that $M \in \mathcal M$. Therefore, we have $\hat s_{\theta\tau} \le M$. In turn, applying (6.63) once more with $\hat s_{\theta\tau} \le M$ we obtain
\[
\hat s_{\theta\tau} \le \phi_{\max}(M)L_n^2.
\]
The result follows by minimizing the bound over $M \in \mathcal M$.
7
Proofs for Section 2.3 of Supplementary Material
In this section we denote the nuisance parameters as $\tilde h = (\tilde g, \tilde\iota)$, where $\tilde g$ is a function of the variable $z \in \mathcal Z$, and $\tilde\iota$ is a function $(d,z) \mapsto \tilde\iota(d,z)$. We define the score at $(\tilde\alpha, \tilde h)$ as
\[
\psi_{\tilde\alpha,\tilde h}(y_i,d_i,z_i) = (\tau - 1\{y_i \le \tilde g(z_i) + d_i\tilde\alpha\})\tilde\iota(d_i,z_i).
\]
For notational convenience we write $\tilde\iota_i = \tilde\iota(d_i,z_i)$ and $\tilde g_i = \tilde g(z_i)$, $h_0 = (g_\tau, \iota_0)$ and $\hat h = (\hat g, \hat\iota)$. For a fixed $\tilde\alpha \in \mathbb R$, $\tilde g : \mathcal Z \to \mathbb R$, and $\tilde\iota : \mathcal D \times \mathcal Z \to \mathbb R$ we define
\[
\Gamma(\tilde\alpha, \tilde h) := \bar{\mathrm E}[\psi_{\alpha,h}(y_i,d_i,z_i)]\big|_{\alpha=\tilde\alpha,\,h=\tilde h}.
\]
The partial derivative of $\Gamma$ with respect to $\alpha$ at $(\tilde\alpha,\tilde h)$ is denoted by $\Gamma_\alpha(\tilde\alpha,\tilde h)$ and the directional derivative with respect to $[\hat h - h_0]$ at $(\tilde\alpha,\tilde h)$ is denoted as
\[
\Gamma_h(\tilde\alpha,\tilde h)[\hat h-h_0] = \lim_{t\to 0}\frac{\Gamma(\tilde\alpha, \tilde h + t[\hat h-h_0]) - \Gamma(\tilde\alpha,\tilde h)}{t}.
\]
Proof. (Proof of Lemma 5) The asymptotic normality result is established in Steps 1-4, which assume Condition IQR(i-iii). The additional results (derived in Steps 5 and 6) also assume Condition IQR(iv).
Step 1. (Normality result) We have the following identity
\[
\begin{aligned}
\mathrm E_n[\psi_{\check\alpha_\tau,\hat h}(y_i,d_i,z_i)]
&= \mathrm E_n[\psi_{\alpha_\tau,h_0}(y_i,d_i,z_i)] + \mathrm E_n[\psi_{\check\alpha_\tau,\hat h}(y_i,d_i,z_i) - \psi_{\alpha_\tau,h_0}(y_i,d_i,z_i)] \\
&= \mathrm E_n[\psi_{\alpha_\tau,h_0}(y_i,d_i,z_i)] + \underbrace{\Gamma(\check\alpha_\tau,\hat h)}_{(I)} + \underbrace{n^{-1/2}\mathbb G_n(\psi_{\check\alpha_\tau,\hat h} - \psi_{\check\alpha_\tau,h_0})}_{(II)} + \underbrace{n^{-1/2}\mathbb G_n(\psi_{\check\alpha_\tau,h_0} - \psi_{\alpha_\tau,h_0})}_{(III)}. \tag{7.64}
\end{aligned}
\]
By the second relation in (2.37) and Condition IQR(iii), the left hand side of the display above satisfies $|\mathrm E_n[\psi_{\check\alpha_\tau,\hat h}(y_i,d_i,z_i)]| \lesssim \delta_n n^{-1/2}$ with probability at least $1-\Delta_n$. Since $\hat h \in \mathcal F$ with probability at least $1-\Delta_n$ by Condition IQR(iii), with the same probability we have $|(II)| \lesssim \delta_n n^{-1/2}$.
We now proceed to bound term (III). By Condition IQR(iii) we have with probability at least $1-\Delta_n$ that $|\check\alpha_\tau - \alpha_\tau| \le \delta_n$. Observe that
\[
(\psi_{\alpha,h_0} - \psi_{\alpha_\tau,h_0})(y_i,d_i,z_i) = (1\{y_i \le g_{\tau i} + d_i\alpha_\tau\} - 1\{y_i \le g_{\tau i} + d_i\alpha\})\iota_{0i} = (1\{\epsilon_i \le 0\} - 1\{\epsilon_i \le d_i(\alpha-\alpha_\tau)\})\iota_{0i},
\]
so that $|(\psi_{\alpha,h_0} - \psi_{\alpha_\tau,h_0})(y_i,d_i,z_i)| \le 1\{|\epsilon_i| \le \delta_n|d_i|\}|\iota_{0i}|$ whenever $|\alpha-\alpha_\tau| \le \delta_n$. Since the class of functions $\{(y,d,z) \mapsto (\psi_{\alpha,h_0}-\psi_{\alpha_\tau,h_0})(y,d,z) : |\alpha-\alpha_\tau| \le \delta_n\}$ is a VC subgraph class with VC index bounded by some constant independent of $n$, using (a version of) Theorem 2.14.1 in [38], we have
\[
\sup_{|\alpha-\alpha_\tau|\le\delta_n}|\mathbb G_n(\psi_{\alpha,h_0} - \psi_{\alpha_\tau,h_0})| \lesssim_P (\bar{\mathrm E}[1\{|\epsilon_i| \le \delta_n|d_i|\}\iota_{0i}^2])^{1/2} \lesssim_P \delta_n^{1/2}.
\]
This implies that $|III| \lesssim \delta_n^{1/3}n^{-1/2}$ with probability $1-o(1)$. Therefore we have $0 = \mathrm E_n[\psi_{\alpha_\tau,h_0}(y_i,d_i,z_i)] + (II) + O_P(\delta_n^{1/2}n^{-1/2}) + O_P(\delta_n)|\check\alpha_\tau - \alpha_\tau|$.
Step 2 below establishes that $(II) = -\bar{\mathrm E}[f_id_i\iota_{0i}](\check\alpha_\tau-\alpha_\tau) + O_P(\delta_nn^{-1/2} + \delta_n|\check\alpha_\tau-\alpha_\tau|)$. Combining these relations we have
\[
\bar{\mathrm E}[f_id_i\iota_{0i}](\check\alpha_\tau-\alpha_\tau) = \mathrm E_n[\psi_{\alpha_\tau,h_0}(y_i,d_i,z_i)] + O_P(\delta_n^{1/2}n^{-1/2}) + O_P(\delta_n)|\check\alpha_\tau-\alpha_\tau|. \tag{7.65}
\]
Note that $U_n(\tau) = \{\bar{\mathrm E}[\psi_{\alpha_\tau,h_0}^2(y_i,d_i,z_i)]\}^{-1/2}\sqrt n\,\mathrm E_n[\psi_{\alpha_\tau,h_0}(y_i,d_i,z_i)]$ and $\bar{\mathrm E}[\psi_{\alpha_\tau,h_0}^2(y_i,d_i,z_i)] = \tau(1-\tau)\bar{\mathrm E}[\iota_{0i}^2]$, so that the first representation result follows from (7.65). Since $\bar{\mathrm E}[\psi_{\alpha_\tau,h_0}(y_i,d_i,z_i)] = 0$ and $\bar{\mathrm E}[|\iota_{0i}|^3] \le C$, by the Lyapunov CLT we have
\[
\sqrt n\,\mathrm E_n[\psi_{\alpha_\tau,h_0}(y_i,d_i,z_i)] \rightsquigarrow N(0,\ \tau(1-\tau)\bar{\mathrm E}[\iota_{0i}^2])
\]
and $U_n(\tau) \rightsquigarrow N(0,1)$ follows by noting that $|\bar{\mathrm E}[f_id_i\iota_{0i}]| \ge c > 0$.
Step 2. (Bounding $\Gamma(\alpha,\hat h)$ for $|\alpha-\alpha_\tau| \le \delta_n$) For any (fixed function) $\hat h \in \mathcal F$, we have
\[
\Gamma(\alpha,\hat h) = \Gamma(\alpha,h_0) + \Gamma(\alpha,\hat h) - \Gamma(\alpha,h_0) = \Gamma(\alpha,h_0) + \{\Gamma(\alpha,\hat h) - \Gamma(\alpha,h_0) - \Gamma_h(\alpha,h_0)[\hat h-h_0]\} + \Gamma_h(\alpha,h_0)[\hat h-h_0]. \tag{7.66}
\]
Because $\Gamma(\alpha_\tau,h_0) = 0$, by Taylor expansion there is some $\tilde\alpha \in [\alpha_\tau,\alpha]$ such that
\[
\Gamma(\alpha,h_0) = \Gamma(\alpha_\tau,h_0) + \Gamma_\alpha(\tilde\alpha,h_0)(\alpha-\alpha_\tau) = \{\Gamma_\alpha(\alpha_\tau,h_0) + \eta_n\}(\alpha-\alpha_\tau)
\]
where $|\eta_n| \le \delta_n\bar{\mathrm E}[|d_i^2\iota_{0i}|] \le \delta_nC$ by relation (7.73) in Step 4. Combining the argument above with relations (7.68), (7.69) and (7.71) in Step 3 below we have
\[
\begin{aligned}
\Gamma(\alpha,\hat h) &= \Gamma_h(\alpha_\tau,h_0)[\hat h-h_0] + \Gamma(\alpha_\tau,h_0) + \{\Gamma_\alpha(\alpha_\tau,h_0) + O(\delta_n\bar{\mathrm E}[|d_i^2\iota_{0i}|])\}(\alpha-\alpha_\tau) + O(\delta_nn^{-1/2}) \\
&= \Gamma_\alpha(\alpha_\tau,h_0)(\alpha-\alpha_\tau) + O(\delta_n|\alpha-\alpha_\tau|\,\bar{\mathrm E}[|d_i^2\iota_{0i}|] + \delta_nn^{-1/2}). \tag{7.67}
\end{aligned}
\]
Step 3. (Relations for $\Gamma_h$) The directional derivative $\Gamma_h$ with respect to the direction $\hat h-h_0$ at a point $\tilde h = (\tilde g,\tilde\iota)$ is given by
\[
\Gamma_h(\alpha,\tilde h)[\hat h-h_0] = -\bar{\mathrm E}[f_{\epsilon_i|d_i,z_i}(d_i(\alpha-\alpha_\tau) + \tilde g_i - g_{\tau i})\tilde\iota_i\{\hat g_i - g_{\tau i}\}] + \bar{\mathrm E}[(\tau - 1\{y_i \le \tilde g_i + d_i\alpha\})\{\hat\iota_i - \iota_{0i}\}].
\]
Note that when $\Gamma_h$ is evaluated at $(\alpha_\tau,h_0)$ we have with probability $1-\Delta_n$
\[
|\Gamma_h(\alpha_\tau,h_0)[\hat h-h_0]| = |-\bar{\mathrm E}[f_i\iota_{0i}\{\hat g_i - g_{\tau i}\}]| \le \delta_nn^{-1/2} \tag{7.68}
\]
by $\hat h \in \mathcal F$ with probability at least $1-\Delta_n$, and by $P(y_i \le g_{\tau i} + d_i\alpha_\tau \mid d_i,z_i) = \tau$. The expression for $\Gamma_h$ also leads to the following bound
\[
\begin{aligned}
|\Gamma_h(\alpha,h_0)[\hat h-h_0] - \Gamma_h(\alpha_\tau,h_0)[\hat h-h_0]|
&= |\bar{\mathrm E}[\{f_{\epsilon_i|d_i,z_i}(0) - f_{\epsilon_i|d_i,z_i}(d_i(\alpha-\alpha_\tau))\}\iota_{0i}\{\hat g_i-g_{\tau i}\}] + \bar{\mathrm E}[\{F_i(0) - F_i(d_i(\alpha-\alpha_\tau))\}\{\hat\iota_i-\iota_{0i}\}]| \\
&\le \bar{\mathrm E}[|\alpha-\alpha_\tau|\,\bar f'\,|d_i\iota_{0i}|\,|\hat g_i-g_{\tau i}|] + \bar{\mathrm E}[\bar f\,|(\alpha-\alpha_\tau)d_i|\,|\hat\iota_i-\iota_{0i}|] \\
&\le \bar f'|\alpha-\alpha_\tau|\{\bar{\mathrm E}[|\hat g_i-g_{\tau i}|^2]\,\bar{\mathrm E}[\iota_{0i}^2d_i^2]\}^{1/2} + \bar f|\alpha-\alpha_\tau|\{\bar{\mathrm E}[(\hat\iota_i-\iota_{0i})^2]\,\bar{\mathrm E}[d_i^2]\}^{1/2} \\
&\lesssim_P |\alpha-\alpha_\tau|\,\delta_n. \tag{7.69}
\end{aligned}
\]
The second directional derivative $\Gamma_{hh}$ at $\tilde h = (\tilde g,\tilde\iota)$ with respect to the direction $\hat h-h_0$, provided $\hat h \in \mathcal F$, can be bounded by
\[
\begin{aligned}
|\Gamma_{hh}(\alpha,\tilde h)[\hat h-h_0,\hat h-h_0]|
&= |-\bar{\mathrm E}[f'_{\epsilon_i|d_i,z_i}(d_i(\alpha-\alpha_\tau) + \tilde g_i-g_{\tau i})\tilde\iota_i\{\hat g_i-g_{\tau i}\}^2] + 2\bar{\mathrm E}[f_{\epsilon_i|d_i,z_i}(d_i(\alpha-\alpha_\tau) + \tilde g_i-g_{\tau i})\{\hat g_i-g_{\tau i}\}\{\hat\iota_i-\iota_{0i}\}]| \\
&\le \bar f'\,\bar{\mathrm E}[|\iota_{0i}|\{\hat g_i-g_{\tau i}\}^2] + \bar f'\,\bar{\mathrm E}[|\hat\iota_i-\iota_{0i}|\{\hat g_i-g_{\tau i}\}^2] + 2\bar f\,\bar{\mathrm E}[|\hat g_i-g_{\tau i}|\,|\hat\iota_i-\iota_{0i}|] \\
&\le \delta_nn^{-1/2} \tag{7.70}
\end{aligned}
\]
since $\tilde h \in [h_0,\hat h]$, $|\tilde\iota(d_i,z_i)| \le |\iota_0(d_i,z_i)| + |\hat\iota(d_i,z_i) - \iota_0(d_i,z_i)|$, and the last bound follows from $\hat h \in \mathcal F$.
Therefore, provided that $\hat h \in \mathcal F$, we have
\[
\left|\Gamma(\alpha,\hat h) - \Gamma(\alpha,h_0) - \Gamma_h(\alpha,h_0)[\hat h-h_0]\right| \le \sup_{\tilde h\in[h_0,\hat h]}\left|\Gamma_{hh}(\alpha,\tilde h)[\hat h-h_0,\hat h-h_0]\right| \lesssim \delta_nn^{-1/2}. \tag{7.71}
\]
Step 4. (Relations for $\Gamma_\alpha$) By definition of $\Gamma$, its derivative with respect to $\alpha$ at $(\alpha,\tilde h)$ is
\[
\Gamma_\alpha(\alpha,\tilde h) = -\bar{\mathrm E}[f_{\epsilon_i|d_i,z_i}(d_i(\alpha-\alpha_\tau) + \tilde g_i - g_{\tau i})d_i\tilde\iota_i].
\]
Therefore, evaluating $\Gamma_\alpha(\alpha,\tilde h)$ at $\alpha = \alpha_\tau$ and $\tilde h = h_0$, since $f_{\epsilon_i|d_i,z_i}(0) = f_i$ we have
\[
\Gamma_\alpha(\alpha_\tau,h_0) = -\bar{\mathrm E}[f_id_i\iota_{0i}]. \tag{7.72}
\]
Moreover, $\Gamma_\alpha$ also satisfies
\[
|\Gamma_\alpha(\alpha,h_0) - \Gamma_\alpha(\alpha_\tau,h_0)| = \left|\bar{\mathrm E}[f_{\epsilon_i|d_i,z_i}(d_i(\alpha-\alpha_\tau))\iota_{0i}d_i] - \bar{\mathrm E}[f_i\iota_{0i}d_i]\right| \le |\alpha-\alpha_\tau|\,\bar f'\,\bar{\mathrm E}[|d_i^2\iota_{0i}|] \le |\alpha-\alpha_\tau|\,\bar f'\{\bar{\mathrm E}[d_i^4]\bar{\mathrm E}[\iota_{0i}^2]\}^{1/2} \le C'|\alpha-\alpha_\tau| \tag{7.73}
\]
since $\bar{\mathrm E}[d_i^4] \vee \bar{\mathrm E}[\iota_{0i}^4] \le C$ and $\bar f' < C$ by Condition IQR(i).
Step 5. (Estimation of Variance) First note that
\[
\begin{aligned}
|\mathrm E_n[\hat f_id_i\hat\iota_i] - \bar{\mathrm E}[f_id_i\iota_{0i}]|
&\le |\mathrm E_n[\hat f_id_i\hat\iota_i] - \mathrm E_n[f_id_i\iota_{0i}]| + |\mathrm E_n[f_id_i\iota_{0i}] - \bar{\mathrm E}[f_id_i\iota_{0i}]| \\
&\le |\mathrm E_n[(\hat f_i-f_i)d_i\hat\iota_i]| + |\mathrm E_n[f_id_i(\hat\iota_i-\iota_{0i})]| + |\mathrm E_n[f_id_i\iota_{0i}] - \bar{\mathrm E}[f_id_i\iota_{0i}]| \\
&\le |\mathrm E_n[(\hat f_i-f_i)d_i(\hat\iota_i-\iota_{0i})]| + |\mathrm E_n[(\hat f_i-f_i)d_i\iota_{0i}]| + \|f_id_i\|_{2,n}\|\hat\iota_i-\iota_{0i}\|_{2,n} + |\mathrm E_n[f_id_i\iota_{0i}] - \bar{\mathrm E}[f_id_i\iota_{0i}]| \\
&\lesssim_P \|(\hat f_i-f_i)d_i\|_{2,n}\|\hat\iota_i-\iota_{0i}\|_{2,n} + \|\hat f_i-f_i\|_{2,n}\|d_i\iota_{0i}\|_{2,n} + \|f_id_i\|_{2,n}\|\hat\iota_i-\iota_{0i}\|_{2,n} + |\mathrm E_n[f_id_i\iota_{0i}] - \bar{\mathrm E}[f_id_i\iota_{0i}]| \\
&\lesssim_P \delta_n \tag{7.74}
\end{aligned}
\]
because $f_i, \hat f_i \le C$, $\bar{\mathrm E}[d_i^4] \le C$, and $\bar{\mathrm E}[\iota_{0i}^4] \le C$ by Condition IQR(i) and Condition IQR(iv).
Next we proceed to control the other term of the variance. We have
\[
\begin{aligned}
\big|\,\|\psi_{\check\alpha_\tau,\hat h}(y_i,d_i,z_i)\|_{2,n} - \|\psi_{\alpha_\tau,h_0}(y_i,d_i,z_i)\|_{2,n}\big|
&\le \|\psi_{\check\alpha_\tau,\hat h}(y_i,d_i,z_i) - \psi_{\alpha_\tau,h_0}(y_i,d_i,z_i)\|_{2,n} \\
&\le \|\psi_{\check\alpha_\tau,\hat h}(y_i,d_i,z_i) - (\tau - 1\{y_i \le d_i\check\alpha_\tau + \tilde g_i\})\iota_{0i}\|_{2,n} + \|(\tau - 1\{y_i \le d_i\check\alpha_\tau + \tilde g_i\})\iota_{0i} - \psi_{\alpha_\tau,h_0}(y_i,d_i,z_i)\|_{2,n} \\
&\le \|\hat\iota_i - \iota_{0i}\|_{2,n} + \|(1\{y_i \le d_i\alpha_\tau + g_{\tau i}\} - 1\{y_i \le d_i\check\alpha_\tau + \tilde g_i\})\iota_{0i}\|_{2,n} \\
&\le \|\hat\iota_i - \iota_{0i}\|_{2,n} + \|\iota_{0i}^2\|_{2,n}^{1/2}\,\|1\{|\epsilon_i| \le |d_i(\alpha_\tau-\check\alpha_\tau) + g_{\tau i} - \tilde g_i|\}\|_{2,n}^{1/2} \\
&\lesssim_P \delta_n \tag{7.75}
\end{aligned}
\]
by IQR(ii) and IQR(iv). Also, $|\mathrm E_n[\psi_{\alpha_\tau,h_0}^2(y_i,d_i,z_i)] - \bar{\mathrm E}[\psi_{\alpha_\tau,h_0}^2(y_i,d_i,z_i)]| \lesssim_P \delta_n$ by independence and bounded moment conditions in Condition IQR(ii).
Step 6. (Main Step for $\chi^2$) Note that the denominator of $L_n(\alpha_\tau)$ was analyzed in relation (7.75) of Step 5. Next consider the numerator of $L_n(\alpha_\tau)$. Since $\Gamma(\alpha_\tau,h_0) = \bar{\mathrm E}[\psi_{\alpha_\tau,h_0}(y_i,d_i,z_i)] = 0$ we have
\[
\mathrm E_n[\psi_{\alpha_\tau,\hat h}(y_i,d_i,z_i)] = (\mathrm E_n-\bar{\mathrm E})[\psi_{\alpha_\tau,\hat h}(y_i,d_i,z_i) - \psi_{\alpha_\tau,h_0}(y_i,d_i,z_i)] + \Gamma(\alpha_\tau,\hat h) + \mathrm E_n[\psi_{\alpha_\tau,h_0}(y_i,d_i,z_i)].
\]
By $\hat h \in \mathcal F$ with probability $1-\Delta_n$ and (7.67) with $\alpha = \alpha_\tau$, it follows that with the same probability
\[
|(\mathrm E_n-\bar{\mathrm E})[\psi_{\alpha_\tau,\hat h}(y_i,d_i,z_i) - \psi_{\alpha_\tau,h_0}(y_i,d_i,z_i)]| \le \delta_nn^{-1/2} \quad\text{and}\quad |\Gamma(\alpha_\tau,\hat h)| \lesssim \delta_nn^{-1/2}.
\]
The identity $nA_n^2 = nB_n^2 + n(A_n-B_n)^2 + 2nB_n(A_n-B_n)$ for $A_n = \mathrm E_n[\psi_{\alpha_\tau,\hat h}(y_i,d_i,x_i)]$ and $B_n = \mathrm E_n[\psi_{\alpha_\tau,h_0}(y_i,d_i,x_i)] \lesssim_P \{\tau(1-\tau)\bar{\mathrm E}[\iota_{0i}^2]\}^{1/2}n^{-1/2}$ yields
\[
nL_n(\alpha_\tau) = \frac{n|\mathrm E_n[\psi_{\alpha_\tau,\hat h}(y_i,d_i,z_i)]|^2}{\mathrm E_n[\psi_{\alpha_\tau,\hat h}^2(y_i,d_i,z_i)]} = \frac{n|\mathrm E_n[\psi_{\alpha_\tau,h_0}(y_i,d_i,z_i)]|^2 + O_P(\delta_n)}{\bar{\mathrm E}[\tau(1-\tau)\iota_{0i}^2] + O_P(\delta_n)} = \frac{n|\mathrm E_n[\psi_{\alpha_\tau,h_0}(y_i,d_i,z_i)]|^2}{\bar{\mathrm E}[\tau(1-\tau)\iota_{0i}^2]} + O_P(\delta_n)
\]
since $\tau(1-\tau)\bar{\mathrm E}[\iota_{0i}^2]$ is bounded away from zero because $C \le |\bar{\mathrm E}[f_id_i\iota_{0i}]| = |\bar{\mathrm E}[v_i\iota_{0i}]| \le \{\bar{\mathrm E}[v_i^2]\bar{\mathrm E}[\iota_{0i}^2]\}^{1/2}$ and $\bar{\mathrm E}[v_i^2]$ is bounded above uniformly. Therefore, the result then follows since $\sqrt n\,\mathrm E_n[\psi_{\alpha_\tau,h_0}(y_i,d_i,z_i)] \rightsquigarrow N(0,\ \tau(1-\tau)\bar{\mathrm E}[\iota_{0i}^2])$.
8 Rates of convergence for $\hat f$
Let $\hat Q(u \mid \tilde x) = \tilde x'\hat\eta_u$ for $u = \tau-h, \tau+h$. Using a Taylor expansion for the conditional quantile function $Q(\cdot \mid \tilde x)$, assuming that $\sup_{|\tilde\tau-\tau|\le h}|Q'''(\tilde\tau \mid \tilde x)| \le C$, we have
\[
|\hat Q'(\tau \mid \tilde x) - Q'(\tau \mid \tilde x)| \le \frac{|Q(\tau+h \mid \tilde x) - \tilde x'\hat\eta_{\tau+h}| + |Q(\tau-h \mid \tilde x) - \tilde x'\hat\eta_{\tau-h}|}{2h} + Ch^2.
\]
In turn, to estimate $f_i$, the conditional density at $Q(\tau \mid \tilde x)$, we set $\hat f_i = 1/\hat Q'(\tau \mid \tilde x_i)$, which leads to
\[
|f_i - \hat f_i| = \frac{|\hat Q'(\tau \mid \tilde x_i) - Q'(\tau \mid \tilde x_i)|}{\hat Q'(\tau \mid \tilde x_i)\,Q'(\tau \mid \tilde x_i)} = (\hat f_if_i)\cdot|\hat Q'(\tau \mid \tilde x_i) - Q'(\tau \mid \tilde x_i)|. \tag{8.76}
\]
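The difference quotient $\hat Q'(\tau \mid \tilde x)$ is not written out explicitly in this chunk; presumably, consistent with the two fitted quantile levels being spaced $2h$ apart, it is the central difference
\[
\hat Q'(\tau \mid \tilde x) = \frac{\hat Q(\tau+h \mid \tilde x) - \hat Q(\tau-h \mid \tilde x)}{2h} = \frac{\tilde x'\hat\eta_{\tau+h} - \tilde x'\hat\eta_{\tau-h}}{2h},
\]
so that the bound above follows by comparing it with $(Q(\tau+h \mid \tilde x) - Q(\tau-h \mid \tilde x))/(2h)$, whose distance to $Q'(\tau \mid \tilde x)$ is $O(h^2)$ by the third-order Taylor expansion.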
Lemma 19 (Bound Rates for Density Estimator) Let $\tilde x = (d,x)$, suppose that $c \le f_i \le C$ and $\sup_\epsilon f'_{\epsilon_i|\tilde x_i}(\epsilon \mid \tilde x_i) \le \bar f' \le C$, $i = 1,\dots,n$, uniformly in $n$. Assume further that with probability $1-\Delta_n$ we have for $u = \tau-h, \tau+h$ that
\[
\|\tilde x_i'(\hat\eta_u - \eta_u) + r_{ui}\|_{2,n} \le \frac{C}{\kappa_c}\sqrt{\frac{s\log(p\vee n)}{n}}, \quad \|\hat\eta_u - \eta_u\|_1 \le \frac{C}{\kappa_c^2}\sqrt{\frac{s^2\log(p\vee n)}{n}}, \quad\text{and}\quad |\hat\eta_{u1} - \eta_{u1}| \le \frac{C}{\kappa_c}\sqrt{\frac{s\log(p\vee n)}{n}}.
\]
Then if $\sup_{|\tilde\tau-\tau|\le h}|Q'''(\tilde\tau \mid \tilde x)| \le C$, $\max_{i\le n}\|x_i\|_\infty\sqrt{s^2\log(p\vee n)} + \max_{i\le n}|d_i|\sqrt{s\log(p\vee n)} \le \delta_nh\kappa_c^2\sqrt n$, and $\max_{u=\tau+h,\tau-h}\|r_{ui}\|_\infty \le h\delta_n$, we have
\[
\|f_i - \hat f_i\|_{2,n} \lesssim_P \frac{1}{h\kappa_c}\sqrt{\frac{s\log(n\vee p)}{n}} + h^2, \quad\text{and}
\]
\[
\max_{i\le n}|f_i - \hat f_i| \lesssim_P \max_{u=\tau+h,\tau-h}\frac{\|r_{ui}\|_\infty}{h} + \frac{\max_{i\le n}\|x_i\|_\infty}{h\kappa_c^2}\sqrt{\frac{s^2\log(n\vee p)}{n}} + \frac{\max_{i\le n}|d_i|}{h\kappa_c}\sqrt{\frac{s\log(n\vee p)}{n}} + h^2.
\]
Proof. Letting $(\delta_{\alpha u}; \delta_{\beta u}) = \eta_u - \hat\eta_u$ and $\tilde x_i = (d_i, x_i')'$ we have that
\[
\begin{aligned}
|\hat f_i - f_i| &\le (f_i\hat f_i)\frac{|\tilde x_i'(\eta_{\tau+h} - \hat\eta_{\tau+h}) + r_{\tau+h,i} - \tilde x_i'(\eta_{\tau-h} - \hat\eta_{\tau-h}) - r_{\tau-h,i}|}{2h} + Ch^2 \\
&= (2h)^{-1}(f_i\hat f_i)|x_i'\delta_{\beta\tau+h} + d_i\delta_{\alpha\tau+h} + r_{\tau+h,i} - x_i'\delta_{\beta\tau-h} - d_i\delta_{\alpha\tau-h} - r_{\tau-h,i}| + Ch^2 \\
&\le h^{-1}(f_i\hat f_i)\left(K_x\|\delta_{\beta\tau+h}\|_1 + K_x\|\delta_{\beta\tau-h}\|_1 + |d_i|\cdot|\delta_{\alpha\tau+h}| + |d_i|\cdot|\delta_{\alpha\tau-h}| + |r_{\tau+h,i} - r_{\tau-h,i}|\right) + Ch^2.
\end{aligned}
\]
The result follows because for sequences $d_n \to 0$, $c_n \to 0$, the relation $|\hat f_i - f_i| \le |\hat f_if_i|c_n + d_n$ implies that $\hat f_i(1 - f_ic_n) \le f_i + d_n$. Since $f_i$ is bounded, $f_ic_n \to 0$, which implies that $\hat f_i$ is bounded. Therefore, $|\hat f_i - f_i| \lesssim c_n + d_n$. We take $d_n = Ch^2 \to 0$ and
\[
c_n = h^{-1}\left(K_x\|\delta_{\beta\tau+h}\|_1 + K_x\|\delta_{\beta\tau-h}\|_1 + |d_i|\cdot|\delta_{\alpha\tau+h}| + |d_i|\cdot|\delta_{\alpha\tau-h}| + |r_{\tau+h,i} - r_{\tau-h,i}|\right) \to_P 0
\]
by the growth condition.
Moreover, we have
\[
\|(\hat f_i - f_i)/f_i\|_{2,n} \lesssim \frac{\|\hat f_i\tilde x_i'(\hat\eta_{\tau+h} - \eta_{\tau+h}) + \hat f_ir_{\tau+h,i}\|_{2,n} + \|\hat f_i\tilde x_i'(\hat\eta_{\tau-h} - \eta_{\tau-h}) + \hat f_ir_{\tau-h,i}\|_{2,n}}{h} + Ch^2.
\]
By the previous result $\hat f_i$ is uniformly bounded from above with high probability. Thus, the result follows by the assumed prediction norm rate for $\|\tilde x_i'(\hat\eta_u - \eta_u) + r_{ui}\|_{2,n}$.
[Figure: surface plots of the rejection probability rp(0.05) for the optimal IV statistics $C_{0.05,n}$ and $I_{0.05,n}$ (two panels per statistic), plotted against $R_d^2$ and $R_y^2$, each ranging over [0, 0.8], with rejection probabilities on a [0, 0.5] vertical scale.]
On the Scalability of the GPU EXPLORE
Explicit-State Model Checker
Nathan Cassee
Thomas Neele
Anton Wijs∗
Eindhoven University of Technology
Eindhoven, The Netherlands
[email protected] {T.S.Neele, A.J.Wijs}@tue.nl
The use of graphics processors (GPUs) is a promising approach to speed up model checking to
such an extent that it becomes feasible to instantly verify software systems during development.
GPU EXPLORE is an explicit-state model checker that runs all its computations on the GPU. Over
the years it has been extended with various techniques, and the possibilities to further improve its
performance have been continuously investigated. In this paper, we discuss how the hash table of
the tool works, which is at the heart of its functionality. We propose an alteration of the hash table
that in isolated experiments seems promising, and analyse its effect when integrated in the tool.
Furthermore, we investigate the current scalability of GPU EXPLORE, by experimenting both with
input models of varying sizes and running the tool on one of the latest GPUs of NVIDIA.
1
Introduction
Model checking [2] is a technique to systematically determine whether a concurrent system adheres to
desirable functional properties. There are numerous examples in which it has been successfully applied,
however, the fact that it is computationally very demanding means that it is not yet a commonly used procedure in software engineering. Accelerating these computations with graphics processing units (GPUs)
is one promising way to model check a system design in mere seconds or minutes as opposed to many
hours.
GPU EXPLORE [28, 29, 33] is a model checker that performs all its computations on a GPU. Initially,
it consisted of a state space exploration engine [28], which was extended to perform on-the-fly deadlock
and safety checking [29]. Checking liveness properties has also been investigated [27], with positive
results, but liveness checking has yet to be integrated in the official release of the tool. Finally, in order
to reduce the memory requirements of GPU EXPLORE, partial order reduction has been successfully
integrated [20].
Since the first results achieved with GPU EXPLORE [28], considerable progress has been made. For
instance, the original version running on an NVIDIA K20 was able to explore the state space of the
peterson7 model in approximately 72 minutes. With the many improvements to GPU EXPLORE’s algorithms reported in [33], together with advances in the GPU hardware and the CUDA compiler, this has been reduced to 16 seconds, a roughly 270-fold improvement.
With these levels of speed-up, it has become much more feasible to interactively check and debug large
models. Furthermore, GPU developments continue and many options can still be investigated.
Performance is very important for a tool such as GPU EXPLORE. However, so far, the scalability of
the tool has not yet been thoroughly investigated. For instance, currently, we have access to a NVIDIA
Titan X GPU, which is equipped with 12 GB global memory, but for all the models we have been using
∗ We gratefully acknowledge the support of NVIDIA Corporation with the donation of the GeForce Titan X used for this
research.
Timo Kehrer and Alice Miller (Eds.):
Workshop on Graphs as Models 2017 (GAM ’17)
EPTCS 263, 2017, pp. 38–52, doi:10.4204/EPTCS.263.4
© N.W. Cassee, T.S. Neele & A.J. Wijs
so far, 5 GB of global memory suffices, as the models used for the run-time analysis of GPU EXPLORE do not require more than 5 GB for a state space exploration.
have obtained when scaling up models to utilise up to 12 GB.
In addition, we also experimentally compared running GPU EXPLORE on a Titan X GPU with the
Maxwell architecture, which was released in 2015, with GPU EXPLORE running on a Titan X GPU with
the Pascal architecture, which was released a year later. This provides insights regarding the effect recent
hardware developments have on the tool.
Finally, we analyse the scalability of a critical part of the tool, namely its hash table. This structure
is used during model checking to keep track of the progress made through the system state space. Even
a small improvement of the hash table design may result in a drastic improvement in the performance of
GPU EXPLORE. Recently, we identified, by conducting isolated experiments, that there is still potential
for further improvement [9]. In the current paper, we particularly investigate whether changing the size
of the so-called buckets, i.e., data structures that act as containers in which states are stored, can have a
positive effect on the running time.
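To make the bucket idea concrete, the following is a minimal sequential C sketch of a bucketed hash table (an illustration only, not GPU EXPLORE’s actual code; the names `find_or_put` and `hash_state` and the constants are our own). Each state hashes to a single bucket, and all slots of that bucket are scanned to either find the state or claim an empty slot; in the GPU setting a warp can fetch an entire bucket in one coalesced memory transaction, which is why the bucket size directly affects performance.

```c
#include <stdint.h>

/* Illustrative bucketed hash table: NB_BUCKETS buckets of BUCKET_SIZE slots
 * each. The value 0 is reserved as the "empty slot" marker, so states are
 * assumed to be encoded as nonzero 32-bit values. */
#define NB_BUCKETS 1024
#define BUCKET_SIZE 32

static uint32_t table[NB_BUCKETS][BUCKET_SIZE]; /* zero-initialized */

/* Map a state vector (here a single word) to a bucket index.
 * A simple multiplicative hash; a real tool would use stronger hashing. */
static uint32_t hash_state(uint32_t s) {
    return (s * 2654435761u) % NB_BUCKETS;
}

/* Returns 1 if the state was newly inserted, 0 if it was already present
 * or if the bucket is full (a real implementation would then rehash to
 * another bucket rather than silently drop the state). */
int find_or_put(uint32_t state) {
    uint32_t b = hash_state(state);
    for (int i = 0; i < BUCKET_SIZE; i++) {
        if (table[b][i] == state) return 0; /* already explored */
        if (table[b][i] == 0) {             /* empty slot: claim it */
            table[b][i] = state;
            return 1;
        }
    }
    return 0; /* bucket full */
}
```

In a concurrent GPU implementation the empty-slot claim must be performed atomically (e.g. with CUDA's atomicCAS) to be safe under parallel insertion. Changing BUCKET_SIZE trades the cost of scanning a bucket against the probability that a bucket overflows, which is precisely the trade-off the experiments in this paper investigate.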
The structure of the paper is as follows. In Section 2, we discuss related work. Next, an overview of
the inner working of GPU EXPLORE is presented in Section 3. The hash table and its proposed alterations
are discussed in Section 4. After that, we present the results we obtained through experimentation in
Section 5. Finally, conclusions and pointers to future work are given in Section 6.
2
Related work
In the literature, several different designs for parallel hash tables can be found. First of all, there is the
hash table for GPUs proposed by Alcantara et al. [1], which is based on Cuckoo hashing. Secondly,
Laarman et al. [24] designed a hash table for multi-core shared memory systems. Their implementation
was later used as a basis for the hash table underlying the LTSmin model checker. Other lock-free hash
tables for the GPU are those proposed by Moazeni & Sarrafzadeh [19], by Bordawekar [6] and by Misra
& Chaudhuri [18]. Cuckoo hashing as implemented by Alcantara et al. is publicly available as part of the
CUDPP library 1 . Unfortunately, to the best of our knowledge, there are no implementations available of
the other hash table designs.
Besides GPU EXPLORE [33], there are several other GPU model checking tools. Bartocci et al. [5]
developed an extension for the SPIN model checker that performs state-space exploration on the GPU.
They achieved significant speed-ups for large models.
A GPU extension to the parallel model checking tool DIVINE, called DIVINE-CUDA, was developed by Barnat et al. [4]. To speed up the model checking process, they offload the cycle detection
procedure to the GPU. Their tool can even benefit from the use of multiple GPUs. DIVINE-CUDA
achieves a significant speed-up when model checking properties that are valid.
Edelkamp and Sulewski address the issues arising from the limited amount of global memory available on a GPU. In [12], they implement a hybrid approach in a tool called CuDMoC, using the GPU
for next state computation, while keeping the hash table in the main memory, to be accessed by multiple
threads running on the Central Processing Unit (CPU). In [13], they keep part of the state space in the
global memory and store the rest on disk. The record on disk can be queried through a process they
call delayed duplicate detection. Even though disk access causes overhead, they manage to achieve a
speed-up over single-threaded tools.
1 http://cudpp.github.io/
On the Scalability of the GPU EXPLORE Explicit-State Model Checker
[Figure 1 (left): the producer LTS with states p0 and p1 and transitions 'gen work' and 'send', and two consumer LTSs with states c0 and c1 and transitions 'rec', 'work' and internal τ steps.]

par using
    send * rec * _ -> trans,
    send * _ * rec -> trans
in
    "producer.aut" || "consumer.aut" || "consumer.aut"
end par

Figure 1: Example of LTS network with one producer and two consumers. On the right, the communication between the LTSs is specified using the EXP syntax [14]. Here, producer.aut and consumer.aut are files containing the specification of the producer and the consumer respectively.
Edelkamp et al. [7, 8] also applied GPU programming to probabilistic model checking. They solve
systems of linear equations on the GPU in order to accelerate the value iteration procedure. GPUs are
well suited for this purpose, and can enable a speed-up of 18 times over a traditional CPU implementation.
Wu et al. [35] have developed a tool called GPURC that performs the full state-space exploration
process on the GPU, similar to GPU EXPLORE. Their implementation applies dynamic parallelism, a
relatively new feature in CUDA that allows launching of new kernels from within a running kernel. Their
tool shows a good speed-up compared to traditional single-threaded tools, although the added benefit of
dynamic parallelism is minimal.
Finally, GPUs are also successfully applied to accelerate other computations related to model checking. For instance, Wu et al. [34] use the GPU to construct counter-examples, and state space decomposition and minimisation are investigated in [26, 31, 32]. For probabilistic model checking, Češka et al. [10]
implemented GPU-accelerated parameter synthesis for parameterized continuous-time Markov chains.
3 GPUs and GPU EXPLORE
GPU EXPLORE [28, 29, 33] is an explicit-state model checker that runs almost entirely on a GPU (only the general progress is checked on the host side, i.e. by a thread running on the CPU). It is written in CUDA C, an extension of C offered by NVIDIA. CUDA (Compute Unified Device Architecture) provides an interface to write applications for NVIDIA's GPUs. GPU EXPLORE takes a network of Labelled
Transition Systems (LTSs) [17] as input, and can construct the synchronous product of those LTSs using
many threads in a Breadth-First-Search-based exploration, while optionally checking on-the-fly for the
presence of deadlocks and violations of safety properties. A (negation of a) safety property can be added
as an automaton to the network.
An LTS is a directed graph in which the nodes represent states and the edges are transitions between
the states. Each transition has an action label representing an event leading from one state to another.
An example network is shown in Figure 1, where the initial states are indicated by detached incoming
arrows. One producer generates work and sends it to one of two consumers. This happens by means of
synchronisation of the ‘send’ and ‘rec’ actions. The other actions can be executed independently. How
the process LTSs should be combined using the relevant synchronisation rules is defined on the right in
Figure 1, using the syntax of the EXP.OPEN tool [17]. The state space of this network consists of 8 states
and 24 transitions.
N.W. Cassee, T.S. Neele & A.J. Wijs

Figure 2: Schematic overview of the GPU hardware architecture and GPU EXPLORE. [The figure shows SM 0 to SM N, each running exploration threads with a cache in shared memory, on top of the L1 & L2 and texture caches and the global memory, which holds the network input and the global hash table.]

The general approach of GPU EXPLORE to perform state space exploration is discussed in this section, leaving out many of the details that are not relevant for understanding the current work. The interested reader is referred to [28, 29, 33].
In a CUDA program, the host launches CUDA functions called kernels, that are to be executed many
times in parallel by a specified number of GPU threads. Usually, all threads run the same kernel using
different parts of the input data, although some GPUs allow multiple different kernels to be executed
simultaneously (GPU EXPLORE does not use this feature). Each thread is executed by a streaming processor (SP). Threads are grouped in blocks of a predefined size. Each block is assigned to a streaming
multiprocessor (SM). An SM consists of a fixed number of SPs (see Figure 2).
Each thread has a number of on-chip registers that allow fast access. The threads in a block together
share memory to exchange data, which is located in the (on-chip) shared memory of an SM. Finally, the
blocks can share data using the global memory of the GPU, which is relatively large, but slow, since it
is off-chip. The global memory is used to exchange data between the host and the kernel. The GTX
T ITAN X, which we used for our experiments, has 12 GB global memory and 24 SMs, each having 128
SPs (3,072 SPs in total).
Writing well-performing GPU applications is challenging, due to the execution model of GPUs,
which is Single Instruction Multiple Threads. Threads are partitioned in groups of 32 called warps. The
threads in a warp run in lock-step, sharing a program counter, so they always execute the same program
instruction. Hence, thread divergence, i.e. the phenomenon of threads being forced to execute different
instructions (e.g., due to if-then-else constructions) or to access physically distant parts of the global
memory, negatively affects performance.
Model checking tends to introduce divergences frequently, as it requires combining the behaviour
of the processes in the network, and accessing and storing state vectors of the system state space in
the global memory. In GPU EXPLORE, this is mitigated by combining relevant network information as
much as possible in 32-bit integers, and storing these as textures, that only allow read access and use a
dedicated cache to speed up random accesses.
Furthermore, in the global memory, a hash table is used to store state vectors (Figure 2). The currently
used hash table has been designed to optimise accesses of entire warps: the space is partitioned into
buckets consisting of 32 integers, precisely enough for one warp to fetch a bucket with one combined
memory access. State vectors are hashed to buckets, and placed within a bucket in an available slot. If
the bucket is full, another hash function is used to find a new bucket. Each block accesses the global hash
table to collect vectors that still require exploration.
To each state vector with n process states, a group of n threads is assigned to construct its successors
using fine-grained parallelism. Since access to the global memory is slow, each block uses a dedicated
state cache (Figure 2). It serves to store and collect newly produced state vectors, that are subsequently
moved to the global hash table in batches. With the cache, block-local duplicates can be detected.
4 The GPU EXPLORE Hash Table
States discovered during the exploration phase of GPU EXPLORE are inserted into a hash table in global memory. This hash table is used to keep track of the open and closed sets maintained during the breadth-first-search-based exploration of the state space.
Since many accesses (reads and writes) to the hash table are performed during state-space exploration, its performance is critical for our model checker. In order to allow for efficient parallel access,
the hash table should be lock-free. To prevent corruption of state vectors, insertion should be an atomic
operation, even when a state vector spans multiple 32-bit integers.
Given these requirements, we have considered several lock-free hash table implementations. One of
them, proposed by Alcantara et al. [1], uses so-called Cuckoo hashing.
With Cuckoo hashing a key is hashed to a bucket in the hash table, and in case of a collision the key
that is already in the bucket is evicted and rehashed using another hash function to a different bucket.
Re-insertions continue until the last evicted key is hashed to an empty bucket, until all hash functions are
exhausted or until the chain of re-insertions becomes too long [22].
The other hash table we considered is the one originally designed for GPU EXPLORE [33]; we refer to
its hashing mechanism as GPU EXPLORE hashing. We experimentally compared these two hash tables,
and from these comparisons we concluded that while Cuckoo hashing on average performs better, it does
not meet all the demands needed by GPU EXPLORE. However, based on the performance evaluation a
possible performance increase has been identified for GPU EXPLORE hashing [9]. This section discusses
the proposed modification, and its implementation.
4.1 GPU EXPLORE Hashing
Figure 3: Division of threads in a warp over a bucket of size 32 in GPU EXPLORE 2.0
The GPU EXPLORE hash table consists of two parts: the storage used to store the discovered vectors and
the set of hash constants used by the hash function to determine the position of a vector in the hash table.
The memory used to store vectors is divided into buckets with a size of 32 integers. Each bucket is additionally split into two equally sized half buckets. Therefore, a single bucket can store 2 · ⌊16 / vector length⌋
vectors. The reason for writing vectors to buckets with half-warps (a group of 16 threads) is that in
many cases, atomic writes of half-warps are scheduled in an uninterrupted sequence [33]. This results in
vectors consisting of multiple integers to be written without other write operations corrupting them. It
should be noted that the GPU EXPLORE hash table uses closed hashing, i.e., the vectors themselves are
stored in the hash table, as opposed to pointers to those vectors.
When inserting, a warp of 32 threads inserts a single vector into a bucket. The way threads are divided
over a bucket can be observed in Figure 3, which visualizes a single bucket of the GPU EXPLORE
hash table for a vector length of 3. Each thread in a warp is assigned to one integer of the bucket, and to
one integer of the vector. This assignment is done from left to right per half bucket. For this example the
first 3 threads, i.e., the first vector group, are assigned to the first slot in the bucket, and the first thread in
a vector group is assigned to the first integer of the vector. By assigning every thread to a single part of
the vector and of the bucket each thread has a relatively simple task, which can be executed in parallel.
The insertion algorithm first hashes the vector to insert to determine the bucket belonging to the
vector. Each thread checks its place in the bucket and compares the integer on that position to the
corresponding integer of the vector. After comparing, the insertion algorithm then uses CUDA warp
instructions to quickly exchange data between all 32 threads in the warp to determine whether a single vector group of threads has found the vector. If the vector has been found the insertion algorithm
terminates; if the vector has not been found, the algorithm continues.
Figure 4: Example of inserting the vector (8, 2, 1) into the GPU EXPLORE 2.0 hash table.
Figure 4 shows an example of the vector (8, 2, 1) being inserted into the GPU EXPLORE hash table where the first two slots are already occupied. Since these slots are occupied, the first six threads compare their element of the vector to their element of the bucket. The icons above the arrows indicate the result of these comparisons. As can be seen, one thread has a match; however, because not all elements in the slot match, the insertion algorithm does not report a find.
If the vector is not found in the bucket, the insertion algorithm selects the first free slot in the bucket,
in this case the third slot. This selection procedure can be done efficiently using CUDA warp instructions.
Next, the associated threads attempt to insert the vector (8, 2, 1) into the selected slot using a compare
and swap operation. If the insertion fails because another warp had claimed that slot already for another
vector, the algorithm takes the next free slot in the bucket.
If a bucket has no more free slots the next set of hash constants is used, and the next bucket is
probed. This is repeated until all hash constants have been used, and if no insertion succeeds into any of
the buckets, the insertion algorithm reports a failure and the exploration stops.
In GPU EXPLORE 2.0 the hash table is initialized using eight hash functions. After each exploration
step, blocks that have found new vectors use the insertion algorithm to insert any new vectors they found
into the global hash table. The hash table is therefore a vital part of GPU EXPLORE.
Buckets with a length of 32 integers have been chosen because warps in CUDA
consist of 32 threads. This way, every integer in a bucket can be assigned to a single thread. Besides, this
design choice also allows for coalesced memory access: when a warp accesses a continuous block of 32
integers, this operation can be executed in a single memory fetch operation [21]. Uncoalesced accesses,
on the other hand, have to be done individually after each other. By coalescing the accesses, the available
bandwidth to global memory is used efficiently.
4.2 Configurable Bucket Size
While the current implementation of the hash table makes it possible for GPU EXPLORE to achieve a
considerable increase in performance over CPU-based explicit-state model checkers [33], it suffers from
one disadvantage. Namely, after initially scanning the bucket, only x threads, where x is the vector
length, are active at a time. The other 32 − x threads are inactive while they await the result of the atomic
insertion of the active group. If the insertion fails but there is still a free slot in the hash table, another
group of x threads becomes active to attempt an atomic insertion, while the remaining 32 − x threads
again await the result of this operation.
Figure 5: Division of threads in a warp over buckets of size 8
Therefore, for buckets with many free slots in which insertions fail, the majority of the threads in the warp are inactive. Furthermore, the vector size generally does not exceed four integers, which means that when attempting to atomically insert a vector, the majority of the threads in a warp are inactive. Hence,
one possible improvement over the GPU EXPLORE hash table is to reduce the bucket size, so that there
are fewer slots per bucket, and therefore, fewer threads are needed to insert a single vector. As a single
insertion still uses one thread per integer in the bucket, in turn more vectors can be inserted in parallel.
Figure 5 shows what the division of threads over a warp looks like if buckets of size 8 instead of 32
are used. As can be observed, a warp can insert four elements in parallel, as in this diagram each group
of 8 threads inserts a different vector and accesses different buckets in global memory.
The logical consequence of this improvement is that after scanning a bucket, fewer threads are inactive while the vector is being inserted into a free slot. If we suppose that the vector size for a certain
model is 3, and that the new bucket size is 8, then while inserting a vector using the GPU EXPLORE hash
table 32 − 3 = 29 threads are inactive. On the other hand, if four buckets of size 8 are simultaneously
accessed, only 32 − 3 · 4 = 20 threads are inactive, and four vectors are simultaneously processed, as
opposed to only one.
However, while more threads can be active at the same time, smaller buckets also lead to thread
divergence within a warp. First of all, of course, accessing different buckets simultaneously likely leads
to uncoalesced memory accesses. Furthermore, it is also possible that in an insertion procedure, one
group needs to do more work than another in the same warp. For instance, consider that the first group in
the warp fails to find its vector in the designated bucket, and also cannot write it to the bucket since the
latter is full. In that case, the group needs to fetch another bucket. At the same time, another group in the
warp may find its vector in the designated bucket, and therefore be able to stop the insertion procedure.
In such a situation, these two groups will diverge, and the second group will have to wait until the first
group has finished inserting. This means that the use of smaller buckets can only be advantageous if the
performance increase of the smaller buckets outweighs the performance penalty of divergence. In this
paper, we address whether this is true or not in practical explorations.
The suggested performance increase has been experimentally evaluated by comparing an implementation of the hash table with varying bucket size to the original GPU EXPLORE 2.0 hash table, both in isolation and as a part of GPU EXPLORE. The results of this comparison are presented and discussed in Section 5.

Figure 6: Results of inserting a sequence of 100,000,000 randomly generated integers into the hash tables of GPU EXPLORE (GH) and GPU EXPLORE with configurable buckets (GH-CBS). For the bucketed version a bucket size of four integers has been used. [The plot shows the runtime in ms (0 to 2,500) against the duplication in the sequence (0 to 100).]
5 Experiments
In this section we discuss our experiments to evaluate the scalability of GPU EXPLORE. In Section 5.1,
we report on experiments to evaluate the effect of the bucket size on the runtimes of GPU EXPLORE. In
the next two sections, our goal is to determine whether GPU EXPLORE scales well when varying the size
of the model and the performance of the hardware, respectively. For most of the experiments, we use an
NVIDIA Titan X (Maxwell) installed on a machine running Linux Mint 17.2. The number of blocks is set to 6,144, with 512 threads per block. The hash table is allocated 5 GB in global memory.
For our benchmarks, we selected a varied set of models from the CADP toolset [14], the mCRL2
toolset [11] and the BEEM database [23]. The models with a .1 suffix have been modified to obtain a
larger state space.
5.1 Varying the Bucket Size
To test different bucket sizes, two types of experiments have been performed. First, the hash table with
varying bucket sizes has been tested in isolation, where the time taken to insert 100,000,000 elements
has been measured. Second, GPU EXPLORE has been modified such that it uses a hash table with modifiable bucket size. This bucket size can be set at compile time. The performance of this version of
GPU EXPLORE has been compared to the original GPU EXPLORE w.r.t. the exploration of several input
models.
The input data for the performance evaluation of the hash tables in isolation is a set of sequences of
randomly generated integers, each sequence consisting of 100,000,000 vectors with a length of 1 integer.
The sequences vary in how often an element occurs in the sequence. A duplication of 1 means that every
unique element occurs once in the sequence, and a duplication of 100 means that each unique element
in the sequence occurs 100 times. Therefore a sequence with a duplication of 100 has 100,000,000
unique
100
elements. Note that this experiment tries to replicate state space exploration, where many duplicate
insertions are performed when the fan-in is high, i.e., many transitions in the state space lead to the same
state. So this experiment replicates the performance of the hash table for models that vary in the average
fan-in of the states. For each sequence, we measured the total time required to insert all elements.
The results of this comparison are depicted in Figure 6. We refer to the standard GPU EXPLORE
2.0 hash table as GH and the hash table with configurable bucket size as GH-CBS. In the experiment, GH-CBS with a bucket size of four integers has been compared to GH. GH-CBS is slower for sequences where each element only occurs a few times. However, for sequences with a higher duplication degree GH-CBS starts to outperform GH. After all, for the sequences with more duplication, less time is spent on atomically inserting elements, since most elements are already in the hash table. GH-CBS performs up to three times better than GH. We will see later why the amount of duplication is relevant for model
checking.
In addition to testing the performance of the hash tables in isolation, the performance of GH-CBS
has been compared with standard GPU EXPLORE hashing as a part of GPU EXPLORE 2.0. The hash
table underlying GPU EXPLORE, as implemented by Wijs, Neele and Bošnački [33], has been modified
to allow configuration of the bucket size at compile time. We compared the time required for state
space exploration of our implementation, GPU EXPLORE with configurable bucket size, with the original
implementation of GPU EXPLORE 2.0 [33].
Four bucket sizes have been compared to GPU EXPLORE 2.0, namely bucket sizes 4, 8, 16 and 32
integers. The relative performance with these bucket sizes has been recorded with the performance of
GPU EXPLORE 2.0 as baseline. For each model the experiment has been run five times and the average
running time of these five runs has been taken.
The result of these comparisons is illustrated in Figure 7. As can be observed, the total exploration time of GPU EXPLORE with configurable bucket size is for most models larger than the runtime of GPU EXPLORE 2.0. Only szymanski5 and lann7 show a small performance increase for bucket size 4. For the
other instances, however, the new hash table causes a slow down of up to 30%.
There are three reasons why the promising performance shown in the previous experiments is not
reflected here. First, the increased complexity of the hash table negatively affects register pressure, i.e.,
the number of registers each thread requires to execute the kernel. When the register usage in a kernel
increases, the compiler may temporarily place register contents in global memory. This effect is not
observed when the hash table is tested in isolation, as in that case, far fewer registers per thread are
required. The increase in register pressure is also the reason that GPU EXPLORE with a configurable
bucket size set to 32 is slower than GPU EXPLORE with a static bucket size of 32.
Furthermore, smaller bucket sizes result in more thread divergence and more uncoalesced memory
accesses when reading from the hash table. Therefore, the available memory bandwidth is used less
efficiently, leading to a drop in performance. Apparently, the increased potential for parallel insertions
of vectors cannot overcome this drawback.
Lastly, while exploring the state-space, GPU EXPLORE only discovers duplicates if those states have
several incoming transitions. On average, the models used for the experiments have a fan-in of 4 to
6, with some exceptions that have a higher fan-in of around 8 to 11. However, from Figure 6 it can be concluded that, in isolation, GH-CBS only starts to outperform the static hash table when each element is duplicated about 21 times. This partly explains the performance seen in Figure 7.
Figure 7: Relative runtime of GPU EXPLORE 2.0 with variable bucket size (4, 8, 16 and 32 integers), for the models wafer stepper.1, odp.1, 1394.1, asyn3, lamport8, des, szymanski5, lann6, lann7 and asyn3.1. The runtime of GPU EXPLORE 2.0 with a fixed bucket size of 32 integers is used as a reference and is normalized to 1.
5.2 Varying the Model Size
In addition to experimentally comparing the effect of different bucket sizes, we also investigated how
GPU EXPLORE 2.0 behaves when exploring state spaces of different size. We performed this experiment
with two different models. The first is a version of the Gas Station model [15] where the number of
pumps is fixed to two. We varied the number of customers between two and twelve. None of these
instances requires a state vector longer than two 32-bit integers.
The other model is a simple implementation of a ring-structured network, where one token is continuously passed forward between the nodes. Each node has two transitions to communicate with its
neighbours and a further three internal transitions. Here, we varied the number of nodes in the network.
We executed the tool five times on each instance and computed the average runtime. The results are
listed in Table 1. For the smallest instances, the performance of GPU EXPLORE (measured in states/sec)
is a lot worse compared to the larger instances. This has two main reasons. First of all, the relative
overhead suffered from initialization and work scanning is higher. Second, the parallelism offered by
the GPU cannot be fully exploited, because the size of one search layer is too small to occupy all blocks
with work.
For the gas station model, peak performance is achieved for the instance with ten customers, which
has 60 million states. For larger instances, the performance decreases slightly due to the increasing
occupancy of the hash table. This leads to more hash collisions, therefore more time is lost on rehashing.
The results of the token ring model show another interesting scalability aspect. There is a performance drop between the instances with 10 nodes and 11 nodes. This is caused by the fact that the
instance with 11 nodes is the smallest for which the state vector exceeds 32 bits in length. Longer state
vectors lead to more memory accesses throughout the state-space generation algorithm.
Table 1: Performance of GPU EXPLORE for the Gas Station and the Token Ring model while varying the amount of processes.

Gas Station:
N    states         time (s)   states/sec
2    165            0.016      10,294
3    1,197          0.023      51,004
4    7,209          0.035      206,812
5    38,313         0.062      621,989
6    186,381        0.209      892,039
7    849,285        0.718      1,183,246
8    3,680,721      1.235      2,981,535
9    15,333,057     3.093      4,957,437
10   61,863,669     11.229     5,509,307
11   243,104,733    44.534     5,458,810
12   934,450,425    178.817    5,225,726

Token Ring:
N    states         time (s)   states/sec
2    12             0.027      449
3    54             0.047      1,138
4    216            0.067      3,212
5    810            0.086      9,402
6    2,916          0.113      25,702
7    10,206         0.180      56,571
8    34,992         0.275      127,142
9    118,098        0.488      242,225
10   393,660        1.087      362,294
11   1,299,078      6.394      203,159
12   4,251,528      8.345      509,462
13   13,817,466     8.138      1,697,864
14   44,641,044     22.060     2,023,649
15   143,489,070    68.233     2,102,934
16   459,165,024    215.889    2,126,853
5.3 Speed-up Due to the Pascal Architecture
The Titan X we used for most of the benchmarks is based on the Maxwell architecture, and was launched
in 2015. Since then, NVIDIA has released several other high-end GPUs. Most aspects have been improved: the architecture has been revised, there are more CUDA cores on the GPU and there is more
global memory available. To investigate how well GPU EXPLORE scales with faster hardware, we performed several experiments with a Titan X with the Maxwell architecture (TXM) and with a Titan X
with the Pascal architecture (TXP). The latter was released in 2016, and the one we used is installed in
the DAS-5 cluster [3] on a node running CentOS Linux 7.2.
The TXM experiments were performed with 6,144 blocks, while for the TXP, GPU EXPLORE was
set to use 14,336 blocks. The improvements of the hardware allow GPU EXPLORE to launch more blocks. To evaluate the speed-ups compared to a single-core CPU approach, we also conducted experiments with the Generator tool of the latest version (2017-i) of the CADP toolbox [14]. These have been performed on nodes of the DAS-5 cluster, which are equipped with an Intel Haswell E5-2630 v3 2.4 GHz CPU, 64 GB memory, and CentOS Linux 7.2.
The results are listed in Table 2. The reported runtimes are averages after having run the corresponding tool ten times. For most of the larger models, we see a speed-up of about 2 times when running
GPU EXPLORE on a TXP compared to running it on a TXM. The average speed-up is 1.73. This indicates that GPU EXPLORE scales well with a higher memory bandwidth and a larger amount of CUDA
cores.
Comparing GPU EXPLORE on a TXP with single-core CPU analysis, the average speed-up is 183.91,
and if we only take state spaces into account consisting of at least 10 million states, the average speed-up is 280.81. Considering that with multi-core model checking, linear speed-ups can be achieved [16],
this roughly amounts to using 180 and 280 CPU cores, respectively. This, in combination with the
Table 2: Performance comparison of single-core Generator of CADP (Gen) and GPU EXPLORE running on a Titan X with the Maxwell architecture (TXM) and with the Pascal architecture (TXP).

                                 runtime (seconds)           speed-ups
model            states          Gen       TXM     TXP     Gen-TXP   TXM-TXP
acs              4,764           4.17      0.05    0.06    71.91     0.89
odp              91,394          3.26      0.08    0.05    70.76     1.64
1394             198,692         2.81      0.20    0.15    18.95     1.36
acs.1            200,317         5.30      0.18    0.14    37.88     1.27
transit          3,763,192       34.36     0.77    0.48    70.99     1.59
wafer stepper.1  3,772,753       22.95     1.01    0.51    45.17     2.00
odp.1            7,699,456       65.50     1.34    0.66    99.54     2.03
1394.1           10,138,812      82.71     1.42    0.83    99.66     1.71
asyn3            15,688,570      358.58    3.15    1.98    181.47    1.59
lamport8         62,669,317      1048.13   5.81    3.11    336.80    1.87
des              64,498,297      477.43    12.34   6.65    71.84     1.86
szymanski5       79,518,740      1516.71   7.48    3.90    389.10    1.92
peterson7        142,471,098     3741.87   31.60   15.74   237.81    2.01
lann6            144,151,629     2751.15   10.57   5.39    510.80    1.96
lann7            160,025,986     3396.19   16.67   8.41    403.92    1.98
asyn3.1          190,208,728     4546.84   31.03   15.37   295.92    2.02
average                                                    183.91    1.73
observation that frequently, speed-ups over 300 times and once even over 500 times are achieved, clearly
demonstrates the effectiveness of using GPUs for explicit-state model checking.
6 Conclusion and Future Work
In this paper, we have reported on a number of scalability experiments we conducted with the GPU
explicit-state model checker GPU EXPLORE. In earlier work, we identified potential to further improve
its hash table [9]. However, experiments in which we varied the bucket size in GPU EXPLORE provided
the insight that only for very specific input models, and only if the bucket size is set very small (4), some
speed-up becomes noticeable. In the context of the entire computation of GPU EXPLORE, the additional
register use per thread and the introduced uncoalesced memory accesses and thread divergence make it
not worthwhile to make the bucket size configurable. This may be different for other applications, as our
experiments with the hash table in isolation point out that hashing can be made more efficient in this way.
Besides this, we have also conducted experiments with models of different sizes. We scaled up a Gas
Station model and a Token Ring model and obtained very encouraging results; for the second model,
GPU EXPLORE can generate up to 2.1 million states per second, and for the first model, at its peak,
GPU EXPLORE is able to generate about 5.5 million states per second, exploring a state space of 934.5
million states in under three minutes. We believe these are very impressive numbers that demonstrate the
potential of GPU model checking.
Finally, we reported on some experiments we conducted with new GPU hardware. The Titan X
On the Scalability of the GPU EXPLORE Explicit-State Model Checker
with the Pascal architecture from 2016 provides for our benchmark set of models an average speed-up
of 1.73 w.r.t. the Titan X with the Maxwell architecture from 2015. We also compared the runtimes of
GPU EXPLORE running on the Pascal Titan X with the CPU single-core G ENERATOR tool of the C ADP
toolbox, and measured an average speed-up of 183.91 for the entire benchmark set of models, and of
280.81 for the models yielding a state space of at least 10 million states. Often speed-ups over 300 times
have been observed, and in one case even over 500 times.
Future work. For future work, we will consider various possible extensions to the tool. First of all,
the ability to write explored state spaces to disk will open up the possibility to postprocess and further
analyse the state spaces. This could be done directly, or after application of bisimulation reduction on
the GPU [26].
Second of all, we will work on making the tool more user friendly. Currently, providing an input
model is quite cumbersome, since GPU EXPLORE requires a user to express system behaviour in the low
level description formalism of networks of LTSs. Specifying systems would be much more convenient if
a higher-level modelling language would be supported. We will investigate which modelling languages
would be most suitable for integration in the current tool.
Finally, we will also consider the application of GPU EXPLORE to conduct computations similar to
model checking, such as performance analysis [30]. This requires incorporating time into the input
models, for instance by including special actions to represent the passage of time [25].
References
[1] Dan A. Alcantara, Vasily Volkov, Shubhabrata Sengupta, Michael Mitzenmacher, John D. Owens & Nina
Amenta (2012): Building an Efficient Hash Table on the GPU. In: GPU Computing Gems Jade Edition,
Morgan Kaufmann Publishers Inc., pp. 39–53, doi:10.1016/B978-0-12-385963-1.00004-6.
[2] Christel Baier & Joost-Pieter Katoen (2008): Principles of model checking. MIT Press.
[3] Henri Bal, Dick Epema, Cees de Laat, Rob van Nieuwpoort, John Romein, Frank Seinstra, Cees Snoek &
Harry Wijshoff (2016): A Medium-Scale Distributed System for Computer Science Research: Infrastructure
for the Long Term. IEEE Computer 49(5), pp. 54–63, doi:10.1109/MC.2016.127.
[4] Jiřı́ Barnat, Petr Bauch, Luboš Brim & Milan Češka (2012): Designing Fast LTL Model Checking Algorithms
for Many-Core GPUs. JPDC 72(9), pp. 1083–1097, doi:10.1016/j.jpdc.2011.10.015.
[5] Ezio Bartocci, Richard DeFrancisco & Scott A. Smolka (2014): Towards a GPGPU-parallel SPIN Model
Checker. In: SPIN 2014, ACM, New York, NY, USA, pp. 87–96, doi:10.1145/2632362.2632379.
[6] Rajesh Bordawekar (2014): Evaluation of Parallel Hashing Techniques. Presentation at GTC’14 (last
checked on 17 February 2017). http://on-demand.gputechconf.com/gtc/2014/presentations/
S4507-evaluation-of-parallel-hashing-techniques.pdf.
[7] Dragan Bošnački, Stefan Edelkamp, Damian Sulewski & Anton Wijs (2011): Parallel Probabilistic Model
Checking on General Purpose Graphics Processors. STTT 13(1), pp. 21–35, doi:10.1007/s10009-010-0176-4.
[8] Dragan Bošnački, Stefan Edelkamp, Damian Sulewski & Anton Wijs (2010): GPU-PRISM: An Extension of
PRISM for General Purpose Graphics Processing Units. In: PDMC, IEEE, pp. 17–19, doi:10.1109/PDMC-HiBi.2010.11.
[9] Nathan Cassee & Anton Wijs (2017): Analysing the Performance of GPU Hash Tables for State Space
Exploration. In: GaM, EPTCS, Open Publishing Association.
N.W. Cassee, T.S. Neele & A.J. Wijs
[10] Milan Češka, Petr Pilař, Nicola Paoletti, Luboš Brim & Marta Kwiatkowska (2016): PRISM-PSY: Precise
GPU-Accelerated Parameter Synthesis for Stochastic Systems. In: TACAS, LNCS 9636, Springer, pp. 367–
384, doi:10.1007/978-3-642-54862-8.
[11] Sjoerd Cranen, Jan Friso Groote, Jeroen J. A. Keiren, Frank P. M. Stappers, Erik P. De Vink, Wieger Wesselink & Tim A. C. Willemse (2013): An Overview of the mCRL2 Toolset and Its Recent Advances. In:
TACAS, LNCS 7795, Springer, pp. 199–213, doi:10.1007/978-3-642-36742-7 15.
[12] Stefan Edelkamp & Damian Sulewski (2010): Efficient Explicit-State Model Checking on General Purpose
Graphics Processors. In: SPIN, LNCS 6349, Springer, pp. 106–123, doi:10.1007/978-3-642-16164-3 8.
[13] Stefan Edelkamp & Damian Sulewski (2010): External memory breadth-first search with delayed duplicate
detection on the GPU. In: MoChArt, LNCS 6572, Springer, pp. 12–31, doi:10.1007/978-3-642-20674-0 2.
[14] Hubert Garavel, Frédéric Lang, Radu Mateescu & Wendelin Serwe (2013): CADP 2011: A Toolbox for the Construction and Analysis of Distributed Processes. STTT 15(2), pp. 89–107, doi:10.1007/978-3-540-73368-3 18.
[15] David Heimbold & David Luckham (1985): Debugging Ada Tasking Programs. IEEE Software 2(2), pp.
47–57, doi:10.1109/MS.1985.230351.
[16] Alfons Laarman (2014): Scalable Multi-Core Model Checking. Ph.D. thesis, University of Twente, doi:10.3990/1.9789036536561.
[17] Frédéric Lang (2006): Refined Interfaces for Compositional Verification. In: FORTE, LNCS 4229, Springer,
pp. 159–174, doi:10.1007/11888116 13.
[18] Prabhakar Misra & Mainak Chaudhuri (2012): Performance Evaluation of Concurrent Lock-free Data Structures on GPUs. In: ICPADS, pp. 53–60, doi:10.1109/ICPADS.2012.18.
[19] Maryam Moazeni & Majid Sarrafzadeh (2012): Lock-free Hash Table on Graphics Processors. In: SAAHPC,
pp. 133–136, doi:10.1109/SAAHPC.2012.25.
[20] Thomas Neele, Anton Wijs, Dragan Bošnački & Jaco van de Pol (2016): Partial Order Reduction for GPU
Model Checking. In: ATVA, LNCS 9938, Springer, pp. 357–374, doi:10.1007/978-3-319-46520-3 23.
[21] John Nickolls, Ian Buck, Michael Garland & Kevin Skadron (2008): Scalable Parallel Programming with
CUDA. Queue 6(2), pp. 40–53, doi:10.1145/1365490.1365500.
[22] Rasmus Pagh & Flemming Friche Rodler (2001): Cuckoo Hashing. In: ESA, LNCS 2161, Springer, pp.
121–133, doi:10.1007/3-540-44676-1 10.
[23] Radek Pelánek (2007): BEEM: Benchmarks for Explicit Model Checkers. In: SPIN 2007, LNCS 4595, pp.
263–267, doi:10.1007/978-3-540-73370-6 17.
[24] Steven van der Vegt & Alfons Laarman (2011): A Parallel Compact Hash Table. In: MEMICS, LNCS 7119,
Springer, pp. 191–204, doi:10.1007/978-3-642-25929-6 18.
[25] Anton Wijs (2007): Achieving Discrete Relative Timing with Untimed Process Algebra. In: ICECCS, IEEE,
pp. 35–44, doi:10.1109/ICECCS.2007.13.
[26] Anton Wijs (2015): GPU Accelerated Strong and Branching Bisimilarity Checking. In: TACAS, LNCS
9035, Springer, pp. 368–383, doi:10.1007/978-3-662-46681-0 29.
[27] Anton Wijs (2016): BFS-Based Model Checking of Linear-Time Properties With An Application on GPUs.
In: CAV, Part II, LNCS 9780, Springer, pp. 472–493, doi:10.1007/978-3-319-41540-6 26.
[28] Anton Wijs & Dragan Bošnački (2014): GPUexplore: Many-Core On-the-Fly State Space Exploration Using
GPUs. In: TACAS, LNCS 8413, pp. 233–247, doi:10.1007/978-3-642-54862-8 16.
[29] Anton Wijs & Dragan Bošnački (2016): Many-Core On-The-Fly Model Checking of Safety Properties Using
GPUs. STTT 18(2), pp. 169–185, doi:10.1007/s10009-015-0379-9.
[30] Anton Wijs & Wan Fokkink (2005): From χt to µCRL: Combining Performance and Functional Analysis.
In: ICECCS, IEEE, pp. 184–193, doi:10.1109/ICECCS.2005.51.
[31] Anton Wijs, Joost-Pieter Katoen & Dragan Bošnački (2014): GPU-Based Graph Decomposition into
Strongly Connected and Maximal End Components. In: CAV, LNCS 8559, Springer, pp. 309–325,
doi:10.1007/978-3-319-08867-9 20.
[32] Anton Wijs, Joost-Pieter Katoen & Dragan Bošnački (2016): Efficient GPU Algorithms for Parallel Decomposition of Graphs into Strongly Connected and Maximal End Components. Formal Methods in System
Design 48(3), pp. 274–300, doi:10.1007/s10703-016-0246-7.
[33] Anton Wijs, Thomas Neele & Dragan Bošnački (2016): GPUexplore 2.0: Unleashing GPU Explicit-State
Model Checking. In: FM, LNCS 9995, Springer, pp. 694–701, doi:10.1007/978-3-319-48989-6 42.
[34] Zhimin Wu, Yang Liu, Yun Liang & Jun Sun (2014): GPU Accelerated Counterexample Generation in LTL
Model Checking. In: ICFEM, LNCS 8829, Springer, pp. 413–429, doi:10.1007/978-3-319-11737-9 27.
[35] Zhimin Wu, Yang Liu, Jun Sun, Jianqi Shi & Shengchao Qin (2015): GPU Accelerated On-the-Fly Reachability Checking. In: ICECCS 2015, pp. 100–109, doi:10.1109/ICECCS.2015.21.
| 8 |
Testing High-dimensional Covariance Matrices under the Elliptical Distribution and Beyond
arXiv:1707.04010v1 [] 13 Jul 2017
Xinxin Yang∗,
Xinghua Zheng†,
Jiaqi Chen‡,
Hua Li§
Abstract
We study testing high-dimensional covariance matrices when data exhibit heteroskedasticity.
The observations are modeled as Yi = ωi Zi , where Zi ’s are i.i.d. p-dimensional random vectors
with mean 0 and covariance Σ, and ωi ’s are random scalars reflecting heteroskedasticity. The
model is an extension of the elliptical distribution, and accommodates several stylized facts of real
data including heteroskedasticity, heavy-tailedness, asymmetry, etc. We aim to test H0 : Σ ∝ Σ0 ,
in the high-dimensional setting where both the dimension p and the sample size n grow to infinity
proportionally. We remove the heteroskedasticity by self-normalizing the observations, and establish a CLT for the linear spectral statistic (LSS) of $\tilde S_n := \frac{p}{n}\sum_{i=1}^n Y_i Y_i^T/|Y_i|^2 = \frac{p}{n}\sum_{i=1}^n Z_i Z_i^T/|Z_i|^2$.
The CLT is different from the existing ones for the LSS of the usual sample covariance matrix $S_n := \frac{1}{n}\sum_{i=1}^n Z_i Z_i^T$ (Bai and Silverstein (2004), Najim and Yao (2016)). Our tests based on the new CLT neither assume a specific parametric distribution nor involve the fourth moment of $Z_i$.
Numerical studies show that our tests work well even when Zi ’s are heavy-tailed.
Keywords: Elliptical Distribution, Covariance Matrix, High Dimension, Central Limit Theorem,
Self-normalization
1 Introduction
1.1 Tests for high-dimensional covariance matrices
Testing covariance matrices is of fundamental importance in multivariate analysis. There has been a
long history of study on testing: (1) the covariance matrix Σ is equal to a given matrix, or (2) the
∗ Department of Mathematics, The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong. [email protected]
† Department of ISOM, The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong. [email protected]
‡ Department of Mathematics, Harbin Institute of Technology, Harbin, 150001, China. [email protected]
§ Department of Applied Statistics, Chang Chun University, Changchun, 130001, China. [email protected]
covariance matrix Σ is proportional to a given matrix. To be specific, for a given covariance matrix Σ0 ,
one aims to test either
$$H_0 : \Sigma = \Sigma_0 \quad \text{vs.} \quad H_a : \Sigma \ne \Sigma_0, \quad \text{or} \tag{1.1}$$
$$H_0 : \Sigma \propto \Sigma_0 \quad \text{vs.} \quad H_a : \Sigma \not\propto \Sigma_0. \tag{1.2}$$
In the classical setting where the dimension p is fixed and the sample size n goes to infinity, the sample
covariance matrix is a consistent estimator, and further inference can be made based on the associated
CLT. Examples include the likelihood ratio tests (see, e.g., Muirhead (1982), Sections 8.3 and 8.4, and
Anderson (2003), Sections 10.7 and 10.8), and the locally most powerful invariant tests (John (1971),
Nagao (1973)).
In the high-dimensional setting, because the sample covariance matrix is inconsistent, the conventional tests may not apply. New methods for testing high-dimensional covariance matrices have been
developed. The existing tests were first proposed under the multivariate normal distribution, then have
been modified to fit more generally distributed data.
• Multivariate normally distributed data. When p/n → y ∈ (0, ∞), Ledoit and Wolf (2002)
show that John’s test for (1.2) is still consistent and propose a modified Nagao’s test for (1.1).
Srivastava (2005) introduces a new test for (1.2) under a more general condition that n = O(pδ )
for some δ ∈ (0, 1]. Birke and Dette (2005) show that the asymptotic null distributions of John’s
and the modified Nagao’s test statistics in Ledoit and Wolf (2002) are still valid when p/n → ∞.
Relaxing the normality assumption but still assuming the kurtosis of data equals 3, Bai et al.
(2009) develop a corrected likelihood ratio test for (1.1) when p/n → y ∈ (0, 1).
• More generally distributed data. Chen et al. (2010) generalize the results in Ledoit and Wolf
(2002) without assuming normality nor an explicit relationship between p and n. By relaxing
the kurtosis assumption, Wang et al. (2013) extend the corrected likelihood ratio test in Bai et al.
(2009) and the modified Nagao’s test in Ledoit and Wolf (2002) for testing (1.1). Along this line,
Wang and Yao (2013) propose two tests by correcting the likelihood ratio test and John’s test for
testing (1.2).
1.2 The elliptical distribution and its applications
The elliptically distributed data can be expressed as
Y = ωZ,
where ω is a positive random scalar, Z is a p-dimensional random vector from N(0, Σ), and further ω
and Z are independent of each other. It is a natural generalization of the multivariate normal distribution, and contains many widely used distributions including the multivariate t-distribution, the symmetric multivariate Laplace distribution and the symmetric multivariate stable distribution. See Fang et al.
(1990) for further details.
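As a concrete illustration of this representation (our own minimal sketch, not taken from the paper), a multivariate t-distributed sample is elliptical: take $\omega = \sqrt{\nu/\chi^2_\nu}$ independent of a Gaussian vector Z.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 50, 200

# Multivariate t with nu degrees of freedom as an elliptical distribution:
# Y = omega * Z, Z ~ N(0, Sigma), omega = sqrt(nu / chi2_nu), omega independent of Z.
nu = 5
Sigma_half = np.eye(p)                      # Sigma = I for this illustration
Z = rng.standard_normal((n, p)) @ Sigma_half
omega = np.sqrt(nu / rng.chisquare(nu, size=n))
Y = omega[:, None] * Z                      # each row Y_i = omega_i * Z_i

# The scalar omega_i inflates every coordinate of Z_i by the same factor,
# producing heavier tails than the Gaussian while keeping elliptical symmetry.
print(Y.shape)
```

The same recipe with other mixing distributions for ω yields the Laplace and stable members of the elliptical family.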
One of our motivations of this study arises from the wide applicability of the elliptical distribution
to financial data analysis. The heavy-tailedness of stock returns has been extensively studied, dating
back at least to Fama (1965) and Mandelbrot (1967). Accommodating both heavy-tailedness and flexible shapes makes the elliptical distribution a more admissible candidate for stock-return models than
the Gaussian distribution; see, e.g., Owen and Rabinovitch (1983) and Bingham and Kiesel (2002).
McNeil et al. (2005) state that “elliptical distributions ... provided far superior models to the multivariate normal for daily and weekly US stock-return data” and that “multivariate return data for groups of returns of similar type often look roughly elliptical.” Theoretically, Owen and Rabinovitch (1983)
show that the elliptical distribution extends, among others, the mutual fund separation theorems and
the theory of the capital asset pricing model (CAPM). Chamberlain (1983) finds that the mean-variance
framework implies that asset returns are elliptically distributed. The Markowitz theory for large portfolio optimization has also been studied under the elliptical distribution in El Karoui (2009), El Karoui
(2010) and El Karoui (2013). Recently, Bhattacharyya and Bickel (2014) develop an adaptive procedure for estimating the elliptic density under both low- and high-dimensional settings.
1.3 Performance of the existing tests under the elliptical distribution
Are the existing tests for covariance matrices applicable to the elliptical distribution under the high-dimensional setting? Both numerical and theoretical analyses give a negative answer.
We start with a simple numerical study to investigate the empirical sizes of the aforementioned
tests. Consider observations Yi = ωi Zi , i = 1, · · · , n, where
(i) ωi ’s are absolute values of i.i.d. standard normal random variables,
(ii) Zi ’s are i.i.d. p-dimensional standard multivariate normal random vectors, and
(iii) ωi ’s and Zi ’s are independent of each other.
Under such a setting, Yi ’s are i.i.d. random vectors with mean 0 and covariance matrix I. We will test
both (1.1) and (1.2).
For testing (1.1), we examine the tests in Ledoit and Wolf (2002) (LW test), Bai et al. (2009) (BJYZ
test), Chen et al. (2010) (CZZ test) and Wang et al. (2013) (WYMC-LR and WYMC-LW tests). Table 1
reports the empirical sizes for testing H0 : Σ = I at 5% significance level.
                 p/n = 0.5                                p/n = 2
  p     LW    BJYZ    CZZ    WYMC-LR  WYMC-LW      LW     CZZ    WYMC-LW
 100   100%   100%   54.0%    100%     100%       100%   50.2%    100%
 200   100%   100%   51.6%    100%     100%       100%   53.0%    100%
 500   100%   100%   52.3%    100%     100%       100%   53.3%    100%

Table 1. Empirical sizes of the LW, BJYZ, CZZ, WYMC-LR and WYMC-LW tests for testing H0 : Σ = I at 5% significance level. Data are generated as Yi = ωi Zi where ωi ’s are absolute values of i.i.d. N(0, 1), Zi ’s are i.i.d. N(0, I), and further ωi ’s and Zi ’s are independent of each other. The results are based on 10,000 replications for each pair of p and n.
We observe from Table 1 that the empirical sizes of these tests are far higher than the nominal level
of 5%, suggesting that they are inconsistent for testing (1.1) under the elliptical distribution.
For testing (1.2), we apply the tests proposed by Ledoit and Wolf (2002) (LW test), Srivastava
(2005) (S test), Chen et al. (2010) (CZZ test) and Wang and Yao (2013) (WY-LR and WY-JHN tests).
Table 2 reports the empirical sizes of these tests for testing H0 : Σ ∝ I at 5% significance level.
                 p/n = 0.5                                 p/n = 2
  p     LW     S     CZZ    WY-LR   WY-JHN      LW      S     CZZ    WY-JHN
 100   100%  100%   51.8%   100%     100%      100%   100%   50.2%    100%
 200   100%  100%   53.0%   100%     100%      100%   100%   52.3%    100%
 500   100%  100%   52.3%   100%     100%      100%   100%   53.5%    100%

Table 2. Empirical sizes of the LW, S, CZZ, WY-LR and WY-JHN tests for testing H0 : Σ ∝ I at 5% significance level. Data are generated as Yi = ωi Zi where ωi ’s are absolute values of i.i.d. N(0, 1), Zi ’s are i.i.d. N(0, I), and further ωi ’s and Zi ’s are independent of each other. The results are based on 10,000 replications for each pair of p and n.
From Table 2, we see again that the empirical sizes of the examined tests are severely distorted.
To conclude, Tables 1 and 2 show that even under such a rather simple setup, the existing tests fail
badly. New tests are therefore needed.
Theoretically, the distorted sizes in Tables 1 and 2 are not unexpected. In fact, denote $S_n = \frac{1}{n}\sum_{i=1}^n Z_i Z_i^T$ and $S_n^{\omega} = \frac{1}{n}\sum_{i=1}^n Y_i Y_i^T = \frac{1}{n}\sum_{i=1}^n \omega_i^2 Z_i Z_i^T$. The celebrated Marčenko-Pastur theorem states that the empirical spectral distribution (ESD) of $S_n$ converges to the Marčenko-Pastur law. However, Theorem 1 of Zheng and Li (2011) implies that the ESD of $S_n^{\omega}$ will not converge to the Marčenko-Pastur law except in the trivial situation where $\omega_i$'s are constant. Since all the aforementioned tests involve certain aspects of the limiting ESD (LSD) of $S_n^{\omega}$, the asymptotic null distributions of the involved test statistics are different from the ones in the usual setting, and consequently the tests are no longer consistent.
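This spectral distortion is easy to observe numerically. The following sketch (ours, with illustrative parameter choices) compares the eigenvalues of the usual sample covariance matrix and its heteroskedastic counterpart against the Marčenko-Pastur support $[(1-\sqrt{y})^2, (1+\sqrt{y})^2]$:

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 200, 400                              # y = 0.5
y = p / n
Z = rng.standard_normal((p, n))              # columns are Z_i, Sigma = I
omega = np.abs(rng.standard_normal(n))       # heteroskedastic scalars

S = (Z @ Z.T) / n                            # usual sample covariance S_n
Y = Z * omega                                # columns Y_i = omega_i * Z_i
S_omega = (Y @ Y.T) / n                      # S_n^omega

a_minus, a_plus = (1 - np.sqrt(y)) ** 2, (1 + np.sqrt(y)) ** 2
ev_S = np.linalg.eigvalsh(S)
ev_So = np.linalg.eigvalsh(S_omega)

# Eigenvalues of S_n concentrate on the Marcenko-Pastur support, while
# those of S_n^omega spread beyond it (here with a 0.1 tolerance margin).
frac_out_S = np.mean((ev_S < a_minus - 0.1) | (ev_S > a_plus + 0.1))
frac_out_So = np.mean((ev_So < a_minus - 0.1) | (ev_So > a_plus + 0.1))
print(frac_out_S, frac_out_So)
```

With random ω a visible fraction of the eigenvalues of $S_n^{\omega}$ escapes the support, which is exactly why tests calibrated against the Marčenko-Pastur limit break down.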
1.4 Our model and aim of this study
In various real situations, the assumption that the observations are i.i.d. is too strong to hold. An important source of violation is (conditional) heteroskedasticity, which is encountered in a wide range
of applications. For instance, in finance, it is well documented that stock returns are (conditionally) heteroskedastic, which motivated the development of ARCH and GARCH models (Engle (1982),
Bollerslev (1986)). In engineering, Yucek and Arslan (2009) explain that the heteroskedasticity of
noise is one of the factors that degrade the performance of target detection systems.
In this paper, we study testing high-dimensional covariance matrices when the data may exhibit
heteroskedasticity. Specifically, we consider the following model. Denote by Yi , i = 1, · · · , n, the
observations, which can be decomposed as
$$Y_i = \omega_i Z_i, \tag{1.3}$$
where
(i) $\omega_i$'s are positive random scalars reflecting heteroskedasticity,
(ii) $Z_i$'s can be written as $Z_i = \Sigma^{1/2}\tilde Z_i$, where $\tilde Z_i$ consists of i.i.d. standardized random variables,
(iii) $\omega_i$'s can depend on each other as well as on $\{Z_i : i = 1, 2, \cdots, n\}$ in an arbitrary way, and
(iv) $\omega_i$'s do not need to be stationary.
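For concreteness, the sketch below (ours; the ARCH(1)-style recursion for the $\omega_i$'s is a hypothetical choice, not from the paper) generates data satisfying (1.3) in which $\omega_i$ depends on the past observations — a form of serial dependence excluded by the elliptical distribution but allowed by the model:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 20, 500

# ARCH(1)-style heteroskedasticity: omega_i^2 = a0 + a1 * mean(Y_{i-1}^2),
# so omega_i depends on the previous observation.
a0, a1 = 0.5, 0.4
Z = rng.standard_normal((n, p))              # Sigma = I
omega2 = np.empty(n)
Y = np.empty((n, p))
omega2[0] = a0
Y[0] = np.sqrt(omega2[0]) * Z[0]
for i in range(1, n):
    omega2[i] = a0 + a1 * np.mean(Y[i - 1] ** 2)
    Y[i] = np.sqrt(omega2[i]) * Z[i]
print(omega2[:3])
```

Nothing in the tests developed below requires knowing, or even modeling, this recursion.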
Model (1.3) incorporates the elliptical distribution as a special case. This general model further
possesses several important advantages:
• It can be considered as a multivariate extension of the ARCH/GARCH model, and accommodates the conditional heteroskedasticity in real data. In the ARCH/GARCH model, the volatility
process is serially dependent and depends on past information. Such dependence is excluded
from the elliptical distribution; however, it is perfectly compatible with Model (1.3).
• The dependence between ωi and Zi can capture the leverage effect in financial econometrics, which accounts for the negative correlation between asset return and change in volatility. Various research
has been conducted to study the leverage effect; see, e.g., Schwert (1989), Campbell and Hentschel
(1992) and Aït-Sahalia et al. (2013).
• Furthermore, it can capture the (conditional) asymmetry of data by allowing Zi ’s to be asymmetric. The asymmetry is another stylized fact of financial data. For instance, the empirical study
in Singleton and Wingender (1986) shows high skewness in individual stock returns. Skewness
is also reported in exchange rate returns in Peiro (1999). Christoffersen (2012) documents that
asymmetry exists in standardized returns; see Chapter 6 therein.
Since ωi ’s are not required to be stationary, the unconditional covariance matrix may not exist, in
which case there is no basis for testing (1.1). Testing (1.2), however, still makes perfect sense, because
the scalars ωi ’s only scale up or down the covariance matrix by a constant. We henceforth focus on
testing (1.2). As usual, by working with $\Sigma_0^{-1/2} Y_i$, testing (1.2) can be reduced to testing
$$H_0 : \Sigma \propto I \quad \text{vs.} \quad H_a : \Sigma \not\propto I. \tag{1.4}$$
In the following, we focus on testing (1.4), in the high-dimensional setting where both p and n grow to
infinity with the ratio p/n → y ∈ (0, ∞).
1.5 Summary of main results
To deal with heteroskedasticity, we propose to self-normalize the observations. To be specific, we focus
on the self-normalized observations Yi / |Yi |, where | · | stands for the Euclidean norm. Observe that
$$\frac{Y_i}{|Y_i|} = \frac{Z_i}{|Z_i|}, \qquad i = 1, 2, \cdots, n.$$
Hence ωi ’s no longer play a role, and this is exactly the reason why we make no assumption on ωi ’s.
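This identity is immediate to verify numerically; the sketch below (ours) checks it for arbitrary positive scalars $\omega_i$:

```python
import numpy as np

rng = np.random.default_rng(3)
p, n = 30, 100
Z = rng.standard_normal((n, p))
omega = np.abs(rng.standard_normal(n)) + 0.1    # arbitrary positive scalars
Y = omega[:, None] * Z                          # rows Y_i = omega_i * Z_i

# Self-normalized observations: Y_i / |Y_i| coincides with Z_i / |Z_i|,
# so the scalars omega_i drop out entirely.
Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
assert np.allclose(Yn, Zn)
```

Whatever distribution or dependence structure the ωi's have, the normalized data are identical.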
There is, however, no such thing as a free lunch. Self-normalization introduces a new challenge in
that the entries of Zi /|Zi | are dependent in an unusual fashion. To see this, consider the simplest case
where Zi ’s are i.i.d. standard multivariate normal random vectors. In this case, the entries of Zi ’s are
i.i.d. random variables from N(0, 1). However, the self-normalized random vector Zi /|Zi | is uniformly
distributed over the p-dimensional unit sphere (known as the Haar distribution on the sphere), and its
p entries are dependent on each other in an unusual way.
To conduct tests, we need some kind of CLTs. Our strategy is to establish a CLT for the linear
spectral statistic (LSS) of the sample covariance matrix based on the self-normalized observations,
specifically,
$$\tilde S_n = \frac{p}{n}\sum_{i=1}^n \frac{Y_i Y_i^T}{|Y_i|^2} = \frac{p}{n}\sum_{i=1}^n \frac{Z_i Z_i^T}{|Z_i|^2}. \tag{1.5}$$
When $|Y_i|$ or $|Z_i| = 0$, we adopt the convention that $0/0 = 0$. Note that $\tilde S_n$ is not the sample correlation matrix, which normalizes each variable by its standard deviation. Here we are normalizing each observation by its Euclidean norm.
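In code, forming $\tilde S_n$ amounts to normalizing each observation to unit Euclidean norm (a minimal sketch, ours); note that its trace then equals $p$ exactly, since each normalized column has unit norm:

```python
import numpy as np

rng = np.random.default_rng(4)
p, n = 40, 80
Z = rng.standard_normal((p, n))             # columns are Z_i

# S_tilde = (p/n) * sum_i Z_i Z_i^T / |Z_i|^2, per (1.5)
U = Z / np.linalg.norm(Z, axis=0)           # normalize each column
S_tilde = (p / n) * (U @ U.T)

# tr S_tilde = (p/n) * n = p, because each normalized column has unit norm
print(np.trace(S_tilde))                    # → 40.0 up to floating-point error
```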
As we shall see below, our CLT is different from the ones for the usual sample covariance matrix in Bai and Silverstein (2004) and Najim and Yao (2016). One important advantage of our result is that applying our CLT requires neither $E(Z_{11}^4) = 3$ as in Bai and Silverstein (2004), nor the estimation of $E(Z_{11}^4)$, which is inevitable in Najim and Yao (2016). Based on the new CLT, we propose two tests by modifying the likelihood ratio test and John's test. More tests can be established based on our general CLT. Numerical study shows that our proposed tests work well even when $E(Z_{11}^4)$ does not exist. Because heavy-tailedness and heteroskedasticity are commonly encountered in practice, such relaxations are appealing in many real applications.
Independently and concurrently, Li and Yao (2017) study high-dimensional covariance matrix test
under a mixture model. There are a couple of major differences between our paper and theirs. First
and foremost, in Li and Yao (2017), the mixture coefficients (ωi ’s in (1.3)) are assumed to be i.i.d. and
drawn from a distribution with a bounded support. Second, Li and Yao (2017) require independence
between the mixture coefficients and the innovation process (Zi ). In our paper, we do not put any
assumptions on the mixture coefficients. As we discussed in Section 1.4, such relaxations allow us
to accommodate several important stylized features of real data, consequently, make our tests more
suitable in many real applications. It can be shown that the test in Li and Yao (2017) can be inconsistent
under our general setting. Furthermore, as we can see from the simulation studies, the test in Li and Yao
(2017) is less powerful than the existing tests in the i.i.d. Gaussian setting and, in general, less powerful
than our tests.
Organization of the paper. The rest of the paper is organized as follows. In Section 2, we state
the CLT for the LSS of $\tilde S_n$, based on which we derive the asymptotic null distributions of the modified
likelihood ratio test statistic and John’s test statistic. Section 3 examines the finite-sample performance
of our proposed tests. Section 4 concludes. The proof of the main theorem is given in Appendix A,
with some technical derivations collected in Appendices B – D.
Notation. For any symmetric matrix $A \in \mathbb{R}^{p \times p}$, $F^A$ denotes its ESD, that is,
$$F^A(x) = \frac{1}{p}\sum_{i=1}^p \mathbf{1}_{\{\lambda_i^A \le x\}}, \quad \text{for all } x \in \mathbb{R},$$
where $\lambda_i^A$, $i = 1, 2, \cdots, p$, are the eigenvalues of $A$ and $\mathbf{1}_{\{\cdot\}}$ denotes the indicator function. For any function $f$, the associated LSS of $A$ is given by
$$\int_{-\infty}^{+\infty} f(x)\, dF^A(x) = \frac{1}{p}\sum_{i=1}^p f\big(\lambda_i^A\big).$$
Finally, the Stieltjes transform of a distribution $G$ is defined as
$$m_G(z) = \int_{-\infty}^{\infty} \frac{1}{\lambda - z}\, dG(\lambda), \quad \text{for all } z \notin \operatorname{supp}(G),$$
where $\operatorname{supp}(G)$ denotes the support of $G$.
2 Main results

2.1 CLT for the LSS of $\tilde S_n$

As discussed above, we focus on the sample covariance matrix based on the self-normalized $Z_i$'s, namely, $\tilde S_n$ defined in (1.5). Denote by $Z = (Z_1, \cdots, Z_n)$.
We now state the assumptions:
Assumption A. $Z = (Z_{ij})_{p \times n}$ consists of i.i.d. random variables with $E Z_{11} = 0$ and $0 < E Z_{11}^2 < \infty$;
Assumption B. $E Z_{11}^4 < \infty$; and
Assumption C. $y_n := p/n \to y \in (0, \infty)$ as $n \to \infty$.
The following proposition gives the LSD of $\tilde S_n$.

Proposition 2.1. Under Assumptions A and C, almost surely, the ESD of $\tilde S_n$ converges weakly to the standard Marčenko-Pastur law $F_y$, which admits the density
$$p_y(x) = \begin{cases} \dfrac{1}{2\pi x y}\sqrt{(x - a_-(y))(a_+(y) - x)}, & x \in [a_-(y), a_+(y)],\\ 0, & \text{otherwise}, \end{cases}$$
and has a point mass $1 - 1/y$ at the origin if $y > 1$, where $a_\pm(y) = (1 \pm \sqrt{y})^2$.
Proposition 2.1 is essentially a special case of Theorem 2 in Zheng and Li (2011) but with weaker
moment assumptions, and can be shown by modifying the proof of that theorem. We give the proof in
Appendix B to make this paper self-contained.
According to Proposition 2.1, $\tilde S_n$ shares the same LSD as the usual sample covariance matrix $S_n = \frac{1}{n}\sum_{i=1}^n Z_i Z_i^T$ if one assumes that $E Z_{11}^2 = 1$. To conduct tests, we need the associated CLT. The CLTs for the LSS of $S_n$ have been established in Bai and Silverstein (2004) and Najim and Yao (2016), under the Gaussian and non-Gaussian kurtosis conditions, respectively. Given that $\tilde S_n$ and $S_n$ have the same LSD, one naturally asks whether their LSSs also have the same CLT. The following theorem gives a negative answer. Hence, an important message is:

Self-normalization does not change the LSD, but it does affect the CLT.
To be more specific, for any function $f$, define the following centered and scaled LSS:
$$G_{\tilde S_n}(f) := p \int_{-\infty}^{+\infty} f(x)\, d\big(F^{\tilde S_n}(x) - F_{y_n}(x)\big).$$
Theorem 2.2. Suppose that Assumptions A – C hold. Let $\mathcal{H}$ denote the set of functions that are analytic on a domain containing $[a_-(y)\mathbf{1}_{\{0<y<1\}}, a_+(y)]$, and $f_1, f_2, \cdots, f_k \in \mathcal{H}$. Then, the random vector $\big(G_{\tilde S_n}(f_1), G_{\tilde S_n}(f_2), \cdots, G_{\tilde S_n}(f_k)\big)$ converges weakly to a Gaussian vector $\big(G(f_1), G(f_2), \cdots, G(f_k)\big)$ with mean
$$EG(f_\ell) = \frac{1}{\pi i}\oint_{C} f_\ell(z)\,\frac{y m^3(z)}{(1+m(z))^3}\Big(1 - \frac{y m^2(z)}{(1+m(z))^2}\Big)^{-1} dz - \frac{1}{2\pi i}\oint_{C} f_\ell(z)\,\frac{y m^3(z)}{(1+m(z))^3}\Big(1 - \frac{y m^2(z)}{(1+m(z))^2}\Big)^{-2} dz, \quad \ell = 1, 2, \cdots, k, \tag{2.1}$$
and covariance
$$\operatorname{Cov}\big(G(f_i), G(f_j)\big) = \frac{y}{2\pi^2}\oint_{C_2}\oint_{C_1} \frac{f_i(z_1) f_j(z_2)\, m'(z_1) m'(z_2)}{(1+m(z_1))^2 (1+m(z_2))^2}\, dz_1 dz_2 - \frac{1}{2\pi^2}\oint_{C_2}\oint_{C_1} \frac{f_i(z_1) f_j(z_2)\, m'(z_1) m'(z_2)}{\big(m(z_2) - m(z_1)\big)^2}\, dz_1 dz_2, \quad i, j = 1, 2, \cdots, k. \tag{2.2}$$
Here, $C_1$ and $C_2$ are two non-overlapping contours contained in the domain and enclosing the interval $[a_-(y)\mathbf{1}_{\{0<y<1\}}, a_+(y)]$, and $m(z)$ is the Stieltjes transform of $\underline F_y := (1-y)\mathbf{1}_{[0,\infty)} + y F_y$.
The proof of Theorem 2.2 is given in Appendix A.
Remark. The second terms on the right-hand sides of (2.1) and (2.2) appeared in equations (1.6) and (1.7) in Theorem 1.1 of Bai and Silverstein (2004) (in the special case when $T = I$). The first terms are new and are due to the self-normalization in $\tilde S_n$. It is worth emphasizing that our CLT neither requires $E Z_{11}^4 = 3$ as in Bai and Silverstein (2004), nor involves $E Z_{11}^4$ as in Najim and Yao (2016).
2.2 Tests for the covariance matrix in the presence of heteroskedasticity
Based on Theorem 2.2, we propose two tests for testing (1.4) by modifying the likelihood ratio test and
John’s test. More tests can be established by choosing f in Theorem 2.2 to be different functions.
2.2.1 The likelihood ratio test based on self-normalized observations (LR-SN)

Recall that $S_n = \frac{1}{n}\sum_{i=1}^n Z_i Z_i^T$. The likelihood ratio test statistic is
$$L_n = \log|S_n| - p\log\operatorname{tr} S_n + p\log p;$$
see, e.g., Section 8.3.1 in Muirhead (1982). For the heteroskedastic case, we modify the likelihood ratio test statistic by replacing $S_n$ with $\tilde S_n$. Note that $\operatorname{tr}\tilde S_n = p$ on the event $\{|Z_i| > 0 \text{ for } i = 1, \cdots, n\}$, which, by Lemma 2 in Bai and Yin (1993), occurs almost surely for all large $n$. Therefore, we are led
to the following modified likelihood ratio test statistic:
$$\tilde L_n = \log\big|\tilde S_n\big| = \sum_{i=1}^p \log \lambda_i^{\tilde S_n}.$$
It is the LSS of $\tilde S_n$ when $f(x) = \log(x)$. In this case, when $y_n \in (0, 1)$, we have
$$G_{\tilde S_n}(\log) = p\int_{-\infty}^{+\infty} \log(x)\, d\big(F^{\tilde S_n}(x) - F_{y_n}(x)\big) = \sum_{i=1}^p \log\lambda_i^{\tilde S_n} - p\Big(\frac{y_n - 1}{y_n}\log(1-y_n) - 1\Big) = \tilde L_n - p\Big(\frac{y_n - 1}{y_n}\log(1-y_n) - 1\Big).$$
Applying Theorem 2.2, we obtain the following proposition.
Proposition 2.3. Under the assumptions of Theorem 2.2, if $y_n \to y \in (0, 1)$, then
$$\frac{\tilde L_n - p\big(\frac{y_n-1}{y_n}\log(1-y_n) - 1\big) - y_n - \log(1-y_n)/2}{\sqrt{-2y_n - 2\log(1-y_n)}} \xrightarrow{D} N(0, 1). \tag{2.3}$$
The proof of Proposition 2.3 is given in Appendix B.
The convergence in (2.3) gives the asymptotic null distribution of the modified likelihood ratio test
statistic. Since it is derived from the sample covariance matrix based on self-normalized observations,
the test based on (2.3) will be referred to as the likelihood ratio test based on the self-normalized
observations (LR-SN).
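A sketch of the LR-SN procedure (ours, directly transcribing the centering and scaling displayed in (2.3); it requires $y_n < 1$ so that all eigenvalues of $\tilde S_n$ are positive):

```python
import numpy as np
from math import log, sqrt, erf

def lr_sn(Y):
    """LR-SN z-score per (2.3); rows of Y are observations, requires p < n."""
    n, p = Y.shape
    yn = p / n
    U = Y / np.linalg.norm(Y, axis=1, keepdims=True)   # self-normalize rows
    S_tilde = (p / n) * (U.T @ U)                      # p x p matrix (1.5)
    ev = np.linalg.eigvalsh(S_tilde)
    L_tilde = float(np.sum(np.log(ev)))                # log det of S_tilde
    center = p * ((yn - 1) / yn * log(1 - yn) - 1) + yn + log(1 - yn) / 2
    scale = sqrt(-2 * yn - 2 * log(1 - yn))
    return (L_tilde - center) / scale

rng = np.random.default_rng(5)
z = lr_sn(rng.standard_normal((400, 100)))             # H0 holds: Sigma = I
pval = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))     # two-sided normal p-value
print(z, pval)
```

Under H0 the z-score is asymptotically standard normal, so the test rejects at level 5% when |z| exceeds 1.96.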
2.2.2 John's test based on self-normalized observations (JHN-SN)

John's test statistic is given by
$$T_n = \frac{n}{p}\operatorname{tr}\Big[\Big(\frac{S_n}{(1/p)\operatorname{tr} S_n} - I\Big)^2\Big] - p;$$
see John (1971). Replacing $S_n$ with $\tilde S_n$ and noting again that $\operatorname{tr}\tilde S_n = p$ almost surely for all large $n$ lead to the following modified John's test statistic:
$$\tilde T_n = \frac{n}{p}\operatorname{tr}\big[(\tilde S_n - I)^2\big] - p = \frac{1}{y_n}\sum_{i=1}^p \big(\lambda_i^{\tilde S_n}\big)^2 - n - p.$$
It is related to the LSS of $\tilde S_n$ when $f(x) = x^2$. In this case, for any $y_n \in (0, \infty)$, we have
$$G_{\tilde S_n}(x^2) = p\int_{-\infty}^{+\infty} x^2\, d\big(F^{\tilde S_n}(x) - F_{y_n}(x)\big) = \sum_{i=1}^p \big(\lambda_i^{\tilde S_n}\big)^2 - p(1 + y_n) = y_n \tilde T_n.$$
Based on Theorem 2.2, we can prove the following proposition.
Proposition 2.4. Under the assumptions of Theorem 2.2, we have
$$\frac{\tilde T_n + 1}{2} \xrightarrow{D} N(0, 1). \tag{2.4}$$
The proof of Proposition 2.4 is given in Appendix B.
Below we will refer to the test based on (2.4) as John’s test based on the self-normalized observations (JHN-SN).
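Similarly, a sketch of the JHN-SN statistic (ours, following (2.4)); unlike LR-SN, it is applicable for any $y_n \in (0, \infty)$:

```python
import numpy as np

def jhn_sn(Y):
    """JHN-SN z-score: (T_tilde + 1)/2 is asymptotically N(0,1) per (2.4)."""
    n, p = Y.shape
    yn = p / n
    U = Y / np.linalg.norm(Y, axis=1, keepdims=True)   # self-normalize rows
    S_tilde = (p / n) * (U.T @ U)                      # p x p matrix (1.5)
    ev = np.linalg.eigvalsh(S_tilde)
    # T_tilde = (1/yn) * sum of squared eigenvalues - n - p
    T_tilde = float(np.sum(ev ** 2)) / yn - n - p
    return (T_tilde + 1) / 2

rng = np.random.default_rng(6)
z = jhn_sn(rng.standard_normal((200, 100)))            # H0 holds: Sigma = I
print(z)
```

Under H0 the returned z-score is asymptotically standard normal; rejecting when |z| > 1.96 gives a nominal 5% level test.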
3 Simulation studies
We now demonstrate the finite-sample performance of our proposed tests. For different values of p and
p/n, we will check both the sizes and powers of the LR-SN and JHN-SN tests.
3.1 I.i.d. Gaussian case
To have a full picture of the performance of our tests, we start with the simplest situation where observations are i.i.d. multivariate normal random vectors. We will compare our proposed tests, LR-SN and
JHN-SN, with the tests mentioned in Section 1.1, namely, LW, S, CZZ and WY-LR, and also the newly
proposed test in Li and Yao (2017) (LY test). In the multivariate normal case, the WY-JHN test reduces
to the LW test.
We start with the size by sampling observations from N(0, I). Table 3 reports the empirical sizes of
these tests for testing H0 : Σ ∝ I at 5% significance level.
        |                  p/n = 0.5                   |            p/n = 2
   p    |  LW     S    CZZ  WY-LR   LY   LR-SN  JHN-SN |  LW     S    CZZ    LY  JHN-SN
  100   | 4.9%  4.8%  4.9%  4.5%  4.8%   4.6%   5.2%   | 5.5%  5.5%  5.7%  5.1%   4.9%
  200   | 5.2%  5.0%  5.1%  5.1%  4.8%   5.1%   4.9%   | 4.6%  4.5%  5.1%  4.8%   4.5%
  500   | 4.9%  5.1%  5.1%  4.8%  5.3%   4.9%   5.2%   | 5.1%  5.3%  4.9%  5.0%   5.2%

Table 3. Empirical sizes of the LW, S, CZZ, WY-LR, LY, and our proposed LR-SN and JHN-SN tests for testing H0 : Σ ∝ I at 5% significance level. Observations are i.i.d. N(0, I). The results are based on 10,000 replications for each pair of p and n.
From Table 3, we find that the empirical sizes of all tests are around the nominal level of 5%.
Next, to compare the power, we generate i.i.d. observations from N(0, Σ) under the alternative with Σ = (0.1^{|i−j|})_{p×p}, and test H0 : Σ ∝ I at 5% significance level. Table 4 reports the empirical powers.
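Observations under the alternative Toeplitz covariance Σ = (0.1^{|i−j|})_{p×p} can be generated as follows (a sketch; the seed is arbitrary):

```python
import numpy as np

p, n = 100, 200
# Toeplitz alternative: Sigma_{ij} = 0.1^{|i-j|}
Sigma = 0.1 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
rng = np.random.default_rng(6)
C = np.linalg.cholesky(Sigma)          # Sigma is positive definite
Y = C @ rng.standard_normal((p, n))    # columns are i.i.d. N(0, Sigma)
```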
        |                     p/n = 0.5                      |              p/n = 2
   p    |  LW      S     CZZ   WY-LR    LY   LR-SN  JHN-SN   |  LW      S     CZZ     LY   JHN-SN
  100   | 50.7%  51.3%  50.1%  36.7%  28.1%  35.0%  48.9%    |  8.4%   8.7%   9.1%   6.3%   8.2%
  200   | 97.3%  97.3%  97.2%  88.0%  79.4%  88.7%  97.0%    | 18.3%  17.9%  18.1%  11.9%  17.2%
  500   | 100%   100%   100%   100%   100%   100%   100%     | 70.7%  70.6%  69.8%  43.3%  70.5%

Table 4. Empirical powers of the LW, S, CZZ, WY-LR, LY, and our proposed LR-SN and JHN-SN tests for testing H0 : Σ ∝ I at 5% significance level. Observations are i.i.d. N(0, (0.1^{|i−j|})_{p×p}). The results are based on 10,000 replications for each pair of p and n.
From Table 4, we find that our proposed LR-SN and JHN-SN tests and the tests mentioned in Section 1.1 all have quite good power, especially as the dimension p gets higher, and the powers are roughly comparable. As in the classical setting, John's test (JHN-SN) is more powerful than the likelihood ratio test (LR-SN). The LY test proposed in Li and Yao (2017) is less powerful.
To sum up, while developed under a much more general setup, our tests perform just as well as the
existing ones. Real differences emerge below when we consider more complicated situations, where
existing tests fail while our tests continue to perform well.
3.2 The elliptical case
Now we investigate the performance of our proposed tests under the elliptical distribution. As in
Section 1.3, we take the observations to be Yi = ωi Zi with
(i) ωi ’s being absolute values of i.i.d. standard normal random variables,
(ii) Zi ’s i.i.d. p-dimensional random vectors from N(0, Σ), and
(iii) ωi ’s and Zi ’s independent of each other.
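A quick check of the key invariance (a sketch): since ωi > 0, we have Yi/|Yi| = Zi/|Zi|, so the self-normalized sample covariance built from the Yi's coincides exactly with the one built from the Zi's — this is why the mixing coefficients drop out of our procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 50, 100
Z = rng.standard_normal((p, n))          # columns Z_i ~ N(0, I)
omega = np.abs(rng.standard_normal(n))   # |N(0,1)| mixing coefficients
Y = Z * omega                            # Y_i = omega_i * Z_i

def self_normalized_cov(X):
    U = X / np.linalg.norm(X, axis=0, keepdims=True)
    return (X.shape[0] / X.shape[1]) * (U @ U.T)

# Self-normalization removes the omega_i's: both matrices coincide.
S_Y = self_normalized_cov(Y)
S_Z = self_normalized_cov(Z)
```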
3.2.1 Checking the size
Table 5 completes Table 2 by including the empirical sizes of our proposed LR-SN and JHN-SN tests,
and also the LY test in Li and Yao (2017) for testing H0 : Σ ∝ I at 5% significance level.
        |                        p/n = 0.5                          |                 p/n = 2
   p    |  LW     S    CZZ   WY-LR  WY-JHN   LY   LR-SN  JHN-SN    |  LW     S    CZZ   WY-JHN   LY   JHN-SN
  100   | 100%  100%  51.8%  100%   100%   4.4%   4.6%   5.2%      | 100%  100%  50.2%  100%   4.1%   4.9%
  200   | 100%  100%  53.0%  100%   100%   4.5%   5.1%   4.9%      | 100%  100%  52.3%  100%   4.5%   4.5%
  500   | 100%  100%  52.3%  100%   100%   5.2%   4.9%   5.2%      | 100%  100%  53.5%  100%   4.7%   5.2%

Table 5. Empirical sizes of the LW, S, CZZ, WY-LR, WY-JHN, LY, and our proposed LR-SN, JHN-SN tests for testing H0 : Σ ∝ I at 5% significance level. Data are generated as Yi = ωi Zi where ωi's are absolute values of i.i.d. N(0, 1), Zi's are i.i.d. N(0, I), and further ωi's and Zi's are independent of each other. The results are based on 10,000 replications for each pair of p and n.
Table 5 reveals the sharp difference between the existing tests and our proposed ones: the empirical sizes of the existing tests are severely distorted; in contrast, the empirical sizes of our LR-SN and JHN-SN tests are around the nominal level of 5%, as desired. In fact, the sizes of our tests in this table are exactly the same as in Table 3. The reason is that, with self-normalization in our testing procedure, the mixing coefficients ωi's are removed, and we end up with the same S̃n as in the previous setting. The same remark applies to the powers of our tests in Table 6 below. The LY test also yields the right level of size.
3.2.2 Checking the power
Table 5 shows that the LW, S, CZZ and WY-LR tests are inconsistent under the elliptical distribution, therefore we exclude them when checking the power. Similarly to Table 4, we will generate observations under the elliptical distribution with Σ = (0.1^{|i−j|})_{p×p}. Table 6 reports the empirical powers of our proposed tests and the LY test for testing H0 : Σ ∝ I at 5% significance level.
        |       p/n = 0.5        |     p/n = 2
   p    |  LY    LR-SN  JHN-SN   |  LY    JHN-SN
  100   |  7.6%  35.0%  48.9%    |  3.5%   8.2%
  200   | 14.5%  88.7%  97.0%    |  5.7%  17.2%
  500   | 64.9%  100%   100%     |  9.0%  70.5%

Table 6. Empirical powers of the LY test and our proposed LR-SN and JHN-SN tests for testing H0 : Σ ∝ I at 5% significance level. Data are generated as Yi = ωi Zi where ωi's are absolute values of i.i.d. N(0, 1), Zi's are i.i.d. random vectors from N(0, Σ) with Σ = (0.1^{|i−j|})_{p×p}, and further ωi's and Zi's are independent of each other. The results are based on 10,000 replications for each pair of p and n.
From Table 6, we find again that the LY test is less powerful.
3.3 Beyond elliptical, a GARCH-type case
Recall that in our general model (1.3), the observations Yi admit the decomposition ωi Zi . To examine
the performance of our tests in such a general setup, we simulate data using the following two-step
procedure:
1. For each Zi, we first generate another p-dimensional random vector Z̃i, which consists of i.i.d. standardized random variables Z̃ij's; and with Σ to be specified below, Zi is taken to be Σ^{1/2} Z̃i. In the simulation below, the Z̃ij's are sampled from the standardized t-distribution with 4 degrees of freedom, which is heavy-tailed.
2. For each ωi, inspired by the ARCH/GARCH model, we take ωi² = 0.01 + 0.85 ωi−1² + 0.1 |Yi−1|²/tr Σ.
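The two-step design above can be sketched as follows (the initialization of ω₀² and |Y₀|² is left implicit in the text, so the starting values below are our own choices):

```python
import numpy as np

def garch_type_sample(p, n, Sigma_sqrt, rng):
    """Two-step generator: Z_i = Sigma^{1/2} Ztilde_i with standardized t(4) entries,
    omega_i^2 = 0.01 + 0.85*omega_{i-1}^2 + 0.1*|Y_{i-1}|^2/tr(Sigma)."""
    tr_Sigma = np.trace(Sigma_sqrt @ Sigma_sqrt)
    Y = np.empty((p, n))
    w2 = 0.01 / (1 - 0.85 - 0.1)   # start omega^2 at a stationary-style level (our choice)
    y_prev_sq = tr_Sigma           # proxy for |Y_0|^2 (our choice)
    for i in range(n):
        zt = rng.standard_t(df=4, size=p) / np.sqrt(2.0)  # t(4) has variance 2; standardize
        z = Sigma_sqrt @ zt
        w2 = 0.01 + 0.85 * w2 + 0.1 * y_prev_sq / tr_Sigma
        Y[:, i] = np.sqrt(w2) * z
        y_prev_sq = float(Y[:, i] @ Y[:, i])
    return Y

rng = np.random.default_rng(3)
Y = garch_type_sample(60, 120, np.eye(60), rng)
```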
3.3.1 Checking the size
We test H0 : Σ ∝ I. Table 7 reports the empirical sizes of our proposed tests and the LY test at 5%
significance level.
        |       p/n = 0.5        |     p/n = 2
   p    |  LY    LR-SN  JHN-SN   |  LY    JHN-SN
  100   |  8.2%   5.5%   5.3%    |  6.8%   5.0%
  200   |  8.5%   5.7%   5.4%    |  6.8%   5.5%
  500   |  7.6%   5.3%   5.2%    |  6.6%   5.4%

Table 7. Empirical sizes of the LY test and our proposed LR-SN and JHN-SN tests for testing H0 : Σ ∝ I at 5% significance level. Data are generated as Yi = ωi Zi with ωi² = 0.01 + 0.85 ωi−1² + 0.1|Yi−1|²/p, and Zi consists of i.i.d. standardized t(4) random variables. The results are based on 10,000 replications for each pair of p and n.
From Table 7, we find that, for all different values of p and p/n, the empirical sizes of our proposed
tests are around the nominal level of 5%. Again, this is in sharp contrast with the results in Table 2,
where the existing tests yield sizes far higher than 5%.
One more important observation is that although Theorem 2.2 requires the finiteness of $\mathrm E\,Z_{11}^4$, the simulation above shows that our proposed tests work well even when $\mathrm E\,Z_{11}^4$ does not exist.
Another observation is that with 10,000 replications, the margin of error for a proportion at 5%
significance level is 1%, hence the sizes of the LY test in Table 7 are statistically significantly higher
than the nominal level of 5%.
3.3.2 Checking the power
To evaluate the power, we again take Σ = (0.1^{|i−j|})_{p×p} and generate data according to the design at the beginning of this subsection (Section 3.3). Table 8 reports the empirical powers of our proposed tests and the LY test for testing H0 : Σ ∝ I at 5% significance level.
        |       p/n = 0.5        |     p/n = 2
   p    |  LY    LR-SN  JHN-SN   |  LY    JHN-SN
  100   | 20.7%  34.4%  47.9%    |  7.8%   8.7%
  200   | 54.4%  87.8%  96.6%    | 10.5%  17.6%
  500   | 100%   100%   100%     | 26.4%  69.9%

Table 8. Empirical powers of the LY test and our proposed LR-SN and JHN-SN tests for testing H0 : Σ ∝ I at 5% significance level. Data are generated as Yi = ωi Zi with ωi² = 0.01 + 0.85 ωi−1² + 0.1|Yi−1|²/p, and Zi = ((0.1^{|i−j|})_{p×p})^{1/2} Z̃i where Z̃i consists of i.i.d. standardized t(4) random variables. The results are based on 10,000 replications for each pair of p and n.
Table 8 shows again that our tests enjoy a blessing of dimensionality: for a fixed ratio p/n, the
higher the dimension p, the higher the power. Moreover, comparing Table 8 with Table 4, we find that
for each pair of p and n, the powers of our tests are similar under the two designs. Such similarities
show that our tests can not only accommodate conditional heteroskedasticity but also are robust to
heavy-tailedness in Zi ’s. Finally, the LY test is again less powerful.
3.4 Summary of simulation studies
Combining the observations in the three cases, we conclude that
(i) The existing tests, LW, S, CZZ, WY-LR and WY-JHN, work well in the i.i.d. Gaussian setting,
however, they fail badly under the elliptical distribution and our general setup;
(ii) The newly proposed LY test in Li and Yao (2017) is applicable to the elliptical distribution, however, it is less powerful than the existing tests in the i.i.d. Gaussian setting and, in general, less
powerful than ours;
(iii) Our LR-SN and JHN-SN tests perform well under all three settings, yielding the right sizes and
enjoying high powers.
4 Conclusion
We study testing high-dimensional covariance matrices under an extension of the elliptical distribution.
The extended model can feature heteroskedasticity, leverage effect, asymmetry, etc. Under the general
setting, we establish a CLT for the LSS of the sample covariance matrix based on self-normalized
observations. The CLT is different from the existing ones for the usual sample covariance matrix, and
it does not require $\mathrm E\,Z_{11}^4 = 3$ as in Bai and Silverstein (2004), nor involve $\mathrm E\,Z_{11}^4$ as in Najim and Yao (2016). Based on the new CLT, we propose two tests by modifying the likelihood ratio test and John's test, for which explicit CLTs are derived. More tests can be established based on our general CLT.
Numerical studies show that our proposed tests work well regardless of whether the observations are
i.i.d. Gaussian or from the elliptical distribution or feature conditional heteroskedasticity or even when
Zi ’s are heavy-tailed.
Acknowledgements
We thank Zhigang Bao for a suggestion that helps relax the assumptions of Lemma A.3.
Appendix A
Proof of Theorem 2.2
In this section, we prove Theorem 2.2. We first introduce some additional notations. For any symmetric matrix $A$, $\|A\|$ stands for the spectral norm. Denote
$$X_i = \frac{\sqrt{p}\, Z_i}{|Z_i|} \quad\text{and}\quad r_i = \frac{X_i}{\sqrt{n}},\qquad i = 1, 2, \cdots, n.$$
Thus $\widetilde S_n = \frac{1}{n} XX^T$ with $X = (X_1, \cdots, X_n)$. Recall that $m_G(z)$ denotes the Stieltjes transform of a distribution $G$ with $z \notin \mathrm{supp}(G)$, and $F_y$ the standard Marčenko–Pastur law with index $y$. Let $\underline S_n = \frac{1}{n} X^T X$ and define
$$m_n(z) = m_{F^{\widetilde S_n}}(z),\qquad m_n^0(z) = m_{F_{y_n}}(z),\qquad m(z) = m_{F_y}(z),$$
$$\underline m_n(z) = m_{F^{\underline S_n}}(z),\qquad \underline m_n^0(z) = -\frac{1-y_n}{z} + y_n\, m_n^0(z),\qquad \underline m(z) = -\frac{1-y}{z} + y\, m(z).$$
Note that $\widetilde S_n$ and $\underline S_n$ have the same non-zero eigenvalues, and we have $\underline m_n(z) = -(1-y_n)/z + y_n m_n(z)$.
For any $z \in \mathbb C$ with $\Im(z) \neq 0$, define
$$D(z) = \widetilde S_n - zI,\qquad D_i(z) = D(z) - r_i r_i^T,\qquad D_{ij}(z) = D(z) - r_i r_i^T - r_j r_j^T \ \ \text{for } i \neq j,\tag{A.1}$$
$$\varsigma_i(z) = r_i^T D_i^{-1}(z)\, r_i - n^{-1}\mathrm{tr}\, D_i^{-1}(z),\qquad \tau_i(z) = r_i^T D_i^{-2}(z)\, r_i - n^{-1}\mathrm{tr}\, D_i^{-2}(z),\qquad \gamma_i(z) = r_i^T D_i^{-1}(z)\, r_i - n^{-1}\mathrm E\,\mathrm{tr}\, D_1^{-1}(z),\tag{A.2}$$
and
$$\beta_i(z) = \frac{1}{1 + r_i^T D_i^{-1}(z)\, r_i},\qquad \beta_{ij}(z) = \frac{1}{1 + r_j^T D_{ij}^{-1}(z)\, r_j},\qquad \breve\beta_i(z) = \frac{1}{1 + n^{-1}\mathrm{tr}\, D_i^{-1}(z)},$$
$$\breve\beta_{ij}(z) = \frac{1}{1 + n^{-1}\mathrm{tr}\, D_{ij}^{-1}(z)},\qquad b_n(z) = \frac{1}{1 + n^{-1}\mathrm E\,\mathrm{tr}\, D_1^{-1}(z)},\qquad \bar b_n(z) = \frac{1}{1 + n^{-1}\mathrm E\,\mathrm{tr}\, D_{12}^{-1}(z)}.\tag{A.3}$$
Throughout the rest of the proofs, K denotes a generic constant whose value does not depend on p and n except possibly on yn, and may change from line to line.
A.1 Preliminaries
A.1.1 Truncation, Centralization and Renormalization
Since $\mathrm E\,Z_{11}^4 < \infty$, by Lemma 2.2 in Yin et al. (1988), there exists a positive sequence $\delta_p$ such that
$$\delta_p = o(1),\qquad \lim_{p\to\infty} p\,\delta_p^8 = \infty,\qquad\text{and}\qquad P\big(Z \neq \widehat Z,\ \text{i.o.}\big) = 0,\tag{A.4}$$
where $\widehat Z = \big(\widehat Z_{ij}\big)_{p\times n}$ and $\widehat Z_{ij} = Z_{ij}\,1_{\{|Z_{ij}| \le p^{1/2}\delta_p\}}$. Moreover, by enlarging $\delta_p$ if necessary, we can assume that $\delta_p$ also satisfies
$$\delta_p^{-4}\,\mathrm E\,|Z_{11}|^4\,1_{\{|Z_{11}| > p^{1/2}\delta_p\}} = o(1);$$
see (1.8) in Bai and Silverstein (2004). Define $\breve Z = \widehat Z - \mathrm E\,\widehat Z$ and
$$\widehat S_n = \frac{p}{n}\sum_{i=1}^{n} \frac{\widehat Z_i \widehat Z_i^T}{|\widehat Z_i|^2},\qquad \breve S_n = \frac{p}{n}\sum_{i=1}^{n} \frac{\breve Z_i \breve Z_i^T}{|\breve Z_i|^2},$$
where $\widehat Z_i$ and $\breve Z_i$ represent the $i$th columns of $\widehat Z$ and $\breve Z$, respectively.
Claim A.1. Under the assumptions of Theorem 2.2, for any $f \in \mathcal H$, we have
$$p\int_{-\infty}^{+\infty} f(x)\,dF^{\widehat S_n}(x) = p\int_{-\infty}^{+\infty} f(x)\,dF^{\breve S_n}(x) + o_p(1).\tag{A.5}$$
The proof of Claim A.1 is postponed to Appendix C.
Combining (A.4) and (A.5), we obtain
$$G_{\widetilde S_n}(f) = p\int_{-\infty}^{+\infty} f(x)\,d\big(F^{\widetilde S_n}(x) - F_{y_n}(x)\big) = p\int_{-\infty}^{+\infty} f(x)\,d\big(F^{\breve S_n}(x) - F_{y_n}(x)\big) + o_p(1).$$
Therefore, to study the asymptotics of GeSn ( f ), it suffices to consider the random matrix S̆n , which is based on
the truncated and centralized entries Z̆i j ’s. Furthermore, because of the self-normalization in S̆n , w.l.o.g. we can
assume that Var Z̆11 = 1.
In the following, for notational ease, we still denote $\breve Z$ by $Z$. In such a way, for each $p$, the entries $Z_{ij} = Z_{ij}^{(p)}$ are i.i.d. random variables satisfying

Assumption SA. $\big|Z_{11}^{(p)}\big| \le p^{1/2}\delta_p$ with $\mathrm E\,Z_{11}^{(p)} = 0$ and $\mathrm E\big(Z_{11}^{(p)}\big)^2 = 1$ for some positive sequence $\delta_p$ which satisfies $\delta_p = o(1)$, $\lim_{p\to\infty} p\,\delta_p^8 = \infty$; and

Assumption SB. $\sup_p \mathrm E\,\big|Z_{11}^{(p)}\big|^4 < \infty$.
A.1.2 Elementary estimates
In this section, we give some lemmas which will be used in the sequel.
Lemma A.2 (Rank-one perturbation formula). For any $z \in \mathbb C$ with $\Im(z) \neq 0$, any symmetric matrix $A \in \mathbb R^{p\times p}$ and $v \in \mathbb R^p$, we have
$$v^T\big(A + vv^T - zI\big)^{-1} = \frac{v^T (A - zI)^{-1}}{1 + v^T (A - zI)^{-1} v},\tag{A.6}$$
$$\big(A + vv^T - zI\big)^{-1} - \big(A - zI\big)^{-1} = -\frac{(A - zI)^{-1} v v^T (A - zI)^{-1}}{1 + v^T (A - zI)^{-1} v}.\tag{A.7}$$
Equation (A.6) is a special case of (2.2) in Silverstein and Bai (1995), and (A.7) is a direct result of (A.6).
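Lemma A.2 is the familiar Sherman–Morrison identity; a direct numerical check (a sketch with arbitrary test matrices):

```python
import numpy as np

rng = np.random.default_rng(5)
p = 6
A = rng.standard_normal((p, p)); A = (A + A.T) / 2  # symmetric
v = rng.standard_normal(p)
z = 0.3 + 1.0j                                      # Im(z) != 0

R = np.linalg.inv(A - z * np.eye(p))                # (A - zI)^{-1}
L = np.linalg.inv(A + np.outer(v, v) - z * np.eye(p))

lhs_a6 = v @ L                                       # v^T (A + vv^T - zI)^{-1}
rhs_a6 = (v @ R) / (1 + v @ R @ v)                   # right-hand side of (A.6)
rhs_a7 = -np.outer(R @ v, v @ R) / (1 + v @ R @ v)   # right-hand side of (A.7)
```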
Lemma A.3. For each $p$, assume that $V_1 = V_1^{(p)}, V_2 = V_2^{(p)}, \cdots, V_p = V_p^{(p)}$ are i.i.d. random variables satisfying $\big|V_1^{(p)}\big| \le p^{1/2}\delta_p$ for some $\delta_p = o(1)$ and $\lim_{p\to\infty} p\,\delta_p^8 = \infty$. Suppose further that $\mathrm E\,V_1^{(p)} = 0$, $\mathrm E\big(V_1^{(p)}\big)^2 = 1$ and $\sup_p \mathrm E\,\big|V_1^{(p)}\big|^4 < \infty$. Let $W = \sqrt{p}\,V/|V|$ with $V = (V_1, V_2, \cdots, V_p)$. Then for any (deterministic) symmetric matrices $A, B \in \mathbb C^{p\times p}$,
$$\mathrm E\big[\big(W^T A W - \mathrm{tr}\,A\big)\big(W^T B W - \mathrm{tr}\,B\big)\big] = \big(\mathrm E\,V_1^4 - 3\big)\sum_{i=1}^{p} a_{ii} b_{ii} - p^{-1}\big(\mathrm E\,V_1^4 - 1\big)\,\mathrm{tr}\,A\,\mathrm{tr}\,B + 2\,\mathrm{tr}(AB) + o(p)\big(\|A\|^2 + \|B\|^2\big).\tag{A.8}$$
Furthermore, if we denote by $e_i$ the $p$-dimensional vector with a 1 in the $i$th coordinate and 0's elsewhere, then
$$\mathrm E\big[W^T A e_i e_i^T A W \cdot W^T B e_i e_i^T B W\big] = o(p)\big(\|A\|^4 + \|B\|^4\big).\tag{A.9}$$
Lemma A.4. Under the assumptions of Lemma A.3, for all large $p$, any (deterministic) symmetric matrix $A \in \mathbb C^{p\times p}$ and $k \ge 2$, we have
$$\mathrm E\,\big|W^T A W - \mathrm{tr}\,A\big|^k \le K\, p^{k-1}\delta_p^{2k-4}\,\|A\|^k,\tag{A.10}$$
$$\big|\mathrm E\big[W^T A W - \mathrm{tr}\,A\big]\big| \le K\,\delta_p^2\,\|A\|.\tag{A.11}$$
Next, we collect some elementary results for the quantities and matrices defined in (A.1) – (A.3). Note that if $\lambda \ge 0$ and $z \in \mathbb C^+ := \{z \in \mathbb C : \Im(z) > 0\}$, we have $|\lambda - z| \ge \Im(z)$. It follows that
$$\max\big\{\|D^{-1}(z)\|,\ \|D_i^{-1}(z)\|,\ \|D_{ij}^{-1}(z)\|\big\} \le \Im(z)^{-1},\qquad\text{for all } z \in \mathbb C^+.\tag{A.12}$$
Observe also that when $\lambda \ge 0$ and $z \in \mathbb C^+$, we have $\Im\big(z/(\lambda - z)\big) \ge 0$. Therefore,
$$|\beta_i(z)| = \frac{|z|}{\big|z + z\, r_i^T D_i^{-1}(z)\, r_i\big|} \le \frac{|z|}{\Im(z)},\qquad\text{for all } z \in \mathbb C^+.$$
By the same argument, we can bound the other terms in (A.3):
$$\max\big\{|\beta_i(z)|, |\beta_{ij}(z)|, |\breve\beta_i(z)|, |\breve\beta_{ij}(z)|, |b_n(z)|, |\bar b_n(z)|\big\} \le |z|\cdot\Im(z)^{-1},\qquad\text{for all } z \in \mathbb C^+.\tag{A.13}$$
Lemma A.5. Under Assumptions SA, SB and C, for any $z \in \mathbb C^+$,
$$\mathrm E|\varsigma_1(z)|^k \le K p^{-1}\delta_p^{2k-4}\,\Im(z)^{-k},\qquad \mathrm E|\tau_1(z)|^k \le K p^{-1}\delta_p^{2k-4}\,\Im(z)^{-2k}\quad\text{for } k \ge 2,\tag{A.14}$$
$$\mathrm E|\breve\beta_1(z) - b_n(z)|^2 \le K p^{-1}|z|^6\,\Im(z)^{-10},\ \text{and}\tag{A.15}$$
$$\lim_{n\to\infty} b_n(z) = \lim_{n\to\infty} \bar b_n(z) = -z\,\underline m(z).\tag{A.16}$$
Recall that $a_\pm(y) = (1 \pm \sqrt{y})^2$. Let $\widetilde S_{(1)} = \widetilde S_n - r_1 r_1^T$. The following lemma gives the bounds of the eigenvalues of $\widetilde S_n$ and $\widetilde S_{(1)}$.

Lemma A.6. Under the assumptions of Lemma A.5, for any $\eta_l < a_-(y)1_{\{0<y<1\}}$ and $\eta_r > a_+(y)$, there holds
$$P\Big(\lambda_{\min}^{\widetilde S_n^*} \le \eta_l \ \ \text{or}\ \ \lambda_{\max}^{\widetilde S_n^*} \ge \eta_r\Big) = o\big(n^{-k}\big)\quad\text{for any } k > 0,\tag{A.17}$$
where $\widetilde S_n^*$ can be either $\widetilde S_n$ or $\widetilde S_{(1)}$.
Lemma A.7. For any $\alpha \in (0, 1)$, $v_0 > n^{-(1+\alpha)}$, $x_l < a_-(y)1_{\{0<y<1\}}$ and $x_r > a_+(y)$, let
$$\mathcal C_n = \{x \pm i v_0 : x \in [x_l, x_r]\} \cup \{x_l \pm iv : v \in [n^{-(1+\alpha)}, v_0]\} \cup \{x_r \pm iv : v \in [n^{-(1+\alpha)}, v_0]\}.\tag{A.18}$$
Then under the assumptions of Lemma A.5, for any $\eta_l \in \big(x_l,\ a_-(y)1_{\{0<y<1\}}\big)$ and $\eta_r \in (a_+(y), x_r)$, there exists $K > 0$ such that
$$\sup_{\{z\in\mathcal C_n\}} \max\big\{\|D_1^{-1}(z)\|,\ |\gamma_1(z)|,\ |\beta_1(z)|\big\} \le K\Big(1 + n^{1+\alpha}\, 1_{\{\lambda_{\min}^{\widetilde S_{(1)}} \le \eta_l \ \text{or}\ \lambda_{\max}^{\widetilde S_n} \ge \eta_r\}}\Big).\tag{A.19}$$
Furthermore, we have
$$\sup_{\{n;\ z\in\mathcal C_n\}} \max\big\{\mathrm E\|D_1^{-1}(z)\|^k,\ |b_n(z)|\big\} = O(1)\quad\text{for } k \ge 1,\ \text{and}\tag{A.20}$$
$$n\cdot \sup_{\{z\in\mathcal C_n\}} \mathrm E|\gamma_1(z)|^k = O\big(\delta_p^{2k-4}\big)\quad\text{for } k \ge 2.\tag{A.21}$$
The proofs of Lemmas A.3–A.7 are given in Appendix C.
A.2 Proof of Theorem 2.2
Proof. For any $f \in \mathcal H$, there exist $v_0 > 0$, $x_l < a_-(y)1_{\{0<y<1\}}$ and $x_r > a_+(y)$ such that $f$ is analytic on the domain enclosed by
$$\mathcal C := \{x \pm i v_0 : x \in [x_l, x_r]\} \cup \{x_l \pm iv : v \in [0, v_0]\} \cup \{x_r \pm iv : v \in [0, v_0]\}.$$
Choose $\eta_l \in \big(x_l,\ a_-(y)1_{\{0<y<1\}}\big)$ and $\eta_r \in (a_+(y), x_r)$. By (A.17) and the Borel–Cantelli lemma, almost surely for all large $n$,
$$\eta_l < \lambda_{\min}^{\widetilde S_n} \le \lambda_{\max}^{\widetilde S_n} < \eta_r.\tag{A.22}$$
Note that when (A.22) holds, we have
$$G_{\widetilde S_n}(f) = p\int_{-\infty}^{+\infty} f(x)\,d\big(F^{\widetilde S_n}(x) - F_{y_n}(x)\big) = -\frac{1}{2\pi i}\oint_{\mathcal C} f(z)\,M_n(z)\,dz,\quad\text{where}\quad M_n(z) = p\big(m_n(z) - m_n^0(z)\big).\tag{A.23}$$
Recall the definition of $\mathcal C_n$ in (A.18). For all $n$ large such that $n^{-(1+\alpha)} < v_0$, define
$$\widehat M_n(z) = \begin{cases} M_n(z), & \text{if } z \in \mathcal C_n,\\ M_n\big(x_l + i\,\mathrm{sgn}(v)\,n^{-(1+\alpha)}\big), & \text{if } z = x_l + iv \text{ with } v \in [-n^{-(1+\alpha)}, n^{-(1+\alpha)}],\\ M_n\big(x_r + i\,\mathrm{sgn}(v)\,n^{-(1+\alpha)}\big), & \text{if } z = x_r + iv \text{ with } v \in [-n^{-(1+\alpha)}, n^{-(1+\alpha)}],\end{cases}\tag{A.24}$$
where we make the convention that $\mathrm{sgn}(0) = 1$. By (A.22), almost surely for all large $n$,
$$\left|\oint_{\mathcal C} f(z)\big(M_n(z) - \widehat M_n(z)\big)\,dz\right| \le K\sup_{\{z\in\mathcal C\}}|f(z)|\cdot p\,\big(|\eta_l - x_l|^{-1} + |\eta_r - x_r|^{-1}\big)\cdot n^{-(1+\alpha)} = O\big(n^{-\alpha}\big).$$
Combining the estimate with (A.23), we get
$$G_{\widetilde S_n}(f) = -\frac{1}{2\pi i}\oint_{\mathcal C} f(z)\,\widehat M_n(z)\,dz + o_p(1).\tag{A.25}$$
For the asymptotics of $\widehat M_n(z)$, we have the following proposition, whose proof is given in Section A.3.

Proposition A.8. Under the assumptions of Lemma A.5, $\widehat M_n(\cdot)$ converges weakly to a Gaussian process $M(\cdot)$ with mean function
$$\mathrm E M(z) = -2\,\frac{y\,\underline m^3(z)}{\big(1+\underline m(z)\big)^3}\left(1 - \frac{y\,\underline m^2(z)}{\big(1+\underline m(z)\big)^2}\right)^{-1} + \frac{y\,\underline m^3(z)}{\big(1+\underline m(z)\big)^3}\left(1 - \frac{y\,\underline m^2(z)}{\big(1+\underline m(z)\big)^2}\right)^{-2},\qquad z \in \mathcal C,\tag{A.26}$$
and covariance function
$$\mathrm{Cov}\big(M(z_1), M(z_2)\big) = -\frac{2y\,\underline m'(z_1)\,\underline m'(z_2)}{\big(1+\underline m(z_1)\big)^2\big(1+\underline m(z_2)\big)^2} + \frac{2\,\underline m'(z_1)\,\underline m'(z_2)}{\big(\underline m(z_2)-\underline m(z_1)\big)^2} - \frac{2}{(z_1-z_2)^2},\qquad z_1, z_2 \in \mathcal C.\tag{A.27}$$
By Proposition A.8, we have, for any $f_1, \cdots, f_k \in \mathcal H$,
$$\left(-\frac{1}{2\pi i}\oint_{\mathcal C} f_1(z)\,\widehat M_n(z)\,dz,\ \cdots,\ -\frac{1}{2\pi i}\oint_{\mathcal C} f_k(z)\,\widehat M_n(z)\,dz\right) \xrightarrow{\;D\;} \left(-\frac{1}{2\pi i}\oint_{\mathcal C} f_1(z)\,M(z)\,dz,\ \cdots,\ -\frac{1}{2\pi i}\oint_{\mathcal C} f_k(z)\,M(z)\,dz\right).$$
The right-hand side above is multivariate normal due to the facts that the integrals can be approximated by Riemann sums, which are multivariate normal, and the weak limit of multivariate normal vectors is also multivariate normal. Furthermore, the mean and variance of the right-hand side follow from (A.26) and (A.27), and are given by (2.1) and (2.2), respectively. Combining the convergence above with (A.25), we complete the proof of Theorem 2.2.
A.3 Proof of Proposition A.8
bn (z) as
Proof. To prove Proposition A.8, write M
bn (z) = p mn (z) − Emn (z) + p Emn (z) − m0n (z) =: M
bn,1 (z) + M
bn,2 (z),
M
20
z ∈ C.
bn,1 (z) is random while M
bn,2 (z) is deterministic. The proof of Proposition A.8 is divided into
Observe that M
bn,1 (z). The tightness
three subsections. In Section A.3.1, we prove the finite-dimensional convergence of M
b
bn,2 (·) converges to a
of { Mn,1 (z) : z ∈ C, n ∈ N} is proven in Section A.3.2. In Section A.3.3, we show that M
continuous function uniformly on C.
bn,1 (z)
A.3.1 The finite-dimensional convergence of M
Note that $\widehat M_{n,1}(\bar z) = \overline{\widehat M_{n,1}(z)}$, therefore it suffices to show that for any $q \in \mathbb N$, $\alpha_j \in \mathbb R$ and $z_j \in \mathbb C^+ := \{z \in \mathbb C : \Im(z) > 0\}$ for $j = 1, \cdots, q$, the sum $\sum_{j=1}^{q}\alpha_j\widehat M_{n,1}(z_j)$ converges weakly to a Gaussian random variable, and for any pair $z_1, z_2 \in \mathbb C^+$, the covariance of $\widehat M_{n,1}(z_1)$ and $\widehat M_{n,1}(z_2)$ converges to (A.27). We take three steps to prove these conclusions.

Step I. We first show that the sum $\sum_{j=1}^{q}\alpha_j\widehat M_{n,1}(z_j)$ can be approximated by the sum of a martingale difference sequence. Recall that $r_i = X_i/\sqrt{n}$. Let $\mathrm E_i(\cdot)$ denote the conditional expectation with respect to the $\sigma$-field generated by $r_1, \ldots, r_i$, and $\mathrm E_0(\cdot) = \mathrm E(\cdot)$ stand for the unconditional expectation. Note from the definitions of $\beta_i(z)$ and $\breve\beta_i(z)$ in (A.3) and $\varsigma_i(z)$ in (A.2) that
$$\beta_i(z) = \breve\beta_i(z) - \beta_i(z)\breve\beta_i(z)\varsigma_i(z) = \breve\beta_i(z) - \breve\beta_i^2(z)\varsigma_i(z) + \beta_i(z)\breve\beta_i^2(z)\varsigma_i^2(z).$$
Thus, using the definitions of $m_n(z)$, $D(z)$ and $D_i(z)$, we have
$$\begin{aligned}
\widehat M_{n,1}(z) &= \sum_{i=1}^{n}\Big(\mathrm{tr}\,\mathrm E_i\big(D^{-1}(z) - D_i^{-1}(z)\big) - \mathrm{tr}\,\mathrm E_{i-1}\big(D^{-1}(z) - D_i^{-1}(z)\big)\Big)\\
&= -\sum_{i=1}^{n}\big(\mathrm E_i - \mathrm E_{i-1}\big)\,\beta_i(z)\, r_i^T D_i^{-2}(z)\, r_i\\
&= -\sum_{i=1}^{n}\big(\mathrm E_i - \mathrm E_{i-1}\big)\Big(\breve\beta_i(z)\tau_i(z) - \breve\beta_i^2(z)\varsigma_i(z)\cdot n^{-1}\mathrm{tr}\,D_i^{-2}(z)\Big)\\
&\quad + \sum_{i=1}^{n}\big(\mathrm E_i - \mathrm E_{i-1}\big)\,\breve\beta_i^2(z)\varsigma_i(z)\tau_i(z) - \sum_{i=1}^{n}\big(\mathrm E_i - \mathrm E_{i-1}\big)\,\beta_i(z)\breve\beta_i^2(z)\varsigma_i^2(z)\, r_i^T D_i^{-2}(z)\, r_i\\
&=: \sum_{i=1}^{n} V_{ni,1}(z) + \sum_{i=1}^{n} V_{ni,2}(z) + \sum_{i=1}^{n} V_{ni,3}(z),
\end{aligned}\tag{A.28}$$
where the second equality follows from (A.7) and the third equality uses the fact that $(\mathrm E_i - \mathrm E_{i-1})\big(\breve\beta_i(z)\cdot n^{-1}\mathrm{tr}\,D_i^{-2}(z)\big) = 0$. Applying the Burkholder–Davis–Gundy (BDG) inequality, (A.13), the Cauchy–Schwarz inequality and (A.14) yields
$$\mathrm E\Big|\sum_{i=1}^{n} V_{ni,2}(z)\Big|^2 \le 4\sum_{i=1}^{n}\mathrm E\big|\breve\beta_i^2(z)\varsigma_i(z)\tau_i(z)\big|^2 \le K\sum_{i=1}^{n}\sqrt{\mathrm E|\varsigma_i(z)|^4\,\mathrm E|\tau_i(z)|^4} = O\big(\delta_p^4\big).\tag{A.29}$$
Similarly, we can show that
$$\mathrm E\Big|\sum_{i=1}^{n} V_{ni,3}(z)\Big|^2 \le 4\sum_{i=1}^{n}\mathrm E\big|\beta_i(z)\breve\beta_i^2(z)\varsigma_i^2(z)\, r_i^T D_i^{-2}(z)\, r_i\big|^2 \le K\sum_{i=1}^{n}\mathrm E|\varsigma_i(z)|^4 = O\big(\delta_p^4\big).\tag{A.30}$$
Combining (A.28) with (A.29) and (A.30), we get
$$\sum_{j=1}^{q}\alpha_j\,\widehat M_{n,1}(z_j) = \sum_{i=1}^{n}\sum_{j=1}^{q}\alpha_j V_{ni,1}(z_j) + o_p(1) =: \sum_{i=1}^{n} Y_{ni} + o_p(1).$$
Step II. Here we show that the martingale difference sequence $\{Y_{ni}\}_{i=1}^{n}$ satisfies condition (ii) of Lemma D.1 in Appendix D. In fact, by (A.13), (A.12) and (A.14), we have
$$\mathrm E|V_{ni,1}(z)|^4 \le K\big(\mathrm E|\tau_i(z)|^4 + \mathrm E|\varsigma_i(z)|^4\big) = o\big(n^{-1}\big).$$
Hence for any $\varepsilon > 0$,
$$\sum_{i=1}^{n}\mathrm E\big[|Y_{ni}|^2\, 1_{\{|Y_{ni}|\ge\varepsilon\}}\big] \le \frac{1}{\varepsilon^2}\sum_{i=1}^{n}\mathrm E|Y_{ni}|^4 \le \frac{K}{\varepsilon^2}\sum_{i=1}^{n}\sum_{j=1}^{q}|\alpha_j|^4\,\mathrm E|V_{ni,1}(z_j)|^4 = o(1),$$
namely, condition (ii) in Lemma D.1 holds.
Step III. In this step, we verify condition (i) in Lemma D.1. We have
$$\sum_{i=1}^{n}\mathrm E_{i-1}\big[Y_{ni}^2\big] = \sum_{\ell,m=1}^{q}\alpha_\ell\alpha_m\sum_{i=1}^{n}\mathrm E_{i-1}\big[V_{ni,1}(z_\ell)\,V_{ni,1}(z_m)\big].\tag{A.31}$$
W.l.o.g., we focus on $\sum_{i=1}^{n}\mathrm E_{i-1}\big[V_{ni,1}(z_1)V_{ni,1}(z_2)\big]$. Note that
$$\breve\beta_i(z)\tau_i(z) - \breve\beta_i^2(z)\varsigma_i(z)\cdot n^{-1}\mathrm{tr}\,D_i^{-2}(z) = \frac{d\big(\breve\beta_i(z)\varsigma_i(z)\big)}{dz}.$$
Thus,
$$\begin{aligned}
\sum_{i=1}^{n}\mathrm E_{i-1}\big[V_{ni,1}(z_1)V_{ni,1}(z_2)\big] &= \frac{\partial^2}{\partial u_2\,\partial u_1}\sum_{i=1}^{n}\mathrm E_{i-1}\Big[\mathrm E_i\big(\breve\beta_i(u_1)\varsigma_i(u_1)\big)\,\mathrm E_i\big(\breve\beta_i(u_2)\varsigma_i(u_2)\big)\Big]\bigg|_{(u_1,u_2)=(z_1,z_2)}\\
&\quad - \sum_{i=1}^{n}\mathrm E_{i-1}\Big[\breve\beta_i(z_1)\tau_i(z_1) - \breve\beta_i^2(z_1)\varsigma_i(z_1)\cdot n^{-1}\mathrm{tr}\,D_i^{-2}(z_1)\Big]\cdot\mathrm E_{i-1}\Big[\breve\beta_i(z_2)\tau_i(z_2) - \breve\beta_i^2(z_2)\varsigma_i(z_2)\cdot n^{-1}\mathrm{tr}\,D_i^{-2}(z_2)\Big]\\
&=: \frac{\partial^2}{\partial u_2\,\partial u_1} f_{n,1}(u_1,u_2)\bigg|_{(u_1,u_2)=(z_1,z_2)} + f_{n,2}(z_1,z_2).
\end{aligned}$$
Denote by E(i) (·) the conditional expectation with respect to the σ-field generated by r1 , · · · , ri−1 , ri+1 , · · · , rn .
By (A.12), (A.13) and (A.11), we have
$$\begin{aligned}
&\sup_{1\le i\le n}\mathrm E\Big|\mathrm E_{i-1}\Big(\breve\beta_i(z_j)\tau_i(z_j) - \breve\beta_i^2(z_j)\varsigma_i(z_j)\cdot n^{-1}\mathrm{tr}\,D_i^{-2}(z_j)\Big)\Big|\\
&\quad\le \sup_{1\le i\le n}\Big(\mathrm E\big|\mathrm E_{i-1}\big(\breve\beta_i(z_j)\cdot\mathrm E_{(i)}\tau_i(z_j)\big)\big| + \mathrm E\big|\mathrm E_{i-1}\big(\breve\beta_i^2(z_j)\cdot n^{-1}\mathrm{tr}\,D_i^{-2}(z_j)\cdot\mathrm E_{(i)}\varsigma_i(z_j)\big)\big|\Big)\\
&\quad\le K\sup_{1\le i\le n}\Big(\mathrm E\big|\mathrm E_{(i)}\tau_i(z_j)\big| + \mathrm E\big|\mathrm E_{(i)}\varsigma_i(z_j)\big|\Big) = O\big(n^{-1}\delta_p^2\big),\qquad j = 1, 2.
\end{aligned}$$
On the other hand, by (A.12) and (A.13), the terms $\mathrm E_{i-1}\big(\breve\beta_i(z_j)\tau_i(z_j) - \breve\beta_i^2(z_j)\varsigma_i(z_j)\cdot n^{-1}\mathrm{tr}\,D_i^{-2}(z_j)\big)$, $j = 1, 2$, are bounded. Combining the two estimates, we see that $f_{n,2}(z_1,z_2) = O_p\big(\delta_p^2\big)$ and
$$\sum_{i=1}^{n}\mathrm E_{i-1}\big[V_{ni,1}(z_1)V_{ni,1}(z_2)\big] = \frac{\partial^2}{\partial u_2\,\partial u_1} f_{n,1}(u_1,u_2)\bigg|_{(u_1,u_2)=(z_1,z_2)} + O_p\big(\delta_p^2\big).\tag{A.32}$$
It remains to analyze $\frac{\partial^2}{\partial u_2\,\partial u_1} f_{n,1}(u_1,u_2)\big|_{(u_1,u_2)=(z_1,z_2)}$. Let $\mathcal D = \{u \in \mathbb C : \tilde v/2 < \Im(u) < k_1\}$, where $k_1 > v_0$ is an arbitrary constant and $\tilde v = \min\big(\Im(z_1), \Im(z_2)\big)$. Applying the Cauchy–Schwarz inequality, (A.10), (A.13) and (A.12), we have for any $u_1, u_2 \in \mathcal D$,
$$\begin{aligned}
|f_{n,1}(u_1,u_2)| &\le \sum_{i=1}^{n}\sqrt{\mathrm E_{i-1}\Big|r_i^T\,\mathrm E_{i-1}\big(\breve\beta_i(u_1)D_i^{-1}(u_1)\big)\,r_i - n^{-1}\mathrm{tr}\,\mathrm E_{i-1}\big(\breve\beta_i(u_1)D_i^{-1}(u_1)\big)\Big|^2}\\
&\qquad\quad\times\sqrt{\mathrm E_{i-1}\Big|r_i^T\,\mathrm E_{i-1}\big(\breve\beta_i(u_2)D_i^{-1}(u_2)\big)\,r_i - n^{-1}\mathrm{tr}\,\mathrm E_{i-1}\big(\breve\beta_i(u_2)D_i^{-1}(u_2)\big)\Big|^2}\\
&\le \frac{K}{n}\sum_{i=1}^{n}\sqrt{\mathrm E_{i-1}\big(|\breve\beta_i(u_1)|^2\,\|D_i^{-1}(u_1)\|^2\big)}\cdot\sqrt{\mathrm E_{i-1}\big(|\breve\beta_i(u_2)|^2\,\|D_i^{-1}(u_2)\|^2\big)} \le \frac{K|u_1||u_2|}{\tilde v^4}.
\end{aligned}$$
Similarly, we can show that $\frac{\partial}{\partial u_2} f_{n,1}(u_1,u_2)$ is bounded in $n$ by a constant depending only on $u_1$, $u_2$ and $\tilde v$. Therefore, by Lemma 2.3 of Bai and Silverstein (2004), if $f_{n,1}(u_1,u_2)$ converges to a function $f(u_1,u_2)$ in probability for any $u_1, u_2 \in \mathcal D$, then
$$\frac{\partial^2}{\partial u_1\,\partial u_2} f_{n,1}(u_1,u_2) \xrightarrow{\;P\;} \frac{\partial^2}{\partial u_1\,\partial u_2} f(u_1,u_2).$$
Thus, it suffices to prove the convergence of $f_{n,1}(u_1,u_2)$.
To this end, we start with the following decomposition:
$$\begin{aligned}
f_{n,1}(u_1,u_2) &= b_n(u_1)b_n(u_2)\sum_{i=1}^{n}\mathrm E_{i-1}\big[\mathrm E_i\varsigma_i(u_1)\cdot\mathrm E_i\varsigma_i(u_2)\big]\\
&\quad + \sum_{i=1}^{n}\mathrm E_{i-1}\Big[\mathrm E_i\big(\breve\beta_i(u_1)\varsigma_i(u_1)\big)\cdot\mathrm E_i\big(\breve\beta_i(u_2)\varsigma_i(u_2)\big) - \mathrm E_i\big(b_n(u_1)\varsigma_i(u_1)\big)\cdot\mathrm E_i\big(b_n(u_2)\varsigma_i(u_2)\big)\Big]\\
&=: I_{n,1}(u_1,u_2) + I_{n,2}(u_1,u_2).
\end{aligned}$$
Using (A.15) and (A.14), one can show $\mathrm E|I_{n,2}(u_1,u_2)| = o(1)$. It follows that
$$f_{n,1}(u_1,u_2) = I_{n,1}(u_1,u_2) + o_p(1).\tag{A.33}$$
For $I_{n,1}(u_1,u_2)$, by (A.8) and (A.12), we have
$$\begin{aligned}
I_{n,1}(u_1,u_2) &= \frac{1}{n^2}\, b_n(u_1)b_n(u_2)\big(\mathrm E\,Z_{11}^4 - 3\big)\sum_{i=1}^{n}\sum_{j=1}^{p}\mathrm E_i\big[D_i^{-1}(u_1)\big]_{jj}\,\mathrm E_i\big[D_i^{-1}(u_2)\big]_{jj}\\
&\quad - \frac{1}{n^2}\, b_n(u_1)b_n(u_2)\big(\mathrm E\,Z_{11}^4 - 1\big)\sum_{i=1}^{n}\frac{1}{p}\,\mathrm{tr}\,\mathrm E_i D_i^{-1}(u_1)\cdot\mathrm{tr}\,\mathrm E_i D_i^{-1}(u_2)\\
&\quad + \frac{2}{n^2}\, b_n(u_1)b_n(u_2)\sum_{i=1}^{n}\mathrm{tr}\big(\mathrm E_i D_i^{-1}(u_1)\cdot\mathrm E_i D_i^{-1}(u_2)\big) + o(1)\\
&=: I_{n,11}(u_1,u_2) + I_{n,12}(u_1,u_2) + 2I_{n,13}(u_1,u_2) + o(1).
\end{aligned}\tag{A.34}$$
Next we turn to the studies of $I_{n,11}(u_1,u_2)$, $I_{n,12}(u_1,u_2)$ and $I_{n,13}(u_1,u_2)$.
Term $I_{n,11}(u_1,u_2)$. By the BDG inequality and (A.7),
$$\begin{aligned}
\sup_{\{1\le i\le n,\ 1\le j\le p\}}\mathrm E\Big|\big[\mathrm E_i D_i^{-1}(u)\big]_{jj} - \mathrm E m_n(u)\Big|^2 &\le \mathrm E\Big|e_1^T\big(D_1^{-1}(u) - \mathrm E D^{-1}(u)\big)e_1\Big|^2\\
&\le K\Big(\mathrm E\big|e_1^T\big(D_1^{-1}(u) - D^{-1}(u)\big)e_1\big|^2 + \sum_{i=1}^{n}\mathrm E\big|e_1^T(\mathrm E_i - \mathrm E_{i-1})\big(D^{-1}(u) - D_i^{-1}(u)\big)e_1\big|^2\Big)\\
&\le K n\cdot\mathrm E\Big(\big|\beta_1(u)\, r_1^T D_1^{-1}(u)\, e_1\big|^2\cdot\big|e_1^T D_1^{-1}(u)\, r_1\big|^2\Big) = o(1),\qquad\text{for } u \in \mathcal D,
\end{aligned}$$
where the last bound follows from (A.13), (A.9) and (A.12). Further using (A.12) yields
$$\sup_{\{1\le i\le n,\ 1\le j\le p\}}\mathrm E\Big|\big[\mathrm E_i D_i^{-1}(u_1)\big]_{jj}\,\big[\mathrm E_i D_i^{-1}(u_2)\big]_{jj} - \mathrm E m_n(u_1)\,\mathrm E m_n(u_2)\Big| = o(1).\tag{A.35}$$
Note that by Proposition 2.1 and the dominated convergence theorem, we have $\lim_{n\to\infty}\mathrm E m_n(u) = m(u)$ for all $u \in \mathcal D$. Therefore, by (A.16) and (A.35),
$$I_{n,11}(u_1,u_2) \xrightarrow{\;P\;} \big(\mathrm E\,Z_{11}^4 - 3\big)\, y\, u_1 u_2\,\underline m(u_1)\underline m(u_2)\cdot m(u_1)m(u_2).$$
Moreover, by equation (1.2) in Bai and Silverstein (2004), we have $u\,m(u) = -(1 + \underline m(u))^{-1}$. Hence
$$I_{n,11}(u_1,u_2) \xrightarrow{\;P\;} \big(\mathrm E\,Z_{11}^4 - 3\big)\,\theta(u_1,u_2),\quad\text{where}\quad \theta(u_1,u_2) = \frac{y\,\underline m(u_1)\,\underline m(u_2)}{\big(1+\underline m(u_1)\big)\big(1+\underline m(u_2)\big)}.$$
Term $I_{n,12}(u_1,u_2)$.

Claim A.9. Under the assumptions of Lemma A.5, for any $u \in \mathcal D$,
$$\mathrm E\,\mathrm{tr}\,D_1^{-1}(u) + p\left(u - \frac{n-1}{n}\,\bar b_n(u)\right)^{-1} = O\big(n^{1/2}\big),\quad\text{and}\quad \sup_n\left|\left(u - \frac{n-1}{n}\,\bar b_n(u)\right)^{-1}\right| = O(1).$$
The proof of Claim A.9 is given in Appendix C.
By Claim A.9 and (A.12), we obtain
$$\sup_{\{1\le i\le n\}}\mathrm E\left|\frac{1}{pn}\,\mathrm{tr}\,\mathrm E_i D_i^{-1}(u_1)\cdot\mathrm{tr}\,\mathrm E_i D_i^{-1}(u_2) - y_n\left(u_1 - \frac{n-1}{n}\,\bar b_n(u_1)\right)^{-1}\left(u_2 - \frac{n-1}{n}\,\bar b_n(u_2)\right)^{-1}\right| = O\big(n^{-1/2}\big).\tag{A.36}$$
Using further (A.16), we get
$$I_{n,12}(u_1,u_2) \xrightarrow{\;P\;} -\big(\mathrm E\,Z_{11}^4 - 1\big)\,\theta(u_1,u_2).\tag{A.37}$$

Term $I_{n,13}(u_1,u_2)$.

Claim A.10. For each $i$ and $u \in \mathcal D$, let $\widetilde D_i(u) = \frac{1}{n}\sum_{j\neq i} Z_j Z_j^T - uI$. Under the assumptions of Lemma A.5, we have
$$\big\|\widetilde D_i^{-1}(u)\big\| \le \Im(u)^{-1},\quad\text{and}\quad \mathrm E\big\|D_1^{-1}(u) - \widetilde D_1^{-1}(u)\big\| = o(1).$$
The proof of Claim A.10 is postponed to Appendix C.
By Claim A.10, we have
$$\sup_{\{1\le i\le n\}}\mathrm E\big\|\mathrm E_i D_i^{-1}(u_1) - \mathrm E_i\widetilde D_i^{-1}(u_1)\big\| \le \mathrm E\big\|D_1^{-1}(u_1) - \widetilde D_1^{-1}(u_1)\big\| = o(1),$$
which, together with (A.12), implies that
$$\sup_{\{1\le i\le n\}}\mathrm E\left|\frac{1}{p}\,\mathrm{tr}\big(\mathrm E_i D_i^{-1}(u_1)\cdot\mathrm E_i D_i^{-1}(u_2)\big) - \frac{1}{p}\,\mathrm{tr}\big(\mathrm E_i\widetilde D_i^{-1}(u_1)\cdot\mathrm E_i\widetilde D_i^{-1}(u_2)\big)\right| = o(1).\tag{A.38}$$
Noting further (A.16) and inequality (2.17) in Bai and Silverstein (2004), we see that $I_{n,13}(u_1,u_2)$ has the same limit as the term (2.8) in Bai and Silverstein (2004) when $T = I$, and we get
$$I_{n,13}(u_1,u_2) \xrightarrow{\;P\;} \int_0^1 \frac{\theta(u_1,u_2)}{1 - t\,\theta(u_1,u_2)}\,dt;\tag{A.39}$$
see p. 578 in Bai and Silverstein (2004).
Summing up: Plugging (A.34), (A.36), (A.37) and (A.39) into (A.33) yields
$$f_{n,1}(u_1,u_2) \xrightarrow{\;P\;} -2\,\theta(u_1,u_2) + 2\int_0^1 \frac{\theta(u_1,u_2)}{1 - t\,\theta(u_1,u_2)}\,dt.\tag{A.40}$$
Combining (A.31) with (A.32) and (A.40), we see that $\{Y_{ni}\}_{i=1}^{n}$ satisfies condition (i) in Lemma D.1, so we conclude that $\widehat M_{n,1}(z)$ converges, in finite dimension, to a Gaussian process with mean 0 and covariance function
$$\frac{\partial^2}{\partial u_1\,\partial u_2}\left[-2\,\theta(u_1,u_2) + 2\int_0^1\frac{\theta(u_1,u_2)}{1 - t\,\theta(u_1,u_2)}\,dt\right]_{(u_1,u_2)=(z_1,z_2)} = -\frac{2y\,\underline m'(z_1)\,\underline m'(z_2)}{\big(1+\underline m(z_1)\big)^2\big(1+\underline m(z_2)\big)^2} + \frac{2\,\underline m'(z_1)\,\underline m'(z_2)}{\big(\underline m(z_2)-\underline m(z_1)\big)^2} - \frac{2}{(z_1-z_2)^2},\qquad z_1, z_2 \in \mathcal C,$$
namely, (A.27) holds.
A.3.2 Tightness of $\{\widehat M_{n,1}(z) : z \in \mathcal C,\ n \in \mathbb N\}$
We will show the tightness using Lemma D.2 in Appendix D.
We first verify condition (i) in Lemma D.2. Recall the definition of Vni,1 (z) in (A.28). For any fixed z0 ∈ C
with $\Im(z_0) = v_0$, by (A.13) and (A.14), we have
$$\mathrm E\Big|\sum_{i=1}^{n} V_{ni,1}(z_0)\Big|^2 = \sum_{i=1}^{n}\mathrm E\big|V_{ni,1}(z_0)\big|^2 \le K\sum_{i=1}^{n}\big(\mathrm E|\tau_i(z_0)|^2 + \mathrm E|\varsigma_i(z_0)|^2\big) = O(1).$$
Using further (A.28) – (A.30), we get
$$\mathrm E\big|\widehat M_{n,1}(z_0)\big|^2 \le K\left(\mathrm E\Big|\sum_{i=1}^{n}V_{ni,1}(z_0)\Big|^2 + \mathrm E\Big|\sum_{i=1}^{n}V_{ni,2}(z_0)\Big|^2 + \mathrm E\Big|\sum_{i=1}^{n}V_{ni,3}(z_0)\Big|^2\right) = O(1).$$
Hence condition (i) in Lemma D.2 is satisfied.
Next, we move to condition (ii) in Lemma D.2. Note the definition of $\widehat M_n(z)$ in (A.24). We will verify the Kolmogorov–Chentsov condition:
$$\sup_{\{n;\ z_1,z_2\in\mathcal C_n\}}\frac{\mathrm E\big|\widehat M_{n,1}(z_1) - \widehat M_{n,1}(z_2)\big|^2}{|z_1 - z_2|^2} < \infty.\tag{A.41}$$
In fact, for any $z_1, z_2 \in \mathcal C_n$, we have
$$\begin{aligned}
\frac{\widehat M_{n,1}(z_1) - \widehat M_{n,1}(z_2)}{z_1 - z_2} &= \frac{\sum_{i=1}^{n}\big(\mathrm E_i - \mathrm E_{i-1}\big)\,\mathrm{tr}\big(D^{-1}(z_1) - D^{-1}(z_2)\big)}{z_1 - z_2} = \sum_{i=1}^{n}\big(\mathrm E_i - \mathrm E_{i-1}\big)\,\mathrm{tr}\big(D^{-1}(z_1)D^{-1}(z_2)\big)\\
&= \sum_{i=1}^{n}\big(\mathrm E_i - \mathrm E_{i-1}\big)\Big(\mathrm{tr}\big(D^{-1}(z_1)D^{-1}(z_2)\big) - \mathrm{tr}\big(D_i^{-1}(z_1)D_i^{-1}(z_2)\big)\Big)\\
&= \sum_{i=1}^{n}\big(\mathrm E_i - \mathrm E_{i-1}\big)\,\beta_i(z_1)\beta_i(z_2)\big(r_i^T D_i^{-1}(z_1)D_i^{-1}(z_2)\, r_i\big)^2\\
&\quad - \sum_{i=1}^{n}\big(\mathrm E_i - \mathrm E_{i-1}\big)\,\beta_i(z_2)\, r_i^T D_i^{-2}(z_2)D_i^{-1}(z_1)\, r_i - \sum_{i=1}^{n}\big(\mathrm E_i - \mathrm E_{i-1}\big)\,\beta_i(z_1)\, r_i^T D_i^{-2}(z_1)D_i^{-1}(z_2)\, r_i\\
&=: \Theta_{n,1}(z_1,z_2) + \Theta_{n,2}(z_1,z_2) + \Theta_{n,3}(z_1,z_2),
\end{aligned}\tag{A.42}$$
where the fourth equality follows from (3.7) in Bai and Silverstein (2004). We thus only need to bound $\sup_{\{n;\ z_1,z_2\in\mathcal C_n\}}\mathrm E|\Theta_{n,i}(z_1,z_2)|^2$ for $i = 1, 2, 3$.
We start with $\mathrm E|\Theta_{n,1}(z_1,z_2)|^2$. Note that
$$\beta_i(z) = b_n(z) - b_n(z)\beta_i(z)\gamma_i(z).\tag{A.43}$$
Thus,
$$\begin{aligned}
\Theta_{n,1}(z_1,z_2) &= \sum_{i=1}^{n}\big(\mathrm E_i - \mathrm E_{i-1}\big)\Big[\beta_i(z_1)\big(-b_n(z_2)\beta_i(z_2)\gamma_i(z_2) + b_n(z_2)\big)\big(r_i^T D_i^{-1}(z_1)D_i^{-1}(z_2)\, r_i\big)^2\Big]\\
&= -b_n(z_2)\sum_{i=1}^{n}\big(\mathrm E_i - \mathrm E_{i-1}\big)\Big[\beta_i(z_1)\beta_i(z_2)\gamma_i(z_2)\big(r_i^T D_i^{-1}(z_1)D_i^{-1}(z_2)\, r_i\big)^2\Big]\\
&\quad - b_n(z_1)b_n(z_2)\sum_{i=1}^{n}\big(\mathrm E_i - \mathrm E_{i-1}\big)\Big[\beta_i(z_1)\gamma_i(z_1)\big(r_i^T D_i^{-1}(z_1)D_i^{-1}(z_2)\, r_i\big)^2\Big]\\
&\quad + b_n(z_1)b_n(z_2)\sum_{i=1}^{n}\big(\mathrm E_i - \mathrm E_{i-1}\big)\Big[\big(r_i^T D_i^{-1}(z_1)D_i^{-1}(z_2)\, r_i\big)^2 - \big(n^{-1}\mathrm{tr}\,D_i^{-1}(z_1)D_i^{-1}(z_2)\big)^2\Big]\\
&=: -b_n(z_2)B_{n,1}(z_1,z_2) - b_n(z_1)b_n(z_2)B_{n,2}(z_1,z_2) + b_n(z_1)b_n(z_2)B_{n,3}(z_1,z_2).
\end{aligned}$$
By the BDG inequality, we have
$$\begin{aligned}
\sup_{\{n;\ z_1,z_2\in\mathcal C_n\}}\mathrm E\big|B_{n,1}(z_1,z_2)\big|^2 &\le \sup_{\{n;\ z_1,z_2\in\mathcal C_n\}} K n\cdot\mathrm E\Big|\beta_1(z_1)\beta_1(z_2)\gamma_1(z_2)\big(r_1^T D_1^{-1}(z_1)D_1^{-1}(z_2)\, r_1\big)^2\Big|^2\\
&\le \sup_{\{n;\ z_1,z_2\in\mathcal C_n\}} K n\cdot\mathrm E\Big(|\beta_1(z_1)|^2|\beta_1(z_2)|^2|\gamma_1(z_2)|^2\,\|D_1^{-1}(z_1)\|^4\,\|D_1^{-1}(z_2)\|^4\Big)\\
&\le \sup_{\{n;\ z_1,z_2\in\mathcal C_n\}} K n\cdot\mathrm E\Big(|\gamma_1(z_2)|^2\big(1 + n^{1+\alpha}\,1_{\{\lambda_{\min}^{\widetilde S_{(1)}} < \eta_l\ \text{or}\ \lambda_{\max}^{\widetilde S_n} > \eta_r\}}\big)^{12}\Big)\\
&\le \sup_{\{n;\ z_1,z_2\in\mathcal C_n\}} K n\cdot\mathrm E\Big(|\gamma_1(z_2)|^2 + |\gamma_1(z_2)|^2\, n^{12(1+\alpha)}\,1_{\{\lambda_{\min}^{\widetilde S_{(1)}} < \eta_l\ \text{or}\ \lambda_{\max}^{\widetilde S_n} > \eta_r\}}\Big) = O(1),
\end{aligned}\tag{A.44}$$
where the third inequality follows from (A.19), and the last step uses (A.21), (A.19) and (A.17). Similarly, one can show $\sup_{\{n;\ z_1,z_2\in\mathcal C_n\}}\mathrm E\big|B_{n,2}(z_1,z_2)\big|^2 = O(1)$.
As to $\mathrm E|B_{n,3}(z_1,z_2)|^2$, we have from the BDG inequality that
$$\begin{aligned}
&\sup_{\{n;\ z_1,z_2\in\mathcal C_n\}}\mathrm E\big|B_{n,3}(z_1,z_2)\big|^2\\
&\quad\le K\sup_{\{n;\ z_1,z_2\in\mathcal C_n\}} n\cdot\mathrm E\Big|\big(r_1^T D_1^{-1}(z_1)D_1^{-1}(z_2)\, r_1\big)^2 - \big(n^{-1}\mathrm{tr}\,D_1^{-1}(z_1)D_1^{-1}(z_2)\big)^2\Big|^2\\
&\quad\le K\sup_{\{n;\ z_1,z_2\in\mathcal C_n\}} n\cdot\mathrm E\Big(\big|r_1^T D_1^{-1}(z_1)D_1^{-1}(z_2)\, r_1 - n^{-1}\mathrm{tr}\,D_1^{-1}(z_1)D_1^{-1}(z_2)\big|^2\\
&\qquad\qquad\qquad\times\big|r_1^T D_1^{-1}(z_1)D_1^{-1}(z_2)\, r_1 + n^{-1}\mathrm{tr}\,D_1^{-1}(z_1)D_1^{-1}(z_2)\big|^2\Big)\\
&\quad\le K\sup_{\{n;\ z_1,z_2\in\mathcal C_n\}}\Big(\sqrt{\mathrm E\|D_1^{-1}(z_1)\|^8\cdot\mathrm E\|D_1^{-1}(z_2)\|^8} + \sqrt{\mathrm E\|D_1^{-1}(z_1)\|^4\cdot\mathrm E\|D_1^{-1}(z_2)\|^4}\\
&\qquad\qquad\qquad + n\cdot\mathrm E\Big(\big\|D_1^{-1}(z_1)D_1^{-1}(z_2)\big\|^2\cdot n^{4(1+\alpha)}\,1_{\{\lambda_{\min}^{\widetilde S_{(1)}} < \eta_l\ \text{or}\ \lambda_{\max}^{\widetilde S_n} > \eta_r\}}\Big)\Big) = O(1),
\end{aligned}$$
where the third inequality uses (A.10) and (A.19), the fourth inequality follows from the Cauchy–Schwarz inequality and (A.10) again, and the last step uses (A.20), (A.19) and (A.17).
Combining the three bounds above and the boundedness of $\sup_{\{n;\ z\in\mathcal C_n\}}|b_n(z)|$ in (A.20) yields
$$\sup_{\{n;\ z_1,z_2\in\mathcal C_n\}}\mathrm E\big|\Theta_{n,1}(z_1,z_2)\big|^2 = O(1).\tag{A.45}$$
As to Θn,2 (z1 , z2 ), by (A.43), we can rewrite Θn,2 (z1 , z2 ) as
Θn,2 (z1 , z2 ) = − bn (z2 )
+ bn (z2 )
n
X
i=1
n
X
i=1
−1
−1
−2
−1
(Ei − Ei−1 ) rTi D−2
i (z2 )Di (z1 )ri − n tr Di (z2 )Di (z1 )
−1
(Ei − Ei−1 ) βi (z2 )γi (z2 )rTi D−2
i (z2 )Di (z1 )ri .
Similarly to (A.44), by the BDG inequality, (A.10), the Cauchy-Schwarz inequality and Lemma A.7 yields
sup_{n; z1,z2∈Cn} E|Θ_{n,2}(z1,z2)|²
≤ K sup_{n; z1,z2∈Cn} n|bn(z2)|² E| r1ᵀ D1^{-2}(z2)D1^{-1}(z1) r1 − n^{-1} tr D1^{-2}(z2)D1^{-1}(z1) |²
  + K sup_{n; z1,z2∈Cn} n|bn(z2)|² E| β1(z2)γ1(z2) r1ᵀ D1^{-2}(z2)D1^{-1}(z1) r1 |²   (A.46)
≤ K sup_{n; z1,z2∈Cn} |bn(z2)|² √( E‖D1^{-1}(z2)‖⁸ · E‖D1^{-1}(z1)‖⁴ )
  + K sup_{n; z1,z2∈Cn} n|bn(z2)|² E[ |γ1(z2)|² + |γ1(z2)|² · n^{8(1+α)} 1_{{λ_min^{S̃(1)} < ηl or λ_max^{S̃n} > ηr}} ]
= O(1).
Similarly, we can show that

sup_{n; z1,z2∈Cn} E|Θ_{n,3}(z1,z2)|² = O(1).   (A.47)

Plugging (A.45)–(A.47) into (A.42) yields (A.41). The tightness of { M̂_{n,1}(z) : z ∈ C, n ∈ N } follows from Lemma D.2.
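Throughout Section A.3, fluctuations such as tr D^{-1}(z) − E tr D^{-1}(z) are written as sums of martingale differences (Ei − E_{i−1})(·) with respect to the filtration generated by r1, ..., rn, before the BDG inequality is applied. The following self-contained sketch (not from the paper; the toy sample space, the function f, and the helpers cond_exp and increments are all ours) illustrates the two properties this rests on: the increments telescope to the centered quantity, and each increment is a martingale difference.

```python
from itertools import product

# Three fair coin flips; E_i denotes conditional expectation given the first
# i flips -- the filtration behind decompositions such as
# tr D^{-1}(z) - E tr D^{-1}(z) = sum_i (E_i - E_{i-1})(...).
OMEGA = list(product([0, 1], repeat=3))

def f(w):
    # an arbitrary bounded functional of the flips
    return w[0] + 2 * w[1] * w[2] - 3 * w[2]

def cond_exp(g, w, i):
    # E_i g at outcome w: average of g over outcomes agreeing with w on flips 1..i
    matching = [v for v in OMEGA if v[:i] == w[:i]]
    return sum(g(v) for v in matching) / len(matching)

def increments(w):
    # martingale differences (E_i - E_{i-1}) f, i = 1, 2, 3
    return [cond_exp(f, w, i) - cond_exp(f, w, i - 1) for i in range(1, 4)]

Ef = cond_exp(f, OMEGA[0], 0)
# the differences telescope: their sum is exactly f - E f at every outcome
telescoped = all(abs(sum(increments(w)) - (f(w) - Ef)) < 1e-12 for w in OMEGA)
print(telescoped)  # True
```

The same telescoping identity is what makes the second moment of the centered trace controllable term by term via the BDG inequality.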
A.3.3 Convergence of M̂_{n,2}(·)

In this subsection, we show that M̂_{n,2}(·) converges to a continuous function uniformly on C. Below, for each n, we work with Cn.
Define

Rn(z) = −E mn(z) ( −z − 1/E mn(z) + yn/( E mn(z) + 1 ) ).   (A.48)

We have

1 = E mn(z) ( −z + yn/( E mn(z) + 1 ) + Rn(z)/E mn(z) ) =: E mn(z) ϖ_{mn}(z).   (A.49)

On the other hand, by equation (6.1.5) in Bai and Silverstein (2010) and using the fact that H = δ1 in our case, we have

1 = mn⁰(z) ( −z + yn/( mn⁰(z) + 1 ) ) =: mn⁰(z) ϖ_{mn⁰}(z).   (A.50)
Combining (A.49) and (A.50) leads to

E mn(z) − mn⁰(z) = E mn(z) · mn⁰(z) · ( ϖ_{mn⁰}(z) − ϖ_{mn}(z) )
  = ( E mn(z) − mn⁰(z) ) · yn E mn(z) mn⁰(z) / ( ( mn⁰(z) + 1 )( E mn(z) + 1 ) ) − mn⁰(z) Rn(z).

Note that M̂_{n,2}(z) = n( E mn(z) − mn⁰(z) ). Thus

M̂_{n,2}(z) · ( 1 − yn E mn(z) mn⁰(z) / ( ( mn⁰(z) + 1 )( E mn(z) + 1 ) ) ) = −mn⁰(z) · nRn(z).   (A.51)

For mn⁰(z), we have from equation (4.2) in Bai and Silverstein (2004) that

lim_{n→∞} sup_{z∈C} | mn⁰(z) − m(z) | = 0.   (A.52)
Proposition A.11. Under the assumptions of Lemma A.5, we have

lim_{n→∞} sup_{z∈Cn} | ( 1 − yn E mn(z) mn⁰(z) / ( ( mn⁰(z) + 1 )( E mn(z) + 1 ) ) )^{-1} − ( 1 − y m²(z)/(1+m(z))² )^{-1} | = 0,   (A.53)

and

lim_{n→∞} sup_{z∈Cn} | nRn(z) − 2( y m²(z)/(1+m(z))³ + [ y m²(z)/(1+m(z))³ ] ( 1 − y m²(z)/(1+m(z))² )^{-1} ) | = 0.   (A.54)

Moreover, the limiting functions in (A.53) and (A.54) are continuous on C.
The proof of Proposition A.11 is given in Appendix C. Combination of (A.51), (A.52) and Proposition A.11 yields

sup_{z∈Cn} | M̂_{n,2}(z) − h(z) | → 0,

where h(z) is the function on the right-hand side of (A.26). Note that h(z) is continuous on C. Using further the definition of M̂n(z) in (A.24), we see that M̂_{n,2}(z) converges uniformly to h(z) on C.

A.3.4 Conclusion of the proof of Proposition A.8

Combining the results in Sections A.3.1–A.3.3, we see that M̂n(·) converges weakly to a Gaussian process with mean function (A.26) and covariance function (A.27).
Appendix B Proofs of propositions in Section 2

Proof of Proposition 2.1. Note that scaling the entries of Z by (EZ11²)^{-1/2} does not change the definition of S̃n, thus w.l.o.g. we can assume that EZ11² = 1.

Let X = (X1, ···, Xn) with Xi = √p Zi/|Zi|. Then S̃n = 1/n · XXᵀ and Sn = 1/n · ZZᵀ. Let I = { i : |Zi| ≠ 0 }. Recall that we set Zi/|Zi| = 0 when |Zi| = 0. It follows that

tr S̃n / p = #I / n ≤ 1.

On the other hand, by Theorem 3.1 in Yin et al. (1988), we have

lim sup_{n→∞} tr Sn / p ≤ lim_{n→∞} ‖Sn‖ = (1 + √y)², a.s..
Hence, by Lemma 2.7 in Bai (1999), almost surely for all large n,

L⁴( F^{S̃n}, F^{Sn} ) ≤ ( 2/(p²n) ) tr( S̃n + Sn ) · tr[ (X − Z)(X − Z)ᵀ ]
  ≤ K ( 1/(pn) ) Σ_{i∈I} |Zi|² ( 1 − √p/|Zi| )²
  = (K/n) Σ_{i∈I} ( 1 − 2|Zi|/√p + |Zi|²/p )
  ≤ (K/n) Σ_{i=1}^n ( 1 − 2|Zi|/√p + |Zi|²/p ),   (B.1)

where for any two distributions F and G, L(F,G) denotes the Lévy distance between them.
Consider the limit of (B.1). By the strong law of large numbers,

lim_{n→∞} (1/n) Σ_{i=1}^n |Zi|²/p = lim_{n→∞} ( 1/(np) ) Σ_{i=1}^n Σ_{j=1}^p Zij² = 1, a.s..   (B.2)
Furthermore, Lemma 2 in Bai and Yin (1993) implies that for any fixed C > 0,

lim inf_{n→∞} (1/n) Σ_{i=1}^n |Zi|/√p ≥ lim inf_{n→∞} (1/n) Σ_{i=1}^n √( Σ_{j=1}^p Zij² 1_{{|Zij|≤C}} / p ) = √( E[ Zij² 1_{{|Zij|≤C}} ] ), a.s..

By the arbitrariness of C, we obtain

lim inf_{n→∞} (1/n) Σ_{i=1}^n |Zi|/√p ≥ 1, a.s..   (B.3)
On the other hand, by the Cauchy–Schwarz inequality and (B.2), almost surely

lim sup_{n→∞} (1/n) Σ_{i=1}^n |Zi|/√p ≤ lim sup_{n→∞} √( (1/n) Σ_{i=1}^n |Zi|²/p ) = 1.   (B.4)

Combining (B.3) with (B.4) yields

lim_{n→∞} (1/n) Σ_{i=1}^n |Zi|/√p = 1, a.s..   (B.5)
By (B.1), (B.2) and (B.5), we obtain that lim_{n→∞} L⁴( F^{S̃n}, F^{Sn} ) = 0, a.s.. Therefore, almost surely, F^{S̃n} converges weakly to the same limit as F^{Sn} does, which is the standard Marčenko–Pastur law with index y.
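The Lévy distance used in (B.1) can be approximated directly from its definition L(F,G) = inf{ ε > 0 : F(x−ε) − ε ≤ G(x) ≤ F(x+ε) + ε for all x }. The following sketch (not from the paper; the grid resolutions and helper names are ours) evaluates it on a grid for empirical CDFs:

```python
# Levy distance L(F, G) = inf{eps > 0 : F(x - eps) - eps <= G(x) <= F(x + eps) + eps
# for all x}, approximated on a grid for empirical CDFs of finite samples.
def ecdf(sample):
    s = sorted(sample)
    n = len(s)
    def F(x):
        # fraction of sample points <= x
        return sum(1 for v in s if v <= x) / n
    return F

def levy_distance(F, G, lo, hi, grid=2000):
    xs = [lo + (hi - lo) * k / grid for k in range(grid + 1)]
    for j in range(101):                     # scan eps = 0.00, 0.01, ..., 1.00
        eps = j / 100.0
        if all(F(x - eps) - eps <= G(x) + 1e-12 and G(x) <= F(x + eps) + eps + 1e-12
               for x in xs):
            return eps
    return float("inf")

F = ecdf([0.0, 1.0, 2.0, 3.0])
G = ecdf([0.1, 1.1, 2.1, 3.1])  # the same sample shifted by 0.1
print(levy_distance(F, F, -1.0, 5.0))  # 0.0
print(levy_distance(F, G, -1.0, 5.0))  # 0.1
```

A pure shift by d yields Lévy distance d here, illustrating why metric convergence L → 0 captures weak convergence of the spectral distributions.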
Proof of Proposition 2.3. We need to calculate the mean and variance in Theorem 2.2 when f (x) = log(x)
and y ∈ (0, 1).
We start with the mean

EG(log) = ( 1/(πi) ) ∮_C log(z) [ y m³(z)/(1+m(z))³ ] ( 1 − y m²(z)/(1+m(z))² )^{-1} dz
  − ( 1/(2πi) ) ∮_C log(z) [ y m³(z)/(1+m(z))³ ] ( 1 − y m²(z)/(1+m(z))² )^{-2} dz.   (B.6)
Consider the first term in (B.6). We have from equation (1.2) in Bai and Silverstein (2004) that

z = − 1/m(z) + y/( 1 + m(z) ).

Taking the derivative with respect to z and solving for m′(z) yields

m′(z) = m²(z)( 1 + m(z) )² / ( ( 1 + m(z) )² − y m²(z) ).
Therefore,

d/dz ( 1 − y m²(z)/(1+m(z))² ) = − [ 2y m(z) m′(z)( 1 + m(z) ) − 2y m′(z) m²(z) ] / ( 1 + m(z) )³
  = −2 [ y m³(z)/(1+m(z))³ ] ( 1 − y m²(z)/(1+m(z))² )^{-1}.
It follows that for any f ∈ H,

( 1/(πi) ) ∮_C f(z) [ y m³(z)/(1+m(z))³ ] ( 1 − y m²(z)/(1+m(z))² )^{-1} dz
  = − ( 1/(2πi) ) ∮_C f(z) d( 1 − y m²(z)/(1+m(z))² )
  = ( 1/(2πi) ) ∮_C f′(z) ( 1 − y m²(z)/(1+m(z))² ) dz.   (B.7)
Furthermore, by Lemma 3.11 in Bai and Silverstein (2010), for r ∈ [−1, 1],

H1(r) := lim_{x→0+} m( 1 + y + 2√y r + ix ) = ( −1 − √y r + i √y √(1−r²) ) / ( 1 + y + 2√y r ),
H2(r) := lim_{x→0−} m( 1 + y + 2√y r + ix ) = ( −1 − √y r − i √y √(1−r²) ) / ( 1 + y + 2√y r ).
Hence, by (B.7),

( 1/(πi) ) ∮_C f(z) [ y m³(z)/(1+m(z))³ ] ( 1 − y m²(z)/(1+m(z))² )^{-1} dz
  = ( √y/(πi) ) [ ∫_{−1}^{1} f′( 1 + y + 2√y r ) ( 1 − y H1²(r)/(1+H1(r))² ) dr + ∫_{1}^{−1} f′( 1 + y + 2√y r ) ( 1 − y H2²(r)/(1+H2(r))² ) dr ]
  = − ( 4√y/π ) ∫_{−1}^{1} f′( 1 + y + 2√y r ) r √(1−r²) dr.   (B.8)
When f(z) = log(z), routine calculation gives that the integral above equals y. As to the second term of (B.6), it has been computed in Bai and Silverstein (2004) and equals log(1−y)/2; see equation (1.21) therein. To sum up,

EG(log) = y + log(1−y)/2.   (B.9)
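The "routine calculation" behind (B.9) can be checked numerically: with f = log, the right-hand side of (B.8) reduces to −(4√y/π) ∫_{−1}^{1} r √(1−r²)/(1 + y + 2√y r) dr, which should equal y. A midpoint-rule sketch (the helper name and grid size are ours):

```python
import math

# First term of the mean in (B.9): for f = log, (B.8) gives
# -4*sqrt(y)/pi * int_{-1}^{1} r*sqrt(1-r^2)/(1+y+2*sqrt(y)*r) dr, expected to be y.
def first_term_log(y, n=100000):
    h = 2.0 / n
    total = 0.0
    for k in range(n):
        r = -1.0 + (k + 0.5) * h  # midpoint rule on [-1, 1]
        total += r * math.sqrt(1.0 - r * r) / (1.0 + y + 2.0 * math.sqrt(y) * r) * h
    return -4.0 * math.sqrt(y) / math.pi * total

for y in (0.2, 0.5, 0.8):
    print(y, round(first_term_log(y), 6))  # the second number is close to y
```

This matches the exact evaluation ∫_{−1}^{1} r√(1−r²)/(a+br) dr = −π√y/4 with a = 1+y, b = 2√y.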
Next we compute the variance

Var(G(log)) = ( y/(2π²) ) ∮_{C2} ∮_{C1} [ log(z1) log(z2) m′(z1) m′(z2) / ( (1+m(z1))² (1+m(z2))² ) ] dz1 dz2
  − ( 1/(2π²) ) ∮_{C2} ∮_{C1} [ log(z1) log(z2) m′(z1) m′(z2) / ( m(z2) − m(z1) )² ] dz1 dz2.   (B.10)
We start with the first term. W.l.o.g., we assume that C2 encloses C1. Let Ci^m := { m : m = m(z), z ∈ Ci } for i ∈ {1, 2}. According to p. 598 in Bai and Silverstein (2004), C2^m encloses C1^m, and C1^m encloses −1 and 1/(y−1). Hence, by the residue theorem,

( y/(2π²) ) ∮_{C2} ∮_{C1} [ log(z1) log(z2) / ( (1+m(z1))² (1+m(z2))² ) ] dm(z1) dm(z2) = ( y/(2π²) ) ( ∮_{C1^m} [ log(z(m)) / (1+m)² ] dm )² = −2y.

On the other hand, the second term in (B.10) equals −2 log(1−y) according to equation (1.22) in Bai and Silverstein (2004). Thus,

Var(G(log)) = −2y − 2 log(1−y).   (B.11)
Combining Theorem 2.2, (B.9) and (B.11), we obtain

G_{S̃n}(log) →_D N( y + log(1−y)/2, −2y − 2 log(1−y) ).

The conclusion in Proposition 2.3 follows.
Proof of Proposition 2.4. It suffices to calculate the mean and variance in Theorem 2.2 when f(x) = x². We focus on the case when y < 1. The case when y > 1 can be dealt with similarly. The case y = 1 follows from the continuity of (2.1) and (2.2) in y.

By (B.8) and equation (1.23) in Bai and Silverstein (2004),

EG(x²) = − ( 8√y/π ) ∫_{−1}^{1} ( 1 + y + 2√y r ) · r · √(1−r²) dr + (1/4)[ (1−√y)⁴ + (1+√y)⁴ ] − (1/2) Σ_{i=0}^{2} binom(2,i)² yⁱ = −y.
Similarly to (B.10), by the residue theorem and equation (1.24) in Bai and Silverstein (2004), we have

Var(G(x²)) = ( y/(2π²) ) ∮_{C2} ∮_{C1} [ z1² z2² m′(z1) m′(z2) / ( (1+m(z1))² (1+m(z2))² ) ] dz1 dz2
  − ( 1/(2π²) ) ∮_{C2} ∮_{C1} [ z1² z2² m′(z1) m′(z2) / ( m(z2) − m(z1) )² ] dz1 dz2
  = ( y/(2π²) ) ( ∮_{C1^m} [ z²(m) / (1+m)² ] dm )² + 4y( 2 + 5y + 2y² ) = 4y².
The conclusion follows.
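The algebra giving EG(x²) = −y above can likewise be verified numerically by combining quadrature of the (B.8)-type integral with the two closed-form correction terms (a sketch; the helper mean_x2 and the grid size are ours):

```python
import math

def mean_x2(y, n=100000):
    # quadrature of the (B.8)-type integral for f(x) = x^2, i.e. f'(x) = 2x
    h = 2.0 / n
    total = 0.0
    for k in range(n):
        r = -1.0 + (k + 0.5) * h
        total += (1.0 + y + 2.0 * math.sqrt(y) * r) * r * math.sqrt(1.0 - r * r) * h
    first = -8.0 * math.sqrt(y) / math.pi * total
    # closed-form correction terms cited from equation (1.23) in Bai and Silverstein (2004)
    second = ((1 - math.sqrt(y)) ** 4 + (1 + math.sqrt(y)) ** 4) / 4.0
    third = -0.5 * sum(math.comb(2, i) ** 2 * y ** i for i in range(3))
    return first + second + third

print(round(mean_x2(0.3), 6))  # -0.3
```

The integral term contributes −2y and the correction terms contribute +y, so the total is −y for any y ∈ (0, 1).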
Appendix C Proofs of Claims, Lemmas and Propositions in Appendix A
Proof of Claim A.1. Because of the self-normalization in Ŝn and S̆n, w.l.o.g. we can assume that Var(Ẑ11) = Var(Z̆11) ≡ 1. We first give the bounds of λ_min^{Ŝn}, λ_min^{S̆n} and ‖Ŝn‖, ‖S̆n‖. By Lemma 2′ in Bai and Silverstein (2004) (see p. 601 therein), for any ε > 0 and k > 0, we have

P( max_{1≤i≤n} p^{-1} | |Z̆i|² − p | ≥ ε ) = o( n^{-k} ).   (C.1)

Note that

| E Ẑ11 | = | E[ Z11 1_{{|Z11| > p^{1/2} δp}} ] | ≤ E[ |Z11|⁴ 1_{{|Z11| > p^{1/2} δp}} ] / ( p^{3/2} δp³ ) = o( p^{-3/2} δp ).   (C.2)
Thus, by (C.1), for all large n and any ε > 0,

P( max_{1≤i≤n} p^{-1} | |Ẑi|² − p | ≥ ε ) ≤ n P( |Z̆1| + |E Ẑ1| ≥ √(p + pε) or |Z̆1| − |E Ẑ1| ≤ √(p − pε) )
  ≤ n P( p^{-1} | |Z̆1|² − p | ≥ ε/2 ) = o( n^{-k} ), for any k > 0.   (C.3)
Applying (C.1), (C.3) and the Borel–Cantelli lemma yields that for any ε ∈ (0, 1), almost surely for all large n,

( 1/(n(1+ε)) ) Σ_{i=1}^n Z̆i Z̆iᵀ ≤ S̆n ≤ ( 1/(n(1−ε)) ) Σ_{i=1}^n Z̆i Z̆iᵀ, and
( 1/(n(1+ε)) ) Σ_{i=1}^n Ẑi Ẑiᵀ ≤ Ŝn ≤ ( 1/(n(1−ε)) ) Σ_{i=1}^n Ẑi Ẑiᵀ.

Recall that a±(y) = (1 ± √y)². It follows from Theorem 1 and Remark 3 in Bai and Yin (1993), Theorem 3.1 and Remark 2 in Yin et al. (1988), and the arbitrariness of ε that

a−(y) 1_{{0<y<1}} ≤ lim inf_{n→∞} λ_min^{S̆n} ≤ lim sup_{n→∞} ‖S̆n‖ ≤ a+(y), a.s..   (C.4)

Using further the estimate above (2.1) in Bai and Yin (1993) and Lemma 2.3 in Yin et al. (1988), we obtain

a−(y) 1_{{0<y<1}} ≤ lim inf_{n→∞} λ_min^{Ŝn} ≤ lim sup_{n→∞} ‖Ŝn‖ ≤ a+(y), a.s..   (C.5)
Next, for any ε > 0, let

A = { max_{1≤i≤n} p^{-1} | |Ẑi|² − p | ≥ ε } ∪ { max_{1≤i≤n} p^{-1} | |Z̆i|² − p | ≥ ε }.

By (C.1) and (C.3), we have P(A) = o( n^{-k} ) for any k > 0. Note that for any f ∈ H, f is analytic on a domain enclosing [ a−(y)1_{{0<y<1}}, a+(y) ]. By (C.4) and (C.5), we have, almost surely for all large n,
| p ∫_{−∞}^{+∞} f(x) dF^{Ŝn}(x) − p ∫_{−∞}^{+∞} f(x) dF^{S̆n}(x) | ≤ C_f · Σ_{i=1}^p | λ_i^{Ŝn} − λ_i^{S̆n} |,   (C.6)

where

C_f = sup_{ min{λ_min^{Ŝn}, λ_min^{S̆n}} ≤ x ≤ max{‖Ŝn‖, ‖S̆n‖} } |f′(x)| = O_p(1).
By the Cauchy–Schwarz inequality,

Σ_{i=1}^p | λ_i^{Ŝn} − λ_i^{S̆n} | ≤ √( Σ_{i=1}^p ( √(λ_i^{Ŝn}) − √(λ_i^{S̆n}) )² ) · √( Σ_{i=1}^p ( √(λ_i^{Ŝn}) + √(λ_i^{S̆n}) )² )
  ≤ √( ( 2p/n ) · tr[ (X̂ − X̆)(X̂ − X̆)ᵀ ] · ( ‖Ŝn‖ + ‖S̆n‖ ) ),

where X̂ = √p ( Ẑ1/|Ẑ1|, ···, Ẑn/|Ẑn| ) and X̆ = √p ( Z̆1/|Z̆1|, ···, Z̆n/|Z̆n| ), and the last step uses the inequality in
p. 621 of Bai (1999). Thus, by (C.4), (C.5) and (C.2), we have, almost surely for large n,

Σ_{i=1}^p | λ_i^{Ŝn} − λ_i^{S̆n} | · 1_{A^c}
  ≤ K √( tr[ (X̂ − X̆)(X̂ − X̆)ᵀ ] ) · 1_{A^c}
  ≤ K p √( Σ_{i=1}^n (Ẑi − Z̆i)ᵀ(Ẑi − Z̆i)/|Ẑi|² + Σ_{i=1}^n ( 1/|Ẑi| − 1/|Z̆i| )² Z̆iᵀZ̆i ) · 1_{A^c}
  ≤ K p √( Σ_{i=1}^n p (E Ẑ11)²/|Ẑi|² + (1/n) Σ_{i=1}^n Z̆iᵀZ̆i · n · max_{1≤i≤n} p (E Ẑ11)²/( |Ẑi|² |Z̆i|² ) ) · 1_{A^c}
  ≤ K p √( p (E Ẑ11)² + p (E Ẑ11)² ) · 1_{A^c} = O(δp).   (C.7)

The conclusion follows by noting further that P(A) → 0 as n → ∞.
Proof of Lemma A.3. We have

E[ (WᵀAW − tr A)(WᵀBW − tr B) ] = E[ WᵀAW · WᵀBW ] − tr A · E[ WᵀBW ] − tr B · E[ WᵀAW ] + tr A tr B.   (C.8)

Write W = (W1, ···, Wp)ᵀ. Recall that we set V/|V| = 0 when |V| = 0. Thus

Σ_{i=1}^p Wi² = p 1_{{|V| ≠ 0}}.   (C.9)

By Lemma 2′ in Bai and Silverstein (2004), we have

E W1² = (1/p) E Σ_{i=1}^p Wi² = P( |V| ≠ 0 ) = 1 + o( n^{-k} ), for any k > 0.   (C.10)

Hence,

E[ tr A · WᵀBW ] = E W1² · tr A tr B + E W1W2 · tr A Σ_{i≠j} b_{ij} = tr A tr B + E W1W2 · tr A Σ_{i≠j} b_{ij} + o_p( ‖A‖² + ‖B‖² ).   (C.11)

Similarly,

E[ tr B · WᵀAW ] = tr A tr B + E W1W2 · tr B Σ_{i≠j} a_{ij} + o_p( ‖A‖² + ‖B‖² ).   (C.12)
As to E[ WᵀAW · WᵀBW ], elementary computation yields

E[ WᵀAW · WᵀBW ] = ( E W1⁴ − 3 E W1²W2² ) Σ_{i=1}^p a_{ii} b_{ii} + E W1²W2² tr A tr B + 2 E W1²W2² tr(AB)
  + E W1²W2W3 · ( Σ_{i≠j≠l} a_{ii} b_{jl} + 4 Σ_{i≠j≠l} a_{ij} b_{li} + Σ_{i≠j≠l} a_{jl} b_{ii} )
  + E W1³W2 · ( 2 Σ_{i≠j} a_{ii} b_{ij} + 2 Σ_{i≠j} a_{ij} b_{ii} ) + E W1W2W3W4 · Σ_{i≠j≠l≠m} a_{ij} b_{lm}.   (C.13)
Combining (C.8) with (C.11)–(C.13) gives

E[ (WᵀAW − tr A)(WᵀBW − tr B) ]
  = ( E W1⁴ − 3 E W1²W2² ) Σ_{i=1}^p a_{ii} b_{ii} + ( E W1²W2² − 1 ) tr A tr B + 2 E W1²W2² tr(AB)
  − E W1W2 · ( tr A Σ_{i≠j} b_{ij} + tr B Σ_{i≠j} a_{ij} ) + E W1³W2 · ( 2 Σ_{i≠j} a_{ii} b_{ij} + 2 Σ_{i≠j} a_{ij} b_{ii} )
  + E W1²W2W3 · ( Σ_{i≠j≠l} a_{ii} b_{jl} + 4 Σ_{i≠j≠l} a_{ij} b_{li} + Σ_{i≠j≠l} a_{jl} b_{ii} )
  + E W1W2W3W4 · Σ_{i≠j≠l≠m} a_{ij} b_{lm} + o_p( ‖A‖² + ‖B‖² ).   (C.14)

We turn to analyze the remainder terms.
Term E W1W2 · ( tr A Σ_{i≠j} b_{ij} + tr B Σ_{i≠j} a_{ij} ). Note that

| Σ_{i≠j} a_{ij} | = | 1ᵀA1 − tr A | ≤ 2p ‖A‖.   (C.15)

Similarly, | Σ_{i≠j} b_{ij} | ≤ 2p ‖B‖. Thus

| tr A Σ_{i≠j} b_{ij} + tr B Σ_{i≠j} a_{ij} | ≤ K p² ( ‖A‖² + ‖B‖² ).   (C.16)

Next, for any ε ∈ (0, 1), let

A1 = { | (p−2)^{-1} Σ_{i≠1,2} Vi² − 1 | ≥ ε }.   (C.17)

We have

E W1W2 = p E[ V1V2 1_{A1} / ( V1² + V2² + ··· + Vp² ) ] + p E[ V1V2 1_{A1^c} / ( V1² + V2² + ··· + Vp² ) ].   (C.18)

For the first term,

p | E[ V1V2 1_{A1} / ( V1² + V2² + ··· + Vp² ) ] | ≤ p E[ |V1V2| 1_{A1} / ( V1² + V2² ) ] ≤ p P(A1) = o( p^{-k} ), for any k > 0,   (C.19)
where the last estimate is due to Lemma 2′ in Bai and Silverstein (2004). As to the second term, by Taylor expansion, almost surely there exists ζ² ∈ ( 0, V1² + V2² ) such that

1/( V1² + V2² + ··· + Vp² ) = 1/( V3² + V4² + ··· + Vp² ) − ( V1² + V2² )/( V3² + V4² + ··· + Vp² )² + ( V1² + V2² )²/( ζ² + V3² + ··· + Vp² )³.

It follows that

p E[ V1V2 1_{A1^c} / ( V1² + V2² + ··· + Vp² ) ]
  = p E[ V1V2 1_{A1^c} / ( V3² + V4² + ··· + Vp² ) ] − p E[ V1V2 ( V1² + V2² ) 1_{A1^c} / ( V3² + V4² + ··· + Vp² )² ] + p E[ V1V2 ( V1² + V2² )² 1_{A1^c} / ( ζ² + V3² + ··· + Vp² )³ ].

The first two terms on the right-hand side vanish, since A1 depends only on V3, ···, Vp and E V1 = E V2 = 0. Hence

p | E[ V1V2 1_{A1^c} / ( V1² + V2² + ··· + Vp² ) ] | ≤ p E[ |V1V2| ( V1² + V2² )² 1_{A1^c} / ( ζ² + V3² + ··· + Vp² )³ ] ≤ K p · p^{-3} · 2 E[ p δp² ( V1⁴ + V2⁴ ) ] = O( p^{-1} δp² ),   (C.20)

where the second inequality uses the fact that |Vi| ≤ p^{1/2} δp. Plugging (C.19) and (C.20) into (C.18) yields

E W1W2 = O( p^{-1} δp² ).   (C.21)

Therefore, by (C.16) and (C.21), we obtain

| E W1W2 · ( tr A Σ_{i≠j} b_{ij} + tr B Σ_{i≠j} a_{ij} ) | = o_p( ‖A‖² + ‖B‖² ).   (C.22)
Term E W1³W2 · ( 2 Σ_{i≠j} a_{ii} b_{ij} + 2 Σ_{i≠j} a_{ij} b_{ii} ). Note that

| Σ_{i≠j} a_{ii} b_{ij} | = | Σ_{i=1}^p a_{ii} · eiᵀ B ( 1 − ei ) | ≤ p^{3/2} ‖A‖ ‖B‖ ≤ p^{3/2} ( ‖A‖² + ‖B‖² ).

Thus,

| 2 Σ_{i≠j} a_{ii} b_{ij} + 2 Σ_{i≠j} a_{ij} b_{ii} | ≤ K p^{3/2} ( ‖A‖² + ‖B‖² ).   (C.23)

As to E W1³W2, for any ε ∈ (0, 1), let

A2 = { | (p−1)^{-1} Σ_{i≠2} Vi² − 1 | ≥ ε }.
Similarly to (C.21), we have

| E W1³W2 | ≤ p² E[ |V1³V2| 1_{A2} / ( V1² + V2² + ··· + Vp² )² ] + p² | E[ V1³V2 1_{A2^c} / ( V1² + V2² + ··· + Vp² )² ] |
  ≤ K p² P(A2) + p² | E[ V1³V2 1_{A2^c} / ( V1² + V3² + ··· + Vp² )² ] − E[ 2 V1³V2³ 1_{A2^c} / ( V1² + ζ² + V3² + ··· + Vp² )³ ] |
  ≤ o( p^{-1} ) + K p² · E|V1|³ E|V2|³ / p³ = O( p^{-1} ).   (C.24)

Combining (C.23) and (C.24) yields

| E W1³W2 · ( 2 Σ_{i≠j} a_{ii} b_{ij} + 2 Σ_{i≠j} a_{ij} b_{ii} ) | = O( √p ( ‖A‖² + ‖B‖² ) ).   (C.25)
Term E W1²W2W3 · ( Σ_{i≠j≠l} a_{ii} b_{jl} + 4 Σ_{i≠j≠l} a_{ij} b_{li} + Σ_{i≠j≠l} a_{jl} b_{ii} ). We have

| Σ_{i≠j≠l} a_{ii} b_{jl} | ≤ | Σ_{i=1}^p a_{ii} ( 1 − ei )ᵀ B ( 1 − ei ) | + | Σ_{i=1}^p a_{ii} Σ_{j≠i} b_{jj} | ≤ p² ‖A‖ ‖B‖ + p² ‖A‖ ‖B‖ ≤ p² ( ‖A‖² + ‖B‖² ).

Furthermore,

| Σ_{i≠j≠l} a_{ij} b_{li} | ≤ Σ_{i=1}^p | eiᵀ A ( 1 − ei ) · ( 1 − ei )ᵀ B ei | + Σ_{i=1}^p | eiᵀ A B ei − a_{ii} b_{ii} | ≤ p² ‖A‖ ‖B‖ + 2p ‖A‖ ‖B‖ ≤ K p² ( ‖A‖² + ‖B‖² ).

Therefore,

| Σ_{i≠j≠l} a_{ii} b_{jl} + 4 Σ_{i≠j≠l} a_{ij} b_{li} + Σ_{i≠j≠l} a_{jl} b_{ii} | ≤ K p² ( ‖A‖² + ‖B‖² ).   (C.26)

As to E W1²W2W3, for any ε ∈ (0, 1), let

A3 = { | (p−2)^{-1} Σ_{i≠2,3} Vi² − 1 | ≥ ε }.
Similarly to (C.21), we have

| E W1²W2W3 | ≤ p² E[ |V1²V2V3| 1_{A3} / ( V1² + V2² + ··· + Vp² )² ] + p² | E[ V1²V2V3 1_{A3^c} / ( V1² + V2² + ··· + Vp² )² ] |
  ≤ K p² P(A3) + p² | E[ V1²V2V3 1_{A3^c} / ( V1² + V4² + ··· + Vp² )² ] − E[ 2 V1²V2V3 ( V2² + V3² ) 1_{A3^c} / ( V1² + V4² + ··· + Vp² )³ ] + E[ 3 V1²V2V3 ( V2² + V3² )² 1_{A3^c} / ( V1² + ζ² + V4² + ··· + Vp² )⁴ ] |
  ≤ o( p^{-2} ) + K p² · ( E V1² / p⁴ ) · E[ p δp² ( V2⁴ + V3⁴ ) ] = O( p^{-1} δp² ).

Combining the estimate with (C.26) yields

| E W1²W2W3 · ( Σ_{i≠j≠l} a_{ii} b_{jl} + 4 Σ_{i≠j≠l} a_{ij} b_{li} + Σ_{i≠j≠l} a_{jl} b_{ii} ) | = o_p( ‖A‖² + ‖B‖² ).   (C.27)
Term E W1W2W3W4 · Σ_{i≠j≠l≠m} a_{ij} b_{lm}. Note that

Σ_{i≠j≠l≠m} a_{ij} b_{lm} = 1ᵀA1 · 1ᵀB1 − tr A tr B − 2 tr(AB) + 2 Σ_{i=1}^p a_{ii} b_{ii} − ( 2 Σ_{i≠j} a_{ii} b_{ij} + 2 Σ_{i≠j} a_{ij} b_{ii} ) − ( Σ_{i≠j≠l} a_{ii} b_{jl} + 4 Σ_{i≠j≠l} a_{ij} b_{li} + Σ_{i≠j≠l} a_{jl} b_{ii} ).

By (C.23) and (C.26),

| Σ_{i≠j≠l≠m} a_{ij} b_{lm} | ≤ 2p² ‖A‖ ‖B‖ + 4p ‖A‖ ‖B‖ + K p² ( ‖A‖² + ‖B‖² ) ≤ K p² ( ‖A‖² + ‖B‖² ).   (C.28)
Moreover, recalling the definition of A1 in (C.17), again similarly to (C.21), we have

| E W1W2W3W4 | ≤ p² E[ |V1V2V3V4| 1_{A1} / ( V1² + V2² + ··· + Vp² )² ] + p² | E[ V1V2V3V4 1_{A1^c} / ( V1² + V2² + ··· + Vp² )² ] |
  ≤ K p² P(A1) + p² | E[ V1V2V3V4 1_{A1^c} / ( V3² + V4² + ··· + Vp² )² ] − E[ 2 V1V2V3V4 ( V1² + V2² ) 1_{A1^c} / ( V3² + V4² + ··· + Vp² )³ ] + E[ 3 V1V2V3V4 ( V1² + V2² )² 1_{A1^c} / ( ζ² + V3² + ··· + Vp² )⁴ ] |
  ≤ o( p^{-2} ) + K p² · ( E|V3V4| / p⁴ ) · E[ p δp² ( V1⁴ + V2⁴ ) ] = O( p^{-1} δp² ).

Combining the estimate with (C.28) yields

| E W1W2W3W4 · Σ_{i≠j≠l≠m} a_{ij} b_{lm} | = o_p( ‖A‖² + ‖B‖² ).   (C.29)
Plugging the estimates in (C.22), (C.25), (C.27) and (C.29) into (C.14), we obtain

E[ (WᵀAW − tr A)(WᵀBW − tr B) ] = ( E W1⁴ − 3 E W1²W2² ) Σ_{i=1}^p a_{ii} b_{ii} + ( E W1²W2² − 1 ) tr A tr B + 2 E W1²W2² tr(AB) + o_p( ‖A‖² + ‖B‖² ).   (C.30)

Next we relate E W1⁴ to E V1⁴. For any ε ∈ (0, 1), let

A4 = { | (p−1)^{-1} Σ_{i≠1} Vi² − 1 | ≥ ε }.

Similarly to (C.21), we have

| E W1⁴ − E V1⁴ | ≤ E[ |W1⁴ − V1⁴| 1_{A4} ] + E[ |W1⁴ − V1⁴| 1_{A4^c} ]
  ≤ p² P(A4) + E V1⁴ · P(A4) + E[ | p²/( V2² + V3² + ··· + Vp² )² − 1 | 1_{A4^c} ] · E V1⁴ + E[ 2 p² V1⁶ 1_{A4^c} / ( V2² + V3² + ··· + Vp² )³ ]
  ≤ o(1) + K ε + K p³ δp² E V1⁴ / p³.

By the arbitrariness of ε, we obtain

E W1⁴ − E V1⁴ = o(1).   (C.31)

As to E W1²W2², we have from (C.9) that

p² P( |V| ≠ 0 ) = E( Σ_{i=1}^p Wi² )² = p E W1⁴ + p(p−1) E W1²W2².

Thus, by (C.10) and (C.31), we get

E W1²W2² = ( p² P(|V| ≠ 0) − p E W1⁴ ) / ( p(p−1) ) = 1 − ( E V1⁴ − 1 )/p + o( p^{-1} ).   (C.32)

Plugging (C.31) and (C.32) into (C.30) yields

E[ (WᵀAW − tr A)(WᵀBW − tr B) ] = ( E V1⁴ − 3 ) Σ_{i=1}^p a_{ii} b_{ii} − p^{-1} ( E V1⁴ − 1 ) tr A tr B + 2 tr(AB) + o_p( ‖A‖² + ‖B‖² ).

This completes the proof of (A.8).
We now prove (A.9). Let Ã = A ei eiᵀ A =: ( ã_{ij} ) and B̃ = B ei eiᵀ B =: ( b̃_{ij} ). Applying (C.13) and the estimates in (C.25), (C.27) and (C.29), we have

E[ Wᵀ Ã W · Wᵀ B̃ W ] = ( E W1⁴ − 3 E W1²W2² ) Σ_{j=1}^p ã_{jj} b̃_{jj} + E W1²W2² tr à tr B̃ + 2 E W1²W2² tr( à B̃ ) + o_p( ‖Ã‖² + ‖B̃‖² ).   (C.33)

Note that

‖Ã‖ ≤ ‖A‖², ‖B̃‖ ≤ ‖B‖²,
Σ_{j=1}^p ã_{jj} b̃_{jj} ≤ ‖Ã‖ · tr B̃ = ‖Ã‖ · eiᵀ B² ei ≤ ‖A‖² ‖B‖²,
tr à tr B̃ = eiᵀ A² ei · eiᵀ B² ei ≤ ‖A‖² ‖B‖²,
tr( à B̃ ) = eiᵀ A B ei · eiᵀ B A ei ≤ ‖A‖² ‖B‖².

Plugging these bounds into (C.33), and further using (C.31) and (C.32), yields the desired estimate in (A.9).
Next we prove Lemma A.4. The following lemma will be used.

Lemma C.1 (Theorem 3 in Rosenthal (1970)). Let U1, ···, Up be i.i.d. random variables with E U1 = 0. For any k ≥ 2, there exists K > 0 such that

E | Σ_{i=1}^p Ui |^k ≤ K [ ( Σ_{i=1}^p E Ui² )^{k/2} + Σ_{i=1}^p E |Ui|^k ].
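For intuition, Lemma C.1 can be checked exactly in a small case: for Rademacher variables and k = 4, E|Σ Ui|⁴ = 3p² − 2p, which is dominated by K[ (Σ EUi²)^{k/2} + Σ E|Ui|^k ] = K(p² + p) with K = 3 (a sketch, not from the paper; the enumeration helper is ours):

```python
from itertools import product

# Rosenthal's inequality with k = 4 for i.i.d. centered U_i, checked exactly
# for Rademacher variables (U_i = +/-1 with probability 1/2) by enumeration.
def fourth_moment_of_sum(p):
    outcomes = product([-1, 1], repeat=p)
    return sum(sum(u) ** 4 for u in outcomes) / 2 ** p

for p in range(1, 9):
    lhs = fourth_moment_of_sum(p)   # E|sum U_i|^4, equals 3p^2 - 2p here
    rhs = 3 * (p ** 2 + p)          # K[(sum EU_i^2)^{k/2} + sum E|U_i|^k] with K = 3
    assert lhs == 3 * p ** 2 - 2 * p and lhs <= rhs
print("ok")  # prints ok
```

The p² term on the right is exactly the Gaussian-like contribution, and the Σ E|Ui|^k term covers heavy individual summands; the proof of Lemma A.4 uses the inequality with Ui = 1 − Vi².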
Proof of Lemma A.4. Write

E |WᵀAW − tr A|^k = E[ |WᵀAW − tr A|^k 1_{{|V|=0}} ] + E[ |WᵀAW − tr A|^k 1_{{|V|≠0}} ].

We have

E[ |WᵀAW − tr A|^k 1_{{|V|=0}} ] = E[ |tr A|^k 1_{{|V|=0}} ] ≤ p^k ‖A‖^k P( |V| = 0 ) = ‖A‖^k · o(1),   (C.34)

where the last estimate follows from Lemma 2′ in Bai and Silverstein (2004). Furthermore,

E[ |WᵀAW − tr A|^k 1_{{|V|≠0}} ] = E[ | ( p/|V|² ) VᵀAV − tr A |^k 1_{{|V|≠0}} ]
  ≤ K E[ | p/|V|² − 1 |^k | VᵀAV |^k 1_{{|V|≠0}} ] + K E | VᵀAV − tr A |^k.   (C.35)
By Lemma C.1, for any k ≥ 2,

E[ | p/|V|² − 1 |^k | VᵀAV |^k 1_{{|V|≠0}} ] ≤ ‖A‖^k E | p − |V|² |^k ≤ K ‖A‖^k [ ( Σ_{i=1}^p E (1 − Vi²)² )^{k/2} + Σ_{i=1}^p E |1 − Vi²|^k ] ≤ K p^{k−1} δp^{2k−4} ‖A‖^k,   (C.36)

where the last step follows from the fact that |Vi| ≤ p^{1/2} δp and E V1⁴ < ∞. On the other hand, if we denote by A* the adjoint matrix of A, then by Lemma 2.7 in Bai and Silverstein (1998), for any k ≥ 2,

E | VᵀAV − tr A |^k ≤ K [ ( E|V1|⁴ tr(AA*) )^{k/2} + E|V1|^{2k} tr( (AA*)^{k/2} ) ] ≤ K p^{k−1} δp^{2k−4} ‖A‖^k.   (C.37)

Combining (C.34)–(C.37) gives (A.10).
As to (A.11), by (C.10), (C.15) and (C.21), we get

| E[ WᵀAW − tr A ] | = | Σ_{i=1}^p a_{ii} E W1² + Σ_{i≠j} a_{ij} E W1W2 − Σ_{i=1}^p a_{ii} | = O( δp² ‖A‖ ).
Proof of Lemma A.5. Recall the definitions of ςi(z) and τi(z) in (A.2). Denote by E_{(i)}(·) the conditional expectation with respect to the σ-field generated by r1, ···, r_{i−1}, r_{i+1}, ···, rn. By (A.10), for any k ≥ 2,

E |ς1(z)|^k = E[ E_{(1)} |ς1(z)|^k ] ≤ K p^{-1} δp^{2k−4} E ‖D1^{-1}(z)‖^k, and
E |τ1(z)|^k = E[ E_{(1)} |τ1(z)|^k ] ≤ K p^{-1} δp^{2k−4} E ‖D1^{-1}(z)‖^{2k}.   (C.38)

The bounds in (A.14) then follow from (A.12).
As to (A.15), by (A.13),

E | β̆1(z) − bn(z) |² = E | β̆1(z) bn(z) ( n^{-1} tr D1^{-1}(z) − n^{-1} E tr D1^{-1}(z) ) |² ≤ K n^{-2} |z|⁴ ℑ(z)^{-4} · E | tr D1^{-1}(z) − E tr D1^{-1}(z) |².

Let Ei(·) stand for the conditional expectation with respect to the σ-field generated by r1, ..., ri. By the Burkholder–Davis–Gundy (BDG) inequality, (A.7) and the Cauchy–Schwarz inequality, for any k ≥ 2,

E | tr D1^{-1}(z) − E tr D1^{-1}(z) |^k = E | Σ_{i=2}^n ( Ei − E_{i−1} )( tr D1^{-1}(z) − tr D_{1i}^{-1}(z) ) |^k
  ≤ K E ( Σ_{i=2}^n | tr D1^{-1}(z) − tr D_{1i}^{-1}(z) |² )^{k/2}
  ≤ K n^{k/2} E | β12(z) r2ᵀ D12^{-2}(z) r2 |^k ≤ K n^{k/2} √( E |β12(z)|^{2k} · E ‖D12^{-1}(z)‖^{4k} ).   (C.39)

Using further the bounds in (A.12) and (A.13), we get (A.15).
Now we show (A.16). We first prove that Eβ1(z), bn(z) and b̄n(z) converge to the same limit. In fact, by (A.13) and the Cauchy–Schwarz inequality,

E |β1(z) − bn(z)| ≤ E |β1(z) − β̆1(z)| + E |β̆1(z) − bn(z)|
  ≤ |z|² ℑ(z)^{-2} E |ς1(z)| + ( E |β̆1(z) − bn(z)|² )^{1/2}
  ≤ |z|² ℑ(z)^{-2} ( E |ς1(z)|² )^{1/2} + ( E |β̆1(z) − bn(z)|² )^{1/2}.

Plugging in the bounds in (A.14) and (A.15) yields

E |β1(z) − bn(z)| ≤ K p^{-1/2} ( |z|² ℑ(z)^{-3} + |z|³ ℑ(z)^{-5} ).   (C.40)

Furthermore, by (A.13) and Lemma 2.6 in Silverstein and Bai (1995),

| bn(z) − b̄n(z) | ≤ |z|² ℑ(z)^{-2} · n^{-1} E | tr D1^{-1}(z) − tr D12^{-1}(z) | ≤ n^{-1} |z|² ℑ(z)^{-3}.   (C.41)

Next, using an argument similar to that for equation (2.2) of Silverstein (1995), we have

Eβ1(z) = −z E mn(z).   (C.42)

By Proposition 2.1 and the dominated convergence theorem, the term E mn(z) converges to m(z). The convergences in (A.16) follow from (C.40) and (C.41).
Proof of Lemma A.6. For any ηl < a−(y)1_{{0<y<1}} and ηr > a+(y), there exists ε ∈ (0, 1) such that (1+ε)ηl < a−(y)1_{{0<y<1}} and (1−ε)ηr > a+(y). Note that for S̃n,

P( λ_min^{S̃n} < ηl or λ_max^{S̃n} > ηr ) ≤ P( λ_min^{Sn} < (1+ε)ηl or λ_max^{Sn} > (1−ε)ηr ) + P( max_{1≤i≤n} p^{-1} | |Zi|² − p | ≥ ε ),

and for S̃(1),

P( λ_min^{S̃(1)} < ηl or λ_max^{S̃(1)} > ηr ) = P( λ_min^{S̃_{n−1}} < ( n/(n−1) ) ηl or λ_max^{S̃_{n−1}} > ( n/(n−1) ) ηr ).

The conclusion follows from the estimates (1.9a), (1.9b) and Lemma 2′ in Bai and Silverstein (2004).
Proof of Lemma A.7. For D1^{-1}(z), we have

sup_{z∈Cn} ‖D1^{-1}(z)‖ ≤ ( |ηl − xl|^{-1} + |ηr − xr|^{-1} ) 1_{{λ_min^{S̃(1)} ≥ ηl and λ_max^{S̃(1)} ≤ ηr}} + n^{1+α} 1_{{λ_min^{S̃(1)} < ηl or λ_max^{S̃(1)} > ηr}}
  ≤ K ( 1 + n^{1+α} 1_{{λ_min^{S̃(1)} < ηl or λ_max^{S̃n} > ηr}} ).   (C.43)

Thus, by (A.17), for any k ≥ 1,

sup_{n; z∈Cn} E ‖D1^{-1}(z)‖^k ≤ K + K n^{k(1+α)} P( λ_min^{S̃(1)} < ηl or λ_max^{S̃n} > ηr ) = O(1).   (C.44)
Furthermore,

sup_{z∈Cn} |γ1(z)| ≤ K sup_{z∈Cn} ‖D1^{-1}(z)‖ + K sup_{n; z∈Cn} E ‖D1^{-1}(z)‖ ≤ K ( 1 + n^{1+α} 1_{{λ_min^{S̃(1)} < ηl or λ_max^{S̃n} > ηr}} ).

For βi(z), note that by (A.7), riᵀ D^{-1}(z) ri − riᵀ Di^{-1}(z) ri = −βi(z) ( riᵀ Di^{-1}(z) ri )². Hence

βi(z) = 1 − βi(z) riᵀ Di^{-1}(z) ri = 1 − riᵀ D^{-1}(z) ri,

where the second equation holds due to the definition of βi(z). Similarly to (C.43), we can show that

sup_{z∈Cn} ‖D^{-1}(z)‖ ≤ K ( 1 + n^{1+α} 1_{{λ_min^{S̃(1)} < ηl or λ_max^{S̃n} > ηr}} ).
It follows that

sup_{z∈Cn} |βi(z)| ≤ 1 + sup_{z∈Cn} | riᵀ D^{-1}(z) ri | ≤ 1 + yn sup_{z∈Cn} ‖D^{-1}(z)‖ ≤ K ( 1 + n^{1+α} 1_{{λ_min^{S̃(1)} < ηl or λ_max^{S̃n} > ηr}} ).   (C.45)

We thus complete the proof of (A.19).
We now prove (A.21). Similarly to (C.44) and using the bound in (C.45), we can show that for any k ≥ 1,

sup_{n; z∈Cn} { E |β1(z)|^k, E |β12(z)|^k, E ‖D^{-1}(z)‖^k, E ‖D12^{-1}(z)‖^k } = O(1).   (C.46)
For E |γ1(z)|^k, by (C.38), (C.39), (C.44) and (C.46), for any k ≥ 2,

sup_{z∈Cn} E |γ1(z)|^k ≤ K sup_{z∈Cn} ( E |γ1(z) − ς1(z)|^k + E |ς1(z)|^k )
  ≤ K sup_{z∈Cn} n^{-k} E | tr D1^{-1}(z) − E tr D1^{-1}(z) |^k + K p^{-1} δp^{2k−4} E ‖D1^{-1}(z)‖^k
  ≤ ( K/n^{k/2} ) sup_{z∈Cn} √( E |β12(z)|^{2k} E ‖D12^{-1}(z)‖^{4k} ) + K p^{-1} δp^{2k−4} = O( n^{-1} δp^{2k−4} ).

The conclusion (A.21) follows.
Finally, we prove (A.20). It remains to show the boundedness of sup_{n; z∈Cn} |bn(z)|. Note that bn(z) = β1(z) + bn(z)β1(z)γ1(z). Thus, by the Cauchy–Schwarz inequality,

|bn(z)| ≤ E |β1(z)| + |bn(z)| √( E |β1(z)|² E |γ1(z)|² ).

It follows from (C.46) and (A.21) that for all large n,

sup_{z∈Cn} |bn(z)| ≤ sup_{n; z∈Cn} E |β1(z)| / ( 1 − √( E |β1(z)|² E |γ1(z)|² ) ) = O(1).
Proof of Claim A.9. Recall the definition of b̄n(u) in (A.3). By equation (2.9) in Bai and Silverstein (2004), for any u ∈ C⁺,

Di^{-1}(u) = ( u − ((n−1)/n) b̄n(u) )^{-1} ( −Ip + b̄n(u) Ai(u) + Bi(u) + Ci(u) ),

where

Ai(u) = Σ_{j≠i} ( rj rjᵀ − n^{-1} I ) D_{ij}^{-1}(u),
Bi(u) = Σ_{j≠i} ( β_{ij}(u) − b̄n(u) ) rj rjᵀ D_{ij}^{-1}(u), and
Ci(u) = ( b̄n(u)/n ) Σ_{j≠i} ( D_{ij}^{-1}(u) − Di^{-1}(u) ).

Thus,

| E tr D1^{-1}(u) + p ( u − ((n−1)/n) b̄n(u) )^{-1} | ≤ | u − ((n−1)/n) b̄n(u) |^{-1} ( E | tr( b̄n(u) A1(u) ) | + E | tr B1(u) | + E | tr C1(u) | ).   (C.47)
Now we study the terms in (C.47). For u ∈ D,

| u − ((n−1)/n) b̄n(u) |^{-1} = | 1 + n^{-1} E tr D12^{-1}(u) | / | u ( 1 + n^{-1} E tr D12^{-1}(u) ) − (n−1)/n |
  ≤ ( 1 + 2p/(n ṽ) ) / ℑ( u ( 1 + n^{-1} E tr D12^{-1}(u) ) ) ≤ ( 1 + 2p/(n ṽ) ) / ( ṽ/2 ) = O(1),   (C.48)

where the last inequality uses (A.12) and the fact that ℑ( u/(λ − u) ) ≥ 0 for any λ ≥ 0 and u ∈ C⁺.
Term E | tr( b̄n(u) A1(u) ) |. Note from (A.10) and (A.12) that

E | r2ᵀ D12^{-1}(u) r2 − n^{-1} tr D12^{-1}(u) |² = E[ E_{(2)} | r2ᵀ D12^{-1}(u) r2 − n^{-1} tr D12^{-1}(u) |² ] ≤ K n^{-1} E ‖D12^{-1}(u)‖² = O( n^{-1} ).   (C.49)

Thus by (A.13) and the Cauchy–Schwarz inequality,

E | tr( b̄n(u) A1(u) ) | ≤ K n · ( E | r2ᵀ D12^{-1}(u) r2 − n^{-1} tr D12^{-1}(u) |² )^{1/2} = O( n^{1/2} ).   (C.50)
Term E | tr B1(u) |. By the Cauchy–Schwarz inequality, (A.13) and (A.12), we have

E | tr B1(u) | ≤ n · E | ( β12(u) − b̄n(u) ) r2ᵀ D12^{-1}(u) r2 |
  ≤ n · √( E | β12(u) − b̄n(u) |² ) · √( E | r2ᵀ D12^{-1}(u) r2 |² )
  ≤ K n · √( E[ β12²(u) b̄n²(u) | r2ᵀ D12^{-1}(u) r2 − n^{-1} E tr D12^{-1}(u) |² ] )
  ≤ K n · √( E | r2ᵀ D12^{-1}(u) r2 − n^{-1} tr D12^{-1}(u) |² + n^{-2} E | tr D12^{-1}(u) − E tr D12^{-1}(u) |² ).   (C.51)

Similarly to (C.39), using (A.12) and (A.13), we can show

E | tr D12^{-1}(u) − E tr D12^{-1}(u) |² = O(n).

Combining the estimate with (C.49) and (C.51) yields

E | tr B1(u) | = O( n^{1/2} ).   (C.52)
Term E | tr C1(u) |. By (A.13) and Lemma 2.6 in Silverstein and Bai (1995),

E | tr C1(u) | ≤ | b̄n(u) | · E | tr( D12^{-1}(u) − D1^{-1}(u) ) | = O(1).   (C.53)

Combining (C.47), (C.48), (C.50), (C.52) and (C.53) yields the desired conclusion.
Proof of Claim A.10. By the same argument as for (A.12), we have ‖D̃1^{-1}(u)‖ ≤ ℑ(u)^{-1}. As to the difference D1^{-1}(u) − D̃1^{-1}(u), we have

‖ D1^{-1}(u) − D̃1^{-1}(u) ‖ ≤ ‖D1^{-1}(u)‖ · ‖ (1/n) Σ_{i≠1} Zi Ziᵀ − (1/n) Σ_{i≠1} Xi Xiᵀ ‖ · ‖D̃1^{-1}(u)‖.

Theorem 1 in Bai and Yin (1993) implies that ‖ n^{-1} Σ_{i≠1} Zi Ziᵀ ‖ = O_p(1). Furthermore, by Lemma 2 therein,

‖ (1/n) Σ_{i≠1} Xi Xiᵀ − (1/n) Σ_{i≠1} Zi Ziᵀ ‖ = ‖ (1/n) Σ_{i≠1} ( p/|Zi|² − 1 ) Zi Ziᵀ ‖ ≤ max_{i≠1} | 1 − p/|Zi|² | · ‖ (1/n) Σ_{i≠1} Zi Ziᵀ ‖ = o_p(1).

Using further (A.12), we see that

‖ D1^{-1}(u) − D̃1^{-1}(u) ‖ = o_p(1).

The conclusion follows from the dominated convergence theorem.
Proof of Proposition A.11. Proof of (A.53). We take two steps to prove (A.53): in Step I we show that

lim_{n→∞} sup_{z∈Cn} | E mn(z) − m(z) | = 0,   (C.54)

and in Step II we prove (A.53).

Step I. Recall that S̃n = (1/n) XᵀX. By Proposition 2.1, F^{S̃n} → F_y almost surely. Recall that ηl ∈ ( xl, a−(y)1_{{0<y<1}} ) and ηr ∈ ( a+(y), xr ). We have

sup_{z∈Cn} | E mn(z) − m(z) |
  ≤ sup_{z∈Cn} E | ∫ ( λ − z )^{-1} 1_{[ηl,ηr]}(λ) d( F^{S̃n}(λ) − F_y(λ) ) | + sup_{z∈Cn} E | ∫ ( λ − z )^{-1} 1_{[ηl,ηr]^c}(λ) dF^{S̃n}(λ) |.

The first term converges to 0 by the bounded convergence theorem. As to the second term, we have from (A.17) that

lim_{n→∞} sup_{z∈Cn} E | ∫ ( λ − z )^{-1} 1_{[ηl,ηr]^c}(λ) dF^{S̃n}(λ) | ≤ lim_{n→∞} n^{1+α} P( λ_min^{S̃n} < ηl or λ_max^{S̃n} > ηr ) = 0.

Hence (C.54) holds.
Step II. Bai and Silverstein (2004) show that with C0 = { xr ± iv, v ∈ [0, v0] } ∪ { xl ± iv, v ∈ [0, v0] }, there exists a positive number δ such that inf_{z∈C0} |m(z) + 1| > δ; see p. 585 therein. Therefore, by (C.54), for all n large enough,

inf_{z∈C0∩Cn} | E mn(z) + 1 | ≥ δ/2.

Moreover, by Lemma 2.3 of Silverstein (1995), for any z with ℑ(z) = ±v0, we have | E mn(z) + 1 |^{-1} ≤ max{ 2, 4/v0 }. Combining the two estimates we get

sup_{n; z∈Cn} | E mn(z) + 1 |^{-1} = O(1).   (C.55)

Using further (A.52) and (C.54), we have

sup_{n; z∈Cn} | mn⁰(z) + 1 |^{-1} = O(1).   (C.56)
Combining (A.52), (C.54), (C.55) with (C.56) yields

lim_{n→∞} sup_{z∈Cn} | yn E mn(z) mn⁰(z) / ( ( mn⁰(z) + 1 )( E mn(z) + 1 ) ) − y m²(z)/(1+m(z))² | = 0.

Finally, by (4.6) in Bai and Silverstein (2004), there exists ξ1 < 1 such that

sup_{z∈C} | y m²(z)/(1+m(z))² | ≤ ξ1 < 1.   (C.57)

The conclusion in (A.53) follows. Furthermore, (C.57) and the definition of m(z) imply the continuity of ( 1 − y m²(z)/(1+m(z))² )^{-1} on C.
Proof of (A.54). Recall the definition of Rn(z) in (A.48). Note that E mn(z) = −( 1 − yn )/z + yn E m̲n(z), where m̲n denotes the Stieltjes transform of (1/n)XXᵀ. We can rewrite Rn(z) as

Rn(z) = yn / ( E mn(z) + 1 ) + z yn E m̲n(z).

Thus, by (C.42) and (A.6), we have

nRn(z) ( E mn(z) + 1 ) = tr I + ( z − Eβ1(z) ) E tr D^{-1}(z)
  = E tr( ( D(z) + zI ) D^{-1}(z) ) − Eβ1(z) · E tr D^{-1}(z)
  = E tr( Σ_{i=1}^n ri riᵀ · D^{-1}(z) ) − Eβ1(z) · E tr D^{-1}(z)
  = n E[ β1(z) r1ᵀ D1^{-1}(z) r1 ] − Eβ1(z) · E tr D^{-1}(z)
  = E[ n β1(z) γ1(z) ] + Eβ1(z) · E[ tr D1^{-1}(z) − tr D^{-1}(z) ]
  =: Jn(z) + J̃n(z).   (C.58)

We will analyze Jn(z) and J̃n(z) separately.
For J̃n(z), by (A.7) and (A.43),

J̃n(z) = Eβ1(z) · E[ β1(z) r1ᵀ D1^{-2}(z) r1 ]
  = bn(z) Eβ1(z) · E[ r1ᵀ D1^{-2}(z) r1 ] − bn(z) Eβ1(z) · E[ β1(z) γ1(z) r1ᵀ D1^{-2}(z) r1 ].

For the second term, we have from (A.20), (C.46), Hölder's inequality and (A.21) that

sup_{z∈Cn} | bn(z) Eβ1(z) · E[ β1(z) γ1(z) r1ᵀ D1^{-2}(z) r1 ] | ≤ K sup_{z∈Cn} ( E |γ1(z)|² )^{1/2} ( E |β1(z)|⁴ )^{1/4} ( E ‖D1^{-1}(z)‖⁸ )^{1/4} = O( n^{-1/2} ).

For the first term, by (A.20), (C.46) and (A.11),

sup_{z∈Cn} | bn(z) Eβ1(z) · ( E[ r1ᵀ D1^{-2}(z) r1 ] − n^{-1} E tr D1^{-2}(z) ) | ≤ K sup_{z∈Cn} | E[ E_{(1)} τ1(z) ] | ≤ K n^{-1} δp² sup_{z∈Cn} E ‖D1^{-1}(z)‖² = o( n^{-1} ).

Therefore

J̃n(z) = n^{-1} bn(z) Eβ1(z) · E tr D1^{-2}(z) + εn(z),   (C.59)

where εn(z) denotes a residual term which satisfies lim_{n→∞} sup_{z∈Cn} |εn(z)| = 0. In the following analysis, we will continue using this notation, whose value may change from line to line.
As to Jn(z), since β1(z) = bn(z) − bn²(z)γ1(z) + bn²(z)β1(z)γ1²(z) by (A.43), we have

Jn(z) = −n bn²(z) E γ1²(z) + n bn(z) E γ1(z) + n bn²(z) E[ β1(z) γ1³(z) ]
  = −n bn²(z) E ς1²(z) − n^{-1} bn²(z) E | tr D1^{-1}(z) − E tr D1^{-1}(z) |²
    − 2 bn²(z) E[ ς1(z) ( tr D1^{-1}(z) − E tr D1^{-1}(z) ) ]
    + n bn(z) E γ1(z) + n bn²(z) E[ β1(z) γ1³(z) ]
  =: −n bn²(z) E ς1²(z) + J_{n,1}(z) + J_{n,2}(z) + J_{n,3}(z) + J_{n,4}(z).   (C.60)

We turn to study the last four remainder terms.

We first show that

sup_{n; z∈Cn} E | tr D1^{-1}(z) − E tr D1^{-1}(z) |² = O(1).   (C.61)
Firstly, similarly to Lemmas A.6 and A.7, we can prove

Claim C.2. Let γ12 = r1ᵀ D12^{-1}(z) r1 − n^{-1} E tr D12^{-1}(z) and S̃(12) = S̃n − r1 r1ᵀ − r2 r2ᵀ. Under the assumptions of Lemma A.5, for any ηl < a−(y)1_{{0<y<1}} and ηr > a+(y), we have

P( λ_min^{S̃(12)} ≤ ηl or λ_max^{S̃n} ≥ ηr ) = o( n^{-k} ) for any k > 0.   (C.62)
In addition, there exists K > 0 such that

sup_{z∈Cn} { ‖D12^{-1}(z)‖, |β12(z)|, |γ12(z)| } ≤ K ( 1 + n^{1+α} 1_{{λ_min^{S̃(12)} ≤ ηl or λ_max^{S̃n} ≥ ηr}} ),   (C.63)

and

sup_{n; z∈Cn} { E ‖D12^{-1}(z)‖^k, |b̄n(z)| } = O(1) for k ≥ 1,   (C.64)

n · sup_{z∈Cn} E |γ12(z)|^k = O( δp^{2k−4} ) for k ≥ 2.   (C.65)
We now prove (C.61). By the BDG inequality and (A.7), we have

sup_{n; z∈Cn} E | tr D1^{-1}(z) − E tr D1^{-1}(z) |²
  = sup_{n; z∈Cn} E | Σ_{i=2}^n ( Ei − E_{i−1} ) tr( D1^{-1}(z) − D_{1i}^{-1}(z) ) |²
  = sup_{n; z∈Cn} Σ_{i=2}^n E | ( Ei − E_{i−1} )[ β_{1i}(z) ( riᵀ D_{1i}^{-2}(z) ri − n^{-1} tr D_{1i}^{-2}(z) ) + n^{-1} ( β_{1i}(z) − b̄n(z) ) tr D_{1i}^{-2}(z) ] |²
  ≤ K sup_{n; z∈Cn} { n E | β12(z) ( r2ᵀ D12^{-2}(z) r2 − n^{-1} tr D12^{-2}(z) ) |² + n E[ |γ12(z)|² |β12(z) b̄n(z)|² ‖D12^{-1}(z)‖⁴ ] },   (C.66)

where the second equality uses the fact that ( Ei − E_{i−1} )[ b̄n(z) tr D_{1i}^{-2}(z) ] = 0. For the first term, we have from (C.63) that
sup_{n; z∈Cn} n E | β12(z) ( r2ᵀ D12^{-2}(z) r2 − n^{-1} tr D12^{-2}(z) ) |²
  ≤ K sup_{n; z∈Cn} n E[ | r2ᵀ D12^{-2}(z) r2 − n^{-1} tr D12^{-2}(z) |² ( 1 + n^{1+α} 1_{{λ_min^{S̃(12)} < ηl or λ_max^{S̃n} > ηr}} )² ]
  ≤ K sup_{n; z∈Cn} { n E | r2ᵀ D12^{-2}(z) r2 − n^{-1} tr D12^{-2}(z) |² + n E[ ‖D12^{-1}(z)‖⁴ n^{2(1+α)} 1_{{λ_min^{S̃(12)} < ηl or λ_max^{S̃n} > ηr}} ] }   (C.67)
  = O(1),
where the last step follows from (A.10), (C.64), (C.63) and (C.62). For the second term, we have

sup_{n; z∈Cn} n E[ |γ12(z)|² |β12(z) b̄n(z)|² ‖D12^{-1}(z)‖⁴ ]
  ≤ K sup_{n; z∈Cn} n E[ |γ12(z)|² ( 1 + n^{1+α} 1_{{λ_min^{S̃(12)} < ηl or λ_max^{S̃n} > ηr}} )⁶ ]
  ≤ K sup_{n; z∈Cn} n E[ |γ12(z)|² + |γ12(z)|² · n^{8(1+α)} 1_{{λ_min^{S̃(12)} < ηl or λ_max^{S̃n} > ηr}} ]   (C.68)
  = O(1),

where the first inequality uses (C.64) and (C.63), and the last bound follows from (C.65), (C.63) and (C.62). Plugging (C.67) and (C.68) into (C.66) yields (C.61).
For J_{n,1}(z), combining (A.20) and (C.61) yields

sup_{z∈Cn} | J_{n,1}(z) | ≤ sup_{z∈Cn} n^{-1} |bn(z)|² E | tr D1^{-1}(z) − E tr D1^{-1}(z) |² = O( n^{-1} ).   (C.69)
As to J_{n,2}(z), we obtain from the Cauchy–Schwarz inequality, (A.20), (C.38) and (C.61) that

sup_{z∈Cn} | J_{n,2}(z) | ≤ 2 sup_{z∈Cn} |bn(z)|² √( E |ς1(z)|² · E | tr D1^{-1}(z) − E tr D1^{-1}(z) |² ) = O( n^{-1/2} ).   (C.70)
As to J_{n,3}(z), by (A.20) and (A.11),

sup_{z∈Cn} | J_{n,3}(z) | ≤ sup_{z∈Cn} n |bn(z)| · |E γ1(z)| ≤ K δp² sup_{z∈Cn} E ‖D1^{-1}(z)‖ = O( δp² ).   (C.71)
Finally, for J_{n,4}(z), we have

J_{n,4}(z) = n bn²(z) E[ β1(z) γ1²(z) r1ᵀ D1^{-1}(z) r1 ] − bn²(z) E[ β1(z) γ1²(z) ] · E tr D1^{-1}(z)
  = n bn²(z) E[ β1(z) γ1²(z) r1ᵀ D1^{-1}(z) r1 ] − bn²(z) E[ β1(z) γ1²(z) tr D1^{-1}(z) ] + bn²(z) Cov( β1(z) γ1²(z), tr D1^{-1}(z) )
  = n bn²(z) E[ β1(z) γ1²(z) ς1(z) ] + bn²(z) Cov( β1(z) γ1²(z), tr D1^{-1}(z) )
  =: J_{n,41}(z) + J_{n,42}(z).   (C.72)
By the Cauchy–Schwarz inequality,
\begin{align*}
\sup_{\{z\in\mathcal C_n\}}|J_{n,41}(z)|&\le\sup_{\{z\in\mathcal C_n\}} n\,|b_n(z)|^2\sqrt{\mathbb E|\varsigma_1(z)|^2}\sqrt{\mathbb E\bigl[|\beta_1(z)|^2|\gamma_1(z)|^4\bigr]}\\
&\le K\sup_{\{z\in\mathcal C_n\}}\sqrt n\,|b_n(z)|^2\sqrt{\mathbb E\|D_1^{-1}(z)\|^2}\,\sqrt{\mathbb E\Bigl[\bigl(1+n^{1+\alpha}\mathbf 1_{\{\lambda_{\min}^{\widetilde S(1)}<\eta_l\ \mathrm{or}\ \lambda_{\max}^{\widetilde S_n}>\eta_r\}}\bigr)^2|\gamma_1(z)|^4\Bigr]}\\
&\le K\sup_{\{z\in\mathcal C_n\}}\sqrt n\,\sqrt{\mathbb E|\gamma_1(z)|^4+\mathbb E\bigl[|\gamma_1(z)|^4\,n^{2(1+\alpha)}\mathbf 1_{\{\lambda_{\min}^{\widetilde S(1)}<\eta_l\ \mathrm{or}\ \lambda_{\max}^{\widetilde S_n}>\eta_r\}}\bigr]}
\tag{C.73}\\
&=o(1),
\end{align*}
where the second inequality follows from (C.38) and (A.19), the third inequality uses (A.20), and the last
step uses (A.21), (A.19) and (A.17). Similarly, we have from the Cauchy-Schwarz inequality, (C.46), (A.21)
and (C.61) that
\[
\sup_{\{z\in\mathcal C_n\}}|J_{n,42}(z)|\le\sup_{\{z\in\mathcal C_n\}}K\bigl(\mathbb E|\beta_1(z)|^4\bigr)^{1/4}\bigl(\mathbb E|\gamma_1(z)|^8\bigr)^{1/4}\Bigl(\mathbb E\bigl|\operatorname{tr}D_1^{-1}(z)-\mathbb E\operatorname{tr}D_1^{-1}(z)\bigr|^2\Bigr)^{1/2}=o\bigl(n^{-1/4}\bigr).
\tag{C.74}
\]
Plugging (C.73) and (C.74) into (C.72), we obtain
\[
\sup_{\{z\in\mathcal C_n\}}|J_{n,4}(z)|=o(1).
\tag{C.75}
\]
Combining (C.60), (C.69)–(C.71) and (C.75) yields
\[
J_n(z)=-nb_n^2(z)\mathbb E\varsigma_1^2(z)+\varepsilon_n(z).
\]
Moreover, by (A.8) and (A.20),
\[
-nb_n^2(z)\mathbb E\varsigma_1^2(z)=-n^{-1}b_n^2(z)\,\mathbb E\Bigl[\bigl(\mathbb E Z_{11}^4-3\bigr)\sum_{i=1}^p\bigl(D_1^{-1}(z)\bigr)_{ii}^2+2\operatorname{tr}D_1^{-2}(z)+\bigl(\mathbb E Z_{11}^4-1\bigr)p^{-1}\bigl(\operatorname{tr}D_1^{-1}(z)\bigr)^2\Bigr]+\varepsilon_n(z).
\]
Thus, by (A.20) and (C.61), we have
\[
\sup_{\{z\in\mathcal C_n\}}\Bigl|J_n(z)+b_n^2(z)\bigl(\mathbb E Z_{11}^4-3\bigr)\cdot\frac1n\sum_{i=1}^p\mathbb E\bigl[D_1^{-1}(z)\bigr]_{ii}^2-\frac{\mathbb E Z_{11}^4-1}{y_n}\bigl(b_n(z)\cdot n^{-1}\mathbb E\operatorname{tr}D_1^{-1}(z)\bigr)^2+2b_n^2(z)\cdot n^{-1}\mathbb E\operatorname{tr}D_1^{-2}(z)\Bigr|=o(1).
\tag{C.76}
\]
Plugging (C.59) and (C.76) into (C.58), and applying (C.55), we obtain
\[
nR_n(z)=-\frac{\bigl(\mathbb E Z_{11}^4-3\bigr)b_n^2(z)\cdot n^{-1}\sum_{i=1}^p\mathbb E\bigl[D_1^{-1}(z)\bigr]_{ii}^2}{\mathbb E m_n(z)+1}+\frac{\bigl(\mathbb E Z_{11}^4-1\bigr)b_n^2(z)\cdot\bigl(n^{-1}\mathbb E\operatorname{tr}D_1^{-1}(z)\bigr)^2}{y_n\bigl(\mathbb E m_n(z)+1\bigr)}-\frac{b_n(z)\mathbb E\beta_1(z)\cdot n^{-1}\mathbb E\operatorname{tr}D_1^{-2}(z)}{\mathbb E m_n(z)+1}+\frac{2b_n^2(z)\cdot n^{-1}\mathbb E\operatorname{tr}D_1^{-2}(z)}{\mathbb E m_n(z)+1}+\varepsilon_n(z).
\tag{C.77}
\]
The remaining task is to study (C.77).
We start with $\mathbb E\beta_1(z)$ and $b_n(z)$. Combining (C.42) and (C.54) yields
\[
\mathbb E\beta_1(z)=-z\mathbb E m_n(z)=-zm(z)+\varepsilon_n(z).
\tag{C.78}
\]
By the Cauchy–Schwarz inequality, (A.20), (C.46) and (A.21), we obtain
\[
\sup_{\{z\in\mathcal C_n\}}|b_n(z)-\mathbb E\beta_1(z)|=\sup_{\{z\in\mathcal C_n\}}|b_n(z)|\,\mathbb E|\beta_1(z)\gamma_1(z)|\le\sup_{\{z\in\mathcal C_n\}}K\sqrt{\mathbb E|\beta_1(z)|^2}\sqrt{\mathbb E|\gamma_1(z)|^2}=o(1).
\]
Therefore, by (C.78),
\[
b_n(z)=-z\mathbb E m_n(z)+\varepsilon_n(z)=-zm(z)+\varepsilon_n(z).
\tag{C.79}
\]
Next we relate $D_1^{-1}(z)$ to $D^{-1}(z)$. For any matrix $A=(a_{ij})\in\mathbb C^{p\times p}$, let $\operatorname{diag}(A)=\operatorname{diag}\bigl(a_{11},a_{22},\cdots,a_{pp}\bigr)$.
By (A.7), the Cauchy–Schwarz inequality, (C.46) and (A.20), we have
\begin{align*}
&\sup_{\{z\in\mathcal C_n\}}\Bigl|\sum_{i=1}^p\mathbb E\bigl[D_1^{-1}(z)\bigr]_{ii}^2-\sum_{i=1}^p\mathbb E\bigl[D^{-1}(z)\bigr]_{ii}^2\Bigr|\\
&=\sup_{\{z\in\mathcal C_n\}}\Bigl|\mathbb E\operatorname{tr}\Bigl[\operatorname{diag}\bigl(D_1^{-1}(z)+D^{-1}(z)\bigr)\cdot\bigl(D_1^{-1}(z)-D^{-1}(z)\bigr)\Bigr]\Bigr|\\
&\le\sup_{\{z\in\mathcal C_n\}}\mathbb E\bigl|\beta_1(z)\,\mathbf r_1^T D_1^{-1}(z)\cdot\operatorname{diag}\bigl(D_1^{-1}(z)+D^{-1}(z)\bigr)\cdot D_1^{-1}(z)\mathbf r_1\bigr|\\
&\le K\sup_{\{z\in\mathcal C_n\}}\sqrt{\mathbb E|\beta_1(z)|^2\cdot\mathbb E\bigl\|D_1^{-1}(z)\cdot\operatorname{diag}\bigl(D_1^{-1}(z)+D^{-1}(z)\bigr)\cdot D_1^{-1}(z)\bigr\|^2}\\
&\le K\sup_{\{z\in\mathcal C_n\}}\Bigl(\sqrt{\mathbb E\|D_1^{-1}(z)\|^6}+\bigl(\mathbb E\|D_1^{-1}(z)\|^8\bigr)^{1/4}\bigl(\mathbb E\|D^{-1}(z)\|^4\bigr)^{1/4}\Bigr)=O(1).
\end{align*}
Therefore,
\[
\frac1n\sum_{i=1}^p\mathbb E\bigl[D_1^{-1}(z)\bigr]_{ii}^2=\frac1n\sum_{i=1}^p\mathbb E\bigl[D^{-1}(z)\bigr]_{ii}^2+\varepsilon_n(z).
\]
Similarly, we can show that
\[
\frac1n\mathbb E\operatorname{tr}D_1^{-1}(z)=\frac1n\mathbb E\operatorname{tr}D^{-1}(z)+\varepsilon_n(z),
\tag{C.80}
\]
\[
\frac1n\mathbb E\operatorname{tr}D_1^{-2}(z)=\frac1n\mathbb E\operatorname{tr}D^{-2}(z)+\varepsilon_n(z).
\tag{C.81}
\]
Thus, we only need to focus on $D^{-1}(z)$.
Now we analyze $n^{-1}\sum_{i=1}^p\mathbb E\bigl[D^{-1}(z)\bigr]_{ii}^2$. By the Cauchy–Schwarz inequality, we get
\begin{align*}
&\sup_{\{z\in\mathcal C_n\}}\Bigl|\frac1n\sum_{i=1}^p\mathbb E\bigl[D^{-1}(z)\bigr]_{ii}^2-y_n\bigl(\mathbb E\bigl[D^{-1}(z)\bigr]_{11}\bigr)^2\Bigr|\\
&\le y_n\cdot\sup_{\{z\in\mathcal C_n\}}\mathbb E\Bigl[\bigl|\bigl[D^{-1}(z)\bigr]_{11}-\mathbb E\bigl[D^{-1}(z)\bigr]_{11}\bigr|\cdot\bigl|\bigl[D^{-1}(z)\bigr]_{11}+\mathbb E\bigl[D^{-1}(z)\bigr]_{11}\bigr|\Bigr]\\
&\le y_n\cdot\sup_{\{z\in\mathcal C_n\}}\sqrt{\mathbb E\bigl|\mathbf e_1^T\bigl(D^{-1}(z)-\mathbb E D^{-1}(z)\bigr)\mathbf e_1\bigr|^2}\cdot\sqrt{\mathbb E\bigl|\mathbf e_1^T\bigl(D^{-1}(z)+\mathbb E D^{-1}(z)\bigr)\mathbf e_1\bigr|^2}\\
&\le K\cdot\sup_{\{z\in\mathcal C_n\}}\sqrt{\mathbb E\Bigl|\sum_{i=1}^n\mathbf e_1^T(\mathbb E_i-\mathbb E_{i-1})\bigl(D^{-1}(z)-D_i^{-1}(z)\bigr)\mathbf e_1\Bigr|^2}\cdot\sqrt{\mathbb E\|D^{-1}(z)\|^2}\\
&\le K\sqrt n\cdot\sup_{\{z\in\mathcal C_n\}}\sqrt{\mathbb E\bigl|\beta_1(z)\,\mathbf r_1^T D_1^{-1}(z)\mathbf e_1\mathbf e_1^T D_1^{-1}(z)\mathbf r_1\bigr|^2}\\
&\le K\sqrt n\cdot\sup_{\{z\in\mathcal C_n\}}\sqrt{\mathbb E\Bigl[\bigl(1+n^{1+\alpha}\mathbf 1_{\{\lambda_{\min}^{\widetilde S(1)}<\eta_l\ \mathrm{or}\ \lambda_{\max}^{\widetilde S_n}>\eta_r\}}\bigr)^2\cdot\bigl|\mathbf r_1^T D_1^{-1}(z)\mathbf e_1\mathbf e_1^T D_1^{-1}(z)\mathbf r_1\bigr|^2\Bigr]}\\
&\le K\sqrt n\cdot\sup_{\{z\in\mathcal C_n\}}\Bigl(\mathbb E\bigl|\mathbf r_1^T D_1^{-1}(z)\mathbf e_1\mathbf e_1^T D_1^{-1}(z)\mathbf r_1\bigr|^2+\mathbb E\bigl[\|D_1^{-1}(z)\|^4\cdot n^{2(1+\alpha)}\mathbf 1_{\{\lambda_{\min}^{\widetilde S(1)}<\eta_l\ \mathrm{or}\ \lambda_{\max}^{\widetilde S_n}>\eta_r\}}\bigr]\Bigr)^{1/2}=o(1),
\end{align*}
where the fourth inequality uses (A.7) and (C.46), the fifth inequality follows from (A.19), and the last step uses (A.9), (A.20), (A.19) and (A.17). It follows from (C.54) that
\[
\sup_{\{z\in\mathcal C_n\}}\Bigl|\frac1n\sum_{i=1}^p\mathbb E\bigl[D^{-1}(z)\bigr]_{ii}^2-y_n m^2(z)\Bigr|=o(1).
\tag{C.82}
\]
As to $n^{-1}\mathbb E\operatorname{tr}D^{-1}(z)$, we have from (C.54) that
\[
\sup_{\{z\in\mathcal C_n\}}\bigl|n^{-1}\mathbb E\operatorname{tr}D^{-1}(z)-y_n m(z)\bigr|=o(1).
\tag{C.83}
\]
Finally, we turn to $n^{-1}\mathbb E\operatorname{tr}D^{-2}(z)$. Similarly to equation (4.13) in Bai and Silverstein (2004), rewrite $D^{-1}(z)$ as
\[
D^{-1}(z)=-\frac{I-b_n(z)A(z)-B(z)-C(z)}{z-b_n(z)},
\]
where
\[
A(z)=\sum_{i=1}^n\bigl(\mathbf r_i\mathbf r_i^T-n^{-1}I\bigr)D_i^{-1}(z),\qquad
B(z)=\sum_{i=1}^n\bigl(\beta_i(z)-b_n(z)\bigr)\mathbf r_i\mathbf r_i^T D_i^{-1}(z),
\]
\[
C(z)=\frac{b_n(z)}{n}\sum_{i=1}^n\bigl(D_i^{-1}(z)-D^{-1}(z)\bigr).
\]
Thus
\[
\operatorname{tr}D^{-2}(z)=-\frac{\operatorname{tr}D^{-1}(z)-b_n(z)\operatorname{tr}A(z)D^{-1}(z)-\operatorname{tr}B(z)D^{-1}(z)-\operatorname{tr}C(z)D^{-1}(z)}{z-b_n(z)}.
\tag{C.84}
\]
Similarly to (C.52) and (C.53), we can show that
\[
\sup_{\{z\in\mathcal C_n\}}\bigl|\mathbb E\operatorname{tr}B(z)D^{-1}(z)\bigr|=O\bigl(n^{1/2}\bigr)\quad\text{and}\quad\sup_{\{z\in\mathcal C_n\}}\bigl|\mathbb E\operatorname{tr}C(z)D^{-1}(z)\bigr|=O(1).
\]
Furthermore, by (C.55) and (C.79),
\[
\sup_{\{n;\,z\in\mathcal C_n\}}\Bigl|\frac{1}{z-b_n(z)}\Bigr|=O(1).
\tag{C.85}
\]
Plugging the above bounds into (C.84) yields
\[
\sup_{\{z\in\mathcal C_n\}}\Bigl|\frac1n\mathbb E\operatorname{tr}D^{-2}(z)+\frac{n^{-1}\mathbb E\operatorname{tr}D^{-1}(z)-n^{-1}b_n(z)\mathbb E\operatorname{tr}A(z)D^{-1}(z)}{z-b_n(z)}\Bigr|=o(1).
\tag{C.86}
\]
We next rewrite
\[
\operatorname{tr}A(z)D^{-1}(z)=A_{n,1}(z)+A_{n,2}(z)+A_{n,3}(z),
\tag{C.87}
\]
where
\[
A_{n,1}(z)=\operatorname{tr}\sum_{i=1}^n\mathbf r_i\mathbf r_i^T D_i^{-1}(z)\bigl(D^{-1}(z)-D_i^{-1}(z)\bigr),\qquad
A_{n,2}(z)=\operatorname{tr}\sum_{i=1}^n\bigl(\mathbf r_i\mathbf r_i^T D_i^{-2}(z)-n^{-1}D_i^{-2}(z)\bigr),
\]
\[
A_{n,3}(z)=\frac1n\operatorname{tr}\sum_{i=1}^n D_i^{-1}(z)\bigl(D_i^{-1}(z)-D^{-1}(z)\bigr).
\]
For $A_{n,2}(z)$, by (A.11) and (A.20), we have
\[
\sup_{\{z\in\mathcal C_n\}}\bigl|n^{-1}\mathbb E A_{n,2}(z)\bigr|=\sup_{\{z\in\mathcal C_n\}}\bigl|\mathbb E\tau_1(z)\bigr|\le K\sup_{\{z\in\mathcal C_n\}}n^{-1}\delta_p^2\,\mathbb E\|D_1^{-1}(z)\|^2=O\bigl(n^{-1}\delta_p^2\bigr).
\tag{C.88}
\]
Moreover, applying (A.7), the Cauchy–Schwarz inequality, (C.46) and (A.20) yields
\begin{align*}
\sup_{\{z\in\mathcal C_n\}}\bigl|n^{-1}\mathbb E A_{n,3}(z)\bigr|&\le\sup_{\{z\in\mathcal C_n\}}n^{-1}\cdot\mathbb E\bigl|\operatorname{tr}D_1^{-1}(z)\bigl(D^{-1}(z)-D_1^{-1}(z)\bigr)\bigr|\\
&=\sup_{\{z\in\mathcal C_n\}}n^{-1}\cdot\mathbb E\bigl|\beta_1(z)\,\mathbf r_1^T D_1^{-3}(z)\mathbf r_1\bigr|
\tag{C.89}\\
&\le K\sup_{\{z\in\mathcal C_n\}}n^{-1}\cdot\sqrt{\mathbb E|\beta_1(z)|^2}\sqrt{\mathbb E\|D_1^{-1}(z)\|^6}=O\bigl(n^{-1}\bigr).
\end{align*}
Now we consider $A_{n,1}(z)$. We have from (A.7) and (A.43) that
\begin{align*}
n^{-1}\mathbb E A_{n,1}(z)&=-\mathbb E\bigl[\beta_1(z)\,\mathbf r_1^T D_1^{-2}(z)\mathbf r_1\cdot\mathbf r_1^T D_1^{-1}(z)\mathbf r_1\bigr]\\
&=-b_n(z)\mathbb E\bigl[\mathbf r_1^T D_1^{-2}(z)\mathbf r_1\cdot\mathbf r_1^T D_1^{-1}(z)\mathbf r_1\bigr]+b_n(z)\mathbb E\bigl[\beta_1(z)\gamma_1(z)\cdot\mathbf r_1^T D_1^{-2}(z)\mathbf r_1\cdot\mathbf r_1^T D_1^{-1}(z)\mathbf r_1\bigr]\\
&=-n^{-2}b_n(z)\mathbb E\bigl[\operatorname{tr}D_1^{-2}(z)\cdot\operatorname{tr}D_1^{-1}(z)\bigr]-b_n(z)\mathbb E\bigl[\tau_1(z)\cdot\mathbf r_1^T D_1^{-1}(z)\mathbf r_1\bigr]-n^{-1}b_n(z)\mathbb E\bigl[\varsigma_1(z)\cdot\operatorname{tr}D_1^{-2}(z)\bigr]\\
&\qquad+b_n(z)\mathbb E\bigl[\beta_1(z)\gamma_1(z)\cdot\mathbf r_1^T D_1^{-2}(z)\mathbf r_1\cdot\mathbf r_1^T D_1^{-1}(z)\mathbf r_1\bigr]\\
&=:-n^{-2}b_n(z)\mathbb E\bigl[\operatorname{tr}D_1^{-2}(z)\cdot\operatorname{tr}D_1^{-1}(z)\bigr]+A_{n,11}(z)+A_{n,12}(z)+A_{n,13}(z).
\end{align*}
Applying (A.20), the Cauchy–Schwarz inequality and (C.38) yields
\[
\sup_{\{z\in\mathcal C_n\}}|A_{n,11}(z)|\le K\sup_{\{z\in\mathcal C_n\}}\sqrt{\mathbb E|\tau_1(z)|^2\cdot\mathbb E\|D_1^{-1}(z)\|^2}=O\bigl(n^{-1/2}\bigr).
\]
Similarly, we have $\sup_{\{z\in\mathcal C_n\}}|A_{n,12}(z)|=O(n^{-1/2})$. As to $A_{n,13}(z)$, by (A.20), the Cauchy–Schwarz inequality, (C.46) and (A.21), we get
\[
\sup_{\{z\in\mathcal C_n\}}|A_{n,13}(z)|\le K\sup_{\{z\in\mathcal C_n\}}\bigl(\mathbb E|\beta_1(z)|^2\bigr)^{1/2}\bigl(\mathbb E|\gamma_1(z)|^4\bigr)^{1/4}\bigl(\mathbb E\|D_1^{-1}(z)\|^{12}\bigr)^{1/4}=o\bigl(n^{-1/4}\bigr).
\]
Finally, we have from (A.20), the Cauchy–Schwarz inequality and (C.61) that
\begin{align*}
&\sup_{\{z\in\mathcal C_n\}}\Bigl|-n^{-2}b_n(z)\mathbb E\bigl[\operatorname{tr}D_1^{-2}(z)\cdot\operatorname{tr}D_1^{-1}(z)\bigr]+b_n(z)\mathbb E\bigl[n^{-1}\operatorname{tr}D_1^{-2}(z)\bigr]\cdot\mathbb E\bigl[n^{-1}\operatorname{tr}D_1^{-1}(z)\bigr]\Bigr|\\
&\le K\sup_{\{z\in\mathcal C_n\}}n^{-2}\cdot\bigl|\operatorname{Cov}\bigl(\operatorname{tr}D_1^{-2}(z),\operatorname{tr}D_1^{-1}(z)\bigr)\bigr|\\
&\le K\sup_{\{z\in\mathcal C_n\}}n^{-2}\cdot\sqrt{\mathbb E\bigl|\operatorname{tr}D_1^{-2}(z)-\mathbb E\operatorname{tr}D_1^{-2}(z)\bigr|^2\cdot\mathbb E\bigl|\operatorname{tr}D_1^{-1}(z)-\mathbb E\operatorname{tr}D_1^{-1}(z)\bigr|^2}\\
&\le K\sup_{\{z\in\mathcal C_n\}}n^{-1}\cdot\sqrt{\mathbb E\|D_1^{-1}(z)\|^4}=O\bigl(n^{-1}\bigr).
\end{align*}
Combining the estimates above with (C.80) and (C.81), we obtain
\[
\sup_{\{z\in\mathcal C_n\}}\Bigl|n^{-1}\mathbb E A_{n,1}(z)+b_n(z)\mathbb E\bigl[n^{-1}\operatorname{tr}D^{-2}(z)\bigr]\cdot\mathbb E\bigl[n^{-1}\operatorname{tr}D^{-1}(z)\bigr]\Bigr|=o(1).
\tag{C.90}
\]
Plugging (C.88), (C.89) and (C.90) into (C.87) yields
\[
n^{-1}\mathbb E\operatorname{tr}A(z)D^{-1}(z)=-b_n(z)\mathbb E\bigl[n^{-1}\operatorname{tr}D^{-2}(z)\bigr]\cdot\mathbb E\bigl[n^{-1}\operatorname{tr}D^{-1}(z)\bigr]+\varepsilon_n(z).
\tag{C.91}
\]
Recall that $zm(z)=-(1+m(z))^{-1}$. Plugging (C.91) into (C.86), and further using (C.79) and (C.83) yield
\begin{align*}
n^{-1}\mathbb E\operatorname{tr}D^{-2}(z)&=-\frac{\mathbb E\bigl[n^{-1}\operatorname{tr}D^{-1}(z)\bigr]}{z-b_n(z)}-\frac{b_n^2(z)\,\mathbb E\bigl[n^{-1}\operatorname{tr}D^{-2}(z)\bigr]\cdot\mathbb E\bigl[n^{-1}\operatorname{tr}D^{-1}(z)\bigr]}{z-b_n(z)}+\varepsilon_n(z)\\
&=\frac{y}{z^2\bigl(1+m(z)\bigr)^2}\Bigl(1+z^2m^2(z)\,\mathbb E\bigl[n^{-1}\operatorname{tr}D^{-2}(z)\bigr]\Bigr)+\varepsilon_n(z),
\end{align*}
where the second equality also uses (A.20) and (C.85). Solving for $n^{-1}\mathbb E\operatorname{tr}D^{-2}(z)$ and using (C.57) yield
\[
n^{-1}\mathbb E\operatorname{tr}D^{-2}(z)=\frac{y}{z^2\bigl(1+m(z)\bigr)^2}\Bigl(1-\frac{ym^2(z)}{\bigl(1+m(z)\bigr)^2}\Bigr)^{-1}+\varepsilon_n(z).
\tag{C.92}
\]
Combining (C.77)–(C.79) with (C.54), (C.82), (C.83) and (C.92), we obtain
\[
\lim_{n\to\infty}\sup_{\{z\in\mathcal C_n\}}\Bigl|nR_n(z)-2\Bigl(\frac{ym^2(z)}{\bigl(1+m(z)\bigr)^3}\Bigr)+\Bigl(\frac{ym^2(z)}{\bigl(1+m(z)\bigr)^3}\Bigr)\Bigl(1-\frac{ym^2(z)}{\bigl(1+m(z)\bigr)^2}\Bigr)^{-1}\Bigr|=0.
\]
Moreover, by the definition of $m(z)$ and (C.57), one can show that the limiting function of $nR_n(z)$ is continuous on $\mathcal C$. We thus complete the proof of Proposition A.11.
Appendix D

Two Lemmas

Lemma D.1. [Theorem 35.12 in Billingsley (1995)] Suppose that for each $n$, $\{Y_{ni}\}_{i=1}^{\ell_n}$ is a martingale difference sequence having second moments with respect to an increasing $\sigma$-field sequence $\{\mathcal F_{n,i}\}$. If the following assumptions hold:
(i) $\sum_{i=1}^{\ell_n}\mathbb E\bigl(Y_{ni}^2\mid\mathcal F_{n,i-1}\bigr)\xrightarrow{P}\sigma^2$; and
(ii) for each $\varepsilon>0$, $\sum_{i=1}^{\ell_n}\mathbb E\bigl(|Y_{ni}|^2 I_{\{|Y_{ni}|\ge\varepsilon\}}\bigr)\longrightarrow 0$, as $n\longrightarrow\infty$.
Then we have
\[
\sum_{i=1}^{\ell_n}Y_{ni}\xrightarrow{D}N\bigl(0,\sigma^2\bigr).
\]

Lemma D.2. [Theorem 12.3 of Billingsley (1968)] The set $\{X_n(t)\mid t\in[0,1],\,n\in\mathbb N\}$ is tight if it satisfies the following two conditions:
(i) the set $\{X_n(0)\mid n\in\mathbb N\}$ is tight;
(ii) there exist constants $\gamma\ge0$, $\alpha>1$ and a non-decreasing continuous function $h(\cdot)$ on $[0,1]$ such that
\[
P\bigl(|X_n(t_2)-X_n(t_1)|\ge\lambda\bigr)\le\frac{1}{\lambda^\gamma}\bigl|h(t_2)-h(t_1)\bigr|^\alpha,
\]
for all $t_1,t_2\in[0,1]$, $n$ and all $\lambda>0$.
REFERENCES
Aït-Sahalia, Y., Fan, J., and Li, Y. (2013), The leverage effect puzzle: Disentangling sources of bias at
high frequency, Journal of Financial Economics, 109, 224–249.
Anderson, T. W. (2003), An introduction to multivariate statistical analysis, Wiley Series in Probability
and Statistics, Wiley-Interscience [John Wiley & Sons], Hoboken, NJ, 3rd ed.
Bai, Z., Jiang, D., Yao, J., and Zheng, S. (2009), Corrections to LRT on large-dimensional covariance
matrix by RMT, The Annals of Statistics, 37, 3822–3840.
Bai, Z. and Silverstein, J. W. (1998), No eigenvalues outside the support of the limiting spectral distribution of large-dimensional sample covariance matrices, The Annals of Probability, 26, 316–345.
— (2004), CLT for linear spectral statistics of large-dimensional sample covariance matrices, The Annals of Probability, 32, 553–605.
— (2010), Spectral analysis of large dimensional random matrices, Springer Series in Statistics,
Springer, New York, 2nd ed.
Bai, Z. D. (1999), Methodologies in spectral analysis of large-dimensional random matrices, a review,
Statistica Sinica, 9, 611–677.
Bai, Z. D. and Yin, Y. Q. (1993), Limit of the smallest eigenvalue of a large-dimensional sample
covariance matrix, The Annals of Probability, 21, 1275–1294.
Bhattacharyya, S. and Bickel, P. J. (2014), Adaptive estimation in elliptical distributions with extensions
to high dimensions, Preprint.
Billingsley, P. (1968), Convergence of probability measures, John Wiley & Sons, Inc., New York-London-Sydney.
— (1995), Probability and measure, Wiley Series in Probability and Mathematical Statistics, John
Wiley & Sons, Inc., New York, 3rd ed., a Wiley-Interscience Publication.
Bingham, N. H. and Kiesel, R. (2002), Semi-parametric modelling in finance: theoretical foundations,
Quantitative Finance, 2, 241–250.
Birke, M. and Dette, H. (2005), A note on testing the covariance matrix for large dimension, Statistics
& Probability Letters, 74, 281–289.
Bollerslev, T. (1986), Generalized autoregressive conditional heteroskedasticity, Journal of Econometrics, 31, 307–327.
Campbell, J. Y. and Hentschel, L. (1992), No news is good news: An asymmetric model of changing
volatility in stock returns, Journal of Financial Economics, 31, 281–318.
Chamberlain, G. (1983), A characterization of the distributions that imply mean–variance utility functions, Journal of Economic Theory, 29, 185–201.
Chen, S. X., Zhang, L.-X., and Zhong, P.-S. (2010), Tests for high-dimensional covariance matrices,
Journal of the American Statistical Association, 105, 810–819.
Christoffersen, P. (2012), Elements of Financial Risk Management, Academic Press, 2nd ed.
El Karoui, N. (2009), Concentration of measure and spectra of random matrices: applications to correlation matrices, elliptical distributions and beyond, The Annals of Applied Probability, 19, 2362–
2405.
— (2010), High-dimensionality effects in the Markowitz problem and other quadratic programs with
linear constraints: risk underestimation, The Annals of Statistics, 38, 3487–3566.
— (2013), On the realized risk of high-dimensional Markowitz portfolios, SIAM Journal on Financial
Mathematics, 4, 737–783.
Engle, R. F. (1982), Autoregressive conditional heteroscedasticity with estimates of the variance of
United Kingdom inflation, Econometrica, 50, 987–1007.
Fama, E. F. (1965), The behavior of stock-market prices, The Journal of Business, 38, 34–105.
Fang, K. T., Kotz, S., and Ng, K. W. (1990), Symmetric multivariate and related distributions, vol. 36 of Monographs on Statistics and Applied Probability, Chapman and Hall, Ltd., London.
John, S. (1971), Some optimal multivariate tests, Biometrika, 58, 123–127.
Ledoit, O. and Wolf, M. (2002), Some hypothesis tests for the covariance matrix when the dimension
is large compared to the sample size, The Annals of Statistics, 30, 1081–1102.
Li, W. and Yao, J. (2017), On structure testing for component covariance matrices of a high-dimensional
mixture, arXiv preprint arXiv:1705.04784.
Mandelbrot, B. (1967), The variation of some other speculative prices, The Journal of Business, 40,
393–413.
McNeil, A. J., Frey, R., and Embrechts, P. (2005), Quantitative risk management: Concepts, techniques
and tools, Princeton university press.
Muirhead, R. J. (1982), Aspects of multivariate statistical theory, John Wiley & Sons, Inc., New York, Wiley Series in Probability and Mathematical Statistics.
Nagao, H. (1973), On some test criteria for covariance matrix, The Annals of Statistics, 1, 700–709.
Najim, J. and Yao, J. (2016), Gaussian fluctuations for linear spectral statistics of large random covariance matrices, The Annals of Applied Probability, 26, 1837–1887.
Owen, J. and Rabinovitch, R. (1983), On the class of elliptical distributions and their applications to
the theory of portfolio choice, The Journal of Finance, 38, 745–752.
Peiro, A. (1999), Skewness in financial returns, Journal of Banking & Finance, 23, 847–862.
Rosenthal, H. P. (1970), On the subspaces of L p (p > 2) spanned by sequences of independent random
variables, Israel Journal of Mathematics, 8, 273–303.
Schwert, G. W. (1989), Why does stock market volatility change over time? The Journal of Finance,
44, 1115–1153.
Silverstein, J. W. (1995), Strong convergence of the empirical distribution of eigenvalues of large-dimensional random matrices, Journal of Multivariate Analysis, 55, 331–339.
Silverstein, J. W. and Bai, Z. D. (1995), On the empirical distribution of eigenvalues of a class of
large-dimensional random matrices, Journal of Multivariate Analysis, 54, 175–192.
Singleton, J. C. and Wingender, J. (1986), Skewness persistence in common stock returns, Journal of
Financial and Quantitative Analysis, 21, 335–341.
Srivastava, M. S. (2005), Some tests concerning the covariance matrix in high dimensional data, Journal of the Japan Statistical Society (Nihon Tôkei Gakkai Kaihô), 35, 251–272.
Wang, C., Yang, J., Miao, B., and Cao, L. (2013), Identity tests for high dimensional data using RMT,
Journal of Multivariate Analysis, 118, 128–137.
Wang, Q. and Yao, J. (2013), On the sphericity test with large-dimensional observations, Electronic
Journal of Statistics, 7, 2164–2192.
Yin, Y. Q., Bai, Z. D., and Krishnaiah, P. R. (1988), On the limit of the largest eigenvalue of the
large-dimensional sample covariance matrix, Probability Theory and Related Fields, 78, 509–521.
Yucek, T. and Arslan, H. (2009), A survey of spectrum sensing algorithms for cognitive radio applications, IEEE communications surveys & tutorials, 11, 116–130.
Zheng, X. and Li, Y. (2011), On the estimation of integrated covariance matrices of high dimensional
diffusion processes, The Annals of Statistics, 39, 3121–3151.
On the implementation of construction functions for non-free concrete data types

Frédéric Blanqui¹, Thérèse Hardin², and Pierre Weis³

¹ INRIA & LORIA, BP 239, 54506 Villers-lès-Nancy Cedex, France
² UPMC, LIP6, 104, Av. du Pr. Kennedy, 75016 Paris, France
³ INRIA, Domaine de Voluceau, BP 105, 78153 Le Chesnay Cedex, France

arXiv:cs/0701031v1 [cs.LO] 5 Jan 2007
Abstract. Many algorithms use concrete data types with some additional invariants. The set of values satisfying the invariants is often a set of representatives for the equivalence classes of some equational theory. For instance, a sorted list is a particular representative wrt commutativity. Theories like associativity, neutral element, idempotence, etc. are also very common. Now, when one wants to combine various invariants, it may be difficult to find the suitable representatives and to efficiently implement the invariants. The preservation of invariants throughout the whole program is even more difficult and error prone. Classically, the programmer solves this problem using a combination of two techniques: the definition of appropriate construction functions for the representatives and the consistent usage of these functions ensured via compiler verifications. The common way of ensuring consistency is to use an abstract data type for the representatives; unfortunately, pattern matching on representatives is lost. A more appealing alternative is to define a concrete data type with private constructors so that both compiler verification and pattern matching on representatives are granted. In this paper, we detail the notion of private data type and study the existence of construction functions. We also describe a prototype, called Moca, that addresses the entire problem of defining concrete data types with invariants: it generates efficient construction functions for the combination of common invariants and builds representatives that belong to a concrete data type with private constructors.
1 Introduction

Many algorithms use data types with some additional invariants. Every function creating a new value from old ones must be defined so that the newly created value satisfies the invariants whenever the old ones do.

One way to easily maintain invariants is to use abstract data types (ADT): the implementation of an ADT is hidden and construction and observation functions are provided. A value of an ADT can only be obtained by recursively using the construction functions. Hence, an invariant can be ensured by using appropriate construction functions. Unfortunately, abstract data types preclude pattern matching, a very useful feature of modern programming languages [10, 11, 16, 15]. There have been various attempts to combine both features in some way.
ombine both features in some way.
In [23℄, P. Wadler proposed the me hanisms of
views.
A view on an ADT
α is given by providing a on rete data type (CDT) γ and two fun tions in :
α → γ and out : γ → α su h that in ◦ out = idγ and out ◦ in = idα . Then,
a fun tion on α an be dened by mat hing on γ (by impli itly using in) and
the values of type γ obtained by mat hing an be inje ted ba k into α (by
impli itly using out). However, by leaving the appli ations of in and out impli it,
we an easily get in onsisten ies whenever in and out are not inverses of ea h
other. Sin e it may be di ult to satisfy this
translations between
artesian and polar
ondition ( onsider for instan e the
oordinates), these views have never
been implemented. Following the suggestion of W. Burton and R. Cameron
to use the
in
fun tion only [3℄, some propositions have been made for various
programming languages but none has been implemented yet [4, 17℄.
In [3], W. Burton and R. Cameron proposed another very interesting idea which seems to have attracted very little attention. An ADT must provide construction and observation functions. When an ADT is implemented by a CDT, they propose to also export the constructors of the CDT but only for using them as patterns in pattern matching clauses. Hence, the constructors of the underlying CDT can be used for pattern matching but not for building values: only the construction functions can be used for that purpose. Therefore, one can both ensure some invariants and offer pattern matching. These types have been introduced in OCaml by the third author [24] under the name of concrete data type with private constructors, or private data type (PDT) for short.

Now, many invariants on concrete data types can be related to some equational theory. Take for instance the type of list with the constructors [] and ::. Given some elements v1..vn, the sorted list whose elements are v1..vn is a particular representative of the equivalence class of v1::..::vn::[] modulo the equation x::y::l = y::x::l. Requiring that, in addition, the list does not contain the same element twice gives a particular representative modulo the equation x::x::l = x::l. Consider now the type of join lists with the constructors empty, singleton and append, for which concatenation is of constant complexity. Sorting corresponds to associativity and commutativity of append. Requiring that no argument of append is empty corresponds to neutrality of empty wrt append. We have a structure of commutative monoid.
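The sorted, duplicate-free representative discussed above can be maintained by a single construction function for ::; here is a minimal OCaml sketch (the names insert and of_list are ours, not from the paper):

```ocaml
(* Construction function for :: maintaining the invariant "sorted,
   without duplicates", i.e. a representative modulo the equations
   x::y::l = y::x::l and x::x::l = x::l. *)
let rec insert x l =
  match l with
  | [] -> [x]
  | y :: ys ->
      if x = y then l                   (* x::x::l = x::l: drop duplicate *)
      else if x < y then x :: l         (* x::y::l = y::x::l: keep sorted *)
      else y :: insert x ys

(* Building lists only through [insert] yields representatives. *)
let of_list l = List.fold_left (fun acc x -> insert x acc) [] l
```

For instance, of_list [3; 1; 2; 3] and of_list [2; 1; 3] both build the representative [1; 2; 3].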
More generally, given some equational theory on a
on rete data type, one
may wonder whether there exists a representative for ea h equivalen e
if so, whether a representative of
that
t1 . . . t n
C(t1 . . . tn )
an be e iently
lass and,
omputed knowing
are themselves representatives.
In [21, 22], S. Thompson describes a mechanism introduced in the Miranda functional programming language for implementing such non-free concrete data types without precluding pattern matching. The idea is to provide conditional rewrite rules, called laws, that are implicitly applied as long as possible on every newly created value. This can also be achieved by using a PDT whose construction functions (primed constructors in [21]) apply as long as possible each of the laws. Then, S. Thompson studies how to prove the correctness of functions defined by pattern matching on such lawful types. However, few hints are given on how to check whether the laws indeed implement the invariants one has in mind. For this reason and because reasoning on lawful types is difficult, the law mechanism was removed from Miranda.
In this paper, we propose to specify the invariants by unoriented equations (instead of rules). We will call such a type a relational data type (RDT). Sections 2 and 3 introduce private and relational data types. Then, we study when an RDT can be implemented by a PDT, that is, when there exist construction functions computing some representative for each equivalence class. Section 4 provides some general existence theorem based on rewriting theory. But rewriting may be inefficient. Section 5 provides, for some common equational theories, construction functions more efficient than the ones based on rewriting. Section 6 presents Moca, an extension of OCaml with relational data types whose construction functions are automatically generated. Finally, Section 7 discusses some possible extensions.
2 Concrete data types with private constructors

We first recall the definition of a first-order term algebra. It will be useful for defining the values of concrete and private data types.

Definition 1 (First-order term algebra) A sorted term algebra definition is a triplet A = (S, C, Σ) where S is a non-empty set of sorts, C is a non-empty set of constructor symbols and Σ : C → S⁺ is a signature mapping a non-empty sequence of sorts to every constructor symbol. We write C : σ1 . . . σn σn+1 ∈ Σ to denote the fact that Σ(C) = σ1 . . . σn σn+1. Let X = (Xσ)σ∈S be a family of pairwise disjoint sets of variables. The sets Tσ(A, X) of terms of sort σ are inductively defined as follows:
– If x ∈ Xσ, then x ∈ Tσ(A, X).
– If C : σ1 . . . σn+1 ∈ Σ and ti ∈ Tσi(A, X), then C(t1, . . . , tn) ∈ Tσn+1(A, X).
Let Tσ(A) be the set of terms of sort σ containing no variable.

In the following, we assume given a set S0 of primitive types like int, string, . . . and a set C0 of primitive constants 0, 1, "foo", . . . Let Σ0 be the corresponding signature (Σ0(0) = int, . . . ).
In this paper, we call concrete data type (CDT) an inductive type à la ML defined by a set of constructors. More formally:

Definition 2 (Concrete data type) A concrete data type definition is a triplet Γ = (γ, C, Σ) where γ is a sort, C is a non-empty set of constructor symbols and Σ : C → (S0 ∪ {γ})⁺ is a signature such that, for all C ∈ C, Σ(C) = σ1..σn γ. The set Val(γ) of values of type γ is the set of terms Tγ(AΓ) where AΓ = (S0 ∪ {γ}, C0 ∪ C, Σ0 ∪ Σ).

This definition of CDTs corresponds to a small but very useful subset of all the possible types definable in ML-like programming languages. For the purpose of this paper, it is not necessary to use a more complex definition.
Example 1⁴ The following type exp is a CDT definition with two constant constructors of sort exp, a unary operator of sort exp exp and a binary operator of sort exp exp exp:

type exp = Zero | One | Opp of exp | Plus of exp * exp
exp *
exp
Now, a private data type denition is like a CDT denition together with
onstru tion fun tions as in abstra t data types. Constru tors
patterns as in
(ex ept in the denition of
use
annot
on rete data types but they
an be used as
be used for value
reation
onstru tion fun tions). For building values, one must
onstru tion fun tions as in abstra t data types. Formally:
Denition 3 (Private data type)
A
private data type denition
is a pair
Γ = (π, C, Σ) is a CDT denition and F is a family of onstru tion fun tions (fC )C∈C su h that, for all C : σ1 ..σn π ∈ Σ , fC : Tσ1 (AΓ ) ×
. . . × Tσn (AΓ ) → Tπ (AΓ ). Let V al(π) be the set of the values of type π , that
Π = (Γ, F )
where
is, the set of terms that one
an build by using the
f : Tπ (AΓ ) → Tπ (AΓ ) su h that,
ti ∈ Tσi (AΓ ), f (C(t1 ..tn )) = fC (f (t1 )..f (tn )), is
tion asso iated to F .
The fun tion
onstru tion fun tions only.
for all
C : σ1 ..σn π ∈ Σ
alled the
and
normalization fun -
This is quite immediate to see that:
Lemma 1.
V al(π) is the image of f .
PDTs have been implemented in OCaml by the third author [24]. Extending a programming language with PDTs is not very difficult: one only needs to modify the compiler to parse the PDT definitions and check that the conditions on the use of constructors are fulfilled.

Note that construction functions have no constraint in general: the full power of the underlying programming language is available to define them.

It should also be noted that, because the set of values of type π is a subset of the set of values of the underlying CDT γ, a function on π defined by pattern matching may be a total function even though it is not defined on all the possible cases of γ. Defining a function with patterns that match no value of type π does not harm since the corresponding code will never be run. It however reveals that the developer is not aware of the distinction between the values of the PDT and those of the underlying CDT, and thus can be considered as a programming error. To avoid this kind of errors, it is important that a PDT comes with a clear identification of its set of possible values. To go one step further, one could provide a tool for checking the completeness and usefulness of patterns that takes into account the invariants, when it is possible. We leave this for future work.

Example 2 Let us now start our running example with the type exp describing operations on arithmetic expressions.⁴

⁴ Examples are written with OCaml [10]; they can be readily translated in any programming language offering pattern-matching with textual priority, as Haskell, SML, etc.
type exp = private Zero | One | Opp of exp | Plus of exp * exp

This type exp is indeed a PDT built upon the CDT exp. Prompted by the keyword private, the OCaml compiler forbids the use of exp constructors (outside the module my_exp.ml containing the definition of exp) except in patterns. If Zero is supposed to be neutral by the writer of my_exp.ml, then he/she will provide construction functions as follows:

let rec zero = Zero and one = One and opp x = Opp x
and plus = function
  | (Zero,y) -> y
  | (y,Zero) -> y
  | (x,y) -> Plus(x,y)
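The private-type discipline can be demonstrated in a single self-contained file by constraining a structure with a signature; in the following sketch (the module name My_exp and the function eval are our choices, not from the paper), clients may still pattern match on the constructors while being forced to build values through zero, one, opp and plus:

```ocaml
module My_exp : sig
  type exp = private Zero | One | Opp of exp | Plus of exp * exp
  val zero : exp
  val one : exp
  val opp : exp -> exp
  val plus : exp * exp -> exp
end = struct
  type exp = Zero | One | Opp of exp | Plus of exp * exp
  let zero = Zero
  let one = One
  let opp x = Opp x
  let plus = function
    | (Zero, y) -> y
    | (y, Zero) -> y
    | (x, y) -> Plus (x, y)
end

open My_exp

(* Pattern matching on the private constructors remains available. *)
let rec eval = function
  | Zero -> 0
  | One -> 1
  | Opp x -> - (eval x)
  | Plus (x, y) -> eval x + eval y
```

Outside My_exp, the expression Plus (one, one) is rejected by the compiler, whereas plus (one, one) is accepted; moreover plus (zero, one) directly returns the representative One.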
3 Relational data types

We mentioned in the introduction that, often, the invariants upon concrete data types are such that the set of values satisfying them is indeed a set of representatives for the equivalence classes of some equational theory. We therefore propose to specify invariants by a set of unoriented equations and study to which extent such a specification can be realized with an abstract or private data type. In case of a private data type however, it is important to be able to describe the set of possible values.

Definition 4 (Relational data type) A relational data type (RDT) definition is a pair (Γ, E) where Γ = (π, C, Σ) is a CDT definition and E is a finite set of equations on Tπ(AΓ, X). Let =E be the smallest congruence relation containing E. Such an RDT is implementable by a PDT (Γ, F) if the family of construction functions F = (fC)C∈C is valid wrt E:
– (Correctness) For all C : σ1..σn π and vi ∈ Val(σi), fC(v1..vn) =E C(v1..vn).
– (Completeness) For all C : σ1..σn σ, vi ∈ Val(σi), D : τ1..τp σ ∈ Σ and wi ∈ Val(τi), fC(v1..vn) = fD(w1..wp) whenever C(v1..vn) =E D(w1..wp).
We are going to see that the existence of a valid family of construction functions is equivalent to the existence of a valid normalization function:

Definition 5 (Valid normalization function) A map f : Tπ(AΓ) → Tπ(AΓ) is a valid normalization function for an RDT (Γ, E) with Γ = (π, C, Σ) if:
– (Correctness) For all t ∈ Tπ(AΓ), f(t) =E t.
– (Completeness) For all t, u ∈ Tπ(AΓ), f(t) = f(u) whenever t =E u.

Note that a valid normalization function is idempotent (f ∘ f = f) and provides a decision procedure for =E (the boolean function λxy. f(x) = f(y)).

Theorem 6 The normalization function associated to a valid family is a valid normalization function.
Proof. Correctness. We proceed by induction on the size of t ∈ Tπ. We have C : σ1..σn π ∈ Σ and ti such that t = C(t1..tn). By definition, f(t) = fC(f(t1)..f(tn)). By induction hypothesis, f(ti) =E ti. Since the family is valid and f(t1)..f(tn) are values, fC(f(t1)..f(tn)) =E C(f(t1)..f(tn)). Thus, f(t) =E t.

Completeness. Let t, u ∈ Tπ such that t =E u. We have t = C(t1..tn) and u = D(u1..up). By definition, f(t) = fC(f(t1)..f(tn)) and f(u) = fD(f(u1)..f(up)). By correctness, f(ti) =E ti and f(uj) =E uj. Hence, C(f(t1)..f(tn)) =E D(f(u1)..f(up)). Since the family is valid and f(t1)..f(tn), f(u1)..f(up) are values, fC(f(t1)..f(tn)) = fD(f(u1)..f(up)). Thus, f(t) = f(u).

Conversely, given f : Tπ(AΓ) → Tπ(AΓ), one can easily define a family of construction functions that is valid whenever f is a valid normalization function.
Definition 7 (Associated family of constr. functions) Given a CDT Γ = (π, C, Σ) and a function f : Tπ(AΓ) → Tπ(AΓ), the family of construction functions associated to f is the family (fC)C∈C such that, for all C : σ1..σn π ∈ Σ and ti ∈ Tσi(AΓ), fC(t1, . . . , tn) = f(C(t1, . . . , tn)).

Theorem 8 The family of construction functions associated to a valid normalization function is valid.
Example 3 We can choose exp as the underlying CDT and E = {Plus x Zero = x} to define a RDT implementable by the PDT exp, with the valid family of construction functions zero, one, opp, plus.
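For this RDT, the normalization function associated to the family (Definition 3) can be written directly, and by Theorem 6 it decides =E. A sketch over the plain CDT follows (the names norm and eq_e are ours):

```ocaml
type exp = Zero | One | Opp of exp | Plus of exp * exp

(* Construction function for Plus, as in Example 2. *)
let plus = function
  | (Zero, y) -> y
  | (y, Zero) -> y
  | (x, y) -> Plus (x, y)

(* Normalization function associated to the family:
   f (C (t1..tn)) = fC (f t1 .. f tn). *)
let rec norm = function
  | Zero -> Zero
  | One -> One
  | Opp x -> Opp (norm x)
  | Plus (x, y) -> plus (norm x, norm y)

(* A valid normalization function yields a decision procedure for =E. *)
let eq_e t u = (norm t = norm u)
```

For instance, eq_e (Plus (One, Zero)) One holds, since both sides normalize to the representative One.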
4 On the existence of construction functions

In this section, we provide a general theorem for the existence of valid families of construction functions based on rewriting theory. We recall the notions of rewriting and completion. The interested reader may find more details in [8].
Standard rewriting. A rewrite rule is an ordered pair of terms (l, r) written l → r. A rule is left-linear if no variable occurs twice in its left hand side l. As usual, the set Pos(t) of positions in t is defined as a set of words on positive integers. Given p ∈ Pos(t), let t|p be the subterm of t at position p and t[u]p be the term t with t|p replaced by u.

Given a finite set R of rewrite rules, the rewriting relation is defined as follows: t →R u iff there are p ∈ Pos(t), l → r ∈ R and a substitution θ such that t|p = lθ and u = t[rθ]p. A term t is an R-normal form if there is no u such that t →R u. Let =R be the symmetric, reflexive and transitive closure of →R.

A reduction ordering ≻ is a well-founded ordering (there is no infinitely decreasing sequence t0 ≻ t1 ≻ . . .) stable by context (C(..t..) ≻ C(..u..) whenever t ≻ u) and substitution (tθ ≻ uθ whenever t ≻ u). If R is included in a reduction ordering, then →R is well-founded (terminating, strongly normalizing).
We say that →R is confluent if, for all terms t, u, v such that u ←∗R t →∗R v, there exists a term w such that u →∗R w ←∗R v. This means that the relation ←∗R →∗R is included in the relation →∗R ←∗R (composition of relations is written by juxtaposition).
If →R is confluent, then every term has at most one normal form. If →R is well-founded, then every term has at least one normal form. Therefore, if →R is confluent and terminating, then every term has a unique normal form.
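The computation of unique normal forms by a terminating and confluent rule set is easy to experiment with. The following sketch (in Python rather than OCaml, purely for conciseness; the term encoding and the two oriented rules Plus(Zero, x) → x and Plus(x, Zero) → x are illustrative assumptions, not part of the paper) normalizes a term by innermost rewriting:

```python
# Toy signature: terms are atoms ("Zero", variables) or tuples ("Plus", t, u).
# R contains the two oriented rules Plus(Zero, x) -> x and Plus(x, Zero) -> x.

def step(t):
    # perform one innermost rewrite step; return None if t is an R-normal form
    if isinstance(t, tuple):
        head, *args = t
        for i, a in enumerate(args):
            r = step(a)
            if r is not None:          # rewrite below the root first
                args[i] = r
                return (head, *args)
        if head == "Plus":             # then try the root
            x, y = args
            if x == "Zero":
                return y
            if y == "Zero":
                return x
    return None

def norm(t):
    # iterate step until a normal form is reached; this terminates because
    # both rules strictly decrease the size of the term
    while True:
        u = step(t)
        if u is None:
            return t
        t = u

print(norm(("Plus", "Zero", ("Plus", "x", "Zero"))))  # -> x
```

Since this particular R is both terminating and confluent, the result of norm does not depend on the order in which redexes are contracted.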
Standard completion. Given a finite set E of equations and a reduction ordering ≻, the standard Knuth-Bendix completion procedure [2] tries to find a finite set R of rewrite rules such that:
• R is included in ≻,
• →R is confluent,
• R and E have same theory: =E = =R.
Note that completion may fail or not terminate but, in case of successful termination, R-normalization provides a decision procedure for =E since t =E u iff the R-normal forms of t and u are syntactically equal.
However, since permutation theories like commutativity or associativity and commutativity together (written AC for short) are included in no reduction ordering, dealing with them requires to consider rewriting with pattern matching modulo these theories and completion modulo these theories. In this paper, we restrict our attention to AC.
Definition 9 (Associative-commutative equations)
Let Com be the set of commutative constructors, i.e. the set of constructors C such that E contains an equation of the form C(x, y) = C(y, x). Then, let EAC be the subset of E made of the commutativity and associativity equations for the commutative constructors, =AC be the smallest congruence relation containing EAC, and E¬AC = E \ EAC.
Rewriting modulo AC. Given a set R of rewrite rules, rewriting with pattern matching modulo AC is defined as follows: t →R,AC u iff there are p ∈ Pos(t), l → r ∈ R and a substitution θ such that t|p =AC lθ and u = t[rθ]p. A reduction ordering ≻ is AC-compatible if, for all terms t, t′, u, u′ such that t =AC t′ and u =AC u′, t ≻ u iff t′ ≻ u′. The relation →R,AC is confluent modulo AC if
(←∗R,AC =AC →∗R,AC) ⊆ (→∗R,AC =AC ←∗R,AC).
Completion modulo AC. Given a finite set E of equations and an AC-compatible reduction ordering ≻, completion modulo AC [18] tries to find a finite set R of rules such that:
• R is included in ≻,
• →R,AC is confluent modulo AC,
• E and R ∪ EAC have same theory: =E = =R∪EAC.
Definition 10
A theory E has a complete presentation if there is an AC-compatible reduction ordering for which the AC-completion of E¬AC successfully terminates.
Many interesting systems have a complete presentation: (commutative) monoids, (abelian) groups, rings, etc. See [13, 5] for a catalog. Moreover, there are automated tools implementing completion modulo AC. See for instance [6, 12].
R, AC -normal forms but, by
AC -equivalent and one an easily
A term may have distin t
AC ,
all normal forms are
normal form for
AC -equivalent
terms [13℄:
Denition 11 (AC -normal form)
C , C -left- ombs
stru tor
(resp.
onuen e modulo
dene a notion of
Given an asso iative and ommutative on-
C -right- ombs)
and their
leaves
are indu tively
dened as follows:
• If t is not headed by C, then t is both a C-left-comb and a C-right-comb. The leaves of t is the one-element list leaves(t) = [t].
• If t is not headed by C and u is a C-right-comb, then C(t, u) is a C-right-comb. The leaves of C(t, u) is the list t :: leaves(u).
• If t is not headed by C and u is a C-left-comb, then C(u, t) is a C-left-comb. The leaves of C(u, t) is the list leaves(u)@[t], where @ is the concatenation.
Let orient be a function associating a kind of combs (left or right) to every AC-constructor. Let ≤ be a total ordering on terms. Then, a term t is in AC-normal form wrt orient and ≤ if:
• Every subterm of t headed by an AC-constructor C is an orient(C)-comb whose leaves are in increasing order wrt ≤.
• For every subterm of t of the form C(u, v) with C commutative but non-associative, we have u ≤ v.

As it is well-known, one can put any term in AC-normal form:

Theorem 12
Whatever the function orient and the ordering ≤ are, every term t has an AC-normal form t↓AC wrt orient and ≤, and t =AC t↓AC.

Proof. Let A be the set of rules obtained by choosing an orientation for the associativity equations of EAC according to orient:
• If orient(C) is left, then take C(x, C(y, z)) → C(C(x, y), z).
• If orient(C) is right, then take C(C(x, y), z) → C(x, C(y, z)).
The relation →A is confluent and terminating, and puts every subterm headed by an AC-constructor into a comb form according to orient. Let comb be a function computing the A-normal form of any term and sort be a function permuting the leaves of combs and the arguments of commutative but non-associative constructors to put them in increasing order wrt ≤. Then, the function sort ◦ comb computes the AC-normal form of a term, and sort(comb(t)) =AC t.
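The sort ◦ comb construction of Theorem 12 can be sketched concretely. The fragment below is a Python illustration under simplifying assumptions (a single AC constructor Plus, leaves that are plain strings, right-combs only, and the strings' built-in order as the total ordering ≤); it is not the paper's OCaml code:

```python
def leaves(t):
    # collect the leaves of a Plus-spine, i.e. flatten nested Plus applications
    if isinstance(t, tuple) and t[0] == "Plus":
        return leaves(t[1]) + leaves(t[2])
    return [t]

def right_comb(ls):
    # rebuild Plus(l1, Plus(l2, ... Plus(l_{k-1}, l_k))) from a list of leaves
    t = ls[-1]
    for l in reversed(ls[:-1]):
        t = ("Plus", l, t)
    return t

def ac_normal(t):
    # sort ∘ comb for the single AC constructor Plus
    if isinstance(t, tuple) and t[0] == "Plus":
        return right_comb(sorted(leaves(t)))
    return t

print(ac_normal(("Plus", ("Plus", "c", "a"), "b")))
# -> ("Plus", "a", ("Plus", "b", "c"))
```

Flattening plays the role of comb and sorted plays the role of sort; two AC-equivalent terms map to the same comb, which is what makes the decision procedure below work.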
This naturally provides a decision procedure for AC-equivalence: the function λxy.sort(comb(x)) = sort(comb(y)). It follows that R,AC-normalization together with AC-normalization provides a valid normalization function, hence the existence of a valid family of construction functions:
Theorem 13
If E has a complete presentation, then there exists a valid family of construction functions.
Proof. Assume that E has a complete presentation R. We define the computation of normal forms as it is generally implemented in rewriting tools. Let step be a function making an R,AC-rewrite step if there is one, or failing if the term is in normal form. Let norm be the function applying step until a normal form is reached. Since R is a complete presentation of E, by definition of the completion procedure, sort ◦ comb ◦ norm is a valid normalization function. Thus, by Theorem 8, the associated family of construction functions is valid.
The construction functions described in the proof are not very efficient since they are based on rewriting with pattern matching modulo AC, which is NP-complete [1], and do not take advantage of the fact that, by definition of PDTs, they are only applied to terms already in normal form. We can therefore wonder whether they can be defined in a more efficient way for some common equational theories like the ones of Figure 1.
Fig. 1. Some common equations on binary constructors

Name           Abbrev        Definition                      Example
associativity  Assoc(C)      C(C(x, y), z) = C(x, C(y, z))   (x + y) + z = x + (y + z)
commutativity  Com(C)        C(x, y) = C(y, x)               x + y = y + x
neutrality     Neu(C, E)     C(x, E) = x                     x + 0 = x
inverse        Inv(C, I, E)  C(x, I(x)) = E                  x + (−x) = 0
idempotence    Idem(C)       C(x, x) = x                     x ∧ x = x
nilpotence     Nil(C, A)     C(x, x) = A                     x ⊕ x = ⊥ (exclusive or)
Rewriting provides also a way to check the validity of construction functions:

Theorem 14
If E has a complete presentation R and F = (fC)C∈C is a family such that, for all C : σ1..σn π ∈ Σ and terms vi ∈ Val(σi), fC(v1..vn) is an R,AC-normal form of C(v1..vn) in AC-normal form, then F is valid.
Proof.
Correctness. Let C : σ1..σn π ∈ Σ and vi ∈ Val(σi). Since fC(v1..vn) is an R,AC-normal form of C(v1..vn), we clearly have fC(v1..vn) =E C(v1..vn).
Completeness. Let C : σ1..σn π ∈ Σ, vi ∈ ValF(σi), D : τ1..τp π ∈ Σ, and wi ∈ ValF(τi) such that C(v1..vn) =E D(w1..wp). Since R is a complete presentation of E, norm(C(v1..vn)) =AC norm(D(w1..wp)). Thus, fC(v1..vn) = fD(w1..wp).
It follows that rewriting provides a natural way to explain what are the possible values of an RDT: values are AC-normal forms matching no left hand side of a rule of R.
5 Towards efficient construction functions

When there is no commutative symbol, construction functions can be easily implemented by simulating innermost rewriting as follows:
Definition 15 (Linearization)
Let VPos(t) be the set of positions p ∈ Pos(t) such that t|p is a variable x ∈ X. Let ρ : VPos(t) → X be an injective mapping and lin(t) be the term obtained by replacing in t every subterm at position p ∈ VPos(t) by ρ(p). Let now Eq(t) be the conjunction of true and of the equations ρ(p) = ρ(q) such that t|p = t|q and p, q ∈ VPos(t).
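As an illustration of Definition 15 (again a Python sketch for brevity, not the paper's code; the term encoding, the choice of ρ, and the convention that every non-tuple atom is a variable are assumptions of the sketch):

```python
def linearize(t):
    """Compute lin(t) and the equations making up Eq(t).
    Terms are nested tuples ("Head", arg1, ...); every non-tuple atom is
    treated as a variable. rho(p) is derived from the position p, which
    makes it injective."""
    occ = {}  # variable -> fresh names chosen for its occurrences
    def lin(t, pos):
        if isinstance(t, tuple):
            head, *args = t
            return (head, *[lin(a, pos + (i,)) for i, a in enumerate(args)])
        fresh = "v" + "_".join(map(str, pos))  # rho(p)
        occ.setdefault(t, []).append(fresh)
        return fresh
    lt = lin(t, ())
    # Eq(t): one equation per repeated occurrence of the same variable
    eqs = [(vs[0], v) for vs in occ.values() for v in vs[1:]]
    return lt, eqs

lt, eqs = linearize(("Plus", "x", "x"))
# lt = ("Plus", "v0", "v1") and eqs = [("v0", "v1")]
```

The non-linear pattern Plus(x, x) thus becomes the linear pattern Plus(v0, v1) guarded by the equation v0 = v1, which is exactly the shape required by ML-style pattern matching.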
Definition 16
Given a set R of rewrite rules, let F(R) be the family of construction functions (fC)C∈C defined as follows:
• For every rule l → r ∈ R with l = C(l1, . . . , ln), add to the definition of fC the clause lin(l1), . . . , lin(ln) when Eq(l) -> lin(r)^, where t^ is the term obtained by replacing in t every occurrence of a constructor C by a call to its construction function fC.
• Terminate the definition of fC by the default clause x -> C(x).
Theorem 17
Assume that EAC = ∅ and E has a complete presentation R. Then, F(R) is valid wrt E (whatever the order of the non-default clauses is).

We now consider the case of commutative symbols. We are going to describe a modular way of defining the construction functions by pursuing our running example, with the type exp. Assume that Plus is declared to be associative and commutative only. The construction functions can then be defined as follows:
let zero = Zero and one = One and opp x = Opp x
and plus = function
  | Plus(x,y), z -> plus (x, plus (y,z))
  | x, y -> insert_plus x y
and insert_plus x = function
  | Plus(y,_) as u when x <= y -> Plus(x,u)
  | Plus(y,t) -> Plus (y, insert_plus x t)
  | u when x > u -> Plus(u,x)
  | u -> Plus(x,u)
One can easily see that plus does the same job as the function sort ◦ comb used in Theorem 12 but in a slightly more efficient way since A-normalization and sorting are interleaved.
Assume moreover that Zero is neutral. The AC-completion of { Plus(Zero, x) = x } gives { Plus(Zero, x) → x }. Hence, if x and y are terms in normal form, then Plus(x, y) can be rewritten modulo AC only if x = Zero or y = Zero. Thus, the function plus needs to be extended with two new clauses only:
and plus = function
  | Zero, y -> y
  | x, Zero -> x
  | Plus(x,y), z -> plus (x, plus (y,z))
  | x, y -> insert_plus x y
Assume now that Plus is declared to have Opp as inverse. Then, the completion modulo AC of { Plus(Zero, x) = x, Plus(Opp(x), x) = Zero } gives the following well known rules for abelian groups [13]: { Plus(Zero, x) → x, Plus(Opp(x), x) → Zero, Plus(Plus(Opp(x), x), y) → y, Opp(Zero) → Zero, Opp(Opp(x)) → x, Opp(Plus(x, y)) → Plus(Opp(y), Opp(x)) }.
The rules for Opp are easily translated as follows:
and opp = function
  | Zero -> Zero
  | Opp(x) -> x
  | Plus(x,y) -> plus (opp y, opp x)
  | x -> Opp(x)
The third rule of abelian groups is called an extension of the second one since it is obtained by first adding the context Plus([], y) on both sides of this second rule, then normalizing the right hand side. Take now two terms x and y in normal form and assume that (x, y) matches none of the three clauses previously defining plus, that is, x and y are distinct from Zero, and x is not of the form Plus(x1, x2). To get the normal form of Plus(x, y), we need to check that x and the normal form of its opposite Opp(x) do not occur in y. The last clause defining plus needs therefore to be modified as follows:
and plus = function
  | Zero, y -> y
  | x, Zero -> x
  | Plus(x,y), z -> plus (x, plus (y,z))
  | x, y -> insert_opp_plus (opp x) y
and insert_opp_plus x y =
  try delete_plus x y
  with Not_found -> insert_plus (opp x) y
and delete_plus x = function
  | Plus(y,_) when x < y -> raise Not_found
  | Plus(y,t) when x = y -> t
  | Plus(y,t) -> Plus (y, delete_plus x t)
  | y when y = x -> Zero
  | _ -> raise Not_found
Forgetting about Zero and Opp, suppose now that Plus is declared associative, commutative and idempotent. The function plus is kept but the insert function is modified as follows:
and insert_plus x = function
  | Plus(y,_) as u when x = y -> u
  | Plus(y,_) as u when x < y -> Plus(x,u)
  | Plus(y,t) -> Plus (y,insert_plus x t)
  | u when x > u -> Plus(u,x)
  | u when x = u -> u
  | u -> Plus(x,u)
Nilpotence can be dealt with in a similar way.
In conclusion, for various combinations of the equations of Figure 1, we can define in a nice modular way construction functions that are more efficient than the ones based on rewriting modulo AC. We summarize this as follows:
Definition 18
A set of equations E is a theory of type:
(1) if EAC = ∅ and E has a complete presentation,
(2) if E is the union of {Assoc(C), Com(C)} with either {Neu(C, E), Inv(C, I, E)}, {Idem(C)}, {Neu(C, E), Idem(C)}, {Nil(C, A)} or {Neu(C, E), Nil(C, A)}.
Two theories are disjoint if they share no symbol.
Let us give schemes for construction functions for theories of type 2. A clause is generated only if the conditions Neu(C,E), Inv(C,I,E), etc. are satisfied. These conditions are not part of the generated code.
let f_C = function
  | E, x when Neu(C,E) -> x
  | x, E when Neu(C,E) -> x
  | C(x,y), z when Assoc(C) -> f_C(x,f_C(y,z))
  | x, y when Inv(C,I,E) -> insert_inv_C (f_I x) y
  | x, y -> insert_C x y
and f_I = function
  | E -> E
  | I(x) -> x
  | C(x,y) -> f_C(f_I y, f_I x)
  | x -> I x
and insert_inv_C x y =
  try delete_C x y
  with Not_found -> insert_C (f_I x) y
and delete_C x = function
  | C(y,_) when x < y -> raise Not_found
  | C(y,t) when x = y -> t
  | C(y,t) -> C(y, delete_C x t)
  | y when y = x -> E
  | _ -> raise Not_found
and insert_C x = function
  | C(y,_) as u when x = y & Idem(C) -> u
  | C(y,t) when x = y & Nil(C,A) -> f_C(A,t)
  | C(y,_) as u when x <= y & Com(C) -> C(x,u)
  | C(y,t) when Com(C) -> C(y, insert_C x t)
  | u when x > u & Com(C) -> C(u,x)
  | u when x = u & Idem(C) -> u
  | u when x = u & Nil(C,A) -> A
  | u -> C(x,u)
Theorem 19
Let E be the union of pairwise disjoint theories of type 1 or 2. Assume that, for all constructor C whose theory is of type k, fC is defined as in Definition 16 if k = 1, and as above if k = 2. Then, (fC)C∈C is valid wrt E.

Proof. Assume that E = ∪_{i=1}^{n} Ei where E1, . . . , En are pairwise disjoint theories of type 1 or 2. Whatever the type of Ei is, we saw that Ei has a complete presentation Ri. Therefore, since E1, . . . , En share no symbol, by definition of completion, the AC-completion of E successfully terminates with R = ∪_{i=1}^{n} Ri. Thus, →R,AC is terminating and AC-confluent. Since F = (fC)C∈C computes R,AC-normal forms in AC-normal forms, by Theorem 14, F is valid.
The construction functions of type 2 can be easily extended to deal with ring or lattice structures (distributivity and absorbance equations).
More general results can be expected by using or extending results on the modularity of completeness for the combination of rewrite systems. The completeness of hierarchical combinations of non-AC-rewrite systems is studied in [19]. Note however that the modularity of confluence for AC-rewrite systems has been formally established only recently in [14].
Note that the construction function definitions of type 1 or 2 provide the same results with call-by-value, call-by-name or lazy evaluation strategy.
The detailed study of the complexity of these definitions (compared to AC-rewriting) is left for future work.
6 The Moca system

We now describe the Moca prototype, a program generator that implements an extension of OCaml with RDTs. Moca parses a special .mlm file containing the RDT definition and produces a regular OCaml module (interface and implementation) which provides the construction functions for the RDT. Moca provides a set of keywords for specifying the equations described in Figure 1.
For instance, the RDT exp can be defined in Moca as follows:
type exp = private Zero | One | Opp of exp | Plus of exp * exp
begin associative commutative neutral(Zero) opposite(Opp) end
Moca also features user's arbitrary rules with the construction: rule pattern -> pattern. These rules add extra clauses in the definitions of construction functions generated by Moca: the LHS pattern is copied verbatim as the pattern of a clause which returns the RHS pattern considered as an expression where constructors are replaced by calls to the corresponding construction functions. Of course, in the presence of such arbitrary rules, we cannot guarantee the termination or completeness of the generated code. This construction is thus provided for expert users that can prove termination and completeness of the corresponding set of rules. That way, the programmer can describe complex RDTs, even those which cannot be described with the set of predefined equational invariants.
Moca also accepts polymorphic RDTs and RDTs mutually defined with record types (but equations between record fields are not yet available).
The equations of Figure 1 also support n-ary constructors, implemented as unary constructors of type t list -> t. In this case, Plus gets a single argument of type exp list. Normal forms are modified accordingly and use lists instead of combs. For instance, associative normal forms get flat lists of arguments: in a Plus(l) expression, no element of l is a Plus(l′) expression. The corresponding data structure is widely used in rewriting.
Finally, Moca offers an important additional feature: it can generate construction functions that provide maximally shared representatives. To get maximal sharing, just add the sharing option when compiling the .mlm file. In this case, the generated type is slightly modified, since every functional constructor gets an extra argument to keep the hash code of the term. Maximally shared representatives have a lot of good properties: not only data size is minimal and user's memoized functions can be light speed, but comparison between representatives is turned from a complex recursive term comparison to a pointer comparison, a single machine instruction. Moca heavily uses this property for the generation of construction functions: when dealing with non-linear equations, the maximal sharing property allows Moca to replace term equality by pointer equality.
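Maximal sharing of this kind (hash consing) is easy to sketch. The following Python fragment is an illustrative sketch of the idea, not Moca's generated OCaml code; all names are made up. Constructor applications are memoized in a table so that structurally equal terms become physically equal:

```python
class Node:
    __slots__ = ("head", "args")
    def __init__(self, head, args):
        self.head, self.args = head, args

_table = {}  # (head, children identities) -> unique shared node

def make(head, *args):
    # args are themselves maximally shared, so their identity (id) is a
    # sound hash key: structural equality of children reduces to physical
    # equality; nodes stay alive because the table references them
    key = (head, tuple(id(a) for a in args))
    node = _table.get(key)
    if node is None:
        node = Node(head, args)
        _table[key] = node
    return node

a = make("Plus", make("One"), make("Zero"))
b = make("Plus", make("One"), make("Zero"))
assert a is b  # term equality is now a pointer comparison
```

With this discipline, checking the guard of a non-linear clause such as Plus(x, x) costs a single pointer comparison instead of a recursive traversal, which is the property Moca exploits.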
7 Future work

We plan to integrate Moca to the development environment Focal [20]. Focal units contain declarations and definitions of functions, statements and proofs as first-class citizens. Their compilation produces both a file checkable by the theorem prover Coq [7] and an OCaml source code. Proofs are done either within Coq or via the automatic theorem prover Zenon [9], which issues a Coq file when it succeeds. Every Focal unit has a special field, giving the type of the data manipulated in this unit. Thus, it would be very interesting to do a full integration of private/relational data types in Focal, the proof of correctness of construction functions being done with Zenon or Coq and then recorded as a theorem to be used for further proofs. This should be completed by the integration of a tool on rewriting and equational theories able to complete equational presentations, to generate and prove the corresponding lemmas and to show some termination properties. Some experiments already done within Focal on coupling CiME [6] and Zenon give a serious hope of success.

Acknowledgments. The authors thank Claude Kirchner for his comments on a previous version of the paper.
References

1. D. Benanav, D. Kapur, and P. Narendran. Complexity of matching problems. J. of Symbolic Computation, 3(1-2):203–216, 1987.
2. P. Bendix and D. Knuth. Computational problems in abstract algebra, chapter Simple word problems in universal algebra. Pergamon Press, 1970.
3. F. Burton and R. Cameron. Pattern matching with abstract data types. J. of Functional Programming, 3(2):171–190, 1993.
4. W. Burton, E. Meijer, P. Sansom, S. Thompson, and P. Wadler. Views: An extension to Haskell pattern matching. http://www.haskell.org/extensions/views.html, 1996.
5. P. Le Chenadec. Canonical forms in finitely presented algebras. Research notes in theoretical computer science. Pitman, 1986.
6. E. Contejean, C. Marché, B. Monate, and X. Urbain. CiME version 2.02. LRI, CNRS UMR 8623, Université Paris-Sud, France, 2004. http://cime.lri.fr/.
7. Coq Development Team. The Coq Proof Assistant Reference Manual, Version 8.0. INRIA, France, 2006. http://coq.inria.fr/.
8. N. Dershowitz and J.-P. Jouannaud. Rewrite systems. In J. van Leeuwen, editor, Handbook of Theoretical Computer Science, volume B, chapter 6. North Holland, 1990.
9. D. Doligez. Zenon, version 0.4.1. http://focal.inria.fr/zenon/, 2006.
10. D. Doligez, J. Garrigue, X. Leroy, D. Rémy, and J. Vouillon. The Objective Caml system release 3.09, Documentation and user's manual. INRIA, France, 2005. http://caml.inria.fr/.
11. S. P. Jones (editor). Haskell 98 Language and Libraries, The revised report. Cambridge University Press, 2003.
12. J.-M. Gaillourdet, T. Hillenbrand, B. Löchner, and H. Spies. The new Waldmeister loop at work. In Proc. of CADE'03, LNCS 2741. http://www.waldmeister.org/.
13. J.-M. Hullot. Compilation de formes canoniques dans les théories équationnelles. PhD thesis, Université Paris 11, France, 1980.
14. J.-P. Jouannaud. Modular Church-Rosser modulo. In Proc. of RTA'06, LNCS 4098.
15. P.-E. Moreau, E. Balland, P. Brauner, R. Kopetz, and A. Reilles. Tom Manual version 2.3. INRIA & LORIA, Nancy, France, 2006. http://tom.loria.fr/.
16. P.-E. Moreau, C. Ringeissen, and M. Vittek. A pattern matching compiler for multiple target languages. In Proc. of CC'03, LNCS 2622.
17. C. Okasaki. Views for standard ML. In Proc. of ML'98.
18. G. Peterson and M. Stickel. Complete sets of reductions for some equational theories. J. of the ACM, 28(2):233–264, 1981.
19. K. Rao. Completeness of hierarchical combinations of term rewriting systems. In Proc. of FSTTCS'93, LNCS 761.
20. R. Rioboo, D. Doligez, T. Hardin, et al. FoCal Reference Manual, version 0.3.1. Université Paris 6, CNAM & INRIA, 2005. http://focal.inria.fr/.
21. S. Thompson. Laws in Miranda. In Proc. of LFP'86.
22. S. Thompson. Lawful functions and program verification in Miranda. Science of Computer Programming, 13(2-3):181–218, 1990.
23. P. Wadler. Views: a way for pattern matching to cohabit with data abstraction. In Proc. of POPL'87.
24. P. Weis. Private constructors in OCaml. http://alan.petitepomme.net/cwn/2003.07.01.html#5, 2003.
arXiv:1611.01038v1 [] 3 Nov 2016
CLASSIFICATION OF CURTIS-TITS AND PHAN
AMALGAMS WITH 3-SPHERICAL DIAGRAM
RIEUWERT J. BLOK, CORNELIU G. HOFFMAN, AND SERGEY V. SHPECTOROV
Abstract. We classify all non-collapsing Curtis-Tits and Phan amalgams
with 3-spherical diagram over all fields. In particular, we show that amalgams with spherical diagram are unique, a result required by the classification of finite simple groups. We give a simple condition on the amalgam
which is necessary and sufficient for it to arise from a group of Kac-Moody
type. This also yields a definition of a large class of groups of Kac-Moody
type in terms of a finite presentation.
1. Introduction
Local recognition results play an important role in various parts of mathematics. A key example comes from the monumental classification of finite simple
groups. Local analysis of the unknown finite simple group G yields a local
datum consisting of a small collection of subgroups fitting together in a particular way, called an amalgam. The Curtis-Tits theorem [7, 16, 17, 18, 19]
and the Phan (-type) theorems [25, 26, 27] describe amalgams appearing in
known groups of Lie type. Once the amalgam in G is identified as one of the
amalgams given by these theorems, G is known.
The present paper was partly motivated by a question posed by R. Solomon
and R. Lyons about this identification step, arising from their work on the
classification [9, 10, 11, 12, 13, 14]: Are Curtis-Tits and Phan type amalgams
uniquely determined by their subgroups? More precisely, is there a way of
fitting these subgroups together so that the amalgam gives rise to a different
group? In many cases it is known that, indeed, depending on how one fits the
subgroups together, either the resulting amalgam arises from these theorems,
or it does not occur in any non-trivial group. This is due to various results
of Bennett and Shpectorov [1], Gramlich [15], Dunlap [8], and R. Gramlich,
M. Horn, and W. Nickel [23]. However, all of these results use, in essence, a
crucial observation by Bennett and Shpectorov about tori in rank-3 groups of
Lie type, which fails to hold for small fields. In the present paper we replace
the condition on tori by a more effective condition on root subgroups, which
holds for all fields. This condition is obtained by a careful analysis of maximal
subgroups of groups of Lie type. Thus the identification step can now be made
for all possible fields. A useful consequence of the identification of the group
G, together with the Curtis-Tits and Phan type theorems, is that it yields a
simplified version of the Steinberg presentation for G.
Note that this solves the (generally much harder) existence problem: “how can we tell if a given amalgam appears in any non-trivial group?”
The unified approach in the present paper not only extends the various
results on Curtis-Tits and Phan amalgams occurring in groups of Lie type
to arbitrary fields, but in fact also applies to a much larger class of CurtisTits and Phan type amalgams, similar to those occurring in groups of KacMoody type. Here, both the uniqueness and the existence problem become
significantly more involved.
Groups of Kac-Moody type were introduced by J. Tits as automorphism
groups of certain geometric objects called twin-buildings [29]. In the same
paper J. Tits conjectured that these groups are uniquely determined by the
fact that the group acts on some twin-building, together with local geometric
data called Moufang foundations. As an example he sketched an approach
towards classifying such foundations in the case of simply-laced diagrams. This
conjecture was subsequently proved for Moufang foundations built from locally
split and locally finite rank-2 residues by B. Mühlherr in [24] and refined by
P. E. Caprace in [6]. All these results produce a classification of groups of
Kac-Moody type using local data in the form of an amalgam, together with
a global geometric assumption stipulating the existence of a twin-building on
which the group acts.
Ideally, one would use the generalizations of the Curtis-Tits and Phan type
theorems to describe the groups of Kac-Moody type in terms of a simplified
Steinberg type presentation. However, the geometric assumption is unsatisfactory for this purpose as it is impossible to verify directly from the presentation
itself.
In our unified approach we consider all possible amalgams whose local structure is any one of those appearing in the above problems. There is no condition
on the field. Then, we classify those amalgams that satisfy our condition on
root groups and show that in the spherical case they are unique. This explains
why groups of Lie type can uniquely be recognized by their amalgam. By
contrast, in the non-spherical case the amalgams are not necessarily unique
and, indeed, not all such amalgams give rise to groups of Kac-Moody type.
This is a consequence of the fact that we impose no global geometric condition. Nevertheless, we give a simple condition on the amalgam itself which
decides whether it comes from a group of Kac-Moody type or not. As a result, we obtain a purely group theoretic definition of a large class of groups of
Kac-Moody type just in terms of a finite presentation.
Finally, we note that an amalgam must satisfy the root subgroup condition
to occur in a non-trivial group. A subsequent study generalizing [3, 5] shows
that in fact all amalgams satisfying the root group condition do occur in nontrivial groups. Thus, in this much wider context the existence problem is also
solved.
We shall now give an overview of the results in the present paper. Recall
that a Dynkin diagram Γ is an oriented edge-labelled graph. We say that Γ is
connected if the underlying (unlabelled) graph is connected in the usual sense.
Moreover, we use topological notions such as spanning tree and homotopy rank
of Γ referring to the underlying graph.
For Phan amalgams we prove the following (for the precise statement see
Theorem 5.21).
Theorem A. Let q be any prime power and let Γ be a connected 3-spherical diagram with homotopy rank r. Then, there is a bijection between the elements of ∏_{s=1}^{r} Aut(F_{q^2}) and the type preserving isomorphism classes of Phan amalgams with diagram Γ over F_q.
For Curtis-Tits amalgams the situation is slightly more complicated (for the
precise statement see Theorem 4.24).
Theorem B. Let q be a prime power and let Γ be a connected 3-spherical diagram with homotopy rank r. Then there exists a set of positive integers {e_1, . . . , e_r} so that there is a bijection between the elements of ∏_{s=1}^{r} Aut(F_{q^{e_s}}) × Z/2Z and the type preserving isomorphism classes of Curtis-Tits amalgams with diagram Γ over F_q.
Corollary C. Let q be a prime power and let Γ be a 3-spherical tree. Then,
up to type preserving isomorphism, there is a unique Curtis-Tits and a unique
Phan amalgam over Fq with diagram Γ.
Note that Corollary C includes all spherical diagrams of rank ≥ 3. Several
special cases of the above results were proved elsewhere. Indeed, Theorem B
was proved for simply-laced diagrams and q ≥ 4 in [4]. Corollary C was proved
for Phan amalgams with Γ = An in [1], for general simply-laced tree diagram
in [8], and for Γ = Cn for q ≥ 3 in [20, 22].
The classification of Curtis-Tits amalgams will be done along the following
lines. Note that if (G, G_i, G_j) is a Curtis-Tits standard pair of type different from A_1 × A_1, and X is any Sylow p-subgroup in one of the vertex groups, say G_i, then generically it generates G together with G_j. In Subsection 4.1 we show that there is a unique pair (X_i^+, X_i^−) of Sylow p-subgroups in G_i whose members do not have this property. Moreover, each member commutes with a unique member in the other vertex group.
In Subsection 4.2 we show that in a non-collapsing Curtis-Tits amalgam G = {G_i, G_{i,j}, g_{i,j} | i, j ∈ I}, for each i there exists a pair (X_i^+, X_i^−) of Sylow subgroups in G_i such that for any edge {i, j}, g_{i,j}(X_i^+, X_i^−) is the pair for (G_{i,j}, G_i, G_j) as above. The collection X = {X_i^+, X_i^− : i ∈ I} is called a weak
system of fundamental root groups. Without loss of generality one can assume
that any amalgam with the same diagram has the exact same weak system
X . As a consequence all amalgams with the same diagram can be determined
up to isomorphism by studying the coefficient system associated to X , that
is, the graph of groups consisting of automorphisms of the vertex and edge
groups preserving X . In Subsection 4.3 we determine the coefficient system
associated to X . In Subsection 4.4, we pick a spanning tree Σ for Γ and use
precise information about the coefficient system to create a standard form
of a Curtis-Tits amalgam in which all vertex-edge inclusion maps are trivial
except for the edges in Σ. In particular this shows that if Γ is a tree, then the
amalgam is unique up to isomorphism. Finally in Subsection 4.5 we show that
for a suitable choice of Σ, the remaining non-trivial inclusion maps uniquely
determine the amalgam.
The classification of Phan amalgams in Section 5 follows the same pattern.
However in this case the role of the weak system of fundamental root groups
is replaced by a system of tori in the vertex groups, whose images in the edge
groups must form a torus there.
As shown here, the existence of a weak system of fundamental root groups is
a necessary condition for the existence of a non-trivial completion. A natural
question of course is whether it is also sufficient. In the spherical cases, the
amalgams are unique and the Curtis-Tits and Phan theorems identify universal completions of these amalgams. In [5] it is shown that any Curtis-Tits
amalgam with 3-spherical simply-laced diagram over a field with at least four
elements having property (D) has a non-trivial universal completion, which is
identified up to a rather precisely described central extension. In the present
paper we will not study completions of the Curtis-Tits and Phan amalgams
classified here, but merely note that similar arguments yield non-trivial completions for all amalgams. In particular, the conditions mentioned above are
indeed sufficient for the existence of these completions. In general we don’t
know of a direct way of giving conditions on an amalgam ensuring the existence
of a non-trivial completion.
Acknowledgements
This paper was written as part of the project KaMCAM funded by the European Research Agency through a Horizon 2020 Marie Skłodowska-Curie fellowship (proposal number 661035).
CURTIS-TITS AND PHAN AMALGAMS
2. Curtis-Tits and Phan amalgams and their diagrams
2.1. Diagrams
In order to fix some notation, we start with some definitions.
Definition 2.1. A Coxeter matrix over the set I = {1, 2, . . . , n} of finite cardinality n is a symmetric matrix M = (mij)i,j∈I with entries in N≥1 ∪ {∞} such that mii = 1 for all i ∈ I and mij ≥ 2 for all distinct i, j ∈ I.
A Coxeter diagram with Coxeter matrix M is an edge-labelled graph ∆ =
(I, E) with vertex set I = V ∆ and edge-set E = E ∆ without loops such that
for any distinct i, j ∈ I, there is an edge labelled mij between i and j whenever
mij > 2; if mij = 2, there is no such edge. Thus, M and ∆ determine each
other uniquely. For any subset J ⊆ I, we let ∆J denote the diagram induced
on vertex set J. We say that ∆ is connected if the underlying (unlabelled)
graph is connected in the usual sense. Moreover, we use topological notions
such as spanning tree and homotopy rank of ∆ referring to the underlying
graph.
A Coxeter system with Coxeter matrix M is a pair (W, S), where W is a group generated by the set S = {si : i ∈ I} subject to the relations (si sj)^{mij} = 1 for all i, j ∈ I. For each subset J ⊆ I, we let WJ = ⟨sj : j ∈ J⟩ ≤ W. We call M and (W, S) m-spherical if every subgroup WJ with |J| = m is finite (m ∈ N≥2). Call (W, S) spherical if it is n-spherical.
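Since the rank-2 parabolic W{i,j} is dihedral of order 2 mij, 2-sphericity amounts to finiteness of every off-diagonal entry. As a quick illustration (the helper names `diagram_edges` and `is_2_spherical` are ours, not from the paper), the passage from a Coxeter matrix to its diagram can be sketched in Python:

```python
from itertools import combinations
import math

def diagram_edges(M):
    """Labelled edges of the Coxeter diagram: {i, j} is an edge iff m_ij > 2."""
    n = len(M)
    return {(i, j): M[i][j] for i, j in combinations(range(n), 2) if M[i][j] > 2}

def is_2_spherical(M):
    """W_J with |J| = 2 is dihedral of order 2*m_ij, hence finite iff m_ij < oo."""
    n = len(M)
    return all(M[i][j] < math.inf for i, j in combinations(range(n), 2))

# Coxeter matrix of type A3 (vertices 0, 1, 2): m_01 = m_12 = 3, m_02 = 2
A3 = [[1, 3, 2],
      [3, 1, 3],
      [2, 3, 1]]
```

For the A3 matrix this produces the two labelled edges of the A3 diagram and confirms 2-sphericity.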
In order to describe Curtis-Tits and Phan amalgams more precisely, we also
introduce a Lie diagram.
Definition 2.2. Let ∆ = (I, E) be a Coxeter diagram. A Lie diagram of
Coxeter type ∆ is an untwisted or twisted Dynkin diagram Γ whose edge labels
lij do not specify the orientation. In this paper we shall only be concerned
with Lie diagrams of Coxeter type An , Bn , Dn , E6 , E7 , E8 , and F4 . For these,
we have the following correspondence:

    ∆                      Γ
    An                     An
    Bn                     Bn, Cn, 2Dn+1, 2A2n−1, 2A2n
    Dn, En (n = 6, 7, 8)   Dn, En
    F4                     F4, F4∗, 2E6, 2E6∗
Here F4 and 2E6 (resp. F4∗ and 2E6∗ ) denote the diagrams where node 1 corresponds to the long (resp. short) root (Bourbaki labeling).
Let us introduce some more notation. We shall denote the Frobenius automorphism of order 2 of Fq2 by σ. Below we will consider sesquilinear forms h on an Fq2-vector space V. By convention, all these forms are linear in the first coordinate, that is, h(λu, µv) = λ h(u, v) µ^σ for u, v ∈ V and λ, µ ∈ Fq2. Recall that h is hermitian if h(v, u) = h(u, v)^σ for all u, v ∈ V.
BLOK, HOFFMAN, AND SHPECTOROV
2.2. Standard pairs of Curtis-Tits type
Let Γ be a Lie diagram of type A2 , B2 /C2 , 2D3 /2A3 and q = pe for some prime
p ∈ Z and e ∈ Z≥1 . Then a Curtis-Tits standard pair of type Γ(q) is a triple
(G, G1 , G2 ) of groups such that one of the following occurs:
(Γ = A1 × A1). Now G = G1 × G2 and G1 ≅ G2 ≅ SL2(q).
(Γ = A2). Now G = SL3(q) = SL(V) for some Fq-vector space V with basis {e1, e2, e3}, and G1 (resp. G2) is the stabilizer of the subspace ⟨e1, e2⟩ (resp. ⟨e2, e3⟩) and the vector e3 (resp. e1). Explicitly we have

G1 = { [[a, b, 0], [c, d, 0], [0, 0, 1]] : a, b, c, d ∈ Fq with ad − bc = 1 },

G2 = { [[1, 0, 0], [0, a, b], [0, c, d]] : a, b, c, d ∈ Fq with ad − bc = 1 }.
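As a sanity check on the A2 standard pair (illustrative only; q = 5 and the helper names are our choice, not from the paper), one can verify in Python that the embedded matrices have determinant 1, that g1(A) fixes e3 and stabilizes ⟨e1, e2⟩, and that g2(A) fixes e1:

```python
q = 5  # an example prime; the standard pair lives inside SL3(q)

def g1(a, b, c, d):
    """Embed SL2(q) as the subgroup stabilizing <e1, e2> and fixing e3."""
    return [[a % q, b % q, 0], [c % q, d % q, 0], [0, 0, 1]]

def g2(a, b, c, d):
    """Embed SL2(q) as the subgroup stabilizing <e2, e3> and fixing e1."""
    return [[1, 0, 0], [0, a % q, b % q], [0, c % q, d % q]]

def det3(X):
    """Determinant of a 3x3 matrix, reduced mod q."""
    return (X[0][0]*(X[1][1]*X[2][2] - X[1][2]*X[2][1])
            - X[0][1]*(X[1][0]*X[2][2] - X[1][2]*X[2][0])
            + X[0][2]*(X[1][0]*X[2][1] - X[1][1]*X[2][0])) % q
```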
(Γ = C2). Now G = Sp4(q) = Sp(V, β), where V is an Fq-vector space with basis {e1, e2, e3, e4} and β is the symplectic form with Gram matrix

M = [[0, 0, 1, 0], [0, 0, 0, 1], [−1, 0, 0, 0], [0, −1, 0, 0]].

G1 ≅ SL2(q) is the derived subgroup of StabG(⟨e1, e2⟩) ∩ StabG(⟨e3, e4⟩) and G2 = StabG(e1) ∩ StabG(e3) ≅ Sp2(q) ≅ SL2(q). Explicitly we have
G1 = { [[a, b, 0, 0], [c, d, 0, 0], [0, 0, d, −c], [0, 0, −b, a]] : a, b, c, d ∈ Fq with ad − bc = 1 },

G2 = { [[1, 0, 0, 0], [0, a, 0, b], [0, 0, 1, 0], [0, c, 0, d]] : a, b, c, d ∈ Fq with ad − bc = 1 }.
Remark 2.3. We are only interested in Curtis-Tits standard pairs of type B2 for q odd. However, in that case Spin5(q) ≅ Sp4(q) is the unique central extension of the simple group Ω5(q) ≅ PSp4(q). Therefore, we can also describe the Curtis-Tits standard pair for B2 as a Curtis-Tits standard pair for C2 with G1 and G2 interchanged.
(Γ = 2A3). Now G = SU4(q) = SU(V) for some Fq2-vector space V with basis {e1, e2, e3, e4} equipped with a non-degenerate hermitian form h for which this basis is hyperbolic with Gram matrix

M = [[0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0]].

Now G1 is the derived subgroup of the simultaneous stabilizer of the subspaces ⟨e1, e2⟩ and ⟨e3, e4⟩ and G2 is the stabilizer of the vectors e1 and e3 and the hyperbolic line ⟨e2, e4⟩. We have G2 ≅ SU2(q) ≅ SL2(q) and G1 ≅ SL2(q2).
Explicitly we have

G1 = { [[a, b, 0, 0], [c, d, 0, 0], [0, 0, d^σ, −c^σ], [0, 0, −b^σ, a^σ]] : a, b, c, d ∈ Fq2 with ad − bc = 1 },

G2 = { [[1, 0, 0, 0], [0, a, 0, bη], [0, 0, 1, 0], [0, cη^{−1}, 0, d]] : a, b, c, d ∈ Fq with ad − bc = 1 },

where η ∈ Fq2 has η + η^q = 0.
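Writing h via the hyperbolic Gram matrix M, a matrix X preserves h iff ᵗX M X^σ = M (with σ applied entry-wise). A small sketch over F9 = F3[i] (our own model of Fq2 with q = 3 and η = i, which indeed has trace η + η^3 = 0; helper names are ours) verifies this for a sample element of G2:

```python
# F9 = F3[i]/(i^2 + 1); an element a + b*i is stored as the pair (a, b)
P = 3

def fadd(x, y):
    return ((x[0] + y[0]) % P, (x[1] + y[1]) % P)

def fmul(x, y):
    a, b = x
    c, d = y
    return ((a*c - b*d) % P, (a*d + b*c) % P)

def sigma(x):
    """x -> x^3 is conjugation a + bi -> a - bi."""
    return (x[0], (-x[1]) % P)

ZERO, ONE = (0, 0), (1, 0)
ETA = (0, 1)        # eta = i satisfies eta + eta^3 = 0 (trace zero)
ETA_INV = (0, 2)    # i * (-i) = 1 in F9

def mmul(X, Y):
    n = len(X)
    out = []
    for i in range(n):
        row = []
        for j in range(n):
            s = ZERO
            for k in range(n):
                s = fadd(s, fmul(X[i][k], Y[k][j]))
            row.append(s)
        out.append(row)
    return out

def trans(X):
    n = len(X)
    return [[X[j][i] for j in range(n)] for i in range(n)]

def conj_entries(X):
    return [[sigma(e) for e in row] for row in X]

# Hyperbolic Gram matrix pairing e1 <-> e3 and e2 <-> e4
M = [[ZERO, ZERO, ONE, ZERO],
     [ZERO, ZERO, ZERO, ONE],
     [ONE, ZERO, ZERO, ZERO],
     [ZERO, ONE, ZERO, ZERO]]

def g2(a, b, c, d):
    """Element of G2 with a, b, c, d in the prime field F3, ad - bc = 1."""
    f = lambda n: (n % P, 0)
    return [[ONE, ZERO, ZERO, ZERO],
            [ZERO, f(a), ZERO, fmul(f(b), ETA)],
            [ZERO, ZERO, ONE, ZERO],
            [ZERO, fmul(f(c), ETA_INV), ZERO, f(d)]]
```

The unitarity computation ᵗB J B̄ = (ad − bc) J on the ⟨e2, e4⟩ block, with J = [[0, 1], [1, 0]], is exactly what the assertion below checks.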
Remark 2.4. For completeness we also define a standard Curtis-Tits pair (H, H1, H2) of type 2D3(q). Take H = Ω6−(q) = Ω(V, Q), where V is an Fq-vector space with basis {e1, e2, e3, e4, e5, e6} and Q(∑_{i=1}^{6} xi ei) = x1 x3 + x2 x4 + f(x5, x6), for some quadratic polynomial f(x, 1) that is irreducible over Fq. Here, H1 ≅ SL2(q) is the derived subgroup of StabH(⟨e1, e2⟩) ∩ StabH(⟨e3, e4⟩) if SL2(q) is perfect, that is, q > 2, and it is the subgroup StabH(⟨e1, e2⟩) ∩ StabH(⟨e3, e4⟩) ∩ StabH(v) for some non-singular vector v ∈ ⟨e5, e6⟩ if q = 2, and H2 = StabH(e1) ∩ StabH(e3) ≅ Ω4−(q) ≅ PSL2(q2).
However, there exists a unique standard Curtis-Tits pair (G, G1, G2) of type 2A3(q) and a surjective homomorphism π : G → H with ker π = {±1} = Z(G1) ≤ Z(G). It induces π : G1 ≅ SL2(q2) → Ω4−(q) ≅ PSL2(q2) = H2 and π : G2 = SU2(q) → SL2(q) = H1. Because of this map, any amalgam involving a standard Curtis-Tits pair of type 2D3(q) is the image of an amalgam involving a standard Curtis-Tits pair of type 2A3(q) (see Subsection 2.4).
Definition 2.5. For Curtis-Tits amalgams, the standard identification map will be the isomorphism g : SL2(q^e) → Gi sending

[[a, b], [c, d]]

to the corresponding matrix of Gi as described above. Here e = 1 unless Γ(q) = 2A3(q) and i = 1 or Γ(q) = 2D3 and i = 2, in which case e = 2.
2.3. Standard pairs of Phan type
Let Γ be as above. Then a Phan standard pair of type Γ(q) is a triple
(G, G1 , G2 ) such that one of the following occurs:
(Γ = A1 × A1). Now G = G1 × G2 and G1 ≅ G2 ≅ SU2(q) = SU(V) for some Fq2-vector space V with basis {e1, e2} equipped with a non-degenerate hermitian form h for which this basis is orthonormal.
(Γ = A2 ). Now G = SU3 (q) = SU(V ) for some Fq2 -vector space V with
basis {e1 , e2 , e3 } equipped with a non-degenerate hermitian form h for which
this basis is orthonormal. As in the Curtis-Tits case, G1 (resp. G2) is the stabilizer of the subspace ⟨e1, e2⟩ (resp. ⟨e2, e3⟩) and the vector e3 (resp. e1). We have G1 ≅ G2 ≅ SU2(q).
Explicitly we have

G1 = { [[a, b, 0], [−b^σ, a^σ, 0], [0, 0, 1]] : a, b ∈ Fq2 with a a^σ + b b^σ = 1 },

G2 = { [[1, 0, 0], [0, a, b], [0, −b^σ, a^σ]] : a, b ∈ Fq2 with a a^σ + b b^σ = 1 }.
(Γ = C2). Let V be an Fq2-vector space with basis {e1, e2, e3, e4} and let β be the symplectic form with Gram matrix

M = [[0, 0, 1, 0], [0, 0, 0, 1], [−1, 0, 0, 0], [0, −1, 0, 0]].

Moreover, let h be the (non-degenerate) hermitian form for which this basis is orthonormal.
Now G = Sp(V, β) ∩ SU(V, h) ≅ Sp4(q), G1 ≅ SU2(q) is the derived subgroup of StabG(⟨e1, e2⟩) ∩ StabG(⟨e3, e4⟩) and G2 = StabG(e1) ∩ StabG(e3) ≅ Sp2(q) ≅ SU2(q). Note that Z(G) = Z(G1) and Z(G) ∩ G2 = {1}.
Explicitly we have

G1 = { [[a, b, 0, 0], [−b^σ, a^σ, 0, 0], [0, 0, a^σ, b^σ], [0, 0, −b, a]] : a, b ∈ Fq2 with a a^σ + b b^σ = 1 },

G2 = { [[1, 0, 0, 0], [0, a, 0, b], [0, 0, 1, 0], [0, −b^σ, 0, a^σ]] : a, b ∈ Fq2 with a a^σ + b b^σ = 1 }.
Definition 2.6. For Phan amalgams, the standard identification map will be the isomorphism g : SU2(q) → Gi sending

[[a, b], [−b^σ, a^σ]]

to the corresponding matrix of Gi as described above.
2.4. Amalgams of Curtis-Tits and Phan type
Definition 2.7. An amalgam over a poset (P, ≺) is a collection A = {Ax | x ∈ P} of groups, together with a collection a• = {a_x^y | x ≺ y, x, y ∈ P} of monomorphisms a_x^y : Ax ↪ Ay, called inclusion maps, such that whenever x ≺ y ≺ z, we have a_x^z = a_y^z ◦ a_x^y; we shall write Āx = a_x^y(Ax) ≤ Ay. A completion of A is a group A together with a collection α• = {αx | x ∈ P} of homomorphisms αx : Ax → A, whose images (often denoted Āx = αx(Ax)) generate A, such that for any x, y ∈ P with x ≺ y we have αy ◦ a_x^y = αx. The amalgam A is non-collapsing if it has a non-trivial completion. As a convention, for any subgroup H ≤ Ax, let H̄ = α(H) ≤ A.
A completion (Ã, α̃•) is called universal if for any completion (A, α•) there is a unique surjective group homomorphism π : Ã → A such that α• = π ◦ α̃•. A universal completion always exists.
Definition 2.8. Let Γ = (I, E) be a Lie diagram. A Curtis-Tits (resp. Phan) amalgam with Lie diagram Γ over Fq is an amalgam G = {Gi, Gi,j, gi,j | i, j ∈ I} over P = {J | ∅ ≠ J ⊆ I with |J| ≤ 2}, ordered by inclusion, such that for every i, j ∈ I, (Gi,j, Gi, Gj) is a Curtis-Tits / Phan standard pair of type Γi,j(q^e), for some e ≥ 1, as defined in Subsections 2.2 and 2.3. Moreover, e = 1 is realized for some i, j ∈ I. Note that in fact e is always a power of 2; this follows immediately from connectedness of the diagram and the definition of the standard pairs of type A2, C2, and 2A3. For any subset K ⊆ I, we let GK = {Gi, Gi,j, gi,j | i, j ∈ K}.
Remark 2.9. Suppose that one considers an amalgam
H = {Hi , Hi,j , hi,j | i, j ∈ I}
over Fq with diagram Γ, such that for any i, j ∈ I, the triple (Hi,j , Hi , Hj )
is not a standard pair, but there is a standard pair (Gi,j , Gi , Gj ) such that
the respective H’s are central quotients of the corresponding G’s. Then, H is
the quotient of a unique Curtis-Tits or Phan amalgam over Fq with diagram
Γ. Hence for classification purposes it suffices to consider Curtis-Tits or Phan
amalgams. In particular, in view of Remark 2.4, we can restrict ourselves
to Curtis-Tits amalgams in which the only rank-2 subdiagrams are of type
A1 × A1 , A2 , C2 , and 2A3 .
Definition 2.10. Suppose G = {Gi, Gi,j, gi,j | i, j ∈ I} and G+ = {G+_i, G+_{i,j}, g+_{i,j} | i, j ∈ I} are two Curtis-Tits (or Phan) amalgams over Fq with the same diagram Γ. Then a type preserving isomorphism φ : G → G+ is a collection φ = {φi, φi,j : i, j ∈ I} of group isomorphisms such that, for all i, j ∈ I, we have

φi,j ◦ gi,j = g+_{i,j} ◦ φi and φi,j ◦ gj,i = g+_{j,i} ◦ φj.

Unless indicated otherwise, this is the kind of isomorphism we shall consider, omitting the term "type preserving". It is also possible to consider type permuting isomorphisms, defined in the obvious way.
3. Background on groups of Lie type
3.1. Automorphisms of groups of Lie type of small rank
Automorphisms of groups of Lie type are all known. In this subsection we collect some facts that we will need later on. We shall use the notation from [30].
Automorphisms of SLn(q). Define automorphisms of SLn(q) as follows (where x = (xij)_{i,j=1}^{n} ∈ SLn(q)):

c_g : x ↦ x^g = g^{−1} x g             (g ∈ PGLn(q)),
α   : x ↦ x^α = (xij^α)_{i,j=1}^{n}    (α ∈ Aut(Fq)),
τ   : x ↦ x^τ = ᵗx^{−1}                (transpose-inverse).

We note that for n = 2, τ coincides with the map x ↦ x^µ, where µ = [[0, −1], [1, 0]]. We let PΓLn(q) = PGLn(q) ⋊ Aut(Fq).
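The claim that τ is inner on SL2(q) can be verified directly: for x of determinant 1, ᵗx^{−1} = µ^{−1} x µ. A minimal numerical check (p = 7 is an arbitrary choice; helper names are ours):

```python
p = 7  # any prime; we work in SL2(F_p), elements stored as tuples (a, b, c, d)

def mul(x, y):
    a, b, c, d = x
    e, f, g, h = y
    return ((a*e + b*g) % p, (a*f + b*h) % p, (c*e + d*g) % p, (c*f + d*h) % p)

def inv(x):
    """Inverse of [[a, b], [c, d]] with ad - bc = 1 is [[d, -b], [-c, a]]."""
    a, b, c, d = x
    return (d % p, -b % p, -c % p, a % p)

def tau(x):
    """Transpose-inverse."""
    a, b, c, d = inv(x)
    return (a, c, b, d)

mu = (0, (-1) % p, 1, 0)   # mu = [[0, -1], [1, 0]]
mu_inv = inv(mu)

def conj_mu(x):
    """x -> x^mu = mu^{-1} x mu."""
    return mul(mul(mu_inv, x), mu)
```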
Automorphisms of Sp2n(q). Outer automorphisms of Sp2n(q) are of the form Aut(Fq) as for SL2n(q), defined with respect to a symplectic basis, or come from the group GSp2n(q) ≅ Sp2n(q).(F*_q/(F*_q)^2) of linear similarities of the symplectic form, where F*_q acts as conjugation by

δ(λ) = [[λ I_n, 0_n], [0_n, I_n]]    (λ ∈ F*_q).

This only provides a true outer automorphism if λ is not a square, and we find that PGSp2n(q) ≅ PSp2n(q).2 if q is odd and PGSp2n(q) = PSp2n(q) if q is even. We define

ΓSp2n(q) = GSp2n(q) ⋊ Aut(Fq),
PΓSp2n(q) = PGSp2n(q) ⋊ Aut(Fq).

Note that, as in SL2(q), the map τ : A ↦ ᵗA^{−1} is the inner automorphism given by conjugation by

M = [[0_n, I_n], [−I_n, 0_n]].
Automorphisms of SUn(q). All linear outer automorphisms of SUn(q) are induced by GUn(q), the group of linear isometries of the hermitian form, or are induced by Aut(Fq2) as for SLn(q2) with respect to an orthonormal basis. The group Aut(Fq2) has order 2e, where q = p^e, p prime. We let ΓUn(q) = GUn(q) ⋊ Aut(Fq2) and let PΓUn(q) denote its quotient over the center (consisting of the scalar matrices). In this case, the transpose-inverse map τ with respect to a hyperbolic basis is the composition of the inner automorphism given by

M = [[0_n, I_n], [I_n, 0_n]]

and the field automorphism x ↦ x̄ = x^q (with respect to the hyperbolic basis).

The group Âut(Fq2) of field automorphisms of SUn(q) on a hyperbolic basis. For ΓU2n(q) note that Aut(Fq2) = ⟨α⟩ acts with respect to an orthonormal basis U = {u1, . . . , u2n} for the Fq2-vector space V with σ-hermitian form h preserved by the group (see [30]). We now identify a complement Âut(Fq2) of semilinear automorphisms of GU2n(q) in ΓU2n(q) with respect to a hyperbolic basis. Fix the standard hyperbolic basis H = {ei, fi : i = 1, 2, . . . , n} so that the elements of GU(V, h) are represented by matrices in GU2n(q) with respect to H. Let α ∈ Aut(Fq2) act on V via U. Then H^α = {ei^α, fi^α : i = 1, 2, . . . , n} is also a hyperbolic basis for V, so for some A ∈ GU2n(q) we have AH = H^α. Now the composition α̂ = A^{−1} ◦ α is an α-semilinear map that fixes H. The corresponding automorphism of GU2n(q) acts by applying α to the matrix entries.
Remark 3.1. The following special case will be of particular interest when considering a Curtis-Tits standard pair of type 2A3(q). In this case the action of α̂ as above on SU4(q) translates via the standard identification maps (see Definition 2.5) to actions on SL2(q) and SL2(q2) as follows. The action on SL2(q2) is the natural entry-wise field automorphism action. The action on SL2(q) is the product of the natural entry-wise action of α̂ and a diagonal automorphism diag(f, 1), where f ∈ Fq is such that α̂(η) = f η. Note that N_{Fq/Fp}(f) = −1, so in particular σ = α̂^e translates to (left) conjugation by diag(−1, 1) only.
Definition 3.2. Since the norm is surjective, there exists ζ ∈ Fq2 such that N_{Fq2/Fq}(ζ) = f^{−1}. We then have that diag(ζ, ζ, ζ^{−q}, ζ^{−q}) ∈ GU4(q) acts trivially on SL2(q2) and acts as left conjugation by diag(f^{−1}, 1) on SL2(q). It follows that the composition α̃ of α̂ and this diagonal automorphism acts entry-wise as α on both SL2(q) and SL2(q2). We now define

Ãut(Fq2) = ⟨α̃⟩ ≤ Aut(SU4(q)).
Lemma 3.3. (See [28, 30].)
(1) As Sp2(q) = SL2(q) ≅ SU2(q), we have

Aut(Sp2(q)) = Aut(SL2(q)) = PΓL2(q) ≅ PΓU2(q) = Aut(SU2(q)).

(2) In higher rank we have

Aut(SLn(q)) = PΓLn(q) ⋊ ⟨τ⟩,
Aut(Sp2n(q)) = PΓSp2n(q),
Aut(SUn(q)) = PΓUn(q).
3.1.1. Some normalizers and centralizers.
Corollary 3.4. Let G = SL3(q). Let ϕ : SL2(q) → G be given by A ↦ [[1, 0], [0, A]] and let L = im ϕ. Then,

C_{Aut(G)}(L) = ⟨diag(a, b, b) : a, b ∈ F*_q⟩ ⋊ ⟨θ⟩,

where θ = τ ◦ c_ν : X ↦ ᵗ(ν^{−1} X ν)^{−1} and ν = [[1, 0, 0], [0, 0, −1], [0, 1, 0]].

Proof This follows easily from the fact that Aut(G) ≅ PΓL3(q). Consider an element τ^i ◦ α ◦ c_g of C_{Aut(G)}(L), where c_g denotes conjugation by g ∈ GL3(q) and α ∈ Aut(Fq). Using transvection matrices from L over the fixed field F_q^α one sees that if i = 0, then g must be of the form diag(a, b, b), and if i = 1, then it must be of the form diag(a, b, b)ν, for some a, b ∈ F*_q. Then, if α ≠ id, picking transvections from L with a few entries in Fq − F_q^α one verifies that α must be the identity.
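That θ = τ ◦ c_ν indeed centralizes L can also be checked numerically; the sketch below (p = 7 and the helper names are our choice) verifies θ(X) = X for elements X = diag(1, A) in the image of ϕ:

```python
p = 7

def mul(X, Y):
    return tuple(tuple(sum(X[i][k]*Y[k][j] for k in range(3)) % p
                       for j in range(3)) for i in range(3))

def inv(X):
    """Inverse of a 3x3 matrix of determinant 1 mod p (adjugate trick)."""
    c = lambda i, j: (X[(i+1) % 3][(j+1) % 3]*X[(i+2) % 3][(j+2) % 3]
                      - X[(i+1) % 3][(j+2) % 3]*X[(i+2) % 3][(j+1) % 3]) % p
    return tuple(tuple(c(j, i) for j in range(3)) for i in range(3))

def transpose(X):
    return tuple(tuple(X[j][i] for j in range(3)) for i in range(3))

nu = ((1, 0, 0), (0, 0, (-1) % p), (0, 1, 0))
nu_inv = inv(nu)

def theta(X):
    """theta = tau o c_nu : X -> t(nu^{-1} X nu)^{-1}."""
    return transpose(inv(mul(mul(nu_inv, X), nu)))

def phi(a, b, c, d):
    """The embedding A -> diag(1, A) of SL2(p) into SL3(p)."""
    return ((1, 0, 0), (0, a % p, b % p), (0, c % p, d % p))
```

The reason this works: on the block ⟨e2, e3⟩, conjugation by ν acts as conjugation by [[0, −1], [1, 0]], which turns A into ᵗA^{−1}, and the subsequent transpose-inverse undoes it.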
4. Classification of Curtis-Tits amalgams
4.1. Fundamental root groups in Curtis-Tits standard pairs
Lemma 4.1. Let q be a power of the prime p. Suppose that (G, G1, G2) is a Curtis-Tits standard pair of type Γ(q) as in Subsection 2.2. For {i, j} = {1, 2}, let Sj = Sylp(Gj).

(1) There exist two groups Xj^{i,ε} ∈ Sj (ε = +, −) such that for any X ∈ Sj we have

⟨Gi, X⟩ ≤ Pi^ε if and only if X = Xj^{i,ε},

where Pi^+ and Pi^− are the two parabolic subgroups of G containing Gi. If X ≠ Xj^{i,ε}, then

⟨Gi, X⟩ = (Gi × Gi^x) ⋊ ⟨x⟩ if Γ(q) = C2(2), and ⟨Gi, X⟩ = G otherwise,

where in the C2(2) case X = ⟨x⟩.

(2) We can select the signs ε so that Xi^{j,ε} commutes with Xj^{i,−ε}, but not with Xj^{i,ε}; in fact ⟨Xi^{j,ε}, Xj^{i,ε}⟩ is contained in the unipotent radical Ui,j^ε of a unique Borel subgroup of Gi,j, namely Bi,j^ε = Pi^ε ∩ Pj^ε.
Proof We first prove part 1. by considering all cases.

A2(q), q ≥ 3. View G = SL3(q) = SL(V) for some Fq-vector space V with basis {e1, e2, e3}. By symmetry we may assume that i = 1 and j = 2. Let G1 (resp. G2) stabilize ⟨e1, e2⟩ and fix e3 (resp. stabilize ⟨e2, e3⟩ and fix e1). A root group in G2 is of the form Xv = Stab_{G2}(v) for some v ∈ ⟨e2, e3⟩. We let X2^+ = X_{e2} and X2^− = X_{e3}. It is clear that for ε = + (resp. ε = −), ⟨G1, X2^ε⟩ = P^ε is contained in (but not equal to) the parabolic subgroup stabilizing ⟨e1, e2⟩ (resp. ⟨e3⟩). Now suppose that X ∈ S2 is different from X2^ε (ε = +, −) and X = X_{λe2+e3} for some λ ∈ F*_q. Consider the action of a torus element d = diag(µ, µ^{−1}, 1) ∈ G1 by conjugation on G2. Then X^d = X_{µλe2+e3}. Since |Fq| ≥ 3, X^d ≠ X for some d and so we have

(4.1)    ⟨Gi, X⟩ ≥ ⟨Gi, X, X^d⟩ = ⟨Gi, Gj⟩ = G.

A2(2). In this case S2 = {X2^+, X = ⟨r⟩, X2^−}, where r is the Coxeter element fixing e1 and interchanging e2 and e3. It follows that G1^r is the stabilizer of the subspace decomposition ⟨e2⟩ ⊕ ⟨e1, e3⟩ and hence ⟨G1, X⟩ = G.
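The generation statement for A2(q), q ≥ 3, is small enough to confirm by brute force for q = 3: together with a Sylow 3-subgroup X of G2 fixing e2 + e3 (so X ∉ {X2^+, X2^−}), the group G1 generates all of SL3(3), of order 5616. A naive closure computation (our own code, in the spirit of the GAP checks mentioned elsewhere in this proof):

```python
q = 3  # the case A2(3) of Lemma 4.1

def mul(X, Y):
    return tuple(tuple(sum(X[i][k]*Y[k][j] for k in range(3)) % q
                       for j in range(3)) for i in range(3))

# G1 = SL2(3) in the upper-left block (fixing e3), generated by the
# upper and lower unipotent matrices.
u = ((1, 1, 0), (0, 1, 0), (0, 0, 1))
l = ((1, 0, 0), (1, 1, 0), (0, 0, 1))

# A generator of the root group X = X_{e2+e3} <= G2 acting on <e2, e3>:
# its 2x2 block [[0, 1], [2, 2]] has determinant 1, order 3, and fixes
# e2 + e3, so X differs from X^+ = X_{e2} and X^- = X_{e3}.
x = ((1, 0, 0), (0, 0, 1), (0, 2, 2))

def closure(gens):
    """Naive closure under right multiplication by the generators; in a
    finite group this is exactly the generated subgroup."""
    elems = set(gens)
    frontier = set(gens)
    while frontier:
        new = {mul(a, g) for a in frontier for g in gens} - elems
        elems |= new
        frontier = new
    return elems
```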
C2(q), q ≥ 3, X short root. We use the notation of Subsection 2.2. First, let i = 2, j = 1, let G2 ≅ Sp2(q) ≅ SL2(q) be the stabilizer of e1 and e3 and let G1 ≅ SL2(q) be the derived subgroup of the stabilizer of the isotropic 2-spaces ⟨e1, e2⟩ and ⟨e3, e4⟩. Root groups in G1 are of the form X_{u,v} = Stab_{G1}(u) ∩ Stab_{G1}(v), where u ∈ ⟨e1, e2⟩ and v ∈ ⟨e3, e4⟩ are orthogonal. Let X1^+ = X_{e1,e4} and X1^− = X_{e2,e3}. It is easy to verify that for ε = + (resp. ε = −), ⟨G2, X1^ε⟩ = P^ε is contained in the parabolic subgroup stabilizing ⟨e1⟩ (resp. ⟨e3⟩). Now let X = X_{e1+λe2, e3−λ^{−1}e4} for some λ ∈ F*_q. Consider the action of a torus element d = diag(1, µ^{−1}, 1, µ) ∈ G2 by conjugation on G1. Then X^d = X_{e1+λµe2, e3−λ^{−1}µ^{−1}e4}. Since q ≥ 3, X^d ≠ X for some d and so, for i = 1, and these G2, X and d, we have (4.1) again.
C2(q), q ≥ 4, X long root. Now we let i = 1 and j = 2. Root groups in G2 are of the form Xu = Stab_{G2}(u) where u ∈ ⟨e2, e4⟩. Let X2^+ = X_{e2} and X2^− = X_{e4}. It is easy to verify that for ε = + (resp. ε = −), ⟨G1, X2^ε⟩ = P^ε is contained in the parabolic subgroup stabilizing ⟨e1, e2⟩ (resp. ⟨e1, e4⟩). Now let X = X_{e2+λe4} for some λ ∈ F*_q.
Consider the action of a torus element d = diag(µ, µ^{−1}, µ^{−1}, µ) ∈ G1 by conjugation on G2. Then X^d = X_{µe2+µ^{−1}λe4}. Now if q ≥ 4, then X^d ≠ X for some d and so, for these G1, X and d, we have (4.1) again.
C2(q), q = 3, X long root. The proof for the case q ≥ 4 does not yield the result since, for q = 3, the element d centralizes G2. A direct computation in GAP shows that the conclusion still holds, though. Let x ∈ X = X_{e2+e4} send e2 to e4. Then G1 and G1^x contain two short root groups fixing e1 and e3. Their commutators generate a long root group fixing e1, e2, and e4, while being transitive on the points ⟨e3 + λe1⟩. Further conjugation with an element in G1 interchanging the points ⟨e1⟩ and ⟨e2⟩ yields a long root group in G2 different from X and we obtain an equation like (4.1) again.
C2(2). First note that G ≅ Sp4(2) ≅ O5(2) is self point-line dual, so we only need to consider the case where G2 = StabG(e1) ∩ StabG(e3) and G1 = StabG(⟨e1, e2⟩) ∩ StabG(⟨e3, e4⟩). Now S1 = {X1^+, X1^−, ⟨x⟩}, where x is the permutation matrix of (1, 2)(3, 4). The conclusion follows easily.
2A3(q). We use the notation of Subsection 2.2. First, let i = 2, j = 1, let G2 ≅ SU2(q) ≅ SL2(q) be the stabilizer of e1 and e3 and let G1 ≅ SL2(q2) be the derived subgroup of the simultaneous stabilizer in G = SU4(q) of the isotropic 2-spaces ⟨e1, e2⟩ and ⟨e3, e4⟩. Root groups in G1 are of the form X_{u,v} = Stab_{G1}(u) ∩ Stab_{G1}(v), where u ∈ ⟨e1, e2⟩ and v ∈ ⟨e3, e4⟩ are orthogonal. Let X1^+ = X_{e1,e4} and X1^− = X_{e2,e3}. It is easy to verify that for ε = + (resp. ε = −), ⟨G2, X1^ε⟩ = P^ε is contained in the parabolic subgroup stabilizing ⟨e1⟩ (resp. ⟨e3⟩). Now let X = X_{e1+λe2, e3−λ^{−σ}e4} for some λ ∈ F*_{q2}. Consider the action of a torus element d = diag(1, µ^{−1}, 1, µ) ∈ G2 (with µ ∈ F*_q) by conjugation on G1. Then X^d = X_{e1+λµe2, e3−λ^{−σ}µ^{−1}e4}. There are q − 1 choices for µ, so if q ≥ 3, then X^d ≠ X for some d. Hence, for i = 1, and these G2, X and d, we have (4.1) again. The case q = 2 is a quick GAP calculation.
Now we let i = 1 and j = 2. Root groups in G2 are of the form Xu = Stab_{G2}(u) where u ∈ ⟨e2, e4⟩ is isotropic. Let X2^+ = X_{e2} and X2^− = X_{e4}. It is easy to verify that for ε = + (resp. ε = −), ⟨G1, X2^ε⟩ = P^ε is contained in the parabolic subgroup stabilizing ⟨e1, e2⟩ (resp. ⟨e1, e4⟩). Now let X = X_{e2+λe4} for some λ ∈ F*_{q2} where Tr(λ) = λ + λ^σ = 0. Consider the action of a torus element d = diag(µ, µ^{−1}, µ^{−σ}, µ^σ) ∈ G1 (for some µ ∈ F*_{q2}) by conjugation on G2. Then X^d = X_{µe2+µ^{−σ}λe4}. The q2 − 1 choices for µ result in q − 1 different conjugates. Thus, if q − 1 ≥ 2, then X^d ≠ X for some d and so, for these G1, X and d, we have (4.1) again. The case q = 2 is a quick GAP calculation. Namely, in this case, X = ⟨x⟩, where x is the only element of order 2 in G2 ≅ S3 that does not belong to X2^+ ∪ X2^−; it is the Coxeter element that fixes e1 and e3 and interchanges e2 and e4. Now ⟨G1, G1^x⟩ contains the long root group generated by the commutators of the short root groups fixing e1 in G1 and G1^x, and likewise for e2, e3, and e4. In particular, we have

(4.2)    ⟨G1, X⟩ ≥ ⟨G1, G1^x⟩ ≥ ⟨G1, G2⟩ = G.

We now address part 2. Note that the positive and negative fundamental root groups with respect to the torus Bi,j^+ ∩ Bi,j^− satisfy the properties of Xi^{j,ε} and Xj^{i,ε}, so by the uniqueness statement in 1. they must be equal. Now the claims in part 2. are the consequences of the Chevalley commutator relations.
Remark 4.2. Explicitly, the groups {Xi^+, Xi^−} (i = 1, 2), possibly up to a switch of signs, for the Curtis-Tits standard pairs are as follows.

For Γ = A2, we have

X1^+ = { [[1, b, 0], [0, 1, 0], [0, 0, 1]] : b ∈ Fq },   X1^− = { [[1, 0, 0], [c, 1, 0], [0, 0, 1]] : c ∈ Fq },
X2^+ = { [[1, 0, 0], [0, 1, b], [0, 0, 1]] : b ∈ Fq },   X2^− = { [[1, 0, 0], [0, 1, 0], [0, c, 1]] : c ∈ Fq }.
For Γ = C2, we have

X1^+ = { [[1, b, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, −b, 1]] : b ∈ Fq },   X1^− = { [[1, 0, 0, 0], [c, 1, 0, 0], [0, 0, 1, −c], [0, 0, 0, 1]] : c ∈ Fq },
X2^+ = { [[1, 0, 0, 0], [0, 1, 0, b], [0, 0, 1, 0], [0, 0, 0, 1]] : b ∈ Fq },   X2^− = { [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, c, 0, 1]] : c ∈ Fq }.
For Γ = 2A3, we have (with η ∈ Fq2 of trace 0)

X1^+ = { [[1, b, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, −b^σ, 1]] : b ∈ Fq2 },   X1^− = { [[1, 0, 0, 0], [c, 1, 0, 0], [0, 0, 1, −c^σ], [0, 0, 0, 1]] : c ∈ Fq2 },
X2^+ = { [[1, 0, 0, 0], [0, 1, 0, bη], [0, 0, 1, 0], [0, 0, 0, 1]] : b ∈ Fq },   X2^− = { [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, cη^{−1}, 0, 1]] : c ∈ Fq }.
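Part 2. of Lemma 4.1 says that, for a suitable choice of signs, the positive root groups generate a subgroup of the unipotent radical of a Borel subgroup. For Γ = A2 this is visible directly: X1^+ and X2^+ consist of upper unitriangular matrices, and the commutator [x1(b), x2(c)] = I + bc·E13 is a long root element. A small check (q = 7 and the helper names are our choice):

```python
q = 7

def mul(X, Y):
    return tuple(tuple(sum(X[i][k]*Y[k][j] for k in range(3)) % q
                       for j in range(3)) for i in range(3))

def x1(b):
    """Element of X1^+ for Gamma = A2: I + b*E12."""
    return ((1, b % q, 0), (0, 1, 0), (0, 0, 1))

def x2(c):
    """Element of X2^+ for Gamma = A2: I + c*E23."""
    return ((1, 0, 0), (0, 1, c % q), (0, 0, 1))

def comm(g, g_inv, h, h_inv):
    """Commutator [g, h] = g h g^{-1} h^{-1}."""
    return mul(mul(g, h), mul(g_inv, h_inv))
```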
4.2. Weak systems of fundamental root groups

In this subsection we show that a Curtis-Tits amalgam with 3-spherical diagram determines a collection of subgroups of the vertex groups, called a weak system of fundamental root groups. We then use this to determine the coefficient system of the amalgam in the sense of [2], which, in turn, is applied to classify these amalgams up to isomorphism.
Definition 4.3. Suppose that G = {Gi, Gi,j, gi,j | i, j ∈ I} is a CT amalgam. For each i ∈ I let Xi^+, Xi^− ≤ Gi be a pair of opposite root groups. We say that {Xi^+, Xi^− | i ∈ I} is a weak system of fundamental root groups if, for any edge {i, j} ∈ E, there are opposite Borel groups Bi,j^+ and Bi,j^− in Gi,j, each of which contains exactly one of {Xi^+, Xi^−}.
We call G orientable if we can select Xi^ε, Bi,j^ε (ε = +, −) for all i, j ∈ V such that Xi^ε, Xj^ε ≤ Bi,j^ε. If this is not possible, we call G non-orientable.
The relation between root groups and Borel groups is given by the following
well-known fact.
Lemma 4.4. Let q be a power of the prime p. Let G be a universal group of
Lie type Γ(q) and let X be a Sylow p-subgroup. Then, NG (X) is the unique
Borel group B of G containing X.
Proposition 4.5. Suppose that G = {Gi , Gi,j , gi,j | i, j ∈ I} is a CT amalgam
with connected 3-spherical diagram Γ. If G has a non-trivial completion (G, γ),
then it has a unique weak system of fundamental root groups.
Proof We first show that there is some weak system of fundamental root groups. For every edge {i, j}, let Xj^{i,ε} be the groups of Lemma 4.1. Suppose that there is some subdiagram ΓJ with J = {i, j, k} in which j is connected to both i and k, such that {Xj^{i,+}, Xj^{i,−}} ≠ {Xj^{k,+}, Xj^{k,−}} as sets. Without loss of generality assume that Γi,j = A2 (by 3-sphericity) and moreover, that Xj^{k,+} ∉ {Xj^{i,+}, Xj^{i,−}}. For any subgroup H of a group in G, write H̄ = γ(H). Now note that X̄k^{j,−} commutes with X̄j^{k,+} and since Γ contains no triangles it also commutes with Ḡi. But then X̄k^{j,−} commutes with ⟨X̄j^{k,+}, Ḡi⟩ which, by Lemma 4.1, equals Ḡi,j (this is where we use that Γi,j = A2), contradicting that X̄k^{j,−} does not commute with X̄j^{k,−} ≤ Ḡi,j. Thus, if there is a completion, then by connectedness of Γ, for each i ∈ I we can pick a j ∈ I so that {i, j} ∈ E and set Xi^± = Xi^{j,±} and drop the superscript. We claim that {Xi^± | i ∈ I} is a weak system of fundamental root groups. But this follows from part 2. of Lemma 4.1.
The uniqueness derives immediately from the fact that by Lemma 4.1, gj,i(Xj^{i,+}) and gj,i(Xj^{i,−}) are the only two Sylow p-subgroups in gj,i(Gj) which do not generate Gi,j with gi,j(Gi).
An immediate consequence of the results above is the following observation.

Corollary 4.6. Suppose that G = {Gi, Gi,j, gi,j | i, j ∈ I} is a CT amalgam with connected 3-spherical diagram Γ. Then, an element of N_{Aut(Gi,j)}(Gi, Gj) either fixes each of the pairs (Xi^+, Xi^−), (Xj^+, Xj^−), and (Bi,j^+, Bi,j^−) or it reverses each of them. In particular,

N_{Aut(Gi,j)}(Gi, Gj) = N_{Aut(Gi,j)}({Xi^+, Xi^−}) ∩ N_{Aut(Gi,j)}({Xj^+, Xj^−}).
4.3. The coefficient system of a Curtis-Tits amalgam
The automorphisms of a Curtis-Tits standard pair will be crucial in the classification of Curtis-Tits amalgams and we will need a detailed description of them.
We now fix a Curtis-Tits amalgam G = {Gi , Gi,j , gi,j | i, j ∈ I} of type Γ(q),
where for every i, j ∈ I, gi,j is the standard identification map of Definition 2.5.
Then, G has a weak system of fundamental root groups X = {{Xi^+, Xi^−} : i ∈ I} as in Subsection 4.1.
Remark 4.7. Let Ḡ = {Ḡi, Ḡi,j, ḡi,j | i, j ∈ I} be a Curtis-Tits amalgam over Fq with given diagram Γ. Next suppose that Γ is connected 3-spherical, and that G and Ḡ are non-collapsing. Then, by Proposition 4.5, G and Ḡ each have a weak system of fundamental root groups. Now note that for each i ∈ I, Aut(Gi) is 2-transitive on the set of Sylow p-subgroups. Thus, for each i ∈ I and all j ∈ I − {i}, we can replace ḡi,j by ḡi,j ◦ αi, for suitable automorphisms αi, to form a new amalgam isomorphic to Ḡ, whose weak system of fundamental root groups is exactly X. Thus, in order to classify non-collapsing Curtis-Tits amalgams over Fq with diagram Γ up to isomorphism, it suffices to classify those whose weak system of fundamental root groups is exactly X.
Definition 4.8. Suppose that G = {Gi, Gi,j, gi,j | i, j ∈ I} is a Curtis-Tits amalgam over Fq with connected 3-spherical diagram Γ. Denote the associated weak system of fundamental root groups by X = {{Xi^+, Xi^−} : i ∈ I}. The coefficient system associated to G is the collection A = {Ai, Ai,j, ai,j | i, j ∈ I} where, for any i, j ∈ I, we set

Ai = N_{Aut(Gi)}({Xi^+, Xi^−}),
Ai,j = N_{Aut(Gi,j)}({Xi^ε : ε = +, −}) ∩ N_{Aut(Gi,j)}({Xj^ε : ε = +, −}),
ai,j : Ai,j → Aj given by restriction: ϕ ↦ g_{j,i}^{−1} ◦ ρi,j(ϕ) ◦ g_{j,i},

where ρi,j(ϕ) is the restriction of ϕ to Gj ≤ Gi,j.
From now on we let A be the coefficient system associated to G. Its significance for the classification of Curtis-Tits amalgams with a weak system of fundamental root groups is as follows:
Proposition 4.9. Suppose that Ḡ and G+ are Curtis-Tits amalgams with diagram Γ over Fq with weak system of fundamental root groups X.

(1) For all i, j ∈ I, we have ḡi,j = gi,j ◦ δi,j and g+_{i,j} = gi,j ◦ δ+_{i,j} for some δi,j, δ+_{i,j} ∈ Ai.
(2) For any isomorphism φ : Ḡ → G+ and i, j ∈ I, we have φi ∈ Ai, φ{i,j} ∈ Ai,j, and ai,j(φ{i,j}) = δ+_{i,j} ◦ φi ◦ δ_{i,j}^{−1}.

Proof Part 1. follows since, for any i, j ∈ I, we have g_{i,j}^{−1} ◦ ḡi,j ∈ Aut(Gi) and

{gi,j(Xi^+), gi,j(Xi^−)} = {ḡi,j(Xi^+), ḡi,j(Xi^−)}.

Part 2. follows from Corollary 4.6 since, for any i, j ∈ I,

(Gi,j, ḡi,j(Gi), ḡj,i(Gj)) = (Gi,j, gi,j(Gi), gj,i(Gj)) = (Gi,j, g+_{i,j}(Gi), g+_{j,i}(Gj)).
We now determine the groups appearing in the coefficient system A associated to G.

Lemma 4.10. Fix i ∈ I and let q be such that Gi ≅ SL2(q). Then,

Ai = Ti ⋊ Ci,

where Ti is the subgroup of diagonal automorphisms in PGL2(q) and Ci = ⟨τ, Aut(Fq)⟩.

Proof This follows from the fact that via the standard embedding map gi,j the groups Xi^+ and Xi^− of the weak system of fundamental root groups are the subgroups of unipotent upper and lower triangular matrices in SL2(q).
Lemma 4.11. Let A be the coefficient system associated to the standard Curtis-Tits amalgam G of type Γ(q) and the weak system of fundamental root groups X.
If Γ = A1 × A1, we have Gi,j = Gi × Gj, gi,j and gj,i are identity maps, and

(4.3)    Ai,j = Ai × Aj ≅ Ti,j ⋊ Ci,j,

where Ti,j = Ti × Tj and Ci,j = Ci × Cj. Otherwise,

Ai,j = Ti,j ⋊ Ci,j,

where

(4.4)    Ci,j = Aut(Fq) × ⟨τ⟩ for Γ = A2, C2, and Ci,j = Ãut(Fq2) × ⟨τ⟩ for Γ = 2A3,

and Ti,j denotes the image of the standard torus T in Aut(Gi,j). Note that

T = ⟨diag(a, b, c) : a, b, c ∈ F*_q⟩ ≤ GL3(q)                      if Γ = A2,
T = ⟨diag(ab, a^{−1}b, a^{−1}, a) : a, b ∈ F*_q⟩ ≤ GSp4(q)         if Γ = C2,
T = ⟨diag(a, b, a^{−q}, b^{−q}) : a, b ∈ F*_{q2}⟩ ≤ GU4(q)         if Γ = 2A3.
Remark 4.12. Remarks on Lemma 4.11:
(1) We view Sp2n(q) and SU2n(q) as matrix groups with respect to a symplectic (resp. hyperbolic) basis for the 2n-dimensional vector space V, and Aut(Fq) (resp. Âut(Fq2)) acts entry-wise on the matrices.
(2) The map τ is the transpose-inverse map of Subsection 3.1.
(3) Recall that in the 2A3 case, Remark 3.1 and Definition 3.2 describe the actions of Ãut(Fq2) ≤ Ci,j on Gi and Gj via the standard identification maps.
Proof We first consider the A1 × A1 case of (4.3). When Γ = A1 × A1, then Gi,j = Gi × Gj and since the standard root groups Xi^± generate Gi (i = 1, 2), their simultaneous normalizer must also normalize Gi and Gj. Thus the claim follows from Lemma 4.10.
We now deal with all remaining cases simultaneously. In the 2A3 case we note that from Remark 3.1 and Definition 3.2 we see that Ãut(Fq2) ≤ Ti,j ⋊ Âut(Fq2) is simply a different complement to Ti,j, so it suffices to prove the claim with Ãut(Fq2) replaced by Âut(Fq2).
Consider the descriptions of the sets {Xi^+, Xi^−} in all cases from Subsection 4.1. We see that since τ acts by transpose-inverse, it interchanges Xi^+ and Xi^− for i = 1, 2 in all cases, hence it also interchanges positive and negative Borel groups (see Corollary 4.6). Thus it suffices to consider those automorphisms that normalize the positive and negative fundamental root groups. Since all field automorphisms (of Aut(Fq) and Âut(Fq2)) act entry-wise, they do so. Clearly so does T. Thus we have established ⊇.
We now turn to the reverse inclusion. By Lemma 3.3 and the description of the automorphism groups in Subsection 3.1, any automorphism of Gi,j is a product of the form g α τ^i where g is linear, α is a field automorphism (from Âut(Fq2) in the 2A3 case) and i = 0, 1. As we saw above, τ and α preserve the root groups, so it suffices to describe g in case it preserves the sets of opposite root groups. A direct computation shows that g must be in T.
Next we describe the connecting maps ai,j of A .
Lemma 4.13. Let A be the coefficient system of the standard Curtis-Tits amalgam G over Fq with diagram Γ and weak system of fundamental root groups X. Fix i, j ∈ I and let (Gi,j, Gi, Gj) be a Curtis-Tits standard pair in G with diagram Γi,j. Denote a = (aj,i, ai,j): Ai,j → Ai × Aj. Then, we have the following:
(1) If Γi,j = A1 × A1, then a is an isomorphism inducing Ti,j ≅ Ti × Tj and Ci,j ≅ Ci × Cj.
(2) If Γi,j = A2 or 2A3, then a: Ti,j → Ti × Tj is bijective.
(3) If Γi,j = C2, then a: Ti,j → T_i^2 × Tj is an isomorphism, and T_i^2 × Tj has index 1 or 2 in Ti × Tj depending on whether q is even or odd.
(4) If Γi,j = A2 or C2, then a: Ci,j → Ci × Cj is given by τ^s α ↦ (τ^s α, τ^s α) (for s ∈ {0, 1} and α ∈ Aut(Fq)), which is a diagonal embedding.
(5) If Γ = 2A3, then a: Ci,j → Ci × Cj is given by τ^s α̃^r ↦ (τ^s α^r, τ^s α^r) (for s ∈ {0, 1}, r ∈ N, and α: x ↦ x^p for x ∈ Fq2). Here σ̃ ↦ (σ, id).
Remark 4.14. (1) In 4., τ acts as transpose-inverse and α acts entrywise on Gi,j, Gi and Gj.
Proof 1. This is immediate from Lemma 4.11.
For the remaining cases, recall that for any ϕ ∈ Ai,j, we have ai,j: ϕ ↦ g_{j,i}^{-1} ∘ ρi,j(ϕ) ∘ gj,i, where ρi,j(ϕ) is the restriction of ϕ to Gj ≤ Gi,j (Definition 4.8) and gi,j is the standard identification map of Definition 2.5. Note that for Γi,j = A2, C2 the standard identification map transforms the automorphism ρj,i(ϕ) of Gi essentially to the “same” automorphism ϕ of Gi, whereas for Γi,j = 2A3, we must take Remark 3.1 into account.
2. Let Γi,j = A2. Every element of Ti,j (Ti, and Tj respectively) is given by a unique matrix of the form diag(a, 1, c) (diag(a, 1), and diag(1, c)), and we have
(aj,i, ai,j): diag(a, 1, c) ↦ (diag(a, 1), diag(1, c))  (a, c ∈ F_q^*),
which is clearly bijective. In the 2A3 case, every element of Ti,j (Ti, and Tj respectively) is given by a unique matrix of the form diag(ab^{-1}, 1, a^{-q}b^{-1}, b^{-(q+1)})
CURTIS-TITS AND PHAN AMALGAMS
(diag(1, b^{-(q+1)}), and diag(ab^{-1}, 1)), and we have
a: diag(ab^{-1}, 1, a^{-q}b^{-1}, b^{-(q+1)}) ↦ (diag(1, b^{-(q+1)}), diag(ab^{-1}, 1)).
This map is onto since the norm N_{Fq2/Fq}: b ↦ b^{q+1} is onto. Its kernel is trivial, as it is given by pairs (a, b) ∈ Fq2 with a = b and b^{q+1} = 1, so that also a^{-q}b^{-1} = 1.
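This norm computation is easy to sanity-check in a small case. Below is a minimal illustrative sketch (not from the paper) for q = 3, with F_9 modelled ad hoc as F_3[x]/(x^2 + 1); it confirms that b ↦ b^{q+1} maps F_9^* onto F_3^* and has kernel of size q + 1:

```python
# Sanity check (illustrative): the norm N: b -> b^(q+1) maps F_{q^2}^*
# onto F_q^* with kernel of size q+1.  Here q = 3 and F_9 = F_3[x]/(x^2+1).
q = 3

def mul(u, v):
    # (u0 + u1*x)(v0 + v1*x) with x^2 = -1, coefficients mod q
    u0, u1 = u
    v0, v1 = v
    return ((u0 * v0 - u1 * v1) % q, (u0 * v1 + u1 * v0) % q)

def power(u, n):
    r = (1, 0)
    for _ in range(n):
        r = mul(r, u)
    return r

units = [(a, b) for a in range(q) for b in range(q) if (a, b) != (0, 0)]
norms = {power(u, q + 1) for u in units}
kernel = [u for u in units if power(u, q + 1) == (1, 0)]

print(norms == {(1, 0), (2, 0)})  # norm is onto F_3^* = {1, 2}
print(len(kernel))                # q + 1 = 4
```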
3. In the C2 case, every element of Ti,j is given by a unique diagonal matrix diag(a^2 b, b, 1, a^2) (a, b ∈ Fq). Every element of Ti (resp. Tj) is given by a unique diag(c, 1) (resp. diag(d, 1)). Now we have
a: diag(a^2 b, b, 1, a^2) ↦ (diag(a^2, 1), diag(ba^{-2}, 1)).
It follows that a is injective and has image T_i^2 × Tj. The rest of the claim follows.
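The even/odd dichotomy in Part (3) of Lemma 4.13 reduces to the index of the subgroup of squares in F_q^*, which is 1 in characteristic 2 (squaring is the Frobenius, hence bijective) and 2 for odd q. A small illustrative check, with F_4 encoded ad hoc as F_2[x]/(x^2 + x + 1):

```python
# Index of the squares {a^2 : a in F_q^*} in F_q^*.
# Odd case: q = 5 (integers mod 5) -- index 2.
units5 = range(1, 5)
squares5 = {(a * a) % 5 for a in units5}
print(4 // len(squares5))  # 2

# Even case: q = 4, F_4 = F_2[x]/(x^2 + x + 1) -- squaring is bijective.
def mul4(u, v):
    u0, u1 = u
    v0, v1 = v
    c0, c1, c2 = u0 * v0, u0 * v1 + u1 * v0, u1 * v1
    return ((c0 + c2) % 2, (c1 + c2) % 2)  # reduce using x^2 = x + 1

units4 = [(1, 0), (0, 1), (1, 1)]
squares4 = {mul4(u, u) for u in units4}
print(3 // len(squares4))  # 1
```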
4. The field automorphism α ∈ Aut(Fq) acts entrywise on the matrices in Gi,j = SL3(q) or Sp4(q), and Gi = Gj = SL2(q). In the case G = Sp4(q), we saw in Subsection 3.1 that τ is inner and coincides with conjugation by M. This clearly restricts to conjugation by µ on both G2 = Sp2(q) and G1 = SL2(q), which is again τ. Clearly these actions correspond to each other via the standard identification maps gi,j and gj,i.
5. The action of \widetilde{Aut}(Fq2) ≤ Ci,j on Gi and Gj via a was explained in Remark 3.1 and Definition 3.2. In case Gi,j = SU4(q), τ is given by conjugation by M composed with the field automorphism \hat{σ}, where σ: x ↦ x^q for x ∈ Fq2. The same holds for Gj = SU2(q), and τ restricts to Gi = SL2(q^2) as transpose-inverse. In view of Remark 3.1 we see that via the standard identification map each restricts to transpose-inverse on Gi and Gj.
4.4. A standard form for Curtis-Tits amalgams
Suppose that G = {Gi , Gi,j , gi,j | i, j ∈ I} is a Curtis-Tits amalgam over Fq
with 3-spherical diagram Γ. Without loss of generality we will assume that all
inclusion maps gi,j are the standard identification maps of Definition 2.5.
By Proposition 4.5 it possesses a weak system of fundamental root groups
X = {{X_i^+, X_i^-} : i ∈ I},
which via the standard embeddings gi,j can be identified with those given in
Subsection 4.1 (note that orienting X may involve changing some signs). Let
A = {Ai , Ai,j , ai,j | i, j ∈ I} be the coefficient system associated to G and X .
We wish to classify all Curtis-Tits amalgams Ḡ = {Gi, Gi,j, ḡi,j | i, j ∈ I} over Fq with the same diagram as G with weak system of fundamental root groups X, up to isomorphism of Curtis-Tits amalgams. By Proposition 4.9 we may restrict to those amalgams whose connecting maps are of the form ḡi,j = gi,j ∘ δi,j for δi,j ∈ Ai for all i ∈ I.
Definition 4.15. The trivial support of Ḡ (with respect to G) is the set {(i, j) ∈ I × I | ḡi,j = gi,j} (that is, δi,j = id_{Gi} in the notation of Proposition 4.9). The word “trivial” derives from the assumption that the gi,j’s are the standard identification maps of Definition 2.5.
Fix some spanning tree Σ ⊆ Γ and suppose that EΓ − EΣ = {{i_s, j_s} : s = 1, 2, . . . , r}, so that H_1(Γ, Z) ≅ Z^r.
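For a connected graph Γ, the number of edges outside any spanning tree is r = |EΓ| − |VΓ| + 1, the rank of H_1(Γ, Z). A small illustrative sketch (the diagram below is made up) that extracts a spanning tree and the excluded edges {i_s, j_s}:

```python
# Sketch: spanning tree of a connected diagram Gamma and the r excluded
# edges, so that H_1(Gamma, Z) ~ Z^r with r = |E| - |V| + 1.
# Example diagram: a square 0-1-2-3-0 (one loop, so r = 1).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
vertices = {v for e in edges for v in e}

# Greedy spanning tree via a simple union-find over vertices.
parent = {v: v for v in vertices}

def find(v):
    while parent[v] != v:
        v = parent[v]
    return v

tree, excluded = [], []
for (u, v) in edges:
    ru, rv = find(u), find(v)
    if ru == rv:
        excluded.append((u, v))   # closes a loop: one of the {i_s, j_s}
    else:
        parent[ru] = rv
        tree.append((u, v))

r = len(edges) - len(vertices) + 1
print(len(tree) == len(vertices) - 1)  # a spanning tree has |V| - 1 edges
print(len(excluded) == r == 1)         # rank of H_1 for the square
```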
Proposition 4.16. There is a Curtis-Tits amalgam Ḡ(Σ) over Fq with the same diagram as Ḡ and the same X, which is isomorphic to Ḡ and has the following properties:
(1) Ḡ(Σ) has trivial support S = {(i, j) ∈ I × I | {i, j} ∈ EΣ} ∪ {(i_s, j_s) : s = 1, 2, . . . , r};
(2) for each s = 1, 2, . . . , r, we have ḡ_{j_s,i_s} = g_{j_s,i_s} ∘ γ_{j_s,i_s}, where γ_{j_s,i_s} ∈ C_{j_s}.
Lemma 4.17. There is a Curtis-Tits amalgam Ḡ+ over Fq with the same diagram as Ḡ and the same X, which is isomorphic to Ḡ and has the following property: for any u, v ∈ I, if ḡu,v = gu,v ∘ γu,v ∘ du,v for some γu,v ∈ Cu and du,v ∈ Tu, then ḡ+_{u,v} = gu,v ∘ γu,v.
Proof Note that we have |I| ≥ 2 and that Γ is connected. Fix u ∈ I. Since Γ is 3-spherical, there is at most one w ∈ I such that (Gu,w, Gu, Gw) is a Curtis-Tits standard pair of type B2 or C2. If there is no such w, let w be an arbitrary vertex such that {u, w} ∈ EΓ. We define Ḡ+ by setting ḡ+_{u,v} = gu,v ∘ γu,v for all v ≠ u.
Next we define φ: Ḡ → Ḡ+ by setting φu = du,w and φv = id_{Gv} for all v ≠ u. Now note that, setting φu,w = id_{Gu,w}, {φu,w, φu, φw} is an isomorphism of the subamalgams Ḡ_{{u,w}} and Ḡ+_{{u,w}}. As for φu,v for v ≠ w, note that in order for {φu,v, φu, φv} to be an isomorphism of the subamalgams Ḡ_{{u,v}} and Ḡ+_{{u,v}}, we must have
ḡ+_{u,v} ∘ φu = φu,v ∘ ḡu,v
ḡ+_{v,u} ∘ φv = φu,v ∘ ḡv,u,
which translates as
gu,v ∘ γu,v ∘ du,w = φu,v ∘ gu,v ∘ γu,v ∘ du,v
gv,u ∘ δv,u = φu,v ∘ gv,u ∘ δv,u,
or in other words
γu,v ∘ du,w ∘ d_{u,v}^{-1} ∘ γ_{u,v}^{-1} = av,u(φu,v)
id_{Gv} = au,v(φu,v).
Note that γu,v ∘ du,w ∘ d_{u,v}^{-1} ∘ γ_{u,v}^{-1} ∈ Tu ⊴ Au. Now by Lemma 4.13, as (Gu,v, Gu, Gv) is not of type B2 or C2, the map (aj,i, ai,j): Ti,j → Ti × Tj is onto. In particular, the required φu,v ∈ Tu,v can be found. This completes the proof.
By Lemma 4.17, in order to prove Proposition 4.16 we may now assume that ḡu,v = gu,v ∘ γu,v for some γu,v ∈ Cu for all u, v ∈ I.
Let Ḡ = {Gi, Gj, Gi,j, ḡi,j = gi,j ∘ γi,j | i, j ∈ I} be a Curtis-Tits amalgam over Fq with |I| = 2, γi,j ∈ Ci and γj,i ∈ Cj. We will describe all possible amalgams Ḡ+ = {Gi, Gj, Gi,j, ḡ+_{i,j} = gi,j ∘ γ+_{i,j} | i, j ∈ I} with γ+_{i,j} ∈ Ci and γ+_{j,i} ∈ Cj, isomorphic to Ḡ via an isomorphism φ with φi ∈ Ci, φj ∈ Cj and φi,j ∈ Ci,j.
Figure 1. The commuting hexagon of Corollary 4.18.
Corollary 4.18. With the notation introduced above, fix the maps γi,j, γ+_{i,j}, φi ∈ Ci as well as γj,i ∈ Cj. Then for any one of γ+_{j,i}, φj ∈ Cj, there exists a choice γ ∈ Cj for the remaining map so that there exists φi,j making the diagram in Figure 1 commute. Moreover, if Γi,j is one of A2, B2, C2, 2A3, then γ is unique, whereas if Γi,j = 2D3, then there are exactly two choices for γ.
Proof The first claim follows immediately from the fact that the restriction maps aj,i: Ci,j → Ci and ai,j: Ci,j → Cj in parts 4. and 5. of Lemma 4.13 are both surjective. The second claim follows from the fact that aj,i: Ci,j → Ci is injective except if Γi,j = 2D3, in which case it has a kernel of order 2.
Proof (of Proposition 4.16) By Lemma 4.17 we may assume that ḡi,j = gi,j ∘ γi,j for some γi,j ∈ Ci for all i, j ∈ I.
For any (possibly empty) subset T ⊆ V let S(T) be the set of pairs (i, j) ∈ S such that i ∈ T. Clearly the trivial support of Ḡ contains S(∅).
We now show that if T is the vertex set of a (possibly empty) proper subtree of Σ, and u is a vertex such that T ∪ {u} is also the vertex set of a subtree of Σ, then for any Curtis-Tits amalgam Ḡ whose trivial support contains S(T), there is a Curtis-Tits amalgam Ḡ+ isomorphic to Ḡ whose trivial support contains S(T ∪ {u}).
Once this is proved, Claim 1. follows since we can start with T = ∅ and end with a Curtis-Tits amalgam, still isomorphic to Ḡ, whose trivial support contains S.
Now let T and u be as above. We first deal with the case where T ≠ ∅. Let t be the unique neighbor of u in the subtree of Σ with vertex set T ∪ {u}. We shall define an amalgam Ḡ+ = {Gi, Gi,j, ḡ+_{i,j} = gi,j ∘ γ+_{i,j} | i, j ∈ I} and an isomorphism φ: Ḡ → Ḡ+, where γ+_{i,j}, φi ∈ Ci and φ{i,j} ∈ Ci,j for all i, j ∈ I.
First note that it suffices to define ḡ+_{i,j}, φi and φ{i,j} for {i, j} ∈ E: given this data, by the A1 × A1 case in Lemma 4.11 and Corollary 4.18, for any non-edge {k, l} there is a unique φ{k,l} ∈ Ck,l such that (φk,l, φk, φl) is an isomorphism between Ḡ_{{k,l}} and Ḡ+_{{k,l}}.
Before defining inclusion maps on edges, note that since Γ is 3-spherical, no two neighbors of u in Γ are connected by an edge. Therefore we can unambiguously set
ḡ+_{i,j} = ḡi,j for u ∉ {i, j} ∈ EΓ.
Note that both maps ḡ+_{t,u} and ḡ+_{u,t} are forced upon us, but at this point for any other neighbor v of u, only one of ḡ+_{u,v} and ḡ+_{v,u} is forced upon us. We set
ḡ+_{t,u} = ḡt,u, and
ḡ+_{v,u} = ḡv,u for v ∈ I with (u, v) ∉ S and (v, u) ∈ S.
To extend the trivial support as required, we set
ḡ+_{u,v} = gu,v for v ∈ I with (u, v) ∈ S.
We can already specify part of φ: set
φi = id_{Gi} for i ∈ I − {u},
φ{i,j} = id_{Gi,j} for u ∉ {i, j} ∈ EΓ.
Thus, what is left to specify is the following: φu and φ{u,t} and, for all neighbors v ≠ t of u, we must specify φ{u,v} as well as
ḡ+_{u,v} if (u, v) ∉ S,
ḡ+_{v,u} if (u, v) ∈ S.
Figures 2 and 3 describe the amalgam Ḡ (top half) and Ḡ+ (bottom half) at the vertex u, where t ∈ VΣ, v ∈ VΓ, and {u, t}, {u, v} ∈ EΓ. Inclusion maps from Ḡ+ forced upon us are indicated in bold; the dotted arrows are those we must define so as to make the diagram commute.
Figure 2. The case (u, v) ∈ S and (v, u) ∉ S.
Figure 3. The case (u, v) ∉ S and (v, u) ∈ S.
In these figures all non-dotted maps are of the form gi,j ∘ γi,j for some γi,j ∈ Ci, hence we can find the desired maps using Corollary 4.18.
In case T = ∅, the situation is as described in Figures 2 and 3 after removing
the {u, t}-hexagon and any conditions it may impose on φu , and letting v run
over all neighbors of u. That is, we must now define φu , and for any neighbor
v of u, we must find φu,v as well as
ḡ+_{u,v} if (u, v) ∉ S,
ḡ+_{v,u} if (u, v) ∈ S.
To do so we let φu = id_{Gu} ∈ Cu. Finally, for each neighbor v of u we simply let ḡ+_{u,v} = ḡu,v (so that φu,v = id_{Gu,v} ∈ Cu,v) if (u, v) ∉ S, and we obtain ḡ+_{v,u} and φu,v ∈ Cu,v using Corollary 4.18 if (u, v) ∈ S.
4.5. Classification of Curtis-Tits amalgams with 3-spherical diagram
In the case where Ḡ is a Curtis-Tits amalgam over Fq whose diagram is a 3-spherical tree, Proposition 4.16 says that Ḡ ≅ G.
Theorem 4.19. Suppose that G is a Curtis-Tits amalgam with a diagram that
is a 3-spherical tree. Then, G is unique up to isomorphism. In particular any
Curtis-Tits amalgam with spherical diagram is unique.
Lemma 4.20. Given a Curtis-Tits amalgam over Fq with connected 3-spherical diagram Γ, there is a spanning tree Σ such that the set of edges EΓ − EΣ = {{i_s, j_s} : s = 1, 2, . . . , r} has the property that
(1) (G_{{i_s,j_s}}, g_{i_s,j_s}(G_{i_s}), g_{j_s,i_s}(G_{j_s})) has type A2(q^{e_s}), where e_s is some power of 2;
(2) there is a loop Λ_s containing {i_s, j_s} such that any vertex group of Λ_s is isomorphic to SL2(q^{e_s 2^l}) for some l ≥ 0.
Proof Induction on the rank r of H_1(Γ, Z). If r = 0, then there is no loop at all and we are done.
Consider the collection of all edges {i, j} of Γ such that Γ_{{i,j}} has type A2 and H_1(Γ − {i, j}, Z) has rank r − 1, and choose one such that Gi ≅ SL2(q^{e_1}) where e_1 is minimal among all these edges. Next replace Γ by Γ − {i, j} and use induction. Suppose {{i_s, j_s} | s = 1, 2, . . . , r} is the resulting selection of edges, so that Σ = Γ − {{i_s, j_s} | s = 1, 2, . . . , r} is a spanning tree and condition (1) is satisfied. Note that by choice of these edges, condition (2) is also satisfied by at least one of the loops of Γ − {{i_t, j_t} : t = 1, 2, . . . , s − 1} that contains {i_s, j_s}. Note that this uses the fact that by 3-sphericity every vertex belongs to at least one subdiagram of type A2.
Definition 4.21. Fix a connected 3-spherical diagram Γ and a prime power q. Let Σ be a spanning tree and let the set of edges EΓ − EΣ = {{i_s, j_s} : s = 1, 2, . . . , r} together with the integers {e_s : s = 1, 2, . . . , r} satisfy the conclusions of Lemma 4.20. Let CT(Γ, q) be the collection of isomorphism classes of Curtis-Tits amalgams of type Γ(q) and let G = {Gi, Gi,j, gi,j | i, j ∈ I} be the standard Curtis-Tits amalgam over Fq with diagram Γ as in Subsection 4.4. Consider the following map:
κ: ∏_{s=1}^{r} Aut(F_{q^{e_s}}) × ⟨τ⟩ → CT(Γ, q),
where κ((α_s)_{s=1}^{r}) is the isomorphism class of the amalgam G+ = G((α_s)_{s=1}^{r}) given by setting g+_{j_s,i_s} = g_{j_s,i_s} ∘ α_s for all s = 1, 2, . . . , r.
We now have
Corollary 4.22. The map κ is onto.
Proof Note that, for each s = 1, 2, . . . , r, the Curtis-Tits standard pair
(G{is ,js } , gis ,js (Gis ), gis ,js (Gjs )) has type A2 (q es ) and so Cjs = Aut(Fqes ).
Thus the claim is an immediate consequence of Proposition 4.16.
We note that if we select Σ differently, the map κ will still be onto. However,
the “minimal” choice made in Lemma 4.20 ensures that κ is injective as well,
as we will see.
Lemma 4.23. Suppose Γ(q) is a 3-spherical diagram Γ that is a simple loop.
Then, κ is injective.
Proof Suppose there is an isomorphism φ: κ(α) = G → G+ = κ(β), for some α, β ∈ Aut(Fq) × ⟨τ⟩. Write I = {0, 1, . . . , n − 1} so that {i, i + 1} ∈ EΓ for all i ∈ I (subscripts modulo n). Without loss of generality assume that (i_1, j_1) = (1, 0), so that by Proposition 4.16 we may assume that the connecting maps of both amalgams coincide with the standard map gi,j for all (i, j) ≠ (1, 0). This means that a: Ci,i+1 → Ci × Ci+1 sends φi,i+1 to (φi, φi+1) for any edge {i, i + 1} ≠ {0, 1}. Now note that by minimality of q, Ci (and Ci,i+1) has a quotient C̄i (and C̄i,i+1) isomorphic to Aut(Fq) × ⟨τ⟩ for every i ∈ I, obtained by considering the action of Ci on the subgroup of Gi isomorphic to SL2(q). By Parts 4 and 5 of Lemma 4.13 the maps a_{i+1,i}^{-1} and a_{i,i+1} induce isomorphisms C̄i → C̄i,i+1 and C̄i,i+1 → C̄i+1, which compose to an isomorphism
φi ↦ g_{i+1,i}^{-1} ∘ g_{i,i+1} ∘ φi ∘ g_{i,i+1}^{-1} ∘ g_{i+1,i},
sending the image of τ and α in C̄i to the image of τ (and α, respectively) in C̄i+1, where α: x ↦ x^p for x in the appropriate extension of Fq defining Gi,i+1. Concatenating these maps along the path {1, 2, . . . , n − 1, 0} and considering the edge {0, 1}, we see that the images of β^{-1}φ_1α and φ_1 in C̄_1 coincide. Since C̄_1 is abelian, this means that β = α.
Theorem 4.24. Let Γ be a connected 3-spherical diagram with spanning tree Σ and set of edges EΓ − EΣ = {{i_s, j_s} : s = 1, 2, . . . , r} together with the integers {e_s : s = 1, 2, . . . , r} satisfying the conclusions of Lemma 4.20. Then κ is a bijection between the elements of ∏_{s=1}^{r} Aut(F_{q^{e_s}}) × ⟨τ⟩ and the type-preserving isomorphism classes of Curtis-Tits amalgams with diagram Γ over Fq.
Proof Again, it suffices to show that κ is injective. This in turn follows from Lemma 4.23, for if two amalgams are isomorphic (via a type-preserving isomorphism), then the amalgams induced on subgraphs of Γ must be isomorphic, and Lemma 4.23 shows that κ is injective on the subamalgams supported by the loops Λ_s (s = 1, 2, . . . , r).
5. Classification of Phan amalgams
5.1. Introduction
The classification problem is formulated as follows: Determine, up to isomorphism of amalgams, all Phan amalgams G with given diagram Γ possessing a
non-trivial (universal) completion.
5.2. Classification of Phan amalgams with 3-spherical diagram
5.2.1. Tori in Phan standard pairs. Let G = {Gi,j , Gi , gi,j | i, j ∈ I} be a
Phan amalgam over Fq with 3-spherical diagram Γ = (I, E). This means that
the subdiagram of Γ induced on any set of three vertices is spherical. This is
equivalent to Γ not containing triangles of any kind and such that no vertex
is on more than one C2 -edge.
Definition 5.1. For any i, j ∈ I with {i, j} ∈ EΓ, let
Dji = N_{Gi,j}(gj,i(Gj)) ∩ gi,j(Gi).
Lemma 5.2. Suppose that (G, G1, G2) is a Phan standard pair of type Γ(q) as in Subsection 2.3.
(1) If Γ(q) = A2(q), then ⟨D21, D12⟩ is the standard torus stabilizing the orthonormal basis {e1, e2, e3}. Here D21 (resp. D12) is the stabilizer in this torus of e1 (resp. e3).
(2) If Γ(q) = C2(q), then ⟨D21, D12⟩ is the standard torus stabilizing the basis {e1, e2, e3 = f1, e4 = f2}, which is hyperbolic for the symplectic form of Sp4(q^2) and orthonormal for the unitary form of SU4(q). Here D12 (resp. D21) is the stabilizer of ⟨e2⟩ and ⟨f2⟩ (resp. the pointwise stabilizer of both ⟨e1, f1⟩ and ⟨e2, f2⟩). Thus,
D12 = ⟨diag(1, a, 1, a^σ) : a ∈ Fq2 with aa^σ = a^{q+1} = 1⟩,
D21 = ⟨diag(a, a^σ, a^σ, a) : a ∈ Fq2 with aa^σ = a^{q+1} = 1⟩.
(3) In either case, for {i, j} = {1, 2}, Dji = C_{Gi,j}(Dij) ∩ Gi and Dij is the unique torus of Gj normalized by Dji.
Proof Parts 1. and 2. as well as the first claim of Part 3. are straightforward matrix calculations. As for the last claim, note that in both cases Dji acts diagonally on Gj, viewed as SU2(q) in its natural representation V via the standard identification map; in fact, in the case C2, D12 even acts innerly on G1. If Dji normalizes a torus D′ in Gj then it will have to stabilize its eigenspaces. Since q + 1 ≥ 3, the eigenspaces of Dji in its action on V have dimension 1, so Dji and D′ must share these eigenspaces. This means that D′ = Dij.
5.2.2. Property (D) for Phan amalgams. We state Property (D) for 3-spherical Phan amalgams, extending the definition from [4], which was given for Curtis-Tits amalgams under the assumption that q ≥ 4.
Definition 5.3. (property (D)) We say that G has property (D) if there is a
system of tori D = {Di : i ∈ I} such that for all edges {i, j} ∈ E Γ we have
gi,j (Di ) = Dji .
Lemma 5.4. Suppose that G has a completion (G, γ) so that γi is non-trivial
for all i ∈ I. Then, for any i, j, k ∈ I such that {i, j}, {j, k} ∈ E Γ, there is a
torus Dj ≤ Gj such that gj,i (Dj ) = Dij and gj,k (Dj ) = Dkj . In particular, G
has property (D).
Proof First note that in case q = 2, the conclusion of the lemma is trivially true as, for all i ∈ I, Gi ≅ S3 has a unique Phan torus.
We now consider the general case. For Γ(q) = A3 (q), this was proved by
Bennett and Shpectorov in [1] (see also [4]). For completeness we recall the
argument, which applies in this more general case as well. We shall prove that
γ(Dij ) = γ(Dkj )
and then let Dj ≤ Gj be such that γ(Dj ) = γ(Dij ) = γ(Dkj ). Note that since
γj is non-trivial, it now follows that gj,i (Dj ) = Dij and gj,k (Dj ) = Dkj .
Recall that for any subgroup H of a group in G we’ll write H̄ = γ(H). We show that D̄ji is normalized by D̄kj and use Lemma 5.2 to conclude that D̄ji = D̄jk. To that end we let h ∈ D̄kj and prove that hD̄ji h^{-1} = D̄ji. To achieve this we show that hD̄ji h^{-1} is normalized by D̄ij and again use Lemma 5.2. So now let g ∈ D̄ij and note that, since Γ is 3-spherical, {i, k} ∉ EΓ so that g and h commute. In addition note that by Lemma 5.2, gD̄ji g^{-1} = D̄ji. Therefore we have
g h D̄ji h^{-1} g^{-1} = h g D̄ji g^{-1} h^{-1} = h D̄ji h^{-1},
as required.
5.2.3. The coefficient system of a Phan amalgam.
Definition 5.5. We now fix a standard Phan amalgam G = {Gi , Gi,j , gi,j |
i, j ∈ I} over Fq with diagram Γ(q), where for every i, j ∈ I, gi,j is the standard
identification map of Definition 2.6. Then, G has property (D) with system of
tori D = {Di : i ∈ I} as in Lemma 5.2.
If Ḡ is any other non-collapsing Phan amalgam over Fq with diagram Γ, then since all tori of Gi are conjugate under Aut(Gi), by adjusting the inclusion maps ḡi,j we can replace Ḡ by an isomorphic amalgam whose system of tori is exactly D.
From now on we assume that G, D = {Di : i ∈ I} and Ḡ are as in Definition 5.5.
Definition 5.6. Suppose that G = {Gi , Gi,j , gi,j | i, j ∈ I} is a Phan amalgam
with connected 3-spherical diagram Γ having property (D). Let D = {Di : i ∈
I} be the associated system of tori. The coefficient system associated to G is
the collection A = {Ai , Ai,j , ai,j | i, j ∈ I} where, for any i, j ∈ I we set
Ai = NAut(Gi ) (Di ),
Ai,j = NAut(Gi,j ) (gi,j (Gi )) ∩ NAut(Gi,j ) (gj,i (Gj )),
ai,j: Ai,j → Aj is given by restriction: ϕ ↦ g_{j,i}^{-1} ∘ ρi,j(ϕ) ∘ gj,i,
where ρi,j(ϕ) is the restriction of ϕ to Gj ≤ Gi,j.
From now on we let A be the coefficient system associated to G with respect
to the system of tori D. The fact that the ai,j are well-defined follows from
the following simple observation.
Lemma 5.7. For any i, j ∈ I with {i, j} ∈ E Γ, we have
Ai,j ≤ NAut(Gi,j ) (gi,j (Di )) ∩ NAut(Gi,j ) (gj,i (Dj )).
Proof The inclusion ≤ is immediate from the definitions.
The significance for the classification of Phan amalgams with the same system of tori is as follows:
Proposition 5.8. Suppose that Ḡ and Ḡ+ are Phan amalgams of type G with the same system of tori D = {Di : i ∈ I}.
(1) For all i, j ∈ I, we have ḡi,j = gi,j ∘ δi,j and ḡ+_{i,j} = gi,j ∘ δ+_{i,j} for some δi,j, δ+_{i,j} ∈ Ai;
(2) for any isomorphism φ: Ḡ → Ḡ+ and i, j ∈ I, we have φi ∈ Ai, φ{i,j} ∈ Ai,j, and ai,j(φ{i,j}) = δ+_{i,j} ∘ φi ∘ δ_{i,j}^{-1}.
Proof Part 1. follows since, for any i, j ∈ I, we have g_{i,j}^{-1} ∘ ḡi,j ∈ Aut(Gi) and ḡi,j(Di) = gi,j(Di).
Part 2. follows from Lemma 5.7 since, for any i, j ∈ I,
(Gi,j, gi,j(Gi), gj,i(Gj)) = (Gi,j, ḡi,j(Gi), ḡj,i(Gj)) = (Gi,j, ḡ+_{i,j}(Gi), ḡ+_{j,i}(Gj)).
We now determine the groups appearing in a coefficient system by looking
at standard pairs.
Lemma 5.9. Fix i ∈ I and let q be such that Gi ≅ SU2(q). Then,
Ai = Ti ⋊ Ci,
where Ti is the subgroup of diagonal automorphisms in PGU2(q) and Ci = Aut(Fq2).
Proof This follows from the fact that via the standard embedding map gi,j the groups Di of the system of tori are the subgroups of standard diagonal matrices in SU2(q).
To see this note that Gi ≅ SU2(q) and that Aut(Gi) ≅ PGU2(q) ⋊ Aut(Fq2). Also, Di = ⟨d⟩ for some d = diag(ζ, ζ^q) with ζ a primitive (q + 1)-th root of 1 in Fq2. A quick calculation now shows that τ and σ are the same in their action, which is inner, and one verifies that N_{GU2(q)}(Di) = ⟨τ, diag(a, b) : a, b ∈ Fq2⟩.
Lemma 5.10. Let A be the coefficient system associated to the standard Phan amalgam G of type Γ(q) and the system of tori D.
If Γ = A1 × A1, we have Gi,j = Gi × Gj, gi,j and gj,i are identity maps, and
(5.1) Ai,j = Ai × Aj ≅ Ti,j ⋊ Ci,j,
where Ti,j = Ti × Tj and Ci,j = Ci × Cj. Otherwise,
Ai,j = Ti,j ⋊ Ci,j,
where Ci,j = Aut(Fq2) and Ti,j denotes the image of the standard torus T in Aut(Gi,j). Note that T is as follows:
⟨diag(a, b, c) : a, b, c ∈ Fq2 with aa^σ = bb^σ = cc^σ = 1⟩ if Γ = A2,
⟨diag(c^σ b, ab, c, a^σ) : a, b, c ∈ Fq2 with aa^σ = bb^σ = cc^σ = 1⟩ if Γ = C2.
Remark 5.11. (1) In case Γ = C2, G ≅ Sp4(q) is realized as Sp4(q^2) ∩ SU4(q) with respect to a basis that is hyperbolic for the symplectic form and orthonormal for the unitary form, and Aut(Fq2) acts entrywise on these matrices. Moreover, τ acts as transpose-inverse on these matrices.
(2) In all cases τ coincides with σ.
Proof The A1 × A1 case is self-evident. Now consider the case Γ = A2. As in the proof of Lemma 5.9, Aut(Fq2) ≤ N_{Aut(Gi,j)}(Gi) ∩ N_{Aut(Gi,j)}(Gj), A^τ = ᵗA^{-1} = A^σ, and Aut(Gi,j) ≅ PGU3(q) ⋊ Aut(Fq2), so it suffices to consider linear automorphisms. As before this is an uncomplicated calculation.
Now consider the case Γ = C2. Writing ΓL(V) ≅ GL4(q^2) ⋊ Aut(Fq2) with respect to the basis E = {e1, e2, e3 = f1, e4 = f2}, which is hyperbolic for the symplectic form of Sp4(q^2) and orthonormal for the unitary form of SU4(q), we have Gi,j = Sp4(q^2) ∩ SU4(q).
There is an isomorphism Φ : Gi,j → Sp4 (q) as in [21]. Abstractly, we have
Aut(Sp4 (q)) = GSp4 (q) ⋊ Aut(Fq ) (with respect to a suitable basis E for V ).
Since the embedding of Sp4 (q) into Sp4 (q 2 ) is non-standard, we are reconstructing the automorphism group here.
We first note that changing bases just replaces Aut(Fq2) with a different complement to the linear automorphism group. As for linear automorphisms, we claim that
GSp4(q^2) ∩ GU4(q) = GSp4(q)
(viewing the latter as a matrix group w.r.t. E). Clearly, up to a center, we have Gi,j ≤ GSp4(q^2) ∩ GU4(q) ≤ GSp4(q), and we note that GSp4(q)/Sp4(q) ≅ F_q^*/(F_q^*)^2. Thus for q even, the claim follows. For q odd, let F_{q^2}^* = ⟨ζ⟩ and define β = diag(ζ^{q−1}, ζ^{q−1}, 1, 1). Then β ∈ GSp4(q^2) ∩ GU4(q) acts on Gi,j as diag(ζ^q, ζ^q, ζ, ζ), which scales the symplectic form of Sp4(q^2) by ζ^{q+1}. By [21] the form of Sp4(q) is proportional, and since ζ^{q+1} is a non-square in Fq, β is a linear outer automorphism of Sp4(q). Thus, GSp4(q) = ⟨Sp4(q), β⟩ and the claim follows.
We now determine Ai,j . First we note that β, as well as the group Aut(Fq2 )
with respect to the basis E, clearly normalize Gi and Gj hence by Lemma 5.7,
Aut(Fq2 ) ≤ Ai,j . So it suffices to determine inner automorphisms of Sp4 (q)
normalizing Dji and Dij .
Any inner automorphism in Sp4 (q) is induced by an inner automorphism
of Sp4 (q 2 ). So now the claim reduces to a matrix calculation in the group
Sp4 (q 2 ).
Next we describe the restriction maps ai,j for Phan amalgams made up of
a single standard pair with trivial inclusion maps.
Lemma 5.12. Let A be the coefficient system of the standard Phan amalgam G over Fq with diagram Γ and system of tori D. Fix i, j ∈ I and
let (Gi,j , Gi , Gj ) be a Phan standard pair in G with diagram Γi,j . Denote
a = (aj,i , ai,j ) : Ai,j → Ai × Aj . Then, we have the following:
(1) If Γi,j = A1 × A1, then a is an isomorphism inducing Ti,j ≅ Ti × Tj and Ci,j ≅ Ci × Cj.
(2) If Γi,j = A2 or C2 , then a induces an isomorphism Ti,j → Ti × Tj .
(3) If Γ(q) = A2 (q) or Γ(q) = C2 (q), then a : Ci,j → Ci × Cj is given by
α 7→ (α, α) (for α ∈ Aut(Fq2 )) which is a diagonal embedding.
Proof 1. This is immediate from Lemma 5.10.
For the remaining cases, recall that for any ϕ ∈ Ai,j, we have ai,j: ϕ ↦ g_{j,i}^{-1} ∘ ρi,j(ϕ) ∘ gj,i, where ρi,j(ϕ) is the restriction of ϕ to Gj ≤ Gi,j (Definition 5.6) and gi,j is the standard identification map of Definition 2.6. Note
that the standard identification map transforms the automorphism ρj,i (ϕ) of
Gi essentially to the “same” automorphism ϕ of Gi .
First let Γ(q) = A2(q). The map a is well-defined. On T, it is induced by the homomorphism
diag(ac, c, ec) ↦ (diag(1, e), diag(a, 1)),
where a, c, e ∈ Fq2 are such that aa^σ = cc^σ = ee^σ = 1. Note that the kernel is Z(T), so that a is injective. The map is obviously surjective, so we are done. Thus if we factor a by Ti,j and Ti × Tj, we get
(5.2) Ci,j ↪ Ci × Cj,
which is a diagonal embedding given by α^r ↦ (α^r, α^r), where r ∈ N and α: x ↦ x^p for x ∈ Fq2.
Next let Γ(q) = C2 (q). We can rewrite the elements of T as a diagonal
matrix diag(xyz, xz, zy −1 , z), by taking z = aσ , y = (ac)−1 , x = a2 b. On T
the map a is induced by the homomorphism
diag(xyz, xz, y −1 z, z) 7→ (diag(y, 1), diag(x, 1))
with kernel {diag(z, z, z, z) : z ∈ Fq2 with zz σ = 1} = Z(GU4 (q)). Clearly
a : Ti,j → Ti × Tj is an isomorphism. Taking the quotient over these groups,
a induces a diagonal embedding as in (5.2), where we now interpret it in the
C2 (q) setting.
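The substitution used here can be verified term by term; a short check (an editorial addition), using aa^σ = cc^σ = 1 so that a^{-1} = a^σ and c^{-1} = c^σ:

```latex
% z = a^{\sigma},\quad y = (ac)^{-1},\quad x = a^{2}b
\begin{aligned}
xz      &= a^{2}b\,a^{\sigma} = ab\,(aa^{\sigma}) = ab,\\
zy^{-1} &= a^{\sigma}\,ac = c\,(aa^{\sigma}) = c,\\
xyz     &= (xz)\,y = ab\,(ac)^{-1} = b\,c^{-1} = c^{\sigma}b,
\end{aligned}
```

so that diag(xyz, xz, zy^{-1}, z) = diag(c^σ b, ab, c, a^σ) is indeed the general element of T from Lemma 5.10.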
5.2.4. A standard form for Phan amalgams. Suppose that G = {Gi, Gi,j, gi,j | i, j ∈ I} is a Phan amalgam over Fq with 3-spherical diagram Γ. Without loss of generality we will assume that all inclusion maps gi,j are the standard identification maps of Definition 2.6. By Lemma 5.4, G has Property (D) and possesses a system D = {Di : i ∈ I} of tori which, as noted in Definition 5.5, via the standard embeddings gi,j can be identified with those given in Lemma 5.2.
We wish to classify all Phan amalgams Ḡ = {Gi, Gi,j, ḡi,j | i, j ∈ I} over Fq with the same diagram as G. As noted in Definition 5.5, we may assume that all such amalgams share D. Let A = {Ai, Ai,j, ai,j | i, j ∈ I} be the coefficient system of G associated to D. By Proposition 5.8, we may restrict to those amalgams whose connecting maps are of the form ḡi,j = gi,j ∘ δi,j for δi,j ∈ Ai for all i ∈ I.
Definition 5.13. The trivial support of Ḡ (with respect to G) is the set {(i, j) ∈ I × I | ḡi,j = gi,j} (that is, δi,j = id_{Gi} in the notation of Proposition 5.8). The word “trivial” derives from the assumption that the gi,j’s are the standard identification maps of Definition 2.6.
Fix some spanning tree Σ ⊆ Γ and suppose that EΓ − EΣ = {{i_s, j_s} : s = 1, 2, . . . , r}, so that H_1(Γ, Z) ≅ Z^r. We now have
Proposition 5.14. There is a Phan amalgam Ḡ(Σ) with the same diagram as Ḡ and the same D, which is isomorphic to Ḡ and has the following properties:
(1) Ḡ(Σ) has trivial support S = {(i, j) ∈ I × I | {i, j} ∈ EΣ} ∪ {(i_s, j_s) : s = 1, 2, . . . , r};
(2) for each s = 1, 2, . . . , r, we have ḡ_{j_s,i_s} = g_{j_s,i_s} ∘ γ_{j_s,i_s}, where γ_{j_s,i_s} ∈ C_{j_s}.
Lemma 5.15. There is a Phan amalgam Ḡ+ over Fq with the same diagram as Ḡ and the same D, which is isomorphic to Ḡ and has the following property: for any u, v ∈ I, if ḡu,v = gu,v ∘ γu,v ∘ du,v for some γu,v ∈ Cu and du,v ∈ Tu, then ḡ+_{u,v} = gu,v ∘ γu,v.
Proof The proof follows the same steps as that of Lemma 4.17 using Part 2
of Lemma 5.12 instead of Lemma 4.13 Part 2.
By Lemma 5.15, in order to prove Proposition 5.14 we may now assume that ḡu,v = gu,v ∘ γu,v for some γu,v ∈ Cu for all u, v ∈ I.
We now prove a Corollary for Phan amalgams analogous to, but stronger
than Corollary 4.18. To this end consider the situation of Figure 1 interpreted
in the Phan setting.
Corollary 5.16. With the notation introduced in Figure 1, fix the maps γi,j, γ+_{i,j}, φi ∈ Ci as well as γj,i ∈ Cj. Then for any one of γ+_{j,i}, φj ∈ Cj, there exists a unique choice γ ∈ Cj for the remaining map so that there exists φi,j making the diagram in Figure 1 commute.
Proof This follows immediately from the fact that the maps aj,i : Ci,j → Ci
and ai,j : Ci,j → Cj in part 3. of Lemma 5.12 are isomorphisms.
Proof (of Proposition 5.14) The proof follows the same steps as that of Proposition 4.16, replacing Lemma 4.17 and Corollary 4.18 by Lemma 5.15 and
Corollary 5.16.
5.2.5. Classification of Phan amalgams with 3-spherical diagram. In the case where Ḡ is a Phan amalgam over Fq whose diagram is a 3-spherical tree, Proposition 5.14 says that Ḡ ≅ G.
Theorem 5.17. Suppose that G is a Phan amalgam with a diagram that is a 3-spherical tree. Then, G is unique up to isomorphism. In particular any Phan amalgam with spherical diagram is unique.
Definition 5.18. Fix a connected 3-spherical diagram Γ and a prime power q. Let Σ be a spanning tree and let the set of edges EΓ − EΣ = {{i_s, j_s} : s = 1, 2, . . . , r} together with the integers {e_s : s = 1, 2, . . . , r} satisfy the conclusions of Lemma 4.20. Note that since in the Phan case we do not have subdiagrams of type 2A3(q), we have e_s = 1 for all s ∈ {1, 2, . . . , r}.
Let Ph(Γ, q) be the collection of isomorphism classes of Phan amalgams of type Γ(q) and let G = {G_i, G_{i,j}, g_{i,j} | i, j ∈ I} be a Phan amalgam over F_q with diagram Γ.
Consider the following map:

κ : ∏_{s=1}^{r} Aut(F_{q^2}) → Ph(Γ),

where κ((α_s)_{s=1}^{r}) is the isomorphism class of the amalgam G^+ = G((α_s)_{s=1}^{r}) given by setting g^+_{j_s,i_s} = g_{j_s,i_s} ◦ α_s for all s = 1, 2, . . . , r.
As for Curtis-Tits amalgams, one shows the following.
Corollary 5.19. The map κ is onto.
Lemma 5.20. Suppose Γ(q) has a 3-spherical diagram Γ that is a simple loop. Then κ is injective.
Proof The proof is identical to that of Lemma 4.23, replacing Proposition 4.16 by Proposition 5.14 and Lemma 4.13 by Lemma 5.12, and noting that in the Phan case we can consider the groups C_i and C_{i,j} themselves rather than some suitably chosen quotient.
Theorem 5.21. Let Γ be a connected 3-spherical diagram with spanning tree Σ and set of edges EΓ − EΣ = {{i_s, j_s} : s = 1, 2, . . . , r}. Then κ is a bijection between the elements of ∏_{s=1}^{r} Aut(F_{q^2}) and the isomorphism classes of Phan amalgams with diagram Γ over F_q.
Proof This follows from Lemma 5.20 just as Theorem 4.24 follows from
Lemma 4.23.
Department of Mathematics and Statistics, Bowling Green State University, Bowling Green, OH 43403, U.S.A.
Current address: School of Mathematics, University of Birmingham, Edgbaston, B15
2TT, U.K.
E-mail address: [email protected]
School of Mathematics, University of Birmingham, Edgbaston, B15 2TT,
U.K.
E-mail address: [email protected]
School of Mathematics, University of Birmingham, Edgbaston, B15 2TT,
U.K.
E-mail address: [email protected]
Tilt Assembly: Algorithms for Micro-Factories That Build Objects
with Uniform External Forces
Aaron T. Becker∗1 , Sándor P. Fekete2 , Phillip Keldenich2 , Dominik Krupke2 , Christian
Rieck2 , Christian Scheffer2 , and Arne Schmidt2
arXiv:1709.06299v1 [] 19 Sep 2017
1 Department of Electrical and Computer Engineering, University of Houston, USA. [email protected]
2 Department of Computer Science, TU Braunschweig, Germany. {s.fekete, p.keldenich, d.krupke, c.rieck, c.scheffer, arne.schmidt}@tu-bs.de
Abstract
We present algorithmic results for the parallel assembly of many micro-scale objects in two and three
dimensions from tiny particles, which has been proposed in the context of programmable matter and
self-assembly for building high-yield micro-factories. The underlying model has particles moving under
the influence of uniform external forces until they hit an obstacle; particles can bond when being forced
together with another appropriate particle.
Due to the physical and geometric constraints, not all shapes can be built in this manner; this gives rise
to the Tilt Assembly Problem (TAP) of deciding constructibility. For simply-connected polyominoes P
in 2D consisting of N unit-squares (“tiles”), we prove that TAP can be decided in O(N log N ) time. For
the optimization variant MaxTAP (in which the objective is to construct a subshape of maximum possible
size), we show polyAPX-hardness: unless P=NP, MaxTAP cannot be approximated within a factor of
Ω(N^{1/3}); for tree-shaped structures, we give an O(N^{1/2})-approximation algorithm. For the efficiency of the
assembly process itself, we show that any constructible shape allows pipelined assembly, which produces
copies of P in O(1) amortized time, i.e., N copies of P in O(N ) time steps. These considerations can
be extended to three-dimensional objects: For the class of polycubes P we prove that it is NP-hard to
decide whether it is possible to construct a path between two points of P ; it is also NP-hard to decide
constructibility of a polycube P . Moreover, it is expAPX-hard to maximize a path from a given start
point.
1 Introduction
In recent years, progress on flexible construction at micro- and nano-scale has given rise to a large set of
challenges that deal with algorithmic aspects of programmable matter. Examples of cutting-edge application
areas with a strong algorithmic flavor include self-assembling systems, in which chemical and biological substances such as DNA are designed to form predetermined shapes or carry out massively parallel computations;
and swarm robotics, in which complex tasks are achieved through the local interactions of robots with highly
limited individual capabilities, including micro- and nano-robots.
Moving individual particles to their appropriate attachment locations when assembling a shape is difficult
because the small size of the particles limits the amount of onboard energy and computation. One successful
approach to dealing with this challenge is to use molecular diffusion in combination with cleverly designed sets
of possible connections: in DNA tile self-assembly, the particles are equipped with sophisticated bonds that
∗ Work from this author was partially supported by National Science Foundation grants IIS-1553063 and IIS-1619278.
ensure that only a predesigned shape is produced when mixing together a set of tiles, see [18]. The resulting
study of algorithmic tile self-assembly has given rise to an extremely powerful framework and produced a
wide range of impressive results. However, the required properties of the building material (which must be
specifically designed and finely tuned for each particular shape) in combination with the construction process
(which is left to chemical reactions, so it cannot be controlled or stopped until it has run its course) make
DNA self-assembly unsuitable for some applications.
An alternative method for controlling the eventual position of particles is to apply a uniform external
force, causing all particles to move in a given direction until they hit an obstacle or another blocked particle.
As two of us (Becker and Fekete, [1]) have shown in the past, combining this approach with custom-made
obstacles (instead of custom-made particles) allows complex rearrangements of particles, even in grid-like
environments with axis-parallel motion. The appeal of this approach is that it shifts the design complexity
from the building material (the tiles) to the machinery (the environment). As recent practical work by
Manzoor et al. [15] shows, it is possible to apply this to simple “sticky” particles that can be forced to bond,
see Fig. 1: the overall assembly is achieved by adding particles one at a time, attaching them to the existing
sub-assembly.
Figure 1: A practical demonstration of Tilt Assembly based on alginate (i.e., a gel made by combining a powder derived from seaweed with water) particles [15]. (a) Alginate particles in initial positions. (b) After control moves of ⟨e, s, w, n, e, s⟩ (for east, south, west, north), the alginate microrobots move to the shown positions. (c) After ⟨w, n⟩ inputs, the system produces the first multi-microrobot polyomino. (d) The next three microrobot polyominoes are produced after applying multiple ⟨e, s, w, n⟩ cycles. (e) After the alginate microrobots have moved through the microfluidic factory layout, the final 4-particle polyomino is generated.
Moreover, pipelining this process may result in efficient rates of production, see Fig. 2 [15].
One critical issue of this approach is the requirement of getting particles to their destination without
being blocked by or bonding to other particles. As Fig. 3 shows, this is not always possible, so there are some
shapes that cannot be constructed by Tilt Assembly.
This gives rise to a variety of algorithmic questions: (1) Can we decide efficiently whether a given
polyomino can be constructed by Tilt Assembly? (2) Can the resulting process be pipelined to yield low
Figure 2: (Top left) Initial setup of a seven-tile polyomino assembly; the composed shape is shown enlarged on the lower left. The bipartite decomposition into blue and red particles is shown for greater clarity, but can also be used for better control of bonds. The sequence of control moves is ⟨e, s, w, n⟩, i.e., a clockwise order. (Bottom left) The situation after 18 control moves. (Right) The situation after 7 full cycles, i.e., after 28 control moves; shown are three parallel “factories”.
Figure 3: A polyomino (black) that cannot be constructed by Tilt Assembly: the last tile cannot be attached,
as it gets blocked by previously attached tiles.
amortized building time? (3) Can we compute a maximum-size subpolyomino that can be constructed? (4)
What can be said about three-dimensional versions of the problem?
1.1 Our Contribution
We present the results shown in Table 1.
Dimension     | Decision            | Maximization                      | Approximation       | Constructible Path
2D (simple)   | O(N log N) (Sec. 3) | polyAPX-hard, Ω(N^{1/3}) (Sec. 4) | O(N^{1/2}) (Sec. 4) | O(N log N) (Sec. 4)
3D (general)  | NP-hard (Sec. 5)    | polyAPX-hard, Ω(N^{1/3}) (Sec. 4) | -                   | NP-hard (Sec. 5)
Table 1: Results for Tilt Assembly Problem (TAP) and its maximization variant (MaxTAP)
1.2 Related Work
Assembling polyominoes with tiles has been considered intensively in the context of tile self-assembly. In
1998, Erik Winfree [18] introduced the abstract tile self-assembly model (aTAM), in which tiles have glue
types on each of the four sides and two tiles can stick together if their glue type matches and the bonding
strength is sufficient. Starting with a seed tile, tiles will continue to attach to the existing partial assembly
until they form a desired polyomino; the process stops when no further attachments are possible. Apart
from the aTAM, there are various other models like the two-handed tile self-assembly model (2HAM) [8] and
the hierarchical tile self-assembly model [9], in which we have no single seed but pairs of subassemblies that
can attach to each other. Furthermore, the staged self-assembly model [10, 11] allows greater efficiency by
assembling polyominoes in multiple bins which are gradually combined with the content of other bins.
All this differs from the model in Tilt Assembly, in which each tile has the same glue type on all four
sides, and tiles are added to the assembly one at a time by attaching them from the outside along a straight
line. This approach of externally movable tiles has actually been considered in practice at the microscale
level using biological cells and an MRI, see [12], [13], [4]. Becker et al. [5] consider this for the assembly of
a magnetic Gauß gun, which can be used for applying strong local forces by very weak triggers, allowing
applications such as micro-surgery.
Using an external force for moving the robots becomes inevitable at some scale because the energy capacity
decreases faster than the energy demand. A consequence is that all non-fixed robots/particles perform the
same movement, so all particles move in the same direction of the external force until they hit an obstacle or
another particle. These obstacles allow shaping the particle swarm. Designing appropriate sets of obstacles
and moves gives rise to a range of algorithmic problems. Deciding whether a given initial configuration of
particles in a given environment can be transformed into a desired target configuration is NP-hard [1], even in
a grid-like setting, whereas finding an optimal control sequence is shown to be PSPACE-complete by Becker
et al. [2]. However, if it is allowed to design the obstacles in the first place, the problems become much more
tractable [1]. Moreover, even complex computations become possible: If we allow additional particles of
double size (i.e., two adjacent fields), full computational complexity is achieved, see Shad et al. [16]. Further
related work includes gathering a particle swarm at a single position [14] and using swarms of very simple
robots (such as Kilobots) for moving objects [6]. For the case in which human controllers have to move
objects by such a swarm, Becker et al. [3] study different control options. The results are used by Shahrokhi
and Becker [17] to investigate an automatic controller.
Most recent and most closely related to our paper is the work by Manzoor et al. [15], who use global
control to assemble polyominoes in a pipelined fashion: after constructing the first polyomino, each cycle
of a small control sequence produces another polyomino. However, the algorithmic part is purely heuristic;
providing a thorough understanding of algorithms and complexity is the content of our paper.
2 Preliminaries
Polyomino: For a set P ⊂ Z2 of N grid points in the plane, the graph GP is the induced grid graph,
in which two vertices p1 , p2 ∈ P are connected if they are at unit distance. Any set P with connected grid graph GP gives rise to a polyomino by replacing each point p ∈ P by a unit square
centered at p, which is called a tile; for simplicity, we also use P to denote the polyomino when the
context is clear, and refer to GP as the dual graph of the polyomino; P is tree-shaped, if GP is a
tree. A polyomino is called hole-free or simple if and only if the grid graph induced by Z2 \P is connected.
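These definitions are straightforward to make executable. The following sketch (ours, not from the paper; all function names are our own) builds the 4-neighbor relation underlying the dual graph G_P and tests connectivity and hole-freeness; the latter flood-fills the complement inside a one-cell padding of the bounding box, which is connected exactly when Z²\P is.

```python
from collections import deque

def neighbors(p):
    # 4-neighbors of a grid point p = (x, y)
    x, y = p
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def is_connected(cells):
    # BFS over the induced grid graph G_P
    cells = set(cells)
    if not cells:
        return True
    start = next(iter(cells))
    seen, queue = {start}, deque([start])
    while queue:
        for q in neighbors(queue.popleft()):
            if q in cells and q not in seen:
                seen.add(q)
                queue.append(q)
    return len(seen) == len(cells)

def is_simple(cells):
    # P is hole-free iff Z^2 \ P is connected; flood-fill the complement
    # inside a padded bounding box starting from a corner of the padding.
    cells = set(cells)
    xs = [x for x, _ in cells]
    ys = [y for _, y in cells]
    lo = (min(xs) - 1, min(ys) - 1)
    hi = (max(xs) + 1, max(ys) + 1)
    seen, queue = {lo}, deque([lo])
    while queue:
        for q in neighbors(queue.popleft()):
            if (lo[0] <= q[0] <= hi[0] and lo[1] <= q[1] <= hi[1]
                    and q not in cells and q not in seen):
                seen.add(q)
                queue.append(q)
    box = (hi[0] - lo[0] + 1) * (hi[1] - lo[1] + 1)
    return len(seen) == box - len(cells)  # every complement cell reached?
```

A 3 × 3 square with its center removed is connected but not simple; an L-tromino is both.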
Blocking sets: For each point p ∈ Z2 we define blocking sets Np , Sp ⊆ P as the set of all points q ∈ P that
are above or below p and |px − qx | ≤ 1. Analogously, we define the blocking sets Ep , Wp ⊆ P as the set
of all points q ∈ P that are to the right or to the left of p and |py − qy | ≤ 1.
Construction step: A construction step is defined by a direction (north, east, south, west, abbreviated
by n, e, s, w) from which a tile is added and a latitude/longitude l describing a column or row. The
tile arrives from (l, ∞) for north, (∞, l) for east, (l, −∞) for south, and (−∞, l) for west into the
corresponding direction until it reaches the first grid position that is adjacent to one occupied by an
existing tile. If there is no such tile, the polyomino does not change. We note that a position p can be
added to a polyomino P if and only if there is a point q ∈ P with ||p − q||1 = 1 and one of the four
(a) Removing t destroys decomposability. The
polyomino can be decomposed by starting with
the three tiles above t.
(b) Removing the red convex tile leaves the polyomino non-decomposable; it can be decomposed
by starting from the bottom or the sides.
Figure 4: Two polyominoes and their convex tiles (white). (a) Removing non-convex tiles may destroy decomposability. (b) In case of non-simple polyominoes we may not be able to remove convex tiles.
blocking sets, Np , Ep , Sp or Wp , is empty. Otherwise, if none of these sets are empty, this position is
blocked.
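As a concrete illustration, the blocking sets and the attachment condition can be evaluated directly from the definition. This is a minimal quadratic-time sketch (our own naming, not the paper's code):

```python
def blocking_sets(p, P):
    # N_p, E_p, S_p, W_p as defined above: tiles of P that can block a
    # tile flying towards p from the respective direction.
    px, py = p
    N = {q for q in P if q[1] > py and abs(q[0] - px) <= 1}
    S = {q for q in P if q[1] < py and abs(q[0] - px) <= 1}
    E = {q for q in P if q[0] > px and abs(q[1] - py) <= 1}
    W = {q for q in P if q[0] < px and abs(q[1] - py) <= 1}
    return N, E, S, W

def can_attach(p, P):
    # p can be added iff it is adjacent to P and some blocking set is empty
    px, py = p
    adjacent = any(q in P for q in
                   [(px + 1, py), (px - 1, py), (px, py + 1), (px, py - 1)])
    return adjacent and any(not B for B in blocking_sets(p, P))
```

For a U-shaped polyomino, the position inside the notch is adjacent to a tile but all four blocking sets are nonempty, so `can_attach` reports it as blocked.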
Constructibility: Beginning with a seed tile at some position p, a polyomino P is constructible if and only
if there is a sequence σ = ((d1 , l1 ), (d2 , l2 ), . . . , (dN −1 , lN −1 )), such that the resulting polyomino P 0 ,
induced by successively adding tiles with σ, is equal to P . We allow the constructed polyomino P 0 to
be a translated copy of P . Reversing σ yields a decomposition sequence, i.e., a sequence of tiles getting
removed from P .
3 Constructibility of Simple Polyominoes
In this section we focus on hole-free (i.e., simple) polyominoes. We show that the problem of deciding whether
a given polyomino can be constructed can be solved in polynomial time. This decision problem can be defined
as follows.
Definition 1 (Tilt Assembly Problem). Given a polyomino P, the Tilt Assembly Problem (TAP) asks for a sequence of tiles constructing P, if P is constructible.
3.1 A Key Lemma
A simple observation is that construction and (restricted) decomposition are the same problem. This allows
us to give a more intuitive argument, as it is easier to argue that we do not lose connectivity when removing
tiles than it is to prove that we do not block future tiles.
Theorem 2. A polyomino P can be constructed if and only if it can be decomposed using a sequence of tile
removal steps that preserve connectivity. A construction sequence is a reversed decomposition sequence.
Proof. To prove this theorem, it suffices to consider a single step. Let P be a polyomino and t be a tile that
is removed from P into some direction l, leaving a polyomino P 0 . Conversely, adding t to P 0 from direction l
yields P , as there cannot be any tile that blocks t from reaching the correct position, or we would not be able
to remove t from P in direction l.
For hole-free polyominoes we can efficiently find a construction/decomposition sequence if one exists. The
key insight is that one can greedily remove convex tiles. A tile t is said to be convex if and only if there is a
2 × 2 square solely containing t; see Fig. 4. If a convex tile is not a cut tile, i.e., it is a tile whose removal
does not disconnect the polyomino, its removal does not interfere with the decomposability of the remaining
polyomino.
This conclusion is based on the observation that a minimal cut (i.e., a minimal set of vertices whose
removal leaves a disconnected polyomino) of cardinality two in a hole-free polyomino always consists of two
(possibly diagonally) adjacent tiles. Furthermore, we can always find such a removable convex tile in any
decomposable hole-free polyomino. This allows us to devise a simple greedy algorithm.
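The convexity test is purely local: t is convex iff one of the four 2 × 2 squares having t as a corner contains no other tile of P. A short sketch (ours, not the paper's code):

```python
def is_convex(t, P):
    # t is convex iff some 2x2 square contains t and no other tile of P
    x, y = t
    for dx in (-1, 0):
        for dy in (-1, 0):
            square = {(x + dx, y + dy), (x + dx + 1, y + dy),
                      (x + dx, y + dy + 1), (x + dx + 1, y + dy + 1)}
            if square & set(P) == {t}:
                return True
    return False
```

In a horizontal bar of three tiles, the two end tiles are convex while the middle one is not, since every 2 × 2 square containing it also contains a neighbor.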
We start by showing that if we find a non-blocked convex tile that is not a cut tile, we can simply remove it.
It is important to focus on convex tiles, as the removal of non-convex tiles can harm the decomposability: see
Fig. 4a for an illustration. In non-simple polyominoes, the removal of convex tiles can destroy decomposability,
as demonstrated in Fig. 4b.
Lemma 3. Consider a non-blocked non-cut convex tile t in a hole-free polyomino P . The polyomino P − t is
decomposable if and only if P is decomposable.
Proof. The first direction is trivial: if P − t is decomposable, P is decomposable as well, because we can
remove the non-blocked tile t first and afterwards use the existing decomposition sequence for P − t. The
other direction requires some case distinctions. Suppose for contradiction that P is decomposable but P − t
is not, i.e., t is important for the later decomposition.
Consider a valid decomposition sequence for P and the first tile t0 we cannot remove if we were to remove
t in the beginning. W.l.o.g., let t0 be the first tile in this sequence (removing all previous tiles obviously does
not destroy the decomposability). When we remove t first, we are missing a tile, hence t0 cannot be blocked
but has to be a cut tile in the remaining polyomino P − t. The presence of t preserves connectivity, i.e.,
{t, t0} is a minimal cut of P. Because P has no holes, t and t0 must be diagonal neighbors, sharing the
neighbors a and b. Furthermore, by definition neither of t and t0 is blocked in some direction. We make a
case distinction on the relation of these two directions.
The directions are orthogonal (Fig. 5a). Either a or b is a non-blocked convex tile, because t and t0
are both non-blocked; w.l.o.g., let this be a. It is easy to see that independent of removing t or t0 first,
after removing a we can also remove the other one.
The directions are parallel (Fig. 5b). This case is slightly more involved. By assumption, we have a
decomposition sequence beginning with t0 . We show that swapping t0 with our convex tile t in this
sequence preserves feasibility.
The original sequence has to remove either a or b before it removes t, as otherwise the connection
between the two is lost when t0 is removed first. After either a or b is removed, t becomes a leaf and
can no longer be important for connectivity. Thus, we only need to consider the sequence until either a
or b is removed. The main observation is that a and b block the same tiles as t or t0 , except for tile c as
in Fig. 5b. However, when c is removed, it has to be a leaf, because a is still not removed and in the
original decomposition sequence, t0 has already been removed. Therefore, a tile d ≠ t0 would have to be removed before c. Hence, the decomposition sequence remains feasible, concluding the proof.
Next we show that such a convex tile always exists if the polyomino is decomposable.
Lemma 4. Let P be a decomposable polyomino. Then there exists a convex tile that is removable without
destroying connectivity.
Proof. We prove this by contradiction based on two possible cases.
Assume P to be a decomposable polyomino in which no convex tile is removable. Because P is decomposable, there exists some feasible decomposition sequence S. Let Pconvex denote the set of convex tiles of P and
let t ∈ Pconvex be the first removed convex tile in the decomposition sequence S. By assumption, t cannot be
removed yet, so it is either blocked or a cut tile.
t is blocked. Consider the direction in which we would remove t. If it does not cut the polyomino, the
last blocking tile has to be convex (and would have to be removed before t), see Fig. 6a. If it cuts
the polyomino, the component cut off also must have a convex tile and the full component has to be
removed before t, see Fig. 6b. This is again a contradiction to t being the first convex tile to be removed
in S.
t is a cut tile. P − t consists of exactly two connected polyominoes, P1 and P2 . It is easy to see that
P1 ∩ Pconvex 6= ∅ and P2 ∩ Pconvex 6= ∅, because every polyomino of size n ≥ 2 has at least two convex
tiles of which at most one becomes non-convex by adding t. (A polyomino of size 1 is trivial.) Before
being able to remove t, either P1 or P2 has to be completely removed, including their convex tiles. This
is a contradiction to t being the first convex tile in S to be removed.
(a) If the unblocked directions of t and t0 are orthogonal, one of the two adjacent tiles (w.l.o.g. a) cannot
have any further neighbors. There can also be no
tiles in the upper left corner, because the polyomino
cannot cross the two free directions of t and t0 (red
marks).
(b) If the unblocked directions of t and t0 are parallel,
there is only the tile c for which something can change
if we remove t before t0 .
Figure 5: The red marks indicate that no tile is at this position; the dashed outline represents the rest of the
polyomino.
(a) If the removal direction of t is not crossed, the last
blocking tile has to be convex (and has to be removed
before).
(b) If the removal direction of t crosses P , then P gets
split into components A and B. Component B has a
convex tile t0 that needs to be removed before t.
Figure 6: Polyominoes in which no convex tile would be removable, illustrating the contradiction to t being the first blocked convex tile of P to be removed.
3.2 An Efficient Algorithm
An iterative combination of these two lemmas proves the correctness of greedily removing convex tiles. As we
show in the next theorem, using a search tree technique allows an efficient implementation of this greedy
algorithm.
Theorem 5. A hole-free polyomino can be checked for decomposability/constructibility in time O(N log N ).
Proof. Lemma 3 allows us to remove any convex tile, as long as it is not blocked and does not destroy
connectivity. Applying the same lemma on the remaining polyomino iteratively creates a feasible decomposition
sequence. Lemma 4 proves that this is always sufficient. If and only if we can at some point no longer find a
matching convex tile (to which we refer as candidates), the polyomino cannot be decomposable.
Let B be the time needed to check whether a tile t is blocked. A naïve way of doing this is to try out
all tiles and check if t gets blocked, requiring time O(N ). With a preprocessing step, we can decrease B to
O(log N ) by using O(N ) binary search trees for searching for blocking tiles and utilizing that removing a tile
can change the state of at most O(1) tiles. For every vertical line x and horizontal line y going through P ,
we create a balanced search tree, i.e., for a total of O(N ) search trees. An x-search tree for a vertical line x
contains tiles lying on x, sorted by their y-coordinate. Analogously define a y-search tree for a horizontal line
y containing tiles lying on y sorted by their x-coordinate. We iterate over all tiles t = (x, y) and insert the
Figure 7: When removing the red tile, only the orange tiles can become unblocked or convex.
tile in the corresponding x- and y-search tree with a total complexity of O(N log N ). Note that the memory
complexity remains linear, because every tile is in exactly two search trees. To check if a tile at position
(x′, y′) is blocked from above, we can simply search in the (x′ − 1)-, x′- and (x′ + 1)-search trees for a tile with y > y′. We analogously perform search queries for the other three directions, and thus have 12 queries of
total cost O(log N ).
We now iterate on all tiles and add all convex tiles that are not blocked and are not a cut tile to the set
F (cost O(N log N )). Note that checking whether a tile is a cut tile can be done in constant time, because
it suffices to look into the local neighborhood. While F is not empty, we remove a tile from F , from the
polyomino, and from its two search trees in time O(log N ). Next, we check the up to 12 tiles that are blocked
first from the removed tile for all four orientations, see Fig. 7. Only these tiles can become unblocked or
a convex tile. Those that are convex tiles, not blocked and no cut tile are added to F . All tiles behind
those cannot become unblocked as the first tiles would still be blocking them. The cost for this is again in
O(log N ). This is continued until F is empty, which takes at most O(N ) loops each of cost O(log N ). If the
polyomino has been decomposed, the polyomino is decomposable/constructible by the corresponding tile
sequence. Otherwise, there cannot exist such a sequence. By prohibiting the removal of a specific tile, one can force a specific start tile.
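Putting Lemmas 3 and 4 together, the greedy principle can be sketched as follows. This is a deliberately naive, roughly quadratic version (our own code): it uses a global connectivity check instead of the constant-time local cut test and rescans instead of the balanced search trees from the proof, so it illustrates correctness for hole-free polyominoes rather than the O(N log N) bound.

```python
from collections import deque

def _connected(cells):
    # BFS connectivity test on the grid graph
    if not cells:
        return True
    start = next(iter(cells))
    seen, queue = {start}, deque([start])
    while queue:
        x, y = queue.popleft()
        for q in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if q in cells and q not in seen:
                seen.add(q)
                queue.append(q)
    return len(seen) == len(cells)

def _unblocked(t, P):
    # t is removable in some direction iff one of its blocking sets is empty
    x, y = t
    rest = P - {t}
    n = any(qy > y and abs(qx - x) <= 1 for qx, qy in rest)
    s = any(qy < y and abs(qx - x) <= 1 for qx, qy in rest)
    e = any(qx > x and abs(qy - y) <= 1 for qx, qy in rest)
    w = any(qx < x and abs(qy - y) <= 1 for qx, qy in rest)
    return not (n and s and e and w)

def _convex(t, P):
    # some 2x2 square contains t and no other tile of P
    x, y = t
    for dx in (-1, 0):
        for dy in (-1, 0):
            sq = {(x + dx, y + dy), (x + dx + 1, y + dy),
                  (x + dx, y + dy + 1), (x + dx + 1, y + dy + 1)}
            if sq & P == {t}:
                return True
    return False

def decompose(P):
    # Greedily remove non-blocked, non-cut convex tiles (Lemmas 3 and 4).
    # Returns a removal order if one is found, else None.
    P = set(P)
    order = []
    while len(P) > 1:
        for t in list(P):
            if _convex(t, P) and _unblocked(t, P) and _connected(P - {t}):
                P.remove(t)
                order.append(t)
                break
        else:
            return None  # no removable convex tile: not constructible
    order.extend(P)
    return order
```

Reversing the returned order gives a construction sequence, as in Theorem 2.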
3.3 Pipelined Assembly
Given that a construction is always possible based on adding convex corners to a partial construction, we
can argue that the idea of Manzoor et al. [15] for pipelined assembly can be realized for every constructible
polyomino: We can transform the construction sequence into a spiral-shaped maze environment, as illustrated
in Fig. 8. This allows it to produce D copies of P in N + D cycles, implying that we only need 2N cycles for
N copies. It suffices to use a clockwise order of four unit steps (west, north, east, south) in each cycle.
The main idea is to create a spiral in which the assemblies move from the inside to the outside. The first
tile is provided by an initial south movement. After each cycle, ending with a south movement, the next seed
tile of the next copy of P is added. For every direction corresponding to the direction of the next tile added
by the sequence, we place a tile depot on the outside of the spiral, with a straight-line path to the location of
the corresponding attachment.
Theorem 6. Given a construction sequence σ := ((d1 , l1 ), . . . , (dN −1 , lN −1 )) that constructs a polyomino P ,
we can construct a maze environment for pipelined tilt assembly, such that constructing D copies of P needs
O(N + D) unit steps. In particular, constructing one copy of P can be done in amortized time O(1).
Proof. Consider the construction sequence σ, the movement sequence ζ consisting of N repetitions of the
cycle (w, n, e, s), and an injective function m : σ → ζ, with m((w, ·)) = e, m((n, ·)) = s, m((e, ·)) = w and
m((s, ·)) = n. We also require that m((di, li)) = ζj if for all i′ < i there is a j′ < j with m((di′, li′)) = ζj′ and
j is smallest possible. This implies that in each cycle there is at least one tile in σ mapped to one direction
in this cycle.
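The greedy assignment m can be sketched as follows: a tile attached from direction d is served by the opposite unit step in the repeating cycle (w, n, e, s), with indices strictly increasing and smallest possible. This is a minimal illustration assuming σ is given as a list of (direction, location) pairs.

```python
# Sketch of the greedy, injective mapping m from the proof.
CYCLE = ["w", "n", "e", "s"]
OPPOSITE = {"w": "e", "n": "s", "e": "w", "s": "n"}

def assign_steps(sigma):
    """Map each (direction, location) in sigma to the earliest index j
    (strictly increasing) in the infinite repetition of CYCLE whose
    unit step is opposite to the attachment direction."""
    js = []
    j = -1
    for d, _loc in sigma:
        step = OPPOSITE[d]
        j += 1
        while CYCLE[j % 4] != step:
            j += 1
        js.append(j)
    return js

# Tiles attached from w, w, n are served by steps e, e, s:
print(assign_steps([("w", 1), ("w", 2), ("n", 3)]))  # [2, 6, 7]
```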
Figure 8: (Left) A polyomino P . Shown is the assembly order and the direction of attachment to the seed
(tile 0). (Right) A maze environment for pipelined construction of the desired polyomino P . After the fourth
cycle, each further cycle produces a new copy of P .
Labyrinth construction: The main part of the labyrinth is a spiral as can be seen in Fig. 8. Consider a
spiral that is making |ζ| many turns, and the innermost point q of this spiral. From q upwards, we
make a lane through the spiral until we are outside the spiral. At this point we add a depot of tiles,
such that after each south movement a new tile comes out of the depot (this can easily be done with
bottleneck constructions as seen in Fig. 8 or in [15]). Then, we proceed for each turn in the spiral as
follows: For the j-th turn, if m−1 (ζj ) is empty we do nothing. Else if m−1 (ζj ) is not empty we want to
add the next tile. Let ti be this particular tile. Then, we construct a lane in direction −ζj , i.e., the
direction the tile will come from, until we are outside the spiral. By shifting this line in an
orthogonal direction, we can force the tile to fly in at the correct position relative to li . There, we
add a depot with tiles, such that the first tile comes out after j − 1 steps and with each further cycle a
new tile comes out (this can be done by using loops in the depot, see Fig. 8 or [15]). Depots, which
lie on the same side of the spiral, can be shifted arbitrarily, so they do not collide. These depots can
be made arbitrarily big, and thus, we can make as many copies of P as we wish. Note that we can
make the paths in the spiral big enough, such that after every turn the bounding box of the current
polyomino fits through the spiral.
Correctness: We will now show that we will obtain copies of P . Consider any j-th turn in the spiral,
where the i-th tile ti is going to be added to the current polyomino. With the next step, ti and the
polyomino move in direction ζj . While the polyomino does not touch the next wall in the spiral,
the distance between ti and the polyomino will not decrease. However, when the polyomino hits the
wall, the polyomino stops moving and ti continues moving towards the polyomino. Wall-hitting is the
same situation as in our non-parallel model: To a fixed polyomino we can add tiles from n, e, s or
w. Therefore, the tile connects to the correct place. Since this is true for any tile and any copy, we
conclude that every polyomino we build is indeed a copy of P .
Time: Since the spiral has at most 4N unit steps (or N cycles), the first polyomino will be constructed
after 4N unit steps. By construction, we began the second copy one cycle after beginning the first
copy, the third copy one cycle after the second, and so on. This means, after each cycle, when the first
Figure 9: Two different sequences. The red tile represents the bounding box of the current polyomino. (Left)
A desired sequence. The latitude intersects the bounding box. (Right) A sequence where the latitude does
not intersect the bounding box.
polyomino is constructed, we obtain another copy of P . Therefore, for D copies we need N + D cycles
(or O(N + D) unit steps). For D ∈ Ω(N ) this results in an amortized constant time construction for P .
Note that this proof only considers construction sequences in the following form: If a tile ti increases the
side length of the bounding box of the current polyomino, then the tile is added from a direction with a
longitude/latitude, such that the longitude/latitude intersects the bounding box (see Fig. 9). In the case
there is a tile such that the longitude/latitude does not intersect the bounding box, then we can rotate the
direction by π/2 towards the polyomino and we will have a desired construction sequence.
4
Optimization Variants in 2D
For polyominoes that cannot be assembled, it is natural to look for a maximum-size subpolyomino that
is constructible. This optimization variant is polyAPX-hard, i.e., we cannot hope for an approximation
algorithm with an approximation factor within Ω(N^(1/3)), unless P = NP.
Definition 7 (Maximum Tilt Assembly Problem). Given a polyomino P , the Maximum Tilt Assembly
Problem ( MaxTAP) asks for a sequence of tiles building a cardinality-maximal connected subpolyomino
P0 ⊆ P.
Theorem 8. MaxTAP is polyAPX-hard, even for tree-shaped polyominoes.
Proof. We reduce Maximum Independent Set (MIS) to MaxTAP; see Fig. 10 for an illustration. Consider
an instance G = (V, E) of MIS, which we transform into a polyomino PG . We construct PG as follows. Firstly,
construct a horizontal line from which we go down to select which vertex in G will be chosen. The line must
have length 10n − 9, where n = |V |. Every 10th tile will represent a vertex, starting with the first tile on the
line. Let ti be such a tile representing vertex vi . For every vi we add a selector gadget below ti and for every
{vi , vj } ∈ δ(vi ) we add a reflected selector gadget below tj , as shown in Fig. 10, each consisting of 19 tiles.
Note that all gadgets for selecting vertex vi are above the gadgets of vj if i < j and that there are at most n²
such gadgets. After all gadgets have been constructed, we have already placed at most 19n² + 10n − 9 ≤ 29n²
tiles. We continue with a vertical line with a length of 30n² tiles.
Now let α* be an optimal solution to MIS. Then MaxTAP has a maximum polyomino of size at least
30n²α* and at most 30n²α* + 29n²: We take the complete vertical part of ti for every vi in the optimal
solution of MIS. Choosing other lines blocks the assembly of further lines and thus yields a smaller solution.
Now suppose we had an N^(1−ε)-approximation for MaxTAP. Then we would have a solution of at least
T*/N^(1−ε), where T* is the optimal solution. We know that an optimal solution has T* ≥ 30n²α* tiles and the
polyomino has at most N ≤ 30n³ + 29n² ≤ 59n³ tiles. Therefore, we have at least 30n²α*/(59^(1−ε) n^(3−3ε)) tiles and thus
at least α*/(59^(1−ε) n^(1−3ε)) strips, because each strip is 30n² tiles long. Consider some ε ≥ 2/3 + η for any η > 0;
then the number of strips is at least α*/(59^(1/3) n^(1−3η)), which results in an n^(1−δ)-approximation for MIS, contradicting the
inapproximability of MIS (unless P=NP) shown by Berman and Schnitger [7].
As a consequence of the construction, we get Corollary 9.
Figure 10: Reduction from MIS to MaxTAP. (Left) A graph G with four vertices. (Right) A polyomino
constructed for the reduction with a feasible, maximum solution marked in grey.
Corollary 9. Unless P = NP, MaxTAP cannot be approximated within a factor of Ω(N^(1/3)).
On the positive side, we can give an O(√N)-approximation algorithm.
Theorem 10. The longest constructible path in a tree-shaped polyomino P is a √N-approximation for
MaxTAP, and we can find such a path in polynomial time.
Proof. Consider an optimal solution P* and a smallest enclosing box B containing P*. Then there must be
two opposite sides of B having at least one tile of P*. Consider the path S between both tiles. Because the
area A_B of B is at least the number of tiles in P*, |S| ≥ √(A_B), and a longest constructible path in P has
length at least |S|, we conclude that the longest constructible path is a √N-approximation.
To find such a path, we can search for every path between two tiles, check whether we can build this path,
and take the longest, constructible path.
Checking constructibility for O(N²) possible paths is rather expensive. However, we can efficiently approximate the longest constructible path in a tree-shaped polyomino with the help of sequentially constructible
paths, i.e., the initial tile is a leaf in the final path.
Theorem 11. We can find a constructible path in a tree-shaped polyomino in O(N 2 log N ) time that has a
length of at least half the length of the longest constructible path.
Proof. We only search for paths that can be built sequentially. Clearly, the longest such path is at least half
as long as the longest path that can have its initial tile anywhere. We use the same search tree technique as
before to look for blocking tiles. Select a tile of the polyomino as the initial tile. Do a depth-first search and
for every tile in this search, check if it can be added to the path. If it cannot be added, skip all deeper tiles,
as they also cannot be added. During every step in the depth-first search, we only need to change a single
tile in the search trees, doing O(1) updates with O(log N ) cost. As we only consider O(N ) vertices in the
depth-first search, this results in a cost of O(N log N ) for a fixed start tile. It is trivial to keep track of the
longest such constructible path. Repeating this for every tile results in a running time of O(N 2 log N ).
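The pruned depth-first search described in the proof can be sketched as follows. A naive O(k) blocking test stands in for the paper's search trees (so this sketch does not meet the stated running time), and the coordinate and adjacency encodings are our own assumptions. A tile can be added only if it can slide in along a free axis ray and come to rest against an already placed tile.

```python
# Sketch of the DFS for the longest sequentially constructible path.

def can_add(placed, t):
    """True if tile t can fly in along some axis: the approach ray must
    be free of placed tiles and t must stop against a placed neighbour."""
    s = set(placed)
    x, y = t
    from_e = (x - 1, y) in s and not any(ty == y and tx > x for tx, ty in s)
    from_w = (x + 1, y) in s and not any(ty == y and tx < x for tx, ty in s)
    from_n = (x, y - 1) in s and not any(tx == x and ty > y for tx, ty in s)
    from_s = (x, y + 1) in s and not any(tx == x and ty < y for tx, ty in s)
    return from_e or from_w or from_n or from_s

def longest_path_from(adj, start):
    """DFS over the polyomino's adjacency structure; prune a branch as
    soon as a tile cannot be added, since deeper tiles stay blocked."""
    best = [start]
    path = [start]
    def dfs(v, parent):
        nonlocal best
        if len(path) > len(best):
            best = path[:]
        for w in adj[v]:
            if w != parent and can_add(path, w):
                path.append(w)
                dfs(w, v)
                path.pop()
    dfs(start, None)
    return best

# An L-shaped tree polyomino: (0,0)-(1,0)-(2,0)-(2,1)
adj = {
    (0, 0): [(1, 0)],
    (1, 0): [(0, 0), (2, 0)],
    (2, 0): [(1, 0), (2, 1)],
    (2, 1): [(2, 0)],
}
print(len(longest_path_from(adj, (0, 0))))  # 4
```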
In tree-shaped polyominoes, finding a constructible path is easy. For simple polyominoes, additional
arguments and data structures lead to a similar result.
Theorem 12. In simple polyominoes, finding the longest of all shortest paths that are sequentially constructible
takes O(N 2 log N ) time.
Before we start with the proof of Theorem 12, we show in the next two lemmas that it is sufficient to
consider shortest paths only, and that we can restrict ourselves to one specific shortest path between two
tiles. Hence, we just need to test at most O(N²) different paths.
Lemma 13. In a sequentially constructible path, if there is a direct straight connection for a subpath, the
subpath can be replaced by the straight connection.
Figure 11: A subpath W′ and its shortcut L in green. To block L, A and B must exist. But then, either p0
or p1 (red tiles) will also be blocked. Therefore, W′ cannot be built either.
Proof. Consider a sequentially constructible path W and a subpath W′ ⊂ W that has a straight line L
connecting the startpoint and the endpoint of W′. W.l.o.g., L is a vertical line and we build from bottom to
top. Assume that (W \ W′) ∪ L is not constructible. Then at least two structures (which can be single tiles)
A and B must exist, preventing us from building L. Furthermore, these structures have to be connected via
a path (AB or BA, see Fig. 11). We observe that neither of these connections can exist, as otherwise we cannot
build W (if AB exists, we cannot build the last tile p0 of L; if BA exists, we cannot build the first tile p1 of
W′). Therefore, we can replace W′ with L.
By repeating the construction of Lemma 13 we get a shortest path from tile t1 to t2 in the following
form: Let P1 , . . . , Pk be reflex tiles on the path from t1 to t2 . Furthermore, for every 1 ≤ i ≤ k − 1, the path
from Pi to Pi+1 is monotone. This property holds for every shortest path, or else we can use shortcuts as in
Lemma 13.
Lemma 14. If a shortest path between two tiles is sequentially constructible, then every shortest path between
these two tiles is sequentially constructible.
Proof. Consider a constructible shortest path W , a maximal subpath W′ that is x-y-monotone, and a
bounding box B around W′. Due to the L1 metric, any x-y-monotone path within B is as long as W′. Suppose
some path within B is not constructible. Then we can use the same blocking argument as in Lemma 13 to
prove that W′ cannot be constructible either, contradicting that W is constructible.
Using Lemma 13 and Lemma 14, we are ready to prove Theorem 12.
Proof of Theorem 12. Because it suffices to check one shortest path between two tiles, we can look at
the BFS tree from each tile and then proceed like we did in Theorem 11. Thus, for each tile we perform a
BFS in time O(N ) and a DFS with blocking look-ups in time O(N log N ), which results in a total time of
O(N 2 log N ).
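The per-tile BFS step can be sketched as follows: compute a BFS tree from a start tile and extract one shortest path via parent pointers, which by Lemma 14 is the only shortest path we need to test. The adjacency encoding is illustrative.

```python
# Sketch of extracting one shortest path from a BFS tree.
from collections import deque

def bfs_shortest_path(adj, s, t):
    """One shortest path from s to t in the polyomino's adjacency graph."""
    parent = {s: None}
    q = deque([s])
    while q:
        v = q.popleft()
        if v == t:
            break
        for w in adj[v]:
            if w not in parent:
                parent[w] = v
                q.append(w)
    if t not in parent:
        return None
    path = [t]
    while parent[path[-1]] is not None:
        path.append(parent[path[-1]])
    return path[::-1]

# A 2x2 square polyomino as an adjacency graph:
adj = {
    (0, 0): [(1, 0), (0, 1)],
    (1, 0): [(0, 0), (1, 1)],
    (0, 1): [(0, 0), (1, 1)],
    (1, 1): [(1, 0), (0, 1)],
}
print(len(bfs_shortest_path(adj, (0, 0), (1, 1))))  # 3
```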
5
Three-Dimensional Shapes
An interesting and natural generalization of TAP is to consider three-dimensional shapes, i.e., polycubes. The
local considerations for simply connected two-dimensional shapes are no longer sufficient. In the following
we show that deciding whether a polycube is constructible is NP-hard. Moreover, it is NP-hard to check
whether there is a constructible path from a start cube s to an end cube t in a partial shape.
As a stepping stone, we start with a restricted version of the three-dimensional problem.
Theorem 15. It is NP-hard to decide if a polycube can be built by inserting tiles only from above, north,
east, south, and west.
Figure 12: Top-view on the polycube. There is a vertical part going south for the true and false assignment
of each variable. We start building at the top layer (blue) and have to block either the true or the false part
of each variable from above. The blocked parts have to be built with only inserting from east, west, and
south. For each clause, the parts of the inverted literals are modified to allow at most two of them being
built in this way. All other parts can simply be inserted from above in the end.
Proof. We prove hardness by a reduction from 3SAT. A visualization for the formula (x1 ∨ x2 ∨ x3 ) ∧ (x2 ∨
x3 ∨ x4 ) ∧ (x1 ∨ x3 ∨ x4 ) can be seen in Fig. 12. It consists of two layers of interest (and some further auxiliary
ones for space and forcing the seed tile by using the one-way gadget shown in Fig. 14). In the beginning, one
has to build a part of the top layer (highlighted in blue in the example, details in Fig. 13 (Right)). Forcing a
specific start tile can be done by a simple construction. For each variable we have to choose to block the
left (for assigning true) or the right (for assigning false) part of the lower layer. In the end, the remaining
parts of the upper layer can trivially be filled from above. The blocked parts of the lower layer then have to
be built with only inserting tiles from east, south, or west. In the end, the non-blocked parts can be filled
in from above. For each clause we use a part (as shown in Fig. 13 (Left)) that allows only at most two of
its three subparts to be built from the limited insertion directions. We attach these subparts to the three
variable values not satisfying the clause, i.e., the negated literals. This forces us to leave at least one negated
Figure 13: Top-view on the polycube. (Left) In the beginning we have to block the access from the top for
either the true or false part of the variable. The variable is assigned the blocked value. (Right) Three gadgets
for a clause. Only two of them can be built if the tiles are only able to come from the east, south, and west.
literal of the clause unblocked, and thus at least one literal of the clause to be true. Overall, this allows us to
build the blocked parts of the lower layers only if the blocking of the upper level corresponds to a satisfying
assignment. If we can build the true and the false parts of a variable in the beginning, any truth assignment
for the variable is possible.
Figure 14: (Left) This polyomino can only be constructed by starting at “in” and ending at “out”. (Right)
By adding layers above (white) and below (black) this polyomino starting at the “out”-tile, we obtain a
polycube that is only constructible by starting at “in” (from the other direction we must build the black
and white layer first and must then build the grey layer with 2D directions). Triangles denote where we can
switch to another layer. With this gadget we can enforce a seed tile.
The construction can be extended to assemblies with arbitrary direction.
Theorem 16. It is NP-hard to decide if a polycube can be built by inserting tiles from any direction.
Proof. We add an additional layer below the construction in Theorem 15 that has to be built first and blocks
access from below. Forcing the bottom layer to be built first can again be done with the one-way gadget
shown in Fig. 14.
The difficulties of construction in 3D are highlighted by the fact that even identifying constructible
connections between specific positions is NP-hard.
Theorem 17. It is NP-hard to decide whether a path from one tile to another can be built in a general
polycube.
Figure 15: (Left) Circuit representation for the SAT formula (x1 ∨ x2 ∨ x3 ) ∧ (x1 ∨ x2 ∨ x4 ) ∧ (x2 ∨ x3 ∨ x4 ) ∧
(x1 ∨ x3 ∨ x4 ) ∧ (x1 ∨ x2 ∨ x4 ). (Right) Reduction from SAT formula. Boxes represent variable boxes.
Proof. We prove NP-hardness by a reduction from SAT. For each variable we have two vertical lines, one for
the true setting, one for the false setting. Each clause gets a horizontal line and is connected with a variable
if it appears as a literal in the clause, see Fig. 15 (Left). We transform this representation into a tour problem
where, starting at a point s, one first has to go through either the true or false line of each variable and
then through all clause lines, see Fig. 15 (Right). The clause part is only passable if the path in at least one
crossing part (squares) does not cross, forcing us to satisfy at least one literal of a clause. As one has to go
through all clauses, t is only reachable if the selected branches for the variables equal a satisfying variable
assignment for the formula.
We now consider how to implement this as a polycube. The only difficult part is to allow a constructible
clause path if there is a free crossing. In Fig. 16 (Left), we see a variable box that corresponds to the crossing
of the variable path at the squares in Fig. 15 (Right). It blocks the core from further insertions. The clause
path has to pass at least one of these variable boxes in order to reach the other side. See Fig. 15 (Right) for
an example. Note that the corresponding clause parts can be built by inserting only from above and below,
so there are no interferences.
6
Conclusion/Future Work
We have provided a number of algorithmic results for Tilt Assembly. Various unsolved challenges remain.
What is the complexity of deciding TAP for non-simple polyominoes? While Lemma 4 can be applied to all
polyominoes, we cannot simply remove any convex tile. Can we find a constructible path in a polyomino from
a given start and endpoint? This would help in finding a √N-approximation for non-simple polyominoes. How
can we optimize the total makespan for constructing a shape? And what options exist for non-constructible
shapes?
Figure 16: (Left) Empty variable box. (Right) A clause line (blue) dips into a variable box. If the variable
box is built, then we cannot build the dip of the clause line.
An interesting approach may be to consider staged assembly, as shown in Fig. 17, where a shape gets
constructed by putting together subpolyominoes, instead of adding one tile at a time. This is similar to
staged tile self-assembly [10, 11]. This may also provide a path to sublinear assembly times, as a hierarchical
assembly allows massive parallelization. We conjecture that a makespan of O(√N) for a polyomino with N
tiles can be achieved.
All this is left to future work.
Figure 17: (Left) A polyomino that cannot be constructed in the basic TAP model. (Right) Construction in
a staged assembly model by putting together subpolyominoes.
References
[1] A. T. Becker, E. D. Demaine, S. P. Fekete, G. Habibi, and J. McLurkin. Reconfiguring massive particle
swarms with limited, global control. In Proceedings of the International Symposium on Algorithms and
Experiments for Sensor Systems, Wireless Networks and Distributed Robotics (ALGOSENSORS), pages
51–66, 2013.
[2] A. T. Becker, E. D. Demaine, S. P. Fekete, and J. McLurkin. Particle computation: Designing worlds
to control robot swarms with only global signals. In Proceedings IEEE International Conference on
Robotics and Automation (ICRA), pages 6751–6756, 2014.
[3] A. T. Becker, C. Ertel, and J. McLurkin. Crowdsourcing swarm manipulation experiments: A massive
online user study with large swarms of simple robots. In Proceedings IEEE International Conference on
Robotics and Automation (ICRA), pages 2825–2830, 2014.
[4] A. T. Becker, O. Felfoul, and P. E. Dupont. Simultaneously powering and controlling many actuators
with a clinical MRI scanner. In Proceedings of the IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS), pages 2017–2023, 2014.
[5] A. T. Becker, O. Felfoul, and P. E. Dupont. Toward tissue penetration by MRI-powered millirobots using
a self-assembled Gauss gun. In Proceedings IEEE International Conference on Robotics and Automation
(ICRA), pages 1184–1189, 2015.
[6] A. T. Becker, G. Habibi, J. Werfel, M. Rubenstein, and J. McLurkin. Massive uniform manipulation:
Controlling large populations of simple robots with a common input signal. In Proceedings of the
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 520–527, 2013.
[7] P. Berman and G. Schnitger. On the complexity of approximating the independent set problem.
Information and Computation, 96(1):77–94, 1992.
[8] S. Cannon, E. D. Demaine, M. L. Demaine, S. Eisenstat, M. J. Patitz, R. Schweller, S. M. Summers, and
A. Winslow. Two hands are better than one (up to constant factors). In Proc. Int. Symp. on Theoretical
Aspects of Computer Science(STACS), pages 172–184, 2013.
[9] H.-L. Chen and D. Doty. Parallelism and time in hierarchical self-assembly. SIAM Journal on Computing,
46(2):661–709, 2017.
[10] E. D. Demaine, M. L. Demaine, S. P. Fekete, M. Ishaque, E. Rafalin, R. T. Schweller, and D. L. Souvaine.
Staged self-assembly: nanomanufacture of arbitrary shapes with O(1) glues. Natural Computing,
7(3):347–370, 2008.
[11] E. D. Demaine, S. P. Fekete, C. Scheffer, and A. Schmidt. New geometric algorithms for fully connected
staged self-assembly. Theoretical Computer Science, 671:4–18, 2017.
[12] P. S. S. Kim, A. T. Becker, Y. Ou, A. A. Julius, and M. J. Kim. Imparting magnetic dipole heterogeneity
to internalized iron oxide nanoparticles for microorganism swarm control. Journal of Nanoparticle
Research, 17(3):1–15, 2015.
[13] P. S. S. Kim, A. T. Becker, Y. Ou, M. J. Kim, et al. Swarm control of cell-based microrobots using a
single global magnetic field. In Proceedings of the International Conference on Ubiquitous Robotics and
Ambient Intelligence (URAI), pages 21–26, 2013.
[14] A. V. Mahadev, D. Krupke, J.-M. Reinhardt, S. P. Fekete, and A. T. Becker. Collecting a swarm in a
grid environment using shared, global inputs. In Proc. IEEE Int. Conf. Autom. Sci. and Eng. (CASE),
pages 1231–1236, 2016.
[15] S. Manzoor, S. Sheckman, J. Lonsford, H. Kim, M. J. Kim, and A. T. Becker. Parallel self-assembly
of polyominoes under uniform control inputs. IEEE Robotics and Automation Letters, 2(4):2040–2047,
2017.
[16] H. M. Shad, R. Morris-Wright, E. D. Demaine, S. P. Fekete, and A. T. Becker. Particle computation:
Device fan-out and binary memory. In Proceedings IEEE International Conference on Robotics and
Automation (ICRA), pages 5384–5389, 2015.
[17] S. Shahrokhi and A. T. Becker. Stochastic swarm control with global inputs. In Proceedings of the
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 421–427, 2015.
[18] E. Winfree. Algorithmic self-assembly of DNA. PhD thesis, California Institute of Technology, 1998.
---------------------------------------------------------------------------------------------------------------------------------IBPSA’93, International Building Performance and Simulation Association, Adelaïde, Australie
---------------------------------------------------------------------------------------------------------------------------------
Multiple Model Software for Airflow and Thermal
Building Simulation. A case study under tropical
humid climate, in Réunion Island.
H. BOYER
J. BRAU
J.C. GATINA
Université de la Réunion(*) / INSA de Lyon (**)
CETHIL, INSA de Lyon (**)
Université de la Réunion (*)
Abstract : The first purpose of our work has been to allow -as far as heat transfer modes,
airflow calculation and meteorological data reconstitution are concerned- the integration of
diverse interchangeable physical models in a single software tool for professional use,
CODYRUN. The designer's objectives, the precision requested and calculation time considerations led us
to design a structure accepting selective use of models, taking into account multizone description and
airflow patterns. With a building case study in Reunion Island, we first analyse the sensitivity of the
thermal model to diffuse radiation reconstitution on tilted surfaces. Then, a realistic balance between
required precision and calculation time leads us to select detailed models for the zone of main interest,
but simplified models for the other zones.
I ) Presentation of the simulation tool :
Born from a joint research project
involving both the Université de la Réunion and
INSA de Lyon, this work aims at producing an
efficient building thermal simulation tool,
including some research and conception aspects,
and taking into consideration different types of
climates. More precisely, it is a multizone software
integrating both natural ventilation and moisture
transfers, called CODYRUN.
a) Architecture and original aspects
The three main parts are the building
description, the simulation program and the
exploitation of the results. As far as the description is
concerned, we have been led to break down
the building into three types of entities, which are
the following : firstly, the Zones (from a
__________________________________
(*) Laboratoire de Génie Industriel, Université de
la Réunion - Faculté des Sciences, 15 rue Cassin
97489 Saint-Denis Cedex. Ile de la RéunionFRANCE
(**) Institut National des Sciences Appliquées.
Equipe Equipement de l'Habitat, CETHIL,
Bat. 307, 20 Avenue Albert Einstein.
69621 Villeurbanne Cedex, FRANCE
thermal point of view), the Inter-zone partitions
(zone separations, outdoor being considered as a
particular zone) and finally, the Components (i.e.
the walls, the glass partitions, air conditioning
systems, and so on). For a simulation,
the Project notion includes a weather input file, a
building and then, a result file. The following
treelike structure illustrates our organisation :
Project
    Meteorological file
    Building
        Zones
            Components
        Inter-Zones
            Components
    Result file
Fig. 1 : Data organization
During a simulation, one of the most
interesting aspects is to offer the expert thermician
a wide range of choices between different heat
transfer models and meteorological reconstitution
parameter models. The aim of this simulation may
be, for a given climate, to carry out studies of the
software's sensitivity to these different models, in
order to choose those that should be integrated in a
suitable conception tool.
In the second part of this article, the first
application concerns the comparison of two models
of sky diffuse radiation. In the same way, for a
given climate, depending on the objective sought
during the simulation (through tracking of
temperatures or estimation of a yearly energy
consumption), it is interesting to have the liberty of
selecting the models to be involved.
In most existing simulation software, the
choice of the models being already made, their
application is global to the whole building. If these
models are complex, the multi-zone feature as well
as airflow patterns quickly leads to calculation times
which are not compatible with a conception tool. It
has seemed interesting to us to allow, for some of
the phenomena, a selective use of the models.
Thus, the choice of the indoor convection model is
made during the definition of a zone, and the
choice of the conduction model is made during the
description of a wall. The aim is then to link the
level of complexity of the models concerning one
entity to the interest borne for this same entity.
This is how we will show, in the second part of
this article, the importance of choosing a detailed
model for the indoor convection in the main
interest zone and simplified models for the other
zones.
The thermal model relies on INSA's
previous simulation code, CODYBA (BRAU,
1987). With the usual physical assumptions, we
use the technique of nodal discretisation of the
space variable by finite differences. In addition, the
mass of air inside one zone is represented by a
single thermal capacity. Thus, for a given zone, the
principle of energy conservation applied to each
concerned wall node, associated with the sensible
balance of the air volume, constitutes a set of
equations that can be condensed in matricial
form :

C dT/dt = A T + B    (1)

The automation of the setting up of the
previous equation requires a decomposition of A :

C dT/dt = (A_cond + A_cvi_lin + ... + A_connex) T + B_int_load + ... + B_connex    (2)
Fig. 2 : General flowchart (per time step: airflow model, thermal model, humidity model)
In relation with the calculation program, the
software executes at each step the reckoning of
airflow patterns, temperature field and specific
humidity of each zone.
The most simplified airflow model
considers as known the airflow rates between all
zones, whereas the more detailed model calculates
with a pressure model the airflow through each of
the openings, which can be large ones. The
building is represented as a network of pressure
nodes, connected by non-linear equations giving
the flows as a function of the pressure difference.
This detailed airflow calculation goes through the
iterative solution of the system of non-linear
equations made up of the air mass conservation
inside each zone.
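A minimal sketch of such a pressure-network solve for a single zone, assuming power-law opening characteristics Q = C · sign(ΔP) · |ΔP|^n. The coefficients and the bisection solver are illustrative choices, not CODYRUN's actual implementation.

```python
# Sketch of solving mass conservation for one zone pressure in a
# non-linear airflow network.

def flow(C, n, dp):
    """Power-law flow through an opening for pressure difference dp."""
    return C * (abs(dp) ** n) * (1 if dp >= 0 else -1)

def solve_zone_pressure(openings, lo=-100.0, hi=100.0, tol=1e-10):
    """Find the zone pressure P such that the net flow into the zone is
    zero; openings = [(C, n, P_outside), ...]. The residual is strictly
    decreasing in P, so bisection converges."""
    def residual(p):
        return sum(flow(C, n, p_out - p) for C, n, p_out in openings)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# One windward opening (outside at +4 Pa) and one leeward (-4 Pa) with
# identical characteristics: the zone settles at 0 Pa by symmetry.
p = solve_zone_pressure([(0.01, 0.5, 4.0), (0.01, 0.5, -4.0)])
print(abs(p) < 1e-6)  # True
```

In a real multizone building, each zone pressure is one unknown of a coupled system, which is why the text speaks of an iterative solution rather than a single one-dimensional root search.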
Thus, Acond gathers features linked to heat
conduction through the walls whereas Acvi_lin
gathers the features depending on indoor and
linearised convective exchanges. In the same way,
for each step, the filling up of the B vector is made
easier by its decomposition into fifteen elementary
vectors. The physical coupling of the considered
zone to the other ones is realized through an
iterative connection process, via the filling up
of a matrix Aconnex and a vector Bconnex.
At each step, the resolution of equation (1)
uses an implicit finite difference procedure and the
coupling iterations between the different zones
make it possible to calculate the evolution of
temperatures as well as those of sensible powers
needed in case of air conditioning. Having in mind
a compromise between precision and calculation
time, it is to be noticed that the thermician using
the software keeps control of the solving methods,
of the iterations (mainly concerning the connection
process and the airflow model) and of the different
convergence criteria.
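The implicit step for equation (1) can be sketched as follows, solving (C/Δt − A) T_new = (C/Δt) T_old + B at each time step. The two-node system below is a toy example with assumed capacities and couplings, not CODYRUN's actual matrices.

```python
# Illustrative backward-Euler step for C dT/dt = A T + B.
import numpy as np

def implicit_step(C, A, B, T, dt):
    """One implicit finite-difference step: (C/dt - A) T_new = (C/dt) T + B."""
    M = C / dt - A
    rhs = (C / dt) @ T + B
    return np.linalg.solve(M, rhs)

# Two nodes exchanging heat, both pulled toward 20 degrees:
C = np.diag([1.0e4, 2.0e4])                 # thermal capacities (J/K)
A = np.array([[-2.0, 1.0], [1.0, -2.0]])    # conductive couplings (W/K)
B = np.array([20.0, 20.0])                  # source term (steady state at 20)
T = np.array([5.0, 35.0])
for _ in range(10000):
    T = implicit_step(C, A, B, T, dt=3600.0)
print(np.allclose(T, [20.0, 20.0], atol=1e-3))  # True
```

Being implicit, this scheme stays stable for the hourly time steps typical of building simulation, at the cost of one linear solve per zone and per step.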
b) WINDOWS Front-End :
Developed on PC micro computer with
Microsoft WINDOWS interface, implemented in
C-language, the software benefits from all the
user-friendliness required for a conception tool
(windowing, mouse, ...).
A more technical aspect linked to this
system is very interesting for simulation tools
based on PC, such as our software : the memory
management supplied by WINDOWS allows
allocating much more than the classical 640 KB
limit, which is most necessary as regards the
size and variety of the matrices and vectors
involved in the simulation process.
The user's interface proposes to begin the
description of a building with the following
window :
Fig. 3 : Building description window
The push buttons zone, interzone and
components give access to these previously
defined entities. The models linked to the entity
"building" are also accessible from this window,
through the push buttons of the screen part called
"models". It is possible in this way to select the
chosen models for outside heat convection transfer,
reconstitution of meteorological parameters
(diffuse radiation and sky temperature) as well as
the airflow model.
The chosen building is a cubic shape with a
side of 6 m, in contact with the ground. We
consider a thermal split into three zones : the
ground floor, the eastern first floor and the western
first floor. The following sketch illustrates our
description :
Fig. 4 : Building sketch, southern face
The main characteristics are the following
ones : let us suppose all the walls are made up with
of dense concrete of 12 cm, the slab on grade being
of 30 cm in the same material. On the eastern and
western sides are display bay windows (simple
glass) of an elementary surface of four square
meters and of usual optical and conductive quality.
Later, the conductive model kept for each wall and
glass is a simple two capacitors model (R2C).
Besides, we suppose the presence of an air
conditioning system in the western zone, upstairs,
but we shall come back on this component in a
following paragraph. The building is described
with three zones, sixteen inter-zones and twenty
two components.
II ) Case study :
a) Description :
Réunion Island is located in the Indian Ocean, at longitude 55°1 East and latitude 21°5 South. The climatic conditions being those of a
tropical and humid climate, we have reconstituted
an hourly file gathering all the meteorological
parameters necessary for our simulations, i.e.
mainly solar radiation data, outdoor dry
temperature and moisture rate, and wind. With CODYRUN, for some periods of the considered year, we propose to reproduce a building's indoor thermal conditions.
b) Diffuse radiation model choice :
Under a tropical humid climate, the readings made on short-wave diffuse radiation show the importance of this kind of input. A quick analysis of the graphs for the site of Réunion Island shows an important diffuse radiation when the direct beam is low. A tropical humid climate leads to taking the total diffuse radiation (sky diffuse and ground reflection) into account when designing protection devices, i.e. screens, shadow masks, ... (Cabirol, 1984).
Most solar energy calculations consider this diffuse radiation as isotropic. Denoting by dh the diffuse horizontal radiation, the diffuse radiation on a tilted plane (azimuth γ, inclination s) is then:

d(s, γ) = ((1 + cos s) / 2) · dh  (W/m²)    (3)

Meanwhile, anisotropic models having been validated (Gueymard, 1987), we have implemented the Willmott model (Lebru, 1983) besides the isotropic model for the reconstitution of the incident diffuse radiation on a tilted plane. With the following notations, the proposed expression is:

d(s, γ) = ( F · C(s) · Max(cos i, 0) / sin h + (1 − F) · (1 + cos s) / 2 ) · dh    (4)

with C(s) = 1.00115 + 3.54·10⁻² s − 2.46·10⁻⁶ s²
and  F = 1 − (dh / Gh) · (1 − Gh / GhExt)

where
i      mean angle of incidence of the beam radiation with respect to the surface normal
h      solar altitude
Gh     site horizontal global radiation
GhExt  extraterrestrial global radiation

Considering the sharing (between direct beam and diffuse part) of the solar heat gains entering a zone (through the windows), the model considers that the direct beam is incident on the floor and that the diffuse part is split up in proportion to the surfaces. Given the optical characteristics of the materials the wall surfaces are made of, a linear set of equations has to be solved to obtain the radiation absorbed by the surface nodes of the walls. These radiations constitute one of the elementary vectors, Bswi.
For the study, we have taken from the yearly file a sequence of two extreme consecutive days, one cloudy and the other sunny. The following graphs show the evolution of the horizontal direct beam and diffuse radiation over the simulation period.
[Figure: horizontal direct beam (Dh) and diffuse (dh) radiation, in W/m², per time step]
Fig. 5 : Horizontal radiation
Through the glazing, one part of the energy load is due to the beam part of the transmitted radiation and the other part to its diffuse part. For the first day, the following plots show the evolution of the total diffuse incident radiation in the eastern floor zone, for the isotropic and anisotropic (Willmott) cases.
[Figure: indoor diffuse radiation / 100 (W), Willmott versus isotropic, per time step (hour)]
Fig. 6 : Eastern floor diffuse radiation
Integrating a non-isotropic diffuse radiation pattern amounts to considering as non-isotropic a part of the diffuse radiation previously calculated with the isotropic model. As a consequence, the diffuse radiation incident in a zone is lower with the anisotropic model, as shown by the previous graph. For the incident direct beam, the curves would show the opposite order.
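The two models, equations (3) and (4), can be sketched as follows. Angles are in radians here, and the coefficients and signs used in C(s) are an assumption reconstructed from the printed values; they should be checked against Lebru (1983) before any serious use.

```python
import math

def diffuse_isotropic(dh, s):
    """Equation (3): isotropic sky diffuse on a plane tilted by s (radians)."""
    return (1.0 + math.cos(s)) / 2.0 * dh

def diffuse_willmott(dh, gh, gh_ext, s, cos_i, sin_h):
    """Equation (4): anisotropic (Willmott-type) diffuse on a tilted plane.

    F weights a circumsolar term (along the beam direction) against the
    isotropic dome.  The form of C(s) below is an assumption.
    """
    F = 1.0 - (dh / gh) * (1.0 - gh / gh_ext)     # anisotropy index
    C = 1.00115 + 3.54e-2 * s - 2.46e-6 * s ** 2  # assumed form of C(s)
    circumsolar = F * C * max(cos_i, 0.0) / sin_h
    isotropic = (1.0 - F) * (1.0 + math.cos(s)) / 2.0
    return (circumsolar + isotropic) * dh
```

For a horizontal plane (s = 0, where cos i = sin h) both models return essentially the horizontal diffuse value, as expected.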
The following figure allows a comparison between the dry temperature evolutions of the eastern first floor for the two cases previously mentioned.
[Figure: eastern floor dry temperature (°C), Willmott versus isotropic, over the simulation period]
Fig. 7 : Eastern floor dry temperature
Considering our building and the mentioned period, the graphs practically merge. This simulation shows, in our particular case, that the whole-building model is little sensitive to the sky diffuse radiation model. A simple interpretation of this phenomenon can be given. In the anisotropic case, the diffuse radiation deriving from the isotropic model is divided into two parts, one of them isotropic, the other one directional. As far as the energetic behaviour of a building is concerned, the coupling of the external short-wave radiation with the zone in consideration is realized through the glazing. For a simple type of glass such as the one our windows are made of, the bibliographic expressions of the transmittance for direct and diffuse radiation differ but little. Moreover, the diffuse transmittance does not depend on the incidence angle. Thus, the way the energy load is divided according to the isotropic criterion has but little incidence on the zone's thermal behaviour.

c ) Choice of the indoor convection model

c -1) HVAC System Component :

The western zone being air conditioned, for a better understanding of the following graphs, we first display and comment the data window of the Air Conditioning System component.
Fig 8 : Component description window
Thus, for an air conditioning system, information relative to the sensible and latent loads is to be entered. For our study, we'll consider the sensible part. We must then define the threshold temperature values (high and low), the hourly schedules and the available heating and cooling powers. Most of the time, simulation softwares consider the heating power as convective. However, to allow a better integration of systems such as heated floors, it is possible to choose the convective and radiative ratios of the involved power. In the particular case of air conditioning system sizing, the software considers that the available power is infinite.

c -2) The indoor convection

The choice of the indoor convection model is made during the zone description. We have set up three models as regards the indoor convection. The first one is a linear model with a constant convective exchange coefficient (which can, however, be modified by the designer); the second one is linear with a coefficient depending on the type of wall (i.e. floor, ceiling, vertical wall); and the last model is a non-linear one. In this case, the exchange coefficient depends in a non-linear way on the temperature difference between the air and the given surface. The integration of such a model in our system then goes through the filling of a vector dealing with the non-linear convective flows, Bcvi_lin, and through a process of iterative resolution with a convergence criterion that can be modulated, for example, on the indoor dry temperature. This non-linear model needs several resolutions of equation (1) at each step. With a convergence criterion of 10⁻³ °C on the air temperature of the zone in consideration, the number of iterations is three or four. The use of this model then tends to penalize the tool as far as calculation time is concerned, but it can be considered the most reliable.

c -3) Exploitation

Considering the simulation conditions of the previous paragraph a), our concern is now the evolution of the dry temperature inside the western floor zone as well as of the sensible power needed. Firstly, we consider case A, in which the convection is integrated with the help of a linear model with a constant coefficient (5 W/m².K) in each zone. The following graphs are obtained:
[Figure: air temperature (°C) of the western floor zone over the simulation period]
Fig. 9 : Air Temperature, Western floor
[Figure: AC sensible power (kW) of the western floor zone over the simulation period]
Fig. 10: AC Power, Western floor
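The non-linear indoor convection model of paragraph c -2) amounts to a fixed-point iteration on the zone air temperature. The sketch below uses the common natural-convection correlation h = 1.31·ΔT^(1/3) as a stand-in, since the exact correlation used by CODYRUN is not given in the text:

```python
def nonlinear_convection_step(T_air_guess, T_surfaces, areas, gains, tol=1e-3, max_iter=20):
    """Fixed-point iteration for the zone air balance with a non-linear
    convection coefficient h = f(|T_air - T_surf|).

    Iterates until the air temperature moves by less than `tol` °C, as
    described in the text (three to four iterations in practice).
    Returns the air temperature and the number of iterations used.
    """
    T_air = T_air_guess
    for n in range(1, max_iter + 1):
        # assumed correlation; CODYRUN's actual one may differ
        h = [1.31 * abs(T_air - Ts) ** (1.0 / 3.0) for Ts in T_surfaces]
        num = gains + sum(hi * A * Ts for hi, A, Ts in zip(h, areas, T_surfaces))
        den = sum(hi * A for hi, A in zip(h, areas))
        T_new = num / den
        if abs(T_new - T_air) < tol:
            return T_new, n
        T_air = T_new
    return T_air, max_iter
```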
The graph of the power in dotted line corresponds to equipment sizing, in which case we consider the air conditioning power as infinite. This graph enables us to determine the cooling power to be installed, about 3 kW, in order to respect the specified 20°C at any time.
If the cooling equipment power is lower (2 kW in our case), the result is an increase of the indoor temperature during the second half of the day (Fig. 9). Quantitatively, the two previous graphs allow a designer to assess the overheating (and its duration) due to the under-sizing of the air conditioning system.
In case B, we integrate the indoor convection as non-linear, for the three zones. This simulation is very costly as far as calculation time is concerned. In the last case, C, we deal with the non-linear convection only in the zone in which we are most interested, i.e. the western floor, and we use the linear model with a constant coefficient for the two other zones.
The superposition of the graphs relative to the three cases produces the following results.
[Figure: indoor temperature (°C) of the western floor for cases A, B and C]
Fig. 11: Air Temperature, Western floor
[Figure: AC sensible power (kW) of the western floor for cases A, B and C]
Fig. 12: AC Power, Western floor
If we take as correct the values obtained in case C, the inaccuracy of the model with a constant coefficient applied to all the zones is clearly visible. The error on the temperature reaches more than one degree, and that on the power needed is 0.5 kW. On the contrary, case C shows the accuracy of the selective application of the non-linear convection model, with graphs very close to those of case B, both in temperature and in power. In parallel, the comparison of the simulation times (outside the building initialization period) for a given period of the day is given in the following table.

Case | Temp. Err. (°C) | Power Err. (kW) | Time (mn"s)
B    | 0 (reference)   | 0 (reference)   | 2"53
A    | 1.2             | 0.5             | 0"54
C    | 0.1             | 0.1             | 1"35

Thus, the simulation time in case C is approximately half of that in case B. This ratio can also be recovered by simple considerations. Indeed, it is possible, in first approximation, to suppose as constant the time necessary to set up and solve the state equation of one zone, i.e. t seconds. If Nb_zones is the number of zones, in case A the resolution for one step requires tA = Nb_zones · t seconds. Let us suppose constant and equal to i the number of iterations in a zone, iterations introduced by the non-linear model of the indoor convection. In case B, all the zones being concerned by the non-linearity, the calculation time equals tB = Nb_zones · i · t seconds. In case C, this time becomes:

tC = (i + (Nb_zones − 1)) · t seconds.

With the values corresponding to our case (Nb_zones = 3, i = 3), the ratio tC/tB is quite close to 0.5.
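The timing argument above is easy to reproduce; t, the constant per-zone solve time, is the only assumption:

```python
def simulation_times(nb_zones, i, t):
    """Per-step solve times for the three convection cases discussed in the text.

    t: time (s) to set up and solve one zone's state equation,
    i: iterations forced by the non-linear convection model.
    """
    t_a = nb_zones * t                # case A: linear model everywhere
    t_b = nb_zones * i * t            # case B: non-linear in every zone
    t_c = (i + (nb_zones - 1)) * t    # case C: non-linear in one zone only
    return t_a, t_b, t_c
```

With Nb_zones = 3 and i = 3 this gives tC/tB = 5/9 ≈ 0.56, consistent with the measured 1"35 / 2"53 ≈ 0.55.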
III) Conclusion

Through the previous simulations, we have introduced CODYRUN, focusing on a few points. Many aspects developed to this day in this software have been purposely put aside, in particular regarding airflow simulation, which is a dominant heat transfer mode under tropical climate. As far as heat conduction transfer is concerned, the different implemented models haven't been reviewed. In the same way, for this transfer mode, tools such as the physical aggregation of walls or modal reduction (Roux 1988), allowing a notable decrease of the calculation time, will be the object of future improvements. Here again we meet one of the preoccupations of this study, i.e. the balance between the precision required and the calculation time.
REFERENCES

Auffret, P.; Gatina, J.C.; Hervé, P. 1984. "Habitat et climat à la Réunion. Construire en pays tropical humide." Documents et recherches n°11, Université de la Réunion.

Brau, J.; Roux, J.J.; Depecker, P.; Croizer, J.; Gaignou, A.; Finas, R. 1987. "Micro-informatique et comportement thermique des bâtiments en régime dynamique." Génie Climatique, n°11 (Oct-Nov), 15-23.

Cabirol, T. 1984. "Habitat bioclimatique : l'incidence de l'ensoleillement et du vent." Afrique Expansion, n°6 (Nov.), 45-48.

Lebru, A. 1983. "Estimation des irradiations solaires horaires dans un plan quelconque à partir de la donnée de l'irradiation horaire globale (et éventuellement diffuse) horizontale." Research Report n°239, cahier 1847. CSTB, Sophia Antipolis, France.

Roux, J.J.; Depecker, P.; Krauss, G. 1988. "Pertinence and performance of a thermal model adapted to CAD context." In Proceedings of the Sixth International PLEA Conference (Porto, Portugal, July 27-31), 749-754.

Gueymard, C. 1987. "An anisotropic solar irradiance model for tilted surfaces and its comparison with selected engineering algorithms." Solar Energy, vol. 38, n°5, 367-386.

Walton, G.N. 1984. "A computer algorithm for predicting infiltration and interroom airflows." National Bureau of Standards, Washington, DC. ASHRAE Transactions 84-11, n°3.
Adaptive Hybrid Beamforming with Massive Phased Arrays in Macro-Cellular Networks

arXiv:1801.09029v2, 3 Feb 2018

Shahram Shahsavari†, S. Amir Hosseini, Chris Ng, and Elza Erkip†
† ECE Department of New York University, NYU Tandon School of Engineering, New York, USA
Blue Danube Systems, Warren, New Jersey, USA
† {shahram.shahsavari, elza}@nyu.edu, {amir.hosseini, chris.ng}@bluedanube.com
Abstract—Hybrid beamforming via large antenna arrays has shown great potential for increasing data rates in cellular networks by delivering multiple data streams simultaneously. In
this paper, several beamforming design algorithms are proposed
based on the long-term channel information for macro-cellular
environments where the base station is equipped with a massive
phased array under per-antenna power constraint. Using an
adaptive scheme, beamforming vectors are updated whenever
the long-term channel information changes. First, the problem is
studied when the base station has a single RF chain (single-beam
scenario). Semi-definite relaxation (SDR) with randomization
is used to solve the problem. As a second approach, a low-complexity heuristic beam composition algorithm is proposed
which performs very close to the upper-bound obtained by SDR.
Next, the problem is studied for a generic number of RF chains
(multi-beam scenario) where the Gradient Projection method is
used to obtain local solutions. Numerical results reveal that using
massive antenna arrays with optimized beamforming vectors can
lead to 5X network throughput improvement over systems with
conventional antennas.
I. INTRODUCTION
In light of the rapid development of fifth generation cellular
networks (5G), Massive MIMO has proven to improve network performance significantly [1]. These systems consist of
an array of many antenna elements. The user data is precoded
in the digital domain first and then, each of the digital streams
is converted to a radio frequency signal through a circuit
referred to as RF chain. Each signal is then transmitted by the
antenna element connected to that RF chain. This process is
best suited to a rich scattering propagation environment that
provides a large number of degrees of freedom. In a macro-cellular environment, however, these conditions often do not
hold. A more efficient alternative is the use of hybrid massive
MIMO systems in such scenarios [2].
In hybrid Massive MIMO systems, there are fewer RF
chains than antenna elements. This helps the overall system
to be much less power hungry and more cost effective, since
each RF chain consists of power consuming and expensive
elements such as A/D and D/A converters which do not
follow Moore’s law. However, these systems rely on accurate
channel estimation and typically are applied to TDD networks
to alleviate the estimation overhead [2]. On the other hand,
common deployment of LTE in North America is FDD based.
In this paper, we focus on a class of hybrid massive MIMO
systems where all antenna elements maintain RF coherency
[3]. This means that all antenna elements are closely spaced
and have matching phase and magnitude characteristics at the
operating frequency [4]. Using this technique, applicable also
in FDD with existing LTE protocols, the antenna system can
be used as a phased array and macro-cellular transmission is
achieved through hybrid beamforming (BF) [5].
In hybrid BF, each RF chain carries a stream of data and
is connected to each antenna element through a separate pair
of variable gain amplifier and phase shifter. By setting the
values of the amplifier and phase shifts (equivalently designing
BF vectors), multiple beams are generated, each carrying
one data stream over the air. Generating beams using phased
arrays generally requires channel information of all users. By
keeping the beam pattern constant over an extended period
of time, e.g., one hour, small scale channel variations can
be averaged out. Hence, the BF direction corresponds to a
dominant multipath component [6] which mainly depends on
the user location in macro-cellular environment due to the
primarily LOS channels. Whenever user location information
is updated, the system can adaptively switch to a different
beam pattern to constantly provide enhanced service to the
users. We refer to this technique as long-term adaptive BF.
The radiated power from an antenna array is constrained and
power constraints are chosen to limit the non-linear effects of
the amplifiers [7]. Generally, two types of power constraints
are considered in research problems: i) sum power constraint
(SPC) in which an upper-bound is considered for the total
power consumption of the array, and ii) per-antenna power
constraint (PAPC) in which an upper-bound is considered
for the power consumption of each antenna in the array [8],
[9]. Although it is more convenient to consider SPC for
research problems [10], [11], it is not applicable to practical
implementations, where each antenna element is equipped with
a separate power amplifier.
Generating adaptive beams that maximize the overall network throughput plays a significant role in exploiting the
benefits of hybrid BF in a cellular system. Any method that is
proposed should have a manageable complexity and operate
within the power constraints of the array. The goal of this paper
is to propose methods for long-term adaptive BF under PAPC
to maximize the average network rate using hybrid phased
arrays with an arbitrary number of beams. First, we focus the
optimization on an individual cell where the interference from
other cells is treated as noise. We use well-known theoretical
and numerical techniques for finding the optimal beam pattern
[Figure: cell partitioned into sections s = 1, ..., s = L]
Figure 1: Single cell scenario with L sections
as well as a theoretical upper bound for the solution. Then,
we propose a low-complexity heuristic algorithm that performs
close to the obtained upper bound.
Notation: We use uppercase bold letters to denote matrices
and lowercase bold letters to denote column vectors. Xmm and
wm are the (m, m)th and mth element of matrix X and vector
w, respectively. (·)^T, (·)^H, Tr{·}, and ||·||_F are the transpose, Hermitian, trace and Frobenius norm operations, respectively.
[N ] denotes the set of integers from 1 to N .
II. PROBLEM STATEMENT
We consider the downlink of a single-cell scenario consisting of a BS with M antennas and L ≪ M radio frequency
(RF) transceiver chains [2]. Since each RF chain can carry
a single data stream, the BS can serve L User Equipments
(UEs) simultaneously. As a result, the cell site is partitioned
into L sections (Fig. 1) and one UE per section is activated at
each time slot as will be explained later. We assume that user
equipments (UEs) are clustered in hotspots within the cell. Let H_si, (s, i) ∈ [L] × [K_s], denote hotspot i of section s, which consists of a group of N_si nearby UEs. We let K_s be the number of hotspots in section s, and K = Σ_s K_s denotes the total number of hotspots in the cell. The fraction of UEs located at H_si among the UEs in section s is defined by α_si = N_si / N_s, where N_s = Σ_i N_si is the total number of UEs in section s. Let U^n_si, n ∈ [N_si], denote the n-th UE of hotspot H_si.
We consider a macro-cellular environment in which the channels are primarily LOS, with the possibility of local scatterers around the UEs. We assume that only the long-term channel state information of the UEs is available at the BS and can be used to perform long-term BF. Furthermore, we assume that the long-term channel vectors between the BS and the users belonging to a hotspot are the same, due to their proximity. Let g_si = √β_si h_si denote the long-term channel vector between the BS and the UEs located at H_si, where β_si ∈ R+ and h_si ∈ C^M denote the pathloss and the spatial signature between the BS and H_si, respectively. We consider the Vandermonde model, where h_si = [e^{jθ_si}, e^{j2θ_si}, ..., e^{jMθ_si}]^T.
A use case of this channel model is when the users are located
in the far-field of a uniform linear array with M antennas
in a primarily line-of-sight environment [10]. In such cases
we have θ_si = 2πd sin(ψ_si)/λ, where d denotes the spacing between successive elements, λ is the wavelength, and ψ_si is the direction of H_si relative to the BS. In order to model other
types of antenna arrays such as rectangular and circular arrays,
hsi can be changed accordingly. We note that this model
relates the long-term channel information to the location of the
hotspots. In [12], the validity of this model is demonstrated
using a variety of test-bed experiments.
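The Vandermonde spatial signature above can be sketched as a small helper (the function name is ours; the model itself is from the text):

```python
import numpy as np

def steering_vector(M, d_over_lambda, psi):
    """Vandermonde spatial signature h = [e^{j*theta}, ..., e^{j*M*theta}]^T
    for a uniform linear array, with theta = 2*pi*(d/lambda)*sin(psi)."""
    theta = 2 * np.pi * d_over_lambda * np.sin(psi)
    return np.exp(1j * theta * np.arange(1, M + 1))
```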
Each RF chain is connected to each antenna element through
a separate pair of variable gain amplifier and phase shifter. We
model the corresponding gain and phase shift by a complex
coefficient. As a result, there are M complex coefficients
corresponding to each RF chain creating a BF vector. The
radiation pattern (or equivalently beam pattern) of the antenna
array corresponding to each RF chain can be modified by
controlling the corresponding BF vector [5]. We assume that
the BS uses BF vector ws ∈ CM , s ∈ [L] to generate a beam
pattern (or a beam in short) for serving UEs located in section
s. To reduce the complexity, these BF vectors are designed
based on the long-term channel information and are adaptively
modified when the long-term channel information changes,
i.e., when there is substantial change in the geographical
distribution of the hotspots. Moreover, we assume that the
UEs are scheduled within each section based on a round robin
n
n
scheduler. Let qsi
∈ C denote the signal to be sent to Usi
,
n
n 2
n∗
where E(ssi ) = 0 and E(|qsi | ) = 1. Also, let Usi∗ be the
scheduled UE in section s at √
a generic
time slot. Hence,
P
n∗
the BS transmit vector is x = P s∈[L] qsi
∗ ws , where P
denotes the average transmit power of the BS. Subsequently,
n∗
Usi
∗ receives signal
Xp
p
∗
∗
n
n∗ H
ysi
P βsi∗ qsi
P βs0 i∗ qsn0 i∗ hH
∗ =
∗ hsi∗ ws +
si∗ ws0 + v,
s0 6=s
where v ∼ CN (0, σ 2 ) is the noise. The first term corresponds
to the desired signal received from beam s and the second term
is the interference received from other L−1 beams. Therefore,
n
whenever Usi
is scheduled, the corresponding SINR is
SINRsi (W) =
wH Q w
P s siH s
,
1 + s0 6=s ws0 Qsi ws0
(1)
2
where Qsi = γsi hsi hH
si with γsi = P βsi /σ . The BF matrix is
M ×L
defined as W , [w1 , w2 , . . . , wL ] ∈ C
and has columns
corresponding to BF vectors
of
different
sections.
We note
P
that PAPC corresponds to s∈[L] |Wms |2 ≤ 1/M, ∀m. Hence
we define the feasible set as A = {W ∈ CM ×L | ∀m :
P
2
s∈[L] |Wms | ≤ 1/M }. The goal is to find BF matrix
W which maximizes a network utility function, denoted by
R(W) over the feasible set A. In this paper, we consider
average network rate as the network utility, i.e.,
X X
R(W) =
αsi log(1 + SINRsi ).
(2)
s∈[L] i∈[Ks ]
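Equations (1) and (2) can be evaluated directly. The sketch below is illustrative; the dictionary-based bookkeeping of hotspots is our choice, not the paper's:

```python
import numpy as np

def network_rate(W, h, gamma, alpha):
    """Average network rate R(W) of equations (1)-(2).

    W:     M x L beamforming matrix (columns = beams w_s)
    h:     dict (s, i) -> spatial signature h_si (length-M vector)
    gamma: dict (s, i) -> gamma_si = P * beta_si / sigma^2
    alpha: dict (s, i) -> hotspot user fraction alpha_si
    """
    L = W.shape[1]
    R = 0.0
    for (s, i), a in alpha.items():
        # w^H Q_si w = gamma_si * |h_si^H w|^2, for every beam
        g = [gamma[(s, i)] * abs(np.vdot(h[(s, i)], W[:, sp])) ** 2 for sp in range(L)]
        sinr = g[s] / (1.0 + sum(g) - g[s])
        R += a * np.log(1.0 + sinr)
    return R
```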
Hence, the problem can be formulated as follows.

Π_L : W_opt = argmax_{W∈A} R(W)
We note that the sub-index L in ΠL corresponds to the number
of the beams (equivalently number of RF chains). Although
there is no minimum utility constraint defined for individual
UEs in problem ΠL , sum-log maximization induces a type of
proportional fairness. It can be shown that problem ΠL is not
in a convex form [13, chapters 3, 4]. Therefore, finding the
globally optimal solution of this problem is difficult. In Section III, we study the single-beam (L = 1) problem Π1 to find local solutions and an upper-bound to evaluate the performance. In Section IV, we provide an iterative algorithm to find a sub-optimal solution of problem Π_L for arbitrary L.
In Sections III and IV, we will need to find the projection of a general beamforming matrix W ∈ C^{M×L} onto the set A, which is defined as

P_A(W) = argmin_{X∈A} ||X − W||_F^2.    (3)

We note that A is a closed convex set, which leads to a unique P_A(W) for every W ∈ C^{M×L}, given by Lemma 1. The proof is provided in Appendix A.

Lemma 1. We have Ŵ = P_A(W) if and only if, for every m ∈ {1, 2, ..., M},

Ŵ_ms = W_ms,                      if Σ_s |W_ms|^2 ≤ 1/M,
Ŵ_ms = W_ms / √(M Σ_s |W_ms|^2),  if Σ_s |W_ms|^2 > 1/M.
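Lemma 1 translates into a row-wise rescaling of W. A minimal sketch (the helper name is ours):

```python
import numpy as np

def project_papc(W):
    """Projection P_A(W) of Lemma 1: rescale each antenna row whose power
    exceeds the per-antenna budget 1/M onto the boundary."""
    M = W.shape[0]
    W_hat = W.copy().astype(complex)
    row_power = np.sum(np.abs(W_hat) ** 2, axis=1)
    over = row_power > 1.0 / M
    W_hat[over] /= np.sqrt(M * row_power[over])[:, None]
    return W_hat
```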
III. SINGLE-BEAM SCENARIO

In this section, we study problem Π1, where every UE is served by a single beam generated by BF vector w, i.e., there is only one section in the cell (s = 1) and PAPC corresponds to |w_m|^2 ≤ 1/M, ∀m. In this scenario, there is no interference, since one UE is scheduled per time slot. Hence we have R(w) = Σ_{i∈[K]} α_i log(1 + w^H Q_i w). Please note that we drop index s in the single-beam scenario because s = 1. Next, we derive an upper-bound for the optimal value of Π1 and provide two different methods to obtain local solutions of this problem. We will use the upper-bound as a benchmark in the simulations to evaluate the effectiveness of the local solutions.
A. Semi-definite relaxation with randomization

Since w^H Q_i w is a complex scalar, we have w^H Q_i w = (w^H Q_i w)^T = γ_i h_i^H X h_i, where X = w w^H ∈ C^{M×M} is a rank-one positive semi-definite matrix. Using this transformation, the semi-definite relaxation of problem Π1 is as follows.

Π1r : X_r^opt = argmax_{X∈C^{M×M}} Σ_{i=1}^{K} α_i log(1 + γ_i h_i^H X h_i)
      subject to: X ⪰ 0, ∀m : X_mm ≤ 1/M

We remark that Π1 is equivalent to Π1r plus a non-convex constraint Rank(X) = 1. Removing the rank-one constraint enlarges the feasible set and makes it possible to find solutions with a higher objective value. Hence, the optimal objective value of Π1r is an upper-bound for the optimal objective value of Π1. This is a well-known technique called 'Semi-definite Relaxation (SDR)' [14]. Note that Π1r can be solved using convex programming techniques [13]. After solving the convex problem Π1r there are two possibilities:

1) Rank(X_r^opt) = 1: in this case the upper-bound is tight and we have X_r^opt = w_opt w_opt^H, where w_opt is the solution of Π1.

2) Rank(X_r^opt) > 1: in this case, the upper-bound is not tight and finding the global solution of Π1 is difficult. However, a number of methods have been developed to generate a reasonable BF vector w for problem Π1 by processing X_r^opt [14]. For example, using the eigenvalue decomposition, we have X_r^opt = V Λ V^H. Let v_1, v_2, ..., v_M be the eigenvectors in descending order of eigenvalues. One simple approach is to use the eigenvector corresponding to the maximum eigenvalue and form the BF vector as w_mev = v_1 / (√M ||v_1||_2). It should be noted that the normalization is necessary for feasibility. Although this simple method is optimal when Rank(X_r^opt) = 1, it is not the best strategy when Rank(X_r^opt) > 1. Using different 'randomization' techniques can lead to better solutions [14]. Let us define w_sdr = b / (√M ||b||_2), where b = V Λ^{1/2} e with a random vector e ∈ C^M. The elements of e are i.i.d. random variables uniformly distributed on the unit circle in the complex plane. Alternative distributions such as the Gaussian distribution can also be adopted for e [15]. The randomization method is to generate a number of BF vectors {w_sdr} and pick the one resulting in the highest objective value of Π1. Note that using e = [1, 0, 0, ..., 0]^T would lead to w_sdr = w_mev. The number of random instances, denoted by N_trial, depends on the number of the hotspots, which is discussed further in the numerical examples.
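The SDR randomization procedure just described can be sketched as follows; the objective is the single-beam utility with weights α_i and SNR scalings γ_i, and all function names are ours:

```python
import numpy as np

def sdr_randomize(X_opt, alphas, gammas, H, n_trials=100, seed=1):
    """Extract a feasible BF vector from the SDR solution X_opt by
    randomization: b = V Lambda^{1/2} e with e uniform on the unit circle,
    normalized to satisfy PAPC, keeping the best trial."""
    M = X_opt.shape[0]
    vals, V = np.linalg.eigh(X_opt)                 # eigenvalues in ascending order
    sqrt_lam = np.sqrt(np.clip(vals, 0.0, None))    # guard tiny negative eigenvalues
    rng = np.random.default_rng(seed)

    def objective(w):
        return sum(a * np.log(1 + g * abs(np.vdot(h, w)) ** 2)
                   for a, g, h in zip(alphas, gammas, H))

    best_w, best_val = None, -np.inf
    for _ in range(n_trials):
        e = np.exp(1j * rng.uniform(0, 2 * np.pi, M))
        b = V @ (sqrt_lam * e)
        w = b / (np.sqrt(M) * np.linalg.norm(b))    # total power 1/M => PAPC holds
        val = objective(w)
        if val > best_val:
            best_w, best_val = w, val
    return best_w, best_val
```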
B. Single-beam sub-beam composition
In this section, we introduce a heuristic algorithm to find
a BF vector w for Π1 with a relatively good performance
compared to the upper-bound obtained by SDR. Suppose there
is only one hotspot in the network, say Hi . Using CauchySchwarz inequality,
we can show that the solution of Π1 is
√
w , hi / M . In this technique, which is referred to as
conjugate beamforming, the BS creates a narrow beam towards
the location of Hi [5, chapter 19]. We can generalize this
method to generate a beam pattern serving all the hotspots,
by summing up the individually optimal BF vectors, and
normalizing the result to satisfy
PK PAPC. Hence, the resulting
BF vector is wsbc , PA ( i=1 wi ), where PA (.) is given
by Lemma 1. We call this method single-beam sub-beam
composition (SB-SBC) due to the fact that we form a beam
pattern by adding up multiple sub-beams.
Adding up individually optimal BF vectors and projecting the result onto the feasible set A will perturb each of them. Therefore, wsbc would not exactly point towards all the hotspots. To compensate for this disturbance, we use another approach called single-beam phase-optimized sub-beam composition (SB-POSBC). In SB-POSBC, we add a separate phase shift for each BF vector wi in the summation, i.e., we define wposbc ≜ PA(Σ_{i=1}^{K} e^{jφi} wi). By choosing a set of appropriate phase shifts, wposbc leads to a beam pattern which points to all the hotspots and hence to a better network utility. Since it is not easy to find optimal phase
shifts analytically, one approach is to try a number of randomly chosen sets of phase shifts and pick the one which leads to the highest objective value in Π1. One can think of these random trials as the counterpart of the randomization technique described in Section III-A. Note that if ∀i: φi = 0, then wposbc = wsbc; hence, if the case of zero phase shifts is included in the set of random phase shifts, we can ensure that SB-POSBC will perform at least as well as SB-SBC. One important parameter in SB-POSBC is the number of random trials of phase-shift sets, denoted by Ntrial, which will be studied in Section V-A.

[Figure 2: beam-pattern plot omitted; legend: Hotspot, Antenna Element, SBC, POSBC, SDR-R; angles −90° to 90°, radial scale −20 to 10 dB.]
Figure 2: Comparison between the beam patterns (in dB) generated by SB-SBC and SB-POSBC for a uniform linear antenna array with eight antennas (8-ULA) and four hotspots.

Figure 2 depicts a network with four hotspots. This figure also depicts the beam patterns corresponding to the BF vectors wsbc, wposbc, and wsdr with Ntrial = 1000. We can observe how the phase shifts in SB-POSBC compensate for the perturbation caused by SB-SBC. Furthermore, we can also see that SB-POSBC creates a beam similar to that of SDR with randomization, while its complexity is much lower.

IV. MULTI-BEAM SCENARIO

In this section, we study problem ΠL for generic L. First we present a heuristic similar to SB-SBC and SB-POSBC, described in Section III-B, and then we introduce an iterative algorithm to find a local solution of problem ΠL.

A. Multi-beam sub-beam composition

Similar to what is described in Section III-B, one can obtain L BF vectors, each of which generates a beam to cover a section. To this end, we can consider each section and its associated hotspots and use SB-SBC (or SB-POSBC) to find a BF vector for that section. Furthermore, we assume that the power is equally divided among the BF vectors. Hence, after applying SB-SBC (or SB-POSBC) to find a BF vector for each section separately, we scale all the vectors by 1/√L. We call this method MB-SBC (or MB-POSBC). We note that this method does not consider inter-beam interference, because each BF vector is obtained independently from the others.

B. Gradient projection

Numerical optimization methods can be used to find a local solution of ΠL for arbitrary L. These methods are most valuable when it is difficult to find a closed-form solution, as in non-convex non-linear optimization. Although there is no guarantee that these methods find the global optimum, they converge to a local optimum if some conditions hold; we refer the reader to [16] for details. To find a local solution for problem ΠL, we use an iterative numerical method called 'Gradient Projection (GP)'. Although there are different types of GP, we use one that includes two steps at each iteration: i) taking a step in the gradient direction of the objective function with a step-size satisfying a condition called the Armijo Rule (AR), and ii) projecting the new point on the feasible set.

Let W[k] be the BF matrix at iteration k. We define

W[k+1] = PA(W[k] + r[k] G[k]),    (4)

where G[k] = [g1[k], g2[k], . . . , gL[k]] with gs[k] ≜ ∇ws R(W[k]), and r[k] > 0 denotes the step-size at iteration k. PA(W) is the projection of the BF matrix W on the feasible set A, which is given by Lemma 1. We observe that the projection rule is relatively simple and does not impose high implementation complexity on the problem.

The step-size calculation rule directly affects the convergence of GP. Applying AR to problem ΠL, we have r[k] = r̃ β^{l[k]}, where r̃ > 0 is a fixed scalar and l[k] is the smallest non-negative integer satisfying R(W[k+1]) − R(W[k]) ≥ σ Re[Tr{(W[k+1] − W[k])^H G[k]}], with W[k+1] given by (4). In order to find r[k] at iteration k, we start from l[k] = 0 and increase l[k] one unit at a time until the above condition is satisfied. Here 0 < σ < 1 and 0 < β < 1 are AR parameters. In practice, σ is usually chosen close to zero, e.g., σ ∈ [10^−5, 10^−1]. Also, β is usually chosen between 0.1 and 0.5 [16].

Lemma 2. Let {W[k]} be a sequence of BF matrices generated by gradient projection in (4) with step-size r[k] chosen by the Armijo rule, described above. Then, every limit point of {W[k]} is stationary.

We refer the reader to [16, chapter 2] for the proof. To implement GP, we need an initial point W[1] and a termination condition. We use MB-SBC, described in Section IV-A, to generate an initial point for the numerical examples. For the termination condition, we define the error as err[k+1] ≜ ||W[k+1] − W[k]||F, and we stop after iteration k if err[k+1] ≤ ε, where ε is a predefined error threshold. Although the numerical examples will show that GP converges fast with AR, we specify a threshold on the number of iterations, denoted by Niter, to avoid slow convergence.
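The GP iteration (4) with the Armijo rule can be condensed into the following sketch. It is an illustration rather than the authors' code: R, grad_R, and the projection are supplied by the caller (for problem ΠL the projection would be PA from Lemma 1 and R the network utility), and the default parameters are merely examples within the ranges quoted above.

```python
import numpy as np

def gradient_projection(R, grad_R, W0, project,
                        r_tilde=1.0, sigma=1e-4, beta=0.3,
                        eps=1e-4, n_iter_max=10_000):
    """Gradient projection with the Armijo step-size rule: take a gradient
    step, project onto the feasible set, and shrink the step until
    R(W_new) - R(W) >= sigma * Re Tr{(W_new - W)^H G}."""
    W = project(np.asarray(W0))
    for _ in range(n_iter_max):
        G = grad_R(W)
        l = 0
        while True:
            W_new = project(W + r_tilde * beta**l * G)
            lhs = R(W_new) - R(W)
            rhs = sigma * np.real(np.trace((W_new - W).conj().T @ G))
            if lhs >= rhs or l > 60:   # l > 60: line-search safeguard
                break
            l += 1
        err = np.linalg.norm(W_new - W)   # err^{[k+1]} in Frobenius norm
        W = W_new
        if err <= eps:
            break
    return W
```

The same loop works for real toy objectives, which makes it easy to unit-test before plugging in the beamforming utility and the PAPC projection.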
Fig. 3 illustrates a network with two beams where sections
1 and 2 are the left and right half planes, respectively, and each
beam serves 2 hotspots. Fig. 3a shows the beams generated
by double-beam SBC described in Section IV-A. We observe
that the BS suffers from inter-beam interference in this case.
GP takes the solution of double-beam SBC as initial point and
iteratively updates the BF coefficients of each beam (Fig. 3b),
which greatly reduces the inter-beam interference.
[Figure 3(a): beam pattern (in dB) of double-beam SBC; plot omitted. Legend: Hotspot, Antenna Element, Beam 1 (left), Beam 2 (right).]
[Figure 3(b): beam pattern (in dB) of gradient projection; plot omitted. Legend: Hotspot, Antenna Element, Beam 1 (left), Beam 2 (right).]
Figure 3: Beam patterns generated by double-beam SBC and gradient projection in a sample network with 2 beams, four hotspots, and a uniform linear array with eight antennas.

V. NUMERICAL EXAMPLES

In this section, we provide numerical examples to evaluate and compare the performance of the proposed methods. We simulate the downlink of a three-dimensional network with a BS consisting of a 4 × 12 uniform rectangular array serving a 120° sector of a cell. Table I lists the network parameters. Hotspots are distributed uniformly at random in a ring around the BS with inner and outer radii of 300 m and 577 m, respectively. We use the CVX package [17] to solve the convex problem Π1r. We also use ε = 10^−4 and Niter = 10^4 for GP.

Table I: Simulation parameters

Parameter               | Value
Scenario                | single-beam, double-beam
Cell radius             | 577 m
Bandwidth               | 20 MHz
Noise spectral density  | −174 dBm/Hz
BS transmit power (P)   | 20 dBm
Number of hotspots (K)  | 4, 8, 16
Pathloss in dB (β^−1)   | 128.1 + 37.6 log10(d in km)

A. Effect of number of trials on the performance

In this section, we consider the single-beam scenario described in Section III. We focus on SDR with randomization (SDR-R) and SB-POSBC, described in Sections III-A and III-B, respectively. In both of these algorithms there are Ntrial random trials. To evaluate the performance of these algorithms, we consider 100 random network realizations. We run both algorithms with Ntrial = 10^0, 10^1, 10^2, 10^3, 10^4 and find the BF vectors wsdr and wposbc and the corresponding network utilities in bps/Hz. We also obtain the Upper Bound (UB) by solving the relaxed problem Π1r. Table II lists the average performance of these algorithms for two values of the number of hotspots, K. We observe that the larger Ntrial becomes, the closer the performance gets to the UB, which in turn slows down the pace of improvement. For larger K, however, the performance keeps increasing with the number of trials, which suggests that the number of trials should be proportional to the number of hotspots. While both algorithms provide performance close to the UB with large enough Ntrial, SDR-R outperforms SB-POSBC in some cases. This improved performance comes at the cost of higher computational complexity.

Table II: Average network utility in (bps/Hz)

K  | Method   | Ntrial = 10^0 | 10^1  | 10^2  | 10^3  | 10^4
4  | SB-POSBC | 5.039         | 5.505 | 5.602 | 5.627 | 5.637
4  | SDR-R    | 5.252         | 5.584 | 5.609 | 5.611 | 5.611
4  | UB       | 5.783         | 5.783 | 5.783 | 5.783 | 5.783
16 | SB-POSBC | 3.617         | 4.220 | 4.403 | 4.498 | 4.556
16 | SDR-R    | 4.123         | 4.521 | 4.572 | 4.585 | 4.587
16 | UB       | 4.715         | 4.715 | 4.715 | 4.715 | 4.715

B. Computational complexity and performance

In this part, we compare the performance and numerical complexity of the different algorithms devised for the single-beam scenario (L = 1). Table III lists the average throughput (in bps/Hz) and the average time spent on a typical desktop computer (3.1 GHz Core i5 CPU, 16 GB RAM) to find the BF vector using the GP, SB-POSBC, and SDR-R algorithms. The reported values are averages over 100 random network realizations. It is assumed that Ntrial = 10^3 for SDR-R and SB-POSBC. While GP and SB-POSBC have sub-second runtimes, SDR-R has a much higher complexity. This is because a complex convex optimization problem has to be solved in the first step of SDR-R, whereas GP and SB-POSBC only require simple mathematical operations. We also observe that the performance of these algorithms is very close. Overall, we can conclude that GP and SB-POSBC are superior to SDR-R since they achieve similar performance with much lower computational complexity. The results also reveal that the network utility decreases for each of the algorithms when the number of hotspots increases. This is the cost of having a single beam pattern. In fact, given a fixed antenna array aperture, it is more difficult to provide good BF gain for a larger number of hotspots with a single beam.
C. Performance evaluation

In this section, we consider single-beam (L = 1) and double-beam (L = 2) scenarios. In order to compare the performance of the algorithms described in Sections III and IV, we consider 4000 random network realizations and calculate the network utility corresponding to each algorithm for each realization. For the single-beam scenario, the upper bound of the network utility is obtained for each realization by solving problem Π1r. It is assumed that Ntrial = 10^3 for SDR-R and SB-POSBC. Fig. 4 illustrates the empirical CDF of the network utility corresponding to each algorithm for K = 8 hotspots. We observe that SDR-R outperforms SB-POSBC and GP in the single-beam scenario. Moreover, SB-POSBC performs very close to SDR-R. Having two beams will double the number of transmissions compared to the single-beam scenario, which can potentially lead to significant network utility improvement if the interference due to multi-user activity (i.e., inter-beam interference) is managed appropriately. Since double-beam GP considers interference, it leads to almost 2X improvement in network utility compared to the single-beam algorithms. On the other hand, the performance of double-beam SBC is remarkably inferior to double-beam GP, due to the lack of interference management.

Table III: Average run time (in seconds) and utility in (bps/Hz)

K  | Method   | Run Time | Network Utility
4  | GP       | 0.024    | 5.541
4  | SB-POSBC | 0.081    | 5.627
4  | SDR-R    | 9.268    | 5.611
16 | GP       | 0.040    | 4.958
16 | SB-POSBC | 0.293    | 4.498
16 | SDR-R    | 21.026   | 4.585

[Figure 4: plot omitted; empirical CDF (0-1) versus network utility (bps/Hz, roughly 4-14); curves: Single-beam GP, Single-beam POSBC, Single-beam SDR-R, Single-beam Upper-bound, Double-beam SBC, Double-beam GP.]
Figure 4: Empirical CDF of network utility.

VI. CONCLUDING REMARKS

We have studied the hybrid BF problem for a single macro-cell scenario where the BS is equipped with a massive phased array. Long-term channel information is used to design the BF vectors, which are updated when there is a substantial change in the long-term channel information. Several algorithms with different complexities have been proposed for designing BF vectors in different scenarios.

Based on the multi-cell generalization of the proposed algorithms, a commercial software package called BeamPlannerTM has been developed by Blue Danube Systems. The software is designed to optimize beam patterns in macro-cellular networks to enable effective antenna deployment. Fig. 5 illustrates the map of a sample cellular network in Danville, VA in three different deployment scenarios. Fig. 5a represents the case where all cells are equipped with conventional passive antennas, whereas the other two figures showcase the deployment of BeamCraftTM 500, an active antenna array designed and manufactured by Blue Danube Systems. The white dots show the distribution of demand inside the network, and the illuminated patterns illustrate the SINR at each point. Fig. 5b shows the single-beam scenario where the beams are optimized using the GP algorithm. Fig. 5c illustrates the same result for the double-beam scenario. It can be seen that double-beam active antenna arrays with optimal beam patterns can offer close to 5X throughput improvement over current systems with conventional antennas.

Figure 5: Sample cellular network located in Danville, VA optimized using BeamPlannerTM software. (a) Network with passive antennas; average throughput is 53.0 Mbps/km^2. (b) Network with optimized single-beam phased arrays; average throughput is 89.8 Mbps/km^2. (c) Network with optimized double-beam phased arrays; average throughput is 237.0 Mbps/km^2.

APPENDIX A
PROOF OF LEMMA 1

Based on the definition of optimal projection in (3), we have

Ŵ = argmin_{X∈A} d(X, W),    (5)

where d(X, W) ≜ ||X − W||_F^2 is convex in X. Besides, the set A is also convex; hence (5) is a convex optimization problem. Let us assume that, for the coefficient representing antenna element m and beam s, Xms = Zms e^{jΦms} with Zms ≥ 0 and Wms = Rms e^{jΨms} with Rms ≥ 0. We can reformulate (5) as follows:

(Ẑ, Φ̂) = argmin_{Zms, Φms} Σ_{m∈[M]} Σ_{s∈[L]} (Rms^2 + Zms^2 − 2 Rms Zms cos(Ψms − Φms)),
subject to: ∀m: Σ_{s∈[L]} Zms^2 ≤ 1/M, ∀m, s: Zms ≥ 0.

Since ∀m, s: Zms, Rms ≥ 0, we have ∀m, s: Φ̂ms = Ψms. Then the objective function is reduced to minimizing Σ_{m∈[M]} Σ_{s∈[L]} (Rms − Zms)^2. Furthermore, since all terms inside the sum are non-negative, the above problem can be broken into M separate problems that can be solved independently for each antenna element index m. Hence, we can drop the antenna element index and rewrite the problem as follows:

min_{Zs} Σ_{s∈[L]} (Rs − Zs)^2
subject to: Σ_{s∈[L]} Zs^2 ≤ 1/M, ∀s: Zs ≥ 0.

It can be easily verified that the above problem is convex; therefore, the Karush-Kuhn-Tucker (KKT) conditions will result in the optimal solution [13, chapter 5]. First, we generate the Lagrangian as follows:

L(Z, λ, µ) = Σ_{s∈[L]} (Rs − Zs)^2 + µ (Σ_{s∈[L]} Zs^2 − 1/M) − Σ_{s∈[L]} λs Zs.

The KKT conditions to be checked are as follows:

∂L/∂Zs = 0, ∀s ∈ [L],    (6)
Zs ≥ 0, ∀s ∈ [L],    (7)
λs Zs = 0, ∀s ∈ [L],    (8)
µ, λs ≥ 0, ∀s ∈ [L],    (9)
Σ_{s∈[L]} Zs^2 ≤ 1/M,    (10)
µ (Σ_{s∈[L]} Zs^2 − 1/M) = 0.    (11)

From (6), we can conclude:

λs = 2(Zs − Rs) + 2µZs.    (12)

By applying (11), we know that either µ = 0 or Σ_{s∈[L]} Zs^2 = 1/M. If µ = 0 (hence Σ_{s∈[L]} Zs^2 ≤ 1/M), then from (12) we obtain λs = 2(Zs − Rs). According to (8), we either have λs = 0 or Zs = 0. If Zs = 0, we will have λs < 0, which contradicts (9). Otherwise, if λs = 0, we have Zs = Rs.

On the other hand, if we assume µ > 0, then we have Σ_{s∈[L]} Zs^2 = 1/M. Again, from (8), if λs > 0, then Zs = 0, and using (12) we have λs = −2Rs < 0, which is a contradiction. If we set λs = 0, we have Zs = Rs/(1 + µ) according to (12), which results in the following equalities:

Σ_{s∈[L]} Zs^2 = 1/M,   Σ_{s∈[L]} (Rs/(1 + µ))^2 = 1/M.

With a simple substitution we obtain µ = sqrt(M Σ_{s=1}^{L} Rs^2) − 1, which yields Zs = Rs / sqrt(M Σ_{s∈[L]} Rs^2). It should be noted that µ > 0 and hence Rs > Zs. Therefore, this case represents the cases in which Σ_{s∈[L]} Rs^2 ≥ 1/M.

Consequently, the solution of the problem is as follows:

Ẑms = Rms,                             if Σ_{s∈[L]} Rms^2 < 1/M,
Ẑms = Rms / sqrt(M Σ_{s∈[L]} Rms^2),   if Σ_{s∈[L]} Rms^2 ≥ 1/M,

for every m ∈ [M], which concludes the proof.

REFERENCES
[1] T. L. Marzetta et al., Fundamentals of Massive MIMO. Cambridge
University Press, 2016.
[2] A. F. Molisch et al., "Hybrid beamforming for massive MIMO: A survey," IEEE Communications Magazine, vol. 55, no. 9, pp. 134–141, 2017.
[3] M. Banu, “HDAAS: An efficient massive MIMO technology,” 4th
Brooklyn 5G Summit, April 2017.
[4] Y. T. Lo and S. Lee, Antenna Handbook: theory, applications, and
design. Springer Science & Business Media, 2013.
[5] S. J. Orfanidis, Electromagnetic waves and antennas. Rutgers University New Brunswick, NJ, 2002.
[6] Q. Li et al., “MIMO techniques in WiMAX and LTE: a feature
overview,” IEEE Communications magazine, vol. 48, no. 5, 2010.
[7] J. Joung et al., “A survey on power-amplifier-centric techniques for
spectrum-and energy-efficient wireless communications,” IEEE Communications Surveys & Tutorials, vol. 17, no. 1, pp. 315–333, 2015.
[8] W. Yu and T. Lan, “Transmitter optimization for the multi-antenna
downlink with per-antenna power constraints,” IEEE Transactions on
Signal Processing, vol. 55, no. 6, pp. 2646–2660, 2007.
[9] C. T. Ng and H. Huang, “Linear precoding in cooperative MIMO cellular
networks with limited coordination clusters,” IEEE Journal on Selected
Areas in Communications, vol. 28, no. 9, pp. 1446–1454, 2010.
[10] E. Karipidis, N. D. Sidiropoulos, and Z. Q. Luo, “Far-field multicast
beamforming for uniform linear antenna arrays,” IEEE Transactions on
Signal Processing, vol. 55, no. 10, pp. 4916–4927, Oct 2007.
[11] A. B. Gershman et al., “Convex optimization-based beamforming,” IEEE
Signal Processing Magazine, vol. 27, no. 3, pp. 62–75, 2010.
[12] C. Zhang and R. C. Qiu, "Massive MIMO testbed - implementation and initial results in system model validation," arXiv preprint arXiv:1501.00035, 2015.
[13] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.
[14] Z.-Q. Luo et al., “Semidefinite relaxation of quadratic optimization
problems,” IEEE Signal Processing Magazine, vol. 27, no. 3, pp. 20–34,
2010.
[15] N. D. Sidiropoulos, T. N. Davidson, and Z.-Q. Luo, “Transmit beamforming for physical-layer multicasting,” IEEE Transactions on Signal
Processing, vol. 54, no. 6, pp. 2239–2251, 2006.
[16] D. P. Bertsekas, Nonlinear Programming. Athena Scientific, Belmont, MA, 2016.
[17] M. Grant, S. Boyd, and Y. Ye, “CVX: Matlab software for disciplined
convex programming,” 2008.
IEEE TRANSACTIONS ON XXXX
Non-iterative Label Propagation on
Optimal Leading Forest
arXiv:1709.08426v1 [cs.LG] 25 Sep 2017
Ji Xu, Guoyin Wang, Senior Member, IEEE
Abstract—Graph based semi-supervised learning (GSSL) has
intuitive representation and can be improved by exploiting
the matrix calculation. However, it has to perform iterative
optimization to achieve a preset objective, which usually leads to
low efficiency. Another inconvenience lying in GSSL is that when
new data come, the graph construction and the optimization have
to be conducted all over again. We propose a sound assumption,
arguing that: the neighboring data points are not in peer-to-peer
relation, but in a partial-ordered relation induced by the local
density and distance between the data; and the label of a center
can be regarded as the contribution of its followers. Starting from
the assumption, we develop a highly efficient non-iterative label
propagation algorithm based on a novel data structure named
as optimal leading forest (LaPOLeaF). The major weaknesses of
the traditional GSSL are addressed by this study. We further
scale LaPOLeaF to accommodate big data by utilizing block
distance matrix technique, parallel computing, and Locality-Sensitive Hashing (LSH). Experiments on large datasets have
shown the promising results of the proposed methods.
Index Terms—Optimal leading forest, semi-supervised learning, label propagation, partial order assumption.
We ponder the possible reasons for these limitations and argue that the crux is that these models treat the relationship among neighboring data points as "peer-to-peer". Because the data points are considered equally significant in representing their class, most GSSL objective functions try to optimize on each data point with equal priority. However, this "peer-to-peer" relationship is questionable in many situations. For example, if a data point xc lies at the central location of the space of its class, then it will have more representative power than another point xd that diverges more from the central location, even if xc and xd are in the same K-NN (or ε-NN) neighborhood.
This paper is grounded on the partial-order-relation assumption: the neighboring data points are not in equal status,
and the label of the leader (or parent) is the contribution
of its followers (or children). The assumption is intuitively
reasonable since there is an old saying: “a man is known by
the company he keeps”. The labels of the peripheral data may
change because of the model or parameter selection, but the
labels of the core data are much more stable. Fig. 1 illustrates
this idea.
I. I NTRODUCTION
Labels of data are laborious or expensive to obtain, while unlabeled data are generated or sampled in tremendous volume in the big-data era. This is the reason why semi-supervised learning (SSL) is increasingly drawing interest and attention from the machine learning community. Among the variety of SSL model streams, Graph-based SSL (GSSL) has the reputation of being easily understood through visual representation, and its learning performance is convenient to improve by exploiting the corresponding matrix calculations. Therefore, there has been a lot of research work in this regard, e.g., [1], [2], [3].
However, the existing GSSL models have two apparent limitations. One is that the models usually need to solve an optimization problem in an iterative fashion, hence the low efficiency. The other is that these models have difficulty in delivering labels for a new batch of data, because the solution for the unlabeled data is derived specifically for the given graph. With newly included data, the graph has changed, and the whole iterative optimization process is required to run once again.
Corresponding author: Guoyin Wang.
Ji Xu is with School of Information Science and Technology, Southwest
Jiaotong University. E-mail: [email protected]
Guoyin Wang is with Chongqing Key Laboratory of Computational Intelligence, Chongqing University of Posts and Telecommunications. E-mail:
[email protected]
Manuscript received October 30, 2017.
[Figure 1 omitted; legend: unlabeled data, labeled data.]
Fig. 1: Partial-order-relation assumption: the label of the center can be regarded as the contribution from the labels of its followers. Therefore, one can safely infer here that the left unlabeled point is a triangle and the right one is a pentagram.
This paper proposes a non-iterative label propagation algorithm that takes our previous research work, namely local density based optimal granulation (LoDOG), as its starting point. In LoDOG, the input data is organized into an optimal number
of subtrees. Every non-center node in the subtrees is led by its
parent to join the microcluster the parent belongs to. In [4],
these subtrees are called leading trees. The proposed method,
Label Propagation on Optimal Leading Forest (LaPOLeaF),
performs label propagation on the structure of the relatively
independent subtrees in the forest, rather than on the traditional
nearest neighbor graph.
Therefore, LaPOLeaF exhibits several advantages when
compared with other GSSL methods: (a) the propagation is
performed on the subtrees, so the edges under consideration are much sparser than those of a nearest-neighbor graph;
(b) the subtrees are relatively independent of each other, so the massive label propagation computation is easier to parallelize when the sample size is huge;
(c) LaPOLeaF performs label propagation in a non-iterative
fashion, so it is of high efficiency.
Overall, the LaPOLeaF algorithm is formulated in a simple way, and the empirical evaluations show promising accuracy and very high efficiency.
The rest of the paper is organized as follows. Section II briefly reviews the related work. The model of LaPOLeaF is presented in detail in Section III. Section IV describes the method to scale LaPOLeaF for big data. Section V analyzes the computational complexity and discusses the relationship to other research, and Section VI describes the experimental study. We reach a conclusion in Section VII.
II. RELATED STUDIES

A. Graph-based semi-supervised learning (GSSL)

Suppose an undirected graph is denoted as G = (V, E, W), where V is the set of vertices, E the set of edges, and W: E → R the mapping from an edge to a real number (usually defined as the similarity between the two end points). GSSL takes the input data as the vertices V of the graph, and places an edge ei,j between two vertices (vi, vj) if (vi, vj) are similar or correlated. The basic idea of GSSL is propagating the labels of the labeled samples Xl to the unlabeled Xu with the constructed graph. The propagation strength between vi and vj on each edge is in proportion to the weight Wi,j.

Almost all the existing GSSL works rest on two fundamental assumptions. One is called the "clustering assumption", meaning that the samples in the same cluster should have the same labels. The clustering assumption is usually applied to the labeled sample set. The other is called the "manifold assumption", which means that similar (or neighboring) samples should have similar labels. The manifold assumption is applied to both the labeled and unlabeled data.

Starting from the two assumptions, GSSL usually aims at optimizing an objective function with two terms. However, the concrete components in different GSSL models vary. For example, in [5] the objective function is

min_F (1/2) Σ_{i,j=1}^{l+u} Wij || Fi/√di − Fj/√dj ||^2 + µ Σ_{i=1}^{l} ||Fi − Yi||^2,    (1)

where F is the label indication matrix; di is the sum of the i-th row of W; Yi is the label of the i-th labeled datum.

Liu proposed an Anchor Graph Regulation (AGR) approach to predict the label for each data point as a locally weighted average of the labels of anchor points [1]. In AGR, the objective function is

min_{A=[a1,...,ac]} Q(A) = (1/2) Σ_{j=1}^{c} ||Zl aj − yj||^2 + γ Σ_{j=1}^{c} ΩG(Z aj),    (2)

where Z is the regression matrix that describes the relationship between raw samples and anchors; ΩG(f) = (1/2) f^T L f, and L = D − W is the Laplacian matrix.

Wang proposed a hierarchical AGR method to address the granularity dilemma in AGR, by adding a series of intermediate granular anchor layers between the finest original data and the coarsest anchor layer [3].

One can see that the underlying philosophy is still the two assumptions. Slightly different from the two assumptions, Ni proposed a novel concept, graph harmoniousness, which integrates feature learning and label learning into one framework (the framework of Learning by Propagability, FLP) [2]. The objective function has only one term in FLP, yet it also needs to reach a local optimal solution by alternately running an iterative optimizing procedure on two variables.

B. Optimal leading forest

X = {x1, x2, ..., xN} denotes the dataset, I = {1, 2, ..., N} is the index set of X, and di,j is the distance (under any metric) between xi and xj.

Definition 1. Local density. [6] The local density of xi is computed as ρi = Σ_{j∈I\{i}} e^{−(di,j/dc)^2}, where dc is the cut-off distance or band-width parameter.

Definition 2. Leading node and δ-distance. If xli is the nearest neighbor with higher local density to xi, then xli is called the leading node of xi. Formally, li = argmin_j {di,j | ρj > ρi}, denoted as xli = η(xi) for short. di,li is called the δ-distance of xi, or simply δi.

We store all the xli in an array named LN.

Definition 3. Leading tree (LT) [4]. Let ρr = max_{1≤i≤N} {ρi} and xli = η(xi). Let an arrow start from xi, i ∈ I\{r}, and end at xli. Then X and the arrows form a tree T. Each node xi in T (except xr) tends to be led by xli to join the same cluster xli belongs to, unless xi itself makes a center. Such a tree is called a leading tree.

Definition 4. η operator [7]. For any non-root node x in an LT, there is a leading node p for x. This mapping is denoted as η(x) = p. We write η(η(· · · η(•))) = η^n(•) for n applications of η.

Definition 5. Partial order in LT [7]. Suppose xi, xj ∈ X; we say xi ≺ xj iff ∃m ∈ N+ such that xj = η^m(xi).

Definition 6. Center potential. Let γi denote the potential of xi to be selected as a center; γi is computed as γi = ρi ∗ δi.

Intuitively, if an object xi has a large ρi (meaning it has many near neighbors) and a large δi (meaning it is relatively far from another object of larger ρ), then xi would have a great chance to be the center of a collection of data.
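Definitions 1-6 translate almost verbatim into code. The following sketch (an illustration, not the authors' implementation) computes the local densities ρ, the leading-node array LN, the δ-distances, and the center potentials γ = ρ ∗ δ with a brute-force O(N²) distance matrix; a root, i.e., a point with no denser neighbor, is marked by -1.

```python
import numpy as np

def leading_tree(X, dc):
    """Leading-tree quantities: rho (Definition 1), LN and delta
    (Definition 2), and gamma = rho * delta (Definition 6)."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    rho = np.exp(-(D / dc) ** 2).sum(axis=1) - 1.0   # drop the j == i term
    LN = np.full(n, -1)                              # -1 marks a root
    delta = np.full(n, D.max())                      # convention for roots
    for i in range(n):
        higher = np.flatnonzero(rho > rho[i])        # strictly denser points
        if higher.size:
            LN[i] = higher[np.argmin(D[i, higher])]  # nearest denser neighbor
            delta[i] = D[i, LN[i]]
    return rho, LN, delta, rho * delta
```

Cutting the tree at the nodes with the largest γ then produces candidate subtrees, which is the raw material that LoDOG evaluates when choosing the optimal number of granules.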
Pedrycz proposed the principle of justifiable granularity, indicating that a good information granule (IG) should have sufficient experimental evidence and specific semantics [8], [9],
[10]. That is, there should be as many as possible data points
included in an IG, and the closure of the IG should be compact
and tight in geometric perspective.
Following this principle, we have proposed a local density
based optimal granulation (LoDOG) method to build justifiable granules accurately and efficiently [11]. In LoDOG, we
construct the optimal IGs of X by disconnecting the corresponding Leading tree into an optimal number of subtrees. The
optimal number Ngopt is derived via minimizing the objective
function:
points merged in the fat node xi in the subtree, if the node
is derived as an information granule, after some granulation
methods such as local sensitive hashing (LSH) (e.g., [12] [13])
or others. If there is no granulation performed before the LT
construction, all popi are assigned with constant 1.
LaPOLeaF is designed to consist of three stages after the
OLF has been constructed, namely, from children to parent
(C2P), from root to root (R2R), and from parent to children
(P2C). The idea of these stages is illustrated in Figure 2.
x5
L1
min Q(Ng ) = α ∗ H(Ng ) + (1 − α)
α
X
r1
DistCost(Ωi ), (3)
L2
x6
r2
Ng
x3
x4
L1
x7
x8
x9
L2
i=1
where DistCost(Ωi ) =
|ΩP
i |−1
L3
{δj |xj ∈ Ωi \R(Ωi )}.
j=1
Here, Ng is the number of IGs; α is the parameter striking
a balance between the experimental evidence and semantic;
Ωi is the set of points included in ith granule; | • | returns the
cardinality of a set; H(•) is a strictly monotonically increasing
function used to adjust the magnitude of Ng to well match
Ng
P
that of
DistCost(Ωi ). This function can be automatically
i=1
selected from a group of common functions such as logarithm
functions, linear functions, power functions, and exponential
functions; R(Ωi ) is the root of the granule Ωi as a leading
tree.
We used LoDOG to construct the Optimal Leading Forest
(OLF) from the dataset. The readers are referred to [11] for
more details of LoDOG.
x2
x10 x11
x1
(a) From children to parent
(b) From root to root
L3
(c) From parent to children
Fig. 2: Diagram of non-iterative label propagation on the
subtrees of an FNLT. (a) x3 gets its label as the weighted
summation of the L2 and L1 , and the label of x5 is computed
likely in a cascade fashion. (b) r1 is the root of an unlabeled
subtree. In this situation, we have to borrow the label for r1
from r2 . If r2 is not labeled either, this “borrow” operation
will be transitively carried out (see Section III-A2 for details).
(c) After the previous two stages, all roots of the subtrees are
guaranteed being labeled. Then also under the guidance of 4,
all the unlabeled children will get their label information in a
top-down fashion.
To decide the layer number for each node, one can easily
design a hierarchical traverse algorithm (see Appendix) for the
sub-leading-tree.
Definition 7. optimal leading forest (OLF). Ngopt leading
trees can be constructed from the dataset X by using LoDOG
method. All the leading trees are collectively called optimal
leading forest.
III. LABEL PROPAGATION ON OPTIMAL LEADING FOREST (LAPOLEAF)

LaPOLeaF first performs a global optimization to construct the OLF, and then performs label propagation on each of the subtrees. Following the aforementioned partial-order assumption, the relationship between the children and their parent is formulated as (4), and each stage of the label propagation of LaPOLeaF is guided by this formula.

A. Three key stages of label propagation in LaPOLeaF

The concept of the OLF is used to determine the localized ranges of label propagation on the whole leading tree of X. That is, the OLF indicates where to stop propagating the label of a labeled datum to its neighbors.

Definition 8. unlabeled (labeled) node. A node in a subtree of the OLF is an unlabeled node (or the node is unlabeled) if its label vector is a zero vector. Otherwise, i.e., if its label vector has at least one element greater than zero, the node is called a labeled node (or the node is labeled).
L_p = (Σ_i W_i · L_i) / (Σ_i W_i), where W_i = pop_i / dist(i, p).    (4)
Here L_p is the label vector of the parent for a K-class classification problem, and L_i is the label vector of the i-th child w.r.t. the current parent. A vector whose k-th element equals one and all other elements equal zero represents the class label of the k-th class, 1 ≤ k ≤ K. For regression problems, L_i and L_p are simply scalar values. pop_i is the population of the raw data points represented by the i-th child.
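The computation in (4) can be sketched as follows. This is a minimal illustration with our own variable names, not the authors' released code:

```python
import numpy as np

def parent_label(child_labels, populations, distances):
    """Parent label per (4): a population-weighted, distance-discounted
    average of the children's label vectors."""
    W = np.asarray(populations, float) / np.asarray(distances, float)
    L = np.asarray(child_labels, float)        # shape: (num_children, K)
    return (W[:, None] * L).sum(axis=0) / W.sum()

# Two children in a 3-class problem: one labeled with class 0 (population 2),
# one still unlabeled (zero vector), both at distance 1 from the parent.
Lp = parent_label([[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]], [2, 1], [1.0, 1.0])
```

The unlabeled child contributes nothing to the numerator, so the parent's soft label is dominated by the labeled child.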
1) From children to parent:
Definition 9. unlabeled (labeled) subtree. A subtree in the OLF is called an unlabeled subtree (or the subtree is unlabeled) if no node in this tree is labeled. Otherwise, i.e., if the leading tree contains at least one labeled node, this tree is called a labeled subtree (or the subtree is labeled).
Since the label of a parent is regarded as the contribution of its children, the propagation process is required to start from the bottom of each subtree. The label vector of an unlabeled child is initialized as the zero vector, so it does not contribute to the label of its parent. Once the layer index of each node is ready, the bottom-up propagation can be executed in parallel over the labeled subtrees.
Proposition 1. After C2P propagation, the root of a labeled
subtree must be labeled.
Proof. According to the definitions of labeled node and
labeled subtree, and the procedure of C2P propagation, a
IEEE TRANSACTIONS ON XXXX
parent is labeled if at least one of its children is labeled after the corresponding round of the propagation. The propagation proceeds sequentially in the bottom-up direction, and the root is the parent at the top layer. Therefore, the proposition holds.
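The layer-by-layer bottom-up pass described above can be sketched as follows. This is our own illustration (data layout and names are assumptions, not the authors' code): parents are processed from the deepest layer upward, each updated from its children via (4):

```python
import numpy as np

def c2p_propagate(children, labels, pops, dists, layers):
    """Bottom-up C2P pass over one subtree. children[p] lists the children
    of node p; layers maps node -> layer index (root = 1); unlabeled nodes
    hold zero label vectors; pops/dists give pop_i and dist(i, parent)."""
    for p in sorted(children, key=lambda n: layers[n], reverse=True):
        kids = children[p]
        if not kids:
            continue
        W = np.array([pops[c] / dists[c] for c in kids])
        L = np.array([labels[c] for c in kids], dtype=float)
        if L.any():  # at least one labeled child contributes
            labels[p] = (W[:, None] * L).sum(axis=0) / W.sum()
    return labels

# A 3-node chain root <- a <- b, where only the deepest node b is labeled.
chain = {"root": ["a"], "a": ["b"], "b": []}
out = c2p_propagate(chain,
                    {"root": [0.0, 0.0], "a": [0.0, 0.0], "b": [0.0, 1.0]},
                    pops={"a": 1, "b": 1}, dists={"a": 1.0, "b": 1.0},
                    layers={"root": 1, "a": 2, "b": 3})
```

The label climbs from b through a to the root, consistent with Proposition 1.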
2) From root to root: If the labeled data are rare or unevenly distributed, there may be some unlabeled subtrees. In such a case, we must borrow some label information from the labeled subtrees. Because the label of a root is more stable than those of other nodes, the root of an unlabeled subtree r_u should borrow label information from the root of a labeled subtree r_l. However, r_l must satisfy some requirements: to keep consistency with our partial-order assumption, r_l is required to be superior to r_u and to be the nearest such root to r_u. Formally,
Fig. 3: Illustration of the virtual-parent idea in the P2C propagation stage. (a) A parent and 4 children (the first two are labeled, the last two are unlabeled). (b) To compute the labels for x3 and x4, the labeled nodes (xp, x1 and x2) are replaced by a virtual parent xp′.
r_l = arg min_{r_i ∈ R_L} { dist(r_u, r_i) | r_u ≺ r_i },    (5)

where R_L is the set of labeled roots.

If there exists no such r_l for a particular r_u, we can conclude that the root r_T of the whole leading tree constructed from X (before splitting into a forest) is not labeled. So, to guarantee that every unlabeled root can successfully borrow a label, we only need to guarantee that r_T is labeled.

If r_T is unlabeled after C2P propagation, we apply the label-borrowing trick to r_T as well. However, there is no other root satisfying r_T ≺ r_l, so we modify (5) a little to borrow a label from r_T^l for r_T:

r_T^l = arg min_{r_i ∈ R_L} dist(r_T, r_i).    (6)

Algorithm 1: LaPOLeaF
Input: Dataset X = X_l ∪ X_u
Output: Labels for X_u
1  Part 1: // Preparing the OLF
2  Compute the distance matrix Dist for X;
3  Compute the local density ρ (Definition 1);
4  Compute the leading nodes LN and the δ-distance δ (Definition 2);
5  Compute the representation power γ using (6);
6  Split the leading tree into the OLF using objective function (3); return the roots RT and the node set of each subtree;
7  Build an adjacency list for each subtree;
8  Part 2: // Label propagation on the OLF
9  Decide the layer number of each node using a hierarchical traverse approach (see Appendix);
10 C2P propagation using (4);
11 R2R propagation using (5) and (6);
12 P2C propagation using (7);
13 Return the labels for X_u.
The R2R propagation is executed for the unlabeled roots in
the representation-power-ascending order.
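The borrowing rule in (5) can be sketched as follows. This is a minimal illustration under our own assumed interfaces (a `dist` function and a `superior` predicate encoding the partial order):

```python
def r2r_borrow(unlabeled_roots, labeled_roots, dist, superior):
    """R2R stage sketch: each unlabeled root borrows the label of the
    nearest labeled root superior to it, as in (5). superior(a, b) is
    True when b is superior to a in the partial order."""
    owners = {}
    for ru in unlabeled_roots:
        candidates = [rl for rl in labeled_roots if superior(ru, rl)]
        if candidates:
            owners[ru] = min(candidates, key=lambda rl: dist(ru, rl))
    return owners

# Toy example: superiority decided by local density, distance on a line.
density = {"ru": 0.5, "r1": 0.9, "r2": 0.8}
pos = {"ru": 3.0, "r1": 1.0, "r2": 10.0}
owners = r2r_borrow(["ru"], ["r1", "r2"],
                    dist=lambda a, b: abs(pos[a] - pos[b]),
                    superior=lambda a, b: density[b] > density[a])
```

Here both labeled roots are superior to "ru", and the nearer one, "r1", becomes the label owner.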
3) From parent to children: After the previous two stages, all root nodes of the subtrees are labeled. In the P2C propagation, the labels are propagated in a top-down fashion, i.e., sequentially from the top layer to the bottom layer, and this process can be parallelized over the independent subtrees.
We need to consider two situations. (a) For a parent x_p, all m children x_i, 1 ≤ i ≤ m, are unlabeled. Here, we simply assign L_i = L_p, because this assignment directly satisfies (4) no matter what value each W_i takes. (b) For a parent x_p, assume without loss of generality that the first m_l children are labeled and the other m_u children are unlabeled. In this situation, we generate a virtual parent x_{p′} to replace the original x_p and the m_l labeled children. Using (4), we have
L_{p′} = L_p − (Σ_{j=1}^{m_l} W_j · L_j) / (Σ_{j=1}^{m_l} W_j).    (7)
Then, the m_u unlabeled children can be assigned the label L_{p′} as in the first situation. The concept of the virtual parent is illustrated in Fig. 3.
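The arithmetic of (7) can be sketched directly. This is our own illustration of the formula as printed (variable names are assumptions):

```python
import numpy as np

def virtual_parent_label(Lp, labeled_child_labels, labeled_child_weights):
    """Label of the virtual parent x_{p'} per (7): subtract the weighted
    average of the m_l labeled children's labels from L_p. The remaining
    unlabeled children then receive L_{p'} directly."""
    W = np.asarray(labeled_child_weights, dtype=float)
    L = np.asarray(labeled_child_labels, dtype=float)
    return np.asarray(Lp, dtype=float) - (W[:, None] * L).sum(axis=0) / W.sum()

# Parent label [0.5, 0.5]; two labeled children with equal weights whose
# weighted average is also [0.5, 0.5].
Lp_virtual = virtual_parent_label([0.5, 0.5], [[0.4, 0.6], [0.6, 0.4]], [1.0, 1.0])
```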
B. LaPOLeaF algorithm
We present the overall algorithm of LaPOLeaF here, including some basic information about OLF construction.
C. An example on the double-moon dataset

We generate a double-moon dataset of 600 data points, 300 in each moon, to illustrate the main stages of LaPOLeaF and to help the reader build an intuitive impression of the method. Five labeled points are randomly selected in each moon.

In the first step, the OLF with N_g^opt = 40 was constructed using the steps described in Part 1 of Algorithm 1 (Fig. 4a). Here, the parameters of LoDOG are set as {percent = 0.7, α = 0.5, H(x) = x}. The root of each subtree is marked with a yellow face and a red edge. It is easily observed that the edges in the OLF are much sparser than those in other GSSL methods based on nearest neighbors [2], [5].
In the C2P propagation (Fig. 4b), the nodes in the subtrees are first tagged with their layer index. The subtree with the greatest height has 14 layers. After the bottom-up label propagation, the root of each labeled subtree becomes labeled, and the other nodes on the path from each initially labeled node to its corresponding root are labeled as well. There are 44 labeled nodes at this point, yet the unlabeled subtrees remain unchanged.

Fig. 4c shows that, in the R2R propagation stage, each unlabeled root borrows the label from its nearest neighboring root with higher density. The green arrows show the label-borrowing information, with the arrow head indicating the label owner.
The P2C propagation can be fully parallelized, because all the roots in the OLF are labeled and the propagation within each subtree is independent of the others. Following the discussion in Section III-A3, all the unlabeled non-root nodes are then labeled (as in Fig. 4d).

Fig. 4: An illustrative example of LaPOLeaF on the double-moon dataset. (a) The OLF constructed from the dataset. (b) C2P propagation. (c) R2R propagation. The green arrows indicate the borrower and the owner when an unlabeled root borrows a label from another root; all the roots are labeled after this stage. (d) P2C propagation. The color saturation reflects the value of the maximal element in a label vector: the closer to 1 the value is, the higher the saturation.

D. Deriving the label for a new datum

A salient advantage of LaPOLeaF is that it can obtain the label for a new datum (let us denote this task as LXNew) in O(n) time, because (a) the leading tree structure can be incrementally updated in O(n) time and the LoDOG algorithm can find N_g^opt in O(n) time, so the OLF can be updated in O(n) time; and (b) the label propagation on the OLF takes O(n) time.

The interested reader can refer to our previous work [7], in which we provided a detailed description of the algorithm for incrementally updating the fat-node leading tree, together with a proof of its correctness.

IV. SCALABILITY OF LAPOLEAF

To scale LaPOLeaF to the big-data context, we propose two approaches. One uses a parallel computing platform and a divide-and-conquer strategy to obtain an exact solution; the other is an approximate approach based on Locality-Sensitive Hashing (LSH).

A. Divide and conquer approach

With the divide-and-conquer strategy, the problem has three aspects. (a) Computing the distance matrix has O(N^2) time complexity. (b) The computation of ρ_i and δ_i needs access to a row of elements of the whole distance matrix, so computing ρ and δ for all data also has O(N^2) complexity. (c) The distances between the centers should be prepared in advance for the R2R propagation stage, since the memory of a single computer cannot accommodate the whole distance matrix of a large dataset, and the distances between centers cannot be retrieved directly from the whole distance matrix. Apart from these three parts, the other steps in LaPOLeaF are all linear in N and usually can run on a single machine.

1) Compute the distance matrix in parallel: The distance matrix puts a considerable burden on both computation time and memory capacity. As an example, the required memory for the distance matrix of 100,000 samples is over 37 GB, even when each distance is stored as a 4-byte float.

Here, we propose a divide-and-conquer method for exactly (not approximately) computing a large distance matrix, whose idea is illustrated in Fig. 5.

Fig. 5: (a) If the whole dataset is divided into two subsets, then the distances between any pair of points can be computed through three parts. The first two parts correspond to the full connections within the two subgraphs, respectively (green and red curves), and the third part corresponds to the full connections within the complete bipartite graph (black lines). (b) The number of subsets is generalized from 2 to B. Note that although we try to balance the size of each subset, the N_i are not necessarily equal; therefore, while D_{i,i} is always a square matrix, D_{i,j} for i ≠ j may not be square.
Although computing the distance matrix is of O(N^2) complexity, the positive message is that the mainstream CPU manufacturers (such as Intel and AMD) and scientific computing software (such as Matlab and R) have made great efforts to accelerate matrix operations. For the l2-norm distance metric, instead of computing the distances between the objects pair by pair, we formulate the distances for the full connections between the two parts of a bipartite graph as in Theorem 2. For a small dataset of 1,000 instances and 8 attributes on Matlab 2014a, matrix computation runs about 20 times faster than pairwise distance computation.
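This vectorized formulation (Theorem 2 below) can be reproduced with a small NumPy sketch of our own, computing the full bipartite distance block in one matrix expression and checking it against pairwise computation:

```python
import numpy as np

def bipartite_distances(P, Q):
    """Distances between every row of P (m x d) and every row of Q (n x d):
    element-wise square root of
    P.^2 @ 1_{d x n} + 1_{m x d} @ (Q.^2)^T - 2 P Q^T."""
    m, d = P.shape
    n = Q.shape[0]
    D_tilde = (P**2) @ np.ones((d, n)) + np.ones((m, d)) @ (Q**2).T - 2 * P @ Q.T
    return np.sqrt(np.maximum(D_tilde, 0))  # clamp tiny negatives from round-off

rng = np.random.default_rng(0)
P, Q = rng.normal(size=(5, 3)), rng.normal(size=(4, 3))
pairwise = np.array([[np.linalg.norm(p - q) for q in Q] for p in P])
assert np.allclose(bipartite_distances(P, Q), pairwise)
```

In practice the matrix expression benefits from BLAS-accelerated matrix multiplication, which is the source of the speedup reported above.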
Theorem 2. The Euclidean distance matrix for the full connections within a complete bipartite graph is given by the element-wise square root of D̃, where D̃ is computed via

D̃ = P^{.2} × 1_{d×n} + 1_{m×d} × (Q^{.2})^⊤ − 2 P × Q^⊤,    (8)

where P and Q are the matrices formed with the data points (as row vectors) in each part of the bipartite graph, whose sizes are m × d and n × d, respectively, and P^{.2} denotes the element-wise square of P.
Proof. Considering an element D̃_{i,j}, we can write

D̃_{i,j} = Σ_{k=1}^{d} p_{i,k}^2 + Σ_{k=1}^{d} q_{j,k}^2 − 2 Σ_{k=1}^{d} p_{i,k} q_{j,k} = Σ_{k=1}^{d} (p_{i,k} − q_{j,k})^2.    (9)

Thus, the theorem is proven.

Ideally, the distance matrix of arbitrarily-sized data can be computed in this way, provided there are enough computers. However, if the computers are not adequate for the dataset at hand, one can turn to the second approach, LSH.

TABLE I: Complexity comparison.
Methods  | Graph construction | Label propagation
LLGC     | O(n^2)             | O(n^3)
FLP      | O(n^2)             | O(T_1 K n^2 + T_2 K^2 n^2)
AGR      | O(T m n)           | O(m^2 n + m^3)
HAGR     | O(T m_h n)         | O(m_h^2 n + m_h^3)
LaPOLeaF | O(n^2)             | O(n)

2) Computing ρ, δ and nneigh in parallel: Because of the additive characteristic of the local density ρ_i, the whole ρ vector can be computed in a fully parallel fashion when the whole matrix is split into belts stored separately on different computing nodes. Suppose there are B blocks of the distance matrix for N samples; then we have

ρ = Σ_{b=1}^{B} ρ^b = Σ_{b=1}^{B} [ρ^b_1, · · · , ρ^b_N],    (10)

ρ^b_i = Σ_{j=1}^{N} exp(−(D_b(i, j) / d_c)^2), i = 1, ..., N/B,    (11)

where ρ^b is the local density vector of N elements w.r.t. distance matrix block b, and D_b(i, j) is the (i, j) element of the b-th distance matrix block.

Unlike on a single computer, where δ can be computed with the guidance of the sorted ρ, computing each δ_i in parallel has to access the whole ρ and all D_{floor(i/bSize)+1}(mod(i, bSize), j) for 1 ≤ j ≤ N, with bSize = N/B.

3) Prepare the distance matrix for centers: The R2R propagation stage needs access to the distances between any pair of centers (roots of the subtrees in the OLF), denoted as dist_Centers. If the distance matrix is stored on a centralized computer, dist_Centers can be extracted directly from the whole distance matrix D. However, when D is stored in a distributed system, it is divided into k blocks, denoted as D_b, b = 1, ..., k. Each D_b is stored on a different computing node, and the index range of the instances whose distances are contained in D_b is [(b−1)·bSize+1, b·bSize]. Usually we have bSize = N/k, except for the last matrix block.

Therefore, to extract dist_Centers from the distributed D, one first sorts the centers according to their indices in ascending order, and then gets the distance entry between Center i and Center j via

dist_Centers(i, j) = D_{floor(i/bSize)+1}(mod(i, bSize), j).    (12)

By sorting the centers, each distance matrix block needs to be accessed only once to obtain dist_Centers.

B. Approximate approach with LSH

As mentioned above, if there are not adequate computers for a given large dataset to run exact LaPOLeaF, it is reasonable to merge closely neighboring data points into one bucket by employing LSH techniques [12]–[14]. The basic idea of LSH is that nearby neighbors have a high probability of sharing the same hash code (viz. colliding with each other), while far-away data points are unlikely to collide.

For different distance metrics, we need different hash functions. For the l2-norm, the hash function is [12]

h_{a,b}(v) = ⌊(a · v + b) / r⌋,    (13)

where a is a random vector and b is a random real number sampled uniformly from the interval (0, r]. For angular similarity, the hash function can be [14]

h_v(x) = sgn(v^⊤ x),    (14)

where v is a random vector and sgn(•) is the sign function. Ji et al. [13] improved the work in [14] by introducing the Gram–Schmidt orthogonalization process to the random vector group, forming a representing unit named a "super-bit".

After running the LSH algorithm on the original dataset, the instances are put into many buckets. Each bucket is then treated as a fat node by LaPOLeaF, and the number of data points lying in the i-th bucket is the pop_i in (4).
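The two hash families in (13) and (14) can be sketched as follows. This is a minimal, self-contained illustration; bucketing tables and the super-bit orthogonalization are omitted:

```python
import numpy as np

def l2_hash(v, a, b, r):
    """p-stable LSH for the l2-norm, as in (13): h_{a,b}(v) = floor((a.v + b) / r)."""
    return int(np.floor((a @ v + b) / r))

def angular_hash(x, V):
    """Sign-random-projection hash, as in (14): one sign bit per random
    vector in V; stacking the bits of an orthogonalized vector group
    would form a super-bit [13]."""
    return tuple(bool(t) for t in (np.sign(V @ x) >= 0))

# Deterministic toy parameters instead of random draws, for illustration.
a, b, r = np.array([1.0, 0.0]), 0.5, 1.0
bucket = l2_hash(np.array([0.2, 0.0]), a, b, r)   # floor(0.7 / 1) = 0
bits = angular_hash(np.array([1.0, 1.0]),
                    np.array([[1.0, 0.0], [0.0, -1.0]]))
```

Points that are close in the chosen metric are likely to receive the same bucket index or bit pattern, which is what allows merging them into one fat node.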
V. TIME COMPLEXITY AND RELATIONSHIP TO RELATED WORKS

A. Complexity analysis
By investigating each step in Algorithm 1, we find that, except for the calculation of the distance matrix, which requires exactly n(n − 1)/2 basic operations, all other steps in LaPOLeaF have time complexity linear in the size of X. Compared to LLGC [5], FLP [2], AGR [1], and HAGR [3], LaPOLeaF is much more efficient, as listed in Table I. In Table I, n is the size of X; T is the number of iterations; K is the number of classes; and m_h is the number of points on the h-th layer. The empirical evaluation in Section VI verifies this analysis.

Please note that although we write O(n^3) for the straightforward computation of the matrix inverse, the complexity could be reduced to O(n^2.373) [15], [16].
It is also worthwhile to compare the efficiency of LXNew. For LaPOLeaF, LXNew requires only linear time w.r.t. the size of the existing OLF. For the traditional GSSL methods, however, LXNew requires as much running time as labeling all data points in X.
TABLE II: Information of the datasets in the experiments
Dataset  | # Instances | # Attributes | # Classes
Iris     | 150         | 4            | 3
Wine     | 178         | 13           | 3
Yeast    | 1,484       | 8            | 8
MNIST    | 70,000      | 784          | 10
Activity | 43,930,257  | 16           | 6
B. Relationship discussion

The label propagation in LaPOLeaF is a heuristic algorithm without an explicit optimization objective, hence it offers no mathematical guarantee of achieving the best solution in this stage. However, we argue that the optimization has been moved forward to the OLF construction stage. Since we obtain an optimal partially ordered structure of the whole dataset, we believe that an iterative optimization which regards the data as being in a peer-to-peer relation is no longer compulsory. This is the difference between LaPOLeaF and other GSSL methods.

Meanwhile, LaPOLeaF can be regarded as an improved version of K-NN. In K-NN, the K nearest neighbors are considered as a spherical-shaped information granule, and the unlabeled data are assigned labels with a voting strategy. The parameter K is set by the user, and the result is quite sensitive to the choice of K. By contrast, in LaPOLeaF the information granules are arbitrarily-shaped leading trees, and the size of each tree is decided automatically by the data and LaPOLeaF, so the sizes usually differ. Because the OLF better captures the nature of the data distribution and the label propagation is reasonably designed, LaPOLeaF consistently outperforms K-NN.
VI. EXPERIMENTAL STUDIES

The efficiency and effectiveness of LaPOLeaF are evaluated on five real-world datasets, among which three are small datasets from the UCI machine learning repository and the other two are larger in scale. The information of the datasets is shown in Table II. The 3 small datasets are used to demonstrate the effectiveness of LaPOLeaF, and the other two are used to show the scalability of LaPOLeaF through parallel computing and Locality-Sensitive Hashing (LSH).

The experiments on the small datasets and the Activity data are conducted on a personal computer with an Intel i5-2430M CPU and 16 GB DDR3 memory. The MNIST data is learned both on the PC and on a Spark cluster of eight workstations.
A. UCI small datasets
With the 3 small-sized UCI datasets, namely Iris, Wine, and Yeast, we show that LaPOLeaF achieves competitive accuracy with much higher efficiency, compared with the classical semi-supervised learning methods Linear Discriminant Analysis (LDA) [17], Neighborhood Component Analysis (NCA) [18], Semi-supervised Discriminant Analysis (SDA) [19], and the Framework of Learning by Propagability (FLP) [2]. The parameter configuration and some experimental details, such as the chosen distance metric and the preprocessing method, for all 5 datasets are listed in Table III.
Fig. 6: Objective function value vs. the number of information granules in the LoDOG method for the MNIST data, for α = 0.3, 0.35, and 0.4 (annotated points at N_g = 65, 153, and 405).
The accuracies of the competing models on the 3 datasets are shown in Table IV, from which one can see that LaPOLeaF achieves the best accuracy twice, and its accuracy is comparable even on the Wine dataset, where FLP wins.

The main purpose of LaPOLeaF is not to improve the accuracy of GSSL, but to improve its efficiency by getting rid of the paradigm of iteratively optimizing an objective function to its minimum. LaPOLeaF exhibits very high efficiency; for example, it completes the whole SSL process for the Iris dataset within 0.27 seconds on the aforementioned personal computer.
B. MNIST dataset
The MNIST dataset contains 70,000 handwritten digit images in total, about 7,000 for each digit ('0'–'9'). To emphasize the effectiveness of LaPOLeaF itself, we directly use the original pixel data as the learning features, as in [20]. Since the distance matrix is oversized, we applied the divide-and-conquer technique described in Section IV-A. The whole dataset X is equally divided into 7 subsets, so the size of each distance matrix block is 10,000 × 70,000.
After computing the matrix blocks D_b and the vector group {ρ, δ, nneigh}, the OLF of X can be constructed by running the LoDOG algorithm on a single machine. The parameters and some intermediate results for the two datasets MNIST and Activity are detailed in the last two rows of Table III. The objective function values for choosing N_g^opt on the MNIST data are shown in Fig. 6.
10 labeled samples are randomly chosen from each digit, and the accuracies achieved by LaPOLeaF and the state-of-the-art method Hierarchical Anchor Graph Regularization (HAGR) [20] are listed in Table V.
One can see that LaPOLeaF achieves competitive accuracy on the MNIST data. However, the highlight of LaPOLeaF is its efficiency: it completes the whole learning process, including the OLF construction and the three stages of label propagation, within 48 minutes on the personal computer. The time consumption is detailed in Table VI.
C. Activity dataset
The Activity dataset is from the domain of Human Activity
Recognition [21]. It includes the monitoring data sampled
TABLE III: Parameter configuration for the 5 datasets
Dataset  | percent | α    | H(x)                | N_g^opt | |X_l| | Preprocessing | Distance
Iris     | 2       | 0.25 | x + 30              | 8       | 6     | Z-score       | Euclidean
Wine     | 2       | 0.4  | x^1.2               | 8       | 6     | NCA DR†       | Euclidean
Yeast    | 5       | 0.1  | 55 + 2x             | 7       | 16    | Z-score       | Cosine
MNIST    | 10      | 0.3  | 2.97 × 10^5 + x^1.2 | 405     | 100   | none          | Euclidean
Activity | 8       | 0.3  | 0.2x                | 347     | 16    | see Fig. 7    | Cosine
†: DR is the abbreviation of dimensionality reduction.
TABLE IV: Accuracy comparison on the 3 small datasets.
Method   | Iris        | Wine       | Yeast
LDA      | 66.91±25.29 | 62.05±     | 19.00±8.70
NCA      | 92.28±3.24  | 83.10±9.70 | 32.76±6.32
SDA      | 89.41±5.40  | 90.89±5.39 | 37.00±6.89
FLP      | 93.45±3.09  | 93.13±3.32 | 40.03±5.40
LaPOLeaF | 94.86±4.57  | 90.68±5.32 | 42.28±2.36
TABLE V: Accuracies of HAGR and LaPOLeaF on MNIST
Method   | HAGR_base  | HAGR       | LaPOLeaF
Accuracy | 79.17±1.39 | 88.66±1.23 | 84.92±2.35
by the accelerometers and gyroscopes built into smart phones and smart watches. Since there are many different models of phones and watches, whose sampling frequencies and accuracies therefore differ, the collected data are heterogeneous. The dataset contains 43,930,257 observations with 16 attributes each.
1) Preprocessing: Because the raw data are separated into 4 comma-separated values (.csv) files, we first perform the preprocessing shown in Fig. 7.
Fig. 7: Preprocessing flow chart of the Activity data: Align & merge → ECDF feature → Normalization → SB-LSH.
i) The records from the four .csv files are aligned and merged into one file. We use the subject ID and equipment ID to align the data from the different files, and the differences in sampling frequency are dealt with by interpolation.

ii) The empirical cumulative distribution function (ECDF) feature has been reported to outperform FFT and PCA features in the HAR task [22], so we compute the ECDF features from the original data and use them in the subsequent learning. Conventionally, the time frame is set to 1 second with an overlapping ratio of 50%. Since the major sampling frequency is 200 Hz, we include 200 observations in one frame to compute the ECDF, and move forward 100 observations after one row of ECDF features is computed. In this way, the time granularity of activity recognition is half a second. The segment number is set to 5, so the resulting dimensionality of the feature is 6 × 5 = 30. The ECDF step has reduced the size of the Activity data to 439,302 rows (about 1% of the original).
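One common way to build such an ECDF feature is to sample each axis's empirical inverse CDF (i.e., its quantiles) at a fixed number of probabilities. The following sketch follows that spirit; the exact variant used here may differ from [22] in details such as whether the per-axis mean is appended:

```python
import numpy as np

def ecdf_feature(frame, n_components=5):
    """ECDF feature for one frame (a sketch): for each sensor axis, sample
    the empirical inverse CDF at n_components equally spaced probabilities.
    A frame of shape (n_samples, n_axes) yields n_axes * n_components features."""
    probs = np.linspace(0, 1, n_components)
    return np.concatenate([np.quantile(frame[:, k], probs)
                           for k in range(frame.shape[1])])

# One 1-second frame: 200 samples over 6 axes (3 accelerometer + 3 gyroscope),
# producing 6 * 5 = 30 features as in the setting above.
frame = np.arange(1200.0).reshape(200, 6)
features = ecdf_feature(frame)
```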
TABLE VI: Running time of each stage on MNIST
Stage   | D_b | ρ    | δ   | LoDOG | LP$
Time(s) | 952 | 1483 | 436 | 15    | 3
$: LP is the abbreviation of label propagation.
iii) Min–max normalization is used to normalize every column of the data.

iv) Because of the large number of samples and features, we employ LSH (specifically, SB-LSH [13]) to tackle both problems at the same time. With the SB-LSH method, we empirically set the depth of a super-bit to 5 and the number of super-bits to 6. LSH further reduces the number of data rows by merging the collided samples into the same bucket. For example, the resulting number of hash buckets is 771 for the subset of subject a and smart phone model Nexus4 1, compared with 3,237 rows of ECDF features. The number of samples contained in each hash bucket is treated as the weight for computing ρ in the OLF construction.
2) LaPOLeaF for the Activity data: After the preprocessing stage, the Activity data has been transformed into 84,218 rows of categorical features. Each row consists of a series of hash bucket numbers, and the weight of each row is the number of ECDF rows that share the same hash bucket number sequence. The distance matrix is computed based on the ECDF features in the bucket sequence rather than on the hash codes themselves, and the cosine distance is used. The parameters of the OLF are set as (α = 0.3, percent = 8, H(x) = 0.2x). This parameter configuration leads to 347 subtrees in the OLF.

With the constructed OLF, we run the C2P, R2R, and P2C propagation sequentially, taking 120 randomly selected (20 per class) labeled ECDF features as the labeled data X_l. The final accuracy achieved by LaPOLeaF is 86.27±3.36.
VII. CONCLUSIONS

The existing GSSL methods have two weaknesses: one is low efficiency due to the iterative optimization process, and the other is the inconvenience of predicting the labels of newly arrived data. This paper first made the assumption that neighboring data points are not in equal positions but lie in a partial-order relation, and that the label of a center can be regarded as the contribution of its followers. Based on this assumption and our previous work named LoDOG, a new non-iterative semi-supervised approach called LaPOLeaF is proposed. LaPOLeaF exhibits two salient advantages: (a) it has much higher efficiency than the state-of-the-art models while keeping the accuracy comparable; (b) it can deliver the labels for a few newly arrived data in O(N) time, where N is the number of the old data forming the optimal leading forest (OLF). To enable LaPOLeaF to accommodate big data, we proposed an exact divide-and-conquer approach and an approximate locality-sensitive hashing (LSH) method. Theoretical analysis and empirical validation have shown the effectiveness and efficiency of LaPOLeaF. We plan to extend LaPOLeaF in two directions: one is to apply it to real-world big data mining problems with tight time constraints, and the other is to improve the accuracy while keeping the high efficiency unchanged.
APPENDIX

We provide the algorithm for deciding the layer index of each node in a tree using a queue data structure.

Algorithm 2: Decide the layer index of nodes in a tree.
Input: The root T and the adjacency list AL of the tree.
Output: The layer indices of the nodes, LayerInd[n].
1  Initialize an empty queue theQue;
2  EnQue(theQue, T);
3  LayerInd[T] = 1;
4  while !IsEmpty(theQue) do
5    QueHead = DeQue(theQue);
6    if AL[QueHead] != NULL then
7      EnQue(theQue, AL[QueHead]);
8      LayerInd[AL[QueHead]] = LayerInd[QueHead] + 1;
9    end
10 end
11 Return LayerInd[];

The time complexity of Algorithm 2 is O(n), because the basic operations for each node are EnQue() and DeQue().
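This breadth-first traversal can be rendered in a few lines. The sketch below uses our own dictionary-based adjacency representation:

```python
from collections import deque

def layer_indices(root, children):
    """Python rendering of Algorithm 2: breadth-first traversal assigning
    each node its layer index (root = 1) in O(n) time."""
    layer = {root: 1}
    que = deque([root])
    while que:
        head = que.popleft()
        for child in children.get(head, []):
            layer[child] = layer[head] + 1
            que.append(child)
    return layer

# Example (sub)tree: root "r" with children "a" and "b"; "a" has child "c".
tree = {"r": ["a", "b"], "a": ["c"]}
```

Calling `layer_indices("r", tree)` assigns layer 1 to "r", layer 2 to "a" and "b", and layer 3 to "c".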
ACKNOWLEDGMENT
This work has been supported by the National Key Research and Development Program of China under grants
2016QY01W0200 and 2016YFB1000905, the National Natural Science Foundation of China under grant 61572091.
REFERENCES
[1] W. Liu, J. He, and S.-F. Chang, “Large graph construction for scalable
semi-supervised learning,” in Proceedings of the 27th international
conference on machine learning (ICML-10), pp. 679–686, 2010.
[2] B. Ni, S. Yan, and A. Kassim, “Learning a propagable graph for semisupervised learning: Classification and regression,” IEEE Transactions on
Knowledge and Data Engineering, vol. 24, no. 1, pp. 114–126, 2012.
[3] M. Wang, W. Fu, S. Hao, H. Liu, and X. Wu, “Learning on big
graph: Label inference and regularization with anchor hierarchy,” IEEE
Transactions on Knowledge and Data Engineering, vol. 29, no. 5,
pp. 1101–1114, 2017.
[4] J. Xu, G. Wang, and W. Deng, “DenPEHC: Density peak based efficient
hierarchical clustering,” Information Sciences, vol. 373, pp. 200–218,
2016.
[5] D. Zhou, O. Bousquet, T. N. Lal, J. Weston, and B. Schölkopf, “Learning
with local and global consistency,” in Advances in neural information
processing systems, pp. 321–328, 2004.
[6] A. Rodriguez and A. Laio, “Clustering by fast search and find of density
peaks,” Science, vol. 344, no. 6191, pp. 1492–1496, 2014.
[7] J. Xu, G. Wang, T. Li, W. Deng, and G. Gou, “Fat node leading tree for
data stream clustering with density peaks,” Knowledge-Based Systems,
vol. 120, pp. 99–117, 2017.
[8] W. Pedrycz and W. Homenda, “Building the fundamentals of granular
computing: A principle of justifiable granularity,” Applied Soft Computing, vol. 13, no. 10, pp. 4209–4218, 2013.
[9] W. Pedrycz, G. Succi, A. Sillitti, and J. Iljazi, “Data description: A
general framework of information granules,” Knowledge-Based Systems,
vol. 80, pp. 98–108, 2015.
[10] X. Zhu, W. Pedrycz, and Z. Li, “Granular data description: Designing
ellipsoidal information granules,” IEEE Transactions on Cybernetics,
DOI: 10.1109/TCYB.2016.2612226, 2016.
9
[11] J. Xu, G. Wang, T. Li, and W. Pedrycz, “Local density-based optimal
granulation and manifold information granule description,” IEEE Transactions on Cybernetics, DOI: 10.1109/TCYB.2017.2750481, 2017.
[12] M. Datar, N. Immorlica, P. Indyk, and V. S. Mirrokni, “Locality-sensitive
hashing scheme based on p-stable distributions,” in Proceedings of the
twentieth annual symposium on Computational geometry, pp. 253–262,
ACM, 2004.
[13] J. Ji, J. Li, S. Yan, B. Zhang, and Q. Tian, “Super-bit locality-sensitive hashing,” in Advances in Neural Information Processing Systems, pp. 108–116, 2012.
[14] M. S. Charikar, “Similarity estimation techniques from rounding algorithms,” in Proceedings of the 34th Annual ACM Symposium on Theory
of Computing, pp. 380–388, ACM, May 2002.
[15] V. V. Williams, “Breaking the Coppersmith–Winograd barrier,” 2011.
[16] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction
to Algorithms, Third Edition. MIT Press, 2009.
[17] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, “Eigenfaces
vs. fisherfaces: recognition using class specific linear projection,” IEEE
Transactions on Pattern Analysis and Machine Intelligence, vol. 19,
pp. 711–720, Jul 1997.
[18] J. Goldberger, S. Roweis, G. Hinton, and R. Salakhutdinov, “Neighborhood component analysis,” Advances in Neural Information Processing
Systems, pp. 513–520, 2004.
[19] D. Cai, X. He, and J. Han, “Semi-supervised discriminant analysis,” in
2007 IEEE 11th International Conference on Computer Vision, pp. 1–7,
Oct 2007.
[20] M. Wang, W. Fu, S. Hao, H. Liu, and X. Wu, “Learning on big
graph: Label inference and regularization with anchor hierarchy,” IEEE
Transactions on Knowledge and Data Engineering, vol. 29, pp. 1101–
1114, May 2017.
[21] A. Stisen, H. Blunck, S. Bhattacharya, T. S. Prentow, M. B. Kjærgaard,
A. Dey, T. Sonne, and M. M. Jensen, “Smart devices are different:
Assessing and mitigating mobile sensing heterogeneities for activity
recognition,” in Proceedings of the 13th ACM Conference on Embedded
Networked Sensor Systems, pp. 127–140, ACM, 2015.
[22] N. Y. Hammerla, R. Kirkham, P. Andras, and T. Ploetz, “On preserving
statistical characteristics of accelerometry data using their empirical
cumulative distribution,” in Proceedings of the 2013 International Symposium on Wearable Computers, pp. 65–68, ACM, 2013.
Ji Xu received the B.S. from Beijing Jiaotong
University, Beijing, in 2004, and the M.S. from
Tianjin Normal University, Tianjin, China, in 2008.
Now he is a Ph.D. candidate with Southwest Jiaotong University, Chengdu, China. His research interests include data mining, granular computing and
machine learning. He has published a number of
papers in refereed international journals such as
IEEE Transactions on Cybernetics and Information
Sciences, etc.
Guoyin Wang received the B.S., M.S., and Ph.D.
degrees from Xi'an Jiaotong University, Xi'an, China,
in 1992, 1994, and 1996, respectively. He worked at
the University of North Texas, USA, and the University of Regina, Canada, as a visiting scholar during
1998-1999. Since 1996, he has been working at the
Chongqing University of Posts and Telecommunications, where he is currently a professor, the Director
of the Chongqing Key Laboratory of Computational
Intelligence, the Director of the National International Scientific and Technological Cooperation Base
of Big Data Intelligent Computing, and the Dean of the Graduate School. He
was the President of International Rough Sets Society (IRSS) 2014-2017. He
is the Chairman of the Steering Committee of IRSS and the Vice-President of
Chinese Association of Artificial Intelligence. He is the author of 15 books,
the editor of dozens of proceedings of international and national conferences,
and has over 200 reviewed research publications. His research interests include
rough set, granular computing, knowledge technology, data mining, neural
network, and cognitive computing.
arXiv:1711.00652v1 [] 2 Nov 2017
ULRICH MODULES OVER COHEN–MACAULAY LOCAL RINGS
WITH MINIMAL MULTIPLICITY
TOSHINORI KOBAYASHI AND RYO TAKAHASHI
Abstract. Let R be a Cohen–Macaulay local ring. In this paper we study the structure of Ulrich
R-modules, mainly in the case where R has minimal multiplicity. We explore generation of Ulrich R-modules, and clarify when the Ulrich R-modules are precisely the syzygies of maximal Cohen–Macaulay
R-modules. We also investigate the structure of Ulrich R-modules as an exact category.
Introduction
The notion of an Ulrich module, which is also called a maximally generated (maximal) Cohen–Macaulay
module, was first studied by Ulrich [30], and has been widely investigated in both commutative algebra and
algebraic geometry; see [2, 4, 5, 9, 10, 12, 20, 23] for example. A well-known conjecture asserts that
Ulrich modules exist over any Cohen–Macaulay local ring R. Even though the majority seem to believe
that this conjecture does not hold true in full generality, a lot of partial (positive) solutions have been
obtained so far. One of them states that the conjecture holds whenever R has minimal multiplicity ([2]).
Thus, in this paper, mainly assuming that R has minimal multiplicity, we are interested in what we can
say about the structure of Ulrich R-modules.
We begin with exploring the number and generation of Ulrich modules. The following theorem is a
special case of our main results in this direction (Ω denotes the first syzygy).
Theorem A. Let (R, m, k) be a d-dimensional complete Cohen–Macaulay local ring.
(1) Assume that R is normal with d = 2 and k = C and has minimal multiplicity. If R does not have a rational singularity, then there exist infinitely many indecomposable Ulrich R-modules.
(2) Suppose that R has an isolated singularity. Let M, N be maximal Cohen–Macaulay R-modules with Ext^i_R(M, N) = 0 for all 1 ≤ i ≤ d − 1. If either M or N is Ulrich, then so is Hom_R(M, N).
(3) Let x = x_1, . . . , x_d be a system of parameters of R such that m^2 = xm. If M is an Ulrich R-module, then so is Ω(M/x_i M) for all 1 ≤ i ≤ d. If one chooses M to be indecomposable and not to be a direct summand of Ω^d k, then one finds an indecomposable Ulrich R-module not isomorphic to M among the direct summands of the modules Ω(M/x_i M).
Next, we relate the Ulrich modules with the syzygies of maximal Cohen–Macaulay modules. To state
our result, we fix some notation. Let R be a Cohen–Macaulay local ring with canonical module ω. We
denote by mod R the category of finitely generated R-modules, and by Ul(R) and ΩCM× (R) the full
subcategories of Ulrich modules and first syzygies of maximal Cohen–Macaulay modules without free
summands, respectively. Denote by (−)† the canonical dual HomR (−, ω). Then Ul(R) is closed under
(−)† , and contains ΩCM× (R) if R has minimal multiplicity. The module Ωd k belongs to ΩCM× (R), and
hence Ωd k, (Ωd k)† belong to Ul(R). Thus it is natural to ask when the conditions in the theorem below
hold, and we actually answer this.
Theorem B. Let R be a d-dimensional singular Cohen–Macaulay local ring with residue field k and
canonical module ω, and assume that R has minimal multiplicity. Consider the following conditions.
(1) The equality Ul(R) = ΩCM× (R) holds.
(2) The category ΩCM× (R) is closed under (−)† .
(3) The module (Ωd k)† belongs to ΩCM× (R).
(4) One has Tor^R_1(Tr(Ω^d k)^†, ω) = 0.
2010 Mathematics Subject Classification. 13C14, 13D02, 13H10.
Key words and phrases. Ulrich module, Cohen–Macaulay ring/module, minimal multiplicity, syzygy.
The second author was partly supported by JSPS Grant-in-Aid for Scientific Research 16H03923 and 16K05098.
(5) One has Ext^{d+1}_R(Tr(Ω^d k)^†, R) = 0 and R is locally Gorenstein on the punctured spectrum.
(6) There is an epimorphism ω^{⊕n} → Ω^d k for some n > 0.
(7) There is an isomorphism Ω^d k ≅ (Ω^d k)^†.
(8) The local ring R is almost Gorenstein.
Then (1)–(6) are equivalent and (7) implies (1). If d > 0 and k is infinite, then (1) implies (8). If d = 1
and k is infinite, then (1)–(8) are equivalent. If R is complete normal with d = 2 and k = C, then (1)–(7)
are equivalent unless R has a cyclic quotient singularity.
Finally, we study the structure of the category Ul(R) of Ulrich R-modules as an exact category in the
sense of Quillen [25]. We prove that if R has minimal multiplicity, then Ul(R) admits an exact structure
with enough projective/injective objects.
Theorem C. Let R be a d-dimensional Cohen–Macaulay local ring with residue field k and canonical
module, and assume that R has minimal multiplicity. Let S be the class of short exact sequences 0 → L →
M → N → 0 of R-modules with L, M, N Ulrich. Then (Ul(R), S) is an exact category having enough
projective objects and enough injective objects with proj Ul(R) = add Ωd k and inj Ul(R) = add(Ωd k)† .
The organization of this paper is as follows. In Section 1, we deal with a question of Cuong on the
number of indecomposable Ulrich modules. We prove the first assertion of Theorem A to answer this
question in the negative. In Section 2, we consider how to generate Ulrich modules from given ones,
and prove the second and third assertions of Theorem A. In Section 3, we compare Ulrich modules with
syzygies of maximal Cohen–Macaulay modules, and prove Theorem B; in fact, we obtain more equivalent
and related conditions. The final Section 4 is devoted to giving applications of the results obtained in
Section 3. In this section we study the cases of dimension one and two, and exact structures of Ulrich
modules, and prove the remaining assertions of Theorems B and C.
Convention
Throughout, let (R, m, k) be a Cohen–Macaulay local ring of Krull dimension d. We assume that all
modules are finitely generated and all subcategories are full. A maximal Cohen–Macaulay module is
simply called a Cohen–Macaulay module. For an R-module M we denote by ΩM the first syzygy of M ,
that is, the kernel of the first differential map in the minimal free resolution of M . Whenever R admits
a canonical module ω, we denote by (−)† the canonical dual functor HomR (−, ω). For an R-module M
we denote by e(M ) and µ(M ) the multiplicity and the minimal number of generators of M , respectively.
1. A question of Cuong
In this section, we consider a question raised by Cuong [6] on the number of Ulrich modules over
Cohen–Macaulay local rings with minimal multiplicity. First of all, let us recall the definitions of an
Ulrich module and minimal multiplicity.
Definition 1.1. (1) An R-module M is called Ulrich if M is Cohen–Macaulay with e(M ) = µ(M ).
(2) The ring R is said to have minimal multiplicity if e(R) = edim R − dim R + 1.
An Ulrich module is also called a maximally generated (maximal) Cohen–Macaulay module. There is
always an inequality e(R) ≥ edim R − dim R + 1, from which the name of minimal multiplicity comes. If
k is infinite, then R has minimal multiplicity if and only if m2 = Qm for some parameter ideal Q of R.
See [3, Exercise 4.6.14] for details of minimal multiplicity.
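For illustration (an example of ours, not taken from the paper), the one-dimensional numerical semigroup ring R = k[[t^3, t^4, t^5]] has minimal multiplicity; the short computation below checks Definition 1.1 directly.

```latex
% Worked check (our example): R = k[[t^3, t^4, t^5]] has minimal multiplicity.
\[
\begin{aligned}
\mathfrak{m} &= (t^3,t^4,t^5), \qquad \operatorname{edim} R = 3, \qquad \dim R = 1,\\
\mathfrak{m}^2 &= (t^6,t^7,t^8) = t^3\,(t^3,t^4,t^5) = Q\mathfrak{m}
  \quad\text{for the parameter ideal } Q=(t^3),\\
e(R) &= \ell_R(R/t^3R) = \dim_k k\{1,\ t^4,\ t^5\} = 3
  = \operatorname{edim} R - \dim R + 1.
\end{aligned}
\]
```

Moreover, in this example the maximal ideal m is itself an Ulrich module: µ(m) = 3 and, since m = t^3 k[[t]] ≅ k[[t]] as an R-module, e(m) = rank(m) · e(R) = 3 = µ(m).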
The following question has been raised by Cuong [6].
Question 1.2 (Cuong). If R is non-Gorenstein and has minimal multiplicity, then are there only finitely
many indecomposable Ulrich R-modules?
To explore this question, we start by introducing notation, which is used throughout the paper.
Notation 1.3. We denote by mod R the category of finitely generated R-modules. We use the following
subcategories of mod R:
CM(R) = {M ∈ mod R | M is Cohen–Macaulay},
Ul(R) = {M ∈ CM(R) | M is Ulrich},
ΩCM(R) = {M ∈ CM(R) | M is the kernel of an epimorphism from a free module to a Cohen–Macaulay module},
ΩCM× (R) = {M ∈ ΩCM(R) | M does not have a (nonzero) free summand}.
Remark 1.4. (1) The subcategories CM(R), Ul(R), ΩCM(R), ΩCM× (R) of mod R are closed under finite direct sums and direct summands.
(2) One has ΩCM(R) ∪ Ul(R) ⊆ CM(R) ⊆ mod R.
Here we make a remark to reduce to the case where the residue field is infinite.
Remark 1.5. Consider the faithfully flat extension S := R[t]_{mR[t]} of R. Then we observe that:
(1) If X is a module in ΩCM× (R), then X ⊗R S is in ΩCM× (S).
(2) A module Y is in Ul(R) if and only if Y ⊗R S is in Ul(S) (see [13, Lemma 6.4.2]).
The converse of (1) also holds true; we prove this in Corollary 3.4.
If R has minimal multiplicity, then all syzygies of Cohen–Macaulay modules are Ulrich:
Proposition 1.6. Suppose that R has minimal multiplicity. Then ΩCM× (R) is contained in Ul(R).
Proof. By Remark 1.5 we may assume that k is infinite. Since R has minimal multiplicity, we have
m2 = Qm for some parameter ideal Q of R. Let M be a Cohen–Macaulay R-module. There is a short
exact sequence 0 → ΩM → R⊕n → M → 0, where n is the minimal number of generators of M . Since M
is Cohen–Macaulay, taking the functor R/Q ⊗_R − preserves the exactness; we get a short exact sequence
0 → ΩM/QΩM −f→ (R/Q)^{⊕n} → M/QM → 0.
The map f factors through the inclusion map X := m(R/Q)⊕n → (R/Q)⊕n , and hence there is an
injection ΩM/QΩM → X. As X is annihilated by m, so is ΩM/QΩM . Therefore mΩM = QΩM , which
implies that ΩM is Ulrich.
As a direct consequence of [7, Corollary 3.3], we obtain the following proposition.
Proposition 1.7. Let R be a 2-dimensional normal excellent henselian local ring with algebraically closed
residue field of characteristic 0. Then there exist only finitely many indecomposable modules in ΩCM(R)
if and only if R has a rational singularity.
Combining the above propositions yields the following result.
Corollary 1.8. Let R be a 2-dimensional normal excellent henselian local ring with algebraically closed
residue field of characteristic 0. Suppose that R has minimal multiplicity and does not have a rational
singularity. Then there exist infinitely many indecomposable Ulrich R-modules. In particular, Question 1.2 has a negative answer.
Proof. Proposition 1.7 implies that ΩCM(R) contains infinitely many indecomposable modules, and so
does Ul(R) by Proposition 1.6.
Here is an example of a non-Gorenstein ring satisfying the assumption of Corollary 1.8, which concludes
that the question of Cuong is negative.
Example 1.9. Let B = C[x, y, z, t] be a polynomial ring with deg x = deg t = 3, deg y = 5 and deg z = 7. Consider the 2 × 3 matrix M over B whose rows are (x, y, z) and (y, z, x^3 − t^3), and let I be the ideal of B generated by the 2 × 2 minors of M. Set A = B/I. Then A is a nonnegatively graded C-algebra as I is homogeneous. By virtue of
the Hilbert–Burch theorem ([3, Theorem 1.4.17]), A is a 2-dimensional Cohen–Macaulay ring, and x, t
is a homogeneous system of parameters of A. Directly calculating the Jacobian ideal J of A, we can
verify that A/J is Artinian. The Jacobian criterion implies that A is a normal domain. The quotient
ring A/tA is isomorphic to the numerical semigroup ring C[H] with H = ⟨3, 5, 7⟩. Since this ring is not
Gorenstein (as H is not symmetric), neither is A. Let a(A) and F (H) stand for the a-invariant of A and
the Frobenius number of H, respectively. Then
a(A) + 3 = a(A) + deg(t) = a(A/tA) = F (H) = 4,
where the third equality follows from [26, Theorem 3.1]. Therefore we get a(A) = 1 ≮ 0, and A does not
have a rational singularity by the Flenner–Watanabe criterion (see [21, Page 98]).
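As a sanity check of our own on the two numerical facts used above, namely that F(H) = 4 and that H = ⟨3, 5, 7⟩ is not symmetric:

```latex
% Our verification for the numerical semigroup H = <3, 5, 7>.
\[
H = \{0,3,5,6,7,8,9,\dots\}, \qquad
\mathbb{Z}_{\ge 0}\setminus H = \{1,2,4\}, \qquad
F(H) = 4.
\]
% Symmetry of H would mean: for every integer x, x \in H \iff F(H) - x \notin H.
% This fails at x = 2: neither 2 nor F(H) - 2 = 2 belongs to H.
% Hence H is not symmetric and C[H] is not Gorenstein, as asserted.
```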
Let A′ be the localization of A at A+ , and let R be the completion of the local ring A′ . Then R
is a 2-dimensional complete (hence excellent and henselian) normal non-Gorenstein local domain with
residue field C. The maximal ideal m of R satisfies m2 = (x, t)m, and thus R has minimal multiplicity.
Having a rational singularity is preserved by localization since A has an isolated singularity, while it is
also preserved by completion. Therefore R does not have a rational singularity.
We have seen that Question 1.2 is not true in general. However, in view of Corollary 1.8, we wonder
if having a rational singularity is essential. Thus, we pose a modified question.
Question 1.10. Let R be a 2-dimensional normal local ring with a rational singularity. Then does R
have only finitely many indecomposable Ulrich modules?
Proposition 1.7 leads us to an even stronger question:
Question 1.11. If ΩCM(R) contains only finitely many indecomposable modules, then does Ul(R) contain only finitely many as well?
2. Generating Ulrich modules
In this section, we study how to generate Ulrich modules from given ones. First of all, we consider
using the Hom functor to do it.
Proposition 2.1. Let M, N be Cohen–Macaulay R-modules. Suppose that on the punctured spectrum of
R either M is locally of finite projective dimension or N is locally of finite injective dimension.
(1) Ext^i_R(M, N) = 0 for all 1 ≤ i ≤ d − 2 if and only if Hom_R(M, N) is Cohen–Macaulay.
(2) Assume Ext^i_R(M, N) = 0 for all 1 ≤ i ≤ d − 1. If either M or N is Ulrich, then so is Hom_R(M, N).
Proof. (1) This follows from the proof of [14, Proposition 2.5.1]; in it the isolated singularity assumption
is used only to have that the Ext modules have finite length.
(2) By (1), the module HomR (M, N ) is Cohen–Macaulay. We may assume that k is infinite by Remark
1.5(2), so that we can find a reduction Q of m which is a parameter ideal of R.
First, let us consider the case where N is Ulrich. Take a minimal free resolution F = (· · · → F1 →
F_0 → 0) of M. Since Ext^i_R(M, N) = 0 for all 1 ≤ i ≤ d − 1, the induced sequence
0 → Hom_R(M, N) → Hom_R(F_0, N) −f→ · · · → Hom_R(F_{d−1}, N) → Hom_R(Ω^d M, N) → Ext^d_R(M, N) → 0
is exact. Note that Ext^d_R(M, N) has finite length. By the depth lemma, the image L of the map f is
Cohen–Macaulay. An exact sequence 0 → HomR (M, N ) → HomR (F0 , N ) → L → 0 is induced, and the
application of the functor − ⊗R R/Q to this gives rise to an injection
Hom_R(M, N) ⊗_R R/Q ↪ Hom_R(F_0, N) ⊗_R R/Q.
Since N is Ulrich, the module HomR (F0 , N )⊗R R/Q is annihilated by m, and so is HomR (M, N )⊗R R/Q.
Therefore HomR (M, N ) is Ulrich.
Next, we consider the case where M is Ulrich. As x (a system of parameters generating Q) is an M-sequence, there is a spectral sequence
E_2^{pq} = Ext^p_R(R/Q, Ext^q_R(M, N)) ⇒ H^{p+q} = Ext^{p+q}_R(M/QM, N).
The fact that x is an R-sequence implies E_2^{pq} = 0 for p > d. By assumption, E_2^{pq} = 0 for 1 ≤ q ≤ d − 1. Hence an exact sequence 0 → E_2^{d0} → H^d → E_2^{0d} → 0 is induced. Since M/QM is annihilated by m, so is H^d = Ext^d_R(M/QM, N), and so is E_2^{d0}. Note that
E_2^{d0} = Ext^d_R(R/Q, Hom_R(M, N)) ≅ H^d(x, Hom_R(M, N)) ≅ Hom_R(M, N) ⊗_R R/Q,
where H^∗(x, −) stands for Koszul cohomology. It follows that m kills Hom_R(M, N) ⊗_R R/Q, which implies that Hom_R(M, N) is Ulrich.
As an immediate consequence of Proposition 2.1(2), we obtain the following corollary, which is a special
case of [9, Theorem 5.1].
Corollary 2.2. Suppose that R admits a canonical module. If M ∈ Ul(R), then M † ∈ Ul(R).
Next, we consider taking extensions of given Ulrich modules to obtain a new one.
Proposition 2.3. Let Q be a parameter ideal of R which is a reduction of m. Let M, N be Ulrich R-modules, and take any element a ∈ Q. Let σ : 0 → M → E → N → 0 be an exact sequence, and consider
the multiplication aσ : 0 → M → X → N → 0 as an element of the R-module Ext1R (N, M ). Then X is
an Ulrich R-module.
Proof. It follows from [27, Theorem 1.1] that the exact sequence
aσ ⊗_R R/aR : 0 → M/aM → X/aX → N/aN → 0
splits; we have an isomorphism X/aX ≅ M/aM ⊕ N/aN. Applying the functor − ⊗_{R/aR} R/Q, we get an isomorphism X/QX ≅ M/QM ⊕ N/QN. Since M, N are Ulrich, the modules M/QM, N/QN are k-vector spaces, and so is X/QX. Hence X is also Ulrich.
As an application of the above proposition, we give a way to make an Ulrich module over a Cohen–
Macaulay local ring with minimal multiplicity.
Corollary 2.4. Let Q be a parameter ideal of R such that m2 = Qm. Let M be an Ulrich R-module.
Then for each R-regular element a ∈ Q, the syzygy Ω(M/aM ) is also an Ulrich R-module.
Proof. There is an exact sequence σ : 0 → ΩM → R^{⊕n} → M → 0, where n is the minimal number of generators of M. We have a commutative diagram with exact rows and columns

    aσ : 0 → ΩM →   X    →  M  → 0
              ‖      ↓       ↓ a
    σ :  0 → ΩM → R^{⊕n} →  M  → 0
                     ↓       ↓
                   M/aM  =  M/aM
                     ↓       ↓
                     0       0

Since the minimal number of generators of M/aM is equal to n, the middle column shows X ≅ Ω(M/aM). Propositions 1.6 and 2.3 show that X is Ulrich, and we are done.
Remark 2.5. In Corollary 2.4, if the parameter ideal Q annihilates the R-module Ext^1_R(M, ΩM), then we have aσ = 0, and Ω(M/aM) ≅ M ⊕ ΩM. Hence, in this case, the operation M ↦ Ω(M/aM) does
not produce an essentially new Ulrich module.
Next, we investigate the annihilators of Tor and Ext modules.
Proposition 2.6. For an R-module M one has

ann_R Ext^1_R(M, ΩM) = ⋂_{i>0, N ∈ mod R} ann_R Ext^i_R(M, N) = ann_R Tor^R_1(M, TrM) = ⋂_{i>0, N ∈ mod R} ann_R Tor^R_i(M, N).

Proof. Set I := ⋂_{i>0, N ∈ mod R} ann_R Ext^i_R(M, N) and J := ⋂_{i>0, N ∈ mod R} ann_R Tor^R_i(M, N). It is clear that I ⊆ ann_R Ext^1_R(M, ΩM) and J ⊆ ann_R Tor^R_1(M, TrM). It is enough to show that ann_R Ext^1_R(M, ΩM) ∪ ann_R Tor^R_1(M, TrM) is contained in I ∩ J.
(1) Take any element a ∈ ann_R Ext^1_R(M, ΩM). The proof of [16, Lemma 2.14] shows that the multiplication map a : M → M factors through a free module, that is, (M −a→ M) = (M −f→ F −π→ M) with F free. Hence, for all i > 0 and N ∈ mod R, the multiplication map a on Tor_i(M, N) factors as Tor_i(π, N) ∘ Tor_i(f, N) through Tor_i(F, N), and likewise a on Ext^i(M, N) factors through Ext^i(F, N). As Tor_i(F, N) = Ext^i(F, N) = 0, the element a is in I ∩ J.
(2) Let b ∈ ann_R Tor^R_1(M, TrM). By [32, Lemma (3.9)], the element b annihilates the stable hom module \underline{Hom}_R(M, M), the quotient of Hom_R(M, M) by the maps factoring through free modules. Hence the map b · id_M, which is nothing but the multiplication map b : M → M, factors through a free R-module. Similarly to (1), we see that b is in I ∩ J.
Definition 2.7. We denote by annh M the ideal in the above proposition.
Note that annh M = R if and only if M is a free R-module.
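A minimal example of ours illustrating this ideal: over the Artinian hypersurface R = k[x]/(x^2) with M = k, one gets annh k = (x).

```latex
% Our example: the ideal annh(k) over R = k[x]/(x^2).
\[
(x) = \operatorname{ann}_R k \;\subseteq\;
\operatorname{ann}_R \operatorname{Ext}^i_R(k,N) \,\cap\,
\operatorname{ann}_R \operatorname{Tor}^R_i(k,N)
\qquad (i > 0,\ N \in \operatorname{mod} R),
\]
% since each Ext^i_R(k, N) and Tor_i^R(k, N) is a module over R/ann_R(k) = k,
% hence is killed by (x).
```

Thus (x) ⊆ annh k, while annh k ≠ R because k is not a free R-module (see the note above); therefore annh k = (x).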
For an R-module M we denote by add M the subcategory of mod R consisting of direct summands of
finite direct sums of copies of M .
With the notation of Remark 2.5, we are interested in when the operation M ↦ Ω(M/aM) actually
gives rise to an essentially new Ulrich module. The following result presents a possible way: if we choose
an indecomposable Ulrich module M that is not a direct summand of Ωd k, then we find an indecomposable
Ulrich module not isomorphic to M among the direct summands of the modules Ω(M/xi M ).
Proposition 2.8. Suppose that R is henselian. Let Q = (x_1, . . . , x_d) be a parameter ideal of R which is a reduction of m. Let M be an indecomposable Ulrich R-module. If M is a direct summand of Ω(M/x_i M) for all 1 ≤ i ≤ d, then M is a direct summand of Ω^d k.
Proof. For every integer 1 ≤ i ≤ d the module Ext^1_R(M, ΩM) is a direct summand of Ext^1_R(Ω(M/x_i M), ΩM). The latter module is annihilated by x_i since it is isomorphic to Ext^2_R(M/x_i M, ΩM). Hence Q is contained in ann_R Ext^1_R(M, ΩM) = annh M, and therefore Q · Ext^{>0}_R(M, N) = 0 for all N ∈ mod R. It follows from [29, Corollary 3.2(1)] that M is a direct summand of Ω^d(M/QM). As M is Ulrich, the module M/QM is a k-vector space, and Ω^d(M/QM) belongs to add(Ω^d k), whence so does M. Since R is henselian and M is indecomposable, the Krull–Schmidt theorem implies that M is a direct summand of Ω^d k.
3. Comparison of Ul(R) with ΩCM× (R)
In this section, we study the relationship of the Ulrich R-modules with the syzygies of Cohen–Macaulay
R-modules. We begin with giving equivalent conditions for a given Cohen–Macaulay module to be a
syzygy of a Cohen–Macaulay module, after stating an elementary lemma.
Lemma 3.1. Let M, N be R-modules. The evaluation map ev : M ⊗R HomR (M, N ) → N is surjective
if and only if there exists an epimorphism (f1 , . . . , fn ) : M ⊕n → N .
Proof. The “only if” part follows by taking an epimorphism R^{⊕n} → Hom_R(M, N) and tensoring with M. To show the “if” part, pick any element y ∈ N. Then we have y = f_1(x_1) + · · · + f_n(x_n) for some x_1, . . . , x_n ∈ M. Therefore y = ev(∑_{i=1}^{n} x_i ⊗ f_i), and we are done.
Proposition 3.2. Let R be a Cohen–Macaulay local ring with canonical module ω. Then the following are equivalent for a Cohen–Macaulay R-module M.
(1) M ∈ ΩCM(R).
(2) One has Hom_R(M, ω) = 0 in the stable category; that is, every homomorphism M → ω factors through a free R-module.
(3) There exists a surjective homomorphism ω^{⊕n} → Hom_R(M, ω).
(4) The natural homomorphism Φ : ω ⊗_R Hom_R(ω, Hom_R(M, ω)) → Hom_R(M, ω) is surjective.
(5) M is torsionless and TrΩTrM is Cohen–Macaulay.
(6) Ext^1_R(TrM, R) = Ext^1_R(TrΩTrM, ω) = 0.
(7) Tor^R_1(TrM, ω) = 0.
Proof. (1) ⇒ (2): By the assumption, there is an exact sequence 0 → M → F → N → 0 such that N is Cohen–Macaulay and F is free. Take f ∈ Hom_R(M, ω). Pushing out along f yields a commutative diagram with exact rows

    0 → M → F → N → 0
        ↓f  ↓    ‖
    0 → ω → W → N → 0

Since N is Cohen–Macaulay, we have Ext^1_R(N, ω) = 0. Hence the second row splits, and f factors through the free module F. This proves (2).
(2) ⇒ (1): There is an exact sequence 0 → M −f→ ω^{⊕m} → N → 0 such that N is Cohen–Macaulay. Since every homomorphism M → ω factors through a free module, so does f (note Hom_R(M, ω^{⊕m}) = Hom_R(M, ω)^{⊕m}): there exist a free R-module F and homomorphisms g : M → F and h : F → ω^{⊕m} such that f = hg. We get a commutative diagram with exact rows

    0 → M −g→ F → L → 0
        ‖      ↓h   ↓
    0 → M −f→ ω^{⊕m} → N → 0

The second square is a pullback–pushout diagram, which gives an exact sequence 0 → F → L ⊕ ω^{⊕m} → N → 0. This shows that L is Cohen–Macaulay, and hence M ∈ ΩCM(R).
(2) ⇔ (7): This equivalence follows from [32, Lemma (3.9)].
(1) ⇒ (3): Let 0 → M → R^{⊕n} → N → 0 be an exact sequence with N Cohen–Macaulay. Applying (−)^†, we have an exact sequence 0 → N^† → ω^{⊕n} → M^† → 0.
(3) ⇒ (1): There is an exact sequence 0 → K → ω ⊕n → M † → 0. It is seen that K is Cohen–Macaulay.
Taking (−)† gives an exact sequence 0 → M → R⊕n → K † → 0, which shows M ∈ ΩCM(R).
(3) ⇔ (4): This follows from Lemma 3.1.
(5) ⇔ (6): The module TrΩTrM is Cohen–Macaulay if and only if Ext^i_R(TrΩTrM, ω) = 0 for all i > 0. One has Ext^1_R(TrM, R) = 0 if and only if M is torsionless, if and only if M ≅ ΩTrΩTrM up to free summands; see [1, Theorem (2.17)]. Hence Ext^i_R(TrΩTrM, ω) = Ext^{i−1}_R(M, ω) = 0 for all i > 1.
(1) ⇔ (5): This equivalence follows from [18, Lemma 2.5] and its proof.
Remark 3.3. The equivalence (1) ⇔ (5) in Proposition 3.2 holds without the assumption that R admits
a canonical module. Indeed, its proof does not use the existence of a canonical module.
The property of being a syzygy of a Cohen–Macaulay module (without free summand) is preserved
under faithfully flat extension.
Corollary 3.4. Let R → S be a faithfully flat homomorphism of Cohen–Macaulay local rings. Let M be
a Cohen–Macaulay R-module. Then M ∈ ΩCM× (R) if and only if M ⊗R S ∈ ΩCM× (S).
Proof. Using Remark 3.3, we see that M ∈ ΩCM(R) if and only if Ext1R (TrR M, R) = 0 and TrR ΩR TrR M
is Cohen–Macaulay. Also, M has a nonzero R-free summand if and only if the evaluation map M ⊗R
HomR (M, R) → R is surjective by Lemma 3.1. Since the latter conditions are both preserved under
faithfully flat extension, they are equivalent to saying that M ⊗R S ∈ ΩCM(S) and that M ⊗R S has a
nonzero S-free summand, respectively. Now the assertion follows.
Next we state and prove a couple of lemmas. The first one concerns Ulrich modules and syzygies of
Cohen–Macaulay modules with respect to short exact sequences.
Lemma 3.5. Let 0 → L → M → N → 0 be an exact sequence of R-modules.
(1) If L, M, N are in Ul(R), then the equality µ(M ) = µ(L) + µ(N ) holds.
(2) Suppose that L, M, N are in CM(R). Then:
(a) If M is in Ul(R), then so are L and N .
(b) If M is in ΩCM× (R), then so is L.
Proof. (1) We have µ(M ) = e(M ) = e(L) + e(N ) = µ(L) + µ(N ).
(2) Assertion (a) follows by [2, Proposition (1.4)]. Let us show (b). As M is in ΩCM^×(R), there is an exact sequence 0 → M −β→ R^{⊕a} −γ→ C → 0 with C Cohen–Macaulay. As M has no free summand, γ is a minimal homomorphism. In particular, µ(C) = a. The pushout of β and γ gives a commutative diagram with exact rows and columns

    0 → L →   M    →  N  → 0
        ‖     ↓β      ↓
    0 → L → R^{⊕a} −δ→ D → 0
              ↓γ      ↓
              C   =   C
              ↓       ↓
              0       0

We see that a = µ(C) ≤ µ(D) ≤ a, which implies that δ is a minimal homomorphism. Hence L = ΩD ∈ ΩCM^×(R).
The following lemma is used to reduce to the case of a lower dimensional ring.
Lemma 3.6. Let Q = (x1 , . . . , xd ) be a parameter ideal of R that is a reduction of m. Let M be a Cohen–
Macaulay R-module. Then M is an Ulrich R-module if and only if M/xi M is an Ulrich R/xi R-module.
Proof. Note that Q/xi R is a reduction of m/xi R. We see that (m/xi R)(M/xi M ) = (Q/xi R)(M/xi M )
if and only if mM = QM . Thus the assertion holds.
Now we explore syzygies of the residue field of a Cohen–Macaulay local ring with minimal multiplicity.
Lemma 3.7. Assume that R is singular and has minimal multiplicity.
(1) One has Ω^d_R k ∈ ΩCM^×(R). In particular, Ω^d_R k is an Ulrich R-module.
(2) There is an isomorphism Ω^{d+1}_R k ≅ (Ω^d_R k)^{⊕n} for some n > 0.
(3) Let Q = (x_1, . . . , x_d) be a parameter ideal of R with m^2 = Qm, and suppose that d ≥ 1. Then Ω^1_R(Ω^i_{R/(x_1)} k) ≅ Ω^{i+1}_R k for all i ≥ 0. In particular, Ω^1_R(Ω^{d−1}_{R/(x_1)} k) ≅ Ω^d_R k.
(4) For each M ∈ Ul(R) there exists a surjective homomorphism (Ω^d_R k)^{⊕n} → M for some n > 0.
Proof. (1)(2) We may assume that k is infinite; see Remark 1.5. So we find a parameter ideal Q =
(x1 , . . . , xd ) of R with m2 = Qm. The module m/Q is a k-vector space, and there is an exact sequence
0 → k ⊕n → R/Q → k → 0. Taking the dth syzygies gives an exact sequence
0 → (Ωd k)⊕n → R⊕t → Ωd k → 0.
Since Ω^d k has no free summand by [28, Theorem 1.1], we obtain Ω^d k ∈ ΩCM^×(R) and (Ω^d k)^{⊕n} ≅ Ω^{d+1} k. The last assertion of (1) follows from this and Proposition 1.6.
(3) Set x = x_1. We show that Ω(Ω^i_{R/xR} k) ≅ Ω^{i+1} k for all i ≥ 0. We may assume i ≥ 1; note then that x is Ω^i k-regular. By [28, Corollary 5.3] we have an isomorphism Ω^i k/xΩ^i k ≅ Ω^i_{R/xR} k ⊕ Ω^{i−1}_{R/xR} k. Hence
(3.7.1)   Ω^i k ⊕ Ω^{i+1} k ≅ Ω(Ω^i k/xΩ^i k) ≅ Ω(Ω^i_{R/xR} k) ⊕ Ω(Ω^{i−1}_{R/xR} k),
where the first isomorphism follows from the proof of Corollary 2.4. There is an exact sequence 0 → Ω^i_{R/xR} k → (R/xR)^{⊕a_{i−1}} → · · · → (R/xR)^{⊕a_0} → k → 0 of R/xR-modules, which gives an exact sequence
0 → Ω(Ω^i_{R/xR} k) → R^{⊕b_{i−1}} → · · · → R^{⊕b_0} → Ωk → 0
of R-modules. This shows Ω(Ω^i_{R/xR} k) ≅ Ω^{i+1} k ⊕ R^{⊕u} for some u ≥ 0, and similarly we have an isomorphism Ω(Ω^{i−1}_{R/xR} k) ≅ Ω^i k ⊕ R^{⊕v} for some v ≥ 0. Substituting these in (3.7.1), we see u = v = 0 and obtain an isomorphism Ω(Ω^i_{R/xR} k) ≅ Ω^{i+1} k.
(4) According to Lemma 3.1 and Remark 1.5, we may assume that k is infinite. Take a parameter ideal Q = (x_1, . . . , x_d) of R with m^2 = Qm. We prove the assertion by induction on d. If d = 0, then M is a k-vector space, and there is nothing to show. Assume d ≥ 1 and set x = x_1. Clearly, R/xR has minimal multiplicity. By Lemma 3.6, M/xM is an Ulrich R/xR-module. The induction hypothesis gives an exact sequence 0 → L → (Ω^{d−1}_{R/xR} k)^{⊕n} → M/xM → 0 of R/xR-modules. Lemma 3.5(2) shows that L is also an Ulrich R/xR-module, while Lemma 3.5(1) implies
µ_{R/xR}(L) + µ_{R/xR}(M/xM) = µ_{R/xR}((Ω^{d−1}_{R/xR} k)^{⊕n}).
Note that µ_R(X) = µ_{R/xR}(X) for an R/xR-module X. Thus, taking the first syzygies over R, we get an exact sequence of R-modules:
0 → ΩL → Ω((Ω^{d−1}_{R/xR} k)^{⊕n}) → Ω(M/xM) → 0.
From the proof of Corollary 2.4 we see that there is an exact sequence 0 → ΩM → Ω(M/xM) → M → 0, while Ω(Ω^{d−1}_{R/xR} k) is isomorphic to Ω^d k by (3). Consequently, we obtain a surjection (Ω^d k)^{⊕n} → M.
We are now ready to state and prove the main result of this section.
Theorem 3.8. Let R be a d-dimensional Cohen–Macaulay local ring with residue field k and canonical module ω. Suppose that R has minimal multiplicity. Then the following are equivalent.
(1) The equality ΩCM^×(R) = Ul(R) holds.
(2) For an exact sequence M → N → 0 in CM(R), if M ∈ ΩCM^×(R), then N ∈ ΩCM^×(R).
(3) The category ΩCM^×(R) is closed under (−)^†.
(4) The module (Ω^d k)^† belongs to ΩCM^×(R).
(4') The module (Ω^d k)^† belongs to ΩCM(R).
(5) One has Hom_R((Ω^d k)^†, ω) = 0 in the stable category; that is, every homomorphism (Ω^d k)^† → ω factors through a free R-module.
(6) One has Tor^R_1(Tr((Ω^d k)^†), ω) = 0.
(7) One has Ext^{d+1}_R(Tr((Ω^d k)^†), R) = 0 and R is locally Gorenstein on the punctured spectrum.
(8) The natural homomorphism ω ⊗_R Hom_R(ω, Ω^d k) → Ω^d k is surjective.
(9) There exists a surjective homomorphism ω^{⊕n} → Ω^d k.
If d is positive, k is infinite and one of the above nine conditions holds, then R is almost Gorenstein.
Proof. (1) ⇒ (2): This follows from Lemma 3.5(2).
(2) ⇒ (3): Let M be an R-module in ΩCM^×(R). Then M ∈ Ul(R) by Proposition 1.6, and hence M^† ∈ Ul(R) by Corollary 2.2. It follows from Lemma 3.7(4) that there is a surjection (Ω^d k)^{⊕n} → M^†. Since (Ω^d k)^{⊕n} is in ΩCM^×(R) by Lemma 3.7(1), the module M^† is also in ΩCM^×(R).
(3) ⇒ (4): Lemma 3.7(1) says that Ωd k is in ΩCM× (R), and so is (Ωd k)† by assumption.
(4) ⇒ (1): The inclusion ΩCM× (R) ⊆ Ul(R) follows from Proposition 1.6. Take any module M in
Ul(R). Then M † is also in Ul(R) by Corollary 2.2. Using Lemma 3.7(4), we get an exact sequence
0 → X → (Ω^d k)^{⊕n} → M^† → 0 of Cohen–Macaulay modules, which induces an exact sequence 0 → M → ((Ω^d k)^†)^{⊕n} → X^† → 0. The assumption and Lemma 3.5(2) imply that M is in ΩCM^×(R).
(4) ⇔ (4’): As R is singular, by [28, Corollary 4.4] the module (Ωd k)† does not have a free summand.
(4’) ⇔ (5) ⇔ (6) ⇔ (8) ⇔ (9): These equivalences follow from Proposition 3.2.
(4') ⇔ (7): We claim that, under the assumption that R is locally Gorenstein on the punctured spectrum, (Ω^d k)^† ∈ ΩCM(R) if and only if Ext^{d+1}_R(Tr((Ω^d k)^†), R) = 0. In fact, since (Ω^d k)^† is Cohen–Macaulay, it satisfies Serre's condition (S_d). Therefore it is d-torsionfree, that is, Ext^i_R(Tr((Ω^d k)^†), R) = 0 for all 1 ≤ i ≤ d; see [22, Theorem 2.3]. Hence, Ext^{d+1}_R(Tr((Ω^d k)^†), R) = 0 if and only if (Ω^d k)^† is (d + 1)-torsionfree, if and only if it belongs to ΩCM(R) by [22, Theorem 2.3] again. Thus the claim follows.
According to this claim, it suffices to prove that if (4') holds, then R is locally Gorenstein on the punctured spectrum. For this, pick any nonmaximal prime ideal p of R. Localizing a minimal free resolution of k at p, we get exact sequences
0 → Ω^d k → R^{⊕a_{d−1}} → · · · → R^{⊕a_0} → k → 0 and 0 → (Ω^d k)_p → R_p^{⊕a_{d−1}} → · · · → R_p^{⊕a_0} → 0,
the latter because k_p = 0. We observe that (Ω^d k)_p is a free R_p-module with rank_{R_p}((Ω^d k)_p) = ∑_{i=0}^{d−1} (−1)^i a_{d−1−i} = rank_R(Ω^d k). The module Ω^d k has positive rank as it is torsionfree, and we see that (Ω^d k)_p is a nonzero free R_p-module. Since we have already shown that (4') implies (9), there is a surjection ω^{⊕n} → Ω^d k. Localizing this at p, we see that ω_p^{⊕n} has an R_p-free summand, which implies that R_p has finite injective dimension over itself. Thus R_p is Gorenstein.
So far we have proved the equivalence of the conditions (1)–(9). It remains to prove that R is almost
Gorenstein under the assumption that d is positive, k is infinite and (1)–(9) all hold. We use induction
on d.
Let d = 1. Let Q be the total quotient ring of R, and set E = EndR (m). Let K be an R-module with
K ≅ ω and R ⊆ K ⊆ R̄ in Q, where R̄ is the integral closure of R. Using [24, Proposition 2.5], we have:

(3.8.1)    m ≅ HomR (m, R) = E and m† ≅ HomR (m, K) ≅ (K :Q m).

By (4) the module m† belongs to ΩCM× (R). It follows from [19, Theorem 2.14] that R is almost
Gorenstein; note that the completion of R also has Gorenstein punctured spectrum by (4’).
Let d > 1. Since (Ωd k)† ∈ ΩCM(R), there is an exact sequence 0 → (Ωd k)† → R⊕m → N → 0 for
some m > 0 and N ∈ CM(R). Choose a parameter ideal Q = (x1 , . . . , xd ) of R satisfying the equality
m2 = Qm, and set (−)‾ = (−) ⊗R R/(x1 ). An exact sequence

0 → (Ωd k)†‾ → R̄⊕m → N̄ → 0

is induced, which shows that (Ωd k)†‾ is in ΩCM(R̄). Applying (−)† to the exact sequence
0 → Ωd k −x1→ Ωd k → Ωd k‾ → 0 and using [3, Lemma 3.1.16], we obtain isomorphisms

(Ωd k)†‾ ≅ Ext1R (Ωd k‾, ω) ≅ HomR̄ (Ωd k‾, ω̄).
10
TOSHINORI KOBAYASHI AND RYO TAKAHASHI
The module Ωd−1 R̄ k is a direct summand of Ωd k‾ by [28, Corollary 5.3], and hence HomR̄ (Ωd−1 R̄ k, ω̄) is
a direct summand of HomR̄ (Ωd k‾, ω̄). Summarizing these, we observe that HomR̄ (Ωd−1 R̄ k, ω̄) belongs to
ΩCM(R̄). Since R̄ has minimal multiplicity, we can apply the induction hypothesis to R̄ to conclude that
R̄ is almost Gorenstein, and so is R by [11, Theorem 3.7].
Remark 3.9. When d ≥ 2, it holds that

Extd+1 R (Tr((Ωd k)† ), R) ≅ Extd−1 R (HomR (ω, Ωd k), R).

Thus Theorem 3.8(7) can be replaced with the condition that Extd−1 R (HomR (ω, Ωd k), R) = 0.
Indeed, using the Hom-⊗ adjointness twice, we get isomorphisms

HomR (ω, Ωd k) ≅ HomR (ω, (Ωd k)†† ) ≅ HomR ((Ωd k)† ⊗R ω, ω) ≅ HomR ((Ωd k)† , ω † ) ≅ (Ωd k)†∗ ,

and (Ωd k)†∗ is isomorphic to Ω2 Tr((Ωd k)† ) up to free summand.
We have several more conditions related to the equality ΩCM× (R) = Ul(R).
Corollary 3.10. Let R be as in Theorem 3.8. Consider the following conditions:
(1) (Ωd k)† ≅ Ωd k, (2) (Ωd k)† ∈ add(Ωd k), (3) annh (Ωd k)† = m, (4) ΩCM× (R) = Ul(R).
It then holds that (1) ⇒ (2) ⇔ (3) ⇒ (4).
Proof. The implications (1) ⇒ (2) ⇒ (3) are obvious. The proof of Proposition 2.8 shows that if an
Ulrich R-module M satisfies annh M = m, then M is in add(Ωd k). This shows (3) ⇒ (2). Lemma
3.7(1) says that Ωd k is in ΩCM× (R), and so is (Ωd k)† by assumption. Theorem 3.8 shows (2) ⇒ (4).
We close this section with an example obtained by applying the above corollary.
Example 3.11. Let S = C[[x, y, z]] be a formal power series ring. Let G be the cyclic group 1/2(1, 1, 1),
and let R = S G be the invariant (i.e. the second Veronese) subring of S. Then ΩCM× (R) = Ul(R). In
fact, by [32, Proposition (16.10)], the modules R, ω, Ωω are the nonisomorphic indecomposable Cohen–
Macaulay R-modules and (Ωω)† ≅ Ωω. By [28, Theorem 4.3] the module Ω2 C does not have a nonzero
free or canonical summand. Hence Ω2 C is a direct sum of copies of Ωω, and thus (Ω2 C)† ≅ Ω2 C. The
equality ΩCM× (R) = Ul(R) follows from Corollary 3.10.
4. Applications
This section is devoted to stating applications of our main theorems obtained in the previous section.
4.1. The case of dimension one. We begin with studying the case where R has dimension 1.
Theorem 4.1. Let (R, m, k) be a 1-dimensional Cohen–Macaulay local ring with k infinite and canonical
module ω. Suppose that R has minimal multiplicity, and set (−)† = HomR (−, ω). Then
ΩCM× (R) = Ul(R) ⇐⇒ m† ∈ ΩCM× (R) ⇐⇒ m† ≅ m ⇐⇒ R is almost Gorenstein.
Proof. Call the four conditions (i)–(iv) from left to right. The implications (i) ⇔ (ii) ⇒ (iv) are shown by
Theorem 3.8, while (iii) ⇔ (iv) by [19, Theorem 2.14] and (3.8.1). Lemma 3.7(1) shows (iii) ⇒ (ii).
Now we pose a question related to Question 1.2.
Question 4.2. Can we classify 1-dimensional Cohen–Macaulay local rings R with minimal multiplicity
(and infinite residue field) satisfying the condition # ind Ul(R) < ∞?
According to Proposition 1.6, over such a ring R we have the property that # ind ΩCM(R) < ∞, which
is studied in [18]. If R has finite Cohen–Macaulay representation type (that is, if # ind CM(R) < ∞),
then of course this question is affirmative. However, we do not have any partial answer other than this.
The reader may wonder if the condition # ind Ul(R) < ∞ implies the equality ΩCM× (R) = Ul(R). Using
the above theorem, we observe that this does not necessarily hold:
Example 4.3. Let R = k[[t3 , t7 , t8 ]] be (the completion of) a numerical semigroup ring, where k is an
algebraically closed field of characteristic zero. Then R is a Cohen–Macaulay local ring of dimension 1
with minimal multiplicity. It follows from [12, Theorem A.3] that # ind Ul(R) < ∞. On the other hand,
R is not almost Gorenstein by [8, Example 4.3], so ΩCM× (R) ≠ Ul(R) by Theorem 4.1.
ULRICH MODULES AND MINIMAL MULTIPLICITY
11
4.2. The case of dimension two. From now on, we consider the case where R has dimension 2. We
recall the definition of a Cohen–Macaulay approximation. Let R be a Cohen–Macaulay local ring with
canonical module. A homomorphism f : X → M of R-modules is called a Cohen–Macaulay approximation
(of M ) if X is Cohen–Macaulay and any homomorphism f ′ : X ′ → M with X ′ being Cohen–Macaulay
factors through f . It is known that f is a (resp. minimal) Cohen–Macaulay approximation if and only if
there exists an exact sequence
0 → Y −g→ X −f→ M → 0
of R-modules such that X is Cohen–Macaulay and Y has finite injective dimension (resp. and that X, Y
have no common direct summand along g). For details of Cohen–Macaulay approximations, we refer the
reader to [21, Chapter 11].
The module E appearing in the following remark is called the fundamental module of R.
Remark 4.4. Let (R, m, k) be a 2-dimensional Cohen–Macaulay local ring with canonical module ω.
(1) There exists a nonsplit exact sequence
(4.4.1)
0→ω→E→m→0
which is unique up to isomorphism. This is because Ext1R (m, ω) ≅ Ext2R (k, ω) ≅ k.
(2) The module E is Cohen–Macaulay and uniquely determined up to isomorphism.
(3) The sequence (4.4.1) gives a minimal Cohen–Macaulay approximation of m.
(4) There is an isomorphism E ≅ E † . In fact, applying (−)† to (4.4.1) induces an exact sequence

0 → m† → E † → R → Ext1R (m, ω) → Ext1R (E, ω) = 0.

Applying (−)† to the natural exact sequence 0 → m → R → k → 0 yields m† ≅ ω, while Ext1R (m, ω) ≅ k.
We get an exact sequence 0 → ω → E † → m → 0, and the uniqueness of (4.4.1) shows E † ≅ E.
To prove the main result of this section, we prepare two lemmas. The first one relates the fundamental
module of a 2-dimensional Cohen–Macaulay local ring R with Ul(R) and ΩCM× (R).
Lemma 4.5. Let (R, m, k) be a 2-dimensional Cohen–Macaulay local ring with canonical module ω and
fundamental module E.
(1) Assume that R has minimal multiplicity. Then E is an Ulrich R-module.
(2) For each module M ∈ ΩCM× (R) there exists an exact sequence 0 → M → E ⊕n → N → 0 of
R-modules such that N is Cohen–Macaulay.
Proof. (1) We may assume that k is infinite by Remark 1.5(2). Let Q = (x, y) be a parameter ideal of R
with m2 = Qm. We have m/xm ≅ m/(x) ⊕ k; see [28, Corollary 5.3]. Note that (m/(x))2 = y(m/(x)). By
[31, Corollary 2.5] the minimal Cohen–Macaulay approximation of m/xm as an R/(x)-module is E/xE.
In view of the proof of [21, Proposition 11.15], the minimal Cohen–Macaulay approximations of m/(x) and
k as R/(x)-modules are m/(x) and HomR/(x) (m/(x), ω/xω), respectively. Thus we get an isomorphism
E/xE ≅ m/(x) ⊕ HomR/(x) (m/(x), ω/xω).
In particular, E/xE is an Ulrich R/(x)-module by Lemma 3.7(1) and Corollary 2.2. It follows from
Lemma 3.6 that E is an Ulrich R-module.
(2) Take an exact sequence 0 → M −f→ R⊕n −e→ L → 0 such that L is Cohen–Macaulay. As M has no
free summand, the homomorphism e is minimal. This means that f factors through the natural inclusion
i : m⊕n → R⊕n , that is, f = ig for some g ∈ HomR (M, m⊕n ). The direct sum p : E ⊕n → m⊕n of
copies of the surjection E → m (given by (4.4.1)) is a Cohen–Macaulay approximation. Hence there is a
homomorphism h : M → E ⊕n such that g = ph. We get a commutative diagram
0 −→ M −f→ R⊕n −→ L −→ 0
      ∥         ↑ ip        ↑
0 −→ M −h→ E⊕n −→ N −→ 0
with exact rows. This induces an exact sequence 0 → E ⊕n → R⊕n ⊕ N → L → 0, and therefore N is a
Cohen–Macaulay R-module.
A short exact sequence of Ulrich modules is preserved by certain functors:
Lemma 4.6. Let 0 → X → Y → Z → 0 be an exact sequence of modules in Ul(R). Then it induces
exact sequences of R-modules
(a) 0 → X ⊗R k → Y ⊗R k → Z ⊗R k → 0,
(b) 0 → HomR (Z, k) → HomR (Y, k) → HomR (X, k) → 0, and
(c) 0 → HomR (Z, (Ωd k)† ) → HomR (Y, (Ωd k)† ) → HomR (X, (Ωd k)† ) → 0.
Proof. The sequence X ⊗R k → Y ⊗R k → Z ⊗R k → 0 is exact and the first map is injective by Lemma
3.5(1). Hence (a) is exact, and so is (b) by a dual argument. In what follows, we show that (c) is
exact. We first note that (Ωd k)† is a minimal Cohen–Macaulay approximation of k; see the proof of [21,
Proposition 11.15]. Thus there is an exact sequence 0 → I → (Ωd k)† → k → 0 such that I has finite
injective dimension. As Ul(R) ⊆ CM(R), we have Ext1R (M, I) = 0 for all M ∈ {X, Y, Z}. We obtain a
commutative diagram
0 −→ HomR (Y, I) −→ HomR (Y, (Ωd k)† ) −→ HomR (Y, k) −→ 0
          ↓α                  ↓β                   ↓γ
0 −→ HomR (X, I) −→ HomR (X, (Ωd k)† ) −→ HomR (X, k) −→ 0
with exact rows, where α is surjective. The exactness of (b) implies that γ is surjective. By the snake
lemma β is also surjective, and therefore (c) is exact.
Now we can state and show our main result in this section.
Theorem 4.7. Let R be a 2-dimensional complete singular normal local ring with residue field C and
having minimal multiplicity. Suppose that R does not have a cyclic quotient singularity. Then:
(Ωd k)† ≅ Ωd k ⇐⇒ (Ωd k)† ∈ add(Ωd k) ⇐⇒ annh (Ωd k)† = m ⇐⇒ ΩCM× (R) = Ul(R).
Proof. In view of Corollary 3.10, it suffices to show that if R does not have a cyclic quotient singularity,
then the fourth condition implies the first one. By virtue of [32, Theorem 11.12] the fundamental module
E is indecomposable. Applying Lemma 4.5(2) to (Ωd k)† , we have an exact sequence
0 → (Ωd k)† −α→ E ⊕n → N → 0 such that N is Cohen–Macaulay. Since E is Ulrich by Lemma 4.5(1), so are all the three
modules in this sequence by Lemma 3.5(2). Thus we can apply Lemma 4.6 to see that the induced map
HomR (α, (Ωd k)† ) : HomR (E ⊕n , (Ωd k)† ) → HomR ((Ωd k)† , (Ωd k)† )
is surjective. This implies that α is a split monomorphism, and (Ωd k)† is isomorphic to a direct summand
of E ⊕n . Since E is indecomposable, it follows that (Ωd k)† is isomorphic to E ⊕m for some m. We obtain
(Ωd k)† ≅ E ⊕m ≅ (E † )⊕m ≅ (Ωd k)†† ≅ Ωd k,

where the second isomorphism follows by Remark 4.4(4).
Remark 4.8. Let R be a cyclic quotient surface singularity over C. Nakajima and Yoshida [23, Theorem
4.5] give a necessary and sufficient condition for R to be such that the number of nonisomorphic indecomposable Ulrich R-modules is equal to the number of nonisomorphic nonfree indecomposable special
Cohen–Macaulay R-modules. By [15, Corollary 2.9], the latter is equal to the number of isomorphism
classes of indecomposable modules in ΩCM× (R). Therefore, they actually give a necessary and sufficient
condition for R to satisfy ΩCM× (R) = Ul(R).
Using our Theorem 4.7, we give some examples of quotient surface singularities over C and consider
Ulrich modules over them.
Example 4.9. (1) Let S = C[[x, y]] be a formal power series ring. Let G be the cyclic group 1/3(1, 1),
and let R = S G be the invariant (i.e. the third Veronese) subring of S. Then ΩCM× (R) = Ul(R). This
follows from [23, Theorem 4.5] and Remark 4.8, but we can also show it by direct calculation: we have

Cl(R) = {[R], [ω], [p]} ≅ Z/3Z,

where ω = (x3 , x2 y)R is a canonical ideal of R, and p = (x3 , x2 y, xy 2 )R is a prime ideal of height 1 with
[ω] = 2[p]. Since the second Betti number of C over R is 9, we see Ω2 C ≅ p⊕3 . As [p† ] = [ω] − [p] = [p],
we have p† ≅ p and (Ω2 C)† ≅ Ω2 C. Theorem 4.7 shows ΩCM× (R) = Ul(R).
(2) Let S = C[[x, y]] be a formal power series ring. Let G be the cyclic group 1/8(1, 5), and let R = S G
be the invariant subring of S. With the notation of [23], the Hirzebruch–Jung continued fraction of this
group is [2, 3, 2]. It follows from [23, Theorem 4.5] and Remark 4.8 that ΩCM× (R) ≠ Ul(R).
4.3. An exact structure of the category of Ulrich modules. Finally, we consider realization of the
additive category Ul(R) as an exact category in the sense of Quillen [25]. We begin with recalling the
definition of an exact category given in [17, Appendix A].
Definition 4.10. Let A be an additive category. A pair (i, d) of composable morphisms
X −i→ Y −d→ Z
is exact if i is the kernel of d and d is the cokernel of i. Let E be a class of exact pairs closed under
isomorphism. The pair (A, E) is called an exact category if the following axioms hold. Here, for each
(i, d) ∈ E the morphisms i and d are called an inflation and a deflation, respectively.
(Ex0) 1 : 0 → 0 is a deflation.
(Ex1) The composition of deflations is a deflation.
(Ex2) For each morphism f : Z ′ → Z and each deflation d : Y → Z, there is a pullback diagram as in
the left below, where d′ is a deflation.
(Ex2op ) For each morphism f : X → X ′ and each inflation i : X → Y , there is a pushout diagram as in
the right below, where i′ is an inflation.

Y ′ −d′→ Z ′          X −i→ Y
 ↓         ↓f         f↓      ↓
Y −d→ Z             X ′ −i′→ Y ′
We can equip our Ul(R) with the structure of an exact category as follows.
Theorem 4.11. Let R be a d-dimensional Cohen–Macaulay local ring with residue field k and canonical
module, and assume that R has minimal multiplicity. Let S be the class of exact sequences 0 → L → M →
N → 0 of R-modules with L, M, N Ulrich. Then Ul(R) = (Ul(R), S) is an exact category having enough
projective objects and enough injective objects with proj Ul(R) = add(Ωd k) and inj Ul(R) = add((Ωd k)† ).
Proof. We verify the axioms in Definition 4.10.
(Ex0): This is clear.
(Ex1): Let d : Y → Z and d′ : Z → W be deflations. Then there is an exact sequence
0 → X → Y −d′d→ W → 0 of R-modules. Since Y is in Ul(R) and X, W ∈ CM(R), it follows from Lemma 3.5(2) that X ∈ Ul(R).
Thus this sequence belongs to S, and d′ d is a deflation.
(Ex2): Let f : Z ′ → Z be a homomorphism in Ul(R) and d : Y → Z a deflation in S. Then we get
an exact sequence 0 → Y ′ → Y ⊕ Z ′ −(d,f )→ Z → 0. Since Y ⊕ Z ′ ∈ Ul(R) and Y ′ , Z ∈ CM(R), Lemma
3.5(2) implies Y ′ ∈ Ul(R). Make an exact sequence 0 → X ′ → Y ′ −d′→ Z ′ → 0. As Y ′ ∈ Ul(R) and
X ′ , Z ′ ∈ CM(R), the module X ′ is in Ul(R) by Lemma 3.5(2) again. Thus d′ is a deflation.
(Ex2op ): We can check this axiom by the opposite argument to (Ex2).
Now we conclude that (Ul(R), S) is an exact category. Let us prove the remaining assertions. Lemma
4.6(c) yields the injectivity of (Ωd k)† . Since (−)† gives an exact duality of (Ul(R), S), the module
Ωd k is a projective object. We also observe from Lemma 3.7 and Corollary 2.2 that (Ul(R), S) has
enough projective objects with proj Ul(R) = add(Ωd k), and has enough injective objects with inj Ul(R) =
add((Ωd k)† ) by the duality (−)† .
Remark 4.12. Let (R, m) be a 1-dimensional Cohen–Macaulay local ring with infinite residue field. Let
(t) be a minimal reduction of m. Then Ul(R) = CM(R[m/t]) by [12, Proposition A.1]. This equality
actually gives an equivalence Ul(R) ≅ CM(R[m/t]) of categories, since Hom-sets do not change; see [21,
Proposition 4.14]. Thus the usual exact structure on CM(R[m/t]) coincides with the exact structure on
Ul(R) given above via this equivalence.
Acknowledgments
The authors are grateful to Doan Trung Cuong for valuable discussion on Ulrich modules. In particular,
his Question 1.2 has given a strong motivation for them to have this paper.
References
[1] M. Auslander; M. Bridger, Stable module theory, Mem. Amer. Math. Soc. No. 94, American Mathematical Society,
Providence, R.I., 1969.
[2] J. P. Brennan; J. Herzog; B. Ulrich, Maximally generated Cohen–Macaulay modules, Math. Scand. 61 (1987), no.
2, 181–203.
[3] W. Bruns; J. Herzog, Cohen–Macaulay rings, revised edition, Cambridge Studies in Advanced Mathematics, 39,
Cambridge University Press, Cambridge, 1998.
[4] M. Casanellas; R. Hartshorne, ACM bundles on cubic surfaces, J. Eur. Math. Soc. (JEMS) 13 (2011), no. 3,
709–731.
[5] L. Costa; R. M. Miró-Roig, GL(V )-invariant Ulrich bundles on Grassmannians, Math. Ann. 361 (2015), no. 1-2,
443–457.
[6] D. T. Cuong, Problem Session, International Workshop on Commutative Algebra, Thai Nguyen University, January 4,
2017.
[7] H. Dao; O. Iyama; R. Takahashi; C. Vial, Non-commutative resolutions and Grothendieck groups, J. Noncommut.
Geom. 9 (2015), no. 1, 21–34.
[8] S. Goto; N. Matsuoka; T. T. Phuong, Almost Gorenstein rings, J. Algebra 379 (2013), 355–381.
[9] S. Goto; K. Ozeki; R. Takahashi; K.-I. Watanabe; K.-I. Yoshida, Ulrich ideals and modules, Math. Proc. Cambridge
Philos. Soc. 156 (2014), no. 1, 137–166.
[10] S. Goto; K. Ozeki; R. Takahashi; K.-I. Watanabe; K.-I. Yoshida, Ulrich ideals and modules over two-dimensional
rational singularities, Nagoya Math. J. 221 (2016), no. 1, 69–110.
[11] S. Goto; R. Takahashi; N. Taniguchi, Almost Gorenstein rings – towards a theory of higher dimension, J. Pure
Appl. Algebra 219 (2015), no. 7, 2666–2712.
[12] J. Herzog; B. Ulrich; J. Backelin, Linear maximal Cohen–Macaulay modules over strict complete intersections, J.
Pure Appl. Algebra 71 (1991), no. 2-3, 187–202.
[13] C. Huneke; I. Swanson, Integral closure of ideals, rings, and modules, London Mathematical Society Lecture Note
Series, 336, Cambridge University Press, Cambridge, 2006.
[14] O. Iyama, Higher-dimensional Auslander–Reiten theory on maximal orthogonal subcategories, Adv. Math. 210 (2007),
no. 1, 22–50.
[15] O. Iyama; M. Wemyss, The classification of special Cohen–Macaulay modules, Math. Z. 265 (2010), no. 1, 41–83.
[16] S. B. Iyengar; R. Takahashi, Annihilation of cohomology and strong generation of module categories, Int. Math.
Res. Not. IMRN 2016, no. 2, 499–535.
[17] B. Keller, Chain complexes and stable categories, Manuscripta Math. 67 (1990), no. 4, 379–417.
[18] T. Kobayashi, Syzygies of Cohen–Macaulay modules and Grothendieck groups, J. Algebra 490 (2017), 372–379.
[19] T. Kobayashi, Syzygies of Cohen–Macaulay modules over one dimensional Cohen–Macaulay local rings, Preprint
(2017), arXiv:1710.02673.
[20] J. O. Kleppe; R. M. Miró-Roig, On the normal sheaf of determinantal varieties, J. Reine Angew. Math. 719 (2016),
173–209.
[21] G. J. Leuschke; R. Wiegand, Cohen–Macaulay Representations, Mathematical Surveys and Monographs, vol. 181,
American Mathematical Society, Providence, RI, 2012.
[22] H. Matsui; R. Takahashi; Y. Tsuchiya, When are n-syzygy modules n-torsionfree?, Arch. Math. (Basel) 108 (2017),
no. 4, 351–355.
[23] Y. Nakajima; K.-i. Yoshida, Ulrich modules over cyclic quotient surface singularities, J. Algebra 482 (2017), 224–247.
[24] A. Ooishi, On the self-dual maximal Cohen–Macaulay modules, J. Pure Appl. Algebra 106 (1996), no. 1, 93–102.
[25] D. Quillen, Higher algebraic K-theory, I, Algebraic K-theory, I: Higher K-theories (Proc. Conf., Battelle Memorial
Inst., Seattle, Wash., 1972), pp. 85–147, Lecture Notes in Math., Vol. 341, Springer, Berlin, 1973.
[26] H. Sabzrou; F. Rahmati, The Frobenius number and a-invariant, Rocky Mountain J. Math. 36 (2006), no. 6, 2021–
2026.
[27] J. Striuli, On extensions of modules, J. Algebra 285 (2005), no. 1, 383–398.
[28] R. Takahashi, Syzygy modules with semidualizing or G-projective summands, J. Algebra 295 (2006), no. 1, 179–194.
[29] R. Takahashi, Reconstruction from Koszul homology and applications to module and derived categories, Pacific J.
Math. 268 (2014), no. 1, 231–248.
[30] B. Ulrich, Gorenstein rings and modules with high numbers of generators, Math. Z. 188 (1984), no. 1, 23–32.
[31] K.-i. Yoshida, A note on minimal Cohen–Macaulay approximations, Comm. Algebra 24 (1996), no. 1, 235–246.
[32] Y. Yoshino, Cohen–Macaulay modules over Cohen–Macaulay rings, London Mathematical Society Lecture Note Series,
146, Cambridge University Press, Cambridge, 1990.
Graduate School of Mathematics, Nagoya University, Furocho, Chikusaku, Nagoya, Aichi 464-8602, Japan
E-mail address: [email protected]
Graduate School of Mathematics, Nagoya University, Furocho, Chikusaku, Nagoya, Aichi 464-8602, Japan
E-mail address: [email protected]
URL: http://www.math.nagoya-u.ac.jp/~takahashi/
Differential Privacy on Finite Computers
arXiv:1709.05396v1 [] 15 Sep 2017
Victor Balcer∗
Salil Vadhan†
Center for Research on Computation & Society
School of Engineering & Applied Sciences
Harvard University
[email protected], salil [email protected]
September 19, 2017
Abstract
We consider the problem of designing and analyzing differentially private algorithms that
can be implemented on discrete models of computation in strict polynomial time, motivated
by known attacks on floating point implementations of real-arithmetic differentially private
algorithms (Mironov, CCS 2012) and the potential for timing attacks on expected polynomialtime algorithms. We use a case study the basic problem of approximating the histogram of
a categorical dataset over a possibly large data universe X . The classic Laplace Mechanism
(Dwork, McSherry, Nissim, Smith, TCC 2006 and J. Privacy & Confidentiality 2017) does not
satisfy our requirements, as it is based on real arithmetic, and natural discrete analogues, such
as the Geometric Mechanism (Ghosh, Roughgarden, Sundararajan, STOC 2009 and SICOMP
2012), take time at least linear in |X |, which can be exponential in the bit length of the input.
In this paper, we provide strict polynomial-time discrete algorithms for approximate histograms whose simultaneous accuracy (the maximum error over all bins) matches that of the
Laplace Mechanism up to constant factors, while retaining the same (pure) differential privacy
guarantee. One of our algorithms produces a sparse histogram as output. Its “per-bin accuracy” (the error on individual bins) is worse than that of the Laplace Mechanism by a factor
of log |X |, but we prove a lower bound showing that this is necessary for any algorithm that
produces a sparse histogram. A second algorithm avoids this lower bound, and matches the
per-bin accuracy of the Laplace Mechanism, by producing a compact and efficiently computable
representation of a dense histogram; it is based on an (n + 1)-wise independent implementation
of an appropriately clamped version of the Discrete Geometric Mechanism.
∗ Supported by NSF grant CNS-1237235 and CNS-1565387.
† http://seas.harvard.edu/~salil. Supported by NSF grant CNS-1237235, a Simons Investigator Award, and a grant from the Sloan Foundation.
1 Introduction
Differential Privacy [DMNS06] is by now a well-established framework for privacy-protective statistical analysis of sensitive datasets. Much work on differential privacy involves an interplay between
statistics and computer science. Statistics provides many of the (non-private) analyses that we
wish to approximate with differentially private algorithms, as well as probabilistic tools that are
useful in analyzing such algorithms, which are necessarily randomized. From computer science,
differential privacy draws upon a tradition of adversarial modeling and strong security definitions,
techniques for designing and analyzing randomized algorithms, and considerations of algorithmic
resource constraints (such as time and memory).
Because of its connection to statistics, it is very natural that much of the literature on differential privacy considers the estimation of real-valued functions on real-valued data (e.g. the
sample mean) and introduces noise from continuous probability distributions (e.g. the Laplace
distribution) to obtain privacy. However, these choices are incompatible with standard computer
science models for algorithms (like the Turing machine or RAM model) as well as implementation
on physical computers (which use only finite approximations to real arithmetic, e.g. via floating
point numbers). This discrepancy is not just a theoretical concern; Mironov [Mir12] strikingly
demonstrated that common floating-point implementations of the most basic differentially private
algorithm (the Laplace Mechanism) are vulnerable to real attacks. Mironov shows how to prevent
his attack with a simple modification to the implementation, but this solution is specific to a single
differentially private mechanism and particular floating-point arithmetic standard. His solution increases the error by a constant factor and is most likely more efficient in practice than the algorithm
we will use to replace the Laplace Mechanism. However, he provides no bounds on asymptotic running time. Gazeau, Miller and Palamidessi [GMP16] provide more general conditions for which
an implementation of real numbers and a mechanism that perturbs the correct answer with noise
maintains differential privacy. However, they do not provide an explicit construction with bounds
on accuracy and running time.
From a theoretical point of view, a more appealing approach to resolving these issues is to avoid
real or floating-point arithmetic entirely and only consider differentially private computations that
involve discrete inputs and outputs, and rational probabilities. Such algorithms are realizable in
standard discrete models of computation. However, some such algorithms have running times that
are only bounded in expectation (e.g. due to sampling from an exponential distribution supported
on the natural numbers), and this raises a potential vulnerability to timing attacks. If an adversary
can observe the running time of the algorithm, it learns something about the algorithm’s coin tosses,
which are assumed to be secret in the definition of differential privacy. (Even if the time cannot
be directly observed, in practice an adversary can determine an upper bound on the running time,
which again is information that is implicitly assumed to be secret in the privacy definition.)
Because of these considerations, we advocate the following principle:
Differential Privacy for Finite Computers:
We should describe how to implement differentially private algorithms on discrete models of computation with strict bounds on running time (ideally polynomial in the bit
length of the input) and analyze the effects of those constraints on both privacy and
accuracy.
Note that a strict bound on running time does not in itself prevent timing attacks, but once we have
such a bound, we can pad all executions to take the same amount of time. Also, while standard
discrete models of computation (e.g. randomized Turing machines) are defined in terms of countable
2
rather than finite resources (e.g. the infinite tape), if we have a strict bound on running time, then
once we fix an upper bound on input length, they can indeed be implemented on a truly finite
computer (e.g. like a randomized Boolean circuit).
In many cases, the above goal can be achieved by appropriate discretizations and truncations
applied to a standard, real-arithmetic differentially private algorithm. However, such modifications
can have a nontrivial price in accuracy or privacy, and thus we also call for a rigorous analysis of
these effects.
In this paper, we carry out a case study of achieving “differential privacy for finite computers”
for one of the first tasks studied in differential privacy, namely approximating a histogram of a
categorical dataset. Even this basic problem turns out to require some nontrivial effort, particularly
to maintain strict polynomial time, optimal accuracy and pure differential privacy when the data
universe is large.
We recall the definition of differential privacy.
Definition 1.1. [DMNS06] Let M : X n → R be a randomized algorithm. We say M is (ε, δ)differentially private if for every two datasets D and D ′ that differ on one row and every subset
S⊆R
Pr[M(D) ∈ S] ≤ eε · Pr[M(D ′ ) ∈ S] + δ
We say an (ε, δ)-differentially private algorithm satisfies pure differential privacy when δ = 0
and say it satisfies approximate differential privacy when δ > 0.
In this paper, we study the problem of estimating the histogram of a dataset D ∈ X n , which is
the vector c = c(D) ∈ NX , where cx is the number of rows in D that have value x. Histograms can
be approximated while satisfying differential privacy using the Laplace Mechanism, introduced in
the original paper of Dwork, McSherry, Nissim and Smith [DMNS06]. Specifically, to obtain (ε, 0)differential privacy, we can add independent noise distributed according to a Laplace distribution,
specifically Lap(2/ε), to each component of c and output the resulting vector c̃. Here Lap(2/ε) is the
continuous, real-valued random variable with probability density function f (z) that is proportional
to exp(−ε · |z|/2). The Laplace Mechanism also achieves very high accuracy in two respects:
Per-Query Error: For each bin x, with high probability we have |c̃x − cx | ≤ O(1/ε).
Simultaneous Error: With high probability, we have maxx |c̃x − cx | ≤ O(log(|X |)/ε).
Note that both of the bounds are independent of the number n of rows in the dataset, and so the
fractional error vanishes linearly as n grows.
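The Laplace Mechanism for histograms just described fits in a few lines of ordinary floating-point Python. We include such a sketch (ours, not the authors' code, with illustrative names) precisely because this is the style of real-arithmetic implementation whose floating-point ports Mironov [Mir12] attacked, motivating the discrete alternatives developed in this paper.

```python
import math
import random

def sample_laplace(scale, rng):
    # Inverse-CDF sampling: if U is uniform on (-1/2, 1/2), then
    # -scale * sign(U) * ln(1 - 2|U|) is Lap(scale).
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def laplace_histogram(data, universe, eps, rng):
    # Exact histogram, then independent Lap(2/eps) noise per bin.
    # Changing one row moves two bins by 1 each, so the L1 sensitivity
    # is 2 and scale 2/eps yields (eps, 0)-DP in exact real arithmetic.
    counts = {x: 0 for x in universe}
    for row in data:
        counts[row] += 1
    return {x: counts[x] + sample_laplace(2.0 / eps, rng) for x in universe}
```

With high probability each released count is within O(1/ε) of the true one, matching the per-query error bound above.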
Simultaneous error is the more well-studied notion in the differential privacy literature, but
we consider per-query error to be an equally natural concept: if we think of the approximate
histogram c̃ as containing approximate answers to the |X | different counting queries corresponding
to the bins of X , then per-query error captures the error as experienced by an analyst who may be
only interested in one or a few of the bins of c̃. The advantage of considering per-query error is that
it can be significantly smaller than the simultaneous error, as is the case in the Laplace Mechanism
when the data universe X is very large. It is known that both of the error bounds achieved by the
Laplace Mechanism are optimal up to constant factors; no (ε, 0)-differentially private algorithm for
histograms can achieve smaller per-query error or simultaneous error [HT10, BBKN14].
Unfortunately, the Laplace Mechanism uses real arithmetic and thus cannot be implemented on
a finite computer. To avoid real arithmetic, we could use the Geometric Mechanism [GRS12], which
adds noise to each component of c according to the 2-sided geometric distribution, Geo(2/ε), which
is supported on the integers and has probability mass function f (z) ∝ exp(−ε·|z|/2). However, this
3
mechanism uses integers of unbounded size and thus cannot be implemented on a finite computer.
Indeed, while the algorithm can be implemented with a running time that is bounded in expectation
(after reducing ε so that eε/2 and hence all the probabilities are rational numbers), truncating long
executions or allowing an adversary to observe the actual running time can lead to a violation of
differential privacy. Thus, it is better to work with the Truncated Geometric Mechanism of Ghosh,
Roughgarden and Soundararajan [GRS12], where we clamp each noisy count c̃x to the interval
[0, n]. We observe that the resulting probability distribution of c̃x , supported on {0, 1, . . . , n}, can
be described explicitly in terms of cx , ε and n, and it can be sampled in polynomial time using
only integer arithmetic (after ensuring eε/2 is rational). Thus, we obtain:
Theorem 1.2 (Bounded Geometric Mechanism, informal statement of Thm. 3.7). For every finite
X , n and ε ∈ (0, 1], there is an (ε, 0)-differentially private algorithm M : X n → {0, 1, . . . , n}X for
histograms achieving:
• Per-query error O(1/ε).
• Simultaneous error O(log |X |)/ε.
• Strict running time |X | · poly(N ), where N is the bit length of the input (n, ε and a dataset
D ∈ X n ).
We note that while we only consider our particular definition of per-query accuracy, namely
that with high probability |c̃x − cx | ≤ O(1/ε), Ghosh et al. [GRS12] proved that the output of the
Bounded Geometric Mechanism can be used (with post-processing) to get optimal expected loss
with respect to an extremely general class of loss functions and arbitrary priors. The same result
applies to each individual noisy count c̃x output by our mechanism, since each bin is distributed
according to the Bounded Geometric Mechanism (up to a modification of ε to ensure rational
probabilities).
The Bounded Geometric Mechanism is not polynomial time for large data universes X . Indeed,
its running time (and output length) is linear in |X |, rather than polynomial in the bit length
of data elements, which is log |X |. To achieve truly polynomial time, we can similarly discretize
and truncate a variant of the Stability-Based Histogram of Bun, Nissim and Stemmer [BNS16].
This mechanism only adds Lap(2/ε) noise to the nonzero counts cx and then retains only
the noisy values c̃x that are larger than a threshold t = Θ(log(1/δ)/ε). Thus, the algorithm only
outputs a partial histogram, i.e. counts c̃x for a subset of the bins x, with the rest of the counts
being treated as zero. By replacing the use of the Laplace Mechanism with the (rational) Bounded
Geometric Mechanism as above, we can implement this algorithm in strict polynomial time:
Theorem 1.3 (Stability-Based Histogram, informal statement of Thm. 5.2). For every finite
X , n, ε ∈ (0, 1] and δ ∈ (0, 1/n), there is an (ε, δ)-differentially private algorithm M : X n →
{0, 1, . . . , n}⊆X for histograms achieving:
• Per-query error O(1/ε) on bins with true count at least O(log(1/δ)/ε).
• Simultaneous error O(log(1/δ)/ε).
• Strict running time poly(N ), where N is the bit length of the input (n, ε and a dataset
D ∈ X n ).
Notice that the simultaneous error bound of O(log(1/δ)/ε) is better than what is achieved by
the Laplace Mechanism when δ > 1/|X |, and is known to be optimal up to constant factors in this
range of parameters (see Theorem 6.1). The fact that this error bound is independent of the data
universe size |X | makes it tempting to apply even for infinite data domains X . However, we note
that when X is infinite, it is impossible for the algorithm to have a strict bound on running time
(as it needs time to read arbitrarily long data elements) and thus is vulnerable to timing attacks
and is not implementable on a finite computer.
Note also that the per-query error bound only holds on bins with large enough true count
(namely, those larger than our threshold t); we will discuss this point further below.
A disadvantage of the Stability-based Histogram is that it sacrifices pure differential privacy. It
is natural to ask whether we can achieve polynomial running time while retaining pure differential
privacy. A step in this direction was made by Cormode, Procopiuc, Srivastava and Tran [CPST11].
They observe that for an appropriate threshold t = Θ(log(|X |)/ε), if we run the Bounded Geometric
Mechanism and only retain the noisy counts c̃x that are larger than t, then the expected number of
bins that remain is less than n + 1. Indeed, the expected number of bins we retain whose true count
is zero (“empty bins”) is less than 1. They describe a method to directly sample the distribution
of the empty bins that are retained, without actually adding noise to all |X | bins. This yields
an algorithm whose output length is polynomial in expectation. However, the output length is
not strictly polynomial, as there is a nonzero probability of outputting all |X | bins. And it is not
clear how to implement the algorithm in expected polynomial time, because even after making the
probabilities rational, they have denominators of bit length linear in |X |.
To address these issues, we consider a slightly different algorithm. Instead of trying to retain
all noisy counts c̃x that are larger than some fixed threshold t, we retain the n largest noisy
counts (since there are at most n nonzero true counts). This results in a mechanism whose output
length is always polynomial, rather than only in expectation. However, the probabilities still have
denominators of bit length linear in |X |. Thus, we show how to approximately sample from this
distribution, to within an arbitrarily small statistical distance δ, at the price of a poly(log(1/δ))
increase in running time. Naively, this would result only in (ε, O(δ))-differential privacy. However,
when δ is significantly smaller than 1/|R|, where R is the range of the mechanism, we can convert
an (ε, δ)-differentially private mechanism to an (ε, 0)-differentially private mechanism by simply
outputting a uniformly random element of R with small probability. (A similar idea for the case
that |R| = 2 has been used in [KLN+ 11, CDK17].) Since our range is of at most exponential size
(indeed at most polynomial in bit length), the cost in our runtime for taking δ ≪ 1/|R| is at most
polynomial. With these ideas we obtain:
Theorem 1.4 (Pure DP Histogram in Polynomial Time, informal statement of Thm. 4.14). For
every finite X , n and ε ∈ (0, 2], there is an (ε, 0)-differentially private algorithm M : X n →
{0, 1, . . . , n}⊆X for histograms achieving:
• Per-query error O(1/ε) on bins with true count at least O(log(|X |)/ε).
• Simultaneous error O(log(|X |)/ε).
• Strict running time poly(N ), where N is the bit length of the input (n, ε and a dataset
D ∈ X n ).
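The conversion from approximate to pure differential privacy described above amounts to a simple wrapper around the base mechanism. A minimal sketch follows; the function and parameter names are ours, and the part requiring the actual analysis is choosing the mixing probability gamma appropriately relative to δ and |R|:

```python
import random

def mix_with_uniform(mechanism, output_range, gamma):
    """Run `mechanism`, but with probability `gamma` output a uniformly
    random element of `output_range` instead. Hypothetical sketch: if the
    base mechanism is (eps, delta)-DP and gamma is large enough relative
    to delta * |output_range|, the mixture satisfies pure DP with a
    slightly larger privacy parameter."""
    def wrapped(dataset):
        if random.random() < gamma:
            return random.choice(output_range)
        return mechanism(dataset)
    return wrapped
```

Since the range is at most exponential in the bit length of the output, taking δ ≪ 1/|R| keeps gamma (and hence the utility loss of the mixture) negligible.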
Both Theorems 1.3 and 1.4 only retain per-query error O(1/ε) on bins with a large enough true
count. We also prove a lower bound showing that this limitation is inherent in any algorithm that
outputs a sparse histogram (as both of these algorithms do).
Theorem 1.5 (Lower Bound on Per-Query Error for Sparse Histograms, Theorem 6.2). Suppose
that there is an (ε, δ)-differentially private algorithm M : X n → {0, 1, . . . , n}X for histograms that
5
always outputs histograms with at most n′ nonempty bins and has per-query error at most E on all
bins. Then
E ≥ Ω( min{log |X |, log(1/δ)} / ε ),
provided that ε > 0, ε2 > δ > 0 and |X | ≥ (n′ )2 .
This lower bound is similar in spirit to a lower bound of [BBKN14], which shows that no (ε, 0)-differentially private PAC learner for "point functions" (functions that are 1 on exactly one element of the domain) can produce sparse functions as hypotheses.
To bypass this lower bound, we can consider algorithms that produce succinct descriptions of
dense histograms. That is, the algorithm can output a polynomial-length description of a function
c̃ : X → [0, n] that can be evaluated in polynomial time, even though X may be of exponential size.
We show that this relaxation allows us to regain per-query error O(1/ε).
Theorem 1.6 (Polynomial-Time DP Histograms with Optimal Per-Query Accuracy, informal
statement of Thm. 7.3). For every finite X , n and ε ∈ (0, 1], there is an (ε, 0)-differentially private
algorithm M : X n → H for histograms (where H is an appropriate class of succinct descriptions of
histograms) achieving:
• Per-query error O(1/ε).
• Simultaneous error O(log(|X |)/ε).
• Strict running time poly(N ), where N is the bit length of the input (n, ε and a dataset
D ∈ X n ) for both producing the description of a noisy histogram c̃ ← M(D) and for evaluating
c̃(x) at any point x ∈ X .
The algorithm is essentially an (n+1)-wise independent instantiation of the Bounded Geometric
Mechanism. Specifically, we release a function h : X → {0, 1}r selected from an (n + 1)-wise
independent family of hash functions, and for each x ∈ X , we view h(x) as coin tosses specifying
a sample from the Bounded Geometric Distribution. That is, we let S : {0, 1}r → [0, n] be an
efficient sampling algorithm for the Bounded Geometric Distribution, and then c̃x = S(h(x)) is our
noisy count for x. The hash function is chosen randomly from the family conditioned on values c̃x
for the nonempty bins x, which we obtain by running the actual Bounded Geometric Mechanism
on those bins. The (n + 1)-wise independence ensures that the behavior on any two neighboring
datasets (which together involve at most n + 1 distinct elements of X ) are indistinguishable in the
same way as in the ordinary Bounded Geometric Mechanism. The per-query accuracy comes from
the fact that the marginal distributions of each of the noisy counts are the same as in the Bounded
Geometric Mechanism. (Actually, we incur a small approximation error in matching the domain of
the sampling procedure to the range of a family of hash functions.)
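As an illustration of the hashing ingredient, the standard construction of a t-wise independent family over a prime field (a uniformly random polynomial of degree less than t) can be sketched as follows. This sketch omits the paper's conditioning on prescribed values at the nonempty bins and the interpretation of h(x) as coin tosses for the geometric sampler:

```python
import random

def sample_twise_hash(t, p):
    """Draw h uniformly from a t-wise independent family of functions
    Z_p -> Z_p: a random polynomial of degree < t over the field Z_p
    (p prime), evaluated by Horner's rule."""
    coeffs = [random.randrange(p) for _ in range(t)]

    def h(x):
        acc = 0
        for a in coeffs:
            acc = (acc * x + a) % p
        return acc

    return h
```

For the mechanism one would take t = n + 1, so that the joint behavior on the at most n + 1 distinct elements of two neighboring datasets matches that of fully independent noise.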
As far as we know, the only other use of limited independence in constructing differentially
private algorithms is a use of pairwise independence by [BBKN14] in differentially private PAC
learning algorithms for the class of point functions. Although that problem is related to the one we
consider (releasing a histogram amounts to doing “query release” for the class of point functions, as
discussed below), the design and analysis of our algorithm appears quite different. (In particular,
our analysis seems to rely on (n + 1)-wise independence in an essential way.)
Another potential interest in our technique is as another method for bypassing limitations of
synthetic data for query release. Here, we have a large family of predicates Q = {q : X → {0, 1}},
and are interested in differentially private algorithms that, given a dataset D = (x1 , . . . , xn ) ∈ X n ,
output a “summary” M(D) that allows one to approximate the answers to all of the counting queries
q(D) = Σ_i q(xi ) associated with predicates q ∈ Q. For example, if Q is the family of point functions
consisting of all predicates that evaluate to 1 on exactly one point in the data universe X , then this
query release problem amounts to approximating the histogram of D. The fundamental result of
Blum, Ligett, and Roth [BLR13] and successors show that this is possible even for families Q and
data universes X that are of size exponential in n. Moreover, the summaries produced by these algorithms have the form of a synthetic dataset — a dataset D̂ ∈ X n̂ such that for every query q ∈ Q,
we have q(D̂) ≈ q(D). Unfortunately, it was shown in [UV11] that even for very simple families Q
of queries, such as correlations between pairs of binary attributes, constructing such a differentially
private synthetic dataset requires time exponential in the bitlength log |X | of data universe elements.
Thus, it is important to find other ways of representing approximate answers to natural families Q
of counting queries, which can bypass the inherent limitations of synthetic data, and progress along
these lines was made in a variety of works [GRU12, CKKL12, HRS12, TUV12, CTUW14, DNT15].
Our algorithm, and its use of (n + 1)-wise independence, can be seen as yet another representation
that bypasses a limitation of synthetic data (albeit a statistical rather than computational one).
Indeed, a sparse histogram is simply a synthetic dataset that approximates answers to all point
functions, and by Theorem 1.5, our algorithm achieves provably better per-query accuracy than
is possible with synthetic datasets. This raises the question of whether similar ideas can also be
useful in bypassing the computational limitations of synthetic data for more complex families of
counting queries.
2 Preliminaries
Throughout this paper, let N be the set {0, 1, . . .}, N+ be the set {1, 2, . . .}. For n ∈ N, let [n] be
the nonstandard set {0, . . . , n}. Notice that |[n]| = n + 1. Given a set A and finite set B, we define
AB to be the set of length |B| vectors over A indexed by the elements of B.
2.1 Differential Privacy
We define a dataset D ∈ X n to be an ordered tuple of n ≥ 1 rows where each row is drawn from a
discrete data universe X with each row corresponding to an individual. Two datasets D, D ′ ∈ X n
are considered neighbors if they differ in exactly one row.
Definition 2.1. [DMNS06] Let M : X n → R be a randomized algorithm. We say M is (ε, δ)-differentially private if for every pair of neighboring datasets D and D ′ and every subset S ⊆ R
Pr[M(D) ∈ S] ≤ eε · Pr[M(D ′ ) ∈ S] + δ
We say an (ε, δ)-differentially private algorithm satisfies pure differential privacy when δ = 0
and say it satisfies approximate differential privacy when δ > 0. Intuitively, the ε captures an
upper bound on an adversary’s ability to determine whether a particular individual is in the dataset.
And the δ parameter represents an upper bound on the probability of a catastrophic privacy breach
(e.g. the entire dataset is released). The common setting of parameters takes ε ∈ (0, 1] to be a
small constant and δ to be negligible in n.
The following properties of differentially private algorithms will be used in some of our proofs.
Lemma 2.2 (post-processing [DMNS06]). Let M : X n → Y be (ε, δ)-differentially private and
f : Y → Z be any randomized function. Then f ◦ M : X n → Z is (ε, δ)-differentially private.
Lemma 2.3 (group privacy [DMNS06]). Let M : X n → Y be (ε, δ)-differentially private. Let
D1 , D2 ⊆ X n be datasets such that D2 can be obtained by changing at most m rows of D1 . Then
for all S ⊆ Y
Pr[M(D1 ) ∈ S] ≤ emε · Pr[M(D2 ) ∈ S] + emε · δ/ε
Lemma 2.4 (composition [DL09]). Let M1 : X n → Y1 be (ε1 , δ1 )-differentially private and M2 :
X n → Y2 be (ε2 , δ2 )-differentially private. Define M : X n → Y1 ×Y2 as M(D) = (M1 (D), M2 (D))
for all D ∈ X n . Then M is (ε1 + ε2 , δ1 + δ2 )-differentially private.
2.2 Histograms
For x ∈ X , the point function cx : X n → N is defined to count the number of occurrences of x in
a given dataset, i.e. for D ∈ X n
cx (D) = |{i ∈ {1, . . . , n} : Di = x}|
In this paper we focus on algorithms for privately releasing approximations to the values of all point
functions, also known as a histogram. A histogram is a collection of bins, one for each element x
in the data universe, with the xth bin consisting of its label x and a count cx ∈ N.
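For illustration, the exact counts cx (D) of a dataset can be tabulated with a few lines of Python (the helper name is ours, not the paper's):

```python
from collections import Counter

def true_counts(dataset):
    """Exact histogram counts: c_x(D) = number of rows of D equal to x.
    Counter returns 0 for elements that do not appear."""
    return Counter(dataset)
```

The private algorithms below release noisy approximations to these counts rather than the counts themselves.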
2.2.1 Representations
The input to our algorithms is always a dataset (i.e. an element D ∈ X n ) and the outputs represent
approximate histograms. We consider the following histogram representations as our algorithms’
outputs:
• A vector in NX . We use {c̃x }x∈X to denote a histogram where c̃x ∈ N is the approximate
count for the element x.
• A partial vector h ∈ (X × N)∗ such that each element x ∈ X appears at most once in h with
each pair (x, c̃x ) ∈ X × N interpreted as element x having approximate count c̃x . Elements
x not listed in the partial vector are assumed to have count c̃x = 0. Implicitly an algorithm
can return a partial vector by releasing bins for a subset of X .
• A data structure, encoded as a string, which defines a function h : X → N where h(x),
denoted hx , is the approximate count for x ∈ X and hx is efficiently computable given this
data structure (e.g. time polynomial in the length of the data structure). In Section 7, this
data structure consists of the coefficients of a polynomial, along with some parameters.
Each representation is able to express any histogram over X . The difference between them is the
memory used and the efficiency of computing a count. For example, computing the approximate
count for x ∈ X when using the data structure representation is bounded by the time it takes to
compute the associated function. But when using partial vectors, one only needs to iterate through
the vector to determine the approximate count.
We define the following class of histograms. Let Hn,n′ (X ) ⊆ NX be the set of all histograms
over X with integer counts in [0, n] (or N when n = ∞) and at most n′ of them nonzero. By using
partial vectors each element of Hn,n′ (X ) can be stored in O(n′ · (log n + log |X |)) bits, which is
shorter than the vector representation when n′ = o(|X |/ log |X |).
2.2.2 Accuracy
In order to preserve privacy, our algorithms return histograms with noise added to the counts.
Therefore, it is crucial to understand their accuracy guarantees. So given a dataset D ∈ X n we
compare the noisy count c̃x = M(D)x of x ∈ X (the count released by algorithm M) to its true
count, cx (D). We focus on the following two metrics:
Definition 2.5. A histogram algorithm M : X n → NX has (a, β)-per-query accuracy if
∀D ∈ X n ∀x ∈ X :   Pr[|M(D)x − cx (D)| ≤ a] ≥ 1 − β
Definition 2.6. A histogram algorithm M : X n → NX has (a, β)-simultaneous accuracy if
∀D ∈ X n :   Pr[∀x ∈ X |M(D)x − cx (D)| ≤ a] ≥ 1 − β
Respectively, these metrics capture the maximum error for any one bin and the maximum error
simultaneously over all bins. Even though simultaneous accuracy is commonly used in differential
privacy, per-query accuracy has several advantages:
• For histograms, one can provably achieve a smaller per-query error than is possible for simultaneous error. Indeed, the optimal simultaneous error for (ε, 0)-differentially private histograms
is a = Θ (log(|X |/β)/ε) whereas the optimal per-query error is a = Θ (log(1/β)/ε), which is
independent of |X | [HT10, BBKN14].
• Per-query accuracy may be easier to convey to an end user of differential privacy. For example,
it is the common interpretation of error bars shown on a graphical depiction of a histogram.
Figure 1: A histogram with error bars
• For many algorithms (such as ours), per-query accuracy is good enough to imply optimal
simultaneous accuracy. Indeed, an algorithm with (a, β)-per-query accuracy also achieves
(a, β · |X |)-simultaneous accuracy (by a union bound).
However, we may not always be able to achieve as good per-query accuracy as we want. So we will
also use the following relaxation which bounds the error only on bins with large enough true count.
Definition 2.7. A histogram algorithm M : X n → NX has (a, β)-per-query accuracy on
counts larger than t if
∀D ∈ X n ∀x ∈ X s.t. cx (D) > t :   Pr[|M(D)x − cx (D)| ≤ a] ≥ 1 − β

2.3 Probability Terminology
Definition 2.8. Let Z be an integer-valued random variable.
1. The probability mass function of Z, denoted fZ , is the function fZ (z) = Pr[Z = z] for
all z ∈ Z.
2. The cumulative distribution function of Z, denoted FZ , is the function FZ (z) = Pr[Z ≤
z] for all z ∈ Z.
3. The support of Z, denoted supp(Z), is the set of elements for which fZ (z) ≠ 0.
Definition 2.9. Let Y and Z be random variables taking values in discrete range R. The total
variation distance between Y and Z is defined as
∆(Y, Z) = max_{A⊆R} (Pr[Y ∈ A] − Pr[Z ∈ A]) = (1/2) · Σ_{a∈R} |Pr[Z = a] − Pr[Y = a]|
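For finitely supported distributions given explicitly as probability mass functions, the summation form of the definition can be computed directly:

```python
def total_variation(p, q):
    """Total variation distance between two pmfs given as dicts
    mapping outcomes to probabilities; outcomes missing from a
    dict are taken to have probability 0."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(a, 0.0) - q.get(a, 0.0)) for a in support)
```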
Lemma 2.10. Let Y and Z be random variables over discrete range R. Total variation distance
has the following properties:
1. Y and Z are identically distributed, denoted Y ∼ Z, if and only if ∆(Y, Z) = 0.
2. Let T : R → R′ be any function with R′ discrete. Then
∆(T (Y ), T (Z)) ≤ ∆(Y, Z)
3. Let Y1 , Y2 , Z1 and Z2 be random variables over discrete range R. Then
∆((Y1 , Y2 ), (Z1 , Z2 )) ≤ ∆(Y1 , Z1 ) + max_{a∈R, A⊆R} | Pr[Y2 ∈ A | Y1 = a] − Pr[Z2 ∈ A | Z1 = a] |

2.3.1 Sampling
Because we are interested in the computational efficiency of our algorithms we need to consider the
efficiency of sampling from various distributions.
A standard method for sampling a random variable is via inverse transform sampling.
Lemma 2.11. Let U be uniformly distributed on (0, 1]. Then for any integer-valued random variable
Z we have FZ−1 (U ) ∼ Z where FZ−1 (u) is defined as min{z ∈ supp(Z) : FZ (z) ≥ u}.
If Z, the random variable we wish to sample, has finite support we can compute the inverse
cumulative distribution by performing binary search on supp(Z) to find the minimum. This method
removes the need to compute the inverse function of the cumulative distribution function and is
used in some of our algorithms.
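This binary-search-based inverse transform sampling can be sketched as follows for a finite, sorted support (illustrative floating-point version; the function and parameter names are ours):

```python
import random

def inverse_transform_sample(support, cdf, u=None):
    """Return min{z in support : cdf(z) >= u} by binary search, where
    `support` is sorted in increasing order and u is uniform on (0, 1]
    (Lemma 2.11 specialized to a finite support)."""
    if u is None:
        u = 1.0 - random.random()  # random() is uniform on [0,1), so this is (0,1]
    lo, hi = 0, len(support) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if cdf(support[mid]) >= u:
            hi = mid
        else:
            lo = mid + 1
    return support[lo]
```

Only O(log |supp(Z)|) evaluations of the cumulative distribution function are needed per sample.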
2.3.2 Order Statistics
Definition 2.12. Let Z1 , . . . , Zℓ be integer-valued random variables. The i-th order statistic of
Z1 , . . . , Zℓ denoted Z(i) is the i-th smallest value among Z1 , . . . , Zℓ .
Lemma 2.13. Let Z1 , . . . , Zℓ be i.i.d. integer-valued random variables with cumulative distribution
function F . Then
FZ(ℓ) (z) = (F (z))ℓ
and
F_{Z(i) | Z(i+1) = v_{i+1}, ..., Z(ℓ) = v_ℓ}(z) = F_{Z(i) | Z(i+1) = v_{i+1}}(z) = { 1                        if z > v_{i+1}
                                                                                  { (F (z)/F (v_{i+1}))^i   otherwise

for all 1 ≤ i < ℓ and v_{i+1} ≤ v_{i+2} ≤ . . . ≤ v_ℓ all in the support of Z1 .
From this lemma, we can iteratively sample random variables distributed identically to Z(ℓ) ,
Z(ℓ−1) , . . . , Z(i) without having to sample all ℓ of the original random variables. The inverse cumulative distributions for the order statistics are
F_{Z(ℓ)}^{−1}(u) = F^{−1}(u^{1/ℓ})   and   F_{Z(i) | Z(i+1) = v_{i+1}, ..., Z(ℓ) = v_ℓ}^{−1}(u) = F^{−1}(u^{1/i} · F (v_{i+1}))

2.4 Two-Sided Geometric Distribution
A common technique for creating differentially private algorithms is to perturb the desired output
with appropriately scaled Laplace noise. Because our algorithms’ outputs are counts, we focus on
a discrete analogue of the Laplace distribution as in [GRS12].
We say an integer-valued random variable Z follows a two-sided geometric distribution
with scale parameter s centered at c ∈ Z (denoted Z ∼ c + Geo(s)) if its probability mass
function fZ (z) is proportional to e−|z−c|/s . It can be verified that fZ and its cumulative distribution
function FZ are
fZ (z) = ((e^{1/s} − 1)/(e^{1/s} + 1)) · e^{−|z−c|/s}

FZ (z) = { (e^{1/s}/(e^{1/s} + 1)) · e^{−(c−z)/s}   if z ≤ c
         { 1 − (1/(e^{1/s} + 1)) · e^{−(z−c)/s}     otherwise
for all z ∈ Z. When c is not specified, it is assumed to be 0. The inverse cumulative distribution
of Z is
FZ^{−1}(u) = c + { ⌈s · ln(u) + s · ln(e^{1/s} + 1)⌉ − 1    if u ≤ 1/2
                 { ⌈−s · ln(1 − u) − s · ln(e^{1/s} + 1)⌉   otherwise

or, equivalently,

FZ^{−1}(u) = c + ⌈s · sign(1/2 − u) · (ln(1 − |2u − 1|) + ln((e^{1/s} + 1)/2))⌉ + ⌊2u⌋ − 1

2.5 Model of Computation
We analyze the running time of our algorithms with respect to the w-bit word RAM model
taking w = O(log N ) where N is the bit length of our algorithms’ inputs (D ∈ X n , ε and possibly
some additional parameters). In this model, memory accesses and basic operations (arithmetic,
comparisons and logical) on w-bit words are constant time. In addition, we assume the data
universe X = [m] for some m ∈ N. Some parameters to our algorithms are rational. We represent
rationals by pairs of integers.
Because our algorithms require randomness, we assume that they have access to an oracle that
when given a number d ∈ N+ returns a uniformly random integer between 1 and d inclusively.
3 The Geometric Mechanism

In this section we show how to construct a differentially private histogram using the Laplace Mechanism while requiring only integer computations of bounded length.
As shown by Dwork, McSherry, Nissim and Smith [DMNS06], we can privately release a histogram by adding independent and appropriately scaled Laplace noise to each bin. Below we state
a variant that uses discrete noise, formally studied in [GRS12].
Algorithm 3.1. GeometricMechanism(D, ε) for D ∈ X n and ε > 0
1. For each x ∈ X , do the following:
(a) Set c̃x to cx (D) + Geo(2/ε) clamped to the interval [0, n]. i.e.
c̃x = 0 if Zx ≤ 0, c̃x = n if Zx ≥ n, and c̃x = Zx otherwise, where Zx = cx (D) + Geo(2/ε).
(b) Release (x, c̃x ).
Note that the output of this algorithm is a collection of bins (x, c̃x ) which represents a partial
vector, but in this case we have a count for each x ∈ X so it defines a complete vector in Hn,|X |(X ) ⊆
NX . The privacy and accuracy properties of the algorithm are similar to those of the Laplace
Mechanism.
Theorem 3.2. GeometricMechanism(D, ε) has the following properties:
i. GeometricMechanism(D, ε) is (ε, 0)-differentially private [GRS12].
ii. GeometricMechanism(D, ε) has (a, β)-per-query accuracy for
a = ⌈ (2/ε) · ln(1/β) ⌉
iii. GeometricMechanism(D, ε) has (a, β)-simultaneous accuracy for
a = ⌈ (2/ε) · ln( 1/(1 − (1 − β)^{1/|X |}) ) ⌉ ≤ ⌈ (2/ε) · ln(|X |/β) ⌉
Proof of ii-iii. Let Z ∼ Geo(2/ε). For any x ∈ X
Pr[|c̃x − cx (D)| ≤ a] ≥ Pr[|Z| ≤ ⌊a⌋]
                      = 1 − 2 · Pr[Z ≤ −⌊a⌋ − 1]
                      = 1 − 2 · e^{−⌊a⌋·ε/2}/(e^{ε/2} + 1)

Now, for a = ⌈ (2/ε) · ln(1/β) ⌉,

Pr[|c̃x − cx (D)| ≤ a] ≥ 1 − 2β/(e^{ε/2} + 1) ≥ 1 − β
Part iii follows similarly after noting by independence of the counts that
Pr[∀x ∈ X |c̃x − cx (D)| ≤ a] ≥ Pr[|Z| ≤ ⌊a⌋]^{|X |}
The accuracy bounds up to constant factors match the lower bounds for releasing a differentially
private histogram [HT10, BBKN14].
As presented above, this algorithm needs to store integers of unbounded size since Geo(2/ε)
is unbounded in magnitude. As noted in [GRS12], by restricting the generated noise to a fixed
range we can avoid this problem. However, even when the generated noise is restricted to a fixed
range, generating this noise via inverse transform sampling may require infinite precision. By
appropriately choosing ε, the probabilities of this noise’s cumulative distribution function can be
represented with finite precision, and therefore generating this noise via inverse transform sampling
only requires finite precision.
Proposition 3.3. For k ∈ N, n ∈ N+ and c ∈ [n], the algorithm GeoSample(k, n, c) has output identically distributed to a two-sided geometric random variable with scale parameter 2/ε̃ centered at c clamped to the range [0, n], where we define ε̃ = 2 · ln(1 + 2^{−k}). Moreover, GeoSample(k, n, c) has running time poly(k, n).
We have chosen ε̃ so that the cumulative distribution function of a two-sided geometric random
variable with scale parameter 2/ε̃ clamped to [0, n] takes on only rational values with a common
denominator d. Therefore, to implement inverse transform sampling on this distribution we only
need to choose a uniformly random integer in {1, . . . , d} rather than a uniformly random variable
over (0, 1].
Algorithm 3.4. GeoSample(k, n, c) for k ∈ N, n ∈ N+ and c ∈ [n]
1. Let d = (2^{k+1} + 1) · (2^k + 1)^{n−1}.
2. Define the function

F (z) = { 2^{k(c−z)} · (2^k + 1)^{n+z−c}          if 0 ≤ z ≤ c
        { d − 2^{k(z−c+1)} · (2^k + 1)^{n−1−z+c}  if c < z < n
        { d                                       if z = n
3. Sample U uniformly at random from {1, . . . , d}.
4. Using binary search find the smallest z ∈ [n] such that F (z) ≥ U .
5. Return z.
The function F is obtained by clearing denominators in the cumulative distribution function of
c + Geo(2/ε̃) clamped to [0, n].
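For concreteness, Algorithm 3.4 can be transcribed into Python essentially verbatim; Python's arbitrary-precision integers play the role of the poly(k, n)-bit integer arithmetic, and `random.randint` stands in for the uniform-sampling oracle of Section 2.5:

```python
import random

def geo_sample(k, n, c):
    """Algorithm 3.4: sample c + Geo(2/eps~) clamped to [0, n], where
    eps~ = 2*ln(1 + 2^-k), via inverse transform sampling with the
    integer-valued function F (the CDF with denominators cleared)."""
    d = (2**(k + 1) + 1) * (2**k + 1)**(n - 1)

    def F(z):
        if z == n:
            return d
        if z <= c:
            return 2**(k * (c - z)) * (2**k + 1)**(n + z - c)
        return d - 2**(k * (z - c + 1)) * (2**k + 1)**(n - 1 - z + c)

    u = random.randint(1, d)
    lo, hi = 0, n  # binary search for the smallest z in [n] with F(z) >= u
    while lo < hi:
        mid = (lo + hi) // 2
        if F(mid) >= u:
            hi = mid
        else:
            lo = mid + 1
    return lo
```

All quantities involved are integers of bit length O(nk), matching the poly(k, n) running time claimed in Proposition 3.3.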
Lemma 3.5. Let F (z) be defined as in Algorithm 3.4. Then for z ∈ [n], F (z) ∈ [d] and F (z)/d
equals the cumulative distribution function of c + Geo(2/ε̃) clamped to [0, n].
We prove this lemma after seeing how it implies Proposition 3.3.
Proof of Proposition 3.3. Let U be drawn uniformly at random from {1, . . . , d}. By construction,
for all z ∈ [n]
Pr[GeoSample(k, n, c) ≤ z] = Pr[U ≤ F (z)] = F (z)/d
implying GeoSample(k, n, c) ∼ c + Geo(2/ε̃) by Lemma 3.5.
We now bound the running time. Binary search takes O(log n) rounds. The largest number
used is d with bit length O(nk) and all operations are polynomial in the bit length of these numbers.
Therefore, GeoSample(k, n, c) has running time poly(k, n).
Proof of Lemma 3.5. The cumulative distribution function of Z ∼ c + Geo(2/ε̃) is

FZ (z) = { 0                                           if z < 0
         { (e^{ε̃/2}/(e^{ε̃/2} + 1)) · e^{−(c−z)·ε̃/2}   if 0 ≤ z ≤ c
         { 1 − (1/(e^{ε̃/2} + 1)) · e^{(c−z)·ε̃/2}       if c < z < n
         { 1                                           if z ≥ n
Consider the case when 0 ≤ z ≤ c.

FZ (z) = (e^{ε̃/2}/(e^{ε̃/2} + 1)) · e^{−(c−z)·ε̃/2}
       = ((1 + 2^{−k})/(2 + 2^{−k})) · (1 + 2^{−k})^{−(c−z)}
       = ((2^k + 1)/(2^{k+1} + 1)) · (2^k/(2^k + 1))^{c−z}
       = 2^{k(c−z)}/((2^{k+1} + 1) · (2^k + 1)^{c−z−1})
       = 2^{k(c−z)} · (2^k + 1)^{n+z−c}/d
       = F (z)/d

where the last two steps multiply the numerator and denominator by (2^k + 1)^{n−(c−z)} and use the definition of d.
A similar argument holds when c < z < n and FZ (n) = 1 = F (n)/d. So FZ (z) = F (z)/d for all
z ∈ [n].
Using this algorithm we are ready to construct a private histogram algorithm with bounded
time complexity whose accuracy is identical to that of GeometricMechanism up to constant factors.
Algorithm 3.6. BoundedGeometricMechanism(D, ε) for D ∈ X n and rational ε ∈ (0, 1]
1. Let k = ⌈log(2/ε)⌉.
2. For each x ∈ X , do the following:
(a) Let c̃x = GeoSample(k, n, cx (D)).
(b) Release (x, c̃x ).
Theorem 3.7. Let rational ε ∈ (0, 1] and ε̃ = 2 · ln(1 + 2^{−⌈log(2/ε)⌉}) ∈ (4/9 · ε, ε]. Then
BoundedGeometricMechanism(D, ε) has the following properties:
i. BoundedGeometricMechanism(D, ε) is (ε, 0)-differentially private.
ii. BoundedGeometricMechanism(D, ε) has (a, β)-per-query accuracy for
a = ⌈ (2/ε̃) · ln(1/β) ⌉ ≤ ⌈ (9/(2ε)) · ln(1/β) ⌉
iii. BoundedGeometricMechanism(D, ε) has (a, β)-simultaneous accuracy for
a = ⌈ (2/ε̃) · ln(|X |/β) ⌉ ≤ ⌈ (9/(2ε)) · ln(|X |/β) ⌉
iv. BoundedGeometricMechanism(D, ε) has running time
|X | · poly(N )
where N is the bit length of the algorithm’s input (D ∈ X n and ε).
Proof of i-iii. By Proposition 3.3, GeoSample(k, n, cx(D)) generates a two-sided geometric random
variable with scale parameter 2/ε̃ centered at cx (D) clamped to [0, n]. So this algorithm is identically distributed to GeometricMechanism(D, ε̃). As ε̃ ∈ (4/9 · ε, ε], parts i-iii follow from Theorem 3.2.
Proof of iv. For each x ∈ X , computing cx (D) takes time O(n log |X |) and by Proposition 3.3
GeoSample(k, n, cx (D)) takes time poly(n, log(1/ε)) as k = O(log(1/ε)).
4 Improving the Running Time
For datasets over large domains X , the running time of Algorithm 3.6, which is linear in |X |, can be prohibitive. We present an algorithm that reduces the running time's dependence on the universe
size from nearly linear to poly-logarithmic based on the observation that most counts are 0 when
n ≪ |X |; this is the same observation made by Cormode, Procopiuc, Srivastava and Tran [CPST11]
to output sparse histograms.
4.1 Sparse Histograms
We start by reducing the output length of GeometricMechanism to release only the bins with the
heaviest (or largest) counts (interpreted as a partial vector).
Algorithm 4.1. KeepHeavy(D, ε) for D ∈ X n and ε > 0
1. For each x ∈ X , set c̃x to cx (D) + Geo(2/ε) clamped to [0, n].
2. Let x1 , . . . , xn+1 be the elements of X with the largest counts in sorted order, i.e.
c̃x1 ≥ c̃x2 ≥ . . . ≥ c̃xn+1 ≥ max_{x ∈ X \ {x1 ,...,xn+1 }} c̃x
3. Release h = {(x, c̃x ) : c̃x > c̃xn+1 } ∈ Hn,n (X ).
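Step 3 above amounts to thresholding at the (n+1)-st largest noisy count; a minimal sketch (helper name ours):

```python
def keep_heavy(noisy_counts, n):
    """Step 3 of Algorithm 4.1: keep only bins whose noisy count strictly
    exceeds the (n+1)-st largest noisy count, so at most n bins survive.
    `noisy_counts` maps each x in the universe to its noisy count c~_x."""
    ordered = sorted(noisy_counts.values(), reverse=True)
    # c~_{x_{n+1}}; if there are at most n bins, keep them all
    threshold = ordered[n] if len(ordered) > n else -1
    return {x: c for x, c in noisy_counts.items() if c > threshold}
```

Using a strict inequality means that ties with the (n+1)-st largest count are dropped, so the released partial vector never has more than n bins.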
Observe that the output length has been improved to O(n · (log n + log |X |)) bits compared to
the O(|X | · log n) bits needed to represent the outputs of GeometricMechanism.
Theorem 4.2. KeepHeavy(D, ε) has the following properties:
i. KeepHeavy(D, ε) is (ε, 0)-differentially private.
ii. KeepHeavy(D, ε) has (a, β)-per-query accuracy on counts larger than t for
t = 2 · ⌈ (2/ε) · ln(2|X |/β) ⌉   and   a = ⌈ (2/ε) · ln(2/β) ⌉
iii. KeepHeavy(D, ε) has (a, β)-simultaneous accuracy for
a = 2 · ⌈ (2/ε) · ln(|X |/β) ⌉
Note that, unlike GeometricMechanism, this algorithm only has (O(log(1/β)/ε), β)-per-query
accuracy on counts larger than t = O(log(|X |/β)/ε). This loss is necessary for any algorithm that
outputs a sparse histogram as we will show in Theorem 6.2.
Proof of i. Privacy follows from the (ε, 0)-differential privacy of GeometricMechanism (part i of
Theorem 3.2) along with differential privacy’s closure under post-processing (Lemma 2.2).
To prove the remaining parts, we start with the following lemma.
Lemma 4.3. For any β′ ∈ (0, 1), let t′ = 2 · ⌈ (2/ε) · ln(|X |/β′) ⌉ and define the event

Et′ = {∀x ∈ X |c̃x − cx (D)| ≤ t′/2}

Then
Pr[Et′ ] ≥ 1 − β ′ and Et′ implies that for all x ∈ X such that cx (D) > t′ we have c̃x > c̃xn+1 .
Proof. The probability of Et′ occurring follows from part iii of Theorem 3.2 as the {c̃x }x∈X are
identically distributed to the output of GeometricMechanism(D, ε).
Assume the event Et′ . Then for all x ∈ X such that cx (D) > t′ , we have c̃x > t′ /2 and for all
x ∈ X such that cx (D) = 0, we have c̃x ≤ t′ /2. Because there are at most n distinct elements in
D, we have c̃x > c̃xn+1 for all x ∈ X such that cx (D) > t′ .
Proof of part ii of Theorem 4.2. Let x ∈ X such that cx (D) > t. We have
Pr[|hx − cx (D)| > a] ≤ Pr[c̃x ≤ c̃xn+1 ] + Pr[|c̃x − cx (D)| > a] ≤ β/2 + β/2 = β
by Lemma 4.3 with β ′ = β/2 and t′ = t, and part ii of Theorem 3.2.
Proof of part iii of Theorem 4.2. Let t = 2 ⌈(2/ε) · ln(|X |/β)⌉. The event Et in Lemma 4.3 occurs
with probability at least 1 − β. Assume Et . By Lemma 4.3, for all x ∈ X such that cx (D) > t
we have c̃x > c̃xn+1 . This implies |hx − cx (D)| ≤ t/2. For the remaining x ∈ X we trivially have
|hx − cx (D)| ≤ t as hx = 0.
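The post-processing at the heart of KeepHeavy — zero out every noisy count not strictly above the (n + 1)-st heaviest — is a pure function of the noisy histogram. A minimal Python sketch of this step (function and variable names are ours, not from the paper; absent bins are treated as having count 0):

```python
def keep_heavy(noisy_counts, n):
    """Zero out every count not strictly above the (n+1)-st heaviest,
    treating bins absent from the dict as having count 0."""
    ordered = sorted(noisy_counts.values(), reverse=True)
    ordered += [0] * max(0, n + 1 - len(ordered))   # pad so ordered[n] exists
    threshold = ordered[n]                          # the (n+1)-st heaviest count
    return {x: c for x, c in noisy_counts.items() if c > threshold}
```

Note that counts tied with the (n + 1)-st heaviest are dropped, matching the strict inequality in the release step.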
However, as described KeepHeavy still requires adding noise to the count of every bin. The
following algorithm M1 : X n × (0, 1] → Hn,n (X ) simulates KeepHeavy by generating a candidate
set of heavy bins from which only the heaviest are released. This candidate set is constructed from
all bins with nonzero true count and a sample representing the bins with a true count of 0 that
have the heaviest noisy counts.
Algorithm 4.4. M1 (D, ε) for D ∈ X n , ε > 0 and |X | ≥ 2n + 1¹
1. Let A = {x ∈ X : cx (D) > 0} and w = |X \ A|.
2. For each x ∈ A, set c̃x to cx (D) + Geo(2/ε) clamped to [0, n].
3. Pick a uniformly random sequence (q0 , . . . , qn ) of distinct elements from X \ A.
4. Sample (c̃q0 , . . . , c̃qn ) from the joint distribution of the order statistics (Z(w) , . . . , Z(w−n) ) where
Z1 , . . . , Zw are i.i.d. and distributed as Geo(2/ε) clamped to [0, n].
5. Sort the elements of A ∪ {q0 , . . . , qn } as x1 , . . . , x|A|+n+1 such that c̃x1 ≥ . . . ≥ c̃x|A|+n+1 .
6. Release h = {(x, c̃x ) : x ∈ {x1 , . . . , xn } and c̃x > c̃xn+1 } ∈ Hn,n (X ).2
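The noise in step 2 can be sampled exactly: if G1 and G2 are i.i.d. geometric variables (number of failures before the first success) with failure probability e^(−ε/2), then G1 − G2 has exactly the two-sided geometric distribution with scale 2/ε. A Python sketch under that standard identity (function names are ours, not from the paper):

```python
import math
import random

def geometric(q, rng):
    """Number of failures before the first success, with failure probability q."""
    u = 1.0 - rng.random()          # uniform on (0, 1]
    return int(math.floor(math.log(u) / math.log(q)))

def two_sided_geometric(eps, rng):
    """Pr[Z = z] proportional to exp(-eps * |z| / 2), sampled exactly as the
    difference of two i.i.d. geometric variables with failure prob e^(-eps/2)."""
    q = math.exp(-eps / 2.0)
    return geometric(q, rng) - geometric(q, rng)

def noisy_count(true_count, eps, n, rng):
    """Step 2 of M1: c_x(D) + Geo(2/eps), clamped to [0, n]."""
    return max(0, min(n, true_count + two_sided_geometric(eps, rng)))
```

This floating-point sketch is for intuition only; the paper's GeoSample works over exact rational probabilities.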
Proposition 4.5. M1 (D, ε) is identically distributed to KeepHeavy(D, ε).
Proof. Let {ĉx }x∈X be the noisy counts set by KeepHeavy(D, ε) and let x̂1 , . . . , x̂n+1 be the sorted
ordering defined by these counts. We have c̃x ∼ ĉx for all x ∈ A and the Zi ’s are identically
distributed to {ĉx }x∈X \A .
The set {(qi , c̃qi ) : 0 ≤ i ≤ n} is identically distributed to the n + 1 bins with heaviest counts of {(x, ĉx )}x∈X \A .
Let the random variable B be the set of the labels of the n + 1 bins with heaviest counts of
{(x, ĉx )}x∈X \A . Therefore,
h = {(x, c̃x ) : x ∈ {x1 , . . . , xn } and c̃x > c̃xn+1 }
= {(x, c̃x ) : x ∈ A ∪ {q0 , . . . , qn } and c̃x > c̃xn+1 }
∼ {(x, ĉx ) : x ∈ A ∪ B and ĉx > ĉx̂n+1 }
= {(x, ĉx ) : x ∈ X and ĉx > ĉx̂n+1 }
which shows that M1 (D, ε) is identically distributed to KeepHeavy(D, ε).
In order to sample from the order statistics used by M1 we construct the following algorithm
similar to GeoSample (Algorithm 3.4) from the previous section.
Proposition 4.6. Let k ∈ N and n, w ∈ N+ . Let v ∈ [n] and i ∈ {1, . . . , w}. Define the i.i.d.
random variables Z1 , . . . , Zw with each identically distributed to Geo(2/ε̃) clamped to [0, n] where
ε̃ = 2 · ln(1 + 2−k ). The following subroutine OrdSample(k, n, v, i) is identically distributed to the
i-th order statistic Z(i) conditioned on Z(i+1) = v. Also, OrdSample(k, n, v, i) has running time
poly(n, k, i).
¹ |X | ≥ 2n + 1 ensures that |X \ A| ≥ n + 1. One can use GeometricMechanism(D, ε) when |X | ≤ 2n.
² If instead we used continuous noise this last step is equivalent to releasing the n heaviest bins. However, in the discrete case, where ties can occur, from the set A ∪ {x1 , . . . , xn } we cannot determine all bins with a count tied for the n-th heaviest as there may be many other noisy counts tied with c̃xn . As a result, we only output the bins with a strictly heavier count than c̃xn+1 .
As in BoundedGeometricMechanism (Algorithm 3.6), we have chosen ε̃ so that the cumulative
distribution function of Geo(2/ε̃) takes rational values with a common denominator of d. Therefore,
the cumulative distribution function of Z(i) conditioned on Z(i+1) = v is also rational and we can
sample from it with finite precision.
Algorithm 4.7. OrdSample(k, n, v, i) for k ∈ N; n, i ∈ N+ and v ∈ [n]
1. Let d = (2^(k+1) + 1) · (2^k + 1)^(n−1) .
2. Define the function
F (z) = { d − 2^(k(z+1)) · (2^k + 1)^(n−1−z)   if 0 ≤ z < n
       { d                                     if z = n
3. Sample U uniformly at random from {1, . . . , F (v)^i }.
4. Using binary search find the smallest z ∈ [v] such that F (z)^i ≥ U .
5. Return z.
Proof. By Lemma 3.5, F (z)/d is the cumulative distribution function of Geo(2/ε̃) clamped to [0, n]. Therefore, by Lemma 2.13,
F_{Z(i) | Z(i+1) = v} (z) = ( (F (z)/d) / (F (v)/d) )^i = (F (z)/F (v))^i
for z ∈ [v]. Let U be drawn uniformly at random from {1, . . . , F (v)^i }. Then, by construction,
Pr[OrdSample(k, n, v, i) ≤ z] = Pr[U ≤ F (z)^i ] = (F (z)/F (v))^i
implying OrdSample(k, n, v, i) ∼ Z(i) | Z(i+1) = v.
The binary search takes O(log n) iterations. Each iteration has running time polynomial in
the bit length of the numbers used. Therefore, this algorithm has running time poly(log d, i) =
poly(n, k, i).
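For concreteness, OrdSample admits a direct transcription in exact integer arithmetic. The sketch below materializes the v + 1 CDF values and searches them, whereas the algorithm as stated binary-searches F directly; all names are ours:

```python
import random
from bisect import bisect_left

def make_F(k, n):
    """F(z)/d is the CDF of Geo(2/eps_tilde) clamped to [0, n] (Lemma 3.5)."""
    d = (2 ** (k + 1) + 1) * (2 ** k + 1) ** (n - 1)

    def F(z):
        if z < 0:
            return 0
        if z < n:
            return d - 2 ** (k * (z + 1)) * (2 ** k + 1) ** (n - 1 - z)
        return d

    return F, d

def ord_sample(k, n, v, i, rng):
    """Sample Z_(i) conditioned on Z_(i+1) = v; its CDF on [0, v] is (F(z)/F(v))^i."""
    F, _ = make_F(k, n)
    u = rng.randrange(1, F(v) ** i + 1)        # uniform on {1, ..., F(v)^i}
    cdf = [F(z) ** i for z in range(v + 1)]
    return bisect_left(cdf, u)                 # smallest z with F(z)^i >= u
```

Because u is uniform on {1, . . . , F(v)^i}, the probability of returning z is exactly (F(z)^i − F(z−1)^i)/F(v)^i.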
Now from M1 we replace sampling from the joint distribution of the order statistics with
iterative calls to OrdSample to get the following algorithm.
Algorithm 4.8. M2 (D, ε) for D ∈ X n , rational ε ∈ (0, 1] and |X | ≥ 2n + 1
1. Let k = ⌈log(2/ε)⌉.
2. Let A = {x ∈ X : cx (D) > 0} and w = |X \ A|.
3. For each x ∈ A, let c̃x = GeoSample(k, n, cx (D)).
4. Pick a uniformly random sequence (q0 , . . . , qn ) of distinct elements from X \ A.
5. Let c̃q0 = OrdSample(k, n, n, w).
6. For each i ∈ {1, . . . , n}, let c̃qi = OrdSample(k, n, c̃qi−1 , w − i).
7. Sort the elements of A ∪ {q0 , . . . , qn } as x1 , . . . , x|A|+n+1 such that c̃x1 ≥ . . . ≥ c̃x|A|+n+1 .
8. Release h = {(x, c̃x ) : x ∈ {x1 , . . . , xn } and c̃x > c̃xn+1 } ∈ Hn,n (X ).
Theorem 4.9. Let rational ε ∈ (0, 1] and ε̃ = 2 · ln(1 + 2^(−⌈log(2/ε)⌉) ) ∈ (4/9 · ε, ε]. M2 (D, ε) is identically distributed to KeepHeavy(D, ε̃). Therefore,
i. M2 (D, ε) is (ε, 0)-differentially private.
ii. M2 (D, ε) has (a, β)-per-query accuracy on counts larger than t for
t = 2⌈(2/ε̃) · ln(2|X |/β)⌉ and a = (2/ε̃) · ln(2/β)
iii. M2 (D, ε) has (a, β)-simultaneous accuracy for
a = 2⌈(2/ε̃) · ln(|X |/β)⌉
Proof. By Proposition 4.6, we have (c̃q0 , . . . , c̃qn ) is identically distributed to the joint distribution
(Z(w) , . . . , Z(w−n) ) used by M1 . Therefore, this algorithm is identically distributed to M1 (D, ε̃)
and, then by Proposition 4.5, identically distributed to KeepHeavy(D, ε̃). Parts i to iii follow from
Theorem 4.2.
This algorithm only has an output of length O(n · (log n + log |X |)). However, its running time
depends polynomially on |X | since sampling the w-th order statistic, c̃q0 , using OrdSample takes time polynomial in w ≥ |X | − n. Indeed, this is necessary since the distribution of the order statistic Z(w) has probabilities that are exponentially small in w.³
4.2 An Efficient Approximation
To remedy the inefficiency of M2 we consider an efficient algorithm that approximates the output
distribution of M2 .
³ Notice Pr[Z(w) = 0] = Pr[Z = 0]^w where Z ∼ Geo(2/ε̃). And Pr[Z = 0]^w = ((e^(ε̃/2) − 1)/(e^(ε̃/2) + 1))^w . So we need to toss Ω(w) coins to sample from this distribution.
Theorem 4.10. There exists an algorithm NonzeroGeometric : X n × (0, 1] × (0, 1) → Hn,n (X )
such that for all input datasets D ∈ X n , rational ε ∈ (0, 1] and δ ∈ (0, 1) such that 1/δ ∈ N
i. ∆ (M2 (D, ε), NonzeroGeometric(D, ε, δ)) ≤ δ.
ii. NonzeroGeometric(D, ε, δ) is (ε, (eε + 1) · δ)-differentially private.
iii. Moreover, the running time of NonzeroGeometric(D, ε, δ) is
poly(N )
where N is the bit length of this algorithm’s input (D, ε and δ).
Note that this algorithm only achieves (ε, δ)-differential privacy. By reducing δ, the algorithm
better approximates M2 , improving accuracy, at the cost of increasing running time (polynomial
in log(1/δ)). This is in contrast to most (ε, δ)-differentially private algorithms such as the stability
based algorithm of Section 5, where one needs n ≥ Ω(log(1/δ)/ε) to get any meaningful accuracy.
Now, we convert NonzeroGeometric to a pure differentially private algorithm by mixing it with
a uniformly random output inspired by a similar technique in [KLN+ 11, CDK17].
Algorithm 4.11. M∗ (D, γ) for D ∈ X n and rational γ ∈ (0, 1)
1. With probability 1 − γ release M′ (D).
2. Otherwise release a uniformly random element of R.
Lemma 4.12. Let M : X n → R be (ε, 0)-differentially private with discrete range R. Suppose
algorithm M′ : X n → R satisfies ∆ (M(D), M′ (D)) ≤ δ for all input datasets D ∈ X n with
parameter δ ∈ [0, 1). Then the algorithm M∗ has the following properties:
i. M∗ (D, γ) is (ε, 0)-differentially private whenever
δ ≤ ((e^ε − 1)/(e^ε + 1)) · (γ/(1 − γ)) · (1/|R|)
ii. M∗ (D, γ) has running time upper bounded by the sum of the bit length to represent γ, the
running time of M′ (D) and the time required to sample a uniformly random element of R.
By taking γ and δ small enough and satisfying the constraint in part i, the algorithm M∗
satisfies pure differential privacy and has nearly the same utility as M (due to having a statistical
distance at most γ + δ from M) while allowing for a possibly more efficient implementation since
we only need to approximately sample from the output distribution of M.
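The mixing construction of Algorithm 4.11 is only a few lines; in this sketch `approx_m` stands in for M′ and `outputs` enumerates the finite range R (both placeholders of ours):

```python
import random

def mix_with_uniform(approx_m, dataset, outputs, gamma, rng):
    """Algorithm 4.11: with probability 1 - gamma run M'(D); otherwise
    release a uniformly random element of the finite range R."""
    if rng.random() < 1.0 - gamma:
        return approx_m(dataset)
    return rng.choice(outputs)
```

The uniform component guarantees every output has probability at least γ/|R|, which is what absorbs the statistical-distance slack δ in the proof of Lemma 4.12.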
Proof of i. For all neighboring datasets D, D ′ ∈ X n and all r ∈ R
Pr[M∗ (D, γ) = r] = γ · (1/|R|) + (1 − γ) · Pr[M′ (D) = r]
≤ γ · (1/|R|) + (1 − γ) · (Pr[M(D) = r] + δ)
≤ γ · (1/|R|) + (1 − γ) · (e^ε · Pr[M(D ′ ) = r] + δ)
≤ γ · (1/|R|) + (1 − γ) · (e^ε · (Pr[M′ (D ′ ) = r] + δ) + δ)
Rearranging terms and using the upper bound on δ yields
Pr[M∗ (D, γ) = r] ≤ e^ε (1 − γ) · Pr[M′ (D ′ ) = r] + γ · (1/|R|) + (e^ε + 1)(1 − γ) · δ
≤ e^ε (1 − γ) · Pr[M′ (D ′ ) = r] + γ · (1/|R|) + (e^ε − 1) · γ · (1/|R|)
= e^ε · ( (1 − γ) · Pr[M′ (D ′ ) = r] + γ · (1/|R|) )
= e^ε · Pr[M∗ (D ′ , γ) = r]
Proof of ii. This follows directly from the construction of M∗ .
We can apply this lemma to NonzeroGeometric and under reasonable settings of parameters
get accuracy bounds identical to M2 up to constant factors.
Algorithm 4.13. PureNonzeroGeometric(D, ε, β) for D ∈ X n , rational ε ∈ (0, 1] and 1/β ∈ N
1. With probability 1 − β/3 release NonzeroGeometric(D, ε, δ) with
δ = (2^(−⌈log(1/ε)⌉) /3) · (β/3) · (1/|Hn,n (X )|)
2. Otherwise release a uniformly random element of Hn,n (X ).
Theorem 4.14. Let rational ε ∈ (0, 1], ε̃ = 2 · ln(1 + 2^(−⌈log(2/ε)⌉) ) ∈ (4/9 · ε, ε] and 1/β ∈ N.
PureNonzeroGeometric(D, ε, β) has the following properties:
i. PureNonzeroGeometric(D, ε, β) is (ε, 0)-differentially private.
ii. PureNonzeroGeometric(D, ε, β) has (a, β)-per-query accuracy on counts larger than t for
t = 2⌈(2/ε̃) · ln(6|X |/β)⌉ and a = (2/ε̃) · ln(6/β)
iii. PureNonzeroGeometric(D, ε, β) has (a, β)-simultaneous accuracy for
a = 2⌈(2/ε̃) · ln(3|X |/β)⌉
iv. PureNonzeroGeometric(D, ε, β) has running time
poly(N )
where N is the bit length of this algorithm’s input (D ∈ X n , ε and β).
Proof of i. Notice that 1/δ ∈ N (a constraint needed for NonzeroGeometric). Privacy follows from Lemma 4.12 by taking M = M2 , M′ = NonzeroGeometric, γ = β/3 and R = Hn,n (X ) as
δ = (2^(−⌈log(1/ε)⌉) /3) · (β/3) · (1/|Hn,n (X )|) ≤ ((e^ε − 1)/(e^ε + 1)) · ((β/3)/(1 − β/3)) · (1/|Hn,n (X )|)
Proof of ii-iii. For any D ∈ X n and x ∈ X such that cx (D) > t define the set G = {h ∈ Hn,n (X ) :
|hx − cx (D)| ≤ a}. By construction,
Pr[PureNonzeroGeometric(D, ε, β) ∈ G] ≥ Pr[NonzeroGeometric(D, ε, δ) ∈ G] − β/3
where δ is defined in Algorithm 4.13. Notice that δ ≤ β/3. So by Theorem 4.10,
Pr[NonzeroGeometric(D, ε, δ) ∈ G] − β/3 ≥ Pr[M2 (D, ε) ∈ G] − 2 · β/3
And by part ii of Theorem 4.9, we have
Pr[M2 (D, ε) ∈ G] − 2 · β/3 ≥ 1 − β
Similarly, we can bound the simultaneous accuracy by using part iii of Theorem 4.9.
Proof of iv. By Lemma A1.1, it takes poly(n, log |X |) time to sample a uniformly random element of Hn,n (X ) and compute |Hn,n (X )|. And by Theorem 4.10, NonzeroGeometric(D, ε, δ) with
δ = (2^(−⌈log(1/ε)⌉) /3) · (β/3) · (1/|Hn,n (X )|)
has running time poly(N ). So by Lemma 4.12, PureNonzeroGeometric has the desired running time.
4.3 Construction of NonzeroGeometric
We finish this section with the construction of NonzeroGeometric. Notice that M2 passes arguments to OrdSample that result in OrdSample exponentiating an integer, which represents the
numerator of a fraction a/b = F (z)/F (v), to a power i ≥ |X | − n. We want to ensure that numbers
used by OrdSample do not exceed some maximum s.
The following algorithm will approximate s · (a/b)i by using repeated squaring and truncating
each intermediate result to keep the bit length manageable. The following lemma provides a bound
on the error and on the running time.
Proposition 4.15. There is an algorithm ExpApprox(a, b, i, s) such that for all b, i, s ∈ N+ and
a ∈ [b]:
i. ExpApprox(a, b, i, s) is a nondecreasing function in a and ExpApprox (a, b, i, s) = s when a = b.
ii. ExpApprox(a, b, i, s) satisfies the accuracy bound
| ExpApprox (a, b, i, s)/s − (a/b)^i | ≤ 2 · (i/s)
iii. ExpApprox(a, b, i, s) has running time poly(log b, log i, log s).
Proof. The algorithm is defined as follows.
Algorithm 4.16. ExpApprox(a, b, i, s) for b, i, s ∈ N+ and a ∈ [b]
1. If i = 1 return ⌊as/b⌋.
2. Otherwise let r = ExpApprox (a, b, ⌊i/2⌋, s) and return
⌊r^2 /s⌋             if i is even
⌊(a/b) · (r^2 /s)⌋   if i is odd
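A direct Python transcription of this recursion, using integer floor division for each truncation so that no intermediate value exceeds s:

```python
def exp_approx(a, b, i, s):
    """Approximate s * (a/b)^i by repeated squaring, truncating each
    intermediate result with integer floor division (so values stay <= s)."""
    if i == 1:
        return (a * s) // b
    r = exp_approx(a, b, i // 2, s)
    if i % 2 == 0:
        return (r * r) // s
    return (a * r * r) // (b * s)
```

In the odd branch, `(a * r * r) // (b * s)` is the floor of the exact rational (a/b) · (r^2/s), so the truncation error ω satisfies |ω| ≤ 1 as required by the analysis below.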
Proof of i. We proceed by induction. The case when i = 1 is trivial. Let i > 1 and assume that ExpApprox(a, b, j, s) is a nondecreasing function in a for all j < i. Consider a ≤ a′ ≤ b. If i is odd, then
ExpApprox(a, b, i, s) = ⌊(a/b) · (ExpApprox(a, b, ⌊i/2⌋, s)^2 /s)⌋
≤ ⌊(a′ /b) · (ExpApprox(a′ , b, ⌊i/2⌋, s)^2 /s)⌋ = ExpApprox(a′ , b, i, s)
Likewise, the result holds when i is even. Therefore, ExpApprox(a, b, i, s) is a nondecreasing function in a. It is trivial that ExpApprox (a, b, i, s) = s when a = b.
Proof of ii. For ease of notation define EA(i) = ExpApprox (a, b, i, s) as the other parameters do not change in our analysis. By construction
EA(i) = mi · EA(⌊i/2⌋)^2 /s + ωi
where mi = 1 if i is even, mi = a/b if i is odd, and |ωi | ≤ 1. We can now bound the error as
| EA(i)/s − (a/b)^i | ≤ | EA(i)/s − mi · EA(⌊i/2⌋)^2 /s^2 | + | mi · EA(⌊i/2⌋)^2 /s^2 − (a/b)^i |
≤ |ωi |/s + mi · | EA(⌊i/2⌋)/s − (a/b)^⌊i/2⌋ | · ( EA(⌊i/2⌋)/s + (a/b)^⌊i/2⌋ )
≤ 1/s + 2 · | EA(⌊i/2⌋)/s − (a/b)^⌊i/2⌋ |
When i = 1, we have
| EA(1)/s − a/b | = | (as/b + ω1 )/s − a/b | ≤ 1/s
So solving the recurrence gives
| EA(i)/s − (a/b)^i | ≤ 2 · (i/s)
Proof of iii. The running time follows from the observation that a call to ExpApprox (a, b, i, s) makes
at most O(log i) recursive calls with the remaining operations polynomial in the bit lengths of the
numbers used.
Now we can modify OrdSample using ExpApprox to keep the bit lengths of its numbers from
becoming too large, yielding an efficient algorithm whose output distribution is close to that of
OrdSample.
Algorithm 4.17. ApproxOrdSample(k, n, v, i, s) for k, n, i, s ∈ N+ and v ∈ [n]
1. Let d = (2^(k+1) + 1) · (2^k + 1)^(n−1) .
2. Define the function
F (z) = { d − 2^(k(z+1)) · (2^k + 1)^(n−1−z)   if 0 ≤ z < n
       { d                                     if z = n
3. Sample U uniformly from {1, . . . , s}.
4. Using binary search find the smallest z ∈ [v] such that
ExpApprox (F (z), F (v), i, s) ≥ U
5. Return z.
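Putting the truncated exponentiation together with the inverse-CDF search gives a sketch of ApproxOrdSample. To keep the block self-contained we restate ExpApprox (Algorithm 4.16); this version materializes the v + 1 approximate CDF values rather than binary-searching, and all names are ours:

```python
import random
from bisect import bisect_left

def exp_approx(a, b, i, s):
    """Truncated repeated squaring: approximates s * (a/b)^i (Algorithm 4.16)."""
    if i == 1:
        return (a * s) // b
    r = exp_approx(a, b, i // 2, s)
    return (r * r) // s if i % 2 == 0 else (a * r * r) // (b * s)

def approx_ord_sample(k, n, v, i, s, rng):
    """Algorithm 4.17: OrdSample with ExpApprox in place of exact powers."""
    d = (2 ** (k + 1) + 1) * (2 ** k + 1) ** (n - 1)

    def F(z):
        return d if z == n else d - 2 ** (k * (z + 1)) * (2 ** k + 1) ** (n - 1 - z)

    u = rng.randrange(1, s + 1)    # uniform on {1, ..., s}
    cdf = [exp_approx(F(z), F(v), i, s) for z in range(v + 1)]
    return bisect_left(cdf, u)     # smallest z with ExpApprox(F(z), F(v), i, s) >= u
```

Since ExpApprox(F(v), F(v), i, s) = s, the search always terminates with some z ∈ [v].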
Proposition 4.18. Let k, n, i, s ∈ N+ and v ∈ [n]. Then
∆(OrdSample(k, n, v, i), ApproxOrdSample(k, n, v, i, s)) ≤ 2 · (n + 1) · (i/s)
In addition, ApproxOrdSample(k, n, v, i, s) has a running time of poly(k, n, log i, log s).
In addition, ApproxOrdSample(k, n, v, i, s) has a running time of poly(k, n, log i, log s).
Proof. Let Y ∼ OrdSample(k, n, v, i) and Ỹ ∼ ApproxOrdSample(k, n, v, i, s). Let z ∈ [v]. Then
Pr[Y = z] = (F (z)/F (v))^i − (F (z − 1)/F (v))^i
where we define F (−1) = 0. By part i of Proposition 4.15, ExpApprox(F (z), F (v), i, s) is a nondecreasing function in z. Therefore,
Pr[Ỹ = z] = ExpApprox(F (z), F (v), i, s)/s − ExpApprox(F (z − 1), F (v), i, s)/s
Therefore, by the triangle inequality and part ii of Proposition 4.15
| Pr[Y = z] − Pr[Ỹ = z] | ≤ Σ_{z′ = z−1}^{z} | ExpApprox(F (z′ ), F (v), i, s)/s − (F (z′ )/F (v))^i | ≤ 4 · (i/s)
Summing over z ∈ [v] yields the desired bound on total variation distance (Definition 2.9).
The running time is dominated by the O(log n) calls to ExpApprox(F (z), F (v), i, s). By part iii of Proposition 4.15, each call takes time
poly(log F (v), log i, log s) = poly(k, n, log i, log s)
as log F (v) ≤ log d = O(nk). So overall ApproxOrdSample has the desired running time.
Because M2 samples from a joint distribution obtained by iterated calls to OrdSample we must
also consider the accumulated distance between iterated calls to OrdSample and iterated calls to
ApproxOrdSample.
Corollary 4.19. Let k, n, w, s ∈ N+ with w > n. Consider the following random variables:
• Y0 ∼ OrdSample(k, n, n, w)
• Yj ∼ OrdSample(k, n, Yj−1 , w − j) for each j ∈ {1, . . . , n}
• Ỹ0 ∼ ApproxOrdSample(k, n, n, w, s)
• Ỹj ∼ ApproxOrdSample(k, n, Ỹj−1 , w − j, s) for each j ∈ {1, . . . , n}
Then
∆( (Y0 , . . . , Yn ), (Ỹ0 , . . . , Ỹn ) ) ≤ 2 · (n + 1)^2 · (w/s)
Proof. By part iii of Lemma 2.10 and Proposition 4.18,
∆( (Y0 , . . . , Yn ), (Ỹ0 , . . . , Ỹn ) ) ≤ ∆( Y0 , Ỹ0 ) + Σ_{j=1}^{n} ∆( Yj | Yj−1 , Ỹj | Ỹj−1 ) ≤ 2 · (n + 1)^2 · (w/s)
where
∆( Yj | Yj−1 , Ỹj | Ỹj−1 ) = max_{x∈[n], S⊆[n]} | Pr[Yj ∈ S | Yj−1 = x] − Pr[Ỹj ∈ S | Ỹj−1 = x] |
o
We are ready to state the mechanism NonzeroGeometric and show it satisfies Theorem 4.10.
It is identical to M2 except we replace calls to OrdSample with calls to ApproxOrdSample.
Algorithm 4.20. NonzeroGeometric(D, ε, δ) for D ∈ X n , rational ε ∈ (0, 1], 1/δ ∈ N and
|X | ≥ 2n + 1
1. Let k = ⌈log(2/ε)⌉ and s = 2 · (n + 1)^2 · |X |/δ.
2. Let A = {x ∈ X : cx (D) > 0} and let w = |X \ A|.
3. For each x ∈ A, let c̃x = GeoSample(k, n, cx (D)).
4. Pick a uniformly random sequence (q0 , . . . , qn ) of distinct elements from X \ A.
5. Let c̃q0 = ApproxOrdSample(k, n, n, w, s).
6. For each i ∈ {1, . . . , n}, let c̃qi = ApproxOrdSample(k, n, c̃qi−1 , w − i, s).
7. Sort the elements of A ∪ {q0 , . . . , qn } as x1 , . . . , x|A|+n+1 such that c̃x1 ≥ . . . ≥ c̃x|A|+n+1 .
8. Release h = {(x, c̃x ) : x ∈ {x1 , . . . , xn } and c̃x > c̃xn+1 } ∈ Hn,n (X ).
Theorem 4.10 (restated). The algorithm NonzeroGeometric(D, ε, δ) satisfies for all input datasets
D ∈ X n , rational ε ∈ (0, 1] and δ ∈ (0, 1) such that 1/δ ∈ N
i. ∆ (M2 (D, ε), NonzeroGeometric(D, ε, δ)) ≤ δ.
ii. NonzeroGeometric(D, ε, δ) is (ε, (eε + 1) · δ)-differentially private.
iii. Moreover, the running time of NonzeroGeometric(D, ε, δ) is
poly(N )
where N is the bit length of this algorithm’s input (D, ε and δ).
Proof of i. Let M∗2 : X n × (0, 1] → Hn,2n+1 (X ) be the algorithm M2 except, instead of releasing only the heaviest bins, M∗2 releases the bins for all elements of A ∪ {q0 , . . . , qn } (i.e. M∗2 releases (y, c̃y ) for all y ∈ A and (qi , c̃qi ) for all i ∈ [n]). Similarly, we define NonzeroGeometric∗ with respect to NonzeroGeometric.
Notice that M∗2 and NonzeroGeometric∗ have the same distribution over the bins with nonzero true count. Only on the bins with counts sampled using OrdSample and ApproxOrdSample respectively do their output distributions differ. As a result, we can apply Corollary 4.19 to the output distributions of M∗2 and NonzeroGeometric∗ . So for all D ∈ X n
∆ (M∗2 (D, ε), NonzeroGeometric∗ (D, ε, δ)) ≤ 2 · (n + 1)^2 · (w/s) ≤ δ
Now we consider the effect of keeping the heaviest counts. Define T : Hn,2n+1 (X ) → Hn,n (X ) to be the function that sets counts not strictly larger than the (n + 1)-heaviest count of its input to 0. Notice that T ◦ M∗2 ∼ M2 and T ◦ NonzeroGeometric∗ ∼ NonzeroGeometric. So for all D ∈ X n , by part ii of Lemma 2.10,
∆(M2 (D, ε), NonzeroGeometric(D, ε, δ)) = ∆(T (M∗2 (D, ε)), T (NonzeroGeometric∗ (D, ε, δ)))
≤ ∆(M∗2 (D, ε), NonzeroGeometric∗ (D, ε, δ))
≤ δ
Proof of ii. Let D and D ′ be neighboring datasets. Let S ⊆ Hn,n (X ). By the previous part and
part i of Theorem 4.9,
Pr[NonzeroGeometric(D, ε, δ) ∈ S] ≤ Pr[M2 (D, ε) ∈ S] + δ
≤ eε · Pr[M2 (D ′ , ε) ∈ S] + δ
≤ eε · Pr[NonzeroGeometric(D ′ , ε, δ) ∈ S] + δ + δ
Therefore, NonzeroGeometric(D, ε, δ) is (ε, (eε + 1) · δ)-differentially private.
Proof of iii. We consider the running time at each step. Construction of the true histogram takes
O(n log |X |) time. And by Proposition 3.3, the at most n calls to GeoSample take poly(log(1/ε), n)
time. Sampling n random bin labels from X takes O(n log |X |) time.
Now, NonzeroGeometric makes n + 1 calls to ApproxOrdSample with no argument exceeding
each term of (k, n, n, |X |, s) respectively. So by Proposition 4.18, these calls take time
poly(log(1/ε), n, log |X |, log s) ≤ poly(log(1/ε), n, log |X |, log(1/δ))
Sorting the at most 2n + 1 elements of A ∪ {q0 , . . . , qn } and then releasing the heaviest counts
takes O(n log n + n log |X |) time.
Therefore, overall NonzeroGeometric has the desired running time.
We have constructed algorithms for releasing a differentially private histogram, both pure and
approximate, with running time polynomial in log |X | and simultaneous accuracy matching that of
GeometricMechanism up to constant factors.
5 Removing the Dependence on Universe Size
When we would like to have accuracy independent of |X |, we can use an approximate differentially private algorithm based on stability techniques [BNS16] (Proposition 2.20). We present a
reformulation of their algorithm using two-sided geometric noise instead of Laplace noise.
Algorithm 5.1. StabilityHistogram(D, ε, b) for D ∈ X n , ε > 0 and b ∈ [n]
1. Let A = {x ∈ X : cx (D) > 0}.
2. For each x ∈ A, set c̃x to cx (D) + Geo(2/ε) clamped to [0, n].
3. Release h = {(x, c̃x ) : x ∈ A and c̃x > b} ∈ Hn,n (X ).
Note that we only release counts for x ∈ X whose true count is nonzero, namely elements in
the set A. Thus, the output length is O(n · (log n + log |X |)). However, releasing the set A is not
(ε, 0)-differentially private because this would distinguish between neighboring datasets: one with
a count of 0 and the other with a count of 1 for some element x ∈ X . Thus, we only release noisy
counts c̃x that exceed a threshold b. If b is large enough, then a count of 1 will only be kept with
small probability, yielding approximate differential privacy.
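The algorithm and the thresholding rationale above can be sketched directly, with the two-sided geometric noise sampled exactly as a difference of two geometric variables (names are ours, not from the paper):

```python
import math
import random
from collections import Counter

def two_sided_geometric(eps, rng):
    """Exact Geo(2/eps) noise as a difference of two geometric variables."""
    q = math.exp(-eps / 2.0)

    def geo():
        return int(math.floor(math.log(1.0 - rng.random()) / math.log(q)))

    return geo() - geo()

def stability_histogram(data, eps, b, rng):
    """Algorithm 5.1: add noise only to nonzero counts, release those above b."""
    n = len(data)
    noisy = {x: max(0, min(n, c + two_sided_geometric(eps, rng)))
             for x, c in Counter(data).items()}        # Counter(data) is the set A
    return {x: c for x, c in noisy.items() if c > b}
```

Only O(n) bins are ever touched, so both the running time and the output length are independent of |X |.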
Theorem 5.2. StabilityHistogram(D, ε, b) has the following properties:
i. StabilityHistogram(D, ε, b) is (ε, δ)-differentially private provided that
b ≥ 1 + (2/ε) · ln(1/δ)
ii. StabilityHistogram(D, ε, b) has (a, β)-per-query accuracy on counts larger than t for
t = b + (2/ε) · ln(1/β) and a = (2/ε) · ln(1/β)
iii. StabilityHistogram(D, ε, b) has (a, β)-simultaneous accuracy for
a = b + (2/ε) · ln(n/β)
Proof of i. Let D and D ′ be neighboring datasets. Let x ∈ X such that cx (D) ≠ cx (D ′ ) and S ⊆ [n]. There are 3 cases to consider:
• cx (D) ≥ 1 and cx (D ′ ) ≥ 1. Then c̃x ∼ cx (D) + Geo(2/ε) and similarly on the neighboring database we have a noisy count c̃′x ∼ cx (D ′ ) + Geo(2/ε). So by the differential privacy of GeometricMechanism (part i of Theorem 3.2) we have
Pr[M(D)x ∈ S] ≤ e^(ε/2) · Pr[M(D ′ )x ∈ S]
• cx (D) = 1 and cx (D ′ ) = 0. Notice Pr[M(D ′ )x ≠ 0] = 0. c̃x is distributed as 1 + Geo(2/ε) clamped to [0, n]. Thus,
Pr[M(D)x ≠ 0] = Pr[c̃x > b] ≤ Pr[ c̃x > 1 + (2/ε) · ln(1/δ) ] ≤ Pr[ Z > (2/ε) · ln(1/δ) ] = δ/(e^(ε/2) + 1) ≤ δ/2
where Z ∼ Geo(2/ε). Therefore,
Pr[M(D)x ∈ S] ≤ Pr[M(D ′ )x ∈ S] + δ/2
• cx (D) = 0 and cx (D ′ ) = 1. This case follows similarly to the previous case by considering
Pr[M(D)x = 0].
Then overall
Pr[M(D)x ∈ S] ≤ e^(ε/2) · Pr[M(D ′ )x ∈ S] + δ/2
Because there are at most two bins on which D and D ′ have differing counts and each count c̃x is
computed independently, by Lemma 2.4, this algorithm is (ε, δ)-differentially private.
Proof of ii. Let x ∈ X such that cx (D) > t. Then x ∈ A. So by part ii of Theorem 3.2, we have
Pr [|c̃x − cx (D)| ≤ a] ≥ 1 − β. |c̃x − cx (D)| ≤ a implies c̃x > t − a = b. Thus, (x, c̃x ) is included in
the output of StabilityHistogram(D, ε, b) giving the desired accuracy.
Proof of iii. Notice that the counts of elements not in A are trivially accurate. Therefore, we only need to consider the counts of elements in A. By part iii of Theorem 3.2,
Pr[ ∀x ∈ A |c̃x − cx (D)| ≤ (2/ε) · ln(|A|/β) ] ≥ 1 − β
The final step can increase the error additively by at most b. Also, |A| ≤ n. Therefore,
Pr[ ∀x ∈ X |hx − cx (D)| ≤ b + (2/ε) · ln(n/β) ] ≥ 1 − β
By using GeoSample (see Proposition 3.3), we can construct a computationally efficient algorithm for releasing a histogram that is (ε, δ)-differentially private with the accuracies matching
Algorithm 5.1 up to constant factors.
Algorithm 5.3. BoundedStabilityHistogram(D, ε, b) for D ∈ X n , rational ε ∈ (0, 1] and b ∈ [n]
1. Let k = ⌈log(2/ε)⌉.
2. Let A = {x ∈ X : cx (D) > 0}.
3. For each x ∈ A, let c̃x = GeoSample(k, n, cx (D)).
4. Release h = {(x, c̃x ) : x ∈ A and c̃x > b} ∈ Hn,n (X ).
Theorem 5.4. Let rational ε ∈ (0, 1], ε̃ = 2 · ln(1 + 2^(−⌈log(2/ε)⌉) ) ∈ (4/9 · ε, ε] and b ∈ [n]. Then BoundedStabilityHistogram(D, ε, b) satisfies the following properties:
i. BoundedStabilityHistogram(D, ε, b) is (ε, δ)-differentially private provided that
b ≥ 1 + (2/ε̃) · ln(1/δ)
ii. BoundedStabilityHistogram(D, ε, b) has (a, β)-per-query accuracy on counts larger than t for
t = b + (2/ε̃) · ln(1/β) and a = (2/ε̃) · ln(1/β)
iii. BoundedStabilityHistogram(D, ε, b) has (a, β)-simultaneous accuracy for
a = b + (2/ε̃) · ln(n/β)
iv. BoundedStabilityHistogram(D, ε, b) has running time
poly(N )
where N is the bit length of this algorithm’s input (D ∈ X n , ε and b).
Proof of i-iii. By Proposition 3.3, GeoSample(k, n, cx(D)) generates a two-sided geometric random
variable with scale parameter 2/ε̃ centered at cx (D) clamped to [0, n]. Therefore, this algorithm is
identically distributed to StabilityHistogram(D, ε̃, b). Parts i-iii follow from Theorem 5.2.
Proof of iv. Construction of the true histogram takes O(n log |X |) time. And by Proposition 3.3,
the at most n calls to GeoSample take poly(log(1/ε), n) time. Because the counts do not exceed n,
the final step takes poly(n, log |X |) time.
Therefore, we have constructed an efficient algorithm for releasing a sparse histogram with
approximate differential privacy.
6 Lower Bounds
In this section, we prove a lower bound on the per-query accuracy of histogram algorithms whose
outputs are restricted to H∞,n′ (X ) (i.e. sparse histograms) using a packing argument [HT10,
BBKN14]. First, for completeness we state and reprove existing lower bounds for per-query accuracy and simultaneous accuracy as well as generalize them to the case of δ > 0.
Theorem 6.1 (following [HT10, BBKN14]). Let M : X n → H∞,|X |(X ) be (ε, δ)-differentially
private and β ∈ (0, 1/2].
i. If M has (a, β)-per-query accuracy and δ/ε ≤ β, then
a ≥ (1/2) · min{ (1/ε) · ln(1/(4β)) − 1, n }
ii. If M has (a, β)-simultaneous accuracy, then
a ≥ (1/2) · min{ (1/ε) · log((|X | − 1)/(4β)) − 1, (1/ε) · log(ε/(4δ)) − 1, n }
Proof of i. Assume a < n/2. Let x, x0 ∈ X such that x ≠ x0 . Define the dataset D0 ∈ X n such that all rows are x0 . And define the dataset D such that the first m = ⌊2a⌋ + 1 rows are x and the remaining n − m rows are x0 . Notice that Pr[|M(D)x − cx (D)| > a] ≤ β by the (a, β)-per-query accuracy of M. By Lemma 2.3 and the fact that cx (D) > 2a while cx (D0 ) = 0,
Pr[|M(D)x − cx (D)| > a] ≥ e^(−mε) · Pr[|M(D0 )x − cx (D)| > a] − δ/ε
≥ e^(−mε) · Pr[|M(D0 )x − cx (D0 )| ≤ a] − δ/ε
≥ e^(−mε) · (1 − β) − δ/ε
Therefore,
e^(−(2a+1)·ε) ≤ (1/(1 − β)) · (β + δ/ε) ≤ 4β
Proof of ii. Assume a < n/2. Let x0 ∈ X . For each x ∈ X define the dataset D (x) ∈ X n such that the first m = ⌊2a⌋ + 1 rows are x and the remaining n − m rows are x0 . For all x ∈ X , let
Gx = {h ∈ H∞,|X | (X ) : ∀x′ ∈ X |hx′ − cx′ (D (x) )| ≤ a}
By Lemma 2.3, for all x ∈ X
Pr[M(D (x0 ) ) ∈ Gx ] ≥ e^(−mε) · Pr[M(D (x) ) ∈ Gx ] − δ/ε ≥ e^(−mε) · (1 − β) − δ/ε
Notice that Pr[M(D (x0 ) ) ∉ Gx0 ] ≤ β and {Gx }x∈X is a collection of disjoint sets. Then
Pr[M(D (x0 ) ) ∉ Gx0 ] ≥ Σ_{x∈X : x≠x0} Pr[M(D (x0 ) ) ∈ Gx ] ≥ (|X | − 1) · ( e^(−mε) · (1 − β) − δ/ε )
Therefore,
e^(−(2a+1)·ε) ≤ (1/(1 − β)) · ( β/(|X | − 1) + δ/ε )
which implies the desired lower bound.
We now state our lower bound over sparse histograms.
Theorem 6.2. Let M : X n → H∞,n′ (X ) be (ε, δ)-differentially private with (a, β)-per-query accuracy with β ∈ (0, 1/2] and δ/ε ≤ β. Then
a ≥ (1/2) · min{ (1/(2ε)) · ln(|X |/(16βn′ )) − 1, (1/(2ε)) · ln(ε/(16βδ)) − 1, n }
The histogram algorithms of Sections 4 and 5 achieve (O(log(1/β)/ε), β)-per-query accuracy on
large enough counts. However, on smaller counts we can only guarantee (a, β)-per-query accuracy
with a = O(log(|X |/β)/ε) and a = O(log(1/(βδ))/ε) (by taking threshold b = O(log(1/δ)/ε))
for algorithms from Sections 4 and 5 respectively. Theorem 6.2 shows these bounds are the best
possible when |X | ≥ poly(n′ ) and δ ≤ poly(1/ε).
Proof. Assume a < n/2. Let x0 ∈ X . For each x ∈ X define the dataset D (x) ∈ X n such that the first m = ⌈2a⌉ rows are x and the remaining n − m rows are x0 . By definition of (a, β)-per-query accuracy and the fact that cx (D (x) ) ≥ 2a, we have
Pr[ M(D (x) )x ≥ a ] ≥ Pr[ |M(D (x) )x − cx (D (x) )| ≤ a ] ≥ 1 − β
Then, by Lemma 2.3 and that D (x) is at distance at most m from D (x0 ) , we have
Pr[ M(D (x0 ) )x ≥ a ] ≥ (1 − β) · e^(−mε) − δ/ε
Thus, by linearity of expectations
E[ |{x ∈ X : M(D (x0 ) )x ≥ a}| ] ≥ |X | · ( (1 − β) · e^(−mε) − δ/ε )
On the other hand, as M(D (x0 ) ) ∈ H∞,n′ (X ) we have
E[ |{x ∈ X : M(D (x0 ) )x ≥ a}| ] ≤ n′
Therefore,
e^(−⌈2a⌉·ε) ≤ (1/(1 − β)) · ( n′ /|X | + δ/ε )
which along with ⌈2a⌉ ≤ 2a + 1 implies the lower bound of
a ≥ (1/2) · min{ (1/ε) · ln(|X |/(4n′ )) − 1, (1/ε) · ln(ε/(4δ)) − 1, n }
Therefore, along with part i of Theorem 6.1, we have
a ≥ (1/2) · min{ max{ min{ (1/ε) · ln(|X |/(4n′ )) − 1, (1/ε) · ln(ε/(4δ)) − 1 }, (1/ε) · ln(1/(4β)) − 1 }, n }
≥ (1/2) · min{ (1/2) · ((1/ε) · ln(1/(4β)) − 1) + (1/2) · min{ (1/ε) · ln(|X |/(4n′ )) − 1, (1/ε) · ln(ε/(4δ)) − 1 }, n }
≥ (1/2) · min{ (1/(2ε)) · ln(|X |/(16βn′ )) − 1, (1/(2ε)) · ln(ε/(16βδ)) − 1, n }
7 Better Per-Query Accuracy via Compact, Non-Sparse Representations
In this section, we present a histogram algorithm whose running time is poly-logarithmic in |X |, but,
unlike Algorithm 4.13, is able to achieve (a, β)-per-query accuracy with a = O(log(1/β)/ε). Our
histogram algorithm will output a histogram from a properly chosen family. This family necessarily
contains histograms that have many nonzero counts to avoid the lower bound of Theorem 6.2.
Lemma 7.1. Let k ∈ N, n ∈ N+ and Z ∼ GeoSample(k, n, 0). There exists a multiset of histograms
Gk,n (X ) satisfying:
i. Let h be drawn uniformly at random from Gk,n (X ). For all x ∈ X and c ∈ [n]
e^(−(2/3)·2^(−k+1)) · Pr[Z = c] ≤ Pr[hx = c] ≤ e^((1/3)·2^(−k+1)) · Pr[Z = c]
ii. Let h be drawn uniformly at random from Gk,n (X ). For all x ∈ X and a ∈ [n]
Pr[hx ≤ a] ≥ Pr[Z ≤ a]
iii. Let h be drawn uniformly at random from Gk,n (X ). For all B ⊆ X such that |B| ≤ n + 1 and c ∈ [n]^B
Pr[∀x ∈ B hx = cx ] = Π_{x∈B} Pr[hx = cx ]
iv. For all h ∈ Gk,n , the histogram h can be represented by a string of length poly(k, n, log |X |)
and given this representation for all x ∈ X the count hx can be evaluated in time
poly (k, n, log |X |)
v. For all A ⊆ X such that |A| ≤ n and c ∈ [n]A sampling a histogram h uniformly at random
from {h′ ∈ Gk,n (X ) : ∀x ∈ A h′x = cx } can be done in time
poly (k, n, log |X |)
Parts i and ii state that when a histogram is sampled uniformly at random from Gk,n (X ), the marginal distribution of each count is close to a two-sided geometric distribution centered at 0 clamped to [0, n].
Proof. (Construction) Let d = (2^(k+1) + 1) · (2^k + 1)^(n−1) . We would like to construct an (n + 1)-wise independent hash family consisting of functions p : X → {1, . . . , d}, i.e. for all x ∈ X , p(x) is uniformly distributed over {1, . . . , d} and for all distinct x1 , . . . , xn+1 ∈ X , the random variables p(x1 ), . . . , p(xn+1 ) are independent.
Given any function p in this family we can construct a histogram by using p(x) as the randomness for evaluating the noisy count of x via inverse transform sampling in a similar manner to Algorithm 3.4, as the marginal distribution of p(x) is uniform over {1, . . . , d}.
The set of all polynomials of degree at most n over a finite field Fq is an (n + 1)-wise independent hash family. Ideally, we would have |X | ≤ d and take q = d. In this case, we can map X to a subset of Fq and use a bijection between {1, . . . , d} and Fq to get the desired family of functions.
However, |X | may be larger than d, and d is not a prime power. Therefore, we must pick a large enough field Fq so that |X | ≤ q and so that, when mapping Fq to {1, . . . , d}, the resulting marginal distributions are approximately uniform. So let
m = max{ ⌈log |X |⌉ , ⌈log(3 · 2^(k−1) · d)⌉ }
and define G′k,n (X ) as the family of polynomials over the finite field F2^m with degree at most n. We construct a histogram h ∈ Gk,n (X ) for each polynomial ph ∈ G′k,n (X ) by taking
hx = min{ z ∈ [n] : F (z) ≥ (T (ph (T −1 (x))) mod d) + 1 }
for all x ∈ X where T : F2^m → [2^m − 1] is a bijection such that T and T −1 have running time poly(m) and
F (z) = { 0                                      if z < 0
       { d − 2^(k(z+1)) · (2^k + 1)^(n−1−z)    if 0 ≤ z < n
       { d                                      if z = n
as defined in Algorithm 3.4 for c = 0. Notice that if h is drawn uniformly at random from Gk,n (X ), then ph is drawn uniformly at random from G′k,n (X ).
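As an illustration of this standard hash-family construction, the sketch below works over a prime field F_p instead of F_{2^m} (a simplification of ours): a uniformly random polynomial of degree at most n, evaluated by Horner's rule, has (n + 1)-wise independent values.

```python
import random

def random_poly(degree, p, rng):
    """Uniformly random polynomial of degree at most `degree` over F_p,
    as a coefficient list [a_0, a_1, ..., a_degree]."""
    return [rng.randrange(p) for _ in range(degree + 1)]

def eval_poly(coeffs, x, p):
    """Horner evaluation of the polynomial at x, modulo the prime p."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc
```

For degree 1 over F_3 one can check exhaustively that the pair (h(0), h(1)) is uniform over all 9 possibilities, the defining property of pairwise independence.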
Proof of i. Let Z ∼ GeoSample(k, n, 0). For all x ∈ X and v ∈ [n], we have

Pr[h_x = v] = Pr[ min{z ∈ [n] : F(z) ≥ (T(p_h(T^{−1}(x))) mod d) + 1} = v ]
= Pr[ F(v − 1) < (T(p_h(T^{−1}(x))) mod d) + 1 ≤ F(v) ]
= (1/2^m) · |{y ∈ F_{2^m} : F(v − 1) < (T(y) mod d) + 1 ≤ F(v)}|
≤ (1/2^m) · (2^m/d + 1) · (F(v) − F(v − 1))
= (1 + d/2^m) · Pr[Z = v]    (by Lemma 3.5)
≤ e^{d/2^m} · Pr[Z = v].

Similarly, because d/2^m ≤ 2/3,

Pr[h_x = v] ≥ (1/2^m) · (2^m/d − 1) · (F(v) − F(v − 1))
= (1 − d/2^m) · Pr[Z = v]
≥ e^{−2·d/2^m} · Pr[Z = v].

Now, part i follows as d/2^m ≤ (1/3) · 2^{−k+1} by choice of m.
Proof of ii. Pick q ∈ N and r ∈ [d − 1] such that 2^m = q · d + r. We proceed by splitting the probability Pr[h_x ≤ a] based on whether or not the integer value of its randomness lies within one of the q intervals of length d. Let U be drawn uniformly at random from [2^m], let D be drawn uniformly at random from {1, . . . , d}, and let Z ∼ GeoSample(k, n, 0). For all x ∈ X and a ∈ [n],

Pr[h_x ≤ a] = Pr[ min{z ∈ [n] : F(z) ≥ (T(p_h(T^{−1}(x))) mod d) + 1} ≤ a ]
= Pr[(T(p_h(T^{−1}(x))) mod d) + 1 ≤ F(a)]    (by monotonicity of F)
= Pr[(U mod d) + 1 ≤ F(a)]
= (q·d/2^m) · Pr[D ≤ F(a)] + (r/2^m) · Pr[D ≤ F(a) | D ≤ r]
≥ (q·d/2^m) · Pr[D ≤ F(a)] + (r/2^m) · Pr[D ≤ F(a)]
= Pr[Z ≤ a].
Proof of iii. Because G′_{k,n}(X) is an (n + 1)-wise independent hash family, for all B ⊆ X such that |B| ≤ n + 1 and c ∈ [n]^B,

Pr[∀x ∈ B, h_x = c_x] = Pr[∀x ∈ B, F(c_x − 1) < (T(p_h(T^{−1}(x))) mod d) + 1 ≤ F(c_x)]
= ∏_{x∈B} Pr[F(c_x − 1) < (T(p_h(T^{−1}(x))) mod d) + 1 ≤ F(c_x)]
= ∏_{x∈B} Pr[h_x = c_x].
Proof of iv. F_{2^m} can be represented by an irreducible polynomial of degree m over F_2, encoded as a binary string of length m. Likewise, every element of F_{2^m} can be represented by a polynomial of degree at most m − 1 over F_2 (requiring m bits). This encoding defines an efficient bijection T between F_{2^m} and [2^m − 1] by also interpreting the string as the binary representation of an element of [2^m − 1].
For all h ∈ G_{k,n}(X), h can be represented by k, n, the coefficients of p_h, and a description of the field F_{2^m}. This representation can be encoded in poly(k, n, log |X|) bits.
Now, given this encoding, evaluation of p_h(x) can be done in poly(m) = poly(k, n, log |X|) time. And computing

h_x = min{ z ∈ [n] : F(z) ≥ (T(p_h(T^{−1}(x))) mod d) + 1 }

to get an approximate count takes poly(k, n, log |X|) time.
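For illustration, the m-bit encoding just described also supports cheap field multiplication via carry-less (XOR) multiplication followed by polynomial reduction. The sketch below hard-codes a tiny field (F_8 with the irreducible x^3 + x + 1) as a demonstration assumption, not the field used by the algorithm:

```python
def gf_mul(a, b, irreducible, m):
    """Multiply two elements of F_{2^m}, each encoded as an m-bit integer whose
    bits are polynomial coefficients over F_2, reducing the carry-less product
    modulo the given degree-m irreducible polynomial (encoded in m+1 bits)."""
    prod = 0
    while b:                                  # carry-less (XOR) multiplication
        if b & 1:
            prod ^= a
        a <<= 1
        b >>= 1
    for shift in range(prod.bit_length() - 1, m - 1, -1):  # reduce to degree < m
        if prod >> shift & 1:
            prod ^= irreducible << (shift - m)
    return prod
```

In F_8 = F_2[x]/(x^3 + x + 1), for example, x · x^2 = x^3 ≡ x + 1, and multiplication by any nonzero element permutes the field.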
Proof of v. Let A ⊆ X such that |A| ≤ n and c ∈ [n]^A. We can sample h, given by coefficients a_0, . . . , a_n ∈ F_{2^m}, uniformly at random from G_{k,n}(X) subject to h_x = c_x for all x ∈ A with the following steps.

1. Construct F_{2^m} by finding an irreducible polynomial of degree m over F_2.

2. For each x ∈ A, sample ũ_x uniformly at random from the set

S_x = {y ∈ F_{2^m} : c_x = min{z ∈ [n] : F(z) ≥ (T(y) mod d) + 1}}.

We can sample from this set by observing that

S_x = {y ∈ F_{2^m} : F(c_x − 1) < (T(y) mod d) + 1 ≤ F(c_x)}
= {y ∈ F_{2^m} : ∃ i, r ∈ N such that T(y) = i·d + r and F(c_x − 1) < r + 1 ≤ F(c_x)}
= T^{−1}({i·d + r ∈ [2^m − 1] : i ∈ N and F(c_x − 1) < r + 1 ≤ F(c_x)})
= T^{−1}( ⋃_{i=0}^{⌊2^m/d⌋} {i·d + r ∈ [2^m − 1] : F(c_x − 1) < r + 1 ≤ F(c_x)} ).

3. Let B ⊆ X such that A ⊆ B and |B| = n + 1. For all x ∈ B \ A, sample ũ_x uniformly at random from F_{2^m}.

4. Take the coefficients a_0, . . . , a_n ∈ F_{2^m} to be the coefficients of the interpolating polynomial over F_{2^m} through the set of points (x, ũ_x) for all x ∈ B. This polynomial exists and is unique.
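Steps 2–4 amount to fixing the polynomial's values on A, padding with random values, and interpolating. An illustrative sketch over a small prime field (the proof uses F_{2^m}; the prime field and the helper names are this sketch's simplifying assumptions):

```python
import random

def poly_eval(coeffs, x, q):
    """Horner evaluation of a polynomial (coefficients low degree first) over F_q."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % q
    return acc

def interpolate(points, q):
    """Lagrange interpolation over a prime field F_q: coefficients (low degree
    first) of the unique polynomial of degree < len(points) through the points."""
    coeffs = [0] * len(points)
    for xi, yi in points:
        basis = [1]                      # coefficients of prod_{j != i} (x - x_j)
        denom = 1
        for xj, _ in points:
            if xj == xi:
                continue
            nxt = [0] * (len(basis) + 1)
            for deg, c in enumerate(basis):
                nxt[deg] = (nxt[deg] - xj * c) % q
                nxt[deg + 1] = (nxt[deg + 1] + c) % q
            basis = nxt
            denom = denom * (xi - xj) % q
        scale = yi * pow(denom, q - 2, q) % q      # Fermat inverse (q prime)
        for deg, c in enumerate(basis):
            coeffs[deg] = (coeffs[deg] + scale * c) % q
    return coeffs

def sample_conditioned(fixed, extra_xs, q):
    """Sample a random polynomial of degree < len(fixed) + len(extra_xs) that
    agrees with the fixed (x, y) pairs, by padding with random values."""
    pts = list(fixed) + [(x, random.randrange(q)) for x in extra_xs]
    return interpolate(pts, q)
```

The returned polynomial always passes through the fixed points, while the padding points make it uniform among all such polynomials.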
We first prove correctness. Notice this procedure can only return a histogram h ∈ G_{k,n}(X) such that h_x = c_x for all x ∈ A. Let h be any such histogram. Then

Pr[sampling h] = Pr[(a_0, . . . , a_n) are the coefficients of p_h]
= Pr[∀x ∈ B, ũ_x = p_h(x)]
= ∏_{x∈B} Pr[ũ_x = p_h(x)]
= ( ∏_{x∈A} 1/|S_x| ) · (1/2^m)^{|B\A|}.

Therefore, these steps output an h ∈ G_{k,n}(X) uniformly at random such that h_x = c_x for all x ∈ A.
Construction of F_{2^m} can be done in O(m^4) time [Sho90]. Integer operations are limited to numbers not exceeding 2^m, and polynomial interpolation takes O(n^2) · poly(m) time [BP70]. Therefore, this procedure runs in time poly(k, n, log |X|).
Algorithm 3.6 (BoundedGeometricMechanism) releases, for each bin, a count with marginal distribution GeoSample(k, n, c_x(D)), and its counts are independent. By using the previously defined family, the following algorithm has essentially the same marginal distributions as Algorithm 3.6, but its counts are only (n + 1)-wise independent. This yields an efficient algorithm, as we only need a small number of random bits (polynomial in log |X|) compared to the amount required for all counts to be independent (linear in |X|).
Algorithm 7.2. CompactHistogram(D, ε) for D ∈ X^n and rational ε ∈ (0, 1]
1. Let k = ⌈log(4/ε)⌉.
2. Let A = {x ∈ X : cx (D) > 0}.
3. For each x ∈ A, let c̃x = GeoSample(k, n, cx (D)).
4. Release h drawn uniformly at random from {h′ ∈ Gk,n (X ) : ∀x ∈ A h′x = c̃x }.
Theorem 7.3. Let rational ε ∈ (0, 1] and ε̃ = 2 · ln(1 + 2^{−⌈log(4/ε)⌉}) ∈ (2ε/9, ε/2]. Then
CompactHistogram(D, ε) has the following properties:
i. CompactHistogram(D, ε) is (ε, 0)-differentially private.
ii. CompactHistogram(D, ε) has (a, β)-per-query accuracy for

a = (2/ε̃) · ln(1/β)

iii. CompactHistogram(D, ε) has (a, β)-simultaneous accuracy for

a = (2/ε̃) · ln(|X|/β)
iv. CompactHistogram(D, ε) has running time poly(N), where N is the bit length of this algorithm's input (D ∈ X^n and ε).
Proof of i. Let D, D′ ∈ X^n be neighboring datasets. Let A = {x ∈ X : c_x(D) > 0} and, similarly, A′ = {x ∈ X : c_x(D′) > 0}. Let B = A ∪ A′; notice |B| ≤ n + 1. Let g ∈ G_{k,n}(X), and let h be drawn uniformly at random from G_{k,n}(X). Then

Pr[CompactHistogram(D, ε) = g]
= Pr[∀x ∈ B, CompactHistogram(D, ε)_x = g_x] · Pr[h = g | ∀x ∈ B, h_x = g_x]
= ( ∏_{x∈A} Pr[Z_x(D) = g_x] ) · ( ∏_{x∈B\A} Pr[h_x = g_x] ) · Pr[h = g | ∀x ∈ B, h_x = g_x]

where Z_x(D) ∼ GeoSample(k, n, c_x(D)). Now, because |B \ A| ≤ 1 and |B \ A′| ≤ 1, along with Proposition 3.3 and part i of Lemma 7.1,
Pr[CompactHistogram(D, ε) = g] / Pr[h = g | ∀x ∈ B, h_x = g_x]
= ( ∏_{x∈A} Pr[Z_x(D) = g_x] ) · ( ∏_{x∈B\A} Pr[h_x = g_x] )
≤ e^{(1/3)·2^{−k+1}} · ∏_{x∈B} Pr[Z_x(D) = g_x]
≤ e^{ε̃+(1/3)·2^{−k+1}} · ∏_{x∈B} Pr[Z_x(D′) = g_x]
≤ e^{ε̃+(1/3)·2^{−k+1}} · ( ∏_{x∈A′} Pr[Z_x(D′) = g_x] ) · e^{(2/3)·2^{−k+1}} · ∏_{x∈B\A′} Pr[h_x = g_x]
≤ e^ε · Pr[CompactHistogram(D′, ε) = g] / Pr[h = g | ∀x ∈ B, h_x = g_x]

as ε̃ ≤ ε/2 and 2^{−k+1} ≤ ε/2. Therefore, CompactHistogram(D, ε) is (ε, 0)-differentially private.
Proof of ii. Let h ∼ CompactHistogram(D, ε) and A = {x ∈ X : c_x(D) > 0}. Let x ∈ A. Then, by construction, h_x ∼ GeoSample(k, n, c_x(D)), and (a, β)-per-query accuracy follows from part ii of Theorem 3.7.

Let x ∈ X \ A and let h′ be drawn uniformly at random from G_{k,n}(X). Notice c_x(D) = 0 and |A| ≤ n. By parts ii and iii of Lemma 7.1,

Pr[|h_x| ≤ a] = Pr[h′_x ≤ a | ∀y ∈ A, h′_y = h_y]
= Pr[h′_x ≤ a]
≥ Pr[Z ≤ a]

where Z ∼ GeoSample(k, n, 0). Thus, (a, β)-per-query accuracy follows from part ii of Theorem 3.7.
Proof of iii. A union bound over each x ∈ X, together with the previous part, gives the bound on (a, β)-simultaneous accuracy.

Proof of iv. CompactHistogram(D, ε) makes at most n calls to GeoSample. Therefore, by Proposition 3.3 and part v of Lemma 7.1, we get the desired bound on running time.

We have therefore constructed a histogram algorithm with running time polynomial in the bit length of its input and with accuracy matching, up to constant factors, the lower bounds for releasing private histograms.
Acknowledgments
We thank the Harvard Privacy Tools differential privacy research group, particularly Mark Bun
and Kobbi Nissim, for informative discussions and feedback, and the anonymous TPDP reviewers
for helpful comments.
References
[BBKN14] Amos Beimel, Hai Brenner, Shiva Prasad Kasiviswanathan, and Kobbi Nissim. Bounds on the sample complexity for private learning and private data release. Machine Learning, 94(3):401–437, 2014.

[BLR13] Avrim Blum, Katrina Ligett, and Aaron Roth. A learning theory approach to noninteractive database privacy. Journal of the ACM (JACM), 60(2):12, 2013. URL https://arxiv.org/pdf/1109.2229.pdf.

[BNS16] Mark Bun, Kobbi Nissim, and Uri Stemmer. Simultaneous private learning of multiple concepts. In Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science, pages 369–380. ACM, 2016. URL https://arxiv.org/pdf/1511.08552.pdf.

[BP70] Åke Björck and Victor Pereyra. Solution of Vandermonde systems of equations. Mathematics of Computation, 24(112):893–903, 1970.

[CDK17] Bryan Cai, Constantinos Daskalakis, and Gautam Kamath. Priv'it: Private and sample efficient identity testing. arXiv preprint arXiv:1703.10127, 2017. URL https://arxiv.org/pdf/1703.10127.pdf.

[CKKL12] Mahdi Cheraghchi, Adam Klivans, Pravesh Kothari, and Homin K. Lee. Submodular functions are noise stable. In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1586–1592. Society for Industrial and Applied Mathematics, 2012. URL https://arxiv.org/pdf/1106.0518.pdf.

[CPST11] Graham Cormode, Magda Procopiuc, Divesh Srivastava, and Thanh T. L. Tran. Differentially private publication of sparse data. arXiv preprint arXiv:1103.0825, 2011. URL https://arxiv.org/pdf/1103.0825.pdf.

[CTUW14] Karthekeyan Chandrasekaran, Justin Thaler, Jonathan Ullman, and Andrew Wan. Faster private release of marginals on small databases. In Proceedings of the 5th Conference on Innovations in Theoretical Computer Science, pages 387–402. ACM, 2014. URL https://arxiv.org/pdf/1304.3754.pdf.

[DL09] Cynthia Dwork and Jing Lei. Differential privacy and robust statistics. In Proceedings of the Forty-First Annual ACM Symposium on Theory of Computing, pages 371–380. ACM, 2009.

[DMNS06] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography Conference, pages 265–284. Springer, 2006.

[DNT15] Cynthia Dwork, Aleksandar Nikolov, and Kunal Talwar. Efficient algorithms for privately releasing marginals via convex relaxations. Discrete & Computational Geometry, 53(3):650–673, 2015. URL https://arxiv.org/pdf/1308.1385.pdf.

[GMP16] Ivan Gazeau, Dale Miller, and Catuscia Palamidessi. Preserving differential privacy under finite-precision semantics. Theoretical Computer Science, 655:92–108, 2016. URL https://arxiv.org/pdf/1306.2691.pdf.

[GRS12] Arpita Ghosh, Tim Roughgarden, and Mukund Sundararajan. Universally utility-maximizing privacy mechanisms. SIAM Journal on Computing, 41(6):1673–1693, 2012.

[GRU12] Anupam Gupta, Aaron Roth, and Jonathan Ullman. Iterative constructions and private data release. Theory of Cryptography, pages 339–356, 2012. URL https://arxiv.org/pdf/1107.3731.pdf.

[HRS12] Moritz Hardt, Guy N. Rothblum, and Rocco A. Servedio. Private data release via learning thresholds. In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, pages 168–187. Society for Industrial and Applied Mathematics, 2012. URL https://arxiv.org/pdf/1107.2444.pdf.

[HT10] Moritz Hardt and Kunal Talwar. On the geometry of differential privacy. In Proceedings of the Forty-Second ACM Symposium on Theory of Computing, pages 705–714. ACM, 2010. URL https://arxiv.org/pdf/0907.3754.pdf.

[KLN+11] Shiva Prasad Kasiviswanathan, Homin K. Lee, Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. What can we learn privately? SIAM Journal on Computing, 40(3):793–826, 2011. URL https://arxiv.org/pdf/0803.0924.pdf.

[Knu05] Donald E. Knuth. Combinatorial Algorithms, Part 1, volume 4 of The Art of Computer Programming. Addison-Wesley Professional, 2005.

[Mir12] Ilya Mironov. On significance of the least significant bits for differential privacy. In Proceedings of the 2012 ACM Conference on Computer and Communications Security, pages 650–661. ACM, 2012.

[Sho90] Victor Shoup. New algorithms for finding irreducible polynomials over finite fields. Mathematics of Computation, 54(189):435–447, 1990.

[TUV12] Justin Thaler, Jonathan Ullman, and Salil Vadhan. Faster algorithms for privately releasing marginals. In International Colloquium on Automata, Languages, and Programming, pages 810–821. Springer, 2012. URL https://arxiv.org/pdf/1205.1758.pdf.

[UV11] Jonathan Ullman and Salil P. Vadhan. PCPs and the hardness of generating private synthetic data. In TCC, volume 6597, pages 400–416. Springer, 2011. URL https://pdfs.semanticscholar.org/bf6f/e68b91283787b9ad28494a78b4cead10fa12.pdf.
A1 Generating a Sparse Histogram Uniformly at Random

Being able to efficiently compute |H_{n,n′}(X)| and sample a uniformly random element from H_{n,n′}(X) is needed for an efficient implementation of Algorithm 4.13.

Lemma A1.1. |H_{n,n′}(X)| can be calculated, and a uniformly random element of H_{n,n′}(X) sampled, in time poly(n′, log n, log |X|).

Proof. We show that HistSample(X, n, n′), defined below, efficiently samples a uniformly random element of H_{n,n′}(X) by using a bijection between H_{n,n′}(X) and {1, . . . , |H_{n,n′}(X)|}.
Algorithm A1.2. HistSample(X, n, n′) for n, n′ ∈ N+ with n′ ≤ |X| (here C(a, b) denotes the binomial coefficient)

1. Pick U uniformly at random from {1, . . . , |H_{n,n′}(X)|} = {1, . . . , Σ_{i=0}^{n′} C(|X|, i) · n^i}.

2. Find the smallest ℓ ∈ [n′] such that Σ_{i=0}^{ℓ} C(|X|, i) · n^i ≥ U.

3. Let u′ = U − 1 − Σ_{i=0}^{ℓ−1} C(|X|, i) · n^i ∈ [C(|X|, ℓ) · n^ℓ − 1].

4. Use integer division to find q ∈ [C(|X|, ℓ) − 1] and r ∈ [n^ℓ − 1] such that u′ = q · n^ℓ + r.

5. Map q to a corresponding subset of X of size ℓ specified by sorted elements q_1 < . . . < q_ℓ. Specifically, we can let q_1, . . . , q_ℓ ∈ X be the sequence representing q in the combinatorial number system of degree ℓ, i.e. the unique sequence satisfying

q = C(q_ℓ, ℓ) + . . . + C(q_1, 1).

This sequence can be found greedily: for each j decreasing from ℓ to 1, using binary search, find the largest q_j such that Σ_{i=j}^{ℓ} C(q_i, i) ≤ q [Knu05].

6. Let r_0, . . . , r_{ℓ−1} ∈ [n − 1] be the digits of the base-n representation of r, i.e.

r = r_{ℓ−1} · n^{ℓ−1} + . . . + r_1 · n + r_0.

7. For each i ∈ {1, . . . , ℓ}, release (q_i, r_{i−1} + 1).
Steps 2 and 3 define a bijection mapping U to (ℓ, u′), and step 4 defines a bijection mapping (ℓ, u′) to (ℓ, q, r). In steps 5 and 6, the decompositions of q and r into their respective number systems are bijections. Likewise, there is a bijection mapping (q_1, . . . , q_ℓ, r_0, . . . , r_{ℓ−1}) to an element of H_{n,n′}(X). Therefore, our mapping from {1, . . . , |H_{n,n′}(X)|} to H_{n,n′}(X) is bijective, proving correctness.

Computing |H_{n,n′}(X)| = Σ_{i=0}^{n′} C(|X|, i) · n^i takes poly(n′, log n, log |X|) time. The remaining steps can be done in poly(n′, log n, log |X|) time, as no number used exceeds |H_{n,n′}(X)|, all numbers used are expressible as the sum of at most n′ other numbers, and steps 5–7 each consist of at most n′ iterations.
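The unranking in steps 2–7 can be transcribed directly. The sketch below identifies X with {0, . . . , |X| − 1} and is deterministic given U; the function names are illustrative:

```python
from math import comb

def hist_unrank(U, size_X, n, n_prime):
    """Map U in {1, ..., |H|} to a histogram with at most n_prime nonzero bins,
    each count in {1, ..., n}, following steps 2-7 of HistSample."""
    # Step 2: smallest l with sum_{i <= l} C(|X|, i) * n^i >= U.
    total = prev = 0
    for l in range(n_prime + 1):
        prev = total
        total += comb(size_X, l) * n ** l
        if total >= U:
            break
    # Step 3: offset within the stratum of histograms with exactly l bins.
    u = U - 1 - prev
    # Step 4: split into a subset index q and a counts index r.
    q, r = divmod(u, n ** l)
    # Step 5: unrank q in the combinatorial number system of degree l
    # (greedy, largest binomial coefficient first, giving q_l > ... > q_1).
    subset = []
    for j in range(l, 0, -1):
        qj = j - 1
        while comb(qj + 1, j) <= q:
            qj += 1
        q -= comb(qj, j)
        subset.append(qj)
    # Steps 6-7: base-n digits of r give the counts, shifted to {1, ..., n}.
    return {x: (r // n ** i) % n + 1 for i, x in enumerate(reversed(subset))}
```

For |X| = 4, n = 2, n′ = 2 there are 1 + 8 + 24 = 33 sparse histograms, and the 33 values of U map to 33 distinct histograms, confirming bijectivity on a small instance.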
arXiv:1611.03730v1 [] 10 Nov 2016
Some Properties of the Nil-Graphs of Ideals of Commutative Rings

R. Nikandish^a (corresponding author), F. Shaveisi^b

^a Department of Basic Sciences, Jundi-Shapur University of Technology, P.O. Box 64615-334, Dezful, Iran
^b Department of Mathematics, Faculty of Sciences, Razi University, Kermanshah, Iran
[email protected]
[email protected]
Abstract
Let R be a commutative ring with identity and Nil(R) be the set of nilpotent
elements of R. The nil-graph of ideals of R is defined as the graph AGN (R)
whose vertex set is {I : (0) ≠ I ⊳ R and there exists a non-trivial ideal J such
that IJ ⊆ Nil(R)} and two distinct vertices I and J are adjacent if and only
if IJ ⊆ Nil(R). Here, we study conditions under which AGN (R) is complete or
bipartite. Also, the independence number of AGN (R) is determined, where R is
a reduced ring. Finally, we classify Artinian rings whose nil-graphs of ideals have
genus at most one.
Key Words: Nil-graph; Complete graph; Bipartite graph; Genus; Independence number.
2010 Mathematics Subject Classification: 05C15; 05C69; 13E05; 13E10.

1. Introduction

When one assigns a graph to an algebraic structure, numerous interesting algebraic problems arise from the translation of graph-theoretic parameters such as clique number, chromatic number, diameter, radius, and so on. There are many papers on this topic; see, for example, [5], [8] and [12]. Throughout this paper, all rings are assumed to be non-domain commutative rings with identity. By I(R) (I(R)∗) and Nil(R), we denote the set of all proper (non-trivial) ideals of R and the nil-radical of
R, respectively. The set of all maximal and minimal prime ideals of R are denoted
by Max(R) and Min(R), respectively. The ring R is said to be reduced, if it has no
non-zero nilpotent element.
Let G be a graph. The degree of a vertex x of G is denoted by d(x). The graph
G is said to be r-regular, if the degree of each vertex is r. The complete graph with
n vertices, denoted by Kn , is a graph in which any two distinct vertices are adjacent.
A bipartite graph is a graph whose vertices can be divided into two disjoint parts U
and V such that every edge joins a vertex in U to one in V . It is well-known that a
bipartite graph is a graph that does not contain any odd cycle. A complete bipartite
graph is a bipartite graph in which every vertex of one part is joined to every vertex
of the other part. If the size of one of the parts is 1, then it is said to be a star graph.
A tree is a connected graph without cycles. Let Sk denote the sphere with k handles,
where k is a non-negative integer, that is, Sk is an oriented surface of genus k. The
genus of a graph G, denoted by γ(G), is the minimal integer n such that the graph can
be embedded in Sn . A genus 0 graph is called a planar graph. It is well-known that
γ(Kn) = ⌈(n − 3)(n − 4)/12⌉ for all n ≥ 3, and

γ(Km,n) = ⌈(m − 2)(n − 2)/4⌉ for all n ≥ 2 and m ≥ 2.

For a graph G, the independence number of G is denoted by α(G). For more details about the used terminology of graphs, see [13].
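For later reference, these formulas give γ(K3,7) = γ(K4,5) = 2 (both values are used in the proof of Theorem 19), and a complete graph has genus at most one exactly when it has at most 7 vertices. A trivial helper encoding them (an illustration, not part of the paper):

```python
from math import ceil

def genus_complete(n):
    """gamma(K_n), the genus of the complete graph on n >= 3 vertices."""
    return ceil((n - 3) * (n - 4) / 12)

def genus_complete_bipartite(m, n):
    """gamma(K_{m,n}), the genus of the complete bipartite graph, for m, n >= 2."""
    return ceil((m - 2) * (n - 2) / 4)
```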
We denote the annihilator of an ideal I by Ann(I). Also, the ideal I of R is called an annihilating-ideal if Ann(I) ≠ (0). The notation A(R) is used for the set of all annihilating-ideals of R. By the annihilating-ideal graph of R, AG(R), we mean the graph with vertex set A(R)∗ = A(R) \ {0}, in which two distinct vertices I and J are adjacent if and only if IJ = 0. Some properties of this graph have been studied in [1, 2, 3, 5, 6]. In [12], the authors have introduced another kind of graph, called the nil-graph of ideals. The nil-graph of ideals of R is defined as the graph AGN (R) whose vertex set is {I : (0) ≠ I ⊳ R and there exists a non-trivial ideal J such that IJ ⊆ Nil(R)}, in which two distinct vertices I and J are adjacent if and only if IJ ⊆ Nil(R). Obviously, our definition is slightly different from the one defined by Behboodi and Rakeei in [5], and it is easy to see that the usual annihilating-ideal graph AG(R) is a subgraph of AGN (R). In [12], some basic properties of the nil-graph of ideals have been studied. In this
article, we continue the study of the nil-graph of ideals. In Section 2, necessary and sufficient conditions under which the nil-graph of a ring is complete or bipartite are found. Section 3 is devoted to the study of independent sets in nil-graphs of ideals. In
Section 4, we classify all Artinian rings whose nil-graphs of ideals have genus at most
one.
2. When Is the Nil-Graph of Ideals Complete or Bipartite?
In this section, we study conditions under which the nil-graph of ideals of a commutative ring is complete or complete bipartite. For instance, we show that if R is a Noetherian ring, then AGN (R) is a complete graph if and only if either R is Artinian local or R ≅ F1 × F2, where F1 and F2 are fields. Also, it is proved that if AGN (R) is bipartite, then AGN (R) is complete bipartite. Moreover, if R is non-reduced, then AGN (R) is a star graph and Nil(R) is the unique minimal prime ideal of R.
We start with the following theorem, which can be viewed as a consequence of [12, Theorem 5] (here we prove it independently). Note that it is clear that if R is a reduced ring, then AGN (R) ≅ AG(R).
Theorem 1. Let R be a Noetherian ring. Then AGN (R) is a complete graph if and only if either R is Artinian local or R ≅ F1 × F2, where F1 and F2 are fields.

Proof. First suppose that AGN (R) is complete. If R is reduced, then by [5, Theorem 2.7], R ≅ F1 × F2. Thus we can suppose that Nil(R) ≠ (0). We continue the proof in the following two cases:
Case 1. R is a local ring with the unique maximal ideal m. Since R is non-reduced,
by Nakayama’s Lemma (see [4, Proposition 2.6]), m and m2 are two distinct vertices of
AGN (R). Thus m3 ⊆ Nil(R) and so R is an Artinian local ring.
Case 2. R has at least two maximal ideals. First we show that R has exactly two
maximal ideals. Suppose to the contrary, m, n and p are three distinct maximal ideals
of R. Since AGN (R) is complete, we deduce that mn ⊆ Nil(R) ⊆ p, a contradiction.
Thus R has exactly two maximal ideals, say m and p. Now, we claim that both m
and p are minimal prime ideals. Since m and p are adjacent, we conclude one of the
maximal ideals, say p, is a minimal prime ideal of R. Now, suppose to the contrary, m
properly contains a minimal prime ideal q of R. Since mp ⊆ q, we get a contradiction.
So the claim is proved. Thus R is Artinian. Hence by [4, Theorem 8.7], we have R ≅ R1 × R2, where R1 and R2 are Artinian local rings. By contrary and with no loss of generality, suppose that R1 contains a non-trivial ideal, say I. Then the vertices I × R2 and (0) × R2 are not adjacent, a contradiction. Thus R ≅ F1 × F2, where F1 and F2 are fields.
Conversely, if R ≅ F1 × F2, where F1 and F2 are fields, then it is clear that AGN (R) ≅ K2. Now, suppose that (R, m) is an Artinian local ring. Since m is nilpotent, it follows that AGN (R) is complete.
The following example shows that Theorem 1 does not hold for non-Noetherian
rings.
Example 2. Let R = k[x_i : i ≥ 1]/(x_i^2 : i ≥ 1), where k is a field. Then R is not Artinian and AGN (R) is a complete graph.
Remark 3. Let R be a ring. Every non-trivial ideal contained in Nil(R) is adjacent
to every other vertex of AGN (R). In particular, if R is an Artinian local ring, then
AGN (R) is a complete graph.
The next result shows that nil-graphs all of whose vertices have finite degree are finite graphs.
Theorem 4. If every vertex of AGN (R) has a finite degree, then R has finitely many
ideals.
Proof. First suppose that R is non-reduced. Since d(Nil(R)) < ∞, the assertion follows from Remark 3. Thus we can assume that R is reduced. Choose 0 ≠ x ∈ Z(R). Since d(Rx) < ∞ and Rx is adjacent to every ideal contained in Ann(x), we deduce that Ann(x) is an Artinian R-module. Similarly, one can show that Rx is an Artinian R-module. Now, the R-isomorphism Rx ≅ R/Ann(x) implies that R is an Artinian ring. Now, since R is reduced, [4, Theorem 8.7] implies that R is a direct product of finitely many fields and hence we are done.
The next result gives another condition under which AGN (R) is complete.
Theorem 5. If AGN (R) is an r-regular graph, then AGN (R) is a complete graph.
Proof. If Nil(R) ≠ (0), then by Remark 3, there is nothing to prove. So, suppose that R is reduced. Since AGN (R) is an r-regular graph, Theorem 4 and [4, Theorem 8.7] imply that R ≅ F1 × · · · × Fn, where n ≥ 2 and each Fi is a field. It is not hard to check that every ideal I = I1 × · · · × In of R has degree 2^{n_I} − 1, where n_I = |{i : 1 ≤ i ≤ n and Ii = (0)}|. Let I = F1 × (0) × · · · × (0) and J = F1 × · · · × Fn−1 × (0). Then we have d(I) = 2^{n−1} − 1 and d(J) = 1. The r-regularity of AGN (R) implies that 2^{n−1} − 1 = 1 and so n = 2. Therefore R ≅ F1 × F2, as desired.
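The degree computation in this proof can be checked by brute force. In the sketch below (an illustration, not part of the paper), the nonzero proper ideals of F1 × · · · × Fn are identified with the nonempty proper subsets of indices where the component is nonzero, and adjacency is disjointness of supports:

```python
from itertools import combinations

def ag_degrees(n):
    """Degrees in AG_N(F_1 x ... x F_n) for fields F_i: vertices are the
    nonempty proper subsets of {0, ..., n-1} (supports of nonzero proper
    ideals); two vertices are adjacent iff their supports are disjoint."""
    vertices = [frozenset(s) for k in range(1, n)
                for s in combinations(range(n), k)]
    return {v: sum(1 for w in vertices if w != v and not (v & w))
            for v in vertices}
```

For n = 4, every vertex with n_I zero components indeed has degree 2^{n_I} − 1, and the graph is regular only when n = 2, in line with the conclusion of Theorem 5.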
In the rest of this section, we study bipartite nil-graphs of ideals of rings.
Theorem 6. Let R be a ring such that AGN (R) is bipartite. Then AGN (R) is complete bipartite. Moreover, if R is non-reduced, then AGN (R) is a star graph and Nil(R) is the unique minimal prime ideal of R.
Proof. If R is reduced, then by [6, Corollary 2.5], AGN (R) is a complete bipartite
graph. Now, suppose that R is non-reduced. Then by Remark 3, AGN (R) is a star
graph. So, by Remark 3, either Nil(R) is a minimal ideal or R has exactly two ideals.
In the latter case, R is an Artinian local ring and so Nil(R) is the unique minimal
prime ideal of R. Thus we can assume that Nil(R) = (x) is a minimal ideal of R, for
some x ∈ R. To complete the proof, we show that R has exactly one minimal prime
ideal. Suppose to the contrary, p1 and p2 are two distinct minimal prime ideals of R.
Choose z ∈ p1 \ p2 and set S1 = R \ p1 and S2 = {1, z, z^2, . . .}. If 0 ∉ S1S2, then by [11, Theorem 3.44], there exists a prime ideal p in R such that p ∩ S1S2 = ∅, and hence p = p1, a contradiction. So, 0 ∈ S1S2. Therefore, there exist a positive integer k and y ∈ R \ p1 such that yz^k = 0. Consider the ideals (x), (y) and (z^k). It is clear that (x), (y) and (z^k) are three distinct vertices which form a triangle in AGN (R), a contradiction.
The following corollary is an immediate consequence of Theorem 6 and Remark 3.
Corollary 7. If AGN (R) is a tree, then AGN (R) is a star graph.
We finish this section with the next corollary.
Corollary 8. Let R be an Artinian ring. Then AGN (R) is bipartite if and only if AGN (R) ≅ Kn, where n ∈ {1, 2}.
Proof. Let R be an Artinian ring and AGN (R) be bipartite. Then by Theorem 6, AGN (R) is complete bipartite. If R is local, then Remark 3 implies that AGN (R) is complete. Since AGN (R) is complete bipartite, we deduce that AGN (R) ≅ Kn, where n ∈ {1, 2}. Now, suppose that R is not local. Then by [4, Theorem 8.7], there exists a positive integer n such that R ≅ R1 × · · · × Rn, where every Ri is an Artinian local ring. Since AGN (R) contains no odd cycle, it follows that n = 2. To complete the proof, we show that both R1 and R2 are fields. By contrary and with no loss of generality, suppose that R1 contains a non-trivial ideal, say I. Then it is not hard to check that R1 × (0), I × (0) and (0) × R2 form a triangle in AGN (R), a contradiction. The converse is trivial.
3. The Independence Number of Nil-Graphs of Ideals
In this section, we use maximal intersecting families to obtain a lower bound for the independence number of nil-graphs of ideals. Let R ≅ R1 × R2 × · · · × Rn,

T(R) = {(0) ≠ I = I1 × I2 × · · · × In ⊳ R | ∀ 1 ≤ k ≤ n : Ik ∈ {(0), Rk}},

and denote the induced subgraph of AGN (R) on T(R) by GT(R).

Proposition 9. If R ≅ R1 × R2 × · · · × Rn is a ring, then α(GT(R)) = 2^{n−1}.
Proof. For every ideal I = I1 × I2 × · · · × In, let

∆I = {k | 1 ≤ k ≤ n and Ik = Rk}.

Then two distinct vertices I and J in GT(R) are not adjacent if and only if ∆I ∩ ∆J ≠ ∅. So, there is a one-to-one correspondence between the independent sets of GT(R) and the set of families of pairwise intersecting subsets of the set [n] = {1, 2, . . . , n}. So, [10, Lemma 2.1] completes the proof.
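The count 2^{n−1} for the largest family of pairwise intersecting nonempty subsets of [n] (i.e., [10, Lemma 2.1]) can be verified exhaustively for small n. A brute-force sketch, with subsets encoded as bitmasks so that intersection is bitwise AND (an illustration, feasible only for tiny n):

```python
from itertools import combinations

def max_intersecting_family(n):
    """Largest family of pairwise intersecting nonempty subsets of [n],
    found by brute force over all families of subsets."""
    subsets = list(range(1, 1 << n))          # nonempty subsets as bitmasks
    best = 0
    for mask in range(1 << len(subsets)):
        fam = [s for i, s in enumerate(subsets) if mask >> i & 1]
        if len(fam) > best and all(a & b for a, b in combinations(fam, 2)):
            best = len(fam)
    return best
```

An optimal family is the set of all subsets containing one fixed element, which has size 2^{n−1}.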
Using [4, Theorem 8.7], we have the following immediate corollary.
Corollary 10. Let R be an Artinian ring with n maximal ideals. Then α(AGN (R)) ≥ 2^{n−1}; moreover, the equality holds if and only if R is reduced.
Lemma 11. [9, Proposition 1.5] Let R be a ring and {p1, . . . , pn} be a finite set of distinct minimal prime ideals of R. Let S = R \ ⋃_{i=1}^{n} pi. Then RS ≅ Rp1 × · · · × Rpn.

Proposition 12. If |Min(R)| ≥ n, then α(AGN (R)) ≥ 2^{n−1}.
Proof. Let {p1, . . . , pn} be a subset of Min(R) and S = R \ ⋃_{i=1}^{n} pi. By Lemma 11, there exists a ring isomorphism RS ≅ Rp1 × · · · × Rpn. On the other hand, if IS, JS are two non-adjacent vertices of AGN (RS), then it is not hard to check that I, J are two non-adjacent vertices of AGN (R). Thus α(AGN (R)) ≥ α(AGN (RS)), and so by Proposition 9, we deduce that α(AGN (R)) ≥ 2^{n−1}.
From the previous proposition, we have the following immediate corollary, which shows that the finiteness of α(AGN (R)) implies the finiteness of the number of minimal prime ideals of R.
Corollary 13. If R contains infinitely many minimal prime ideals, then the independence number of AGN (R) is infinite.
Theorem 14. For every Noetherian reduced ring R, α(AGN (R)) = 2^{|Min(R)|−1}.

Proof. Let Min(R) = {p1, p2, . . . , pn} and S = R \ ⋃_{k=1}^{n} pk. Then Lemma 11 implies that RS ≅ Rp1 × · · · × Rpn. On the other hand, by using [9, Proposition 1.1], we deduce that every Rpi is a field. Thus α(AGN (R)) ≥ α(AGN (RS)) = 2^{n−1}, by Corollary 10. To complete the proof, it is enough to show that α(AGN (R)) ≤ α(AGN (RS)). To see this, let I = (x1, x2, . . . , xr) and J = (y1, y2, . . . , ys) be two non-adjacent vertices of AGN (R). By [7, Corollary 2.4], S contains no zero-divisor and so IS, JS are non-trivial ideals of RS. We show that IS, JS are non-adjacent vertices of AGN (RS). Suppose to the contrary, IS JS ⊆ Nil(R)S = (0). Then for every 1 ≤ i ≤ r and 1 ≤ j ≤ s, there exists sij ∈ S such that sij xi yj = 0. Setting t = ∏_{i,j} sij, we have tIJ = (0). Since t is not a zero-divisor, we deduce that IJ = (0), a contradiction. Therefore, α(AGN (R)) ≤ α(AGN (RS)), as desired.
Finally, as an application of the nil-graph of ideals in ring theory, we have the following corollary, which shows that the number of minimal prime ideals of a Noetherian reduced ring coincides with the number of maximal ideals of the total quotient ring T(R) of R.

Corollary 15. Let R be a Noetherian reduced ring. Then

|Min(R)| = |Max(T(R))| = log2(α(AGN (R))) + 1.

Proof. Setting Min(R) = {p1, p2, . . . , pn} and S = R \ ⋃_{p∈Min(R)} p, we have T(R) ≅ Rp1 × · · · × Rpn, by Lemma 11. Since every Rpi is a field, Corollary 10 and Theorem 14 imply that 2^{|Min(R)|−1} = 2^{|Max(T(R))|−1} = α(AGN (R)). So, the assertion follows.
4. The Genus of Nil-Graphs of Ideals

In [3, Corollary 2.11], it is proved that for integers q > 0 and g ≥ 0, there are finitely many Artinian rings R satisfying the following conditions:

(1) γ(AG(R)) = g,
(2) |R/m| ≤ q for any maximal ideal m of R.

We begin this section with a similar result for the nil-graph of ideals.

Theorem 16. Let g and q > 0 be non-negative integers. Then there are finitely many Artinian rings R such that γ(AGN (R)) = g and |R/m| ≤ q for every maximal ideal m of R.
Proof. Let R be an Artinian ring. Then [4, Theorem 8.7] implies that R ≅ R1 × · · · × Rn, where n is a positive integer and each Ri is an Artinian local ring. We claim that for every i, |Ri| ≤ q^{|I(Ri)|}. Since γ(AGN (R)) < ∞, we deduce that γ(AGN (Ri)) < ∞ for every i. So by Remark 3 and the formula for the genus of complete graphs, every Ri has finitely many ideals. Therefore, by hypothesis and [3, Lemma 2.9], we have |Ri| ≤ |Ri/mi|^{|I(Ri)|} ≤ q^{|I(Ri)|}, and so the claim is proved. To complete the proof, it is sufficient to show that |R| is bounded by a constant depending only on g and q. With no loss of generality, suppose that |R1| ≥ |Ri| for every i ≥ 2. By the formula for the genus of complete graphs, (|I(R1)| − 5)/12 ≤ γ(AGN (R1)) ≤ g. Hence |I(R1)| ≤ 12g + 5 and so

|R| ≤ |R1|^n ≤ (q^{|I(R1)|})^n ≤ q^{n(12g+5)}.

So, we are done.
Let {Ri}_{i∈N} be an infinite family of Artinian rings such that every Ri is a direct product of 4 fields. Then it is clear that γ(AGN (Ri)) = 1 for every i. So, the condition |R/m| ≤ q for every maximal ideal m of R in the previous theorem is necessary.

Let R be a Noetherian ring. Then one may ask: does γ(AGN (R)) < ∞ imply that R is Artinian? The answer to this question is negative. To see this, let R ≅ S × D, where S is a ring with at most one non-trivial ideal and D is a Noetherian integral domain which is not a field. Then it is easy to check that AGN (R) is a planar graph and R is a Noetherian ring which is not Artinian.
Before proving the next lemma, we need the following notation. Let G be a graph and V′ be the set of vertices of G whose degrees equal one. We write G̃ for the subgraph G \ V′ and call it the reduction of G.

Lemma 17. γ(G) = γ(G̃), where G̃ is the reduction of G.
Remark 18. It is well-known that if G is a connected graph of genus g, with n vertices,
m edges and f faces, then n − m + f = 2 − 2g.
In the following, all Artinian rings, whose nil-graphs of ideals have genus at most
one, are classified.
Theorem 19. Let R be an Artinian ring. If γ(AGN (R)) < 2, then |Max(R)| ≤ 4 and
moreover, the following statements hold.
(i) If |Max(R)| = 4, then γ(AGN (R)) < 2 if and only if R is isomorphic to a direct
product of four fields.
(ii) If |Max(R)| = 3, then γ(AGN(R)) < 2 if and only if R ≅ F1 × F2 × R3, where
F1, F2 are fields and R3 is an Artinian local ring with at most two non-trivial
ideals.
(iii) If |Max(R)| = 2, then γ(AGN(R)) < 2 if and only if either R ≅ F1 × R2, where
F1 is a field and R2 is an Artinian local ring with at most three non-trivial ideals,
or R ≅ R1 × R2, where every Ri (i = 1, 2) is an Artinian local ring with at most
one non-trivial ideal.
(iv) If R is local, then γ(AGN (R)) < 2 if and only if R has at most 7 non-trivial
ideals.
Proof. Let γ(AGN(R)) < 2. First we show that |Max(R)| ≤ 4. Suppose to the
contrary that |Max(R)| ≥ 5. By [4, Theorem 8.7], R ≅ R1 × · · · × R5, where every Ri is an
Artinian ring. Let
I1 = R1 × (0) × (0) × (0) × (0);
I2 = (0) × R2 × (0) × (0) × (0);
I3 = R1 × R2 × (0) × (0) × (0);
J1 = (0) × (0) × R3 × (0) × (0);
J2 = (0) × (0) × (0) × R4 × (0);
J3 = (0) × (0) × (0) × (0) × R5 ;
J4 = (0) × (0) × R3 × R4 × (0);
J5 = (0) × (0) × (0) × R4 × R5 ;
J6 = (0) × (0) × R3 × (0) × R5 ;
J7 = (0) × (0) × R3 × R4 × R5 .
Then for every 1 ≤ i ≤ 3 and every 1 ≤ j ≤ 7, Ii and Jj are adjacent and so K3,7 is
a subgraph of AGN (R). Thus by the formula for the genus of the complete bipartite
graph, we have γ(AGN (R)) ≥ γ(K3,7 ) ≥ 2, a contradiction.
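The K3,7 construction can be checked mechanically. In the sketch below (an illustration, not part of the paper's proof), each listed ideal is recorded by its support, i.e. the set of coordinates where the factor is the full ring Ri, so that two such ideals multiply to (0) exactly when their supports are disjoint; the genus bound is Ringel's formula γ(K_{m,n}) = ⌈(m−2)(n−2)/4⌉:

```python
from math import ceil

# Ideals in R1 x ... x R5 written by their support: the set of coordinates
# in which the component is the full ring Ri (elsewhere the component is (0)).
I = [{1}, {2}, {1, 2}]                                   # I1, I2, I3
J = [{3}, {4}, {5}, {3, 4}, {4, 5}, {3, 5}, {3, 4, 5}]   # J1, ..., J7

# Two such ideals multiply to (0) -- hence are adjacent in AG_N(R) --
# exactly when their supports are disjoint.
assert all(a.isdisjoint(b) for a in I for b in J)        # K_{3,7} subgraph

# Ringel's formula: genus of the complete bipartite graph K_{m,n}.
def genus_bipartite(m, n):
    return ceil((m - 2) * (n - 2) / 4)

assert genus_bipartite(3, 7) == 2   # so gamma(AG_N(R)) >= 2
assert genus_bipartite(4, 5) == 2   # used later in the proof of (i)
```

The same disjoint-support test also covers the K4,5 and K4,6 configurations used in parts (i) and (ii) below.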
(i) Let |Max(R)| = 4 and γ(AGN(R)) < 2. By [4, Theorem 8.7], R ≅ R1 × R2 ×
R3 × R4, where every Ri is an Artinian local ring. We show that every Ri is a field.
Suppose not and with no loss of generality, R4 contains a non-trivial ideal, say a. Set
I1 = R1 × (0) × (0) × (0); I2 = (0) × R2 × (0) × (0); I3 = R1 × R2 × (0) × (0);
I4 = R1 × R2 × (0) × a; J1 = (0) × (0) × R3 × (0); J2 = (0) × (0) × (0) × R4 ;
J3 = (0) × (0) × (0) × a; J4 = (0) × (0) × R3 × R4 ; J5 = (0) × (0) × R3 × a.
It is clear that every Ii , 1 ≤ i ≤ 4, is adjacent to Jj , 1 ≤ j ≤ 5, and so K4,5 is a
subgraph of AGN (R). Thus by the formula for the genus of the complete bipartite
graph, we have γ(AGN(R)) ≥ γ(K4,5) ≥ 2, a contradiction. Conversely, assume that
R ≅ F1 × F2 × F3 × F4, where every Fi is a field. We show that γ(AGN(R)) = 1. By
Lemma 17, it is enough to prove that γ(G̃) = 1, where G̃ is the reduction of AGN(R).
We know that G̃ has 4 vertices of degree 6 and 6 vertices of degree 3. So, G̃ has n = 10
vertices and m = 21 edges. Also, it is not hard to check that G̃ has f = 11 faces. Now,
Remark 18 implies that γ(G̃) = 1.
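The counts used in this step (10 vertices and 21 edges after removing the degree-one vertices) can be verified mechanically. A sketch, identifying the non-trivial ideals of F1 × F2 × F3 × F4 with proper non-empty subsets of {0, 1, 2, 3} and using disjointness of supports as adjacency:

```python
from itertools import combinations

# Non-trivial ideals of F1 x F2 x F3 x F4: proper non-empty subsets of {0,1,2,3}.
factors = range(4)
ideals = [frozenset(s) for r in range(1, 4) for s in combinations(factors, r)]
adj = lambda a, b: a.isdisjoint(b)   # product of the two ideals is (0)

# The degree-1 vertices are the three-factor ideals; remove them (Lemma 17).
deg = {v: sum(adj(v, u) for u in ideals if u != v) for v in ideals}
reduced = [v for v in ideals if deg[v] > 1]
edges = [(a, b) for a, b in combinations(reduced, 2) if adj(a, b)]

assert len(reduced) == 10 and len(edges) == 21
degs = sorted(sum(adj(v, u) for u in reduced if u != v) for v in reduced)
assert degs == [3] * 6 + [6] * 4     # 6 vertices of degree 3, 4 of degree 6
```

With f = 11 (taken from the text), Euler's formula gives 10 − 21 + 11 = 0 = 2 − 2g, i.e. g = 1.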
(ii) Let γ(AGN(R)) < 2 and R ≅ R1 × R2 × R3, where every Ri is an Artinian local
ring. We show that at least two of the three rings R1 , R2 and R3 are fields. Suppose not
and with no loss of generality, b and c are non-trivial ideals of R2 and R3 , respectively.
Set
I1 = R1 × (0) × (0); I2 = (0) × R2 × (0); I3 = R1 × R2 × (0); I4 = R1 × b × (0);
J1 = (0) × b × (0); J2 = (0) × (0) × c; J3 = (0) × b × c;
J4 = (0) × (0) × R3 ; J5 = (0) × b × R3 .
It is clear that every Ii , 1 ≤ i ≤ 4, is adjacent to Jj , 1 ≤ j ≤ 5, and so K4,5 is a subgraph
of AGN (R). Thus by the formula for the genus of the complete bipartite graph, we
have γ(AGN(R)) ≥ γ(K4,5) ≥ 2, a contradiction. Thus, with no loss of generality, we
can suppose that R ≅ F1 × F2 × R3, where F1 and F2 are fields and R3 is an Artinian
local ring. Now, we prove that R3 has at most two non-trivial ideals. Suppose to the
contrary that a, b and c are three distinct non-trivial ideals of R3. Let
I1 = (0) × (0) × R3 ; I2 = (0) × (0) × a; I3 = (0) × (0) × b; I4 = (0) × (0) × c;
J1 = F1 × (0) × (0); J2 = (0) × F2 × (0); J3 = F1 × F2 × (0);
J4 = F1 × F2 × a; J5 = F1 × F2 × b; J6 = F1 × F2 × c.
Clearly, every Ii , 1 ≤ i ≤ 4, is adjacent to Jj , 1 ≤ j ≤ 6, and so K4,6 is a subgraph of
AGN (R). Thus by the formula for the genus of the complete bipartite graph, we have
γ(AGN(R)) ≥ γ(K4,6) ≥ 2, a contradiction. Conversely, let R ≅ F1 × F2 × R3, where
F1 and F2 are fields and R3 is an Artinian local ring with two non-trivial ideals c and c′. Set
I1 = (0) × (0) × R3 ; I2 = (0) × (0) × c; I3 = (0) × (0) × c′ ;
J1 = F1 × (0) × (0); J2 = (0) × F2 × (0); J3 = F1 × F2 × (0).
Then for every 1 ≤ i, j ≤ 3, we have Ii Jj = (0). Hence γ(AGN (R)) ≥ γ(K3,3 ) ≥ 1.
However, in this case, AGN (R) is a subgraph of AGN (F1 × F2 × F3 × F4 ) (in which
every Fi is a field). Therefore, by (i), γ(AGN (R)) = 1. If R3 contains at most one
non-trivial ideal, then it is not hard to check that AGN (R) is a planar graph. This
completes the proof of (ii).
(iii) Assume that γ(AGN(R)) < 2 and R ≅ R1 × R2, where R1 and R2 are Artinian
local rings. We prove the assertion in the following two cases:
Case 1. R ≅ F1 × R2, where F1 is a field and R2 is an Artinian local ring. In this
case, we show that R2 has at most three non-trivial ideals. Suppose to the contrary,
R2 has at least four non-trivial ideals. Then for every two non-zero ideals I2 ≠ R2 and
J2 of R2, the vertices F1 × I2 and (0) × J2 are adjacent and so K4,5 is a subgraph of
AGN (R). Thus by the formula for the genus of the complete bipartite graph, we have
γ(AGN (R)) ≥ γ(K4,5 ) ≥ 2, a contradiction.
Case 2. Neither R1 nor R2 is a field. We prove that every Ri has at most one non-trivial ideal. Suppose not and with no loss of generality, R2 has two distinct non-trivial
ideals. Then every ideal of the form R1 × J is adjacent to every ideal of the form I × K,
where I and J are proper ideals of R1 and R2 , respectively, and K is an arbitrary ideal
of R2 . So γ(AGN (R)) ≥ γ(K3,7 ) ≥ 2, a contradiction.
Conversely, if R ≅ F1 × R2, where F1 is a field and R2 is an Artinian local ring with
n ≤ 3 non-trivial ideals, then one can easily show that γ(AGN(R)) = 1 if n = 2, 3, and
γ(AGN(R)) = 0 if n = 1.
Now, suppose that R ≅ R1 × R2, where R1 and R2 are Artinian local rings, each with
one non-trivial ideal. Then it is not hard to show that γ(AGN(R)) = 1. This completes the
proof of (iii).
(iv) This follows from the formula for the genus of complete graphs and Remark 3.
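The arithmetic behind (iv) is a one-liner with the Ringel–Youngs formula γ(K_n) = ⌈(n−3)(n−4)/12⌉. A sketch (using the fact, assumed here, that for a local Artinian ring every product of proper ideals lies in the nilpotent maximal ideal, so AGN(R) is complete on the non-trivial ideals):

```python
from math import ceil

# Ringel-Youngs: genus of the complete graph K_n (n >= 3).
def genus_complete(n):
    return ceil((n - 3) * (n - 4) / 12)

# For a local Artinian ring R with k non-trivial ideals, AG_N(R) = K_k, so
# gamma(AG_N(R)) <= 1 exactly when k <= 7, and AG_N(R) is planar when k <= 4.
assert genus_complete(7) == 1 and genus_complete(8) == 2
assert genus_complete(4) == 0 and genus_complete(5) == 1
```

The last line also matches part (iii) of Corollary 20 below: a local ring yields a planar nil-graph precisely when it has at most four non-trivial ideals.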
From the proof of the previous theorem, we have the following immediate corollary.
Corollary 20. Let R be an Artinian ring. Then AGN (R) is a planar graph if and only
if |Max(R)| ≤ 3 and R satisfies one of the following conditions:
(i) R is isomorphic to the direct product of three fields.
(ii) R ≅ F1 × R2, where F1 is a field and R2 is an Artinian local ring with at most
one non-trivial ideal.
(iii) R is a local ring with at most four non-trivial ideals.
We close this paper with the following example.
Example 21. (i) Suppose that R ≅ Z6[x]/(x^m), where m ≥ 2. Let I1 = (3), I2 = (3x),
I3 = (3x + 3), J1 = (2), J2 = (4), J3 = (2x), J4 = (4x), J5 = (2x + 2), J6 = (4x + 2)
and J7 = (2x + 4). Then one can check that these ideals are distinct vertices of AGN(R).
Also, every Ii (1 ≤ i ≤ 3) is adjacent to every Jk (1 ≤ k ≤ 7). Thus K3,7 is a subgraph
of AGN(R) and so the formula for the genus of complete bipartite graphs implies that
γ(AGN(R)) ≥ 2.
(ii) Let R ≅ Z4[x]/(x^3). Set I1 = (2x), I2 = (2x^2), I3 = (2x + 2x^2), J1 = (2), J2 = (2 + x^2),
J3 = (2 + 2x^2), J4 = (2 − x^2), J5 = (2 + 2x), J6 = (2 + 2x + x^2) and J7 = (2 + 2x + 2x^2).
Similar to (i), one can show that every Ii is adjacent to every Jk and so γ(AGN(R)) ≥ 2.
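The adjacency claim in (ii) can be verified by direct computation. A sketch, assuming the ring in (ii) is Z4[x]/(x^3) and using the fact that an element a0 + a1·x + a2·x^2 of this quotient is nilpotent precisely when its constant term a0 is even:

```python
# Elements of Z4[x]/(x^3) as coefficient triples (a0, a1, a2) mod 4.
def mul(p, q):
    r = [0, 0, 0]
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j < 3:                # x^3 = 0 kills higher terms
                r[i + j] = (r[i + j] + a * b) % 4
    return tuple(r)

def nilpotent(p):
    # a0 + a1*x + a2*x^2 is nilpotent iff a0 is even (Nil = (2, x))
    return p[0] % 2 == 0

I = [(0, 2, 0), (0, 0, 2), (0, 2, 2)]             # (2x), (2x^2), (2x+2x^2)
J = [(2, 0, 0), (2, 0, 1), (2, 0, 2), (2, 0, 3),  # (2), (2+x^2), (2+2x^2), (2-x^2)
     (2, 2, 0), (2, 2, 1), (2, 2, 2)]             # (2+2x), (2+2x+x^2), (2+2x+2x^2)

# The generators multiply into the nilradical, so Ii*Jk is nilpotent
# for every pair, giving the adjacencies claimed in (ii).
assert all(nilpotent(mul(p, q)) for p in I for q in J)
```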
References
[1] G. Aalipour, S. Akbari, R. Nikandish, M.J. Nikmehr and F. Shaveisi. On the coloring of
the annihilating-ideal graph of a commutative ring, Discrete. Math. 312 (2012) 2620–2626.
[2] G. Aalipour, S. Akbari, M. Behboodi, R. Nikandish, M. J. Nikmehr, F. Shaveisi, The
classification of the annihilating-ideal graph of a commutative ring, Algebra Colloq., 21(2)
(2014) 249–256.
[3] F. Aliniaeifard and M. Behboodi, Rings whose annihilating-ideal graphs have positive genus,
J. Algebra Appl. 41 (2013) 3629–3634.
[4] M. F. Atiyah and I.G. Macdonald, Introduction to Commutative Algebra, Addison-Wesley
Publishing Company, 1969.
[5] M. Behboodi and Z. Rakeei, The annihilating ideal graph of commutative rings I, J. Algebra
Appl. 10 (2011), 727–739.
[6] M. Behboodi and Z. Rakeei, The annihilating ideal graph of commutative rings II, J. Algebra
Appl. 10 (2011), 741–753.
[7] J. A. Huckaba, Commutative rings with zero divisors, Marcel Dekker Inc., New York, 1988.
[8] S. Kiani, H.R. Maimani, R. Nikandish, Some results on the domination number of a zero-divisor graph, Canad. Math. Bull., 57 (3) (2014), 573–578.
[9] E. Matlis, The minimal prime spectrum of a reduced ring, Illinois J. Math. 27(3) (1983),
353–391.
[10] A. Meyerowitz, Maximal intersecting families, Europ. J. Combinatorics, 16 (1995), 491–501.
[11] R. Y. Sharp, Steps in Commutative Algebra, Cambridge University Press, 1990.
[12] F. Shaveisi and R. Nikandish, The nil-graph of ideals of a commutative ring, Bull. Malays.
Math. Sci. Soc., accepted.
[13] D. B. West, Introduction to Graph Theory, 2nd ed., Prentice Hall, Upper Saddle River,
2001.
The Genetic Code Revisited: Inner-to-outer map, 2D-Gray map, and
World-map Genetic Representations
H.M. de Oliveira¹ and N.S. Santos-Magalhães²
¹ Universidade Federal de Pernambuco, Grupo de Processamento de Sinais, Caixa postal 7.800 - CDU, 51.711-970, Recife, Brazil
² Departamento de Bioquímica - Laboratório de Imunologia Keizo-Asami (LIKA), Av. Prof. Moraes Rego, 1235, 50.670-901, Recife, Brazil
{hmo,nssm}@ufpe.br
Abstract. How to represent the genetic code? Despite the fact that it is extensively known, the
mapping of DNA into proteins remains one of the relevant discoveries of genetics. However,
modern genomic signal processing usually requires converting symbolic DNA strings into
complex-valued signals in order to take full advantage of a broad variety of digital processing
techniques. The genetic code is revisited in this paper, addressing alternative representations for it,
which can be worthwhile for genomic signal processing. Three original representations are discussed.
The inner-to-outer map builds on the unbalanced role of the nucleotides of a 'codon' and seems to
be suitable for handling information-theoretic matters. The two-dimensional Gray map
representation is offered as a mathematically structured map that can help in interpreting
spectrograms or scalograms. Finally, the world-map representation for the genetic code is
investigated, which can be particularly valuable for educational purposes - besides furnishing
plenty of room for the application of distance-based algorithms.
1.
Introduction
The advent of molecular genetics comprises a revolution of far-reaching consequences for humankind, and it has
evolved into a specialised branch of modern-day biochemistry. In the 'post-sequencing' era of genetics, the
rapid proliferation of this cross-disciplinary field has provided a plethora of applications since the
late twentieth century. The agenda to decipher the information in the human genome was begun in 1986 [1]. Now
that the human genome has been sequenced [2], genomic analysis is becoming the focus of much interest
because of its significance for improving the diagnosis of diseases. Motivated by the impact of genes on concrete
goals - primarily for the pharmaceutical industry - massive efforts have also been dedicated to the discovery of
modern drugs. Genetic signal processing (GSP) is being confronted with a redoubled amount of data, which leads
to some intricacy in extracting meaningful information from it [3]. Ironically, the more genetic information becomes
available, the more higgledy-piggledy the data-mining task becomes. The recognition or comparison of long DNA
sequences is often nearly unattainable.
The primary step in taking advantage of the wide assortment of signal processing algorithms normally
concerns converting symbolic DNA sequences into real-valued (or complex-valued) genomic signals.
How to represent the genetic code? Instead of using a look-up table as usual (e.g., [4], [5]), a number of different
ways for implementing this assignment have been proposed. An interesting mapping from the information theory
viewpoint was recently proposed by Battail [6], which takes into account the unbalanced relevance of
nucleotides in a ‘codon’. Anastassiou applied a practical way of mapping the genetic code on the Argand-Gauss
plane [7]. Cristea has proposed an interesting complex map, termed as tetrahedral representation of nucleotides,
in which amino acids are mapped on the ‘codons’ according to the genetic code [8]. This representation is
derived by the projection of the nucleotide tetrahedron on a suitable plane. Three further representations for the
3-base ‘codons’ of the genetic code are outlined in this paper, namely i) inner-to-outer map, ii) 2D-Gray genetic
map, and iii) genetic world-chart representations.
2.
Process of Mapping DNA into Proteins
Living beings may be considered as information processing systems able to properly react to a variety of stimuli,
and to store and process information for their accurate self-reproduction. The entire set of information of the DNA is
termed the genome (Greek: ome = mass). The DNA plays a significant role in the biochemical dynamics of
every cell, and constitutes the genetic fingerprint of living organisms [4]. Proteins - consisting of amino acids -
catalyse the majority of biological reactions. The DNA controls the manufacture of proteins (Greek:
protos = foremost), which make up the majority of the dry mass of beings. The DNA sequence thus contains the
instructions that rule how an organism lives, including metabolism, growth, and propensity to diseases.
Transcription, which consists of mapping DNA into messenger RNA (mRNA), occurs first. The translation
then maps the mRNA into a protein, according to the genetic code [3], [4]. Although nobody is able to predict the
protein 3-D structure from the 1-D amino acid sequence, the structure of nearly all proteins in the living cell is
uniquely predetermined by the linear sequence of the amino acids.
Genomic information of eukaryote and prokaryote DNA is - in a very real sense - digitally expressed in nature;
it is represented as strings in which each element can be one out of a finite number of entries. The genetic code,
experimentally determined since the 1960s, is well known [5]. There are only four different nucleic bases, so the code
uses a 4-symbol alphabet: A, T, C, and G. Actually, the DNA information is transcribed into single-strand RNA
- the mRNA. Here, thymine (T) is replaced by uracil (U). The information is transmitted by a start-stop
protocol. The genetic source is characterised by the alphabet N := {U, C, A, G}. The input alphabet N³ is the set of
'codons' N³ := {(n1, n2, n3) | ni ∈ N, i = 1, 2, 3}. The output alphabet A is the set of amino acids including the nonsense
'codons' (stop elements): A := {Leu, Pro, Arg, Gln, His, Ser, Phe, Trp, Tyr, Asn, Lys, Ile, Met, Thr, Asp, Cys, Glu,
Gly, Ala, Val, Stop}. The genetic code consequently maps each of the 64 3-base 'codons' of the DNA characters into one
of the 20 possible amino acids (or into a punctuation mark). In this paper we are concerned merely with the
standard genetic code, which is widespread and nearly universal.
The genetic code is a map GC: N³ → A that maps triplets (n1, n2, n3) into one amino acid Ai. For instance,
GC(UAA) = Stop and GC(CUU) = Leu. In the majority of standard biochemistry textbooks, the genetic code is
represented as a table (e.g. [4], [5]). Let ||·|| denote the cardinality of a set. Evaluating the cardinality of the input
and the output alphabets, we have ||N³|| = ||N||³ = 4³ = 64 and ||A|| = 21, showing that the genetic code is a highly
degenerate code. In many cases, changing only one base does not automatically change the amino acid
sequence of a protein, and changing one amino acid in a protein does not automatically affect its function.
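For concreteness, these cardinalities and the degeneracy can be reproduced from the standard code itself. The sketch below (not from the paper) uses the common single-letter amino-acid codes, with '*' standing for the stop/punctuation marks, and the classic 64-letter table read with the bases ordered U, C, A, G:

```python
from itertools import product

# Standard genetic code via the classic compact 64-letter table
# (first base varies slowest; base order U, C, A, G).
BASES = "UCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
GC = {b1 + b2 + b3: AA[16 * i + 4 * j + k]
      for (i, b1), (j, b2), (k, b3) in product(enumerate(BASES), repeat=3)}

assert len(GC) == 4 ** 3 == 64                  # ||N^3|| = 64
assert len(set(GC.values())) == 21              # 20 amino acids + stop ('*')
assert GC["UAA"] == "*" and GC["CUU"] == "L"    # Stop and Leu, as in the text
```

The second assertion makes the degeneracy explicit: 64 codons collapse onto only 21 outputs.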
3.
The Genetic Code Revisited
Even though worthy, ordinary representations of the genetic code can be replaced by the handy descriptions offered in
this paper. The first one is the so-called inner-to-outer diagram by Battail, which is suitable when addressing
information theory aspects [9]. We present in the sequel a variant of this map, using the notion of the Gray
code to systematise the diagram, gathering regions mapped into the same amino acid (Figure 1).
Fig. 1. Variant of Battail’s inner-to-outer map for the genetic code [6]. The first nucleotide of a triplet (or ‘codon’) is
indicated in the inner circle, the second one in the region surrounding it, the third one in the outer region, where the
abbreviated name of the amino acid corresponding to the ‘codon’, read from the centre, has also been plotted
Another representation for the genetic code can be derived combining the foundations of Battail’s map and the
technique for generalized 2-D constellation proposed for high-speed modems [10]. Specifically, the map
intended for 64-QAM modulation shown in Figure 2 can properly be adapted to the genetic code.
Fig. 2. 2D-Gray bit-to-symbol assignment for the 64-QAM digital modulation [10].
Binary labels are replaced by nucleotides according to the rule (x←y denotes the operator “replace y by x”):
U ← [11]; A ← [00]; G ← [10]; C ← [01]. The usefulness of this specific labelling can be corroborated by the
following argument. The “complementary base pairing” property can be interpreted as a parity check. The
DNA-parity can be defined as the sum modulo 2 of all binary coordinates of the nucleotide representations.
Labelling a DNA double-strand gives an error-correcting code. Each point of the 64-signal constellation is
mapped into a 'codon'. This map (Figure 3a) furnishes a way of clustering possible triplets into consistent regions
of amino acids (Figure 3b). In order to merge the areas mapped into the same amino acid, each of amino acids
can be coloured using a distinct colour as in Figure 4.
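The parity-check reading of complementary base pairing can be made concrete with a few lines (a minimal sketch, with the 2-bit labels exactly as in the rule above):

```python
# 2-bit labels for the nucleotides, as in the text:
# U <- [11], A <- [00], G <- [10], C <- [01].
label = {"U": (1, 1), "A": (0, 0), "G": (1, 0), "C": (0, 1)}
complement = {"A": "U", "U": "A", "C": "G", "G": "C"}

def parity(strand):
    # DNA-parity: sum modulo 2 of all binary coordinates of the labels.
    return sum(bit for n in strand for bit in label[n]) % 2

# Complementary labels are bitwise complements, so every base pair
# contributes an even number of ones: a double strand has even parity.
for codon in ("UAC", "GGA", "CUU"):
    paired = "".join(complement[n] for n in codon)
    assert parity(codon + paired) == 0
```

In this reading, a parity violation in a labelled double strand signals a mismatched base pair, which is the error-correcting-code interpretation mentioned above.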
Fig. 3. (a) Genetic code map based on the 2D-Gray map for the 64 possible triplets (‘codons’). Each triplet differs to its four
closest neighbours by a single nucleotide. The first nucleotide specifies the quadrant regarding the main axis system. The
second one provides information about the quadrant in the secondary axis system, and the last nucleotide (the wobble base)
identifies the final position of the ‘codon’; (b) Genetic code map based on the 2D-Gray genetic map for the 64 possible
'codons' into one of the twenty amino acids (or start/stop)
Fig. 4. 2D-Gray genetic map for the 64 possible 'codons' into one of the twenty possible amino acids (or punctuation). Each
amino acid is shaded with a different colour, defining codification regions on the genetic plane. The structure is supposed to be
2D-periodic
Evoking the two-dimensional cyclic structure of the above genetic mapping, it can be folded by joining the
left-right borders and the top-bottom frontiers. As a result, the map can be drawn on the surface of a sphere,
resembling a world-map. Eight parallels of latitude are required (four in each hemisphere) as well as four
meridians of longitude associated with four corresponding anti-meridians. The Equator line is imaginary, and the
tropic circles are at 11.25°, 33.75°, 56.25°, and 78.75° (North and South). Starting from a virtual and arbitrary
Greenwich meridian, the meridians can be plotted at 22.5°, 67.5°, 112.5°, and 157.5° (East and West). Each triplet
is assigned to a single point on the surface, which we named "Nirenberg-Khorana's Earth"¹ (Figure 5).
¹ In honour of Marshall Nirenberg and Har Gobind Khorana, who independently were mainly responsible for cracking the genetic
code into three-nucleotide 'codons'
Fig. 5. (a) Nirenberg-Khorana's Earth: the genetic code as a world-map representation. There are four meridians of longitude as
well as corresponding anti-meridians comprising four 'codons' each. The eight parallels of latitude (tropics) appear as
containing eight 'codons' each. The 'codon' UUU, for instance, has geographic co-ordinates (22.5°W, 78.75°N). The Voronoi
region [10] of each triplet can be highlighted according to the associated amino acid colour; (b) Continents of
Nirenberg-Khorana's Earth: regions of essential amino acids correspond to the land and nonessential amino acids constitute
the ocean. There are two continents (one in each hemisphere), and a single island (the Histidine island)
If all of the essential amino acids are grouped and said to stand for ground, two continents and a lone
island emerge (Figure 5b). The remaining (nonessential) amino acids materialise the sea. Several kinds of charts
can be drawn depending on the criteria used to cluster amino acids [4]. Amino acids can be put together
according to their metabolic precursor or coloured by the characteristics of their side chains. This approach allows a
kind of genetic geography. Each one of such representations has idiosyncrasies and can be suitable for analysing
specific issues of protein making.
4.
Closing Remarks2
The innovative representations for the genetic code introduced in this paper are mathematically structured, so
they can be suitable for implementing computational algorithms. Although unprocessed DNA sequences could
be helpful, biologists are typically interested in higher-level, location-based annotations on such strings. Much
of the signal processing techniques for genomic feature extraction and functional cataloguing have been focused on
local oligonucleotide patterns in the linear primary sequences of classes of genomes, searching for noteworthy
patterns [3], [7]. GSP techniques provide compelling paraphernalia for describing biological features
embedded in the data. DNA spectrograms and scalograms are among the powerful GSP tools [7], [11], which depend
on the choice of the genetic mapping. The miscellany of maps proposed in this paper supplies a further cross-link
between telecommunications and biochemistry, and can be beneficial for "deciphering" genomic signals. These
maps can also be beneficial for educational purposes, furnishing a much richer reading and visualisation than a
simple look-up table.
² This work was partially supported by the Brazilian National Council for Scientific and Technological Development (CNPq)
under research grants N.306180 (HMO) and N.306049 (NSSM). The first author also thanks Prof. G. Battail, who decisively
influenced his interests.
References
1. Int. Human Genome Sequencing Consortium: Initial Sequencing and Analysis of the Human Genome. Nature 409 (2001)
860-921
2. NIH (2004) National Center for Biotechnology Information, GenBank [on line], Available 05/05/04:
http://www.ncbi.nlm.nih.gov/Genomes/index.html
3. Zhang, X-Y., Chen, F., Zhank, Y-T., Agner, S.C., Akay, M., Lu, Z-H., Waye, M.M.Y., Tsui, S.K-W.: Signal Processing
Techniques in Genomic Engineering. Proc. of the IEEE 90 (2002) 1822-1833
4. Nelson, D.L., Cox, M.M.: Lehninger Principles of Biochemistry. 3rd ed. Worth Publishers, New York (2000)
5. Alberts, B., Bray, D., Johnson, A., Lewis, J., Raff, M., Roberts, K., Walter, P.: Essential Cell Biology. Garland Pub. New
York (1998)
6. Battail, G.: Is Biological Evolution Relevant to Information Theory and Coding? Proc. Int. Symp. on Coding Theory and
Applications, ISCTA 2001, Ambleside UK (2001) 343-351
7. Anastassiou, D.: Genomic Signal Processing, IEEE Signal Processing Mag. (2001) 8-20
8. Cristea, P.: Real and Complex Genomic Signals. Int. Conf. on DSP. 2 (2002) 543-546
9. Battail, G.: Does Information Theory Explain Biological Evolution? Europhysics Letters 40 (1997) 343-348
10. de Oliveira, H.M., Battail, G.: Generalized 2-dimensional Cross Constellations and the Opportunistic Secondary Channel,
Annales des Télécommunications 47 (1992) 202-213
11. Tsonis, A. A., Kumar, P., Elsner, J.B., Tsonis, P.A.: Wavelet Analysis of DNA Sequences. Physical Review E 53 (1996)
1828-1834.
Being Robust (in High Dimensions) Can Be Practical∗
arXiv:1703.00893v4 [cs.LG] 13 Mar 2018
Ilias Diakonikolas†
CS, USC
[email protected]
Jerry Li ¶
EECS & CSAIL, MIT
[email protected]
Gautam Kamath‡
EECS & CSAIL, MIT
[email protected]
Ankur Moitrak
Math & CSAIL, MIT
[email protected]
Daniel M. Kane§
CSE & Math, UCSD
[email protected]
Alistair Stewart∗∗
CS, USC
[email protected]
March 14, 2018
Abstract
Robust estimation is much more challenging in high dimensions than it is in one dimension: Most
techniques either lead to intractable optimization problems or estimators that can tolerate only a tiny
fraction of errors. Recent work in theoretical computer science has shown that, in appropriate distributional models, it is possible to robustly estimate the mean and covariance with polynomial time
algorithms that can tolerate a constant fraction of corruptions, independent of the dimension. However,
the sample and time complexity of these algorithms is prohibitively large for high-dimensional applications. In this work, we address both of these issues by establishing sample complexity bounds that are
optimal, up to logarithmic factors, as well as giving various refinements that allow the algorithms to
tolerate a much larger fraction of corruptions. Finally, we show on both synthetic and real data that our
algorithms have state-of-the-art performance and suddenly make high-dimensional robust estimation a
realistic possibility.
1
Introduction
Robust statistics was founded in the seminal works of [Tuk60] and [Hub64]. The overarching motto is
that any model (especially a parametric one) is only approximately valid, and that any estimator designed
for a particular distribution that is to be used in practice must also be stable in the presence of model
misspecification. The standard setup is to assume that the samples we are given come from a nice distribution,
but that an adversary has the power to arbitrarily corrupt a constant fraction of the observed data. After
several decades of work, the robust statistics community has discovered a myriad of estimators that are
provably robust. An important feature of this line of work is that it can tolerate a constant fraction of
corruptions independent of the dimension and that there are estimators for both the location (e.g., the
mean) and scale (e.g., the covariance). See [HR09] and [HRRS86] for further background.
It turns out that there are vast gaps in our understanding of robustness, when computational considerations are taken into account. In one dimension, robustness and computational efficiency are in perfect
* A version of this paper appeared in ICML 2017 [DKK+ 17].
† Supported by NSF CAREER Award CCF-1652862, a Sloan Research Fellowship, and a Google Faculty Research Award.
‡ Supported by NSF CCF-1551875, CCF-1617730, CCF-1650733, and ONR N00014-12-1-0999.
§ Supported by NSF CAREER Award CCF-1553288 and a Sloan Research Fellowship.
¶ Supported by NSF CAREER Award CCF-1453261, a Google Faculty Research Award, and an NSF Fellowship.
‖ Supported by NSF CAREER Award CCF-1453261, a grant from the MIT NEC Corporation, and a Google Faculty Research Award.
** Research supported by a USC startup grant.
Authors are in alphabetical order.
Code of our implementation is available at https://github.com/hoonose/robust-filter.
harmony. The empirical mean and empirical variance are not robust, because a single corruption can arbitrarily bias these estimates, but alternatives such as the median and the interquartile range are straightforward
to compute and are provably robust.
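A toy illustration of this one-dimensional contrast (not from the paper): with a 10% fraction of gross outliers, the empirical mean is dragged arbitrarily far away while the median stays essentially put.

```python
import random
random.seed(0)

# 900 inlier samples near a true location of 5.0, plus 10% gross outliers.
inliers = [random.gauss(5.0, 1.0) for _ in range(900)]
outliers = [1e6] * 100                    # adversarial corruptions
data = inliers + outliers

mean = sum(data) / len(data)              # dragged far away by the outliers
median = sorted(data)[len(data) // 2]     # stays close to the true location

assert abs(mean - 5.0) > 1e4
assert abs(median - 5.0) < 0.5
```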
But in high dimensions, there is a striking tension between robustness and computational efficiency.
Let us consider estimators for location. The Tukey median [Tuk60] is a natural generalization of the one-dimensional median to high dimensions. It is known that it behaves well (i.e., it needs few samples) when
estimating the mean for various symmetric distributions [DG92, CGR16]. However, it is hard to compute
in general [JP78, AK95] and the many heuristics for computing it degrade badly in the quality of their
approximation as the dimension scales [CEM+ 93, Cha04, MS10]. The same issues plague estimators for
scale. The minimum volume ellipsoid [Rou85] is a natural generalization of the one-dimensional interquartile
range and is provably robust in high-dimensions, but is also hard to compute. And once again, heuristics
for computing it [VAR09, RS98] work poorly in high dimensions.
The fact that robustness in high dimensions seems to come at such a steep price has long been a point of
consternation within robust statistics. In a 1997 retrospective on the development of robust statistics [Hub97],
Huber laments:
“It is one thing to design a theoretical algorithm whose purpose is to prove [large fractions of
corruptions can be tolerated] and quite another thing to design a practical version that can be
used not merely on small, but also on medium sized regression problems, with a 2000 by 50 matrix
or so. This last requirement would seem to exclude all of the recently proposed [techniques].”
The goal of this paper is to answer Huber’s call to action and design estimators for both the mean and
covariance that are highly practical, provably robust, and work in high-dimensions. Such estimators make
the promise of robust statistics – estimators that work in high-dimensions and guarantee that their output
has not been heavily biased by some small set of noisy samples – much closer to a reality.
First, we make some remarks to dispel some common misconceptions. There has been a considerable
amount of recent work on robust principal component analysis, much of it making use of semidefinite programming. Some of these works can tolerate a constant fraction of corruptions [CLMW11]; however, they require
that the locations of the corruptions are evenly spread throughout the dataset so that no individual sample
is entirely corrupted. In contrast, the usual models in robust statistics are quite rigid in what they require
and they do this for good reason. A common scenario that is used to motivate robust statistical methods is
if two studies are mixed together, and one subpopulation does not fit the model. Then one wants estimators
that work without assuming anything at all about these outliers.
There have also been semidefinite programming methods proposed for robust principal component analysis with outliers [XCS10]. These methods assume that the uncorrupted matrix is rank r and that the
fraction of outliers is at most 1/r, which again degrades badly as the rank of the matrix increases. Moreover,
any method that uses semidefinite programming will have difficulty scaling to the sizes of the problems we
consider here. For sake of comparison – even with state-of-the-art interior point methods – it is not currently feasible to solve the types of semidefinite programs that have been proposed when the matrices have
dimension larger than a hundred.
1.1
Robustness in a Generative Model
Recent works in theoretical computer science have sought to circumvent the usual difficulties of designing
efficient and robust algorithms by instead working in a generative model. The starting point for our paper is
the work of [DKK+ 16] who gave an efficient algorithm for the problem of agnostically learning a Gaussian:
Given a polynomial number of samples from a high-dimensional Gaussian N = N(µ, Σ), where an
adversary has arbitrarily corrupted an ε-fraction, find a set of parameters N′ = N(µ̂, Σ̂) that satisfy
dTV(N, N′) ≤ Õ(ε).*
Total variation distance is the natural metric to use to measure closeness of the parameters, since a
(1 − ε)-fraction of the observed samples came from a Gaussian. [DKK+ 16] gave an algorithm for the above
* We use the notation Õ(·) to hide factors which are polylogarithmic in the argument - in particular, we note that this
bound does not depend on the dimension.
problem (note that the guarantees are dimension independent), whose running time and sample complexity
are polynomial in the dimension d and 1/ε. [LRV16] independently gave an algorithm for the unknown mean
case that achieves dTV(N, N′) ≤ Õ(ε√(log d)), and in the unknown covariance case achieves guarantees in
a weaker metric that is not affine invariant. A crucial feature is that both algorithms work even when the
moments of the underlying distribution satisfy certain conditions, and thus are not necessarily brittle to the
modeling assumption that the inliers come from a Gaussian distribution.
A more conceptual way to view such work is as a proof-of-concept that the Tukey median and minimum
volume ellipsoid can be computed efficiently in a natural family of distributional models. This follows because
not only would these be good estimates for the mean and covariance in the above model, but in fact any
estimates that are good must also be close to them. Thus, these works fit into the emerging research direction
of circumventing worst-case lower bounds by going beyond worst-case analysis.
Since the dissemination of the aforementioned works [DKK+ 16, LRV16], there has been a flurry of research
activity on computationally efficient robust estimation in a variety of high-dimensional settings [DKS16,
DKS17, CSV17, DKK+ 17, Li17, DBS17, BDLS17, SCV18, DKK+ 18], including studying graphical distributional models [DKS16], understanding the computation-robustness tradeoff for statistical query algorithms [DKS17], tolerating much more noise by allowing the algorithm to output a list of candidate hypotheses [CSV17], and developing robust algorithms under sparsity assumptions [Li17, DBS17, BDLS17], where
the number of samples is sublinear in the dimension.
1.2 Our Results
Our goal in this work is to show that high-dimensional robust estimation can be highly practical. However,
there are two major obstacles to achieving this. First, the sample complexity and running time of the algorithms in [DKK+ 16] are prohibitively large for high-dimensional applications. We simply would not be able to store as many samples as we would need to compute accurate estimates in high-dimensional applications.
Our first main contribution is to show essentially tight bounds on the sample complexity of the filtering
based algorithm of [DKK+ 16]. Roughly speaking, we accomplish this with a new definition of the good set
which plugs into the existing analysis in a straightforward manner and shows that it is possible to estimate the mean with Õ(d/ε²) samples (when the covariance is known) and the covariance with Õ(d²/ε²) samples. Both of these bounds are information-theoretically optimal, up to logarithmic factors.
Our second main contribution is to vastly improve the fraction of adversarial corruptions that can be
tolerated in applications. The fraction of errors that the algorithms of [DKK+ 16] can tolerate is indeed a
constant that is independent of the dimension, but it is very small both in theory and in practice. This is
due to the fact that many of the steps in the algorithm are overly conservative. In fact, we found that a
naive implementation of the algorithm did not remove any outliers in many realistic scenarios. We combat
this by giving new ways to empirically tune the threshold for where to remove points from the sample set.
These optimizations dramatically improve the empirical performance.
Finally, we show that the same bounds on the error guarantee continue to work even when the underlying
distribution is sub-Gaussian. This theoretically confirms that the robustness guarantees of such algorithms
are in fact not overly brittle to the distributional assumptions. In fact, the filtering algorithm of [DKK+ 16]
is easily shown to be robust under much weaker distributional assumptions, while retaining near-optimal
sample and error guarantees. As an example, we show that it yields a near sample-optimal efficient estimator
for robustly estimating the mean of a distribution, under the assumption that its covariance is bounded.
Even in this regime, the filtering algorithm guarantees optimal error, up to a constant factor. Furthermore, we empirically corroborate this finding by showing that the algorithm works well on real-world data, as we describe below.
Now we come to the task of testing out our algorithms. To the best of our knowledge, there have been no
experimental evaluations of the performance of the myriad of approaches to robust estimation. It remains
mostly a mystery which ones perform well in high-dimensions, and which do not. To test out our algorithms,
we design a synthetic experiment where a (1 − ε)-fraction of the samples come from a Gaussian and the
rest are noise and sampled from another distribution (in many cases, Bernoulli). This gives us a baseline
to compare how well various algorithms recover µ and Σ, and how their performance degrades based on the
dimension. Our plots show a predictable and yet striking phenomenon: All earlier approaches have error
rates that scale polynomially with the dimension and ours is a constant that is almost indistinguishable from
the error that comes from sample noise alone. Moreover, our algorithms are able to scale to hundreds of
dimensions.
But are algorithms for agnostically learning a Gaussian unduly sensitive to the distributional assumptions
they make? We are able to give an intriguing visual demonstration of our techniques on real data. The
famous study of [NJB+ 08] showed that performing principal component analysis on a matrix of genetic
data recovers a map of Europe. More precisely, the top two singular vectors define a projection into the
plane and when the groups of individuals are color-coded with where they are from, we recover familiar country boundaries that correspond to the map of Europe. The conclusion from their study was that genes
mirror geography. Given that one of the most important applications of robust estimation ought to be in
exploratory data analysis, we ask: To what extent can we recover the map of Europe in the presence of noise?
We show that when a small number of corrupted samples are added to the dataset, the picture becomes
entirely distorted (and this continues to hold even for many other methods that have been proposed). In
contrast, when we run our algorithm, we are able to once again recover the map of Europe. Thus, even
when some fraction of the data has been corrupted (e.g., medical studies were pooled together even though
the subpopulations studied were different), it is still possible to perform principal component analysis and
recover qualitatively similar conclusions as if there were no noise at all!
2 Formal Framework
Notation. For a vector v, we will let ‖v‖₂ denote its Euclidean norm. If M is a matrix, we will let ‖M‖₂ denote its spectral norm and ‖M‖_F denote its Frobenius norm. We will write X ∈_u S to denote that X is drawn from the empirical distribution defined by S.
Robust Estimation. We consider the following powerful model of robust estimation that generalizes many
other existing models, including Huber’s contamination model:
Definition 2.1. Given ε > 0 and a distribution family D, the adversary operates as follows: The algorithm specifies some number of samples m. The adversary generates m samples X1, X2, . . . , Xm from some (unknown) D ∈ D. It then draws m′ from an appropriate distribution. This distribution is allowed to depend on X1, X2, . . . , Xm, but when marginalized over the m samples satisfies m′ ∼ Bin(ε, m). The adversary is allowed to inspect the samples, remove m′ of them, and replace them with arbitrary points. The set of m points is then given to the algorithm.
In summary, the adversary is allowed to inspect the samples before corrupting them, both by adding
corrupted points and deleting uncorrupted points. In contrast, in Huber’s model the adversary is oblivious
to the samples and is only allowed to add corrupted points.
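For intuition, the corruption process can be simulated in a simplified form. The sketch below (all names are hypothetical) replaces a Bin(m, ε)-distributed number of samples with a fixed point; a real adversary may additionally inspect the samples and choose replacements adaptively.

```python
import numpy as np

def eps_corrupt(samples, eps, noise_point, rng=None):
    """Replace a Bin(m, eps)-distributed number of samples with an
    arbitrary point, mimicking the adversary of Definition 2.1.
    Illustrative only: the real adversary may choose its
    replacements adaptively after inspecting the samples."""
    rng = np.random.default_rng(rng)
    X = samples.copy()
    m = len(X)
    m_prime = rng.binomial(m, eps)                # number of corrupted points
    idx = rng.choice(m, size=m_prime, replace=False)
    X[idx] = noise_point                          # adversary's replacements
    return X

# usage: corrupt roughly 10% of Gaussian samples with a far-away point
rng = np.random.default_rng(0)
clean = rng.standard_normal((1000, 5))
dirty = eps_corrupt(clean, 0.1, noise_point=10.0 * np.ones(5), rng=1)
```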
We remark that there are no computational restrictions on the adversary. The goal is to return the parameters of a distribution D̂ in D that are close to the true parameters in an appropriate metric. For the case of the mean, our metric will be the Euclidean distance. For the covariance, we will use the Mahalanobis distance, i.e., ‖Σ^{-1/2} Σ̂ Σ^{-1/2} − I‖_F. This is a strong affine-invariant distance that implies corresponding bounds in total variation distance.
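As a quick illustration, this Mahalanobis error can be computed with a few lines of linear algebra (a sketch; `mahalanobis_error` is a hypothetical helper name):

```python
import numpy as np

def mahalanobis_error(sigma_true, sigma_hat):
    """Compute ||Sigma^{-1/2} Sigma_hat Sigma^{-1/2} - I||_F,
    the affine-invariant error metric for covariance estimation."""
    # inverse square root of the (symmetric PSD) true covariance
    w, V = np.linalg.eigh(sigma_true)
    inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    d = sigma_true.shape[0]
    # np.linalg.norm on a matrix defaults to the Frobenius norm
    return np.linalg.norm(inv_sqrt @ sigma_hat @ inv_sqrt - np.eye(d))

S = np.array([[2.0, 0.5], [0.5, 1.0]])
print(mahalanobis_error(S, S))  # ~0: a perfect estimate has zero error
```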
We will use the following terminology:
Definition 2.2. We say that a set of samples is ε-corrupted if it is generated by the process described in
Definition 2.1.
3 Nearly Sample-Optimal Efficient Robust Learning
In this section, we present near sample-optimal efficient robust estimators for the mean and the covariance
of high-dimensional distributions under various structural assumptions of varying strength. Our estimators
rely on the filtering technique introduced in [DKK+ 16].
We note that [DKK+ 16] gave two algorithmic techniques: the first one was a spectral technique to
iteratively remove outliers from the dataset (filtering), and the second one was a soft-outlier removal method
relying on convex programming. The filtering technique seemed amenable to practical implementation (as
it only uses simple eigenvalue computations), but the corresponding sample complexity bounds given in
[DKK+ 16] are polynomially worse than the information-theoretic minimum. On the other hand, the convex
programming technique of [DKK+ 16] achieved better sample complexity bounds (e.g., near sample-optimal
for robust mean estimation), but relied on the ellipsoid method, which seemed to preclude a practically
efficient implementation.
In this work, we achieve the best of both worlds: we provide a more careful analysis of the filter technique that yields sample-optimal bounds (up to logarithmic factors) for both the mean and the covariance.
Moreover, we show that the filtering technique easily extends to much weaker distributional assumptions
(e.g., under bounded second moments). Roughly speaking, the filtering technique follows a general iterative
recipe: (1) via spectral methods, find some univariate test which is violated by the corrupted points, (2)
find some concrete tail bound violated by the corrupted set of points, and (3) throw away all points which
violate this tail bound.
We start with sub-gaussian distributions. Recall that if P is sub-gaussian on R^d with mean vector µ and parameter ν > 0, then for any unit vector v ∈ R^d we have that Pr_{X∼P}[|v · (X − µ)| ≥ t] ≤ exp(−t²/(2ν)).
Theorem 3.1. Let G be a sub-gaussian distribution on R^d with parameter ν = Θ(1), mean µ_G, covariance matrix I, and ε > 0. Let S be an ε-corrupted set of samples from G of size Ω((d/ε²) poly log(d/ε)). There exists an efficient algorithm that, on input S and ε > 0, returns a mean vector µ̂ so that with probability at least 9/10 we have ‖µ̂ − µ_G‖₂ = O(ε√(log(1/ε))).
[DKK+ 16] gave algorithms for robustly estimating the mean of a Gaussian distribution with known
covariance and for robustly estimating the mean of a binary product distribution. The main motivation for
considering these specific distribution families is that robustly estimating the mean within Euclidean distance
immediately implies total variation distance bounds for these families. The above theorem establishes that
these guarantees hold in a more general setting with near sample-optimal bounds. Under a bounded second
moment assumption, we show:
Theorem 3.2. Let P be a distribution on R^d with unknown mean vector µ_P and unknown covariance matrix Σ_P ⪯ σ²I. Let S be an ε-corrupted set of samples from P of size Θ((d/ε) log d). There exists an efficient algorithm that, on input S and ε > 0, with probability 9/10 outputs µ̂ with ‖µ̂ − µ_P‖₂ ≤ O(√ε · σ).
A similar result on mean estimation under bounded second moments was concurrently shown in [SCV18].
The sample size above is optimal, up to a logarithmic factor, and the error guarantee is easily seen to be
the best possible up to a constant factor. The main difference between the filtering algorithm establishing
the above theorem and the filtering algorithm for the sub-gaussian case is how we choose the threshold for
the filter. Instead of looking for a violation of a concentration inequality, here we will choose a threshold at
random. In this case, randomly choosing a threshold weighted towards higher thresholds suffices to throw
out more corrupted samples than uncorrupted samples in expectation. Although it is possible to reject many
good samples this way, we show that the algorithm still only rejects a total of O(ε) samples with high
probability.
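A minimal sketch of one such randomized filtering step, under the simplifying assumption that drawing T with Pr[T ≤ t] proportional to t² implements the weighting toward higher thresholds (the paper's exact weighting may differ):

```python
import numpy as np

def random_threshold_filter_step(X, rng=None):
    """One filtering step with a randomly chosen threshold (sketch).
    T is drawn with Pr[T <= t] proportional to t^2, biasing it toward
    large values, so that in expectation more corrupted than
    uncorrupted points fall beyond T along the top direction."""
    rng = np.random.default_rng(rng)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    _, V = np.linalg.eigh(cov)
    v = V[:, -1]                               # top eigenvector
    dev = np.abs((X - mu) @ v)                 # deviations along v
    T = dev.max() * np.sqrt(rng.uniform())     # Pr[T <= t] proportional to t^2
    return X[dev <= T]

# usage: one step keeps only the points within the random cutoff
X = np.random.default_rng(0).standard_normal((500, 4))
filtered = random_threshold_filter_step(X, rng=0)
```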
Finally, for robustly estimating the covariance of a Gaussian distribution, we have:
Theorem 3.3. Let G ∼ N(0, Σ) be a Gaussian in d dimensions, and let ε > 0. Let S be an ε-corrupted set of samples from G of size Ω((d²/ε²) poly log(d/ε)). There exists an efficient algorithm that, given S and ε, returns the parameters of a Gaussian distribution G′ ∼ N(0, Σ̂) so that with probability at least 9/10, it holds that ‖I − Σ^{-1/2} Σ̂ Σ^{-1/2}‖_F = O(ε log(1/ε)).
We now provide a high-level description of the main ingredient which yields these improved sample
complexity bounds. The initial analysis of [DKK+ 16] established sample complexity bounds which were suboptimal by polynomial factors because it insisted that the set of good samples (i.e., before the corruption)
satisfied very tight tail bounds. To some degree such bounds are necessary, as when we perform our filtering
procedure, we need to ensure that not too many good samples are thrown away. However, the old analysis
required that fairly strong tail bounds hold uniformly. The idea for the improvement is as follows: If the
errors are sufficient to cause the variance of some polynomial p (linear in the unknown mean case or quadratic
in the unknown covariance case) to increase by more than ε, it must be the case that for some T , roughly
an ε/T² fraction of samples are error points with |p(x)| > T. As long as we can ensure that less than an ε/T² fraction of our good sample points have |p(x)| > T, this will suffice for our filtering procedure to work. For small values of T, these are much weaker tail bounds than were needed previously and can be achieved
with a smaller number of samples. For large values of T , these tail bounds are comparable to those used
in previous work [DKK+ 16], but in such cases we can take advantage of the fact that |p(G)| > T only
with very small probability, again allowing us to reduce the sample complexity. The details are deferred to
Appendix A.
4 Filtering
We now describe the filtering technique more rigorously. We also describe some additional heuristics we
found useful in practice.
4.1 Robust Mean Estimation
We first consider mean estimation. The algorithms which achieve Theorems 3.1 and 3.2 both follow the
general recipe in Algorithm 1. We must specify three parameter functions:
• Thres(ε) is a threshold function—we terminate if the covariance has spectral norm bounded by Thres(ε).
• Tail(T, d, ε, δ, τ ) is a univariate tail bound, which would only be violated by a τ-fraction of the points if they were uncorrupted, but is violated by many more of the current set of points.
• δ(ε, s) is a slack function, which we require for technical reasons.
Given these objects, our filter is fairly easy to state: first, we compute the empirical covariance. Then, we
check if the spectral norm of the empirical covariance exceeds Thres(ε). If it does not, we output the empirical
mean with the current set of data points. Otherwise, we project onto the top eigenvector of the empirical
covariance, and throw away all points which violate Tail(T, d, ε, δ, τ ), for some choice of slack function δ.
Algorithm 1 Filter-based algorithm template for robust mean estimation
1: Input: An ε-corrupted set of samples S′, Thres(ε), Tail(T, d, ε, δ, τ), δ(ε, s)
2: Compute the sample mean µ^{S′} = E_{X ∈_u S′}[X]
3: Compute the sample covariance matrix Σ
4: Compute approximations for the largest absolute eigenvalue of Σ, λ* := ‖Σ‖₂, and the associated unit eigenvector v*.
5: if ‖Σ‖₂ ≤ Thres(ε) then
6:     return µ^{S′}.
7: Let δ = δ(ε, ‖Σ‖₂).
8: Find T > 0 such that Pr_{X ∈_u S′}[ |v* · (X − µ^{S′})| > T + δ ] > Tail(T, d, ε, δ, τ).
9: return {x ∈ S′ : |v* · (x − µ^{S′})| ≤ T + δ}.
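A minimal executable sketch of this template, with a simplified threshold and tail rule in place of the paper's exact Thres and Tail functions (constants here are illustrative assumptions, not the paper's):

```python
import numpy as np

def filter_mean(X, eps, thresh=2.0, max_iter=50):
    """Filter-based robust mean estimation (sketch of Algorithm 1 with
    a simplified threshold/tail rule). Repeat: find the top eigenvector
    of the empirical covariance; if its eigenvalue is small, outliers
    cannot shift the mean much, so return the empirical mean; otherwise
    discard the points with the largest deviations along it."""
    X = np.asarray(X, dtype=float).copy()
    for _ in range(max_iter):
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        w, V = np.linalg.eigh(cov)
        lam, v = w[-1], V[:, -1]               # top eigenpair
        if lam <= thresh:                      # simplified Thres(eps)
            return mu
        dev = np.abs((X - mu) @ v)             # deviations along v
        T = np.quantile(dev, 1.0 - eps / 2)    # simplified tail cutoff
        keep = dev < T
        if not keep.any() or keep.all():
            return mu
        X = X[keep]
    return X.mean(axis=0)

# usage: 10% of points sit far from the true mean (zero)
rng = np.random.default_rng(0)
good = rng.standard_normal((900, 10))
bad = 5.0 + 0.1 * rng.standard_normal((100, 10))
X = np.vstack([good, bad])
print(np.linalg.norm(X.mean(axis=0)))            # raw mean: error ~1.6
print(np.linalg.norm(filter_mean(X, eps=0.1)))   # filtered: much smaller
```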
Sub-gaussian case. To concretely instantiate this algorithm for the sub-gaussian case, we take Thres(ε) = O(ε√(log(1/ε))), δ(ε, s) = 3√(ε(s − 1)), and

Tail(T, d, ε, δ, τ) = 8 exp(−T²/(2ν)) + 8ε/(T² log(d log(d/ετ))),
where ν is the subgaussian parameter. See Section A.1 for details.
Second moment case. To concretely instantiate this algorithm for the second moment case, we take
Thres(ε) = 9, δ = 0, and we take Tail to be a random rescaling of the largest deviation in the data set, in
the direction v ∗ . See Section A.2 for details.
4.2 Robust Covariance Estimation
Our algorithm for robust covariance estimation follows the exact recipe outlined above, with one key difference: we check for deviations in the empirical fourth moment tensor. Intuitively, just as in the robust mean setting we used degree-2 information to detect outliers for the mean (the degree-1 moment), here we use degree-4 information to detect outliers for the covariance (the degree-2 moment).
More concretely, this corresponds to finding a normalized degree-2 polynomial whose empirical variance
is too large. By then filtering along this polynomial, with an appropriate choice of Thres(ε), δ(ε, s), and Tail,
we achieve the desired bounds. See Section A.3 for the formal pseudocode and more details.
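The degree-2 test can be sketched by working with flattened outer products x xᵀ, whose empirical covariance carries the fourth-moment information (an illustrative helper, not the paper's exact procedure; note the d² × d² matrix, which is why memory becomes the bottleneck in high dimensions):

```python
import numpy as np

def top_degree2_direction(X):
    """Find the degree-2 polynomial (a symmetric matrix P) whose
    empirical variance over the points is largest -- the covariance
    analogue of the top-eigenvector test (sketch). Works with the
    flattened outer products x x^T, so it builds a d^2 x d^2 matrix."""
    n, d = X.shape
    Z = np.einsum('ni,nj->nij', X, X).reshape(n, d * d)  # x x^T flattened
    Zc = Z - Z.mean(axis=0)
    cov4 = Zc.T @ Zc / n                    # empirical 4th-moment info
    _, V = np.linalg.eigh(cov4)
    P = V[:, -1].reshape(d, d)              # top direction as a matrix
    P = 0.5 * (P + P.T)                     # symmetrize
    scores = np.einsum('ni,ij,nj->n', X, P, X)  # p(x) = x^T P x per point
    return P, scores

# usage: the scores can then be filtered just like the 1-D projections
X = np.random.default_rng(1).standard_normal((200, 3))
P, scores = top_degree2_direction(X)
```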
4.3 Better Univariate Tests
In the algorithms described above for robust mean estimation, after projecting onto one dimension, we center
the points at the empirical mean along this direction. This is theoretically sufficient; however, it introduces additional constant factors, since the empirical mean along this direction may be corrupted. Instead, one can
use a robust estimate for the mean in one direction. Namely, it is well known that the median is a provably
robust estimator for the mean for symmetric distributions [HR09, HRRS86], and under certain models it is
in fact optimal in terms of its resilience to noise [DKW56, Mas90, Che98, DK14, DKK+ 17]. By centering
the points at the median instead of the mean, we are able to achieve better error in practice.
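A one-dimensional illustration of why the median is the safer centering choice:

```python
import numpy as np

# One far outlier pulls the empirical mean far from the true center,
# while the median barely moves; this is why we center the projected
# points at the median rather than the mean.
rng = np.random.default_rng(0)
x = np.concatenate([rng.standard_normal(99), [1000.0]])
print(abs(x.mean()))      # ~10: dragged by the single outlier
print(abs(np.median(x)))  # ~0: essentially unaffected
```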
4.4 Adaptive Tail Bounding
In our empirical evaluation, we found that it was important to find an appropriate choice of Tail, to achieve
good error rates, especially for robust covariance estimation. Concretely, in this setting, our tail bound is
given by
Tail(T, d, ε, δ, τ ) = C1 exp(−C2 T ) + Tail2 (T, d, ε, δ, τ ) ,
for some function Tail2 , and constants C1 , C2 . We found that for reasonable settings, the term that dominated
was always the first term on the RHS, and that Tail2 is less significant. Thus, we focused on optimizing the
first term.
We found that depending on the setting, it was useful to change the constant C2 . In particular, in low
dimensions, we could be more stringent, and enforce a stronger tail bound (which corresponds to a higher
C2 ), but in higher dimensions, we must be more lax with the tail bound. To do this in a principled manner,
we introduced a heuristic we call adaptive tail bounding. Our goal is to find a choice of C2 which throws away
roughly an ε-fraction of points. The heuristic is fairly simple: we start with some initial guess for C2 . We
then run our filter with this C2 . If we throw away too many data points, we increase our C2 , and retry. If we
throw away too few, then we decrease our C2 and retry. Since increasing C2 strictly decreases the number
of points thrown away, and vice versa, we binary search over our choice of C2 until we reach something close
to our target accuracy. In our current implementation, we stop when the fraction of points we throw away
is between ε/2 and 3ε/2, or if we’ve binary searched for too long. We found that this heuristic drastically
improves our accuracy, and allows our algorithm to scale fairly smoothly from low to high dimension.
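The heuristic can be sketched as a binary search; for simplicity this version searches directly over the cutoff T rather than the constant C2, which is monotonically equivalent (a stricter tail bound corresponds to a lower cutoff):

```python
import numpy as np

def adaptive_threshold(dev, eps, max_steps=30):
    """Binary-search a rejection cutoff T so that roughly an
    eps-fraction of points would be discarded (adaptive tail
    bounding, simplified). `dev` holds the deviations along the
    top direction; raising T discards fewer points."""
    lo, hi = 0.0, float(dev.max())
    T = hi
    for _ in range(max_steps):
        T = 0.5 * (lo + hi)
        frac = np.mean(dev > T)       # fraction we would throw away
        if frac > 1.5 * eps:          # too many removed: raise the cutoff
            lo = T
        elif frac < 0.5 * eps:        # too few removed: lower the cutoff
            hi = T
        else:
            break                     # within [eps/2, 3*eps/2]: accept
    return T

# usage: flag roughly 10% of the most deviant points
dev = np.abs(np.random.default_rng(0).standard_normal(2000))
T = adaptive_threshold(dev, eps=0.1)
print(np.mean(dev > T))  # roughly 0.1
```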
5 Experiments
We performed an empirical evaluation of the above algorithms on synthetic and real data sets with and
without synthetic noise. All experiments were done on a laptop computer with a 2.7 GHz Intel Core i5
CPU and 8 GB of RAM. The focus of this evaluation was on statistical accuracy, not time efficiency. In this
measure, our algorithm performs the best of all algorithms we tried. In all synthetic trials, our algorithm
consistently had the smallest error. In fact, in some of the synthetic benchmarks, our error was orders of magnitude better than that of any other algorithm. In the semi-synthetic benchmark, our algorithm also (arguably)
performs the best, though there is no way to tell for sure, since there is no ground truth. We also note that
despite not optimizing our code for runtime, the runtime of our algorithm is always comparable, and in
many cases, better than the alternatives which provided comparable error. Code of our implementation is
available at https://github.com/hoonose/robust-filter.
Figure 1: Experiments with synthetic data for robust mean estimation: error is reported against dimension
(lower is better). The error is excess `2 error over the sample mean without noise (the benchmark). We
plot performance of our algorithm, LRVMean, empirical mean with noise, pruning, RANSAC, and geometric
median. On the left we report the errors achieved by all algorithms; however the latter four have much
larger error than our algorithm or LRVMean. On the right, we restrict our attention to only our algorithm
and LRVMean. Our algorithm has better error than all other algorithms.
Figure 2: Experiments with synthetic data for robust covariance estimation: error is reported against dimension (lower is better). The error is excess Mahalanobis error over the sample covariance without noise (the
benchmark). We plot performance of our algorithm, LRVCov, empirical covariance with noise, pruning, and
RANSAC. We report two settings: one where the true covariance is isotropic (left column), and one where
the true covariance is very skewed (right column). In both, the latter three algorithms have substantially
larger error than ours or LRVCov. On the bottom, we restrict our attention to our algorithm and LRVCov.
The error achieved by LRVCov is quite good, but ours is better. In particular, our excess error is 4 orders of
magnitude smaller than LRVCov’s in high dimensions.
5.1 Synthetic Data
Experiments with synthetic data allow us to verify the error guarantees and the sample complexity rates
proven in Section 3 for unknown mean and unknown covariance. In both cases, the experiments validate the
accuracy and usefulness of our algorithm, almost exactly matching the best rate without noise.
Unknown mean. The results of our synthetic mean experiment are shown in Figure 1. In the synthetic
mean experiment, we set ε = 0.1, and for dimension d = [100, 150, . . . , 400], we generate n = 10d/ε² samples, where a (1 − ε)-fraction come from N(µ, I), and an ε-fraction come from a noise distribution. Our goal is to produce an estimator which minimizes the ℓ₂ error the estimator has to the truth. As a baseline, we
is to produce an estimator which minimizes the `2 error the estimator has to the truth. As a baseline, we
compute the error that is achieved by only the uncorrupted sample points. This error will be used as the
gold standard for comparison, since in the presence of error, this is roughly the best one could do even if all
the noise points were identified exactly.†
On this data, we compared the performance of our Filter algorithm to that of (1) the empirical mean of
all the points, (2) a trivial pruning procedure, (3) the geometric median of the data, (4) a RANSAC-based
mean estimation algorithm, and (5) a recently proposed robust estimator for the mean due to [LRV16], which
we will call LRVMean. For (5), we use the implementation available in their Github.‡ In Figure 1, the x-axis
indicates the dimension of the experiment, and the y-axis measures the `2 error of our estimated mean minus
the `2 error of the empirical mean of the true samples from the Gaussian, i.e., the excess error induced over
the sampling error.
We tried various noise distributions, and found that the same qualitative pattern arose for all of them.
In the reported experiment, our noise distribution was a mixture of two binary product distributions, where
one had a couple of large coordinates (see Section B.1 for a detailed description). For all (nontrivial)
error distributions we tried, we observed that indeed the empirical mean, pruning, geometric median, and
RANSAC all have error which diverges as d grows, as the theory predicts. On the other hand, both our
algorithm and LRVMean have markedly smaller error as a function of dimension. Indeed, our algorithm’s
error is almost identical to that of the empirical mean of the uncorrupted sample points.
Unknown covariance. The results of our synthetic covariance experiment are shown in Figure 2. Our
setup is similar to that for the synthetic mean. Since both our algorithm and LRVCov require access to
fourth moment objects, we ran into issues with limited memory on machines. Thus, we could not perform
experiments at as high a dimension as for the unknown mean setting, and we could not use as many
samples. We set ε = 0.05, and for dimension d = [10, 20, . . . , 100], we generate n = 0.5d/ε² samples, where a (1 − ε)-fraction come from N(0, Σ), and an ε-fraction come from a noise distribution. We measure distance in the natural affine-invariant way, namely, the Mahalanobis distance induced by Σ to the identity: err(Σ̂) = ‖Σ^{-1/2} Σ̂ Σ^{-1/2} − I‖_F. As explained above, this is the right affine-invariant metric for this problem.
As before, we use the empirical error of only the uncorrupted data points as a benchmark.
On this corrupted data, we compared the performance of our Filter algorithm to that of (1) the empirical
covariance of all the points, (2) a trivial pruning procedure, (3) a RANSAC-based minimal volume ellipsoid (MVE) algorithm, and (4) a recently proposed robust estimator for the covariance due to [LRV16], which we will call LRVCov. For (4), we again obtained the implementation from their Github repository.
We tried various choices of Σ and noise distribution. Figure 2 shows two choices of Σ and noise. Again, the
x-axis indicates the dimension of the experiment and the y-axis indicates the estimator’s excess Mahalanobis
error over the sampling error. In the left figure, we set Σ = I, and our noise points are simply all located at
the all-zeros vector. In the right figure, we set Σ = I + 10e1 eT1 , where e1 is the first basis vector, and our
noise distribution is a somewhat more complicated distribution, which is similarly spiked, but in a different,
random, direction. We formally define this distribution in Section B.1. For all choices of Σ and noise we
tried, the qualitative behavior of our algorithm and LRVCov was unchanged. Namely, we seem to match the
empirical error without noise up to a very small slack, for all dimensions. On the other hand, the performance of the empirical covariance, pruning, and RANSAC varies widely with the noise distribution. The performance of all
these algorithms degrades substantially with dimension, and their error gets worse as we increase the skew
of the underlying data. The performance of LRVCov is the most similar to ours, but again is worse by a large
† We note that it is possible that an estimator may achieve slightly better error than this baseline.
‡ https://github.com/kal2000/AgnosticMeanAndCovarianceCode
constant factor. In particular, our excess risk was on the order of 10⁻⁴ for large d, for both experiments, whereas the excess risk achieved by LRVCov was in all cases a constant between 0.1 and 2.
Discussion. These experiments demonstrate that our statistical guarantees are in fact quite strong. In
particular, since our excess error is almost zero (and orders of magnitude smaller than other approaches),
this suggests that our sample complexity is indeed close to optimal, since we match the rate without noise,
and that the constants and logarithmic factors in the theoretical recovery guarantee are often small or
non-existent.
[Figure 3 panels: Original Data (the data projected onto the top two directions of the original data set without noise); Pruning Projection (the data projected onto the top two directions of the noisy data set after pruning); Filter Output (the filtered set of points projected onto the top two directions returned by the filter); Filter Projection (the data projected onto the top two directions returned by the filter).]
Figure 3: Experiments with semi-synthetic data: given the real genetic data from [NJB+ 08], projected down
to 20-dimensions, and with added noise. The colors indicate the country of origin of the person, and match
the colors of the countries in the map of Europe in the center. Black points are added noise. The top left
plot is the original plot from [NJB+ 08]. We (mostly) recover Europe in the presence of noise whereas naive
methods do not.
5.2 Semi-synthetic Data
To demonstrate the efficacy of our method on real data, we revisit the famous study of [NJB+ 08]. In
this study, the authors investigated data collected as part of the Population Reference Sample (POPRES)
project. This dataset consists of the genotyping of thousands of individuals using the Affymetrix 500K single
nucleotide polymorphism (SNP) chip. The authors pruned the dataset to obtain the genetic data of over
1387 European individuals, annotated by their country of origin. Using principal components analysis, they
produce a two-dimensional summary of the genetic variation, which bears a striking resemblance to the map
of Europe.
Our experimental setup is as follows. While the original dataset is very high dimensional, we use a 20-dimensional version of the dataset as found in the authors’ GitHub§. We first randomly rotate the data, since the 20-dimensional data was diagonalized and the high-dimensional data does not follow such structure. We then add an additional ε/(1 − ε) fraction of points (so that they make up an ε-fraction of the final points).
These added points were discrete points, following a simple product distribution (see Section B.1 for full
details). We used a number of methods to obtain a covariance matrix for this dataset, and we projected the
data onto the top two singular vectors of this matrix. In Figure 3, we show the results when we compare
our techniques to pruning. In particular, our output was able to more or less reproduce the map of Europe,
whereas pruning fails to. In Section B.2, we also compare our result with a number of other techniques,
including those we tested against in the unknown covariance experiments, and other robust PCA techniques.
The only alternative algorithm which was able to produce meaningful output was LRVCov, which produced
output that was similar to ours, but which produced a map which was somewhat more skewed. We believe
that our algorithm produces the best picture.
In Figure 3, we also display the actual points which were output by our algorithm’s Filter. While it
manages to remove most of the noise points, it also seems to remove some of the true data points, particularly
those from Eastern Europe and Turkey. We attribute this to a lack of samples from these regions, and thus
one could consider them as outliers to a dataset consisting of Western European individuals. For instance,
Turkey had 4 data points, so it seems quite reasonable that any robust algorithm would naturally consider
these points outliers.
Discussion. We view our experiments as a proof-of-concept demonstration that our techniques can be
useful in real world exploratory data analysis tasks, particularly those in high-dimensions. Our experiments
reveal that a minimal amount of noise can completely disrupt a data analyst’s ability to notice an interesting
phenomenon, thus limiting us to only very well-curated data sets. But with robust methods, this noise does
not interfere with scientific discovery, and we can still recover interesting patterns which otherwise would
have been obscured by noise.
Acknowledgments
We would like to thank Simon Du and Lili Su for helpful comments on a previous version of this work.
References
[AK95] E. Amaldi and V. Kann. The complexity and approximability of finding maximum feasible subsystems of linear relations. Theoretical Computer Science, 147:181–210, 1995.
[BDLS17] S. Balakrishnan, S. S. Du, J. Li, and A. Singh. Computationally efficient robust sparse estimation in high dimensions. In Proceedings of the 30th Annual Conference on Learning Theory, COLT ’17, 2017.
[CEM+93] K. L. Clarkson, D. Eppstein, G. L. Miller, C. Sturtivant, and S.-H. Teng. Approximating center points with iterated Radon points. In Proceedings of the Ninth Annual Symposium on Computational Geometry, SCG ’93, pages 91–98, New York, NY, USA, 1993. ACM.
[CGR16] M. Chen, C. Gao, and Z. Ren. A general decision theory for Huber’s ε-contamination model. Electronic Journal of Statistics, 10(2):3752–3774, 2016.
[Cha04] T. M. Chan. An optimal randomized algorithm for maximum Tukey depth. In Proceedings of the Fifteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 430–436, 2004.
[Che98] Z. Chen. A note on bias robustness of the median. Statistics & Probability Letters, 38(4):363–368, 1998.

§ https://github.com/NovembreLab/Novembre_etal_2008_misc

[CLMW11] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? J. ACM, 58(3):11, 2011.
[CSV17] M. Charikar, J. Steinhardt, and G. Valiant. Learning from untrusted data. In Proceedings of STOC ’17, 2017.
[DBS17] S. S. Du, S. Balakrishnan, and A. Singh. Computationally efficient robust estimation of sparse functionals. In Proceedings of COLT ’17, 2017.
[DG92] D. L. Donoho and M. Gasko. Breakdown properties of location estimates based on halfspace depth and projected outlyingness. Ann. Statist., 20(4):1803–1827, 1992.
[DK14] C. Daskalakis and G. Kamath. Faster and sample near-optimal algorithms for proper learning mixtures of Gaussians. In Proceedings of The 27th Conference on Learning Theory, COLT 2014, pages 1183–1213, 2014.
[DKK+16] I. Diakonikolas, G. Kamath, D. M. Kane, J. Li, A. Moitra, and A. Stewart. Robust estimators in high dimensions without the computational intractability. In Proceedings of FOCS ’16, 2016. Full version available at https://arxiv.org/pdf/1604.06443.pdf.
[DKK+17] I. Diakonikolas, G. Kamath, D. M. Kane, J. Li, A. Moitra, and A. Stewart. Being robust (in high dimensions) can be practical. In Proceedings of the 34th International Conference on Machine Learning, ICML ’17, pages 999–1008. JMLR, Inc., 2017. Conference version available at http://proceedings.mlr.press/v70/diakonikolas17a.html.
[DKK+18] I. Diakonikolas, G. Kamath, D. M. Kane, J. Li, A. Moitra, and A. Stewart. Robustly learning a Gaussian: Getting optimal error, efficiently. In Proceedings of the 29th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’18, Philadelphia, PA, USA, 2018. SIAM.
[DKS16] I. Diakonikolas, D. M. Kane, and A. Stewart. Robust learning of fixed-structure Bayesian networks. CoRR, abs/1606.07384, 2016.
[DKS17] I. Diakonikolas, D. M. Kane, and A. Stewart. Statistical query lower bounds for robust estimation of high-dimensional Gaussians and Gaussian mixtures. In Proceedings of the 58th Annual IEEE Symposium on Foundations of Computer Science, FOCS ’17, Washington, DC, USA, 2017. IEEE Computer Society.
[DKW56] A. Dvoretzky, J. Kiefer, and J. Wolfowitz. Asymptotic minimax character of the sample distribution function and of the classical multinomial estimator. Ann. Mathematical Statistics, 27(3):642–669, 1956.
[DL01] L. Devroye and G. Lugosi. Combinatorial methods in density estimation. Springer Series in Statistics, Springer, 2001.
[HR09] P. J. Huber and E. M. Ronchetti. Robust statistics. Wiley New York, 2009.
[HRRS86] F. R. Hampel, E. M. Ronchetti, P. J. Rousseeuw, and W. A. Stahel. Robust statistics. The approach based on influence functions. Wiley New York, 1986.
[Hub64] P. J. Huber. Robust estimation of a location parameter. The Annals of Mathematical Statistics, 35(1):73–101, 1964.
[Hub97] P. J. Huber. Robustness: Where are we now? Lecture Notes-Monograph Series, pages 487–498, 1997.
[JP78] D. S. Johnson and F. P. Preparata. The densest hemisphere problem. Theoretical Computer Science, 6:93–107, 1978.
[Li17] J. Li. Robust sparse estimation tasks in high dimensions. In Proceedings of COLT ’17, 2017.
[LRV16] K. A. Lai, A. B. Rao, and S. Vempala. Agnostic estimation of mean and covariance. In Proceedings of FOCS ’16, 2016.
[Mas90] P. Massart. The tight constant in the Dvoretzky-Kiefer-Wolfowitz inequality. Annals of Probability, 18(3):1269–1283, 1990.
[MS10] G. L. Miller and D. Sheehy. Approximate centerpoints with proofs. Comput. Geom., 43(8):647–654, 2010.
[NJB+08] J. Novembre, T. Johnson, K. Bryc, Z. Kutalik, A. R. Boyko, A. Auton, A. Indap, K. S. King, S. Bergmann, M. R. Nelson, et al. Genes mirror geography within Europe. Nature, 456(7218):98–101, 2008.
[Rou85] P. Rousseeuw. Multivariate estimation with high breakdown point. Mathematical Statistics and Applications, pages 283–297, 1985.
[RS98] P. J. Rousseeuw and A. Struyf. Computing location depth and regression depth in higher dimensions. Statistics and Computing, 8(3):193–203, 1998.
[SCV18] J. Steinhardt, M. Charikar, and G. Valiant. Resilience: A criterion for learning in the presence of arbitrary outliers. In Proceedings of the 9th Conference on Innovations in Theoretical Computer Science, 2018. To appear.
[T+15] J. A. Tropp et al. An introduction to matrix concentration inequalities. Foundations and Trends in Machine Learning, 8(1-2):1–230, 2015.
[Tuk60] J. W. Tukey. A survey of sampling from contaminated distributions. Contributions to Probability and Statistics, 2:448–485, 1960.
[VAR09] S. Van Aelst and P. Rousseeuw. Minimum volume ellipsoid. Wiley Interdisciplinary Reviews: Computational Statistics, 1(1):71–82, 2009.
[Ver10] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices, 2010.
[XCS10] H. Xu, C. Caramanis, and S. Sanghavi. Robust PCA via outlier pursuit. In Advances in Neural Information Processing Systems, pages 2496–2504, 2010.
A Omitted Details from Section 3

A.1 Robust Mean Estimation for Sub-Gaussian Distributions
In this section, we use our filter technique to give a near sample-optimal computationally efficient algorithm
to robustly estimate the mean of a sub-gaussian density with a known covariance matrix, thus proving
Theorem 3.1.
We emphasize that the algorithm and its analysis is essentially identical to the filtering algorithm given
in Section 8.1 of [DKK+ 16] for the case of a Gaussian N (µ, I). The only difference is a weaker definition
of the “good set of samples” (Definition A.4) and a simple concentration argument (Lemma A.5) showing
that a random set of uncorrupted samples of the appropriate size is good with high probability. Given these,
the analysis of this subsection follows straightforwardly from the analysis in Section 8.1 of [DKK+ 16] by
plugging in the modified parameters. For the sake of completeness, we provide the details below.
We start by formally defining sub-gaussian distributions:
Definition A.1. A distribution P on R with mean µ is sub-gaussian with parameter ν > 0 if
EX∼P [exp(λ(X − µ))] ≤ exp(νλ2 /2)
for all λ ∈ R. A distribution P on Rd with mean vector µ is sub-gaussian with parameter ν > 0, if for all
unit vectors v, the one-dimensional random variable v · X, X ∼ P , is sub-gaussian with parameter ν.
We will use the following simple fact about the concentration of sub-gaussian random variables:
Fact A.2. If P is sub-gaussian on Rd with mean vector µ and parameter ν > 0, then for any unit vector
v ∈ Rd we have that PrX∼P [|v · (X − µ)| ≥ T ] ≤ exp(−T 2 /2ν).
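For intuition, the standard Gaussian is sub-gaussian with ν = 1, and its exact two-sided tail (computable via the complementary error function) indeed sits below the bound of Fact A.2; a quick numerical check:

```python
import math

def gaussian_two_sided_tail(T):
    """Exact Pr[|X| >= T] for X ~ N(0, 1), via the complementary error function."""
    return math.erfc(T / math.sqrt(2))

def subgaussian_bound(T, nu=1.0):
    """The tail bound of Fact A.2 with parameter nu."""
    return math.exp(-T * T / (2 * nu))

# The exact Gaussian tail never exceeds the sub-gaussian bound with nu = 1.
for T in (0.5, 1.0, 2.0, 3.0):
    assert gaussian_two_sided_tail(T) <= subgaussian_bound(T)
```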
The following theorem is a high probability version of Theorem 3.1:
Theorem A.3. Let G be a sub-gaussian distribution on Rd with parameter ν = Θ(1), mean µG, covariance matrix I, and ε, τ > 0. Let S′ be an ε-corrupted set of samples from G of size Ω((d/ε²) poly log(d/ετ)). There exists an efficient algorithm that, on input S′ and ε > 0, returns a mean vector µ̂ so that with probability at least 1 − τ we have kµ̂ − µG k2 = O(ε√(log(1/ε))).
Notation. We will denote µS = (1/|S|) Σ_{X∈S} X and MS = (1/|S|) Σ_{X∈S} (X − µG)(X − µG)T for the sample mean and modified sample covariance matrix of the set S.
We start by defining our modified notion of a good sample, i.e., a set of conditions on the uncorrupted set
of samples under which our algorithm will succeed.
Definition A.4. Let G be an identity covariance sub-gaussian in d dimensions with mean µG and covariance
matrix I and ε, τ > 0. We say that a multiset S of elements in Rd is (ε, τ )-good with respect to G if the
following conditions are satisfied:
(i) For all x ∈ S we have kx − µG k2 ≤ O(√(d log(|S|/τ))).
(ii) For every affine function L : Rd → R such that L(x) = v · (x − µG) − T, kvk2 = 1, we have that
|PrX∈u S [L(X) ≥ 0] − PrX∼G [L(X) ≥ 0]| ≤ ε/(T² log(d log(d/ετ))).
(iii) We have that kµS − µG k2 ≤ ε.
(iv) We have that kMS − Ik2 ≤ ε.
We show in the following subsection that a sufficiently large set of independent samples from G is (ε, τ)-good (with respect to G) with high probability. Specifically, we prove:
Lemma A.5. Let G be a sub-gaussian distribution with parameter ν = Θ(1) and with identity covariance,
and ε, τ > 0. If the multiset S is obtained by taking Ω((d/ε2 ) poly log(d/ετ )) independent samples from G,
it is (ε, τ )-good with respect to G with probability at least 1 − τ.
We require the following definition that quantifies the extent to which a multiset has been corrupted:
Definition A.6. Given finite multisets S and S 0 we let ∆(S, S 0 ) be the size of the symmetric difference of
S and S 0 divided by the cardinality of S.
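For finite multisets this quantity is straightforward to compute; a small sketch:

```python
from collections import Counter

def delta(S, S_prime):
    """Size of the symmetric difference of two multisets, divided by |S| (Definition A.6)."""
    a, b = Counter(S), Counter(S_prime)
    sym_diff = sum(abs(a[k] - b[k]) for k in a.keys() | b.keys())
    return sym_diff / sum(a.values())

# Corrupting one of four points changes two multiset slots: delta = 2/4 = 0.5.
assert delta([1, 1, 2, 3], [1, 2, 2, 3]) == 0.5
```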
The starting point of our algorithm will be a simple NaivePrune routine (Section 4.3.1 of [DKK+ 16])
that removes obvious outliers, i.e., points which are far from the mean. Then, we iterate the algorithm whose
performance guarantee is given by the following:
Proposition A.7. Let G be a sub-gaussian distribution on Rd with parameter ν = Θ(1), mean µG, covariance matrix I, ε > 0 be sufficiently small, and τ > 0. Let S be an (ε, τ)-good set with respect to G. Let S′ be any multiset with ∆(S, S′) ≤ 2ε and for any x, y ∈ S′, kx − yk2 ≤ O(√(d log(d/ετ))). There exists a polynomial time algorithm Filter-Sub-Gaussian-Unknown-Mean that, given S′ and ε > 0, returns one of the following:
(i) A mean vector µ̂ such that kµ̂ − µG k2 = O(ε√(log(1/ε))).
(ii) A multiset S″ ⊆ S′ such that ∆(S, S″) ≤ ∆(S, S′) − ε/α, where α := d log(d/ετ) log(d log(d/ετ)).
We start by showing how Theorem A.3 follows easily from Proposition A.7.
Proof of Theorem A.3. By the definition of ∆(S, S 0 ), since S 0 has been obtained from S by corrupting an
ε-fraction of the points in S, we have that ∆(S, S 0 ) ≤ 2ε. By Lemma A.5, the set S of uncorrupted samples
is (ε, τ)-good with respect to G with probability at least 1 − τ. We henceforth condition on this event.
Since S is (ε, τ)-good, all x ∈ S have kx − µG k2 ≤ O(√(d log(|S|/τ))). Thus, the NaivePrune procedure does not remove from S′ any member of S. Hence, its output, S″, has ∆(S, S″) ≤ ∆(S, S′) and for any x ∈ S″, there is a y ∈ S with kx − yk2 ≤ O(√(d log(|S|/τ))). By the triangle inequality, for any x, z ∈ S″, kx − zk2 ≤ O(√(d log(|S|/τ))) = O(√(d log(d/ετ))).
Then, we iteratively apply the Filter-Sub-Gaussian-Unknown-Mean procedure of Proposition A.7 until it terminates, returning a mean vector µ̂ with kµ̂ − µG k2 = O(ε√(log(1/ε))). We claim that we need at most O(α) iterations for this to happen. Indeed, the sequence of iterations results in a sequence of sets S′_i, so that ∆(S, S′_i) ≤ ∆(S, S′) − i · ε/α. Thus, if we do not output the empirical mean in the first 2α iterations, in the next iteration there are no outliers left and the algorithm terminates, outputting the sample mean of the remaining set.
A.1.1 Algorithm Filter-Sub-Gaussian-Unknown-Mean: Proof of Proposition A.7
In this subsection, we describe the efficient algorithm establishing Proposition A.7 and prove its correctness. Our algorithm calculates the empirical mean vector µS′ and empirical covariance matrix Σ. If the matrix Σ has no large eigenvalues, it returns µS′. Otherwise, it uses the eigenvector v∗ corresponding to the maximum magnitude eigenvalue of Σ and the mean vector µS′ to define a filter. Our efficient filtering procedure is presented in detailed pseudocode below.

A.1.2 Proof of Correctness of Filter-Sub-Gaussian-Unknown-Mean
By definition, there exist disjoint multisets L, E of points in Rd, where L ⊂ S, such that S′ = (S \ L) ∪ E. With this notation, we can write ∆(S, S′) = (|L| + |E|)/|S|. Our assumption ∆(S, S′) ≤ 2ε is equivalent to |L| + |E| ≤ 2ε · |S|, and the definition of S′ directly implies that (1 − 2ε)|S| ≤ |S′| ≤ (1 + 2ε)|S|. Throughout the proof, we assume that ε is a sufficiently small constant.
We define µG, µS, µS′, µL, and µE to be the means of G, S, S′, L, and E, respectively.
Our analysis will make essential use of the following matrices:
• MS′ denotes EX∈u S′ [(X − µG)(X − µG)T],
• MS denotes EX∈u S [(X − µG)(X − µG)T],
• ML denotes EX∈u L [(X − µG)(X − µG)T], and
• ME denotes EX∈u E [(X − µG)(X − µG)T].

Algorithm 2 Filter algorithm for a sub-gaussian with unknown mean and identity covariance
1: procedure Filter-Sub-Gaussian-Unknown-Mean(S′, ε, τ)
   input: A multiset S′ such that there exists an (ε, τ)-good S with ∆(S, S′) ≤ 2ε
   output: Multiset S″ or mean vector µ̂ satisfying Proposition A.7
2:    Compute the sample mean µS′ = EX∈u S′ [X] and the sample covariance matrix Σ, i.e., Σ = (Σi,j)1≤i,j≤d with Σi,j = EX∈u S′ [(Xi − µS′_i)(Xj − µS′_j)].
3:    Compute approximations for the largest absolute eigenvalue of Σ − I, λ∗ := kΣ − Ik2, and the associated unit eigenvector v∗.
4:    if kΣ − Ik2 ≤ O(ε log(1/ε)), then return µS′.
5:    Let δ := 3√(ε kΣ − Ik2). Find T > 0 such that
          PrX∈u S′ [ |v∗ · (X − µS′)| > T + δ ] > 8 exp(−T²/2ν) + 8ε/(T² log(d log(d/ετ))).
6:    return the multiset S″ = {x ∈ S′ : |v∗ · (x − µS′)| ≤ T + δ}.
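For concreteness, one iteration of this filter can be sketched in Python. This is a simplified illustration only: it takes ν = 1, replaces the O(ε log(1/ε)) test of Step 4 and the log(d log(d/ετ)) factor in the Step 5 tail condition with illustrative constants, and scans a coarse grid of thresholds; none of these choices reflects the paper's exact tuning.

```python
import numpy as np

def filter_step(S, eps):
    """One iteration of a spectral filter in the spirit of Algorithm 2 (simplified sketch)."""
    n, d = S.shape
    mu = S.mean(axis=0)
    Sigma = (S - mu).T @ (S - mu) / n
    M = Sigma - np.eye(d)
    eigvals, eigvecs = np.linalg.eigh(M)
    i = np.argmax(np.abs(eigvals))
    lam, v = abs(eigvals[i]), eigvecs[:, i]
    if lam <= 0.5:                        # stand-in for the O(eps log(1/eps)) test of Step 4
        return mu, None
    delta = 3.0 * np.sqrt(eps * lam)      # the deviation term of Step 5
    scores = np.abs((S - mu) @ v)
    for T in np.arange(2.0, scores.max() + 1.0, 0.25):
        tail = np.mean(scores > T + delta)
        if tail > 8 * np.exp(-T ** 2 / 2) + 8 * eps / T ** 2:   # simplified tail condition
            return None, S[scores <= T + delta]                  # Step 6: filter
    return mu, None                       # no violation found; accept the mean

rng = np.random.default_rng(0)
inliers = rng.normal(size=(200, 2))                 # identity-covariance inliers
outliers = np.tile([8.0, 0.0], (20, 1))             # corruptions aligned in one direction
mu, filtered = filter_step(np.vstack([inliers, outliers]), eps=0.1)
```

On this input the empirical covariance has a large eigenvalue along the corrupted direction, so the call returns a filtered set (with the planted outliers removed) rather than a mean vector.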
Our analysis will hinge on proving the important claim that Σ − I is approximately (|E|/|S 0 |)ME . This
means two things for us. First, it means that if the positive errors align in some direction (causing ME to
have a large eigenvalue), there will be a large eigenvalue in Σ − I. Second, it says that any large eigenvalue
of Σ − I will correspond to an eigenvalue of ME , which will give an explicit direction in which many error
points are far from the empirical mean.
Useful Structural Lemmas. We begin by noting that we have concentration bounds on G and therefore,
on S due to its goodness.
Fact A.8. Let w ∈ Rd be any unit vector. Then, for any T > 0, PrX∼G [|w · (X − µG)| > T] ≤ 2 exp(−T²/2ν) and PrX∈u S [|w · (X − µG)| > T] ≤ 2 exp(−T²/2ν) + ε/(T² log(d log(d/ετ))).
Proof. The first inequality is Fact A.2, and the latter follows from it using the goodness of S.
By using the above fact, we obtain the following simple claim:
Claim A.9. Let w ∈ Rd be any unit vector. Then, for any T > 0, we have that:
PrX∼G [ |w · (X − µS′)| > T + kµS′ − µG k2 ] ≤ 2 exp(−T²/2ν)
and
PrX∈u S [ |w · (X − µS′)| > T + kµS′ − µG k2 ] ≤ 2 exp(−T²/2ν) + ε/(T² log(d log(d/ετ))).
Proof. This follows from Fact A.8 upon noting that |w · (X − µS′)| > T + kµS′ − µG k2 only if |w · (X − µG)| > T.
We can use the above facts to prove concentration bounds for L. In particular, we have the following
lemma:
Lemma A.10. We have that kML k2 = O (log(|S|/|L|) + ε|S|/|L|).
Proof. Since L ⊆ S, for any x ∈ Rd, we have that
|S| · PrX∈u S (X = x) ≥ |L| · PrX∈u L (X = x).   (1)
Since ML is a symmetric matrix, we have kML k2 = max_{kvk2=1} |vT ML v|. So, to bound kML k2 it suffices to bound |vT ML v| for unit vectors v. By definition of ML, for any v ∈ Rd we have that
|vT ML v| = EX∈u L [|v · (X − µG)|²].
For unit vectors v, the RHS is bounded from above as follows:
EX∈u L [|v · (X − µG)|²] = 2 ∫₀^∞ PrX∈u L [|v · (X − µG)| > T] T dT
= 2 ∫₀^{O(√(d log(d/ετ)))} PrX∈u L [|v · (X − µG)| > T] T dT
≤ 2 ∫₀^{O(√(d log(d/ετ)))} min{ 1, (|S|/|L|) · PrX∈u S [|v · (X − µG)| > T] } T dT
≤ 2 ∫₀^{4√(ν log(|S|/|L|))} T dT + (|S|/|L|) ∫_{4√(ν log(|S|/|L|))}^{O(√(d log(d/ετ)))} ( exp(−T²/2ν) + ε/(T² log(d log(d/ετ))) ) T dT
≲ log(|S|/|L|) + ε · |S|/|L|,
where the second line follows from the fact that kvk2 = 1, L ⊂ S, and S satisfies condition (i) of Definition A.4; the third line follows from (1); and the fourth line follows from Fact A.8.
As a corollary, we can relate the matrices MS 0 and ME , in spectral norm:
Corollary A.11. We have that MS 0 − I = (|E|/|S 0 |)ME + O(ε log(1/ε)), where the O(ε log(1/ε)) term
denotes a matrix of spectral norm O(ε log(1/ε)).
Proof. By definition, we have that |S 0 |MS 0 = |S|MS − |L|ML + |E|ME . Thus, we can write
MS 0 = (|S|/|S 0 |)MS − (|L|/|S 0 |)ML + (|E|/|S 0 |)ME
= I + O(ε) + O(ε log(1/ε)) + (|E|/|S 0 |)ME ,
where the second line uses the fact that 1 − 2ε ≤ |S|/|S 0 | ≤ 1 + 2ε, the goodness of S (condition (iv) in
Definition A.4), and Lemma A.10. Specifically, Lemma A.10 implies that (|L|/|S 0 |)kML k2 = O(ε log(1/ε)).
Therefore, we have that
MS 0 = I + (|E|/|S 0 |)ME + O(ε log(1/ε)) ,
as desired.
We now establish a similarly useful bound on the difference between the mean vectors:

Lemma A.12. We have that µS′ − µG = (|E|/|S′|)(µE − µG) + O(ε√(log(1/ε))), where the O(ε√(log(1/ε))) term denotes a vector with ℓ2-norm at most O(ε√(log(1/ε))).

Proof. By definition, we have that
|S′|(µS′ − µG) = |S|(µS − µG) − |L|(µL − µG) + |E|(µE − µG).
Since S is a good set, by condition (iii) of Definition A.4, we have kµS − µG k2 = O(ε). Since 1 − 2ε ≤ |S|/|S′| ≤ 1 + 2ε, it follows that (|S|/|S′|)kµS − µG k2 = O(ε). Using the valid inequality kML k2 ≥ kµL − µG k2² and Lemma A.10, we obtain that kµL − µG k2 ≤ O(√(log(|S|/|L|) + ε|S|/|L|)). Therefore,
(|L|/|S′|)kµL − µG k2 ≤ O( (|L|/|S|)√(log(|S|/|L|)) + √(ε|L|/|S|) ) = O(ε√(log(1/ε))).
In summary,
µS′ − µG = (|E|/|S′|)(µE − µG) + O(ε√(log(1/ε))),
as desired. This completes the proof of the lemma.
By combining the above, we can conclude that Σ−I is approximately proportional to ME . More formally,
we obtain the following corollary:
Corollary A.13. We have Σ − I = (|E|/|S 0 |)ME + O(ε log(1/ε)) + O(|E|/|S 0 |)2 kME k2 , where the additive
terms denote matrices of appropriately bounded spectral norm.
Proof. By definition, we can write Σ − I = MS′ − I − (µS′ − µG)(µS′ − µG)T. Using Corollary A.11 and Lemma A.12, we obtain:
Σ − I = (|E|/|S′|)ME + O(ε log(1/ε)) + O((|E|/|S′|)² kµE − µG k2²) + O(ε² log(1/ε))
= (|E|/|S′|)ME + O(ε log(1/ε)) + O(|E|/|S′|)² kME k2,
where the second line follows from the valid inequality kME k2 ≥ kµE − µG k2². This completes the proof.
Case of Small Spectral Norm. We are now ready to analyze the case that the mean vector µS′ is returned by the algorithm in Step 4. In this case, we have that λ∗ := kΣ − Ik2 = O(ε log(1/ε)). Hence, Corollary A.13 yields that
(|E|/|S′|)kME k2 ≤ λ∗ + O(ε log(1/ε)) + O(|E|/|S′|)² kME k2,
which in turn implies that
(|E|/|S′|)kME k2 = O(ε log(1/ε)).
On the other hand, since kME k2 ≥ kµE − µG k2², Lemma A.12 gives that
kµS′ − µG k2 ≤ (|E|/|S′|)√(kME k2) + O(ε√(log(1/ε))) = O(ε√(log(1/ε))).
This proves part (i) of Proposition A.7.
Case of Large Spectral Norm. We next show the correctness of the algorithm when it returns a filter in Step 5.
We start by proving that if λ∗ := kΣ − Ik2 > Cε log(1/ε), for a sufficiently large universal constant C, then a value T satisfying the condition in Step 5 exists. We first note that kME k2 is appropriately large. Indeed, by Corollary A.13 and the assumption that λ∗ > Cε log(1/ε), we deduce that
(|E|/|S′|)kME k2 = Ω(λ∗).   (2)
Moreover, using the inequality kME k2 ≥ kµE − µG k2² and Lemma A.12 as above, we get that
kµS′ − µG k2 ≤ (|E|/|S′|)√(kME k2) + O(ε√(log(1/ε))) ≤ δ/2,   (3)
where we used the fact that δ := 3√(ελ∗) > C′ε√(log(1/ε)).
Suppose for the sake of contradiction that for all T > 0 we have that
PrX∈u S′ [ |v∗ · (X − µS′)| > T + δ ] ≤ 8 exp(−T²/2ν) + 8ε/(T² log(d log(d/ετ))).
Using (3), we obtain that for all T > 0 we have that
PrX∈u S′ [ |v∗ · (X − µG)| > T + δ/2 ] ≤ 8 exp(−T²/2ν) + 8ε/(T² log(d log(d/ετ))).   (4)
Since E ⊆ S′, for all x ∈ Rd we have that |S′| PrX∈u S′ [X = x] ≥ |E| PrX∈u E [X = x]. This fact combined with (4) implies that for all T > 0
PrX∈u E [ |v∗ · (X − µG)| > T + δ/2 ] ≲ (|S′|/|E|) ( exp(−T²/2ν) + ε/(T² log(d log(d/ετ))) ).   (5)
We now have the following sequence of inequalities:
kME k2 = EX∈u E [|v∗ · (X − µG)|²] = 2 ∫₀^∞ PrX∈u E [|v∗ · (X − µG)| > T] T dT
= 2 ∫₀^{O(√(d log(d/ετ)))} PrX∈u E [|v∗ · (X − µG)| > T] T dT
≤ 2 ∫₀^{O(√(d log(d/ετ)))} min{ 1, (|S′|/|E|) · PrX∈u S′ [|v∗ · (X − µG)| > T] } T dT
≤ 2 ∫₀^{4√(ν log(|S′|/|E|))+δ} T dT + (|S′|/|E|) ∫_{4√(ν log(|S′|/|E|))+δ}^{O(√(d log(d/ετ)))} ( exp(−T²/2ν) + ε/(T² log(d log(d/ετ))) ) T dT
≲ log(|S′|/|E|) + δ² + O(1) + ε · |S′|/|E|
≲ log(|S′|/|E|) + ελ∗ + ε · |S′|/|E|.
Rearranging the above, we get that
(|E|/|S′|)kME k2 ≲ (|E|/|S′|) log(|S′|/|E|) + (|E|/|S′|)ελ∗ + ε = O(ε log(1/ε) + ε²λ∗).
Combined with (2), we obtain λ∗ = O(ε log(1/ε)), which is a contradiction if C is sufficiently large. Therefore,
it must be the case that for some value of T the condition in Step 5 is satisfied.
The following claim completes the proof:
Claim A.14. Fix α := d log(d/ετ) log(d log(d/ετ)). We have that ∆(S, S″) ≤ ∆(S, S′) − 2ε/α.
Proof. Recall that S′ = (S \ L) ∪ E, with E and L disjoint multisets such that L ⊂ S. We can similarly write S″ = (S \ L′) ∪ E′, with L′ ⊇ L and E′ ⊂ E. Since
∆(S, S′) − ∆(S, S″) = (|E \ E′| − |L′ \ L|)/|S|,
it suffices to show that |E \ E′| ≥ |L′ \ L| + 2ε|S|/α. Note that |L′ \ L| is the number of points rejected by the filter that lie in S ∩ S′. Note that the fraction of elements of S that are removed to produce S″ (i.e., satisfy |v∗ · (x − µS′)| > T + δ) is at most 2 exp(−T²/2ν) + ε/α. This follows from Claim A.9 and the fact that T = O(√(d log(d/ετ))).
Hence, it holds that |L′ \ L| ≤ (2 exp(−T²/2ν) + ε/α)|S|. On the other hand, Step 5 of the algorithm ensures that the fraction of elements of S′ that are rejected by the filter is at least 8 exp(−T²/2ν) + 8ε/α. Note that |E \ E′| is the number of points rejected by the filter that lie in S′ \ S. Therefore, we can write:
|E \ E′| ≥ (8 exp(−T²/2ν) + 8ε/α)|S′| − (2 exp(−T²/2ν) + ε/α)|S|
≥ (8 exp(−T²/2ν) + 8ε/α)|S|/2 − (2 exp(−T²/2ν) + ε/α)|S|
≥ (2 exp(−T²/2ν) + 3ε/α)|S|
≥ |L′ \ L| + 2ε|S|/α,
where the second line uses the fact that |S′| ≥ |S|/2 and the last line uses the fact that |L′ \ L|/|S| ≤ 2 exp(−T²/2ν) + ε/α. Noting that log(d/ετ) ≥ 1, this completes the proof of the claim.
A.1.3 Proof of Lemma A.5
Proof. Let N = Ω((d/ε²) poly log(d/ετ)) be the number of samples drawn from G. For (i), the probability that a coordinate of a sample is at least √(2ν log(N d/3τ)) is at most τ/3dN by Fact A.2. By a union bound, the probability that all coordinates of all samples are smaller than √(2ν log(N d/3τ)) is at least 1 − τ/3. In this case, kxk2 ≤ √(2νd log(N d/3τ)) = O(√(dν log(N ν/τ))).
After translating by µG, we note that (iii) follows immediately from Lemma 4.3 of [DKK+16] and (iv) follows from Theorem 5.50 of [Ver10], as long as N = Ω(ν⁴ d log(1/τ)/ε²), with probability at least 1 − τ/3. It remains to show that, conditioned on (i), (ii) holds with probability at least 1 − τ/3.
To simplify some expressions, let δ := ε/(log(d log(d/ετ))) and R = C√(d log(|S|/τ)). We need to show that for all unit vectors v and all 0 ≤ T ≤ R that
| PrX∈u S [|v · (X − µG)| > T] − PrX∼G [|v · (X − µG)| > T] | ≤ δ/T².   (6)
Firstly, we show that for all unit vectors v and T > 0
| PrX∈u S [|v · (X − µG)| > T] − PrX∼G [|v · (X − µG)| > T] | ≤ δ/(10ν ln(1/δ))
with probability at least 1 − τ/6. Since the VC-dimension of the set of all halfspaces is d + 1, this follows from the VC inequality [DL01], since we have more than Ω(d/(δ/(10ν log(1/δ)))²) samples. We thus only need to consider the case when T ≥ √(10ν ln(1/δ)).
Lemma A.15. For any fixed unit vector v and T > √(10ν ln(1/δ)), except with probability exp(−N δ/(6Cν)), we have that
PrX∈u S [|v · (X − µG)| > T] ≤ δ/(CT²),
where C = 8.
Proof. Let E be the event that |v · (X − µG)| > T. Since G is sub-gaussian, Fact A.2 yields that PrG [E] = PrY∼G [|v · (Y − µG)| > T] ≤ exp(−T²/(2ν)). Note that, thanks to our assumption on T, we have that T² ≤ exp(T²/(4ν))/2C, and therefore T² PrG [E] ≤ exp(−T²/(4ν))/2C ≤ δ/2C.
Consider ES [exp((T²/(3ν)) · N PrS [E])]. Each individual sample Xi for 1 ≤ i ≤ N is an independent copy of Y ∼ G, and hence:
ES [exp((T²/3ν) · N PrS [E])] = ES [exp((T²/3ν) · Σ_{i=1}^N 1_{Xi∈E})]
= Π_{i=1}^N EXi [exp((T²/3ν) · 1_{Xi∈E})]
≤ ( exp(T²/3ν) PrG [E] + 1 )^N
(a) ≤ ( exp(−T²/6ν) + 1 )^N
(b) ≤ (1 + δ^{5/3})^N
(c) ≤ exp(N δ^{5/3}),
where (a) follows from sub-gaussianity, (b) follows from our choice of T, and (c) comes from the fact that 1 + x ≤ e^x for all x.
Hence, by Markov’s inequality, we have
PrS [ PrS [E] ≥ δ/(CT²) ] ≤ exp( N δ^{5/3} − δN/(3Cν) ) = exp( N δ(δ^{2/3} − 1/(3Cν)) ).
Thus, if δ is a sufficiently small constant and C is sufficiently large, this yields the desired bound.
Now let C be a 1/2-cover in Euclidean distance for the set of unit vectors of size 2^{O(d)}. By a union bound, for all v′ ∈ C and T′ a power of 2 between √(4ν ln(1/δ)) and R, we have that
PrX∈u S [|v′ · (X − µG)| > T′] ≤ δ/(8T′²)
except with probability
2^{O(d)} log(R) exp(−N δ/6Cν) = exp( O(d) + log log R − N δ/6Cν ) ≤ τ/6.
However, for any unit vector v and √(4ν ln(1/δ)) ≤ T ≤ R, there is a v′ ∈ C and such a T′ such that for all x ∈ Rd we have |v′ · (x − µG)| ≥ |v · (x − µG)|/2, and so |v · (x − µG)| > T implies |v′ · (x − µG)| > T′.
Then, by a union bound, (6) holds simultaneously for all unit vectors v and all 0 ≤ T ≤ R, with probability at least 1 − τ/3. This completes the proof.
A.2 Robust Mean Estimation Under Second Moment Assumptions
In this section, we use our filtering technique to give a near sample-optimal computationally efficient algorithm to robustly estimate the mean of a density with a second moment assumption. We show:
Theorem A.16. Let P be a distribution on Rd with unknown mean vector µP and unknown covariance matrix ΣP ⪯ I. Let S be an ε-corrupted set of samples from P of size Θ((d/ε) log d). Then there exists an algorithm that, given S, with probability 2/3, outputs µ̂ with kµ̂ − µP k2 ≤ O(√ε) in time poly(d/ε).
Note that Theorem 3.2 follows straightforwardly from the above (divide every sample by σ, run the
algorithm of Theorem A.16, and multiply its output by σ).
As usual in our filtering framework, the algorithm will iteratively look at the top eigenvalue and eigenvector of the sample covariance matrix and return the sample mean if this eigenvalue is small (Algorithm
3). The main difference between this and the filter algorithm for the sub-gaussian case is how we choose
the threshold for the filter. Instead of looking for a violation of a concentration inequality, here we will
choose a threshold at random (with a bias towards higher thresholds). The reason is that, in this setting,
the variance in the direction in which we filter only needs to be larger by a constant multiple, rather than by
the typical relative Ω̃(ε) of the sub-gaussian case. Therefore, randomly choosing a threshold weighted towards
higher thresholds suffices to throw out more corrupted samples than uncorrupted samples in expectation.
Although it is possible to reject many good samples this way, the algorithm still only rejects a total of O(ε)
samples with high probability.
We would like our good set of samples to have mean close to that of P and bounded variance in all
directions. This motivates the following definition:
Definition A.17. We call a set S ε-good for a distribution P with mean µP and covariance ΣP ⪯ I if the mean µS and covariance ΣS of S satisfy kµS − µP k2 ≤ √ε and kΣS k2 ≤ 2.
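This condition is easy to test numerically for a concrete sample; a minimal sketch (the distribution and sample size below are illustrative assumptions):

```python
import numpy as np

def is_eps_good(S, mu_P, eps):
    """Check the two conditions of Definition A.17 for a sample array S (n x d)."""
    mu_S = S.mean(axis=0)
    Sigma_S = np.cov(S.T, bias=True)
    mean_ok = np.linalg.norm(mu_S - mu_P) <= np.sqrt(eps)
    cov_ok = np.linalg.norm(Sigma_S, ord=2) <= 2   # spectral norm of the covariance
    return bool(mean_ok and cov_ok)

rng = np.random.default_rng(1)
S = rng.normal(size=(2000, 5))   # N(0, I) samples comfortably satisfy both conditions
```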
However, since we have no assumptions about higher moments, it may be possible for outliers to affect
our sample covariance too much. Fortunately, such outliers have small probability and do not contribute too
much to the mean, so we will later reclassify them as errors.
Lemma A.18. Let S be N = Θ((d/ε) log d) samples drawn from P. Then, with probability at least 9/10, a random X ∈u S satisfies
(i) kES [X] − µP k2 ≤ √ε/3,
(ii) PrS [ kX − µP k2 ≥ 80√(d/ε) ] ≤ ε/160,
(iii) kES [(X − µP) · 1_{kX−µP k2 ≤ 80√(d/ε)}]k2 ≤ √ε/3, and
(iv) kES [(X − µP)(X − µP)T · 1_{kX−µP k2 ≤ 80√(d/ε)}]k2 ≤ 3/2.
Proof. For (i), note that
ES [kE[X] − µP k2²] = Σi ES [(E[X]i − µP_i)²] ≤ d/N ≤ ε/360,
and so by Markov’s inequality, with probability at least 39/40, we have kE[X] − µP k2² ≤ ε/9.
For (ii), similarly to (i), note that
E[kY − µP k2²] = Σi E[(Yi − µP_i)²] ≤ d,
for Y ∼ P. By Markov’s inequality, PrS [kY − µP k2 ≥ 80√(d/ε)] ≤ ε/160 with probability at least 39/40.
For (iii), let ν = EX∼P [X · 1_{kX−µP k2 ≤ 80√(d/ε)}] be the true mean of the distribution when we condition on the event that kX − µP k2 ≤ 80√(d/ε). By the same argument as (i), we know that
k EX∈u S [X · 1_{kX−µP k2 ≤ 80√(d/ε)}] − ν k2 ≤ √ε/9,
with probability at least 39/40. Thus it suffices to show that kν − µP · Pr[kX − µP k2 ≤ 80√(d/ε)]k2 ≤ √ε/10. To do so, it suffices to show that for all unit vectors v ∈ Rd, we have
⟨v, ν − µP · Pr[kX − µP k2 ≤ 80√(d/ε)]⟩ < √ε/10.
Observe that for any such v, we have
⟨v, µP · Pr[kX − µP k2 ≤ 80√(d/ε)] − ν⟩ = EX∼P [⟨v, X − µP⟩ · 1_{kX−µP k2 ≥ 80√(d/ε)}]
(a) ≤ √( EX∼P [⟨v, X − µP⟩²] · PrX∼P [kX − µP k2 ≥ 80√(d/ε)] )
(b) = √( vT ΣP v · PrX∼P [kX − µP k2 ≥ 80√(d/ε)] )
(c) ≤ √ε/10,
where (a) follows from Cauchy-Schwarz, (b) follows from the definition of the covariance, and (c) follows from the assumption that ΣP ⪯ I and from Markov’s inequality.
For (iv), we require the following matrix Chernoff bound:
Lemma A.19 (Part of Theorem 5.1.1 of [T+15]). Consider a sequence of d × d positive semi-definite random matrices Xk with kXk k2 ≤ L for all k. Let µmax = kΣk E[Xk ]k2. Then, for θ > 0,
E[ kΣk Xk k2 ] ≤ (e^θ − 1)µmax/θ + L log(d)/θ,
and for any δ > 0,
Pr[ kΣk Xk k2 ≥ (1 + δ)µmax ] ≤ d ( e^δ/(1 + δ)^{1+δ} )^{µmax/L}.
We apply this lemma with Xk = (xk − µP)(xk − µP)T · 1_{kxk−µP k2 ≤ 80√(d/ε)} for {x1, . . . , xN} = S. Note that kXk k2 ≤ (80)² d/ε = L and that µmax ≤ N kΣP k2 ≤ N.
Suppose that µmax ≤ N/80. Then, taking θ = 1, we have
E[ kΣk Xk k2 ] ≤ (e − 1)N/80 + O(d log(d)/ε).
By Markov’s inequality, except with probability 1/40, we have kΣk Xk k2 ≤ N + O(d log(d)/ε) ≤ 3N/2, for N a sufficiently high multiple of d log(d)/ε.
Suppose that µmax ≥ N/80. Then we take δ = 1/2 and obtain
Pr[ kΣk Xk k2 ≥ 3µmax/2 ] ≤ d ( e^{1/2}/(3/2)^{3/2} )^{µmax/L} ≤ d ( e^{1/2}/(3/2)^{3/2} )^{N ε/512000d}.
For N a sufficiently high multiple of d log(d)/ε, we get that Pr[ kΣk Xk k2 ≥ 3µmax/2 ] ≤ 1/40. Since µmax ≤ N, we have with probability at least 39/40 that kΣk Xk k2 ≤ 3N/2.
Noting that kΣk Xk k2/N = kE[1_{kX−µP k2 ≤ 80√(d/ε)} (X − µP)(X − µP)T]k2, we obtain (iv). By a union bound, (i)–(iv) all hold simultaneously with probability at least 9/10.
Now we can get a 2ε-corrupted good set from an ε-corrupted set of samples satisfying Lemma A.18, by reclassifying outliers as errors:
Lemma A.20. Let S = R ∪ E \ L, where R is a set of N = Θ(d log d/ε) samples drawn from P and E and L are disjoint sets with |E|, |L| ≤ ε|S|. Then, with probability 9/10, we can also write S = G ∪ E′ \ L′, where G ⊆ R is ε-good, L′ ⊆ L and E′ ⊇ E has |E′| ≤ 2ε|S|.
Proof. Let G = {x ∈ R : kx − µP k2 ≤ 80√(d/ε)}. Condition on the event that R satisfies Lemma A.18; this occurs with probability at least 9/10.
Since R satisfies (ii) of Lemma A.18, |R| − |G| ≤ ε|R|/160 ≤ ε|S|. Thus, E′ = E ∪ (R \ G) has |E′| ≤ 2ε|S|. Note that (iv) of Lemma A.18 for R in terms of G is exactly |G|kΣG k2/|R| ≤ 3/2, and so kΣG k2 ≤ 3|R|/(2|G|) ≤ 2.
It remains to check that kµG − µP k2 ≤ √ε. We have
k |G| · µG − |G| · µP k2 = |R| · k EX∈u R [(X − µP) · 1_{kX−µP k2 ≤ 80√(d/ε)}] k2 ≤ |R| · √ε/3,
where the last line follows from (iii) of Lemma A.18. Since |G| ≥ (2/3)|R|, dividing this expression by |G| yields the desired claim.
Algorithm 3 Filter under second moment assumptions
1: function FilterUnder2ndMoment(S)
2:   Compute µS, ΣS, the mean and covariance matrix of S.
3:   Find the eigenvector v* with highest eigenvalue λ* of ΣS.
4:   if λ* ≤ 9 then
5:     return µS
6:   else
7:     Draw Z from the distribution on [0, 1] with probability density function 2x.
8:     Let T = Z max{|v* · (x − µS)| : x ∈ S}.
9:     Return the set S′ = {x ∈ S : |v* · (x − µS)| < T}.
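A minimal NumPy sketch of one pass of the routine above may be useful; this is our own translation of the pseudocode (helper name and toy data are ours), not the authors' implementation:

```python
import numpy as np

def filter_under_2nd_moment(S, rng, threshold=9.0):
    """One iteration of the filter: return the empirical mean if the top
    eigenvalue of the covariance is small, otherwise remove the points whose
    projection on the top eigenvector exceeds a random threshold T."""
    mu = S.mean(axis=0)
    Sigma = np.cov(S, rowvar=False, bias=True)
    eigvals, eigvecs = np.linalg.eigh(Sigma)
    lam, v = eigvals[-1], eigvecs[:, -1]      # top eigenpair
    if lam <= threshold:
        return mu                             # accept the empirical mean
    proj = np.abs((S - mu) @ v)               # |v* . (x - mu_S)|
    Z = np.sqrt(rng.uniform())                # inverse-CDF sample of pdf 2x
    T = Z * proj.max()
    return S[proj < T]

rng = np.random.default_rng(1)
good = rng.standard_normal((980, 10))
outliers = np.full((20, 10), 50.0)            # far-away corruption
S = np.vstack([good, outliers])
for _ in range(1001):                         # each pass removes >= 1 point
    result = filter_under_2nd_moment(S, rng)
    if result.ndim == 1:                      # a mean was returned
        mu_hat = result
        break
    S = result
```

On this toy instance the planted outliers dominate the top eigenvector, so they should be rejected quickly and the returned mean should land near the true mean 0.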
An iteration of FilterUnder2ndMoment may throw out more samples from G than corrupted samples. However, in expectation, we throw out many more corrupted samples than samples from the good set:
Proposition A.21. If we run FilterUnder2ndMoment on a set $S = G \cup E \setminus L$ for some $\varepsilon$-good set $G$ and disjoint $E, L$ with $|E| \le 2\varepsilon|S|$, $|L| \le 9\varepsilon|S|$, then either it returns $\mu_S$ with $\|\mu_S - \mu_P\|_2 \le O(\sqrt{\varepsilon})$, or else it returns a set $S' \subset S$ with $S' = G \cup E' \setminus L'$ for disjoint $E'$ and $L'$. In the latter case we have $\mathbb{E}_Z[|E'| + 2|L'|] \le |E| + 2|L|$.
For $D \in \{G, E, L, S\}$, let $\mu_D$ be the mean of $D$ and $M_D$ be the matrix $\mathbb{E}_{X \in_u D}[(X - \mu_S)(X - \mu_S)^T]$.
Lemma A.22. If $G$ is an $\varepsilon$-good set with $\|x\|_2 \le 40\sqrt{d/\varepsilon}$ for $x \in S \cup G$, then $\|M_G\|_2 \le 2\|\mu_G - \mu_S\|_2^2 + 2$.
Proof. For any unit vector $v$, we have
$$v^T M_G v = \mathbb{E}_{X \in_u G}\big[(v \cdot (X - \mu_S))^2\big] = \mathbb{E}_{X \in_u G}\big[(v \cdot (X - \mu_G) + v \cdot (\mu_G - \mu_S))^2\big] = v^T \Sigma_G v + (v \cdot (\mu_G - \mu_S))^2 \le 2 + 2\|\mu_G - \mu_S\|_2^2.$$
Lemma A.23. We have that $|L|\|M_L\|_2 \le 2|G|(1 + \|\mu_G - \mu_S\|_2^2)$.
Proof. Since $L \subseteq G$, for any unit vector $v$, we have
$$|L|\, v^T M_L v = |L|\, \mathbb{E}_{X \in_u L}\big[(v \cdot (X - \mu_S))^2\big] \le |G|\, \mathbb{E}_{X \in_u G}\big[(v \cdot (X - \mu_S))^2\big] \le 2|G|(1 + \|\mu_G - \mu_S\|_2^2).$$
Lemma A.24. $\|\mu_G - \mu_S\|_2 \le \sqrt{2\varepsilon\|M_S\|_2} + 12\sqrt{\varepsilon}$.
Proof. We have that $|E| M_E \preceq |S| M_S + |L| M_L$, and so
$$|E|\|M_E\|_2 \le |S|\|M_S\|_2 + 2|G|(1 + \|\mu_G - \mu_S\|_2^2).$$
By Cauchy-Schwarz, we have that $\|M_E\|_2 \ge \|\mu_E - \mu_S\|_2^2$, and so
$$|E|\|\mu_E - \mu_S\|_2 \le \sqrt{|E||S|\|M_S\|_2 + 2|E||G|(1 + \|\mu_G - \mu_S\|_2^2)}.$$
By Cauchy-Schwarz and Lemma A.23, we have that
$$|L|\|\mu_L - \mu_S\|_2 \le \sqrt{|L|}\sqrt{|L|\|M_L\|_2} \le \sqrt{2|L||G|(1 + \|\mu_G - \mu_S\|_2^2)}.$$
Since $|S|\mu_S = |G|\mu_G + |E|\mu_E - |L|\mu_L$ and $|S| = |G| + |E| - |L|$, we get
$$|G|(\mu_G - \mu_S) = |L|(\mu_L - \mu_S) - |E|(\mu_E - \mu_S).$$
Substituting into this, we obtain
$$|G|\|\mu_G - \mu_S\|_2 \le \sqrt{|E||S|\|M_S\|_2 + 2|E||G|(1 + \|\mu_G - \mu_S\|_2^2)} + \sqrt{2|L||G|(1 + \|\mu_G - \mu_S\|_2^2)}.$$
Since $\sqrt{x + y} \le \sqrt{x} + \sqrt{y}$ for $x, y > 0$, we have
$$|G|\|\mu_G - \mu_S\|_2 \le \sqrt{|E||S|\|M_S\|_2} + \big(\sqrt{2|E||G|} + \sqrt{2|L||G|}\big)(1 + \|\mu_G - \mu_S\|_2).$$
Since $||G| - |S|| \le \varepsilon|S|$ and $|E| \le 2\varepsilon|S|$, $|L| \le 9\varepsilon|S|$, we have
$$\|\mu_G - \mu_S\|_2 \le \sqrt{2\varepsilon\|M_S\|_2} + 6\sqrt{\varepsilon}\,(1 + \|\mu_G - \mu_S\|_2).$$
Moving the $\|\mu_G - \mu_S\|_2$ terms to the LHS and using $6\sqrt{\varepsilon} \le 1/2$ gives
$$\|\mu_G - \mu_S\|_2 \le \sqrt{2\varepsilon\|M_S\|_2} + 12\sqrt{\varepsilon}.$$
Since $\lambda^* = \|M_S\|_2$, correctness in the case where we return the empirical mean is immediate:
Corollary A.25. If $\lambda^* \le 9$, we have that $\|\mu_G - \mu_S\|_2 = O(\sqrt{\varepsilon})$.
From now on, we assume $\lambda^* > 9$. In this case we have $\|\mu_G - \mu_S\|_2^2 \le O(\varepsilon\lambda^*)$. Using Lemma A.22, we have
$$\|M_G\|_2 \le 2 + O(\varepsilon\lambda^*) \le 2 + \lambda^*/5$$
for sufficiently small $\varepsilon$. Thus, we have that
$$v^{*T} M_S v^* \ge 4\, v^{*T} M_G v^*. \tag{7}$$
Now we can show that in expectation, we throw out many more corrupted points from E than from G\L:
Lemma A.26. Let $S' = G \cup E' \setminus L'$ for disjoint $E', L'$ be the set of samples returned by the iteration. Then we have $\mathbb{E}_Z[|E'| + 2|L'|] \le |E| + 2|L|$.
Proof. Let $a = \max_{x \in S} |v^* \cdot (x - \mu_S)|$. Firstly, we look at the expected number of samples we reject:
$$|S| - \mathbb{E}_Z[|S'|] = \mathbb{E}_Z\Big[|S| \Pr_{X \in_u S}\big[|v^* \cdot (X - \mu_S)| \ge aZ\big]\Big] = |S| \int_0^1 \Pr_{X \in_u S}\big[|v^* \cdot (X - \mu_S)| \ge ax\big]\, 2x\, dx$$
$$= |S| \int_0^a \Pr_{X \in_u S}\big[|v^* \cdot (X - \mu_S)| \ge T\big]\, (2T/a)\, dT = |S|\, \mathbb{E}_{X \in_u S}\big[(v^* \cdot (X - \mu_S))^2\big]/a = (|S|/a) \cdot v^{*T} M_S v^*.$$
Next, we look at the expected number of false positive samples we reject, i.e., those in $L' \setminus L$:
$$\mathbb{E}_Z[|L'|] - |L| = \mathbb{E}_Z\Big[(|G| - |L|) \Pr_{X \in_u G \setminus L}\big[|v^* \cdot (X - \mu_S)| \ge T\big]\Big] \le \mathbb{E}_Z\Big[|G| \Pr_{X \in_u G}\big[|v^* \cdot (X - \mu_S)| \ge aZ\big]\Big]$$
$$= |G| \int_0^1 \Pr_{X \in_u G}\big[|v^* \cdot (X - \mu_S)| \ge ax\big]\, 2x\, dx = |G| \int_0^a \Pr_{X \in_u G}\big[|v^* \cdot (X - \mu_S)| \ge T\big]\, (2T/a)\, dT$$
$$\le |G| \int_0^\infty \Pr_{X \in_u G}\big[|v^* \cdot (X - \mu_S)| \ge T\big]\, (2T/a)\, dT = |G|\, \mathbb{E}_{X \in_u G}\big[(v^* \cdot (X - \mu_S))^2\big]/a = (|G|/a) \cdot v^{*T} M_G v^*.$$
Using (7) (together with $|S| \ge 3|G|/4$), we have $|S|\, v^{*T} M_S v^* \ge 3|G|\, v^{*T} M_G v^*$, and so $|S| - \mathbb{E}_Z[|S'|] \ge 3(\mathbb{E}_Z[|L'|] - |L|)$. Now consider that $|S'| = |G| + |E'| - |L'|$ and $|S| = |G| + |E| - |L|$, and thus $|S| - |S'| = |E| - |E'| + |L'| - |L|$. This yields that $|E| - \mathbb{E}_Z[|E'|] \ge 2(\mathbb{E}_Z[|L'|] - |L|)$, which can be rearranged to $\mathbb{E}_Z[|E'| + 2|L'|] \le |E| + 2|L|$.
Proof of Proposition A.21. If $\lambda^* \le 9$, then we return the mean in Step 5, and by Corollary A.25, $\|\mu_S - \mu_P\|_2 \le O(\sqrt{\varepsilon})$.
If $\lambda^* > 9$, then we return $S'$. Since at least one element of $S$ attains $|v^* \cdot (x - \mu_S)| = \max_{x \in S} |v^* \cdot (x - \mu_S)|$, whatever value of $Z$ is drawn, we still remove at least one element, and so have $S' \subset S$. By Lemma A.26, we have $\mathbb{E}_Z[|E'| + 2|L'|] \le |E| + 2|L|$.
Proof of Theorem A.16. Our input is a set $S$ of $N = \Theta((d/\varepsilon)\log d)$ $\varepsilon$-corrupted samples, so that with probability $9/10$, $S$ is a $2\varepsilon$-corrupted set of $\varepsilon$-good samples for $P$ by Lemmas A.18 and A.20. That is, we can write $S = G \cup E \setminus L$, where $G$ is an $\varepsilon$-good set, $|E| \le 2\varepsilon|S|$, and $|L| \le \varepsilon|S|$. Then, we iteratively apply FilterUnder2ndMoment until it outputs an approximation to the mean. Since each iteration removes a sample, this must happen within $N$ iterations. The algorithm takes at most $\mathrm{poly}(N, d) = \mathrm{poly}(d, 1/\varepsilon)$ time.
As long as we can show that the conditions of Proposition A.21 hold in each iteration, it ensures that $\|\mu_S - \mu_P\|_2 \le O(\sqrt{\varepsilon})$. However, the condition that $|L| \le 9\varepsilon|S|$ need not hold in general. Although in expectation we reject many more samples in $E$ than in $G$, it is possible that we are unlucky and reject many samples in $G$, which could make $L$ large in the next iteration. Thus, we need a bound on the probability that we ever have $|L| > 9\varepsilon|S|$.
We analyze the following procedure: We iteratively run FilterUnder2ndMoment, starting with the set $S_0 = S$ and producing from $S_i = G \cup E_i \setminus L_i$ a set $S_{i+1} = G \cup E_{i+1} \setminus L_{i+1}$. We stop if we output an approximation to the mean or if $|L_{i+1}| \ge 9\varepsilon|S|$. Since we then always satisfy the conditions of Proposition A.21, this gives that $\mathbb{E}_Z[|E_{i+1}| + 2|L_{i+1}|] \le |E_i| + 2|L_i|$. This expectation is conditioned on the state of the algorithm after previous iterations, which is determined by $S_i$. Thus, if we consider the random variables $X_i = |E_i| + 2|L_i|$, then we have $\mathbb{E}[X_{i+1} \mid S_i] \le X_i$, i.e., the sequence $X_i$ is a super-martingale with respect to $S_i$. Using the convention that $S_{i+1} = S_i$ if we stop in fewer than $i$ iterations, and recalling that we always stop within $N$ iterations, the algorithm fails if and only if $|L_N| > 9\varepsilon|S|$. By a simple induction or standard results on super-martingales, we have $\mathbb{E}[X_N] \le X_0$. Now $X_0 = |E_0| + 2|L_0| \le 3\varepsilon|S|$, so $\mathbb{E}[X_N] \le 3\varepsilon|S|$. By Markov's inequality, except with probability $1/6$, we have $X_N \le 18\varepsilon|S|$. In this case, $|L_N| \le X_N/2 \le 9\varepsilon|S|$. Therefore, the probability that we ever have $|L_i| > 9\varepsilon|S|$ is at most $1/6$.
By a union bound, the probability that the uncorrupted samples satisfy Lemma A.18 and that Proposition A.21 applies to every iteration is at least $9/10 - 1/6 \ge 2/3$. Thus, with at least $2/3$ probability, the algorithm outputs a vector $\hat{\mu}$ with $\|\hat{\mu} - \mu_P\|_2 \le O(\sqrt{\varepsilon})$.
A.3 Robust Covariance Estimation
In this subsection, we give a near sample-optimal efficient robust estimator for the covariance of a zero-mean
Gaussian density, thus proving Theorem 3.3. Our algorithm is essentially identical to the filtering algorithm
given in Section 8.2 of [DKK+ 16]. As in Section A.1 the only difference is a weaker definition of the “good
set of samples” (Definition A.27) and a concentration argument (Lemma A.28) showing that a random set
of uncorrupted samples of the appropriate size is good with high probability. Given these, the analysis of
this subsection follows straightforwardly from the analysis in Section 8.2 of [DKK+ 16] by plugging in the
modified parameters.
The algorithm Filter-Gaussian-Unknown-Covariance to robustly estimate the covariance of a mean
0 Gaussian in [DKK+ 16] is as follows:
Algorithm 4 Filter algorithm for a Gaussian with unknown covariance matrix.
1: procedure Filter-Gaussian-Unknown-Covariance(S′, ε, τ)
input: A multiset S′ such that there exists an (ε, τ)-good set S with ∆(S, S′) ≤ 2ε
output: Either a set S″ with ∆(S, S″) < ∆(S, S′) or the parameters of a Gaussian G′ with dTV(G, G′) = O(ε log(1/ε)).
Let C > 0 be a sufficiently large universal constant.
2: Let Σ′ be the matrix E_{X∈uS′}[XX^T] and let G′ be the mean 0 Gaussian with covariance matrix Σ′.
3: if there is any x ∈ S′ so that x^T(Σ′)⁻¹x ≥ Cd log(|S′|/τ) then
4:   return S″ = S′ − {x : x^T(Σ′)⁻¹x ≥ Cd log(|S′|/τ)}.
5: Compute an approximate eigendecomposition of Σ′ and use it to compute Σ′^(−1/2).
6: Let x(1), ..., x(|S′|) be the elements of S′.
7: For i = 1, ..., |S′|, let y(i) = Σ′^(−1/2) x(i) and z(i) = y(i)^⊗2.
8: Let T_{S′} = −I♭(I♭)^T + (1/|S′|) Σ_{i=1}^{|S′|} z(i) z(i)^T.
9: Approximate the top eigenvalue λ* and corresponding unit eigenvector v* of T_{S′}.
10: Let p*(x) = (1/√2)((Σ′^(−1/2)x)^T v*♯ (Σ′^(−1/2)x) − tr(v*♯)).
11: if λ* ≤ (1 + Cε log²(1/ε)) Q_{G′}(p*) then
12:   return G′
13: Let µ be the median value of p*(X) over X ∈ S′.
14: Find a T ≥ C′ so that Pr_{X∈uS′}(|p*(X) − µ| ≥ T + 4/3) ≥ Tail(T, d, ε, τ).
15: return S″ = {X ∈ S′ : |p*(X) − µ| < T}.
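The tensor bookkeeping in Steps 5-8 (whitening, flattening each y(i)^⊗2 into a vector z(i), and forming T_{S′} with the flattened identity I♭) can be sketched as follows; this is our own illustration of those steps only, with made-up names, and it omits the rest of the algorithm:

```python
import numpy as np

def build_T(S, Sigma):
    """Steps 5-8: whiten the points, flatten each y tensor y into a vector z,
    and form T = -I_flat I_flat^T + (1/N) sum_i z_i z_i^T."""
    d = S.shape[1]
    w, U = np.linalg.eigh(Sigma)                  # eigendecomposition (Step 5)
    inv_sqrt = U @ np.diag(w ** -0.5) @ U.T       # Sigma^{-1/2}
    Y = S @ inv_sqrt.T                            # y_i = Sigma^{-1/2} x_i
    Z = np.einsum('ni,nj->nij', Y, Y).reshape(len(S), d * d)  # z_i = y_i^{tensor 2}
    I_flat = np.eye(d).reshape(d * d)             # flattened identity I-flat
    return -np.outer(I_flat, I_flat) + Z.T @ Z / len(S)

rng = np.random.default_rng(2)
d = 4
S = rng.standard_normal((20000, d))
T = build_T(S, np.eye(d))
top = float(np.linalg.eigvalsh(T).max())
# For N(0, I) data the population version of T acts on matrices as M -> M + M^T,
# so its spectral norm is 2; `top` stays near 2 rather than growing with N.
```

The point of subtracting I♭(I♭)^T is that only the excess fourth-moment structure, not the Gaussian baseline, shows up in the top eigenvalue.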
In [DKK+16], we take Tail$(T, d, \varepsilon, \tau) = 12\exp(-T) + 3\varepsilon/(d\log(N/\tau))^2$, where $N = \Theta((d\log(d/\varepsilon\tau))^6/\varepsilon^2)$ is the number of samples we took there.
To get a near sample-optimal algorithm, we will need a weaker definition of a good set. To use this, we will need to weaken the tail bound in the algorithm to Tail$(T, d, \varepsilon, \tau) = \varepsilon/(T^2\log^2(T))$ when $T \ge 10\log(1/\varepsilon)$. For $T \le 10\log(1/\varepsilon)$, we take Tail$(T, d, \varepsilon, \tau) = 1$, so that we always choose $T \ge 10\log(1/\varepsilon)$. It is easy to show that the integrals of this tail bound used in the proofs of Lemma 8.19 and Claim 8.22 of [DKK+16] have similar bounds. Thus, our analysis here will sketch that these tail bounds hold for a set of $\Omega(d^2\log^5(d/\varepsilon\tau)/\varepsilon^2)$ samples from the Gaussian.
Firstly, we state the new, weaker, definition of a good set:
Definition A.27. Let $G$ be a Gaussian in $\mathbb{R}^d$ with mean $0$ and covariance $\Sigma$. Let $\varepsilon > 0$ be sufficiently small. We say that a multiset $S$ of points in $\mathbb{R}^d$ is $\varepsilon$-good with respect to $G$ if the following hold:
1. For all $x \in S$, $x^T \Sigma^{-1} x < d + O(\sqrt{d\log(d/\varepsilon)})$.
2. We have that $\|\Sigma^{-1/2}\mathrm{Cov}(S)\Sigma^{-1/2} - I\|_F = O(\varepsilon)$.
3. For all even degree-2 polynomials $p$, we have that $\mathrm{Var}(p(S)) = \mathrm{Var}(p(G))(1 + O(\varepsilon))$.
4. For $p$ an even degree-2 polynomial with $\mathbb{E}[p(G)] = 0$ and $\mathrm{Var}(p(G)) = 1$, and for any $T > 10\log(1/\varepsilon)$, we have that
$$\Pr_{x \in_u S}(|p(x)| > T) \le \varepsilon/(T^2\log^2(T)).$$
It is easy to see that the algorithm and analysis of [DKK+ 16] can be pushed through using the above
weaker definition. That is, if S is a good set, then G can be recovered to Õ(ε) error from an ε-corrupted
version of S. Our main task will be to show that random sets of the appropriate size are good with high
probability.
Proposition A.28. Let $N$ be a sufficiently large constant multiple of $d^2\log^5(d/\varepsilon)/\varepsilon^2$. Then a set $S$ of $N$ independent samples from $G$ is $\varepsilon$-good with respect to $G$ with high probability.
Proof. First, note that it suffices to prove this when G = N (0, I).
Condition 1 follows by standard concentration bounds on kxk22 .
Condition 2 follows by estimating the entry-wise error between Cov(S) and I.
Condition 3 is slightly more involved. Let $\{p_i\}$ be an orthonormal basis for the set of even, degree-2, mean-0 polynomials with respect to $G$. Define the matrix $M_{i,j} = \mathbb{E}_{x \in_u S}[p_i(x)p_j(x)] - \delta_{i,j}$. This condition is equivalent to $\|M\|_2 = O(\varepsilon)$. Thus, it suffices to show that $v^T M v = O(\varepsilon)$ for every $v$ with $\|v\|_2 = 1$. It actually suffices to consider a cover of such $v$'s; note that this cover will be of size $2^{O(d^2)}$. For each $v$, let $p_v = \sum_i v_i p_i$. We need to show that $\mathrm{Var}(p_v(S)) = 1 + O(\varepsilon)$. We can show this happens with probability $1 - 2^{-\Omega(d^2)}$, and thus it holds for all $v$ in our cover by a union bound.
Condition 4 is substantially the most difficult of these conditions to prove. Naively, we would want to find a cover of all possible $p$ and all possible $T$, and bound the probability that the desired condition fails. Unfortunately, the best a priori bound on $\Pr(|p(G)| > T)$ is on the order of $\exp(-T)$. As our cover would need to be of size $2^{d^2}$ or so, to make this work with $T = d$ we would require on the order of $d^3$ samples. However, we will note that this argument is sufficient to cover the case of $T < 10\log(1/\varepsilon)\log^2(d/\varepsilon)$.
Fortunately, most such polynomials $p$ satisfy much better tail bounds. Note that any even, mean-zero polynomial $p$ can be written in the form $p(x) = x^T A x - \mathrm{tr}(A)$ for some matrix $A$. We call $A$ the associated matrix to $p$. We note by the Hanson-Wright inequality that $\Pr(|p(G)| > T) = \exp(-\Omega(\min((T/\|A\|_F)^2, T/\|A\|_2)))$. Therefore, the tail bounds above are only as bad as described when $A$ has a single large eigenvalue. To take advantage of this, we will need to break $p$ into parts based on the size of its eigenvalues. We begin with a definition:
Definition A.29. Let $\mathcal{P}_k$ be the set of even, mean-0, degree-2 polynomials whose associated matrix $A$ satisfies:
1. $\mathrm{rank}(A) \le k$,
2. $\|A\|_2 \le 1/\sqrt{k}$.
Note that for $p \in \mathcal{P}_k$ we have $|p(x)| \le \|x\|_2^2/\sqrt{k} + \sqrt{k}$.
Importantly, any polynomial can be written in terms of these sets.
Lemma A.30. Let $p$ be an even, degree-2 polynomial with $\mathbb{E}[p(G)] = 0$, $\mathrm{Var}(p(G)) = 1$. Then if $t = \lfloor\log_2(d)\rfloor$, it is possible to write $p = 2(p_1 + p_2 + \ldots + p_{2^t} + p_d)$ where $p_k \in \mathcal{P}_k$.
Proof. Let $A$ be the associated matrix to $p$. Note that $\|A\|_F = \mathrm{Var}(p) = 1$. Let $A_k$ be the matrix corresponding to the top $k$ eigenvalues of $A$. We now let $p_1$ be the polynomial associated to $A_1/2$, $p_2$ be associated to $(A_2 - A_1)/2$, $p_4$ be associated to $(A_4 - A_2)/2$, and so on. It is clear that $p = 2(p_1 + p_2 + \ldots + p_{2^t} + p_d)$. It is also clear that the matrix associated to $p_k$ has rank at most $k$. If the matrix associated to $p_k$ had an eigenvalue of size more than $1/\sqrt{k}$, it would need to be the case that the $k/2$-nd largest eigenvalue of $A$ had size at least $2/\sqrt{k}$. This is impossible since the sum of the squares of the eigenvalues of $A$ is at most $1$. This completes our proof.
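The decomposition of Lemma A.30 is easy to carry out numerically. The sketch below is our own (helper names are ours): it splits a unit-Frobenius-norm symmetric matrix into the dyadic bands $A_1/2$, $(A_2 - A_1)/2$, $(A_4 - A_2)/2$, and so on, and the resulting pieces satisfy the rank and operator-norm constraints of Definition A.29:

```python
import numpy as np

def dyadic_bands(A):
    """Split symmetric A into pieces B_k = (A_k - A_{k/2})/2, where A_k keeps
    the k largest-magnitude eigenvalues; p is then 2 * sum of the pieces."""
    d = A.shape[0]
    w, U = np.linalg.eigh(A)
    order = np.argsort(-np.abs(w))         # eigenvalues by magnitude
    def top(k):
        idx = order[:k]
        return (U[:, idx] * w[idx]) @ U[:, idx].T
    bands, prev, k = {}, np.zeros_like(A), 1
    while k < d:
        bands[k] = (top(k) - prev) / 2
        prev = top(k)
        k *= 2
    bands[d] = (top(d) - prev) / 2         # the final p_d piece
    return bands

rng = np.random.default_rng(3)
d = 16
A = rng.standard_normal((d, d)); A = A + A.T
A /= np.linalg.norm(A, 'fro')              # normalize so ||A||_F = 1
bands = dyadic_bands(A)
recon = 2 * sum(bands.values())            # telescoping sum recovers A
```

Each band has rank at most $k$ by construction, and the eigenvalue argument in the proof is exactly why its spectral norm stays below $1/\sqrt{k}$.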
We will also need covers of each of these sets $\mathcal{P}_k$.
Lemma A.31. For each $k$, there exists a set $\mathcal{C}_k \subset \mathcal{P}_k$ so that:
1. For each $p \in \mathcal{P}_k$ there exists a $q \in \mathcal{C}_k$ so that $\|p(G) - q(G)\|_2 \le (\varepsilon/d)^2$.
2. $|\mathcal{C}_k| = 2^{O(dk\log(d/\varepsilon))}$.
Proof. We note that any such $p$ is associated to a matrix $A$ of the form $A = \sum_{i=1}^k \lambda_i v_i v_i^T$, for $\lambda_i \in [0, 1/\sqrt{k}]$ and $v_i$ orthonormal. It suffices to let $q$ correspond to a matrix $A' = \sum_{i=1}^k \mu_i w_i w_i^T$ with $|\lambda_i - \mu_i| < (\varepsilon/d)^3$ and $|v_i - w_i| < (\varepsilon/d)^3$ for all $i$. It is easy to let $\mu_i$ and $w_i$ range over covers of the interval and the sphere with appropriate errors. This gives a set of possible $q$'s of size $2^{O(dk\log(d/\varepsilon))}$, as desired. Unfortunately, some of these $q$ will not be in $\mathcal{P}_k$, as they will have eigenvalues that are too large. However, this is easily fixed by replacing each such $q$ by the closest element of $\mathcal{P}_k$. This completes our proof.
We next will show that these covers are sufficient to express any polynomial.
Lemma A.32. Let $p$ be an even degree-2 polynomial with $\mathbb{E}[p(G)] = 0$ and $\mathrm{Var}(p(G)) = 1$. It is possible to write $p$ as a sum of $O(\log(d))$ elements of some $\mathcal{C}_k$ plus another polynomial of $L^2$ norm at most $\varepsilon/d$.
Proof. Combining the above two lemmas, we have that any such $p$ can be written as
$$p = (q_1 + p_1) + (q_2 + p_2) + \ldots + (q_{2^t} + p_{2^t}) + (q_d + p_d) = q_1 + q_2 + \ldots + q_{2^t} + q_d + p',$$
where each $q_k$ above is in $\mathcal{C}_k$ and $\|p_k(G)\|_2 < (\varepsilon/d)^2$. Thus, $p' = p_1 + p_2 + \ldots + p_{2^t} + p_d$ has $\|p'(G)\|_2 \le \varepsilon/d$. This completes the proof.
The key observation now is that if $|p(x)| \ge T$ for $\|x\|_2 \le \sqrt{d/\varepsilon}$, then writing $p = q_1 + q_2 + q_4 + \ldots + q_d + p'$ as above, it must be the case that $|q_k(x)| > (T-1)/(2\log(d))$ for some $k$. Therefore, to prove our main result, it suffices to show that, with high probability over the choice of $S$, for any $T \ge 10\log(1/\varepsilon)\log^2(d/\varepsilon)$ and any $q \in \mathcal{C}_k$ for some $k$, we have $\Pr_{x \in_u S}(|q(x)| > T/(2\log(d))) < \varepsilon/(2T^2\log^2(T)\log(d))$. Note that this holds automatically for $T > d/\varepsilon$, as $p(x)$ cannot possibly be that large for $\|x\|_2 \le \sqrt{d/\varepsilon}$. Furthermore, note that, losing a constant factor in the probability, it suffices to show this only for $T$ a power of $2$.
Therefore, it suffices to show for every $k \le d$, every $q \in \mathcal{C}_k$, and every $d/\sqrt{k\varepsilon} \ge T \ge \log(1/\varepsilon)\log(d/\varepsilon)$, that with probability at least $1 - 2^{-\Omega(dk\log(d/\varepsilon))}$ over the choice of $S$ we have $\Pr_{x \in_u S}(|q(x)| > T) \le \varepsilon/(T^2\log^4(d/\varepsilon))$. However, by the Hanson-Wright inequality, we have that
$$\Pr(|q(G)| > T) = \exp(-\Omega(\min(T^2, T\sqrt{k}))) < \big(\varepsilon/(T^2\log^4(d/\varepsilon))\big)^2.$$
Therefore, by Chernoff bounds, the probability that more than an $\varepsilon/(T^2\log^4(d/\varepsilon))$-fraction of the elements of $S$ satisfy this property is at most
$$\exp\big(-\Omega(\min(T^2, T\sqrt{k}))\,|S|\,\varepsilon/(T^2\log^4(d/\varepsilon))\big) = \exp\big(-\Omega(|S|\varepsilon/(\log^4(d/\varepsilon))\min(1, \sqrt{k}/T))\big) \le \exp\big(-\Omega(|S|\varepsilon^2 k/(d\log^4(d/\varepsilon)))\big) \le \exp(-\Omega(dk\log(d/\varepsilon))),$$
as desired. This completes our proof.
B Omitted Details from Section 5
B.1 Full description of the distributions for experiments
Here we formally describe the distributions we used in our experiments. In all settings, our goal was to find
noise distributions so that noise points were not “obvious” outliers, in the sense that there is no obvious
pointwise pruning process which could throw away the noise points, which still gave the algorithms we tested
the most difficulty. We again remark that while other algorithms had varying performances depending on
the noise distribution, it seemed that the performance of ours was more or less unaffected by it.
Distribution for the synthetic mean experiment. Our uncorrupted points were generated by $N(\mu, I)$, where $\mu$ is the all-ones vector. Our noise distribution is given as
$$N = \tfrac{1}{2}\Pi_1 + \tfrac{1}{2}\Pi_2,$$
where $\Pi_1$ is the product distribution over the hypercube where every coordinate is $0$ or $1$ with probability $1/2$, and $\Pi_2$ is a product distribution where the first coordinate is either $0$ or $12$ with equal probability, the second coordinate is $-2$ or $0$ with equal probability, and all remaining coordinates are zero.
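For concreteness, the mixture $N = \frac{1}{2}\Pi_1 + \frac{1}{2}\Pi_2$ can be sampled as follows; this sketch is our own (taking the coordinate values exactly as stated above, with d the ambient dimension):

```python
import numpy as np

def sample_noise(n, d, rng):
    """Draw n points from N = (1/2)*Pi1 + (1/2)*Pi2 as described above."""
    out = np.empty((n, d))
    pick = rng.uniform(size=n) < 0.5
    # Pi1: each coordinate is 0 or 1 with probability 1/2
    out[pick] = rng.integers(0, 2, size=(int(pick.sum()), d))
    # Pi2: first coordinate 0 or 12, second coordinate -2 or 0, rest zero
    m = int((~pick).sum())
    pts = np.zeros((m, d))
    pts[:, 0] = 12 * rng.integers(0, 2, size=m)
    pts[:, 1] = -2 * rng.integers(0, 2, size=m)
    out[~pick] = pts
    return out

rng = np.random.default_rng(4)
X = sample_noise(1000, 10, rng)
```

Note that all coordinates stay in a moderate range, so none of these noise points is an "obvious" pointwise outlier.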
Distribution for the synthetic covariance experiment. For the isotropic synthetic covariance experiment, our uncorrupted points were generated by $N(0, I)$, and the noise points were all zeros. For the skewed synthetic covariance experiment, our uncorrupted points were generated by $N(0, I + 100 e_1 e_1^T)$, where $e_1$ is the first unit vector, and our noise points were generated as follows: we took a fixed random rotation of points of the form $Y_i \sim \Pi$, where $\Pi$ is a product distribution whose first $d/2$ coordinates are each uniformly selected from $\{-0.5, 0, 0.5\}$, whose next $d/2 - 1$ coordinates are each $0.8 \times A_i$, where for each coordinate $i$, $A_i$ is an independent random integer between $-2$ and $2$, and whose last coordinate is a uniformly random integer in $[-100, 100]$.
Setup for the semi-synthetic geographic experiment We took the 20 dimensional data from [NJB+ 08],
which was diagonalized, and randomly rotated it. This was to simulate the higher dimensional case, since
the singular vectors that [NJB+ 08] obtained did not seem to be sparse or analytically sparse. Our noise was
distributed as Π, where Π is a product distribution whose first d/2 coordinates are each uniformly random
integers between 0 and 2 and whose last d/2 coordinates are each uniformly randomly either 2 or 3, all scaled
by a factor of 1/24.
B.2 Comparison with other robust PCA methods on semi-synthetic data
In addition to comparing our results with simple pruning techniques, as we did in Figure 3 in the main text, we also compared our algorithm with other robust PCA techniques from the literature that have accessible implementations. In particular, we compared our technique with RANSAC-based techniques, LRVCov, two SDPs ([CLMW11, XCS10]) for variants of robust PCA, and an algorithm proposed
by [CLMW11] to speed up their SDP based on alternating descent. For the SDPs, since black box methods were too slow to run on the full data set (as [CLMW11] mentions, black-box solvers for the SDPs are
impractical above perhaps 100 data points), we subsample the data, and run the SDP on the subsampled
data. For each of these methods, we ran the algorithm on the true data points plus noise, where the noise
was generated as described above. We then take the estimate of the covariance it outputs, and project the
data points onto the top two singular values of this matrix, and plot the results in Figure 4.
Similar results occurred for most noise patterns we tried. We found that only our algorithm and LRVCov
were able to reasonably reconstruct Europe, in the presence of this noise. It is hard to judge qualitatively
which of the two maps generated is preferable, but it seems that ours stretches the picture somewhat less
than LRVCov.
[Figure 4 panels: Original Data, Filter Projection, RANSAC Projection, LRV Projection, CLMW SDP Projection, CLMW ADMM Projection, XCS Projection]
Figure 4: Comparison with other robust methods on the Europe semi-synthetic data. From left to right, top to bottom: the original projection without noise, what our algorithm recovers, RANSAC, LRVCov, the ADMM method proposed by [CLMW11], the SDP proposed by [XCS10] with subsampling, and the SDP proposed by [CLMW11] with subsampling.
GENERATING SETS OF REIDEMEISTER MOVES OF ORIENTED
SINGULAR LINKS AND QUANDLES
arXiv:1702.01150v1 [math.GT] 3 Feb 2017
KHALED BATAINEH, MOHAMED ELHAMDADI, MUSTAFA HAJIJ, AND WILLIAM YOUMANS
Abstract. We give a generating set of the generalized Reidemeister moves for oriented singular
links. We use it to introduce an algebraic structure arising from the study of oriented singular
knots. We give some examples, including some non-isomorphic families of such structures over
non-abelian groups. We show that the set of colorings of a singular knot by this new structure is
an invariant of oriented singular knots and use it to distinguish some singular links.
Contents
1. Introduction
2. Basics of quandles
3. Oriented singular knots and quandles
4. Oriented singquandles over groups
5. A generating set of oriented singular Reidemeister moves
6. Open questions
References
1. Introduction
The discovery of the Jones polynomial of links [10] generated a search which uncovered vast
families of invariants of knots and links, among them the Vassiliev knot invariants [16]. The Jones
polynomial and its relatives can be computed combinatorially using knot diagrams or their braid
representations. A fundamental relationship between the Jones polynomial and Vassiliev invariants
was established in the work of Birman and Lin [4], where they showed that Vassiliev invariants can be characterized by three axioms. Vassiliev changed the classical approach by studying the space of all knots instead of focusing on a single knot. As a result of this work, Vassiliev created the theory of singular knots and their invariants, which has gained considerable attention since then.
Singular knots and their invariants have proven to be important subjects of study on their own,
and many classical knot invariants have been successfully extended to singular knots. For example,
the work of Fiedler [9] where the author extended the Jones and Alexander polynomials to singular
knots. The colored Jones polynomial was generalized to singular knots in [2]. Jones-type invariants
for singular links were constructed using a Markov trace on a version of Hecke algebras in [12].
The main purpose of this paper is to relate the theory of quandles to the theory of singular knots.
In [6], the authors developed a type of involutory quandle structure called singquandles to study
non-oriented singular knots. This article solves the last open question given in [6] by introducing
certain algebraic structures with the intent of applying them to the case of oriented singular knots.
2000 Mathematics Subject Classification. Primary 57M25.
Key words and phrases. Generating sets of Reidemeister moves; Quandles, Singular Links.
We call these structures oriented singquandles. We give multiple examples of such structures and
use them to distinguish between various singular knots. Finally, for the purpose of constructing
the axioms of singquandles, it was necessary to construct a generating set of Reidemeister moves
acting on oriented singular links which can be found in section 5.
Organization. This article is organized as follows. In section 2 we review the basics of quandles. In section 3 we introduce the notion of oriented singquandles. In section 4 we focus our
work on oriented singquandles whose underlying structures rely on group actions and provide some
applications of singquandles to singular knot theory. Finally, in section 5 we detail a generating set
of Reidemeister moves for oriented singular knots and links.
2. Basics of quandles
Before we introduce any algebraic structures related to singular oriented links, we will need to
recall the definition of a quandle and give a few examples. For a more detailed exposition on
quandles, see [7, 11, 13].
Definition 2.1. A quandle is a set X with a binary operation (a, b) 7→ a ∗ b such that the following
axioms hold:
(1) For any a ∈ X, a ∗ a = a.
(2) For any a, b ∈ X, there is a unique x ∈ X such that a = x ∗ b.
(3) For any a, b, c ∈ X, we have (a ∗ b) ∗ c = (a ∗ c) ∗ (b ∗ c).
Axiom (2) of Definition 2.1 states that for each y ∈ X, the map ∗y : X → X with ∗y(x) := x ∗ y is a bijection. Its inverse will be denoted by ¯∗y : X → X with ¯∗y(x) = x ¯∗ y, so that (x ∗ y) ¯∗ y = x = (x ¯∗ y) ∗ y. Below we provide some typical examples of quandles.
• Any set X with the operation x ∗ y = x for any x, y ∈ X is a quandle called the trivial quandle.
• A group X = G with n-fold conjugation as the quandle operation: x ∗ y = y⁻ⁿxyⁿ.
• Let n be a positive integer. For elements x, y ∈ Zn (integers modulo n), define x ∗ y ≡ 2y − x (mod n). Then ∗ defines a quandle structure called the dihedral quandle, Rn.
• For any Z[T, T⁻¹]-module M, x ∗ y = T x + (1 − T)y, where x, y ∈ M, defines an Alexander quandle.
• Let ⟨ , ⟩ : Rⁿ × Rⁿ → R be a symmetric bilinear form on Rⁿ. Let X be the subset of Rⁿ consisting of vectors x such that ⟨x, x⟩ ≠ 0. Then the operation
x ∗ y = (2⟨x, y⟩/⟨y, y⟩) y − x
defines a quandle structure on X. Note that x ∗ y is the image of x under the reflection in y. This quandle is called a Coxeter quandle.
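The finite examples above can be verified by brute force; the helper below is our own and checks the three quandle axioms for the dihedral and Alexander quandles on Zₙ:

```python
from itertools import product

def is_quandle(X, op):
    """Brute-force check of the three quandle axioms on a finite set X."""
    X = list(X)
    idem = all(op(x, x) == x for x in X)                        # axiom (1)
    bij = all(len({op(x, y) for x in X}) == len(X) for y in X)  # axiom (2)
    dist = all(op(op(x, y), z) == op(op(x, z), op(y, z))        # axiom (3)
               for x, y, z in product(X, repeat=3))
    return idem and bij and dist

n = 7
dihedral = lambda x, y: (2 * y - x) % n             # dihedral quandle R_n
alexander = lambda x, y: (3 * x + (1 - 3) * y) % n  # Alexander with T = 3, invertible mod 7
assert is_quandle(range(n), dihedral)
assert is_quandle(range(n), alexander)
```

Since right translations on a finite set are bijective exactly when they are surjective, the second check suffices for axiom (2).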
A function φ : (X, ∗) → (Y, ·) is a quandle homomorphism if φ(a ∗ b) = φ(a) · φ(b) for any a, b ∈ X. A bijective quandle homomorphism is called a quandle isomorphism. For example, any map f_{a,b} : (Zn, ∗) → (Zn, ∗) defined by f_{a,b}(x) = ax + b, with a, b ∈ Zn and a invertible in Zn, is a quandle isomorphism, where x ∗ y = 2y − x (see [8] for more details).
3. Oriented singular knots and quandles
Recall that a singular link in S³ is the image of a smooth immersion of n circles in S³ that has finitely many double points, called singular points. An orientation of each circle induces an orientation on each component of the link. This gives an oriented singular link. In this paper we will assume
that any singular link is oriented unless specified otherwise. Furthermore, we will work with singular
link projections, or diagrams, which are projections of the singular link to the plane such that the
information at each crossing is preserved by leaving a little break in the lower strand. Two oriented
singular link diagrams are considered equivalent if and only if one can obtain one from the other
by a finite sequence of singular Reidemeister moves (see Figure 2).
In the case of classical knot theory, the axiomatization of the Reidemeister moves gives rise
to the definition of a quandle. One of the goals of this paper is to generalize the structure of
quandles by considering singular oriented knots and links. We will call this structure an oriented
singquandle (see the definition below). The axioms of oriented singquandles will be constructed
using a generating set of Reidemeister moves on oriented singular links (see Figure 2).
A semiarc in a singular link diagram L is an edge in the link L considered as a 4-valent graph.
The oriented singquandle axioms are obtained by associating elements of the oriented singquandle
to semiarcs in an oriented singular link diagram and letting these elements act on each other at
crossings as shown in the following figure:
Figure 1. Regular and singular crossings
Now the goal is to derive the axioms that the binary operators R1 and R2 in Figure 1 should
satisfy. For this purpose, we begin with the generating set of Reidemeister moves given in Figure 2.
The proof that this is a generating set will be postponed to section 5. Using this set of Reidemeister
moves, the singquandle axioms can be derived easily. This can be seen in Figures 3, 4 and 5.
Figure 3. The Reidemeister move Ω4a and colorings
Figure 2. A generating set of singular Reidemeister moves
Figure 4. The Reidemeister move Ω4e and colorings
Figure 5. The Reidemeister move Ω5a and colorings
The previous three figures immediately give us the following definition.
Definition 3.1. Let (X, ∗) be a quandle. Let R1 and R2 be two maps from X × X to X. The triple (X, ∗, R1, R2) is called an oriented singquandle if the following axioms are satisfied:
R1(x ¯∗ y, z) ∗ y = R1(x, z ∗ y) (coming from Ω4a) (3.1)
R2(x ¯∗ y, z) = R2(x, z ∗ y) ¯∗ y (coming from Ω4a) (3.2)
(y ¯∗ R1(x, z)) ∗ x = (y ∗ R2(x, z)) ¯∗ z (coming from Ω4e) (3.3)
R2(x, y) = R1(y, x ∗ y) (coming from Ω5a) (3.4)
R1(x, y) ∗ R2(x, y) = R2(y, x ∗ y) (coming from Ω5a) (3.5)
We give the following examples.
Example 3.2. Let x ∗ y = ax + (1 − a)y, where a is invertible, so that x ¯∗ y = a⁻¹x + (1 − a⁻¹)y. Now let R1(x, y) = bx + cy; then by axiom (3.4) we have R2(x, y) = acx + (c(1 − a) + b)y. By plugging these expressions into the above axioms we can find the relation c = 1 − b. Substituting, we find that the following is an oriented singquandle for any invertible a and any b in Zn:
x ∗ y = ax + (1 − a)y (3.6)
R1(x, y) = bx + (1 − b)y (3.7)
R2(x, y) = a(1 − b)x + (1 − a(1 − b))y (3.8)
It is worth noting that all of the above relations between constants can be derived from axiom (3.3), and the other axioms provide only trivial identities or the same relations. In this way, we can extend this generalized affine singquandle to the nonoriented case as well by allowing x ¯∗ y = x ∗ y, since axiom (3.3) will reduce to its counterpart in the axioms of nonoriented singquandles (see axiom 4.1 given in [6]).
With this observation we can generalize the class of involutive Alexander quandles into a class
of singquandles, which is given in the following example.
Example 3.3. Let Λ = Z[t±1 , v] and let X be a Λ-module. Then the operations
x ∗ y = tx + (1 − t)y,
R1(x, y) = α(a, b, c)x + (1 − α(a, b, c))y
and
R2(x, y) = t(1 − α(a, b, c))x + (1 − t(1 − α(a, b, c)))y,
where α(a, b, c) = at + bv + ctv, make X an oriented singquandle which we call an Alexander oriented
singquandle. That X is an oriented singquandle follows from Example 3.2 by straightforward
substitution.
A coloring of an oriented singular link L is a function C : R −→ X, where X is a fixed oriented
singquandle and R is the set of semiarcs in a fixed diagram of L, satisfying the conditions given in
Figure 1. Now the following lemma is immediate from Definition 3.1.
Lemma 3.4. The set of colorings of a singular knot by an oriented singquandle is an invariant of
oriented singular knots.
The set of colorings of a singular link L by an oriented singquandle X will be denoted by
ColX (L). As in the usual context of quandles, the notions of oriented singquandle homomorphisms
and isomorphisms are immediate.
Definition 3.5. Let (X, ∗, R1 , R2 ) and (Y, ., S1 , S2 ) be two oriented singquandles. A map f :
X −→ Y is a homomorphism if the following axioms are satisfied:
(1) f (x ∗ y) = f (x) . f (y),
(2) f (R1 (x, y)) = S1 (f (x), f (y)),
(3) f (R2 (x, y)) = S2 (f (x), f (y)).
If in addition f is a bijection then we say that (X, ∗, R1 , R2 ) and (Y, ., S1 , S2 ) are isomorphic. Note
that isomorphic oriented singquandles will induce the same set of colorings.
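A minimal sketch of Definition 3.5 (with parameter choices of our own): for the affine singquandles of Example 3.2 with the same a and b on Z15 and Z5, reduction mod 5 preserves ∗, R1 and R2, hence is a singquandle homomorphism.

```python
# Reduction mod 5 as a homomorphism of affine singquandles Z_15 -> Z_5.
# a = 2 is invertible mod 15 and mod 5; b = 7 is arbitrary (our choices).
a, b = 2, 7

def ops(n):
    """Affine singquandle operations of Example 3.2 on Z_n."""
    star = lambda x, y: (a * x + (1 - a) * y) % n
    R1 = lambda x, y: (b * x + (1 - b) * y) % n
    R2 = lambda x, y: (a * (1 - b) * x + (1 - a * (1 - b)) * y) % n
    return star, R1, R2

starX, R1X, R2X = ops(15)
starY, R1Y, R2Y = ops(5)
f = lambda x: x % 5            # the candidate homomorphism

ok = all(
    f(starX(x, y)) == starY(f(x), f(y)) and
    f(R1X(x, y)) == R1Y(f(x), f(y)) and
    f(R2X(x, y)) == R2Y(f(x), f(y))
    for x in range(15) for y in range(15)
)
print(ok)  # True
```

This works because 5 divides 15 and all three operations are given by the same integer polynomials.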
4. Oriented singquandles over groups
When the underlying set of an oriented singquandle X is a group, one obtains a rich family of
structures. In this section we give a variety of examples, including a generalization of affine oriented
singquandles as well as an infinite family of non-isomorphic singquandles over groups.
Example 4.1. Let X = G be an abelian group, with f being a group automorphism and g a group
endomorphism. Consider the operations x ∗ y = f (x) + y − f (y) and R1 (x, y) = g(y) + x − g(x).
We can deduce from axiom (3.4) that R2 (x, y) = g(f (x)) + y − g(f (y)). Plugging into the axioms,
we find that the axioms are satisfied when (f ◦ g)(x) = (g ◦ f )(x). This structure generalizes
Example 3.2, which follows as the special case where f (x) = ax and g(x) = (1 − b)x.
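A brute-force check of Example 4.1 (the group, matrices and modulus are our illustrative choices): on G = (Z4)^2 take f and g given by commuting matrices A and B = A^2, and verify the five axioms of Definition 3.1.

```python
# Example 4.1 on the abelian group G = (Z_4)^2 with f, g given by the
# commuting matrices A and B = A^2 (illustrative choices).
from itertools import product

n = 4
A = ((1, 1), (0, 1))        # automorphism: invertible mod 4
Ainv = ((1, 3), (0, 1))     # A^{-1} mod 4
B = ((1, 2), (0, 1))        # B = A^2, so A B = B A

app = lambda M, v: tuple((M[i][0] * v[0] + M[i][1] * v[1]) % n for i in range(2))
add = lambda u, v: tuple((u[i] + v[i]) % n for i in range(2))
sub = lambda u, v: tuple((u[i] - v[i]) % n for i in range(2))

f, g = (lambda v: app(A, v)), (lambda v: app(B, v))
finv = lambda v: app(Ainv, v)

star = lambda x, y: add(f(x), sub(y, f(y)))         # x * y = f(x) + y - f(y)
sbar = lambda x, y: add(finv(x), sub(y, finv(y)))   # inverse operation
R1 = lambda x, y: add(g(y), sub(x, g(x)))
R2 = lambda x, y: add(g(f(x)), sub(y, g(f(y))))

G = list(product(range(n), repeat=2))
ok = all(
    star(R1(sbar(x, y), z), y) == R1(x, star(z, y)) and            # (3.1)
    R2(sbar(x, y), z) == sbar(R2(x, star(z, y)), y) and            # (3.2)
    star(sbar(y, R1(x, z)), x) == sbar(star(y, R2(x, z)), z) and   # (3.3)
    R2(x, y) == R1(y, star(x, y)) and                              # (3.4)
    star(R1(x, y), R2(x, y)) == R2(y, star(x, y))                  # (3.5)
    for x, y, z in product(G, repeat=3)
)
print(ok)  # True
```

Only the commutativity f ◦ g = g ◦ f and the linearity of f, g are actually used, in line with the example.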
Example 4.2. Let X = G be a group, and take the n-fold conjugation quandle from Definition 2.1
with n = 1 such that x ∗ y = y −1 xy. Then a direct computation shows that (X, ∗, R1 , R2 )
is a singquandle if and only if R1 and R2 satisfy the following equations:
y^{-1}R1(yxy^{-1}, z)y = R1(x, y^{-1}zy)    (4.1)
R2(yxy^{-1}, z) = yR2(x, y^{-1}zy)y^{-1}    (4.2)
x^{-1}R1(x, z)y[R1(x, z)]^{-1}x = z[R2(x, z)]^{-1}y[R2(x, z)]z^{-1}    (4.3)
R2(x, y) = R1(y, y^{-1}xy)    (4.4)
[R2(x, y)]^{-1}R1(x, y)R2(x, y) = R2(y, y^{-1}xy)    (4.5)
A straightforward computation gives the following solutions, for all x, y ∈ G.
(1) R1 (x, y) = x and R2 (x, y) = y.
(2) R1 (x, y) = xyxy −1 x−1 and R2 (x, y) = xyx−1 .
(3) R1 (x, y) = y −1 xy and R2 (x, y) = y −1 x−1 yxy.
(4) R1 (x, y) = xy −1 x−1 yx, and R2 (x, y) = x−1 y −1 xy 2 .
(5) R1 (x, y) = y(x−1 y)n and R2 (x, y) = (y −1 x)n+1 y, where n ≥ 1.
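These solutions can be checked mechanically in a small non-abelian group. The sketch below (our own test over S3) verifies solution (3) against the conjugation-quandle conditions, with axiom (3.5) specialized to x ∗ y = y^{-1}xy read as [R2(x, y)]^{-1}R1(x, y)R2(x, y) = R2(y, y^{-1}xy).

```python
# Brute-force verification of solution (3) of Example 4.2 over S3:
# R1(x, y) = y^{-1} x y,  R2(x, y) = y^{-1} x^{-1} y x y.
from itertools import permutations, product

G = list(permutations(range(3)))
mul = lambda p, q: tuple(p[q[i]] for i in range(3))   # composition in S3
def inv(p):
    r = [0] * 3
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)
def prod(*ps):
    out = ps[0]
    for p in ps[1:]:
        out = mul(out, p)
    return out

conj = lambda x, y: prod(inv(y), x, y)                # x * y = y^{-1} x y
R1 = lambda x, y: conj(x, y)
R2 = lambda x, y: prod(inv(y), inv(x), y, x, y)

ok = True
for x, y, z in product(G, repeat=3):
    ok &= prod(inv(y), R1(conj(x, inv(y)), z), y) == R1(x, conj(z, y))       # (4.1)
    ok &= R2(conj(x, inv(y)), z) == prod(y, R2(x, conj(z, y)), inv(y))       # (4.2)
    a, b2 = R1(x, z), R2(x, z)
    ok &= prod(inv(x), a, y, inv(a), x) == prod(z, inv(b2), y, b2, inv(z))   # (4.3)
    ok &= R2(x, y) == R1(y, conj(x, y))                                      # (4.4)
    ok &= prod(inv(R2(x, y)), R1(x, y), R2(x, y)) == R2(y, conj(x, y))       # (4.5)
print(ok)  # True
```

The other listed solutions can be checked the same way by swapping in their R1, R2.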
Next we focus our attention on a subset of infinite families of oriented singquandle structures in
order to show that some in fact are not isomorphic.
Proposition 4.3. Let X = G be a non-abelian group with the binary operation x ∗ y = y −1 xy.
Then, for n ≥ 1, the following families of maps R1 and R2 make (X, ∗, R1 , R2 ) pairwise
non-isomorphic oriented singquandles on G:
(1) R1 (x, y) = x(xy −1 )n and R2 (x, y) = y(x−1 y)n ,
(2) R1 (x, y) = (xy −1 )n x and R2 (x, y) = (x−1 y)n y,
(3) R1 (x, y) = x(yx−1 )n+1 and R2 (x, y) = x(y −1 x)n .
Furthermore, in each of the cases (1), (2) and (3), different values of n also give non-isomorphic
singquandles.
Proof. A direct computation shows that R1 and R2 satisfy the five axioms of Definition 3.1.
Figure 6. An oriented Hopf link
To see that the three solutions are pairwise non-isomorphic singquandles, we compute the set
of colorings of a singular knot by each of the three solutions. Consider the singular link given in
figure 6 and color the top arcs by elements x and y of G. Then it is easy to see that the set of
colorings is given by
ColX (L) = {(x, y) ∈ G × G | x = R1(y, y^{-1}xy), y = R2(y, y^{-1}xy)}.    (4.6)

Using R1(x, y) = x(xy^{-1})^n and R2(x, y) = y(x^{-1}y)^n from solution (1), the set of colorings becomes:

ColX (L) = {(x, y) ∈ G × G | (x^{-1}y)^{n+1} = 1},    (4.7)

while the set of colorings of the link L with R1(x, y) = (xy^{-1})^n x and R2(x, y) = (x^{-1}y)^n y from
solution (2) is:

ColX (L) = {(x, y) ∈ G × G | x^{-1}(x^{-1}y)^n y = 1}.    (4.8)

Finally, the set of colorings of the same link with R1(x, y) = x(yx^{-1})^{n+1} and R2(x, y) = x(y^{-1}x)^n
from solution (3) is:

ColX (L) = {(x, y) ∈ G × G | (y^{-1}x)^n = 1}.    (4.9)
This allows us to conclude that solutions (1), (2), and (3) are pairwise non-isomorphic oriented
singquandles. In fact these computations also give that different values of n in any of the three
solutions (1), (2) and (3) give non-isomorphic oriented singquandles. We exclude the case of n = 0
as solutions (1) and (2) become equivalent.
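The coloring computation in the proof can be reproduced concretely; the script below (our own example, with G = S3 and n = 1) counts the colorings of the link of Figure 6 for the three families and gets three different sizes.

```python
# Coloring counts for the three families of Proposition 4.3 over S3, n = 1.
from itertools import permutations, product

G = list(permutations(range(3)))
mul = lambda p, q: tuple(p[q[i]] for i in range(3))
def inv(p):
    r = [0] * 3
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)
def prod(*ps):
    out = ps[0]
    for p in ps[1:]:
        out = mul(out, p)
    return out
conj = lambda x, y: prod(inv(y), x, y)                # x * y = y^{-1} x y

families = [  # (R1, R2) with n = 1
    (lambda x, y: prod(x, x, inv(y)),            lambda x, y: prod(y, inv(x), y)),  # (1)
    (lambda x, y: prod(x, inv(y), x),            lambda x, y: prod(inv(x), y, y)),  # (2)
    (lambda x, y: prod(x, y, inv(x), y, inv(x)), lambda x, y: prod(x, inv(y), x)),  # (3)
]
# Coloring condition (4.6): x = R1(y, y^{-1}xy) and y = R2(y, y^{-1}xy).
sizes = [
    sum(1 for x, y in product(G, repeat=2)
        if x == R1(y, conj(x, y)) and y == R2(y, conj(x, y)))
    for R1, R2 in families
]
print(sizes)  # [24, 18, 6]
```

The conditions reduce to (x^{-1}y)^2 = 1, x^2 = y^2 and x = y respectively, so the three sizes are necessarily different.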
We illustrate here how these new structures can be used to distinguish between oriented singular
knots and links.
Example 4.4. We choose X to be a non-abelian group and consider the 1-fold conjugation quandle
on X along with the binary operations R1 and R2 given by R1 (x, y) = x2 y −1 and R2 (x, y) = yx−1 y.
By considering solution (1) in Proposition 4.3 when n = 1 we know that X is an oriented singquandle.
In this example we show how this oriented singquandle can be used to distinguish between two
singular knots that differ only in orientation.
Color the two arcs on the top of the knot on the left of Figure 7 by elements x and y. This
implies that the coloring space is the diagonal in G × G. On the other hand, the knot on
the right-hand side of Figure 7 has the coloring space {(x, y) ∈ G × G | xyx−1 = yxy −1 }. Since this
set is not the same as the diagonal of G × G, the coloring invariant distinguishes these two oriented
singular knots.
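The two coloring spaces described above can be compared directly in a small group; below we take G = S3 (our choice) and check that the set {(x, y) : xyx^{-1} = yxy^{-1}} strictly contains the diagonal.

```python
# Comparing the two coloring spaces of Example 4.4 over S3.
from itertools import permutations, product

G = list(permutations(range(3)))
mul = lambda p, q: tuple(p[q[i]] for i in range(3))
def inv(p):
    r = [0] * 3
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

diag = {(x, x) for x in G}
other = {(x, y) for x, y in product(G, repeat=2)
         if mul(mul(x, y), inv(x)) == mul(mul(y, x), inv(y))}
print(diag < other, len(diag), len(other))  # True 6 12
```

In S3 the second set also contains all ordered pairs of distinct transpositions, so the two spaces differ and the invariant distinguishes the two orientations.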
Figure 7. Two oriented singular Hopf links with one singular point
5. A generating set of oriented singular Reidemeister moves
The purpose of this section is to give a generating set of oriented singular Reidemeister moves.
We used this generating set to write the axioms of our singquandle structure in Section 3.
Minimal generating sets for oriented Reidemeister moves have been studied previously by Polyak
in [15], where he proved that 4 moves are sufficient to generate the set for classical knots. However,
analogous work for singular knot theory seems to be missing from the literature. In this section we
will give a generating set of Reidemeister Moves for oriented singular links.
It was proven in [15] that the moves in Figure 8 constitute a minimal generating set of Reidemeister moves on oriented classical knots. For convenience we will use the same Ω notation used
by Polyak [15] in our paper.
Figure 8. A generating set of Reidemeister moves for oriented knots.
To obtain a generating set of oriented singular Reidemeister moves, we enumerate all possible
such moves and then show that they can all be obtained by a finite sequence of the moves given
in Figure 2. Recall first that in the nonoriented case of singular knots, we have the four singular
Reidemeister moves given in Figure 9.
If we consider the oriented case, we can easily compute the maximum number of Reidemeister
moves to consider. For Ω4 moves we have 8 independent orientations, and for Ω5 we will have 6.
Figure 9. All four possible moves on nonoriented singular knots that involve singular crossings.
While a combinatorially driven approach yields a handful more, it is easily seen that some moves
are simply rotations of other moves, so they are not considered. The 14 Ω4 and Ω5 moves
are tabulated in Figure 10 for reference.
Figure 10. All 14 oriented moves involving singular crossings.
Theorem 5.1. Only 3 oriented singular Reidemeister moves are required to generate the entire
set of moves. These three moves are Ω4a, Ω4e, and Ω5a.
To show each move's dependence on this generating set, we will need to invoke Reidemeister
moves of type Ω1 and Ω2 while performing transformations. The specific moves are given in [15].
To prove this theorem, we will first show that the moves of type Ω4 are generated by two unique
moves. We will formulate this as a separate theorem:
Theorem 5.2. Only 2 oriented singular Reidemeister moves of type Ω4 are required to generate
all type Ω4 moves. These moves are Ω4a and Ω4e.
Lemma 5.3. The move Ω4c is equivalent to Ω2a ∪ Ω4a ∪ Ω2d.
Proof.
Lemma 5.4. The move Ω4d is equivalent to Ω2c ∪ Ω4a ∪ Ω2d.
Proof.
Lemma 5.5. The move Ω4g is equivalent to Ω2c ∪ Ω4e ∪ Ω2d.
Proof.
Lemma 5.6. The move Ω4h is equivalent to Ω2c ∪ Ω4e ∪ Ω2d.
Proof.
Lemma 5.7. The move Ω4b is equivalent to Ω2a ∪ Ω2c ∪ Ω4a ∪ Ω2d ∪ Ω2b.
Proof. By applying Lemma 5.4 we see that Ω4d reduces further, and the lemma follows.
Lemma 5.8. The move Ω4f is equivalent to Ω2b ∪ Ω2c ∪ Ω4e ∪ Ω2d ∪ Ω2a.
Proof. By applying Lemma 5.6 we see that Ω4h reduces further, and the lemma follows.
From here it remains to show that all moves of type Ω5 can be generated using only the Ω5a
move. We formulate this as a theorem:
Theorem 5.9. Only 1 oriented singular Reidemeister move of type Ω5 is required to generate all
type Ω5 moves.
The proof of Theorem 5.9 is a consequence of the following Lemmas.
Lemma 5.10. The move Ω5b is equivalent to Ω1a ∪ Ω4a ∪ Ω5d ∪ Ω4e ∪ Ω1a.
Proof.
Lemma 5.11. The move Ω5c is equivalent to Ω1b ∪ Ω4a ∪ Ω5d ∪ Ω4e ∪ Ω1b.
Proof.
Lemma 5.12. The move Ω5e is equivalent to Ω1c ∪ Ω4e ∪ Ω5a ∪ Ω4a ∪ Ω1c.
Proof.
Lemma 5.13. The move Ω5f is equivalent to Ω1d ∪ Ω4e ∪ Ω5a ∪ Ω4a ∪ Ω1d.
Proof.
At this point, all Ω5 moves have been shown to depend on only Ω4a, Ω4e, Ω5a, and Ω5d. The
last step remaining is to eliminate Ω5d.
Lemma 5.14. The move Ω5d can be realized by a combination of Ω2 moves, and one Ω5a move.
Proof.
6. Open questions
The following are some open questions for future research:
• Find other generating sets of oriented singular Reidemeister moves and prove their minimality.
• Define a notion of extensions of oriented singquandles as in [5].
• Define a cohomology theory of oriented singquandles and use low dimensional cocycles to
construct invariants that generalize the number of colorings of singular knots by oriented
singquandles.
References
[1] John C. Baez, Link invariants of finite type and perturbation theory, Lett. Math. Phys. 26 (1992), no. 1, 43–51,
doi: 10.1007/BF00420517. MR1193625 (93k:57006)
[2] Khaled Bataineh, Mohamed Elhamdadi, and Mustafa Hajij, The colored Jones polynomial of singular knots, New
York J. Math 22 (2016), 1439–1456.
[3] Joan S. Birman, New points of view in knot theory, Bull. Amer. Math. Soc. (N.S.) 28 (1993), no. 2, 253–287.
MR1191478 (94b:57007)
[4] Joan S. Birman and Xiao-Song Lin, Knot polynomials and Vassiliev’s invariants, Invent. Math. 111 (1993), no. 2,
225–270, doi: 10.1007/BF01231287. MR1198809
[5] J. Scott Carter, Mohamed Elhamdadi, Marina Appiou Nikiforou, and Masahico Saito, Extensions of quandles and
cocycle knot invariants, J. Knot Theory Ramifications 12 (2003), no. 6, 725–738, doi: 10.1142/S0218216503002718.
MR2008876
[6] Indu R. U. Churchill, Mohamed Elhamdadi, Mustafa Hajij, and Sam Nelson, Singular Knots and Involutive
Quandles, arXiv:1608.08163, 2016.
[7] Mohamed Elhamdadi and Sam Nelson, Quandles—an introduction to the algebra of knots, Student Mathematical
Library, vol. 74, American Mathematical Society, Providence, RI, 2015. MR3379534
[8] Mohamed Elhamdadi, Jennifer Macquarrie, and Ricardo Restrepo, Automorphism groups of quandles, J. Algebra
Appl. 11 (2012), no. 1, 1250008, 9, doi: 10.1142/S0219498812500089. MR2900878
[9] Thomas Fiedler, The Jones and Alexander polynomials for singular links, J. Knot Theory Ramifications 19 (2010),
no. 7, 859–866. MR2673687 (2012b:57024)
[10] V. F. R. Jones, Hecke algebra representations of braid groups and link polynomials, Ann. of Math. (2) 126 (1987),
no. 2, 335–388, doi: 10.2307/1971403. MR908150
[11] David Joyce, A classifying invariant of knots, the knot quandle, J. Pure Appl. Algebra 23 (1982), no. 1, 37–65,
doi: 10.1016/0022-4049(82)90077-9. MR638121
[12] Jesús Juyumaya and Sofia Lambropoulou, An invariant for singular knots, Journal of Knot Theory and Its
Ramifications 18 (2009), no. 06, 825–840.
GENERATING SETS OF REIDEMEISTER MOVES OF ORIENTED SINGULAR LINKS AND QUANDLES
14
[13] S. V. Matveev, Distributive groupoids in knot theory, Mat. Sb. (N.S.) 119(161) (1982), no. 1, 78–88, 160
(Russian). MR672410
[14] Luis Paris, The proof of Birman’s conjecture on singular braid monoids, Geom. Topol. 8 (2004), 1281–1300
(electronic). MR2087084
[15] Michael Polyak, Minimal generating sets of Reidemeister moves, Quantum Topol. 1 (2010), no. 4, 399–411, doi:
10.4171/QT/10. MR2733246
[16] V. A. Vassiliev, Cohomology of knot spaces, Theory of singularities and its applications, Adv. Soviet Math.,
vol. 1, Amer. Math. Soc., Providence, RI, 1990, pp. 23–69. MR1089670
Jordan University of Science and Technology, Irbid, Jordan
E-mail address: [email protected]
University of South Florida, Tampa, USA
E-mail address: [email protected]
University of South Florida, Tampa, USA
E-mail address: [email protected]
University of South Florida, Tampa, USA
E-mail address: [email protected]
| 4 |
COMMUTING VARIETIES OF r-TUPLES OVER
LIE ALGEBRAS
arXiv:1209.1659v2 [math.RT] 7 Nov 2013
NHAM V. NGO
Abstract. Let G be a simple algebraic group defined over an algebraically closed field k of characteristic p and let g be the Lie algebra of G. It is well known that for p large enough the spectrum
of the cohomology ring for the r-th Frobenius kernel of G is homeomorphic to the commuting variety of r-tuples of elements in the nilpotent cone of g [Suslin-Friedlander-Bendel, J. Amer. Math.
Soc, 10 (1997), 693–728]. In this paper, we study both geometric and algebraic properties including irreducibility, singularity, normality and Cohen-Macaulayness of the commuting varieties
Cr (gl2 ), Cr (sl2 ) and Cr (N ) where N is the nilpotent cone of sl2 . Our calculations lead us to state
a conjecture on Cohen-Macaulayness for commuting varieties of r-tuples. Furthermore, in the case
when g = sl2 , we obtain interesting results about commuting varieties when adding more restrictions into each tuple. In the case of sl3 , we are able to verify the aforementioned properties for
Cr (u). Finally, applying our calculations on the commuting variety Cr (Osub ) where Osub is the
closure of the subregular orbit in sl3 , we prove that the nilpotent commuting variety Cr (N ) has
singularities of codimension ≥ 2.
1. Introduction
1.1. Let k be an algebraically closed field of characteristic p (possibly p = 0). For a Lie algebra
g over k and a closed subvariety V of g, the commuting variety of r-tuples over V is defined as
the collection of all r-tuples of pairwise commuting elements in V . In particular, for each positive
integer r, we define
Cr (V ) = {(v1 , . . . , vr ) ∈ V r | [vi , vj ] = 0 for all 1 ≤ i ≤ j ≤ r}.
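Although the paper works over an algebraically closed field, the definition makes sense over any field, and small cases can be enumerated; the toy computation below (our own) counts C2(N) for sl2 over F3, using that a 2 × 2 matrix is nilpotent iff its trace and determinant vanish.

```python
# Toy finite-field count of C_2(N) for g = sl_2 over F_3.
from itertools import product

p = 3
# Trace-zero 2x2 matrices [[a, b], [c, -a]] over F_3.
M = [((a, b), (c, (-a) % p)) for a, b, c in product(range(p), repeat=3)]
det = lambda m: (m[0][0] * m[1][1] - m[0][1] * m[1][0]) % p
mul = lambda m, k: tuple(tuple(sum(m[i][s] * k[s][j] for s in range(2)) % p
                               for j in range(2)) for i in range(2))

N = [m for m in M if det(m) == 0]   # nilpotent cone of sl_2(F_3)
C2 = [(u, v) for u in N for v in N if mul(u, v) == mul(v, u)]
print(len(N), len(C2))  # 9 nilpotent matrices, 33 commuting pairs
```

The count 33 = 9 + 8 · 3 reflects that a nonzero nilpotent u commutes with exactly the three nilpotent multiples of itself.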
When r = 2, we call Cr (V ) an ordinary commuting variety (cf. [Vas]). Let N be the nilpotent
cone of g. We then call Cr (N ) the nilpotent commuting variety of r-tuples. For simplicity, from
now on we will call Cr (V ) a commuting variety over V whenever r > 2 in order to distinguish it
from ordinary commuting varieties.
The variety C2 (g) over a reductive Lie algebra g was first proved to be irreducible in characteristic
0 by Richardson in 1979 [R]. In positive characteristics, Levy showed that C2 (g) is irreducible under
certain mild assumptions on G [L]. In 2003, Premet completely determined irreducible components
of the nilpotent commuting variety C2 (N ) for an arbitrary reductive Lie algebra in characteristic 0
or p, a good prime for the root system of G. In particular, he showed that the nilpotent commuting
variety C2 (N ) is equal to the union of irreducible components C(ei ) = G · (ei , Lie(Z[G,G] (ei ))), where
the ei ’s are representatives for distinguished nilpotent orbits in g (which contain elements whose
centralizers are in the nilpotent cone).
Up to now, the properties of being Cohen-Macaulay and normal for ordinary commuting varieties
are in general not verified [K]. There is a long-standing conjecture stating that the commuting
variety C2 (g) is always normal (see [Po] and [Pr]). Artin and Hochster claimed that C2 (gln ) is
a Cohen-Macaulay integral domain (cf. [K],[MS]). This is verified up to n = 4 by the computer
program Macaulay [Hr]. There is not much hope for verifying Cohen-Macaulayness of nilpotent
commuting varieties since their defining ideals are not radical, thus creating great difficulties for
computer calculations. As pointed out by Premet, all components of C2 (N ) share the origin 0, so
if it is reducible then it can never be normal.
Not much is known about commuting varieties. In case V = gln , commuting varieties were
studied by Gerstenhaber, Guralnick-Sethuraman, and Kirillov-Neretin. Gerstenhaber proved that
Cr (gln ) is reducible for all n ≥ 4 and r ≥ 5 [G]. In 1987, Kirillov and Neretin lowered the bound
on r by simply showing that C4 (gl4 ) is reducible. Moreover, they proved that when n = 2 or 3 the
commuting variety Cr (gln ) is irreducible for all r ≥ 1 [KN]. In 2000, Guralnick and Sethuraman
studied the case when r = 3 and concluded that C3 (gln ) can be either irreducible (for n ≤ 10) or
reducible (for n ≥ 30) [GS], [S]. In general, the study of Cr (V ) for arbitrary r remains mysterious
even in simple cases.
1.2. Main results. The results in the present paper were motivated by investigating the cohomology for Frobenius kernels of algebraic groups. To be more precise, let G be an algebraic group
defined over k, and let Gr be the r-th Frobenius kernel of G. Then there is a homeomorphism
between the maximal ideal spectrum of the cohomology ring for Gr and the nilpotent commuting
variety over the Lie algebra Lie(G) whenever the characteristic p is large enough [SFB1, SFB2].
In this paper, we tackle the irreducibility, normality and Cohen-Macaulayness of this variety for
simple cases. The main results are summarized in the following.
Theorem 1.2.1. Suppose p > 2. Then for each r ≥ 1, the commuting varieties Cr (gl2 ), Cr (sl2 )
and Cr (N ) are irreducible, normal and Cohen-Macaulay.
For readers’ convenience, we sketch the structure of the paper as follows. We first introduce
notation, terminology and conventions in Sections 2 and 3. We also prove several properties related
to the map m : G ×B Cr (u) → Cr (N ) where u is the Lie algebra of the unipotent radical of a fixed
Borel subgroup B of G, and N is the nilpotent cone of g. For instance, we show that the map is
surjective and satisfies the hypotheses of Zariski’s Main Theorem. These results are analogous to
those for the moment map from G×B u to N . In the next section, we first show a connection between
Cr (gln ) and Cr (sln ) for arbitrary n, r ≥ 1. This link reduces all of the work for Cr (gln ) to that for
Cr (sln ). Then we consider the case n = 2 and prove the properties of being irreducible, normal,
and Cohen-Macaulay for both varieties by exploiting a fact of determinantal rings. In addition, the
analogs are shown for the nilpotent commuting variety over rank 2 matrices in Section 5. It should
be noticed that the nilpotency condition makes the defining ideal of this variety non-radical, thus
creating more difficulties in our task. In the case when g = sl2 , to obtain the Cohen-Macaulayness
of Cr (N ), we first analyze the geometry by intersecting Cr (N ) with a hypersurface and reduce the
problem to showing that a certain class of ideals are radical. Then we prove that this family of
ideals belongs to a principal radical system, a deep concept in commutative algebra introduced by
Hochster [BV]. As a consequence, we show that the moment map m : G ×B ur → Cr (N ) admits
rational singularities (cf. Proposition 5.3.1).
As an application we can compute the characters of the coordinate algebra for this variety (cf.
Theorem 5.4). Combining this with an explicit calculation for the reduced Gr -cohomology ring in
[N], we obtain an alternative proof for the main result in [SFB1] for the case when G = SL2 .
In Section 6, we study commuting varieties with additional restrictions on each tuple. In particular, let V1 , . . . , Vr be closed subvarieties of a Lie algebra g. Define
C(V1 , . . . , Vr ) = {(v1 , . . . , vr ) ∈ V1 × · · · × Vr | [vi , vj ] = 0 , 1 ≤ i ≤ j ≤ r},
a mixed commuting variety over V1 , . . . , Vr . In the case where g = sl2 , we can explicitly describe
the irreducible decomposition for any mixed commuting variety over g and N . This shows that
such varieties are mostly not Cohen-Macaulay or normal.
Section 7 involves results about the geometric structure of nilpotent commuting varieties over
various sets of 3 by 3 matrices. In particular, let N be the nilpotent cone of sl3 . We first study
the variety Cr (u) and then obtain the irreducibility and dimension of Cr (N ). Next we apply our
calculations on Cr (Osub ) to classify singularities of Cr (N ) and show that they are in codimension
greater than or equal to 2 (cf. Theorem 7.2.3), here Osub is the closure of the subregular orbit in
N . This result indicates that the variety Cr (N ) satisfies the necessary condition (R1) of Serre’s
criterion for normality.
2. Notation
2.1. Root systems and combinatorics. Let k be an algebraically closed field of characteristic
p. Let G be a simple, simply-connected algebraic group over k, defined and split over the prime
field Fp . Fix a maximal torus T ⊂ G, also split over Fp , and let Φ be the root system of T in G. Fix
a set Π = {α1 , . . . , αn } of simple roots in Φ, and let Φ+ be the corresponding set of positive roots.
Let B ⊆ G be the Borel subgroup of G containing T and corresponding to the set of negative roots
Φ− , and let U ⊆ B be the unipotent radical of B. Write U + ⊆ B + for the opposite subgroups. Set
g = Lie(G), the Lie algebra of G, b = Lie(B), u = Lie(U ).
2.2. Nilpotent orbits. We will follow the same conventions as in [Hum2] and [Jan]. Given a
G-variety V and a point v of V , we denote by Ov the G-orbit of v (i.e., Ov = G · v). For example,
consider the nilpotent cone N of g as a G-variety with the adjoint action. There are well-known
orbits: Oreg = G · vreg , Osubreg = G · vsubreg (which we abbreviate by Osub ), and Omin = G · vmin
where vreg , vsubreg , and vmin are representatives for the regular, subregular, and minimal orbits.
Denote by z(v) and Z(v) respectively the centralizers of v in g and G. It is well-known that
dim z(v) = dim Z(v) and dim Ov = dim G − dim z(v). For convenience, we write zreg (zsub and
zmin ) for the centralizers of vreg (vsub or vmin ). It is also useful to keep in mind that every orbit is
a smooth variety. Sometimes, we use OV for the structure sheaf of a variety V (see [CM], [Hum2]
for more details).
2.3. Basic algebraic geometry conventions. Let R be a commutative Noetherian ring with
identity. We use Rred to denote the reduced ring R/√0, where √0 is the radical ideal of the trivial
ideal 0, which consists of all nilpotent elements of R. Let Spec R be the spectrum of all prime
ideals of R. If V is a closed subvariety of an affine space An , we denote by I(V ) the radical ideal
of k[An ] = k[x1 , . . . , xn ] associated to this variety.
Given a G-variety V , B acts freely on G × V by setting b · (g, v) = (gb−1 , bv) for all b ∈ B, g ∈ G
and v ∈ V . The notation G ×B V stands for the fiber bundle associated with the projection
π : G ×B V → G/B with fiber V . Topologically, G ×B V is the quotient space of G × V in which
the equivalence relation is given as
(g, v) ∼ (g ′ , v ′ ) ⇔ (g′ , v ′ ) = b · (g, v) for some b ∈ B.
In other words, each equivalence class of G ×B V represents a B-orbit in G × V . The map m :
G ×B V → G · V defined by mapping each equivalence class [g, v] to the element g · v for all
g ∈ G, v ∈ V is called the moment morphism. It is obviously surjective. Let X be an affine variety.
Then we always write k[X] for the coordinate ring of X which is the same as the ring of global
sections OX (X). Although G ×B V is not affine, we still denote k[G ×B V ] for its ring of global
sections. It is sometimes useful to make the identification k[G ×B V ] ∼
= k[G × V ]B .
Let f : X → Y be a morphism of varieties. Denote by f∗ the direct image functor from the
category of sheaves over X to the category of sheaves over Y . One can see that this is a left exact
functor. Hence, we have the right derived functors of this direct image. We call these functors
higher direct images and denote them by Ri f∗ with i > 0. In particular, if Y = Spec A is affine
and F is a quasi-coherent sheaf on X, then we have Ri f∗ (F) ∼
= L(Hi (X, F)) where L is the exact
functor mapping an A-module M to its associated sheaf L(M ). Here we follow conventions in [H].
3. Commutative algebra and Geometry
In this section we introduce concepts in commutative algebra and geometry that play important
roles in the later sections. In particular, we recall the definition of Cohen-Macaulay varieties and
their properties. We also show that the moment map G ×B Cr (u) → Cr (N ) is always surjective
for arbitrary type of G, and that it is a proper birational morphism. We then review a number of
well-known results in algebraic geometry.
3.1. Cohen-Macaulay Rings. We first define regular sequences, which are the key ingredient in
the definition of Cohen-Macaulay rings. Readers can refer to [E] and [H] for more details.
Definition 3.1.1. Let R be a commutative ring and let M be an R-module. A sequence x1 , . . . , xn ∈
R is called a regular sequence on M (or an M -sequence) if it satisfies
(1) (x1 , . . . , xn )M 6= M , and
(2) for each 1 ≤ i ≤ n, xi is not a zero-divisor of M/(x1 , . . . , xi−1 )M .
Consider R as a left R-module. For a given ideal I of R, it is well-known that the length of any
maximal regular sequence in I is unique. It is called the depth of I and denoted by depth(I). The
height or codimension of a prime ideal J of R is the supremum of the lengths of chains of prime
ideals descending from J. Equivalently, it is defined as the Krull dimension of R/J. In particular, if
R is an integral domain that is finitely generated over a field, then codim(J) = dim R − dim(R/J).
We are now ready to define a Cohen-Macaulay ring.
Definition 3.1.2. A ring R is called Cohen-Macaulay if depth(I) = codim(I) for each maximal
ideal I of R. A variety V is called Cohen-Macaulay if its coordinate ring k[V ] is a Cohen-Macaulay
ring.
Example 3.1.3. Smooth varieties are Cohen-Macaulay. The nilpotent cone N of a simple Lie
algebra g over an algebraically closed field of good characteristic is Cohen-Macaulay [Jan, 8.5].
3.2. Minors and Determinantal rings. Let U = (uij ) be an m × n matrix over a ring R. For
indices a1 , . . . , at , b1 , . . . , bt such that 1 ≤ ai ≤ m, 1 ≤ bi ≤ n, i = 1, . . . , t, we put
\[
[a_1, \ldots, a_t \mid b_1, \ldots, b_t] = \det \begin{pmatrix} u_{a_1 b_1} & \cdots & u_{a_1 b_t} \\ \vdots & \ddots & \vdots \\ u_{a_t b_1} & \cdots & u_{a_t b_t} \end{pmatrix}.
\]
We do not require that a1 , . . . , at and b1 , . . . , bt are given in ascending order. Note that
[a1 , . . . , at | b1 , . . . , bt ] = 0
if t > min(m, n). For convenience, we let [∅ | ∅] = 1. If a1 ≤ · · · ≤ at and b1 ≤ · · · ≤ bt we call
[a1 , . . . , at | b1 , . . . , bt ] a t-minor of U .
Definition 3.2.1. Let R be a commutative ring, and consider an m × n matrix
\[
X = \begin{pmatrix} x_{11} & \cdots & x_{1n} \\ \vdots & \ddots & \vdots \\ x_{m1} & \cdots & x_{mn} \end{pmatrix}
\]
whose entries are independent indeterminates over R. Let R(X) be the polynomial ring over all
the indeterminates of X, and let It (X) be the ideal in R(X) generated by all t-minors of X. For
each t ≥ 1, the ring
Rt (X) = R(X)/It (X)
is called a determinantal ring.
For readers’ convenience, we recall nice properties of determinantal rings as follows.
Proposition 3.2.2. [BV, 1.11, 2.10, 2.11, 2.12] If R is a reduced ring, then for every 1 ≤ t ≤
min(m, n), Rt (X) is a reduced, Cohen-Macaulay, normal domain of dimension (t−1)(m+n−t+1).
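As a concrete illustration (our own, not from [BV]), the variety cut out by I2(X) for a 2 × 3 matrix consists of the matrices of rank at most 1; over F2 a brute-force count gives 1 + (2^2 − 1)(2^3 − 1)/(2 − 1) = 22 points.

```python
# Point count of a determinantal variety: 2x3 matrices over F_2 with all
# 2-minors zero, i.e. matrices of rank <= 1.
from itertools import product

q = 2
count = 0
for e in product(range(q), repeat=6):
    m = (e[0:3], e[3:6])
    minors_zero = all(
        (m[0][i] * m[1][j] - m[0][j] * m[1][i]) % q == 0
        for i in range(3) for j in range(i + 1, 3)
    )
    count += minors_zero
print(count)  # 22 = zero matrix + 21 rank-one matrices
```

This matches the orbit count: the zero matrix plus (q^2 − 1)(q^3 − 1)/(q − 1) rank-one matrices.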
3.3. The moment morphism. Suppose G is a simple algebraic group. It is well-known that
N = G · u where u is the Lie algebra of the unipotent radical subgroup U of the Borel subgroup B
of G, and the dot is the adjoint action of G on the Lie algebra g. Note that if u1 , u2 are commuting
in u, then so are g · u1 , g · u2 in N for each g ∈ G. This observation can be generalized to give the
following moment map
m : G ×B Cr (u) → Cr (N )
by setting m[g, (u1 , . . . , ur )] = (g · u1 , . . . , g · ur ) for all g ∈ G, and (u1 , . . . , ur ) ∈ Cr (u). In the
case when r = 1, this is the moment map in the Springer resolution. Therefore, we also call it the
moment morphism for each r ≥ 1. The following proposition shows surjectivity of this morphism.
Theorem 3.3.1.¹ The moment morphism m : G ×B Cr (u) → Cr (N ) is always surjective.
Proof. Suppose (v1 , . . . , vr ) ∈ Cr (N ). Let b′ be the vector subspace of g spanned by the vi . As
[vi , vj ] = 0 for all 1 ≤ i, j ≤ r, b′ is an abelian, hence solvable, Lie subalgebra of g. Thus, there
exists a maximal solvable subalgebra b′′ of g containing b′ . By [Hum1, Theorem 16.4], b′′ and our
Borel subalgebra b are conjugate under some inner automorphism Ad(g) with g ∈ G. So there exist
u1 , . . . , ur ∈ b such that
(v1 , . . . , vr ) = Ad(g −1 )(u1 , . . . , ur ) = g−1 · (u1 , . . . , ur ) = m[g−1 , (u1 , . . . , ur )].
As all the vi are nilpotent and commuting, so are the ui . This shows that m is surjective.
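The statement Cr (N ) = G · Cr (u) behind this proof can be sanity-checked in a finite toy case (our own choice of sl2 over F3): every commuting pair of nilpotent 2 × 2 matrices is simultaneously conjugated into the strictly upper-triangular matrices by some g ∈ GL2(F3).

```python
# Finite sanity check of Theorem 3.3.1 for sl_2 over F_3.
from itertools import product

p = 3
mats = list(product(range(p), repeat=4))     # (a, b, c, d) ~ [[a, b], [c, d]]
det = lambda m: (m[0] * m[3] - m[1] * m[2]) % p
mul = lambda m, k: ((m[0] * k[0] + m[1] * k[2]) % p, (m[0] * k[1] + m[1] * k[3]) % p,
                    (m[2] * k[0] + m[3] * k[2]) % p, (m[2] * k[1] + m[3] * k[3]) % p)
inv = lambda m: tuple((pow(det(m), -1, p) * c) % p          # Python 3.8+ modular inverse
                      for c in (m[3], -m[1], -m[2], m[0]))

GL = [m for m in mats if det(m) != 0]
Nil = [m for m in mats if (m[0] + m[3]) % p == 0 and det(m) == 0]

def strictly_upper(m):
    return m[0] == m[2] == m[3] == 0

ok = all(
    any(strictly_upper(mul(mul(inv(g), u), g)) and
        strictly_upper(mul(mul(inv(g), v), g)) for g in GL)
    for u, v in product(Nil, repeat=2) if mul(u, v) == mul(v, u)
)
print(ok)  # True
```

Here the strictly upper-triangular matrices play the role of Cr (u) for the standard Borel, mirroring the conjugation argument in the proof.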
As a corollary, we establish the connection between irreducibility of Cr (u) and Cr (N ).
Theorem 3.3.2. For each r ≥ 1, if Cr (u) is irreducible then so is Cr (N ).
Proof. As the moment morphism G × Cr (u) → Cr (N ) is surjective and G is irreducible, the irreducibility of Cr (N ) follows from that of Cr (u).
3.4. Zariski’s Main Theorem. Zariski’s Main Theorem is one of the powerful tools to study
structure sheaves of two schemes. In this subsection, we state the version for varieties and show that
the moment map in the preceding subsection satisfies the hypotheses of Zariski’s Main Theorem.
We first look at proper morphisms. As defining properness requires terminology from algebraic
geometry, we refer readers to [H, Section II.4] for the details. We only introduce some important
characterizations of proper morphisms which will be useful later.
Proposition 3.4.1. In the following properties, all the morphisms are taken over Noetherian
schemes.
(a) A closed immersion is proper.
(b) The composition of two proper morphims is proper.
(c) A projection X × Y → X is proper if Y is projective.
We also recall that a rational map ϕ : X → Y (which is a morphism only defined on some open
subset) is called birational if it has an inverse rational map.
There are many versions of Zariski’s Main Theorem. Here we state a “pre-version” of the
theorem since the Main Theorem immediately follows from this result (cf. [H, Corollary III.11.3
and III.11.4]).
Theorem 3.4.2. Let f : X → Y be a birational proper morphism of varieties and suppose Y is
normal. Then f∗ OX = OY .
We now verify that the morphism m : G ×B Cr (u) → Cr (N ) satisfies the hypotheses of Zariski’s
Main Theorem. In other words, we have
(The author would like to thank Christopher M. Drupieski for the main idea in the proof of Theorem 3.3.1.)
NHAM V. NGO
Proposition 3.4.3. For each r ≥ 1, the moment morphism m : G ×B Cr (u) → Cr (N ) is birational
proper.
Proof. We generalize the proofs of Lemmas 1 and 2 in [Jan, 6.10]. For the properness, we consider
the map
ε : G ×B Cr (u) ↪ G/B × Cr (N )
with ε[g, (u1 , . . . , ur )] = (gB, g · (u1 , . . . , ur )) for all g ∈ G and (u1 , . . . , ur ) ∈ Cr (u). By the same
argument as in [Jan, 6.4], we can show that this map is a closed embedding; hence a proper
morphism by Proposition 3.4.1(a). Next, as G/B is projective, the projection map p : G/B ×
Cr (N ) → Cr (N ) is proper by Proposition 3.4.1(c). Therefore, part (b) of Proposition 3.4.1 implies
that m = p ◦ ε is also proper.
Consider the projection of Cr (N ) onto the first factor, p1 : Cr (N ) → N . Recall that zreg is the centralizer of a fixed regular element vreg in N . From Lemma 35.6.7 in [TY], zreg is a commutative Lie algebra. Then we have

p1−1 (Oreg ) = C(Oreg , N , . . . , N ) = G · (vreg , zreg , . . . , zreg ).

As Oreg is an open subset of N , the preimage p1−1 (Oreg ) is open in Cr (N ). Let V = p1−1 (Oreg ). Since ZG (vreg ) ⊆ B, the morphism m induces an isomorphism from m−1 (V ) onto V . It follows that m is a birational morphism.
Remark 3.4.4. We have not shown that m satisfies all the hypotheses of Zariski’s Main Theorem, since the normality of Cr (N ) is still unknown. As mentioned in Section 1, the variety Cr (N ) is normal only when it is irreducible. In particular, Premet proved that C2 (N ) is irreducible if and only if G is of type A [Pr]. By considering the natural projection map Cr (N ) → C2 (N ), it follows that if G is not of type A then Cr (N ) is reducible for each r ≥ 2. In this paper we prove for arbitrary r ≥ 1 that the variety Cr (N ) is irreducible for types A1 and A2 . The result for type An with arbitrary n, r > 2 remains an open problem.
Conjecture 3.4.5. If G is of type A, then Cr (N ) is irreducible. Moreover, it is normal. In other words, the morphism

m : G ×B Cr (u) → Cr (N )

satisfies all the hypotheses of Zariski’s Main Theorem.
3.5. Singularities and Resolutions. Here we state an observation on determining the singular points of an affine variety defined by homogeneous polynomials, and then define a resolution of singularities.
Proposition 3.5.1. Let V be an affine variety whose defining radical ideal is generated by a nonempty set of homogeneous polynomials of degree at least 2. Then 0 is always a singular point of V .
Proof. Suppose V is an affine subvariety of the affine space Am associated with the coordinate ring k[x1 , . . . , xm ]. Let f1 , . . . , fn be the set of polynomials defining V . Note that dim V < m since n ≥ 1. Consider the Jacobian matrix

(∂fi /∂xj )_{1≤i≤n, 1≤j≤m} .

As all the fi are homogeneous of degree ≥ 2, we have (∂fi /∂xj )(0, . . . , 0) = 0 for all 1 ≤ i ≤ n, 1 ≤ j ≤ m. It follows that the tangent space at 0 has dimension m, which is greater than dim V . Thus 0 is a singular point of V .
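The vanishing of the Jacobian at the origin can be checked symbolically. The following SymPy sketch (an illustration only) uses the defining polynomial x² + yz of the sl2 nilpotent cone as an example:

```python
import sympy as sp

# The sl2 nilpotent cone N is cut out by the homogeneous degree-2
# polynomial x^2 + y*z in A^3.
x, y, z = sp.symbols('x y z')
polys = [x**2 + y*z]
J = sp.Matrix([[sp.diff(f, v) for v in (x, y, z)] for f in polys])

# Every partial derivative is homogeneous of degree >= 1, so the whole
# Jacobian vanishes at the origin: the tangent space there is all of A^3.
J0 = J.subs({x: 0, y: 0, z: 0})
assert J0 == sp.zeros(1, 3)
```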
COMMUTING VARIETIES OF r-TUPLES OVER LIE ALGEBRAS
Definition 3.5.2. A variety X has a resolution of singularities if there exists a non-singular variety
Y such that there is a proper birational morphism from Y to X.
Definition 3.5.3. A variety X has rational singularities if it is normal and has a resolution of singularities

f : Y → X

such that the higher direct images Ri f∗ OY vanish for i ≥ 1. (Sometimes one calls f a rational resolution.)
In Lie theory, the nullcone N admits a resolution of singularities, the Springer resolution, and also has rational singularities. One of our goals in the present paper is to generalize this resolution to the nilpotent commuting variety over a rank two Lie algebra.
4. Commuting Varieties over 2 by 2 matrices
4.1. Recall that showing that the commuting variety over n × n matrices is Cohen-Macaulay and normal is very difficult. For ordinary commuting varieties, computer verification works up to n = 4 [Hr]. There are also some studies on the Cohen-Macaulayness of other structures closely related to ordinary commuting varieties by Knutson, Mueller, Zolbanin–Snapp and Zoque (cf. [K], [Mu], [MS], [Z]). Very little appears to be known for commuting varieties in general. In this section, we confirm the properties of being Cohen-Macaulay and normal for Cr (gl2 ) and Cr (sl2 ) with arbitrary r ≥ 1.
4.2. Nice properties of Cr (gl2 ) and Cr (sl2 ). We first show a general result connecting the
commuting varieties over gln and sln .
Theorem 4.2.1. For each n and r ≥ 1, if p does not divide n, then there is an isomorphism of varieties from Cr (gln ) to Cr (sln ) × Ar defined by setting

(1)  ϕ : (v1 , . . . , vr ) 7→ ( v1 − (Tr(v1 )/n) In , . . . , vr − (Tr(vr )/n) In ) × (Tr(v1 ), . . . , Tr(vr ))

for vi ∈ gln .
Proof. It is easy to see that adding or subtracting cIn from the vi does not change the commuting conditions on the vi . So the morphism ϕ in (1) is well-defined, and its inverse is

ϕ−1 : (u1 , . . . , ur ) × (a1 , . . . , ar ) 7→ ( u1 + (a1 /n) In , . . . , ur + (ar /n) In ).

This completes the proof.
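The maps ϕ and ϕ−1 can be tested numerically. The sketch below is illustrative only; it builds a commuting pair in gl3 over the reals (a matrix and a polynomial in it) and checks that subtracting (Tr(v)/n) In preserves the commuting conditions and that ϕ−1 recovers the original tuple:

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
# A commuting pair in gl_n: a matrix and a polynomial in it.
v1 = rng.standard_normal((n, n))
v2 = v1 @ v1 + 2.0 * v1 + np.eye(n)
assert np.allclose(v1 @ v2, v2 @ v1)

# phi from Theorem 4.2.1: subtract (Tr(v)/n) I to land in sl_n, keep the traces.
def phi(vs):
    traceless = [v - (np.trace(v) / n) * np.eye(n) for v in vs]
    return traceless, [np.trace(v) for v in vs]

(u1, u2), (a1, a2) = phi([v1, v2])
assert abs(np.trace(u1)) < 1e-10 and abs(np.trace(u2)) < 1e-10  # now in sl_n
assert np.allclose(u1 @ u2, u2 @ u1)                            # still commute
# phi^{-1} recovers the original tuple.
assert np.allclose(u1 + (a1 / n) * np.eye(n), v1)
assert np.allclose(u2 + (a2 / n) * np.eye(n), v2)
```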
This result implies that our work for Cr (gl2 ) will be done if we can prove that Cr (sl2 ) is Cohen-Macaulay. Notice that Popov proved the normality of this variety in the case r = 2 [Po, 1.10]. However, his proof depends on computer calculations to verify that the defining ideal is radical. Here we propose another approach that completely solves the problem for arbitrary r. Let slr2 be the affine space defined as

slr2 = { ( [[x1 , y1 ], [z1 , −x1 ]] , . . . , [[xr , yr ], [zr , −xr ]] ) | xi , yi , zi ∈ k, 1 ≤ i ≤ r },

where [[a, b], [c, d]] denotes the 2 × 2 matrix with rows (a, b) and (c, d).
Then the variety Cr (sl2 ) can be defined as the zero locus of the following ideal:

(2)  J = ⟨xi yj − xj yi , yi zj − yj zi , xi zj − xj zi | 1 ≤ i ≤ j ≤ r⟩.
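As a sanity check on (2), one can verify symbolically that the commutator of two elements of sl2 vanishes exactly when the three 2-minors vanish (in characteristic ≠ 2). A SymPy sketch for a single pair (i, j) = (1, 2):

```python
import sympy as sp

x1, y1, z1, x2, y2, z2 = sp.symbols('x1 y1 z1 x2 y2 z2')
A = sp.Matrix([[x1, y1], [z1, -x1]])
B = sp.Matrix([[x2, y2], [z2, -x2]])
C = A * B - B * A

# The three generators of the ideal J for the pair (1, 2).
m_xy = x1*y2 - x2*y1
m_yz = y1*z2 - y2*z1
m_xz = x1*z2 - x2*z1

# Each entry of [A, B] is (up to sign and a factor of 2) one of the minors,
# so [A, B] = 0 iff the minors vanish, provided char k != 2.
assert sp.expand(C[0, 0] - m_yz) == 0
assert sp.expand(C[0, 1] - 2*m_xy) == 0
assert sp.expand(C[1, 0] + 2*m_xz) == 0
assert sp.expand(C[1, 1] + m_yz) == 0
```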
Proposition 4.2.2. For each r ≥ 1, the variety Cr (sl2 ) is:
(a) irreducible of dimension r + 2,
(b) Cohen-Macaulay and normal.
(The author would like to thank William Graham for his assistance in generalizing Theorem 4.2.1 to gln .)
Proof. It was shown by Kirillov and Neretin that Cr (gl2 ) and Cr (gl3 ) are irreducible for all r in characteristic 0 [KN, Theorem 4]. The irreducibility of Cr (sl2 ) in characteristic 0 then follows from Theorem 4.2.1. Here we provide a general proof for any characteristic p ≠ 2.

Consider the ideal J above as the ideal I2 (X ) generated by all 2-minors of the matrix

X = [[x1 , x2 , . . . , xr ], [y1 , y2 , . . . , yr ], [z1 , z2 , . . . , zr ]]

(rows listed in order). Then we can identify k[Cr (sl2 )] with the determinantal ring R2 (X ) = k[X ]/I2 (X ). It follows immediately from Proposition 3.2.2 that R2 (X ) is a Cohen-Macaulay and normal domain, hence completing the proof.
Corollary 4.2.3. Suppose p 6= 2. Then for each r ≥ 1, the variety Cr (gl2 ) is
(a) irreducible of dimension 2r + 2,
(b) Cohen-Macaulay and normal.
Proof. Follows immediately from Theorem 4.2.1 and Proposition 4.2.2.
This computation allows us to state a conjecture about Cohen-Macaulayness for commuting
varieties which is a generalization of that for ordinary commuting varieties.
Conjecture 4.2.4. Suppose p ∤ n. Then both commuting varieties Cr (gln ) and Cr (sln ) are Cohen-Macaulay.
5. Nilpotent Commuting Varieties over sl2
With the nilpotency condition, problems involving commuting varieties turn out to be more difficult. The irreducibility of ordinary nilpotent commuting varieties was studied by Baranovsky, Premet, Basili and Iarrobino (cf. [Ba], [Pr], [B], [BI]). However, there has not been any successful work on normality and Cohen-Macaulayness even in simple cases. In this section, we completely settle these questions for the nilpotent commuting variety over sl2 .
5.1. Irreducibility. Let G = SL2 , g = sl2 , and k be an algebraically closed field of characteristic p ≠ 2. The nilpotent cone of g can then be written as

N = { [[x, y], [z, −x]] | x² + yz = 0, x, y, z ∈ k },

writing 2 × 2 matrices row by row.
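The description of N follows from the Cayley–Hamilton theorem: a traceless 2 × 2 matrix A satisfies A² = −det(A) I = (x² + yz) I, so A is nilpotent exactly when x² + yz = 0. A one-line SymPy check:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
A = sp.Matrix([[x, y], [z, -x]])
# Cayley-Hamilton for a traceless 2x2 matrix: A^2 = (x^2 + y*z) * I,
# so A is nilpotent iff x^2 + y*z = 0.
assert A**2 - (x**2 + y*z) * sp.eye(2) == sp.zeros(2, 2)
```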
Note that for each r ≥ 1, Cr (u) = ur , so that the moment map in Section 3.3 can be rewritten as
(3)
m : G ×B ur → Cr (N ).
As G/B and ur are smooth varieties, so is the vector bundle G ×B ur . In addition, the moment map
m is known to be proper birational from Proposition 3.4.3. It follows that G ×B ur is a resolution
of singularities for Cr (N ) through the morphism m. We will see later in this section that m is in
fact a rational resolution. First we study geometric properties of Cr (N ).
Proposition 5.1.1. For every r ≥ 1, we have
(a) Cr (N ) is an irreducible variety of dimension r + 1.
(b) The only singular point in Cr (N ) is the origin 0.
Proof.
(a) The first part follows immediately from the surjectivity of m in (3) and the fact that G ×B ur
is irreducible. Now since G ×B ur is a vector bundle over the base G/B with each fiber isomorphic
to ur , we have
dim G ×B ur = dim G/B + dim ur = r + 1.
As m is birational, Cr (N ) has the same dimension as G ×B ur .
(b) By Proposition 3.5.1 we immediately get that 0 is a singular point of the variety. It is enough to show that every non-zero element of Cr (N ) belongs to a smooth open subset of dimension r + 1. Let 0 ≠ v = (v1 , . . . , vr ) ∈ Cr (N ). We can assume that v1 ≠ 0 ∈ N . Then, considering the projection onto the first factor p : Cr (N ) → N , we see that v ∈ G · (v1 , ur−1 ) = p−1 (G · v1 ), which is open in Cr (N ) as G · v1 is the regular orbit in N . Now we define an action of the group G × kr−1 on Cr (N ) as follows:

(G × kr−1 ) × Cr (N ) → Cr (N ),
(g, a1 , . . . , ar−1 ) • (v1 , . . . , vr ) 7−→ g · (v1 , a1 v2 , . . . , ar−1 vr ).

It is easy to see that p−1 (G · v1 ) = (G × kr−1 ) • (v1 , v1 , . . . , v1 ). As every orbit is itself a smooth variety, we obtain that p−1 (G · v1 ) is smooth of dimension r + 1.
5.2. Cohen-Macaulayness. We denote by ∩∗ the scheme-theoretic intersection, to distinguish it from the ordinary intersection of varieties. Before showing that Cr (N ) is Cohen-Macaulay, we need some lemmas on Cohen-Macaulay varieties. The first one is an exercise in [E, Exercise 18.13] (see also [BV, Lemma 5.15]).
Lemma 5.2.1. Let X, Y be two Cohen-Macaulay varieties of the same dimension. Suppose the scheme-theoretic intersection X ∩∗ Y is of codimension 1 in both X and Y . Then X ∩∗ Y is Cohen-Macaulay if and only if X ∪ Y is Cohen-Macaulay.
The following lemmas involve properties of radical ideals that determine whether a scheme-theoretic intersection ∩∗ coincides with the ordinary intersection of varieties ∩.
Lemma 5.2.2. Let I ⊳ k[x1 , . . . , xm ] be the radical ideal associated to a variety V in Am . Then the variety V × 0 ⊂ Am × An is represented by the ideal I + ⟨y1 , . . . , yn ⟩ ⊳ k[x1 , . . . , xm , y1 , . . . , yn ].
Proof. It is easy to see that we have an isomorphism of rings

k[x1 , . . . , xm , y1 , . . . , yn ] / (I + ⟨y1 , . . . , yn ⟩) ∼= k[x1 , . . . , xm ] / I,

where the latter ring is reduced. The result immediately follows.
Lemma 5.2.3. Let I1 , I2 be radical ideals of k[x1 , . . . , xm ], and let J1 , J2 be ideals of R = k[x1 , . . . , xm , y1 , . . . , yn ]. If I2 ⊆ I1 and J1 ⊆ J2 , then we have

√(I1 + J1 ) + √(I2 + J2 ) = I1 + J2 ,

provided I1 + J2 is a radical ideal of R.

In particular, suppose V1 ⊆ V2 are varieties in Am and W is a variety in An . Then we have

(V1 × W ) ∩∗ (V2 × 0) = (V1 × W ) ∩ (V2 × 0) = V1 × 0.
Proof. It is well-known that √(I1 + J1 ) + √(I2 + J2 ) ⊆ √(I1 + J2 ) = I1 + J2 . On the other hand, I1 + J2 ⊆ √(I1 + J1 ) + √(I2 + J2 ). This shows the first part of the lemma.

For the remainder, let I(V1 × W ) = √(I1 + J1 ), with I1 = I(V1 ) and J1 an ideal of R, and let I(V2 × 0) = √(I2 + J2 ), where I2 = I(V2 ) and J2 = ⟨y1 , . . . , yn ⟩, by the preceding lemma. Then I1 + J2 is radical, again by the preceding lemma. It can be seen that J1 ⊆ J2 as ideals of R. Hence the first statement of the lemma implies that

(V1 × W ) ∩∗ (V2 × 0) = Spec( R / (I1 + J2 ) ) = V1 × 0.
(In [E], although the term “scheme-theoretic intersection” is not used, it is implicit in the exercise referenced in Lemma 5.2.1 that the intersection is scheme-theoretic.)
For each r ≥ 1, the variety Cr (N ) is defined as the zero locus of the family of polynomials

{xi² + yi zi , xi yj − xj yi , xi zj − xj zi , yi zj − yj zi | 1 ≤ i ≤ j ≤ r}

in R = k[slr2 ] = k[xi , yi , zi | 1 ≤ i ≤ r]. It is easy to check by computer that the ideal generated by these polynomials is not radical. This causes some difficulties in investigating the algebraic properties of this variety, such as Cohen-Macaulayness. Let Ir be the radical of the ideal generated by the family of polynomials above in R. We first reduce the problem to checking a certain condition in commutative algebra.
Lemma 5.2.4. If Is + ⟨y1 + z1 ⟩ is a radical ideal for each s ≥ 1, then Cr (N ) is Cohen-Macaulay for each r ≥ 1.
Proof. We argue by induction on r. When r = 1, C1 (N ) = N , which is a well-known Cohen-Macaulay variety. Suppose that Cr−1 (N ) is Cohen-Macaulay for some r ≥ 2. As we have seen earlier, Cr (N ) is irreducible of dimension r + 1 and y1 + z1 is not zero in the coordinate ring R/Ir . The hypothesis and Lemma 5.15 in [BV] imply that it suffices to show that the variety Cr (N ) ∩ V (y1 + z1 ) is Cohen-Macaulay of dimension r.
Let V = Cr (N ) ∩ V (y1 + z1 ). Solving the constraint x1² + y1 z1 = 0 together with z1 = −y1 , we have either x1 = y1 = −z1 or x1 = −y1 = z1 . Then we decompose V = V1 ∪ V2 ∪ V3 , where the Vi are the irreducible algebraic varieties defined by

V1 = V ∩ V (x1 = y1 = z1 = 0),
V2 = the closure of V ∩ {x1 = y1 = −z1 ≠ 0},
V3 = the closure of V ∩ {x1 = −y1 = z1 ≠ 0}.

Moreover, we can explicitly describe these varieties as follows:

V1 = 0 × 0 × 0 × Cr−1 (N ),
V2 = the closure of {(x1 , y1 , z1 , x2 , (x2 /x1 )y1 , (x2 /x1 )z1 , . . . , xr , (xr /x1 )y1 , (xr /x1 )z1 ) | 0 ≠ x1 = y1 = −z1 ∈ k}
   = {(x1 , x1 , −x1 , x2 , x2 , −x2 , . . . , xr , xr , −xr ) | xi ∈ k},
V3 = the closure of {(x1 , y1 , z1 , x2 , (x2 /x1 )y1 , (x2 /x1 )z1 , . . . , xr , (xr /x1 )y1 , (xr /x1 )z1 ) | 0 ≠ x1 = −y1 = z1 ∈ k}
   = {(x1 , −x1 , x1 , x2 , −x2 , x2 , . . . , xr , −xr , xr ) | xi ∈ k}.
Observe that V1 is Cohen-Macaulay of dimension r by the inductive hypothesis, and V2 and V3
are affine r-spaces so they are Cohen-Macaulay. From Lemma 5.2.3, we have the scheme-theoretic
intersection
V1 ∩∗ V2 = {(0, 0, 0, x2 , x2 , −x2 , . . . , xr , xr , −xr ) | xi ∈ k, 2 ≤ i ≤ r}
which is an affine (r − 1)-space. Hence by Lemma 5.2.1 the union V1 ∪ V2 is a Cohen-Macaulay
variety of dimension r. Next we consider the scheme-theoretic intersection (V1 ∪ V2 ) ∩∗ V3 . Since V2 ∩∗ V3 = {0}, we have
(V1 ∪ V2 ) ∩∗ V3 = V1 ∩∗ V3 = {(0, 0, 0, x2 , −x2 , x2 , . . . , xr , −xr , xr ) | xi ∈ k}
for the same reason as earlier. Then again Lemma 5.2.1 implies that V1 ∪ V2 ∪ V3 is Cohen-Macaulay
of dimension r.
We are now interested in the conditions under which the sum of two radical ideals is again
radical. One of the well-known concepts in commutative algebra related to this problem is that of
a principal radical system, introduced by Hochster, which shows that certain classes of ideals are radical.
Theorem 5.2.5. [BV, Theorem 12.1] Let A be a Noetherian ring, and let F be a family of ideals in A, partially ordered by inclusion. Suppose that for every member I ∈ F one of the following assumptions is fulfilled:

(a) I is a radical ideal; or
(b) there exists an element x ∈ A such that I + Ax ∈ F and
  • x is not a zero-divisor of A/√I and ⋂_{i=0}^{∞} (I + Ax^i )/I = 0, or
  • there exists an ideal J ∈ F with I ⊊ J such that xJ ⊆ I and x is not a zero-divisor of A/√J.

Then all the ideals in F are radical ideals.
Such a family of ideals is called a principal radical system. This concept plays an important role
in the proof of Hochster and Eagon showing that determinantal rings are Cohen-Macaulay [HE].
Before applying this theorem, we need to set up some notation.
Fix r ≥ 1. For each 1 ≤ m ≤ r, let Im be the radical ideal associated to the variety 0 × · · · × 0 × Cm (N ) ⊆ Cr (N ), with 0 ∈ N . Each ideal Im is prime by the irreducibility of Cm (N ), and it is easy to see that

Im = Ir + Σ_{j=1}^{r−m} ⟨xj , yj , zj ⟩.
We also let, for each 1 ≤ m ≤ r,

Pm = Σ_{i=1}^{m} ⟨xi , yi , zi ⟩ + Σ_{j=m+1}^{r} ⟨xj − yj , yj + zj ⟩.
Note that each Pm is a prime ideal, since R/Pm is isomorphic to k[xm+1 , . . . , xr ]. Now we consider the following family of ideals in R:

F = {Ij }_{j=1}^{r} ∪ {Pj }_{j=1}^{r} ∪ {Ir + Σ_{i=1}^{m} ⟨yi + zi ⟩}_{m=1}^{r} ∪ {Ir + Σ_{i=1}^{r} ⟨yi + zi ⟩ + Σ_{i=1}^{n} ⟨xi + yi ⟩}_{n=1}^{r} ∪ {m},

where m = ⟨x1 , y1 , z1 , . . . , xr , yr , zr ⟩.
Proposition 5.2.6. The family F is a principal radical system.
Proof. It is obvious that the ideals {Ij }_{j=1}^{r} , {Pj }_{j=1}^{r} , and m are radical. So we just have to consider the two following cases:

(a) I = Ir + Σ_{i=1}^{m} ⟨yi + zi ⟩ for some 1 ≤ m ≤ r − 1. Observe that I + ⟨ym+1 + zm+1 ⟩ is an element of F. Let J = Ir−m . It is easy to see that ym+1 + zm+1 ∉ Ir−m , so that ym+1 + zm+1 is not a zero-divisor in the domain R/√J = R/Ir−m . It remains to show that (ym+1 + zm+1 )J ⊆ I. Recall that Ir−m = Ir + Σ_{j=1}^{m} ⟨xj , yj , zj ⟩. Then it suffices to prove that

(ym+1 + zm+1 ) xj ∈ I,  (ym+1 + zm+1 ) yj ∈ I,  (ym+1 + zm+1 ) zj ∈ I

for all 1 ≤ j ≤ m. This is done in Appendix 8.1.

(b) I = Ir + Σ_{i=1}^{r} ⟨yi + zi ⟩ + Σ_{i=1}^{n} ⟨xi + yi ⟩ for some 0 ≤ n ≤ r − 1, where for n = 0 we set I = Ir + Σ_{i=1}^{r} ⟨yi + zi ⟩. It is clear that I + ⟨xn+1 + yn+1 ⟩ ∈ F. Choose J = Pn ; then the same argument as in the previous case shows that xn+1 + yn+1 is not a zero-divisor of R/√J. We refer the reader to Appendix 8.2 for the proof of (xn+1 + yn+1 )J ⊆ I.
Here is the main result of this section.
Theorem 5.2.7. For each r ≥ 1, the variety Cr (N ) is Cohen-Macaulay and therefore normal.
Proof. It immediately follows from Lemma 5.2.4 and Proposition 5.2.6.
Now we can summarize our results into a theorem as we stated in Section 1.2.
Theorem 5.2.8. Let gl2 and sl2 be Lie algebras defined over k of characteristic p ≠ 2. Then for each r ≥ 1, the commuting varieties Cr (gl2 ), Cr (sl2 ), and Cr (N ) are irreducible, normal, and Cohen-Macaulay.
Remark 5.2.9. We claim that the theorem above holds even when the field k is not algebraically closed. Indeed, none of the proofs in this section depends on the algebraic closedness of k, since the theory of determinantal rings does not require it. There should be a way to avoid the Nullstellensatz (so that algebraic closedness is not necessary) in the arguments of this section.
5.3. Rational singularities. We prove in this subsection that the moment map

m : G ×B ur → Cr (N )

is a rational resolution, so that Cr (N ) has rational singularities. Since m is already a resolution of singularities, it is equivalent to show the following.
Proposition 5.3.1.
(a) OCr (N ) = m∗ OG×B ur .
(b) The higher direct image Ri m∗ (OG×B ur ) = 0 for i > 0. Hence G×B ur is a rational resolution
of Cr (N ) via m.
Proof.
(a) This follows from Theorem 5.2.7 and Zariski’s Main Theorem 3.4.2, applied to the proper birational morphism m of Proposition 3.4.3.
(b) By [H, Proposition 8.5], we have Ri m∗ (OG×B ur ) ∼= Hi (G ×B ur , OG×B ur ) for each i ≥ 0 (as we pointed out at the end of Section 2.3). Note that L(k) = OG×B ur , so we have

Hi (G ×B ur , OG×B ur ) ∼= ⊕_{j=0}^{∞} Hi (G/B, LG/B (S j (u∗r ))) = ⊕_{j=0}^{∞} Ri indG B (S j (u∗r )).

As we are assuming that u is a one-dimensional space whose weight is the negative root −α, S j (u∗r ) can be considered as the direct sum (kjα )⊕Pr (j) , where Pr (j) is the number of ways of writing j as a sum of r non-negative integers. The weight jα is dominant, so by Kempf’s vanishing theorem we obtain Ri indG B (S j (u∗r )) = 0 for all i > 0 and j ≥ 0. It follows that

Ri m∗ (OG×B ur ) = 0

for all i ≥ 1.
Now we state the result of this section.
Theorem 5.3.2. The moment map m : G ×B ur → Cr (N ) is a rational resolution of singularities.
5.4. Applications. As a corollary, we establish the connection with the reduced cohomology ring
of Gr . This gives an alternative proof for the main result in [SFB1] for the special case G = SL2 .
Theorem 5.4.1. For each r > 0, there is a G-equivariant isomorphism of algebras

k[G ×B ur ] ∼= k[Cr (N )].

Consequently, there is a homeomorphism between Spec H• (Gr , k)red and Cr (N ).
Proof. Note that the moment map m is G-equivariant. This implies that the comorphism m∗
is compatible with the G-action. The isomorphism follows from part (a) of Proposition 5.3.1.
Combining this observation with [N, Proposition 5.2.4], we have the homeomorphism between the
spectrum of the reduced Gr -cohomology ring and Cr (N ).
Now we want to describe the characters of the coordinate algebra k[Cr (N )]. Before doing that, we need to introduce some notation; here we follow the convention in [Jan]. Let V = ⊕_{n≥0} Vn be a graded vector space over k, where each Vn is finite-dimensional. Set

cht V = Σ_{n≥0} ch(Vn ) tn ,

the character series of V .
From the isomorphism in Theorem 5.4.1, the character series for the coordinate algebra of Cr (N )
can be computed via that of k[G ×B ur ]:
(4)
cht (k[Cr (N )]) = cht k[G ×B ur ] .
Theorem 5.4.2. For each r > 0, we have

(5)  cht k[Cr (N )] = Σ_{n≥0} Σ_{a1 +···+ar =n} χ(nα) tn ,

where the inner sum is taken over all r-tuples (a1 , . . . , ar ) of non-negative integers satisfying a1 + · · · + ar = n.
Proof. From [Jan, 8.11(4)], k[G ×B ur ] is a graded G-algebra and for each n ≥ 0,
ch k[G ×B ur ]n = ch H0 (G/B, S n (ur∗ )).
The argument in Proposition 5.3.1 gives us
Hi (G/B, S n (u∗r )) = 0
for all i > 0, n ≥ 0. Hence, it follows by [Jan, 8.14(6)] that
χ(S n (u∗r )) = ch H0 (G/B, S n (u∗r ))
for each n ≥ 0. Here we recall that for each finite-dimensional B-module M , the Euler characteristic
of M is defined as
χ(M ) = Σ_{i≥0} (−1)^i ch Hi (G/B, M ).
Then we write ch(S n (u∗ )) = e(nα). Since S(u∗r ) ∼= S(u∗ )⊗r as graded algebras, for each n we have

ch(S n (u∗r )) = ch( ⊕_{a1 +···+ar =n} S a1 (u∗ ) ⊗ · · · ⊗ S ar (u∗ ) )
= Σ_{a1 +···+ar =n} ch( S a1 (u∗ ) ⊗ · · · ⊗ S ar (u∗ ) )
= Σ_{a1 +···+ar =n} e(nα).
Thus, by the additivity of χ, we obtain

χ(S n (u∗r )) = Σ_{a1 +···+ar =n} χ(nα).

Combining all the formulas, we get

cht k[G ×B ur ] = Σ_{n≥0} ch k[G ×B ur ]n tn = Σ_{n≥0} Σ_{a1 +···+ar =n} χ(nα) tn .

Therefore, the identity (4) completes our proof.
Remark 5.4.3. The above formula can be made more explicit by making use of partition functions. Let S be a set of r numbers a1 , . . . , ar ∈ N, and define PS : N → N by letting PS (m) be the number of ways of writing m as a linear combination of elements of S with non-negative coefficients. This is called a partition function. Kostant used this notion with S = Π = {α1 , . . . , αn }, the set of simple roots, to express the number of ways of writing a weight λ as a linear combination of simple roots; see [SB] for more details. In our case, set S = {α1 = · · · = αr = 1} and denote Pr = PS . Then from (5) we have

ch k[Cr (N )]n = Pr (n) χ(nα).
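With S = {1, . . . , 1} (r ones), Pr (n) counts ordered r-tuples of non-negative integers summing to n, which is the binomial coefficient C(n + r − 1, r − 1). A short Python sketch comparing this closed form against a brute-force count:

```python
from itertools import product
from math import comb

def P(r, n):
    """Number of ordered r-tuples of non-negative integers summing to n."""
    return comb(n + r - 1, r - 1)

# Brute-force check against the defining count for small r and n.
for r in range(1, 5):
    for n in range(6):
        brute = sum(1 for t in product(range(n + 1), repeat=r) if sum(t) == n)
        assert brute == P(r, n)
```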
As a consequence, we obtain a result on the multiplicity of H0 (λ) in k[Cr (N )].
Corollary 5.4.4. For each λ = mα ∈ X+ , we have

[k[Cr (N )] : H0 (λ)] = Pr (m).
This result also shows that the coordinate algebra of Cr (N ) has a good filtration as a G-module.
6. The Mixed Cases
6.1. We now turn to more complicated commuting varieties. Let V1 , . . . , Vr be
closed subvarieties of a Lie algebra g. Define
C(V1 , . . . , Vr ) = {(v1 , . . . , vr ) ∈ V1 × · · · × Vr | [vi , vj ] = 0 , 1 ≤ i ≤ j ≤ r},
a mixed commuting variety over V1 × · · · × Vr . It is obvious that when V1 = · · · = Vr , this variety
coincides with the commuting variety over V1 . We apply in this section our calculations from
previous sections to study mixed commuting varieties over sl2 and its nilpotent cone N .
We assume k is an algebraically closed field of characteristic p ≠ 2. Set
Ci,j = C(N , . . . , N , sl2 , . . . , sl2 )  (N appearing i times and sl2 appearing j times)
with i, j ≥ 1, a mixed commuting variety over sl2 and N . Note that if j = 0, then Ci,0 = Ci (N ).
Otherwise, one gets Cj (sl2 ) if i = 0. By permuting the tuples, this variety is isomorphic to any
mixed commuting variety in which N appears i times and sl2 appears j times. Hence we consider
Ci,j a representative of such varieties. The following are some basic properties.
Proposition 6.1.1. For each i, j ≥ 1, we have:
(a) The variety Ci,j is reducible. Moreover, we have Ci,j = Ci+j (N ) ∪ 0 × . . . × 0 × Cj (sl2 ).
(b) dim Ci,j = i + j + 1.
(c) Ci,j is not normal. It is Cohen-Macaulay if and only if i = 1.
Proof. Parts (a) and (b) follow immediately from the decomposition
Ci,j = Ci+j (N ) ∪ 0 × . . . × 0 × Cj (sl2 ).
Indeed, the inclusion ⊇ is obvious. For the other inclusion, consider v = (v1 , . . . , vi+j ) ∈ Ci,j . If vm ≠ 0 for some 1 ≤ m ≤ i, then vm is distinguished. Hence by [TY, Corollary 35.2.7] the centralizer of vm is contained in N . This implies that all the vn with i + 1 ≤ n ≤ i + j must be in N , so that v ∈ Ci+j (N ). Otherwise, we have v1 = · · · = vi = 0; hence v ∈ 0 × . . . × 0 × Cj (sl2 ). Also note from earlier that
dim Ci+j (N ) = i + j + 1 and dim Cj (sl2 ) = j + 2.
(c) The mixed commuting variety Ci,j is never normal for all i, j ≥ 1 as it is reducible and
the irreducible components overlap each other (they all share the origin). Recall that a variety is
Cohen-Macaulay only if its irreducible components are equidimensional. In our case, this happens
only when i = 1. It remains to show that C1,j = C(N , sl2 , . . . , sl2 ) is Cohen-Macaulay. Note that

I(Cj+1 (N )) = √( I(Cj (N )) + J1 ),

where J1 = ⟨x1² + y1 z1 , x1 yi − xi y1 , x1 zi − xi z1 , y1 zi − yi z1 | i = 2, . . . , j + 1⟩, an ideal of the polynomial ring k[xi , yi , zi | 1 ≤ i ≤ j + 1].
Note also, by Lemma 5.2.2, that

I(0 × Cj (sl2 )) = ⟨x1 , y1 , z1 ⟩ + I(Cj (sl2 )),

and that I(Cj (sl2 )) ⊆ I(Cj (N )) and J1 ⊆ ⟨x1 , y1 , z1 ⟩. Hence, by Lemma 5.2.3, we have
Cj+1 (N ) ∩∗ 0 × Cj (sl2 ) = Cj+1 (N ) ∩ 0 × Cj (sl2 ) = 0 × Cj (N ).
Observe, in addition, that each variety is Cohen-Macaulay. So Lemma 5.2.1 implies that C1,j is
Cohen-Macaulay.
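The key point used above, that any element commuting with a nonzero nilpotent of sl2 is itself nilpotent, can be verified directly. A SymPy sketch (illustrative only) with the regular nilpotent e = [[0, 1], [0, 0]]:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
v1 = sp.Matrix([[0, 1], [0, 0]])      # a nonzero (regular) nilpotent element
w = sp.Matrix([[x, y], [z, -x]])      # a general element of sl2
C = v1*w - w*v1

# The commutator vanishes exactly when x = z = 0 ...
assert C == sp.Matrix([[z, -2*x], [0, -z]])
# ... and then w = [[0, y], [0, 0]] is nilpotent, as claimed.
w0 = w.subs({x: 0, z: 0})
assert w0**2 == sp.zeros(2, 2)
```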
Remark 6.1.2. One can see that mixed commuting varieties generalize commuting varieties, and strategies for studying the latter can be applied to the former. However, we have not seen any work in the literature related to this concept. Our results in this section show that mixed commuting varieties are usually not irreducible, and hence fail to be normal or Cohen-Macaulay. The reducibility also causes big difficulties in computing their dimensions. If V1 and Vn are the minimal and maximal elements of the family V1 , . . . , Vn ordered by inclusion, then one can easily see that

dim Cn (V1 ) ≤ dim C(V1 , . . . , Vn ) ≤ dim Cn (Vn ).

These bounds are rough and depend on the dimensions of the commuting varieties. So new methods are required to investigate a mixed commuting variety over various closed subsets of a higher rank Lie algebra.
7. Commuting Variety of 3 by 3 matrices
We now turn to G = SL3 and g = sl3 , defined over an algebraically closed field k. Recall that N denotes the nilpotent cone of g. Not much is known about commuting varieties in this case. In the present section, we focus on calculations for the nilpotent commuting variety. In particular, we apply determinantal theory to study the smaller variety Cr (u), and then use the moment morphism to obtain results on Cr (N ). We then show that for each r ≥ 1 all of the singular points of Cr (N ) lie in the commuting variety Cr (Ōsub ).
7.1. Irreducibility. First, we identify ur with an affine space as follows:

ur = { ( [[0, 0, 0], [x1 , 0, 0], [y1 , z1 , 0]] , . . . , [[0, 0, 0], [xr , 0, 0], [yr , zr , 0]] ) | xi , yi , zi ∈ k, 1 ≤ i ≤ r }

(matrices written row by row).
For each pair of elements of u, the commutator equations are xi zj − xj zi = 0 with 1 ≤ i ≤ j ≤ r. It follows that

k[Cr (u)] = k[x1 , y1 , z1 , . . . , xr , yr , zr ] / √⟨xi zj − xj zi | 1 ≤ i ≤ j ≤ r⟩.
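The displayed commutator equations can be confirmed symbolically: for two strictly lower-triangular matrices of the form above, the only nonzero entry of the commutator is x2 z1 − x1 z2 (shown here for the pair i = 1, j = 2):

```python
import sympy as sp

x1, y1, z1, x2, y2, z2 = sp.symbols('x1 y1 z1 x2 y2 z2')
A = sp.Matrix([[0, 0, 0], [x1, 0, 0], [y1, z1, 0]])
B = sp.Matrix([[0, 0, 0], [x2, 0, 0], [y2, z2, 0]])
C = A * B - B * A
# The only nonzero entry of [A, B] is the (3,1) entry x2*z1 - x1*z2,
# so the commuting condition on u^r is exactly x_i z_j - x_j z_i = 0.
assert C == sp.Matrix([[0, 0, 0], [0, 0, 0], [x2*z1 - x1*z2, 0, 0]])
```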
Proposition 7.1.1. For each r ≥ 1, the variety Cr (u) is
(a) irreducible of dimension 2r + 1,
(b) singular exactly at the points

X = ( [[0, 0, 0], [0, 0, 0], [y1 , 0, 0]] , . . . , [[0, 0, 0], [0, 0, 0], [yr , 0, 0]] )

with y1 , . . . , yr ∈ k,
(c) Cohen-Macaulay and normal.
Proof. We first consider the following isomorphism of rings:

(6)  k[Cr (u)] ∼= k[y1 , . . . , yr ] ⊗ k[x1 , z1 , . . . , xr , zr ] / √⟨xi zj − xj zi | 1 ≤ i ≤ j ≤ r⟩.

Let V be the variety associated to the second factor of the tensor product. Then parts (a) and (c) follow if we are able to show that V is irreducible, Cohen-Macaulay, and normal. Indeed, these properties can be obtained from determinantal varieties as we argued earlier. Consider the matrix

X = [[x1 , x2 , . . . , xr ], [z1 , z2 , . . . , zr ]].

It is easy to see that

k[V ] ∼= k[X ]/I2 (X ),

where I2 (X ) is the ideal generated by the 2-minors of X . Hence by Theorem 3.2.2, k[V ] is a Cohen-Macaulay, normal domain of Krull dimension r + 1. As tensoring with a polynomial ring preserves these properties and only changes the dimension, k[Cr (u)] is a Cohen-Macaulay, normal domain of Krull dimension 2r + 1 [BK]. This proves parts (a) and (c).
(b) Note that the group H = GLr × GL2 acts on V by

( (aij ), [[a, b], [c, d]] ) · (w1 , . . . , wr ) = ( Σ_{i=1}^{r} a1i vi , . . . , Σ_{i=1}^{r} ari vi ),

where wi = [[0, 0, 0], [xi , 0, 0], [0, zi , 0]] and vi = [[0, 0, 0], [a xi + b zi , 0, 0], [0, c xi + d zi , 0]] for each 1 ≤ i ≤ r. Now consider a nonzero element v in V . Without loss of generality, we can assume that the entry x1 ≠ 0. Then v belongs to the open set W = V ∩ {x1 ≠ 0}. On the other hand, it is not hard to see that

W = H · ( [[0, 0, 0], [1, 0, 0], [0, 0, 0]] , . . . , [[0, 0, 0], [1, 0, 0], [0, 0, 0]] ),

which is a smooth orbit of dimension r + 1. It follows that v is non-singular. From the isomorphism (6), the set of all singular points of Cr (u) is then the set of points where all xi = zi = 0 and y1 , . . . , yr are arbitrary. This determines the singular locus as desired.
Now we use properties of Cr (u) as ingredients to study the irreducibility and dimension of Cr (N ).
Theorem 7.1.2. The variety Cr (N ) is an irreducible variety of dimension 2r + 4.
Proof. Irreducibility follows from Theorem 3.3.2. Then, the birational properness of the moment
morphism
m : G ×B Cr (u) → Cr (N )
implies that
dim Cr (N ) = dim G ×B Cr (u) = dim Cr (u) + dim G/B = 2r + 4.
7.2. Singularities. Serre’s Criterion states that a variety V is normal if and only if the set of
singularities has codimension ≥ 2 and the depth of V at every point is ≥ min(2, dim V ). This
makes the task of determining the dimension of the singular locus of a variety necessary in order
to verify normality.
Note that the problem on the singular locus of ordinary commuting varieties was studied by
Popov [Po]. He showed that the codimension of singularities for C2 (g) is greater than or equal to
2 for an arbitrary reductive Lie algebra g. In other words, the variety C2 (g) satisfies the necessary condition for being normal. The author has not seen any analogous work in the literature for
arbitrary commuting varieties.
We prove in this subsection that the set of all singularities of Cr (N ) has codimension ≥ 2. Let α, β be the simple roots of the underlying root system Φ of G. Then the set of positive roots is Φ+ = {α, β, α + β}. Recall that zreg and zsub are respectively the centralizers of vreg and vsub in g. We have shown in Proposition 7.1.1(b) that the r-fold product of the root space u−α−β contains all singularities of Cr (u). Before locating the singularities of Cr (N ), we need some lemmas.
Lemma 7.2.1. Suppose r ≥ 2. For each 1 ≤ i ≤ r, the subset V [i] = G · (zreg , . . . , vreg , . . . , zreg ), with vreg in the i-th position, is a smooth open subvariety of Cr (N ) of dimension 2r + 4.
Proof. As a linear combination of commuting nilpotents is again nilpotent, we define an action of the algebraic group G × Mr−1,r on Cr (N ), where Mr−1,r is the set of all (r − 1) × r-matrices and can be identified with (G_a^r)^{r−1}:

    (G × Mr−1,r ) × Cr (N ) → Cr (N ),
    (g, (aij )) • (v1 , . . . , vr ) ↦ g · ( Σ_{j=1}^r a1j vj , . . . , vi , . . . , Σ_{j=1}^r ar−1,j vj )
for every element (v1 , . . . , vr ) ∈ Cr (N ). Now observe that zreg is a vector space of dimension 2; choose a basis {vreg , w} of this space. It is easy to check that for r ≥ 2 we have

    V [i] = (G × Mr−1,r ) • (w , 0 , . . . , vreg , . . . , 0).

In other words, V [i] is an orbit under the bullet action, so it is smooth.
Now for each 1 ≤ i ≤ r, consider the projection map from Cr (N ) to the i-th factor, p : Cr (N ) →
N . We then have
V [i] = p−1 (Oreg )
which is an open subset in Cr (N ). As the variety Cr (N ) is irreducible, the dimension of V [i] is the
same as dim Cr (N ).
Lemma 7.2.2. The intersection of zsub and Ōsub is exactly the union of u−α × u−α−β and u−α × uβ . Hence, we have dim (zsub ∩ Ōsub ) = 2.
Proof. It can be computed that zsub consists of all matrices of the form

    | x1   0     0   |
    | x2   x1    t2  |
    | x3   0   −2x1  |

where x1 , x2 , x3 , t2 are in k. As the determinant of a nilpotent matrix is always 0 (here the determinant equals −2x1^3 , forcing x1 = 0), we obtain that zsub ∩ N consists of all matrices

    | 0    0   0  |
    | x2   0   t2 |        with x2 , x3 , t2 ∈ k.
    | x3   0   0  |
NHAM V. NGO
On the other hand, it is well-known that Osub consists of all matrices of rank one, and its closure is Ōsub = Osub ∪ {0}. Then zsub ∩ Ōsub is the union of the two families of matrices

    | 0    0   0 |             | 0    0   0  |
    | x2   0   0 |     and     | x2   0   t2 | ,
    | x3   0   0 |             | 0    0   0  |

that is,

    zsub ∩ Ōsub = u−α × u−α−β ∪ u−α × uβ .

It immediately follows that dim (zsub ∩ Ōsub ) = 2.
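The matrix computations in this proof are easy to verify mechanically. The sketch below is a verification aid, not part of the paper; it assumes vsub is the matrix unit E21, whose centralizer in sl3 is exactly the displayed family. It checks the commutation and trace conditions, the determinant −2x1^3 (so nilpotency forces x1 = 0), and that the rank-one condition on zsub ∩ N reduces to x3·t2 = 0, which carves out the two planes of the lemma.

```python
from itertools import combinations
from sympy import Matrix, symbols, zeros

x1, x2, x3, t2 = symbols('x1 x2 x3 t2')

# Assumed choice of the subregular nilpotent: vsub = E_{21}.
vsub = Matrix([[0, 0, 0], [1, 0, 0], [0, 0, 0]])

# The displayed form of z_sub, the centralizer of vsub in sl_3.
Z = Matrix([[x1, 0, 0], [x2, x1, t2], [x3, 0, -2*x1]])

# Z commutes with vsub and is trace-free, so Z lies in the centralizer in sl_3.
assert (vsub*Z - Z*vsub) == zeros(3, 3)
assert Z.trace() == 0

# det Z = -2*x1**3, so nilpotency forces x1 = 0, as in the proof.
assert Z.det().expand() == -2*x1**3

# For M in z_sub ∩ N (i.e. x1 = 0), every 2x2 minor is 0 or ±x3*t2:
# rank(M) <= 1 exactly when x3*t2 = 0, giving the two planes of the lemma.
M = Z.subs(x1, 0)
minors = {M.extract(list(r), list(c)).det()
          for r in combinations(range(3), 2)
          for c in combinations(range(3), 2)}
assert minors <= {0, x3*t2, -x3*t2}
```

The only nonzero minor comes from rows 2, 3 and columns 1, 3, which is why the rank-one locus splits into the loci t2 = 0 and x3 = 0.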
For each r ≥ 1, let Crsing be the singular locus of Cr (N ). The following theorem determines the
location of Crsing .
Theorem 7.2.3. For each r ≥ 1, we have Crsing ⊆ Cr (Osub ). Moreover, codim Crsing ≥ 2.
Proof. It is obvious that our result is true for r = 1. Assume now that r ≥ 2. It suffices to prove
that any element in the complement of Cr (Osub ) in Cr (N ) is smooth. Let V = Cr (Osub ) and
suppose w ∈ Cr (N )\V . Say w = (w1 , . . . , wr ) with some wn ∉ Osub , i.e., wn is a regular element.
Consider the projection onto the n-th factor pn : Cr (N ) → N . As G · wn = Oreg is open in N , the preimage pn^{−1}(G · wn ) is an open set of Cr (N ). Note that

    pn^{−1}(G · wn ) = G · pn^{−1}(wn ) = G · (zreg , . . . , vreg , . . . , zreg ) = V [n].

By Lemma 7.2.1, w is non-singular. Therefore, V contains all the singularities of Cr (N ).
We now compute the dimension of Cr (Osub ). It is observed for each r ≥ 1 that
Cr (Osub ) = G · (vsub , Cr−1 (zsub ∩ Osub )).
Now let V1 = u−α × u−α−β and V2 = u−α × uβ . By the preceding lemma, we have zsub ∩ Osub = V1 ∪ V2 .
Also note that for u, v ∈ V1 ∪ V2 we have

    [u, v] = 0   ⇔   u, v ∈ V1  or  u, v ∈ V2 .
It follows that Cr−1 (zsub ∩ Osub ) = V1^{r−1} ∪ V2^{r−1} . So we obtain

    Cr (Osub ) = G · (vsub , V1 , . . . , V1 ) ∪ G · (vsub , V2 , . . . , V2 ).

This implies that Cr (Osub ) is reducible. Using the theorem on dimensions of fibers, we can further compute that G · (vsub , Vj , . . . , Vj ) has dimension 2r + 2 for each j = 1, 2. Thus dim Cr (Osub ) = 2r + 2, so that codim Cr (Osub ) = 2.
Acknowledgments
This paper is based on part of the author’s Ph.D. thesis. The author gratefully acknowledges the
guidance of his thesis advisor Daniel K. Nakano. Thanks to Christopher Drupieski for his useful
comments. We also thank William Graham for discussions about Cohen-Macaulay rings. Finally,
we are grateful to Alexander Premet and Robert Guralnick for the information about ordinary
commuting varieties.
8. Appendix
We verify in this section the condition of Theorem 5.2.5(b) for I and J as in the context of Proposition 5.2.6.
8.1. Case 1. We check the following
(1) (ym+1 + zm+1 ) xj ∈ I,
(2) (ym+1 + zm+1 ) yj ∈ I,
(3) (ym+1 + zm+1 ) zj ∈ I.
For the first one, we consider for each 1 ≤ j ≤ m,
    (ym+1 + zm+1 )xj + I = ym+1 xj + zm+1 xj + I
                         = xm+1 yj + xm+1 zj + I
                         = xm+1 (yj + zj ) + I
                         = I
where the second identity is provided by xm+1 yj − xj ym+1 , xm+1 zj − xj zm+1 ∈ Ir ⊆ Im ; and the
last identity is provided by yj + zj ∈ I.
The same technique applies to (2) and (3), as follows:
(ym+1 + zm+1 )yj + I = ym+1 yj + zm+1 yj + I
= ym+1 yj + ym+1 zj + I
= ym+1 (yj + zj ) + I
= I,
(ym+1 + zm+1 )zj + I = ym+1 zj + zm+1 zj + I
= zm+1 yj + zm+1 zj + I
= zm+1 (yj + zj ) + I
= I.
8.2. Case 2. We need to check the following
(1) (xn+1 + yn+1 ) xj ∈ I,
(2) (xn+1 + yn+1 ) yj ∈ I,
(3) (xn+1 + yn+1 ) zj ∈ I,
(4) (xn+1 + yn+1 ) (xh − yh ) ∈ I,
for all 1 ≤ j ≤ n and n + 1 ≤ h ≤ r. Verifying (1), (2), and (3) is similar to our work in Case 1.
Lastly, we look at
(xn+1 + yn+1 )(xh − yh ) + I = xn+1 xh + yn+1 xh − xn+1 yh − yn+1 yh + I
= xn+1 xh − yn+1 yh + I
= xn+1 xh + yn+1 zh + I.
Now in order to complete our verification, we will show that xn+1 xh + yn+1 zh ∈ Ir . Indeed, we
have
    (xn+1 xh + yn+1 zh )^2 + Ir = xn+1^2 xh^2 + 2 xn+1 xh yn+1 zh + yn+1^2 zh^2 + Ir
                                = −xn+1^2 yh zh + 2 xn+1^2 yh zh + yn+1^2 zh^2 + Ir
                                = xn+1^2 yh zh + yn+1^2 zh^2 + Ir
                                = −yn+1 zn+1 yh zh + yn+1^2 zh^2 + Ir
                                = yn+1 zh (−zn+1 yh + yn+1 zh ) + Ir
                                = Ir .
As Ir is radical, we obtain xn+1 xh + yn+1 zh ∈ Ir as desired.
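The chain of congruences above can be double-checked by an explicit ideal-membership computation. In the sketch below we take r = 2 and write the two relevant indices as 1 and 2. The generators used for Ir are our assumption, inferred from the relations invoked in the computation (the sl2-type nilpotency relations x_i^2 + y_i z_i together with the 2 × 2 commutator minors), not a definition quoted from earlier sections.

```python
from sympy import symbols, groebner

x1, x2, y1, y2, z1, z2 = symbols('x1 x2 y1 y2 z1 z2')

# Assumed generators of I_r for two slots: nilpotency x_i^2 + y_i*z_i
# and the 2x2 commutator minors used repeatedly in the text.
gens = [x1**2 + y1*z1, x2**2 + y2*z2,
        x1*y2 - x2*y1, x1*z2 - x2*z1, y1*z2 - y2*z1]

G = groebner(gens, x1, x2, y1, y2, z1, z2, order='grevlex')

# The square computed in the text lies in the ideal generated above.
f = x1*x2 + y1*z2
assert G.contains(f**2)
```

As in the text, membership of f itself then follows from radicality, which the Groebner computation alone does not certify.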
References
[B] R. Basili, On the irreducibility of commuting varieties of nilpotent matrices, J. Pure Appl. Algebra, 149 (2000), 107–120.
[Ba] V. Baranovsky, The variety of pairs of commuting nilpotent matrices is irreducible, Transform. Groups, 6 (2001), 3–8.
[BI] R. Basili and A. Iarrobino, Pairs of commuting nilpotent matrices, and Hilbert function, J. Algebra, 320 (2008), 1235–1254.
[BK] S. Bouchiba and S. Kabbaj, Tensor products of Cohen-Macaulay rings: solution to a problem of Grothendieck, J. Algebra, 252 (2002), 65–73.
[BV] W. Bruns and U. Vetter, Determinantal Rings, Lecture Notes in Math., Springer-Verlag, 1988.
[CM] D. H. Collingwood and W. M. McGovern, Nilpotent Orbits in Semisimple Lie Algebras, Van Nostrand Reinhold Mathematics Series, 1993.
[E] D. Eisenbud, Commutative Algebra with a View toward Algebraic Geometry, Springer, 1995.
[G] M. Gerstenhaber, On dominance and varieties of commuting matrices, Ann. Math., 73 (1961), 324–348.
[GS] R. M. Guralnick and B. A. Sethuraman, Commuting pairs and triples of matrices and related varieties, Linear Algebra Appl., 310 (2000), 139–148.
[H] R. Hartshorne, Algebraic Geometry, Graduate Texts in Mathematics, Springer-Verlag, 1977.
[HE] M. Hochster and J. A. Eagon, Cohen-Macaulay rings, invariant theory, and the generic perfection of determinantal loci, Amer. J. Math., 93 (1971), 1020–1058.
[Hum1] J. E. Humphreys, Introduction to Lie Algebras and Representation Theory, Graduate Texts in Mathematics, Springer-Verlag, 1978.
[Hum2] J. E. Humphreys, Linear Algebraic Groups, Graduate Texts in Mathematics, Springer-Verlag, 1995.
[Hr] F. Hreinsdottir, Miscellaneous results and conjectures on the ring of commuting matrices, An. St. Univ. Ovidius Constanta, 14 (2006), 45–60.
[Jan] J. C. Jantzen and K.-H. Neeb, Lie Theory: Lie Algebras and Representations, Progress in Mathematics, vol. 228, Birkhäuser, 2004.
[K] A. Knutson, Some schemes related to the commuting variety, J. Algebraic Geom., 14 (2005), 283–294.
[KN] A. A. Kirillov and Y. A. Neretin, The variety An of n-dimensional Lie algebra structures, Amer. Math. Soc. Transl., 137 (1987), 21–30.
[L] P. Levy, Commuting varieties of Lie algebras over fields of prime characteristic, J. Algebra, 250 (2002), 473–484.
[MS] M. Majidi-Zolbanin and B. Snapp, A note on the variety of pairs of matrices whose product is symmetric, Contemp. Math., 555 (2011).
[Mu] C. C. Mueller, On the varieties of pairs of matrices whose product is symmetric, Ph.D. thesis, The University of Michigan, 2007.
[N] N. V. Ngo, Cohomology for Frobenius kernels of SL2, to appear in J. Algebra.
[Po] V. L. Popov, Irregular and singular loci of commuting varieties, Transform. Groups, 13 (2008), 819–837.
[Pr] A. Premet, Nilpotent commuting varieties of reductive Lie algebras, Invent. Math., 154 (2003), 653–683.
[R] R. W. Richardson, Commuting varieties of semisimple Lie algebras and algebraic groups, Compositio Math., 38 (1979), 311–327.
[S] K. Sivic, On varieties of commuting triples III, Linear Algebra Appl., 437 (2012), 393–460.
[SB] J. R. Schmidt and A. M. Bincer, The Kostant partition function for simple Lie algebras, J. Math. Phys., 25 (1984), 2367–2374.
[SFB1] A. Suslin, E. M. Friedlander, and C. P. Bendel, Infinitesimal 1-parameter subgroups and cohomology, J. Amer. Math. Soc., 10 (1997), 693–728.
[SFB2] A. Suslin, E. M. Friedlander, and C. P. Bendel, Support varieties for infinitesimal group schemes, J. Amer. Math. Soc., 10 (1997), 729–759.
[TY] P. Tauvel and R. Yu, Lie Algebras and Algebraic Groups, Springer Monographs in Mathematics, Springer-Verlag, 2005.
[Vas] W. V. Vasconcelos, Arithmetic of Blowup Algebras, Cambridge University Press, 1994.
[Z] E. Zoque, On the variety of almost commuting nilpotent matrices, Transform. Groups, 15 (2010), 483–501.
Department of Mathematics, Statistics, and Computer Science, University of Wisconsin-Stout,
Menomonie, WI 54751, USA
Current address: Department of Mathematics and Statistics, Lancaster University, Lancaster,
LA1 4YW, UK
E-mail address: [email protected]
VANDERMONDE VARIETIES AND RELATIONS AMONG
SCHUR POLYNOMIALS
arXiv:1302.1298v1 [math.AG] 6 Feb 2013
RALF FRÖBERG AND BORIS SHAPIRO
Abstract. Motivated by the famous Skolem-Mahler-Lech theorem we initiate
in this paper the study of a natural class of determinantal varieties which we
call Vandermonde varieties. They are closely related to the varieties consisting of all linear recurrence relations of a given order possessing a non-trivial
solution vanishing at a given set of integers. In the regular case, i.e., when the
dimension of a Vandermonde variety is the expected one, we present its free
resolution, obtain its degree and the Hilbert series. Some interesting relations
among Schur polynomials are derived. Many open problems and conjectures
are posed.
1. Introduction
The results in the present paper come from an attempt to understand the famous Skolem-Mahler-Lech theorem and its consequences. Let us briefly recall its
formulation. A linear recurrence relation with constant coefficients of order k is an
equation of the form
un + α1 un−1 + α2 un−2 + · · · + αk un−k = 0, n ≥ k
(1)
where the coefficients (α1 , ..., αk ) are fixed complex numbers and αk ≠ 0. (Equation
(1) is often referred to as a linear homogeneous difference equation with constant
coefficients.)
The left-hand side of the equation
tk + α1 tk−1 + α2 tk−2 + · · · + αk = 0
(2)
is called the characteristic polynomial of recurrence (1). Denote the roots of (2)
(listed with possible repetitions) by x1 , . . . , xk and call them the characteristic roots
of (1).
Notice that all xi are non-vanishing since αk ≠ 0. To obtain a concrete solution
of (1) one has to prescribe additionally an initial k-tuple, (u0 , . . . , uk−1 ), which can
be chosen arbitrarily. Then un , n ≥ k are obtained by using the relation (1). A
solution of (1) is called non-trivial if not all of its entries vanish. In case of all
distinct characteristic roots a general solution of (1) can be given by
un = c1 xn1 + c2 xn2 + ... + ck xnk
where c1 , ..., ck are arbitrary complex numbers. In the general case of multiple
characteristic roots a similar formula can be found in e.g. [15].
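As a quick illustration (ours, not the paper's) of the formula un = c1 x1^n + · · · + ck xk^n, one can compare it numerically with the recurrence itself for the order-2 relation un − un−1 − un−2 = 0, whose characteristic roots are the golden ratio φ and ψ = 1 − φ:

```python
import math

# Recurrence u_n - u_{n-1} - u_{n-2} = 0, i.e. u_n = u_{n-1} + u_{n-2}.
def by_recurrence(u0, u1, n):
    u = [u0, u1]
    for _ in range(n - 1):
        u.append(u[-1] + u[-2])
    return u[n]

# Characteristic roots of t^2 - t - 1 = 0.
phi = (1 + math.sqrt(5)) / 2
psi = (1 - math.sqrt(5)) / 2

# Solve c1 + c2 = u0, c1*phi + c2*psi = u1 for the Fibonacci pair (0, 1).
c1 = 1 / (phi - psi)
c2 = -c1

def by_roots(n):
    return c1 * phi**n + c2 * psi**n

# The two descriptions of the solution agree (up to floating-point error).
for n in range(2, 20):
    assert abs(by_recurrence(0, 1, n) - by_roots(n)) < 1e-9
```

With these initial data the solution is the Fibonacci sequence, so for instance both computations give u10 = 55.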
An arbitrary solution of a linear homogeneous difference (or differential) equation with constant coefficients of order k is called an exponential polynomial of
order k. One usually substitutes xi ≠ 0 by e^{γi} and considers the obtained function in C instead of Z or N. (Other terms used for exponential polynomials are
quasipolynomials or exponential sums.)
The most fundamental fact about the structure of integer zeros of exponential
polynomials is the well-known Skolem-Mahler-Lech theorem formulated below. It
was first proved for recurrence sequences of algebraic numbers by K. Mahler [11]
2010 Mathematics Subject Classification. Primary 65Q10, Secondary 65Q30, 14M15.
in the 30’s, based upon an idea of T. Skolem [13]. Then, C. Lech [9] published
the result for general recurrence sequences in 1953. In 1956 Mahler published the
same result, apparently independently (but later realized to his chagrin that he had
actually reviewed Lech’s paper some years earlier, but had forgotten it).
Theorem 1 (The Skolem-Mahler-Lech theorem). If a0 , a1 , ... is a solution to a linear recurrence relation, then the set of all k such that ak = 0 is the union of a finite
(possibly empty) set and a finite number (possibly zero) of full arithmetic progressions. (Here, a full arithmetic progression means a set of the form r, r + d, r + 2d, ...
with 0 < r < d.)
A simple criterion guaranteeing the absence of arithmetic progressions is that
no quotient of two distinct characteristic roots of the recurrence relation under
consideration is a root of unity, see e.g. [10]. A recurrence relation (1) satisfying this
condition is called non-degenerate. Substantial literature is devoted to finding the
upper/lower bounds for the maximal number of arithmetic progressions/exceptional
roots among all/non-degenerate linear recurrences of a given order. We give more
details in § 3. Our study is directly inspired by these investigations.
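A minimal illustration of both the theorem and the non-degeneracy criterion (our example, not the paper's): the recurrence un − un−2 = 0 has characteristic roots ±1, whose quotient −1 is a root of unity, so the relation is degenerate, and the solution with (u0 , u1 ) = (0, 1) vanishes exactly on the arithmetic progression of even indices.

```python
# Degenerate recurrence u_n = u_{n-2}: characteristic roots are +1 and -1,
# and their quotient -1 is a root of unity.
N = 50
u = [0, 1]
for n in range(2, N):
    u.append(u[n - 2])

zero_set = {n for n in range(N) if u[n] == 0}

# The zero set is the arithmetic progression of even indices,
# as the Skolem-Mahler-Lech theorem allows for degenerate recurrences.
assert zero_set == set(range(0, N, 2))
```

For a non-degenerate relation, such as the Fibonacci recurrence, no quotient of characteristic roots is a root of unity and the zero set must be finite.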
Let Lk be the space of all linear recurrence relations (1) of order at most k with
constant coefficients and denote by L∗k = Lk \ {αk = 0} the subset of all linear
recurrences of order exactly k. (Lk is the affine space with coordinates (α1 , ..., αk ).)
To an arbitrary pair (k; I) where k ≥ 2 is a positive integer and I = {i0 < i1 < i2 <
... < im−1 }, m ≥ k is a sequence of integers, we associate the variety Vk;I ⊂ L∗k , the
set of all linear recurrences of order exactly k having a non-trivial solution vanishing
at all points of I. Denote by Vk;I the (set-theoretic) closure of Vk;I in Lk . We call
Vk;I (resp. V k;I ) the open (resp. closed) linear recurrence variety associated to the
pair (k; I).
In what follows we will always assume that gcd(i1 − i0 , ..., im−1 − i0 ) = 1 to
avoid unnecessary freedom related to the time rescaling in (1). Notice that since
for m ≤ k − 1 one has Vk;I = L∗k and V k;I = Lk , this case does not require special
consideration. A more important observation is that due to translation invariance
of (1) for any integer l and any pair (k; I) the variety Vk;I (resp. V k;I ) coincides
with the variety Vk;I+l (resp. V k;I+l ) where the set of integers I + l is obtained by
adding l to all entries of I.
So far we have defined V̄k;I and Vk;I as sets. However for any pair (k; I) the set V̄k;I is an affine algebraic variety, see Proposition 4. Notice that this fact is not completely obvious: if, for example, instead of a set of integers we choose as I an arbitrary subset of real or complex numbers, then the analogous subset of Lk will, in general, only be analytic.
Now we define the Vandermonde variety associated with a given pair (k; I), I =
{0 ≤ i0 < i1 < i2 < ... < im−1 }, m ≥ k. Firstly, consider the set Mk;I of
(generalized) Vandermonde matrices of the form
             | x1^{i0}        x2^{i0}        · · ·   xk^{i0}       |
    Mk;I  =  | x1^{i1}        x2^{i1}        · · ·   xk^{i1}       |        (3)
             |   · · ·          · · ·        · · ·     · · ·       |
             | x1^{i_{m−1}}   x2^{i_{m−1}}   · · ·   xk^{i_{m−1}}  |
where (x1 , ..., xk ) ∈ Ck . In other words, for a given pair (k; I) we take the map
Mk;I : Ck → M at(m, k) given by (3) where M at(m, k) is the space of all m × kmatrices with complex entries and (x1 , ..., xk ) are chosen coordinates in Ck .
We now define three slightly different but closely related versions of this variety
as follows.
Version 1. Given a pair (k; I) with |I| ≥ k define the coarse Vandermonde variety
V d^c_{k;I} ⊂ Mk;I as the set of all degenerate Vandermonde matrices, i.e., those whose rank is smaller than k. V d^c_{k;I} is obviously an algebraic variety whose defining ideal I_I is generated by all (m choose k) maximal minors of Mk;I . Denote the quotient ring by R_I = R/I_I .
Denote by Ak ⊂ Ck the standard Coxeter arrangement (of the Coxeter group
Ak−1 ) consisting of all diagonals xi = xj and by BC k ⊂ Ck the Coxeter arrangement
consisting of all xi = xj and xi = 0. Obviously, BCk ⊃ Ak . Notice that V dck;I
always includes the arrangement BC k (some of the hyperplanes with multiplicities)
which is often inconvenient. Namely, with very few exceptions this means that
V dck;I is not equidimensional, not CM, not reduced etc. For applications to linear
recurrences as well as questions in combinatorics and geometry of Schur polynomials
it seems more natural to consider the localizations of V dck;I in Ck \ Ak and in
Ck \ BC k .
Version 2. Define the Ak -localization V d^A_{k;I} of V d^c_{k;I} as the contraction of V d^c_{k;I} to C^k \ Ak . It is easy to obtain the generating ideal of V d^A_{k;I} . Namely, recall that given a sequence J = (j1 < j2 < · · · < jk ) of nonnegative integers one defines the associated Schur polynomial SJ (x1 , ..., xk ) as given by

                         | x1^{j1}   x2^{j1}   · · ·   xk^{j1} |
    SJ (x1 , . . . , xk ) = | x1^{j2}   x2^{j2}   · · ·   xk^{j2} |  / W (x1 , . . . , xk ),
                         |  · · ·     · · ·    · · ·    · · ·  |
                         | x1^{jk}   x2^{jk}   · · ·   xk^{jk} |
where W (x1 , . . . , xk ) is the usual Vandermonde determinant. Given a sequence I = (0 ≤ i0 < i1 < i2 < · · · < im−1 ) with gcd(i1 − i0 , . . . , im−1 − i0 ) = 1 consider the set of all its (m choose k) subsequences Jκ of length k. Here the index κ runs over the set of all subsequences of length k among {1, 2, ..., m}. Take the corresponding Schur polynomials SJκ (x1 , . . . , xk ) and form the ideal I^A_I in the polynomial ring C[x1 , . . . , xk ] generated by all (m choose k) such Schur polynomials SJκ (x1 , . . . , xk ). One can show that the Vandermonde variety V d^A_{k;I} ⊂ C^k is generated by I^A_I , see Lemma 5. Denote the quotient ring by R^A_I = R/I^A_I where R = C[x1 , ..., xk ]. Analogously to the coarse Vandermonde variety V d^c_{k;I} , the variety V d^A_{k;I} often contains irrelevant coordinate hyperplanes which prevents it from having nice algebraic properties. For example, if i0 > 0 then all coordinate hyperplanes necessarily belong to V d^A_{k;I} , ruining equidimensionality etc. On the other hand, under the assumption that i0 = 0 the variety V d^A_{k;I} often has quite reasonable properties presented below.
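The bialternant definition of SJ above is easy to test in a computer algebra system. The following sketch (our check, with hand-picked exponent sequences J) computes the quotient of determinants directly for k = 3:

```python
from sympy import Matrix, symbols, cancel, expand

x1, x2, x3 = symbols('x1 x2 x3')
xs = (x1, x2, x3)

def schur(J):
    """S_J(x1,x2,x3) as the bialternant det(x_i^{j_l}) / W(x1,x2,x3)."""
    num = Matrix([[x**j for x in xs] for j in J]).det()
    den = Matrix([[x**j for x in xs] for j in (0, 1, 2)]).det()  # W(x1,x2,x3)
    return expand(cancel(num / den))

# Smallest admissible exponents give the constant polynomial 1 ...
assert schur((0, 1, 2)) == 1
# ... and J = (0, 1, 3) gives the first elementary symmetric polynomial.
assert schur((0, 1, 3)) == x1 + x2 + x3
```

The division is exact because the numerator is an alternating polynomial, hence divisible by the Vandermonde determinant.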
Version 3. Define the BC k -localization V d^BC_{k;I} of V d^c_{k;I} as the contraction of V d^c_{k;I} to C^k \ BC k . Again it is straightforward to find the generating ideal of V d^BC_{k;I} . Namely, given a sequence J = (0 ≤ j1 < j2 < · · · < jk ) of nonnegative integers define the reduced Schur polynomial ŜJ (x1 , ..., xk ) as given by

                         | 1             1             · · ·   1            |
    ŜJ (x1 , . . . , xk ) = | x1^{j2−j1}    x2^{j2−j1}    · · ·   xk^{j2−j1}   |  / W (x1 , . . . , xk ).
                         |  · · ·         · · ·        · · ·    · · ·       |
                         | x1^{jk−j1}    x2^{jk−j1}    · · ·   xk^{jk−j1}   |
In other words, ŜJ (x1 , ..., xk ) is the usual Schur polynomial corresponding to the sequence (0, j2 − j1 , ..., jk − j1 ). Given a sequence I = (0 ≤ i0 < i1 < i2 < · · · < im−1 ) with gcd(i1 − i0 , . . . , im−1 − i0 ) = 1 consider as before the set of all its (m choose k) subsequences Jκ of length k, where the index κ runs over the set of all subsequences of length k. Take the corresponding reduced Schur polynomials ŜJκ (x1 , . . . , xk ) and form the ideal I^BC_I in the polynomial ring C[x1 , . . . , xk ] generated by all (m choose k) such Schur polynomials ŜJκ (x1 , . . . , xk ). One can easily see that the Vandermonde variety V d^BC_{k;I} ⊂ C^k is generated by I^BC_I . Denote the quotient ring by R^BC_I = R/I^BC_I .
Conjecture 2. If dim(V d^BC_{k;I} ) ≥ 2 then I^BC_I is a radical ideal.
Notice that considered as sets the restrictions to C^k \ BC k of all three varieties V d^c_{k;I} , V d^A_{k;I} , V d^BC_{k;I} coincide with what we call the open Vandermonde variety V d^op_{k;I} , which is the subset of all matrices of the form Mk;I with three properties:
(i) rank is smaller than k;
(ii) all xi ’s are non-vanishing;
(iii) all xi ’s are pairwise distinct.
Thus set-theoretically all the differences between the three Vandermonde varieties are concentrated on the hyperplane arrangement BC k . Also from the above definitions it is obvious that V d^op_{k;I} and V d^BC_{k;I} are invariant under addition of an arbitrary integer to I. The relation between the linear recurrence variety Vk;I and the open Vandermonde variety V d^op_{k;I} is quite straightforward. Namely, consider the standard Vieta map:
    V i : C^k → Lk ,        (4)

sending an arbitrary k-tuple (x1 , ..., xk ) to the polynomial t^k + α1 t^{k−1} + α2 t^{k−2} + · · · + αk whose roots are x1 , ..., xk . Inverse images of the Vieta map are exactly the orbits
of the standard Sk -action on Ck by permutations of coordinates. Thus, the Vieta
map sends a homogeneous and symmetric polynomial to a weighted homogeneous
polynomial.
Define the open linear recurrence variety V^op_{k;I} ⊆ Vk;I of a pair (k; I) as consisting of all recurrences in Vk;I with all characteristic roots distinct. The following statement is obvious.

Lemma 3. The map V i restricted to V d^op_{k;I} gives an unramified k!-covering of the set V^op_{k;I} .
Unfortunately at the present moment the following natural question is still open.
Problem 1. Is it true that for any pair (k; I) one has that the set-theoretic closure of V^op_{k;I} in L∗k coincides with Vk;I ? If 'not', then under what additional assumptions?
Our main results are as follows. Using the Eagon-Northcott resolution of determinantal ideals, we determine the resolution, and hence the Hilbert series and degree, of R^A_I in Theorem 6. We give an alternative calculation of the degree using the Giambelli-Thom-Porteous formula in Proposition 8. In the simplest non-trivial case, when m = k + 1, we get more detailed information about V d^A_{k;I} . We prove that its codimension is 2, and that R^A_I is Cohen-Macaulay. We also discuss minimal sets of generators of I_I , and determine when we have a complete intersection in Theorem 9. (The proof of this theorem gives a lot of interesting relations between Schur polynomials, see Theorem 10.) In this case the variety has the expected codimension, which is not always the case if m > k + 1. In fact our computer experiments suggest that then the codimension rather seldom is the expected one. In the case k = 3, m = 5, we show that having the expected codimension is equivalent to R^A_I being a complete intersection, and that I_I is then generated by three complete symmetric functions. Exactly the problem (along with many other similar questions) of when three complete symmetric functions constitute a regular sequence was considered in a recent paper [4], where the authors formulated a detailed conjecture. We slightly strengthen their conjecture below.
For the BC k -localized variety V d^BC_{k;I} we have proofs only when k = 3, but we
present Conjectures 15 and 16, supported by many calculations. We end the paper
with a section which describes the connection of our work with the fundamental
problems in linear recurrence relations.
Acknowledgements. The authors want to thank Professor Maxim Kazarian (Steklov
Institute of Mathematical Sciences) for his help with Giambelli-Thom-Porteous formula, Professor Igor Shparlinski (Macquarie University) for highly relevant information on the Skolem-Mahler-Lech theorem and Professors Nicolai Vorobjov (University of Bath) and Michael Shapiro (Michigan State University) for discussions.
We are especially grateful to Professor Winfried Bruns (University of Osnabrück)
for pointing out important information on determinantal ideals.
2. Results and conjectures on Vandermonde varieties
We start by proving that V k;I is an affine algebraic variety, see Introduction.
Proposition 4. For any pair (k; I) the set V k;I is an affine algebraic variety.
Therefore, Vk;I = V k;I |L∗k is a quasi-affine variety.
Proof. We will show that for any pair (k; I) the variety V k;I of linear recurrences
is constructible. Since it is by definition closed in the usual topology of Lk ≃ Ck
it is algebraic. The latter fact follows from [12], I.10 Corollary 1 claiming that
if Z ⊂ X is a constructible subset of a variety, then the Zariski closure and the
strong closure of Z are the same. Instead of showing that V k;I is constructible we
prove that Vk;I ⊂ L∗k is constructible. Namely, we can use an analog of Lemma 3 to
construct a natural stratification of Vk;I into the images of quasi-affine sets under appropriate Vieta maps. Namely, let us stratify Vk;I as Vk;I = ∪_{λ⊢k} V^λ_{k;I} , where λ ⊢ k is an arbitrary partition of k and V^λ_{k;I} is the subset of Vk;I consisting of all recurrence relations of length exactly k which have a non-trivial solution vanishing at each point of I and whose characteristic polynomial determines the partition λ of its degree k. In other words, if λ = (λ1 , ..., λs ), Σ_{j=1}^{s} λj = k, then the characteristic polynomial should have s distinct roots of multiplicities λ1 , ..., λs resp. Notice that any of these V^λ_{k;I} can be empty, including the whole Vk;I , in which case there is nothing to prove. Let us now show that each V^λ_{k;I} is the image under the appropriate Vieta map of a set similar to the open Vandermonde variety. Recall that if λ = (λ1 , ..., λs ), Σ_{j=1}^{s} λj = k, and x1 , ..., xs are the distinct roots, with multiplicities λ1 , ..., λs respectively, of the linear recurrence (1), then the general solution of (1) has the form

    un = Pλ1 (n) x1^n + Pλ2 (n) x2^n + ... + Pλs (n) xs^n ,

where Pλ1 (n), ..., Pλs (n) are arbitrary polynomials in the variable n of degrees λ1 − 1, λ2 − 1, ..., λs − 1 resp. Now, for a given λ ⊢ k consider the set of matrices
where Pλ1 (n), ..., Pλs (n) are arbitrary polynomials in the variable n of degrees λ1 −
1, λ2 − 1, ...., λs−1 resp. Now, for a given λ ⊢ k consider the set of matrices
λ
Mk;I
=
xi10
xi1
1
···
i
x1m−1
i0 xi10
i1 xi11
···
i
im−1 x1m−1
...
...
···
...
i0λ1 −1 xi10
i1λ1 −1 xi11
···
λ1 −1 im−1
im−1
x1
...
...
···
...
xis0
xis1
···
i
xsm−1
i0 xis0
i1 xis1
···
i
i1 xsm−1
...
...
···
...
i0λs −1 xis0
i1λs −1 xis1
.
···
λs −1 im−1
im−1 xs
characteristic polynomial realizes the partition λ of k, and we are evaluating each function
characteristic polynomial gives a partition λ of k and we are evaluating each function
in this system at i0 , i1 , ..., im−1 resp. We now define the variety V d^λ_{k;I} as the subset of matrices of the form M^λ_{k;I} such that: (i) the rank of such a matrix is smaller than k; (ii) all xi are distinct; (iii) all xi are non-vanishing. Obviously, V d^λ_{k;I} is a quasi-projective variety in C^s . Define the analog V i^λ : C^s → Lk of the Vieta map V i which sends an s-tuple (x1 , ..., xs ) ∈ C^s to the polynomial Π_{j=1}^{s} (x − xj )^{λj} ∈ Lk . One can easily see that V i^λ maps V d^λ_{k;I} onto V^λ_{k;I} . Applying this construction to all partitions λ ⊢ k we obtain that Vk;I = ∪_{λ⊢k} V^λ_{k;I} is constructible, which finishes the proof.
The remaining part of the paper is devoted to the study of the Vandermonde varieties V d^A_{k;I} and V d^BC_{k;I} . We start with the Ak -localized variety V d^A_{k;I} . Notice that if m = k the variety V d^A_{k;I} ⊂ C^k is an irreducible hypersurface given by the equation SI = 0 and its degree equals Σ_{j=0}^{k−1} ij − (k choose 2). We will need the following alternative description of the ideal I^A_I in the general case. Namely, using the Jacobi-Trudi identity for the Schur polynomials we get the following statement.
Lemma 5. For any pair (k; I), I = {i0 < i1 < ... < im−1 }, the ideal I^A_I is generated by all k × k-minors of the m × k-matrix

             | h_{i0−(k−1)}        h_{i0−(k−2)}        · · ·   h_{i0}       |
    Hk;I  =  | h_{i1−(k−1)}        h_{i1−(k−2)}        · · ·   h_{i1}       |        (5)
             |    ..                  ..                ..       ..         |
             | h_{i_{m−1}−(k−1)}   h_{i_{m−1}−(k−2)}   · · ·   h_{i_{m−1}}  |
Here hi denotes the complete symmetric function of degree i, hi = 0 if i < 0,
h0 = 1.
Proof. It follows directly from the standard Jacobi-Trudi identity for the Schur
polynomials, see e.g. [16].
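The Jacobi-Trudi translation behind Lemma 5 can be spot-checked in the smallest case. The sketch below (our check; the overall sign depends on the chosen row order, which may differ from the paper's convention) verifies for k = 2 variables and I = (1, 3) that the 2 × 2 determinant of complete symmetric functions from (5) agrees up to sign with the bialternant Schur polynomial of Version 2:

```python
from sympy import symbols, expand, cancel

x, y = symbols('x y')

def h(n):
    """Complete symmetric function h_n(x, y); h_n = 0 for n < 0."""
    if n < 0:
        return 0
    return expand(sum(x**a * y**(n - a) for a in range(n + 1)))

# Rows of H_{2;I} for I = (i0, i1) = (1, 3) are (h_{i-1}, h_i).
i0, i1 = 1, 3
jacobi_trudi = expand(h(i0 - 1)*h(i1) - h(i0)*h(i1 - 1))

# Bialternant Schur polynomial with exponent sequence (1, 3).
bialternant = expand(cancel((x**i0*y**i1 - x**i1*y**i0) / (y - x)))

# With this row order the two expressions differ exactly by a sign.
assert expand(jacobi_trudi + bialternant) == 0
```

Both sides equal ±xy(x + y), the Schur polynomial of the partition (2, 1) in two variables.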
In particular, Lemma 5 shows that V d^A_{k;I} is a determinantal variety in the usual sense. When working with V d^A_{k;I} , and unless the opposite is explicitly mentioned, we will assume that I = {0 < i1 < ... < im−1 }, i.e. that i0 = 0, and that additionally gcd(i1 , ..., im−1 ) = 1. Let us first study some properties of V d^A_{k;I} in the so-called regular case, i.e. when its dimension coincides with the expected one.

Namely, consider the set Ωm,k ⊂ M at(m, k) of all m × k-matrices having positive corank. It is well-known that Ωm,k has codimension equal to m − k + 1. Since V d^c_{k;I} coincides with the pullback of Ωm,k under the map Mk;I and V d^A_{k;I} is closely related to it (but with the trivial pathology on Ak removed), the expected codimension of V d^A_{k;I} equals m − k + 1. We call a pair (k; I) A-regular if k ≤ m ≤ 2k − 1 (implying that the expected dimension of V d^A_{k;I} is positive) and the actual codimension of V d^A_{k;I} coincides with its expected codimension. We now describe the Hilbert series of the quotient ring R^A_I in the case of an arbitrary regular pair (k; I) using the well-known resolution of determinantal ideals of Eagon-Northcott [6].
To explain the notation in the following theorem, we introduce two gradings,
tdeg and deg, on C[t0 , . . . , tm−1 ]. The first one is the usual grading induced by
tdeg(ti ) = 1 for all i, and a second one is induced by deg(ti ) = −i. In the next
theorem M denotes a monomial in C[t0 , . . . , tm−1 ].
Theorem 6. In the above notation, and with I ′ = {i1 , . . . , im−1 },
(a) the Hilbert series Hilb^A_I (t) of R^A_I = R/I^A_I is given by

    Hilb^A_I (t) = ( 1 − Σ_{i=0}^{m−k} ( (−1)^{i+1} Σ_{J⊆I′, |J|=k+i} t^{sJ} Σ_{M∈Ni} t^{deg(M)} ) ) / (1 − t)^k ,

where sJ = Σ_{j∈J} j − (k choose 2) and Ni = {M ; tdeg(M ) = i}.

(b) The degree of R^A_I is T^{(m−k+1)}(1)(−1)^{m−k+1}/(m − k + 1)!, where T (t) is the numerator in (a).
Proof. According to [6], provided that I^A_I has the expected codimension m − k + 1, it is known to be Cohen-Macaulay and it has a resolution of the form

    0 → Fm−k+1 → · · · → F1 → R → R^A_I → 0,        (6)

where Fj is a free module over R = C[x1 , . . . , xk ] of rank (m choose k+j−1)·(k+j−2 choose k−1). We denote the basis elements of Fj by MI T , where I ⊆ {i0 , . . . , im−1 }, |I| = k + j − 1, and T is an arbitrary monomial in {t0 , . . . , tk−1 } of degree j − 1. If MI = {i_{l1} , . . . , i_{l_{k+j−1}} } and T = t_{j1}^{s1} · · · t_{jr}^{sr} with si > 0 for all i and Σ_{i=1}^{r} si = j − 1, then, in our situation, d(MI T ) = Σ_{i=1}^{r} ( Σ_{l=1}^{k+j−1} (−1)^{k+1} h_{i_{kl}−ji} M_{I\{i_{kl}}} ) T /t_{ji} . Here deg(MI ) = Σ_{n=1}^{k+j−1} i_{ln} − (k choose 2) and deg(ti ) = −i. (Note that tdeg(ti ) = 1 but deg(ti ) = −i.) Thus deg(MI T ) = Σ_{n=1}^{k+j−1} i_{ln} + Σ_{i=1}^{r} si ji − (k choose 2) if MI = {i_{l1} , . . . , i_{l_{k+j−1}} } and T = t_{j1}^{s1} · · · t_{jr}^{sr} . Observe that this resolution is never minimal. Indeed, for any sequence I = {0 = i0 < i1 < · · · < im−1 }, we only need the Schur polynomials coming from subsequences starting with 0, so I^A_I is generated by at most (m−1 choose k−1) Schur polynomials instead of totally (m choose k); see also the discussion preceding the proof of Theorem 9 below. Now, if J is an arbitrary homogeneous ideal in R = C[x1 , . . . , xk ] and R/J has a resolution

    0 → ⊕_{i=1}^{βr} R(−nr,i ) → · · · → ⊕_{i=1}^{β1} R(−n1,i ) → R → R/J → 0,

then the Hilbert series of R/J is given by

    ( 1 − Σ_{i=1}^{β1} t^{n1,i} + · · · + (−1)^r Σ_{i=1}^{βr} t^{nr,i} ) / (1 − t)^k .

For the resolution (6), all terms coming from MI T with t0 | T cancel. Thus we get the claimed Hilbert series. If the Hilbert series is given by T (t)/(1 − t)^k = P (t)/(1 − t)^{dim(R/II )} , then the degree of the corresponding variety equals P (1). We have T (t) = (1 − t)^{m−k+1} P (t), and after differentiating the latter identity m − k + 1 times we get P (1) = T^{(m−k+1)}(1)(−1)^{m−k+1}/(m − k + 1)!.
Example 7. For the case 3 × 5 with I = {0, i1 , i2 , i3 , i4 }, if the variety V d^A_{k;I} has the right codimension, we get that its Hilbert series equals T (t)/(1 − t)^3 , where
T (t) = 1−ti1 +i2 −2 −ti1 +i3 −2 −ti1 +i4 −2 −ti2 +i3 −2 −ti2 +i4 −2 −ti3 +i4 −2 +ti1 +i2 +i3 −3 +
ti1 +i2 +i4 −3 + ti1 +i3 +i4 −3 + ti2 +i3 +i4 −3 + ti1 +i2 +i3 −4 + ti1 +i2 +i4 −4 + ti1 +i3 +i4 −4 +
ti2 +i3 +i4 −4 − ti1 +i2 +i3 +i4 −3 − ti1 +i2 +i3 +i4 −4 − ti1 +i2 +i3 +i4 −5
and the degree of V d^A_{k;I} equals
i1 i2 i3 +i1 i2 i4 +i1 i3 i4 +i2 i3 i4 −3(i1 i2 +i1 i3 +i1 i4 +i2 i3 +i2 i4 +i3 i4 )+7(i1 +i2 +i3 +i4 )−15.
An alternative way to calculate deg(V d^A_{k;I} ) is to use the Giambelli-Thom-Porteous formula, see e.g. [8]. The next result, communicated to the authors by M. Kazarian, explains how to do that.
Proposition 8. Assume that $Vd^A_{k;I}$ has the expected codimension $m-k+1$. Then its degree (taking multiplicities of the components into account) is equal to the coefficient of $t^{m-k+1}$ in the Taylor expansion of the series
$$\frac{\prod_{j=1}^{m-1}(1 + i_j t)}{\prod_{j=1}^{k-1}(1 + jt)}.$$
More explicitly,
$$\deg(Vd^A_{k;I}) = \sum_{j=0}^{m-k+1} \sigma_j(I)\, u_{m-k+1-j},$$
where $\sigma_j$ is the $j$th elementary symmetric function of the entries $(i_1, \ldots, i_{m-1})$ and $u_0, u_1, u_2, \ldots$ are the coefficients in the Taylor expansion of $\prod_{j=1}^{k-1} \frac{1}{1+jt}$, i.e. $u_0 + u_1 t + u_2 t^2 + \cdots = \prod_{j=1}^{k-1} \frac{1}{1+jt}$. In particular, $u_0 = 1$, $u_1 = -\binom{k}{2}$, $u_2 = \binom{k+1}{3}\frac{3k-2}{4}$, $u_3 = -\binom{k+2}{4}\binom{k}{2}$, $u_4 = \binom{k+3}{5}\frac{15k^3-15k^2-10k+8}{48}$.
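Before turning to the proof, the closed forms for $u_0, \ldots, u_4$ are easy to sanity-check symbolically; the following sympy snippet (ours, for verification only) compares them with the Taylor coefficients of $\prod_{j=1}^{k-1} \frac{1}{1+jt}$ for small $k$.

```python
import sympy as sp

t = sp.symbols('t')

def u_coeffs(k, order=5):
    # Taylor coefficients of prod_{j=1}^{k-1} 1/(1 + j*t) up to t^(order-1)
    f = sp.prod([1 / (1 + j * t) for j in range(1, k)])
    s = sp.series(f, t, 0, order).removeO()
    return [s.coeff(t, n) for n in range(order)]

for k in range(2, 7):
    u = u_coeffs(k)
    assert u[0] == 1
    assert u[1] == -sp.binomial(k, 2)
    assert u[2] == sp.binomial(k + 1, 3) * sp.Rational(3 * k - 2, 4)
    assert u[3] == -sp.binomial(k + 2, 4) * sp.binomial(k, 2)
    assert u[4] == sp.binomial(k + 3, 5) * sp.Rational(15 * k**3 - 15 * k**2 - 10 * k + 8, 48)
print("ok")
```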
Proof. In the Giambelli formula setting, we consider a "generic" family of $(n \times l)$-matrices $A = \|a_{p,q}\|$, $1 \le p \le n$, $1 \le q \le l$, whose entries are homogeneous functions of degrees $\deg(a_{p,q}) = \alpha_p - \beta_q$ in parameters $(x_1, \ldots, x_k)$ for some fixed sequences $\beta = (\beta_1, \ldots, \beta_l)$ and $\alpha = (\alpha_1, \ldots, \alpha_n)$. Denote by $\Sigma_r$ the subvariety in the parameter space $\mathbb{C}^k$ determined by the condition that the matrix $A$ has rank at most $l - r$, that is, the linear operator $A : \mathbb{C}^l \to \mathbb{C}^n$ has at least an $r$-dimensional kernel. Then the expected codimension of the subvariety $\Sigma_r$ is equal to
$$\mathrm{codim}(\Sigma_r) = r(n - l + r).$$
In case the actual codimension coincides with the expected one, the degree is computed as the following $r \times r$-determinant:
$$\deg(\Sigma_r) = \det \|c_{n-l+r-i+j}\|_{1 \le i,\, j \le r}, \qquad (7)$$
where the entries $c_i$ are defined by the Taylor expansion
$$1 + c_1 t + c_2 t^2 + \cdots = \frac{\prod_{p=1}^{n}(1 + \alpha_p t)}{\prod_{q=1}^{l}(1 + \beta_q t)}.$$
There is a number of situations where this formula can be applied. Depending on the setting, the entries $\alpha_p$, $\beta_q$ can be rational numbers, formal variables, first Chern classes of line bundles, or formal Chern roots of vector bundles of ranks $n$ and $l$, respectively. In the situation of Theorem 6 we should use the presentation (5) of $Vd^A_{k;I}$ from Lemma 5. Then we have $n = m$, $l = k$, $r = 1$, $\alpha = I = (0, i_1, \ldots, i_{m-1})$, $\beta = (k-1, k-2, \ldots, 0)$. Under the assumptions of Theorem 6 the degree of the Vandermonde variety $Vd^A_{k;I}$ will be given by the $1 \times 1$-determinant of the Giambelli-Thom-Porteous formula (7), that is, the coefficient $c_{m-k+1}$ of $t^{m-k+1}$ in the expansion of
$$1 + c_1 t + c_2 t^2 + \cdots = \frac{\prod_{j=0}^{m-1}(1 + i_j t)}{\prod_{j=1}^{k}(1 + (k-j)t)} = \frac{\prod_{j=1}^{m-1}(1 + i_j t)}{\prod_{j=1}^{k-1}(1 + jt)},$$
which gives exactly the stated formula for $\deg(Vd^A_{k;I})$.
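For a concrete check (our own script; the helper names below are ours), one can extract $c_{m-k+1}$ with sympy and compare it with the degree formula stated in Example 7 for $k = 3$, $m = 5$:

```python
import sympy as sp

t = sp.symbols('t')

def degree_gtp(k, I):
    # coefficient of t^(m-k+1) in prod_{j=1}^{m-1}(1 + i_j t) / prod_{j=1}^{k-1}(1 + j t)
    m = len(I) + 1  # I lists i_1, ..., i_{m-1}; i_0 = 0 contributes a factor 1
    f = sp.prod([1 + i * t for i in I]) / sp.prod([1 + j * t for j in range(1, k)])
    return sp.series(f, t, 0, m - k + 2).removeO().coeff(t, m - k + 1)

def degree_example7(i1, i2, i3, i4):
    # the degree formula of Example 7 (k = 3, m = 5)
    e1 = i1 + i2 + i3 + i4
    e2 = i1*i2 + i1*i3 + i1*i4 + i2*i3 + i2*i4 + i3*i4
    e3 = i1*i2*i3 + i1*i2*i4 + i1*i3*i4 + i2*i3*i4
    return e3 - 3*e2 + 7*e1 - 15

for I in [(2, 3, 5, 7), (1, 3, 4, 9), (2, 5, 6, 11)]:
    assert degree_gtp(3, I) == degree_example7(*I)
print("ok")
```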
In the simplest non-trivial case $m = k+1$ one can obtain more detailed information about $Vd^A_{k;I}$. Notice that for $m = k+1$ the $k+1$ Schur polynomials generating the ideal $I^A_I$ are naturally ordered according to their degree. Namely, given an arbitrary $I = \{0 < i_1 < i_2 < \cdots < i_k\}$ with $\gcd(i_1, \ldots, i_k) = 1$, denote by $S_j$, $j = 0, \ldots, k$, the Schur polynomial obtained by removal of the $j$-th row of the matrix $M_{k;I}$. (Pay attention that here we enumerate the rows starting from 0.)
VANDERMONDE VARIETIES AND RELATIONS AMONG SCHUR POLYNOMIALS
Then, obviously, $\deg S_k < \deg S_{k-1} < \cdots < \deg S_0$. Using presentation (5) we get the following.
Theorem 9. For any integer sequence $I = \{0 = i_0 < i_1 < i_2 < \cdots < i_k\}$ of length $k+1$ with $\gcd(i_1, \ldots, i_k) = 1$ the following facts are valid:
(i) $\mathrm{codim}(Vd^A_{k;I}) = 2$;
(ii) The quotient ring $R^A_I$ is Cohen-Macaulay;
(iii) The Hilbert series $\mathrm{Hilb}^A_I(t)$ of $R^A_I$ is given by the formula
$$\mathrm{Hilb}^A_I(t) = \Big(1 - \sum_{j=1}^{k} t^{N - i_j - \binom{k}{2}} + \sum_{j=1}^{k-1} t^{N - j - \binom{k}{2}}\Big)\Big/(1-t)^k,$$
where $N = \sum_{j=1}^{k} i_j$;
(iv) $\deg(Vd^A_{k;I}) = \sum_{1 \le j < l \le k} i_j i_l - \binom{k}{2} \sum_{j=1}^{k} i_j + \binom{k+1}{3}(3k-2)/4$;
(v) The ideal $I^A_I$ is always generated by $k$ generators $S_k, \ldots, S_1$ (i.e., the last generator $S_0$ always lies in the ideal generated by $S_k, \ldots, S_1$). Moreover, if for some $1 \le n \le k-2$ one has $i_n \le k-n$ then $I^A_I$ is generated by $k-n$ elements $S_k, \ldots, S_{n+1}$. In particular, it is generated by two elements $S_k, S_{k-1}$ (i.e., is a complete intersection) if $i_{k-2} \le k-1$.
The theorem gives lots of relations between Schur polynomials.
Theorem 10. Let the generators be $S_k = s_{i_{k-1}-k+1,\, i_{k-2}-k+2,\, \ldots,\, i_1-1}$, $S_{k-1}$, \ldots, $S_0 = s_{i_k-(k-1),\, i_{k-1}-(k-2),\, \ldots,\, i_1}$ in degree increasing order. For $s = 0, 1, \ldots, k-1$ we have
$$h_{i_k-s} S_k - h_{i_{k-1}-s} S_{k-1} + \cdots + (-1)^{k-1} h_{i_1-s} S_1 + (-1)^k h_{-s} S_0 = 0.$$
Here $h_i = 0$ if $i < 0$.
To prove Theorems 9 and 10 notice that since Schur polynomials are irreducible [5], in the case $m = k+1$ the ideal $I^A_I$ always has the expected codimension 2 unless it coincides with the whole ring $\mathbb{C}[x_1, \ldots, x_k]$. Therefore vanishing of any two Schur polynomials lowers the dimension by two. (Recall that we assume that $\gcd(i_1, \ldots, i_k) = 1$.) On the other hand, as we mentioned in the introduction, the codimension of $Vd^A_{k;I}$ in this case is at most 2. For $m = k+1$ one can present a very concrete resolution of the quotient ring $R^A_I$.
Namely, given a sequence $I = \{0 = i_0 < i_1 < \cdots < i_k\}$ we know that the ideal $I^A_I$ is generated by the $k+1$ Schur polynomials $S_l = s_{a_k, a_{k-1}, \ldots, a_1}$, $l = 0, \ldots, k$, where
$$(a_k, \ldots, a_1) = (i_k, i_{k-1}, \ldots, i_{l+1}, \hat{i}_l, i_{l-1}, \ldots, i_0) - (k-1, k-2, \ldots, 1, 0).$$
Obviously, $S_l$ has degree $\sum_{j=1}^{k} i_j - i_l - \binom{k}{2}$ and by the Jacobi-Trudi identity is given by
$$S_l = \det \begin{pmatrix}
h_{i_0-(k-1)} & h_{i_0-(k-2)} & \cdots & h_{i_0} \\
h_{i_1-(k-1)} & h_{i_1-(k-2)} & \cdots & h_{i_1} \\
\vdots & \vdots & \ddots & \vdots \\
h_{i_{l-1}-(k-1)} & h_{i_{l-1}-(k-2)} & \cdots & h_{i_{l-1}} \\
h_{i_{l+1}-(k-1)} & h_{i_{l+1}-(k-2)} & \cdots & h_{i_{l+1}} \\
\vdots & \vdots & \ddots & \vdots \\
h_{i_k-(k-1)} & h_{i_k-(k-2)} & \cdots & h_{i_k}
\end{pmatrix}.$$
Here (as above) $h_j$ denotes the complete symmetric function of degree $j$ in $x_1, \ldots, x_k$. (We set $h_j = 0$ if $j < 0$ and $h_0 = 1$.) Consider the $(k+1) \times k$-matrix $H = H_{k;I}$
given by
$$H = \begin{pmatrix}
h_{i_0-(k-1)} & h_{i_0-(k-2)} & \cdots & h_{i_0} \\
h_{i_1-(k-1)} & h_{i_1-(k-2)} & \cdots & h_{i_1} \\
\vdots & \vdots & \ddots & \vdots \\
h_{i_{k-1}-(k-1)} & h_{i_{k-1}-(k-2)} & \cdots & h_{i_{k-1}} \\
h_{i_k-(k-1)} & h_{i_k-(k-2)} & \cdots & h_{i_k}
\end{pmatrix}.$$
Let $H_l$ be the $(k+1) \times (k+1)$-matrix obtained by extending $H$ with the $l$-th column of $H$. Notice that $\det(H_l) = 0$, and expanding it along the last column we get for $0 \le l \le k-1$ the relation
$$0 = \det(H_l) = h_{i_k-(k-l)} S_k - h_{i_{k-1}-(k-l)} S_{k-1} + \cdots + (-1)^{k-1} h_{i_1-(k-l)} S_1.$$
For $l = k$ we get
$$h_{i_k} S_k - h_{i_{k-1}} S_{k-1} + \cdots + (-1)^k h_{i_0} S_0 = 0,$$
which implies that $S_0$ always lies in the ideal generated by the remaining $S_1, \ldots, S_k$.
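These vanishing determinants are easy to confirm symbolically. The sketch below (our illustration; $k = 3$ and $I = \{0, 1, 2, 4\}$ are arbitrary choices) builds the matrix $H$ from complete symmetric functions in three variables and checks that the alternating sums vanish identically.

```python
import itertools
import sympy as sp

x = sp.symbols('x1 x2 x3')
k = 3
I = [0, 1, 2, 4]  # a sample sequence 0 = i_0 < i_1 < i_2 < i_3 with gcd 1

def h(d):
    # complete homogeneous symmetric polynomial of degree d in x1, x2, x3
    if d < 0:
        return sp.Integer(0)
    return sp.Add(*[sp.Mul(*c) for c in itertools.combinations_with_replacement(x, d)])

# the (k+1) x k matrix H: row j is (h_{i_j-(k-1)}, ..., h_{i_j-1}, h_{i_j})
H = sp.Matrix([[h(i - (k - 1) + c) for c in range(k)] for i in I])

def row_minor(M, row):
    # determinant of M with one row deleted; equals +-S_row by Jacobi-Trudi
    keep = [r for r in range(M.rows) if r != row]
    return M.extract(keep, list(range(M.cols))).det()

S = [row_minor(H, j) for j in range(k + 1)]

# For s = 0, ..., k-1 the extended matrix has a repeated column, hence its
# determinant -- the alternating sum below -- vanishes identically.
for s in range(k):
    rel = sum((-1)**j * h(I[j] - s) * S[j] for j in range(k + 1))
    assert sp.expand(rel) == 0
print("ok")
```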
We now prove Theorem 9.

Proof. Set $N = \sum_{j=1}^{k} i_j$. For an arbitrary $I = \{0, i_1, \ldots, i_k\}$ with $\gcd(i_1, \ldots, i_k) = 1$ we get the following resolution of the quotient ring $R^A_I = R/I^A_I$:
$$0 \longrightarrow \oplus_{l=1}^{k-1} R\Big({-N} + \tbinom{k}{2} + l\Big) \longrightarrow \oplus_{l=1}^{k} R\Big({-N} + i_l + \tbinom{k}{2}\Big) \longrightarrow R \longrightarrow R^A_I \longrightarrow 0,$$
where $R = \mathbb{C}[x_1, \ldots, x_k]$. A simple calculation with this resolution implies that the Hilbert series $\mathrm{Hilb}^A_I(t)$ of $R^A_I$ is given by
$$\mathrm{Hilb}^A_I(t) = \Big(1 - \sum_{l=1}^{k} t^{N - i_l - \binom{k}{2}} + \sum_{l=1}^{k-1} t^{N - \binom{k}{2} - l}\Big)\Big/(1-t)^k$$
and the degree of $Vd^A_{k;I}$ is given by
$$\deg(Vd^A_{k;I}) = \sum_{1 \le r < s \le k} i_r i_s - \binom{k}{2} \sum_{r=1}^{k} i_r + \binom{k+1}{3}(3k-2)/4.$$
Notice that the latter resolution might not be minimal, since the ideal might have fewer than $k$ generators. To finish proving Theorem 9 notice that if the conditions of (v) are satisfied then a closer look at the resolution reveals that the Schur polynomials $S_0, \ldots, S_{k-n}$ lie in the ideal generated by $S_{k-n+1}, \ldots, S_k$.
In connection with Theorems 6 and 9 the following question is completely natural.

Problem 2. Under the assumptions $i_0 = 0$ and $\gcd(i_1, \ldots, i_{m-1}) = 1$, which pairs $(k; I)$ are $A$-regular?

Theorem 9 shows that for $m = k+1$ the condition $\gcd(i_1, \ldots, i_k) = 1$ guarantees regularity of any pair $(k; I)$ with $|I| = k+1$. On the other hand, our computer experiments with Macaulay suggest that for $m > k$ regular cases are rather seldom. In particular, we were able to prove the following.

Theorem 11. If $m > k$, a necessary (but insufficient) condition for $Vd^A_{k;I}$ to have the expected codimension is $i_1 = 1$.
Proof. If $i_1 \ge 2$, then $i_{k-2} \ge k-1$. This means that the ideal is generated by Schur polynomials $s_{a_0, \ldots, a_{k-1}}$ with $a_{k-2} \ge 1$. Multiplying these up to degree $n$ gives linear combinations of Schur polynomials $s_{b_1, \ldots, b_{k-1}}$ with $b_{k-2} \ge 1$. Thus we miss all Schur polynomials with $b_{k-2} = 0$. The number of such Schur polynomials equals the number of partitions of $n$ into at most $k-2$ parts. The number of partitions of $n$ into exactly $k-2$ parts is approximated by $n^{k-3}/((k-2)!(k-1)!)$. Thus the number of elements of degree $n$ in the ring is at least $cn^{k-3}$ for some positive $c$, so the ring has dimension $\ge k-2$. The expected dimension is $\le k-3$, which is a contradiction.
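The partition count entering this estimate can be tabulated with a standard recurrence (a small illustration of ours, not from the paper): the number $p_{\le k}(n)$ of partitions of $n$ into at most $k$ parts satisfies $p_{\le k}(n) = p_{\le k-1}(n) + p_{\le k}(n-k)$.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions_at_most(n, k):
    # number of partitions of n into at most k parts:
    # p(n, k) = p(n, k-1)   (fewer than k parts)
    #         + p(n-k, k)   (exactly k parts: subtract 1 from each part)
    if n == 0:
        return 1
    if k == 0:
        return 0
    total = partitions_at_most(n, k - 1)
    if n >= k:
        total += partitions_at_most(n - k, k)
    return total

# e.g. for k = 5 the "missing" Schur polynomials in degree n are counted by
# partitions of n into at most k - 2 = 3 parts
print(partitions_at_most(10, 2), partitions_at_most(10, 3))  # 6 14
```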
So far a complete (conjectural) answer to Problem 2 is only available in the first non-trivial case $k = 3$, $m = 5$. Namely, for a 5-tuple $I = \{0, 1, i_2, i_3, i_4\}$ to be regular one needs the corresponding Vandermonde variety $Vd^A_{3;I}$ to be a complete intersection. This is due to the fact that in this situation the ideal $I^A_I$ is generated by the Schur polynomials $S_4, S_3, S_2$ of the least degrees in the above notation. Notice that $S_4 = h_{i_2-2}$, $S_3 = h_{i_3-2}$, $S_2 = h_{i_4-2}$. Thus $Vd^A_{3;I}$ has the expected codimension (equal to 3) if and only if $\mathbb{C}[x_1, x_2, x_3]/\langle h_{i_2-2}, h_{i_3-2}, h_{i_4-2} \rangle$ is a complete intersection or, in other words, $h_{i_2-2}, h_{i_3-2}, h_{i_4-2}$ is a regular sequence. Exactly this problem (along with many other similar questions) was considered in the intriguing paper [4], where the authors formulated the following claim; see Conjecture 2.17 of [4].
Conjecture 12. Let $A = \{a, b, c\}$ with $a < b < c$. Then $h_a, h_b, h_c$ in three variables is a regular sequence if and only if the following conditions are satisfied:
(1) $abc \equiv 0 \bmod 6$;
(2) $\gcd(a+1, b+1, c+1) = 1$;
(3) For all $t \in \mathbb{N}$ with $t > 2$ there exists $d \in A$ such that $d + 2 \not\equiv 0, 1 \bmod t$.
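Such regularity conditions can be probed computationally: three homogeneous polynomials in three variables form a regular sequence iff their only common zero is the origin, which can be detected from pure powers among the leading monomials of a Gröbner basis. A minimal sympy sketch (our own, independent of [4]):

```python
import itertools
import sympy as sp

X = sp.symbols('x1 x2 x3')

def h(d):
    # complete homogeneous symmetric polynomial of degree d in three variables
    return sp.Add(*[sp.Mul(*c) for c in itertools.combinations_with_replacement(X, d)])

def is_regular(a, b, c):
    # h_a, h_b, h_c form a regular sequence iff the ideal is zero-dimensional,
    # i.e. every variable appears as a pure power among the leading monomials
    # of a Groebner basis (finiteness criterion)
    G = sp.groebner([h(a), h(b), h(c)], *X, order='grevlex')
    lead = [sp.LM(g, *X, order='grevlex') for g in G.exprs]
    return all(any(m.free_symbols == {v} for m in lead) for v in X)

print(is_regular(1, 2, 3), is_regular(1, 2, 4))
```

For example, $h_1, h_2, h_3$ generate the same ideal as $e_1, e_2, e_3$ and are regular, while $(h_1, h_2, h_4) = (e_1, e_2)$ is not.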
In fact, our experiments allow us to strengthen the latter conjecture in the following way.

Conjecture 13. In the above set-up, if the sequence $h_a, h_b, h_c$ with $a > 1$ in three variables is not regular, then $h_c$ lies in the ideal generated by $h_a$ and $h_b$. (If $(a, b, c) = (1, 4, 3k+2)$, $k \ge 1$, then $h_a, h_b, h_c$ is neither a regular sequence, nor is $h_c \in (h_a, h_b)$.)
We note that if we extend the set-up of [4] by allowing Schur polynomials $s_{r,s,t}$ instead of just complete symmetric functions, then, if $t > 0$ in all three of them, the sequence is never regular. Conjectures 12 and 13 provide a criterion which agrees with our calculations of $\dim(Vd^A_{3;I})$. Finally, we made experiments checking how $\dim(Vd^A_{k;I})$ depends on the last entry $i_{m-1}$ of $I = \{0, 1, i_2, \ldots, i_{m-1}\}$ while keeping the first $m-1$ entries fixed.
Conjecture 14. For any given $I = (0, 1, i_2, \ldots, i_{m-1})$ the dimension $\dim(Vd^A_{k;I})$ depends periodically on $i_{m-1}$ for all $i_{m-1}$ sufficiently large.

Notice that Conjecture 14 follows from Conjecture 12 in the special case $k = 3$, $m = 5$. Unfortunately, we do not have a complete description of the length of this period in terms of the fixed part of $I$ and it might be quite tricky.
For the $BC_k$-localized variety $Vd^{BC}_{k;I}$ we have, except for $k = 3$, only conjectures, supported by many calculations.

Conjecture 15. For any integer sequence $I = \{0 = i_0 < i_1 < i_2 < \cdots < i_k\}$ of length $k+1$ with $\gcd(i_1, \ldots, i_k) = 1$ the following facts are valid.
(i) $\mathrm{codim}(Vd^{BC}_{k;I}) = 2$;
(ii) The quotient ring $R^{BC}_I$ is Cohen-Macaulay;
(iii) There is a $\mathbb{C}[x_1, \ldots, x_k] = R$-resolution of $R^{BC}_I$ of the form
$$0 \to \oplus_{j=0}^{k-1} R\big[{-N} + i_1 + j + \tbinom{k}{2}\big] \to \oplus_{j=1}^{k} R\big[{-N} + i_j + \tbinom{k}{2}\big] \oplus R\big[{-N} + ki_1 + \tbinom{k}{2}\big] \to R \to R^{BC}_I \to 0;$$
(iv) The Hilbert series $\mathrm{Hilb}^{BC}_I(t)$ of $R^{BC}_I$ is given by the formula
$$\mathrm{Hilb}^{BC}_I(t) = \Big(1 - \sum_{j=1}^{k} t^{N - i_j - \binom{k}{2}} - t^{N - ki_1 - \binom{k}{2}} + \sum_{j=0}^{k-1} t^{N - i_1 - j - \binom{k}{2}}\Big)\Big/(1-t)^k,$$
where $N = \sum_{j=1}^{k} i_j$;
(v) $\deg(Vd^{BC}_{k;I}) = \sum_{1 \le j < l \le k} i_j i_l - \binom{k}{2} \sum_{j=1}^{k} i_j + \binom{k+1}{3}(3k-2)/4 - \binom{k}{2} i_1(i_1-1)$;
(vi) The ideal $I^{BC}_I$ is always generated by $k$ generators. It is generated by two elements (i.e., is a complete intersection) if $i_1 \le k-1$.
Conjecture 16. Let $S_k, \ldots, S_1$ be as in Theorem 6 and $G_0 = s_{i_k-i_1-k+1, \ldots, i_2-i_1-1}$. Then, for $s = 0, \ldots, k-1$, we have
$$h_{i_k-i_1-s} S_k - h_{i_{k-1}-i_1-s} S_{k-1} + \cdots + (-1)^{k-2} h_{i_2-i_1-s} S_2 + (-1)^{k-1} h_{-s} S_1 + (-1)^k s_{(i_1-1)^{k-1},\, k-1-s}\, G_0 = 0.$$
Here $h_i = 0$ if $i < 0$ and $h_{i,\ldots,i,j} = 0$ if $j > i$, and $(i_1-1)^{k-1}$ means $i_1-1, \ldots, i_1-1$ ($k-1$ times).
That the ring is CM follows from the fact that the ideal is generated by the maximal minors of a $t \times m$-matrix in the ring of Laurent polynomials. To prove the theorem it suffices to prove the relations between the Schur polynomials. Unfortunately, we have managed to do that only for $k = 3$.
3. Final remarks
Here we briefly explain the source of our interest in Vandermonde varieties. In
1977 J. H. Loxton and A. J. van der Poorten formulated an important conjecture
(Conjecture 1′ of [10]) claiming that there exists a constant µk such that any integer
recurrence of order k either has at most µk integer zeros or has infinitely many zeros.
This conjecture was first settled by W. M. Schmidt in 1999, see [14] and also by
J. H. Evertse and H. P. Schlickewei, see [7].
The upper bound for $\mu_k$ obtained in [14] was
$$\mu_k < e^{e^{e^{3k \log k}}},$$
which was later improved by the same author to
$$\mu_k < e^{e^{e^{20k}}}.$$
Apparently the best known at the moment upper bound for $\mu_k$ was obtained in [1] and is given by
$$\mu_k < e^{e^{11k\sqrt{k}}}.$$
Although the known upper bounds are at least double exponential, it seems plausible that the realistic upper bounds should be polynomial. The only known nontrivial lower bound for $\mu_k$ was found in [2] and is given by
$$\mu_k \ge \binom{k+1}{2} - 1.$$
One should also mention the non-trivial exact result of F. Beukers showing that for
sequences of rational numbers obtained from recurrence relations of length 3 one
has µ3 = 6, see [3].
The initial idea of this project was to try to obtain upper/lower bounds for $\mu_k$ by studying algebraic and geometric properties of Vandermonde varieties, but they seem to be quite complicated. Let us finish with some further problems, and comments on them obtained from an extensive computer search. Many questions
related to the Skolem-Mahler-Lech theorem translate immediately into questions
about Vk;I . For example, one can name the following formidable challenges.
Problem 3. For which pairs $(k; I)$ is the variety $V_{k;I}$ empty/non-empty? More generally, what is the dimension of $V_{k;I}$?
We made a complete computer search for $R^A_I$ and some variants where we removed solutions on the coordinate planes and axes, and looked for arithmetic sequences, for $(0, i_1, i_2, i_3)$, $0 < i_1 < i_2 < i_3$, $i_3 \le 13$ (so $k = 3$, $m = 4$). The only cases when $V_{k;I}$ was empty were $I = (0, 1, 3, 7)$ and $I = (0, 1, 3, 9)$ and their "duals" $(0, 4, 6, 7)$ and $(0, 6, 8, 9)$. We suspect that these exceptions are the only possible ones. For $k = 3$, $m = 5$ we investigated $I = (0, i_1, i_2, i_3, i_4)$, $0 < i_1 < i_2 < i_3 < i_4$, $i_4 \le 9$. For $i_1 = 1$ about half of the cases had the expected dimension. For $(k, m) = (3, 6)$, $i_5 \le 10$, for $(k, m) = (4, 6)$, $i_5 \le 9$, and for $(k, m) = (5, 8)$, $i_7 \le 10$, most cases were of expected dimension. The corresponding calculations for $R^{BC}_I$, $(k, m) = (3, 5)$, $i_4 \le 9$, showed that about half of the cases had the expected codimension.
Problem 4. For which pairs $(k; I)$ must any solution of a linear recurrence vanishing at $I$ have an additional integer root outside $I$? More specifically, for which pairs $(k; I)$ must any solution of a linear recurrence vanishing at $I$ vanish infinitely many times in $\mathbb{Z}$? In other words, for which pairs $(k; I)$ must the set of all integer zeros of the corresponding solution of any recurrence relation from $V_{k;I}$ necessarily contain an arithmetic progression?
For example, in case $k = 3$, $m = 4$ we found that the first situation occurs for the 4-tuples $(0, 1, 4, 6)$ and $(0, 1, 4, 13)$, which both force a non-trivial solution of a third order recurrence vanishing at them to vanish at the 6-tuple $(0, 1, 4, 6, 13, 52)$, which is the basic example in [3]. The second situation occurs if in a 4-tuple $I = \{0, i_1, i_2, i_3\}$ two differences between its entries coincide, see [3]. But this condition is only sufficient and no systematic information is available. Notice that for any pair $(k; I)$ the variety $V_{k;I}$ is weighted-homogeneous, where the coordinate $\alpha_i$, $i = 1, \ldots, k$, has weight $i$. (This action corresponds to the scaling of the characteristic roots of (2).)

We looked for cases containing an arithmetic sequence with difference at most 10, and we found cases which gave arithmetic sequences with differences 2, 3, 4, and 5, and a few cases which didn't give any arithmetic sequences.
Problem 5. Is it true that if a $(k+1)$-tuple $I$ consists of two pieces of an arithmetic progression with the same difference, then any exponential polynomial vanishing at $I$ contains an arithmetic progression of integer zeros?

Problem 6. If the answer to the previous question is positive, is it true that there are only finitely many exceptions from this rule leading to only arithmetic progressions?
Finally, a problem similar to that of J. H. Loxton and A. J. van der Poorten can be formulated for real zeros of exponential polynomials instead of integer ones. Namely, the following simple lemma is true.

Lemma 17. Let $\lambda_1, \ldots, \lambda_n$ be an arbitrary finite set of (complex) exponents with pairwise distinct real parts. Then an arbitrary exponential polynomial of the form $c_1 e^{\lambda_1 z} + c_2 e^{\lambda_2 z} + \cdots + c_n e^{\lambda_n z}$, $c_i \in \mathbb{C}$, has at most finitely many real zeros.

Problem 7. Does there exist an upper bound on the maximal number of real zeros for the set of exponential polynomials given in the latter lemma in terms of $n$ only?
Problem 8. What about non-regular cases? Describe their relation to the existence
of additional integer zeros and arithmetic progressions as well as additional Schur
polynomials in the ideals.
14
R. FRÖBERG AND B. SHAPIRO
References
[1] P. B. Allen, On the multiplicity of linear recurrence sequences, J. Number Th., vol. 126
(2007), 212–216.
[2] E. Bavencoffe and J.-P. Bézivin, Une famille remarquable de suites récurrentes linéaires,
Monatsh. Math., vol. 120 (1995), 189–203.
[3] F. Beukers, The zero-multiplicity of ternary recurrences, Compositio Math. 77 (1991),
165-177.
[4] A. Conca, C. Krattenthaler, J. Watanabe, Regular sequences of symmetric polynomials,
Rend. Sem. Mat. Univ. Padova 121 (2009), 179–199.
[5] R. Dvornicich, U. Zannier, Newton functions generating symmetric fields and irreducibility of Schur polynomials, Adv. Math. 222 (2009), no. 6, 1982–2003.
[6] J. A. Eagon, D. G. Northcott, Ideals defined by matrices and a certain complex associated with them, Proc. Roy. Soc. Ser. A 269 (1962) 188–204.
[7] J. H. Evertse and H. P. Schlickewei, A quantitative version of the Absolute Subspace
theorem, J. reine angew. Math. vol. 548 (2002), 21–127.
[8] W. Fulton, Flags, Schubert polynomials, degeneracy loci, and determinantal formulas,
Duke Math. J., vol 65 (3) (1991), 381–420.
[9] C. Lech, A note on Recurring Series, Ark. Mat. 2, (1953), 417–421.
[10] J. H. Loxton and A. J. van der Poorten, On the growth of recurrence sequences, Math.
Proc. Camb. Phil. Soc. vol. 81 (1977), 369–377.
[11] K. Mahler, Eine arithmetische Eigenschaft der Taylor-Koeffizienten rationaler Funktionen, Proc. Kon. Nederl. Akad. Wetensch. Amsterdam, Proc. 38 (1935), 50–60.
[12] D. Mumford: The red book of varieties and schemes. Second, expanded edition. Includes
the Michigan lectures (1974) on curves and their Jacobians. With contributions by
Enrico Arbarello. Lecture Notes in Mathematics, 1358. Springer-Verlag, Berlin, 1999.
x+306 pp.
[13] Th. Skolem, Einige Sätze über gewisse Reihenentwicklungen und exponentiale Beziehungen mit Anwendung auf diophantische Gleichungen, Oslo Vid. akad. Skrifter I 1933 Nr.
6.
[14] W. M. Schmidt, The zero multiplicity of linear recurrence sequences, Acta Math. vol.
182 (1999), 243–282.
[15] R. Stanley, Enumerative combinatorics. Vol. I. With a foreword by Gian-Carlo Rota.
The Wadsworth and Brooks/Cole Mathematics Series. Wadsworth and Brooks/Cole
Advanced Books and Software, Monterey, CA, 1986. xiv+306 pp.
[16] H. Tamvakis, The theory of Schur polynomials revisited, arXiv:1008.3094v1.
Department of Mathematics, Stockholm University, SE-106 91, Stockholm, Sweden
E-mail address: [email protected]
Department of Mathematics, Stockholm University, SE-106 91, Stockholm, Sweden
E-mail address: [email protected]
Immunophenotypes of Acute Myeloid Leukemia From Flow Cytometry Data Using Templates

Ariful Azad 1, Bartek Rajwa 2 and Alex Pothen 3∗

1 [email protected], Computational Research Division, Lawrence Berkeley National Laboratory, California, USA
2 [email protected], Bindley Bioscience Center, Purdue University, West Lafayette, Indiana, USA
3 [email protected], Department of Computer Science, Purdue University, West Lafayette, Indiana, USA

arXiv:1403.6358v1 [q-bio.QM] 22 Mar 2014
ABSTRACT
Motivation: We investigate whether a template-based classification
pipeline could be used to identify immunophenotypes in (and thereby
classify) a heterogeneous disease with many subtypes. The disease
we consider here is Acute Myeloid Leukemia, which is heterogeneous
at the morphologic, cytogenetic and molecular levels, with several
known subtypes. The prognosis and treatment for AML depends on
the subtype.
Results: We apply flowMatch, an algorithmic pipeline for flow
cytometry data created in earlier work, to compute templates
succinctly summarizing classes of AML and healthy samples. We
develop a scoring function that accounts for features of the AML data
such as heterogeneity to identify immunophenotypes corresponding
to various AML subtypes, including APL. All of the AML samples in
the test set are classified correctly with high confidence.
Availability: flowMatch is available at www.bioconductor.org/packages/devel/bioc/html/flowMatch.html; programs specific to immunophenotyping AML are at www.cs.purdue.edu/homes/aazad/software.html.
Contact: [email protected]
1 INTRODUCTION
Can Acute Myeloid Leukemia (AML) samples be distinguished
from healthy ones using flow cytometry data from blood or bone
marrow with a template-based classification method? This method
builds a template for each class to summarize the samples belonging
to the class, and uses them to classify new samples. This question is
interesting because AML is a heterogeneous disease with several
subtypes and hence it is not clear that a template can succinctly
describe all types of AML. Furthermore, we wish to identify
immunophenotypes (cell types in the bone marrow and blood) that
are known to be characteristic of subtypes of AML. Pathologists use
these immunophenotypes to visualize AML and its subtypes, and a
computational procedure that can provide this information would
be more helpful in clinical practice than a classification score that
indicates if an individual is healthy or has AML.
In earlier work, we have developed a template-based classification
method for analyzing flow cytometry (FC) data, which consists of
measurements of morphology (from scattering) and the expression
of multiple biomarkers (from fluorescence) at the single-cell level.
Each FC sample consists of hundreds of thousands or more of such
single-cell measurements, and a study could consist of thousands
of samples from different individuals at different time points under
different experimental conditions (Aghaeepour et al., 2013; Shapiro,
2005). We have developed an algorithmic pipeline for various
steps in processing this data (Azad et al., 2013, 2010, 2012). We
summarize each sample by means of the cell populations that it
contains. (These terms are defined in Table 1 and illustrated in
Fig. 1.) Similar samples belonging to the same class are described
by a template for the class. A template consists of meta-clusters
that characterize the cell populations present in the samples that
constitute the class. We compute templates from the samples, and
organize the templates into a template tree. Given a sample to
classify, we compare it with the nodes in the template tree, and
classify it to the template that it is closest to. A combinatorial
measure for the dissimilarity of two samples or two templates,
computed by means of a mixed edge cover in a graph model
(described in the next section), is at the heart of this approach.
We have applied our algorithmic pipeline for template-based classification to various problems: to distinguish the
phosphorylation state of T cells; to study the biological, temporal,
and technical variability of cell types in the blood of healthy
individuals; to characterize changes in the immune cells of Multiple
Sclerosis patients undergoing drug treatments; and to predict the
vaccination status of HIV patients. However, it is not clear if the
AML data set can be successfully analyzed with this scheme, since
AML is a heterogeneous disease at the morphologic, cytogenetic
and molecular levels, and a few templates may not describe all of its
subtypes.
AML is a disease of myeloid stem cells that differentiate
to form several types of cells in the blood and marrow. It is
characterized by the profusion of immature myeloid cells, which
are usually prevented from maturing due to the disease. The
myeloid stem cell differentiates in several steps to form myeloblasts
and other cell types in a hierarchical process. This hierarchical
differentiation process could be blocked at different cell types,
leading to the multiple subtypes of AML. Eight different subtypes
of AML based on cell lineage are included in the French-American-British Cooperative Group (FAB) classification scheme (Bennett
et al., 1985). (A different World Health Organization (WHO)
classification scheme has also been published.) Since the prognosis
and treatment varies greatly among the subtypes of AML, accurate
diagnosis is critical.
We extend our earlier work on template-based classification here
by developing a scoring function that accounts for the subtleties of
FC data of AML samples. Only a small number of the myeloid
cell populations in AML samples are specific to AML, and there
are a larger number of cell populations that these samples share
with healthy samples. Furthermore, the scoring function needs to
account for the diversity of the myeloid cell populations in the
various subtypes of AML.
Our work has the advantage of identifying immunophenotypes
of clinical interest in AML from the templates. Earlier work on
the AML dataset we work with has classified AML samples using
methods such as nearest neighbor classification, logistic regression,
Table 1. Summary of terminology
Terms
Meaning
Sample
From FC data. Characterized by the collection of cell
populations included within it.
Cell population
(cluster)
A group of cells with identical morphology and
expressing similar biomarkers, e.g., helper T cells,
B cells. Computed from a sample.
Meta-cluster
A set of similar cell clusters from different samples.
Computed from similar clusters in samples.
Template
A collection of meta-clusters from samples of the
same class.
and the challenge is to determine the disease status of the rest
of the samples, 20 AML and 157 healthy, based only on the
information in the training set. The complete dataset is available
at http://flowrepository.org/.
The side scatter (SS) and all of the fluorescence channels are
transformed logarithmically, but the forward scatter (FS) is linearly
transformed to the interval [0,1] so that all channels have values
in the same range. This removes any bias towards FS channel in
the multi-dimensional clustering phase. After preprocessing, an FC
sample is stored as an n × p matrix A, where the element A(i, j)
quantifies the j th feature in the ith cell, and p is the number of
features measured in each of n cells. In this dataset, p = 7 for each
tube and n varies among the samples.
matrix relevance learning vector quantization, etc., but they have
not identified these immunophenotypes; e.g., (Biehl et al., 2013;
Manninen et al., 2013; Qiu, 2012).
Template-based classification has the advantage of being more
robust than nearest neighbor classification since a template
summarizes the characteristic properties of a class while ignoring
small sample-to-sample variations. It is also scalable to large
numbers of samples, since we compare a sample to be classified
only against a small number of templates rather than the much
larger number of samples. The comparisons with the templates
can be performed efficiently using the structure of the template
tree. It also reduces the data size by clustering the data to identify
cell populations and then working with the statistical distributions
characterizing the cell populations, in contrast to some of the earlier
approaches that work with data sets even larger than the FC data
by creating multiple variables from a marker (reciprocal, powers,
products and quotients of subsets of the markers, etc.).
Template-based classification has been employed in other areas
such as character, face, and image recognition, but its application to
FC is relatively recent. In addition to our work, templates have been
used for detecting the effects of phosphorylation (Pyne et al., 2009),
evaluating the efficiency of data transformations (Finak et al., 2010),
and labeling clusters across samples (Spidlen et al., 2013).
2.2 Identifying cell populations in each sample
We employ a two-stage clustering approach for identifying
phenotypically similar cell populations (homogeneous clusters of
cells) in each sample. At first, we apply the k-means clustering
algorithm for a wide range of values for k, and select the optimum
number of clusters k∗ by simultaneously optimizing the Calinski-Harabasz and S_Dbw cluster validation methods (Halkidi et al.,
2001). Next, we model the clusters identified by the k-means
algorithm with a finite mixture model of multivariate normal
distributions. In the mixture model, the ith cluster is represented
by two distribution parameters µi , the p-dimensional mean vector,
and Σi , the p × p covariance matrix. The distribution parameters for
each cluster are then estimated using the Expectation-Maximization
(EM) algorithm. The statistical parameters of a cluster are used to
describe the corresponding cell population in the rest of the analysis.
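As a toy illustration (our synthetic sketch, not the paper's pipeline or data), summarizing a labeled cluster by its parameters $(\mu_i, \Sigma_i)$ amounts to taking its sample mean and covariance:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "sample": 200 cells, p = 2 features, two labeled cell populations
cells = np.vstack([
    rng.normal([0.0, 0.0], 0.1, size=(100, 2)),
    rng.normal([1.0, 1.0], 0.1, size=(100, 2)),
])
labels = np.repeat([0, 1], 100)

def cluster_params(cells, labels):
    # summarize each cell population by its mean vector and covariance matrix
    params = {}
    for c in np.unique(labels):
        pts = cells[labels == c]
        params[c] = (pts.mean(axis=0), np.cov(pts, rowvar=False))
    return params

params = cluster_params(cells, labels)
print(params[1][0].shape, params[1][1].shape)  # (2,) (2, 2)
```

In the paper's pipeline these parameters are instead fitted with EM on the k-means clusters; the sketch only shows the form of the per-cluster summary.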
2.3 Dissimilarity between samples
We calculate the dissimilarity between a pair of cell populations
by the Mahalanobis distance between their distributions. Let
c1 (µ1 , Σ1 ) and c2 (µ2 , Σ2 ) be two normally distributed clusters
and Σp be the pooled covariance of Σ1 and Σ2 . The Mahalanobis
distance d(c1 , c2 ) between the clusters is computed as follows:
$$d(c_1, c_2) = \tfrac{1}{2}(\mu_1 - \mu_2)^{\top} \Sigma_p^{-1} (\mu_1 - \mu_2), \qquad (1)$$
where
2 METHODS

2.1 The AML Dataset
We have used an FC dataset on AML that was included in the
DREAM6/FlowCAP2 challenge of 2011. The dataset consists of FC
measurements of peripheral blood or bone marrow aspirate collected
from 43 AML positive patients and 316 healthy donors over a one
year period. Each patient sample was subdivided into eight aliquots
(“tubes”) and analyzed with different biomarker combinations, five
markers per tube (most markers are proteins). In addition to the
markers, the forward scatter (FS) and side scatter (SS) of each
sample was also measured in each tube. Hence, we have 359 ×
8 = 2, 872 samples and each sample is seven-dimensional (five
markers and the two scatters). Tube 1 is an isotype control used
to detect non-specific antibody binding and Tube 8 is an unstained
control for identifying background or autofluorescence of the
system. Since the data has been compensated for autofluorescence
and spectral overlap by experts, we omit these tubes from
our analysis. The disease status (AML/healthy) of 23 AML
patients and 156 healthy donors are provided as training set,
2
$\Sigma_p = ((n_1 - 1)\Sigma_1 + (n_2 - 1)\Sigma_2)/(n_1 + n_2 - 2)$.
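For illustration, Equation (1) with the pooled covariance is a few lines of numpy (our sketch; the toy cluster sizes and parameters are made up):

```python
import numpy as np

def pooled_cov(S1, n1, S2, n2):
    # pooled covariance of two clusters containing n1 and n2 cells
    return ((n1 - 1) * S1 + (n2 - 1) * S2) / (n1 + n2 - 2)

def dissimilarity(mu1, S1, n1, mu2, S2, n2):
    # d(c1, c2) = (1/2) (mu1 - mu2)^T Sigma_p^{-1} (mu1 - mu2)
    Sp = pooled_cov(S1, n1, S2, n2)
    diff = mu1 - mu2
    return 0.5 * diff @ np.linalg.solve(Sp, diff)

mu1, mu2 = np.array([0.0, 0.0]), np.array([2.0, 0.0])
S = np.eye(2)
print(dissimilarity(mu1, S, 100, mu2, S, 100))  # 2.0
```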
We compute the dissimilarity between a pair of samples by
optimally matching (in a graph-theoretic model) similar cell clusters
and summing up the dissimilarities of the matched clusters. In
earlier work, we have developed a robust variant of a graph matching
algorithm called the Mixed Edge Cover (MEC) algorithm that
allows a cluster in one sample to be matched with zero, one, or
more clusters in the second sample (Azad et al., 2010). The cell
population in the first sample could be either absent, or present,
or split into two or more cell populations in the second sample.
These can happen due to changes in biological conditions or due
to artifactual errors in clustering.
Consider two FC samples A and B consisting of ka and kb
cell populations such that A = {a1 , a2 , ..., aka }, and B =
{b1 , b2 , ..., bkb } where ai is the ith cluster from sample A and bj is
the j th cluster from B. The mixed edge cover computes a mapping
mec, of clusters across A and B such that mec(ai ) ∈ P(B) and
mec(bj ) ∈ P(A), where P(A) (P(B)) is the power set of A (B).
When a cluster ai (or bj) remains unmatched under mec, i.e.,
Fig. 1. (a) A hierarchical template tree created by the HM&M algorithm
from four hypothetical samples S1 , S2 , S3 and S4 . Cells are denoted with
dots, and clusters with solid ellipses in the samples are at the leaves of the
tree. An internal node represents a template created from its children, and
the root represents the template of these four samples. A meta-cluster is a
homogeneous collection of clusters and is denoted by a dashed ellipse inside
the template. (b) One phase of the HM&M algorithm creating a sub-template
T (S3 , S4 ) from samples S3 and S4 . At first, corresponding clusters across
S3 and S4 are matched by the MEC algorithm, and then the matched clusters
are merged to construct new meta-clusters.
mec(ai) = ∅, we set d(ai, −) = λ, where the fixed cost λ is a
penalty for leaving a vertex unmatched. We set λ to √p, where p is
the dimension of the data, so that a pair
of clusters get matched only if the average squared deviation across
all dimensions is less than one. The cost of a mixed edge cover mec
is the sum of the dissimilarities of all pairs of matched clusters and
the penalties due to the unmatched clusters. A minimum cost mixed
edge cover is a mixed edge cover with the minimum cost. We use
this minimum cost as the dissimilarity D(A, B) between a pair of
samples A and B:
    D(A, B) =  min over mixed edge covers mec of
               [ Σ_{1≤i≤ka} Σ_{bj ∈ mec(ai)} d(ai, bj)  +  Σ_{1≤i≤kb} Σ_{aj ∈ mec(bi)} d(bi, aj) ],    (2)
where d(ai , bj ) is computed from Equation (1). A minimum cost
mixed edge cover can be computed by a modified minimum weight
perfect matching algorithm in O(k³ log k) time, where k is the
maximum number of clusters in a sample (Azad et al., 2010). The
number of cell clusters k is typically small (fewer than fifty for the
AML data), and the dissimilarity between a pair of samples can be
computed in less than a second on a desktop computer.
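The full mixed edge cover permits one-to-many matches and runs in O(k³ log k) time; as a simplified illustration of the matching-with-penalty idea only, the following sketch restricts each cluster to at most one partner, pays λ per unmatched cluster, and solves the small instance by brute force (this is our simplification, not the MEC algorithm of Azad et al.):

```python
import itertools

def sample_dissimilarity(dist, lam):
    """One-to-one variant of the mixed edge cover cost of Equation (2),
    solved by brute force for illustration only.

    dist[i][j] holds the cluster dissimilarity d(ai, bj) from
    Equation (1); lam is the penalty per unmatched cluster.
    """
    ka, kb = len(dist), len(dist[0])
    best = (ka + kb) * lam  # baseline: leave every cluster unmatched
    # Try every way of matching r clusters of sample A with r of sample B.
    for r in range(1, min(ka, kb) + 1):
        for rows in itertools.combinations(range(ka), r):
            for cols in itertools.permutations(range(kb), r):
                matched = sum(dist[i][j] for i, j in zip(rows, cols))
                unmatched = (ka - r + kb - r) * lam
                best = min(best, matched + unmatched)
    return best
```

For example, two well-separated clusters with mutual distance above 2λ are both left unmatched, since paying 2λ is cheaper than forcing the match.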
2.4 Creating templates from a collection of samples
We have designed a hierarchical matching-and-merging (HM&M)
algorithm that arranges a set of similar samples into a binary
template tree data structure (Azad et al., 2012). A node in the
tree represents either a sample (leaf node) or a template (internal
node). In both cases, a node is characterized by a finite mixture
of multivariate normal distributions each component of which is a
cluster or meta-cluster. Fig. 1 shows an example of a template-tree
created from four hypothetical samples, S1 , S2 , S3 , and S4 .
Let a node vi (representing either a sample or a template) in the
template tree consist of ki clusters or meta-clusters ci1 , ci2 , . . ., ciki .
A node vi is called an “orphan” if it does not have a parent in the
template-tree. Consider N flow cytometry samples S1 , S2 , . . . , SN
belonging to a class. Then the HM&M algorithm for creating a
template tree from these samples can be described by the following
three steps.
1. Initialization: Create a node vi for each of the N samples
Si. Place all of these nodes in the set of orphan nodes. Repeat the
matching and merging steps until a single orphan node remains.
2. Matching: Compute the dissimilarity D(vi, vj) between every
pair of nodes vi and vj in the current orphan set with the mixed
edge cover algorithm (using Equation (2)).
3. Merging: Find a pair of orphan nodes (vi , vj ) with minimum
dissimilarity D(vi , vj ) and merge them to create a new node vl . Let
mec be a function denoting the mapping of clusters from vi to vj .
That is, if cix ∈ vi is matched to cjy ∈ vj , then cjy ∈ mec(cix ), where
1 ≤ x ≤ ki and 1 ≤ y ≤ kj . Create a new meta-cluster clz from
each set of matched clusters, clz = {cix ∪ mec(cix )}. Let kl be the
number of the new meta-clusters created above. Then the new node
vl is created as a collection of these newly created meta-clusters,
i.e., vl = {cl1 , cl2 , ..., clkl }. The distribution parameters, (µlz , Σlz ),
of each of the newly formed meta-clusters clz are estimated by the
EM algorithm. The height of vl is set to D(vi , vj ). The node vl
becomes the parent of vi and vj , and the set of orphan nodes is
updated by including vl and deleting vi and vj from it. If there
are orphan nodes remaining, we return to the matching step, and
otherwise, we terminate.
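Stripped of the MEC and merging details, the control flow of the three steps above can be sketched as follows (dissimilarity and merge are caller-supplied placeholders standing in for Equation (2) and the meta-cluster construction):

```python
import itertools

def build_template_tree(samples, dissimilarity, merge):
    """Skeleton of the HM&M algorithm: start with one orphan node per
    sample and repeatedly merge the closest pair until one root remains.
    A production version would cache pairwise dissimilarities instead of
    recomputing them in every round."""
    orphans = list(samples)
    while len(orphans) > 1:
        # Matching: find the pair of orphan nodes with minimum D(vi, vj).
        i, j = min(itertools.combinations(range(len(orphans)), 2),
                   key=lambda p: dissimilarity(orphans[p[0]], orphans[p[1]]))
        # Merging: the merged node becomes the parent of vi and vj.
        parent = merge(orphans[i], orphans[j])
        orphans = [v for k, v in enumerate(orphans) if k not in (i, j)]
        orphans.append(parent)
    return orphans[0]  # root: the class template
```

For instance, with one-dimensional stand-ins for samples and averaging as the merge rule, `build_template_tree([1.0, 2.0, 10.0], lambda u, v: abs(u - v), lambda u, v: (u + v) / 2)` first merges 1.0 and 2.0, then merges the result with 10.0.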
When the class labels of samples are not known a priori, the roots
of well-separated branches of the tree give different class templates.
However, if samples belong to the same class – as is the case for
the AML dataset studied in this paper – the root of the template-tree
gives the class-template. The HM&M algorithm requires
O(N²) dissimilarity computations and O(N) merge operations for
creating a template from a collection of N samples. Let k be
the maximum number of clusters or meta-clusters in any of the
nodes of the template-tree. Then a dissimilarity computation takes
O(k³ log k) time whereas the merge operation takes O(k) time
when distribution parameters of the meta-clusters are computed by
maximum likelihood estimation. Hence, the time complexity of
the algorithm is O(N² k³ log k), which is O(N²) for bounded k.
For larger numbers of samples N, the complexity can be reduced to
O(N log N) by avoiding the computation of all pairwise dissimilarities
between the samples, but we did not need to do this here.
2.5 Classification score of a sample in the AML dataset
Consider a sample X = {c1, c2, ..., ck} consisting of k cell
populations, with the ith cluster ci containing |ci| cells. Let T−
and T + be the templates created from AML-negative (healthy) and
AML-positive training samples, respectively. We now describe how
to compute a score f (X) in order to classify the sample X to either
the healthy class or the AML class.
The intuition behind the score is as follows. An AML
sample contains two kinds of cell populations: (1) AML-specific
myeloblasts and myeloid cells, and (2) AML-unrelated cell
populations, such as lymphocytes. The former cell populations
correspond to the immunophenotypes of AML-specific meta-clusters
in the AML template, and hence when we compute a mixed edge
cover between the AML template and an AML sample, these
clusters get matched to each other. (Such clusters in the sample do
not match to any meta-cluster in the healthy template.) Hence we
assign a positive score to a cluster in sample when it satisfies this
condition, signifying that it is indicative of AML. AML-unrelated
cell populations in a sample could match to meta-clusters in the
healthy template, and also to AML-unrelated meta-clusters in the
AML template. When either of these conditions is satisfied, a cluster
gets a negative score, signifying that it is not indicative of AML.
Since AML affects only the myeloid cell line and its progenitors,
it affects only a small number of AML-specific cell populations in
an AML sample. Furthermore, different subtypes of AML affect
different cell types in the myeloid cell line. Hence there are many
more clusters common to healthy samples than there are AML-specific clusters common to AML samples. (This is illustrated later
in Fig. 3 (c) and (d).) Thus we make the range of positive scores
relatively higher than the range of negative scores. This scoring
system is designed to reduce the possibility of a false negative
(an undetected AML-positive patient), since this is more serious
in the diagnosis of AML. Additional data such as chromosomal
translocations and images of bone marrow from microscopy could
confirm an initial diagnosis of AML from flow cytometry.
In the light of the discussion above, we need to identify AML-specific meta-clusters initially. Given the templates T+ and T−,
we create a complete bipartite graph with the meta-clusters in
each template as vertices, and with each edge weighted by the
Mahalanobis distance between its endpoints. When we compute
a minimum cost mixed edge cover in this graph, we will match
meta-clusters common to both templates, and such meta-clusters
represent non-myeloid cell populations that are not AML-specific.
On the other hand, meta-clusters in the AML template T + that are
not matched to a meta-cluster in the healthy template T − correspond
to AML-specific meta-clusters. We denote such meta-clusters in the
AML template T+ by the set M+.
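Given the cross-template mixed edge cover, extracting M+ reduces to a simple filter. The sketch below assumes a precomputed mapping mec_match from each AML meta-cluster to the (possibly empty) set of healthy meta-clusters it was matched with; the names are our own:

```python
def aml_specific_metaclusters(aml_metaclusters, mec_match):
    """M+: meta-clusters of the AML template T+ that are left unmatched
    by the mixed edge cover against the healthy template T-."""
    return {m for m in aml_metaclusters if not mec_match.get(m)}
```

A meta-cluster absent from mec_match, or mapped to an empty set, is unmatched and therefore AML-specific.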
Now we can proceed to compare a sample against the template for
healthy samples and the template for AML. We compute a minimum
cost mixed edge cover between a sample X and the healthy template
T − , and let mec− (ci ) denote the set of meta-clusters in T − mapped
to a cluster ci in the sample X. Similarly, compute a minimum
cost mixed edge cover between X and the AML template T + , and
let mec+ (ci ) denote the set of meta-clusters in T + mapped to a
cluster ci . These sets could be empty if ci is unmatched in the
mixed edge cover. We compute the average Mahalanobis distance
between ci and the meta-clusters matched to it in the template
T − , and define this as the dissimilarity d(ci , mec− (ci )). From the
formulation of the mixed edge cover in (Azad et al., 2010), we have
d(ci , mec− (ci )) ≤ 2λ. Hence we define the similarity between ci
and mec− (ci ) as s(ci , mec− (ci )) = 2λ − d(ci , mec− (ci )). By
analogous reasoning, the similarity between ci and mec+ (ci ) is
defined as s(ci , mec+ (ci )) = 2λ − d(ci , mec+ (ci )).
The score of a sample is the sum of the scores of its clusters. We
define the score of a cluster ci , f (ci ), as the sum of two functions
f + (ci ) and f − (ci ) multiplied with suitable weights. A positive
score indicates that the sample belongs to AML, and a negative score
indicates that it is healthy.
The function f + (ci ) contributes a positive score to the sum if
ci is matched to an AML-specific meta-cluster in the mixed edge
cover between the sample X and the AML template T+, and a non-positive score otherwise. For the latter case, there are two subcases:
If ci is unmatched in the mixed edge cover, it corresponds to none of
the meta-clusters in the template T + , and we assign it a zero score.
If ci is matched only to non-AML specific meta-clusters in the AML
template T + , then we assign it a small negative score to indicate that
it likely belongs to the healthy class (recall that k is the number of
clusters in sample X). Hence
    f+(ci) =  s(ci, mec+(ci)),           if mec+(ci) ∩ M+ ≠ ∅,
              −(1/k) s(ci, mec+(ci)),    if mec+(ci) ∩ M+ = ∅ and mec+(ci) ≠ ∅,
              0,                         if mec+(ci) = ∅.
The function f − (ci ) contributes a negative score to a cluster ci in
the sample X if it is matched with some meta-cluster in the healthy
template T − , indicating that it likely belongs to the healthy class.
If it is not matched to any meta-cluster in T − , then we assign it
a positive score λ. This latter subcase accounts for AML-specific
clusters in the sample, or a cluster that is in neither template. In this
last case, we acknowledge the diversity of cell populations in AML
samples. Hence we have
    f−(ci) =  −(1/k) s(ci, mec−(ci)),    if mec−(ci) ≠ ∅,
              λ,                         if mec−(ci) = ∅.
Finally, we define

    f(X) = Σ_{ci ∈ X} (|ci| / |X|) · (1/2) (f+(ci) + f−(ci)).    (3)
Here |X| is the number of cells in the sample X. The score of a
cluster ci is weighted by the fractional abundance of cells in it.
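The scoring rule above can be summarized in code (a sketch, assuming the similarities and match sets from the two mixed edge covers have already been computed; the dictionary keys are our own naming):

```python
def sample_score(clusters, lam):
    """f(X) of Equation (3).  Each cluster ci is a dict with keys:
      size         -- number of cells |ci|
      s_pos, s_neg -- similarities s(ci, mec+(ci)) and s(ci, mec-(ci))
      aml_specific -- True if mec+(ci) intersects M+
      matched_pos  -- True if mec+(ci) is nonempty
      matched_neg  -- True if mec-(ci) is nonempty
    """
    k = len(clusters)
    total_cells = sum(c["size"] for c in clusters)
    score = 0.0
    for c in clusters:
        # f+: reward matches to AML-specific meta-clusters.
        if c["aml_specific"]:
            f_pos = c["s_pos"]
        elif c["matched_pos"]:
            f_pos = -c["s_pos"] / k
        else:
            f_pos = 0.0
        # f-: penalize matches to the healthy template, reward no match.
        f_neg = -c["s_neg"] / k if c["matched_neg"] else lam
        score += (c["size"] / total_cells) * 0.5 * (f_pos + f_neg)
    return score
```

A sample dominated by clusters that match AML-specific meta-clusters but not the healthy template receives a positive score, as intended.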
3 RESULTS

3.1 Cell populations in healthy and AML samples
In each tube, we identify cell populations in the samples using
the clustering algorithm described in Section 2.2. Each sample
contains five major cell types that can be seen when cell clusters
are projected on the side scatter (SS) and CD45 channels, as
depicted in Fig. 2. (Blast cells are immature progenitors of myeloid
cells or lymphocytes.) The side scatter measures the granularity
of cells, whereas CD45 is variably expressed by different white
blood cells (leukocytes). AML is initially diagnosed by rapid
growth of immature myeloid blast cells with medium SS and
CD45 expressions (Lacombe et al., 1997) marked in red in Fig. 2.
According to the WHO guidelines, AML is initially confirmed when
the sample contains more than 20% blasts. This is the case for
all except one of the AML samples in the DREAM6/FlowCAP2
training set; the exception will be discussed later.
3.2 Healthy and AML templates
From each tube of the AML dataset, using the training samples, we
build two templates: one for healthy samples, and one for AML. As
described in Section 2.4, the HM&M algorithm organizes samples
of the same class into a binary template tree whose root represents
the class template. The template trees created from the healthy and
AML training samples in Tube 6 are shown in Subfigures 3(a) and
3(b) respectively. The height of an internal node in the template
tree measures the dissimilarity between its left and right children,
Fig. 2. Cell types identified on the side scatter (SS) and CD45 channels for a healthy and an AML positive sample. Cell populations are discovered in the
seven-dimensional samples with the clustering algorithm and then projected on these channels for visualization. A pair of clusters denoting the same cell type
is marked with the same color. The proportion of myeloid blast cells (shown in red) increases significantly in the AML sample.
Fig. 3. The healthy and AML templates created from Tube 6. (a) The template-tree created from 156 healthy samples in the training set. (b) The template-tree
created from 23 AML samples in the training set. Samples in the red subtree exhibit the characteristics of Acute Promyelocytic Leukemia (APL) as shown
in Subfigure (f). (c) Fraction of 156 healthy samples present in each of the 22 meta-clusters in the healthy template. Nine meta-clusters, each of them shared
by at least 60% of the healthy samples, form the core of the healthy template. (d) Fraction of 23 AML samples present in each of the 40 meta-clusters in the
AML template. The AML samples, unlike the healthy ones, are heterogeneously distributed over the meta-clusters. (e) The expression levels of markers in the
meta-cluster shown with blue bar in Subfigure (d). (Each horizontal bar in Subfigures (e) and (f) represents the average expression of a marker and the error
bar shows its standard deviation.) This meta-cluster represents lymphocytes denoted by medium SS and high CD45 expression and therefore does not express
the AML-related markers measured in Tube 6. (f) Expression of markers in a meta-cluster shown with red bar in Subfigure (d). This meta-cluster denotes
myeloblast cells as defined by the SS and CD45 levels. This meta-cluster expresses HLA-DR− CD117+ CD34− CD38+ , a characteristic immunophenotype
of APL. Five AML samples sharing this meta-cluster are similar to each other as shown in the red subtree in Subfigure (b).
whereas the horizontal placement of a sample is arbitrary. In these
trees, we observe twice as much heterogeneity among the AML samples
as among the healthy samples (in the dissimilarity measure), even
though the healthy samples are five times as numerous
as the AML samples. The larger heterogeneity among AML samples
is observed in other tubes as well. The template-tree for AML
partitions these samples into different subtrees that possibly denote
different subtypes of AML. For example, the subtree in Fig. 3(b) that
is colored red includes samples (with subject ids 37, 58, 67, 89, and
117) with immunophenotypes of Acute Promyelocytic Leukemia
(APL) (discussed later in this section).
Together, the meta-clusters in a healthy template represent a
healthy immune profile in the feature space of a tube from which
the template is created. We obtained 22 meta-clusters in the healthy
template created from Tube 6. The percentage of samples from the
training set participating in each of these meta-clusters is shown
in Fig. 3(c). Observe that 60% or more of the healthy samples
participate in the nine most common meta-clusters (these constitute
the core of the healthy template). The remaining thirteen meta-clusters
include populations from a small fraction of samples. These
populations could correspond to biological variability in the healthy
samples, variations in the FC experimental protocols, and possibly
also to the splitting of populations as an artifact of the
clustering algorithm.
The AML template created from Tube 6 consists of forty meta-clusters
(almost twice as many as in the template for the more numerous healthy
samples). Fig. 3(d) shows that, unlike the healthy samples, the
AML samples are heterogeneous with respect to the meta-clusters
they participate in: There are 21 meta-clusters that include cell
populations from at least 20% of the AML samples. Some of the
meta-clusters common to a large number of AML samples represent
non-AML specific cell populations. For example, Fig. 3(e) shows
the average marker expressions of the meta-cluster shown in the blue
bar in Fig. 3(d). This meta-cluster has low to medium side scatter
and high CD45 expression, and therefore represents lymphocytes
(Fig. 2). Since lymphocytes are not affected by AML, this meta-cluster does not express any AML-related markers, and hence can
be described as HLA-DR− CD117− CD34− CD38− , as expected.
Fig. 3(f) shows the expression profile of another meta-cluster shown
in the red bar in Fig. 3(d). This meta-cluster consists of five cell
populations from five AML samples (with subject ids 37, 58, 67,
89, and 117) and exhibits medium side scatter and CD45 expression
and therefore, represents myeloid blast cells. Furthermore, this
meta-cluster is HLA-DR− CD117+ CD34− CD38+ , and represents
a profile known to be that of Acute Promyelocytic Leukemia
(APL) (Paietta, 2003). APL is subtype M3 in the FAB classification
of AML (Bennett et al., 1985) and is characterized by chromosomal
translocation of retinoic acid receptor-alpha (RARα) gene on
chromosome 17 with the promyelocytic leukemia gene (PML) on
chromosome 15, a translocation denoted as t(15;17). In the feature
space of Tube 6, these APL samples are similar to each other while
significantly different from the other AML samples. Our template-based classification algorithm groups these samples together in the
subtree colored red in the AML template tree shown in Fig. 3(b).
3.3 Identifying meta-clusters symptomatic of AML
In each tube, we register meta-clusters across the AML and
healthy templates using the mixed edge cover (MEC) algorithm.
Meta-clusters in the AML template that are not matched to any
Table 2. Some of the meta-clusters characteristic of AML for the 23 AML
samples in the training set. In the second column, ‘−’, ‘low’, and ‘+’ denote
very low, low, and high abundance of a marker, respectively, and ± denotes a
marker that is positively expressed by some samples and negatively expressed
by others. The number of samples participating in a meta-cluster is shown in
the third column. The average fraction of cells in a sample participating in a
meta-cluster, and the standard deviation, are shown in the fourth column.
Tube  Marker expression                    #Samples  Fraction of cells
2     Kappa^low Lambda^low CD19+ CD20−     5         6.3% (±6.8)
3     CD7+ CD4− CD8− CD2−                  4         18.0% (±4.8)
4     CD15− CD13+ CD16− CD56−              17        16.6% (±6.9)
4     CD15− CD13+ CD16− CD56+              8         11.1% (±5.7)
5     CD14− CD11c− CD64− CD33+             10        13.5% (±5.2)
5     CD14− CD11c+ CD64− CD33+             18        10.8% (±3.8)
5     CD14^low CD11c+ CD64^low CD33+       6         13.8% (±4.3)
6     HLA-DR+ CD117+ CD34+ CD38+           11        13.3% (±2.6)
6     HLA-DR+ CD117± CD34+ CD38+           13        17.3% (±6.6)
6     HLA-DR− CD117± CD34− CD38+           5         12.9% (±4.7)
7     CD5− CD19+ CD3− CD10−                3         12.3% (±2.4)
7     CD5+ CD19− CD3− CD10−                3         10.0% (±8.5)
7     CD5− CD19− CD3− CD10+                1         9.9%
meta-clusters in the healthy template represent the abnormal,
AML-specific immunophenotypes while the matched meta-clusters
represent healthy or non-AML-relevant cell populations. Table 2
lists several unmatched meta-clusters indicative of AML from
different tubes. As expected, every unmatched meta-cluster displays
medium side scatter and CD45 expression characteristic of myeloid
blast cells, and therefore we omit FS, SS, and CD45 values in
Table 2. We briefly discuss the immunophenotypes represented by
each AML-specific meta-cluster in each tube, omitting the isotype
control Tube 1 and unstained Tube 8.
Tube 6 is the most important panel for diagnosing AML since
it includes several markers expressed by AML blasts. HLA-DR is
an MHC class II cell surface receptor complex that is expressed on
antigen-presenting cells, e.g., B cells, dendritic cells, macrophages,
and activated T cells. It is expressed by myeloblasts in most
subtypes of AML except M3 and M7 (Campana and Behm, 2000).
CD117 is a tyrosine kinase receptor (c-KIT) expressed in blasts of
some cases (30 − 100%) of AML (Campana and Behm, 2000).
CD34 is a cell adhesion molecule expressed on different stem
cells and on the blast cells of many cases of AML (40%) (Mason
et al., 2006). CD38 is a glycoprotein found on the surface of
blasts of several subtypes of AML but usually not expressed in
the M3 subtypes of AML (Keyhani et al., 2000). In Tube 6, we
have identified two meta-clusters with high expressions of HLA-DR and CD34. One of them also expresses CD117 and CD34, and
Fig. 4(c) shows the bivariate contour plots of the cell populations
contained in this meta-cluster. The second meta-cluster expresses
positive but low levels of CD117 and CD34. These two HLA-DR+ CD34+ meta-clusters together are present in 18 out of the
23 training AML samples. The remaining five samples (subject id:
5, 7, 103, 165, 174) express HLA-DR− CD117± CD34− CD38+
myeloblasts, which is an immunophenotype of APL (Paietta, 2003)
as was discussed earlier. Fig. 4(d) shows the bivariate contour plots
of this APL-specific meta-cluster.
Tube 5 contains several antigens typically expressed by AML
blasts, of which CD33 is the most important. CD33 is a
transmembrane receptor protein usually expressed on immature
myeloid cells of the majority of cases of AML (91% reported
in (Legrand et al., 2000)). The AML specific meta-clusters
identified from markers in Tube 5 (see Table 2) include CD33+
myeloblasts from every sample in the training set. Several
of the CD33+ populations also express CD11c, a type I
transmembrane protein found on monocytes, macrophages and
neutrophils. CD11c is usually expressed by blast cells in acute
myelomonocytic leukemia (M4 subclass of AML), and acute
monocytic leukemia (M5 subclass of AML) (Campana and
Behm, 2000). Therefore the CD14− CD11c+ CD64− CD33+ meta-cluster could represent patients with M4 and M5 subclasses of
AML. We show the bivariate contour plots of this meta-cluster in
Fig. 4(b).
Tube 4 includes several markers usually expressed by AML
blasts, of which CD13 is the most important. CD13 is a zinc-metalloproteinase enzyme that binds to the cell membrane and
degrades regulatory peptides (Mason et al., 2006). CD13 is
expressed on the blast cells of the majority of cases of AML (95%
as reported in (Legrand et al., 2000)). Table 2 shows two AML-specific meta-clusters detected from the blast cells in Tube 4. In
addition to CD13, eight AML samples express CD56 glycoprotein
that is naturally expressed on NK cells, a subset of CD4+ T cells
and a subset of CD8+ T cells. Raspadori et al. (Raspadori et al.,
2001) reported that CD56 was more often expressed by myeloblasts
in FAB subclasses M2 and M5, which covers about 42% of AML
cases in a study by Legrand et al. (Legrand et al., 2000). In this
dataset, we observe more AML samples expressing CD13+ CD56−
blasts than expressing CD13+ CD56+ blasts, which conforms to the
findings of Raspadori et al. (Raspadori et al., 2001). Fig. 4(a) shows
the bivariate contour plots of the CD13+ CD56− meta-cluster.
Tube 2 is a B cell panel measuring B cell markers CD19
and CD20, and Kappa (κ) and Lambda (λ), immunoglobulin
light chains present on the surface of antibodies produced by B
lymphocytes. B-cell specific markers are occasionally co-expressed
with myeloid antigens especially in FAB M2 subtype of AML (with
chromosomal translocation t(8;21)) (Campana and Behm, 2000;
Walter et al., 2010). In Tube 2, we have identified a meta-cluster in
the myeloblasts that expresses high levels of CD19 and low levels
of Kappa and Lambda. The five samples with subject ids 5, 7, 103,
165, and 174 participating in this meta-cluster possibly belong to
the FAB-M2 subtype of AML. Tube 3 is a T cell panel measuring
T cell specific markers CD4, CD8, CD2, and CD7. Tube 7 is
a lymphocyte panel with several markers expressed on T and B
lymphocytes and is less important in detecting AML since they are
infrequently expressed by AML blasts.
3.4 Impact of each tube in the classification
As discussed in the methods section, we build six independent
classifiers based on the healthy and AML templates created from
Tubes 2-7 of the AML dataset. A sample is classified as an AML
sample if the classification score is positive, and as a healthy sample
otherwise. Let true positives (TP) be the number of AML samples
correctly classified, true negatives (TN) be the number of healthy
samples correctly classified, false positives (FP) be the number of
healthy samples incorrectly classified as AML, and false negatives
(FN) be the number of AML samples incorrectly classified as
healthy. Then, we evaluate the performance of each template-based
classifier with four well-known statistical measures: Precision,
Recall (Sensitivity), Specificity, and F-value, defined as

    Precision = TP/(TP + FP),    Recall (Sensitivity) = TP/(TP + FN),
    Specificity = TN/(FP + TN),  F-value = 2 (Precision × Recall)/(Precision + Recall).

These four measures take values in the
interval [0,1], and the higher the values the better the classifier.
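These four measures follow directly from the confusion-matrix counts, for example:

```python
def classifier_metrics(tp, fp, tn, fn):
    """Precision, recall (sensitivity), specificity and F-value from the
    confusion-matrix counts defined above."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    specificity = tn / (fp + tn)
    # F-value is the harmonic mean of precision and recall.
    f_value = 2 * precision * recall / (precision + recall)
    return precision, recall, specificity, f_value
```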
First, we evaluate the impact of each tube in the classification
of the training samples. For a training sample X, the classification
score is computed by comparing it with the healthy and AML
templates created from the training set after removing X. The
predicted status of X is then compared against true status to
evaluate the classification accuracy. Table 3 (left panel) shows
various statistical measures for the classifiers defined in Tubes 2-7 of the training set. The classifiers based on Tubes 4, 5, and
6 have the highest sensitivity because these tubes include several
markers relevant to AML diagnosis (Campana and Behm, 2000;
Paietta, 2003). The number of true negatives TN is high in every
tube since the identification of healthy samples does not depend
on the detection of AML-specific markers. Hence specificity is
close to one for all tubes. Analogously, FP is low for most tubes,
and we observe high precision for most tubes. The F-value is a
harmonic mean of precision and recall, and denotes the superior
classification ability of markers in Tubes 4-6. Averaging scores from
all tubes does not improve the sensitivity and F-value dramatically.
However, combining Tubes 4-6 gives almost perfect classification
with one misclassification for the training set. We plot the average
classification scores from Tubes 4-6 for the training samples in
Fig. 5(a). The class labels of samples are also shown (blue circles
for healthy and red triangles for AML samples).
In Fig. 5(a), we observe an AML sample (subject id 116)
with score below the classification boundary. In this subject, the
proportion of myeloid blasts is 4.4%, which is lower than the
minimum 20% AML blasts necessary to recognize a patient to be
AML-positive according to the WHO guidelines (Estey and Döhner,
2006) (the FAB threshold is even higher, at 30%). Hence this is
either a rare case of AML, or one with minimal residual disease
after therapy, or perhaps it was incorrectly labeled as AML in the
training set. Subject 116 was classified with the healthy samples by
methods in other published work (Biehl et al., 2013).
3.5 Classifying test samples
Now we turn to the test samples. For each tube, we compute the
classification score for each sample in the test set using templates
created from the training set and applying Equation (3). Since the average
classification score from Tubes 4-6 performs best for the training set,
we use it as a classifier for the test set as well. Since the status of
test samples was released after the DREAM6/FlowCAP2 challenge,
we can determine the classification accuracy of the test samples.
Fig. 5(b) shows the classification scores of the test samples, where
samples are placed in ascending order of classification scores. In
Fig. 5(b), we observe perfect classification in the test set. Similar to
the training set, we tabulate statistical measures for the classifiers in
Table 3.
When classifying a sample X, we assume the null hypothesis: X
is healthy (non-leukemic). The sample X receives a positive score
if it contains AML-specific immunophenotypes, and the higher
the score, the stronger the evidence against the null hypothesis.
Since Tube 1 (isotype control) does not include any AML-specific markers, it can provide a background distribution for the
classification scores. In Tube 1, 174 out of 179 training samples
Fig. 4. Bivariate contour plots (side scatter vs. individual marker) for two meta-clusters (one in each row) indicative of AML. The ellipses in a subplot denote
the 95th quantile contour lines of cell populations included in the corresponding meta-cluster. Myeloblast cells have medium side scatter (SS) and CD45
expressions. The red lines indicate approximate myeloblast boundaries (located on the left-most subfigures in each row and extended horizontally to the
subfigures on the right) and confirm that these meta-clusters represent immunophenotypes of myeloblast cells. Blue vertical lines denote the +/- boundaries
of a marker. Gray subplots show contour plots of dominant markers defining the meta-cluster in the same row. (a) HLA-DR+ CD117+ CD34+ CD38+ meta-cluster shared by 11 AML samples in Tube 6. (b) HLA-DR− CD117± CD34− CD38+ meta-cluster shared by 5 AML samples in Tube 6. This meta-cluster
is indicative of acute promyelocytic leukemia (APL). These bivariate plots are shown for illustration only, since the populations of specific cell types are
identified from seven-dimensional data.
Table 3. Four statistical measures evaluating the performance of the template-based classification in the training
set and test set of the AML data. The statistical measures are computed for each tube separately and two
combinations of tubes.
Tubes
Test set
Specificity
F-value
Precision
Recall
Specificity
F-value
0.74
0.91
0.70
0.74
0.96
0.99
0.96
1.00
1.00
1.00
0.83
0.82
0.82
0.85
0.98
1.00
0.65
1.00
1.00
1.00
0.75
0.85
0.80
0.85
1.00
1.00
0.94
1.00
1.00
1.00
0.86
0.74
0.89
0.92
1.00
0.5
AML
classification boundary
0
50
100
150
Training samples, ordered
(b) Test set
Actual class
Healthy
AML
0.5
1.0
Healthy
classification boundary
0.0
Actual class
Sam
1.0
Recall
0.94
0.75
1.00
1.00
1.00
Classification score
Precision
(a) Training set
0.0
Classification score
4
5
6
All (2-7)
4,5,6
Training set
0
50
100
150
Test samples, ordered
Fig. 5. Average classification score from Tubes 4,5,6 for each sample in the (a) training set and (b) test set. Samples with scores above the horizontal line are
classified as AML, and as healthy otherwise. The actual class of each sample is also shown. An AML sample (subject id 116) is always misclassified in the
training set, and this is discussed in the text.
have negative classification scores, but five samples have positive
scores, with values less than 0.2. In the best classifier designed
from Tubes 4, 5, 6, we observe that two AML-positive samples in
the training set and three AML-positive samples in the test set have
scores between 0 and 0.2. The classifier is relatively less confident
about these samples; nevertheless, the p-values of these five samples
8
(computed from the distribution in Tube 1) are still small (< 0.05),
so that they can be classified as AML-positive. The rest of the AML
samples in the training and test sets have scores greater than 0.2 and
the classifier is quite confident about their status (p-value zero).
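The decision rule used throughout this section, a template-matching score tested against an empirical null distribution from the isotype-control tube, can be sketched in a few lines. This is our own illustration, not the authors' R code: the function names are invented, and the toy background echoes the 174-of-179 negative control scores described above.

```python
def empirical_p_value(score, background_scores):
    """Fraction of background (control) scores at least as large as
    `score`: a one-sided empirical test of the null 'healthy'."""
    at_least = sum(1 for b in background_scores if b >= score)
    return at_least / len(background_scores)

def classify(score, background_scores, alpha=0.05):
    """Reject the null (call the sample AML-positive) when the
    empirical p-value falls below `alpha`."""
    if empirical_p_value(score, background_scores) < alpha:
        return "AML"
    return "healthy"

# Toy background: mostly negative control scores plus a few small
# positives, mimicking the Tube 1 distribution described in the text.
background = [-0.5] * 174 + [0.05, 0.08, 0.1, 0.15, 0.18]
```

With this background, even a modest positive score such as 0.15 is rarer than 5% of control scores and is therefore called AML-positive, matching the reasoning applied to the low-scoring AML samples above.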
Four AML samples in the test set (ids 239, 262, 285, and 326)
were subclassified as APL by comparing against distinct template
trees for APL and the other AML samples in the training set (cf.
Fig. 3 (b)).
Finally, we state the computational times required on an iMac
with four 2.7 GHz cores and 8 GB memory. Our code is in
R. Consider a single tube with 359 samples in it. The k-means
clustering of all samples took one hour, primarily because we need
to run the algorithm multiple times (about ten on the average) to find
the optimal value of the number of clusters. Creating the healthy
template from 156 samples in the training set required 10 seconds
(s) on one core, and the AML template for 23 AML samples took
0.5s on one core. Cross validation (leave one out) of the training set
took 30 minutes, and computing the classification score for the 180
test samples took 15s, both on four cores. We could have reduced
the running time by executing the code in parallel on more cores.
We have made the dominant step, the k-means clustering of all the
samples with an optimal number of clusters, faster using a GPU,
reducing the total time to a few minutes.
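The repeated k-means runs mentioned above, re-running the algorithm several times to settle on a cluster count, can be illustrated with a pure-Python toy. This is not the authors' R/GPU pipeline: the restart scheme, the explained-variance rule for picking the number of clusters, and all names are our own simplifications, shown on one-dimensional data.

```python
import random

def kmeans(xs, k, restarts=10, seed=0):
    """Lloyd's algorithm on 1-D data with several random restarts;
    returns (centers, inertia) of the best run, where inertia is the
    sum of squared distances to the nearest center."""
    rng = random.Random(seed)
    best_inertia, best_centers = float("inf"), None
    for _ in range(restarts):
        centers = rng.sample(xs, k)
        for _ in range(100):
            groups = [[] for _ in range(k)]
            for x in xs:
                nearest = min(range(k), key=lambda j: (x - centers[j]) ** 2)
                groups[nearest].append(x)
            new = [sum(g) / len(g) if g else centers[j]
                   for j, g in enumerate(groups)]
            if new == centers:      # this restart has converged
                break
            centers = new
        inertia = sum(min((x - c) ** 2 for c in centers) for x in xs)
        if inertia < best_inertia:
            best_inertia, best_centers = inertia, centers
    return best_centers, best_inertia

def choose_k(xs, k_max=6, explained=0.95):
    """Smallest k whose clustering explains at least `explained` of the
    total variance (inertia relative to the k = 1 solution)."""
    base = kmeans(xs, 1)[1]
    for k in range(1, k_max + 1):
        if kmeans(xs, k)[1] <= (1 - explained) * base:
            return k
    return k_max
```

On two well-separated groups of points, `choose_k` settles on two clusters; the multiple restarts play the same role as the repeated runs in the pipeline, guarding against poor random initializations.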
4 CONCLUSIONS

We have demonstrated that an algorithmic pipeline for template-based classification can successfully identify immunophenotypes
of clinical interest in AML. These could be used to differentiate
the subtypes of AML, which is advantageous since prognosis and
treatment depend on the subtype. The templates enable us to
classify AML samples in spite of their heterogeneity. This was
accomplished by creating a scoring function that accounts for the
subtleties in cell populations within AML samples. We are currently
applying this approach to a larger AML data set, and intend to
analyze other heterogeneous data sets.
ACKNOWLEDGMENTS
This research was supported by NIH grant IR21EB015707-01, NSF
grant CCF-1218916, and DOE grant 13SC-003242.
Computational evolution of decision-making strategies
Peter Kvam ([email protected])
Center for Adaptive Rationality, Max Planck Institute for Human Development
Lentzeallee 94, 14195 Berlin, Germany
Joseph Cesario ([email protected])
Department of Psychology, Michigan State University
316 Physics Rd, East Lansing, MI 48824, USA
arXiv:1509.05646v1 [] 18 Sep 2015
Jory Schossau ([email protected])
Department of Computer Science and Engineering, Michigan State University
428 South Shaw Rd, East Lansing, MI 48824, USA
Heather Eisthen ([email protected]), Arend Hintze ([email protected])
Department of Integrative Biology, BEACON Center for the Study of Evolution in Action, Michigan State University
288 Farm Ln, East Lansing, MI 48824, USA
Abstract
Most research on adaptive decision-making takes a strategy-first approach, proposing a method of solving a problem and
then examining whether it can be implemented in the brain
and in what environments it succeeds. We present a method for
studying strategy development based on computational evolution that takes the opposite approach, allowing strategies to
develop in response to the decision-making environment via
Darwinian evolution. We apply this approach to a dynamic
decision-making problem where artificial agents make decisions about the source of incoming information. In doing so,
we show that the complexity of the brains and strategies of
evolved agents are a function of the environment in which they
develop. More difficult environments lead to larger brains and
more information use, resulting in strategies resembling a sequential sampling approach. Less difficult environments drive
evolution toward smaller brains and less information use, resulting in simpler heuristic-like strategies.
Keywords: computational evolution, decision-making, sequential sampling, heuristics
Introduction
Theories of decision-making often posit that humans
and other animals follow decision-making procedures that
achieve maximum accuracy given a particular set of constraints. Some theories claim that decision-making is optimal
relative to the information given, involving a process of maximizing expected utility or performing Bayesian inference
(Bogacz, Brown, Moehlis, Holmes, & Cohen, 2006; Griffiths
& Tenenbaum, 2006; Von Neumann & Morgenstern, 1944).
Others assume that behavior makes trade-offs based on the
environment, tailoring information processing to achieve sufficient performance by restricting priors (Briscoe & Feldman,
2011), ignoring information (Gigerenzer & Todd, 1999), or
sampling just enough to satisfy a particular criterion (Link
& Heath, 1975; Ratcliff, 1978). In most cases, mechanisms
underlying the initial development of these strategies are assumed – either explicitly or implicitly – to be the result of
natural and artificial selection pressures.
In cognitive science research, however, the evolution of a
strategy often takes a back seat to its performance and coherence. The clarity and intuitiveness of a theory undoubtedly play an immense role, as does its ability to explain and
predict behavior, but whether or not a strategy is a plausible result of selection pressures is rarely considered. To be
fair, this is largely because the process of evolution is slow,
messy, and often impossible to observe in organisms in the
lab. Fortunately, recent innovations in computing have enabled us to model this process with artificial agents. In this
paper, we propose a method of studying the evolution of dynamic binary decision-making using artificial Markov brains
(Edlund et al., 2011; Marstaller, Hintze, & Adami, 2013; Olson, Hintze, Dyer, Knoester, & Adami, 2013) and investigate
the evolutionary trajectories and ultimate behavior of these
brains resulting from different environmental conditions.
In order to demonstrate the method and investigate an interesting problem, we focus on the simple choice situation
where a decision-maker has to choose whether the source of
a stimulus is ’signal’ S or ’noise’ N (for preferential decisions, nonspecific choices A or B can be substituted). A similar decision structure underlies a vast array of choices that
people and other animals make, including edible/inedible,
healthy/sick, safe/dangerous, and so on. The task requires a
decision-maker to take in and process information over time
and make a decision about which source yielded that information. However, the decision maker is free to vary the amount
of information it uses and processing it applies, and different
theories make diverging predictions about how each of these
should vary. On one hand, it may be more advantageous to
use every piece of information received, feeding it through a
complex processing system in order to obtain maximum accuracy. On the other, a simpler processing architecture that
ignores information may be sufficient in terms of accuracy
and more robust to random mutations, errors, or over-fitting.
More complex models
Many of the most prominent complex decision-making models fall under the sequential sampling framework (Bogacz et
al., 2006; Link & Heath, 1975; Ratcliff, 1978). These models
assume that a decision-making agent takes or receives samples one by one from a distribution of evidence, with each
sample pointing toward the signal or noise distribution. They
posit that the agent combines samples to process information, for
example by adding up the number favoring S and subtracting
the number favoring N. When the magnitude of this difference exceeds a criterion value θ (e.g. larger than 4 / smaller
than -4), a decision is triggered in favor of the corresponding
choice option (+θ ⇒ S, −θ ⇒ N). This strategy implements
a particular form of Bayesian inference, allowing a decision-maker to achieve a desired accuracy by guaranteeing that the
log odds of one hypothesis (S or N) over the other is at least
equal to the criterion value.
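The accumulate-to-threshold rule described above can be sketched in a few lines. This is an illustrative toy, not the cited models themselves; the default threshold mirrors the θ = 4 example in the text.

```python
def sequential_sampling(samples, theta=4):
    """Random-walk decision rule: add +1 for a sample favoring S,
    -1 for a sample favoring N (0 for an uninformative input), and
    stop as soon as the running total hits +theta (choose S) or
    -theta (choose N). Returns (choice, samples_used); choice is
    None if no boundary is reached before the samples run out."""
    total = 0
    for i, s in enumerate(samples, start=1):
        total += s
        if total >= theta:
            return "S", i
        if total <= -theta:
            return "N", i
    return None, len(samples)
```

Because the walk stops at whichever boundary is reached first, the strength of evidence at the moment of decision is fixed by θ regardless of how long the walk takes, which is the sense in which the rule guarantees a target log odds.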
In these models, each piece of information collected is used
to make a decision. Although organisms may not literally add
and subtract pieces of information, we should expect to observe two characteristics in organisms that implement these
or similar strategies. First, they should be relatively complex, storing the cumulative history of information to make
their decisions. Second, they should give each piece of information they receive relatively equal weight, spreading out the
weights assigned to information across a long series of inputs.
Less complex models

Toward the other end of the spectrum of model complexity are heuristics, which deliberately ignore information in order to obtain better performance in particular environments (Brandstätter, Gigerenzer, & Hertwig, 2006; Gigerenzer & Brighton, 2009; Gigerenzer & Todd, 1999). Many of these strategies are non-compensatory, meaning that they terminate the use of information as soon as one piece of evidence clearly favors either S or N. Accordingly, a decision maker can have a relatively simple information processing architecture, as it can just copy incoming information to its output indicators to give an answer. Some of these strategies require ordinal information about different sources of information and their validity, resulting in increased complexity (Dougherty, Franco-Watkins, & Thomas, 2008), but for the current problem we assume that all information comes from a single source.

As a result of the relatively simple architecture and one-piece decision rules, we can expect to observe two characteristics in organisms that implement strategies similar to these heuristics. First, they should have relatively simple information processing architectures, favoring short and robust pathways that do little integration. Second, they should appear to give the most weight to the last piece(s) of information they receive before making their decision, yielding a relationship between the final decision and the sequence of inputs that is heavily skewed to the most recently received inputs.

Of course, the real behavior of artificially evolved organisms will probably lie somewhere along the spectrum between these two poles. However, we can compare the relative leanings of different populations of organisms by varying the characteristics of the environments in which they evolve. We next describe the decision-making task and manipulations in more detail.

Methods

We were interested in examining the strategies and evolutionary trajectories that digital agents took to solve a simple dynamic decision-making problem. To do so, we developed a binary decision-making task for the agents to solve. The fitness of an agent was defined as the number of correct decisions it made over 100 trials of the task, and the probability that it would reproduce was determined by this fitness value. Note that fitness was determined by the number of correct answers, reflecting agents’ propensity to respond together with their accuracy when they did respond; there was no fitness penalty or cost for agent complexity. Formally, the probability that an agent generated each child of the next generation was given by its fitness divided by the total fitness across the population (roulette wheel selection). An agent reproduced to the next generation by creating a copy of itself with random mutations. Over the course of 10,000 generations, this selection and mutation process led to the evolution of agents that could successfully perform the task, and enabled us to analyze the strategies that the evolved agents ultimately developed.

Decision task

The task that the agents had to solve was a binary decision problem, where they received information from one source S or another N. The information coming from either source included two binary numbers, and therefore could yield any of the inputs [00], [01], [10], or [11]. Source S would yield primarily 0s on the left and 1s on the right, and source N would yield primarily 1s on the left and 0s on the right. The exact proportion of these inputs was varied in order to alter the difficulty of the task. For example, an easy S stimulus would give 90% 0s (10% 1s) on the left, and 90% 1s (10% 0s) on the right. The two inputs were independent, so this would ultimately give 81% [01] inputs, 9% [11], 9% [00], and 1% [10]. In a more difficult environment, an S stimulus might have 60% 0s on the left and 60% 1s on the right, yielding 36% [01], 24% [11], 24% [00], and 16% [10]. For an N stimulus, the possible inputs would be the same, but the frequency of [01] and [10] inputs would be flipped (i.e. more 1s on the left and 0s on the right). These frequencies were not shown to the agents at the start of each trial. Instead, each trial started with a random frequency of 50%, increasing each consecutive step by 1% until the target frequency was reached. This was done in part to emulate how agents encounter stimuli in real situations (i.e. stimuli progressively come into sensory range, increasing in strength over time rather than simply appearing), but also to avoid ’sticking’ at a local maximum where agents simply copy their first input to outputs.

The target frequency of 1s and 0s was manipulated to be 60-90% (in 5% increments), resulting in 7 difficulty levels for different populations of agents.

For each decision, the agents received up to 100 inputs. Each new input constituted one time step during which the agent could process that information. If an agent gave an answer by signaling [01] to indicate S or [10] to indicate N (see below), then the decision process would come to a halt, where no new inputs would be given and the agent would be graded on its final answer. An agent received 1 point toward its fitness if it gave the correct answer or 0 points if it was incorrect or if it failed to answer before 100 inputs were given.

In addition to the difficulty manipulation, we included a “non-decision time” manipulation, where an agent was not permitted to answer until t time steps had elapsed (i.e. the agent had received t inputs). This number t was varied from 10 to 50 in 5-step increments, yielding 9 levels of non-decision time across different environments. Increasing t tended to make agents evolve faster, as longer non-decision time tended to allow agents to more easily implement strategies regardless of difficulty level.

Markov brain agents

The Markov brain agents (Edlund et al., 2011; Marstaller et al., 2013; Olson et al., 2013) consisted of 16 binary nodes and directed logic gates that moved and/or combined information from one set of nodes to another (see Figure 1). Two of these nodes (1 and 2) were reserved for inputs from the environment, described above. Another two (15 and 16) were used as output nodes. These output nodes could show any combination of two binary values. When they did not read [01] (indicating S) or [10] (indicating N), the agents were permitted to continue updating their nodes with inputs until time step 100. To update their nodes at each time step, the agents used logic gates (represented as squares in Figure 1), which took x node values and mapped them onto y nodes using an x × y table.

The input nodes, table, and output nodes for these gates were all specified by an underlying genetic code that each Markov brain possessed. Point, insertion, or deletion mutations in the genetic code would cause them to add or subtract inputs to a gate, add or subtract outputs, or change the mappings in the gate tables (e.g. it could change between any of the gates shown in Figure 1). This code consisted of 2,000-200,000 ’nucleotides’ and included mutation rates of 0.005% point mutations, 0.2% duplication mutations, and 0.1% deletion mutations, consistent with previous work (Edlund et al., 2011; Marstaller et al., 2013; Olson et al., 2013). More precisely, logic gates are specified by ’genes’ within this genetic code. Each gene consists of a sequence of nucleotides, numbered 1-4 to reflect the four base nucleotides present in DNA, and starts with the number sequence ’42’ followed by ’213’ (start codon), beginning at an arbitrary location within the genome. Genes are typically about 300 nucleotides long and can have ’junk’ sequences of non-coding nucleotides between them, resulting in the large size of the genomes.

The first generation of Markov brain agents in each population was generated from a random seed genome. The first 100 agents were created as random variants of this seed brain using the mutation rates described above, resulting in approximately 20-30 random connections per agent. These 100 agents each made 100 decisions, and were selected to reproduce based on their accuracy. This process was repeated for each population for 10,000 generations, yielding 100 agents per population that could perform the decision task.

Figure 1: Diagram of the structure of a sample Markov brain with input, processing, and output nodes (circles) with connecting logic gates (rectangles). Each gate contains a corresponding table mapping its input values (left) to output values (right). Note that our actual agents had twice the number of nodes shown here available to them.

Data
For each of the 63 conditions (7 difficulty levels × 9 non-decision times), we ran 10,000 generations of evolution for 100 different sub-populations of Markov brains, giving 6300 total populations. From each of these populations, a random organism was chosen and its line of ancestors was tracked back to the first generation. This set of agents from the last to the first generation is called the line of descent (LOD). For each of the 100 replicates per experimental condition, all parameters (such as fitness) of agents on the LOD were averaged for each generation.
In each of these LODs, we tracked the average number of
connections between nodes (see Figure 1) that agents had in
each condition and each generation. We refer to this property
of the agents as “brain size” — the analogous properties in an
organism are the number and connectivity of neurons — and
we show its evolutionary trajectory in Figure 2.
Finally, we took a close look at the behavior of generation
9970 – this is near the end to ensure that the generation we examined could solve the task, but slightly and somewhat arbitrarily removed from generation 10,000 to ensure that agents
in this generation weren’t approaching one of the random dips
in performance (i.e. random mutations from this generation
were less likely to be deleterious than more recent ones). For
these agents, we examined each trial to see what information
they received at each time step, which step they made their
decision, and which decision they made (coded as correct or
incorrect). This allowed us to examine the relationship between the inputs they received and the final answer they gave,
giving an estimate of the weight they assigned to each new
piece of information.
Materials
The agents, tasks, and evolution were implemented in C++
using Visual Studio Desktop and Xcode, and the full evolution simulations were run at Michigan State University’s High
Performance Computing Center.
Results
With the exception of high difficulty, low non-decision time
conditions, most populations and conditions of agents were
able to achieve essentially perfect accuracy on the decision
task after 10,000 generations. However, the strategies implemented by each population varied heavily by condition.
It is perhaps worth noting at this point the tremendous
amount of data that our approach yields. Each condition consisted of 100 populations of 100 agents that made 100 decisions each generation, yielding 10,000 agents and 1 million
decisions per generation per condition. This tremendous sample size renders statistical comparisons based on standard error, for example, essentially moot. For this reason, we present
mostly examples that illustrate important findings rather than
exhaustive statistical comparisons.
Brain size
Final brain size (number of connections among nodes) varied as a function of both stimulus difficulty and non-decision
time. We focus primarily on high non-decision time conditions, as many of the low non-decision time populations —
particularly in the difficult stimuli conditions — were unable
to achieve the high performance of other groups. As Figure
2 shows, agents faced with the easiest conditions (10-15%)
tended to have the smallest final brain size, with means of
around 15-20 connections. Agents faced with medium difficulty environments evolved approximately 25-30 connections, and agent brain size in the most difficult conditions approached 35 connections and appeared to still be climbing
with further generations.
Perhaps more interesting, though, is the evolutionary trajectory that each of the populations in these conditions took.
As shown, each group started with 25-30 connections in the
initial generation, and in all of them the number of connections initially dropped for the first 200-400 generations. After that, however, the conditions appear to diverge, with the
agents in the easy conditions losing even more connections,
agents in the medium conditions staying approximately level,
and agents in the difficult conditions adding more and more
connections.
Strategy use
In order to examine the pattern of information use in the
agents, we additionally examined the relationship between
each piece of information received and the final answer
given. We did so by taking the series of inputs (e.g.
[00],[11],[01],[01],[11]) and assigning each one a value - information favoring S ([01] inputs) was assigned a value of +1,
information favoring N ([10] inputs) was assigned a value of
Figure 2: Mean number of connections in agent brains across
generations for three levels of task difficulty. For the sake of
comparison, the trajectories shown are all from populations
with a non-decision time of 40 steps.
−1, and others ([00] and [11]) were assigned a value of 0. Answers favoring S were also given a value of +1 and answers
favoring N a value of −1. Doing so allowed us to track the
sequence of −1, 0, +1 — which we refer to as the trajectory
— leading to the decision and to correlate this with the final
+1 or −1 answer. The result of this analysis for the example
conditions is shown in Figure 3.
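The coding and correlation analysis just described can be sketched as follows. This is our own illustration, not the authors' analysis code: the trial data below are made up, and the statistic is a plain Pearson r computed across trials at each lag counted back from the decision.

```python
def code_input(bits):
    """Map a two-bit input to an evidence value: [0,1] favors S (+1),
    [1,0] favors N (-1), and [0,0]/[1,1] are uninformative (0)."""
    return {(0, 1): +1, (1, 0): -1}.get(tuple(bits), 0)

def lag_correlations(trials, max_lag):
    """Pearson correlation, across trials, between the evidence value
    received `lag` steps before the decision and the final answer
    (+1 for S, -1 for N). `trials` is a list of (inputs, answer)."""
    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs) ** 0.5
        vy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (vx * vy) if vx and vy else 0.0
    out = []
    for lag in range(max_lag):
        xs = [code_input(inp[-1 - lag]) for inp, _ in trials if len(inp) > lag]
        ys = [ans for inp, ans in trials if len(inp) > lag]
        out.append(pearson(xs, ys))
    return out

# Hypothetical trials: (sequence of two-bit inputs, final answer).
trials = [
    ([(0, 1), (1, 1), (0, 1)], +1),   # S-leaning inputs, answered S
    ([(1, 0), (0, 0), (1, 0)], -1),   # N-leaning inputs, answered N
    ([(0, 1), (1, 0), (1, 0)], -1),
    ([(1, 0), (0, 1), (0, 1)], +1),
]
```

A flat profile over many lags corresponds to the even weighting expected of sequential samplers, while a profile concentrated at the smallest lags corresponds to the recency-weighted pattern expected of heuristic responders.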
As shown, the trajectory correlations in the more difficult
conditions tend to be flatter than those in the easy conditions,
and final answers tend to correlate with a longer history of
inputs. This indicates that these agents were assigning more
similar weight to each piece of information they use, utilizing
the full history of inputs they had received rather than just the
final piece. Note that all agents appeared to use the most recent pieces of information more heavily. This will be the case
for almost any model that generates the data, as the last pieces
of information tend to be those that trigger the decision rule
– for example, in sequential sampling this will be the piece
of information that moves the evidence across the threshold
– and as such will always be highly correlated with the final
answer. 1
1 However, since it can sometimes take several updates / time steps to move a ’trigger’ input through the brain to the output nodes, the final piece of information will not always be perfectly correlated with the output.

Figure 3: Example correlations between inputs and final decision for easy (blue), medium (purple) and difficult (red) conditions. The trajectories are time-locked on the final answer on the right side, so the last piece of information an agent received is the rightmost value, and to the left is moving backward through the trajectory.

Information use also varied somewhat across levels of non-decision time, but its effect was not particularly pronounced except in the more difficult conditions (e.g. 60-70%). However, this effect is largely a consequence of agent populations’ failure to evolve to perform the task as well when stimulus discriminability and non-decision time were low. For example, agents in the difficult, short non-decision time condition (red in the left panel of Figure 3) attained accuracy of only 82%, compared to 95+% in other conditions. Higher difficulty still led to larger brains and a longer history of processing in these conditions, but its effect was less pronounced. Therefore, high values of non-decision time apparently made it easier to evolve complex strategies, likely because agents were exposed to more information before making their decisions.
Discussion
While agents’ strategies spanned a range of complexity, more
difficult environments pushed them toward more complex
strategies resembling sequential sampling while easier environments led to strategies more similar to non-compensatory
heuristics. Therefore, both sequential sampling and heuristics
seem to be strategies that could plausibly result from different
environmental demands. However, our results run counter to
the idea that heuristics are invoked when decisions are particularly difficult or choice alternatives are not easily distinguished (Brandstätter et al., 2006).
The final strategies may not support the claim that organisms are primarily heuristic decision-makers (Gigerenzer &
Brighton, 2009), but it still lends credence to the premise of
ecological rationality on which many heuristics are based.
This approach suggests that different environments (choice
ecologies) lead to different decision-making strategies rather
than a one-size-fits-all process. It is certainly plausible that
agents in environments with mixed or changing difficulty levels converge on a single strategy, but for the moment it seems
that multiple strategies can be implemented across multiple
choice environments.
While difficult conditions led to larger brains and more information processing, perhaps a more critical finding is that
simpler choice environments led to simpler decision strategies and architectures. While this may initially seem like the
other side of the same coin, this result is particularly interesting because we did not impose any penalties for larger brains.
Although other researchers have suggested that metabolic
costs limit the evolution of large brains (Isler & Van Schaik,
2006; Laughlin, van Steveninck, & Anderson, 1998) and can
be substantial in real brains (Kuzawa et al., 2014), they were
not necessary to drive evolution toward smaller brains.
Instead, we suspect that the drop in brain size is a result of the agents' response to mutations, or the mutation load imposed by the size of their genomes. For example, a random mutation in the genome that connects, disconnects, or re-maps
a gate is more likely to affect downstream choice-critical elements of a brain that uses more nodes and connections to
process information (has a higher mutation load), particularly
if it has a larger ratio of coding to non-coding nucleotides. In
this case, a smaller brain would be a tool for avoiding deleterious mutations to the information processing stream. Alternatively, the minimum number of nodes and connections
required to perform the task is likely lower in the easier conditions than in the more difficult ones, so mutations that reduce brain size and function might be able to persist in the
easier but not the more difficult conditions. In either case,
it is clear that a larger brain does not offer sufficient benefits in the easier conditions to overcome the mutation load it
imposes.
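The mutation-load argument can be illustrated with a back-of-the-envelope calculation (ours, with made-up numbers): if a brain uses more choice-critical sites in a genome of fixed size, the chance that a batch of random mutations disrupts at least one of them rises accordingly.

```python
def p_disruptive(critical_sites, genome_sites, n_mutations):
    """Probability that at least one of n uniformly placed mutations
    hits a choice-critical site."""
    return 1.0 - (1.0 - critical_sites / genome_sites) ** n_mutations

# Hypothetical genomes: small, medium, and large brains in a 5000-site genome.
for critical in (10, 50, 200):
    print(critical, round(p_disruptive(critical, 5000, 20), 3))
```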
Another potential risk of having a larger brain is the chance
of a random mutation preventing information from reaching
the output nodes – with a longer chain of processing nodes
being easier to interrupt or confuse than a shorter one. While
the agents in more difficult conditions were evidently able to
overcome such a possibility (usually answering within 20
steps of the end of non-decision time), it may be a barrier that
required substantial fitness rewards to cross, which were not
present in the easier conditions.
We hesitate to make claims that are too broad given the
scope of our study, but the finding that brain size can be limited by mutation load is thought-provoking. This may explain why
systems that are subject to mutations and selection pressures
– including neurons and muscle cells – are reduced when they
are unused, even when the energetic costs of maintaining the
structure appear to be low. It seems a promising direction for
future research to examine in-depth how mutation rate and
robustness contribute to organisms’ fitness above and beyond
the costs associated with metabolism.
Approach
We hope to have presented a method for examining questions regarding adaptation and evolution that often arise in
cognitive science and psychology. Whereas previous studies have worked from a particular strategy and examined the
choice environments in which it succeeds, we present a way
of answering questions about how the environment can shape
the evolution of a strategy. The strategies resulting from this
computational evolution approach are adaptive, easily implemented in the brain, and the result of realistic natural selection pressures. Additionally, we have shown that this approach is capable of addressing important questions about existing models of simple dynamic decisions, though it could
undoubtedly shed light on an array of related problems.
Of course, there are limitations to this approach, many of
which are computational. The agents we used had only 16
nodes, 4 of which were reserved for inputs and outputs, meaning that only 12 could be used for storing (memory) and processing information. Although more nodes could be added –
and certainly an accurate model of even very simple nervous
systems would have many times more – this would severely
slow down the steps required for evolution. It might also lead
to problems that are analogous to the over-fitting that occurs
when more parameters are added to a model, though this is
itself a question worth exploring.
Conclusions
In this paper, we presented a computational evolution framework that could be used to examine how environments lead to
different behaviors. This framework allowed us to examine
the strategies that might have arisen in organisms to address
the problem of dynamic decision-making, where agents receive information over time and must somehow use this input
to make decisions that affect their fitness.
We found that both the evolutionary trajectory and the
strategies ultimately implemented by the agents are heavily
influenced by the characteristics of the choice environment,
with the difficulty of the task being a particularly notable
influence. More difficult environments tended to encourage
the evolution of complex information integration strategies,
while simple environments actually caused agents to decrease
in complexity, perhaps in order to maintain simpler and more
robust decision architectures. They did so despite no explicit
costs for complexity, indicating that mutation load may be
sufficient to limit brain size.
Finally, we discussed these results in the context of existing models of human decision-making, suggesting that both
non-compensatory strategies such as fast and frugal heuristics (Gigerenzer & Todd, 1999) and complex ones such
as sequential sampling (Link & Heath, 1975) may provide
valid descriptions – or at least serve as useful landmarks –
of the strategies implemented by evolved agents. In doing
so, we provided evidence that strategy use is environment-dependent, as different decision environments led to different patterns of information use. More generally, we have
shown that a computational evolution approach integrating
computer science, evolutionary biology, and psychology is
able to provide insights into how, why, and when different
decision-making strategies evolve.
Acknowledgments
This work was supported by Michigan State University’s
High Performance Computing Facility and the National Science Foundation under Cooperative Agreement No. DBI-0939454 and Grant No. DGE-1424871.
References
Bogacz, R., Brown, E., Moehlis, J., Holmes, P., & Cohen, J. D. (2006). The physics of optimal decision
making: a formal analysis of models of performance in
two-alternative forced-choice tasks. Psychological Review,
113(4), 700–765.
Brandstätter, E., Gigerenzer, G., & Hertwig, R. (2006). The
priority heuristic: Making choices without trade-offs. Psychological Review, 113(2), 409–432.
Briscoe, E., & Feldman, J. (2011). Conceptual complexity
and the bias/variance tradeoff. Cognition, 118(1), 2–16.
Dougherty, M. R., Franco-Watkins, A. M., & Thomas, R.
(2008). Psychological plausibility of the theory of probabilistic mental models and the fast and frugal heuristics.
Psychological Review, 115(1), 199–211.
Edlund, J. A., Chaumont, N., Hintze, A., Koch, C., Tononi,
G., & Adami, C. (2011). Integrated information increases
with fitness in the evolution of animats. PLoS Computational Biology, 7(10), e1002236.
Gigerenzer, G., & Brighton, H. (2009). Homo heuristicus:
Why biased minds make better inferences. Topics in Cognitive Science, 1(1), 107–143.
Gigerenzer, G., & Todd, P. M. (1999). Simple heuristics that
make us smart. New York, NY: Oxford University Press.
Griffiths, T. L., & Tenenbaum, J. B. (2006). Optimal predictions in everyday cognition. Psychological Science, 17(9),
767–773.
Isler, K., & Van Schaik, C. P. (2006). Metabolic costs of brain
size evolution. Biology Letters, 2(4), 557–560.
Kuzawa, C. W., Chugani, H. T., Grossman, L. I., Lipovich,
L., Muzik, O., Hof, P. R., . . . Lange, N. (2014). Metabolic
costs and evolutionary implications of human brain development. Proceedings of the National Academy of Sciences,
111(36), 13010–13015.
Laughlin, S. B., van Steveninck, R. R. d. R., & Anderson,
J. C. (1998). The metabolic cost of neural information.
Nature Neuroscience, 1(1), 36–41.
Link, S., & Heath, R. (1975). A sequential theory of psychological discrimination. Psychometrika, 40(1), 77–105.
Marstaller, L., Hintze, A., & Adami, C. (2013). The evolution of representation in simple cognitive networks. Neural
Computation, 25(8), 2079–2107.
Olson, R. S., Hintze, A., Dyer, F. C., Knoester, D. B., &
Adami, C. (2013). Predator confusion is sufficient to
evolve swarming behaviour. Journal of The Royal Society
Interface, 10(85), 20130305.
Ratcliff, R. (1978). A theory of memory retrieval. Psychological Review, 85(2), 59–108.
Von Neumann, J., & Morgenstern, O. (1944). Theory of
games and economic behavior. Princeton, NJ: Princeton
University Press.
Asymptotic Signal Detection Rates
with 1-bit Array Measurements
arXiv:1711.00739v2 [] 20 Feb 2018
Manuel S. Stein
Abstract—This work considers detecting the presence of a
band-limited random radio source using an antenna array
featuring a low-complexity digitization process with single-bit
output resolution. In contrast to high-resolution analog-to-digital
conversion, such a direct transformation of the analog radio
measurements to a binary representation can be implemented in a hardware- and energy-efficient manner. However, the probabilistic model
of the binary receive data becomes challenging. Therefore, we
first consider the Neyman-Pearson test within generic exponential
families and derive the associated analytic detection rate expressions. Then we use a specific replacement model for the binary
likelihood and study the achievable detection performance with 1-bit radio array measurements. As an application, we explore the
capability of a low-complexity GPS spectrum monitoring system
with different numbers of antennas and different observation
intervals. Results show that with a moderate amount of binary
sensors it is possible to reliably perform the monitoring task.
Index Terms—1-bit ADC, analog-to-digital conversion, array
processing, exponential family, GPS, Neyman-Pearson test, quantization, detection, spectrum monitoring
I. INTRODUCTION
Since, in 1965, Gordon E. Moore predicted a doubling
in computational capability every two years, chip companies
have kept pace with this prognosis and set the foundation
for digital systems which today allow processing high-rate
radio measurements by sophisticated algorithms. In conjunction with wireless sensor arrays, this results in advanced
signal processing capabilities, see, e.g. [1]. Unfortunately, in
the last decades, the advances regarding the analog circuits
forming the radio front-end were much slower. Therefore, in
the advent of the Internet of things (IoT), where small and
cheap objects are supposed to feature radio interfaces, cost
and power consumption of wireless sensors are becoming an
issue. In particular, analog-to-digital conversion [2], [3] turns out to set constraints on the digitization rate and the number of antenna elements under strict hardware and power budgets.
In this context, we consider detection of a band-limited
source with unknown random structure by a sensor array
providing single-bit radio measurements. We formulate the
processing task as a binary hypothesis test regarding exponential family models and derive expressions for the asymptotic
detection rates. The results are used to determine the design
of a low-complexity GPS spectrum monitoring system.
This work was supported by the German Academic Exchange Service
(DAAD) with funds from the German Federal Ministry of Education and
Research (BMBF) and the People Program (Marie Skłodowska-Curie Actions)
of the European Union’s Seventh Framework Program (FP7) under REA grant
agreement no. 605728 (P.R.I.M.E. - Postdoctoral Researchers International
Mobility Experience).
M. S. Stein is with the Chair for Stochastics, Universität Bayreuth, Germany
(e-mail: [email protected]).
Note that detection with quantized signals has found attention in distributed decision making [4] where data is collected
through sensors at different locations and quantization is used
to diminish the communication overhead between sensors and
terminal node [5]–[7]. Optimum quantization for detection
is the focus of [8]–[11], while [12] considers the detection
performance degradation due to hard-limiting. For discussions
on symbol detection for quantized communication see, e.g.,
[13]–[18]. Detection for cognitive radio with single-antenna
1-bit receivers is considered in [19] while array processing
with 1-bit measurements is analyzed in [20]–[22].
II. PROBLEM FORMULATION
We consider a receive situation where a narrow-band random wireless signal is impinging on an array of S sensors,
y = γAx + η.
(1)
Each sensor features two outputs (in-phase and quadrature), such that the receive vector y ∈ Y = R^M, M = 2S, can be decomposed as y = [y_I^T  y_Q^T]^T with y_I, y_Q ∈ R^S. Likewise, the random source x = [x_I  x_Q]^T ∈ R^2 consists of two zero-mean signal components with covariance matrix

R_x = E_x[xx^T] = I,   (2)
where E_u[·] denotes the expectation concerning the probability distribution p(u) and I the identity matrix. The steering matrix A = [A_I^T  A_Q^T]^T ∈ R^{M×2}, with A_I, A_Q ∈ R^{S×2}, models a uniform linear sensor array response (half carrier-wavelength inter-element distance) for a narrow-band signal arriving from direction ζ ∈ R, such that
A_I = [ cos(0)                 sin(0)
        cos(π sin(ζ))          sin(π sin(ζ))
        ...                    ...
        cos((S−1)π sin(ζ))     sin((S−1)π sin(ζ)) ]   (3)

and

A_Q = [ −sin(0)                cos(0)
        −sin(π sin(ζ))         cos(π sin(ζ))
        ...                    ...
        −sin((S−1)π sin(ζ))    cos((S−1)π sin(ζ)) ].   (4)
The parameter γ ∈ R characterizes the source strength in relation to the additive zero-mean sensor noise η ∈ R^M with

R_η = E_η[ηη^T] = I.   (5)
Due to the properties of the source and noise signals, the receive data (1) can be modeled by a Gaussian distribution

y ∼ p_y(y; γ) = exp(−(1/2) y^T R_y^{−1}(γ) y) / √((2π)^M det(R_y(γ))),   (6)

with covariance matrix

R_y(γ) = E_{y;γ}[yy^T] = γ^2 AA^T + I.   (7)
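As a quick numerical sanity check of the model (1) and its covariance (7), the sketch below (ours; S, ζ, γ, and K are example values, not from the paper) builds the steering matrix from (3)-(4), draws snapshots, and compares the empirical covariance with γ^2 AA^T + I.

```python
import numpy as np

def steering(S, zeta):
    """Uniform linear array response, half carrier-wavelength spacing;
    stacks A_I and A_Q from eqs. (3)-(4) into A in R^{M x 2}, M = 2S."""
    phase = np.pi * np.sin(zeta) * np.arange(S)
    A_I = np.column_stack((np.cos(phase), np.sin(phase)))
    A_Q = np.column_stack((-np.sin(phase), np.cos(phase)))
    return np.vstack((A_I, A_Q))

rng = np.random.default_rng(0)
S, zeta, gamma, K = 4, np.deg2rad(45.0), 0.5, 200_000
A = steering(S, zeta)
M = 2 * S
X = rng.standard_normal((2, K))        # source x, eq. (2): R_x = I
N = rng.standard_normal((M, K))        # noise eta, eq. (5): R_eta = I
Y = gamma * A @ X + N                  # K snapshots of model (1)
R_emp = Y @ Y.T / K
R_th = gamma**2 * A @ A.T + np.eye(M)  # covariance, eq. (7)
print(np.max(np.abs(R_emp - R_th)))    # small for large K
```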
Based on the likelihood (6), the problem of signal detection
can be stated as the decision about which of the two models
H0 : y ∼ p_y(y; γ0),   H1 : y ∼ p_y(y; γ1)   (8)

has generated the data (K independent array snapshots)

Y = [y_1  y_2  ...  y_K] ∈ Y^K.   (9)

Using model (1) in practical applications implies that a high-resolution analog-to-digital converter (ADC) is available for each output channel. Significant savings are possible when using a converter with 1-bit output resolution. Such a receiver can be modeled by an ideal sampling device with infinite resolution followed by a hard-limiter

z = sign(y),   (10)

where sign(u) is the element-wise signum function, i.e.,

[z]_i = +1 if [y]_i ≥ 0,  −1 if [y]_i < 0.   (11)

Note that (10) characterizes a low-complexity digitization process without feedback and is, therefore, distinct from sigma-delta conversion [23], where a single fast comparator with feedback is used to mimic a high-resolution ADC.

Modeling the output of (10) by its exact parametric probability distribution function requires computing the integral

p_z(z; γ) = ∫_{Y(z)} p_y(y; γ) dy   (12)

for all 2^M points in Z = B^M, where Y(z) characterizes the subset in Y which by (10) is transformed to z. Additionally, (12) requires the orthant probability, which for M > 4 is an open problem. As the multivariate Bernoulli model resulting from (10) is part of the exponential family like (6), in the following we resort to discussing the considered processing task for generic data models within this broad class. Without exactly specifying the quantized likelihood (12), this will allow us to analyze the asymptotically achievable detection rates with low-complexity 1-bit array measurements.

III. DECISIONS IN THE EXPONENTIAL FAMILY

Consider the multivariate parametric exponential family

p_z(z; θ) = exp(β^T(θ) φ(z) − λ(θ) + κ(z)),   (13)

where θ ∈ R^D constitute its parameters, β(θ) : R^D → R^L the natural parameters, φ(z) : R^M → R^L the sufficient statistics, λ(θ) : R^D → R the log-normalizer and κ(z) : R^M → R the carrier measure. Given a data set Z ∈ Z^K of the form (9), the simple binary hypothesis test between

H0 : z ∼ p_z(z; θ0),   H1 : z ∼ p_z(z; θ1)   (14)

is to be performed. To this end, we assign a critical region C ⊂ Z^K and decide in favor of H1 if the observed data satisfies Z ∈ C. The probability of erroneously deciding for H1 while H0 is the true data-generating model is calculated

P_FA = ∫_C p_Z(Z; θ0) dZ,   (15)

while the probability of correctly deciding for H1 is given by

P_D = ∫_C p_Z(Z; θ1) dZ.   (16)

Approaching the decision problem (14) under the desired test size P_FA and maximum P_D, the Neyman-Pearson theorem shows that it is optimum to use the likelihood ratio test [24]

L(Z) = p_Z(Z; θ1) / p_Z(Z; θ0) > ξ′   (17)

for the assignment of the critical region

C = {Z : L(Z) > ξ′},   (18)

while the decision threshold ξ′ is determined through

P_FA = ∫_{Z : L(Z) > ξ′} p_Z(Z; θ0) dZ.   (19)

Based on the ratio (17), a test statistic T(Z) : Z^K → R can be formulated such that the binary decision is performed by

decide H0 if T(Z) ≤ ξ,  H1 if T(Z) > ξ.   (20)

To analyze the performance of (20), it is required to characterize the distribution of T(Z) and evaluate (15) and (16). As the data Z consists of K independent samples, the test statistic can be factorized into a sum of independent components

T(Z) = Σ_{k=1}^{K} t(z_k)   (21)
such that, by the central limit theorem, the test statistic in the large sample regime follows the normal distribution

p(T(Z) | H_i) =ᵃ (1/(√(2π) σ_i)) exp(−(T(Z) − µ_i)^2 / (2σ_i^2)),   (22)

where by =ᵃ we denote asymptotic equality. Through the mean and standard deviation of the test statistic

µ_i = E_{Z;θ_i}[T(Z)],   (23)

σ_i = √(E_{Z;θ_i}[(T(Z) − µ_i)^2]),   (24)

the asymptotic performance is then given by

P_D = Pr{T(Z) > ξ | H1} =ᵃ Q((ξ − µ1)/σ1),   (25)

P_FA = Pr{T(Z) > ξ | H0} =ᵃ Q((ξ − µ0)/σ0),   (26)

where Q(u) denotes the Q-function. Consequently, for a desired P_FA, the decision threshold is

ξ =ᵃ Q^{−1}(P_FA) σ0 + µ0,   (27)

resulting in the asymptotic probability of detection

P_D(P_FA) =ᵃ Q( Q^{−1}(P_FA) σ0/σ1 − (µ1 − µ0)/σ1 ).   (28)
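The threshold rule (27) and the detection rate (28) are straightforward to evaluate numerically. Here is a sketch (ours) using the standard normal CDF; the moment values at the bottom are illustrative, not taken from the paper.

```python
from statistics import NormalDist

std_norm = NormalDist()

def Q(u):
    """Tail probability of the standard normal (the Q-function)."""
    return 1.0 - std_norm.cdf(u)

def Q_inv(p):
    """Inverse Q-function."""
    return std_norm.inv_cdf(1.0 - p)

def threshold(p_fa, mu0, sigma0):
    """Decision threshold for a desired false-alarm level, eq. (27)."""
    return Q_inv(p_fa) * sigma0 + mu0

def detection_rate(p_fa, mu0, sigma0, mu1, sigma1):
    """Asymptotic probability of detection, eq. (28)."""
    return Q(Q_inv(p_fa) * sigma0 / sigma1 - (mu1 - mu0) / sigma1)

# Illustrative moments (assumed for demonstration, not from the paper):
print(threshold(1e-3, mu0=0.0, sigma0=1.0))
print(detection_rate(1e-3, mu0=0.0, sigma0=1.0, mu1=4.0, sigma1=1.2))
```

Note that when the two hypotheses share the same moments, (28) collapses to P_D = P_FA, which is a useful sanity check.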
Writing

b = β(θ1) − β(θ0),   (29)

the log-likelihood ratio for exponential family models is

ln L(Z) = Σ_{k=1}^{K} b^T φ(z_k) − K(λ(θ1) − λ(θ0)),   (30)

such that with the empirical mean of the sufficient statistics

φ̄ = (1/K) Σ_{k=1}^{K} φ(z_k),   (31)

a likelihood-based test statistic is

T(Z) = b^T φ̄.   (32)

The mean and standard deviation of the test are

µ_i = b^T µ_φ(θ_i),   σ_i = √((1/K) b^T R_φ(θ_i) b),   (33)

where

µ_φ(θ) = E_{z;θ}[φ(z)],   (34)

R_φ(θ) = E_{z;θ}[φ(z) φ^T(z)] − µ_φ(θ) µ_φ^T(θ).   (35)

For Gaussian models (6),

β(θ) = −(1/2) vec(R_y^{−1}(θ)),   (36)

φ(y) = vec(yy^T),   (37)

µ_φ(θ) = vec(R_y(θ)),   (38)

and the matrix (35) can be determined through Isserlis' theorem. If the model (13) is unspecified, a favorable choice of φ(z) and b has to be found. For the 1-bit analysis, we use

φ(z) = Φ vec(zz^T),   (39)

where the matrix Φ eliminates duplicate and diagonal statistics. This is potentially suboptimal as (39) does, in general, not contain all sufficient statistics of a multivariate Bernoulli distribution. The missing statistics are absorbed in the carrier of (13) and, therefore, do not contribute to the decision process. For the natural parameter difference (29), we use

b = R_φ^{−1}(θ1) µ_φ(θ1) − R_φ^{−1}(θ0) µ_φ(θ0)   (40)

as it maximizes the distance of the asymptotic outcome in (32) under both hypotheses. The mean of the statistics (39),

µ_φ(θ) = E_{z;θ}[φ(z)] = Φ vec(R_z(θ)),   (41)

is obtained by the arcsine law [25, pp. 284],

R_z(θ) = (2/π) arcsin(Σ_y(θ)),   (42)

Σ_y(θ) = diag(R_y(θ))^{−1/2} R_y(θ) diag(R_y(θ))^{−1/2}.

Further, the evaluation of (35) requires determining the matrix

C(θ) = E_{z;θ}[vec(zz^T) vec(zz^T)^T],   (43)

which is possible [21] by the arcsine law and the orthant probability of the quadrivariate Gaussian distribution [26].

IV. RESULTS

We apply the results to discuss the design of a 1-bit array GPS spectrum monitoring system. The task is to check a spectral band with two-sided bandwidth B = 2.046 MHz, centered at 1.57 GHz, for a source signal from a specific direction. The receiver samples the low-pass filtered receive signal at Nyquist rate, i.e., f_s = B, such that K = 2046 array snapshots are available within a millisecond. The upper triangular area under the receiver operating characteristic (ROC) curve,

χ = 2 ∫_0^1 P_D(u) du − 1,   (44)

determines the system quality regarding the detection task. For different signal-to-noise ratios (SNR), γ0 = 0 vs. γ1 = √SNR, Fig. 1 shows the 1-bit system quality for an exemplary setting with K = 2046, ζ = 45°, versus the number of array elements S.

[Fig. 1. Quality vs. Array Size (K = 2046, ζ = 45°): χ [dB] versus the number of sensors S, for SNR = −15, −18, −21, −24 dB]

While for very weak sources (SNR = −24 dB) more than S = 20 sensors are required to provide high performance, at a power level of SNR = −15 dB already S = 5 antennas suffice to operate close to a perfect monitoring system with χ = 1. To determine a favorable observation length, for S = 8, ζ = 30°, Fig. 2 shows χ for different numbers of samples. While reliable detection with SNR = −24 dB requires sampling more than 10 ms, the decision at SNR = −15 dB can be made trustworthy within less than 1 ms. Fig. 3 depicts the analytic and simulated performance (using 10^5 realizations) with S = 8, K = 100, ζ = 15°. At moderate sample size, the
asymptotic results already show a good correspondence with the Monte-Carlo simulations.

[Fig. 2. Quality vs. Observation Time (S = 8, ζ = 30°): χ [dB] versus observation time [ms], for SNR = −15, −18, −21, −24 dB]

[Fig. 3. Analysis vs. Simulation (S = 8, K = 100, ζ = 15°): P_D versus signal-to-noise ratio [dB], for P_FA = 10^-3 and P_FA = 10^-4]

V. CONCLUSION

We have derived achievable detection rates with a large number of radio measurements obtained with a low-complexity sensor array performing 1-bit analog-to-digital conversion. Discussing the simple binary hypothesis test in the framework of the exponential family enables circumventing the intractability of the 1-bit array likelihood. Its difficult characterization forms a fundamental obstacle to the application of analytic tools in statistical signal and information processing to problems with coarsely quantized multivariate data. Using the analytic results to determine the spectrum monitoring capability with wireless multi-antenna receivers shows that, under the right system design, radio frequency measurements from binary arrays are sufficient to perform the processing task of signal detection in a fast and reliable way.

REFERENCES
[1] H. Krim, M. Viberg, ”Two decades of array signal processing research:
The parametric approach,” IEEE Signal Process. Mag., vol. 13, no. 4,
pp. 67–94, Jul. 1996.
[2] R. H. Walden, “Analog-to-digital converter survey and analysis,” IEEE
J. Sel. Areas Commun., vol. 17, no. 4, pp. 539–550, Apr. 1999.
[3] B. Murmann, “ADC performance survey 1997-2017,” [Online]. Available: http://web.stanford.edu/~murmann/adcsurvey.html
[4] R. R. Tenney, N. R. Sandell, “Detection with distributed sensors,” IEEE
Trans. Aerosp. Electron. Syst., vol. 17, no. 4, pp. 501–510, Jul. 1981.
[5] J. Fang, Y. Liu, H. Li, S. Li, “One-bit quantizer design for multisensor
GLRT fusion,” IEEE Signal Process. Lett., vol. 20, no. 3, pp. 257–260,
Mar. 2013.
[6] D. Ciuonzo, G. Papa, G. Romano, P. Salvo Rossi, P. Willett, “One-bit
decentralized detection with a Rao test for multisensor fusion,” IEEE
Signal Process. Lett., vol. 20, no. 9, pp. 861–864, Sept. 2013.
[7] H. Zayyani, F. Haddadi, M. Korki, “Double detector for sparse signal
detection from one-bit compressed sensing measurements,” IEEE Signal
Process. Lett., vol. 23, no. 11, pp. 1637-1641, Nov. 2016.
[8] S. Kassam, “Optimum quantization for signal detection,” IEEE Trans.
Commun., vol. 25, no. 5, pp. 479–484, May 1977.
[9] H. V. Poor, J. B. Thomas, “Optimum data quantization for a general
signal detection problem,” in Asilomar Conference on Circuits, Systems
and Computers, Pacific Grove, 1977, pp. 299-303.
[10] B. Aazhang, H. Poor, “On optimum and nearly optimum data quantization for signal detection,” IEEE Trans. Commun., vol. 32, no. 7, pp.
745-751, Jul 1984.
[11] W. A. Hashlamoun, P. K. Varshney, “Near-optimum quantization for
signal detection,” IEEE Trans. Commun., vol. 44, no. 3, pp. 294–297,
Mar. 1996.
[12] P. Willett, P. F. Swaszek, “On the performance degradation from onebit quantized detection,” IEEE Trans. Inf. Theory, vol. 41, no. 6, pp.
1997–2003, Nov. 1995.
[13] G. Foschini, R. Gitlin, S. Weinstein, “Optimum detection of quantized
PAM data signals,” IEEE Trans. Commun., vol. 24, no. 12, pp. 1301–
1309, Dec. 1976.
[14] O. Dabeer, U. Madhow, “Detection and interference suppression for
ultra-wideband signaling with analog processing and one bit A/D,” in
Asilomar Conference on Signals, Systems and Computers, Pacific Grove,
2003, pp. 1766–1770.
[15] A. Mezghani, M. S. Khoufi, J. A. Nossek, “Maximum likelihood
detection for quantized MIMO systems,” in International ITG Workshop
on Smart Antennas, Vienna, 2008, pp. 278–284.
[16] S. Wang, Y. Li, J. Wang, “Multiuser detection in massive spatial
modulation MIMO with low-resolution ADCs,” IEEE Trans. Wireless
Commun., vol. 14, no. 4, pp. 2156–2168, Apr. 2015.
[17] J. Choi, J. Mo, R. W. Heath, “Near maximum-likelihood detector and
channel estimator for uplink multiuser massive MIMO systems with
one-bit ADCs,” IEEE Trans. Commun., vol. 64, no. 5, pp. 2005–2018,
May 2016.
[18] S. Jacobsson, G. Durisi, M. Coldrey, U. Gustavsson, C. Studer,
”Throughput analysis of massive MIMO uplink with low-resolution
ADCs,” IEEE Trans. Wireless Commun., vol. 16, no. 6, pp. 4038–4051,
Jun. 2017.
[19] A. Ali, W. Hamouda, “Low power wideband sensing for one-bit quantized cognitive radio systems,” IEEE Wireless Commun. Lett., vol. 5, no.
1, pp. 16-19, Feb. 2016.
[20] O. Bar-Shalom, A. J. Weiss, “DOA estimation using one-bit quantized
measurements,” IEEE Trans. Aerosp. Electron. Syst., vol. 38, no. 3, pp.
868–884, Jul. 2002.
[21] M. Stein, K. Barbé, J. A. Nossek, “DOA parameter estimation with 1bit quantization - Bounds, methods and the exponential replacement”,
in International ITG Workshop on Smart Antennas, Munich, 2016, pp.
1-6.
[22] C. L. Liu, P. P. Vaidyanathan, “One-bit sparse array DOA estimation,”
in IEEE International Conference on Acoustics, Speech and Signal
Processing (ICASSP), New Orleans, 2017, pp. 3126–3130.
[23] P. M. Aziz, H. V. Sorensen, J. van der Spiegel, “An overview of sigmadelta converters,” IEEE Signal Process. Mag., vol. 13, no. 1, pp. 61–84,
Jan. 1996.
[24] S. M. Kay, Fundamentals of Statistical Signal Processing, Volume 2:
Detection Theory. Upper Saddle River, NJ: Prentice-Hall, 1998.
[25] J. B. Thomas, Introduction to Statistical Communication Theory. Hoboken, NJ: John Wiley & Sons, 1969.
[26] M. Sinn, K. Keller, “Covariances of zero crossings in Gaussian processes,” Theory Probab. Appl., vol. 55, no. 3, pp. 485–504, 2011.
PORTRAIT GROWTH IN CONTRACTING, REGULAR BRANCH
GROUPS
arXiv:1710.02902v1 [] 9 Oct 2017
ZORAN ŠUNIĆ AND JONE URIA-ALBIZURI
Abstract. We address a question of Grigorchuk by providing both a system
of recursive formulas and an asymptotic result for the portrait growth of the
first Grigorchuk group. The results are obtained through analysis of some features of the branching subgroup structure of the group. More generally, we
provide recursive formulas for the portrait growth of any finitely generated,
contracting, regular branch group, based on the coset decomposition of the
groups that are higher in the branching subgroup structure in terms of the
lower subgroups. Using the same general approach we fully describe the portrait growth for all non-symmetric GGS-groups and for the Apollonian group.
1. Introduction
Regular branch groups acting on rooted trees have been extensively studied since
the 1980s. The interest in these groups is due to their remarkable properties. For
instance, some of the groups in this class provide counterexamples to the General
Burnside Problem, many of them are amenable but not elementary amenable, and
the first example of a group of intermediate word growth was of this type. The initial
examples, constructed in the early 1980s, were the Grigorchuk 2-groups [Gri80,
Gri84] and the Gupta-Sidki p-groups [GS83]. Many other examples and different
generalizations have been introduced since then.
The notion of word growth for a finitely generated group was introduced by
A.S. Švarc [Šva55] and later, independently, by J. Milnor [Mil68a, Mil68b]. Given
a finitely generated group, one can define the word metric with respect to a finite
generating set. It is natural to ask what is the word growth function of a particular
group, that is, what is the number of elements that can be written as words up
to a given length (equivalently, what is the volume of the ball of a given radius
in the word metric). A lot of work has been done in this direction. For instance,
it is known that groups of polynomial growth are exactly the virtually nilpotent
groups [Gro81]. Since the free group of rank at least 2 has exponential growth,
the word growth of every finitely generated group is at most exponential. In 1968,
Milnor [Mil68c] asked if every finitely generated group has either polynomial or
exponential growth, and Grigorchuk [Gri83, Gri84] showed in the early 1980s that
groups of intermediate growth exist. In particular, the growth of the first Grigorchuk group [Gri80], introduced in 1980 as an example of a Burnside 2-group, is
superpolynomial, but subexponential.
The first Grigorchuk group is a self-similar, contracting, regular branch group,
and so are the Gupta-Sidki p-groups. The study of such groups often relies on
the representation of their elements through portraits. For instance, the known
estimate [Bar98] of word growth for the first Grigorchuk group is based on an
estimate of the number of portraits of elements of given length. Thus, in the
2010 Mathematics Subject Classification. Primary 20E08.
J. Uria-Albizuri acknowledges financial support from the Spanish Government, grants
MTM2011-28229-C02 and MTM2014-53810-C2-2-P, and from the Basque Government, grants
IT753-13, IT974-16 and the predoctoral grant PRE-2014-1-347.
context of self-similar, contracting groups acting on regular rooted trees, it is of
interest to study the portrait growth of a group with respect to a finite generating
set. The necessary definitions are given in Section 2, but let us provide here a
rough description. Given a finitely generated group acting on a rooted tree, one
can consider the action of the group on the subtrees below each vertex of the tree.
If the action on such subtrees is always given by elements of the group itself, the
group is called self-similar. Moreover, if the elements describing the action at the
subtrees on each level are getting shorter (with respect to the word metric) with the
level, with finitely many exceptions, the group is said to be contracting. Thus, in a
finitely generated, contracting group G, there is a finite set M such that, for every
element g in G, there exists a level such that all elements describing the action of g
on the subtrees below that level come from the finite set M. The first level where
this happens is called the depth of the element g (with respect to M), a concept
introduced by Sidki [Sid87]. In this setting, the portrait growth function (growth
sequence) counts the number of elements up to a given depth and one may ask
what is the growth rate of such a function. This question was specifically raised by
Grigorchuk [Gri05] for the first Grigorchuk group.
We provide a system of recursive formulas for the portrait growth of the first
Grigorchuk group (see Theorem 4.2) and the following asymptotic result.
Theorem 1.1. There exists a positive constant γ such that the portrait growth
sequence {a_n}_{n=0}^∞ of the first Grigorchuk group G satisfies the inequalities

(1/4) e^{γ2^n} ≤ a_n ≤ 4 e^{γ2^n},
for all n ≥ 0. Moreover, γ ≈ 0.71.
We note here that the recursive formulas from Theorem 4.2, together with the
results of Lemma 3.2, may be used to estimate γ with any desired degree of accuracy.
More generally, we provide recursive formulas (see (2)) for the portrait growth
sequence of any finitely generated, contracting, regular branch group. The formulas
are directly based on the branching subgroup structure of the group.
As a further application, we also study the portrait growth for all non-symmetric GGS-groups and for the Apollonian group and, in each case, we resolve the obtained recursion and provide a straightforward description of the portrait growth.
Theorem 1.2. Let G be a GGS-group defined by a non-symmetric vector e ∈ F_p^{p−1}. The portrait growth sequence {a_n}_{n=0}^∞ of G is given by
    a_0 = 1 + 2(p − 1),
    a_n = p (x_1 + (p − 1) y_1)^{p^{n−1}},
where x_1 and y_1 are the numbers of solutions in F_p^p of
    ((n_0, . . . , n_{p−1}) C(0, e)) ⊙ (n_0, . . . , n_{p−1}) = (0, . . . , 0),
with n_0 + · · · + n_{p−1} = 0 and n_0 + · · · + n_{p−1} = 1, respectively.
For instance, for the so-called Gupta-Sidki 3-group, which corresponds to the GGS-group defined by e = (1, −1) with p = 3, the portrait growth sequence is given by a_0 = 5 and a_n = 3 · 9^{3^{n−1}}, for n ≥ 1.
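This Gupta-Sidki instance can be verified by brute force. The following Python sketch (ours, not part of the paper) counts the solutions x_1 and y_1 from Theorem 1.2 for p = 3 and e = (1, −1), interpreting ⊙ as the coordinate-wise product, and then evaluates the closed form:

```python
from itertools import product

p, e = 3, (1, -1)  # the Gupta-Sidki 3-group

# circulant matrix C(0, e) of the vector (0, e_1, ..., e_{p-1}); each row is
# the previous row shifted cyclically to the right
row0 = (0,) + tuple(c % p for c in e)
C = [row0[p - i:] + row0[:p - i] for i in range(p)]

x1 = y1 = 0
for n in product(range(p), repeat=p):
    m = [sum(n[i] * C[i][j] for i in range(p)) % p for j in range(p)]
    if all(m[j] * n[j] % p == 0 for j in range(p)):  # (n C(0,e)) ⊙ n = 0
        s = sum(n) % p
        if s == 0:
            x1 += 1
        elif s == 1:
            y1 += 1

def a(n):
    # a_n = p (x_1 + (p - 1) y_1)^{p^{n-1}}, as in Theorem 1.2
    return p * (x1 + (p - 1) * y1) ** (p ** (n - 1))

print(x1, y1, a(1), a(2))  # 3 3 27 2187
```

With x_1 = y_1 = 3 this gives a_n = 3 · 9^{3^{n−1}}, matching the sequence stated above.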
Theorem 1.3. The portrait growth sequence {a_n}_{n=0}^∞ of the Apollonian group is given by
    a_n = 3^{(3^n − 1)/2} · 7^{3^n} = (1/√3) (√3 · 7)^{3^n}.
PORTRAIT GROWTH IN CONTRACTING, REGULAR BRANCH GROUPS
Note that, in all three cases, the portrait growth is doubly exponential even
though the word growth function is intermediate for the first Grigorchuk group,
mostly unknown for the GGS-groups, and exponential for the Apollonian group. We
conjecture that the portrait growth is doubly exponential for all finitely generated,
contracting, regular branch groups.
The paper is organized as follows: in Section 2 we provide the basic definitions regarding groups acting on regular rooted trees and we describe a procedure
yielding recursive relations for the portrait growth sequence of a finitely generated,
contracting, regular branch group. In Section 3 we state some useful observations
about sequences of doubly exponential growth. Finally in Sections 4, 5 and 6 we
describe the cases of the first Grigorchuk group, non-symmetric GGS-groups, and
the Apollonian group.
2. Portrait growth sequence on a regular branch contracting group
A regular rooted tree T is a graph whose vertices are the words over a finite
alphabet X, and two vertices u and v are joined by an edge if v = ux for some
x ∈ X. The empty word, denoted ∅, represents the root of the tree and the tree
is called d-adic if X consists of d elements. The vertices represented by words of
a fixed length constitute a level, that is, the words of length n constitute the nth
level of the tree. The ternary rooted tree based on X = {1, 2, 3} is displayed in
Figure 1.
Figure 1. A ternary rooted tree
An automorphism of the tree T is a permutation of the vertices preserving incidence. Such a permutation, by necessity, must also preserve the root and all
other levels of the tree. The automorphisms of T form the automorphism group
Aut T under composition. A group acting faithfully on a regular rooted tree may
be regarded as a subgroup of Aut T . Every automorphism g can be fully described
by indicating how g permutes the d vertices at level 1 below the root, and how it
acts on the subtrees hanging from each vertex at level 1. Observe that each subtree hanging from a vertex is isomorphic to the whole tree T , so that the following
description makes sense. Namely, we decompose any automorphism g ∈ Aut T as
(1)    g = (g_1, . . . , g_d)α,
where α ∈ Sd is the permutation describing the action of g on the d vertices at
level 1, and each gi ∈ Aut T describes the action of g on the ith subtree below
level 1, for i = 1, . . . , d. This process can be repeated at any level, and the element
describing the action of g at a particular vertex u is called the section of g at the
vertex u. Given a group G ≤ Aut T we say that G is self-similar if the sections of
each element in G belong again to the group G.
We say that a self-similar group G ≤ Aut T is contracting if there is a finite
set M of elements in G such that, for every element g, there exists some level n
such that all sections of g at the vertices at and below the nth level belong to
M. The smallest set M among such finite sets is called the nucleus of G and we
denote it by N (G) (or by N , when G is clear from the context). The definition of
a contracting group, in this form, is due to Nekrashevych [Nek05].
For any finitely generated group G, generated by a finite symmetric set S = S^{−1},
we define the word length of g, denoted ∂(g), to be the length of the shortest word
over the alphabet S representing the element g. A self-similar group G generated
by a finite symmetric set S is contracting if and only if there exist constants λ < 1
and C ≥ 0, and level n such that for every g ∈ G and every vertex u on level n
∂(gu ) ≤ λ∂(g) + C.
Note that, if ∂(g) > C/(1 − λ), then
    ∂(g_u) ≤ λ∂(g) + C < ∂(g),
that is, the sections g_u are strictly shorter than g. Thus, the elements that possibly have no shortening in their sections are the ones that belong to the finite set
    { g ∈ G | ∂(g) ≤ C/(1 − λ) }.
In particular, the nucleus is part of this finite set. The metric approach to contracting groups precedes the nucleus definition of Nekrashevych and was used by Grigorchuk in his early works on families of groups related to the first Grigorchuk group [Gri80, Gri84]. Nekrashevych [Nek05] showed that the nucleus definition and the metric definition are equivalent in the case of finitely generated groups.
Given a contracting group G acting on a d-ary tree and an element g in G,
the nucleus portrait is a finite tree, whose interior vertices are decorated by permutations from S_d and whose leaves are decorated by elements of the nucleus, describing the action of the element g. The portrait is constructed recursively as follows. If g is an element of the nucleus, the tree consists only of the root decorated by g. If g is not an element of the nucleus, then we consider its decomposition (1). The portrait of g is obtained by decorating the root by the permutation α and by attaching the portraits of the sections g_1, . . . , g_d at the corresponding vertices of the
first level. Since G is contracting this recursive procedure must end at some point,
and we obtain the portrait of the element g. (A concrete example in the case of
the first Grigorchuk group is provided in Figure 2.)
Let us denote by d(g) the depth of the portrait of an element g ∈ G, that is,
the length of the largest ray from the root to a leaf in the portrait of g. For each
n ∈ N, the set {g ∈ G | d(g) ≤ n} is finite, and the function a : N → N given by
a(n) = |{g ∈ G | d(g) ≤ n}|
is called the portrait growth function, or portrait growth sequence, of G (with
respect to the nucleus).
We now focus on regular branch groups, since their structure gives us a way to
describe the portrait growth function of a contracting group in a recursive way.
Given G ≤ Aut T , the elements of G fixing level n form a normal subgroup of
G, called the nth level stabilizer and denoted stG (n). For every n ∈ N, we have an
injective homomorphism
    ψ_n : st_G(n) −→ Aut T × · · · × Aut T   (d^n factors),
sending each element g ∈ st_G(n) to the d^n-tuple consisting of the sections of g at level n. Note that, if the group is self-similar, ψ_n(st_G(n)) ≤ G × · · · × G (d^n factors). For
simplicity we write ψ = ψ1 . A group G ≤ Aut T is level transitive if it acts
transitively on every level of the tree. A level transitive group G ≤ Aut T is called
regular branch if there exists a normal subgroup K of finite index in G such that
ψ(K ∩ stG (1)) ≥ K × · · · × K.
Observe that, since K is of finite index in G, the group ψ −1 (K × · · · × K) has finite
index in K ∩ stG (1), and hence in G.
We now describe a procedure yielding recursive formulas for the portrait growth
of any finitely generated, contracting, regular branch group G, branching over its
normal subgroup K of index k. Consider a left transversal T = {t1 , . . . , tk } for K
in G and denote by p_n(t_i) = |{g ∈ t_i K | d(g) ≤ n}| and p_n = |{g ∈ G | d(g) ≤ n}| the sizes of the sets consisting of the elements of depth at most n in the coset t_i K and in the whole group G, respectively. We have p_n = Σ_{i=1}^{k} p_n(t_i).
Let S = {s_1, . . . , s_ℓ} be a left transversal for ψ^{−1}(K × · · · × K) in K. For i = 1, . . . , k and j = 1, . . . , ℓ we have
    t_i s_j = (g_1, . . . , g_d)α ≡ (t_{ij1}, . . . , t_{ijd})α   (mod K × · · · × K),
for some t_{ijr} ∈ T, r = 1, . . . , d. Thus, for n ≥ 0,
(2)    p_{n+1}(t_i) = Σ_{j=1}^{ℓ} p_n(t_{ij1}) · · · p_n(t_{ijd}),
(3)    p_{n+1} = Σ_{i=1}^{k} Σ_{j=1}^{ℓ} p_n(t_{ij1}) · · · p_n(t_{ijd}).
The initial conditions p0 (ti ), i = 1, . . . , k, for the recursive formulas can be
obtained simply by counting the members of the nucleus that come from the corresponding coset, while p0 is the size of the nucleus.
The following observation will be helpful later on. A rooted automorphism is an automorphism whose sections, other than at the root, are trivial.
Lemma 2.1. Let G be a finitely generated, contracting, regular branch group
branching over the subgroup K and let T be a transversal of K in G. If a ∈ G is a
rooted automorphism then, for every t ∈ T and n ≥ 1, we have
p_n(t) = p_n(at) = p_n(ta) = p_n(t^a).
Proof. Observe that for every g ∈ tK and u ∈ L_n, n ≥ 1, we have
    (ag)_u = a_u g_{a(u)} = g_{a(u)},    (ga)_u = g_u a_{g(u)} = g_u,    (g^a)_u = g_{a^{−1}(u)}.
Thus, there are bijections between the sets of elements of depth n in tK, atK, taK and t^a K.
6
ZORAN ŠUNIĆ AND JONE URIA-ALBIZURI
3. Doubly exponential growth
We begin by defining sequences of doubly exponential growth.
Definition 3.1. A sequence of positive real numbers {a_n}_{n∈N} grows doubly exponentially if there exist some positive constants α, β and some γ, d > 1 such that
    α e^{γd^n} ≤ a_n ≤ β e^{γd^n},
for every n ∈ N.
In order to show that the portrait growth sequences in our examples are doubly
exponential we need the following auxiliary result.
Lemma 3.2. Let {a_n}_{n=0}^∞ be a sequence of positive real numbers and d a constant with d > 1. The following are equivalent.
(i) There exist positive constants A and B such that, for all n ≥ 0,
    A a_n^d ≤ a_{n+1} ≤ B a_n^d.
(ii) There exist positive constants α, β, and γ such that, for all n ≥ 0,
    α e^{γd^n} ≤ a_n ≤ β e^{γd^n}.
Moreover, in case (i) is satisfied, the sequence {ln a_n / d^n}_{n=0}^∞ is convergent; we may set
    γ = lim_{n→∞} (ln a_n)/d^n,
and α and β can be chosen to be e^{−M} and e^{M}, respectively, where
    M = (1/(d − 1)) max{|ln A|, |ln B|}.
The error of the approximation γ ≈ γ_n = (ln a_n)/d^n is no greater than M/d^n.
Proof. (ii) implies (i). We have, for all n,
    (α/β^d) a_n^d ≤ (α/β^d)(β e^{γd^n})^d = α e^{γd^{n+1}} ≤ a_{n+1} ≤ β e^{γd^{n+1}} = (β/α^d)(α e^{γd^n})^d ≤ (β/α^d) a_n^d.
(i) implies (ii). For all i, we have ln A ≤ ln(a_{i+1}/a_i^d) ≤ ln B, and therefore
    |ln(a_{i+1}/a_i^d)| ≤ max{|ln A|, |ln B|} = (d − 1)M.
For n ≥ 0, let
    r_n = Σ_{i=n}^{∞} (1/d^{i+1}) ln(a_{i+1}/a_i^d).
The series r_n is absolutely convergent and we have the estimate |r_n| ≤ M/d^n, by comparison to Σ_{i=n}^{∞} (1/d^{i+1})(d − 1)M = M/d^n.
Let
    γ = ln a_0 + r_0 = ln a_0 + Σ_{i=0}^{∞} (1/d^{i+1}) ln(a_{i+1}/a_i^d).
Since r_0 is a convergent series, γ is well defined. We have
    γ = ln a_0 + r_0 = ln a_0 + (1/d) ln(a_1/a_0^d) + r_1
      = (ln a_1)/d + r_1 = (ln a_1)/d + (1/d^2) ln(a_2/a_1^d) + r_2
      = (ln a_2)/d^2 + r_2 = · · ·
Thus, for all n,
    γ = (ln a_n)/d^n + r_n.
Since |r_n| ≤ M/d^n, we see that the sequence {ln a_n / d^n}_{n=0}^∞ converges to γ. Moreover, the inequalities γ − M/d^n ≤ (ln a_n)/d^n ≤ γ + M/d^n yield
    e^{−M} e^{γd^n} ≤ a_n ≤ e^{M} e^{γd^n}.
We end this section with a simple combinatorial observation that shows that an
upper bound of the form required in condition (i) of the lemma always exists for
regular branch groups.
Lemma 3.3. Let G be a finitely generated, contracting, regular branch group acting on the d-adic tree and let {a_n}_{n≥0} be the portrait growth sequence of G. Then
    a_{n+1} ≤ |G : st_G(1)| a_n^d,  for n ≥ 0.
Proof. Since every element of depth at most n + 1 has sections at the first level that have depth at most n, the number of possible decorations at level 1 and below for portraits of depth at most n + 1 is at most a_n^d. On the other hand, the number of possible labels at the root is |G : st_G(1)|, and we obtain the inequality.
4. Portrait growth in the first Grigorchuk group
Denote by G the first Grigorchuk group, introduced in [Gri80]. In his treatise [Gri05] on solved and unsolved problems centered around G, Grigorchuk asked
what is the growth of the sequence counting the number of portraits of given size
in G (Problem 3.5).
The first Grigorchuk group is defined as follows.
Definition 4.1. Let T be the binary tree. The first Grigorchuk group G is the
group generated by the rooted automorphism a permuting the two subtrees on
level 1, and by b, c, d ∈ stG (1), where b, c and d are defined recursively by
ψ(b) = (a, c)
ψ(c) = (a, d)
ψ(d) = (1, b)
Already in his early works in the 1980s, Grigorchuk observed that G is contracting
with nucleus N (G) = {1, a, b, c, d}. Since G is a contracting group, its elements have
well defined portraits, which are finite decorated trees. For instance, the portrait
of the element bacac is provided in Figure 2.
Grigorchuk also showed that G is a regular branch group, branching over the
subgroup K = ⟨[a, b]⟩^G of index |G : K| = 16. An accessible account can be found in Chapter VIII of [dlH00]. A transversal for K in G is given by
T = { 1, d, ada, dada, a, ad, da, dad, b, c, aca, cada, ba, ac, ca, cad }.
and a transversal for ψ −1 (K × K) in K is given by
S = {1, abab, (abab)2, baba}.
Theorem 4.2. The portrait growth sequence {a_n}_{n=0}^∞ of the first Grigorchuk group G is given recursively by
    a_0 = 5,
    a_n = 2x_n + 4y_n + 2z_n + 2X_n + 4Y_n + 2Z_n,  for n ≥ 1,
8
ZORAN ŠUNIĆ AND JONE URIA-ALBIZURI
[Figure: a decorated tree whose interior vertices carry the permutations (), (), (1 2), (1 2) and whose leaves carry the nucleus elements 1, b, c, d, a.]
Figure 2. Portrait of the element bacac
where x_n, y_n, z_n, X_n, Y_n, and Z_n, for n ≥ 1, satisfy the system of recursive relations
    x_{n+1} = x_n^2 + 2y_n^2 + z_n^2,
    y_{n+1} = x_n Y_n + Y_n z_n + X_n y_n + y_n Z_n,
    z_{n+1} = X_n^2 + 2Y_n^2 + Z_n^2,
    X_{n+1} = 2x_n y_n + 2y_n z_n,
    Y_{n+1} = x_n X_n + 2y_n Y_n + z_n Z_n,
    Z_{n+1} = 2X_n Y_n + 2Y_n Z_n,
with initial conditions
    x_1 = y_1 = z_1 = Y_1 = 1,  X_1 = 2,  Z_1 = 0.
Proof. Denote by p_n(t) the number of portraits of depth no greater than n in G that represent elements in the coset tK.
By Lemma 2.1, we have
(4)    p_{n+1}(t) = p_{n+1}(at) = p_{n+1}(t^a) = p_{n+1}(ta),  for n ≥ 0, t ∈ T.
Thus we only need to exhibit recursive formulas for the 6 coset representatives in
the set {1, c, dada, b, d, cada}. We have
PORTRAIT GROWTH IN CONTRACTING, REGULAR BRANCH GROUPS
ψ(1) = (1, 1),       ψ(abab) = (ca, ac),       ψ((abab)^2) = (dada, dada),     ψ(baba) = (ac, ca),
ψ(c) = (a, d),       ψ(cabab) = (aca, cad),    ψ(c(abab)^2) = (dad, ada),      ψ(cbaba) = (c, ba),
ψ(dada) = (b, b),    ψ(dadaabab) = (da, ad),   ψ(dada(abab)^2) = (cada, cada), ψ(dadababa) = (ad, da),
ψ(b) = (a, c),       ψ(babab) = (aca, dad),    ψ(b(abab)^2) = (dad, aca),      ψ(bbaba) = (c, a),
ψ(d) = (1, b),       ψ(dabab) = (ca, ad),      ψ(d(abab)^2) = (dada, cada),    ψ(dbaba) = (ac, da),
ψ(cada) = (ba, d),   ψ(cadaabab) = (ada, cad), ψ(cada(abab)^2) = (cad, ada),   ψ(cadababa) = (d, ba),
where the sections are already written modulo K by using representatives in T (each of the 6 lines shows how the coset of K with representative 1, c, dada, b, d, or cada, respectively, splits into 4 cosets of K × K).
Thus, for n ≥ 0,
(5)
    p_{n+1}(1) = p_n(1)^2 + 2p_n(ac) p_n(ca) + p_n(dada)^2,
    p_{n+1}(c) = p_n(a) p_n(d) + p_n(dad) p_n(ada) + p_n(c) p_n(ba) + p_n(aca) p_n(cad),
    p_{n+1}(dada) = p_n(b)^2 + 2p_n(ad) p_n(da) + p_n(cada)^2,
    p_{n+1}(b) = 2p_n(a) p_n(c) + 2p_n(dad) p_n(aca),
    p_{n+1}(d) = p_n(1) p_n(b) + p_n(ac) p_n(da) + p_n(ca) p_n(ad) + p_n(dada) p_n(cada),
    p_{n+1}(cada) = 2p_n(d) p_n(ba) + 2p_n(ada) p_n(cad),
with initial conditions
p0 (1) = p0 (a) = p0 (b) = p0 (c) = p0 (d) = 1,
p0 (t) = 0, for t ∈ T \ {1, a, b, c, d}.
Direct calculations, based on (5), give
p1 (b) = 2
p1 (cada) = 0
p1 (t) = 1, for t ∈ {1, c, d, dada}.
If we denote, for n ≥ 1,
xn = pn (1) = pn (a),
yn = pn (c) = pn (ac) = pn (aca) = pn (ca),
zn = pn (dada) = pn (dad),
Xn = pn (b) = pn (ba),
Yn = pn (d) = pn (ad) = pn (ada) = pn (da),
Zn = pn (cada) = pn (cad)
we obtain, for n ≥ 1,
an = 2xn + 4yn + 2zn + 2Xn + 4Yn + 2Zn ,
where xn , yn , zn , Xn , Yn , and Zn satisfy the recursive relations and initial conditions as claimed, which follows from (5).
Theorem 4.3. There exists a positive constant γ such that the portrait growth sequence {a_n}_{n=0}^∞ of the first Grigorchuk group G satisfies the inequalities
    (1/4) e^{γ2^n} ≤ a_n ≤ 4 e^{γ2^n},
for all n ≥ 0. Moreover, γ ≈ 0.71.
Proof. Following Lemma 3.2, we first determine positive constants A and B such that for each n ∈ N we have
    A a_n^2 ≤ a_{n+1} ≤ B a_n^2.
By Lemma 3.3 we may take B = |G : st_G(1)| = 2.
For the other inequality, we need a constant A such that
    a_{n+1} − A a_n^2 ≥ 0.
Using Theorem 4.2 we may express, for n ≥ 1, both a_{n+1} and a_n in terms of x_n, y_n, z_n, X_n, Y_n, Z_n and if we set A = 1/4, we obtain
    a_{n+1} − A a_n^2 = (x_n − z_n + X_n − Z_n)^2 ≥ 0.
Since M = (1/(d − 1)) max{|ln A|, |ln B|} = (1/(2 − 1)) max{|ln 1/4|, |ln 2|} = ln 4, we obtain α = e^{−M} = 1/4 and β = e^{M} = 4. Finally, the approximation γ ≈ 0.71 can be calculated by using the recursion given by Theorem 4.2 and Lemma 3.2.
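The recursion of Theorem 4.2 and the error bound of Lemma 3.2 can be checked numerically. The following Python sketch (ours, not from the paper) iterates the system with exact integers and estimates γ as γ_n = ln(a_n)/2^n; by Lemma 3.2 the error of γ_12 is at most (ln 4)/2^12:

```python
import math

def grigorchuk_portrait_growth(n_max):
    """Iterate the recursion of Theorem 4.2 and return [a_0, ..., a_{n_max}]."""
    a = [5]  # a_0 = size of the nucleus {1, a, b, c, d}
    x, y, z, X, Y, Z = 1, 1, 1, 2, 1, 0  # initial conditions for n = 1
    for n in range(1, n_max + 1):
        a.append(2*x + 4*y + 2*z + 2*X + 4*Y + 2*Z)  # a_n
        x, y, z, X, Y, Z = (
            x*x + 2*y*y + z*z,        # x_{n+1}
            x*Y + Y*z + X*y + y*Z,    # y_{n+1}
            X*X + 2*Y*Y + Z*Z,        # z_{n+1}
            2*x*y + 2*y*z,            # X_{n+1}
            x*X + 2*y*Y + z*Z,        # Y_{n+1}
            2*X*Y + 2*Y*Z,            # Z_{n+1}
        )
    return a

a = grigorchuk_portrait_growth(12)
gamma = math.log(a[12]) / 2**12  # gamma_n = ln(a_n) / 2^n
print(a[:3])   # [5, 16, 68]
print(gamma)   # close to 0.71, as in Theorem 4.3
```

Note that math.log accepts Python's arbitrary-precision integers, so a[12] (a number with over a thousand digits) poses no problem.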
5. Portrait growth in non-symmetric GGS-groups
The GGS-groups (named after Grigorchuk, Gupta, and Sidki) form a family of
groups generalizing the Gupta-Sidki examples [GS83] (which were in turn inspired
by the first Grigorchuk group [Gri80]).
Definition 5.1. For a prime p, p ≥ 3, and a vector e = (e_1, . . . , e_{p−1}) ∈ F_p^{p−1}, the GGS-group defined by e is the group of p-ary automorphisms generated by the rooted automorphism a, which permutes the subtrees on level 1 according to the permutation (1 . . . p), and the automorphism b ∈ st(1) defined recursively by
    b = (b, a^{e_1}, . . . , a^{e_{p−1}}).
Set S = S^{−1} = {a, a^2, . . . , a^{p−1}, b, . . . , b^{p−1}}. It is easy to see that G is contracting with nucleus N(G) = S ∪ {1}.
Let
    C(0, e) =
    ⎛ 0        e_1   e_2   · · ·  e_{p−1} ⎞
    ⎜ e_{p−1}  0     e_1   · · ·  e_{p−2} ⎟
    ⎜ ⋮        ⋮     ⋮     ⋱      ⋮       ⎟
    ⎝ e_1      e_2   · · ·  e_{p−1}  0    ⎠
be the circulant matrix of the vector (0, e1 , . . . , ep−1 ). We say that the vector
e = (e1 , . . . , ep−1 ) is symmetric if ei = ep−i , for i = 1, . . . , p − 1 (that is, the vector
is symmetric precisely when the corresponding circulant matrix is symmetric).
Theorem 5.2. Let G be a GGS-group defined by a non-symmetric vector e ∈ F_p^{p−1}. The portrait growth sequence {a_n}_{n=0}^∞ of G is given by
    a_0 = 1 + 2(p − 1),
    a_n = p (x_1 + (p − 1) y_1)^{p^{n−1}},
PORTRAIT GROWTH IN CONTRACTING, REGULAR BRANCH GROUPS
11
where x_1 and y_1 are the numbers of solutions in F_p^p of
    ((n_0, . . . , n_{p−1}) C(0, e)) ⊙ (n_0, . . . , n_{p−1}) = (0, . . . , 0),
with n_0 + · · · + n_{p−1} = 0 and n_0 + · · · + n_{p−1} = 1, respectively, where ⊙ denotes the coordinate-wise product.
Proof. Fernández-Alcober and Zugadi-Reizabal [FAZR14] showed that a GGS-group defined by a non-symmetric vector is regular branch over G′, whose index in G is p^2. A left transversal for G′ in G is given by
T = {a^i b^j | i, j = 0, . . . , p − 1}.
For each pair (i, j) ∈ {0, . . . , p − 1}^2 denote by p_n(i, j) the number of portraits of depth no greater than n in the coset a^i b^j G′.
We have
    a^i b^j ≡ a^i b^{n_0} (b^a)^{n_1} (b^{a^2})^{n_2} · · · (b^{a^{p−1}})^{n_{p−1}}   (mod G′),
where j = n_0 + · · · + n_{p−1} in F_p. And then,
    a^i b^j ≡ a^i (a^{i_0} b^{n_0}, . . . , a^{i_{p−1}} b^{n_{p−1}})   (mod G′ × · · · × G′),
where (i_0, . . . , i_{p−1}) = (n_0, . . . , n_{p−1}) C(0, e). We obtain that
    p_{n+1}(i, j) = Σ_{n_0+···+n_{p−1}=j} Π_{r=0}^{p−1} p_n(i_r, n_r).
Observe that the decomposition of p_{n+1}(i, j) does not depend on i, so we can set p_{n+1}(i, j) = p_{n+1}(j) and we have
(6)    p_{n+1}(j) = Σ_{n_0+···+n_{p−1}=j} Π_{r=0}^{p−1} p_n(i_r, n_r),
and then, for n ≥ 1, we have a_n = p Σ_{j=0}^{p−1} p_n(j), where we multiply by p because we have to sum over each i = 0, . . . , p − 1.
Since the nucleus is S ∪ {1}, the initial conditions are given by
    p_0(0, 0) = p_0(i, 0) = p_0(0, j) = 1 for i, j ∈ {1, . . . , p − 1},
    p_0(i, j) = 0 otherwise.
In other words, p_0(i, j) = 1 if ij = 0 and p_0(i, j) = 0 otherwise. By (6), p_1(0) is the number of solutions in F_p^p of
(7)    (n_0, . . . , n_{p−1}) C(0, e) ⊙ (n_0, n_1, . . . , n_{p−1}) = (i_0 n_0, . . . , i_{p−1} n_{p−1}) = (0, . . . , 0),
with n_0 + · · · + n_{p−1} = 0, and p_1(j) is the number of solutions of the same equation, but with n_0 + · · · + n_{p−1} = j.
We prove by induction that p_n(1) = p_n(j), for n ≥ 1 and j ≠ 0. Observe that, for n = 1, if (n_0, . . . , n_{p−1}) is a solution of (7), with n_0 + · · · + n_{p−1} = 1, then (jn_0, . . . , jn_{p−1}) is also a solution, but with n_0 + · · · + n_{p−1} = j. Similarly, if we multiply a solution that sums up to j by the multiplicative inverse j^{−1} of j in F_p, we obtain a solution that sums up to 1. Thus, there is a bijection between the solutions and hence p_1(1) = p_1(j), for j ≠ 0. Let us now assume that p_n(1) = p_n(j), for n ≥ 1 and j ≠ 0, and let us prove the equality for n + 1. By (6) and the assumption that n ≥ 1, we have that p_n(i, j) = p_n(j) and
    p_{n+1}(j) = Σ_{n_0+···+n_{p−1}=j} Π_{r=0}^{p−1} p_n(n_r).
By the inductive hypothesis we have p_n(n_r) = p_n(j^{−1} n_r) (this is true regardless of whether n_r is 0 or not). Thus,
    p_{n+1}(j) = Σ_{n_0+···+n_{p−1}=j} Π_{r=0}^{p−1} p_n(n_r) = Σ_{n_0+···+n_{p−1}=j} Π_{r=0}^{p−1} p_n(j^{−1} n_r)
               = Σ_{j^{−1}n_0+···+j^{−1}n_{p−1}=1} Π_{r=0}^{p−1} p_n(j^{−1} n_r) = Σ_{n′_0+···+n′_{p−1}=1} Π_{r=0}^{p−1} p_n(n′_r) = p_{n+1}(1).
We now resolve the recursion (6). Denote x_n = p_n(0) and y_n = p_n(1), for n ≥ 1, so that a_n = p(x_n + (p − 1)y_n). The fact that p_n(n_i) = y_n whenever n_i ≠ 0, together with (6), implies that
    x_{n+1} = Σ_{n_0+···+n_{p−1}=0} Π_{n_i=0} x_n Π_{n_i≠0} y_n,
    y_{n+1} = Σ_{n_0+···+n_{p−1}=1} Π_{n_i=0} x_n Π_{n_i≠0} y_n.
Thus, by making all possible choices of ℓ coordinates, ℓ = 0, . . . , p, in (n_0, . . . , n_{p−1}) that are different from 0, we obtain
(8)    x_{n+1} = Σ_{ℓ=0}^{p} binom(p, ℓ) x_n^{p−ℓ} y_n^{ℓ} z_ℓ,
(9)    y_{n+1} = Σ_{ℓ=0}^{p} binom(p, ℓ) x_n^{p−ℓ} y_n^{ℓ} z′_ℓ,
where z_ℓ is the number of solutions of n′_1 + · · · + n′_ℓ = 0 such that none of n′_1, . . . , n′_ℓ is 0 and z′_ℓ the number of solutions of n′_1 + · · · + n′_ℓ = 1 such that none of n′_1, . . . , n′_ℓ is 0.
For z_ℓ and z′_ℓ, ℓ ≥ 1, we have the relations
    z_{ℓ+1} = (p − 1) z′_ℓ,
    z′_{ℓ+1} = z_ℓ + (p − 2) z′_ℓ,
with initial conditions z_1 = 0 and z′_1 = 1. The solution to this system is
    z_ℓ = (1/p)((p − 1)^ℓ − (−1)^{ℓ−1}(p − 1)),
    z′_ℓ = (1/p)((p − 1)^ℓ − (−1)^ℓ),
which, by (8) and (9), gives
    x_{n+1} = (1/p)(x_n + (p − 1)y_n)^p + ((p − 1)/p)(x_n − y_n)^p,
    y_{n+1} = (1/p)(x_n + (p − 1)y_n)^p − (1/p)(x_n − y_n)^p.
Finally, we obtain
    x_{n+1} + (p − 1) y_{n+1} = (x_n + (p − 1) y_n)^p,
and we conclude that
    a_n = p (x_1 + (p − 1) y_1)^{p^{n−1}}.
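The closed forms for z_ℓ and z′_ℓ used in the proof can be confirmed by brute-force counting over F_p. The following Python check (ours, not part of the paper) does this for the illustrative choice p = 5:

```python
from itertools import product

p = 5  # any odd prime works for this check; p = 5 is an arbitrary choice

def count_nonzero_tuples(ell, target):
    # number of tuples in (F_p \ {0})^ell whose entries sum to `target` mod p
    return sum(1 for t in product(range(1, p), repeat=ell)
               if sum(t) % p == target)

for ell in range(1, 6):
    z = ((p - 1)**ell - (-1)**(ell - 1) * (p - 1)) // p
    z_prime = ((p - 1)**ell - (-1)**ell) // p
    assert count_nonzero_tuples(ell, 0) == z
    assert count_nonzero_tuples(ell, 1) == z_prime
print("closed forms for z_l and z'_l verified for p =", p)
```

Both numerators are divisible by p since p − 1 ≡ −1 (mod p), so the integer divisions above are exact.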
6. Portrait growth in the Apollonian group
The Apollonian group is a subgroup of the Hanoi Towers group. The Hanoi
Towers group was introduced by Grigorchuk and the first author [GŠ06] and the
Apollonian group was introduced later by Grigorchuk, Nekrashevych and the first
author [GNŠ06].
Definition 6.1. The Apollonian group A acting on the ternary tree is the group
generated by the automorphisms
x = (1, y, 1)(1 2),
y = (x, 1, 1)(1 3),
z = (1, 1, z)(2 3).
Set S = {x, y, z, x^{−1}, y^{−1}, z^{−1}}. It is easy to see that A is contracting with nucleus N(A) = S ∪ {1}.
Theorem 6.2. The portrait growth sequence {a_n}_{n=0}^∞ of the Apollonian group is given, for n ≥ 0, by
    a_n = 3^{(3^n − 1)/2} · 7^{3^n} = (1/√3) (√3 · 7)^{3^n}.
Proof. Denote by E the subgroup of index 2 in A consisting of the elements in A
that are represented by words of even length over the alphabet {x± , y ± , z ± }. A left
transversal for E in A is given by T = {1, x}. It is known [GŠ07] that the Hanoi
Towers group H is a regular branch group over its commutator H ′ , which is of index
8 in H, and that the index of H ′ × H ′ × H ′ in H ′ is 12. The Apollonian group A
has index 4 in H and contains the commutator H ′ . Moreover, E = H ′ , implying
that A is a regular branch group, branching over E. The index of E × E × E in E
is 12, and a transversal is given by
T′ = {1, yx, (yx)^2, x^2, y^2, z^2, x^2 yx, y^3 x, z^2 yx, x^2 (yx)^2, y^2 (yx)^2, z^2 (yx)^2}.
(1, 1, 1) = 1 ≡ 1,                        (1, y, 1)(1 2) = x ≡ x,
(x, y, 1)(1 3 2) = yx ≡ 1,                (y, yx, 1)(2 3) = xyx ≡ x,
(x, yx, y)(1 2 3) = (yx)^2 ≡ 1,           (yx, yx, y)(1 3) = x(yx)^2 ≡ x,
(y, y, 1) = x^2 ≡ 1,                      (y, y^2, 1)(1 2) = x^3 ≡ x,
(x, 1, x) = y^2 ≡ 1,                      (1, yx, x)(1 2) = xy^2 ≡ x,
(1, z, z) = z^2 ≡ 1,                      (z, y, z)(1 2) = xz^2 ≡ x,
(yx, y^2, 1)(1 3 2) = x^2 yx ≡ 1,         (y^2, y^2 x, 1)(2 3) = x^3 yx ≡ x,
(x^2, y, x)(1 3 2) = y^3 x ≡ 1,           (y, yx^2, x)(2 3) = xy^3 x ≡ x,
(x, zy, z)(1 3 2) = z^2 yx ≡ 1,           (zy, yx, z)(2 3) = xz^2 yx ≡ x,
(yx, y^2 x, y)(1 2 3) = x^2 (yx)^2 ≡ 1,   (y^2 x, y^2 x, y)(1 3) = x^3 (yx)^2 ≡ x,
(x^2, yx, xy)(1 2 3) = y^2 (yx)^2 ≡ 1,    (yx, yx^2, xy)(1 3) = xy^2 (yx)^2 ≡ x,
(x, zyx, zy)(1 2 3) = z^2 (yx)^2 ≡ 1,     (zyx, yx, zy)(1 3) = xz^2 (yx)^2 ≡ x,
Table 1. The cosets of E × E × E decomposing the cosets of E
Denote by X_n and Y_n the number of portraits of depth at most n in the cosets 1E and xE respectively. The coset decomposition provided in Table 1 implies that
    X_{n+1} = 3X_n^3 + 9X_n Y_n^2,
    Y_{n+1} = 3Y_n^3 + 9X_n^2 Y_n,
which then yields
    a_{n+1} = X_{n+1} + Y_{n+1} = 3(X_n + Y_n)^3 = 3a_n^3.
Taking into account the initial condition a_0 = 7, we obtain a_n = 3^{(3^n − 1)/2} · 7^{3^n} by induction.
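The induction is easy to check mechanically. The following snippet (ours) iterates the recursion a_{n+1} = 3a_n^3 with a_0 = 7 and compares it with the closed form for the first few terms:

```python
# iterate a_{n+1} = 3 * a_n^3 with a_0 = 7 and compare with the closed form
a, seq = 7, [7]
for _ in range(5):
    a = 3 * a**3
    seq.append(a)

closed = [3**((3**n - 1) // 2) * 7**(3**n) for n in range(6)]
assert seq == closed
print(seq[:3])  # [7, 1029, 3268642167]
```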
References
[Bar98] Laurent Bartholdi. The growth of Grigorchuk's torsion group. Internat. Math. Res. Notices, (20):1049–1054, 1998.
[dlH00] Pierre de la Harpe. Topics in geometric group theory. Chicago Lectures in Mathematics. University of Chicago Press, Chicago, IL, 2000.
[FAZR14] Gustavo A. Fernández-Alcober and Amaia Zugadi-Reizabal. GGS-groups: order of congruence quotients and Hausdorff dimension. Trans. Amer. Math. Soc., 366(4):1993–2017, 2014.
[GNŠ06] Rostislav Grigorchuk, Volodymyr Nekrashevych, and Zoran Šunić. Hanoi towers groups. In Topological and Geometric Methods in Group Theory, volume 3, issue 2 of Oberwolfach Reports. 2006.
[Gri80] R. I. Grigorčuk. On Burnside's problem on periodic groups. Funktsional. Anal. i Prilozhen., 14(1):53–54, 1980.
[Gri83] R. I. Grigorchuk. On the Milnor problem of group growth. Dokl. Akad. Nauk SSSR, 271(1):30–33, 1983.
[Gri84] R. I. Grigorchuk. Degrees of growth of finitely generated groups and the theory of invariant means. Izv. Akad. Nauk SSSR Ser. Mat., 48(5):939–985, 1984.
[Gri05] Rostislav Grigorchuk. Solved and unsolved problems around one group. In Infinite groups: geometric, combinatorial and dynamical aspects, volume 248 of Progr. Math., pages 117–218. Birkhäuser, Basel, 2005.
[Gro81] Mikhael Gromov. Groups of polynomial growth and expanding maps. Inst. Hautes Études Sci. Publ. Math., (53):53–73, 1981.
[GS83] Narain Gupta and Saïd Sidki. On the Burnside problem for periodic groups. Math. Z., 182(3):385–388, 1983.
[GŠ06] Rostislav Grigorchuk and Zoran Šunić. Asymptotic aspects of Schreier graphs and Hanoi Towers groups. C. R. Math. Acad. Sci. Paris, 342(8):545–550, 2006.
[GŠ07] Rostislav Grigorchuk and Zoran Šunić. Self-similarity and branching in group theory. In Groups St. Andrews 2005. Vol. 1, volume 339 of London Math. Soc. Lecture Note Ser., pages 36–95. Cambridge Univ. Press, Cambridge, 2007.
[Mil68a] J. Milnor. A note on curvature and fundamental group. J. Differential Geometry, 2:1–7, 1968.
[Mil68b] John Milnor. Growth of finitely generated solvable groups. J. Differential Geometry, 2:447–449, 1968.
[Mil68c] John Milnor. Problem 5603. Amer. Math. Monthly, 75:685–686, 1968.
[Nek05] Volodymyr Nekrashevych. Self-similar groups, volume 117 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 2005.
[Sid87] Said Sidki. On a 2-generated infinite 3-group: subgroups and automorphisms. J. Algebra, 110(1):24–55, 1987.
[Šva55] A. S. Švarc. A volume invariant of coverings. Dokl. Akad. Nauk SSSR (N.S.), 105:32–34, 1955.
Department of Mathematics, Hofstra University, Hempstead, NY 11549, USA
E-mail address: [email protected]
Department of Mathematics, University of the Basque Country, UPV/EHU, Leioa,
Bizkaia, Spain.
E-mail address: [email protected]
Knowledge Base Completion: Baselines Strike Back
Rudolf Kadlec and Ondrej Bajgar and Jan Kleindienst
IBM Watson
V Parku 4, 140 00 Prague, Czech Republic
{rudolf kadlec, obajgar, jankle}@cz.ibm.com
arXiv:1705.10744v1 [cs.LG] 30 May 2017
Abstract
Many papers have been published on the
knowledge base completion task in the
past few years. Most of these introduce
novel architectures for relation learning
that are evaluated on standard datasets
such as FB15k and WN18. This paper
shows that the accuracy of almost all models published on the FB15k can be outperformed by an appropriately tuned baseline — our reimplementation of the DistMult model. Our findings cast doubt on
the claim that the performance improvements of recent models are due to architectural changes as opposed to hyperparameter tuning or different training objectives. This should prompt future research to re-consider how the performance
of models is evaluated and reported.
1   Introduction
Projects such as Wikidata¹ or earlier Freebase (Bollacker et al., 2008) have successfully accumulated a formidable amount of knowledge in the form of ⟨entity1 - relation - entity2⟩ triplets.
Given this vast body of knowledge, it would be
extremely useful to teach machines to reason over
such knowledge bases. One possible way to
test such reasoning is knowledge base completion
(KBC).
The goal of the KBC task is to fill in
the missing piece of information into an incomplete triple. For instance, given a query
⟨Donald Trump, president of, ?⟩ one should predict that the target entity is USA.
More formally, given a set of entities E and a set of binary relations R over these entities, a knowledge base (sometimes also referred to as a knowledge graph) can be specified by a set of triplets ⟨h, r, t⟩ where h, t ∈ E are head and tail entities respectively and r ∈ R is a relation between them.
¹ https://www.wikidata.org/
In entity KBC the task is to predict either the tail entity given a query ⟨h, r, ?⟩, or to predict the head entity given ⟨?, r, t⟩.
Not only can this task be useful to test the
generic ability of a system to reason over a knowledge base, but it can also find use in expanding
existing incomplete knowledge bases by deducing
new entries from existing ones.
An extensive amount of work has been published on this task (for a review see (Nickel et al.,
2015; Nguyen, 2017), for a plain list of citations
see Table 2). Among those DistMult (Yang et al.,
2015) is one of the simplest.2 Still this paper
shows that even a simple model with proper hyperparameters and training objective evaluated using
the standard metric of Hits@10 can outperform 27
out of 29 models which were evaluated on two
standard KBC datasets, WN18 and FB15k (Bordes et al., 2013).
This suggests that there may be a huge space for
improvement in hyper-parameter tuning even for
the more complex models, which may be in many
ways better suited for relational learning, e.g. can
capture directed relations.
2   The Model
Inspired by the success of word embeddings in
natural language processing, distributional models
for KBC have recently been extensively studied.
Distributional models represent the entities and sometimes even the relations as N-dimensional real vectors³; we will denote these vectors by bold font, h, r, t ∈ R^N.
² We could even say too simple given that it assumes symmetry of all relations, which is clearly unrealistic.
³ Some models represent relations as matrices instead.
The DistMult model was introduced by Yang
et al. (2015). Subsequently Toutanova and Chen
(2015) achieved better empirical results with the
same model by changing hyper-parameters of the
training procedure and by using negative-log likelihood of softmax instead of L1-based max-margin
ranking loss. Trouillon et al. (2016) obtained even
better empirical result on the FB15k dataset just
by changing DistMult’s hyper-parameters.
DistMult model computes a score for each triplet ⟨h, r, t⟩ as
    s(h, r, t) = h^T · W_r · t = Σ_{i=1}^{N} h_i r_i t_i,
where Wr is a diagonal matrix with elements of
vector r on its diagonal. Therefore the model can
be alternatively rewritten as shown in the second
equality.
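As an illustration, the score above is a sum of element-wise triple products, which also makes the model's symmetry in h and t (footnote 2) easy to see. A minimal numpy sketch with toy vectors (illustrative values, not trained embeddings):

```python
import numpy as np

def distmult_score(h, r, t):
    """s(h, r, t) = h^T diag(r) t, i.e. a sum of element-wise triple products."""
    return float(np.sum(h * r * t))

# toy N = 3 vectors (illustrative, not trained embeddings)
h = np.array([1.0, 0.5, -1.0])
r = np.array([2.0, 0.0, 1.0])
t = np.array([1.0, 3.0, 2.0])

score = distmult_score(h, r, t)  # 1*2*1 + 0.5*0*3 + (-1)*1*2 = 0.0
sym = distmult_score(t, r, h)    # equal to `score`: the model is symmetric in h and t
```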
In the end our implementation normalizes the scores by a softmax function, that is,

P(t | h, r) = exp(s(h, r, t)) / Σ_{t̄ ∈ E_{h,r}} exp(s(h, r, t̄))

where E_{h,r} is the set of candidate answer entities for the ⟨h, r, ?⟩ query.
3 Experiments
Datasets. In our experiments we use two standard datasets: WN18, derived from WordNet (Fellbaum, 1998), and FB15k, derived from the Freebase knowledge graph (Bollacker et al., 2008).
Method. For evaluation, we use the filtered evaluation protocol proposed by Bordes et al. (2013). During training and validation we transform each triplet ⟨h, r, t⟩ into two examples: a tail query ⟨h, r, ?⟩ and a head query ⟨?, r, t⟩. We train the model by minimizing the negative log-likelihood (NLL) of the ground-truth triplet ⟨h, r, t⟩ against a randomly sampled pool of M negative triplets ⟨h, r, t′⟩, t′ ∈ E \ {t} (this applies to tail queries; head queries are handled analogously).
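A rough sketch of this training objective for a single tail query, with toy random embeddings standing in for a trained model (all names, sizes, and values here are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
num_entities, dim = 10, 4                   # toy sizes, not the paper's settings
ent = rng.normal(size=(num_entities, dim))  # toy entity embeddings
rel = rng.normal(size=dim)                  # one toy relation embedding

def score(h_idx, t_idx):
    """DistMult score of ⟨h, r, t⟩ for the fixed toy relation."""
    return float(np.sum(ent[h_idx] * rel * ent[t_idx]))

def nll_tail_query(h_idx, t_idx, M=5):
    """NLL of the true tail t against M sampled negatives t' != t.

    Head queries ⟨?, r, t⟩ would be handled analogously by corrupting h.
    """
    negs = rng.choice([e for e in range(num_entities) if e != t_idx],
                      size=M, replace=False)
    pool = np.concatenate(([t_idx], negs))    # ground truth at index 0
    s = np.array([score(h_idx, t) for t in pool])
    s -= s.max()                              # numerical stability
    return -(s[0] - np.log(np.exp(s).sum()))  # -log softmax(pool)[0]

loss = nll_tail_query(h_idx=0, t_idx=3)       # non-negative scalar
```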
In the filtered protocol we rank the validation or test triplet against all corrupted (supposedly untrue) triplets, i.e. those that do not appear in the train, valid or test datasets (excluding the test triplet in question itself). Formally, for a query ⟨h, r, ?⟩ whose correct answer is t, we compute the rank of ⟨h, r, t⟩ in the candidate set C_{h,r} = {⟨h, r, t′⟩ : t′ ∈ E} \ (Train ∪ Valid ∪ Test) ∪ {⟨h, r, t⟩}, where Train, Valid and Test are the sets of true triplets. Head queries ⟨?, r, t⟩ are handled analogously. Note that softmax normalization is suitable under the filtered protocol since exactly one correct triplet is guaranteed to be among the candidates.
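The filtered ranking described above can be sketched as follows; `score_fn`, the toy triplets, and the toy scorer are placeholders for illustration, not our actual implementation:

```python
def filtered_rank(h, r, t_true, entities, known_true, score_fn):
    """Rank of ⟨h, r, t_true⟩ among candidates, filtering out other known true triplets.

    known_true should contain all triplets from train, valid and test;
    the test triplet in question itself is always kept.
    """
    candidates = [t for t in entities
                  if t == t_true or (h, r, t) not in known_true]
    target = score_fn(h, r, t_true)
    # rank = 1 + number of candidates scoring strictly higher than the target
    return 1 + sum(1 for t in candidates if score_fn(h, r, t) > target)

# toy usage: entity 2 is another known true answer, so it is filtered out
known_true = {(0, "r", 1), (0, "r", 2)}
rank = filtered_rank(0, "r", 1, [0, 1, 2, 3], known_true,
                     lambda h, r, t: float(t))  # toy scorer: higher id = higher score
# rank == 2: among the kept candidates {0, 1, 3}, only entity 3 outscores the answer
```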
In our preliminary experiments on FB15k, we varied the batch size b, the embedding dimensionality N, the number of negative samples in training M, the L2 regularization parameter, and the learning rate lr. Based on these experiments we fixed lr = 0.001 and L2 = 0.0, and decided to focus on the influence of batch size, embedding dimension, and number of negative samples. For the final experiments we trained several models from the hyper-parameter ranges N ∈ {128, 256, 512, 1024}, b ∈ {16, 32, 64, 128, 256, 512, 1024, 2048} and M ∈ {20, 50, 200, 500, 1000, 2000}.
We train the final models using the Adam optimizer (Kingma and Ba, 2015) with lr = 0.001, β1 = 0.9, β2 = 0.999, ε = 10^−8 and decay = 0.0. We also performed limited experiments with Adagrad, Adadelta and plain SGD. Adagrad usually required substantially more iterations than Adam to achieve the same performance. We failed to obtain competitive performance with Adadelta and plain SGD. On the FB15k and WN18 validation datasets the best hyper-parameter combinations were N = 512, b = 2048, M = 2000 and N = 256, b = 1024, M = 1000, respectively. Note that we tried substantially more hyper-parameter combinations on FB15k than on WN18. Unlike most previous works, we normalize neither entity nor relation embeddings.
To prevent over-fitting, we stop training once Hits@10 stops improving on the validation set. On
the FB15k dataset our Keras (Chollet, 2015) based
implementation with TensorFlow (Abadi et al.,
2015) backend needed about 4 hours to converge
when run on a single GeForce GTX 1080 GPU.
Results. Besides single models, we also evaluated the performance of a simple ensemble that averages the predictions of multiple models. This technique consistently improves the performance of machine learning models in many domains, and it slightly improved the results in this case as well.
The results of our experiments together with
previous results from the literature are shown in
Table 2. DistMult with proper hyper-parameters twice achieves the second best score and once the third best score in three out of four commonly reported benchmarks (mean rank (MR) and Hits@10 on WN18 and FB15k). On FB15k only the IRN model (Shen et al., 2016) shows better Hits@10 and ProjE (Shi and Weninger, 2017) has a better MR.

Figure 1: Influence of batch size on Hits@10 and Hits@1 metrics for a single model with N = 512 and M = 2000.

Method             | WN18 | FB15k
HolE †             | 93.0 | 40.2
DistMult ‡         | 72.8 | 54.6
ComplEx ‡          | 93.6 | 59.9
R-GCN+ ]           | 67.9 | 60.1
DistMult ensemble  | 78.4 | 79.7

Table 1: Accuracy (Hits@1) results sorted by performance on FB15k. Results marked by †, ‡ and ] are from (Nickel et al., 2016), (Trouillon et al., 2017) and (Schlichtkrull et al., 2017), respectively. Our implementation is listed in the last row.
Our implementation has the best reported mean
reciprocal rank (MRR) on FB15k, however this
metric is not reported that often. MRR is a metric
of ranking quality that is less sensitive to outliers
than MR.
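For reference, the three ranking metrics discussed here can be computed from a list of (filtered) ranks as follows; the example ranks are invented to show MR's sensitivity to outliers:

```python
def mean_rank(ranks):
    """MR: sensitive to outliers (a single huge rank dominates the mean)."""
    return sum(ranks) / len(ranks)

def mean_reciprocal_rank(ranks):
    """MRR: bounded in (0, 1], barely affected by large ranks."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at(k, ranks):
    """Fraction of queries whose correct answer is ranked in the top k."""
    return sum(r <= k for r in ranks) / len(ranks)

ranks = [1, 2, 10, 1000]           # toy filtered ranks with one outlier
mr = mean_rank(ranks)              # 253.25, dominated by the outlier
mrr = mean_reciprocal_rank(ranks)  # 0.40025, nearly unchanged by it
h10 = hits_at(10, ranks)           # 0.75
```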
On the WN18 dataset, again, the IRN model together with R-GCN+ shows better Hits@10. However, in MR and MRR DistMult performs poorly. Even though DistMult achieves competitive results in Hits@10 despite its inability to model asymmetric relations, the other metrics clearly show its limitations. These results highlight qualitative differences between the FB15k and WN18 datasets.
Interestingly, on FB15k recently published models (including our baseline) that use only r and h or t as their input outperform models that utilize richer features such as text or knowledge-base path information. This suggests a possible gap for future improvement.
Table 1 shows the accuracy (Hits@1) of several models that reported this metric. On WN18 our implementation performs worse than the HolE and ComplEx models (which are equivalent, as shown by Hayashi and Shimbo (2017)). On FB15k our implementation outperforms all other models.
3.1 Hyper-parameter influence on FB15k
In our experiments on FB15k we found that increasing the number of negative examples M had
a positive effect on performance.
Another interesting observation is that batch
size has a strong influence on final performance.
A larger batch size always led to better results; for instance, Hits@10 improved by 14.2% absolute when the batch size was increased from 16 to 2048. See Figure 1 for details.
Compared to previous works that trained DistMult on these datasets (for results see the bottom of Table 2), we use a different training objective from Yang et al. (2015) and Trouillon et al. (2017), who optimized a max-margin objective and the NLL of the softplus activation function (softplus(x) = ln(1 + e^x)), respectively. Similarly to Toutanova and Chen (2015) we use the NLL of the softmax function, however we use the Adam optimizer instead of RProp (Riedmiller and Braun, 1993).
4 Conclusion
Simple conclusions from our work are: 1) Increasing the batch size dramatically improves the performance of DistMult, which raises the question whether other models would also significantly benefit from similar hyper-parameter tuning or different training objectives; 2) In the future it might be better to focus more on metrics less frequently used in this domain, like Hits@1 (accuracy) and MRR, since, for instance, on WN18 many models achieve similar, very high Hits@10, yet even models that are competitive in Hits@10 underperform in Hits@1, which is the case for our DistMult implementation.
A lot of research focus has recently been centred on the filtered scenario, which is why we decided to use it in this study. An advantage is that it is easy to evaluate. However, the scenario trains the model to expect that there is only a single correct answer among the candidates, which is unrealistic in the context of knowledge bases. Hence
[Table 2 appears here. Its numeric columns (MR, H10 and MRR on WN18 and FB15k, under the filtered protocol) were scrambled during text extraction and are omitted. Its rows, in order, are: SE (Bordes et al., 2011); Unstructured (Bordes et al., 2014); TransE (Bordes et al., 2013); TransH (Wang et al., 2014); TransR (Lin et al., 2015b); CTransR (Lin et al., 2015b); KG2E (He et al., 2015); TransD (Ji et al., 2015); lppTransD (Yoon et al., 2016); TranSparse (Ji et al., 2016); TATEC (Garcia-Duran et al., 2016); NTN (Socher et al., 2013); HolE (Nickel et al., 2016); STransE (Nguyen et al., 2016); ComplEx (Trouillon et al., 2017); ProjE wlistwise (Shi and Weninger, 2017); IRN (Shen et al., 2016); RTransE (García-Durán et al., 2015); PTransE (Lin et al., 2015a); GAKE (Feng et al., 2015); Gaifman (Niepert, 2016); Hiri (Liu et al., 2016); R-GCN+ (Schlichtkrull et al., 2017); NLFeat (Toutanova and Chen, 2015); TEKE_H (Wang and Li, 2016); SSP (Xiao et al., 2017); DistMult (orig) (Yang et al., 2015); DistMult (Toutanova and Chen, 2015); DistMult (Trouillon et al., 2017); Single DistMult (this work); Ensemble DistMult (this work).]

Table 2: Entity prediction results. MR, H10 and MRR denote the evaluation metrics of mean rank, Hits@10 (in %) and mean reciprocal rank, respectively. The three best results for each metric are in bold; additionally, the best result is underlined. The first group (above the first double line) lists models that were trained only on the knowledge base and do not use any additional input besides the source entity and the relation. The second group shows models that use path information, e.g. they consider paths between source and target entities as additional features. The models from the third group were trained with additional textual data. In the last group we list various implementations of the DistMult model, including our implementation on the last two lines. Since DistMult does not use any additional features, these results should be compared to the models from the first group. "NLFeat" abbreviates the Node+LinkFeat model from (Toutanova and Chen, 2015). The results for NTN (Socher et al., 2013) listed in this table are taken from Yang et al. (2015). This table was adapted from (Nguyen, 2017).
future research could focus more on the raw scenario, which, however, requires using other information retrieval metrics such as mean average precision (MAP), previously used in KBC for instance by Das et al. (2017).
We see this preliminary work as a small contribution to the ongoing discussion in the machine learning community about the current strong focus on state-of-the-art empirical results, when it might sometimes be questionable whether they were achieved thanks to a better model or algorithm, or just through a more extensive hyper-parameter search.
For broader discussion see (Church, 2017).
In light of these results we think that the field
would benefit from a large-scale empirical comparative study of different KBC algorithms, similar to a recent study of word embedding models (Levy et al., 2015).
References
Martin Abadi, Ashish Agarwal, Paul Barham, Eugene
Brevdo, Zhifeng Chen, Craig Citro, Greg Corrado,
Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay
Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey
Irving, Michael Isard, Yangqing Jia, Lukasz Kaiser,
Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat
Monga, Sherry Moore, Derek Murray, Jon Shlens,
Benoit Steiner, Ilya Sutskever, Paul Tucker, Vincent
Vanhoucke, Vijay Vasudevan, Oriol Vinyals, Pete
Warden, Martin Wicke, Yuan Yu, and Xiaoqiang
Zheng. 2015. TensorFlow: Large-scale machine learning on heterogeneous distributed systems.
Kurt Bollacker, Colin Evans, Praveen Paritosh,
Tim Sturge, and Jamie Taylor. 2008.
Freebase: A collaboratively created graph database
for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International
Conference on Management of Data. ACM, New
York, NY, USA, SIGMOD ’08, pages 1247–1250.
https://doi.org/10.1145/1376616.1376746.
Antoine Bordes, Xavier Glorot, Jason Weston, and
Yoshua Bengio. 2014. A semantic matching energy
function for learning with multi-relational data. Machine Learning 94(2):233–259.
Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko.
2013. Translating embeddings for modeling multirelational data. In C. J. C. Burges, L. Bottou,
M. Welling, Z. Ghahramani, and K. Q. Weinberger,
editors, Advances in Neural Information Processing
Systems 26, Curran Associates, Inc., pages 2787–
2795. http://papers.nips.cc/paper/5071-translatingembeddings-for-modeling-multi-relational-data.pdf.
Antoine Bordes, Jason Weston, Ronan Collobert, and
Yoshua Bengio. 2011. Learning structured embed-
dings of knowledge bases. In Conference on artificial intelligence. EPFL-CONF-192344.
François Chollet. 2015. Keras. https://github.com/fchollet/keras/.
Kenneth Ward Church. 2017.
Emerging trends:
I did it, I did it, I did it, but...
Natural Language Engineering 23(03):473–480.
https://doi.org/10.1017/S1351324917000067.
Rajarshi Das, Arvind Neelakantan, David Belanger,
and Andrew Mccallum. 2017. Chains of Reasoning
over Entities, Relations, and Text using Recurrent
Neural Networks. EACL .
Christiane Fellbaum. 1998. WordNet. Wiley Online
Library.
Jun Feng, Minlie Huang, Yang Yang, and Xiaoyan
Zhu. 2015. GAKE: Graph Aware Knowledge Embedding. In Proceedings of the 27th International
Conference on Computational Linguistics (COLING’16). pages 641–651.
Alberto Garcı́a-Durán, Antoine Bordes, and Nicolas Usunier. 2015.
Composing Relationships
with Translations.
In Conference on Empirical Methods in Natural Language Processing
(EMNLP 2015). Lisbonne, Portugal, pages 286–290.
https://doi.org/10.18653/v1/D15-1034.
Alberto Garcia-Duran, Antoine Bordes, Nicolas
Usunier, and Yves Grandvalet. 2016. Combining Two And Three-Way Embeddings Models for
Link Prediction in Knowledge Bases. Journal
of Artificial Intelligence Research 55:715—-742.
https://doi.org/10.1613/jair.5013.
Katsuhiko Hayashi and Masashi Shimbo. 2017.
On the Equivalence of Holographic and Complex Embeddings for Link Prediction pages 1–8.
http://arxiv.org/abs/1702.05563.
Shizhu He, Kang Liu, Guoliang Ji, and Jun Zhao.
2015. Learning to Represent Knowledge Graphs
with Gaussian Embedding. CIKM ’15 Proceedings
of the 24th ACM International on Conference on Information and Knowledge Management pages 623–
632. https://doi.org/10.1145/2806416.2806502.
Guoliang Ji, Shizhu He, Liheng Xu, Kang Liu, and
Jun Zhao. 2015. Knowledge Graph Embedding
via Dynamic Mapping Matrix. Proceedings of
the 53rd Annual Meeting of the Association for
Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) pages 687–696.
http://www.aclweb.org/anthology/P15-1067.
Guoliang Ji, Kang Liu, Shizhu He, and Jun Zhao.
2016. Knowledge Graph Completion with Adaptive Sparse Transfer Matrix. Proceedings of the 30th
Conference on Artificial Intelligence (AAAI 2016)
pages 985–991.
Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: a
Method for Stochastic Optimization. International
Conference on Learning Representations pages 1–
13.
Michael Schlichtkrull, Thomas N. Kipf, Peter Bloem,
Rianne van den Berg, Ivan Titov, and Max Welling.
2017. Modeling Relational Data with Graph Convolutional Networks http://arxiv.org/abs/1703.06103.
Omer Levy, Yoav Goldberg, and Ido Dagan. 2015.
Improving Distributional Similarity with Lessons
Learned from Word Embeddings. Transactions
of the Association for Computational Linguistics
3:211–225. https://doi.org/10.1186/1472-6947-15S2-S2.
Yelong Shen, Po-Sen Huang, Ming-Wei Chang, and
Jianfeng Gao. 2016. Implicit reasonet: Modeling large-scale structured relationships with shared
memory. arXiv preprint arXiv:1611.04642 .
Yankai Lin, Zhiyuan Liu, and Maosong Sun. 2015a.
Modeling relation paths for representation learning of knowledge bases. CoRR abs/1506.00379.
http://arxiv.org/abs/1506.00379.
Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu,
and Xuan Zhu. 2015b. Learning Entity and Relation Embeddings for Knowledge Graph Completion. Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence Learning pages 2181–
2187.
Qiao Liu, Liuyi Jiang, Minghao Han, Yao Liu, and
Zhiguang Qin. 2016. Hierarchical random walk inference in knowledge graphs. In Proceedings of the
39th International ACM SIGIR conference on Research and Development in Information Retrieval.
ACM, pages 445–454.
Dat Quoc Nguyen. 2017.
An overview of
embedding models of entities and relationships for knowledge base completion
https://arxiv.org/pdf/1703.08098.pdf.
Dat Quoc Nguyen, Kairit Sirts, Lizhen Qu, and Mark
Johnson. 2016. STransE: a novel embedding model
of entities and relationships in knowledge bases.
Proceedings of the 2016 Conference of the North
American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages 460–466. https://doi.org/10.18653/v1/N161054.
Maximilian Nickel, Kevin Murphy, Volker Tresp,
and Evgeniy Gabrilovich. 2015.
A Review
of Relational Machine Learning for Knowledge
Graph.
Proceedings of the IEEE (28):1–23.
https://doi.org/10.1109/JPROC.2015.2483592.
Maximilian Nickel, Lorenzo Rosasco, and Tomaso
Poggio. 2016.
Holographic Embeddings of
Knowledge Graphs.
AAAI pages 1955–1961.
http://arxiv.org/abs/1510.04935.
Mathias Niepert. 2016. Discriminative gaifman models. In Advances in Neural Information Processing
Systems. pages 3405–3413.
Martin Riedmiller and Heinrich Braun. 1993. A direct
adaptive method for faster backpropagation learning: The rprop algorithm. In Neural Networks,
1993., IEEE International Conference on. IEEE,
pages 586–591.
Baoxu Shi and Tim Weninger. 2017. ProjE: Embedding projection for knowledge graph completion. AAAI.
Richard Socher, Danqi Chen, Christopher D. Manning,
and Andrew Y. Ng. 2013. Reasoning With Neural
Tensor Networks for Knowledge Base Completion.
Proceedings of the Advances in Neural Information
Processing Systems 26 (NIPS 2013) .
Kristina Toutanova and Danqi Chen. 2015. Observed
versus latent features for knowledge base and text
inference. Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality pages 57–66.
Théo Trouillon, Christopher R. Dance, Johannes
Welbl, Sebastian Riedel, Éric Gaussier, and
Guillaume Bouchard. 2017. Knowledge Graph
Completion via Complex Tensor Factorization
http://arxiv.org/abs/1702.06879.
Théo Trouillon, Johannes Welbl, Sebastian Riedel,
Eric Gaussier, and Guillaume Bouchard. 2016.
Complex Embeddings for Simple Link Prediction.
Proceedings of ICML 48:2071–2080.
http://arxiv.org/pdf/1606.06357v1.pdf.
Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng
Chen. 2014. Knowledge Graph Embedding by
Translating on Hyperplanes. AAAI Conference on
Artificial Intelligence pages 1112–1119.
Zhigang Wang and Juanzi Li. 2016. Text-enhanced
representation learning for knowledge graph. In
Proceedings of the Twenty-Fifth International Joint
Conference on Artificial Intelligence. AAAI Press,
pages 1293–1299.
Han Xiao, Minlie Huang, and Xiaoyan Zhu. 2017. SSP: Semantic space projection for knowledge graph embedding with text descriptions. In Proceedings of the 31st AAAI Conference on Artificial Intelligence.
Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. ICLR. http://arxiv.org/abs/1412.6575.
Hee-geun Yoon, Hyun-je Song, Seong-bae Park, and
Se-young Park. 2016. A Translation-Based Knowledge Graph Embedding Preserving Logical Property
of Relations. Naacl pages 1–9.
A Tale of Two Animats: What does it take to have goals?
Larissa Albantakis
Wisconsin Institute for Sleep and Consciousness
Department of Psychiatry, University of Wisconsin, Madison, WI, USA
[email protected]
What does it take for a system, biological or not, to have goals? Here, this question is approached in the context of in
silico artificial evolution. By examining the informational and causal properties of artificial organisms (“animats”)
controlled by small, adaptive neural networks (Markov Brains), this essay discusses necessary requirements for
intrinsic information, autonomy, and meaning. The focus lies on comparing two types of Markov Brains that
evolved in the same simple environment: one with purely feedforward connections between its elements, the other
with an integrated set of elements that causally constrain each other. While both types of brains ‘process’
information about their environment and are equally fit, only the integrated one forms a causally autonomous entity
above a background of external influences. This suggests that to assess whether goals are meaningful for a system
itself, it is important to understand what the system is, rather than what it does.
0. Prequel
It was a dark and stormy night, when an experiment of
artificial evolution was set into motion at the University of
Wisconsin-Madison. Fifty independent populations of
adaptive Markov Brains, each starting from a different
pseudo-random seed, were released into a digital world full
of dangers and rewards. Who would make it into the next
generation? What would their neural networks look like
after 60,000 generations of selection and mutation?
While electrical signals were flashing inside the
computer, much like lightning on pre-historic earth, the
scientist who, in god-like fashion, had designed the
simulated universes and set the goals for survival, waited in
suspense for the simulations to finish. What kind of
creatures would emerge? …
I. Introduction
Life, from a physics point of view, is often pictured as
a continuous struggle of thermodynamically open systems
to maintain their complexity in the face of the second law
of thermodynamics—the overall increase of entropy in our
universe [1–3]. The ‘goal’ is survival. But is our universe
like a game, in which organisms, species, or life as a whole
increase their score by surviving? Is there a way to win?
Does life have a chance if the ‘goal’ of the universe is a
maximum entropy state (‘death’)?
Maybe there is an underlying law written into the
fabrics of our universe that aligns the ‘goal’ of life with the
‘goal’ of the universe. Maybe ‘information’ is fundamental
to discover it [4] (see also Carlo Rovelli’s essay
contribution). Maybe all there is are various gradients,
oscillations, or fluctuations. In any case, looming behind
these issues, another fundamental question lingers: What
does it take for a system, biological or not, to have goals?
To approach this problem with minimal confounding
factors, let us construct a universe from scratch: discrete,
deterministic, and designed with a simple set of predefined,
built-in rules for selection. This is easily done within the
realm of in silico artificial evolution. One such world is
shown in Fig. 1A (see also [5]). In this environment, the
imposed goal is to categorize blocks of different sizes into
those that have to be caught (‘food’) and those that have to
be avoided (‘danger’), limiting life to the essential.
Nevertheless, this task requires temporal-spatial integration
of sensor inputs and internal states (memory), to produce
appropriate motor responses. Fitness is measured as the
number of successfully caught and avoided blocks.
Let us then populate this simulated universe with
‘animats’, adaptive artificial organisms, equipped with
evolvable Markov Brains [5,6]. Markov Brains are simple
neural networks of generalized logic gates, whose input-output functions and connectivity are genetically encoded.
For simplicity, the Markov Brains considered here are
constituted of binary, deterministic elements. Over the
course of thousands of generations, the animats adapt to
their task environment through cycles of fitness-based
selection and (pseudo) random genetic mutation (Fig. 1B).
One particularly simple block-categorization environment
requires the animats to catch blocks of size 1 and avoid
blocks of size 3 (“c1-a3”) to increase their fitness.
In silico evolution experiments have the great
advantage that they can easily be repeated many times, with
different initial seeds. In this way, a larger portion of the
‘fitness landscape’, the solution space of the task
environment, can be explored. In the simple c1-a3
environment, perfect solutions (100% fitness) were
achieved at the end of 13 out of 50 evolution experiments
starting from independent populations run for 60,000
generations. In the following we will take a look at the kind
of creatures that evolved.
Fig. 1: Artificial evolution of animats controlled by Markov Brains. (A) The animat is placed in a 16 by 36 world with periodic boundaries to the left and right. An animat's sensors are activated when a block is positioned above them, regardless of distance. Blocks of different sizes fall, moving one unit to the left or right per update. Animats can move one unit at a time to the right or left. (B) An animat's Markov Brain is initialized without connections between elements and adapts to the task environment through fitness-based selection and probabilistic mutation (example run: fitness 47% at generation #0, 78.9% at generation #11264, and 97.7% at generation #59904). Adapted from [7] with permission.
[Figure panels C–E (plot content not recoverable from extraction): ⟨#concepts⟩ and ⟨Φ^Max⟩ as a function of #generations and of fitness (%), for Task 1 ("catch size 1, avoid size 3"), Task 4 ("catch size 3+6, avoid size 4+5"), and the 7 fittest Task 4 animats.]

II. Perfect fitness—goal achieved?

As in nature, various possible adaptations provide distinct solutions to the c1-a3 environment. The animats we tested in this environment [5] could develop a maximal size of 2 sensors, 2 motors, and 4 hidden elements, but were started at generation #0 without any connections between them (Fig. 1B). We discovered 13 out of 50 strains of animats that evolved perfect fitness, using diverse behavioral strategies, implemented by Markov Brains with different logic functions and architectures (see two examples in Fig. 2).

From mere observation of an animat's behavior, it is notoriously difficult to compress its behavioral strategy into a simple mechanistic description (see [8] for an example video). In some cases, an animat might first 'determine' the size and direction of the falling block and then 'follow' small blocks or 'move away' from large blocks. Such narratives, however, cannot cover all initial conditions or task solutions. How can we understand an animat and its behavior?

On the one hand, the animat's Markov Brain is deterministic, consists of at most 8 elements, and we have perfect knowledge of its logic structure. While there is no single elegant equation that captures an animat's internal dynamics, we can still describe and predict the state of its elements, how it reacts to sensor inputs, and when it activates its motors, moment by moment, for as long as we want. Think of a Markov Brain as a finite cellular automaton with inputs and outputs. No mysteries.

On the other hand, we may still aim for a comprehensive, higher-level description of the animat's behavior. One straightforward strategy is to refer to the goal of the task: "the animat tries to catch blocks of size 1 and avoid blocks of size 3". This is, after all, the rule we implemented for fitness selection. It is the animat's one and only 'purpose', programmed into its artificial universe. Note also that this description captures the animat's behavior perfectly. After all, it is—literally—determined to solve the task.

Is this top-level description in terms of goals useful and is it justified? Certainly, from an extrinsic, observer's perspective, it captures specific aspects of the animat's universe: the selection rule, the fact that there are blocks of size 1 and size 3, and that some of these blocks are caught by the animat and some are not, etc. But does it relate at all to intrinsic properties of the animat itself?

To approach this question, first, one might ask whether, where, and how much information about the environment is represented in the animat's Markov Brain. The degree to which a Markov Brain represents features of the environment might be assessed by information-theoretic means [6], for example, as the shared entropy between environment states E and internal states M, given the sensor states S:

R = H(E : M | S).    (1)

R captures information about features of the environment encoded in the internal states of the Markov Brain beyond the information present in its sensors. Conditioning on the sensors discounts information that is directly copied from the environment at a particular time step. A simple camera would thus have zero representation, despite its capacity to make > 10^7 bit copies of the world.

For animats adapting to the block-catching task, relevant environmental features include whether blocks are small or large, move to the left or right, etc. Indeed, representation R of these features increases, on average, over the course of evolution [6]. While this result implies that representation of environmental features, as defined
above, is related to task fitness, the measure R itself does
not capture whether or to what extent the identified
representations actually play a causal role in determining an
animat’s behavior1.
Machine-learning approaches, such as decoding,
provide another way to identify whether and where
information about environmental variables is present in the
evolved Markov Brains. Classifiers are trained to predict
environmental categories from brain states—a method now
frequently applied to neuro-imaging data in the
neurosciences [9,10]. Roughly, the better the prediction, the
more information was available to the classifier. Just as for
R, however, the fact that information about specific stimuli
can be extracted from a brain’s neural activity does not
necessarily imply that the brain itself is ‘using’ this
information [11].
What about our animats? As demonstrated in Fig. 2,
the c1-a3 block-categorization task can be perfectly solved
by animats with as few as 2 hidden elements. Their capacity
for representation is thus bounded by 4 bits (2 hidden
elements + 2 motors). Is that sufficient for a representation
of the goal for survival? At least in principle, 4 binary
categories could be ‘encoded’. Yet, in practice, even a
larger version of animats with higher capacity for
representation (10 hidden elements) only achieved values
on the order of R = 0.6 bits in a similar block-catching
environment [6]. To solve this task, the animats thus do not
seem to require much categorical information about the
environment beyond their sensor inputs.
While this lack of representation in the animats may
be due to their small size and the simplicity of the task,
there is a more general problem with the type of
information measures described above: the information that
is quantified is, by definition, extrinsic information.
[Figure 2 near here: (A) a feed-forward network with self-loops; (B) a feed-back network.]
Any form of representation is ultimately a correlation
measure between external and internal states, and requires
that relevant environmental features are preselected and
categorized by an independent observer (e.g. to obtain E in
eq. 1, or to train the decoder). As a consequence, the
information about the environment represented in the
animat’s Markov Brain is meaningful for the investigator.
Whether it is causally relevant, let alone meaningful, for the
animat is not addressed.2
III. Intrinsic information
To be causally relevant, information must be
physically instantiated. For every ‘bit’, there must be some
mechanism that is in one of two (or several) possible states,
and which state it is in must matter to other mechanisms. In
other words, the state must be “a difference that makes a
difference” [12,13].
More formally, a mechanism M has inputs that can
influence it and outputs that are influenced by it. By being
in a particular state m, M constrains the possible past states
of its inputs3, and the possible future states of its outputs in
a specific way. How much M in state m constrains its inputs
can be measured by its cause information (ci); how much it
constrains its outputs is captured by its effect information
(ei) [13].
An animat’s Markov Brain is a set of interconnected
logic elements. A mechanism M inside the Markov Brain
could be one of its binary logic elements, but can in
principle also be a set of several such elements4. In discrete
dynamical systems, such as the Markov Brains, with
discrete updates and states, we can quantify the cause and
effect information of a mechanism M in its current state m_t,
within system Z as the difference D between the
constrained and unconstrained probability distributions over
Z’s past and future states [13]:
ci(M = m_t) = D[p(z_{t−1} | m_t), p(z_{t−1})]  (2)
ei(M = m_t) = D[p(z_{t+1} | m_t), p(z_{t+1})]  (3)
where z_{t−1} are all possible past states of Z one update ago,
and z_{t+1} all possible future states of Z at the next update.
For p(z_{t−1}), we assume a uniform (maximum entropy)
distribution, which corresponds to perturbing Z into all
Fig. 2: Example network architectures of evolved Markov
Brains that achieved perfect fitness (100%) in the c1-a3
block-catching task. (A) Feed-forward network with self-loops;
(B) feedback network. Each network consists of sensors, hidden
elements, and motors. Adapted from [5] with permission.
1
Furthermore, representations of individual environmental
features are typically distributed across many elements [6], and
thus do not coincide with the Markov Brain’s elementary (micro)
logic components.
2
Note that this holds even if we could evaluate the correlation
between internal and external variables in an observer-independent
manner, except then the correlations might not even be meaningful
for the investigator.
3
If M would not constrain its inputs, its state would just be a
source of noise entering the system, not causal information.
4
Sets of elements can constrain their joint inputs and outputs
in a way that is irreducible to the constraints of their constituent
elements taken individually [13]. The irreducible cause-effect
information of a set of elements can be quantified similarly to
Eqn. 2-3, by partitioning the set and measuring the distance
between p(z_{t±1} | m_t) and the distributions of the partitioned set.
possible states with equal likelihood. Using such systematic
perturbations makes it possible to distinguish observed
correlations from causal relations [14]5. By evaluating a
causal relationship in all possible contexts (all system
states), we can obtain an objective measure of its specificity
(“Does A always lead to B, or just sometimes?”) [13,15].
Likewise, we take p(z_{t+1}) to be the distribution obtained
by providing independent, maximum entropy inputs to each
of the system’s elements [13]. In this way, Eqn. 2 and 3
measure the causal specificity with which mechanism M in
state m_t constrains the system’s past and future states.
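As a toy illustration of eq. 2, consider a single AND mechanism whose two inputs are perturbed into all states with equal likelihood. The sketch below computes its cause information, using total-variation distance as a simple stand-in for the distance D (IIT 3.0 itself uses an earth mover's distance [13]); the setup is an assumption for illustration.

```python
from itertools import product

def and_gate(a, b):
    return a & b

# Maximum-entropy perturbation: all past input states equally likely.
past_states = list(product([0, 1], repeat=2))
uniform = {s: 1 / len(past_states) for s in past_states}

def cause_repertoire(output_state):
    """p(past inputs | mechanism state): uniform over compatible inputs."""
    compatible = [s for s in past_states if and_gate(*s) == output_state]
    return {s: (1 / len(compatible) if s in compatible else 0.0)
            for s in past_states}

def total_variation(p, q):
    # Simple stand-in for the distance D of eq. 2.
    return 0.5 * sum(abs(p[s] - q[s]) for s in past_states)

# ON constrains the past to the single input (1, 1); OFF is less specific.
ci_on = total_variation(cause_repertoire(1), uniform)   # 0.75
ci_off = total_variation(cause_repertoire(0), uniform)  # 0.25
```

The ON state is maximally specific about its past inputs, so its cause information is higher, in line with the AND-gate example of footnote 7.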
A system can only ‘process’ information to the extent
that it has mechanisms to do so. All causally relevant
information within a system Z is contained in the system’s
cause-effect structure, the set of all its mechanisms, and
their cause and effect distributions p(z_{t−1} | m_t) and
p(z_{t+1} | m_t). The cause-effect structure of a system in a
state specifies the information intrinsic to the system, as
opposed to correlations between internal and external
variables. If the goals that we ascribe to a system are indeed
meaningful from the intrinsic perspective of the system,
they must be intrinsic information, contained in the
system’s cause-effect structure (if there is no mechanism
for it, it does not matter to the system).
Yet, the system itself does not ‘have’ this intrinsic
information. Just by ‘processing’ information, a system
cannot evaluate its own constraints. This is simply because
a system cannot, at the same time, have information about
itself in its current state and also other possible states. Any
memory the system has about its past states has to be
physically instantiated in its current cause-effect structure.
While a system can have mechanisms that, by being in their
current state, constrain other parts of the system, these
mechanisms cannot ‘know’ what their inputs mean6. In the
same sense, a system of mechanisms in its current state
does not ‘know’ about its cause-effect structure; instead, the
cause-effect structure specifies what it means to be the
system in a particular state7. Intrinsic meaning thus cannot
arise from ‘knowing’, it must arise from ‘being’.
What does it mean to ‘be’ a system, as opposed to an
assembly of interacting elements, defined by an extrinsic
observer? When can a system of mechanisms be considered
an autonomous agent separate from its environment?
5
By contrast to the uniform, perturbed distribution, the
stationary, observed distribution of system Z entails correlations
due to the system’s network structure which may occlude or
exaggerate the causal constraints of the mechanism itself.
6
Take a neuron that activates, for example, every time a
picture of the actress Jennifer Aniston is shown [22]. All it
receives as inputs is quasi-binary electrical signals from other
neurons. The meaning “Jennifer Aniston” is not in the message to
this neuron, or any other neuron.
7
For example, an AND logic gate receiving 2 inputs is what it
is, because it switches ON if and only if both inputs were ON. An
AND gate in state ON thus constrains the past states of its inputs to
be ON.
IV. To be or not to be integrated
Living systems, or agents, more generally, are, by
definition, open systems that dynamically and materially
interact with their environment. For this reason, physics, as
a set of mathematical laws governing dynamical evolution,
does not distinguish between an agent and its environment.
When a subsystem within a larger system is characterized
by physical, biological, or informational means, its
boundaries are typically taken for granted (see also [16]).
Let us return to the Markov Brains shown in Fig. 2,
which evolved perfect solutions in the c1-a3 environment.
Comparing the two network architectures, the Markov
Brain in Fig. 2A has only feedforward connections between
elements, while the hidden elements in Fig. 2B feedback to
each other. Both Markov Brains ‘process’ information in
the sense that they receive signals from the environment
and react to these signals. However, the hidden elements in
Fig. 2B constrain each other, above a background of
external inputs, and thus form an integrated system of
mechanisms.
Whether and to what extent a set of elements is
integrated can be determined from its cause-effect structure,
using the theoretical framework of integrated information
theory (IIT) [13]. A subsystem of mechanisms has
integrated information Φ > 0, if all of its parts constrain,
and are being constrained by, other parts of the system.
Every part must be a difference that makes a difference
within the subsystem. Roughly, Φ quantifies the minimal
intrinsic information that is lost if the subsystem is
partitioned in any way. An integrated subsystem with Φ > 0
has a certain amount of causal autonomy from its
environment8. Maxima of Φ define where intrinsic causal
borders emerge [17,18]. A set of elements thus forms a
causally autonomous entity if its mechanisms give rise to a
cause-effect structure with maximal Φ, compared to smaller
or larger overlapping sets of elements. Such a maximally
integrated set of elements forms a unitary whole (it is ‘one’
as opposed to ‘many’) with intrinsic, self-defined causal
borders, above a background of external interactions. By
contrast, systems whose elements are connected in a purely
feedforward manner have Φ = 0: there is at least one part of
the system that remains unconstrained by the rest. From the
intrinsic perspective, then, there is no unified system, even
though an external observer can treat it as one.
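A necessary (though not sufficient) condition for Φ > 0 can be checked from connectivity alone: every bipartition of the candidate set must be crossed by connections in both directions, which fails for any purely feedforward wiring. The sketch below tests this via directed reachability; it is an illustration under assumed toy wirings, not the full Φ calculus of IIT [13].

```python
def mutually_constraining(nodes, edges):
    """True if every element can reach every other via directed edges,
    so no bipartition leaves one part unconstrained by the rest.
    This is only a necessary condition for integrated information > 0."""
    adj = {n: [b for a, b in edges if a == n] for n in nodes}
    def reach(start):
        seen, stack = {start}, [start]
        while stack:
            for v in adj[stack.pop()]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen
    return all(reach(n) == set(nodes) for n in nodes)

# Two hidden elements wired in the spirit of Fig. 2A (feedforward with
# a self-loop) versus Fig. 2B (mutual feedback).
feedforward = [(1, 1), (1, 2)]
feedback = [(1, 2), (2, 1)]

phi_possible_A = mutually_constraining([1, 2], feedforward)  # False
phi_possible_B = mutually_constraining([1, 2], feedback)     # True
```

The feedforward wiring fails the test because element 2 never constrains element 1, matching the claim that purely feedforward systems have Φ = 0.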
So far, we have considered the entire Markov Brain,
including sensors, hidden elements, and motors, as the
system of interest. However, the sensors only receive input
from the environment, not from other elements within the
system, and the motors do not output to other system
elements. The whole Markov Brain is not an integrated
system, and thus not an autonomous system, separate from
8
This notion of causal autonomy applies to deterministic and
probabilistic systems, to the extent that their elements constrain
each other, above other background inputs, e.g. from the sensors.
its environment. Leaving aside the animat’s ‘retina’
(sensors) and ‘motor neurons’ (motors), inside the Markov
Brain in Fig. 2B, we find a minimal entity with Φ > 0 and
self-defined causal borders—a ‘brain’ within the Markov
Brain. By contrast, all there is, in the case of Fig. 2A, is a
cascade of switches, and any border demarcating a
particular set of elements would be arbitrary.
Dynamically and functionally the two Markov Brains
are very similar. However, one is an integrated, causally
autonomous entity, while the other is just a set of elements
performing a function. Note again that the two systems are
equally ‘intelligent’ (if we define intelligence as task
fitness). Both solve the task perfectly. Yet, from the
intrinsic perspective, being a causally autonomous entity
makes all the difference (see [13,19]). But is there a
practical advantage?
V. Advantages of being integrated
The cause-effect structure of a causally autonomous
entity describes what it means to be that entity from its own
intrinsic perspective. Each of the entity’s mechanisms, in its
current state, corresponds to a distinction within the entity.
Being an entity for which ‘light’ is different from ‘dark’,
for example, requires that the system itself, its cause-effect
structure, must be different when it ‘sees’ light, compared
to when it ‘sees’ dark. In this view, intrinsic meaning might
be created by the specific way in which the mechanisms of
an integrated entity constrain its own past and future states,
and by their relations to other mechanisms within the entity.
The animat ‘brain’ in Fig. 2B, constituted of the 2
hidden elements, has at most 3 mechanisms (each element,
and also both elements together, if they irreducibly
constrain the system). At best, these mechanisms could
specify that “something is probably this way, not that way”,
and “same” or “different”. Will more complex
environments lead to the evolution of more complex
autonomous agents?
In the simple c1-a3 environment, animats with
integrated brains do not seem to have an advantage over
feedforward architectures. Out of the 13 strains of animats
that reached perfect fitness, about half developed
architectures with recurrent connections (6/13) [5].
However, in a more difficult block-catching environment,
which required more internal memory (“catch size 3 and 6,
avoid size 4 and 5”), the same type of animats developed
more integrated architectures with higher Φ, and more
mechanisms (one example architecture is shown in Fig.
1B). The more complex the environment, the more
evolution seems to favor integrated structures.
In theory, and more so for artificial systems, being an
autonomous entity is not a requirement for intelligent
behavior. Any task could, in principle, be solved by a
feedforward architecture given an arbitrary number of
elements and updates. Nevertheless, in complex, changing
environments, with a rich causal structure, where resources
are limited and survival requires many mechanisms,
integrated agents seem to have an evolutionary advantage
[5,20]. Under these conditions, integrated systems are more
economical in terms of elements and connections, and more
flexible than functionally equivalent systems with a purely
feedforward architecture. Evolution should also ensure that
the intrinsic cause-effect structure of an autonomous agent
‘matches’ the causal structure of its environment [21].
From the animats, it is still a long way towards agents
with intrinsic goals and intentions. What kind of cause-effect structure is required to experience goals, and which
environmental conditions could favor its evolution, remains
to be determined. Integrated information theory offers a
quantitative framework to address these questions.
VI. Conclusion
Evolution did produce autonomous agents. We
experience this first hand. We are also entities with the right
kind of cause-effect structure to experience goals and
intentions. To us, the animats appear to be agents that
behave with intention. However, the reason for this lies
within ourselves, not within the animats. Some of the
animats even lack the conditions to be separate causal
entities from their environment. Yet, observing their
behavior affects our intrinsic mechanisms. For this reason,
describing certain types of directed behaviors as goals, in
the extrinsic sense, is most likely useful to us from an
evolutionary perspective. While we cannot infer agency
from observing apparent goal-directed behavior, by the
principle of sufficient reason, something must cause this
behavior (if we see an antelope running away, maybe there
is a lion). On a grander scale, descriptions in terms of goals
and intentions can hint at hidden gradients and selection
processes in nature, and inspire new physical models.
For determining agency and intrinsic meaning in other
systems, biological or not, correlations between external
and internal states have proven inadequate. Being a causally
autonomous entity from the intrinsic perspective requires an
integrated cause-effect structure; merely ‘processing’
information does not suffice. Intrinsic goals certainly
require an enormous number of mechanisms. Finally, when
physics is reduced to a description of mathematical laws
that determine dynamical evolution, there seems to be no
place for causality. Yet, a (counterfactual) notion of
causation may be fundamental to identify agents and
distinguish them from their environment.
Acknowledgements
I thank Giulio Tononi for his continuing support and
comments on this essay, and William Marshall, Graham Findlay,
and Gabriel Heck for reading this essay and providing helpful
comments. L.A. receives funding from the Templeton World
Charities Foundation (Grant #TWCF0196).
References
1. Schrödinger E (1992) What is Life?: With Mind and Matter and Autobiographical Sketches. Cambridge University Press.
2. Still S, Sivak DA, Bell AJ, Crooks GE (2012) Thermodynamics of Prediction. Phys Rev Lett 109: 120604.
3. England JL (2013) Statistical physics of self-replication. J Chem Phys 139: 121923.
4. Walker SI, Davies PCW (2013) The algorithmic origins of life. J R Soc Interface 10: 20120869.
5. Albantakis L, Hintze A, Koch C, Adami C, Tononi G (2014) Evolution of Integrated Causal Structures in Animats Exposed to Environments of Increasing Complexity. PLoS Comput Biol 10: e1003966.
6. Marstaller L, Hintze A, Adami C (2013) The evolution of representation in simple cognitive networks. Neural Comput 25: 2079–2107.
7. Albantakis L, Tononi G (2015) The Intrinsic Cause-Effect Power of Discrete Dynamical Systems—From Elementary Cellular Automata to Adapting Animats. Entropy 17: 5472–5502.
8. Online Animat animation. http://integratedinformationtheory.org/animats.html
9. Quian Quiroga R, Panzeri S (2009) Extracting information from neuronal populations: information theory and decoding approaches. Nat Rev Neurosci 10: 173–185.
10. King J-R, Dehaene S (2014) Characterizing the dynamics of mental representations: the temporal generalization method. Trends Cogn Sci 18: 203–210.
11. Haynes J-D (2009) Decoding visual consciousness from human brain signals. Trends Cogn Sci 13: 194–202.
12. Bateson G (1972) Steps to an Ecology of Mind. University of Chicago Press.
13. Oizumi M, Albantakis L, Tononi G (2014) From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0. PLoS Comput Biol 10: e1003588.
14. Pearl J (2000) Causality: models, reasoning and inference. Cambridge Univ Press.
15. Ay N, Polani D (2008) Information Flows in Causal Networks. Adv Complex Syst 11: 17–41.
16. Krakauer D, Bertschinger N, Olbrich E, Ay N, Flack JC (2014) The Information Theory of Individuality: The Architecture of Individuality.
17. Marshall W, Albantakis L, Tononi G (2016) Black-boxing and cause-effect power. arXiv 1608.03461.
18. Marshall W, Kim H, Walker SI, Tononi G, Albantakis L. How causal analysis can reveal autonomy in biological systems.
19. Tononi G, Boly M, Massimini M, Koch C (2016) Integrated information theory: from consciousness to its physical substrate. Nat Rev Neurosci 17: 450–461.
20. Albantakis L, Tononi G (2015) Fitness and neural complexity of animats exposed to environmental change. BMC Neurosci 16: P262.
21. Tononi G (2015) Integrated information theory. Scholarpedia 10: 4164.
22. Quiroga RQ, Reddy L, Kreiman G, Koch C, Fried I (2005) Invariant visual representation by single neurons in the human brain. Nature 435: 1102–1107.
Asynchronous approach in the plane: A deterministic
polynomial algorithm
arXiv:1612.02168v3 [] 5 May 2017
Sébastien Bouchard†, Marjorie Bournat†, Yoann Dieudonné‡, Swan Dubois†, Franck Petit†
† Sorbonne Universités, UPMC Univ Paris 06, CNRS, INRIA, LIP6 UMR 7606, Paris, France
‡ MIS Lab., Université de Picardie Jules Verne, France
May 8, 2017
Abstract
In this paper we study the task of approach of two mobile agents having the same limited range
of vision and moving asynchronously in the plane. This task consists in getting them in finite
time within each other’s range of vision. The agents execute the same deterministic algorithm
and are assumed to have a compass showing the cardinal directions as well as a unit measure. On
the other hand, they do not share any global coordinates system (like GPS), cannot communicate
and have distinct labels. Each agent knows its label but does not know the label of the other
agent or the initial position of the other agent relative to its own. The route of an agent is a
sequence of segments that are subsequently traversed in order to achieve approach. For each
agent, the computation of its route depends only on its algorithm and its label. An adversary
chooses the initial positions of both agents in the plane and controls the way each of them moves
along every segment of the routes, in particular by arbitrarily varying the speeds of the agents.
Roughly speaking, the goal of the adversary is to prevent the agents from solving the task, or at
least to ensure that the agents have covered as much distance as possible before seeing each other.
A deterministic approach algorithm is a deterministic algorithm that always allows two agents
with any distinct labels to solve the task of approach regardless of the choices and the behavior
of the adversary. The cost of a complete execution of an approach algorithm is the length of both
parts of route travelled by the agents until approach is completed.
Let ∆ and l be the initial distance separating the agents and the length of (the binary representation of) the shortest label, respectively. Assuming that ∆ and l are unknown to both agents,
does there exist a deterministic approach algorithm always working at a cost that is polynomial in
∆ and l?
Actually the problem of approach in the plane reduces to the network problem of rendezvous
in an infinite oriented grid, which consists in ensuring that both agents end up meeting at the
same time at a node or on an edge of the grid. By designing such a rendezvous algorithm with
appropriate properties, as we do in this paper, we provide a positive answer to the above question.
Our result turns out to be an important step forward from a computational point of view, as
the other algorithms allowing to solve the same problem either have an exponential cost in the
initial separating distance and in the labels of the agents, or require each agent to know its starting
position in a global system of coordinates, or only work under a much less powerful adversary.
Keywords: mobile agents, asynchronous rendezvous, plane, infinite grid, deterministic algorithm, polynomial cost.
1 Introduction
1.1 Model and Problem
The distributed system considered in this paper consists of two mobile agents that are initially placed
by an adversary at arbitrary but distinct positions in the plane. Both agents have a limited sensory
radius (in the sequel also referred to as radius of vision), the value of which is denoted by ε, allowing
them to sense (or, to see) all their surroundings at distance at most ε from their respective current
locations. We assume that the agents know the value of ε. As stated in [12], when ε = 0, if agents
start from arbitrary positions of the plane and can freely move on it, making them occupy the same
location at the same time is impossible in a deterministic way. So, we assume that ε > 0 and we
consider the task of approach, which consists in bringing them at distance at most ε so that they can
see each other. In other words, the agents have completed their approach once they mutually sense each
other, and they can even get closer. Without loss of generality, we assume in the rest of this paper
that ε = 1.
The initial positions of the agents, arbitrarily chosen by the adversary, are separated by a distance
∆ that is initially unknown to both agents and that is greater than ε = 1. In addition to the initial
positions, the adversary also assigns a different non-negative integer (called label) to each agent. The
label of an agent is the only input of the deterministic algorithm executed by the agent. While the
labels are distinct, the algorithm is the same for both agents. Each agent is equipped with a compass
showing the cardinal directions and with a unit of length. The cardinal directions and the unit of
length are the same for both agents.
To describe how and where each agent moves, we need to introduce two important notions that
are borrowed from [12]: The route and the walk of an agent. The route of an agent is a sequence
(S1 , S2 , S3 . . .) of segments Si = [ai , ai+1 ] traversed in stages as follows. The route starts from a1 ,
the initial position of the agent. For every i ≥ 1, starting from the position ai , the agent initiates
Stage i by choosing a direction α (using its compass) as well as a distance x. Stage i ends as soon as
the agent either sees the other agent or reaches ai+1 corresponding to the point at distance x from
ai in direction α. Stages are repeated indefinitely (until the approach is completed). Since both
agents never know their positions in a global coordinate system, the directions they choose at each
stage can only depend on their (deterministic) algorithm and their labels. So, the route (the actual
sequence of segments) followed by an agent depends on its algorithm and its label, but also on its
initial position. By contrast, the walk of each agent along every segment of its route is controlled
by the adversary. More precisely, within each stage Si and while the approach is not achieved, the
adversary can arbitrarily vary the speed of the agent, stop it and even move it back and forth as
long as the walk of the agent is continuous, does not leave Si , and ends at ai+1 . Roughly speaking,
the goal of the adversary is to prevent the agents from solving the task, or at least to ensure that
the agents have covered as much distance as possible before seeing each other. We assume that at
any time an agent can remember the route it has followed since the beginning.
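The route model above can be made concrete with a small sketch: a stage is a (direction, distance) pair chosen with the compass, and chaining stages yields the polyline a_1, a_2, a_3, ... The adversary-controlled walk within each segment is not modeled here, and all names are illustrative.

```python
import math
from dataclasses import dataclass

@dataclass
class Stage:
    direction: float  # angle in radians, measured clockwise from North
    distance: float   # length x of the segment chosen for this stage

def endpoint(a_i, stage):
    """Point a_{i+1}: at distance stage.distance from a_i in stage.direction."""
    x, y = a_i
    return (x + stage.distance * math.sin(stage.direction),
            y + stage.distance * math.cos(stage.direction))

def route_points(a_1, stages):
    """The route (S_1, S_2, ...) represented by its vertices a_1, a_2, a_3, ..."""
    points = [a_1]
    for s in stages:
        points.append(endpoint(points[-1], s))
    return points

# Two stages: 2 units North, then 1 unit East.
vertices = route_points((0.0, 0.0), [Stage(0.0, 2.0), Stage(math.pi / 2, 1.0)])
```

Since the directions depend only on the algorithm and the label, two agents with the same label would produce congruent routes, which is the source of the impossibility argument below.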
A deterministic approach algorithm is a deterministic algorithm that always allows two agents to
solve the task of approach regardless of the choices and the behavior of the adversary. The cost of
an accomplished approach is the length of both parts of route travelled by the agents until they see
each other. An approach algorithm is said to be polynomial in ∆ and in the length of the binary
representation of the shortest label between both agents if it always permits to solve the problem
of approach at a cost that is polynomial in the two aforementioned parameters, no matter what the
adversary does.
It is worth mentioning that the use of distinct labels is not fortuitous. In the absence of a way
of distinguishing the agents, the task of approach would have no deterministic solution. This is
especially the case if the adversary handles the agents in a perfect synchronous manner. Indeed, if
the agents act synchronously and have the same label, they will always follow the same deterministic
rules leading to a situation in which the agents will always be exactly at distance ∆ from each other.
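One standard way to turn distinct labels into asymmetric behavior, used in many rendezvous algorithms, is to transform each label into a bit string such that neither agent's string is a prefix of the other's; the agents can then behave differently at the first position where their strings disagree. The sketch below shows one classic such transformation (the paper's actual construction may differ).

```python
def transform(label: int) -> str:
    """Double every bit of the binary representation, then append '01'.
    For distinct labels, neither output is a prefix of the other: a
    mismatch always occurs where one string has its '01' terminator and
    the other has a doubled pair ('00' or '11'), or where doubled pairs
    of equal-length labels differ."""
    bits = bin(label)[2:]
    return ''.join(b + b for b in bits) + '01'

t5 = transform(5)  # '101' -> '11' + '00' + '11' + '01' = '11001101'
```

The output length is 2l + 2 for a label of binary length l, so this preprocessing keeps costs polynomial in l.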
1.2 Our Results
In this paper, we prove that the task of approach can be solved deterministically in the above
asynchronous model, at a cost that is polynomial in the unknown initial distance separating the
agents and in the length of the binary representation of the shortest label. To obtain this result, we
go through the design of a deterministic algorithm for a very close problem, that of rendezvous in
an infinite oriented grid which consists in ensuring that both agents end up meeting either at a node
or on an edge of the grid. The tasks of approach and rendezvous are very close as the former can be
reduced to the latter.
It should be noticed that our result turns out to be an important advance, from a computational
point of view, in resolving the task of approach. Indeed, the other existing algorithms allowing to
solve the same problem either have an exponential cost in the initial separating distance and in the
labels of the agents [12], or require each agent to know its starting position in a global system of
coordinates [10], or only work under a much less powerful adversary [18] which initially assigns a
possibly different speed to each agent but cannot vary it afterwards.
1.3 Related Work
The task of approach is closely linked to the task of rendezvous. Historically, the first mention of
the rendezvous problem appeared in [33]. From this publication until now, the problem has been
extensively studied so that there is henceforth a huge literature about this subject. This is mainly
due to the fact that there is a lot of alternatives for the combinations we can make when addressing
the problem, e.g., playing on the environment in which the agents are supposed to evolve, the way
of applying the sequences of instructions (i.e., deterministic or randomized) or the ability to leave
some traces in the visited locations, etc. Naturally, in this paper we focus on work that are related
to deterministic rendezvous. This is why we will mostly dwell on this scenario in the rest of this
subsection. However, for the curious reader wishing to consider the matter in greater depth, regarding
randomized rendezvous, a good starting point is to go through [2, 3, 28]. Concerning deterministic
rendezvous, the literature is divided according to the way of modeling the environment: Agents
can either move in a graph representing a network, or in the plane.
For the problem of rendezvous in networks, a lot of papers considered synchronous settings, i.e.,
a context where the agents move in the graph in synchronous rounds. This is particularly the case
of [17] in which the authors presented a deterministic protocol for solving the rendezvous problem,
which guarantees a meeting of the two involved agents after a number of rounds that is polynomial
in the size n of the graph, the length l of the shortest of the two labels and the time interval τ
between their wake-up times. As an open problem, the authors asked whether it was possible to
obtain a polynomial solution to this problem which would be independent of τ . A positive answer
to this question was given, independently of each other, in [27] and [35]. While these algorithms
ensure rendezvous in polynomial time (i.e., a polynomial number of rounds), they also ensure it at
polynomial cost because the cost of a rendezvous protocol in a graph is the number of edges traversed
by the agents until they meet—each agent can make at most one edge traversal per round. Note
that despite the fact a polynomial time implies a polynomial cost in this context, the reciprocal
is not always true as the agents can have very long waiting periods, sometimes interrupted by a
movement. Thus these parameters of cost and time are not always linked to each other. This was
highlighted in [31] where the authors studied the tradeoffs between cost and time for the deterministic
rendezvous problem. More recently, some efforts have been dedicated to analyse the impact on time
complexity of rendezvous when in every round the agents are provided with some pieces of information
by making a query to some device or some oracle [14, 30]. Along with the work aiming at optimizing
the parameters of time and/or cost of rendezvous, some other work have examined the amount of
required memory to solve the problem, e.g., [24, 25] for tree networks and in [11] for general networks.
In [6], the problem is approached in a fault-prone framework, in which the adversary can delay an
agent for a finite number of rounds, each time it wants to traverse an edge of the network.
Rendezvous is the term that is usually used when the task of meeting is restricted to a team
of exactly two agents. When considering a team of two agents or more, the term of gathering is
commonly used. Still in the context of synchronous networks, we can cite some work about gathering
two or more agents. In [19], the task of gathering is studied for anonymous agents while in [5, 15, 20]
the same task is studied in the presence of Byzantine agents that are, roughly speaking, malicious agents
with an arbitrary behavior.
Some studies have been also dedicated to the scenario in which the agents move asynchronously
in a network [12, 21, 29], i.e., assuming that the agent speed may vary, controlled by the adversary.
In [29], the authors investigated the cost of rendezvous for both infinite and finite graphs. In the
former case, the graph is reduced to the (infinite) line and bounds are given depending on whether
the agents know the initial distance between them or not. In the latter case (finite graphs), similar
bounds are given for ring shaped networks. They also proposed a rendezvous algorithm for an
arbitrary graph provided the agents initially know an upper bound on the size of the graph. This
assumption was subsequently removed in [12]. However, in both [29] and [12], the cost of rendezvous
was exponential in the size of the graph. The first rendezvous algorithm working for arbitrary finite
connected graphs at cost polynomial in the size of the graph and in the length of the shortest label
was presented in [21]. (It should be stressed that the algorithm from [21] cannot be used to obtain the
solution described in the present paper: this point is fully explained at the end of this subsection).
In all the aforementioned studies, the agents can remember all the actions they have made since
the beginning. A different asynchronous scenario for networks was studied in [13]. In this paper,
the authors assumed that agents are oblivious, but they can observe the whole graph and make
navigation decisions based on these observations.
Concerning rendezvous or gathering in the plane, we also found the same dichotomy of synchronicity vs. asynchronicity. The synchronous case was introduced in [34] and studied from a fault-tolerance
point of view in [1, 16, 22]. In [26], rendezvous in the plane is studied for oblivious agents equipped
with unreliable compasses under synchronous and asynchronous models. Asynchronous gathering
of many agents in the plane has been studied in various settings in [7, 8, 9, 23, 32]. However, the
common feature of all these papers related to rendezvous or gathering in the plane – which is not
present in our model – is that the agents can observe all the positions of the other agents or at least
the global graph of visibility is always connected (i.e., the team cannot be split into two groups so
that no agent of the first group can detect at least one agent of the second group).
Finally, the works closest to ours, which solve the problem of approach in an asynchronous framework, are [10, 4, 12, 18]. In [10, 12, 18], the task of approach is solved by reducing it to the task of rendezvous in an infinite oriented grid. In [4], the authors present a solution to the
task of approach in a multidimensional space by reducing it to the task of rendezvous in an infinite
multidimensional grid. Let us give some more details concerning these four works to highlight the
contrasts with our present contribution. The result from [12] leads to a solution to the problem of
approach in the plane but has the disadvantage of having an exponential cost. The result from [10]
and [4] also implies a solution to the problem of approach in the plane at cost polynomial in the
initial distance of the agents. However, in both these works, the authors use the powerful assumption
that each agent knows its starting position in a global system of coordinates (while in our paper, the
agents are completely ignorant of where they are). Lastly, the result from [18] provides a solution
at cost polynomial in the initial distance between agents and in the length of the shortest label.
However, the authors of this study also used a powerful assumption: The adversary initially assigns
a possibly different and arbitrary speed to each agent but cannot vary it afterwards. Hence, each
agent moves at constant speed and uses its clock to achieve approach. By contrast, in our paper, we
assume basic asynchronous settings, i.e., the adversary arbitrarily and permanently controls the
speed of each agent.
To close this subsection, it is worth explaining why the algorithm from [21] that we referred to above, which is specially designed for asynchronous rendezvous in arbitrary finite graphs, is unlikely to yield our present result. First, the algorithm of [21] does not have a cost polynomial in the initial distance separating the agents and in the length of the shorter label.
Actually, ensuring rendezvous at this cost is even impossible in arbitrary graphs, as witnessed by the case of the clique with two agents labeled 0 and 1: the adversary can hold one agent at a node and make the other agent traverse Θ(n) edges before rendezvous, despite the initial distance being 1.
Moreover, the validity of the algorithm given in [21] closely relies on the fact that both agents must
evolve in the same finite graph, which is clearly not the case in our present scenario. In particular, even when considering the task of rendezvous in an infinite oriented grid, the natural attempt consisting in making each agent apply the algorithm from [21] within bounded grids of increasing size, centered at its initial position, does not permit one to claim that rendezvous eventually occurs. Indeed, the bounded grid considered by one agent is never exactly the same as the bounded grid considered by the other (although they may partly overlap), and thus the agents never evolve in the same finite graph, which is a necessary condition for the validity of the solution of [21] and, by extension, of this natural attempt.
1.4 Roadmap
The next section (Section 2) is dedicated to the computational model and basic definitions. We
sketch our solution in Section 3 and describe it formally in Sections 4 and 5. Section 6 presents the
correctness proof and cost analysis of the algorithm. Finally, we make some concluding remarks in
Section 7.
2 Preliminaries
We know from [12, 18] that the problem of approach in the plane can be reduced to that of rendezvous
in an infinite grid specified in the next paragraph.
Consider an infinite square grid in which every node u is adjacent to 4 nodes located North, East,
South, and West from node u. We call such a grid a basic grid. Two agents with distinct labels
(corresponding to non-negative integers) starting from arbitrary and distinct nodes of a basic grid
G have to meet either at some node or inside some edge of G. As for the problem of approach (in
the plane), each agent is equipped with a compass showing the cardinal directions. The agents can
see each other and communicate only when they share the same location in G. In other words, in
the basic grid G we assume that the sensory radius (or, radius of vision) of the agents is equal to
zero. In such settings, the only initial input that is given to a rendezvous algorithm is the label of
the executing agent. When occupying a node u, an agent decides (according to its algorithm) to
move to an adjacent node v via one of the four cardinal directions: the movement of the agent along
the edge {u, v} is controlled by the adversary in the same way as in a section of a route (refer to
Subsection 1.1), i.e., the adversary can arbitrarily vary the speed of the agent, stop it and even move
it back and forth as long as the walk of the agent is continuous, does not leave the edge, and ends
at v.
The cost of a rendezvous algorithm in a basic grid is the total number of edge traversals by both
agents until their meeting.
From the reduction described in [18], we have the following theorem.
Theorem 1. If there exists a deterministic algorithm solving the problem of rendezvous between any two agents in a basic grid at cost polynomial in D and in the length of the binary representation of the shortest of their labels, where D is the distance (in the Manhattan metric) between the two starting nodes occupied by the agents, then there exists a deterministic algorithm solving the problem of approach in the plane between any two agents at cost polynomial in ∆ and in the length of the binary representation of the shortest of their labels, where ∆ is the initial Euclidean distance separating the agents.
For completeness let us now outline the reduction described in [18]. Consider an infinite square
grid with edge length 1. More precisely, for any point v in the plane, we define the basic grid Gv
to be the infinite graph, one of whose nodes is v, and in which every node u is adjacent to 4 nodes
at Euclidean distance 1 from it, and located North, East, South, and West from node u. We now
focus on how to transform any rendezvous algorithm in the grid Gv to an algorithm for the task of
approach in the plane.
Let A be any rendezvous algorithm for any basic grid. Algorithm A can be executed in the grid
Gw , for any point w in the plane. Consider two agents in the plane starting respectively from point
v and from another point w in the plane. Let V′ be the set of nodes in Gv that are the closest nodes from w. Let v′ be a node in V′, arbitrarily chosen. Notice that v′ is at distance at most √2/2 < 1 from w. Let α be the vector v′w. Execute algorithm A on the grid Gv with agents starting at nodes v and v′. Let p be the point in Gv (either a node of it or a point inside an edge) in which these agents
meet at some time t. The transformed algorithm A∗ for approach in the plane works as follows:
Execute the same algorithm A but with one agent starting at v and traveling in Gv and the other
agent starting at w and traveling in Gw , so that the starting time of the agent starting at w is the
same as the starting time of the agent starting at v′ in the execution of A in Gv. The starting time
of the agent starting at v does not change. If approach has not been accomplished before, in time t
the agent starting at v and traveling in Gv will be at point p, as previously. In the same way, the
agent starting at w and traveling in Gw will get to some point q at time t. Clearly, q = p + α. Hence
both agents will be at distance less than 1 at time t, which means that they accomplish approach in
the plane because ε = 1 (refer to Subsection 1.1).
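As an illustration of the snapping step used in this reduction, the following small Python sketch (ours, not part of the paper; the function name is hypothetical) computes the closest node v0 of Gv to a point w and the vector α = w − v0, and checks the claimed bound |α| ≤ √2/2:

```python
import math
import random

def nearest_node_and_alpha(v, w):
    """Snap point w to a closest node v0 of the unit grid G_v (nodes are v plus
    integer offsets), and return v0 together with the vector alpha = w - v0."""
    v0 = (v[0] + round(w[0] - v[0]), v[1] + round(w[1] - v[1]))
    alpha = (w[0] - v0[0], w[1] - v0[1])
    return v0, alpha

random.seed(0)
for _ in range(1000):
    v = (random.uniform(-50, 50), random.uniform(-50, 50))
    w = (random.uniform(-50, 50), random.uniform(-50, 50))
    v0, alpha = nearest_node_and_alpha(v, w)
    # v0 is a node of G_v: its offset from v is integral
    assert abs((v0[0] - v[0]) - round(v0[0] - v[0])) < 1e-9
    # |alpha| <= sqrt(2)/2 < 1, as claimed in the reduction
    assert math.hypot(*alpha) <= math.sqrt(2) / 2 + 1e-9
```

Each coordinate of α has absolute value at most 1/2, hence the Euclidean bound √2/2 follows immediately.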
Hence in the rest of the paper we will consider rendezvous in a basic grid, instead of the task of
approach. We use N (resp. E, S, W ) to denote the cardinal direction North (resp. East, South,
West) and an instruction like “Perform N S” means that the agent traverses one edge to the North
and then traverses one edge to the South (thereby coming back to its initial position). We denote
by D the initial (Manhattan) distance separating two agents in a basic grid. A route followed by
an agent in a basic grid corresponds to a path in the grid (i.e., a sequence of edges e1 , e2 , e3 , e4 , . . .)
that are consecutively traversed by the agent until rendezvous is done. For any integer k, we define the reverse path to the path e1 , . . . , ek as the path ek , ek−1 , . . . , e1 . We denote by C(p) the number of edge traversals performed by an agent during the execution of a procedure p.
Consider two distinct nodes u and v. We define a specific path from u to v, denoted P (u, v), as
follows. If there exists a unique shortest path from u to v, this shortest path is P (u, v). Otherwise, consider the smallest rectangle R(u,v) such that u and v are two of its corners. P (u, v) is the unique path among the shortest paths from u to v that traverses all the edges on the northern side of R(u,v) .
Note that P (u, v) = P (v, u).
An illustration of P (u, v) is given in Figure 1.
Figure 1: Some different cases for P (u, v)
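The definition of P (u, v) above can be sketched directly. The following Python code is ours, under the assumption (our convention, not stated explicitly in the text) that the y coordinate grows northward; the helper names are hypothetical:

```python
def P(u, v):
    """Canonical shortest path P(u, v) as a list of moves in {'N','E','S','W'}.
    When several shortest paths exist, the horizontal part is walked along the
    northern side of the smallest rectangle having u and v as corners."""
    (x1, y1), (x2, y2) = u, v
    y_top = max(y1, y2)
    moves = ['N'] * (y_top - y1)                           # climb to the northern row (if needed)
    moves += (['E'] if x2 > x1 else ['W']) * abs(x2 - x1)  # walk along the northern row
    moves += ['S'] * (y_top - y2)                          # descend to v (if needed)
    return moves

STEP = {'N': (0, 1), 'S': (0, -1), 'E': (1, 0), 'W': (-1, 0)}

def walk(start, moves):
    """Follow a list of moves from a start node and return the final node."""
    x, y = start
    for m in moves:
        dx, dy = STEP[m]
        x, y = x + dx, y + dy
    return (x, y)

# e.g. P((0,0),(2,2)) climbs first, then walks the northern row
assert P((0, 0), (2, 2)) == ['N', 'N', 'E', 'E']
assert P((0, 2), (2, 0)) == ['E', 'E', 'S', 'S']
```

Note that P((2, 2), (0, 0)) comes out as the reverse of P((0, 0), (2, 2)), matching the symmetry stated in the text.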
3 Idea of the algorithm
3.1 Informal Description in a Nutshell. . .
We aim at achieving rendezvous of two asynchronous mobile agents in an infinite grid and in a
deterministic way. It is well known that solving rendezvous deterministically is impossible in some
symmetric graphs (like a basic grid) unless both agents are given distinct identifiers called labels.
We use them to break the symmetry, i.e., in our context, to make the agents follow different routes.
The idea is to make each agent "read" the binary representation of its label, one bit after another from the most to the least significant bit, and, for each bit it reads, follow a route depending on the read bit. Our algorithm ensures rendezvous during some of the periods when they follow different routes, i.e.,
when the two agents process two different bits.
Furthermore, to design the routes that both agents will follow, our approach would require knowing an upper bound on two parameters, namely the initial distance between the agents and
the length (of the binary representation) of the shortest label. As we suppose that the agents
have no knowledge of these parameters, they both perform successive “assumptions”, in the sequel
called phases, in order to find out such an upper bound. Roughly speaking, each agent attempts to
estimate such an upper bound by successive tests, and for each of these tests, acts as if the upper
bound estimation was correct. Both agents first perform Phase 0. When Phase i does not lead to
rendezvous, they perform Phase i + 1, and so on. More precisely, within Phase i, the route of each
agent is built in such a way that it ensures rendezvous if 2^i is a good upper bound on the parameters
of the problem. Hence, in our approach two requirements are needed: both agents are assumed (1) to
process two different bits (i.e., 0 and 1) almost concurrently and (2) to perform Phase i = α almost at the same time, where α is the smallest integer such that the two aforementioned parameters are upper bounded by 2^α.
However, to meet these requirements, we have to face two major issues. First, since the adversary
can vary both agent speeds, the idea described above does not prevent the adversary from making the
agents always process the same type of bit at the same time. Besides, the route cost depends on the
phase number, and thus, if an agent were performing some Phase i with i exponential in the initial
distance and in the length of the binary representation of the smallest label, then our algorithm would
not be polynomial. To tackle these two issues, we use a mechanism that prevents the adversary from
making an agent execute the algorithm arbitrarily faster than the other without meeting. Each of these two issues is circumvented via a specific "synchronization mechanism". Roughly speaking, the
first one makes the agents read and process the bits of the binary representation of their labels at
quite the same speed, while the second ensures that they start Phase α at almost the same time. This
is precisely where our tour de force lies: orchestrating in a subtle manner these synchronizations in a fully asynchronous context while ensuring a polynomial cost. Now that we have described the
very high level idea of our algorithm, let us give more details.
3.2 Under the hood
The approach described above allows us to solve rendezvous when there exists an index for which
the binary representations of both labels differ. However, this is not always the case, especially when one binary representation is a prefix of the other (e.g., 100 and 1000). Hence, instead of considering its own label, each agent will consider a transformed label: the transformation, borrowed from [17], will guarantee the existence of the desired difference over the new labels. In the rest of
this description, we assume for convenience that the initial Manhattan distance D separating the
agents is at least the length of the shortest binary representation of the two transformed labels (the
complementary case adds an unnecessary level of complexity to the understanding of the intuition).
As mentioned previously, our solution (cf. Algorithm 5 in Section 5) works in phases numbered
0, 1, 2, 3, 4, . . . During Phase i (cf. Procedure Assumption called at line 3 in Algorithm 5), the agent
supposes that the initial distance D is at most 2^i and processes one by one the first 2^i bits of its transformed label: in the case where 2^i is greater than the length of the binary representation of its transformed label, the agent will consider that each of the last "missing" bits is 0. When processing a bit,
the agent executes a particular route which depends on the bit value and the phase number. The
route related to bit 0 (relying in particular on Procedure Berry called at line 9 in Algorithm 6)
and the route related to bit 1 (relying in particular on Procedure Cloudberry called at line 11 in
Algorithm 6) are obviously different and designed in such a way that if both these routes are executed
almost simultaneously by two agents within a phase corresponding to a correct upper bound, then
rendezvous occurs by the time any of them has been completed. In the light of this, if we denote by
α the smallest integer such that 2^α ≥ D, it turns out that an ideal situation would be that the agents
concurrently start phase α and process the bits at quite the same rate within this phase. Indeed,
we would then obtain the occurrence of rendezvous by the time the agents complete the process
of the jth bit of their transformed label in phase α, where j is the smallest index for which the
binary representations of their transformed labels differ. However, getting such an ideal situation in
presence of a fully asynchronous adversary appears to be really challenging. This is where the two
synchronization mechanisms briefly mentioned above come into the picture.
If the agents start Phase α approximately at the same time, the first synchronization mechanism
(cf. Procedure RepeatSeed called at line 15 in Algorithm 6) forces the adversary to make the agents process their respective bits at similar speeds within Phase α, as otherwise rendezvous would occur prematurely during this phase, before any agent processes its jth bit. This
constraint is imposed on the adversary by dividing each bit process into some predefined steps and
by ensuring that after each step s of the kth bit process, for any k ≤ 2^α , an agent follows a specific
route that forces the other agent to complete the step s of its kth bit process. This route, on which
the first synchronization is based, is constructed by relying on the following simple principle: If an
agent performs a given route X included in a given area S of the basic grid, then the other agent can
“push it” over X. In other words, unless rendezvous occurs, the agent forces the other to complete
its route X by covering S a number of times at least equal to the number of edge traversals involved in route X (each covering of S traverses all the edges of S at least once). Hence, one of
the major difficulties we have to face lies in the setting up of the second synchronization mechanism
guaranteeing that the agents start Phase α around the same time. At first glance, it might be
tempting to use an analogous principle to the one used for dealing with the first synchronization.
Indeed, if an agent a1 follows a route covering r times an area Y of the grid, such that Y is where
the first α − 1 phases of an agent a2 take place and r is the maximal number of edge traversals an
agent can make during these phases, then agent a1 pushes agent a2 to complete its first α − 1 phases
and to start Phase α. Nevertheless, a strict application of this principle to the case of the second
synchronization directly leads to an algorithm having a cost that is superpolynomial in D and the
length of the smallest label, due to a cumulative effect that does not appear for the case of the first
synchronization. As a consequence, to force an agent to start its Phase α, the second synchronization
mechanism does not depend on the kind of route described above, but on a much more complicated
route that permits an agent to “push” the second one. This works by considering the “pattern” that
is drawn on the grid by the second agent rather than just the number of edges that are traversed (cf.
Procedure Harvest called at line 1 in Algorithm 6). This is the trickiest part of our algorithm, one of the main ideas of which relies on the fact that some routes made of an arbitrarily large sequence of edge traversals can be pushed at a relatively low cost by some other routes that are of comparatively small length, provided they are judiciously chosen. Let us illustrate this point
through the following example. Consider an agent a1 following from a node v1 an arbitrarily large sequence of Xi , in which each Xi corresponds either to AĀ or BB̄, where A and B are any routes (Ā and B̄ corresponding to their respective backtracks, i.e., the sequences of edge traversals followed in the reverse order). An agent a2 starting from an initial node v2 located at a distance at most d from v1 can force agent a1 to finish its sequence of Xi (or otherwise rendezvous occurs), regardless of the number of Xi , simply by executing AĀBB̄ from each node at distance at most d from v2 . To support this claim, let us suppose by contradiction that it does not hold. At some point, agent a2 necessarily follows AĀBB̄ from v1 . However, note that if either agent starts following AĀ (resp. BB̄) from node v1 while the other is following AĀ (resp. BB̄) from node v1 , then the agents meet. Indeed, this implies that the agent that is ahead eventually follows Ā (resp. B̄) from a node v3 to v1 while the other is following A (resp. B) from v1 to v3 , which leads to rendezvous. Hence, when agent a2 starts following BB̄ from node v1 , agent a1 is following AĀ, and is not in v1 , so that it has at least started the first edge traversal of AĀ. This means that when agent a2 finishes following AĀ from v1 , a1 is following AĀ, which implies, using the same arguments as before, that they meet
before either of them completes this route. Hence, in this example, agent a2 can force a1 to complete
an arbitrarily large sequence of edge traversals with a single and simple route. Actually, our second
synchronization mechanism implements this idea (this point is refined in Section 5). This was by far the most complicated part to set up, as each part of each route in every phase had to be orchestrated very carefully to ultimately permit this low-cost synchronization while still ensuring rendezvous. However, it is through this original and novel way of moving that we finally obtain the polynomial cost.
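The meeting argument of the example above can be checked exhaustively in a small discrete sketch (ours, not from the paper). We take X = NE, model each agent by its index along the closed route XX̄ from v1, and let the adversary advance one agent by one edge per tick; meetings are detected conservatively, only as coincidences at nodes (the continuous model would also detect crossings inside edges):

```python
# Positions along the closed route X X̄ with X = NE, started at v1 = (0, 0).
positions = [(0, 0), (0, 1), (1, 1), (0, 1), (0, 0)]  # v1, after N, after E, backtrack
LAST = len(positions) - 1

def explore():
    """Enumerate all adversarial interleavings (one agent advances one edge per
    tick) where agent b starts the route from v1 while agent a is already
    strictly inside it. Return the states reachable without a meeting."""
    start = {(k, 0) for k in range(1, LAST)}  # a has a head start, b is at v1
    seen, stack = set(), list(start)
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        ia, ib = s
        if positions[ia] == positions[ib]:
            continue                           # the agents met; stop this branch
        if ia < LAST:
            stack.append((ia + 1, ib))
        if ib < LAST:
            stack.append((ia, ib + 1))
    return {s for s in seen if positions[s[0]] != positions[s[1]]}

reachable = explore()
# In every interleaving, a meeting occurs before either agent completes the route.
assert all(ia < LAST and ib < LAST for ia, ib in reachable)
```

This only abstracts the fully asynchronous adversary, but it supports the claim for the discrete skeleton of the route: no interleaving lets either agent finish the doubled route without a meeting.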
4 Basic patterns
In this section we define some sequences of moving instructions, i.e., patterns of moves, that will
serve in turn as building blocks in the construction of our rendezvous algorithm. The main roles of
these patterns are given in the next section when presenting our general solution.
4.1 Pattern Seed
Pattern Seed is involved as a subpattern in the design of all the other patterns presented in this
section. The description of Pattern Seed is given in Algorithm 1. It is made of two periods. For
a given non-negative integer x, the first period of Pattern Seed(x) corresponds to the execution
of x phases, while the second period is a complete backtrack of the path travelled during the first
period. Pattern Seed is designed in such a way that it offers some properties that are shown in
Subsubsection 6.1.2 and that are necessary to conduct the proof of correctness. One of the main purposes of this pattern is the following: starting from a node v, Pattern Seed(x) allows the agent to visit all nodes of the grid at distance at most x from v and to traverse all edges of the grid linking two nodes at distance at most x from v (informally, the procedure covers an area of radius x).
Algorithm 1 Pattern Seed(x)
1: /* First period */
2: for i ← 1; i ≤ x; i ← i + 1 do
3:    /* Phase i */
4:    Perform N (SE)^i (WS)^i (NW)^i (EN)^i
5: end for
6: /* Second period */
7: L ← the path followed by the agent during the first period
8: Backtrack by following the reverse path of L
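The covering property stated above can be simulated directly. The sketch below is ours (helper names hypothetical); it generates the moves of Pattern Seed(x) as in Algorithm 1, including the backtrack, and checks that every node at distance at most x is visited and that the agent returns to its start:

```python
STEP = {'N': (0, 1), 'S': (0, -1), 'E': (1, 0), 'W': (-1, 0)}

def seed_route(x):
    """Moves of Pattern Seed(x): first period as in Algorithm 1, then a full backtrack."""
    first = []
    for i in range(1, x + 1):  # Phase i: N (SE)^i (WS)^i (NW)^i (EN)^i
        first += ['N'] + ['S', 'E'] * i + ['W', 'S'] * i + ['N', 'W'] * i + ['E', 'N'] * i
    undo = {'N': 'S', 'S': 'N', 'E': 'W', 'W': 'E'}
    return first + [undo[m] for m in reversed(first)]

def simulate(route, start=(0, 0)):
    """Follow a route and return the final position and the set of visited nodes."""
    pos, visited = start, {start}
    for m in route:
        dx, dy = STEP[m]
        pos = (pos[0] + dx, pos[1] + dy)
        visited.add(pos)
    return pos, visited

x = 3
end, visited = simulate(seed_route(x))
assert end == (0, 0)  # the backtrack returns the agent to its start
ball = {(a, b) for a in range(-x, x + 1) for b in range(-x, x + 1) if abs(a) + abs(b) <= x}
assert ball <= visited  # every node at distance <= x is visited
```

The same simulation can also be used to count C(Seed(x)), the number of edge traversals, as the length of the generated route.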
4.2 Pattern RepeatSeed
Following the high-level description of our solution (Section 3), Pattern RepeatSeed is the basic primitive that implements the first synchronization mechanism (between two consecutive steps of a bit process). An agent a1 executing Pattern RepeatSeed(x, n) from a node u executes Pattern Seed(x) n times from node u. All along this execution, a1 stays at distance at most x from u. Besides, once the execution is over, the agent is back at u.
The description of pattern RepeatSeed is given in Algorithm 2.
Algorithm 2 Pattern RepeatSeed(x, n)
Execute n times Pattern Seed(x)
4.3 Pattern Berry
According to Section 3, Pattern Berry is used in particular to design the specific route that an agent
follows when processing bit 0. The description of Pattern Berry is given in Algorithm 3. It is made
of two periods, the second of which is a backtrack of the first one. Pattern Berry offers several
properties that are proved in Subsubsection 6.1.4 and used in the proof of correctness. Note that Pattern Berry(x, y), executed from a node u for any two integers x and y, allows an agent to perform Pattern Seed(x) from each node at distance at most y from u.
Algorithm 3 Pattern Berry(x, y)
1: /* First period */
2: Let u be the current node
3: for i ← 1; i ≤ x + y; i ← i + 1 do
4:    for j ← 0; j ≤ i; j ← j + 1 do
5:       for k ← 0; k ≤ j; k ← k + 1 do
6:          for each node v at distance k from u ordered in the clockwise direction from the North do
7:             Follow P (u, v)
8:             Execute Seed(i − j)
9:             Follow P (v, u)
10:         end for
11:      end for
12:   end for
13: end for
14: /* Second period */
15: L ← the path followed by the agent during the first period
16: Backtrack by following the reverse path of L
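The coverage property stated just before Algorithm 3 can be sanity-checked with a sketch (ours) that mirrors only the loop structure of the first period, taking u at the origin and recording which Seed radius is executed at which node:

```python
def nodes_at_distance(k):
    """Grid nodes at Manhattan distance exactly k from the origin."""
    if k == 0:
        return [(0, 0)]
    return [(a, b) for a in range(-k, k + 1) for b in range(-k, k + 1)
            if abs(a) + abs(b) == k]

def berry_seed_calls(x, y):
    """Enumerate the (node, radius) pairs at which the first period of
    Pattern Berry(x, y) executes Seed, following the loops of Algorithm 3."""
    calls = set()
    for i in range(1, x + y + 1):
        for j in range(0, i + 1):
            for k in range(0, j + 1):
                for v in nodes_at_distance(k):
                    calls.add((v, i - j))
    return calls

x, y = 2, 2
calls = berry_seed_calls(x, y)
# Stated property: Seed(x) is executed from every node at distance at most y.
for k in range(y + 1):
    for v in nodes_at_distance(k):
        assert (v, x) in calls
```

Indeed, for a node at distance k ≤ y it suffices to take j = k and i = k + x ≤ x + y in the loops, which yields a Seed(i − j) = Seed(x) execution from that node.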
4.4 Pattern Cloudberry
According to Section 3, Pattern Cloudberry is used to design the specific route that an agent follows when processing bit 1. Its description is given in Algorithm 4. As for Patterns Seed and Berry, this pattern is made of two periods, the second of which corresponds to a backtrack of the first one. Properties related to this pattern are given in Subsubsection 6.1.5. Note that Pattern Cloudberry(x, y, z, h), executed from a node u for any integers x, y, z and h, allows an agent to perform Pattern Berry(x, y) from each node at distance at most z from u.
Algorithm 4 Pattern Cloudberry(x, y, z, h)
1: /* First period */
2: Let u be the current node
3: Let U be the list of nodes at distance at most z from u, ordered in the order of the first visit when applying Seed(z) from node u
4: for i ← 0; i ≤ 2z(z + 1); i ← i + 1 do
5:    Let v be the node with index h + i (mod 2z(z + 1) + 1) in U
6:    Follow P (u, v)
7:    Execute Seed(x)
8:    Execute Berry(x, y)
9:    Follow P (v, u)
10: end for
11: /* Second period */
12: L ← the path followed by the agent during the first period
13: Backtrack by following the reverse path of L
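As a quick sanity check (ours) of the index arithmetic in Algorithm 4: the list U contains exactly 2z(z + 1) + 1 nodes (the ball of radius z in the Manhattan metric), and for any shift h the loop visits every index of U exactly once:

```python
def ball_size(z):
    """Number of grid nodes at Manhattan distance at most z from a given node."""
    return len([(a, b) for a in range(-z, z + 1) for b in range(-z, z + 1)
                if abs(a) + abs(b) <= z])

for z in range(1, 8):
    n = 2 * z * (z + 1) + 1
    # |U| in Algorithm 4: the ball of radius z contains exactly 2z(z+1)+1 nodes...
    assert ball_size(z) == n
    # ...and, for any shift h, the indices h, h+1, ..., h+2z(z+1) taken modulo
    # 2z(z+1)+1 enumerate every index of U exactly once.
    for h in (0, 3, n - 1):
        assert sorted((h + i) % n for i in range(n)) == list(range(n))
```

Hence each node at distance at most z from u is reached exactly once by the loop, regardless of the starting index h.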
5 Main Algorithm
In this section, we give the formal description of our solution for rendezvous in a basic grid. We also give the main objectives of the involved subroutines and explain how they work at a high level. The main algorithm that solves rendezvous in a basic grid is Algorithm RV (shown in Algorithm 5). As mentioned in Subsection 3.2, we use the label of an agent only once it has been
transformed. Let us describe this transformation that is borrowed from [17]. Let (b0 b1 . . . bn−1 ) be the
binary representation of the label of an agent. We define its transformed label as the binary sequence
(b0 b0 b1 b1 . . . bn−1 bn−1 01). This transformation yields the feature that is highlighted by
the following remark.
Remark 2. Given two distinct labels la and lb , their transformed labels are never prefixes of each
other. In other words, there exists an index j such that the jth bit of the transformed label of la is
different from the jth bit of the transformed label of lb .
As explained in Section 3, we need such a feature because our solution requires that at some
point both agents follow different routes by processing different bit values.
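Remark 2 can be checked mechanically. The sketch below is ours (`transform` is our name for the transformation described above); it verifies that the transformed labels of distinct labels are never prefixes of each other:

```python
from itertools import combinations

def transform(label):
    """Transformed label borrowed from [17]: double each bit of the binary
    representation, then append the suffix 01."""
    bits = format(label, 'b')
    return ''.join(b + b for b in bits) + '01'

# e.g. label 5 = 101 in binary is transformed into 11001101
assert transform(5) == '11001101'

# Remark 2: transformed labels of distinct labels are never prefixes of each other.
labels = range(1, 65)
for a, b in combinations(labels, 2):
    ta, tb = transform(a), transform(b)
    assert not ta.startswith(tb) and not tb.startswith(ta)
```

The reason is that every even-indexed pair of a transformed label is a doubled bit (00 or 11), whereas the label ends with the pair 01, so a shorter transformed label cannot coincide with a prefix of a longer one.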
Algorithm 5 RV
1: d ← 1
2: while agents have not met yet do
3:    Execute Assumption(d)
4:    d ← 2d
5: end while
Algorithm RV makes use of a subroutine, Procedure Assumption. When an agent executes this procedure with a parameter α that is a "good" assumption, i.e., one that upper bounds both the initial distance D and the value j of the smallest bit position for which the two transformed labels differ, we have the guarantee that rendezvous occurs by the end of this execution. In the rest of this section, we assume that α is the smallest good assumption upper bounding D and j.
The code of Procedure Assumption is given in Algorithm 6. It makes use, for technical reasons,
of the sequences ρ and r that are defined below.
ρ(1) = 1 and, ∀ power of two i ≥ 2, ρ(i) = r(i/2) + (3i/2) · ((i/2) · (i(i/2 + 1) + 1) + 1)
∀ power of two i, r(i) = ρ(i) + 3i
Procedure Assumption can be divided into two parts. The first part consists of the execution of
Procedure Harvest (line 1 of Algorithm 6) and corresponds to the second synchronization mechanism
mentioned in Section 3. The main feature of this procedure is the following: when the earlier agent
finishes the execution of Harvest(α) within the execution of Assumption(α), we have the guarantee
that the later agent has at least started to execute Assumption with parameter α (actually, as explained below, we even have the guarantee that most of Harvest(α) has been executed by the
later agent). Procedure Harvest is presented below. The second part of Procedure Assumption
(cf. lines 2 − 19 of Algorithm 6) consists in processing the bits of the transformed label one by one.
More precisely when processing a given bit in a call to Procedure Assumption(d), the agent acts in
steps 0, 1, . . . , 2d(d + 1): After each of these steps, the agent executes Pattern RepeatSeed whose
role is described below. In each of these steps, the agent executes Berry (resp. Cloudberry) if the
bit it is processing is 0 (resp. 1). These patterns of moves (cf. Algorithms 3 and 4 in Section 4)
are made in such a way that rendezvous occurs by the time any agent finishes the process of its
jth bit in Assumption(α) if we have the following synchronization property. Each time any of both
agents starts executing a step s during the process of its jth bit in Assumption(α), the other agent
has finished the execution of either step s − 1 in the jth bit process of Assumption(α) if s > 0,
or the last step of the (j − 1)th bit process of Assumption(α) if s = 0 (j > 0 in view of the label
transformation given above). To obtain such a synchronization, an agent executes what we called
the first synchronization mechanism in the previous section (cf. line 15 in Algorithm 6) after each
step of a bit process. Actually, this mechanism relies on Procedure RepeatSeed, the code of which is given in Algorithm 2. Note that the total number of steps, and thus of executions of RepeatSeed, in Assumption(α) is 2α^2(α + 1) + α. For every 0 ≤ i ≤ 2α^2(α + 1) + α, the ith execution of RepeatSeed in Assumption(α) by an agent forces the other agent to finish the execution of its ith step in Assumption(α) by repeating a pattern Seed (its main purpose is described just above its code given by Algorithm 1): with the appropriate parameters, this pattern Seed covers any pattern (Berry or Cloudberry) made in the ith step of Assumption(α), and the number of times it is repeated is at least the maximal number of edge traversals one can make in the ith step of Assumption(α).
Algorithm 7 gives the code of Procedure Harvest. As in Procedure Assumption, it makes use,
for technical reasons, of two sequences ρ and r that are defined above. Procedure Harvest is made of
two parts: the executions of Procedure PushPattern (lines 1−3 of Algorithm 7), and the calls to the
patterns Cloudberry and RepeatSeed (lines 4 − 5 of Algorithm 7). When Harvest is executed with
parameter α (which is a good assumption), the first part ensures that the later agent has at least
completed every execution of Assumption with a parameter that is smaller than α, while the second
part ensures that the later agent has completed almost the entire execution of Harvest(α) (more
precisely, when the earlier agent finishes the second part, we have the guarantee that it remains for
the later agent to execute at most the last line before completing its own execution of Harvest(α)).
Algorithm 6 Assumption(d)
1: Execute Harvest(d)
2: radius ← r(d)
3: i ← 1
4: while i ≤ d do
5:    j ← 0
6:    while j ≤ 2d(d + 1) do
7:       // Begin of step j
8:       if the length of the transformed label is strictly smaller than i, or its ith bit is 0 then
9:          Execute Berry(radius, d)
10:      else
11:         Execute Cloudberry(radius, d, d, j)
12:      end if
13:      // End of step j
14:      radius ← radius + 3d
15:      Execute RepeatSeed(radius, C(Cloudberry(radius − 3d, d, d, j)))
16:      j ← j + 1
17:   end while
18:   i ← i + 1
19: end while
To give further details on Procedure Harvest, let us first describe Procedure PushPattern (its code is given in Algorithm 8). When the earlier agent completes the execution of PushPattern(2i, d)
with i some power of two, assuming that the later agent had already completed Assumption(i), we
have the guarantee that the later agent has completed its execution of Assumption(2i). To ensure
this, we regard the execution of Assumption(2i) as a sequence of calls to basic patterns (namely
RepeatSeed, Berry and Cloudberry), which is formally defined in Definition 3. This sequence is
what we meant when talking about “the pattern drawn on the grid” in Subsection 3.2. For each
basic pattern p1 in the sequence, the earlier agent executes another pattern p2 at the end of which we
ensure that the later agent has completed p1 . If p1 is either Pattern Berry or Pattern Cloudberry,
then p2 is Pattern RepeatSeed: we use the same idea here as for the first synchronization mechanism.
If p1 is Pattern RepeatSeed, then p2 is Pattern Berry, relying on a property of the route XX̄ (with X any non-empty route) introduced in the last paragraph of Subsection 3.2: if both agents follow this route concurrently from the same node, then they meet. Pattern Seed can be seen as such a route,
and Procedure Berry (whose code is shown in Algorithm 3) consists in executing Pattern Seed from
each node at distance at most α. Hence, unless they meet, the later agent completes its execution
of Pattern RepeatSeed before the earlier one starts executing Seed from the same node. Note that
P ushP attern uses as many patterns as the number of basic patterns in the sequence it is supposed
to push: this and the fact of doubling the value of the input parameter of Procedure Assumption in
Algorithm 5 contribute in particular to keep the polynomiality of our solution.
Thus, once the earlier agent completes the first part of Harvest(α), the later one has at least started the execution of Assumption(α) (and thus of the first part of Harvest(α)). At first glance, it might seem that we have merely shifted the problem. Indeed, the number of edge traversals required to complete all the executions of Assumption prior to Assumption(α) is roughly the same as, if not higher than, the number of edge traversals required to execute the first part of Harvest(α). Hence the difference between the two agents in terms of edge traversals has not been reduced here. However, a crucial and decisive step forward has nonetheless been made: contrary to the series of Assumption calls executed before Assumption(d), the first part of Harvest(α) can be pushed at low cost via the execution of Pattern Cloudberry (line 4 of Algorithm 7) by the earlier agent. Indeed, this pattern corresponds to the kind of route, described at the end of Subsection 3.2 for the second synchronization mechanism, whose length is small compared to that of the sequence of patterns it can push. The first part of Harvest(α) can be viewed as a “large” sequence of Patterns Seed and Berry; however, Seed and Berry can be seen (by analogy with Subsection 3.2) as routes of the form AA and BB respectively, while Pattern Cloudberry executes Seed followed by Berry (i.e., AABB) once from at least each node at distance at most α.
Note that when the earlier agent has completed the execution of Pattern Cloudberry in Harvest(α), the later agent has at least started the execution of Pattern Cloudberry in Harvest(α). Hence, there is still a difference between both agents, but it has been considerably reduced: it is now small enough to be handled easily afterwards.
Algorithm 7 Harvest(d)
1: for i ← 1; i < d; i ← 2i do
2:    Execute PushPattern(i, d)
3: end for
4: Execute Cloudberry(ρ(d), d, d, 0)
5: Execute RepeatSeed(r(d), C(Cloudberry(ρ(d), d, d, 0)))
Definition 3 (Basic and Perfect Decomposition). Given a call P to an algorithm, we say that the basic decomposition of P , denoted by BD(P ), is P itself if P corresponds to a basic pattern, the type of which belongs to {RepeatSeed; Berry; Cloudberry}. Otherwise, if during its execution P makes no call, then BD(P ) = ⊥; else BD(P ) = BD(x1 ), BD(x2 ), . . . , BD(xn ), where x1 , x2 , . . . , xn is the sequence (in the order of execution) of all the calls in P that are children of P . We say that BD(P ) is a perfect decomposition if it does not contain any ⊥.
Remark 4. The basic decomposition of every call to Procedure Assumption is perfect.
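The recursive flattening of Definition 3 can be sketched as follows; calls are modeled as (type, children) pairs, and all call names other than the three basic pattern types are illustrative.

```python
# Sketch of the basic decomposition BD(P) of Definition 3. A call is
# modeled as a (type, children) pair; non-basic call names are invented.

BASIC = {"RepeatSeed", "Berry", "Cloudberry"}
BOTTOM = "⊥"

def bd(call):
    kind, children = call
    if kind in BASIC:
        return [call]          # a basic pattern is its own decomposition
    if not children:
        return [BOTTOM]        # a non-basic call that makes no call
    out = []
    for child in children:     # children in their order of execution
        out.extend(bd(child))
    return out

def is_perfect(decomposition):
    return BOTTOM not in decomposition
```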
Algorithm 8 PushPattern(i, d)
1: for each p in BD(Assumption(i)) do
2:    if p is a call to pattern RepeatSeed with value x as first parameter then
3:       Execute Berry(x, d)
4:    else
5:       /* pattern p is either a call to pattern Berry or a call to pattern Cloudberry (in view of the above remark) and has at least two parameters */
6:       Let x (resp. y) be the first (resp. the second) parameter of p
7:       Execute RepeatSeed(d + x + 2y, C(Cloudberry(x, y, y, 0)))
8:    end if
9: end for
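The substitution performed at lines 2–7 of Algorithm 8 can be sketched as a mapping from each basic pattern of BD(Assumption(i)) to the pattern the earlier agent executes in its place; C(·) is again a hypothetical cost stub.

```python
# Sketch of the mapping applied by PushPattern(i, d) (Algorithm 8) to
# each basic pattern p of BD(Assumption(i)). Patterns are plain tuples.

def cost(pattern):
    return 1  # hypothetical stand-in for the exact cost function C(.)

def pusher(p, d):
    if p[0] == "RepeatSeed":
        x = p[1]                      # first parameter of the RepeatSeed
        return ("Berry", x, d)
    # p is a Berry or a Cloudberry: x, y are its first two parameters
    x, y = p[1], p[2]
    return ("RepeatSeed", d + x + 2 * y, cost(("Cloudberry", x, y, y, 0)))

def push_pattern(decomposition, d):
    return [pusher(p, d) for p in decomposition]
```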
6 Proof of correctness and cost analysis
The purpose of this section is to prove that Algorithm RV ensures rendezvous in the basic grid at
cost polynomial in D (the initial distance between the agents), and l, the length of the shortest
label. To this end, the section is made of four subsections. The first two subsections are dedicated
to technical results about the basic patterns presented in Section 4 and synchronization properties
of Algorithm RV, which are used in turn to carry out the proof of correctness and the cost analysis
of Algorithm RV that are presented in the last two subsections.
6.1 Properties of the basic patterns
This subsection is dedicated to the presentation of some technical materials about the basic patterns
described in Section 4, which will be used in the proof of correctness of Algorithm 5 solving rendezvous
in a basic grid.
6.1.1 Vocabulary
Before going any further, we need to introduce some extra vocabulary in order to facilitate the
presentation of the next properties and lemmas.
Definition 5. A pattern execution A precedes another pattern execution B if the beginning of A
occurs before the beginning of B.
Definition 6. Two pattern executions A and B are concurrent iff:
• pattern execution A does not finish before pattern execution B starts
• pattern execution B does not finish before pattern execution A starts
By abuse of language, in the rest of this paper we will sometimes say “a pattern” instead of “a pattern execution”.
Hereafter we say that a pattern A concurrently precedes a pattern B, iff A and B are concurrent,
and A precedes B.
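Modeling a pattern execution as a time interval, Definitions 5 and 6 and the derived notion of concurrent precedence can be sketched as follows (treating an execution that finishes exactly when the other starts as non-concurrent, which is one possible reading of the definition):

```python
# Sketch of Definitions 5 and 6: a pattern execution is modeled as a
# time interval (start, end) with start < end.

def precedes(a, b):
    # A precedes B if the beginning of A occurs before the beginning of B.
    return a[0] < b[0]

def concurrent(a, b):
    # Neither execution finishes before (or exactly when) the other starts.
    return not (a[1] <= b[0]) and not (b[1] <= a[0])

def concurrently_precedes(a, b):
    return concurrent(a, b) and precedes(a, b)
```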
Definition 7. A pattern A pushes a pattern B in a set of executions E, if for every execution of E
in which B concurrently precedes A, agents meet before the end of the execution of B, or B finishes
before A.
In the sequel, given two sequences of moving instructions X and Y , we will say that X is a prefix of Y if Y can be viewed as the execution of the sequence X followed by another, possibly empty, sequence.
6.1.2 Pattern Seed
In this subsubsection, we show some properties related to Pattern Seed.
Proposition 8 follows by induction on the input parameter of Pattern Seed and Proposition 9
follows from the description of Algorithm 1.
Proposition 8. Let x be an integer. Starting from a node v, Pattern Seed(x) guarantees the following properties:
1. it allows the executing agent to visit all nodes of the grid at distance at most x from v;
2. it allows the executing agent to traverse all edges of the grid linking two nodes at distance at most x from v.
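Algorithm 1 is not reproduced here, so the following sketch builds an illustrative route (not the actual Seed route) that satisfies the two coverage guarantees of Proposition 8 on the grid, which is convenient for experimenting with the statements of this subsubsection.

```python
# An illustrative route with the coverage guarantees of Proposition 8:
# starting from v, visit every node at distance at most x and traverse
# every edge joining two such nodes. Not Algorithm 1 itself.

def ball(v, x):
    vx, vy = v
    return {(vx + i, vy + j)
            for i in range(-x, x + 1)
            for j in range(-x, x + 1)
            if abs(i) + abs(j) <= x}

def ball_edges(v, x):
    nodes = ball(v, x)
    edges = set()
    for (a, b) in nodes:
        for (c, d) in ((a + 1, b), (a, b + 1)):
            if (c, d) in nodes:
                edges.add(((a, b), (c, d)))
    return nodes, edges

def covering_route(v, x):
    # Walk every edge of the ball: reach one endpoint along an axis-wise
    # shortest path, then cross the edge and continue from there.
    def shortest_path(src, dst):
        path = [src]
        cx, cy = src
        while (cx, cy) != dst:
            if cx != dst[0]:
                cx += 1 if dst[0] > cx else -1
            else:
                cy += 1 if dst[1] > cy else -1
            path.append((cx, cy))
        return path
    nodes, edges = ball_edges(v, x)
    route = [v]
    for (a, b) in sorted(edges):
        route.extend(shortest_path(route[-1], a)[1:])
        route.append(b)
    return route
```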
Proposition 9. Given two integers x1 ≤ x2 , the first period of Pattern Seed(x1 ) is a prefix of the
first period of Pattern Seed(x2 ).
Lemma 10. Let x1 and x2 be two integers such that x1 ≤ x2 . Let a1 and a2 be two agents executing
respectively Patterns Seed(x1 ) and Seed(x2 ) both from the same node such that the execution of
Pattern Seed(x1 ) concurrently precedes the execution of Pattern Seed(x2 ). Let t1 (resp. t2 ) be the
time when agent a1 (resp. a2 ) completes the execution of Pattern Seed(x1 ) (resp. Seed(x2 )). Agents
a1 and a2 meet by time min(t1 , t2 ).
Proof. Consider a node u and a first agent a1 executing Pattern Seed(x1 ) from u with x1 any integer.
Suppose that the execution of Seed(x1 ) by a1 concurrently precedes the execution of Pattern Seed(x2 )
by another agent a2 still from node u with x1 ≤ x2 .
According to Proposition 9, the first period of Seed(x1 ) is a prefix of the first period of Pattern Seed(x2 ). If the path followed by agent a1 during its execution of Seed(x1 ) is e1 , e2 , . . . , en ,
e1 , e2 , . . . , en (the overlined part of the path corresponds to the backtrack), then the path followed
by agent a2 during the execution of Pattern Seed(x2 ) is e1 , e2 , . . . , en , s, e1 , e2 , . . . , en , s where s corresponds to the edges traversed at a distance ∈ {x1 + 1; . . . ; x2 }. When a2 starts executing the path
e1 , e2 , . . . , en , a1 is on the path e1 , e2 , . . . , en , e1 , e2 , . . . , en . Thus, either a2 catches a1 when the latter
is following e1 , e2 , . . . , en , or they meet while a1 follows e1 , e2 , . . . , en .
Thus, if the execution of Seed(x1 ) by a1 concurrently precedes the execution of Seed(x2 ) by agent
a2 both executed from the same node, agents meet by the end of these executions.
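Under the simplifying assumption of synchronous agents moving one step per time unit, the head-on meeting argument of this proof can be replayed numerically. The period of Seed(x) is modeled below as x unit steps followed by their backtrack, a straight-line stand-in for the real routes.

```python
# Synchronous replay of the meeting argument of Lemma 10: the period of
# Seed(x) is modeled as x steps east then x steps west (the backtrack).

def positions(x, start_time, horizon):
    # 1-D position over time of an agent starting its period at start_time.
    pos, out = 0, {}
    for t in range(horizon + 1):
        out[t] = pos
        step = t - start_time
        if 0 <= step < x:
            pos += 1            # outward part of the period
        elif x <= step < 2 * x:
            pos -= 1            # backtrack
    return out

def meeting_time(x1, x2, delay):
    # a1 starts at time 0, a2 appears at the common start node at time
    # `delay`; 1 <= delay < 2*x1 makes the executions concurrent.
    horizon = delay + 2 * x2
    p1 = positions(x1, 0, horizon)
    p2 = positions(x2, delay, horizon)
    for t in range(delay, horizon):
        if p1[t] == p2[t]:
            return t
        if p1[t] == p2[t + 1] and p1[t + 1] == p2[t]:
            return t + 1        # the agents cross inside an edge
    return None
```

In every concurrent scenario with x1 ≤ x2, the simulated meeting happens no later than time 2·x1, i.e., by the time a1 completes its period, in line with the bound min(t1, t2) of the lemma.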
6.1.3 Pattern RepeatSeed
This subsubsection is dedicated to some properties of Pattern RepeatSeed. Informally speaking,
Lemmas 11 and 12 describe the fact that Pattern RepeatSeed pushes respectively Pattern Berry
and Cloudberry when it is given appropriate parameters.
Lemma 11. Consider two nodes u and v separated by a distance δ. If Pattern Berry(x1 , y) is
executed from node v and Pattern RepeatSeed(x2 , n) is executed from node u with x1 , x2 , y and n
integers such that x2 ≥ x1 + y + δ and n ≥ C(Berry(x1 , y)), then Pattern RepeatSeed(x2 , n) pushes
Pattern Berry(x1 , y).
Proof. Assume that, in the grid, there are two agents a1 and a2 . Denote by u and v their respective initial positions. Suppose that u and v are separated by a distance δ. Assume that
agent a1 starts executing Pattern RepeatSeed(x2 , n) from node u and agent a2 performs Pattern
Berry(x1 , y) on node v (with n ≥ C(Berry(x1 , y)) and x2 ≥ x1 + y + δ). Also suppose that Pattern
Berry(x1 , y) concurrently precedes Pattern RepeatSeed(x2 , n). Let us suppose by contradiction that RepeatSeed(x2 , n) does not push Berry(x1 , y), which means, by Definition 7, that at the end of
the execution of RepeatSeed(x2 , n) by a1 , agents have not met and a2 has not finished executing its
Berry(x1 , y).
When executing its Berry(x1 , y), agent a2 cannot be at a distance greater than x1 + y from its initial position, and cannot be at a distance greater than δ + x1 + y from node u. Besides, in view
of Proposition 8, each Pattern Seed(x2 ) (which composes Pattern RepeatSeed(x2 , n)) from node u
allows to visit all nodes and to traverse all edges at distance at most x2 from node u. Thus, each
Pattern Seed(x2 ) executed from node u allows to visit all nodes and to traverse all edges (although
not necessarily in the same order) that are traversed during the execution of Pattern Berry(x1 , y)
from node v.
Consider the position of agent a2 when a1 starts executing any of the Seed(x2 ) which compose
Pattern RepeatSeed(x2 , n), and when a1 has completed it. If a2 has not completed a single edge
traversal, then whether it was in a node or traversing an edge, it has met a1 which traverses every
edge a2 traverses during its execution of Pattern Berry(x1 , y). As this contradicts our hypothesis,
each time a1 completes one of its executions of Pattern Seed(x2 ), a2 has completed at least an edge
traversal. As agent a1 executes n ≥ C(Berry(x1 , y)) times Pattern Seed(x2 ) then a2 traverses at
least C(Berry(x1 , y)) edges before a1 finishes executing its RepeatSeed(x2 , n). As C(Berry(x1 , y)) is
the number of edge traversals in Berry(x1 , y), when a1 finishes executing Pattern RepeatSeed(x2 , n),
a2 has finished executing its Pattern Berry(x1 , y), which contradicts our assumption and proves the
lemma.
Lemma 12. Consider two nodes u and v separated by a distance δ. If Pattern Cloudberry(x1 , y, z, h)
is executed from node v (with x1 , y, z and h integers) and Pattern RepeatSeed(x2 , n) is executed from
u such that x2 ≥ x1 + y + z + δ and n ≥ C(Cloudberry(x1 , y, z, h)) then Pattern RepeatSeed(x2 , n)
pushes Pattern Cloudberry(x1 , y, z, h).
Proof. Using similar arguments to those used in the proof of Lemma 11, we can prove Lemma 12.
6.1.4 Pattern Berry
This subsubsection is dedicated to the properties of Pattern Berry. Informally speaking, Lemma 14 states that Pattern Berry can push Pattern RepeatSeed when it is given appropriate parameters. Proposition 13 and Lemma 15 are respectively analogous to Proposition 9 and Lemma 10.
According to Algorithm 3, we have the following proposition.
Proposition 13. Given four integers x1 , y1 , x2 , y2 such that x1 + y1 ≤ x2 + y2 , the first period of Pattern Berry(x1 , y1 ) is a prefix of the first period of Pattern Berry(x2 , y2 ).
Lemma 14. Consider two nodes u and v separated by a distance δ. Let RepeatSeed(x1 , n) and
Berry(x2 , y) be two patterns respectively executed from nodes u and v with x1 , x2 , y and n integers.
If y ≥ δ and x1 ≤ x2 then Pattern Berry(x2 , y) pushes Pattern RepeatSeed(x1 , n).
Proof. Assume that there are two agents a1 and a2 initially separated by a distance δ. Assume that their respective initial positions are node v and node u. Agent a2 executes Pattern RepeatSeed(x1 , n) centered on u with x1 and n any integers. This execution of Pattern RepeatSeed(x1 , n) concurrently precedes the execution of Pattern Berry(x2 , y) by a1 with y ≥ δ and x1 ≤ x2 . When executing this pattern, agent a1 performs Pattern Seed(x2 ) from each node at distance at most y from v. Since y ≥ δ, at some point a1 executes Pattern Seed(x2 ) centered on node u. Since x2 ≥ x1 , by Lemma 10, if a2 has not finished executing its RepeatSeed(x1 , n) when a1 starts executing Pattern Seed(x2 ) from u, then the agents meet by the end of the latter.
Hence, to avoid rendezvous, the adversary must choose an execution in which the speed of agent a2 is such that it completes all executions of Pattern Seed(x1 ) inside RepeatSeed(x1 , n) before a1 starts the execution of Pattern Seed(x2 ) centered on u.
Let t1 (resp. t2 ) be the time when agent a1 (resp. a2 ) completes its execution of Pattern
Berry(x2 , y) (resp. RepeatSeed(x1 , n)). Thus, if the execution of Pattern RepeatSeed(x1 , n) by a2
concurrently precedes the execution of Pattern Berry(x2 , y) by agent a1 , either t2 ≤ t1 or the agents
meet by time min(t1 , t2 ).
Lemma 15. Consider two agents a1 and a2 executing respectively Patterns Berry(x1 , y1 ) and
Berry(x2 , y2 ) both from node u with x1 , x2 , y1 and y2 integers such that x2 + y2 ≥ x1 + y1 . Suppose that the execution of Berry(x1 , y1 ) by a1 concurrently precedes the execution of Berry(x2 , y2 )
by a2 . Let t1 (resp. t2 ) be the time when agent a1 (resp. a2 ) completes its execution of Pattern
Berry(x1 , y1 ) (resp. Berry(x2 , y2 )). Agents a1 and a2 meet by time min(t1 , t2 ).
Proof. Consider a node u and a first agent a1 executing Pattern Berry(x1 , y1 ) from u with x1 and
y1 two integers. Suppose that the execution of Pattern Berry(x1 , y1 ) by a1 concurrently precedes
an execution of Pattern Berry(x2 , y2 ) by another agent a2 still from node u with x2 + y2 ≥ x1 + y1 .
This proof is similar to the proof of Lemma 10. According to Proposition 13, if the path followed
by agent a1 during its execution of Berry(x1 , y1 ) is e1 , e2 , . . . , en , e1 , e2 , . . . , en (the overlined part of
the path corresponds to the backtrack), then the path followed by agent a2 during the execution of
Pattern Berry(x2 , y2 ) is e1 , e2 , . . . , en , s, e1 , e2 , . . . , en , s where s corresponds to the edges traversed
from the (x1 + y1 + 1)-th iteration of the main loop of Pattern Berry to its (x2 + y2 )-th iteration.
Thus, either a2 catches a1 when the latter is following e1 , e2 , . . . , en , or they meet while a2 follows
e1 , e2 , . . . , en .
Let t1 (resp. t2 ) be the time when agent a1 (resp. a2 ) completes its execution of Pattern
Berry(x1 , y1 ) (resp. Berry(x2 , y2 )). In the same way as in the proof of Lemma 10, if the execution
of Berry(x1 , y1 ) by a1 concurrently precedes the execution of Berry(x2 , y2 ) by agent a2 , both executed
from the same node, the agents meet by time min(t1 , t2 ).
6.1.5 Pattern Cloudberry
Informally speaking, the following lemma highlights the fact that Pattern Cloudberry can push “a
lot of basic patterns” under some conditions. In other words, we can force an agent to make a lot of
edge traversals “at relative low cost”.
Lemma 16. Consider two nodes u and v separated by a distance δ. Consider a sequence S of Patterns
RepeatSeed and Berry executed from u, and a Pattern Cloudberry(x, y, z, h) executed from v (with
x, y, z and h four integers) such that z ≥ δ and the execution of S concurrently precedes the execution
of Pattern Cloudberry(x, y, z, h). If, for each Pattern RepeatSeed R and each Pattern Berry B belonging to S, x + y is greater than or equal to the sum of the parameters of B, and x is greater than or equal to the first parameter of R, then the execution of Pattern Cloudberry(x, y, z, h) from v pushes S.
Proof. Let a2 be an agent executing a sequence S of Patterns RepeatSeed and Berry from a node u.
Suppose that there exist two integers x1 and y1 such that each Pattern Berry B inside the sequence
is assigned parameters the sum of which is at most x1 + y1 , and such that each Pattern RepeatSeed
R of the sequence is assigned a first parameter which is at most x1 . Let v be another node separated
from u by a distance δ. Suppose that another agent a1 executes Pattern Cloudberry(x1 , y1 , z, h)
from v, where z and h are two integers such that z ≥ δ.
In order to prove that the execution of Pattern Cloudberry(x1 , y1 , z, h) by a1 pushes the sequence
of Patterns S, let us suppose by contradiction that there exists an execution in which S concurrently
precedes Cloudberry(x1 , y1 , z, h), and that by the end of the execution of Cloudberry(x1 , y1 , z, h) by a1 , a2 has neither met a1 nor finished executing its whole sequence of patterns.
According to Algorithm Cloudberry, when executing Cloudberry(x1 , y1 , z, h), a1 executes Pattern
Seed(x1 ) followed by Pattern Berry(x1 , y1 ) from each node at distance at most z from v. As z ≥ δ, during
its execution of Cloudberry(x1 , y1 , z, h), a1 follows P (v, u), executes Pattern Seed(x1 ) (denoted by
p1 ) and then Pattern Berry(x1 , y1 ) (denoted by p2 ) both from node u. In order to prove that the
execution of Cloudberry(x1 , y1 , z, h) by a1 pushes the execution of S by a2 , we are going to prove
that if a2 has not finished executing S when a1 starts executing p1 and p2 , agents meet. This will
imply that the adversary has to make a2 complete S before a1 starts executing p1 and p2 in order
to prevent the agents from meeting, and will thus prove the lemma.
By assumption, a2 has not finished executing S when a1 arrives on u to execute p1 and p2 . Let
us consider what it can be executing at this moment. If it is executing Pattern Seed(x2 ) with x2 any
integer, then by assumption, x2 ≤ x1 and by Lemma 10, agents meet by the end of the execution of
p1 , which contradicts the assumption that agents do not meet by the end of Cloudberry(x1 , y1 , z, h).
It means that when a1 starts executing p1 , a2 is executing Pattern Berry(x2 , y2 ) for any integers x2
and y2 such that x2 + y2 ≤ x1 + y1 . After p1 , a1 executes p2 . By Lemma 15, if a2 is still executing
Pattern Berry(x2 , y2 ) for any integers x2 and y2 such that x2 + y2 ≤ x1 + y1 (the same as above, or
another) then the agents meet by the end of the execution of p2 which contradicts our assumption
once again. As a consequence, when a1 starts executing p2 , a2 is executing Pattern Seed(x3 ) for an
integer x3 ≤ x1 . Denote by p3 this pattern, and remember that a2 can not have started it before
a1 starts executing p1 . Moreover, when a1 starts executing p2 , a2 can not be in u as it is the node
where a1 starts p2 , thus it has at least started traversing the first edge of p3 . Hence, p1 concurrently
precedes p3 , and p1 ends before p3 .
By Algorithm Seed, like in the proof of Lemma 10, we can denote by e1 , . . . , en , e1 , . . . , en the
route followed by a2 when executing p3 and by e1 , . . . , en , s, e1 , . . . , en , s the route followed by a1 when
executing p1 , where s corresponds to edges traversed at a distance belonging to {x3 + 1; . . . ; x1 }. Remark that, according to the definition of a backtrack, e1 , . . . , en , s = s, e1 , . . . , en . Consider
the moment t1 when a2 finishes the first period of p3 and begins the second one. It has just traversed
e1 , . . . , en , and is about to execute e1 , . . . , en . At this moment, a1 can not have traversed the edges
e1 , . . . , en , or else agents have met by t1 , which would contradict our assumption. However, as p1
is completed before p3 , a1 must finish executing s, e1 , . . . , en before a2 finishes executing e1 , . . . , en
which implies that agents meet by the end of the execution of p1 and contradicts once again the
hypothesis that they do not meet by the end of p2 .
So, in every case, it contradicts the assumption that by the end of the execution of Pattern
Cloudberry(x1 , y1 , z, h), a2 neither has met a1 nor has finished executing S. Hence, the execution of
Pattern Cloudberry(x1 , y1 , z, h) by a1 pushes the execution of S by a2 , and the lemma holds.
6.2 Agents synchronizations
We remind the reader that D is the initial distance separating the two agents in the basic grid.
The aim of this subsection is to introduce and prove several synchronization properties that our algorithms offer (cf. Lemmas 20 and 21). By “synchronization” we mean that if one agent has completed some part of its rendezvous algorithm, then either it must have met the other agent, or this other agent has also completed some part (not necessarily the same one) of its algorithm, i.e., it must have made progress.
To prove Lemmas 20 and 21, we first need to show some more technical results—Lemmas 17, 18,
and 19.
Lemma 17. Let u and v be the two nodes initially occupied by the agents a2 and a1 , respectively. Let d1 and d2 ≥ D be two powers of two, not necessarily different from each other. If agent a2 executes Procedure Assumption(d1 ) from node u and agent a1 executes Procedure PushPattern(d1 , d2 ) from node v, then Procedure PushPattern(d1 , d2 ) pushes Procedure Assumption(d1 ).
Proof. Consider two agents a1 and a2 whose initial nodes are separated by a distance D. Assume that a2 executes Procedure Assumption(d1 ) with d1 any power of two, and that a1 executes PushPattern(d1 , d2 ) with d2 ≥ D any other power of two. Assume by contradiction that the execution of Assumption(d1 ) by agent a2 concurrently precedes the execution of Procedure PushPattern(d1 , d2 ) by a1 , and that by the end of the execution of the latter, the agents have not met and the execution of Procedure Assumption(d1 ) is not completed.
According to Algorithm 8, there are as many basic patterns (from {RepeatSeed; Berry; Cloudberry})
in BD(P ushP attern(d1 , d2 )) as in BD(Assumption(d1 )). We denote by n this number of basic patterns. Each basic pattern inside BD(P ushP attern(d1 , d2 )) and BD(Assumption(d1 )) is
given an index between 1 and n according to their order of appearance. According to Remark 4,
BD(Assumption(d1 )) is perfect. This means that when agent a2 starts the execution of Assumption(d1 ),
this agent starts the execution of the first basic pattern in BD(Assumption(d1 )), that when agent a2
completes the execution of Assumption(d1 ), it completes the execution of the n-th basic pattern in
BD(Assumption(d1 )), and that, for any integer i between 1 and n − 1, agent a2 does not make any
edge traversal between the i-th and the (i + 1)-th basic pattern in BD(Assumption(d1 )). Every edge
traversal agent a2 makes during the execution of Procedure Assumption(d1 ) is performed during one
of the basic patterns inside BD(Assumption(d1 )). Remark that BD(P ushP attern(d1 , d2 )) is also
perfect.
Suppose that for any integer i between 1 and n, by the end of the execution of the i-th pattern
inside BD(P ushP attern(d1 , d2 )), agents have met or the execution by a2 of the i-th pattern inside
BD(Assumption(d1 )) is over. We get a contradiction, as it means that, by the end of the execution
of Procedure P ushP attern(d1 , d2 ) by a1 (and thus by the end of the n-th pattern of BD(P ushP attern(d1 , d2 ))), agents have met or the execution of the n-th pattern inside BD(Assumption(d1 ))
(and thus Assumption(d1 ) itself) by a2 is over. As a consequence, there exists an integer i between
1 and n, such that by the end of the execution of the i-th pattern inside BD(P ushP attern(d1 , d2 ))
by a1 , agents have not met, and the execution by a2 of the i-th pattern inside BD(Assumption(d1 ))
is not over. Without loss of generality, let us make the assumption that i is the smallest positive
integer, such that by the end of the execution of the i-th pattern inside BD(P ushP attern(d1 , d2 ))
by a1 , agents have not met, and the execution by a2 of the i-th pattern inside BD(Assumption(d1 ))
is not over.
Let us first show that the execution of the i-th pattern inside BD(Assumption(d1 )) concurrently precedes the execution of the i-th pattern inside BD(P ushP attern(d1 , d2 )). If i = 1, since
Assumption(d1 ) concurrently precedes P ushP attern(d1 , d2 ), the i-th pattern inside BD(Assumption(d1 ))
concurrently precedes the i-th pattern inside BD(P ushP attern(d1 , d2 )). If i > 1 and the i-th pattern inside BD(Assumption(d1 )) does not concurrently precede the i-th pattern inside BD(P ushP attern(d1 , d2 )), then the i-th pattern inside BD(Assumption(d1 )) does not begin before the i-th
pattern inside BD(P ushP attern(d1 , d2 )), which implies that the (i−1)-th pattern inside BD(Assumption(d1 ))
ends after the (i − 1)-th pattern inside BD(P ushP attern(d1 , d2 )), which contradicts the hypothesis
that i is the smallest positive integer, such that by the end of the i-th pattern inside BD(P ushP attern(d1 , d2 )), agents have not met, and the i-th pattern inside BD(Assumption(d1 )) is not over.
This means that the i-th pattern inside BD(Assumption(d1 )) concurrently precedes the i-th pattern
inside BD(P ushP attern(d1 , d2 )).
According to Lemmas 11, 12 and 14, Algorithm P ushP attern and the fact that d2 ≥ D, whatever
the type of the i-th pattern inside BD(Assumption(d1 )) (Berry, Cloudberry or RepeatSeed), the
i-th pattern inside BD(P ushP attern(d1 , d2 )) pushes it. In particular, if the i-th pattern inside
BD(Assumption(d1 )) is a Berry or a Cloudberry called after the test at Line 8, at Line 9 or 11,
the i-th pattern inside BD(P ushP attern(d1 , d2 )) pushes it regardless of which of the two patterns it
is. Indeed, for any integers x and h, Cloudberry(x, d1 , d1 , h) is composed of several Berry(x, d1 ) so
that C(Cloudberry(x, d1 , d1 , h)) ≥ C(Berry(x, d1 )). As the i-th pattern inside BD(Assumption(d1 ))
concurrently precedes the i-th pattern inside BD(P ushP attern(d1 , d2 )), this contradicts the fact that
by the end of the i-th pattern inside BD(P ushP attern(d1 , d2 )), agents have not met, and the i-th
pattern inside BD(Assumption(d1 )) is not over.
We then get a contradiction regardless of the case, which proves the lemma.
Lemma 18. Let d1 be any power of two, and x be any integer such that the first parameter of each basic pattern inside BD(Assumption(d1 )) is assigned a value which is at most x. For every power of two d2 ≥ d1 , the first parameter of each basic pattern inside BD(PushPattern(d1 , d2 )) is lower than or equal to x + 3d2 .
Proof. We prove this lemma by contradiction. Make the assumption that there exists a power of two
d1 and an integer x1 such that the first parameter of each basic pattern inside BD(Assumption(d1 ))
is given a value lower than or equal to x1 . Also suppose that there exists a call to a basic pattern
inside BD(P ushP attern(d1 , d2 )) for some power of two d2 ≥ d1 in which the first parameter is given
a value greater than x1 + 3d2 . According to Algorithm P ushP attern, in BD(P ushP attern(d1 , d2 ))
there cannot be any call to basic Pattern Cloudberry, and each basic pattern inside BD(P ushP attern(d1 , d2 )) and BD(Assumption(d1 )) is given an index between 1 and n according to their
order of appearance, with n the number of basic patterns in either of these decompositions. Thus,
for any integer i between 1 and n, there is a pair of patterns (p1 , p2 ) such that p1 is the i-th basic
pattern inside BD(Assumption(d1 )), and p2 is the i-th pattern inside BD(P ushP attern(d1 , d2 )). We
consider any pair (p1 , p2 ) such that the first parameter of p2 is given a value greater than x1 + 3d2 ,
and we analyse three cases depending on the type of pattern p1 . By assumption, the first parameter
of p1 is at most x1 .
Let us first consider the case in which p1 is Pattern RepeatSeed(x2 , n1 ) with x2 and n1 any two
integers. According to Algorithm 8, since p1 is Pattern RepeatSeed(x2 , n1 ), p2 is Berry(x2 , d2 ). By
assumption, the first parameter of p2 is greater than x1 +3d2 , which contradicts our other assumption
that the first parameter of p1 is at most x1 .
Thus, p1 is either Pattern Berry or Pattern Cloudberry. In BD(Assumption(d1 )), whether it
is called directly by Procedure Assumption(d1 ), or inside its call to Harvest(d1 ), or inside the call
of the latter to P ushP attern(d3 , d1 ) with a power of two d3 < d1 , the second parameter of Pattern
Berry is always d1 , and the second and third parameters of Pattern Cloudberry are always d1 as
well. Let p1 be Pattern Berry(x2 , d1 ) with any integer x2 ≤ x1 . According to Algorithm 8, p2 is
RepeatSeed(d1 + d2 + x2 , C(Berry(x2 , d1 ))). This implies that the first parameter of Pattern p2 i.e.,
d1 + d2 + x2 is greater than x1 + 3d2 . This means that x2 > x1 − d1 + 2d2 > x1 which contradicts
the assumption that the first parameter of p1 is at most x1 .
At last, according to Algorithm 8, if p1 is Pattern Cloudberry(x2 , d1 , d1 , h) with two integers
h and x2 ≤ x1 , p2 is RepeatSeed(d2 + 2d1 + x2 , C(Cloudberry(x2 , d1 , d1 , h))). This implies that
the first parameter of Pattern p2 i.e., d2 + 2d1 + x2 is greater than x1 + 3d2 . This means that
x2 > x1 + 2d2 − 2d1 ≥ x1 , which also contradicts the assumption that the first parameter of p1 is at
most x1 .
Hence, within BD(P ushP attern(d1 , d2 )), there cannot be a call to a basic pattern in which the
first parameter is assigned a value greater than x1 + 3d2 , which proves the lemma.
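The three cases of this proof can be checked mechanically against the rewriting rules of Algorithm 8; the cost C(·) does not affect the bound and is replaced by a stub.

```python
# Mechanical check of Lemma 18: for each basic pattern p whose first
# parameter is at most x, the pattern substituted by PushPattern(d1, d2)
# has first parameter at most x + 3*d2, provided d1 <= d2.

def pushed_first_param(p, d1, d2):
    if p[0] == "RepeatSeed":
        return p[1]                   # replaced by Berry(p[1], d2)
    if p[0] == "Berry":               # Berry(x2, d1) -> RepeatSeed(d1+d2+x2, .)
        return d1 + d2 + p[1]
    # Cloudberry(x2, d1, d1, h) -> RepeatSeed(d2 + 2*d1 + x2, .)
    return d2 + 2 * d1 + p[1]

def bound_holds(x, d1, d2):
    candidates = [("RepeatSeed", x, 99), ("Berry", x, d1),
                  ("Cloudberry", x, d1, d1, 0)]
    return all(pushed_first_param(p, d1, d2) <= x + 3 * d2
               for p in candidates)
```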
Lemma 19. Let d1 be a power of two. The first parameter of each basic pattern inside BD(Assumption(d1 ))
is at most ρ(2d1 ) − 3d1 .
Proof. We prove this lemma by induction on d1 .
Let us first consider that d1 = 1, and prove that the first parameter of each basic pattern inside
BD(Assumption(1)) is at most ρ(2) − 3. Let us assume by contradiction that there exists a basic
pattern inside BD(Assumption(1)) for which the first parameter is given a value greater than ρ(2)−3.
Denote by p such a pattern. Procedure Assumption(1) begins with Harvest(1), which is composed of calls to Cloudberry(ρ(1), 1, 1, 0) and RepeatSeed(r(1), C(Cloudberry(ρ(1), 1, 1, 0))). As ρ(1) and
r(1) are lower than ρ(2) − 3, pattern p does not belong to BD(Harvest(1)). As a consequence,
pattern p is called after Harvest(1). After Harvest(1), the first parameter that is given to the
patterns called in Procedure Assumption(1) is always at most ρ(2) − 3. Indeed, the first parameter
is assigned its maximal value when j = 2d1 (d1 + 1) = 4 and i = d1 = 1 in the while loop i.e., when
3d1 = 3 has been added i(j + 1) = 5 times to r(d1 ) = r(1), which gives a maximal value equal to
r(d1 ) + 3d1^2 (2d1 (d1 + 1) + 1) = r(1) + 15 = ρ(2d1 ) − 3d1 = ρ(2) − 3. We then get a contradiction with the existence of p, whose first parameter was assumed to be greater than ρ(2) − 3.
Let us now assume that there exists a power of two d2 such that for each power of two d3 ≤ d2 , the
first parameter of each basic pattern inside BD(Assumption(d3 )) is at most ρ(2d2 ) − 3d2 , and prove
that the first parameter of each basic pattern inside BD(Assumption(2d2 )) is at most ρ(4d2 ) − 6d2 .
Let us assume by contradiction that there exists a basic pattern p inside BD(Assumption(2d2 )) which
is assigned a first parameter that is greater than ρ(4d2 ) − 6d2 . Procedure Assumption(2d2 ) begins
with Harvest(2d2 ) which in turn, begins with P ushP attern(1, 2d2 ), . . . , P ushP attern(d2 , 2d2 ).
According to the definition of a basic decomposition, if p is called by P ushP attern(1, 2d2 ), . . . ,
P ushP attern(d2 , 2d2 ), it belongs to BD(P ushP attern(1, 2d2 )), . . . , BD(P ushP attern(d2 , 2d2 )).
By induction hypothesis, inside BD(Assumption(1)), . . . , BD(Assumption(d2 )), the first parameter of each basic pattern is at most ρ(d2 ) − 3d2 . According to Lemma 18, inside BD(P ushP attern(1, 2d2 )), . . . , BD(P ushP attern(d2 , 2d2 )), the first parameter of each basic pattern is at
most ρ(d2 ) + 3d2 = r(d2 ) ≤ ρ(2d2 ) − 6d2 . Moreover, after PushPattern(1, 2d2 ), . . . , PushPattern(d2 , 2d2 ), Harvest(2d2 ) executes Pattern Cloudberry(ρ(2d2 ), 2d2 , 2d2 , 0) followed by Pattern RepeatSeed(r(2d2 ), C(Cloudberry(ρ(2d2 ), 2d2 , 2d2 , 0))). Inside these calls, the first parameter
is respectively given the values ρ(2d2 ) and r(2d2 ) which are both lower than ρ(4d2 ) − 6d2 . As a
consequence, p does not belong to BD(Harvest(2d2 )). This means that this pattern is called after
Harvest(2d2 ). However, in the same way as when d1 = 1, we can show that the first parameter
keeps increasing and reaches a maximal value equal to r(2d2 ) + 12d22 (4d2 (2d2 + 1) + 1) = ρ(4d2 ) − 6d2
which contradicts the existence of a basic pattern inside BD(Assumption(2d2 )) which is assigned a
first parameter that is greater than ρ(4d2 ) − 6d2 , and then proves the lemma.
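As a quick sanity check of the arithmetic above, one can define ρ by the recurrence that the two identities imply. This is only a hypothetical reconstruction: the actual definitions of ρ and r appear earlier in the paper, and we only rely on the relation r(d) = ρ(d) + 3d stated later in this section.

```python
# Hypothetical reconstruction (NOT the paper's definitions): r(d) = rho(d) + 3d,
# and rho defined by the recurrence implied by the identities in the proof.
def r(d, rho):
    return rho(d) + 3 * d

def make_rho(base=0):
    memo = {1: base}
    def rho(d):  # d is a power of two
        if d not in memo:
            h = d // 2
            # rho(2h) = r(h) + 3*h^2*(2h(h+1)+1) + 3h
            memo[d] = r(h, rho) + 3 * h * h * (2 * h * (h + 1) + 1) + 3 * h
        return memo[d]
    return rho

rho = make_rho()
# d1 = 1 case from the proof: r(1) + 15 = rho(2) - 3
assert r(1, rho) + 15 == rho(2) - 3
# general step: r(2d) + 12*d^2*(4d(2d+1) + 1) = rho(4d) - 6d
for d in [1, 2, 4, 8]:
    assert r(2 * d, rho) + 12 * d * d * (4 * d * (2 * d + 1) + 1) == rho(4 * d) - 6 * d
```

The check passes by construction; its value is only to confirm that the two identities used in the base case and in the induction step are mutually consistent.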
Before presenting the next lemma, we need to introduce the following notions. We say that the
first four lines of Algorithm Harvest are its first part, and that the last line is the second part.
Procedure Assumption begins with a call to Procedure Harvest: we will consider that the first
part of Procedure Assumption is the first part of this call, and that the second part of Procedure
Assumption is the second part of this call. After these two parts, there is a third part in Procedure
Assumption which consists of calls to basic patterns. Moreover, note that the execution of Algorithm
RV can be viewed as a sequence of consecutive calls to Procedure Assumption with an increasing
parameter. We will say that the (i + 1)-th call to Procedure Assumption (i.e., the call to Procedure
Assumption(2^i)) by an agent executing Algorithm RV is Phase i.
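The phase structure just described can be sketched as follows. This is a hypothetical minimal rendering of the top-level loop of Algorithm RV, where `assumption` stands for Procedure Assumption and the loop runs until rendezvous interrupts the execution:

```python
def rv(assumption):
    # Phase i is the (i+1)-th call: Procedure Assumption is invoked with a
    # parameter that doubles after each call, i.e., Phase i runs Assumption(2^i).
    i = 0
    while True:
        assumption(2 ** i)  # Phase i
        i += 1

# Example: record the parameters of the first four phases.
calls = []
def fake_assumption(d):
    calls.append(d)
    if len(calls) == 4:
        raise StopIteration  # stand-in for rendezvous stopping the algorithm

try:
    rv(fake_assumption)
except StopIteration:
    pass
# Phases 0..3 use parameters 1, 2, 4, 8.
```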
Lemma 20. Consider two agents executing Algorithm RV. Let i be an integer such that 2^i ≥ D. If
rendezvous has not occurred before, then when either agent finishes executing the second part of
Phase i, the other agent has finished executing the first part of Phase i.
Proof. Let a1 and a2 be two agents executing Algorithm RV. Let i1 and d1 be two integers such
that 2^{i1} = d1 ≥ D. Assume by contradiction that at the end of the execution of the second part
of Phase i1 by a1, the agents have not met and a2 has not completed its execution of the first part of
Phase i1.
By assumption, when a1 finishes executing the second part of Phase i1, a2 is either executing
Phase i2 for an integer i2 < i1, or the first part of Phase i1.
First of all, let us show that when a1 finishes executing the sequence PushPattern(1, d1),
. . . , PushPattern(2^{i1−1}, d1) (i.e., the loop at the beginning of Procedure Harvest(d1)), a2 cannot
be executing Phase i2 for an integer i2 < i1. Indeed, in view of Lemma 17 and the fact that
d1 ≥ D, we know that the sequence PushPattern(1, d1), . . . , PushPattern(2^{i1−1}, d1) pushes the
sequence Assumption(1), . . . , Assumption(2^{i1−1}). This means that by the time a1 finishes
PushPattern(2^{i1−1}, d1), the agents have met or a2 has finished executing Procedure Assumption(2^{i1−1}),
i.e., Phase (i1 − 1). Given that, by assumption, the agents do not meet before a1 completes its
execution of the first part of Phase i1, when a1 finishes executing the loop at the beginning of Procedure
Harvest(d1), a2 is executing the first part of Phase i1.
Let us now show that when a1 finishes executing Cloudberry(ρ(d1 ), d1 , d1 , 0), a2 has finished
executing the loop at the beginning of Procedure Harvest(d1 ). According to Lemmas 18 and 19,
inside this loop, the first parameter which is assigned to Patterns RepeatSeed and Berry is at
most ρ(d1 ). Besides, while executing this loop, a2 executes a sequence of Patterns RepeatSeed and
Berry called by Procedure P ushP attern. Since d1 ≥ D, according to Lemma 16, the execution of
Cloudberry(ρ(d1 ), d1 , d1 , 0) by a1 pushes the execution by a2 of the loop at the beginning of Procedure
Harvest(d1 ). By assumption, when a1 finishes executing Cloudberry(ρ(d1 ), d1 , d1 , 0), agents have
not met which implies that a2 has finished executing the loop.
After executing Pattern Cloudberry(ρ(d1 ), d1 , d1 , 0) but before completing Procedure Harvest(d1 ),
a1 performs RepeatSeed(r(d1 ), C(Cloudberry(ρ(d1 ), d1 , d1 , 0))). According to Lemma 12, as r(d1 ) =
ρ(d1 ) + 3d1 , the execution of RepeatSeed(r(d1 ), C(Cloudberry(ρ(d1 ), d1 , d1 , 0))) by a1 pushes the
execution of Cloudberry(ρ(d1 ), d1 , d1 , 0) by a2 . Still by assumption, when a1 finishes executing
RepeatSeed(r(d1 ), C(Cloudberry(ρ(d1 ), d1 , d1 , 0))), agents have not met, and thus a2 has finished
executing Cloudberry(ρ(d1 ), d1 , d1 , 0). This means that when a1 finishes executing Harvest(d1 ) and
thus the second part of Phase i1 , a2 has completed the execution of the first part of Phase i1 , which
proves the lemma.
In the following lemma, we focus on the calls to Pattern RepeatSeed in the second and in the
third part of Procedure Assumption(d1) for any power of two d1. In the statement and proof of this
lemma, they are called “synchronization RepeatSeed”, and indexed from 1 to d1(2d1(d1 + 1) + 1) + 1 in
their ascending execution order in these two parts of the procedure. The call to Pattern RepeatSeed
in the second part of Procedure Assumption is the first (indexed by 1) synchronization RepeatSeed
during an execution of Procedure Assumption(d1) for any power of two d1.
Lemma 21. Let a1 and a2 be two agents executing Algorithm RV. Let u and v be their respective
initial nodes separated by a distance D. For every power of two d1 ≥ D and every positive integer i, if
agents have not met yet, then when one agent finishes executing the i-th synchronization RepeatSeed
of Assumption(d1 ), the other agent has at least started executing the i-th synchronization RepeatSeed
of Assumption(d1 ).
Proof. Consider two nodes u and v separated by a distance D, and two agents a1 and a2 respectively
located on u and v. Suppose that agent a1 has just finished executing the i-th synchronization
RepeatSeed inside Procedure Assumption(d1 ) with any power of two d1 ≥ D and any positive
integer i. Let us prove by induction on i that, if rendezvous has not occurred yet, a2 has at least
started executing this i-th synchronization RepeatSeed.
Let us first consider the case in which i = 1. The synchronization RepeatSeed a1 has just
finished executing is called at the end of the execution of Procedure Harvest(d1 ) called at Line 1
of Procedure Assumption(d1 ). As d1 ≥ D, by Lemma 20, when a1 finishes executing Pattern
RepeatSeed, and thus Harvest(d1 ), agents have met or a2 has completed the execution of the
first part of Procedure Assumption(d1 ). This means that when a1 has finished executing the first
synchronization RepeatSeed, either agents have met or a2 has at least begun the execution of the
first synchronization RepeatSeed.
Let us now assume that, for every power of two d1 ≥ D, during an execution of
Procedure Assumption(d1), there exists an integer j between 1 and d1(2d1(d1 + 1) + 1) such that
when agent a1 has finished executing the j-th synchronization RepeatSeed, either the agents have met
or a2 has at least begun the execution of the j-th synchronization RepeatSeed, and prove that when
a1 has finished executing the (j + 1)-th synchronization RepeatSeed, either the agents have met or a2
has at least begun the execution of the (j + 1)-th synchronization RepeatSeed. Let us assume by
contradiction that when a1 has finished executing the (j + 1)-th synchronization RepeatSeed, a2 has
neither met a1 nor started executing the (j + 1)-th synchronization RepeatSeed.
After executing the j-th synchronization RepeatSeed, a1 executes Line 9 or Line 11 of Algorithm
Assumption(d1 ) and thus either Pattern Berry or Pattern Cloudberry, depending on the bits of its
transformed label. By Lemmas 14 and 16, as d1 ≥ D, if a2 is still executing the j-th synchronization
RepeatSeed, whichever pattern a1 executes, it pushes the execution of the j-th synchronization
RepeatSeed by a2 . By assumption, when a1 finishes executing Line 9 or Line 11 of Algorithm
Assumption(d1 ) after the j-th synchronization RepeatSeed, agents have not met which implies that
a2 has finished executing the j-th synchronization RepeatSeed.
The next pattern that a1 executes is the (j + 1)-th synchronization RepeatSeed. Given the above
assumptions and statements, when a1 starts executing this synchronization RepeatSeed, a2 has finished executing the j-th synchronization RepeatSeed and has started executing Line 9 or Line 11 of
Algorithm Assumption(d1 ). By Lemmas 11 and 12, as d1 ≥ D, whichever pattern a2 executes, it is
pushed by the execution of the (j + 1)-th synchronization RepeatSeed by a1 . Given that, still by assumption, agents do not meet before a1 finishes executing the (j +1)-th synchronization RepeatSeed,
when this occurs, a2 has finished the execution of Line 9 or 11 of Algorithm Assumption(d1 ), just
after the j-th, and just before the (j + 1)-th synchronization RepeatSeed. Hence, when a1 finishes
executing the (j + 1)-th synchronization RepeatSeed, a2 has at least started executing the (j + 1)-th
synchronization RepeatSeed, which contradicts the hypothesis that when a1 has finished executing
the (j + 1)-th synchronization RepeatSeed, a2 has neither met a1 nor started executing the (j + 1)-th
synchronization RepeatSeed, and proves the lemma.
6.3 Correctness of Algorithm RV
Theorem 22. Algorithm RV solves the problem of rendezvous in the basic grid.
Proof. To prove this theorem, it is enough to prove the following claim.
Claim 23. Let d1 be the smallest power of two such that d1 ≥ max(D, l0 ) with l0 the index of the
first bit which differs in the transformed labels of the agents. Algorithm RV ensures rendezvous by
the time one of both agents completes an execution of Procedure Assumption(d1 ).
The proof is by contradiction. Suppose that the agents a1 and a2 executing Algorithm RV
never meet. First, in view of Remark 2, l0 exists. Denote by u and v the initial nodes
of a1 and a2, respectively.
Consider an agent that eventually starts executing Assumption(d1 ) where d1 is the smallest
power of two such that d1 ≥ max(D, l0 ). As d1 ≥ D, by Lemma 20, we know that as soon as this
agent finishes executing Procedure Harvest(d1 ), both agents have started executing Assumption(d1 ).
Otherwise, the agents have met, which contradicts our assumption. Without loss of generality, suppose
that the bits in the transformed labels of agents a1 and a2 with the index l0 are respectively 1
and 0. We are going to prove that the agents meet before one of them finishes the execution of
Assumption(d1 ).
To achieve this, we first show that there exists an iteration of the loop at Line 6 of Algorithm 6
during which the two following properties are satisfied:
1. the value of the variable i is equal to l0
2. the value of the variable j is such that when executing Pattern Cloudberry at Line 11, the first
pair of Patterns Seed and Berry executed inside this Cloudberry by a1 starts from the initial
node of a2
As d1 ≥ l0, there is an iteration of the loop at Line 4 during which the first property is verified.
We now show that the second property is also satisfied. Let U be a list of all the nodes at distance
at most d1 from u and ordered in the order of the first visit when executing Seed(d1 ) from node u.
The same list is considered in the algorithm of Pattern Cloudberry(x, d1 , d1 , h) for any integers x
and h. First of all, there are 2d1 (d1 + 1) + 1 nodes at distance at most d1 from u, and thus in U .
Since the distance between u and v is D ≤ d1, v belongs to U. Let j1 ≤ 2d1(d1 + 1) be its index
in U. According to Procedure Assumption, the value of the variable j
is incremented at each iteration of the loop at Line 6 and takes one after another each value lower
than or equal to 2d1 (d1 + 1). Consider the iteration when it is equal to j1 . According to Algorithm
Cloudberry, the first node from which a1 executes Seed and Berry is the node which has index
j1 + 0 (mod 2d1 (d1 + 1) + 1) = j1 . This node is v, which proves that there exists an iteration of the
loop at Line 6 (and thus of the loop at Line 4) during which the second property is verified too. Let
us denote by I the iteration of the loop at Line 4 which satisfies the two aforementioned properties.
It is the iteration after the (1 + (l0 − 1)(2d1 (d1 + 1) + 1) + j1 )-th synchronization RepeatSeed.
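The count used above — there are 2d1(d1 + 1) + 1 nodes at distance at most d1 from u — can be checked directly. The following is an illustrative sketch, not part of the algorithm:

```python
def ball_size(d):
    # Number of nodes of the grid Z^2 at Manhattan distance at most d
    # from a fixed node.
    return sum(1
               for x in range(-d, d + 1)
               for y in range(-d, d + 1)
               if abs(x) + abs(y) <= d)

# Matches the closed form 2d(d+1)+1 used in the proof, e.g. ball_size(1) == 5.
```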
According to Lemma 21, we know that when an agent finishes executing the i-th synchronization
RepeatSeed inside the second and the third part of any execution of Procedure Assumption(d1 ) (for
any positive integer i lower than or equal to d1 (2d1 (d1 +1)+1)+1), the other agent has at least begun
the execution of this synchronization RepeatSeed. Thus, when an agent is the first one which starts
executing I, it has just finished executing the (1 + (l0 − 1)(2d1 (d1 + 1) + 1) + j1 )-th synchronization
RepeatSeed and the other agent is executing (or finishing executing) the same RepeatSeed. Let us
prove that rendezvous occurs before any of the agents starts the (2 + (l0 − 1)(2d1 (d1 + 1) + 1) + j1 )-th
synchronization RepeatSeed.
Let us consider the patterns both agents execute between the beginning of the (1+(l0 −1)(2d1 (d1 +
1) + 1) + j1 )-th synchronization RepeatSeed, and the beginning of the next one. Agent a1 executes
Pattern RepeatSeed(x, n), with x an integer and n a positive integer (call this pattern p1), and
Pattern Cloudberry(x, d1 , d1 , j1 ) from node u while a2 executes RepeatSeed(x, n) (let us call it p2 )
and Berry(x, d1 ) (p3 ) from node v. During its execution of Pattern Cloudberry(x, d1 , d1 , j1 ) from
node u, a1 first follows P (u, v), and then executes Pattern Seed(x) followed by Pattern Berry(x, d1 )
both from node v (call them respectively p4 and p5 ). Recall that during any execution of Pattern
Berry(x, d1 ) from node v, there are two periods, the second one consisting in backtracking every
edge traversal made during the first one. During the first period, in particular, an agent executes
a Pattern Seed(x) from every node at distance at most d1 . Those patterns include an execution of
Pattern Seed(x) from node u and another from v. Since backtracking Seed(x) performs
exactly the same edge traversals as Seed(x), during the second period of Pattern Berry(x, d1 ), there
is also an execution of Pattern Seed(x) from node u and another from v.
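The backtracking property used here — the second period of Berry retraces exactly the edge traversals of the first, in reverse — can be illustrated on a toy walk. This is hypothetical code, not the paper's patterns:

```python
# Directions on the grid and their inverses.
STEP = {'N': (0, 1), 'S': (0, -1), 'E': (1, 0), 'W': (-1, 0)}
INV = {'N': 'S', 'S': 'N', 'E': 'W', 'W': 'E'}

def walk(start, moves):
    # Returns the end node and the list of undirected edges traversed.
    pos, edges = start, []
    for m in moves:
        dx, dy = STEP[m]
        nxt = (pos[0] + dx, pos[1] + dy)
        edges.append(frozenset((pos, nxt)))
        pos = nxt
    return pos, edges

moves = ['N', 'E', 'E', 'S']
end, forward_edges = walk((0, 0), moves)
back_moves = [INV[m] for m in reversed(moves)]
back_end, backward_edges = walk(end, back_moves)

# Backtracking traverses exactly the same edges, in reverse order,
# and returns to the starting node.
assert backward_edges == list(reversed(forward_edges))
assert back_end == (0, 0)
```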
Let us consider two different cases. In the first one, when a1 starts executing p4 from v, inside
p3 , a2 has not yet started following P (v, u) to go executing Seed(x) from u. In the second one, when
a1 starts executing p4 from v, a2 has at least started following P (v, u) to go executing Seed(x) from
u. In the following, we analyse both these cases.
Concerning the first case, we get a contradiction. Consider what a2 can be executing when
a1 starts executing p4 from node v, after following P (u, v). First, it can still be executing the
synchronization RepeatSeed p2 from node v. Then, by Lemma 10, rendezvous occurs. The only
other pattern that a2 can be executing at this moment is p3 . However, in this case, we know that
a2 will have finished its execution of p3 before a1 starts p5 , just after p4 . Otherwise, by Lemma 15,
rendezvous occurs.
We have just reminded the reader that during any execution of Pattern Berry(x, d1 ) from v,
agent a2 performs, among the Patterns Seed(x) from every node at distance at most d1 from v,
Patterns Seed(x) from v. If it executes one of these Patterns Seed(x) while a1 is executing its p4
from node v after following P (u, v), by Lemma 10, rendezvous occurs. This implies that before
a1 finishes following P (u, v), a2 has completed each execution of Pattern Seed(x) from v inside its
execution of Berry(x, d1 ).
This means that each execution of Pattern Seed(x) from node v during the second period of p3
has already been completed by a2 when a1 starts executing its own Seed(x) from v. Since inside the
second period of p3 , a2 executes Pattern Seed(x) from node v, a2 has already executed the whole
first period of p3 when a1 starts executing p4 from v including Pattern Seed(x) performed from node
u, as u is at distance at most d1 from v. This contradicts the definition of this first case: according
to this definition, when a1 starts executing p4 from v, inside p3 , a2 has not followed P (v, u) yet, and
thus has not executed Seed(x) from u.
Concerning the second case, we prove that rendezvous occurs, which is also a contradiction.
Recall that in this case, when a1 starts executing p4 from v, a2 has at least started following P (v, u)
to go executing Seed(x) from u. If a2 has not finished following P (v, u) when a1 starts executing
P (u, v), then if we denote by t1 (resp. t2 ) the time when a1 (resp. a2 ) finishes following P (u, v)
(resp. P (v, u)), agents meet by time min(t1 , t2 ) as P (u, v) = P (v, u). If a2 has finished following
P (v, u) before a1 starts executing P (u, v), then it has begun executing Seed(x) from u before a1
finishes executing p1 (before it executes Cloudberry(x, d1 , d1 , j1 )), which means by Lemma 10 that
agents achieve rendezvous.
So, whatever the execution chosen by the adversary, rendezvous occurs in the worst case by the
time any agent completes Assumption(d1 ), which contradicts the assumption that rendezvous never
happens. This proves the claim, and by extension the theorem.
6.4 Cost analysis
Theorem 24. The cost of Algorithm RV is polynomial in D and l.
Proof. In order to prove this theorem, we first need to show the following two claims.
Claim 25. Let d1 be any power of two. The cost of each basic pattern inside BD(Assumption(d1 ))
is polynomial in d1 .
Let us prove this claim. First, the costs of these basic patterns are polynomial in d1 if the values of
their parameters are polynomial in d1. Indeed, C(Seed(x)) ∈ O(x^2), C(RepeatSeed(x, n)) ∈ O(n ×
C(Seed(x))), C(Berry(x, y)) ∈ O((x + y)^6), and C(Cloudberry(x, y, z, h)) ∈ O(z^2 × (C(Seed(x)) +
C(Berry(x, y)))).
Pattern Seed does not belong to BD(Assumption(d1)). It is called when executing the other
basic patterns, which give it a parameter that is polynomial in their own parameters. Hence, we
focus on the parameters of Patterns RepeatSeed, Berry, and Cloudberry, and prove that their values
are polynomial in d1.
For each basic Pattern Berry or Cloudberry inside BD(Assumption(d1)), the value given to its
second parameter is always d1. For each basic Pattern Cloudberry inside BD(Assumption(d1)), the
value assigned to its third parameter is always d1. The fourth parameter of
Pattern Cloudberry does not have any impact on its cost, since it only modifies the order in which
the edge traversals are made, and not their number.
The first parameter of these three basic patterns can take various complicated values, but
these values are still polynomial in d1. Indeed, according to Lemma 19, for any power of two d1, inside
BD(Assumption(d1)), the value of this first parameter is at most ρ(2d1) − 3d1, which is polynomial
in d1.
Finally, the second parameter of Pattern RepeatSeed is always equal to C(p), where p is one of
the other patterns, either Berry or Cloudberry. Besides, since the parameters given to this pattern
p are polynomial in d1, this is also the case for the second parameter of Pattern RepeatSeed. Hence
the claim is proven.
Claim 26. Let d1 be a power of two. The cost of Procedure Assumption(d1 ) is polynomial in d1 .
Let us prove this claim. According to the definition of a basic decomposition and Remark 4,
for any power of two d1, each edge traversal performed during an execution of Procedure
Assumption(d1) is performed by one of the basic patterns inside BD(Assumption(d1)). The cost
of Procedure Assumption(d1) is thus the sum of the costs of all the basic patterns inside
BD(Assumption(d1)). According to Claim 25, we know that for any power of two d1, inside
BD(Assumption(d1)), the cost of each basic pattern is polynomial in d1. Thus, to prove this claim it
is enough to show that BD(Assumption(d1)) contains a number of basic patterns which is polynomial in d1.
For any power of two d1, Procedure Assumption(d1) is composed of a call to Procedure Harvest(d1)
and the nested loops. These loops consist of 2d1(2d1(d1 + 1) + 1) calls to basic patterns. Half of them
are made to RepeatSeed and the others either to Berry or to Cloudberry. In turn, Harvest(d1)
is composed of two parts: a loop calling Procedure PushPattern and two basic patterns. For any
power of two d2, in view of Algorithm 8, and since they are both perfect, the number of basic patterns
inside BD(PushPattern(d2, d1)) or BD(Assumption(d2)) is the same. As a consequence, if d1 ≥ 2,
BD(PushPattern(1, d1)), . . . , BD(PushPattern(d1/2, d1)) is composed of as many basic patterns as
there are in BD(Assumption(1)), . . . , BD(Assumption(d1/2)).
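The count of 2d1(2d1(d1 + 1) + 1) calls made by the nested loops can be verified with a small sketch. This is hypothetical code that only mirrors the loop bounds described above, with one RepeatSeed and one Berry-or-Cloudberry call per inner iteration:

```python
def nested_loop_calls(d1):
    # Loop at Line 4: d1 iterations; loop at Line 6: 2*d1*(d1+1)+1 iterations;
    # each inner iteration issues two basic-pattern calls.
    count = 0
    for _i in range(1, d1 + 1):
        for _j in range(2 * d1 * (d1 + 1) + 1):
            count += 2
    return count

# Matches the closed form 2*d1*(2*d1*(d1+1) + 1) from the proof.
```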
For any power of two i, let us denote by L1 (i) (resp. L2 (i)) the number of calls to basic patterns
inside BD(Assumption(i)) (resp. BD(Harvest(i))). We then have the following equations:
L1(i) = L2(i) + 2i(2i(i + 1) + 1)

L2(i) = Σ_{j=0}^{log2(i)−1} L1(2^j) + 2

They imply the following: L2(1) = 2 and, if i ≥ 2, then L2(i) = L2(i/2) + L1(i/2) = 2L2(i/2) + i(i(i/2 + 1) + 1).
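These recurrences are easy to evaluate numerically; the sketch below (hypothetical code) checks that the recursive form agrees with the summation form and that L2 indeed stays within the stated O(i^5) bound for small powers of two:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def L2(i):
    # L2(1) = 2; for i >= 2: L2(i) = 2*L2(i/2) + i*(i*(i/2 + 1) + 1)
    if i == 1:
        return 2
    h = i // 2
    return 2 * L2(h) + i * (i * (h + 1) + 1)

def L1(i):
    # L1(i) = L2(i) + 2i(2i(i+1) + 1)
    return L2(i) + 2 * i * (2 * i * (i + 1) + 1)

for k in range(1, 8):
    i = 2 ** k
    # summation form: L2(i) = sum_{j=0}^{log2(i)-1} L1(2^j) + 2
    assert L2(i) == sum(L1(2 ** j) for j in range(k)) + 2
    assert L2(i) <= i ** 5   # consistent with L2(i) in O(i^5)
```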
Hence, L2(i) ∈ O(i^5). Both L2(i) and L1(i) are polynomial in i, which means that for any power
of two d1, BD(Assumption(d1)) is composed of a number of basic patterns which is polynomial in d1.
Hence, in view of Claim 25, the cost of Assumption(d1) is indeed polynomial in d1, which proves the
claim.
Now, it remains to conclude the proof of the theorem. According to Claim 23, rendezvous
is achieved by the end of the execution of Assumption(δ) by either agent, where δ is the
smallest power of two such that δ ≥ max(D, l0) and l0 is the index of the first bit which differs in the
transformed labels of the agents. So, according to Claim 26, the cost of Assumption(δ) is polynomial
in D and l0, and by extension polynomial in D and l, as by construction we have l0 ≤ 2l + 2. Moreover,
before executing Assumption(δ), all the calls to Procedure Assumption use an input parameter smaller
than δ, and thus the cost of each of these calls is also polynomial in D and l. Hence, in view of the fact
that the number of calls to Procedure Assumption before executing Assumption(δ) belongs to Θ(log δ)
(the input parameter of Assumption doubles after each call), the theorem follows.
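The last step uses the standard fact that summing a polynomial cost over doubling parameters is dominated by the last term; a minimal check (hypothetical code):

```python
def total_cost(delta_exp, c):
    # Sum of (2^k)^c over all phases k with 2^k <= 2^delta_exp.
    return sum((2 ** k) ** c for k in range(delta_exp + 1))

# For any fixed exponent c >= 1 the geometric sum is at most twice its last
# term, so the total cost stays polynomial in delta = 2^delta_exp.
for c in (1, 2, 5):
    for m in range(10):
        assert total_cost(m, c) <= 2 * (2 ** m) ** c
```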
7 Conclusion
From Theorems 1, 22 and 24, we obtain the following result concerning the task of approach in the
plane.
Theorem 27. The task of approach can be solved at cost polynomial in the unknown initial distance
∆ separating the agents and in the length of the binary representation of the shortest of their labels.
Throughout the paper, we made no attempt at optimizing the cost: as the astute reader
will have noticed, our main concern was only to prove polynomiality. Hence, a natural open
problem is to determine the optimal cost of solving the task of approach. This would be all the more
interesting as this optimal cost could then be compared with the cost of solving the same task with
agents that can position themselves in a global system of coordinates (an almost optimal cost for
this case is given in [10]), in order to determine whether the use of such a system (e.g., GPS) is really
relevant to minimizing the travelled distance.
References
[1] Noa Agmon and David Peleg. Fault-tolerant gathering algorithms for autonomous mobile robots.
SIAM J. Comput., 36(1):56–82, 2006.
[2] Steve Alpern. Rendezvous search: A personal perspective. Operations Research, 50(5):772–795,
2002.
[3] Steve Alpern. The theory of search games and rendezvous. International Series in Operations
Research and Management Science, Kluwer Academic Publishers, 2003.
[4] Evangelos Bampas, Jurek Czyzowicz, Leszek Gasieniec, David Ilcinkas, and Arnaud Labourel.
Almost optimal asynchronous rendezvous in infinite multidimensional grids. In Distributed
Computing, 24th International Symposium, DISC 2010, Cambridge, MA, USA, September 13-15, 2010. Proceedings, pages 297–311, 2010.
[5] Sébastien Bouchard, Yoann Dieudonné, and Bertrand Ducourthial. Byzantine gathering in
networks. Distributed Computing, 29(6):435–457, 2016.
[6] Jérémie Chalopin, Yoann Dieudonné, Arnaud Labourel, and Andrzej Pelc. Rendezvous in networks in spite of delay faults. Distributed Computing, 29(3):187–205, 2016.
[7] Mark Cieliebak, Paola Flocchini, Giuseppe Prencipe, and Nicola Santoro. Distributed computing
by mobile robots: Gathering. SIAM J. Comput., 41(4):829–879, 2012.
[8] Reuven Cohen and David Peleg. Convergence properties of the gravitational algorithm in asynchronous robot systems. SIAM J. Comput., 34(6):1516–1528, 2005.
[9] Reuven Cohen and David Peleg. Convergence of autonomous mobile robots with inaccurate
sensors and movements. SIAM J. Comput., 38(1):276–302, 2008.
[10] Andrew Collins, Jurek Czyzowicz, Leszek Gasieniec, and Arnaud Labourel. Tell me where I
am so I can meet you sooner. In Automata, Languages and Programming, 37th International
Colloquium, ICALP 2010, Bordeaux, France, July 6-10, 2010, Proceedings, Part II, pages 502–
514, 2010.
[11] Jurek Czyzowicz, Adrian Kosowski, and Andrzej Pelc. How to meet when you forget: log-space
rendezvous in arbitrary graphs. Distributed Computing, 25(2):165–178, 2012.
[12] Jurek Czyzowicz, Andrzej Pelc, and Arnaud Labourel. How to meet asynchronously (almost)
everywhere. ACM Transactions on Algorithms, 8(4):37, 2012.
[13] Gianlorenzo D’Angelo, Gabriele Di Stefano, and Alfredo Navarra. Gathering on rings under the
look-compute-move model. Distributed Computing, 27(4):255–285, 2014.
[14] Shantanu Das, Dariusz Dereniowski, Adrian Kosowski, and Przemyslaw Uznanski. Rendezvous
of distance-aware mobile agents in unknown graphs. In Structural Information and Communication Complexity - 21st International Colloquium, SIROCCO 2014, Takayama, Japan, July
23-25, 2014. Proceedings, pages 295–310, 2014.
[15] Shantanu Das, Flaminia L. Luccio, and Euripides Markou. Mobile agents rendezvous in spite
of a malicious agent. In Algorithms for Sensor Systems - 11th International Symposium on
Algorithms and Experiments for Wireless Sensor Networks, ALGOSENSORS 2015, Patras,
Greece, September 17-18, 2015, Revised Selected Papers, pages 211–224, 2015.
[16] Xavier Défago, Maria Gradinariu, Stéphane Messika, and Philippe Raipin Parvédy. Fault-tolerant and self-stabilizing mobile robots gathering. In Distributed Computing, 20th International Symposium, DISC 2006, Stockholm, Sweden, September 18-20, 2006, Proceedings, pages
46–60, 2006.
[17] Anders Dessmark, Pierre Fraigniaud, Dariusz R. Kowalski, and Andrzej Pelc. Deterministic
rendezvous in graphs. Algorithmica, 46(1):69–96, 2006.
[18] Yoann Dieudonné and Andrzej Pelc. Deterministic polynomial approach in the plane. Distributed
Computing, 28(2):111–129, 2015.
[19] Yoann Dieudonné and Andrzej Pelc. Anonymous meeting in networks. Algorithmica, 74(2):908–
946, 2016.
[20] Yoann Dieudonné, Andrzej Pelc, and David Peleg. Gathering despite mischief. ACM Transactions on Algorithms, 11(1):1, 2014.
[21] Yoann Dieudonné, Andrzej Pelc, and Vincent Villain. How to meet asynchronously at polynomial cost. SIAM J. Comput., 44(3):844–867, 2015.
[22] Yoann Dieudonné and Franck Petit. Self-stabilizing gathering with strong multiplicity detection.
Theor. Comput. Sci., 428:47–57, 2012.
[23] Paola Flocchini, Giuseppe Prencipe, Nicola Santoro, and Peter Widmayer. Gathering of asynchronous robots with limited visibility. Theor. Comput. Sci., 337(1-3):147–168, 2005.
[24] Pierre Fraigniaud and Andrzej Pelc. Deterministic rendezvous in trees with little memory. In
Distributed Computing, 22nd International Symposium, DISC 2008, Arcachon, France, September 22-24, 2008. Proceedings, pages 242–256, 2008.
[25] Pierre Fraigniaud and Andrzej Pelc. Delays induce an exponential memory gap for rendezvous
in trees. ACM Transactions on Algorithms, 9(2):17, 2013.
[26] Taisuke Izumi, Samia Souissi, Yoshiaki Katayama, Nobuhiro Inuzuka, Xavier Défago, Koichi
Wada, and Masafumi Yamashita. The gathering problem for two oblivious robots with unreliable
compasses. SIAM J. Comput., 41(1):26–46, 2012.
[27] Dariusz R. Kowalski and Adam Malinowski. How to meet in anonymous network. Theor.
Comput. Sci., 399(1-2):141–156, 2008.
[28] Evangelos Kranakis, Danny Krizanc, and Sergio Rajsbaum. Mobile agent rendezvous: A survey. In Structural Information and Communication Complexity, 13th International Colloquium,
SIROCCO 2006, Chester, UK, July 2-5, 2006, Proceedings, pages 1–9, 2006.
[29] Gianluca De Marco, Luisa Gargano, Evangelos Kranakis, Danny Krizanc, Andrzej Pelc, and Ugo
Vaccaro. Asynchronous deterministic rendezvous in graphs. Theor. Comput. Sci., 355(3):315–
326, 2006.
[30] Avery Miller and Andrzej Pelc. Fast rendezvous with advice. In Algorithms for Sensor Systems
- 10th International Symposium on Algorithms and Experiments for Sensor Systems, Wireless
Networks and Distributed Robotics, ALGOSENSORS 2014, Wroclaw, Poland, September 12,
2014, Revised Selected Papers, pages 75–87, 2014.
[31] Avery Miller and Andrzej Pelc. Time versus cost tradeoffs for deterministic rendezvous in
networks. Distributed Computing, 29(1):51–64, 2016.
[32] Linda Pagli, Giuseppe Prencipe, and Giovanni Viglietta. Getting close without touching: near-gathering for autonomous mobile robots. Distributed Computing, 28(5):333–349, 2015.
[33] Thomas Schelling. The Strategy of Conflict. Oxford University Press, Oxford, 1960.
[34] Ichiro Suzuki and Masafumi Yamashita. Distributed anonymous mobile robots: Formation of
geometric patterns. SIAM J. Comput., 28(4):1347–1363, 1999.
[35] Amnon Ta-Shma and Uri Zwick. Deterministic rendezvous, treasure hunts, and strongly universal exploration sequences. ACM Transactions on Algorithms, 10(3):12, 2014.
Two Results on Slime Mold Computations
Ruben Becker1 , Vincenzo Bonifaci2 , Andreas Karrenbauer1 ,
Pavel Kolev∗1 , and Kurt Mehlhorn1
arXiv:1707.06631v2 [] 29 Mar 2018
1 Max Planck Institute for Informatics, Saarland Informatics Campus, Saarbrücken, Germany, {ruben,karrenba,pkolev,mehlhorn}@mpi-inf.mpg.de
2 Institute for the Analysis of Systems and Informatics, National Research Council of Italy (IASI-CNR), Rome, Italy, [email protected]
Abstract
We present two results on slime mold computations. In wet-lab experiments (Nature’00) by Nakagaki
et al., the slime mold Physarum polycephalum demonstrated its ability to solve shortest path problems.
Biologists proposed a mathematical model, a system of differential equations, for the slime's adaptation
process (J. Theoretical Biology’07). It was shown that the process converges to the shortest path (J.
Theoretical Biology’12) for all graphs. We show that the dynamics actually converges for a much wider
class of problems, namely undirected linear programs with a non-negative cost vector.
Combinatorial optimization researchers took the dynamics describing slime behavior as an inspiration
for an optimization method and showed that its discretization can ε-approximately solve linear programs
with positive cost vector (ITCS’16). Their analysis requires a feasible starting point, a step size depending
linearly on ε, and a number of steps with quartic dependence on opt/(εΦ), where Φ is the difference
between the smallest cost of a non-optimal basic feasible solution and the optimal cost (opt).
We give a refined analysis showing that the dynamics initialized with any strongly dominating point
converges to the set of optimal solutions. Moreover, we strengthen the convergence rate bounds and
prove that the step size is independent of ε, and the number of steps depends logarithmically on 1/ε and
quadratically on opt/Φ.
∗ This work has been funded by the Cluster of Excellence “Multimodal Computing and Interaction” within the Excellence
Initiative of the German Federal Government.
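To make the discretized dynamics mentioned in the abstract concrete, here is a toy sketch. This is hypothetical code, far simpler than the LP setting of the paper: on two parallel s–t edges with costs c1 < c2, a unit electrical flow splits proportionally to the conductances x_i/c_i, and the assumed update rule x ← (1 − h)x + h|q| drives all capacity onto the cheaper edge.

```python
# Toy Physarum dynamics on two parallel edges with costs c1 < c2.
# Assumed update rule: x(k+1) = (1 - h) * x(k) + h * |q(k)|, where q is the
# unit electrical flow induced by the conductances x_i / c_i.
c1, c2 = 1.0, 2.0
x1, x2 = 1.0, 1.0   # initial capacities
h = 0.1             # step size
for _ in range(2000):
    g1, g2 = x1 / c1, x2 / c2     # conductances
    q1 = g1 / (g1 + g2)           # the unit flow splits by conductance
    q2 = g2 / (g1 + g2)
    x1 = (1 - h) * x1 + h * q1
    x2 = (1 - h) * x2 + h * q2

# The capacity concentrates on the cheaper edge.
assert abs(x1 - 1.0) < 1e-3 and x2 < 1e-3
```

In this two-edge instance the shortest path is the single cheaper edge; the general results of the paper cover arbitrary graphs and, beyond that, undirected LPs.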
Contents
1 Introduction
  1.1 The Biologically-Grounded Model
  1.2 The Biologically-Inspired Model
2 Convergence of the Continuous Undirected Dynamics: Simple Instances
  2.1 Preliminaries
  2.2 The Convergence Proof
3 Convergence of the Continuous Undirected Dynamics: General Instances
  3.1 Existence of a Solution with Domain [0, ∞)
  3.2 LP Duality
  3.3 Convergence to Dominance
  3.4 The Equilibrium Points
  3.5 Convergence
  3.6 Details of the Convergence Process
4 Improved Convergence Results: Discrete Directed Dynamics
  4.1 Overview
  4.2 Useful Lemmas
  4.3 Strongly Dominating Capacity Vectors
  4.4 x(k) is Close to a Non-Negative Kernel-Free Vector
  4.5 x(k) is ε-Close to an Optimal Solution
  4.6 Proof of Theorem 4.2
  4.7 Preconditioning
  4.8 A Simple Lower Bound
Figure 1: The experiment in [NYT00] (reprinted from there): (a) shows the maze uniformly covered by
Physarum; yellow color indicates presence of Physarum. Food (oatmeal) is provided at the locations labeled
AG. After a while the mold retracts to the shortest path connecting the food sources as shown in (b) and
(c). (d) shows the underlying abstract graph. The video [Phy] shows the experiment.
1 Introduction
We present two results on slime mold computations, one on the biologically-grounded model and one on the
biologically-inspired model. The first model was introduced by biologists to capture the slime’s apparent
ability to compute shortest paths. We show that the dynamics can actually do more. It can solve a wide
class of linear programs with nonnegative cost vectors. The latter model was designed as an optimization
technique inspired by the former model. We present an improved convergence result for its discretization.
The two models are introduced and our results are stated in Sections 1.1 and 1.2 respectively. The results
on the former model are shown in Sections 2 and 3, the results on the latter model are shown in Section 4.
1.1 The Biologically-Grounded Model
Physarum polycephalum is a slime mold that apparently is able to solve shortest path problems. Nakagaki,
Yamada, and Tóth [NYT00] report about the following experiment; see Figure 1. They built a maze, covered
it by pieces of Physarum (the slime can be cut into pieces which will reunite if brought into vicinity), and
then fed the slime with oatmeal at two locations. After a few hours the slime retracted to a path that
follows the shortest path in the maze connecting the food sources. The authors report that they repeated
the experiment with different mazes; in all experiments, Physarum retracted to the shortest path.
The paper [TKN07] proposes a mathematical model for the behavior of the slime and argues extensively
that the model is adequate. Physarum is modeled as an electrical network with time varying resistors. We
have a simple undirected graph G = (N, E) with distinguished nodes s0 and s1 modeling the food sources.
Each edge e ∈ E has a positive length ce and a positive capacity xe (t); ce is fixed, but xe (t) is a function
of time. The resistance re (t) of e is re (t) = ce /xe (t). In the electrical network defined by these resistances,
a current of value 1 is forced from s0 to s1 . For an (arbitrarily oriented) edge e = (u, v), let qe (t) be the
resulting current over e. Then, the capacity of e evolves according to the differential equation
ẋe (t) = |qe (t)| − xe (t),
(1)
where ẋe is the derivative of xe with respect to time. In equilibrium (ẋe = 0 for all e), the flow through any
edge is equal to its capacity. In non-equilibrium, the capacity grows (shrinks) if the absolute value of the flow
is larger (smaller) than the capacity. In the sequel, we will mostly drop the argument t as is customary in
the treatment of dynamical systems. It is well-known that the electrical flow q is the feasible flow minimizing the energy dissipation ∑_e r_e q_e^2 (Thomson’s principle).
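For intuition, the evolution (1) on two parallel links can be simulated with a simple Euler discretization. The following is a toy sketch of ours, not part of the paper's analysis; the function `electrical_flow` and all constants are our own illustrative choices.

```python
import numpy as np

# Toy sketch (ours, not from the paper): two parallel edges from s0 to
# s1 with lengths c = (1, 2).  The unit current splits in proportion to
# the conductances x_e / c_e, and the capacities follow an Euler
# discretization of  x'_e = |q_e| - x_e.
def electrical_flow(c, x):
    """Unit s0 -> s1 current split over parallel edges (Ohm's law)."""
    g = x / c                 # conductance of edge e is x_e / c_e
    return g / g.sum()        # current proportional to conductance

c = np.array([1.0, 2.0])
x = np.array([1.0, 1.0])      # initial capacities
dt = 0.01
for _ in range(5000):
    q = electrical_flow(c, x)
    x += dt * (np.abs(q) - x)
# The capacity concentrates on the shorter edge, as in Theorem 1.1.
```

In the simulation the capacity of the longer edge decays to zero while the shorter edge's capacity tends to 1, matching the behavior reported in the wet-lab experiment.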
We refer to the dynamics above as biologically-grounded, as it was introduced by biologists to model the
behavior of a biological system. Miyaji and Ohnishi were the first to analyze convergence for special graphs
(parallel links and planar graphs with source and sink on the same face) in [MO08]. In [BMV12] convergence
was proven for all graphs. We state the result from [BMV12] for the special case that the shortest path is
unique.
Theorem 1.1 ([BMV12]). Assume c > 0 and that the undirected shortest path P ∗ from s0 to s1 w.r.t. the
cost vector c is unique. Assume x(0) > 0. Then x(t) in (1) converges to P ∗ . Namely, xe (t) → 1 for e ∈ P ∗
and x_e(t) → 0 for e ∉ P∗ as t → ∞.
[BMV12] also proves an analogous result for the undirected transportation problem; [Bon13] simplified
the argument under additional assumptions. The paper [Bon15] studies a more general dynamics and proves
convergence for parallel links.
In this paper, we extend this result to non-negative undirected linear programs
min{c^T x : Af = b, |f| ≤ x},    (2)

where A ∈ R^{n×m}, b ∈ R^n, x ∈ R^m, c ∈ R^m_{≥0}, and the absolute values are taken componentwise. Undirected
LPs can model a wide range of problems, e.g., optimization problems such as shortest path and min-cost
flow in undirected graphs, and the Basis Pursuit problem in signal processing [CDS98].
We use n for the number of rows of A and m for the number of columns, since this notation is appropriate
when A is the node-edge-incidence matrix of a graph. A vector f is feasible if Af = b. We assume that the
system Af = b has a feasible solution and that there is no non-zero f in the kernel of A with ce fe = 0 for
all e. A vector f lies in the kernel of A if Af = 0. The vector q in (1) is now the minimum energy feasible
solution

q(t) = argmin_{f ∈ R^m} { ∑_{e: x_e ≠ 0} (c_e/x_e(t)) f_e^2 : Af = b ∧ f_e = 0 whenever x_e = 0 }.    (3)
We remark that q is unique; see Subsection 3.1.1. If A is the incidence matrix of a graph (the column
corresponding to an edge e has one entry +1, one entry −1 and all other entries are equal to zero), (2) is a
transshipment problem with flow sources and sinks encoded by a demand vector b. The condition that there
is no solution in the kernel of A with ce fe = 0 for all e states that every cycle contains at least one edge of
positive cost. In that setting, q(t) as defined by (3) coincides with the electrical flow induced by resistors of
value ce /xe (t). We can now state our first main result.
Theorem 1.2. Let c ≥ 0 satisfy cT |f | > 0 for every nonzero f in the kernel of A. Let x∗ be an optimum
solution of (2) and let X⋆ be the set of optimum solutions. Assume x(0) > 0. The following holds for the
dynamics (1) with q as in (3):
(i) The solution x(t) exists for all t ≥ 0.
(ii) The cost cT x(t) converges to cT x∗ as t goes to infinity.
(iii) The vector x(t) converges to X⋆ .
(iv) For all e with ce > 0, xe (t) − |qe (t)| converges to zero as t goes to infinity.1 If x∗ is unique, x(t)
and q(t) converge to x∗ as t goes to infinity.
Item (i) was previously shown in [SV16a] for the case of a strictly positive cost vector. The result
in [SV16a] is stated for the cost vector c = 1. The case of a general positive cost vector reduces to this
special case by rescaling the solution vector x. We stress that the dynamics (1) is biologically-grounded. It
was proposed to model a biological system and not as an optimization method. Nevertheless, it can solve a
large class of non-negative LPs. Table 1 summarizes our first main result and puts it into context.
Sections 2 and 3 are devoted to the proof of our first main theorem. For ease of exposition, we present
the proof in two steps. In Section 2, we give a proof under the following simplifying assumptions.
(A) c > 0,
(B) The basic feasible solutions of (2) have distinct cost,
(C) We start with a positive vector x(0) ∈ Xdom := { x ∈ R^m : there is a feasible f with |f| ≤ x }.
Section 2 generalizes [Bon13]. For the undirected shortest path problem, condition (B) states that all simple
undirected source-sink paths have distinct cost and condition (C) states that all source-sink cuts have a
capacity of at least one at time zero (and hence at all times). The existence of a solution with domain [0, ∞)
was already shown in [SV16a]. We will show that Xdom is an invariant set, i.e., the solution stays in Xdom
¹ We conjecture that this also holds for the indices e with c_e = 0.
Reference    Problem                     Existence of Solution   Convergence to OPT   Comments
[MO08]       Undirected Shortest Path    Yes                     Yes                  parallel edges, planar graphs
[BMV12]      Undirected Shortest Path    Yes                     Yes                  all graphs
[SV16a]      Undirected Positive LP      Yes                     No                   c > 0
Our Result   Undirected Nonnegative LP   Yes                     Yes                  1) c ≥ 0;  2) ∀v ∈ ker(A) : c^T|v| > 0

Table 1: Convergence results for the continuous undirected Physarum dynamics (1).
for all times, and that E(x) = ∑_e r_e x_e^2 = ∑_e c_e x_e is a Lyapunov function for the dynamics (1), i.e., Ė ≤ 0
and Ė = 0 if and only if ẋ = 0. It follows from general theorems about dynamical systems that the dynamics
converges to a fixed point of (1). The fixed points are precisely the vectors |f | where f is a feasible solution
of (2). A final argument establishes that the dynamics converges to a fixed point of minimum cost.
In Section 3, we prove the general case of the first main theorem. We assume
(D) c ≥ 0,
(E) cost(z) = cT |z| > 0 for every nonzero vector z in the kernel of A,
(F) We start with a positive vector x(0) > 0.
Section 3 generalizes [BMV12] in two directions. First, we treat general undirected LPs and not just the
undirected shortest path problem, respectively, the transshipment problem. Second, we replace the condition
c > 0 by the requirement c ≥ 0 and every nonzero vector in the kernel of A has positive cost. For the
undirected shortest path problem, the latter condition states that the underlying undirected graph has
no zero-cost cycle. Section 3 is technically considerably more difficult than Section 2. We first establish
the existence of a solution with domain [0, ∞). To this end, we derive a closed formula for the minimum
energy feasible solution and prove that the mapping x 7→ q is locally-Lipschitz. Existence of a solution with
domain [0, ∞) follows by standard arguments. We then show that Xdom is an attractor, i.e., the solution
x(t) converges to Xdom . We next characterize equilibrium points and exhibit a Lyapunov function. The
Lyapunov function is a normalized version of E(x). The normalization factor is equal to the optimal value
of the linear program max { α : Af = αb, |f | ≤ x } in the variables f and α. Convergence to an equilibrium
point follows from the existence of a Lyapunov function. A final argument establishes that the dynamics
converges to a fixed point of minimum cost.
1.2 The Biologically-Inspired Model
Ito et al. [IJNT11] initiated the study of the dynamics
ẋ(t) = q(t) − x(t).    (4)
We refer to this dynamics as the directed dynamics in contrast to the undirected dynamics (1). The directed
dynamics is biologically-inspired – the similarity to (1) is the inspiration. It was never claimed to model the
behavior of a biological system. Rather, it was introduced as a biologically-inspired optimization method.
The work in [IJNT11] shows convergence of this directed dynamics (4) for the directed shortest path problem
and [JZ12, SV16c, Bon16] show convergence for general positive linear programs, i.e., linear programs with
positive cost vector c > 0 of the form
min{c^T x : Ax = b, x ≥ 0}.    (5)
The discrete versions of both dynamics define sequences x^(t), t = 0, 1, 2, . . . through

x^(t+1) = (1 − h^(t)) x^(t) + h^(t) q^(t)      discrete directed dynamics;      (6)
x^(t+1) = (1 − h^(t)) x^(t) + h^(t) |q^(t)|    discrete undirected dynamics,    (7)
where h(t) is the step size and q (t) is the minimum energy feasible solution as in (3). For the discrete
dynamics, we can ask complexity questions. This is particularly relevant for the discrete directed dynamics
as it was designed as a biologically-inspired optimization method.
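A minimal sketch of one run of the discrete directed dynamics (6) on a tiny positive LP can illustrate the method. The instance, the fixed step size, and the use of the closed form for the minimum energy feasible solution (cf. Fact 2.3) are our own illustrative choices, not the paper's experimental setup.

```python
import numpy as np

# Sketch of the discrete directed dynamics (6) on the positive LP
# min{c^T x : Ax = b, x >= 0}.  The minimum energy feasible solution
# has the closed form q = R^{-1} A^T (A R^{-1} A^T)^{-1} b with
# R = diag(c_e / x_e)  (cf. Fact 2.3).  Toy instance, our own choice.
def physarum_step(A, b, c, x, h):
    Rinv = np.diag(x / c)
    p = np.linalg.solve(A @ Rinv @ A.T, b)
    q = Rinv @ A.T @ p
    return (1 - h) * x + h * q

A = np.array([[1.0, 1.0]])    # single constraint: x_1 + x_2 = 1
b = np.array([1.0])
c = np.array([1.0, 2.0])      # optimal solution x* = (1, 0), opt = 1
x = np.array([0.5, 0.5])      # feasible starting point
h = 0.05                      # fixed step size, independent of eps
for _ in range(2000):
    x = physarum_step(A, b, c, x, h)
```

Since Aq = b at every step, feasibility is preserved exactly, and the iterate drifts toward the cheaper coordinate; the run ends with x close to (1, 0).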
For completeness, we review the state-of-the-art results for the discrete undirected dynamics. For the
undirected shortest path problem, the convergence of the discrete undirected dynamics (7) was shown
in [BBD+ 13]. The convergence proof gives an upper bound on the step size and on the number of steps required until an ε-approximation of the optimum is obtained. [SV16b] extends the result to the transshipment
problem and [SV16a] further generalizes the result to the case of positive LPs. The paper [SV16b] is related
to our first result. It shows convergence of the discretized undirected dynamics (7), we show convergence of
the continuous undirected dynamics (1) for a more general cost vector.
We come to the discrete directed dynamics (6). Similarly to the undirected setting, Becchetti et
al. [BBD+ 13] showed the convergence of (6) for the shortest path problem. Straszak and Vishnoi extended
the analysis to the transshipment problem [SV16b] and positive LPs [SV16c].
Theorem 1.3 ([SV16c, Theorem 1.3]). Let A ∈ Z^{n×m} have full row rank (n ≤ m), b ∈ Z^n, c ∈ Z^m_{>0}, and let D_S := max{|det(M)| : M is a square sub-matrix of A}.² Suppose the Physarum dynamics (6) is initialized with a feasible point x^(0) of (5) such that M^{−1} ≤ x^(0) ≤ M and c^T x^(0) ≤ M · opt, for some M ≥ 1. Then, for any ε > 0 and step size h ≤ ε/(√6 ||c||_1 D_S)^2, after k = O((εh)^{−2} ln M) steps, x^(k) is a feasible solution with c^T x^(k) ≤ (1 + ε)opt.
Theorem 1.3 gives an algorithm that computes a (1 + ε)-approximation to the optimal cost of (5). In
comparison to [BBD+ 13, SV16b], it has several shortcomings. First, it requires a feasible starting point.
Second, the step size depends linearly on ε. Third, the number of steps required to reach an ε-approximation
has a quartic dependence on opt/(εΦ). In contrast, the analysis in [BBD+ 13, SV16b] yields a step size
independent of ε and a number of steps that depends only logarithmically on 1/ε, see Table 2.
We overcome these shortcomings. Before we can state our result, we need some notation. Let X⋆ be
the set of optimal solutions to (5). The distance of a capacity vector x to X⋆ is defined as dist(x, X⋆ ) :=
inf{||x − x′||_∞ : x′ ∈ X⋆}. Let γ_A := gcd({A_ij : A_ij ≠ 0}) ∈ Z_{>0} and

D := max {|det(M)| : M is a square submatrix of A/γ_A with dimension n − 1 or n}.    (8)

Let N be the set of non-optimal basic feasible solutions of (5) and

Φ := min_{g ∈ N} c^T g − opt ≥ 1/(Dγ_A)^2,    (9)
where the inequality is well known [PS82, Lemma 8.6]. For completeness, we present a proof in Subsection 4.5.
Informally, our second main result establishes the following properties of the Physarum dynamics (6):
(i) For any ε > 0 and any strongly dominating starting point3 x(0) , there is a fixed step size h(x(0) ) such
that the Physarum dynamics (6) initialized with x(0) and h(x(0) ) converges to X⋆ , i.e., dist(x(k) , X⋆ ) <
ε/(DγA ) for large enough k.
(ii) The step size can be chosen independently of ε.
(iii) The number of steps k depends logarithmically on 1/ε and quadratically on opt/Φ.
(iv) The efficiency bounds depend on a scale-invariant determinant4 D.
In Section 4.8, we establish a corresponding lower bound. We show that for the Physarum dynamics (6)
to compute a point x^(k) such that dist(x^(k), X⋆) < ε, the number of steps has to grow linearly in opt/(hΦ) and ln(1/ε), i.e., k ≥ Ω(opt · (hΦ)^{−1} · ln(1/ε)). Table 2 puts our results into context.
² Using Lemma 3.1, the dependence on D_S can be improved to a scale-independent determinant D, defined in (8). For further details, we refer the reader to Subsection 4.2.
³ We postpone the definition of strongly dominating capacity vector to Section 4.3. Every scaled feasible solution is strongly dominating. In the shortest path problem, a capacity vector x is strongly dominating if every source-sink cut (S, S̄) has positive directed capacity, i.e., ∑_{a ∈ E(S,S̄)} x_a − ∑_{a ∈ E(S̄,S)} x_a > 0.
⁴ Note that (γ_A)^{n−1} D ≤ D_S ≤ (γ_A)^n D, and thus D yields an exponential improvement over D_S, whenever γ_A ≥ 2.
Reference     Problem         h step size    k number of steps                                                  Guarantee
[BBD+13]      Shortest Path   indep. of ε    poly(m, n, ||c||_1, ||x^(0)||_1) · ln(1/ε)                         dist(x^(k), X⋆) < ε
[SV16b]       Transshipment   indep. of ε    poly(m, n, ||c||_1, ||b||_1, ||x^(0)||_1) · ln(1/ε)                dist(x^(k), X⋆) < ε
[SV16c]       Positive LP     depends on ε   poly(||c||_1, D_S, ln ||x^(0)||_1) · 1/(Φε)^4                      c^T x^(k) ≤ (1 + ε)opt
Our Result    Positive LP     indep. of ε    poly(||c||_1, ||b||_1, D, γ_A, ln ||x^(0)||_1) · Φ^{−2} ln(1/ε)    c^T x^(k) < min_{g∈N} c^T g and dist(x^(k), X⋆) < ε/(Dγ_A)
Lower Bound   Positive LP     indep. of ε    Ω(opt · (hΦ)^{−1} ln(1/ε))                                         dist(x^(k), X⋆) < ε

Table 2: Convergence results for the discrete directed Physarum dynamics (6).
We now state our second main result for the special case of a feasible starting point, and we provide the
full version in Theorem 4.2 which applies for arbitrary strongly dominating starting point, see Section 4. We
use the following constants in the statement of the bounds.
(i) h_0 := c_min/(4D||c||_1), where c_min := min_i {c_i};
(ii) Ψ^(0) := max{mD^2 ||b/γ_A||_1, ||x^(0)||_∞};
(iii) C_1 := D||b/γ_A||_1 ||c||_1, C_2 := 8^2 m^2 n D^5 γ_A^2 ||A||_∞ ||b||_1 and C_3 := D^3 γ_A ||b||_1 ||c||_1.
Theorem 1.4. Suppose A ∈ Z^{n×m} has full row rank (n ≤ m), b ∈ Z^n, c ∈ Z^m_{>0} and ε ∈ (0, 1). Given a feasible starting point x^(0) > 0, the Physarum dynamics (6) with step size h ≤ (Φ/opt) · h_0^2/2 outputs for any k ≥ 4C_1/(hΦ) · ln(C_2 Ψ^(0)/(ε · min{1, x^(0)_min})) a feasible x^(k) > 0 such that dist(x^(k), X⋆) < ε/(Dγ_A).
We stated the bounds on h in terms of the unknown quantities Φ and opt. However, Φ/opt ≥ 1/C3 by
Lemma 3.1 and hence replacing Φ/opt by 1/C3 yields constructive bounds for h. Note that the upper bound
on the step size does not depend on ε and that the bound on the number of iterations depends logarithmically
on 1/ε and quadratically on opt/Φ.
What can be done if the initial point is not strongly dominating? For the transshipment problem it suffices
to add an edge of high capacity and high cost from every source node to every sink node [BBD+ 13, SV16b].
This will make the instance strongly dominating and will not affect the optimal solution. We generalize this
observation to positive linear programs. We add an additional column equal to b and give it sufficiently
high capacity and cost. This guarantees that the resulting instance is strongly dominating and the optimal
solution remains unaffected. Moreover, our approach generalizes and improves upon [SV16b, Theorem 1.2],
see Section 4.7.
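The augmentation can be sketched as follows. This is an illustration with ad-hoc constants of our own choosing; the paper's precise cost and capacity bounds are in Section 4.7.

```python
import numpy as np

# Preconditioning sketch (ours): append the column b itself with a
# prohibitively high cost.  The vector (0, ..., 0, 1) is then feasible
# for the augmented system, while any optimal solution leaves the new
# column unused.  The constant `big` is an ad-hoc illustrative choice.
A = np.array([[1.0, 1.0], [1.0, -1.0]])
b = np.array([2.0, 0.0])
c = np.array([1.0, 3.0])

big = 100.0 * (np.abs(c).sum() + 1.0)       # "sufficiently high" cost
A_aug = np.hstack([A, b.reshape(-1, 1)])    # extra column equal to b
c_aug = np.append(c, big)

x0 = np.zeros(A_aug.shape[1])
x0[-1] = 1.0                                # A_aug @ x0 == b
```

Because the new column costs more than any solution that avoids it, the optimum of the augmented LP coincides with the original one, while x0 provides the required starting point.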
Proof Techniques: The crux of the analysis in [IJNT11, BBD+ 13, SV16b] is to show that for large enough
k, x(k) is close to a non-negative flow f (k) and then to argue that f (k) is close to an optimal flow f ⋆ . This
line of arguments yields a convergence of x(k) to X⋆ with a step size h chosen independently of ε.
In Section 4, we extend the preceding approach to positive linear programs, by generalizing the concept of
non-negative cycle-free flows to non-negative feasible kernel-free vectors (Subsection 4.4). Although we use
the same high level ideas as in [BBD+ 13, SV16b], we stress that our analysis generalizes all relevant lemmas
in [BBD+ 13, SV16b] and it uses arguments from linear algebra and linear programming duality, instead
of combinatorial arguments. Further, our core efficiency bounds (Subsection 4.2) extend [SV16c] and yield
a scale-invariant determinant dependence of the step size and are applicable for any strongly dominating
point (Subsection 4.3).
2 Convergence of the Continuous Undirected Dynamics: Simple Instances
We prove Theorem 1.2 under the following simplifying assumptions:
(A) c > 0,
(B) The basic feasible solutions of (2) have distinct cost,
(C) We start with a positive vector x(0) ∈ Xdom := { x ∈ R^m : there is a feasible f with |f| ≤ x }.
This section generalizes [Bon13]. For the undirected shortest path problem, condition (B) states that all
simple undirected source-sink paths have distinct cost and condition (C) states that all source-sink cuts have
a capacity of at least one at time zero (and hence at all times). The existence of a solution with domain
[0, ∞) was already shown in [SV16a].
2.1 Preliminaries
Note that we may assume that A has full row-rank since any equation that is linearly dependent on other
equations can be deleted without changing the feasible set. We continue to use n and m for the dimension
of A. Thus, A has rank n. We continue by fixing some terms and notation. A basic feasible solution of (2) is
a pair of vectors x and f = (f_B, f_N), where f_B = A_B^{−1} b and A_B is a square n × n non-singular sub-matrix of
A and fN = 0 is the vector indexed by the coordinates not in B, and x = |f |. Since f uniquely determines
x, we may drop the latter for the sake of brevity and call f a basic feasible solution of (2). A feasible
solution f is kernel-free or non-circulatory if it is contained in the convex hull of the basic feasible solutions.5
We say that a vector f′ is sign-compatible with a vector f (of the same dimension) or f-sign-compatible if f′_e ≠ 0 implies f′_e f_e > 0. In particular, supp(f′) ⊆ supp(f). For a given capacity vector x and a vector f ∈ R^m with supp(f) ⊆ supp(x), we use E(f) = ∑_e (c_e/x_e) f_e^2 to denote the energy of f. The energy of f is infinite if supp(f) ⊄ supp(x). We use cost(f) = c^T |f| = ∑_e c_e |f_e| to denote the cost of f. Note that E(x) = ∑_e (c_e/x_e) x_e^2 = ∑_e c_e x_e = cost(x). We define the constants c_max = ||c||_∞ and c_min = min_{e: c_e > 0} c_e.
We use the following corollary of the finite basis theorem for polyhedra.
Lemma 2.1. Let f be a feasible solution of (2). Then f is the sum of a convex combination of at most
m basic feasible solutions plus a vector in the kernel of A. Moreover, all elements in this representation are
sign-compatible with f .
Proof. We may assume f ≥ 0. Otherwise, we flip the sign of the appropriate columns of A. Thus, the system
Af = b, f ≥ 0 is feasible and f is the sum of a convex combination of at most m basic feasible solutions plus
a vector in the kernel of A by the finite basis theorem [Sch99, Corollary 7.1b]. By definition, the elements
in this representation are non-negative vectors and hence sign-compatible with f .
Fact 2.2 (Grönwall’s Lemma). Let A, B, α, β ∈ R, α ≠ 0, β ≠ 0, and let g be a continuously differentiable
function on [0, ∞). If A + αg(t) ≤ ġ(t) ≤ B + βg(t) for all t ≥ 0, then −A/α + (g(0) + A/α)eαt ≤ g(t) ≤
−B/β + (g(0) + B/β)eβt for all t ≥ 0.
Proof. We show the upper bound. Assume first that B = 0. Then

d/dt (g/e^{βt}) = (ġ e^{βt} − β g e^{βt}) / e^{2βt} ≤ 0   implies   g(t)/e^{βt} ≤ g(0)/e^{β·0} = g(0).
If B ≠ 0, define h(t) = g(t) + B/β. Then
ḣ = ġ ≤ B + βg = B + β(h − B/β) = βh
and hence h(t) ≤ h(0)eβt . Therefore g(t) ≤ −B/β + (g(0) + B/β)eβt .
⁵ For the undirected shortest path problem, we drop the equation corresponding to the sink. Then b becomes the negative
indicator vector corresponding to the source node. Note that n is one less than the number of nodes of the graph. The basic
feasible solutions are the simple undirected source-sink paths. A circulatory solution contains a cycle on which there is flow.
An immediate consequence of Grönwall’s Lemma is that the undirected Physarum dynamics (1) initialized with any positive starting vector x(0) generates a trajectory {x(t)}_{t≥0} in which each state x(t) is a positive vector. Indeed, since ẋ_e = |q_e| − x_e ≥ −x_e, we have x_e(t) ≥ x_e(0) · exp{−t} for every index e with
xe (0) > 0 and every time t. Further, by (1) and (3), it holds for indices e with xe (0) = 0 that xe (t) = 0 for
every time t. Hence, the trajectory {x(t)}t≥0 has a time-invariant support.
Fact 2.3 ([JZ12]). Let R = diag(ce /xe ). Then q = R−1 AT p, where p = (AR−1 AT )−1 b.
Proof. q minimizes ∑_e r_e q_e^2 subject to Aq = b. The KKT conditions imply the existence of a vector p such
that Rq = AT p. Substituting into Aq = b yields p = (AR−1 AT )−1 b.
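This closed form is easy to verify numerically: q is feasible, and perturbing q by any kernel vector z only adds energy, since (Rq)·z = p·(Az) = 0. The following sketch uses a random instance of our own choice.

```python
import numpy as np

# Numerical check of Fact 2.3 on a random instance (our own choice):
# q = R^{-1} A^T (A R^{-1} A^T)^{-1} b  satisfies Aq = b and minimizes
# sum_e r_e q_e^2, because R q = A^T p is orthogonal to ker(A).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))
b = rng.standard_normal(3)
r = rng.uniform(0.5, 2.0, 6)          # resistances r_e = c_e / x_e > 0

Rinv = np.diag(1.0 / r)
p = np.linalg.solve(A @ Rinv @ A.T, b)
q = Rinv @ A.T @ p

def energy(f):
    return float(np.sum(r * f ** 2))

# A perturbation inside ker(A) keeps feasibility but adds energy.
v = rng.standard_normal(6)
z = v - np.linalg.pinv(A) @ (A @ v)   # projection of v onto ker(A)
```

Here `energy(q) <= energy(q + z)` for every kernel direction z, which is exactly the optimality of q.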
Lemma 2.4. Xdom is an invariant set, i.e., if x(0) ∈ Xdom then x(t) ∈ Xdom for all t.
Proof. Let q(t) be the minimum energy feasible solution with respect to R(t) = diag(c_e/x_e(t)), and let f(t) be such that f(0) is feasible, |f(0)| ≤ x(0), and ḟ(t) = q(t) − f(t). Then d/dt (Af − b) = A(q − f) = b − Af and hence Af(t) − b = (Af(0) − b)e^{−t} = 0. Thus f(t) is feasible for all t. Moreover,
d/dt (f − x) = ḟ − ẋ = q − f − (|q| − x) = q − |q| − (f − x) ≤ −(f − x).

Thus f(t) − x(t) ≤ (f(0) − x(0))e^{−t} ≤ 0 by Grönwall’s Lemma applied with g(t) = f(t) − x(t) and β = −1,
and hence f (t) ≤ x(t) for all t. Similarly,
d/dt (f + x) = ḟ + ẋ = q − f + (|q| − x) = q + |q| − (f + x) ≥ −(f + x).

Thus f(t) + x(t) ≥ (f(0) + x(0))e^{−t} ≥ 0 by Grönwall’s Lemma applied with g(t) = f(t) + x(t) and α = −1 and A = 0, and hence f(t) ≥ −x(t) for all t.
We conclude that |f (t)| ≤ x(t) for all t. Thus, x(t) ∈ Xdom for all t.
2.2 The Convergence Proof
We will first characterize the equilibrium points. They are precisely the points |f |, where f is a basic feasible
solution; the proof uses property (B). We then show that E(x) is a Lyapunov function for (1), in particular,
Ė ≤ 0 and Ė = 0 if and only if x is an equilibrium point. For this argument, we need that the energy of q is
at most the energy of x with equality if and only if x is an equilibrium point. This proof uses (A) and (C).
It follows from the general theory of dynamical systems that x(t) approaches an equilibrium point. Finally,
we show that convergence to a non-optimal equilibrium is impossible.
Lemma 2.5 (Generalization of Lemma 2.3 in [Bon13]). Assume (A) to (C). If f is a basic feasible solution
of (2), then x = |f | is an equilibrium point. Conversely, if x is an equilibrium point, then x = |f | for some
basic feasible solution f .
Proof. Let f be a basic feasible solution, let x = |f |, and let q be the minimum energy feasible solution with
respect to the resistances ce /xe . We have Aq = b and supp(q) ⊆ supp(x) by definition of q. Since f is a
basic feasible solution, there is a subset B of size n of the columns of A such that A_B is non-singular and f = (A_B^{−1} b, 0). Since supp(q) ⊆ supp(x) ⊆ B, we have q = (q_B, 0) for some vector q_B. Thus, b = Aq = A_B q_B
and hence qB = fB . Therefore ẋ = |q| − x = 0 and x is an equilibrium point.
Conversely, if x is an equilibrium point, |qe | = xe for every e. By changing the signs of some columns of
A, we may assume q ≥ 0. Then q = x. Since q_e = (x_e/c_e) A_e^T p by Fact 2.3, where A_e is the e-th column of A,
we have ce = ATe p, whenever xe > 0. By Lemma 2.1, q is a convex combination of basic feasible solutions
and a vector in the kernel of A that are sign-compatible with q. The vector in the kernel must be zero as q is
a minimum energy feasible solution. For any basic feasible solution z contributing to q, we have supp(z) ⊆ supp(x). Summing over the e ∈ supp(z), we obtain cost(z) = ∑_{e∈supp(z)} c_e z_e = ∑_{e∈supp(z)} z_e A_e^T p = b^T p.
Thus, the convex combination involves only a single basic feasible solution by assumption (B) and hence x
is a basic feasible solution.
The vector x(t) dominates a feasible solution at all times. Since q(t) is the minimum energy feasible
solution at time t, this implies E(q(t)) ≤ E(x(t)) at all times. A further argument shows that we have
equality if and only if x = |q|.
Lemma 2.6 (Generalization of Lemma 3.1 in [Bon13]). Assume (A) to (C). At all times, E(q) ≤ E(x). If
E(q) = E(x), then x = |q|.
Proof. Recall that x(t) ∈ Xdom for all t. Thus, at all times, there is a feasible f such that |f | ≤ x. Since q
is a minimum energy feasible solution, we have
E(q) ≤ E(f ) ≤ E(x).
If E(q) = E(x) then E(q) = E(f ) and hence q = f since the minimum energy feasible solution is unique.
Also, |f | = x since |f | ≤ x and |fe | < xe for some e implies E(f ) < E(x). The last conclusion uses c > 0.
Lyapunov functions are the main tool for proving convergence of dynamical systems. We show that E(x)
is a Lyapunov function for (1).
Lemma 2.7 (Generalization of Lemma 3.2 in [Bon13]). Assume (A) to (C). E(x) is a Lyapunov function
for (1), i.e., it is continuous as a function of x, E(x) ≥ 0, Ė(x) ≤ 0 and Ė(x) = 0 if and only if ẋ = 0.
Proof. E is clearly continuous and non-negative. Recall that E(x) = cost(x). Let R be the diagonal matrix with entries c_e/x_e. Then

d/dt cost(x) = c^T (|q| − x)                             by (1)
             = x^T R|q| − x^T Rx                         since c = Rx
             = x^T R^{1/2} R^{1/2} |q| − x^T Rx
             ≤ (q^T Rq)^{1/2} (x^T Rx)^{1/2} − x^T Rx    by Cauchy-Schwarz
             ≤ (x^T Rx)^{1/2} (x^T Rx)^{1/2} − x^T Rx    by Lemma 2.6
             = 0.

Observe that d/dt cost(x) = 0 implies that both inequalities above are equalities. This is only possible if the vectors |q| and x are parallel and E(q) = E(x). Thus, x = |q| by Lemma 2.6.
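The Lyapunov property can be observed numerically: per Euler step, the cost changes by dt · c^T(|q| − x) ≤ 0 as long as the iterate dominates a feasible solution, so the recorded costs are non-increasing. The instance below is a toy choice of ours, not from the paper.

```python
import numpy as np

# Numerical illustration of Lemma 2.7 (toy instance, our own choice):
# along an Euler discretization of (1), cost(x) = c^T x never increases.
A = np.array([[1.0, 1.0, 0.0],    # f_1 + f_2 = 1
              [0.0, -1.0, 1.0]])  # f_3 = f_2  (edges 2 and 3 in series)
b = np.array([1.0, 0.0])
c = np.array([1.0, 1.0, 1.0])
x = np.array([1.0, 1.0, 1.0])     # dominates the feasible f = (1, 0, 0)

def min_energy(x):
    Rinv = np.diag(x / c)
    p = np.linalg.solve(A @ Rinv @ A.T, b)
    return Rinv @ A.T @ p

dt = 0.01
costs = []
for _ in range(2000):
    q = min_energy(x)
    x = x + dt * (np.abs(q) - x)
    costs.append(float(c @ x))
# costs decreases monotonically from 3 towards the optimal cost 1.
```

The monotone decrease mirrors Ė ≤ 0, and the limit value is the cost of the optimal basic feasible solution (1, 0, 0).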
It follows now from the general theory of dynamical systems that x(t) converges to an equilibrium point.
Corollary 2.8 (Generalization of Corollary 3.3 in [Bon13]). Assume (A) to (C). As t → ∞, x(t) approaches an equilibrium point and c^T x(t) approaches the cost of the corresponding basic feasible solution.

Proof. The proof in [Bon13] carries over. We include it for completeness. The existence of a Lyapunov function E implies by [LaS76, Corollary 2.6.5] that x(t) approaches the set { x ∈ R^m_{≥0} : Ė = 0 }, which by Lemma 2.7 is the same as the set { x ∈ R^m_{≥0} : ẋ = 0 }. Since this set consists of isolated points (Lemma 2.5), x(t) must approach one of those points, say the point x_0. When x = x_0, one has E(q) = E(x) = cost(x) = c^T x.
It remains to exclude that x(t) converges to a nonoptimal equilibrium point.
Theorem 2.9 (Generalization of Theorem 3.4 in [Bon13]). Assume (A) to (C). As t → ∞, cT x(t) converges
to the cost of the optimal solution and x(t) converges to the optimal solution.
Proof. By the corollary, it suffices to prove the second part of the claim. For the second part, assume that x(t) converges to a non-optimal solution z. Let x∗ be the optimal solution and let W = ∑_e x∗_e c_e ln x_e. Let δ = (cost(z) − cost(x∗))/2. Note that for all sufficiently large t, we have E(q(t)) ≥ cost(z) − δ ≥ cost(x∗) + δ. Further, by definition q_e = (x_e/c_e) A_e^T p and thus

Ẇ = ∑_e x∗_e c_e (|q_e| − x_e)/x_e = ∑_e x∗_e |A_e^T p| − cost(x∗) ≥ ∑_e x∗_e A_e^T p − cost(x∗) ≥ δ,

where the last inequality follows from ∑_e x∗_e A_e^T p = b^T p = E(q) ≥ cost(x∗) + δ. Hence W → ∞, a contradiction to the fact that x is bounded.

3 Convergence of the Continuous Undirected Dynamics: General Instances
We now prove the general case. We assume
(D) c ≥ 0,
(E) cost(z) > 0 for every nonzero vector z in the kernel of A,
(F) We start with a positive vector x(0) > 0.
In this section, we generalize [BMV12] in two directions. First, we treat general undirected LPs and not just
the undirected shortest path problem, respectively, the transshipment problem. Second, we substitute the
condition c > 0 with c ≥ 0 and every nonzero vector in the kernel of A has positive cost. For the undirected
shortest path problem, the latter condition states that the underlying undirected graph has no zero-cost
cycle.
3.1 Existence of a Solution with Domain [0, ∞)
In this section we show that a solution x(t) to (1) has domain [0, ∞). We first derive an explicit formula for
the minimum energy feasible solution q and then show that the mapping x 7→ q is Lipschitz continuous; this
implies existence of a solution with domain [0, ∞) by standard arguments.
3.1.1 The Minimum Energy Solution

Recall that for γ_A = gcd({ A_{ij} : A_{ij} ≠ 0 }) ∈ Z_{>0}, we defined

D = max { |det(M)| : M is a square submatrix of A/γ_A of dimension n − 1 or n }.
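For intuition, γ_A and D can be computed by brute force on small instances. The following Python sketch (a toy example of our own, not part of the paper's development) enumerates all square submatrices of A/γ_A of dimension n − 1 or n:

```python
import itertools
import math
from functools import reduce

import numpy as np

def gamma_and_D(A):
    """Brute-force computation of gamma_A = gcd of the nonzero entries of the
    integral matrix A, and D = max |det M| over all square submatrices of
    A/gamma_A of dimension n-1 or n (exponential time; small examples only)."""
    A = np.asarray(A, dtype=int)
    n, m = A.shape
    gamma = reduce(math.gcd, (abs(v) for v in A.ravel() if v != 0))
    B = A // gamma
    D = 0
    for k in (n - 1, n):
        if k < 1:
            continue
        for rows in itertools.combinations(range(n), k):
            for cols in itertools.combinations(range(m), k):
                sub = B[np.ix_(rows, cols)].astype(float)
                D = max(D, abs(round(float(np.linalg.det(sub)))))
    return gamma, D

# Every entry of this matrix is divisible by 2, so gamma_A = 2, and the
# submatrices of A/2 have determinants in {0, +-1}, so D = 1.
print(gamma_and_D([[2, 2, 0], [0, -2, 2]]))  # (2, 1)
```

Dividing out γ_A first matters: without it, the same matrix would report D = 4.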
We derive now properties of the minimum energy solution. In particular, if every nonzero vector in the
kernel of A has positive cost,
(i) the minimum energy feasible solution is kernel-free and unique (Lemma 3.2),
(ii) |qe | ≤ D||b/γA ||1 for every e ∈ [m] (Lemma 3.3),
(iii) q is defined by (12) (Lemma 3.4), and
(iv) E(q) = bT p, where p is defined by (12) (Lemma 3.5).
We note that for positive cost vector c > 0, these results are known.
We proceed by establishing some useful properties on basic feasible solutions.
Lemma 3.1. Suppose A ∈ Z^{n×m} is an integral matrix, and b ∈ Z^n is an integral vector. Then, for any
basic feasible solution f with Af = b and f ≥ 0, it holds that ||f||_∞ ≤ D||b/γ_A||_1, and f_j ≠ 0 implies
|f_j| ≥ 1/(Dγ_A).
Proof. Since f is a basic feasible solution, it has the form f = (f_B, 0) with A_B f_B = b, where A_B ∈ Z^{n×n}
is an invertible submatrix of A. We write M_{−i,−j} to denote the matrix M with the i-th row and j-th
column deleted. Let Q_j be the matrix formed by replacing the j-th column of A_B by the column vector b. Then,
using the fact that det(tA) = t^n det(A) for every A ∈ R^{n×n} and t ∈ Z, Cramer's rule yields

|f_B(j)| = |det(Q_j)| / |det(A_B)| = (1/γ_A) · | Σ_{k=1}^n (−1)^{j+k} b_k det(γ_A^{−1} [A_B]_{−k,−j}) | / |det(γ_A^{−1} A_B)|.

By the choice of γ_A, the values det(γ_A^{−1} A_B) and det(γ_A^{−1} [A_B]_{−k,−j}) are integral for all k; it follows that

|f_B(j)| ≤ D||b/γ_A||_1    and    f_B(j) ≠ 0  ⟹  1/(Dγ_A) ≤ |f_B(j)|.
Lemma 3.2. If every nonzero vector in the kernel of A has positive cost, the minimum energy feasible
solution is kernel-free and unique.
Proof. Let q be a minimum energy feasible solution. Since q is feasible, it can be written as qn + qr , where
qn is a convex combination of basic feasible solutions and qr lies in the kernel of A. Moreover, all elements
in this representation are sign-compatible with q by Lemma 2.1. If q_r ≠ 0, the vector q − q_r is feasible and
has smaller energy, a contradiction. Thus qr = 0.
We next prove uniqueness. Assume for the sake of a contradiction that there are two distinct minimum
energy feasible solutions q^{(1)} and q^{(2)}. We show that the solution (q^{(1)} + q^{(2)})/2 uses less energy than q^{(1)}
and q^{(2)}. Since h ↦ h^2 is a strictly convex function from R to R, the average of the two solutions is
better than either solution if there is an index e with r_e > 0 and q_e^{(1)} ≠ q_e^{(2)}. The difference z = q^{(1)} − q^{(2)}
lies in the kernel of A and hence cost(z) = Σ_e c_e |z_e| > 0. Thus there is an e with c_e > 0 (and hence r_e > 0)
and z_e ≠ 0. We have now shown uniqueness.
Lemma 3.3. Assume that every nonzero vector in the kernel of A has positive cost. Let q be the minimum
energy feasible solution. Then |qe | ≤ D||b/γA ||1 for every e.
Proof. Since q is a convex combination of basic feasible solutions, |q_e| ≤ max_z |z_e|, where z ranges over basic
feasible solutions of the form (z_B, 0) with z_B = A_B^{−1} b and A_B ∈ R^{n×n} a non-singular submatrix of A.
Thus, by Lemma 3.1 every component of z is bounded by D||b/γ_A||_1.
In [SV16c], the bound |qe | ≤ D2 m||b||1 was shown. We will now derive explicit formulae for the minimum
energy solution q. We will express q in terms of a vector p ∈ Rn , which we refer to as the potential, by
analogy with the network setting, in which p can be interpreted as the electric potential of the nodes. The
energy of the minimum energy solution is equal to bT p. We also derive a local Lipschitz condition for the
mapping from x to q. Note that for c > 0 these facts are well-known. Let us split the column indices [m] of
A into
P := { e ∈ [m] : ce > 0 } and Z := { e ∈ [m] : ce = 0 } .
(10)
Lemma 3.4. Assume that every nonzero vector in the kernel of A has positive cost. Let r_e = c_e/x_e and let
R denote the corresponding diagonal matrix. Let us split A into A_P and A_Z, and q into q_P and q_Z. Since A_Z
has linearly independent columns, we may assume that the first |Z| rows of A_Z form a square non-singular
matrix. We can thus write

A = [ A'_P   A'_Z ]
    [ A''_P  A''_Z ]

with invertible A'_Z. Then the minimum energy solution satisfies

[ A'_P   A'_Z ] [ q_P ]   [ b'  ]          [ R_P q_P ]   [ [A'_P]^T  [A''_P]^T ] [ p'  ]
[ A''_P  A''_Z ] [ q_Z ] = [ b'' ]    and   [    0    ] = [ [A'_Z]^T  [A''_Z]^T ] [ p'' ]        (11)

for some vector p = (p', p''); here p' has dimension |Z|. The equation system (11) has a unique solution, given
by

[ q_Z ]   [ [A'_Z]^{−1} (b' − A'_P q_P) ]          [ p'  ]   [ −[[A'_Z]^T]^{−1} [A''_Z]^T p''                       ]
[ q_P ] = [ R_P^{−1} A_P^T p            ]    and   [ p'' ] = [ (M R_P^{−1} M^T)^{−1} (b'' − A''_Z [A'_Z]^{−1} b') ],        (12)
where M = A′′P − A′′Z [A′Z ]−1 A′P is the Schur complement of the block A′Z of the matrix A.
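As a sanity check, the block formulas (12) can be evaluated directly with numpy. The instance below is a hypothetical toy example of our own (one zero-cost column, chosen so that the first |Z| rows of A_Z already form a non-singular block, as the lemma assumes):

```python
import numpy as np

# Toy instance: columns 0 and 1 have positive cost, column 2 is free (c_e = 0).
A = np.array([[1., 0., 1.],
              [0., 1., 1.]])
b = np.array([1., 1.])
c = np.array([1., 2., 0.])
x = np.array([1., 1., 1.])

P, Z = c > 0, c == 0
AP, AZ = A[:, P], A[:, Z]
k = AZ.shape[1]                      # |Z|; the first k rows of AZ are non-singular here
AZ1, AZ2 = AZ[:k], AZ[k:]
AP1, AP2 = AP[:k], AP[k:]
b1, b2 = b[:k], b[k:]
RPinv = np.diag(x[P] / c[P])         # R_P^{-1}

M = AP2 - AZ2 @ np.linalg.inv(AZ1) @ AP1                  # Schur complement of A'_Z
p2 = np.linalg.solve(M @ RPinv @ M.T,
                     b2 - AZ2 @ np.linalg.inv(AZ1) @ b1)  # p'' from (12)
p1 = -np.linalg.inv(AZ1.T) @ AZ2.T @ p2                   # p'  from (12)
p = np.concatenate([p1, p2])
qP = RPinv @ AP.T @ p                                     # q_P from (12)
qZ = np.linalg.inv(AZ1) @ (b1 - AP1 @ qP)                 # q_Z from (12)

q = np.empty(A.shape[1])
q[P], q[Z] = qP, qZ
print(q, A @ q)                      # q = [0, 0, 1]: all demand uses the free column
```

As expected, the minimum energy solution routes everything over the zero-cost column, and its energy E(q) = b^T p is 0.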
Proof. q minimizes E(f) = f^T R f among all solutions of Af = b. The KKT conditions state that q must
satisfy Rq = A^T p for some p. Note that 2Rf is the gradient of the energy E(f) with respect to f and that
−A^T p is the gradient of p^T (b − Af) with respect to f. We may absorb the factor −2 in p. Thus q
satisfies (11).
We show next that the linear system (11) has a unique solution. The top |Z| rows of the left system
in (11) give

q_Z = [A'_Z]^{−1} (b' − A'_P q_P).        (13)

Substituting this expression for q_Z into the bottom n − |Z| rows of the left system in (11) yields

M q_P = b'' − A''_Z [A'_Z]^{−1} b'.
From the top |P| rows of the right system in (11) we infer q_P = R_P^{−1} A_P^T · p. Thus

M R_P^{−1} A_P^T · p = b'' − A''_Z [A'_Z]^{−1} · b'.        (14)
The bottom n − |Z| rows of the right system in (11) yield 0 = A_Z^T p = [A'_Z]^T p' + [A''_Z]^T p'' and hence

p' = −[[A'_Z]^T]^{−1} [A''_Z]^T p''.        (15)
Substituting (15) into (14) yields

b'' − A''_Z [A'_Z]^{−1} b' = M R_P^{−1} ( [A'_P]^T p' + [A''_P]^T p'' )
                           = M R_P^{−1} ( [A''_P]^T − [A'_P]^T [[A'_Z]^T]^{−1} [A''_Z]^T ) p''
                           = M R_P^{−1} M^T p''.        (16)
It remains to show that the matrix M R_P^{−1} M^T is non-singular. We first observe that the rows of M are
linearly independent. Consider the left system in (11). Multiplying the first |Z| rows by [A'_Z]^{−1} and
then subtracting A''_Z times the resulting rows from the last n − |Z| rows turns A into the matrix

Q = [ [A'_Z]^{−1} A'_P   I ]
    [ M                  0 ].

By assumption, A has independent rows. Moreover, the preceding operations guarantee that
rank(A) = rank(Q). Therefore, M has independent rows. Since R_P^{−1} is a positive diagonal matrix, R_P^{−1/2}
exists and is a positive diagonal matrix. Let z be an arbitrary nonzero vector of dimension n − |Z|. Then
z^T M R_P^{−1} M^T z = (R_P^{−1/2} M^T z)^T (R_P^{−1/2} M^T z) > 0, since M^T z ≠ 0. Hence M R_P^{−1} M^T is
non-singular; it is even positive definite.
There is a shorter proof that the system (11) has a unique solution. However, the argument does not give
an explicit expression for the solution. In the case of a convex objective function and affine constraints, the
KKT conditions are sufficient for being a global minimum. Thus any solution to (11) is a global optimum.
We have already shown in Lemma 3.2 that the global minimum is unique.
We next observe that the energy of q can be expressed in terms of the potential.
Lemma 3.5. Let q be the minimum energy feasible solution and let f be any feasible solution. Then
E(q) = bT p = f T AT p.
Proof. As in the proof of Lemma 3.4, we split q into q_P and q_Z, R into R_P and R_Z, and A into A_P and A_Z.
Then

E(q) = q_P^T R_P q_P            by the definition of E(q) and since R_Z = 0
     = p^T A_P q_P              by the right system in (11)
     = p^T (b − A_Z q_Z)        by the left system in (11)
     = b^T p                    by the right system in (11).

For any feasible solution f, we have f^T A^T p = b^T p.
3.1.2 The Mapping x ↦ q is Locally-Lipschitz
We show that the mapping x 7→ q is Lipschitz continuous; this implies existence of a solution x(t) with domain
[0, ∞) by standard arguments. Our analysis builds upon Cramer’s rule and the Cauchy-Binet formula. The
Cauchy-Binet formula extends Kirchhoff’s spanning tree theorem which was used in [BMV12] for the analysis
of the undirected shortest path problem.
Lemma 3.6. Assume c ≥ 0, that no nonzero vector in the kernel of A has cost zero, and that A, b, and
c are integral. Let α, β > 0. For any two vectors x and x̃ in R^m with α·1 ≤ x, x̃ ≤ β·1, define γ :=
2·(m choose n−1)·(β/α)^n c_max^n D^2 ||b/γ_A||_1. Then ||q_e(x)| − |q_e(x̃)|| ≤ γ ||x − x̃||_∞ for every e ∈ [m].
Proof. First assume that c > 0. By Cramer's rule,

(A R^{−1} A^T)^{−1} = (1 / det(A R^{−1} A^T)) · ( (−1)^{i+j} det(M_{−j,−i}) )_{ij},

where M_{−i,−j} is obtained from A R^{−1} A^T by deleting the i-th row and the j-th column. For a subset S of [m]
and an index i ∈ [n], let A_S be the n × |S| matrix consisting of the columns selected by S and let A_{−i,S} be
the matrix obtained from A_S by deleting row i. If D is a diagonal matrix of size m, then (AD)_S = A_S D_S.
and an index i ∈ [n], let AS be the n × |S| matrix consisting of the columns selected by S and let A−i,S be
the matrix obtained from AS by deleting row i. If D is a diagonal matrix of size m, then (AD)S = AS DS .
The Cauchy-Binet theorem expresses the determinant of a product of two matrices (not necessarily square)
as a sum of determinants of square matrices. It yields
det(A R^{−1} A^T) = Σ_{S⊆[m], |S|=n} ( det((A R^{−1/2})_S) )^2 = Σ_{S⊆[m], |S|=n} ( Π_{e∈S} x_e/c_e ) · (det A_S)^2.

Similarly,

det( (A R^{−1} A^T)_{−i,−j} ) = Σ_{S⊆[m], |S|=n−1} ( Π_{e∈S} x_e/c_e ) · ( det A_{−i,S} · det A_{−j,S} ).

Using p = (A R^{−1} A^T)^{−1} b, we obtain

p_i = [ Σ_{j∈[n]} (−1)^{i+j} Σ_{S⊆[m], |S|=n−1} ( Π_{e∈S} x_e/c_e ) · ( det A_{−i,S} · det A_{−j,S} ) b_j ] / [ Σ_{S⊆[m], |S|=n} ( Π_{e∈S} x_e/c_e ) · (det A_S)^2 ].        (17)
Substituting into q = R^{−1} A^T p yields

q_e = (x_e/c_e) A_e^T p

    = (x_e/c_e) Σ_{i∈[n]} A_{i,e} · [ Σ_{j∈[n]} (−1)^{i+j+2n} Σ_{S⊆[m], |S|=n−1} ( Π_{e'∈S} x_{e'}/c_{e'} ) · ( det A_{−i,S} · det A_{−j,S} ) b_j ] / [ Σ_{S⊆[m], |S|=n} ( Π_{e'∈S} x_{e'}/c_{e'} ) · (det A_S)^2 ]

    = [ Σ_{S⊆[m], |S|=n−1} ( Π_{e'∈S∪e} x_{e'}/c_{e'} ) · ( Σ_{i∈[n]} (−1)^{i+n} A_{i,e} det A_{−i,S} ) · ( Σ_{j∈[n]} (−1)^{j+n} b_j det A_{−j,S} ) ] / [ Σ_{S⊆[m], |S|=n} ( Π_{e'∈S} x_{e'}/c_{e'} ) · (det A_S)^2 ]

    = [ Σ_{S⊆[m], |S|=n−1} ( Π_{e'∈S∪e} x_{e'}/c_{e'} ) · det(A_S|A_e) · det(A_S|b) ] / [ Σ_{S⊆[m], |S|=n} ( Π_{e'∈S} x_{e'}/c_{e'} ) · (det A_S)^2 ],        (18)

where (A_S|A_e), respectively (A_S|b), denotes the n × n matrix whose columns are selected from A by S and
whose last column is equal to A_e, respectively b.
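For c > 0, formula (18) can be verified numerically against the direct solve q = R^{−1}A^T(AR^{−1}A^T)^{−1}b. The sketch below (a random small instance of our own choosing, not from the paper) evaluates the Cauchy-Binet sums by brute force:

```python
import itertools

import numpy as np

rng = np.random.default_rng(0)
n, m = 2, 4
A = rng.integers(-2, 3, size=(n, m)).astype(float)
while np.linalg.matrix_rank(A) < n:                  # ensure full row rank
    A = rng.integers(-2, 3, size=(n, m)).astype(float)
b = rng.integers(1, 4, size=n).astype(float)
c = rng.uniform(1, 2, size=m)                        # strictly positive costs
x = rng.uniform(1, 2, size=m)

Rinv = np.diag(x / c)
p = np.linalg.solve(A @ Rinv @ A.T, b)               # the potential
q_direct = Rinv @ A.T @ p                            # minimum energy solution

def q_via_18(e):
    """Evaluate the right-hand side of (18) for index e by brute force."""
    den = sum(np.prod(x[list(S)] / c[list(S)]) * np.linalg.det(A[:, S]) ** 2
              for S in itertools.combinations(range(m), n))
    num = sum(np.prod(x[list(S) + [e]] / c[list(S) + [e]])
              * np.linalg.det(np.column_stack([A[:, S], A[:, e]]))
              * np.linalg.det(np.column_stack([A[:, S], b]))
              for S in itertools.combinations(range(m), n - 1))
    return num / den

print(np.allclose([q_via_18(e) for e in range(m)], q_direct))  # True
```

Note that terms with e ∈ S vanish automatically, since det(A_S|A_e) then has a repeated column.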
We are now ready to estimate the derivative ∂q_e/∂x_i. Assume first that e ≠ i. By the above,
q_e = (x_e/c_e) · (F + G x_i/c_i)/(H + I x_i/c_i), where F, G, H and I are given implicitly by (18). Then

|∂q_e/∂x_i| = | (x_e/c_e) · (F I/c_i − G H/c_i) / (H + I x_i/c_i)^2 | ≤ 2·(m choose n−1)·β^n D^2 ||b/γ_A||_1 / (α/c_max)^n ≤ γ.

For e = i, we have q_e = (G x_e/c_e)/(H + I x_e/c_e), where G, H, and I are given implicitly by (18). Then

|∂q_e/∂x_e| = | (G H/c_e) / (H + I x_e/c_e)^2 | ≤ (m choose n−1)·β^n D^2 ||b/γ_A||_1 / (α/c_max)^n ≤ γ.

Finally, consider x and x̃ with α·1 ≤ x, x̃ ≤ β·1. Let x̄_ℓ = (x̃_1, …, x̃_ℓ, x_{ℓ+1}, …, x_m). Then

||q_e(x)| − |q_e(x̃)|| ≤ |q_e(x) − q_e(x̃)| ≤ Σ_{0≤ℓ<m} |q_e(x̄_ℓ) − q_e(x̄_{ℓ+1})| ≤ γ ||x − x̃||_1.
In the general case where c ≥ 0, we first derive an expression for p′′ similar to (17). Then the equations for
p′ in (12) yield p′ , the equations for qP in (12) yield qP , and finally the equations for qZ in (12) yield qZ .
We are now ready to establish the existence of a solution with domain [0, ∞).
Lemma 3.7. The solution to the undirected dynamics in (1) has domain [0, ∞). Moreover, x(0) e^{−t} ≤
x(t) ≤ D||b/γ_A||_1 · 1 + max(0, x(0) − D||b/γ_A||_1 · 1) e^{−t} for all t.
Proof. Consider any x0 > 0 and any t0 ≥ 0. We first show that there is a positive δ ′ (depending on x0 ) such
that a unique solution x(t) with x(t0 ) = x0 exists for t ∈ (t0 − δ ′ , t0 + δ ′ ). By the Picard-Lindelöf Theorem,
this holds true if the mapping x 7→ |q| − x is continuous and satisfies a Lipschitz condition in a neighborhood
of x0 . Continuity clearly holds. Let ε = mini (x0 )i /2 and let U = { x : ||x − x0 ||∞ < ε }. Then for every
x, x̃ ∈ U and every e
||q_e(x)| − |q_e(x̃)|| ≤ γ ||x − x̃||_1,
where γ is as in Lemma 3.6. Local existence implies the existence of a solution which cannot be extended.
Since q is bounded (Lemma 3.3), x is bounded at all finite times, and hence the solution exists for all t.
The lower bound x(t) ≥ x(0) e^{−t} > 0 holds by Fact 2.2 with A = 0 and α = −1. Since |q_e| ≤ D||b/γ_A||_1
and hence ẋ = |q| − x ≤ D||b/γ_A||_1 · 1 − x, we have x(t) ≤ D||b/γ_A||_1 · 1 + max(0, x(0) − D||b/γ_A||_1 · 1) e^{−t}
by Fact 2.2 with B = D||b/γ_A||_1 · 1 and β = −1.
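With global existence settled, the dynamics can also be explored numerically. The sketch below (a toy undirected shortest-path instance of our own, not from the paper) integrates ẋ = |q| − x with explicit Euler steps; all costs are positive, so q is the plain electrical flow q = R^{−1}A^T(AR^{−1}A^T)^{−1}b:

```python
import numpy as np

# Node-arc incidence matrix of a 3-node graph (row of the sink node dropped):
# edge 0 is the direct edge 0->2 of cost 3; edges 1 and 2 form the path
# 0->1->2 of total cost 2.
A = np.array([[1., 1., 0.],
              [0., -1., 1.]])
b = np.array([1., 0.])               # one unit of flow from node 0 to node 2
c = np.array([3., 1., 1.])

x = np.ones(3)                       # positive start, assumption (F)
dt = 0.01
for _ in range(5000):                # explicit Euler on  x' = |q| - x, up to t = 50
    Rinv = np.diag(x / c)
    p = np.linalg.solve(A @ Rinv @ A.T, b)    # node potentials
    q = Rinv @ A.T @ p                        # electrical (minimum energy) flow
    x += dt * (np.abs(q) - x)

print(np.round(x, 3), c @ x)         # mass concentrates on the cheaper path
```

In line with Theorem 2.9, x(t) approaches the optimal solution (0, 1, 1) of cost 2; the expensive direct edge dies out.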
3.2 LP Duality
The energy E(x) is no longer a Lyapunov function, e.g., if x(0) ≈ 0, x(t) and hence E(x(t)) will grow
initially. We will show that energy suitably scaled is a Lyapunov function. What is the appropriate scaling
factor? In the case of the undirected shortest path problem, [BMV12] used the minimum capacity of any
source-sink cut as a scaling factor. The proper generalization to our situation is to consider the linear
program max{α : Af = αb, |f | ≤ x}, where x is a fixed positive vector. Linear programming duality yields
the corresponding minimization problem which generalizes the minimum cut problem to our situation.
Lemma 3.8. Let x ∈ R^m_{>0} and b ≠ 0. The linear programs

max{ α : Af = αb, |f| ≤ x }    and    min{ |y^T A| x : b^T y = −1 }        (19)

are feasible and have the same objective value. Moreover, there is a finite set Y_A = { d_1, …, d_K } of vectors
d_i ∈ R^m_{≥0} that are independent of x such that the minimum above is equal to C⋆ = min_{d∈Y_A} d^T x. There is a
feasible f with |f| ≤ x/C⋆.^6
Proof. The pair (α, f ) = (0, 0) is a feasible solution for the maximization problem. Since b 6= 0, there exists
y with bT y = −1 and thus both problems are feasible. The dual of max{α : Af − αb = 0, f ≤ x, −f ≤ x}
has unconstrained variables y ∈ Rn and non-negative variables z + , z − ∈ Rm and reads
min{xT (z + + z − ) : −bT y = 1, AT y + z + − z − = 0, z + , z − ≥ 0}.
(20)
From z − = AT y + z + , z + ≥ 0, z − ≥ 0 and x > 0, we conclude min(z + , z − ) = 0 in an optimal solution.
Thus z − = max(0, AT y) and z + = max(0, −AT y) and hence z + + z − = |AT y| in an optimal dual solution.
Therefore, (20) and the right LP in (19) have the same objective value.
We next show that the dual attains its minimum at a vertex of the feasible set. For this it suffices to show
that its feasible set contains no line. Assume it does. Then there are vectors d = (y_1, z_1^+, z_1^−), d nonzero,
and p = (y_0, z_0^+, z_0^−) such that (y, z^+, z^−) = p + λd = (y_0 + λy_1, z_0^+ + λz_1^+, z_0^− + λz_1^−) is feasible for all λ ∈ R.
Thus z_1^+ = z_1^− = 0: if either z_1^+ or z_1^− were nonzero, then z_0^+ + λz_1^+ or z_0^− + λz_1^− would
6 In the undirected shortest path problem, the d's are the incidence vectors of the undirected source-sink cuts. Let S be any
set of vertices containing s_0 but not s_1, and let 1_S be its associated indicator vector. The cut corresponding to S contains the
edges having exactly one endpoint in S. Its indicator vector is d^S = |A^T 1_S|. Then d^S_e = 1 iff |S ∩ {u, v}| = 1, where e = (u, v)
or e = (v, u), and d^S_e = 0 otherwise. For a vector x ≥ 0, (d^S)^T x is the capacity of the source-sink cut (S, V∖S). In this setting,
C⋆ is the value of a minimum cut.
have a negative component for some λ. Then A^T y + z^+ − z^− = 0 implies A^T y_1 = 0. Since A has full
row rank, y_1 = 0. Thus the dual contains no line and the minimum is attained at a vertex of its feasible
region. The feasible region of the dual does not depend on x.
Let (y^1, z_1^+, z_1^−), …, (y^K, z_K^+, z_K^−) be the vertices of (20), and let Y_A = { |A^T y^1|, …, |A^T y^K| }. Then

min_{d∈Y_A} d^T x = min{ x^T (z^+ + z^−) : −b^T y = 1, A^T y + z^+ − z^− = 0, z^+, z^− ≥ 0 }
                  = min{ |y^T A| x : b^T y = −1 }.
We finally show that there is a feasible f with |f| ≤ x/C⋆. Let x' := x/C⋆. Then x' > 0 and
min_{d∈Y_A} d^T x' = min_{d∈Y_A} d^T x / C⋆ = C⋆/C⋆ = 1, and thus the right LP in (19) with x = x' has objective
value 1. Hence, the left LP has objective value 1 and there is a feasible f with |f| ≤ x'.
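Both programs in (19) (the dual in its explicit form (20)) can be handed to an off-the-shelf LP solver. The following sketch, run on a hypothetical toy instance of our own, checks that the two objective values coincide; it assumes scipy is available:

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1., 1., 0.],
              [0., -1., 1.]])
b = np.array([1., 0.])
x = np.array([1., 0.5, 0.5])
n, m = A.shape

# Left LP in (19): max alpha s.t. Af = alpha*b, |f| <= x  (variables f, alpha).
primal = linprog(c=[0.] * m + [-1.],
                 A_eq=np.hstack([A, -b[:, None]]), b_eq=np.zeros(n),
                 bounds=[(-xe, xe) for xe in x] + [(0, None)])

# Dual in the form (20): min x^T(z+ + z-) s.t. -b^T y = 1, A^T y + z+ - z- = 0.
A_eq = np.zeros((1 + m, n + 2 * m))
A_eq[0, :n] = -b
A_eq[1:, :n] = A.T
A_eq[1:, n:n + m] = np.eye(m)
A_eq[1:, n + m:] = -np.eye(m)
dual = linprog(c=[0.] * n + list(x) + list(x),
               A_eq=A_eq, b_eq=[1.] + [0.] * m,
               bounds=[(None, None)] * n + [(0, None)] * (2 * m))

print(-primal.fun, dual.fun)         # equal by LP duality: C* = 1.5
```

For this instance the common value C⋆ = 1.5 is the capacity of a minimum source-sink cut of the underlying graph.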
3.3 Convergence to Dominance
In the network setting, an important role is played by the set of edge capacity vectors that support a feasible
flow. In the LP setting, we generalize this notion to the set of dominating states, which is defined as
Xdom := {x ∈ Rm : ∃ feasible f : |f | ≤ x}.
An alternative characterization, using the set YA from Lemma 3.8, is
X_1 := { x ∈ R^m_{≥0} : d^T x ≥ 1 for all d ∈ Y_A }.
We prove that Xdom = X1 and that the set X1 is attracting in the sense that the distance between x(t) and
X1 goes to zero, as t increases.
Lemma 3.9. 1. It holds that X_dom = X_1. Moreover, lim_{t→∞} dist(x(t), X_1) = 0, where dist(x, X_1) is the
Euclidean distance between x and X_1.

2. If x(t_0) ∈ X_1, then x(t) ∈ X_1 for all t ≥ t_0. For all sufficiently large t, x(t) ∈ X_{1/2} := { x ∈ R^m_{≥0} :
d^T x ≥ 1/2 for all d ∈ Y_A }, and if x ∈ X_{1/2} then there is a feasible f with |f| ≤ 2x.
Proof. 1. If x ∈ X_1, then d^T x ≥ 1 for all d ∈ Y_A and hence Lemma 3.8 implies the existence of a feasible
solution f with |f| ≤ x. Conversely, if x ∈ X_dom, then there is a feasible f with |f| ≤ x. Thus d^T x ≥ 1
for all d ∈ Y_A and hence x ∈ X_1. By the proof of Lemma 3.8, for any d ∈ Y_A, there is a y such that
d = |A^T y| and b^T y = −1. Let Y(t) = d^T x. Then

Ẏ = |y^T A| ẋ = |y^T A| (|q| − x) ≥ |y^T A q| − Y = |y^T b| − Y = 1 − Y.
Thus for any t_0 and t ≥ t_0, Y(t) ≥ 1 + (Y(t_0) − 1) e^{−(t−t_0)} by Fact 2.2 applied with A = 1
and α = −1. In particular, lim inf_{t→∞} Y(t) ≥ 1. Thus lim inf_{t→∞} min_{d∈Y_A} d^T x ≥ 1 and hence
lim_{t→∞} dist(x(t), X_1) = 0.
2. Moreover, if Y(t_0) ≥ 1, then Y(t) ≥ 1 for all t ≥ t_0. Hence x(t_0) ∈ X_1 implies x(t) ∈ X_1 for all t ≥ t_0.
Since x(t) converges to X_1, x(t) ∈ X_{1/2} for all sufficiently large t. If x ∈ X_{1/2}, there is f such that
Af = (1/2) b and |f| ≤ x. Thus 2f is feasible and |2f| ≤ 2x.
The next lemma summarizes simple bounds on the values of the resistances r, potentials p and states x that
hold for sufficiently large t. Recall that P = { e ∈ [m] : c_e > 0 } and Z = { e ∈ [m] : c_e = 0 }, see (10).

Lemma 3.10. 1. For sufficiently large t, it holds that r_e ≥ c_e/(2D||b/γ_A||_1), b^T p ≤ 8D||b/γ_A||_1 ||c||_1 and
|A_e^T p| ≤ 8D^2 ||b||_1 ||c||_1 for all e.

2. For all e, it holds that ẋ_e/x_e ≥ −1, and for all e ∈ P, it holds that ẋ_e/x_e ≤ 8D^2 ||b||_1 ||c||_1 / c_min.

3. There is a positive constant C such that for all t ≥ t_0, there is a feasible f (depending on t) such that
x_e(t) ≥ C for all indices e in the support of f.
Proof. 1. By Lemma 3.7, x_e(t) ≤ 2D||b/γ_A||_1 for all sufficiently large t. It follows that r_e = c_e/x_e ≥
c_e/(2D||b/γ_A||_1). Due to Lemma 3.9, for large enough t, there is a feasible flow f with |f| ≤ 2x. Together
with x_e(t) ≤ 2D||b/γ_A||_1, it follows that

b^T p = E(q) ≤ E(2x) = 4 c^T x ≤ 8D||b/γ_A||_1 ||c||_1.

Now, orient A according to q and consider any index e'. Recall that for all indices e, we have A_e^T p = 0
if e ∈ Z, and q_e = (x_e/c_e) · A_e^T p if e ∈ P. Thus A_e^T p ≥ 0 for all e. If e' ∈ Z, or e' ∈ P and q_{e'} = 0, the
claim is obvious. So assume e' ∈ P and q_{e'} > 0. Since q is a convex combination of q-sign-compatible
basic feasible solutions, there is a basic feasible solution f with f ≥ 0 and f_{e'} > 0. By Lemma 3.1,
f_{e'} ≥ 1/(Dγ_A). Therefore

f_{e'} A_{e'}^T p ≤ Σ_e f_e A_e^T p = b^T p ≤ 8D||b/γ_A||_1 ||c||_1

for all sufficiently large t. The inequality follows from f_e ≥ 0 and A_e^T p ≥ 0 for all e. Thus A_{e'}^T p ≤
8D^2 ||b||_1 ||c||_1 for all sufficiently large t.
2. We have ẋ_e/x_e = (|q_e| − x_e)/x_e ≥ −1 for all e. For e with c_e > 0,

ẋ_e/x_e = (|q_e| − x_e)/x_e ≤ |q_e|/x_e = |A_e^T p|/c_e ≤ 8D^2 ||b||_1 ||c||_1 / c_min.
3. Let t_0 be such that d^T x(t) ≥ 1/2 for all d ∈ Y_A and t ≥ t_0. Then for all t ≥ t_0, there is f such that
Af = (1/2) b and |f| ≤ x(t); f may depend on t. By Lemma 2.1, we can write 2f as a convex combination of
f-sign-compatible basic feasible solutions (at most m of them) and an f-sign-compatible solution in the
kernel of A. Dropping the solution in the kernel of A leaves us with a solution which is still dominated
by x.

For every e ∈ [m] with f_e ≠ 0, there is a basic feasible solution g used in the convex decomposition
such that 2|f_e| ≥ |g_e| > 0. By Lemma 3.1, every nonzero component of g is at least
1/(Dγ_A). We conclude that x_e ≥ 1/(2Dγ_A) for every e in the support of g.
3.4 The Equilibrium Points
We next characterize the set of equilibrium points

F = { x ∈ R^m_{≥0} : |q| = x }.        (21)
Let us first elaborate on the special case of the undirected shortest path problem. Here the equilibria are the
flows of value one from source to sink in a network formed by undirected source-sink paths of the same length.
This can be seen as follows. Consider any x ≥ 0 and assume supp(x) is a network of undirected source-sink
paths of the same length. Call this network N . Assign to each node u, a potential pu equal to the length of the
shortest undirected path from the sink s1 to u. These potentials are well-defined as all paths from s1 to u in
N must have the same length. For an edge e = (u, v) in N, we have q_e = (x_e/c_e)(p_u − p_v) = (x_e/c_e)·c_e = x_e, i.e.,
q = x is the electrical flow with respect to the resistances c_e/x_e. Conversely, if x is an equilibrium point and
the network is oriented such that q ≥ 0, we have x_e = q_e = (x_e/c_e)(p_u − p_v) for all edges e = (u, v) ∈ supp(x).
Thus c_e = p_u − p_v, and this is only possible if for every node u, all paths from u to the sink have the same
length. Thus supp(x) must be a network of undirected source-sink paths of the same length. We next
generalize this reasoning.
Theorem 3.11. If x = |q| is an equilibrium point and the columns of A are oriented such that q ≥ 0, then
all feasible solutions f with supp(f ) ⊆ supp(x) satisfy cT f = cT x. Conversely, if x = |q| for a feasible q, A
is oriented such that q ≥ 0, and all feasible solutions f with supp(f ) ⊆ supp(x) satisfy cT f = cT x, then x is
an equilibrium point.
Proof. If x is an equilibrium point, |q_e| = x_e for every e. By changing the signs of some columns of A, we
may assume q ≥ 0, i.e., q = x. Let p be the potential with respect to x. For every index e ∈ P in the support
of x, since c_e > 0 we have q_e = (x_e/c_e) A_e^T p and hence c_e = A_e^T p. Further, for the indices e ∈ Z in the support of
x, we have c_e = 0 = A_e^T p due to the second block of equations on the right hand side in (11). Let f be any
feasible solution whose support is contained in the support of x. Then the first part follows by

Σ_{e∈supp(f)} c_e f_e = Σ_{e∈supp(f)} f_e A_e^T p = b^T p = E(q) = E(x) = cost(x).
For the second part, we misuse notation and use A to also denote the submatrix of the constraint matrix
indexed by the columns in the support of x. We may assume that the rows of A are independent. Otherwise,
we simply drop redundant constraints. We may assume q ≥ 0; otherwise we simply change the sign of some
columns of A. Then x is feasible. Let AB be a square non-singular submatrix of A and let AN consist of the
remaining columns of A. The feasible solutions f with supp(f) ⊆ supp(x) satisfy A_B f_B + A_N f_N = b and
hence f_B = A_B^{−1} (b − A_N f_N). Then

c^T f = c_B^T f_B + c_N^T f_N = c_B^T A_B^{−1} b + ( c_N^T − c_B^T A_B^{−1} A_N ) f_N.
Since, by assumption, c^T f is constant for all feasible solutions whose support is contained in the support of
x, we must have c_N = A_N^T [A_B^{−1}]^T c_B. Let p = [A_B^{−1}]^T c_B. Then A_B^T p = c_B and A_N^T p = A_N^T [A_B^{−1}]^T c_B = c_N,
and hence Rx = A^T p. Thus the pair (x, p) satisfies the right hand side of (11). Since x is feasible, it also
satisfies the left hand side of (11). Therefore, x is the minimum energy solution with respect to x.
Corollary 3.12. Let g be a basic feasible solution. Then |g| is an equilibrium point.
Proof. Let g be a basic feasible solution. Orient A such that g ≥ 0. Since g is basic, there is a B ⊆ [m] such
that g = (g_B, g_N) = (A_B^{−1} b, 0). Consider any feasible solution f with supp(f) ⊆ supp(g). Then f = (f_B, 0)
and hence b = Af = A_B f_B. Therefore, f_B = g_B and hence c^T f = c^T g. Thus x = |g| is an equilibrium
point by Theorem 3.11.
This characterization of equilibria has an interesting consequence.
Lemma 3.13. The set L := {cT x : x ∈ F } of costs of equilibria is finite.
Proof. If x is an equilibrium, x = |q|, where q is the minimum energy solution with respect to x. Orient A
such that q ≥ 0. Then by Theorem 3.11, cT f = cT x for all feasible solutions f with supp(f ) ⊆ supp(x). In
particular, this holds true for all such basic feasible solutions f . Thus L is a subset of the set of costs of all
basic feasible solutions, which is a finite set.
We conclude this part by showing that the optimal solutions of the undirected linear program (2) are
equilibria.
Theorem 3.14. Let x be an optimal solution to (2). Then x is an equilibrium.
Proof. By definition, there is a feasible f with |f | = x. Let us reorient the columns of A such that f ≥ 0
and let us delete all columns e of A with fe = 0. Consider any feasible g with supp(g) ⊆ supp(x). We claim
that cT x = cT g. Assume otherwise and consider the point y = x + λ(g − x). If |λ| is sufficiently small, y ≥ 0.
Furthermore, y is feasible and cT y = cT x + λ(cT g − cT x). If cT g 6= cT x, x is not an optimal solution to (2).
The claim now follows from Theorem 3.11.
3.5 Convergence
In order to show convergence, we construct a Lyapunov function. The following functions play a crucial
role in our analysis. Let C_d = d^T x for d ∈ Y_A, and recall that C⋆ = min_{d∈Y_A} d^T x denotes the optimum.
Moreover, we define

h(t) := Σ_e r_e |q_e| (x_e/C⋆) − E(x/C⋆)    and    V_d := c^T x / C_d    for every d ∈ Y_A.
Theorem 3.15. (1) For every d ∈ Y_A, Ċ_d ≥ 1 − C_d. Thus, if C_d < 1 then Ċ_d > 0.

(2) If x(t) ∈ X_1, then (d/dt) cost(x(t)) ≤ 0 with equality if and only if x = |q|.

(3) Let d ∈ Y_A be such that C⋆ = d^T x at time t. Then it holds that V̇_d ≤ h(t).

(4) It holds that h(t) ≤ 0 with equality if and only if |q| = x/C⋆.
Proof. 1. Recall that for d ∈ Y_A, there is a y such that b^T y = −1 and d = |A^T y|. Thus Ċ_d = d^T (|q| − x) ≥
|y^T A q| − C_d = 1 − C_d, and hence Ċ_d > 0 whenever C_d < 1.

2. Remember that E(x) = cost(x) and that x(t) ∈ X_1 implies that there is a feasible f with |f| ≤ x. Thus
E(q) ≤ E(f) ≤ E(x). Let R be the diagonal matrix of entries c_e/x_e. Then

(d/dt) cost(x) = c^T (|q| − x)                                  by (1)
              = x^T R^{1/2} R^{1/2} |q| − x^T R x               since c = Rx
              ≤ (q^T R q)^{1/2} (x^T R x)^{1/2} − x^T R x       by Cauchy-Schwarz
              ≤ 0                                               since E(q) ≤ E(x).

If the derivative is zero, both inequalities above have to be equalities. This is only possible if the
vectors |q| and x are parallel and E(q) = E(x). Let λ be such that |q| = λx. Then E(q) = Σ_e (c_e/x_e) q_e^2 =
λ^2 Σ_e c_e x_e = λ^2 E(x). Since E(x) > 0, this implies λ = 1.
3. By definition of d, C⋆ = C_d. By the first two items, we have Ċ⋆ = d^T |q| − C⋆ and (d/dt) cost(x) =
c^T |q| − cost(x). Thus

(d/dt) (cost(x)/C⋆) = [ C⋆ (d/dt) cost(x) − Ċ⋆ cost(x) ] / C⋆^2
                    = [ C⋆ (c^T |q| − cost(x)) − (d^T |q| − C⋆) cost(x) ] / C⋆^2
                    = [ C⋆ · c^T |q| − d^T |q| · c^T x ] / C⋆^2
                    ≤ Σ_e r_e |q_e| (x_e/C⋆) − Σ_e r_e (x_e/C⋆)^2 = h(t),

where we used r_e = c_e/x_e and hence c^T |q| = Σ_e r_e x_e |q_e|, c^T x = E(x), and d^T |q| ≥ |y^T A q| = 1 since
d = |y^T A| for some y with b^T y = −1.
4. We have

Σ_e r_e |q_e| (x_e/C⋆) = Σ_e ( r_e^{1/2} x_e/C⋆ )( r_e^{1/2} |q_e| ) ≤ ( Σ_e r_e (x_e/C⋆)^2 )^{1/2} ( Σ_e r_e q_e^2 )^{1/2} = E(x/C⋆)^{1/2} E(q)^{1/2}

by Cauchy-Schwarz. Since h(t) = Σ_e r_e |q_e| (x_e/C⋆) − E(x/C⋆) by definition, it follows that

h(t) ≤ E(x/C⋆)^{1/2} · E(q)^{1/2} − E(x/C⋆) = E(x/C⋆)^{1/2} ( E(q)^{1/2} − E(x/C⋆)^{1/2} ) ≤ 0

since x/C⋆ dominates a feasible solution and hence E(q) ≤ E(x/C⋆). If h(t) = 0, we must have equality
in the application of Cauchy-Schwarz, i.e., the vectors x/C⋆ and |q| must be parallel, and we must have
E(q) = E(x/C⋆) as in the proof of part 2.
We now show convergence to the set of equilibrium points. We need the following technical lemma
from [BMV12].

Lemma 3.16 (Lemma 9 in [BMV12]). Let f(t) = max_{d∈Y_A} f_d(t), where each f_d is continuous and differentiable. If ḟ(t) exists, then there is a d ∈ Y_A such that f(t) = f_d(t) and ḟ(t) = ḟ_d(t).
Theorem 3.17. All trajectories converge to the set F of equilibrium points.
Proof. We distinguish cases according to whether the trajectory ever enters X_1 or not. If the trajectory
enters X_1, say x(t_0) ∈ X_1, then (d/dt) cost(x) ≤ 0 for all t ≥ t_0 with equality only if x = |q|. Thus the trajectory
converges to the set of fixed points. If the trajectory never enters X_1, consider V = max_{d∈Y_A} (V_d + 1 − C_d).
We show that V̇ exists for almost all t. Moreover, if V̇(t) exists, then V̇(t) ≤ 0 with equality if and only if
|q_e| = x_e for all e. V is Lipschitz-continuous as the maximum of a finite number of continuously
differentiable functions. Since V is Lipschitz-continuous, the set of t's where V̇(t) does not exist has zero
Lebesgue measure (see for example [CLSW98, Ch. 3]). If V̇(t) exists, we have V̇(t) = V̇_d(t) − Ċ_d(t) for some
d ∈ Y_A according to Lemma 3.16. Then, it holds that V̇(t) ≤ h(t) − (1 − C_d) ≤ 0. Thus x(t) converges to
the set

{ x ∈ R^m_{≥0} : V̇ = 0 } = { x ∈ R^m_{≥0} : |q| = x/C⋆ and C⋆ = 1 } = { x ∈ R^m_{≥0} : |q| = x }.
At this point, we know that all trajectories x(t) converge to F. Our next goal is to show that c^T x(t)
converges to the cost of an optimum solution of (2) and that |q| − x converges to zero. We are only able to
show the latter for the indices e ∈ P, i.e., those with c_e > 0.
3.6 Details of the Convergence Process
In the argument to follow, we will encounter the following situation several times. We have a non-negative
function f(t) ≥ 0 and we know that ∫_0^∞ f(t) dt is finite. We want to conclude that f(t) converges to zero
for t → ∞. This holds true if f is Lipschitz continuous. The proof of the following lemma is very
similar to the proof of [BMV12, Lemma 11]; however, in our case we apply the local Lipschitz condition
that we established in Lemma 3.6.

Lemma 3.18. Let f(t) ≥ 0 for all t. If ∫_0^∞ f(t) dt is finite and f(t) is Lipschitz-continuous, i.e., for every
ε > 0 there is a δ > 0 such that |f(t') − f(t)| ≤ ε for all t' ∈ [t, t + δ], then f(t) converges to zero as t goes
to infinity. The functions t ↦ x^T R |q| − x^T R x = c^T |q| − c^T x and t ↦ h(t) are Lipschitz continuous.
Proof. If f(t) does not converge to zero, there is an ε > 0 and an infinite unbounded sequence t_1, t_2, … such
that f(t_i) ≥ ε for all i. Since f is Lipschitz continuous, there is a δ > 0 such that f(t') ≥ ε/2 for t' ∈ [t_i, t_i + δ]
and all i. Hence, the integral ∫_0^∞ f(t) dt is unbounded, a contradiction.
Since ẋ_e is continuous and bounded (by Lemma 3.7), x_e is Lipschitz-continuous. Thus, it is enough to
show that q_e is Lipschitz-continuous for all e. Since q_Z (recall that Z = { e : c_e = 0 } and P = [m] ∖ Z) is an
affine function of q_P, it suffices to establish the claim for e ∈ P. So let e ∈ P be such that c_e > 0. First, we
claim that x_e(t + ε) ≤ (1 + 2Kε) x_e(t) for all ε ≤ 1/(4K), where K = 8D^2 ||b||_1 ||c||_1 / c_min. Assume that this is not
the case. Let

ε = inf{ δ ≤ 1/(4K) : x_e(t + δ) > (1 + 2Kδ) x_e(t) };

then ε > 0 (since ẋ_e(t) ≤ K x_e(t) by Lemma 3.10) and, by continuity, x_e(t + ε) ≥ (1 + 2Kε) x_e(t). There
must be t' ∈ [t, t + ε] such that ẋ_e(t') = 2K x_e(t). On the other hand,

ẋ_e(t') ≤ K x_e(t') ≤ K(1 + 2Kε) x_e(t) ≤ K(1 + 2K/(4K)) x_e(t) < 2K x_e(t),

which is a contradiction. Thus x_e(t + ε) ≤ (1 + 2Kε) x_e(t) for all ε ≤ 1/(4K). Similarly, x_e(t + ε) ≥ (1 − 2Kε) x_e(t).
Now, let α = (1 − 2Kε) x_e and β = (1 + 2Kε) x_e. Then

||q_e(t + ε)| − |q_e(t)|| ≤ M ||x(t + ε) − x(t)||_1 ≤ M m (4Kε) x_e ≤ 8 ε M m K D ||b/γ_A||_1,
since xe ≤ 2D||b/γA ||1 for sufficiently large t and where M is as in Lemma 3.6. Since C⋆ is at least 1/2 for
all sufficiently large t, the division by C⋆ and C⋆2 in the definition of h(t) does not affect the claim.
Lemma 3.19. For all e ∈ [m] of positive cost, it holds that |xe − |qe || → 0 as t goes to infinity.
Proof. For a trajectory ultimately running in X_1, we showed (d/dt) cost(x) ≤ x^T R |q| − x^T R x ≤ 0 with equality if
and only if x = |q|. Also, E(q) ≤ E(x), since x dominates a feasible solution. Furthermore, x^T R |q| − x^T R x
goes to zero using Lemma 3.18. Thus

Σ_e r_e (x_e − |q_e|)^2 = Σ_e r_e x_e^2 + Σ_e r_e q_e^2 − 2 Σ_e r_e x_e |q_e| ≤ 2 ( Σ_e r_e x_e^2 − Σ_e r_e x_e |q_e| )
goes to zero. Next observe that there is a constant C such that x_e(t) ≤ C for all e and t, as a result of
Lemma 3.7. Also c_min > 0 and hence r_e ≥ c_min/C for every e of positive cost. Thus Σ_{e∈P} (x_e − |q_e|)^2 ≤
(C/c_min) · Σ_e r_e (x_e − |q_e|)^2, and hence |x_e − |q_e|| → 0 for every e with positive cost. For trajectories outside X_1,
we argue about ||q_e| − x_e/C⋆| and use C⋆ → 1, namely

Σ_e r_e (x_e/C⋆ − |q_e|)^2 ≤ 2 ( Σ_e r_e (x_e/C⋆)^2 − Σ_e r_e (x_e/C⋆) |q_e| ) → 0.
Note that the above does not say anything about the indices e ∈ Z (with ce = 0). Recall that AP qP +
AZ qZ = b and that the columns of AZ are independent. Thus, qZ is uniquely determined by qP . For the
undirected shortest path problem, the potential difference pT b between source and sink converges to the
length of a shortest source-sink path. If an edge with positive cost is used by some shortest undirected path,
then no shortest undirected path uses it with the opposite direction. We prove the natural generalizations.
Let OP T be the set of optimal solutions to (2) and let Eopt = ∪x∈OP T supp(x) be the set of columns
used in some optimal solution. The columns of positive cost in Eopt can be consistently oriented as the
following Lemma shows.
Lemma 3.20. Let x∗1 and x∗2 be optimal solutions to (2) and let f and g be feasible solutions with |f | = x∗1
and |g| = x∗2 . Then there is no e such that fe ge < 0 and ce > 0.
Proof. Assume otherwise. Then |ge − fe| = |ge| + |fe| > 0. Consider h = (ge f − fe g)/(ge − fe). Then Ah = (ge Af − fe Ag)/(ge − fe) = b and h is feasible. Also, he = (ge fe − fe ge)/(ge − fe) = 0 and for every index e′, it holds that
|he′| = |ge fe′ − fe ge′| / |ge − fe| ≤ (|ge| |fe′| + |fe| |ge′|) / (|ge| + |fe|),
and hence
cost(h) < (|ge| cost(f) + |fe| cost(g)) / (|ge| + |fe|) = (|ge| cost(x∗1) + |fe| cost(x∗2)) / (|ge| + |fe|) = cost(x∗1),
a contradiction to the optimality of x∗1 and x∗2.
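The interpolation used in this proof is easy to check numerically (a minimal sketch; the 2×3 instance below is invented for illustration):

```python
import numpy as np

# Lemma 3.20's interpolation: if Af = Ag = b and f_e * g_e < 0, then
# h = (g_e * f - f_e * g) / (g_e - f_e) satisfies Ah = b and h_e = 0.
A = np.array([[1., 1., 0.], [0., 1., 1.]])
b = np.array([1., 1.])
f = np.array([1., 0., 1.])            # feasible: Af = b
g = np.array([-1., 2., -1.])          # feasible, opposite sign at e = 0
e = 0
h = (g[e] * f - f[e] * g) / (g[e] - f[e])
assert np.allclose(A @ h, b)          # h is again feasible
assert abs(h[e]) < 1e-12              # and avoids column e entirely
```

The convexity of the cost then forces the strict inequality in the proof whenever ce > 0.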
By the preceding Lemma, we can orient A such that fe ≥ 0 whenever |f | is an optimal solution to (2)
and ce > 0. We then call A positively oriented.
Lemma 3.21. It holds that pT b converges to the cost of an optimum solution of (2). If A is positively
oriented, then lim inf t→∞ ATe p ≥ 0 for all e.
Proof. Let x∗ be an optimal solution of (2). We first show convergence to a point in L and then convergence to cT x∗. Let ε > 0 be arbitrary. Consider any time t ≥ t0, where t0 and C are as in Lemma 3.10 and moreover ||qe| − xe| ≤ Cε/cmax for every e ∈ P. Then xe ≥ C for all indices e in the support of some basic feasible solution f. For every e ∈ P, we have qe = (xe/ce) ATe p. We also assume q ≥ 0 by possibly reorienting columns of A. Hence
|ce − ATe p| = |1 − qe/xe| ce = (|xe − qe|/xe) ce ≤ (cmax/C) |qe − xe| ≤ ε.
For indices e ∈ Z, we have ATe p = 0 = ce. Since ||f||∞ ≤ D||b/γA||1 (Lemma 3.1), we conclude
cT f − pT b = ∑_e (ce − pT Ae) fe ≤ ε ∑_{e∈supp(f)} |fe| ≤ ε · mD||b/γA||1.
Since the set L is finite, we can let ε > 0 be smaller than half the minimal distance between elements in
L. By the preceding paragraph, there is for all sufficiently large t, a basic feasible solution f such that
|cT f − bT p| ≤ ε. Since bT p is a continuous function of time, the value cT f must eventually become constant. We have now shown that bT p converges to an element in L. We will next show that bT p converges to the optimum cost. Let x∗ be an optimum solution to (2) and let W = ∑_e x∗e ce ln xe. Since x(t) is bounded, W is bounded. We assume
that A is positively oriented, thus there is a feasible f ∗ with |f ∗ | = x∗ and fe∗ ≥ 0 whenever ce > 0. By
reorienting zero cost columns, we may assume fe∗ ≥ 0 for all e. Then Ax∗ = b. We have
Ẇ = ∑_e x∗e ce (|qe| − xe)/xe
= ∑_{e: ce>0} x∗e |ATe p| − cost(x∗)        (since qe = (xe/ce) ATe p whenever ce > 0)
= ∑_e x∗e |ATe p| − cost(x∗)                (since ATe p = 0 whenever ce = 0)
= ∑_e x∗e (|ATe p| − ATe p) + bT p − cost(x∗)
and hence bT p − cost(x∗ ) must converge to zero; note that bT p is Lipschitz continuous in t.
Similarly, |ATe p| − ATe p must converge to zero whenever x∗e > 0. This implies lim inf ATe p ≥ 0. Assume otherwise, i.e., there is an ε > 0 such that ATe p < −ε for arbitrarily large t. Since p is Lipschitz-continuous in t, there is a δ > 0 such that ATe p < −ε/2 on infinitely many disjoint intervals of length δ. In these intervals, |ATe p| − ATe p ≥ ε and hence W must grow beyond any bound, a contradiction.
Corollary 3.22. E(x) and cost(x) converge to cT x∗, whereas x and |q| converge to OP T. If the optimum solution is unique, x and |q| converge to it. Moreover, if e 6∈ Eopt, then xe and |qe| converge to zero.
Proof. The first part follows from E(x) = cost(x) = bT p and the preceding Lemma. Thus x and q converge to the set F of equilibrium points, see (21), that are optimum solutions to (2). Since every optimum solution is an equilibrium point by Theorem 3.14, x and q converge to OP T. For e 6∈ Eopt, fe = 0 for every f ∈ F ∩ OP T. Since x and |q| converge to F ∩ OP T, xe and |qe| converge to zero for every e 6∈ Eopt.
4 Improved Convergence Results: Discrete Directed Dynamics
In this section, we present in its full generality our main technical result on the Physarum dynamics (6).
4.1 Overview
Inspired by the max-flow min-cut theorem, we consider the following primal-dual pair of linear programs:
the primal LP is given by max { t : Af = t · b; 0 ≤ f ≤ x } in variables f ∈ Rm and t ∈ R, and its dual LP reads min { xT z : z ≥ 0; z ≥ AT y; bT y = 1 } in variables z ∈ Rm and y ∈ Rn. Since the dual feasible region does not contain a line and the minimum is bounded, the optimum is attained at a vertex, and in an optimum solution we have z = max{0, AT y}. Let V be the set of vertices of the dual feasible region, and let Y := { y : (z, y) ∈ V } be the set of their projections on y-space. Then, the dual optimum is given by min{ max{0, yT A} · x : y ∈ Y }. The set of strongly dominating capacity vectors x is defined as
X := { x ∈ R^m_{>0} : yT Ax > 0 for all y ∈ Y }.7
Note that X contains the set of all scaled feasible solutions {x = tf : Af = b, f ≥ 0, t > 0}.
We next discuss the choice of step size. For y ∈ Y and capacity vector x, let α(y, x) := y T Ax. Further,
let α(x) := min { α(y, x) : y ∈ Y } and α(ℓ) := α(x(ℓ) ). Then, for any x ∈ X there is a feasible f such that
0 ≤ f ≤ x/α(x), see Lemma 4.8. In particular, if x is feasible then α(x) = 1, since α(y, x) = 1 for all y ∈ Y .
We partition the Physarum dynamics (6) into the following five regimes and define for each regime a fixed
step size, see Subsection 4.3.
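To make the iterated object concrete, a single step of the dynamics (6) amounts to one linear solve; the following is a minimal sketch (the tiny full-row-rank instance and the step size are invented for illustration, not taken from the paper):

```python
import numpy as np

def physarum_step(A, b, c, x, h):
    """One step of the discrete directed Physarum dynamics (6):
    solve L p = b with L = A diag(x/c) A^T, set q = diag(x/c) A^T p
    (so that A q = b), and move x <- (1 - h) x + h q."""
    W = np.diag(x / c)                 # R^{-1}, where R = diag(c) X^{-1}
    L = A @ W @ A.T                    # invertible since A has full row rank
    p = np.linalg.solve(L, b)          # potentials p
    q = W @ (A.T @ p)                  # "flow" q with A q = b
    return (1 - h) * x + h * q, q, p

# tiny full-row-rank instance (illustrative)
A = np.array([[1., 1., 0.], [0., 1., 1.]])
b = np.array([1., 1.])
c = np.array([1., 2., 1.])
x = np.array([0.5, 2.0, 1.0])
h = 0.1
x1, q, p = physarum_step(A, b, c, x, h)
assert np.allclose(A @ q, b)                            # q is feasible
assert np.allclose(b - A @ x1, (1 - h) * (b - A @ x))   # residual contraction
```

The last assertion is exactly the contraction r(k+1) = (1 − h) r(k) established in Lemma 4.7.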
7 In the shortest path problem (recall that b = e1 − en) the set Y consists of all y ∈ {−1, +1}^n such that y1 = 1 = −yn, i.e., y encodes a cut with S = { i : yi = +1 } and S̄ = { i : yi = −1 }. The condition yT Ax > 0 translates into ∑_{a∈E(S,S̄)} xa − ∑_{a∈E(S̄,S)} xa > 0, i.e., every source-sink cut must have positive directed capacity.
Corollary 4.1. The Physarum dynamics (6) initialized with x(0) ∈ X and a step size h satisfies:
• If α(0) = 1, we work with h ≤ h0 and have α(ℓ) = 1 for all ℓ.
• If 1/2 ≤ α(0) < 1, we work with h ≤ h0 /2 and have 1 − δ ≤ α(ℓ) < 1 for ℓ ≥ h−1 log(1/2δ) and δ > 0.
• If 1 < α(0) ≤ 1/h0 , we work with h ≤ h0 and have 1 < α(ℓ) ≤ 1 + δ for ℓ ≥ h−1 · log(1/δh0 ) and δ > 0.
• If 0 < α(0) < 1/2, we work with h ≤ α(0) h0 and have 1/2 ≤ α(ℓ) < 1 for ℓ ≥ 1/h.
• If 1/h0 < α(0) , we work with h ≤ 1/4 and have 1 < α(ℓ) ≤ 1/h0 for ℓ = ⌊log1/(1−h) h0 (α(0) −1)/(1−h0)⌋.
In each regime, we have 1 − α(ℓ+1) = (1 − h)(1 − α(ℓ) ).
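The regime thresholds and the contraction of 1 − α(ℓ) can be mirrored directly (a sketch; the returned step sizes are illustrative choices at the corollary's upper bounds):

```python
def alpha_after(alpha0, h, k):
    """alpha after k steps of a fixed-step regime, from the recursion
    1 - alpha^{(l+1)} = (1 - h)(1 - alpha^{(l)}) of Corollary 4.1."""
    return 1 - (1 - h) ** k * (1 - alpha0)

def regime_step_size(alpha0, h0):
    """Pick a fixed step size for the matching regime (illustrative choices
    at the corollary's upper bounds)."""
    if alpha0 > 1 / h0:
        return 0.25                    # h <= 1/4
    if 0 < alpha0 < 0.5:
        return alpha0 * h0             # h <= alpha(0) * h0
    if 0.5 <= alpha0 < 1:
        return h0 / 2                  # h <= h0 / 2
    return h0                          # alpha0 in [1, 1/h0]

# alpha converges monotonically to 1 under the recursion
assert abs(alpha_after(2.0, 0.25, 60) - 1) < 1e-6
assert 0.5 < alpha_after(0.5, 0.1, 10) < 1
```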
We give now the full version of Theorem 1.4 which applies for any strongly dominating starting point.
Theorem 4.2. Suppose A ∈ Zn×m has full row rank (n ≤ m), b ∈ Zn, c ∈ Z^m_{>0} and ε ∈ (0, 1). Given x(0) ∈ X and its corresponding α(0), the Physarum dynamics (6) initialized with x(0) runs in two regimes:
(i) The first regime is executed when α(0) 6∈ [1/2, 1/h0] and it computes a point x(t) ∈ X such that α(t) ∈ [1/2, 1/h0]. In particular, if α(0) < 1/2 then h ≤ (Φ/opt) · (α(0) h0)^2 and t = 1/h. Otherwise, if α(0) > 1/h0 then h ≤ Φ/opt and t = ⌊log1/(1−h) [h0 (α(0) − 1)/(1 − h0)]⌋.
(ii) The second regime starts from a point x(t) ∈ X with α(t) ∈ [1/2, 1/h0], it has a step size h ≤ (Φ/opt) · h0^2/2 and outputs for any k ≥ 4C1/(hΦ) · ln(C2 Ψ(0)/(ε · min{1, x(0)_min})) a vector x(t+k) ∈ X such that dist(x(t+k), X⋆) < ε/(DγA).
We stated the bounds on h in terms of the unknown quantities Φ and opt. However, Φ/opt ≥ 1/C3 by
Lemma 3.1 and hence replacing Φ/opt by 1/C3 yields constructive bounds for h.
Organization: This section is devoted to proving Theorem 4.2, and it is organized as follows: Subsection 4.2
establishes core efficiency bounds that extend [SV16c] and yield a scale-invariant determinant dependence of
the step size and are applicable to strongly dominating points. Subsection 4.3 gives the definition of strongly
dominating points and shows that the Physarum dynamics (6) initialized with such a point is well defined.
Subsection 4.4 extends the analysis in [BBD+ 13, SV16b, SV16c] to positive linear programs, by generalizing
the concept of non-negative flows to non-negative feasible kernel-free vectors. Subsection 4.5 shows that x(ℓ)
converges to X⋆ for large enough ℓ. Subsection 4.6 concludes the proof of Theorem 4.2.
4.2 Useful Lemmas
Recall that R(ℓ) := diag(c) · (X(ℓ))−1 is a positive diagonal matrix and L(ℓ) = A(R(ℓ))−1 AT is invertible. Let p(ℓ) be the unique solution of L(ℓ) p(ℓ) = b. We improve the dependence on DS in [SV16c, Lemma 5.2] to D.
Lemma 4.3. [SV16c, extension of Lemma 5.2] Suppose x(ℓ) > 0, R(ℓ) is a positive diagonal matrix and L(ℓ) = A(R(ℓ))−1 AT. Then for every e ∈ [m], it holds that ||AT (L(ℓ))−1 Ae||∞ ≤ D · ce/x(ℓ)_e.
Proof. The statement follows by combining the proof in [SV16c, Lemma 5.2] with Lemma 3.1.
We show next that [SV16b, Corollary 5.3] holds for x-capacitated vectors, which extends the class of
feasible starting points, and further yields a bound in terms of D.
Lemma 4.4. [SV16b, extension of Corollary 5.3] Let p(ℓ) be the unique solution of L(ℓ) p(ℓ) = b and assume
x(ℓ) is a positive vector with corresponding positive scalar α(ℓ) such that there is a vector f satisfying Af =
α(ℓ) · b and 0 ≤ f ≤ x(ℓ). Then ||AT p(ℓ)||∞ ≤ D||c||1/α(ℓ).
23
Proof. By assumption, f satisfies α(ℓ) b = Af = ∑_e fe Ae and 0 ≤ f ≤ x(ℓ). This yields
α(ℓ) ||AT p(ℓ)||∞ = ||AT (L(ℓ))−1 · α(ℓ) b||∞ = ||∑_e fe AT (L(ℓ))−1 Ae||∞ ≤ ∑_e fe ||AT (L(ℓ))−1 Ae||∞ ≤ D ∑_e fe ce/x(ℓ)_e ≤ D||c||1,
where the second inequality uses Lemma 4.3.
We note that applying Lemma 4.3 and Lemma 4.4 in the analysis of [SV16c, Theorem 1.3] yields
an improved result that depends on the scale-invariant determinant D. Moreover, we show in the next
Subsection 4.3 that the Physarum dynamics (6) can be initialized with any strongly dominating point.
We establish now an upper bound on q that does not depend on x. We then use this upper bound on q
to establish a uniform upper bound on x.
Lemma 4.5. For any x(ℓ) > 0, ||q(ℓ)||∞ ≤ mD^2 ||b/γA||1.
Proof. Let f be a basic feasible solution of Af = b. By definition, q(ℓ)_e = (x(ℓ)_e/ce) ATe (L(ℓ))−1 b and thus
q(ℓ)_e = (x(ℓ)_e/ce) ∑_u ATe (L(ℓ))−1 Au fu ≤ (x(ℓ)_e/ce) ∑_u |fu| · |ATe (L(ℓ))−1 Au| ≤ D||f||1,
where the last inequality follows by
ATe (L(ℓ))−1 Au = ATu (L(ℓ))−1 Ae ≤ ||AT (L(ℓ))−1 Ae||∞ ≤ D · ce/x(ℓ)_e    (Lemma 4.3).
By Cramer's rule and Lemma 3.1, we have |q(ℓ)_e| ≤ D||f||1 ≤ mD^2 ||b/γA||1.
Let k, t ∈ N. We denote by
q(t,k) = ∑_{i=t}^{t+k−1} [h (1 − h)^{t+k−1−i} / (1 − (1 − h)^k)] · q(i)    and    p(t,k) = ∑_{i=t}^{t+k−1} p(i).    (22)
Straightforward checking shows that A q(t,k) = b. Further, for C := diag(c), t ≥ 0 and k ≥ 1, we have
x(t) ∏_{i=t}^{t+k−1} [1 + h(C−1 AT p(i) − 1)] = x(t+k) = (1 − h)^k x(t) + [1 − (1 − h)^k] q(t,k).
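The averaging identity above is purely algebraic: unrolling x(i+1) = (1 − h) x(i) + h q(i) over k steps yields exactly the stated convex combination, which can be checked with arbitrary vectors (a sketch with random data):

```python
import numpy as np

rng = np.random.default_rng(0)
h, k = 0.2, 6
x0 = rng.random(4) + 0.1
qs = [rng.random(4) for _ in range(k)]

x = x0.copy()
for q in qs:
    x = (1 - h) * x + h * q                      # x(i+1) = (1-h) x(i) + h q(i)

beta = 1 - (1 - h) ** k                          # 1 - (1-h)^k
qbar = sum(h * (1 - h) ** (k - 1 - i) * qs[i] for i in range(k)) / beta
assert np.allclose(x, (1 - h) ** k * x0 + beta * qbar)   # averaging identity
```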
We give next an upper bound on x(k) that is independent of k.
Lemma 4.6. Let Ψ(0) = max{mD^2 ||b/γA||1, ||x(0)||∞}. Then ||x(k)||∞ ≤ Ψ(0) for all k ∈ N.
Proof. We prove the statement by induction. The base case kx(0) k∞ ≤ Ψ(0) is clear. Suppose the statement
holds for some k > 0. Then, triangle inequality and Lemma 4.5 yield
||x(k+1)||∞ ≤ (1 − h)||x(k)||∞ + h||q(k)||∞ ≤ (1 − h)Ψ(0) + hΨ(0) ≤ Ψ(0).
We show now convergence to feasibility.
Lemma 4.7. Let r(k) = b − Ax(k) . Then r(k+1) = (1 − h)r(k) and hence r(k) = (1 − h)k (b − Ax(0) ).
Proof. By definition x(k+1) = (1 − h)x(k) + hq (k) , and thus the statement follows by
r(k+1) = b − Ax(k+1) = b − (1 − h)Ax(k) − hb = (1 − h)r(k) .
4.3 Strongly Dominating Capacity Vectors
For the shortest path problem, it is known that one can start from any capacity vector x for which the directed
capacity of every source-sink cut is positive, where the directed capacity of a cut is the total capacity of the
edges crossing the cut in source-sink direction minus the total capacity of the edges crossing the cut in the
sink-source direction. We generalize this result. We start with the max-flow like LP
max { t : Af = t · b; 0 ≤ f ≤ x }
(23)
in variables f ∈ Rm and t ∈ R and its dual
min { xT z : z ≥ 0; z ≥ AT y; bT y = 1 }
(24)
in variables z ∈ Rm and y ∈ Rn . The feasible region of the dual contains no line. Assume otherwise; say it
contains (z, y) = (z (0) , y (0) ) + λ(z (1) , y (1) ) for all λ ∈ R. Then, z ≥ 0 implies z (1) = 0 and further z ≥ AT y
implies z (0) ≥ AT y (0) + λAT y (1) and hence AT y (1) = 0. Since A has full row rank, we have y (1) = 0. The
optimum of the dual is therefore attained at a vertex. In an optimum solution, we have z = max{0, AT y}.
Let V be the set of vertices of the feasible region of the dual (24), and let
Y := { y : (z, y) ∈ V }
be the set of their projections on y-space. Then, the optimum of the dual (24) is given by
min_{y∈Y} max{0, yT A} · x.    (25)
The set of strongly dominating capacity vectors x is defined by
X := { x ∈ R^m_{>0} : yT Ax > 0 for all y ∈ Y }.    (26)
We next show that for all x(0) ∈ X and sufficiently small step size, the sequence {x(k)}k∈N stays in X. Moreover, yT Ax(k) converges to 1 for every y ∈ Y. We define
α(y, x) := yT Ax    and    α(x) := min { α(y, x) : y ∈ Y }.
Let α(ℓ) := α(x(ℓ)). Then, x(ℓ) ∈ X iff α(ℓ) > 0. We summarize the discussion in the following Lemma.
Lemma 4.8. Suppose x(ℓ) ∈ X. Then, there is a vector f such that Af = α(ℓ) · b and 0 ≤ f ≤ x(ℓ) .
Proof. By the strong duality theorem applied to (23) and (24), it holds by (25) that
t = min_{y∈Y} { max{0, yT A} · x(ℓ) } ≥ min_{y∈Y} yT Ax(ℓ) = α(ℓ).
The statement follows by the definition of (23).
We demonstrate now that α(ℓ) converges to 1.
Lemma 4.9. Assume x(ℓ) ∈ X. Then, for any h(ℓ) ≤ min{1/4, α(ℓ)h0 } we have x(ℓ+1) ∈ X and
1 − α(ℓ+1) = (1 − h(ℓ) ) · (1 − α(ℓ) ).
Proof. By applying Lemma 4.4 and Lemma 4.8 with x(ℓ) ∈ X, we have ||AT p(ℓ)||∞ ≤ D||c||1/α(ℓ) and hence for every index e it holds
−h(ℓ) · c−1_e x(ℓ)_e ATe p(ℓ) ≥ −(h(ℓ) x(ℓ)_e)/(2α(ℓ) h0) ≥ −x(ℓ)_e/2.
Thus,
x(ℓ+1)_e = (1 − h(ℓ)) x(ℓ)_e + h(ℓ) [R(ℓ)_e]−1 ATe p(ℓ) ≥ (3/4) x(ℓ)_e − (1/2) x(ℓ)_e = (1/4) x(ℓ)_e > 0.
Let y ∈ Y be arbitrary. Then y T b = 1 and hence y T r(ℓ) = y T (b − Ax(ℓ) ) = 1 − y T Ax(ℓ) = 1 − α(y, x(ℓ) ). The
second claim now follows from Lemma 4.7.
We note that the convergence speed crucially depends on the initial point x(0) ∈ X, and in particular to
its corresponding value α(0) . Further, this dependence naturally partitions the Physarum dynamics (6) into
the five regimes given in Corollary 4.1.
4.4 x(k) is Close to a Non-Negative Kernel-Free Vector
In this subsection, we generalize [SV16b, Lemma 5.4] to positive linear programs. We achieve this in two
steps. First, we generalize a result by Ito et al. [IJNT11, Lemma 2] to positive linear programs and then we
substitute the notion of a non-negative cycle-free flow with a non-negative feasible kernel-free vector. Throughout this and the consecutive subsection, we denote ρA := max{DγA, nD^2 ||A||∞}.
Lemma 4.10. Suppose the matrix A ∈ Zn×m has full row rank and b ∈ Zn. Let g be a feasible solution to Ag = b and let S ⊆ [m] be a subset of column indices of A such that ∑_{i∈S} |gi| < 1/ρA. Then, there is a feasible solution f such that gi · fi ≥ 0 for all i ∈ [m], fi = 0 for all i ∈ S and ||f − g||∞ < 1/(DγA).
Proof. W.l.o.g. we can assume that g ≥ 0 as we could change the signs of the columns of A accordingly. Let
1S be the indicator vector of S. We consider the linear program
min{1TS x : Ax = b, x ≥ 0}
and let opt be its optimum value. Notice that 0 ≤ opt ≤ 1TS g < 1/ρA . Since the feasible region does
not contain a line and the minimum is bounded, the optimum is attained at a basic feasible solution, say
f . Suppose that there is an index i ∈ S with fi > 0. By Lemma 3.1, we have fi ≥ 1/(DγA ). This is a
contradiction to the optimality of f and hence fi = 0 for all i ∈ S.
Among the feasible solutions f such that fi gi ≥ 0 for all i and fi = 0 for all i ∈ S, we choose the one
that minimizes ||f − g||∞. For simplicity, we also denote it by f. Note that f satisfies supp(f) ⊆ S̄, where S̄ = [m]\S. Further, since fS = 0 and
AS̄ gS̄ + AS gS = Ag = b = Af = AS̄ fS̄ + AS fS = AS̄ fS̄,
we have AS̄ (fS̄ − gS̄) = AS gS. Let AB be a linearly independent column subset of AS̄ of maximal cardinality, i.e. the column subset AN, where N = S̄ \ B, is linearly dependent on AB. Hence, there is an invertible square submatrix A′B ∈ Z|B|×|B| of AB and a vector v = (vB, 0N) such that
[A′B ; A′′B] vB = AB vB = AS gS.
Let r = (AS gS)B. Since A′B is invertible, there is a unique vector vB such that A′B vB = r. Observe that
|ri| = |∑_{j∈S} Ai,j gj| ≤ ||A||∞ ∑_{j∈S} |gj| < ||A||∞ / (nD^2 ||A||∞) = 1/(nD^2).
By Cramer's rule, vB(e) is a quotient of two determinants. The denominator is det(A′B) and hence at least one in absolute value. For the numerator, the e-th column is replaced by r. Expansion according to this column shows that the absolute value of the numerator is bounded by
(D/γA) ∑_{i∈B} |ri| < (D/γA) · (|B|/(nD^2)) ≤ 1/(DγA).
Therefore, ||f − g||∞ ≤ 1/(DγA) and the statement follows.
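The first step of the preceding proof, taking a basic optimal solution of min{1_S^T x : Ax = b, x ≥ 0}, can be brute-forced for tiny instances (a sketch; enumerating all bases is only for illustration, real instances would use an LP solver):

```python
import itertools
import numpy as np

def min_support_bfs(A, b, S):
    """Brute-force the LP min{1_S^T x : Ax = b, x >= 0} over basic feasible
    solutions (tiny instances only)."""
    n, m = A.shape
    best_val, best_x = None, None
    for cols in itertools.combinations(range(m), n):
        B = A[:, cols]
        if abs(np.linalg.det(B)) < 1e-12:
            continue                          # not a basis
        xb = np.linalg.solve(B, b)
        if np.any(xb < -1e-9):
            continue                          # not feasible
        x = np.zeros(m)
        x[list(cols)] = xb
        val = sum(x[i] for i in S)
        if best_val is None or val < best_val - 1e-12:
            best_val, best_x = val, x
    return best_x

A = np.array([[1., 1., 0.], [0., 1., 1.]])
b = np.array([1., 1.])
f = min_support_bfs(A, b, S={0})
assert np.allclose(A @ f, b) and abs(f[0]) < 1e-9   # f vanishes on S
```

When the optimum value is below 1/(DγA), Lemma 3.1 forces the returned basic solution to vanish on all of S, exactly as in the proof.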
Lemma 4.11. Let q ∈ Rm, p ∈ Rn and N = {e ∈ [m] : qe ≤ 0 or pT Ae ≤ 0}, where Aq = b and p = L−1 b. Suppose ∑_{e∈N} |qe| < 1/ρA. Then there is a non-negative feasible kernel-free vector f such that supp(f) ⊆ E\N and ||f − q||∞ < 1/(DγA).
Proof. We apply Lemma 4.10 to q with S = N . Then, there is a non-negative feasible vector f such that
supp(f ) ⊆ E\N and kf − qk∞ < 1/(DγA ). By Lemma 2.1, f can be expressed as a sum of a convex
combination of basic feasible solutions plus a vector w in the kernel of A. Moreover, all vectors in this
representation are sign compatible with f , and in particular w is non-negative too.
Suppose for contradiction that w 6= 0. Since Aw = 0, we have 0 = pT Aw = ∑_{e∈[m]} pT Ae we and since w ≥ 0 and w 6= 0, it follows that there is an index e ∈ [m] satisfying we > 0 and pT Ae ≤ 0. Since f and w are sign compatible, we > 0 implies fe > 0. On the other hand, as pT Ae ≤ 0 we have e ∈ N and thus fe = 0. This is a contradiction, hence w = 0.
Using Corollary 4.1, for any point x(0) ∈ X there is a point x(t) ∈ X such that α(t) ∈ [1/2, 1/h0]. Thus,
we can assume that α(0) ∈ [1/2, 1/h0] and work with h ≤ h0 /2, where h0 = cmin /(2D||c||1 ). We generalize
next [SV16b, Lemma 5.4].
Lemma 4.12. Suppose x(t) ∈ X such that α(t) ∈ [1/2, 1/h0], h ≤ h0 /2 and ε ∈ (0, 1). Then, for any k ≥
h^−1 ln(8mρA Ψ(0)/ε) there is a non-negative feasible kernel-free vector f such that ||x(t+k) − f||∞ < ε/(DγA).
Proof. Let β(k) := 1 − (1 − h)^k. By (22), vector q(t,k) satisfies A q(t,k) = b and thus Lemma 4.6 yields
||x(t+k) − β(k) q(t,k)||∞ = (1 − h)^k · ||x(t)||∞ ≤ exp{−hk} · Ψ(0) ≤ ε/(8mρA).
(27)
Using Corollary 4.1, we have x(t+k) ∈ X such that α(t+k) ∈ (1/2, 1/h0) for every k ∈ N+. Let Fk = Qk ∪ Pk, where Qk = {e ∈ [m] : q(t,k)_e ≤ 0} and Pk = {e ∈ [m] : ATe p(t,k) ≤ 0}. Then, for every e ∈ Qk it holds
|q(t,k)_e| ≤ [β(k)]−1 · |x(t+k)_e − β(k) q(t,k)_e| ≤ ε/(7mρA).    (28)
By Lemma 4.6, ||x(·)||∞ ≤ Ψ(0). Moreover, by (22) for every e ∈ Pk we have
x(t+k)_e = x(t)_e ∏_{i=t}^{t+k−1} [1 + h(c−1_e ATe p(i) − 1)] ≤ x(t)_e · exp{−hk + (h/ce) · ATe p(t,k)} ≤ exp{−hk} · Ψ(0) ≤ ε/(8mρA),
and by combining the triangle inequality with (27), it follows for every e ∈ Pk that
|q(t,k)_e| ≤ [β(k)]−1 · (|x(t+k)_e − β(k) q(t,k)_e| + |x(t+k)_e|) ≤ [β(k)]−1 · ε/(4mρA) ≤ ε/(3mρA).    (29)
Therefore, (28) and (29) yield
∑_{e∈Fk} |q(t,k)_e| ≤ m · ε/(3mρA) ≤ ε/(3ρA).    (30)
By Lemma 4.11 applied with q(t,k) and N = Fk, it follows by (30) that there is a non-negative feasible kernel-free vector f such that supp(f) ⊆ E\N and
||f − q(t,k)||∞ < ε/(3DγA).
By Lemma 4.5, we have ||q(t,k)||∞ ≤ mD^2 ||b/γA||1 and since Ψ(0) ≥ mD^2 ||b/γA||1, it follows that
||x(t+k) − f||∞ = ||x(t+k) − β(k) q(t,k) + β(k) q(t,k) − f||∞
≤ ||x(t+k) − β(k) q(t,k)||∞ + ||f − q(t,k)||∞ + (1 − h)^k ||q(t,k)||∞
≤ ε/(8mρA) + ε/(3DγA) + ε · mD^2 ||b/γA||1 / (8mρA · Ψ(0))
≤ ε/(DγA).
4.5 x(k) is ε-Close to an Optimal Solution
Recall that N denotes the set of non-optimal basic feasible solutions of (5) and Φ = ming∈N cT g − opt. For
completeness, we prove next a well known inequality [PS82, Lemma 8.6] that lower bounds the value of Φ.
Lemma 4.13. Suppose A ∈ Rn×m has full row rank, b ∈ Rn and c ∈ Rm are integral. Then, Φ ≥ 1/(DγA )2 .
Proof. Let g = (gB, 0) be an arbitrary basic feasible solution with basis matrix AB, where gB(e) 6= 0 and |supp(gB)| = n. We write M−i,−j to denote the matrix M with deleted i-th row and j-th column. Let Qe be the matrix formed by replacing the e-th column of AB by the column vector b. Then, by Cramer's rule, we have
|gB(e)| = |det(Qe)| / |det(AB)| = (1/γA) · |∑_{k=1}^{n} (−1)^{j+k} · bk · det(γA^−1 [AB]−k,−j)| / |det(γA^−1 AB)| ≥ 1/(DγA),
where j denotes the position of the e-th column within AB. Note that all components of vector gB have denominator with equal value, i.e. det(AB). Consider an arbitrary non-optimal basic feasible solution g and an optimal basic feasible solution f⋆. Then, ge = Ge/G and f⋆e = Fe/F are rationals such that Ge, G, Fe, F ≤ DγA for every e. Further, let re = ce(Ge F − Fe G) ∈ Z for every e ∈ [m], and observe that
cT (g − f⋆) = ∑_e ce(ge − f⋆e) = (1/(GF)) ∑_e re ≥ 1/(DγA)^2,
where the last inequality follows since cT (g − f⋆) > 0 implies ∑_e re ≥ 1.
Lemma 4.14. Let f be a non-negative feasible kernel-free vector and ε ∈ (0, 1) a parameter. Suppose
for every non-optimal basic feasible solution g, there exists an index e ∈ [m] such that ge > 0 and fe <
ε/(2mD3 γA ||b||1 ). Then, kf − f ⋆ k∞ < ε/(DγA ) for some optimal f ⋆ .
Proof. Let C = 2D^2 ||b||1. Since f is kernel-free, by Lemma 2.1 it can be expressed as a convex combination of sign-compatible basic feasible solutions f = ∑_{i=1}^{ℓ} αi f(i) + ∑_{i=ℓ+1}^{m} αi f(i), where f(1), . . . , f(ℓ) denote the optimal solutions. By Lemma 3.1, f(i)_e > 0 implies f(i)_e ≥ 1/(DγA). By the hypothesis, for every non-optimal f(i), i.e. i ≥ ℓ + 1, there exists an index e(i) ∈ [m] such that
1/(DγA) ≤ f(i)_{e(i)}    and    f_{e(i)} < ε/(mDγA · C).
Thus,
αi/(DγA) ≤ αi f(i)_{e(i)} ≤ ∑_{j=1}^{m} αj f(j)_{e(i)} = f_{e(i)} < ε/(mDγA · C),
and hence ∑_{i=ℓ+1}^{m} αi ≤ ε/C. By Lemma 3.1, we have ||f(j)||∞ ≤ D||b/γA||1 = C/(2DγA) for every j. Let β ≥ 0 be an arbitrary vector satisfying ∑_{i=1}^{ℓ} βi = ∑_{i=ℓ+1}^{m} αi. Let νi = αi + βi for every i ∈ [ℓ] and let f⋆ = ∑_{i=1}^{ℓ} νi f(i). Then, f⋆ is an optimal solution and we have
||f⋆ − f||∞ = ||∑_{i=1}^{ℓ} βi f(i) − ∑_{i=ℓ+1}^{m} αi f(i)||∞ ≤ max_{i∈[1:m]} ||f(i)||∞ · (∑_{i=1}^{ℓ} βi + ∑_{i=ℓ+1}^{m} αi) ≤ (2ε/C) · (C/(2DγA)) = ε/(DγA).
In the following lemma, we extend the analysis in [SV16b, Lemma 5.6] from the transshipment problem to positive linear programs. Our result crucially relies on an argument that uses the parameter
Φ = ming∈N cT g − opt. It is here, where our analysis incurs the linear step size dependence on Φ/opt
and the quadratic dependence on opt/Φ for the number of steps.
An important technical detail is that the first regime incurs an extra (Φ/opt)-factor dependence. At first glance, this might seem unnecessary due to Corollary 4.1, however a careful analysis shows its necessity (see (35) for the inductive argument). Further, we note that the undirected Physarum dynamics (7) satisfies x(t)_min ≥ (1 − h)^t · x(0)_min, whereas the directed Physarum dynamics (6) might yield a value x(t)_min which decreases with faster than exponential rate. As our analysis incurs a logarithmic dependence on 1/x(0)_min, it is prohibitive to decouple the two regimes and give bounds in terms of log(1/x(t)_min), which would be necessary as x(t) is the initial point of the second regime.
Lemma 4.15. Let g be an arbitrary non-optimal basic feasible solution. Given x(0) ∈ X and its corresponding α(0), the Physarum dynamics (6) initialized with x(0) runs in two regimes:
(i) The first regime is executed when α(0) 6∈ [1/2, 1/h0] and computes a point x(t) ∈ X such that α(t) ∈ [1/2, 1/h0]. In particular, if α(0) < 1/2 then h ≤ (Φ/opt) · (α(0) h0)^2 and t = 1/h. Otherwise, if α(0) > 1/h0 then h ≤ Φ/opt and t = ⌊log1/(1−h) [h0 (α(0) − 1)/(1 − h0)]⌋.
(ii) The second regime starts from a point x(t) ∈ X such that α(t) ∈ [1/2, 1/h0], it has step size h ≤ (Φ/opt) · h0^2/2 and for any k ≥ 4 · cT g/(hΦ) · ln(Ψ(0)/(ε x(0)_min)), guarantees the existence of an index e ∈ [m] such that ge > 0 and x(t+k)_e < ε.
Proof. Similar to the work of [BBD+13, SV16b], we use a potential function that takes as input a basic feasible solution g and a step number ℓ, and is defined by
B(ℓ)_g := ∑_{e∈[m]} ge ce ln x(ℓ)_e.
Since x(ℓ+1)_e = x(ℓ)_e (1 + h(ℓ) [c−1_e · ATe p(ℓ) − 1]), we have
B(ℓ+1)_g − B(ℓ)_g = ∑_e ge ce ln(x(ℓ+1)_e / x(ℓ)_e) = ∑_e ge ce ln(1 + h(ℓ) [ATe p(ℓ)/ce − 1])
≤ h(ℓ) ∑_e ge ce [ATe p(ℓ)/ce − 1] = h(ℓ) (−cT g + [p(ℓ)]T Ag) = h(ℓ) (−cT g + bT p(ℓ)).    (31)
Let f⋆ be an optimal solution to (5). In order to lower bound B(ℓ+1)_{f⋆} − B(ℓ)_{f⋆}, we use the inequality ln(1 + x) ≥ x − x^2 for all x ∈ [−1/2, 1/2]. Then, we have
B(ℓ+1)_{f⋆} − B(ℓ)_{f⋆} = ∑_e f⋆e ce ln(1 + h(ℓ) [ATe p(ℓ)/ce − 1])
≥ ∑_e f⋆e ce (h(ℓ) [ATe p(ℓ)/ce − 1] − [h(ℓ)]^2 [ATe p(ℓ)/ce − 1]^2)
≥ h(ℓ) (bT p(ℓ) − opt − h(ℓ) · (1/(2α(ℓ) h0))^2 · opt),    (32)
where the last inequality follows by combining
∑_e f⋆e ce [(c−1_e ATe p(ℓ)) − 1] = [p(ℓ)]T Af⋆ − opt = bT p(ℓ) − opt,
||AT p(ℓ)||∞ ≤ D||c||1/α(ℓ) (by Lemma 4.4 and Lemma 4.8 applied with x(ℓ) ∈ X), h0 = cmin/(4D||c||1) and
h(ℓ) ∑_e f⋆e ce · (c−1_e ATe p(ℓ) − 1)^2 ≤ h(ℓ) (2D||c||1/(α(ℓ) cmin))^2 opt = h(ℓ) (1/(2α(ℓ) h0))^2 opt.
Further, by combining (31), (32), cT g − opt ≥ Φ for every non-optimal basic feasible solution g and provided that the inequality h(ℓ) (1/(2α(ℓ) h0))^2 opt ≤ Φ/2 holds, we obtain
B(ℓ+1)_{f⋆} − B(ℓ)_{f⋆} ≥ h(ℓ) (bT p(ℓ) − cT g) + h(ℓ) (cT g − opt − Φ/2) ≥ B(ℓ+1)_g − B(ℓ)_g + h(ℓ) Φ/2.    (33)
Using Corollary 4.1, we partition the Physarum dynamics (6) execution into three regimes, based on α(0). For every i ∈ {1, 2, 3}, we show next that the i-th regime has a fixed step size h(ℓ) = hi such that h(ℓ) (1/(2α(ℓ) h0))^2 opt ≤ Φ/2 for every step ℓ in this regime.
By Lemma 4.9, for every i ∈ {1, 2, 3} it holds for every step ℓ in the i-th regime that
α(ℓ) = 1 − (1 − hi)^ℓ · (1 − α(0)).    (34)
Case 1: Suppose α(0) > 1/h0. Notice that h(ℓ) = Φ/opt suffices, since 1/(2α(ℓ) h0) < 1/2 for every α(ℓ) > 1/h0. Further, by applying (34) with α(t) := 1/h0, we have t = ⌊log1/(1−h(ℓ)) [h0 (α(0) − 1)/(1 − h0)]⌋ ≤ (opt/Φ) · log(α(0) h0). Note that by (34) the sequence {α(ℓ)}ℓ≤t is decreasing, and by Corollary 4.1 we have 1 < α(t) ≤ 1/h0.
Case 2: Suppose α(0) ∈ (0, 1/2). By (34) the sequence {α(ℓ) }ℓ∈N is increasing and by Corollary 4.1 the
regime is terminated once α(ℓ) ∈ [1/2, 1). Observe that h(ℓ) = (Φ/opt) · (α(0) h0 )2 suffices, since α(0) ≤ α(ℓ) .
Then, by (34) applied with α(t) := 1/2, this regime has at most t = (opt/Φ) · (1/α(0) h0 )2 steps.
Case 3: Suppose α(0) ∈ [1/2, 1/h0]. By (34) the sequence {α(ℓ)}ℓ∈N converges to 1 (it decreases if α(0) ∈ (1, 1/h0] and increases when α(0) ∈ [1/2, 1)). Notice that h(ℓ) = (Φ/opt) · h0^2/2 suffices, since 1/2 ≤ α(ℓ) ≤ 1/h0 for every ℓ ∈ N. We note that the number of steps in this regime is to be determined soon.
Hence, we conclude that inequality (33) holds. Further, using Case 1 and Case 2 there is an integer t ∈ N such that α(t) ∈ [1/2, 1/h0]. Let k ∈ N be the number of steps in Case 3, and let h := (Φ/opt) · h0^2/2. Then, for every ℓ ∈ {t, . . . , t + k − 1} it holds that h(ℓ) = h and thus
B(t+k)_{f⋆} − B(0)_{f⋆} ≥ B(t+k)_g − B(0)_g + ∑_{ℓ=0}^{t+k−1} h(ℓ) Φ/2 ≥ B(t+k)_g − B(0)_g + k · hΦ/2.    (35)
By Lemma 4.6, B(ℓ)_g ≤ cT g · ln Ψ(0) for every basic feasible solution g and every ℓ ∈ N, and thus
B(t+k)_g ≤ −k · hΦ/2 + B(0)_g + B(t+k)_{f⋆} − B(0)_{f⋆}
≤ −k · hΦ/2 + cT g · ln Ψ(0) + opt · ln Ψ(0) − opt · ln x(0)_min
≤ −k · hΦ/2 + 2cT g · ln(Ψ(0)/x(0)_min).
Suppose for the sake of a contradiction that for every e ∈ [m] with ge > 0 it holds x(t+k)_e > ε. Then, B(t+k)_g > cT g · ln ε yields k < 4 · cT g/(hΦ) · ln(Ψ(0)/(ε x(0)_min)), a contradiction to the choice of k.
4.6 Proof of Theorem 4.2
By Corollary 4.1 and Lemma 4.15, if x(0) ∈ X such that α(0) > 1/h0 , we work with h ≤ Φ/opt and
after t = ⌊log1/(1−h) [h0 (α(0) − 1)/(1 − h0 )]⌋ ≤ (opt/Φ) · log(α(0) h0 ) steps, we obtain x(t) ∈ X such that
α(t) ∈ (1, 1/h0 ]. Otherwise, if α(0) ∈ (0, 1/2) we work with h ≤ (Φ/opt) · (α(0) h0 )2 and after t = 1/h
steps, we obtain x(t) ∈ X such that α(t) ∈ [1/2, 1). Hence, we can assume that α(t) ∈ [1/2, 1/h0] and set
h ≤ (Φ/opt) · h0^2/2. Then, the Lemmas in Subsections 4.4 and 4.5 are applicable.
Let E1 := D||b/γA||1 ||c||1, E2 := 8mρA Ψ(0), E3 := 2mD^3 γA ||b||1 and E4 := 8mD^2 ||b||1. Consider an arbitrary non-optimal basic feasible solution g.
By Lemma 3.1, we have cT g ≤ E1 and thus both Lemma 4.12 and Lemma 4.15 are applicable with h, ε⋆ := ε/E4 and any k ≥ k0 := 4E1/(hΦ) · ln[(E2/min{1, x(0)_min}) · (DγA/ε⋆)]. Hence, by Lemma 4.15, the Physarum dynamics (6) guarantees the existence of an index e ∈ [m] such that ge > 0 and x(t+k)_e < ε⋆/(DγA). Moreover, by Lemma 4.12 there is a non-negative feasible kernel-free vector f such that ||x(t+k) − f||∞ < ε⋆/(DγA). Thus, for the index e it follows that ge > 0 and fe < 2ε⋆/(DγA) = (ε/2) · (4/(E4 DγA)) = ε/(2E3). Then, Lemma 4.14 yields ||f − f⋆||∞ < ε/(2DγA) and by the triangle inequality we have ||x(t+k) − f⋆||∞ < ε/(DγA).
By construction, ρA = max{DγA, nD^2 ||A||∞} ≤ nD^2 γA ||A||∞. Let E2′ = 8mnD^2 γA ||A||∞, so that E2 ≤ E2′ Ψ(0), and E5 = E2′ E4 · DγA = 8^2 m^2 n D^5 γA^2 ||A||∞ ||b||1. Further, let C1 = E1 and C2 = E5. Then, the statement follows for any k ≥ k1 := 4C1/(hΦ) · ln(C2 Ψ(0)/(ε · min{1, x(0)_min})).
4.7 Preconditioning
What can be done if the initial point is not strongly dominating? For the transshipment problem it suffices
to add an edge of high capacity and high cost from every source node to every sink node [BBD+ 13, SV16b].
This will make the instance strongly dominating and will not affect the optimal solution.
In this section, we generalize this observation to positive linear programs. We add an additional column
equal to b and give it sufficiently high capacity and cost. This guarantees that the resulting instance is
strongly dominating and the optimal solution remains unaffected. We state now our algorithmic result.
Theorem 4.16. Given an integral LP (A, b, c > 0), a positive x(0) ∈ Rm and a parameter ε ∈ (0, 1). Let ([A | b], b, (c, c′)) be an extended LP with c′ = 2C1 and z(0) := 1 + DS ||x(0)||∞ ||A||1 ||b||1.8 Then, (x(0); z(0)) is a strongly dominating starting point of the extended problem such that yT [A | b](x(0), z(0)) ≥ 1, for all y ∈ Y. In particular, the Physarum dynamics (6) initialized with (x(0), z(0)) and a step size h ≤ h0^2/C3, outputs for any k ≥ 4C1 · (DγA)^2/h · ln(C2 Υ(0)/(ε · min{1, x(0)_min})) a vector (x(k), z(k)) > 0 such that dist(x(k), X⋆) < ε/(DγA) and z(k) < ε/(DγA), where Υ(0) := max{Ψ(0), z(0)}.
Theorem 4.16 subsumes [SV16b, Theorem 1.2] for flow problems by giving a tighter asymptotic convergence rate, since for the transshipment problem A is a totally unimodular matrix and satisfies D = DS = 1, γA = 1, ||A||∞ = 1 and Φ = 1. We note that the scalar z(0) depends on the scaled determinant DS, see Theorem 1.3.
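The construction behind Theorem 4.16 is mechanical and can be sketched directly (a sketch; the brute-force computation of DS is exponential and only meant for tiny matrices, and the constant C1 is passed in as a parameter):

```python
import itertools
import numpy as np

def DS(A):
    """max |det(A')| over all square submatrices A' of A (tiny A only)."""
    n, m = A.shape
    best = 0.0
    for k in range(1, min(n, m) + 1):
        for rows in itertools.combinations(range(n), k):
            for cols in itertools.combinations(range(m), k):
                best = max(best, abs(np.linalg.det(A[np.ix_(rows, cols)])))
    return best

def precondition(A, b, c, x0, C1):
    """Extend (A, b, c) by the column b with cost c' = 2*C1 and capacity
    z0 = 1 + DS(A) * ||x0||_inf * ||A||_1 * ||b||_1, as in Theorem 4.16 /
    Lemma 4.17 (here ||A||_1 sums |A_ij| over all entries)."""
    z0 = 1 + DS(A) * np.max(np.abs(x0)) * np.abs(A).sum() * np.abs(b).sum()
    return np.column_stack([A, b]), np.append(c, 2 * C1), np.append(x0, z0)

A = np.array([[1., 1., 0.], [0., 1., 1.]])
b = np.array([1., 1.])
A_ext, c_ext, x_ext = precondition(A, b, np.array([1., 2., 1.]), np.ones(3), C1=3.0)
assert np.allclose(A_ext[:, -1], b)      # appended column equals b
assert c_ext[-1] == 6.0 and x_ext[-1] == 9.0
```

The high cost of the appended column keeps it out of every optimal solution, while its large capacity makes the extended starting point strongly dominating.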
4.7.1 Proof of Theorem 4.16
In the extended problem, we concatenate to matrix A a column equal to b such that the resulting constraint
matrix becomes [A | b]. Let c′ be the cost and let x′ be the initial capacity of the newly inserted constraint
column. We will determine c′ and x′ in the course of the discussion. Consider the dual of the max-flow like
LP for the extended problem. It has an additional variable z ′ and reads
min { xT z + x′ z′ : z ≥ 0; z′ ≥ 0; z ≥ AT y; z′ ≥ bT y; bT y = 1 }.
In any optimal solution, z′ = bT y = 1 and hence the dual is equivalent to
min { xT z + x′ : z ≥ 0; z ≥ AT y; bT y = 1 }.    (36)
The strongly dominating set of the extended problem is therefore equal to
X = { (x; x′) ∈ R^{m+1}_{>0} : yT [A | b] (x; x′) > 0 for all y ∈ Y }.    (37)
The defining condition translates into x′ > −yT Ax for all y ∈ Y. We summarize the discussion in the following lemma.

Lemma 4.17. Given a positive x ∈ Rm, let ρ := ||b||1 DS and x′ := 1 + ρ||A||1 ||x||∞, where ||A||1 := Σi,j |Ai,j| and DS := max{| det(A′)| : A′ is a square sub-matrix of A}. Then, (x; x′) is a strongly dominating starting point of the extended problem such that yT [A | b](x; x′) = yT Ax + x′ ≥ 1, for all y ∈ Y.
Proof. We show first that maxy∈Y ||y||∞ ≤ ρ implies the statement. Let y ∈ Y be arbitrary. Since |y T Ax| ≤
||A||1 ||x||∞ ||y||∞ , we have maxy∈Y |y T Ax| ≤ ρ||A||1 ||x||∞ = x′ − 1 and hence y T [A | b](x; x′ ) ≥ 1.
⁸ We denote ||A||1 := Σi,j |Ai,j|, i.e. we interpret the matrix A as a vector and apply to it the standard ℓ1 norm.
It remains to show that maxy∈Y ||y||∞ ≤ ρ. The constraint polyhedron of the dual (36) is given in matrix notation as

P(ext) := { (z; y) ∈ Rm+n : z − AT y ≥ 0m, bT y = 1, z ≥ 0m },

i.e. the constraint matrix consists of the row blocks [Im×m | −AT], [0Tm | bT] and [Im×m | 0m×n], with right-hand side (0m; 1; 0m).
Let us denote the resulting constraint matrix and vector by M ∈ R(2m+1)×(m+n) and d ∈ R2m+1, respectively.
Note that if b = 0 then the primal LP (23) is either unbounded or infeasible. Hence, we consider the
non-trivial case when b 6= 0. Observe that the polyhedron P (ext) is not empty, since for any y such that
bT y = 1 there is z = max{0, AT y} satisfying (z; y) ∈ P (ext) . Further, P (ext) does not contain a line (see
Subsection 4.3) and thus P (ext) has at least one extreme point p′ ∈ P (ext) . As the dual LP (24) has a
bounded value (the target function is lower bounded by 0) and an extreme point exists (p′ ∈ P (ext) ), the
optimum is attained at an extreme point p ∈ P (ext) . Moreover, as every extreme point is a basic feasible
solution and matrix M has linearly independent columns (A has full row rank), it follows that p has m + n
tight linearly independent constraints.
Let MB(p) ∈ R(m+n)×(m+n) be the basis submatrix of M satisfying MB(p) p = dB(p). Since A, b are integral and MB(p) is invertible, using Laplace expansion we have 1 ≤ | det(MB(p))| ≤ ||b||1 DS = ρ. Let Qi denote the matrix formed by replacing the i-th column of MB(p) by the column vector dB(p). Then, by Cramer's rule and | det(MB(p))| ≥ 1, it follows that |yi| = | det(Qi)/ det(MB(p))| ≤ | det(Qi)| ≤ ρ, for all i ∈ [n].
It remains to fix the cost of the new column. Using Lemma 3.1, opt ≤ cT x(k) ≤ C1 for every k ∈ N, and
thus we set c′ := 2C1 .
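As a quick numerical check of Lemma 4.17 (ours, not from the paper), the sketch below computes ρ and x′ and verifies yT Ax + x′ ≥ 1 over the vertices of the box ||y||∞ ≤ ρ; checking vertices suffices because y ↦ yT Ax is linear. Function names are ours, and DS is passed in (e.g. brute-forced as above).

```python
from itertools import product

def lemma417_point(A, b, x, DS):
    """rho = ||b||_1 * D_S and x' = 1 + rho * ||A||_1 * ||x||_inf (Lemma 4.17)."""
    rho = sum(abs(e) for e in b) * DS
    A1 = sum(abs(e) for row in A for e in row)      # entrywise l1 norm of A
    return rho, 1 + rho * A1 * max(abs(e) for e in x)

def dominating_check(A, x, rho, xp):
    """Check y^T A x + x' >= 1 at every vertex of the box ||y||_inf <= rho;
    the linear map y -> y^T A x attains its extremes at these vertices."""
    n = len(A)
    for y in product([-rho, rho], repeat=n):
        yAx = sum(y[i] * sum(A[i][j] * x[j] for j in range(len(x)))
                  for i in range(n))
        if yAx + xp < 1:
            return False
    return True
```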
4.8
A Simple Lower Bound
Building upon [SV16b, Lemma B.1], we give a lower bound on the number of steps required for computing an
ε-approximation to the optimum shortest path. In particular, we show that for the Physarum dynamics (6)
to compute a point x(k) such that dist(x(k) , X⋆ ) < ε, the required number of steps k has to grow linearly in
opt/(hΦ) and ln(1/ε).
Theorem 4.18. Let (A, b, c) be a positive LP instance such that A = [1 1], b = 1 and c = (opt, opt + Φ)T, where opt > 0 and Φ > 0. Then, for any ε ∈ (0, 1) the discrete directed Physarum dynamics (6) initialized with x(0) = (1/2, 1/2) and any step size h ∈ (0, 1/2], requires at least k = (1/2h) · max{opt/Φ, 1} · ln(2/ε) steps to guarantee x1(k) ≥ 1 − ε, x2(k) ≤ ε. Moreover, if ε ≤ Φ/(2opt) then cT x(k) ≥ (1 + ε)opt as long as k ≤ (1/2h) · max{opt/Φ, 1} · ln(2Φ/(ε · opt)).
Proof. Let c1 = opt and c2 = γopt, where γ = 1 + Φ/opt. We first derive closed-form expressions for x1(k), x2(k), and x1(k) + x2(k). Let s(k) = γx1(k) + x2(k). For any k ∈ N, we have q1(k) + q2(k) = 1 and q1(k)/q2(k) = (x1(k)/c1)/(x2(k)/c2) = γx1(k)/x2(k). Therefore, q1(k) = γx1(k)/s(k) and q2(k) = x2(k)/s(k), and hence

x1(k) = (1 + h(−1 + γ/s(k−1)))x1(k−1)   and   x2(k) = (1 + h(−1 + 1/s(k−1)))x2(k−1).   (38)

Further, x1(k) + x2(k) = (1 − h)(x1(k−1) + x2(k−1)) + h, and thus by induction x1(k) + x2(k) = 1 for all k ∈ N. Therefore, s(k) ≤ γ for all k ∈ N and hence x1(k) ≥ x1(k−1), i.e. the sequence {x1(k)}k∈N is increasing and the sequence {x2(k)}k∈N is decreasing. Moreover, since h(−1 + 1/s(k−1)) ≥ h(1 − γ)/γ = −hΦ/(opt + Φ) and using the inequality 1 − z ≥ e^(−2z) for every z ∈ [0, 1/2], it follows by (38) and induction on k that

x2(k) ≥ (1 − hΦ/(opt + Φ))^k x2(0) ≥ (1/2) exp(−k · 2hΦ/(opt + Φ)).
Thus, x2(k) ≥ ε whenever k ≤ (1/2h) · (opt/Φ + 1) · ln(2/ε). This proves the first claim.

For the second claim, observe that cT x(k) = opt · x1(k) + γopt · x2(k) = opt · (1 + (γ − 1)x2(k)). This is greater than (1 + ε)opt iff x2(k) ≥ ε · opt/Φ. Thus, cT x(k) ≥ (1 + ε)opt as long as k ≤ (1/2h) · (opt/Φ + 1) · ln(2Φ/(ε · opt)).
References

[BBD+13] Luca Becchetti, Vincenzo Bonifaci, Michael Dirnberger, Andreas Karrenbauer, and Kurt Mehlhorn. Physarum can compute shortest paths: Convergence proofs and complexity bounds. In ICALP, volume 7966 of LNCS, pages 472–483, 2013.
[BMV12] Vincenzo Bonifaci, Kurt Mehlhorn, and Girish Varma. Physarum can compute shortest paths. Journal of Theoretical Biology, 309:121–133, 2012.
[Bon13] Vincenzo Bonifaci. Physarum can compute shortest paths: A short proof. Inf. Process. Lett., 113(1-2):4–7, 2013.
[Bon15] Vincenzo Bonifaci. A revised model of network transport optimization in Physarum Polycephalum. November 2015.
[Bon16] Vincenzo Bonifaci. On the convergence time of a natural dynamics for linear programming. CoRR, abs/1611.06729, 2016.
[CDS98] Scott Shaobing Chen, David L. Donoho, and Michael A. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20(1):33–61, 1998.
[CLSW98] F. H. Clarke, Yu. S. Ledyaev, R. J. Stern, and P. R. Wolenski. Nonsmooth Analysis and Control Theory. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 1998.
[IJNT11] Kentaro Ito, Anders Johansson, Toshiyuki Nakagaki, and Atsushi Tero. Convergence properties for the Physarum solver. arXiv:1101.5249v1, January 2011.
[JZ12] A. Johannson and J. Zou. A slime mold solver for linear programming problems. In CiE, pages 344–354, 2012.
[LaS76] J. P. LaSalle. The Stability of Dynamical Systems. SIAM, 1976.
[MO08] T. Miyaji and Isamu Ohnishi. Physarum can solve the shortest path problem on Riemannian surface mathematically rigourously. International Journal of Pure and Applied Mathematics, 47:353–369, 2008.
[NYT00] T. Nakagaki, H. Yamada, and Á. Tóth. Maze-solving by an amoeboid organism. Nature, 407:470, 2000.
[Phy] http://people.mpi-inf.mpg.de/~mehlhorn/ftp/SlimeAusschnitt.webm.
[PS82] Christos H. Papadimitriou and Kenneth Steiglitz. Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 1982.
[Sch99] Alexander Schrijver. Theory of Linear and Integer Programming. Wiley-Interscience series in discrete mathematics and optimization. Wiley, 1999.
[SV16a] Damian Straszak and Nisheeth K. Vishnoi. IRLS and slime mold: Equivalence and convergence. CoRR, abs/1601.02712, 2016.
[SV16b] Damian Straszak and Nisheeth K. Vishnoi. Natural algorithms for flow problems. In SODA, pages 1868–1883, 2016.
[SV16c] Damian Straszak and Nisheeth K. Vishnoi. On a natural dynamics for linear programming. In ITCS, pages 291–291, New York, NY, USA, 2016. ACM.
[TKN07] A. Tero, R. Kobayashi, and T. Nakagaki. A mathematical model for adaptive transport network in path finding by true slime mold. Journal of Theoretical Biology, pages 553–564, 2007.
The Beta Generalized Marshall-Olkin-G Family of Distributions
Laba Handique and Subrata Chakraborty*
Department of Statistics, Dibrugarh University
Dibrugarh-786004, India
*Corresponding Author. Email: [email protected]
(21 August 2016)

Abstract
In this paper we propose a new family of distributions obtained by taking the generalized Marshall-Olkin distribution as the baseline distribution in the beta-G family of constructions. The new family includes the beta-G (Eugene et al., 2002; Jones, 2004) and GMOE (Jayakumar and Mathew, 2008) families as particular cases. The probability density function (pdf) and the cumulative distribution function (cdf) are expressed as mixtures of the Marshall-Olkin (Marshall and Olkin, 1997) distribution. Series expansions of the pdf of the order statistics are also obtained. Moments, the moment generating function, Rényi entropy, the quantile power series, random sample generation and asymptotes are also investigated. Parameter estimation by the method of maximum likelihood and the method of moments is also presented. Finally, the proposed model is compared to the generalized Marshall-Olkin Kumaraswamy extended family (Handique and Chakraborty, 2015) through three data fitting examples with real life data sets.

Key words: Beta generated family, Generalized Marshall-Olkin family, Exponentiated family, AIC, BIC and Power weighted moments.
1. Introduction
Here we briefly introduce the Beta-G (Eugene et al. 2002 and Jones, 2004) and Generalized
Marshall-Olkin family (Jayakumar and Mathew, 2008) of distributions.
1.1 Some formulas and notations
First we list some formulas to be used in the subsequent sections of this article. If T is a continuous random variable with pdf f(t) and cdf F(t) = P[T ≤ t], then its
Survival function (sf): F̄(t) = P[T > t] = 1 − F(t),
Hazard rate function (hrf): h(t) = f(t)/F̄(t),
Reverse hazard rate function (rhrf): r(t) = f(t)/F(t),
Cumulative hazard rate function (chrf): H(t) = −log[F̄(t)],
(p, q, r)th Power Weighted Moment (PWM): τ_{p,q,r} = ∫ t^p [F(t)]^q [1 − F(t)]^r f(t) dt,
Rényi entropy: I_R(δ) = (1 − δ)^(−1) log ∫ f(t)^δ dt.
1.2 Beta-G family of distributions
The cdf of the beta-G (Eugene et al., 2002; Jones, 2004) family of distributions is

F^BG(t) = [1/B(m, n)] ∫_0^{F(t)} v^(m−1) (1 − v)^(n−1) dv = B_{F(t)}(m, n)/B(m, n) = I_{F(t)}(m, n),   (1)

where I_t(m, n) = B(m, n)^(−1) ∫_0^t x^(m−1) (1 − x)^(n−1) dx denotes the incomplete beta function ratio. The pdf corresponding to (1) is

f^BG(t) = [1/B(m, n)] f(t) F(t)^(m−1) [1 − F(t)]^(n−1) = [1/B(m, n)] f(t) F(t)^(m−1) F̄(t)^(n−1),   (2)

where f(t) = dF(t)/dt.
B (m, n) B F (t ) (m, n)
sf:
F BG (t ) 1 I F ( t ) (m, n )
B ( m, n )
hrf:
h BG (t ) f
BG
(t ) F BG (t )
rhrf:
r BG (t ) f BG (t ) F BG (t )
(2)
f (t ) F (t ) m 1 F (t ) n 1
B ( m, n ) B F ( t ) ( m, n )
f (t ) F (t ) m 1 F (t ) n 1
B F ( t ) ( m, n )
B ( m, n ) B F ( t ) ( m, n )
H (t ) log
B ( m, n )
Some of the well known beta generated families are the Beta-generated (beta-G) family (Eugene
chrf:
et al., 2002; Jones 2004), beta extended G family (Cordeiro et al., 2012), Kumaraswamy beta
generalized family (Pescim et al., 2012), beta generalized weibull distribution (Singla et al., 2012),
beta generalized Rayleigh distribution (Cordeiro et al., 2013),
beta extended half normal
distribution (Cordeiro et al., 2014), beta log-logistic distribution (Lemonte, 2014), beta generalized
inverse Weibull distribution (Baharith et al., 2014), beta Marshall-Olkin family of distribution
(Alizadeh et al., 2015) and beta generated Kumaraswamy-G family of distribution (Handique and
Chakraborty 2016a) among others.
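The beta-G construction in (1)–(2) only needs the regularized incomplete beta function. The following is a standard-library Python sketch (not from the paper; all names are ours), using Simpson's rule for I_x(m, n), which is adequate for m, n ≥ 1:

```python
import math

def beta_fn(m, n):
    # complete beta function via gamma functions
    return math.gamma(m) * math.gamma(n) / math.gamma(m + n)

def reg_inc_beta(x, m, n, steps=2000):
    """Regularized incomplete beta I_x(m, n) by composite Simpson's rule."""
    if x <= 0.0:
        return 0.0
    f = lambda v: v ** (m - 1) * (1.0 - v) ** (n - 1)
    h = x / steps
    s = f(0.0) + f(x)
    s += 4 * sum(f((2 * i - 1) * h) for i in range(1, steps // 2 + 1))
    s += 2 * sum(f(2 * i * h) for i in range(1, steps // 2))
    return (h / 3.0) * s / beta_fn(m, n)

def beta_g_cdf(F_t, m, n):
    """Beta-G cdf (1): I_{F(t)}(m, n), for a baseline cdf value F(t)."""
    return reg_inc_beta(F_t, m, n)

def beta_g_pdf(f_t, F_t, m, n):
    """Beta-G pdf (2): f(t) F(t)^(m-1) (1 - F(t))^(n-1) / B(m, n)."""
    return f_t * F_t ** (m - 1) * (1.0 - F_t) ** (n - 1) / beta_fn(m, n)
```

For m = n = 1 the construction returns the baseline itself (I_x(1, 1) = x), a convenient sanity check.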
1.3 Generalized Marshall-Olkin Extended (GMOE) family of distributions
Jayakumar and Mathew (2008) proposed a generalization of the Marshall and Olkin (1997) family of distributions by using the Lehmann second alternative (Lehmann, 1953), obtaining the sf F̄^GMO(t) of the GMOE family by exponentiating the sf of the MOE family as

F̄^GMO(t) = [α Ḡ(t)/(1 − ᾱ Ḡ(t))]^γ,   t ∈ R; α > 0; γ > 0,   (3)

where ᾱ = 1 − α and γ > 0 is an additional shape parameter. When γ = 1, F̄^GMO(t) = F̄^MO(t), and for α = γ = 1, F̄^GMO(t) = Ḡ(t). The cdf and pdf of the GMOE distribution are respectively

F^GMO(t) = 1 − [α Ḡ(t)/(1 − ᾱ Ḡ(t))]^γ   (4)

and

f^GMO(t) = γ α^γ g(t) Ḡ(t)^(γ−1)/[1 − ᾱ Ḡ(t)]^(γ+1).   (5)
Reliability measures like the hrf, rhrf and chrf associated with (3) are

h^GMO(t) = f^GMO(t)/F̄^GMO(t) = γ g(t)/[Ḡ(t)(1 − ᾱ Ḡ(t))] = γ h(t)/[1 − ᾱ Ḡ(t)],

r^GMO(t) = f^GMO(t)/F^GMO(t) = γ α^γ g(t) Ḡ(t)^(γ−1)/{[1 − ᾱ Ḡ(t)][(1 − ᾱ Ḡ(t))^γ − α^γ Ḡ(t)^γ]},

H^GMO(t) = −log F̄^GMO(t) = −γ log[α Ḡ(t)/(1 − ᾱ Ḡ(t))],

where g(t), G(t), Ḡ(t) and h(t) are respectively the pdf, cdf, sf and hrf of the baseline distribution. We denote the family of distributions with sf (3) as GMOE(α, γ, a, b), which for γ = 1 reduces to MOE(α, a, b).
Some of the notable distributions derived from this construction are the Marshall-Olkin extended exponential distribution (Marshall and Olkin, 1997), Marshall-Olkin extended uniform distribution (Krishna, 2011; Jose and Krishna, 2011), Marshall-Olkin extended power log normal distribution (Gui, 2013a), Marshall-Olkin extended log logistic distribution (Gui, 2013b), Marshall-Olkin extended Esscher transformed Laplace distribution (George and George, 2013), Marshall-Olkin Kumaraswamy-G family of distributions (Handique and Chakraborty, 2015a), generalized Marshall-Olkin Kumaraswamy-G family of distributions (Handique and Chakraborty, 2015b) and Kumaraswamy generalized Marshall-Olkin family of distributions (Handique and Chakraborty, 2016b).
In this article we propose a family of beta generated distributions by considering the generalized Marshall-Olkin family (Jayakumar and Mathew, 2008) as the baseline distribution in the beta-G family (Eugene et al., 2002; Jones, 2004). This new family, referred to as the beta generalized Marshall-Olkin family of distributions, is investigated for some of its general properties. The rest of this article is organized as follows. In section 2 the new family is introduced along with its physical basis. Important special cases of the family, along with their shapes and main reliability characteristics, are presented in the next section. In section 4 we discuss some general results for the proposed family. Different methods of estimation of parameters, along with three comparative data modelling applications, are presented in section 5. The article ends with a conclusion in section 6, followed by an appendix deriving asymptotic confidence bounds.
2. New Generalization: Beta Generalized Marshall-Olkin-G (BGMO-G) family of distributions
Here we propose a new beta extended family by taking the cdf and pdf of the GMO (Jayakumar and Mathew, 2008) distribution in (4) and (5) as the F(t) and f(t) in the beta formulation (1) and (2), and call it the BGMO-G distribution. The pdf of BGMO-G is given by

f^BGMOG(t) = [1/B(m, n)] · γ α^γ g(t) Ḡ(t)^(γ−1)/[1 − ᾱ Ḡ(t)]^(γ+1) · [1 − {α Ḡ(t)/(1 − ᾱ Ḡ(t))}^γ]^(m−1) · [α Ḡ(t)/(1 − ᾱ Ḡ(t))]^(γ(n−1)),   (6)

for t ∈ R, α > 0, γ > 0, m > 0, n > 0. Similarly, substituting from equation (4) in (1), we get the cdf of BGMO-G as

F^BGMOG(t) = I_{1 − [α Ḡ(t)/(1 − ᾱ Ḡ(t))]^γ}(m, n).   (7)
The sf, hrf, rhrf and chrf of the BGMO-G distribution are respectively given by

sf: F̄^BGMOG(t) = 1 − I_{1 − [α Ḡ(t)/(1 − ᾱ Ḡ(t))]^γ}(m, n),   (8)

hrf: h^BGMOG(t) = [1/B(m, n)] · γ α^γ g(t) Ḡ(t)^(γ−1)/[1 − ᾱ Ḡ(t)]^(γ+1) · [1 − {α Ḡ(t)/(1 − ᾱ Ḡ(t))}^γ]^(m−1) [α Ḡ(t)/(1 − ᾱ Ḡ(t))]^(γ(n−1)) / [1 − I_{1 − [α Ḡ(t)/(1 − ᾱ Ḡ(t))]^γ}(m, n)],

rhrf: r^BGMOG(t) = [1/B(m, n)] · γ α^γ g(t) Ḡ(t)^(γ−1)/[1 − ᾱ Ḡ(t)]^(γ+1) · [1 − {α Ḡ(t)/(1 − ᾱ Ḡ(t))}^γ]^(m−1) [α Ḡ(t)/(1 − ᾱ Ḡ(t))]^(γ(n−1)) / I_{1 − [α Ḡ(t)/(1 − ᾱ Ḡ(t))]^γ}(m, n),

chrf: H^BGMOG(t) = −log[1 − I_{1 − [α Ḡ(t)/(1 − ᾱ Ḡ(t))]^γ}(m, n)].   (9)

Remark: The BGMO-G(m, n, α, γ) reduces to (i) BMO-G(m, n, α) (Alizadeh et al., 2015) for γ = 1; (ii) GMO-G(α, γ) (Jayakumar and Mathew, 2008) if m = n = 1; (iii) MO-G(α) (Marshall and Olkin, 1997) when m = n = γ = 1; and (iv) beta-G(m, n) (Eugene et al., 2002; Jones, 2004) for α = γ = 1.
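Composing (4) with the incomplete beta ratio gives a direct numerical evaluation of the BGMO-G cdf (7) for any baseline sf value Ḡ(t). The following self-contained sketch is ours (not the authors' code); the Simpson-based I_x(m, n) assumes m, n ≥ 1. With α = γ = m = n = 1 the cdf collapses to the baseline cdf 1 − Ḡ(t), which provides a convenient sanity check.

```python
import math

def reg_inc_beta(x, m, n, steps=2000):
    # numerical regularized incomplete beta I_x(m, n) (Simpson's rule)
    if x <= 0.0:
        return 0.0
    B = math.gamma(m) * math.gamma(n) / math.gamma(m + n)
    f = lambda v: v ** (m - 1) * (1.0 - v) ** (n - 1)
    h = x / steps
    s = f(0.0) + f(x)
    s += 4 * sum(f((2 * i - 1) * h) for i in range(1, steps // 2 + 1))
    s += 2 * sum(f(2 * i * h) for i in range(1, steps // 2))
    return (h / 3.0) * s / B

def gmo_cdf(Gbar, alpha, gamma):
    """GMO cdf (4): 1 - [alpha*Gbar/(1 - (1-alpha)*Gbar)]^gamma, Gbar = baseline sf."""
    return 1.0 - (alpha * Gbar / (1.0 - (1.0 - alpha) * Gbar)) ** gamma

def bgmo_g_cdf(Gbar, alpha, gamma, m, n):
    """BGMO-G cdf (7): regularized incomplete beta evaluated at the GMO cdf."""
    return reg_inc_beta(gmo_cdf(Gbar, alpha, gamma), m, n)
```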
2.1 Genesis of the distribution
If m and n are both integers, then the BGMO-G distribution arises as the distribution of an order statistic of a random sample from the GMO distribution.
Proof: Let T1, T2, ..., T_{m+n−1} be a sequence of i.i.d. random variables with cdf F^GMO(t) = 1 − [α Ḡ(t)/(1 − ᾱ Ḡ(t))]^γ. Then the pdf of the mth order statistic T(m) is given by

f_{T(m)}(t) = [(m + n − 1)!/((m − 1)!((m + n − 1) − m)!)] f^GMO(t) [1 − {α Ḡ(t)/(1 − ᾱ Ḡ(t))}^γ]^(m−1) [{α Ḡ(t)/(1 − ᾱ Ḡ(t))}^γ]^((m+n−1)−m)
= [Γ(m + n)/(Γ(m)Γ(n))] · γ α^γ g(t) Ḡ(t)^(γ−1)/[1 − ᾱ Ḡ(t)]^(γ+1) · [1 − {α Ḡ(t)/(1 − ᾱ Ḡ(t))}^γ]^(m−1) [{α Ḡ(t)/(1 − ᾱ Ḡ(t))}^γ]^(n−1)
= [1/B(m, n)] · γ α^γ g(t) Ḡ(t)^(γ−1)/[1 − ᾱ Ḡ(t)]^(γ+1) · [1 − {α Ḡ(t)/(1 − ᾱ Ḡ(t))}^γ]^(m−1) [{α Ḡ(t)/(1 − ᾱ Ḡ(t))}^γ]^(n−1),

which is (6).
2.2 Shape of the density function
Here we plot the pdf of the BGMO-G for some choices of the distribution G and of the parameter values, to study the variety of shapes assumed by the family.

[Plots omitted: density curves for various choices of (m, n, α, γ) and baseline parameters.]

Fig 1: Density plots of the (a) BGMO-E, (b) BGMO-W, (c) BGMO-L and (d) BGMO-Fr distributions.
[Plots omitted: hazard rate curves for various choices of (m, n, α, γ) and baseline parameters.]

Fig 2: Hazard rate plots of the (a) BGMO-E, (b) BGMO-W, (c) BGMO-L and (d) BGMO-Fr distributions.
From the plots in Figures 1 and 2 it can be seen that the family is very flexible and can offer many different shapes of the density and hazard rate functions, including the bathtub shaped hazard.
3. Some special BGMO-G distributions
Some special cases of the BGMO-G family of distributions are presented in this section.
3.1 The BGMO exponential (BGMO-E) distribution
Let the baseline distribution be exponential with parameter λ > 0, g(t; λ) = λ e^(−λt) and G(t; λ) = 1 − e^(−λt), t > 0. Writing u(t) = α e^(−λt)/(1 − ᾱ e^(−λt)) for the MO-exponential sf, the pdf and cdf of the BGMO-E model are, from (6) and (7),

f^BGMOE(t) = [1/B(m, n)] · γ α^γ λ e^(−λγt)/[1 − ᾱ e^(−λt)]^(γ+1) · [1 − u(t)^γ]^(m−1) u(t)^(γ(n−1))

and

F^BGMOE(t) = I_{1 − u(t)^γ}(m, n).

The remaining reliability functions follow on substituting u(t) into (8) and (9):

sf: F̄^BGMOE(t) = 1 − I_{1 − u(t)^γ}(m, n),
hrf: h^BGMOE(t) = f^BGMOE(t)/[1 − I_{1 − u(t)^γ}(m, n)],
rhrf: r^BGMOE(t) = f^BGMOE(t)/I_{1 − u(t)^γ}(m, n),
chrf: H^BGMOE(t) = −log[1 − I_{1 − u(t)^γ}(m, n)].
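Since the BGMO-E pdf is a proper density by construction, numerically integrating it should return 1. The sketch below is ours (trapezoidal rule; the parameter values in the check are illustrative, not from the paper):

```python
import math

def bgmo_e_pdf(t, alpha, gamma, m, n, lam):
    """BGMO-E pdf: (6) with exponential baseline sf Gbar(t) = exp(-lam*t)."""
    B = math.gamma(m) * math.gamma(n) / math.gamma(m + n)
    Gbar = math.exp(-lam * t)
    den = 1.0 - (1.0 - alpha) * Gbar
    u = alpha * Gbar / den                       # MO-exponential sf
    return (gamma * alpha ** gamma * lam * math.exp(-lam * gamma * t)
            / den ** (gamma + 1)
            * (1.0 - u ** gamma) ** (m - 1) * u ** (gamma * (n - 1)) / B)

def integrates_to_one(alpha, gamma, m, n, lam, T=40.0, steps=40000):
    # trapezoidal quadrature of the pdf over [0, T]; the tail beyond T is negligible
    h = T / steps
    total = 0.5 * (bgmo_e_pdf(1e-12, alpha, gamma, m, n, lam)
                   + bgmo_e_pdf(T, alpha, gamma, m, n, lam))
    total += sum(bgmo_e_pdf(i * h, alpha, gamma, m, n, lam)
                 for i in range(1, steps))
    return total * h
```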
3.2 The BGMO Lomax (BGMO-L) distribution
Considering the Lomax distribution (Ghitany et al., 2007) with pdf g(t; β, θ) = (β/θ)[1 + (t/θ)]^(−(β+1)), t > 0, and cdf G(t; β, θ) = 1 − [1 + (t/θ)]^(−β), β > 0 and θ > 0, and writing u(t) = α[1 + (t/θ)]^(−β)/(1 − ᾱ[1 + (t/θ)]^(−β)), the pdf and cdf of the BGMO-L distribution are given by

f^BGMOL(t) = [1/B(m, n)] · γ α^γ (β/θ)[1 + (t/θ)]^(−(βγ+1))/[1 − ᾱ[1 + (t/θ)]^(−β)]^(γ+1) · [1 − u(t)^γ]^(m−1) u(t)^(γ(n−1))

and

F^BGMOL(t) = I_{1 − u(t)^γ}(m, n);

the sf, hrf, rhrf and chrf follow from (8) and (9) with this u(t).
3.3 The BGMO Weibull (BGMO-W) distribution
Considering the Weibull distribution (Ghitany et al., 2005; Zhang and Xie, 2007) with parameters β > 0 and λ > 0, having pdf g(t) = βλ t^(β−1) e^(−λt^β) and cdf G(t) = 1 − e^(−λt^β), and writing u(t) = α e^(−λt^β)/(1 − ᾱ e^(−λt^β)), we get the pdf and cdf of the BGMO-W distribution as

f^BGMOW(t) = [1/B(m, n)] · γ α^γ βλ t^(β−1) e^(−λγt^β)/[1 − ᾱ e^(−λt^β)]^(γ+1) · [1 − u(t)^γ]^(m−1) u(t)^(γ(n−1))

and

F^BGMOW(t) = I_{1 − u(t)^γ}(m, n);

the sf, hrf, rhrf and chrf follow from (8) and (9) with this u(t).
3.4 The BGMO Frechet (BGMO-Fr) distribution
Suppose the baseline distribution is the Frechet distribution (Krishna et al., 2013) with pdf g(t) = βλ^β t^(−(β+1)) e^(−(λ/t)^β) and cdf G(t) = e^(−(λ/t)^β), t > 0, so that Ḡ(t) = 1 − e^(−(λ/t)^β). Writing u(t) = α[1 − e^(−(λ/t)^β)]/(1 − ᾱ[1 − e^(−(λ/t)^β)]), the pdf and cdf of the BGMO-Fr distribution become

f^BGMOFr(t) = [1/B(m, n)] · γ α^γ βλ^β t^(−(β+1)) e^(−(λ/t)^β) [1 − e^(−(λ/t)^β)]^(γ−1)/[1 − ᾱ(1 − e^(−(λ/t)^β))]^(γ+1) · [1 − u(t)^γ]^(m−1) u(t)^(γ(n−1))

and

F^BGMOFr(t) = I_{1 − u(t)^γ}(m, n);

the sf, hrf, rhrf and chrf follow from (8) and (9) with this u(t).
3.5 The BGMO Gompertz (BGMO-Go) distribution
Next, taking the Gompertz distribution (Gieser et al., 1998) with pdf g(t) = θ e^(λt) e^(−(θ/λ)(e^(λt) − 1)) and cdf G(t) = 1 − e^(−(θ/λ)(e^(λt) − 1)), θ > 0, λ > 0, t > 0, and writing u(t) = α e^(−(θ/λ)(e^(λt) − 1))/(1 − ᾱ e^(−(θ/λ)(e^(λt) − 1))), we get the pdf and cdf of the BGMO-Go distribution as

f^BGMOGo(t) = [1/B(m, n)] · γ α^γ θ e^(λt) e^(−γ(θ/λ)(e^(λt) − 1))/[1 − ᾱ e^(−(θ/λ)(e^(λt) − 1))]^(γ+1) · [1 − u(t)^γ]^(m−1) u(t)^(γ(n−1))

and

F^BGMOGo(t) = I_{1 − u(t)^γ}(m, n);

the sf, hrf, rhrf and chrf follow from (8) and (9) with this u(t).
3.6 The BGMO Extended Weibull (BGMO-EW) distribution
The pdf and cdf of the extended Weibull (EW) distributions of Gurvich et al. (1997) are given by g(t; λ, ξ) = λ z(t; ξ) exp[−λZ(t; ξ)] and G(t; λ, ξ) = 1 − exp[−λZ(t; ξ)], t ∈ D ⊆ R+, λ > 0, where Z(t; ξ) is a non-negative monotonically increasing function which depends on the parameter vector ξ, and z(t; ξ) is the derivative of Z(t; ξ). Taking EW as the baseline distribution and writing u(t) = α exp[−λZ(t; ξ)]/(1 − ᾱ exp[−λZ(t; ξ)]), we derive the pdf and cdf of the BGMO-EW as

f^BGMOEW(t) = [1/B(m, n)] · γ α^γ λ z(t; ξ) exp[−γλZ(t; ξ)]/[1 − ᾱ exp(−λZ(t; ξ))]^(γ+1) · [1 − u(t)^γ]^(m−1) u(t)^(γ(n−1))

and

F^BGMOEW(t) = I_{1 − u(t)^γ}(m, n).

Important models arise as particular cases with different choices of Z(t; ξ):
(i) Z(t; ξ) = t: exponential distribution.
(ii) Z(t; ξ) = t^2: Rayleigh (Burr type-X) distribution.
(iii) Z(t; ξ) = log(t/k): Pareto distribution.
(iv) Z(t; ξ) = β^(−1)[exp(βt) − 1]: Gompertz distribution.

The sf, hrf, rhrf and chrf follow from (8) and (9) with this u(t).
3.7 The BGMO Extended Modified Weibull (BGMO-EMW) distribution
The modified Weibull (MW) distribution (Sarhan and Zaindin, 2013) has cdf G(t; θ, λ, β) = 1 − exp[−(θt + λt^β)], t > 0, θ, λ ≥ 0, β > 0, and pdf g(t; θ, λ, β) = (θ + λβ t^(β−1)) exp[−(θt + λt^β)]. Writing u(t) = α exp[−(θt + λt^β)]/(1 − ᾱ exp[−(θt + λt^β)]), the corresponding pdf and cdf of the BGMO-EMW are

f^BGMOEMW(t) = [1/B(m, n)] · γ α^γ (θ + λβ t^(β−1)) exp[−γ(θt + λt^β)]/[1 − ᾱ exp(−(θt + λt^β))]^(γ+1) · [1 − u(t)^γ]^(m−1) u(t)^(γ(n−1))

and

F^BGMOEMW(t) = I_{1 − u(t)^γ}(m, n);

the sf, hrf, rhrf and chrf follow from (8) and (9) with this u(t).
3.8 The BGMO Extended Exponentiated Pareto (BGMO-EEP) distribution
The pdf and cdf of the exponentiated Pareto distribution of Nadarajah (2005) are given respectively by g(t) = θk λ^k t^(−(k+1)) [1 − (λ/t)^k]^(θ−1) and G(t) = [1 − (λ/t)^k]^θ, t > λ and θ, k, λ > 0. Writing u(t) = α[1 − {1 − (λ/t)^k}^θ]/(1 − ᾱ[1 − {1 − (λ/t)^k}^θ]), the pdf and cdf of the BGMO-EEP distribution are given by

f^BGMOEEP(t) = [1/B(m, n)] · γ α^γ θk λ^k t^(−(k+1)) [1 − (λ/t)^k]^(θ−1) [1 − {1 − (λ/t)^k}^θ]^(γ−1)/[1 − ᾱ(1 − {1 − (λ/t)^k}^θ)]^(γ+1) · [1 − u(t)^γ]^(m−1) u(t)^(γ(n−1))

and

F^BGMOEEP(t) = I_{1 − u(t)^γ}(m, n);

the sf, hrf, rhrf and chrf follow from (8) and (9) with this u(t).
4. General results for the Beta Generalized Marshall-Olkin (BGMO-G) family of distributions
In this section we derive some general results for the proposed BGMO-G family.

4.1 Expansions
Write f^MO(t, α) = α g(t)/[1 − ᾱ Ḡ(t)]^2 and F̄^MO(t; α) = α Ḡ(t)/[1 − ᾱ Ḡ(t)] for the pdf and sf of the MO(α) distribution. Then (6) can be rewritten as

f^BGMOG(t; α, γ, m, n) = [γ/B(m, n)] f^MO(t, α) [F̄^MO(t; α)]^(γn−1) [1 − {F̄^MO(t; α)}^γ]^(m−1).

Using the (generalized) binomial expansion of the last factor, we obtain

f^BGMOG(t; α, γ, m, n) = Σ_{j≥0} a_j f^MO(t, α) [F̄^MO(t; α)]^(γ(j+n)−1)   (10)

= Σ_{j≥0} b_j (d/dt){1 − [F̄^MO(t; α)]^(γ(j+n))}   (11)

= Σ_{j≥0} b_j f^GMO(t; α, γ(j + n)),   (12)

where b_j = (−1)^j C(m−1, j)/[B(m, n)(j + n)], a_j = b_j γ(j + n), and C(a, b) denotes the binomial coefficient; for integer m the sum over j terminates at m − 1. Thus the BGMO-G pdf is a linear combination of GMO pdfs with parameters (α, γ(j + n)).

Alternatively, we can expand the pdf in powers of the MO cdf, using [F̄^MO(t; α)]^(γ(j+n)−1) = [1 − F^MO(t; α)]^(γ(j+n)−1):

f^BGMOG(t) = Σ_{l≥0} c_l f^MO(t, α) [F^MO(t; α)]^l   (13)

= Σ_{l≥0} e_l (d/dt)[F^MO(t; α)]^(l+1),

where c_l = Σ_{j≥0} a_j (−1)^l C(γ(j + n) − 1, l) and e_l = c_l/(l + 1).
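For integer m the expansion (10) is a finite sum, so it can be checked pointwise against the closed form of (6). A sketch with an MO-exponential baseline (function names and parameter values are ours):

```python
import math

def mo_parts(t, alpha, lam):
    # MO-exponential pdf f^MO(t, alpha) and sf Fbar^MO(t; alpha)
    Gbar = math.exp(-lam * t)
    den = 1.0 - (1.0 - alpha) * Gbar
    return alpha * lam * Gbar / den ** 2, alpha * Gbar / den

def bgmo_pdf_direct(t, alpha, gamma, m, n, lam):
    # (gamma / B(m,n)) * f_MO * Fbar_MO^(gamma*n - 1) * (1 - Fbar_MO^gamma)^(m-1)
    B = math.gamma(m) * math.gamma(n) / math.gamma(m + n)
    f, Fbar = mo_parts(t, alpha, lam)
    return gamma / B * f * Fbar ** (gamma * n - 1) * (1.0 - Fbar ** gamma) ** (m - 1)

def bgmo_pdf_series(t, alpha, gamma, m, n, lam):
    # expansion (10): for integer m the sum over j runs only over 0..m-1
    B = math.gamma(m) * math.gamma(n) / math.gamma(m + n)
    f, Fbar = mo_parts(t, alpha, lam)
    total = 0.0
    for j in range(m):
        a_j = gamma * (-1) ** j * math.comb(m - 1, j) / B
        total += a_j * f * Fbar ** (gamma * (j + n) - 1)
    return total
```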
We can expand the cdf using the series (see "Incomplete Beta Function", from MathWorld, A Wolfram Web Resource, http://mathworld.wolfram.com/IncompleteBetaFunction.html)

B(z; a, b) = B_z(a, b) = z^a Σ_{i≥0} [(1 − b)_i/(i!(a + i))] z^i = z^a Σ_{i≥0} [(−1)^i C(b−1, i)/(a + i)] z^i,   (14)

where (x)_i is the Pochhammer symbol. Applying (14) to (7) with z = 1 − [F̄^MO(t, α)]^γ gives

F^BGMOG(t; α, γ, m, n) = Σ_{i≥0} [(−1)^i C(n−1, i)/(B(m, n)(m + i))] [1 − {F̄^MO(t, α)}^γ]^(m+i),

and expanding the powers of 1 − {F̄^MO(t, α)}^γ and then of F̄^MO(t, α) = 1 − F^MO(t, α), and exchanging the order of summation,

F^BGMOG(t; α, γ, m, n) = Σ_{k≥0} s_k [F^MO(t, α)]^k,   (15)

where s_k = Σ_{i≥0} Σ_{j≥0} [(−1)^(i+j+k)/(B(m, n)(m + i))] C(n−1, i) C(m+i, j) C(γj, k).
Similarly, when m and n are integers, an expansion for the cdf of BGMO-G can be derived as

F^BGMOG(t; α, γ, m, n) = I_{1 − [F̄^MO(t, α)]^γ}(m, n)
= Σ_{p=m}^{m+n−1} C(m+n−1, p) [1 − {F̄^MO(t, α)}^γ]^p [{F̄^MO(t, α)}^γ]^(m+n−1−p)
= Σ_{p=m}^{m+n−1} Σ_{q=0}^{p} (−1)^q C(m+n−1, p) C(p, q) [F̄^MO(t, α)]^(γ(m+n−1−p+q))
= Σ_{p=m}^{m+n−1} Σ_{q=0}^{p} Σ_{r≥0} υ_{p,q,r} [F^MO(t, α)]^r,   (16)

where υ_{p,q,r} = (−1)^(q+r) C(m+n−1, p) C(p, q) C(γ(m+n−1−p+q), r).
4.2 Order statistics
Suppose T1, T2, ..., Tn is a random sample from any BGMO-G distribution, and let T_{r:n} denote the r-th order statistic. The pdf of T_{r:n} can be expressed as

f_{r:n}(t) = [n!/((r − 1)!(n − r)!)] f^BGMOG(t) [F^BGMOG(t)]^(r−1) [1 − F^BGMOG(t)]^(n−r)
= [n!/((r − 1)!(n − r)!)] Σ_{j=0}^{n−r} (−1)^j C(n−r, j) f^BGMOG(t) [F^BGMOG(t)]^(j+r−1).

Now, using the expansions (13) and (15) of the BGMO-G pdf and cdf, the pdf of the r-th order statistic is

f_{r:n}(t) = [n!/((r − 1)!(n − r)!)] Σ_{j=0}^{n−r} (−1)^j C(n−r, j) f^MO(t, α) [Σ_{l≥0} c_l {F^MO(t; α)}^l] [Σ_{k≥0} s_k {F^MO(t, α)}^k]^(j+r−1),

where c_l and s_k are defined above. By the power-series identity (Nadarajah et al., 2015),

[Σ_{k≥0} s_k {F^MO(t, α)}^k]^(j+r−1) = Σ_{k≥0} d_{j+r−1,k} [F^MO(t, α)]^k,

where d_{j+r−1,0} = s_0^(j+r−1) and d_{j+r−1,k} = (1/(k s_0)) Σ_{c=1}^{k} [c(j + r) − k] s_c d_{j+r−1,k−c}. Therefore the density function of the r-th order statistic of the BGMO-G distribution can be expressed as

f_{r:n}(t) = f^MO(t, α) Σ_{l≥0} Σ_{k≥0} w_{l,k} [F^MO(t, α)]^(k+l),   (17)

where w_{l,k} = [n!/((r − 1)!(n − r)!)] Σ_{j=0}^{n−r} (−1)^j C(n−r, j) c_l d_{j+r−1,k}, with c_l and d_{j+r−1,k} defined above.
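The power-series recurrence quoted from Nadarajah et al. (2015) is easy to implement and to check against an explicit polynomial power. A sketch (names ours; here N plays the role of j + r − 1, so the factor c(N + 1) − k matches c(j + r) − k above):

```python
def power_series_power(s, N, K):
    """Coefficients d_{N,k}, k = 0..K, of (sum_k s_k x^k)^N via the recurrence
    d_{N,0} = s_0^N,  d_{N,k} = (1/(k*s_0)) * sum_{c=1}^{k} [c*(N+1) - k] * s_c * d_{N,k-c}.
    Missing coefficients of the input series are treated as zero."""
    d = [s[0] ** N]
    for k in range(1, K + 1):
        acc = 0.0
        for c in range(1, k + 1):
            sc = s[c] if c < len(s) else 0.0
            acc += (c * (N + 1) - k) * sc * d[k - c]
        d.append(acc / (k * s[0]))
    return d
```

For instance, (1 + 2x)^2 = 1 + 4x + 4x^2 and (1 + x)^3 = 1 + 3x + 3x^2 + x^3 are reproduced exactly.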
4.3 Probability weighted moments
The probability weighted moments (PWMs), first proposed by Greenwood et al. (1979), are
expectations of certain functions of a random variable whose mean exists. The ( p, q, r )th PWM of T
is defined by
p , q , r t p F (t ) q [1 F (t )] r f (t ) dt
From equations (10) and (13) the s th moment of T can be written either as
E (T s )
t
s
f BGMOG (t ; , , m, n ) dt
m1
j
t
j 0
m 1
j
j 0
t
s
[ F MO (t ; )] ( j n)1 f MO (t , ) dt
s
[ G (t ) 1 G (t )]
( j n ) 1
[ g (t ) [1 G (t )] 2 ] dt
m 1
j s, 0, ( j n ) 1
j 0
( j n ) 1
or E (T s )
t
l
l 0
s
[ F MO (t; ) ] l f MO (t , ) dt
( j n ) 1
t
l
l 0
s
l
[G (t ) 1 G (t )] [ g (t ) [1 G (t )] 2 ] dt
( j n ) 1
l
s, l , 0
l 0
q
r
Where p, q , r t p [G (t ) 1 G (t )] [ G (t ) 1 G (t )] [ g (t ) [1 G (t )] 2 ] dt
is the PWM of MO ( ) distribution.
Therefore the moments of the BGMO-G$(t;\alpha,\theta,m,n)$ distribution can be expressed in terms of the PWMs of MO$(\alpha)$ (Marshall and Olkin, 1997). The PWM method can generally be used for estimating parameters and quantiles of generalized distributions. These moments have low variance and no severe biases, and they compare favourably with estimates obtained by maximum likelihood.
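As a numerical sanity check, the PWM integral above can be evaluated by simple quadrature. The sketch below is an illustration (not part of the original derivation): it computes $\tau_{p,q,r}$ for a plain exponential distribution with an assumed rate `lam`, for which $\tau_{1,0,0}$ reduces to the mean $1/\lambda$.

```python
# Sketch: the probability weighted moment tau_{p,q,r} = integral of
# t^p F(t)^q [1 - F(t)]^r f(t) dt, evaluated by the trapezoidal rule.
# For the exponential distribution tau_{1,0,0} is the mean 1/lam, which
# gives a closed-form check on the quadrature.
import math

def pwm(pdf, cdf, p, q, r, upper, steps=200000):
    # Trapezoidal rule on [0, upper]; upper must be large enough that
    # the integrand is negligible beyond it.
    h = upper / steps
    def integrand(t):
        return t ** p * cdf(t) ** q * (1.0 - cdf(t)) ** r * pdf(t)
    total = 0.5 * (integrand(0.0) + integrand(upper))
    for i in range(1, steps):
        total += integrand(i * h)
    return total * h

lam = 2.0  # assumed rate parameter for the illustration
pdf = lambda t: lam * math.exp(-lam * t)
cdf = lambda t: 1.0 - math.exp(-lam * t)
mean = pwm(pdf, cdf, 1, 0, 0, upper=30.0)  # should be close to 1/lam = 0.5
```

The same routine applies to the MO$(\alpha)$ integrand by substituting its pdf and cdf.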
Proceeding as above, we can derive the $s$th moment of the $r$th order statistic $T_{r:n}$, in a random sample of size $n$ from BGMO-G, using equation (17) as
$$E(T_{r:n}^{s}) = \sum_{j=0}^{n-r} \sum_{l=0}^{\theta(j+n)-1} \sum_{k=0}^{\infty} \eta_{l,k}\, \tau_{s,\,k+l,\,0},$$
where $j$, $\eta_l$ and $\eta_{l,k}$ are defined above.
4.5 Moment generating function
The moment generating function of the BGMO-G family can easily be expressed in terms of those of the exponentiated MO (Marshall and Olkin, 1997) distribution using the results of Section 4.1. For example, using equation (11) it can be seen that
$$M_T(s) = E\big[e^{sT}\big] = \int e^{st}\, f^{BGMO\text{-}G}(t;\alpha,\theta,m,n)\, dt = \sum_{j=0}^{m-1} w_j \int e^{st}\, \frac{d}{dt}\big[F^{MO}(t;\alpha)\big]^{\theta(j+n)}\, dt = \sum_{j=0}^{m-1} w_j\, M_{X_j}(s),$$
where $M_{X_j}(s)$ is the mgf of the exponentiated MO (Marshall and Olkin, 1997) distribution with cdf $[F^{MO}(t;\alpha)]^{\theta(j+n)}$.
4.6 Rényi Entropy
The entropy of a random variable is a measure of variation of uncertainty and has been used in various situations in science and engineering. The Rényi entropy is defined by
$$I_R(\gamma) = (1-\gamma)^{-1} \log \int f(t)^{\gamma}\, dt,$$
where $\gamma > 0$ and $\gamma \neq 1$. For further details, see Song (2001). Using the binomial expansion in (6) we can write
$$\big[f^{BGMO\text{-}G}(t;\alpha,\theta,m,n)\big]^{\gamma} = \frac{\theta^{\gamma}}{B(m,n)^{\gamma}}\, \big[f^{MO}(t,\alpha)\big]^{\gamma}\, \big[\bar{F}^{MO}(t,\alpha)\big]^{\gamma(\theta-1)}\, \Big[1-\big\{\bar{F}^{MO}(t,\alpha)\big\}^{\theta}\Big]^{\gamma(m-1)}\, \Big[\big\{\bar{F}^{MO}(t,\alpha)\big\}^{\theta}\Big]^{\gamma(n-1)}$$
$$= \frac{\theta^{\gamma}}{B(m,n)^{\gamma}}\, \big[f^{MO}(t,\alpha)\big]^{\gamma}\, \big[\bar{F}^{MO}(t,\alpha)\big]^{\gamma(\theta n-1)}\, \Big[1-\big\{\bar{F}^{MO}(t,\alpha)\big\}^{\theta}\Big]^{\gamma(m-1)}$$
$$= \frac{\theta^{\gamma}}{B(m,n)^{\gamma}}\, \big[f^{MO}(t,\alpha)\big]^{\gamma} \sum_{j=0}^{\infty} (-1)^j \binom{\gamma(m-1)}{j}\, \big[\bar{F}^{MO}(t;\alpha)\big]^{\theta j+\gamma(\theta n-1)}.$$
Thus the Rényi entropy of $T$ can be obtained as
$$I_R(\gamma) = (1-\gamma)^{-1} \log \Bigg[\sum_{j=0}^{\infty} Z_j \int \big[f^{MO}(t,\alpha)\big]^{\gamma}\, \big[\bar{F}^{MO}(t;\alpha)\big]^{\theta j+\gamma(\theta n-1)}\, dt\Bigg],$$
where
$$Z_j = \frac{\theta^{\gamma}}{B(m,n)^{\gamma}}\,(-1)^j \binom{\gamma(m-1)}{j}.$$
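In practice the integral in the Rényi entropy can be evaluated by quadrature. The sketch below is an illustration only: it uses a plain exponential density (assumed rate `lam`) as a stand-in for the MO-based integrand, since the exponential case has the closed form $I_R(\gamma) = -\log\lambda - \log\gamma/(1-\gamma)$ against which the numerical result can be checked.

```python
# Sketch: evaluating I_R(gamma) = (1 - gamma)^{-1} log( integral f(t)^gamma dt )
# by the trapezoidal rule, checked against the exponential closed form.
import math

def renyi_entropy(pdf, gamma, upper, steps=200000):
    # Trapezoidal quadrature of pdf(t)^gamma on [0, upper].
    h = upper / steps
    total = 0.5 * (pdf(0.0) ** gamma + pdf(upper) ** gamma)
    for i in range(1, steps):
        total += pdf(i * h) ** gamma
    return math.log(total * h) / (1.0 - gamma)

lam, gamma = 1.5, 2.0  # assumed illustration values
pdf = lambda t: lam * math.exp(-lam * t)
numeric = renyi_entropy(pdf, gamma, upper=30.0)
exact = -math.log(lam) - math.log(gamma) / (1.0 - gamma)
```

The same quadrature applies to the BGMO-G density once a baseline $G$ is chosen.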
4.7 Quantile power series and random sample generation
The quantile function of $T$, $t = Q(u) = F^{-1}(u)$, can be obtained by inverting (7). Let $z = Q_{m,n}(u)$ be the beta quantile function. Then
$$t = Q(u) = Q_G\!\left(\frac{\alpha\,\big[1-\{1-Q_{m,n}(u)\}^{1/\theta}\big]}{1-\bar{\alpha}\,\big[1-\{1-Q_{m,n}(u)\}^{1/\theta}\big]}\right).$$
It is possible to obtain a power-series expansion for $Q_{m,n}(u)$ (see "Power Series", from MathWorld--A Wolfram Web Resource, http://mathworld.wolfram.com/PowerSeries.html) as
$$z = Q_{m,n}(u) = \sum_{i=0}^{\infty} e_i\, u^{i/m},$$
where $e_i = [m\,B(m,n)]^{i/m}\, d_i$ and $d_0 = 0$, $d_1 = 1$, $d_2 = (n-1)/(m+1)$,
$$d_3 = \frac{(n-1)(m^2+3mn-m+5n-4)}{2(m+1)^2(m+2)},$$
$$d_4 = \frac{(n-1)\big[m^4+(6n-1)m^3+(n+2)(8n-5)m^2+(33n^2-30n+4)m+n(31n-47)+18\big]}{3(m+1)^3(m+2)(m+3)}, \ldots$$
The Bowley skewness (Kenney and Keeping, 1962) and Moors kurtosis (Moors, 1988) measures are robust, less sensitive to outliers, and exist even for distributions without moments. For the BGMO-G family these measures are given by
$$B = \frac{Q(3/4)+Q(1/4)-2\,Q(1/2)}{Q(3/4)-Q(1/4)} \quad \text{and} \quad M = \frac{Q(7/8)-Q(5/8)+Q(3/8)-Q(1/8)}{Q(6/8)-Q(2/8)}.$$
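Both measures can be computed directly from any quantile function $Q(u)$. The sketch below illustrates them with the exponential quantile $Q(u) = -\log(1-u)/\lambda$ as a hypothetical stand-in for the BGMO-G quantile.

```python
# Sketch: Bowley skewness B and Moors kurtosis M from a quantile function.
import math

def bowley_skewness(Q):
    # B = [Q(3/4) + Q(1/4) - 2 Q(1/2)] / [Q(3/4) - Q(1/4)]
    return (Q(3/4) + Q(1/4) - 2 * Q(1/2)) / (Q(3/4) - Q(1/4))

def moors_kurtosis(Q):
    # M = [Q(7/8) - Q(5/8) + Q(3/8) - Q(1/8)] / [Q(6/8) - Q(2/8)]
    return (Q(7/8) - Q(5/8) + Q(3/8) - Q(1/8)) / (Q(6/8) - Q(2/8))

lam = 1.0  # assumed rate for the exponential illustration
Q_exp = lambda u: -math.log(1.0 - u) / lam

B = bowley_skewness(Q_exp)  # positive, since the exponential is right-skewed
M = moors_kurtosis(Q_exp)
```

For the BGMO-G family one would replace `Q_exp` with the quantile function given above.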
For example, let $G$ be the exponential distribution with parameter $\lambda > 0$, having pdf and cdf $g(t;\lambda) = \lambda e^{-\lambda t}$, $t > 0$, and $G(t;\lambda) = 1-e^{-\lambda t}$, respectively. Its $p$th quantile is $-(1/\lambda)\log(1-p)$. Therefore the $p$th quantile $t_p$ of BGMO-E is given by
$$t_p = -\frac{1}{\lambda}\,\log\!\left[1 - \frac{\alpha\,\big[1-\{1-Q_{m,n}(p)\}^{1/\theta}\big]}{1-\bar{\alpha}\,\big[1-\{1-Q_{m,n}(p)\}^{1/\theta}\big]}\right].$$
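When the quantile has no convenient closed form, random samples can still be generated by numerically inverting the cdf. The sketch below (an illustration under assumed values) uses bisection and checks the inversion against the exponential cdf, whose exact quantile is known; for BGMO-G one would supply the cdf in (7) instead.

```python
# Sketch: inverse-transform sampling via numerical inversion of a cdf.
import math, random

def quantile_by_bisection(cdf, u, lo=0.0, hi=1.0, tol=1e-10):
    # Expand the bracket until cdf(hi) >= u, then bisect on [lo, hi].
    while cdf(hi) < u:
        lo, hi = hi, 2.0 * hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if cdf(mid) < u:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

lam = 2.0  # assumed exponential rate
cdf = lambda t: 1.0 - math.exp(-lam * t)

random.seed(0)
sample = [quantile_by_bisection(cdf, random.random()) for _ in range(5)]
```

Each uniform draw `u` is mapped through the numerical quantile, so `sample` follows the target distribution.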
4.8 Asymptotes
Here we investigate the asymptotic shapes of the proposed family, following the method of Alizadeh et al. (2015).
Proposition 1. The asymptotes of equations (6), (7) and (8) as $t \to 0$ are given by
$$f(t) \sim \frac{\theta\, g(t)}{\alpha\, B(m,n)}\left[1-\left\{\frac{\alpha\bar{G}(t)}{1-\bar{\alpha}\bar{G}(t)}\right\}^{\theta}\right]^{m-1} \quad \text{as } G(t)\to 0,$$
$$F(t) \sim \frac{1}{m\,B(m,n)}\left[1-\left\{\frac{\alpha\bar{G}(t)}{1-\bar{\alpha}\bar{G}(t)}\right\}^{\theta}\right]^{m} \quad \text{as } G(t)\to 0,$$
$$h(t) \sim \frac{\theta\, g(t)}{\alpha\, B(m,n)}\left[1-\left\{\frac{\alpha\bar{G}(t)}{1-\bar{\alpha}\bar{G}(t)}\right\}^{\theta}\right]^{m-1} \quad \text{as } G(t)\to 0.$$
Proposition 2. The asymptotes of equations (6), (7) and (8) as $t \to \infty$ are given by
$$f(t) \sim \frac{\theta\,\alpha^{\theta n}\, g(t)\,\bar{G}(t)^{\theta n-1}}{B(m,n)} \quad \text{as } t\to\infty,$$
$$1-F(t) \sim \frac{\big[\alpha\,\bar{G}(t)\big]^{\theta n}}{n\,B(m,n)} \quad \text{as } t\to\infty,$$
$$h(t) \sim \theta\, n\, g(t)\,\bar{G}(t)^{-1} \quad \text{as } t\to\infty.$$
5. Estimation
5.1 Maximum likelihood method
The model parameters of the BGMO-G distribution can be estimated by maximum likelihood. Let $t = (t_1, t_2, \ldots, t_r)^T$ be a random sample of size $r$ from BGMO-G with parameter vector $\boldsymbol{\theta} = (m, n, \theta, \alpha, \boldsymbol{\beta}^T)^T$, where $\boldsymbol{\beta} = (\beta_1, \beta_2, \ldots, \beta_q)^T$ corresponds to the parameter vector of the baseline distribution $G$. Then the log-likelihood function for $\boldsymbol{\theta}$ is given by
$$\ell(\boldsymbol{\theta}) = r\log\theta + r\log\alpha + \sum_{i=1}^{r}\log g(t_i,\boldsymbol{\beta}) + (\theta-1)\sum_{i=1}^{r}\log\bar{G}(t_i,\boldsymbol{\beta}) - r\log B(m,n)$$
$$-\,(\theta+1)\sum_{i=1}^{r}\log\big[1-\bar{\alpha}\bar{G}(t_i,\boldsymbol{\beta})\big] + (m-1)\sum_{i=1}^{r}\log\big[1-w_i^{\theta}\big] + (n-1)\sum_{i=1}^{r}\log\big[w_i^{\theta}\big], \qquad (18)$$
where $w_i = \alpha\bar{G}(t_i,\boldsymbol{\beta})/[1-\bar{\alpha}\bar{G}(t_i,\boldsymbol{\beta})]$ and $\bar{G} = 1-G$. This log-likelihood function cannot be solved analytically because of its complex form, but it can be maximized numerically with global optimization routines available in software such as R, SAS or Mathematica, or by solving the nonlinear likelihood equations obtained by differentiating (18).
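A minimal sketch of such numerical maximization is given below. To keep the example checkable, a one-parameter exponential log-likelihood stands in for the multi-parameter likelihood in (18), so the result can be compared with the closed-form MLE $\hat{\lambda} = 1/\bar{t}$; golden-section search stands in for the multivariate optimizers mentioned above.

```python
# Sketch: maximizing a log-likelihood numerically when no closed form exists.
import math

def loglik_exponential(lam, data):
    # l(lam) = n log(lam) - lam * sum(t_i) for an exponential sample.
    return len(data) * math.log(lam) - lam * sum(data)

def maximize_1d(f, lo, hi, iters=200):
    # Golden-section search for the maximizer of a unimodal function.
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    for _ in range(iters):
        c = b - phi * (b - a)
        d = a + phi * (b - a)
        if f(c) < f(d):
            a = c
        else:
            b = d
    return 0.5 * (a + b)

data = [1.6, 2.0, 2.6, 3.0, 3.5]  # first five turbocharger failure times (Example I)
lam_hat = maximize_1d(lambda lam: loglik_exponential(lam, data), 1e-6, 10.0)
# For the exponential model the MLE is 1/mean, which serves as a check.
```

The same pattern, with a multivariate optimizer, applies to the five-or-more-parameter BGMO-G likelihood.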
By taking the partial derivatives of the log-likelihood function with respect to $m$, $n$, $\theta$, $\alpha$ and $\boldsymbol{\beta}$, the components of the score vector $U_{\boldsymbol{\theta}} = (U_m, U_n, U_\theta, U_\alpha, U_{\boldsymbol{\beta}}^T)^T$ can be obtained as follows:
$$U_m = \frac{\partial \ell}{\partial m} = -r\,\psi(m) + r\,\psi(m+n) + \sum_{i=1}^{r}\log\big[1-w_i^{\theta}\big],$$
$$U_n = \frac{\partial \ell}{\partial n} = -r\,\psi(n) + r\,\psi(m+n) + \theta\sum_{i=1}^{r}\log w_i,$$
$$U_\theta = \frac{r}{\theta} + \sum_{i=1}^{r}\log\bar{G}(t_i,\boldsymbol{\beta}) - \sum_{i=1}^{r}\log\big[1-\bar{\alpha}\bar{G}(t_i,\boldsymbol{\beta})\big] - (m-1)\sum_{i=1}^{r}\frac{w_i^{\theta}\log w_i}{1-w_i^{\theta}} + (n-1)\sum_{i=1}^{r}\log w_i,$$
$$U_\alpha = \frac{r}{\alpha} - (\theta+1)\sum_{i=1}^{r}\frac{\bar{G}(t_i,\boldsymbol{\beta})}{1-\bar{\alpha}\bar{G}(t_i,\boldsymbol{\beta})} - \theta(m-1)\sum_{i=1}^{r}\frac{w_i^{\theta-1}}{1-w_i^{\theta}}\,\frac{G(t_i,\boldsymbol{\beta})\,\bar{G}(t_i,\boldsymbol{\beta})}{[1-\bar{\alpha}\bar{G}(t_i,\boldsymbol{\beta})]^{2}} + \theta(n-1)\sum_{i=1}^{r}\frac{G(t_i,\boldsymbol{\beta})}{\alpha\,[1-\bar{\alpha}\bar{G}(t_i,\boldsymbol{\beta})]},$$
$$U_{\boldsymbol{\beta}} = \sum_{i=1}^{r}\frac{g^{(\boldsymbol{\beta})}(t_i,\boldsymbol{\beta})}{g(t_i,\boldsymbol{\beta})} - (\theta-1)\sum_{i=1}^{r}\frac{G^{(\boldsymbol{\beta})}(t_i,\boldsymbol{\beta})}{\bar{G}(t_i,\boldsymbol{\beta})} - (\theta+1)\,\bar{\alpha}\sum_{i=1}^{r}\frac{G^{(\boldsymbol{\beta})}(t_i,\boldsymbol{\beta})}{1-\bar{\alpha}\bar{G}(t_i,\boldsymbol{\beta})}$$
$$+\,\theta\alpha(m-1)\sum_{i=1}^{r}\frac{w_i^{\theta-1}}{1-w_i^{\theta}}\,\frac{G^{(\boldsymbol{\beta})}(t_i,\boldsymbol{\beta})}{[1-\bar{\alpha}\bar{G}(t_i,\boldsymbol{\beta})]^{2}} - \theta(n-1)\sum_{i=1}^{r}\frac{G^{(\boldsymbol{\beta})}(t_i,\boldsymbol{\beta})}{\bar{G}(t_i,\boldsymbol{\beta})\,[1-\bar{\alpha}\bar{G}(t_i,\boldsymbol{\beta})]},$$
where $w_i = \alpha\bar{G}(t_i,\boldsymbol{\beta})/[1-\bar{\alpha}\bar{G}(t_i,\boldsymbol{\beta})]$, $\psi(\cdot)$ is the digamma function, and $g^{(\boldsymbol{\beta})}$ and $G^{(\boldsymbol{\beta})}$ denote the partial derivatives of $g$ and $G$ with respect to $\boldsymbol{\beta}$.
5.2 Asymptotic standard errors and confidence intervals for the MLEs
The asymptotic variance-covariance matrix of the MLEs of the parameters can be obtained by inverting the Fisher information matrix $I(\boldsymbol{\theta})$, which can be derived using the second partial derivatives of the log-likelihood function with respect to each parameter. The $(i,j)$th element of $I_n(\boldsymbol{\theta})$ is given by
$$I_{ij} = -E\left[\frac{\partial^{2}\ell(\boldsymbol{\theta})}{\partial\theta_i\,\partial\theta_j}\right], \qquad i,j = 1,2,\ldots,4+q.$$
The exact evaluation of these expectations may be cumbersome. In practice one can estimate $I_n(\boldsymbol{\theta})$ by the observed Fisher information matrix $\hat{I}_n(\hat{\boldsymbol{\theta}})$, defined as
$$\hat{I}_{ij} = -\left.\frac{\partial^{2}\ell(\boldsymbol{\theta})}{\partial\theta_i\,\partial\theta_j}\right|_{\boldsymbol{\theta}=\hat{\boldsymbol{\theta}}}, \qquad i,j = 1,2,\ldots,4+q.$$
Using the general theory of MLEs, under some regularity conditions on the parameters, as $n \to \infty$ the asymptotic distribution of $\sqrt{n}\,(\hat{\boldsymbol{\theta}}-\boldsymbol{\theta})$ is $N_k(0, V_n)$, where $V_n = (v_{jj}) = I_n(\boldsymbol{\theta})^{-1}$. The asymptotic behaviour remains valid if $V_n$ is replaced by $\hat{V}_n = \hat{I}_n^{-1}(\hat{\boldsymbol{\theta}})$. This result can be used to provide large-sample standard errors and to construct confidence intervals for the model parameters. Thus an approximate standard error and $(1-\eta)100\%$ confidence interval for the MLE of the $j$th parameter $\theta_j$ are given by $\sqrt{\hat{v}_{jj}}$ and $\hat{\theta}_j \pm Z_{\eta/2}\sqrt{\hat{v}_{jj}}$ respectively, where $Z_{\eta/2}$ is the upper $\eta/2$ point of the standard normal distribution.
As an illustration of the MLE method, its large-sample standard errors and confidence intervals in the case of BGMO-E$(m, n, \theta, \alpha, \lambda)$ are discussed in the appendix.
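The standard error and Wald-type interval above can be sketched as follows. The illustration below reuses one of the BGMO-W parameter rows of Table 1 (estimate 4.194 with standard error 0.668) and reproduces the reported interval (2.88, 5.50); the variance input $v_{jj}$ would normally come from the inverted observed information matrix.

```python
# Sketch: large-sample standard error and 95% confidence interval for one
# parameter, given the diagonal entry v_jj of the inverted information matrix.
import math

def wald_interval(theta_hat, v_jj, z=1.959964):
    # z is the upper 2.5% point of the standard normal distribution.
    se = math.sqrt(v_jj)
    return se, (theta_hat - z * se, theta_hat + z * se)

# Values taken from a BGMO-W parameter row of Table 1 (Example I).
se, (lo, hi) = wald_interval(4.194, 0.668 ** 2)
```

The same call applies to every diagonal entry of $\hat{V}_n$.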
5.3 Real life applications
In this subsection we fit three real data sets to show that the proposed BGMO-G distribution can be a better model than GMOKw-G (Handique and Chakraborty, 2015), taking the Weibull distribution as $G$. We estimated the parameters by numerical maximization of the log-likelihood function and provide their standard errors and 95% confidence intervals using the large-sample approach (see appendix).
In order to compare the distributions, we consider known criteria: the AIC (Akaike Information Criterion), BIC (Bayesian Information Criterion), CAIC (Consistent Akaike Information Criterion) and HQIC (Hannan-Quinn Information Criterion). It may be noted that
$$\text{AIC} = 2k - 2l; \quad \text{BIC} = k\log(n) - 2l; \quad \text{CAIC} = \text{AIC} + \frac{2k(k+1)}{n-k-1}; \quad \text{HQIC} = 2k\log[\log(n)] - 2l,$$
where $k$ is the number of parameters in the statistical model, $n$ the sample size, and $l$ the maximized value of the log-likelihood function under the considered model. In these applications the method of maximum likelihood is used to obtain the parameter estimates.
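The four criteria can be computed as below. With $k = 6$, $n = 40$ and $l = -80.38$ (the values reported for the BGMO-W fit of Example I), the code reproduces the AIC, BIC, CAIC and HQIC entries of Table 1.

```python
# Sketch: model-selection criteria; smaller values indicate a better fit.
import math

def information_criteria(k, n, loglik):
    aic = 2 * k - 2 * loglik
    bic = k * math.log(n) - 2 * loglik
    caic = aic + 2 * k * (k + 1) / (n - k - 1)
    hqic = 2 * k * math.log(math.log(n)) - 2 * loglik
    return {"AIC": aic, "BIC": bic, "CAIC": caic, "HQIC": hqic}

# BGMO-W fit of Example I: k = 6 parameters, n = 40, l_max = -80.38
crit = information_criteria(6, 40, -80.38)
```

The same function, applied to each fitted model, yields the comparison values in Tables 1-3.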
Example I:
The following data set gives the times to failure ($10^3$ h) of the turbocharger of one type of engine, given in Xu et al. (2003).
{1.6, 2.0, 2.6, 3.0, 3.5, 3.9, 4.5, 4.6, 4.8, 5.0, 5.1, 5.3, 5.4, 5.6, 5.8, 6.0, 6.0, 6.1, 6.3, 6.5, 6.5, 6.7, 7.0,
7.1, 7.3, 7.3, 7.3, 7.7, 7.7, 7.8, 7.9, 8.0, 8.1, 8.3, 8.4, 8.4, 8.5, 8.7, 8.8, 9.0}
Table 1: MLEs, standard errors and 95% confidence intervals (in parentheses) and the AIC, BIC, CAIC and HQIC values for the data set.

Parameter                GMOKw-W                          BGMO-W
â                        1.178 (0.017) (1.14, 1.21)       1.187 (0.702) (-0.19, 2.56)
b̂                        0.291 (0.209) (-0.12, 0.70)      2.057 (2.240) (-2.33, 6.45)
θ̂                        0.617 (0.002) (0.61, 0.62)       0.009 (0.006) (-0.00276, 0.02)
α̂                        1.855 (0.003) (1.85, 1.86)       4.194 (0.668) (2.88, 5.50)
λ̂                        1.619 (0.977) (-0.29, 3.53)      0.047 (0.108) (-0.16, 0.26)
γ̂                        0.178 (0.144) (-0.10, 0.46)      0.017 (0.016) (-0.01, 0.05)
log-likelihood (l_max)   -90.99                           -80.38
AIC                      193.98                           172.76
BIC                      204.11                           182.89
CAIC                     196.53                           175.31
HQIC                     197.65                           176.43
Fig. 3: Plots of (a) the observed histogram and estimated pdf's and (b) the estimated cdf's for the BGMO-G and GMOKw-G for Example I.
Example II:
The following data set consists of 346 nicotine measurements made on several brands of cigarettes in 1998. The data were collected by the Federal Trade Commission, an independent agency of the US government whose main mission is the promotion of consumer protection. [http://www.ftc.gov/reports/tobacco or http://pw1.netcom.com/rdavis2/smoke.html]
{1.3, 1.0, 1.2, 0.9, 1.1, 0.8, 0.5, 1.0, 0.7, 0.5, 1.7, 1.1, 0.8, 0.5, 1.2, 0.8, 1.1, 0.9, 1.2, 0.9, 0.8, 0.6, 0.3,
0.8, 0.6, 0.4, 1.1, 1.1, 0.2, 0.8, 0.5, 1.1, 0.1, 0.8, 1.7, 1.0, 0.8, 1.0, 0.8, 1.0, 0.2, 0.8, 0.4, 1.0, 0.2, 0.8,
1.4, 0.8, 0.5, 1.1, 0.9, 1.3, 0.9, 0.4, 1.4, 0.9, 0.5, 1.7, 0.9, 0.8, 0.8, 1.2, 0.9, 0.8, 0.5, 1.0, 0.6, 0.1, 0.2,
0.5, 0.1, 0.1, 0.9, 0.6, 0.9, 0.6, 1.2, 1.5, 1.1, 1.4, 1.2, 1.7, 1.4, 1.0, 0.7, 0.4, 0.9, 0.7, 0.8, 0.7, 0.4, 0.9,
0.6, 0.4, 1.2, 2.0, 0.7, 0.5, 0.9, 0.5, 0.9, 0.7, 0.9, 0.7, 0.4, 1.0, 0.7, 0.9, 0.7, 0.5, 1.3, 0.9, 0.8, 1.0, 0.7,
0.7, 0.6, 0.8, 1.1, 0.9, 0.9, 0.8, 0.8, 0.7, 0.7, 0.4, 0.5, 0.4, 0.9, 0.9, 0.7, 1.0, 1.0, 0.7, 1.3, 1.0, 1.1, 1.1,
0.9, 1.1, 0.8, 1.0, 0.7, 1.6, 0.8, 0.6, 0.8, 0.6, 1.2, 0.9, 0.6, 0.8, 1.0, 0.5, 0.8, 1.0, 1.1, 0.8, 0.8, 0.5, 1.1,
0.8, 0.9, 1.1, 0.8, 1.2, 1.1, 1.2, 1.1, 1.2, 0.2, 0.5, 0.7, 0.2, 0.5, 0.6, 0.1, 0.4, 0.6, 0.2, 0.5, 1.1, 0.8, 0.6,
1.1, 0.9, 0.6, 0.3, 0.9, 0.8, 0.8, 0.6, 0.4, 1.2, 1.3, 1.0, 0.6, 1.2, 0.9, 1.2, 0.9, 0.5, 0.8, 1.0, 0.7, 0.9, 1.0,
0.1, 0.2, 0.1, 0.1, 1.1, 1.0, 1.1, 0.7, 1.1, 0.7, 1.8, 1.2, 0.9, 1.7, 1.2, 1.3, 1.2, 0.9, 0.7, 0.7, 1.2, 1.0, 0.9,
1.6, 0.8, 0.8, 1.1, 1.1, 0.8, 0.6, 1.0, 0.8, 1.1, 0.8, 0.5, 1.5, 1.1, 0.8, 0.6, 1.1, 0.8, 1.1, 0.8, 1.5, 1.1, 0.8,
0.4, 1.0, 0.8, 1.4, 0.9, 0.9, 1.0, 0.9, 1.3, 0.8, 1.0, 0.5, 1.0, 0.7, 0.5, 1.4, 1.2, 0.9, 1.1, 0.9, 1.1, 1.0, 0.9,
1.2, 0.9, 1.2, 0.9, 0.5, 0.9, 0.7, 0.3, 1.0, 0.6, 1.0, 0.9, 1.0, 1.1, 0.8, 0.5, 1.1, 0.8, 1.2, 0.8, 0.5, 1.5, 1.5,
1.0, 0.8, 1.0, 0.5, 1.7, 0.3, 0.6, 0.6, 0.4, 0.5, 0.5, 0.7, 0.4, 0.5, 0.8, 0.5, 1.3, 0.9, 1.3, 0.9, 0.5, 1.2, 0.9,
1.1, 0.9, 0.5, 0.7, 0.5, 1.1, 1.1, 0.5, 0.8, 0.6, 1.2, 0.8, 0.4, 1.3, 0.8, 0.5, 1.2, 0.7, 0.5, 0.9, 1.3, 0.8, 1.2,
0.9}
Table 2: MLEs, standard errors and 95% confidence intervals (in parentheses) and the AIC, BIC, CAIC and HQIC values for the nicotine measurements data.

Parameter                GMOKw-W                          BGMO-W
â                        0.765 (0.025) (0.72, 0.81)       0.866 (0.159) (0.55, 1.18)
b̂                        2.139 (0.774) (0.62, 3.66)       0.329 (0.167) (0.00168, 0.66)
θ̂                        4.271 (0.018) (4.24, 4.31)       2.131 (1.182) (-0.19, 4.45)
α̂                        2.919 (0.013) (2.89, 2.94)       2.223 (0.725) (0.80, 3.64)
λ̂                        1.097 (0.309) (0.49, 1.70)       3.285 (3.901) (-4.36, 10.93)
γ̂                        0.114 (0.042) (0.03, 0.19)       2.635 (1.310) (0.07, 5.20)
log-likelihood (l_max)   -111.75                          -109.28
AIC                      235.50                           230.56
BIC                      258.58                           253.64
CAIC                     235.75                           230.80
HQIC                     244.69                           239.76
Fig. 4: Plots of (a) the observed histogram and estimated pdf's and (b) the estimated cdf's for the BGMO-G and GMOKw-G for Example II.
Example III:
This data set consists of 100 observations of breaking stress of carbon fibres (in GPa) given by Nichols and Padgett (2006).
{3.70, 2.74, 2.73, 2.50, 3.60, 3.11, 3.27, 2.87, 1.47, 3.11, 4.42, 2.40, 3.15, 2.67,3.31, 2.81, 0.98, 5.56,
5.08, 0.39, 1.57, 3.19, 4.90, 2.93, 2.85, 2.77, 2.76, 1.73, 2.48, 3.68, 1.08, 3.22, 3.75, 3.22, 2.56, 2.17,
4.91, 1.59, 1.18, 2.48, 2.03, 1.69, 2.43, 3.39, 3.56, 2.83, 3.68, 2.00, 3.51, 0.85, 1.61, 3.28, 2.95, 2.81,
3.15, 1.92, 1.84, 1.22, 2.17, 1.61, 2.12, 3.09, 2.97, 4.20, 2.35, 1.41, 1.59, 1.12, 1.69, 2.79, 1.89, 1.87,
3.39, 3.33, 2.55, 3.68, 3.19, 1.71, 1.25, 4.70, 2.88, 2.96, 2.55, 2.59, 2.97, 1.57, 2.17, 4.38, 2.03, 2.82,
2.53, 3.31, 2.38, 1.36, 0.81, 1.17, 1.84, 1.80, 2.05, 3.65}.
Table 3: MLEs, standard errors and 95% confidence intervals (in parentheses) and the AIC, BIC, CAIC and HQIC values for the breaking stress of carbon fibres data.

Parameter                GMOKw-W                          BGMO-W
â                        1.015 (0.071) (0.88, 1.15)       1.458 (1.123) (-0.74, 3.66)
b̂                        0.385 (0.168) (0.06, 0.71)       0.734 (1.810) (-2.81, 4.28)
θ̂                        0.803 (0.003) (0.79, 0.81)       0.598 (1.496) (-2.33, 3.53)
α̂                        2.222 (0.004) (2.21, 2.23)       2.439 (0.779) (0.91, 3.97)
λ̂                        1.482 (0.440) (0.62, 2.34)       0.685 (1.150) (-1.57, 2.94)
γ̂                        0.345 (0.169) (0.01, 0.68)       0.201 (0.573) (-0.92, 1.32)
log-likelihood (l_max)   -142.63                          -141.29
AIC                      297.26                           294.58
BIC                      312.89                           310.21
CAIC                     298.16                           295.48
HQIC                     303.59                           300.92
Fig. 5: Plots of (a) the observed histogram and estimated pdf's and (b) the estimated cdf's for the BGMO-G and GMOKw-G for Example III.
In Tables 1, 2 and 3 the MLEs, standard errors and 95% confidence intervals (in parentheses) of the parameters for the fitted distributions, along with the AIC, BIC, CAIC and HQIC values, are presented for Examples I, II and III respectively. In all the examples considered here, based on the lowest values of the AIC, BIC, CAIC and HQIC, the BGMO-W distribution turns out to be a better model than the GMOKw-W distribution. A visual comparison of the closeness of the fitted densities to the observed histograms of the data for Examples I, II and III is presented in Figures 3, 4 and 5 respectively. These plots indicate that the proposed distribution provides a closer fit to these data.
6. Conclusion
The Beta Generalized Marshall-Olkin family of distributions is introduced and some of its important properties are studied. Maximum likelihood and moment methods for estimating the parameters are also discussed. Fitting three real-life data sets shows good results in favour of the proposed family when compared to the Generalized Marshall-Olkin Kumaraswamy extended family of distributions. It is therefore expected that this family of distributions will be an important addition to the existing literature on distribution theory.
Appendix: Maximum likelihood estimation for BGMO-E
The pdf of the BGMO-E distribution is given by
$$f^{BGMO\text{-}E}(t) = \frac{\theta\,\alpha\,\lambda\,e^{-\lambda t}}{B(m,n)\,[1-\bar{\alpha}e^{-\lambda t}]^{2}}\left[\frac{\alpha\,e^{-\lambda t}}{1-\bar{\alpha}e^{-\lambda t}}\right]^{\theta-1}\left[1-\left\{\frac{\alpha\,e^{-\lambda t}}{1-\bar{\alpha}e^{-\lambda t}}\right\}^{\theta}\right]^{m-1}\left[\left\{\frac{\alpha\,e^{-\lambda t}}{1-\bar{\alpha}e^{-\lambda t}}\right\}^{\theta}\right]^{n-1}.$$
For a random sample of size $r$ from this distribution, the log-likelihood function for the parameter vector $\boldsymbol{\theta} = (m, n, \theta, \alpha, \lambda)^T$ is given by
$$\ell(\boldsymbol{\theta}) = r\log\theta + r\log\alpha + r\log\lambda - \lambda\sum_{i=1}^{r} t_i + (\theta-1)\sum_{i=1}^{r}\log\big(e^{-\lambda t_i}\big) - r\log B(m,n)$$
$$-\,(\theta+1)\sum_{i=1}^{r}\log\big[1-\bar{\alpha}e^{-\lambda t_i}\big] + (m-1)\sum_{i=1}^{r}\log\big[1-u_i^{\theta}\big] + (n-1)\sum_{i=1}^{r}\log\big[u_i^{\theta}\big],$$
where $u_i = \alpha e^{-\lambda t_i}/[1-\bar{\alpha}e^{-\lambda t_i}]$.
The components of the score vector $U_{\boldsymbol{\theta}} = (U_m, U_n, U_\theta, U_\alpha, U_\lambda)^T$ are
$$U_m = -r\,\psi(m) + r\,\psi(m+n) + \sum_{i=1}^{r}\log\big[1-u_i^{\theta}\big],$$
$$U_n = -r\,\psi(n) + r\,\psi(m+n) + \theta\sum_{i=1}^{r}\log u_i,$$
$$U_\theta = \frac{r}{\theta} - \lambda\sum_{i=1}^{r} t_i - \sum_{i=1}^{r}\log\big[1-\bar{\alpha}e^{-\lambda t_i}\big] - (m-1)\sum_{i=1}^{r}\frac{u_i^{\theta}\log u_i}{1-u_i^{\theta}} + (n-1)\sum_{i=1}^{r}\log u_i,$$
$$U_\alpha = \frac{r}{\alpha} - (\theta+1)\sum_{i=1}^{r}\frac{e^{-\lambda t_i}}{1-\bar{\alpha}e^{-\lambda t_i}} - \theta(m-1)\sum_{i=1}^{r}\frac{u_i^{\theta-1}}{1-u_i^{\theta}}\,\frac{e^{-\lambda t_i}\,[1-e^{-\lambda t_i}]}{[1-\bar{\alpha}e^{-\lambda t_i}]^{2}} + \theta(n-1)\sum_{i=1}^{r}\frac{1-e^{-\lambda t_i}}{\alpha\,[1-\bar{\alpha}e^{-\lambda t_i}]},$$
$$U_\lambda = \frac{r}{\lambda} - \theta\sum_{i=1}^{r} t_i - (\theta+1)\,\bar{\alpha}\sum_{i=1}^{r}\frac{t_i\,e^{-\lambda t_i}}{1-\bar{\alpha}e^{-\lambda t_i}} + \theta\alpha(m-1)\sum_{i=1}^{r}\frac{u_i^{\theta-1}}{1-u_i^{\theta}}\,\frac{t_i\,e^{-\lambda t_i}}{[1-\bar{\alpha}e^{-\lambda t_i}]^{2}} - \theta(n-1)\sum_{i=1}^{r}\frac{t_i}{1-\bar{\alpha}e^{-\lambda t_i}},$$
where $u_i = \alpha e^{-\lambda t_i}/[1-\bar{\alpha}e^{-\lambda t_i}]$ and $\psi(\cdot)$ is the digamma function.
The asymptotic variance-covariance matrix of the MLEs of the unknown parameters $\boldsymbol{\theta} = (m, n, \theta, \alpha, \lambda)^T$ of the BGMO-E$(m, n, \theta, \alpha, \lambda)$ distribution is estimated by
$$\hat{I}_n^{-1}(\hat{\boldsymbol{\theta}}) = \begin{pmatrix}
\mathrm{var}(\hat{m}) & \mathrm{cov}(\hat{m},\hat{n}) & \mathrm{cov}(\hat{m},\hat{\theta}) & \mathrm{cov}(\hat{m},\hat{\alpha}) & \mathrm{cov}(\hat{m},\hat{\lambda}) \\
\mathrm{cov}(\hat{n},\hat{m}) & \mathrm{var}(\hat{n}) & \mathrm{cov}(\hat{n},\hat{\theta}) & \mathrm{cov}(\hat{n},\hat{\alpha}) & \mathrm{cov}(\hat{n},\hat{\lambda}) \\
\mathrm{cov}(\hat{\theta},\hat{m}) & \mathrm{cov}(\hat{\theta},\hat{n}) & \mathrm{var}(\hat{\theta}) & \mathrm{cov}(\hat{\theta},\hat{\alpha}) & \mathrm{cov}(\hat{\theta},\hat{\lambda}) \\
\mathrm{cov}(\hat{\alpha},\hat{m}) & \mathrm{cov}(\hat{\alpha},\hat{n}) & \mathrm{cov}(\hat{\alpha},\hat{\theta}) & \mathrm{var}(\hat{\alpha}) & \mathrm{cov}(\hat{\alpha},\hat{\lambda}) \\
\mathrm{cov}(\hat{\lambda},\hat{m}) & \mathrm{cov}(\hat{\lambda},\hat{n}) & \mathrm{cov}(\hat{\lambda},\hat{\theta}) & \mathrm{cov}(\hat{\lambda},\hat{\alpha}) & \mathrm{var}(\hat{\lambda})
\end{pmatrix},$$
where the elements of the observed information matrix $\hat{I}_n(\hat{\boldsymbol{\theta}}) = \big[-\partial^{2}\ell(\boldsymbol{\theta})/\partial\theta_i\,\partial\theta_j\big]_{\boldsymbol{\theta}=\hat{\boldsymbol{\theta}}}$ can be derived using the following second partial derivatives:
$$\frac{\partial^{2}\ell}{\partial m^{2}} = -r\,\psi'(m) + r\,\psi'(m+n), \qquad \frac{\partial^{2}\ell}{\partial n^{2}} = -r\,\psi'(n) + r\,\psi'(m+n), \qquad \frac{\partial^{2}\ell}{\partial m\,\partial n} = r\,\psi'(m+n),$$
$$\frac{\partial^{2}\ell}{\partial m\,\partial\theta} = -\sum_{i=1}^{r}\frac{u_i^{\theta}\log u_i}{1-u_i^{\theta}}, \qquad \frac{\partial^{2}\ell}{\partial n\,\partial\theta} = \sum_{i=1}^{r}\log u_i, \qquad \frac{\partial^{2}\ell}{\partial\theta^{2}} = -\frac{r}{\theta^{2}} - (m-1)\sum_{i=1}^{r}\frac{u_i^{\theta}\,(\log u_i)^{2}}{(1-u_i^{\theta})^{2}},$$
$$\frac{\partial^{2}\ell}{\partial m\,\partial\alpha} = -\theta\sum_{i=1}^{r}\frac{u_i^{\theta-1}}{1-u_i^{\theta}}\,\frac{\partial u_i}{\partial\alpha}, \qquad \frac{\partial^{2}\ell}{\partial n\,\partial\alpha} = \theta\sum_{i=1}^{r}\frac{1-e^{-\lambda t_i}}{\alpha\,[1-\bar{\alpha}e^{-\lambda t_i}]},$$
$$\frac{\partial^{2}\ell}{\partial m\,\partial\lambda} = -\theta\sum_{i=1}^{r}\frac{u_i^{\theta-1}}{1-u_i^{\theta}}\,\frac{\partial u_i}{\partial\lambda}, \qquad \frac{\partial^{2}\ell}{\partial n\,\partial\lambda} = -\theta\sum_{i=1}^{r}\frac{t_i}{1-\bar{\alpha}e^{-\lambda t_i}},$$
where
$$\frac{\partial u_i}{\partial\alpha} = \frac{e^{-\lambda t_i}\,[1-e^{-\lambda t_i}]}{[1-\bar{\alpha}e^{-\lambda t_i}]^{2}}, \qquad \frac{\partial u_i}{\partial\lambda} = -\frac{\alpha\,t_i\,e^{-\lambda t_i}}{[1-\bar{\alpha}e^{-\lambda t_i}]^{2}}.$$
The remaining elements $\partial^{2}\ell/\partial\theta\,\partial\alpha$, $\partial^{2}\ell/\partial\theta\,\partial\lambda$, $\partial^{2}\ell/\partial\alpha^{2}$, $\partial^{2}\ell/\partial\alpha\,\partial\lambda$ and $\partial^{2}\ell/\partial\lambda^{2}$ follow in the same way by differentiating the score components $U_\theta$, $U_\alpha$ and $U_\lambda$, where $\psi'(\cdot)$ is the derivative of the digamma function.
References
1. Alizadeh M., Tahir M.H., Cordeiro G.M., Zubair M. and Hamedani G.G. (2015). The
Kumaraswamy Marshal-Olkin family of distributions. Journal of the Egyptian Mathematical
Society, in press.
2. Bornemann, Folkmar and Weisstein, Eric W. “Power series” From MathWorld-A Wolfram
Web Resource. http://mathworld.wolfram.com/PowerSeries.html.
3. Barreto-Souza W., Lemonte A.J. and Cordeiro G.M. (2013). General results for Marshall and
Olkin's family of distributions. An Acad Bras Cienc 85: 3-21.
4. Baharith L.A., Mousa S.A., Atallah M.A. and Elgyar S.H. (2014). The beta generalized
inverse Weibull distribution. British J Math Comput Sci 4: 252-270.
5. Cordeiro G.M., Silva G.O. and Ortega E.M.M. (2012a). The beta extended Weibull
distribution. J Probab Stat Sci 10: 15-40.
6. Cordeiro G.M., Cristino C.T., Hashimoto E.M. and Ortega E.M.M. (2013b). The beta
generalized Rayleigh distribution with applications to lifetime data. Stat Pap 54: 133-161.
7. Cordeiro G.M., Silva G.O., Pescim R.R. and Ortega E.M.M. (2014c). General properties for
the beta extended half-normal distribution. J Stat Comput Simul 84: 881-901.
8. Eugene N., Lee C. and Famoye F. (2002). Beta-normal distribution and its applications.
Commun Statist Theor Meth 31: 497-512.
9. George D. and George S. (2013). Marshall-Olkin Esscher transformed Laplace distribution
and processes. Braz J Probab Statist 27: 162-184.
10. Ghitany M.E., Al-Awadhi F.A. and Alkhalfan L.A. (2007). Marshall-Olkin extended Lomax
distribution and its application to censored data. Commun Stat Theory Method 36: 1855-1866.
11. Ghitany M.E., Al-Hussaini E.K. and Al-Jarallah R.A. (2005). Marshall-Olkin extended
Weibull distribution and its application to censored data. J Appl Stat 32: 1025-1034.
12. Gieser P.W., Chang M.N., Rao P.V., Shuster J.J. and Pullen J. (1998). Modelling cure rates
using the Gompertz model with covariate information. Stat Med 17(8):831–839.
13. Greenwood J.A., Landwehr J.M., Matalas N.C. and Wallis J.R. (1979). Probability weighted
moments: definition and relation to parameters of several distributions expressable in inverse
form. Water Resour Res 15: 1049-1054.
14. Gui W. (2013a) . A Marshall-Olkin power log-normal distribution and its applications to
survival data. Int J Statist Probab 2: 63-72.
15. Gui W. (2013b). Marshall-Olkin extended log-logistic distribution and its application in
minification processes. Appl Math Sci 7: 3947-3961.
16. Gurvich M., DiBenedetto A. and Ranade S. (1997). A new statistical distribution for
characterizing the random strength of brittle materials. J. Mater. Sci. 32: 2559-2564.
17. Handique L. and Chakraborty S. (2015a). The Marshall-Olkin-Kumaraswamy-G family of distributions. arXiv:1509.08108.
18. Handique L. and Chakraborty S. (2015b). The Generalized Marshall-Olkin-Kumaraswamy-G family of distributions. arXiv:1510.08401.
19. Handique L. and Chakraborty S. (2016). Beta generated Kumaraswamy-G and other new families of distributions. arXiv:1603.00634, under review.
20. Handique L. and Chakraborty S. (2016b). The Kumaraswamy Generalized Marshall-Olkin
family of distributions.
21. Jones M.C. (2004). Families of distributions arising from the distributions of order statistics.
Test 13: 1-43.
22. Kenney J.F. and Keeping E.S. (1962). Mathematics of Statistics, Part 1. Van Nostrand, New
Jersey, 3rd edition.
23. Krishna E., Jose K.K. and Risti´c M. (2013). Applications of Marshal-Olkin Fréchet
distribution. Comm. Stat. Simulat. Comput. 42: 76–89.
24. Lehmann E.L. (1953). The power of rank tests. Ann Math Statist 24: 23-43.
25. Lemonte A.J. (2014). The beta log-logistic distribution. Braz J Probab Statist 28: 313-332.
26. Marshall A. and Olkin I. (1997). A new method for adding a parameter to a family of
distributions with applications to the exponential and Weibull families. Biometrika 84: 641-652.
27. Mead M.E., Afify A.Z., Hamedani G.G. and Ghosh I. (2016). The Beta Exponential Frechet
Distribution with Applications, to appear in Austrian Journal of Statistics.
28. Moors J.J.A. (1988). A quantile alternative for kurtosis. The Statistician 37: 25–32.
29. Nadarajah S. (2005). Exponentiated Pareto distributions. Statistics 39: 255-260.
30. Nadarajah S., Cordeiro G.M. and Ortega E.M.M. (2015). The Zografos-Balakrishnan–G
family of distributions: mathematical properties and applications, Commun. Stat. Theory
Methods 44:186-215.
31. Nichols M.D. and Padgett W.J.A. (2006). A bootstrap control chart for Weibull percentiles.
Quality and Reliability Engineering International, v. 22: 141-151.
32. Pescim R.R., Cordeiro G.M., Demetrio C.G.B., Ortega E.M.M. and Nadarajah S. (2012). The
new class of Kummer beta generalized distributions. SORT 36: 153-180.
33. Sarhan A.M. and Apaloo J. (2013). Exponentiated modified Weibull extension distribution.
Reliab Eng Syst Safety 112: 137-144.
34. Singla N., Jain K. and Sharma S.K. (2012). The beta generalized Weibull distribution:
Properties and applications. Reliab Eng Syst Safety 102: 5-15.
35. Song K.S. (2001). Rényi information, loglikelihood and an intrinsic distribution measure.
J Stat Plan Infer 93: 51-69.
36. Weisstein, Eric W. “Incomplete Beta Function” From MathWorld-A Wolfram Web
Resource. http://mathworld.Wolfram.com/IncompleteBetaFunction.html.
37. Xu, K., Xie, M., Tang, L.C. and Ho, S.L. (2003). Application of neural networks in
forecasting engine systems reliability. Applied Soft Computing, 2(4): 255-268.
38. Zhang T. and Xie M. (2007). Failure data analysis with extended Weibull distribution.
Communication in Statistics – Simulation and Computation 36: 579-592.
JACOBIAN MATRIX: A BRIDGE BETWEEN LINEAR AND NONLINEAR POLYNOMIAL-ONLY PROBLEMS
W. Chen
Permanent mail address: P. O. Box 2-19-201, Jiangshu University of Science & Technology,
Zhenjiang City, Jiangsu Province 212013, P. R. China
Present mail address (as a JSPS Postdoctoral Research Fellow): Apt.4, West 1st floor,
Himawari-so, 316-2, Wakasato-kitaichi, Nagano-city, Nagano-ken, 380-0926, JAPAN
E-mail:
Permanent email box: [email protected]
Abbreviated Title: Nonlinear linear polynomial-only problems
Abstract: By using the Hadamard matrix product concept, this paper introduces two
generalized matrix formulation forms of numerical analogue of nonlinear differential
operators. The SJT matrix-vector product approach is found to be a simple, efficient and
accurate technique in the calculation of the Jacobian matrix of the nonlinear discretization by
finite difference, finite volume, collocation, dual reciprocity BEM or radial functions based
numerical methods. We also present and prove a simple underlying relationship (theorem 3.1) between general nonlinear analogue polynomials and their corresponding Jacobian matrices,
which forms the basis of this paper. By means of theorem 3.1, stability analysis of numerical
solutions of nonlinear initial value problems can be easily handled based on the well-known
results for linear problems. Theorem 3.1 also leads naturally to the straightforward extension
of various linear iterative algorithms such as the SOR, Gauss-Seidel and Jacobi methods to
nonlinear algebraic equations. Since an exact alternative of the quasi-Newton equation is
established via theorem 3.1, we derive a modified BFGS quasi-Newton method. A simple
formula is also given to examine the deviation between the approximate and exact Jacobian
matrices. Furthermore, in order to avoid the evaluation of the Jacobian matrix and its inverse,
the pseudo-Jacobian matrix is introduced with general applicability to any nonlinear system of equations. It should be pointed out that a large class of real-world nonlinear problems can be modeled or numerically discretized as polynomial-only algebraic systems of equations. The results presented here are in general applicable to all these problems. This
paper can be considered as a starting point in the research of nonlinear computation and
analysis from an innovative viewpoint.
Key words. Hadamard product, Jacobian matrix, SJT product, nonlinear polynomial-only
equations, nonlinear stability analysis, quasi-Newton method, pseudo-Jacobian matrix.
AMS subject classifications. 47H17, 65J15
1. Introduction. The numerical solution of nonlinear partial differential equations plays a prominent role in many areas of physics and engineering. Considerable research endeavors have been directed to developing various nonlinear solution methodologies. However, it is not easy to achieve significant practical progress in this direction because of the great complexity that nonlinearity gives rise to. Recently, some innovative contributions were made by
one of the present authors [1, 2]. The Hadamard product of matrices was therein introduced to
nonlinear computations and analysis. The SJT product of matrix and vector was defined to
efficiently calculate the accurate Jacobian matrix of some numerical formulations of nonlinear
differential equations. The present study is a step forward development based on these works.
In comparison to nonlinear cases, a vast variety of computing and analysis tools for linear problems have been quite well developed today. It is a natural desire to employ these effective linear methods for nonlinear problems. However, this is a rather hard task due to the great gap between the two. Based on the Hadamard product approach, Chen et al. [2] derived
two kinds of generalized matrix formulations in numerical approximation of nonlinear
differential or integral operators. By using these unified formulations, this paper presents and
verifies the simple relationship (theorem 3.1) between numerical analogue solutions of
nonlinear differential operators and their Jacobian matrices. It is noted that theorem 3.1 is
only applicable for an important special class of polynomial-only algebraic system of
equations. However, in practice such polynomial-only systems have very widespread
applications. The theorem paves a shortcut path to exploit the existing methods of solving
linear problems to the complex polynomial-only nonlinear problems. Some significant results
are immediately obtained by using theorem 3.1. First, so far there is not general and simple
approach available for stability analysis in the numerical solution of nonlinear initial value problems. We here develop such a technique based on the application of theorem 3.1. Second, the linear iterative methods for algebraic equations, such as the SOR, Gauss-Seidel, and Jacobi methods, were often applied in conjunction with the Newton method rather than directly to the nonlinear system of equations itself. The present work gives a straightforward extension of these techniques to nonlinear algebraic equations. Third, the quasi-Newton equation is the
very basis of various quasi-Newton methods. Based on theorem 3.1, we construct an exact
alternative to the approximate quasi-Newton equation. As a consequence, we derive
a set of the modified BFGS matrix updating formulas. Finally, in order to avoid the
calculation of the Jacobian matrix and its inverse, we introduce the pseudo-Jacobian matrix.
By using this new concept, general nonlinear systems of equations, without the limitation to
polynomial-only problems, are encompassed in this work. The proposed pseudo-Jacobian
matrix is used for stability analysis of nonlinear initial value problems.
This paper is structured as follows. Section 2 gives a brief introduction to the Hadamard and
SJT products. Two unified matrix formulations of general numerical discretization of
nonlinear problems are obtained by using the Hadamard product. Section 3 proves the simple
relationship theorem 3.1 between the numerical analogue of nonlinear operator and the
corresponding Jacobian matrix, which forms the basis of later work. Section 4 is comprised of
three subsections. Section 4.1 is concerned with stability analysis of numerical solution of
nonlinear initial value problems, and in section 4.2 several existing linear iterative methods
are directly extended to the nonlinear problems. Section 4.2 involves the construction of a set
of the modified Broyden-type matrix updating formulas. Section 5 defines the pseudoJacobian matrix, and applies this concept to stability analysis of nonlinear initial value
computation. Finally, some remarks of the present work are given in section 6. Unless
otherwise specified, U, C and F in this paper represent vector.
2. Two unified matrix formulations of general nonlinear discretizations
Matrix computations are of central importance in nonlinear numerical analysis and
computations. However, since nonlinear problems are essentially different from linear ones, the
traditional linear algebraic approach, which is based on the concept of linear transformation,
cannot provide a unified powerful tool for nonlinear numerical computation and analysis tasks.
In this section, by using the Hadamard product and SJT product, we gain two kinds of
generalized matrix formulations of nonlinear numerical discretization and a simple, accurate
approach to calculate the Jacobian matrix [1, 2].
Definition 2.1 Let matrices A=[aij] and B=[bij]∈CN×M; then the Hadamard product of matrices is
defined as A°B=[aij bij]∈CN×M, where CN×M denotes the set of N×M real matrices.
Definition 2.2 If matrix A=[aij]∈CN×M, then A°q=[aij^q]∈CN×M is defined as the Hadamard
power of the matrix A, where q is a real number. In particular, if aij≠0, A°(−1)=[1/aij]∈CN×M is defined
as the Hadamard inverse of the matrix A. A°0=11 is defined as the Hadamard unit matrix, in which
all elements are equal to unity.
Definition 2.3 If matrix A=[aij]∈CN×M, then the Hadamard matrix function f°(A) is
defined as f°(A) = [f(aij)]∈CN×M.
Theorem 2.1: Letting A, B and Q∈CN×M, then
1> A°B = B°A,    (1a)
2> k(A°B) = (kA)°B, where k is a scalar,    (1b)
3> (A+B)°Q = A°Q + B°Q,    (1c)
4> A°B = EN^T (A⊗B) EM, where the matrix EN (or EM) is defined as EN = [e1⊗e1 ⋯ eN⊗eN], ei = [0 ⋯ 0 1 0 ⋯ 0]^T with the 1 in the i-th position, i = 1, ⋯, N; EN^T is the transpose of EN, and ⊗ denotes the Kronecker product of matrices,    (1d)
5> If A and B are non-negative, then λmin(A) min{bii} ≤ λj(A°B) ≤ λmax(A) max{bii}, where λ denotes an eigenvalue,    (1e)
6> (detA)(detB) ≤ det(A°B), where det( ) denotes the determinant.    (1f)
For more details about the Hadamard product see [3, 4].
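As a quick numerical illustration of theorem 2.1, the following pure-Python sketch (our own; the helper names hadamard, kron, and selection_matrix are not from the references) verifies property (1d) on a small example.

```python
# Pure-Python check of Hadamard-product property (1d):
#   A ° B = EN^T (A ⊗ B) EM,  EN = [e1⊗e1 ... eN⊗eN].
# Helper names (hadamard, kron, selection_matrix) are ours.

def hadamard(A, B):
    return [[a * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def kron(A, B):
    """Kronecker product of two matrices given as lists of lists."""
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(len(A[0]) * q)]
            for i in range(len(A) * p)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def selection_matrix(N):
    """E_N: N^2 x N matrix whose i-th column is e_i ⊗ e_i."""
    E = [[0] * N for _ in range(N * N)]
    for i in range(N):
        E[i * N + i][i] = 1
    return E

A = [[1, 2, 3], [4, 5, 6]]
B = [[7, 8, 9], [10, 11, 12]]
EN, EM = selection_matrix(2), selection_matrix(3)
lhs = hadamard(A, B)
rhs = matmul(matmul(transpose(EN), kron(A, B)), EM)
print(lhs == rhs)   # True: property (1d) holds on this example
```

The same helpers can be used to probe properties (1a)-(1c) on further examples.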
2.1. Nonlinear formulation-K* of general numerical methods
It is well known that the majority of popular numerical methods, such as the finite element,
boundary element, finite difference, Galerkin, least square, collocation and spectral methods,
have their roots in the method of weighted residuals (MWR) [5, 6]. Therefore, it is of
general significance to apply the Hadamard product to the nonlinear computation of the
MWR. In the MWR, the desired function u in the differential governing equation
ψ{u} − f = 0    (2)
is replaced by a finite series approximation û,
u ≈ û = Σ_{j=1}^{N} Cj φj,    (3)
where ψ{ } is a differential operator.
The φj can be defined as the assumed functions and the Cj's
are the unknown parameters. The approximate function û is completely specified in terms of
unknown parameters Cj. Substituting this approximation û into the governing equation (2), it
is in general unlikely that the equation will be exactly satisfied; that is, there results a residual R:
ψ{û} − f = R.    (4)
The method of weighted residuals seeks to determine the N unknowns Cj in such a way that
the error R is minimized over the entire solution domain. This is accomplished by requiring
that the weighted average of the error vanishes over the solution domain. Choosing the weighting
functions Wj and setting the integral of R to zero:
∫D [ψ{û} − f] Wj dD = ∫D R Wj dD = 0,  j = 1, 2, …, N.    (5)
Equation (5) can be used to obtain the N unknown coefficients. This equation also generally
describes the method of weighted residuals. In order to expose our idea clearly, consider
the following linear and nonlinear differential operators in two dimensions with a varying
parameter:
parameter:
L1{u} = c( x, y)
∂ pu
L2 {u} = c( x, y)
∂ pu ∂ q u
(6a)
∂x p
(6b)
∂x p ∂y q
Substitution of Eq. (3) into Eqs. (6a) and (6b) and application of equation (1d) of theorem
2.1 result in

* It is denoted as formulation-S in [2].
L1{û} = c(x, y) (∂^p φ^T/∂x^p) C,    (7a)
L2{û} = c(x, y) [(∂^p φ^T/∂x^p) C] ° [(∂^q φ^T/∂y^q) C] = E1^T c(x, y) [(∂^p φ^T/∂x^p) ⊗ (∂^q φ^T/∂y^q)] E1 (C ⊗ C) = c(x, y) [(∂^p φ^T/∂x^p) ⊗ (∂^q φ^T/∂y^q)] (C ⊗ C),    (7b)
where C is a vector composed of the unknown parameters, E1=1. Substituting the Eqs. (7a, b)
into Eq. (5), we have
∫D L1{û} Wj dD = [∫D c(x, y) (∂^p φ^T/∂x^p) Wj dD] C,    (8a)
∫D L2{û} Wj dD = [∫D c(x, y) ((∂^p φ^T/∂x^p) ⊗ (∂^q φ^T/∂y^q)) Wj dD] (C ⊗ C).    (8b)
As a general case, the quadratic nonlinear partial differential equation is given by
Σ_{k,l=0}^{N1} akl(x, y) ∂^{(k+l)}u/(∂x^k ∂y^l) + Σ_{k,l=0}^{N2} Σ_{i,j=0}^{N2} bkl(x, y) [∂^{(k+l)}u/(∂x^k ∂y^l)][∂^{(i+j)}u/(∂x^i ∂y^j)] + d = 0,    (9)
where d is a constant. The above equation encompasses a wide range of quadratic nonlinear
governing equations.
Applying Eqs. (8a, b), we can easily derive the MWR formulation of the nonlinear differential
equation (9) in the following form:
K_{n×n} C + G_{n×n²} (C ⊗ C) + F = 0,    (10)
where F is the constant vector,
K_{n×n} = ∫D Σ_{k,l=0}^{N1} akl(x, y) (∂^{(k+l)}φ^T/(∂x^k ∂y^l)) Wj dD ∈ C^{n×n}
and
G_{n×n²} = ∫D Σ_{k,l=0}^{N2} Σ_{i,j=0}^{N2} bkl(x, y) [(∂^{(i+j)}φ^T/(∂x^i ∂y^j)) ⊗ (∂^{(k+l)}φ^T/(∂x^k ∂y^l))] Wj dD ∈ C^{n×n²}
represent the constant coefficient matrices corresponding to the linear and nonlinear operators,
respectively. For cubic nonlinear differential equations, we can obtain a similar general
matrix formulation by using the same approach:
L_{n×n} C + R_{n×n³} (C ⊗ C ⊗ C) + F = 0,    (11)
where L and R are constant coefficient matrices. For higher order nonlinear problems, the
formulations can easily be obtained in the same way. To simplify notation, formulations with
the form of Eqs. (10) and (11) are denoted as formulation-K, where K is chosen because the
Kronecker product is used in expressing the nonlinear numerical discretization terms.
As was mentioned earlier, most popular numerical techniques can be derived from the
method of weighted residuals. The only difference among these numerical methods lies in the
use of different weighting and basis functions in the MWR. From the foregoing deduction, it
is noted that Eq. (10) is obtained no matter what weighting and basis functions are used in
the method of weighted residuals. Therefore, it is straightforward to obtain
formulation-K for the nonlinear computations of these methods. In many numerical methods,
the physical values are usually used as the desired variables instead of the unknown expansion
coefficient vector C of the preceding formulas. Both approaches are in fact identical. It is
theoretically convenient to use C here.
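To make the formulation-K form concrete, the following sketch (ours; the matrices are an arbitrary toy example, not the output of any particular weighted-residual discretization) evaluates the quadratic residual K C + G(C⊗C) + F of Eq. (10) via the Kronecker product.

```python
# Toy evaluation of the quadratic formulation-K residual of Eq. (10):
#   K C + G (C ⊗ C) + F,
# with small arbitrary matrices (not from a real discretization).

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def kron_vec(x, y):
    """Kronecker product of two vectors."""
    return [xi * yj for xi in x for yj in y]

def residual(K, G, F, C):
    KC = matvec(K, C)
    GCC = matvec(G, kron_vec(C, C))
    return [kc + gcc + f for kc, gcc, f in zip(KC, GCC, F)]

K = [[2.0, 0.0], [0.0, 3.0]]      # n x n linear coefficient matrix
G = [[1.0, 0.0, 0.0, 0.0],        # n x n^2 quadratic coefficient matrix
     [0.0, 0.0, 0.0, 1.0]]
F = [-3.0, -7.0]
C = [1.0, 2.0]
# K C = [2, 6], C⊗C = [1, 2, 2, 4], G(C⊗C) = [1, 4]
print(residual(K, G, F, C))       # [0.0, 3.0]
```

A cubic formulation-K residual such as Eq. (11) follows the same pattern with kron_vec applied twice.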
In the following we give explicit formulas for computing the Jacobian derivative matrix of the
quadratic and cubic nonlinear formulations-K (10) and (11). Eq. (10) can be restated as
K_{n×n} C + [vec(G1)^T ; vec(G2)^T ; ⋯ ; vec(Gn)^T] (C ⊗ C) + F = 0,    (12)
where vec( ) is the vector-function of a rectangular matrix formed by stacking the columns of the
matrix into one long vector [7], and the semicolons separate the stacked rows. The Gi are n×n
symmetric matrices and can easily be obtained from the corresponding rows of the matrix G in
Eq. (10) through the inverse process of vec( ).
Furthermore, we have
(K_{n×n} + [C^T G1 ; C^T G2 ; ⋯ ; C^T Gn]) C + F = 0,    (13)
where the superscript T means the transpose of a vector. According to the rules of differentiation of
matrix functions [4], the Jacobian derivative matrix of the above equation can be obtained by
the following formula:
∂ϕ{C}/∂C = K_{n×n} + 2 [C^T G1 ; C^T G2 ; ⋯ ; C^T Gn].    (14)
Similarly, the cubic nonlinear equation (11) can be restated as
ψ(C) = L_{n×n} C + [[R1_{n×n²}(C⊗C)]^T ; ⋯ ; [Rn_{n×n²}(C⊗C)]^T] C + F = 0.    (15)
The Jacobian matrix of the above equation can be evaluated by
∂ψ{C}/∂C = L_{n×n} + [C^T ∂(R1_{n×n²}(C⊗C))/∂C ; ⋯ ; C^T ∂(Rn_{n×n²}(C⊗C))/∂C] + [[R1_{n×n²}(C⊗C)]^T ; ⋯ ; [Rn_{n×n²}(C⊗C)]^T].    (16)
Furthermore, we have
∂ψ(C)/∂C = L + 3 [C^T R11 C  C^T R12 C ⋯ C^T R1n C ; C^T R21 C  C^T R22 C ⋯ C^T R2n C ; ⋯ ; C^T Rn1 C  C^T Rn2 C ⋯ C^T Rnn C],    (17)
where Rij result from matrix Ri and are rectangular constant coefficient matrices.
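The quadratic Jacobian formula (14) can be checked numerically. The sketch below (ours; small arbitrary matrices, with each Gi symmetric as assumed above) compares K + 2[C^T Gi] with a forward-difference Jacobian of K C + G(C⊗C) + F.

```python
# Check of the quadratic Jacobian formula (14): for
#   phi(C) = K C + G (C ⊗ C) + F,  row i of G being vec(G_i)^T,
# the Jacobian is K + 2 [C^T G_1 ; ... ; C^T G_n] when each G_i is
# symmetric.  The matrices below are an arbitrary 2 x 2 example.

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def kron_vec(x, y):
    return [xi * yj for xi in x for yj in y]

K = [[1.0, 2.0], [0.0, 3.0]]
G1 = [[2.0, 1.0], [1.0, 0.0]]          # symmetric
G2 = [[0.0, -1.0], [-1.0, 4.0]]        # symmetric
# row i of G is vec(G_i)^T (columns stacked)
G = [[Gi[r][c] for c in range(2) for r in range(2)] for Gi in (G1, G2)]
F = [1.0, -1.0]
C = [1.0, 2.0]

def phi(C):
    return [a + b + f for a, b, f in
            zip(matvec(K, C), matvec(G, kron_vec(C, C)), F)]

# analytic Jacobian by formula (14): K + 2 [C^T G_i]
J = [[K[i][j] + 2.0 * sum(C[r] * Gi[r][j] for r in range(2))
      for j in range(2)]
     for i, Gi in enumerate((G1, G2))]

# forward-difference comparison
h = 1e-6
for j in range(2):
    Ch = list(C)
    Ch[j] += h
    col = [(a - b) / h for a, b in zip(phi(Ch), phi(C))]
    assert all(abs(col[i] - J[i][j]) < 1e-4 for i in range(2))
print("Jacobian (14) agrees with finite differences")
```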
2.2. Nonlinear formulation-H and SJT product
The previously presented formulation-K is somewhat complex. This section will show that
the Hadamard product can be directly exploited to express the nonlinear discretization terms of
some numerical techniques. For example, consider the quadratic nonlinear differential
operator (∂U(x, y)/∂x)(∂U(x, y)/∂y); its numerical analogue by a point-wise approximation
technique can be expressed as
{∂u(x, y)/∂x · ∂u(x, y)/∂y}_i = {ux}_i ° {uy}_i = (Ax U) ° (Ay U),  i = 1, 2, …, N,    (18)
where i indexes the discrete points, and Ax and Ay represent the coefficient matrices
dependent on the specified numerical discretization scheme. It is noted that here we use the desired
function value vector U instead of the unknown parameter vector C of section 2.1; in
fact, both are equivalent. This explicit matrix formulation (18) is obtained in a straightforward
and intuitive way. The finite difference and collocation methods and their variants, such as
the differential quadrature and pseudo-spectral methods, belong to the point-wise approximating
numerical technique. In addition, the finite volume, dual reciprocity BEM [8] (the most
efficient technique in applying BEM to nonlinear problems) and numerical techniques based
on radial basis functions [9] can express their analogue of nonlinear differential operators in
the Hadamard product form. For all above-mentioned methods, the numerical analogues of
some nonlinear operators often encountered in practice are given by
1. c(x, y) u,x = {c(xj, yj)} ° (Ax U),    (19a)
2. (u,x)^q = (Ax U)^{°q}, where q is a real number,    (19b)
3. (∂u^m/∂x)(∂u^n/∂y) = (Ax U^{°m}) ° (Ay U^{°n}),    (19c)
4. sin(u,x) = sin°(Ax U),    (19d)
5. exp(u,x) = exp°(Ax U).    (19e)
In the above equations, ( ),x = ∂( )/∂x, and Ax and Ay denote the known coefficient matrices
resulting from certain numerical methods. We define a nonlinear discretization expression in
the Hadamard product form as formulation-H. It is very easy to transform formulation-H,
such as Eq. (18), into formulation-K by using formula (1d). In what follows, the SJT
product is introduced to efficiently compute the analytical solution of the Jacobian derivative
matrix.
Definition 2.4. If matrix A=[aij]∈CN×M and vector U={ui}∈CN×1, then A◊U=[aij ui]∈CN×M is
defined as the postmultiplying SJT product of the matrix A and vector U, where ◊ represents the
SJT product. If M=1, A◊B=A°B.
Definition 2.5. If matrix A=[aij]∈CN×M and vector V={vj}∈CM×1, then V^T◊A=[aij vj]∈CN×M is
defined as the premultiplying SJT product of the matrix A and vector V.
Considering the nonlinear formulation (18), we have
∂/∂U {(Ax U) ° (Ay U)} = Ax ◊ (Ay U) + Ay ◊ (Ax U).    (20)
Formula (20) produces the accurate Jacobian matrix through simple algebraic computations.
The SJT premultiplying product is related to the Jacobian matrix of formulations such as
dU^m/dx = A U^m, i.e.,
∂/∂U {Ax U^{°m}} = m (U^{°(m−1)})^T ◊ Ax.    (21)
In the following, we discuss some operation rules for applying the SJT product to evaluate the
Jacobian matrices of the nonlinear formulation-H expressions:
1. ∂/∂U {{c(xj, yj)} ° (Ax U)} = Ax ◊ {c(xj, yj)},    (22a)
2. ∂/∂U {(Ax U)^{°q}} = q Ax ◊ (Ax U)^{°(q−1)},    (22b)
3. ∂/∂U {(Ax U^{°m}) ° (Ay U^{°n})} = m {(U^{°(m−1)})^T ◊ Ax} ◊ (Ay U^{°n}) + n {(U^{°(n−1)})^T ◊ Ay} ◊ (Ax U^{°m}),    (22c)
4. ∂/∂U {sin°(Ax U)} = Ax ◊ cos°(Ax U),    (22d)
5. ∂/∂U {exp°(Ax U)} = Ax ◊ exp°(Ax U),    (22e)
6. If ψ = f°(φ) and φ = ϕ°(U), we have
∂ψ/∂U = (∂ψ/∂φ)(∂φ/∂U),    (22f)
where ∂ψ/∂φ and ∂φ/∂U represent the Jacobian derivative matrices of the corresponding
Hadamard vector functions with respect to the vectors φ and U, respectively. It is observed from
these formulas that the Jacobian matrix of the nonlinear formulation-H can be calculated by
using the SJT product with chain rules similar to those in the differentiation of a scalar function.
The above computing formulas yield the analytical solution of the Jacobian matrix. The
computational effort of one SJT product is only n² scalar multiplications. However, it is noted
that the SJT product seems not amenable to the evaluation of the Jacobian matrix of the
previous formulation-K.
The finite difference method is often employed to calculate an approximate solution of the
Jacobian matrix and also requires O(n²) scalar multiplications. In fact, the SJT product
approach requires n² and 5n² fewer multiplication operations than the first- and second-order finite
differences, respectively. Moreover, the SJT product produces the analytical solution of the
Jacobian matrix. In contrast, the approximate Jacobian matrix yielded by the finite difference
method affects the accuracy and convergence rate of the Newton-Raphson method, especially
for highly nonlinear problems. The efficiency and accuracy of the SJT product approach were
numerically demonstrated in [1, 2].
We notice that the SJT product is closely related to the ordinary product of matrices;
namely, if matrix A=[aij]∈CN×M and vector U={ui}∈CN×1, then the postmultiplying
SJT product of the matrix A and vector U satisfies
A◊U = diag{u1, u2, …, uN} A,    (23a)
where the matrix diag{u1, u2, …, uN}∈CN×N is a diagonal matrix. Similarly, for the SJT
premultiplying product we have
V^T◊A = A diag{v1, v2, …, vM},    (23b)
where vector V={vj}∈CM×1. The reason for introducing the SJT product is to simplify the
presentation, manifest its relation to the Jacobian matrix of formulation-H, and make clear
that the SJT product approach enjoys the same chain rules as scalar differentiation.
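A minimal sketch of the SJT product and of the Jacobian formula (20), using arbitrary small example matrices (ours); the analytic Jacobian is checked against a forward-difference approximation.

```python
# SJT postmultiplying product (Definition 2.4): A◊U = [a_ij * u_i],
# equivalently diag(U) A as in (23a), used here to form the Jacobian (20)
# of the Hadamard term (Ax U) ° (Ay U).  Example matrices are arbitrary.

def sjt_post(A, U):
    return [[a * ui for a in row] for row, ui in zip(A, U)]

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

Ax = [[1.0, 2.0], [3.0, 4.0]]
Ay = [[0.5, -1.0], [2.0, 0.0]]
U = [1.0, 2.0]

# Jacobian of (Ax U) ° (Ay U) by formula (20)
J = mat_add(sjt_post(Ax, matvec(Ay, U)), sjt_post(Ay, matvec(Ax, U)))

def term(U):          # the nonlinear term itself, elementwise product
    return [p * q for p, q in zip(matvec(Ax, U), matvec(Ay, U))]

h = 1e-6              # forward-difference check of J
for j in range(2):
    Uh = list(U)
    Uh[j] += h
    col = [(fh - f0) / h for fh, f0 in zip(term(Uh), term(U))]
    for i in range(2):
        assert abs(col[i] - J[i][j]) < 1e-4
print("Jacobian from the SJT product matches finite differences")
```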
Some numerical examples applying the above formulas are presented in [1, 2]; their
effectiveness and efficiency are demonstrated therein. Obviously, formulation-H is
preferred whenever possible. However, formulation-K is believed to be the most
general and uniform formulation of the various nonlinear numerical analogues, due to its
broad applicability. The general formulation-K and formulation-H make it computationally
attractive to develop unified techniques for nonlinear analysis and computation. In the
next sections, we will employ the results given here.
3. Jacobian matrix and nonlinear numerical analogue
Considering the quadratic nonlinear term of equation (18) and its Jacobian matrix of equation
(20), we have
[Ax◊(Ay U) + Ay◊(Ax U)] U = diag(Ax U)(Ay U) + diag(Ay U)(Ax U) = 2 (Ax U) ° (Ay U)    (24)
by means of equation (23a), where diag(AxU) and diag(AyU) are diagonal matrices with
diagonal terms of AxU and AyU. Furthermore, consider the cubic nonlinear differential
operator
(∂u(x, y)/∂x)(∂u(x, y)/∂y)(∂²u(x, y)/∂x∂y) = (Ax U) ° (Ay U) ° (Axy U),    (25)
whose Jacobian matrix is
Ax◊[(Ay U) ° (Axy U)] + Ay◊[(Ax U) ° (Axy U)] + Axy◊[(Ax U) ° (Ay U)].
Similar to Eq. (24), we can derive
{Ax◊[(Ay U) ° (Axy U)] + Ay◊[(Ax U) ° (Axy U)] + Axy◊[(Ax U) ° (Ay U)]} U = 3 (Ax U) ° (Ay U) ° (Axy U).    (26)
In fact, we can summarize
N^(2)(U) = (1/2) J^(2)(U) U    (27)
for the quadratic nonlinear term and
N^(3)(U) = (1/3) J^(3)(U) U    (28)
for the cubic nonlinear term, where N^(2) and N^(3) denote the quadratic and cubic nonlinear
terms and J^(2) and J^(3) represent the corresponding Jacobian matrices.
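Relations (27) and (28) are easy to verify numerically for Hadamard-form terms. In the sketch below (ours; the matrices are arbitrary, and Az stands in for the Axy of Eq. (25)), the quadratic and cubic identities are checked directly.

```python
# Numerical check of (27) and (28) for Hadamard-form terms:
#   N2(U) = (Ax U) ° (Ay U),          Jacobian J2 from (20),
#   N3(U) = (Ax U) ° (Ay U) ° (Az U), Jacobian as in (26),
# expecting N2 = (1/2) J2 U and N3 = (1/3) J3 U.
# The matrices are arbitrary; Az stands in for Axy.

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def sjt_post(A, U):                    # A◊U = diag(U) A, eq. (23a)
    return [[a * ui for a in row] for row, ui in zip(A, U)]

def mat_add(*Ms):
    return [[sum(vals) for vals in zip(*rows)] for rows in zip(*Ms)]

def had(*vs):                          # elementwise product of vectors
    out = []
    for t in zip(*vs):
        p = 1.0
        for x in t:
            p *= x
        out.append(p)
    return out

Ax = [[1.0, 2.0], [0.0, 1.0]]
Ay = [[2.0, -1.0], [1.0, 1.0]]
Az = [[1.0, 0.0], [3.0, -2.0]]
U = [2.0, -1.0]

ax, ay, az = matvec(Ax, U), matvec(Ay, U), matvec(Az, U)
N2 = had(ax, ay)
J2 = mat_add(sjt_post(Ax, ay), sjt_post(Ay, ax))
N3 = had(ax, ay, az)
J3 = mat_add(sjt_post(Ax, had(ay, az)),
             sjt_post(Ay, had(ax, az)),
             sjt_post(Az, had(ax, ay)))
print(N2 == [0.5 * v for v in matvec(J2, U)])   # True
print(N3 == [v / 3.0 for v in matvec(J3, U)])   # True
```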
As was mentioned in section 2, the formulation-K is in general appropriate for the nonlinear
numerical discretization expression of all numerical techniques resulting from the method of
weighted residuals, which include the finite element, boundary element, finite difference,
Galerkin, least square, collocation and spectral methods. Also, the nonlinear formulation-H of
the finite difference, finite volume, dual reciprocity BEM, radial function based methods,
collocation and their variants can easily be transformed into the formulation-K. Therefore, in
the following we will investigate the effectiveness of equations (27) and (28) for the
formulation-K. First, by comparing equations (13) and (14), it is obvious that
N^(2)(C) = (1/2) J^(2)(C) C    (29)
for the quadratic nonlinear formulation-K. Furthermore, by postmultiplying the Jacobian
matrix of the cubic nonlinear term in formulation-K equation (17) by the vector C, we have
[C^T R11 C  C^T R12 C ⋯ C^T R1n C ; C^T R21 C  C^T R22 C ⋯ C^T R2n C ; ⋯ ; C^T Rn1 C  C^T Rn2 C ⋯ C^T Rnn C] C = R_{n×n³} (C ⊗ C ⊗ C)    (30)
through inverse operations from equation (17) to (15). Therefore, we have
N^(3)(C) = (1/3) J^(3)(C) C.    (31)
Next we use mathematical induction to generalize the relationship formulas (29)
and (31) to nonlinear terms of any order. First, we assume that there holds
N^(k)(C) = H^(k)_{n×n^k} (C ⊗ C ⊗ ⋯ ⊗ C) [k factors of C] = (1/k) J^(k)(C) C    (32)
for the k-order nonlinear term. Considering the (k+1)-order nonlinear term, the corresponding
formulation-K expression can be stated as
N^(k+1)(C) = H^(k+1)_{n×n^{k+1}} (C ⊗ C ⊗ ⋯ ⊗ C) [k+1 factors of C] = [N1^(k)(C)^T ; N2^(k)(C)^T ; ⋯ ; Nn^(k)(C)^T] C,    (33)
where the Ni^(k)(C) are k-order nonlinear terms,
Ni^(k)(C) = hi^(k)_{n×n^k} (C ⊗ C ⊗ ⋯ ⊗ C) [k factors of C],  i = 1, 2, …, n.    (34)
The Jacobian matrix can be given by
J^(k+1)(C) = ∂N^(k+1)(C)/∂C = [C^T ∂N1^(k)(C)/∂C ; C^T ∂N2^(k)(C)/∂C ; ⋯ ; C^T ∂Nn^(k)(C)/∂C] + [N1^(k)(C)^T ; N2^(k)(C)^T ; ⋯ ; Nn^(k)(C)^T].    (35)
By using equation (32), we have
J^(k+1)(C) C = k N^(k+1)(C) + N^(k+1)(C) = (k+1) N^(k+1)(C).    (36)
Therefore, it is generally validated that
N^(m)(C) = (1/m) J^(m)(C) C,    (37)
where m denotes the nonlinear order. It is again stressed that the indirect parameter vector C
formulation is actually equivalent to those using the unknown function value vector U.
Therefore, equation (37) is equally satisfied for the vector U formulations. Summarizing the
above results, we have
Theorem 3.1: If N^(m)(U) and J^(m)(U) are defined as the nonlinear numerical analogue of the m-order
nonlinear differential operator and its corresponding Jacobian matrix, respectively, then
N^(m)(U) = (1/m) J^(m)(U) U is always satisfied, irrespective of which numerical technique is
employed in the discretization.
In fact, all integral-order nonlinear polynomial systems of equations can be represented in the
formulation-K form. For example, considering the quadratic nonlinear term G_{n×n²}(C ⊗ C) in
equation (10), we find that for an n-dimensional polynomial system of equations the
quadratic nonlinear term of each equation can have at most n² independent coefficients. Therefore, the coefficient
matrix G_{n×n²} is sufficient to determine any quadratic nonlinear polynomial terms uniquely.
A similar analysis can be done for higher order nonlinear polynomial terms. We can now
conclude that theorem 3.1 is applicable to all integral-order nonlinear polynomial systems of
equations. In addition, for quadratic nonlinear problems, G_{n×n²} in equation (10) is a
constant coefficient matrix and is actually the second order derivative matrix (the Hessian
matrix) of the quadratic nonlinear algebraic vector. Numerical properties such as the singular values
of this matrix may disclose some significant information about nonlinear systems.
4. Applications
By using theorem 3.1, this section will address some essential nonlinear computational issues
pertinent to the stability analysis of the numerical solution of nonlinear initial value problems,
the linear iterative solution of nonlinear algebraic equations, and a modified BFGS quasi-Newton
method.
4.1. Stability analysis of nonlinear initial value problems
The spatial discretization of time-dependent differential systems results in initial value
problems. For linear systems, methods for determining the conditions of numerical stability and
accuracy of various time integration schemes are well established. However, for nonlinear
problems, these tasks become dramatically more complicated. It was found that numerical
instability can occur in nonlinear computations even for methods that are unconditionally
stable for linear problems [10, 11]. Recently, an energy- and momentum-conserving condition
is sometimes imposed to guarantee the stability of nonlinear integration. The mathematical
techniques for performing such a strategy are often quite sophisticated and thus not easily
learned and used. We wish to develop a methodology which can evaluate the stability behavior of
general integration schemes and avoids the above difficulties.
Without loss of generality, the canonical form of the first-order initial value problem with
quadratic and cubic nonlinear polynomial terms is given by
dU/dt = f(t, U) = L U + N^(2)(t, U) + N^(3)(t, U),    (38)
where U is the unknown vector, L is a given nonsingular matrix, and N^(2)(t, U) and N^(3)(t, U) are given
vectors of quadratic and cubic nonlinear polynomials, respectively. Therefore, according to
theorem 3.1, we have
f(t, U) = [L + (1/2) J^(2)(t, U) + (1/3) J^(3)(t, U)] U = A(t, U) U,    (39)
where J^(2)(t, U) and J^(3)(t, U) are the Jacobian matrices of the quadratic and cubic nonlinear terms.
It is seen that in Eq. (39) the right side of equation (38) is expressed in a definite, explicit
matrix-vector separated form. So equation (38) can be restated as
dU/dt = A(t, U) U.    (40)
Eq. (40) has the form of a linear initial value problem with the varying coefficient matrix A(t, U),
which makes it very convenient to apply the well-developed techniques for linear
problems to nonlinear problems.
The linear time integration methods available now fall into two groups, explicit and
implicit. The explicit methods are usually easy to use and need not solve a matrix system.
However, these methods are also usually conditionally stable even for linear problems, and
thus in many cases stability requires a small time step. For nonlinear problems, it is an intrinsic
advantage of the explicit methods that iterative solutions are not required. In wave
propagation problems, these methods are often used due to their lower computing cost per step
[12]. On the other hand, the implicit methods require the solution of a matrix system one or
more times per time step and therefore are computationally expensive, especially for
nonlinear problems. However, the implicit methods tend to be numerically stable and thus
allow a large time step, so these methods are advantageous for stiff problems [13]. In what
follows, we will investigate these two types of methods.
Explicit methods
As the simplest example, let us consider the explicit Euler scheme solution of equation (38):
U_{n+1} = U_n + h f(t_n, U_n).    (41)
In terms of equation (40), we have
U_{n+1} = [I + h A(t_n, U_n)] U_n,    (42)
where I is the unit matrix. If A(t_n, U_n) is negative definite, then the stability condition is given by
h ≤ 2/|λ_max|    (43)
as in the linear case, where λ_max represents the largest eigenvalue of A in magnitude. We note that
A(t_n, U_n) is a time-varying coefficient matrix; therefore, it is difficult to predict the global
stability behavior. In other words, the present strategy only provides a local stability
analysis at one time step. Park [10] pointed out that locally stable calculation can guarantee
global stability; conversely, local instability causes the global response to be unstable. As in
linear systems with time-varying coefficients, the key issue in the local stability analysis is
how to determine λ_max. It is known [14] that any l_p matrix norm of A gives a bound on all
eigenvalues of A, namely
|λ_A| ≤ ‖A‖.    (44)
Of these, the l_1 or l_∞ matrix norm of A is most easily computed. Substitution of inequality (44)
into inequality (43) produces
h ≤ 2/‖A‖.    (45)
Therefore, it is not difficult to confine the stepsize h to satisfy the stability condition
(43) by means of a certain matrix norm.
For the other explicit integration methods, the procedure of stability analysis is similar to
what we have done for the explicit Euler method. For example, the stability interval on the
negative real axis of the well-known fourth-order Runge-Kutta method is |λ|h ≤ 2.785.
Therefore, the method can be locally and globally stable when applied to nonlinear problems
provided that the condition
h ≤ 2.785/‖A‖ ≤ 2.785/|λ_max|    (46)
is satisfied.
In particular, we can obtain somewhat more elaborate results for the formulation-H given in
section 2.2. To illustrate clearly, consider Burgers' equation
∂u/∂t + u ∂u/∂x = (1/Re) ∂²u/∂x²,    (47)
in which Re is the Reynolds number. When applying the finite difference, collocation, finite
volume method, dual reciprocity BEM or radial function based methods, the spatial
discretization of the above equation will result in the formulation-H form discretization
dU/dt = (1/Re) Bx U − U ° (Ax U),    (48)
where U consists of the desired values of u. By using the SJT product, we get the Jacobian
matrix of the nonlinear term on the right side of equation (48):
J^(2)(U) = −I◊(Ax U) − Ax◊U.    (49)
According to theorem 3.1, we have
dU/dt = A(t, U) U,    (50)
where
A(t, U) = (1/Re) Bx − (1/2) [I◊(Ax U) + Ax◊U].    (51)
One can easily derive
‖A(t, U)‖ ≤ (1/Re) ‖Bx‖ + ‖Ax‖ ‖U‖.    (52)
Substituting inequality (52) into inequality (45), we have
Substituting inequality (52) into inequality (45), we have
h ≤ 2 / [(1/Re) ‖Bx‖ + ‖Ax‖ ‖U‖].    (53)
The above inequality gives the restriction condition for stability when the explicit Euler
method is applied to this case. If we have a priori knowledge of a bound on U, inequality (53)
can provide a global stability condition with respect to the time stepsize h. For the fourth-order
Runge-Kutta method, similar formulas can be obtained. It is seen from the preceding analysis
that the present methodology of stability analysis deals with a nonlinear problem in a way
similar to how we normally handle linear problems with time-dependent coefficients.
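The stability bound (53) can be illustrated with a small experiment. The sketch below adopts a periodic central-difference discretization of Burgers' equation (our illustrative choice; the text does not prescribe a particular scheme) and evaluates the admissible explicit-Euler stepsize using l∞ norms.

```python
# Stability bound (53) for Burgers' equation in formulation-H:
#   dU/dt = (1/Re) Bx U - U ° (Ax U).
# Periodic central differences are an illustrative choice; the bound is
#   h <= 2 / ((1/Re)||Bx|| + ||Ax|| ||U||)   (l-infinity norms).

n, Re, dx = 8, 10.0, 0.1

def centered_first(n, dx):        # Ax: central difference for d/dx
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][(i + 1) % n] = 1.0 / (2 * dx)
        A[i][(i - 1) % n] = -1.0 / (2 * dx)
    return A

def centered_second(n, dx):       # Bx: central difference for d2/dx2
    B = [[0.0] * n for _ in range(n)]
    for i in range(n):
        B[i][i] = -2.0 / dx ** 2
        B[i][(i + 1) % n] = 1.0 / dx ** 2
        B[i][(i - 1) % n] = 1.0 / dx ** 2
    return B

def norm_inf_mat(A):
    return max(sum(abs(a) for a in row) for row in A)

def norm_inf_vec(v):
    return max(abs(x) for x in v)

Ax, Bx = centered_first(n, dx), centered_second(n, dx)
U_bound = [0.5] * n               # assumed a priori bound on the solution
h_max = 2.0 / (norm_inf_mat(Bx) / Re
               + norm_inf_mat(Ax) * norm_inf_vec(U_bound))
print(h_max)                      # about 0.0444 for this grid
```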
Implicit and semi-implicit methods
Without loss of generality, let us consider the implicit Euler method, the simplest implicit
method, as a case study:
U_{n+1} = U_n + h f(t_{n+1}, U_{n+1}).    (54)
In terms of equation (40), we have
U_{n+1} = [I − h A(t_{n+1}, U_{n+1})]^{−1} U_n.    (55)
As in the linear case, the stability condition of equation (55) is that the coefficient matrix A(t, U)
be negative definite. Because U_{n+1} is unknown before the computation, the approach is
an a posteriori stability analysis. In fact, for all A-stable implicit integration methods, local
and global stability can be guaranteed if the negative definiteness of the matrix A is kept at all
successive steps. It is noted that for A-stable integration methods, the stability condition of the
nonlinear system is independent of the time step h, as in the linear case.
In the numerical solution of nonlinear initial value problems by implicit time-stepping
methods, a system of nonlinear equations has to be solved at each step in some iterative way. To
avoid this troublesome and time-consuming task, various semi-implicit methods were
developed by using a linearization procedure in the implicit solution of nonlinear problems. For
example, if the nonlinear function f(t, U) in the implicit Euler equation (54) is linearized by
using the Newton method, we get the semi-implicit Euler method, namely,
U_{n+1} = U_n + h [I − h (∂f/∂U)|_{U_n}]^{−1} f(t_n, U_n),    (56)
where ∂f/∂U is the Jacobian matrix of f(t, U). The stability analysis of equation (56) can be
carried out in the same way as we have done for the explicit methods.
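A scalar illustration of the semi-implicit Euler scheme (56), using the example equation du/dt = −u² (ours, with exact solution u(t) = u0/(1 + u0 t)); in the scalar case the linearized step reduces to u_{n+1} = u_n + h f(u_n)/(1 − h f′(u_n)).

```python
# Scalar sketch of the semi-implicit Euler method (56) applied to
# du/dt = f(u) = -u**2 (illustrative example; exact solution
# u(t) = u0 / (1 + u0 * t)).  The Newton linearization gives
#   u_{n+1} = u_n + h * f(u_n) / (1 - h * f'(u_n)).

def f(u):
    return -u * u

def df(u):                         # scalar Jacobian of f
    return -2.0 * u

def semi_implicit_euler(u0, h, steps):
    u = u0
    for _ in range(steps):
        u += h * f(u) / (1.0 - h * df(u))
    return u

u0, h, T = 1.0, 0.01, 1.0
u_num = semi_implicit_euler(u0, h, int(round(T / h)))
u_exact = u0 / (1.0 + u0 * T)      # = 0.5
print(abs(u_num - u_exact))        # small: the scheme is first-order
```

Halving h roughly halves the final error, consistent with first-order accuracy.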
4.2. Linear iterative methods for nonlinear algebraic systems
The linear iterative methods are most often used for large sparse systems of linear equations;
they include the Jacobi, Gauss-Seidel and SOR methods. The Newton method and its variants do not
belong to this type of method. Ortega and Rheinboldt [15] gave detailed discussions
of various possible applications of these methods coupled with Newton-like methods to solve
nonlinear problems, such as the so-called SOR-Newton, Newton-SOR, etc. However, it is
found very difficult to directly extend the linear iterative methods to
nonlinear equations without a linearization procedure such as the Newton method [14, p. 220;
15, p. 305]. This impediment stems from the fact that an explicit matrix-vector separated expression of the
general nonlinear equations is not in general available. In this section, we confine our
attention to overcoming this barricade for polynomial-only equations. Theorem 3.1 provides
a simple approach to express the nonlinear terms as the explicit matrix-vector form. Therefore,
it is possible to conveniently apply the general linear iterative methods to nonlinear
polynomial-only systems, especially for the systems with the formulation-H form.
Without loss of generality, consider the nonlinear polynomial equations
f(U) − b = 0    (57)
with quadratic and cubic nonlinear terms. By using theorem 3.1, we have
f(U) − b = [L + (1/2) J^(2)(U) + (1/3) J^(3)(U)] U − b = A(U) U − b = 0,    (58)
where J^(2)(U) and J^(3)(U) are the Jacobian matrices of the quadratic and cubic nonlinear terms as
in equation (39). Therefore, if we can easily compute J^(2)(U) and J^(3)(U), the obstacle to
directly employing the linear iterative methods for nonlinear problems is eliminated. By
analogy with the original forms of the various linear iterative methods, we give the following
nonlinear Jacobi, Gauss-Seidel and SOR iteration formulas in the solution of equation (58),
respectively:
Ui^{(k+1)} = (1/aii(U^{(k)})) [ bi − Σ_{j≠i} aij(U^{(k)}) Uj^{(k)} ],    (59)
Ui^{(k+1)} = (1/aii(U^{(k)})) [ bi − Σ_{j<i} aij(U^{(k)}) Uj^{(k+1)} − Σ_{j>i} aij(U^{(k)}) Uj^{(k)} ],    (60)
and
Ui^{(k+1)} = (1−ω) Ui^{(k)} + (ω/aii(U^{(k)})) [ bi − Σ_{j<i} aij(U^{(k)}) Uj^{(k+1)} − Σ_{j>i} aij(U^{(k)}) Uj^{(k)} ],    (61)
where ω is the relaxation factor in the SOR method and is allowed to vary with k; a suitable
choice of ω in (61) can accelerate convergence.
In particular, for the nonlinear numerical analogues of the finite difference, finite volume, dual
reciprocity BEM, radial function based methods, collocation and their variants, the
discretization can be represented in the formulation-H form. We can use the SJT product
method to yield a simple explicit matrix expression of the Jacobian matrix. Therefore, in
fact, it is not necessary to evaluate the Jacobian matrix separately in these cases.
Of course, the initial guess is also of considerable importance in applying these formulas. The computational effort of the linear iterative methods is much less than that of the Newton method. However, as a penalty, the convergence rate is linear. It is noted that if a_{ii}(U^{(k)}) becomes zero in the iterative process, some row or column interchange is required. Some numerical experiments are also necessary to assess their performance vis-a-vis the Newton-like methods for various benchmark problems. Also, some in-depth results on linear iterative methods [14, 15] may be very useful to enhance the present nonlinear iterative formulas.
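As a concrete illustration, the nonlinear Gauss-Seidel formula (60) can be sketched as follows for an assumed two-equation quadratic test system (the system is illustrative, not taken from this paper); the matrix A(U) = L + (1/2)J^(2)(U) is rebuilt from the current iterate at each step:

```python
# Assumed test system: 4*u1 + u1*u2 = 5 and 3*u2 + u1**2 = 4, with exact
# solution u = (1, 1). By theorem 3.1, A(U) U = L U + N2(U), where
# A(U) = L + (1/2) J2(U) and J2 is the Jacobian of the quadratic terms.

def A(u):
    """Variable-dependent coefficient matrix A(U) = L + (1/2) J2(U)."""
    u1, u2 = u
    L = [[4.0, 0.0], [0.0, 3.0]]
    J2 = [[u2, u1], [2.0 * u1, 0.0]]  # Jacobian of (u1*u2, u1**2)
    return [[L[i][j] + 0.5 * J2[i][j] for j in range(2)] for i in range(2)]

def gauss_seidel(b, u, iters=100):
    """Nonlinear Gauss-Seidel, equation (60): newly computed components
    are used immediately, and A is refreshed from the current iterate."""
    for _ in range(iters):
        for i in range(2):
            a = A(u)
            s = sum(a[i][j] * u[j] for j in range(2) if j != i)
            u[i] = (b[i] - s) / a[i][i]
    return u

u = gauss_seidel([5.0, 4.0], [0.5, 0.5])  # approaches [1.0, 1.0]
```

No Jacobian inversion is needed; only the diagonal entries a_{ii}(U^{(k)}) are divided by, which is where the zero-pivot caution above applies.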
4.3. A modified BFGS quasi-Newton method
To avoid the time-consuming evaluation and inversion of the Jacobian matrix in each iterative step of the standard Newton method, quasi-Newton methods were developed while maintaining a superlinear convergence rate. The key to such methods is a matrix-updating procedure, one of the most successful and widely used of which is the BFGS method, named for its four developers, Broyden, Fletcher, Goldfarb, and Shanno. The so-called quasi-Newton equation is fundamental to the various quasi-Newton methods, namely,
Ji (Ui − Ui −1 ) = f (Ui ) − f (Ui −1 ) .
(62)
The Jacobian matrix Ji is updated by adding a rank-one matrix to the previous Ji−1 so as to satisfy equation (62) and the following relation:

Ji p = Ji−1 p,  when (Ui − Ui−1)T p = 0,   (63)

where we denote q = Ui − Ui−1 and δf = f(Ui) − f(Ui−1). It is emphasized that J here is the Jacobian matrix of the total system. It is noted that equation (62) is an approximate one. For polynomial-only problems, we can obtain an exact alternative to equation (62) by using theorem 3.1. Without loss of generality, equation (57) can be restated as
f (U ) = LU + N (2 ) (U ) + N (3) (U ) + b = 0 ,
(64)
where LU, N^(2)(U) and N^(3)(U) represent the linear, quadratic and cubic terms of the system of equations. The Jacobian matrix of the system is given by
J = ∂f(U)/∂U = L + ∂N^(2)(U)/∂U + ∂N^(3)(U)/∂U.   (65)
By using theorem 3.1, we have
JU = LU + 2 N^(2)(U) + 3 N^(3)(U) = f̃(U).   (66)
Therefore, we can exactly establish
Ji Ui − Ji−1 Ui−1 = f̃(Ui) − f̃(Ui−1) = y.   (67)
After some simple deductions, we get
Ji (Ui − Ui −1 ) = g ,
(68)
where g = −( Ji − Ji −1 )Ui −1 + y . It is worth stressing that equation (68) differs from equation
(62) in that it is exactly constructed. In the same way as the original BFGS updating formula is derived, applying equations (63) and (68) yields
Ji = Ji−1 − (Ji−1 q − g) qT / (qT q).   (69)
Furthermore, we have

Ji [ I + Ui−1 qT / (qT q) ] = Ji−1 − (Ji−1 q − Ji−1 Ui−1 − y) qT / (qT q),   (70)
where I is the unit matrix. Note that the term in brackets on the left-hand side of the above equation is the unit matrix plus a rank-one matrix. By using the well-known Sherman-Morrison formula, we can derive
Ji = Ji−1 − (Ji−1 Ui−1) qT / (qT q + qT Ui−1) − (Ji−1 q − Ji−1 Ui−1 − y) qT / (qT q)
     + (Ji−1 q − Ji−1 Ui−1 − y)(qT Ui−1) qT / [ (qT q + qT Ui−1) qT q ].   (71)
The above equation (71) can be simplified as

Ji = Ji−1 + γ qT,   (72)

where

γ = − Ji−1 Ui−1 / (qT q + qT Ui−1) − (Ji−1 q − Ji−1 Ui−1 − y) / (qT q)
    + (Ji−1 q − Ji−1 Ui−1 − y)(qT Ui−1) / [ (qT q + qT Ui−1) qT q ].   (73)
By employing the Sherman-Morrison formula to equation (72), we finally have

J_i^{-1} = J_{i-1}^{-1} − ( J_{i-1}^{-1} γ )( qT J_{i-1}^{-1} ) / ( 1 + qT J_{i-1}^{-1} γ ).   (74)
The updating formulas (69) and (74) are a modified version of the following original BFGS formulas:

Ji = Ji−1 − (Ji−1 q − δf) qT / (qT q)   (75)

and

J_i^{-1} = J_{i-1}^{-1} − ( J_{i-1}^{-1} δf − q ) qT J_{i-1}^{-1} / ( qT J_{i-1}^{-1} δf ),   (76)
where δf is defined as in equation (63). One can find that the modified updating formulas (69) and (74) look slightly more complicated than the original BFGS formulas (75) and (76), but in fact the required number of multiplications is nearly the same for both, only about 3n^2 operations. In a similar way, equation (68) can also be utilized to derive modified DFP quasi-Newton updating formulas. Theoretically, the present updating formulas improve the accuracy of the solution by establishing themselves on the exact equation (68)
instead of the approximate quasi-Newton equation (62). The basic idea of the quasi-Newton method is a successive update of rank one or two. Therefore, it is noted that equation (68) is not actually exact, due to the approximate Jacobian matrices yielded in the previous iterative steps. It may be better to initialize the Jacobian matrix J via an exact approach. In addition, it has long been a puzzle why the one-rank BFGS updating formula performs much better than other two-rank updating schemes such as the DFP method [17]. In our understanding, the most likely culprit is the inexactness of the quasi-Newton equation (62). Therefore, this suggests that updating formulas of higher order may be more attractive in conjunction with equation (68), since they can include more additional curvature information to accelerate convergence. It is noted that in one dimension the present equations (69) and (74) degenerate into the original Newton method, just as the traditional quasi-Newton method becomes the secant method. The performance of the present methodology needs to be examined by solving various benchmark problems.
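For reference, the original rank-one update (75) can be sketched on the test system (87) that appears in section 6. The code below is an illustrative assumption, not the paper's implementation; following the remark above, the Jacobian is initialized exactly at the starting point:

```python
import numpy as np

def f(x):
    # Test system (87): x1^2 + x2^2 - 1 = 0,  0.75*x1^3 - x2 + 0.9 = 0
    return np.array([x[0]**2 + x[1]**2 - 1.0,
                     0.75 * x[0]**3 - x[1] + 0.9])

def jac(x):
    # Exact Jacobian, used only to initialize J (an "exact approach").
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [2.25 * x[0]**2, -1.0]])

def quasi_newton(x, tol=1e-12, maxit=100):
    J, fx = jac(x), f(x)
    for _ in range(maxit):
        x_new = x + np.linalg.solve(J, -fx)        # quasi-Newton step
        fx_new = f(x_new)
        if np.linalg.norm(fx_new) < tol:
            return x_new
        q, df = x_new - x, fx_new - fx             # secant pair of equation (62)
        J = J + np.outer(df - J @ q, q) / (q @ q)  # rank-one update, equation (75)
        x, fx = x_new, fx_new
    return x

x = quasi_newton(np.array([0.5, 0.5]))
```

Only function values are reused after the first step; the rank-one correction keeps the secant condition (62) satisfied along the most recent step direction q.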
Also, theorem 3.1 provides a practically significant approach to examine the deviation between the approximate and exact Jacobian matrices by the vector norm

err[ Ĵ(U) ] = || f̃(U) − Ĵ(U) U || / || f̃(U) ||,   (77)

where Ĵ(U) is the approximate Jacobian matrix of f(U), and f(U) and f̃(U) are defined in equations (64) and (66), respectively.
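The deviation check (77) can be exercised with a finite-difference approximation Ĵ(U); the small quadratic-cubic system below is an illustrative assumption, not an example from the paper:

```python
import numpy as np

L = np.array([[3.0, 1.0], [0.0, 2.0]])
b = np.array([1.0, 1.0])

def N2(u):  # assumed quadratic terms
    return np.array([u[0] * u[1], u[1]**2])

def N3(u):  # assumed cubic terms
    return np.array([u[0]**3, u[0] * u[1]**2])

def f(u):   # equation (64) form of the system
    return L @ u + N2(u) + N3(u) - b

def f_tilde(u):  # equation (66): the exact Jacobian satisfies J u = f_tilde(u)
    return L @ u + 2.0 * N2(u) + 3.0 * N3(u)

def fd_jacobian(u, h=1e-6):
    """Central finite-difference approximation of the Jacobian of f."""
    n = u.size
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(u + e) - f(u - e)) / (2.0 * h)
    return J

u = np.array([0.7, 1.3])
Jhat = fd_jacobian(u)
# Deviation norm of equation (77); small when Jhat is close to the exact J:
err = np.linalg.norm(f_tilde(u) - Jhat @ u) / np.linalg.norm(f_tilde(u))
```

Because the exact Jacobian obeys JU = f̃(U) by theorem 3.1, err measures the Jacobian error without ever forming the exact J.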
5. Pseudo-Jacobian matrix and its applications
The efficient numerical solution and analysis of nonlinear systems of algebraic equations
usually requires repeated computation of the Jacobian matrix and its inversion. Function
differences and hand-coding of derivatives are two conventional numerical methods for this
task. However, the former suffers from possible inaccuracy, particularly if the problem is
highly nonlinear. The latter method is time-consuming and error prone, and a new coding
effort is also required whenever a function is altered. Recently, automatic differentiation (AD) techniques have received increased attention. However, straightforward application of AD software to large systems can bring about unacceptable amounts of computation. Either the sparsity or the structure of the system must be exploited to overcome this limitation. On the
other hand, the SJT product approach presented in section 2 is a simple, accurate and efficient
approach in the evaluation of the Jacobian matrix of the nonlinear systems with the
formulation-H form. However, this approach is not applicable for general nonlinear system
formulation-K. It is clear from the preceding review that a generally applicable, simple and
efficient technique is, at least now, not available for the evaluation of the Jacobian matrix. In
addition, the inversion of the Jacobian matrix is an even more computationally expensive task. Our work in this section is concerned with the construction of a rank-one pseudo-Jacobian matrix to circumvent these difficulties. It is emphasized that the pseudo-Jacobian matrix presented below is in general applicable to any nonlinear algebraic system, with no restriction to polynomial-only problems.
Consider a general form of nonlinear initial value problem
dU
= LU + N (t, U ) ,
dt
(78)
where L is a given nonsingular matrix and N(t, U) is a given vector of nonlinear functions.
N(t, U) can be expressed in the form

N(t, U) = (1/n) N(t, U) (U°(−1))T U
        = w vT U,   (79)

where n is the dimension of the equation system and U°(−1) is the Hadamard inverse of the vector U, as explained in definition 2.2 of section 2. It is necessary to bear in mind that no element of the unknown vector U may be zero when using formula (79). We can avoid zero elements by using a simple linear transformation
U = U + c,
(80)
where c is a constant vector. In the following discussion, we assume without loss of generality that no zero elements are present in the vector U. Therefore, by using formula (79), equation (78) can be restated as

dU/dt = [ L + w vT ] U
      = A(t, U) U.   (81)
Note that w and v are vectors; therefore, wvT is a matrix of rank one. Compared with equation (40), it is seen that both have an explicit expression with a separated matrix form. The difference lies in the fact that the nonlinear terms are expressed as a rank-one modification of the linear term in equation (81) with no use of the Jacobian matrix, while equation (40) represents the nonlinear terms via the respective Jacobian matrices. For convenience of notation, wvT is here defined as the pseudo-Jacobian matrix, which is a fundamental idea for the ensuing work. In fact, it is noted that the pseudo-Jacobian matrix can be derived for general linear and nonlinear terms, with no restriction to polynomial-only systems.
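The construction (79) can be sketched and checked directly; the nonlinear vector N below is an arbitrary assumption for illustration:

```python
import numpy as np

def pseudo_jacobian(N_val, u):
    """Rank-one pseudo-Jacobian w v^T of equation (79):
    w = N(t,U)/n and v is the Hadamard inverse of U (all U_i nonzero)."""
    n = u.size
    w = N_val / n
    v = 1.0 / u               # Hadamard inverse U^{o(-1)}
    return np.outer(w, v)

u = np.array([0.5, -1.2, 2.0])
N_val = np.array([u[0]**2, np.sin(u[1]), u[0] * u[2]])  # assumed nonlinear terms
W = pseudo_jacobian(N_val, u)
# Since (1/n) N (U^{o(-1)})^T U = N, the rank-one matrix W reproduces N(t,U):
# W @ u equals N_val.
```

The same one-line construction underlies the stability analysis and the iterative extensions discussed below.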
As we have done in section 4.1, equation (81) can be applied to examine the local stability of the explicit and implicit methods when applied to nonlinear problems. For example, consider the iterative equation (41) of the explicit Euler method and utilize the stability condition inequalities (43) and (45); we have

h ≤ 2 / ||L + w vT|| ≤ 2 / ρ(L + w vT),   (82)

where w and v vary with Un. Some elaborate results on L + wvT can be found in [18]. For the
implicit method, consider the iterative formula (54) of the backward difference method; we have

Un+1 = [ I − (L + w vT) h ]^{-1} Un,   (83)

where w and v change with Un+1. Therefore, for the A-stable backward difference method, the local stability condition is that the matrix L + wvT remain negative definite.
It is emphasized here that the above procedure of applying the pseudo-Jacobian matrix to the
stability analysis of the explicit Euler and implicit back difference methods is of equal
applicability to all explicit, implicit and semi-implicit methods such as the Runge-Kutta,
Rosenbrock, Gear backward difference and fully implicit Runge-Kutta methods, etc.
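A minimal sketch of the backward-Euler step (83) with the pseudo-Jacobian follows; the problem data are assumed for illustration, and the pseudo-Jacobian is frozen at Un (the text lets w and v vary with Un+1, so freezing is a simplification):

```python
import numpy as np

# Assumed test problem: L = diag(-2, -3) and N(U) = -0.1 * U**3 componentwise,
# so L + w v^T stays negative definite along the trajectory.
L = np.diag([-2.0, -3.0])

def step_back_euler(u, h):
    """One step of equation (83) with w, v frozen at U_n."""
    N_val = -0.1 * u**3
    W = np.outer(N_val / u.size, 1.0 / u)   # pseudo-Jacobian w v^T, equation (79)
    return np.linalg.solve(np.eye(2) - (L + W) * h, u)

u = np.array([1.0, 0.5])
norms = [np.linalg.norm(u)]
for _ in range(20):
    u = step_back_euler(u, 0.1)
    norms.append(np.linalg.norm(u))
# The norm of U decays step by step, consistent with the negative-definite
# local stability condition stated above.
```

Only a rank-one matrix is added to L each step, so the cost beyond the linear problem is negligible.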
In the following, we establish the relationship between the pseudo-Jacobian matrix and
original Jacobian matrix. Consider the nonlinear terms in equation (66), by using theorem 3.1,
we have
J N U = 2 N (2 ) (U ) + 3 N (3) (U ) ,
(84)
where J_N represents the Jacobian matrix of the nonlinear terms. It is observed that equation (84) cannot determine the Jacobian matrix J_N uniquely in more than one dimension. By multiplying the right-hand side by (1/n)(U°(−1))T U (= 1), we get

J_N U = [ 2 N^(2)(U) + 3 N^(3)(U) ] (1/n) (U°(−1))T U = Ĵ_N U,   (85)

where Ĵ_N = (1/n)[ 2 N^(2)(U) + 3 N^(3)(U) ] (U°(−1))T is the pseudo-Jacobian matrix of the original Jacobian matrix J_N. So we have
Ĵ_N = (1/n) J_N [ 1        U1/U2    ...   U1/Un ]
                [ U2/U1    1        ...   U2/Un ]
                [ ...      ...      ...   ...   ]
                [ Un/U1    Un/U2    ...   1     ]  = J_N p(U),   (86)

where p(U) is defined as the deviation matrix between the original Jacobian and pseudo-Jacobian matrices for polynomial-only problems. A similar relationship for general nonlinear systems is not available.
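Relationship (86) can be verified numerically; the quadratic and cubic terms below are an assumed example, not taken from the paper:

```python
import numpy as np

def N2(u):  # assumed quadratic terms
    return np.array([u[0] * u[1], u[1]**2])

def N3(u):  # assumed cubic terms
    return np.array([u[0]**3, u[0] * u[1]**2])

def J_N(u):
    """Exact Jacobian of the nonlinear terms N2 + N3."""
    J2 = np.array([[u[1], u[0]], [0.0, 2.0 * u[1]]])
    J3 = np.array([[3.0 * u[0]**2, 0.0], [u[1]**2, 2.0 * u[0] * u[1]]])
    return J2 + J3

u = np.array([0.8, 1.5])
n = u.size
# Pseudo-Jacobian of equation (85): rank-one, same action on U as J_N.
Jhat = np.outer(2.0 * N2(u) + 3.0 * N3(u), 1.0 / u) / n
# Deviation matrix p(U) of equation (86): (i, j) entry U_i / U_j, scaled by 1/n.
p = np.outer(u, 1.0 / u) / n
# Jhat equals J_N(u) @ p, and both give the same product with U.
```

The check confirms that Ĵ_N and J_N differ exactly by the factor p(U) while acting identically on U itself.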
It is straightforward that this pseudo-Jacobian matrix can be employed to extend the linear iterative Jacobi, Gauss-Seidel and SOR methods directly to nonlinear systems of equations. In addition, the concept can be applied to the Newton method to avoid the evaluation and inversion of the Jacobian matrix. For the sake of brevity, these are not presented here.
6. Some remarks
In this study, the Jacobian matrix is established as a bridge between linear and nonlinear
polynomial-only problems. Some significant results are achieved through the application of
the theorem 3.1. It is worth stressing that although the theorem 3.1 was verified through the
use of formulation-K and formulation-H given in section 2, it holds true no matter which
approaches are employed in the expression of nonlinear analogue term and the evaluation of
the Jacobian matrix. As was mentioned in section 2.1, any nonlinear algebraic polynomial-only equations can be expressed in the formulation-K form, and theorem 3.1 can thus be exhibited for general nonlinear polynomial equations. For example, consider the very simple mixed quadratic and cubic nonlinear algebraic equations [19]
x1^2 + x2^2 − 1 = 0
0.75 x1^3 − x2 + 0.9 = 0   (87)
It can be expressed in the formulation-K form as
ψ ( x ) = Lx + G2 × 4 ( x ⊗ x ) + R2 × 8 ( x ⊗ x ⊗ x ) + F = 0 ,
(88)
and by using theorem 3.1, we have

ψ(x) = [ L + (1/2) J^(2)(x) + (1/3) J^(3)(x) ] x + F = 0,   (89)
where x = (x1, x2); F, L, G2×4 and R2×8 are the constant vector and coefficient matrices, and J^(2)(x) and J^(3)(x) are the Jacobian matrices of the quadratic and cubic nonlinear terms, respectively.
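A quick numerical check confirms that the linear-form representation (89) reproduces (87); the splitting into L, F and the nonlinear Jacobians below is written out by hand for this example:

```python
import numpy as np

def psi(x):
    # System (87): x1^2 + x2^2 - 1 = 0,  0.75*x1^3 - x2 + 0.9 = 0
    return np.array([x[0]**2 + x[1]**2 - 1.0,
                     0.75 * x[0]**3 - x[1] + 0.9])

# Linear part and constant vector of the formulation-K splitting:
L = np.array([[0.0, 0.0], [0.0, -1.0]])
F = np.array([-1.0, 0.9])

def J2(x):
    # Jacobian of the quadratic terms (x1^2 + x2^2, 0)
    return np.array([[2.0 * x[0], 2.0 * x[1]], [0.0, 0.0]])

def J3(x):
    # Jacobian of the cubic terms (0, 0.75*x1^3)
    return np.array([[0.0, 0.0], [2.25 * x[0]**2, 0.0]])

x = np.array([0.6, -0.3])
lhs = (L + 0.5 * J2(x) + J3(x) / 3.0) @ x + F
# lhs agrees with psi(x), i.e. equation (89) is identical to (87).
```

This mirrors the familiar scalar identity noted below: for a monomial of degree m, the Jacobian contracted with x returns m times the monomial.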
The nonlinear term XAX, in which X is a rectangular matrix of the desired values and A is a constant coefficient matrix, often appears in optimal control, filtering and estimation. Theorem 3.1 is equally effective for such nonlinear terms. Interested readers may try more cases. It is interesting to note that equation (89) is in fact identical in form to the familiar derivative expression of a scalar polynomial function. In practical applications, it is not actually necessary to express nonlinear algebraic equations in the formulation-K form like equation (88). It is stressed that theorem 3.1 provides a convenient approach to express nonlinear systems of equations in a linear-form representation without the use of a linearization procedure such as the Newton method. It is also well known that a very large class of real-world nonlinear problems can be modeled or numerically discretized as polynomial-only algebraic systems of equations. The results presented in this paper are in general applicable to all these problems. Therefore, this work is potentially important in a broad spectrum of science and engineering.
This paper is confined to integral-order nonlinear problems. In fact, theorem 3.1 is also applicable to fractional-order nonlinear polynomial-only problems. We will address problems of this type in a subsequent paper.
The concept of the pseudo-Jacobian matrix can be used for general nonlinear systems of equations without restriction to polynomial-only problems. Due to its rank-one feature, the evaluation of the inverse is avoided in various nonlinear computations and analyses, which results in a considerable saving of computing effort.
In sections 4.1 and 5, the explicit and implicit Euler methods, two of the simplest integrators, are studied so that the complexity of specialized integration methods does not obscure the exposition of the present fundamental strategy or make it hard to understand. It is very clear that the same procedure can easily be extended to the nonlinear stability analysis of general explicit and implicit methods. For the A-stable methods, it is found that the local stability of solutions can be assured if the time-varying coefficient matrix remains negative definite, which provides a clue for constructing stable integrators for nonlinear initial value problems.
The present work may be of theoretical importance in itself and provides some innovative viewpoints for nonlinear computation and analysis. Numerical examples assessing the given formulas and methodologies are presently underway. A further study of the various possibilities of applying the Hadamard product, the SJT product, theorem 3.1 and the pseudo-Jacobian matrix will be beneficial. For example, according to theorem 3.1, all nonlinear polynomial systems of equations can be expressed as a separated-matrix linear system with a variable-dependent coefficient matrix by using the Jacobian matrix. Therefore, the analysis of these Jacobian matrices may expose essential sources of some challenging problems such as numerical uncertainty [11], shock and chaotic behaviors.
References

1. W. Chen and T. Zhong, The study on nonlinear computations of the DQ and DC methods, Numer. Methods for P. D. E., 13, 57-75, 1997.
2. W. Chen, Differential Quadrature Method and Its Applications in Engineering: Applying Special Matrix Product to Nonlinear Computations, Ph.D. dissertation, Shanghai Jiao Tong University, Dec. 1996.
3. R. A. Horn, "The Hadamard product", in Matrix Theory and Applications, Proc. of Symposia in Applied Mathematics, Vol. 40, C. R. Johnson Ed., American Mathematical Society, Providence, Rhode Island, pp. 87-169, 1990.
4. G. Ni, Ordinary Matrix Theory and Methods (in Chinese), Shanghai Sci. & Tech. Press, 1984.
5. B. A. Finlayson, The Method of Weighted Residuals and Variational Principles, Academic Press, New York, 1972.
6. L. Lapidus and G. F. Pinder, Numerical Solution of Partial Differential Equations in Science and Engineering, John Wiley & Sons Inc., New York, 1982.
7. P. Lancaster and M. Tismenetsky, The Theory of Matrices with Applications, 2nd edition, Academic Press, Orlando, 1985.
8. P. W. Partridge, C. A. Brebbia and L. W. Wrobel, The Dual Reciprocity Boundary Element Method, Computational Mechanics Publications, Southampton, UK, 1992.
9. M. Zerroukat, H. Power and C. S. Chen, A numerical method for heat transfer problems using collocation and radial basis functions, Inter. J. Numer. Methods Engrg., 42, 1263-1278, 1998.
10. K. C. Park, An improved stiffly stable method for direct integration of non-linear structural dynamic equations, ASME Journal of Applied Mechanics, 464-470, 1975.
11. H. C. Yee and R. K. Sweby, Aspects of numerical uncertainties in time marching to steady-state numerical solutions, AIAA J., 36, 712-723, 1998.
12. M. A. Dokainish and K. Subbaraj, A survey of direct time-integration methods in computational structural dynamics - I. Explicit methods, Computers & Structures, 32(6), 1371-1386, 1989.
13. M. A. Dokainish and K. Subbaraj, A survey of direct time-integration methods in computational structural dynamics - II. Implicit methods, Computers & Structures, 32(6), 1387-1401, 1989.
14. G. H. Golub and J. M. Ortega, Scientific Computing and Differential Equations, Academic Press, New York, 1992.
15. J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.
16. L. A. Hageman and T. A. Porsching, Aspects of non-linear block successive overrelaxation, SIAM J. Numer. Anal., 12, 316-335, 1975.
17. K. W. Brodlie, "Unconstrained minimization", in The State of the Art in Numerical Analysis, D. Jacobs Ed., Academic Press, New York, pp. 229-268, 1977.
18. J. H. Wilkinson, The Algebraic Eigenvalue Problem, Oxford University Press, 1965.
19. K. Che and P. Huang, Numerical Recipes for Science and Engineering (in Chinese), China Railway Press, Beijing, 1987.
Exploiting generalisation symmetries in accuracy-based
learning classifier systems: An initial study
Larry Bull
Abstract. Modern learning classifier systems typically exploit a
niched genetic algorithm to facilitate rule discovery. When used
for reinforcement learning, such rules represent generalisations
over the state-action-reward space. Whilst encouraging maximal
generality, the niching can potentially hinder the formation of
generalisations in the state space which are symmetrical, or very
similar, over different actions. This paper introduces the use of
rules which contain multiple actions, maintaining accuracy and
reward metrics for each action. It is shown that problem
symmetries can be exploited, improving performance, whilst not
degrading performance when symmetries are reduced.
1 INTRODUCTION
Learning Classifier Systems (LCS) [Holland, 1976] are rule-based systems, where the rules are usually in the traditional
production system form of “IF condition THEN assertion”. An
evolutionary algorithm and/or other heuristics are used to search
the space of possible rules, whilst another learning process is
used to assign utility to existing rules, thereby guiding the search
for better rules. LCS are typically used as a form of
reinforcement learner, although variants also exist for supervised
[Bernadó Mansilla & Garrell, 2003], unsupervised [Tammee et
al., 2007] and function [Wilson, 2002] learning. Almost twenty
years ago, Stewart Wilson introduced a form of LCS in which
rule utility is calculated solely by the accuracy of the predicted
consequences of rule assertions/actions – the “eXtended
Classifier System” (XCS) [Wilson, 1995]. Importantly, XCS
makes a clear connection between LCS and modern
reinforcement learning (see [Sutton & Barto, 1998]): XCS uses a
genetic algorithm (GA) [Holland, 1975] to discover regularities
in the problem thereby enabling generalisations over the
complete state-action-reward space. It has been found able to
solve a number of well-known problems optimally (e.g., see
[Butz, 2006]). Modern LCS, primarily XCS and its derivatives,
have been applied to a number of real-world problems (e.g., see
[Bull, 2004]), particularly data mining (e.g., see [Bull et al.,
2008]), to great effect. Formal understanding of modern LCS has
also increased in recent years (e.g., see [Bull & Kovacs, 2005]).
XCS uses a niched GA, that is, it runs the GA over rules
which are concurrently active. Initially, following [Booker,
1985] (see also [Fogarty, 1994]), the GA was run in the match
set [M], i.e., the subset of rules whose condition matches the
current state. The primary motivation for restricting the GA in
this way is to avoid the recombination of rule conditions which
generalise over very different areas of the problem space.
(Author's affiliation: Dept. of Computer Science & Creative Tech., UWE, BS16 1QY, UK. Email: [email protected].)
Wilson [1998] later increased the niching to action sets [A], i.e., the
subset of [M] whose action matches the chosen output of the
system. Wilson correctly highlighted that for tasks with
asymmetrical generalisations per action, the GA would still have
the potential to unhelpfully recombine rules working over
different sub-regions of the input space unless it is moved to [A].
Using two simple benchmark tasks, he didn’t show significant
changes in performance but did show a decrease in the number
of unique rules maintained when some asymmetry existed from
the use in [A]. Modern XCS uses the [A] form of GA, which has
been studied formally in various ways (e.g., see [Bull, 2002;
2005][Butz et al., 2004][Butz et al., 2007]). It can be noted that
the first LCS maintained separate GA populations per action
[Holland & Reitman, 1978] (see [Wilson, 1985] for a similar
scheme).
The degree of symmetry within the state-action-reward space
across all problems is a continuum. As noted, running the GA in
niches of concurrently active rules identifies those whose
conditions overlap in the problem space. However, using the GA
in [A] means that any common structure in the problem space
discovered by a rule with one action must wait to be shared
through the appropriate mutation of its action. Otherwise it must
be rediscovered by the GA for rules with another action(s). As
the degree of symmetry in the problem increases, so the
potentially negative effect of using the GA in [A] on the search
process increases.
This paper proposes a change in the standard rule structure to
address the issue and demonstrates it using a slightly simplified
version of XCS, termed YCS [Bull, 2005].
2 YCS: A SIMPLE ACCURACY-BASED LCS
YCS is without internal memory, the rule-base consists of a
number (P) of condition-action rules in which the condition is a
string of characters from the traditional ternary alphabet {0,1,#}
and the action is represented by a binary string. Associated with
each rule is a predicted reward value (r), a scalar which indicates
the error () in the rule’s predicted reward and an estimate of the
average size of the niches in which that rule participates (). The
initial random population has these parameters initialized,
somewhat arbitrarily, to 10.
On receipt of an input message, the rule-base is scanned, and
any rule whose condition matches the message at each position
is tagged as a member of the current match set [M]. An action is
then chosen from those proposed by the members of the match
set and all rules proposing the selected action form an action set
[A]. A version of XCS’s explore/exploit action selection scheme
will be used here. That is, on one cycle an action is chosen at
random and on the following the action with the highest average
fitness-weighted reward is chosen deterministically.
The simplest case of immediate reward R is considered here. Reinforcement in YCS consists of updating the error, the niche size estimate and then the reward estimate of each member of the current [A] using the Widrow-Hoff delta rule with learning rate β:

ε_j ← ε_j + β( |R − r_j| − ε_j )   (1)

r_j ← r_j + β( R − r_j )   (2)

σ_j ← σ_j + β( |[A]| − σ_j )   (3)
The original YCS employs two discovery mechanisms, a panmictic (standard global) GA and a covering operator. On each time-step there is a probability g of GA invocation. The GA uses roulette wheel selection to determine two parent rules based on the inverse of their error:

f_j = 1 / ( ε_j^v + 1 )   (4)
Here the exponent v enables control of the fitness pressure
within the system by facilitating tuneable fitness separation
under fitness proportionate selection (see [Bull, 2005] for
discussions). Offspring are produced via mutation (probability μ) and crossover (single point with probability χ), inheriting the
parents’ parameter values or their average if crossover is
invoked. Replacement of existing members of the rulebase uses
roulette wheel selection based on estimated niche size. If no
rules match on a given time step, then a covering operator is
used which creates a rule with the message as its condition
(augmented with wildcards at the rate p#) and a random action,
which then replaces an existing member of the rulebase in the
usual way. Parameter updating and the GA are not used on
exploit trials.
The niche GA mechanism used here is XCS’s time-based approach under which each rule maintains a time-stamp of the last system cycle upon which it was part of a GA (a development of [Booker, 1989]). The GA is applied within the current action set [A] when the average number of system cycles since the last GA in the set is over a threshold θGA. If this condition is met, the GA time-stamp of each rule is set to the current system time, two parents are chosen according to their fitness using standard roulette-wheel selection, and their offspring are potentially crossed and mutated, before being inserted into the rule-base as described above.
YCS is therefore a simple accuracy-based LCS which captures the fundamental characteristics of XCS: “[E]ach classifier maintains a prediction of expected payoff, but the classifier’s fitness is not given by the prediction. Instead the fitness is a separate number based on an inverse function of the classifier’s average prediction error” [Wilson, 1995] and a “classifier’s deletion probability is set proportional to the [niche] size estimate, which tends to make all [niches] have about the same size, so that classifier resources are allocated more or less equally to all niches” [ibid]. However, YCS does not include a number of other mechanisms within XCS, such as niche-based fitness sharing, which are known to have beneficial effects in some domains (see [Butz et al., 2004]).
The pressure within XCS and its derivatives to evolve maximally general rules over the problem space comes from the triggered niche GA. Selection for reproduction is based upon the accuracy of prediction, as described. Thus within a niche, accurate rules are more likely to be selected. However, more general rules participate in more niches as they match more inputs. Rules which are both general and accurate therefore typically reproduce the most: the more general and accurate, the more a rule is likely to be selected. Any rule which is less general but equally accurate will have fewer chances to reproduce. Any rule which is over general will have more chances to reproduce but a lower accuracy (see [Butz et al., 2004] for detailed analysis).
Under the new rule representation scheme introduced here each rule consists of a single condition and each possible action. Associated with each action are the two parameters updated according to equations 1 and 2:

Traditional rule – condition: action: reward: error: niche

New rule – condition: action1: reward: error: niche
           action2: reward: error
           action3: reward: error
           …
           actionN: reward: error

Figure 1: Schematic of YCS as used here.
All other processing remains the same as described. In this
way, any symmetry is directly exploitable by a single rule whilst
still limiting the possibility for recombining rules covering
different parts of the problem space since the GA is run in [A],
as Wilson [1998] described. Any action which is not correctly
associated with the generalisation over the problem space
represented by the condition will have a low accuracy and can be
ignored in any post processing of rules for knowledge discovery.
The generalisation process of modern LCS is implicitly extended
to evolve rules which are accurate over as many actions as
possible since they will participate in more niches. Note that the
niche size estimate can become noisier than in standard
YCS/XCS. Similarly, any effects from the potential maintenance
of inaccurate generalisations in some niches due to their being
accurate in other niches are not explored here. Initial results do
not indicate any significant disruption however.
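The multi-action rule structure of Figure 1 can be sketched as follows (an assumption for illustration, not the author's implementation); each action keeps its own reward and error estimates while the condition and niche-size estimate are shared:

```python
class MultiActionRule:
    def __init__(self, condition, n_actions):
        self.condition = condition
        self.reward = [10.0] * n_actions   # r per action
        self.error = [10.0] * n_actions    # error per action
        self.sigma = 10.0                  # one shared niche-size estimate

    def reinforce(self, action, R, niche_size, beta=0.2):
        """Updates (1)-(3), applied to the parameters of one action."""
        self.error[action] += beta * (abs(R - self.reward[action]) - self.error[action])
        self.reward[action] += beta * (R - self.reward[action])
        self.sigma += beta * (niche_size - self.sigma)

    def accurate_actions(self, eps0=1.0):
        """Actions whose payoff prediction has become accurate; inaccurate
        ones can be ignored in post-processing, as described in the text."""
        return [a for a, e in enumerate(self.error) if e < eps0]

rule = MultiActionRule('1111###############1', 2)
for _ in range(300):
    rule.reinforce(0, 0, 20)      # action 0 consistently receives payoff 0
    rule.reinforce(1, 1000, 20)   # action 1 consistently receives payoff 1000
```

With deterministic payoffs both action slots become accurate, so a single condition covers what would otherwise require one rule per action.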
3 EXPERIMENTATION
3.1 Symmetry
Following [Wilson, 1995], the well-known multiplexer task is used in this paper. These Boolean functions are defined for binary strings of length l = k + 2^k, under which the k bits index into the remaining 2^k bits, returning the value of the indexed bit. A correct classification results in a payoff of 1000, otherwise 0.
For example, in the k=4 multiplexer the following traditional
rules form one optimal [M] (error and niche size not shown):
1111###############1: 1: 1000
1111###############1: 0: 0
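The multiplexer payoff and ternary-rule matching just described can be sketched as follows (an illustrative implementation, not the author's code):

```python
def multiplexer(state, k):
    """Correct action for a binary string of length l = k + 2**k."""
    address = int(state[:k], 2)        # the k address bits select one data bit
    return int(state[k + address])

def payoff(state, action, k):
    """1000 for a correct classification, otherwise 0."""
    return 1000 if action == multiplexer(state, k) else 0

def matches(condition, state):
    """A ternary {0,1,#} condition matches if every non-# position agrees."""
    return all(c == '#' or c == s for c, s in zip(condition, state))

state = '1111' + '0' * 15 + '1'        # a 20-bit input for k = 4
# Address 1111 (= 15) selects the last data bit, so action 1 earns 1000 and
# the first optimal rule above matches this state.
```

Both optimal [M] rules above share one generalisation over the state space, which is exactly the symmetry the multi-action representation exploits.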
Figure 2 shows the performance of YCS using the new multi-action rule representation on the 20-bit multiplexer (k=4) problem with P=1000, p#=0.6, μ=0.04, v=10, χ=0.5, θGA=25 and β=0.2. After [Wilson, 1995], performance, taken here to mean the fraction of correct responses, is shown from exploit trials only, using a 50-point running average, averaged over twenty runs. It can be seen that optimal performance is reached around 60,000 trials. Figure 2 also shows the average specificity, taken here to mean the fraction of non-# bits in a condition, for the LCS. That is, the amount of generalization produced. The maximally general solution to the 20-bit multiplexer has specificity 5/20 = 0.25, and YCS can be seen to produce rule-bases with an average specificity very close to the optimum. The
average error of rules can also be seen to decrease over time.
Figure 3 shows the performance of YCS using the traditional
rule representation with the same parameters. As can be seen,
optimal performance is not reliably reached in the allowed time.
Figure 4 shows the performance of the same system with
P=2000, with optimality reached around 60,000 trials (matching
that of XCS with the same equivalent parameters, e.g., [Butz et
al., 2004]). That is, with double the rule-base resource, the GA is
able to reliably (re)discover the problem structure in all [A] over
the same time period using the traditional rule representation.
Hence, in a problem with complete symmetry between [A], the
new rule representation presented here significantly improves the
efficiency of the GA.
Figure 2: Performance of new rule representation.
Figure 3: Performance of traditional rule representation.
3.2 Less Symmetry
To reduce the symmetry in the multiplexer in a simple way, an
extra bit can be added. Here an incorrect response becomes
sensitive to the value of the extra input bit: if it is set, the reward
is 500, otherwise it is 0. That is, using the new rule
representation, it is no longer possible for just one rule to use the
same generalisation over the input space to accurately predict the
reward for each action in a given [M]. The following traditional
rules represent one optimal [M]:
1111###############1#: 1: 1000
1111###############11: 0: 500
1111###############10: 0: 0
Figure 4: As Figure 3 but with larger population size.
Figure 5 shows how YCS is unable to solve the less
symmetrical 20-bit multiplexer using the new rule representation
with P=1000. Figures 6 and 7 show how the performance of
YCS with and without the new representation (respectively) is
optimal and roughly equal with P=2000. Note that the new
representation still only requires two rules per [M], as opposed to
three in the traditional scheme. However, although there is a
slight increase in learning speed with the new scheme, it is not
statistically significant (T-test, time taken to reach and maintain
optimality over 50 subsequent exploit cycles, p>0.05). Figures 8
and 9 show there is significant benefit (p≤0.05) from the new
representation when k=5, i.e., the harder 37-bit multiplexer
(P=5000).
3.3 Multiple Actions
Multiplexers are binary classification problems. To create a
multi-class/multi-action variant in a simple way the case where
the data bit is a ‘1’ is altered to require an action equal to the
value of the address bits for a correct response. In this way there
are 2k possible actions/classes. Under the new format with k=3,
one optimal [M] could be represented as the single rule:
111#######1: 8: 1000
7: 0
6: 0
5: 0
4: 0
3: 0
2: 0
1: 0
0: 0
Figure 6: As Figure 5 but with larger population size.
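The correct-action scheme for this multi-action variant can be sketched as below. Note one reading choice: taking the example rule at face value (address 111 with data bit 1 maps to action 8), we offset the address value by one so that action 0 remains the response when the data bit is 0; this mapping is our interpretation, not stated explicitly in the text.

```python
def correct_action_multiclass_mux(state, k=3):
    """Correct action for the multi-class multiplexer variant above.

    If the addressed data bit is 0, the correct action is 0 (as in the
    binary task).  If it is 1, the correct action encodes the address
    value, offset by one so that address 111 yields action 8 as in the
    example rule (our reading of the scheme).
    """
    address = int(state[:k], 2)
    data_bit = int(state[k + address])
    return address + 1 if data_bit == 1 else 0
```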
Figures 10 and 11 show the performance of YCS with and
without the new representation (respectively) with k=3 and
P=2000. As can be seen, both representations are capable of
optimal performance with the parameters used but the new
representation learns significantly faster (p≤0.05).
Figure 5: Performance of new scheme on less symmetrical
task.
Figure 7: Performance of traditional rules on less
symmetrical task (vs. Figure 6).
Figure 8: Performance of new scheme on less symmetrical
multiplexer when k=5.
Figure 9: Performance of traditional rules on less
symmetrical multiplexer when k=5.
3.4 Imbalance
The frequency of state visitation is rarely close to uniform in
most reinforcement learning tasks. For example, in a spatial
maze navigation task, those states at or near a goal will typically
be visited more often than those states far from a goal. In data
mining, real-world data does not typically contain equal
examples of all cases of the underlying concept space - known as
the class imbalance problem, and often tackled through
under/over sampling. This bias of sampling the problem space
can cause difficulties in the production of accurate
generalisations since over general rules can come to dominate
niches due to their frequency of use (updating and reproduction)
in more frequently visited states. Orriols-Puig and Bernado
Mansilla [2008] introduced a heuristic specifically for (limited
to) binary classification tasks which dynamically alters the
learning rate (β) and the frequency of GA activity (θGA) to address
the issue in accuracy-based LCS. They show improved learning
in both imbalanced multiplexers and well-known data sets.
The new rule representation would appear to have some
potential to address the issue of imbalance generally when there
is symmetry in the underlying problem space, i.e., both for
reinforcement learning and data mining. Since all actions are
maintained by all rules, information about all actions is
maintained in the population. Whilst over general conditions will
quickly emerge for the same reasons as for the traditional
representation, later in the search, the use and updating of the
correct actions for less frequently visited states will indicate their
true value and the GA will (potentially) adjust generalisations
appropriately. An imbalanced multiplexer (akin to [Orriols-Puig
& Bernado Mansilla, 2008]) can be created by simply
introducing a probabilistic bias in sampling action ‘1’ compared
to ‘0’. Figures 12 and 13 show the performance of YCS with and
without the new representation (respectively) with k=4, P=2000
and a bias of 80% (4:1). Exploit cycle testing remains unbiased,
as before. As can be seen, the new representation is able to cope
with the bias, whereas the equivalent traditional rule
representation is not. The same was generally found to be true
for various levels of bias, k, etc. (not shown).
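The biased sampling used here can be reconstructed roughly as follows; the exact procedure is not given in the text, so this sketch (drawing a uniform state and then forcing the addressed data bit to the chosen class) is only one plausible implementation, with the function name our own.

```python
import random

def sample_biased_state(k=4, p_one=0.8, rng=random):
    """Draw a multiplexer state whose correct action is 1 with
    probability `p_one` (0.8 gives the 4:1 bias used above).

    Reconstruction: draw a uniform random state, then overwrite the
    addressed data bit according to the desired class frequency.
    """
    n = k + 2 ** k
    bits = [rng.randint(0, 1) for _ in range(n)]
    address = int("".join(map(str, bits[:k])), 2)
    bits[k + address] = 1 if rng.random() < p_one else 0
    return "".join(map(str, bits))
```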
Figure 10: Performance of new scheme on multi-action task.
Figure 11: Performance of traditional rules on multi-action
task.
Figure 12: Performance of new scheme on the imbalanced
task.
Figure 13: Performance of the traditional scheme on the
imbalanced task.
4 CONCLUSIONS & FUTURE WORK
This paper has proposed the use of rules which contain multiple
actions, maintaining accuracy and reward metrics for each
action. This somewhat minor alteration appears to provide
benefits over the traditional approach in a variety of scenarios.
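As a concrete illustration of the proposed rule structure, the sketch below keeps one ternary condition but a reward prediction and an error estimate per action. The field names, initial values, and the Widrow-Hoff update step are our own choices, following common LCS practice rather than the exact YCS update equations.

```python
from dataclasses import dataclass, field

@dataclass
class MultiActionRule:
    """One ternary condition with per-action reward and error estimates."""
    condition: str                 # e.g. "1111###############1#"
    n_actions: int = 2
    beta: float = 0.2              # learning rate (illustrative value)
    reward: list = field(default_factory=list)
    error: list = field(default_factory=list)

    def __post_init__(self):
        if not self.reward:
            self.reward = [500.0] * self.n_actions   # initial prediction
            self.error = [0.0] * self.n_actions

    def matches(self, state):
        return all(c in ("#", s) for c, s in zip(self.condition, state))

    def update(self, action, payoff):
        # Widrow-Hoff style updates, applied only to the executed action
        self.error[action] += self.beta * (abs(payoff - self.reward[action]) - self.error[action])
        self.reward[action] += self.beta * (payoff - self.reward[action])
```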
Future work should also consider the new, general rule structure
proposed here with more complex representations such as real-valued intervals (e.g., see [Stone & Bull, 2003]) or genetic
programming (e.g., see [Preen & Bull, 2013]), together with
delayed reward tasks.
Kovacs and Tindale [2013] have recently highlighted issues
regarding the niche GA, particularly with respect to overlapping
problems. They compare the performance of an accuracy-based
LCS with a global GA (see also [Bull, 2005]), a niche GA, and a
global GA which uses the calculated selective probabilities of
rules under a niche GA, the aim being to avoid the reduced
actual selection of accurate, general rules due to overlap within a
given niche. Using the 11-bit multiplexer (k=3) problem they
show a possible slight increase in performance from their new
scheme over the niche GA, with the global GA performing
worst. Their new scheme shows an increase in the number of
unique rules maintained compared to the niche GA and they
postulate this increase in rule diversity may explain the
suggested difference in performance. This seems likely given the
multiplexer does not contain any overlap. Note that Wilson
[1994] proposed using both a global and niche GA together “to
offset any inbreeding tendency” within niches. Since they used a
supervised form of XCS which only maintains the highest
reward entries of the state-action-reward map (UCS) [Bernado
Mansilla & Garrell, 2003], the exploitation of symmetry does not
help to explain their findings. The effect of the new
representation in overlapping problems remains to be explored.
The related use of multiple conditions per action may be a more
appropriate approach.
REFERENCES
Bernado Mansilla, E. & Garrell, J. (2003) Accuracy-Based Learning
Classifier Systems: Models, Analysis and Applications to
Classification Tasks. Evolutionary Computation 11(3): 209-238.
Booker, L.B. (1985) Improving the Performance of Genetic Algorithms
in Classifier Systems. In J.J. Grefenstette (ed) Proceedings of the
First International Conference on Genetic Algorithms and their
Applications. Lawrence Erlbaum Associates, pp80-92.
Booker, L.B. (1989) Triggered Rule Discovery in Classifier Systems. In
J. Schaffer (ed) Proceedings of the Third International Conference
on Genetic Algorithms and their Applications. Morgan Kaufmann,
pp265-274.
Bull, L. (2002) On Accuracy-based Fitness. Soft Computing 6(3-4): 154-161.
Bull, L. (2004)(ed) Applications of Learning Classifier Systems.
Springer.
Bull, L. (2005) Two Simple Learning Classifier Systems. In L. Bull & T.
Kovacs (eds) Foundations of Learning Classifier Systems. Springer,
pp63-90.
Bull, L. & Kovacs, T. (2005)(eds) Foundations of Learning Classifier
Systems. Springer.
Bull, L., Bernado Mansilla, E & Holmes, J. (2008)(eds) Learning
Classifier Systems in Data Mining. Springer.
Butz, M. (2006) Rule-based Evolutionary Online Learning Systems.
Springer.
Butz, M., Kovacs, T., Lanzi, P-L & Wilson, S.W. (2004) Toward a
Theory of Generalization and Learning in XCS. IEEE Transactions
on Evolutionary Computation 8(1): 28-46
Butz, M., Goldberg, D., Lanzi, P-L. & Sastry, K. (2007) Problem
solution sustenance in XCS: Markov chain analysis of niche support
distributions and the impact on computational complexity. Genetic
Programming and Evolvable Machines 8(1): 5-37
Fogarty, T.C. (1994) Co-evolving Co-operative Populations of Rules in
Learning Control Systems. In T.C. Fogarty (ed) Evolutionary
Computing. Springer, pp195-209.
Holland, J.H. (1975) Adaptation in Natural and Artificial Systems.
University of Michigan Press.
Holland, J.H. (1976) Adaptation. In R. Rosen & F.M. Snell (eds)
Progress in Theoretical Biology, 4. Academic Press, pp313-329.
Holland, J.H. & Reitman, J.H. (1978) Cognitive Systems Based in
Adaptive Algorithms. In Waterman & Hayes-Roth (eds) Pattern-directed Inference Systems. Academic Press.
Kovacs, T. & Tindale, R. (2013) Analysis of the niche genetic algorithm
in learning classifier systems. In Proceedings of the Genetic and
Evolutionary Computation Conference. ACM Press, pp1069-1076.
Orriols-Puig, A. & Bernado Mansilla, E. (2008) Evolutionary Rule-based
Systems for Imbalanced Data Sets. Soft Computing 13(3): 213-225.
Preen, R. & Bull, L. (2013) Dynamical Genetic Programming in XCSF.
Evolutionary Computation 21(3): 361-388.
Stone, C. & Bull, L. (2003) For Real! XCS with Continuous-Valued
Inputs. Evolutionary Computation 11(3): 299-336
Sutton, R.S. & Barto, A.G. (1998) Reinforcement Learning. MIT Press.
Tammee, K., Bull, L. & Ouen, P. (2007) Towards Clustering with XCS.
In D. Thierens et al. (eds) Proceedings of the Genetic and
Evolutionary Computation Conference. ACM Press, pp1854-1860
Wilson, S.W. (1985) Knowledge Growth in an Artificial Animal. J.J.
Grefenstette (ed) Proceedings of the First International Conference
on Genetic Algorithms and their Applications. Lawrence Erlbaum
Associates, pp16-23.
Wilson, S.W. (1994) ZCS: A Zeroth-level Classifier System.
Evolutionary Computation 2(1):1-18.
Wilson, S.W. (1995) Classifier Fitness Based on Accuracy. Evolutionary
Computation 3(2):149-177.
Wilson, S.W. (1998) Generalization in the XCS Classifier System. In
Koza et al. (eds.) Genetic Programming 1998: Proceedings of the
Third Annual Conference. Morgan Kaufmann, pp322-334.
Wilson, S.W. (2002) Classifiers that Approximate Functions. Natural
Computing 1(2-3): 211-234.
Generalizing, Decoding, and Optimizing
Support Vector Machine Classification
by Dipl. Math. Mario Michael Krell
Dissertation
submitted for the degree of Doctor of Natural Sciences
(Dr. rer. nat.)
Submitted to Faculty 3 (Mathematics & Computer Science)
of the Universität Bremen
in January 2015
Date of the doctoral colloquium: 26 March 2015
Reviewers:
Prof. Dr. Frank Kirchner (Universität Bremen)
Prof. Dr. Christof Büskens (Universität Bremen)
Dedicated to my parents
Abstract
The classification of complex data usually requires the composition of processing
steps. Here, a major challenge is the selection of optimal algorithms for preprocessing
and classification (including parameterizations). Nowadays, parts of the optimization
process are automated, but expert knowledge and manual work are still required. We
present three steps to address this challenge and ease the optimization. Namely, we take
a theoretical view on classical classifiers, provide an approach to interpret the classifier together with the preprocessing, and integrate both into one framework which
enables a semiautomatic optimization of the processing chain and which interfaces
numerous algorithms.
First, we summarize the connections between support vector machine (SVM) variants and introduce a generalized model which shows that these variants are not to
be taken separately but that they are highly connected. Due to the more general
connection concepts, several further variants of the SVM can be generated including
unary and online classifiers. The model improves the understanding of relationships
between the variants. It can be used to improve teaching and to facilitate the choice
and implementation of the classifiers. Often, knowledge about and implementations
of one classifier can be transferred to the variants. Furthermore, the connections also
reveal possible problems when applying some variants. So in certain situations, some
variants should not be used or the preprocessing needs to prepare the data to fit to
the used variant. Last but not least, it is partially possible to move between the
variants with the help of parameters and to let an optimization algorithm automatically
choose the best model.
With complex, high-dimensional data, and consequently a more complex processing chain built as a concatenation of different algorithms, it was until now nearly
impossible to find out what happened in the classification process and which components of the original data were used. So in our second step, we introduce an approach called backtransformation. It enables a visualization of the complete processing chain in the input data space and thereby allows for a joint interpretation
of preprocessing and classification to decode the decision process. The interpretation
can be compared with expert knowledge to find out that the algorithm is working as
expected, to generate new knowledge, or to find errors in the processing (e.g., usage
of artifacts in the data).
The third step is meant for the practitioner and hence a bit more technical. We
propose the signal processing and classification environment pySPACE which enables the systematic evaluation and comparison of algorithms. It makes the aforementioned approaches usable for the public. Different connected SVM models can be
compared and the backtransformation can be applied to any processing chain due to
a generic implementation. Furthermore, this open source software provides an interface for users, developers, and algorithms to optimize the processing chain for the
data at hand including the preprocessing as well as the classification.
The benefits and properties of these three approaches (also in combination) are
shown in different applications (e.g., handwritten digit recognition and classification
of brain signals recorded with electroencephalography) in the respective chapters.
Zusammenfassung
The classification of complex data usually requires the combination of processing steps. Here, the selection of optimal algorithms for preprocessing and classification (including their parameterization) is a major challenge. Parts of this optimization process are automated nowadays, but expert knowledge and manual work are still necessary. We present three ways to make this optimization process more manageable. We take a theoretical view on established classifiers, provide a way to interpret the classifier together with the preprocessing, and integrate both into a software that enables the semiautomatic optimization of processing chains and that provides numerous processing algorithms.
In the first step, we summarize the numerous variants of the support vector machine (SVM) and introduce a generalizing model which shows that these variants do not stand on their own but are strongly connected. By considering these connections, further SVM variants can be generated, such as online and one-class classifiers. Our model improves the understanding of the relationships between the variants. It can be used in teaching and to simplify the choice and implementation of a classifier. Often, insights about and implementations of one classifier can be transferred to another variant. Furthermore, the discovered connections can reveal possible problems when applying certain variants. In certain cases, some of the variants should not be used, or the rest of the processing chain would have to be adapted to work with the variant in question. Last but not least, it is partially possible to move between the different variants with the help of parameters, so that an optimization algorithm could take over the determination of the best algorithm.
When working with complex and high-dimensional data, one often uses complex processing chains as well. Until now it was therefore usually not possible to find out which parts of the data are decisive for the overall classification process. To remedy this, in our second step we introduce the "backtransformation". It enables the representation of the complete processing chain in the space of the input data and thereby allows a joint interpretation of preprocessing and classification in order to decode the decision process. The resulting interpretation can be compared with existing expert knowledge to find out whether the applied processing behaves as expected. It can also lead to new insights or uncover errors in the processing chain, for example when so-called artifacts in the data are used.
The third step is intended for the practitioner and therefore somewhat more technical. We present our signal processing and classification environment pySPACE, which enables the systematic evaluation and comparison of processing algorithms. It makes the aforementioned approaches available to the public. The different, strongly connected SVM variants can be compared, and the backtransformation can be applied to arbitrary processing chains in pySPACE thanks to a generic implementation. Furthermore, this open source software provides an interface for algorithms, developers, and users to optimize preprocessing and classification for the data at hand.
The benefits and properties of our three approaches (also in combination) are shown in different applications, such as handwritten digit recognition or the classification of brain signals by means of electroencephalography.
Acknowledgements
First of all, I would like to thank all my teachers at school and university, especially Rosemarie Böhm, Armin Bochmann, Prof. Dr. Thomas Friedrich, Prof. Klaus
Mohnke, Prof. Dr. Bernd Kummer, and Dr. Irmgard Kucharzewski. Without these
people, I would have never become a mathematician.
Most importantly, I would like to thank my advisor and institute director Prof.
Dr. Frank Kirchner and my project leader Dr. Elsa Andrea Kirchner. They gave
me the opportunity to work at a great institute with great people at a great project.
Usually project work means to be very restricted in the scientific work and there
is not much space for own ideas. But in this case, I am grateful that I had much
space for getting into machine learning, being creative, and developing my own ideas.
Dr. Elsa Andrea Kirchner led me into the very interesting and challenging area of
processing electroencephalographic data and Prof. Dr. Frank Kirchner encouraged
me to take a step back and to look at more general approaches which could also help
robotics and to think about the bigger scientific problems and a long-term perspective.
I also thank Prof. Dr. Christof Büskens for discussing the optimization perspective
of this thesis with me.
For structuring this thesis, thinking more “scientific”, and better thinking about
how other persons perceive my written text and how to handle their criticism, Dr.
Sirko Straube invested numerous “teaching lessons” and never lost patience. I very
much appreciate that.
I would like to thank my friends and colleagues, David Feess and Anett Seeland.
For getting into the basics of this thesis, I had a lot of support from David Feess, who
first raised my interest in making support vector machine variants easier to understand, in looking into a sparse classifier trained on electroencephalographic data, and in
improving the usability and availability of pySPACE. When I needed someone for the
discussion of any problem, Anett Seeland was always there and also solved a lot of
programming problems for/with me.
I would like to thank my team leader of the team “Sustained Learning”, Dr. Jan
Hendrik Metzen. Together with Timo Duchrow he laid the foundations of pySPACE.
He led numerous discussions of papers and algorithms and largely improved my critical view on research and possible flaws in analyses and gave me a lot of scientific
advice. Hence, he also reviewed nearly all of my papers and helped a lot improving
them.
I would also like to thank all my other coauthors, reviewers, and colleagues who
supported my work, especially Lisa Senger, Hendrik Wöhrle, Johannes Teiwes, Marc
Tabie, Foad Ghaderi, Su Kyoung Kim, Andrei Cristian Ignat, Elmar Berghöfer, Constantin Bergatt, Yohannes Kassahun, Bertold Bongardt, Renate Post-Gonzales, and
Stephanie Vareillas.
Due to the influence of all these people on my thesis, I preferred using the more
common plural “We” in this thesis and used the singular “I” only in very rare cases
where I want to distinguish my contribution from the work of these people.
This work was supported by the German Federal Ministry of Economics and Technology (BMWi, grants FKZ 50 RA 1012 and FKZ 50 RA 1011). I would like to thank
the funders. They had no role in study design, data collection and analysis, decision
to publish, or preparation of this thesis.
A special thanks goes to all the external people who provided tools/help for this
thesis, like the open source software developers, the free software developers, scientists, and the numerous people in the internet who ask questions and provide
answers for programming problems (including LATEX issues). Without these people
neither pySPACE nor this thesis would exist. As outlined in this thesis, pySPACE is
largely based on the open source software stack of Python (NumPy, SciPy, matplotlib,
and MDP) and by wrapping scikit-learn a lot of other algorithms can be interfaced.
Without the Mendeley software I would have lost the overview over the references
and without LATEX and the numerous additional packages I probably could not have
created a readable document.
Last but not least, I would like to thank all my friends, choir conductors, and good
music artists. Doing science requires people who cheer you up when things do not go
well. To free one’s mind, stay focused, or also for cheering up, music is a wonderful
tool which accompanies me a lot.
Contents
0 Introduction
  0.1 General Motivation
  0.2 Objectives and Contributions
  0.3 Structure
  0.4 Application Perspective: P300 Detection
1 Generalizing: Classifier Connections
  1.1 Support Vector Machine and Related Methods
  1.2 Single Iteration: From Batch to Online Learning
  1.3 Relative Margin: From C-SVM to RFDA via SVR
  1.4 Origin Separation: From Binary to Unary Classification
  1.5 Discussion
2 Decoding: Backtransformation
  2.1 Backtransformation using the Derivative
  2.2 Affine Backtransformation
  2.3 Generic Implementation of the Backtransformation
  2.4 Applications of the Backtransformation
  2.5 Discussion
3 Optimizing: pySPACE
  3.1 Structure and Principles
  3.2 User and Developer Interfaces
  3.3 Optimization Problems and Solution Strategies
  3.4 pySPACE Usage Examples
  3.5 Discussion
4 Conclusion
A Publications in the Context of this Thesis
B Proofs and Formulas
  B.1 Dual Optimization Problems
  B.2 Model Connections
  B.3 BRMM Properties
  B.4 Unary Classifier Variants and Implementations
  B.5 Positive Upper Boundary Support Vector Estimation
C Configuration Files
Acronyms
Symbols
References
Chapter 0
Introduction
0.1 General Motivation
Humans are able to detect the animal in the wood, to separate lentils thrown into
the ashes, to look for a needle in a haystack, to find the goal and the ball in a stadium, to spot a midge on the wall, . . . . In everyday life, humans and animals often
have to base decisions on infrequent relevant stimuli with respect to frequent irrelevant ones. Humans and animals are experts for this situation due to selection
mechanisms that have been extensively investigated, e.g., in the visual [Treue, 2003]
and the auditory [McDermott, 2009] domain. In their book on signal detection theory, Macmillan and Creelman argue that this comparison of stimuli is the basic psychophysical process and that all judgements are of one stimulus relative to another
[Macmillan and Creelman, 2005].1
In short, humans and animals are the experts for numerous classification tasks
and their classification skills are important for their intelligence. It is a major challenge to provide artificial systems like computers and robots with such a type of
artificial intelligence that automatically discovers patterns in data [Bishop, 2006]. Especially when striving for long-term autonomy of robots, such capabilities are needed
(besides others) because a robot will certainly encounter new situations and should
be able to map them to previous experience.
The focus of this manuscript will be on computer algorithms for classifying data
into two categories (binary classification). Given some labeled data for a classifier, the
difficulty is not to generate just any appropriate model: the model should be generated
quickly, provide a classification result quickly, be as simple as possible, and most
importantly generalize well to so far unseen data.
1. This paragraph contains text snippets from [Straube and Krell, 2014] by Dr. Sirko Straube.
There is a tremendous number of classification applications (e.g., terrain classification for robots [Hoepflinger et al., 2010, Filitchkin and Byl, 2012], image classification [LeCun et al., 1998, Golle, 2008, Le et al., 2012], color distinction for robot soccer
[Röfer et al., 2011], email spam detection [Blanzieri and Bryl, 2009], and analysis of
brain signals as input for intelligent man machine interfaces [Kirchner et al., 2013,
Kirchner et al., 2014a, Kim and Kirchner, 2013]).
There is also a very large number of approaches to solve these problems. Often
the original data (raw data) cannot be used by the classifier to build a model, but
an additional preprocessing is required which transforms the raw data to so-called
feature vectors which better describe the data, e.g., mean values, frequency power
spectra, and amplitudes after low-pass filtering.2 When dealing with classification
tasks of complex data, the generation of meaningful features is a major issue. This is
due to the fact that the data often consists of a superposition of a multitude of signals,
together with dynamic and observational noise. Hence, the data processing usually
requires the combination of different preprocessing steps in addition to a classifier.
In fact, the generation of good features is usually more important than the actual
classification algorithm [Domingos, 2012].3
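To make the notion of feature generation concrete, the toy sketch below turns a raw one-dimensional signal into a small feature vector of exactly the kinds just mentioned (a mean value, band power from the frequency spectrum, and an amplitude after low-pass filtering). All concrete choices (window length, frequency band, function name) are illustrative and not taken from this thesis.

```python
import numpy as np

def feature_vector(raw):
    """Toy preprocessing: raw 1-D signal -> three simple features."""
    raw = np.asarray(raw, dtype=float)
    mean_value = raw.mean()
    # power spectrum via FFT; summed power of the lowest quarter of bins
    spectrum = np.abs(np.fft.rfft(raw)) ** 2
    low_band_power = spectrum[: len(spectrum) // 4].sum()
    # crude low-pass filter: moving average, then peak amplitude
    smoothed = np.convolve(raw, np.ones(5) / 5, mode="valid")
    lowpass_amplitude = np.abs(smoothed).max()
    return np.array([mean_value, low_band_power, lowpass_amplitude])
```

In a real processing chain, each of these computations would be one node of the chain, and the classifier would only ever see the resulting feature vectors.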
Unfortunately, the challenge to define an appropriate processing of the data is
so complicated, that expert knowledge is often required and that even with the help
of this knowledge, the optimal processing might not be found due to the variety of
possible choices of algorithms and parameterizations.4 Testing every possible choice
is completely impossible.
The General Research Question
In this thesis, we present three related approaches to make this process easier. It is a
small step into requiring less manual work and expert knowledge and automatizing
this tuning process. It can be motivated by a general question. In this context, a
machine learning expert might ask:
“How shall I use which classifier (depending on the data at hand) and
what features of my data does it rely on?”
The “which” refers to the variety of possible algorithms. Even after choosing
the classifier, an implementation is required and the data needs to be preprocessed
(“how”) and after the processing the expert wants to know if the processing worked
correctly and if it is even possible to learn something from it (“what”).
2. In Section 2.2.1, more examples will be given, and it will be shown how algorithms are combined into processing chains for feature generation.
3. Without an appropriate preprocessing, a classifier is not able to build a general model that gives good results on unseen data.
4. To distinguish the model parameters of algorithms from the meta-parameters that customize the algorithm, the latter are usually called hyperparameters.
Unfortunately, there is no fully satisfactory answer to the first part
of this question according to the “no free lunch theorem” of optimization
[Wolpert and Macready, 1997].5 The answer to the second part depends on the complexity of the applied processing algorithms and might be very difficult to provide, especially when different algorithms are combined or adaptive or nonlinear algorithms
are used.
Besides the no free lunch theorem, the difficulty of choosing the “right” classifier
and answering the “which” is complicated by the dependence of the classifier on the
preprocessing and the high number of existing algorithms.6 Advantages of certain
classifiers often depend on the application but also on the chosen way of tuning hyperparameters and implementing the algorithm (e.g., stop criterion for convergence).
A common approach to compare classifiers is to have a benchmarking evaluation with
a small subset of classifiers on a special choice of datasets. This can give a hint on
the usefulness of certain classifiers for certain applications/datasets but does not provide a deeper understanding of the classifiers and how they relate to each other. A
different approach is to clearly determine the relations between classifiers in order
to facilitate the choice of an appropriate one. Unfortunately, only few connections
between classifiers are known and, since they are spread all over the literature, it is
quite difficult to conceive of them as a whole. Hence, summarizing the already known
connections and deriving new ones is required to ease the choice of the classifier. This
even holds for the numerous variants of the support vector machine (SVM). We will
focus on that classifier, because it is very powerful and understanding the connection
to its variants is already helpful. It is reasonable to pick a group which has a certain
common ground, because it is impossible to connect all classifiers.
In addition to looking at classifiers, it is important to look at their input: the
feature vectors, which are used as data for building the classifier (training samples). For finding the relevant features in the data, there are several algorithms
in the context of feature selection [Guyon and Elisseeff, 2003, Saeys et al., 2007,
Bolón-Canedo et al., 2012]. Even though these algorithms can improve classification
accuracy and interpretability, they do not give information about the relevance of the
features for the classifier finally used in a data processing chain. The answer to the
question, “what features of my data does my classifier rely on”, can be difficult to provide because of three issues. First, the classifier might have to be treated as a black
box. Second, it might have nonlinear behavior, meaning that the relevance of certain
features in the data is highly dependent on the sample which is classified. The third
5 For our case, the theorem states that for every classification problem, where classifier a is better than classifier b, there is a different problem where the opposite holds true.
6 With a different preprocessing a different classifier might be appropriate, e.g., with a nonlinear instead of a linear model.
Chapter 0. Introduction
and most important point is that the classifier is not applied to the raw data but to preprocessed data. Hence, the classifier should not be regarded as a single algorithm;
instead, the complete decision algorithm, consisting of preprocessing algorithms
and classifier, and their interplay with the data need to be considered. For example,
in the extreme case where a classifier is not even really required because the features
are sufficiently good, it is important to look at the generation of the features to decode
the decision algorithm.
Last but not least, the question of “how” to apply the data processing is probably
the most time-consuming part of designing a good data processing chain. Performing hyperparameter optimization and large-scale evaluations is cumbersome. A lot
of time for programming and waiting for the results is required. Furthermore, when
trying to reproduce results from others, there is often no access to the implementations used or to the details of the evaluation scheme. The most complicated part
might be to configure the processing for the needs of the concrete application and to
generate optimal or at least useful features.
Fixing all these problems completely is impossible, but it is possible to tackle parts
of them and take a step further towards a solution, as outlined in the following section.
Complementing this more general and abstract motivation, we will provide a more concrete one with an application in Section 0.4.
0.2 Objectives and Contributions
The main objective of this thesis is to provide (theoretical, practical, and technical) insights and tools for data scientists to simplify the design of the classification process. In contrast to other work, the goal is not to derive new algorithms
or to tweak existing algorithms.
Here, a “classification process” also includes the complete evaluation process with
the preprocessing, tuning of hyperparameters, and the analysis of results. Three
subgoals can be identified, derived from the previously discussed question: “How
shall I use which classifier and what features of my data does it rely on?”
1 Theoretical aspect: Analyze the connections between SVM variants to derive a
more “general” picture.
2 Practical aspect: Construct an approach for decoding and interpreting the decision algorithm together with the preprocessing.
3 Technical aspect: Implement a framework for better automatizing the process of
optimizing the construction of an appropriate signal processing chain including
a classifier.
Subgoal 1 targets the question of “which” classifier to use. The question of “what
features of my data does it (the classifier) rely on” is covered by the second subgoal.
The last subgoal requires us to answer the question of “how” to apply the classifier
and supports Subgoal 1 by providing a platform to compare and analyze classifiers.
It also supports Subgoal 2 as an interface for implementing it.
Note that this numbering will also be used for the achievements of this thesis and for the respective chapter numbers. Furthermore, it is important to note that there are connections between the goals, because the respective
approaches can (and often have to) be combined. To address the three subgoals, the
following approaches are taken.
Contribution 1: Generalizing
Due to the ever-growing number of classification algorithms, it is difficult to decide which ones to consider for a given application. Knowledge about the relations between the classifiers facilitates the choice
and implementation of classifiers. As such, instead of further specializing existing
classifiers we take a unifying view. Considering only the variants of the classical
support vector machine (C-SVM) [Vapnik, 2000, Cristianini and Shawe-Taylor, 2000,
Müller et al., 2001, Schölkopf and Smola, 2002], some connections are already known
but the knowledge about these connections is distributed over the literature.
We summarize these connections and introduce the following three general concepts, which build further intuitive connections between these classifiers.
The C-SVM belongs to the group of batch learning classifiers. These classifiers
operate on the complete set of training data to build their model, consuming large
amounts of memory and processing time. In contrast, online learning algorithms
update their model with each single sample and, later on, forget the sample. They
are very fast and memory efficient, which is required for several applications, but they
usually perform less well. The single iteration approach describes a way to transfer
batch learning to online learning classifiers. If the solution algorithm of the batch
learning classifier is repeatedly iterated over the single training samples to update
a linear classification function, an online learning algorithm can be generated by
performing this update only once for each incoming sample.
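The single iteration idea can be sketched with a passive-aggressive-style update rule. This is a hedged illustration only: the update form, the toy data stream, and all names are assumptions for this sketch, not the exact algorithms derived later in Section 1.2.

```python
import numpy as np

def pa_update(w, b, x, y, C=1.0):
    """One passive-aggressive-style update of a linear model (w, b).

    Illustrates the 'single iteration' idea: the per-sample update step
    of a batch solver, applied exactly once to each incoming sample.
    """
    loss = max(0.0, 1.0 - y * (np.dot(w, x) + b))  # hinge loss of this sample
    if loss > 0.0:
        # step size bounded by C; the +1.0 accounts for the offset b
        tau = min(C, loss / (np.dot(x, x) + 1.0))
        w = w + tau * y * x
        b = b + tau * y
    return w, b

# usage: stream toy samples once through the update (no second pass)
rng = np.random.default_rng(0)
w, b = np.zeros(2), 0.0
for _ in range(200):
    y = rng.choice([-1.0, 1.0])
    x = rng.normal(loc=2.0 * y, scale=1.0, size=2)  # separable toy stream
    w, b = pa_update(w, b, x, y)
```

Each sample is seen once and then forgotten, which is what makes the resulting classifier fast and memory efficient.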
The second concept, called relative margin, establishes a connection between
the more geometrically motivated SVM and the regularized Fisher’s discriminant
(RFDA) coming from statistics.
The third concept, the origin separation approach, allows defining unary classifiers with the help of binary classifiers by taking the origin as a second class.7
7 Unary classifiers use only one class for building a model but they are usually applied to binary classification problems, where the focus is to describe the more relevant class, or where not enough training samples are available from the second class to build a model.
Together with the existing more formal connection concepts (especially normal
and squared loss, kernel functions, and normal and sparse regularization), these
connections span the complete space of established SVM variants and additionally
provide new, not-yet-discovered variants.
Knowing the theory of these novel connections simplifies the implementation of
the algorithms and makes it possible to transfer extensions or modifications from
one algorithm to the other connected ones. Thus, it makes it possible to build a classifier that
fits the individual research aims. Furthermore, it simplifies teaching and getting
to know these classifiers. Note that the connections are not to be taken separately
but in most cases they can be combined.
Contribution 2: Decoding
Having knowledge about the relations between
classifiers is not always sufficient for choosing the best one. It is also important to
understand the final processing model to find out what lies behind the data and to
ensure that the classifier is not relying on artifacts (errors in the data). Existing
approaches visualize the data and the single processing steps, but this might not
be sufficient for a complete picture, especially when dimensionality reduction algorithms are used in the preprocessing. This is often the case for high-dimensional
and noisy data. Hence, a representation of the entire processing chain including both
classification and preprocessing is required. Our novel approach to calculate this representation is called backtransformation. It iteratively transforms the classification
function back through the signal processing chain to generate a representation in the
same format as the input data. This representation provides weights for each part
of the data to tell which components are relevant for the complete processing and
which parts are ignored. It can be directly visualized using classical data visualization approaches, as employed for example for image, electroencephalogram
(EEG), and functional magnetic resonance imaging (fMRI) data. This practical contribution opens up the black box of the signal processing chain and can now be used
to support the “close collaboration between machine learning experts and application
domain ones” [Domingos, 2012, p. 85]. It can provide a deeper understanding of the
processing and it can help to improve the processing and to generate new knowledge
about the data. In some cases even new expert knowledge might be generated.
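For a single linear preprocessing step, the core idea behind the backtransformation can be sketched as follows. This is a minimal illustration under the assumption of a purely affine chain; the names A, w, and b are hypothetical and not part of any pySPACE API.

```python
import numpy as np

# Hedged sketch of the backtransformation idea for one linear
# preprocessing step x -> A @ x followed by a linear classifier.
rng = np.random.default_rng(42)
A = rng.normal(size=(3, 10))   # preprocessing: 10-dim input -> 3-dim features
w = rng.normal(size=3)         # classifier weights in feature space
b = 0.5                        # classifier offset

# Because f(A @ x) = <w, A @ x> + b = <A.T @ w, x> + b, the vector
# A.T @ w assigns a weight to every component of the raw input data.
w_back = A.T @ w

x = rng.normal(size=10)        # any raw input sample
assert np.isclose(w @ (A @ x) + b, w_back @ x + b)
```

The backtransformed weights `w_back` live in the same format as the input data, so they can be visualized directly alongside it.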
Contribution 3: Optimizing
For a generic implementation of the backtransformation an interface is required. Furthermore, it is still required to optimize the hyperparameters of the classifiers and the preprocessing for further improvement of the processing chain. Hence, it is necessary to have “an infrastructure that makes experimenting with many different learners, data sources, and learning problems easy and efficient” [Domingos, 2012, p. 85]. To solve this problem, we developed the Signal Processing And Classification Environment written in Python
(pySPACE) [Krell et al., 2013b]. It provides functionality for a systematic and automated comparison of numerous algorithms and (hyper-)parameterizations in a signal processing chain. Additionally, pySPACE enables the visualization of data, algorithms, and evaluation results in a common framework. With its large number of
supporting features this software is unique and a major improvement to the existing
open source software.
0.3 Structure
In this thesis, we present our steps to improve and automatize the process of designing a good processing chain for a classification problem (classifier connections,
backtransformation, pySPACE). This thesis is structured as follows.
First, the different SVM variants are introduced, including the known connections, and subsequently three more general concepts are introduced which connect
them (Chapter 1). Second, the backtransformation concept is presented in Chapter 2.
Third, the pySPACE framework, the more technical part of this thesis, and its use
for optimization is shown in Chapter 3. All three main parts are also displayed in
Figure 1 using the same numbering. Finally, a conclusion and an outlook are given in
Chapter 4. In the appendix, all my publications are summarized. Furthermore, the
appendix contains detailed proofs, information on the used data, and some configuration files used for the evaluations in the different chapters.
The related work and our proposed approaches are often highly interconnected;
consequently, related work is presented within the respective chapters and not in a
separate literature chapter at the beginning of this thesis. Each approach integrates at
least a part of the related work.
Even though the contributions of this thesis are separated into three chapters,
they are still connected. For the evaluations in Chapter 1 and Chapter 2, the respective algorithms are integrated into pySPACE and the framework is used to perform
the evaluations using the concepts described in Chapter 3. Furthermore, the backtransformation concept from Chapter 2 will be applied to the different classifiers from
Chapter 1 and additional knowledge about the classifiers will be incorporated into a
variant of the concept. Last but not least, all three parts should be combined to get
the best result when analyzing data.
[Figure 1 graphic: three panels. (1) Generalizing: understand relations between Support Vector Machine variants; classifier connections (incl. single iteration, relative margin, origin separation). (2) Decoding: interpret the decision process (classifier, preprocessing, and data); backtransformation formula. (3) Optimizing: optimize classifier & preprocessing; evaluate & share approaches; pySPACE: Signal Processing And Classification Environment.]
Figure 1: Graphical abstract of this thesis. The numbering is also used for the
corresponding subgoals and respective chapters. The first part provides a more general picture of SVM variants by connecting them. The second part introduces the
backtransformation concept to decode data processing chains. Finally, the third part
presents our framework pySPACE which is an interface for optimizing signal processing chains. Furthermore, the previous two parts can be used and analyzed with
this software.
Disclaimer: Text Reuse
Single sentences but also entire paragraphs of this thesis are taken from my own publications without explicit quotation because I am the main author8 or I contributed
the used part to them.9 Except for my summary paper [Krell et al., 2014c, see also
Section 2.4.4], which is somehow scattered over some introductory parts, I explicitly
mention these sources at the beginning of the respective chapters or sections where
they are used. Often parts of these papers could be omitted by referring to other
sections or they had to be adapted for consistency. On the other hand, additional
information, additional experiments, the relation to the other parts of this thesis, or
personal experiences are added.
8 [Krell et al., 2013b, Krell et al., 2014a, Krell et al., 2014c, Krell and Wöhrle, 2014, Krell and Straube, 2015]
9 [Feess et al., 2013, Straube and Krell, 2014]
Notation
In this thesis mostly the “standard notation” is used and it should be possible to infer
the meaning from the context. Nevertheless, there is a list of acronyms and a list
of used symbols at the end of this document; if some notation is unclear, we refer to
these lists. It will be mentioned directly if the standard symbols are not used.
0.4 Application Perspective: P300 Detection
Even though the approaches derived in this thesis are very general and can be
applied in numerous applications, they were originally developed with a concrete
dataset/application in mind. We will first describe the general setting, continue with
a description of the experiment which generated the data, and finally highlight the
connection of the dataset to this thesis to provide an additional less abstract motivation.
0.4.1 General Background of the Dataset
Current brain-computer interfaces (BCIs) rely on machine learning techniques
such as the ones discussed in this thesis. They can be used to detect the P300
event-related potential (ERP)10 for communication purposes (e.g., for P300 based
spellers [Farwell and Donchin, 1988, Krusienski et al., 2006] or for controlling a virtual environment [Bayliss, 2003]), to detect interaction errors for automated correction [Ferrez and Millán, 2008, Kim and Kirchner, 2013], or to detect movement
preparation or brain activity that is related to the imagination of movements for
communication or control of technical devices [Bai et al., 2011, Blankertz et al., 2006,
Kirchner et al., 2014b].
The P300 is not only used to implement active BCIs for communication and
control but can furthermore be used more passively as it was investigated in
the dataset described in the following. For example, in embedded brain reading
(eBR) [Kirchner, 2014] the P300 is naturally evoked in case an operator detects and
recognizes an important warning during interaction. Thus, the detection of the P300
is used to infer whether the operator will respond to the warning or not and to adapt
the interaction interface with respect to the inferred upcoming behavior. A repetition of the warning by the interaction interface can be postponed in case a P300 is
detected after a warning was presented since it can be inferred that the operator
will respond to the warning. In case there is no P300 detected, the warning will be
10 This is a special signal in the measurement of electrical activity along the scalp (electroencephalogram). The name refers to a positive peak at the parietal region which occurs roughly 300 ms (or with a larger latency) after the presentation of a rare but important visual stimulus (see also Section 0.4.2).
repeated instantly since it can be inferred that the operator did not detect and recognize the warning and will therefore not respond [Wöhrle and Kirchner, 2014]. Since
in the explained case we are able to correlate the brain activity with the subject’s
behavior, the detected behavior can be used as a label to control for the correctness
of the predicted brain states and hence to adapt the classifier by online learning to
continuously improve classification performance [Wöhrle et al., 2013b] (Section 1.2).
The previous description was created with the help of Dr. Elsa Andrea Kirchner, who headed the experiments for the dataset. The following rather short dataset
description is adapted from [Feess et al., 2013] where the data was used to compare
different sensor selection algorithms. A very detailed description of the experiment
and related experiments is provided in [Kirchner et al., 2013].
0.4.2 Description of the Dataset
The data described in this section has been acquired from a BCI system that
belongs to the class of passive BCIs: the purpose is the gathering of information about the user’s mental state rather than a voluntary control of a system [Zander and Kothe, 2011, Kirchner, 2014]. Therefore, no deliberate participation
of the subject is required.
The goal of the system is to identify whether the subject distinctively perceived
certain rare target stimuli among a large number of unimportant standard stimuli.
It is expected that the targets in such scenarios elicit an ERP called P300 whereas
the standards do not [Courchesne et al., 1977].
Five subjects participated in the experiment and carried out two sessions on different days each. A session consisted of five runs with 720 standard and 120 target
stimuli per run. EEG data were recorded at 1 kHz with an actiCAP EEG system
(Brain Products GmbH, Munich, Germany) from 62 channels following the extended
10–20 layout. (This system usually uses 64 channels. Electrodes TP7 and TP8 were
used for electromyogram (EMG) measurements and are excluded here.)11
The data was recorded in the Labyrinth Oddball scenario (see Figure 2), a testbed
for the use of passive BCIs in robotic telemanipulation. In this scenario, participants were instructed to play a simulated ball labyrinth game, which was presented
through a head-mounted display. The insets in the photograph show the labyrinth
board as seen by the subject. While playing, one of two types of visual stimuli was
displayed every 1 second with a jitter of ±100 ms. The corners arranged around the
board represent these stimuli. As can be seen, the difference in the standard and target stimuli is rather subtle: in the first case the top and bottom corners are slightly
larger and in the latter the left and right corners are larger. The subjects were instructed to ignore the standard stimuli and to press a button as a reaction to the rare
target stimuli.
11 The electrode layout with 64 electrodes is depicted in Figure C.6.
Both standard and target stimuli elicit a visual potential as seen in the averaged
time series in Figure 2 (strong negative peak at around 200 ms after the stimuli). Additionally, target stimuli induce a positive ERP, the P300, with maximum amplitude
around 600 ms after the stimulus at electrode Pz. It is assumed that the P300 is evoked by
rare, relevant stimuli that are recognized and cognitively evaluated by the subject.
[Figure 2 graphic: photograph of the Labyrinth Oddball setup with insets of the stimuli, and averaged ERP time series at electrode Pz (amplitude in µV over time in ms) for standards (n = 720) and targets (n = 120), the latter showing the P300.]
Figure 2: Labyrinth Oddball: The subject plays a physical simulation of a ball
labyrinth game. He has to respond to rare target stimuli by pressing a buzzer and
ignore the more frequent standard stimuli. The insets show the shape of the stimuli,
which can be distinguished by the length of the edges. The graphs to the left depict
the event-related potentials (ERPs) evoked by both stimulus types at electrode Pz.
Both stimuli elicit an early negative potential attributed to visual processing, but
only targets evoke an additional strong, positive potential around 600 ms after the
stimulus. Visualization and description taken from [Feess et al., 2013].
The BCI only needs to passively monitor whether the operator of the labyrinth
game correctly recognized and distinguished these stimuli. There is an objective
affirmation of successful stimulus recognition, because a button has to be pressed
whenever a target is recognized. No feedback is given to the user.
0.4.3 Relevance for this Thesis
Even though this data is not (yet) open source, it was used in this thesis for several
reasons as listed in the following.
• It provides numerous datasets to have a comparison of algorithms.12
• EEG data classification is a very challenging task where the applied signal processing chain usually performs much better than the human.
12 Up to 50 datasets/recordings, depending on the evaluation scheme.
• The data has a very bad signal-to-noise ratio. Thus, it is a challenge to optimize
the processing chain.
• The data was recorded in a controlled and somewhat artificial scenario, but in
fact it targets a much more promising application of a BCI where the human’s
intentions are monitored with the help of the EEG during a teleoperation task
with many robots that act more autonomously. This task can be very
challenging and the monitoring can be used to avoid malfunction in the interface. When analyzing the P300 data and tuning processing chains, we kept this
more complex application in mind.
• The dataset was the motivation for all findings in this thesis as described in the
following.
• The aforementioned more practical application requires online learning to integrate new training data for performance improvement and to account for the
different types of drifts in the data (see Section 1.2).
• Support vector machine and Fisher’s discriminant were common classifiers on
this type of data [Krusienski et al., 2006] (see Section 1.3).
• Depending on the application using the P300, it might be very difficult
to acquire data from a second class; consequently, a classifier that works with
only one class is of interest (see Section 1.4). Altogether, a more general
model of classification algorithms and their properties and connections is helpful here.
• There is always the danger of relying on artifacts (e.g., muscle artifacts, eye
movement) and there is an interest from neurobiology to decode the processing
chain, which is built to classify the P300 (see Chapter 2). For the given dataset,
we could show that eye artifacts are not relevant.
• Finding a good processing chain for such demanding data requires a lot of hyperparameter optimization and comparison of different algorithms. Furthermore, it is useful to exchange processing chains between scientists to find flaws,
communicate problems and approaches, and help each other improve the processing (see Chapter 3).
• There is an interest in using as few sensors and time points as possible for the
processing to make the setup easier and the processing faster (see Section 3.4.3). To derive
such a selection algorithm, it can be helpful to combine the tools and insights
derived in this thesis.
A reference processing chain for this data is depicted in Figure 3.4. In the evaluations in this thesis, only the difference to this processing scheme is reported. The
processing chain assumes that the data has already been segmented into samples of
one second length after the target or standard stimulus.
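The segmentation step mentioned above can be sketched as follows. This is a hedged illustration with made-up stimulus onsets; the actual pySPACE processing chain is configured as shown in Figure 3.4, not by this function.

```python
import numpy as np

def segment(eeg, onsets_ms, fs=1000, window_ms=1000):
    """Cut continuous EEG (channels x time) into fixed windows after stimuli.

    Illustration only: one-second epochs at 1 kHz, matching the
    sampling rate and epoch length described for the dataset.
    """
    n = int(window_ms * fs / 1000)          # samples per epoch
    epochs = []
    for t in onsets_ms:
        start = int(t * fs / 1000)          # onset in samples
        if start + n <= eeg.shape[1]:       # drop epochs running past the end
            epochs.append(eeg[:, start:start + n])
    return np.stack(epochs)                 # (n_epochs, n_channels, n_samples)

eeg = np.zeros((62, 10_000))                # 10 s of 62-channel data at 1 kHz
epochs = segment(eeg, onsets_ms=[500, 1600, 2550])  # jittered ~1 s stimulus train
```

Each epoch then becomes one training or test sample of the classification problem.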
Chapter 1
Generalizing: Classifier Connections
Contents
1.1 Support Vector Machine and Related Methods
1.1.1 Support Vector Machine
1.1.2 Least Squares Support Vector Machine
1.1.3 Regularized Fisher’s Discriminant
1.1.4 Relative Margin Machine
1.1.5 Online Passive-Aggressive Algorithm
1.1.6 Unary Classification
1.2 Single Iteration: From Batch to Online Learning
1.2.1 Newton Optimization
1.2.2 Sequential Minimal Optimization
1.2.3 Special Offset Treatment
1.2.4 Single Iteration: From Batch SVM to Online PAA
1.2.5 Practice: Normalization and Threshold Optimization
1.3 Relative Margin: From C-SVM to RFDA via SVR
1.3.1 Motivation of the Relative Margin
1.3.2 Deriving the Balanced Relative Margin Machine
1.3.3 Classifier Connections with the BRMM
1.3.4 Practice: Implementation and Applications
1.4 Origin Separation: From Binary to Unary Classification
1.4.1 Connection between ν-SVM and νoc-SVM
1.4.2 Novel One-Class Variants of C-SVM, BRMM, and RFDA
1.4.3 Equivalence of SVDD and One-Class SVMs
1.4.4 Novel Online Unary Classifier Variants of the C-SVM
1.4.5 Comparison of Unary Classifiers on the MNIST Dataset
1.4.6 P300 Detection as Unary Classification Problem
1.4.7 Practice: Normalization and Evaluation
1.5 Discussion
The aim of this chapter is to summarize known and novel connections between SVM
variants to derive a more general view on this group of classifiers. This shall facilitate
the choice of the classifier given certain data or applications at hand.
Given some data-value pairs (xⱼ, yⱼ) with xⱼ ∈ ℝᵐ and j ∈ {1, . . . , n}, a common
task is to find a function F which maps F(xⱼ) = yⱼ as well as possible and which
should also perform well on unseen data. If yⱼ is from a continuous space like ℝ, an
algorithm deriving such a function F is called a regression algorithm. If yⱼ is from a
discrete domain, the algorithm is a classifier. In this thesis, we will focus on the case
of binary classification with yⱼ ∈ {−1, +1}. In most cases, linear classifiers will be
used with f(xⱼ) = ⟨w, xⱼ⟩ + b, where w is the classification vector and b the offset. To
finally map the classification function to a decision ({−1, +1}), the signum function
is applied (F(x) = sgn(f(x))).
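In code, the linear classification function and the signum decision above read as follows (the values for w and b are illustrative, not taken from the thesis):

```python
import numpy as np

def f(x, w, b):
    """Linear classification function f(x) = <w, x> + b."""
    return np.dot(w, x) + b

def F(x, w, b):
    """Decision function mapping to the labels {-1, +1}."""
    return 1 if f(x, w, b) > 0 else -1  # sgn with sgn(0) := -1

# illustrative hyperplane
w, b = np.array([1.0, -2.0]), 0.5
assert F(np.array([3.0, 0.0]), w, b) == +1   # f = 3.5 > 0
assert F(np.array([0.0, 1.0]), w, b) == -1   # f = -1.5 <= 0
```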
The classical support vector machine (C-SVM) [Vapnik, 2000, Cristianini and Shawe-Taylor, 2000, Müller et al., 2001, Schölkopf and Smola, 2002] is a well-established binary classifier.1 Good generalization properties, efficient implementations, and powerful extensions like the kernel trick or possible sparsity properties (explained in Section 1.1) make the SVM attractive for numerous variants and applications [LeCun et al., 1998, Guyon et al., 2002, Lal et al., 2004, LaConte et al., 2005, Golle, 2008, Tam et al., 2011, Filitchkin and Byl, 2012, Kirchner et al., 2013]. The most important variants are
• ν support vector machine (ν-SVM) [Schölkopf et al., 2000, Section 1.1.1.3],
• support vector regression (SVR) [Vapnik, 2000, Smola and Schölkopf, 2004, Section 1.1.1.4],
• least squares support vector machine (LS-SVM) [Van Gestel et al., 2002, Section 1.1.2],
• relative margin machine (RMM) [Shivaswamy and Jebara, 2010, Krell et al., 2014a, Section 1.1.4 and 1.3],
• passive-aggressive algorithm (PAA) [Crammer et al., 2006, Section 1.1.5, 1.1.6.2, and 1.2.4],
• support vector data description (SVDD) [Tax and Duin, 2004, Section 1.1.6.1],
• and classical one-class support vector machine (νoc-SVM) [Schölkopf et al., 2001b, Section 1.1.6.3].
• Furthermore, regularized Fisher’s discriminant (RFDA) can be seen as an SVM variant, too [Mika, 2003, Krell et al., 2014a, Section 1.1.3 and 1.3].
In the literature these algorithms are usually treated as distinct classifiers. This also
holds for the evaluations. Some connections between these classifiers are known but
scattered erratically over the large body of literature. First, in Section 1.1 the models
and these connections will be summarized. In the following, general concepts for a
unifying view are proposed to further connect these classifiers and ease the process
of choosing a fitting classifier:
1 The C in the abbreviation probably refers to the hyperparameter C in the classifier definition and is used to distinguish it from other related classifiers (see also Section 1.1.1).
• The single iteration concept creates online learning classifiers like PAA from
batch learning classifiers to save computational resources (Section 1.2).
• The relative margin concept intuitively connects SVM, SVR, RMM, LS-SVM,
and RFDA (Section 1.3).
• The origin separation concept transforms binary to unary classifiers like νoc-SVM for outlier detection or data description (Section 1.4).
By combining these three approaches, a large number of additional variants can be
generated (see Fig. 1.1). In Section 1.5 the connections between the aforementioned
classifiers will be summarized and possible scenarios explained where the knowledge
of the connections is helpful (e.g., implementation, application, and teaching).
[Figure 1.1 diagram: a cube whose vertices are the classifiers SVM, PAA, BRMM, BRMM PAA, unary SVM, unary PAA, unary BRMM, and unary BRMM PAA; the three edge directions are labeled single iteration, relative margin, and origin separation.]
Figure 1.1: 3D-Cube of our novel connections (commutative diagram). Combining the approaches introduced in Chapter 1: relative margin (vertical arrows)
to generate the balanced relative margin machine (BRMM) which is the connection
to the regularized Fisher’s discriminant (RFDA), single iteration (horizontal arrows)
to generate online classifiers like the passive-aggressive algorithm (PAA), and the
origin separation (diagonal arrows) to generate unary classifiers from binary ones.
Each approach is associated with one dimension of the cube and going along one edge
means to apply or remove the respective approach from the classifier at the edge.
1.1 Support Vector Machine and Related Methods
In this section, we introduce all the aforementioned SVM variants including some
basic concepts and known connections, which are pure parameter mappings rather than
general concepts. For further reading, we refer to the large corpus of books about
SVMs. Readers who are familiar with the basics of support vector machines and their
variants can continue with the next section.
The models will be required in the following sections which introduce three general concepts to connect them. Putting everything together in Section 1.5, we will
show that the SVM variants, introduced in this section, are all highly connected.
1.1.1 Support Vector Machine
In a nutshell, the principle of the C-SVM is to construct two parallel hyperplanes
with maximum distance such that the samples belonging to different classes are separated by the space between these hyperplanes (see also Figure 1.2). Such space
between the planes is usually called margin (or inner margin in our context).
Commonly, only the Euclidean norm ‖v‖_2 = (Σᵢ vᵢ²)^(1/2) is used for measuring the distance between points, but it is also possible to use an arbitrary p-norm ‖v‖_p = (Σᵢ |vᵢ|ᵖ)^(1/p) with p ∈ [1, ∞].2 For getting the distance between two parallel hyperplanes, or between a point and a hyperplane, the respective dual p′-norm has to be used, with 1/p + 1/p′ = 1 [Mangasarian, 1999]. If p = ∞, p′ is defined to be 1. Having the two hyperplanes H₊₁ and H₋₁ with

    H_z = {x | ⟨w, x⟩ + b = z},

their distance is equal to

    2 / ‖w‖_p′    (1.1)

according to Mangasarian. (In the case of the Euclidean norm, this effect is also known from the Hesse normal form.) The resulting model reads as:
Method 1 (Maximum Margin Separation).

    max_{w,b}   1 / ‖w‖_p′    (1.2)
    s.t.   yⱼ(⟨w, xⱼ⟩ + b) ≥ 1   ∀j : 1 ≤ j ≤ n.

For new data, the respective linear classification function is:

    f(x) = ⟨w, x⟩ + b.    (1.3)

2 Due to convergence properties it holds ‖v‖_∞ := max |vᵢ|.
Figure 1.2: Support vector machine scheme. The blue dots are training samples with y = −1 and the red dots those with y = +1, respectively. Displayed are the three parallel hyperplanes H_{+1}, H_0, and H_{−1} of f(x) = ⟨w, x⟩ + b, with the maximum distance between the outer two.
To get a mapping to the class labels −1 and +1 we use the decision function

    F(x) = y(x) = sgn(f(x)) := { +1 if f(x) > 0,  −1 otherwise }.    (1.4)
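As a small sketch, the classification function (1.3) and the decision function (1.4) can be written in a few lines of NumPy. The weight vector and offset below are arbitrary illustration values, not a trained SVM solution.

```python
import numpy as np

def f(x, w, b):
    """Linear classification function f(x) = <w, x> + b, Eq. (1.3)."""
    return np.dot(w, x) + b

def F(x, w, b):
    """Decision function mapping the score to the labels -1/+1, Eq. (1.4)."""
    return 1 if f(x, w, b) > 0 else -1

# Arbitrary illustration values (not a trained solution).
w, b = np.array([1.0, -2.0]), 0.5
print(F(np.array([3.0, 1.0]), w, b))  # score 1.5 -> +1
print(F(np.array([0.0, 1.0]), w, b))  # score -1.5 -> -1
```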
For better solvability, Method 1 is reformulated to an equivalent one. The fraction is
inverted, and the respective minimization problem is solved instead. Furthermore,
the root is omitted to simplify the optimization process and further calculations. An
additional scaling factor is added for better looking formulas when solving the optimization problem. These superficial modifications do not change the optimal solution.
The resulting reformulated model reads:
Method 2 (Hard Margin Separation Support Vector Machine).

    min_{w,b}  (1/p′) ‖w‖_{p′}^{p′}    (1.5)
    s.t.  y_j(⟨w, x_j⟩ + b) ≥ 1   ∀j: 1 ≤ j ≤ n.
Since strict separation margins are prone to overfitting or do not exist at all, some disturbance in the form of samples penetrating the margin is allowed, denoted by the error variables t_j. When speaking of the C-SVM, normally the Euclidean norm is used (p = p′ = 2):
Method 3 (L1–Support Vector Machine).

    min_{w,b,t}  (1/2)‖w‖_2² + C Σ_j t_j    (1.6)
    s.t.  y_j(⟨w, x_j⟩ + b) ≥ 1 − t_j   ∀j: 1 ≤ j ≤ n,
          t_j ≥ 0   ∀j: 1 ≤ j ≤ n.
The hyperparameter C defines the compromise between the width of the margin (regularization term (1/2)‖w‖_2²) and the amount of samples lying in or on the wrong side of the margin (t_j > 0).³ This principle is called soft margin, because the margins defined by the two hyperplanes H_{+1} and H_{−1} can be violated by some samples (see also Figure 1.3). In the final solution of the optimization problem only these samples and the samples on the two hyperplanes are relevant and provide the SVM with its name.
Figure 1.3: Soft margin support vector machine scheme. In contrast to Figure 1.2, some samples are on the wrong side of the hyperplanes within the margin.
Definition 1 (Support Vector). The vectors defining the margin, i.e., those data examples xj where tj > 0 or where identity holds in the first inequality constraint
(Method 3), are the support vectors.
The L1 in the method name of the SVM (Method 3) refers to the loss term ‖t‖_1 = Σ_j t_j for t_j ≥ 0 in the target function. An L2 variant that uses ‖t‖_2² instead was suggested but is rarely used in applications, especially when kernels are used. When using kernels, it is important to have as few support vectors as possible, and the L2 variant often has many more support vectors [Mangasarian and Kou, 2007] (see also Section 1.1.1.2).
³ C is called regularization constant, cost factor, or complexity.
1.1.1.1 Lagrange Duality
For deriving solution algorithms for the optimization problem of the C-SVM and for the introduction of kernels (presently one of the most important research topics in SVM theory) it is useful to apply duality theory⁴ from optimization, e.g., [Burges, 1998]. Solving the dual instead of the primal (original) optimization problem can be easier in some cases, and if certain requirements are fulfilled, the solutions are connected via the Karush-Kuhn-Tucker (KKT) conditions, e.g., [Boyd and Vandenberghe, 2004]. Finally, duality theory provides necessary optimality conditions, which can be used to solve the optimization problems. Even though the following calculations will only be performed for the C-SVM, the concepts also apply to numerous variants and the respective calculations are similar (as partially shown in the appendix).

To avoid a degenerate optimization problem, it is required to check if at least one point fulfills all restrictions (feasible point), if there is a solution of the optimization problem, and if the problem can be "locally linearized", i.e., fulfills a constraint qualification, e.g., [Boyd and Vandenberghe, 2004]. These points are usually ignored in the SVM literature, probably because they seem obvious from the geometrical perspective. Nevertheless, they are the basis of most optimization algorithms for SVMs.
Theorem 1 (The C-SVM Model is well defined). The C-SVM optimization problem has feasible points, and a solution always exists if there is at least one sample for each class. Additionally, when using the hard margin, the sets of the two classes need to be strictly separable. Furthermore, Slater's constraint qualification is fulfilled.⁵
The question of how to determine the solution is a main topic of Section 1.2. The benefit of this theorem is twofold. We proved that the model is well defined and that we can apply Lagrange duality. The advantage of Lagrange duality for Method 3 is a reformulation of the optimization problem, which is easier to solve and which allows replacing the original norm by much more complex distance measures (the kernel trick). This advantage does not hold for the variants based on other norms (p ≠ 2).
For obtaining the dual optimization problem, first of all the respective Lagrange function has to be determined. For this, dual variables are introduced for every inequality (α_j, γ_j) and the inequalities are rewritten to have the form g(w, b, t) ≤ 0. The Lagrange function is the target function plus the sum of the reformulated inequality functions weighted with the dual variables:

    L1(w, b, t, α, γ) = (1/2)‖w‖_2² + Σ_j C_j t_j − Σ_j α_j (y_j(⟨w, x_j⟩ + b) − 1 + t_j) − Σ_j γ_j t_j.    (1.7)

⁴ This should not be mixed up with the previously mentioned duality of the norms. The dual optimization problem can be seen as an alternative/additional view on the original optimization problem.
⁵ The proof is given in Appendix B.1. Other constraint qualifications do exist, but Slater's was the easiest to check for the given convex optimization problem.
For the L2–SVM this yields:

    L2(w, b, t, α) = (1/2)‖w‖_2² + Σ_j C_j t_j² − Σ_j α_j (y_j(⟨w, x_j⟩ + b) − 1 + t_j).    (1.8)

To consider the label or the time for the weighting of errors, C has been chosen sample dependent (C_j).
With a case study, it can be shown that the original optimization is equivalent to optimizing:

    min_{w,b,t} sup_{α≥0, γ≥0} L1(w, b, t, α, γ).    (1.9)

Infeasible points in the original optimization problem get a value of infinity due to usage of the supremum, and for the feasible points the original target function value is obtained. According to Theorem 1, the optimization problem has a solution and Slater's constraint qualification is fulfilled. Consequently the duality theorem can be applied [Burges, 1998]. It states that we can exchange minimization and "supremization" and that the solutions for both problems are the same:

    min_{w,b,t} sup_{α≥0, γ≥0} L_q(w, b, t, α, γ) = max_{α≥0, γ≥0} min_{w,b,t} L_q(w, b, t, α, γ),   q ∈ {1, 2}.    (1.10)
The advantages of the new resulting optimization problem, called dual optimization
problem, are twofold. First, the inner part is an unconstrained optimization problem
which can be analytically solved. Second, the remaining constraints are much easier
to handle than the constraints in the original (primal) optimization problem.
For simplifying the dual optimization problem, the minimization problem is
solved by calculating the derivatives of the Lagrange function for the primal variables
and setting them to zero. This is the standard solution approach for unconstrained
optimization.
    ∂L_k/∂w = w − Σ_j α_j y_j x_j,   ∂L_k/∂b = −Σ_j α_j y_j,   ∂L1/∂t_j = C_j − α_j − γ_j,   ∂L2/∂t_j = 2 t_j C_j − α_j.    (1.11)
The most important resulting equations are

    w = Σ_j α_j y_j x_j,    (1.12)

which gives a direct relation between w and α, and

    Σ_j α_j y_j = 0,    (1.13)
which is a linear restriction on the optimal α. For the L1 variant, the equation

    C_j − α_j = γ_j    (1.14)

results in the side effect that γ_j can be omitted in the optimization problem and α_j gets the upper bound C_j instead, due to the constraint γ ≥ 0. Finally, substituting the equations for optimality into L_q and multiplying the target function with −1 to obtain a minimization problem results in the following theorem:
Theorem 2 (Dual L1– and L2–SVM Formulations). The term

    min_{C_j ≥ α_j ≥ 0, Σ_j α_j y_j = 0}  (1/2) Σ_{i,j} α_i α_j y_i y_j ⟨x_i, x_j⟩ − Σ_j α_j    (1.15)

is the dual optimization problem for the L1–SVM and

    min_{α_j ≥ 0, Σ_j α_j y_j = 0}  (1/2) Σ_{i,j} α_i α_j y_i y_j ⟨x_i, x_j⟩ − Σ_j α_j + (1/4) Σ_j α_j²/C_j    (1.16)

for the L2–SVM.

The dual of the hard margin SVM is given in Theorem 18, and for the L2 variant a more detailed calculation is provided in Appendix B.1.3.
In the dual formulation, only the pairwise scalar products of training samples are required. This is exploited in the kernel trick (Section 1.1.1.2). Note that only in the L1 case there is an upper bound on the dual variables. Furthermore, when looking more closely at the calculations, we realize that the additional equation in the dual feasibility constraints is a result of b being a free variable which is not minimized in the target function. These observations will again be relevant in Section 1.2.
The α_j are connected to the primal problem via Equation (1.12) but also via the complementary slackness equations [Boyd and Vandenberghe, 2004]:

    α_j > 0 ⇒ y_j(⟨w, x_j⟩ + b) ≤ 1.    (1.17)
Consequently, only samples on the margin or on the wrong side of the margin contribute to the classification function according to Equation (1.12). All the other samples are irrelevant and do not "support" the decision function. Hence the name. For the L1 case, it additionally holds

    t_j > 0 ⇒ γ_j = 0 ⇒ α_j = C_j.    (1.18)

This immediately tells us that every sample which is on the wrong side of the margin gets the maximum weight assigned and that every x_j with α_j > 0 is a support vector.
Sometimes α_j > 0 is used instead to define the term support vector.

So a specialty of the SVM is that only a fraction of the data is needed to describe the final solution. Interestingly, w can be split into the difference of two prototypes where each corresponds to one class:

    w = Σ_{j: y_j=+1, α_j>0} α_j x_j − Σ_{j: y_j=−1, α_j>0} α_j x_j = w_{+1} − w_{−1}.    (1.19)
So especially for the L1 case, the prototypes are at their core simply averages of the samples of one class which are difficult to distinguish from samples of the other class. In the L2 case, it is a weighted average. When looking at the implementation details in Section 1.2, it turns out that weights are higher if it is more difficult to distinguish the sample from the opposite class.
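The decomposition (1.19) is easy to verify numerically. The α, y, and X below are made-up illustration values with α_j ≥ 0; for an actual SVM solution they would come from the dual optimization.

```python
import numpy as np

X = np.array([[1.0, 2.0], [2.0, 0.5], [-1.0, -1.0], [-2.0, 0.0]])
y = np.array([1, 1, -1, -1])
alpha = np.array([0.5, 0.0, 0.3, 0.2])  # made-up dual weights, not optimized

# w as in Equation (1.12)
w = (alpha * y) @ X

# Class prototypes as in Equation (1.19)
pos = (y == 1) & (alpha > 0)
neg = (y == -1) & (alpha > 0)
w_pos = alpha[pos] @ X[pos]
w_neg = alpha[neg] @ X[neg]

# The two expressions for w coincide.
assert np.allclose(w, w_pos - w_neg)
```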
In addition to the implementation aspects in Section 1.2, the results of this section will also be used in the following to introduce kernels. There, the weighted average of samples is not used anymore; it is replaced by a weighted sum of functions.
1.1.1.2 Loss Functions, Regularization Terms, and Kernels
This section introduces three important concepts in the context of SVMs which are
also used in other areas of machine learning like regression, dimensionality reduction, and classification (without SVM variants). They are already a first set of (known
but loose) connections in the form of parameter mappings between SVM variants.
They will be repeatedly referred to in the other sections.
Loss Functions First, we will have a closer look at the t_j in the C-SVM models. Instead of using t_j, it is also possible to omit the side constraints by replacing t_j in the target function of the model with the function

    max{0, 1 − y_j(⟨w, x_j⟩ + b)}.    (1.20)

The underlying function l(s) = max{0, 1 − ys} is called hinge loss, where y ∈ {−1, +1} is the class label and s is the classification score. The respective squared function for the L2–SVM is called squared hinge loss. In case of the hard margin SVM a t_j or a respective replacing function could be introduced by defining

    t_j = { ∞ if 1 − y_j(⟨w, x_j⟩ + b) > 0,  0 else. }    (1.21)
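The replacement (1.20) of the slack variables can be written directly as a vectorized hinge loss; w and b below are arbitrary illustration values, not a trained solution.

```python
import numpy as np

def hinge_loss(w, b, X, y):
    """Per-sample hinge loss max{0, 1 - y_j(<w, x_j> + b)}, Eq. (1.20)."""
    scores = X @ w + b
    return np.maximum(0.0, 1.0 - y * scores)

X = np.array([[2.0, 0.0], [0.5, 0.0], [-1.0, 0.0]])
y = np.array([1, 1, -1])
w, b = np.array([1.0, 0.0]), 0.0

# Only the sample inside the margin (score 0.5) produces a nonzero loss.
print(hinge_loss(w, b, X, y))  # [0.  0.5 0. ]
```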
Definition 2 (Loss Function). The term summing up the model errors t_j^q is called loss term (sometimes also empirical error). The respective function defining the error of the algorithm model in relation to a single sample is called loss function.
There are several ways of choosing the loss function, each resulting in a new classifier. A (not complete) list of existing loss functions is given in Table 1.1. For some
of them, there is a corresponding underlying density model [Smola, 1998]. Three
choices have already been introduced and many more will be used in the following
sections.
name                        | function
hinge loss                  | max{0, ξ}
squared hinge loss          | max{0, ξ}²
Laplacian loss              | |ξ|
Gaussian loss               | (1/2) ξ²
ǫ-insensitive loss          | max{0, ξ − ǫ, −ξ − ǫ}
Huber's robust loss         | (1/(2σ)) ξ² if |ξ| ≤ σ;  |ξ| − σ/2 if |ξ| > σ
polynomial loss             | (1/p) |ξ|^p
piecewise polynomial loss   | (1/(p σ^(p−1))) |ξ|^p if |ξ| ≤ σ;  |ξ| − σ (p−1)/p if |ξ| > σ
0–1 loss                    | 0 if ξ ≥ −1;  1 if ξ < −1
LUM loss [Liu et al., 2011] | 1 − ξ if ξ < c/(1+c);  (1/(1+c)) (a/((1+c)ξ − c + a))^a if ξ ≥ c/(1+c)
logistic loss               | log(1 + exp(ξ + 1))

Table 1.1: Loss functions with ξ := 1 − ys, where y ∈ {−1, +1} is the class label and s is the classification score. For some functions additional hyperparameters are used (σ, p, c, a, ǫ).
Regularization Terms If for a classification algorithm only the loss function were minimized, chances are high that it will overfit to the given data. This means that it might perfectly match the given training data but might not generalize well and that it will perform worse on the testing data. To avoid such behavior, often a regularization term is used, like (1/p′)‖w‖_{p′}^{p′} in the C-SVM definition. Sometimes, this term is also called prior probability [Mika et al., 2001]. The target function of the respective algorithm is always the weighted sum of loss term and regularization term.
An advantage of using (1/2)‖w‖_2² as regularization function is its differentiability and strong convexity. When using a convex regularization and loss function, every local optimum is also a global one. Furthermore, together with the convexity of the optimization problem, the strong convexity results in the effect that there is always a unique w solving the optimization problem [Boyd and Vandenberghe, 2004]. This does not hold for 1-norm regularization (p′ = 1, ‖w‖_1 = Σ_i |w_i|), where there could be more than one optimal solution. p′ ∈ {1, 2} are the most common choices for regularization. The most common case is p′ = 2, due to its intuitiveness and its nice properties in the duality theory setting (see Section 1.1.1.1 and Section 1.2). The advantage of the regularization with p′ = 1 is its tendency to sparse solutions.⁶ Some more information about this behavior is given in Section 1.3.3.4. If w can be split into vectors w⁽¹⁾, …, w⁽ᵏ⁾, the terms ‖w‖_{1,2} := Σ_i ‖w⁽ⁱ⁾‖_2 and ‖w‖_{1,∞} := Σ_i ‖w⁽ⁱ⁾‖_∞ are sometimes used to induce grouped sparsity [Bach et al., 2012]. This means that the classifier tends to completely set some components w⁽ⁱ⁾ to zero vectors.
Kernels In Theorem 2 it could be shown in the case of p′ = 2 that the C-SVM problem can be reformulated to only work on the pairwise scalar products of the training data samples x_j and not the single samples anymore. This is used in the kernel trick, where the scalar product is replaced by a kernel function k. This results in a nonlinear separation of the data, which is very advantageous if the data is not linearly separable. The respective classification function becomes

    f(x) = b + Σ_{i=1}^{n} α_i k(x, x_i).    (1.22)
The most common kernel functions are displayed in Table 1.2. For some applications like text or graph classification, a kernel function is directly applied to the data without the intermediate step of creating a feature vector.

name                  | function
linear kernel         | ⟨x_i, x_j⟩
polynomial kernel     | (γ⟨x_i, x_j⟩ + b)^d
sigmoid kernel        | tanh(γ⟨x_i, x_j⟩ + b)
Gaussian kernel (RBF) | exp(−‖x_i − x_j‖_2² / (2σ²))
Laplacian kernel      | exp(−‖x_i − x_j‖_2 / σ)

Table 1.2: Kernel functions applied to the input (x_i, x_j). The other variables in the functions are additional hyperparameters to customize/tune the kernel for the respective application.
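As a small sketch, the Gaussian (RBF) kernel from Table 1.2 can be evaluated for a whole dataset at once; the toy data and σ² below are arbitrary choices.

```python
import numpy as np

def rbf_gram(X, sigma2):
    """Gram matrix K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T  # squared distances
    return np.exp(-d2 / (2.0 * sigma2))

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
K = rbf_gram(X, sigma2=1.0)

# An RBF kernel matrix is symmetric with ones on the diagonal.
assert np.allclose(K, K.T) and np.allclose(np.diag(K), 1.0)
```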
The use of the kernel can be compared with the effect of lifting the data to a higher dimensional space before applying the separation algorithm. Instead of defining the lifting, only the kernel function has to be defined. As a direct mathematical consequence, a kernel function is usually required to be a symmetric, positive semi-definite, and continuous function. These requirements are also called Mercer conditions. By furthermore restricting this function to a compact space (e.g., each vector component is only allowed to be in a bounded and closed interval) the Mercer theorem can be applied.

⁶ More components of w are mapped to zero.
Theorem 3 (Mercer Theorem [Mercer, 1909]). Let X be a compact set and k : X × X → R be a symmetric, positive semi-definite, and continuous function. Then there exists an orthonormal basis e_i ∈ L²(X) and non-negative eigenvalues λ_i such that

    k(s, t) = Σ_{j=1}^{∞} λ_j e_j(s) e_j(t).    (1.23)

Now using Φ = diag(√λ)(e_1, e_2, …), we get

    k(a, b) = ⟨Φ(a), Φ(b)⟩   ∀a, b.    (1.24)
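For the homogeneous polynomial kernel k(a, b) = ⟨a, b⟩² on R², the finite feature map Φ guaranteed by the factorization (1.24) can even be written down explicitly; this is a standard textbook check, not part of the Mercer proof itself.

```python
import numpy as np

def k_poly2(a, b):
    """Homogeneous polynomial kernel of degree 2."""
    return np.dot(a, b) ** 2

def phi(x):
    """Explicit feature map on R^2 with k(a, b) = <phi(a), phi(b)>."""
    return np.array([x[0]**2, np.sqrt(2.0) * x[0] * x[1], x[1]**2])

a, b = np.array([1.0, 2.0]), np.array([3.0, -1.0])

# The kernel equals the scalar product in the lifted space.
assert np.isclose(k_poly2(a, b), np.dot(phi(a), phi(b)))
```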
Consequently, Φ is a mapping into a high dimensional space, where the standard scalar product is used. The proof of the Mercer Theorem also gives a rule on how to construct the basis. This rule uses the derivatives of the kernel function. Hence, for the linear and polynomial kernel a finite basis can be constructed, but especially for the Gaussian kernel, which is also called radial basis function (RBF) kernel, only a mapping into an infinite dimensional space is possible because the derivative of the exponential function never vanishes.

Instead of the previous argument using the dual optimization problem, the following representer theorem is also used in the literature to introduce kernels.
Theorem 4 (Nonparametric Representer Theorem [Schölkopf et al., 2001a]). Suppose we are given a nonempty set X, a positive definite real-valued kernel k on X × X, a training sample (x_1, y_1), …, (x_n, y_n) ∈ X × R, a strictly monotonically increasing real-valued function g on [0, ∞), an arbitrary cost function c : (X × R²)ⁿ → R ∪ {∞}, and a class of functions

    F = { f ∈ R^X | f(·) = Σ_{i=1}^{∞} β_i k(·, z_i),  β_i ∈ R,  z_i ∈ X,  ‖f‖ < ∞ }.    (1.25)

Here, ‖·‖ is the norm in the reproducing kernel Hilbert space (RKHS) H_k associated with k, i.e., for any z_i ∈ X, β_i ∈ R (i ∈ N),

    ‖ Σ_{i=1}^{∞} β_i k(·, z_i) ‖² = Σ_{i=1}^{∞} Σ_{j=1}^{∞} β_i β_j k(z_i, z_j).    (1.26)
Then any f ∈ F minimizing the regularized risk functional

    c((x_1, y_1, f(x_1)), …, (x_n, y_n, f(x_n))) + g(‖f‖)    (1.27)

admits a representation of the form

    f(·) = Σ_{i=1}^{n} α_i k(·, x_i).    (1.28)

For the C-SVM, the decision function f is optimized in contrast to the classification vector w. The cost function c is used for the loss term and g is used for the regularization term (1/2)‖f‖². The theorem states that f can be replaced in the optimization problem with a finite sum using only the training samples. The result is the same as the previous approach for introducing the kernel.
No matter which way is chosen to introduce the kernel, the kernel trick can be
applied to most of the SVM variants with 2-norm regularization introduced in the
following except the online passive aggressive algorithm, because it does not keep
the samples in memory.
Even after building its model, the SVM has to keep the training data (only the
support vectors) for the classification function when using nonlinear kernels. In this
case the size of the solution (usually) grows with the size of the training data. Such
a type of model is called non-parametric model. In contrast, when using the linear
kernel the SVM provides a parametric model of the data with the parameters w and
b, because the number of parameters is independent from the size of the training
data. The usage of linear and RBF kernels is not unrelated; in fact, there is an interesting connection.
Theorem 5 (RBF kernel generalizes linear kernel for the C-SVM). According to
[Keerthi and Lin, 2003], the linear C-SVM with the regularization parameter C ′ is
the limit of a C-SVM with RBF kernel and hyperparameters σ 2 → ∞ and C = C ′ σ 2 .
This theorem was used by [Keerthi and Lin, 2003] to suggest a hyperparameter optimization algorithm, which first determines the optimal linear classifier and then uses the relation C = C′σ² to reduce the space of hyperparameters to be tested for the RBF kernel classifier. It can also be used in the other direction: if the optimal C and σ² become too large, the linear classifier with C′ = C/σ² could be considered instead. Consequently, the connection between the two variants can be directly used to speed up the hyperparameter optimization and also to guide the choice of the best variant.
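The mechanism behind Theorem 5 can be made plausible numerically: for large σ², the RBF kernel behaves like 1 + (⟨x_i, x_j⟩ − ½‖x_i‖² − ½‖x_j‖²)/σ², i.e., up to additive and multiplicative constants a scaled linear kernel. The sketch below only checks this first-order expansion on one pair of points; it is not the proof from [Keerthi and Lin, 2003].

```python
import numpy as np

a, b = np.array([1.0, 2.0]), np.array([0.5, -1.0])
sigma2 = 1e6  # very wide RBF kernel

k_rbf = np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma2))
# First-order Taylor expansion of the RBF kernel in 1/sigma^2:
first_order = 1.0 + (a @ b - 0.5 * a @ a - 0.5 * b @ b) / sigma2

# For sigma^2 -> infinity the two expressions agree up to O(1/sigma^4).
assert abs(k_rbf - first_order) < 1e-10
```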
1.1.1.3 ν-Support Vector Machine

The hyperparameter C in the L1–SVM model influences the number of support vectors, but this influence cannot be mathematically specified.⁷ The ν-support vector machine (ν-SVM) has been introduced with a different parametrization of the C-SVM to be able to provide a lower bound on the number of support vectors in relation to the number of training samples [Schölkopf et al., 2000, Crisp and Burges, 2000]:
Method 4 (ν-Support Vector Machine (ν-SVM)).

    min_{w,b,t,ρ}  (1/2)‖w‖_2² − νρ + (1/n) Σ_j t_j    (1.29)
    s.t.  y_j(⟨w, x_j⟩ + b) ≥ ρ − t_j  and  t_j ≥ 0   ∀j: 1 ≤ j ≤ n.
The additional hyperparameter ν ∈ [0, 1], which replaces the C ∈ (0, ∞), is the reason for the name of the algorithm. The original restriction ρ ≥ 0 is omitted for simplicity as suggested in [Crisp and Burges, 2000]. For the problem to be feasible,

    ν ≤ min{ Σ_{y_j=+1} y_j,  −Σ_{y_j=−1} y_j } / n    (1.30)

has to hold [Crisp and Burges, 2000]. Similar to the calculations in Section 1.1.1.1 the dual optimization problem can be derived (Theorem 19):
    min_α  (1/2) Σ_{i,j} α_i α_j y_i y_j ⟨x_i, x_j⟩    (1.31)
    s.t.  1/n ≥ α_j ≥ 0   ∀j: 1 ≤ j ≤ n,
          Σ_j α_j y_j = 0,   Σ_j α_j = ν.

Due to the restrictions, ν defines the minimum percentage of support vectors used from the training data. If there is no α_j ∈ (0, 1/n), then ν is the exact percentage of support vectors and not just a bound.
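The lower bound on the support-vector fraction follows from the dual constraints alone: since every α_j is at most 1/n and the α_j sum to ν, at least ⌈νn⌉ of them must be positive. A minimal numeric illustration with a hand-made feasible α (the label constraint Σ_j α_j y_j = 0 is ignored here for brevity):

```python
import numpy as np

n, nu = 10, 0.3
# Hand-made dual-feasible alpha: each entry <= 1/n, entries sum to nu.
alpha = np.array([0.1, 0.1, 0.1] + [0.0] * 7)

assert np.all(alpha <= 1.0 / n + 1e-12) and np.isclose(alpha.sum(), nu)

# Because each alpha_j <= 1/n, at least ceil(nu * n) entries are nonzero.
n_support = np.count_nonzero(alpha)
assert n_support >= int(np.ceil(nu * n))
```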
Theorem 6 (Equivalence between C-SVM and ν-SVM). If α(C) is an optimal solution for the dual of the C-SVM with hyperparameter C > 0, the ν-SVM has the same solution with ν = (1/(Cl)) Σ_i α_i(C), despite a scaling with Cl.
On the other hand, if ρ is part of the optimal solution of a ν-SVM with ν > 0 and a negative objective value, by choosing C = 1/(ρl) the C-SVM provides the same (scaled) optimal solution.
The proof and further details on this theorem can be found in [Chang and Lin, 2001].

⁷ Especially since this parametrization largely depends on the scaling/normalization of the data.
1.1.1.4 Support Vector Regression
Parallel to the C-SVM, the support vector regression (SVR) has been developed
[Vapnik, 2000, Smola and Schölkopf, 2004]. In the literature, the name Support Vector Machine is sometimes also used for the SVR. For a better distinction, the name
Support Vector Classifier (SVC) is sometimes used for the C-SVM. As the name
indicates, SVR is a regression algorithm (yj ∈ R) and not a classifier. The formal
definition is:
Method 5 (L1–Support Vector Regression (SVR)).

    min_{w,b,s,t}  (1/2)‖w‖_2² + C Σ_j s_j + C Σ_j t_j    (1.32)
    s.t.  ǫ + s_j ≥ ⟨w, x_j⟩ + b − y_j ≥ −ǫ − t_j   ∀j: 1 ≤ j ≤ n,
          s_j, t_j ≥ 0   ∀j: 1 ≤ j ≤ n.
The additional hyperparameter ǫ defines a region, where errors are allowed. Due
to this ǫ-tube the SVR tends to have few support vectors which are at the border
or outside of this tube. Using a squared loss or “hard margin” loss, other regularization, or kernels works for this algorithm as well as for the C-SVM. According to
[Smola and Schölkopf, 2004], the dual optimization problem is
    min_{α,β}  (1/2) Σ_{i,j} (α_i − β_i)(α_j − β_j) ⟨x_i, x_j⟩ − Σ_j y_j(α_j − β_j) + ǫ Σ_j (α_j + β_j)    (1.33)
    s.t.  0 ≤ α_j ≤ C   ∀j: 1 ≤ j ≤ n,
          0 ≤ β_j ≤ C   ∀j: 1 ≤ j ≤ n,
          Σ_j (α_j − β_j) = 0.

It is connected to the primal optimization problem via

    w = Σ_j (α_j − β_j) x_j.    (1.34)
Apart from the underlying theory from statistical learning (using regularization,
loss term, and kernels) and the fact that the solution also only depends on a subset
of samples called support vectors, there seems to be no direct intuitive connection
between SVR and C-SVM. Nevertheless, the existence of a parameter mapping could
be proven by [Pontil et al., 1999] in the following theorem:
Theorem 7 (Connection between SVR and C-SVM). Suppose the classification problem of the C-SVM in Method 3 is solved with regularization parameter C and the
optimal solution is found to be (w, b). Then, there exists a value a ∈ (0, 1) such that
∀ǫ ∈ [a, 1), if the problem of the SVR in Method 5 is solved with regularization parameter (1 − ǫ)C, the optimal solution will be (1 − ǫ)(w, b).
Proof. The proof by [Pontil et al., 1999] will not be repeated here. Instead, in Section 1.3 this theorem will become immediately clear by introducing a third classifier (balanced relative margin machine) which is connected intuitively to SVR and C-SVM. As a consequence, the choice of a will be geometrically motivated.

As already called for by [Pontil et al., 1999], there is also a ν-SVR, similar to the ν-SVM in Section 1.1.1.3 [Schölkopf et al., 2000, Smola and Schölkopf, 2004].
Method 6 (ν-Support Vector Regression).

    min_{w,b,s,t,ǫ}  (1/2)‖w‖_2² + C (nνǫ + Σ_j s_j + Σ_j t_j)    (1.35)
    s.t.  ǫ + s_j ≥ ⟨w, x_j⟩ + b − y_j ≥ −ǫ − t_j   ∀j: 1 ≤ j ≤ n,
          s_j, t_j ≥ 0   ∀j: 1 ≤ j ≤ n.

ν-SVR and SVR are connected similar to ν-SVM and C-SVM [Chang and Lin, 2002].
Interestingly, this time ν is not replacing C. Instead, it is indirectly replacing ǫ, which is now a model parameter and not a hyperparameter anymore. ν provides a weighting for the automatic selection of ǫ. This is reasonable, because in the SVR model, ǫ has the main influence on the number of support vectors.⁸ In contrast to the ν-SVM model, the proof for the existence of solutions for the C-SVM can be directly transferred to the ν-SVR.

We recently suggested a novel "variant" of the SVR for creating a regression of the upper/lower bound of a mapping with randomized real-valued output [Fabisch et al., 2015]. It is called positive upper boundary support vector estimation (PUBSVE). Further details are provided in Appendix B.5.
1.1.2 Least Squares Support Vector Machine
Using the Gaussian loss (which is also called least squares error) in the SVM model
instead of the (squared) hinge loss directly results in the least squares support vector
machine (LS-SVM) [Suykens and Vandewalle, 1999]. This change in the loss function
is substantial and results in a very different classifier:9
Method 7 (Least Squares Support Vector Machine (LS-SVM)).

    min_{w,b,t}  (1/2)‖w‖_2² + (C/2) Σ_j t_j²    (1.36)
    s.t.  y_j(⟨w, x_j⟩ + b) = 1 − t_j   ∀j: 1 ≤ j ≤ n.
Note that this classifier is the exact counterpart to ridge regression¹⁰

⁸ A smaller ǫ-tube where model errors are allowed results in more errors, and each of the corresponding samples is a support vector.
⁹ The difference will become clear in Section 1.3.
¹⁰ More details are provided in Appendix B.2.1.
[Hoerl and Kennard, 1970, Saunders et al., 1998]. The motivation of this classifier was to solve a "set of linear equations, instead of quadratic programming for classical SVM's" [Suykens and Vandewalle, 1999, p. 1]. This comes at the price of using all samples for the solution (except the ones with x_j ∈ H_{y_j}) in contrast to having few support vectors. Consequently, the method might be disadvantageous when working with kernels on large datasets, because the kernel function needs to be applied to every training sample and the new incoming sample which shall be classified.
For solving the optimization problem of the classifier, the use of Lagrange multipliers is not necessary, but it enables the use of kernels and the comparability with the C-SVM. The application can be justified analogously to Theorem 1. The respective Lagrange function is

    L(w, b, t, α) = (1/2)‖w‖_2² + (C/2) Σ_j t_j² − Σ_j α_j (y_j(⟨w, x_j⟩ + b) − 1 + t_j).    (1.37)
In contrast to the formulation of the L2–SVM, it holds α_j ∈ R. Setting the derivative of L to zero results in the equations:

    w = Σ_j α_j y_j x_j,    (1.38)
    0 = Σ_j α_j y_j,    (1.39)
    t_j = α_j / C   ∀j: 1 ≤ j ≤ n,    (1.40)
    1 = y_j(⟨w, x_j⟩ + b) + t_j   ∀j: 1 ≤ j ≤ n,    (1.41)

which are sufficient for solving the problem [Suykens and Vandewalle, 1999]. Substituting the first and third equation into the fourth equation and introducing a kernel function k reduces the set of equations to a set of (n + 1) equations with (n + 1) variables:
    0 = Σ_j α_j y_j,    (1.42)
    1 = y_j b + α_j / C + Σ_i α_i y_i y_j k(x_i, x_j)   ∀j: 1 ≤ j ≤ n.    (1.43)

With a very large n this set of equations might become too difficult to solve, and a special quadratic programming approach might be required, as suggested for the C-SVM (see Section 1.2), which does not require computing and storing all k(x_i, x_j).
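For small n, the system (1.42)–(1.43) can be assembled and solved directly. The toy data and C below are arbitrary illustration values; a linear kernel is used so that the result can be checked against the primal view.

```python
import numpy as np

X = np.array([[2.0, 0.0], [1.5, 1.0], [-1.0, 0.5], [-2.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
C, n = 10.0, len(y)

K = X @ X.T  # linear kernel k(x_i, x_j) = <x_i, x_j>

# Assemble the (n+1) x (n+1) linear system from Eqs. (1.42)-(1.43):
#   row 0:      sum_j alpha_j y_j = 0
#   rows 1..n:  y_j b + alpha_j / C + sum_i alpha_i y_i y_j K_ij = 1
A = np.zeros((n + 1, n + 1))
A[0, 1:] = y                                   # Eq. (1.42)
A[1:, 0] = y                                   # b-coefficients, Eq. (1.43)
A[1:, 1:] = np.outer(y, y) * K + np.eye(n) / C
rhs = np.concatenate(([0.0], np.ones(n)))

sol = np.linalg.solve(A, rhs)
b, alpha = sol[0], sol[1:]

# Classification scores f(x_j) via w = sum_i alpha_i y_i x_i.
f = X @ ((alpha * y) @ X) + b
assert np.all(np.sign(f) == y)  # toy data is classified correctly
```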
1.1.3 Regularized Fisher's Discriminant

The LS-SVM is also closely connected to the regularized Fisher's discriminant, as outlined in this section. In Section 1.3 we will show that both are special cases of a more general classifier.
Originally, the Fisher's discriminant (FDA) is defined as the optimal vector w that maximizes the ratio of variance between the classes and the variance within the classes after applying the linear classification function:

    w ∈ arg max_a  (aᵀ(μ_2 − μ_1))² / (aᵀ(Σ_2 + Σ_1) a).    (1.44)
Here, μ_i and Σ_i are the mean and covariance of the training data from class i, respectively. We can see that every positive scaling of w is a solution, too. Further, in terms of the linear classification functions f(x) = ⟨w, x⟩ + b, the definition of the FDA does not impose any constraints on the choice of the offset b. These ambiguities are the reason why different reformulations of the original problem can be found in the literature. For a good comparison with the C-SVM we need the following equivalent definition [Van Gestel et al., 2002, Mika, 2003]:

    min_{w,b}  Σ_{j=1}^{n} (⟨w, x_j⟩ + b − y_j)²    (1.45)

where the offset b is integrated into the optimization and where a scaled w is not a solution anymore. This method is also called Minimum Squared Error method or Least Squares method. In [Duda et al., 2001] a similar model was derived, but with a fixed offset.
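The reformulation (1.45) is an ordinary least squares problem in (w, b) and can be solved with a single lstsq call; the toy data below is made up for illustration.

```python
import numpy as np

X = np.array([[2.0, 1.0], [3.0, 0.0], [-2.0, 0.5], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

# Augment the data with a constant column so that b is fitted as well.
A = np.hstack([X, np.ones((len(y), 1))])
sol, *_ = np.linalg.lstsq(A, y, rcond=None)
w, b = sol[:2], sol[2]

# The minimizer of Eq. (1.45) separates this linearly separable toy set.
f = X @ w + b
assert np.all(np.sign(f) == y)
```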
For normally distributed data with equal covariance matrices for both classes but different means, the FDA is known to be the Bayes optimal classifier [Mika et al., 2001]. Motivated by the concept of Bayesian priors, Mika suggests to have an additional regularization term in the target function [Mika, 2003]:
Method 8 (Regularized Fisher's Discriminant (RFDA)).

    min_{w,b,t}  Reg(w, b) + C ‖t‖_2²    (1.46)
    s.t.  ⟨w, x_j⟩ + b = y_j + t_j   ∀j: 1 ≤ j ≤ n.
Here the variable t is used to describe the loss with the help of restrictions as
in the C-SVM model. For the RFDA this is not necessary but it will help us for the
comparison with other methods.
Theorem 8 (Equivalence of LS-SVM and RFDA). Using Reg(w, b) = (1/2)‖w‖_2² as regularization results in the least squares support vector machine.

Proof. Direct consequence of the definitions.
Mika also suggests to introduce kernels for the kernel Fisher discriminant with regularization (KFD) by replacing w with ∑_j α_j y_j x_j (α ∈ R^n) and by replacing the resulting scalar products with a kernel function. For the regularization Mika suggests to apply a regularization directly on α. Using ‖α‖₁ for example as regularization term results in sparse solutions in the kernel space [Mika et al., 2001]. A similar approach was also mentioned for SVMs [Mangasarian and Kou, 2007].11
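The kernel substitution just described yields a decision function of the form sgn(∑_j α_j y_j k(x_j, x) + b). A small illustrative sketch, where the Gaussian kernel, all names, and the toy data are my own choices:

```python
import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    """Gaussian (RBF) kernel k(x, z) = exp(-gamma * ||x - z||^2)."""
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(z)) ** 2))

def kernel_decision(x, X, y, alpha, b, k=rbf_kernel):
    """Kernelized classifier f(x) = sign(sum_j alpha_j * y_j * k(x_j, x) + b)."""
    value = sum(a * yj * k(xj, x) for a, yj, xj in zip(alpha, y, X)) + b
    return np.sign(value)

X = [[0.0, 0.0], [2.0, 0.0]]
y = [1, -1]
alpha = [1.0, 1.0]
p0 = kernel_decision([0.1, 0.0], X, y, alpha, b=0.0)  # close to the positive point
p1 = kernel_decision([1.9, 0.0], X, y, alpha, b=0.0)  # close to the negative point
```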
In [Mika, 2003] it is also mentioned that non-Gaussian distribution assumptions
result in other loss terms. Further choices like Laplacian loss (for Laplacian noise)
will be examined in Section 1.3.
1.1.4 Relative Margin Machine
The following classifier is the basis of a novel classifier which generalizes most of the
already introduced classifiers (see Section 1.3).
The relative margin machine (RMM) from [Shivaswamy and Jebara, 2010] extends the C-SVM with an additional outer margin that accounts for the spread of the data and adds a data-dependent regularization:
Method 9 (Relative Margin Machine (RMM)).

min_{w,b,t} ½‖w‖₂² + C ∑_j t_j
s.t. y_j(⟨w, x_j⟩ + b) ≥ 1 − t_j   ∀j : 1 ≤ j ≤ n
     ½(⟨w, x_j⟩ + b)² ≤ R²/2      ∀j : 1 ≤ j ≤ n   (1.47)
     t_j ≥ 0                      ∀j : 1 ≤ j ≤ n.
The additional hyperparameter R in this method constrains the maximum distance a sample can have from the decision plane in relation to the length of the classification vector w; R is called range in the following. The real distance is R · (1/‖w‖).12 Thus, it provides an additional outer margin at the hyperplanes H_R and H_{−R}, which is dependent on the inner margin.
Definition 3 (Relative Margin). The relative margin is the combination of the inner
and the outer margin.
The range has to be either chosen manually or automatically, and we always assume R ≥ 1, as by definition ±1 are the borders of the inner margin. The classifier
scheme is depicted in Figure 1.4. Further details on motivation and variants of this
classifier are the content of Section 1.3.
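To make the constraints in (1.47) concrete, the following sketch evaluates the RMM objective and checks feasibility of a candidate solution (function and variable names are hypothetical, not from the text):

```python
import numpy as np

def rmm_check(w, b, X, y, t, C, R):
    """Evaluate the RMM objective (1.47) and check all three constraint groups."""
    f = X @ w + b                                # decision values <w, x_j> + b
    inner = np.all(y * f >= 1 - t)               # inner margin constraints
    outer = np.all(0.5 * f ** 2 <= R ** 2 / 2)   # outer margin constraints
    slack = np.all(t >= 0)
    objective = 0.5 * w @ w + C * t.sum()
    return objective, bool(inner and outer and slack)

X = np.array([[1.0, 0.0], [-1.0, 0.0]])
y = np.array([1.0, -1.0])
obj, feasible = rmm_check(np.array([1.0, 0.0]), 0.0, X, y, np.zeros(2), C=1.0, R=1.0)
```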
11 The C-SVM regularization term with kernels is: ∑_{i,j} α_i α_j y_i y_j k(x_i, x_j).
12 Note, 2/‖w‖ is the distance between the aforementioned maximum margin hyperplanes.
1.1. Support Vector Machine and Related Methods
Figure 1.4: Relative margin machine scheme (the plot shows, in the (x1, x2) plane, the hyperplanes at −R, −1, 0, 1, R and the maximum distance). There are two new hyperplanes, H_R and H_{−R}, to define the outer margin in contrast to Figure 1.2.
1.1.5 Online Passive-Aggressive Algorithm
The passive-aggressive algorithm (PAA) was motivated by the loss functions of the
C-SVM and the use of a regularization term [Crammer et al., 2006]. All three versions of the loss term were considered: hard margin, hinge, and squared hinge
loss.
The resulting algorithms are denoted by PA, PA-I, and PA-II respectively
[Crammer et al., 2006]. In contrast to the C-SVM, the PAA is an online learning
classifier (see also Section 1.2). It uses one single sample at a time, adapts its classification function parameter w and then it forgets the sample.
In the single update step of the PAA, the loss function to be minimized is a function of only one incoming training sample, and instead of the norm of the classification vector w, the distance between the old and the new classification vector is minimized, an idea taken from [Helmbold et al., 1999]:
w_{t+1} = argmin_{w∈R^m} ½‖w − w_t‖₂² + C·l(w, x_t, y_t).   (1.48)
The loss function l is the same as used for the C-SVM with hard margin, hinge, or
squared hinge loss. Note that no offset is used. To incorporate one, Crammer suggests using an extra component in w for b and extending the data to homogeneous coordinates with an additional 1, which results in the classification function f(x) = ⟨(w, b), (x, 1)⟩. Consequently, the offset is also subject to minimization in the target
margin loss and Lagrange duality can be applied. In contrast to the C-SVM, concrete
solution formulas can be derived for the different losses [Crammer et al., 2006]. The
detailed algorithm description is provided in Figure 1.5.
INPUT: aggressive parameter C > 0
INITIALIZE: w_1 = (0, . . . , 0)
For t = 1, 2, . . .
• receive instance: x_t ∈ R^m
• predict: ŷ_t = sgn⟨w_t, x_t⟩
• receive correct label: y_t ∈ {−1, +1}
• suffer loss: l_t = max{0, 1 − y_t⟨w_t, x_t⟩}
• update:
  1. set:
     α_t = l_t/‖x_t‖²                (PA)
     α_t = min{C, l_t/‖x_t‖²}        (PA-I)
     α_t = l_t/(‖x_t‖² + 1/(2C))     (PA-II)
  2. update: w_{t+1} = w_t + α_t y_t x_t

Figure 1.5: Online passive-aggressive algorithm (PAA) as described in Section 1.1.5 and [Crammer et al., 2006].
The PAA is even more connected to the C-SVM than it seems at first sight. This is
shown in Section 1.2.4.
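A compact NumPy transcription of the update rules from Figure 1.5 (the function name and toy data are mine; the three branches correspond to PA, PA-I, and PA-II):

```python
import numpy as np

def paa_fit(X, y, C=1.0, variant="PA-I"):
    """Single pass of the online passive-aggressive algorithm (Figure 1.5)."""
    w = np.zeros(X.shape[1])
    for x_t, y_t in zip(X, y):
        loss = max(0.0, 1.0 - y_t * np.dot(w, x_t))   # hinge loss on current sample
        sq = np.dot(x_t, x_t)
        if variant == "PA":
            alpha = loss / sq
        elif variant == "PA-I":
            alpha = min(C, loss / sq)
        else:  # PA-II
            alpha = loss / (sq + 1.0 / (2.0 * C))
        w = w + alpha * y_t * x_t                     # w_{t+1} = w_t + alpha_t y_t x_t
    return w

X = np.array([[2.0, 0.0], [-2.0, 0.0]])
y = np.array([1.0, -1.0])
w = paa_fit(X, y)
pred = np.sign(X @ w)
```

Each sample triggers at most one update and is then discarded, which is what makes the algorithm an online learner.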
1.1.6 Unary Classification
Instead of a classification with two classes (binary classification) some classifiers
focus only on one class (unary classification) even though a second class might be
present from the application point of view. The reason for omitting this second class
might be the desire to model only the properties of one class and not of a second one, or the lack of data, as is the case for outlier or novelty detection [Aggarwal, 2013]. A more detailed motivation for unary classification will be given in Section 1.4. In the following, we will discuss three SVM variants for unary classification.
1.1.6.1 Support Vector Data Description
For constructing a classifier with the data from a single class and not two classes,
the support vector data description (SVDD) is a straightforward approach. Its concept is to find a hypersphere with minimal radius which encloses all samples of one
class. It is assumed that samples outside this hypersphere do not belong to the class
[Tax and Duin, 2004].
Method 10 (support vector data description (SVDD)).

min_{R′,c,t′} R′² + C′ ∑_j t′_j
s.t. ‖c − x_j‖₂² ≤ R′² + t′_j and t′_j ≥ 0  ∀j : 1 ≤ j ≤ n.   (1.49)

R′ is the radius of the enclosing hypersphere with center c. The decision function is

F(x) = sgn(R′² − ‖c − x‖₂²).   (1.50)
The SVDD can also be seen as an SVM variant [Tax, 2001, Tax and Duin, 2004] and it can be used with kernels, too. In case of using kernels, the set of support vectors also tends to be small, because samples inside the hypersphere are not support vectors.
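The decision rule (1.50) is straightforward to state in code (names are mine; in practice the center c and radius R′ would come from solving (1.49)):

```python
import numpy as np

def svdd_decision(x, c, R):
    """SVDD decision function (1.50): +1 inside the hypersphere, -1 outside."""
    return int(np.sign(R ** 2 - np.sum((np.asarray(c) - np.asarray(x)) ** 2)))

c = np.array([0.0, 0.0])
inside = svdd_decision([0.5, 0.5], c, R=1.0)   # within radius 1 of the center
outside = svdd_decision([2.0, 0.0], c, R=1.0)  # outside the hypersphere
```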
1.1.6.2 Unary Online Passive-Aggressive Algorithm
The concept of the SVDD to enclose the data with a hypersphere was also used to define the unary PAA [Crammer et al., 2006]. Instead of the hinge loss with its hard margin and squared version (see Section 1.1.5), the “SVDD loss” is considered:

l_R(c, x) = { 0 if ‖c − x‖ ≤ R,
              ‖c − x‖ − R otherwise.   (1.51)
The same optimization problem is solved as for the binary PAA to determine a new center c with a new incoming sample:

c_{t+1} = argmin_{c∈R^m} ½‖c − c_t‖₂² + C·l(c, x_t)   (1.52)

where l forces l_R to be zero (hard margin, respective algorithm denoted with unary PA), or l = l_R^q with q ∈ {1, 2} (soft margin, unary PAq). The processing scheme is similar to the method reported in Figure 1.5. But with the different loss, the respective update factors are:

α_t = l_R(c_t, x_t)  (PA0),   α_t = min{C, l_R(c_t, x_t)}  (PA1),   α_t = l_R(c_t, x_t)/(1 + 1/(2C))  (PA2),   (1.53)

and the update formula is:

c_{t+1} = c_t + α_t (x_t − c_t)/‖x_t − c_t‖.   (1.54)
In contrast to the SVDD, the hyperparameter R has to be chosen beforehand. For extending the method with an automatic tuning of R an upper bound R_max has to be defined instead. The radius R is now indirectly optimized by extending the center c with an additional component c^(m+1) which is initialized with R_max. It is then related to the optimal R by R = √(R_max² − (c^(m+1))²). The respective data gets an additional component with the value zero. For further details we refer to [Crammer et al., 2006, section 6].
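The update factors (1.53) together with the center update (1.54) can be sketched as follows (the function name and the fixed-R variant shown here are my own simplifications):

```python
import numpy as np

def unary_pa_update(c, x, R, C, variant="PA1"):
    """One unary passive-aggressive step: move the center c toward x
    according to the SVDD loss (1.51) and the factors (1.53)/(1.54)."""
    diff = np.asarray(x) - np.asarray(c)
    dist = np.linalg.norm(diff)
    loss = max(0.0, dist - R)                  # l_R(c, x)
    if variant == "PA0":
        alpha = loss
    elif variant == "PA1":
        alpha = min(C, loss)
    else:  # PA2
        alpha = loss / (1.0 + 1.0 / (2.0 * C))
    if dist == 0.0:                            # sample at the center: nothing to do
        return np.asarray(c, dtype=float)
    return c + alpha * diff / dist             # c_{t+1} = c_t + alpha (x - c)/||x - c||

# a sample at distance 3 with R = 1 pulls the center until x lies on the sphere
c_new = unary_pa_update(np.zeros(2), np.array([3.0, 0.0]), R=1.0, C=10.0)
```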
1.1.6.3 Classical One-Class Support Vector Machine

The classical one-class support vector machine (νoc-SVM) has been introduced as a tool for “estimating the support of a high-dimensional distribution” [Schölkopf et al., 2001b, title of the paper].
Method 11 (One-Class Support Vector Machine (νoc-SVM)).

min_{w,t,ρ} ½‖w‖₂² − ρ + (1/(νl)) ∑_j t_j   (1.55)
s.t. ⟨w, x_j⟩ ≥ ρ − t_j and t_j ≥ 0 ∀j

with the decision function

F(x) = sgn(⟨w, x⟩ − ρ).   (1.56)
Again, there is a hidden binary classification included via the decision function, namely whether a sample belongs to the one class or not. The dual of the νoc-SVM [Schölkopf et al., 2001b],

min_α ½ ∑_{i,j} α_i α_j ⟨x_i, x_j⟩  s.t. 0 ≤ α_i ≤ 1/(νl) ∀i and ∑_i α_i = 1,   (1.57)

is quite similar to the dual of the ν-SVM after a scaling of the dual variables with ν. Only the equation ∑_j α_j y_j = 0 is missing. This equation cannot be fulfilled for a unary classifier, because it holds y_j = 1 ∀j : 1 ≤ j ≤ n. This similarity and its consequences will be analyzed in detail in Section 1.4.
1.2 Single Iteration: From Batch to Online Learning
This section contains my findings from:
Krell, M. M., Feess, D., and Straube, S. (2014a). Balanced Relative Margin Machine –
The missing piece between FDA and SVM classification. Pattern Recognition Letters,
41:43–52, doi:10.1016/j.patrec.2013.09.018
and
Krell, M. M. and Wöhrle, H. (2014). New one-class classifiers based on the origin separation approach. Pattern Recognition Letters, 53:93–99, doi:10.1016/j.patrec.2014.11.008.
No text parts are taken from these publications.
So far, we have only defined the optimization problems for the C-SVM and its numerous variants. To actually use these models, an approach is required to at least approximately solve the optimization problems, which will be covered in this section.
It is not straightforward, because there is no closed form solution.13 Furthermore, it
is important to have algorithms which scale well with the size of the dataset to make
it possible to build a model with the help of an arbitrarily large set of training data.14
The implementation approaches are transferred to other classifiers in the following
sections but they also provide a connection between C-SVM and PAA.
Similar to the number of SVM variants, there are also several approaches for
solving the optimization problem. In this section, we focus on a few approaches
which finally lead, with the help of the single iteration approach, to an algorithm that operates on an arbitrarily large set of training data at the price of accuracy. The drop of accuracy results from simplifications of the original optimization
problem. These simplifications are required to speed up the solution algorithms.
For example, the use of kernels will finally be omitted, because with increasing size of the data an increasing number of support vectors is expected [Steinwart, 2003, Steinwart and Christmann, 2008]. This also increases the amount of required memory and time for the prediction, which in some applications might be inappropriate.
The C-SVM is categorized as a batch learning algorithm. This means that it
requires the complete set of training data to build its model. The opposite category
would be online learning classifiers like the PAA in Section 1.1.5. These classifiers
incrementally update their classification model with the incoming single training
samples and do not use all training data at once. With each sample, they perform
an update of their model parameters which have a fixed size and do not increase with
13 A single formula which allows to calculate the model parameters at once.
14 In most cases, the performance of classifiers improves with an increasing amount of training data.
an increasing number of samples.
The advantage for the application is not only to have an algorithm which can
be trained on arbitrarily large datasets but it also gives the possibility to adapt the
model at runtime when the model is used and to update the model with new training
samples (online application). In this scenario, samples are classified with the help
of the current classifier model and the classification has some impact on a system.
Due to resulting actions of the system or other verification mechanisms, the true
label of the sample is determined a posteriori.15 This feedback is then used for updating the online learning algorithm. Consequently, online learning algorithms are
expected to work sufficiently fast in the update step and the classification step, such
that both steps can be used during an application. A big advantage of online learning
algorithms in such online applications is that they can adapt to changing conditions
which might result in drifts in the data. Those drifts might not have occurred when
acquiring the initial training data [Quionero-Candela et al., 2009].
Assume for example an algorithm running on a robot with a camera, which uses
images to detect the soil type of the environment to avoid getting stuck or wet. It
is impossible to have a complete training set which accounts for every situation, e.g.,
light condition, temperature, color of the underground, or a water drop on the camera.
So the respective classification algorithm might make wrong predictions. Now as the
robot is walking or driving over the ground it might detect the underground very
accurately by measuring pressure on the feet and slippage. Consequently, it could
adapt the image classification algorithm with the help of the afterwards detected
labels.16 For this adaptation an online learning algorithm would be required with strict
limitations on the resources because it has to run on the robot. If the classification
or the adaptation is too slow the robot might have to stop to wait for the results to
decide where to go. Furthermore, the computational resources on a robot are usually
low to save space and energy and provide longterm autonomy.
Another application is the (longterm) use of EEG in embedded brain reading [Kirchner and Drechsler, 2013, Kirchner, 2014] where the operator shall not be
limited in his movement space. Here, a BCI is used to infer the behavior of the human and to adapt an interface to the human. Thereby it is taking false predictions
into account. For example, an exoskeleton can lessen its stiffness when the EEG classifier predicts an incoming movement [Kirchner et al., 2013, Seeland et al., 2013a],
or a control scenario can repeat warnings less often if the classifier detects that the
warning has been cognitively perceived. EEG data is known to be non-stationary. So
15 Not in every application such a verification is possible. Sometimes unsupervised approaches are used which for example assume that the classified label was correct and can be used for the update.
16 Note that a simultaneous localization and mapping (SLAM) algorithm is required for the matching between images, positions, and sensors.
online-learning can improve the system as shown in [Wöhrle et al., 2015]. Getting
true labels is ensured by the concept of embedded brain reading. The real behavior
of the subject can be compared with the inferred behavior and the classification process can be adapted. Furthermore, it is useful to have the complete processing on
a small mobile device with low power consumption [Wöhrle et al., 2014] to ease the
applicability. So here again the properties of efficient online learning are needed.
A reason to look at an online version of the C-SVM in this context was that in our
practical experience the batch learning algorithm performed well on the data in the
offline evaluation due to its good generalization properties. In the application, we
could show that an online classifier can have performance comparable to the original
algorithm [Wöhrle et al., 2013b, Wöhrle and Kirchner, 2014]. It can even improve in
the application [Tabie et al., 2014, Wöhrle et al., 2015] since the fast updates can be
used for online adaptation. With the batch algorithm this is impossible if new samples come in too fast or if too much memory is consumed when all training samples
are kept.
One approach to give a SVM online learning properties is not to start the learning of the model from scratch but to use a warm start by initializing an optimization algorithm with the old solution from a previous update step [Laskov et al., 2006,
Steinwart et al., 2009]. This approach also works with kernels. Unfortunately, an increasing amount of time for calculating the decision function is required if the number of support vectors is increasing. Furthermore, the memory consumption increases
linearly with each incoming data sample. Another approach to cope with this issue is
to use the warm start approach but also include a decreasing step to the update step
where the amount of data, which is kept, is reduced to keep memory consumption
constant [Gretton and Desobry, 2003, Van Vaerenbergh et al., 2010]. Nevertheless, a
high amount of memory and processing is still required for these approaches and
an evaluation is required in the future to compare the different approaches and to
analyze there properties in online applications.
The motivation of this section is to provide a more general approach to derive
online learning algorithms not only for the C-SVM but also for its variants and to
understand the relations between the different solvers and the different underlying
classifier models. A short summary on the approaches and the respective section
where they are discussed is given in Table 1.3. For a detailed analysis of the benefits of online learning in the context of the P300 dataset (Section 0.4), we refer to
[Wöhrle et al., 2015]. In the experiment in Section 2.4.6 it can be seen clearly that
online learning can improve classification performance, when using the online SVM
introduced in Section 1.2.4.
Approach                      Samples per update step   Repeated iterations   References
Newton optimization           all                       yes                   [Chapelle, 2007], Section 1.2.1
SMO                           2                         yes                   [Platt, 1999a], Section 1.2.2
successive overrelaxation     1                         yes                   [Mangasarian and Musicant, 1998],
dual gradient descent         1                         yes                   [Hsieh et al., 2008],
omit offset                   1 & 2                     yes                   [Steinwart et al., 2009], Section 1.2.3
PAA                           1                         no                    [Crammer et al., 2006],
single iteration              1                         no                    [Krell et al., 2014a], Section 1.2.4

Table 1.3: Overview on SVM solution approaches grouped by similarity. They are required because they lead to the single iteration approach. All the algorithms basically consist of an update step, where the classifier model is updated to be more optimal concerning the chosen samples, and in some cases (batch learning) they have an iteration loop over the complete set of samples with certain heuristics.
1.2.1 Newton Optimization

This section introduces a straightforward solution approach for the C-SVM optimization problem as a summary of [Chapelle, 2007].
For solving the C-SVM optimization problem directly, it is advantageous to directly put the side constraints into the target function to get an unconstrained optimization problem:

min_{w,b} ½‖w‖₂² + C ∑_j (max{0, 1 − y_j(⟨w, x_j⟩ + b)})^q,  q ∈ {1, 2}.   (1.58)
This approach is slightly different to penalty methods because the hyperparameter C remains fixed and is not iteratively increased. The second step by Chapelle was to introduce a kernel into the primal optimization problem:

min_a ½ ∑_{i,j} a_i a_j k(x_i, x_j) + C ∑_j (max{0, 1 − y_j(∑_i a_i k(x_i, x_j) + b)})^q.   (1.59)
This can be either done directly, with the representer theorem (Theorem 4), or by transforming the dual problem with kernel back to the primal problem. Note that there is no restriction on the weights a_i, in contrast to the dual variables α_i in Theorem 2. The classification function is f(x) = ∑_i a_i k(x_i, x) + b.
The third step is to repeatedly calculate the gradient (∇) and the Hessian (H) of the target function and perform a Newton update step [Boyd and Vandenberghe, 2004]:

a → a − γH⁻¹∇   (1.60)
with step size γ. For the detailed formulas of the derivatives refer to [Chapelle, 2007].
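The update (1.60) in code, solving a linear system instead of inverting the Hessian (a generic sketch, not Chapelle's full algorithm; names are mine):

```python
import numpy as np

def newton_step(a, grad, hess, gamma=1.0):
    """One damped Newton update a <- a - gamma * H^{-1} grad, Eq. (1.60)."""
    return a - gamma * np.linalg.solve(hess, grad)

# sanity check on a quadratic f(a) = 0.5 a^T Q a - q^T a, whose minimizer is Q^{-1} q;
# one full Newton step (gamma = 1) reaches the optimum exactly
Q = np.array([[2.0, 0.0], [0.0, 4.0]])
q = np.array([2.0, 4.0])
a = np.zeros(2)
a = newton_step(a, Q @ a - q, Q)
```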
Note that the loss functions are pieced together from other functions and there is no second derivative at the intersection points. So in this method, the one-sided second derivative is used for the Hessian, which makes it a quasi-Newton method. Furthermore, the algorithm also exploits results from the optimality conditions by setting the weights to zero if the respective sample is classified without any error (zero loss). If γ ≠ 1 is chosen, this trick is required. Otherwise, all samples in the training data could become support vectors, which would increase computational complexity and slow down convergence. The matrix inversion is usually replaced with the solution of a linear equation and it is possible to use a sparse approximation of H to save processing time. But this method might have memory problems if the number of samples is too large. Furthermore, the hinge loss has to be replaced with the approximation

L(y, t) = { 0 if ξ < −h,
            (ξ + h)²/(4h) if |ξ| ≤ h,
            ξ if ξ > h,
with ξ = 1 − yt   (1.61)
and the offset b is omitted [Chapelle, 2007] although it might be possible to derive the
respective formulas with the offset. A special treatment of the offset is also common
for other solution approaches (see Section 1.2.3).
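The smoothed loss (1.61) is easy to transcribe; note that the quadratic piece meets both neighboring pieces continuously at ξ = ±h (names are mine):

```python
def smoothed_hinge(y, t, h=0.5):
    """Huber-style approximation (1.61) of the hinge loss, with xi = 1 - y*t."""
    xi = 1.0 - y * t
    if xi < -h:
        return 0.0        # well classified: zero loss
    if xi > h:
        return xi         # linear regime, identical to the hinge loss
    return (xi + h) ** 2 / (4.0 * h)  # quadratic transition around xi = 0

v0 = smoothed_hinge(1.0, 2.0)    # xi = -1 < -h: zero loss
v1 = smoothed_hinge(1.0, -1.0)   # xi = 2 > h: linear regime, loss = xi
v2 = smoothed_hinge(1.0, 0.5)    # xi = 0.5 = h: quadratic piece meets linear piece
```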
Chapelle also states that “from a machine learning point of view there is no reason
to prefer the hinge loss anyway” but does not provide a proof or reference to support
this claim. A special argument for working with the hinge loss is that it tends to
work on a smaller set of support vectors in contrast to using the squared hinge loss.
This has not been analytically proven but there are indicators from statistical learning theory [Steinwart, 2003], and in fact Chapelle proved empirically that using his
version, the SVM tends to use more support vectors and is inferior to Sequential
Minimal Optimization (introduced in Section 1.2.2). Having fewer support vectors is
important when working with kernels because it speeds up the processing in the classification step. Furthermore, when online learning is the goal, having fewer support
vectors can speed up the update steps.
1.2.2 Sequential Minimal Optimization

The C-SVM optimization problem is traditionally solved with sequential minimal optimization (SMO) [Platt, 1999a], as implemented in the LibSVM library [Chang and Lin, 2011]. It is briefly described in this section.
Its principle is to reduce the dual optimization problem as far as possible and then iteratively solve the reduced problems. The dual optimization problem reads:

min_{C≥α_j≥0, ∑_j α_j y_j = 0} ½ ∑_{i,j} α_i α_j y_i y_j k(x_i, x_j) − ∑_j α_j.   (1.62)
At the initialization all α_j are set to zero. The smallest optimization problem requires to choose two dual variables, e.g., (α_1^old, α_2^old), for an update to keep the equation ∑_j α_j y_j = 0 valid in the update step. Now, all variables are kept fixed except these two and the respective optimization problem is solved analytically considering all side constraints. Due to the equation in the constraints, one can focus on the update of α_2^old and later on calculate

α_1^new = α_1^old + y(α_2^old − α_2^new)   (1.63)

where y = y_1 y_2. The borders for α_2^new are

L = max{0, α_2^old + yα_1^old − (1 + y)C/2}  and  H = min{C, α_2^old + yα_1^old + (1 − y)C/2}.   (1.64)

Following [Platt, 1999a], the first step is to solve the unconstrained optimization problem, which results in:

α_2^opt = α_2^old − y_2 (f^old(x_1) − y_1 − f^old(x_2) + y_2) / (2k(x_1, x_2) − k(x_1, x_1) − k(x_2, x_2))  with f^old(x) = ∑_j α_j^old y_j k(x_j, x) + b.   (1.65)
A final curve discussion shows that this unconstrained optimum has to be projected to the borders to obtain the constrained optimum:

α_2^new := { L if α_2^opt < L,
             α_2^opt if L ≤ α_2^opt ≤ H,   (1.66)
             H if α_2^opt > H.

Now, the two variables are changed to their optimal value. Then a new pair is chosen and the optimization step is repeated until a convergence criterion is reached.
The expensive part in the calculation is to get the function values f^old(x_i). When working with the linear kernel, this step can be simplified by tracking w (initialized with zeros):

w^new = w^old + y_1(α_1^new − α_1^old)x_1 + y_2(α_2^new − α_2^old)x_2.   (1.67)

Now f^old(x_1) − f^old(x_2) can be replaced by ⟨w^old, x_1 − x_2⟩.
The remaining question is on how to choose the pair of dual variables for each
update. Instead of repeatedly iterating over all available pairs, different heuristics can be used which rely on the error f old (xi ) − yi , and which try to maximize the expected benefit of an update step. Note that this method only requires
to store the weights and access the training data sample wise. Nevertheless for
speed up, caching strategies are used which store kernel products and error values, especially for samples with 0 < αj < C.
For further details, we refer to
[Platt, 1999a, Chen et al., 2006]. The SMO principle can be also applied to other
SVM variants like for example SVR [Smola and Schölkopf, 2004] or L2–SVM instead
of L1–SVM which was handled in this section. A similar approach has also been
applied for RMM [Shivaswamy and Jebara, 2010].
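A single analytic pair update following (1.63)-(1.66) can be sketched as follows (the helper takes precomputed function values f^old(x_i) and kernel entries; names and the toy example are mine):

```python
def smo_pair_update(a1, a2, y1, y2, f1, f2, C, k11, k12, k22):
    """Analytic update of two dual variables per Equations (1.63)-(1.66)."""
    y = y1 * y2
    L = max(0.0, a2 + y * a1 - (1 + y) * C / 2.0)   # lower border (1.64)
    H = min(C, a2 + y * a1 + (1 - y) * C / 2.0)     # upper border (1.64)
    eta = 2.0 * k12 - k11 - k22                     # denominator, <= 0 for valid kernels
    a2_opt = a2 - y2 * (f1 - y1 - f2 + y2) / eta    # unconstrained optimum (1.65)
    a2_new = min(max(a2_opt, L), H)                 # projection to the borders (1.66)
    a1_new = a1 + y * (a2 - a2_new)                 # keep sum alpha_j y_j = 0 (1.63)
    return a1_new, a2_new

# two points x1 = (1, 0), y1 = +1 and x2 = (-1, 0), y2 = -1 with a linear kernel,
# starting from alpha = (0, 0), so f(x) = 0 for both points
a1, a2 = smo_pair_update(0.0, 0.0, 1, -1, 0.0, 0.0, C=1.0, k11=1.0, k12=-1.0, k22=1.0)
```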
1.2.3 Special Offset Treatment
This section discusses simplifications of the SMO approach which only require the
choice of a single index for an update and not a heuristic for choosing a pair of dual
variables for an update. The approach also operates on the dual optimization problem. The simplifications are an important preparative step for the single iteration
approach in Section 1.2.4. Furthermore, the same approach will be applied to other
classifiers in the following sections.
When working with kernels there are simplifications where the offset b in the decision function is omitted [Steinwart et al., 2009], as also mentioned in Section 1.2.1, or it is integrated in the data space using homogeneous coordinates [Mangasarian and Musicant, 1998, Hsieh et al., 2008]. The approach is advantageous in case of linear separation functions as implemented in the LIBLINEAR
library [Fan et al., 2008]. In this case, the solution algorithm iterates over single
samples and updates the classification function parameters w and b of the decision
function sgn(hw, xi + b) to the optimal values in relation to this single sample. We
mainly follow the dual gradient descent approach from [Hsieh et al., 2008] in this
section. The resulting formulas are the same as by the successive overrelaxation approach in [Mangasarian and Musicant, 1998] or the one-dimensional update step in
[Steinwart et al., 2009, Cristianini and Shawe-Taylor, 2000].17

17 This equivalence has not yet been reported.

The reason for the simplification of the offset treatment is to get rid of the equation ∑_j α_j y_j = 0 in the dual optimization, which resulted from the differentiation of
the Lagrange function with respect to the offset b (Equation (1.11)). Without this
equation in the dual optimization problem, a similar approach as presented in Section 1.2.2 could be used but only one dual variable has to be chosen for one update
step. If the offset is omitted (b ≡ 0), the dual becomes

min_{C_j≥α_j≥0} ½ ∑_{i,j} α_i α_j y_i y_j k(x_i, x_j) − ∑_j α_j  for the L1 loss and   (1.68)

min_{α_j≥0} ½ ∑_{i,j} α_i α_j y_i y_j k(x_i, x_j) − ∑_j α_j + ¼ ∑_j α_j²/C_j  for the L2 loss.   (1.69)
To regain the offset in the simplified primal model with (b ≡ 0), the regularization ½‖w‖₂² is replaced by ½‖w‖₂² + ½H²b² with an additional hyperparameter H > 0 which determines the influence of the offset on the target function. A calculation shows that this approach can be transformed to the previous one where the offset is omitted:

‖w‖₂² + H²b² = ‖(w, Hb)‖₂²,   f(x) = ⟨w, x⟩ + b = ⟨(w, Hb), (x, 1/H)⟩.   (1.70)

Only w is replaced by (w, Hb) and x by (x, 1/H). The formula for the decision function and the optimal w remain the same as in SMO. So at the end, the kernel function k(x_i, x_j) has to be replaced by k(x_i, x_j) + 1/H² in the aforementioned dual problem and b can be obtained via

b = (1/H²) ∑_j y_j α_j,  which is a result of (w, Hb) = ∑_j y_j α_j (x_j, 1/H).   (1.71)
In short, from the model perspective solving the dual when the offset b is part of the
regularization is equivalent to omitting it. In the following, we focus on the latter.
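The substitution in (1.70) is easy to verify numerically (a small sketch; the concrete numbers are arbitrary):

```python
import numpy as np

H = 2.0
w, b = np.array([1.0, -0.5]), 0.75
x = np.array([3.0, 1.0])

w_aug = np.append(w, H * b)      # augmented weight vector (w, Hb)
x_aug = np.append(x, 1.0 / H)    # augmented sample (x, 1/H)

f_plain = np.dot(w, x) + b       # <w, x> + b
f_aug = np.dot(w_aug, x_aug)     # <(w, Hb), (x, 1/H)>
reg_plain = np.dot(w, w) + H**2 * b**2   # ||w||^2 + H^2 b^2
reg_aug = np.dot(w_aug, w_aug)           # ||(w, Hb)||^2
```

Both the decision value and the regularization term agree, so solving the offset-free problem on augmented data is equivalent to regularizing b.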
The optimization problem can be reduced to updates which refer only to one sample, in contrast to SMO, which requires two samples but due to the additional equation was also reduced to a single variable problem. Let f_q be the target function of the dual optimization problem (q ∈ {1, 2}) and let e_j be the j-th unit vector. For updating α_j^old, we first determine the quadratic function g_q(d) = f_q(α + de_j):

g_1(d) = (d²/2) k(x_j, x_j) + d(−1 + ∑_i y_i y_j α_i^old k(x_i, x_j)) + const.  and   (1.72)

g_2(d) = (d²/2)(k(x_j, x_j) + 1/(2C_j)) + d(−1 + α_j^old/(2C_j) + ∑_i y_i y_j α_i^old k(x_i, x_j)) + const.   (1.73)

In a second step, the optimal d is determined analytically, d^opt = −g_q′(0)/g_q″(0), and as in SMO the unconstrained optimum α_j^old + d^opt is projected to its feasible interval.
This finally results in the update formula:

α_j^new = max{0, min{α_j^old − (1/k(x_j, x_j))(−1 + ∑_i α_i^old y_i y_j k(x_i, x_j)), C_j}}   (1.74)

in the L1 case, and for the L2 case it is:

α_j^new = max{0, α_j^old − (1/(k(x_j, x_j) + 1/(2C_j)))(−1 + α_j^old/(2C_j) + ∑_i α_i^old y_i y_j k(x_i, x_j))}.   (1.75)
In some versions of this approach, there is an additional factor γ on the descent step part18 but as the formula shows, choosing a factor of one is the optimal choice. This approach is also similar to stochastic gradient descent (SGD) [Kivinen et al., 2004] but here the regularization term of the SVM model is considered additionally to the loss term.
For choosing the sample of interest in each update, different strategies are possible. For example, different heuristics could be used again, as compared in [Steinwart et al., 2009].19 A simpler approach is to sort the weights, randomize them, or leave them unchanged and then have two loops. The first, outer loop iterates over all samples and updates them until a convergence criterion is reached, like a maximum number of iterations or a too small change of the weights. After each iteration over all samples the inner loop is started. The inner loop is the same as the outer loop but iterates only over a subset of samples with positive dual weight. In case of L1 loss, the subset is sometimes restricted to weights α_j with 0 < α_j < C_j. Similar approaches are used for choosing the first sample in the SMO approach (Section 1.2.2) but due to the simplification presented in this section, the more complex heuristic for the second sample in the SMO algorithm is not needed anymore. The storage requirements of both approaches are the same and the update formula can again be simplified in the linear case by replacing
y_j⟨w^old, x_j⟩ = ∑_i α_i^old y_i y_j k(x_i, x_j)   (1.76)

and also updating w in every step with

w^new = w^old + (α_j^new − α_j^old) y_j x_j.   (1.77)

18 For example, the term 1/k(x_j, x_j) is replaced by γ/k(x_j, x_j) in Equation (1.74).
19 Inspired by the heuristics for SMO, Steinwart mainly compares strategies for selecting pairs of samples.
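Combining the L1 update (1.74) with the linear-case simplifications (1.76) and (1.77) gives the following sketch of the full solver loop (a plain two-loop variant without the inner subset refinement; names and data are mine):

```python
import numpy as np

def dcd_linear_l1_svm(X, y, C=1.0, n_iter=20):
    """Dual coordinate descent for the linear L1-SVM without offset."""
    n, m = X.shape
    alpha = np.zeros(n)
    w = np.zeros(m)
    for _ in range(n_iter):
        for j in range(n):
            grad = y[j] * np.dot(w, X[j]) - 1.0          # -1 + y_j <w, x_j>, via (1.76)
            a_new = min(max(alpha[j] - grad / np.dot(X[j], X[j]), 0.0), C)
            w += (a_new - alpha[j]) * y[j] * X[j]        # incremental w update (1.77)
            alpha[j] = a_new
    return w

X = np.array([[2.0, 1.0], [2.0, -1.0], [-2.0, 1.0], [-2.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = dcd_linear_l1_svm(X, y)
pred = np.sign(X @ w)
```

Each coordinate update is exactly (1.74) with k(x_i, x_j) = ⟨x_i, x_j⟩; keeping w in memory avoids recomputing the sum over all dual variables.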
1.2.4 Single Iteration: From Batch SVM to Online PAA
In this section, we introduce a possibility to derive online learning algorithms from
SVM variants.
Definition 4 (Single Iteration Approach). The single iteration approach creates a
variant of a classification algorithm with linear kernel by first deriving an optimization algorithm, which iterates over single samples to optimize the target function as
in Section 1.2.3, and by second performing the update step only once. This directly
results in an online learning algorithm.
Consequently, we first plug Equation (1.76) into the update formula from Equation (1.74) or (1.75), respectively, and replace the kernel product k(x_j, x_j) by ‖x_j‖₂², which results in:

α_j^new = max{0, min{α_j^old − (1/‖x_j‖₂²)(−1 + y_j⟨w^old, x_j⟩), C_j}}   (1.78)

or

α_j^new = max{0, α_j^old − (1/(‖x_j‖₂² + 1/(2C_j)))(−1 + α_j^old/(2C_j) + y_j⟨w^old, x_j⟩)}.   (1.79)
Since the update step is performed only once, the $\alpha$ weights are always initialized with zero and do not have to be kept in memory; only w has to be updated when a new sample $x^{new}$ with label $y^{new}$ and loss punishment parameter C comes in:
$$\delta = \max\left\{0, \min\left\{-\frac{1}{\|x^{new}\|_2^2}\left(-1 + y^{new}\left\langle w^{old}, x^{new}\right\rangle\right), C\right\}\right\} \qquad (1.80)$$

or

$$\delta = \max\left\{0, -\frac{1}{\|x^{new}\|_2^2 + \frac{1}{2C}}\left(-1 + y^{new}\left\langle w^{old}, x^{new}\right\rangle\right)\right\} \qquad (1.81)$$

and

$$w^{new} = w^{old} + \delta y^{new} x^{new}. \qquad (1.82)$$
Theorem 9 (Equivalence between passive-aggressive algorithm and online classical support vector machine). The PAA can be derived from the respective SVM with the single iteration approach.

Proof. This is a direct consequence, because the derived formulas are the same as for PA-I and PA-II (defined in Section 1.1.5). The equivalence for PA is obtained by setting $C := \infty$ and $\frac{1}{2C} := 0$.
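As a small sketch, the online update of Equations (1.80) and (1.82) fits in a few lines. This is a PA-I style variant; the function name and interface are my own choices, not part of the thesis implementation:

```python
import numpy as np

def single_iteration_update(w, x_new, y_new, C):
    """One PA-I style online step, cf. Eqs. (1.80) and (1.82)."""
    margin = y_new * np.dot(w, x_new)
    # delta from Eq. (1.80): clipped step along the new sample
    delta = max(0.0, min((1.0 - margin) / np.dot(x_new, x_new), C))
    return w + delta * y_new * x_new  # Eq. (1.82)
```

A correctly classified sample with margin at least 1 leaves w unchanged, which is the passive part of the passive-aggressive behavior.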
Note that the single iteration approach can also be applied to related classifiers to derive online versions (see also Sections 1.3 and 1.4). Another advantage is that it is now even possible to have a variant which combines batch and online learning. First, the classifier is trained on a larger dataset with batch learning (offline). In
the second step, only the classification function parameter w is stored, and all other modeling parameters (but not the hyperparameters) can be removed from memory to save resources. It is even possible to transfer the classifier in this step to a mobile device with limited resources [Wöhrle et al., 2013b, Wöhrle et al., 2014] and use this device in the online application. Finally, the connected online learning algorithm can be used in the application, where for every new incoming sample the online update formula is applied to update w.
This approach could also be applied to other combinations of batch and online learning classifiers, but it can lead to unexpected behavior due to different properties of the classifiers (e.g., the online classifier has a different type of regularization or a different underlying loss).

In contrast to SGD [Kivinen et al., 2004], the update formula should not be applied repeatedly to the same data samples, because every sample is always treated like new data. The old weights $\alpha$ cannot be considered, because they have not been stored. As a consequence, repeated iteration might assign a weight of 2C to a sample even though C should be the maximum from the modeling perspective.
When using the single iteration approach, it is also important to keep in mind that with the updates, the influence of a sample on the classification vector w is permanent and that there is no decremental step to directly remove the sample. A possibility for compensation would be to introduce a forgetting factor $\gamma < 1$ in the update:

$$w^{new} = \gamma w^{old} + \delta y^{new} x^{new}. \qquad (1.83)$$

This has also been suggested in [Leite and Neto, 2008] to avoid a growth of $\|w\|_2$, which occurred because a fixed margin approach was used in an online learning algorithm instead of approximating the optimal margin as in our approach.
1.2.5 Practice: Normalization and Threshold Optimization

When dealing with SVM variants, it is always important to normalize the features of the input data. The classifier relies on the relation between the features, and without normalization one feature can easily dominate the others if it provides too large absolute values.
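A simple min-max normalization using training-set statistics only might look like this. This is a sketch; the function name and the default target interval are my own choices:

```python
import numpy as np

def minmax_scale(X_train, X_test, low=-1.0, high=1.0):
    """Scale each feature to [low, high] using training-set statistics only."""
    mn, mx = X_train.min(axis=0), X_train.max(axis=0)
    span = np.where(mx > mn, mx - mn, 1.0)  # guard against constant features
    scale = lambda X: low + (high - low) * (X - mn) / span
    return scale(X_train), scale(X_test)
```

Fitting the scaling on the training data and reusing it on the test data avoids leaking test-set statistics into the model.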
The presented special treatment of the offset assumes that a small offset or even no offset is a reasonable choice, which is for example the case when using the RBF kernel, which is invariant under any translation of the data. Consequently, the approach of using no offset has shown comparable performance to the SMO approach [Steinwart et al., 2009]. When normalizing the data, the offset treatment should be considered. If the features are normalized to be in the interval [0, 1], a negative offset is more expected than with a normalization to the interval [−1, 1]. With increasing
dimension of the data (n) in the linear case, the influence of the offset becomes less relevant when calculating $\|(w, b)\|_2$, because w has the main influence. If there are only a few dimensions (less than 10), the offset treatment might cause problems when a linear kernel is used and a nonzero offset is required for the optimal separation of positive and negative samples. This can be partially compensated by using a small hyperparameter H (e.g., $10^{-2}$). But if it is too small, there is the danger of rounding errors when its squared inverse is added to the scalar product of samples, as mentioned in Section 1.2.3.
If the usage or evaluation of the classifier does not rely on the classification score but on the decision function, it is often good to tune the decision threshold. This can also compensate for a poorly chosen offset. Furthermore, depending on the metric, a different threshold will be the optimal choice. There are several algorithms for changing the threshold and also modifying the classification score [Platt, 1999b, Grandvalet et al., 2006, Metzen and Kirchner, 2011, Lipton et al., 2014, Kull and Flach, 2014].
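A minimal threshold search on held-out classification scores could look as follows. This sketch optimizes plain accuracy; any other metric can be plugged in, and the exhaustive search over observed scores is my own simplification:

```python
import numpy as np

def tune_threshold(scores, labels):
    """Return the decision threshold maximizing accuracy on held-out scores.

    Predictions are taken as sign(score - threshold); labels are in {-1, 1}.
    """
    best_t, best_acc = 0.0, -1.0
    for t in np.unique(scores):  # every observed score is a candidate cut
        acc = np.mean(np.where(scores >= t, 1, -1) == labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t
```

The candidate set can be restricted to scores from a validation split so that the threshold does not overfit the training data.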
To summarize, we presented the single iteration approach to derive online learning from batch learning algorithms like the PAA from the C-SVM. The benefit in
memory and processing efficiency comes at the cost of accuracy and additional effort
in normalizing the data appropriately and optimizing the decision threshold.
1.3 Relative Margin: From C-SVM to RFDA via SVR
This section is based on:
Krell, M. M., Feess, D., and Straube, S. (2014a). Balanced Relative Margin Machine –
The missing piece between FDA and SVM classification. Pattern Recognition Letters,
41:43–52, doi:10.1016/j.patrec.2013.09.018.
All theoretic discoveries of this publication were my own work. My coauthors helped
me very much by discussing my approaches and repeatedly reviewing my texts to
improve the comprehensibility. Hence, they also provided a few text parts. David
Feess additionally contributed the synthetic data and the respective visualizations.
In this section, we approach the class of relative margin classification algorithms from the mathematical programming perspective. We will describe and analyze our suggestions to extend the relative margin machine (RMM) concept [Shivaswamy and Jebara, 2010] introduced in Section 1.1.4. This will result in new methods, which are highly connected to other well known classification algorithms, as depicted in Figure 1.6.
[Figure 1.6: Overview of balanced relative margin machine (BRMM) method connections (SVR, BRMM, SVM, RFDA, FDA, LS-SVM, L2-SVM, L2-BRMM, and Lap-RFDA, linked by loss choices, parameter mappings, and the limits R → 1, R → ∞, and C → ∞). The details can be found in Section 1.3. Visualization taken from [Krell et al., 2014a].]

The main idea is that outliers at the new outer margin are treated in the same way as in the inner margin. Due to this balanced handling of outliers by the proposed method, it is called balanced relative margin machine (BRMM).
After further motivating the relative margin (Section 1.3.1) and introducing the balanced relative margin machine (BRMM) (Section 1.3.2), we show that this model is equivalent to SVR (with the dependent variables Y = {−1, 1}) and connects C-SVM and RFDA. Though these methods are very different, they have a common rationale, and it is good to know how they are connected. Our proposed connection shows that there is a rather smooth transition between C-SVM and RFDA, even though both methods are motivated completely differently. The original FDA is motivated from statistics (see Section 1.1.3), while the C-SVM is defined via a geometric concept (see Section 1.1). Using BRMM, it is now possible to optimize the classifier type instead of choosing it beforehand. So, our suggested BRMM interconnects the other two methods and in that sense generalizes both of them at the same time.
Due to this relation, the way of introducing kernels, squared loss, or sparse variants is the same for this classifier as for the C-SVM in Section 1.1.1.2. Additionally, we developed a new geometric characterization of sparsity in the number of used features for the BRMM when used with 1–norm regularization (Section 1.3.3.4). This finding can be transferred to RFDA and C-SVM. On the other hand, the implementation techniques from Section 1.2 can be directly transferred from C-SVM to BRMM. We finally verify our findings empirically in this section by means of simulated and benchmark data. The goal of these evaluations is not to show the superiority of the method; this has already been largely done in [Shivaswamy and Jebara, 2010].
The sole purpose is to show the properties of the BRMM with special focus on the
transition from C-SVM to RFDA.
1.3.1 Motivation of the Relative Margin
There are also other motivations for using a relative margin besides the purpose of connecting classifiers.
1.3.1.1 Time Shifts in Data
The following example shows how the RMM might be advantageous in comparison to the C-SVM when there are drifts in particular directions in feature space. Data drifts in applications are quite common, e.g., drifts in sensor data due to noise or spatial shifts [Quionero-Candela et al., 2009]. One example of such data are EEG data, which are highly non-stationary and often influenced by high noise levels. Another example could be a changing distribution of data from a robot due to wear.

Let us assume that drifts occur mostly in directions of large spread and that the relevant information has a lower spread. In fact, drifts during the training phase increase the effective spread in the training samples themselves. Consider therefore two Gaussians in $\mathbb{R}^2$ with means (0, −0.5) and (t, 0.5), where t changes in time. Hence, the second distribution drifts along the x axis in some way. Suppose both distributions have the same variances of $\sigma_x^2 = 1$ in x direction and $\sigma_y^2 = 0.1$ in y direction.
Figure 1.7 depicts an associated classification scenario where t changes from 8 to 6
during the training data acquisition and from 4 to 2 during the test phase. It can be
observed how the limitation of the outer spread of the data turns the classification
plane in a direction nearly parallel to the main spread of the samples. The number of
misclassifications under an ongoing drift is thus considerably smaller for RMM than
for C-SVM.
We will come back to this dataset and perform an evaluation of classifiers with it
in Section 1.3.4.3.
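The drift scenario can be reproduced with a few lines. This is a sketch; the sample counts and the random seed are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(42)

def class_b(n, t_start, t_end):
    """Class B: x-mean drifts linearly from t_start to t_end, y-mean is 0.5."""
    t = np.linspace(t_start, t_end, n)
    return np.column_stack([rng.normal(t, 1.0),                 # sigma_x^2 = 1
                            rng.normal(0.5, np.sqrt(0.1), n)])  # sigma_y^2 = 0.1

def class_a(n):
    """Class A: fixed mean (0, -0.5), same variances as class B."""
    return np.column_stack([rng.normal(0.0, 1.0, n),
                            rng.normal(-0.5, np.sqrt(0.1), n)])

train = np.vstack([class_a(100), class_b(100, 8, 6)])  # drift 8 -> 6 (training)
test = np.vstack([class_a(100), class_b(100, 4, 2)])   # drift 4 -> 2 (test)
```

Training any classifier on `train` and evaluating on `test` exposes it to the ongoing drift of class B described above.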
1.3.1.2 Affine Transformation Perspective
To give another motivation for maximum relative margins, an entirely reformulated classification problem is considered in [Shivaswamy and Jebara, 2010]. Instead of learning an optimal classifier, it was argued that it is possible to learn an optimal affine transformation of the data such that a given classifier (w and b fixed) performs well and such that the transformation produces a small scatter on the data. The authors proved that such optimal transformations can be chosen to
have rank one, yielding an optimization problem equivalent to a linear classification with large margin and small spread of the output at the same time. We showed that the fixation of the classifier can even be omitted and the results remain the same: choosing a suitably restricted transformation is similar to using RMM. Further details are provided in Appendix B.3.2.

[Figure 1.7: Classification problem with drift in one component of class B. The samples of class B are drawn from distributions with the mean of the x component drifting from 8 to 6 during training and from 4 to 2 during test. The solid lines show the decision planes, the dashed lines nearby show the ±1 margins. For the RMM, the outer lines define the outer margin that limits the spread of distances to the decision plane to 2 in this case. Visualization taken from [Krell et al., 2014a].]
1.3.2 Deriving the Balanced Relative Margin Machine
A major shortcoming of the basic RMM method is the handling of outliers at the outer margins. Such samples can in principle dominate the orientation of any separating plane, as no classification results outside the range of ±R are allowed. When working with very noisy data that might contain artifacts, such outliers are very common. Two modified versions were introduced by [Shivaswamy and Jebara, 2010] to handle this shortcoming.
Method 12 (Equation (13) from [Shivaswamy and Jebara, 2010]).

$$\begin{aligned}
\min_{w,b,t}\quad & \tfrac{1}{2}\|w\|_2^2 + C\sum t_j + Dr \\
\text{s.t.}\quad & r \ge y_j(\langle w, x_j\rangle + b) \ge -r && \forall j: 1 \le j \le n \\
& y_j(\langle w, x_j\rangle + b) \ge 1 - t_j && \forall j: 1 \le j \le n \\
& t_j \ge 0 && \forall j: 1 \le j \le n. \qquad (1.84)
\end{aligned}$$
Method 13 (Equation (14) from [Shivaswamy and Jebara, 2010]).

$$\begin{aligned}
\min_{w,b,s,s',t,r}\quad & \tfrac{1}{2}\|w\|_2^2 + C\sum t_j + D\left(r + \tfrac{\nu}{n}\sum(s_j + s'_j)\right) \\
\text{s.t.}\quad & r + s_j \ge y_j(\langle w, x_j\rangle + b) \ge -r - s'_j && \forall j: 1 \le j \le n \\
& y_j(\langle w, x_j\rangle + b) \ge 1 - t_j && \forall j: 1 \le j \le n \\
& t_j \ge 0 && \forall j: 1 \le j \le n. \qquad (1.85)
\end{aligned}$$
These methods, however, require additional variables and hyperparameters and are rather unintuitive. In the following, we propose a new variant which is effectively similar to Shivaswamy and Jebara's variant, but at the same time considerably less complex because of fewer parameters. This makes it comparable to other classification methods and thus easier to understand. Consider at first the reformulation of Method 9:
$$\begin{aligned}
\min_{w,b,t}\quad & \tfrac{1}{2}\|w\|_2^2 + C\sum t_j \\
\text{s.t.}\quad & R \ge y_j(\langle w, x_j\rangle + b) \ge -R && \forall j: 1 \le j \le n \\
& y_j(\langle w, x_j\rangle + b) \ge 1 - t_j && \forall j: 1 \le j \le n \\
& t_j \ge 0 && \forall j: 1 \le j \le n. \qquad (1.86)
\end{aligned}$$
If the lowest border −R is reached, $t_j$ becomes 1 + R. As $t_j$ is subject to the minimization, this lowest border should normally not be reached; such a high error is quite uncommon. Therefore we drop it. If, without this border, a $t_j$ became larger than 1 + R, it either has to be considered an outlier from the modeling perspective (and deleted from the data) or R has been chosen too low. Both cases should not be part of the method.
After this consideration, we can introduce an outer soft margin without new variables or restrictions:

Method 14 (L1–Balanced Relative Margin Machine (BRMM)).

$$\begin{aligned}
\min_{w,b,t}\quad & \tfrac{1}{2}\|w\|_2^2 + C\sum t_j \\
\text{s.t.}\quad & R + t_j \ge y_j(\langle w, x_j\rangle + b) \ge 1 - t_j && \forall j: 1 \le j \le n \\
& t_j \ge 0 && \forall j: 1 \le j \le n. \qquad (1.87)
\end{aligned}$$
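Eliminating the slack variables, the constraints of Method 14 give $t_j = \max\{0,\, 1 - y_j f(x_j),\, y_j f(x_j) - R\}$, so the objective can be evaluated directly. The following sketch is only for checking candidate solutions, not a solver:

```python
import numpy as np

def brmm_objective(w, b, X, y, C, R):
    """L1-BRMM objective of Method 14 with the slack variables eliminated."""
    scores = y * (X @ w + b)  # y_j (<w, x_j> + b)
    # slack: hinge loss at the inner margin plus hinge loss at the outer margin
    t = np.maximum(0.0, np.maximum(1.0 - scores, scores - R))
    return 0.5 * np.dot(w, w) + C * t.sum()
```

For $R \to \infty$ the outer term vanishes and the expression reduces to the usual C-SVM objective, matching the generalization discussed in Section 1.3.3.1.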
Notice that this method has one restriction fewer and no additional method variables or hyperparameters.20 At the same time, it provides the same capabilities as the original method and a consistent handling of outliers. The simplicity of the method yields a high comparability to other large margin classifiers and makes it easier to implement. The name balanced follows from the idea to treat outliers in the outer margin in the same way as outliers in the inner margin. From our perspective, this approximation is reasonable. Depending on the application, however, this might not be appropriate. If there are reasons for different inner and outer loss, e.g., more expected outliers in the outer margin or if they are less important, the method can be adapted as follows with an additional hyperparameter but without more method variables or constraints:
$$\begin{aligned}
\min_{w,b,t}\quad & \tfrac{1}{2}\|w\|_2^2 + \sum t_j \\
\text{s.t.}\quad & C\left(y_j(\langle w, x_j\rangle + b) - 1\right) \ge -t_j && \forall j: 1 \le j \le n \\
& C'\left(y_j(\langle w, x_j\rangle + b) - R\right) \le t_j && \forall j: 1 \le j \le n \\
& t_j \ge 0 && \forall j: 1 \le j \le n. \qquad (1.88)
\end{aligned}$$
The proposed balanced version can be seen as a reasonable first approach. It is also possible to use squared loss (L2–BRMM) or a hard margin for the inner and the outer margin. It might be useful to use different ranges for the two classes if there are different intrinsic spreads. The only modification to the BRMM method is to replace the range by a class-specific hyperparameter $R(y_j)$. Furthermore, it is possible to use the range as a variable with a new weight as hyperparameter in the target function. Both changes lead to additional hyperparameters, which complicates the hyperparameter optimization and makes the method less intuitive and less comparable to other methods.
With Method 13, Shivaswamy and Jebara introduced a variant of the ν-SVM to provide a lower limit on the support vectors for the "outer margin", but it is much more reasonable to use the ν for the total number of support vectors. This can be achieved by exploiting the relation between the proposed BRMM and SVR (see Section 1.3.3.3).
1.3.3 Classifier Connections with the BRMM
1.3.3.1 Connection between BRMM and C-SVM
The difference between C-SVM and BRMM (Methods 3 and 14) is the restriction
on the classification by the range. For large values of R, however, this constraint
becomes inactive. Hence, one can always find an Rmax such that BRMM and C-SVM
20 When applying duality theory, it is more convenient to use different variables for the outer and inner margin. This change has no effect on the optimal w and b.
become identical for all $R \ge R_{max}$. One approach to find this upper bound on the useful ranges from a set of training examples is to train a C-SVM on the training set. $R_{max}$ is then the highest occurring absolute value of the classification function applied to samples in the training set; every R above $R_{max}$ has no influence whatsoever. So for hyperparameter optimization, only the interval $[1, R_{max}]$ has to be considered.
Theorem 10 (BRMM generalizes C-SVM). A BRMM with $R \ge R_{max}$ is equivalent to the C-SVM.

As a direct consequence, values of R always exist for which the BRMM, by definition, performs at least as well as the C-SVM. Depending on the available amount of training data, a good choice of R might nevertheless be troublesome. The same connection to the C-SVM has already been shown for the RMM (Method 9), but not connections to the RFDA and SVR, because they do not exist. Therefore, the BRMM is necessary, as discussed in the following sections.
1.3.3.2 Connection between BRMM and RFDA
The RFDA model (Method 8) has been introduced in Section 1.1.3. Let us focus on the regularization functions $\frac{1}{2}\|w\|^2$ and $\|w\|_1$, since we have the same regularization in the BRMM and SVM approaches. Nevertheless, other regularization functions can be considered without loss of generality.
Consider now the BRMM (Method 14) with hyperparameter R = 1, the smallest range allowed. In this case, the inequalities of the method can be fused:

$$\begin{aligned}
& R + t_j \ge y_j(\langle w, x_j\rangle + b) \ge 1 - t_j \\
\overset{R=1}{\Leftrightarrow}\quad & 1 + t_j \ge y_j(\langle w, x_j\rangle + b) \ge 1 - t_j \\
\overset{-1}{\Leftrightarrow}\quad & t_j \ge y_j(\langle w, x_j\rangle + b) - 1 \ge -t_j \\
\Leftrightarrow\quad & t_j \ge \left|y_j(\langle w, x_j\rangle + b) - 1\right| \\
\overset{|y_j|=1}{\Leftrightarrow}\quad & t_j \ge \left|(\langle w, x_j\rangle + b) - y_j\right|. \qquad (1.89)
\end{aligned}$$

As $t_j$ is subject to minimization, we can assume that equality holds in the last inequality:

$$t_j = \left|(\langle w, x_j\rangle + b) - y_j\right|. \qquad (1.90)$$
Hence, the resulting method is the same as the RFDA, except for the quadratic term $\sum t_j^2$ in the loss function of the soft margin. This difference, however, is equivalent to different noise models: linear loss functions in a RFDA correspond to a Laplacian noise model instead of a Gaussian one [Mika et al., 2001]. Conversely, a L2–BRMM can be derived from the L2–SVM.
Theorem 11 (BRMM generalizes RFDA and LS-SVM). A BRMM with R = 1 is equivalent to the RFDA with Laplacian noise model (Laplacian loss, see also Table 1.1). A BRMM with R = 1 and squared loss is equivalent to the RFDA with Gaussian noise model (Gaussian loss, see also Table 1.1). Consequently, it is also equivalent to the LS-SVM.
In summary, both C-SVM and RFDA variants can be considered special cases of the BRMM variants, or, from a different perspective, BRMM methods interconnect the more well-established C-SVM and RFDA, as depicted in Figure 1.6.

In [Shivaswamy and Jebara, 2010], there was a broad benchmarking of classifiers to show that in many cases the RMM performs better. The comparison also included the C-SVM and the RFDA with kernel, called regularized kernel linear discriminant analysis in that paper. With this relation, it now becomes clear why the RMM always showed comparable or better performance.
1.3.3.3 Connection between BRMM, ǫ-insensitive loss RFDA, and SVR
As already mentioned at the end of Section 1.1.3, depending on certain assumptions on the distribution of the data, one may want to replace the loss term $\sum t_j^2 = \|t\|_2^2$ of the RFDA with a different one. We already had a look at the case of assuming Laplacian noise, which results in the loss term $\|t\|_1$. We will now consider a RFDA with ǫ-insensitive loss function [Mika et al., 2001]

$$\begin{aligned}
\min_{w,b,t}\quad & \tfrac{1}{2}\|w\|_2^2 + C\|t\|_\epsilon \\
\text{s.t.}\quad & y_j(\langle w, x_j\rangle + b) = 1 - t_j && \forall j: 1 \le j \le n \qquad (1.91)
\end{aligned}$$
to compare it with the BRMM. Here, $\|.\|_\epsilon$ means no penalty for components smaller than a predefined $\epsilon \in (0, 1)$ and a $\|.\|_1$ penalty for everything outside this region: $\|t\|_\epsilon = \sum \max\{|t_j| - \epsilon, 0\}$. This loss term is well known from support vector regression (SVR). In fact, applying SVR to data with binary labels {−1, 1} exactly results in the ǫ-insensitive RFDA. We argue that this version of RFDA or SVR is effectively equivalent to the BRMM. This also shows that not the C-SVM but rather the BRMM is the binary version of the SVR.
Theorem 12 (Equivalence between RFDA, SVR, and BRMM). RFDA with ǫ-insensitive loss function and 2–norm regularization (or SVR reduced to the values 1 and −1) and BRMM result in an identical classification with a corresponding function, mapping RFDA (SVR) hyperparameters (C, ǫ) to BRMM hyperparameters (C′, R′) and vice versa.
Proof. By use of the mappings

$$(C', R') = \left(\frac{C}{1-\epsilon}, \frac{1+\epsilon}{1-\epsilon}\right) \quad\text{and}\quad (\epsilon, C) = \left(\frac{R'-1}{R'+1}, \frac{2C'}{R'+1}\right) \qquad (1.92)$$

the method definitions become equal. The mappings effectively only scale the optimization problems. The calculation is straightforward and can be found in Appendix B.2.2. So every ǫ-insensitive RFDA can be expressed as BRMM and vice versa.
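The mappings (1.92) are easy to implement and invert; the function names below are my own:

```python
def svr_to_brmm(C, eps):
    """Map RFDA/SVR hyperparameters (C, eps) to BRMM (C', R'), Eq. (1.92)."""
    return C / (1.0 - eps), (1.0 + eps) / (1.0 - eps)

def brmm_to_svr(C_prime, R_prime):
    """Inverse mapping: BRMM (C', R') to RFDA/SVR (eps, C), Eq. (1.92)."""
    return (R_prime - 1.0) / (R_prime + 1.0), 2.0 * C_prime / (R_prime + 1.0)
```

For example, $\epsilon = 0.5$ and $C = 1$ correspond to $C' = 2$ and $R' = 3$, and applying the inverse mapping recovers the original pair.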
A direct consequence of Theorem 12 and Theorem 10 is Theorem 7 ($C = (1-\epsilon)C'$). In fact, we can directly calculate the respective border for ǫ in Theorem 7:

$$a = \frac{R_{max} - 1}{R_{max} + 1} = 1 - \frac{2}{R_{max} + 1}. \qquad (1.93)$$
Another positive effect is that the ν-SVR (Method 6) can be used to define a ν-BRMM:

Method 15 (ν-Balanced Relative Margin Machine (ν-BRMM)).

$$\begin{aligned}
\min_{w,b,s,t,\epsilon}\quad & \tfrac{1}{2}\|w\|_2^2 + C\left(n\nu\epsilon + \sum s_j + \sum t_j\right) \\
\text{s.t.}\quad & \epsilon + s_j \ge y_j(\langle w, x_j\rangle + b) - 1 \ge -\epsilon - t_j && \forall j: 1 \le j \le n \\
& s_j, t_j \ge 0 && \forall j: 1 \le j \le n. \qquad (1.94)
\end{aligned}$$
The replacement of ǫ by R is not possible in this case, because ǫ is subject to minimization and not a hyperparameter anymore. When looking at the (rescaled) dual optimization problem (derived in Appendix B.3.4), it becomes immediately clear that ν is now a lower border on the total number of support vectors, in the same way as it was the case for the ν-SVM:

$$\begin{aligned}
\min_{\alpha,\beta}\quad & \tfrac{1}{2}\sum_i\sum_j(\alpha_i - \beta_i)(\alpha_j - \beta_j)\langle x_i, x_j\rangle y_i y_j - \frac{1}{Cn}\sum_j(\alpha_j - \beta_j) \\
\text{s.t.}\quad & \tfrac{1}{n} \ge \alpha_j \ge 0, && \forall j: 1 \le j \le n, \\
& \tfrac{1}{n} \ge \beta_j \ge 0, && \forall j: 1 \le j \le n, \\
& \sum_j \alpha_j y_j = \sum_j \beta_j y_j, \\
& \sum_j \alpha_j + \beta_j = \nu. \qquad (1.95)
\end{aligned}$$
1.3.3.4 Sparsity

So far, all considered methods shared the 2–norm in the regularization term. Particularly for the C-SVM, a 1–norm regularization has been proposed [Bradley and Mangasarian, 1998]. In comparison to its 2–norm counterpart, a C-SVM with 1–norm regularization is known to operate on a reduced set of features. It can thus be regarded as a classifier with an intrinsic feature selection mechanism. Omitting unimportant features can in turn render a classifier more robust. From a more practical point of view, fewer features might imply fewer sensors in the application and thus simplify the data acquisition. To achieve the same sparsity properties
for the BRMM, we propose to adapt the 1–norm approach to it. The resulting mathematical program can be cast into a linear one and thus be solved by the Simplex algorithm [Nocedal and Wright, 2006]. For the implementation, we used the GNU Linear Programming Kit [Makhorin, 2010] and directly inserted the raw model of the classifier. Therefore, the classification function parameters are split into positive and negative components ($w = w^+ - w^-$ and $b = b^+ - b^-$) and inequality constraints are eliminated by introducing additional slack variables $g_j$ and $h_j$.
Method 16 (1–norm Balanced Relative Margin Machine).

$$\begin{aligned}
\min_{w^\pm, b^\pm, t, g, h \ge 0}\quad & \sum_i (w_i^+ + w_i^-) + C\sum_j t_j \\
\text{s.t.}\quad & y_j\left(\langle w^+ - w^-, x_j\rangle + b^+ - b^-\right) = 1 - t_j + h_j && \forall j: 1 \le j \le n \\
& y_j\left(\langle w^+ - w^-, x_j\rangle + b^+ - b^-\right) = R + t_j - g_j && \forall j: 1 \le j \le n \qquad (1.96)
\end{aligned}$$
Interestingly, with this method description and the properties of the Simplex algorithm [Nocedal and Wright, 2006], we proved the following:

Theorem 13 (Feature Reduction of 1–norm BRMM). A solution of the 1–norm BRMM obtained with the Simplex algorithm always uses a number of features smaller than the number of support vectors lying on the four margins:

$$\{x \mid \langle w, x\rangle + b \in \{1, -1, R, -R\}\}. \qquad (1.97)$$
The formula explicitly excludes support vectors in the soft margin. This theorem is of special interest when the dimension of the data largely exceeds the number of given samples. In this case, it can be derived that the maximum number of used features is bounded by the number of training samples.
The property of the 1–norm C-SVM to work on a reduced set of features has so far only been shown empirically [Bradley and Mangasarian, 1998]. In fact, it is not possible to provide a general proof which is independent of the properties of the dataset. This can be illustrated with the help of a toy example: consider m orthogonal unit vectors in $\mathbb{R}^m$ with randomly distributed class labels. For the resulting parameters w and b of the 1–norm BRMM classification function we get

$$|w_i| = 1 \ \forall 1 \le i \le m \quad\text{and}\quad b = 0, \qquad (1.98)$$

with a sufficiently large C and arbitrary R. So each feature is used. Without further assumptions, better boundaries on the number of used features cannot be given.
The application of Theorem 13 to SVM and RFDA and a detailed proof are shown
in Appendix B.3.1.
1.3.3.5 Kernels
If the common 2–norm regularization is used, the introduction of kernels is exactly
the same as for C-SVM (refer to Section 1.1.1.2). The required dual optimization
problem is given in Section 1.3.4.1. Interestingly, the relation between linear and
RBF kernel is the same as for C-SVM in Theorem 5.
Theorem 14 (RBF kernel generalizes linear kernel for BRMM and SVR). The linear BRMM and SVR with the regularization parameter C′ are the limit of the respective BRMM and SVR with RBF kernel and hyperparameters $\sigma^2 \to \infty$ and $C = C'\sigma^2$. In both cases, the same range R or tolerance parameter ǫ is used.

Proof. The proof is the same as in [Keerthi and Lin, 2003] for Theorem 5, but mainly $\alpha - \beta$ is used instead of $\alpha$, together with the respective dual optimization problems (see Appendix B.1.5 and Section 1.1.1.4). Note that the proof relies heavily on the additional equation in the dual constraints and as such cannot be applied to the algorithm versions with the special offset treatment suggested in Section 1.2.3.
For the 1–norm approach, the restrictions are treated the same way as in the 2–norm case, but the target function has to be changed to preserve the sparsity effect. $\langle w, x\rangle$ is replaced by $\sum \alpha_j k(x_j, x)$, where $k(.,.)$ is the kernel function, and the 1–norm of w is replaced by the 1–norm of the weights $\alpha_i$. This results in a sparse solution, though sparse does not mean few features in this context but fewer kernel evaluations [Mangasarian and Kou, 2007]:
Method 17 (1–norm Kernelized BRMM).

$$\begin{aligned}
\min_{\alpha,b,t}\quad & \|\alpha\|_1 + C\|t\|_1 \\
\text{s.t.}\quad & R + t_j \ge y_j\left(b + \sum_{i=1}^m \alpha_i k(x_i, x_j)\right) \ge 1 - t_j && \forall j: 1 \le j \le n \\
& t_j \ge 0 && \forall j: 1 \le j \le n, \qquad (1.99)
\end{aligned}$$

where $k : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ is the kernel function.
For the special case of R = 1, Method 17 is equivalent to the “linear sparse kernelized Fisher’s discriminant” [Mika et al., 2001].
1.3.4 Practice: Implementation and Applications
In this section, we will discuss the choice of the hyperparameters of BRMM and some implementation issues, also using results from Section 1.2 to derive online versions, and finally show some properties of the related classifier variants in some applications.
The BRMM has two hyperparameters: the range R and the C-SVM regularization parameter C. Both hyperparameters are highly connected and need to be optimized. When reducing the range R from $R_{max}$ to 1 to transfer the classifier from the C-SVM over the BRMM to the RFDA, it can be observed that the number of support vectors increases, because in the extreme case every sample becomes a support vector. This slows down the convergence of the optimization problem solver. To speed up the optimization in this case, it is better to stick to special optimization algorithms tailored to the respective RFDA models or to choose R slightly larger than 1. Furthermore, it might be a good approach to start with a large R and decrease it stepwise, e.g., with a pattern search algorithm [Eitrich and Lang, 2006]. For too small a C, the number of support vectors also becomes very large and the solution algorithm is slow. Furthermore, the performance of the respective classifier is usually not that good. Hence, it is always good to start with a large C, e.g., 1.
Normally, cross-validation21 is used for hyperparameter optimization to save time. For an improved automatic optimization, it is efficient to start with high values and iteratively decrease them with a pattern search algorithm [Eitrich and Lang, 2006]. To save resources, this could be combined with the warm start principle to adapt the batch learning algorithms to the changed parameters [Steinwart et al., 2009]; here, the old solution is reused. With such a hyperparameter optimization, it is no longer necessary to choose between C-SVM and RFDA, because this is done automatically. Note that for the original RFDA a squared loss is required, whereas for the C-SVM the non-squared hinge loss is more common.
Using the Simplex algorithm [Nocedal and Wright, 2006] from the GNU Linear Programming Kit [Makhorin, 2010] for the sparse version of the BRMM is only possible if the problem matrix is not too large. It is possible to use other optimization algorithms, but these might not converge to the optimal solution or might not provide the sparsest solution. Here, some more research is needed to find a good optimization algorithm specifically tailored to the classifier model that can also handle large datasets. This seems to be not that easy, because even for the 1–norm regularized, hinge loss SVM there is no implementation in the established LIBLINEAR package [Fan et al., 2008], which implements all the other linear SVM methods (with the special offset treatment trick from Section 1.2.3) and several variants. Maybe it is possible to modify the Simplex algorithm and tailor it to the sparse BRMM, use a decomposition technique as suggested in [Torii and Abe, 2009], or apply one of the many suggested algorithms for "Optimization with Sparsity-Inducing Penalties" [Bach, 2011].
21 For a k-fold cross-validation, a dataset is divided into k equally sized sets (folds), and then iteratively (k times) one set is chosen as testing data while the remaining k − 1 folds are used for training the algorithm.
1.3.4.1 Implementation of BRMM with 2–norm regularization
A straightforward way to use a BRMM with 2–norm regularization without a new implementation is directly given by the constructive proof of Theorem 12. Using the formula

$$(\epsilon, C) = \left(\frac{R'-1}{R'+1}, \frac{2C'}{R'+1}\right), \qquad (1.100)$$

the SVR implementation of LIBSVM can be directly interfaced as a 2–norm BRMM algorithm. This implementation follows the SMO concept (Section 1.2.2). For implementing the BRMM, one can also follow the concepts from Sections 1.2.3 and 1.2.4, as in the following. This will finally result in an online classifier.
For implementing the algorithm directly for BRMM models with 2–norm regularization, the dual optimization problems are used. After reintroducing separate loss
variables for inner and outer loss and multiplication with −1, the dual problem of
Method 14 reads:
$$\begin{aligned}
\min_{\alpha, \beta}\quad & \tfrac{1}{2} (\alpha - \beta)^T Q (\alpha - \beta) - \sum_j \alpha_j + R \sum_j \beta_j \\
\text{s.t.}\quad & 0 \le \alpha_j \le C, \ 0 \le \beta_j \le C \qquad \forall j: 1 \le j \le n, \\
& \sum_j (\alpha_j - \beta_j) = 0, \\
\text{with}\quad & Q_{kl} = y_k y_l \langle x_k, x_l \rangle \qquad \forall k, l: 1 \le k \le n, \ 1 \le l \le n.
\end{aligned} \tag{1.101}$$
The respective dual optimization problem of L2–BRMM (squared loss) is:
$$\begin{aligned}
\min_{\alpha, \beta}\quad & \tfrac{1}{2} (\alpha - \beta)^T Q (\alpha - \beta) - \sum_j \alpha_j + R \sum_j \beta_j + \frac{1}{4} \sum_j \frac{\alpha_j^2}{C} + \frac{1}{4} \sum_j \frac{\beta_j^2}{C} \\
\text{s.t.}\quad & 0 \le \alpha_j, \ 0 \le \beta_j \qquad \forall j: 1 \le j \le n, \\
& \sum_j (\alpha_j - \beta_j) = 0, \\
\text{with}\quad & Q_{kl} = y_k y_l \langle x_k, x_l \rangle \qquad \forall k, l: 1 \le k \le n, \ 1 \le l \le n.
\end{aligned} \tag{1.102}$$
Class dependent ranges (Rj ) and cost parameters (Cj ) or different regularization constants for inner and outer margin (C, C ′ ) can be applied to this formulation correspondingly. For using kernels, only the scalar product in Q has to be replaced with
the kernel function.
As the calculation is similar to the C-SVM calculation, a similar solution approach can be used, e.g., sequential minimal optimization [Platt, 1999a,
Shivaswamy and Jebara, 2010]. To follow the concept from Section 1.2.3, a classifier without the offset can be generated by dropping the equation $\sum_j (\alpha_j - \beta_j) = 0$. For having an offset in the target function, in addition to skipping this equation, $y_k y_l$ has to be added to $Q_{kl}$.
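The matrix Q, including the offset trick of adding $y_k y_l$, can be sketched as follows (a minimal pure-Python illustration; the function name is mine):

```python
def gram_matrix(X, y, offset_in_target=False):
    # Q_kl = y_k y_l <x_k, x_l>; adding y_k y_l folds the offset b
    # into the target function (the trick from Section 1.2.3)
    n = len(X)
    Q = [[0.0] * n for _ in range(n)]
    for k in range(n):
        for l in range(n):
            dot = sum(a * b for a, b in zip(X[k], X[l]))
            Q[k][l] = y[k] * y[l] * (dot + (1.0 if offset_in_target else 0.0))
    return Q
```

For kernels, the scalar product `dot` would simply be replaced by the kernel evaluation.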
The following algorithm now uses update formulas for αj and βj , though after
each update at least one of them will be zero. Following the similar calculations in
1.3. Relative Margin: From C-SVM to RFDA via SVR
Section 1.2.3 the update formulas are:
$$\alpha_j^{i+1} = P_j\left( \alpha_j^i - \frac{1}{Q_{jj}} \left( Q_{j\cdot} \cdot (\alpha^i - \beta^i) - 1 \right) \right), \qquad P_j(x) = \max\{0, \min\{x, C_j\}\},$$
$$\beta_j^{i+1} = P'_j\left( \beta_j^i - \frac{1}{Q_{jj}} \left( R_j - Q_{j\cdot} \cdot (\alpha^i - \beta^i) \right) \right), \qquad P'_j(x) = \max\left\{0, \min\left\{x, C'_j\right\}\right\}. \tag{1.103}$$
To get these formulas, the hyperparameters C and R are replaced by the aforementioned class dependent variants Cj and Rj . For the L2 variant, the same approach
leads to:
$$\alpha_j^{i+1} = P\left( \alpha_j^i - \frac{1}{Q_{jj} + \frac{1}{2C_j}} \left( Q_{j\cdot} \cdot (\alpha^i - \beta^i) - 1 + \frac{\alpha_j}{2C_j} \right) \right),$$
$$\beta_j^{i+1} = P\left( \beta_j^i - \frac{1}{Q_{jj} + \frac{1}{2C'_j}} \left( R_j - Q_{j\cdot} \cdot (\alpha^i - \beta^i) + \frac{\beta_j}{2C'_j} \right) \right), \qquad P(x) = \max\{0, x\}. \tag{1.104}$$
Independent of the chosen variant, the resulting classification function is:
$$f(x) = \sum_j y_j (\alpha_j - \beta_j) \left( \langle x_j, x \rangle + 1 \right).$$

In the linear case, the formulas for the optimal $w$ and $b$ ($w = \sum_j y_j (\alpha_j - \beta_j) x_j$, $b = \sum_j y_j (\alpha_j - \beta_j)$) can be plugged into the update formulas [Hsieh et al., 2008]:

$$\alpha_j^{i+1} = P_j\left( \alpha_j^i - \frac{1}{Q_{jj}} \left( y_j \left( \langle x_j, w^i \rangle + b^i \right) - 1 \right) \right),$$
$$\beta_j^{i+1} = P'_j\left( \beta_j^i - \frac{1}{Q_{jj}} \left( R_j - y_j \left( \langle x_j, w^i \rangle + b^i \right) \right) \right), \tag{1.105}$$

$$\left( w^{i+1}, b^{i+1} \right) = \left( w^i, b^i \right) + \left( \alpha_j^{i+1} - \alpha_j^i \right) y_j (x_j, 1) - \left( \beta_j^{i+1} - \beta_j^i \right) y_j (x_j, 1), \tag{1.106}$$
and for L2–BRMM correspondingly. Now, only the diagonal of Q and the samples
have to be stored/used and not the complete matrix, which makes this formula particularly useful for large scale applications.
For choosing the index j, there are several possibilities [Steinwart et al., 2009]. For the implementation, we chose a simple one [Mangasarian and Musicant, 1998]: in an outer loop we iterate over all indices in random order, and in an inner loop we repeatedly iterate over the active indices. An index j is active when either αj or βj is greater than zero. The iteration stops after some maximum number of iterations, or when the maximum change in an iteration loop falls below some predefined threshold. For initialization, all variables (w0 , b0 , α0 , β 0 ) are set to zero.
the “single iteration” approach could be used here, too. To simulate RMM we used
a simplification by setting Cj′ = ∞ ∀j. Further details on deriving the formulas and
solvability are given in Appendix B.3.3.
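The linear update scheme above can be condensed into a small pure-Python sketch (function names are mine, not from the thesis; it implements the updates (1.105)/(1.106) with random index order, but omits the active-index inner loop and the stopping threshold for brevity):

```python
import random

def train_linear_brmm(X, y, C=1.0, R=2.0, C_out=None, epochs=200, seed=0):
    # dual coordinate descent sketch for the linear BRMM; the offset b
    # is carried along by augmenting every sample with a constant 1 feature
    if C_out is None:
        C_out = C
    rng = random.Random(seed)
    Xa = [list(x) + [1.0] for x in X]            # (x_j, 1)
    Qjj = [sum(v * v for v in xa) for xa in Xa]  # diagonal of Q
    n, d = len(Xa), len(Xa[0])
    alpha, beta = [0.0] * n, [0.0] * n
    wb = [0.0] * d                               # (w, b)
    order = list(range(n))
    for _ in range(epochs):
        rng.shuffle(order)
        for j in order:
            s = y[j] * sum(a * b for a, b in zip(Xa[j], wb))
            # inner-margin update, clipped to [0, C]
            a_new = min(max(alpha[j] - (s - 1.0) / Qjj[j], 0.0), C)
            for i in range(d):
                wb[i] += (a_new - alpha[j]) * y[j] * Xa[j][i]
            alpha[j] = a_new
            # outer-margin update, clipped to [0, C_out]
            s = y[j] * sum(a * b for a, b in zip(Xa[j], wb))
            b_new = min(max(beta[j] - (R - s) / Qjj[j], 0.0), C_out)
            for i in range(d):
                wb[i] -= (b_new - beta[j]) * y[j] * Xa[j][i]
            beta[j] = b_new
    return wb[:-1], wb[-1]

def predict(w, b, x):
    return 1 if sum(a * c for a, c in zip(w, x)) + b >= 0 else -1
```

With a large R the outer-margin variables stay at zero and the sketch behaves like a plain linear C-SVM solver.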
1.3.4.2 Synthetic Data: Visualization of the Relations
To illustrate the relations between BRMM, C-SVM, and RFDA by means of classification performance and to analyze the influence of the range R, we apply all
classifiers to a synthetic dataset. Additionally, the performance difference between
the original RMM with hard outer margin and BRMM with soft outer margin
is investigated.
For comparability, the data model is the same as employed in [Shivaswamy and Jebara, 2010]. Data are sampled from two Gaussian distributions representing two classes. The distributions have different means, but identical covariance: $\mu_1 = (1, 1)$, $\mu_2 = (19, 13)$, $\Sigma = \begin{pmatrix} 17 & 15 \\ 15 & 17 \end{pmatrix}$. It shall be noted that the Gaussian nature of the sample distributions clearly favors RFDA-like classification techniques.
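Sampling such correlated Gaussians can be sketched with only the standard library via a hand-rolled 2×2 Cholesky factor (the function name is illustrative, not from the thesis):

```python
import math
import random

def sample_gaussian_2d(mean, cov, n, rng):
    # draw n samples from N(mean, cov) using the 2x2 Cholesky factor
    # L = [[a, 0], [b, c]] with cov = L L^T
    a = math.sqrt(cov[0][0])
    b = cov[1][0] / a
    c = math.sqrt(cov[1][1] - b * b)
    out = []
    for _ in range(n):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        out.append((mean[0] + a * z1, mean[1] + b * z1 + c * z2))
    return out

rng = random.Random(0)
cov = [[17.0, 15.0], [15.0, 17.0]]
class1 = sample_gaussian_2d((1.0, 1.0), cov, 3000, rng)
class2 = sample_gaussian_2d((19.0, 13.0), cov, 3000, rng)
```

The strong off-diagonal term (15 of 17) makes both classes elongated along the diagonal, which is what makes the outer margin of the (B)RMM informative here.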
Evaluation was done using a 5–fold cross-validation on a total of 3000 samples
per class. For simplicity we used 1–norm RMM/BRMM and fixed the regularization
parameter C at 0.003. Using 2–norm RMMs or other values for C results in similar
graphics.
Figure 1.8a shows the classification performance as a function of the range R. The
first observation is that the performance does not change for R ≥ 8. For these values
of R, no sample lies inside the outer margin. Hence, they already have a distance
less than R from the separating plane, and a further increase of R has no influence
anymore; RMM and BRMM effectively become a C-SVM. With decreasing range the error rate drops because the classifier can better adapt to the distributions. Without an outer soft margin the data are pressed into the inner soft margin. This results in worse performance for small ranges, and the regularization term loses importance.
In the BRMM case, i.e., with soft outer margin, the classifier gets closer and closer to an RFDA-type classifier when the range decreases. Note: The RFDA variant presented here uses a Laplacian noise model, whereas the RFDA normally uses a Gaussian one.
Nevertheless, we can see that the BRMM can mimic both C-SVM and RFDA. In some
cases a well-chosen range can in fact constitute better classifiers somewhere between
the two popular methods. The following section provides an example for this case.
1.3.4.3 Synthetic Data with Drift
We now have a look at the behavior on data with drift and the feature reduction ability. For this, we used synthetic data from the same model as used to motivate the RMM principle in Section 1.3.1.1. The data consist of samples from two two-dimensional Gaussian distributions with $\mu_1 = (0, -0.5)$, $\mu_2 = (t, 0.5)$, and $\Sigma = \begin{pmatrix} 1 & 0 \\ 0 & 0.1 \end{pmatrix}$, i.e., with the same covariance but different means. For the second distribution, the mean t of the x component changes linearly over time: from 8 to 6 during training and from 4 to 2 during the test phase. A total of 1000 samples were computed per class in the training phase, and another 1000 as test case. To additionally investigate how the different classifiers handle meaningless noise features, the dataset was extended by 50 additional noise components. Each sample of these components was drawn from a uniform distribution on the unit interval. The first two components of the data, however, still resemble what is shown in Figure 1.7, only with more samples. Lastly, we generated some additional variation and outliers in the data by adding Cauchy-distributed noise ($x_0 = 0$, $\gamma = 0.1$) to each component. To avoid excessively large outliers, we replaced noise amplitudes larger than 10 by 10.
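The clipped Cauchy noise and the uniform noise features can be sketched as follows (names are mine; the inverse-CDF trick `x0 + γ·tan(π(u − 1/2))` samples a Cauchy distribution from a uniform u):

```python
import math
import random

def clipped_cauchy(rng, x0=0.0, gamma=0.1, clip=10.0):
    # inverse-CDF sample from Cauchy(x0, gamma), amplitude-clipped at 10
    v = x0 + gamma * math.tan(math.pi * (rng.random() - 0.5))
    return max(-clip, min(clip, v))

def noise_features(rng, n_noise=50):
    # the 50 meaningless components: uniform on [0, 1), each perturbed
    # by the clipped Cauchy noise (the two informative features of the
    # drift model would be prepended in the full dataset)
    return [rng.random() + clipped_cauchy(rng) for _ in range(n_noise)]
```

The heavy Cauchy tails produce occasional large outliers, which is exactly what the clipping at 10 and the soft outer margin of the BRMM are meant to absorb.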
For the classification, we used the RMM and BRMM implementations with
regularization (1–norm, 2–norm) and loss (L1, L2) variants as introduced in Section 1.3.3.4 and 1.3.4.1. RFDA (R = 1) and SVM (R = 8) variants appear in the
results as special cases of the BRMM methods as previously discussed.
The range R was varied between 1 and 8. Due to the high noise, the hyperparameter C had a big influence on the error and its optimal choice was highly dependent
on the chosen range. Since we wanted to show the effect of the range, we kept C
fixed over all ranges (0.03 for the 2–norm and 0.002 for the 1–norm approaches). Figure 1.8b shows the classification performance in terms of the error rate on the testing
data as a function of R. The relatively high error rates (cf. Figure 1.8a) are due to the
drift in the data and the high noise.
The 2–norm approaches operate on the complete set of features and therefore perform worse than the 1–norm approaches (e.g., a minimum error of 22% for 2–norm L1–BRMM). Even lower performance (not shown) was observed using an RBF kernel. The systematic drift can only be observed in two feature dimensions, but for 2–norm regularized, RBF kernel, or polynomial kernel approaches every feature has an impact on the classification function, due to the model properties. So these models are worse at handling the drift and at building a good classifier, because the given classification problem clearly favors strategies that ignore the irrelevant features. Since they do not reduce features, these approaches are very sensitive to the noise in the data.
Generally, RMMs perform worse than BRMMs because of the bad treatment of outliers at the outer margin. For different choices of the hyperparameter C, the results look similar.

Figure 1.8: Classifier performance as function of R on synthetic data: (a) synthetic data (mean, standard error); (b) synthetic data with drift. RMM and BRMM (1–norm approaches in (a)) are compared and the transitions to corresponding RFDA and SVM variants are highlighted at the respective values of R (R = 1 at the lower end; R ≥ 8 (a) and R ≥ 6.2 (b) at the upper end). Visualizations taken from [Krell et al., 2014a].
With changing range, the errors of the 1–norm approaches show a smooth transition with clear minima around 5% error rate at a range of 1.5 for BRMM and 2.0 for
RMM. As expected, the number of features used by the 1–norm approaches is notably
reduced. With ranges larger than 3 for BRMM and 4 for RMM, only one feature is retained. The number monotonically increases with decreasing range: while RMM uses
five features for ranges lower than 2.4, BRMM uses only the two relevant features for
the lower ranges. For higher values of the hyperparameter C this relation remains the same, but especially for the RFDA case with R = 1 the numbers increase up to the total number of 52 features. So the feature reduction ability might get lost, and the classifier apparently tends to more and more overfitting and less generalization on the path from C-SVM via BRMM to RFDA.
1.3.4.4 MNIST: Handwritten Digit Classification
In this section, we describe a dataset which will not only be used in the following
section for an evaluation but also in several other parts of this thesis.
The MNIST dataset consists of pictures of handwritten digits (0–9) of different persons with predefined train and test sets (around 60000 and 10000 samples) [LeCun et al., 1998]. The images in the dataset are normalized to have the numbers centered and with the same size and intensity. It is an established benchmark dataset where the meaning of the data is directly comprehensible. Since it is freely available, we use it to enable reproducibility of our evaluations. Other arguments for its usage are that it enables simple, intuitive visualizations (for the backtransformation) and provides a large set of training samples. The currently best classification result, with a 0.23% test error rate, is achieved by multi-column deep neural networks [Schmidhuber, 2012]. Note that this algorithm is tuned to this type of data and it
cannot be seen as a pure classifier anymore because it intrinsically also learns a good
representation of the data which corresponds to preprocessing, feature generation,
and normalization. Hence, this algorithm tries to learn all ingredients of the decision
process at once and is not comparable to classical classification algorithms which rely
on a good preprocessing. Despite that, for our evaluations we are not interested in
the absolute performance values but in the differences between the SVM variants.
Figure 1.9: Examples of normalized digits (1 and 2). The original feature vector
data has been mapped to the respective image format.
1.3.4.5 Benchmark Data: Visualization of the RFDA–SVM Relations
In this section, we verify that BRMM behaves as expected also on real world data.
For this, we use the MNIST data (see Section 1.3.4.4) and a selection of IDA benchmark datasets described by [Rätsch et al., 2001]. The selection has the sole purpose
of generating a comprehensible figure: we show a selection with similar error levels,
so that the curve shapes are distinct.
[Figure 1.10 plots: (a) IDA data, error (%) over range R for the Banana, Image, Splice, Thyroid, and Waveform datasets; (b) MNIST data, relative error to SVM (%) over range R.]
Figure 1.10: Classifier performance as function of R on benchmark data. For
the MNIST data (b) the individual results (0 vs. 1, 0 vs. 2, . . . 8 vs. 9) are displayed
with the percentage change of the error relative to the error of the corresponding
C-SVM classifier. Visualizations taken from [Krell et al., 2014a].
We used RBF kernels and determined their hyperparameter γ as proposed by [Varewyck and Martens, 2011]. For the IDA data classification, the regularization parameter C was chosen using a 5–fold cross-validation tested with the three complexities suggested by Varewyck (0.5, 2, 8). On the MNIST data we fixed C to 2 due to the high computational load. As before, we visualize the performance as a function of the range parameter R.
For the IDA data evaluation we did a 5–fold cross-validation with five repetitions.
The results are shown in Figure 1.10a. For the MNIST data, train and test data are
predefined. Since BRMM is a binary classifier, we performed separate evaluations
for each possible combination of two different digits, resulting in 45 classification
problems for which the results are shown individually in Figure 1.10b. The results
are illustrated as relative error changes compared to a C-SVM classifier to obtain
comparable values for the effect of the range. This relative change in performance
shown in Figure 1.10b is given by
$$\frac{\text{Error(BRMM)} - \text{Error(SVM)}}{\text{Error(SVM)}} \cdot 100. \tag{1.107}$$
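Equation (1.107) as a one-line helper (the name is hypothetical):

```python
def relative_error_change(err_brmm, err_svm):
    # percentage change of the BRMM error relative to the C-SVM error,
    # Equation (1.107); negative values mean BRMM improves on the SVM
    return (err_brmm - err_svm) / err_svm * 100.0
```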
Looking at both the IDA results and the individual MNIST comparisons reveals that the influence of the range is highly dataset specific. For the IDA datasets the improvement using BRMM is marginal. For the
MNIST data, all classifiers with a range larger than 7 were equivalent to C-SVM.
A performance improvement using the appropriate R can be observed in many cases.
Figure 1.11: Classifier performance as function of R on MNIST data for two special numbers: (a) MNIST data with number 5; (b) MNIST data with number 9. “NX” stands for the binary classification of X with 5 or 9, respectively.
However, there are cases where the performance does not change or even decreases.
Figure 1.11 displays the individual results with the absolute error for the binary comparisons with the digit 9 and the digit 5. For the digit 9 there is mostly no change in performance, but for the digit 5 there is great potential for performance improvement using BRMM.
1.3.4.6 Application of the BRMM to EEG Datasets
EEG data is known to be highly non-stationary due to constantly changing processes in the brain and changing sensor electrode impedances [Sanei and Chambers, 2007]. To investigate the usability and the feature reduction properties of the 1–norm BRMM approach in this context, we used five preprocessed EEG datasets from the P300 experiment as described in Section 0.4. No spatial filtering was used, to really let the
classifier do the dimensionality reduction and make this task more challenging. The
signal amplitudes for each time point at each electrode were used as features; this resulted in 1612 features, which we normalized to zero mean and unit variance. For each of the remaining 5 subjects, we had two recording days with 5 repetitions of the experiment. For each of the 5 subjects and each of the two recording days, we took one of the 5 sets for training and the remaining 4 sets of that day for testing. This procedure was repeated for each dataset. Each set has between 700 and 800 data samples.
For comparison, we used the classical 2–norm SVM, a 1–norm SVM and 1–norm
BRMM as classifiers. Since the datasets contain an unbalanced number of samples
per class (ratio 6 : 1), we assigned the weight 8 to the underrepresented class, which performed well on average (cross-validation on the training data). This weighting was achieved
by using class specific Cj . The classification performance is measured by means of
balanced accuracy (BA) (Figure 3.5). The BRMM range was fixed at R = 1.5. This
value was found to be adequate for these datasets in a separate optimization on the
training data. C was optimized by first using a 5–fold cross validation to find a
rough range of values for each classifier. The optimal hyperparameters were then
automatically chosen on each individual training set, and the trained classifier was
evaluated on the corresponding test set.
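The balanced accuracy used here is the mean of the per-class recalls; a minimal sketch (the function name is mine):

```python
def balanced_accuracy(y_true, y_pred):
    # mean of per-class recalls (TPR and TNR in the binary case),
    # which is insensitive to the 6:1 class imbalance of the EEG data
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)
```

Unlike plain accuracy, a classifier that always predicts the majority class scores only 0.5 here.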
Table 1.4: Classification performance on EEG data

                       1–norm BRMM   1–norm SVM   2–norm SVM
balanced accuracy      0.872         0.854        0.857
standard error         0.006         0.008        0.007
standard deviation     0.028         0.036        0.032
The results (mean of balanced accuracy) are shown in Table 1.4. Our suggested
1–norm BRMM outperforms the other classifiers significantly (p < 0.05, paired t-test
corrected for 3 comparisons), the SVMs in turn perform on par. This indicates that a
relative margin which accounts for the drifts in the data might be a better choice on
EEG data. As expected, the number of features the 1–norm approaches used was notably smaller than for 2–norm SVM. 1–norm SVM used only 66–102 features, 1–norm
BRMM used 101–255. This corresponds to less than 16% of the available features used
by 2–norm SVM, and less than 30% of the number of examples. The increased number of features used by 1–norm BRMM is expected for two reasons. First, the relative margin and the respective range need to be modeled with more variables. Furthermore, it is possible to have a larger number of training samples at the hyperplane of the outer margin, which increases the possibility of more features being used due to Theorem 13.
1.3.4.7 Summary
In the applications, we could show that the BRMM is a reasonable classifier which generalizes RFDA and C-SVM and provides a smooth transition between both classifiers, which can be easily grasped from the geometrical perspective. The increased performance comes at the price of the additional hyperparameter R, which needs to be optimized.
If a 2–norm regularization is used, the implementation of this new algorithm is straightforward: one can use the approaches from Section 1.2 or interface the existing, highly efficient implementation of the equivalent SVR in the LIBSVM package.
1.4 Origin Separation: From Binary to Unary Classification
This section is based on:
Krell, M. M. and Wöhrle, H. (2014). New one-class classifiers based on the origin separation approach. Pattern Recognition Letters, 53:93–99, doi:10.1016/j.patrec.2014.11.008.
All theoretic discoveries of this publication were my own work. Hendrik Wöhrle
helped me with a few text parts, multiple reviews, and discussions.
Focusing the classification on one class is a common approach if there are not enough examples for a second class (e.g., novelty and outlier detection [Aggarwal, 2013]), or if the goal is to describe a single target class and its distribution [Schölkopf et al., 2001b]. Some unary (one-class) classifiers are modifications of binary ones, like k-nearest-neighbours [Aggarwal, 2013, Mazhelis, 2006], decision trees [Comité et al., 1999], and SVMs [Schölkopf et al., 2000, Tax and Duin, 2004, Crammer et al., 2006]. This section focuses on the connections between SVM variants and their unary counterparts.
The νoc-SVM (see Section 1.1.6.3) was presented in [Schölkopf et al., 2001b] as
a model for “Estimating the support of a high-dimensional distribution” just one
year after the publication of the ν-SVM [Schölkopf et al., 2000]. In both cases, the
algorithms are mainly motivated by their theoretical properties and a hyperparameter ν is introduced which is a lower bound on the fraction of support vectors. It is
shown in [Schölkopf et al., 2001b] that the νoc-SVM is a generalization of the Parzen
windows estimator [Duda et al., 2001]. Furthermore, in the motivation of the νoc-
70
Chapter 1. Generalizing: Classifier Connections
SVM [Schölkopf et al., 2001b] the authors state that their “strategy is to map the
data into the feature space corresponding to the kernel and to separate them from
the origin with maximum margin”. The important question of how this strategy leads to the final model description, and whether there is a direct connection to the existing C-SVM or ν-SVM, is not answered, despite similarities in the model formulations. A more concrete geometric motivation is published in [Mahadevan and Shah, 2009, p. 1628] as
a side remark. They argue that “the objectives of 1-class SVMs are 2-fold:” “Develop
a classifier or hyperplane in the feature space which returns a positive value for all
samples that fall inside the normal cluster and a negative value for all values outside
this cluster.” and “Maximize the perpendicular distance of this hyperplane from the
origin. This is because of the inherent assumption that the origin is a member of the
faulty class.” However, they did not provide a proof that the νoc-SVM fulfills these objectives, and they indicate that the C-SVM is the basis of this model, which is wrong. It turns out that this concept can be used as a generic approach to turn binary classifiers into unary classifiers, which is the basis of this section.
Definition 5 (Origin Separation Approach). In the origin separation approach, the
origin is added as a negative training example to a unary classification problem with
only positive training samples. With this modified data, classical binary classifiers
are trained.22
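Definition 5 amounts to a one-line data augmentation; a sketch (the function name is mine):

```python
def origin_separation(X):
    # Definition 5: augment the positive-only training data with the
    # origin as the single negative sample; any binary classifier can
    # then be trained on (X_aug, y_aug) -- with the caveat that the
    # origin must be separated with a hard margin (footnote 22)
    d = len(X[0])
    X_aug = [list(x) for x in X] + [[0.0] * d]
    y_aug = [1] * len(X) + [-1]
    return X_aug, y_aug
```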
In Figure 1.12 the concept is visualized in the context of SVM classification. We
will prove that, when applying this generic concept to the ν-SVM, solutions can be
mapped one-to-one to the νoc-SVM (Section 1.4.1).
Additionally to figuring out the relations between already existing unary classifiers, it is also possible to combine the origin separation with the previously introduced relative margin (see Section 1.4.2) and/or the single iteration approach (see
Section 1.4.4) and generate entirely new unary classifiers.
The geometric view of the SVDD, where a hypersphere with minimal radius is
constructed to include the data, is inherently different from the origin separation approach, which creates a separating hyperplane instead. Nevertheless, we will show
and visualize a relation between SVM and SVDD with the help of the origin separation approach (see Section 1.4.3).
The connection between C-SVM and PAA via the single iteration approach is
of special interest here. The original unary PAA (see Section 1.1.6.2) was motivated from the SVDD and not from the C-SVM which was the original motivation
of the binary PAA. Based on the connection of the PAA to the C-SVM (and thus the
BRMM), we apply the origin separation approach to derive new unary classifiers
from C-SVM, BRMM, and RFDA for online learning which can be used to apply the
22 A strict (hard margin) separation of the origin is required to avoid a degeneration of the classifier.
[Figure 1.12 sketch: hyperplane f(x) = ⟨w, x⟩ + b in the (x1, x2) plane with level sets f(x) = −1, 0, 1 and maximum distance to the origin.]
Figure 1.12: Origin separation scheme. An artificial sample (blue dot) for a second
class (y = −1) is added to the origin.
algorithms when resources are limited. This completes the picture on PAAs given in
[Crammer et al., 2006].
Figure 1.13 visualizes the variety of resulting classifiers and their relations, which
will be explained in detail in the following sections. We will focus on the main methods and give the details on further models in Appendix B.4.
In Section 1.4.5, the properties of the classifiers will be analyzed using the example of handwritten digit recognition. Another application where unary classification might be useful is EEG data analysis, as explained in Section 1.4.6.
1.4.1 Connection between ν-SVM and νoc-SVM
It has been proven under the assumption of separability and hard margin separation that the νoc-SVM defines the hyperplane with maximum distance for separating the data from the origin [Schölkopf et al., 2001b, Proposition 1]. This concept is similar to the well-known maximum margin principle in binary classification. In the following, we will generalize this proposition to arbitrary data and maximum margin separation with a soft margin, e.g., as specified for the ν-SVM.
Theorem 15 (Equivalence between ν-SVM and νoc-SVM via origin separation). Applying the origin separation approach to the ν-SVM results in the νoc-SVM.
[Figure 1.13 diagram: ν-SVM, SVM, BRMMs, RFDAs, and binary PAPs (connected by parameter transformation, generalization, and simplification/single iteration step arrows) lead via the zero/origin separation approach to one-class SVM, one-class BRMMs, and one-class PAPs; the SVDD is equivalent to the one-class SVM if all data is normalized to norm one, and a single iteration step yields online one-class BRMMs.]
Figure 1.13: Scheme of relations between binary classifiers (yellow) and
their one-class (red) and online (blue) variants. The new variants introduced
are in bold. The details are explained in Section 1.4. Visualization taken from
[Krell and Wöhrle, 2014].
Proof. The ν-SVM (Method 4) is defined by the optimization problem
$$\begin{aligned}
\min_{w', t', \rho', b'}\quad & \tfrac{1}{2} \|w'\|_2^2 - \nu\rho' + \frac{1}{n'} \sum_j t'_j \\
\text{s.t.}\quad & y_j \left( \langle w', x'_j \rangle + b' \right) \ge \rho' - t'_j \ \text{and}\ t'_j \ge 0 \quad \forall j. \tag{1.108}
\end{aligned}$$
n′ is the number of training samples. w′ and b′ define the classification function
f (x) = sgn (hw′ , xi + b′ ). The slack variables t′j are used to handle outliers which do
not fit the model of linear separation.
In the origin separation approach, only the origin (zero) is taken as the negative
class (y0 = −1). In this case, the origin must not be an outlier (t0 = 0), because it is the
only sample of the negative class.23 Consequently, the respective inequality becomes $-(\langle w', 0 \rangle + b') = \rho'$. Accordingly, b′ can be automatically set to −ρ′. To achieve class
balance, as many samples as we have original samples of the positive class are added
to the origin for the negative class. This step only affects the total number of samples
which is doubled (n′ = 2n ), such that n only represents the number of real positive
training samples and not the artificially added ones. Putting everything together
23 Also known as hard margin separation.
(yj = 1, b′ = −ρ′ , n′ = 2n), results in
$$\begin{aligned}
\min_{w', t', \rho'}\quad & \tfrac{1}{2} \|w'\|_2^2 - \nu\rho' + \frac{1}{2n} \sum_j t'_j \\
\text{s.t.}\quad & \langle w', x_j \rangle \ge 2\rho' - t'_j \ \text{and}\ t'_j \ge 0 \quad \forall j. \tag{1.109}
\end{aligned}$$
By applying the substitutions
$$w' \to \frac{\nu}{2} w, \quad \rho' \to \frac{\nu}{4} \rho, \quad \text{and} \quad t'_j \to \frac{\nu}{2} t_j \tag{1.110}$$
in Equation (1.109), and by multiplying its inequalities with $\frac{2}{\nu}$ and its target function with $\frac{4}{\nu^2}$, this model is shown to be equivalent to:
$$\begin{aligned}
\min_{w, t, \rho}\quad & \tfrac{1}{2} \|w\|_2^2 - \rho + \frac{1}{\nu l} \sum_j t_j \\
\text{s.t.}\quad & \langle w, x_j \rangle \ge \rho - t_j \ \text{and}\ t_j \ge 0 \quad \forall j \tag{1.111}
\end{aligned}$$
with the decision function $f(x) = \operatorname{sgn}\left( \langle w, x \rangle - \frac{\rho}{2} \right)$.
This is the model of the νoc-SVM (Method 11). There is only a difference in the offset of the decision function, which should be −ρ instead of −ρ/2. This difference can be geometrically justified as explained in the following, and the function can be changed accordingly. In addition to the decision hyperplane, an SVM is identified with its margin, i.e., additional hyperplanes for the positive and the negative class. The difference in the offsets corresponds to choosing the hyperplane of the positive class as the decision criterion instead. This is reasonable, because for the SVM models the training data is assumed to also include outliers which are on the opposite side of the respective hyperplane. Furthermore, the decision criterion might be changed in a postprocessing step or varied in the evaluation step [Bradley, 1997].
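The substitution (1.110) and the $\frac{4}{\nu^2}$ rescaling of the target function can be spot-checked with exact rational arithmetic (a sketch; the scalar symbols `W2` and `T` stand in for $\|w\|_2^2$ and $\sum_j t_j$):

```python
from fractions import Fraction as F

def nu_svm_target(W2, rho, T, nu, n):
    # target of (1.109): 0.5*||w'||^2 - nu*rho' + (1/(2n)) * sum t'_j
    return F(1, 2) * W2 - nu * rho + F(1, 2 * n) * T

# spot-check: substituting w' = (nu/2)w, rho' = (nu/4)rho, t' = (nu/2)t
# and scaling by 4/nu^2 yields the target of (1.111) with l = n
for nu in (F(1, 2), F(1, 4), F(3, 4)):
    n, W2, rho, T = 5, F(7), F(2), F(3)
    lhs = nu_svm_target((nu ** 2 / 4) * W2, (nu / 4) * rho, (nu / 2) * T, nu, n) * 4 / nu ** 2
    rhs = F(1, 2) * W2 - rho + F(1, n) * T / nu
    assert lhs == rhs
```

This is only a numeric spot-check at a few rational points, not a symbolic proof, but each checked value is exact.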
1.4.2 Novel One-Class Variants of C-SVM, BRMM, and RFDA
Since the BRMM generalizes the C-SVM and RFDA, it is sufficient to apply the origin
separation approach directly to the BRMM (Method 14).
With the same argumentation as for the ν-SVM in Theorem 15, we insert the origin (zero sample) into the inequality
$$y_0 \left( \langle w, x_0 \rangle + b \right) \ge 1 - t_0 \tag{1.112}$$
and enforce t0 = 0, which results in $-(\langle w, 0 \rangle + b) = 1$ and consequently b = −1. Inserting b = −1 into the remaining inequalities finally results in the
Method 18 (One-Class Balanced Relative Margin Machine).
$$\begin{aligned}
\min_{w, t}\quad & \tfrac{1}{2} \|w\|_2^2 + C \sum_j t_j \\
\text{s.t.}\quad & 1 + R + t_j \ge \langle w, x_j \rangle \ge 2 - t_j \ \text{and}\ t_j \ge 0 \quad \forall j. \tag{1.113}
\end{aligned}$$
Modifying the decision function f(x) = sgn(⟨w, x⟩ − 1) in the same way as we did for the ν-SVM results in f(x) = sgn(⟨w, x⟩ − 2). Note that the offset is now fixed, which enables the application of the single iteration approach from Section 1.2.4 without any changes to the offset treatment. With the extreme case R = ∞, we obtain a new one-class SVM (Coc-SVM). It is expected to be very similar to the νoc-SVM because they were derived from the C-SVM and ν-SVM, which are equivalent according to Theorem 6. However, the new model provides a better geometric interpretation and a simplified implementation.
Due to the single iteration approach, it would be possible to use the implementations of the binary counterparts with a linear kernel. Only the hard margin separation for the artificially added sample has to be realized. Nevertheless, it helps to take a deeper look into implementation strategies to adapt them to the special setting of unary classification with the origin separation approach. Furthermore, for the use of nonlinear kernels special care has to be taken, because the origin of the underlying RKHS might not have a corresponding sample in the original data space anymore. In particular, the zero sample is not mapped to the origin.
For solving the optimization problem in Equation (1.113), it is no longer required
to update pairs of samples [Platt, 1999a]. Because of the special offset treatment
the approach from Section 1.2.3 can be directly applied. Considering the fixed offset
(b = −1), the respective update formulas can be derived (see also Appendix B.4.2):
$$\begin{aligned}
\alpha_j^{(k+1)} &= P\left( \alpha_j^{(k)} - \frac{1}{\|x_j\|^2} \left( \langle w^{(k)}, x_j \rangle - 2 \right) \right) \\
\beta_j^{(k+1)} &= P\left( \beta_j^{(k)} + \frac{1}{\|x_j\|^2} \left( \langle w^{(k)}, x_j \rangle - (R + 1) \right) \right) \\
w^{(k+1)} &= w^{(k)} + \left( \left( \alpha_j^{(k+1)} - \alpha_j^{(k)} \right) - \left( \beta_j^{(k+1)} - \beta_j^{(k)} \right) \right) x_j
\end{aligned} \tag{1.114}$$
with $P(z) = \max\{0, \min\{z, C\}\}$.
Comparing these formulas with the formulas of the binary classifier in Section 1.3.4.1
shows that the implementations of binary classifiers require only minor modifications
to be also used for unary classification: The offset has to be fixed to −1 and its update
has to be suppressed.
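As a sketch, the updates (1.114) can be written in pure Python (hypothetical names; deterministic index order, no stopping criterion, and a very large R approximates the one-class SVM limit):

```python
def train_oneclass_brmm(X, C=10.0, R=1000.0, epochs=50):
    # coordinate updates (1.114) with the offset fixed to b = -1:
    # inner target <w, x_j> >= 2, outer target <w, x_j> <= R + 1
    n, d = len(X), len(X[0])
    alpha, beta = [0.0] * n, [0.0] * n
    w = [0.0] * d
    clip = lambda z: min(max(z, 0.0), C)
    for _ in range(epochs):
        for j in range(n):
            q = sum(v * v for v in X[j])
            if q == 0.0:
                continue
            s = sum(a * b for a, b in zip(w, X[j]))
            a_new = clip(alpha[j] - (s - 2.0) / q)
            for i in range(d):
                w[i] += (a_new - alpha[j]) * X[j][i]
            alpha[j] = a_new
            s = sum(a * b for a, b in zip(w, X[j]))
            b_new = clip(beta[j] + (s - (R + 1.0)) / q)
            for i in range(d):
                w[i] -= (b_new - beta[j]) * X[j][i]
            beta[j] = b_new
    return w

def oneclass_predict(w, x):
    # decision function f(x) = sgn(<w, x> - 2)
    return 1 if sum(a * b for a, b in zip(w, x)) - 2.0 > 0 else -1
```

Compared with the binary sketch from Section 1.3.4.1, only the fixed offset and the suppressed offset update differ, mirroring the observation above.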
Squared Loss and Kernels
The origin separation approach can also be used for variants of the discussed algorithms if squared loss variables (t2j ) in the target function or kernels are used as
introduced in Section 1.1.1.2. The formulas can be derived in the same way as for the
binary classifiers (see Appendix B.4.2).
Kernels are motivated by an implicit mapping of the data to a higher dimensional
space (RKHS). Consequently, the separation from the origin is applied in the RKHS
and not in the original data space. For example, using a Gaussian kernel (k(x, y) =
e^(−γ‖x−y‖₂²)) results in a separation of points on an infinite dimensional unit hypersphere
from its center at the origin in the RKHS, because

‖x‖_k := k(x, x) = 1  ∀x ∈ ℝ^m.        (1.115)
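This normalization property is easy to verify numerically. The following sketch (with an arbitrarily chosen γ) evaluates a Gaussian kernel and confirms that every sample has unit norm in the RKHS:

```python
import numpy as np

def rbf(x, y, gamma=0.5):
    # Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)
    return np.exp(-gamma * np.sum((x - y) ** 2))

rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.normal(size=3)
    # k(x, x) = exp(0) = 1: every sample lies on the unit hypersphere in the RKHS
    assert abs(rbf(x, x) - 1.0) < 1e-12
```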
Strict Separation from the Origin for SVM: C = ∞
For a different geometric view on the new one-class SVM, consider the extreme case
of hard margin separation (C = ∞), which enforces the slack variables to be zero. Let
X denote the set of training instances xj with the convex hull conv(X). The origin
separation approach reveals that the optimal hyperplane (for the positive class) is
tangent to conv(X) in its point of minimal norm x′ (Theorem 23). The hyperplane is
orthogonal to the vector identified with x′, and w = (2/‖x′‖₂²) x′.
1.4.3 Equivalence of SVDD and One-Class SVMs on the Unit Hypersphere
The approach of SVDD (Method 10) is different from the origin separation. Here, the
goal is to find a hypersphere with minimal radius R around a center c such that the
data is inside this hypersphere and the outliers are outside
min_{R,c,t′} R² + C′ Σ_i t′_i
s.t. ‖c − x_i‖₂² ≤ R² + t′_i and t′_i ≥ 0 ∀i.        (1.116)
Theorem 16 (Equivalence of SVDD and νoc-SVM on the Unit Hypersphere). If the
data is on the unit hypersphere (normalized to a norm of one),
w = c,  t_i = t′_i/2,  ρ = (‖c‖₂² + 1 − R²)/2,  C′ = 1/(νl)        (1.117)
gives a one-to-one mapping between the SVDD and the νoc-SVM model.
For a proof, refer to [Tax, 2001] or Appendix B.2.3, but not to [Tax and Duin, 2004],
where the proof is incomplete. The equivalence of the models is also reasonable from
Chapter 1. Generalizing: Classifier Connections
Figure 1.14: Geometric relation between SVDD with a separating hypersphere (red) with radius R and center c and one-class SVM with its hyperplane (blue) and classification vector w when the data lies on a unit
hypersphere (black). Samples outside the red hypersphere are outliers in the
SVDD model and samples below the blue hyperplane are outliers for νoc-SVM.
The remaining data belongs to the class of interest. Visualization taken from
[Krell and Wöhrle, 2014].
our new geometric perspective as visualized in Figure 1.14. Intersecting the data
space (unit hypersphere) with a SVM hyperplane separates the data space into the
same two parts as when cutting it with the SVDD hypersphere. From the geometric
view, R² + d² = 1 should also hold true, where d is the distance of the origin to the
separating hyperplane of the SVM. So maximizing this distance in the SVM model is
equivalent to minimizing the radius of the hypersphere in the SVDD model. If the
data is not normalized to a norm of one, the models differ. Note that when using
Gaussian kernels, data is internally normalized to unit norm (see Equation (1.115)
or [Tax, 2001]).
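The geometric argument can also be checked numerically: for unit-norm samples, the half-space test of the one-class SVM and the hypersphere test of SVDD coincide under the mapping of Theorem 16 (w = c and R² = ‖c‖₂² + 1 − 2ρ). The values of c and ρ below are arbitrary illustrations:

```python
import numpy as np

# Arbitrary (hypothetical) model parameters for illustration.
c = np.array([0.3, 0.8])        # SVDD center = one-class SVM weight vector
rho = 0.4                       # one-class SVM margin offset
R2 = c @ c + 1.0 - 2.0 * rho    # squared SVDD radius from the Theorem 16 mapping

rng = np.random.default_rng(1)
for _ in range(1000):
    x = rng.normal(size=2)
    x /= np.linalg.norm(x)                    # project sample onto the unit circle
    svm_inlier = (c @ x) >= rho               # above the separating hyperplane
    svdd_inlier = np.sum((c - x) ** 2) <= R2  # inside the hypersphere
    assert svm_inlier == svdd_inlier          # identical decisions on unit-norm data
```

The identity behind the check is ‖c − x‖² = ‖c‖² − 2⟨c, x⟩ + 1 for ‖x‖ = 1, so both tests are equivalent exactly when R² = ‖c‖² + 1 − 2ρ.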
Theorem 17 (From νoc-SVM to the New One-Class SVM). Let ρ(ν) denote the optimal value of the νoc-SVM model. If ρ(ν) > 0, νoc-SVM is equivalent to our new
one-class SVM by substituting
w̄ = 2w/ρ(ν),  t̄_i = 2t_i/ρ(ν),  C̄ = 2/(νlρ(ν))        (1.118)

even if the data is not normalized. So both models are similar, too.
The proof can be found in Appendix B.2.3.
1.4.4 Novel Online Unary Classifier Variants of the C-SVM
In Section 1.4.2, formulas for the weight update belonging to a single sample have
been derived. According to Section 1.2.3 and Section 1.2.4 the application of the single
iteration approach is straightforward and leads to the update formulas for an online
classifier version:
Method 19 (Online One-Class BRMM).

α = max{0, min{(1/‖x_new‖₂²) (2 − ⟨w_old, x_new⟩), C}}
β = max{0, min{(1/‖x_new‖₂²) (⟨w_old, x_new⟩ − (R + 1)), C}}
w_new = w_old + (α − β) x_new.        (1.119)
This model combines the single iteration, the relative margin, and the origin
separation approach. For an online one-class SVM variant, only α is used (β ≡ 0).
Update formulas for variants with a different loss can be derived accordingly (see Appendix B.4.3).
This direct transfer of the introduced unary classifiers to online classification completes the picture on the binary PAAs, which are connected to the C-SVM by the single
iteration approach. It results in online versions for the unary variants of the batch
algorithms: C-SVM (R = ∞), BRMM, and RFDA (R = 1).
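A single online update of Method 19 can be sketched as follows (illustrative code; the names are hypothetical and not taken from the pySPACE implementation):

```python
import numpy as np

def online_unary_brmm_step(w, x_new, C, R):
    """One online update of the one-class BRMM (Method 19, Equation (1.119)).
    Sketch only; for an online one-class SVM variant, beta is simply fixed to 0."""
    inv = 1.0 / np.dot(x_new, x_new)
    score = np.dot(w, x_new)
    alpha = max(0.0, min(inv * (2.0 - score), C))          # push towards inner margin
    beta = max(0.0, min(inv * (score - (R + 1.0)), C))     # pull back from outer margin
    return w + (alpha - beta) * x_new
```

In a streaming setting, this function is simply called once per incoming (verified) sample, so memory usage stays O(n) in the feature dimension.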
1.4.5 Comparison of Unary Classifiers on the MNIST Dataset
To get a first impression of the new unary classifiers, a comparison on the MNIST
dataset (see Section 1.3.4.4) was performed. We chose a one vs. rest evaluation
scheme, where classifiers were trained only on one digit (target class) and tested
on all other digits (rest, outliers). Using unary classifiers on this data has three
advantages: First, the classifiers describe how to detect a single digit and not how
to detect the difference to all the other digits. Second, the classifiers do not have to
handle class imbalance. And third, the classifiers can better detect new outliers like
bad handwriting or letters (which are not part of this dataset).
1.4.5.1 Comparison of Classifiers with Different Range or Radius
For dimensionality reduction, a principal component analysis (PCA) [Lagerlund et al., 1997,
Rivet et al., 2009, Abdi and Williams, 2010] was applied (trained on the given training
data), keeping the 40 most important principal components (the PCA implementation of
scikit-learn was used here [Pedregosa et al., 2011]). Furthermore, all resulting feature
vectors were normalized to unit norm. The regularization parameter C was individually
optimized using a grid search with 5-fold cross-validation on the training data. As
performance metric, the average area under the ROC curve (AUC) [Bradley, 1997] over
all possible digits was used, to account for class imbalance (ratio 1:9) and for the
sensitivity to the decision boundary, which was not optimized [Swets, 1988, Bradley, 1997,
Straube and Krell, 2014]. The pySPACE configuration file is provided in the appendix
in Figure C.2. The results are depicted in
Figure 1.15.
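The preprocessing described above (PCA fitted on the training data, then unit-norm scaling) can be sketched as follows; the thesis used scikit-learn's PCA, while this numpy/SVD version and its function name are only illustrative:

```python
import numpy as np

def pca_then_unit_norm(X_train, X_test, n_components=40):
    """Fit PCA (via SVD) on the training data only, project both sets,
    then normalize each feature vector to unit Euclidean norm. Sketch only."""
    mean = X_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_train - mean, full_matrices=False)
    W = Vt[:n_components].T                   # top principal directions
    def transform(X):
        Z = (X - mean) @ W
        return Z / np.linalg.norm(Z, axis=1, keepdims=True)
    return transform(X_train), transform(X_test)
```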
Figure 1.15: Comparison of different unary classifiers on the MNIST dataset
with varying radius (unary PAAs PA0 and PA1, Rmax = s/10) or range (BRMM
variants, R = s/4) parameter. Both hyperparameters are calculated with the help
of the scaling parameter s. The average AUC with standard error is displayed in a
scenario, where the classifier has been trained on one digit out of ten. The binary
BRMM is displayed for comparison, too. For the BRMM variants, the border at R = 1
(s = 4) corresponds to a RFDA variant and the upper border (s = 20) is equivalent to
the respective C-SVM variant. Visualization taken from [Krell and Wöhrle, 2014].
For the unary BRMM variants, different range parameters were tested, but no
influence on the performance can be observed. In this application, the online unary
BRMM performs as well as the batch algorithm (unary BRMM), although it requires
less training time (600ms instead of 30min average time) and memory (O(n) instead
of O(n · l)). This is a clear advantage of the online classifier, since it allows training
the algorithm on large datasets and potentially increasing performance with constantly
low processing resources.
For the binary BRMM, there is a performance increase on the way from RFDA to
C-SVM. C-SVM performs better than the unary classifiers, but requires all digits for
the training (nine times more samples). Consequently, using the C-SVM instead of a
unary classifier requires more computing resources.
For the unary PAAs (online classifiers) PA0 and PA1, which were motivated by
SVDD as described in Section 1.1.6.2, different values for the maximal radius were
tested. The PAAs optimize their radius parameter, but need a predefined maximum
value. When increasing the maximum radius, performance first increases and then
monotonically decreases. PA1 clearly outperforms PA0, because it allows for misclassifications in its model, which improves its generalization capability. This effect
is quite common and has been observed also for numerous other classifiers. With the
optimal choice of the radius, PA1 performs as well as the BRMMs. This is possibly
the same effect as the equivalence of the SVDD and the one-class SVMs on data on a
unit hypersphere, as used in this example. Unfortunately, the intrinsic optimization
of the radius is not working sufficiently well and the maximum radius parameter
needs additional tuning. This is a clear disadvantage compared to the other classifiers, especially since hyperparameter tuning is often difficult in one-class scenarios
like outlier detection.
To summarize, despite the slightly worse performance in comparison to
C-SVM (s = 20), unary classifiers are useful in this application due to reduced computing resources and because they might generalize better on unseen data like handwritten letters. For online learning, the new online unary SVM is better than the
already existing online one-class algorithms (PA0 and PA1), because it does not require the optimization of the hyperparameter Rmax.
1.4.5.2 Equivalence of νoc-SVM and the Novel One-Class C-SVM
To visualize the equivalence between νoc-SVM and our new version of a one-class
SVM, which is directly derived from C-SVM, an additional evaluation was conducted
by varying the hyperparameters ν and C (see Figure 1.16). This results in one performance curve for each digit the classifier has been trained on. Everything else was
kept as in the previous evaluation in Section 1.4.5.1 (e.g., testing on all remaining
digits). The pySPACE configuration file is provided in the appendix in Figure C.1.
The performance of the new one-class SVM is constant when the regularization
parameter C is chosen to be very high or very low. Both performance curves show
the same increase in performance, the same maximum performance, and then the
same decrease. Only the scaling between the hyperparameters is different, and consequently the curves look different. The similarity of the curves indicates an equivalence of the solutions. This equivalent behavior is also expected from the theory
(see Section 1.4.1) and was the motivation to derive the other classifiers, because the
derivation requires the C-SVM modeling and is not applicable to ν-SVM. For very
low ν, the performance of νoc-SVM decreases drastically. Reasons for this might be
Figure 1.16: Performance comparison of νoc-SVM (blue) and new one-class
SVM (red) trained on different digits (0-9) with varying hyperparameters ν and C.
Visualization taken from [Krell and Wöhrle, 2014].
differences in implementation, rounding errors, or a degeneration of the optimization
problem.
1.4.5.3 Generalization on Unseen Data and Sensitivity to Normalization
In the following, the effect of different normalization techniques and the generalization on unseen data are analyzed. Normalization techniques change the position of
the data in relation to the origin. Consequently, an effect on the origin separation
is expected. One-class classifiers are not dependent on the second class and should
better handle changes in this class.
In comparison to the evaluation in Section 1.4.5.1, the PCA is omitted and only
0.25% of the training data are used for pretraining and calibration of the algorithms.
In the testing phase, each sample is first classified, and if it has the label of interest,
an external verification is assumed, which allows for incremental training
of the unary online algorithms (unary PA1 and online unary SVM). To show the
generalization capability, only the label/digit of interest (one of 1 – 9) was used as
positive class and 0 as opposing class for calibration and second class for the binary
classifier (new one-class SVM). For testing, all digits were used. For normalization,
three approaches are compared: no normalization (No), normalization of the feature
vector to unit norm (Euclidean), and finally determining mean and variance of each
feature on the training data and normalizing the data to zero mean and variance of
one (Gaussian). For the unary PA1, the optimization of the hyperparameter Rmax was
included in the 5-fold cross-validation (which optimizes the hyperparameter C) using
the same range of values for the hyperparameter as in Section 1.4.5.1. To account
for the random selection of training samples and the splitting in the cross-validation
step, the experiment was repeated 100 times. The results are shown in Figure 1.17.
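The two non-trivial normalization variants can be sketched as follows (illustrative code; in the Gaussian case, mean and variance are estimated on the training data only):

```python
import numpy as np

def euclidean_normalize(X):
    # scale each feature vector to unit Euclidean norm
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def gaussian_normalize(X_train, X):
    # shift/scale each feature to zero mean and unit variance,
    # with statistics estimated on the training data only
    mu = X_train.mean(axis=0)
    sigma = X_train.std(axis=0)
    return (X - mu) / sigma

E = euclidean_normalize(np.array([[3.0, 4.0]]))   # -> [[0.6, 0.8]]
```

Note that both transformations change the position of the data relative to the origin and therefore directly affect the origin separation approach.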
Three conclusions can be drawn from the results:
Figure 1.17: Comparison of different normalization techniques and online
classifiers on the MNIST dataset. The box plots show the median of the performance
values (AUC) with the 25% and 75% quantiles.
1. The one-class classifiers are highly dependent on the chosen normalization.
As already mentioned, this was expected, because the normalization largely
changes the position of the data relative to the origin in this example. For the binary
C-SVM classifier there are no large differences in performance between the different normalization techniques but the unary classifiers largely differ. For the
online unary SVM, the Gaussian feature normalization is best and for the unary
PA1 Euclidean feature normalization is best.
2. The unary PA1 shows the worst performance.
One reason for this might be that the small calibration dataset was not sufficient for tuning Rmax. If it were chosen too small, the incremental learning
would change the center of the circle too much. Furthermore, the approach of just
putting a circle around the samples of interest might not be the right approach
in this example, because it does not generalize enough.
3. The online unary SVM clearly outperforms the other classifiers.
This was expected due to the incremental training and because of the focusing
on the class of interest. Hence, it does not overfit on the “outliers” (irrelevant
class) and performs better when other types of “outliers” come in.
1.4.6 P300 Detection as Unary Classification Problem
In this section, we evaluate the application of unary classification on the P300
dataset, which is described in Section 0.4.
Normally, P300 detection is treated as a binary classification problem
[Krusienski et al., 2006], and sometimes even as a multi-class problem
[Hohne et al., 2010]. The important class is the P300 ERP. As the second
class, the brain signal which corresponds to the unimportant, more frequent stimulus
is taken, or other noise samples which are not related to the important stimulus. In
such a classification task, the classifier might therefore not learn the characteristics
of the P300 signal but how to differentiate the important class from the unimportant
class. To simplify the use in the application and from the modeling perspective, we
suggest focusing on the important class and using unary classification. This reduces
the training effort, and the classifier models the signal of interest. Furthermore, the
problem of class imbalance in P300 detection in a two-class approach can be avoided.
It is caused by the fact that the important stimulus is rare and has to be treated
accordingly from the algorithm and evaluation point of view [Straube and Krell, 2014].
Processing In the following, we introduce the methods used for processing and
classifying the EEG data.
The preprocessing was as described in [Feess et al., 2013] and displayed in Figure 3.4. For the normalization we again compared Euclidean, Gaussian, and no
(“Noop”) feature normalization. For classification, we compared binary and unary,
online and batch BRMM including the special cases of R = 1 (RFDA) and R = ∞
(C-SVM). Furthermore, we included the unary PAAs, PA1 and PA2. The online classifiers were kept fixed on the testing data. For our investigation, the threshold was
optimized on the training data [Metzen and Kirchner, 2011] because unary classifiers
are very sensitive to it.
For all classifiers, the regularization parameter C has to be optimized. We tested
with the following range: [10^−4, 10^−3.5, …, 10^2]. A second hyperparameter is only
relevant for the BRMM (with a tested range of 1 + 10^−1, 1 + 10^−0.8, …, 1 + 10^1) and
the unary PAAs, PA1 and PA2 (with a maximum radius of 10^−0.5, 10^−0.7, …, 10^1.5). To
determine the optimal hyperparameters, a grid search with a 5-fold cross validation
was performed on the training data.
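Such a grid search can be sketched as follows (illustrative code; `train_fn` and `score_fn` are placeholders for an arbitrary classifier and metric, not a specific implementation):

```python
import numpy as np

def grid_search_C(train_fn, score_fn, X, y, grid, k=5, seed=0):
    """Hyperparameter grid search with k-fold cross-validation on the
    training data. Sketch only: train_fn(X, y, C) returns a model,
    score_fn(model, X, y) returns a performance value (higher is better)."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), k)
    best_C, best_score = None, -np.inf
    for C in grid:
        scores = []
        for i in range(k):
            val = folds[i]
            trn = np.concatenate([folds[j] for j in range(k) if j != i])
            model = train_fn(X[trn], y[trn], C)
            scores.append(score_fn(model, X[val], y[val]))
        if np.mean(scores) > best_score:
            best_C, best_score = C, np.mean(scores)
    return best_C

# logarithmic grid as in the text: 10^-4, 10^-3.5, ..., 10^2
grid = [10.0 ** e for e in np.arange(-4, 2.5, 0.5)]
```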
In each session, 5 datasets were recorded. For evaluation, we trained the algorithms on one of five sets and tested on the remaining 4 datasets. This is a typical cross-validation scheme. Although not given to the unary classifiers, the data of
83
1.4. Origin Separation: From Binary to Unary Classification
the second class (frequent irrelevant standard stimuli) were used for evaluation and
training the other supervised algorithm: the xDAWN algorithm uses data which does
not belong to the ERP of interest to determine the noise in the data. For Gaussian
feature normalization, the label of the data is irrelevant. Only the threshold optimization really needs the second class. As discussed in Section 3.3, the balanced accuracy (BA) was taken
as performance metric [Straube and Krell, 2014].
Results and Discussion The results of the evaluation are depicted in Figure 1.18.
The above-mentioned classifiers and normalization methods are compared.
Figure 1.18: Comparison of the different classifiers and normalization techniques. Unary and binary classifier variants are compared as well as online and
batch learning variants. In the box plots, median and 75% quantiles are displayed.
When using Euclidean feature normalization, the performance of PA1 and PA2
is comparable to the performance of the unary online SVM. This is reasonable, because the respective batch algorithms (one-class SVM and SVDD) are equivalent,
when applied on data with a norm of one. Nevertheless, the performance of PA1 and
PA2 with Euclidean feature normalization is inferior to the performance of the other
classifiers with Gaussian feature normalization. This shows that, for the observed
data, the approach of linear separation with hyperplanes is superior to the approach
of separation with the help of surrounding hyperspheres. Another problem might be
Figure 1.19: Comparison of classifiers (except PA1 and PA2) after Gaussian
feature normalization. In the box plots, median and 75% quantiles are displayed.
the choice of the optimal maximum range of PA1 and PA2 as in the experiment in
Section 1.4.5.1.
The processing with Gaussian feature normalization always performs slightly better than or equal to the other normalization techniques (except for PA1 and PA2). Again,
the unary classifiers are more sensitive to the type of normalization, which is reasonable due to the origin separation approach. The results for using the Gaussian
feature normalization only are displayed in Figure 1.19. It can be observed that
when using this normalization, all (other) classifiers show comparable performance
results. This holds for the comparison of online and batch learning algorithms but
most importantly the binary classifiers do not outperform the variants of the investigated unary classifiers in Figure 1.19. A reason for this behavior is, that the xDAWN
algorithm already reduced dimensionality a lot and has the main influence. If it were
left out, the performance would drop especially for the unary classifiers (results here
not reported).
1.4.7 Practice: Normalization and Evaluation
As the experiments show, the choice of normalization is crucial. This is similar to
the considerations in Section 1.2.5. One has to consider whether the approach of separating the data from the origin is reasonable. For example, separating the data
{(0, 1), (1, 0), (0, −1), (−1, 0)} from (0, 0) would not make any sense and is not even
possible with a hard margin separating unary one-class SVM or SVDD. In this case,
it is always good to reflect whether the origin can be considered an outlier. For P300 detection, a zero sample (without Gaussian normalization) corresponds to no relevant
signal in the data and is the perfect opposite class. In fact, if the preprocessing were
perfect, it would map all the other data to zero. For unnormalized MNIST data, a zero
vector can be identified with an empty image, which corresponds to no digit and is
definitely an outlier, or it can be seen as the opposing class.
With increasing dimensionality of the data it is easier to separate the training
dataset from the origin but this might also decrease the capability of the unary classifier to describe the data.
Intuitively, the geometric idea behind SVDD seems more appropriate than
the origin separation approach, but the experiments indicated the opposite.
Taking everything together, the origin separation requires careful consideration
before application. This is probably the reason why it is seldom used in the direct
way. On the other hand, when using the RBF kernel, which is quite common, most
problems disappear. First of all, the data should still be normalized, but the separability from the origin is no longer relevant. Second, in this case the data lies on
an infinite dimensional sphere and in the positive orthant, and consequently the data
is always separable from the origin, which is the center of the sphere. Third, SVDD
and the application of the origin separation to C-SVM result in the same classification, and it does not matter anymore which approach is considered more reasonable.
Last but not least, the νoc-SVM nicely generalizes the Parzen windows estimator, as
shown in [Schölkopf et al., 2001b], which is a reasonable approach to approximate
probabilities. Consequently, using the RBF kernel together with the origin separation approach is a good choice, and according to Theorem 5 it also generalizes the
linear version.
The usage of unary classifiers is difficult to evaluate, and their hyperparameters are
difficult to optimize. In some cases, a visualization might be useful but will not give a
quantification. From our point of view, the best way out is to use another class. Since
the introduced unary classifiers all come with a decision function which determines
whether new incoming data belongs to the given data or not, this approach is reasonable.
The second class can be:
• the large number of irrelevant samples (e.g., unrelated EEG data in P300 detection or other digits or letters in case of the MNIST data),
• a small number of outliers (e.g., data from a (simulated) crashed robot or data
from missed targets in P300 classification), or
• synthetic data by adding noise to the given training samples, which is often
used in the literature but which might not be representative for future incoming
data.
In any case, class imbalance should be considered in the evaluation (see Section 3.3).
Furthermore, the offset should be carefully optimized, or an evaluation should be
used which is not dependent on a decision criterion.
To summarize, we could show that the origin separation approach is an intuitive
way to derive unary classifiers from binary classifiers like the numerous SVM variants in Section 1.1. The respective implementations from the binary classifiers can be
used. On the other hand, unary classification comes with difficulties of offset tuning,
data normalization, and appropriate evaluation. With our presented geometric intuition, it becomes immediately clear that the origin separation approach only works when it is reasonable to have a linear hyperplane (for modeling the data) and to
consider the origin as the opposite class. Knowing that the approach is equivalent to
the possibly more intuitive SVDD concept when using a fitting kernel or normalization technique even strengthens our geometric concept. It is now easy to understand
why different models perform quite similarly and why it is important to also
look at evaluation techniques, decision criterion optimization, feature normalization
in the preprocessing, and, probably most important, the use of kernels.
1.5 Discussion
In this chapter, numerous classifiers were introduced and their new and old relations were summarized for the first time. For experts, most of this knowledge might be
already known or trivial, but for the normal user of these algorithms, the given relations remain mostly unknown, because they are not reported or only scattered in
the literature. But how does this summary of classifier connections help to answer
the question of “which” classifier to use? This will be discussed in the following with
three different perspectives/use cases.
Learning and Teaching Perspective The first requirement to answer this question is to know the classifiers. Hence, summarizing them is a first approach. But still
getting to know them might be difficult. Here, our set of relations can probably help
more than just learning about regularization and loss functions. It is not required
to learn the single models but to understand the concepts on how the models are
derived. This can be directly used in teaching as outlined in the following.
Assuming the concept of a squared loss, kernel, SVR, and the related ridge regression are already known, it is very intuitive and straightforward to look at the
simplification of binary classification by considering only two possible values for regression: {−1, +1}. This directly results in BRMM and RFDA/LS-SVM. Since RFDA
is the special case of BRMM with R = 1, a good next step is to look at R = ∞ and get
C-SVM. This can be supported by respective visualizations and formal descriptions
of the algorithms. So with the help of the relative margin concept (Section 1.3) a first
set of classifiers can be derived without much effort. With C-SVM and relative margin, one should give a short introduction to the geometric background of maximum
margin separation.
The next step in teaching would be to answer the question of how to implement
the algorithms as done in Section 1.2. This can be connected to practical questions as
in robotics, where limited memory and processing power have to be considered. Here,
one answer can be the online algorithms, derived by the single iteration approach.
Finally, unary classification can be seen as a tool to handle multi-class classification, large class imbalance, or simply just to describe one class. The origin separation
approach from Section 1.4 can then be used to derive the unary classifiers again
geometrically. As a “better” justification, the relation of the νoc-SVM to the probability modeling Parzen windows estimator and the maybe geometrically more intuitive
SVDD can be used.
This teaching approach can be supported by several visualizations of the classifiers as already given in the previous sections but also with a more general overview
graphic as provided by Figure 1.20 to highlight how the different approaches are
connected.
Application Perspective
Another interesting point of view is the application. Assume a (linear) C-SVM turned out to be a very good choice due to its generalization
capabilities, even on a small number of samples, in some preliminary data analysis.
If the data shall now be processed on a robot or an embedded device with limited
resources, one could directly transfer the linear decision function. If later on a verification of new data becomes possible and drifts in the data are expected, the single
iteration approach provides a direct way to adapt the classifier with low effort of
implementation, low processing power, and no additional memory usage.
If more data is acquired, it makes sense to model statistical properties of the
data and drifts. Here, the relative margin concept is a first direct and smooth approach which transfers C-SVM to RFDA. The transfer can be automatically achieved
by using BRMM and tuning its hyperparameter R. If the amount of available data
becomes very large, and hyperparameter optimization shows that R = 1 is a reasonable choice and that the regularization parameter C can be chosen very large, then it
might be a good step to switch to the limit, which is the FDA. An advantage of this
step is that this classifier allows for very fast online updates [Schlögl et al., 2010].
Figure 1.20: Simplified overview of connections of support vector machine
(SVM) variants. The details and several further connections can be found distributed in Chapter 1. The red color highlights the new connections, provided by
the new generalized model. For every classifier, it is possible to use a squared or a non-squared loss. Except for the online classifiers (green box), a kernel function can be
always used. Last but not least, the three introduced approaches can be combined as
depicted in Figure 1.1.
On the other hand, one might realize that only one class is relevant in the data
and so use the zero separation approach to only work with one classifier, as was
suggested for the P300 detection in Section 1.4.6. Alternatively, the application
might request the capability to work with many classes and might even require to
be extensible for new classes. This is for example the case when at first the only goal
is to predict movements, where data with no movement planning can be taken as
(“artificial”) second class, but later this goal is changed to also distinguish different
movement types. Another example might be soil detection for a robot by images and
verification via sensors. During runtime, an arbitrary number of new underground
types could occur. A set of already defined classifiers says that a new image does not
seem to belong to the already observed soil types, and this is verified by other sensors.
Hence, a new unary classifier could be generated to determine this soil type for future
occurrences.
Such automatic behavior will be required for long-term autonomy and for constructing intelligent robots. For completeness, it should be mentioned that such a
problem could also be tackled with unsupervised algorithms (clustering), or even better with a mixture of unsupervised and supervised algorithms. Last but not least, it is
important to mention that all these considerations from the application point of view
most often do not occur separately but this is not a problem, because the introduced
approaches can be easily combined.
Implementation and Optimization Perspective

Luckily, with the implementation of the BRMM with special offset treatment, all the mentioned approaches and variants can be handled within just one implementation. For the single iteration approach, the number of maximum iterations could be set to the total number of training samples, and in the online learning case, the update formulas can be directly reused. To obtain the border cases of the relative margin approach, R can be set to the respective values. And for integrating the origin separation approach, it is only required to keep the offset fixed at −1 after an update step.
A similar view can be taken when optimizing the classifier using the generalized BRMM model with its variants. The number of iterations can be taken as a hyperparameter which is tuned, and if it gets close to the number of samples, the online learning version should be used instead. When switching between L1 and L2 loss, the old support vectors are a good first guess for the new classifier and can be reused. Especially if the linear kernel is used, sparsity in the number of support vectors is less relevant, and with an increasing number of samples it might make sense to switch from L1 to L2 loss.
If the range parameter R is optimized, it is good to start with high values, especially when only a few samples are available. When optimized, it might turn out that the maximum range Rmax should be used to avoid an outer margin, or that R should be taken very small to model drifts in the data, and so the respective C-SVM or RFDA variants should be used. In case of few data, it is probably impossible to determine a good R; for the beginning, the maximum value is a good choice, which could be adapted later on.
The one-class approach cannot be directly part of the optimization because it is more a conceptual question of how to model the data. Nevertheless, when starting with unary classification, samples of the opposite class (e.g., outliers) might occur, which one may want to integrate into the classifier. This is in fact possible in the model. Furthermore, there is even the possibility to remove the zero separation if enough data of real outliers is available, but still use the old model and adapt it. From our practical experience, the models often perform very similarly. This is reasonable due to their strong connections. Consequently, the less complex version is the best choice for the application.
Summary

This chapter showed that all the SVM variants are strongly connected and that it is possible to move between them. This collection of connections draws a more general picture of the different models and can also be seen as a very general classifier model. It can be used for different views on data classification from the teaching, learning, application, implementation, and optimization perspectives. Hence, these views should (hopefully) simplify the choice of the classifier and increase the understanding by not looking at a single variant but considering the complete graph/net of classifiers.
In the future, the benefits of the new algorithms need to be investigated in detail in further applications. It would also be good to have a smooth transition between solution algorithms of the BRMM with R equal or close to 1 and with larger values. (Maybe there is a solution for the SVR which can be transferred, or vice versa.) Last but not least, algorithms for improved (online) tuning of the hyperparameters and the decision boundary need to be developed and analyzed.
Related Publications
Krell, M. M., Feess, D., and Straube, S. (2014a). Balanced Relative Margin
Machine – The missing piece between FDA and SVM classification. Pattern
Recognition Letters, 41:43–52, doi:10.1016/j.patrec.2013.09.018.
Krell, M. M. and Wöhrle, H. (2014). New one-class classifiers based on the origin separation approach. Pattern Recognition Letters, 53:93–99, doi:10.1016/j.patrec.2014.11.008.
Krell, M. M., Straube, S., Wöhrle, H., and Kirchner, F. (2014c). Generalizing, Optimizing, and Decoding Support Vector Machine Classification. In ECML/PKDD-2014 PhD Session Proceedings, September 15-19, Nancy, France.
Wöhrle, H., Krell, M. M., Straube, S., Kim, S. K., Kirchner, E. A., and Kirchner, F. (2015). An Adaptive Spatial Filter for User-Independent Single Trial
Detection of Event-Related Potentials. IEEE Transactions on Biomedical Engineering, doi:10.1109/TBME.2015.2402252.
Wöhrle, H., Teiwes, J., Krell, M. M., Kirchner, E. A., and Kirchner, F. (2013b). A
Dataflow-based Mobile Brain Reading System on Chip with Supervised Online
Calibration - For Usage without Acquisition of Training Data. In Proceedings
of the International Congress on Neurotechnology, Electronics and Informatics,
pages 46–53, Vilamoura, Portugal. SciTePress.
Fabisch, A., Metzen, J. H., Krell, M. M., and Kirchner, F. (2015). Accounting
for Task-Hardness in Active Multi-Task Robot Control Learning. Künstliche
Intelligenz.
Chapter 2
Decoding: Backtransformation
This chapter is based on:
Krell, M. M. and Straube, S. (2015). Backtransformation: A new representation of data processing chains with a scalar decision function. Advances in Data Analysis and Classification. Submitted.

Dr. Sirko Straube contributed by reviewing my text and through several discussions, which also led to my discovery of the backtransformation. I wrote the paper.
Contents
2.1 Backtransformation using the Derivative . . . 96
2.2 Affine Backtransformation . . . 97
2.2.1 Affine Backtransformation Modeling Example . . . 98
2.3 Generic Implementation of the Backtransformation . . . 103
2.4 Applications of the Backtransformation . . . 105
2.4.1 Visualization in General . . . 105
2.4.2 Handwritten Digit Classification: Affine Processing Chain . . . 106
2.4.3 Handwritten Digit Classification: Nonlinear Classifier . . . 107
2.4.4 Handwritten Digit Classification: Classifier Comparison . . . 110
2.4.5 Movement Prediction from EEG Data . . . 112
2.4.6 Reinitialization of Linear Classifier with Affine Preprocessing . . . 114
2.5 Discussion . . . 117
This chapter presents our backtransformation approach to decode complex data processing chains.

The basis of machine learning is understanding the data [Chen et al., 2008] and generating meaningful features [Domingos, 2012, "Feature Engineering Is The Key", p. 84]. Looking at the pure values of data and at the implementation and parameters of algorithms usually provides no insights. Consequently, for numerous data types and processing algorithms, visualization approaches have been developed [Rieger et al., 2004, Rivet et al., 2009, Le et al., 2012, Haufe et al., 2014, Szegedy et al., 2014] as a better abstraction to enhance the understanding of the behavior of the applied algorithms and of the data. Here, the visualization of an algorithm is often realized in a similar way as for the input data. Sometimes knowledge about the algorithm or the data is used to provide a visualization which is easier to interpret or which provides further insights. For example, for frequency filters, the frequency response is a much more helpful representation than the pure weights of the filter. Furthermore, internal parts of an algorithm can give additional helpful information, too, like the support vectors of an SVM, the signal template matrix A of the xDAWN¹, the covariance matrices used by the FDA, or the characteristics of a single neuron in an artificial neural network [Szegedy et al., 2014].
Coming up with a representation/visualization gets much more complicated when algorithms are combined into a more sophisticated preprocessing before applying a final decision algorithm [Verhoeye and de Wulf, 1999, Rivet et al., 2009, Krell et al., 2013b, Kirchner et al., 2013, Feess et al., 2013], i.e., for processing chains. Under these circumstances, understanding and visualization of single algorithms only explains single steps in the processing chain, which are typically not independent of each other, as outlined in the following examples.
• If the data of intermediate processing steps is visualized, the ordering of two
filters will change the visualization but might have no effect on the result.
• If a dimensionality reduction algorithm like the PCA is used, visualizations
will differ when different numbers of dimensions are retained. But reducing
the feature dimension to 75% or 25% will make no difference on the whole decision process, if the classifier uses only 10% of the highest ranked principal
components (e.g., a C-SVM with 1-norm regularization).
• Two completely different dimensionality reduction algorithms are used, and exactly the same or completely different classifiers are added to the processing chain, but the effect on the data might be the same.

¹ The xDAWN is a dimensionality reduction algorithm and spatial filter for time series data, where the goal is to enhance an underlying signal which occurs time-locked [Rivet et al., 2009]. Common dimensionality reduction algorithms linearly combine features to create a new set of reduced and more meaningful features. A spatial filter, like the xDAWN, combines the data of sensors/channels into new pseudo-channels but applies the same processing at every time point. A typical temporal filter is a decimator, which combines low-pass filtering and downsampling.
• In the worst case, a dimensionality reduction algorithm is applied, but leaving
it out does not change the overall picture of the algorithm, because the classifier
or the data does not require this reduction.
Hence, one is often interested in knowledge about the whole data transformation in
the processing chain but a general approach for solving this problem is missing. This
situation gets even worse the more complex the data and the associated processing
chains become. If dimensionality reduction algorithms are used for example to reduce
the complexity of the data and to get rid of the noise, the structure of the output data
is usually very different from the original input after the reduction step. In such
a case, it is very difficult to understand the connection between decision algorithm,
preprocessing, and original data even if single parts can be visualized. Consequently,
a concept for representing the complete processing chain in the domain and format
of the original input data is required.
State of the Art

Several approaches are described in the literature to visualize the outcome and transformation of classification algorithms, but again, they take the perspective of a single processing step, neglecting the processing history (i.e., the preceding algorithms).

A very simple approach for data in a two-dimensional space is given in the scikit-learn documentation² [Pedregosa et al., 2011]. We adapted the provided script (see Figure C.5) to a visualization of SVM variants in Figure 2.1.
If the classifier provides a probability fit as classification function, the approach from [Baehrens et al., 2010] can be applied. Its main principle is to determine the derivative of the probability fit to give information about the classifier depending on a chosen sample. The result is the local importance of the data components concerning the sample of interest. Unfortunately, this calculation of the derivative is quite complex, difficult to automate, computationally expensive, does not consider any processing before the classification, and is restricted to a small subset of classifiers. Nevertheless, [Baehrens et al., 2010] gives a very good overview of existing methods and shows the benefits of the suggested approach but also its limitations, which will also hold for our (general) approach.
The visualization of the FDA is discussed in [Blankertz et al., 2011] in the context of an EEG based BCI application with different views on the temporal, spatial,
and spatio-temporal domain. Here, the classifier is applied on spatial features and
visualized as a spatial filter together with an interpretation in relation to the original
data and other spatial filters. For other visualizations, the classifier weights are not
² http://scikit-learn.org/stable/auto_examples/plot_classifier_comparison.html
Figure 2.1: Visualization of different classifiers trained on different datasets.
Displayed are (from left to right) the training data with positive (red) and negative
(blue) samples, the C-SVM with linear, RBF, and polynomial kernel, the PAA with
hinge loss (PA1), and the FDA. The contour plots show the values of the classification
function, where dark red is used for high positive values which correspond to the
positive class and dark blue means very low negative values for the opposite class.
From top to bottom three different datasets are used.
directly used. Furthermore, no complex signal processing chain is used, even though spatial filters are very common in the preprocessing of this type of data. The FDA was applied to the raw data and largely improved with a shrinkage criterion. As a side remark, they mention the possibility to visualize the FDA weights directly when applied to spatio-temporal features [Blankertz et al., 2011, paragraph before section 6, p. 18].
This direct visualization of the weights of a linear C-SVM has already been suggested in [LaConte et al., 2005].³ This approach is intuitive, easy to calculate, and enables a combination with the preprocessing. Furthermore, it can be generalized to other data and other classifiers [Blankertz et al., 2011].
Contribution

Our concept, denoted as backtransformation, incorporates the aforementioned approaches, but with the fundamental difference that it takes all preprocessing steps in the respective chain into account. With this approach, we are able to extract the complete transformation of the data from the chain, so that, e.g., changes in the order of algorithms or the effect of insertions/deletions of single algorithms become immediately visible. Backtransformation also considers processing chains where the original (e.g., spatio-temporal) structure of the data is hidden. The data processing chain is identified with a (composed) function mapping the input data to a scalar. At its core, backtransformation is only the derivative of this function, calculated with the chain rule or numerically. The derivative is either calculated locally for each sample of interest (general backtransformation) or globally when the processing chain consists of affine transformations only (affine backtransformation). While the general backtransformation gives information on which components in the data have a large (local) influence on the decision process and which components are rather unimportant, the affine backtransformation is independent of the single sample.⁴

³ Further methods are presented but they are tailored to fMRI data.
Numerous established data processing algorithms are affine transformations, and it is often possible to combine them to process the data. In Section 2.2, a closer look is taken at this type of algorithm, and it is shown that it is possible to retrieve the information on how the data is transformed by the complete decision process, even if a dimensionality reduction algorithm or a temporal filter hides information. The affine backtransformation iteratively goes back from the decision algorithm through all processing steps to determine a parameterization of the composed processing function and to enable a semantic interpretation. This results in a helpful representation of the processing chain, where each component in the source domain of the data is assigned a weight showing its impact in the decision process. In fact, summing up the products of weights and respective data parts is equivalent to applying the single algorithms on the data step by step.
General Setting

The requirement for applying the proposed backtransformation as outlined in the following is that the data processing is a concatenation of differentiable transformations (e.g., affine mappings) and that the last algorithm in the chain is a (decision) function which maps the data to a single scalar. The mapping to the label (F(x)) is not relevant here.

For each processing stage, the key steps of the backtransformation are to first choose a mathematical representation of input and output data and then to determine a parameterization of the algorithm, which has to be mapped to fit the chosen data representations. Finally, the derivatives of the resulting transformations have to be calculated and iteratively combined. At its core, it is the application of the chain rule for derivatives (see Section 2.1). For the case of using only affine mappings, it is just the multiplication of the transformation matrices, as shown in Section 2.2. Details on the implementation are given in Section 2.3. For an example of a processing chain of windowed time series data with a two-dimensional representation of the data, see Figure 2.2 and Section 2.2.1.

⁴ The respective derivatives are constant for every sample and as such do not depend on it.
The backward modeling begins with the parametrization of the final decision function and continues by iteratively combining it backwards with the preceding algorithms in the processing chain. With each iteration, weights are calculated which correspond to the components of the input data of the last observed algorithm. For the abstract formulation of the backtransformation approach, data with a one-dimensional representation before and after each processing step is used. The output of each processing step is fed into the next processing algorithm.
2.1 Backtransformation using the Derivative
This section shortly introduces the general backtransformation. Let the input data be denoted with $x^{(0)} = x_{in} \in \mathbb{R}^{n_0}$ and let the series of processing algorithms be represented by differentiable mappings

$$F_0 : \mathbb{R}^{n_0} \to \mathbb{R}^{n_1},\ \ldots,\ F_k : \mathbb{R}^{n_k} \to \mathbb{R} \qquad (2.1)$$

which are applied to the data consecutively.⁵ Then, the application of the processing chain can be summarized to:

$$x_{out} = x^{(k+1)} = F(x^{(0)}) = (F_k \circ \ldots \circ F_0)(x^{(0)}) \,. \qquad (2.2)$$

With this notation, the derivative can be calculated with the chain rule:

$$\frac{\partial F}{\partial y}(x^{(0)}) = \frac{\partial F_k}{\partial y^{(k)}}(x^{(k)}) \cdot \frac{\partial F_{k-1}}{\partial y^{(k-1)}}(x^{(k-1)}) \cdot \ldots \cdot \frac{\partial F_1}{\partial y^{(1)}}(x^{(1)}) \cdot \frac{\partial F_0}{\partial y^{(0)}}(x^{(0)}) \,, \qquad (2.3)$$
where $x^{(l)} \in \mathbb{R}^{n_l}$ is the respective input of the $l$-th algorithm in the processing chain with the mapping $F_l$, and $x^{(l+1)}$ is the output. The terms $\frac{\partial F_l}{\partial y^{(l)}}$ and $\frac{\partial F}{\partial y}$ represent the total differentials of the differentiable mappings and not the partial derivatives. Equation (2.3) is a matrix product. It can be calculated iteratively using the backtransformation matrices $B_l$ and the derivatives $\frac{\partial F_{l-1}}{\partial y^{(l-1)}}(x^{(l-1)})$:

$$B_k = \frac{\partial F_k}{\partial y^{(k)}}(x^{(k)}) \quad \text{and} \quad B_{l-1} = \frac{\partial F_{l-1}}{\partial y^{(l-1)}}(x^{(l-1)}) \cdot B_l \quad \text{with } l = 1, \ldots, k \,. \qquad (2.4)$$
Now each matrix $B_l \in \mathbb{R}^{n_l \times 1}$ has the same dimensions as the respective $x^{(l)}$ and tells which change in the components of $x^{(l)}$ will increase (positive entry in $B_l$), decrease (negative entry), or have no effect (zero entry) on the decision function. The higher the absolute value of an entry (multiplied with the estimated variance of the respective input), the larger is the influence of the respective data component on the decision function. Consequently, not only the backtransformation of the complete processing chain ($B_0$) but also the intermediate results ($B_l$; $l > 0$) might be used for analyzing the processing chain. $B_k$ is the matrix used in the existing approaches, which do not consider the preprocessing [LaConte et al., 2005, Baehrens et al., 2010, Blankertz et al., 2011]. Note that the $B_l$ depend on the input of the processing chain and are expected to change with changing input. So the information about the influence of certain parts of the data is only local information. A global representation is only possible when using affine transformations instead of arbitrary differentiable mappings $F_l$.

⁵ The notation of data and its components differs in this chapter in relation to Chapter 1, because instead of looking at training data, we look at one data sample $x^{(0)}$ with its different processing stages $x^{(l)}$ and the respective changes in each component of the data ($x^{(l)}_{gh}$).
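The iteration of Eq. (2.4) can be sketched numerically. The following toy chain — a hypothetical linear map, an elementwise tanh, and a linear decision function, none of which are taken from the thesis implementation — accumulates the stage Jacobians backwards (written here with the row-vector Jacobian convention) and checks the result against a finite-difference gradient of the composed chain:

```python
import numpy as np

# Hypothetical three-step chain: linear map, elementwise tanh, linear decision.
A0 = np.array([[1.0, -0.5, 0.2],
               [0.3,  0.8, -1.0]])        # F_0 : R^3 -> R^2
w = np.array([0.7, -1.2])                 # F_2 : R^2 -> R (decision function)

F = [lambda x: A0 @ x, np.tanh, lambda x: w @ x]
# Jacobian of each F_l, evaluated at that step's input x^(l) (row convention).
J = [lambda x: A0,
     lambda x: np.diag(1.0 - np.tanh(x) ** 2),
     lambda x: w.reshape(1, -1)]

x0 = np.array([0.4, -1.0, 2.0])
xs = [x0]
for f in F:                               # forward pass, storing x^(0)..x^(k)
    xs.append(f(xs[-1]))

# Iteration of Eq. (2.4): start with B_k and multiply in the preceding Jacobians.
B = J[-1](xs[-2])
for l in range(len(F) - 2, -1, -1):
    B = B @ J[l](xs[l])                   # B_l accumulated as a row vector

# B_0 is the local gradient of the whole chain; verify it numerically.
def chain(x):
    for f in F:
        x = f(x)
    return x

eps = 1e-6
num = np.array([(chain(x0 + eps * e) - chain(x0 - eps * e)) / (2 * eps)
                for e in np.eye(3)])
assert np.allclose(B.ravel(), num, atol=1e-6)
```

The same backward accumulation works for any chain of differentiable steps, as long as each step can report its Jacobian at the stored intermediate input.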
2.2 Affine Backtransformation
For handling affine transformations like translations, the data vectors are augmented by adding a coordinate with value 1 to obtain homogeneous coordinates. Every affine transformation $F$ can be identified with a tuple $(A, T)$, where $A$ is a linear mapping matrix and $T$ a translation vector, and the corresponding mapping of the processing algorithm applied on data $x_{in}$ reads as

$$x_{out} = F(x_{in}) = A x_{in} + T = (A\ T) \begin{pmatrix} x_{in} \\ 1 \end{pmatrix} \,. \qquad (2.5)$$

So by extending the matrix $(A\ T)$ to a matrix $A'$ with an additional row of zeros with a 1 at the translational component, the mapping on the augmented data $x'_{in} = \begin{pmatrix} x_{in} \\ 1 \end{pmatrix}$ can be written in the simple notation $x'_{out} = A' x'_{in}$. With a processing chain with corresponding matrices $A'_0, \ldots, A'_k$, the transformation of the input data $x'_{in}$ can be summarized to

$$x'_{out} = A'_k \cdot \ldots \cdot A'_1 \cdot A'_0 \cdot x'_{in} \,. \qquad (2.6)$$
With this notation, the backtransformation concept now boils down to iteratively determining the matrices

$$B_k = A'_k,\quad B_{k-1} = A'_k \cdot A'_{k-1},\quad \ldots,\quad \text{and}\quad B_0 = A'_k \cdot A'_{k-1} \cdot \ldots \cdot A'_1 \cdot A'_0 \,. \qquad (2.7)$$

This corresponds to a composition of affine mappings.⁶ Each $B_l \in \mathbb{R}^{2 \times (n_l+1)}$ defines the mapping of the data from the respective point in the processing chain (after $l$ previous processing steps) to the final decision value. So each product $B_l$ consists of a weighting vector $w^{(l)}$ and an offset $b^{(l)}$ (and the artificial second row with zero entries and a 1 in the last column). The term $w^{(l)}$ can now be used for interpretation and understanding of the respective sub-processing chain, or of the complete chain with $w^{(0)}$ (see Section 2.4). The following section shows which algorithms can (and cannot) be used for the affine backtransformation and how the weights from the backtransformation are determined in detail for a data processing chain applied on two-dimensional data.

⁶ Note that no matrix inversion is required, even though one might expect that because the goal is to find out what the original mapping was doing with the data, which sounds like an inverse approach.
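The collapse of a processing chain into one affine map per Eq. (2.7) can be checked in a few lines. The sketch below uses randomly generated, purely illustrative matrices for a hypothetical chain R⁴ → R³ → R² → R:

```python
import numpy as np

def augment(A, T):
    """Build A' from (A, T): append the translation column and a [0,...,0,1] row."""
    n_out, n_in = A.shape
    Ap = np.zeros((n_out + 1, n_in + 1))
    Ap[:n_out, :n_in] = A
    Ap[:n_out, n_in] = T
    Ap[n_out, n_in] = 1.0
    return Ap

rng = np.random.default_rng(0)
# Hypothetical chain: two affine preprocessing steps, then a linear decision.
steps = [(rng.normal(size=(3, 4)), rng.normal(size=3)),
         (rng.normal(size=(2, 3)), rng.normal(size=2)),
         (rng.normal(size=(1, 2)), rng.normal(size=1))]
Aps = [augment(A, T) for A, T in steps]

# Eq. (2.7): B_0 = A'_k ... A'_0 collapses the chain into one affine map.
B0 = Aps[2] @ Aps[1] @ Aps[0]
w0, b0 = B0[0, :-1], B0[0, -1]     # weights on the raw input and the offset

x = rng.normal(size=4)
out = x.copy()
for A, T in steps:                  # step-by-step processing for comparison
    out = A @ out + T
assert np.allclose(out[0], w0 @ x + b0)
```

Note that, as in the footnote above, no matrix inversion appears anywhere: the backtransformation is a product of the forward matrices.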
2.2.1 Affine Backtransformation Modeling Example
[Figure 2.2: diagram of the data processing chain — a 2d input data array (amplitudes of sensors h at time points g) passes through temporal filtering (subtract mean, subsampling, low/band/high pass FIR filtering), dimensionality reduction (ICA, PCA, spatial filters: CSP, xDAWN, πSF), feature extraction (time-domain amplitudes, polynomial fits), feature normalization (rescaling, standardization), and a decision function (classification: SVM, FDA, LS-SVM; regression: linear regression, SVR) to a scalar output (regression value, classification score), with the respective backtransformation weights annotated on each algorithm input.]

Figure 2.2: Illustrative data processing chain scheme with examples of linear algorithms and the formulas for the backtransformation in short. Spatio-temporal data $x^{(0)}_{gh}$ are processed from top to bottom ($x^{(5)}$). Every component of the scheme is optional. Backtransformation takes the classifier parametrization $w^{(4)}$ and projects it iteratively back ($w^{(k)}$) through the processing chain and results in a representation $w^{(0)}$ corresponding to the input domain. For more details refer to Section 2.2.1.
In this section, a more concrete example of applying the backtransformation principle is given for processing time series epochs of fixed length from several sensors with the same sampling frequency. Examples of affine transformations are given to show that there is a large number of available algorithms to construct a good processing chain. Some cases which are not affine will be highlighted. A possible processing chain is depicted in Figure 2.2. Note that all components of this chain are optional and the presented scheme can be applied to an arbitrary data processing chain of affine maps, even if dimensions like time and space are replaced by others or left out (see Sections 2.2 and 2.4.2).
An intuitive way of handling such data is to represent it as two-dimensional arrays with the time on one axis and space (e.g., sensors) on the other axis, since important preprocessing steps like temporal and spatial filters operate on just one axis. So this type of representation eases the use and the parameterization of these algorithms compared to the mathematically equivalent one-dimensional representation. Furthermore, a two-dimensional representation of the data helps for its visualization and interpretation. For parametrization of the two-dimensional arrays, the common double index notation is used, where the data $x^{(0)}$ is represented by its components $x^{(0)}_{gh}$ with temporal index $g$ and spatial index $h$. This index scheme will be kept for all processing stages, even if the data could be represented as one-dimensional feature vectors for some stages. The same indexing scheme can be applied for the parametrization of the affine data processing algorithms in the chain, as will be shown in the following. As before, the input of the $i$-th algorithm is denoted with $x^{(i-1)}$ and the output with $x^{(i)}$, respectively. To fit the concept of backtransformation, first the parametrization of the decision algorithm will be introduced and then the preceding algorithms step-by-step. An overview of the processing chain, the chosen parameterizations, and the resulting weights from the backtransformation is depicted in Figure 2.2.
2.2.1.1 Linear Decision Function

A linear decision function can be parameterized using a decision vector/matrix $w^{(4)} \in \mathbb{R}^{m_i \times n_j}$ and an offset $b^{(4)} \in \mathbb{R}$. The transformation of the input $x^{(4)} \in \mathbb{R}^{m_i \times n_j}$ to the decision value $x^{(5)} \in \mathbb{R}$ is then defined as

$$x^{(5)} = b^{(4)} + \sum_{i=1}^{m_i} \sum_{j=1}^{n_j} x^{(4)}_{ij} w^{(4)}_{ij} \,, \qquad (2.8)$$
with $m_i$ time points and $n_j$ sensors. Examples of machine learning algorithms with a linear decision function are all the algorithms introduced in Chapter 1 without kernel or with a linear kernel. Using an RBF kernel would result in a smooth but nonlinear decision function. Even worse, working with a decision tree [Comité et al., 1999] as classifier would result in a non-differentiable decision function, such that even the general backtransformation could not be applied.
2.2.1.2 Feature Normalization
With a scaling $s \in \mathbb{R}^{m_i \times n_j}$ and a translation $b \in \mathbb{R}^{m_i \times n_j}$ and the same indexes as for the linear decision function, an affine feature normalization can be written as

$$x^{(4)}_{ij} = x^{(3)}_{ij} s_{ij} + b_{ij} \quad \text{with } i \in \{1, \ldots, m_i\} \text{ and } j \in \{1, \ldots, n_j\} \,. \qquad (2.9)$$

This covers most standard feature normalization algorithms like rescaling or standardization [Aksoy and Haralick, 2001]. Nonlinear scalings, e.g., using absolute values as in $\min\{10, |x^{(3)}_{ij}|\}$, or sample dependent scalings, e.g., division by the Euclidean norm $s_{ij} = \frac{1}{\|x^{(3)}\|_2}$, are not affine mappings and could not be used here. For the affine backtransformation, the formula of the feature normalization needs to be inserted into the formula of the decision function:

$$x^{(5)} = b^{(4)} + \sum_{i,j} \left( x^{(3)}_{ij} s_{ij} + b_{ij} \right) w^{(4)}_{ij} = b^{(3)} + \sum_{i,j} x^{(3)}_{ij} s_{ij} w^{(4)}_{ij} \,. \qquad (2.10)$$

Here, $b^{(3)} = b^{(4)} + \sum_{i,j} b_{ij} w^{(4)}_{ij}$ summarizes the offset. As denoted in Figure 2.2, $s_{ij} w^{(4)}_{ij}$ is the weight to the input data part $x^{(3)}_{ij}$.
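This folding of the normalization into the decision function can be checked numerically; the sketch below uses toy sizes and random values, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
m_i, n_j = 5, 3                        # toy sizes: time points x sensors
x3 = rng.normal(size=(m_i, n_j))       # input of the normalization, x^(3)
s = rng.normal(size=(m_i, n_j))        # scaling
b = rng.normal(size=(m_i, n_j))        # translation
w4, b4 = rng.normal(size=(m_i, n_j)), 0.5   # linear decision function, Eq. (2.8)

# Apply the normalization (Eq. 2.9), then the decision function.
x4 = x3 * s + b
direct = b4 + np.sum(x4 * w4)

# Fold the normalization into the decision function (Eq. 2.10):
w3 = s * w4                    # weights assigned to x^(3)
b3 = b4 + np.sum(b * w4)       # summarized offset b^(3)
assert np.isclose(direct, b3 + np.sum(x3 * w3))
```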
2.2.1.3 Feature Generation

For simplicity, the data amplitudes at the different sensors have been directly taken as features and nothing needs to be changed in this step ($x^{(3)} = x^{(2)}$). Other linear features like polynomial fits would be possible, too [Straube and Feess, 2013]. Nonlinear features (e.g., standard deviation, sum of squares, or sum of absolute values of each sensor) would not work for the affine backtransformation but would for the general one. Symbolic features mapped to natural numbers would even be impossible to analyze with the general backtransformation.
2.2.1.4 Dimensionality Reduction on the Spatial Component

A spatial filter transforms real sensors to new pseudo sensors by linear combination of the signals of the original sensors. To use well known dimensionality reduction algorithms like PCA and independent component analysis (ICA) [Jutten and Herault, 1991, Hyvärinen, 1999, Rivet et al., 2009] for spatial filtering, the space component of the data is taken as the feature component for these algorithms and the time component for the samples. Examples of typical spatial filters are common spatial patterns (CSP) [Blankertz et al., 2008], xDAWN [Rivet et al., 2009, Wöhrle et al., 2015], and πSF [Ghaderi and Straube, 2013].

The backtransformation with the spatial filtering is the most important part of the concept, because spatial filtering hides the spatial information needed for visualization or for getting true spatial information into the classifier.⁷
The number of virtual sensors ranges between the number of real sensors and one. The spatial filter for the $j$-th virtual sensor is a tuple of weights $f_{1j}, \ldots, f_{n_h j}$ defining the linear weighting of the $n_h$ real channels. The transformation for the $i$-th time point is written as

$$x^{(3)}_{ij} = \sum_{h=1}^{n_h} x^{(1)}_{ih} f_{hj} \,, \qquad (2.11)$$

where the time component could be ignored, because the transformation is independent of time. The transformation formula can be substituted into formula (2.10):

$$x^{(5)} = b^{(3)} + \sum_{i,j} \sum_{h=1}^{n_h} x^{(1)}_{ih} f_{hj} s_{ij} w^{(4)}_{ij} \qquad (2.12)$$

$$= b^{(3)} + \sum_{i,h} x^{(1)}_{ih} \cdot \sum_{j} f_{hj} s_{ij} w^{(4)}_{ij} \,. \qquad (2.13)$$

Equation (2.13) shows that the weight $\sum_{j} f_{hj} s_{ij} w^{(4)}_{ij}$ is assigned to the input data component $x^{(1)}_{ih}$. If there is no time component, a spatial filter is just a linear dimensionality reduction algorithm. It is also possible to combine different reduction methods or to do a dimensionality reduction after the feature generation.
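In matrix form, the weight assignment of Eq. (2.13) is a single product. A small sketch with random toy data (assumed shapes: time points × sensors; all values purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
m_i, n_h, n_j = 6, 4, 2               # time points, real sensors, virtual sensors
x1 = rng.normal(size=(m_i, n_h))      # temporally filtered data, x^(1)
f = rng.normal(size=(n_h, n_j))       # spatial filter weights f_{hj}
s = rng.normal(size=(m_i, n_j))       # feature normalization scaling
w4, b3 = rng.normal(size=(m_i, n_j)), 0.1

# Forward: spatial filtering (Eq. 2.11), scaling, decision function.
x3 = x1 @ f
direct = b3 + np.sum((x3 * s) * w4)

# Backtransformation (Eq. 2.13): weight for x^(1)_{ih} is sum_j f_{hj} s_{ij} w^(4)_{ij}.
w1 = (s * w4) @ f.T
assert np.isclose(direct, b3 + np.sum(x1 * w1))
```

`w1` has the same time × sensor shape as `x1`, so it can be visualized in the domain of the spatially unreduced data — exactly the point of pushing the weights back through the spatial filter.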
2.2.1.5 Detrending, Temporal Filtering, and Decimation

There are numerous discrete-time signal processing algorithms [Oppenheim and Schafer, 2009]. Detrending the mean from a time series can be done in several ways. Given a time window, a direct approach would be to subtract the mean of the time window, or to use some time before the relevant time frame to calculate a guess for the mean (baseline correction). Often, such algorithms can be seen as finite impulse response (FIR) filters, which eliminate very low frequencies. Filtering the variance is a quadratic filter [Krell et al., 2013c], and infinite impulse response (IIR) filters have a feedback part. Neither of these filters is applicable for the affine backtransformation, because they have no respective affine transformations. One can either use uniform temporal filtering, which is similar to spatial filtering with changed axes, or introduce different filters for every sensor. As parametrization, $t_{hgi}$ is chosen for the weight at sensor $h$ for the source time point $g$ and the resulting time point $i$, with a number of $m_g$ time points in the source domain:

$$x^{(1)}_{ih} = \sum_{g=1}^{m_g} x^{(0)}_{gh} t_{hgi} \,. \qquad (2.14)$$

⁷ This was also the original motivation to develop this concept.
Starting with the more common filter formulation as a convolution (filter of length $N$):

$$x^{(1)}_{ih} = \sum_{l=0}^{N} a_l \cdot x^{(0)}_{(n-l)h} \overset{g := n-l}{=} \sum_{g=n-N}^{n} a_{(n-g)} \cdot x^{(0)}_{gh} \,, \qquad (2.15)$$

the filter coefficients $a_i$ can be directly mapped to the $t_{hgi}$ and the other coefficients can be set to zero.
Reducing the sampling frequency of the data by downsampling is a combination of
a low-pass filter and systematically leaving out several time points after the filtering
(decimation). When using a FIR filter, the given parameterization of a temporal filter
can be used here, too. For leaving out samples, the matrix t_{gi} for channel h can be obtained from an identity matrix by keeping only the rows where samples are taken from.
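The mapping from FIR coefficients to such a matrix, and the row selection for decimation, can be sketched as follows. This is a minimal NumPy sketch for a single channel; the function names, the zero-padding at the boundary, and the matrix layout are illustrative assumptions, not the pySPACE implementation.

```python
import numpy as np

def fir_as_matrix(a, n_in):
    """Causal FIR filter y[n] = sum_l a[l] * x[n-l] as an (n_in x n_in) matrix T,
    so that y = T @ x (samples with negative index are treated as zero)."""
    T = np.zeros((n_in, n_in))
    for n in range(n_in):
        for l, coeff in enumerate(a):
            g = n - l            # source index g := n - l, as in Eq. (2.15)
            if g >= 0:
                T[n, g] = coeff  # T[n, g] = a[n-g]; all other entries stay zero
    return T

def decimation_matrix(n_in, factor):
    """Keep only every `factor`-th row of an identity matrix (downsampling)."""
    return np.eye(n_in)[::factor]

a = [0.5, 0.5]                   # simple moving-average low-pass filter
T = fir_as_matrix(a, 6)
D = decimation_matrix(6, 2)
x = np.arange(6.0)
y = D @ (T @ x)                  # filter first, then leave out samples
```

Composing `D @ T` gives one matrix for the combined "low-pass filter plus decimation" step, which is exactly the affine form the backtransformation needs.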
The final step is similar to the spatial filtering part:

x^{(5)} = b^{(3)} + \sum_{i,h} \left( \sum_{g=1}^{m_g} x_{gh}^{(0)} t_{hgi} \right) \sum_{j} f_{hj} s_{ij} w_{ij}^{(4)}   (2.16)
        = b^{(3)} + \sum_{g,h} x_{gh}^{(0)} \cdot \sum_{i,j} t_{hgi} f_{hj} s_{ij} w_{ij}^{(4)}   (2.17)
        = b^{(3)} + \sum_{g=1}^{m_g} \sum_{h=1}^{n_h} x_{gh}^{(0)} w_{gh}^{(0)}.   (2.18)
The input component of the original data x_{gh}^{(0)} finally gets assigned the weight w_{gh}^{(0)} = \sum_{i,j} t_{hgi} f_{hj} s_{ij} w_{ij}^{(4)}.
Note that for some applications it is good to work on normal-
ized and filtered data for interpreting data and the behavior of the data processing.
In that case, the backtransformation is stopped before the temporal filtering and the
respective weights are used.
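This weight composition is a single tensor contraction. A minimal NumPy sketch, assuming the temporal filters t_{hgi}, spatial filters f_{hj}, normalization weights s_{ij}, and classifier weights w_{ij}^{(4)} are available as arrays; the shapes and random values are purely illustrative:

```python
import numpy as np

# Hypothetical sizes: m_g source time points, n_h sensors,
# n_i resulting time points, n_j virtual (spatially filtered) channels.
m_g, n_h, n_i, n_j = 5, 4, 3, 2
rng = np.random.default_rng(0)
t = rng.standard_normal((n_h, m_g, n_i))   # t[h, g, i]: temporal filter per sensor
f = rng.standard_normal((n_h, n_j))        # f[h, j]: spatial filter
s = rng.standard_normal((n_i, n_j))        # s[i, j]: normalization weights
w4 = rng.standard_normal((n_i, n_j))       # w[i, j]^(4): classifier weights

# w[g, h]^(0) = sum over i, j of t[h, g, i] * f[h, j] * s[i, j] * w4[i, j]
w0 = np.einsum('hgi,hj,ij,ij->gh', t, f, s, w4)
```

The `einsum` subscript string spells out exactly which indices are summed, so the code mirrors the index notation of the weight formula one-to-one.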
2.2.1.6 Others
The aforementioned algorithms can be combined and repeated (e.g., concatenations
of FIR filters or PCA and xDAWN). With a different feature generator, multiple filters, decimation, or a skipped filter or normalization, the same calculation scheme
could be used, resulting in different b^{(3)} and w^{(0)}. Nevertheless, w^{(0)} has the same indexes as the original data x^{(0)}. After the final mapping to a scalar by the decision function, a shift of the decision criterion (e.g., using threshold adaptation as
suggested in [Metzen and Kirchner, 2011]) is possible but has no impact on the backtransformation because it only requires w(0) and not the offset. If a probability fit
[Platt, 1999b, Lin et al., 2007, Baehrens et al., 2010] was used, this step has to be
either ignored or the general approach (Section 2.1) has to be applied. Since the
probability fit is mostly a sigmoid function which maps R → [0, 1], it is also possible to visualize its derivative separately. For the interpretation concerning a sample,
the function value is determined and the respective (positive) derivative is multiplied
with the affine transformation part to get the local importance. Hence, the relations
between the weights remain the same; only the absolute values change. This
approach of mixing the calculations is much easier to implement.
If nonlinear preprocessing is used to normalize the data (e.g., to have variance of
one), the normalized data can be used as input for the backtransformation and the
respective processing chain. This might even be advantageous for the interpretation
when the visualization of the original data is not helpful due to artifacts and outliers.
An example for such a case is to work with normalized image data like the MNIST
dataset (see Section 1.3.4.4) instead of the original data, where the size of the images
and the position of the digits varied a lot (see also Section 2.4.2 and Section 2.4.3).
2.3 Generic Implementation of the Backtransformation
This section gives information on how to apply the backtransformation concept in
practice, especially when the aforementioned calculations are difficult or impossible
to perform and a “generic” implementation is required to handle arbitrary processing
chains.
The backtransformation has been implemented in pySPACE (see also Section 3)
and can be directly used. This modular Python software gives simple access to more
than 200 classification and preprocessing algorithms and so it provides a reasonable
interface for a generic implementation. It provides data visualization tools for the
different processing stages and largely supports the handling of complex processing
chains.
In practice, accessing the single parameterizations for the transformation matrices A_i for the affine backtransformation might be impossible (e.g., because external
libraries are used without access to the internal algorithm parameters) or too difficult
(e.g., code of numerous algorithms needs to be written to extract these parameters).
In this case, the backtransformation approach cannot be applied directly in the way
it is described in Section 2.2. Instead, the respective products and weights for the
affine backtransformation can be reconstructed with the following trick which only
requires the algorithms to be affine. No access to any parameters is needed. First,
the offset of the transformation product is obtained by processing a zero data sample
with the complete processing chain. The processing function is denoted by F . The
resulting scalar output is the offset

b^{(0)} = F(0).   (2.19)
Second, a basis {e_1, . . . , e_n} of the original space (e.g., the canonical basis) needs to be chosen. In the last step, the weights w_i^{(0)}, which directly correspond to the base elements, are determined by also processing the respective base element e_i with the processing chain and subtracting the offset b^{(0)} from the scalar output:

w_i^{(0)} = F(e_i) − F(0).   (2.20)
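This probing trick can be sketched in a few lines; F stands for an arbitrary black-box affine chain, and the example chain in the usage part is hypothetical:

```python
import numpy as np

def affine_backtransformation(F, n):
    """Reconstruct offset and weights of a black-box affine chain F: R^n -> R
    by probing it with the zero sample and the canonical basis vectors."""
    offset = F(np.zeros(n))                    # b^(0) = F(0)
    weights = np.array([F(e) - offset          # w_i^(0) = F(e_i) - F(0)
                        for e in np.eye(n)])
    return offset, weights

# Usage with a hypothetical affine chain F(x) = <a, x> + c:
a, c = np.array([2.0, -1.0, 0.5]), 3.0
F = lambda x: float(a @ x + c)
b0, w0 = affine_backtransformation(F, 3)
# b0 recovers c and w0 recovers a, so F(x) = b0 + w0 @ x for every x.
```

The approach needs n + 1 evaluations of the chain and no access to any internal parameters, which is exactly what makes it suitable for external libraries.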
The calculation of the derivative for the general backtransformation approach is
more complicated. Deriving and implementing the derivative function for each algorithm used in a processing chain and combining the derivatives can be very difficult,
especially if the goal is to implement it for a large number of relevant algorithms,
e.g., as provided in pySPACE. Even the possible derivatives of classification functions can be very diverse [Baehrens et al., 2010]. A generic approach would be to
use automatic differentiation tools [Griewank and Walther, 2008]. These tools generate a program which calculates the derivative directly from the program code. They
can also consider the concatenation of algorithms by applying the chain rule. For
most standard implementations, open source automatic differentiation tools could
be applied. For existing frameworks, it is required to modify each algorithm implementation such that the existing differentiation tools know the derivatives of all elemental functions used in the code, which might be a lot of work. Furthermore, this approach would be impossible if black box algorithms were used. So for simplicity, a different approach, which is similar to the previous one for the affine case, can
be chosen. This is the numerical calculation of the derivative of the complete decision
function via differential quotients for directional derivatives:
\frac{\partial F}{\partial e_i}(x_0) \approx \frac{F(x_0 + h e_i) - F(x_0)}{h}.   (2.21)
Here, e_i is the i-th unit vector, and h is the step size. It is difficult to choose the optimal h for the best approximation, but for the backtransformation a rough approximation should be sufficient. A good first guess is to choose h = 1.5 · 10^{-8} ⟨x_0, e_i⟩ if ⟨x_0, e_i⟩ ≠ 0 and in the other case h = 1.5 · 10^{-8} [Press, 2007]. In the backtransformation implementation in pySPACE, the value of 1.5 · 10^{-8} can be exchanged easily by
the user. It is additionally possible to use more accurate formulas for the differential
quotient at the cost of additional function evaluations like
\frac{\partial F}{\partial e_i}(x_0) \approx \frac{F(x_0 - h e_i) - 8F(x_0 - \frac{h}{2} e_i) + 8F(x_0 + \frac{h}{2} e_i) - F(x_0 + h e_i)}{6h}.   (2.22)
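Both differential quotients can be sketched as follows; this minimal sketch uses a fixed h instead of the data-dependent choice discussed above, and the decision function in the usage line is a hypothetical stand-in:

```python
import numpy as np

def directional_derivative(F, x0, i, h=1.5e-8, higher_order=False):
    """Approximate dF/de_i at x0 for a black-box decision function F
    via differential quotients."""
    e = np.zeros_like(x0)
    e[i] = 1.0
    if not higher_order:                       # forward difference
        return (F(x0 + h * e) - F(x0)) / h
    # more accurate central formula at the cost of extra evaluations
    return (F(x0 - h * e) - 8 * F(x0 - h / 2 * e)
            + 8 * F(x0 + h / 2 * e) - F(x0 + h * e)) / (6 * h)

grad_component = directional_derivative(
    lambda x: float(x @ x), np.array([1.0, 2.0]), i=1, h=1e-5)
# d(x1^2 + x2^2)/dx2 at (1, 2) is 4; the quotient gives approximately 4.
```

For an affine chain both formulas are exact and reproduce the probing trick of Eq. (2.20); for nonlinear chains they give the sample-dependent local weights discussed above.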
2.4 Applications of the Backtransformation
Having a transformation of the decision algorithm back through different data representation spaces to the original data space might help for the understanding and
interpretation of processing chains in several applications (e.g., image detection, classification of neuroscientific data, robot sensor regression) as explained in the following. First, some general remarks will be given on visualization techniques. Afterwards, the affine and the general backtransformation will be applied to handwritten
digit classification (Section 2.4.2, Section 2.4.3, and Section 2.4.4) because it is a relatively simple problem which can be understood without expert knowledge. A more
complex example on EEG data classification is given in Section 2.4.5. Finally, an
outlook on the possibility of more sophisticated usage is given with processing chain
manipulation. The affine backtransformation can be additionally used for ranking
and regularization of sensors (see Section 3.4.3).
2.4.1 Visualization in General
As suggested in [LaConte et al., 2005] for fMRI data, the backtransformation weights
could be visualized in the same way as the respective input data is visualized. This
works only if there is a possibility to visualize the data and if this visualization displays the “strength” of the values of the input data. Otherwise, additional effort has
to be put into the visualization, or the weights have to be analyzed as raw numbers.
For interpreting the weights, it is usually required to also have the original data visualized for comparison (as averaged data or single samples) because higher weights in
the backtransformation could be rendered meaningless if the corresponding absolute
data values are low or even zero. In addition to the backtransformation visualization of one data processing chain, different chains (with different hyperparameters,
training data, or algorithms) can be compared (see Section 2.4.4). Differences in the
weights directly correspond to the differences in the processing. Normally, weights
with high absolute values correspond to important components for the processing
and weights close to zero are less important and might even be omitted. This very
general interpretation scheme does not work for all applications. In some cases, the
weights have to be set in relation to the values of the respective data components: If
data values are close to zero, high weights might still be irrelevant, and vice versa.
To avoid such problems, it is better to take normalized data, which is very often also
a good choice for pure data visualization. Another variant to partially compensate for
this issue is to also look at the products of weights and the respective data values.
According to [Haufe et al., 2014], the backtransformation model is a backward
model of the original data and as such mixes the reduction of noise with the emphasis of the relevant data pattern. To derive the respective forward model, they suggest multiplying the respective weighting vector with the covariance matrix of the data.
From a different perspective, this approach sounds reasonable, too: If backtransformation reveals that a feature gets a very high weight by the processing chain, but
this feature is zero for all except one outlier sample, a modified backtransformation would reveal this effect. Furthermore, if a feature is highly correlated with other features, a sparse classifier might just use this one feature and skip the other features, which might lead to the wrong assumption that the other features are useless
even though they provide the same information. On the other hand, if features are
highly correlated, as is the case for EEG data, this approach might also be disadvantageous. The processing chain might give a very high weight to the feature where the
best distinction is possible, but the covariance transformation will blur this important information over all sensors and time points. Using such a blurred version for
feature selection would be a bad choice. Another current drawback of the method
from [Haufe et al., 2014] is that it puts some assumptions on the data which often do
not hold: the expected values of noise, data, and signal of interest are assumed to be zero "w.l.o.g." (without loss of generality). Hence, more realistic assumptions are
necessary for better applicability.
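The covariance correction itself is a single matrix–vector product. A sketch with hypothetical toy data (X stores samples in rows; the construction of two correlated features is purely illustrative):

```python
import numpy as np

def forward_model(X, w):
    """Turn backward-model weights w into a forward-model pattern by
    multiplying with the data covariance matrix [Haufe et al., 2014]."""
    return np.cov(X, rowvar=False) @ w

# Hypothetical data: feature 1 is a noisy copy of feature 0.
rng = np.random.default_rng(1)
base = rng.standard_normal(500)
X = np.column_stack([base, base + 0.01 * rng.standard_normal(500)])
w = np.array([1.0, 0.0])        # a sparse classifier using only feature 0
pattern = forward_model(X, w)
# The pattern assigns similar importance to both correlated features,
# illustrating the spreading effect discussed in the text.
```

This also makes the criticized behavior concrete: the sparse weight vector singles out one feature, while the covariance-corrected pattern blurs the importance over everything correlated with it.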
Note that in Figure 2.2, Section 2.2, and Section 2.2.1 it has been shown that
every iteration step in the backtransformation results in weightings w^{(i)} which correspond to the data x^{(i)}. This data is obtained by applying the first i algorithms of the
processing chain on the original input data x(0) . So depending on the application, it
is even possible to visualize data and weights of intermediate processing steps. This
can be used to further improve the overall picture of what happens in the processing
chain.
2.4.2 Handwritten Digit Classification: Affine Processing Chain
For a simple application example of the affine backtransformation approach, the
MNIST dataset is used (see Section 1.3.4.4). These normalized greyscale images
have an inherent structure due to the 28 × 28 pixels used, but they are stored as one-dimensional feature vectors (784 features). For processing, we first applied a PCA on
the feature vectors and reduced the dimension of the data to 4 (or 64). As a second
step, the resulting features were normalized to have zero mean and standard deviation of one on the training data. Finally, a linear C-SVM (LIBSVM) with a fixed
regularization parameter (value: 1) is trained on the normalized PCA features. Without backtransformation, the filter weights for the 4 (or 64) principal components could
be visualized in the domain of the original data and the single (4 or 64) weights assigned by C-SVM could be given, but the interplay between C-SVM and PCA would
remain unknown, especially if all 784 principal components were used. This information can only be given with backtransformation and is displayed in Figure 2.3
for the distinction of digit pairs (from 0, 1, and 2). The generic implementation of
the affine backtransformation was used, since only affine algorithms were used in
the processing chain (PCA, feature standardization, linear classifier). The forward
model to the backtransformation, obtained by multiplication with the covariance matrix, is also visualized in Figure 2.3. Note that the original data is not normalized
(zero mean), although this was an assumption on the data for the covariance transformation approach from [Haufe et al., 2014]. Nevertheless, the resulting graphics
look reasonable.
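The processing chain of this example (PCA, feature standardization, linear classifier) can be composed by hand into one affine map per pixel. The following NumPy-only sketch uses hypothetical stand-in data and a least-squares readout in place of the LIBSVM C-SVM; the composition step at the end is the actual point:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for two digit classes with 784 pixel features.
X = rng.standard_normal((200, 784))
y = np.repeat([-1.0, 1.0], 100)
X[100:, :50] += 1.0                      # make the classes separable

# 1) PCA to 4 components via SVD on the centered data.
mu = X.mean(axis=0)
P = np.linalg.svd(X - mu, full_matrices=False)[2][:4]    # (4, 784)
Z = (X - mu) @ P.T

# 2) Standardization of the PCA features.
m, s = Z.mean(axis=0), Z.std(axis=0)
Zn = (Z - m) / s

# 3) Linear readout (least squares as a simple stand-in for the C-SVM).
sol = np.linalg.lstsq(np.column_stack([Zn, np.ones(len(Zn))]), y, rcond=None)[0]
w4, b = sol[:-1], sol[-1]

# Affine backtransformation: compose all three steps into one weight per pixel.
ws = w4 / s                              # undo the standardization scaling
w0 = P.T @ ws                            # map the weights back to the pixels
b0 = b - ws @ (P @ mu) - ws @ m          # collect all offset contributions
# Now w0 @ x + b0 equals the chain's decision value for any input x.
```

The composed w0 is exactly what the contour plots of Figure 2.3 visualize: one weight per original pixel, summarizing the interplay of dimensionality reduction, normalization, and classifier.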
Generally, it can be seen that the classifier focuses on the digit parts, where there
is no overlay between the digits on average. For one class there are high positive
values and for the other there are high negative weights. For the classification with
64 principal components, the covariance correction smoothes the weight usage and
results in a visualization which is similar to the visualization of the backtransformation for the classification with 4 principal components. Hence, the 60 additional
components are mainly used for canceling out “noise”.
2.4.3 Handwritten Digit Classification: Nonlinear Classifier
To show the effect of the generic backtransformation for a nonlinear processing chain,
the evaluation of Section 2.4.2 is repeated with a RBF kernel for C-SVM instead of
a linear one. The hyperparameter of the kernel, γ, has been determined according
to [Varewyck and Martens, 2011]. Everything else remained unchanged. Again the
generic implementation was used. Note that every sample requires its own backtransformation. So for the visualization of the backtransformation, only the first four
single samples were taken.
It can be clearly seen in Figure 2.4 that there is a different backtransformation for
each sample. Similar to the results in Section 2.4.2 (Figure 2.3), the backtransformation with covariance correction (when 64 principal components are taken as features)
seems to be more useful in contrast to the raw visualization which also contains the
noise cancellation part. This is surprising because this approach has been originally
developed for linear models and not for nonlinear ones [Haufe et al., 2014]. Using a
correction with a “local” covariance would be more appropriate in this case but more
demanding from the computation and implementation point of view. A large number
of principal components seems to be a bad choice for the nonlinear kernel, because it does not seem to generalize that well and is using a lot of small components instead of focusing on the big shape of the digits.

Figure 2.3: Contour plots of backtransformation weights for handwritten digit classification: The white and black silhouettes display an average contour of the original data (digits 0 vs. 1, 0 vs. 2, and 1 vs. 2). The colored contour plots show the respective weights in the classification process before and after covariance correction with a different number of used principal components (case A: 4 components, case B: 64 components). Negative weights (blue) are important for the classification of the first class (black silhouette) and positive weights (red) for the second class (white silhouette). Green weights are close to zero and only contribute weakly to the classification process.

Figure 2.4: Contour plots of backtransformation weights for handwritten digit classification with nonlinear classifier: The setting is the same as in Figure 2.3 except that no average shapes are displayed but the shape of the sample of interest for which the backtransformation is calculated.
In case of using only 4 principal components, the approach mainly shows the
shape of the digit 2 (or 0 for the first column). In contrast, the visualizations without
covariance correction clearly indicate with a blue color which parts are relevant for
classifying it as the first class and with the red color which parts are important for
the second class. An interesting effect occurs for the first classifier at the fourth digit
(1). Here a closer look could be taken at the classifier and the data to find out why
there are yellow weights outside the regular shape of the digit 1. This might be the
result of some artifacts in the data (e.g., a sample with very bad handwriting near to
the observed sample) or an artifact in the processing.
In the nonlinear and the linear case with 64 principal components, the backtransformation reveals that the decision process is not capable of deriving real shape features for the digits. This might be a reason why a specially tuned deep neural network performs better in this classification task [Schmidhuber, 2012].
2.4.4 Handwritten Digit Classification: Classifier Comparison
This section is based on an evaluation in:
Krell, M. M., Straube, S., Wöhrle, H., and Kirchner, F. (2014c). Generalizing, Optimizing, and Decoding Support Vector Machine Classification. In ECML/PKDD-2014
PhD Session Proceedings, September 15-19, Nancy, France.
I wrote this paper completely on my own to have a first, very short summary of this
thesis and reused some text parts. My coauthors helped me with reviews and discussions about the paper and my thesis in general.
Again, the MNIST dataset was used with the classification of the digits 0, 1, and 2;
the data was reduced in dimensionality with PCA from 784 to 40; and then it was normalized with a standardization (zero mean and variance of one on the given training
data). For classification, a squared loss penalization of misclassifications was used
to obtain the more common Gaussian loss for RFDA and to be better comparable.
RFDA, L2–SVM, the respective online SVM using the single iteration approach (see
Section 1.2.4), and the νoc-SVM were compared. The classifiers were chosen as good
representatives of the algorithms introduced in Chapter 1 and to compare their behavior on a visual level. Backtransformation can summarize all three processing
steps and provides the respective weights belonging to the input data. This is visualized in Fig. 2.5.
The linear classifiers themselves only determine the 40 weights of the normalized principal components. These weights would be difficult to interpret, but with the
given affine backtransformation the weighting and its correspondence to the average
shapes can be observed.

Figure 2.5: Contour plots of backtransformation weights for handwritten digit classification with different classifiers: The white and black silhouettes display an average contour of the original data (digits 0, 1, and 2). The colored contour plots show the respective weights in the classification process. Negative weights (blue) are important for the classification of the first class (black silhouette) and positive weights (red) for the second class (white silhouette). Green weights are close to zero and do not contribute to the classification process. For the unary classification, the second class (white) was used. Visualization taken from [Krell et al., 2014c].

As expected due to the model similarities (single iteration approach, Section 1.2), similar weight distributions were obtained for the L2–SVM and
its online learning variant (PA2 PAA). The visualizations of L2–SVM and RFDA look
similar due to the connection with BRMM (relative margin approach, Section 1.3).
However, for the distinction between the two digits 0 and 2 some larger differences
can be observed. The unary classifier is different from the other classifiers, as expected,
because it has been trained on a single digit only (origin separation approach, Section 1.4). Nevertheless, characteristics of the other class can be marginally observed
due to the use of PCA which has been trained on both classes. This can be seen in the
second and third row: although trained on the digit 2 in both cases, the classification
results look different.
2.4.5 Movement Prediction from EEG Data
The EEG is a very complex signal, measuring electrical activity on the scalp with
a very high temporal resolution and more than 100 sensors. Several visualization
techniques exist for this type of signal, which are used in neuroscience for analysis.
When processing EEG data for BCIs, there is a growing interest in understanding
the properties of processing chains and the dynamics of the data, to avoid relying
on artifacts and to get information on the original signal back for further interpretation [Kirchner, 2014]. Here, very often spatial filtering is used for dimensionality
reduction to linearly combine the signals from the numerous electrodes to a largely
reduced number of new virtual sensors with much less noise (see Section 2.2.1.4).
These spatial filters and much more importantly the data patterns they are enhancing are visualized with similar methods as used for visualizing data. If the spatial
filter is the main part of the processing (e.g., only two filters are used), this approach
is sufficient to understand the data processing. However, often more filters and other,
additional preprocessing algorithms are used. Hence, the original spatial information
cannot be determined for the input of the classifier. This prevents a good visualization
of the classifier and an understanding of what the classifier learned from the training
data. So here, backtransformation can be very helpful.
To illustrate this, a dataset from an EEG experiment was taken [Tabie and Kirchner, 2013]. In this experiment, subjects were instructed to move
their right arm as fast as possible from a flat board to a buzzer in approximately
30 cm distance.
The classification task was to predict upcoming movements by
detecting movement-related cortical potentials [Johanshahi and Hallett, 2003] in
the EEG single trials. Before applying the backtransformation and visualizing the
data as depicted in Figure 2.6, the data has been normalized with a standardization,
a decimation, and temporal filtering. Only the last part of the signal close to the movement was visualized. The processing chain was similar to the one in
Section 2.2.1. The details are described in [Seeland et al., 2013b].
The averaged input data in Figure 2.6 shows a very strong negative activation at the motor cortex mainly at the left hemisphere over the electrodes
C1, Cz, and FCC1h.8 This activation is consistent with the occurrence of movement-related cortical potentials and is expected from the EEG literature [Johanshahi and Hallett, 2003]. The region of the activation (blue circle on the left hemisphere at the motor cortex region) is associated with right arm movements, which the subjects had to perform in the experiment.

8 A standard extended 10–20 electrode layout has been chosen with 128 electrodes (see Figure C.6).
Figure 2.6: Visualization of data for movement prediction and the corresponding processing chain: In the first row, the average of the data before a movement (at −200 ms, −150 ms, −100 ms, and −50 ms before movement onset) is displayed as topography plots, and in the second row the backtransformation weights are displayed, respectively. The data values from the different sensors were mapped to the respective position on the head, displayed as an ellipse with the nose at the top and the ears on the sides.
The backtransformation weights are much more spread over the head compared
to the averaged data. There is a major activation at the left motor cortex at electrodes C1 and CP3, but also a large activation at the back of the head at the right
hemisphere around the electrode P8. On the time scale, the most important weights
can be found at the last time point, 50 ms before movement onset.
This is reasonable, because the most important movement related information
is expected to be just before the movement starts, although movement intention
can be detected above chance level on average 460 ms before the movement onset [Lew et al., 2012]. Note that the analysis has been performed on single trials
and not on averaged data and that for a good classification the largest difference is
of interest and not the minimal one. The high weights at C1 and CP3 clearly fit to
the high negative activation found in the averaged data and as such highlight the
signal of interest. For interpreting the other weights, two things have to be kept in
mind. First, EEG data usually contains numerous artifacts and second, due to the
conductivity of the skin it is possible to measure every electric signal at a certain
electrode also on the other electrodes. Keeping that in mind, the activation around
P8 could be interpreted as a noise filter for the more important class-related signal
at C1 and CP3. This required filtering effect on EEG data is closely related to spatial
filtering, which emphasizes a certain spatial pattern [Blankertz et al., 2011, section
4.2]. It could also be a relevant signal which cannot be observed in the plot of the
averaged data. These observations are now a good starting point for domain experts
to take a closer look at the raw data to determine which interpretation fits better.
2.4.6 Reinitialization of Linear Classifier with Affine Preprocessing
There could be several reasons for exchanging the preprocessing in a signal processing chain. For example, first some initial preprocessing is loaded, but in parallel a new, better fitting, data-specific processing is trained or tuned on new incoming data
(e.g., a new spatial filter [Wöhrle et al., 2015]). If the dimensionality did not fit after changing the preprocessing chain, a new classifier would also be needed. But even if the dimensions of old and new preprocessing were the same, it might be good to adapt the classifier to that change to have a better initialization. Here, the affine
backtransformation can be used as described in the following.
For this application, a processing chain of affine transformations is assumed
which ends with a sample-weighting online learning algorithm like PAA. Since the classification function is a weighted sum of samples, it enables the following calculation:
w = \sum_i \alpha_i y_i \hat{x}_i = \sum_i \alpha_i y_i (A x_i + T) = A \sum_i \alpha_i y_i x_i + T \sum_i \alpha_i y_i   (2.23)
  = A w^{(0)} + T b, \quad with \quad w^{(0)} = \sum_i \alpha_i y_i x_i \quad and \quad b = \sum_i \alpha_i y_i.   (2.24)
Here, x_i is the training data with the training labels y_i, and \hat{x}_i is the preprocessed training data given to the classifier. The weights \alpha_i are calculated by the update formulas of the classifier. During the update step, w^{(0)} must be calculated additionally, but neither x_i, y_i, nor \alpha_i are stored. When changing the preprocessing from (A, T) to
(A', T'),

w' = A' w^{(0)}   (2.25)
is a straightforward estimate for the new classifier. The advantage of this formula is that it just requires additionally calculating and storing w^{(0)}. So the resulting classifier can still be used for memory-efficient online learning. Even if neither
(A', T') nor (A, T) is known, w' can be calculated using the new signal processing function \hat{F}(x) = A'x + T':

w' = A' w^{(0)} = \hat{F}(w^{(0)}) - T'b = \hat{F}(w^{(0)}) - A'(0 \cdot w^{(0)})b - T'b = \hat{F}(w^{(0)}) - \hat{F}(0 \cdot w^{(0)})b.   (2.26)
So, w' can be computed by processing w^{(0)} and a sample of zero entries in the signal
processing chain. This only requires some minor processing time but no additional
resources. Usually the processing chain is very fast and so the additional processing
time should not be a problem.
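This reinitialization amounts to two evaluations of the new chain. A minimal sketch, treating the new chain F̂ as a black box; all matrices and values are hypothetical, and b = 1 is chosen so that the offset contribution cancels exactly and w' equals A'w^{(0)}:

```python
import numpy as np

def reinitialize(F_new, w0, b):
    """Estimate new classifier weights after exchanging the preprocessing
    by probing the new chain with w0 and a zero sample (two evaluations)."""
    return F_new(w0) - F_new(np.zeros_like(w0)) * b

# Hypothetical new affine preprocessing F_hat(x) = A'x + T'.
A_new = np.array([[1.0, 2.0], [0.0, 1.0]])
T_new = np.array([0.5, -0.5])
F_new = lambda x: A_new @ x + T_new

w0 = np.array([1.0, 1.0])     # running sum  w0 = sum_i alpha_i y_i x_i
b = 1.0                       # running sum  b  = sum_i alpha_i y_i
w_new = reinitialize(F_new, w0, b)   # equals A' @ w0 here, since b = 1
```

Only w0 and b need to be maintained during the incremental updates; neither the training samples nor the preprocessing parameters are stored.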
To give a proof of concept, the data introduced in Section 0.4 was used. We concatenated the 5 recordings of each subject and obtained 10 datasets with more than
4000 samples each. In a preceding preparation the data was standardized, decimated,
and bandpass filtered (see first 4 nodes in Figure 3.4). As modular preprocessing, a
chain was trained on each of the datasets consisting of the xDAWN filter (8 retained
pseudo channels), a simple feature generator, which used the amplitudes of the signal as features, and a feature normalization linearly mapping each feature to the
interval [0, 1], assuming 5% of the data to be outliers (see Figure C.3). This modular processing chain was then randomly loaded9 in a simulated incremental learning
scenario, where a sample was first classified and then the classifier (PA1, see Section 1.1.6.2) directly got the right label and performed an update step. The classifier
has not been trained before. After a fixed number of iterations, the preprocessing was
again randomly changed, to analyze the effect of changing the preprocessing (for the
specification file see Figure C.3). Due to the randomization, the preprocessing does
not fit to the data. Consequently, with every change of the preprocessing a drop in
performance is expected. In contrast, the incremental learning should increase the
performance over time, because, the classifier adapts to the data and the preprocessing. For simplicity, the regularization parameter C was fixed to 1 for the overrepresented target class and 5 for the other class. The BA was used as performance metric
to account for class imbalance. The evaluation is repeated 10 times to have different
randomizations. It is clear that this setting is artificial, but it is helpful for showing the problem the classifier has in dealing with changing preprocessing and how our approach can overcome this issue.
In Figure 2.7 the positive effect of the backtransformation on the performance
is shown when the preprocessing is randomly changed after a varying number of
processed data samples. The new approach using backtransformation is not negatively affected by changing the preprocessing in contrast to the simple approach of
not adapting the classifier to the different processing. There is even a slight improvement in performance. When changing the processing too often, the simple classifier
without the backtransformation adaptation would be as good as a guessing classifier
(performance of 0.5).
For Figure 2.8, the preprocessing is randomly changed every 1000 samples and the change of performance over time during incremental training is displayed. It can be clearly seen that the performance drops dramatically whenever the preprocessing is changed after 1000 samples.
For the experiment, w = 0 and b = 0 were used for initialization, and the hyperparameters were not optimized but fixed. A different initialization or other hyperparameters
9 The randomly chosen processing chain was trained on one of the 9 other datasets, but not on the one of the current evaluation.
Chapter 2. Decoding: Backtransformation
[Figure 2.7 plot: balanced accuracy (y-axis, 0.48 to 0.64) versus log dist (x-axis, 1.5 to 4.0), with and without backtransformation]
Figure 2.7: Adaptation to random preprocessing: performance (and standard error) of an online classifier which receives an incremental update after each incoming sample. After every 10^(log dist) incoming samples the preprocessing is changed by randomly loading a new preprocessing.
[Figure 2.8 plot: performance [BA] (y-axis, 0.40 to 0.75) versus number of processed samples (x-axis, 500 to 4000), for online learning with backtransformation, without backtransformation, and with constant preprocessing]
Figure 2.8: Performance trace with random preprocessing changes every 1000 samples: performance of an online classifier which receives an incremental update after each incoming sample, where the displayed metric is the average over all evaluations. The BA metric is calculated with a moving window of 60 samples as described by [Wöhrle et al., 2015].
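Such a windowed balanced-accuracy trace can be sketched as follows; this is our own simplified version (see [Wöhrle et al., 2015] for the original metric handling):

```python
from collections import deque

def moving_balanced_accuracy(true_labels, predictions, window=60):
    """Balanced accuracy over a sliding window of the most recent samples.
    Labels and predictions are in {-1, +1}; returns one BA value per sample."""
    buf = deque(maxlen=window)   # keeps only the last `window` (label, prediction) pairs
    trace = []
    for y, p in zip(true_labels, predictions):
        buf.append((y, p))
        pos = [pp for yy, pp in buf if yy == 1]    # predictions on true positives
        neg = [pp for yy, pp in buf if yy == -1]   # predictions on true negatives
        tpr = sum(pp == 1 for pp in pos) / len(pos) if pos else 0.0
        tnr = sum(pp == -1 for pp in neg) / len(neg) if neg else 0.0
        trace.append(0.5 * (tpr + tnr))            # balanced accuracy
    return trace
```

Until both classes have appeared in the window, one of the two rates is zero, which is why such a trace starts at 0.5 under perfect prediction.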
might show better or worse performance in total, but the clear positive effect of the backtransformation as a good initialization after changing the preprocessing will be the same. The effect might get lost when incremental learning is also used for the preprocessing to compensate the changing preprocessing in the classifier, because incremental learning in the preprocessing could generate stationary features from non-stationary data, and the backtransformation would undo this positive effect.
2.5 Discussion
With the affine backtransformation, we introduced a direct approach to look at the complete data processing chain (in contrast to a separate handling of its components) and to transform it to a representation in the same format as the data. We generalized the concept to arbitrary differentiable processing chains. We showed that it is necessary and possible to break up the black box of classifier and preprocessing. The approach can be used to improve the understanding of complex processing chains and might enable several applications in the future. It was shown that the backtransformation can be used to visualize the decision process; a direct comparison with a visualization of the data is possible and enables an interpretation of the processing. Our approach extends existing algorithms by also considering the preprocessing, by putting no restrictions on the decision algorithm, by providing the implementation details, and by integrating the backtransformation into the pySPACE framework, which already comes with a large number of available algorithms. The framework is required and very useful for the suggested generic implementation. A big advantage is that our generic approach enables the usage of arbitrary (differentiable) processing algorithms and their combinations. Due to the integration into a high-level framework, the backtransformation can be applied to different data types and applications, and it can benefit from future extensions of pySPACE to new applications and new data types.
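For the special case of a linear classifier after an affine preprocessing, the core idea of the affine backtransformation can be sketched in a few lines (notation and function name are ours):

```python
import numpy as np

def backtransform(A, a, w, b):
    """Collapse an affine preprocessing x -> A @ x + a followed by a linear
    decision function f(z) = w @ z + b into one affine function of the raw
    data: f(x) = w_bt @ x + b_bt, i.e., weights in the same format as the data."""
    w_bt = A.T @ w        # weights expressed in the raw-data format
    b_bt = w @ a + b      # offset absorbed into the decision function
    return w_bt, b_bt

# Sanity check: the collapsed map must equal the two-step evaluation.
A = np.array([[2.0, 0.0], [0.0, 1.0]])   # toy preprocessing matrix
a = np.array([0.5, -0.5])                # toy preprocessing offset
w = np.array([1.0, 3.0])                 # classifier weights
b = 0.1                                  # classifier offset
x = np.array([1.0, 2.0])                 # one raw data sample
w_bt, b_bt = backtransform(A, a, w, b)
```

The resulting w_bt lives in the raw-data format and can be visualized next to the data itself, which is the comparison used in the visualizations above.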
The backtransformation can be used for interpreting the behavior of the decision process, but how the further analysis is performed remains an open question, because additional investigations and expert knowledge might still be required. A related problem occurs when using temporal and spatial filters. Here, the solution is to visualize the frequency response and the spatial pattern instead of the pure weights of the transformation. The frequency response gives information on how frequencies are filtered out, and spatial patterns give information on which signal in space is emphasized by the respective spatial filter. It is important for the future to develop new methods which improve the interpretability of the decision process. This could be achieved, for example, by extending the method of covariance multiplication with a
more sophisticated calculation of the covariance matrix, or by deriving a different formula for getting a forward model which describes how the data is generated.10 This
might enable the backtransformation to reveal new signals or connections in the data, which can then be used to improve the observed data processing chain. This improvement is especially important for long-term learning. If a robot shall generate its own expert knowledge from a self-defined decision process, the process of interpretation needs to be more automated.
In the future, it would also be interesting to analyze the application of the backtransformation further, e.g., by using other data, other processing chains, or decision algorithms like regression.
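As a concrete example of the filter visualization suggested in the discussion above, the frequency response of a temporal FIR filter can be computed directly from its coefficients; the cutoff and sampling rate here are hypothetical:

```python
import numpy as np
from scipy.signal import firwin, freqz

# Design a small low-pass FIR filter and compute its frequency response --
# the representation suggested above for interpreting temporal filters
# instead of inspecting their raw weights.
fs = 100.0                                  # sampling frequency in Hz (assumed)
taps = firwin(numtaps=31, cutoff=10.0, fs=fs)  # 31-tap low-pass, 10 Hz cutoff
freqs, response = freqz(taps, worN=256, fs=fs)
magnitude = np.abs(response)                # |H(f)|: gain at each frequency
```

Plotting magnitude over freqs (e.g., with matplotlib) shows the passband below the cutoff and the attenuated stopband, which is far easier to interpret than the 31 raw filter weights.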
Related Publications
Krell, M. M. and Straube, S. (2015). Backtransformation: A new representation
of data processing chains with a scalar decision function. Advances in Data
Analysis and Classification. submitted.
Krell, M. M., Straube, S., Wöhrle, H., and Kirchner, F. (2014c). Generalizing, Optimizing, and Decoding Support Vector Machine Classification. In ECML/PKDD-2014 PhD Session Proceedings, September 15-19, Nancy, France.
Feess, D., Krell, M. M., and Metzen, J. H. (2013). Comparison of Sensor Selection Mechanisms for an ERP-Based Brain-Computer Interface. PLoS ONE, 8(7):e67543, doi:10.1371/journal.pone.0067543.
Kirchner, E. A., Kim, S. K., Straube, S., Seeland, A., Wöhrle, H., Krell, M. M., Tabie, M., and Fahle, M. (2013). On the applicability of brain reading for predictive human-machine interfaces in robotics. PLoS ONE, 8(12):e81732, doi:10.1371/journal.pone.0081732.
10 In the context of EEG data processing, especially source localization methods might be very helpful, because they enable an interpretation of the processing in relation to parts of the brain and not the raw sensors, which accumulate the signals from different parts of the brain.
Chapter 3
Optimizing: pySPACE
This chapter is based on:
Krell, M. M., Straube, S., Seeland, A., Wöhrle, H., Teiwes, J., Metzen, J. H., Kirchner, E. A., and Kirchner, F. (2013b). pySPACE – a signal processing and classification environment in Python. Frontiers in Neuroinformatics, 7(40):1–11, doi:10.3389/fninf.2013.00040.
For a clarification of my contribution, I refer to Section 3.5.2.
Contents
3.1 Structure and Principles . . . . . . . . . . . . . . . . . . 123
3.1.1 Data . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
3.1.2 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . 125
3.1.3 Infrastructure . . . . . . . . . . . . . . . . . . . . . . 127
3.2 User and Developer Interfaces . . . . . . . . . . . . . . . . 129
3.2.1 System and Storage Interface . . . . . . . . . . . . . . . 130
3.2.2 Processing Interface . . . . . . . . . . . . . . . . . . . 130
3.2.3 Offline Analysis . . . . . . . . . . . . . . . . . . . . . 132
3.2.4 Online Analysis . . . . . . . . . . . . . . . . . . . . . 132
3.2.5 Extensibility, Documentation and Testing . . . . . . . . . 133
3.2.6 Availability and Requirements . . . . . . . . . . . . . . 134
3.3 Optimization Problems and Solution Strategies . . . . . . . . 134
3.4 pySPACE Usage Examples . . . . . . . . . . . . . . . . . . . . 143
3.4.1 Example: Algorithm Comparison . . . . . . . . . . . . . . 144
3.4.2 Usage of the Software and Published Work . . . . . . . . . 147
3.4.3 Comparison of Sensor Selection Mechanisms . . . . . . . . 149
3.5 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . 159
3.5.1 Related Work . . . . . . . . . . . . . . . . . . . . . . . 159
3.5.2 My Contribution to pySPACE for this Thesis . . . . . . . . 161
3.5.3 Summary . . . . . . . . . . . . . . . . . . . . . . . . . 162
This chapter presents the software pySPACE, which allows to use the contributions from the previous chapters and, furthermore, to design, optimize, and evaluate data processing chains. In the following, we will discuss typical data analysis problems, illustrated by examples from neuroscientific and robotic data.
Motivation Most data in neuroscience and robotics are not feature vector data but time series data from the different sensors used. Consequently, a classifier (as introduced, for example, in Chapter 1) usually cannot be applied directly to this data, and a sophisticated preprocessing is required, as explained for example in Section 2.2.1. There are also other areas where such data are used, but we will use these two examples to show some problems where our approach might help.
Time series are recorded in various fields of neuroscience to infer information
about neural processing. Although the direct communication between most parts of the nervous system is based on spikes as unique and discrete events, graded potentials are seen as reflections of neural population activity in both invasive and non-invasive techniques. Examples for such time series come from recordings of local field potentials (LFPs), EEG, or even fMRI.
Common characteristics of time series data reflecting neural activity are: (i) a high noise level (caused by external signal sources, muscle activity, or overlapping uncorrelated brain activity) and (ii) a large amount of data that is often recorded with many sensors (electrodes) and with a high sampling rate. To reduce noise and size, the data are preprocessed, e.g., by filtering in the frequency domain or by averaging over trials and/or sensors. These approaches have been very successful in the past, but the solutions were often chosen manually, guided by the literature, visual inspection, and in-house written scripts, so that possible drawbacks remain. It is still not straightforward to compare or reproduce analyses across laboratories, and the investigator has to face many choices (e.g., filter type, desired frequency band, and respective hyperparameters) that cannot be evaluated systematically without investing a large amount of time. Another critical issue is that the data might contain so far undiscovered or unexpected signal components that might be overlooked due to the choice of the applied data analysis. False or incomplete hypotheses can be a consequence. An automatic optimization of the processing
chain might avoid such effects. On the other hand, the success of applications using automatically processed and classified neurophysiological data has been widely
demonstrated, e.g., for usage of BCIs [Lemm et al., 2004, Bashashati et al., 2007,
Hoffmann et al., 2008, Seeland et al., 2013b, Kirchner et al., 2013] and classification
of epileptic spikes [Meier et al., 2008, Yadav et al., 2012]. These applications demonstrate that automated signal processing and classification can indeed be used to directly extract relevant information from such time series recordings.
Similar problems also apply for data in robotics. The noise level is usually lower, but it might still cause problems. A big problem is the amount of data which can be recorded with a robot, in contrast to its limited processing power and memory. For example, deep sea grippers are constructed with multimodal sensor processing to fulfill complex manipulation tasks [Aggarwal et al., 2015, Kampmann and Kirchner, 2015]. There are sensors in the numerous motors of more and more complex robots [Lemburg et al., 2011, Manz et al., 2013, Bartsch, 2014]. Sometimes internal sensors are used to enable or improve localization [Schwendner et al., 2014], but usually several other sensors are added to enable SLAM [Hildebrandt et al., 2014]. Often video image data is used for SLAM, but also for object manipulation and terrain classification [Manduchi et al., 2005, Müller et al., 2014]. For the processing of this data, expert knowledge is usually used (as in neuroscience, too) for constructing a feasible signal processing chain. But taking the expert out of the loop is necessary for real long-term autonomy of robots.
Solving all facets of the aforementioned problems, which are all connected to the problem of optimizing the signal processing chain, is probably impossible. Nevertheless, we will show that it is at least possible to improve the situation from the interface/framework perspective, which can be used as the basis for further approaches. Here, recent tools can help to tackle the data processing problem, especially when made available open source, by providing a common ground that everyone can use. As a side effect, there is the chance to enhance the reproducibility of the conducted research, since researchers can directly exchange how they processed their data based on the respective specification or script files. A short overview of the variety of existing approaches is given in the related work (Section 3.5.1). There is an increasing number and complexity of signal processing and classification algorithms that enable more sophisticated processing of the data. However, this is also considered a problem, since it demands (i) tools where the signal processing algorithms can be directly compared [Sonnenburg et al., 2007, Domingos, 2012] and (ii) closing the still existing large gap between developer and user, i.e., making the tools usable for a larger group of people with little or no experience in programming or data analysis.
Contribution With the software pySPACE, we introduce a modular framework that can help scientists to process and analyze time series data in an automated and parallel fashion. The software supports the complete process of data analysis, including processing, storage, and evaluation. No individual execution scripts are needed; instead, users can control pySPACE via text files in the YAML Ain't Markup Language [Ben-Kiki et al., 2008] (YAML) format, specifying which data operation should be executed. The software was particularly designed to process windowed (segmented) time series and feature vector data, typically with classifiers at the end of the processing chain. For such supervised algorithms the data can be separated into training and testing data. pySPACE is, however, not limited to this application case: data can be preprocessed without classification, reorganized (e.g., shuffled, merged), or manipulated using own operations. The framework offers automatic parallelization of independent (not communicating) processes by means of different execution back-ends, from serial over multicore to distributed cluster systems. Finally, processing can be executed in an offline or in an online fashion. While the normal use case is concerned with recorded data saved to a hard disk (and therefore offline), the online mode, called pySPACE live, offers the application-directed possibility to process data directly when it is recorded, without storing it to hard disk. We refer to this processing as online due to the direct access, in contrast to offline processing where the input data is loaded from a hard disk.
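A minimal, hypothetical specification file in this YAML style can illustrate the idea; the keys and node names below are illustrative, not the exact pySPACE schema:

```python
import yaml  # PyYAML

# Illustrative node-chain specification in the YAML style used by pySPACE.
# All keys and node names here are examples, not the real pySPACE schema.
spec_text = """
type: node_chain
input_path: "example_summary"
node_chain:
  - node: TimeSeriesSource
  - node: Decimation
    parameters:
      target_frequency: 25.0
  - node: FFTBandPassFilter
    parameters:
      pass_band: [0.4, 4.0]
  - node: LinearClassifier
"""

spec = yaml.safe_load(spec_text)
node_names = [entry["node"] for entry in spec["node_chain"]]
```

A framework can read such a file, look up each named node in its collection of algorithms, and instantiate the chain, so the user never writes an execution script.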
To tackle the challenge of an increasing number of signal processing algorithms, additional effort was put into the goal of keeping pySPACE modular and easy to extend. Further algorithms can be added by the advanced user; the algorithms will be automatically included in the collection of available algorithms and in the documentation. Furthermore, the software is capable of using existing signal processing libraries, preferably implemented in Python, and of using existing wrappers to other languages like C++. So far, interfaces are implemented to external classifiers (from scikit-learn [Pedregosa et al., 2011] and LibSVM [Chang and Lin, 2011]), the modular toolkit for data processing [Zito et al., 2008] (MDP), WEKA [Hall et al., 2009], and MMLF (http://mmlf.sourceforge.net/). Core functionality of pySPACE uses the Python libraries NumPy [Dubois, 1999] and SciPy [Jones et al., 2001].
pySPACE was implemented as a comprehensive tool that covers all aspects a user needs to perform the intended operations. The software has a central configuration where the user can optionally specify global input and output parameters, paths to external packages, and computational parameters. The processing is then defined in individual specification files (using YAML), and the framework can be executed with the respective operation on several datasets at once. This functionality is not only provided for internal algorithms, but can also be used with external frameworks like WEKA and MMLF. For the basic signal processing algorithms implemented in pySPACE, we adopted the node and flow concept of the MDP software, together with the basic principles that were introduced with it. Currently, more than 200 of such signal processing nodes are integrated into pySPACE. These nodes can be combined, resulting in numerous different processing flows. Different evaluation schemes (e.g., cross-validation) and performance metrics are provided, and different evaluation results can be combined into one output file. This output can be explored using external software or by using a
graphical user interface (GUI) provided within pySPACE.
A drawback of most frameworks is that they focus on the preprocessing while the machine learning part is often missing, or vice versa. Furthermore, they do not enable a simple configuration and parallel execution of processing chains. To enable interfacing to existing tools, pySPACE supports a variety of data types. As soon as several datasets have to be processed automatically with a set of different processing algorithms (including classification) and numerous different hyperparameter values, pySPACE is probably the better choice in comparison to the other tools. Additionally, the capability to operate on feature vector data makes pySPACE useful for a lot of other applications where the feature generation has been done with other tools. To the best of our knowledge, pySPACE is unique in its way of processing data, with special support for neurophysiological data and with its number of available algorithms.
Outline The structural concepts of pySPACE will be presented in Section 3.1. In Section 3.2 we will describe how the software is interfaced, including the requirements for running it. This is followed by a short description of optimization aspects in pySPACE (Section 3.3). Several examples and application cases will be highlighted in Section 3.4, including a more complex analysis using the power of pySPACE and the content of the previous chapters (Section 3.4.3). Finally, we discuss the related work and the connection between this thesis and the pySPACE framework, and summarize with a more personal view.
3.1 Structure and Principles
The software package structure of pySPACE was designed to be self-explanatory for the user and to correspond to the inherent problem structure. Core components in the main directory are run, containing everything that can be executed; resources, where external and internal data formats and types are defined; missions, with existing processing algorithms the user can specify; and environments, containing infrastructure components for execution. How to run the software is described in Sections 3.2 and 3.4. The other packages and their connections are described in the following.
3.1.1 Data
When analyzing data, the first difficulty is getting it into a framework or into a format one can continue working with. A good starting point is therefore to look at the way the data are organized and handled within the software, including ways to load data into the framework and how the outcome is stored. Data are distinguished in pySPACE by granularity: from single data samples to datasets and complete summaries (defined in the resources package), as explained in the following. At the same time, they require different types of processing, which are subsequently described in Sections 3.1.2 and 3.1.3 and depicted in Figure 3.1.
Four types of data samples can occur in pySPACE: the raw data stream, the windowed time series, the feature vector, and the prediction vector. A data sample comes with some metadata for additional description, e.g., specifying sensor names, sampling frequency, feature names, or classifier information. When a raw data stream is loaded, it is first segmented into windowed time series. Windowed time series have the form of two-dimensional arrays, with amplitudes sorted according to sensors on the one axis and time points on the other. Feature vectors are one-dimensional arrays of feature values. In a prediction vector, the data sample is reduced to the classification outcome and the assigned label or regression value.
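The four sample types can be illustrated with NumPy; the shapes and values below are examples only, not a fixed pySPACE layout:

```python
import numpy as np

n_sensors, n_time_points = 3, 5

# Raw data stream: a continuous (sensors x time) recording.
raw_stream = np.random.randn(n_sensors, 1000)

# Windowed time series: a 2D slice of the stream, sensors on one axis,
# time points on the other.
time_series_window = raw_stream[:, 100:100 + n_time_points]

# Feature vector: a 1D array of feature values, here a toy per-sensor mean.
feature_vector = time_series_window.mean(axis=1)

# Prediction vector: the sample reduced to the classification outcome
# and the assigned label (illustrative structure).
prediction_vector = {"label": "Target", "prediction": 0.73}
```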
For analysis, data samples are combined into datasets. In pySPACE, a dataset is defined as a recording of one single experimental run, either as streamed data, or already preprocessed as a set of the corresponding time series windows, or as a loose collection of feature vectors. It also has metadata specifying the type, the storage format, and information about the original data and preceding processing steps. For each type of dataset, various loading and saving procedures are defined. Currently supported data formats for loading streaming datasets are comma separated values (.csv), the European Data Format (.edf), and two formats specifically used for EEG data: the one from Brain Products GmbH (Gilching, Germany) (.eeg) and the EEGLAB [Delorme and Makeig, 2004] format (.set). With the help of the EEGLAB format, several other EEG data formats can be converted to be used in pySPACE. For cutting out the windows from the data stream, either certain markers can be used, or stream snippets with equal distance are created automatically. For supervised learning, cutting rules can be specified to label these windows. Feature vector datasets can be loaded and stored in .csv files or the "attribute-relation file format" (ARFF), which is, e.g., useful for the interface to WEKA [Hall et al., 2009].
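Marker-based window cutting can be sketched as follows; this is a simplified stand-in for the stream segmentation described above:

```python
import numpy as np

def cut_windows(stream, markers, window_size):
    """Cut fixed-size windows out of a (sensors x time) stream, one window
    starting at each marker position; markers too close to the end are skipped."""
    windows = []
    for m in markers:
        if m + window_size <= stream.shape[1]:
            windows.append(stream[:, m:m + window_size])
    return windows

# Toy stream: 2 sensors, 20 time points, values equal to the sample index.
stream = np.arange(2 * 20).reshape(2, 20).astype(float)
windows = cut_windows(stream, markers=[0, 5, 18], window_size=4)
```

In the equal-distance case mentioned above, the marker list would simply be `range(0, stream.shape[1], step)` instead of event positions.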
Groups of datasets, e.g., experimental repetitions with the same subject or different subjects, can be combined to be analyzed and compared jointly. Such a dataset collection is called a summary in pySPACE. Summaries are organized in folder structures. To enable simple evaluations, all single performance results in a summary are combined into one .csv file, which contains various metrics, observed parameters, and classifier information.
3.1.2 Algorithms
Nodes and operations are the low-level and high-level algorithms in pySPACE (see Figure 3.1). They are organized in the missions package. New implementations have to be placed in the missions package and can then be used like the already implemented ones. Here, the type and granularity of the input (as depicted in Figure 3.1) have to be considered; the algorithms need to inherit from the base class and implement some basic processing function(s).
[Figure 3.1 diagram: summaries are processed by operations and operation chains (offline), datasets by node chains via the node chain operation, and data samples by nodes (offline and online); example operations include WEKA and merge, example nodes include subsampling, SVM, and FIR filter]
Figure 3.1: High-level and low-level processing types (upper and lower part)
and their connection to the data granularity (summary, dataset, sample).
Access levels for the user are depicted in blue and can be specified with YAML files
(Section 3.2.2). Only low-level processing can be performed online. For offline analysis, it is accessed by the node chain operation. For the operations and nodes several different algorithms can be chosen. Algorithms are depicted in orange (Section 3.1.2) and respective infrastructure components concatenating these in green
(Section 3.1.3). Visualization taken from [Krell et al., 2013b].
3.1.2.1 Nodes
The signal processing algorithms in pySPACE which operate on data samples (e.g., single feature vectors) are called nodes. Some nodes are trainable, i.e., they define their output based on the training data provided. The concept of nodes was inspired by MDP, as was the concept of their concatenation, which is presented in Section 3.1.3.1. In contrast to frameworks like MDP and scikit-learn, the processing in the nodes is purely sample based1 to ease implementation and online application of the algorithms. Nodes are grouped depending on their functionality, as depicted in Figure 3.2. Currently, there are more than 100 nodes available in pySPACE, plus some wrappers for other libraries (MDP, LibSVM, scikit-learn). A new node inherits from the base node and at least defines an execute function which maps the input (time series, feature vector, or prediction vector) to a new object of one of these types. Furthermore, it has a unique name ending with "Node" and its code is placed into the respective nodes folder. Templates are given to support the implementation of new nodes. For a complete processing of data from time series windows over feature vectors to the final predictions and their evaluation, several processing steps are needed, as outlined in the following and in Figure 3.2.

1 There is no special handling of batches of data.
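The node contract described above can be sketched as follows; the class and method names are simplified, and the real pySPACE base node carries much more infrastructure (training, metadata, storing):

```python
import numpy as np

class BaseNode:
    """Minimal stand-in for the pySPACE base node: subclasses implement
    _execute, which maps one data sample to a new data sample."""
    def execute(self, data):
        return self._execute(data)

class DecimationNode(BaseNode):
    """Example node: keep every k-th time point of a (sensors x time) window,
    a toy version of temporal dimensionality reduction."""
    def __init__(self, factor=2):
        self.factor = factor

    def _execute(self, data):
        return data[:, ::self.factor]

node = DecimationNode(factor=2)
window = np.ones((3, 8))       # 3 sensors, 8 time points
out = node.execute(window)     # 3 sensors, 4 time points
```

Because every node consumes and produces one sample, nodes with compatible input and output types can be concatenated freely, which is exactly what the node chains in Section 3.1.3.1 exploit.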
Preprocessing comprises denoising time series data and reducing dimensionality in the temporal and frequency domain. By contrast, the spatial filters operate in the spatial domain to reduce noise. This can be done by combining the signals of different sensors into new virtual sensors or by applying sensor selection mechanisms. Classification algorithms typically operate on feature vector data, i.e., before classification the time series have to be transformed with at least one feature generator into a feature vector. A classifier then transforms feature vectors into predictions. In postprocessing, feature vectors can be normalized and score mappings can be applied to prediction scores. For every data type a visualization is possible. Furthermore, there are meta nodes, which internally call other nodes or node chains. Thus, they can combine results of nodes or optimize node parameters. If training and testing data are not predefined, the data must be split to enable supervised learning. By default, data are processed as testing data. Source nodes are necessary to request data samples from the datasets; sink nodes are required for gathering data together to get new datasets or to evaluate classification performance. They establish the connection from datasets to data samples, which is required for processing datasets with concatenations of nodes.
3.1.2.2 Operations
An operation automatically processes one data summary2 and creates a new one. It is also responsible for the mapping between summaries and datasets. Several operations exist for reorganizing data (e.g., shuffling or merging), interfacing to WEKA and MMLF, visualizing results, or accessing external code. The most important operation is, however, the node chain operation, which enables automatic parallel processing of the modular node chain (see Section 3.1.3.1). An operation has to implement two main functions. The first function creates independent processes for specified parameter ranges and combinations, as well as different datasets. This functionality
2 Note that a summary can also consist of just a single dataset.
[Figure 3.2 diagram: example nodes arranged by category and subcategory, e.g., spatial filters (ICA, CSP, xDAWN, sensor selection), preprocessing (FIR/IIR filters, decimation, FFT, subsampling, normalizations), feature generation (amplitudes, moments, STFT, coherence), classification (LIBSVM, LIBLINEAR, SOR SVM, BRMM, LDA, QDA, naive Bayes, ridge regression, scikit-learn wrapper), postprocessing (score mapping, feature normalization), visualization (histogram, spectrum, backtransformation), meta (grid search, pattern search, splitter, ensemble, gating functions), plus source and sink nodes]
Figure 3.2: Overview of the more than 100 processing nodes in pySPACE. The given examples are arranged according to processing categories (package names) and subcategories. The size of the boxes indicates the respective number of currently available algorithms. My contributions in the context of this thesis (in terms of implemented nodes) are highlighted with a double rule.
is the basis for the parallelization property of pySPACE (see Section 3.1.3.3). The process itself defines the mapping of one or more datasets from the input summary to a dataset of the output summary, and its call function is the important part. The second function of an operation is called "consolidate" and implements the clean-up part after all its processes have finished. This is especially useful to store some meta information and to check and compress the results. Operations and their concatenations are used for offline analysis (see Section 3.2.3). In Section 3.4.1 an example of an operation will be given and explained.
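A minimal sketch of this two-function contract (the names are ours, not the exact pySPACE API):

```python
class Operation:
    """Sketch of an operation: create_processes yields independent units of
    work, consolidate cleans up after all processes have finished."""
    def __init__(self, datasets, parameter_values):
        self.datasets = datasets
        self.parameter_values = parameter_values

    def create_processes(self):
        # One independent process per (dataset, parameter) combination;
        # the processes share no state, so they can run in parallel.
        for dataset in self.datasets:
            for value in self.parameter_values:
                yield (dataset, value)

    def consolidate(self, results):
        # Gather and summarize the results after all processes finished.
        return sorted(results)

op = Operation(datasets=["A", "B"], parameter_values=[0.1, 1.0])
processes = list(op.create_processes())
results = op.consolidate(f"{d}:{v}" for d, v in processes)
```

Because `create_processes` enumerates independent work units, a back-end only needs to distribute them; `consolidate` then runs once, centrally, on the collected results.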
3.1.3 Infrastructure
So far we have discussed what to process (data) and which algorithms to use (nodes, operations). The infrastructure of pySPACE now defines how the processing is done. This core part is mainly defined in the environment package and is usually not modified. It comprises the online execution (see Section 3.2.4), the concatenation of nodes and operations (as depicted in Figure 3.1), and the parallel execution of processing tasks.
3.1.3.1 Node Chains
Nodes can be concatenated into a node chain to get a desired signal processing flow. The only restriction here is what a particular node needs as input format (raw stream data, time series, feature vector, or prediction vector). The input of a node chain is a dataset (possibly in an online fashion), which is accessed by a source node at the beginning of the node chain. For offline analysis, a sink node is placed at the end of the node chain to gather the result and return a dataset as output. In the online analysis, incoming data samples are processed immediately and the result is forwarded to the application. Between the nodes, the processed data samples are directly forwarded and, if needed, cached for speed-up. Additional information can be transferred between nodes where necessary. To automatically execute a node chain on several datasets, or to compare different node chains, a higher-level processing is used: the node chain operation, as depicted in Figure 3.3.
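The chaining itself can be sketched in a few lines (simplified; real pySPACE nodes also handle training, metadata, and caching):

```python
import numpy as np

class Node:
    """Tiny node wrapper with an execute method mapping one sample to a new one."""
    def __init__(self, func):
        self.func = func

    def execute(self, data):
        return self.func(data)

class NodeChain:
    """Concatenation of nodes: each node's output is the next node's input."""
    def __init__(self, nodes):
        self.nodes = nodes

    def execute(self, data):
        for node in self.nodes:
            data = node.execute(data)
        return data

chain = NodeChain([
    Node(lambda ts: ts - ts.mean(axis=1, keepdims=True)),  # baseline removal
    Node(lambda ts: ts.std(axis=1)),                       # toy feature generation
])
# Input: a (sensors x time) window; output: a 1D feature vector per sensor.
features = chain.execute(np.array([[1.0, 2.0, 3.0], [4.0, 4.0, 4.0]]))
```

The only compatibility requirement, as described above, is that each node's output type matches the next node's expected input type.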
3.1.3.2 Operation Chains
Similar to concatenating nodes into node chains, operations can be concatenated into operation chains. The first operation then takes the general input summary, and the others take the result summary of the preceding operation as input. At the end, the operation chain produces a series of consecutive summaries. In addition to combining different operations, a benefit of the operation chain in combination with node chain operations is that a long node chain can be split into smaller parts, so that intermediate results can be saved and reused. In an operation chain, operations are performed sequentially, so that parallelization is only possible within each operation.
3.1.3.3 Parallelization
An offline analysis of data processing often requires a comparison of multiple different processing schemes on various datasets. This can and should be done in parallel to reduce the processing time by using all available central processing units (CPUs). Otherwise, exhaustive evaluations might not be possible, as they would require too much time. Operations in pySPACE provide the possibility to create independent processes, which can be launched in a so-called "embarrassingly parallel" mode. This can be used for investigations where various different algorithms and hyperparameters are compared (e.g., spatial filters, filter frequencies, feature generators). As another application example, data from different experimental sessions or different subjects might be processed in parallel. The degree of process distribution is determined in pySPACE by the usage of the appropriate back-end for multicore and cluster systems. Figure 3.3 schematically shows how a data summary of two datasets is processed
automatically with different node chains in parallel.
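The embarrassingly parallel execution of such (dataset, node chain) combinations can be sketched with Python's standard multiprocessing module. This is a simplified stand-in for the pySPACE back-ends; the task function is a toy, where a real task would run a full node chain:

```python
# Sketch of the "embarrassingly parallel" mode with Python's standard
# multiprocessing module (a simplified stand-in for the pySPACE back-ends;
# run_chain is a toy task, real tasks would run a full node chain).
from itertools import product
from multiprocessing import Pool

def run_chain(task):
    """One independent processing task: apply one chain to one dataset."""
    dataset, chain_name = task
    return (chain_name, sum(dataset))  # stand-in for real signal processing

if __name__ == "__main__":
    datasets = [(1, 2, 3), (4, 5)]
    chains = ["chain_a", "chain_b"]
    tasks = list(product(datasets, chains))  # 4 independent tasks
    with Pool() as pool:                     # distributes over available CPUs
        results = pool.map(run_chain, tasks)
    print(len(results))  # 4
```

Because the tasks share no state, the same scheme scales from a multicore PC to a cluster back-end.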
Figure 3.3: Processing scheme of a node chain operation in pySPACE. A and
B are two different datasets (Section 3.1.1), which shall be processed as specified in
a simple spec file (Section 3.2.2). The processing is then performed automatically.
As a result, it can produce new data but also visualizations and performance charts.
To speed up processing the different processing tasks can be distributed over several CPUs (Section 3.1.3.3). The puzzle symbols illustrate different modular nodes
(Section 3.1.2.1), e.g., a cross-validation splitter (1), a feature generator (2), a visualization node (3), and two different classifiers (4a, 4b). They are concatenated to a
node chain (Section 3.1.3.1). Visualization taken from [Krell et al., 2013b].
Additionally, some nodes of the meta package can distribute their internal evaluations by requesting their own subprocesses from the back-end.3 This results in a two-level parallelization.
For further speed-up, process creation and process execution are parallelized. For
the online application, different processing chains are executed in parallel if the same
data is used for different signal processing chains, e.g., to predict upcoming movements and to detect warning perception (P300) from the EEG.
3.2
User and Developer Interfaces
pySPACE was designed as a complete software environment4 without requiring individual hand-written scripts for interaction. Users and developers have clearly defined access points to pySPACE that are briefly described in this section. Most of
these are files in the YAML format. Still, major parts of pySPACE can also be used
as a library,5 e.g., the included signal processing algorithms.
3 This feature has been mainly developed by Anett Seeland.
4 in contrast to libraries
5 This requires adding the pySPACE folder to the PYTHONPATH variable.
3.2.1 System and Storage Interface
The main configuration of pySPACE on the system is done with a small setup script
that creates a folder, by default called pySPACEcenter, containing everything in one
place the user needs to get started. This includes the global configuration file, links
to main scripts to start pySPACE (see Sections 3.2.3 and 3.2.4), a sub-folder for files
containing the mission specification files (see Section 3.2.2), and the data storage
(input and output). Examples can be found in the respective folders. The global configuration file is also written in YAML and has default settings that can be changed
or extended by the user.
3.2.2 Processing Interface
No matter if node chains, operations, or operation chains are defined (Figure 3.1),
the specifications for processing in pySPACE are written in YAML. Examples are
the node chain illustrated in Figure 3.4 or the operation illustrated in Figure 3.8.
In addition to this file, the user has to make sure that the data are described with a
short metadata file where information like data type and storage format are specified.
If the data have been processed with pySPACE before, this metadata file is already
present.
The types of (most) parameters in the YAML files are detected automatically
and do not require specific syntax rules as can be inferred from the illustrated
node chain (Figure 3.4), i.e., entries do not have to be tagged as being of type integer, floating point, or string. On the highest level, parameters can consist of
lists (introduced with minus on separate lines like the node list) and dictionaries (denoted by “key: value” pairs on separate lines, or in the Python syntax, like
{key1: value1, key2: value2}). During processing, these values are directly
passed to the initialization of the respective object.
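The automatic type detection can be illustrated with PyYAML (this assumes the third-party `yaml` package, which pySPACE requires anyway; the node names follow Figure 3.4):

```python
# Illustration of YAML's automatic type detection with PyYAML: entries are
# not tagged, yet integers, floats, booleans, lists, and dictionaries are
# inferred from the syntax alone.
import yaml

spec = """
- node : Decimation
  parameters :
    target_frequency : 25
- node : TrainTestSplitter
  parameters :
    train_ratio : 0.5
    random : True
    folds : [1, 2, 3]
    options : {key1: value1, key2: value2}
"""

nodes = yaml.safe_load(spec)
params = nodes[1]["parameters"]
print(type(nodes[0]["parameters"]["target_frequency"]))  # <class 'int'>
print(type(params["train_ratio"]))                       # <class 'float'>
print(type(params["random"]))                            # <class 'bool'>
print(params["options"])
```

The resulting Python objects are exactly what is passed to the initialization of the respective node.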
Figure 3.4 shows an example of a node chain specification that can be used to
process EEG data. It illustrates the concatenation of different node categories (introduced in Section 3.1.2.1).6 Data samples for this node chain could, e.g., consist
of multiple EEG channels and multiple time points, so that after loading one would
obtain windowed time series. Each data sample is then processed as specified: each
channel is standardized, reduced in sampling rate, and lowpass filtered. Then, the
data are equally split into training and testing data to train the supervised learning
algorithms, which are, in this example, the spatial filter xDAWN [Rivet et al., 2009],
the feature normalization and the classifier later on (here, the LibSVM Support Vector Machine as implemented by [Chang and Lin, 2011]). Included in this node chain
is a hyperparameter optimization (grid search) of the regularization parameter of
6 For simplicity, most default parameters were not displayed.
# supply node chain with data
- node : TimeSeriesSource
# three preprocessing algorithms
# standardize each sensor: mean 0, variance 1
- node : Standardization
# reduce sampling frequency to 25 Hz
- node : Decimation
  parameters :
    target_frequency : 25
# filtering with fast Fourier transform
- node : FFTBandPassFilter
  parameters :
    pass_band : [0.0, 4.0]
# split data to have 50% training data
- node : TrainTestSplitter
  parameters :
    train_ratio : 0.5
    random : True
# linear combination of sensors to get
# reduced number of (pseudo) channels (here 8)
- node : xDAWN
  parameters :
    retained_channels : 8
# take all single amplitudes as features
- node : TimeDomainFeatures
# mean 0 and variance 1 for each feature
# (determined on training data)
- node : GaussianFeatureNormalization
# meta node, calling classifier for
# optimizing one parameter (complexity ~~C~~)
- node : GridSearch
  parameters :
    optimization : # define the grid
      ranges : {~~C~~ : [0.1, 0.01, 0.001, 0.0001]}
    evaluation : # which metric to optimize
      metric : Balanced_accuracy
    validation_set : # how to split training data
      splits : 5 # 5-fold cross-validation
    nodes :
      # classifier wrapper around external SVM
      - node : LibSVMClassifier
        parameters :
          complexity : ~~C~~
          kernel : LINEAR
# Optimize the decision boundary for BA
- node : ThresholdOptimization
# calculate various performance metrics
- node : PerformanceSink
Figure 3.4: Node chain example file. Comments are denoted by a “#”.
For further explanation see Section 3.2.2.
the classifier. This is done with five-fold cross-validation on the training data. Finally, performance metrics are calculated for training and testing data, respectively.
In a real application, the example in Figure 3.4 can be used to classify P300 data as
described in Section 0.4.
3.2.3 Offline Analysis
Stored data can be analyzed in pySPACE using the launch.py script. This script is
used for operations and operation chains. The user only needs the respective specification file in YAML. The file name is a mandatory parameter of launch.py. For non-serial execution with distributed processing, the parallelization mode parameter (e.g., “mcore” for multicore) is required. The operation specified in a file
called my_operation.yaml can be executed from the command line, e.g., as
./launch.py -o my_operation.yaml --mcore .
GUIs exist for the construction of node chains and especially for the exploration of the results. With the latter (example given in Figure 3.9), different metrics can be displayed, parameters compared, and the view can be restricted to sub-parts of the complete results output, e.g., to explore only the results of one classifier type even though several different types were processed. In Section 3.4.1 an example of an offline analysis is given and explained.
3.2.4 Online Analysis
For processing data from a recording device in an application, it is required to define
a specific node chain, train it (if necessary), and then use it directly on incoming data.
This is possible using the pySPACE live mode.7 It allows the user to define a certain application setup (such as involved components, communication parameters, acquisition hardware, number and type of node chains) by using additional parameter files that reference other pySPACE specification files (as in the offline analysis).
Several node chains can be used concurrently to enable simultaneous and parallel
processing of different chains. For this, data are distributed to all node chains and
the results are collected and stored or sent to the configured recipient (e.g., a remote
computer). The data can be acquired from a custom IP-based network protocol or
directly from a local file for testing purposes and simulation. Data from supported acquisition hardware8 can be converted to the custom network protocol using a dedicated software tool that comes bundled with pySPACE.
7 I did not contribute to pySPACE live except some debugging, tuning, and enabling the incremental learning.
8 e.g., the BrainAmp USB Adapter by Brain Products GmbH (Gilching, Germany)
3.2.5 Extensibility, Documentation and Testing
Integration of new nodes, operations, and dataset definitions is straightforward due
to the modular structure of pySPACE. Once written and included in the software
structure, they automatically appear in the documentation and can be used with the
general YAML specification described above.9 All operations and nodes come with a
parameter description and a usage example. If necessary, single nodes can be defined outside of pySPACE, and they will still be included in the same way if they are specified via the global configuration file (Section 3.2.1).
If pySPACE shall be used for more complex evaluation schemes, pure YAML syntax is no longer sufficient for our domain-specific language (DSL) (e.g., 5000 testing values from 10^-5 to 100 with logarithmic scaling). Consequently, we allow for
Python code injections via strings in a YAML file which are later on replaced by the
real values. The injections can be used for defining parameter ranges and when modifying the parameters in the node definitions. Some examples are given in Figure C.2,
indicated by the “eval(...)” string. This combines the simplicity of the YAML format
with the power of Python to describe more complex evaluations in a readable and
compressed format.
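A minimal sketch of such a code injection mechanism could look as follows (the helper below is hypothetical, not the actual pySPACE implementation):

```python
# Hypothetical sketch of the eval(...) code injection (not the actual pySPACE
# implementation): strings of the form "eval(<expression>)" in the loaded
# YAML structure are replaced by the result of evaluating the expression.
import re

def resolve_eval_strings(value):
    """Recursively replace 'eval(...)' strings by their evaluated result."""
    if isinstance(value, str):
        match = re.fullmatch(r"eval\((.*)\)", value, flags=re.DOTALL)
        if match:
            # restricted evaluation; production code would sandbox this further
            return eval(match.group(1), {"__builtins__": {}, "range": range})
        return value
    if isinstance(value, list):
        return [resolve_eval_strings(v) for v in value]
    if isinstance(value, dict):
        return {k: resolve_eval_strings(v) for k, v in value.items()}
    return value

# e.g., a logarithmic parameter range that plain YAML cannot express
spec = {"ranges": {"C": "eval([10**(-5 + 0.5 * i) for i in range(15)])"}}
resolved = resolve_eval_strings(spec)
print(len(resolved["ranges"]["C"]))  # 15
```

The YAML file stays plain and readable, while the injected expression generates the full parameter range at load time.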
These configuration files are experiment descriptions which can be directly exchanged between users to discuss problems, to standardize processing schemes, or simply to communicate reproducible approaches. The comparison between our experiment descriptions is much easier than comparing scripts because of

• the better structure of the description,
• standardized keywords, and
• less required information/commands and consequently a very compact representation.
The documentation of pySPACE is designed for both users and developers. We followed a top-down approach with a smooth transition from high-level to low-level documentation and final linking to the source code for the developers. The documentation is automatically compiled with the documentation generator Sphinx.10 We largely customized the generator of the documentation structure, which creates overviews of existing packages, modules, and classes (API documentation). Some properties of nodes are automatically determined and integrated into their documentation, like input data types and possible names for usage in the YAML specification. Furthermore, a list of all available nodes, and lists of usage examples for operations and operation chains, are generated automatically by parsing the software structure. In contrast to other well-known projects using Sphinx, like scikit-learn or Python itself, we also programmed the main page in the reStructuredText format, which is the basis of Sphinx documentation, and did not use specific commands for webpage design (HTML).

9 Class names and YAML strings are automatically matched.
10 http://sphinx-doc.org/
Additionally, test scripts and unit tests are available in the test component of
pySPACE. During the software development process, the infrastructure mostly remains untouched but very often new nodes are implemented or existing nodes are
extended. To improve test coverage, we developed a generic test concept for the nodes:
1. The node documentation is checked for an example to create a node.
2. A node is created using this example.
3. Predefined data is used to be processed by the node, including the training procedure.
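The three steps above can be sketched as follows (the toy node class and helper names are assumptions for illustration, not pySPACE code):

```python
# Sketch of the generic node test concept: read the construction example from
# the node documentation (step 1), create the node from it (step 2), and
# process predefined data with it (step 3).
import unittest

class ThresholdNode:
    """Toy node whose docstring carries a construction example.

    **Exemplary Call**
        ThresholdNode(threshold=0.5)
    """
    def __init__(self, threshold):
        self.threshold = threshold

    def execute(self, sample):
        return 1 if sample > self.threshold else -1

def example_call_from_doc(node_class):
    """Step 1: extract the construction example from the documentation."""
    for line in node_class.__doc__.splitlines():
        line = line.strip()
        if line.startswith(node_class.__name__ + "("):
            return line
    raise ValueError("no construction example found in docstring")

class GenericNodeTest(unittest.TestCase):
    def test_node(self):
        call = example_call_from_doc(ThresholdNode)           # step 1
        node = eval(call)                                     # step 2
        predictions = [node.execute(x) for x in [0.2, 0.9]]   # step 3
        self.assertEqual(predictions, [-1, 1])

result = GenericNodeTest("test_node").run()
print(result.wasSuccessful())  # True
```

Because the construction example lives in the documentation, the same generic test also verifies that every node's documentation stays up to date.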
Furthermore, an interface is provided to use this generic testing concept to easily
generate self-defined tests by providing the respective input and output data. This concept will be used in the future to define a test suite that simply checks for changes in the processing, to support change management.
The documentation is generated and the unit tests are executed automatically on a daily basis. Bug reports are possible via email to the pySPACE developer list or via issue reports on https://github.com/pyspace/pyspace.
3.2.6 Availability and Requirements
pySPACE can be downloaded from https://github.com/pyspace and is distributed under GNU General Public License. The documentation can be found there,
too. The software can currently be used on Linux, OS X, and Windows. For parallelization, off-the-shelf multi-core PCs as well as cluster architectures using the message passing interface (MPI) or the IBM LoadLeveler system can be interfaced. The software requires Python 2.6 or 2.7, NumPy, SciPy, and YAML. Further optional dependencies exist, e.g., Matplotlib [Hunter, 2007] is required for plotting. Computational efficiency is achieved by using C/C++ code libraries where necessary, e.g., NumPy works with C arrays and implementations, and SVM classification can be performed using the Python wrapper of the LIBSVM C++ package.
3.3
Optimization Problems and Solution Strategies
The optimization of data processing chains is very complex. Hence, some separation
into subproblems and respective solution strategies is required. In this section, we
will highlight some subproblems and solution approaches. pySPACE can be seen as
an interface to implement existing approaches and also to explore new approaches.
Performance Evaluation and the Class Imbalance Problem This paragraph introduces several metrics integrated into pySPACE and analyzes their sensitivity to the class ratio.
It includes text parts and figures from Dr. Sirko Straube and is based on:
Straube, S. and Krell, M. M. (2014). How to evaluate an agent’s behaviour to infrequent events? – Reliable performance estimation insensitive to class distribution.
Frontiers in Computational Neuroscience, 8(43):1–6, doi:10.3389/fncom.2014.00043.
I contributed a few text parts to this paper. My contributions were the (re-)discovery of the class imbalance problem in the machine learning context, the evaluation in this paper which illustrates the class imbalance problem, and discussions about the paper and about performance metrics in general.
For optimizing the processing chain, a performance measure is required to quantify
which algorithm is better than another. The basis of defining performance metrics is
the confusion matrix, which is introduced in Figure 3.5. The figure also briefly summarizes the most important metrics.11 They are separated into two groups because, when keeping the decision algorithm but changing the ratio of positive and negative samples, some metrics are sensitive to this change and some are not.12 Another issue when looking at these metrics is the lack of common naming conventions.
The true positive rate (TPR) is also called Sensitivity or Recall. The true negative
rate (TNR) is equal to the Specificity. When the two classes are balanced, the accuracy (ACC) and the balanced accuracy (BA) are equal. The weighted accuracy (WA) is
a more general version introducing a class weight w (for BA: w=0.5). The BA is sometimes also referred to as the balanced classification rate [Lannoy et al., 2011], classwise balanced binary classification accuracy [Hohne and Tangermann, 2012], or as
a simplified version of the AUC [Sokolova et al., 2006, Sokolova and Lapalme, 2009].
Another simplification of the AUC is to assume standard normal distributions, so that each value of the AUC corresponds to a particular shape of the receiver operating characteristic (ROC) curve [Green and Swets, 1988, Macmillan and Creelman, 2005]. This simplification is denoted AUCz, and it is this shape that is
assumed when using the performance measure d′. This measure is the distance between the means of the signal and noise distributions in standard deviation units given by the z-score. The two are related by AUCz = Θ(d′/√2), where Θ is the normal distribution function. A formula for calculating the general AUC is given by
11 All mentioned performance metrics and many more are integrated into pySPACE and calculated for every (binary) classifier evaluation.
12 Explained later in this section in more detail and depicted in Figure 3.6.
Figure 3.5: Confusion matrix and metrics. (A) The performance of an agent discriminating between two classes (positives and negatives) is described by a confusion
matrix. Top: The probabilities of the two classes are overlapping in the discrimination space as illustrated by class distributions. The agent deals with this using
a decision boundary to make a prediction. Middle: The resulting confusion matrix
shows how the prediction by the agent (columns) is related to the actual class (rows).
Bottom: The true positive rate (TPR) and the true negative rate (TNR) quantify the
proportion of correctly predicted elements of the respective class. (B) Metrics based
on the confusion matrix grouped into sensitive and non-sensitive metrics for class imbalance when both classes are considered. Visualization and shortened description
taken from [Straube and Krell, 2014].
[Keerthi et al., 2007]:

AUC = \frac{1}{\left(\sum_{y_i=1} 1\right)\left(\sum_{y_j=-1} 1\right)} \sum_{y_i=1} \sum_{y_j=-1} \frac{1 - \operatorname{sgn}\left(f(x_j) - f(x_i)\right)}{2}    (3.1)
with testing data samples x_i, x_j and corresponding labels y_i, y_j.
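Formula (3.1) can be written directly as code: it counts, over all (positive, negative) sample pairs, the fraction in which the positive sample receives the higher decision value, with ties contributing 0.5:

```python
# The pairwise AUC formula (3.1) as code: the fraction of (positive, negative)
# sample pairs that the decision function f ranks correctly; ties count 0.5.

def auc_pairwise(scores, labels):
    """scores: decision values f(x); labels: +1 or -1 per sample."""
    positives = [s for s, y in zip(scores, labels) if y == 1]
    negatives = [s for s, y in zip(scores, labels) if y == -1]
    total = 0.0
    for f_pos in positives:
        for f_neg in negatives:
            diff = f_neg - f_pos                # f(x_j) - f(x_i)
            sgn = (diff > 0) - (diff < 0)
            total += (1 - sgn) / 2              # 1 correct, 0.5 tie, 0 wrong
    return total / (len(positives) * len(negatives))

print(auc_pairwise([0.9, 0.8, 0.3, 0.1], [1, 1, -1, -1]))  # 1.0 (perfect ranking)
print(auc_pairwise([0.5, 0.5], [1, -1]))                   # 0.5 (tie)
```

Since only the ranking of the decision values enters, the AUC is independent of any fixed decision threshold.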
The Matthews correlation coefficient (MCC) is also known as the phi coefficient or, in statistics, the Pearson correlation coefficient, and in contrast to the other metrics it can also be used straightforwardly for regression problems. The F-measure [Powers, 2011] is also referred to as the F-score. A version as a weighted harmonic mean is named the Fβ-score, where β denotes the weighting factor. It has, for example, been used in the aforementioned optimization approaches by Keerthi et al. and Eitrich et al.13 Despite its strong sensitivity to the ratio of positive and negative samples and its lack of interpretability, it is still used very often, especially in text classification, where largely unbalanced settings occur [Lipton et al., 2014].
The problem of metrics being sensitive to class imbalance is quite old [Kubat et al., 1998] but still seems to be no common knowledge. In [Straube and Krell, 2014], the authors argue that class imbalance is very common in realistic experiments. Prominent examples can also be found in the evaluation of unary classification (see Section 1.4). For multi-class evaluations, it gets even worse [Lipton et al., 2014]. Lipton et al. report that it is crucial to optimize the decision criterion (threshold) when using the F-measure, and that using the default threshold (zero) for the SVM is usually not a good choice. This effect is also partially related to class imbalance. Generally, it is always good to choose a threshold which optimizes the performance measure of interest [Metzen and Kirchner, 2011]. This threshold optimization algorithm is integrated into pySPACE.
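The idea behind such a threshold optimization can be sketched as a simple search over candidate thresholds (a minimal illustration, not the pySPACE ThresholdOptimization node):

```python
# Sketch of threshold optimization: pick the decision threshold that
# maximizes the metric of interest, here the balanced accuracy (BA).

def balanced_accuracy(scores, labels, threshold):
    preds = [1 if s > threshold else -1 for s in scores]
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    tn = sum(1 for p, y in zip(preds, labels) if p == -1 and y == -1)
    return 0.5 * (tp / labels.count(1) + tn / labels.count(-1))

def optimize_threshold(scores, labels):
    """Try every midpoint between consecutive sorted decision values."""
    candidates = sorted(scores)
    midpoints = [(a + b) / 2 for a, b in zip(candidates, candidates[1:])]
    return max(midpoints, key=lambda t: balanced_accuracy(scores, labels, t))

# imbalanced data where the default threshold 0 is a poor choice
scores = [0.9, 0.8, 0.7, 0.2, 0.1]
labels = [1, 1, -1, -1, -1]
best = optimize_threshold(scores, labels)
print(balanced_accuracy(scores, labels, 0.0) < balanced_accuracy(scores, labels, best))  # True
```

With the default threshold 0 every sample is predicted positive (BA = 0.5), while the optimized threshold separates the classes perfectly here.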
Using metrics which are sensitive to the class ratio makes results incomparable when class ratios change. Furthermore, their absolute values are fairly meaningless if the class ratio is not reported. Figure 3.6 visualizes the effect of changing the class ratio in an evaluation and how it affects the sensitive metrics. It also visualizes a related effect concerning the comparability between true classifiers and different “guessing” classifiers. The graphic clearly shows that, at least from the perspective of class imbalance, no metric should be used which is sensitive to the class ratio, because the values change very much. Even the normalization of the mutual information (MI) does not make the metric insensitive. For more details on metrics and the imbalance problem, refer to [Straube and Krell, 2014].
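The sensitivity discussed above can be reproduced numerically with the confusion matrix values from Figure 3.6: keeping TPR = 0.9 and TNR = 0.7 fixed and only changing the class ratio from 1:1 to 1:4 shifts the accuracy but leaves the balanced accuracy unchanged:

```python
# Class-ratio sensitivity reproduced with the confusion matrix of Figure 3.6:
# the agent keeps TPR = 0.9 and TNR = 0.7; only the class ratio changes.

def metrics(tp, fn, tn, fp):
    tpr = tp / (tp + fn)                   # sensitivity / recall
    tnr = tn / (tn + fp)                   # specificity
    acc = (tp + tn) / (tp + fn + tn + fp)  # accuracy: sensitive to class ratio
    ba = 0.5 * (tpr + tnr)                 # balanced accuracy: insensitive
    return tpr, tnr, acc, ba

balanced = metrics(tp=90, fn=10, tn=70, fp=30)      # class ratio 1:1
imbalanced = metrics(tp=90, fn=10, tn=280, fp=120)  # class ratio 1:4, same rates
print(round(balanced[2], 2), round(imbalanced[2], 2))  # 0.8 0.74 (ACC drops)
print(round(balanced[3], 2), round(imbalanced[3], 2))  # 0.8 0.8 (BA unchanged)
```

The decision algorithm is identical in both cases; only the reported accuracy moves with the class ratio.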
Classifier Problem Usually, classifiers are defined as an optimization problem
and not as a concrete algorithm. Except for the sparse approaches, solving SVM
variants is not so difficult because the models of interest are defined as convex optimization problems which can be simplified by duality theory (see Section 1). The only
difficulty is that the algorithms need to be able to tackle large amounts of data, e.g.,
13 It was also primarily used by the developers of pySPACE but later on replaced by the BA due to its sensitivity to class imbalance.
Figure 3.6: Performance, class ratios and guessing. Examples of metric sensitivities to class ratios (A) and agents that guess (B). The effects on the metrics AUC and d′ are represented by AUCz, using the simplification of assumed underlying normal distributions. The value of d′ in this scenario is 0.81. Similarly, the BA also represents
the effect on the WA. (A) The agent responds with the same proportion of correct
and incorrect responses, no matter how frequent positive and negative targets are.
For the balanced case (ratio 1:1) the obtained confusion matrix is [TP 90; FN 10;
TN 70; FP 30]. (B) Hypothetical agent that guesses either all instances as positive
(right) or as negative (left) in comparison to the true agent used in (A). Class ratio is
1:4, colors are the same as in (A). The performance values are reported as difference
to the performance obtained from a classifier guessing each class with probability
0.5, i.e., respective performances for guessing are: [ACC 0.5; G-Mean 0.5; BA 0.5;
F-Measure 0.29; MCC 0; AUCz 0.5; nMI 0]. Visualization and description taken from
[Straube and Krell, 2014]. nMI denotes the normalized MI. The respective normalization factor is the inverse of the maximum possible MI due to class imbalance.
by online learning, iteration over samples, or reduction of the training data (see also
Section 1.2). Different implementations of classifiers are available in pySPACE (not
limited to SVM variants).
Over- and Underfitting Note that the overall goal is to find the perfect processing scheme which finally detects/classifies the signal of interest as well as possible. This especially includes the generalization to unseen data and situations. The main dilemma is that we build a model of our data on the given training data, but the model is then expected to perform well on unseen data.
A direct approach to tackle the problem of unseen data is to later integrate this new data into the decision algorithm with online learning in an online application (see also Section 1.2).
If a classifier model does not fit the data well enough, it is said to be underfitting.
This problem is often already considered in the classifier design, especially by the
kernel, loss, and regularization approaches mentioned in Section 1.1.1.2.
The kernel enables complex models that can fit to the data even in cases where a
linear model is not appropriate for the data at hand. The loss term is usually motivated by an assumption on the noise in the data which inhibits a perfect matching of
the chosen model to the data. Both concepts enable a good fitting of the model to the
data. Only if either loss or kernel is not chosen well, the model will be underfitting.
Unfortunately, more often than not, data cannot be provided in sufficient quantity, and consequently the model fitting might be too exact and not generalize well on unseen data. This effect is called overfitting.
Here, regularization (in combination with the loss term) is an approach to avoid
this effect and to obtain more general models (e.g., because the margin between the
two classes is maximized or sparse solutions are enforced).
The price to pay is that the resulting regularization parameter has to be optimized in addition to potential hyperparameters of the chosen loss function or the kernel. Unfortunately, optimizing hyperparameters can again result in over- or underfitting, especially if too many hyperparameters are used and optimized. So the problem of over- and underfitting might just be lifted to a higher level.
Hyperparameter Optimization
Hyperparameters of the classifier considered in
this thesis are
• the regularization parameter C,
• the extension of this parameter with class weighting (i.e., C(yj )),
• the range parameter R of the BRMM or the radius R of the unary PAA, and
• specific kernel parameters (see Table 1.1).
Sometimes, even more hyperparameters are introduced for additional tuning, like sample-dependent weightings Cj or feature weightings. Furthermore, hyperparameters of the solution algorithms, like the number of iterations and the stopping tolerance, could be optimized. The type of loss and regularization could be changed, too, which is not considered in the following. The optimization of these hyperparameters cannot yet be considered as sufficiently well “solved”.
Even the most basic step of choosing an appropriate evaluation metric is not always straightforward, as previously discussed. For evaluating an algorithm, there are several evaluation schemes (like k-fold cross-validation) which are quite well studied. Most common evaluation approaches result in a function which is not even continuous and might have several local optima. Another difficulty is that function evaluations are very expensive because they require repeatedly training the classifier and evaluating it on testing data.
A straightforward approach to handle function evaluations and reduce processing time is to use parallelization, as done in pySPACE. To really speed up the repeated classifier training with different hyperparameters, warm starts [Steinwart et al., 2009] can be used to initialize the optimization algorithms which construct the classifier.14 For (n − 1)-fold cross-validation (leave-one-out error) there are special schemes for additional speed-up [Lee et al., 2004, Loosli et al., 2007, Franc et al., 2008]. Another approach for saving processing resources is to use heuristics for the hyperparameter optimization and to focus on finding a “quasi-optimal” solution [Varewyck and Martens, 2011], which is often sufficient. This approach is specifically designed for the C-SVM with RBF kernel. First, the data is normalized, then the hyperparameter γ of the kernel is calculated directly, and finally only at most 3 values have to be tested for the regularization parameter C. This scheme can be used in pySPACE and is very helpful because finding a good γ by hand is difficult. A similar (but more complex) approach can be found in [Keerthi and Lin, 2003], which uses Theorem 5 for the C-SVM and can be generalized to BRMM and SVR using Theorem 14. First, C is optimized for the linear case, and then a line search is performed with a fixed ratio between the hyperparameters γ and C of the respective classifier with RBF kernel.
The hyperparameter optimization and the C-SVM classifier problem can also be seen as a bilevel optimization problem. Hence, one approach is to tackle both problems at once [Keerthi et al., 2007, Moore et al., 2011]. In [Moore et al., 2011], only SVR was handled with a simple validation function. In [Keerthi et al., 2007], the validation function is smoothed, which results in a difference to the targeted validation function. The evaluation is not broad (only 4 datasets) but it is promising. Unfortunately, the code is not provided and their implementation does not scale well with the number of training samples.15 It is surprising that the authors did not continue their work on this algorithm. It would be interesting to investigate this approach in more detail in the future (e.g., using a large-scale optimizer like WORHP [Büskens and Wassel, 2013]).
For analyzing the aforementioned change of the validation function, we integrated smooth versions of existing metrics into pySPACE for further analysis. Here we realized that some smoothing techniques will not work because the resulting metrics are too different from the target metric. For the metrics related to [Keerthi et al., 2007], it is important to look at the parameter of the smoothing function or even adapt it
14 We implemented and tested a pattern search (Figure 3.7) using a warm start, which resulted in a large speed-up but also required a lot of memory resources, because several processing chains had to be used in parallel for different choices of the hyperparameter values and randomizations of the data splitting.
15 Maybe this could be handled using parallelization techniques.
during the optimization. Too low values of this parameter result in a large difference to the target metric, and too high values might result in numerical problems due to too high values of the derivative. We also integrated the smoothing approach from [Eitrich and Lang, 2006, Eitrich, 2007].16
Pattern Search Eitrich et al. use a smoothed metric to optimize several SVM hyperparameters with a pattern search method (Figure 3.7). An important aspect of their approach is the large speed-up due to parallelization of the pattern search, the function evaluation, and the C-SVM solving strategies [Eitrich, 2006]. Due to the use of the pattern search, the method is derivative-free and can be applied to a very large class of optimization problems, in contrast to the previous bilevel optimization. Unfortunately, the pattern search comes with additional hyperparameters.
We also integrated the pattern search into pySPACE. In contrast to [Eitrich, 2006], we only used a parallelization of the pattern search and the validation cycle, but not of the solution of the C-SVM problem. Implementing such a concept in Python is not straightforward, because communication and other overhead due to the parallelization has to be kept low, and when using the standard parallelization package in Python (multiprocessing) an additional second level of parallelization is not possible anymore.
When exploring performance plots of BRMMs with pySPACE (not reported) several observations can be made as listed in the following
• Rather high values for C and R (e.g., 1 and 10, respectively) provide better results and faster convergence of the solution algorithms compared to very low
values (e.g., 10−5 and 1.1).
• If the evaluation metric is not smoothed, there will always be plateaus (for
mathematical reasons) but they are not relevant if the number of testing samples is sufficiently high.
• There is always a maximum meaningful value for R and C which should be considered. Choosing higher values will result in the same performance and also in a plateau in the hyperparameter landscape.
• Combining all three observations, it is good to start with rather high values. Furthermore, at least at the beginning of the pattern search, the hyperparameters should be reduced when performance is not decreasing, instead of requesting a performance improvement, so that the algorithm does not get stuck on a plateau.
• It is often more efficient to work with logarithmic steps in the hyperparameter landscape, as also suggested in [Keerthi and Lin, 2003, Varewyck and Martens, 2011].

16. Smoothing the validation function is supported by the fact that the classification function can sometimes be chosen partially smooth, with the regularization parameter as variable (see Theorem 24 in the appendix). Consequently, the composed function is expected to be at least partially smooth.

Chapter 3. Optimizing: pySPACE
These approaches of customizing the pattern search are possible with our implementation.
1. Take a sequence of direction sets Dk (e.g., Dk = {ei | i = 1, .., n} ∪ {−ei | i = 1, .., n}), an initial step size s0, an initial starting point x0, f0 := f(x0) (current minimal value), a contraction parameter c, a step tolerance t, and a decreasing sequence p(sk) to define the minimal improvement (e.g., constantly zero), and iterate over k
2. Evaluate the points xk + sk · d for d ∈ Dk
3. If f(xk + sk · d) < fk − p(sk) for some d:
• sk+1 = sk or increased
• xk+1 = xk + sk · d and fk+1 = f(xk+1)
• Continue with Step 2
4. Otherwise: sk+1 = c · sk and xk+1 = xk
5. If sk+1 < t: STOP
6. Continue with Step 2
Figure 3.7: General scheme of the pattern search [Nocedal and Wright, 2006].
There are numerous variants/extensions of this method like restricting the number
of iterations or performing the evaluations asynchronously [Gray and Kolda, 2006].
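The scheme of Figure 3.7 can be sketched in a few lines; this is a minimal sequential version for illustration only (the parallel evaluation used in pySPACE, the optional step size increase, and the asynchronous variants are omitted, and all names are our own):

```python
import numpy as np

def pattern_search(f, x0, s0=1.0, c=0.5, tol=1e-3, p=lambda s: 0.0,
                   max_iter=1000):
    """Minimal sequential pattern search following Figure 3.7.

    f: objective to minimize, x0: starting point, s0: initial step size,
    c: contraction parameter in (0, 1), tol: step tolerance,
    p: minimal required improvement per step size (here constantly zero).
    """
    x = np.asarray(x0, dtype=float)
    fx, s, n = f(x), s0, len(x)
    # coordinate directions +e_i and -e_i as in the example direction set
    directions = [sign * np.eye(n)[i] for i in range(n) for sign in (1, -1)]
    for _ in range(max_iter):
        improved = False
        for d in directions:
            trial = x + s * d
            ft = f(trial)
            if ft < fx - p(s):       # sufficient decrease found
                x, fx, improved = trial, ft, True
                break                # keep the step size and continue
        if not improved:
            s *= c                   # contract the step size
            if s < tol:
                break                # step tolerance reached: stop
    return x, fx
```

Because only function values are compared, the sketch is derivative-free, which is exactly the property exploited in the text above.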
Grid Search
Despite the previously mentioned promising approaches for hyperparameter optimization, in most cases the grid search is used (or even no hyperparameter optimization is performed or reported at all). In this case, the algorithms are
evaluated on a predefined grid of values for the hyperparameters and the best one is
chosen. This approach is also implemented in pySPACE with support for parallelization. It is inefficient for two reasons. First, it does not exploit knowledge about the topography of the landscape of function values to derive good regions to expand.
And second, if the optimal point is outside of the grid region, the performance result
can be much worse than in the other hyperparameter optimization approaches. Nevertheless, it usually provides sufficiently good results as for example shown by the
variant in [Varewyck and Martens, 2011]. The large dependency of the performance on the chosen grid makes this popular algorithm difficult to compare to real optimization algorithms, because it is always possible to choose a grid which performs at least equally well or better, or a grid which performs worse.
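A minimal sketch of such a grid search (the evaluate function stands for a full validation cycle and is purely illustrative):

```python
from itertools import product

def grid_search(evaluate, grid):
    """Evaluate all combinations of a predefined hyperparameter grid and
    return the best setting (here: maximizing the performance metric)."""
    names = sorted(grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(grid[name] for name in names)):
        params = dict(zip(names, values))
        score = evaluate(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# example: logarithmic grid for the regularization parameter C
best, score = grid_search(lambda p: -abs(p["C"] - 1.0),
                          {"C": [10 ** k for k in range(-2, 3)]})
```

Note that the optimum found can never be better than the best grid point, which is exactly the dependency on the chosen grid criticized above.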
Preprocessing Optimization
So far, we only discussed methods for optimizing
the hyperparameters of the classifier. What is missing in the literature are approaches that additionally optimize the preprocessing. Even though the generation of meaningful features in the preprocessing is expected to have a large impact [Domingos, 2012], it is mostly done by hand using expert knowledge.
In [Flamary et al., 2012] raw data from a time series was used and the optimization of the filter in the preprocessing was combined with the classifier construction.
The target function of C-SVM is extended with a regularization term of the filter
(including an additional regularization constant). For optimization, a two-stage algorithm is suggested, which alternates between updates of the C-SVM and the filter.
The optimization of a multi-column deep neural network [Schmidhuber, 2012] can
also be seen as a joint optimization of feature generation and classifier. Here, the
different layers of the neural network can be identified with different types of preprocessing or feature generation. In the context of pure feature learning without
classification, neural networks are also used [Ranzato et al., 2007].
Discussion For optimizing the complete processing chain, pySPACE has a great advantage over the previously mentioned approaches. Grid search and pattern search
can be applied to complete processing chains without much additional effort. It is
even possible to have hybrid approaches, where the grid defines different types of
algorithms and the pattern search optimizes the respective algorithm hyperparameters. Furthermore, arbitrary node chains, evaluation schemes, and performance
metrics which are available in pySPACE can be combined to define the optimization
procedure. Even without the optimization algorithms, pySPACE largely supports the
comparison of algorithms, as shown in the examples in this thesis. In the future, we plan to use this interface to implement a completely automatic optimization process, which will be called autoSPACE and which will work on a database of datasets.
3.4 pySPACE Usage Examples
pySPACE is applicable in various situations, from simple data processing through comprehensive algorithm comparisons to online execution. In this section an example
for an offline analysis is given that comprises most of the key features of pySPACE.
Thereby it is shown how the intended analysis can be easily realized without the
need for programming skills. Published work and related projects are named where
pySPACE has been used, most often with such an offline analysis. Finally, a more
complex example is given which incorporates content from the previous main chapters.
3.4.1 Example: Algorithm Comparison
In the following, an exemplary and yet realistic research question for processing neurophysiological data serves to explain how a node chain can be parameterized and
thus different algorithms and hyperparameters can be tested. To show that for such
a comparison of algorithms and/or algorithm hyperparameters pySPACE can be a
perfect choice, the whole procedure from data preparation to final evaluation of the
results is described.
Data and Research Question
We take the data described in Section 0.4. Our aim, besides the distinction of the two
classes Standard and Target, is to investigate the effect of different spatial filters,
i.e., ICA, PCA, xDAWN, and CSP (see also Section 2.2.1.4), on the classification performance, or whether one should not use any spatial filter at all (denoted by “Noop”).
Spatial filters aim to increase the signal-to-noise ratio by combining the data of the original electrodes into pseudo-channels. Thereby, not only can performance be increased, but information is also condensed into few channels, enabling a reduction of dimensionality and a lower processing effort. Thus, a second research
question here is to evaluate the influence of the number of pseudo-channels on the
classification performance.
Data Preparation
In our example, each recording session consists of five datasets. To have a sufficient amount of data, they were all concatenated. This is an available operation
in pySPACE after the data were transferred from stream (raw EEG format) to the
pySPACE time series format. Therefore, after data preparation, all merged recordings that should be processed are present in the input path (see below), each in a
separate sub-directory with its own meta file.
Processing Configuration
The algorithm comparison has to be specified in a file as depicted in Figure 3.8. The
type keyword declares the intended operation, i.e., node chains will be executed. The
data, which can be found in the directory P300_data (input path) will be processed
according to the specifications in the file P300.yaml. This file is identical to the one
presented in Figure 3.4, except that it is parameterized to serve as a template for
all node chains that should be executed. The parameterization is done by inserting
unique words for all variables that need to be analyzed. In this example, this means that the specification of the xDAWN node is replaced by
− node : __alg__
    parameters :
        retained_channels : __channels__

introducing __alg__ as parameter for the different spatial filters and __channels__ for the varying number of pseudo-channels.
All values that
should be tested for these two parameters are specified in the operation file (Figure 3.8) below the keyword parameter ranges. pySPACE will create all possible
node chains of this operation using the Cartesian product of the value sets (grid).
The value of the parameter __alg__ is the corresponding node name, with Noop (meaning no operation) telling pySPACE that in this condition nothing should be done
with the data. In the example Noop could serve as a baseline showing what happens
when no spatial filter is used.
type : node_chain # operation type
input_path : "P300_data" # location of data in storage folder
templates : ["P300.yaml"] # specification of node chain(s)
parameter_ranges : # Cartesian product of parameters to be tested
    __alg__ : ['CSP', 'xDAWN', 'ICA', 'PCA', 'Noop'] # nodes tested
    __channels__ : [2, 4, 6, 8, 10, 20, 30, 40, 50, 62]
    # number of pseudo-channels
runs : 10 # number of repetitions

Figure 3.8: Operation specification example file for spatial filter comparison. For more details see discussion in Section 3.4.1.
In this example, varying the number of retained channels will lead to equal results for each value in the case of using Noop. Therefore, an additional constraint
could ensure that Noop is only combined with one value of __channels__ which
would reduce computational effort. Furthermore, instead of a grid of parameters, a
list of parameter settings could be specified or Python commands could simplify the
writing of spec files for users with basic Python knowledge. For example, the command range(2, 63, 2) could be used to define a list of even numbers from 2 to 62
instead of defining the number of retained pseudo-channels individually.
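The grid expansion described above (the Cartesian product of the value sets) can be sketched as follows; this mimics the principle only and is not the actual pySPACE code:

```python
from itertools import product

def expand_parameter_ranges(template, parameter_ranges):
    """Instantiate a template once per combination of parameter values,
    mimicking how pySPACE builds the grid of node chains (a sketch,
    not the actual pySPACE implementation)."""
    names = sorted(parameter_ranges)
    chains = []
    for values in product(*(parameter_ranges[name] for name in names)):
        chain = template
        for name, value in zip(names, values):
            chain = chain.replace(name, str(value))
        chains.append(chain)
    return chains

ranges = {"__alg__": ["CSP", "xDAWN", "ICA", "PCA", "Noop"],
          "__channels__": [2, 4, 6, 8, 10, 20, 30, 40, 50, 62]}
chains = expand_parameter_ranges(
    "node: __alg__, retained_channels: __channels__", ranges)
# 5 algorithms x 10 channel numbers = 50 instantiated node chains
```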
Finally, the runs keyword declares the number of repeated executions of each node
chain. Repetitions can be used to compensate for random effects in the results due to
components in the node chain that use randomness, like the TrainTestSplitter. Using different data splitting strategies when processing the same data with different
parameterizations (e.g., spatial filters or number of retained pseudo-channels) would
make the results incomparable. To avoid such behavior and to ensure reproducibility of the results, randomness in pySPACE is realized by using the random package
of Python with a fixed seed that is set to the index of the repeated execution. In
other words, the same value of runs returns the same results for a given dataset and
operation. For obtaining different results, this number has to be changed.
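The seeding strategy just described can be sketched as follows (illustrative only; the actual pySPACE splitter interface differs):

```python
import random

def split_train_test(samples, run_index, train_fraction=0.5):
    """Reproducible random split: the fixed seed is the index of the
    repeated execution, so the same run index always yields the same
    split for the same data (a sketch, not the pySPACE node)."""
    rng = random.Random(run_index)   # fixed seed per repetition
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# the same run index reproduces the identical split
a = split_train_test(range(10), run_index=3)
b = split_train_test(range(10), run_index=3)
```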
Execution and Evaluation
The execution of the operation works as described in Section 3.2.3. The result is
stored in a folder in the data storage, named by the time-stamp of execution. For
replicability, it contains a zipped version of the software stack and the processing
specification files. For each single processing result there is a subfolder named after
the processed data, the specified parameters and their corresponding values. For
evaluation, performance results are not stored separately in these single folders, but
the respective metrics are summarized in a .csv table. Furthermore, by default the
result folders are also compressed and only one is kept as an example.
The result visualization with the evaluation GUI of pySPACE can be seen in Figure 3.9. Here, the varied parameters (compare test parameters in Figure 3.8 with
selection in upper left of Figure 3.9) as well as the data can be selected and individually compared with respect to the desired metric.
Figure 3.9: Visualization from the evaluation GUI for the result of the
spatial filter comparison, explained in Section 3.4.1. Visualization taken from
[Krell et al., 2013b].
It is not surprising that the xDAWN is superior to the other algorithms, because it was specifically designed for the type of data used in this analysis. The good performance of the CSP is interesting, because it is normally only used for the detection of changes in EEG frequency bands connected to muscle movement. The bad performance of the ICA shows that, in contrast to the other filters, its pseudo-channels have no ordering by importance. A correct reduction step here would be to reduce the dimensionality internally in the algorithm's whitening step. This error in interfacing the implementation from the MDP library was fixed as a result of this finding. Normally, the ICA should perform better than the PCA.
3.4.2 Usage of the Software and Published Work
This section briefly highlights the use of pySPACE in the community, in different
projects, and in several publications.
Since pySPACE became open source software in August 2013, there is not yet a
public user community. Usage statistics from the repository are unfortunately not
yet available. The software was announced at the machine learning open source
software webpage (http://mloss.org/software/view/490/) in the context of a
presentation at a workshop [Krell et al., 2013a] which resulted in 2753 views and
575 downloads.
The publication which first presented the software to the com-
munity in a special issue about Python tools for neuroscience [Krell et al., 2013b]
resulted in 1654 views, 192 paper downloads, 10 citations, and 118 mentions in
public networks.
Furthermore, pySPACE has been presented at 3 conferences
[Krell et al., 2013a, Krell et al., 2014b, Krell, 2014], where the last presentation resulted in a video tutorial (http://youtu.be/KobSyPceR6I, 345 views). In 2015,
pySPACE was presented at the CeBIT.
pySPACE has been developed, tested and used since 2008 at the Robotics Innovation Center of the German Research Center for Artificial Intelligence in Bremen and
by the Robotics Research Group at the University of Bremen:
• project VI-Bot (http://robotik.dfki-bremen.de/en/research/projects/vi-bot.html): EEG data analysis for movement prediction and detection of warning perception during robot control with an exoskeleton,
• project IMMI (http://robotik.dfki-bremen.de/en/research/projects/immi.html): EEG and EMG data analysis for movement prediction, detection of warning perception, and detection of the perception of errors applied in embedded brain reading [Kirchner, 2014],
• direct control of a robot with different types of EEG signals,
• project Recupera (http://robotik.dfki-bremen.de/en/research/projects/recupera.html): EEG and EMG data analysis for movement detection to support rehabilitation,
• project ACTIVE (http://robotik.dfki-bremen.de/en/research/projects/active.html): analysis of epileptic seizure EEG data,
• project TransTerrA (http://robotik.dfki-bremen.de/en/research/projects/transterra.html): transfer of results from the project IMMI,
• project VirGo4 (http://robotik.dfki-bremen.de/en/research/projects/virgo4.html): tuning of regression algorithm for robot sensors [Rauch et al., 2013, Köhler et al., 2014],
• project City2.e 2.0 (http://robotik.dfki-bremen.de/de/forschung/projekte/city2e-20.html): comparison of different methods for parking space occupancy prediction (in future),
• project LIMES (http://robotik.dfki-bremen.de/en/research/projects/limes.html): parallelization of robot simulations,
• classification of iterative closest point (ICP) matches into good and bad ones,
• soil detection from sensor values of a robot, and
• every evaluation in this thesis and the visualizations of the backtransformation.
The existing publications are mainly results from the projects VI-Bot and
its follower project IMMI. They only show a small subset of possible applications of the software, documenting its applicability to EEG and EMG data (e.g.,
[Kirchner and Tabie, 2013, Kirchner et al., 2014b, Kirchner, 2014]).
In [Kirchner et al., 2010, Wöhrle et al., 2013a, Seeland et al., 2013b, Kirchner et al., 2013, Kim and Kirchner, 2013, Kirchner, 2014, Wöhrle et al., 2014, Seeland et al., 2015] pySPACE was used for evaluations on EEG data in the context
of real applications. P300 data as described in Section 0.4 is used to customize
complex control environments because warnings do not have to be repeated if
they were perceived by the operator.
Another application is to predict/detect movements, to use them in rehabilitation and/or to adapt an exoskeleton/orthosis according to the predicted/detected movement.
In [Kim and Kirchner, 2013], human brain
signals are analyzed which are related to perception of errors, like interaction error
and observation error. Special formulas for a moving variance filter are used in
pySPACE for EMG data preprocessing [Krell et al., 2013c]. In [Metzen et al., 2011a,
Ghaderi and Straube, 2013, Ghaderi and Kirchner, 2013, Wöhrle et al., 2015] the
framework is used for the evaluation of spatial filters as also done in Section 3.4.1.
An example for a large-scale comparison of sensor selection algorithms can be found
in [Feess et al., 2013] and Section 3.4.3. Here, the parallelization in pySPACE for
a high performance cluster was required, due to high computational load coming
from the compared algorithms and the amount of data used for this evaluation. In
[Ghaderi et al., 2014], the effect of eye artifact removal from the EEG was analyzed.
There are also several publications looking at the adaptation of EEG processing chains [Metzen and Kirchner, 2011, Ghaderi and Straube, 2013, Metzen et al., 2011b, Wöhrle and Kirchner, 2014, Wöhrle et al., 2015, Tabie et al., 2014]. Some machine learning evaluations on EEG data were performed in [Metzen and Kirchner, 2011, Metzen et al., 2011b, Kassahun et al., 2012].
In the context of this thesis, pySPACE was for example used for evaluations of new classifiers on synthetic and benchmarking data [Krell et al., 2014a,
Krell and Wöhrle, 2014] and for visualizing data processing chains with the backtransformation [Krell et al., 2014c, Krell and Straube, 2015].
3.4.3 Comparison of Sensor Selection Mechanisms
This section is based on:
Feess, D., Krell, M. M., and Metzen, J. H. (2013). Comparison of Sensor Selection
Mechanisms for an ERP-Based Brain-Computer Interface. PloS ONE, 8(7):e67543,
doi:10.1371/journal.pone.0067543.
It was largely reduced to the parts relevant for this thesis and some additional observations and algorithms were added. This includes some text parts that were written by David Feess. David Feess and I contributed equally to this paper. David's focus
was the state of the art, the sensor selection with the “performance” ranking, and
writing most parts of the paper. The main contribution of Dr. Jan Hendrik Metzen
was the probability interpretation of the results (not reported in this section) and the
evaluations with the two SSNR approaches. There were several discussions between the authors about the paper and the evaluation. My main contribution was the implementation and the design of the other ranking algorithms, like the ranking with spatial filters or SVMs.
In this section, we will highlight a more complex application/evaluation, which
touches all aspects of this thesis. The analysis will be applied to data from a passive BCI application (P300 data, see Section 0.4).
A major barrier for a broad applicability of BCIs based on EEG is the large number of EEG sensors (electrodes) typically used (up to more than 100).17 The necessity
17. The cap has to be placed on the user's scalp and for each electrode a conductive gel has to be applied.
for this results from the fact that the relevant information for the BCI is often spread
over the scalp in complex patterns that differ depending on subjects and application
scenarios. Since passive BCIs aim at minimizing the nuisance to their users, it is important to look at sensor selection algorithms in this context. The fewer sensors need to be applied, the less preparation time is required and the more mobile the system might become. The users will thus probably be less aware of the fact that their EEG is recorded.
Recently, a number of methods have been proposed to determine an individual
optimal sensor selection. In [Feess et al., 2013] a selection of approaches has been
compared against each other and most importantly against several baselines (for the
first time). The following baselines were analyzed:
• Use the complete set of sensors.
• Use two electrode constellations corresponding to commercialized EEG systems:
one 32 electrode 10–10 layout as used in the actiCAP EEG system (Figure C.6)
and the original 10–20 layout with 19 sensors.
• Use random selections of sensors (100 repetitions).
• Use the normal evaluation scheme on the data and recursively eliminate the
sensor which is least decreasing the performance.18
Note that the latter might be computationally expensive, but given an evaluation scheme it is the most direct and intuitive way, because when reducing sensors the goal is always not to lose performance, or even to increase it due to reduced noise from irrelevant sensors.
For a realistic estimation of the reduced system’s performance sensor constellations found on one experimental session were transferred to a different session for
evaluation. Notable (and unanticipated) differences among the methods were identified and could demonstrate that the best method in this setup is able to reduce the
required number of sensors considerably. Even though the final best approach was
tailored to the given type of data, the presented algorithms and evaluation schemes
can be transferred to any binary classification task on sensor arrays. The results will
be also reported in this section.
Even though the analysis is performed on EEG data, sensor selection algorithms are also relevant in other applications. In robotics, for example, reducing the number of relevant sensors for a certain classification task can help to save resources (material, time, money, electricity), and it can improve the understanding, because with fewer sensors the interpretation (e.g., with the backtransformation from Chapter 2)
18. The BA was used as performance metric, as discussed in Section 3.3.
becomes easier. In contrast to the EEG application, there are two minor differences.
Some sensors are normally divided into sub-sensors, but for really removing the sensor, all sub-sensors need to be removed. For example, an inertial measurement unit
(IMU) provides “sub-sensors” for movement in x, y, and z direction or a camera could
be divided into its pixel components as sub-sensors. For EEG data, sensors could also be grouped, but this is not so relevant. The second difference is in the evaluation. In
EEG data processing, the setting of the electrodes (sensors) between different recording sessions is never the same, because electrode conductivity, electrode positions,
and the head (e.g., hair length) are always slightly different. For robots, this should
normally not be such an important issue, even though the robotic system itself might
be subject to wear.19
For a detailed description of the state of the art and methodology we refer to
[Feess et al., 2013]. In this section, we will focus on some aspects in the context of this thesis.
Sensor Ranking for Recursive Backwards Elimination
This section describes the used methods. Motivated by the processing chain used in the final evaluation, different ranking algorithms are suggested. The ranking of
the sensors is then used to recursively eliminate one sensor after the other. After
each removal, a new ranking is determined and the sensor with the lowest rank is
removed.
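The elimination loop just described can be written generically (a sketch; rank_fn stands for any of the ranking methods below and is not a pySPACE function):

```python
def recursive_backward_elimination(sensors, rank_fn, n_keep=1):
    """Generic recursive backward elimination: rank_fn returns one score
    per remaining sensor; the lowest-ranked sensor is removed and the
    ranking is recomputed on the reduced set."""
    sensors = list(sensors)
    eliminated = []  # elimination order, least important first
    while len(sensors) > n_keep:
        ranks = rank_fn(sensors)  # re-rank after every removal
        worst = min(range(len(sensors)), key=lambda j: ranks[j])
        eliminated.append(sensors.pop(worst))
    return sensors, eliminated
```

Any of the rankings below (spatial filter weights, SSNR, SVM weights) can be plugged in as rank_fn.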
The standard processing chain in this experimental paradigm is given in Figure 3.4, with the only exception being the feature generation, to be consistent with
[Feess et al., 2013]. Features are extracted from the filtered signal by fitting straight
lines to short segments of each channel’s data that are cut out every 120 ms and
have a duration of 400 ms. The slopes of the fitted lines are then used as features
[Straube and Feess, 2013].
Similarly, when the goal is to decode the decision process, a ranking of sensors can
be based on the different stages of a processing chain for the related decision process
as shown in the following. The respective algorithm short names for the evaluation
are denoted in brackets in the title.
Spatial Filter Ranking (xDAWN, CSP, PCA) When a spatial filter has been
trained, its filter weights can be used for a ranking, by for example adding up the
absolute coefficients of the first four spatial filters (xDAWN, CSP, PCA, see also Section 2.2.1.4):

W_h = Σ_{j=1}^{4} |f_{hj}|    (3.2)

19. One approach to handle these changes is online learning (see Section 1.4).
Here, W_h provides the weight associated with the h-th real sensor. The sensor with the lowest value W_h is iteratively removed. (The filter is then trained on the reduced set of sensors.)
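Equation (3.2) translates directly into code; a small sketch in which the layout of the filter matrix is our own assumption:

```python
import numpy as np

def spatial_filter_ranking(F, n_filters=4):
    """Sensor weights from a trained spatial filter (eq. 3.2).
    F: (n_sensors, n_components) array, column j holding spatial filter f_j
    (layout assumed for illustration). Returns W with
    W[h] = sum of |f_hj| over the first n_filters components."""
    return np.abs(F[:, :n_filters]).sum(axis=1)

# toy filter matrix with 2 sensors and 5 components; the fifth is ignored
F = np.array([[1.0, -2.0, 0.0, 0.0, 5.0],
              [0.0,  1.0, 1.0, 1.0, 0.0]])
W = spatial_filter_ranking(F)
```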
Signal to Signal-Plus-Noise Ratio (SSNRAS, SSNRVS)
There are two additional methods connected to the xDAWN [Rivet et al., 2012], which was specifically designed for P300 data. The first calculates the signal to signal-plus-noise ratio in the actual sensor space (SSNRAS). The second calculates the same ratio in the virtual space after the application of the xDAWN (SSNRVS). The value is calculated with every sensor removed, and the sensor with the lowest increase (or highest decrease) of the ratio is selected for recursive removal.
Support Vector Machine-Recursive Feature Elimination (1SVM, 2SVM,
1SVMO, 2SVMO) If no spatial filter is applied in the processing chain, the weights
in the classifier still have a one-to-one correspondence to the original sensors. This
can be used for a ranking. Given a classification vector w with components w_{ih}, where the index h is related to the h-th sensor, we can again define the ranking
W_h = Σ_i |w_{ih}|    (3.3)
Again, the sensor h with the lowest W_h is recursively removed, as in recursive feature elimination [Lal et al., 2004]. To simulate the hard margin case, the regularization
parameter C was fixed to 100.20 Using real hard margin separation would not be
feasible, because with small electrode numbers the two classes become inseparable.
Additionally to the ranking with the C-SVM (2SVM), the variant with 1–norm regularization (1SVM) was used due to its property to induce sparsity in the feature
space.
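For illustration, the ranking of equation (3.3) can be computed with scikit-learn's LinearSVC standing in for the pySPACE C-SVM (a sketch; the data layout and the function name are our own):

```python
import numpy as np
from sklearn.svm import LinearSVC

def svm_sensor_ranking(X, y, C=100.0):
    """Sensor ranking from the weights of a linear classifier (eq. 3.3).
    X: (n_samples, n_values_per_sensor, n_sensors), y: binary labels.
    Returns W with W[h] = sum_i |w_ih|; the sensor with the lowest
    value would be removed in the recursive elimination step."""
    K, I, H = X.shape
    # flatten so that feature index i*H + h matches the (i, h) layout
    clf = LinearSVC(C=C, dual=False).fit(X.reshape(K, -1), y)
    w = clf.coef_.reshape(I, H)
    return np.abs(w).sum(axis=0)
```

With C = 100 this mimics the near hard margin setting described above.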
This view on the classifier was the original motivation to look at the sparsity
properties presented in Section 1.3.3.4 and the backtransformation (Chapter 2) and
to extend the original analysis from [Feess et al., 2013].
Most importantly, this ranking turned out to be a very good example of the necessity to optimize the hyperparameter C at least roughly. We will show that C should be optimized and not chosen very high. Therefore, we additionally performed a grid search (C ∈ {10^{−2}, 10^{−1.5}, . . . , 10^{2}}) with 5-fold cross validation optimizing the BA.20

20. This was not reported in [Feess et al., 2013] but could be reproduced with the configuration file.
This resulted in additional rankings, denoted with 1SVMO and 2SVMO respectively.
A variant would be to use a sum of squares, to be closer to the 2–norm regularization of the C-SVM [Tam et al., 2011].
Instead of reducing the processing chain such that it is possible to gain sensor
weights from the linear classifier, it would also be possible to use the affine backtransformation (see Chapter 2) after the first preprocessing, right before the application of
the xDAWN filter:
W_h = Σ_i |w^{(1)}_{ih}|    (3.4)
Note that this way of ranking sensors could be applied to any affine processing chain.
Ranking in Regularization (SSVMO) A disadvantage of the 1–norm regularized C-SVM is that it only induces sparsity in the feature space but not directly in the number of sensors. There are several approaches to induce grouped sparsity
[Bach et al., 2012]. An intuitive approach would be to use
‖w‖_{1,∞} = Σ_h max_i |w_{ih}|    (3.5)
where the second index h again corresponds to the sensor. The advantage of this regularization is that the resulting classifier can still be defined as a linear optimization problem, and it might be possible to derive a proof of sparsity similar to Theorem 13. Unfortunately, this way of regularization solely focuses on sparsity and not on generalization. In fact, a short analysis showed that often equally high weights referring to one sensor are assigned. To compensate for this, we chose a mixed regularization:
Method 20 (Sensor-Selecting Support Vector Machine (SSVM)).

    min_{w,b,t}   Σ_{i,h} |w_{ih}| + C_s Σ_h max_i |w_{ih}| + C Σ_k t_k
    s.t.   y_k ( Σ_{i,h} w_{ih} x_{kih} + b ) ≥ 1 − t_k   ∀k,    (3.6)
           t_k ≥ 0   ∀k .
Here, the additional regularization constant Cs weights between sparsity in sensor space and the original 1–norm regularization. The final classifier weights can
again be used for ranking. We used the same approach for optimizing C as for the
1SVMO, but C_s had to be fixed to 1000 because an optimization was computationally
too expensive.
If the first index is related to time, sparsity in time can be induced accordingly. If a smaller time interval is needed, the final decision can be accelerated.
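Method 20 is a linear optimization problem and can, for small instances, be solved directly with an off-the-shelf LP solver. The following sketch uses scipy.optimize.linprog with the usual splitting w = u − v to linearize the absolute values and auxiliary variables m_h for the per-sensor maxima (all names are our own; this is not the implementation used in the evaluation):

```python
import numpy as np
from scipy.optimize import linprog

def ssvm_fit(X, y, C=1.0, Cs=1.0):
    """Sketch of Method 20 (SSVM) as a linear program.

    X: (K, I, H) array -- K samples, I values per sensor, H sensors.
    y: (K,) labels in {-1, +1}.
    Variable layout: [u, v, b_pos, b_neg, t, m] with w = u - v and
    b = b_pos - b_neg; all variables are nonnegative and m_h bounds
    max_i |w_ih| from above.
    """
    K, I, H = X.shape
    n = I * H
    Xf = X.reshape(K, n)                      # feature index = i * H + h
    nv = 2 * n + 2 + K + H
    c = np.concatenate([np.ones(2 * n),       # sum_{i,h} |w_ih| via u + v
                        np.zeros(2),          # bias carries no cost
                        C * np.ones(K),       # C * sum_k t_k
                        Cs * np.ones(H)])     # Cs * sum_h max_i |w_ih|
    # margin constraints: -y_k((u - v).x_k + b) - t_k <= -1
    A1 = np.zeros((K, nv))
    A1[:, :n] = -y[:, None] * Xf
    A1[:, n:2 * n] = y[:, None] * Xf
    A1[:, 2 * n] = -y
    A1[:, 2 * n + 1] = y
    A1[np.arange(K), 2 * n + 2 + np.arange(K)] = -1.0
    # max constraints: u_ih + v_ih - m_h <= 0
    A2 = np.zeros((n, nv))
    A2[np.arange(n), np.arange(n)] = 1.0
    A2[np.arange(n), n + np.arange(n)] = 1.0
    A2[np.arange(n), 2 * n + 2 + K + (np.arange(n) % H)] = -1.0
    res = linprog(c, A_ub=np.vstack([A1, A2]),
                  b_ub=np.concatenate([-np.ones(K), np.zeros(n)]),
                  bounds=(0, None))
    z = res.x
    return (z[:n] - z[n:2 * n]).reshape(I, H), z[2 * n] - z[2 * n + 1]
```

At the optimum, u + v recovers |w| componentwise and m_h equals max_i |w_ih|, so the objective matches (3.6).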
Evaluation Schemes
For evaluating the performance of a sensor selection method, three datasets are required: one on which the actual sensor selection is performed, one where the system
(spatial filter, classifier, etc.) is trained based on the selected sensor constellation, and
one where the system’s performance is evaluated. From an EEG-application point of
view, sensor selection should be performed on data from a prior usage session of the
subject and not on data from the current one, on which the system is trained and evaluated (one would not remove sensors that are already in position after a training
run). Since the selected sensor constellations are transferred from one usage session
to another, this evaluation scheme is denoted as inter-session (see also Figure 3.10).
The sensor constellations are thus evaluated on data from a different usage session
with potentially different positioning of EEG sensors, different electrode impedances,
etc. For the selected sensor constellation, the system is trained on data from one run
of the session and evaluated on the remaining 4 runs. Thus, the inter-session scheme
does not imply that classifiers are transferred between sessions but only that sensor
constellations are transferred. If the sensor properties (e.g., impedance, position) between different recordings are not expected to change, this evaluation part should be
omitted.
An alternative evaluation scheme, which is used frequently in related work, is
the intra-session scheme (as depicted in Figure 3.10): in this scheme, the sensor
selection is performed on data from the usage session itself; namely on the same
run’s data on which the system is trained later on. Thus, sensor constellations are
not transferred to a different session and the influence of changes in EEG sensor
positions and impedances is not captured. While this scheme is not sensible in the
context of an actual application, it is nevertheless used often for evaluation of sensor
selection methods because data of multiple usage sessions from the same subject
may not be available. We perform the intra-session evaluation mainly to investigate
to which extent its results generalize to the inter-session evaluation scheme.
To mimic an application case with a training period prior to an actual operation
period, the evaluation is performed by applying an “inverse cross-validation”-like scheme on the basis of the runs from one session. In the intra-session scheme, one run
is used for sensor selection and training of the classification flow, and the remaining
four runs from that session are used as test cases. This is repeated so that each of the 5 runs is used for sensor selection/training once. For our dataset (consisting of 5 subjects with 2 sessions each), this results in a total of 5 · 2 · 5 · 4 = 200 performance scores (one per test run) per
selection method and sensor set size. In the inter-session scheme we can perform the
sensor selection on each of the five runs of the other session of the subject, and thus
we obtain 5 · 200 = 1000 performance scores.
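The split bookkeeping behind these counts can be sketched as follows (a pure illustration; the subject/session/run indices are placeholders, and the factor of 4 assumes that each test run of a split yields one performance score):

```python
from itertools import product

subjects = range(5)   # 5 subjects
sessions = range(2)   # 2 sessions per subject
runs = range(5)       # 5 runs per session

# Intra-session: one run for sensor selection + training, each of the
# remaining 4 runs of the same session serves as a test case.
intra = [(subj, sess, train, test)
         for subj, sess, train, test in product(subjects, sessions, runs, runs)
         if test != train]
assert len(intra) == 5 * 2 * 5 * 4 == 200

# Inter-session: the sensor selection may additionally use any of the
# 5 runs of the subject's *other* session, multiplying the count by 5.
inter = [(sel_run,) + split for sel_run in runs for split in intra]
assert len(inter) == 5 * 200 == 1000
```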
155
3.4. pySPACE Usage Examples
Figure 3.10: Intra-session and inter-session scheme. R1–R5 denote the runs
from each experimental session. In the intra-session scheme (left), the sensor selection (blue) is performed in the same run in which the system is trained (green), and
the evaluation (red) is performed on the remaining runs from that session. In the
inter-session scheme (right), the sensor constellations are transferred to a different
session of the same subject. Note that run and session numbering were permuted
during the experiment so that in each condition, each run was used for sensor selection and training. Visualization taken from [Feess et al., 2013].
Another possible evaluation scheme, which we will not follow in this thesis, is to look at the transfer between subjects or, more generally, the transfer between different systems to which the sensors are attached.
Standard Signal Processing and Classification
During the training phase, the regularization parameter C of the C-SVM is optimized
using a grid search over C ∈ {10^0, 10^-1, . . . , 10^-6} with a 5-fold cross-validation.
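As an illustration of such a grid search, the following sketch runs a 5-fold cross-validation per candidate C on synthetic data; the ridge-regularized least-squares classifier is only a stand-in for the actual C-SVM, and all data and names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                       # synthetic features
y = (X[:, 0] + 0.3 * rng.normal(size=100) > 0).astype(int)

def fit_predict(C, Xtr, ytr, Xte):
    # Ridge-regularized least squares as a stand-in for the C-SVM:
    # a larger C corresponds to weaker regularization (lam = 1 / C).
    lam = 1.0 / C
    w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(Xtr.shape[1]),
                        Xtr.T @ (2 * ytr - 1))
    return (Xte @ w > 0).astype(int)

def cv_accuracy(C, folds=5):
    """5-fold cross-validation accuracy for one candidate C."""
    idx = np.arange(len(y))
    scores = []
    for k in range(folds):
        test = idx[k::folds]
        train = np.setdiff1d(idx, test)
        pred = fit_predict(C, X[train], y[train], X[test])
        scores.append(np.mean(pred == y[test]))
    return float(np.mean(scores))

grid = [10.0 ** (-k) for k in range(7)]   # C in {10^0, 10^-1, ..., 10^-6}
best_C = max(grid, key=cv_accuracy)
```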
The individual sensor selection methods require the training and evaluation of
different parts of the signal processing chain: the SSNR and Spatial Filter sensor selection algorithms can be applied for each run based on the signals after the low-pass
filter and require no separate evaluation based on validation data. For the SVM-based methods, the entire signal processing chain has to be trained. In this case, the xDAWN filter is not used during the sensor selection in order to retain a straightforward mapping from SVM weights to sensor space. Again, no evaluation on validation data is required. The Performance method also requires training the entire signal processing chain; additionally, however, a validation of the trained system’s performance is required. For this, the data from a run is split using an internal 5-fold
cross-validation. Each of the methods yields one sensor constellation per run for each
session of a subject.
Processing with pySPACE
For this evaluation, pySPACE was very helpful. Even though the evaluation was split into several parts, the standardized configuration files ensured consistency between the experiments. Without the parallelization capabilities, the evaluation would probably have taken too long. Adding ranking capabilities to spatial filters and linear classifiers
(even combined with hyperparameter optimization) was straightforward due to the
software structure and only the SSNR and Performance methods needed some extra
implementation. This was combined into one electrode selection algorithm, which
was able to interface the different methods and to store and load electrode ranking results. For the evaluation, the resulting rankings (coming from the recursive backward elimination) only had to be loaded and evaluated depending on the chosen number of electrodes and the chosen evaluation scheme.
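The recursive backward elimination underlying these rankings can be sketched generically; the ranking function and the fixed per-electrode scores below are hypothetical stand-ins for the SSNR-, SVM-weight-, or performance-based scores, not pySPACE's actual implementation:

```python
def backward_elimination(sensors, rank):
    """Recursive backward elimination: repeatedly drop the sensor that the
    ranking function scores lowest. The removal order yields a complete
    ranking, from which a constellation of any size can be read off."""
    active = list(sensors)
    removal_order = []
    while len(active) > 1:
        scores = rank(active)                       # {sensor: score}
        worst = min(active, key=scores.__getitem__)
        active.remove(worst)
        removal_order.append(worst)
    removal_order.append(active[0])
    # Last removed = most important, so reversing gives a best-first ranking.
    return removal_order[::-1]

# Toy stand-in for a ranking score: a fixed number per sensor.
importance = {"Cz": 5, "Pz": 4, "Fz": 1, "Oz": 2, "C3": 3}
ranking = backward_elimination(importance,
                               lambda act: {s: importance[s] for s in act})
best_three = ranking[:3]   # e.g., the constellation of size 3
```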
Results
Figure 3.11 shows the results for the intra-session scheme. First, it can be noticed that all standard caps perform essentially at chance level. The same is true for the SSNRas and 2SVM selection heuristics: for more than 5 sensors, both curves lie close to the center of the random selection patches. The PCA filter method performs even worse than random for a large range of constellation sizes. The SSNRvs method, the xDAWN filter, the Performance ranking, and the 1SVM ranking deliver a performance considerably better than chance level for 30 or fewer sensors. The latter three perform nearly identically over the whole range and better than chance level. The CSP method performs slightly worse than these methods for fewer than 20 sensors. The Performance ranking performs slightly worse than these methods in the
range between 30 and 40 sensors. For SSNRV S , the mean performance remains on
the baseline level of using all sensors down to around 18 sensors and is remarkably
better than any of the other heuristics.
It can be clearly seen that rankings using the classifiers with optimized complexity (1SVMO, 2SVMO, SSVMO) perform comparably to or even slightly better than the non-optimized 1SVM ranking. In particular, the 2SVMO ranking shows a large improvement in comparison to the ranking without optimization (2SVM).
In the inter-session results shown in Figure 3.12, all sensor selection methods drop in absolute performance compared to the intra-session scheme. Random constellations and standard caps are not affected by the type of transfer since they are not adapted to a specific session anyway. The relative order of the curves remains identical to the intra-session results. The performance of the best methods is still above or in the upper range of the random constellations, and SSNRvs still outperforms all random constellations in the relevant range.
Discussion
As the sensor selection of the SSNRas and 2SVM methods performs essentially equivalently to random selection, these methods are apparently not able to extract any useful information from the data. It can be clearly seen that the 2SVM requires
Figure 3.11: Intra-session evaluation of the classification performance versus the
number of EEG electrodes for different sensor selection approaches. The horizontal
line All is a reference showing the performance using all available 62 electrodes. The
grey patches correspond to histograms of performances of 100 randomly sampled electrode constellations. The elongation in y-direction spans the range of the occurring
performances and the width of the patches in x-direction corresponds to the quantity
of results in that particular range. The three black stars represent widely accepted
sensor placements for 19, 32, and 62 EEG electrodes. All other curves depict the mean
classification performance over all subjects and cross-validation splits. The results
for 4–10 sensors are shown separately in the inset. By using an inset the curves in
the main graphic appear less compressed. Description taken from [Feess et al., 2013].
a hyperparameter optimization (2SVMO) to be able to generalize well, which also holds to a lesser extent for the 1SVM. A potential reason for the failure of the PCA could be that
the sources with highest variance, which are preferred by PCA, might be dominated
by EEG artifacts rather than task-related activities.
In accordance with the results of [Rivet et al., 2012], SSNRvs performs considerably better than the relatively similar SSNRas ranker. This is most likely due to the fact that SSNRas cannot take redundancy between channels into account. SSNRvs accomplishes this by aggregating redundant information from different channels into a single surrogate channel via spatial filtering.
It is perhaps surprising that Performance is not the best ranking and that SSNRvs performs much better; we suspect that this might be caused by an overfitting of the
Number of EEG Electrodes
Figure 3.12: Inter-session evaluation of the classification performance versus the
number of EEG electrodes for different sensor selection approaches. For more details, please see Figure 3.11.
sensor selection by Performance to the selection session. This effect might be reduced by using a performance estimate which is more robust than the mean, such as
the median or the mean minus one standard deviation (to favor constellations with smaller variances in performance and fewer outliers). However, this issue requires
further investigation.
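The effect of such robust estimates can be illustrated on two hypothetical score lists with identical mean but different stability (the numbers are invented for illustration):

```python
import statistics

# Validation scores of two hypothetical constellations: identical mean
# performance, but the second one is far less stable across folds.
scores_a = [0.80, 0.81, 0.79, 0.80, 0.80]
scores_b = [0.95, 0.66, 0.92, 0.70, 0.77]

def mean_minus_std(scores):
    return statistics.mean(scores) - statistics.stdev(scores)

# The plain mean cannot distinguish the two constellations ...
assert abs(statistics.mean(scores_a) - statistics.mean(scores_b)) < 1e-9
# ... while the robust estimates prefer the stable one.
assert mean_minus_std(scores_a) > mean_minus_std(scores_b)
assert statistics.median(scores_a) > statistics.median(scores_b)
```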
The sensor selection capabilities of the SSVMO ranking are reasonable, since the
performance is comparable to the other good rankings (xDAWN, 1SVMO, 2SVMO).
With improved hyperparameter tuning, this algorithm might even be able to outperform these algorithms; for this, however, a more efficient implementation would be required. Furthermore, the integration of warm starts for speeding up the hyperparameter optimization might be helpful for future investigations.
For the inter-session scheme, the loss in performance of all methods in comparison to the intra-session scheme is expected. It results from the fact that, due to day-to-day changes in brain patterns and differences in the exact sensor placement, different constellations may be optimal on different days, even for the same subject. The fact that the relative order of the results remains unchanged, however, indicates that a comparison of electrode selection approaches can in principle be performed without the effort of acquiring a second set of data for each subject. This
facilitates the process of deciding for a particular sensor selection approach substantially. For obtaining a realistic estimate of the classification performance in future recordings with fewer sensors, one needs a second, independent data recording session for each subject, however.
All in all, we showed that a reduction of the number of sensors is possible from
62 to at least 40 sensors and that sensor selection approaches using the backtransformation concept show a good performance (SSVMO, 1SVMO, 2SVMO), even though
there is still room for improvement. Furthermore, we demonstrated a more complex use case of pySPACE and the necessity to compare algorithms and to optimize
hyperparameters.
3.5
Discussion
3.5.1 Related Work
Based on the existing commercial software for processing data from magnetoencephalography (MEG) and EEG, there are open source toolboxes for neuroscience, like the package EEGLAB [Delorme and Makeig, 2004] for Matlab, FieldTrip [Oostenveld et al., 2011], and SPM (http://www.fil.ion.ucl.ac.uk/spm/) especially for fMRI data. Respective Python libraries and software projects are for example PyMVPA [Hanke et al., 2009], OpenElectrophy [Garcia and Fourcaud-Trocmé, 2009], and the NIPY software projects (http://nipy.org/). These tools are very much tailored to their special type of data and application and are not appropriate for more general signal processing and classification. In scientific computing in general, Python is probably the most widely used programming language, because it is easy to learn, use, and read, because it
can be made efficient by using C/C++ interfaces, and because it provides high quality
libraries which already define a lot of required functionality.
The Python machine learning stack is roughly organized as follows: starting from core libraries for numerical and scientific computation such as NumPy [Dubois, 1999] and SciPy [Jones et al., 2001], over libraries containing implementations of core machine learning algorithms such as scikit-learn [Pedregosa et al., 2011], to higher-level frameworks such as MDP, which allow combining several methods and evaluating their performance empirically. Besides that, there are non-standardized ways of interfacing with machine learning tools that are not implemented in Python, such as
LibSVM [Chang and Lin, 2011] and WEKA [Hall et al., 2009].
The distinction between libraries and frameworks is typically not strict; frameworks often contain some implementations of basic processing algorithms as libraries
do and libraries typically include some basic framework-like tools for configuration
and evaluation. pySPACE can be considered as a high-level framework which contains a large set of built-in machine learning algorithms as well as wrappers for
external software such as scikit-learn, MDP, WEKA, and LibSVM.
In contrast to libraries like scikit-learn, the focus of pySPACE is much more on
configuration, automation, and evaluation of large-scale empirical evaluations of signal processing and machine learning algorithms. Thus, we do not see pySPACE as an
alternative to libraries but rather as a high-level framework, which can easily wrap
libraries (and does so already for several ones), and which makes it easier to use and
compare the algorithms contained in these libraries.
In contrast to frameworks like MDP, pySPACE requires less programming skills
since a multitude of different data processing and evaluation procedures can be completely specified using configuration files in YAML-syntax without requiring the user
to write scripts, which would be a “show-stopper” for users without programming
experience. Similarly, frameworks based on GUIs are not easily used in distributed
computing contexts on remote machines without graphical interface. Thus, we consider pySPACE’s YAML-based configuration files a good compromise between simplicity and flexibility.
Additionally, pySPACE allows executing the specified experiments on different computational modalities in a fully automated manner using different back-ends: starting from a serial computation on a single machine, over symmetric multiprocessing on shared-memory multi-core machines, to distributed execution on high-performance clusters based on MPI or IBM’s job scheduler LoadLeveler. Further back-ends like one integrating IPython parallel [Pérez and Granger, 2007] could easily be integrated in the future. Other tools for parallel execution are either restricted to the symmetric multiprocessing scenario, like joblib [Varoquaux, 2013], or by themselves not directly usable in machine learning without some “glue” scripts, such as IPython parallel. Recently, the framework SciKit-Learn Laboratory (skll, https://skll.readthedocs.org/) became open source. This framework also uses the command line for distributing different data processing operations on a collection of datasets, but in contrast to pySPACE it only interfaces scikit-learn, which largely limits its capabilities.
A further advantage of pySPACE is that it allows easily transferring methods from the offline benchmarking mode to the processing in real application scenarios. The
user can use the same YAML-based data processing specifications in both modes.
For loading EEG and related data, pySPACE is already quite powerful. But for the handling of more arbitrary data, it would greatly benefit from interfacing with the Python library pandas [McKinney, 2010]. This library provides a large range of
efficient, large scale data handling methods, which could increase the performance
of pySPACE and enlarge the number of available formats, data handling algorithms,
and data cleaning methods.
There are several further open source signal processing toolboxes which could be interesting to interface with pySPACE, like OpenVibe [Renard et al., 2010], BCI2000 [Schalk et al., 2004], EEGLAB [Delorme and Makeig, 2004], Oger [Verstraeten et al., 2012], pyMVPA [Hanke et al., 2009], Shogun [Sonnenburg et al., 2010], and many more, including frameworks which would only use the automatic processing and parallelization capabilities of pySPACE. These interfaces might help to overcome some limitations of the software, like the focus on feature vector and segmented time series data or the missing interactive data visualization.
3.5.2 My Contribution to pySPACE for this Thesis
pySPACE was not exclusively my own work.21 Good software development always requires a team, and especially major changes to an existing software require a discussion between developers and users. The original benchmarking framework was
written by Dr. Jan Hendrik Metzen and Timo Duchrow and the code for the signal
processing chains was adapted from MDP.
Nevertheless, to implement a framework for better automating the process of optimizing the construction of an appropriate signal processing chain including a classifier, and to make this framework open source, large changes had to be made. This also includes usability and documentation issues. In the context of these changes, I see my major contribution to the framework and to my thesis.
For comparing classifiers, originally the WEKA framework was used. To also
evaluate classifiers in signal processing chains (node chains), which was required for
the application and the work on Chapter 1, I implemented the concept of classifiers
and their evaluation including numerous different performance measures. This work
was followed by implementing algorithms for the hyperparameter optimization. For
increasing the usability, I suggested, discussed, and implemented the major restructuring of the software, eased its setup, and largely improved the documentation and the testing suite. The basic concepts introduced in this chapter
already existed right from the beginning of the software in 2008 without my contribution because they are required by the problem of tuning signal processing chains
itself. My contribution is to make this structure visible in the code and in its documentation for users and developers.
Last but not least, I implemented all algorithms used for evaluations and visualizations in this thesis like the BRMM and the generic backtransformation.
21 This can be seen at http://pyspace.github.io/pyspace/history.html and http://pyspace.github.io/pyspace/credits.html.
All the authors of [Krell et al., 2013b] made important contributions to the framework and helped to make it open source. Due to my aforementioned contribution to pySPACE, I was the main author of this publication. I defined the structure and wrote most text parts of the paper, but the other authors also contributed a few parts.
For the introduction (which is also used in this chapter) especially Dr. Sirko Straube
contributed some text parts. He also mainly implemented Figure 3.1. The text
parts about online processing are mainly the work of Johannes Teiwes and Hendrik
Wöhrle. Figure 3.3 and the pySPACE logo and some more graphics in the pySPACE
documentation are joint work with Johannes Teiwes. The evaluation example (see
Section 3.4.1) was joint work with Anett Seeland. The related work (see Section 3.5.1)
was mostly written by Dr. Jan Hendrik Metzen.
3.5.3 Summary
In this chapter a general framework was presented which supports the tuning/optimization, analysis, and comparison of signal processing chains. Even though more
automation and more sophisticated algorithms for the optimization (as for example
mentioned in Section 3.3) should be integrated into pySPACE, basic concepts and
tools are already available and the software provides interfaces to integrate these
approaches. The framework supports a wide range of data formats, platforms, and
applications and provides several parallelization schemes, performance metrics, and
most importantly algorithms from diverse categories. It can be used for benchmarking as well as real online applications. Numerous results and publications would not
have been possible without this framework and the cluster which we use to parallelize our calculations.
In the process of developing methods two aspects of pySPACE were very helpful:
the reproducibility and the simplicity of the configuration. For discussing approaches and problems, the respective files were used and exchanged. Thus, new approaches could often be tested using configuration files from other scientists. This also ensured comparability and saved a lot of time. Even though the main research results of pySPACE
have been in the area of EEG data processing so far, the concepts and implementations can be transferred to other problem settings. For example, some analyses on
robotics data have been performed using sensor selection capabilities, the parallelization, or a regression algorithm, and in Chapter 1 we showed an analysis of data on a
more abstract level, unrelated to a direct application. Usually, algorithms are developed and tested using scripts and later on these might be exchanged and then need
to be adapted to other applications or evaluations. By using pySPACE as a common
ground the exchange of algorithms between team members and the transfer to other
applications or evaluations was straightforward.
The generic documentation and testing in pySPACE largely ease the maintenance. Nevertheless, working on the framework with the goal to make it usable for
everyone in every application is extremely demanding and would probably require
some full time developers. In the future, pySPACE could benefit from additional algorithms (e.g., by using an improved wrapper to Weka, or enabling evaluations of
clustering algorithms), input/storage formats (e.g., using pandas), job distribution
back-ends (e.g., database access, or the distribution concept from the SciKit-Learn
Laboratory), and use cases (e.g., soil detection by robots, support in rehabilitation,
video and picture processing). Furthermore, there are several possibilities to improve testing coverage, performance, usability, logging, and automation of the framework. Especially the latter is interesting for targeting a fully autonomous optimization of
a signal processing chain. A broad scientific user community of pySPACE would provide a basis for easy exchange and discussion of signal processing and classification
approaches, as well as an increased availability of new signal processing algorithms
from various disciplines.
Related Publications
Krell, M. M., Straube, S., Seeland, A., Wöhrle, H., Teiwes, J., Metzen, J. H.,
Kirchner, E. A., and Kirchner, F. (2013b). pySPACE a signal processing and
classification environment in Python. Frontiers in Neuroinformatics, 7(40):1–
11, doi:10.3389/fninf.2013.00040.
Krell, M. M., Straube, S., Wöhrle, H., and Kirchner, F. (2014c). Generalizing, Optimizing, and Decoding Support Vector Machine Classification. In ECML/PKDD-2014 PhD Session Proceedings, September 15-19, Nancy, France.
Straube, S. and Krell, M. M. (2014). How to evaluate an agent’s behaviour to infrequent events? – Reliable performance estimation insensitive to class distribution. Frontiers in Computational Neuroscience, 8(43):1–6, doi:10.3389/fncom.2014.00043.
Straube, S., Metzen, J. H., Seeland, A., Krell, M. M., and Kirchner, E. A. (2011).
Choosing an appropriate performance measure: Classification of EEG-data with
varying class distribution. In Proceedings of the 41st Meeting of the Society for
Neuroscience 2011, Washington DC, United States.
Feess, D., Krell, M. M., and Metzen, J. H. (2013). Comparison of Sensor Selection Mechanisms for an ERP-Based Brain-Computer Interface. PloS ONE,
8(7):e67543, doi:10.1371/journal.pone.0067543.
Wöhrle, H., Teiwes, J., Krell, M. M., Seeland, A., Kirchner, E. A., and Kirchner,
F. (2014). Reconfigurable dataflow hardware accelerators for machine learning
and robotics. In Proceedings of European Conference on Machine Learning and
Principles and Practice of Knowledge Discovery in Databases (ECML PKDD-2014), September 15-19, Nancy, France.
Kirchner, E. A., Kim, S. K., Straube, S., Seeland, A., Wöhrle, H., Krell, M. M., Tabie, M., and Fahle, M. (2013). On the applicability of brain reading for predictive human-machine interfaces in robotics. PloS ONE, 8(12):e81732, doi:10.1371/journal.pone.0081732.
Krell, M. M., Tabie, M., Wöhrle, H., and Kirchner, E. A. (2013c). Memory and Processing Efficient Formula for Moving Variance Calculation in EEG and EMG Signal Processing. In Proceedings of the International Congress on Neurotechnology, Electronics and Informatics, pages 41–45, Vilamoura, Portugal. SciTePress.
Tiedemann, T., Vögele, T., Krell, M. M., Metzen, J. H., and Kirchner, F. (2015).
Concept of a data thread based parking space occupancy prediction in a berlin
pilot region. In Papers from the 2015 AAAI Workshop. Workshop on AI for Transportation (WAIT-2015), January 25-26, Austin, USA. AAAI Press.
Presentation of the Software
Krell, M. M., Straube, S., Seeland, A., Wöhrle, H., Teiwes, J., Metzen, J. H.,
Kirchner, E. A., and Kirchner, F. (2013a). Introduction to pySPACE workflows.
peer-reviewed talk, NIPS 2013 Workshop on MLOSS: Towards Open Workflows,
Lake Tahoe, Nevada, USA.
Krell, M. M., Kirchner, E. A., and Wöhrle, H. (2014b). Our tools for large scale or
embedded processing of physiological data. Passive BCI Community Meeting,
Delmenhorst, Germany.
Krell, M. M. (2014). Introduction to the Signal Processing and Classification
Environment pySPACE. PyData Berlin.
Chapter 4
Conclusion
Optimizing the classification of complex data is a difficult task which often requires
expert knowledge. To ease the optimization process especially for non-experts, three
approaches are introduced in this thesis to improve the design and understanding of
signal processing and classification algorithms and their combination.
Classifier Connections
Several connections between existing SVM variants have been shown and resulted
in additional new SVM variants including unary classifiers and online learning algorithms, which were shown to be relevant for certain applications. These connections
replace the loose net of SVM variants by a strongly connected one, which can be
regarded as a more general overall model. Knowing the connections, it is easier to understand differences and similarities between the classifiers and to save time when teaching, implementing, optimizing, or just applying the classifiers. Furthermore,
different concepts can be transferred and existing proofs of properties can be generalized to the connected models.
Backtransformation
To interpret and decode the complete signal processing chain which ends with a classifier, the backtransformation approach was presented. Whenever the processing
consists of affine transformations, it results in a representation of the processing
chain, giving weights for each component in the input domain, which can be directly
visualized. It replaces the handcrafted and cumbersome visualization and interpretation of single algorithms with a joint view on the complete processing which is very
easy to obtain due to a generic implementation.
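The core idea can be sketched for a toy chain of two affine steps followed by a linear decision function: the whole chain collapses to a single weight vector in the input domain. The matrices below are random placeholders, not an actual spatial filter or feature extractor:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy chain of two affine steps followed by a linear decision function:
#   f(x) = w . (A2 @ (A1 @ x + b1) + b2) + b
A1, b1 = rng.normal(size=(4, 6)), rng.normal(size=4)   # e.g., spatial filter
A2, b2 = rng.normal(size=(3, 4)), rng.normal(size=3)   # e.g., feature step
w, b = rng.normal(size=3), rng.normal()                # linear classifier

def chain(x):
    return w @ (A2 @ (A1 @ x + b1) + b2) + b

# Backtransformation: collapse the chain into one weight per input
# component (plus a constant offset), directly visualizable in input space.
w_back = A1.T @ (A2.T @ w)
offset = w @ (A2 @ b1 + b2) + b

x = rng.normal(size=6)
assert np.isclose(chain(x), w_back @ x + offset)
```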
pySPACE
The pySPACE framework was presented as a tool to process data, tune algorithms
and their hyperparameters, and to enable the communication between scientists.
This largely supports optimization of the signal processing chain. Especially for the
classifiers handled in this thesis, the hyperparameter optimization and the choice
of the preprocessing is important. Furthermore, the framework was required for
comparing the classifiers and analyzing them, as well as for implementing the backtransformation in a generic way. pySPACE was used for all evaluations in this thesis
and even in numerous other cases and so proved its usability as a tool for scientific
research (see Section 3.4.2).
Implications for the Practitioner
All three approaches are not to be taken separately1 but jointly to tackle the question
of
“How shall I use which classifier and
what features of my data does it rely on?”
So given a new problem, how can the insights and tools provided in this thesis
help?
The first step is that the respective data has to be prepared such that it can be
loaded into pySPACE. Usually, this step is quite simple due to the available loading
routines, examples, and documentation. Now it is possible to explore several processing chains with pySPACE using different visualization, evaluation, and optimization
techniques (“How”, see Chapter 3). For a more systematic approach when choosing
the classifier, the “general” view/model on SVM classifier variants in Chapter 1 can
be used (“which”). As outlined in Section 1.5, the model can help in different ways:
• It can be used to understand/teach the models and their connections.
• It provides a rough guideline from the application point of view.
• It can partially be used to optimize the choice of classifier with an optimization
algorithm, e.g., as provided by pySPACE. By parameterizing the number of iterations and possibly integrating it into the performance metric, the optimization
could help to decide between batch and online learning using the single iteration
approach (Section 1.2). Furthermore, the choice between C-SVM and RFDA is
parameterized with the relative margin concept from the BRMM (Section 1.3).
This provides a smooth transition, which can be used for optimization.
The resulting different processing chains can be compared and analyzed by decoding
them with the backtransformation (“what”, see Chapter 2). For this, the pySPACE framework can be used with its generic implementation of the backtransformation.
1 For a separate discussion of the approaches refer to Sections 1.5, 2.5, and 3.5, respectively.
Irrelevant data can be detected and new insights about the data or the processing can
be gained. This can be used to derive new algorithms and/or improve the processing
chain.
Last but not least, the backtransformation can be used to support application-driven dimensionality reduction, e.g., by extending the classifier model with a respective “sparsification” or by providing a ranking of input components in the source
domain instead of the feature domain.
Outlook
The aforementioned contributions can only be seen as the beginning and a lot of
research has to follow.
For the generalizing part (Chapter 1), connections between further classifiers (not
limited to SVM variants) should be derived and knowledge and particularities concerning one algorithm should be transferred to the connected ones, if possible.
For the decoding part (Chapter 2), different visualization techniques should be
developed for different types of data, and especially new tools should be developed to
ease the interpretability of the affine as well as the general backtransformation.
For the framework and optimization part (Chapter 3), the number of algorithms,2
supported data types,3 and optimization algorithms4 should be increased especially
with the goal of making pySPACE useful for more applications and providing more
functionality, automation, and efficiency of the optimizing approaches.
Overall, the three introduced concepts should be analyzed in further applications
to prove their usefulness.
By pushing all aspects further and integrating the results in pySPACE, it might be possible to achieve the long-term goal of creating a nearly fully automated algorithm for autonomous long-term learning and efficient optimization of signal processing chains (autoSPACE).
2 e.g., wrappers, clustering algorithms, and preprocessing which is tailored to not yet supported types of data
3 e.g., text or music data
4 e.g., joint optimization of classifier and hyperparameters or joint optimization of filtering and classification
Appendix A
Publications
in the Context of this Thesis
This chapter lists all my 18 publications and the chapter number (No.) they relate
to. They are sorted by personal relevance for this thesis. If they are a major part of a
chapter, the chapter number is highlighted.
No. Publication details
9 journal publications (1 submitted)
1,2,3 Krell, M. M., Feess, D., and Straube, S. (2014a). Balanced Relative Margin
Machine – The missing piece between FDA and SVM classification. Pattern
Recognition Letters, 41:43–52, doi:10.1016/j.patrec.2013.09.018.
1,3 Krell, M. M. and Wöhrle, H. (2014). New one-class classifiers based on the origin separation approach. Pattern Recognition Letters, 53:93–99, doi:10.1016/j.patrec.2014.11.008.
2,3 submitted: Krell, M. M. and Straube, S. (2015). Backtransformation: A new
representation of data processing chains with a scalar decision function. Advances in Data Analysis and Classification. submitted.
3 Krell, M. M., Straube, S., Seeland, A., Wöhrle, H., Teiwes, J., Metzen, J. H.,
Kirchner, E. A., and Kirchner, F. (2013b). pySPACE a signal processing and
classification environment in Python. Frontiers in Neuroinformatics, 7(40):1–
11, doi:10.3389/fninf.2013.00040.
1,3 Wöhrle, H., Krell, M. M., Straube, S., Kim, S. K., Kirchner, E. A., and Kirchner, F. (2015). An Adaptive Spatial Filter for User-Independent Single Trial
Detection of Event-Related Potentials. IEEE Transactions on Biomedical Engineering, doi:10.1109/TBME.2015.2402252.
1,3 Straube, S. and Krell, M. M. (2014). How to evaluate an agent’s behaviour to infrequent events? – Reliable performance estimation insensitive to class distribution. Frontiers in Computational Neuroscience, 8(43):1–6, doi:10.3389/fncom.2014.00043.
2,3 Feess, D., Krell, M. M., and Metzen, J. H. (2013). Comparison of Sensor Selection Mechanisms for an ERP-Based Brain-Computer Interface. PloS ONE,
8(7):e67543, doi:10.1371/journal.pone.0067543.
1,2,3 Kirchner, E. A., Kim, S. K., Straube, S., Seeland, A., Wöhrle, H., Krell, M. M., Tabie, M., and Fahle, M. (2013). On the applicability of brain reading for predictive human-machine interfaces in robotics. PloS ONE, 8(12):e81732, doi:10.1371/journal.pone.0081732.
1 Fabisch, A., Metzen, J. H., Krell, M. M., and Kirchner, F. (2015). Accounting
for Task-Hardness in Active Multi-Task Robot Control Learning. Künstliche
Intelligenz.
6 conference publications
1,2,3 Krell, M. M., Straube, S., Wöhrle, H., and Kirchner, F. (2014c). Generalizing, Optimizing, and Decoding Support Vector Machine Classification. In ECML/PKDD-2014 PhD Session Proceedings, September 15-19, Nancy, France.
3 Straube, S., Metzen, J. H., Seeland, A., Krell, M. M., and Kirchner, E. A. (2011).
Choosing an appropriate performance measure: Classification of EEG-data with
varying class distribution. In Proceedings of the 41st Meeting of the Society for
Neuroscience 2011, Washington DC, United States.
1,3 Wöhrle, H., Teiwes, J., Krell, M. M., Seeland, A., Kirchner, E. A., and Kirchner,
F. (2014). Reconfigurable dataflow hardware accelerators for machine learning
and robotics. In Proceedings of European Conference on Machine Learning and
Principles and Practice of Knowledge Discovery in Databases (ECML PKDD2014), September 15-19, Nancy, France.
1,3 Wöhrle, H., Teiwes, J., Krell, M. M., Kirchner, E. A., and Kirchner, F. (2013b). A
Dataflow-based Mobile Brain Reading System on Chip with Supervised Online
Calibration - For Usage without Acquisition of Training Data. In Proceedings
of the International Congress on Neurotechnology, Electronics and Informatics,
pages 46–53, Vilamoura, Portugal. SciTePress.
3 Krell, M. M., Tabie, M., Wöhrle, H., and Kirchner, E. A. (2013c). Memory and Processing Efficient Formula for Moving Variance Calculation in EEG and EMG Signal Processing. In Proceedings of the International Congress on Neurotechnology, Electronics and Informatics, pages 41–45, Vilamoura, Portugal. SciTePress.
3 Tiedemann, T., Vögele, T., Krell, M. M., Metzen, J. H., and Kirchner, F. (2015).
Concept of a data thread based parking space occupancy prediction in a berlin
pilot region. In Papers from the 2015 AAAI Workshop. Workshop on AI for Transportation (WAIT-2015), January 25-26, Austin, USA. AAAI Press.
3 talks about pySPACE:
2,3 Krell, M. M. (2014). Introduction to the Signal Processing and Classification
Environment pySPACE. PyData Berlin.
3 Krell, M. M., Straube, S., Seeland, A., Wöhrle, H., Teiwes, J., Metzen, J. H.,
Kirchner, E. A., and Kirchner, F. (2013a). Introduction to pySPACE workflows.
peer-reviewed talk, NIPS 2013 Workshop on MLOSS: Towards Open Workflows,
Lake Tahoe, Nevada, USA.
1,2,3 Krell, M. M., Kirchner, E. A., and Wöhrle, H. (2014b). Our tools for large scale or
embedded processing of physiological data. Passive BCI Community Meeting,
Delmenhorst, Germany.
Appendix B
Proofs and Formulas
B.1 Dual Optimization Problems
B.1.1
Well Defined SVM Model
Theorem 1 (The SVM Model is well defined). The SVM optimization problem has feasible points, and a solution always exists if there is at least one sample for each class. Additionally, when using the hard margin, the sets of the two classes need to be strictly separable. Furthermore, Slater’s constraint qualification is fulfilled.
Proof. The first point is fairly easy, because a feasible point $P = (w, b, t)$ is described by
$$w = 0,\quad b = 0,\quad t_j = 10 \;\;\forall j : 1 \le j \le n. \tag{B.1}$$
When working with a hard-margin SVM, the two sets $S_{+1}$ and $S_{-1}$ with
$$S_z = \operatorname{conv}(\{x_j \mid y_j = z\}) \tag{B.2}$$
need to be strictly separable by a hyperplane with parametrization $P = (w, b)$, which defines a feasible point. Otherwise, the optimization problem has no solution, because there is no feasible point.
The optimization problem consists of a convex target function and linear constraints. Furthermore, P is a Slater point, because small changes of P are still
feasible.
Consequently, Slater’s constraint qualification can be applied to show
that the problem can be locally linearized and Lagrange duality can be applied
[Burges, 1998, Slater, 2014].
The argument about solvability will only be given for Method 3, but it can be applied analogously to numerous variants such as the hard-margin separation, squared loss, or an arbitrary norm of $w$. The target function $f(w, b, t) = \frac{1}{2}\|w\|_2^2 + C\sum_j t_j$ is bounded below by zero, and $f(P)$ is an upper bound. Furthermore, the constraints are linear and define a closed set, and the target function is convex. Even strong convexity holds when fixing $b$. Consequently, there cannot be more than one solution vector $w$. Due to the uniqueness of the optimal $t_j$, an optimal $b$ is also unique. With the help of the upper bound $f(P)$, the set of considered feasible points can be reduced to the ones where
$$\|w\|_2 \le \sqrt{2 f(P)}, \qquad \|t\|_1 \le \frac{f(P)}{C} \tag{B.3}$$
holds. All other points result in values of the target function being higher than the
value obtained by the feasible point P . If there were also limits for b, the set of
relevant and feasible points would be compact and consequently a minimum would
be obtained, because f is a continuous function. Since this is not the case, we have
to work with a sequence $(w(n), b(n), t(n))$ approaching the infimum. This sequence exists, because $f$ is bounded from below. Due to the bounds on $w$ and $t$ we can assume (at least by working with the respective subsequences) $\lim_{n\to\infty} w(n) = w'$ and $\lim_{n\to\infty} t(n) = t'$. Without loss of generality, we can furthermore assume that $\lim_{n\to\infty} b(n) = \infty$ and that $x_1$ is a sample with $y_1 = -1$. Inserting this into the feasibility constraints results in:
$$y_1(\langle w(n), x_1\rangle + b(n)) \ge 1 - t(n)_1 \;\Rightarrow\; \lim_{n\to\infty} \big({-\langle w(n), x_1\rangle - b(n)}\big) \ge \lim_{n\to\infty} \big(1 - t(n)_1\big) \tag{B.4}$$
and consequently $-\langle w', x_1\rangle - \infty \ge 1 - t'_1$, which is a contradiction. Assuming $\lim_{n\to\infty} b(n) = -\infty$ also leads to a contradiction by using a sample of the second class with $y_j = +1$. Consequently, we can assume that $\lim_{n\to\infty} b(n) = b'$ holds at least for a subsequence. So finally, we obtain the solution of the optimization problem: $(w', b', t')$.
B.1.2
Dual of the Hard Margin Support Vector Machine
Theorem 18 (Dual Hard Margin SVM). If the samples of the two classes are strictly
separable, the duality gap for the hard margin SVM is zero and the dual optimization
problem reads:
$$\min_{\alpha_j \ge 0,\; \sum_j \alpha_j y_j = 0} \;\; \frac{1}{2}\sum_{i,j} \alpha_i \alpha_j y_i y_j \langle x_i, x_j\rangle - \sum_j \alpha_j \tag{B.5}$$
Proof. The proof is the same as for Theorem 2, using the Lagrange function
$$L(w, b, \alpha) = \frac{1}{2}\|w\|_2^2 - \sum_j \alpha_j \big(y_j(\langle w, x_j\rangle + b) - 1\big) \tag{B.6}$$
and the derivatives
$$\frac{\partial L}{\partial w} = w - \sum_j \alpha_j y_j x_j, \qquad \frac{\partial L}{\partial b} = -\sum_j \alpha_j y_j. \tag{B.7}$$
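As a quick numerical sanity check (not part of the thesis code), the dual (B.5) can be solved for a toy data set with a general-purpose solver, and $w$ recovered from the stationarity condition (B.7); the data and the use of `scipy.optimize.minimize` are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Toy strictly separable data (hypothetical, for illustration only)
X = np.array([[2.0, 2.0], [3.0, 1.0], [-2.0, -2.0], [-1.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

# Dual objective (B.5): 0.5 * sum_ij a_i a_j y_i y_j <x_i, x_j> - sum_j a_j
Q = (y[:, None] * y[None, :]) * (X @ X.T)
objective = lambda a: 0.5 * a @ Q @ a - a.sum()

res = minimize(objective, np.ones(len(y)),
               bounds=[(0.0, None)] * len(y),
               constraints={"type": "eq", "fun": lambda a: a @ y})

# Stationarity (B.7): w = sum_j alpha_j y_j x_j; b from a support vector
alpha = res.x
w = (alpha * y) @ X
sv = np.argmax(alpha)
b = y[sv] - X[sv] @ w

margins = y * (X @ w + b)  # hard-margin constraints require all >= 1
```

For strictly separable data, all computed margins come out at or above one (up to solver tolerance), as the primal constraints demand.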
B.1.3
Detailed Calculation for the Dual of the L2–SVM
Even though the calculations for the dual formulation are straightforward, there is always a danger of small errors. To give at least one complete example, we provide the calculation for the L2–SVM from Section 1.1.1.1, Theorem 2, in detail. The model reads:
Method 21 (L2–Support Vector Machine ($p = p' = 2$)).
$$\min_{w,b,t} \;\; \frac{1}{2}\|w\|_2^2 + \sum_j C_j t_j^2 \quad \text{s.t.} \;\; y_j(\langle w, x_j\rangle + b) \ge 1 - t_j \;\;\forall j : 1 \le j \le n. \tag{B.8}$$
The respective Lagrange function is
$$L_2(w, b, t, \alpha) = \frac{1}{2}\|w\|_2^2 + \sum_j C_j t_j^2 - \sum_j \alpha_j \big(y_j(\langle w, x_j\rangle + b) - 1 + t_j\big) \tag{B.9}$$
with the derivatives
$$\frac{\partial L_2}{\partial w} = w - \sum_j \alpha_j y_j x_j, \qquad \frac{\partial L_2}{\partial b} = -\sum_j \alpha_j y_j, \qquad \frac{\partial L_2}{\partial t_j} = 2 t_j C_j - \alpha_j \tag{B.10}$$
as explained in Section 1.1.1.1. This results in the equations:
$$w = \sum_j \alpha_j y_j x_j \tag{B.11}$$
$$0 = \sum_j \alpha_j y_j \tag{B.12}$$
$$t_j = \frac{\alpha_j}{2C_j}. \tag{B.13}$$
When substituting the optimal $t_j$ and $w$ in $L_2$, we calculate:
$$L_2(w, b, t, \alpha) = L_2\Big(\sum_j \alpha_j y_j x_j,\; b,\; \tfrac{\alpha_j}{2C_j},\; \alpha\Big) \tag{B.14}$$
$$= \frac{1}{2}\Big\langle \sum_i \alpha_i y_i x_i,\; \sum_j \alpha_j y_j x_j \Big\rangle + \sum_j C_j \Big(\frac{\alpha_j}{2C_j}\Big)^2 - \sum_j \alpha_j \Big(y_j\Big(\Big\langle \sum_i \alpha_i y_i x_i,\; x_j \Big\rangle + b\Big) - 1 + \frac{\alpha_j}{2C_j}\Big) \tag{B.15}$$
$$= \frac{1}{2}\sum_{i,j} \alpha_i \alpha_j y_i y_j \langle x_i, x_j\rangle + \sum_j \frac{\alpha_j^2}{4C_j} - \sum_{i,j} \alpha_i \alpha_j y_i y_j \langle x_i, x_j\rangle - b\sum_j \alpha_j y_j + \sum_j \alpha_j - \sum_j \frac{\alpha_j^2}{2C_j} \tag{B.16--B.18}$$
$$= -\frac{1}{2}\sum_{i,j} \alpha_i \alpha_j y_i y_j \langle x_i, x_j\rangle - \sum_j \frac{\alpha_j^2}{4C_j} - b\cdot 0 + \sum_j \alpha_j. \tag{B.19}$$
Only in the last step, Equation (B.12) was used to eliminate b. Consequently, if b were
omitted in the original model, the resulting function would still be the same. Only
the additional restriction from Equation (B.12) would disappear.
B.1.4
Dual of the ν-SVM
Theorem 19 (Dual of the ν-SVM). Assuming solvability of the ν-SVM
$$\min_{w,t,\rho,b} \;\; \frac{1}{2}\|w\|_2^2 - \nu\rho + \frac{1}{n}\sum_j t_j \quad \text{s.t.} \;\; y_j(\langle w, x_j\rangle + b) \ge \rho - t_j \;\text{and}\; t_j \ge 0 \;\;\forall j : 1 \le j \le n, \tag{B.20}$$
the dual optimization problem can be formulated as:
$$\min_{\alpha} \;\; \frac{1}{2}\sum_{i,j} \alpha_i \alpha_j y_i y_j \langle x_i, x_j\rangle \quad \text{s.t.} \;\; \frac{1}{n} \ge \alpha_j \ge 0 \;\;\forall j : 1 \le j \le n, \quad \sum_j \alpha_j y_j = 0, \quad \sum_j \alpha_j = \nu. \tag{B.21}$$
Proof. As in Theorem 1, it can be shown that there are feasible points of the ν-SVM and that it fulfills Slater’s constraint qualification. The proof for the existence of a solution mentioned therein cannot be applied here. The target function is bounded only from above by zero, because setting all variables to zero yields a feasible solution. (As a side effect, it automatically holds $\rho > 0$ [Crisp and Burges, 2000].) Furthermore, the target function contains a negative component. Consequently, it cannot be used for restricting the set of feasible points to a compact set.
The Lagrange function reads:
$$L(w, b, t, \rho, \alpha, \gamma) = \frac{1}{2}\|w\|_2^2 - \nu\rho + \frac{1}{n}\sum_j t_j - \sum_j \alpha_j \big(y_j(\langle w, x_j\rangle + b) - \rho + t_j\big) - \sum_j \gamma_j t_j \tag{B.22}$$
with the derivatives
$$\frac{\partial L}{\partial w} = w - \sum_j \alpha_j y_j x_j, \quad \frac{\partial L}{\partial b} = -\sum_j \alpha_j y_j, \quad \frac{\partial L}{\partial t_j} = \frac{1}{n} - \alpha_j - \gamma_j, \quad \frac{\partial L}{\partial \rho} = -\nu + \sum_j \alpha_j. \tag{B.23}$$
Again, setting the derivatives to zero and substituting them into the Lagrange function provides the dual problem. $\rho$ is eliminated from the Lagrange function, but the additional constraint equation remains. Everything else is the same as for the dual of the L1–SVM in Theorem 2.
B.1.5
Dual of the Binary BRMM
The general (primal) L1–BRMM model with special offset treatment reads:
$$\min_{w,b,s,t} \;\; \frac{1}{2}\|w\|_2^2 + \frac{H}{2} b^2 + \sum_j C_j t_j + \sum_j C'_j s_j \tag{B.24}$$
$$\text{s.t.} \;\; R_j + s_j \ge y_j(\langle w, x_j\rangle + b) \ge 1 - t_j, \quad s_j \ge 0, \quad t_j \ge 0 \;\;\forall j : 1 \le j \le n.$$
The corresponding L2–BRMM model is very similar:
$$\min_{w,b,s,t} \;\; \frac{1}{2}\|w\|_2^2 + \frac{H}{2} b^2 + \sum_j C_j t_j^2 + \sum_j C'_j s_j^2 \quad \text{s.t.} \;\; R_j + s_j \ge y_j(\langle w, x_j\rangle + b) \ge 1 - t_j \;\;\forall j : 1 \le j \le n. \tag{B.25}$$
The Lagrange functions read:
$$L_1(w, b, s, t, \alpha, \beta, \gamma, \delta) = \frac{1}{2}\|w\|_2^2 + \frac{H}{2} b^2 + \sum_j C_j t_j + \sum_j C'_j s_j - \sum_j \alpha_j \big(y_j(\langle w, x_j\rangle + b) - 1 + t_j\big) + \sum_j \beta_j \big(y_j(\langle w, x_j\rangle + b) - R_j - s_j\big) - \sum_j \gamma_j s_j - \sum_j \delta_j t_j \tag{B.26--B.29}$$
and
$$L_2(w, b, s, t, \alpha, \beta) = \frac{1}{2}\|w\|_2^2 + \frac{H}{2} b^2 + \sum_j C_j t_j^2 + \sum_j C'_j s_j^2 - \sum_j \alpha_j \big(y_j(\langle w, x_j\rangle + b) - 1 + t_j\big) + \sum_j \beta_j \big(y_j(\langle w, x_j\rangle + b) - R_j - s_j\big). \tag{B.30--B.32}$$
The original problem is now equivalent to first maximizing $L$ over the positive dual variables $(\alpha, \beta, \gamma, \delta)$ and then minimizing with respect to the primal variables $(w, b, s, t)$. The dual problem is the reverse. Hence, we first need the derivatives with respect to the primal variables:
$$\frac{\partial L}{\partial w} = w - \sum_j (\alpha_j - \beta_j) y_j x_j, \qquad \frac{\partial L}{\partial b} = H b - \sum_j (\alpha_j - \beta_j) y_j, \tag{B.33}$$
$$\frac{\partial L_1}{\partial s_j} = C'_j - \beta_j - \gamma_j, \qquad \frac{\partial L_1}{\partial t_j} = C_j - \alpha_j - \delta_j, \tag{B.34}$$
$$\frac{\partial L_2}{\partial s_j} = 2 s_j C'_j - \beta_j, \qquad \frac{\partial L_2}{\partial t_j} = 2 t_j C_j - \alpha_j. \tag{B.35}$$
For getting the dual problems, two steps are required. Setting the derivatives to zero gives equations for the primal variables, which can then be substituted into the optimization problem, such that only the dual problem remains. The other step, purely for cosmetic reasons, is to multiply the problem by $-1$ and switch from maximization to minimization. Together, this results in
$$\min_{\alpha,\beta} \;\; \frac{1}{2}(\alpha-\beta)^T Q (\alpha-\beta) - \sum_j \alpha_j + \sum_j R_j \beta_j \quad \text{s.t.} \;\; 0 \le \alpha_j \le C_j, \;\; 0 \le \beta_j \le C'_j \;\;\forall j : 1 \le j \le n \tag{B.36}$$
in the L1 case and
$$\min_{\alpha,\beta} \;\; \frac{1}{2}(\alpha-\beta)^T Q (\alpha-\beta) - \sum_j \alpha_j + \sum_j R_j \beta_j + \frac{1}{4}\sum_j \frac{\alpha_j^2}{C_j} + \frac{1}{4}\sum_j \frac{\beta_j^2}{C'_j} \quad \text{s.t.} \;\; 0 \le \alpha_j, \;\; 0 \le \beta_j \;\;\forall j : 1 \le j \le n \tag{B.37}$$
in the L2 case with
$$Q_{kl} = y_k y_l \langle x_k, x_l\rangle + \frac{1}{H} \qquad \forall k, l : 1 \le k \le n,\; 1 \le l \le n. \tag{B.38}$$
Without the special offset treatment, the equation
$$\sum_j y_j(\alpha_j - \beta_j) = 0 \tag{B.39}$$
would have to be added to the constraints in the dual optimization problem. (The proof would be nearly the same.)
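The matrix $Q$ from (B.38), with its additional $1/H$ offset term, can be assembled in a few lines; the following is a minimal sketch with hypothetical names, not code from pySPACE:

```python
import numpy as np

def brmm_gram(X, y, H):
    """Q_kl = y_k * y_l * <x_k, x_l> + 1/H, as in Eq. (B.38)."""
    return (y[:, None] * y[None, :]) * (X @ X.T) + 1.0 / H

# Tiny illustrative data set
X = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
y = np.array([1.0, -1.0, 1.0])
Q = brmm_gram(X, y, H=4.0)

# Q is symmetric, and each entry matches the elementwise definition;
# e.g. <x_0, x_1> = 0, y_0 * y_1 = -1, so Q[0, 1] = 0 + 1/4
assert np.allclose(Q, Q.T)
assert np.isclose(Q[0, 1], 0.25)
```

The $1/H$ term adds a constant to every entry, which is exactly how the special offset treatment enters the dual.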
B.1.6
Dual of the One-Class BRMM Models
The dual problem formulations are required to derive update formulas to get a solution in an iterative way and to generate the online versions of the algorithms. Furthermore, they are needed to introduce kernels.
Method 22 (Dual of the One-Class BRMM).
$$\min_{\alpha,\beta} \;\; \frac{1}{2}\sum_{i,j}(\alpha_i-\beta_i)(\alpha_j-\beta_j)\langle x_i, x_j\rangle - 2\sum_j \alpha_j + (R+1)\sum_j \beta_j \quad \text{s.t.} \;\; 0 \le \alpha_j \le C \;\text{and}\; 0 \le \beta_j \le C \;\;\forall j : 1 \le j \le n. \tag{B.40}$$
Theorem 20 (Dual of the One-Class BRMM). Method 22 is a dual problem of Method 28, and both methods are connected via
$$w = \sum_j (\alpha_j - \beta_j) x_j. \tag{B.41}$$
Proof. To simplify the calculations, we use the equivalent formulation:
$$\min_{w,t,u} \;\; \frac{1}{2}\|w\|_2^2 + C\sum_j t_j + C\sum_j u_j \quad \text{s.t.} \;\; 1 + R + u_j \ge \langle w, x_j\rangle \ge 2 - t_j \;\text{and}\; t_j, u_j \ge 0 \;\;\forall j : 1 \le j \le n. \tag{B.42}$$
This is a convex optimization problem, and $w = 0,\; t_j = 10,\; u_j = 10 \;\forall j$ defines a Slater point. Hence, strong duality holds (Slater’s constraint qualification) [Boyd and Vandenberghe, 2004]. From the modified problem, we can derive the Lagrange function:
$$L(w, t, u, \alpha, \beta, \gamma, \delta) = \frac{1}{2}\|w\|_2^2 + C\sum_j t_j - \sum_j \gamma_j t_j + \sum_j \alpha_j \big(2 - t_j - \langle w, x_j\rangle\big) + C\sum_j u_j - \sum_j \delta_j u_j + \sum_j \beta_j \big(\langle w, x_j\rangle - R - 1 - u_j\big) \tag{B.43--B.45}$$
and calculate the derivatives:
$$\frac{\partial L}{\partial w} = w - \sum_j (\alpha_j - \beta_j) x_j, \qquad \frac{\partial L}{\partial t_j} = C - \gamma_j - \alpha_j, \qquad \frac{\partial L}{\partial u_j} = C - \delta_j - \beta_j. \tag{B.46}$$
To get the dual problem, the derivatives have to be set to zero and substituted into the Lagrange function. Since all dual variables have to be positive, the equations $\frac{\partial L}{\partial t_j} = 0$ and $\frac{\partial L}{\partial u_j} = 0$ can be used to eliminate $\gamma_j$ and $\delta_j$ with the inequalities $\alpha_j \le C$ and $\beta_j \le C$. Putting everything together gives us the dual problem:
$$\max_{\alpha,\beta} \;\; -\frac{1}{2}\sum_{i,j}(\alpha_i-\beta_i)(\alpha_j-\beta_j)\langle x_i, x_j\rangle + 2\sum_j \alpha_j - (R+1)\sum_j \beta_j \quad \text{s.t.} \;\; 0 \le \alpha_j \le C \;\text{and}\; 0 \le \beta_j \le C \;\;\forall j : 1 \le j \le n. \tag{B.47}$$
Multiplication of the target function by $-1$ completes the proof.
Method 23 (Dual of the L2–One-Class BRMM).
$$\min_{\alpha,\beta} \;\; \frac{1}{2}\sum_{i,j}(\alpha_i-\beta_i)(\alpha_j-\beta_j)\langle x_i, x_j\rangle - 2\sum_j \alpha_j + (R+1)\sum_j \beta_j + \sum_j \frac{\alpha_j^2+\beta_j^2}{2C} \quad \text{s.t.} \;\; 0 \le \alpha_j \;\text{and}\; 0 \le \beta_j \;\;\forall j : 1 \le j \le n. \tag{B.48}$$

Theorem 21 (Dual of the L2–One-Class BRMM). Method 23 is a dual problem of Method 31, and both methods are connected via $w = \sum_j (\alpha_j - \beta_j) x_j$.
Proof. To simplify the calculations, we use an equivalent formulation:
$$\min_{w,t,u} \;\; \frac{1}{2}\|w\|_2^2 + \frac{C}{2}\sum_j t_j^2 + \frac{C}{2}\sum_j u_j^2 \quad \text{s.t.} \;\; 1 + R + u_j \ge \langle w, x_j\rangle \ge 2 - t_j \;\;\forall j : 1 \le j \le n. \tag{B.49}$$
This is a convex optimization problem, and $w = 0,\; t_j = 10,\; u_j = 10 \;\forall j$ defines a Slater point. Hence, strong duality holds (Slater’s constraint qualification) [Boyd and Vandenberghe, 2004]. From the modified problem formulation, we can derive the Lagrange function:
$$L(w, t, u, \alpha, \beta) = \frac{1}{2}\|w\|_2^2 + \frac{C}{2}\sum_j t_j^2 + \sum_j \alpha_j \big(2 - t_j - \langle w, x_j\rangle\big) + \frac{C}{2}\sum_j u_j^2 + \sum_j \beta_j \big(\langle w, x_j\rangle - R - 1 - u_j\big) \tag{B.50--B.52}$$
and calculate the derivatives:
$$\frac{\partial L}{\partial w} = w - \sum_j (\alpha_j - \beta_j) x_j, \qquad \frac{\partial L}{\partial t_j} = C t_j - \alpha_j, \qquad \frac{\partial L}{\partial u_j} = C u_j - \beta_j. \tag{B.53}$$
To get the dual problem, the derivatives have to be set to zero and substituted into the Lagrange function to eliminate the primal variables $w$, $t$, and $u$. This gives us the dual problem:
$$\max_{\alpha,\beta} \;\; -\frac{1}{2}\sum_{i,j}(\alpha_i-\beta_i)(\alpha_j-\beta_j)\langle x_i, x_j\rangle + 2\sum_j \alpha_j - (R+1)\sum_j \beta_j - \sum_j \frac{\alpha_j^2+\beta_j^2}{2C} \tag{B.54}$$
$$\text{s.t.} \;\; 0 \le \alpha_j \;\text{and}\; 0 \le \beta_j \;\;\forall j : 1 \le j \le n.$$
Multiplication of the target function by $-1$ completes the proof.
Method 24 (Dual of the Hard-Margin One-Class BRMM).
$$\min_{\alpha,\beta} \;\; \frac{1}{2}\sum_{i,j}(\alpha_i-\beta_i)(\alpha_j-\beta_j)\langle x_i, x_j\rangle - 2\sum_j \alpha_j + (R+1)\sum_j \beta_j \quad \text{s.t.} \;\; 0 \le \alpha_j \;\text{and}\; 0 \le \beta_j \;\;\forall j : 1 \le j \le n. \tag{B.55}$$

Theorem 22 (Dual of the Hard-Margin One-Class BRMM). If there exists a $w'$ with $R > \langle w', x_j\rangle > 0 \;\forall j$, Method 24 is a dual problem of Method 32, and both methods are connected via $w = \sum_j (\alpha_j - \beta_j) x_j$.
Proof. Since $w'$ fulfills the constraints $R > \langle w', x_j\rangle > 0 \;\forall j$, it is a Slater point of Method 32. Hence, strong duality holds (Slater’s constraint qualification) [Boyd and Vandenberghe, 2004]. The Lagrange function for Method 32 is:
$$L(w, \alpha, \beta) = \frac{1}{2}\|w\|_2^2 + \sum_j \alpha_j \big(2 - \langle w, x_j\rangle\big) + \sum_j \beta_j \big(\langle w, x_j\rangle - R - 1\big) \tag{B.56}$$
with the derivative:
$$\frac{\partial L}{\partial w} = w - \sum_j (\alpha_j - \beta_j) x_j. \tag{B.57}$$
Hence, the dual problem reads:
$$\max_{\alpha,\beta} \;\; -\frac{1}{2}\sum_{i,j}(\alpha_i-\beta_i)(\alpha_j-\beta_j)\langle x_i, x_j\rangle + 2\sum_j \alpha_j - (R+1)\sum_j \beta_j \quad \text{s.t.} \;\; 0 \le \alpha_j \;\text{and}\; 0 \le \beta_j \;\;\forall j : 1 \le j \le n. \tag{B.58}$$
Multiplication of the target function by $-1$ completes the proof.
Method 25 (Dual BRMM Variants with Kernel k). To introduce kernels, $\langle x_i, x_j\rangle$ is again replaced by $k(x_i, x_j)$ in the dual problems. The decision function reads:
$$f(x) = \operatorname{sgn}\Big(\sum_j (\alpha_j - \beta_j)\, k(x_j, x) - 2\Big). \tag{B.59}$$
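A direct reading of the kernelized decision function (B.59) as code might look as follows; the RBF kernel and all names are chosen for illustration only and are not taken from pySPACE:

```python
import numpy as np

def rbf_kernel(u, v, gamma=1.0):
    """Gaussian RBF kernel k(u, v) = exp(-gamma * ||u - v||^2)."""
    return np.exp(-gamma * np.sum((np.asarray(u) - np.asarray(v)) ** 2))

def brmm_decide(x, X_train, alpha, beta, kernel=rbf_kernel):
    """f(x) = sgn(sum_j (alpha_j - beta_j) * k(x_j, x) - 2), Eq. (B.59)."""
    s = sum((a - b) * kernel(xj, x)
            for a, b, xj in zip(alpha, beta, X_train))
    return int(np.sign(s - 2))

# One training point identical to x gives k = 1; with alpha - beta = 5
# the decision value is 5 - 2 = 3 > 0, i.e. class +1
print(brmm_decide([0.0, 0.0], [[0.0, 0.0]], alpha=[5.0], beta=[0.0]))  # → 1
```

Only the difference $\alpha_j - \beta_j$ enters the prediction, which is why the two dual variables per sample can be stored as a single coefficient.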
B.2 Model Connections
B.2.1
Least Squares SVM and Ridge Regression
The model for ridge regression is:

Method 26 (Ridge Regression).
$$\min_{w,b,t} \;\; \frac{1}{2}\|w\|_2^2 + \frac{C}{2}\sum_j t_j^2 \quad \text{s.t.} \;\; y_j - (\langle w, x_j\rangle + b) = t_j \;\;\forall j : 1 \le j \le n \tag{B.60}$$
with $y_j \in \mathbb{R}$.
When restricting this model to $y_j \in \{-1, +1\}$ (binary classification), it is equivalent to the LS-SVM (Method 7) due to the equation:
$$t_j^2 = \big(1 - y_j(\langle w, x_j\rangle + b)\big)^2 = \big(y_j - (\langle w, x_j\rangle + b)\big)^2. \tag{B.61}$$

B.2.2
Equality of ǫ-RFDA and BRMM
The ǫ-RFDA method from Section 1.3.3.3 reads:

Definition 6 (2–norm regularized, ǫ-insensitive RFDA).
$$\min_{w,b,t} \;\; \frac{1}{2}\|w\|_2^2 + C\|t\|_\epsilon \quad \text{s.t.} \;\; y_j(\langle w, x_j\rangle + b) = 1 - t_j \;\;\forall j : 1 \le j \le n. \tag{B.62}$$

Theorem 12 (Equivalence between RFDA, SVR, and BRMM). The RFDA with ǫ-insensitive loss function and 2–norm regularization (or the SVR reduced to the values 1 and −1) and BRMM result in an identical classification, with a corresponding function mapping RFDA (SVR) hyperparameters $(C, \epsilon)$ to BRMM hyperparameters $(C', R')$ and vice versa.
Proof. As a first step, we want to replace the ǫ–norm by a linear formulation. Using the definition of $\|\cdot\|_\epsilon$ and replacing $|t_j| - \epsilon$ by a new variable $h_j$, the method can be written as
$$\min_{w,b,h} \;\; \frac{1}{2}\|w\|_2^2 + C\sum_j \max\{h_j, 0\} \quad \text{s.t.} \;\; |y_j(\langle w, x_j\rangle + b) - 1| = h_j + \epsilon \;\;\forall j : 1 \le j \le n. \tag{B.63}$$
Since $h$ is subject to minimization, the constraint can just as well be specified as the inequality
$$|y_j(\langle w, x_j\rangle + b) - 1| \le h_j + \epsilon. \tag{B.64}$$
Additionally, to omit the $\max\{h_j, 0\}$ term, we introduce a new positive variable $s_j$ and define $s_j = h_j$ if $h_j > 0$ and $s_j = 0$ for $h_j \le 0$. This results in a further reformulation of the original method:
$$\min_{w,b,s} \;\; \frac{1}{2}\|w\|_2^2 + C\sum_j s_j \quad \text{s.t.} \;\; |y_j(\langle w, x_j\rangle + b) - 1| \le s_j + \epsilon \;\text{and}\; s_j \ge 0 \;\;\forall j : 1 \le j \le n. \tag{B.65}$$
The last step is to replace the absolute value. This is done with the help of a case-by-case analysis, which results in the problem
$$\min_{w,b,s} \;\; \frac{1}{2}\|w\|_2^2 + C\sum_j s_j \quad \text{s.t.} \;\; 1 + (s_j + \epsilon) \ge y_j(\langle w, x_j\rangle + b) \ge 1 - (s_j + \epsilon) \;\text{and}\; s_j \ge 0 \;\;\forall j : 1 \le j \le n. \tag{B.66}$$
As $\epsilon < 1$, we can scale the problem by dividing by $1 - \epsilon$, yielding the scaled variables $w', b', s'_j$. We also scale the target function with $\frac{1}{(1-\epsilon)^2}$, such that the scaled problem reads
$$\min_{w',b',s'} \;\; \frac{1}{2}\|w'\|_2^2 + \frac{C}{1-\epsilon}\sum_j s'_j \quad \text{s.t.} \;\; \frac{1+\epsilon}{1-\epsilon} + s'_j \ge y_j(\langle w', x_j\rangle + b') \ge 1 - s'_j \;\text{and}\; s'_j \ge 0 \;\;\forall j : 1 \le j \le n. \tag{B.67}$$
The scaling also has an effect on the classification function, which is scaled in the same way as the variables. This scaling does not change the sign of the classification values, and so the mapping to the classes is still the same. Renaming $\frac{C}{1-\epsilon}$ to $C'$ and $\frac{1+\epsilon}{1-\epsilon}$ to $R'$, the result is BRMM with hyperparameters $C'$ and $R'$.
To make the proof in the other direction, we first have to find the $\epsilon$ corresponding to $R'$ and afterwards scale $C'$ with the help of $\epsilon$. This results in
$$\epsilon = \frac{R' - 1}{R' + 1}, \qquad C = (1 - \epsilon)\,C' = \frac{2C'}{R' + 1}. \tag{B.68}$$
For the mapping between RFDA and SVR, we have to use the fact that $y_j \in \{-1, 1\}$ and consequently $|y_j| = 1$ and $y_j^2 = 1$:
$$|y_j(\langle w, x_j\rangle + b) - 1| = |y_j|\,|y_j(\langle w, x_j\rangle + b) - 1| = |(\langle w, x_j\rangle + b) - y_j|. \tag{B.69}$$
Note that it is always possible to replace the absolute value function $|a|$ by $-b \le a \le b$ if $b$ is subject to minimization, since it then automatically holds that $b \ge 0$.
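The two mapping directions can be checked against each other numerically. The sketch below (hypothetical helper names) verifies the round trip through (B.68) and the label identity used in (B.69):

```python
import numpy as np

def rfda_to_brmm(C, eps):
    """(C, eps) -> (C', R') as derived in the forward direction of the proof."""
    return C / (1 - eps), (1 + eps) / (1 - eps)

def brmm_to_rfda(C_prime, R_prime):
    """(C', R') -> (C, eps), Eq. (B.68)."""
    eps = (R_prime - 1) / (R_prime + 1)
    return 2 * C_prime / (R_prime + 1), eps

# Round trip: mapping forward and back reproduces the original parameters
C, eps = 1.5, 0.25
C_back, eps_back = brmm_to_rfda(*rfda_to_brmm(C, eps))
assert np.isclose(C_back, C) and np.isclose(eps_back, eps)

# Label identity (B.69): |y f - 1| = |f - y| for y in {-1, +1}
rng = np.random.default_rng(0)
f = rng.normal(size=100)                 # arbitrary decision values
y = rng.choice([-1.0, 1.0], size=100)    # binary labels
assert np.allclose(np.abs(y * f - 1), np.abs(f - y))
```

Both checks pass for any choice of $C > 0$ and $0 \le \epsilon < 1$, mirroring the algebra in the proof.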
B.2.3
One-Class Algorithm Connections
Theorem 16 (Equivalence of SVDD and νoc-SVM on the Unit Hypersphere). If all training samples lie on the unit hypersphere, SVDD (Method 10) is equivalent to the νoc-SVM (Method 11).
Proof. SVDD is defined by
$$\min_{R',a,t'} \;\; R'^2 + C'\sum_j t'_j \quad \text{s.t.} \;\; \|a - x_j\|_2^2 \le R'^2 + t'_j \;\text{and}\; t'_j \ge 0 \;\;\forall j : 1 \le j \le n. \tag{B.70}$$
The norm part can be rewritten:
$$\|a - x_j\|_2^2 = \|a\|_2^2 - 2\langle a, x_j\rangle + \|x_j\|_2^2. \tag{B.71}$$
With this equation, and since the samples lie on the unit hypersphere ($\|x_j\|_2^2 = 1$), the SVDD method can be reformulated to:
$$\min_{R',a,t'} \;\; R'^2 + C'\sum_j t'_j \quad \text{s.t.} \;\; \langle a, x_j\rangle \ge \frac{\|a\|_2^2 + 1 - R'^2 - t'_j}{2} \;\text{and}\; t'_j \ge 0 \;\;\forall j : 1 \le j \le n \tag{B.72}$$
with $f_a(x) = \operatorname{sgn}\big(R'^2 - \|a\|_2^2 + 2\langle a, x\rangle - 1\big)$. Using the mapping
$$w = a, \qquad t_j = \frac{t'_j}{2}, \qquad \rho = \frac{\|a\|_2^2 + 1 - R'^2}{2}, \qquad \nu = \frac{1}{C' l} \tag{B.73}$$
results in
$$\min_{w,\rho,t} \;\; \|w\|_2^2 + 1 - 2\rho + \frac{2}{\nu l}\sum_j t_j \quad \text{s.t.} \;\; \langle w, x_j\rangle \ge \rho - t_j \;\text{and}\; 2t_j \ge 0 \;\;\forall j : 1 \le j \le n. \tag{B.74}$$
Scaling the target function and the restriction $2t_j \ge 0$ with $0.5$ shows the equivalence to the νoc-SVM model (Method 11). The decision functions are the same, too:
$$f_a(x) = \operatorname{sgn}(2\langle w, x\rangle - 2\rho) = \operatorname{sgn}(\langle w, x\rangle - \rho). \tag{B.75}$$
On the other hand, to get from the νoc-SVM to the SVDD, the reverse mapping
$$a = w, \qquad t'_j = 2t_j, \qquad R'^2 = \|w\|_2^2 + 1 - 2\rho, \qquad C' = \frac{1}{\nu l} \tag{B.76}$$
can be used.
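The key rewriting step (B.71), combined with $\|x_j\|_2^2 = 1$, is easy to verify numerically; the random vectors below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(size=3)          # center of the SVDD ball
x = rng.normal(size=3)
x /= np.linalg.norm(x)          # project the sample onto the unit hypersphere

# (B.71) with ||x||^2 = 1:  ||a - x||^2 = ||a||^2 - 2 <a, x> + 1
lhs = np.linalg.norm(a - x) ** 2
rhs = np.linalg.norm(a) ** 2 - 2 * np.dot(a, x) + 1
assert np.isclose(lhs, rhs)
```

Because the quadratic term $\|x_j\|_2^2$ is constant on the hypersphere, the SVDD ball constraint collapses into the linear half-space constraint of (B.72), which is exactly what makes the two models equivalent there.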
Theorem 17 (From νoc-SVM to the New One-Class SVM). Let $\rho(\nu)$ denote the optimal value of the νoc-SVM model. If $\rho(\nu) > 0$, the νoc-SVM is equivalent to our new one-class SVM.
Proof. Having the optimal $\rho(\nu)$, the optimal $w$ for the νoc-SVM can be determined by the optimization problem
$$\min_{w,t} \;\; \frac{1}{2}\|w\|_2^2 + \frac{1}{\nu l}\sum_j t_j \quad \text{s.t.} \;\; \langle w, x_j\rangle \ge \rho(\nu) - t_j \;\text{and}\; t_j \ge 0 \;\;\forall j. \tag{B.77}$$
Now, scaling the target function with $\frac{4}{\rho(\nu)^2}$ and the restrictions with $\frac{2}{\rho(\nu)}$ gives the equivalent optimization problem:
$$\min_{w,t} \;\; \frac{1}{2}\left\|\frac{2w}{\rho(\nu)}\right\|_2^2 + \frac{2}{\nu l \rho(\nu)}\sum_j \frac{2t_j}{\rho(\nu)} \quad \text{s.t.} \;\; \left\langle \frac{2w}{\rho(\nu)}, x_j\right\rangle \ge 2 - \frac{2t_j}{\rho(\nu)} \;\text{and}\; \frac{2t_j}{\rho(\nu)} \ge 0 \;\;\forall j. \tag{B.78}$$
This scaling is feasible, since $\rho(\nu) > 0$. Substituting
$$\bar{w} = \frac{2w}{\rho(\nu)}, \qquad \bar{t}_j = \frac{2t_j}{\rho(\nu)}, \qquad \bar{C} = \frac{2}{\nu l \rho(\nu)} \tag{B.79}$$
results in the new one-class SVM. Finally, for the decision function it holds:
$$f(x) = \operatorname{sgn}(\langle w, x\rangle - \rho(\nu)) = \operatorname{sgn}\left(\frac{\rho(\nu)}{2}\langle \bar{w}, x\rangle - 2\,\frac{\rho(\nu)}{2}\right) = \operatorname{sgn}(\langle \bar{w}, x\rangle - 2). \tag{B.80}$$
Theorem 23 (Hard-Margin One-Class SVM: $C = \infty$). Let $X$ denote the set of training instances $x_j$ with the convex hull $\operatorname{conv}(X)$. For the hard-margin one-class SVM, the origin separation approach reveals that the optimal hyperplane (for the positive class) is tangent to $\operatorname{conv}(X)$ in its point of minimal norm $x'$. The hyperplane is orthogonal to the vector $x'$ with $w = \frac{2x'}{\|x'\|_2^2}$.

Proof.

Method 27 (Hard-Margin One-Class SVM).
$$\min_{w} \;\; \frac{1}{2}\|w\|_2^2 \quad \text{s.t.} \;\; \langle w, x_j\rangle \ge 2 \;\;\forall j : 1 \le j \le n. \tag{B.81}$$

Via convex linear combinations of $\langle w, x_j\rangle \ge 2 \;\forall j$ it holds that $\langle w, x\rangle \ge 2 \;\forall x \in \operatorname{conv}(X)$. Furthermore, by the Cauchy–Schwarz inequality one gets:
$$2 \le \langle w, x'\rangle \le \|w\|_2 \|x'\|_2 \;\Rightarrow\; \|w\|_2 \ge \frac{2}{\|x'\|_2}. \tag{B.82}$$
So if $w' = \frac{2x'}{\|x'\|_2^2}$ fulfilled all restrictions, it would be optimal, because $\|w'\|_2 = \frac{2}{\|x'\|_2}$.

The following proof is a variant of [Boyd and Vandenberghe, 2004, separating hyperplane theorem]. Assume there exists an $x_j$ with $\big\langle \frac{2x'}{\|x'\|_2^2}, x_j\big\rangle < 2$. This can be reformulated to $\langle x', x_j\rangle < \|x'\|_2^2$. Due to convexity it holds that $(1-\alpha)x' + \alpha x_j \in \operatorname{conv}(X)$ for any $0 \le \alpha \le 1$. Consider the function $h: \mathbb{R} \to \mathbb{R}$, $h(\alpha) = \|(1-\alpha)x' + \alpha x_j\|_2^2$ and its derivative at zero:
$$\frac{\partial h}{\partial \alpha}(0) = \frac{\partial}{\partial \alpha}\Big[(1-\alpha)^2\|x'\|_2^2 + \alpha^2\|x_j\|_2^2 + 2\alpha(1-\alpha)\langle x', x_j\rangle\Big](0) \tag{B.83}$$
$$= \Big[-2(1-\alpha)\|x'\|_2^2 + 2\alpha\|x_j\|_2^2 + 2(1-2\alpha)\langle x', x_j\rangle\Big](0) \tag{B.84}$$
$$= -2\|x'\|_2^2 + 2\langle x', x_j\rangle. \tag{B.85}$$
With our assumption we get $\frac{\partial h}{\partial \alpha}(0) < 0$. Consequently, there exists a small $0 < t < 1$ such that $h(t) < h(0)$. In other words, there exists an
$$x'_t = (1-t)x' + t x_j \in \operatorname{conv}(X) \tag{B.86}$$
such that $\|x'_t\|_2 < \|x'\|_2$, which contradicts the definition of $x'$ as the point of minimal norm in $\operatorname{conv}(X)$. Hence, our assumption was wrong, and $w' = \frac{2x'}{\|x'\|_2^2}$ fulfills all the restrictions $\langle w', x_j\rangle \ge 2 \;\forall j$ and is the solution of the hard-margin one-class SVM.
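Theorem 23 can also be illustrated numerically: compute the minimal-norm point $x'$ of $\operatorname{conv}(X)$ as a small quadratic program over the simplex and check that $w = \frac{2x'}{\|x'\|_2^2}$ fulfills all constraints. The toy data and the use of `scipy.optimize.minimize` below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Toy points whose convex hull does not contain the origin
X = np.array([[1.0, 1.0], [2.0, 0.5], [1.5, 2.0]])

# Minimal-norm point of conv(X): minimize ||lam @ X||^2 over the simplex
objective = lambda lam: np.sum((lam @ X) ** 2)
res = minimize(objective, np.full(len(X), 1.0 / len(X)),
               bounds=[(0.0, 1.0)] * len(X),
               constraints={"type": "eq", "fun": lambda lam: lam.sum() - 1.0})
x_min = res.x @ X

# Theorem 23: w = 2 x' / ||x'||_2^2 fulfills <w, x_j> >= 2 for all j
w = 2 * x_min / (x_min @ x_min)
assert np.all(X @ w >= 2 - 1e-5)
```

For this data the minimal-norm point is the vertex $(1, 1)$, so the tangent hyperplane touches the hull exactly there while all other points lie strictly on its positive side.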
B.2.4
Connection of Classifiers with Different Regularization Parameter
Theorem 24 (Linear Transition with Regularization Parameter). There is a function of optimal dual variables $\alpha(C)$ of the C-SVM depending on the chosen regularization parameter $C$. It can be defined such that, except for a finite number of points, it is locally linear, which means that it locally has a representation $\alpha(C) = Cv_1 + v_2$ with $v_1, v_2 \in \mathbb{R}^n$. The same holds for the function $b(C)$. Consequently, for a linear kernel, the classification vector $w$ and the offset $b$ can be chosen such that they are piecewise linear functions of the regularization weights.

This theorem is a side effect of the proofs given in [Chang and Lin, 2001, especially the formulas in Lemma 5]. Note that it can easily be extended, e.g., to class-dependent weighting. In some cases, the optimization problem might have more than one solution, but the solution function can be chosen such that it only includes the choice where the behavior is locally linear. It is not yet clear whether the function can be discontinuous at the finite number of points where it is not locally linear, but the example calculation in [Chang and Lin, 2001] indicates that this is not the case.
B.3 BRMM Properties
B.3.1
Sparsity of the 1–norm BRMM
Having the formulation of Method 16 as a linear program, the Simplex algorithm can
be applied to deliver an exact solution in a finite number of steps. Since there might
be more than one solution, an advantage of the Simplex algorithm is that it prefers
solutions with more variables equal to zero as shown in the following proof.
Theorem 13 (Feature Reduction of 1–norm BRMM). A solution of the 1–norm BRMM (Method 16) obtained with the Simplex algorithm always uses a number $n_f$ of features smaller than the number of support vectors lying on the four margins. In other words, $n_f$ is smaller than the number of training examples $x_j$ that satisfy
$$\langle w^+ - w^-, x_j\rangle + b^+ - b^- \in \{1, -1, R, -R\}. \tag{B.87}$$
Proof. Due to the usage of the soft margin, the convex optimization problem always
has a solution.
Since we have a solvable linear optimization problem, the Simplex algorithm can
be applied. The set of feasible points in this special case is a polytope. The principle
of the Simplex algorithm is to take a vertex of this polytope and choose step by step
a neighboring one with a higher value of the target function. In the context of the
Simplex algorithm these vertices are called basic feasible points. Hence the solution
found by the Simplex algorithm is always a vertex of this polytope and it is called a
basic feasible solution [Nocedal and Wright, 2006].
As a first step, we introduce the mathematical description of these vertices. This
results in a restriction on the number of nonzero variables in the 1–norm BRMM
method. The next step is then to analyze the interconnection between the variables
and to connect the found restriction with the number of used features of the linear
classifier.
Method 16 has a total of $2n$ linear equations, which can be formulated as one equation using a matrix multiplication with a matrix $A \in \mathbb{R}^{2n\times(2m+2+3n)}$. The parameter $n$ is the number of given data vectors, and $m$ is the dimension of the data space, which is also the number of available features.
Definition 7 (Basic Feasible Point in Method 16). A basic feasible point
$$y = (w^+, w^-, b^+, b^-, t, g, h) \tag{B.88}$$
has only positive components and solves the method equations. Each component of $y$ corresponds to a column of $A$. $y$ can only have nonzero components such that all corresponding columns of $A$ are linearly independent.
So a maximum of 2n out of 2m + 2 + 3n components of a basic feasible point
can be different from zero because a 2n × (2m + 2 + 3n) matrix can have at most
2n linearly independent columns. It can be proven that the basic feasible points
from this definition are exactly the vertices of the Simplex algorithms applied on
Method 16 [Nocedal and Wright, 2006]. Let y be a basic feasible solution. We already
know that a maximum of 2n components of y can be nonzero and we now analyze the
consequences for the individual parts w+ , w− , b+ , b− , t, g, h.
We are mainly interested in the classification vector $w = w^+ - w^-$, as the number of features used refers to the number of components of $w$ that are different from zero. The above considerations alone deliver $n_f \le 2n$ as an upper bound for the number of features. To get a more precise bound, we have to analyze the dependencies between the variables $t_j$, $g_j$, and $h_j$. Hence, for each training example $x_j$ we conduct a case-by-case analysis of the classification function $f(x) = \langle w^+ - w^-, x\rangle + b^+ - b^-$:
If $|f(x_j)| < 1$: $\quad t_j \ne 0$ and $g_j \ne 0$.
If $|f(x_j)| = 1$: $\quad g_j \ne 0$.
If $1 < |f(x_j)| < R$: $\quad g_j \ne 0$ and $h_j \ne 0$.
If $|f(x_j)| = R$: $\quad h_j \ne 0$.
If $|f(x_j)| > R$: $\quad h_j \ne 0$ and $t_j \ne 0$. (B.89)
For each of the n training samples at least one of the variables tj , gj and hj is nonzero.
Hence, the upper bound for the number of features drops from 2n down to n. Additionally, one nonzero component is required in all cases where |f (xj )| is not equal to 1
or R. The number of these cases can be written as n − n1R , where n1R is the number
of training samples xj for which f (xj ) ∈ {1, −1, R, −R}. Summing up, the maximal
number 2n of nonzero components of any basic feasible solution y is composed of
• one component if b = b+ − b− is not zero (1b ),
• n plus another n − n1R components from the case-by-case analysis,
• and finally the number of used features nf .
Written as an inequality, we finally have
$$2n \ge 1_b + (2n - n_{1R}) + n_f \;\Rightarrow\; n_{1R} \ge n_f, \tag{B.90}$$
as we wanted to prove.
Note that in the special case of R = 1 we count each vector on the hyperplane
twice, accounting for the fact that these vectors still lie on two planes at the same time
in terms of the method. In the case of a 1–norm SVM, the number of used features
is restricted by the number of vectors lying on the two hyperplanes with |f (x)| = 1.
These findings are a direct consequence of Theorem 13 and the connections shown in Section 1.3.
B.3.2
Extension of Affine Transformation Perspective
Consider a totally different view on binary classification. We now search for a good classifier together with a good transformation. Therefore, we want a large soft margin as defined in the SVM method together with a small spread of the data after transformation. This approach corresponds to the one in [Shivaswamy and Jebara, 2010], but in contrast, our classifier is not fixed. The corresponding optimization problem is:
Definition 8 (Classification Transformation Problem).
  min_{w,b,A,T,R,t}  (1/2)‖w‖₂² + C Σ_{j=1}^{n} t_j + BR

  s.t.  y_j(⟨w, A x_j + T⟩ + b) ≥ 1 − t_j     ∀j : 1 ≤ j ≤ n
        (1/2)‖A x_j + T‖₂² ≤ R²               ∀j : 1 ≤ j ≤ n        (B.91)
        t_j ≥ 0                               ∀j : 1 ≤ j ≤ n
where B and C are positive hyperparameters.
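The constraints of this problem can be checked numerically with a few lines of NumPy. The helper below is a sketch (function name and toy data are illustrative, not part of the thesis), evaluated at the trivial feasible point used in the proof of Lemma 25:

```python
import numpy as np

# Feasibility check for the constraints of the classification transformation
# problem (B.91); `feasible` and the toy data are illustrative assumptions.
def feasible(w, b, A, T, R, t, X, y):
    margins = y * (X @ A.T @ w + T @ w + b)       # y_j(<w, A x_j + T> + b)
    spread = 0.5 * np.sum((X @ A.T + T) ** 2, axis=1)  # (1/2)||A x_j + T||^2
    return bool(np.all(margins >= 1 - t) and np.all(spread <= R ** 2)
                and np.all(t >= 0))

X = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([1.0, -1.0])
m = X.shape[1]
# the trivial point w=0, b=0, A=0, T=0, R=0, t_j=1 is feasible:
assert feasible(np.zeros(m), 0.0, np.zeros((m, m)), np.zeros(m), 0.0,
                np.ones(len(X)), X, y)
```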
Lemma 25. The classification transformation problem always has a solution.
Proof. First, the feasible set is not empty because we can set
w = 0, b = 0, A = 0, T = 0, R = 0, tj = 1 ∀j : 1 ≤ j ≤ n.
(B.92)
Now we can “reduce” the feasibility set by restricting the target function to the value
reached by this feasible point:
  (1/2)‖w‖₂² + C Σ_{j=1}^{n} t_j + BR ≤ Cn.        (B.93)
This results in additional restrictions of the variables:
  ‖w‖₂ ≤ √(2Cn),    ‖t‖₁ ≤ n,    0 ≤ R ≤ Cn/B,    ‖A x_j + T‖₂ ≤ √2 · Cn/B.        (B.94)
The last one can be reformulated and seen as a restriction of (A T) on the space which is built by the x_j with an additional last component with the value 1 (homogeneous space):

  ‖(A T)‖₂ ≤ √2 · Cn / (B √(1 + min_j ‖x_j‖₂²)).        (B.95)
B.3. BRMM Properties
Outside this space, we can define AT to be zero without loss of generality. So we
showed the affine transformation to be bounded in the space of matrices. It can be
seen that the feasible set is closed. So the only remaining unbounded variable is b.
If it were bounded, too, we could use the existence of minima of continuous functions
on compact sets or bounded, closed sets in finite dimensional R-vector space.
Nevertheless, our target function is also bounded below by zero and so we can find
a sequence (wm , tm , Am , T m , Rm , bm )m∈N approaching the infimum. By looking only at
subsequences,
lim (wm , tm , Am , T m , Rm ) = (w, t, A, T, R)
(B.96)
m→∞
can be assumed. If b had no converging subsequence, limm→∞ b = ∞ or − ∞ holds for
the above subsequence. If the classifications problem is not trivial, we can assume
y1 = 1, y2 = −1. If limm→∞ b = ∞, we can use the inequality
− (hwm , Am x2 + T m i + bm ) ≥ 1 − tm
2
(B.97)
and get the contradiction −∞ ≥ 1 − t2 . The other case is similar. So our sequence
approaching the infimum can be assumed to converge. Because of closure of the
feasible set and continuity of the target function, the limit is one solution of the
minimization problem.
Lemma 26. The classification transformation problem has a solution with an optimal matrix A∗ = [A T] of rank one.
Proof. From the previous lemma we already know that there is a solution of the
problem. We call it (w0 , b0 , t0 , A0 , T 0 , R0 ).
Assuming w⁰ ≠ 0 without loss of generality, we can find an orthonormal base and thereby an orthonormal transformation O which maps w⁰ to ‖w⁰‖₂ · e₁. This results in the same transformation as in the affine transformation problem. Now we use this base transformation to transform the problem. We have
  y_j(⟨w, A x_j + T⟩ + b) = y_j(⟨Ow, (O A Oᵀ)(O x_j) + OT⟩ + b)     ∀j : 1 ≤ j ≤ n
  (1/2)‖A x_j + T‖₂² = (1/2)‖(O A Oᵀ)(O x_j) + OT‖₂²               ∀j : 1 ≤ j ≤ n.        (B.98)
So by fixing the optimal w⁰ and by using the new orthonormal base we get a subproblem whose solutions are still solutions of the previous problem:
  min_{b,t,Ā,T̄,R}  (1/2)‖w⁰‖₂² + C Σ_j t_j + BR

  s.t.  y_j(‖w⁰‖₂ (ā₁ᵀ x̄_j + T̄₁) + b) ≥ 1 − t_j     ∀j : 1 ≤ j ≤ n
        (1/2) Σ_i (ā_iᵀ x̄_j + T̄_i)² ≤ R²             ∀j : 1 ≤ j ≤ n        (B.99)
        t_j ≥ 0                                       ∀j : 1 ≤ j ≤ n
where ā_i is the i-th row of Ā. The bar (¯) stands for components in the representation of the new base. Since we are trying to minimize R, the sum in the second inequality has to be minimal. Furthermore, ā_i is irrelevant for the rest of the program for all i ≠ 1. So we can set (ā_i, T̄_i) = 0 ∀i ≠ 1 and we have a rank one matrix after retransformation. This matrix is still optimal in the original problem because the change has no effect on the target function.
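The change of base used in this proof can be checked numerically. The following sketch builds O as a Householder reflection (an assumption for illustration; any orthonormal map with O w⁰ = ‖w⁰‖₂ e₁ works) and verifies the invariance identities of (B.98):

```python
import numpy as np

# Householder reflection O with O @ w0 = ||w0|| * e1 (illustrative sketch).
rng = np.random.default_rng(0)
m = 4
w0 = rng.normal(size=m)
A = rng.normal(size=(m, m))
T = rng.normal(size=m)
x = rng.normal(size=m)

v = w0 - np.linalg.norm(w0) * np.eye(m)[0]       # Householder vector
O = np.eye(m) - 2.0 * np.outer(v, v) / (v @ v)   # orthonormal transformation

assert np.allclose(O @ w0, np.linalg.norm(w0) * np.eye(m)[0])
# inner product and norm terms are invariant under the change of base:
lhs = w0 @ (A @ x + T)
rhs = (O @ w0) @ ((O @ A @ O.T) @ (O @ x) + O @ T)
assert np.isclose(lhs, rhs)
assert np.isclose(np.linalg.norm(A @ x + T),
                  np.linalg.norm((O @ A @ O.T) @ (O @ x) + O @ T))
```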
After demonstrating that an optimal A can be chosen with rank one, we can reduce the original problem or look at a subproblem and we get a program with no fixed
variables, but still we use the transformation defined by w0 :
  min_{w,b,t,Ā,T̄,R}  (1/2)‖w‖₂² + C Σ_j t_j + BR

  s.t.  y_j( (⟨w, w⁰⟩/‖w⁰‖₂) (ā₁ᵀ x̄_j + T̄₁) + b ) ≥ 1 − t_j     ∀j : 1 ≤ j ≤ n
        (1/2)(ā₁ᵀ x̄_j + T̄₁)² ≤ R²                                ∀j : 1 ≤ j ≤ n        (B.100)
        t_j ≥ 0                                                   ∀j : 1 ≤ j ≤ n.
This method looks similar to the Relative Margin Machine formulation.
B.3.3
Implementation of the BRMM with 2–norm regularization
In the following we will give further details on the calculations, which lead to the
algorithm formulas given in Section 1.3.4.1. Therefore, we use the dual problem
formulations from Appendix B.1.5.
For implementing a solution algorithm, in the n-th step all except one index j are kept fixed in the dual, and for this index the optimal α_j^(n+1) and β_j^(n+1) are determined. Let f₁ and f₂ be the target functions of the dual problems. Now we define
  g₁(d) = f₁(α + d e_j, β) = (d²/2) Q_jj + d (Q_j·(α − β) − 1) + c                                   (B.101)
  g₂(d) = f₂(α + d e_j, β) = (d²/2) (Q_jj + 1/(2C_j)) + d (Q_j·(α − β) − 1 + α_j/(2C_j)) + c         (B.102)
  h₁(d) = f₁(α, β + d e_j) = (d²/2) Q_jj + d (R_j − Q_j·(α − β)) + c′                                (B.103)
  h₂(d) = f₂(α, β + d e_j) = (d²/2) (Q_jj + 1/(2C′_j)) + d (R_j − Q_j·(α − β) + β_j/(2C′_j)) + c′    (B.104)
with respective constants c and c′. The remaining steps are to calculate the optimal d, to perform a case-by-case analysis concerning the boundaries, and to plug the solution formula together with the boundary constraints into formulas for α_j^(n+1) and β_j^(n+1) depending on α_j^(n) and β_j^(n).
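Each of the four functions above is a one-dimensional quadratic in d, so the optimal step has the closed form d = −s/a, with curvature a and slope s at d = 0, followed by clipping the updated coordinate to its box. A minimal sketch (names and numbers illustrative):

```python
# One generic coordinate step: minimize q(d) = (d**2/2)*a + d*s + c over d,
# then clip the updated dual coordinate to [0, C]; purely illustrative helper.
def coordinate_step(alpha_j, a, s, C):
    d = -s / a                          # unconstrained minimizer of q(d)
    return min(max(alpha_j + d, 0.0), C)

# e.g. Q_jj = 2, slope s = Q_j.(alpha - beta) - 1 = 3, alpha_j = 0.5:
assert coordinate_step(0.5, 2.0, 3.0, C=1.0) == 0.0   # clipped at the lower bound
```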
Solvability and Constraint Qualifications
The following argument has the same structure as the proof in Section 1.1.1.1 but is
now for the BRMM instead of the C-SVM.
For using duality theory, two requirements have to be checked, which we will do now on the concrete primal problem. First, it has to be proven that there is a solution, because applying duality theory requires an optimal point. Second, a constraint qualification has to hold, such that a local linearization of the problem is possible, which is the basic concept of duality theory.
The two target functions (with q ∈ {1, 2}) are defined as:
  f′_q(w, b, t, s) = (1/2)‖w‖₂² + (H/2) b² + Σ_j C_j t_j^q + Σ_j C′_j s_j^q.        (B.105)
First some important observations:
• The constraints are linear.
• f′_q are convex, continuous functions.
  ⇒ The BRMM model is defined by a convex optimization problem.
• With e being a vector of ones, the point p = (0, 0, 2e, 2e) is a feasible point (fulfilling all constraints) with u := f′_q(p) = 2^q Σ_j (C_j + C′_j).
  ⇒ An upper bound of the optimal value of the optimization problem is u.
• p is a Slater point, meaning that it fulfills the restrictions without equality.
Since p is a Slater point of a convex optimization problem, the Slater condition is
fulfilled, which is a constraint qualification. So it remains to show the existence of
a solution. With the help of the upper bound u we can infer further restrictions for
optimal points:
  ‖w‖₂ ≤ √(2u),   |b| ≤ √(2u/H),   ‖t‖_q ≤ (u / min_{1≤j≤n} C_j)^(1/q),   ‖s‖_q ≤ (u / min_{1≤j≤n} C′_j)^(1/q).        (B.106)
Together with the normal constraints of the model, these restrictions define a compact set. Since fq′ is a continuous function, it has a minimum on this set. Hence, a
solution exists.
For the proof of the existence of a solution, it is very useful that b is part of the target function. Otherwise, one has to work with sequences and subsequences approaching the infimum, which exists because f′_q is bounded below by zero. Assuming that there is no minimum results in a sequence with converging components except for b_n going to plus or minus infinity; one gets a contradiction when taking the limits of the constraints of one example for each class:
  lim_{n→∞} y_j(⟨w_n, x_j⟩ + b_n) = y_j lim_{n→∞} b_n ≥ 1 − lim_{n→∞} t_n.        (B.107)
This results in ∞ ≥ const. for one class and −∞ ≥ const. for the other class, which is a contradiction to the assumed divergence of b_n.
B.3.4 ν-Balanced Relative Margin Machine
The ν-BRMM was derived from the ν-SVR by a sign/variable shifting (between tj and
sj ) if yj = −1:
  min_{w,b,ε,s,t}  (1/2)‖w‖₂² + C (nνε + Σ_j s_j + Σ_j t_j)

  s.t.  ε + s_j ≥ y_j(⟨w, x_j⟩ + b) − 1 ≥ −ε − t_j    ∀j : 1 ≤ j ≤ n        (B.108)
        s_j, t_j ≥ 0                                   ∀j : 1 ≤ j ≤ n.
This model is always feasible and fulfills Slater's constraint qualification. The proof is similar to the C-SVM. Consequently, it is possible to derive optimality conditions and to work with the dual optimization problem. Formulating the respective Lagrange function and calculating the derivatives leads to:
  L(w, b, ε, s, t, α, β, γ, δ) = (1/2)‖w‖₂² + C (nνε + Σ_j t_j + Σ_j s_j)        (B.109)
      + Σ_j α_j (1 − t_j − ε − y_j(⟨w, x_j⟩ + b)) − Σ_j γ_j t_j                  (B.110)
      + Σ_j β_j (y_j(⟨w, x_j⟩ + b) − 1 − ε − s_j) − Σ_j δ_j s_j                  (B.111)

  ∂L/∂w   = w − Σ_j y_j (α_j − β_j) x_j                                          (B.112)
  ∂L/∂b   = −Σ_j y_j (α_j − β_j)                                                 (B.113)
  ∂L/∂ε   = Cnν − Σ_j α_j − Σ_j β_j                                              (B.114)
  ∂L/∂t_j = C − α_j − γ_j                                                        (B.115)
  ∂L/∂s_j = C − β_j − δ_j.                                                       (B.116)
The dual ν-BRMM reads:
  min_{α,β}  (1/2) Σ_{i,j} (α_i − β_i)(α_j − β_j) ⟨x_i, x_j⟩ y_i y_j − Σ_j (α_j − β_j)

  s.t.  C ≥ α_j ≥ 0  ∀j : 1 ≤ j ≤ n,     C ≥ β_j ≥ 0  ∀j : 1 ≤ j ≤ n,
        Σ_j α_j y_j = Σ_j β_j y_j,        Σ_j (α_j + β_j) = νCn.        (B.117)
To show that ν is a lower bound (in percentage) on the number of support vectors, it is useful to rescale the dual parameters and obtain an equivalent rescaled dual ν-BRMM problem:
  min_{α,β}  (1/2) Σ_{i,j} (α_i − β_i)(α_j − β_j) ⟨x_i, x_j⟩ y_i y_j − (1/(Cn)) Σ_j (α_j − β_j)

  s.t.  1/n ≥ α_j ≥ 0  ∀j : 1 ≤ j ≤ n,    1/n ≥ β_j ≥ 0  ∀j : 1 ≤ j ≤ n,
        Σ_j α_j y_j = Σ_j β_j y_j,         Σ_j (α_j + β_j) = ν.        (B.118)
B.4 Unary Classifier Variants and Implementations
B.4.1 One-Class Balanced Relative Margin Machine and its Variants
For the following algorithms the decision function reads:
  f(x) = sgn(⟨w, x⟩ − 2).        (B.119)
For the range parameter R it holds R ≥ 1.
Method 28 (One-Class BRMM).
  min_{w,t}  (1/2)‖w‖₂² + C Σ_j t_j        (B.120)

  s.t. 1 + R + t_j ≥ ⟨w, x_j⟩ ≥ 2 − t_j and t_j ≥ 0 ∀j : 1 ≤ j ≤ n.
Method 29 (New One-Class SVM (R = ∞)).
  min_{w,t}  (1/2)‖w‖₂² + C Σ_j t_j        (B.121)

  s.t. ⟨w, x_j⟩ ≥ 2 − t_j and t_j ≥ 0 ∀j : 1 ≤ j ≤ n.
Method 30 (One-Class RFDA (R = 1)).
  min_w  (1/2)‖w‖₂² + C Σ_j |⟨w, x_j⟩ − 2|.        (B.122)
Method 31 (L2–One-Class BRMM).
  min_{w,t}  (1/2)‖w‖₂² + (C/2) Σ_j t_j²        (B.123)

  s.t. 1 + R + t_j ≥ ⟨w, x_j⟩ ≥ 2 − t_j ∀j : 1 ≤ j ≤ n.
Method 32 (Hard-Margin One-Class BRMM).
  min_w  (1/2)‖w‖₂²        (B.124)

  s.t. 1 + R ≥ ⟨w, x_j⟩ ≥ 2 ∀j : 1 ≤ j ≤ n.
For the existence of a solution R > 1 is required. In contrast to all other BRMM
methods, this model might have no solution.
Method 33 (1–Norm One-Class BRMM).
  min_{w,t}  ‖w‖₁ + C Σ_j t_j        (B.125)

  s.t. 1 + R + t_j ≥ ⟨w, x_j⟩ ≥ 2 − t_j and t_j ≥ 0 ∀j : 1 ≤ j ≤ n.
B.4.2 Iterative Solution Formulas for One-Class BRMM Variants
This section introduces update formulas using the approaches from Sections 1.2.3 and 1.2.4. Let j be the index of the relevant sample for the update in the k-th iteration.
Theorem 27 (Update Formulas for the One-Class BRMM). With the projection function P (z) = max {0, min {z, C}}, the update formulas are:
  α_j^(k+1) = P( α_j^(k) − (1/k(x_j, x_j)) (−2 + Σ_i (α_i − β_i) k(x_i, x_j)) )          (B.126)
  β_j^(k+1) = P( β_j^(k) + (1/k(x_j, x_j)) (−(R + 1) + Σ_i (α_i − β_i) k(x_i, x_j)) )    (B.127)

and in the linear case:

  α_j^(k+1) = P( α_j^(k) − (1/‖x_j‖₂²) (⟨w^(k), x_j⟩ − 2) )                              (B.128)
  β_j^(k+1) = P( β_j^(k) + (1/‖x_j‖₂²) (⟨w^(k), x_j⟩ − (R + 1)) )                        (B.129)
  w^(k+1) = w^(k) + ((α_j^(k+1) − α_j^(k)) − (β_j^(k+1) − β_j^(k))) x_j.                 (B.130)
Proof. With the help of

  h(α, β) = (1/2) Σ_{i,m} (α_i − β_i)(α_m − β_m) k(x_i, x_m) − 2 Σ_i α_i + (R + 1) Σ_i β_i        (B.131)

we define g₁(d) = h(α^(k) + d e_j, β^(k)), g₂(d) = h(α^(k), β^(k) + d e_j) and calculate:

  ∂g₁/∂d = d · k(x_j, x_j) + Σ_i (α_i^(k) − β_i^(k)) k(x_i, x_j) − 2,          (B.132)
  ∂g₂/∂d = d · k(x_j, x_j) − Σ_i (α_i^(k) − β_i^(k)) k(x_i, x_j) + (R + 1).    (B.133)

If k(x_j, x_j) = 0 the index can be ignored and no update is required. With k(x_j, x_j) > 0 the optimal d can be determined with ∂g₁/∂d = 0 or ∂g₂/∂d = 0, respectively. With the projection of the resulting solution to the restriction interval [0, C] this gives the update formulas. Replacing k(x_i, x_j) with ⟨x_i, x_j⟩ and substituting w^(m) = Σ_j (α_j − β_j) x_j results in the formulas for the linear case.
Theorem 28 (Update Formulas for the L2–One-Class BRMM). With the projection
function P (z) = max {0, z}, the update formulas are:
  α_j^(k+1) = P( α_j^(k) − (1/(k(x_j, x_j) + 1/C)) (−2 + α_j^(k)/C + Σ_i (α_i − β_i) k(x_i, x_j)) )          (B.134)
  β_j^(k+1) = P( β_j^(k) + (1/(k(x_j, x_j) + 1/C)) (−(R + 1) − β_j^(k)/C + Σ_i (α_i − β_i) k(x_i, x_j)) )    (B.135)

and in the linear case:

  α_j^(k+1) = P( α_j^(k) − (1/(‖x_j‖₂² + 1/C)) (⟨w^(k), x_j⟩ − 2 + α_j^(k)/C) )                              (B.136)
  β_j^(k+1) = P( β_j^(k) + (1/(‖x_j‖₂² + 1/C)) (⟨w^(k), x_j⟩ − (R + 1) − β_j^(k)/C) )                        (B.137)
  w^(k+1) = w^(k) + ((α_j^(k+1) − α_j^(k)) − (β_j^(k+1) − β_j^(k))) x_j.                                     (B.138)
Proof. With the help of

  h(α, β) = (1/2) Σ_{i,m} (α_i − β_i)(α_m − β_m) k(x_i, x_m) − 2 Σ_i α_i + (R + 1) Σ_i β_i + Σ_i (α_i² + β_i²)/(2C)        (B.139)

we define g₁(d) = h(α^(k) + d e_j, β^(k)), g₂(d) = h(α^(k), β^(k) + d e_j) and calculate:

  ∂g₁/∂d = d (k(x_j, x_j) + 1/C) + Σ_i (α_i^(k) − β_i^(k)) k(x_i, x_j) − 2 + α_j^(k)/C,          (B.140)
  ∂g₂/∂d = d (k(x_j, x_j) + 1/C) − Σ_i (α_i^(k) − β_i^(k)) k(x_i, x_j) + (R + 1) + β_j^(k)/C.    (B.141)

If k(x_j, x_j) = 0 the index can be ignored and no update is required. With k(x_j, x_j) > 0 the optimal d can be determined with ∂g₁/∂d = 0 or ∂g₂/∂d = 0, respectively. With the projection of the resulting solution to the restriction interval [0, ∞) this gives the update formulas. Replacing k(x_i, x_j) with ⟨x_i, x_j⟩ and substituting w^(m) = Σ_j (α_j − β_j) x_j results in the formulas for the linear case.
Theorem 29 (Update Formulas for the Hard-Margin One-Class BRMM). With the
projection function P (z) = max {0, z}, the update formulas are:
  α_j^(k+1) = P( α_j^(k) − (1/k(x_j, x_j)) (−2 + Σ_i (α_i − β_i) k(x_i, x_j)) )          (B.142)
  β_j^(k+1) = P( β_j^(k) + (1/k(x_j, x_j)) (−(R + 1) + Σ_i (α_i − β_i) k(x_i, x_j)) )    (B.143)

and in the linear case:

  α_j^(k+1) = P( α_j^(k) − (1/‖x_j‖₂²) (⟨w^(k), x_j⟩ − 2) )                              (B.144)
  β_j^(k+1) = P( β_j^(k) + (1/‖x_j‖₂²) (⟨w^(k), x_j⟩ − (R + 1)) )                        (B.145)
  w^(k+1) = w^(k) + ((α_j^(k+1) − α_j^(k)) − (β_j^(k+1) − β_j^(k))) x_j.                 (B.146)
These formulas are the same as for the One-Class BRMM but with a different projection.
Proof. The proof is the same as for the One-Class BRMM, but the final projection is different because there is no upper boundary on the variables.
B.4.3 Online One-Class BRMM Variants
According to the origin separation approach in Section 1.2.4, deriving the update formulas is straightforward. With a new incoming sample x_j, the respective weights are initialized with zero, w is updated, and afterwards the update weights are not needed any longer. w is usually initialized with zeros, but it can also be initialized randomly or with a vector from a different dataset.
Method 34 (Online One-Class BRMM).
  α = max{0, min{ (1/‖x_j‖₂²) (2 − ⟨w^(j), x_j⟩), C }}          (B.147)
  β = max{0, min{ (1/‖x_j‖₂²) (⟨w^(j), x_j⟩ − (R + 1)), C }}    (B.148)
  w^(j+1) = w^(j) + (α − β) x_j.
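A direct transcription of (B.147)/(B.148) into code; the function name, default hyperparameters, and the small numerical check are illustrative assumptions:

```python
import numpy as np

# One online One-Class BRMM step, cf. (B.147)/(B.148); names illustrative.
def online_ocbrmm_step(w, x, C=1.0, R=2.0):
    nx2 = x @ x
    if nx2 == 0.0:                      # degenerate sample, nothing to do
        return w
    score = w @ x
    alpha = max(0.0, min((2.0 - score) / nx2, C))
    beta = max(0.0, min((score - (R + 1.0)) / nx2, C))
    return w + (alpha - beta) * x

# with a large C, the first sample starting from w = 0 lands on the margin:
x0 = np.array([1.0, 1.0])
w1 = online_ocbrmm_step(np.zeros(2), x0, C=10.0)
assert np.isclose(w1 @ x0, 2.0)
```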
Method 35 (Online L2–One-Class BRMM).
  α = max{0, (1/(‖x_j‖₂² + 1/C)) (2 − ⟨w^(j), x_j⟩) }
  β = max{0, (1/(‖x_j‖₂² + 1/C)) (⟨w^(j), x_j⟩ − (R + 1)) }
  w^(j+1) = w^(j) + (α − β) x_j.

Method 36 (Online Hard-Margin One-Class BRMM).

  α = max{0, (1/‖x_j‖₂²) (2 − ⟨w^(j), x_j⟩) }
  β = max{0, (1/‖x_j‖₂²) (⟨w^(j), x_j⟩ − (R + 1)) }
  w^(j+1) = w^(j) + (α − β) x_j.        (B.149)
198
Appendix B. Proofs and Formulas
To get the respective SVM perceptrons, R = ∞ has to be used, which results in β = 0
in all cases.
Method 37 (Online One-Class SVM).
  w^(j+1) = w^(j) + max{0, min{ (1/‖x_j‖₂²) (2 − ⟨w^(j), x_j⟩), C }} x_j.        (B.150)
Method 38 (Online L2–One-Class SVM).
  w^(j+1) = w^(j) + max{0, (1/(‖x_j‖₂² + 1/C)) (2 − ⟨w^(j), x_j⟩) } x_j.        (B.151)
Method 39 (Online Hard-Margin One-Class SVM).
  w^(j+1) = w^(j) + max{0, (1/‖x_j‖₂²) (2 − ⟨w^(j), x_j⟩) } x_j.        (B.152)
For completeness, we also give the reduced formulas for the RFDA variants (R = 1), except for the hard margin case, where no solution exists.
Method 40 (Online One-Class RFDA).
  w^(j+1) = w^(j) + max{−C, min{ (1/‖x_j‖₂²) (2 − ⟨w^(j), x_j⟩), C }} x_j.        (B.153)
Method 41 (Online L2–One-Class RFDA).
  w^(j+1) = w^(j) + (1/(‖x_j‖₂² + 1/C)) (2 − ⟨w^(j), x_j⟩) x_j.        (B.154)
B.5 Positive Upper Boundary Support Vector Estimation
This section is joint work with Alexander Fabisch and is based on:

Fabisch, A., Metzen, J. H., Krell, M. M., and Kirchner, F. (2015). Accounting for Task-Hardness in Active Multi-Task Robot Control Learning. Künstliche Intelligenz.

My contribution to this paper is the PUBSVE algorithm and the respective formulas for the implementation after a request by Alexander Fabisch.
This section presents the SVR variant PUBSVE. It is related to this thesis due to its
relation to SVM, and its special offset treatment.
We are given a set of observations D = {(x_j, y_j)}_{j=1}^{n} and assume that the y_j depend on the x_j via y_j = f(x_j) − e_j, where e_j is some noise term. In contrast to standard regression problems, we assume e_j ≥ 0, i.e., we always observe values y_j which are less than or equal to the true function value f(x_j). This model is appropriate for instance in reinforcement learning when f(x_j) returns the maximal reward possible in
a context xj , and yj is the actual reward obtained by a learning agent, which often
makes suboptimal decisions.
We are now interested in inferring the function f from observations D, i.e., learn
an estimate fˆ of f . One natural constraint on the estimate is that fˆ(xj ) ≥ yj , i.e., fˆ
shall be an upper boundary on D. Assuming positive values, the goal is to have a low
b and to keep the boundary as tight as possible but also to generalize well on unseen
data. This can be achieved by a regularization:
Method 42 (Positive Upper Boundary Support Vector Estimation (PUBSVE)).
  min_{w,b,t}  (1/2)‖w‖₂² + (H/2) b² + C Σ_j t_j^q        (B.155)

  s.t. ⟨w, x_j⟩ + b ≥ y_j − t_j and t_j ≥ 0 ∀j : 1 ≤ j ≤ n.
H is a special hyperparameter to weight between a simple maximum using the offset b and having a real curve fitted.¹ The error toleration constant C should be chosen to be infinity to enforce a hard margin. It is only used here to give a more general model and to make the resemblance between our error handling and the hinge loss clear (Table 1.1). In this case, q ∈ {1, 2} was used to also allow for a squared loss.² The y_j need to be normalized (e.g., by subtracting min_{j′} y_{j′}), such that a positive value of b can be expected, because otherwise f̂(x) ≡ 0 would be the solution of our suggested model. The same approach could be used to estimate a negative lower boundary by multiplying the y_j and the resulting final boundary function f from the PUBSVE with −1. The introduction of nonlinear kernels and sparse regularization, and the implementation, is straightforward (see also Chapter 1). We typically use a non-parametric, kernelized model for f̂, e.g., f̂(x) = b + Σ_{i=1}^{n} α_i k(x_i, x) with RBF kernel k and offset b, because it provides an arbitrarily tight boundary and usually a linear model is not appropriate.

¹ Usually H should be chosen high for real curve fitting.
² If q = 2, the constraint t_j ≥ 0 can be omitted.
Thanks to the offset in the target function, the special offset treatment approach
can be used for implementation as outlined in the following. First the dual optimization problems are derived.
  L₁(w, b, t, α, γ) = (1/2)‖w‖₂² + (H/2) b² + C Σ_j t_j + Σ_j α_j (y_j − t_j − b − ⟨w, x_j⟩) − Σ_j γ_j t_j        (B.156)
  L₂(w, b, t, α) = (1/2)‖w‖₂² + (H/2) b² + C Σ_j t_j² + Σ_j α_j (y_j − t_j − b − ⟨w, x_j⟩)                       (B.157)

  ∂L_q/∂w  = w − Σ_j α_j x_j    ⇒  w_opt = Σ_j α_j x_j           (B.158)
  ∂L_q/∂b  = Hb − Σ_j α_j       ⇒  b_opt = (1/H) Σ_j α_j         (B.159)
  ∂L₁/∂t_j = C − α_j − γ_j      ⇒  0 ≤ α_j ≤ C                   (B.160)
  ∂L₂/∂t_j = 2C t_j − α_j       ⇒  t_j,opt = α_j/(2C)            (B.161)
Consequently the dual L1–PUBSVE reads:

  min_{α : 0 ≤ α_j ≤ C ∀j}  (1/2) Σ_{i,j} α_i α_j ⟨x_i, x_j⟩ + (1/(2H)) (Σ_j α_j)² − Σ_j α_j y_j        (B.162)
with the respective update formula after introducing the kernel function k:

  α_j^new = max{0, min{ α_j^old − (1/(k(x_j, x_j) + 1/H)) (−y_j + Σ_i α_i^old k(x_i, x_j) + (1/H) Σ_i α_i^old), C }}.        (B.163)
For the hard margin case, set C = ∞. The dual L2–PUBSVE reads:
  min_{α : 0 ≤ α_j ∀j}  (1/2) Σ_{i,j} α_i α_j ⟨x_i, x_j⟩ + (1/(2H)) (Σ_j α_j)² − Σ_j α_j y_j + (1/(4C)) Σ_j α_j²        (B.164)
with the update formula:

  α_j^new = max{0, α_j^old − (1/(k(x_j, x_j) + 1/(2C) + 1/H)) (α_j^old/(2C) − y_j + Σ_i α_i^old k(x_i, x_j) + (1/H) Σ_i α_i^old) }.        (B.165)
The target function is:

  f(x) = Σ_i α_i k(x_i, x) + b   with   b = (1/H) Σ_i α_i.        (B.166)
To reduce the training time and memory usage of the PUBSVE significantly, instead of training the PUBSVE on the whole set of observed pairs we can update the boundaries incrementally after each update [Syed et al., 1999]: we forget every example except the support vectors x_i and the corresponding weights (α_i > 0), collect new samples and use the new samples and the support vectors to train the model of the upper and lower boundaries. The result is illustrated in Figure B.1, where the samples are drawn from uniform random distributions with x ∈ [0, 1) and y lies between the boundaries that are marked by the gray areas. We use this method to reduce the computational complexity at the cost of a slightly higher error because some previous examples that are close to the estimated boundary but are not support vectors might be outside of the boundaries after another iteration.
Figure B.1: Visualization of incremental learning with PUBSVE. 8 iterations of the incremental learning of upper and lower boundaries: for each update of the PUBSVE we take the new samples (small red dots) and the support vectors (large yellow dots) from the previous iteration as a training set. The area between the upper and the lower boundary is blue and the area that has been added in comparison to the previous iteration is red. All previous samples that will not be used for the incremental training are displayed as small blue dots. The true boundaries are marked by the gray areas.
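The incremental scheme can be sketched in a few lines; `fit_pubsve` stands in for any PUBSVE solver returning the dual weights and the training set it was fitted on (an assumption for illustration, not the thesis implementation):

```python
# Incremental training: keep only support vectors (alpha_i > 0) between
# batches; `fit_pubsve` and `dummy_fit` are illustrative stand-ins.
def incremental_fit(fit_pubsve, batches):
    kept_x, kept_y = [], []
    for bx, by in batches:
        alpha, (tx, ty) = fit_pubsve(kept_x + bx, kept_y + by)
        kept_x = [xi for xi, ai in zip(tx, alpha) if ai > 0]  # support vectors
        kept_y = [yi for yi, ai in zip(ty, alpha) if ai > 0]
    return kept_x, kept_y

def dummy_fit(xs, ys):  # toy "solver": the largest x become support vectors
    alpha = [1.0 if xi >= max(xs) - 0.5 else 0.0 for xi in xs]
    return alpha, (xs, ys)

sx, sy = incremental_fit(dummy_fit, [([1.0, 2.0], [0, 0]), ([3.0], [0])])
assert sx == [3.0]
```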
Appendix C
Configuration Files
type: node_chain
input_path: MNIST
parameter_ranges:
  classifier: [LibsvmOneClass, OcRmm]
  label: ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
  lc: [2, 1.5, 1, 0.5, 0, -0.5, -1, -1.5, -2, -2.5, -3, -3.5, -4]
node_chain:
  - node: PCASklearn
    parameters:
      n_components: 40
  - node: EuclideanFeatureNormalization
  - node: classifier
    parameters:
      class_labels: [label, REST]
      complexity: eval(10**lc)
      max_iterations: 100000
      nu: eval(((-lc + 2.1)/6.2)**(1.5))
      random: true
      tolerance: eval(min(0.001*10**lc, 0.01))
  - node: PerformanceSink
    parameters: {ir_class: label, sec_class: REST}

Figure C.1: Operation specification file for the comparison of the new one-class SVM ("OcRmm" with range=∞) and the νoc-SVM ("LibsvmOneClass") on MNIST data (Section 1.4.5.2).
type: node_chain
input_path: MNIST
parameter_ranges:
  classifier: [OcRmm, OcRmmPerceptron, 2RMM, UnaryPA0, UnaryPA1]
  label: ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
  lr: eval(range(4, 21))
node_chain:
  - node: PCASklearn
    parameters:
      n_components: 40
  - node: EuclideanFeatureNormalization
  - node: Grid_Search
    parameters:
      evaluation:
        metric: AUC
        performance_sink_node:
          node: PerformanceSink
          parameters:
            calc_AUC: true
            ir_class: label
            sec_class: REST
      nodes:
        - node: classifier
          parameters:
            class_labels: [label, REST]
            complexity: eval(10**~~lc~~)
            max_iterations: 100
            radius: eval((lr)/10.0)
            range: eval(lr/4.0)
            tolerance: eval(0.001*10**~~lc~~)
      optimization:
        ranges:
          ~~lc~~: eval([-5.0 + .5*i for i in range(15)])
      parallelization: {processing_modality: backend}
      validation_set:
        split_node:
          node: CV_Splitter
          parameters: {splits: 5, stratified: true}
  - node: PerformanceSink
    parameters: {ir_class: label, sec_class: REST}

Figure C.2: Operation specification file for unary classifier comparison on MNIST data (Section 1.4.5.1).
type: node_chain
input_path: P300_Data_Preprocessed_InterSession
store_node_chain: True
node_chain:
  - node: Time_Series_Source
  - node: xDAWN
    parameters:
      erp_class_label: "Target"
      retained_channels: 8
  - node: TDF
  - node: O_FN
  - node: NilSink

Figure C.3: Operation specification file for storing preprocessing flows (Section 2.4.6).
type: node_chain
input_path: P300_Data_Preprocessed_InterSession
parameter_ranges:
  backtransformation: [with, without]
  co_adapt: [false, double]
  log_dist: [1, 1.25, 1.5, 1.75, 2, 2.25, 2.5, 2.75, 3, 3.25, 3.5, 3.75, 4]
  runs: 10
constraints:
  [("backtransformation" == "with" and "co_adapt" == "double")
   or ("backtransformation" == "without" and "co_adapt" == "False")]
node_chain:
  - node: Time_Series_Source
  - node: Instance_Selection
    parameters:
      reduce_class: false
      test_percentage_selected: 100
      train_percentage_selected: 0
  - node: Noop
    parameters: {keep_in_history: True}
  - node: RandomFlowNode
    parameters:
      dataset: INPUT_DATASET
      distance: eval(10**log_dist)
      flow_base_dir: result_folder_from_stored_preprocessing
      retrain: True
  - node: RmmPerceptron
    parameters:
      class_labels: [Target, Standard]
      co_adaptive: co_adapt
      co_adaptive_index: 2
      complexity: 1
      history_index: 1
      range: 100
      retrain: True
      weight: [5.0, 1.0]
      zero_training: True
  - node: Classification_Performance_Sink
    parameters: {ir_class: Standard, save_trace: True}

Figure C.4: Operation specification file for reinitialization after changing the preprocessing (Section 2.4.6).
#!/usr/bin/python
# http://scikit-learn.org/stable/auto_examples/plot_classifier_comparison.html
"""Reduced script version by Mario Michael Krell taken from scikit-learn."""
# Code source: Gaël Varoquaux
#              Andreas Müller
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.cross_validation import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_moons, make_circles, make_classification
from sklearn.svm import SVC
from sklearn.lda import LDA
from sklearn.linear_model import PassiveAggressiveClassifier

h = .02  # step size in the mesh

names = ["Linear SVM", "RBF SVM", "Polynomial SVM", "PA1", "FDA"]
classifiers = [SVC(kernel="linear", C=0.025), SVC(gamma=2, C=1),
               SVC(kernel="poly", degree=2, C=10),
               PassiveAggressiveClassifier(n_iter=1), LDA()]

X, y = make_classification(n_features=2, n_redundant=0, n_informative=2,
                           random_state=1, n_clusters_per_class=1)
rng = np.random.RandomState(2)
X += 2 * rng.uniform(size=X.shape)
linearly_separable = (X, y)

datasets = [make_moons(noise=0.3, random_state=0),
            make_circles(noise=0.2, factor=0.5, random_state=1),
            linearly_separable]

figure = plt.figure(figsize=(3 * len(classifiers), 3 * len(datasets)))
i = 1
for ds in datasets:
    X, y = ds
    X = StandardScaler().fit_transform(X)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.4)
    x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
    y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                         np.arange(y_min, y_max, h))
    cm = plt.cm.RdBu
    cm_bright = ListedColormap(['#FF0000', '#0000FF'])
    ax = plt.subplot(len(datasets), len(classifiers) + 1, i)
    ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
    ax.set_xlim(xx.min(), xx.max())
    ax.set_ylim(yy.min(), yy.max())
    ax.set_xticks(())
    ax.set_yticks(())
    i += 1
    for name, clf in zip(names, classifiers):
        ax = plt.subplot(len(datasets), len(classifiers) + 1, i)
        clf.fit(X_train, y_train)
        score = clf.score(X_test, y_test)
        if hasattr(clf, "decision_function"):
            Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
        else:
            Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
        Z = Z.reshape(xx.shape)
        ax.contourf(xx, yy, Z, cmap=cm, alpha=.8)
        ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
        ax.set_xlim(xx.min(), xx.max())
        ax.set_ylim(yy.min(), yy.max())
        ax.set_xticks(())
        ax.set_yticks(())
        ax.set_title(name)
        i += 1
figure.subplots_adjust(left=.02, right=.98)
figure.savefig("scikit_classifier_vis_pp.png", dpi=300, bbox_inches='tight')

Figure C.5: Scikit-learn script for classifier visualization (Figure 2.1).
[Figure C.6 reproduces the actiCAP 128Ch Standard-2 electrode layout sheet from www.brainproducts.com (white, yellow, green, and pink holders for channels 1–128; black holder for Gnd, blue holder for Ref). Electrode nomenclature according to: Oostenveld, R. & Praamstra, P. The five percent electrode system for high-resolution EEG and ERP measurements. Clinical Neurophysiology 2001; 112: 713–719.]

Figure C.6: Electrode positions of a 128 channel electrode cap taken from www.brainproducts.com. For a 64 channel cap, the pink and yellow colored electrodes are not used.
Acronyms
ν-SVM ν support vector machine — Section 1.1.1.3
νoc-SVM classical one-class support vector machine — Section 1.1.6.3
C-SVM classical support vector machine — Section 1.1
AUC area under the ROC curve [Bradley, 1997]
BA balanced accuracy — Figure 3.5
BCI brain-computer interface
BRMM balanced relative margin machine — Section 1.3.2
CPU central processing unit
CSP common spatial patterns [Blankertz et al., 2008]
DSL domain-specific language
EEG electroencephalogram
EMG electromyogram
ERP event-related potential
FDA Fisher’s discriminant — Section 1.1.3
fMRI functional magnetic resonance imaging
GUI graphical user interface
ICA independent component analysis [Jutten and Herault, 1991, Hyvärinen, 1999,
Rivet et al., 2009]
LS-SVM least squares support vector machine — Section 1.1.2
MDP modular toolkit for data processing [Zito et al., 2008]
MEG magnetoencephalography
MPI message passing interface
PAA passive-aggressive algorithm — Section 1.1.5
PCA principal component analysis [Lagerlund et al., 1997, Rivet et al., 2009, Abdi and Williams, 2010]
PUBSVE positive upper boundary support vector estimation — Appendix B.5
pySPACE Signal Processing And Classification Environment written in Python
RBF radial basis function
RFDA regularized Fisher’s discriminant — Section 1.1.3
RKHS reproducing kernel Hilbert space
RMM relative margin machine — Section 1.1.4
ROC receiver operating characteristic [Green and Swets, 1988, Macmillan and Creelman, 2005]
SLAM simultaneous localization and mapping
SMO sequential minimal optimization — Section 1.2.2
SVDD support vector data description — Section 1.1.6.1
SVM support vector machine — Section 1.1
SVR support vector regression — Section 1.1.1.4
YAML YAML Ain’t Markup Language [Ben-Kiki et al., 2008]
Symbols
b  offset/bias of the classification function f
C  regularization parameter of the C-SVM and its variants, also called cost parameter or complexity
conv  convex hull
ei  i-th unit vector
exp  exponential function
f  classification function
Hz  := {x ∈ Rn | hw, xi + b = z} hyperplane
k  kernel function to replace the scalar product in the algorithm model
m  dimensionality of the data
n  number of samples
k.kp  p-norm
h., .i  scalar product
sgn(t)  := +1 if t > 0, −1 otherwise; signum function
tj  loss value for the misclassification of xj with label yj
w  vector ∈ Rm to describe a linear function on the data x via a scalar product
x  data sample ∈ Rm
xj  j-th sample of the training data ∈ Rm
yj  label of xj
xij  two-dimensional data sample x with i-th temporal and j-th spatial dimension (e.g., sensor)
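The linear symbols above (w, b, f, sgn, and the kernel function k) can be illustrated with a minimal Python sketch. This is not code from the thesis; the concrete numbers and the RBF choice for k are illustrative assumptions:

```python
import math

# Minimal sketch of the symbols defined above (illustrative only):
# w and b describe a linear function on a data sample x via the scalar
# product, sgn is the signum function, and k is a kernel that can replace
# the scalar product in the algorithm model.

def sgn(t):
    # signum function: +1 if t > 0, -1 otherwise
    return 1 if t > 0 else -1

def scalar_product(u, v):
    # <u, v> for vectors given as plain lists
    return sum(ui * vi for ui, vi in zip(u, v))

def f(w, b, x):
    # classification function f(x) = sgn(<w, x> + b)
    return sgn(scalar_product(w, x) + b)

def rbf_kernel(x, y, gamma=1.0):
    # one common choice for k: k(x, y) = exp(-gamma * ||x - y||_2^2)
    return math.exp(-gamma * sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

w = [1.0, -1.0]  # weight vector in R^m with m = 2 (illustrative)
b = 0.5          # offset/bias of f
x = [2.0, 1.0]   # data sample in R^m

print(f(w, b, x))  # <w, x> + b = 2.0 - 1.0 + 0.5 = 1.5 > 0, so prints 1
```

Note that, following the piecewise definition above, sgn(0) evaluates to −1, so samples exactly on the hyperplane H0 receive the label −1.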
List of Figures
1 Graphical abstract of the main parts of this thesis . . . . . . . . . . . . . 8
2 Labyrinth Oddball . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.1 3D-Cube of our novel connections . . . . . . . . . . . . . . . . . . . . . . . 15
1.2 Support vector machine scheme . . . . . . . . . . . . . . . . . . . . . . . . 17
1.3 Soft margin support vector machine scheme . . . . . . . . . . . . . . . . . 18
1.4 Relative margin machine scheme . . . . . . . . . . . . . . . . . . . . . . . 33
1.5 Online passive-aggressive Algorithm . . . . . . . . . . . . . . . . . . . . . 34
1.6 Overview of BRMM method connections . . . . . . . . . . . . . . . . . . . 49
1.7 Classification problem with drift in one component of one class . . . . . 51
1.8 Classifier performance as function of R on synthetic data . . . . . . . . . 64
1.9 Examples of normalized digits . . . . . . . . . . . . . . . . . . . . . . . . . 65
1.10 Classifier performance as function of R on benchmark data . . . . . . . . 66
1.11 Classifier performance as function of R on MNIST data for two special
numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
1.12 Origin separation scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
1.13 Scheme of relations between binary classifiers and their one-class and
online variants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
1.14 Geometric relation between SVDD and one-class SVM . . . . . . . . . . 76
1.15 Comparison of different unary classifiers on the MNIST dataset . . . . . 78
1.16 Performance comparison of νoc-SVM and new one-class SVM . . . . . . . 80
1.17 Comparison of different normalization techniques and online classifiers . 81
1.18 Comparison of the different classifiers and normalization techniques . . 83
1.19 Comparison of classifiers (except PA1 and PA2) after Gaussian feature
normalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
1.20 Simplified overview of connections of SVM variants . . . . . . . . . . . . 88
2.1 Visualization of different classifiers trained on different datasets . . . . 94
2.2 Illustrative data processing chain scheme with examples of linear algorithms and the formulas for the backtransformation in short . . . . . . . 98
2.3 Contour plots of backtransformation weights for handwritten digit classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
2.4 Contour plots of backtransformation weights for handwritten digit classification with nonlinear classifier . . . . . . . . . . . . . . . . . . . . . . 109
2.5 Contour plots of backtransformation weights for handwritten digit classification with different classifiers . . . . . . . . . . . . . . . . . . . . . . 111
2.6 Visualization of data for movement prediction and the corresponding
processing chain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
2.7 Adaption to random preprocessing . . . . . . . . . . . . . . . . . . . . . . 116
2.8 Performance trace to random preprocessing after every 1000 samples . . 116
3.1 Processing types and their connection to the data granularity . . . . . . 125
3.2 Overview of the more than 100 processing nodes in pySPACE . . . . . . 127
3.3 Processing scheme of a node chain operation in pySPACE . . . . . . . . . 129
3.4 Node chain example file . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
3.5 Confusion matrix and metrics . . . . . . . . . . . . . . . . . . . . . . . . . 136
3.6 Performance, class ratios and guessing . . . . . . . . . . . . . . . . . . . . 138
3.7 General scheme of the pattern search . . . . . . . . . . . . . . . . . . . . 142
3.8 Operation specification example file for spatial filter comparison . . . . . 145
3.9 Visualization from the evaluation GUI . . . . . . . . . . . . . . . . . . . . 146
3.10 Intra-session and inter-session scheme . . . . . . . . . . . . . . . . . . . . 155
3.11 Intra-session sensor selection evaluation . . . . . . . . . . . . . . . . . . 157
3.12 Inter-session sensor selection evaluation . . . . . . . . . . . . . . . . . . . 158
B.1 Visualization of incremental learning with PUBSVE . . . . . . . . . . . . 201
C.1 Operation specification file for the comparison of new one-class SVM
and νoc-SVM on MNIST data . . . . . . . . . . . . . . . . . . . . . . . . . 202
C.2 Operation specification file for unary classifier comparison on MNIST
data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
C.3 Operation specification file for storing preprocessing flows . . . . . . . . 204
C.4 Operation specification file for reinitialization after changing the preprocessing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
C.5 Scikit-learn script for classifier visualization . . . . . . . . . . . . . . . . 206
C.6 Electrode positions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
List of Tables
1.1 Loss functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.2 Kernel functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.3 Overview on SVM solution approaches. . . . . . . . . . . . . . . . . . . . 40
1.4 Classification performance on EEG data . . . . . . . . . . . . . . . . . . . 68
Bibliography
[Abdi and Williams, 2010] Abdi, H. and Williams, L. J. (2010). Principal component
analysis. Wiley Interdisciplinary Reviews: Computational Statistics, 2(4):433–459,
doi:10.1002/wics.101.
[Aggarwal et al., 2015] Aggarwal, A., Kampmann, P., Lemburg, J., and Kirchner, F.
(2015). Haptic Object Recognition in Underwater and Deep-sea Environments.
Journal of Field Robotics, 32(1):167–185, doi:10.1002/rob.21538.
[Aggarwal, 2013] Aggarwal, C. C. (2013). Outlier Analysis. Springer New York.
[Aksoy and Haralick, 2001] Aksoy, S. and Haralick, R. M. (2001). Feature normalization and likelihood-based similarity measures for image retrieval. Pattern Recognition Letters, 22(5):563–582, doi:10.1016/S0167-8655(00)00112-4.
[Bach, 2011] Bach, F. (2011). Optimization with Sparsity-Inducing Penalties. Foundations and Trends R in Machine Learning, 4(1):1–106, doi:10.1561/2200000015.
[Bach et al., 2012] Bach, F., Jenatton, R., Mairal, J., and Obozinski, G. (2012). Structured Sparsity through Convex Optimization. Statistical Science, 27(4):450–468.
[Baehrens et al., 2010] Baehrens, D., Schroeter, T., Harmeling, S., Kawanabe, M.,
Hansen, K., and Müller, K.-R. (2010). How to Explain Individual Classification
Decisions. Journal of Machine Learning Research, 11:1803–1831.
[Bai et al., 2011] Bai, O., Rathi, V., Lin, P., Huang, D., Battapady, H., Fei, D.-Y.,
Schneider, L., Houdayer, E., Chen, X., and Hallett, M. (2011). Prediction of human
voluntary movement before it occurs. Clinical Neurophysiology, 122(2):364–372,
doi:16/j.clinph.2010.07.010.
[Bartsch, 2014] Bartsch, S. (2014). Development, Control, and Empirical Evaluation
of the Six-Legged Robot SpaceClimber Designed for Extraterrestrial Crater Exploration. KI - Künstliche Intelligenz, 28(2):127–131, doi:10.1007/s13218-014-0299-y.
[Bashashati et al., 2007] Bashashati, A., Fatourechi, M., Ward, R. K., and Birch,
G. E. (2007). A survey of signal processing algorithms in brain-computer interfaces based on electrical brain signals. Journal of Neural Engineering, 4(2):R32–
57, doi:10.1088/1741-2560/4/2/R03.
[Bayliss, 2003] Bayliss, J. D. (2003). Use of the evoked potential P3 component for
control in a virtual apartment. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 11(2):113–116, doi:10.1109/TNSRE.2003.814438.
[Ben-Kiki et al., 2008] Ben-Kiki, O., Evans, C., and döt Net, I. (2008). YAML 1.1.
http://yaml.org/spec/1.1/.
[Bishop, 2006] Bishop, C. M. (2006). Pattern Recognition and Machine Learning.
Springer New York.
[Blankertz et al., 2006] Blankertz, B., Dornhege, G., Lemm, S., Krauledat, M., Curio,
G., and Müller, K.-R. (2006). The Berlin Brain-Computer Interface: machine learning based detection of user specific brain states. Journal of Universal Computer
Science, 12(6):581–607, doi:10.3217/jucs-012-06-0581.
[Blankertz et al., 2011] Blankertz, B., Lemm, S., Treder, M., Haufe, S., and Müller,
K.-R. (2011). Single-Trial Analysis and Classification of ERP Components–a Tutorial. NeuroImage, 56(2):814–825, doi:10.1016/j.neuroimage.2010.06.048.
[Blankertz et al., 2008] Blankertz, B., Tomioka, R., Lemm, S., Kawanabe, M., and
Müller, K.-R. (2008). Optimizing Spatial Filters for Robust EEG Single-Trial Analysis.
IEEE Signal Processing Magazine, 25(1):41–56, doi:10.1109/MSP.2008.4408441.
[Blanzieri and Bryl, 2009] Blanzieri, E. and Bryl, A. (2009). A survey of learningbased techniques of email spam filtering. Artificial Intelligence Review, 29(1):63–
92, doi:10.1007/s10462-009-9109-6.
[Bolón-Canedo et al., 2012] Bolón-Canedo, V., Sánchez-Maroño, N., and AlonsoBetanzos, A. (2012). A review of feature selection methods on synthetic data.
Knowledge and Information Systems, 34(3):483–519, doi:10.1007/s10115-0120487-8.
[Boyd and Vandenberghe, 2004] Boyd, S. and Vandenberghe, L. (2004). Convex Optimization. Cambridge University Press.
[Bradley, 1997] Bradley, A. P. (1997). The use of the area under the ROC curve in the
evaluation of machine learning algorithms. Pattern Recognition, 30(7):1145–1159,
doi:10.1016/S0031-3203(96)00142-2.
[Bradley and Mangasarian, 1998] Bradley, P. S. and Mangasarian, O. L. (1998). Feature Selection via Concave Minimization and Support Vector Machines. In Proceedings of the 15th International Conference on Machine Learning (ICML 1998),
pages 82–90. Morgan Kaufmann Publishers Inc.
[Burges, 1998] Burges, C. J. C. (1998).
A Tutorial on Support Vector Machines
for Pattern Recognition. Data Mining and Knowledge Discovery, 2(2):121–167,
doi:10.1023/A:1009715923555.
[Büskens and Wassel, 2013] Büskens, C. and Wassel, D. (2013). The ESA NLP Solver
WORHP. In Fasano, G. and Pintér, J. D., editors, Modeling and Optimization
in Space Engineering, volume 73 of Springer Optimization and Its Applications,
pages 85–110. Springer New York. http://www.worhp.de.
[Chang and Lin, 2001] Chang, C.-C. and Lin, C.-J. (2001). Training nu-support vector classifiers: Theory and Algorithms. Neural Computation, 13(9):2119–2147,
doi:10.1162/089976601750399335.
[Chang and Lin, 2002] Chang, C.-C. and Lin, C.-J. (2002). Training nu-support vector regression: theory and algorithms.
Neural computation, 14(8):1959–1977,
doi:10.1162/089976602760128081.
[Chang and Lin, 2011] Chang, C.-C. and Lin, C.-J. (2011). LIBSVM: A library for
support vector machines. ACM Transactions on Intelligent Systems and Technology,
2(3):1–27, doi:10.1145/1961189.1961199.
[Chapelle, 2007] Chapelle, O. (2007). Training a support vector machine in the primal. Neural computation, 19(5):1155–1178, doi:10.1162/neco.2007.19.5.1155.
[Chen et al., 2008] Chen, C.-h., Härdle, W., and Unwin, A. (2008). Handbook of Data
Visualization (Springer Handbooks of Computational Statistics). Springer-Verlag
TELOS.
[Chen et al., 2006] Chen, P.-H., Fan, R.-E., and Lin, C.-J. (2006). A study on SMOtype decomposition methods for support vector machines. IEEE transactions on
neural networks / a publication of the IEEE Neural Networks Council, 17(4):893–
908, doi:10.1109/TNN.2006.875973.
[Comité et al., 1999] Comité, F., Denis, F., Gilleron, R., and Letouzey, F. (1999). Positive and unlabeled examples help learning. In Watanabe, O. and Yokomori, T.,
editors, Algorithmic Learning Theory, volume 1720 of Lecture Notes in Computer
Science, pages 219–230. Springer Berlin Heidelberg.
[Courchesne et al., 1977] Courchesne, E., Hillyard, S. A., and Courchesne, R. Y.
(1977). P3 waves to the discrimination of targets in homogeneous and heterogeneous stimulus sequences. Psychophysiology, 14(6):590–597.
[Crammer et al., 2006] Crammer, K., Dekel, O., Keshet, J., Shalev-Shwartz, S., and
Singer, Y. (2006). Online Passive-Aggressive Algorithms. Journal of Machine
Learning Research, 7:551 – 585.
[Crisp and Burges, 2000] Crisp, D. J. and Burges, C. J. C. (2000). A Geometric Interpretation of v-SVM Classifiers. In Solla, S. A., Leen, T. K., and Müller, K.-R.,
editors, Advances in Neural Information Processing Systems 12, pages 244–250.
MIT Press.
[Cristianini and Shawe-Taylor, 2000] Cristianini, N. and Shawe-Taylor, J. (2000). An
Introduction to Support Vector Machines and other kernel-based learning methods.
Cambridge University Press.
[Delorme and Makeig, 2004] Delorme, A. and Makeig, S. (2004).
EEGLAB: an
open source toolbox for analysis of single-trial EEG dynamics including independent component analysis.
Journal of Neuroscience Methods, 134(1):9–21,
doi:10.1016/j.jneumeth.2003.10.009.
[Domingos, 2012] Domingos, P. (2012). A few useful things to know about machine
learning. Communications of the ACM, 55(10):78–87, doi:10.1145/2347736.2347755.
[Dubois, 1999] Dubois, P. F. (1999). Extending Python with Fortran. Computing
Science and Engineering, 1(5):66–73.
[Duda et al., 2001] Duda, R. O., Hart, P. E., and Stork, D. G. (2001). Pattern Classification. Wiley-Interscience, 2. edition.
[Eitrich, 2006] Eitrich, T. (2006). Data mining with parallel support vector machines
for classification. Advances in Information Systems, pages 197–206,
doi:10.1007/11890393_21.
[Eitrich, 2007] Eitrich, T. (2007). Dreistufig parallele Software zur Parameteroptimierung von Support-Vektor-Maschinen mit kostensensitiven Gütemaßen. Publikationsreihe des John von Neumann-Instituts für Computing (NIC), NIC-Serie Band 35.
[Eitrich and Lang, 2006] Eitrich, T. and Lang, B. (2006). Efficient optimization of
support vector machine learning parameters for unbalanced datasets. Journal of
Computational and Applied Mathematics, 196(2):425–436, doi:10.1016/j.cam.2005.09.009.
[Fabisch et al., 2015] Fabisch, A., Metzen, J. H., Krell, M. M., and Kirchner, F.
(2015). Accounting for Task-Hardness in Active Multi-Task Robot Control Learning. Künstliche Intelligenz.
[Fan et al., 2008] Fan, R.-E., Chang, K.-W., Hsieh, C.-J., Wang, X.-R., and Lin, C.-J.
(2008). LIBLINEAR: A Library for Large Linear Classification. The Journal of
Machine Learning Research, 9:1871–1874.
[Farwell and Donchin, 1988] Farwell, L. A. and Donchin, E. (1988). Talking off the
top of your head: toward a mental prosthesis utilizing event-related brain potentials. Electroencephalography and Clinical Neurophysiology, 70(6):510–523.
[Feess et al., 2013] Feess, D., Krell, M. M., and Metzen, J. H. (2013). Comparison of
Sensor Selection Mechanisms for an ERP-Based Brain-Computer Interface. PloS
ONE, 8(7):e67543, doi:10.1371/journal.pone.0067543.
[Ferrez and Millán, 2008] Ferrez, P. W. and Millán, J. d. R. (2008). Error-related EEG
potentials generated during simulated brain-computer interaction. IEEE Transactions on Biomedical Engineering, 55(3):923–929, doi:10.1109/TBME.2007.908083.
[Filitchkin and Byl, 2012] Filitchkin, P. and Byl, K. (2012). Feature-based terrain
classification for LittleDog. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1387–1392. IEEE.
[Flamary et al., 2012] Flamary, R., Tuia, D., Labbe, B., Camps-Valls, G., and Rakotomamonjy, A. (2012). Large Margin Filtering. IEEE Transactions on Signal Processing, 60(2):648–659, doi:10.1109/TSP.2011.2173685.
[Franc et al., 2008] Franc, V., Laskov, P., and Müller, K.-R. (2008). Stopping conditions for exact computation of leave-one-out error in support vector machines. In
Proceedings of the 25th international conference on Machine learning - ICML ’08,
pages 328–335. ACM Press.
[Garcia and Fourcaud-Trocmé, 2009] Garcia, S. and Fourcaud-Trocmé, N. (2009).
OpenElectrophy: An Electrophysiological Data- and Analysis-Sharing Framework.
Frontiers in Neuroinformatics, 3(14), doi:10.3389/neuro.11.014.2009.
[Ghaderi et al., 2014] Ghaderi, F., Kim, S. K., and Kirchner, E. A. (2014). Effects of
eye artifact removal methods on single trial P300 detection, a comparative study.
Journal of neuroscience methods, 221:41–77, doi:10.1016/j.jneumeth.2013.08.025.
[Ghaderi and Kirchner, 2013] Ghaderi, F. and Kirchner, E. A. (2013). Periodic Spatial Filter for Single Trial Classification of Event Related Brain Activity. In
Biomedical Engineering, Calgary, AB, Canada. ACTAPRESS.
[Ghaderi and Straube, 2013] Ghaderi, F. and Straube, S. (2013). An adaptive and
efficient spatial filter for event-related potentials. In Proceedings of the 21st European Signal Processing Conference, (EUSIPCO).
[Golle, 2008] Golle, P. (2008).
Machine learning attacks against the Asirra
CAPTCHA. In Proceedings of the 15th ACM conference on Computer and communications security - CCS ’08, page 535. ACM Press.
[Grandvalet et al., 2006] Grandvalet, Y., Mariéthoz, J., and Bengio, S. (2006). A probabilistic interpretation of SVMs with an application to unbalanced classification.
In Advances in Neural Information Processing Systems 18 (NIPS 2005), pages 467–
474. MIT Press.
[Gray and Kolda, 2006] Gray, G. A. and Kolda, T. G. (2006).
Algorithm 856:
APPSPACK 4.0: Asynchronous parallel pattern search for derivative-free optimization. ACM Transactions on Mathematical Software, 32:485–507.
[Green and Swets, 1988] Green, D. M. and Swets, J. A. (1988). Signal detection theory and psychophysics. Peninsula Publ., Los Altos, CA.
[Gretton and Desobry, 2003] Gretton, A. and Desobry, F. (2003). On-line one-class
support vector machines. An application to signal segmentation. In 2003 IEEE
International Conference on Acoustics, Speech, and Signal Processing, 2003. Proceedings. (ICASSP ’03)., volume 2, pages II–709–712. IEEE.
[Griewank and Walther, 2008] Griewank, A. and Walther, A. (2008).
Evaluating
Derivatives: Principles and Techniques of Algorithmic Differentiation. Society for
Industrial and Applied Mathematics.
[Guyon and Elisseeff, 2003] Guyon, I. and Elisseeff, A. (2003). An introduction to
variable and feature selection. The Journal of Machine Learning Research, 3:1157–
1182.
[Guyon et al., 2002] Guyon, I., Weston, J., Barnhill, S., and Vapnik, V. (2002). Gene
Selection for Cancer Classification using Support Vector Machines.
Machine
Learning, 46(1-3):389–422, doi:10.1023/A:1012487302797.
[Hall et al., 2009] Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P.,
and Witten, I. H. (2009). The WEKA data mining software. ACM SIGKDD Explorations Newsletter, 11(1):10–18, doi:10.1145/1656274.1656278.
[Hanke et al., 2009] Hanke, M., Halchenko, Y. O., Sederberg, P. B., Olivetti, E.,
Fründ, I., Rieger, J. W., Herrmann, C. S., Haxby, J. V., Hanson, S. J., and Pollmann, S. (2009). PyMVPA: A Unifying Approach to the Analysis of Neuroscientific
Data. Frontiers in Neuroinformatics, 3(3), doi:10.3389/neuro.11.003.2009.
[Haufe et al., 2014] Haufe, S., Meinecke, F., Görgen, K., Dähne, S., Haynes, J.-D.,
Blankertz, B., and Bießmann, F. (2014). On the interpretation of weight vectors of linear models in multivariate neuroimaging.
NeuroImage, 87:96–110,
doi:10.1016/j.neuroimage.2013.10.067.
[Helmbold et al., 1999] Helmbold, D. P., Kivinen, J., and Warmuth, M. K. (1999).
Relative loss bounds for single neurons.
IEEE transactions on neural net-
works / a publication of the IEEE Neural Networks Council, 10(6):1291–1304,
doi:10.1109/72.809075.
[Hildebrandt et al., 2014] Hildebrandt, M., Gaudig, C., Christensen, L., Natarajan,
S., Carrió, J. H., Paranhos, P. M., and Kirchner, F. (2014). A validation process for
underwater localization algorithms. International Journal of Advanced Robotic
Systems, 11(138), doi:10.5772/58581.
[Hoepflinger et al., 2010] Hoepflinger, M. A., Remy, C. D., Hutter, M., Spinello, L.,
and Siegwart, R. (2010). Haptic terrain classification for legged robots. In 2010
IEEE International Conference on Robotics and Automation, pages 2828–2833.
IEEE.
[Hoerl and Kennard, 1970] Hoerl, A. E. and Kennard, R. W. (1970). Ridge Regression: Biased Estimation for Nonorthogonal Problems. Technometrics, 12(1):55–67,
doi:10.1080/00401706.1970.10488634.
[Hoffmann et al., 2008] Hoffmann, U., Vesin, J.-M., Ebrahimi, T., and Diserens, K.
(2008). An efficient P300-based brain-computer interface for disabled subjects.
Journal of Neuroscience Methods, 167(1):115–125, doi:10.1016/j.jneumeth.2007.03.005.
[Hohne et al., 2010] Hohne, J., Schreuder, M., Blankertz, B., and Tangermann, M.
(2010). Two-dimensional auditory P300 speller with predictive text system. In
2010 Annual International Conference of the IEEE Engineering in Medicine and
Biology Society (EMBC), pages 4185–4188.
[Hohne and Tangermann, 2012] Hohne, J. and Tangermann, M. (2012). How stimulation speed affects Event-Related Potentials and BCI performance. In 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology
Society (EMBC), pages 1802–1805.
[Hsieh et al., 2008] Hsieh, C.-J., Chang, K.-W., Lin, C.-J., Keerthi, S. S., and Sundararajan, S. (2008). A dual coordinate descent method for large-scale linear SVM.
In Proceedings of the 25th International Conference on Machine learning (ICML
2008), pages 408–415. ACM Press.
[Hunter, 2007] Hunter, J. D. (2007). Matplotlib: A 2D Graphics Environment. Computing in Science & Engineering, 9(3):90–95, doi:10.1109/MCSE.2007.55.
[Hyvärinen, 1999] Hyvärinen, A. (1999). Fast and Robust Fixed-Point Algorithms
for Independent Component Analysis. IEEE Transactions on Neural Networks,
10(3):626–634, doi:10.1109/72.761722.
[Johanshahi and Hallett, 2003] Johanshahi, M. and Hallett, M., editors (2003). The
Bereitschaftspotential: movement-related cortical potentials. Kluwer Academic/Plenum Publishers.
[Jones et al., 2001] Jones, E., Oliphant, T., Peterson, P., et al. (2001). SciPy: Open
source scientific tools for Python. http://www.scipy.org/.
[Jutten and Herault, 1991] Jutten, C. and Herault, J. (1991). Blind separation of
sources, part I: An adaptive algorithm based on neuromimetic architecture. Signal
Processing, 24(1):1–10, doi:10.1016/0165-1684(91)90079-X.
[Kampmann and Kirchner, 2015] Kampmann, P. and Kirchner, F. (2015). Towards
a fine-manipulation system with tactile feedback for deep-sea environments.
Robotics and Autonomous Systems, doi:10.1016/j.robot.2014.09.033.
[Kassahun et al., 2012] Kassahun, Y., Wöhrle, H., Fabisch, A., and Tabie, M. (2012).
Learning Parameters of Linear Models in Compressed Parameter Space. In Villa,
A. E., Duch, W., Érdi, P., Masulli, F., and Palm, G., editors, Artificial Neural Networks and Machine Learning – ICANN 2012, volume 7553 of Lecture Notes in
Computer Science, pages 108–115. Springer, Lausanne, Switzerland.
[Keerthi and Lin, 2003] Keerthi, S. S. and Lin, C.-J. (2003). Asymptotic behaviors of
support vector machines with Gaussian kernel. Neural Computation, 15(7):1667–
1689, doi:10.1162/089976603321891855.
[Keerthi et al., 2007] Keerthi, S. S., Sindhwani, V., and Chapelle, O. (2007). An efficient method for gradient-based adaptation of hyperparameters in SVM models. In
Advances in Neural Information Processing Systems 19 (NIPS 2006), pages 673–
680. MIT Press.
[Kim and Kirchner, 2013] Kim, S. K. and Kirchner, E. A. (2013). Classifier transferability in the detection of error related potentials from observation to interaction.
In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, SMC-2013, October 13-16, Manchester, UK, pages 3360–3365.
[Kirchner, 2014] Kirchner, E. A. (2014). Embedded Brain Reading. PhD thesis, University of Bremen, Bremen.
[Kirchner and Drechsler, 2013] Kirchner, E. A. and Drechsler, R. (2013). A Formal
Model for Embedded Brain Reading. Industrial Robot: An International Journal,
40:530–540, doi:10.1108/IR-01-2013-318.
[Kirchner et al., 2013] Kirchner, E. A., Kim, S. K., Straube, S., Seeland, A., Wöhrle,
H., Krell, M. M., Tabie, M., and Fahle, M. (2013). On the applicability of brain reading for predictive human-machine interfaces in robotics. PloS ONE, 8(12):e81732,
doi:10.1371/journal.pone.0081732.
[Kirchner and Tabie, 2013] Kirchner, E. A. and Tabie, M. (2013). Closing the Gap:
Combined EEG and EMG Analysis for Early Movement Prediction in Exoskeleton
based Rehabilitation. In Proceedings of the 4th European Conference on Technically
Assisted Rehabilitation - TAR 2013, Berlin, Germany.
[Kirchner et al., 2014a] Kirchner, E. A., Tabie, M., and Seeland, A. (2014a). Multimodal movement prediction - towards an individual assistance of patients. PLoS
ONE, 9(1):e85060, doi:10.1371/journal.pone.0085060.
[Kirchner et al., 2014b] Kirchner, E. A., Tabie, M., and Seeland, A. (2014b). Multimodal movement prediction - towards an individual assistance of patients. PloS
ONE, 9(1):e85060, doi:10.1371/journal.pone.0085060.
[Kirchner et al., 2010] Kirchner, E. A., Wöhrle, H., Bergatt, C., Kim, S. K., Metzen,
J. H., Feess, D., and Kirchner, F. (2010). Towards Operator Monitoring via Brain
Reading – An EEG-based Approach for Space Applications. In Proc. 10th Int. Symp.
Artificial Intelligence, Robotics and Automation in Space, pages 448–455, Sapporo.
[Kivinen et al., 2004] Kivinen, J., Smola, A. J., and Williamson, R. C. (2004). Online
Learning with Kernels. IEEE Transactions on Signal Processing, 52(8):2165–2176,
doi:10.1109/TSP.2004.830991.
[Köhler et al., 2014] Köhler, T., Berghöfer, E., Rauch, C., and Kirchner, F. (2014).
Sensor fault detection and compensation in lunar/planetary robot missions using
time-series prediction based on machine learning. Acta Futura, Issue 9: AI in
Space Workshop at IJCAI 2013:9–20.
[Krell, 2014] Krell, M. M. (2014). Introduction to the Signal Processing and Classification Environment pySPACE. PyData Berlin.
[Krell et al., 2014a] Krell, M. M., Feess, D., and Straube, S. (2014a). Balanced Relative Margin Machine – The missing piece between FDA and SVM classification.
Pattern Recognition Letters, 41:43–52, doi:10.1016/j.patrec.2013.09.018.
[Krell et al., 2014b] Krell, M. M., Kirchner, E. A., and Wöhrle, H. (2014b). Our tools
for large scale or embedded processing of physiological data. Passive BCI Community Meeting, Delmenhorst, Germany.
[Krell and Straube, 2015] Krell, M. M. and Straube, S. (2015). Backtransformation:
A new representation of data processing chains with a scalar decision function.
Advances in Data Analysis and Classification. submitted.
[Krell et al., 2013a] Krell, M. M., Straube, S., Seeland, A., Wöhrle, H., Teiwes, J.,
Metzen, J. H., Kirchner, E. A., and Kirchner, F. (2013a). Introduction to pySPACE
workflows. peer-reviewed talk, NIPS 2013 Workshop on MLOSS: Towards Open
Workflows, Lake Tahoe, Nevada, USA.
[Krell et al., 2013b] Krell, M. M., Straube, S., Seeland, A., Wöhrle, H., Teiwes, J.,
Metzen, J. H., Kirchner, E. A., and Kirchner, F. (2013b). pySPACE — a signal processing and classification environment in Python. Frontiers in Neuroinformatics,
7(40):1–11, doi:10.3389/fninf.2013.00040.
[Krell et al., 2014c] Krell, M. M., Straube, S., Wöhrle, H., and Kirchner, F. (2014c).
Generalizing, Optimizing, and Decoding Support Vector Machine Classification. In
ECML/PKDD-2014 PhD Session Proceedings, September 15-19, Nancy, France.
[Krell et al., 2013c] Krell, M. M., Tabie, M., Wöhrle, H., and Kirchner, E. A. (2013c).
Memory and Processing Efficient Formula for Moving Variance Calculation in EEG
and EMG Signal Processing. In Proceedings of the International Congress on
Neurotechnology, Electronics and Informatics, pages 41–45, Vilamoura, Portugal.
SciTePress.
[Krell and Wöhrle, 2014] Krell, M. M. and Wöhrle, H. (2014). New one-class classifiers based on the origin separation approach. Pattern Recognition Letters, 53:93–
99, doi:10.1016/j.patrec.2014.11.008.
[Krusienski et al., 2006] Krusienski, D. J., Sellers, E. W., Cabestaing, F., Bayoudh,
S., McFarland, D. J., Vaughan, T. M., and Wolpaw, J. R. (2006). A comparison
of classification techniques for the P300 Speller. Journal of neural engineering,
3(4):299–305, doi:10.1088/1741-2560/3/4/007.
[Kubat et al., 1998] Kubat, M., Holte, R. C., and Matwin, S. (1998). Machine Learning for the Detection of Oil Spills in Satellite Radar Images. Machine Learning,
30(2-3):195–215, doi:10.1023/A:1007452223027.
[Kull and Flach, 2014] Kull, M. and Flach, P. A. (2014). Reliability Maps: A Tool to
Enhance Probability Estimates and Improve Classification Accuracy. In Calders,
T., Esposito, F., Hüllermeier, E., and Meo, R., editors, Machine Learning and
Knowledge Discovery in Databases, European Conference, ECML PKDD 2014,
Nancy, France, September 15-19, 2014. Proceedings, Part II, volume 8725 of Lecture Notes in Computer Science, pages 18–33. Springer Berlin Heidelberg.
[LaConte et al., 2005] LaConte, S., Strother, S., Cherkassky, V., Anderson, J., and
Hu, X. (2005). Support vector machines for temporal classification of block design
fMRI data. NeuroImage, 26(2):317–329, doi:10.1016/j.neuroimage.2005.01.048.
[Lagerlund et al., 1997] Lagerlund, T. D., Sharbrough, F. W., and Busacker, N. E.
(1997).
Spatial filtering of multichannel electroencephalographic recordings
through principal component analysis by singular value decomposition. Journal of
Clinical Neurophysiology, 14(1):73–82.
[Lal et al., 2004] Lal, T. N., Schröder, M., Hinterberger, T., Weston, J., Bogdan,
M., Birbaumer, N., and Schölkopf, B. (2004).
Support vector channel selec-
tion in BCI. IEEE Transactions on Biomedical Engineering, 51(6):1003–1010,
doi:10.1109/TBME.2004.827827.
[Lannoy et al., 2011] Lannoy, G., François, D., Delbeke, J., and Verleysen, M. (2011).
Weighted svms and feature relevance assessment in supervised heart beat classification. In Fred, A., Filipe, J., and Gamboa, H., editors, Biomedical Engineering
Systems and Technologies, volume 127 of Communications in Computer and Information Science, pages 212–223. Springer Berlin Heidelberg.
[Laskov et al., 2006] Laskov, P., Gehl, C., Krüger, S., and Müller, K.-R. (2006). Incremental Support Vector Learning: Analysis, Implementation and Applications.
Journal of Machine Learning Research, 7:1909–1936.
[Le et al., 2012] Le, Q. V., Ranzato, M., Monga, R., Devin, M., Chen, K., Corrado,
G. S., Dean, J., and Ng, A. Y. (2012). Building high-level features using large scale
unsupervised learning. In Proceedings of the 29th International Conference on
Machine Learning.
[LeCun et al., 1998] LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, doi:10.1109/5.726791.
[Lee et al., 2004] Lee, M. M. S., Keerthi, S. S., Ong, C. J., and DeCoste, D. (2004).
An efficient method for computing leave-one-out error in support vector machines
with Gaussian kernels. IEEE Transactions on Neural Networks, 15(3):750–757,
doi:10.1109/TNN.2004.824266.
[Leite and Neto, 2008] Leite, S. C. and Neto, R. F. (2008). Incremental margin algorithm for large margin classifiers. Neurocomputing, 71(7-9):1550–1560, doi:10.1016/j.neucom.2007.05.002.
[Lemburg et al., 2011] Lemburg, J., de Gea Fernandez, J., Eich, M., Mronga, D.,
Kampmann, P., Vogt, A., Aggarwal, A., and Kirchner, F. (2011). AILA - design
of an autonomous mobile dual-arm robot. In 2011 IEEE International Conference
on Robotics and Automation, pages 5147–5153. IEEE.
[Lemm et al., 2004] Lemm, S., Schäfer, C., and Curio, G. (2004). BCI Competition
2003–Data set III: probabilistic modeling of sensorimotor mu rhythms for classification of imaginary hand movements. IEEE Transactions on Biomedical Engineering, 51(6):1077–1080, doi:10.1109/TBME.2004.827076.
[Lew et al., 2012] Lew, E., Chavarriaga, R., Zhang, H., Seeck, M., and del Millan,
J. R. (2012). Self-paced movement intention detection from human brain signals:
Invasive and non-invasive EEG. In 2012 Annual International Conference of the
IEEE Engineering in Medicine and Biology Society (EMBC), pages 3280–3283.
[Lin et al., 2007] Lin, H.-T., Lin, C.-J., and Weng, R. C. (2007). A note on Platt’s
probabilistic outputs for support vector machines. Machine Learning, 68(3):267–
276, doi:10.1007/s10994-007-5018-6.
[Lipton et al., 2014] Lipton, Z. C., Elkan, C., and Narayanaswamy, B. (2014). Optimal Thresholding of Classifiers to Maximize F1 Measure. In Machine Learning and Knowledge Discovery in Databases - European Conference, ECML/PKDD
2014, Nancy, France, September 15-19, 2014. Proceedings, Part II, pages 225–239.
[Liu et al., 2011] Liu, Y., Zhang, H. H., and Wu, Y. (2011). Hard or Soft Classification? Large-margin Unified Machines. Journal of the American Statistical Association, 106(493):166–177, doi:10.1198/jasa.2011.tm10319.
[Loosli et al., 2007] Loosli, G., Gasso, G., and Canu, S. (2007). Regularization Paths
for ν-SVM and ν-SVR. In Liu, D., Fei, S., Hou, Z., Zhang, H., and Sun, C., editors, Advances in Neural Networks – ISNN 2007, volume 4493 of Lecture Notes in
Computer Science, pages 486–496. Springer Berlin Heidelberg.
[Macmillan and Creelman, 2005] Macmillan, N. A. and Creelman, C. D. (2005). Detection Theory : A User’s Guide. Lawrence Erlbaum Associates, Mahwah, NJ.
[Mahadevan and Shah, 2009] Mahadevan, S. and Shah, S. L. (2009). Fault detection and diagnosis in process data using one-class support vector machines. Journal of Process Control, 19(10):1627–1639, doi:10.1016/j.jprocont.2009.07.011.
[Makhorin, 2010] Makhorin, A. O. (2010). GNU Linear Programming Kit (GLPK).
[Manduchi et al., 2005] Manduchi, R., Castano, A., Talukder, A., and Matthies, L. (2005). Obstacle Detection and Terrain Classification for Autonomous Off-Road Navigation. Autonomous Robots, 18(1):81–102, doi:10.1023/B:AURO.0000047286.62481.1d.
[Mangasarian, 1999] Mangasarian, O. (1999). Arbitrary-norm separating plane. Operations Research Letters, 24(1-2):15–23, doi:10.1016/S0167-6377(98)00049-2.
[Mangasarian and Kou, 2007] Mangasarian, O. L. and Kou, G. (2007). Feature Selection for Nonlinear Kernel Support Vector Machines. In Proceedings of the 7th
IEEE International Conference on Data Mining Workshops (ICDMW 2007), pages
231–236. IEEE Computer Society.
[Mangasarian and Musicant, 1998] Mangasarian, O. L. and Musicant, D. R. (1998).
Successive Overrelaxation for Support Vector Machines. IEEE Transactions on
Neural Networks, 10:1032 – 1037.
[Manz et al., 2013] Manz, M., Bartsch, S., and Kirchner, F. (2013). Mantis - a robot
with advanced locomotion and manipulation abilities. In Proceedings of the 12th
Symposium on Advanced Space Technologies in Robotics and Automation.
[Mazhelis, 2006] Mazhelis, O. (2006). One-class classifiers : a review and analysis
of suitability in the context of mobile-masquerader detection. South African Computer Journal, 36:29–48.
[McDermott, 2009] McDermott, J. H. (2009). The cocktail party problem. Current
Biology, 19(22):R1024–R1027, doi:10.1016/j.cub.2009.09.005.
[McKinney, 2010] McKinney, W. (2010). Data structures for statistical computing in
python. In van der Walt, S. and Millman, J., editors, Proceedings of the 9th Python
in Science Conference, pages 51 – 56. http://pandas.pydata.org/.
[Meier et al., 2008] Meier, R., Dittrich, H., Schulze-Bonhage, A., and Aertsen, A. (2008). Detecting epileptic seizures in long-term human EEG: a new approach to automatic online and real-time detection and classification of polymorphic seizure patterns. Journal of Clinical Neurophysiology, 25(3):119–131, doi:10.1097/WNP.0b013e3181775993.
[Mercer, 1909] Mercer, J. (1909). Functions of positive and negative type and their
connection with the theory of integral equations. Philosophical Transactions of the
Royal Society of London. Series A, 209:415 – 446.
[Metzen et al., 2011a] Metzen, J. H., Kim, S. K., Duchrow, T., Kirchner, E. A., and
Kirchner, F. (2011a). On transferring spatial filters in a brain reading scenario.
In 2011 IEEE Statistical Signal Processing Workshop (SSP), pages 797–800, Nice,
France. IEEE.
[Metzen et al., 2011b] Metzen, J. H., Kim, S. K., and Kirchner, E. A. (2011b). Minimizing calibration time for brain reading. In Mester, R. and Felsberg, M., editors,
Pattern Recognition, Lecture Notes in Computer Science, volume 6835, pages 366–
375. Springer Berlin Heidelberg, Frankfurt.
[Metzen and Kirchner, 2011] Metzen, J. H. and Kirchner, E. A. (2011). Rapid Adaptation of Brain Reading Interfaces based on Threshold Adjustment. In 35th Annual
Conference of the German Classification Society, (GfKl-2011), page 138, Frankfurt,
Germany.
[Mika, 2003] Mika, S. (2003). Kernel Fisher Discriminants. PhD thesis, Technische
Universität Berlin.
[Mika et al., 2001] Mika, S., Rätsch, G., and Müller, K.-R. (2001). A mathematical
programming approach to the kernel fisher algorithm. Advances in Neural Information Processing Systems 13 (NIPS 2000), pages 591–597.
[Moore et al., 2011] Moore, G., Bergeron, C., and Bennett, K. P. (2011). Model selection for primal SVM. Machine Learning, 85(1-2):175–208, doi:10.1007/s10994-011-5246-7.
[Müller et al., 2014] Müller, J., Frese, U., Röfer, T., Gelin, R., and Mazel, A. (2014).
Graspy – object manipulation with nao. In Röhrbein, F., Veiga, G., and Natale,
C., editors, Gearing Up and Accelerating Crossfertilization between Academic and
Industrial Robotics Research in Europe:, volume 94 of Springer Tracts in Advanced
Robotics, pages 177–195. Springer International Publishing.
[Müller et al., 2001] Müller, K.-R., Mika, S., Rätsch, G., Tsuda, K., and Schölkopf, B.
(2001). An introduction to kernel-based learning algorithms. IEEE Transactions
on Neural Networks, 12(2):181–201, doi:10.1109/72.914517.
[Nocedal and Wright, 2006] Nocedal, J. and Wright, S. J. (2006). Numerical Optimization. Springer, 2nd edition.
[Oostenveld et al., 2011] Oostenveld, R., Fries, P., Maris, E., and Schoffelen, J.-M.
(2011). FieldTrip: Open source software for advanced analysis of MEG, EEG, and
invasive electrophysiological data. Computational intelligence and neuroscience,
2011:156869, doi:10.1155/2011/156869.
[Oppenheim and Schafer, 2009] Oppenheim, A. V. and Schafer, R. W. (2009).
Discrete-Time Signal Processing. Prentice Hall Press.
[Pedregosa et al., 2011] Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V.,
Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., and Duchesnay, E.
(2011). Scikit-learn: Machine Learning in Python. Journal of Machine Learning
Research, 12:2825–2830.
[Pérez and Granger, 2007] Pérez, F. and Granger, B. E. (2007). IPython: A System for
Interactive Scientific Computing. Computing in Science & Engineering, 9(3):21–
29, doi:10.1109/MCSE.2007.53.
[Platt, 1999a] Platt, J. C. (1999a). Fast training of support vector machines using
sequential minimal optimization. In Schölkopf, B., Burges, C. J. C., and Smola,
A. J., editors, Advances in Kernel Methods, pages 185–208. MIT Press.
[Platt, 1999b] Platt, J. C. (1999b). Probabilistic Outputs for Support Vector Machines
and Comparisons to Regularized Likelihood Methods. In Schölkopf, B., Burges, C.
J. C., and Smola, A. J., editors, Advances in Kernel Methods, pages 61–74. MIT
Press.
[Pontil et al., 1999] Pontil, M., Rifkin, R. M., and Evgeniou, T. (1999). From Regression to Classification in Support Vector Machines. In ESANN, pages 225–230.
[Powers, 2011] Powers, D. M. W. (2011). Evaluation: From Precision, Recall and FMeasure to ROC, Informedness, Markedness & Correlation. Journal of Machine
Learning Technologies, 2(1):37–63.
[Press, 2007] Press, W. (2007). Numerical recipes: the art of scientific computing.
Cambridge University Press, 3 edition.
[Quionero-Candela et al., 2009] Quionero-Candela, J., Sugiyama, M., Schwaighofer,
A., and Lawrence, N. D. (2009). Dataset Shift in Machine Learning. MIT Press.
[Ranzato et al., 2007] Ranzato, M., Huang, F. J., Boureau, Y.-L., and LeCun, Y.
(2007). Unsupervised Learning of Invariant Feature Hierarchies with Applications to Object Recognition. In 2007 IEEE Conference on Computer Vision and
Pattern Recognition, pages 1–8. IEEE.
[Rätsch et al., 2001] Rätsch, G., Onoda, T., and Müller, K.-R. (2001). Soft Margins for
AdaBoost. Machine Learning, 42(3):287–320.
[Rauch et al., 2013] Rauch, C., Berghöfer, E., Köhler, T., and Kirchner, F. (2013).
Comparison of sensor-feedback prediction methods for robust behavior execution.
In Timm, I. and Thimm, M., editors, KI 2013: Advances in Artificial Intelligence,
volume 8077 of Lecture Notes in Computer Science, pages 200–211. Springer Berlin
Heidelberg.
[Renard et al., 2010] Renard, Y., Lotte, F., Gibert, G., Congedo, M., Maby, E., Delannoy, V., Bertrand, O., and Lécuyer, A. (2010). OpenViBE: An Open-Source Software
Platform to Design, Test, and Use Brain–Computer Interfaces in Real and Virtual
Environments. Presence: Teleoperators and Virtual Environments, 19(1):35–53,
doi:10.1162/pres.19.1.35.
[Rieger et al., 2004] Rieger, J., Kosar, K., Lhotska, L., and Krajca, V. (2004). EEG
Data and Data Analysis Visualization. In Barreiro, J., Martı́n-Sánchez, F., Maojo,
V., and Sanz, F., editors, Biological and Medical Data Analysis, volume 3337 of
Lecture Notes in Computer Science, pages 39–48. Springer Berlin Heidelberg.
[Rivet et al., 2012] Rivet, B., Cecotti, H., Maby, E., and Mattout, J. (2012). Impact
of spatial filters during sensor selection in a visual P300 brain-computer interface.
Brain topography, 25(1):55–63, doi:10.1007/s10548-011-0193-y.
[Rivet et al., 2009] Rivet, B., Souloumiac, A., Attina, V., and Gibert, G. (2009). xDAWN Algorithm to Enhance Evoked Potentials: Application to Brain-Computer Interface. IEEE Transactions on Biomedical Engineering, 56(8):2035–2043, doi:10.1109/TBME.2009.2012869.
[Röfer et al., 2011] Röfer, T., Laue, T., Müller, J., Fabisch, A., Feldpausch, F., Gillmann, K., Graf, C., de Haas, T. J., Härtl, A., Humann, A., Honsel, D., Kastner,
P., Kastner, T., Könemann, C., Markowsky, B., Riemann, O. J. L., and Wenk, F.
(2011). B-human team report and code release 2011. Technical report, B-Human
(Universität Bremen und DFKI). Only available online: http://www.b-human.
de/downloads/bhuman11_coderelease.pdf.
[Saeys et al., 2007] Saeys, Y., Inza, I. n., and Larrañaga, P. (2007). A review of
feature selection techniques in bioinformatics. Bioinformatics (Oxford, England),
23(19):2507–2517, doi:10.1093/bioinformatics/btm344.
[Sanei and Chambers, 2007] Sanei, S. and Chambers, J. A. (2007). EEG Signal Processing. John Wiley & Sons.
[Saunders et al., 1998] Saunders, C., Gammerman, A., and Vovk, V. (1998). Ridge
Regression Learning Algorithm in Dual Variables. In Proceedings of the Fifteenth
International Conference on Machine Learning, ICML ’98, pages 515–521. Morgan
Kaufmann Publishers Inc.
[Schalk et al., 2004] Schalk, G., McFarland, D. J., Hinterberger, T., Birbaumer, N.,
and Wolpaw, J. R. (2004). BCI2000: a general-purpose brain-computer interface
(BCI) system. IEEE Transactions on Biomedical Engineering, 51(6):1034–1043,
doi:10.1109/TBME.2004.827072.
[Schlögl et al., 2010] Schlögl, A., Vidaurre, C., and Müller, K.-R. (2010). Adaptive Methods in BCI Research - An Introductory Tutorial. In Graimann, B., Pfurtscheller, G., and Allison, B., editors, Brain-Computer Interfaces, The Frontiers Collection, pages 331–355. Springer Berlin Heidelberg.
[Schmidhuber, 2012] Schmidhuber, J. (2012). Multi-column deep neural networks
for image classification. In Proceedings of the 2012 IEEE Conference on Computer
Vision and Pattern Recognition (CVPR), pages 3642–3649. IEEE Computer Society.
[Schölkopf et al., 2001a] Schölkopf, B., Herbrich, R., and Smola, A. J. (2001a). A
Generalized Representer Theorem. In Helmbold, D. and Williamson, B., editors,
Computational Learning Theory, COLT/EuroCOLT 2001, LNAI 2111, pages 416–
426. Springer Berlin Heidelberg.
[Schölkopf et al., 2001b] Schölkopf, B., Platt, J. C., Shawe-Taylor, J., Smola, A. J., and
Williamson, R. C. (2001b). Estimating the support of a high-dimensional distribution. Neural Computation, 13(7):1443–1471, doi:10.1162/089976601750264965.
[Schölkopf and Smola, 2002] Schölkopf, B. and Smola, A. J. (2002). Learning with
Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT
Press, Cambridge, MA, USA.
[Schölkopf et al., 2000] Schölkopf, B., Smola, A. J., Williamson, R. C., and Bartlett,
P. L. (2000). New Support Vector Algorithms. Neural Computation, 12(5):1207–
1245, doi:10.1162/089976600300015565.
[Schwendner et al., 2014] Schwendner, J., Joyeux, S., and Kirchner, F. (2014). Using Embodied Data for Localization and Mapping. Journal of Field Robotics, 31(2):263–295, doi:10.1002/rob.21489.
[Seeland et al., 2015] Seeland, A., Manca, L., Kirchner, F., and Kirchner, E. A. (2015).
Spatio-temporal Comparison Between ERD/ERS and MRCP-based Movement Prediction. In In Proceedings of the 8th International Conference on Bio-inspired Systems and Signal Processing (BIOSIGNALS-15), Lisbon. ScitePress.
[Seeland et al., 2013a] Seeland, A., Wöhrle, H., Straube, S., and Kirchner, E. A.
(2013a). Online movement prediction in a robotic application scenario. In 6th International IEEE EMBS Conference on Neural Engineering (NER), pages 41–44,
San Diego, California.
[Seeland et al., 2013b] Seeland, A., Wöhrle, H., Straube, S., and Kirchner, E. A.
(2013b). Online movement prediction in a robotic application scenario. In 2013
6th International IEEE/EMBS Conference on Neural Engineering (NER), pages
41–44, San Diego, California. IEEE.
[Shivaswamy and Jebara, 2010] Shivaswamy, P. K. and Jebara, T. (2010). Maximum
relative margin and data-dependent regularization. Journal of Machine Learning
Research, 11:747–788.
[Slater, 2014] Slater, M. (2014). Lagrange Multipliers Revisited. In Giorgi, G. and
Kjeldsen, T. H., editors, Traces and Emergence of Nonlinear Programming SE - 14,
pages 293–306. Springer Basel.
[Smola, 1998] Smola, A. J. (1998). Learning with Kernels. PhD thesis, Technische
Universität Berlin.
[Smola and Schölkopf, 2004] Smola, A. J. and Schölkopf, B. (2004). A tutorial on support vector regression. Statistics and Computing, 14(3):199–222, doi:10.1023/B:STCO.0000035301.49549.88.
[Sokolova et al., 2006] Sokolova, M., Japkowicz, N., and Szpakowicz, S. (2006). Beyond accuracy, f-score and roc: A family of discriminant measures for performance
evaluation. In Sattar, A. and Kang, B.-h., editors, AI 2006: Advances in Artificial
Intelligence, volume 4304 of Lecture Notes in Computer Science, pages 1015–1021.
Springer Berlin Heidelberg.
[Sokolova and Lapalme, 2009] Sokolova, M. and Lapalme, G. (2009). A systematic
analysis of performance measures for classification tasks. Information Processing
& Management, 45(4):427–437, doi:10.1016/j.ipm.2009.03.002.
[Sonnenburg et al., 2007] Sonnenburg, S., Braun, M. L., Ong, C. S., Bengio, S., Bottou, L., Holmes, G., LeCun, Y., Müller, K.-R., Pereira, F., Rasmussen, C. E., Rätsch,
G., Schölkopf, B., Smola, A. J., Vincent, P., Weston, J., and Williamson, R. C. (2007).
The Need for Open Source Software in Machine Learning. Journal of Machine
Learning Research, 8:2443–2466.
[Sonnenburg et al., 2010] Sonnenburg, S., Rätsch, G., Henschel, S., Widmer, C., Behr,
J., Zien, A., de Bona, F., Binder, A., Gehl, C., and Franc, V. (2010). The SHOGUN
Machine Learning Toolbox. Journal of Machine Learning Research, 11:1799–1802.
[Steinwart, 2003] Steinwart, I. (2003). Sparseness of support vector machines. Journal of Machine Learning Research, 4:1071–1105.
[Steinwart and Christmann, 2008] Steinwart, I. and Christmann, A. (2008). Support
Vector Machines. Springer.
[Steinwart et al., 2009] Steinwart, I., Hush, D., and Scovel, C. (2009). Training SVMs
without offset. Journal of Machine Learning Research, 12:141–202.
[Straube and Feess, 2013] Straube, S. and Feess, D. (2013). Looking at ERPs from
Another Perspective: Polynomial Feature Analysis. Perception, 42 ECVP Abstract
Supplement:220.
[Straube and Krell, 2014] Straube, S. and Krell, M. M. (2014). How to evaluate an
agent’s behaviour to infrequent events? – Reliable performance estimation insensitive to class distribution. Frontiers in Computational Neuroscience, 8(43):1–6,
doi:10.3389/fncom.2014.00043.
[Straube et al., 2011] Straube, S., Metzen, J. H., Seeland, A., Krell, M. M., and Kirchner, E. A. (2011). Choosing an appropriate performance measure: Classification of
EEG-data with varying class distribution. In Proceedings of the 41st Meeting of the
Society for Neuroscience 2011, Washington DC, United States.
[Suykens and Vandewalle, 1999] Suykens, J. A. K. and Vandewalle, J. (1999). Least
Squares Support Vector Machine Classifiers. Neural Processing Letters, 9(3):293–
300, doi:10.1023/A:1018628609742.
[Swets, 1988] Swets, J. A. (1988). Measuring the accuracy of diagnostic systems.
Science, 240(4857):1285–1293, doi:10.1126/science.3287615.
[Syed et al., 1999] Syed, N. A., Liu, H., and Sung, K. K. (1999). Handling concept
drifts in incremental learning with support vector machines. In Proceedings of
the fifth ACM SIGKDD international conference on Knowledge discovery and data
mining - KDD ’99, pages 317–321. ACM Press.
[Szegedy et al., 2014] Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D.,
Goodfellow, I., and Fergus, R. (2014). Intriguing properties of neural networks. In
International Conference on Learning Representations.
[Tabie and Kirchner, 2013] Tabie, M. and Kirchner, E. A. (2013). EMG Onset Detection - Comparison of Different Methods for a Movement Prediction Task based on
EMG. In Alvarez, S., Solé-Casals, J., Fred, A., and Gamboa, H., editors, Proceedings of the International Conference on Bio-inspired Systems and Signal Processing, pages 242–247, Barcelona, Spain. SciTePress.
[Tabie et al., 2014] Tabie, M., Wöhrle, H., and Kirchner, E. A. (2014). Runtime calibration of online eeg based movement prediction using emg signals. In In Proceedings of the 7th International Conference on Bio-inspired Systems and Signal
Processing (BIOSIGNALS-14), pages 284–288, Angers, France. SciTePress.
[Tam et al., 2011] Tam, W.-K., Tong, K.-y., Meng, F., and Gao, S. (2011). A minimal
set of electrodes for motor imagery BCI to control an assistive device in chronic
stroke subjects: a multi-session study. IEEE Transactions on Neural Systems and
Rehabilitation Engineering, 19(6):617–27, doi:10.1109/TNSRE.2011.2168542.
[Tax, 2001] Tax, D. M. J. (2001). One-class classification: Concept learning in the
absence of counterexamples. PhD thesis, Delft University of Technology.
[Tax and Duin, 2004] Tax, D. M. J. and Duin, R. P. W. (2004). Support Vector Data Description. Machine Learning, 54(1):45–66, doi:10.1023/B:MACH.0000008084.60811.49.
[Tiedemann et al., 2015] Tiedemann, T., Vögele, T., Krell, M. M., Metzen, J. H., and
Kirchner, F. (2015). Concept of a data thread based parking space occupancy prediction in a berlin pilot region. In Papers from the 2015 AAAI Workshop. Workshop
on AI for Transportation (WAIT-2015), January 25-26, Austin, USA. AAAI Press.
[Torii and Abe, 2009] Torii, Y. and Abe, S. (2009). Decomposition techniques for training linear programming support vector machines. Neurocomputing, 72(4-6):973–984, doi:10.1016/j.neucom.2008.04.008.
[Treue, 2003] Treue, S. (2003). Visual attention: the where, what, how and why of saliency. Current Opinion in Neurobiology, 13(4):428–432, doi:10.1016/S0959-4388(03)00105-3.
[Van Gestel et al., 2002] Van Gestel, T., Suykens, J. A. K., Lanckriet, G., Lambrechts, A., De Moor, B., and Vandewalle, J. (2002). Bayesian framework for least-squares support vector machine classifiers, gaussian processes, and kernel Fisher discriminant analysis. Neural Computation, 14(5):1115–1147, doi:10.1162/089976602753633411.
[Van Vaerenbergh et al., 2010] Van Vaerenbergh, S., Santamaria, I., Liu, W., and
Principe, J. C. (2010). Fixed-budget kernel recursive least-squares. In 2010 IEEE
International Conference on Acoustics, Speech and Signal Processing, pages 1882–
1885. IEEE.
[Vapnik, 2000] Vapnik, V. (2000). The nature of statistical learning theory. Springer.
[Varewyck and Martens, 2011] Varewyck, M. and Martens, J.-P. (2011). A practical approach to model selection for support vector machines with a Gaussian kernel. IEEE Transactions on Systems, Man, and Cybernetics. Part B, Cybernetics,
41(2):330–340, doi:10.1109/TSMCB.2010.2053026.
[Varoquaux, 2013] Varoquaux, G. (2013). joblib 0.7.0d.
[Verhoeye and de Wulf, 1999] Verhoeye, J. and de Wulf, R. (1999). An Image Processing Chain for Land-Cover Classification Using Multitemporal ERS-1 Data. Photogrammetric Engineering & Remote Sensing, 65(10):1179–1186.
[Verstraeten et al., 2012] Verstraeten, D., Schrauwen, B., Dieleman, S., Brakel, P.,
Buteneers, P., and Pecevski, D. (2012). Oger: Modular Learning Architectures
For Large-Scale Sequential Processing. Journal of Machine Learning Research,
13:2995–2998.
[Wöhrle and Kirchner, 2014] Wöhrle, H. and Kirchner, E. A. (2014). Online classifier adaptation for the detection of p300 target recognition processes in a complex
teleoperation scenario. In da Silva, H. P., Holzinger, A., Fairclough, S., and Majoe,
D., editors, Physiological Computing Systems, Lecture Notes in Computer Science,
pages 105–118. Springer Berlin Heidelberg.
[Wöhrle et al., 2015] Wöhrle, H., Krell, M. M., Straube, S., Kim, S. K., Kirchner,
E. A., and Kirchner, F. (2015). An Adaptive Spatial Filter for User-Independent
Single Trial Detection of Event-Related Potentials. IEEE Transactions on Biomedical Engineering, doi:10.1109/TBME.2015.2402252.
[Wöhrle et al., 2013a] Wöhrle, H., Teiwes, J., Kirchner, E. A., and Kirchner, F.
(2013a). A Framework for High Performance Embedded Signal Processing and
Classification of Psychophysiological Data. In APCBEE Procedia. International
Conference on Biomedical Engineering and Technology (ICBET-2013), 4th, May
19-20, Kopenhagen, Denmark. Elsevier.
[Wöhrle et al., 2013b] Wöhrle, H., Teiwes, J., Krell, M. M., Kirchner, E. A., and Kirchner, F. (2013b). A Dataflow-based Mobile Brain Reading System on Chip with
Supervised Online Calibration - For Usage without Acquisition of Training Data.
In Proceedings of the International Congress on Neurotechnology, Electronics and
Informatics, pages 46–53, Vilamoura, Portugal. SciTePress.
[Wöhrle et al., 2014] Wöhrle, H., Teiwes, J., Krell, M. M., Seeland, A., Kirchner,
E. A., and Kirchner, F. (2014). Reconfigurable dataflow hardware accelerators for
machine learning and robotics. In Proceedings of European Conference on Machine
Learning and Principles and Practice of Knowledge Discovery in Databases (ECML
PKDD-2014), September 15-19, Nancy, France.
[Wöhrle et al., 2014] Wöhrle, H., Teiwes, J., Tabie, M., Seeland, A., Kirchner, E. A.,
and Kirchner, F. (2014). Prediction of Movements by Online Analysis of Electroencephalogram with Dataflow Accelerators. In Proc. International Congress on Neurotechnology, Electronics and Informatics (NEUROTECHNIX 2014), Rome, Italy.
ScitePress.
[Wolpert and Macready, 1997] Wolpert, D. and Macready, W. (1997). No free lunch
theorems for optimization.
IEEE Transactions on Evolutionary Computation,
1(1):67–82, doi:10.1109/4235.585893.
[Yadav et al., 2012] Yadav, R., Swamy, M. N. S., and Agarwal, R. (2012). Model-based
seizure detection for intracranial EEG recordings. IEEE Transactions on Biomedical Engineering, 59(5):1419–1428, doi:10.1109/TBME.2012.2188399.
[Zander and Kothe, 2011] Zander, T. O. and Kothe, C. (2011). Towards passive braincomputer interfaces: applying brain-computer interface technology to humanmachine systems in general.
Journal of Neural Engineering, 8(2):025005,
doi:10.1088/1741-2560/8/2/025005.
[Zito et al., 2008] Zito, T., Wilbert, N., Wiskott, L., and Berkes, P. (2008). Modular
toolkit for Data Processing (MDP): a Python data processing framework. Frontiers
in Neuroinformatics, 2(8), doi:10.3389/neuro.11.008.2008.
arXiv:1604.00039v1 [] 31 Mar 2016
Pointwise Adaptive Estimation of the Marginal
Density of a Weakly Dependent Process
Karine Bertin∗
Nicolas Klutchnikoff†
April 4, 2016
Abstract
This paper is devoted to the estimation of the common marginal
density function of weakly dependent processes. The accuracy of
estimation is measured using pointwise risks. We propose a data-driven procedure using kernel rules. The bandwidth is selected using
the approach of Goldenshluger and Lepski and we prove that the
resulting estimator satisfies an oracle type inequality. The procedure
is also proved to be adaptive (in a minimax framework) over a scale of
Hölder balls for several types of dependence: strong mixing processes, λ-dependent processes or i.i.d. sequences can be considered using a single
procedure of estimation. Some simulations illustrate the performance
of the proposed method.
Keywords. Adaptive minimax rates, Density estimation, Hölder spaces,
Kernel estimation, Oracle inequality, Weakly dependent processes
1 Introduction
Let X = (Xi )i∈Z be a real-valued weakly dependent process admitting a
common marginal density f : R → R. We consider the problem of estimating
f at a fixed point x0 based on observation of X1 , . . . , Xn with n ∈ N∗ . The
accuracy of an estimator f˜n is evaluated using the pointwise risk defined, for
fixed x0 ∈ R and q > 0, by
R_q(f̃_n, f) = ( E|f̃_n(x_0) − f(x_0)|^q )^{1/q},

∗ CIMFAV, Universidad de Valparaíso, General Cruz 222, Valparaíso, Chile, tel/fax: 0056322303623
† Crest-Ensai and Université de Strasbourg
where E denotes the expectation with respect to the distribution of the process
X. The main interest in considering such risks is to obtain estimators that
adapt to the local behavior of the density function to be estimated.
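To make the pointwise risk concrete, here is a small Monte Carlo sketch (ours, not from the paper) that approximates R_q for a fixed-bandwidth Epanechnikov kernel estimator on i.i.d. N(0,1) data; the bandwidth h = 0.3, the sample size, and the replication count are arbitrary illustration choices.

```python
import numpy as np

def pointwise_risk(estimator, f_true, x0, sample, n, q=2.0, reps=200, seed=0):
    """Monte Carlo approximation of R_q = (E|f_n(x0) - f(x0)|^q)^(1/q)."""
    rng = np.random.default_rng(seed)
    errs = [abs(estimator(x0, sample(rng, n)) - f_true(x0)) ** q for _ in range(reps)]
    return float(np.mean(errs) ** (1.0 / q))

def kde_at(x0, data, h=0.3):
    # Epanechnikov kernel estimate of f(x0); h is an arbitrary fixed bandwidth.
    u = (x0 - data) / h
    return (0.75 * np.maximum(1.0 - u**2, 0.0)).mean() / h

phi = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # true N(0,1) density

risk = pointwise_risk(kde_at, phi, 0.0, lambda rng, n: rng.standard_normal(n), n=1000)
```

For a fixed h the risk is dominated by a bias term of order h^s and a stochastic term of order (nh)^{-1/2}, which is exactly the trade-off a bandwidth selection rule must balance.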
The aim of this paper is to obtain adaptive estimators of f on Hölder
classes of regularity s > 0 for this risk and different types of weakly dependent
processes.
In the independent and identically distributed (i.i.d.) case, the minimax
rate of convergence is n^{−s/(2s+1)} (see Tsybakov (2009), and references therein).
Adaptive procedures based on the classical Lepski procedure (see Lepski (1990)) have been obtained (see Butucea (2000)) with rates of the form (log n / n)^{s/(2s+1)}.
In the context of dependent data, Ragache and Wintenberger (2006) as
well as Rio (2000) studied kernel density estimators from a minimax point
of view for pointwise risks. They obtained the same rate of convergence
as in the independent case when the coefficients of dependence decay at
a geometric rate. Several papers deal with the adaptive estimation of the
common marginal density of a weakly dependent process. Tribouley and
Viennet (1998) and Comte and Merlevède (2002) proposed Lp –adaptive
estimators under α–mixing or β–mixing conditions that converge at the
previously mentioned rates. Gannaz and Wintenberger (2010) extend these
results to a wide variety of weakly dependent processes including λ–dependent
processes. Note that, in these papers, the proposed procedures are based
on nonlinear wavelet estimators and only integrated risks are considered.
Moreover, the thresholds are not explicitly defined since they depend on an
unknown multiplicative constant. As a consequence, such methods can not
be used directly for practical purposes.
Our main purpose is to prove similar results for pointwise risks. We propose
here a kernel density estimator with a data-driven selection of the bandwidth
where the selection rule is performed using the so-called Goldenshluger-Lepski
method (see Goldenshluger and Lepski, 2008, 2011, 2014). This method
was successfully used in different contexts such as in Comte and Genon-Catalot (2012), Doumic, Hoffmann, Reynaud-Bouret, and Rivoirard (2012),
Bertin, Lacour, and Rivoirard (2014), Rebelles (2015), but only with i.i.d.
observations. However there are at least two practical motivations to consider
dependent data. Firstly, obtaining estimators that are robust with respect to
slight perturbations from the i.i.d. ideal model can be useful. Secondly, many
econometric models (such as ARCH or GARCH) deal with dependent data
that admit a common marginal density. These two motivations suggest to
consider a class of dependent data as large as possible and to find a single
procedure of estimation that adapts to each situation of dependence.
Our contribution is the following. We obtain the adaptive rate of convergence for pointwise risks over a large scale of Hölder spaces in several
situations of dependence, such as α–mixing introduced by Rosenblatt (1956)
and the λ–dependence defined by Doukhan and Wintenberger (2007). This
partially generalizes previous results obtained in i.i.d. case by Butucea (2000)
and Rebelles (2015). To the best of our knowledge, this is the first adaptive
result for pointwise density estimation in the context of dependent data.
To establish it, we prove an oracle type inequality: the selected estimator
performs almost as well as the best estimator in a given large finite family
of kernel estimators. Our data-driven procedure depends only on explicit
quantities. This implies that this procedure can be directly implemented in
practice. As a direct consequence, we get a new method to choose an accurate
local bandwidth for kernel estimators.
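The kind of local bandwidth choice just mentioned can be sketched as follows. This is a simplified Goldenshluger–Lepski-style selection with made-up constants, not the paper's exact rule or penalty: for each bandwidth h in a grid, an estimated bias term compares the estimate at h with the estimates at smaller bandwidths, and the selected h minimizes this term plus a variance-type penalty.

```python
import numpy as np

def kernel_density_at(x0, data, h, kernel=None):
    """Pointwise kernel density estimate (1/(n h)) * sum_i K((x0 - X_i)/h)."""
    if kernel is None:
        # Epanechnikov kernel, a common compactly supported choice
        kernel = lambda u: 0.75 * np.maximum(1.0 - u**2, 0.0)
    u = (x0 - np.asarray(data)) / h
    return kernel(u).mean() / h

def select_bandwidth_gl(x0, data, bandwidths, penalty=1.0):
    """Goldenshluger-Lepski-style local bandwidth selection (illustrative constants)."""
    n = len(data)
    est = {h: kernel_density_at(x0, data, h) for h in bandwidths}
    def sigma(h):
        # variance-type penalty, up to constants
        return penalty * np.sqrt(np.log(n) / (n * h))
    best_h, best_crit = None, np.inf
    for h in bandwidths:
        # bias proxy: worst excess of |f_h - f_h'| over the penalty at h' <= h
        bias = max(abs(est[h] - est[hp]) - sigma(hp)
                   for hp in bandwidths if hp <= h)
        crit = max(bias, 0.0) + sigma(h)
        if crit < best_crit:
            best_h, best_crit = h, crit
    return best_h
```

The appeal of this scheme is that the selection is purely data-driven and local: re-running it at a different x0 can pick a different bandwidth, matching the local regularity of f.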
The rest of this paper is organized as follows. Section 2 is devoted to the
presentation of our model and of the assumptions on the process X. The
construction of our procedure of estimation is developed in Section 3. The
main results of the paper are stated in Section 4 whereas their proofs are
postponed to Section 6. A simulation study is performed in Section 5 to
illustrate the performance of our method in comparison to other classical
estimation procedures. The proofs of the technical results are presented in
the appendix.
2 Model
In what follows X = (Xi )i∈Z is a real-valued discrete time process and the
observation consists of the vector (X1 , . . . , Xn ). We assume that the Xi ’s
are identically distributed and we aim at estimating the common marginal
density f at a fixed point x0 ∈ R. In this section, basic assumptions on the
distribution of X are stated. Examples of processes are given, which illustrate
the variety of models covered in this work.
2.1 Assumptions
We first assume that f is bounded in a neighborhood of x_0.
Assumption 1. The marginal density f satisfies

    sup_{x ∈ V_n(x_0)} f(x) ≤ B,

where V_n(x_0) = [x_0 − (1/log n)^{1/q}, x_0 + (1/log n)^{1/q}] and B is a positive real constant.
Before giving the two last assumptions, we need some notation. For
i ∈ Z^u, we consider the random variable X_i = (X_{i_1}, …, X_{i_u}) with values in
R^u. If i ∈ Z^u and j ∈ Z^v, we define the gap between j and i by γ(j, i) =
min(j) − max(i) ∈ Z. For any positive integer u, the functional class G_u
consists of real-valued functions g defined on R^u such that the support of g is
included in (V_n(x_0))^u,

    ‖g‖_{∞,u} = sup_{x ∈ [−1,1]^u} |g(x)| < ∞,

and

    Lip_u(g) = sup_{x ≠ y} |g(x) − g(y)| / Σ_{i=1}^{u} |x_i − y_i| < +∞.

We now define the sequence ρ(X) = (ρ_r(X))_{r ∈ N*} by

    ρ_r(X) = sup_{u,v ∈ N*} sup_{(i,j) ∈ Z^u × Z^v, γ(j,i) ≥ r} sup_{g ∈ G_u} sup_{g̃ ∈ G_v} |Cov(g(X_i), g̃(X_j))| / Ψ(u, v, g, g̃),

where Ψ(u, v, g, g̃) = max(Ψ_1(u, v, g, g̃), Ψ_2(u, v, g, g̃)) with, for g ∈ G_u and g̃ ∈ G_v,

    Ψ_1(u, v, g, g̃) = 4 ‖g‖_{∞,u} ‖g̃‖_{∞,v}

and

    Ψ_2(u, v, g, g̃) = u ‖g̃‖_{∞,v} Lip_u(g) + v ‖g‖_{∞,u} Lip_v(g̃) + uv Lip_u(g) Lip_v(g̃).
Both Assumptions 2 and 3, given below, allow us to control the covariance
of functionals of the process. Assumption 2 deals with the type of dependence,
whereas Assumption 3 is technical (see Lemma 1, Section 6).
Assumption 2. For some positive constants a, b, and c, the sequence ρ(X) is such that

    ρ_r(X) ≤ c exp(−a r^b),   ∀r ∈ N.
Assumption 3. There exists a positive constant C such that

    μ(X) = sup_{g,g̃ ∈ G_1\{0}} sup_{i ≠ j} |E(g(X_i) g̃(X_j))| / (‖g‖_1 ‖g̃‖_1) ≤ C,

where ‖g‖_1 = ∫_R |g(t)| dt.
2.2 Comments
On Assumption 1. We are assuming that the marginal density f is
bounded on a neighborhood of x0 . Such an assumption is classical in density estimation (see Goldenshluger and Lepski, 2011, and references therein).
Note also that stationarity of X is not assumed. Thus, re-sampled processes
of stationary processes can be considered in this study as in Ragache and
Wintenberger (2006).
On Assumption 2. Recall that a process X is called weakly dependent
if, roughly speaking, the covariance between functionals of the past and
the future of the process decreases as the gap from the past to the future
increases (for a broader picture of weakly-dependent processes, as well as
examples and applications, we refer the reader to Dedecker, Doukhan, Lang,
León R., Louhichi, and Prieur (2007) and references therein). Assumption 2
ensures that the decay occurs at a geometric rate. Under similar assumptions,
Doukhan and Neumann (2007), Merlevède, Peligrad, and Rio (2009) proved
Bernstein-type inequalities that are used in the proof of Theorem 1.
Note also that the type of weak-dependence considered in this paper
includes usual types of dependence such as strong mixing as well as classical
weak-dependence assumptions used in econometrics as illustrated below.
Strongly mixing processes. The process X is called α-mixing if the
sequence α(X) = (α_r(X))_{r ∈ N} defined by

    α_r(X) = sup_{n ∈ Z} sup_{A ∈ F_{−∞}^{n}} sup_{B ∈ F_{n+r}^{+∞}} |P(A ∩ B) − P(A)P(B)|

tends to 0 as r goes to infinity, where F_k^ℓ is defined for any k, ℓ ∈ Z as the
σ-algebra generated by (X_i)_{k ≤ i ≤ ℓ}. Recall that for any u, v ∈ N*, g : R^u → R
and g̃ : R^v → R such that ‖g‖_{∞,u} < +∞ and ‖g̃‖_{∞,v} < +∞ we have

    sup_{(i,j) ∈ Z^u × Z^v, γ(j,i) ≥ r} |Cov(g(X_i), g̃(X_j))| ≤ 4 ‖g‖_{∞,u} ‖g̃‖_{∞,v} α_r(X).
This readily implies that, for any r ∈ N, we have ρr (X) ≤ αr (X).
λ–dependent processes. The process X is called λ–dependent if the
sequence (λ_r(X))_r defined by

    λ_r(X) = sup_{u,v ∈ N*} sup_{(i,j) ∈ Z^u × Z^v, γ(j,i) ≥ r} sup_{g ∈ G_u} sup_{g̃ ∈ G_v} |Cov(g(X_i), g̃(X_j))| / Ψ_2(u, v, g, g̃)

tends to 0 as r tends to infinity. Then ρ_r(X) ≤ λ_r(X) for any r ∈ N.
Example. Bernoulli shifts are, under some conditions, λ–dependent. Indeed, let us consider the process X defined by:

    X_i = H((ξ_{i−j})_{j ∈ Z}),   i ∈ Z,   (1)

where H : R^Z → [0, 1] is a measurable function and the variables ξ_i are i.i.d.
and real-valued. In addition, assume that there exists a sequence (θ_r)_{r ∈ N*}
such that

    E|H((ξ_i)_{i ∈ Z}) − H((ξ_i′)_{i ∈ Z})| ≤ θ_r,   (2)

where, for any r ∈ N*, (ξ_i′)_{i ∈ Z} is an i.i.d. sequence such that ξ_i′ = ξ_i if |i| ≤ r and
ξ_i′ is an independent copy of ξ_i otherwise. It can be proved (see Doukhan and
Louhichi, 1999) that such processes are strictly stationary and λ–dependent
with rate ρ_r ≤ 2θ_{[r/2]}. Remark also that θ_r can be evaluated under both
regularity conditions on the function H and integrability conditions on the ξ_i,
i ∈ Z. Indeed, if we assume that there exist b ∈ (0, 1] and positive constants
(a_i)_{i ∈ Z} such that |H((x_i)_i) − H((y_i)_i)| ≤ Σ_{i ∈ Z} a_i |x_i − y_i|^b with ξ_i ∈ L^b(R) for
all i ∈ Z, then

    θ_r = Σ_{|i| ≥ r} a_i E|ξ_i|^b.

Moreover, under the weaker condition that (ξ_i)_{i ∈ Z} is λ–dependent and
stronger assumptions on H (see Doukhan and Wintenberger, 2007), the
process X inherits the same properties. Finally, we point out that classical
econometric models such as AR, ARCH or GARCH can be viewed as causal
Bernoulli shifts (that is, they obey (1) with j ∈ N).
On Assumption 3. This technical assumption is satisfied in several situations. In what follows, we offer examples of sufficient conditions under which
Assumption 3 holds.

Situation 1. We assume that for any i, j ∈ Z the pair (X_i, X_j) admits a
density function f_{i,j} with respect to the Lebesgue measure on R². Moreover
we assume that there exists a constant F such that

    sup_{i,j ∈ Z} sup_{x,y ∈ V_n(x_0)} f_{i,j}(x, y) ≤ F.

Under this assumption and using Fubini's theorem, we readily obtain that
Assumption 3 holds with C = F. Note that this assumption is used in Gannaz
and Wintenberger (2010).
Situation 2. We consider the infinite moving average process, with i.i.d.
innovations (ξ_j)_{j ∈ Z}, given by:

    X_i = Σ_{j ∈ Z} a_j ξ_{i−j},   i ∈ Z,

where (a_j)_{j ∈ N} and (a_{−j})_{j ∈ N} are decreasing sequences of deterministic positive
real numbers. We assume that ξ_1 admits a density function p_ξ(·) bounded
above by a positive constant P. Set i, j ∈ Z and denote by Ξ the σ-algebra
generated by the process (ξ_k)_{k ∈ Z\{i,j}}. For g, g̃ in G_1 we have

    E(g(X_i) g̃(X_j)) = E[E(g(X_i) g̃(X_j) | Ξ)]
                    = E[E(g(a_0 ξ_i + a_{j−i} ξ_j + A) g̃(a_0 ξ_j + a_{i−j} ξ_i + B) | Ξ)],

where A and B are Ξ-measurable random variables. A simple change of
variables gives:

    |E(g(X_i) g̃(X_j) | Ξ)| ≤ P² / (a_0² − a_{j−i} a_{i−j}) · ‖g‖_1 ‖g̃‖_1.

Since a_0² − a_{j−i} a_{i−j} ≥ a_0² − a_1 a_{−1}, Assumption 3 is fulfilled with C = P² (a_0² − a_1 a_{−1})^{−1}.
Situation 3. We consider a GARCH(1, 1) model. Let α, β and γ be
positive real numbers. Let (ξ_i)_{i ∈ Z} be i.i.d. innovations with marginal density
p_ξ(·), bounded above by a positive constant B, and denote by (F_i)_{i ∈ Z} the
natural filtration associated with this process. Assume that the process X is
such that, for any i ∈ Z:

    X_i = σ_i ξ_i   with   σ_i² = γ + α X_{i−1}² + β σ_{i−1}².   (3)

Consider i, j ∈ Z such that i < j. For g, g̃ in G_1 we have

    E(g(X_i) g̃(X_j)) = E E[g(X_i) g̃(X_j) | F_i] = E(g(X_i) E[g̃(X_j) | F_i]).

Now remark that, since σ_j ∈ F_{j−1} and ξ_j is independent of F_{j−1}, we have

    E[g̃(X_j) | F_i] = E(E[g̃(ξ_j σ_j) | F_{j−1}] | F_i)
                    = ∫_R g̃(x) E( (1/σ_j) p_ξ(x/σ_j) | F_i ) dx.

Since σ_j ≥ √γ we obtain:

    |E(g(X_i) g̃(X_j))| ≤ (B/γ) ‖g‖_1 ‖g̃‖_1.

Assumption 3 is thus fulfilled with C = B/γ.
3 Estimation procedure
In this section, we describe the construction of our procedure, which is based
on the so-called Goldenshluger-Lepski method (GLM for short). It consists
in selecting, in a data-driven way, an estimator from a given family of linear
kernel density estimators. Consequently, our method offers a new approach
to selecting an optimal bandwidth for kernel estimators in order to estimate the
marginal density of a process in several situations of weak dependence. This
leads to an estimation procedure that is well adapted to inhomogeneous
smoothness of the underlying marginal density. Notice also that our procedure
is completely data-driven: it depends only on explicit constants that do not
need to be calibrated by simulations or using the so-called rule of thumb.
3.1 Kernel density estimators
We consider kernels K : R → R that satisfy the following assumptions.

Assumption 4. The kernel K is compactly supported on [−1, 1]. Its Lipschitz
constant Lip(K) is finite and ∫_R K(x) dx = 1.

Assumption 5. There exists m ∈ N such that the kernel K is of order m.
That is, for any 1 ≤ ℓ ≤ m, we have

    ∫_R K(x) x^ℓ dx = 0.
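These moment conditions are easy to check numerically for a candidate kernel. The Python sketch below is illustrative only (it is not part of the paper); the Epanechnikov kernel used here is our own choice of example, not one prescribed above. It approximates ∫ x^ℓ K(x) dx on [−1, 1] by the trapezoidal rule:

```python
import numpy as np

def kernel_moment(K, ell, num=200_001):
    """Trapezoidal approximation of int_{-1}^{1} x^ell K(x) dx."""
    xs = np.linspace(-1.0, 1.0, num)
    ys = xs ** ell * K(xs)
    step = xs[1] - xs[0]
    return float(np.sum((ys[:-1] + ys[1:]) / 2.0) * step)

def epanechnikov(x):
    """A compactly supported, Lipschitz kernel on [-1, 1] (Assumption 4)."""
    return 0.75 * (1.0 - x ** 2) * (np.abs(x) <= 1.0)

# Assumption 4: the kernel integrates to one.
# Assumption 5 with m = 1: the first moment vanishes by symmetry,
# while the second moment equals 0.2, so the kernel is of order exactly 1.
```

Here `kernel_moment(epanechnikov, 0)` is close to 1, `kernel_moment(epanechnikov, 1)` is close to 0, and `kernel_moment(epanechnikov, 2)` is close to 0.2.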
Let h_* = n^{−1} exp(√(log n)) and h* = (log n)^{−1/q} be two bandwidths and
define H_n = {2^{−k} : k ∈ N} ∩ [h_*, h*]. We consider the family of estimators f̂_h
defined by

    f̂_h(x_0) = (1/n) Σ_{i=1}^{n} K_h(x_0 − X_i),   h ∈ H_n,

where K_h(·) = h^{−1} K(h^{−1} ·).
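For concreteness, the grid H_n and the estimators f̂_h can be coded in a few lines. The Python sketch below is illustrative and not part of the paper; the uniform kernel matches the one used in the simulations of Section 5, and the helper names are our own:

```python
import numpy as np

def bandwidth_grid(n, q=2.0):
    """Dyadic grid H_n = {2^{-k} : k in N} intersected with [h_*, h^*]."""
    h_low = np.exp(np.sqrt(np.log(n))) / n      # h_* = n^{-1} exp(sqrt(log n))
    h_high = np.log(n) ** (-1.0 / q)            # h^* = (log n)^{-1/q}
    hs = 2.0 ** (-np.arange(0, 60))             # decreasing dyadic candidates
    return hs[(hs >= h_low) & (hs <= h_high)]

def uniform_kernel(u):
    """K = (1/2) 1_{[-1,1]}: compactly supported and integrates to one."""
    return 0.5 * (np.abs(u) <= 1.0)

def f_hat(x0, sample, h, kernel=uniform_kernel):
    """Kernel estimator fhat_h(x0) = (1/n) sum_i K_h(x0 - X_i)."""
    return float(np.mean(kernel((x0 - sample) / h) / h))
```

For i.i.d. Uniform[0, 1] data and any h in the grid, `f_hat(0.5, X, h)` should be close to the true marginal density value 1.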
3.2 Bandwidth selection
Following Goldenshluger and Lepski (2011), we first define for h ∈ H_n the
two following quantities

    A(h, x_0) = max_{h̄ ∈ H_n} { |f̂_{h∨h̄}(x_0) − f̂_{h̄}(x_0)| − M̂_n(h, h̄) }_+

and

    M̂_n(h) = √( 2q |log h| ( Ĵ_n(h) + δ_n/(nh) ) ),   (4)

where δ_n = (log n)^{−1/2}, {y}_+ = max(0, y) for any y ∈ R, and h ∨ h̄ =
max(h, h̄) for any h, h̄ ∈ H_n. We also consider

    M̂_n(h, h̄) = M̂_n(h̄) + M̂_n(h ∨ h̄)

and

    Ĵ_n(h) = (1/n²) Σ_{i=1}^{n} K_h²(x_0 − X_i).

Then our procedure consists in selecting the bandwidth ĥ(x_0) such that

    ĥ(x_0) = argmin_{h ∈ H_n} { A(h, x_0) + M̂_n(h) }.   (5)

The final estimator of f(x_0) is defined by

    f̂(x_0) = f̂_{ĥ(x_0)}(x_0).
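Since every quantity in the rule (5) is computable from the data, the selection can be implemented directly. The Python sketch below is illustrative (the function names are our own); the choices q = 2 and δ_n = (log n)^{−1/2} follow the simulation study of Section 5:

```python
import numpy as np

def gl_select(x0, sample, grid, kernel, q=2.0):
    """Goldenshluger-Lepski bandwidth: argmin_h { A(h, x0) + Mhat_n(h) }."""
    n = len(sample)
    delta_n = np.log(n) ** (-0.5)
    grid = sorted(float(h) for h in grid)
    # f_hat_h(x0) for every h in the grid
    fh = {h: float(np.mean(kernel((x0 - sample) / h) / h)) for h in grid}
    # Jhat_n(h) = n^{-2} sum_i K_h^2(x0 - X_i)
    J = {h: float(np.sum((kernel((x0 - sample) / h) / h) ** 2)) / n ** 2
         for h in grid}
    # Mhat_n(h), display (4)
    M = {h: float(np.sqrt(2 * q * abs(np.log(h)) * (J[h] + delta_n / (n * h))))
         for h in grid}
    crit = {}
    for h in grid:
        # A(h, x0) with Mhat_n(h, hb) = Mhat_n(hb) + Mhat_n(h v hb)
        A = max(
            max(0.0, abs(fh[max(h, hb)] - fh[hb]) - (M[hb] + M[max(h, hb)]))
            for hb in grid
        )
        crit[h] = A + M[h]
    h_star = min(crit, key=crit.get)
    return h_star, fh[h_star]
```

`gl_select` returns ĥ(x_0) together with the final estimate f̂(x_0) = f̂_{ĥ(x_0)}(x_0).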
Remark. The Goldenshluger-Lepski method consists in selecting a data-driven
bandwidth that makes a trade-off between the two quantities A(h, x_0) and
M̂_n(h). Hereafter we explain how the minimization in (5) can be viewed
as an empirical version of the classical trade-off between a bias term and a
penalized standard deviation term.

1. The quantity M̂_n(h) can be viewed as a penalized upper bound of the
standard deviation of the estimator f̂_h. Indeed, Lemma 2 implies that

    Var(f̂_h(x_0)) ≤ J_n(h) + δ_n/(6nh),

where

    J_n(h) = (1/n) ∫ K_h²(x_0 − x) f(x) dx   (6)

would be the variance of f̂_h(x_0) if the data were i.i.d. Moreover (see
the proof of Theorem 1 in Section 6), for n large enough and with high
probability

    J_n(h) + δ_n/(6nh) ≤ Ĵ_n(h) + δ_n/(nh).

2. The quantity A(h, x_0) is a rough estimator of the bias term of f̂_h. Indeed
(see the proof of Theorem 1), we have

    A(h, x_0) ≤ max_{h̄ ∈ H_n} |E f̂_{h∨h̄}(x_0) − E f̂_{h̄}(x_0)| + 2T
            ≤ 2 max_{h̄ ≤ h, h̄ ∈ H_n} |K_{h̄} ⋆ f(x_0) − f(x_0)| + 2T,   (7)

where

    T = max_{h̄ ∈ H_n} { |f̂_{h̄}(x_0) − E f̂_{h̄}(x_0)| − M̂_n(h̄) }_+.

The quantity T is negligible with respect to 1/√(nh) with high probability,
and max_{h̄ ≤ h} |K_{h̄} ⋆ f(x_0) − f(x_0)| is of the same order as the bias of f̂_h
over Hölder balls.
4 Results
We prove two results. Theorem 1 is an oracle-type inequality: under appropriate assumptions, our estimator performs almost as well as the best linear
kernel estimator in the considered family. Theorem 2 proves that our procedure achieves classical minimax rates of convergence (up to a multiplicative
logarithmic factor) over a wide scale of Hölder spaces.
Theorem 1. Under Assumptions 1, 2, 3 and 4 we have:

    R_q^q(f̂, f) ≤ C_1* min_{h ∈ H_n} { max_{h̄ ≤ h, h̄ ∈ H_n} |K_{h̄} ⋆ f(x_0) − f(x_0)|^q + ( |log h| / (nh) )^{q/2} },   (8)

where C_1* is a positive constant that depends only on a, b, c, B, C and K.
The proof of Theorem 1 is postponed to Section 6.
Remark. The right hand side term of (8) can be viewed as a tight upper
bound for minh∈Hn E|fˆh (x0 ) − f (x0 )|q since it is the sum of an approximation
of the bias term and the standard deviation term (up to a multiplicative
logarithmic term) of fˆh . That means that our procedure performs almost as
well as the best kernel density estimator in the considered family.
Now, using Theorem 1, we obtain in Theorem 2 the adaptive rate of
convergence over Hölder classes. Let s, L and B be positive real numbers. The
Hölder class C^s(L, B) is defined as the set of functions f : R → R such that
Assumption 1 is fulfilled with the constant B, f is m_s = sup{k ∈ N : k < s}
times differentiable and

    |f^{(m_s)}(x) − f^{(m_s)}(y)| ≤ L |x − y|^{s−m_s},   ∀x, y ∈ R.
Theorem 2. Let a, b, c, B, C, m and L be positive constants. Let K be
a kernel such that Assumptions 4 and 5 are fulfilled (in particular, K is a
kernel of order m) and set s such that m_s ≤ m. There exists a constant C_2*
that depends only on a, b, c, B, C, s, L, K and q such that:

    sup_{ρ(X) ∈ R(a,b,c)} sup_{μ(X) ≤ C} sup_{f ∈ C^s(L,B)} ( E|f̂(x_0) − f(x_0)|^q )^{1/q} ≤ C_2* ( log n / n )^{s/(2s+1)},

where

    R(a, b, c) = { (ρ_r)_{r ∈ N} : ρ_r ≤ c exp(−a r^b) }.
This result is a direct consequence of Theorem 1, since it can be easily
proved that

    sup_{f ∈ C^s(L,B)} max_{h̄ ≤ h, h̄ ∈ H_n} |K_{h̄} ⋆ f(x_0) − f(x_0)|^q ≤ C_3* h^{sq},

for any bandwidth h > 0, where C_3* depends only on s, L, K and q. This
implies that, for n large enough, there exists h_n(s, L, K, q) ∈ H_n such that
the right hand side of (8) is bounded, up to a multiplicative constant, by the
expected rate.
Remark.

1. Recall that the expectation E is taken with respect to the
distribution of the process X. Note also that the sequence ρ(X), µ(X)
and f depend only on this distribution. As a consequence our estimation
procedure is minimax (up to a multiplicative log n term) with respect
to any distribution of X that satisfies the conditions:

    ρ(X) ∈ R(a, b, c),   µ(X) ≤ C   and   f ∈ C^s(L, B).

Indeed, in the i.i.d. case (which is included in our framework since
ρ(X) ≡ 0 and µ(X) ≤ B²), the minimax rate of convergence over the
Hölder class C^s(L, B) is of order n^{−s/(2s+1)} and can be obtained from
the results of Hasminskii and Ibragimov (1990) or Tsybakov (2009).
Moreover, note that f̂ does not depend on the constants a, b, c, B, C, L
and s that appear in these conditions. Thus, our procedure is adaptive,
up to the log n term, to both the regularity of f and the "structure" of
dependence.
2. It can be deduced from our proofs that the minimax rate of convergence
over Hölder classes, under Assumptions 1, 2 and 3, is upper bounded,
up to a multiplicative constant, by n^{−s/(2s+1)}. This result was previously
obtained in a similar setting by Ragache and Wintenberger (2006) and
Rio (2000). Given that this rate is minimax optimal in the i.i.d. case, it
is also the minimax rate of convergence under our assumptions.
3. The extra log n term in the rate of convergence obtained in Theorem 2
is unavoidable. Indeed, for pointwise estimation, even in the i.i.d. case
(see Lepski, 1990, Klutchnikoff, 2014, Rebelles, 2015, among others)
the adaptive rate of convergence is of this form. This ensures that our
procedure attains the adaptive rate of convergence over Hölder classes.
4. Note that δ_n, which appears in (4), allows us to control the covariance
terms in the expansion of Var(f̂_h(x_0)) under Assumption 2. If we
only consider the i.i.d. case, the covariance terms vanish, and the choice
δ_n = 0 can be made. The resulting procedure still satisfies an
oracle inequality and remains adaptive in this case.
5. As far as we know, this result is the first theoretical pointwise adaptive
result for the estimation of the marginal density in a context of weak
dependence. Moreover, integrating the pointwise risk on a bounded
domain, we obtain that our procedure converges adaptively at the rate
(n^{−1} log n)^{s/(2s+1)} in L_p-norm (p ≠ ∞) over Hölder balls. This extends
the results of Gannaz and Wintenberger (2010).
5 Simulation study
In this section, we study the performance of our procedure using simulated
data. More precisely, we aim at estimating three density functions, for three
types of dependent processes. In each situation, we study the accuracy of
our procedure as well as classical competing methods, calculating empirical
risks over p = 10000 Monte-Carlo replications. In the following, we detail
our simulation scheme and comment on the obtained results.
Simulation scheme

Density functions. We consider three density functions to be estimated. The
first one is:

    f_1(x) = 1.28 sin((3π/2 − 1)x) I_{[0,0.65]}(x) + I_{(0.65,1]}(x) + c I_{[0,1]}(x),

where c is a positive constant such that f_1 is a density. The second one is the
density of a mixture of three normal distributions restricted to the support
[0, 1]:

    f_2(x) = ( (1/2) φ_{0.5,0.1}(x) + (1/4) φ_{0.6,0.01}(x) + (1/4) φ_{0.65,0.95}(x) + c ) I_{[0,1]}(x),

where φ_{µ,σ} stands for the density of a normal distribution with mean µ and
standard deviation σ and c is a positive constant such that f_2 is a density. Note
that very similar densities were also considered in Gannaz and Wintenberger
(2010). The third one is:

    f_3(x) = Σ_{k=1}^{5} ( 1/2 + k/10 − 40 |x − (2k−1)/20| ) I_{((k−1)/10, k/10]}(x) + 0.5 I_{(0.5,1]}(x).

The function f_1 is very smooth except at the discontinuity point x = 0.65.
The function f_2 is a classical example where rule-of-thumb bandwidths do
not work. The third function has several spikes in [0, 0.5] and is constant
on [0.5, 1]. As a consequence, a global choice of bandwidth can fail to catch
the two different behaviors of the function. The three densities are bounded
from above (Assumption 1 is then satisfied) and their inverse cumulative
distribution functions are Lipschitz.
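The normalizing constants c above are pinned down by requiring each function to integrate to one. Assuming the transcription of f_1 given above (the extracted formula is a reconstruction), a short Python computation, illustrative only, recovers c numerically:

```python
import numpy as np

def trapezoid(ys, xs):
    """Plain trapezoidal rule (avoids depending on a specific numpy API)."""
    return float(np.sum((ys[:-1] + ys[1:]) / 2.0 * np.diff(xs)))

def f1_unnormalized(x):
    """f1 without its additive constant c (as transcribed above)."""
    a = 3.0 * np.pi / 2.0 - 1.0
    return 1.28 * np.sin(a * x) * (x <= 0.65) + 1.0 * (x > 0.65)

xs = np.linspace(0.0, 1.0, 200_001)
c = 1.0 - trapezoid(f1_unnormalized(xs), xs)   # mass missing to reach one

def f1(x):
    return f1_unnormalized(x) + c
```

With this transcription, c comes out as a small positive number (about 0.05) and f_1 integrates to one on [0, 1].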
Types of dependence. We simulate data (X_1, …, X_n) with density f ∈
{f_1, f_2, f_3} in three cases of dependence. Denote by F the cumulative distribution function of f.

Case 1. The X_i are independent variables given by F^{−1}(U_i) where the
U_i are i.i.d. uniform variables on [0, 1]. Assumptions 2 and 3 are clearly
satisfied.

Case 2. The X_i are λ–dependent, given by F^{−1}(G(Y_i)) where the Y_i
satisfy the non-causal equation:

    Y_i = 2(Y_{i−1} + Y_{i+1})/5 + 5ξ_i/21,   i ∈ Z.

Here (ξ_i)_{i ∈ Z} is an i.i.d. sequence of Bernoulli variables with parameter
1/2. The function G is the marginal distribution function of the Y_i and of
the variable (U + U′ + ξ_0)/3, where U and U′ are independent uniform variables
on [0, 1] (see Gannaz and Wintenberger, 2010, for more details).

Case 3. The X_i are given by F^{−1}(G(Y_i)) where Y = (Y_i)_{i ∈ Z} is an
ARCH(1) process given by (3) where the (ξ_i)_{i ∈ Z} are i.i.d. standard
normal variables, α = 0.5, β = 0 and γ = 0.5. In this case the function
G is estimated using the empirical distribution function of a simulated
process Ỹ independent of Y with the same distribution.
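Case 2 can be simulated by truncating the two-sided moving-average representation Y_i = Σ_j a_j ξ_{i−j}. The weights a_j = (1/3)(1/2)^{|j|} used below are those stated in the verification that follows; they are consistent with G being the law of (U + U′ + ξ_0)/3, since Σ_{j≥1} 2^{−j} ξ_j is uniform on [0, 1] for i.i.d. Bernoulli(1/2) innovations. The Python sketch is illustrative only:

```python
import numpy as np

def simulate_case2(n, trunc=60, rng=None):
    """Approximate draw of (Y_1, ..., Y_n) for the non-causal model of Case 2,
    via the truncated two-sided MA representation with weights
    a_j = (1/3) * 2^{-|j|} (a reconstruction from the text; illustrative)."""
    rng = rng if rng is not None else np.random.default_rng()
    j = np.arange(-trunc, trunc + 1)
    a = (1.0 / 3.0) * 0.5 ** np.abs(j)          # weights sum to ~1
    xi = rng.binomial(1, 0.5, size=n + 2 * trunc).astype(float)
    # Y_i = sum_j a_j xi_{i-j}: a discrete convolution (a is symmetric)
    return np.convolve(xi, a, mode="valid")
```

Each Y_i lies in [0, 1]; applying F^{−1} ∘ G then yields X_i with marginal density f.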
It remains to verify that Assumptions 2 and 3 hold for the process X in the
last two cases. Firstly, note that the process Y is λ–dependent with

    λ_r(Y) ≤ c exp(−ar),   (9)

for some positive constants a and c. Indeed, in the second case, Y_i is of the
form (1) since it satisfies Y_i = Σ_{j ∈ Z} a_j ξ_{i−j} with a_j = (1/3)(1/2)^{|j|}, and the
sequence θ_r (see (2)) is such that θ_r ∝ 2^{−r}. In the third case, since α < 1, Y_i is β–mixing
at a geometric rate and then it is α–mixing and λ–dependent at a geometric
rate.

Now, in both cases, since F^{−1} and G are Lipschitz and the process (Y_i)_{i ∈ Z}
is λ–dependent, using Proposition 2.1 of Dedecker et al. (2007), we have that
the process (X_i)_{i ∈ Z} is also λ–dependent with λ_r(X) satisfying (9) with some
positive constants a and c. As a consequence, Assumption 2 is fulfilled in the
last two cases.

Secondly, Assumption 3 holds for the process (Y_i)_{i ∈ Z} (see Situation 2 and
Situation 3 in Subsection 2.2). Then, for g, g̃ ∈ G_1 and i, j ∈ Z with i ≠ j,
we have

    |E(g(X_i) g̃(X_j))| ≤ C ‖g ∘ F^{−1} ∘ G‖_1 ‖g̃ ∘ F^{−1} ∘ G‖_1 ≤ (C B² / D²) ‖g‖_1 ‖g̃‖_1,

where D = min{G′(x) : x ∈ G^{−1} ∘ F(V_n(x_0))} is bounded from below as soon
as x_0 ∈ (0, 1) and n is large enough. As a consequence, Assumption 3 is also
fulfilled for the process X.
Estimation procedures

We propose to compare in this simulation study the following procedures.

• Our procedure (GL), f̂, performed with q = 2 and δ_n = (log n)^{−1/2} in (4).

• The leave-one-out cross-validation (CV) performed on the family of
kernel estimators given in Subsection 3.1.

• The kernel procedure with bandwidth given by the rule of thumb (RT).

• The procedure developed by Gannaz and Wintenberger (2010).

In the first three procedures, we use the uniform kernel.
Quality criteria
For each density function f ∈ {f1 , f2 , f3 } and each case of dependence,
we simulate p = 10000 sequences of observations (X1 , . . . , Xn ) with n = 1000.
            Case 1   Case 2   Case 3
  f1   GL   0.036    0.033    0.044
       CV   0.027    0.034    0.049
       RT   0.036    0.040    0.054
  f2   GL   0.181    0.203    0.222
       CV   0.079    0.116    0.162
       RT   0.965    0.975    0.971
  f3   GL   0.090    0.098    0.118
       CV   0.172    0.180    0.190
       RT   0.263    0.266    0.286

Table 1: Mean of ISE for the three densities f_1, f_2 and f_3, the three cases of
dependence and the three procedures GL, CV and RT.
Given an estimation procedure, we calculate p estimators f̂^{(1)}, …, f̂^{(p)}. We
consider the empirical integrated squared error:

    ÎSE = (1/p) Σ_{j=1}^{p} ∫_{[0,1]} ( f(x) − f̂^{(j)}(x) )² dx.
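This empirical criterion is easy to compute on a grid. The Python sketch below is illustrative only; it assumes the true density and each estimate are available as vectorized functions on [0, 1]:

```python
import numpy as np

def ise(f_true, f_est, m=512):
    """Integrated squared error of one estimate over [0, 1] (trapezoidal rule)."""
    xs = np.linspace(0.0, 1.0, m + 1)
    d2 = (f_true(xs) - f_est(xs)) ** 2
    return float(np.sum((d2[:-1] + d2[1:]) / 2.0) * (1.0 / m))

def mean_ise(f_true, estimates, m=512):
    """Empirical ISE-hat: average of the ISE over the p replications."""
    return float(np.mean([ise(f_true, fe, m) for fe in estimates]))
```

For instance, a perfect estimate of the uniform density has ISE 0, while the constant-zero "estimate" has ISE 1.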
Results

Our results are summarized in Table 1.

For the estimation of the function f_1, our procedure gives better results
than the CV or RT methods in the cases of dependence (2 and 3). We also
outperform the results of Gannaz and Wintenberger (2010) for cases 1 and 2,
where the ISE was around 0.09 (case 3 was not considered in their paper).
For the estimation of f_2, which is quite smooth, the cross-validation method
is about two times better than the GL method and, as expected, the RT method
does not work. For the estimation of f_3, which contains several peaks, the GL
procedure is about two times better than the CV method.

To conclude, in the considered examples, our procedure has similar or
better performance than already existing methods used for dependent data.
Moreover, it gives better results when the density function to be estimated
presents irregularities. This illustrates the fact that our method adapts locally
to the irregularities of the function thanks to the use of local bandwidths.
Another important point is that the choice of the bandwidth depends on explicit
constants that can be used directly in practice and do not need previous
calibration. Additionally, our GL procedure is about 25 times faster than
cross-validation.
6 Proofs

6.1 Basic notation
For the sake of readability, we introduce in this section some conventions and
notations that are used throughout the proofs. Moreover, here and later, we
assume that Assumptions 1, 2 and 3 hold.

Firstly, let us consider, for any h ∈ H_n, the functions g_h and ḡ_h defined,
for any y ∈ R, by g_h(y) = K_h(x_0 − y) and

    ḡ_h(y) = ( g_h(y) − E g_h(X_1) ) / n.

Note that we have:

    f̂_h(x_0) − E f̂_h(x_0) = Σ_{i=1}^{n} ḡ_h(X_i),   h ∈ H_n.

Next, we introduce some constants. Let us consider:

    C_1 = ‖K‖_1² (B² + C),   C_2 = B ‖K‖_2²,   and   C_3 = 2 ‖K‖_∞.

Moreover we define L = Lip(K) and

    C_4 = 2C_3 L + L²   and   C_5 = (2C_1)^{3/4} C_4^{1/4}.

6.2 Auxiliary results
In the three following lemmas, assume that Assumptions 1, 2 and 3 are
satisfied. The first lemma provides bounds on covariance terms for functionals
of the past and the future of the observations. The considered functionals
depend on the kernel K.

Lemma 1. For any h ∈ H_n, we define

    D_1(h) = D_1(n, h) = C_3/(nh)   and   D_2(h) = D_2(n, h) = C_5/(n²h).

Then for any u, v and r in N, if (i_1, …, i_u, j_1, …, j_v) ∈ Z^{u+v} is such that
i_1 ≤ … ≤ i_u ≤ i_u + r ≤ j_1 ≤ … ≤ j_v, we have

    |Cov( Π_{k=1}^{u} ḡ_h(X_{i_k}), Π_{m=1}^{v} ḡ_h(X_{j_m}) )| ≤ Φ(u, v) D_1^{u+v−2}(h) D_2(h) ρ_r^{1/4},

where Φ(u, v) = u + v + uv.
The following lemma provides a moment inequality for the classical kernel
estimator.

Lemma 2. There exists a positive integer N_0 that depends only on a, b, c,
B, C and K such that, for any n ≥ N_0, we have

    E( Σ_{i=1}^{n} ḡ_h(X_i) )² ≤ J_n(h) + δ_n/(6nh) ≤ C_2/(nh) + δ_n/(6nh),

where J_n is defined by (6). Moreover for q > 0, we have

    E| Σ_{i=1}^{n} ḡ_h(X_i) |^q ≤ C_q (nh)^{−q/2} (1 + o(1)),

where C_q is a positive constant. Here the o(·)-terms depend only on a, b, c,
B, C and K.
The following result is an adaptation of the Bernstein-type inequality
obtained by Doukhan and Neumann (2007).

Lemma 3 (Bernstein's inequality). We have:

    P( | Σ_{i=1}^{n} ḡ_h(X_i) | ≥ λ(t) ) ≤ C_7 exp(−t/2),   t ≥ 0,

where

    λ(t) = σ_n(h) √(2t) + B_n(h) (2t)^{2+1/b},   (10)

    σ_n²(h) = J_n(h) + δ_n/(6nh)   (11)

and

    B_n(h) = C_6/(nh),   (12)

with C_6 and C_7 positive constants that depend only on a, b, c, B, C and K.
6.3 Proof of Theorem 1
Let us denote γ = q(1 + δ_n/(12 max(C_2, 1/6))). For convenience, we split the
proof into several steps.

Step 1. Let us consider the random event

    A = ⋂_{h ∈ H_n} { |Ĵ_n(h) − J_n(h)| ≤ δ_n/(2nh) }

and the quantities Γ_1 and Γ_2 defined by:

    Γ_1 = E( |f̂(x_0) − f(x_0)|^q I_A )

and

    Γ_2 = max_{h ∈ H_n} ( R_{2q}^{2q}(f̂_h, f) P(A^c) )^{1/2},

where I_A is the characteristic function of the set A. Using the Cauchy-Schwarz
inequality, it follows that:

    R_q^q(f̂, f) ≤ Γ_1 + Γ_2.

We define

    M_n(h, a) = √( 2q |log h| ( J_n(h) + a δ_n/(nh) ) ).

Now note that if the event A holds, we have:

    M_n(h, 1/2) ≤ M̂_n(h) ≤ M_n(h, 3/2).

Steps 2–5 are devoted to the control of the term Γ_1, whereas Γ_2 is upper bounded in
Step 6.
Step 2. Let h ∈ H_n be an arbitrary bandwidth. Using the triangle inequality
we have:

    |f̂(x_0) − f(x_0)| ≤ |f̂_{ĥ}(x_0) − f̂_{h∨ĥ}(x_0)| + |f̂_{h∨ĥ}(x_0) − f̂_h(x_0)| + |f̂_h(x_0) − f(x_0)|.

If h ∨ ĥ = ĥ, the first term of the right hand side of this inequality is equal to
0, and if h ∨ ĥ = h, it satisfies

    |f̂_{ĥ}(x_0) − f̂_{h∨ĥ}(x_0)| ≤ { |f̂_{ĥ}(x_0) − f̂_{h∨ĥ}(x_0)| − M̂_n(h, ĥ) }_+ + M̂_n(h, ĥ)
        ≤ max_{h̄ ∈ H_n} { |f̂_{h̄}(x_0) − f̂_{h∨h̄}(x_0)| − M̂_n(h, h̄) }_+ + M̂_n(h, ĥ)
        ≤ A(h, x_0) + M̂_n(ĥ) + M̂_n(h).

Applying the same reasoning to the term |f̂_{h∨ĥ}(x_0) − f̂_h(x_0)| and using (5),
this leads to

    |f̂(x_0) − f(x_0)| ≤ 2( A(h, x_0) + M̂_n(h) ) + |f̂_h(x_0) − f(x_0)|.

Using this equation, we obtain that, for some positive constant c_q,

    Γ_1 ≤ c_q ( E(A^q(h, x_0) I_A) + M_n^q(h, 3/2) + R_q^q(f̂_h, f) ).   (13)
Step 3. Now, we upper bound A(h, x_0). Using basic inequalities we have:

    A(h, x_0) ≤ max_{h̄ ∈ H_n} { |f̂_{h̄}(x_0) − E f̂_{h̄}(x_0)| − M̂_n(h̄) }_+
        + max_{h̄ ∈ H_n} { |f̂_{h∨h̄}(x_0) − E f̂_{h∨h̄}(x_0)| − M̂_n(h ∨ h̄) }_+
        + max_{h̄ ∈ H_n} |E f̂_{h∨h̄}(x_0) − E f̂_{h̄}(x_0)|
    ≤ max_{h̄ ∈ H_n} |E f̂_{h∨h̄}(x_0) − E f̂_{h̄}(x_0)| + 2T,

where

    T = max_{h̄ ∈ H_n} { |f̂_{h̄}(x_0) − E f̂_{h̄}(x_0)| − M̂_n(h̄) }_+ = max_{h̄ ∈ H_n} { |Σ_{i=1}^{n} ḡ_{h̄}(X_i)| − M̂_n(h̄) }_+.

Using (7), we obtain

    A(h, x_0) ≤ 2 max_{h̄ ≤ h, h̄ ∈ H_n} |K_{h̄} ⋆ f(x_0) − f(x_0)| + 2T.   (14)

Denoting by E_h(x_0) the first term of the right hand side of (14), we obtain

    E(A^q(h, x_0) I_A) ≤ c_q ( E_h^q(x_0) + E(T^q I_A) ),   (15)

for some positive constant c_q.
Step 4. It remains to upper bound E(T^q I_A). To this aim, notice that

    E(T^q I_A) ≤ E T̃^q,   (16)

where

    T̃ = max_{h̄ ∈ H_n} { |Σ_{i=1}^{n} ḡ_{h̄}(X_i)| − M_n(h̄, 1/2) }_+.

Now, we define r(·) by

    r(u) = √( 2σ_n²(h̄)u ) + 2^{d−1} B_n(h̄) (2u)^d,   u ≥ 0,

where σ_n(h̄) and B_n(h̄) are given by (11) and (12) and d = 2 + b^{−1}. Since
h̄ ≥ h_* = n^{−1} exp(√(log n)), we have, for n large enough:

    2^{d−1} B_n(h̄) (2γ|log h̄|)^d ≤ ( δ_n / (12 √(2 max(C_2, 1/3))) ) √( 2q|log h̄| / (nh̄) ).   (17)
Moreover, we have

    √( 2σ_n²(h̄) γ |log h̄| ) ≤ √( 2q|log h̄| ( J_n(h̄) + δ_n/(6nh̄) ) ( 1 + δ_n/(12 max(C_2, 1/6)) ) )
                            ≤ √( 2q|log h̄| ( J_n(h̄) + δ_n/(3nh̄) ) ).   (18)

The last inequality comes from the fact that J_n(h̄) is upper bounded by C_2/(nh̄).
Now, using (17) and (18), we obtain:

    r(γ|log h̄|) ≤ M_n(h̄, 1/2).   (19)
Thus, performing the change of variables t = (r(u))^q and using (19), we
obtain:

    E T̃^q ≤ Σ_{h̄ ∈ H_n} ∫_0^∞ P( |Σ_{i=1}^{n} ḡ_{h̄}(X_i)| ≥ M_n(h̄, 1/2) + t^{1/q} ) dt
         ≤ C Σ_{h̄ ∈ H_n} ∫_0^∞ r'(u) r^{q−1}(u) P( |Σ_{i=1}^{n} ḡ_{h̄}(X_i)| ≥ r(γ|log h̄|) + r(u) ) du
         ≤ C Σ_{h̄ ∈ H_n} ∫_0^∞ u^{−1} λ^q(u) P( |Σ_{i=1}^{n} ḡ_{h̄}(X_i)| ≥ λ(γ|log h̄| + u) ) du,

where λ(·) is defined by (10). Using Lemma 3, we obtain

    E T̃^q ≤ C Σ_{h̄ ∈ H_n} ∫_0^∞ u^{−1} ( σ_n²(h̄)u + B_n(h̄)u³ )^q exp( −u/2 − γ|log h̄|/2 ) du
         ≤ C Σ_{h̄ ∈ H_n} σ_n^q(h̄) h̄^{γ/2}.

Using the definitions of the quantities that appear in this equation and using
(16), we readily obtain:

    E(T^q I_A) ≤ C n^{−q/2} Σ_{k ∈ N} ( 2^{−(γ−q)/2} )^k
             ≤ C n^{−q/2} / ( 1 − exp( −(q δ_n log 2)/(24 max(C_2, 1/6)) ) )
             ≤ C n^{−q/2} √(log n)
             ≤ C (nh)^{−q/2}.   (20)
Step 5. Lemma 2 implies that:

    E|f̂_h(x_0) − f(x_0)|^q ≤ c_q ( |E_h(x_0)|^q + (nh)^{−q/2} )   (21)

for some positive constant c_q.

Combining (13), (15), (20) and (21), we have:

    Γ_1 ≤ C* min_{h ∈ H_n} { max_{h̄ ≤ h, h̄ ∈ H_n} |K_{h̄} ⋆ f(x_0) − f(x_0)|^q + ( |log h| / (nh) )^{q/2} },   (22)

where C* is a positive constant that depends only on a, b, c, B, C and K.
Step 6. Using Lemma 3 with K replaced by K² in the definition of ḡ_h, we obtain that

    P( | (h/n) Σ_{i=1}^{n} K_h²(x_0 − X_i) − h ∫ K_h²(x_0 − x) f(x) dx | > δ_n/2 ) ≤ exp(−C_1 n²h²),

where C_1 is a constant that depends only on a, b, c, B, C and δ. This
implies that

    P(A^c) ≤ ( log n / log 2 ) exp( −C_1 exp(2√(log n)) )

and then

    Γ_2 = o( n^{−q/2} ).   (23)

Now, using (22) and (23), Theorem 1 follows.
A Proof of Lemma 1
In order to prove this lemma, we derive two different bounds for the term

    Υ_h(u, v) = |Cov( Π_{k=1}^{u} ḡ_h(X_{i_k}), Π_{m=1}^{v} ḡ_h(X_{j_m}) )|.

The first bound is obtained by a direct calculation, whereas the second one
is obtained thanks to the dependence structure of the observations. For the
sake of readability, we denote ℓ = u + v throughout this proof.
Direct bound. The proof of this bound is composed of two steps. First, we
assume that ℓ = 2; then the general case ℓ ≥ 3 is considered.

Assume that ℓ = 2. If Assumptions 1 and 3 are fulfilled, we have

    n² Υ_h(u, v) ≤ |E(g_h(X_i) g_h(X_j))| + (E g_h(X_1))² ≤ (C + B²) ‖g_h‖_1² ≤ C_1.

Then, we have

    |Cov(ḡ_h(X_i), ḡ_h(X_j))| ≤ C_1 n^{−2}.   (24)

Let us now assume that ℓ ≥ 3. Without loss of generality, we can assume
that u ≥ 2 and v ≥ 1. We have:

    Υ_h(u, v) ≤ A + B,

where

    A = |E( Π_{k=1}^{u} ḡ_h(X_{i_k}) Π_{m=1}^{v} ḡ_h(X_{j_m}) )|   and   B = |E( Π_{k=1}^{u} ḡ_h(X_{i_k}) )| · |E( Π_{m=1}^{v} ḡ_h(X_{j_m}) )|.

Remark that both A and B can be bounded, using (24), by

    ‖ḡ_h‖_∞^{(u−2)+v} |Cov(ḡ_h(X_{i_1}), ḡ_h(X_{i_2}))| ≤ ( C_3/(nh) )^{ℓ−2} · C_1/n².

This implies our first bound, for all ℓ ≥ 2:

    Υ_h(u, v) ≤ (2C_1/n²) ( C_3/(nh) )^{ℓ−2}.   (25)
Structural bound. Using Assumption 2, we obtain that

    Υ_h(u, v) ≤ Ψ( u, v, ḡ_h^{⊗u}, ḡ_h^{⊗v} ) ρ_r.

Now using that

    ‖ (nhḡ_h/C_3)^{⊗u} ‖_∞ ≤ 1

and

    Lip( (nhḡ_h/C_3)^{⊗u} ) ≤ Lip( nhḡ_h/C_3 ) ≤ L/(C_3 h),

we obtain, since h ≤ h*, that

    Υ_h(u, v) ≤ ( C_3/(nh) )^{ℓ} Φ(u, v) ( C_4/(C_3² h²) ) ρ_r.

This implies that

    Υ_h(u, v) ≤ (1/n²) ( C_3/(nh) )^{ℓ−2} ( C_4/h⁴ ) Φ(u, v) ρ_r.   (26)

Conclusion. Now combining (25) and (26) we obtain:

    Υ_h(u, v) ≤ (1/n²) ( C_3/(nh) )^{ℓ−2} (2C_1)^{3/4} ( (C_4/h⁴) Φ(u, v) ρ_r )^{1/4}
             ≤ ( C_5/(n²h) ) ( C_3/(nh) )^{ℓ−2} Φ(u, v) ρ_r^{1/4}.

This proves Lemma 1.

B Proof of Lemma 2
The proof of this result can be readily adapted from the proof of Theorem 1 in
Doukhan and Louhichi (2001) (using arguments similar to the ones used in the
proof of Lemma 1). The only thing to do is to bound explicitly the term

    A_2(ḡ_h) = E( Σ_{i=1}^{n} ḡ_h(X_i) )².

Set R = h^{−1/4}. Remark that

    A_2(ḡ_h) = n E ḡ_h(X_1)² + Σ_{i ≠ j} E ḡ_h(X_i) ḡ_h(X_j)
            ≤ J_n(h) + 2 Σ_{i=1}^{n−1} Σ_{r=1}^{n−i} |E ḡ_h(X_i) ḡ_h(X_{i+r})|.

Using Lemma 1 and (24), we obtain:

    A_2(ḡ_h) ≤ J_n(h) + 2n Σ_{r=1}^{R} C_1/n² + 2D_2(h) Σ_{r=R+1}^{n−1} (n − r) Φ(1, 1) ρ_r^{1/4}
            ≤ J_n(h) + (1/(nh)) ( (2C_1) h^{3/4} + (6C_5) Σ_{r=R+1}^{∞} ρ_r^{1/4} )
            ≤ J_n(h) + (1/(nh)) ( 2C_1/(log n)^{3/4} + (6C_5) Σ_{r=1+(log n)^{1/4}}^{∞} ρ_r^{1/4} ).

The last inequality holds since h ≤ h* ≤ (log n)^{−1}. Using Assumption 2, there
exists N_0 = N_0(K, B, C, a, b, c) such that, for any n ≥ N_0, we have:

    A_2(ḡ_h) ≤ J_n(h) + δ_n/(6nh).

This equation, combined with the fact that J_n(h) ≤ C_2 (nh)^{−1}, completes the
proof.
C Proof of Lemma 3
First, let us remark that Lemma 6.2 in Gannaz and Wintenberger (2010)
and Assumption 2 imply that there exist positive constants L_1 and L_2 (that
depend on a, b and c) such that, for any k ∈ N, we have

    Σ_{r ∈ N} (1 + r)^k ρ_r^{1/4} ≤ L_1 L_2^k (k!)^{1/b}.

This implies that, using Lemma 2, one can apply the Bernstein-type inequality
obtained by Doukhan and Neumann (2007, see Theorem 1). First, remark
that, using Lemma 2, for n large enough, we have

    E( Σ_{i=1}^{n} ḡ_h(X_i) )² ≤ σ_n²(h)   and   B_n(h) = 2L_2 C_3/(nh),

which matches the theoretical expression of B_n(h) given in Doukhan and Neumann
(2007). Let us now denote d = 2 + b^{−1}. We obtain:

    P( |Σ_{i=1}^{n} ḡ_h(X_i)| ≥ u ) ≤ exp( − (u²/2) / ( σ_n²(h) + B_n^{1/d}(h) u^{(2d−1)/d} ) ).

Now, let us remark that, on the one hand, λ(t) ≥ σ_n(h)√(2t) and thus

    λ²(t) ≥ 2σ_n²(h)t.

On the other hand, λ^{1/d}(t) ≥ 2B_n^{1/d}(h)t and thus

    λ²(t) ≥ ( 2B_n^{1/d}(h)t ) λ^{(2d−1)/d}(t).

This implies that λ²(t) ≥ t( σ_n²(h) + B_n^{1/d}(h) λ^{(2d−1)/d}(t) ) and thus, finally:

    exp( − (λ²(t)/2) / ( σ_n²(h) + B_n^{1/d}(h) λ^{(2d−1)/d}(t) ) ) ≤ exp(−t/2).

This implies the result.
D Acknowledgments
The authors have been supported by Fondecyt project 1141258. Karine
Bertin has been supported by the grant Anillo ACT-1112 CONICYT-PIA,
Mathamsud project 16-MATH-03 SIDRE and ECOS project C15E05.
References

K. Bertin, C. Lacour, and V. Rivoirard. Adaptive estimation of conditional density function. To appear in Annales de l'Institut Henri Poincaré. Probabilités et Statistiques, 2014. URL http://arxiv.org/abs/1312.7402.

C. Butucea. Two adaptive rates of convergence in pointwise density estimation. Math. Methods Statist., 9(1):39–64, 2000. ISSN 1066-5307.

F. Comte and V. Genon-Catalot. Convolution power kernels for density estimation. J. Statist. Plann. Inference, 142(7):1698–1715, 2012. ISSN 0378-3758. doi: 10.1016/j.jspi.2012.02.038. URL http://dx.doi.org/10.1016/j.jspi.2012.02.038.

Fabienne Comte and Florence Merlevède. Adaptive estimation of the stationary density of discrete and continuous time mixing processes. ESAIM Probab. Statist., 6:211–238, 2002. ISSN 1292-8100. doi: 10.1051/ps:2002012. URL http://dx.doi.org/10.1051/ps:2002012. New directions in time series analysis (Luminy, 2001).

Jérôme Dedecker, Paul Doukhan, Gabriel Lang, José Rafael León R., Sana Louhichi, and Clémentine Prieur. Weak dependence: with examples and applications, volume 190 of Lecture Notes in Statistics. Springer, New York, 2007. ISBN 978-0-387-69951-6.

Paul Doukhan and Sana Louhichi. A new weak dependence condition and applications to moment inequalities. Stochastic Process. Appl., 84(2):313–342, 1999. ISSN 0304-4149. doi: 10.1016/S0304-4149(99)00055-1. URL http://dx.doi.org/10.1016/S0304-4149(99)00055-1.

Paul Doukhan and Sana Louhichi. Functional estimation of a density under a new weak dependence condition. Scand. J. Statist., 28(2):325–341, 2001. ISSN 0303-6898. doi: 10.1111/1467-9469.00240. URL http://dx.doi.org/10.1111/1467-9469.00240.

Paul Doukhan and Michael H. Neumann. Probability and moment inequalities for sums of weakly dependent random variables, with applications. Stochastic Process. Appl., 117(7):878–903, 2007. ISSN 0304-4149. doi: 10.1016/j.spa.2006.10.011. URL http://dx.doi.org/10.1016/j.spa.2006.10.011.

Paul Doukhan and Olivier Wintenberger. An invariance principle for weakly dependent stationary general models. Probab. Math. Statist., 27(1):45–73, 2007. ISSN 0208-4147.

M. Doumic, M. Hoffmann, P. Reynaud-Bouret, and V. Rivoirard. Nonparametric estimation of the division rate of a size-structured population. SIAM J. Numer. Anal., 50(2):925–950, 2012. ISSN 0036-1429. doi: 10.1137/110828344. URL http://dx.doi.org/10.1137/110828344.

Irène Gannaz and Olivier Wintenberger. Adaptive density estimation under weak dependence. ESAIM Probab. Stat., 14:151–172, 2010. ISSN 1292-8100. doi: 10.1051/ps:2008025. URL http://dx.doi.org/10.1051/ps:2008025.

A. Goldenshluger and O. Lepski. On adaptive minimax density estimation on R^d. Probab. Theory Related Fields, 159(3-4):479–543, 2014. ISSN 0178-8051. doi: 10.1007/s00440-013-0512-1. URL http://dx.doi.org/10.1007/s00440-013-0512-1.

Alexander Goldenshluger and Oleg Lepski. Universal pointwise selection rule in multivariate function estimation. Bernoulli, 14(4):1150–1190, 2008. ISSN 1350-7265. doi: 10.3150/08-BEJ144. URL http://dx.doi.org/10.3150/08-BEJ144.

Alexander Goldenshluger and Oleg Lepski. Bandwidth selection in kernel density estimation: oracle inequalities and adaptive minimax optimality. Ann. Statist., 39(3):1608–1632, 2011. ISSN 0090-5364. doi: 10.1214/11-AOS883. URL http://dx.doi.org/10.1214/11-AOS883.

Rafael Hasminskii and Ildar Ibragimov. On density estimation in the view of Kolmogorov's ideas in approximation theory. Ann. Statist., 18(3):999–1010, 1990. ISSN 0090-5364. doi: 10.1214/aos/1176347736. URL http://dx.doi.org/10.1214/aos/1176347736.

N. Klutchnikoff. Pointwise adaptive estimation of a multivariate function. Math. Methods Statist., 23(2):132–150, 2014. ISSN 1066-5307. doi: 10.3103/S1066530714020045. URL http://dx.doi.org/10.3103/S1066530714020045.

O. V. Lepski. A problem of adaptive estimation in Gaussian white noise. Teor. Veroyatnost. i Primenen., 35(3):459–470, 1990. ISSN 0040-361X. doi: 10.1137/1135065. URL http://dx.doi.org/10.1137/1135065.

Florence Merlevède, Magda Peligrad, and Emmanuel Rio. Bernstein inequality and moderate deviations under strong mixing conditions. In High dimensional probability V: the Luminy volume, volume 5 of Inst. Math. Stat. Collect., pages 273–292. Inst. Math. Statist., Beachwood, OH, 2009. doi: 10.1214/09-IMSCOLL518. URL http://dx.doi.org/10.1214/09-IMSCOLL518.

Nicolas Ragache and Olivier Wintenberger. Convergence rates for density estimators of weakly dependent time series. In Dependence in probability and statistics, volume 187 of Lecture Notes in Statist., pages 349–372. Springer, New York, 2006. doi: 10.1007/0-387-36062-X_16. URL http://dx.doi.org/10.1007/0-387-36062-X_16.

Gilles Rebelles. Pointwise adaptive estimation of a multivariate density under independence hypothesis. Bernoulli, 21(4):1984–2023, 2015. ISSN 1350-7265. doi: 10.3150/14-BEJ633. URL http://dx.doi.org/10.3150/14-BEJ633.

Emmanuel Rio. Théorie asymptotique des processus aléatoires faiblement dépendants, volume 31 of Mathématiques & Applications (Berlin). Springer-Verlag, Berlin, 2000. ISBN 3-540-65979-X.

M. Rosenblatt. A central limit theorem and a strong mixing condition. Proc. Nat. Acad. Sci. U. S. A., 42:43–47, 1956. ISSN 0027-8424.

Karine Tribouley and Gabrielle Viennet. Lp adaptive density estimation in a β-mixing framework. Ann. Inst. H. Poincaré Probab. Statist., 34(2):179–208, 1998. ISSN 0246-0203. doi: 10.1016/S0246-0203(98)80029-0. URL http://dx.doi.org/10.1016/S0246-0203(98)80029-0.

Alexandre B. Tsybakov. Introduction to nonparametric estimation. Springer Series in Statistics. Springer, New York, 2009. ISBN 978-0-387-79051-0. doi: 10.1007/b13794. URL http://dx.doi.org/10.1007/b13794. Revised and extended from the 2004 French original, translated by Vladimir Zaiats.
The Spin Group in Superspace
Hennie De Schepper, Alí Guzmán Adán, Frank Sommen
Clifford Research Group, Department of Mathematical Analysis, Faculty of Engineering and
Architecture, Ghent University, Krijgslaan 281, 9000 Gent, Belgium.
arXiv:1804.00963v1 [] 3 Apr 2018
Abstract
There are two well-known ways of describing elements of the rotation group SO(m). First,
according to the Cartan-Dieudonné theorem, every rotation matrix can be written as an even
number of reflections. And second, they can also be expressed as the exponential of some
anti-symmetric matrix.
In this paper, we study similar descriptions of the corresponding extension of SO(m) to
superspace. The setting is natural to describe the behavior of bosonic and fermionic particles.
This group of super-rotations SO0 is also an extension of the symplectic group. While still
being connected, it is thus no longer compact. As a consequence, it cannot be fully described
by just one action of the exponential map on its Lie algebra. Instead, we obtain an Iwasawa-type decomposition for this group in terms of three exponentials acting on three direct
summands of the corresponding Lie algebra of supermatrices.
At the same time, SO0 strictly contains the group generated by super-vector reflections.
Therefore, its Lie algebra is isomorphic to a certain extension of the algebra of superbivectors.
This means that the Spin group in superspace has to be seen as the group generated by the
exponentials of the so-called extended superbivectors in order to cover SO0 . We also study
the actions of this Spin group on supervectors and provide a proper subset of it that is a
double cover of SO0 . Finally, we show that every fractional Fourier transform in n bosonic
dimensions can be seen as an element of the spin group in superspace.
Keywords. Spin groups, symplectic groups, Clifford analysis, superspace
Mathematics Subject Classification (2010). 30G35 (22E60)
1 Introduction
Mathematical analysis on superspace (super analysis) is a very important tool in the study of several branches of contemporary theoretical physics, such as supergravity or superstring theories. It was introduced by Berezin, see [1], and has had a huge mathematical impact, since it breaks several traditional patterns of classical analysis, such as the commutativity of coordinate variables and the positivity of dimensions.
Indeed, elements in superspace are defined by means of bosonic (commuting) variables xj and fermionic
(anti-commuting) variables x`j , where, in a natural way, the bosonic variables describe positive dimensions
while the fermionic ones describe negative dimensions.
Generalizations of some objects from classical analysis appear in superspace with a wider nature.
For example, the notion of a supervector variable is introduced as a variable that takes values in some
Grassmann envelope and, through the corresponding definitions of bosonic and fermionic partial derivatives ∂xj and ∂x`j , the supervector derivative (supergradient or super Dirac operator) can be introduced
as well. These analogies lead to the introduction of Clifford analysis in superspace as it was done in the
papers [2, 3, 13, 14, 16].
A proper extension of Clifford analysis to this setting needs a set of rules (or axioms) that guarantee
the preservation of the Clifford nature in the new objects, although they can have a very different behavior
compared to the classical ones. This set of rules is provided by the notion of a radial algebra, which is an
abstract setting that algebraically describes all main properties which any Clifford environment should
satisfy, see [3, 5, 12, 15]. This approach does not depend on an a priori defined vector space or signature.
In particular, the main requirement is that the anticommutator {x, y} of every pair of vector variables is a commuting element, leading to the definition of an abstract inner product by the formula

⟨x, y⟩ = −(1/2){x, y}.    (1)
However this abstract description leaves some issues unclarified. In particular, it is not possible to describe
the whole set of linear transformations leaving the above bilinear form invariant and in consequence, there
is no proper definition for the spin group in this general abstract setting. We can, of course, define vector
reflections by ψ(w)[x] = wxw where w, x are abstract vector variables. Yet this doesn’t enable us to
prove that all linear transformations leaving (1) invariant can be written as the composition of vector
reflections. In order to find the set of those transformations it is necessary to work with a concrete
representation of the radial algebra, i.e., an underlying vector space V where the vector variables are
defined endowed with a fixed bilinear form and signature.
For example, in the Clifford polynomial representation of the radial algebra (see [5, 12]), we have the
Euclidean space V = Rm where the most important group leaving the inner product (1) invariant is the
set of rotations SO(m). The spin group appears in this setting as a double cover of SO(m) given by
Spin(m) := { ∏_{j=1}^{2k} wj : k ∈ N, wj ∈ Sm−1 },
where Sm−1 = {w ∈ Rm : w2 = −1} denotes the unit sphere in Rm . The relation between Spin(m) and
SO(m) is easily seen through the Lie group representation h : Spin(m) → SO(m)
h(s)[x] = sxs,
s ∈ Spin(m), x ∈ Rm ,
which describes the action of every element of SO(m) in terms of Clifford multiplication. Such a description of the spin group follows from the Cartan-Dieudonné theorem which states that every orthogonal
transformation in an m-dimensional symmetric bilinear space can be written as the composition of at
most m reflections.
The situation may however not be the same in another representation of the radial algebra where
the Cartan-Dieudonné theorem is no longer valid. In this paper we deal with one of those cases: the
representation in superspace. Our main goal is to properly define the (super) spin group as a set of
elements describing every superrotation through Clifford multiplication in superspace. To that end, we
consider vector variables taking values in the Grassmann envelope Rm|2n (ΛN ). This makes it possible
to study the group of supermatrices leaving (1) invariant and to define in a proper way the spin group
in superspace. It is worth noticing that the superstructures are absorbed by the Grassmann algebras
leading to classical Lie groups and Lie algebras instead of supergroups or superalgebras.
The paper is organized as follows. We start with some preliminaries on Grassmann algebras, Grassmann envelopes and supermatrices in Section 2. In particular, we carefully recall the notion of an
exponential map for Grassmann numbers and supermatrices as elements of finite dimensional associative
algebras. In Section 3 we briefly describe the Clifford setting in superspace leading to the introduction of
the Lie algebra of superbivectors. An extension of this algebra plays an important rôle in the description
of the super spin group. The use of the exponential map in such an extension (which takes us out of the
radial algebra) necessitates the introduction of the corresponding tensor algebra. Section 4 is devoted to
the study of the invariance of the “inner product” (1) in this setting. There, we study several groups of
supermatrices and in particular, the group of superrotations SO0 and its Lie algebra so0 , which combine
both orthogonal and symplectic structures. We prove that every superrotation can be uniquely decomposed as the product of three exponentials acting in some special subspaces of so0. Finally, in Section 5
we study the problem of defining the spin group in this setting and its differences with the classical case.
We show that the compositions of even numbers of vector reflections are not enough to fully describe
SO0 since they only show an orthogonal structure and don’t include the symplectic part of SO0 . Next
we propose an alternative, by defining the spin group through the exponential of extended superbivectors
and show that they indeed cover the whole set of superrotations. In particular, we explicitly describe a
subset S which is a double covering of SO0 and contains in particular every fractional Fourier transform.
2 Preliminaries

2.1 Grassmann algebras and Grassmann envelopes
Let ΛN be the Grassmann algebra of order N ∈ N over the field K (K = R or C) with canonical generators
f1 , . . . , fN which are subject to the multiplication rules
fj fk + fk fj = 0,    (2)
implying in particular that fj² = 0. Every element of ΛN can be written in the form

a = ∑_{A⊂{1,...,N}} aA fA,    (3)
where aA ∈ K and fA = fj1 · · · fjk, for A = {j1, . . . , jk}, with 1 ≤ j1 < . . . < jk ≤ N; here we put f∅ = 1.
We define the space of homogeneous elements of degree k by ΛN^(k) = spanK{fA : |A| = k}, where in particular ΛN^(k) = {0} for k > N. It then easily follows from (2)-(3) that

ΛN = ⊕_{k=0}^{N} ΛN^(k)    and    ΛN^(k) ΛN^(ℓ) ⊂ ΛN^(k+ℓ).
The projection of ΛN on its k-homogeneous part is denoted by [·]k : ΛN → ΛN^(k), i.e. [a]k = ∑_{|A|=k} aA fA. In particular we denote [a]0 = a∅ =: a0. It is well known that ΛN shows a natural Z2-grading. In fact, defining ΛN,0 = ⊕_{k≥0} ΛN^(2k) and ΛN,1 = ⊕_{k≥0} ΛN^(2k+1) as the spaces of homogeneous even and odd elements respectively, we obtain the superalgebra structure ΛN = ΛN,0 ⊕ ΛN,1, where ΛN,j ΛN,k ⊂ ΛN,j+k for j, k ∈ Z2. We recall that ΛN is graded commutative in the sense that

vw = wv,    vw` = w`v,    v`w` = −w`v`,    v, w ∈ ΛN,0, v`, w` ∈ ΛN,1.

Any element a ∈ ΛN is the sum of a number a0 ∈ ΛN^(0) = K and a nilpotent element. In fact, every a ∈ ΛN^+ := ⊕_{k=1}^{N} ΛN^(k) satisfies a^{N+1} = 0. It is easily seen that the projection [·]0 : ΛN → K is an algebra homomorphism, i.e. [ab]0 = a0 b0 for a, b ∈ ΛN. In particular the following property holds.
Lemma 1. Let a ∈ ΛN such that a2 ∈ K \ {0}. Then a ∈ K.
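The sign bookkeeping behind fj fk = −fk fj can be made concrete with a minimal model of ΛN (an illustrative sketch, not from the paper): an element is a dict mapping sorted index tuples to coefficients, and multiplication sorts the concatenated index tuples while counting transpositions:

```python
def gmul(x, y):
    """Product in Lambda_N for elements {sorted index tuple: coefficient}."""
    out = {}
    for A, ca in x.items():
        for B, cb in y.items():
            if set(A) & set(B):        # repeated generator: f_j f_j = 0
                continue
            # each pair (a in A, b in B) with b < a costs one transposition
            sign = (-1) ** sum(b < a for a in A for b in B)
            key = tuple(sorted(A + B))
            out[key] = out.get(key, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v != 0}

f1, f2 = {(1,): 1}, {(2,): 1}
assert gmul(f1, f1) == {}                        # f_j^2 = 0
assert gmul(f2, f1) == {(1, 2): -1}              # f2 f1 = -f1 f2

v, w = {(): 1, (1, 2): 2}, {(): 3, (1, 2): -1}   # even elements commute
assert gmul(v, w) == gmul(w, v)

alpha, beta = {(1,): 1}, {(2,): 1}               # odd elements anticommute
assert gmul(alpha, beta) == {k: -c for k, c in gmul(beta, alpha).items()}

a, b = {(): 2, (1,): 1}, {(): 3, (2,): 5}
assert gmul(a, b).get((), 0) == 2 * 3            # [ab]_0 = a_0 b_0
```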
It sometimes is useful to consider ΛN embedded in a Grassmann algebra of higher order; in particular, we consider ΛN ⊂ ΛN+1 := AlgK{f1, . . . , fN, fN+1}, with fN+1 a new canonical Grassmann generator. This embedding preserves all grading structures, since ΛN^(k) ⊂ ΛN+1^(k) and ΛN,j ⊂ ΛN+1,j for j ∈ Z2.
The algebra ΛN is a K-vector space of dimension 2^N. As every finite dimensional K-vector space, ΛN becomes a Banach space with the introduction of an arbitrary norm, all norms being equivalent. In particular, the following result is obtained through straightforward computation.

Lemma 2. The norm ‖·‖1 defined on ΛN by ‖a‖1 = ∑_{A⊂{1,...,N}} |aA| satisfies

‖ab‖1 ≤ ‖a‖1 ‖b‖1,    (4)

for every a, b ∈ ΛN.
The exponential of a ∈ ΛN, denoted by e^a or exp(a), is defined by the power series

e^a = ∑_{j=0}^{∞} a^j/j!.    (5)
Proposition 1. The series (5) converges for every a ∈ ΛN and ea is a continuous function in ΛN .
Proof.
From (4) it follows that

∑_{j=0}^{∞} ‖a^j‖1/j! ≤ ∑_{j=0}^{∞} ‖a‖1^j/j! = e^{‖a‖1},
whence (5) (absolutely) converges in the Banach space ΛN. Now consider BR := {a ∈ ΛN : ‖a‖1 ≤ R} for some R > 0, where it holds that

‖a^j‖1/j! ≤ ‖a‖1^j/j! ≤ R^j/j!,

and ∑_{j=0}^{∞} R^j/j! converges, whence from the Weierstrass M-test we have that ∑_{j=0}^{∞} a^j/j! uniformly converges in BR and e^a thus is continuous in BR. Then, e^a is continuous in ΛN.
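For a nilpotent a ∈ ΛN^+ the series (5) is in fact a finite sum, since a^{N+1} = 0. A small illustrative sketch (not from the paper; it reuses a dict representation of Grassmann elements) computes exp on such elements and checks e^a e^{−a} = 1:

```python
# Grassmann elements as {sorted index tuple: coefficient}; for nilpotent a
# the exponential series 1 + a + a^2/2! + ... terminates.
def gmul(x, y):
    out = {}
    for A, ca in x.items():
        for B, cb in y.items():
            if set(A) & set(B):
                continue
            sign = (-1) ** sum(b < a for a in A for b in B)
            key = tuple(sorted(A + B))
            out[key] = out.get(key, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if abs(v) > 1e-15}

def gexp(a):
    assert a.get((), 0) == 0, "implemented for nilpotent elements only"
    result, term, j = {(): 1.0}, {(): 1.0}, 0
    while True:
        j += 1
        term = {k: v / j for k, v in gmul(term, a).items()}  # now a^j / j!
        if not term:                                         # a^j = 0: done
            return result
        for k, v in term.items():
            result[k] = result.get(k, 0) + v

a = {(1,): 1.0, (1, 2): 2.0, (3, 4): -1.0}     # nilpotent element of Lambda_4
prod = gmul(gexp(a), gexp({k: -v for k, v in a.items()}))
assert abs(prod.get((), 0) - 1) < 1e-12        # e^a e^{-a} = 1
assert all(abs(v) < 1e-12 for k, v in prod.items() if k != ())
```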
Now consider the graded vector space Kp|q with standard homogeneous basis e1, . . . , ep, e`1, . . . , e`q, i.e. Kp|q = Kp|0 ⊕ K0|q where {e1, . . . , ep} is a basis for Kp|0 and {e`1, . . . , e`q} is a basis for K0|q. Elements in Kp|0 are called even homogeneous elements while elements in K0|q are called odd homogeneous elements. The Grassmann envelope Kp|q(ΛN) is defined as the set of formal linear combinations of the form

x = x + x` = ∑_{j=1}^{p} xj ej + ∑_{j=1}^{q} x`j e`j,    where xj ∈ ΛN,0, x`j ∈ ΛN,1.    (6)
Then Kp|q(ΛN) is a K-vector space of dimension 2^{N−1}(p + q), inheriting the Z2-grading of Kp|q, i.e. Kp|q(ΛN) = Kp|0(ΛN) ⊕ K0|q(ΛN), where Kp|0(ΛN) denotes the subspace of vectors of the form (6) with x`j = 0, and K0|q(ΛN) denotes the subspace of vectors of the form (6) with xj = 0. The subspaces Kp|0(ΛN) and K0|q(ΛN) are called the Grassmann envelopes of Kp|0 and K0|q, respectively.

In Kp|q(ΛN), there exists a subspace which is naturally isomorphic to Kp|0. It consists of vectors of the form (6) x = ∑_{j=1}^{p} xj ej with xj ∈ K. The map [·]0 : Kp|q(ΛN) → Kp|0 defined by [x]0 = ∑_{j=1}^{p} [xj]0 ej will be useful.

The standard basis of Kp|q can be represented by the columns ej = (0, . . . , 1, . . . , 0)^T (1 on the j-th place from the left) and e`j = (0, . . . , 1, . . . , 0)^T (1 on the (p+j)-th place from the left). In this basis, elements of Kp|q(ΛN) take the form x = (x1, . . . , xp, x`1, . . . , x`q)^T.
2.2 Supermatrices
The Z2-grading of Kp|q yields the Z2-grading of the space End Kp|q of endomorphisms on Kp|q. This space is isomorphic to the space Mat(p|q) of block matrices of the form

M = [[A, B`], [C`, D]] = [[A, 0], [0, D]] + [[0, B`], [C`, 0]]    (7)

where¹ A ∈ Kp×p, B` ∈ Kp×q, C` ∈ Kq×p and D ∈ Kq×q. The first term in (7) is the even part of M and the second term is the odd one. The Grassmann envelope of Mat(p|q) is denoted by Mat(p|q)(ΛN) and consists of matrices of the form (7) but with entries in ΛN (namely, A, D with even entries and B`, C` with odd entries). Elements in Mat(p|q)(ΛN) are called supermatrices.
The Z2-grading of Mat(p|q)(ΛN), inherited from Mat(p|q), together with the usual matrix multiplication, provides a superalgebra structure on this Grassmann envelope. More precisely, for any k ∈ N let Mat(p|q)(ΛN^(k)) be the space of all homogeneous supermatrices of degree k. That is, Mat(p|q)(ΛN^(2k)) consists of all diagonal block matrices with entries in ΛN^(2k), and Mat(p|q)(ΛN^(2k+1)) consists of all off-diagonal block matrices with entries in ΛN^(2k+1). These subspaces define a grading in Mat(p|q)(ΛN) by

Mat(p|q)(ΛN) = ⊕_{k=0}^{N} Mat(p|q)(ΛN^(k)),    and    Mat(p|q)(ΛN^(k)) Mat(p|q)(ΛN^(ℓ)) ⊂ Mat(p|q)(ΛN^(k+ℓ)).
Then, it is clear that every supermatrix M can be written as the sum of a numeric matrix M0 ∈ Mat(p|q)(ΛN^(0)) and a nilpotent supermatrix M ∈ Mat(p|q)(ΛN^+) := ⊕_{k=1}^{N} Mat(p|q)(ΛN^(k)). In accordance with the general ideas valid for Grassmann algebras and Grassmann envelopes we define the algebra homomorphism [·]0 : Mat(p|q)(ΛN) → Mat(p|q)(ΛN^(0)) as the projection

M = [[A, B`], [C`, D]] → [[A0, 0], [0, D0]] = M0 = [M]0,

where A0 and D0 are the numeric projections of A and D on Kp×p and Kq×q respectively. We recall that Mat(p|q)(ΛN^(0)) is equal to the even subalgebra of Mat(p|q). Given a set of supermatrices S we define [S]0 = {[M]0 : M ∈ S}.

¹ Given a set S, we use the notation Sp×q to refer to the set of matrices of order p × q with entries in S.
Every supermatrix M defines a linear operator over Kp|q(ΛN) which acts on a vector x = x + x`, x ∈ Kp|0(ΛN), x` ∈ K0|q(ΛN) as

M x = [[A, B`], [C`, D]] (x; x`) = (Ax + B`x`; C`x + Dx`) ∈ Kp|q(ΛN).

However, the supermatrix M ∈ Mat(p|q)(ΛN) defining an endomorphism of Kp|q(ΛN) is not unique. In fact, the zero endomorphism is defined by every supermatrix in the ideal HN ⊂ Mat(p|q)(ΛN) given by

HN = { L = [[0, B`], [0, 0]] : L ∈ Mat(p|q)(ΛN^(N)) }    or    HN = { L = [[0, 0], [0, D]] : L ∈ Mat(p|q)(ΛN^(N)) },

for N odd or even respectively. Hence, all supermatrices in M + HN define the same endomorphism over
Kp|q (ΛN ) as M . But this situation changes if we consider endomorphisms over Kp|q (ΛN +1 ) defined by
supermatrices with entries in ΛN .
Lemma 3. Let E be an endomorphism over Kp|q (ΛN +1 ). If E admits a supermatrix representation in
Mat(p|q)(ΛN ), then this representation is unique in Mat(p|q)(ΛN ).
This result is easily proved using the introduction of a new canonical Grassmann generator fN +1 to
show that an element a ∈ ΛN satisfies afN +1 = 0 if and only if a = 0.
To study group structures in Mat(p|q)(ΛN ) we start from the Lie group GL(p|q)(ΛN ) of all invertible
elements of Mat(p|q)(ΛN ). The following is a well-known characterization of this group, see [1].
Theorem 1. Let M = [[A, B`], [C`, D]] ∈ Mat(p|q)(ΛN). Then the following statements are equivalent.

(i) M ∈ GL(p|q)(ΛN).
(ii) A, D are invertible.
(iii) A0, D0 are invertible.

In addition, for every M ∈ GL(p|q)(ΛN) its inverse is

M^{−1} = [[ (A − B`D^{−1}C`)^{−1} , −A^{−1}B`(D − C`A^{−1}B`)^{−1} ],
          [ −D^{−1}C`(A − B`D^{−1}C`)^{−1} , (D − C`A^{−1}B`)^{−1} ]].
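The block formula of Theorem 1 can be sanity-checked in the degenerate purely numeric case (p = q = 1 with all entries ordinary commuting scalars, so the grading plays no role); the illustrative numbers below are not from the paper:

```python
# Scalar stand-in for the block-inverse formula: entries a, b, c, d are
# ordinary commuting numbers, so this only exercises the Schur-complement algebra.
a, b, c, d = 2.0, 1.0, 3.0, 5.0

inv11 = 1 / (a - b * (1 / d) * c)
inv12 = -(1 / a) * b * (1 / (d - c * (1 / a) * b))
inv21 = -(1 / d) * c * (1 / (a - b * (1 / d) * c))
inv22 = 1 / (d - c * (1 / a) * b)

# M * M^{-1} = I for M = [[a, b], [c, d]]
assert abs(a * inv11 + b * inv21 - 1) < 1e-12
assert abs(a * inv12 + b * inv22) < 1e-12
assert abs(c * inv11 + d * inv21) < 1e-12
assert abs(c * inv12 + d * inv22 - 1) < 1e-12
```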
The usual definitions of transpose, trace and determinant of a matrix are not appropriate in the graded case. For example, although the transpose

M^T = [[A^T, C`^T], [B`^T, D^T]]

of a supermatrix M is a well defined element of Mat(p|q)(ΛN), the usual property (ML)^T = L^T M^T does not hold in general. This problem is fixed by introducing the supertranspose

M^{ST} = [[A^T, C`^T], [−B`^T, D^T]].
The transpose and supertranspose operations satisfy the following relations, see [1].

Proposition 2. Let M, L ∈ Mat(p|q)(ΛN), x ∈ Kp|q(ΛN), B` ∈ ΛN,1^{p×q} and C` ∈ ΛN,1^{q×p}. Then,

(i) (B`C`)^T = −C`^T B`^T,
(ii) (ML)^{ST} = L^{ST} M^{ST},
(iii) (Mx)^T = x^T M^{ST},
(iv) (M^{ST})^{ST} = [[A, −B`], [−C`, D]] = SMS, where S = [[Ip, 0], [0, −Iq]]²,
(v) (M^{−1})^{ST} = (M^{ST})^{−1} for every M ∈ GL(p|q)(ΛN).
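Property (ii) genuinely uses the anticommutativity of the odd entries; the plain transpose fails it. The following illustrative sketch (not from the paper) checks (ML)^{ST} = L^{ST} M^{ST} for 1|1 supermatrices with entries in Λ2:

```python
# 1|1 supermatrices over Lambda_2: even entries in span{1, f1 f2},
# odd entries in span{f1, f2}. Grassmann elements: {sorted tuple: coeff}.
def gmul(x, y):
    out = {}
    for A, ca in x.items():
        for B, cb in y.items():
            if set(A) & set(B):
                continue
            sign = (-1) ** sum(b < a for a in A for b in B)
            key = tuple(sorted(A + B))
            out[key] = out.get(key, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def gadd(x, y):
    out = dict(x)
    for k, v in y.items():
        out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v != 0}

def matmul(M, L):            # 2x2 matrices with (non-commuting) Grassmann entries
    return [[gadd(gmul(M[i][0], L[0][j]), gmul(M[i][1], L[1][j]))
             for j in range(2)] for i in range(2)]

def st(M):                   # supertranspose for p = q = 1: swap, negate B-block
    a, b, c, d = M[0][0], M[0][1], M[1][0], M[1][1]
    return [[a, c], [{k: -v for k, v in b.items()}, d]]

# diagonal entries even, off-diagonal entries odd (required for the identity)
M = [[{(): 2, (1, 2): 1}, {(1,): 3}], [{(2,): 1}, {(): 1, (1, 2): -2}]]
L = [[{(): 1}, {(2,): -1}], [{(1,): 2}, {(): 4}]]

assert matmul(st(L), st(M)) == st(matmul(M, L))   # (ML)^ST = L^ST M^ST
```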
The situation for the trace is similar. The usual trace tr(M) of an element M ∈ Mat(p|q)(ΛN) is well defined, but in general, tr(ML) ≠ tr(LM) for M, L ∈ Mat(p|q)(ΛN). The notion of supertrace provides a solution to this problem; it is defined as the map str : Mat(p|q)(ΛN) → ΛN,0 given by

str [[A, B`], [C`, D]] = tr(A) − tr(D).

The following properties easily follow from the above definition, see [1].

Proposition 3. Let M, L ∈ Mat(p|q)(ΛN), B` ∈ ΛN,1^{p×q} and C` ∈ ΛN,1^{q×p}. Then

(i) tr(B`C`) = −tr(C`B`),
(ii) str(ML) = str(LM),
(iii) str(M^{ST}) = str(M).
The superdeterminant or Berezinian is a function from GL(p|q)(ΛN) to ΛN,0 defined by

sdet(M) = det(A − B`D^{−1}C`)/det(D) = det(A)/det(D − C`A^{−1}B`).

Some of its most basic properties are given in the following proposition, see [1].

Proposition 4. Let M, L ∈ GL(p|q)(ΛN), then

(i) sdet(ML) = sdet(M) sdet(L),
(ii) sdet(M^{ST}) = sdet(M).
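In the purely even case B` = C` = 0 (an illustrative special case, not from the paper), sdet reduces to det(A)/det(D) and property (i) reduces to multiplicativity of the ordinary determinant; a quick numeric check with 2×2 blocks:

```python
# sdet for block-diagonal (purely even) supermatrices: sdet(M) = det(A)/det(D).
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def mul2(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A1, D1 = [[2.0, 1.0], [0.0, 3.0]], [[1.0, 4.0], [0.0, 2.0]]
A2, D2 = [[1.0, -1.0], [2.0, 5.0]], [[3.0, 0.0], [1.0, 1.0]]

sdet1 = det2(A1) / det2(D1)
sdet2 = det2(A2) / det2(D2)
sdet_prod = det2(mul2(A1, A2)) / det2(mul2(D1, D2))

assert abs(sdet_prod - sdet1 * sdet2) < 1e-12   # sdet(ML) = sdet(M) sdet(L)
```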
Since Mat(p|q)(ΛN) is a finite dimensional vector space, every two norms in this space are equivalent, whence, without loss of generality, we can define for M = {mj,k}_{j,k=1,...,p+q} ∈ Mat(p|q)(ΛN) the norm ‖M‖ = ∑_{j,k=1}^{p+q} ‖mj,k‖1, satisfying ‖ML‖ ≤ ‖M‖ ‖L‖ for every pair M, L ∈ Mat(p|q)(ΛN). Similarly to Proposition 1 the following result can be proven.

Proposition 5. For every M ∈ Mat(p|q)(ΛN) the exponential defined by

e^M = ∑_{j=0}^{∞} M^j/j!

converges absolutely. In consequence, e^M is a continuous function in Mat(p|q)(ΛN).
Also the supertranspose, the supertrace and the superdeterminant are continuous functions.
Proposition 6. Let M, L ∈ Mat(p|q)(ΛN). Then

(i) e^0 = Ip+q;
(ii) (e^M)^{ST} = e^{M^{ST}};
(iii) If ML = LM then e^{M+L} = e^M e^L;
(iv) e^M ∈ GL(p|q)(ΛN) and (e^M)^{−1} = e^{−M};
² Ik denotes the identity matrix in R^{k×k}.
(v) e^{(a+b)M} = e^{aM} e^{bM} for every pair a, b ∈ ΛN,0;
(vi) e^{CMC^{−1}} = C e^M C^{−1} for every C ∈ GL(p|q)(ΛN);
(vii) e^{tM} (t ∈ R) is a smooth curve in Mat(p|q)(ΛN), with (d/dt)e^{tM} = M e^{tM} = e^{tM} M and (d/dt)e^{tM}|_{t=0} = M;
(viii) sdet(e^M) = e^{str(M)}.
Remark 2.1. The proofs of (i)-(vii) are straightforward computations. A detailed proof for (viii) can be
found in [1]. Similar properties to (i) and (iii)-(vii) can be obtained for the exponential map in ΛN .
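Property (viii) can be sanity-checked for diagonal numeric supermatrices (p = q = 2, with illustrative values not from the paper), where e^M simply exponentiates the diagonal entries:

```python
import math

# Diagonal numeric M with even blocks A = diag(a1, a2) and D = diag(d1, d2):
# e^M = diag(e^a1, e^a2, e^d1, e^d2), so sdet(e^M) = det(e^A)/det(e^D),
# while str(M) = tr(A) - tr(D).
a1, a2, d1, d2 = 0.5, -1.2, 0.3, 2.0

sdet_exp = (math.exp(a1) * math.exp(a2)) / (math.exp(d1) * math.exp(d2))
exp_str = math.exp((a1 + a2) - (d1 + d2))

assert math.isclose(sdet_exp, exp_str, rel_tol=1e-12)   # sdet(e^M) = e^{str(M)}
```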
We also can define the notion of logarithm for a supermatrix M ∈ Mat(p|q)(ΛN) by

ln(M) = ∑_{j=1}^{∞} (−1)^{j+1} (M − Ip+q)^j/j,    (8)

wherever it converges.
Proposition 7.
(i) The series (8) converges and yields a continuous function near Ip+q.
(ii) In Mat(p|q)(ΛN), let U be a neighbourhood of Ip+q on which ln is defined and let V be a neighbourhood of 0 such that exp(V) := {e^M : M ∈ V} ⊂ U. Then e^{ln(M)} = M, ∀M ∈ U, and ln(e^L) = L, ∀L ∈ V.
Proof.
(i) Observe that

∑_{j=1}^{∞} ‖(M − Ip+q)^j‖/j ≤ ∑_{j=1}^{∞} ‖M − Ip+q‖^j/j.

Whence, since the radius of convergence of the last series is 1, (8) converges absolutely and defines a continuous function in the ball ‖M − Ip+q‖ < 1.

(ii) The statement immediately follows from the absolute convergence of the series for exp and ln, and from the formal identities e^{ln x} = ln(e^x) = x in the indeterminate x.
It is worth mentioning that the same procedure can be repeated in ΛN. With the above definitions of the exponential and logarithmic maps, it is possible to obtain all classical results known for Lie groups and Lie algebras of real and complex matrices.
The exponential of a nilpotent matrix M ∈ Mat(p|q)(ΛN^+) clearly is given by a finite sum, which yields the bijective mapping

exp : Mat(p|q)(ΛN^+) → Ip+q + Mat(p|q)(ΛN^+)

with inverse

ln : Ip+q + Mat(p|q)(ΛN^+) → Mat(p|q)(ΛN^+),

since both expansions have a finite number of non-zero terms, whence problems of convergence do not arise.

We recall that a supermatrix M belongs to GL(p|q)(ΛN) if and only if its numeric projection M0 has an inverse. Then, M = M0(Ip+q + M0^{−1}M) = M0 exp(L), for some unique L ∈ Mat(p|q)(ΛN^+).
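The finiteness of both expansions can be illustrated with an ordinary nilpotent matrix standing in for an element of Mat(p|q)(ΛN^+); here a strictly upper triangular 3×3 matrix L with L³ = 0, so exp needs only terms up to L²/2 and the logarithm series recovers L exactly (a sketch, not from the paper):

```python
# exp/ln for a nilpotent matrix: L^3 = 0, so both series are finite sums.
def mmul(x, y):
    n = len(x)
    return [[sum(x[i][k] * y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def madd(x, y, s=1.0):                       # x + s*y
    return [[x[i][j] + s * y[i][j] for j in range(len(x))]
            for i in range(len(x))]

I = [[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]]
L = [[0, 2.0, -1.0], [0, 0, 3.0], [0, 0, 0]]

L2 = mmul(L, L)
expL = madd(madd(I, L), L2, 0.5)             # I + L + L^2/2, since L^3 = 0

X = madd(expL, I, -1.0)                      # X = exp(L) - I is nilpotent too
logexpL = madd(X, mmul(X, X), -0.5)          # X - X^2/2 (+ X^3/3 = 0)

assert all(abs(logexpL[i][j] - L[i][j]) < 1e-12
           for i in range(3) for j in range(3))   # ln(exp(L)) = L
```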
3 The superspace framework with Grassmann coefficients

3.1 The Clifford setting
In order to set up the Clifford analysis framework in superspace, take p = m, q = 2n (m, n ∈ N)
and K = R. The canonical homogeneous basis e1 , . . . , em , e`1 , . . . , e`2n of Rm|2n can be endowed with an
orthogonal and a symplectic structure by the multiplication rules
ej ek + ek ej = −2δj,k ,
ej e`k + e`k ej = 0,
e`j e`k − e`k e`j = gj,k ,
where the symplectic form gj,k is defined by
g2j,2k = g2j−1,2k−1 = 0,
g2j−1,2k = −g2k,2j−1 = δj,k ,
j, k = 1, . . . , n.
Following these relations, elements in Rm|2n generate an infinite dimensional algebra denoted by Cm,2n .
A similar approach in the Grassmann envelope Rm|2n (ΛN ) leads to the definition of the algebra
Am,2n (ΛN ) = AlgR (f1 , . . . , fN , e1 , . . . , em , e`1 , . . . , e`2n ) = ΛN ⊗ Cm,2n
where it is assumed that elements of ΛN commute with elements of Cm,2n .
Every element in Am,2n(ΛN) can be written as a finite sum of terms of the form a ej1 · · · ejk e`1^{α1} · · · e`2n^{α2n}, where a ∈ ΛN, 1 ≤ j1 < . . . < jk ≤ m and (α1, . . . , α2n) ∈ (N ∪ {0})^{2n} is a multi-index. In this algebra we consider the corresponding generalization of the projection [·]0 which now goes from Am,2n(ΛN) to Cm,2n and is defined by [a ej1 · · · ejk e`1^{α1} · · · e`2n^{α2n}]0 = [a]0 ej1 · · · ejk e`1^{α1} · · · e`2n^{α2n}.
Developing a proper Clifford analysis in this framework requires a suitable realization of the radial
algebra in Am,2n (ΛN ). For a detailed treatment of the radial algebra setting we refer the reader to
[4, 5, 12, 15]. This realization in Am,2n (ΛN ) is seen through a set of vector variables taking values in the
Grassmann envelope Rm|2n (ΛN ), i.e., supervectors of the form:
x = x + x` = ∑_{j=1}^{m} xj ej + ∑_{j=1}^{2n} x`j e`j,    xj ∈ ΛN,0, x`j ∈ ΛN,1,
which satisfy the fundamental axiom of radial algebra, i.e. the anti-commutator of every pair of them is
a commuting element. Indeed, for every pair x, y ∈ Rm|2n (ΛN )
{x, y} = −2 ∑_{j=1}^{m} xj yj + ∑_{j=1}^{n} (x`2j−1 y`2j − x`2j y`2j−1) ∈ ΛN,0.    (9)
The algebra generated by all elements in Rm|2n (ΛN ) is called the radial algebra embedded in Am,2n (ΛN )
and will be denoted by Rm|2n (ΛN ). This algebra is generated by the set of elements
{fA ej : A ⊂ {1, . . . , N }, |A| even , j = 1, . . . , m} ∪ {fA e`j : A ⊂ {1, . . . , N }, |A| odd , j = 1, . . . , 2n} ,
which turns Rm|2n (ΛN ) into a finite dimensional real space. For more details about this kind of realization
of the radial algebra we refer the reader to [3, 4, 13, 14].
3.2 Superbivectors
Superbivectors play a very important rôle in this work. Following the radial algebra approach, the space
of bivectors can be defined as the space of all ΛN,0 -linear combinations of wedge products of supervectors,
i.e.
x ∧ y = (1/2)[x, y] = ∑_{1≤j<k≤m} (xj yk − xk yj) ej ek + ∑_{1≤j≤m, 1≤k≤2n} (xj y`k − x`k yj) ej e`k + ∑_{1≤j≤k≤2n} (x`j y`k + x`k y`j) e`j e`k,

where e`j e`k = (1/2){e`j, e`k}. Hence, the space Rm|2n^(2)(ΛN) of superbivectors consists of elements of the form
B = ∑_{1≤j<k≤m} bj,k ej ek + ∑_{1≤j≤m, 1≤k≤2n} b`j,k ej e`k + ∑_{1≤j≤k≤2n} Bj,k e`j e`k,    (10)
where bj,k ∈ ΛN,0, b`j,k ∈ ΛN,1 and Bj,k ∈ ΛN,0 ∩ ΛN^+. Observe that the coefficients Bj,k are commuting but nilpotent, since they are generated by elements of the form x`j y`k + x`k y`j that belong to ΛN^+. This constitutes an important limitation for the space of superbivectors, because it means that Rm|2n^(2)(ΛN) doesn't allow for any other structure than the orthogonal one. In fact, the real projection [B]0 of every superbivector B is just the classical Clifford bivector: [B]0 = ∑_{1≤j<k≤m} [bj,k]0 ej ek. Hence it is necessary to introduce an extension Rm|2n^(2)E(ΛN) of Rm|2n^(2)(ΛN) containing elements B of the form (10) but with Bj,k ∈ ΛN,0. This extension allows one to consider two different structures in the same element B: the orthogonal and the symplectic one. In fact, in this case we have

[B]0 = ∑_{1≤j<k≤m} [bj,k]0 ej ek + ∑_{1≤j≤k≤2n} [Bj,k]0 e`j e`k.
Remark 3.1. Observe that Rm|2n^(2)(ΛN) and Rm|2n^(2)E(ΛN) are finite dimensional real vector spaces with

dim Rm|2n^(2)(ΛN) = 2^{N−1} m(m−1)/2 + 2^{N−1} 2mn + (2^{N−1} − 1) n(2n+1),
dim Rm|2n^(2)E(ΛN) = 2^{N−1} m(m−1)/2 + 2^{N−1} 2mn + 2^{N−1} n(2n+1).
The extension Rm|2n^(2)E(ΛN) of the superbivector space clearly lies out of the radial algebra Rm|2n(ΛN) and generates an infinite dimensional algebra using the multiplication rules defined in Am,2n(ΛN). Elements in Rm|2n^(2)E(ΛN) are called extended superbivectors. Both superbivectors and extended superbivectors preserve several properties of classical bivectors.
Proposition 8. The space Rm|2n^(2)E(ΛN) is a Lie algebra. In addition, Rm|2n^(2)(ΛN) is a Lie subalgebra of Rm|2n^(2)E(ΛN).

Proof.
We only need to check that the Lie bracket defined by the commutator in the associative algebra Am,2n(ΛN) is an internal binary operation in Rm|2n^(2)E(ΛN) and Rm|2n^(2)(ΛN). Direct computation shows that for a, b ∈ ΛN,0 and a`, b` ∈ ΛN,1 we get:
[a ej ek, b er es] = ab (2δj,s er ek − 2δs,k er ej + 2δr,j ek es − 2δr,k ej es),
[a ej ek, b` er e`s] = ab` (2δr,j ek e`s − 2δr,k ej e`s),
[a ej ek, b e`r e`s] = 0,
[a` ej e`k, b` er e`s] = a`b` (2δr,j e`k e`s + (1 − δj,r) gs,k ej er),
[a` ej e`k, b e`r e`s] = a`b (gk,s ej e`r + gk,r ej e`s),
[a e`j e`k, b e`r e`s] = ab (gj,s e`r e`k + gk,s e`r e`j + gj,r e`k e`s + gk,r e`j e`s).
It is well known from the radial algebra framework that the commutator of a bivector with a vector is always a linear combination of vectors with coefficients in the scalar subalgebra. Indeed, for vector variables $x, y, z$ we obtain
\[ [x\wedge y,z]=\tfrac{1}{2}[[x,y],z]=\tfrac{1}{2}[2xy-\{x,y\},z]=[xy,z]=\{y,z\}x-\{x,z\}y. \]
This property can be easily generalized to $R^{(2)E}_{m|2n}(\Lambda_N)$ by straightforward computation. In particular, the following result holds.

Proposition 9. Let $x\in R_{m|2n}(\Lambda_N)$, let $\{b_1,\ldots,b_{2^{N-1}}\}$ be a basis for $\Lambda_{N,0}$ and let $\{\grave{b}_1,\ldots,\grave{b}_{2^{N-1}}\}$ be a basis for $\Lambda_{N,1}$. Then,
\[ [b_r e_je_k, x]=2b_r(x_je_k-x_ke_j), \]
\[ [\grave{b}_r e_j\grave{e}_{2k-1}, x]=\grave{b}_r(2x_j\grave{e}_{2k-1}+\grave{x}_{2k}e_j), \]
\[ [\grave{b}_r e_j\grave{e}_{2k}, x]=\grave{b}_r(2x_j\grave{e}_{2k}-\grave{x}_{2k-1}e_j), \]
\[ [b_r \grave{e}_{2j}\grave{e}_{2k}, x]=-b_r(\grave{x}_{2j-1}\grave{e}_{2k}+\grave{x}_{2k-1}\grave{e}_{2j}), \]
\[ [b_r \grave{e}_{2j-1}\grave{e}_{2k-1}, x]=b_r(\grave{x}_{2j}\grave{e}_{2k-1}+\grave{x}_{2k}\grave{e}_{2j-1}), \]
\[ [b_r \grave{e}_{2j-1}\grave{e}_{2k}, x]=b_r(\grave{x}_{2j}\grave{e}_{2k}-\grave{x}_{2k-1}\grave{e}_{2j-1}). \]
The above computations are also valid for supervectors $x$ in any extension of $R_{m|2n}(\Lambda_N)$ such as $R_{m|2n}(\Lambda_{N+1})$.
3.3 Tensor algebra and exponential map
Since $A_{m,2n}(\Lambda_N)$ is infinite dimensional, the definition of the exponential map by means of the power series $e^x=\sum_{j=0}^{\infty}\frac{x^j}{j!}$ is not as straightforward as it was for the algebras $\Lambda_N$ or $\mathrm{Mat}(p|q)(\Lambda_N)$. A correct definition of the exponential map in $A_{m,2n}(\Lambda_N)$ requires the introduction of the tensor algebra. More details about the general theory of tensor algebras can be found in several basic references, see e.g. [6, 11, 17].
Let $T(V)$ be the tensor algebra of the vector space $V$ with basis $B_V=\{f_1,\ldots,f_N,e_1,\ldots,e_m,\grave{e}_1,\ldots,\grave{e}_{2n}\}$, i.e., $T(V)=\bigoplus_{j=0}^{\infty}T^j(V)$ where $T^j(V)=\mathrm{span}_{\mathbb{R}}\{v_1\otimes\cdots\otimes v_j : v_\ell\in B_V\}$ is the $j$-fold tensor product of $V$ with itself. Then $A_{m,2n}(\Lambda_N)$ can be seen as a subalgebra of $T(V)/I$, where $I\subset T(V)$ is the two-sided ideal generated by the elements:
\[ f_j\otimes f_k+f_k\otimes f_j, \qquad f_j\otimes e_k-e_k\otimes f_j, \qquad f_j\otimes\grave{e}_k-\grave{e}_k\otimes f_j, \]
\[ e_j\otimes e_k+e_k\otimes e_j+2\delta_{j,k}, \qquad e_j\otimes\grave{e}_k+\grave{e}_k\otimes e_j, \qquad \grave{e}_j\otimes\grave{e}_k-\grave{e}_k\otimes\grave{e}_j-g_{j,k}. \]
Indeed, $T(V)/I$ is isomorphic to the extension of $A_{m,2n}(\Lambda_N)$ which also contains infinite sums of arbitrary terms of the form $a\,e_{j_1}\cdots e_{j_k}\grave{e}_1^{\alpha_1}\cdots\grave{e}_{2n}^{\alpha_{2n}}$, where $a\in\Lambda_N$, $1\leq j_1\leq\ldots\leq j_k\leq m$ and $(\alpha_1,\ldots,\alpha_{2n})\in(\mathbb{N}\cup\{0\})^{2n}$ is a multi-index.
The exponential map $\exp(x)=e^x=\sum_{j=0}^{\infty}\frac{x^j}{j!}$ is known to be well defined in the tensor algebra $T(V)$, see e.g. [6], whence it is also well defined in $T(V)/I$. It has the following mapping properties:
\[ \exp: A_{m,2n}(\Lambda_N)\to T(V)/I, \qquad \exp: R_{m|2n}(\Lambda_N)\to R_{m|2n}(\Lambda_N). \]
The first statement directly follows from the definition of $T(V)/I$, while the second one can be obtained following the standard procedure established for $\Lambda_N$ and $\mathrm{Mat}(p|q)(\Lambda_N)$, since $R_{m|2n}(\Lambda_N)$ is finite dimensional.
4 The orthosymplectic structure in $R_{m|2n}(\Lambda_N)$

4.1 Invariance of the inner product
The $\Lambda_{N,0}$-valued function given by the anti-commutator of supervectors (9) leads to the definition of the symmetric bilinear form $\langle\cdot,\cdot\rangle: R_{m|2n}(\Lambda_N)\times R_{m|2n}(\Lambda_N)\to\Lambda_{N,0}$ given by
\[ \langle x,y\rangle=-\frac{1}{2}\{x,y\}=\sum_{j=1}^m x_jy_j-\frac{1}{2}\sum_{j=1}^n\left(\grave{x}_{2j-1}\grave{y}_{2j}-\grave{x}_{2j}\grave{y}_{2j-1}\right)\in\Lambda_{N,0}, \]
which we will use as a generalized inner product. It can be easily written in terms of supermatrices, since
\[ \langle x,y\rangle=x^TQy \quad\text{where}\quad Q=\begin{pmatrix} I_m & 0\\ 0 & -\frac{1}{2}J_{2n}\end{pmatrix} \quad\text{and}\quad J_{2n}=\mathrm{diag}\left(\begin{pmatrix}0&1\\-1&0\end{pmatrix},\ldots,\begin{pmatrix}0&1\\-1&0\end{pmatrix}\right). \]
When $N=1$, the inner product $\langle x,y\rangle=\sum_{j=1}^m x_jy_j$ coincides with the Euclidean inner product in $\mathbb{R}^m$. So from now on, we assume $N>1$.
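As a quick numerical sanity check of the supermatrix form of the inner product, the following Python sketch builds $Q=\mathrm{diag}(I_m,-\tfrac12 J_{2n})$ and compares $x^TQy$ with the componentwise formula. It is an illustration only: plain real numbers stand in for the $\Lambda_{N,0}$/$\Lambda_{N,1}$ coordinates, so it really probes the real shape of the form, and the helper names `Q_matrix` and `inner` are ours, not from the paper.

```python
import numpy as np

def Q_matrix(m, n):
    # Q = diag(I_m, -1/2 J_2n), with J_2n a block-diagonal of [[0,1],[-1,0]] blocks
    J = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))
    Q = np.zeros((m + 2 * n, m + 2 * n))
    Q[:m, :m] = np.eye(m)
    Q[m:, m:] = -0.5 * J
    return Q

def inner(x, y, m, n):
    # <x,y> = sum_j x_j y_j - 1/2 sum_j (x_{2j-1} y_{2j} - x_{2j} y_{2j-1})
    s = float(x[:m] @ y[:m])
    for j in range(n):
        a, b = m + 2 * j, m + 2 * j + 1
        s -= 0.5 * (x[a] * y[b] - x[b] * y[a])
    return s

m, n = 3, 2
rng = np.random.default_rng(0)
x, y = rng.normal(size=m + 2 * n), rng.normal(size=m + 2 * n)
Q = Q_matrix(m, n)
assert np.isclose(x @ Q @ y, inner(x, y, m, n))
```

Note that the real matrix $Q$ is not symmetric: the super-symmetry of the form only emerges once the odd coordinates anticommute.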
In order to find all the supermatrices $M\in\mathrm{Mat}(m|2n)(\Lambda_N)$ whose corresponding linear operators leave the inner product $\langle\cdot,\cdot\rangle$ invariant, observe that
\[ \langle Mx,My\rangle=\langle x,y\rangle \iff (Mx)^TQMy=x^TQy \iff x^T\left(M^{ST}QM-Q\right)y=0. \]
We thus have to identify those supermatrices $L\in\mathrm{Mat}(m|2n)(\Lambda_N)$ for which
\[ x^TLy=0 \qquad \forall x,y\in R_{m|2n}(\Lambda_N). \tag{11} \]
Lemma 4. A supermatrix $L\in\mathrm{Mat}(m|2n)(\Lambda_N)$ satisfies (11) if and only if $L\in W_N$, where $W_N$ is a two-sided ideal of $\mathrm{Mat}(m|2n)(\Lambda_N)$ defined as follows:

• if $N=2\ell+1$, $W_{2\ell+1}$ is generated by the set of supermatrices
\[ \left\{L=\begin{pmatrix}0&\grave{B}\\ \grave{C}&0\end{pmatrix} : L\in\mathrm{Mat}(m|2n)\left(\Lambda_N^{(N)}\right)\right\}\cup\left\{L=\begin{pmatrix}0&0\\ 0&D\end{pmatrix} : L\in\mathrm{Mat}(m|2n)\left(\Lambda_N^{(N-1)}\right)\right\}; \]

• if $N=2\ell$, $W_{2\ell}$ is generated by the set of supermatrices
\[ \left\{L=\begin{pmatrix}0&0\\ 0&D\end{pmatrix} : L\in\mathrm{Mat}(m|2n)\left(\Lambda_N^{(N)}\right)\right\}. \]
Proof.
We can easily check by straightforward computations that the two subspaces defined above are two-sided ideals of $\mathrm{Mat}(m|2n)(\Lambda_N)$, i.e., $MW_N\subset W_N$ and $W_NM\subset W_N$ for every $M\in\mathrm{Mat}(m|2n)(\Lambda_N)$. Let
\[ L=\begin{pmatrix}A&\grave{B}\\ \grave{C}&D\end{pmatrix} \]
with $A=\{a_{j,k}\}\in\Lambda_{N,0}^{m\times m}$, $\grave{B}=\{\grave{b}_{j,k}\}\in\Lambda_{N,1}^{m\times 2n}$, $\grave{C}=\{\grave{c}_{j,k}\}\in\Lambda_{N,1}^{2n\times m}$ and $D=\{d_{j,k}\}\in\Lambda_{N,0}^{2n\times 2n}$.
Choosing for $x, y$ all possible coordinate supervectors we obtain that $L$ satisfies (11) if and only if its entries are such that
\[ x_ja_{j,k}y_k=0 \quad \forall x_j,y_k\in\Lambda_{N,0}, \qquad\qquad x_j\grave{b}_{j,k}\grave{y}_k=0 \quad \forall x_j\in\Lambda_{N,0},\ \grave{y}_k\in\Lambda_{N,1}, \]
\[ \grave{x}_j\grave{c}_{j,k}y_k=0 \quad \forall \grave{x}_j\in\Lambda_{N,1},\ y_k\in\Lambda_{N,0}, \qquad\qquad \grave{x}_jd_{j,k}\grave{y}_k=0 \quad \forall \grave{x}_j,\grave{y}_k\in\Lambda_{N,1}. \]
Taking in the first equation xj = yk = 1 we obtain aj,k = 0 for every j, k = 1, . . . , m. For the remaining
coefficients we need to distinguish between two cases.
Case $N=2\ell+1$. We first prove that $\grave{b}_{j,k},\grave{c}_{j,k}\in\mathrm{span}_{\mathbb{R}}\{f_1\cdots f_{2\ell+1}\}$. If $\grave{b}_{j,k}\in\Lambda_N^{(N)}$, it is clear that $\grave{b}_{j,k}\grave{y}_k=0$ for every $\grave{y}_k\in\Lambda_{N,1}$. It then easily follows that $x_j\grave{b}_{j,k}\grave{y}_k=0$ for every pair $x_j\in\Lambda_{N,0}$, $\grave{y}_k\in\Lambda_{N,1}$. If on the contrary $\grave{b}_{j,k}\notin\Lambda_N^{(N)}$, there would be at least one of the generators $f_r$ missing in one of the canonical terms of $\grave{b}_{j,k}$. Then, taking $x_j=1$ and $\grave{y}_k=f_r$, we have $x_j\grave{b}_{j,k}\grave{y}_k\neq 0$. Hence,
\[ x_j\grave{b}_{j,k}\grave{y}_k=0 \quad \forall x_j\in\Lambda_{N,0},\ \grave{y}_k\in\Lambda_{N,1} \qquad\text{if and only if}\qquad \grave{b}_{j,k}\in\mathrm{span}_{\mathbb{R}}\{f_1\cdots f_{2\ell+1}\}. \]
The same holds for $\grave{c}_{j,k}$. Next we prove that $d_{j,k}\in\mathrm{span}_{\mathbb{R}}\{f_1\cdots f_{r-1}f_{r+1}\cdots f_{2\ell+1} : 1\leq r\leq 2\ell+1\}$. In fact, if $d_{j,k}\in\Lambda_N^{(N-1)}$ it is clear that $\grave{x}_jd_{j,k}\grave{y}_k=0$, since $\grave{x}_jd_{j,k}\grave{y}_k$ is the sum of homogeneous terms of degree at least $N+1$. If on the contrary $d_{j,k}\notin\Lambda_N^{(N-1)}$, there would be at least two different Grassmann generators $f_r, f_s$ missing in one of the canonical terms of $d_{j,k}$. Then, taking $\grave{x}_j=f_r$ and $\grave{y}_k=f_s$ we have that $\grave{x}_jd_{j,k}\grave{y}_k\neq 0$. Hence,
\[ \grave{x}_jd_{j,k}\grave{y}_k=0 \quad \forall \grave{x}_j,\grave{y}_k\in\Lambda_{N,1} \qquad\text{if and only if}\qquad d_{j,k}\in\mathrm{span}_{\mathbb{R}}\{f_1\cdots f_{r-1}f_{r+1}\cdots f_{2\ell+1} : 1\leq r\leq 2\ell+1\}. \]
Case $N=2\ell$. We first prove that $\grave{b}_{j,k}=\grave{c}_{j,k}=0$. If $\grave{b}_{j,k}\neq 0$, there will always be at least one Grassmann generator $f_r$ missing in one of the terms of $\grave{b}_{j,k}$, since $N$ is an even number. Then, taking $x_j=1$ and $\grave{y}_k=f_r$, we have $x_j\grave{b}_{j,k}\grave{y}_k\neq 0$. Hence,
\[ x_j\grave{b}_{j,k}\grave{y}_k=0 \quad \forall x_j\in\Lambda_{N,0},\ \grave{y}_k\in\Lambda_{N,1} \qquad\text{if and only if}\qquad \grave{b}_{j,k}=0. \]
The same holds for $\grave{c}_{j,k}$. We now prove that $d_{j,k}\in\mathrm{span}_{\mathbb{R}}\{f_1\cdots f_{2\ell}\}$. Indeed, if $d_{j,k}\in\Lambda_N^{(N)}$ it is clear that $\grave{x}_jd_{j,k}\grave{y}_k=0$. If on the contrary $d_{j,k}\notin\Lambda_N^{(N)}$, there would be at least two different Grassmann generators $f_r, f_s$ missing in one of the canonical terms of $d_{j,k}$. Then, taking $\grave{x}_j=f_r$ and $\grave{y}_k=f_s$ we have that $\grave{x}_jd_{j,k}\grave{y}_k\neq 0$. Hence,
\[ \grave{x}_jd_{j,k}\grave{y}_k=0 \quad \forall \grave{x}_j,\grave{y}_k\in\Lambda_{N,1} \qquad\text{if and only if}\qquad d_{j,k}\in\mathrm{span}_{\mathbb{R}}\{f_1\cdots f_{2\ell}\}. \]
As a direct consequence of Lemma 4 we have the following result.

Corollary 1. The set $O(m|2n)(\Lambda_N)$ of supermatrices in $\mathrm{Mat}(m|2n)(\Lambda_N)$ leaving the inner product $\langle\cdot,\cdot\rangle$ invariant is characterized by
\[ O(m|2n)(\Lambda_N)=\{M\in\mathrm{Mat}(m|2n)(\Lambda_N) : M^{ST}QM-Q\in W_N\}. \]
Remark 4.1. The form $L=M^{ST}QM-Q$ suggests that we do not need the whole ideal $W_N$ to describe $O(m|2n)(\Lambda_N)$. In fact, supermatrices of the previous form satisfy
\[ L^{ST}=M^{ST}Q^{ST}\left(M^{ST}\right)^{ST}-Q^{ST}=M^{ST}QS(SMS)-QS=\left(M^{ST}QM-Q\right)S=LS, \]
whence the subspace $W_N^*=\{L\in W_N : L^{ST}=LS\}$ can be considered in the above definition. However, the use of $W_N^*$ is not important for the purposes of this paper, whence we will continue working with $W_N$.
We will now study the algebraic structure of O(m|2n)(ΛN ).
Theorem 2. The following statements hold:
(i) O(m|2n)(ΛN ) ⊂ GL(m|2n)(ΛN ).
(ii) O(m|2n)(ΛN ) is a group under the usual matrix multiplication.
(iii) O(m|2n)(ΛN ) is a closed subgroup of GL(m|2n)(ΛN ).
Proof.
(i) To prove that every supermatrix in $O(m|2n)(\Lambda_N)$ is invertible, first note that the real projection of a supermatrix in $W_N$ ($N>1$) is zero. Then for every $M\in O(m|2n)(\Lambda_N)$ we have
\[ \left[M^{ST}QM-Q\right]_0=[M]_0^TQ[M]_0-Q=0, \qquad\text{where}\qquad [M]_0=\begin{pmatrix}A_0&0\\ 0&D_0\end{pmatrix}. \]
This can be rewritten in terms of the real blocks $A_0$ and $D_0$ as $A_0^TA_0=I_m$ and $D_0^TJ_{2n}D_0=J_{2n}$, implying that $A_0\in O(m)$ and $D_0\in Sp(2n)$. On account of Theorem 1, $M$ thus is invertible.
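The block conditions above can be replayed numerically for the real projection. The sketch below (our own illustration, with real matrices only) takes $M_0=\mathrm{diag}(A_0,D_0)$ with $A_0\in O(m)$ and $D_0\in Sp(2n)$; for $n=1$ we use the fact that $Sp(2)=SL(2)$, so any determinant-one $2\times 2$ matrix serves as a symplectic block.

```python
import numpy as np

m, n = 2, 1
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
Q = np.block([[np.eye(m), np.zeros((m, 2 * n))],
              [np.zeros((2 * n, m)), -0.5 * J]])

theta = 0.7
A0 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])   # A0 in O(2)
D0 = np.array([[2.0, 1.0], [1.0, 1.0]])            # det = 1, hence D0 in Sp(2) = SL(2)

M0 = np.block([[A0, np.zeros((m, 2 * n))],
               [np.zeros((2 * n, m)), D0]])

# the two block conditions ...
assert np.allclose(A0.T @ A0, np.eye(m))
assert np.allclose(D0.T @ J @ D0, J)
# ... are together equivalent to M0^T Q M0 = Q
assert np.allclose(M0.T @ Q @ M0, Q)
```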
(ii) Here we only need to prove that matrix inversion and matrix multiplication are internal operations in $O(m|2n)(\Lambda_N)$. For the inversion, the condition $M^{ST}QM-Q=L\in W_N$ implies that $\left(M^{-1}\right)^{ST}QM^{-1}-Q=-\left(M^{-1}\right)^{ST}LM^{-1}\in W_N$ since $W_N$ is an ideal. Hence, $M\in O(m|2n)(\Lambda_N)$ implies $M^{-1}\in O(m|2n)(\Lambda_N)$. For the multiplication, let $M_1, M_2\in O(m|2n)(\Lambda_N)$ and $L_1, L_2\in W_N$ such that $M_j^{ST}QM_j-Q=L_j$, $j=1,2$. Then:
\[ (M_1M_2)^{ST}Q(M_1M_2)-Q=M_2^{ST}M_1^{ST}QM_1M_2-Q=M_2^{ST}(Q+L_1)M_2-Q=L_2+M_2^{ST}L_1M_2. \]
Since $W_N$ is an ideal, $L_2+M_2^{ST}L_1M_2\in W_N$. Hence, $M_1M_2\in O(m|2n)(\Lambda_N)$.
(iii) Let $\{M_j\}_{j\in\mathbb{N}}\subset O(m|2n)(\Lambda_N)$ be a sequence that converges to a supermatrix $M\in\mathrm{Mat}(m|2n)(\Lambda_N)$. Since algebraic operations are continuous in $\mathrm{Mat}(m|2n)(\Lambda_N)$ we have that
\[ \lim_{j\to\infty}\left(M_j^{ST}QM_j-Q\right)=M^{ST}QM-Q. \]
But $M_j^{ST}QM_j-Q\in W_N$ for every $j\in\mathbb{N}$ and, since $W_N$ is a finite dimensional subspace of $\mathrm{Mat}(m|2n)(\Lambda_N)$, it is closed. Whence the limit $M^{ST}QM-Q$ belongs to $W_N$.
Remark 4.2. The above theorem states that $O(m|2n)(\Lambda_N)$ is a Lie group.

The group $O(m|2n)(\Lambda_N)$ can be partitioned in a natural way into the classes
\[ O_L=O_L(m|2n)(\Lambda_N):=\{M\in O(m|2n)(\Lambda_N) : M^{ST}QM-Q=L\}, \qquad L\in W_N. \]
The set $O_L$ is not non-empty for every $L\in W_N$; in particular, $O_L=\emptyset$ for every $L\in W_N\setminus W_N^*$.
Proposition 10. The following statements hold:
(i) Let $L_1, L_2\in W_N$ and $M_1\in O_{L_1}$, $M_2\in O_{L_2}$. Then
\[ M_1M_2\in O_{M_2^{ST}L_1M_2+L_2} \qquad\text{and}\qquad M_1^{-1}\in O_{-\left(M_1^{-1}\right)^{ST}L_1M_1^{-1}}. \]
(ii) $O_L$ is a subgroup of $O(m|2n)(\Lambda_N)$ if and only if $L=0$.
(iii) The binary relation
\[ R=\{(M_1,M_2)\in O(m|2n)(\Lambda_N)^2 : M_1,M_2\in O_L;\ L\in W_N\}=\{(M_1,M_2)\in O(m|2n)(\Lambda_N)^2 : M_1M_2^{-1}\in O_0\} \]
is an equivalence relation.
Proof.
(i) This was already proven in the proof of Theorem 2 (ii).
(ii) If $O_L$ is a subgroup then $I_{m+2n}\in O_L$ and $L=I_{m+2n}QI_{m+2n}-Q=0$. To prove now that $O_0$ is a subgroup we just have to consider $L_1=L_2=0$ in (i). Then $M_1, M_2\in O_0$ directly implies $M_1M_2\in O_0$ and $M_1^{-1}\in O_0$.
(iii) Let $M_1\in O_{L_1}$ and $M_2\in O_{L_2}$. From (i) we get that $M_1M_2^{-1}\in O_L$ where $L=\left(M_2^{-1}\right)^{ST}(L_1-L_2)M_2^{-1}$. Thence, $M_1, M_2\in O_L$ for some $L\in W_N$ if and only if $M_1M_2^{-1}\in O_0$. Since $O_0$ is a subgroup of $O(m|2n)(\Lambda_N)$ it now easily follows that $R$ is an equivalence relation.
Remark 4.3. The subgroup $O_0$ is a closed subgroup of $GL(m|2n)(\Lambda_N)$, whence it is a Lie group. It plays a crucial rôle in the study of $O(m|2n)(\Lambda_N)$: given any representative element $M$ of $O_L$, we can describe the whole set $O_L$ by means of the relation $O_L=O_0M$. For that reason, we will from now on focus our attention on $O_0$.
Proposition 11. The following statements hold:
(i) A supermatrix $M\in\mathrm{Mat}(m|2n)(\Lambda_N)$ belongs to $O_0$ if and only if
\[ A^TA-\tfrac{1}{2}\grave{C}^TJ_{2n}\grave{C}=I_m, \qquad A^T\grave{B}-\tfrac{1}{2}\grave{C}^TJ_{2n}D=0, \qquad \grave{B}^T\grave{B}+\tfrac{1}{2}D^TJ_{2n}D=\tfrac{1}{2}J_{2n}. \tag{12} \]
(ii) $\mathrm{sdet}(M)=\pm 1$ for every $M\in O_0$.
(iii) $[O(m|2n)(\Lambda_N)]_0=[O_0]_0=O(m)\times Sp(2n)$.
Proof.
(i) The relation $M^{ST}QM=Q$ can be written in terms of $A, \grave{B}, \grave{C}, D$ as:
\[ \begin{pmatrix} A^TA-\tfrac{1}{2}\grave{C}^TJ_{2n}\grave{C} & A^T\grave{B}-\tfrac{1}{2}\grave{C}^TJ_{2n}D\\[2pt] -\grave{B}^TA-\tfrac{1}{2}D^TJ_{2n}\grave{C} & -\grave{B}^T\grave{B}-\tfrac{1}{2}D^TJ_{2n}D \end{pmatrix}=\begin{pmatrix} I_m & 0\\ 0 & -\tfrac{1}{2}J_{2n}\end{pmatrix}. \]
(ii) On account of Proposition 4, $M^{ST}QM=Q$ implies that $\mathrm{sdet}(M)^2\,\mathrm{sdet}(Q)=\mathrm{sdet}(Q)$, whence $\mathrm{sdet}(M)^2=1$. The statement then follows from Lemma 1.
(iii) See the proof of Theorem 2 (i).
4.2 Group of superrotations $SO_0$

As in the classical case, we can now introduce the set of superrotations by
\[ SO_0=SO_0(m|2n)(\Lambda_N)=\{M\in O_0 : \mathrm{sdet}(M)=1\}. \]
This is easily seen to be a Lie subgroup of $O_0$ with real projection equal to $SO(m)\times Sp(2n)$. In fact, the conditions $M^{ST}QM=Q$ and $\mathrm{sdet}(M)=1$ imply that $M_0^TQM_0=Q$ and $\mathrm{sdet}(M_0)=1$. This means that
\[ M_0=\begin{pmatrix}A_0&0\\ 0&D_0\end{pmatrix} \]
with $A_0^TA_0=I_m$, $D_0^TJ_{2n}D_0=J_{2n}$ and $\det(A_0)=\det(D_0)$. But $D_0\in Sp(2n)$ implies $\det(D_0)=1$. Then $\det(A_0)=1$ and $A_0\in SO(m)$.
The following proposition states that, as in the classical case, SO0 is connected and in consequence,
it is the identity component of O0 .
Proposition 12. SO0 is a connected Lie group.
Proof.
Since the real projection $SO(m)\times Sp(2n)$ of $SO_0$ is a connected group, it suffices to prove that for every $M\in SO_0$ there exists a continuous path inside $SO_0$ connecting $M$ with its real projection $M_0$. To that end, let us write $M=\sum_{j=0}^N[M]_j$, where $[M]_j$ is the projection of $M$ on $\mathrm{Mat}(m|2n)\left(\Lambda_N^{(j)}\right)$ for each $j=0,1,\ldots,N$. Then, observe that
\[ M^{ST}QM=Q \iff \left(\sum_{j=0}^N\left[M^{ST}\right]_j\right)Q\left(\sum_{j=0}^N[M]_j\right)=Q \iff \sum_{k=0}^N\sum_{j=0}^k\left[M^{ST}\right]_jQ[M]_{k-j}=Q \]
\[ \iff M_0^TQM_0=Q \quad\text{and}\quad \sum_{j=0}^k\left[M^{ST}\right]_jQ[M]_{k-j}=0, \qquad k=1,\ldots,N. \]
Let us now take the path $M(t)=\sum_{j=0}^N t^j[M]_j$. For $t\in[0,1]$ this is a continuous path with $M(0)=M_0$ and $M(1)=M$. In addition, $M(t)_0^TQM(t)_0=M_0^TQM_0=Q$ and for every $k=1,\ldots,N$ we have
\[ \sum_{j=0}^k\left[M(t)^{ST}\right]_jQ[M(t)]_{k-j}=t^k\sum_{j=0}^k\left[M^{ST}\right]_jQ[M]_{k-j}=0. \]
Hence, $M(t)^{ST}QM(t)=Q$ for $t\in[0,1]$. Finally, observe that $\mathrm{sdet}(M(t))=1$ for every $t\in[0,1]$, since $\mathrm{sdet}(M(t)_0)=\mathrm{sdet}(M_0)=1$.
We will now investigate the corresponding Lie algebras of O(m|2n)(ΛN ), O0 and SO0 .
Theorem 3.
(i) The Lie algebra $so(m|2n)(\Lambda_N)$ of $O(m|2n)(\Lambda_N)$ is given by
\[ so(m|2n)(\Lambda_N)=\{X\in\mathrm{Mat}(m|2n)(\Lambda_N) : X^{ST}Q+QX\in W_N\}. \]
(ii) The Lie algebra $so_0=so_0(m|2n)(\Lambda_N)$ of $O_0$ coincides with the Lie algebra of $SO_0$ and is given by the space of all "super anti-symmetric" supermatrices
\[ so_0=\{X\in\mathrm{Mat}(m|2n)(\Lambda_N) : X^{ST}Q+QX=0\}. \]
(iii) A supermatrix $X=\begin{pmatrix}A&\grave{B}\\ \grave{C}&D\end{pmatrix}\in\mathrm{Mat}(m|2n)(\Lambda_N)$ belongs to $so_0$ if and only if
\[ A^T+A=0, \qquad \grave{B}-\tfrac{1}{2}\grave{C}^TJ_{2n}=0 \qquad\text{and}\qquad D^TJ_{2n}+J_{2n}D=0. \]
(iv) $[so(m|2n)(\Lambda_N)]_0=[so_0]_0=so(m)\oplus sp(2n)$.
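Statement (iv) can be illustrated numerically on the real projection. In the sketch below (an illustration with real matrices only) we take $A_0$ antisymmetric and $D_0=J_{2n}S$ with $S$ symmetric; the latter is a standard parametrization of $sp(2n)$ that we assume here, and the block-diagonal $X_0$ then satisfies $X_0^TQ+QX_0=0$.

```python
import numpy as np

m, n = 3, 1
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
Q = np.block([[np.eye(m), np.zeros((m, 2 * n))],
              [np.zeros((2 * n, m)), -0.5 * J]])

rng = np.random.default_rng(1)
A0 = rng.normal(size=(m, m))
A0 = A0 - A0.T                 # A0 in so(3): antisymmetric
S = rng.normal(size=(2 * n, 2 * n))
S = S + S.T                    # symmetric
D0 = J @ S                     # D0 in sp(2): D0^T J + J D0 = 0

X0 = np.block([[A0, np.zeros((m, 2 * n))],
               [np.zeros((2 * n, m)), D0]])
assert np.allclose(D0.T @ J + J @ D0, 0)
assert np.allclose(X0.T @ Q + Q @ X0, 0)
```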
Proof.
(i) If $X\in\mathrm{Mat}(p|q)(\Lambda_N)$ is in the Lie algebra of $O(m|2n)(\Lambda_N)$ then $e^{tX}\in O(m|2n)(\Lambda_N)$ for every $t\in\mathbb{R}$, i.e. $\left(e^{tX}\right)^{ST}Qe^{tX}-Q=L(t)\in W_N$. Differentiating at $t=0$ we obtain $X^{ST}Q+QX=L'(0)$, where $L'(0)\in W_N$ since $W_N$ is a closed subspace. On the other hand, if $X\in\mathrm{Mat}(p|q)(\Lambda_N)$ satisfies $X^{ST}Q+QX=L\in W_N$, then $X^{ST}=L_1-QXQ^{-1}$ where $L_1=LQ^{-1}\in W_N$. Computing the exponential of $tX^{ST}$ we obtain
\[ e^{tX^{ST}}=\sum_{j=0}^{\infty}\frac{\left(Q(-tX)Q^{-1}+tL_1\right)^j}{j!}=\sum_{j=0}^{\infty}\frac{\left(Q(-tX)Q^{-1}\right)^j}{j!}+L_2(t)=Qe^{-tX}Q^{-1}+L_2(t), \]
where $L_2(t)$ is a convergent infinite sum of products that contain the factor $tL_1$ at least once. Then, since $W_N$ is a two-sided ideal, $L_2(t)\in W_N$ for every $t\in\mathbb{R}$, whence
\[ \left(e^{tX}\right)^{ST}Qe^{tX}-Q=L_2(t)Qe^{tX}\in W_N \qquad \forall t\in\mathbb{R}. \]
(ii) To prove that so0 is the Lie algebra of O0 it suffices to repeat the above reasoning with L = L1 =
L2 = 0. From Proposition 6 it easily follows that the Lie algebra of SO0 is
{X ∈ Mat(p|q)(ΛN ) : X ST Q + QX = 0, str(X) = 0}.
But X ST Q + QX = 0 implies str(X) = 0. In fact, the condition X ST = −QXQ−1 implies that
str(X ST ) = − str(QXQ−1 ) = − str(X), yielding str(X) = str(X ST ) = − str(X) and str(X) = 0.
Hence, the Lie algebra of SO0 is so0 .
(iii) Observe that the relation $X^{ST}Q+QX=0$ can be written in terms of $A, \grave{B}, \grave{C}, D$ as follows:
\[ \begin{pmatrix} A^T+A & -\tfrac{1}{2}\grave{C}^TJ_{2n}+\grave{B}\\[2pt] -\grave{B}^T-\tfrac{1}{2}J_{2n}\grave{C} & -\tfrac{1}{2}D^TJ_{2n}-\tfrac{1}{2}J_{2n}D \end{pmatrix}=0. \]
(iv) As the real projection of every element in $W_N$ ($N>1$) is zero, we have $[so(m|2n)(\Lambda_N)]_0=[so_0]_0$. Let $X=\begin{pmatrix}A&\grave{B}\\ \grave{C}&D\end{pmatrix}\in so_0$; then $X_0=[X]_0=\begin{pmatrix}A_0&0\\ 0&D_0\end{pmatrix}$ satisfies $X_0^{ST}Q+QX_0=0$. Using (iii) we obtain $A_0^T+A_0=0$ and $D_0^TJ_{2n}+J_{2n}D_0=0$, which implies that $A_0\in so(m)$ and $D_0\in sp(2n)$.
Remark 4.4. As in Remark 4.1, the supermatrices of the form $L=X^{ST}Q+QX$ satisfy $L^{ST}=QXS+X^{ST}QS=LS$. Then, the subspace $W_N^*$ can replace $W_N$ in the above definition of $so(m|2n)(\Lambda_N)$.
The connectedness of $SO_0$ allows us to write any of its elements as a finite product of exponentials of supermatrices in $so_0$, see [8]. In the classical case, a single exponential suffices for such a description since $SO(m)$ is compact and in consequence $\exp: so(m)\to SO(m)$ is surjective. This property, however, does not hold in the group of superrotations $SO_0$, since the exponential map from $sp(2n)$ to the non-compact Lie group $Sp(2n)\cong\{I_m\}\times Sp(2n)\subset SO_0$ is not surjective, whence not every element in $SO_0$ can be written as a single exponential of a supermatrix in $so_0$. Nevertheless, it is possible to find a decomposition for elements of $SO_0$ in terms of a fixed number of exponentials of $so_0$ elements.
Every supermatrix $M\in SO_0$ has a unique decomposition $M=M_0+\mathbf{M}=M_0(I_{m+2n}+L)$, where $M_0$ is its real projection, $\mathbf{M}\in\mathrm{Mat}(m|2n)(\Lambda_N^+)$ its nilpotent projection and $L=M_0^{-1}\mathbf{M}$. We will now separately study the decompositions for $M_0\in SO(m)\times Sp(2n)$ and $I_{m+2n}+L\in SO_0$.
First consider M0 ∈ SO(m)×Sp(2n). We already mentioned that exp : so(m) → SO(m) is surjective,
while exp : sp(2n) → Sp(2n) is not. However, it can be proved that Sp(2n) = exp(sp(2n)) · exp(sp(2n)),
invoking the following polar decomposition for real algebraic Lie groups, see Proposition 4.3.3 in [9].
Proposition 13. Let G ⊂ GL(p) be an algebraic Lie group such that G = GT and g its Lie algebra.
Then every A ∈ G can be uniquely written as A = ReX , R ∈ G ∩ O(p), X ∈ g ∩ Sym(p), where Sym(p)
is the subspace of all symmetric matrices in Rp×p .
Taking $p=2n$ and $G=Sp(2n)$ in the above proposition, we get that every symplectic matrix $D_0$ can be uniquely written as $D_0=R_0e^{Z_0}$ with $R_0\in Sp(2n)\cap O(2n)$ and $Z_0\in sp(2n)\cap\mathrm{Sym}(2n)$. But the group $Sp(2n)\cap O(2n)$ is isomorphic to $U(n)$, which is connected and compact. Then the exponential map from the Lie algebra $sp(2n)\cap so(2n)\cong u(n)$ is surjective on $Sp(2n)\cap O(2n)$. This means that $D_0\in Sp(2n)$ can be written as $D_0=e^{Y_0}e^{Z_0}$ with $Y_0\in sp(2n)\cap so(2n)$ and $Z_0\in sp(2n)\cap\mathrm{Sym}(2n)$. Hence, the supermatrix $M_0\in SO(m)\times Sp(2n)$ can be decomposed as
\[ M_0=\begin{pmatrix}e^{X_0}&0\\ 0&e^{Y_0}e^{Z_0}\end{pmatrix}=\begin{pmatrix}e^{X_0}&0\\ 0&e^{Y_0}\end{pmatrix}\begin{pmatrix}I_m&0\\ 0&e^{Z_0}\end{pmatrix}=e^Xe^Y, \]
where $X=\begin{pmatrix}X_0&0\\ 0&Y_0\end{pmatrix}\in so(m)\times[sp(2n)\cap so(2n)]$ and $Y=\begin{pmatrix}0&0\\ 0&Z_0\end{pmatrix}\in\{0_m\}\times[sp(2n)\cap\mathrm{Sym}(2n)]$.
Now consider the element $I_{m+2n}+L\in SO_0$. As shown at the end of Section 2, the function $\exp:\mathrm{Mat}(m|2n)(\Lambda_N^+)\to I_{m+2n}+\mathrm{Mat}(m|2n)(\Lambda_N^+)$ is a bijection with the logarithmic function defined in (8) as its inverse. Then the supermatrix $Z=\ln(I_{m+2n}+L)$ satisfies $e^Z=I_{m+2n}+L$ and is nilpotent. These properties suffice for proving that $Z\in so_0$. From now on we will denote the set $so_0\cap\mathrm{Mat}(m|2n)(\Lambda_N^+)$ by $so_0(m|2n)(\Lambda_N^+)$.
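The bijectivity of $\exp$ on the nilpotent part rests on the fact that both the exponential and logarithmic series terminate on nilpotent matrices. The following Python sketch illustrates this with an ordinary strictly upper triangular matrix standing in for a supermatrix with nilpotent entries; the helper names `expm_nilpotent` and `logm_unipotent` are ours.

```python
import numpy as np

def expm_nilpotent(X):
    # finite exponential series: X is nilpotent, so the series terminates
    p = X.shape[0]
    out, term = np.eye(p), np.eye(p)
    for j in range(1, p):
        term = term @ X / j
        out = out + term
    return out

def logm_unipotent(U):
    # ln(I + L) = sum_{j>=1} (-1)^{j+1} L^j / j, finite since L is nilpotent
    p = U.shape[0]
    L = U - np.eye(p)
    out, power = np.zeros((p, p)), np.eye(p)
    for j in range(1, p):
        power = power @ L
        out = out + (-1) ** (j + 1) * power / j
    return out

L = np.triu(np.ones((4, 4)), k=1)      # strictly upper triangular => nilpotent
U = np.eye(4) + L                      # unipotent
Z = logm_unipotent(U)
assert np.allclose(expm_nilpotent(Z), U)   # exp is inverted exactly by the finite log
```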
Proposition 14. Let $Z\in\mathrm{Mat}(m|2n)(\Lambda_N^+)$ be such that $e^Z\in SO_0$. Then $Z\in so_0$.
Proof.
It suffices to prove that $e^{tZ}\in SO_0$ for every $t\in\mathbb{R}$. Since $Z$ is nilpotent, the expression $\left(e^{tZ}\right)^{ST}Qe^{tZ}-Q$ can be written as the following polynomial in the real variable $t$:
\[ P(t)=\left(e^{tZ}\right)^{ST}Qe^{tZ}-Q=\left(\sum_{j=0}^N\frac{t^j\left(Z^{ST}\right)^j}{j!}\right)Q\left(\sum_{k=0}^N\frac{t^kZ^k}{k!}\right)-Q \]
\[ =\sum_{k=1}^N\sum_{j=0}^k\frac{t^j\left(Z^{ST}\right)^j}{j!}\,Q\,\frac{t^{k-j}Z^{k-j}}{(k-j)!}=\sum_{k=1}^N\frac{t^k}{k!}\sum_{j=0}^k\binom{k}{j}\left(Z^{ST}\right)^jQZ^{k-j}=\sum_{k=1}^N\frac{t^k}{k!}P_k(Z), \]
where $P_k(Z)=\sum_{j=0}^k\binom{k}{j}\left(Z^{ST}\right)^jQZ^{k-j}$. Since $e^Z\in SO_0$ and $SO_0$ is a group, $e^{kZ}=\left(e^Z\right)^k\in SO_0$ for every integer $k$, whence $P(k)=0$ for infinitely many values of $t$. If $P(t)$ were not identically zero, i.e. not all the $P_k(Z)$ were $0$, we could take $k_0\in\{1,2,\ldots,N\}$ to be the largest subindex for which $P_{k_0}(Z)\neq 0$. Then
\[ \lim_{t\to\infty}\frac{1}{t^{k_0}}P(t)=\frac{P_{k_0}(Z)}{k_0!}\neq 0, \]
while $P$ vanishes along the integers, a contradiction. So $P(t)$ vanishes identically, yielding $e^{tZ}\in SO_0$ for every $t\in\mathbb{R}$.
This way, we have proven the following result.
Theorem 4. Every supermatrix in $SO_0$ can be written as $M=e^Xe^Ye^Z$ with $X\in so(m)\times[sp(2n)\cap so(2n)]$, $Y\in\{0_m\}\times[sp(2n)\cap\mathrm{Sym}(2n)]$ and $Z\in so_0(m|2n)(\Lambda_N^+)$. Moreover, the elements $Y$ and $Z$ are unique.
4.3 Relation with superbivectors

Theorem 3 allows us to compute the dimension of $so_0$ as a real vector space.

Corollary 2. The dimension of the real Lie algebra $so_0$ is $2^{N-1}\left(\frac{m(m-1)}{2}+2mn+n(2n+1)\right)$.
Proof.
Since $so_0$ is the direct sum of the corresponding subspaces of block components $A$, $\grave{B}$, $\grave{C}$ and $D$ respectively, it suffices to compute the dimension of each one of them. According to Theorem 3 (iii) we have:
\[ V_1=\left\{\begin{pmatrix}A&0\\ 0&0\end{pmatrix} : A^T=-A,\ A\in\Lambda_{N,0}^{m\times m}\right\}\cong\Lambda_{N,0}\otimes so(m), \qquad\text{then}\quad \dim V_1=2^{N-1}\frac{m(m-1)}{2}; \]
\[ V_2=\left\{\begin{pmatrix}0&\tfrac{1}{2}\grave{C}^TJ_{2n}\\ \grave{C}&0\end{pmatrix} : \grave{C}\in\Lambda_{N,1}^{2n\times m}\right\}\cong\Lambda_{N,1}\otimes\mathbb{R}^{2n\times m}, \qquad\text{then}\quad \dim V_2=2^{N-1}2mn; \]
\[ V_3=\left\{\begin{pmatrix}0&0\\ 0&D\end{pmatrix} : D^TJ_{2n}+J_{2n}D=0,\ D\in\Lambda_{N,0}^{2n\times 2n}\right\}\cong\Lambda_{N,0}\otimes sp(2n), \qquad\text{then}\quad \dim V_3=2^{N-1}n(2n+1). \]
Comparing this result with the one in Remark 3.1 we obtain that $\dim R^{(2)E}_{m|2n}(\Lambda_N)=\dim so_0$. This means that both vector spaces are isomorphic. This isomorphism also holds on the Lie algebra level.
Following the classical Clifford approach, the commutator
\[ [B,x], \qquad B\in R^{(2)E}_{m|2n}(\Lambda_N),\ x\in R_{m|2n}(\Lambda_N), \tag{13} \]
should be the key to the Lie algebra isomorphism. Proposition 9 shows that for every $B\in R^{(2)E}_{m|2n}(\Lambda_N)$ the commutator (13) defines an endomorphism of $R_{m|2n}(\Lambda_N)$ that can be represented by a supermatrix in $\mathrm{Mat}(m|2n)(\Lambda_N)$. But as explained in Section 2, that supermatrix is not unique. This issue can be solved with the natural extension of the linear operator defined in (13) to $R_{m|2n}(\Lambda_{N+1})$.
Lemma 5. The map $\phi: R^{(2)E}_{m|2n}(\Lambda_N)\to\mathrm{Mat}(m|2n)(\Lambda_N)$ defined by
\[ \phi(B)x=[B,x], \qquad B\in R^{(2)E}_{m|2n}(\Lambda_N),\ x\in R_{m|2n}(\Lambda_{N+1}), \tag{14} \]
takes values in $so_0$. In particular, if we consider $\{b_1,\ldots,b_{2^{N-1}}\}$ and $\{\grave{b}_1,\ldots,\grave{b}_{2^{N-1}}\}$ to be the canonical bases of $\Lambda_{N,0}$ and $\Lambda_{N,1}$ respectively, we obtain the following basis for $so_0$:
\[ \phi(b_re_je_k)=2b_r\begin{pmatrix}E_{k,j}-E_{j,k}&0\\ 0&0\end{pmatrix}, \qquad 1\leq r\leq 2^{N-1},\ 1\leq j<k\leq m, \]
\[ \phi(\grave{b}_re_j\grave{e}_{2k-1})=\grave{b}_r\begin{pmatrix}0&E_{j,2k}\\ 2E_{2k-1,j}&0\end{pmatrix}, \qquad 1\leq r\leq 2^{N-1},\ 1\leq j\leq m,\ 1\leq k\leq n, \]
\[ \phi(\grave{b}_re_j\grave{e}_{2k})=\grave{b}_r\begin{pmatrix}0&-E_{j,2k-1}\\ 2E_{2k,j}&0\end{pmatrix}, \qquad 1\leq r\leq 2^{N-1},\ 1\leq j\leq m,\ 1\leq k\leq n, \]
\[ \phi(b_r\grave{e}_{2j}\grave{e}_{2k})=-b_r\begin{pmatrix}0&0\\ 0&E_{2j,2k-1}+E_{2k,2j-1}\end{pmatrix}, \qquad 1\leq r\leq 2^{N-1},\ 1\leq j\leq k\leq n, \]
\[ \phi(b_r\grave{e}_{2j-1}\grave{e}_{2k-1})=b_r\begin{pmatrix}0&0\\ 0&E_{2j-1,2k}+E_{2k-1,2j}\end{pmatrix}, \qquad 1\leq r\leq 2^{N-1},\ 1\leq j\leq k\leq n, \]
\[ \phi(b_r\grave{e}_{2j-1}\grave{e}_{2k})=b_r\begin{pmatrix}0&0\\ 0&E_{2k,2j}-E_{2j-1,2k-1}\end{pmatrix}, \qquad 1\leq r\leq 2^{N-1},\ 1\leq j\leq k\leq n, \]
\[ \phi(b_r\grave{e}_{2j}\grave{e}_{2k-1})=b_r\begin{pmatrix}0&0\\ 0&E_{2j,2k}-E_{2k-1,2j-1}\end{pmatrix}, \qquad 1\leq r\leq 2^{N-1},\ 1\leq j<k\leq n, \]
where $E_{j,k}$ denotes the matrix with all entries equal to $0$, except the one in the $j$-th row and $k$-th column which is equal to $1$, and the order of $E_{j,k}$ should be deduced from the context.
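The membership checks behind Lemma 5 are easy to replay numerically for the real coefficients. The sketch below (with a hypothetical helper `E(j, k, p)` for the elementary matrices, 1-indexed as in the text) verifies two representative blocks: the $A$-block of $\phi(b_re_je_k)$ is antisymmetric, i.e. lies in $so(m)$, and the $D$-block of $\phi(b_r\grave{e}_{2j-1}\grave{e}_{2k-1})$ satisfies $D^TJ_{2n}+J_{2n}D=0$, i.e. lies in $sp(2n)$.

```python
import numpy as np

def E(j, k, p):
    # elementary p x p matrix E_{j,k} (1-indexed, as in the text)
    M = np.zeros((p, p))
    M[j - 1, k - 1] = 1.0
    return M

m, n = 3, 2
J = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))

# A-block of phi(b_r e_j e_k): 2(E_{k,j} - E_{j,k}) is antisymmetric
j, k = 1, 3
A = 2 * (E(k, j, m) - E(j, k, m))
assert np.allclose(A.T, -A)

# D-block of phi(b_r e'_{2j-1} e'_{2k-1}): E_{2j-1,2k} + E_{2k-1,2j}
j, k = 1, 2
D = E(2 * j - 1, 2 * k, 2 * n) + E(2 * k - 1, 2 * j, 2 * n)
assert np.allclose(D.T @ J + J @ D, 0)
```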
Proof.
The above equalities can be directly obtained from Proposition 9, whence we should only check that all supermatrices obtained above form a basis for $so_0$. The matrices $E_{j,k}$ satisfy the relations
\[ E_{j,k}^T=E_{k,j}, \quad E_{j,2k-1}J_{2n}=E_{j,2k}, \quad E_{j,2k}J_{2n}=-E_{j,2k-1}, \quad J_{2n}E_{2j,k}=E_{2j-1,k}, \quad J_{2n}E_{2j-1,k}=-E_{2j,k}. \]
Then

• for $\phi(b_re_je_k)$ we have $A=2b_r(E_{k,j}-E_{j,k})$, $\grave{B}=0$, $\grave{C}=0$ and $D=0$, whence $A^T=2b_r(E_{j,k}-E_{k,j})=-A$;

• for $\phi(\grave{b}_re_j\grave{e}_{2k-1})$ we have $A=0$, $\grave{B}=\grave{b}_rE_{j,2k}$, $\grave{C}=2\grave{b}_rE_{2k-1,j}$ and $D=0$, whence $\tfrac{1}{2}\grave{C}^TJ_{2n}=\grave{b}_rE_{j,2k-1}J_{2n}=\grave{b}_rE_{j,2k}=\grave{B}$;

• for $\phi(\grave{b}_re_j\grave{e}_{2k})$ we have $A=0$, $\grave{B}=-\grave{b}_rE_{j,2k-1}$, $\grave{C}=2\grave{b}_rE_{2k,j}$ and $D=0$, whence $\tfrac{1}{2}\grave{C}^TJ_{2n}=\grave{b}_rE_{j,2k}J_{2n}=-\grave{b}_rE_{j,2k-1}=\grave{B}$;

• for $\phi(b_r\grave{e}_{2j}\grave{e}_{2k})$ we have $A=0$, $\grave{B}=0$, $\grave{C}=0$ and $D=-b_r(E_{2j,2k-1}+E_{2k,2j-1})$, whence
\[ D^TJ_{2n}+J_{2n}D=-b_r(E_{2k-1,2j}J_{2n}+E_{2j-1,2k}J_{2n}+J_{2n}E_{2j,2k-1}+J_{2n}E_{2k,2j-1})=-b_r(-E_{2k-1,2j-1}-E_{2j-1,2k-1}+E_{2j-1,2k-1}+E_{2k-1,2j-1})=0; \]

• for $\phi(b_r\grave{e}_{2j-1}\grave{e}_{2k-1})$ we have $A=0$, $\grave{B}=0$, $\grave{C}=0$ and $D=b_r(E_{2j-1,2k}+E_{2k-1,2j})$, whence
\[ D^TJ_{2n}+J_{2n}D=b_r(E_{2k,2j-1}J_{2n}+E_{2j,2k-1}J_{2n}+J_{2n}E_{2j-1,2k}+J_{2n}E_{2k-1,2j})=b_r(E_{2k,2j}+E_{2j,2k}-E_{2j,2k}-E_{2k,2j})=0; \]

• for $\phi(b_r\grave{e}_{2j-1}\grave{e}_{2k})$ we have $A=0$, $\grave{B}=0$, $\grave{C}=0$ and $D=b_r(E_{2k,2j}-E_{2j-1,2k-1})$, whence
\[ D^TJ_{2n}+J_{2n}D=b_r(E_{2j,2k}J_{2n}-E_{2k-1,2j-1}J_{2n}+J_{2n}E_{2k,2j}-J_{2n}E_{2j-1,2k-1})=b_r(-E_{2j,2k-1}-E_{2k-1,2j}+E_{2k-1,2j}+E_{2j,2k-1})=0. \]

The above computations show that all supermatrices obtained belong to $so_0$. Direct verification shows that they form a set of $2^{N-1}\frac{m(m-1)}{2}+2^{N-1}2mn+2^{N-1}n(2n+1)$ linearly independent elements, i.e. a basis of $so_0$.
Theorem 5. The map $\phi: R^{(2)E}_{m|2n}(\Lambda_N)\to so_0$ defined in (14) is a Lie algebra isomorphism.

Proof.
From Lemma 5 it follows that $\phi$ is a vector space isomorphism. In addition, due to the Jacobi identity in the associative algebra $A_{m,2n}(\Lambda_N)$, we have for every $B_1, B_2\in R^{(2)E}_{m|2n}(\Lambda_N)$ and $x\in R_{m|2n}(\Lambda_{N+1})$ that
\[ [\phi(B_1),\phi(B_2)]x=\phi(B_1)\phi(B_2)x-\phi(B_2)\phi(B_1)x=[B_1,[B_2,x]]+[B_2,[x,B_1]]=[[B_1,B_2],x]=\phi([B_1,B_2])x, \]
implying, by Lemma 3, that $[\phi(B_1),\phi(B_2)]=\phi([B_1,B_2])$, i.e., $\phi$ is a Lie algebra isomorphism.
5 The Spin group in superspace
So far we have seen that the Lie algebra so0 of the Lie group of superrotations SO0 has a realization
in Am,2n (ΛN ) as the Lie algebra of extended superbivectors. In this section, we discuss the proper way
of defining the corresponding realization of SO0 in T (V )/I, i.e., the analogue of the Spin group in the
Clifford superspace framework.
5.1 Supervector reflections

The group generated by the supervector reflections was briefly introduced in [14] using the notion of the unit super-sphere, defined as the super-surface $S(m|2n)(\Lambda_N)=\{w\in R_{m|2n}(\Lambda_N) : w^2=-1\}$. The reflection associated to the supervector $w\in S(m|2n)(\Lambda_N)$ is defined by $\psi(w)[x]=wxw$, $x\in R_{m|2n}(\Lambda_N)$. It is known from the radial algebra setting that $\psi(w)$ maps vectors into vectors. Indeed,
\[ wxw=\{x,w\}w-w^2x=\{x,w\}w+x=x-2\langle x,w\rangle w. \]
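In the purely bosonic picture (real projection, no odd coordinates) the map $x\mapsto x-2\langle x,w\rangle w$ is the familiar reflection with matrix $I_m-2ww^T$ for a unit vector $w$. The following sketch, an illustration under that classical assumption only, checks the formula, the invariance of the Euclidean inner product, and $\det=-1$, which mirrors the statement $\mathrm{sdet}(\psi(w))=-1$ of Proposition 15.

```python
import numpy as np

rng = np.random.default_rng(2)
m = 4
w = rng.normal(size=m)
w /= np.linalg.norm(w)                  # unit vector: in Clifford terms w^2 = -1

R = np.eye(m) - 2 * np.outer(w, w)      # matrix of x -> x - 2<x,w>w

x = rng.normal(size=m)
assert np.allclose(R @ x, x - 2 * (x @ w) * w)
assert np.allclose(R.T @ R, np.eye(m))       # reflections preserve the inner product
assert np.isclose(np.linalg.det(R), -1.0)    # reflections have determinant -1
```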
Every supervector reflection can be identified with a unique matrix in $O(m|2n)(\Lambda_N)$. On account of the results in Section 2, an option for doing this is extending the operator $\psi(w)$ to $R_{m|2n}(\Lambda_{N+1})$ in the natural way, i.e.
\[ \psi(w)[x]=wxw, \qquad x\in R_{m|2n}(\Lambda_{N+1}). \tag{15} \]
Lemma 6. Let $w=\sum_{j=1}^m w_je_j+\sum_{j=1}^{2n}\grave{w}_j\grave{e}_j\in S(m|2n)(\Lambda_N)$. Then the endomorphism (15) can be represented by a unique supermatrix
\[ \psi(w)=\begin{pmatrix}A(w)&\grave{B}(w)\\ \grave{C}(w)&D(w)\end{pmatrix}\in O(m|2n)(\Lambda_N) \]
with $A(w)=-2D_wE_{m\times m}D_w+I_m$, $\grave{B}(w)=D_wE_{m\times 2n}D_{\grave{w}}J_{2n}$, $\grave{C}(w)=-2D_{\grave{w}}E_{2n\times m}D_w$, and finally $D(w)=D_{\grave{w}}E_{2n\times 2n}D_{\grave{w}}J_{2n}+I_{2n}$, where
\[ D_w=\mathrm{diag}(w_1,\ldots,w_m), \qquad D_{\grave{w}}=\mathrm{diag}(\grave{w}_1,\ldots,\grave{w}_{2n}) \qquad\text{and}\qquad E_{p\times q}\in\mathbb{R}^{p\times q}\text{ is the matrix with all entries equal to }1. \]
Proof.
Observe that $\psi(w)[x]=wxw=\{x,w\}w+x=\sum_{k=1}^m y_ke_k+\sum_{k=1}^{2n}\grave{y}_k\grave{e}_k$, where
\[ y_k=-2\sum_{j=1}^m w_jw_kx_j+x_k+\sum_{j=1}^n\left(\grave{w}_{2j-1}w_k\grave{x}_{2j}-\grave{w}_{2j}w_k\grave{x}_{2j-1}\right), \]
\[ \grave{y}_k=-2\sum_{j=1}^m w_j\grave{w}_kx_j+\grave{x}_k+\sum_{j=1}^n\left(-\grave{w}_{2j-1}\grave{w}_k\grave{x}_{2j}+\grave{w}_{2j}\grave{w}_k\grave{x}_{2j-1}\right). \]
Then $\psi(w)x=\begin{pmatrix}A(w)&\grave{B}(w)\\ \grave{C}(w)&D(w)\end{pmatrix}\begin{pmatrix}x\\ \grave{x}\end{pmatrix}$, where the blocks have the entries
\[ A(w)_{j,k}=-2w_jw_k+\delta_{j,k}, \qquad \grave{B}(w)_{j,2s-1}=-\grave{w}_{2s}w_j, \qquad \grave{B}(w)_{j,2s}=\grave{w}_{2s-1}w_j, \]
\[ \grave{C}(w)_{k,j}=-2w_j\grave{w}_k, \qquad D(w)_{k,2s-1}=\grave{w}_{2s}\grave{w}_k+\delta_{k,2s-1}, \qquad D(w)_{k,2s}=-\grave{w}_{2s-1}\grave{w}_k+\delta_{k,2s}, \]
that is, $A(w)=-2D_wE_{m\times m}D_w+I_m$, $\grave{B}(w)=D_wE_{m\times 2n}D_{\grave{w}}J_{2n}$, $\grave{C}(w)=-2D_{\grave{w}}E_{2n\times m}D_w$ and $D(w)=D_{\grave{w}}E_{2n\times 2n}D_{\grave{w}}J_{2n}+I_{2n}$.
The uniqueness of such a supermatrix in $\mathrm{Mat}(m|2n)(\Lambda_N)$ is guaranteed by Lemma 3, and it is easily seen that $\psi(w)$ belongs to $O(m|2n)(\Lambda_N)$ since
\[ \langle\psi(w)[x],\psi(w)[y]\rangle=-\frac{1}{2}\{wxw,wyw\}=\frac{1}{2}w\{x,y\}w=-\frac{1}{2}\{x,y\}=\langle x,y\rangle. \]
Algebraic operations with the matrices $A(w)$, $\grave{B}(w)$, $\grave{C}(w)$, $D(w)$ are easy since
\[ E_{p\times m}D_w^2E_{m\times q}=\left(\sum_{j=1}^m w_j^2\right)E_{p\times q}, \qquad E_{p\times 2n}D_{\grave{w}}J_{2n}D_{\grave{w}}E_{2n\times q}=2\left(\sum_{j=1}^n\grave{w}_{2j-1}\grave{w}_{2j}\right)E_{p\times q}. \tag{16} \]
We can now define the bosonic Pin group in superspace as
\[ \mathrm{Pin}_b(m|2n)(\Lambda_N)=\{w_1\cdots w_k : w_j\in S(m|2n)(\Lambda_N),\ k\in\mathbb{N}\}, \]
and extend the map $\psi$ to a Lie group homomorphism $\psi:\mathrm{Pin}_b(m|2n)(\Lambda_N)\to O(m|2n)(\Lambda_N)$ by
\[ \psi(w_1\cdots w_k)[x]=w_1\cdots w_k\,x\,w_k\cdots w_1=\psi(w_1)\circ\cdots\circ\psi(w_k)[x]. \]

Proposition 15. Let $w\in S(m|2n)(\Lambda_N)$. Then $\psi(w)\in O_0$ and $\mathrm{sdet}(\psi(w))=-1$.

Proof.
To prove that $\psi(w)\in O_0$ it suffices to prove that $A(w)$, $\grave{B}(w)$, $\grave{C}(w)$, $D(w)$ satisfy (12). This can be easily done using (16) and the identity $-1=w^2=-\sum_{j=1}^m w_j^2+\sum_{j=1}^n\grave{w}_{2j-1}\grave{w}_{2j}$. In fact, we have
\[ A(w)^TA(w)=4D_wE_{m\times m}D_w^2E_{m\times m}D_w-4D_wE_{m\times m}D_w+I_m=4\sum_{j=1}^m w_j^2\,D_wE_{m\times m}D_w-4D_wE_{m\times m}D_w+I_m, \]
\[ \grave{C}(w)^TJ_{2n}\grave{C}(w)=4D_wE_{m\times 2n}D_{\grave{w}}J_{2n}D_{\grave{w}}E_{2n\times m}D_w=8\sum_{j=1}^n\grave{w}_{2j-1}\grave{w}_{2j}\,D_wE_{m\times m}D_w. \]
Then $A(w)^TA(w)-\tfrac{1}{2}\grave{C}(w)^TJ_{2n}\grave{C}(w)=4\left[\sum_{j=1}^m w_j^2-1-\sum_{j=1}^n\grave{w}_{2j-1}\grave{w}_{2j}\right]D_wE_{m\times m}D_w+I_m=I_m$. Also,
\[ A(w)^T\grave{B}(w)=-2D_wE_{m\times m}D_w^2E_{m\times 2n}D_{\grave{w}}J_{2n}+D_wE_{m\times 2n}D_{\grave{w}}J_{2n}=-2\sum_{j=1}^m w_j^2\,D_wE_{m\times 2n}D_{\grave{w}}J_{2n}+D_wE_{m\times 2n}D_{\grave{w}}J_{2n}, \]
\[ \grave{C}(w)^TJ_{2n}D(w)=-2D_wE_{m\times 2n}D_{\grave{w}}J_{2n}D_{\grave{w}}E_{2n\times 2n}D_{\grave{w}}J_{2n}-2D_wE_{m\times 2n}D_{\grave{w}}J_{2n}=-4\sum_{j=1}^n\grave{w}_{2j-1}\grave{w}_{2j}\,D_wE_{m\times 2n}D_{\grave{w}}J_{2n}-2D_wE_{m\times 2n}D_{\grave{w}}J_{2n}. \]
Hence, $A(w)^T\grave{B}(w)-\tfrac{1}{2}\grave{C}(w)^TJ_{2n}D(w)=2\left[-\sum_{j=1}^m w_j^2+1+\sum_{j=1}^n\grave{w}_{2j-1}\grave{w}_{2j}\right]D_wE_{m\times 2n}D_{\grave{w}}J_{2n}=0$. In the same way we have
\[ \grave{B}(w)^T\grave{B}(w)=-J_{2n}D_{\grave{w}}E_{2n\times m}D_w^2E_{m\times 2n}D_{\grave{w}}J_{2n}=-\sum_{j=1}^m w_j^2\,J_{2n}D_{\grave{w}}E_{2n\times 2n}D_{\grave{w}}J_{2n}, \]
\[ D(w)^TJ_{2n}D(w)=J_{2n}D_{\grave{w}}E_{2n\times 2n}D_{\grave{w}}J_{2n}D_{\grave{w}}E_{2n\times 2n}D_{\grave{w}}J_{2n}+2J_{2n}D_{\grave{w}}E_{2n\times 2n}D_{\grave{w}}J_{2n}+J_{2n}=2\left[\sum_{j=1}^n\grave{w}_{2j-1}\grave{w}_{2j}+1\right]J_{2n}D_{\grave{w}}E_{2n\times 2n}D_{\grave{w}}J_{2n}+J_{2n}, \]
whence
\[ \grave{B}(w)^T\grave{B}(w)+\frac{1}{2}D(w)^TJ_{2n}D(w)=\left[-\sum_{j=1}^m w_j^2+1+\sum_{j=1}^n\grave{w}_{2j-1}\grave{w}_{2j}\right]J_{2n}D_{\grave{w}}E_{2n\times 2n}D_{\grave{w}}J_{2n}+\frac{1}{2}J_{2n}=\frac{1}{2}J_{2n}. \]
Then $A(w)$, $\grave{B}(w)$, $\grave{C}(w)$, $D(w)$ satisfy (12) and in consequence $\psi(w)\in O_0$. To prove that $\mathrm{sdet}(\psi(w))=-1$, first observe that $\psi(w)=\psi(w)^{-1}$ since $\psi(w)\circ\psi(w)[x]=wwxww=x$. Hence, due to Theorem 1, we obtain $A(w)=\left(A(w)-\grave{B}(w)D(w)^{-1}\grave{C}(w)\right)^{-1}$, yielding
\[ \mathrm{sdet}(\psi(w))=\frac{\det\left(A(w)-\grave{B}(w)D(w)^{-1}\grave{C}(w)\right)}{\det[D(w)]}=\frac{1}{\det[A(w)]\det[D(w)]}. \]
We will compute $\det[D(w)]$ using the formula $\det[D(w)]=\exp(\mathrm{tr}\ln D(w))$ and the fact that $D(w)-I_{2n}$ is a nilpotent matrix. Observe that
\[ \ln D(w)=\sum_{j=1}^{\infty}(-1)^{j+1}\frac{(D(w)-I_{2n})^j}{j}=\sum_{j=1}^{\infty}(-1)^{j+1}\frac{\left(D_{\grave{w}}E_{2n\times 2n}D_{\grave{w}}J_{2n}\right)^j}{j}. \]
It follows from (16) that
\[ \left(D_{\grave{w}}E_{2n\times 2n}D_{\grave{w}}J_{2n}\right)^j=2^{j-1}\left(\sum_{k=1}^n\grave{w}_{2k-1}\grave{w}_{2k}\right)^{j-1}D_{\grave{w}}E_{2n\times 2n}D_{\grave{w}}J_{2n}. \]
Then,
\[ \ln D(w)=\sum_{j=1}^{\infty}(-1)^{j+1}\frac{2^{j-1}\left(\sum_{k=1}^n\grave{w}_{2k-1}\grave{w}_{2k}\right)^{j-1}}{j}\,D_{\grave{w}}E_{2n\times 2n}D_{\grave{w}}J_{2n} \]
and in consequence,
\[ \mathrm{tr}\ln D(w)=-\sum_{j=1}^{\infty}(-1)^{j+1}\frac{\left(2\sum_{k=1}^n\grave{w}_{2k-1}\grave{w}_{2k}\right)^j}{j}=-\ln\left(1+2\sum_{k=1}^n\grave{w}_{2k-1}\grave{w}_{2k}\right). \]
Hence
\[ \det(D(w))=\frac{1}{1+2\sum_{k=1}^n\grave{w}_{2k-1}\grave{w}_{2k}}. \]
Similar computations yield $\det(A(w))=1-2\sum_{j=1}^m w_j^2$. Since $\sum_{j=1}^m w_j^2=1+\sum_{k=1}^n\grave{w}_{2k-1}\grave{w}_{2k}$, this gives $\det(A(w))=-\left(1+2\sum_{k=1}^n\grave{w}_{2k-1}\grave{w}_{2k}\right)$, which shows that $\mathrm{sdet}(\psi(w))=-1$.
The above proposition states that the Lie group homomorphism $\psi$ takes values in $O_0$, and that its restriction to the bosonic Spin group, defined as
\[ \mathrm{Spin}_b(m|2n)(\Lambda_N)=\{w_1\cdots w_{2k} : w_j\in S(m|2n)(\Lambda_N),\ k\in\mathbb{N}\}, \]
takes values in the subgroup $SO_0$.
In the classical case, the Pin and Spin groups are double coverings of the groups $O(m)$ and $SO(m)$ respectively. A natural question in this setting is whether $\mathrm{Pin}_b(m|2n)(\Lambda_N)$ and $\mathrm{Spin}_b(m|2n)(\Lambda_N)$ cover the groups $O_0$ and $SO_0$. The answer to this question is negative, and the main reason is that the real projection of every vector $w\in S(m|2n)(\Lambda_N)$ lies in the unit sphere $S^{m-1}$ of $\mathbb{R}^m$, i.e.,
\[ [w]_0=\sum_{j=1}^m[w_j]_0\,e_j \qquad\text{and}\qquad [w]_0^2=-1. \]
Then, the real projection of $\psi\left(\mathrm{Pin}_b(m|2n)(\Lambda_N)\right)$ is just $O(m)$, while $[O_0]_0=O(m)\times Sp(2n)$.
that these bosonic versions of Pin and Spin do not describe the symplectic parts of O0 and SO0 . This
phenomenon is due to the natural structure of supervectors: their real projections belong to a space
with an orthogonal structure while the symplectic structure plays no rôle. Up to a nilpotent vector, they
are classical Clifford vectors, whence it is impossible to generate by this approach the real symplectic
geometry that is also present in the structure of O0 and SO0 . That is why we have chosen the name
of "bosonic" Pin and "bosonic" Spin groups. This also explains why we had to extend the space of
superbivectors in section 3. The ordinary superbivectors are generated over ΛN,0 by the wedge product
of supervectors. Then, they can only describe so(m) and not sp(2n) and in consequence, they do not
cover so0 .
As in the classical setting (see [7]), it is possible to obtain the following result, which shows from another point of view that $\mathrm{Pin}_b(m|2n)(\Lambda_N)$ cannot completely describe $O_0$.

Proposition 16. The Lie algebra of $\mathrm{Pin}_b(m|2n)(\Lambda_N)$ is included in $R^{(2)}_{m|2n}(\Lambda_N)$.

Proof.
Let $\gamma(t)=w_1(t)\cdots w_k(t)$ be a path in $\mathrm{Pin}_b(m|2n)(\Lambda_N)$ with $w_j(t)\in S(m|2n)(\Lambda_N)$ for every $t\in\mathbb{R}$ and $\gamma(0)=1$. The tangent to $\gamma$ at $t=0$ is $\frac{d\gamma}{dt}\big|_{t=0}=\sum_{j=1}^k w_1(0)\cdots w_j'(0)\cdots w_k(0)$. We will show that each summand of $\frac{d\gamma}{dt}\big|_{t=0}$ belongs to $R^{(2)}_{m|2n}(\Lambda_N)$.
For $j=1$ we have $w_1'(0)w_2(0)\cdots w_k(0)=-w_1'(0)w_1(0)$. But $w_1(t)w_1(t)\equiv -1$ implies
\[ w_1'(0)w_1(0)+w_1(0)w_1'(0)=0 \quad\Rightarrow\quad \langle w_1'(0),w_1(0)\rangle=0. \]
Then $w_1'(0)w_1(0)=-\langle w_1'(0),w_1(0)\rangle+w_1'(0)\wedge w_1(0)=w_1'(0)\wedge w_1(0)\in R^{(2)}_{m|2n}(\Lambda_N)$. For $j=2$,
\[ w_1(0)w_2'(0)\cdots w_k(0)=w_1(0)w_2'(0)w_2(0)w_1(0)=-\left[w_1(0)w_2'(0)w_1(0)\right]\cdot\left[w_1(0)w_2(0)w_1(0)\right]=-\psi(w_1(0))[w_2'(0)]\cdot\psi(w_1(0))[w_2(0)]. \]
But $\psi(w_1(0))$ preserves the inner product, so $w_1(0)w_2'(0)\cdots w_k(0)=\psi(w_1(0))[w_2'(0)]\wedge\psi(w_1(0))[w_2(0)]\in R^{(2)}_{m|2n}(\Lambda_N)$. We can proceed similarly for every $j=3,\ldots,k$.
5.2
A proper definition for the group Spin(m|2n)(ΛN )
The above approach shows that the radial algebra setting does not contain a suitable realization of SO0 in the Clifford superspace framework. Observe that the Clifford realization of so0, given by R^(2)E_{m|2n}(ΛN), lies outside of the radial algebra Rm|2n(ΛN), which suggests that something similar should happen with the Clifford realization of the Lie group SO0. In this case, a proper definition for the Spin group would be generated by the exponentials (in general contained in T(V)/I) of all the elements in R^(2)E_{m|2n}(ΛN), i.e.

    Spin(m|2n)(ΛN) := { e^{B_1} ··· e^{B_k} : B_1, …, B_k ∈ R^(2)E_{m|2n}(ΛN), k ∈ ℕ },

and the action of this group on Rm|2n(ΛN) is given by the group homomorphism h : Spin(m|2n)(ΛN) → SO0 defined by the restriction to Rm|2n(ΛN) of

    h(e^B)[x] = e^B x e^{−B},    B ∈ R^(2)E_{m|2n}(ΛN),  x ∈ Rm|2n(ΛN+1).
In fact, for every extended superbivector B, h(eB ) maps supervectors into supervectors and admits a
supermatrix representation in Mat(m|2n)(ΛN ) belonging to SO0 . This is summarized below.
Proposition 17. Let B ∈ R^(2)E_{m|2n}(ΛN). Then h(e^B)[x] = e^{φ(B)} x for every x ∈ Rm|2n(ΛN+1).
Proof.
In every associative algebra the identity

    [B, [B, … [B, x] … ]]  (k nested brackets)  =  Σ_{j=0}^{k} C(k,j) B^j x (−B)^{k−j}

holds, where C(k,j) denotes the binomial coefficient. Then,

    h(e^B)[x] = e^B x e^{−B}
              = ( Σ_{k=0}^{∞} B^k / k! ) x ( Σ_{k=0}^{∞} (−B)^k / k! )
              = Σ_{k=0}^{∞} Σ_{j=0}^{k} B^j x (−B)^{k−j} / ( j! (k−j)! )
              = Σ_{k=0}^{∞} (1/k!) Σ_{j=0}^{k} C(k,j) B^j x (−B)^{k−j}
              = Σ_{k=0}^{∞} (1/k!) [B, [B, … [B, x] … ]]
              = Σ_{k=0}^{∞} φ(B)^k x / k!  =  e^{φ(B)} x.
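The algebraic identity driving this proof can be checked numerically. The following sketch (ours, not from the paper) works with plain 2×2 real matrices instead of superspace elements and compares e^B X e^{−B} with the truncated series Σ_k ad_B^k(X)/k!:

```python
# Sketch (ours): check e^B X e^{-B} = sum_k ad_B^k(X)/k! on 2x2 matrices.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(A, c):
    return [[c * a for a in row] for row in A]

def mat_exp(A, terms=30):
    n = len(A)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = mat_scale(mat_mul(term, A), 1.0 / k)
        result = mat_add(result, term)
    return result

def bracket(A, B):  # commutator [A, B]
    return mat_add(mat_mul(A, B), mat_scale(mat_mul(B, A), -1.0))

B = [[0.0, 0.3], [-0.2, 0.1]]
X = [[1.0, 2.0], [0.5, -1.0]]

# left-hand side: e^B X e^{-B}
lhs = mat_mul(mat_exp(B), mat_mul(X, mat_exp(mat_scale(B, -1.0))))

# right-hand side: truncated exp(ad_B) applied to X
rhs = [row[:] for row in X]          # k = 0 term
term = [row[:] for row in X]
for k in range(1, 30):
    term = mat_scale(bracket(B, term), 1.0 / k)
    rhs = mat_add(rhs, term)

err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
print(err)  # close to 0
```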
Remark 5.1. The above proposition means that the Lie algebra isomorphism φ : R^(2)E_{m|2n}(ΛN) → so0 is the derivative at the origin of the Lie group homomorphism h : Spin(m|2n)(ΛN) → SO0, i.e.,

    e^{tφ(B)} = h(e^{tB})   for all t ∈ ℝ, B ∈ R^(2)E_{m|2n}(ΛN).
On account of the connectedness of SO0 it can be shown that the group Spin(m|2n)(ΛN ) is a realization of SO0 in T (V )/I through the representation h.
Theorem 6. For every M ∈ SO0 there exists s ∈ Spin(m|2n)(ΛN) such that h(s) = M.
Proof.
Since SO0 is a connected Lie group (Proposition 12), for every supermatrix M ∈ SO0 there exist X_1, …, X_k ∈ so0 such that e^{X_1} ··· e^{X_k} = M; see Corollary 3.47 in [8]. Taking B_1, …, B_k ∈ R^(2)E_{m|2n}(ΛN) such that φ(B_j) = X_j, j = 1, …, k, we obtain

    M x = e^{X_1} ··· e^{X_k} x = e^{φ(B_1)} ··· e^{φ(B_k)} x = h(e^{B_1}) ∘ ··· ∘ h(e^{B_k})[x] = h(e^{B_1} ··· e^{B_k})[x].

The above equality is valid for every x ∈ Rm|2n(ΛN+1). Then, since h(e^{B_1} ··· e^{B_k}) and M belong to Mat(m|2n)(ΛN), Lemma 3 guarantees that s = e^{B_1} ··· e^{B_k} ∈ Spin(m|2n)(ΛN) satisfies h(s) = M.
The decomposition of SO0 given in Theorem 4 provides the exact number of exponentials of extended superbivectors to be considered in Spin(m|2n)(ΛN) in order to cover the whole group SO0. If we consider the subspaces S1, S2, S3 of R^(2)E_{m|2n}(ΛN) given by

    S1 = φ^{-1}( so(m) × [sp(2n) ∩ so(2n)] ),      dim S1 = m(m−1)/2 + n²,
    S2 = φ^{-1}( {0_m} × [sp(2n) ∩ Sym(2n)] ),     dim S2 = n² + n,
    S3 = φ^{-1}( so0(m|2n)(Λ_N^+) ),               dim S3 = dim so0 − m(m−1)/2 − n(2n+1),

we get the decomposition R^(2)E_{m|2n}(ΛN) = S1 ⊕ S2 ⊕ S3, leading to the subset S = exp(S1) exp(S2) exp(S3) ⊂ Spin(m|2n)(ΛN) which suffices for describing SO0. Indeed, from Theorem 4 it follows that the restriction h : S → SO0 is surjective. We now investigate the explicit form of the superbivectors in each one of the subspaces S1, S2 and S3.
Proposition 18. The following statements hold.

(i) A basis for S1 is given by:
    e_j e_k,                              1 ≤ j < k ≤ m,
    e`_{2j−1} e`_{2k−1} + e`_{2j} e`_{2k},  1 ≤ j ≤ k ≤ n,
    e`_{2j−1} e`_{2k} − e`_{2j} e`_{2k−1},  1 ≤ j < k ≤ n.

(ii) A basis for S2 is given by:
    e`_{2j−1} e`_{2j},                      1 ≤ j ≤ n,
    e`_{2j−1} e`_{2k−1} − e`_{2j} e`_{2k},  1 ≤ j ≤ k ≤ n,
    e`_{2j−1} e`_{2k} + e`_{2j} e`_{2k−1},  1 ≤ j < k ≤ n.

(iii) S3 consists of all elements of the form (10) with b_{j,k}, B_{j,k} ∈ Λ_{N,0} ∩ Λ_N^+ and b`_{j,k} ∈ Λ_{N,1}.
Proof.
We first recall that a basis for the Lie algebra sp(2n) is given by the elements

    A_{j,k} := E_{2j,2k−1} + E_{2k,2j−1},    1 ≤ j ≤ k ≤ n,
    B_{j,k} := E_{2j−1,2k} + E_{2k−1,2j},    1 ≤ j ≤ k ≤ n,
    C_{j,k} := E_{2k,2j} − E_{2j−1,2k−1},    1 ≤ j ≤ k ≤ n,
    D_{j,k} := E_{2j,2k} − E_{2k−1,2j−1},    1 ≤ j < k ≤ n,

where the matrices E_{j,k} ∈ ℝ^{2n×2n} are defined as in Lemma 5. It holds that A_{j,k}^T = B_{j,k} for 1 ≤ j ≤ k ≤ n, C_{j,k}^T = D_{j,k} for 1 ≤ j < k ≤ n and C_{j,j}^T = C_{j,j} for 1 ≤ j ≤ n. Hence, for every matrix D_0 ∈ sp(2n) we have

    D_0   = Σ_{1≤j≤k≤n} ( a_{j,k} A_{j,k} + b_{j,k} B_{j,k} + c_{j,k} C_{j,k} ) + Σ_{1≤j<k≤n} d_{j,k} D_{j,k},
    D_0^T = Σ_{1≤j≤k≤n} ( a_{j,k} B_{j,k} + b_{j,k} A_{j,k} ) + Σ_{1≤j<k≤n} ( c_{j,k} D_{j,k} + d_{j,k} C_{j,k} ) + Σ_{j=1}^{n} c_{j,j} C_{j,j},

where a_{j,k}, b_{j,k}, c_{j,k}, d_{j,k} ∈ ℝ.
(i) From the previous equalities we get that D_0^T = −D_0 if and only if

    D_0 = Σ_{1≤j≤k≤n} a_{j,k} ( A_{j,k} − B_{j,k} ) + Σ_{1≤j<k≤n} c_{j,k} ( C_{j,k} − D_{j,k} ).

Then, a basis for sp(2n) ∩ so(2n) is {A_{j,k} − B_{j,k} : 1 ≤ j ≤ k ≤ n} ∪ {C_{j,k} − D_{j,k} : 1 ≤ j < k ≤ n}. The rest of the proof directly follows from Lemma 5.
(ii) In this case we have that D_0^T = D_0 if and only if

    D_0 = Σ_{1≤j≤k≤n} a_{j,k} ( A_{j,k} + B_{j,k} ) + Σ_{1≤j<k≤n} c_{j,k} ( C_{j,k} + D_{j,k} ) + Σ_{j=1}^{n} c_{j,j} C_{j,j},

whence a basis for sp(2n) ∩ Sym(2n) is

    {A_{j,k} + B_{j,k} : 1 ≤ j ≤ k ≤ n} ∪ {C_{j,j} : 1 ≤ j ≤ n} ∪ {C_{j,k} + D_{j,k} : 1 ≤ j < k ≤ n}.

The rest of the proof directly follows from Lemma 5.

(iii) This trivially follows from Lemma 5.
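As a quick consistency check (ours, not in the paper), counting the basis elements listed in Proposition 18 reproduces the dimension formulas dim S1 = m(m−1)/2 + n² and dim S2 = n² + n stated earlier:

```python
# Quick consistency check (ours): counting the basis elements of
# Proposition 18 reproduces the dimension formulas for S1 and S2.

def dim_S1(m, n):
    # e_j e_k (j < k <= m), plus the two families of grave-vector pairs
    return m * (m - 1) // 2 + n * (n + 1) // 2 + n * (n - 1) // 2

def dim_S2(n):
    # n single products, plus the two families of pairs
    return n + n * (n + 1) // 2 + n * (n - 1) // 2

for m in range(1, 7):
    for n in range(1, 7):
        assert dim_S1(m, n) == m * (m - 1) // 2 + n * n    # = m(m-1)/2 + n^2
        assert dim_S2(n) == n * n + n                      # = n^2 + n
print("basis counts match the dimension formulas")
```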
5.3  Spin covering of the group SO0
It is a natural question in this setting whether the Spin group still is a double covering of the group of rotations, as it is in classical Clifford analysis. In order to answer this question, we investigate how many times S ⊂ Spin(m|2n)(ΛN) covers SO0, or more precisely, the cardinality of the set {s ∈ S : h(s) = M} for a given fixed element M ∈ SO0.
From Proposition 17 we have that the representation h of an element s = e^{B_1} e^{B_2} e^{B_3} ∈ S, B_j ∈ S_j, has the form h(s) = e^{φ(B_1)} e^{φ(B_2)} e^{φ(B_3)}. Following the decomposition M = e^X e^Y e^Z given in Theorem 4 for M ∈ SO0, we get that h(s) = M if and only if e^{φ(B_1)} = e^X, B_2 = φ^{-1}(Y) and B_3 = φ^{-1}(Z). Then, the cardinality of {s ∈ S : h(s) = M} only depends on the number of extended superbivectors B_1 ∈ S1 that satisfy e^{φ(B_1)} = e^X. This reduces our analysis to finding the kernel of the restriction h|_{exp(S1)} : exp(S1) → SO(m) × [Sp(2n) ∩ SO(2n)] of the Lie group homomorphism h to exp(S1). This kernel is given by
    ker h|_{exp(S1)} = { e^B : e^{φ(B)} = I_{m+2n}, B ∈ S1 }.
We recall from Proposition 18 that B ∈ S1 may be written as B = B_o + B_s, where B_o ∈ R^(2)_{0,m} is a classical real bivector and B_s ∈ φ^{-1}({0_m} × [sp(2n) ∩ so(2n)]). The components B_o, B_s commute and, in consequence, e^B = e^{B_o} e^{B_s}. Consider the projections φ_o and φ_s of φ over the algebra of classical bivectors R^(2)_{0,m} and over φ^{-1}({0_m} × [sp(2n) ∩ so(2n)]) respectively, i.e.

    φ_o : R^(2)_{0,m} → so(m),    φ_s : φ^{-1}({0_m} × [sp(2n) ∩ so(2n)]) → sp(2n) ∩ so(2n),

where φ(B) is the block-diagonal supermatrix

    φ(B) = diag( φ_o(B_o), φ_s(B_s) ),

or equivalently, φ_o(B_o)[x] = [B_o, x] for x ∈ R_{m|0}(ΛN+1), and φ_s(B_s)[x] = [B_s, x] for x ∈ R_{0|2n}(ΛN+1).
Hence e^{φ(B)} = I_{m+2n} if and only if e^{φ_o(B_o)} = I_m and e^{φ_s(B_s)} = I_{2n}. For the first condition, we know from classical Clifford analysis that Spin(m) = {e^B : B ∈ R^(2)_{0,m}} is a double covering of SO(m) and, in consequence, e^{φ_o(B_o)} = I_m implies e^{B_o} = ±1. Let us now compute all possible values of e^{B_s} for which e^{φ_s(B_s)} = I_{2n}. To that end, we need the following linear algebra result.
Proposition 19. Every matrix D_0 ∈ so(2n) ∩ sp(2n) can be written in the form D_0 = RΣR^T, where R ∈ SO(2n) ∩ Sp(2n) and

    Σ = diag( θ_1 J_2, …, θ_n J_2 ),  J_2 = [ 0 1 ; −1 0 ],  θ_j ∈ ℝ, j = 1, …, n,    (17)

a block-diagonal matrix with 2×2 blocks.
Proof.
The map Ψ(D_0) = (1/2) Q D_0 (Q^c)^T, where ·^c denotes complex conjugation and

    Q = [ 1 i 0 0 ⋯ 0 0 ;  0 0 1 i ⋯ 0 0 ;  ⋯ ;  0 0 0 0 ⋯ 1 i ] ∈ ℂ^{n×2n}

(row j carries the entries 1 and i in columns 2j−1 and 2j), is a Lie group isomorphism between SO(2n) ∩ Sp(2n) and U(n). In addition, Ψ is its own infinitesimal representation on the Lie algebra level and, in consequence, a Lie algebra isomorphism between so(2n) ∩ sp(2n) and u(n). The inverse of Ψ is given by Ψ^{-1}(L) = (1/2) ( (Q^c)^T L Q + Q^T L^c Q^c ). For every D_0 ∈ so(2n) ∩ sp(2n), let us consider the skew-Hermitian matrix L = Ψ(D_0) ∈ u(n). It is known that every skew-Hermitian matrix is unitarily diagonalizable and all its eigenvalues are purely imaginary, see [10]. Hence, L = Ψ(D_0) can be written as L = U Ξ (U^c)^T, where U ∈ U(n) and Ξ = diag(−iθ_1, …, −iθ_n), θ_j ∈ ℝ. Then, D_0 = Ψ^{-1}(L) = RΣR^T, where R = Ψ^{-1}(U) ∈ SO(2n) ∩ Sp(2n) and Σ = Ψ^{-1}(Ξ) has the form (17).
Since φ_s(B_s) ∈ so(2n) ∩ sp(2n), we have φ_s(B_s) = RΣR^T as in the previous proposition. Hence, e^{φ_s(B_s)} = R e^Σ R^T, where e^Σ is the block-diagonal matrix

    e^Σ = diag( e^{θ_1 J_2}, …, e^{θ_n J_2} )   with   e^{θ_j J_2} = cos θ_j I_2 + sin θ_j J_2.

Hence e^{φ_s(B_s)} = I_{2n} if and only if e^Σ = I_{2n}, which is seen to be equivalent to θ_j = 2k_jπ, k_j ∈ ℤ (j = 1, …, n), or to

    Σ = Σ_{j=1}^{n} 2k_jπ ( E_{2j−1,2j} − E_{2j,2j−1} ),   k_j ∈ ℤ (j = 1, …, n).
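The block computation e^{θ_j J_2} = cos θ_j I_2 + sin θ_j J_2 is easy to verify with a truncated matrix exponential. The following sketch (ours) also confirms that θ = 2kπ gives the identity block:

```python
import math

# Plain 2x2 check (ours): for J2 = [[0,1],[-1,0]],
# exp(theta*J2) = cos(theta)*I2 + sin(theta)*J2, and it equals I2
# exactly when theta is an integer multiple of 2*pi.

def mat_mul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp2(A, terms=60):
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = [[t / k for t in row] for row in mat_mul2(term, A)]
        result = [[r + t for r, t in zip(rr, rt)]
                  for rr, rt in zip(result, term)]
    return result

theta = 0.7
E1 = mat_exp2([[0.0, theta], [-theta, 0.0]])
expected = [[math.cos(theta), math.sin(theta)],
            [-math.sin(theta), math.cos(theta)]]
err_rot = max(abs(E1[i][j] - expected[i][j]) for i in range(2) for j in range(2))

E2 = mat_exp2([[0.0, 2 * math.pi], [-2 * math.pi, 0.0]])
err_id = max(abs(E2[i][j] - (1.0 if i == j else 0.0))
             for i in range(2) for j in range(2))
print(err_rot, err_id)   # both close to 0
```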
Now, SO(2n) ∩ Sp(2n) being connected and compact, there exists B_R ∈ φ^{-1}(so(2n) ∩ sp(2n)) such that R = e^{φ(B_R)}. We recall that the h-action leaves any multivector structure invariant; in particular, h[e^B]( R^(2)E_{m|2n}(ΛN) ) ⊂ R^(2)E_{m|2n}(ΛN) for every B ∈ R^(2)E_{m|2n}(ΛN). Then, using that φ is the derivative at the origin of h, we get that the extended superbivector h(e^{B_R})[φ^{-1}(Σ)] = e^{B_R} φ^{-1}(Σ) e^{−B_R} is such that

    φ( e^{B_R} φ^{-1}(Σ) e^{−B_R} ) = e^{φ(B_R)} Σ e^{−φ(B_R)} = RΣR^T = φ(B_s),

implying that B_s = e^{B_R} φ^{-1}(Σ) e^{−B_R}. Then, in order to compute e^{B_s} = e^{B_R} e^{φ^{-1}(Σ)} e^{−B_R}, we first have to compute e^{φ^{-1}(Σ)}. Following the correspondences given in Lemma 5 we get

    φ^{-1}(Σ) = Σ_{j=1}^{n} 2k_jπ φ^{-1}( E_{2j−1,2j} − E_{2j,2j−1} ) = Σ_{j=1}^{n} k_jπ ( e`_{2j−1}^2 + e`_{2j}^2 ),
and, in consequence,

    e^{φ^{-1}(Σ)} = exp( Σ_{j=1}^{n} k_jπ ( e`_{2j−1}^2 + e`_{2j}^2 ) ) = Π_{j=1}^{n} exp( k_jπ ( e`_{2j−1}^2 + e`_{2j}^2 ) ).    (18)

Let us compute exp( π ( e`_{2j−1}^2 + e`_{2j}^2 ) ), j ∈ {1, …, n}. Consider x = e`_{2j−1} − i e`_{2j} and y = e`_{2j−1} + i e`_{2j}, where i is the usual imaginary unit in ℂ. It is clear that

    xy = e`_{2j−1}^2 + e`_{2j}^2 + i ( e`_{2j−1} e`_{2j} − e`_{2j} e`_{2j−1} ) = e`_{2j−1}^2 + e`_{2j}^2 + i,

and [x, y] = 2i, which is a commuting element. Then, exp( π ( e`_{2j−1}^2 + e`_{2j}^2 ) ) = exp( π xy − iπ ) = −exp( π xy ). In order to compute exp( π xy ) we first prove the following results.
Lemma 7. For every k ∈ ℕ the following relations hold:
(i) y^k x = x y^k − 2ik y^{k−1},
(ii) x^k y^k xy = x^{k+1} y^{k+1} − 2ik x^k y^k.
Proof.
(i) We proceed by induction. For k = 1 we get yx = xy − 2i, which obviously is true. Now assume that (i) is true for k ≥ 1; then for k + 1 we get

    y^{k+1} x = y ( y^k x ) = ( yx ) y^k − 2ik y^k = ( xy − 2i ) y^k − 2ik y^k = x y^{k+1} − 2i(k+1) y^k.

(ii) From (i) we get x^k y^k xy = x^k ( x y^k − 2ik y^{k−1} ) y = x^{k+1} y^{k+1} − 2ik x^k y^k.
Lemma 8. For every k ∈ ℕ it holds that

    (xy)^k = Σ_{j=1}^{k} (−2i)^{k−j} S(k,j) x^j y^j,

where S(k,j) is the Stirling number of the second kind corresponding to k and j.
Remark 5.2. The Stirling number of the second kind S(k,j) is the number of ways of partitioning a set of k elements into j non-empty subsets. Among the properties of the Stirling numbers we recall the following ones:

    S(k,1) = S(k,k) = 1,
    S(k+1, j+1) = S(k,j) + (j+1) S(k, j+1),
    Σ_{k=j}^{∞} S(k,j) x^k / k! = ( e^x − 1 )^j / j!.
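Lemma 8 can be tested in a concrete model of the relation [x, y] = 2i: take y to be multiplication by a variable t and x = 2i d/dt on polynomials. The sketch below is ours; the representation is an assumption used only to check the combinatorics for small k:

```python
# Sketch (ours): model y = "multiply by t", x = 2i * d/dt on polynomials,
# which realizes [x, y] = 2i, and verify Lemma 8 for k = 1..5.
# Polynomials are dicts {degree: coefficient}.

def apply_y(p):
    return {d + 1: c for d, c in p.items()}

def apply_x(p):
    return {d - 1: 2j * d * c for d, c in p.items() if d >= 1}

def apply_word(word, p):
    # word is a string over {'x', 'y'}; operators act right to left
    for op in reversed(word):
        p = apply_x(p) if op == 'x' else apply_y(p)
    return p

def stirling2(k, j):
    if j == 0:
        return 1 if k == 0 else 0
    if j > k:
        return 0
    return stirling2(k - 1, j - 1) + j * stirling2(k - 1, j)

def padd(p, q, scale=1):
    r = dict(p)
    for d, c in q.items():
        r[d] = r.get(d, 0) + scale * c
    return r

test_poly = {0: 1.0, 3: 2.0, 5: -1.0}
for k in range(1, 6):
    lhs = dict(test_poly)
    for _ in range(k):
        lhs = apply_word('xy', lhs)                      # (xy)^k applied
    rhs = {}
    for m in range(1, k + 1):
        rhs = padd(rhs, apply_word('x' * m + 'y' * m, test_poly),
                   scale=((-2j) ** (k - m)) * stirling2(k, m))
    err = max(abs(lhs.get(d, 0) - rhs.get(d, 0)) for d in set(lhs) | set(rhs))
    assert err < 1e-9, (k, err)
print("Lemma 8 verified for k = 1..5")
```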
Proof of Lemma 8.
We proceed by induction. For k = 1 the statement clearly is true. Now assume it to be true for k ≥ 1. Using Lemma 7, we have for k + 1 that

    (xy)^{k+1} = Σ_{j=1}^{k} (−2i)^{k−j} S(k,j) x^j y^j xy
               = Σ_{j=1}^{k} ( (−2i)^{k−j} S(k,j) x^{j+1} y^{j+1} + (−2i)^{k+1−j} j S(k,j) x^j y^j )
               = (−2i)^k xy + Σ_{j=1}^{k−1} (−2i)^{k−j} ( S(k,j) + (j+1) S(k,j+1) ) x^{j+1} y^{j+1} + x^{k+1} y^{k+1}
               = Σ_{j=1}^{k+1} (−2i)^{k+1−j} S(k+1,j) x^j y^j.
Then we obtain

    e^{π xy} = Σ_{k=0}^{∞} ( π^k / k! ) (xy)^k
             = 1 + Σ_{k=1}^{∞} ( π^k / k! ) Σ_{j=1}^{k} (−2i)^{k−j} S(k,j) x^j y^j
             = 1 + Σ_{j=1}^{∞} (−2i)^{−j} ( Σ_{k=j}^{∞} ( (−2πi)^k / k! ) S(k,j) ) x^j y^j
             = 1 + Σ_{j=1}^{∞} (−2i)^{−j} ( ( e^{−2πi} − 1 )^j / j! ) x^j y^j = 1,

from which we conclude that exp( π ( e`_{2j−1}^2 + e`_{2j}^2 ) ) = −exp( π xy ) = −1.
Remark 5.3. Within the algebra Alg_ℝ{e`_1, …, e`_{2n}} the elements e`_{2j−1}, e`_{2j} may be identified with the operators e^{(π/4)i} ∂_{a_j} and e^{−(π/4)i} a_j respectively, the a_j's being real variables. Indeed, these identifications immediately lead to the Weyl algebra defining relations

    e^{(π/4)i} ∂_{a_j} · e^{−(π/4)i} a_k − e^{−(π/4)i} a_k · e^{(π/4)i} ∂_{a_j} = ∂_{a_j} a_k − a_k ∂_{a_j} = δ_{j,k}.

Hence e`_{2j−1}^2 + e`_{2j}^2 may be identified with the harmonic oscillator i ( ∂_{a_j}^2 − a_j^2 ) and, in consequence, the element exp( π ( e`_{2j−1}^2 + e`_{2j}^2 ) ) corresponds to exp( πi ( ∂_{a_j}^2 − a_j^2 ) ). We recall that the classical Fourier transform in one variable can be written as an operator exponential

    F[f] = exp( (π/4) i ) exp( (π/4) i ( ∂_{a_j}^2 − a_j^2 ) )[f].

Hence, exp( πi ( ∂_{a_j}^2 − a_j^2 ) ) = −F^4 = −id, where id denotes the identity operator.
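This conclusion can be cross-checked with the standard spectral data of the harmonic oscillator: on the Hermite functions ψ_n one has (∂_a^2 − a^2)ψ_n = −(2n+1)ψ_n, so exp(πi(∂_a^2 − a^2)) has eigenvalue e^{−πi(2n+1)} = −1 on every basis function. A small check (ours):

```python
import cmath
import math

# Supporting check (ours): (d^2/da^2 - a^2) psi_n = -(2n+1) psi_n on the
# Hermite basis, so exp(pi*i*(d^2/da^2 - a^2)) scales every psi_n by
# exp(-pi*i*(2n+1)) = -1, i.e. it acts as -id.

eigs = [cmath.exp(-1j * math.pi * (2 * n + 1)) for n in range(8)]
assert all(abs(e + 1) < 1e-9 for e in eigs)

# The quarter-angle version reproduces the classical Fourier transform
# F = e^{i pi/4} exp((pi/4) i (d^2/da^2 - a^2)); its eigenvalue on psi_n
# is e^{i pi/4} e^{-(pi/4) i (2n+1)} = (-i)^n, the familiar Fourier
# eigenvalues, so F^4 has eigenvalue 1, consistent with -F^4 = -id above.

fourier_eigs = [cmath.exp(1j * math.pi / 4)
                * cmath.exp(-1j * math.pi / 4 * (2 * n + 1))
                for n in range(6)]
assert all(abs(fourier_eigs[n] - (-1j) ** n) < 1e-9 for n in range(6))
print("exp(pi*i*(d^2 - a^2)) acts as -id on every Hermite function")
```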
Going back to (18) we have

    e^{φ^{-1}(Σ)} = Π_{j=1}^{n} ( exp( π ( e`_{2j−1}^2 + e`_{2j}^2 ) ) )^{k_j} = (−1)^{Σ_j k_j},

whence e^{B_s} = ±1. Then, for B = B_o + B_s ∈ S1 such that e^{φ(B)} = I_{m+2n}, we have e^B = e^{B_o} e^{B_s} = ±1, i.e. ker h|_{exp(S1)} = {−1, 1}.
Theorem 7. The set S = exp(S1 ) exp(S2 ) exp(S3 ) is a double covering of SO0 .
Remark 5.4. As shown before, every extended superbivector of the form B = Σ_{j=1}^{n} (θ_j/2) π ( e`_{2j−1}^2 + e`_{2j}^2 ), θ_j ∈ ℝ, belongs to S1. Then, through the identifications made in Remark 5.3, we can see all the operators

    exp( Σ_{j=1}^{n} (θ_j/2) πi ( ∂_{a_j}^2 − a_j^2 ) ) = Π_{j=1}^{n} exp( θ_j (π/2) i ( ∂_{a_j}^2 − a_j^2 ) ) = Π_{j=1}^{n} exp( −θ_j (π/2) i ) F_{a_j}^{2θ_j}

as elements of the Spin group in superspace. Here, F_{a_j}^{2θ_j} denotes the one-dimensional fractional Fourier transform of order 2θ_j in the variable a_j.
6  Conclusions and future work
In this paper we have shown that vector reflections in superspace are not enough to describe the set of linear transformations leaving the inner product invariant. This constitutes a very important difference with the classical case, in which the algebra of bivectors x ∧ y is isomorphic to the special orthogonal algebra so(m). Such a property is no longer fulfilled in this setting. The real projection of the algebra of superbivectors R^(2)_{m|2n}(ΛN) does not include the symplectic algebra structure which is present in the Lie algebra of supermatrices so0, corresponding to the group of super rotations.
That fact has a major impact on the definition of the Spin group in this setting. The set of
elements defined through the multiplication of an even number of unit vectors in superspace does not
suffice for describing Spin(m|2n)(ΛN ). A suitable alternative, in this case, is to define the (super) spin
elements as products of exponentials of extended superbivectors. Such an extension of the Lie algebra
of superbivectors contains, through the corresponding identifications, harmonic oscillators. This way, we
obtain the Spin group as a cover of the set of superrotations SO0 through the usual representation h. In
addition, every fractional Fourier transform can be identified with a spin element.
In forthcoming work, we will prove the invariance of the (super) Dirac operator ∂x under the corresponding actions of this (super) Spin group. We will also study the invariance of the Hermitian system
under the action of the corresponding Spin subgroup in superspace.
Acknowledgements
Alí Guzmán Adán is supported by a BOF-doctoral grant from Ghent University with grant number
01D06014.
References
[1] F. A. Berezin. Introduction to Super Analysis. D. Reidel Publishing Co., Inc., New York, NY, USA,
1987.
[2] H. De Bie and F. Sommen. A Clifford analysis approach to superspace. Annals of Physics, 322(12):2978–2993, 2007.
[3] H. De Bie and F. Sommen. Correct rules for Clifford calculus on superspace. Advances in Applied Clifford Algebras, 17(3):357–382, 2007.
[4] H. De Schepper, A. Guzman Adan, and F. Sommen. Hermitian Clifford analysis on superspace. Submitted for publication, 2016.
[5] Hennie De Schepper, Alí Guzmán Adán, and Frank Sommen. The radial algebra as an abstract framework for orthogonal and Hermitian Clifford analysis. Complex Analysis and Operator Theory, pages 1–34, 2016.
[6] P. Dienes. The exponential function in linear algebras. The Quarterly Journal of Mathematics, os-1(1):300–309, 1930.
[7] Thomas Friedrich. Dirac operators in Riemannian geometry, volume 25 of Graduate Studies in
Mathematics. American Mathematical Society, Providence, RI, 2000. Translated from the 1997
German original by Andreas Nestke.
[8] Brian Hall. Lie groups, Lie algebras, and representations: An elementary introduction, volume 222
of Graduate Texts in Mathematics. Springer, Cham, second edition, 2015.
[9] Joachim Hilgert and Karl-Hermann Neeb. Structure and geometry of Lie groups. Springer Monographs in Mathematics. Springer, New York, 2012.
[10] Roger A. Horn and Charles R. Johnson. Matrix analysis. Cambridge University Press, Cambridge,
second edition, 2013.
[11] Anthony W. Knapp. Lie groups beyond an introduction, volume 140 of Progress in Mathematics.
Birkhäuser Boston, Inc., Boston, MA, second edition, 2002.
[12] F. Sommen. An algebra of abstract vector variables. Portugaliae Mathematica, 54(3):287–310, 1997.
[13] F. Sommen. An extension of Clifford analysis towards super-symmetry. In Clifford algebras and their applications in mathematical physics, pages 199–224. Springer, 2000.
[14] F. Sommen. Clifford analysis on super-space. Advances in Applied Clifford Algebras, 11(1):291–304,
2001.
[15] F. Sommen. Analysis Using Abstract Vector Variables, pages 119–128. Birkhäuser Boston, Boston,
MA, 2002.
[16] F. Sommen. Clifford analysis on super-space. ii. Progress in analysis, 1:383–405, 2003.
[17] Takeo Yokonuma. Tensor spaces and exterior algebra, volume 108 of Translations of Mathematical
Monographs. American Mathematical Society, Providence, RI, 1992. Translated from the 1977
Japanese edition by the author.
Hennie de Schepper
Clifford Research Group, Department of Mathematical Analysis, Ghent University,
Krijgslaan 281, 9000 Gent, Belgium.
e-mail: [email protected]
Alí Guzmán Adán
Clifford Research Group, Department of Mathematical Analysis, Ghent University,
Krijgslaan 281, 9000 Gent, Belgium.
e-mail: [email protected]
Frank Sommen
Clifford Research Group, Department of Mathematical Analysis, Ghent University,
Krijgslaan 281, 9000 Gent, Belgium.
e-mail: [email protected]
Realizing evaluation strategies by
hierarchical graph rewriting
Petra Hofstedt
Brandenburg University of Technology Cottbus
arXiv:1009.3770v1 [cs.LO] 20 Sep 2010
[email protected]
Abstract. We discuss the realization of evaluation strategies for the concurrent constraint-based functional language ccfl within the translation schemata used when compiling ccfl programs into the hierarchical graph rewriting language lmntal. lmntal's support for expressing local computations and for describing the migration of processes and rules between local computation spaces allows a clear and simple encoding of typical evaluation strategies.
1
Introduction
The Concurrent Constraint Functional Language ccfl is a new multiparadigm constraint programming language combining the functional and the constraint-based paradigms. ccfl allows a pure functional programming style, but also the usage of constraints for the description and solution of problems with incomplete knowledge on the one hand and for the communication and synchronization of concurrent processes on the other hand.
ccfl compiles into another multiparadigm language, i.e. the language lmntal
(pronounced ”elemental”) [UK05,UKHM06]. lmntal realizes a concurrent language model based on rewriting hierarchical graphs. One of its major aims is to
unify various paradigms of computation and, thus, we chose lmntal as a base
model and target language for the CCFL compilation.1
In this paper we discuss the implementation of evaluation strategies for ccfl
within the compilation schemata. The support of lmntal to express local computations and to describe the migration of processes and rules between local
computation spaces allows the realization of typical evaluation strategies in a
clear and simple way.
Sect. 2 introduces programming with ccfl and presents the main language features by example. Sect. 3 is dedicated to the compiler target language
lmntal and its evaluation principles. We discuss the encoding of evaluation
strategies in Sect. 4.
1 In [HL09] we took another approach, with an abstract machine on a parallel multicore architecture as compilation target, and even enabled programming with typical parallelization patterns in ccfl.
Program 2.1 A simple (functional) ccfl program

def add x y = x + y
def addOne  = add 1
def fac x   = case x of 1 -> x;
                        n -> n * fac (n-1)
Program 2.2 ccfl: list length

data List a = Nil | Cons a (List a)

def length l =
  case l of Nil       -> 0;
            Cons x xs -> 1 + length xs
2  Constraint-functional programming with CCFL
The Concurrent Constraint-based Functional Language ccfl combines concepts from the functional and the constraint-based paradigms. We briefly sketch the main conceptual ideas. For a detailed presentation of ccfl's full syntax
and semantics and application examples we refer to [HL09,Hof08].
Functional programming. ccfl’s functional sub-language syntactically borrows
from haskell. The language allows the typical constructs such as case- and let-expressions, function application, some predefined infix operations, constants,
variables, and constructor terms, user-defined data types, higher-order functions
and partial application and it has a polymorphic type system.
Example 1. Prog. 2.1 shows a simple functional ccfl program. We will return to this example later. The following derivation sequence uses a call-by-value strategy. As usual, we denote the n-fold application of the reduction relation → by →^n and its reflexive, transitive closure by →^*. We underline innermost redexes. Note that the given sequence is one of several (equivalent) derivations.
    add (addOne (6+1)) (addOne 8)
    →    add (addOne 7) (addOne 8)
    →    add (addOne 7) (add 1 8)
    →^2  add (addOne 7) 9
    →^3  add 8 9
    →^2  17
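For readers who want to replay the example, the functional fragment of Prog. 2.1 can be transcribed directly to Python (our sketch; Python's strict evaluation plays the role of the call-by-value strategy):

```python
# Our Python transcription of the functional part of Prog. 2.1.

def add(x, y):
    return x + y

def addOne(x):            # addOne = add 1 (partial application, expanded)
    return add(1, x)

def fac(x):
    return x if x == 1 else x * fac(x - 1)

print(add(addOne(6 + 1), addOne(8)))   # 17, the result of the derivation
print(fac(5))                          # 120
```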
Free variables. In ccfl, expressions may contain free variables. Function applications with free variables are evaluated using the residuation principle [Smo93],
that is, function calls are suspended until the variables are bound to expressions
such that a deterministic reduction is possible. For example, a function call 4 + x
with free variable x will suspend. In contrast, consider Prog. 2.2 defining the data
type List a and a length-function on lists. To proceed with the computation of
the expression length (Cons x (Cons 1 (Cons y Nil))) a concrete binding of the
variables x and y is not necessary. The computation yields 3.
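The residuation behavior can be illustrated with a small Python model (ours, not CCFL semantics): representing an unbound variable by an opaque placeholder, length succeeds because it never inspects the list elements, whereas an arithmetic call such as 4 + x would have to suspend:

```python
# Our illustration: a free variable as an opaque placeholder. length never
# touches the elements, so it works without binding x and y; arithmetic on
# a Var would fail (modeling a suspended computation).

class Var:
    """A free (unbound) logic variable -- a hypothetical stand-in."""

Nil = None

def cons(head, tail):
    return (head, tail)

def length(l):
    return 0 if l is Nil else 1 + length(l[1])

x, y = Var(), Var()
print(length(cons(x, cons(1, cons(y, Nil)))))   # 3
```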
In the following, we will use the haskell-typical notation for lists, i.e. [] for the empty list, e.g. [1,6] for a non-empty list, and ":" as the list constructor.
Program 2.3 ccfl: a game of dice

1  fun game :: Int -> Int -> Int -> C
2  def game x y n =
3    case n of 0 -> x =:= 0 & y =:= 0;
4              m -> with x1, y1, x2, y2 :: Int
5                   in dice x1 & dice y1 &
6                      x =:= x1 + x2 & y =:= y1 + y2 &
7                      game x2 y2 (m-1)
8  fun dice :: Int -> C
9  def dice x =
10   member [1,2,3,4,5,6] x
11 fun member :: List a -> a -> C
12 def member l x =
13   l =:= y : ys -> x =:= y |
14   l =:= y : ys -> case ys of []     -> x =:= y;
15                              z : zs -> member ys x
Constraint-based programming. ccfl features equality constraints on functional expressions, user-defined constraints, and conjunctions of these, which enables the description of cooperating processes and non-deterministic behavior.2
As an example consider Prog. 2.3. In lines 2–7 we define a constraint abstraction (or user-defined constraint, resp.) game. A constraint abstraction has a similar form to a functional definition. However, it is allowed to introduce
free variables using the keyword with, the right-hand side of a constraint abstraction may consist of several body alternatives the choice of which is decided
by guards, and each of these alternatives is a conjunction of constraint atoms.
A constraint always has result type C.
The constraint abstraction game initiates a game between two players throwing the dice n times and reaching the overall values x and y, resp.
In lines 5–7 we see a conjunction of constraints which are either applications
of user-defined constraints, like (dice x1) and (game x2 y2 (m−1)), or equalities
e1 =:=e2 on functional expressions.
Constraints represent processes to be evaluated concurrently and they communicate and synchronize by shared variables. This is realized by suspending
function calls (see above) and constraint applications in case of insufficiently
instantiated variables.
Guards in user-defined constraints enable to express non-determinism. For
example the member-constraint in lines 12–15 non-deterministically chooses a
value from a list. Since the match-constraints of the guards of both alternatives
are the same (lines 13 and 14), i.e. l =:= y : ys, the alternatives are chosen
non-deterministically which is used to simulate the dice.
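The non-deterministic commitment to one of the two identically guarded member alternatives can be modeled roughly in Python (our sketch; a random choice stands in for the committed-choice semantics):

```python
import random

# Rough Python model (ours) of member's guard choice in Prog. 2.3:
# alternative 1 commits to the head y, alternative 2 continues with the
# tail ys; which alternative fires is modeled by random.choice.

def member_choose(l):
    if not l:
        raise ValueError("member has no alternative for []")
    y, ys = l[0], l[1:]
    if not ys:
        return y                      # only the terminating case applies
    return random.choice([y, member_choose(ys)])

rolls = {member_choose([1, 2, 3, 4, 5, 6]) for _ in range(1000)}
print(sorted(rolls))   # with high probability, every value 1..6 occurs
```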
2 The integration of external constraint domains (and solvers) such as finite domain constraints or linear arithmetic constraints is discussed in [Hof08].
Note that alternatives by case-expressions as in lines 3 and 4 and alternatives
by guarded expressions as in lines 13 and 14 are fundamentally different concepts.
They do not only differ syntactically by using the keyword case−of and the
separation mark ”;” on the one hand and constraints as guards and the mark ”|”
on the other hand. Of course, the main difference is in the evaluation: while case-alternatives are tested sequentially, guards are checked non-deterministically for
entailment.
The constraint evaluation in ccfl is based on the evaluation of the therein
comprised functional expressions. Thus, we can restrict our presentation of evaluation strategies to the reduction of functional expressions in this paper.
Equality constraints are interpreted as strict. That is, the constraint
e1 =:= e2 is satisfied, if both expressions can be reduced to the same ground
data term [HAB+ 06]. While a satisfiable equality constraint x =:= fexpr produces a binding of the variable x to the functional expression fexpr and terminates with result value Success, an unsatisfiable equality is reduced to the value
Fail representing an unsuccessful computation.
ccfl is a concurrent language. Thus, constraints within conjunctions are
evaluated concurrently. Concerning the functional sub-language of ccfl, we allow the concurrent reduction of independent sub-expressions.
3  LMNtal
The hierarchical graph rewriting language lmntal is the target language of
the compilation of ccfl programs. One of its major aims is to ”unify various paradigms of computation” [UK05] and, thus, it lent itself as base model
and target language for the compilation of ccfl programs. We briefly introduce
the principles of lmntal by example, in particular the concepts necessary to explain our approach in the following. For a detailed discussion of lmntal’s syntax,
semantics, and usage see e.g. [UK05,UKHM06,LMN10].
An lmntal program describes a process consisting of atoms, cells, logical
links, and rules.
An atom p(X1 , ..., Xn ) has a name p and logical links X1 , ..., Xn which may
connect to other atoms and build graphs in this way. For example, the atoms
f (A,B,E), A = 7, g(D,B) are interconnected by the links A and B. Note that
the links of an atom are ordered such that the above atoms can also be denoted
by E = f(A,B), A = 7, B = g (D). As a shortened notation we may also write
E = f(7,g(D)). A and B are inner links in this example; they cannot appear
elsewhere because links are bilateral connections and can, thus, occur at most
twice.
A cell {a*, r*, c*, l*} encloses a process, i.e. atoms a, rules r (see below),
and cells c within a membrane ”{}” and it may encapsulate computations and
express hierarchical graphs in this way. Links l may also appear in cells where
they are used to interconnect with other cells and atoms.
Program 3.1 lmntal: non-deterministic bubble sort

L=[X,Y|L2] :- X > Y | L=[Y,X|L2].

aList = [2,1,5,0,4,6,3].
Program 3.2 lmntal: membranes to encapsulate computations

1  {@r, {$p}, $s} :- {{@r, $p}, $s}.
2  {{@r, $p}/, $s} :- {@r, $p, $s}.
3  {addOne(A,B) :- B = 1 + A.
4   {addOne(2,D)}, {addOne(4,E)}}
Example 2. Consider the following two cells.
{addOne(A,B) :− B=1+A. E=5, {addOne(2,D)}}, {+E}
The first cell encloses a rule addOne(A,B) :− B=1+A., an atom E=5, and
an inner cell {addOne(2,D)} enclosing an atom itself. The second (outer) cell
just contains a link +E connecting into the first cell onto the value 5.
Rules have the form lhs :− rhs and they are used to describe the rewriting
of graphs. Both, the left-hand side lhs and the right-hand side rhs of a rule are
process templates which may contain atoms, cells and rules and further special
constructs (see e.g. [UK05]) among them process contexts and rule contexts.
Contexts may appear within a cell and they refer to the rest of the entities
of this cell. Rule contexts @r are used to represent multisets of rules, process
contexts $p represent multisets of cells and atoms.
Consider the bubble sort rule and a list to sort in Prog. 3.1 as a first and
simple example. In the bubble sort rule, link L connects to the graph [X,Y |L2]
representing a list. Thus, [X,Y |L2] does not describe the beginning of a list but
an arbitrary cutout such that the rule is in general applicable onto every list
position where X > Y holds. Since lmntal does not fix an evaluation strategy,
the sorting process is non-deterministic.
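The strategy-free behavior of this rule can be mimicked in Python (our sketch): repeatedly pick an arbitrary position where the rule applies and swap, until the list term is stable. Whatever order the redexes are chosen in, the result is the sorted list:

```python
import random

# Our Python rendering of the single rewrite rule of Prog. 3.1: pick ANY
# position where two adjacent elements are out of order and swap them;
# repeat until no rule is applicable.

def rewrite_sort(xs):
    xs = list(xs)
    while True:
        redexes = [i for i in range(len(xs) - 1) if xs[i] > xs[i + 1]]
        if not redexes:
            return xs                 # stable: no applicable rule left
        i = random.choice(redexes)    # non-deterministic redex choice
        xs[i], xs[i + 1] = xs[i + 1], xs[i]

print(rewrite_sort([2, 1, 5, 0, 4, 6, 3]))   # [0, 1, 2, 3, 4, 5, 6]
```

Each swap removes exactly one inversion, so the rewriting terminates regardless of the choice of redex — a small confluence argument mirroring the non-deterministic sorting process described above.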
As a second example consider the lmntal Prog. 3.2. Lines 3–4 show a cell, i.e.
a process encapsulated by a membrane ”{}”. It consists of an addOne-rewrite
rule in a prolog-like syntax and two cells each enclosing an addOne-atom by
membranes in line 4. A, B, D, E are links. The addOne-rewrite rule cannot be
applied on the addOne-atoms in line 4 because they are enclosed by extra membranes which prevent them from premature evaluation. The rules in lines 1 and
2, however, operate on a higher level and they allow to describe the shifting of
the addOne-rule into the inner cells and backwards. At this, @r is a rule-context
denoting a (multi)set of rules, and $p and $s are process-contexts which stand
for (multi)sets of cells and atoms. The template {@r, $p}/ in line 2 has a stable flag ”/” which denotes that it can only match with a stable cell, i.e. a cell
containing no applicable rules.
In the current situation, the rule in line 1 is applicable to the cell of the
lines 3–4, where @r matches the addOne-rule, $p matches one of the inner
addOne-atoms and $s stands for the rest of the cell contents. A possible reduction of this cell (in the context of the rules of lines 1–2) is, thus, the following, where each step is annotated with the rule or operation applied:

    {addOne(A,B) :- B = 1 + A. {addOne(2,D)}, {addOne(4,E)}}
    →(1)      { {addOne(A,B) :- B = 1 + A. addOne(4,E)}, {addOne(2,D)} }
    →addOne   { {addOne(A,B) :- B = 1 + A. E = 1 + 4}, {addOne(2,D)} }
    →+        { {addOne(A,B) :- B = 1 + A. E = 5}, {addOne(2,D)} }

The first inner cell is now stable such that no rule is applicable inside. Thus, we can apply the rule from line 2.

    { {addOne(A,B) :- B = 1 + A. E = 5}, {addOne(2,D)} }
    →(2)      {addOne(A,B) :- B = 1 + A. E = 5, {addOne(2,D)}}

In this state, again the first outer rule (line 1) is applicable, which yields the following rewriting sequence and final state:

    {addOne(A,B) :- B = 1 + A. E = 5, {addOne(2,D)}}
    →(1)      { {addOne(A,B) :- B = 1 + A. addOne(2,D)}, E = 5 }
    →addOne   ...
    →*        {addOne(A,B) :- B = 1 + A. E = 5, D = 3}
As one can see from the above example, lmntal supports a prolog-like syntax.
However, there are fundamental differences. Our example already demonstrated
the use of process-contexts, rule-contexts, membrane enclosed cells, and the stable flag. Different from other languages, the head of a rule may contain several
atoms, even cells, rules, and contexts. A further important difference to other
declarative languages are the logical links of lmntal. What one might take for variables in our program, i.e. A, B, D, E, are actually links. Their intended meaning
strongly differs from that of variables. Declarative variables stand for particular
expressions or values and, once bound, they stay bound throughout the computation and are indistinguishable from their value. Links in lmntal also connect to
a structure or value. However, link connections may change. While this is similar
to imperative variables, links are used to interconnect exactly two atoms, two
cells, or an atom and a cell to build graphs and they have, thus, at most two
occurrences. The links D and E in the above example occur only once and, thus,
link to the outside environment. In rules, logical links must occur exactly twice.
Semantically, lmntal is a concurrent language realizing graph rewriting. It inherits properties from concurrent logic languages. lmntal does not support built-in evaluation strategies; the rule choice is non-deterministic, but can be controlled by guards (used e.g. in Prog. 4.1 for the fac-rules, see below).
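For illustration, the interplay of non-deterministic rule choice and mutually exclusive guards can be mimicked in a few lines of Python (a toy model with invented names, not lmntal semantics): any applicable rule may fire, yet the result of the fac-computation is deterministic because exactly one guard holds at a time.

```python
import random

# A "rule" is a (guard, action) pair acting on a state (n, acc); the two
# rules mimic the guarded fac-rules of Prog. 4.1, whose guards are disjoint.
rules = [
    (lambda n, acc: n == 1, lambda n, acc: (None, acc)),       # fac(1) -> acc
    (lambda n, acc: n != 1, lambda n, acc: (n - 1, acc * n)),  # fac(n) -> n * fac(n-1)
]

def run(n):
    acc = 1
    while n is not None:
        applicable = [act for guard, act in rules if guard(n, acc)]
        n, acc = random.choice(applicable)(n, acc)  # non-deterministic choice
    return acc
```

Since the guards are mutually exclusive, `run(5)` always yields 120 no matter which applicable rule the scheduler picks.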
As shown in the example, the encapsulation of processes by membranes makes it possible to express local computations, and processes and rules can migrate between local computation spaces. We will use these techniques to implement evaluation strategies for ccfl.
Program 4.1 Intermediate lmntal compilation result
 1  add(X, Y, V0) :- V0 = X+Y.
 2  addOne(X, V0) :- app(add, 1, V1), app(V1, X, V0).
 3  fac(X, V0) :- X =:= 1 | V0 = 1.
 4  fac(X, V0) :- X =\= 1 | V0 = X*V1, V2 = X-1, app(fac, V2, V1).
 5  app(fac, V1) :- fac(V1).
 6  app(fac, V1, V2) :- fac(V1, V2).
    ...
 9  app(add, V1, V2, V3) :- add(V1, V2, V3).
10  app(V2, V3, V4), add(V1, V2) :- add(V1, V3, V4).
    ...
4 Encoding evaluation strategies
Now, we discuss the compilation of CCFL programs into lmntal code. We start with a presentation of the general translation schemata and then show the realization of a call-by-value strategy and an outermost evaluation strategy. Our compilation schemata are partially based on translation techniques [Nai91,Nai96,War82] for compiling functional into logic languages.
A ccfl function definition is translated into a set of lmntal rules. ccfl data elements and variables are represented and processed by means of certain heap data structures at run-time. However, to clarify the presentation in this section, we represent ccfl variables directly by lmntal links³ and data structures by lmntal atoms. For a detailed discussion of the heap data structures see [Hof08]. ccfl infix operations are mapped onto their lmntal counterparts. Function applications are realized by an atom app(...) and an according app-rule, which is also used for higher-order function application and partial application as discussed in [Hof08]. Case-expressions generate extra lmntal rules for pattern matching; let-constructs are straightforwardly realized by equalities.
Example 3. Consider the ccfl Prog. 2.1. It compiles into the (simplified) lmntal code given in Prog. 4.1 (not yet taking an evaluation strategy into consideration). The additional link arguments V0 of the add-, addOne-, and fac-rewrite rules are used to access the result of the rule application. This is necessary because lmntal explicitly deals with graphs, while a computation with a (constraint-)functional language like ccfl yields an expression as a result. The two fac rules result from the case distinction in the corresponding ccfl function.

Note that the right-hand side of the addOne rule in line 2 represents the lmntal expression (app (app add 1) X) resulting from the ccfl addOne definition in Prog. 2.1 by η-enrichment.
³ Moreover, we tolerate n-fold occurrences of links in rules, where n ≠ 2. This also does not conform with lmntal, where links must occur exactly twice in a rule, but the problem disappears with the introduction of heap data structures as well.
The rules of lines 5–10 are an excerpt of the rule set generated to handle function application, including higher-order functions and partial application (according to a schema adapted from [Nai96,War82]). Thus, in these rules the root symbols appear with different arities.

lmntal evaluates non-deterministically and does not a priori support particular evaluation strategies. Thus, the control of the order of sub-expression evaluation for ccfl is integrated into the generated lmntal code. We realized code generation schemata for different evaluation strategies for ccfl by encapsulating computations by membranes, using ideas similar to those demonstrated in Prog. 3.2. In the following we discuss the realization of a call-by-value and an outermost reduction strategy.
A call-by-value strategy

To realize evaluation strategies, expressions are destructured into sub-expressions which are encapsulated by membranes and interconnected by links. These links express dependencies between sub-expressions on the one hand, and keep the computations apart from each other on the other hand. Consider the ccfl application

    add (addOne (6+1)) (addOne 8)                                      (1)
from Example 1. It is destructured and yields the following lmntal atoms:

    Z = add(X,Y), X = addOne(W), W = 6+1, Y = addOne(8)

or, in an equivalent notation:

    add(X,Y,Z), addOne(W,X), W = 6+1, addOne(8,Y)                      (2)
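This destructuring step can be sketched in Python (a toy flattener; the function name destructure, the tuple encoding, and the link-naming scheme V0, V1, ... are our own illustration, not the actual compiler):

```python
import itertools

_fresh = itertools.count()  # supplies fresh link names V0, V1, ...

def destructure(expr, out):
    """Flatten a nested application ('f', arg1, ..., argN) into relational
    atoms (f, arg1', ..., argN', out); nested sub-expressions get fresh links,
    literal arguments are kept in place."""
    atoms, args = [], []
    for a in expr[1:]:
        if isinstance(a, tuple):            # nested sub-expression: new link
            link = f"V{next(_fresh)}"
            atoms += destructure(a, link)
            args.append(link)
        else:                               # literal argument
            args.append(a)
    atoms.append((expr[0],) + tuple(args) + (out,))
    return atoms

# add (addOne (6+1)) (addOne 8) -- the expression (1) from Example 1
e = ("add", ("addOne", ("+", 6, 1)), ("addOne", 8))
atoms = destructure(e, "Z")
# -> [('+', 6, 1, 'V1'), ('addOne', 'V1', 'V0'),
#     ('addOne', 8, 'V2'), ('add', 'V0', 'V2', 'Z')]
```

Up to the choice of link names, the resulting atom list is exactly the destructured form (2).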
The idea to realize a call-by-value evaluation strategy is now to provide the expressions to be reduced first (i.e. the inner calls W = 6+1 and addOne(8,Y)) with the ruleset, and to delay the outer calls by holding them apart from the rules until the computation of the inner redexes they depend on is finished. To enable a concurrent computation of independent inner sub-expressions as discussed in Example 1, we must assign each independent inner expression (or atom, resp.) a separate membrane, including a copy of the rules. This yields the following structure (where every atom is held within a separate membrane for organizational reasons):
{{add(X,Y,Z)}} {{addOne(W,X)}} {@rules, {W=6+1}} {@rules, {addOne(8,Y)}} (3)
Fig. 1 visualizes an evaluation of (3) using a call-by-value strategy with concurrent evaluation of independent sub-expressions. Reduction step numbers are given in brackets at the right margin. Membranes are represented as enclosing ellipses. Interdependencies of atoms via links control the order of the evaluation; they are represented by arrows.

In the initial state, the atoms W=6+1 and addOne(8,Y) are inner redexes to be reduced first. We mark these by a gray color. They are provided with the lmntal rules @rules generated from the ccfl program. The atoms add(X,Y,Z)
[Figure: six membrane snapshots of the call-by-value evaluation of (3), reduction steps (1)–(6); applications of rules (A) and (B) are marked between the corresponding states.]
Fig. 1. A call-by-value computation sequence
and addOne(W,X) represent outer calls to be delayed until the computation of their inner sub-expressions has finished. Thus, we put them into extra protecting membranes.

To control the order of sub-expression evaluation we need three things:
i. the destructured expression as given in (3),
ii. lmntal rules (denoted by @rules in Fig. 1) generated from the ccfl program; these rules take the destructuring of expressions into consideration and realize local call-by-value evaluations,
iii. a general lmntal ruleset reorganizing the computation spaces when local computations have finished.
(i) and (ii): The destructuring of expressions in the lmntal rules (ii) generated from the ccfl program is handled similarly to (i) as discussed above. Prog. 4.2 shows the lmntal code generated from Prog. 2.1, taking the intended call-by-value evaluation into consideration.

The lmntal rules' right-hand sides consist of cells containing the destructured expressions. Outermost expressions are encapsulated by extra membranes to protect them against premature evaluation, as necessary for an innermost strategy. This effect is observable for the addOne-rule in line 2 and the second fac-rule in line 5, while for the other rules the flat term structure of the right-hand sides of the ccfl functions is just carried over to the generated lmntal rules. To simplify the presentation in Fig. 1, however, we inlined the app-calls for function
Program 4.2 Generated lmntal code for a call-by-value strategy
1  {add(X, Y, V0), $p} :- {V0 = X+Y, $p}.
2  {addOne(X, V0), $p} :- {app(add, 1, V1)}, {{app(V1, X, V0), $p}}.
3  {fac(X, V0), $p} :- X =:= 1 | {V0 = 1, $p}.
4  {fac(X, V0), $p} :- X =\= 1 |
5      {{V0 = X*V1, $p}}, {V2 = X-1}, {{app(fac, V2, V1)}}.
6  {app(fac, V1), $p} :- {fac(V1), $p}.
7  {app(fac, V1, V2), $p} :- {fac(V1, V2), $p}.
   ...
Program 4.3 lmntal code for a call-by-value strategy, simplified version
1  {add(X, Y, V0), $p} :- {V0 = X+Y, $p}.
2  {addOne(X, V0), $p} :- {add(1, X, V0), $p}.
3  {fac(X, V0), $p} :- X =:= 1 | {V0 = 1, $p}.
4  {fac(X, V0), $p} :- X =\= 1 |
5      {{V0 = X*V1, $p}}, {V2 = X-1}, {{fac(V2, V1)}}.
applications (e.g. we applied the app-rule of line 7 to the call app(fac, V2, V1) of line 5) and used, instead of Prog. 4.2, the accordingly simplified Prog. 4.3.

(iii): The reorganization of computation spaces for the call-by-value strategy is mainly realized by two lmntal rules given in Prog. 4.4. They are visualized in Fig. 2 for better understanding. These rules organize the evaluation of outer calls when the inner redexes they depend on have been completely reduced. In the following, we discuss the rules' semantics by means of Fig. 2.
[Figure: schematic diagrams of rules (A) and (B): a stable cell containing @rules and p(...,L) is merged with the cell of q(...,L,...); in case (A) the merged cell again carries @rules, in case (B) it does not.]
Fig. 2. lmntal: rules emulating a call-by-value strategy
Both rules, (A) and (B), are only applicable when the cell {@rules, {p(...,L)}} is stable, i.e. the rules cannot be applied to the atom p(...,L) (or L=p(...), resp.) any further. For rule (A), the atom q(...,L,...) does not contain any further link connected to a process representing a sub-expression evaluation. Thus, both atoms are ready for their combined evaluation, and they are put into one computation cell (or space, resp.) together with the rules. An example for the application of this rule is the computation
Program 4.4 lmntal rules to control the reorganization of computation spaces for a call-by-value reduction of the ccfl compilation result
1  ruleA@@
2    {@rules, {$procs_p, +L, inLinks(0)}}/,
3    {{$procs_q, -L, inLinks(N)}} :-
4      N =:= 1 | {@rules, {$procs_p, $procs_q, inLinks(0)}}.
5  ruleB@@
6    {@rules, {$procs_p, +L, inLinks(0)}}/,
7    {{$procs_q, -L, inLinks(N)}} :-
8      N > 1 | {{$procs_p, $procs_q, inLinks(M), M = N-1}}.
Program 4.5 lmntal compilation result for an outermost strategy
1  {fac(X, V0), $p} :- X =:= 1 | {V0 = 1, $p}.
2  {fac(X, V0), $p} :- X =\= 1 |
3      {V0 = X*V1, $p}, {{V2 = X-1}}, {{fac(V2, V1)}}.
step (2) in Fig. 1, where the atoms W=7 and addOne(W,X) are brought together into one membrane and join into the atom addOne(7,X).

Rule (B) describes the case that the atom q(...,L,...) does contain at least one further link connected to a process itself under evaluation. These links represent sub-expressions of q(...,L,...) (inner redexes, resp.) and are denoted by ingoing links, i.e. arrows, in Fig. 2.⁴ Thus, while p(...,L) and q(...,L,...) can be combined in one membrane, they are not yet ready for evaluation, such that we omit the rules @rules on the right-hand side here. An example is step (6) in Fig. 1: the atoms add(X,Y,Z) and Y=9 are combined in one membrane but not provided with the rules.
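The net effect of rules (A) and (B) on the destructured atoms can be imitated by a tiny scheduler in Python (illustrative only — it works on flat tuples, not on membranes): an atom fires as soon as all the links it consumes are bound, which is precisely the innermost order that the reorganization rules enforce.

```python
def ready(args, env):
    # an argument is available if it is a literal or an already bound link
    return all(not isinstance(a, str) or a in env for a in args)

def eval_atoms(atoms):
    """Evaluate atoms innermost-first: an atom fires once all its input links
    are bound, mimicking the cell reorganization of rules (A) and (B)."""
    env, pending = {}, list(atoms)
    funcs = {"+": lambda x, y: x + y, "add": lambda x, y: x + y,
             "addOne": lambda x: 1 + x}
    while pending:
        atom = next(a for a in pending if ready(a[1:-1], env))
        name, *args, out = atom
        env[out] = funcs[name](*(env[a] if isinstance(a, str) else a
                                 for a in args))
        pending.remove(atom)
    return env

# the destructured atoms (2) for add (addOne (6+1)) (addOne 8)
env = eval_atoms([("add", "X", "Y", "Z"), ("addOne", "W", "X"),
                  ("+", 6, 1, "W"), ("addOne", 8, "Y")])
```

Running this binds W=7, X=8, Y=9 and finally Z=17, matching the evaluation order of Fig. 1.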
An outermost strategy

For call-by-name and lazy evaluation, the computation proceeds on the outermost level. As we will see in the following, copying of the ruleset, as done for the innermost strategy (which may become expensive), is thus not necessary. Apart from that, the mechanisms are quite similar.
Consider the ccfl faculty function from Prog. 2.1. The (simplified) lmntal compilation result taking an outermost strategy into consideration is given in Prog. 4.5. In contrast to Prog. 4.3, now the innermost expressions on the right-hand sides are encapsulated by extra membranes to protect them against premature evaluation. (The flat term structure of the right-hand sides of the ccfl functions add and addOne is again just carried over to the generated lmntal rules and yields the same results as in Prog. 4.3.)

Fig. 3 shows an outermost evaluation sequence of an lmntal process corresponding to the ccfl expression add (addOne (6+1)) (addOne 8). We denote the ruleset @rules only in the first computation state; in general there is only one copy, and it resides on the top level where the computation executes. An
⁴ An outgoing link from q(...,L,...) would accordingly represent its parent expression. Such links are allowed, of course, but omitted in Fig. 2 for easier understanding.
[Figure: membrane snapshots of the outermost evaluation of add (addOne (6+1)) (addOne 8), reduction steps (1)–(5,...); applications of rules (C) and (D) are marked between the corresponding states.]
Fig. 3. An outermost computation sequence
evaluation sequence for an lmntal process corresponding to the ccfl expression fac (addOne (addOne 3)) is shown in Fig. 5.

We show two example evaluation sequences in order to allow, on the one hand, a direct comparison between the sequences in Fig. 1 and Fig. 3 for the expression add (addOne (6+1)) (addOne 8), and, on the other hand, to illustrate a particular aspect of outermost strategies (the unprotect-protect mechanism, described in particular for the sequence in Fig. 5 below).
The two main rules realizing the outermost evaluation are given in Fig. 4. We mark cells on the outermost level, i.e. cells to be reduced first, by the dark gray color as before. Again, the rules are only applicable if the cells with dark gray color are stable, i.e. they cannot be reduced further.

Rule (C) lifts the ("first") inner expression p(...,L) onto the evaluation level in case the outermost term q(...,L,...) is not (yet) reducible. However, for such (light-gray marked) sub-expressions (and except for arithmetic expressions) only one reduction step is allowed before protecting the result again by an extra membrane. This is realized by a variant of each rule of Prog. 4.5 (not shown there). Rule (C) is applied e.g. in the first computation step of Fig. 5. Afterwards the described unprotect-protect mechanism applies. In steps (3) and (4) of Fig. 5, rule (C) is even applied twice, which allows a reduction step on the inner redex addOne(3,Y) in step (6). For the applications of rule (C) in e.g. step (4) of Fig. 5 and steps (2,3) in Fig. 3, the unprotect-protect mechanism does not apply because the outermost expressions are arithmetic ones here, such that we use rule (D) afterwards.
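In a term-based setting, the outermost discipline corresponds to demand-driven evaluation: the outermost call is entered first, and its arguments are forced only when a guard or an arithmetic operation needs their values — which is what rules (C) and (D) arrange at the membrane level. A Python sketch with thunks (our own illustration, not the generated code):

```python
def force(x):
    # a thunk is a zero-argument function; force it until a value appears
    while callable(x):
        x = x()
    return x

def fac(x):
    n = force(x)                  # the guard n =:= 1 demands the argument
    return 1 if n == 1 else n * fac(lambda: n - 1)

def add_one(x):
    return lambda: 1 + force(x)   # result is again a thunk: the outermost
                                  # call stays in control of evaluation

# fac (addOne (addOne 3)), the expression evaluated in Fig. 5
result = fac(add_one(add_one(lambda: 3)))
# result == 120, since addOne (addOne 3) is forced to 5 and fac 5 = 120
```

Note that the outer fac is entered before either addOne has done any work; the inner calls are only reduced when their values are demanded.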
[Figure: schematic diagrams of rules (C) and (D): rule (C) lifts the inner cell of p(...,L) onto the evaluation level of q(...,L,...); rule (D) lifts the argument cells p1(...,L1) and p2(...,L2) of an arithmetic atom R = L1 op L2 onto the outermost level.]
Fig. 4. lmntal: rules emulating an outermost strategy
Program 4.6 lmntal compilation result: list length
length(L, V0), nil(L) :- ...
length(L, V0), cons(X, XS, L) :- ...
Rule (D) can only be applied to expressions with an arithmetic built-in operator as root. Because in this case the outermost term can only be evaluated once the inner expressions are completely evaluated, we lift them onto the outermost level in general. The rule is applied in step (4) of Fig. 3 and step (5) of Fig. 5.

We realized a call-by-name strategy using the presented approach. The encoding of a call-by-need strategy is possible by additionally introducing heap data structures, which are needed anyway to deal with free variables and constraints in ccfl (see above and [Hof08]). These allow the sharing needed for lazy evaluation.
Residuation

Function applications in ccfl may contain free variables. In such a case, we apply the residuation principle [Smo93] (see above). This is realized in the resulting lmntal program by according atoms in the rules' left-hand sides, as in the following example (or by guards, resp., as e.g. in Prog. 4.1 for the fac-rules), checking for the concerned variable bindings.

Example 4. Consider again the ccfl Prog. 2.2 defining a length function. Prog. 4.6 shows an excerpt of the generated lmntal program. For an insufficiently instantiated ccfl expression (length y), the compiler generates a corresponding lmntal atom length(Y,V). The lmntal evaluation of this process together with Prog. 4.6 suspends as long as there is no connection of the link Y to an appropriate atom or graph, i.e. we need an atom nil(Y) or cons(H,T,Y).
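The suspension behaviour can be mimicked in Python (an illustrative model with an explicit 'unbound' marker, not the lmntal mechanism): the call returns a 'suspended' result as long as its argument is not yet connected to a nil/cons structure.

```python
UNBOUND = object()   # placeholder for a link with no nil/cons atom attached

def length(lst):
    """Return the list length, or None ('suspend') while lst is unbound,
    mimicking the lmntal rules of Prog. 4.6 that match on nil/cons atoms."""
    if lst is UNBOUND:
        return None                  # no nil/cons atom linked yet: suspend
    n = 0
    while lst != ():                 # () plays the role of nil
        head, lst = lst              # (head, tail) plays the role of cons
        n += 1
    return n
```

length(UNBOUND) suspends (returns None), while length((1, (2, (3, ())))) reduces to 3 once the list structure is present.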
5 Conclusion
We discussed the realization of typical evaluation strategies for functional programs based on hierarchical graph rewriting. The control of sub-expression evaluation was built into the translation schemata when compiling ccfl programs
into the graph rewriting language lmntal. lmntal is a concurrent language not
[Figure: membrane snapshots of the outermost evaluation of fac (addOne (addOne 3)), reduction steps (1)–(7); applications of rules (C) and (D) are marked between the corresponding states.]
Fig. 5. An outermost computation sequence, II
supporting strategies a priori. However, the ability of lmntal to express hierarchical computation spaces and to migrate processes and rules between them enables a clear and simple strategy control.

Ueda [Ued08b,Ued08a] presents encodings of the pure lambda calculus and the ambient calculus, resp., using lmntal. As in our approach, the membrane construct of lmntal plays an essential role for the encoding and allows a significantly simpler encoding than previous ones, e.g. that of the lambda calculus by Sinot [Sin05] based on token passing in interaction nets. In [BFR05], Banâtre, Fradet, and Radenac identify a basic calculus γ0 containing the very essence of the chemical calculus which, similar to lmntal, is based on multiset rewriting. They show an encoding of the strict λ-calculus, which is straightforward because of the strict nature of γ0, and state that an encoding of a call-by-name λ-calculus is possible too, but more involved. In contrast, lmntal a priori does not support particular evaluation strategies. Instead, lmntal offers extended features like the rewriting of rules and graph hierarchies, which allows one to encapsulate and migrate computations; this was used heavily in our modelling of evaluation strategies as presented in this paper.
Acknowledgment This work has been supported by a postdoctoral fellowship No.
PE 07542 from the Japan Society for the Promotion of Science (JSPS).
References

BFR05.   J.-P. Banâtre, P. Fradet, and Y. Radenac. Principles of Chemical Programming. Electronic Notes in Theoretical Computer Science, 124(1):133–147, 2005.
HAB+06.  M. Hanus, S. Antoy, B. Braßel, H. Kuchen, F.J. Lopez-Fraguas, W. Lux, J.J. Moreno Navarro, and F. Steiner. Curry: An Integrated Functional Logic Language. Technical report, 2006. Version 0.8.2 of March 28, 2006.
HL09.    P. Hofstedt and F. Lorenzen. Constraint Functional Multicore Programming. In Informatik 2009. Proceedings, volume 154 of LNI – Lecture Notes in Informatics, pages 367, 2901–2915, 2009.
Hof08.   P. Hofstedt. CCFL – A Concurrent Constraint Functional Language. Technical Report 2008-08, Technische Universität Berlin, 2008. http://www-docs.tu-cottbus.de/programmiersprachen-compilerbau/public/publikationen/2008/2008-08.pdf.
LMN10.   LMNtal PukiWiki. http://www.ueda.info.waseda.ac.jp/lmntal/, 2010. Last visited 12 May 2010.
Nai91.   L. Naish. Adding equations to NU-Prolog. In Programming Language Implementation and Logic Programming – PLILP, volume 528 of LNCS, pages 15–26. Springer, 1991.
Nai96.   L. Naish. Higher-order logic programming in Prolog. Technical Report 96/2, Department of Computer Science, University of Melbourne, 1996.
Sin05.   F.-R. Sinot. Call-by-Name and Call-by-Value as Token-Passing Interaction Nets. In P. Urzyczyn, editor, Typed Lambda Calculi and Applications – TLCA, volume 3461 of LNCS, pages 386–400. Springer, 2005.
Smo93.   G. Smolka. Residuation and Guarded Rules for Constraint Logic Programming. In F. Benhamou and A. Colmerauer, editors, Constraint Logic Programming. Selected Research, pages 405–419. The MIT Press, 1993.
Ued08a.  K. Ueda. Encoding Distributed Process Calculi into LMNtal. ENTCS, 209:187–200, 2008.
Ued08b.  K. Ueda. Encoding the Pure Lambda Calculus into Hierarchical Graph Rewriting. In A. Voronkov, editor, Rewriting Techniques and Applications – RTA, volume 5117 of LNCS, pages 392–408. Springer, 2008.
UK05.    K. Ueda and N. Kato. LMNtal: a Language Model with Links and Membranes. In Fifth International Workshop on Membrane Computing (WMC 2004), volume 3365 of LNCS, pages 110–125. Springer, 2005.
UKHM06.  K. Ueda, N. Kato, K. Hara, and K. Mizuno. LMNtal as a Unifying Declarative Language. In T. Schrijvers and T. Frühwirth, editors, Third Workshop on Constraint Handling Rules, Technical Report CW 452, pages 1–15. Katholieke Universiteit Leuven, 2006.
War82.   D.H.D. Warren. Higher-order extensions to PROLOG: Are they needed? Machine Intelligence, 10:441–454, 1982.
Character triples and Shoda pairs
Gurmeet K. Bakshi and Gurleen Kaur∗†
Centre for Advanced Study in Mathematics,
Panjab University, Chandigarh 160014, India
arXiv:1702.00955v1 [] 3 Feb 2017
email: [email protected] and [email protected]
Abstract
In this paper, a construction of Shoda pairs using character triples is given
for a large class of monomial groups including abelian-by-supersolvable and
subnormally monomial groups. The computation of primitive central idempotents and the structure of simple components of the rational group algebra
for groups in this class are also discussed. The theory is illustrated with examples.
Keywords : rational group algebra, primitive central idempotents, simple components, Shoda pairs, strong Shoda pairs, character triples, monomial groups.
MSC2000 : 16S34, 16K20, 16S35
1 Introduction
Given a finite group G, Shoda ([5], Corollary 45.4) gave a criterion to determine
whether an induced monomial representation of G is irreducible or not. Olivieri,
del Río and Simón [14] rephrased Shoda's theorem as follows:
If χ is a linear character of a subgroup H of G with kernel K, then the
induced character χG is irreducible if, and only if, the following hold:
(i) K E H, H/K is cyclic;
(ii) if g ∈ G and [H, g] ∩ H ⊆ K, then g ∈ H.
A pair (H, K) of subgroups of G satisfying (i) and (ii) above is called a Shoda pair of G. For K E H ≤ G, define:
$$\widehat{H} := \frac{1}{|H|} \sum_{h \in H} h,$$

$$\varepsilon(H, K) :=
\begin{cases}
\widehat{K}, & H = K;\\
\prod (\widehat{K} - \widehat{L}), & \text{otherwise},
\end{cases}$$

where L runs over all the minimal normal subgroups of H containing K properly, and

e(G, H, K) := the sum of all the distinct G-conjugates of ε(H, K).

∗ Research supported by NBHM, India, is gratefully acknowledged.
† Corresponding author
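To make the definitions concrete, here is a small computational sanity check in Python (our own illustration, not from the paper): for G = S3, H = A3 and K = 1, the pair (H, K) satisfies condition (ii) — and H/K ≅ C3 is cyclic — and, since C3 is simple, the only minimal normal subgroup of H properly containing K is H itself, so ε(H, K) = K̂ − Ĥ, which is indeed an idempotent of QG.

```python
from fractions import Fraction
from itertools import permutations

def mul(p, q):                       # composition of permutations: (p*q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(3))

def inv(p):                          # inverse permutation
    r = [0, 0, 0]
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

G = [tuple(p) for p in permutations(range(3))]   # S3 as permutations of {0,1,2}
e = (0, 1, 2)
H = [e, (1, 2, 0), (2, 0, 1)]                    # A3, cyclic of order 3
K = [e]                                          # the trivial subgroup

# condition (ii): if [H, g] ∩ H ⊆ K then g must lie in H
for g in G:
    comm = {mul(mul(inv(h), inv(g)), mul(h, g)) for h in H}   # the set [H, g]
    if comm & set(H) <= set(K):
        assert g in H

def hat(X):                          # X^ = (1/|X|) Σ_{x∈X} x as a dict in QG
    return {x: Fraction(1, len(X)) for x in X}

def amul(a, b):                      # product in the group algebra QG
    c = {}
    for p, cp in a.items():
        for q, cq in b.items():
            r = mul(p, q)
            c[r] = c.get(r, Fraction(0)) + cp * cq
    return {p: v for p, v in c.items() if v}

def asub(a, b):                      # difference in QG
    c = dict(a)
    for p, v in b.items():
        c[p] = c.get(p, Fraction(0)) - v
    return {p: v for p, v in c.items() if v}

# H ≅ C3 is simple, so ε(H, K) = K^ − H^ here
eps = asub(hat(K), hat(H))
assert amul(eps, eps) == eps         # ε(H, K) is an idempotent of QS3
```

Since A3 is normal in S3, ε(H, K) is invariant under conjugation, so for this pair e(G, H, K) = ε(H, K).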
An important feature ([14], Theorem 2.1) of a Shoda pair (H, K) of G is that there
is a rational number α, necessarily unique, such that αe(G, H, K) is a primitive
central idempotent of the rational group algebra QG, called the primitive central
idempotent of QG realized by the Shoda pair (H, K). We’ll denote this α by
α(G,H,K). For monomial groups, all the primitive central idempotents of QG are realized by Shoda pairs of G. For a Shoda pair (H, K) of G, the case when e(G, H, K) is itself a primitive central idempotent of QG is of special interest; this leads to the following definition of a strong Shoda pair. A strong Shoda pair [14] of G is a pair (H, K) of subgroups of G satisfying the following conditions:
(i) K E H E NG (K);
(ii) H/K is cyclic and a maximal abelian subgroup of NG (K)/K;
(iii) the distinct G-conjugates of ε(H, K) are mutually orthogonal.
In [14], it is proved that if (H, K) is a strong Shoda pair of G, then it is also a Shoda
pair of G and e(G, H, K) is a primitive central idempotent of QG. The groups G
such that all the primitive central idempotents of QG are realized by strong Shoda
pairs of G are termed as strongly monomial groups. Examples of such groups
include abelian-by-supersolvable groups ([14], Theorem 4.4). The main reason for
defining strong Shoda pairs in [14] was that the authors were able to provide a
description of the structure of the simple component QGe(G, H, K) of QG for a
strong Shoda pair (H, K) of G.
The work in [14] thus leads to the problem of computing Shoda pairs of a
given finite group G and to provide a description of the structure of the simple
components of QG corresponding to the primitive central idempotents realized by
them. The interest is in fact in providing a method to obtain a set S of Shoda
pairs of G such that the mapping (H, K) 7→ α(G,H,K) e(G, H, K) defines a bijection
from S to the set of all primitive central idempotents of QG realized by Shoda
pairs of G. Such a set S is called a complete and irredundant set of Shoda pairs
of G, and has recently been provided by the first author with Maheshwary [2] for
normally monomial groups. For the work in this direction, also see [1] and [3].
In this paper, we plan to study the problem for the class C of all finite groups G
such that all the subgroups and quotient groups of G satisfy the following property:
2
either it is abelian or it contains a non central abelian normal subgroup. The groups
in C are known to be monomial ([11], Lemma 24.2). However, we have noticed
that C is not contained in the class of strongly monomial groups. Huppert ([11],
Theorem 24.3) proved that C contains all the groups G for which there is a solvable
normal subgroup N of G such that all Sylow subgroups of N are abelian and G/N
is supersolvable. In particular, C contains abelian-by-supersolvable groups. In view
of an important criterion of subnormally monomial groups given in [8] and [9], we
have shown in section 2 that C also contains all subnormally monomial groups and,
in particular, normally monomial groups. Our aim is to extend the work to the
class C.
An important tool which has turned out to be useful is Isaacs’s notion of character triples together with Clifford’s correspondence theorem. Following Isaacs ([12],
p.186), we have defined N-linear character triples of G for a normal subgroup N
of G. In view of Clifford’s correspondence theorem ([12], Theorem 6.11), for each
N-linear character triple of G, we have defined its direct Clifford correspondents,
which is another set of N-linear character triples of G with useful properties proved
in Theorem 1 of section 3. With its help, we have given, in section 4, a construction of Shoda pairs of groups in C. For each normal subgroup N of G, we have
constructed a rooted directed tree GN , whose particular leaves correspond to Shoda
pairs of G, if G ∈ C (Theorem 2). We have also explored the condition for the
collection of Shoda pairs corresponding to these leaves of GN as N runs over all
the normal subgroups of G to be complete and irredundant. In section 5, we have
given a new character free expression of α(G,H,K), where (H, K) is a Shoda pair of
G corresponding to a leaf of GN . This expression is in terms of the directed path
from the root to the corresponding leaf and enables us to provide a necessary and
sufficient condition for e(G, H, K) to be a primitive central idempotent of QG. In
section 6, we generalize Proposition 3.4 of [14] and determine the structure of the
simple components of QG for G ∈ C. Finally, in section 7, we provide illustrative
examples.
2 The class C of monomial groups
Throughout this paper, G denotes a finite group. By H ≤ G, H < G, H E G, we mean, respectively, that H is a subgroup, a proper subgroup, a normal subgroup of G. Denote by [G : H] the index of H in G. Also NG(H) denotes the normalizer of H in G, and coreG(H) = ∩_{x∈G} xHx^{-1} is the largest normal subgroup of G contained in H. For x, y ∈ G, [x, y] = x^{-1}y^{-1}xy is the commutator of x and y, and CenG(x) = {g ∈ G | gx = xg} is the centralizer of x in G. Denote by Irr G the set of all complex irreducible characters of G. For a character χ of G, ker χ = {g ∈ G | χ(g) = χ(1)}, and Q(χ) denotes the field obtained by adjoining to Q the character values χ(g), g ∈ G. If ψ is a character of a subgroup H of G and x ∈ G, then ψ^x is the character of H^x = x^{-1}Hx given by ψ^x(g) = ψ(xgx^{-1}), g ∈ H^x. Denote by ψ^G the character ψ induced to G. For a subgroup A of H, ψ_A denotes the restriction of ψ to A.
Let C denote the class of all finite groups G such that all the subgroups and
quotient groups of G satisfy the following property: either it is abelian or it contains a non central abelian normal subgroup. It follows from Lemma 24.2 of [11]
that the groups in C are monomial. Recall that a finite group is monomial if every
complex irreducible character of the group is induced by a linear character of a
subgroup. In this section, we compare C with the following classes of groups:
Ab      : Abelian groups
Nil     : Nilpotent groups
Sup     : Supersolvable groups
A       : Solvable groups with all the Sylow subgroups abelian
nM      : Normally monomial groups, i.e., groups with all the complex irreducible
          characters induced from linear characters of normal subgroups
sM      : Subnormally monomial groups, i.e., groups with all the complex irreducible
          characters induced from linear characters of subnormal subgroups
stM     : Strongly monomial groups
X       : Solvable groups G satisfying the following condition: for all primes p
          dividing the order of G and for all subgroups A of G, O^p(A), the unique
          smallest normal subgroup of A such that A/O^p(A) is a p-group, has no
          central p-factor
X-by-Y  : Groups G such that there exists a normal subgroup N with N ∈ X and
          G/N ∈ Y.
We prove the following:
Proposition 1 The following statements hold:
(i) Ab-by-Nil ⊆ Ab-by-Sup ⊆ A-by-Sup ⊆ C;
(ii) (nM ∪ Ab-by-Nil) ⊆ sM ⊆ X ⊆ C;
(iii) A-by-Sup ⊈ X;
(iv) X ⊈ A-by-Sup;
(v) C ⊈ stM.
Proof. (i) Clearly, Ab-by-Nil ⊆ Ab-by-Sup ⊆ A-by-Sup. From ([11], Theorem
24.3), it follows that A-by-Sup ⊆ C. This proves (i).
(ii) It is obvious that nM ⊆ sM. We now show that Ab-by-Nil ⊆ sM. Let G ∈
Ab-by-Nil. Let A be a normal abelian subgroup of G such that G/A is nilpotent.
Let χ ∈ Irr G. It is already known that χ is monomial. By ([11], Lemma 24.8),
there exists a subgroup H of G containing A such that χ is induced from a linear
character on H. As H/A is a subgroup of the nilpotent group G/A, it is subnormal
in G/A. Consequently, H is subnormal in G. This proves that G ∈ sM. Next, by
([9], Theorem 3.7), we have sM ⊆ X. We now show that X ⊆ C. By Lemma
2.6 of [8], X is closed under taking subgroups and factor groups. Thus to prove
that X ⊆ C, we only need to show that every non abelian group in X contains a
non central abelian normal subgroup. Let G ∈ X. If G is nilpotent, then clearly
it has the desired property. If G is not nilpotent, then Lemma 2.7 of [8] implies
that σ(G), the socle of G, is non central in G. Also, in view of ([13], Lemma 3.11,
Problem 2A.5), σ(G) is abelian, as G is solvable. Hence G has the desired property
and it follows that X ⊆ C.
(iii) Consider the group G generated by a, b, c, d with defining relations: a^2 = b^3 = c^3 = d^3 = 1, a^{-1}ba = b^{-1}, a^{-1}ca = c^{-1}, a^{-1}da = d, b^{-1}cb = cd, b^{-1}db = d, c^{-1}dc = d. It is easy to see that G is supersolvable and hence belongs to A-by-Sup. We'll show that G ∉ X. Let CSF be the class of all chiefly sub-Frobenius groups, i.e., all finite solvable groups G for which CenG(kL) is subnormal in G whenever kL is an element of a chief factor K/L of G. It is easy to see that supersolvable groups are chiefly sub-Frobenius, and hence G ∈ CSF. Also it is known ([9], Theorem 3.8) that sM = CSF ∩ X. Now if G ∈ X, then it follows that G is subnormally monomial. It can be shown that G has an irreducible character of degree 3, denoted by χ say. If G is subnormally monomial, then χ is induced from a linear character of a subnormal subgroup H of G. Also, χ(1) = 3 implies that [G : H] = 3, which yields that H is in fact normal in G. However, G does not have any normal subgroup of index 3, a contradiction. This proves that G ∉ X, and (iii) follows.
(iv) Consider G=SmallGroup(192, 1025) in GAP[7] library. It can be checked using
GAP that G is subnormally monomial and hence belongs to the class X but it does
not belong to A-by-Sup.
(v) Simple computations using Wedderga [4] reveal that SmallGroup(1000, 86) is not strongly monomial. However, it belongs to C. This proves (v).
3 N-linear character triples
Let G be a finite group. Let H ≤ G and (H, A, ϑ) a character triple, i.e., A E H and ϑ ∈ Irr A is invariant in H. We call it an N-linear character triple of G if ϑ is linear and ker ϑG = N. For an N-linear character triple (H, A, ϑ) of G, denote by Irr(H|ϑ) the set of all irreducible characters ψ of H which lie above ϑ, i.e., for which the restriction ψA of ψ to A has ϑ as a constituent. Let Irr̃(H|ϑ) be the subset of Irr(H|ϑ) consisting of those ψ which satisfy ker ψG = N, and denote by Liñ(H|ϑ) the subset of Irr̃(H|ϑ) consisting of linear characters. Further, for the character triple (H, A, ϑ), we fix a normal subgroup A(H,A,ϑ) of H of maximal order containing ker ϑ such that A(H,A,ϑ)/ker ϑ is abelian. Note that there may be several choices of such an A(H,A,ϑ); however, we fix one such choice for a given triple (H, A, ϑ). Observe that A(H,A,ϑ)/ker ϑ always contains the center of H/ker ϑ. However, it can be seen that if G ∈ C and H/ker ϑ is non abelian, then A(H,A,ϑ)/ker ϑ properly contains the center of H/ker ϑ. We shall later use this observation without mention.
Given an N-linear character triple (H, A, ϑ) of G, we now construct a set Cl(H, A, ϑ) of further N-linear character triples of G, needed for the purpose of constructing Shoda pairs of G.
Construction of Cl(H, A, ϑ)
Let Aut(C|ϑ) be the group of automorphisms of the field C of complex numbers which keep Q(ϑ) fixed. For brevity, denote A(H,A,ϑ) by 𝒜. Consider the action of Aut(C|ϑ) on Liñ(𝒜|ϑ) given by

σ.ϕ = σ ◦ ϕ,  σ ∈ Aut(C|ϑ), ϕ ∈ Liñ(𝒜|ϑ).

Also H acts on Liñ(𝒜|ϑ) by

h.ϕ = ϕh,  h ∈ H, ϕ ∈ Liñ(𝒜|ϑ).

Notice that the two actions on Liñ(𝒜|ϑ) are compatible in the sense that

σ.(h.ϕ) = h.(σ.ϕ),  h ∈ H, σ ∈ Aut(C|ϑ), ϕ ∈ Liñ(𝒜|ϑ).

This consequently gives an action of Aut(C|ϑ) × H on Liñ(𝒜|ϑ). Under this double action, denote by Lin(𝒜|ϑ) a set of representatives of the distinct orbits of Liñ(𝒜|ϑ). If H ≠ A, set

Cl(H, A, ϑ) = {(IH(ϕ), 𝒜, ϕ) | ϕ ∈ Lin(𝒜|ϑ)},
where IH(ϕ) = {h ∈ H | ϕh = ϕ} is the inertia group of ϕ in H. For H = A, define Cl(H, A, ϑ) to be the empty set. Note that all the character triples in Cl(H, A, ϑ) are N-linear character triples of G, and we call them the direct Clifford correspondents (abbreviated d.c.c.) of (H, A, ϑ). The name 'direct Clifford correspondents' refers to the fact that the characters in Irr̃(IH(ϕ)|ϕ) are Clifford correspondents of the characters in Irr̃(H|ϑ), in view of the following theorem:
Theorem 1 Let G ∈ C and N a normal subgroup of G. Let (H, A, ϑ) be an N-linear character triple of G with H ≠ A, and let Cl(H, A, ϑ) be as defined above. Let 𝒜 = A(H,A,ϑ). Then

(i) for any (IH(ϕ), 𝒜, ϕ) ∈ Cl(H, A, ϑ), the following hold:

(a) A ⫋ 𝒜. Furthermore, 𝒜 = H = IH(ϕ) holds if, and only if, H/ker ϑ is abelian;

(b) IH(ϕ) E NH(ker ϕ);

(c) the induction ψ ↦ ψH defines an injective map from Irr̃(IH(ϕ)|ϕ) to Irr̃(H|ϑ).

(ii) for each χ ∈ Irr̃(H|ϑ), there exist (IH(ϕ), 𝒜, ϕ) ∈ Cl(H, A, ϑ), ψ ∈ Irr̃(IH(ϕ)|ϕ) and σ ∈ Aut(C) such that χ = σ ◦ ψH.

(iii) if (IH(ϕ1), 𝒜, ϕ1), (IH(ϕ2), 𝒜, ϕ2) ∈ Cl(H, A, ϑ), ψ1 ∈ Irr̃(IH(ϕ1)|ϕ1), ψ2 ∈ Irr̃(IH(ϕ2)|ϕ2) and σ ∈ Aut(C) are such that ψ2H = σ ◦ ψ1H, then ϕ1 = ϕ2 (= ϕ, say), and in this case ψ2 = σ ◦ ψ1x for some x ∈ NH(ker ϕ).
Proof. (i) We first show that

A ⫋ 𝒜, where 𝒜 = A(H,A,ϑ).  (1)

If H/ker ϑ is abelian, then clearly 𝒜 = H, and therefore the above equation holds trivially, as A ≠ H. If H/ker ϑ is non abelian, then 𝒜/ker ϑ properly contains the centre of H/ker ϑ. However, ϑ being invariant in H, it follows that A/ker ϑ is contained in the centre of H/ker ϑ. Therefore, eqn (1) follows. This proves (a).
Next, consider α ∈ IH(ϕ) and β ∈ NH(ker ϕ). Then β−1αβ ∈ IH(ϕ) if, and only if,

[β−1αβ, x] = β−1[α, βxβ−1]β ∈ ker ϕ for all x ∈ 𝒜 = A(H,A,ϑ).  (2)

However, if x ∈ 𝒜, then βxβ−1 ∈ 𝒜, as 𝒜 E H. This gives [α, βxβ−1] ∈ ker ϕ, as α ∈ IH(ϕ). Consequently, β being in NH(ker ϕ), eqn (2) follows. This proves (b). In view of Clifford's correspondence theorem ([12], Theorem 6.11), ψ ↦ ψH defines an injective map from Irr(IH(ϕ)|ϕ) to Irr(H|ϑ). It can be checked that, under this map, Irr̃(IH(ϕ)|ϕ) is mapped into Irr̃(H|ϑ). This finishes the proof of (c).
(ii) Consider χ ∈ Irr̃(H|ϑ) and let λ be an irreducible constituent of χ𝒜, where 𝒜 = A(H,A,ϑ). We claim that λ ∈ Liñ(𝒜|ϑ). As A E H and ϑ is invariant in H, by ([12], Theorem 6.2), ϑ is the only irreducible constituent of χA. However, λ being an irreducible constituent of χ𝒜, it follows that λA is a constituent of χA. Hence ϑ is an irreducible constituent of λA, and therefore ker ϑ𝒜 ≤ ker λ. But ker ϑ𝒜 = core𝒜(ker ϑ) = ker ϑ, as ker ϑ is normal in H. This gives ker ϑ ≤ ker λ. Consequently, 𝒜/ker ϑ being abelian, it follows that λ is linear and, moreover, λA = ϑ. We now show that ker λG = N. As ⟨χ, λH⟩, the inner product of χ with λH, is non zero and χ is irreducible, we have ker λH ≤ ker χ. Hence coreG(ker λH) ≤ coreG(ker χ), which gives ker λG ≤ ker χG = N. Also, ker ϑ ≤ ker λ implies that N = coreG(ker ϑ) ≤ coreG(ker λ) = ker λG. Hence the claim follows. Now choose ϕ ∈ Lin(𝒜|ϑ) which lies in the orbit of λ. This gives σ ∈ Aut(C|ϑ) and x ∈ H such that λ = σ ◦ ϕx. As λ is an irreducible constituent of χ𝒜, it follows that ϕx is an irreducible constituent of the restriction of σ−1 ◦ χ to 𝒜. However, 𝒜 being normal in H, from Clifford's theorem ([12], Theorem 6.2), it follows that ϕ is an irreducible constituent of the restriction of σ−1 ◦ χ to 𝒜. Consequently, Clifford's correspondence theorem provides ψ ∈ Irr(IH(ϕ)|ϕ) such that σ−1 ◦ χ = ψH. It is easy to check that this ψ belongs to Irr̃(IH(ϕ)|ϕ). Hence (ii) follows.
(iii) Suppose ψ1 ∈ Irr̃(IH(ϕ1)|ϕ1) and ψ2 ∈ Irr̃(IH(ϕ2)|ϕ2) are such that

ψ2H = σ ◦ ψ1H,  (3)

where σ ∈ Aut(C). By restricting to 𝒜 = A(H,A,ϑ), it follows from ([12], Theorem 6.2) that

ϕ2 = σ ◦ ϕ1h  (4)

for some h ∈ H. Therefore, ϕ1 and ϕ2 lie in the same orbit under the double action and hence ϕ1 = ϕ2 (= ϕ, say). In this case, from eqn (4),

ϕ = σ ◦ ϕh,  (5)

which on comparing the kernels yields h ∈ NH(ker ϕ). Now using eqn (5) and the fact that IH(ϕ) E NH(ker ϕ), it is easy to see that σ ◦ ψ1h ∈ Irr̃(IH(ϕ)|ϕ). Thus σ ◦ ψ1h and ψ2 both belong to Irr̃(IH(ϕ)|ϕ) and, in view of eqn (3), they are the same when induced to H. Consequently, the injectivity of the induction map in part (i) implies ψ2 = σ ◦ ψ1h. This proves (iii) and completes the proof.
4 A construction of Shoda pairs
We begin by recalling some basic terminology from graph theory. A graph is a pair G = (V, E), where V is a non-empty set whose elements are termed vertices of G and E is a set of unordered pairs of vertices of G. Each element {u, v} ∈ E, where u, v ∈ V, is called an edge and is said to join the vertices u and v. If e = {u, v} ∈ E, then e is said to be incident with both u and v. Further, if v ∈ V, the degree of v is the number of edges in E that are incident with v. A walk in the graph G is a sequence of vertices and edges of the form v1 {v1, v2} v2 ··· {vn−1, vn} vn, where each edge {vi, vi+1} is incident with the vertices vi and vi+1 immediately preceding and succeeding it. A walk is termed a path if all its vertices are distinct, and is called a cycle if it begins and ends with the same vertex and all its other vertices are distinct. A connected graph is one in which any two vertices are joined by a path. A connected graph which contains no cycles is called a tree.
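The tree property can also be tested by counting: a connected graph on n vertices is a tree exactly when it has n − 1 edges, which is equivalent to the edge-removal criterion used in Lemma 3 below. A minimal sketch of this count-and-connectivity test (a toy illustration, not from the paper; it assumes a non-empty vertex set, as in the definition above):

```python
def is_tree(vertices, edges):
    """Decide whether the undirected graph (vertices, edges) is a tree:
    it must be connected and have exactly len(vertices) - 1 edges."""
    if len(edges) != len(vertices) - 1:
        return False
    adjacency = {v: [] for v in vertices}
    for u, v in edges:
        adjacency[u].append(v)
        adjacency[v].append(u)
    # connectivity via depth-first search from an arbitrary vertex
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(adjacency[u])
    return len(seen) == len(vertices)

print(is_tree([1, 2, 3, 4], [(1, 2), (1, 3), (3, 4)]))  # True
print(is_tree([1, 2, 3], [(1, 2), (2, 3), (3, 1)]))     # False: a cycle
```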
A directed graph is a pair G = (V, E), where V is a non-empty set whose elements are termed vertices and E is a set of ordered pairs of vertices of V. In a directed graph, an edge e = (u, v) is said to be incident out of u and incident into v. The terminology of directed walks and directed paths is the same as for graphs, except that now all the edges are directed in the same direction. In a directed graph, the number of edges incident out of a vertex v is called the out-degree of v and the number of edges incident into v is called the in-degree of v. The vertices of in-degree 0 are called sources and those of out-degree 0 are called sinks. Every directed graph has an obvious underlying undirected graph, whose vertex set is the same as that of the directed graph and which has an edge {u, v} whenever either (u, v) or (v, u) is an edge of the directed graph. A directed graph is called a directed tree if its underlying undirected graph is a tree. The sink vertices of a directed tree are termed its leaves. A directed tree is called a rooted directed tree if it has a unique source, and the unique source of a rooted directed tree is called its root.
We now proceed with the construction of Shoda pairs. Let G ∈ C and let N be a normal subgroup of G. Consider the directed graph G = (V, E) whose vertex set V consists of all N-linear character triples of G, and which has an edge (u, v) ∈ E if v is a direct Clifford correspondent (d.c.c.) of u. Clearly (G, N, 1N) ∈ V, where 1N is the character of N which takes constant value 1. Let VN be the set of those vertices v ∈ V for which there is a directed path from (G, N, 1N) to v, and let EN be the set of ordered pairs (u, v) ∈ E with u, v ∈ VN. Then GN = (VN, EN) is a directed subgraph of G. Observe that any vertex (H, A, ϑ) of GN with H = A is a sink vertex.
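In computational terms, GN is grown by a breadth-first search from the root triple (G, N, 1N), attaching an edge for each direct Clifford correspondent. The sketch below is a toy illustration only; the callable `dcc` stands in for the character-theoretic computation of Cl(H, A, ϑ), which is not implemented here:

```python
from collections import deque

def build_tree(root, dcc):
    """Breadth-first construction of the directed graph grown from
    `root` by repeatedly attaching direct Clifford correspondents.
    `dcc(v)` must return the finite list of d.c.c.'s of the triple v;
    vertices with no d.c.c. are the sink vertices (the leaves)."""
    vertices, edges = [root], []
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in dcc(u):
            vertices.append(v)
            edges.append((u, v))
            queue.append(v)
    sinks = [v for v in vertices if not dcc(v)]
    return vertices, edges, sinks

# toy run with symbolic triples: r has two d.c.c.'s, a has one
children = {'r': ['a', 'b'], 'a': ['c'], 'b': [], 'c': []}
print(build_tree('r', lambda v: children[v]))
```

Termination of such a search in the setting of the paper is guaranteed because, by Theorem 1(i)(a), the middle components of the triples strictly increase along any directed path, and a finite group has only finitely many N-linear character triples.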
Theorem 2 Let G ∈ C and N the set of all normal subgroups of G.
(i) For N ∈ N , the following hold:
(a) GN is a rooted directed tree with (G, N, 1N ) as its root;
(b) the leaves of GN of the type (H, H, ϑ) correspond to Shoda pairs of G.
More precisely, if (H, H, ϑ) is a leaf of GN , then (H, ker ϑ) is a Shoda
pair of G.
(ii) If (H ′ , K ′ ) is any Shoda pair of G, then there is a leaf (H, H, ϑ) of GN , where
N = coreG (K ′ ), such that (H ′ , K ′ ) and (H, ker ϑ) realize the same primitive
central idempotent of QG.
To prove the theorem, we need some preparation.
Lemma 1 For each vertex v of GN , there is a unique directed path from (G, N, 1N )
to v.
Proof. Let v1 (v1, v2) v2 ··· (vn−1, vn) vn and v1′ (v1′, v2′) v2′ ··· (v′m−1, v′m) v′m be two directed paths from v1 = v1′ = (G, N, 1N) to vn = v′m = v = (H, A, ϑ). Assume that m ≤ n. We claim that vi = vi′ for 1 ≤ i ≤ m, and we prove it by induction on i. For i = 1, we already have v1 = v1′ = (G, N, 1N). Assume that vi = vi′ for some i < m. Write vj = (Hj, Aj, ϑj) and vj′ = (H′j, A′j, ϑ′j) for 1 ≤ j ≤ m. From the construction of d.c.c., we have Ai+1 = A(Hi,Ai,ϑi) and A′i+1 = A(H′i,A′i,ϑ′i). As (Hi, Ai, ϑi) = (H′i, A′i, ϑ′i), it follows immediately that Ai+1 = A′i+1. Now ϑi+1 (resp. ϑ′i+1) being the restriction of ϑ to Ai+1 (resp. A′i+1) yields ϑi+1 = ϑ′i+1. Further, as Hi+1 = IHi(ϑi+1) and H′i+1 = IH′i(ϑ′i+1), it follows that Hi+1 = H′i+1. This proves the claim, which as a consequence implies that both vm and vn are equal to v. This is not possible if m < n, as no two vertices in a path are the same. Hence m = n and the two paths coincide.
Lemma 2 The following statements hold for GN :
(i) If (u1 , v1 ) and (u2 , v2 ) are edges of GN with v1 = v2 , then u1 = u2 ;
(ii) If v1 {v1 , v2 } v2 · · · {vn−1 , vn } vn is a path in the underlying undirected graph
of GN , then there is a unique j, 1 ≤ j ≤ n, with the following:
(a) (vi+1 , vi ) ∈ EN for 1 ≤ i < j;
(b) (vi , vi+1 ) ∈ EN for j ≤ i < n.
Proof. (i) is a consequence of Lemma 1 and (ii) follows immediately from (i). It
may be mentioned that in the lemma, (a) is empty if j = 1, and (b) is empty if
j = n.
Lemma 3 The underlying undirected graph of GN is a tree.

Proof. Let ḠN = (VN, ĒN) be the underlying undirected graph of GN = (VN, EN). Clearly ḠN is a connected graph. To show that ḠN is a tree, it is enough to prove that ḠN becomes disconnected after the removal of any edge (see [6], Theorem 3-5). Let e = {u, v} ∈ ĒN and let Ē′N = ĒN \ {e}. We need to show that Ḡ′N = (VN, Ē′N) is disconnected. If not, then there is a path

v1 {v1, v2} v2 ··· {vn−1, vn} vn  (6)

in Ḡ′N, where v1 = u and vn = v. As {u, v} ∈ ĒN, either (u, v) or (v, u) belongs to EN. Suppose (u, v) ∈ EN. As the path given in eqn (6) is also a path in ḠN, there is a j, 1 ≤ j ≤ n, so that part (ii) of Lemma 2 holds. If j < n, then vn = v is a d.c.c. of vn−1, which by Lemma 2(i) implies vn−1 = u. Consequently {u, v} = {vn−1, vn} ∈ Ē′N, which is not so. If j = n, then vi is a d.c.c. of vi+1 for all 1 ≤ i < n. Hence, if vi = (Hi, Ai, ϑi), then from Theorem 1, An ⫋ An−1 ⫋ ··· ⫋ A1. However, (u, v) = (v1, vn) ∈ EN implies A1 ⫋ An, a contradiction. Similar arguments show that (v, u) ∈ EN is not possible. This proves the lemma.
We are now ready to prove the theorem. Recall from ([15], Proposition 1.1) that if χ ∈ Irr G, then

eQ(χ) = (χ(1)/|G|) Σσ∈Gal(Q(χ)/Q) Σg∈G σ(χ(g)) g−1

is a primitive central idempotent of QG, where Gal(Q(χ)/Q) is the Galois group of Q(χ) over Q.
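As a small sanity check of this formula (a toy computation, not from the paper): for G = C3 = ⟨g⟩ and a faithful linear character χ, we have Q(χ) = Q(ζ3), and the Galois sum Σσ σ(χ(gk)) equals 2 for k = 0 and −1 otherwise, so eQ(χ) = (1/3)(2 − g − g2). The sketch below verifies that this element is an idempotent orthogonal to the trivial idempotent (1/3)(1 + g + g2):

```python
from fractions import Fraction

def mul(a, b):
    """Multiply two elements of Q[C3], stored as coefficient lists
    indexed by the exponent of the generator g."""
    c = [Fraction(0)] * 3
    for i in range(3):
        for j in range(3):
            c[(i + j) % 3] += a[i] * b[j]
    return c

# e_Q(chi) = (1/3)(2 - g - g^2) for a faithful linear character chi of C3
e = [Fraction(2, 3), Fraction(-1, 3), Fraction(-1, 3)]
ghat = [Fraction(1, 3)] * 3          # trivial idempotent (1/3)(1 + g + g^2)

print(mul(e, e) == e)                       # True: e is an idempotent
print(mul(e, ghat) == [Fraction(0)] * 3)    # True: orthogonal to ghat
```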
Proof of Theorem 2 (i) By Lemma 3, GN is a directed tree. If the in-degree of (G, N, 1N) is non zero, then there is an N-linear character triple (H, A, ϑ) of G such that (G, N, 1N) is a d.c.c. of (H, A, ϑ). Hence, by Theorem 1(i)(a), A ⫋ N. However, (H, A, ϑ) being an N-linear character triple of G, we have ker ϑG = N, which gives N ≤ A, a contradiction. This proves that (G, N, 1N) is a source of GN. Next, if (H, A, ϑ) is a vertex of GN different from (G, N, 1N), then there is a directed path from (G, N, 1N) to (H, A, ϑ), which implies that the in-degree of (H, A, ϑ) is non zero. Hence (H, A, ϑ) can't be a source. This proves (a). To prove (b), consider a leaf of GN of the type (H, H, ϑ). Let v1 (v1, v2) v2 ··· (vn−1, vn) vn be the directed path from v1 = (G, N, 1N) to vn = (H, H, ϑ), and let vi = (Hi, Ai, ϑi), 1 ≤ i ≤ n. As (Hi+1, Ai+1, ϑi+1) is a d.c.c. of (Hi, Ai, ϑi), from Theorem 1,

ψHi ∈ Irr̃(Hi|ϑi) for all ψ ∈ Irr̃(Hi+1|ϑi+1).  (7)

This is true for all 1 ≤ i < n. Observe that Irr̃(Hn|ϑn) = {ϑn} = {ϑ}, as Hn = An = H. The repeated application of eqn (7) with ψ = ϑ implies that

ϑHi ∈ Irr̃(Hi|ϑi) for 1 ≤ i ≤ n.  (8)

For i = 1, the above equation, in particular, gives ϑG ∈ Irr G. This proves (b).
(ii) Let (H ′ , K ′ ) be a Shoda pair of G and let N = coreG (K ′ ). Let ψ be a linear
character on H ′ with kernel K ′ . We claim that there is a leaf (H, H, ϑ) of GN
such that (H ′ , K ′ ) and (H, ker ϑ) realize the same primitive central idempotent
of QG. From the definition of the primitive central idempotent of QG realized
by a Shoda pair given in [14], it follows that the primitive central idempotent
of QG realized by (H ′ , K ′ ) is eQ (ψ G ) and that realized by (H, ker ϑ) is eQ (ϑG ).
Let χ = ψ G . By Shoda’s theorem, χ ∈ Irr G. Also, ker χ = N. If N = G,
then (H ′, K ′ ) is clearly (G, G). Also in this case, GN is just the vertex (G, G, 1G )
which corresponds to the Shoda pair (G, G) = (H ′ , K ′ ). Thus we may assume that
N ≠ G. Denote χ by χ1. As χ1 ∈ Irr̃(G|1N), by Theorem 1, there is a d.c.c. (IG(ϕ), A(G,N,1N), ϕ) of (G, N, 1N), χ2 ∈ Irr̃(IG(ϕ)|ϕ) and σ1 ∈ Aut(C) such that

χ1 = σ1 ◦ χ2G.

Put (H1, A1, ϑ1) = (G, N, 1N) and (H2, A2, ϑ2) = (IG(ϕ), A(G,N,1N), ϕ). If H2 = A2, stop. If not, again by Theorem 1, there is a d.c.c. (H3, A3, ϑ3) of (H2, A2, ϑ2), χ3 ∈ Irr̃(H3|ϑ3) and σ2 ∈ Aut(C) such that

χ2 = σ2 ◦ χ3H2.  (9)

Moreover, if this case arises, then by Theorem 1(i)(a), N = A1 ⫋ A2 ⫋ A3 ≤ G. Again, if H3 = A3, stop; otherwise continue. This process must stop after a finite number of steps, as at the nth step there is an ascending chain

N = A1 ⫋ A2 ⫋ ··· ⫋ An ≤ G.

Suppose the process stops at the nth step. Then we have character triples (Hi, Ai, ϑi), 1 ≤ i ≤ n, with Hn = An, χi ∈ Irr̃(Hi|ϑi) and σi ∈ Aut(C) such that

χi = σi ◦ χi+1Hi,  1 ≤ i < n.  (10)

The above equation yields

χ = σ ◦ χnG,  where σ = σ1 ◦ σ2 ◦ ··· ◦ σn−1.

As Hn = An, we have Irr̃(Hn|ϑn) = {ϑn}, and hence χn = ϑn. This gives χ = σ ◦ ϑnG. Consequently eQ(χ) = eQ(ϑnG), and hence (H′, K′) and (Hn, ker ϑn) realize the same primitive central idempotent of QG. This proves the claim and completes the proof of the theorem.
For N ∈ N, denote by LN the set of leaves of GN of type (H, H, ϑ), and let SN be the set of Shoda pairs of G corresponding to the leaves in LN. We have shown in Theorem 2 that if G ∈ C, then the mapping from ∪N∈N SN to the set of primitive central idempotents of QG realized by the Shoda pairs of G is surjective. In other words, ∪N∈N SN is a complete set of Shoda pairs of G. We now begin to investigate whether this set of Shoda pairs of G is irredundant, i.e., whether this map is injective.
Two Shoda pairs (H1 , K1 ) and (H2 , K2 ) of G are said to be equivalent if they
realize the same primitive central idempotent of QG.
Lemma 4 If N and N′ are distinct normal subgroups of G, then the Shoda pair corresponding to a leaf in LN cannot be equivalent to that corresponding to a leaf in LN′.

Proof. Let (H, H, ϑ) ∈ LN and (H′, H′, ϑ′) ∈ LN′. Suppose (H, ker ϑ) is equivalent to (H′, ker ϑ′). Then eQ(ϑG) = eQ(ϑ′G), which gives ϑ′G = σ ◦ ϑG for some σ ∈ Aut(C). Consequently, ker ϑ′G = ker(σ ◦ ϑG) = ker ϑG. Hence N′ = N, a contradiction.
We next examine whether distinct leaves in LN, for a fixed normal subgroup N of G, can correspond to equivalent Shoda pairs. For this purpose, we need to fix some terminology. If (H, H, ϑ) ∈ LN and v1 (v1, v2) v2 ··· (vn−1, vn) vn is the directed path from v1 = (G, N, 1N) to vn = (H, H, ϑ), then we call n the height of (H, H, ϑ) and term vi the ith node of (H, H, ϑ), 1 ≤ i ≤ n. It may be noted that if vi = (Hi, Ai, ϑi), then from eqn (8) it follows that ϑHi ∈ Irr̃(Hi|ϑi) for all 1 ≤ i ≤ n.

Definition Let (H, H, ϑ) ∈ LN be of height n with (Hi, Ai, ϑi) as its ith node, 1 ≤ i ≤ n. We call (H, H, ϑ) good if the following holds for all 1 < i ≤ n: given x ∈ NHi−1(ker ϑi), there exists σ ∈ Aut(C) such that (ϑHi)x = σ ◦ ϑHi.
Remark 1 It may be noted that, for any normal subgroup N of G, the leaves in LN of height 2 are always good.
Lemma 5 If N ∈ N and (H, H, ϑ) ∈ LN is good, then its corresponding Shoda pair cannot be equivalent to that of any other leaf in LN.
Proof. Let (H, H, ϑ) and (H′, H′, ϑ′) ∈ LN be distinct and let (H, H, ϑ) be good. Let the heights of (H, H, ϑ) and (H′, H′, ϑ′) be n and n′ respectively. Let (Hi, Ai, ϑi) be the ith node of (H, H, ϑ), 1 ≤ i ≤ n, and (H′j, A′j, ϑ′j) the jth node of (H′, H′, ϑ′), 1 ≤ j ≤ n′. We have

ϑHi ∈ Irr̃(Hi|ϑi), 1 ≤ i ≤ n,  (11)

and

ϑ′H′j ∈ Irr̃(H′j|ϑ′j), 1 ≤ j ≤ n′.  (12)

Let k be the least positive integer such that (Hk, Ak, ϑk) ≠ (H′k, A′k, ϑ′k). Clearly, k ≠ 1. By the definition of k, (Hk, Ak, ϑk) and (H′k, A′k, ϑ′k) are distinct d.c.c.'s of (Hk−1, Ak−1, ϑk−1). The construction of d.c.c. yields Ak = A′k. In view of eqns (11) and (12), it follows from Theorem 1(iii) that

ϑ′Hk−1 ≠ σ ◦ ϑHk−1  (13)

for any σ ∈ Aut(C). We now derive a contradiction to eqn (13) under the assumption that (H, ker ϑ) and (H′, ker ϑ′) are equivalent. We have Hi = H′i and ϑi = ϑ′i for 1 ≤ i ≤ k − 1. Also H1 = H′1 = G. From eqns (11) and (12), ϑH2 and ϑ′H2 belong to Irr̃(H2|ϑ2). If (H, ker ϑ) and (H′, ker ϑ′) are equivalent, then eQ(ϑ′G) = eQ(ϑG), which gives

ϑ′H1 = σ1 ◦ ϑH1, for some σ1 ∈ Aut(C).

Hence, by Theorem 1(iii), there exists x ∈ NH1(ker ϑ2) such that

ϑ′H2 = σ1 ◦ (ϑH2)x.  (14)

Also, (H, H, ϑ) being good, there exists τ ∈ Aut(C) such that

(ϑH2)x = τ ◦ ϑH2.  (15)

Let σ2 = σ1 ◦ τ. From eqns (14) and (15),

ϑ′H2 = σ2 ◦ ϑH2.

Now repeating this process with H1, H2 replaced by H2, H3 respectively, we obtain

ϑ′H3 = σ3 ◦ ϑH3, for some σ3 ∈ Aut(C).

Continuing this process, we obtain after k − 1 steps that

ϑ′Hk−1 = σk−1 ◦ ϑHk−1, for some σk−1 ∈ Aut(C).

This contradicts eqn (13) and completes the proof.
Theorem 3 For G ∈ C, ∪N∈N SN is a complete irredundant set of Shoda pairs of G if, and only if, the leaves in LN are good for all N ∈ N.

Proof. Suppose ∪N∈N SN is a complete irredundant set of Shoda pairs of G ∈ C. Let N ∈ N and let (H, H, ϑ) ∈ LN be of height n with (Hi, Ai, ϑi) as its ith node, 1 ≤ i ≤ n. We'll show that (H, H, ϑ) is good. Let 2 ≤ i ≤ n and x ∈ NHi−1(ker ϑi). By Theorem 1(i)(c), (ϑHi)x ∈ Irr̃(Hi|ϑi). Proceeding as in the proof of part (ii) of Theorem 2, there exists (H′, H′, ϑ′) ∈ LN such that

(ϑHi)x = σ ◦ ϑ′Hi, for some σ ∈ Aut(C).  (16)

Inducing to G, we get ϑG = σ ◦ ϑ′G, which gives ϑ = ϑ′ and H = H′, as ∪N∈N SN is complete and irredundant. Consequently, eqn (16) implies that (H, H, ϑ) is good. This proves the 'only if' statement. The 'if' statement follows from Theorem 2 and Lemma 5.
As a consequence, the above theorem yields the following result, proved in ([2], Theorem 1(i)):

Corollary 1 If G is a normally monomial group, then ∪N∈N SN is a complete irredundant set of Shoda pairs of G.

Proof. It is enough to show that all the character triples in ∪N∈N LN are good. Let (H, H, ϑ) ∈ LN, where N ∈ N. If N = G, then (H, H, ϑ) = (G, G, 1G), which is clearly good. Assume N ≠ G. As ϑG ∈ Irr̃(G|1N), we have ker ϑG = N. Hence, by ([10], Lemma 2.2), ϑG(1) = [G : A(G,N,1N)]. However, ϑG(1) = [G : H]. Therefore, we have

[G : A(G,N,1N)] = [G : H].  (17)

Let the height of (H, H, ϑ) be n and let (Hi, Ai, ϑi) be its ith node. As (H2, A2, ϑ2) is a d.c.c. of (H1, A1, ϑ1) = (G, N, 1N), we have A2 = A(G,N,1N). Also, in view of Theorem 1(i)(a), we have N = A1 ⫋ A2 ⫋ ··· ⫋ An = H. This gives A(G,N,1N) ⫋ H if n > 2, in which case eqn (17) can't hold. Hence we must have n = 2, which, in view of Remark 1, implies that (H, H, ϑ) is good.

Remark 2 Later, from Remark 3, it will follow that if G is a normally monomial group, then the Shoda pairs in ∪N∈N SN are strong Shoda pairs of G.
5 Idempotents from Shoda pairs
We continue to use the notation developed in the previous section. Given a group G ∈ C, we have shown in the previous section that any Shoda pair of G is equivalent to (H, ker ϑ), where (H, H, ϑ) is a character triple in ∪N∈N LN, and that it realizes the primitive central idempotent eQ(ϑG) of QG. In this section, we examine the expression of eQ(ϑG) in terms of e(G, H, K), where K = ker ϑ. In [14], Olivieri, del Río and Simón proved that eQ(ϑG) = α(G,H,K) e(G, H, K), where

α(G,H,K) = [CenG(ε(H,K)) : H] / [Q(ϑ) : Q(ϑG)].

The following theorem gives a new character-free expression of α(G,H,K) and also provides a necessary and sufficient condition for α(G,H,K) = 1. It may be mentioned that α(G,H,K) = 1 is a necessary condition for (H, K) to be a strong Shoda pair of G.
Theorem 4 Let G ∈ C and N a normal subgroup of G. Let (H, H, ϑ) ∈ LN be of height n with (Hi, Ai, ϑi) as its ith node, 1 ≤ i ≤ n. Let K = ker ϑ. Then

(i) α(G,H,K) = [CenG(ε(H,K)) : CenHn−1(ε(H,K))] / Π2≤i≤n−1 [CenHi−1(e(Hi,H,K)) : Hi];

(ii) α(G,H,K) = 1 if, and only if, CenHi−1(e(Hi, H, K)) = CenHi−1(ε(H, K))Hi for all 2 ≤ i ≤ n − 1;

(iii) if (H, H, ϑ) is good, then in the above statements CenHi−1(e(Hi, H, K)) can be replaced by NHi−1(ker ϑi) for all 2 ≤ i ≤ n − 1.
We first prove the following:
Lemma 6 Let G be a finite group and K E H ≤ G with H/K cyclic. Let A E H
and D = K ∩ A. Then
ε(A, D) = ε(H, K) + e
for some central idempotent e of QH orthogonal to ε(H, K).
Proof. Let ψ be a linear character on H with kernel K and let ϕ = ψA. By ([12], Corollary 6.17), we have

ϕH = Σχ∈Irr H/A ⟨ϕH, χψ⟩ χψ.

As ϕ is invariant in H, we have (χψ)A = ⟨ϕ, (χψ)A⟩ϕ = ⟨ϕH, χψ⟩ϕ, and hence χψ(1) = ⟨ϕH, χψ⟩ϕ(1) = ⟨ϕH, χψ⟩. Therefore,

ϕH = Σχ∈Irr H/A (χψ)(1) χψ.

Let n = [A : D] and let Σ be the collection of all the irreducible constituents of (ϕi)H, 1 ≤ i ≤ n with (i, n) = 1. In other words,

Σ = {χψi | χ ∈ Irr H/A, 1 ≤ i ≤ n, (i, n) = 1}.

Consider the natural action of Aut(C) on Σ. Under this action, let χ1, χ2, ..., χr be representatives of the distinct orbits, with χ1 = ψ. It can be checked that orb(χi), the orbit of χi, is given by {σ ◦ χi | σ ∈ Gal(Q(χi)/Q)}. Also, we have

Σ1≤i≤n, (i,n)=1 (ϕi)H = Σ1≤i≤r Σσ∈Gal(Q(χi)/Q) χi(1) σ ◦ χi.

Consequently,

Σ1≤i≤r eQ(χi) = Σ1≤i≤r Σσ∈Gal(Q(χi)/Q) (1/|H|) Σh∈H χi(1) σ(χi(h)) h−1
 = (1/|H|) Σh∈H Σ1≤i≤n, (i,n)=1 (ϕi)H(h) h−1
 = (1/|H|) Σh∈H Στ∈Gal(Q(ϕ)/Q) (τ ◦ ϕ)H(h) h−1
 = (1/|A|) Σa∈A Στ∈Gal(Q(ϕ)/Q) (τ ◦ ϕ)(a) a−1
 = ε(A, D).

As eQ(χ1) = eQ(ψ) = ε(H, K), the result follows.
The following proposition is crucial in the proof of Theorem 4.
Proposition 2 Let G be a finite group. Let K E H ≤ G with H/K cyclic and ψ a linear character on H with kernel K. Suppose that there is a normal subgroup A of G and a subgroup L of G such that A ≤ H ≤ L, ψA is invariant in L, and L E NG(ker ψA). Then the following hold:

(i) if ψL ∈ Irr L, then the distinct G-conjugates of eQ(ψL) are mutually orthogonal;

(ii) if ψG ∈ Irr G, then eQ(ψG) is the sum of all distinct G-conjugates of eQ(ψL). Furthermore,

α(G,H,K) = α(L,H,K) · [CenG(ε(H,K)) : CenL(ε(H,K))] / [CenG(e(L,H,K)) : L].
Proof. (i) Denote ker ψA by D. First of all, we will show that

eQ(ψL) eQ(ψL)g = 0 if g ∉ NG(D).  (18)

For this, it is enough to prove that

e(L, H, K) e(L, H, K)g = 0 if g ∉ NG(D).  (19)

Let g ∉ NG(D). We have

e(L, H, K) e(L, H, K)g = (Σx∈T ε(H, K)x)(Σy∈T ε(H, K)y)g,

where T is a transversal of CenL(ε(H, K)) in L. Thus eqn (19) follows if we show that

ε(H, K)x ε(H, K)yg = 0 for all x, y ∈ T.

By Lemma 6, ε(H, K)ε(A, D) = ε(H, K). This gives ε(H, K)x ε(A, D)x = ε(H, K)x and ε(A, D)yg ε(H, K)yg = ε(H, K)yg. As ψA is invariant in L, we have D E L and hence ε(A, D)x = ε(A, D) and ε(A, D)yg = ε(A, D)g. Thus ε(H, K)x ε(H, K)yg = ε(H, K)x ε(A, D) ε(A, D)g ε(H, K)yg = 0, as g ∉ NG(D). This proves eqn (18), which also yields

CenG(eQ(ψL)) ≤ NG(D).  (20)

Now, let g ∈ NG(D) \ CenG(eQ(ψL)). Since ψL ∈ Irr L and L E NG(D), it follows that eQ(ψL) and eQ(ψL)g are distinct primitive central idempotents of QL and therefore eQ(ψL) eQ(ψL)g = 0. This proves (i).
(ii) Let T, T1, T2, T3 respectively be a right transversal of CenL(ε(H, K)) in G, of CenL(ε(H, K)) in L, of L in CenG(eQ(ψL)), and of CenG(eQ(ψL)) in G. We have

Σg∈T ε(H, K)g = Σz∈T3 Σy∈T2 (Σx∈T1 ε(H, K)x)yz
 = Σz∈T3 Σy∈T2 e(L, H, K)yz
 = ([CenG(eQ(ψL)) : L] / α(L,H,K)) Σz∈T3 eQ(ψL)z.

Also, it is easy to see that

Σg∈T ε(H, K)g = [CenG(ε(H, K)) : CenL(ε(H, K))] e(G, H, K)
 = ([CenG(ε(H, K)) : CenL(ε(H, K))] / α(G,H,K)) eQ(ψG).

We know from part (i) that Σz∈T3 eQ(ψL)z is an idempotent. Also, eQ(ψG) being an idempotent, it follows immediately that

[CenG(eQ(ψL)) : L] / α(L,H,K) = [CenG(ε(H, K)) : CenL(ε(H, K))] / α(G,H,K),

which gives the desired result.
The above proposition gives the following generalization of ([14], Corollary 3.6):
Corollary 2 Let (H, K) be a pair of subgroups of a finite group G and A be a
normal subgroup of G contained in H satisfying the following conditions:
(i) K E H E NG (D), where D = K ∩ A;
(ii) H/K is cyclic and a maximal abelian subgroup of NG (K)/K.
Then (H, K) is a strong Shoda pair of G and e(G, H, K) is a primitive central
idempotent of QG.
Proof. In view of (i), H = L satisfies the hypothesis of the above proposition.
Therefore, the above proposition yields that the distinct G-conjugates of ε(H, K)
are mutually orthogonal and hence (H, K) is a strong Shoda pair of G.
We also have the following:
Corollary 3 Let G ∈ C and N a normal subgroup of G. Let (H, H, ϑ) ∈ LN. If H E NG(ker ϑ2), then (H, ker ϑ) is a strong Shoda pair of G, where (H2, A2, ϑ2) is the second node of (H, H, ϑ).

Proof. Let K = ker ϑ. From Theorem 2, (H, K) is a Shoda pair of G. By taking A = A2, L = H and ψ = ϑ in the above proposition, it follows that the distinct G-conjugates of ε(H, K) are mutually orthogonal. Also, ϑA2 = ϑ2 implies that K ∩ A2 = ker ϑ2 and hence NG(K) ≤ NG(ker ϑ2). Now, H being normal in NG(ker ϑ2), it follows that H E NG(K). Consequently, by ([14], Proposition 3.3), it follows that (H, K) is a strong Shoda pair of G.
Remark 3 From the above corollary, it follows immediately that if G ∈ C and (H, H, ϑ) ∈ ∪N∈N LN is of height 2, then (H, ker ϑ) is a strong Shoda pair of G.
Proof of Theorem 4 (i) By taking L = A = Hn and G = Hn−1 in Proposition 2, it follows that

α(Hn−1,H,K) = 1.

Also, for 2 ≤ i ≤ n − 1, by taking L = Hi, G = Hi−1 and A = Ai, we have

α(Hi−1,H,K) = α(Hi,H,K) · [CenHi−1(ε(H,K)) : CenHi(ε(H,K))] / [CenHi−1(e(Hi,H,K)) : Hi].

Consequently,

α(G,H,K) = Π2≤i≤n−1 [CenHi−1(ε(H,K)) : CenHi(ε(H,K))] / [CenHi−1(e(Hi,H,K)) : Hi]
 = [CenG(ε(H,K)) : CenHn−1(ε(H,K))] / Π2≤i≤n−1 [CenHi−1(e(Hi,H,K)) : Hi],

as desired.

(ii) In view of (i), α(G,H,K) = 1 if, and only if,

Π2≤i≤n−1 [CenHi−1(ε(H, K)) : CenHi(ε(H, K))] = Π2≤i≤n−1 [CenHi−1(e(Hi, H, K)) : Hi].

But CenHi−1(ε(H, K))/CenHi(ε(H, K)) being isomorphic to a subgroup of CenHi−1(e(Hi, H, K))/Hi, the above equation holds if, and only if,

[CenHi−1(ε(H, K)) : CenHi(ε(H, K))] = [CenHi−1(e(Hi, H, K)) : Hi]

for all i, 2 ≤ i ≤ n − 1, which yields the required result.

(iii) Let 2 ≤ i ≤ n − 1. Clearly, x ∈ CenHi−1(e(Hi, H, K)) if, and only if, x ∈ CenHi−1(eQ(ϑHi)), if, and only if, eQ((ϑHi)x) = eQ(ϑHi), if, and only if, (ϑHi)x = σ ◦ ϑHi for some σ ∈ Aut(C). However, the latter holds if, and only if, x ∈ NHi−1(ker ϑi), provided (H, H, ϑ) is good. This proves (iii).
6 Simple components
Given G ∈ C, it follows from Theorem 2 that any simple component of QG is of the form QGeQ(ϑG), where (H, H, ϑ) ∈ ∪N∈N LN. We now compute the structure of QGeQ(ϑG). For a ring R, let U(R) be the unit group of R and Mn(R) the ring of n × n matrices over R. Denote by R ∗στ G the crossed product of the group G over the ring R with action σ and twisting τ.
We begin with the following generalization of Proposition 3.4 of [14]:

Proposition 3 Let G be a finite group and (H, K) a Shoda pair of G. Let ψ, A, L be as in Proposition 2. Then

QGeQ(ψG) ≅ Mn(QLeQ(ψL) ∗στ CenG(eQ(ψL))/L),

where n = [G : CenG(eQ(ψL))], the action σ : CenG(eQ(ψL))/L → Aut(QLeQ(ψL)) maps x to the conjugation automorphism induced by x, and the twisting τ : CenG(eQ(ψL))/L × CenG(eQ(ψL))/L → U(QLeQ(ψL)) is given by (g1, g2) ↦ g, where g ∈ L is such that g1 · g2 = g · g1g2 for g1, g2 ∈ CenG(eQ(ψL))/L.
Proof. Clearly QGeQ(ψG) is isomorphic to the ring EndQG(QGeQ(ψG)) of QG-endomorphisms of QGeQ(ψG). As eQ(ψG) is the sum of the distinct G-conjugates of eQ(ψL), we have

EndQG(QGeQ(ψG)) ≅ Mn(EndQG(QGeQ(ψL))),

where n = [G : CenG(eQ(ψL))]. Also, the map f ↦ f(eQ(ψL)) yields an isomorphism

EndQG(QGeQ(ψL)) ≅ eQ(ψL)QGeQ(ψL).

But the distinct G-conjugates of eQ(ψL) being orthogonal, we have eQ(ψL)QGeQ(ψL) ≅ eQ(ψL)QCeQ(ψL) = QCeQ(ψL), where C = CenG(eQ(ψL)). Consequently, QGeQ(ψG) is isomorphic to Mn(QCeQ(ψL)). Since L E NG(ker ψA), eqn (20) gives L E C and hence

QCeQ(ψL) ≅ QLeQ(ψL) ∗στ C/L,

where σ and τ are as in the statement. This completes the proof.
We now proceed to compute the structure of the simple component QGeQ (ϑG )
of QG, where G ∈ C and (H, H, ϑ) ∈ ∪N ∈N LN .
Suppose that (H, H, ϑ) ∈ ∪N ∈N LN is of height n and (Hi , Ai , ϑi ) is its ith node,
1 ≤ i ≤ n. Let K = ker ϑ. Let 1 ≤ i ≤ n − 1. We must notice that G = Hi , L =
Hi+1 , A = Ai+1 and ψ = ϑ satisfy the hypothesis of Proposition 3. Let ki = [Hi :
CenHi (eQ (ϑHi+1 ))], σi the action of CenHi (eQ (ϑHi+1 ))/Hi+1 on QHi+1 eQ (ϑHi+1 ) by
conjugation, τi : CenHi (eQ (ϑHi+1 ))/Hi+1 ×CenHi (eQ (ϑHi+1 ))/Hi+1 → U(QHi+1 eQ (ϑHi+1 ))
the twisting given by (g1 , g2 ) 7→ g, where g ∈ Hi+1 is such that g1 · g2 = g · g1 g2 , for
g1 , g2 ∈ CenHi (eQ (ϑHi+1 ))/Hi+1 . Observe that CenHi (eQ (ϑHi+1 ))=CenHi (e(Hi+1 , H, K))
for all i.
Apply Proposition 3 with G = Hn−1 and A = L = Hn . It follows that
n−1
NHn−1 (K)/H),
QHn−1 eQ (ϑHn−1 ) ∼
= Mkn−1 (QHε(H, K) ∗στn−1
(21)
as eQ (ϑHn ) = ε(H, K) and CenHn−1 (ε(H, K)) = NHn−1 (K). Denote by R the
matrix ring on right hand side of the above isomorphism.
The action of Cen_{H_{n−2}}(e(H_{n−1}, H, K))/H_{n−1} on QH_{n−1}e_Q(ϑ^{H_{n−1}}) given by σ_{n−2} induces a natural action on R given by x ↦ η ∘ σ_{n−2}(x) ∘ η^{−1}, where x ∈ Cen_{H_{n−2}}(e(H_{n−1}, H, K))/H_{n−1} and η is the isomorphism in eqn (21). For notational convenience, we denote this action on R again by σ_{n−2}. Similarly, the twisting η ∘ τ_{n−2} from Cen_{H_{n−2}}(e(H_{n−1}, H, K))/H_{n−1} × Cen_{H_{n−2}}(e(H_{n−1}, H, K))/H_{n−1} to U(R) will again be denoted by τ_{n−2}. Applying Proposition 3 with G = H_{n−2}, L = H_{n−1}, A = A_{n−1}, it follows that QH_{n−2}e_Q(ϑ^{H_{n−2}}) is isomorphic to

M_{k_{n−2}}((M_{k_{n−1}}(QHε(H, K) ∗^{σ_{n−1}}_{τ_{n−1}} N_{H_{n−1}}(K)/H)) ∗^{σ_{n−2}}_{τ_{n−2}} Cen_{H_{n−2}}(e(H_{n−1}, H, K))/H_{n−1}).

Continuing this process, after n − 1 steps we obtain the following:
Theorem 5 Let G ∈ C and (H, H, ϑ) ∈ ∪_{N∈N} L_N. If (H, H, ϑ) is of height n and (H_i, A_i, ϑ_i) is its ith node, 1 ≤ i ≤ n, then QGe_Q(ϑ^G) is isomorphic to

M_{k_1}(M_{k_2}(··· M_{k_{n−1}}(Q(ξ_k) ∗^{σ_{n−1}}_{τ_{n−1}} N_{H_{n−1}}(K)/H) ∗^{σ_{n−2}}_{τ_{n−2}} ··· ) ∗^{σ_1}_{τ_1} Cen_{H_1}(e(H_2, H, K))/H_2),

where k = [H : K], ξ_k is a primitive kth root of unity, and σ_i, τ_i, k_i are as defined above.
7 Examples
In this section, we illustrate the construction of Shoda pairs. We begin by observing
the following facts for an arbitrary group G in C:
1. The directed tree GN , when N = G, is just the vertex (G, G, 1G ), as Cl(G, G, 1G )
is an empty set. In this case, GN corresponds to the Shoda pair (G, G).
2. If N is a normal subgroup of G such that A(G,N,1N ) /N is cyclic, then
(G, N, 1N ) has only one d.c.c., namely, (IG (ϕ), A(G,N,1N ) , ϕ),
where ϕ can be taken to be any linear character on A_(G,N,1_N) with kernel N. To show this, consider ϕ ∈ L̃in(A_(G,N,1_N) | 1_N). We have ϕ_N = 1_N and ker ϕ^G = N. The condition ϕ_N = 1_N gives N ≤ ker ϕ ≤ A_(G,N,1_N). This yields that ker ϕ is a normal subgroup of G, as A_(G,N,1_N)/N is a cyclic normal subgroup of G/N. Consequently, we have ker ϕ = ker ϕ^G = N, and thus ϕ is a linear character on A_(G,N,1_N) with kernel N. Conversely, it is clear that any linear character on A_(G,N,1_N) with kernel N belongs to L̃in(A_(G,N,1_N) | 1_N). Hence L̃in(A_(G,N,1_N) | 1_N) consists precisely of all linear characters on A_(G,N,1_N)
with kernel N. It is easy to see that all these characters lie in the same orbit
under the double action. Consequently, there is only one d.c.c. of (G, N, 1N ),
as desired.
Further, if A(G,N,1N ) is such that A(G,N,1N ) /N is maximal among all the
abelian subgroups of G/N, then we have IG (ϕ) = A(G,N,1N ) and hence there
is no further d.c.c. of (IG (ϕ), A(G,N,1N ) , ϕ) and the process stops here and
the corresponding directed tree is as follows:
(A(G,N,1N ) , A(G,N,1N ) , ϕ)
(G, N, 1N )
Figure 1: GN
If A(G,N,1N ) /N is not maximal among all the abelian subgroups of G/N, then
IG (ϕ) 6= A(G,N,1N ) . In this case, we further need to compute the d.c.c. of
(IG (ϕ), A(G,N,1N ) , ϕ) in order to determine GN .
3. From the above fact, it follows that if N is a normal subgroup of G with
G/N cyclic, then its corresponding directed tree is as follows:
(G, G, ϕ)
(G, N, 1N )
Figure 2: GN
where ϕ is any linear character of G with ker ϕ = N and it yields the Shoda
pair (G, N) of G.
4. If N is a normal subgroup of G with G/N abelian but not cyclic, then there is no d.c.c. of (G, N, 1_N): in this case A_(G,N,1_N) = G, and ϕ ∈ L̃in(G | 1_N) implies that ϕ_N = 1_N and ker ϕ^G = N, which gives ker ϕ = N. Consequently, G/N
is cyclic, which is not the case. Hence the directed tree GN is just the vertex
(G, N, 1N ), which does not yield any Shoda pair as there is no leaf of the
required type.
5. Given an N-linear character triple (H, A, ϑ) of G, consider the set K of all normal
subgroups K of A(H,A,ϑ) satisfying the following:
(i) A(H,A,ϑ) /K is cyclic;
(ii) K ∩ A = ker ϑ;
(iii) coreG (K) = N.
Let K1 , K2 , · · · , Kr be a set of representatives of K under the equivalence
relation defined by conjugacy of subgroups in H. If ϕi is a linear character
on A(H,A,ϑ) with kernel Ki , 1 ≤ i ≤ r, then
Cl(H, A, ϑ) = {(IH (ϕi ), A(H,A,ϑ), ϕi ) | 1 ≤ i ≤ r}.
To show this, consider ϕ ∈ L̃in(A_(H,A,ϑ) | ϑ). Let K = ker ϕ. Clearly A_(H,A,ϑ)/K is cyclic. Also ϕ_A = ϑ and ker ϕ^G = N imply that K ∩ A = ker ϑ and core_G(K) = N. Hence K ∈ K. Thus we have shown that if ϕ ∈ L̃in(A_(H,A,ϑ) | ϑ), then ker ϕ ∈ K. Conversely, it is clear that if K ∈ K, then any linear character on A_(H,A,ϑ) with kernel K lies in L̃in(A_(H,A,ϑ) | ϑ). Furthermore, note that ϕ_1 and ϕ_2 ∈ L̃in(A_(H,A,ϑ) | ϑ) lie in the same orbit under the double action if, and only if, ker ϕ_1 and ker ϕ_2 are conjugate in H. This yields the desired result.
6. In the above fact, if (H, A, ϑ) is such that H/ ker ϑ is abelian, then it may be
noted that K, K ′ ∈ K are conjugate in H if, and only if, K = K ′ .
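For a cyclic group Z_n these facts become completely explicit, since every subgroup has cyclic quotient. The following Python sketch (our illustration only; the paper's actual computations use GAP) groups the linear characters of Z_n by their kernels: the characters sharing a fixed kernel of index d are exactly the φ(d) faithful characters of the quotient, and they form a single orbit, mirroring facts 2 and 5.

```python
from math import gcd

def characters_by_kernel(n):
    """Group the linear characters chi_k of Z_n, chi_k(j) = exp(2*pi*i*k*j/n),
    by their kernels.  ker chi_k = {j : n divides k*j} is the subgroup
    generated by n // gcd(n, k), with cyclic quotient of order n // gcd(n, k)."""
    by_kernel = {}
    for k in range(n):
        g = gcd(n, k)  # gcd(n, 0) == n, so the trivial character k = 0 works too
        kernel = frozenset(range(0, n, n // g))
        by_kernel.setdefault(kernel, []).append(k)
    return by_kernel

# Every subgroup of Z_12 occurs as a kernel, and the number of characters
# sharing a kernel of index d is Euler's phi(d).
for kernel, ks in sorted(characters_by_kernel(12).items(), key=lambda kv: len(kv[0])):
    print(f"|ker| = {len(kernel):2d}, index = {12 // len(kernel):2d}, characters k = {ks}")
```

For instance, for n = 12 the kernel {0, 4, 8} (quotient of order 4) is shared by exactly the φ(4) = 2 characters k = 3 and k = 9.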
7.1 Example 1
Let G be the group generated by xi , 1 ≤ i ≤ 6, with the following defining
relations:
x_1^2 x_2^{−1} = x_2^2 x_3^{−1} = x_4^5 = x_3^2 = x_5^5 = x_6^5 = 1,
[x_2, x_1] = [x_3, x_1] = [x_3, x_2] = [x_6, x_3] = [x_6, x_4] = [x_6, x_5] = 1,
[x_5, x_4] = x_6, [x_5, x_1] = x_4x_5,
[x_6, x_1] = x_6^2, [x_4, x_2] = x_4x_6^2, [x_6, x_2] = x_6^3,
[x_5, x_2] = x_5x_6^2, [x_5, x_3] = x_5^3x_6^2, [x_4, x_3] = x_4^3x_6^2, [x_4, x_1] = x_4^2x_5^3x_6^4.
This group is SmallGroup(1000,86) in the GAP library. We have already mentioned in section 2 that it belongs to C but is not strongly monomial. There are 6 normal subgroups of G, given by N_1 = G, N_2 = ⟨x_2, x_3, x_4, x_5, x_6⟩, N_3 = ⟨x_3, x_4, x_5, x_6⟩, N_4 = ⟨x_4, x_5, x_6⟩, N_5 = ⟨x_6⟩, N_6 = ⟨1⟩. We will compute the directed tree G_{N_i} for all
i. From fact 1, GN1 is just the vertex (G, G, 1G ) and it corresponds to the Shoda
pair (G, G). For 2 ≤ i ≤ 4, G/Ni is cyclic. Therefore if ϕ1 , ϕ2 , ϕ3 are linear
characters on G with kernel N2 , N3 , N4 respectively, then, by fact 3 above, we
have the following:
(G, G, ϕ1 )
(G, G, ϕ2 )
(G, G, ϕ3 )
(G, N2 , 1N2 )
(G, N3 , 1N3 )
(G, N4 , 1N4 )
Figure 3: GN2
Figure 4: GN3
Figure 5: GN4
and the leaves (G, G, ϕ1 ), (G, G, ϕ2) and (G, G, ϕ3 ) yield the Shoda pairs (G, N2 ),
(G, N3 ), (G, N4 ) of G respectively.
Construction of GN5 : We first need to compute d.c.c. of (G, N5 , 1N5 ). Observe
that N4 /N5 is an abelian normal subgroup of maximal order in G/N5 . Therefore,
we set A(G,N5 ,1N5 ) = N4 . We now use fact 5 to compute Cl(G, N5 , 1N5 ). It turns
out that K = {⟨x_4^i x_5, x_6⟩ | 0 ≤ i ≤ 4} ∪ {⟨x_4, x_6⟩}. Moreover, ⟨x_5, x_6⟩, ⟨x_4, x_6⟩ and ⟨x_4^{−1}x_5, x_6⟩ are the only subgroups in K which are distinct up to conjugacy in G.
Consider the linear characters ϕ1 , ϕ2 , ϕ3 on N4 given as follows:
ϕ_1 : x_4 ↦ ξ_5, x_5 ↦ 1, x_6 ↦ 1
ϕ_2 : x_4 ↦ 1, x_5 ↦ ξ_5, x_6 ↦ 1
ϕ_3 : x_4 ↦ ξ_5, x_5 ↦ ξ_5, x_6 ↦ 1,
24
where ξ5 is a primitive 5th root of unity. Clearly the kernels of ϕ1 , ϕ2 and ϕ3 are
⟨x_5, x_6⟩, ⟨x_4, x_6⟩ and ⟨x_4^{−1}x_5, x_6⟩ respectively. Also we have I_G(ϕ_1) = I_G(ϕ_2) =
IG (ϕ3 ) = N4 . Hence, the directed tree is as follows:
(N4 , N4 , ϕ1 ) (N4 , N4 , ϕ2 ) (N4 , N4 , ϕ3 )
(G, N5 , 1N 5 )
Figure 6: GN5
The three leaves (N4 , N4 , ϕ1 ), (N4 , N4 , ϕ2 ), (N4 , N4 , ϕ3 ), respectively, yield Shoda
pairs (⟨x_4, x_5, x_6⟩, ⟨x_5, x_6⟩), (⟨x_4, x_5, x_6⟩, ⟨x_4, x_6⟩) and (⟨x_4, x_5, x_6⟩, ⟨x_4^{−1}x_5, x_6⟩) of
G.
Construction of G_{N_6}: Recall that N_6 = ⟨1⟩. Note that N_5 is an abelian normal subgroup of maximal order in G. Therefore, we set A_(G,N_6,1_{N_6}) = N_5. As N_5/N_6 is cyclic, by fact 2,
Cl(G, N_6, 1_{N_6}) = {(G_1, N_5, ϕ_1)},
where ϕ_1 : N_5 → C maps x_6 to ξ_5, and G_1 = I_G(ϕ_1) = ⟨x_3, x_4, x_5, x_6⟩. As G_1 ≠ N_5, we further need to compute the d.c.c. of (G_1, N_5, ϕ_1). It is observed that ⟨x_5, x_6⟩/ker ϕ_1 is an abelian normal subgroup of maximal order in G_1/ker ϕ_1. Set A_(G_1,N_5,ϕ_1) = ⟨x_5, x_6⟩. We now use fact 5 to compute Cl(G_1, N_5, ϕ_1). It turns out that there are 5 subgroups in K, namely ⟨x_5 x_6^i⟩, 0 ≤ i ≤ 4, and all of them are conjugate in G_1. Consider the linear character ϕ_2 on ⟨x_5, x_6⟩ which maps x_5 to 1 and x_6 to ξ_5. Hence
Cl(G_1, N_5, ϕ_1) = {(G_2, ⟨x_5, x_6⟩, ϕ_2)},
where G_2 = I_{G_1}(ϕ_2) = ⟨x_5, x_6, x_3x_4^2⟩. Again G_2 ≠ ⟨x_5, x_6⟩, and we compute Cl(G_2, ⟨x_5, x_6⟩, ϕ_2). Now G_2/ker ϕ_2 is abelian. Therefore, using facts 5 and 6, we obtain that
Cl(G_2, ⟨x_5, x_6⟩, ϕ_2) = {(G_2, G_2, ϕ_3), (G_2, G_2, ϕ_4)},
where ϕ_3 and ϕ_4 are linear characters on G_2 with kernel ⟨x_3x_4^2x_6^3, x_5^3⟩ and ⟨x_5^3⟩ respectively. The process stops here and the corresponding tree is as follows:
(G2 , G2 , ϕ3 ) (G2 , G2 , ϕ4 )
(G_2, ⟨x_5, x_6⟩, ϕ_2)
(G1 , N5 , ϕ1 )
(G, N6 , 1N6 )
Figure 7: GN6
The leaves (G_2, G_2, ϕ_3) and (G_2, G_2, ϕ_4) of G_{N_6} correspond to the Shoda pairs (⟨x_5, x_6, x_3x_4^2⟩, ⟨x_3x_4^2x_6^3, x_5^3⟩) and (⟨x_5, x_6, x_3x_4^2⟩, ⟨x_5^3⟩) of G respectively. It turns out that the collection of Shoda pairs corresponding to all the G_{N_i}, 1 ≤ i ≤ 6, is a complete irredundant set of Shoda pairs of G. Furthermore, if (H, K) is any of the Shoda pairs constructed with this process, then, from Theorem 4, it follows that
α_(G,H,K) = 1/2 if (H, K) = (⟨x_5, x_6, x_3x_4^2⟩, ⟨x_3x_4^2x_6^3, x_5^3⟩) or (⟨x_5, x_6, x_3x_4^2⟩, ⟨x_5^3⟩), and α_(G,H,K) = 1 otherwise.
7.2 Example 2
Consider the group G generated by a, b, c, d, e, f with the following defining
relations:
a^2 = b^3 = c^3 = d^3 = 1, a^{−1}ba = b^{−1}, a^{−1}ca = c^{−1}, a^{−1}da = d, b^{−1}cb = cd, b^{−1}db = d, c^{−1}dc = d.
There are 8 normal subgroups of G, given as follows: N_1 = G, N_2 = ⟨b, c⟩, N_3 = ⟨c, d⟩, N_4 = ⟨cb, c^{−1}b^{−1}⟩, N_5 = ⟨cb^{−1}, c^{−1}b⟩, N_6 = ⟨b, d⟩, N_7 = ⟨d⟩ and N_8 = ⟨1⟩. As before, the
directed tree GN1 is just the vertex (G, G, 1G ), which gives Shoda pair (G, G). As
G/N2 is cyclic, by fact 3, the tree corresponding to N2 is as given below:
(G, G, ϕ)
(G, N2 , 1N2 )
Figure 8: GN2
where ϕ can be taken as any linear character on G with kernel N2 , which gives
Shoda pair (G, N2 ) of G.
For 3 ≤ i ≤ 6, N2 /Ni is an abelian normal subgroup of maximal order in G/Ni .
Hence we set A(G,Ni ,1Ni ) = N2 . It turns out that A(G,Ni ,1Ni ) /Ni is cyclic and maximal among all the abelian subgroups of G/Ni . Therefore by fact 2, GNi , 3 ≤ i ≤ 6,
can be described as follows:
(⟨b, c⟩, ⟨b, c⟩, ϕ_1)
(G, N3 , 1N3 )
Figure 9: GN3
(⟨b, c⟩, ⟨b, c⟩, ϕ_2)
(⟨b, c⟩, ⟨b, c⟩, ϕ_3)
(G, N4 , 1N4 )
(G, N5 , 1N5 )
Figure 10: GN4
Figure 11: GN5
(⟨b, c⟩, ⟨b, c⟩, ϕ_4)
(G, N6 , 1N6 )
Figure 12: GN6
where ϕ_1, ϕ_2, ϕ_3, ϕ_4 are any linear characters on ⟨b, c⟩ with kernel N_3, N_4, N_5, N_6 respectively. The above trees yield Shoda pairs (⟨b, c⟩, N_3), (⟨b, c⟩, N_4), (⟨b, c⟩, N_5), (⟨b, c⟩, N_6) of G.
Next, it turns out that Cl(G, N7 , 1N7 ) is empty. Therefore, GN7 is just the vertex
(G, N7 , 1N7 ), which does not give any Shoda pair.
It now remains to construct the directed tree GN8 . Observe that N3 /N8 is an
abelian normal subgroup of maximal order in G/N8 . Hence we can set A(G,N8 ,1N8 ) =
N3 . In view of fact 5, we have Cl(G, N8 , 1N8 ) = {(G1 , N3 , ϕ)}, where ϕ is a
linear character on N_3 with kernel ⟨c⟩ and G_1 = I_G(ϕ) = ⟨a, c, d⟩. As G_1 ≠ N_3, we further compute Cl(G_1, N_3, ϕ). Now observe that G_1/ker ϕ = G_1/⟨c⟩ is abelian and therefore facts 5 & 6 yield
Cl(G_1, N_3, ϕ) = {(G_1, G_1, ϕ_1), (G_1, G_1, ϕ_2)},
where ϕ_1 and ϕ_2 can be taken to be any linear characters of G_1 with kernel ⟨c⟩ and ⟨a, c⟩ respectively. The process stops here and the corresponding tree is as follows:
(G1 , G1 , ϕ1 ) (G1 , G1 , ϕ2 )
(G_1, ⟨c, d⟩, ϕ)
(G, N8 , 1N8 )
Figure 13: GN8
and it yields Shoda pairs (⟨a, c, d⟩, ⟨c⟩) and (⟨a, c, d⟩, ⟨a, c⟩) of G. Again, it turns
out that the collection of Shoda pairs constructed with this process is a complete
and irredundant set of Shoda pairs of G.
Human-in-the-Loop Synthesis for Partially Observable Markov Decision Processes
arXiv:1802.09810v1, 27 Feb 2018
Steven Carr1
Nils Jansen2
Ralf Wimmer3
Abstract— We study planning problems where autonomous
agents operate inside environments that are subject to uncertainties and not fully observable. Partially observable Markov
decision processes (POMDPs) are a natural formal model to
capture such problems. Because of the potentially huge or
even infinite belief space in POMDPs, synthesis with safety
guarantees is, in general, computationally intractable. We
propose an approach that aims to circumvent this difficulty:
in scenarios that can be partially or fully simulated in a virtual
environment, we actively integrate a human user to control
an agent. While the user repeatedly tries to safely guide the
agent in the simulation, we collect data from the human input.
Via behavior cloning, we translate the data into a strategy
for the POMDP. The strategy resolves all nondeterminism and
non-observability of the POMDP, resulting in a discrete-time
Markov chain (MC). The efficient verification of this MC gives
quantitative insights into the quality of the inferred human
strategy by proving or disproving given system specifications.
For the case that the quality of the strategy is not sufficient, we
propose a refinement method using counterexamples presented
to the human. Experiments show that by including humans into
the POMDP verification loop we improve the state of the art
by orders of magnitude in terms of scalability.
I. INTRODUCTION
We aim at providing guarantees for planning scenarios
given by dynamical systems with uncertainties and partial
observability. In particular, we want to compute a strategy
for an agent that ensures certain desired behavior [15].
A popular formal model for planning subject to stochastic
behavior are Markov decision processes (MDPs). An MDP
is a nondeterministic model in which the agent chooses to
perform an action under full knowledge of the environment
it is operating in. The outcome of the action is a probability distribution over the system states. Many applications,
however, allow only partial observability of the current
system state [20], [40], [45]. For such applications, MDPs are
extended to partially observable Markov decision processes
(POMDPs). While the agent acts within the environment,
it encounters certain observations, according to which it
can infer the likelihood of the system being in a certain
state. This likelihood is called the belief state. Executing an
action leads to an update of the belief state according to
new observations. The belief state together with the update
function form a (possibly infinite) MDP, commonly referred
to as the underlying belief MDP [35].
1 The University of Texas at Austin, USA
2 Radboud University, Nijmegen, The Netherlands, [email protected]
3 Albert-Ludwigs-Universität Freiburg, Freiburg im Breisgau, Germany
4 Worcester Polytechnic Institute (WPI), USA
Jie Fu4
Ufuk Topcu1
[Fig. 1: Example Gridworld Environment. Features are (1) an agent with restricted range of vision (green area), (2) static and randomly moving obstacles (red), and (3) a goal area G.]
As a motivating example, take a motion planning scenario
where we want to devise a strategy for an autonomous agent
accounting for both randomly moving and static obstacles.
Observation of these obstacles is only possible within a
restricted field of vision like in Fig. 1. The strategy shall
provably ensure a safe traversal to the goal area with a certain
high probability. On top of that, the expected performance of
the agent according to the strategy shall encompass taking the
quickest possible route. These requirements amount to having
quantitative reachability specifications like “The probability to
reach the goal area without crashing into obstacles is at least
90 %” and expected cost specifications like “The expected
number of steps to reach the goal area is at most 10”.
Quantitative verification techniques like probabilistic model
checking (PMC) [21] provide strategies inducing guarantees
on such specifications. PRISM [25] or Storm [12] employ
efficient methods for finite MDPs, while POMDP verification
– as implemented in a PRISM prototype [29] – generates a
large, potentially infinite, belief MDP, and is intractable even
for rather small instances. So-called point-based methods [30],
[38] employ sampling of belief states. They usually have
slightly better scalability than verification, but there is no
guarantee that a strategy provably adheres to specifications.
We discuss two typical problems in POMDP verification.
1) For applications following the aforementioned example,
verification takes any specific position of obstacles
and previous decisions into account. More generally,
strategies inducing optimal values are computed by
assessment of the full belief MDP [35].
2) Infinite horizon problems may require a strategy to have
infinite memory. However, randomization over possible
choices can often trade off memory [9]. The intuition is
that deterministic choices at a certain state may need to
vary depending on previous decisions and observations.
Allowing for a probability distribution over the choices
– relaxing determinism – is often sufficient to capture
the necessary variability in the decisions. As also finite
memory can be encoded into a POMDP by extending
the state space, randomization then supersedes infinite
memory for many cases [2], [19].
Here, we propose to make active use of humans’ power
of cognition to (1) achieve an implicit abstraction of the
belief space and (2) capture memory-based decisions by
randomization over choices. We translate POMDP planning
scenarios into virtual environments where a human can
actively operate an agent. In a nutshell, we create a game that
the human plays to achieve a goal akin to the specifications
for the POMDP scenario.
This game captures a family of concrete scenarios, for
instance varying in characteristics like the agent’s starting
position or obstacle distribution. We collect data about the
human actions from a number of different scenarios to
build a training set. With Hoeffding’s inequality [46], we
statistically infer how many human inputs are needed until
further scenarios won’t change the likelihoods of choices.
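Concretely, Hoeffding's inequality yields a simple stopping rule: to pin an empirical action frequency to within ε of its true likelihood with confidence 1 − δ, roughly ln(2/δ)/(2ε²) independent human inputs suffice. A sketch of this standard bound (the tolerance values below are only illustrative, not taken from the paper):

```python
from math import ceil, log

def hoeffding_samples(eps, delta):
    """Smallest n with 2 * exp(-2 * n * eps**2) <= delta: after n i.i.d.
    demonstrations, an empirical action frequency deviates from the true
    likelihood by more than eps with probability at most delta."""
    return ceil(log(2 / delta) / (2 * eps ** 2))

# e.g. action likelihoods accurate to +-5% with 95% confidence:
print(hoeffding_samples(0.05, 0.05))  # 738
```

Loosening the accuracy to ±10% drops the requirement to 185 inputs, which is why a coarse observation-action space (see the data augmentation below in Sect. III) pays off directly in training effort.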
Using behavior cloning techniques from Learning-from-Demonstration (LfD) [3], [34], we cast the training set into
a strategy for a POMDP that captures one specific scenario.
Such a strategy fully resolves the nondeterminism and partial
observability, resulting in a discrete-time Markov chain (MC).
PMC for this MC is efficient [25] and proves or disproves
the satisfaction of specifications for the computed strategy.
A human implicitly bases their decisions on experiences
over time, i. e., on memory [11]. We collect likelihoods of
decisions and trade off such implicit memory by translating
these likelihoods into randomization over decisions. In
general, randomization plays a central role for describing
human behavior in cognitive science. Take for instance [23]
where human behavior is related to quantifying the trade-off
between various decisions in Bayesian decision theory.
Formally, the method yields a randomized strategy for the
POMDP that may be extended with a finite memory structure.
Note that computing such a strategy is already NP-hard,
SQRT-SUM-hard, and in PSPACE [41], justifying the usage
of heuristic and approximative methods.
Naturally, such a heuristic procedure offers no guarantee of optimality. However, employing a refinement technique incorporating stochastic counterexamples [1], [17] enables us to immerse the human pointedly in critical situations and gather more specific data. In addition, we employ bounds
on the optimal performance of an agent derived from the
underlying MDP. This delivers an indication whether no
further improvement is possible by the human.
Besides simple motion planning, possible applications
include self-driving cars [13], autonomous trading agents
in the stock market [42], or service robots [22].
We implemented this synthesis cycle with humans in
the loop within a prototype employing efficient verification.
The results are very promising as both PRISM and point-based solvers are outperformed by orders of magnitude both
regarding running time and the size of tractable models.
Our approach is inherently correct, as any computed
strategy is verified for the specifications.
Related Work: Closest to our work is [10], where deep
reinforcement learning employs human feedback. In [32],
a robot in a partially observable motion planning scenario
can request human input to resolve the belief space. The
availability of a human is modeled as a stochastic sensor.
Similarly, oracular POMDPs [4] capture scenarios where
a human is always available as an oracle. The latter two
approaches do not discuss how to actually include a human
in the scenarios. The major difference of all approaches listed
above in comparison to our method is that by employing
verification of inferred strategies, we obtain hard guarantees
on the safety or performance.
Verification problems for POMDPs and their decidability
have been studied in [7]. [44] investigates abstractions for
POMDP motion planning scenarios formalizing typical human
assessments like “the obstacle is either near or far”, learning
MDP strategies from human behavior in a shared control
setting was used in [16]. Finally, various learning-based
methods and their (restricted) scalability are discussed in [5].
Structure of the paper: After formalisms in Sect. II,
Section III gives a full overview of our methodology. In
Sect. IV, we formally discuss randomization and memory for
POMDP strategies; after that we introduce strategy generation
for our setting together with an extensive example. We
describe our experiments and results in Sect. VI.
II. PRELIMINARIES
A probability distribution over a finite or countably infinite set
X is a function µ : X → [0, 1] ⊆ R with ∑x∈X µ(x) = µ(X) =
1. The set of all distributions on X is Distr(X). The support
of a distribution µ is supp(µ) = {x ∈ X | µ(x) > 0}.
A. Probabilistic Models
Definition 1 (MDP) A Markov decision process (MDP) M is
a tuple M = (S, sI , Act, P) with a finite (or countably infinite)
set S of states, an initial state sI ∈ S, a finite set Act of actions,
and a probabilistic transition function P : S × Act → Distr(S).
The available actions in s ∈ S are Act(s) = {a ∈ Act | (s, a) ∈
dom(P)}. We assume the MDP M contains no deadlock states,
i. e., Act(s) 6= 0/ for all s ∈ S. A discrete-time Markov chain
(MC) is an MDP with |Act(s)| = 1 for all s ∈ S.
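Definition 1 transcribes almost directly into code. The sketch below (a toy MDP of our own, not the paper's implementation) stores P as a nested dictionary keyed by state-action pairs and checks the two side conditions: no deadlocks, and every P(s, a) a probability distribution.

```python
# A toy MDP in the shape of Definition 1; all numbers are made up.
mdp = {
    "states": {"s0", "s1", "s2"},
    "init": "s0",
    # P : S x Act -> Distr(S), stored as (state, action) -> {successor: prob}
    "P": {
        ("s0", "a"): {"s1": 0.5, "s2": 0.5},
        ("s0", "b"): {"s2": 1.0},
        ("s1", "a"): {"s1": 1.0},
        ("s2", "a"): {"s2": 1.0},
    },
}

def enabled_actions(mdp, s):
    """Act(s) = {a | (s, a) in dom(P)}."""
    return {a for (t, a) in mdp["P"] if t == s}

def check_mdp(mdp):
    """No deadlock states, and each P(s, a) sums to one."""
    assert all(enabled_actions(mdp, s) for s in mdp["states"])
    for dist in mdp["P"].values():
        assert abs(sum(dist.values()) - 1.0) < 1e-9

check_mdp(mdp)  # the MC condition would additionally require |Act(s)| = 1
```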
A path (or run) of M is a finite or infinite sequence π = s_0 −a_0→ s_1 −a_1→ ···, where s_0 = s_I, s_i ∈ S, and a_i ∈ Act(s_i). The sets of finite and infinite paths are Paths^M_fin and Paths^M, respectively. To define
a probability measure for MDPs, strategies resolve the
nondeterministic choices of actions. Intuitively, at each state
a strategy determines a distribution over actions to take. This
decision may be based on the history of the current path.
Definition 2 (Strategy) A strategy σ for M is a function σ : Paths^M_fin → Distr(Act) s. t. supp(σ(π)) ⊆ Act(last(π)) for all π ∈ Paths^M_fin. Σ^M denotes the set of all strategies of M.
A strategy σ is memoryless if last(π) = last(π′) implies σ(π) = σ(π′) for all π, π′ ∈ dom(σ). It is deterministic if σ(π) is a Dirac distribution for all π ∈ dom(σ). A strategy that is not deterministic is randomized. Here, we mostly use strategies that are memoryless and randomized, i. e., of the form σ : S → Distr(Act).
A strategy σ for an MDP resolves all nondeterministic
choices, yielding an induced Markov chain (MC) M σ , for
which a probability measure over the set of infinite paths is
defined by the standard cylinder set construction.
Definition 3 (Induced Markov Chain) Let MDP M =
(S, sI , Act, P) and strategy σ ∈ ΣM . The MC induced by M
and σ is given by M σ = (S, sI , Act, Pσ ) where:
P^σ(s, s′) = ∑_{a∈Act(s)} σ(s)(a) · P(s, a)(s′)   for all s, s′ ∈ S.

[Fig. 2: Workflow of the human-in-the-loop (HiL) methodology. An initial POMDP D and specification ϕ feed a training environment; human demonstrations are cast into a strategy via behavior cloning; model checking returns SAT, or UNSAT together with a refined POMDP for further training.]
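For the memoryless randomized strategies used throughout, the defining equation of the induced MC is a one-line mixture over actions. A minimal sketch with made-up numbers:

```python
def induce_mc(P, sigma):
    """P^sigma(s, s') = sum over a of sigma(s)(a) * P(s, a)(s'), for a
    memoryless randomized strategy sigma: state -> {action: probability}."""
    mc = {}
    for (s, a), dist in P.items():
        for s2, p in dist.items():
            mc.setdefault(s, {})
            mc[s][s2] = mc[s].get(s2, 0.0) + sigma[s].get(a, 0.0) * p
    return mc

# Toy MDP: in s0 the strategy mixes the two enabled actions evenly.
P = {("s0", "a"): {"s1": 0.5, "s2": 0.5},
     ("s0", "b"): {"s2": 1.0},
     ("s1", "a"): {"s1": 1.0},
     ("s2", "a"): {"s2": 1.0}}
sigma = {"s0": {"a": 0.5, "b": 0.5}, "s1": {"a": 1.0}, "s2": {"a": 1.0}}
print(induce_mc(P, sigma)["s0"])  # {'s1': 0.25, 's2': 0.75}
```

All nondeterminism is gone in the result: each state carries a single distribution, which is exactly what makes the subsequent model checking efficient.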
B. Partial Observability
Definition 4 (POMDP) A partially observable Markov decision process (POMDP) is a tuple D = (M, Z, O) such that
M = (S, sI , Act, P) is the underlying MDP of D, Z is a finite
set of observations and O : S → Z is the observation function.
We require that states with the same observations have the same set of enabled actions, i. e., O(s) = O(s′) implies Act(s) = Act(s′) for all s, s′ ∈ S. More general observation functions take the last action into account and provide a distribution over Z. There is a transformation of the general case to the POMDP definition used here that blows up the state space polynomially [8].
Furthermore, let Pr(s|z) be the probability that, given observation z ∈ Z, the state of the POMDP is s ∈ S. We assume a maximum-entropy probability distribution [18] to provide an initial distribution over potential states for an observation z, given by Pr(s|z) = 1/|{s′ ∈ S | z = O(s′)}|. Vice versa, we set Pr(z|s) = 1 iff z = O(s) and Pr(z|s) = 0 otherwise.
The notion of paths directly transfers from MDPs to POMDPs. We lift the observation function to paths: for a POMDP D and a path π = s_0 −a_0→ s_1 −a_1→ ··· s_n ∈ Paths^M_fin, the associated observation sequence is O(π) = O(s_0) −a_0→ O(s_1) −a_1→ ··· O(s_n). Note that several paths in the underlying MDP may yield the same observation sequence. Strategies have to take this restricted observability into account.
Definition 5 An observation-based strategy of POMDP D is a function σ : Paths^M_fin → Distr(Act) such that σ is a strategy for the underlying MDP and for all paths π, π′ ∈ Paths^M_fin with O(π) = O(π′) we have σ(π) = σ(π′). Σ^D_z denotes the set of observation-based strategies for D.
An observation-based strategy selects actions based on the observations encountered along a path and past actions. Note that
applying an observation-based strategy to a POMDP yields
an induced MC as in Def. 3 where all nondeterminism and
partial observability is resolved. Again, we use memoryless
and randomized strategies of the form σz : Z → Distr(Act).
The semantics of a POMDP can be described using a belief MDP with an uncountable state space. The idea is that each state of the belief MDP corresponds to a distribution over the states in the POMDP; this distribution is expected to correspond to the probability of being in a specific state, given the observations made so far.
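The update function of the belief MDP is the standard Bayes rule: after action a and observation z, each successor state is weighted by its transition probability, kept only if it is consistent with z (recall Pr(z|s) ∈ {0, 1} here), and normalized by Pr(z | b, a). A sketch in the same dictionary style, with toy numbers of our own:

```python
def belief_update(P, O, b, a, z):
    """Bayes update of belief b = {state: probability} after taking action a
    and observing z, for a deterministic observation function O(s).
    Successors inconsistent with z have Pr(z|s') = 0 and are dropped."""
    new_b = {}
    for s, bs in b.items():
        for s2, p in P.get((s, a), {}).items():
            if O[s2] == z:  # Pr(z | s2) = 1 iff z = O(s2)
                new_b[s2] = new_b.get(s2, 0.0) + bs * p
    total = sum(new_b.values())  # Pr(z | b, a); assumed > 0 here
    return {s2: q / total for s2, q in new_b.items()}

# Toy POMDP: s1 and s2 share observation z1, so they stay indistinguishable.
P = {("s0", "a"): {"s1": 0.5, "s2": 0.5},
     ("s1", "a"): {"s1": 1.0},
     ("s2", "a"): {"s2": 1.0}}
O = {"s0": "z0", "s1": "z1", "s2": "z1"}
print(belief_update(P, O, {"s0": 1.0}, "a", "z1"))  # {'s1': 0.5, 's2': 0.5}
```

The set of reachable beliefs can grow without bound, which is the source of the intractability discussed in the introduction.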
C. Specifications

For a POMDP D = (M, Z, O), a set G ⊆ S of goal states
and a set B ⊆ S of bad states, we consider quantitative
reach-avoid specifications of the form ϕ = P>λ (¬B U G). A
strategy σ_z ∈ Σ_z satisfies this specification if the probability
of reaching a goal state without entering a bad state in
between is at least λ in the induced MC, written Dσz |= ϕ. We
also use similar specifications of the form EC≤κ (¬B U G),
measuring the expected cost to safely reach a goal state. For
POMDPs, observation-based strategies in their full generality
are necessary [33].
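On an induced MC, the probability in P>λ(¬B U G) is a fixed point of a simple system: 1 on G, 0 on B, and the one-step average elsewhere. A value-iteration sketch (the chain below is a made-up example; a tool like PRISM solves such systems exactly):

```python
def reach_avoid_prob(mc, goal, bad, iters=1000):
    """Approximate Pr(not bad U goal) on a Markov chain by iterating the
    fixed point: 1 on goal, 0 on bad, one-step average elsewhere."""
    states = set(mc) | {s2 for dist in mc.values() for s2 in dist}
    p = {s: 1.0 if s in goal else 0.0 for s in states}
    for _ in range(iters):
        p = {s: 1.0 if s in goal
             else 0.0 if s in bad
             else sum(q * p[s2] for s2, q in mc.get(s, {}).items())
             for s in states}
    return p

# Toy chain: from s0 move towards the goal g with 0.8, crash into b with 0.2.
mc = {"s0": {"s1": 0.8, "b": 0.2},
      "s1": {"g": 0.5, "s0": 0.5},
      "g": {"g": 1.0}, "b": {"b": 1.0}}
p = reach_avoid_prob(mc, goal={"g"}, bad={"b"})
print(round(p["s0"], 4))  # 0.6667, so e.g. the specification P>0.9(...) fails
```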
Consider a specification ϕ that is not satisfied by an MC
or MDP M. One common definition of a counterexample is a (minimal) subset S′ ⊆ S of the state space such that the MC or sub-MDP induced by S′ still violates ϕ [1]. The intuition is that the reduced state space highlights the critical parts.
III. METHODOLOGY
A. Problem Statement
We are given a partially observable planning scenario, which is
modeled by a POMDP D, and a specification ϕ. The POMDP
D is one of a family of similar planning scenarios, where
each concrete scenario can be modeled by an individual POMDP.
The goal is to compute an observation-based randomized memoryless strategy σ_z ∈ Σ^D_z such that D^{σ_z} |= ϕ.
The general workflow we employ is shown in Fig. 2. Note
that we mostly assume a family of POMDP scenarios to train
the human strategy, as will be explained in what follows. We
now detail the specific parts of the proposed approach.
B. Training Environment
Our setting necessitates that a virtual and interactive environment called training environment sufficiently captures the
underlying POMDP planning scenarios. The initial training
environment can be parameterized for: the size of the
environment; the numbers and locations of dynamic obstacles
and landmarks; and the location of the goal state.
[Fig. 3: Possible observations (left) and two observations triggering similar actions (right).]
Similar classes of problems would require similar initial
training environments. For example, an environment may
incorporate a small grid with one dynamic obstacle and two
landmarks, while the actual POMDP we are interested in
needs the same number of dynamic obstacles but may induce
a larger grid or add additional landmarks. The goal state
location also impacts the type of strategy trained by the
human. With a randomized goal location, the human will
prioritize obstacle avoidance over minimizing expected cost.
We directly let the human control the agent towards a
conveyable goal, such as avoiding obstacles while moving
to the goal area. We store all data regarding the human
control inputs in a training set. For a POMDP, this means
that at each visited state of the underlying MDP we store the
corresponding observation and the human’s action choice.
We now collect data from several (randomly-generated)
environments until statistically the human input will not
significantly change the strategy by collecting further data.
In fact, the training set contains likelihoods of actions.
C. Strategy Generation from Behavior Cloning
We compute an environment-independent strategy for the
agent by casting the collected data into probability distributions over actions for each observation at each state of the
system. Intuitively, the strategy is independent from a concrete
environment but compatible with all concrete scenarios
the training environment captures. Generally, linear [14]
or softmax regression [46] offers a means to convert the likelihoods over actions into probabilities. Formally, we
get an observation-based strategy of the POMDP. The
computed strategy mimics typical human choices in all
possible situations.
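In its simplest form, this casting step is a per-observation normalization of the recorded action counts, optionally smoothed by a softmax. The following sketch uses hypothetical counts and only illustrates the shape of the resulting strategy σ_z : Z → Distr(Act):

```python
from math import exp

def clone_strategy(counts, temperature=None):
    """Cast per-observation action counts {obs: {action: count}} from the
    training set into a randomized observation-based strategy
    sigma_z : Z -> Distr(Act).  Plain normalization by default; with a
    temperature, a softmax over the counts instead."""
    sigma = {}
    for z, acts in counts.items():
        if temperature is None:
            total = sum(acts.values())
            sigma[z] = {a: c / total for a, c in acts.items()}
        else:
            weights = {a: exp(c / temperature) for a, c in acts.items()}
            total = sum(weights.values())
            sigma[z] = {a: w / total for a, w in weights.items()}
    return sigma

# Hypothetical training data: for observation "obstacle-left" the human
# moved right in 15 of 20 demonstrations.
counts = {"obstacle-left": {"right": 15, "up": 5}}
print(clone_strategy(counts))  # {'obstacle-left': {'right': 0.75, 'up': 0.25}}
```

Keeping the full distribution, rather than the argmax action, is what realizes the randomization that trades off memory, as discussed in Sect. I.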
So far, such a strategy requires a large training set that
needs a long time to be established, as we need human input
for all different observations and actions. If we do not have a
sufficiently large training set, that is, we have a lack of data,
the strategy is underspecified.
We use data augmentation [24] to reduce the observation-action space upon which we train the strategy. Our assumption
is that the human acts similarly upon similar observations of
the environment. For instance, take two different observations
describing that a moving obstacle is to the right border or
to the left of the agent’s range of vision. While these are in
fact different observations, they may trigger similar actions,
moving in opposite directions (see Fig. 3: on the left, the
agent's possible observations; on the right, two observations
triggering similar actions, away from the obstacle).
Therefore, we define an equivalence
relation on observations and actions.

[Fig. 4: Randomization vs. memory — a POMDP with states s0–s7, observations given by colors, transition probabilities 1/2, 1/3, and 2/3, and a nondeterministic choice between "up" and "down" in the blue states; discussed in Example 1.]

We then weigh the
likelihoods of equivalent actions for each state with equivalent
observations and again cast these weighted likelihoods into
probability distributions. Summarized, as this method reduces
the observation-action space, it also reduces the required size
of the training set and the required number of human inputs.
D. Refinement through Model Checking and Counterexamples
We apply the computed strategy to a POMDP for a concrete
scenario. As we resolve all nondeterminism and partial
observability, the resulting model is an MC. To efficiently
verify this MC against the given specification, we employ
probabilistic model checking using PRISM. For instance,
if for this MC the probability of reaching the goal area
without bumping into obstacles is above a certain threshold,
the computed strategy provably induces this exact probability.
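Concretely, such a threshold specification can be stated in PRISM's property language. The sketch below is illustrative only: the labels "crash" and "goal" are assumed names for the obstacle and goal states, not identifiers from the paper's model files.

```
// Illustrative PRISM properties (label names are assumptions)
P>=0.8 [ !"crash" U "goal" ]   // goal reached without crashing, with probability at least 0.8
P=? [ !"crash" U "goal" ]      // query the exact probability in the induced Markov chain
R=? [ F "goal" ]               // expected cumulated reward (e.g. steps) until the goal
```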
In case PMC reveals that the obtained strategy does not
satisfy the requirements, we need to improve the strategy for
the specific POMDP we are dealing with. Here, we again
take advantage of the human-in-the-loop principle. First,
we generate a counterexample using, e. g., the techniques
described in [17]. Such a counterexample highlights critical
parts of the state space of the induced MC. We then immerse
the human into critical parts in the virtual environment
corresponding to critical states of the specific POMDP. By
gathering more data in these apparently critical situations for
this scenario we strive to improve the human performance
and the quality of the strategy.
IV. R ANDOMIZED S TRATEGIES
Deciding if there is an observation-based strategy for a
POMDP satisfying a specification as in Sec. II-C typically requires unbounded memory and is undecidable in general [27].
If we restrict ourselves to the class of memoryless strategies
(which decide only depending on the current observation), we
need to distinguish two sub-classes: (1) finding an optimal
deterministic memoryless strategy is NP-complete [26], (2)
finding an optimal randomized memoryless strategy is NP-hard,
SQRT-SUM-hard, and in PSPACE [41]. From a practical
perspective, randomized strategies are much more powerful
as one can – to a certain extent – simulate memory by
randomization. The following example illustrates this effect
and its limitations.
Example 1 In the POMDP in Fig. 4, observations are
defined by colors. The goal is to reach s7 with maximal
probability. The only states with nondeterminism are s3 ,
s4 , and s5 (blue). For a memoryless deterministic strategy
selecting “up” in all blue states, the optimal value is 2/3.
A memoryless randomized strategy can select “up” with
probability 0 < p < 1 and “down” with probability 1 − p
for blue states. Then both from s3 and s4 , the target states
are eventually reached with probability 1 and from s5 with
probability p. Therefore the probability to reach s7 from the
initial state is 2/3 + 1/3 p < 1.
Finally, deterministic strategies with memory can distinguish state s5 from s3 and s4 because their predecessors have
different observations. An optimal strategy may select “up”
in a blue state if its predecessor is yellow, and otherwise “up”
if a blue state has been seen an even number of times and
“down” for an odd number, yielding probability 1 to reach s7 .
Summarized, computing randomized memoryless strategies
for POMDPs is – while still a hard problem – a powerful
alternative to the harder or even undecidable problem of
computing strategies with potentially infinite memory.
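The effect in Example 1 can be checked numerically by building the Markov chain induced by a randomized memoryless observation-based strategy and computing reachability probabilities, as in Sect. III-D. The sketch below uses a tiny stand-in model, not the exact POMDP of Fig. 4: a single blue observation with a "retry" loop, showing how randomization over "up"/"down" can substitute for memory.

```python
# Sketch: evaluate a randomized memoryless strategy on a POMDP by building
# the induced Markov chain and iterating reachability probabilities.
# The tiny model below is illustrative, not the exact POMDP of Fig. 4.

def induced_chain(trans, strategy, obs):
    """trans[s][a] = list of (prob, s'); strategy[z][a] = prob; obs[s] = z."""
    chain = {}
    for s, acts in trans.items():
        dist = {}
        for a, succs in acts.items():
            w = strategy[obs[s]].get(a, 0.0)
            for p, t in succs:
                dist[t] = dist.get(t, 0.0) + w * p
        chain[s] = dist
    return chain

def reach_prob(chain, start, goal):
    """Fixed-point iteration for the probability of eventually reaching goal."""
    x = {s: 0.0 for s in chain}
    x[goal] = 1.0
    for _ in range(10_000):
        for s in chain:
            if s != goal:
                x[s] = sum(p * x[t] for t, p in chain[s].items())
    return x[start]

# One blue state with a self-loop on "down": randomizing between "up" and
# "down" lets the agent "retry", which is how randomness simulates memory.
trans = {
    "s0": {"a": [(1.0, "b1")]},
    "b1": {"up": [(1.0, "goal")], "down": [(1.0, "b1")]},  # retry on "down"
    "goal": {"a": [(1.0, "goal")]},
}
obs = {"s0": "white", "b1": "blue", "goal": "green"}
strategy = {"white": {"a": 1.0}, "blue": {"up": 0.5, "down": 0.5}, "green": {"a": 1.0}}

chain = induced_chain(trans, strategy, obs)
print(round(reach_prob(chain, "s0", "goal"), 4))  # -> 1.0
```

Any deterministic memoryless strategy in this toy model either reaches the goal surely ("up") or never ("down"); the interesting point is that every strict randomization also achieves probability 1 in the limit, mirroring the retry argument for s3 and s4 in Example 1.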
V. S TRATEGY G ENERATION
We detail the four phases of behavior cloning from human
inputs: (1) building a training set, (2) feature-based data augmentation, (3) the initial strategy generation, and (4) refining
the initial strategy using counterexamples.
A. Training Set
We first provide a series of randomly-generated sample
environments to build a human-based training set. The
environments are randomized in size, location and number of
obstacles as well as the location of initial and goal states. The
training set ΩE is represented as a function ΩE : Z × Act → N,
where ΩE (z, a) = na means that na is the number of times
action a is selected by the human for observation z. The size
of the training set is given by |ΩE | = ∑z∈Z,a∈Act ΩE (z, a).
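The training set ΩE is just a counter over observation-action pairs. A minimal sketch, with observations encoded as bit-tuples and actions as direction labels (both encodings are illustrative, not prescribed by the paper):

```python
from collections import Counter

# Sketch: Omega_E : Z x Act -> N as a Counter over (observation, action)
# pairs; |Omega_E| is the total number of recorded human choices.
omega_e = Counter()

demonstrations = [  # (observation z, human action a) pairs from simulation
    ((0, 1, 0, 0, 0, 0, 0, 0), "right"),   # obstacle on the left -> move right
    ((0, 1, 0, 0, 0, 0, 0, 0), "right"),
    ((0, 1, 0, 0, 0, 0, 0, 0), "up"),
    ((0, 0, 0, 0, 0, 1, 0, 0), "left"),    # obstacle on the right -> move left
]
for z, a in demonstrations:
    omega_e[(z, a)] += 1

size = sum(omega_e.values())               # |Omega_E|
print(size)                                # -> 4
print(omega_e[((0, 1, 0, 0, 0, 0, 0, 0), "right")])  # -> 2
```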
In each sample environment, the human is given a map of
the static environment – the locations of static obstacles and
goal states – as well as an observation in the current state.
This observation may, for instance, refer to the position of a
visible obstacle. Moreover, the human is provided with one or
more specifications. Prescribing a threshold on the probability
of reaching the goal to a human seems impractical. Instead, to
have the human act according to the specifications outlined in
Sect. II, we can for instance ask the human to maximize the
probability of reaching the goal whilst minimizing an expected
cost. In practice, this just means that the human attempts
to maximize the probability of reaching the goal without
crashing and as quickly as possible. The human observes
the obstacles one-step from the agent (see Fig. 3), but is not
aware of the agent’s precise position or if observed obstacles
are static or dynamic. For an unknown initial position, there
are two phases [36], [37]:
1) Exploration: The human will first try to determine
their position while taking advantage of knowledge of
the static environment.
2) Exploitation: When the human is confident about their
current position they will start moving towards the goal.
The human acts on each (randomly generated) concrete
scenario until they either reach the goal or crash. We
continue collecting data until the human’s inputs no longer
significantly change the strategy. The statistically-derived
minimum required size |ΩE | of the initial training set is bounded
by Hoeffding’s inequality [46]. In particular, we derive an
upper bound ε ∈ R (with a confidence of 1 − δ for δ ∈ [0, 1])
for the difference between (1) a strategy that is independent
from further training with other concrete environments and
(2) the strategy derived from the training set of size |ΩE |.
The number of samples is bounded by
|ΩE | ≥ 1/(2ε²) · ln(2/δ) .   (1)
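Eq. (1) is a one-line computation. A small sketch, evaluated with the parameter values used later in Sect. VI-A:

```python
import math

# Sketch of the sample-size bound in Eq. (1):
# |Omega_E| >= ln(2/delta) / (2 * eps^2).
def min_training_set_size(eps: float, delta: float) -> int:
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

# Values used in Sect. VI-A: eps = 0.05, confidence 1 - delta = 0.99.
print(min_training_set_size(0.05, 0.01))  # -> 1060
```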
B. Feature Representation
Human input – choices of observation-action pairs – for
our simulations has limitations. First, it may not cover the
entire observation-space so we may have observations without
(sufficient) human choices of actions; the resulting strategy
will be underspecified. Additionally, many of the observationaction pairs are equivalent in nature since – in our example
setting – the tendency for the human’s action input is to
move away from neighboring obstacles. Similar equivalences
may be specified depending on the case study at hand. We
introduce a feature-based representation to take advantage of
such similarities to reduce the required size of a training set.
Consider therefore the gridworld scenario in Fig. 1. Recall
that the agent has restricted range of vision, see Fig. 3. The
set of positions in the grid Gridx × Gridy ⊆ N × N is
Pos = {(x, y) | x ∈ {0, . . . , Gridx }, y ∈ {0, . . . , Gridy }} .
For one dynamic obstacle, an agent state consists of
the position (xa , ya ) ∈ Pos of agent a and the position (xo , yo ) ∈ Pos of the dynamic obstacle o, i. e., s =
(xa , ya , xo , yo ) ∈ Pos × Pos. The agent’s actions Act =
{(−1, 0), (1, 0), (0, 1), (0, −1)} describe the one-step directions “left”, “right”, “up”, “down”. The set B of obstacle
positions is B = {(xo , yo ), (xl1 , yl1 ), . . . , (xlm , ylm ) | (xo , yo ) ∈
Pos, (xli , yli ) ∈ Pos, 1 ≤ i ≤ m} for dynamic obstacle o and
landmarks l1 , . . . , lm .
The observations describe the relative position of obstacles
with respect to the agent’s position, see Fig. 3. We describe
these positions by a set of Boolean functions Oi : S × 2^Pos →
{0, 1} where S = Posx × Posy is the agent’s position and for
a visibility distance of 1, Oi is defined for 1 ≤ i ≤ 8 by:
O1 (s, B) = 1 iff ((xa − 1, ya − 1) ∈ B) ∨ (xa = 0) ∨ (ya = 0),
O2 (s, B) = 1 iff ((xa − 1, ya ) ∈ B) ∨ (xa = 0),
O3 (s, B) = 1 iff ((xa − 1, ya + 1) ∈ B) ∨ (xa = 0) ∨ (ya = n),
O4 (s, B) = 1 iff ((xa , ya + 1) ∈ B) ∨ (ya = n),
O5 (s, B) = 1 iff ((xa + 1, ya + 1) ∈ B) ∨ (xa = n) ∨ (ya = n),
O6 (s, B) = 1 iff ((xa + 1, ya ) ∈ B) ∨ (xa = n),
O7 (s, B) = 1 iff ((xa + 1, ya − 1) ∈ B) ∨ (xa = n) ∨ (ya = 0),
O8 (s, B) = 1 iff ((xa , ya − 1) ∈ B) ∨ (ya = 0) .
Note that for a visibility distance of 2, Oi is defined for
1 ≤ i ≤ 24. Consequently, an observation z = O(s) at state
s is a vector z = (z(1) , . . . , z(8) ) ∈ {0, 1}8 with z(i) = Oi (s, B).
The observation space Z = {z1 , . . . , z256 } is the set of all
observation vectors.
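The eight functions above walk once around the agent's cell; a cell reads 1 if it holds an obstacle or lies beyond the grid border. A minimal sketch (assuming, as in the definitions, a square grid with Gridx = Gridy = n):

```python
# Sketch of the observation functions O_1..O_8 for visibility distance 1.
# Offsets are listed in the order of O_1..O_8 above; B is the obstacle set.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def observation(s, B, n):
    xa, ya = s
    z = []
    for dx, dy in OFFSETS:
        x, y = xa + dx, ya + dy
        # 1 iff the neighboring cell holds an obstacle or is outside the grid,
        # which is equivalent to the border disjuncts (xa = 0, ya = n, ...) above.
        blocked = (x, y) in B or x < 0 or x > n or y < 0 or y > n
        z.append(1 if blocked else 0)
    return tuple(z)

# Agent in the middle of a 4x4 grid with one obstacle directly to its left:
z = observation((2, 2), {(1, 2)}, n=4)
print(z)  # -> (0, 1, 0, 0, 0, 0, 0, 0): only O_2 fires
```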
Providing a human with enough environments to cover
the entire observation space is inefficient [39]. To simplify
this space, we introduce action-based features [31], which
capture the short-term human behavior of prioritizing to avoid
obstacles for current observations. Particularly, we define
features f : Z × Act → N. In our example setting we have
f1 (z, a) = Σ_{i=1}^{8} z^(i) ,

f2 (z, a) = a_x − Σ_{i=1}^{3} z^(i) + Σ_{i=5}^{7} z^(i) ,

f3 (z, a) = a_y − Σ_{i∈{1,7,8}} z^(i) + Σ_{i=3}^{5} z^(i) ,

where f1 describes the number of obstacles in the observation. f2 and f3 are the respective x and y directional components of the difference between the motion of the agent's action (a_x and a_y , respectively) and the position of the obstacles in its observation. Then, the comprised feature function is f : Z × Act → N³ with f (z, a) = (f1 (z, a), f2 (z, a), f3 (z, a)).
We define a component-wise "equivalence" of observation-action features:

f (z1 , a1 ) = f (z2 , a2 ) ⟺ ⋀_i fi (z1 , a1 ) = fi (z2 , a2 ) .

In Fig. 3, both observations see an obstacle in the corner of the observable space. For the left-hand case, the obstacle is on the bottom-left and action "right" is taken to avoid it. In the right-hand case, the obstacle is on the top-right and action "left" is taken to avoid it. These observation-action cases are considered equivalent in our feature model.
In developing a strategy for the POMDP, we iterate through the observation-action space Z × Act and find feature-equivalent inputs based on the above criteria. A set of feature-equivalent inputs is then F̂ = {(z1 , a1 ), . . . , (zk , ak )} where f (z1 , a1 ) = . . . = f (zk , ak ). By using the feature-equivalent inputs we are guaranteed to require fewer human inputs. The maximum possible size of the equivalent-feature set is |F̂| ≤ (8 choose 4) = 70, due to the number of observation patterns sharing the same f1 value. So at best our feature method can allow for 70 times fewer inputs. The efficiency gained by the introduction of features is at least |F̂| ≥ 1 + n4 for an empty n-sized gridworld, the worst possible case. The majority of observations in sparse gridworlds are zero- or single-obstacle observations, with an average efficiency of approximately E[|F̂|] ∈ [(8 choose 0) = 1, (8 choose 1) = 8], which gives us a conservative lower bound on the efficiency from a feature-based representation.

C. Initial Strategy Generation

The human training set ΩE has been generated from a series of similar but randomly-generated environments. Therefore the initial strategy generated from the training set is independent from the particular environment that we synthesize a strategy for. For all (z, a) ∈ Z × Act we assign the probability of selecting an action σz (z, a) from its corresponding feature's f (z, a) frequency in the training set compared to the set of all features with observation z:

σz (z, a) = Σ_{(zj ,aj )∈F̂} ΩE (zj , aj ) / Σ_{ai ∈Act} ΩE (zj , ai ) .

For the cases where an observation-action pair has no feature-equivalent data, we evenly distribute the strategy between the action choices Act (such occasions are rare and our refinement procedure will improve any negative actions after model checking):

σz (z, a) := 1/|Act|   if   Σ_{ai ∈Act(z)} ΩE (z, ai ) = 0 .

For the strategy σz , we perform model checking on the induced MC D^{σz} to determine if the specification is satisfied.
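The feature map and the count-based initial strategy can be sketched together. The formulas below follow the definitions of f1, f2, f3 given in Sect. V-B, and the normalization over actions is one reading of the strategy formula above; the demonstrated equivalence (obstacle directly left with action "right" vs. obstacle directly right with action "left") is a pair the feature map identifies.

```python
from collections import Counter

# Sketch of f = (f1, f2, f3) and the initial strategy of Sect. V-C.
# Indexing follows O_1..O_8: indices 0-2 are the left column, 4-6 the right
# column of the agent's view; {0, 6, 7} the bottom row, 2-4 the top row.
ACTIONS = {"left": (-1, 0), "right": (1, 0), "up": (0, 1), "down": (0, -1)}

def features(z, a):
    ax, ay = ACTIONS[a]
    f1 = sum(z)                                   # number of observed obstacles
    f2 = ax - sum(z[0:3]) + sum(z[4:7])           # x-balance: action vs. obstacles
    f3 = ay - (z[0] + z[6] + z[7]) + sum(z[2:5])  # y-balance: action vs. obstacles
    return (f1, f2, f3)

# Obstacle directly left + "right" is feature-equivalent to directly right + "left".
z_left = (0, 1, 0, 0, 0, 0, 0, 0)
z_right = (0, 0, 0, 0, 0, 1, 0, 0)
assert features(z_left, "right") == features(z_right, "left")

def initial_strategy(omega_e, z):
    """sigma_z(z, .) from feature-equivalent counts; uniform if no data."""
    counts = {a: sum(n for (zj, aj), n in omega_e.items()
                     if features(zj, aj) == features(z, a))
              for a in ACTIONS}
    total = sum(counts.values())
    if total == 0:                                # no equivalent data: uniform
        return {a: 1.0 / len(ACTIONS) for a in ACTIONS}
    return {a: counts[a] / total for a in ACTIONS}

omega_e = Counter({(z_right, "left"): 3, (z_left, "up"): 1})
sigma = initial_strategy(omega_e, z_left)         # reuses the mirrored data
print(sigma["right"])  # -> 0.75: inherited from the equivalent "left" inputs
```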
D. Refinement Using Counterexamples
When the specification is refuted, we compute a counterexample in the form of a set of critical states S′ ⊆ S. Note that the
probability of satisfying the specification is comparatively low at these states. The human is then requested to
update the strategy for the observations z = O(s) for all s ∈ S′.
For an observation z with an action update selection of ai ,
the observation-action strategy update parameter ω E (z, a) is:
ω E (z, a) = 1 / ( Σ_{s∈S} Pr(s|z) · Pr_reach (s) )   if a = ai ,   and   ω E (z, a) = 1   otherwise.

We perform a Bayesian update to calculate the stochastic strategy,

σ′z (z, a) = (1/c) · ω E (z, a) · σz (z, a) ,

with normalization constant c = Σ_{a∈Act} ω E (z, a) σz (z, a).
Thereby, at each control loop the probability of the human
input ai in the strategy is increased.
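The update is a reweighting followed by renormalization. A minimal sketch, with the belief-weighted reachability value and the initial strategy chosen as illustrative numbers:

```python
# Sketch of the counterexample-driven update of Sect. V-D. The weight for the
# human-selected action a_i is 1 / (sum_s Pr(s|z) * Pr_reach(s)); since
# Pr_reach is low at critical states, this weight exceeds 1 and boosts a_i.
def refine(sigma_z, a_i, belief_reach):
    """sigma_z: dict action -> prob; belief_reach = sum_s Pr(s|z)*Pr_reach(s)."""
    omega = {a: (1.0 / belief_reach if a == a_i else 1.0) for a in sigma_z}
    unnorm = {a: omega[a] * sigma_z[a] for a in sigma_z}
    c = sum(unnorm.values())                 # normalization constant
    return {a: v / c for a, v in unnorm.items()}

sigma = {"up": 0.25, "down": 0.25, "left": 0.25, "right": 0.25}
refined = refine(sigma, a_i="up", belief_reach=0.2)  # low reachability -> strong boost
print(round(refined["up"], 3))  # -> 0.625
```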
Bounds on Optimality. As discussed in Sect. IV, a randomized memoryless strategy for a POMDP may not induce
optimal values in terms of reachability probabilities or
expected cost. Moreover, in our setting, there is a limit
on the human’s capability – for instance if there are too
many features to comprehend. An optimal strategy for the
underlying MDP of a POMDP provides bounds on optimal
values for the POMDP. These values are a measure on what
is achievable, although admittedly this bound may be very
coarse. Such a bounding within a reinforcement learning
context is discussed in [5].
VI. I MPLEMENTATION AND E XPERIMENTS
We implemented the motion planning setting as in Fig. 1
inside an interactive MATLAB environment. Grid size, initial
state, number of obstacles, and goal location are variables.
We use PRISM [25] to perform probabilistic model checking
of the induced MC of the POMDP, see Sect. III-D. We use
the PRISM POMDP prototype [29] and a point-based value
iteration (PBVI) solver [6], [28] for comparison with other
TABLE I: Expected cost improvement – 4×4 gridworld
Iteration   Pr(¬B U G)   Expected Cost (EC=? [C])
0           0.225        13.57
1           0.503        9.110
2           0.592        7.154
3           0.610        6.055
4           0.636        5.923
Optimal     – n. a. –    5

TABLE III: Comparison to existing POMDP tools
            HiL Synth            PRISM-POMDP          PBVI
grid        states    time (s)   states    time (s)   states    time (s)
3×3         277       43.74      303       2.20       81        3.86
4×4         990       121.74     987       4.64       256       2431.05
5×5         2459      174.90     2523      213.53     625       – MO –
6×6         5437      313.50     5743      – MO –     1296      – MO –
10×10       44794     1668.30    54783     – MO –     – MO –    – MO –
11×11       – MO –    – MO –     81663     – MO –     – MO –    – MO –

TABLE II: Expected cost of initial strategy from training sets
Training Grids   Pr(¬B U G)   Expected Cost (EC=? [C])
Variable         0.425        10.59
Fixed-4          0.503        9.27
Fixed-10         0.311        14.53
Optimal          – n. a. –    3

TABLE IV: Strategy refinement – 4×4 gridworld
tools. Note that there exists no available tool to compute
optimal randomized memoryless strategies. All experiments
were conducted on a 2.5 GHz machine with 4 GB of RAM.
A. Efficient Data Collection
A human user trains an initial “generic” strategy through
a simulation of multiple randomly-generated environments,
varying in size, number of obstacles and goal location. In
order to more accurately reflect the partially observable nature
of the problem, the human is only shown a map of the “known”
features (landmarks and goal location) in the environment as
well as the observation associated with the current state.
The goal is to obtain a strategy from the data that is
independent of a change in the environments. We gather
inputs according to Sect. V-A and Sect. V-B. For a bound
of ε = 0.05 with confidence of 1 − δ = 0.99, we require
|ΩE | = 1060 samples, see Eq. (1). Furthermore, the efficiency
factor introduced by the feature equivalence depends on the
generated scenarios, i. e., the number of features. For our
examples, we conservatively assume an efficiency factor of
4, so we require |ΩE | = 265 samples. If the specification is
refuted, we compute a critical part S′ ⊆ S of the state space S,
i. e., a counterexample. By starting the simulation in concrete
scenarios at locations induced by S′, we "ask" the human for
specific inputs that refine the strategy at critical parts.
B. Experiments
a) Strategy Refinement: In Table IV we show 5 iterations
of counterexample-based strategy refinement for a 4×4
gridworld. In each iteration, we measure the time to construct
the MC and the time to model check. These running times are
negligible for this small example; important, however, is the
probability for safely reaching a target, namely Pr(¬B U G).
One can see that for the initial, generic strategy this probability
is rather low. Having the simulation start in critical parts
iteratively improves this probability up to nearly 0.8, at which
point we find no measurable improvement. For this example,
the upper bound on the maximum probability derived from
MDP model checking is 1. Figure 5 shows a heatmap of
this improving behavior where darker coloring means higher
probability for safely reaching the goal.
Iteration   Construction (s)   Model Checking (s)   Pr(¬B U G)
0           2.311              1.533                0.129
1           2.423              1.653                0.521
2           2.346              1.952                0.721
3           2.293              1.727                0.799
4           2.293              1.727                0.799
b) Fixed goal location: When we fix the goal-location
parameter to the top-right of the grid, we can examine the
strategy refinement’s impact on the expected number of steps
to the goal (see Table I). The grid-size space was randomly sampled
between n ∈ [4, 11]; we also compare the impact of fixing
the grid-size for the training set. There is clearly a benefit
to restricting the samples from the training set to samples of
similar problem styles. In a 4 × 4 gridworld, a fixed training
set of similar sized environments outperforms the strategies
generated by a varying set of environment sizes (see Table II).
c) Comparison to Existing Tools and Solvers: We
generated POMDP models for several grid sizes with one
landmark and one dynamic obstacle. We list the number of
model states and the solution times for our human-in-the-loop
synthesis method, PRISM-POMDP and PBVI. From Table III
we can see that for the smaller problem sizes, the existing
tools perform slightly better than our method. However, as the
problem grows larger, both PRISM-POMDP and PBVI run
out of memory and are clearly outperformed. The advantage
of our memoryless approach is that the strategy itself is
independent of the size of the state space and the problem
scales with the size of the verification for the induced MC.
VII. C ONCLUSION AND F UTURE W ORK
We introduced a formal approach to utilize humans’ inputs
for strategy synthesis in a specific POMDP motion planning
setting, where strategies provably adhere to specifications.
Our experiments showed that with a simple prototype we
could raise the state-of-the-art, especially in the combination
with formal verification. In the future, we will investigate
how to infer decisions based on memory and how to employ
human-understandable counterexamples [43].
ACKNOWLEDGMENT
This work has been partly funded by ONR N00014-15-IP-00052, NSF 1550212, and DARPA W911NF-16-1-0001.
[Fig. 5: Heatmap for the quality of agent strategies with dynamic obstacle location (2, 0) and static landmark at (1, 2); panels (a)–(c) show iterations 0, 1, and 2, with darker coloring meaning higher probability of safely reaching the goal.]
R EFERENCES
[1] Erika Ábrahám, Bernd Becker, Christian Dehnert, Nils Jansen, Joost-Pieter Katoen, and Ralf Wimmer. Counterexample generation for
discrete-time Markov models: An introductory survey. In SFM, volume
8483 of LNCS, pages 65–121. Springer, 2014.
[2] Christopher Amato, Daniel S. Bernstein, and Shlomo Zilberstein. Optimizing fixed-size stochastic controllers for POMDPs and decentralized
POMDPs. Autonomous Agents and Multi-Agent Systems, 21(3):293–
320, 2010.
[3] Brenna D. Argall, Sonia Chernova, Manuela Veloso, and Brett
Browning. A survey of robot learning from demonstration. Robotics
and Autonomous Systems, 57(5):469–483, 2009.
[4] Nicholas Armstrong-Crews and Manuela Veloso. Oracular partially
observable Markov decision processes: A very special case. In ICRA,
pages 2477–2482. IEEE, 2007.
[5] Anthony R. Cassandra and Leslie Pack Kaelbling. Learning policies
for partially observable environments: Scaling up. In ICML, page 362.
Morgan Kaufmann, 1995.
[6] Anthony R. Cassandra, Leslie Pack Kaelbling, and Michael L. Littman.
Acting optimally in partially observable stochastic domains. In AAAI,
volume 94, pages 1023–1028, 1994.
[7] Krishnendu Chatterjee, Martin Chmelı́k, Raghav Gupta, and Ayush
Kanodia. Qualitative analysis of POMDPs with temporal logic
specifications for robotics applications. In ICRA, pages 325–330.
IEEE, 2015.
[8] Krishnendu Chatterjee, Martin Chmelı́k, Raghav Gupta, and Ayush
Kanodia. Optimal cost almost-sure reachability in POMDPs. Artificial
Intelligence, 234:26–48, 2016.
[9] Krishnendu Chatterjee, Luca de Alfaro, and Thomas A. Henzinger.
Trading memory for randomness. In QEST. IEEE, 2004.
[10] Paul Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane
Legg, and Dario Amodei. Deep reinforcement learning from human
preferences. CoRR, abs/1706.03741, 2017.
[11] Martin A. Conway. Cognitive models of memory. The MIT Press,
1997.
[12] Christian Dehnert, Sebastian Junges, Joost-Pieter Katoen, and Matthias
Volk. A Storm is coming: A modern probabilistic model checker. In
CAV (2), volume 10427 of LNCS, pages 592–600. Springer, 2017.
[13] Kurt Dresner and Peter Stone. A multiagent approach to autonomous
intersection management. Journal of Artificial Intelligence Research, 31:591–656, 2008.
[14] Krishnamurthy Dvijotham and Emanuel Todorov. Inverse optimal
control with linearly-solvable MDPs. In ICML, pages 335–342, 2010.
[15] Ronald A. Howard. Dynamic Programming and Markov Processes.
The MIT Press, 1960.
[16] Nils Jansen, Murat Cubuktepe, and Ufuk Topcu. Synthesis of shared
control protocols with provable safety and performance guarantees. In
ACC, pages 1866–1873. IEEE, 2017.
[17] Nils Jansen, Ralf Wimmer, Erika Ábrahám, Barna Zajzon, Joost-Pieter
Katoen, Bernd Becker, and Johann Schuster. Symbolic counterexample
generation for large discrete-time Markov chains. Sci. Comput.
Program., 91:90–114, 2014.
[18] Edwin T. Jaynes. On the rationale of maximum-entropy methods.
Proceedings of the IEEE, 70(9):939–952, 1982.
[19] Sebastian Junges, Nils Jansen, Ralf Wimmer, Tim Quatmann, Leonore
Winterer, Joost-Pieter Katoen, and Bernd Becker. Permissive finite-state controllers of POMDPs using parameter synthesis. CoRR,
abs/1710.10294, 2017.
[20] Leslie Pack Kaelbling, Michael L. Littman, and Anthony R. Cassandra.
Planning and acting in partially observable stochastic domains. Artificial
Intelligence, 101(1):99–134, 1998.
[21] Joost-Pieter Katoen. The probabilistic model checking landscape. In
LICS, pages 31–45. ACM, 2016.
[22] Piyush Khandelwal et al. BWIBots: A platform for bridging the gap
between AI and human–robot interaction research. Int’l Journal of
Robotics Research, 2017.
[23] Konrad P. Körding and Daniel M. Wolpert. Bayesian decision theory
in sensorimotor control. Trends in Cognitive Sciences, 10(7):319–326,
2006.
[24] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet
classification with deep convolutional neural networks. In Advances in
neural information processing systems, pages 1097–1105, 2012.
[25] Marta Kwiatkowska, Gethin Norman, and David Parker. P RISM 4.0:
Verification of probabilistic real-time systems. In CAV, volume 6806
of LNCS, pages 585–591. Springer, 2011.
[26] Michael L. Littman. Memoryless policies: Theoretical limitations and
practical results. In SAB, pages 238–245. The MIT Press, 1994.
[27] Omid Madani, Steve Hanks, and Anne Condon. On the undecidability of
probabilistic planning and infinite-horizon partially observable Markov
decision problems. In AAAI, pages 541–548. AAAI Press, 1999.
[28] Nicolas Meuleau, Kee-Eung Kim, Leslie Pack Kaelbling, and Anthony R. Cassandra. Solving POMDPs by searching the space of finite
policies. In UAI, pages 417–426. Morgan Kaufmann, 1999.
[29] Gethin Norman, David Parker, and Xueyi Zou. Verification and
control of partially observable probabilistic systems. Real-Time Systems,
53(3):354–402, 2017.
[30] Joelle Pineau, Geoff Gordon, and Sebastian Thrun. Point-based value
iteration: An anytime algorithm for POMDPs. In IJCAI, volume 3,
pages 1025–1032, 2003.
[31] David L. Poole and Alan K. Mackworth. Artificial Intelligence:
foundations of computational agents. CUP, 2010.
[32] Stephanie Rosenthal and Manuela Veloso. Modeling humans as
observation providers using POMDPs. In RO-MAN, pages 53–58.
IEEE, 2011.
[33] Sheldon M. Ross. Introduction to Stochastic Dynamic Programming.
Academic Press, Inc., 1983.
[34] Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of
imitation learning and structured prediction to no-regret online learning.
In AISTATS, pages 627–635, 2011.
[35] Guy Shani, Joelle Pineau, and Robert Kaplow. A survey of point-based POMDP solvers. Autonomous Agents and Multi-Agent Systems,
27(1):1–51, 2013.
[36] David R. Shanks, Richard J. Tunney, and John D. McCarthy. A re-examination of probability matching and rational choice. Journal of
Behavioral Decision Making, 15(3):233–250, 2002.
[37] Robert Sim and Nicholas Roy. Global a-optimal robot exploration in
slam. In ICRA, pages 661–666. IEEE, 2005.
[38] Trey Smith and Reid Simmons. Heuristic search value iteration for
POMDPs. In UAI, pages 520–527. AUAI Press, 2004.
[39] Martin A Tanner and Wing Hung Wong. The calculation of posterior
distributions by data augmentation. Journal of the American Statistical
Association, 82(398):528–540, 1987.
[40] Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic
Robotics. The MIT Press, 2005.
[41] Nikos Vlassis, Michael L. Littman, and David Barber. On the
computational complexity of stochastic controller optimization in
POMDPs. ACM Trans. on Computation Theory, 4(4):12:1–12:8, 2012.
[42] Michael P. Wellman et al. Designing the market game for a trading
agent competition. IEEE Internet Computing, 5(2):43–51, 2001.
[43] Ralf Wimmer, Nils Jansen, Andreas Vorpahl, Erika Ábrahám, Joost-Pieter Katoen, and Bernd Becker. High-level counterexamples for
probabilistic automata. Logical Methods in Computer Science, 11(1),
2015.
[44] Leonore Winterer, Sebastian Junges, Ralf Wimmer, Nils Jansen, Ufuk
Topcu, Joost-Pieter Katoen, and Bernd Becker. Motion planning under
partial observability using game-based abstraction. In CDC. IEEE,
2017.
[45] Tichakorn Wongpiromsarn and Emilio Frazzoli. Control of probabilistic
systems under dynamic, partially known environments with temporal
logic specifications. In CDC, pages 7644–7651. IEEE, 2012.
[46] Brian D. Ziebart, Andrew L. Maas, J. Andrew Bagnell, and Anind K.
Dey. Maximum entropy inverse reinforcement learning. In AAAI, pages
1433–1438. AAAI Press, 2008.
arXiv:1803.02360v1 [math-ph] 6 Mar 2018
Submitted to Bernoulli
Gaussian optimizers for entropic inequalities
in quantum information
GIACOMO DE PALMA1 , DARIO TREVISAN2 , VITTORIO GIOVANNETTI3 and LUIGI AMBROSIO4
1 QMATH, Department of Mathematical Sciences, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen, Denmark. E-mail: [email protected]
2 Università degli Studi di Pisa, 56126 Pisa, Italy.
3 NEST, Scuola Normale Superiore and Istituto Nanoscienze-CNR, 56126 Pisa, Italy.
4 Scuola Normale Superiore, 56126 Pisa, Italy.
We survey the state of the art for the proof of the quantum Gaussian optimizer conjectures of
quantum information theory. These fundamental conjectures state that quantum Gaussian input
states are the solution to several optimization problems involving quantum Gaussian channels.
These problems are the quantum counterpart of three fundamental results of functional analysis
and probability: the Entropy Power Inequality, the sharp Young’s inequality for convolutions and
the theorem “Gaussian kernels have only Gaussian maximizers”. Quantum Gaussian channels
play a key role in quantum communication theory: they are the quantum counterpart of Gaussian
integral kernels and provide the mathematical model for the propagation of electromagnetic
waves in the quantum regime. The quantum Gaussian optimizer conjectures are needed to
determine the maximum communication rates over optical fibers and free space. The restriction
of the quantum-limited Gaussian attenuator to input states diagonal in the Fock basis coincides
with the thinning, the analogue of the rescaling for positive integer random variables. Quantum
Gaussian channels provide then a bridge between functional analysis and discrete probability.
Keywords: quantum information theory, quantum Gaussian channels, entropic inequalities, thinning, Entropy Power Inequality, sharp Young’s inequality for convolutions.
Contents
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
2 Gaussian optimizers in functional analysis . . . . . . . . . . . . . . . . . . . . 6
  2.1 Gaussian kernels have only Gaussian maximizers . . . . . . . . . . . . . . 6
  2.2 The sharp Young's inequality for convolutions . . . . . . . . . . . . . . . 7
  2.3 The Entropy Power Inequality . . . . . . . . . . . . . . . . . . . . . . . . 7
3 Quantum Gaussian systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
  3.1 Quantum systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
  3.2 Quantum channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
  3.3 Quantum Gaussian systems . . . . . . . . . . . . . . . . . . . . . . . . . 11
  3.4 Quantum Gaussian channels . . . . . . . . . . . . . . . . . . . . . . . . . 13
4 The minimum output entropy conjecture . . . . . . . . . . . . . . . . . . . . . 14
5 Gaussian optimizers for entropic inequalities in quantum information . . . . . 16
  5.1 Quantum Gaussian channels have Gaussian maximizers . . . . . . . . . . 18
    5.1.1 The proof of Conjecture 5.4 for one-mode quantum-limited Gaussian channels . . . 19
    5.1.2 The proof of Conjecture 5.1 for all the one-mode attenuators and amplifiers . . . 20
    5.1.3 The proof of Conjecture 5.4 for p = q . . . . . . . . . . . . . . . . 21
  5.2 The Entropy Photon-number Inequality . . . . . . . . . . . . . . . . . . . 22
  5.3 The sharp Young's inequality for the beam-splitter . . . . . . . . . . . . 22
6 The thinning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
7 Quantum conditioning and the quantum Entropy Power Inequality . . . . . . . 26
  7.1 The quantum Entropy Power Inequality . . . . . . . . . . . . . . . . . . . 26
  7.2 Quantum conditioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
  7.3 The quantum conditional Entropy Power Inequality . . . . . . . . . . . . 28
8 Conclusions and perspectives . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Author Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
32
1. Introduction
Gaussian functions play a key role in both functional analysis and probability, and are the
solution to several optimization problems involving Gaussian kernels. The most prominent among these problems is determining the norms of the Gaussian integral kernels G
that send a function f ∈ Lp (Rm ) to the function Gf ∈ Lq (Rn ) with p, q ≥ 1. In the
seminal paper “Gaussian kernels have only Gaussian maximizers” [86], Lieb proved that
these norms are achieved by Gaussian functions. A closely related fundamental result
is the sharp Young’s inequality for convolutions [7, 11, 41, 18, 8], stating that for any
p, q, r ≥ 1, the ratio kf ∗ gkr / kf kp kgkq with f ∈ Lp (Rn ) and g ∈ Lq (Rn ) is maximized
by Gaussian functions, where f ∗ g denotes the convolution of f with g. This inequality
has several fundamental applications, such as a proof of the Entropy Power Inequality
[85, 36, 19], of the Brunn-Minkowski inequality [45, 19] and Lieb’s solution [85, 87] of
Wehrl’s conjecture [105, 1], stating that coherent states minimize the Wehrl entropy. The
theorem “Gaussian kernels have only Gaussian maximizers” and the sharp Young’s inequality for convolutions are among the most important inequalities of functional analysis
(see e.g. the book [84]).
The Entropy Power Inequality [36, 100, 98] states that the Shannon differential entropy
of the sum of two independent random variables with values in Rn and given Shannon
differential entropies is minimum when the two random variables are Gaussian, and
is a fundamental tool of information theory [19]. The Entropy Power Inequality was
Gaussian optimizers for entropic inequalities in quantum information
introduced by Shannon to provide an upper bound to the information capacity of non-Gaussian channels [98], and was later used to bound the information capacity region of
the Gaussian broadcast channel [9] and the secret information capacity of the Gaussian
wiretap channel [83]. The Entropy Power Inequality was also employed to prove the
convergence in relative entropy for the central limit theorem [6].
Quantum information theory [90, 64, 108, 70] is the theory of the transmission and the
processing of the information stored in quantum systems. Most of today's communication is carried by electromagnetic signals traveling through optical fibers or free space.
Quantum Gaussian channels [17, 13, 104, 71] are the quantum counterpart of Gaussian
integral kernels, and an n-mode quantum Gaussian channel provides the mathematical
model for the propagation of n modes of the electromagnetic radiation along an optical
fibre or free space in the quantum regime. For this reason, quantum Gaussian channels
play a key role in quantum communication theory.
The subject of this review is the generalization of all the above inequalities for the convolution and for Gaussian integral kernels to quantum Gaussian channels. The solutions
to the resulting quantum optimization problems are conjectured to be quantum Gaussian states, the quantum counterpart of Gaussian probability measures. This Gaussian
optimizer problem arose in quantum information theory for the determination of the classical information capacity of phase-covariant quantum Gaussian channels [69, 49, 50, 26].
Indeed, proving that the coherent states constitute an optimal coding requires proving
a minimum output entropy conjecture, stating that the coherent input states minimize
the output entropy of n-mode phase-covariant quantum Gaussian channels (Theorem
4.1). This conjecture implies that both the minimum output entropy and the classical
capacity of phase-covariant quantum Gaussian channels are additive with respect to the
tensor product, i.e., that entanglement does not increase the communication rate. While
the minimum output entropy of any classical channel is trivially additive, this property
does not hold in general for quantum channels [62]. The proof of the minimum output
entropy conjecture has then been a fundamental result, which required more than ten
years [51, 43, 44, 48, 88, 46] (see the review [71]; see also [72] for the capacity of non
phase-covariant quantum Gaussian channels).
Proving that the coherent states constitute an optimal coding for the Gaussian broadcast channel requires a constrained version of the minimum output entropy conjecture.
This constrained version states that quantum Gaussian input states minimize the output
entropy of n-mode quantum Gaussian channels among all the input states with a given
entropy [58, 56, 54, 57, 92] (Conjecture 5.1). The constrained minimum output entropy
conjecture also implies the converse theorems for the triple trade-off coding with the
quantum-limited attenuator and amplifier [106, 107, 92]. The conjecture has been generalized to the Entropy Photon-number Inequality [55, 54], stating that quantum Gaussian
input states minimize the output entropy of the beam-splitter among all the couples of input states each with a given entropy (Conjecture 5.13). Moreover, it has been realized [33] that the
constrained minimum output entropy conjecture would follow from the generalization of
the theorem “Gaussian kernels have Gaussian maximizers” to n-mode quantum Gaussian channels (Conjecture 5.4). Since the beam-splitter is the quantum counterpart of
the convolution, the Entropy Photon-number Inequality is the quantum counterpart of
the Entropy Power Inequality. Based on this relation, we conjecture for the first time in
this review the validity of a sharp Young’s inequality for the beam-splitter (Conjecture
5.14).
The proof of all the above quantum inequalities has been completed only in some
particular cases, and is currently an active field of research. The constrained minimum
output entropy conjecture has been proven only for one-mode quantum Gaussian channels [32, 34, 21, 33, 93] or for input states diagonal in some joint product basis [35].
These results are based on a new majorization theorem for one-mode quantum Gaussian
channels [32] (Theorem 5.2). The majorization result has been extended to single-jump
lossy quantum channels [29], but unfortunately it fails for multi-mode quantum Gaussian
channels [29]. The proof of the constrained minimum output entropy conjecture for onemode quantum Gaussian channels made possible the proof of the fundamental relation
between the von Neumann and the Wehrl entropy, stating that for any n, n-mode quantum Gaussian states have the minimum Wehrl entropy among all the n-mode quantum
states with a given von Neumann entropy [23]. For generic p, q ≥ 1, the theorem “Gaussian kernels have Gaussian maximizers” has been proven only for one-mode quantum
Gaussian channels [31], while for n-mode channels the theorem has been proven only for
p = 1 [47, 71] and p = q [42, 73]. A proof of the Entropy Photon-number Inequality has
been attempted with the quantum analogue of the heat semigroup technique of the proof
of the Entropy Power Inequality by Blachman and Stam. This technique led instead to
the proof of the quantum Entropy Power Inequality [79, 80, 78, 27, 28, 21] (Theorem 7.1),
which provides a lower bound to the output entropy of the beam-splitter in terms of the
entropies of the two inputs. This bound is strictly lower than the output entropy achieved
by Gaussian input states, hence the quantum Entropy Power Inequality is strictly weaker
than the Entropy Photon-number Inequality, that is still an open conjecture. The same
heat semigroup technique led to the proof of the quantum conditional Entropy Power
Inequality [77, 30] and of the quantum Entropy Power Inequality for the quantum additive noise channels both in the unconditioned [74] and conditional [25] versions. The
quantum conditional Entropy Power Inequality (Theorem 7.2) determines the minimum
quantum conditional von Neumann entropy of the output of the beam-splitter or of the
squeezing among all the input states where the two inputs are conditionally independent given the memory and have given quantum conditional entropies. This inequality
has been exploited to prove an uncertainty relation for the conditional Wehrl entropy
[22]. These Entropy Power Inequalities have stimulated the proof of similar inequalities
in different contexts, such as the qubit swap channel [3, 14] and information combining
[66]. The implications among the main results and conjectures for quantum Gaussian
channels are summarized in Figure 1.
As a possible approach towards the proof of the unsolved entropic inequalities for
quantum Gaussian channels, we mention that sharp functional inequalities in the commutative setting have been recently studied using the theory of optimal transport [103].
These methods led to e.g. quantitative stability results for isoperimetric [39], Sobolev
and log-Sobolev [40, 37] inequalities. Ideas from optimal transport are also implicit in
the solution of Shannon’s problem on the monotonicity of entropy [2]. Recently, transportation distances have been proposed in the quantum fermionic setting [15, 16] and
have then been extended to quantum Gaussian systems [95, 96] (see also [20]).

[Figure 1 here: a diagram of the implications among the sharp Young's inequality for the beam-splitter (Conjecture 5.14), the Entropy Photon-number Inequality (Conjecture 5.13), "Quantum Gaussian channels have Gaussian maximizers" (Conjecture 5.4), the p → p norms of quantum Gaussian channels (Theorem 5.11), the 1 → p norms of quantum Gaussian channels (Corollary 4.4), the quantum Entropy Power Inequality (Theorem 7.1), the constrained minimum output entropy of quantum Gaussian channels (Conjecture 5.1), the minimum output entropy of quantum Gaussian channels (Theorem 4.1), and majorization for quantum Gaussian channels (Theorem 4.3).]
Figure 1. Implications among conjectures and results. Green = proven result; Yellow = result proven in some particular cases; Red = open conjecture.
An interesting particular case of the inequalities for quantum Gaussian channels is
when the input states are diagonal in the Fock basis [59, 75]. This provides a link between
quantum Gaussian channels and classical discrete probability theory. The restriction of
the one-mode quantum-limited attenuator to input states diagonal in the Fock basis
is the linear map acting on discrete classical probability distributions on N known as
thinning [32]. The quantum-limited attenuator is the quantum Gaussian channel that
models the attenuation of electromagnetic signals. The thinning has been introduced by
Rényi [94] as a discrete analogue of the rescaling of a continuous real random variable,
and has been involved with this role in discrete versions of the central limit theorem
[60, 109, 61], of the Entropy Power Inequality [110, 76] and of Young’s inequality [81].
Most of these results require the ad hoc hypothesis of the ultra log-concavity (ULC)
of the input state. In particular, the Restricted Thinned Entropy Power Inequality [76]
states that the Poisson input probability distribution minimizes the output Shannon
entropy of the thinning among all the ULC input probability distributions with a given
Shannon entropy. The results on quantum Gaussian channels presented in this review led
to the proof of new entropic inequalities for the thinning that apply to any probability distribution, regardless of whether it satisfies the ULC assumption. Quantum Gaussian
states correspond to geometric probability distributions. The inequalities on quantum
Gaussian channels imply that geometric input probability distributions both achieve the
norms of the thinning [31] (Theorem 6.5) and minimize its output entropy among all the
input probability distributions with a given entropy [34] (Theorem 6.4).
The review is structured as follows. In section 2, we present the classical results for
Gaussian optimizers in functional analysis. In section 3, we introduce Gaussian quantum
systems and channels, and the necessary notions of quantum mechanics. In section 4 we
present the minimum output entropy conjecture and its proof. In section 5, we present
all the conjectures on Gaussian optimizers in quantum information including the new
sharp Young’s inequality for the beam-splitter, together with the state of the art in their
proofs. In section 6, we present the thinning and its relation with the results for quantum Gaussian channels. In section 7, we present the quantum Entropy Power Inequality and
its proof. Moreover, we introduce the quantum conditional entropy, and present the
conditioned version of the quantum Entropy Power Inequality. We conclude in section 8.
2. Gaussian optimizers in functional analysis
2.1. Gaussian kernels have only Gaussian maximizers
For any p ≥ 1, the Lp (Rn ) norm of a function f : Rn → C is
\[ \|f\|_p = \left( \int_{\mathbb{R}^n} |f(x)|^p \, dx \right)^{1/p} . \tag{2.1} \]
Given p, q ≥ 1, let us consider a Gaussian integral kernel G from Lp (Rm ) to Lq (Rn ):
\[ (G f)(x) = \int_{\mathbb{R}^m} G(x,y)\, f(y)\, dy , \qquad x \in \mathbb{R}^n , \quad f \in L^p(\mathbb{R}^m) , \tag{2.2} \]
where G(x, y) is a Gaussian function on Rm+n , i.e., the exponential of a quadratic polynomial. The norm of G is
\[ \|G\|_{p\to q} = \sup_{0 < \|f\|_p < \infty} \frac{\|G f\|_q}{\|f\|_p} . \tag{2.3} \]
In the seminal paper “Gaussian kernels have only Gaussian maximizers” [86], Lieb proved
that under certain fairly broad assumptions on G, p and q, this operator is well defined,
and the supremum in (2.3) is attained on a Gaussian function f . If 1 < p < q < ∞,
any function that attains the supremum in (2.3) is a Gaussian function. The proof of
this fundamental result is based on the multiplicativity of the norm of generic integral
kernels with respect to the tensor product.
Theorem 2.1 ([86, 67]). The norms of integral kernels are multiplicative, i.e., for any two (not necessarily Gaussian) integral kernels G_1 : L^p(R^{m_1}) → L^q(R^{n_1}) and G_2 : L^p(R^{m_2}) → L^q(R^{n_2}),
\[ \|G_1 \otimes G_2\|_{p\to q} = \|G_1\|_{p\to q}\, \|G_2\|_{p\to q} . \tag{2.4} \]
Moreover, if the ratios ‖G_1 f_1‖_q/‖f_1‖_p and ‖G_2 f_2‖_q/‖f_2‖_p are maximized by the unique functions f_1 = f̄_1 ∈ L^p(R^{m_1}) and f_2 = f̄_2 ∈ L^p(R^{m_2}), the ratio ‖(G_1 ⊗ G_2)f‖_q/‖f‖_p is maximized by the unique function f = f̄_1 ⊗ f̄_2 ∈ L^p(R^{m_1+m_2}).
2.2. The sharp Young’s inequality for convolutions
The convolution operation can be considered as a degenerate Gaussian integral kernel
given by a Dirac delta function centered in the origin. Indeed, the convolution of f ∈
Lp (Rn ) with g ∈ Lq (Rn ) is
\[ (f * g)(x) = \int_{\mathbb{R}^{2n}} f(y)\, g(z)\, \delta_0(y + z - x)\, dy\, dz , \qquad x \in \mathbb{R}^n . \tag{2.5} \]
The sharp Young's inequality for convolutions states that the supremum
\[ \sup_{0 < \|f\|_p,\, \|g\|_q < \infty} \frac{\|f * g\|_r}{\|f\|_p\, \|g\|_q} \tag{2.6} \]
is finite iff
\[ \frac{1}{p} + \frac{1}{q} = 1 + \frac{1}{r} , \tag{2.7} \]
and in this case it is achieved by Gaussian functions. This result was first proven by
Beckner [7] and by Brascamp and Lieb [11] using a rearrangement inequality for integrals
[12]. A completely different proof based on the heat semigroup has been provided by
Toscani [102].
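As a numerical sanity check (our own sketch, not part of the original argument), the following code discretizes the convolution on a grid and compares the Young ratio (2.6) for a Gaussian pair against a box-function pair at the admissible exponents p = q = 4/3, r = 2; the Gaussian pair attains the larger ratio, as the sharp inequality predicts.

```python
import numpy as np

# Discretized check of the sharp Young's inequality at p = q = 4/3, r = 2,
# which satisfy 1/p + 1/q = 1 + 1/r. Gaussians should maximize the ratio.
p = q = 4.0 / 3.0
r = 2.0

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

def lp_norm(f, s):
    return (np.sum(np.abs(f) ** s) * dx) ** (1.0 / s)

def young_ratio(f, g):
    conv = np.convolve(f, g) * dx          # Riemann-sum convolution
    return lp_norm(conv, r) / (lp_norm(f, p) * lp_norm(g, q))

gauss = np.exp(-x ** 2)
box = (np.abs(x) <= 1.0).astype(float)

ratio_gauss = young_ratio(gauss, gauss)
ratio_box = young_ratio(box, box)
print(ratio_gauss, ratio_box)              # the Gaussian ratio is larger
```

Both ratios are strictly below 1, reflecting that the sharp constant for p, q > 1 is smaller than the trivial bound.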
2.3. The Entropy Power Inequality
Let X be a random variable with values in Rn and whose probability law is absolutely
continuous with respect to the Lebesgue measure, so that it admits a probability density
f (x)dx. The Shannon differential entropy [19] of X is
\[ S(X) = -\int_{\mathbb{R}^n} f(x) \ln f(x)\, dx , \tag{2.8} \]
and quantifies the noise contained in X. Let σ be a symmetric strictly positive n × n real
matrix, and let X be the centered Gaussian random variable with covariance matrix σ
and density
\[ f(x) = \frac{e^{-\frac{1}{2} x^T \sigma^{-1} x}}{\sqrt{\det(2\pi\sigma)}} . \tag{2.9} \]
The Shannon differential entropy of X is proportional to the logarithm of the determinant
of the covariance matrix:
\[ S(X) = \frac{1}{2} \ln \det(2\pi e \sigma) . \tag{2.10} \]
Let us consider the sum of two independent random variables X and Y with values
in Rn . The Entropy Power Inequality [36, 100, 98] states that, if X and Y have Shannon differential entropy fixed to the values S(X) and S(Y ), respectively, the Shannon
differential entropy of X + Y is minimum when X and Y have a Gaussian probability
distribution with proportional covariance matrices. The covariance matrix of the sum of
two independent random variables is equal to the sum of their covariance matrices:
\[ \sigma_{X+Y} = \sigma_X + \sigma_Y . \tag{2.11} \]
If σ_Y = λ σ_X for some λ > 0, (2.10) and (2.11) imply
\[ \exp\frac{2S(X+Y)}{n} = \exp\frac{2S(X)}{n} + \exp\frac{2S(Y)}{n} , \tag{2.12} \]
so that the Entropy Power Inequality has the form
\[ \exp\frac{2S(X+Y)}{n} \ge \exp\frac{2S(X)}{n} + \exp\frac{2S(Y)}{n} . \tag{2.13} \]
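For Gaussian inputs, (2.10) reduces the EPI to Minkowski's determinant inequality det(A+B)^{1/n} ≥ det(A)^{1/n} + det(B)^{1/n}. A quick numerical illustration (our own sketch) of equality for proportional covariances and strict inequality otherwise:

```python
import numpy as np

def gaussian_entropy(sigma):
    # S(X) = (1/2) ln det(2*pi*e*sigma) for a centered Gaussian X, eq. (2.10)
    return 0.5 * np.log(np.linalg.det(2.0 * np.pi * np.e * sigma))

def epi_sides(A, B):
    # left- and right-hand sides of the EPI (2.13) for Gaussian X, Y
    n = A.shape[0]
    lhs = np.exp(2.0 * gaussian_entropy(A + B) / n)
    rhs = np.exp(2.0 * gaussian_entropy(A) / n) + np.exp(2.0 * gaussian_entropy(B) / n)
    return lhs, rhs

A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.0], [0.0, 4.0]])

lhs_prop, rhs_prop = epi_sides(A, 3.0 * A)   # proportional covariances: equality
lhs_gen, rhs_gen = epi_sides(A, B)           # generic covariances: strict inequality
print(lhs_prop - rhs_prop, lhs_gen - rhs_gen)
```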
Two different proofs of the Entropy Power Inequality are known. The first is due to
Blachman and Stam [100, 10], and is based on perturbing the inputs X and Y with
the heat semigroup. The second is due to Lieb [85], and is based on the sharp Young’s
inequality for convolutions and on the properties of the Rényi entropies. For any p > 1,
the p-Rényi entropy of the random variable X with values in Rn and density f is
\[ S_p(X) = \frac{p}{1-p} \ln \|f\|_p . \tag{2.14} \]
The Rényi entropies are a generalization of the Shannon differential entropy, which is recovered in the limit p → 1:
\[ S(X) = \lim_{p\to 1} S_p(X) . \tag{2.15} \]
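A small numerical check of (2.14)–(2.15) (our illustration): for a standard normal density on a grid, S_p approaches the Shannon value ½ ln(2πe) as p → 1.

```python
import numpy as np

# p-Renyi entropy (2.14) of the standard normal, computed on a grid,
# compared with its Shannon entropy S = (1/2) ln(2*pi*e) ~ 1.4189.
x = np.linspace(-12.0, 12.0, 200001)
dx = x[1] - x[0]
f = np.exp(-x ** 2 / 2.0) / np.sqrt(2.0 * np.pi)

def renyi(p):
    norm_p = (np.sum(f ** p) * dx) ** (1.0 / p)
    return p / (1.0 - p) * np.log(norm_p)

shannon = 0.5 * np.log(2.0 * np.pi * np.e)
print(renyi(1.001), shannon)   # the Renyi entropy converges to Shannon as p -> 1
```

For the standard normal one also has the closed form S_2 = ½ ln(4π), which the grid computation reproduces.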
3. Quantum Gaussian systems
In this Section, we introduce the elements of quantum information and quantum Gaussian
systems that are needed for presenting the entropic inequalities for quantum Gaussian
channels. For a more comprehensive introduction, we refer the reader to the books [70, 97]
and the review [71].
3.1. Quantum systems
Let H be a separable complex Hilbert space with not necessarily finite dimension. We
adopt the bra-ket notation, where a vector ψ ∈ H is denoted as |ψ⟩, and the scalar product between the vectors φ and ψ is denoted as ⟨φ|ψ⟩; it is linear in ψ and antilinear in φ.
For any p ≥ 1, the p-Schatten norm of a linear compact operator X̂ on H is
\[ \big\| \hat{X} \big\|_p = \left( \mathrm{Tr}\big( \hat{X}^\dagger \hat{X} \big)^{p/2} \right)^{1/p} , \tag{3.1} \]
where X̂† is the adjoint operator of X̂. The p-Schatten norms play the role of the L^p norms of functional analysis. The operators with finite 1-Schatten norm are called trace-class operators. The ∞-Schatten norm ‖X̂‖_∞ of a continuous linear operator X̂ is defined as the supremum of the spectrum of √(X̂†X̂).
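The p-Schatten norm is simply the ℓ^p norm of the singular values, which makes it easy to compute numerically; a small sketch (our illustration, not from the text):

```python
import numpy as np

def schatten_norm(X, p):
    # p-Schatten norm (3.1): the l^p norm of the singular values of X
    s = np.linalg.svd(X, compute_uv=False)
    return np.sum(s ** p) ** (1.0 / p)

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
s = np.linalg.svd(X, compute_uv=False)

print(np.isclose(schatten_norm(X, 1), s.sum()))              # trace norm
print(np.isclose(schatten_norm(X, 2), np.linalg.norm(X)))    # Hilbert-Schmidt = Frobenius
print(np.isclose(schatten_norm(X, 100), s.max(), rtol=0.05)) # -> largest singular value as p grows
```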
Quantum states are the noncommutative counterpart of probability measures. A quantum state is a positive trace-class operator with unit trace. Any quantum state ρ̂ can be
diagonalized in an orthonormal basis:
\[ \hat{\rho} = \sum_{k=0}^{\infty} p_k\, |\psi_k\rangle\langle\psi_k| , \tag{3.2} \]
where {|ψ_k⟩⟨ψ_k|}_{k∈ℕ} denote the rank-one projectors onto the orthonormal vectors {ψ_k}_{k∈ℕ}, and {p_k}_{k∈ℕ} are the eigenvalues of ρ̂. Since ρ̂ is positive and has unit trace, {p_k}_{k∈ℕ} is a
probability measure on N. The quantum state ρ̂ is called pure if it is a rank-one projector,
and mixed otherwise. With a small abuse of nomenclature, we call pure state both the
normalized vector ψ ∈ H and the associated rank-one projector |ψihψ|. From (3.2), any
quantum state can be expressed as a convex combination of orthogonal pure states.
The von Neumann entropy of the quantum state ρ̂ in (3.2) is the Shannon entropy of
its eigenvalues:
\[ S(\hat{\rho}) = -\mathrm{Tr}[\hat{\rho} \ln \hat{\rho}] = -\sum_{k=0}^{\infty} p_k \ln p_k , \tag{3.3} \]
and is the quantum counterpart of the Shannon differential entropy. Like the Shannon entropy, and in contrast to the Shannon differential entropy, the von Neumann entropy is always nonnegative, and vanishes iff ρ̂ is pure. If ρ̂ is a quantum state of the quantum system
A with Hilbert space HA , we use indistinctly the notations S(ρ̂) or S(A) for the entropy
of ρ̂.
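Since S(ρ̂) depends only on the spectrum, it can be computed by diagonalization; a minimal sketch (our illustration): a pure state has zero entropy, and the maximally mixed state in dimension d has the maximal entropy ln d.

```python
import numpy as np

def von_neumann_entropy(rho):
    # S(rho) = -Tr[rho ln rho] = -sum_k p_k ln p_k, eq. (3.3)
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]                 # 0 ln 0 = 0 by convention
    return -np.sum(p * np.log(p))

d = 4
psi = np.ones(d) / np.sqrt(d)        # a normalized vector
pure = np.outer(psi, psi)            # rank-one projector |psi><psi|
mixed = np.eye(d) / d                # maximally mixed state

print(von_neumann_entropy(pure))     # 0
print(von_neumann_entropy(mixed))    # ln 4 ~ 1.386
```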
As in the case of classical probability measures, we can define for any p > 1 the p-Rényi
entropy of the quantum state ρ̂ as
\[ S_p(\hat{\rho}) = \frac{p}{1-p} \ln \|\hat{\rho}\|_p . \tag{3.4} \]
The Rényi entropies are a generalization of the von Neumann entropy, which is recovered in the limit p → 1:
\[ S(\hat{\rho}) = \lim_{p\to 1} S_p(\hat{\rho}) . \tag{3.5} \]
The observables of a quantum system are the self-adjoint operators on the Hilbert
space, and the expectation value of the observable Ô on the state ρ̂ is
\[ \big\langle \hat{O} \big\rangle_{\hat{\rho}} = \mathrm{Tr}\big[ \hat{O}\, \hat{\rho} \big] . \tag{3.6} \]
If A and B are quantum systems with Hilbert spaces HA and HB , the joint system
AB has Hilbert space H_A ⊗ H_B. A pure state ψ ∈ H_A ⊗ H_B is called a product state if ψ = ψ_A ⊗ ψ_B for some ψ_A ∈ H_A and ψ_B ∈ H_B. A fundamental difference with respect to classical probability is that not all pure states are product states. A pure state that is not a product state is called an entangled state. Let ρ̂_AB be a quantum state of the joint
quantum system AB. We define the reduced or marginal states on A and B as
\[ \hat{\rho}_A = \mathrm{Tr}_B\, \hat{\rho}_{AB} , \qquad \hat{\rho}_B = \mathrm{Tr}_A\, \hat{\rho}_{AB} , \tag{3.7} \]
where TrA and TrB denote the partial trace over the system A and B, respectively. In
other words, ρ̂A is the quantum state of A such that
\[ \mathrm{Tr}_A\big[ \hat{X}_A\, \hat{\rho}_A \big] = \mathrm{Tr}_{AB}\big[ \big( \hat{X}_A \otimes \hat{I}_B \big)\, \hat{\rho}_{AB} \big] \tag{3.8} \]
for any bounded operator X̂A on HA , and analogously for ρ̂B .
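In a finite-dimensional truncation the partial trace of (3.7)–(3.8) is a contraction of tensor indices; a sketch (our illustration) checking it on a product state and on a Bell state, whose marginals are maximally mixed:

```python
import numpy as np

def partial_trace_B(rho_AB, dA, dB):
    # rho_A = Tr_B rho_AB, eq. (3.7): contract the two B indices
    r = rho_AB.reshape(dA, dB, dA, dB)     # indices (a, b, a', b')
    return np.einsum('abcb->ac', r)

# Product state: the marginal is just the A factor.
rho_A = np.diag([0.7, 0.3])
rho_B = np.diag([0.5, 0.5])
marginal = partial_trace_B(np.kron(rho_A, rho_B), 2, 2)

# Entangled Bell state: both marginals are maximally mixed.
phi = np.zeros(4)
phi[0] = phi[3] = 1.0 / np.sqrt(2.0)       # (|00> + |11>)/sqrt(2)
bell = np.outer(phi, phi)
bell_marginal = partial_trace_B(bell, 2, 2)
print(marginal, bell_marginal)             # rho_A and I/2
```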
3.2. Quantum channels
Quantum channels [90, 64, 108, 70] are the noncommutative counterpart of the Markov
operators of probability theory. A quantum channel Φ from the quantum system A with
Hilbert space HA to the quantum system B with Hilbert space HB is a linear completely
positive trace-preserving map from the trace-class operators on HA to the trace-class
operators on HB . Precisely, a map Φ is said to be
• positive if Φ(X̂) ≥ 0 for any trace-class operator X̂ ≥ 0;
• completely positive if the map Φ ⊗ Id is positive for any d ∈ N, where Id denotes
the identity map on the operators on the Hilbert space Cd ;
• trace-preserving if Tr Φ(X̂) = Tr X̂ for any trace-class operator X̂.
These properties ensure that for any d ∈ N the map Φ ⊗ Id sends the quantum states
on HA ⊗ Cd to quantum states on HB ⊗ Cd . Since any joint probability measure of two
random variables is a convex combination of product probability measures, the complete
positivity of any Markov operator is a trivial consequence of its positivity. On the contrary, the existence of entanglement makes complete positivity a nontrivial requirement
for quantum channels. For example, the transposition is a linear positive trace-preserving
map that is not completely positive.
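The standard witness for this fact is the Bell state: applying the transposition to one tensor factor of a Bell state produces an operator with a negative eigenvalue. A small check (our illustration):

```python
import numpy as np

# The transposition is positive but not completely positive:
# (T x Id) applied to half of a Bell state has a negative eigenvalue.
phi = np.zeros(4)
phi[0] = phi[3] = 1.0 / np.sqrt(2.0)       # (|00> + |11>)/sqrt(2)
rho = np.outer(phi, phi)                   # state on C^2 (x) C^2

r = rho.reshape(2, 2, 2, 2)                # indices (a, b, a', b')
rho_pt = r.transpose(2, 1, 0, 3).reshape(4, 4)   # transpose on the first factor

evs = np.linalg.eigvalsh(rho_pt)
print(evs)                                 # contains -1/2
```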
Any quantum channel Φ from A to B can be realized by an isometry followed by a partial trace, i.e., there exists a Hilbert space H_E and an isometry V̂ : H_A → H_B ⊗ H_E with V̂†V̂ = Î_A such that for any trace-class operator X̂ on H_A,
\[ \Phi\big( \hat{X} \big) = \mathrm{Tr}_E\big[ \hat{V} \hat{X} \hat{V}^\dagger \big] . \tag{3.9} \]
The Hilbert space HE and the isometry V̂ are unique up to isometries on HE . The
expression (3.9) is called the Stinespring dilation of Φ. The quantum channel from A to
E defined on trace-class operators as
\[ \tilde{\Phi}\big( \hat{X} \big) = \mathrm{Tr}_A\big[ \hat{V} \hat{X} \hat{V}^\dagger \big] \tag{3.10} \]
is called the complementary channel of Φ [70]. We mention that the Stinespring dilation
has a classical analogue that has found some applications in the commutative setting,
e.g., in infinite-dimensional stochastic analysis [63].
Let Φ : A → B be a quantum channel. The dual channel of Φ is the linear map Φ†
from bounded operators on HB to bounded operators on HA such that for any trace-class
operator  on HA and any bounded operator B̂ on HB
\[ \mathrm{Tr}\big[ \hat{B}\, \Phi\big( \hat{A} \big) \big] = \mathrm{Tr}\big[ \Phi^\dagger\big( \hat{B} \big)\, \hat{A} \big] . \tag{3.11} \]
For any 1 ≤ p, q ≤ ∞, the p → q norm of a quantum channel Φ is defined as
\[ \|\Phi\|_{p\to q} = \sup_{0 < \|\hat{X}\|_p < \infty} \frac{\big\| \Phi\big( \hat{X} \big) \big\|_q}{\big\| \hat{X} \big\|_p} . \tag{3.12} \]
A fundamental question is whether the p → q norm of a channel is multiplicative with
respect to the tensor product, i.e., whether
\[ \big\| \Phi^{\otimes n} \big\|_{p\to q} = \|\Phi\|_{p\to q}^n \qquad \forall\, n \in \mathbb{N} . \tag{3.13} \]
This property holds for any classical integral kernel [86, 67], but it is known to fail for
generic quantum channels [67].
3.3. Quantum Gaussian systems
An n-mode Gaussian quantum system is the mathematical model for n harmonic oscillators, or n modes of the electromagnetic radiation. For the sake of simplicity, we present
one-mode Gaussian quantum systems first.
The Hilbert space of a one-mode Gaussian quantum system is L2 (R), the irreducible
representation of the canonical commutation relation (see [97] or [70], Chapter 12 for a
more complete presentation)
\[ \big[ \hat{a}, \hat{a}^\dagger \big] = \hat{I} . \tag{3.14} \]
The operator â is called the ladder operator; it plays the role of a noncommutative complex variable and acts on ψ in a suitable dense domain of L²(ℝ) as
\[ (\hat{a}\, \psi)(x) = \frac{x\, \psi(x) + \psi'(x)}{\sqrt{2}} . \tag{3.15} \]
The quantum states of a one-mode Gaussian quantum system are the quantum counterparts of the probability measures on R2 . Since each mode is associated to a complex
noncommutative variable, the number of real classical components is twice the number
of quantum modes. We define the Hamiltonian
\[ \hat{N} = \hat{a}^\dagger \hat{a} , \tag{3.16} \]
that counts the number of excitations, or photons. The vector annihilated by â is the vacuum and is denoted by |0⟩. From the vacuum we can build the eigenstates of the
Hamiltonian, called Fock states:
\[ |n\rangle = \frac{\big( \hat{a}^\dagger \big)^n}{\sqrt{n!}}\, |0\rangle , \qquad \langle m|n\rangle = \delta_{mn} , \qquad \hat{N}|n\rangle = n\, |n\rangle , \qquad m, n \in \mathbb{N} , \tag{3.17} \]
where ⟨φ|ψ⟩ denotes the scalar product in L²(ℝ). An operator diagonal in the Fock basis
is called Fock-diagonal.
A quantum Gaussian state is a quantum state proportional to the exponential of a
quadratic polynomial in â and ↠. The most important Gaussian states are the thermal
Gaussian states, where the polynomial is proportional to the Hamiltonian ↠â. They
correspond to a geometric probability distribution for the energy:
\[ \hat{\omega}(E) = \frac{1}{E+1} \sum_{n=0}^{\infty} \left( \frac{E}{E+1} \right)^n |n\rangle\langle n| , \qquad E \ge 0 . \tag{3.18} \]
For E = 0, we recover the vacuum state ω̂(0) = |0⟩⟨0|. The average energy of ω̂(E) is
\[ E = \mathrm{Tr}\big[ \hat{N}\, \hat{\omega}(E) \big] , \tag{3.19} \]
and the von Neumann entropy is
\[ g(E) := S(\hat{\omega}(E)) = (E+1) \ln(E+1) - E \ln E . \tag{3.20} \]
As Gaussian probability measures maximize the Shannon differential entropy among all
the probability measures with a given covariance matrix, thermal quantum Gaussian
states maximize the von Neumann entropy among all the quantum states with a given
average energy.
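Equations (3.18)–(3.20) can be verified directly on the geometric photon-number distribution; a short numerical check (our own):

```python
import numpy as np

# Thermal state (3.18): geometric photon-number distribution with mean E.
# Its entropy must equal g(E) = (E+1) ln(E+1) - E ln E of eq. (3.20).
E = 2.0
n = np.arange(500)                       # the tail beyond n = 500 is negligible
p = (E / (E + 1.0)) ** n / (E + 1.0)

mean_energy = np.sum(n * p)              # eq. (3.19)
entropy = -np.sum(p * np.log(p))
g = (E + 1.0) * np.log(E + 1.0) - E * np.log(E)
print(mean_energy, entropy, g)           # 2.0  1.9095  1.9095
```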
The Hilbert space of an n-mode Gaussian quantum system is the tensor product of n Hilbert spaces of a one-mode Gaussian quantum system, i.e., the irreducible representation of the canonical commutation relations
\[ [\hat{a}_i, \hat{a}_j] = \big[ \hat{a}_i^\dagger, \hat{a}_j^\dagger \big] = 0 , \qquad \big[ \hat{a}_i, \hat{a}_j^\dagger \big] = \delta_{ij}\, \hat{I} , \qquad i, j = 1, \ldots, n , \tag{3.21} \]
where each ladder operator â_i is associated to one mode. An n-mode thermal quantum Gaussian state is the tensor product ω̂(E)^⊗n of n identical one-mode thermal quantum Gaussian states.
3.4. Quantum Gaussian channels
Quantum Gaussian channels are the quantum channels that preserve the set of quantum Gaussian states. The most important families of quantum Gaussian channels are
the beam-splitter, the squeezing, the quantum Gaussian attenuators and the quantum
Gaussian amplifiers. The beam-splitter and the squeezing are the quantum counterparts
of the classical linear mixing of random variables, and are the main transformations in
quantum optics. Let A and B be one-mode quantum Gaussian systems with ladder operators â and b̂, respectively. The beam-splitter of transmissivity 0 ≤ λ ≤ 1 is implemented
by the unitary operator
\[ \hat{U}_\lambda = \exp\left( \big( \hat{a}^\dagger \hat{b} - \hat{b}^\dagger \hat{a} \big) \arccos\sqrt{\lambda} \right) , \tag{3.22} \]
and performs a linear rotation of the ladder operators (see e.g. [38], Section 1.4.2):
\[ \hat{U}_\lambda^\dagger\, \hat{a}\, \hat{U}_\lambda = \sqrt{\lambda}\, \hat{a} + \sqrt{1-\lambda}\, \hat{b} , \qquad \hat{U}_\lambda^\dagger\, \hat{b}\, \hat{U}_\lambda = -\sqrt{1-\lambda}\, \hat{a} + \sqrt{\lambda}\, \hat{b} . \tag{3.23} \]
The physical beam-splitter is a passive element, and does not require energy for functioning. Indeed, the mixing unitary operator preserves the Hamiltonian (3.16):
\[ \hat{U}_\lambda^\dagger \big( \hat{a}^\dagger \hat{a} + \hat{b}^\dagger \hat{b} \big) \hat{U}_\lambda = \hat{a}^\dagger \hat{a} + \hat{b}^\dagger \hat{b} . \tag{3.24} \]
The two-mode squeezing [5] of parameter κ ≥ 1 is implemented by the unitary operator
\[ \hat{U}_\kappa = \exp\left( \big( \hat{a}^\dagger \hat{b}^\dagger - \hat{a}\, \hat{b} \big)\, \mathrm{arccosh}\sqrt{\kappa} \right) , \tag{3.25} \]
and acts on the ladder operators as
\[ \hat{U}_\kappa^\dagger\, \hat{a}\, \hat{U}_\kappa = \sqrt{\kappa}\, \hat{a} + \sqrt{\kappa-1}\, \hat{b}^\dagger , \qquad \hat{U}_\kappa^\dagger\, \hat{b}\, \hat{U}_\kappa = \sqrt{\kappa-1}\, \hat{a}^\dagger + \sqrt{\kappa}\, \hat{b} . \tag{3.26} \]
The squeezing is an active operation that requires energy. Indeed, the squeezing unitary
operator does not preserve the Hamiltonian (3.16).
We define for any joint quantum state ρ̂AB on AB and any λ ≥ 0 the quantum channel
from AB to A
\[ \mathcal{B}_\lambda(\hat{\rho}_{AB}) = \mathrm{Tr}_B\big[ \hat{U}_\lambda\, \hat{\rho}_{AB}\, \hat{U}_\lambda^\dagger \big] , \tag{3.27} \]
where TrB denotes the partial trace over the system B. Bλ implements the beam-splitter
for 0 ≤ λ ≤ 1 and the squeezing for λ ≥ 1.
The quantum Gaussian attenuators model the attenuation and the noise affecting electromagnetic signals traveling through optical fibers or free space. The quantum Gaussian
attenuator Eλ,E can be implemented mixing the input state ρ̂ with the thermal Gaussian
state with average energy E ≥ 0 through a beam-splitter of transmissivity 0 ≤ λ ≤ 1:
\[ \mathcal{E}_{\lambda,E}(\hat{\rho}) = \mathcal{B}_\lambda\big( \hat{\rho} \otimes \hat{\omega}(E) \big) . \tag{3.28} \]
The quantum Gaussian attenuators constitute a multiplicative semigroup with composition law
\[ \mathcal{E}_{1,E} = \mathcal{I} , \qquad \mathcal{E}_{\lambda,E} \circ \mathcal{E}_{\lambda',E} = \mathcal{E}_{\lambda\lambda',E} \qquad \forall\, E \ge 0 ,\ 0 \le \lambda, \lambda' \le 1 . \tag{3.29} \]
The quantum Gaussian amplifiers model the amplification of electromagnetic signals.
The quantum Gaussian amplifier Aκ,E can be implemented performing a two-mode
squeezing of parameter κ ≥ 1 on the input state ρ̂ and the thermal Gaussian state
with average energy E ≥ 0:
\[ \mathcal{A}_{\kappa,E}(\hat{\rho}) = \mathcal{B}_\kappa\big( \hat{\rho} \otimes \hat{\omega}(E) \big) . \tag{3.30} \]
Also the quantum Gaussian amplifiers constitute a semigroup with composition law
\[ \mathcal{A}_{1,E} = \mathcal{I} , \qquad \mathcal{A}_{\kappa,E} \circ \mathcal{A}_{\kappa',E} = \mathcal{A}_{\kappa\kappa',E} \qquad \forall\, E \ge 0 ,\ \kappa, \kappa' \ge 1 . \tag{3.31} \]
(3.31)
The attenuator E_{λ,E} and the amplifier A_{κ,E} are called quantum-limited if E = 0, i.e., if they mix the input state with the vacuum. Indeed, the vacuum as the state of the environment adds the least possible noise to the input state. In this case, since ω̂(0) = |0⟩⟨0| is a pure state, the expressions (3.28) and (3.30) are the Stinespring dilations of the corresponding channels.
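On Fock-diagonal input states the one-mode quantum-limited attenuator acts as the thinning of the photon-number distribution: each photon survives independently with probability λ (this is the link with discrete probability discussed in the introduction and in section 6). A sketch (our illustration) checking the semigroup law (3.29) in this restricted, finite-dimensional setting:

```python
import numpy as np
from math import comb

def attenuator_fock(lam, N):
    # Action of the quantum-limited attenuator on Fock-diagonal states,
    # truncated to at most N photons: binomial damping (thinning).
    T = np.zeros((N + 1, N + 1))
    for m in range(N + 1):
        for k in range(m + 1):
            T[k, m] = comb(m, k) * lam ** k * (1.0 - lam) ** (m - k)
    return T

N = 20
T1 = attenuator_fock(0.7, N)
T2 = attenuator_fock(0.8, N)

# Semigroup law (3.29): attenuating by 0.7 and then by 0.8
# equals attenuating by 0.7 * 0.8 = 0.56.
print(np.allclose(T2 @ T1, attenuator_fock(0.56, N)))   # True
print(np.allclose(T1.sum(axis=0), 1.0))                 # columns are probability vectors
```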
4. The minimum output entropy conjecture
The minimum output entropy of a quantum channel plays a key role in the determination
of its classical information capacity. The following theorem has been a fundamental result
in quantum communication theory.
Theorem 4.1. For any n ∈ N, the vacuum state minimizes the output entropy of the
n-mode quantum Gaussian attenuators and of the n-mode quantum Gaussian amplifiers,
i.e., for any n-mode quantum state ρ̂, any E ≥ 0, 0 ≤ λ ≤ 1 and κ ≥ 1
\[ S\big( \mathcal{E}_{\lambda,E}^{\otimes n}(\hat{\rho}) \big) \ge S\big( \mathcal{E}_{\lambda,E}^{\otimes n}\big( |0\rangle\langle 0|^{\otimes n} \big) \big) = n\, S\big( \mathcal{E}_{\lambda,E}(|0\rangle\langle 0|) \big) , \]
\[ S\big( \mathcal{A}_{\kappa,E}^{\otimes n}(\hat{\rho}) \big) \ge S\big( \mathcal{A}_{\kappa,E}^{\otimes n}\big( |0\rangle\langle 0|^{\otimes n} \big) \big) = n\, S\big( \mathcal{A}_{\kappa,E}(|0\rangle\langle 0|) \big) . \tag{4.1} \]
Therefore, the minimum output entropy of the quantum attenuators and amplifiers is
additive.
We stress that Theorem 4.1 is trivial for classical Gaussian channels, i.e., for Gaussian
integral kernels that send probability measures on Rm to probability measures on Rn .
Indeed, by the concavity of the entropy it is sufficient to prove Theorem 4.1 for pure
input states. In the classical case, the only pure probability measures are the Dirac delta
functions, and they all achieve the same output entropy. As we will see, the proof of
Theorem 4.1 exploits tools of quantum information theory that do not have a classical
counterpart: the complementary channel and the decomposition of any Gaussian channel
as a quantum-limited attenuator followed by a quantum-limited amplifier.
The proof of Theorem 4.1 is based on majorization theory [89].
Definition 1 (majorization). We say that the quantum state ρ̂ majorizes the quantum
state σ̂, and write ρ̂ ≻ σ̂, iff σ̂ can be obtained by applying to ρ̂ a convex combination of unitary operators, i.e., iff there exists a probability measure µ on the set of unitary operators such that
\[ \hat{\sigma} = \int \hat{U}\, \hat{\rho}\, \hat{U}^\dagger\, d\mu\big( \hat{U} \big) . \tag{4.2} \]
The link between majorization and the entropy is provided by the following property.
Proposition 4.2. Let ρ̂ and σ̂ be quantum states such that ρ̂ ≻ σ̂. Then, f (ρ̂) ≥ f (σ̂)
for any unitarily invariant convex functional f on the set of quantum states. In particular,
• ‖ρ̂‖_p ≥ ‖σ̂‖_p for any p ≥ 1;
• S(ρ̂) ≤ S(σ̂).
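Definition 1 has an equivalent spectral characterization for states of the same finite dimension: ρ̂ ≻ σ̂ iff every partial sum of the decreasingly ordered eigenvalues of ρ̂ dominates the corresponding partial sum for σ̂. A numerical sketch (our illustration) of Definition 1 and Proposition 4.2:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(d):
    # QR of a complex Ginibre matrix, phases fixed, gives a Haar-random unitary.
    q, r = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

def majorizes(rho, sigma, tol=1e-10):
    # Spectral test: partial sums of decreasing eigenvalues of rho dominate sigma's.
    a = np.sort(np.linalg.eigvalsh(rho))[::-1]
    b = np.sort(np.linalg.eigvalsh(sigma))[::-1]
    return bool(np.all(np.cumsum(a) >= np.cumsum(b) - tol))

def entropy(rho):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return -np.sum(p * np.log(p))

rho = np.diag([0.7, 0.2, 0.1]).astype(complex)
U = random_unitary(3)
sigma = 0.5 * rho + 0.5 * U @ rho @ U.conj().T   # convex mixture as in (4.2)

print(majorizes(rho, sigma), entropy(rho) <= entropy(sigma))   # True True
```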
Theorem 4.1 is a consequence of the following more fundamental result.
Theorem 4.3 (majorization for quantum Gaussian channels). For any n ∈ N and for
all the n-mode quantum Gaussian attenuators and amplifiers, the output generated by
the vacuum input state majorizes the output generated by any other input state, i.e., for
any 0 ≤ λ ≤ 1, κ ≥ 1 and E ≥ 0 and for any n-mode quantum state ρ̂,
\[ \mathcal{E}_{\lambda,E}^{\otimes n}\big( |0\rangle\langle 0|^{\otimes n} \big) \succ \mathcal{E}_{\lambda,E}^{\otimes n}(\hat{\rho}) , \qquad \mathcal{A}_{\kappa,E}^{\otimes n}\big( |0\rangle\langle 0|^{\otimes n} \big) \succ \mathcal{A}_{\kappa,E}^{\otimes n}(\hat{\rho}) . \tag{4.3} \]
Besides Theorem 4.1, a fundamental consequence of Theorem 4.3 is the following.
Corollary 4.4 (1 → p norms of quantum Gaussian channels). For any p ≥ 1 and any
n ∈ N, the vacuum input state achieves the 1 → p norm of the n-mode quantum Gaussian
attenuators and amplifiers, i.e., for any 0 ≤ λ ≤ 1, κ ≥ 1 and E ≥ 0
‖Eλ,E⊗n‖1→p = ‖Eλ,E⊗n (|0⟩⟨0|⊗n)‖p = ‖Eλ,E (|0⟩⟨0|)‖p^n ,
‖Aκ,E⊗n‖1→p = ‖Aκ,E⊗n (|0⟩⟨0|⊗n)‖p = ‖Aκ,E (|0⟩⟨0|)‖p^n .    (4.4)
Therefore, the 1 → p norms of the quantum Gaussian attenuators and amplifiers are
multiplicative.
Theorem 4.1 was first proven by Giovannetti, Holevo and García-Patrón [48]. Shortly afterwards, Mari, Giovannetti and Holevo realized that the same proof implies the more general
G. De Palma, D. Trevisan, V. Giovannetti, and L. Ambrosio
Theorem 4.3, first for one-mode quantum Gaussian channels [88], and then for multi-mode
quantum Gaussian channels [47, 68]. We present here a sketch of the proof. For more
details, the reader can also consult the review [71].
The first step to prove Theorem 4.3 is the following observation.
Proposition 4.5. For any n, any n-mode quantum Gaussian attenuator or amplifier can be decomposed as an n-mode quantum-limited attenuator followed by an n-mode quantum-limited amplifier.
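Proposition 4.5 can be made concrete at the level of mean photon numbers. The sketch below is an illustration under the following assumptions (which follow from the transformation rules of Section 3, not stated in this excerpt): on thermal inputs, Eλ,E acts on the mean photon number as N ↦ λN + (1−λ)E, and Aκ,E as N ↦ κN + (κ−1)(E+1). Matching κη = λ and κ − 1 = (1−λ)E gives the decomposition parameters for the attenuator.

```python
def att(lam, E):
    # assumed mean-photon-number action of E_{λ,E} on thermal states
    return lambda N: lam * N + (1.0 - lam) * E

def amp(kappa, E):
    # assumed mean-photon-number action of A_{κ,E} on thermal states
    return lambda N: kappa * N + (kappa - 1.0) * (E + 1.0)

def decompose_attenuator(lam, E):
    # E_{λ,E} = A_{κ,0} ∘ E_{η,0}: match κη = λ and κ − 1 = (1 − λ)E
    kappa = 1.0 + (1.0 - lam) * E
    eta = lam / kappa
    return eta, kappa

lam, E = 0.3, 2.0
eta, kappa = decompose_attenuator(lam, E)
channel = att(lam, E)
composed = lambda N: amp(kappa, 0.0)(att(eta, 0.0)(N))
checks = [abs(channel(N) - composed(N)) < 1e-12 for N in (0.0, 0.7, 5.0)]
```

Note that the resulting η lies in [0, 1] and κ ≥ 1, so both factors are legitimate quantum-limited channels.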
Theorem 4.3 is trivial for the quantum-limited attenuator, since the vacuum is a fixed
point. Thanks to Proposition 4.5, it is sufficient to prove Theorem 4.3 for the quantum-limited amplifier. It is easy to see that it is sufficient to prove Theorem 4.3 for pure input
states. The next step exploits the following properties:
Proposition 4.6. Let Φ be a quantum channel and Φ̃ be its complementary channel.
Then, for any pure input state ψ, the quantum states Φ(|ψihψ|) and Φ̃(|ψihψ|) have the
same spectrum.
From Proposition 4.6, the optimal input states for Φ and Φ̃ must coincide.
Proposition 4.7. The complementary channel of the quantum-limited amplifier is a
quantum-limited attenuator followed by the same quantum-limited amplifier followed by
the transposition, i.e., for any κ ≥ 1,
Ãκ,0 = T ◦ Aκ,0 ◦ E1−1/κ,0 ,
(4.5)
where T is the transposition operation.
From Propositions 4.6 and 4.7, the optimal input states for the quantum-limited amplifier must coincide with the optimal input states for a suitable quantum-limited attenuator composed with the same quantum-limited amplifier. Since the optimal input states
must be pure, they must be left pure by the quantum-limited attenuator. The claim then
follows from the following property.
Proposition 4.8. For any n ∈ N and any 0 < λ < 1, the vacuum is the only n-mode quantum state ρ̂ such that Eλ,0⊗n (ρ̂) is pure.
5. Gaussian optimizers for entropic inequalities in
quantum information
The problem of determining the information capacity region of the quantum Gaussian
degraded broadcast channel has led to a constrained minimum output entropy conjecture
[58], which is a generalization of Theorem 4.1 with a constrained input entropy.
Conjecture 5.1 (constrained minimum output entropy conjecture). For any n ∈ N,
quantum Gaussian input states minimize the output entropy of the n-mode Gaussian
quantum attenuators and amplifiers among all the input states with a given entropy. In
other words, let ρ̂ be a generic n-mode quantum state, and let ω̂ be the one-mode thermal
Gaussian state with entropy S(ρ̂)/n, so that ω̂ ⊗n is the n-mode thermal Gaussian state
with the same entropy as ρ̂. Then, for any 0 ≤ λ ≤ 1, κ ≥ 1 and E ≥ 0,
S(Eλ,E⊗n (ρ̂)) ≥ S(Eλ,E⊗n (ω̂⊗n)) = n S(Eλ,E (ω̂)) = n g(λ g⁻¹(S(ρ̂)/n) + (1 − λ) E) ,
S(Aκ,E⊗n (ρ̂)) ≥ S(Aκ,E⊗n (ω̂⊗n)) = n S(Aκ,E (ω̂)) = n g(κ g⁻¹(S(ρ̂)/n) + (κ − 1)(E + 1)) ,    (5.1)
where the function g has been defined in (3.20).
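The function g can be checked directly against the spectrum of a thermal state. The sketch below assumes g(x) = (x+1)ln(x+1) − x ln x, the standard form given in (3.20), together with the geometric spectrum pn = E^n/(E+1)^{n+1} of ω̂(E) (cf. (6.4)); the entropy of a thermal state then equals g of its mean photon number.

```python
import math

def g(x):
    # g(x) = (x+1) ln(x+1) − x ln x, cf. (3.20)
    return (x + 1.0) * math.log(x + 1.0) - (x * math.log(x) if x > 0 else 0.0)

def thermal_entropy(E, nmax=2000):
    # −Σ p_n ln p_n for the geometric spectrum p_n = E^n/(E+1)^{n+1},
    # computed iteratively to avoid overflow of E^n
    p = 1.0 / (E + 1.0)
    ratio = E / (E + 1.0)
    s = 0.0
    for _ in range(nmax):
        if p > 0.0:
            s -= p * math.log(p)
        p *= ratio
    return s

errs = [abs(thermal_entropy(E) - g(E)) for E in (0.5, 1.0, 3.0)]
```

The agreement also explains why the right-hand sides of (5.1) are exactly n times the entropy of a one-mode thermal state with the rescaled mean photon number.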
Conjecture 5.1 has been proven only in the one-mode case (n = 1) by De Palma, Trevisan and Giovannetti [34, 33], and has been extended to one-mode gauge-contravariant
quantum Gaussian channels by Qi, Wilde and Guha [93]. The proof by De Palma et al. is
based on the following fundamental majorization result for one-mode quantum Gaussian
channels [32], which extends Theorem 4.3.
Theorem 5.2. For any 0 ≤ λ ≤ 1, κ ≥ 1 and E ≥ 0 and any one-mode quantum state ρ̂,
Eλ,E (ρ̂) ≺ Eλ,E (ρ̂↓) ,    Aκ,E (ρ̂) ≺ Aκ,E (ρ̂↓) ,    (5.2)
where ρ̂↓ is the passive rearrangement of ρ̂, i.e., the passive state with the same spectrum as ρ̂.
We recall that a passive state is a quantum state that minimizes the average energy
among all the quantum states with the same spectrum [91, 82, 52]. If ρ̂ is diagonalized
in the orthonormal eigenbasis {ψn}n∈N as
ρ̂ = Σ_{n=0}^∞ pn |ψn⟩⟨ψn| ,    p0 ≥ p1 ≥ … ≥ 0 ,    (5.3)
ρ̂↓ is given by
ρ̂↓ = Σ_{n=0}^∞ pn |n⟩⟨n| ,    (5.4)
where {|n⟩}n∈N is the Fock basis.
From Theorem 5.2, in the case of one mode the constrained minimization of the output
entropy of Conjecture 5.1 can be restricted to passive input states. Unfortunately, an analogous majorization theorem does not hold for more than one mode [29].
Conjecture 5.1 was first proven for the one-mode quantum-limited attenuator [34]. The proof is based on the following isoperimetric inequality, which constitutes the infinitesimal version of the conjecture.
Theorem 5.3 (isoperimetric inequality for the one-mode quantum-limited attenuator).
Among all the input states with a given entropy, quantum Gaussian input states maximize
the derivative of the output entropy of the one-mode quantum-limited attenuator with
respect to the attenuation parameter. In other words, let ρ̂ be a one-mode quantum state,
and ω̂ the one-mode thermal Gaussian state with the same entropy as ρ̂. Then,
(d/dλ) S(Eλ,0 (ρ̂)) |λ=1 ≤ (d/dλ) S(Eλ,0 (ω̂)) |λ=1 = g⁻¹(S(ρ̂)) g′(g⁻¹(S(ρ̂))) .    (5.5)
The adjective “isoperimetric” is due to the formal analogy between entropy and volume
[36]. Up to a change of signs, the left-hand side in (5.5) plays the role of a perimeter and the function g⁻¹(s) g′(g⁻¹(s)) that of an isoperimetric profile.
Thanks to Theorem 5.2, it is sufficient to prove Theorem 5.3 for passive states. The
proof is then performed through the Lagrange multipliers. Since the Hilbert space of a
one-mode Gaussian quantum system has infinite dimension, a generic passive state has
infinitely many parameters. This issue is solved by restricting to a finite dimensional subspace with
bounded maximum energy, and then proving that the maximum of the left-hand side of
(5.5) for passive input states supported in the subspace tends to the right-hand side in
the limit of infinite maximum energy.
Conjecture 5.1 for the one-mode quantum-limited attenuator then follows by integrating the isoperimetric inequality (5.5) thanks to the semigroup property (3.29) of the
quantum-limited attenuator.
The generalization of Theorem 5.3 to all the one-mode quantum Gaussian attenuators and amplifiers would have implied Conjecture 5.1 for n = 1. However, for any
one-mode quantum Gaussian channel other than the quantum-limited attenuator, the
infinite dimension of the Hilbert space is really an issue. Indeed, for any quantum state ρ̂ with a support of finite dimension, (d/dλ) S(Eλ,E (ρ̂)) |λ=1 is infinite for any E > 0 and (d/dκ) S(Aκ,E (ρ̂)) |κ=1 is infinite for any E ≥ 0, and nothing can be proven restricting to a
finite dimensional subspace. If one tries to use the Lagrange multipliers directly for the
infinite dimensional problem, the Gaussian state is not the only solution [93], so that a
new approach is needed. This approach is based on the p → q norms and is presented in
subsection 5.1 below.
5.1. Quantum Gaussian channels have Gaussian maximizers
The theorem “Gaussian kernels have Gaussian maximizers” has been conjectured to
apply also to quantum Gaussian channels.
Conjecture 5.4 (quantum Gaussian channels have Gaussian maximizers). For any
n ∈ N and any p, q ≥ 1, quantum Gaussian input states achieve the p → q norm of the
n-mode Gaussian quantum attenuators and amplifiers. In other words, for any 0 ≤ λ ≤ 1,
κ ≥ 1 and E ≥ 0,
‖Eλ,E⊗n‖p→q = sup_{E′≥0} ‖Eλ,E⊗n (ω̂(E′)⊗n)‖q / ‖ω̂(E′)⊗n‖p = ( sup_{E′≥0} ‖Eλ,E (ω̂(E′))‖q / ‖ω̂(E′)‖p )^n ,
‖Aκ,E⊗n‖p→q = sup_{E′≥0} ‖Aκ,E⊗n (ω̂(E′)⊗n)‖q / ‖ω̂(E′)⊗n‖p = ( sup_{E′≥0} ‖Aκ,E (ω̂(E′))‖q / ‖ω̂(E′)‖p )^n ,    (5.6)
where ω̂(E′) is the one-mode thermal Gaussian state with average energy E′ as in (3.18).
Therefore, the p → q norms of the quantum Gaussian attenuators and amplifiers are
multiplicative.
Remark 5.1. The suprema in (5.6) are
• finite and achieved for a finite E ′ ≥ 0 if 1 ≤ p < q;
• finite and asymptotically achieved in the limit E ′ → ∞ if 1 < p = q;
• infinite and asymptotically achieved in the limit E ′ → ∞ if 1 ≤ q < p.
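The behavior described in Remark 5.1 can be explored numerically for the attenuator. The sketch below is an illustration under two assumptions not restated here: the thermal state ω̂(E) has the geometric spectrum (6.4), and Eλ,E maps ω̂(E′) to the thermal state with mean photon number λE′ + (1−λ)E. With these, the ratio in (5.6) is available in closed form, and the scan shows that it is bounded for p = 1 < q = 2 but grows without bound for q = 1 < p = 2.

```python
def thermal_norm(E, p):
    # Schatten p-norm of the thermal state with geometric spectrum
    # p_n = E^n/(E+1)^{n+1} (closed-form geometric series)
    if E == 0.0:
        return 1.0
    s = E / (E + 1.0)
    return ((1.0 / (E + 1.0)) ** p / (1.0 - s ** p)) ** (1.0 / p)

def ratio(Eprime, lam, E, p, q):
    # ‖E_{λ,E}(ω̂(E'))‖_q / ‖ω̂(E')‖_p, assuming the attenuator maps
    # ω̂(E') to ω̂(λE' + (1−λ)E)
    return thermal_norm(lam * Eprime + (1.0 - lam) * E, q) / thermal_norm(Eprime, p)

grid = [0.1 * k for k in range(1, 200)]
r_1_to_2 = [ratio(x, 0.5, 1.0, 1.0, 2.0) for x in grid]  # p=1 < q=2: bounded, max at small E'
r_2_to_1 = [ratio(x, 0.5, 1.0, 2.0, 1.0) for x in grid]  # q=1 < p=2: grows without bound
```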
Remark 5.2. Conjecture 5.4 can be extended to any linear and completely positive
map that preserves the set of unnormalized quantum Gaussian states, i.e., the operators
proportional to a quantum Gaussian state. These maps include all quantum Gaussian
channels and all the probabilistic maps resulting from the conditioning on the outcome of
a Gaussian measurement performed on a subsystem [70, 97]. The generalized conjecture
states that quantum Gaussian input states achieve the p → q norms of all such maps. In
this more general setup, the analogue of the optimization in the right-hand side of (5.6)
cannot be restricted to the thermal Gaussian states, but has to be performed over all
quantum Gaussian states.
Conjecture 5.4 has been proven only in some particular cases. As we have seen in
Corollary 4.4, the majorization result Theorem 4.3 implies Conjecture 5.4 for any n in
the case p = 1. De Palma, Trevisan and Giovannetti proved Conjecture 5.4 in the case of
one-mode quantum-limited channels, i.e., n = 1 and E = 0 [31]. Frank and Lieb proved
Conjecture 5.4 for any n in the case p = q [42], and Holevo extended the result to any
n-mode quantum Gaussian channel (still for p = q) [73].
5.1.1. The proof of Conjecture 5.4 for one-mode quantum-limited Gaussian channels
First, De Palma et al. prove Conjecture 5.4 for the one-mode quantum-limited attenuator. By the following lemma, it is sufficient to prove Conjecture 5.4 for positive input operators.
Lemma 5.5 ([4]). For any p ≥ 1, any quantum channel Φ and any operator X̂,
‖Φ(X̂)‖p ≤ ‖Φ(√(X̂†X̂))‖p .    (5.7)
The proof of Conjecture 5.4 is then based on the following new logarithmic Sobolev
inequality, which constitutes the infinitesimal version of Conjecture 5.4 (in the same way
as Gross’ logarithmic Sobolev inequality is the infinitesimal version of Nelson’s Hypercontractive theorem [53]).
Theorem 5.6 (logarithmic Sobolev inequality for the quantum-limited Gaussian attenuator). Let us fix p ≥ 1. Let ρ̂ be a one-mode quantum state, and let ω̂ be the thermal
Gaussian state such that ω̂ p /Trω̂ p has the same entropy as ρ̂p /Trρ̂p . Then,
(d/dλ) ln ‖Eλ,0 (ρ̂)‖p |λ=1 ≥ (d/dλ) ln ‖Eλ,0 (ω̂)‖p |λ=1 .    (5.8)
Thanks to Theorem 5.2, it is sufficient to prove Theorem 5.6 for passive input states. As
in the case of Theorem 5.3, the proof is then performed through the Lagrange multipliers,
restricting to a finite dimensional subspace with bounded maximum energy. Conjecture
5.4 for the one-mode quantum-limited attenuator follows by integrating (5.8) thanks to the
semigroup property of the attenuator (3.29).
Conjecture 5.4 for the one-mode quantum-limited amplifier follows from the following
duality Lemma for the Schatten norms.
Lemma 5.7. For any p > 1 and any positive operator X̂,
‖X̂‖p = sup { Tr[X̂ Ŷ] / ‖Ŷ‖_{p/(p−1)} : Ŷ ≥ 0, rank Ŷ < ∞ } .    (5.9)
Lemma 5.7 implies the following duality for the norms of quantum channels.
Lemma 5.8. For any quantum channel Φ and any p, q ≥ 1,
‖Φ‖p→q = ‖Φ†‖_{q/(q−1) → p/(p−1)} .    (5.10)
The norms of the quantum-limited amplifier can then be determined from the norms
of the quantum-limited attenuator thanks to the following property.
Lemma 5.9. The dual of the quantum-limited Gaussian amplifier is proportional to a
quantum-limited Gaussian attenuator, i.e., for any κ ≥ 1,
A†κ,0 = (1/κ) E_{1/κ,0} .    (5.11)
5.1.2. The proof of Conjecture 5.1 for all the one-mode attenuators and amplifiers
De Palma, Trevisan and Giovannetti have exploited the proof of Conjecture 5.4 for the one-mode quantum-limited amplifier to prove Conjecture 5.1 for all the one-mode attenuators and amplifiers [33]. First, they prove Conjecture 5.1 for the one-mode quantum-limited amplifier. The first step is rephrasing Conjecture 5.4 for the one-mode quantum-limited amplifier in the following way.
Theorem 5.10. Let us fix κ ≥ 1. Let ρ̂ be a generic one-mode quantum state, and let
ω̂ be the one-mode thermal Gaussian state with the same entropy as ρ̂. Then, for any
q > 1 there exists 1 ≤ p < q such that the p → q norm of Aκ,0 is achieved by ω̂, and
‖Aκ,0 (ρ̂)‖q / ‖ρ̂‖p ≤ ‖Aκ,0‖p→q = ‖Aκ,0 (ω̂)‖q / ‖ω̂‖p .    (5.12)
Rewriting (5.12) in terms of the Rényi entropies we get
Sq(Aκ,0 (ρ̂)) ≥ Sq(Aκ,0 (ω̂)) + ((p − 1)/p) (q/(q − 1)) (Sp(ρ̂) − Sp(ω̂)) .    (5.13)
Taking the limit q → 1 and recalling (3.5) we get the claim
S(Aκ,0 (ρ̂)) ≥ S(Aκ,0 (ω̂)) .
(5.14)
This result implies Conjecture 5.1 for all the one-mode attenuators and amplifiers since
any of these channels can be decomposed as a one-mode quantum-limited attenuator followed by a one-mode quantum-limited amplifier (Proposition 4.5), for which Conjecture
5.1 holds.
5.1.3. The proof of Conjecture 5.4 for p = q
The proof by Frank and Lieb is completely different from the proof by De Palma et al.,
and is based on the following theorem.
Theorem 5.11. For any p > 1 and any quantum channel Φ,
‖Φ‖p→p ≤ ‖Φ(Î)‖∞^((p−1)/p) ‖Φ†(Î)‖∞^(1/p) .    (5.15)
Conjecture 5.4 follows directly by applying Theorem 5.11 to quantum Gaussian channels. The proof of Theorem 5.11 is based on Hadamard's three-line lemma [99].
Theorem 5.12 (Hadamard's three-line lemma). Let f be analytic in the strip {z : 0 <
ℜz < 1} and continuous and bounded on its closure. Let
Mt(f) = sup_{y∈R} |f(t + iy)|    (5.16)
for 0 ≤ t ≤ 1. Then
Mt ≤ M0^{1−t} M1^{t} .    (5.17)
Theorem 5.11 follows by applying Theorem 5.12 to
f(z) = Tr[ Ŷ^{(1−z)p/(p−1)} Φ(X̂^{pz}) ] ,    (5.18)
where X̂ and Ŷ are positive and Ŷ has finite rank, and recalling the duality relation for
the Schatten norms (Lemma 5.7).
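Theorem 5.12 can be sanity-checked on a concrete function (an illustration added here). For f(z) = 1/(z+2), which is analytic and bounded on the closed strip, the modulus |f(t+iy)| is maximal at y = 0, so Mt(f) = 1/(t+2), and (5.17) reduces to the inequality t + 2 ≥ 2^{1−t} 3^t, an instance of the arithmetic–geometric mean inequality.

```python
def M(t):
    # sup_y |f(t + iy)| for f(z) = 1/(z + 2); the sup is attained at y = 0
    return 1.0 / (t + 2.0)

# check M_t ≤ M_0^{1−t} M_1^t on a grid of t in [0, 1]
checks = [M(k / 10.0) <= M(0.0) ** (1.0 - k / 10.0) * M(1.0) ** (k / 10.0) + 1e-12
          for k in range(11)]
```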
5.2. The Entropy Photon-number Inequality
The Entropy Photon-number Inequality is the quantum counterpart of the Entropy Power
Inequality for the beam-splitter. Guha, Erkmen and Shapiro conjectured it [55] as a
generalization of Conjecture 5.1, and De Palma, Mari and Giovannetti extended the
conjecture to the squeezing [27].
Conjecture 5.13 (Entropy Photon-number Inequality). For any n, n-mode thermal
quantum Gaussian states minimize the output entropy of the n-mode beam-splitter or
squeezing among all the n-mode input states where the two inputs have given entropies.
In other words, let ρ̂A and ρ̂B be two n-mode quantum states, and let ω̂A and ω̂B be the one-mode thermal Gaussian states with entropies S(ρ̂A)/n and S(ρ̂B)/n, so that ω̂A⊗n and ω̂B⊗n are the n-mode thermal Gaussian states with the same entropy as ρ̂A and ρ̂B, respectively. Then, for any 0 ≤ λ ≤ 1,
S(Bλ⊗n (ρ̂A ⊗ ρ̂B)) ≥ S(Bλ⊗n (ω̂A⊗n ⊗ ω̂B⊗n)) = n g(λ g⁻¹(S(ρ̂A)/n) + (1 − λ) g⁻¹(S(ρ̂B)/n)) ,    (5.19)
and for any κ ≥ 1
S(Bκ⊗n (ρ̂A ⊗ ρ̂B)) ≥ S(Bκ⊗n (ω̂A⊗n ⊗ ω̂B⊗n)) = n g(κ g⁻¹(S(ρ̂A)/n) + (κ − 1)(g⁻¹(S(ρ̂B)/n) + 1)) ,    (5.20)
where the function g has been defined in (3.20).
For any n-mode quantum state ρ̂, the n-mode thermal Gaussian state with the same entropy as ρ̂ has average photon number per mode g⁻¹(S(ρ̂)/n). This quantity is called
the entropy photon-number of ρ̂, hence the name Entropy Photon-number Inequality.
In the case where the second input ρ̂B of the beam-splitter or of the squeezing is
a thermal Gaussian state, Conjecture 5.13 reduces to Conjecture 5.1. The only other
particular case where the Entropy Photon-number Inequality has been proven is when
the two inputs are (not necessarily thermal) Gaussian states [21].
5.3. The sharp Young’s inequality for the beam-splitter
The similarity between the Entropy Photon-number Inequality and the Entropy Power
Inequality together with the proof of the latter through the sharp Young’s inequality
for convolutions leads us to conjecture a quantum version of Young's inequality, formulated here for the first time.
Let us define for any n ∈ N, any p, q, r ≥ 1 and any λ ≥ 0
Cn(p, q, r, λ) = sup_{0<‖X̂‖p, ‖Ŷ‖q<∞} ‖Bλ⊗n (X̂ ⊗ Ŷ)‖r / (‖X̂‖p ‖Ŷ‖q) .    (5.21)
Conjecture 5.14 (Quantum sharp Young’s inequality). For any n ∈ N, any p, q, r ≥ 1
and any λ ≥ 0, the supremum in (5.21) can be restricted to thermal Gaussian states, i.e.,
Cn(p, q, r, λ) = sup_{EA,EB≥0} ‖Bλ⊗n (ω̂(EA)⊗n ⊗ ω̂(EB)⊗n)‖r / (‖ω̂(EA)⊗n‖p ‖ω̂(EB)⊗n‖q)
= ( sup_{EA,EB≥0} ‖Bλ (ω̂(EA) ⊗ ω̂(EB))‖r / (‖ω̂(EA)‖p ‖ω̂(EB)‖q) )^n = C1(p, q, r, λ)^n .    (5.22)
Therefore, the constants Cn are multiplicative.
Remark 5.3. We conjecture that the supremum in (5.22) is
• finite and achieved by finite EA, EB ≥ 0 if 1/p + 1/q > 1 + 1/r;
• finite and asymptotically achieved in the limit EA, EB → ∞ if 1/p + 1/q = 1 + 1/r;
• infinite and asymptotically achieved in the limit EA, EB → ∞ if 1/p + 1/q < 1 + 1/r.
The striking difference with respect to the classical case is that the supremum in (5.22) is finite when 1/p + 1/q > 1 + 1/r. The divergence of the classical Young's inequality when 1/p + 1/q > 1 + 1/r is asymptotically achieved by a sequence of Gaussian probability measures that tends to a Dirac delta, and can be ascribed to the fact that a probability density can have arbitrarily high L∞ norm. The divergence disappears in the quantum scenario since ‖ρ̂‖∞ ≤ 1 for any quantum state ρ̂.
The quantum sharp Young’s inequality provides a multiplicative upper bound to the
p → q norms of the quantum Gaussian attenuators and amplifiers. Indeed, assuming
Conjecture 5.14, we have for any n ∈ N, p, q ≥ 1, 0 ≤ λ ≤ 1 and E ≥ 0
‖Eλ,E⊗n‖p→q = sup_{0<‖X̂‖p<∞} ‖Eλ,E⊗n (X̂)‖q / ‖X̂‖p = sup_{0<‖X̂‖p<∞} ‖Bλ⊗n (X̂ ⊗ ω̂(E)⊗n)‖q / ‖X̂‖p ≤ inf_{r≥1} ( C1(p, r, q, λ) ‖ω̂(E)‖r )^n ,    (5.23)
and the same holds for the Gaussian quantum amplifiers. Since the conjectured quantum
sharp Young’s inequality is saturated by quantum Gaussian states, we conjecture that
the upper bound (5.23) is sharp and coincides with (5.6), i.e., that
sup_{E′≥0} ‖Eλ,E (ω̂(E′))‖q / ‖ω̂(E′)‖p = inf_{r≥1} C1(p, r, q, λ) ‖ω̂(E)‖r .    (5.24)
Moreover, the quantum sharp Young’s inequality provides a lower bound to the output
entropy of the beam-splitter and of the squeezing. Indeed, rewriting (5.22) in terms of
the Rényi entropies we get for any n ∈ N, λ ≥ 0, p, q, r ≥ 1 and any n-mode quantum
states ρ̂A and ρ̂B
Sr(Bλ⊗n (ρ̂A ⊗ ρ̂B)) ≥ (r/(r − 1)) ( ((p − 1)/p) Sp(ρ̂A) + ((q − 1)/q) Sq(ρ̂B) − n ln C1(p, q, r, λ) ) .    (5.25)
We choose 0 ≤ α, β < r/(r − 1) and set
p = p(r, α) = r/(r + α − α r) ,    q = q(r, β) = r/(r + β − β r) ,    (5.26)
so that (5.25) becomes
Sr(Bλ⊗n (ρ̂A ⊗ ρ̂B)) ≥ α Sp(r,α)(ρ̂A) + β Sq(r,β)(ρ̂B) − (n r/(r − 1)) ln C1(p(r, α), q(r, β), r, λ) .    (5.27)
Finally, taking the limit r → 1 and the supremum over α, β ≥ 0 we get
S(Bλ⊗n (ρ̂A ⊗ ρ̂B)) ≥ sup_{α,β≥0} ( α S(ρ̂A) + β S(ρ̂B) − n (d/dr) C1(p(r, α), q(r, β), r, λ) |r=1 ) ,    (5.28)
where we used that
lim_{r→1} p(r, α) = lim_{r→1} q(r, β) = 1 ,    C1(1, 1, 1, λ) = 1 .    (5.29)
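The algebra behind the choice (5.26) can be verified numerically (an illustration added here): with p = p(r, α), the coefficient (r/(r−1))((p−1)/p) appearing in (5.25) equals α exactly, which is what turns (5.25) into (5.27).

```python
def p_of(r, alpha):
    # the choice (5.26)
    return r / (r + alpha - alpha * r)

checks = []
for r in (1.2, 2.0, 3.5):
    for alpha in (0.0, 0.4, 1.0):     # all within the range 0 ≤ α < r/(r−1)
        p = p_of(r, alpha)
        coeff = (r / (r - 1.0)) * (p - 1.0) / p
        checks.append(abs(coeff - alpha) < 1e-12)
```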
Since the conjectured quantum Young inequality is saturated by quantum Gaussian
states, we conjecture that the lower bound (5.28) is sharp and coincides with the bound
provided by the Entropy Photon-number Inequality (5.19).
6. The thinning
The thinning [94] is the map acting on probability distributions on N that is the discrete
analogue of the continuous rescaling operation on R+ .
Definition 2 (Thinning). Let N be a random variable with values in N. The thinning
with parameter 0 ≤ λ ≤ 1 is defined as
Tλ(N) = Σ_{i=1}^{N} Bi ,    (6.1)
where the {Bn }n∈N+ are independent Bernoulli variables with parameter λ (also independent of N ), i.e., each Bi is 1 with probability λ, and 0 with probability 1 − λ.
From a physical point of view, the thinning can be understood as follows. Let p be the
probability distribution of the number N of photons that are sent through a beam-splitter
with transmissivity λ, such that for any n ∈ N the probability that n photons are sent is
pn . Each photon has probability λ of being transmitted and probability 1 − λ of being
reflected. Then, Tλ (N ) is the random variable associated to the number of transmitted
photons, and has probability distribution
[Tλ(p)]n = Σ_{k=n}^∞ (k choose n) λ^n (1 − λ)^{k−n} pk    ∀ n ∈ N .    (6.2)
The thinning coincides with the restriction of the one-mode quantum-limited Gaussian
attenuator to input states diagonal in the Fock basis.
Theorem 6.1 ([32], Theorem 56). For any 0 ≤ λ ≤ 1 and any probability distribution
p on N,
Eλ,0 ( Σ_{n=0}^∞ pn |n⟩⟨n| ) = Σ_{n=0}^∞ [Tλ(p)]n |n⟩⟨n| .    (6.3)
We recall that for any E ≥ 0, the thermal quantum Gaussian state ω̂(E) corresponds to the geometric probability distribution ω(E) for the energy, given by
ω(E)n = (1/(E + 1)) (E/(E + 1))^n .    (6.4)
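One consequence worth illustrating: thinning maps the geometric distribution ω(E) to ω(λE), mirroring the fact that the quantum-limited attenuator maps thermal states to thermal states with rescaled mean photon number. The sketch below checks this numerically using (6.2) and (6.4), with the distributions truncated at 200 terms (the truncation is our simplification; the tail mass is negligible).

```python
import math

def geometric(E, K):
    # ω(E)_n = (1/(E+1)) (E/(E+1))^n, eq. (6.4), truncated to n < K
    return [(1.0 / (E + 1.0)) * (E / (E + 1.0)) ** n for n in range(K)]

def thin(p, lam):
    # eq. (6.2) on the truncated support
    K = len(p)
    return [sum(math.comb(k, n) * lam ** n * (1 - lam) ** (k - n) * p[k]
                for k in range(n, K)) for n in range(K)]

E, lam, K = 2.0, 0.4, 200
q = thin(geometric(E, K), lam)
target = geometric(lam * E, K)   # T_λ(ω(E)) should equal ω(λE)
err = max(abs(a - b) for a, b in zip(q[:50], target[:50]))
```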
We can then extend to the thinning all the results on the quantum-limited attenuator.
Let p and q be two probability distributions on N. We say that p majorizes q, and
write p ≻ q, iff there exists a doubly stochastic infinite matrix A such that [89]
qn = Σ_{k=0}^∞ Ank pk    ∀ n ∈ N .    (6.5)
The infinite matrix A is doubly stochastic iff
Amn ≥ 0 ∀ m, n ∈ N ,    Σ_{k=0}^∞ Ank = Σ_{k=0}^∞ Akn = 1    ∀ n ∈ N .    (6.6)
The link with the majorization for quantum states of Definition 1 is the following.
Theorem 6.2. The quantum state ρ̂ majorizes the quantum state σ̂ iff the probability
distribution on N associated to the spectrum of ρ̂ majorizes the probability distribution
on N associated to the spectrum of σ̂.
Theorem 5.2 then implies the following.
Theorem 6.3. For any 0 ≤ λ ≤ 1 and any probability distribution p on N,
Tλ(p) ≺ Tλ(p↓) ,    (6.7)
where p↓ is the decreasing rearrangement of p, i.e., p↓n = pσ(n) for any n ∈ N, where
σ : N → N is a bijective function such that pσ(0) ≥ pσ(1) ≥ . . . ≥ 0.
The Shannon entropy of the probability measure p on N is the counterpart of the von
Neumann entropy:
S(p) = −Σ_{n=0}^∞ pn ln pn .    (6.8)
The proof of Conjecture 5.1 for the one-mode quantum-limited attenuator [34] implies
Theorem 6.4. Geometric input probability distributions minimize the output Shannon entropy of the thinning among all the input probability distributions with a given Shannon entropy. In other words, let p be a generic probability distribution on N, and let ω be the geometric probability distribution with the same Shannon entropy as p. Then, for any 0 ≤ λ ≤ 1,
S(Tλ(p)) ≥ S(Tλ(ω)) = g(λ g⁻¹(S(p))) .    (6.9)
For any p ≥ 1, the lp norm of a sequence of complex numbers {xn}n∈N is
‖x‖p = ( Σ_{n∈N} |xn|^p )^{1/p} .    (6.10)
For any p, q ≥ 1, the p → q norm of the thinning is
‖Tλ‖p→q = sup_{0<‖x‖p<∞} ‖Tλ x‖q / ‖x‖p .    (6.11)
The proof of Conjecture 5.4 for the one-mode quantum-limited attenuator [31] then implies the following.
Theorem 6.5. For any p, q ≥ 1, the p → q norm of the thinning is achieved by
geometric probability distributions, i.e., for any 0 ≤ λ ≤ 1,
‖Tλ‖p→q = sup_{E≥0} ‖Tλ(ω(E))‖q / ‖ω(E)‖p .    (6.12)
Remark 6.1. The supremum in (6.12) is
• finite and achieved for a finite E ≥ 0 if 1 ≤ p < q;
• finite and asymptotically achieved in the limit E → ∞ if 1 < p = q;
• infinite and asymptotically achieved in the limit E → ∞ if 1 ≤ q < p.
7. Quantum conditioning and the quantum Entropy
Power Inequality
7.1. The quantum Entropy Power Inequality
The first attempt to prove the Entropy Photon-number Inequality was through the quantum counterpart of the heat semigroup technique of the proof of the Entropy Power Inequality by Blachman and Stam. However, this technique only leads to the quantum
Entropy Power Inequality [79, 80, 27], which has the same expression as the Entropy
Power Inequality and provides a lower bound to the output entropy of the beam-splitter
or of the squeezing in terms of the entropies of the two inputs. Since this bound is strictly
lower than the output entropy achieved by thermal Gaussian input states, the quantum
Entropy Power Inequality is strictly weaker than the Entropy Photon-number Inequality.
Theorem 7.1 (quantum Entropy Power Inequality). For any λ ≥ 0 and any two n-mode quantum states ρ̂A and ρ̂B with a finite average energy,
exp(S(Bλ⊗n (ρ̂A ⊗ ρ̂B))/n) ≥ λ exp(S(ρ̂A)/n) + |1 − λ| exp(S(ρ̂B)/n) .    (7.1)
Remark 7.1. The factors of 2 in the exponents of the classical Entropy Power Inequality (2.13) do not appear in (7.1) because an n-mode quantum state is the counterpart of a random variable on R2n. The coefficients in front of the exponentials in the right-hand side of (7.1) come from the coefficients in the transformation rules for the ladder operators (3.23) and (3.25).
The quantum Entropy Power Inequality was proposed by König and Smith [79], who proved it in the case λ = 1/2 [79, 80]. De Palma, Mari and Giovannetti extended the
proof to any λ ≥ 0 [27]. De Palma, Mari, Lloyd and Giovannetti proposed and proved an
Entropy Power Inequality for the most general linear transformation of bosonic modes
[28]. Huber, König and Vershynina proposed and proved an Entropy Power Inequality
for the quantum additive noise channels [74].
7.2. Quantum conditioning
In the classical scenario, the Shannon entropy of the random variable A conditioned on
the “memory” random variable M with law p is defined as the expectation value of the
Shannon entropy of A conditioned on the values assumed by M [19]:
S(A|M) = ∫ S(A|M = m) dp(m) .    (7.2)
Let now A and M be quantum systems, and let us consider a quantum state ρ̂AM on the
joint system AM . The definition (7.2) cannot be brought to the quantum setting when
A is entangled with M , since conditioning on the values assumed by M is not possible.
However, (7.2) can be rewritten as
S(A|M ) = S(AM ) − S(M ) ,
(7.3)
which is the right definition for the quantum conditional entropy [90, 64, 108, 70] (see [101]
for a broad discussion). We write S(A|M )ρ̂AM when the joint quantum state to which
the conditional entropy refers is not clear from the context. A striking feature of the
quantum conditional entropy is that it can be negative, while the quantum entropy is
always positive.
The correlations between two random variables or two quantum systems A and B are
quantified by the (quantum) mutual information [90, 64, 108, 70]
I(A : B) = S(A) + S(B) − S(AB) .
(7.4)
Both the classical and quantum versions of the mutual information are positive as a
consequence of the subadditivity of the entropy [90, 64, 108, 70]. The classical mutual
information vanishes iff A and B are independent random variables. Analogously, the
quantum mutual information vanishes iff ρ̂AB = ρ̂A ⊗ ρ̂B .
The conditional mutual information between A and B conditioned on the memory M
is
I(A : B|M ) = S(A|M ) + S(B|M ) − S(AB|M ) .
(7.5)
The classical conditional mutual information is positive as a consequence of the expression (7.2) for the conditional entropy and of the positivity of the mutual information [19].
Also the quantum conditional mutual information is positive [90, 64, 108, 70]. Since the
quantum conditional entropy cannot be written as in (7.2), this result is highly nontrivial. The classical conditional mutual information vanishes iff A and B are conditionally
independent given the value of M . The quantum conditional mutual information vanishes
for all the joint quantum states of the following form [65]
ρ̂ABM = ⊕_{n=0}^∞ pn ρ̂_{A MA(n)} ⊗ ρ̂_{B MB(n)} ,    (7.6)
where p is a probability distribution on N and each ρ̂_{A MA(n)} or ρ̂_{B MB(n)} is a quantum state on the Hilbert space HA ⊗ H_{MA(n)} or HB ⊗ H_{MB(n)}, respectively, and where
HM = ⊕_{n=0}^∞ H_{MA(n)} ⊗ H_{MB(n)} .    (7.7)
If A, B and M have finite dimension, all the quantum states with vanishing conditional
mutual information are of the form (7.6). The same property is believed to hold for
infinite dimension, but this has not been proven yet.
A fundamental consequence of the positivity of the quantum conditional mutual information is the associated data-processing inequality, stating that discarding a subsystem
always decreases the quantum conditional mutual information, i.e., for any quantum state
on a joint quantum system ABCM
I(AC : B|M ) ≤ I(A : B|M ) .
(7.8)
7.3. The quantum conditional Entropy Power Inequality
Let X and Y be random variables with values in Rn , and let M be a random variable such
that X and Y are conditionally independent given M . Then, the expression (7.2), the
Entropy Power Inequality (2.13) and Jensen’s inequality imply the conditional Entropy
Power Inequality [30]
exp(2 S(X + Y|M)/n) ≥ exp(2 S(X|M)/n) + exp(2 S(Y|M)/n) .    (7.9)
The inequality (7.9) is saturated by any joint probability measure on XY M such that, conditioning on any value m of M, X and Y are independent Gaussian random variables with proportional covariance matrices, where the proportionality constant does not depend on m.
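The Gaussian saturation is immediate to verify for a trivial memory M: for independent X ~ N(0, vX) and Y ~ N(0, vY) on R, the entropy powers add exactly, since exp(2S) = 2πe·v for a Gaussian of variance v (a one-dimensional illustration added here).

```python
import math

def gaussian_entropy(v):
    # differential entropy of N(0, v) on R
    return 0.5 * math.log(2.0 * math.pi * math.e * v)

vx, vy = 1.7, 0.4
# X + Y ~ N(0, vx + vy), so both sides equal 2πe (vx + vy)
lhs = math.exp(2.0 * gaussian_entropy(vx + vy))
rhs = math.exp(2.0 * gaussian_entropy(vx)) + math.exp(2.0 * gaussian_entropy(vy))
```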
Since the quantum conditional entropy cannot be expressed as in (7.2), the above proof
does not go through in the quantum setting. However, the following quantum conditional
Entropy Power Inequality follows by adapting the proof of the quantum Entropy Power
Inequality.
Theorem 7.2 (quantum conditional Entropy Power Inequality). Quantum Gaussian
states minimize the output quantum conditional entropy of the beam-splitter and of the
squeezing among all the input states where the two inputs are conditionally independent
given the memory. In other words, let A and B be n-mode Gaussian quantum systems,
and let M be a generic quantum system. Let ρ̂ABM be a joint quantum state with finite
average energy on AB, finite S(ρ̂M ) and with I(A : B|M ) = 0, and let
ρ̂CM = (Bλ⊗n ⊗ IM)(ρ̂ABM) ,    (7.10)
where λ ≥ 0 and A and B are the two inputs of the beam-splitter or of the squeezing.
Then,
exp(S(C|M)/n) ≥ λ exp(S(A|M)/n) + |1 − λ| exp(S(B|M)/n) .    (7.11)
Moreover, let M be a 2n-mode Gaussian quantum system of the form M = MA MB ,
where MA and MB are n-mode Gaussian quantum systems. Then, for any a, b ∈ R
there exists a sequence {ρ̂ABM(k)}k∈N of 4n-mode quantum Gaussian states of the form ρ̂ABM(k) = ρ̂AMA(k) ⊗ ρ̂BMB(k) such that
S(A|M)ρ̂ABM(k) = a ,    S(B|M)ρ̂ABM(k) = b    (7.12)
for any k ∈ N, and
lim_{k→∞} exp(S(C|M)ρ̂CM(k) / n) = λ exp(a/n) + |1 − λ| exp(b/n) .    (7.13)
If M is trivial, (7.11) becomes the quantum Entropy Power Inequality. The quantum
conditional Entropy Power Inequality was first conjectured by König, who proved it in
the case 0 ≤ λ ≤ 1 for Gaussian input states [77]. The general case was proven by
De Palma and Trevisan [30]. De Palma and Huber proved a conditional Entropy Power
Inequality for the quantum additive noise channels [25]. The proofs of [30, 25] settled
some regularity issues that affected the previous proofs of [79, 27, 28, 77].
The proof of the quantum conditional Entropy Power inequality of [30] is the quantum
counterpart of the proof of the classical Entropy Power Inequality by Blachman and Stam
based on the evolution with the heat semigroup. Let A be an n-mode Gaussian quantum system with ladder operators â1, …, ân. The displacement operator D̂(z) with z ∈ Cn is the unitary operator that displaces the ladder operators:
D̂(z)† âi D̂(z) = âi + zi Î ,    i = 1, …, n .    (7.14)
The quantum heat semigroup is the quantum Gaussian channel generated by a convex
combination of displacement operators with a Gaussian probability measure:
Nt(ρ̂) = ∫_{Cn} D̂(√t z) ρ̂ D̂(√t z)† e^{−|z|²} dz/πⁿ ,    N0 = I ,    Nt ∘ Nt′ = Nt+t′    ∀ t, t′ ≥ 0 .    (7.15)
This is the quantum counterpart of the classical heat semigroup acting on a probability
density function f on Cn :
(Nt f)(w) = ∫_{Cn} f(w − √t z) e^{−|z|²} dz/πⁿ ,    w ∈ Cn .    (7.16)
Let ρ̂AM be a joint quantum state on AM. The quantum conditional Fisher information of the state ρ̂AM is the rate of increase of the quantum conditional mutual information between A and Z when the system A is displaced by √t Z according to (7.15). In other words, for any t > 0, let σ̂AMZ(t) be the probability measure on Cn with values in quantum states on AM such that
dσ̂AMZ(z, t) = D̂(√t z) ρ̂AM D̂(√t z)† e^{−|z|²} dz/πⁿ ,    ∫_{Cn} dσ̂AMZ(z, t) = (Nt ⊗ IM)(ρ̂AM) .    (7.17)
Then, the quantum conditional Fisher information of ρ̂_{AM} is

J(A|M)_{ρ̂_{AM}} = d/dt I(A : Z|M)_{σ̂_{AMZ}(t)} |_{t=0} .    (7.18)
The quantum de Bruijn identity links the quantum conditional Fisher information to the
time derivative of the conditional entropy along the heat semigroup.
Lemma 7.3 (quantum de Bruijn identity).
J(A|M)_{ρ̂_{AM}} = d/dt S(A|M)_{(N_t ⊗ I_M)(ρ̂_{AM})} |_{t=0} .    (7.19)
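The classical, unconditional analogue of Lemma 7.3 can be checked numerically. The following is a sketch with my own normalizations (one complex mode, Gaussian input of variance s, for which the Fisher information in this parametrization equals 1/s): differentiating the numerically computed entropy along the heat flow recovers the Fisher information.

```python
import numpy as np

def entropy_gauss_numeric(s, grid=500, half=8.0):
    # -∫ f ln f over the complex plane, by quadrature, for a
    # circularly symmetric Gaussian density of variance s
    x = np.linspace(-half, half, grid)
    X, Y = np.meshgrid(x, x)
    f = np.exp(-(X**2 + Y**2) / s) / (np.pi * s)
    dA = (x[1] - x[0]) ** 2
    return -np.sum(f * np.log(f)) * dA

s, dt = 1.5, 1e-4
# d/dt S(N_t f) at t = 0, where N_t adds t to the variance (de Bruijn side)
dh = (entropy_gauss_numeric(s + dt) - entropy_gauss_numeric(s)) / dt
print(abs(dh - 1.0 / s) < 1e-3)   # Fisher information of the Gaussian is 1/s
```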
The first part of the proof of the quantum conditional Entropy Power Inequality is
proving the following quantum conditional Stam inequality, which provides an upper
bound to the quantum conditional Fisher information of the output of the beam-splitter
or of the squeezing in terms of the quantum conditional Fisher information of the two
inputs.
Theorem 7.4 (quantum conditional Stam inequality). Let ρ̂ABM be a quantum state
on ABM with finite average energy, finite S(M ) and I(A : B|M ) = 0, and let ρ̂CM be
as in (7.10). Then, for any λ ≥ 0 the quantum conditional Stam inequality holds:
1 / J(C|M)_{ρ̂_{CM}} ≥ λ / J(A|M)_{ρ̂_{AM}} + |1 − λ| / J(B|M)_{ρ̂_{BM}} .    (7.20)
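For Gaussian inputs the bound (7.20) is tight. In the classical one-mode analogue (with the normalization J = 1/σ² for a Gaussian of variance σ², an assumption of this sketch), tightness is just linearity of the variances at the beam-splitter:

```python
lam = 0.4
sA, sB = 2.0, 5.0
JA, JB = 1.0 / sA, 1.0 / sB        # Fisher info of a Gaussian of variance s
sC = lam * sA + (1 - lam) * sB     # beam-splitter output variance
JC = 1.0 / sC
lhs = 1.0 / JC
rhs = lam / JA + (1 - lam) / JB
print(lhs >= rhs - 1e-12, abs(lhs - rhs) < 1e-12)   # inequality holds with equality
```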
The quantum conditional Stam inequality follows from the data processing inequality
for the quantum conditional mutual information (7.8), and implies that the quantum
conditional Entropy Power Inequality does not improve along the evolution with the
heat semigroup. Then, the proof of the Entropy Power Inequality is concluded if we show that it becomes asymptotically an equality in the infinite-time limit. This is achieved by proving that, in the infinite-time limit under the evolution with the heat semigroup, the quantum conditional entropy has a universal scaling independent of the initial state.
Lemma 7.5. For any joint quantum state ρ̂_{AM} with finite average energy and finite S(M),

S(A|M)_{(N_t ⊗ I_M)(ρ̂_{AM})} = n ln t + n + o(1)   for t → ∞ .    (7.21)
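The scaling (7.21) can be sanity-checked on thermal states (a sketch relying on the fact that a Gaussian random displacement of total variance t maps the thermal state of mean photon number E to the one with mean photon number E + t; g below is the one-mode thermal-state entropy):

```python
import math

def g(E):
    """von Neumann entropy of the one-mode thermal state
    with mean photon number E."""
    return (E + 1) * math.log(E + 1) - E * math.log(E)

E0 = 0.8   # initial mean photon number (arbitrary)
# distance from the predicted asymptote ln t + 1 (n = 1 mode)
gaps = [abs(g(E0 + t) - (math.log(t) + 1)) for t in (1e2, 1e4, 1e6)]
print(gaps[0] > gaps[1] > gaps[2])   # the o(1) correction shrinks
```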
The proof of this scaling is based on the following more general result.
Theorem 7.6. Let A, B be quantum Gaussian systems with m and n modes, respectively, and let Φ : A → B be a quantum Gaussian channel. Then, for any quantum system M and any quantum state ρ̂_{AM} on AM with finite average energy and finite S(M),

S(B|M)_{(Φ ⊗ I_M)(ρ̂_{AM})} ≥ lim_{E→∞} S(B|A′)_{(Φ ⊗ I_{A′})(τ̂_{AA′}(E))} ,    (7.22)

where A′ is a Gaussian quantum system with m modes, and for any E ≥ 0, τ̂_{AA′}(E) is a pure state whose marginal on A is the thermal Gaussian state ω̂(E)^{⊗m}.
We mention that a result similar to Theorem 7.6 has been proven in the scenario with
a constraint on the average energy of the system A [24].
8. Conclusions and perspectives
The optimization problems of functional analysis whose solutions are Gaussian functions have motivated the conjecture that quantum Gaussian states solve the quantum counterparts of these optimization problems. These conjectures play a key role in quantum information theory, since they are necessary to prove the converse theorems
for many communication scenarios with quantum Gaussian channels. We have reviewed
the state of the art in the proof of these conjectures. In the case of one-mode quantum Gaussian channels, they are almost all solved, with the exceptions of the Entropy
Photon-number Inequality and the sharp Young’s inequality for the beam-splitter. On
the contrary, there are only very few results for multi-mode quantum Gaussian channels.
In this scenario, both the constrained minimum output entropy conjecture (Conjecture
5.1) and the multiplicativity of the p → q norms with 1 < p < q (Conjecture 5.4) are still
completely open challenging problems, and we hope that this review will set the ground
for their solution.
Quantum Gaussian channels also constitute a bridge between continuous and discrete
classical probability. Indeed, on the one hand their properties are very similar to the
properties of Gaussian integral kernels, with quantum Gaussian states playing the role of
Gaussian probability measures. On the other hand, the quantum states diagonal in the
Fock basis of a one-mode Gaussian quantum system are in a one-to-one correspondence
with the probability measures on N. This correspondence establishes a bridge between
Gaussian quantum systems and discrete probability. The role of quantum Gaussian states
is here played by the geometric probability distributions. These distributions turn out
to be the solution to many optimization problems involving the thinning, which is the
discrete analogue of the rescaling of a real random variable.
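The thinning operation mentioned here can be made concrete (my own implementation, not from the review): the λ-thinning of a random variable on ℕ keeps each of its units independently with probability λ, and it maps geometric distributions to geometric distributions, mirroring how Gaussians behave under rescaling.

```python
import math

def thin(p, lam):
    """λ-thinning of a probability vector p on {0,...,N}: each unit
    survives independently with probability lam."""
    N = len(p) - 1
    out = [0.0] * (N + 1)
    for n in range(N + 1):
        for k in range(n + 1):
            out[k] += p[n] * math.comb(n, k) * lam**k * (1 - lam)**(n - k)
    return out

def geometric(mean, N):
    q = mean / (1 + mean)               # P(n) = (1-q) q^n has mean q/(1-q)
    return [(1 - q) * q**n for n in range(N + 1)]

lam, mean, N = 0.35, 2.0, 200
thinned = thin(geometric(mean, N), lam)
target = geometric(lam * mean, N)       # geometric family is closed under thinning
err = max(abs(a - b) for a, b in zip(thinned, target))
print(err < 1e-8)   # True up to truncation of the tail
```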
We then hope that this review will stimulate even more cross-fertilization among
functional analysis, discrete probability and quantum information.
Acknowledgements
We thank Elliott Lieb and Rupert Frank for a careful reading of the review and useful
comments.
GdP acknowledges financial support from the European Research Council (ERC Grant
Agreements Nos. 337603 and 321029), the Danish Council for Independent Research
(Sapere Aude), VILLUM FONDEN via the QMATH Centre of Excellence (Grant No.
10059), and the Marie Sklodowska-Curie Action GENIUS (Grant No. 792557).
Author Contributions
The authors equally contributed to conceive the review and prepare the sketch. The main
text was written by GdP. All the authors helped to shape and review the manuscript.
References
[1] Arlen Anderson and Jonathan J Halliwell. Information-theoretic measure of uncertainty due to quantum and thermal fluctuations. Physical Review D, 48(6):2753,
1993.
[2] Shiri Artstein, Keith M. Ball, Franck Barthe, and Assaf Naor. Solution of Shannon’s
problem on the monotonicity of entropy. J. Amer. Math. Soc., 17(4):975–982, 2004.
[3] Koenraad Audenaert, Nilanjana Datta, and Maris Ozols. Entropy power inequalities for qudits. Journal of Mathematical Physics, 57(5):052202, 2016.
[4] Koenraad MR Audenaert. A note on the p → q norms of 2-positive maps. Linear Algebra and its Applications, 430(4):1436–1440, 2009.
[5] S. Barnett and P.M. Radmore. Methods in Theoretical Quantum Optics. Oxford
Series in Optical and Imaging Sciences. Clarendon Press, 2002.
[6] Andrew R Barron. Entropy and the central limit theorem. The Annals of probability, pages 336–342, 1986.
[7] William Beckner. Inequalities in Fourier analysis. Annals of Mathematics, pages 159–182, 1975.
[8] Jonathan Bennett, Anthony Carbery, Michael Christ, and Terence Tao. The
Brascamp-Lieb inequalities: finiteness, structure and extremals. Geom. Funct.
Anal., 17(5):1343–1415, 2008.
[9] Patrick P Bergmans. A simple converse for broadcast channels with additive white
gaussian noise (corresp.). Information Theory, IEEE Transactions on, 20(2):279–
280, 1974.
[10] Nelson M Blachman. The convolution inequality for entropy powers. Information
Theory, IEEE Transactions on, 11(2):267–271, 1965.
[11] Herm Jan Brascamp and Elliott H Lieb. Best constants in Young's inequality, its converse, and its generalization to more than three functions. In Inequalities, pages
417–439. Springer, 2002.
[12] HJ Brascamp, Elliott H Lieb, and JM Luttinger. A general rearrangement inequality for multiple integrals. In Inequalities, pages 391–401. Springer, 2002.
[13] Samuel L Braunstein and Peter Van Loock. Quantum information with continuous
variables. Reviews of Modern Physics, 77(2):513, 2005.
[14] Eric A Carlen, Elliott H Lieb, and Michael Loss. On a quantum entropy power
inequality of audenaert, datta, and ozols. Journal of Mathematical Physics,
57(6):062203, 2016.
[15] Eric A. Carlen and Jan Maas. An Analog of the 2-Wasserstein Metric in Non-Commutative Probability Under Which the Fermionic Fokker-Planck Equation is Gradient Flow for the Entropy. Commun. Math. Phys., 331(3):887–926, November 2014.
[16] Eric A. Carlen and Jan Maas. Gradient flow and entropy inequalities for quantum
Markov semigroups with detailed balance. J. Funct. Anal., 273(5):1810–1869, 2017.
[17] Carlton M Caves and Peter D Drummond. Quantum limits on bosonic communication rates. Reviews of Modern Physics, 66(2):481, 1994.
[18] Michael Christ. Near-extremizers of Young’s Inequality for Rˆd. arXiv:1112.4875
[math], December 2011. arXiv: 1112.4875.
[19] T.M. Cover and J.A. Thomas. Elements of Information Theory. A WileyInterscience publication. Wiley, 2006.
[20] Nilanjana Datta, Yan Pautrat, and Cambyse Rouzé. Contractivity properties of
a quantum diffusion semigroup. Journal of Mathematical Physics, 58(1):012205,
2017.
[21] Giacomo De Palma. Gaussian optimizers and other topics in quantum information.
PhD thesis, Scuola Normale Superiore, Pisa (Italy), September 2016. Supervisor:
Prof. Vittorio Giovannetti; arXiv:1710.09395.
[22] Giacomo De Palma. Uncertainty relations with quantum memory for the wehrl
entropy. arXiv preprint arXiv:1709.04921, 2017.
[23] Giacomo De Palma. The wehrl entropy has gaussian optimizers. Letters in Mathematical Physics, 108(1):97–116, Jan 2018.
[24] Giacomo De Palma and Johannes Borregaard. The ultimate precision of quantum
illumination. arXiv preprint arXiv:1802.02158, 2018.
[25] Giacomo De Palma and Stefan Huber. The conditional entropy power inequality
for quantum additive noise channels. arXiv preprint arXiv:1803.00470, 2018.
[26] Giacomo De Palma, Andrea Mari, and Vittorio Giovannetti. Classical capacity of
gaussian thermal memory channels. Physical Review A, 90(4):042312, 2014.
[27] Giacomo De Palma, Andrea Mari, and Vittorio Giovannetti. A generalization
of the entropy power inequality to bosonic quantum systems. Nature Photonics,
8(12):958–964, 2014.
[28] Giacomo De Palma, Andrea Mari, Seth Lloyd, and Vittorio Giovannetti. Multimode quantum entropy power inequality. Physical Review A, 91(3):032320, 2015.
[29] Giacomo De Palma, Andrea Mari, Seth Lloyd, and Vittorio Giovannetti. Passive
states as optimal inputs for single-jump lossy quantum channels. Physical Review
A, 93(6):062328, 2016.
[30] Giacomo De Palma and Dario Trevisan. The conditional entropy power inequality
for bosonic quantum systems. Communications in Mathematical Physics, pages
1–24, 2018.
[31] Giacomo De Palma, Dario Trevisan, and Vittorio Giovannetti.
One-mode
quantum-limited gaussian channels have gaussian maximizers. arXiv preprint
arXiv:1610.09967, 2016.
[32] Giacomo De Palma, Dario Trevisan, and Vittorio Giovannetti. Passive states optimize the output of bosonic gaussian quantum channels. IEEE Transactions on
Information Theory, 62(5):2895–2906, May 2016.
[33] Giacomo De Palma, Dario Trevisan, and Vittorio Giovannetti. Gaussian states
minimize the output entropy of one-mode quantum gaussian channels. Phys. Rev.
Lett., 118:160503, Apr 2017.
[34] Giacomo De Palma, Dario Trevisan, and Vittorio Giovannetti. Gaussian states
minimize the output entropy of the one-mode quantum attenuator. IEEE Transactions on Information Theory, 63(1):728–737, 2017.
[35] Giacomo De Palma, Dario Trevisan, and Vittorio Giovannetti. Multimode gaussian
optimizers for the wehrl entropy and quantum gaussian channels. arXiv preprint
arXiv:1705.00499, 2017.
[36] Amir Dembo, Thomas M Cover, and Joy Thomas. Information theoretic inequalities. Information Theory, IEEE Transactions on, 37(6):1501–1518, 1991.
[37] Max Fathi, Emanuel Indrei, and Michel Ledoux. Quantitative logarithmic Sobolev
inequalities and stability estimates. Discrete Contin. Dyn. Syst., 36(12):6835–6853,
2016.
[38] Alessandro Ferraro, Stefano Olivares, and Matteo GA Paris. Gaussian states in
continuous variable quantum information. arXiv preprint quant-ph/0503237, 2005.
[39] A. Figalli, F. Maggi, and A. Pratelli. A mass transportation approach to quantitative isoperimetric inequalities. Invent. Math., 182(1):167–211, 2010.
[40] A. Figalli, F. Maggi, and A. Pratelli. Sharp stability theorems for the anisotropic
Sobolev and log-Sobolev inequalities on functions of bounded variation. Advances
in Mathematics, 242:80–101, August 2013.
[41] John Fournier. Sharpness in Young's inequality for convolution. Pacific Journal of
Mathematics, 72(2):383–397, 1977.
[42] Rupert L. Frank and Elliott H. Lieb. Norms of quantum gaussian multi-mode
channels. Journal of Mathematical Physics, 58(6):062204, 2017.
[43] Raul Garcia-Patron, Carlos Navarrete-Benlloch, Seth Lloyd, Jeffrey H Shapiro, and
Nicolas J Cerf. Majorization theory approach to the gaussian channel minimum
entropy conjecture. Physical Review Letters, 108(11):110505, 2012.
[44] Raúl Garcı́a-Patrón, Carlos Navarrete-Benlloch, Seth Lloyd, Jeffrey H Shapiro, and
Nicolas J Cerf. The holy grail of quantum optical communication. AIP Conference
Proceedings, 1633(1):109–112, 2014.
[45] R Gardner. The brunn-minkowski inequality. Bulletin of the American Mathematical Society, 39(3):355–405, 2002.
[46] V Giovannetti, Raúl Garcı́a-Patrón, NJ Cerf, and AS Holevo. Ultimate classical
communication rates of quantum optical channels. Nature Photonics, 8(10):796–
800, 2014.
[47] V Giovannetti, Alexander Semenovich Holevo, and A Mari. Majorization and additivity for multimode bosonic gaussian channels. Theoretical and Mathematical
Physics, 182(2):284–293, 2015.
[48] V Giovannetti, AS Holevo, and Raúl Garcı́a-Patrón. A solution of gaussian optimizer conjecture for quantum channels. Communications in Mathematical Physics,
334(3):1553–1571, 2015.
[49] Vittorio Giovannetti, Saikat Guha, Seth Lloyd, Lorenzo Maccone, and Jeffrey H
Shapiro. Minimum output entropy of bosonic channels: a conjecture. Physical
Review A, 70(3):032315, 2004.
[50] Vittorio Giovannetti, Saikat Guha, Seth Lloyd, Lorenzo Maccone, Jeffrey H
Shapiro, and Brent J Yen. Minimum bosonic channel output entropies. AIP Conference Proceedings, 734(1):21–24, 2004.
[51] Vittorio Giovannetti, Alexander S Holevo, Seth Lloyd, and Lorenzo Maccone. Generalized minimal output entropy conjecture for one-mode gaussian channels: definitions and some exact results. Journal of Physics A: Mathematical and Theoretical,
43(41):415305, 2010.
[52] J Gorecki and W Pusz. Passive states for finite classical systems. Letters in
Mathematical Physics, 4(6):433–443, 1980.
[53] Leonard Gross. Logarithmic sobolev inequalities and contractivity properties of
semigroups. In Dirichlet forms, pages 54–88. Springer, 1993.
[54] Saikat Guha. Multiple-user quantum information theory for optical communication
channels. Technical report, DTIC Document, 2008.
[55] Saikat Guha, Baris Erkmen, and Jeffrey H Shapiro. The entropy photon-number inequality and its consequences. In Information Theory and Applications Workshop,
2008, pages 128–130. IEEE, 2008.
[56] Saikat Guha and Jeffrey H Shapiro. Classical information capacity of the bosonic
broadcast channel. In Information Theory, 2007. ISIT 2007. IEEE International
Symposium on, pages 1896–1900. IEEE, 2007.
[57] Saikat Guha, Jeffrey H Shapiro, and Baris Erkmen. Capacity of the bosonic wiretap
channel and the entropy photon-number inequality. In Information Theory, 2008.
ISIT 2008. IEEE International Symposium on, pages 91–95. IEEE, 2008.
[58] Saikat Guha, Jeffrey H Shapiro, and Baris I Erkmen. Classical capacity of bosonic
broadcast communication and a minimum output entropy conjecture. Physical
Review A, 76(3):032303, 2007.
[59] Saikat Guha, Jeffrey H Shapiro, and Raúl Garcı́a-Patrón Sánchez. Thinning, photonic beamsplitting, and a general discrete entropy power inequality. In Information Theory (ISIT), 2016 IEEE International Symposium on, pages 705–709.
IEEE, 2016.
[60] Peter Harremoës, Oliver Johnson, and Ioannis Kontoyiannis. Thinning and the law
of small numbers. In Information Theory, 2007. ISIT 2007. IEEE International
Symposium on, pages 1491–1495. IEEE, 2007.
[61] Peter Harremoës, Oliver Johnson, and Ioannis Kontoyiannis. Thinning, entropy,
and the law of thin numbers. Information Theory, IEEE Transactions on,
56(9):4228–4244, 2010.
[62] Matthew B Hastings. Superadditivity of communication capacity using entangled
inputs. Nature Physics, 5(4):255–257, 2009.
[63] Erika Hausenblas and Jan Seidler. A note on maximal inequality for stochastic
convolutions. Czechoslovak Math. J., 51(126)(4):785–790, 2001.
[64] M. Hayashi. Quantum Information Theory: Mathematical Foundation. Graduate
Texts in Physics. Springer Berlin Heidelberg, 2016.
[65] Patrick Hayden, Richard Jozsa, Denes Petz, and Andreas Winter. Structure of
states which satisfy strong subadditivity of quantum entropy with equality. Communications in mathematical physics, 246(2):359–374, 2004.
[66] Christoph Hirche and David Reeb. Bounds on information combining with quantum
side information. arXiv preprint arXiv:1706.09752, 2017.
[67] Alexander S Holevo. Multiplicativity of p-norms of completely positive maps and
the additivity problem in quantum information theory. Russian Mathematical Surveys, 61(2):301, 2006.
[68] Alexander S Holevo. On the proof of the majorization theorem for quantum gaussian channels. Russian Mathematical Surveys, 71(3):585, 2016.
[69] Alexander S Holevo and Reinhard F Werner. Evaluating capacities of bosonic
gaussian channels. Physical Review A, 63(3):032312, 2001.
[70] Alexander Semenovich Holevo. Quantum Systems, Channels, Information: A Mathematical Introduction. De Gruyter Studies in Mathematical Physics. De Gruyter,
2013.
[71] Alexander Semenovich Holevo. Gaussian optimizers and the additivity problem in
quantum information theory. Russian Mathematical Surveys, 70(2):331, 2015.
[72] AS Holevo. On the constrained classical capacity of infinite-dimensional covariant
quantum channels. Journal of Mathematical Physics, 57(1):015203, 2016.
[73] AS Holevo. On quantum gaussian optimizers conjecture in the case q=p. arXiv
preprint arXiv:1707.02117, 2017.
[74] Stefan Huber, Robert König, and Anna Vershynina. Geometric inequalities from
phase space translations. Journal of Mathematical Physics, 58(1):012206, 2017.
[75] O. Johnson and S. Guha. A de bruijn identity for discrete random variables. In
2017 IEEE International Symposium on Information Theory (ISIT), pages 898–
902, June 2017.
[76] Oliver Johnson and Yaming Yu. Monotonicity, thinning, and discrete versions
of the entropy power inequality. Information Theory, IEEE Transactions on,
56(11):5387–5395, 2010.
[77] Robert König. The conditional entropy power inequality for gaussian quantum
states. Journal of Mathematical Physics, 56(2):022201, 2015.
[78] Robert König and Graeme Smith. Limits on classical communication from quantum
entropy power inequalities. Nature Photonics, 7(2):142–146, 2013.
[79] Robert König and Graeme Smith. The entropy power inequality for quantum
systems. IEEE Transactions on Information Theory, 60(3):1536–1548, 2014.
[80] Robert König and Graeme Smith. Corrections to “the entropy power inequality for
quantum systems”. IEEE Transactions on Information Theory, 62(7):4358–4359,
2016.
[81] Alberto Lanconelli and Aurel I. Stan. A Hölder inequality for norms of Poissonian
Wick products. Infin. Dimens. Anal. Quantum Probab. Relat. Top., 16(3):1350022,
39, 2013.
[82] A Lenard. Thermodynamical proof of the gibbs formula for elementary quantum
systems. Journal of Statistical Physics, 19(6):575–586, 1978.
[83] Sik K Leung-Yan-Cheong and Martin E Hellman. The gaussian wire-tap channel.
Information Theory, IEEE Transactions on, 24(4):451–456, 1978.
[84] E.H. Lieb and M. Loss. Analysis. Crm Proceedings & Lecture Notes. American
Mathematical Society, 2001.
[85] Elliott H Lieb. Proof of an entropy conjecture of wehrl. Communications in Mathematical Physics, 62(1):35–41, 1978.
[86] Elliott H Lieb. Gaussian kernels have only gaussian maximizers. Inventiones mathematicae, 102(1):179–208, 1990.
[87] Elliott H Lieb and Jan Philip Solovej. Proof of an entropy conjecture for bloch
coherent spin states and its generalizations. Acta Mathematica, 212(2):379–398,
2014.
[88] Andrea Mari, Vittorio Giovannetti, and Alexander S Holevo. Quantum state majorization at the output of bosonic gaussian channels. Nature communications, 5,
2014.
[89] A.W. Marshall, I. Olkin, and B. Arnold. Inequalities: Theory of Majorization and
Its Applications. Springer Series in Statistics. Springer New York, 2010.
[90] M.A. Nielsen and I.L. Chuang. Quantum Computation and Quantum Information:
10th Anniversary Edition. Cambridge University Press, 2010.
[91] W Pusz and SL Woronowicz. Passive states and kms states for general quantum
systems. Communications in Mathematical Physics, 58(3):273–290, 1978.
[92] Haoyu Qi and Mark M. Wilde. Capacities of quantum amplifier channels. Physical
Review A, 95:012339, Jan 2017.
[93] Haoyu Qi, Mark M Wilde, and Saikat Guha. On the minimum output entropy of
single-mode phase-insensitive gaussian channels. arXiv preprint arXiv:1607.05262,
2017.
[94] Alfréd Rényi. A characterization of poisson processes. Magyar Tud. Akad. Mat.
Kutató Int. Közl, 1:519–527, 1956.
[95] Cambyse Rouzé and Nilanjana Datta. Concentration of quantum states from quantum functional and talagrand inequalities. arXiv preprint arXiv:1704.02400, 2017.
[96] Cambyse Rouzé and Nilanjana Datta. Relating relative entropy, optimal transport
and fisher information: a quantum hwi inequality. arXiv preprint arXiv:1709.07437,
2017.
[97] A. Serafini. Quantum Continuous Variables: A Primer of Theoretical Methods.
CRC Press, 2017.
[98] Claude Elwood Shannon. A mathematical theory of communication. ACM SIGMOBILE Mobile Computing and Communications Review, 5(1):3–55, 2001.
[99] B. Simon. Basic Complex Analysis: A Comprehensive Course in Analysis, Part
2A:. A comprehensive course in analysis. American Mathematical Society, 2015.
[100] AJ Stam. Some inequalities satisfied by the quantities of information of fisher and
shannon. Information and Control, 2(2):101–112, 1959.
[101] M. Tomamichel. Quantum Information Processing with Finite Resources: Mathematical Foundations. SpringerBriefs in Mathematical Physics. Springer International Publishing, 2015.
[102] Giuseppe Toscani. Heat equation and convolution inequalities. Milan Journal of
Mathematics, 82(2):183–212, 2014.
[103] Cédric Villani. Optimal transport, volume 338 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. SpringerVerlag, Berlin, 2009.
[104] Christian Weedbrook, Stefano Pirandola, Raul Garcia-Patron, Nicolas J Cerf, Timothy C Ralph, Jeffrey H Shapiro, and Seth Lloyd. Gaussian quantum information.
Reviews of Modern Physics, 84(2):621, 2012.
[105] Alfred Wehrl. On the relation between classical and quantum-mechanical entropy.
Reports on Mathematical Physics, 16(3):353–358, 1979.
[106] Mark M Wilde, Patrick Hayden, and Saikat Guha. Information trade-offs for optical
quantum communication. Physical Review Letters, 108(14):140501, 2012.
[107] Mark M Wilde, Patrick Hayden, and Saikat Guha. Quantum trade-off coding for
bosonic communication. Physical Review A, 86(6):062306, 2012.
[108] M.M. Wilde. Quantum Information Theory. Cambridge University Press, 2017.
[109] Yaming Yu. Monotonic convergence in an information-theoretic law of small numbers. Information Theory, IEEE Transactions on, 55(12):5412–5422, 2009.
[110] Yaming Yu and Oliver Johnson. Concavity of entropy under thinning. In Information Theory, 2009. ISIT 2009. IEEE International Symposium on, pages 144–148.
IEEE, 2009.
DRAFT
1
Partial Diffusion Recursive Least-Squares for
Distributed Estimation under Noisy Links Condition
arXiv:1607.05539v1 [cs.DC] 19 Jul 2016
Vahid Vahidpour, Amir Rastegarnia, Azam Khalili, and Saeid Sanei, Senior Member, IEEE
Abstract—Partial diffusion-based recursive least squares
(PDRLS) is an effective method for reducing computational load
and power consumption in adaptive network implementation.
In this method, each node shares a part of its intermediate
estimate vector with its neighbors at each iteration. PDRLS
algorithm reduces the internode communications relative to the
full-diffusion RLS algorithm. This selection of estimate entries
becomes more appealing when the information fuse over noisy
links. In this paper, we study the steady-state performance of
PDRLS algorithm in presence of noisy links and investigate
its convergence in both mean and mean-square senses. We
also derive a theoretical expression for its steady-state mean-square deviation (MSD). The simulation results illustrate that the stability conditions for PDRLS under noisy links are not sufficient to guarantee its convergence. Strictly speaking, considering the non-ideal link condition adds a new complexity to the estimation problem, for which the PDRLS algorithm becomes unstable and does not converge for any value of the forgetting factor.
Index Terms—Adaptive networks, diffusion adaptation, distributed estimation, energy conservation, recursive least-square,
partial diffusion, noisy links.
I. INTRODUCTION

We study the problem of distributed estimation over adaptive networks, in which a set of agents interact with each other to solve distributed estimation and
inference problems in a collaborative manner. There exist several useful techniques for solving such optimization problems
in distributed manner that enable adaptation and learning in
real-time. They include incremental [1], [2], [3], consensus
[4], [5], [6], and diffusion [7], [8], [9], [10], [11], [12], [13],
[14], [15] strategies. The diffusion strategies are effective
methods for performing distributed estimation over adaptive
networks. In the original diffusion strategy, all agents in
the network generate their individual intermediate estimates
using the data accessible to them locally. Then, the nodes
exchange their intermediate estimates to all their immediate
neighbors. However, the most expensive part of realizing a
cooperative task over a wireless ad hoc network is usually
the data communications through radio links. Therefore, it
is of practical importance to reduce the amount of internode
communications in diffusion strategies while maintaining the
benefits of cooperation.
To this end, various techniques have been proposed, such as
choosing a subset of the nodes [16], [17], [18], [19], selecting
V. Vahidpour, A. Rastegarnia, and A. Khalili and are with the Department of Electrical Engineering, Malayer University, Malayer 6571995863, Iran (email: [email protected]; [email protected]; [email protected]).
S. Sanei is with the Department of Computer Science, University of Surrey,
Surrey GU2 7XH, UK (email: [email protected]).
a subset of the entries of the estimates [20], [21], and reducing
the dimension of the estimates [22], [23], [24]. Among these
methods, we focus on the second method in which a subset
of the entries are selected in communications.
In all mentioned works, the data are assumed to be exchanged among neighboring nodes without distortion. However, due to the link noise, this assumption may not be true
in practice. Some useful results out of studying the effect of
link noise during the exchange of weight estimates, already
appear for traditional diffusion algorithm [25], [26], [27], [28],
for incremental case [29], [30], [31], and for consensus-based
algorithms [32], [33].
In the partial-diffusion strategies proposed in [20], [21], the links between nodes are assumed to be ideal. However, the performance of these strategies is strongly affected under noisy information exchange, because the unavailable entries are replaced by their corresponding ones in each node's own intermediate estimate vector.
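To make the entry-selection idea concrete, here is a minimal sketch (the round-robin scheme and all names are illustrative assumptions, not the exact schemes of [20]): a diagonal selection matrix picks the L transmitted entries, and the receiver substitutes its own entries for the rest.

```python
import numpy as np

rng = np.random.default_rng(0)
M, L = 6, 2     # estimate length and number of entries sent per iteration

def selection_matrix(i):
    # sequential (round-robin) scheme: entries {iL, ..., iL+L-1} mod M
    # are selected for transmission at iteration i
    idx = [(i * L + j) % M for j in range(L)]
    K = np.zeros((M, M))
    K[idx, idx] = 1.0
    return K

psi_l = rng.standard_normal(M)    # neighbor's intermediate estimate
psi_k = rng.standard_normal(M)    # receiver's own intermediate estimate
K = selection_matrix(i=1)
# receiver fills the entries that were not transmitted with its own
received = K @ psi_l + (np.eye(M) - K) @ psi_k
print(np.count_nonzero(received - psi_k))   # only L entries differ
```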
The main contributions of this paper can be summarized as
follows:
(i) Focusing on [21], we consider the fact that the weight
estimates exchanged among the nodes can be subject to
quantization errors and additive noise over communication links. We also consider two different schemes [20]
for selecting the weight vector entries for transmission
at each iteration. We allow for noisy exchange just
during the two combination steps. It should be noted that
since our objective is to minimize the internode communication, the nodes only exchange their intermediate
estimates with their neighbors;
(ii) Using the energy conservation argument [34], we analyze the stability of the algorithms in the mean and mean-square senses under certain statistical conditions;
(iii) Stability conditions for PDRLS are derived under noisy
links, and it is demonstrated that the steady-state performance of the new algorithm is not as good as that of the
PDRLS algorithm with ideal links;
(iv) We derive a variance relation which contains additional terms, in comparison to the original PDRLS algorithm, that represent the effects of noisy links. We evaluate these moments and derive closed-form expressions for the mean-square deviation (MSD) to explain the steady-state performance.
(v) It is demonstrated through different examples that the
stability conditions for PDRLS under noisy links are
not sufficient to guarantee its convergence. Considering
noisy links adds a new complexity to estimation problem
for which the PDRLS algorithm suffers a significant
degradation in steady-state performance for any value of
forgetting factor.
The remainder of this paper is organized as follows: In
Section II, we formulate the PDRLS algorithm under imperfect information exchange. The performance analysis is presented in Section III, where the methods for the entry selection matrix are also examined. We provide simulation results in Section IV
and draw the conclusions in Section V.
A. Notation
We use lowercase boldface letters to denote vectors, uppercase boldface letters for matrices, and lowercase plain letters for scalar variables. We also use (·)* to denote conjugate transposition, Tr(·) for the trace of its matrix argument, ⊗ for the Kronecker product, and vec(·) for a vector formed by stacking the columns of its matrix argument. We further use diag{· · ·} to denote a (block) diagonal matrix formed from its argument, and col{· · ·} to denote a column vector formed by stacking its arguments on top of each other. All vectors in our treatment are column vectors, with the exception of the regression vectors, u_{k,i}.
II. A LGORITHM D ESCRIPTION
A. Diffusion Recursive Least-Squares Algorithm with Noisy
Links
To begin with, consider a set of N nodes, spatially distributed over some region, that aim to identify an unknown parameter vector, w^o ∈ C^{M×1}, in a collective manner. At every time instant i, node k collects a measurement d_k(i) that is assumed to be related to the unknown vector by

d_k(i) = u_{k,i} w^o + v_k(i)    (1)

where u_{k,i} is a row vector of length M (the regressor of node k at time i), v_k(i) represents additive noise with zero mean and variance σ²_{v,k}, and the unknown vector w^o denotes the parameter of interest.

Collecting the measurements and noise samples of node k up to time i into vector quantities as follows:

y_{k,i} = col{d_k(i), . . . , d_k(0)}    (2)
H_{k,i} = col{u_k(i), . . . , u_k(0)}    (3)
v_{k,i} = col{v_k(i), . . . , v_k(0)}    (4)

the objective is to estimate w^o by solving the following weighted least-squares problem

min_ψ ‖y_{k,i} − H_{k,i} ψ‖²_{Λ_i}    (5)

The solution ψ_{k,i} is given by [35]

ψ_{k,i} = (H*_{k,i} Λ_i H_{k,i})^{−1} H*_{k,i} Λ_i y_{k,i}    (6)

where Λ_i ≥ 0 denotes a Hermitian weighting matrix. A common choice for Λ_i is

Λ_i = diag{1, λ, . . . , λ^i}    (7)

where 0 ≪ λ ≤ 1 is a forgetting factor whose value is generally very close to one.

It can be verified from the properties of recursive least-squares solutions [35], [34] that

H*_{k,i} Λ_i H_{k,i} = λ H*_{k,i−1} Λ_{i−1} H_{k,i−1} + u*_{k,i} u_{k,i}    (8)
H*_{k,i} Λ_i y_{k,i} = λ H*_{k,i−1} Λ_{i−1} y_{k,i−1} + u*_{k,i} d_k(i)    (9)

Let P_{k,i} = (H*_{k,i} Λ_i H_{k,i})^{−1}. Then, applying the so-called matrix inversion formula [36] to (8), the following recursions for calculating ψ_{k,i} are obtained:

P_{k,i} = λ^{−1} ( P_{k,i−1} − (λ^{−1} P_{k,i−1} u*_{k,i} u_{k,i} P_{k,i−1}) / (1 + λ^{−1} u_{k,i} P_{k,i−1} u*_{k,i}) )    (10)

ψ_{k,i} = ψ_{k,i−1} + P_{k,i} u*_{k,i} (d_k(i) − u_{k,i} ψ_{k,i−1})    (11)

Since w_{k,i} is a better estimate than ψ_{k,i}, it is beneficial to replace ψ_{k,i−1} with w_{k,i−1} in (11):

ψ_{k,i} = w_{k,i−1} + P_{k,i} u*_{k,i} (d_k(i) − u_{k,i} w_{k,i−1})    (12)

Hence, the local estimates are diffused outside of each node's own neighborhood. Then, for every time instant i, each node k performs an adaptation step followed by a combination step as follows:

1) Adaptation: Each node computes an intermediate estimate of w^o by (10) and (12). The resulting pre-estimates are denoted ψ_{k,i} as in (12).
2) Combination: The nodes exchange their local pre-estimates with their neighbors and perform a weighted average as in (13) to obtain the estimate w_{k,i} (via a so-called spatial update):

w_{k,i} = Σ_{l=1}^{N} a_{lk} ψ_{l,i}    (13)
The scalar alk is non-negative real coefficient corresponding
to the (l, k)entries of N × N combination matrix A = {alk }.
These coefficients are zero whenever node l ∈
/ Nk , where Nk
denotes the neighborhood of node k. This matrix is assumed
to satisfy the condition:
AT 1 N = 1 N
(14)
where the notation 1 denotes an N × 1 column vector with
all its entries equal to one.
The above algorithm uses only the local input-output data observed by each node in the adaptation phase. In [15], a more general algorithm has been proposed in which each node shares its input-output data as well as its intermediate estimate with its neighbors and updates its intermediate estimate using all available data. This is carried out via a convex combination of the update terms induced by each input-output data pair. For the obvious reason of minimizing communications, here we consider only the above-mentioned diffusion RLS algorithm.
We model the noisy data received by node k from its neighbor l as follows:

ψ_{lk,i} = ψ_{l,i} + v^{(ψ)}_{lk,i}    (15)

where v^{(ψ)}_{lk,i} (M × 1) denotes a vector noise signal. It is a temporally white and spatially independent random process with
DRAFT
zero mean and covariance given by R^{(ψ)}_{v,lk}. The quantities R^{(ψ)}_{v,lk} are all zero if l ∉ N_k or when l = k. It should be noted that the subscript lk indicates that l is the source and k is the sink, i.e., the flow of information is from l to k.
Using the perturbed estimates (15), the combination step in the adaptive strategy becomes

w_{k,i} = Σ_{l∈N_k} a_{lk} ψ_{lk,i}    (16)
B. Partial-Diffusion RLS Algorithm under Noisy Information Exchange

In order to reduce the amount of communication required among the nodes, we utilize the partial-diffusion strategy proposed in [20], transmitting L out of the M entries of the intermediate estimates at each time instant, where the integer L is fixed and pre-specified. The selection of the to-be-transmitted entries at node k and time instant i can be characterized by an M × M diagonal entry-selection matrix, denoted by K_{k,i}, that has L ones and M − L zeros on its diagonal. The positions of the ones specify the selected entries. Multiplying an intermediate estimate vector by this matrix replaces its non-selected entries with zeros.
Rewriting (16) as

w_{k,i} = a_{kk} ψ_{k,i} + Σ_{l∈N_k\{k}} a_{lk} [K_{l,i} ψ_{lk,i} + (I_M − K_{l,i}) ψ_{lk,i}]    (17)
we see that each node requires knowledge of all entries of its neighbors' intermediate estimate vectors for combination. However, when the intermediate estimates are partially transmitted (0 < L < M), the nodes have no access to the non-communicated entries. To resolve this ambiguity, we let the nodes use the entries of their own intermediate estimates in lieu of the ones from the neighbors that have not been communicated, i.e., at node k, we substitute

(I_M − K_{l,i}) ψ_{k,i}, ∀l ∈ N_k \ {k}    (18)

for

(I_M − K_{l,i}) ψ_{lk,i}, ∀l ∈ N_k \ {k}    (19)
Based on this approach, we formulate a partial-diffusion recursive least-squares (PDRLS) algorithm under imperfect information exchange using (10) and (12) for adaptation and equation (20) for combination.
C. Entry Selection Methods
In order to select L out of M entries of the intermediate
estimates of each node at each iteration, the processes we
utilized are analogous to the selection processes in stochastic
and sequential partial-update schemes [37], [38], [39], [40].
In other words, we use the same schemes as introduced in
[20]. Here, we just review these methods named sequential
partial-diffusion and stochastic partial-diffusion.
In sequential partial-diffusion the entry-selection matrices, K_{k,i}, are diagonal matrices

K_{k,i} = diag{κ_{1,i}, …, κ_{M,i}},   κ_{ℓ,i} = 1 if ℓ ∈ J_{(i mod B̄)+1} and κ_{ℓ,i} = 0 otherwise    (22)

with B̄ = ⌈M/L⌉. The number of selected entries at each iteration is limited by L. The coefficient subsets J_κ are not unique as long as they obey the following requirements [37]:
1) The cardinality of J_κ is between 1 and L;
2) ∪_{κ=1}^{B̄} J_κ = S, where S = {1, 2, …, M};
3) J_κ ∩ J_η = ∅ for all κ, η ∈ {1, …, B̄} with κ ≠ η.
The description of the entry-selection matrices, K_{k,i}, in stochastic partial-diffusion is similar to that of the sequential one. The only difference is that, at a given iteration, i, in the sequential case one of the sets J_κ, κ = 1, …, B̄, is chosen in advance, whereas in the stochastic case one of the sets J_κ is sampled at random from {J_1, J_2, …, J_B̄}. One might ask why these methods are used to organize the selection matrices. The reason is that the nodes need to know which entries of their neighbors' intermediate estimates have been transmitted at each iteration; these schemes bypass the need for addressing.
Remark. The probability of transmission for all the entries at
each node is equal and expressed as
ρ = L/M    (23)
Moreover, the entry selection matrices, Kk,i , do not depend
on any data/parameter other than L and M .
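The two selection schemes can be sketched as follows. This is a hypothetical NumPy helper of our own design; the subset construction below is just one of the many partitions satisfying requirements 1)-3), and indices are 0-based.

```python
import numpy as np

def make_subsets(M, L):
    # One valid partition of S = {0, ..., M-1} into ceil(M/L) disjoint subsets.
    return [list(range(s, min(s + L, M))) for s in range(0, M, L)]

def selection_matrix(M, subset):
    # Diagonal entry-selection matrix with ones at the selected entries.
    K = np.zeros((M, M))
    K[subset, subset] = 1.0
    return K

def sequential_K(M, L, i):
    # Sequential partial-diffusion: cycle through the subsets, as in eq. (22).
    J = make_subsets(M, L)
    return selection_matrix(M, J[i % len(J)])

def stochastic_K(M, L, rng):
    # Stochastic partial-diffusion: draw one subset uniformly at random.
    J = make_subsets(M, L)
    return selection_matrix(M, J[rng.integers(len(J))])
```

Cycling through all B̄ subsets selects every entry exactly once, which is what makes the transmission probability of each entry equal to ρ = L/M.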
The combination step under noisy and partial information exchange is then

w_{k,i} = a_{kk} ψ_{k,i} + Σ_{l∈N_k\{k}} a_{lk} [K_{l,i} ψ_{lk,i} + (I_M − K_{l,i}) ψ_{k,i}] + v^{(ψ)}_{k,i}    (20)

where v^{(ψ)}_{k,i} denotes the aggregate M × 1 zero-mean noise signal, introduced as follows:

v^{(ψ)}_{k,i} = Σ_{l∈N_k\{k}} a_{lk} K_{l,i} v^{(ψ)}_{lk,i}    (21)

This noise represents the aggregate effect on node k of all selected exchange noises from the neighbors of node k while exchanging the estimates ψ_{l,i} during the combination step.

III. PERFORMANCE ANALYSIS

In this section, we analyze the performance of the algorithm (10), (12) and (20) and show that it is asymptotically unbiased in the mean and converges in the mean-square error sense under some simplifying assumptions. We also provide an expression for the mean-square deviation (MSD). We presume that the input data is stochastic in nature and utilize the energy-conservation arguments previously applied to LMS-type adaptive distributed algorithms in, e.g., [9], [12], [21]. Here, we study the performance of the PDRLS algorithm considering both sequential and stochastic partial-diffusion schemes under noisy information exchange.
A. Assumptions

Several simplifying assumptions have traditionally been adopted in the literature to gain insight into the performance of such adaptive algorithms. To proceed with the analysis, we therefore introduce assumptions similar to those used before in the literature, and use them to derive useful performance measures.

Assumptions. In order to make the analysis tractable, we introduce the following assumptions on the statistical properties of the measurement data and noise signals.
1) The regression data u_{k,i} are temporally white and spatially independent random variables with zero mean and covariance matrix R_{u,k} ≜ E[u*_{k,i} u_{k,i}] ≥ 0.
2) The noise signals v_k(i) and v^{(ψ)}_{k,i} are temporally white and spatially independent random variables with zero mean and (co)variance σ²_{v,k} and R^{(ψ)}_{v,k}, respectively. In addition, the quantities R^{(ψ)}_{v,lk} are all zero if l ∉ N_k or when l = k.
3) The regression data {u_{m,i₁}}, the model noise signals v_n(i₂), and the link noise signals v^{(ψ)}_{l₁k₁,j₁} are mutually independent random variables for all indexes {i₁, i₂, j₁, k₁, l₁, m, n}.
4) In order to make the performance analysis tractable, we introduce the following ergodicity assumption: for sufficiently large i, at any node k, we can replace P_{k,i} and P^{−1}_{k,i} with their expected values, E[P_{k,i}] and E[P^{−1}_{k,i}], respectively.
5) For sufficiently large i, at any node k, we have E[P_{k,i}] = (E[P^{−1}_{k,i}])^{−1}.

Assumption 4 is common in the performance analysis of RLS-type algorithms (see, for example, [34], pp. 318–319) and results from assuming that the regressor vector of each node is an ergodic process, so that the time average of the node's rank-one instantaneous regressor covariance matrix, u*_{k,i} u_{k,i}, over a sufficiently long time range can be replaced by the ensemble average (expected value). Assumption 5 is a good approximation when λ is close to unity and the condition number of R_{u,k} is not very large [9], [15], [21], [34].

B. Network Update Equation

Our objective is to examine whether, and how fast, the weight estimates w_{k,i} from the distributed implementation (10), (12), and (20) converge towards the solution w^o of (5). To do so, we introduce the M × 1 error vectors

ψ̃_{k,i} ≜ w^o − ψ_{k,i}    (24)
w̃_{k,i} ≜ w^o − w_{k,i}    (25)

Furthermore, denote the network intermediate estimate-error and estimate-error vectors by

ψ̃_i ≜ col{ψ̃_{1,i}, …, ψ̃_{N,i}}    (26)
w̃_i ≜ col{w̃_{1,i}, …, w̃_{N,i}}    (27)

Also, collect the noise signal (21) and its covariances from across the network into an N × 1 block vector and an N × N block diagonal matrix as follows:

v^{(ψ)}_i ≜ col{v^{(ψ)}_{1,i}, …, v^{(ψ)}_{N,i}}    (28)
R^{(ψ)}_v ≜ diag{R^{(ψ)}_{v,1}, …, R^{(ψ)}_{v,N}}    (29)

Using the data model (1) and subtracting w^o from both sides of relation (12), we obtain

ψ̃_{k,i} = w̃_{k,i−1} − P_{k,i} u*_{k,i} [u_{k,i} w̃_{k,i−1} + v_k(i)]    (30)

which can be written as

ψ̃_{k,i} = P_{k,i} P^{−1}_{k,i} w̃_{k,i−1} − P_{k,i} u*_{k,i} u_{k,i} w̃_{k,i−1} − P_{k,i} u*_{k,i} v_k(i)    (31)

Rewriting (8) as

P^{−1}_{k,i} = λ P^{−1}_{k,i−1} + u*_{k,i} u_{k,i}    (32)

and substituting (32) into the first term on the RHS of (31) yields

ψ̃_{k,i} = λ P_{k,i} P^{−1}_{k,i−1} w̃_{k,i−1} − P_{k,i} u*_{k,i} v_k(i)    (33)

We are interested in the steady-state behavior of the matrix P_{k,i}. As i → ∞, with 0 ≪ λ < 1, the steady-state mean value of P^{−1}_{k,i} is given by

lim_{i→∞} E[P^{−1}_{k,i}] = lim_{i→∞} E[Σ_{j=1}^{i} λ^{i−j} u*_{k,j} u_{k,j}]
                          = lim_{i→∞} Σ_{j=1}^{i} λ^{i−j} E[u*_{k,j} u_{k,j}]
                          = lim_{i→∞} Σ_{j=1}^{i} λ^{i−j} R_{u,k}
                          = (1/(1 − λ)) R_{u,k}    (34)

Using Assumptions 4) and 5) together with (34), we have, for large enough i,

P_{k,i} P^{−1}_{k,i−1} ≈ E[P_{k,i}] E[P^{−1}_{k,i−1}] ≈ (E[P^{−1}_{k,i}])^{−1} E[P^{−1}_{k,i−1}] ≈ I_M    (35)

and

P_{k,i} ≈ E[P_{k,i}] ≈ (E[P^{−1}_{k,i}])^{−1} ≈ (1 − λ) R^{−1}_{u,k}    (36)

Consequently, for sufficiently large i, (33) can be approximated by

ψ̃_{k,i} = λ w̃_{k,i−1} − (1 − λ) R^{−1}_{u,k} u*_{k,i} v_k(i)    (37)
On the other hand, subtracting both sides of (20) from w^o gives

w̃_{k,i} = (I_M − Σ_{l∈N_k\{k}} a_{lk} K_{l,i}) ψ̃_{k,i} + Σ_{l∈N_k\{k}} a_{lk} K_{l,i} ψ̃_{l,i} − v^{(ψ)}_{k,i}    (38)
To describe these relations more compactly, we collect the information from across the network into block vectors and matrices. Using (26)-(29), this leads to

ψ̃_i = λ w̃_{i−1} − Γ s_i    (39)
w̃_i = B_i ψ̃_i − v^{(ψ)}_i    (40)

where

Γ ≜ (1 − λ) diag{R^{−1}_{u,1}, …, R^{−1}_{u,N}}    (41)
s_i ≜ col{u*_{1,i} v_1(i), …, u*_{N,i} v_N(i)}    (42)

and B_i is the N × N block matrix

B_i = [ B_{1,1,i} ⋯ B_{1,N,i} ; ⋮ ⋱ ⋮ ; B_{N,1,i} ⋯ B_{N,N,i} ]    (43)

with blocks

B_{p,q,i} = { I_M − Σ_{l∈N_p\{p}} a_{lp} K_{l,i}  if p = q;  a_{qp} K_{q,i}  if q ∈ N_p \ {p};  O_M  otherwise }    (44)

where O_M is the M × M zero matrix. The network weight error vector, w̃_i, thus evolves according to the following stochastic recursion:

w̃_i = λ B_i w̃_{i−1} − B_i Γ s_i − v^{(ψ)}_i    (45)
C. Convergence in Mean

Taking expectations of both sides of (45) and invoking the Remark and Assumptions 1 and 2, we find that the mean error vector evolves according to the following recursion:

E[w̃_i] = λ Q E[w̃_{i−1}]    (46)

where

Q = E[B_i]    (47)
Like [20], Q can be obtained for both stochastic and sequential partial-diffusion using the definition of B_i. What matters here is the value of each entry of Q after applying the expectation operator, namely

E[B_{p,q,i}] = { (1 − ρ + ρ a_{pp}) I_M  if p = q;  ρ a_{qp} I_M  if q ∈ N_p \ {p};  O_M  otherwise }    (48)

All the entries of Q are real and non-negative, and all the (block) rows of Q add up to unity. This property holds for both stochastic and sequential partial-diffusion schemes and for any value of L [20]. This implies that Q is a right-stochastic matrix. Since the eigenvalue of a stochastic matrix with the largest absolute value (spectral radius) is equal to one [36], [41], the spectral radius of the matrix λQ is equal to λ. Thus, for 0 ≪ λ < 1, every element of E[w̃_i] converges to zero as i → ∞, and the estimator is asymptotically unbiased and convergent in the mean. Note that this is not, in fact, a necessary and sufficient condition for convergence of E[w̃_i], as (46) has been obtained under an independence assumption which is not true in general.

D. Mean-Square Stability

We now study the mean-square performance of PDRLS under imperfect information exchange and derive closed-form expressions that characterize the network performance. To do so, we resort to the energy-conservation analysis of [21], [35], [41], [42]. The details are as follows.

Equating the squared weighted Euclidean norms of both sides of (45), applying the expectation operator, and using the Remark and Assumptions 1 and 2 yields the following weighted variance relation:

E[‖w̃_i‖²_Σ] = E[w̃*_{i−1} λ² B^T_i Σ B_i w̃_{i−1}] + E[s*_i Γ B^T_i Σ B_i Γ s_i] + E[v^{*(ψ)}_i Σ v^{(ψ)}_i]    (49)

where Σ is an arbitrary symmetric nonnegative-definite matrix. Let us evaluate each of the expectations on the right-hand side. The first expectation is given by

E[w̃*_{i−1} λ² B^T_i Σ B_i w̃_{i−1}] = E[E[w̃*_{i−1} λ² B^T_i Σ B_i w̃_{i−1} | w̃_{i−1}]]
                                    = E[w̃*_{i−1} E[λ² B^T_i Σ B_i] w̃_{i−1}]
                                    ≜ E[w̃*_{i−1} Σ' w̃_{i−1}]
                                    = E[‖w̃_{i−1}‖²_{Σ'}]    (50)

where we introduced the nonnegative-definite weighting matrix

Σ' = E[λ² B^T_i Σ B_i]    (51)

Since w̃_{i−1} is independent of Σ', we have

E[‖w̃_{i−1}‖²_{Σ'}] = E[‖w̃_{i−1}‖²_{E[Σ']}]    (52)
It is convenient to introduce the alternative notation ‖x‖²_σ to refer to the weighted square quantity ‖x‖²_Σ, where σ = vec{Σ}. We shall use these two notations interchangeably. Using the following equalities for arbitrary matrices {U, W, Σ, Z} of compatible dimensions:

(U ⊗ W)(Σ ⊗ Z) = UΣ ⊗ WZ    (53)
vec{UΣW} = (W^T ⊗ U) vec{Σ}    (54)
Tr(ΣW) = [vec{W^T}]^T vec{Σ}    (55)

we have

σ' = vec{Σ'} = F σ    (56)

where

F = λ² Φ    (57)
Φ = E[B^T_i ⊗ B^T_i]    (58)

The derivation of (58) in [20] extends to the present setting, so Φ can be established in general form. What matters most here is the probability of transmission of the entries at each node, which is needed to evaluate the expectations of the form E[κ_{t,p,i} K_{q,i}] that arise.
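The identities (53)-(55) are standard and can be verified numerically (vec stacks columns, as defined at the start of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
U, W, Sigma, Z = (rng.standard_normal((3, 3)) for _ in range(4))
vec = lambda A: A.flatten(order='F')   # stack the columns of A

# (53): (U ⊗ W)(Σ ⊗ Z) = UΣ ⊗ WZ
ok53 = np.allclose(np.kron(U, W) @ np.kron(Sigma, Z), np.kron(U @ Sigma, W @ Z))
# (54): vec{UΣW} = (W^T ⊗ U) vec{Σ}
ok54 = np.allclose(vec(U @ Sigma @ W), np.kron(W.T, U) @ vec(Sigma))
# (55): Tr(ΣW) = [vec{W^T}]^T vec{Σ}
ok55 = np.allclose(np.trace(Sigma @ W), vec(W.T) @ vec(Sigma))
print(ok53, ok54, ok55)
```

Identity (54) with U = W = B_i^T is precisely what turns the matrix recursion (51) into the vector form (56).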
At a given iteration, the probability that a single node transmits two different entries is [20]

ρ (L − 1)/(M − 1)    (59)

In the stochastic partial-diffusion scheme, at a given iteration, the probability of transmitting two entries from two different nodes is ρ². Thus, we have [20]

E[κ_{t,p,i} K_{q,i}] = { ρ [((L − 1)/(M − 1)) I_M + ((M − L)/(M − 1)) J^{t,t}_M]  if p = q;  ρ² I_M  if p ≠ q }    (60)

where J^{t,t}_M is an M × M single-entry matrix with a one at its (t, t)th entry and zeros elsewhere, for p = 1, …, N, q = 1, …, N, and t = 1, …, M. Furthermore, in sequential partial-diffusion, where the same entry-selection pattern is used at all nodes, at a given iteration we have [20]

E[κ_{t,p,i} K_{q,i}] = ρ [((L − 1)/(M − 1)) I_M + ((M − L)/(M − 1)) J^{t,t}_M]    (61)

As shown in [20], all the entries of Φ are real and non-negative. Moreover, all of its columns sum up to one, both for the stochastic and the sequential partial-diffusion scheme and for any value of L.
The second term on the RHS of (49) is

E[s*_i Γ B^T_i Σ B_i Γ s_i] = vec^T{G} Φ σ    (62)

where

G = Γ E[s_i s*_i] Γ    (63)

which, in view of the Assumptions, can be expressed as

G = (1 − λ)² diag{σ²_{v,1} R^{−1}_{u,1}, …, σ²_{v,N} R^{−1}_{u,N}}    (64)

The last term on the RHS of (49) is

E[v^{*(ψ)}_i Σ v^{(ψ)}_i] = E[Tr(Σ v^{(ψ)}_i v^{*(ψ)}_i)] = vec^T{R^{(ψ)}_v} σ    (65)

The variance relation therefore becomes

E[‖w̃_i‖²_σ] = E[‖w̃_{i−1}‖²_{λ²Φσ}] + vec^T{G} Φ σ + vec^T{R^{(ψ)}_v} σ    (66)
E. Mean-Square Performance

At steady state, when i → ∞, (66) can be written as

lim_{i→∞} E[‖w̃_i‖²_{(I_{N²M²} − F)σ}] = vec^T{G} Φ σ + vec^T{R^{(ψ)}_v} σ    (67)

Expression (67) is a very useful relation; it allows us to evaluate the network MSD through proper selection of the weighting vector σ. The network MSD is defined as the average value

MSD^network ≜ lim_{i→∞} (1/N) Σ_{k=1}^{N} E[‖w̃_{k,i}‖²]    (68)

which amounts to averaging the MSDs of the individual nodes. Therefore,

MSD^network = lim_{i→∞} (1/N) E[‖w̃_i‖²] = lim_{i→∞} E[‖w̃_i‖²_{I_{NM}/N}]    (69)

This means that, in order to recover the network MSD from relation (67), we should select the weighting vector σ such that

(I_{N²M²} − F) σ = (1/N) vec{I_{NM}}    (70)

Solving for σ and substituting back into (67), we arrive at the following expression for the network MSD:

MSD^network_noisy = (1/N) [vec^T{G} Φ + vec^T{R^{(ψ)}_v}] (I_{N²M²} − F)^{−1} vec{I_{NM}}    (71)

Under perfect information exchange, the last term of (66) does not appear, so we can conclude that the network MSD deteriorates as follows:

MSD^network_noisy = MSD^network_ideal + (1/N) vec^T{R^{(ψ)}_v} (I_{N²M²} − F)^{−1} vec{I_{NM}}    (72)
A recursion of the type (66) is stable and convergent if the matrix λ²Φ is stable [34]. The entries of Φ are all real-valued and non-negative, and all the columns of Φ add up to unity. As a result, Φ is left-stochastic and has unit spectral radius. Hence, the spectral radius of the matrix λ²Φ is λ², which is smaller than one for λ < 1. Therefore, the mean-square stability condition for PDRLS under noisy links is the same as that for PDRLS under noise-free links, and the rate of convergence depends on the value of λ. Again, this need not be a necessary and sufficient condition for convergence of a recursion of the type (66), as (66) has been obtained under an independence assumption which is not true in general. Therefore, the mean-square convergence of PDRLS under noisy links remains an open question.

IV. SIMULATION RESULTS
In order to illustrate the performance of each PDRLS strategy under imperfect information exchange, we consider an adaptive network with a random topology and N = 10, where each node is, on average, connected to two other nodes. The measurements were generated according to model (1), and the regressors, u_{k,i}, were chosen Gaussian i.i.d. with randomly generated, distinct diagonal covariance matrices, R_{u,k}. The additive noises at the nodes are zero-mean Gaussian with variances σ²_{v,k} and independent of the regression data. The traces of the regressor covariance matrices and the noise variances at all nodes, Tr(R_{u,k}) and σ²_{v,k}, are shown in Fig. 1. We also use white Gaussian link noise signals such that R^{(ψ)}_{v,lk} = σ²_{ψ,lk} I_M. All link noise variances, σ²_{ψ,lk}, are randomly generated and illustrated in Fig. 2. We assign the link numbers by the following procedure. We denote the link from node l to node k as ℓ_{l,k}, where l ≠ k. Then, we collect the links {ℓ_{l,k}, l ∈ N_k \ {k}} in ascending order of l in the
Fig. 1. Variance of the noise (top) and covariance matrix trace of the input signal (bottom) at each node.
list L_k (which is a set with ordered elements) for each node. We concatenate {L_k} in ascending order of k to obtain the overall list L = {L_1, L_2, …, L_N}. The mth link in the network is then given by the mth element of the list L. It is noteworthy that the network MSD learning curves in all figures are obtained by averaging over 50 experiments, and the unknown parameter w^o of length M = 8 is randomly generated. In Fig. 3, we illustrate the simulated time evolution of the network MSD of the PDRLS algorithm under noisy information exchange using both sequential and stochastic partial-diffusion schemes for λ = 0.995. Fig. 4 demonstrates a similar scenario to that in Fig. 3 when λ = 1. To compare the steady-state performance of the network MSD curves of the PDRLS strategies, we examine the network MSD learning curves of the PDRLS strategies again with ideal links under various numbers of entries communicated at each iteration, L, for λ = 1, as illustrated in Fig. 5. From the results above, we can make the following observations:
the results above, we can make the following observations:
• The PDRLS algorithm delivers a tradeoff between communications cost and estimation performance under
noise-free links [21];
• The MSD performance of PDRLS with noisy links is
strictly come under the influence of forgetting factor,
λ. So, a minimal change in λ leads to a significant
degradation on MSD performance;
• As can be seen, PDLMS algorithm with noisy links fails
to converge for both stochastic and sequential schemes,
whereas it converges for the noise-free links case..
• The more entries are communicated at each iteration, the
more perturbed weight estimates are interred in the consultation phase. Therefore, the number of communicated
entries has a marked effect on the MSD performance of
PDRLS with noisy links.
V. CONCLUSION AND FUTURE WORK

In this work, we investigated the performance of the PDRLS algorithm when the links between nodes are noisy. We derived an analytical expression for the network mean-square deviation (MSD) using a weighted energy-conservation relation. Our
Fig. 2. The variance profiles for the various sources of link noise in dB, σ²_{ψ,lk}.
results revealed that noisy links are the main factor in the performance degradation of the PDRLS algorithm. They also illustrated that the stability conditions for PDRLS under noisy links are not sufficient to guarantee its convergence. In other words, the non-ideal link condition adds a new complexity to the estimation problem, for which the PDRLS algorithm becomes unstable and does not converge for any value of the forgetting factor, λ. It was also shown that the PDRLS algorithm with noisy links exhibits divergent behavior for both selection schemes (stochastic and sequential), while it does not in the noise-free links case. In future work, tighter and more accurate bounds on the convergence rate of the mean and mean-square update equations of the PDRLS algorithm with noisy links can be established, and necessary and sufficient conditions for convergence of the algorithm with noisy links need to be derived.
R EFERENCES
[1] C. G. Lopes and A. H. Sayed, “Incremental adaptive strategies over
distributed networks,” Signal Processing, IEEE Transactions on, vol. 55,
no. 8, pp. 4064–4077, 2007.
[2] M. G. Rabbat and R. D. Nowak, “Quantized incremental algorithms
for distributed optimization,” Selected Areas in Communications, IEEE
Journal on, vol. 23, no. 4, pp. 798–808, 2005.
[3] A. Nedic and D. P. Bertsekas, “Incremental subgradient methods for
nondifferentiable optimization,” SIAM Journal on Optimization, vol. 12,
no. 1, pp. 109–138, 2001.
[4] A. Bertrand and M. Moonen, “Consensus-based distributed total least
squares estimation in ad hoc wireless sensor networks,” Signal Processing, IEEE Transactions on, vol. 59, no. 5, pp. 2320–2330, 2011.
[5] S. S. Stanković, M. S. Stanković, and D. M. Stipanović, “Decentralized
parameter estimation by consensus based stochastic approximation,”
Automatic Control, IEEE Transactions on, vol. 56, no. 3, pp. 531–543,
2011.
[6] G. Mateos, I. D. Schizas, and G. B. Giannakis, “Distributed recursive
least-squares for consensus-based in-network adaptive estimation,” Signal Processing, IEEE Transactions on, vol. 57, no. 11, pp. 4583–4588,
2009.
[7] J. Chen and A. H. Sayed, “Diffusion adaptation strategies for distributed
optimization and learning over networks,” Signal Processing, IEEE
Transactions on, vol. 60, no. 8, pp. 4289–4305, 2012.
[8] X. Zhao, S.-Y. Tu, and A. H. Sayed, “Diffusion adaptation over networks
under imperfect information exchange and non-stationary data,” Signal
Processing, IEEE Transactions on, vol. 60, no. 7, pp. 3460–3475, 2012.
DRAFT
8
Fig. 3. Simulated network MSD curves for partial-diffusion RLS algorithms using sequential (top) and stochastic (bottom) schemes with different numbers of entries communicated (L = 0, 1, 2, 4, 8) under noisy links when λ = 0.995.

Fig. 4. Simulated network MSD curves for partial-diffusion RLS algorithms using sequential (top) and stochastic (bottom) schemes with different numbers of entries communicated (L = 0, 1, 2, 4, 8) under noisy links when λ = 1.
[9] A. Bertrand, M. Moonen, and A. H. Sayed, “Diffusion bias-compensated
RLS estimation over adaptive networks,” Signal Processing, IEEE
Transactions on, vol. 59, no. 11, pp. 5212–5224, 2011.
[10] S. Chouvardas, K. Slavakis, and S. Theodoridis, “Adaptive robust
distributed learning in diffusion sensor networks,” Signal Processing,
IEEE Transactions on, vol. 59, no. 10, pp. 4692–4707, 2011.
[11] N. Takahashi, I. Yamada, and A. H. Sayed, “Diffusion least-mean
squares with adaptive combiners: Formulation and performance analysis,” Signal Processing, IEEE Transactions on, vol. 58, no. 9, pp. 4795–
4810, 2010.
[12] F. S. Cattivelli and A. H. Sayed, “Diffusion LMS strategies for distributed estimation,” Signal Processing, IEEE Transactions on, vol. 58,
no. 3, pp. 1035–1048, 2010.
[13] C. G. Lopes and A. H. Sayed, “Diffusion least-mean squares over
adaptive networks: Formulation and performance analysis,” Signal Processing, IEEE Transactions on, vol. 56, no. 7, pp. 3122–3136, 2008.
[14] S.-Y. Tu and A. H. Sayed, “Diffusion strategies outperform consensus
strategies for distributed estimation over adaptive networks,” Signal
Processing, IEEE Transactions on, vol. 60, no. 12, pp. 6217–6234, 2012.
[15] F. S. Cattivelli, C. G. Lopes, and A. H. Sayed, “Diffusion recursive
least-squares for distributed estimation over adaptive networks,” Signal
Processing, IEEE Transactions on, vol. 56, no. 5, pp. 1865–1877, 2008.
[16] Ø. L. Rørtveit, J. H. Husøy, and A. H. Sayed, “Diffusion LMS
with communication constraints,” in Signals, Systems and Computers
(ASILOMAR), 2010 Conference Record of the Forty Fourth Asilomar
Conference on. IEEE, 2010, pp. 1645–1649.
[17] C. G. Lopes and A. H. Sayed, “Diffusion adaptive networks with changing topologies,” in 2008 IEEE International Conference on Acoustics,
Speech and Signal Processing, 2008.
[18] N. Takahashi and I. Yamada, “Link probability control for probabilistic
diffusion least-mean squares over resource-constrained networks,” in
Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE International Conference on. IEEE, 2010, pp. 3518–3521.
[19] X. Zhao and A. H. Sayed, “Single-link diffusion strategies over adaptive
networks,” in Acoustics, Speech and Signal Processing (ICASSP), 2012
IEEE International Conference on. IEEE, 2012, pp. 3749–3752.
[20] R. Arablouei, S. Werner, Y.-F. Huang, and K. Dogancay, “Distributed
least mean-square estimation with partial diffusion,” Signal Processing,
IEEE Transactions on, vol. 62, no. 2, pp. 472–484, 2014.
[21] R. Arablouei, K. Dogancay, S. Werner, and Y.-F. Huang, “Adaptive
distributed estimation based on recursive least-squares and partial diffusion,” Signal Processing, IEEE Transactions on, vol. 62, no. 14, pp.
3510–3522, 2014.
[22] M. O. Sayin and S. S. Kozat, “Single bit and reduced dimension diffusion strategies over distributed networks,” Signal Processing Letters,
IEEE, vol. 20, no. 10, pp. 976–979, 2013.
[23] ——, “Compressive diffusion strategies over distributed networks for
reduced communication load,” Signal Processing, IEEE Transactions
on, vol. 62, no. 20, pp. 5308–5323, 2014.
[24] S. Chouvardas, K. Slavakis, and S. Theodoridis, “Trading off complexity
with communication costs in distributed adaptive learning via Krylov
subspaces for dimensionality reduction,” Selected Topics in Signal
Processing, IEEE Journal of, vol. 7, no. 2, pp. 257–273, 2013.
[25] R. Abdolee and B. Champagne, “Diffusion LMS algorithms for sensor
networks over non-ideal inter-sensor wireless channels,” in Distributed
Computing in Sensor Systems and Workshops (DCOSS), 2011 International Conference on. IEEE, 2011, pp. 1–6.
[26] A. Khalili, M. A. Tinati, A. Rastegarnia, and J. A. Chambers, “Transient analysis of diffusion least-mean squares adaptive networks with noisy channels,” International Journal of Adaptive Control and Signal Processing, vol. 26, no. 2, pp. 171–180, 2012.
[27] A. Khalili, M. A. Tinati, A. Rastegarnia, and J. Chambers, “Steady-state analysis of diffusion LMS adaptive networks with noisy links,” Signal Processing, IEEE Transactions on, vol. 60, no. 2, pp. 974–979, 2012.
[28] S.-Y. Tu and A. H. Sayed, “Adaptive networks with noisy links,” in Global Telecommunications Conference (GLOBECOM 2011), 2011 IEEE. IEEE, 2011, pp. 1–5.
[29] A. Khalili, M. A. Tinati, and A. Rastegarnia, “Performance analysis of distributed incremental LMS algorithm with noisy links,” International Journal of Distributed Sensor Networks, vol. 2011, 2011.
[30] ——, “Steady-state analysis of incremental LMS adaptive networks with noisy links,” Signal Processing, IEEE Transactions on, vol. 59, no. 5, pp. 2416–2421, 2011.
[31] ——, “Analysis of incremental RLS adaptive networks with noisy links,” IEICE Electronics Express, vol. 8, no. 9, pp. 623–628, 2011.
[32] S. Kar and J. M. F. Moura, “Distributed consensus algorithms in sensor networks with imperfect communication: Link failures and channel noise,” Signal Processing, IEEE Transactions on, vol. 57, no. 1, pp. 355–369, 2009.
[33] G. Mateos, I. D. Schizas, and G. B. Giannakis, “Performance analysis of the consensus-based distributed LMS algorithm,” EURASIP Journal on Advances in Signal Processing, vol. 2009, p. 68, 2009.
[34] A. H. Sayed, Adaptive Filters. John Wiley & Sons, 2011.
[35] ——, Fundamentals of Adaptive Filtering. John Wiley & Sons, 2003.
[36] C. D. Meyer, Matrix Analysis and Applied Linear Algebra. SIAM, 2000, vol. 2.
[37] K. Dogancay, Partial-Update Adaptive Signal Processing: Design, Analysis and Implementation. Academic Press, 2008.
[38] M. Godavarti and A. O. Hero III, “Partial update LMS algorithms,” Signal Processing, IEEE Transactions on, vol. 53, no. 7, pp. 2382–2399, 2005.
[39] S. C. Douglas, “Adaptive filters employing partial updates,” Circuits and Systems II: Analog and Digital Signal Processing, IEEE Transactions on, vol. 44, no. 3, pp. 209–216, 1997.
[40] J. R. Treichler, C. R. Johnson, and M. G. Larimore, Theory and Design of Adaptive Filters. Wiley, 1987.
[41] T. Y. Al-Naffouri and A. H. Sayed, “Transient analysis of data-normalized adaptive filters,” Signal Processing, IEEE Transactions on, vol. 51, no. 3, pp. 639–652, 2003.
[42] N. R. Yousef and A. H. Sayed, “A unified approach to the steady-state and tracking analyses of adaptive filters,” Signal Processing, IEEE Transactions on, vol. 49, no. 2, pp. 314–324, 2001.

Fig. 5. Simulated network MSD curves for partial-diffusion RLS algorithms using sequential (top) and stochastic (bottom) schemes with different numbers of entries communicated under ideal links when λ = 1.
SAMUEL COMPACTIFICATIONS OF AUTOMORPHISM GROUPS
DANA BARTOŠOVÁ AND ANDY ZUCKER
arXiv:1802.02513v1 [math.DS] 7 Feb 2018
1. Introduction
In this paper, we are interested in the automorphism groups of countable first-order structures and the Samuel compactifications of these groups. We will address a variety of questions about the algebraic structure of the Samuel compactification and exhibit connections
between this algebraic structure and the combinatorics of the first-order structures at hand.
Let G be a topological group; all topological groups and spaces will be assumed Hausdorff.
The group G comes with a natural uniform structure, the left uniformity, whose entourages
are of the form {(g, h) ∈ G × G : g −1 h ∈ V } where V ranges over open symmetric neighborhoods of the identity. Every uniform space U admits a Samuel compactification, the Gelfand
space of the algebra of bounded uniformly continuous functions on U (see [Sa] or [U]). We
denote by S(G) the Samuel compactification of the group G with its left uniform structure.
In addition to being a compact Hausdorff space, the space S(G) can also be endowed with
algebraic structure. A G-flow is a compact Hausdorff space X equipped with a continuous
right G-action a : X × G → X. Typically the action a is understood, and we write x · g or
xg for a(x, g). We can give S(G) the structure of a G-flow; indeed, for each g ∈ G, the rightmultiplication map h → hg is left-uniformly continuous, so can be continuously extended
to S(G). With some extra work, it can be shown that the evaluation S(G) × G → S(G) is
continuous.
If X and Y are G-flows, a G-map is a continuous map ϕ : X → Y which respects the
G-action. A G-ambit is a pair (X, x0 ), where X is a G-flow and x0 ∈ X has a dense orbit. If
(X, x0 ) and (Y, y0 ) are ambits, then a map of ambits is a G-map ϕ : X → Y with ϕ(x0 ) = y0 .
Notice that there is at most one map of ambits from (X, x0 ) to (Y, y0 ). By identifying G as
embedded into S(G) and by considering the orbit of 1G , we turn (S(G), 1G ) into an ambit.
It turns out that this is the greatest ambit; for any G-ambit (X, x0 ), there is a map of ambits
ϕ : (S(G), 1G ) → (X, x0 ).
We can use this universal property to endow S(G) with yet more structure. A compact
left-topological semigroup is a semigroup S with a compact Hausdorff topology in which the
left multiplication maps t → st are continuous for each s ∈ S. Now let x ∈ S(G); then the
pair (x · G, x) is a G-ambit, so there is a unique G-map λx : S(G) → x · G with λx (1G ) = x.
We can endow S(G) with the structure of a compact left-topological semigroup by setting
xy := λx (y). It is not hard to show that this operation is associative.
Another consequence of the universal property is the existence of universal minimal flows.
Let X be a G-flow. A subflow is any closed Y ⊆ X which is invariant under the G-action.
The G-flow X is minimal if X 6= ∅ and every orbit is dense; equivalently, X is minimal if X
contains no proper subflows. An easy Zorn’s Lemma argument shows that every flow contains
a minimal subflow. The G-flow X is universal if there is a G-map from X onto any minimal
flow. Let M ⊆ S(G) be any minimal subflow, and let Y be a minimal flow. Pick y0 ∈ Y
arbitrarily, making (Y, y0 ) an ambit. Then there is a map of ambits ϕ : (S(G), 1G ) → (Y, y0 ),
and ϕ|M : M → Y is a G-map. We have just shown the existence of a universal minimal
flow. By using some techniques from the theory of compact left-topological semigroups,
it can be shown that there is a unique universal minimal flow up to G-flow isomorphism,
denoted M (G).
The existence and uniqueness of the universal minimal flow suggests another “canonical”
G-ambit we can construct. If X is a G-flow, we can view each g ∈ G as the function
ρg : X → X. Form the product space X X , and set E(X) = {ρg : g ∈ G}. It will be useful in
this instance to write functions on the right, so if f ∈ X X , we write x·f or xf instead of f (x).
The group G acts on E(X) via x · (f · g) = (x · f ) · g. Notice that ρg · h = ρgh , so (E(X), ρ1G )
is an ambit. We can also give E(X) a compact left-topological semigroup structure; rather
than a universal property, it is the fact that members of E(X) are functions that allows us
to do this. Indeed, the product is given by composition, which with our notation means that
for f1 , f2 ∈ E(X), we define x · (f1 · f2 ) = (x · f1 ) · f2 . The ambit E(X) (the distinguished
point being understood) is called the enveloping semigroup of X; we will be particularly
interested in E(M (G)), the enveloping semigroup of the universal minimal flow.
It is worth pointing out that we could have avoided some of this notational awkwardness by
switching the roles of left and right throughout, i.e. working with left G-actions and compact
right-topological semigroups. The reason we work with our left-right conventions is due to
the specific groups that we will be working with, i.e. automorphism groups of countable
first-order structures. We will point out later how a left-right switch could be made. Also
note that several of the references use the opposite left-right conventions, in particular [HS]
and [Ba].
Robert Ellis (see [E]) first proved the existence and uniqueness of M (G), and was the first
to consider the two canonical ambits S(G) and E(M (G)). As S(G) is the greatest ambit,
there is a map of ambits ϕ : S(G) → E(M (G)) (when referring to S(G) and enveloping semigroups, we will suppress the distinguished point unless there is possible confusion). He posed
the following very natural question: is ϕ : S(G) → E(M (G)) an isomorphism? Vladimir
Pestov (see [P]) observed that the existence of extremely amenable groups, groups where
M (G) is a singleton, provides a negative answer to Ellis’s question. Pestov also constructed
many other examples of groups G where S(G) and E(M (G)) were not isomorphic. The
diversity of counterexamples to Ellis’s question led Pestov to make the following conjecture.
Conjecture 1.1 (Pestov). Let G be a topological group. Then the canonical map ϕ : S(G) →
E(M (G)) is an isomorphism iff G is precompact.
Here, G is said to be precompact if the completion of its left uniformity is compact. If this
is the case, then all of S(G), M (G), and E(M (G)) are isomorphic to the left completion.
Aside from the initial work of Pestov, very little work has been done on Conjecture 1.1.
Glasner and Weiss (see [GW1]) have shown that S(Z) and E(M (Z)) are not isomorphic.
Their proof is rather difficult and uses some deep results in ergodic theory (see [F]); as such,
their methods are unlikely to generalize even to countable amenable groups.
In this paper, we address Conjecture 1.1 for groups of the form G = Aut(K) where K is
a countable first-order structure. We endow G with the topology of pointwise convergence,
turning G into a Polish group. In a mild abuse of terminology, we will call groups of this
form automorphism groups. When K is a countable set with no additional structure, we
have Aut(K) = S∞ , the group of all permutations of a countable set. More generally, automorphism groups are exactly the closed subgroups of S∞ . The work of Kechris, Pestov, and
Todorcevic [KPT] provides explicit computations of M (G) for many automorphism groups.
Having an explicit representation of M (G) aids in analyzing the properties of E(M (G)).
Along with an explicit representation of S(G) for automorphism groups (see [Z]), this allows us to address Conjecture 1.1 for some of these groups. Our first main theorem is the
following.
Theorem 1.2. Let K be any of the following:
• a countable set without structure,
• the random Kn -free graph,
• the random r-uniform hypergraph.
Then for G = Aut(K), we have S(G) ≇ E(M (G)).
We then turn to finding the extent to which S(G) and E(M (G)) differ. Any minimal
subflow M ⊆ S(G) is isomorphic to M (G), and it turns out that S(G) admits a retraction
onto M , i.e. a G-map ϕ : S(G) → M with ϕ|M the identity. Pestov has shown (see [P1]) that
S(G) ≅ E(M (G)) iff the retractions of S(G) onto a minimal subflow M ⊆ S(G) separate
the points of S(G). So if S(G) ≇ E(M (G)), it makes sense to ask which pairs of points
cannot be separated; this will not depend on the choice of minimal subflow M ⊆ S(G).
Given x, y ∈ S(G), we say they can be separated by retractions if there is a retraction
ϕ : S(G) → M with ϕ(x) 6= ϕ(y).
Every compact left-topological semigroup S admits a smallest two-sided ideal, denoted
K(S). Our second main theorem is the following.
Theorem 1.3. There are x 6= y ∈ K(S(S∞ )) which cannot be separated by retractions.
On the way to proving Theorem 1.3, we prove some theorems of independent interest
both for general topological groups G and for S∞ . By a well-known theorem of Ellis, every
compact left-topological semigroup S contains an idempotent, an element u ∈ S which
satisfies u · u = u (see [E]). Given Y ⊆ S, write J(Y ) for the set of idempotents in Y . Our
route to proving Theorem 1.3 involves a careful understanding of when the product of two
idempotents is or is not an idempotent.
In the case G = S∞ , we are able to find large semigroups of idempotents; this is what
allows us to prove Theorem 1.3.
Theorem 1.4. There are two minimal subflows M 6= N ⊆ S(S∞ ) so that J(M ) ∪ J(N ) is
a semigroup.
It is worth noting that any minimal subflow M ⊆ S(G) is a compact subsemigroup of
S(G), so J(M ) 6= ∅.
There are some cases when it is clear that K(S(G)) contains sufficiently large semigroups
of idempotents. Given a G-flow X, recall that a pair of points x, y ∈ X is called proximal
if there is p ∈ E(X) with xp = yp; the pair (x, y) is called distal if it is not proximal. A
G-flow X is proximal if every pair from X is proximal, and X is called distal if every pair
x 6= y ∈ X is distal. If M (G) is proximal, then whenever M ⊆ S(G) is a minimal subflow, we
have J(M ) = M . If M (G) is distal and M ⊆ S(G) is a minimal subflow, then J(M ) = {u},
a single idempotent. So long as S(G) contains at least two minimal right ideals, which is
always the case when G is Polish (see [Ba]), then E(M (G)) ≇ S(G) in these cases. We will
discuss examples of groups with M (G) proximal and provide a partial characterization of
Polish groups G with M (G) distal.
Theorem 1.5. Let G be a Polish group. The following are equivalent.
(1) M (G) is distal and metrizable.
(2) There is a short exact sequence of groups 1 → H −i→ G −ϕ→ K → 1 with H extremely
amenable and K compact metrizable.
Furthermore, in item (2), K ≅ M (G), and the G-action is given by k · g = kϕ(g).
2. Countable first-order structures and the Samuel compactification
In this section, we provide the necessary background on countable structures and provide
an explicit construction of the Samuel compactification of an automorphism group. The
presentation here is largely taken from [Z1].
Recall that S∞ is the group of all permutations of ω := {0, 1, 2, ...}. We can endow S∞
with the topology of pointwise convergence; a typical basic open neighborhood of the identity
is {g ∈ S∞ : g(k) = k for every k < n} for some n < ω. Notice that each of these basic open
neighborhoods is in fact a clopen subgroup.
Fix now G a closed subgroup of S∞ . A convenient way to describe the G-orbits of finite
tuples from ω is given by the notions of a Fraı̈ssé class and structure. A relational language
L = {Ri : i ∈ I} is a collection of relation symbols. Each relation symbol Ri has an arity
ni ∈ N. An L-structure A = hA, RiA i consists of a set A and relations RiA ⊆ Ani ; we say
that A is an L-structure on A. If A, B are L-structures, then g : A → B is an embedding
if g is a map from A to B such that RiA (x1 , ..., xni ) ⇔ RiB (g(x1 ), ..., g(xni )) for all relations.
We write Emb(A, B) for the set of embeddings from A to B. We say that B embeds A
and write A ≤ B if Emb(A, B) 6= ∅. An isomorphism is a bijective embedding, and an
automorphism is an isomorphism between a structure and itself. If A ⊆ B, then we say that
A is a substructure of B, written A ⊆ B, if the inclusion map is an embedding. A is finite,
countable, etc. if A is.
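For finite structures, the embedding condition R^A(x1, ..., xn) ⇔ R^B(g(x1), ..., g(xn)) can be checked by brute force. The following sketch (the structure encoding and helper names are ours, not from the paper) enumerates Emb(A, B) for small relational structures:

```python
from itertools import permutations, product

def embeddings(A, B, arity):
    """Brute-force Emb(A, B) for finite structures in a relational
    language; `arity` maps each relation symbol to its arity.  An
    injection g is an embedding iff R^A(x1,...,xn) <=> R^B(g(x1),...,g(xn))
    for every relation symbol R and every tuple over A."""
    (UA, RA), (UB, RB) = A, B
    embs = []
    for image in permutations(UB, len(UA)):
        g = dict(zip(UA, image))
        if all((tup in RA[s]) == (tuple(g[x] for x in tup) in RB[s])
               for s in arity
               for tup in product(UA, repeat=arity[s])):
            embs.append(g)
    return embs

# An edge embeds into the triangle K3 in 6 ways; a non-edge does not embed,
# since embeddings must also reflect non-edges.
edge = ([0, 1], {"E": {(0, 1), (1, 0)}})
non_edge = ([0, 1], {"E": set()})
K3 = ([0, 1, 2], {"E": {(a, b) for a in range(3) for b in range(3) if a != b}})
assert len(embeddings(edge, K3, {"E": 2})) == 6
assert len(embeddings(non_edge, K3, {"E": 2})) == 0
```

Note that an embedding must preserve both relations and non-relations, which is why the non-edge has no embedding into K3.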
Definition 2.1. Let L be a relational language. A Fraı̈ssé class K is a class of L-structures
with the following four properties.
(1) K contains only finite structures, contains structures of arbitrarily large finite cardinality, and is closed under isomorphism.
(2) K has the Hereditary Property (HP): if B ∈ K and A ⊆ B, then A ∈ K.
(3) K has the Joint Embedding Property (JEP): if A, B ∈ K, then there is C which
embeds both A and B.
(4) K has the Amalgamation Property (AP): if A, B, C ∈ K and f : A → B and
g : A → C are embeddings, there is D ∈ K and embeddings r : B → D and
s : C → D with r ◦ f = s ◦ g.
If K is a countably infinite L-structure (which we will typically assume has underlying set
ω), we write Age(K) for the class of finite L-structures which embed into K. The following
is the major fact about Fraı̈ssé classes.
Fact 2.2. If K is a Fraı̈ssé class, there is up to isomorphism a unique countably infinite
L-structure K with Age(K) = K satisfying one of the following two equivalent conditions.
(1) K is ultrahomogeneous: if f : A → B is an isomorphism between finite substructures
of K, then there is an automorphism of K extending f .
(2) K satisfies the Extension Property: if B ∈ K, A ⊆ B, and f : A → K is an
embedding, there is an embedding h : B → K extending f .
Conversely, if K is a countably infinite L-structure satisfying 1 or 2, then Age(K) is a
Fraı̈ssé class.
Given a Fraı̈ssé class K, we write Flim(K), the Fraı̈ssé limit of K, for the unique structure
K as above. We say that K is a Fraı̈ssé structure if K ≅ Flim(K) for some Fraı̈ssé class.
Our interest in Fraı̈ssé structures stems from the following result.
Fact 2.3. For any Fraı̈ssé structure K, Aut(K) is isomorphic to a closed subgroup of S∞ .
Conversely, any closed subgroup of S∞ is isomorphic to Aut(K) for some Fraı̈ssé structure
K.
Fix a Fraı̈ssé class K with Fraı̈ssé limit K. Set G = Aut(K). We also fix an exhaustion
K = ⋃n An , with each An ∈ K, |An | = n, and Am ⊆ An for m ≤ n. Whenever we
write K = ⋃n An , it will be assumed that the right side is an exhaustion of K. Write
Hn = {gGn : g ∈ G}, where Gn = G ∩ NAn is the pointwise stabilizer of An . We can
identify Hn with Emb(An , K), the set of embeddings of An into K. Note that under this
identification, we have Hn = ⋃N≥n Emb(An , AN ). For g ∈ G, we often write g|n for gGn ,
and we write in for Gn . The group G acts on Hn on the left; if x ∈ Hn and g ∈ G, we have
g · x = g ◦ x. For m ≤ n, we let inm ∈ Emb(Am , An ) be the inclusion embedding.
Each f ∈ Emb(Am , An ) gives rise to a dual map fˆ : Hn → Hm given by fˆ(x) = x◦f . Note
that we must specify the range of f for the dual map to make sense, but this will usually be
clear from context.
Proposition 2.4.
(1) For f ∈ Emb(Am , An ), the dual map fˆ : Hn → Hm is surjective.
(2) For every f ∈ Emb(Am , An ), there is N ≥ n and h ∈ Emb(An , AN ) with h ◦ f = iNm .
Proof. Item 1 is an immediate consequence of the extension property. For item 2, use ultrahomogeneity to find g ∈ G with g ◦ f = im . Let N ≥ n be large enough so that ran(g|n ) ⊆ AN ,
and set h = g|n .
We now proceed with an explicit construction of S(G). First, if X is a discrete space, we
let βX be the space of ultrafilters on X. We topologize βX by declaring a typical basic open
neighborhood to be of the form {p ∈ βX : A ∈ p}, where A ⊆ X. We view X as a subset of
βX by identifying x ∈ X with the ultrafilter {A ⊆ X : x ∈ A}. If Y is a compact Hausdorff
space and ϕ : X → Y is any map, there is a unique continuous extension ϕ̃ : βX → Y .
Now let f ∈ Emb(Am , An ). The dual map fˆ extends to a continuous map f˜ : βHn → βHm .
If p ∈ βHn and f ∈ Emb(Am , An ), we will sometimes write p · f for f˜(p). Form the inverse
limit lim←− βHn along the maps ı̃nm . We can identify G with a dense subspace of lim←− βHn by
associating to each g ∈ G the sequence of ultrafilters principal on g|n . The space lim←− βHn
turns out to be the Samuel compactification S(G) (see Corollary 3.3 in [P]).
To see that S(G) is the greatest ambit, we need to exhibit a right G-action on S(G).
This might seem unnatural at first; after all, the left G-action on each Hn extends to a left
G-action on βHn , giving us a left G-action on S(G). The problem is that the left action is
not continuous when G is given its Polish topology. The right action we describe doesn’t
“live” on any one level of the inverse limit lim←− βHn ; we need to understand how the various
levels interact.
Let πn : lim←− βHn → βHn be the projection map. We often write α(n) := πn (α). For
α ∈ lim←− βHn , g ∈ G, m ∈ N, and S ⊆ Hm , we have
S ∈ αg(m) ⇔ {x ∈ Hn : x ◦ g|m ∈ S} ∈ α(n)
where n ≥ m is large enough so that ran(g|m ) ⊆ An . Notice that if g|m = h|m = f ,
then αg(m) = αh(m) := α · f := λαm (f ). By distinguishing the point 1 ∈ lim←− βHn with
1(m) principal on im , we endow S(G) with the structure of a G-ambit, and (S(G), 1) is the
greatest ambit (see Theorem 6.3 in [Z]).
Using the universal property of the greatest ambit, we can define a left-topological semigroup structure on S(G): given α and γ in lim←− βHn , m ∈ N, and S ⊆ Hm , we have
S ∈ αγ(m) ⇔ {f ∈ Hm : S ∈ α · f } ∈ γ(m).
If α ∈ S(G) and S ⊆ Hm , a useful shorthand is to put
α−1 (S) = {f ∈ Hm : S ∈ α · f }.
Then the semigroup multiplication can be written as
S ∈ αγ(m) ⇔ α−1 (S) ∈ γ(m).
Notice that for fixed α, αγ(m) depends only on γ(m); indeed, if α ∈ lim←− βHn , p ∈ βHm , and
S ⊆ Hm , we have S ∈ α · p iff α−1 (S) ∈ p. In fact, α · p = λ̃αm (p), where the map λ̃αm is the
continuous extension of λαm to βHm .
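The ultrafilter formula for the product can be sanity-checked on principal ultrafilters, where it must reduce to composition in G: if α and γ are principal on g and h, then αγ is principal on g ◦ h. A small finite sketch (our own encoding, not from the paper; injections into {0, ..., N−1} stand in for Hm, and permutations of {0, ..., N−1} stand in for group elements) illustrates this:

```python
from itertools import permutations

# Finite toy: H_m is the set of injections from {0,...,m-1} into {0,...,N-1}.
N, m = 5, 2
H_m = list(permutations(range(N), m))

def product_member(g, h, S):
    """Decide S ∈ (αγ)(m) via the formula S ∈ αγ(m) <=> α⁻¹(S) ∈ γ(m),
    where α and γ are the ultrafilters principal on g and h.  Here
    α·f is principal on g∘f, so α⁻¹(S) = {f : g∘f ∈ S}."""
    alpha_inv_S = {f for f in H_m if tuple(g[x] for x in f) in S}
    return tuple(h[i] for i in range(m)) in alpha_inv_S

g = (1, 2, 3, 4, 0)                    # a cyclic shift
h = (2, 0, 1, 4, 3)
gh = tuple(g[h[i]] for i in range(N))  # the composition g ∘ h
# The product ultrafilter is principal on (g ∘ h)|_m:
for S in [set(H_m), set(), {f for f in H_m if f[0] == 0}]:
    assert product_member(g, h, S) == (gh[:m] in S)
```

So on the dense copy of G inside S(G), the semigroup operation extends ordinary composition, as it must.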
As promised in the introduction, we now explain the reason behind our left-right conventions. The primary reason behind considering right G-flows is because for G = Aut(K), the
left uniformity is very natural to describe. Namely, every entourage contains an entourage
of the form {(g, h) ∈ G × G : g|m = h|m }. This leads naturally to considering the embeddings from Am to K. If we wanted to consider the right uniformity, we would instead be
considering partial isomorphisms of K with range Am , which are less easily described.
3. KPT correspondence
In this section, we provide a brief review of KPT correspondence. For proofs of the results
in this section, see [KPT], [NVT], or [Z].
Let L be a relational language and L∗ = L ∪ S, where S = {Si : i ∈ N} and the Si
are new relational symbols of arity ni . If A is an L∗ -structure, write A|L for the structure
obtained by throwing away the interpretations of the relational symbols in L∗ \ L. If K∗
is a class of L∗ -structures, set K∗ |L = {A∗ |L : A∗ ∈ K∗ }. If K = K∗ |L and K∗ is closed
under isomorphism, we say that K∗ is an expansion of K. If A∗ ∈ K∗ and A∗ |L = A, then
we say that A∗ is an expansion of A, and we write K∗ (A) for the set of expansions of A
in K∗ . If f ∈ Emb(A, B) and B∗ ∈ K∗ (B), we let B∗ · f be the unique expansion of A
so that f ∈ Emb(B∗ · f, B∗ ). The expansion K∗ is precompact if for each A ∈ K, the set
{A∗ ∈ K∗ : A∗ |L = A} is finite.
If K∗ is an expansion of the Fraı̈ssé class K, we say that the pair (K∗ , K) is reasonable if for
any A, B ∈ K, embedding f : A → B, and expansion A∗ of A, then there is an expansion
B∗ of B with f : A∗ → B∗ an embedding. When K∗ is also a Fraı̈ssé class, we have the
following equivalent definition.
Proposition 3.1. Let K∗ be a Fraı̈ssé expansion class of the Fraı̈ssé class K with Fraı̈ssé
limits K∗ , K respectively. Then the pair (K∗ , K) is reasonable iff K∗ |L ≅ K.
Set Fin(K) = {A ∈ K : A ⊆ K}. Suppose (K∗ , K) is reasonable and precompact. Set
XK∗ := {⟨K, S⃗⟩ : ⟨A, S⃗|A ⟩ ∈ K∗ whenever A ∈ Fin(K)}.
We topologize this space by declaring the basic open neighborhoods to be of the form
N (A∗ ) := {K0 ∈ XK∗ : A∗ ⊆ K0 }, where A∗ is an expansion of some A ∈ Fin(K). We
can view XK∗ as a closed subspace of
∏A∈Fin(K) {A∗ : A∗ ∈ K∗ (A)}.
Notice that since (K∗ , K) is precompact, XK∗ is compact. If ⋃n An = K is an exhaustion, a
compatible metric is given by
d(⟨K, S⃗⟩, ⟨K, T⃗⟩) = 1/k(S⃗, T⃗),
where k(S⃗, T⃗) is the largest k for which ⟨Ak , S⃗|Ak ⟩ ≅ ⟨Ak , T⃗|Ak ⟩.
We can now form the (right) logic action of G = Aut(K) on XK∗ by setting K0 · g to be
the structure where for each relation symbol S ∈ S, we have
S^(K0·g) (x1 , ..., xn ) ⇔ S^(K0) (g(x1 ), ..., g(xn )).
This action is jointly continuous, turning XK∗ into a G-flow. For readers used to left logic
actions, acting on the right by g is the same as acting on the left by g −1 .
First let us consider when XK∗ is a minimal G-flow.
Definition 3.2. We say that the pair (K∗ , K) has the Expansion Property (ExpP) when for
any A∗ ∈ K∗ , there is B ∈ K such that for any expansion B∗ of B, there is an embedding
f : A∗ → B ∗ .
Proposition 3.3. Let K∗ be a reasonable, precompact Fraı̈ssé expansion class of the Fraı̈ssé
class K with Fraı̈ssé limits K∗ , K respectively. Let G = Aut(K). Then the G-flow XK∗ is
minimal iff the pair (K∗ , K) has the ExpP.
Expansion classes are particularly interesting when K∗ has the following combinatorial
property.
Definition 3.4. Let C be a class of finite structures.
(1) We say that A ∈ C is a Ramsey object if for any r ≥ 2 and any B ∈ C with A ≤ B,
there is C ∈ C with B ≤ C so that for any coloring c : Emb(A, C) → r, there is
h ∈ Emb(B, C) with |c(h ◦ Emb(A, B))| = 1.
(2) We say that C has the Ramsey Property (RP) if every A ∈ C is a Ramsey object.
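In the class of finite sets without structure, embeddings are just injections, and the simplest instance of Definition 3.4 is the pigeonhole principle. The following brute-force check (our own toy encoding, not from the paper) verifies that a 3-point C witnesses the Ramsey property for a 1-point A and a 2-point B with r = 2 colors, while a 2-point C does not:

```python
from itertools import permutations, product

def is_ramsey_witness(a, b, c, r):
    """Check that C = {0..c-1} witnesses Definition 3.4 for A = {0..a-1},
    B = {0..b-1} in the class of finite sets without structure, where
    Emb(X, Y) is just the set of injections from X into Y."""
    Emb = lambda m, n: list(permutations(range(n), m))
    for coloring in product(range(r), repeat=len(Emb(a, c))):
        col = dict(zip(Emb(a, c), coloring))
        # Look for h in Emb(B, C) with c monochromatic on h ∘ Emb(A, B).
        if not any(len({col[tuple(h[x] for x in f)] for f in Emb(a, b)}) == 1
                   for h in Emb(b, c)):
            return False
    return True

assert is_ramsey_witness(1, 2, 3, 2)      # pigeonhole: 3 points, 2 colors
assert not is_ramsey_witness(1, 2, 2, 2)  # 2 points can get distinct colors
```

For structured classes (graphs, hypergraphs, linear orders) the same brute-force scheme applies with the embedding sets restricted accordingly, though the search space grows quickly.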
The following is one of the major theorems in [KPT]. This theorem in its full generality
is proven in [NVT].
Theorem 3.5. Let K∗ be a reasonable, precompact Fraı̈ssé expansion class of the Fraı̈ssé
class K with Fraı̈ssé limits K∗ , K, respectively. Let G = Aut(K). Then XK∗ ≅ M (G) iff the
pair (K∗ , K) has the ExpP and K∗ has the RP.
Pairs (K∗ , K) of Fraı̈ssé classes which are reasonable, precompact, satisfy the ExpP, and
where K∗ has the RP are called excellent. In particular, if K = Flim(K), G = Aut(K), and
there is an expansion class K∗ so that (K∗ , K) is excellent, then M (G) is metrizable. The
following converse is one of the major theorems of [Z].
Theorem 3.6. Let K be a Fraı̈ssé class with K = Flim(K) and G = Aut(K). If M (G) is
metrizable, then there is an expansion class K∗ so that (K∗ , K) is excellent.
4. Ellis’s problem for random relational structures
In this section, we prove Theorem 1.2. Let G = Aut(K) for some Fraı̈ssé structure
K = ⋃n An . If T ⊆ Hm , n ≥ m, and f ∈ Emb(Am , An ), set
f (T ) = {s ∈ Hn : s ◦ inm ∈ T }.
This is a minor abuse of notation, since f also denotes a map from Am to An ; however, this
map induces a continuous embedding of the Boolean algebra P(Hm ) into P(Hn ).
We will freely identify P(Hm ) with 2Hm ; in particular, G acts on 2Hm by right shift, where
for ϕ ∈ 2Hm , f ∈ Hm , and g ∈ G, we have ϕ · g(f ) = ϕ(g · f ).
Definition 4.1. We call a subset S ⊆ Hm minimal if the flow χS · G ⊆ 2Hm is minimal.
The formulation of Ellis’s problem we will work with is the one concerning retractions
given by Pestov. We will be interested in whether every pair x 6= y ∈ S(G) can be separated
by retractions. A characterization of when this occurs for discrete groups can be found in
[Ba] (see Proposition 11). We first prove a similar characterization for automorphism groups
in the next two lemmas.
Before proceeding, a quick remark on notation is in order. If X is a G-flow, then there
is a unique map of ambits ϕ : S(G) → E(X). If x ∈ X and p ∈ S(G), we write x · p for
x · ϕ(p).
Lemma 4.2. Suppose α, γ ∈ S(G) cannot be separated by retractions, and let S ⊆ Hm be
minimal. Then S ∈ α(m) ⇔ S ∈ γ(m).
Proof. Let M ⊆ S(G) be a minimal subflow, and consider the non-empty closed subsemigroup {p ∈ M : χS · p = χS }. By Ellis’s theorem, let u ∈ M be an idempotent with
χS · u = χS . As the left multiplication λu : S(G) → M is a retraction, we must have
u · α = u · γ. Therefore χS · α = χS · γ, and α−1 (S) = γ −1 (S) := T . It follows that im ∈ T
iff S ∈ α(m) iff S ∈ γ(m).
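The proof invokes Ellis's theorem that every compact left-topological semigroup contains an idempotent. For a finite semigroup this is an elementary fact: some power of any element is idempotent, which the following sketch (our own helper, not from the paper) demonstrates.

```python
def idempotent_power(x, mul):
    """In a finite semigroup some power of x is idempotent: the powers
    x, x^2, x^3, ... must repeat, say x^i = x^j with i < j.  With
    p = j - i we have x^a = x^(a+p) for all a >= i, so for any multiple
    kp of p with kp >= i, x^(kp) * x^(kp) = x^(2kp) = x^(kp)."""
    powers, seen = [x], {x: 1}       # powers[t-1] = x^t; seen maps x^t -> t
    while True:
        nxt = mul(powers[-1], x)
        if nxt in seen:
            i, j = seen[nxt], len(powers) + 1
            p = j - i
            kp = p * (-(-i // p))    # smallest multiple of p that is >= i
            return powers[kp - 1]
        seen[nxt] = len(powers) + 1
        powers.append(nxt)

# In the multiplicative semigroup Z/12, powers of 2 are 2, 4, 8, 4, 8, ...
e = idempotent_power(2, lambda a, b: a * b % 12)
assert e == 4 and (e * e) % 12 == e
```

Ellis's theorem replaces this finite cycling argument with a Zorn's Lemma argument on closed subsemigroups, but the algebraic conclusion is the same.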
For each m < ω, let Bm ⊆ P(Hm ) be the Boolean algebra generated by the minimal
subsets of Hm . Let B′m be the Boolean algebra {T ⊆ Hm : ∃n ≥ m (inm (T ) ∈ Bn )}.
Lemma 4.3. Fix M ⊆ S(G) a minimal subflow. The following are equivalent.
(1) Retractions of S(G) onto M separate points of S(G),
(2) For every m < ω, we have B′m = P(Hm ).
Proof. Suppose that there are α 6= γ ∈ S(G) which cannot be separated by retractions. Find
m < ω with α(m) 6= γ(m), and find T ⊆ Hm with T ∈ α(m), T 6∈ γ(m). Note that for
every n ≥ m, we have inm (T ) ∈ α(n) and inm (T ) 6∈ γ(n). Towards a contradiction, suppose
for some n ≥ m that inm (T ) was a Boolean combination of minimal sets A1 , ..., Ak ⊆ Hn .
By our assumption, α(n) and γ(n) agree on the membership of each Ai , hence also on the
membership of T , a contradiction.
Conversely, suppose that T ⊆ Hm is not in the Boolean algebra B′m . Let S(B′m ) denote
the Stone space of B′m ; form the inverse limit lim←− S(B′n ), and let p ∈ lim←− S(B′n ) be chosen
so that p(m) doesn’t decide T . Then find α, γ ∈ lim←− βHn with T ∈ α(m), T ∉ γ(m) which
both extend p. Then α and γ cannot be separated by retractions.
Notice that item (2) of Lemma 4.3 does not depend on M . In general, the relation of
whether x 6= y ∈ S(G) can be separated by retractions does not depend on the minimal
subflow of S(G) chosen, but we postpone this discussion until the end of section 6 (see the
discussion after Theorem 5.11).
Now suppose that (K∗ , K) is an excellent pair of Fraı̈ssé classes. Given a set of expansions
E ⊆ K∗ (Am ) and K0 ∈ XK∗ , let Hm (E, K0 ) = {f ∈ Hm : Am (f, K0 ) ∈ E}. If n ≥ m and
A0n ∈ K∗ (An ), we set Emb(E, A0n ) = {f ∈ Emb(Am , An ) : Am (f, A0n ) ∈ E}.
Proposition 4.4. S ⊆ Hm is minimal iff there is K0 ∈ XK∗ and E ⊆ K∗ (Am ) so that
S = Hm (E, K0 ).
Proof. One direction is easy once we note that given E ⊆ K∗ (Am ), the map ϕE : XK∗ → 2Hm
given by ϕE (K0 ) = Hm (E, K0 ) is a map of G-flows. In the other direction, let S ⊆ Hm be
minimal. Then Y := χS · G ⊆ 2Hm is a minimal G-flow, so fix a G-map ϕ : XK∗ → Y . Note
that ϕ(K∗ ) must be a G∗ -fixed point, so by ultrahomogeneity of K∗ it must be of the form
Hm (E, K∗ ) for some E ⊆ K∗ (Am ). It follows that ϕ = ϕE , so in particular S = Hm (E, K0 )
for some K0 ∈ XK∗ .
The main tool allowing us to prove Theorem 1.2 is an explicit characterization of M (G)
for certain autormorphism groups G. The following facts can be found in [KPT].
Fact 4.5. Let K = Age(K), where K is any of the structures in the statement of Theorem
1.2. Let K∗ be the class of linearly ordered members of K. Then (K∗ , K) is an excellent pair.
Setting G = Aut(K), then M (G) = XK∗ is the space of linear orders of K.
The next theorem is the simplest case of Theorem 1.2. The following notion will be useful
in the proof. Given T ⊆ Hm and N ≥ m, an N -pattern of T is a set S ⊆ Emb(Am , AN ) so
that there is y ∈ HN with S = {f ∈ Emb(Am , AN ) : y ◦ f ∈ T }. We will be using Stirling’s
formula, which states that n! is asymptotically √(2πn)·(n/e)^n.
Theorem 4.6. Let K = {xn : n < ω} be a countable set with no structure (so G ≅ S∞ ),
and set Am = {xi : i < m}. Then for every m ≥ 2, B′m ⊆ 2^Hm is meager. In particular, any
T ⊆ Hm whose orbit is dense is not in B′m .
Proof. Let T ⊆ Hm have dense orbit. So for every N ≥ m, every S ⊆ Emb(Am , AN ) is an
N -pattern of T . Towards a contradiction, suppose for some n ≥ m that inm (T ) was a Boolean
combination of minimal sets B1 , ..., Bk ⊆ Hn . Let N ≫ n; we will obtain a contradiction
by counting the number of N -patterns in inm (T ), which by assumption is 2^(m!·(N choose m)) ≥ 2^(N^m/2).
Since |K∗ (An )| = n! and since there are N ! linear orders on AN , this gives us 2^(n!) · N !
possible N -patterns for each Bi by Proposition 4.4. Therefore any N -pattern of T must be
a Boolean combination of some k of these N -patterns. Each choice of k patterns results
in at most 2^(2^k) Boolean combinations, so the total number of possible patterns is at most
2^(2^k) · (2^(n!) · N !)^k. Noting that n and k remain fixed as we let N grow large, Stirling’s
formula shows that this bound grows like 2^(kN log₂ N), which for m ≥ 2 is far less than
2^(N^m/2), a contradiction.
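The counting argument is easily checked numerically on a log₂ scale: the pattern bound for a Boolean combination of minimal sets grows far slower than the pattern count forced by a dense orbit. The function names and sample parameters below are ours, chosen only for illustration.

```python
from math import factorial, lgamma, log

def log2_upper(n, k, N):
    """log2 of the bound 2^(2^k) * (2^(n!) * N!)^k on the number of
    N-patterns of a Boolean combination of k minimal subsets of H_n
    (proof of Theorem 4.6).  lgamma(N+1)/ln(2) computes log2(N!)."""
    return 2 ** k + k * (factorial(n) + lgamma(N + 1) / log(2))

def log2_lower(m, N):
    """log2 of the count 2^(N^m / 2) of N-patterns of a set with dense
    orbit, valid for m >= 2 and N large."""
    return N ** m / 2

m, n, k = 2, 3, 4
for N in (50, 100, 200):
    assert log2_upper(n, k, N) < log2_lower(m, N)
```

The gap only widens as N grows, since the upper bound is O(kN log N) while the lower bound is N^m/2 with m ≥ 2.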
We now consider the case where K = Flim(K) is the random r-uniform hypergraph for
some r ≥ 2. In order to generalize the arguments in the proof of Theorem 4.6, we will
need some control over the exhaustion K = ⋃n An . We will do this by not specifying an
exhaustion in advance, but instead determining parts of it as we proceed.
We will need the following notion. With K as above, let A ⊆ B ∈ K, and let C ⊆ D ∈ K.
We say that D extends C along A ⊆ B if for any f ∈ Emb(A, C), there is an h ∈ Emb(B, D)
with h|A = f .
Given C ∈ K, write |C| for the number of vertices in C.
Lemma 4.7. Let K be the class of r-uniform hypergraphs for some r ≥ 2. Let e ⊆ B ∈ K,
where e ∈ K is the hypergraph on r vertices consisting of an edge, and let C ∈ K with
|C| = N . Then there is D ∈ K extending C along e ⊆ B with |D| ≤ cN^(r−1) for some
constant c depending only on |B|.
Proof. Recall that given an r-uniform hypergraph C, a matching is a subset of the edges
of C so that each vertex is included in at most one edge. By Baranyai’s theorem [B],
the edge set of C can be partitioned into M1 , ..., Mℓ with each Mi a matching and with
ℓ ≤ (r⌈N/r⌉ choose r)/⌈N/r⌉ ≈ c0 N^(r−1). For each i ≤ ℓ and j ≤ r!, let Dij be a set of |B| − r new
vertices. We will define the hypergraph D on vertex set C ∪ ⋃i≤ℓ ⋃j≤r! Dij . First add edges
to D so that Dij ≅ B \ e. For each e′ ∈ Mi , enumerate the embeddings fj : e → C with
range e′. Add edges to D so that each fj extends to an embedding hj : B → D with range
e′ ∪ Dij . This is possible since each Mi is a matching. The hypergraph D has |D| ≤ cN^(r−1)
as desired.
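The Baranyai bound on the number of matchings can be verified to grow like N^(r−1): a quick computation (helper name and sample values are ours) shows the ratio to N^(r−1) stabilizing as N grows.

```python
from math import comb, ceil

def matchings_bound(N, r):
    """The bound ℓ ≤ C(r⌈N/r⌉, r) / ⌈N/r⌉ from Baranyai's theorem, used
    in the proof of Lemma 4.7; C(·,·) is the binomial coefficient."""
    q = ceil(N / r)
    return comb(r * q, r) // q

# The bound is ≈ c0 * N^(r-1): the ratio to N^(r-1) stabilizes as N grows.
r = 3
ratios = [matchings_bound(N, r) / N ** (r - 1) for N in (300, 600, 1200)]
assert max(ratios) / min(ratios) < 1.1
```

For r = 3 the bound is (3q−1)(3q−2)/2 with q = ⌈N/3⌉, so the ratio tends to 1/2.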
Theorem 4.8. Let K be the class of r-uniform hypergraphs for r ≥ 2, with K = Flim(K).
Let Ar ⊆ K be an edge on r vertices. Then if T ⊆ Hr has dense orbit, then T ∉ B′r .
Remark. Though we are not specifying an exhaustion in advance, we will still use some of the
associated notation. In particular, when we write Am for some m < ω, we mean a subgraph
of K on m vertices.
Proof. Suppose towards a contradiction that there were some An ⊇ Ar so that inr (T )
was a Boolean combination of minimal sets B1 , ..., Bk ⊆ Hn . Let N ≫ n, and fix AN ⊇ An
with at least (N choose r)/2 edges. Let AN′ ⊇ AN extend AN along Ar ⊆ An with
N′ ≈ cN^(r−1) as guaranteed by Lemma 4.7. We will obtain a contradiction by counting the
number of N′-patterns of inr (T ). Exactly as in the proof of Theorem 4.6, there are at most
2^(2^k) · (2^(n!) · N′!)^k ≈ 2^(kN′ log₂ N′) ≈ 2^(kN^(r−1) log₂ N)
many N′-patterns. But since T has dense orbit, there must be at least
2^((N choose r)/2) ≈ 2^(cN^r) many N′-patterns of inr (T ), a contradiction.
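The point of Lemma 4.7 is that the extension AN′ is only polynomially larger than AN, of size ≈ cN^(r−1), so the pattern bound at level N′ still loses to the 2^(cN^r) patterns forced by a dense orbit. A log₂-scale check (our own helper names and sample parameters):

```python
from math import comb, factorial, lgamma, log

def log2_pattern_bound(n, k, Nprime):
    """log2 of 2^(2^k) * (2^(n!) * N'!)^k, the bound on N'-patterns of a
    Boolean combination of k minimal subsets of H_n."""
    return 2 ** k + k * (factorial(n) + lgamma(Nprime + 1) / log(2))

def log2_dense_orbit_count(N, r):
    """log2 of 2^(C(N,r)/2), the N'-pattern count forced by a dense
    orbit when A_N has at least C(N,r)/2 edges."""
    return comb(N, r) / 2

r, n, k, c = 3, 4, 3, 2
for N in (3000, 5000):
    Nprime = c * N ** (r - 1)          # N' ≈ cN^(r-1) from Lemma 4.7
    assert log2_pattern_bound(n, k, Nprime) < log2_dense_orbit_count(N, r)
```

The upper bound is O(kN^(r−1) log N) against a lower bound of order N^r, so the inequality holds for all sufficiently large N.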
We next turn to the class K of Kr -free graphs for some r ≥ 3. We will need a result similar
to Lemma 4.7, but the given proof will not work as the construction doesn’t preserve being
Kr -free. Recall that the Ramsey number R(r, n) of r and n is the smallest integer such that
any graph on R(r, n) vertices either contains a clique of size r or an independent set of size
n.
Lemma 4.9. Let K be the class of Kr -free graphs for some r ≥ 3. Let e ⊆ B ∈ K, where
e is an edge, and let C ∈ K with |C| = N . Then there is D ∈ K extending C along e ⊆ B
with |D| ≤ cN^(2(r−2)/(r−1)) .
Proof. Let R(r, n) be the Ramsey number of r and n. In [AKS], it is shown that R(r, n) =
o(n^(r−1)). Since C is Kr -free, this implies that C has an independent set of size at least
N^(1/(r−1)); by repeatedly removing independent sets, we see that the chromatic number of C
is at most ℓ ≈ N^((r−2)/(r−1)). Write C = C1 ⊔ · · · ⊔ Cℓ so that each Ci is an independent set.
For every ordered pair (i, j) of distinct indices with i, j ≤ ℓ, let D(i,j) be a set of |B| − 2 new
vertices. We will define the graph D on vertex set C ∪ ⋃{i,j}∈[ℓ]² (D(i,j) ∪ D(j,i) ). First add
edges to D so that D(i,j) ≅ B \ e; fix h0 : B \ e → D(i,j) an isomorphism. Write e = {a, b}; if
f : e → C with f (a) ∈ Ci and f (b) ∈ Cj , then add edges to D so that h0 ∪ f := h : B → D is an
embedding with range f (e) ∪ D(i,j) . The graph D is Kr -free and has |D| ≤ cN^(2(r−2)/(r−1))
as desired.
D. BARTOŠOVÁ AND A. ZUCKER
Theorem 4.10. Let K be the class of Kr-free graphs for some r ≥ 3, with K = Flim(K). Let A2 ⊆ K be an edge. Then if T ⊆ H2 has dense orbit, then T ∉ B2^0.
As in the proof of Theorem 4.8, we will not specify an exhaustion in advance, but we will
still use some of the notational conventions.
Proof. Suppose towards a contradiction that there were some graph An ⊇ A2 so that in2(T) was a Boolean combination of minimal sets B1, ..., Bk ⊆ Hn. Let N ≫ n, and fix a graph AN ⊇ An with at least (N choose 2)/r edges. Let A_{N′} ⊇ AN extend AN along A2 ⊆ An with N′ ≈ cN^{2(r−2)/(r−1)} as guaranteed by Lemma 4.9. We now obtain a contradiction by counting N′-patterns. Once again, there are at most C^{N′} · e^{kN′} ≈ C^{N^{2(r−2)/(r−1)}} · e^{kN^{2(r−2)/(r−1)}} many N′-patterns in in2(T), which contradicts the fact that there are at least 2^{(N choose 2)/r} ≈ 2^{cN^2} many N′-patterns.
We end this section with a conjecture. While it is a strict sub-conjecture of Conjecture
1.1, we think it might be more easily approached.
Conjecture 4.11. Let G be a closed, non-compact subgroup of S∞ with metrizable universal minimal flow. Then S(G) ≇ E(M(G)).
5. A closer look at S∞
In this section, we take a closer look at S(S∞ ), with an eye towards understanding which
pairs of points x 6= y ∈ S(S∞ ) can be separated by retractions. We view S∞ as the group of
permutations of ω. We can realize ω as a Fraı̈ssé structure in the empty language. We set
An = n, so that Hn is the set of all injections from n into ω, and for m ≤ n, Emb(Am , An )
is the set of all injections from m into n. We will often abuse notation and write s ∈ Hm as
the tuple (s0 , ..., sm−1 ), where si = s(i).
We start by developing some notions for any automorphism group. Let f ∈ Emb(Am , An ).
If F ⊆ P(Hm ) is a filter, then we write f (F) for the filter generated by {f (T ) : T ∈ F}.
If H ⊆ P(Hn ) is a filter, then f˜(H) is the push-forward filter {T ⊆ Hm : f (T ) ∈ H}. This
may seem like a conflict of notation since f˜ : βHn → βHm is the extended dual map of f .
We can justify this notation as follows. To each filter H on Hn, we associate the closed set X_H := ⋂_{A∈H} Ā ⊆ βHn. Conversely, given a closed set X ⊆ βHn, we can form the filter of clopen neighborhoods F_X := {A ⊆ Hn : X ⊆ Ā}. Then we obtain the identity
X_{f̃(H)} = f̃(X_H).
A similar identity holds given a filter F on Hm:
X_{f(F)} = f̃^{−1}(X_F).
Let Y ⊆ S(G) be closed. Let πm : S(G) → βHm be the projection map. Then πm(Y) is a closed subset of βHm. Write F_m^Y for F_{πm(Y)}. For n ≥ m, the filter F_n^Y extends the filter i_m^n(F_m^Y) and ĩ_m^n(F_n^Y) = F_m^Y. Conversely, given filters F_m on Hm for every m < ω such
SAMUEL COMPACTIFICATIONS
that F_n extends i_m^n(F_m) and with ĩ_m^n(F_n) = F_m, there is a unique closed Y ⊆ S(G) with F_m = F_m^Y for each m < ω. We will call such a sequence of filters compatible.
We will need to understand the filters F_m^M when M ⊆ S(G) is a minimal subflow. It turns out that these filters are characterized by a certain property of their members.
Definition 5.1. Given T ⊆ Hm , we say that T is thick if either of the following equivalent
items hold (see [Z1]).
(1) T ⊆ Hm is thick iff χHm ∈ χT · G.
(2) T ⊆ Hm is thick iff for every n ≥ m, there is s ∈ Hn with s ◦ Emb(Am , An ) ⊆ T .
We can now state the following fact from [Z1].
Theorem 5.2. Let G be an automorphism group, and let M ⊆ S(G) be closed. Then M is a minimal subflow iff each F_m^M is a maximal filter of thick sets.
Another observation is the following.
Proposition 5.3. Say Y ⊆ S(G) is a subflow, and let T ∈ F_m^Y and f ∈ Emb(Am, An). Then f(T) ∈ F_n^Y.
Proof. Pick g ∈ G with g|m = f . Then for any α ∈ S(G), we have T ∈ αg(m) iff f (T ) ∈ α(n).
As Y is G-invariant, the result follows.
We now turn our attention to G = S∞. Let {σi : i < m!} list the permutations of m, i.e. the members of Emb(Am, Am). Then by Proposition 5.3, for any T ∈ F_m^M we have ⋂_i σi(T) ∈ F_m^M. Call S ⊆ Hm saturated if whenever (a0, ..., am−1) ∈ S and σ is a permutation, then (aσ(0), ..., aσ(m−1)) ∈ S. We have just shown that F_m^M has a base of saturated sets.
Let us recall Ramsey's theorem: for every k, ℓ, r < ω with k ≤ ℓ, there is m ≥ ℓ so that for every coloring c : [m]^k → r, there is A ⊆ m with |A| = ℓ and |c([A]^k)| = 1. Ramsey's theorem will have interesting consequences for the collection of thick subsets of Hm.
Let ϕ : ⊔_n Hn → ⊔_n [ω]^n be the order forgetful map, i.e. for (y0, ..., ym−1) ∈ Hm, we set
ϕ(y0 , ..., ym−1 ) = {y0 , ..., ym−1 } ∈ [ω]m . Any filter F on Hm pushes forward to a filter ϕ(F)
on [ω]m . We can define a thick subset of [ω]m in a very similar fashion to a thick subset of
Hm ; more precisely, we say T ⊆ [ω]m is thick iff for every n ≥ m, there is s ∈ [ω]n with
[s]m ⊆ T . Call S ⊆ [ω]m thin if it is not thick. We now have the following crucial corollary
of Ramsey’s theorem: if T ⊆ [ω]m is thick and T = T0 ∪ · · · ∪ Tk , then some Ti is thick. In
particular, if H is a thick filter on [ω]^m, i.e. a filter containing only thick sets, then we can
extend H to a thick ultrafilter. It also follows that for every m < ω, the collection of thin
subsets of [ω]m forms an ideal.
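The finite pigeonhole content of Ramsey's theorem can be checked directly in small cases. The following Python sketch (an illustration of ours, not part of the original text) verifies the base instance R(3, 3) = 6 by exhausting all 2-colorings of the edges of K6 and exhibiting a triangle-free 2-coloring of K5:

```python
from itertools import combinations, product

def has_mono_triangle(n, color):
    # color maps each 2-subset of range(n) (as a frozenset) to 0 or 1
    return any(color[frozenset((a, b))] == color[frozenset((b, c))] == color[frozenset((a, c))]
               for a, b, c in combinations(range(n), 3))

# every 2-coloring of the edges of K_6 contains a monochromatic triangle ...
edges6 = [frozenset(e) for e in combinations(range(6), 2)]
assert all(has_mono_triangle(6, dict(zip(edges6, bits)))
           for bits in product((0, 1), repeat=len(edges6)))

# ... while the "pentagon" coloring of K_5 (sides vs. diagonals) has none
pent = {frozenset((i, j)): int(abs(i - j) % 5 in (1, 4))
        for i, j in combinations(range(5), 2)}
assert not has_mono_triangle(5, pent)
```

The exhaustive search over all 2^15 colorings of K6 runs in a few seconds; the same brute-force approach is infeasible already for the next Ramsey numbers.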
Theorem 5.4. Let M ⊆ S(S∞) be a minimal right ideal. Then for every m < ω, ϕ(F_m^M) is a thick ultrafilter. Conversely, if p ∈ β[ω]^m is a thick ultrafilter, then {ϕ^{−1}(T) : T ∈ p} generates a maximal thick filter on Hm, hence there is a minimal M ⊆ S(S∞) with p = ϕ(F_m^M).
Proof. Clearly ϕ(F_m^M) is a thick filter. Towards a contradiction, suppose it is not an ultrafilter, and extend it to a thick ultrafilter p ∈ β[ω]^m. Let T ∈ p \ ϕ(F_m^M). Then ϕ^{−1}(T) ∉ F_m^M. However, ϕ^{−1}(T) ∩ S is thick for every saturated S ∈ F_m^M. As saturated sets form a base for F_m^M, this contradicts the maximality of F_m^M.
Now let p ∈ β[ω]^m be a thick ultrafilter. Then F := {ϕ^{−1}(T) : T ∈ p} generates a thick filter. Suppose S ⊆ Hm and {S} ∪ F generated a thick filter strictly larger than F. We may assume S is saturated. Then ϕ(S) ∈ p, so ϕ^{−1}(ϕ(S)) = S ∈ F, a contradiction.
Notice that if p ∈ β[ω]^n is thick and m ≤ n, then there is a unique thick ultrafilter q ∈ β[ω]^m with the property that {a ∈ [ω]^n : [a]^m ⊆ S} ∈ p for every S ∈ q. Certainly such a q must be unique. To see that this q exists, suppose [ω]^m = S ⊔ T. Then the set {a ∈ [ω]^n : [a]^m ∩ S ≠ ∅ and [a]^m ∩ T ≠ ∅} is not thick. We will write π_m^n(p) for this q. If M ⊆ S(G) is a minimal right ideal and p = ϕ(F_n^M), then we have π_m^n(p) = ϕ(F_m^M).
Let LO(ω) be the space of linear orders on ω. Viewed as a subset of the right shift 2^{H2}, LO(ω) becomes an S∞-flow. It is known (see [KPT] or [GW]) that LO(ω) ≅ M(S∞). Indeed, we saw in section 4 that if K is the class of finite sets and K* is the class of finite linear orders, then (K*, K) is an excellent pair, and X_{K*} ≅ LO(ω). If M ⊆ S(S∞) is a minimal right ideal and < ∈ LO(ω), then the map λ : M → LO(ω) given by λ(α) = <·α := lim_{gi→α} <·gi is an S∞-flow isomorphism. We will often write <α for <·α, and we will write > for the reverse linear order of <.
If <0, <1 ∈ LO(ω) and m ≥ 2, define the set
Am(<0, <1) = {{a0, ..., am−1} ∈ [ω]^m : ∀i, j < m (ai <0 aj ⇔ ai <1 aj)}
and define Bm(<0, <1) = Am(<0, >1). If s ∈ Hm, we say that <0 and <1 agree on s if ϕ(s) ∈ Am(<0, <1), and we say that they anti-agree on s if ϕ(s) ∈ Bm(<0, <1). When m = 2 we often omit the subscript. If M is a minimal right ideal, then ϕ(F_2^M) contains exactly one of A(<0, <1) or B(<0, <1). Let AM ⊆ LO(ω) × LO(ω) be defined by AM = {(<0, <1) : A(<0, <1) ∈ ϕ(F_2^M)}. Then AM is certainly reflexive and symmetric. To see that AM is an equivalence relation, note that A(<0, <1) ∩ A(<1, <2) ⊆ A(<0, <2). Furthermore, AM has exactly two equivalence classes; this is because B(<0, <1) ∩ B(<1, <2) ⊆ A(<0, <2).
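The two inclusions just used are finite combinatorial facts and can be tested exhaustively on a small vertex set. The brief Python check below (our illustration; the function names are ours) computes the agreement set of two linear orders on {0, ..., 4} and verifies that A ∩ A ⊆ A, that B ∩ B ⊆ A, and that every pair lies in A ∪ B:

```python
from itertools import combinations, permutations

def A(lt0, lt1, m=5):
    # set of pairs on which the two linear orders agree; a linear order is
    # given as a tuple listing the elements of range(m) in increasing order
    pos0 = {x: i for i, x in enumerate(lt0)}
    pos1 = {x: i for i, x in enumerate(lt1)}
    return {frozenset((a, b)) for a, b in combinations(range(m), 2)
            if (pos0[a] < pos0[b]) == (pos1[a] < pos1[b])}

def B(lt0, lt1, m=5):
    # anti-agreement = agreement with the reverse of the second order
    return A(lt0, tuple(reversed(lt1)), m)

all_pairs = {frozenset(p) for p in combinations(range(5), 2)}
some_orders = list(permutations(range(5)))[::11]   # a spread-out sample of orders
for l0 in some_orders:
    for l1 in some_orders:
        for l2 in some_orders:
            assert A(l0, l1) & A(l1, l2) <= A(l0, l2)   # agreement is transitive
            assert B(l0, l1) & B(l1, l2) <= A(l0, l2)   # two reversals cancel
            assert (A(l0, l1) | B(l0, l1)) == all_pairs
```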
Lemma 5.5. Let M ⊆ S(S∞) be a minimal right ideal, and let (<0, <1) ∈ AM. Then for any m < ω, we have Am(<0, <1) ∈ ϕ(F_m^M).
Proof. Let {fi : i < k} enumerate Emb(A2, Am). Then ⋂_i fi(ϕ^{−1}(A(<0, <1))) ∈ F_m^M, and this is exactly the desired set.
Lemma 5.6. Let M ⊆ S(S∞ ) be a minimal right ideal, and let α ∈ M . Then the following
are equivalent:
(1) α is an idempotent,
(2) For any <∈ LO(ω), we have (<, <α ) ∈ AM ,
(3) There is <∈ LO(ω) with (<, <α ) ∈ AM .
Proof. Suppose α ∈ M is an idempotent, and let <∈ LO(ω). Then considering i2 = (x0 , x1 ) ∈
Emb(A2 , A2 ), we have x0 <α x1 iff {f ∈ H2 : f0 < f1 } ∈ α(2). But since α is an idempotent,
this is equivalent to {f ∈ H2 : f0 <α f1 } ∈ α(2). But this implies that ϕ−1 (A(<, <α )) ∈ α(2),
implying that A(<, <α ) ∈ ϕ(F2M ).
Conversely, suppose α ∈ M and <∈ LO(ω) with (<, <α ) ∈ AM . If f ∈ Emb(A2 , An ), then
we have f0 <α f1 iff {s ∈ Hn : s(f0 ) < s(f1 )} ∈ α(n). By Lemma 5.5, we see that this is iff
{s ∈ Hn : s(f0) <α s(f1)} ∈ α(n). It follows that <·α·α = <·α, so α is idempotent.
Theorem 5.7. Let M, N ⊆ S(S∞ ) be minimal right ideals. The following are equivalent.
(1) AM = AN ,
(2) If u ∈ M and v ∈ N are idempotents, then uv ∈ M is also idempotent.
Proof. Suppose AM 6= AN , with (<0 , <1 ) ∈ AN \ AM . Find u ∈ M with <0 ·u = <0 , and
find v ∈ N with <0 ·v = <1 . By Lemma 5.6, u and v are idempotents and uv is not an
idempotent.
Conversely, suppose u ∈ M and v ∈ N are idempotents with uv not idempotent. Find
<0 ∈ LO(ω) with <0 ·u = <0 , and let <1 = <0 ·v. Since v is idempotent, we have by Lemma
5.6 that (<0 , <1 ) ∈ AN ; but since uv is not idempotent, we have (<0 , <1 ) 6∈ AM .
It is easy to construct minimal right ideals M, N ⊆ S(S∞) with AM ≠ AN. Let <0, <1 ∈ LO(ω) be linear orders so that for every m < ω, there are s^m = (s^m_0, ..., s^m_{m−1}) ∈ Hm and t^m = (t^m_0, ..., t^m_{m−1}) ∈ Hm so that <0 and <1 agree on s^m and anti-agree on t^m. Let M ⊆ S(S∞) be a minimal subflow with ϕ^{−1}(A(<0, <1)) ∈ F_2^M, and let N ⊆ S(S∞) be a minimal subflow with ϕ^{−1}(B(<0, <1)) ∈ F_2^N. Then (<0, <1) ∈ AM \ AN.
We now turn our attention to constructing minimal right ideals M ≠ N ⊆ S(S∞) with AM = AN; this will prove Theorem 1.4 as a corollary of Theorem 5.7. To this end, we will construct two thick ultrafilters p ≠ q ∈ β[ω]^3 with π_2^3(p) = π_2^3(q), so that whenever M and N are minimal subflows of S(S∞) with ϕ(F_3^M) = p and ϕ(F_3^N) = q, then ϕ(F_2^M) = ϕ(F_2^N). In particular, this implies that AM = AN.
Recall that a selective ultrafilter is an ultrafilter p on ω with the property that for any
finite coloring c : [ω]2 → r, there is a p-large set A ⊆ ω which is monochromatic for c.
Another way of saying this is as follows. Given a set A ⊆ ω, set λA = [A]^2, and if F is a filter on ω, let λF be the filter generated by {λA : A ∈ F}. Then the ultrafilter p is selective iff λp is an ultrafilter. The existence of selective ultrafilters is independent of ZFC.
We will be considering the following generalizations of selective ultrafilters. Let m < ω. If T ⊆ [ω]^m, we set λT = {s ∈ [ω]^{m+1} : [s]^m ⊆ T}. If n > m, we set λ^{(n−m)}(T) = {s ∈ [ω]^n : [s]^m ⊆ T}. Notice that the λ^{(n−m)} operation is the same as applying λ (n − m)-many times, justifying this notation. If F is a filter on [ω]^m, we let λ^{(n−m)}F be the filter generated by {λ^{(n−m)}T : T ∈ F}. It can happen that for some T ∈ F we have λ^{(n−m)}T = ∅. We will usually be working under assumptions that prevent this from happening. For instance, if T ⊆ [ω]^m is thick, then λ^{(n−m)}T ≠ ∅ for every n > m. Even better, if F is a thick filter on [ω]^m, then λ^{(n−m)}F is a thick filter on [ω]^n.
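On a finite ground set the λ operation is easy to compute directly. The sketch below (our notation; `lam` stands for λ) checks two instances of the expected behavior, namely λ[A]^2 = [A]^3 and λ^(2)[A]^2 = [A]^4:

```python
from itertools import combinations

def lam(T, ground, m):
    # λT = {s ∈ [ground]^{m+1} : every m-element subset of s lies in T}
    return {s for s in map(frozenset, combinations(ground, m + 1))
            if all(frozenset(t) in T for t in combinations(s, m))}

ground = range(8)
A = (0, 2, 4, 6)
T = {frozenset(t) for t in combinations(A, 2)}            # T = [A]^2
assert lam(T, ground, 2) == {frozenset(s) for s in combinations(A, 3)}
assert lam(lam(T, ground, 2), ground, 3) == {frozenset(s) for s in combinations(A, 4)}
```

This also makes visible why λ of a thin set can be empty: if T omits every pair from some large block, no (m+1)-set has all of its m-subsets inside T.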
Definition 5.8. Let p ∈ β[ω]^m be a thick ultrafilter. We say that p is (m, n)-selective if λ^{(n−m)}p is an ultrafilter. We say that p is weakly (m, n)-selective if there is a unique thick ultrafilter extending the filter λ^{(n−m)}p.
If p ∈ β[ω]^m is a thick ultrafilter and q ∈ β[ω]^n is a thick ultrafilter extending λ^{(n−m)}p, then we have π_m^n(q) = p. Therefore to prove Theorem 1.4, it is enough to construct a thick ultrafilter p ∈ β[ω]^2 which is not weakly (2, 3)-selective. Indeed, if p ∈ β[ω]^2 is not weakly (2, 3)-selective, then there are thick ultrafilters q0 ≠ q1 both extending the filter λp, so π_2^3(q0) = π_2^3(q1).
Our construction proceeds in two parts. First we define a certain type of pathological
subset of [ω]3 and show that its existence allows us to construct p ∈ β[ω]2 which is not
weakly (2, 3)-selective. Then we show the existence of such a pathological set.
We begin by developing some abstract notions. Let Y be a set, and let I be a proper ideal on Y. Write S ⊆_I T if S \ T ∈ I. Let ψ : P(Y) → P(Y) be a map satisfying ψ² = ψ, S ⊆ ψ(S), S ⊆ T ⇒ ψ(S) ⊆ ψ(T), and ψ(∅) = ∅. Call a set S ⊆ Y ψ-closed or just closed if ψ(S) = S, and call S near-closed if there is a closed set T with S∆T ∈ I. Call a set S ⊆ Y (< ℵ0)-near-closed if there are k < ω and closed T0, ..., Tk−1 with S∆(⋃_{i<k} Ti) ∈ I. Notice that a finite union of near-closed sets is (< ℵ0)-near-closed.
Now suppose S ⊆ Y is a set which is not (< ℵ0 )-near-closed. If p, q ∈ βY , we say that p
ψ-intertwines q over S modulo I if the following three items all hold:
(1) {S \ T : T is near-closed and T ⊆I S} ⊆ p,
(2) {ψ(T ) \ S : T ∈ p, T ⊆ S} ⊆ q,
(3) p and q extend the filterdual of I.
If ψ, S, and I are understood, we will just say that p intertwines q. Notice in (1) that
if T is near-closed with T ⊆I S, then S ∩ T is also near-closed, so it is enough to consider
near-closed T with T ⊆ S.
Lemma 5.9. Fix S ⊆ Y which is not (< ℵ0 )-near-closed.
(1) If B ⊆ Y with B ∈ I, then B is near-closed. Hence S 6∈ I.
(2) There are p, q ∈ βY so that p intertwines q.
Proof. The first part follows since the empty set is closed.
Since S is not (< ℵ0 )-near-closed, we have that {S \ T : T near-closed and T ⊆I S}
generates a filter F extending the filterdual of I. Let p ∈ βY be any ultrafilter extending
F.
Now let T ∈ p. Then ψ(T ) \ S 6∈ I; otherwise we would have ψ(T ) ⊆I S, so S \ ψ(T ) ∈ p,
contradicting that T ∈ p. Also note by monotonicity of ψ that (ψ(T0 ) ∩ ψ(T1 )) \ S ⊇
ψ(T0 ∩ T1 ) \ S, so the collection {ψ(T ) \ S : T ∈ p} generates a filter H avoiding I; letting q
be any ultrafilter extending both H and the filterdual of I, we see that p intertwines q.
We now apply these ideas. Let Y = [ω]3 , and let I be the thin ideal. Given T ⊆ [ω]3 , view
T as a 3-uniform hypergraph, and form the shadow graph ∂T := {{a, b} ∈ [ω]2 : ∃c({a, b, c} ∈
T)}. Define ψ(T) = λ∂T. In words, ψ(T) is the largest hypergraph with ∂ψ(T) = ∂T. More generally, we can set Y = [ω]^n and let I be the ideal of subsets of [ω]^n which are not thick. If m < n and T ⊆ [ω]^n, we set ∂^{(n−m)}T = {s ∈ [ω]^m : ∃t ∈ [ω]^{n−m} (s ∪ t ∈ T)}. Then we can set ψ(T) = λ^{(n−m)}∂^{(n−m)}T.
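For a small 3-uniform hypergraph (so n = 3, m = 2) these operations can be computed by machine. The sketch below (our illustration, with ad hoc names) checks the closure-operator properties the abstract framework requires of ψ = λ∂, namely S ⊆ ψ(S) and ψ² = ψ, on an example where ψ(T) is strictly larger than T:

```python
from itertools import combinations

GROUND = range(6)

def shadow(T):
    # ∂T: all 2-subsets covered by some triple of T
    return {frozenset(p) for t in T for p in combinations(sorted(t), 2)}

def lam(Gr):
    # λG: all triples of GROUND all of whose 2-subsets lie in G
    return {frozenset(s) for s in combinations(GROUND, 3)
            if all(frozenset(p) in Gr for p in combinations(s, 2))}

def psi(T):
    # ψ = λ∂: the largest 3-uniform hypergraph with the same shadow as T
    return lam(shadow(T))

T = {frozenset({0, 1, 2}), frozenset({1, 2, 3}), frozenset({0, 1, 3})}
assert T <= psi(T)                         # S ⊆ ψ(S)
assert psi(psi(T)) == psi(T)               # ψ² = ψ
assert frozenset({0, 2, 3}) in psi(T) - T  # ψ(T) can be strictly larger
```

The idempotence ψ² = ψ holds in general because ∂λG ⊆ G for any graph G, while S ⊆ ψ(S) gives the reverse inclusion.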
Theorem 5.10. Let Y = [ω]^n, let I be the thin ideal, and let ψ = λ^{(n−m)}∂^{(n−m)} for some m < n. Suppose S ⊆ [ω]^n is not (< ℵ0)-near-closed, and say p, q ∈ β[ω]^n where p intertwines q. Then π_m^n(p) = π_m^n(q).
Proof. Suppose towards a contradiction that p′ := π_m^n(p) ≠ q′ := π_m^n(q) as witnessed by A ⊆ [ω]^m with A ∈ p′, [ω]^m \ A ∈ q′. Then setting B := {s ∈ [ω]^n : [s]^m ⊆ A} and C := {s ∈ [ω]^n : [s]^m ⊆ [ω]^m \ A}, we have B ∈ p, C ∈ q, and B ∩ C = ∅. Note that both B and C are ψ-closed. Since p and q are intertwined, we have ψ(B ∩ S) \ S ∈ q, so in particular B \ S ∈ q. But since C ∈ q, this is a contradiction.
The next theorem along with Theorems 5.10 and 5.7 will prove Theorem 1.4.
Theorem 5.11. With I and ψ as in Theorem 5.10, there is S ⊆ [ω]^n which is not (< ℵ0)-near-closed.
Proof. The following elegant proof is due to Anton Bernshteyn.
We take S to be the random n-uniform hypergraph. Suppose towards a contradiction that S was (< ℵ0)-near-closed; we write S = S0 ∪ · · · ∪ Sk−1 for some k < ω, with each Si near-closed. Let Ti ⊆ [ω]^n be a ψ-closed set with Si∆Ti ∈ I, and write T = ⋃_{i<k} Ti. So S∆T ∈ I. This means that there is some ℓ < ω so that the hypergraph S∆T contains no clique of size ℓ.
We now compute an upper bound on the number of induced subgraphs of S that can appear on N vertices V := {v0, ..., vN−1} ⊆ ω. Since S is the random n-uniform hypergraph, there must be 2^{(N choose n)} many possibilities. But by assumption, S = T∆G, where G is some hypergraph with no cliques of size ℓ. Since an induced subgraph of a ψ-closed graph is ψ-closed, each Ti|V is determined by ∂^{(n−m)}(Ti|V), so in particular, there are at most 2^{(N choose m)} many possibilities for each Ti|V, so at most 2^{k(N choose m)} possibilities for T|V. As for G, we need an estimate on the number of ℓ-free n-uniform hypergraphs on N vertices. It is a fact that for some constant c > 0 depending only on ℓ and n, we can find c(N choose n) subsets of N of size ℓ which pairwise have intersection smaller than n. By a probabilistic argument, it follows that the proportion of n-uniform hypergraphs on N vertices which are ℓ-free is at most
(1 − 2^{−(ℓ choose n)})^{c(N choose n)} ≤ 2^{−c·2^{−(ℓ choose n)}·(N choose n)} := 2^{−d(N choose n)}.
Multiplying together the number of choices for T|V with the number of choices for G|V, we have that the number of possibilities for S|V is at most
(2^{(1−d)(N choose n)})(2^{k(N choose m)}) ≪ 2^{(N choose n)}.
This shows that S is not (< ℵ0)-near-closed.
Let us now briefly discuss why Theorem 1.4 implies Theorem 1.3. Recall (see [HS]) that
in any compact left-topological semigroup S, the smallest ideal K(S) is both the union of
the minimal right ideals and the union of the minimal left ideals. The intersection of any
minimal right ideal and any minimal left ideal is a group, so in particular contains exactly
one idempotent. More concretely, if M ⊆ S is a minimal right ideal and u ∈ M is an
idempotent, then Su is a minimal left ideal and M u = M ∩ Su. All the groups formed in
this way are algebraically isomorphic. When S = S(G) for some topological group G, we
can interpret this group as Aut(M(G)), the group of G-flow isomorphisms of M(G).
Fix M ⊆ S(G) a minimal subflow, and let ϕ : S(G) → M be a G-map. Letting p = ϕ(1G ),
then we must have ϕ = λp . It follows that ϕ is a retraction iff ϕ = λu for some idempotent
u ∈ M . Furthermore, if p ∈ M , then there is a unique idempotent u ∈ M with p = pu ∈ M u.
It follows that for some q ∈ M we have λq ◦ λp = λu .
Now suppose N ⊆ S(G) is another minimal right ideal, and that x 6= y ∈ S(G) can be
separated by a retraction ψ onto N . Pick any p ∈ M and form the G-map λp ◦ ψ. Notice
that λp |N is an isomorphism. For some q ∈ M we have λp ◦ ψ = λq . Then for some r ∈ M ,
we have λr ◦ λq = λu a retraction. It follows that x and y are also separated by λu . Hence
the relation of being separated by a retraction does not depend on the choice of minimal
subflow M ⊆ S(G).
Now let G = S∞ , and let M 6= N be the minimal right ideals found in Theorem 1.4. Let
L be any minimal left ideal, and let u ∈ M ∩ L and v ∈ N ∩ L be idempotents. We will
show that u and v cannot be separated by retractions, so let ϕ : S(G) → M be a retraction.
Then ϕ = λw for some idempotent w ∈ M . Then ϕ(u) = wu = u since idempotents in M
are left identities for M . But now consider ϕ(v) = wv. By our assumption on M and N , wv
is an idempotent. However, we must also have wv ∈ M ∩ L since M and L are respectively
right and left ideals. It follows that wv = u, so ϕ(u) = ϕ(v) as desired.
6. Proximal and Distal
The technique of finding M, N ⊆ S(G) minimal subflows with J(M ) ∪ J(N ) a semigroup
allows for a quick solution to Ellis’s problem for some Polish groups G.
Recall from the introduction that a pair of points x, y in a G-flow X is called proximal if
there is p ∈ E(X) with xp = yp, and X is proximal if every pair of points is proximal. Now suppose that M(G) is proximal. Then every element of M is an idempotent; to see why, notice that it suffices to show that M ∩ L is a singleton whenever L is a minimal left ideal. Indeed, suppose u ≠ p ∈ M ∩ L, with u idempotent. Suppose that (u, p) were proximal, i.e. that for some q ∈ S(G) we have uq = pq. Since M ∩ L is a group with identity u, we must have pu = p. Now as M is a minimal right ideal, find r ∈ M with uqr = u. But then p = pu = puqr = pqr = uqr = u. This is a contradiction, so (u, p) cannot be proximal.
A G-flow X is distal if every pair of non-equal points is distal, that is, not proximal. A
useful fact (see [A]) is that X is distal iff E(X) is a group. If M (G) is distal and M ⊆ S(G)
is a minimal subflow, then J(M ) is a singleton. To see this, note that if u, v ∈ J(M ), then
uv = vv = v, so (u, v) is a proximal pair. If u ∈ J(M ) is the unique idempotent, then the
map ϕ : E(M ) → M given by p → u · p is a G-flow isomorphism.
For automorphism groups G with M (G) proximal or distal, it follows that the conclusion
of Theorem 1.4 is automatic for any two minimal right ideals M 6= N . The same argument
for S∞ shows that any two idempotents of the same minimal left ideal cannot be separated
by retractions. Of course, we need to know that S(G) contains more than one minimal right
ideal; see ([Ba], Corollary 11) for a proof of this fact.
The following theorem collects some examples of Polish groups G with M (G) proximal.
Theorem 6.1. Let G be either Homeo(2^ω) or the automorphism group of the countably-infinite-dimensional vector space over a finite field. Then S(G) ≇ E(M(G)).
We now consider the case when M (G) is distal. The rest of this section is spent proving
Theorem 1.5.
Proof of Theorem 1.5. Suppose M(G) is distal and metrizable, and fix a minimal M ⊆ S(G). As M ≅ E(M) is a group, we see that Aut(M), the group of G-flow isomorphisms of M, acts transitively on M. Therefore by ch. 2, Theorem 13 of [A], M must be equicontinuous, hence a compact metrizable group. Furthermore, as M must have a comeager orbit by [BYMT], we have that M is a single orbit.
Let H = {g ∈ G : ug = u}, where u is the identity of the group M. By [MNT], H is extremely amenable. To show that H is normal, we show that the map ϕ : G → M given by ϕ(g) = ug is a homomorphism. Indeed, we have ug · uh = (ugu)h = ugh since u is a two-sided identity for M. This shows the existence of the short exact sequence from item (2) of Theorem 1.5.
Now suppose 1 → H →^i G →^ϕ K → 1 is a short exact sequence of groups with H extremely amenable and K compact metrizable. Notice that the action k · g := k · ϕ(g) turns K into a G-flow. Notice that ϕ is left uniformly continuous, so we can extend to a map ϕ : S(G) → K via ϕ(p) = lim_{gi→p} ϕ(gi). Let M ⊆ S(G) be minimal; then ϕ|M is a G-map.
Since H ⊆ G is extremely amenable, let u ∈ M be an H-fixed point. Now viewing K as
the right coset space {Hg : g ∈ G}, we build a G-equivariant map ψ : K → M by setting
ψ(Hg) = ug. As the image of ψ is dense in M , we will be done once we show that ψ is
continuous. To see this, notice that the triangle formed by ψ : K → M, ϕ : M → K, and λ_{ϕ(u)} : K → K commutes, i.e. ϕ ∘ ψ = λ_{ϕ(u)}. Indeed, let Hg ∈ K. Then ϕ(ψ(Hg)) = ϕ(ug) = ϕ(u) · g = ϕ(u) · Hg. Since λ_{ϕ(u)} and ϕ are continuous, we must have that ψ is continuous.
7. Some ultrafilters on [ω]2
This last section includes a short discussion of some ultrafilters motivated by the work in
section 5. The first main theorem of this section provides a counterpoint to Theorem 1.4.
Theorem 7.1. It is consistent with ZFC that there is a minimal subflow M ⊆ S(G) so that
if N ⊆ S(G) is a minimal subflow with J(M ) ∪ J(N ) a semigroup, then M = N .
The second theorem points out a key difference between selective ultrafilters and (2, 3)-selective ultrafilters. Recall that if p, q ∈ βω, then we say that q ≥RK p if there is a function f : ω → ω with f(q) = p. Another characterization of selective ultrafilters is that they are
exactly the ultrafilters which are minimal in the Rudin-Keisler order. The next theorem
shows that (2, 3)-selectives can be very far from Rudin-Keisler minimal.
Theorem 7.2. If p ∈ βω, there is a countably closed forcing P adding a (2, 3)-selective ultrafilter q with q ≥RK p.
As it turns out, these two theorems will both be proven using the same forcing construction.
We define a forcing P which is very similar to a forcing defined by Laflamme [L]. A slightly
more straightforward forcing would suffice for Theorem 7.1 where we don’t refer to a fixed
p ∈ βω, but with a bit more work, we can prove both theorems.
Definition 7.3. Fix p ∈ βω. Write ω = ⊔_n En with |En| = n. We define P = ⟨P, ≤⟩ as follows.
(1) A condition A ∈ P is a subset of ω so that for every k < ω, we have {n < ω :
|A ∩ En | ≥ k} ∈ p.
(2) We declare that B ≤ A iff B ⊆ A.
If A, B ∈ P, we define B ≼ A iff there is k < ω so that {m < ω : |Em ∩ (B \ A)| ≤ k} ∈ p. It is straightforward to see that ⟨P, ≼⟩ is a separative pre-order which is equivalent to P.
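To get a feel for Definition 7.3, here is a small Python check (our illustration). We realize the blocks E_n as consecutive intervals and, since every ultrafilter on ω contains all cofinite sets, we use cofiniteness (tested on an initial segment) as a stand-in for membership in p; the set keeping half of each E_n then satisfies the defining condition for every k we test:

```python
from itertools import islice

def E(n):
    # realize ω = ⊔ E_n with |E_n| = n as consecutive intervals
    start = n * (n - 1) // 2
    return set(range(start, start + n))

# A keeps the first half of each block E_n, so |A ∩ E_n| = ceil(n/2)
A = {x for n in range(1, 50) for x in islice(sorted(E(n)), (n + 1) // 2)}

# condition check: {n : |A ∩ E_n| ≥ k} should be in p; as a cofiniteness
# proxy we verify it contains every n ≥ 2k in our range
for k in range(1, 6):
    assert all(len(A & E(n)) >= k for n in range(2 * k, 50))
```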
Lemma 7.4. P is countably closed.
Proof. First notice that if ⟨An : n < ω⟩ is a ≼-decreasing sequence in P, then setting A′n = ⋂_{i≤n} Ai, we have that A′n is ≼-equivalent to An. So we may freely work with ≤-decreasing sequences.
Suppose ⟨An : n < ω⟩ is a ≤-decreasing sequence in P. Write S(m, k) = {n < ω : |Am ∩ En| ≥ k}. Note that S(m, k) ∈ p for every m, k < ω. Also, if m ≤ m′ and k ≤ k′, then S(m′, k′) ⊆ S(m, k). For m ≥ 1, we define Tm = S(m, m) \ S(m + 1, m + 1). Note that if m < ω, then ⋃_{n≥m} Tn = S(m, m). If m ≥ 1 and n ∈ Tm, then |Am ∩ En| ≥ m. We form B ∈ P by setting
B = ⋃_{m≥1} ⋃_{n∈Tm} (Am ∩ En).
For each m ≥ 1, we have {n < ω : |B ∩ En| ≥ m} ⊇ S(m, m) ∈ p, so B ∈ P. To see that B ≼ Am, we note that {n < ω : B ∩ En ⊈ Am} ⊆ ω \ S(m, m).
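The diagonal construction in this proof can be traced on a concrete decreasing sequence. In the sketch below (our finite illustration; the particular A_m are our choice, with membership in p again replaced by a check on an initial segment), A_m discards the first m points of each E_n, and the set B assembled from the pieces T_m is verified to be a condition lying ≼-below every A_m:

```python
def En(n):
    start = n * (n - 1) // 2
    return list(range(start, start + n))

def Am(m, n):
    # A_m ∩ E_n for the decreasing sequence that discards the first m points
    return set(En(n)[m:])

NMAX = 60
S = lambda m, k: {n for n in range(NMAX) if len(Am(m, n)) >= k}

B = set()
for m in range(1, NMAX // 2):
    for n in S(m, m) - S(m + 1, m + 1):   # n ∈ T_m
        B |= Am(m, n)

for m in range(1, 10):
    # |B ∩ E_n| ≥ m on S(m, m), witnessing that B is a condition ...
    assert all(len(B & set(En(n))) >= m for n in S(m, m) if n < 40)
    # ... and B ∩ E_n ⊆ A_m ∩ E_n there, witnessing B ≼ A_m
    assert all(B & set(En(n)) <= Am(m, n) for n in S(m, m) if n < 40)
```

Here T_m works out to {2m, 2m + 1}, so on the block E_n the set B agrees with A_{⌊n/2⌋}, which is exactly the diagonalization the proof performs.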
If A ∈ P, we set Ã = ⋃_n [A ∩ En]^2 ⊆ [ω]^2. The next proposition will prove Theorem 7.2.
Proposition 7.5. Let G ⊆ P be generic. Then G̃ := {Ã : A ∈ G} generates a thick ultrafilter
on [ω]2 which is (2, n)-selective for every n. Furthermore, this ultrafilter is RK-above p.
Proof. Set E2 = 1̃_P, and suppose E2 = S ⊔ T. Let A ∈ P. By Ramsey's theorem, there is some non-decreasing function k → b(2, k) increasing to infinity so that any 2-coloring of the complete graph on k vertices has a monochromatic clique of size b(2, k). If |A ∩ EN| = k, then let XN ⊆ A ∩ EN be chosen so that |XN| = b(2, k) and X̃N ⊆ S or X̃N ⊆ T. Define S′, T′ ⊆ ω, placing N ∈ S′ or N ∈ T′ depending on which outcome happens. WLOG suppose S′ ∈ p. Then letting X = ⋃_{N∈S′} XN, we have X ∈ P, X ≤ A, and X decides whether S or T is in the filter generated by G̃.
The argument that the ultrafilter generated by G̃ is (2, n)-selective is almost exactly the same. By Ramsey's theorem, there is some non-decreasing function k → b(n, k) increasing to infinity so that any 2-coloring of the complete n-uniform hypergraph on k vertices has a monochromatic clique of size b(n, k). Now letting En = λ^{(n−2)}(E2), fix a partition En = S ⊔ T. If A ∈ P, we can in a similar fashion find X ≤ A deciding whether S or T is in the filter λ^{(n−2)}(G̃).
Lastly, let ψ : E2 → ω be so that ψ({x, y}) = n iff {x, y} ⊆ En . Then if U ∈ V [G] is the
ultrafilter generated by G̃, then ψ(U) = p.
We now turn towards the proof of Theorem 7.1. To do this, we use Theorem 5.7. Working in V[G], let M_G ⊆ S(S∞) be the unique minimal subflow so that ϕ(F_2^{M_G}) is the ultrafilter generated by G̃. We need to show that {A(<0, <1) : (<0, <1) ∈ A_{M_G}} generates G̃. To see
why this is, fix A ∈ P. We may assume that if A ∩ En ≠ ∅, then |A ∩ En| ≥ 2. We will construct linear orders <0 and <1 so that A(<0, <1) = Ã.
First write ω = ⋃_n Xn, where X0 = ω \ A and Xn = A ∩ En for n ≥ 1. Some of the Xn may be empty, but this is fine. First define <0 and <1 on X0 to be any linear orders which completely disagree. Suppose <0 and <1 have been defined on X0 ∪ · · · ∪ Xn−1. First define <0 and <1 on Xn so that they agree. Now place Xn <0-below everything built so far and also <1-above everything built so far. Then A(<0, <1) = Ã as desired. This completes the proof of Theorem 7.1.
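This construction of <0 and <1 is concrete enough to run on an initial segment of ω. The following sketch (our finite illustration, with E_n realized as consecutive intervals and A taking two points from each E_n) builds the two orders exactly as described and confirms that their agreement set is Ã:

```python
from itertools import combinations

def En(n):
    start = n * (n - 1) // 2
    return list(range(start, start + n))

NB = 7                                             # use blocks E_1, ..., E_6
ground = [x for n in range(1, NB) for x in En(n)]
A = {x for n in range(2, NB) for x in En(n)[:2]}   # two points of each E_n, n ≥ 2

# X_0 = ω \ A, X_n = A ∩ E_n; <0 and <1 disagree completely on X_0, agree on
# each X_n, and each new X_n goes <0-below and <1-above everything so far
X0 = sorted(set(ground) - A)
lt0, lt1 = list(X0), list(reversed(X0))
for n in range(1, NB):
    Xn = sorted(A & set(En(n)))
    lt0 = Xn + lt0
    lt1 = lt1 + Xn

pos0 = {x: i for i, x in enumerate(lt0)}
pos1 = {x: i for i, x in enumerate(lt1)}
agree = {frozenset((a, b)) for a, b in combinations(ground, 2)
         if (pos0[a] < pos0[b]) == (pos1[a] < pos1[b])}
A_tilde = {frozenset(p) for n in range(1, NB)
           for p in combinations(sorted(A & set(En(n))), 2)}
assert agree == A_tilde
```

The point of the check is that every pair meeting two different blocks, or lying inside X_0, ends up in the disagreement set, while the pairs inside a single A ∩ E_n, and only those, survive into the agreement set.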
The proof of Theorem 7.1 suggests another type of ultrafilter on [ω]2 we can define. If
p ∈ β[ω]2 is thick, define Ap = {(<0 , <1 ) : A(<0 , <1 ) ∈ p}. As we saw in section 5, Ap is an
equivalence relation on LO(ω).
Definition 7.6. Let p ∈ β[ω]^2 be a thick ultrafilter. We call p a linear order ultrafilter if {A(<0, <1) : (<0, <1) ∈ Ap} generates p. Call p a weak linear order ultrafilter if p is the unique thick ultrafilter containing every A(<0, <1) with (<0, <1) ∈ Ap.
One can prove that there are thick ultrafilters p ∈ β[ω]2 which are not weak linear order
ultrafilters, providing an alternate proof of Theorem 1.4. The proof is very similar to the
proof that some p ∈ β[ω]2 is not weakly (2, 3)-selective.
We end with some open questions about these ultrafilters.
Question 7.7. Does ZFC prove the existence of (2, 3)-selective ultrafilters? Of linear order
ultrafilters?
Question 7.8. Can there exist a weakly (2, 3)-selective ultrafilter which is not (2, 3)-selective?
Same question for linear order ultrafilters.
The last question is motivated by Theorem 7.2. This shows that (2, 3)-selective ultrafilters
can exist arbitrarily high up in the Rudin-Keisler order.
Question 7.9. Is it consistent with ZFC that the (2, 3)-selective ultrafilters are upwards
Rudin-Keisler cofinal?
References
[AKS] M. Ajtai, J. Komlós, and E. Szemerédi, A Note on Ramsey Numbers, Journal of Combinatorial
Theory, 29, (1980) 354–360.
[A] J. Auslander, Minimal Flows and Their Extensions, North Holland, 1988.
[B] Z. Baranyai, On the factorization of the complete uniform hypergraph, Colloq. Math. Soc. Janos Bolyai,
10 (1975), 91–108.
[Ba] D. Bartošová, Topological dynamics of automorphism groups of ω-homogeneous structures via near
ultrafilters, Ph.D. Thesis, University of Toronto, 2013.
[BYMT] I. Ben-Yaacov, J. Melleray, and T. Tsankov, Metrizable universal minimal flows of Polish groups
have a comeagre orbit, GAFA, 27(1) (2017), 67–77.
[E] R. Ellis, Lectures on Topological Dynamics, W.A. Benjamin, 1969.
[F] H. Furstenberg, Disjointness in ergodic theory, minimal sets, and a problem in Diophantine approximation, Math. Syst. Theory, 1 (1967), 1–49.
[GW] E. Glasner and B. Weiss, Minimal actions of the group S(Z) of permutations of the integers, Geometric and Functional Analysis, 12 (2002), 964–988.
[GW1] E. Glasner and B. Weiss, Interpolation sets for subalgebras of `∞ (Z), Israel J. Math., 44(4) (1983),
345–360.
[HS] N. Hindman and D. Strauss, Algebra in the Stone-Čech Compactification, 2nd Edition, De Gruyter,
2012.
[KPT] A.S. Kechris, V.G. Pestov, and S. Todorčević, Fraı̈ssé limits, Ramsey theory, and topological dynamics
of automorphism groups, Geometric and Functional Analysis, 15 (2005), 106–189.
[L] C. Laflamme, Forcing with filters and complete combinatorics, Annals of Pure and Applied Logic, 42(2)
(1989), 125–163.
[MNT] J. Melleray, L. Nguyen Van Thé, T. Tsankov, Polish groups with metrizable universal minimal flows,
Int. Math. Res. Not., no. 5 (2016), 1285–1307.
[NVT] L. Nguyen Van Thé, More on the Kechris-Pestov-Todorčević Correspondence: Precompact Expansions, Fund. Math., 222 (2013), 19–47.
[P] V. Pestov, On free actions, minimal flows, and a problem by Ellis, Trans. Amer. Math. Soc., 350 (10),
(1998), 4149–4165.
[P1] V. Pestov, Some universal constructions in abstract topological dynamics, Topological dynamics and applications, Contemp. Math. 215 (1998), 83–99.
[Sa] Pierre Samuel, Ultrafilters and compactifications of uniform spaces. Trans. Amer. Math. Soc., 64 (1948),
100–132.
[U] V. Uspenskij, Compactifications of topological groups, Proceedings of the ninth Prague topological symposium (2001), 2002, 331–346.
[Z] A. Zucker, Topological dynamics, ultrafilter combinatorics, and the Generic Point Problem. Trans.
Amer. Math. Soc., 368(9), (2016).
[Z1] A. Zucker, Thick, syndetic, and piecewise syndetic subsets of Fraı̈ssé structures. Preprint (2016).
A Two-Phase Safe Vehicle Routing and Scheduling
Problem: Formulations and Solution Algorithms
Aschkan Omidvar a*, Eren Erman Ozguven b, O. Arda Vanli c,
R. Tavakkoli-Moghaddam d
a Department of Civil and Coastal Engineering, University of Florida, Gainesville, FL 32611, USA
b Department of Civil and Environmental Engineering, FAMU-FSU College of Engineering, Tallahassee, FL 32310, USA
c Department of Industrial and Manufacturing Engineering, FAMU-FSU College of Engineering, Tallahassee, FL 32310, USA
d School of Industrial Engineering, College of Engineering, University of Tehran, Iran
Abstract
We propose a two-phase, time-dependent vehicle routing and scheduling optimization model that identifies the safest routes, as a substitute for classical objectives such as shortest distance or travel time, by (1) avoiding recurring congestion and (2) selecting routes with a lower probability of crash occurrences and of the non-recurring congestion those crashes cause. In the first phase, we solve a mixed-integer programming model that accounts for dynamic speed variations on a graph of roadway networks according to the time of day, and identify the routing of a fleet and the sequence of nodes on the safest feasible paths. The second phase considers each route as an independent transit path (a fixed route with a fixed node sequence), and avoids congestion by rescheduling the departure time of each vehicle from each node and by adjusting the sub-optimal speed on each arc. A modified simulated annealing (SA) algorithm is formulated to solve both complex models iteratively, and is found to provide solutions in a considerably short amount of time. Unlike most research in this area, which assumes the speed on each arc to be a fixed value or a time-dependent step function, we calculate speed (and travel time) variation with respect to the hour of the day via queuing models (i.e., M/G/1) to capture the stochasticity of travel times more accurately. First, we demonstrate the accurate performance of M/G/1 in estimating and predicting speeds and travel times for arcs without readily available speed data; crash data, on the other hand, are obtained for each arc. Next, 24 scenarios, one corresponding to each hour of the day, are developed and fed to the proposed solution algorithms. We then evaluate the routing schema for each scenario under the following objective functions: (1) minimization of traffic delay (maximum congestion avoidance), (2) minimization of traffic crash risk, and (3) a combination of the two objectives. This also allows us to discuss the feasibility and applicability of our model. Finally, the proposed methodology is applied to a benchmark network as well as a small real-world case study for the City of Miami, Florida. Results suggest that in some instances both the travelled distance and travel time increase in return for a safer route; however, the advantages of the safer route can outweigh this slight increase.

* Corresponding author. Tel.: +1-850-405-6688
E-mail address: [email protected]
Keywords: Vehicle Routing; Traffic Safety; Time Dependent Routing and Scheduling; Queuing Theory
1. Introduction and literature review
Traffic crashes and congestion are major costs to the collective and social well-being. According to
the Federal Highway Administration, traffic crashes imposed an economic cost of $242.0 billion to
the U.S. economy in 2010 [1]. Total estimated cost of congestion to Americans was also
approximately $124 billion in 2013 [1]. These figures show the vital influence of congestion and
crashes on our daily trips. For many years, vehicle routing and scheduling have been used to
investigate the effects of congestion on the roadway networks. There are two main sources of
congestion: recurring and non-recurring. The literature shows that in the United States, 40% of traffic congestion is recurring, caused by spatial, traffic, and behavioural factors, while traffic accidents, construction and work zones, and environmental events contribute 25%, 15%, and 10% of the total traffic congestion, respectively [1]. Without a doubt, travel safety is one of the main components of transportation activities; however, the idea of safety in the field of vehicle routing is not as maturely developed as in transportation planning or logistics. Hence, re-routing vehicles through safer road segments and intersections can be one of many strategies to increase safety at both the micro and macro scale. An optimization approach that minimizes route hazard elements could be a handy means of achieving safer routes.
The fleet routing problem goes back at least to Dantzig and Ramser [2], who proposed the first vehicle routing problem (VRP) in the context of truck dispatching for gas delivery to fuel stations. Their paper opened a new chapter in combinatorial optimization, and numerous researchers applied the model to other problems such as waste collection, school bus routing, routing in supply chains, dial-a-ride services, and other goods/services collection and dispatching problems. Clarke and Wright [3] later improved this model and proposed a heuristic algorithm to solve the VRP. The basic VRP consists of a set of homogeneous vehicles with limited capacity and a central depot. Vehicles serve the customers located at the nodes of the graph by travelling along the arcs between each pair of nodes. Each customer has a specific demand, denoted 𝑞𝑖, and travelling on each arc incurs a cost that is linear in the cost components. All vehicles must return to the central depot after serving all the customers [4]. However, these conditions may change depending on the type of VRP, such as pick-up and delivery VRP, capacitated VRP, and open VRP, among others.
The vehicle routing and scheduling problem involves assigning routes for dispatching to and/or collecting from one or multiple depots to/from geographically scattered customer nodes. These problems usually come with several constraints, such as vehicle capacity, maximum travelled distance or time, time windows, or customer prioritization. The VRP is defined on a directed or undirected graph 𝐺(𝑉, 𝐴), where 𝑉 = (0, 1, … , 𝑛) is the set of nodes and 𝐴 = ((𝑖, 𝑗): 𝑖, 𝑗 ∈ 𝑉, 𝑖 ≠ 𝑗) is the set of arcs. Each arc carries a positive cost 𝑐𝑖𝑗, which can be distance, time, or any other cost component. The goal is to find one or several Hamiltonian loops in which all customers are served at the minimum possible cost.
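As a minimal illustration of this graph formulation (with a hypothetical 4-node instance, not data from the case study), a VRP instance can be stored as a cost matrix and a Hamiltonian loop evaluated as the sum of its arc costs:

```python
# Minimal sketch of the VRP graph G(V, A): node 0 is the depot,
# cost[i][j] is the (hypothetical) cost c_ij on arc (i, j).
def route_cost(cost, route):
    """Cost of a Hamiltonian loop: depot -> customers -> depot."""
    path = [0] + list(route) + [0]
    return sum(cost[i][j] for i, j in zip(path, path[1:]))

# 1 depot + 3 customers, symmetric illustrative costs
cost = [
    [0, 4, 6, 3],
    [4, 0, 2, 5],
    [6, 2, 0, 4],
    [3, 5, 4, 0],
]
print(route_cost(cost, [1, 2, 3]))  # 0->1->2->3->0 = 4+2+4+3 = 13
```

A full VRP solver would search over such loops (one per vehicle) subject to capacity and time-window constraints; this sketch only shows how arc costs accumulate along a route.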
In most VRPs, the speeds and travel times on arcs are assumed to be constant; in other words, Euclidean distances are assumed with a fixed cost function on each arc. The stochastic VRP literature, for its part, has mainly focused on nodes rather than arcs: these models study demand or service time at the nodes, and rarely consider speed and other travel characteristics on the arcs. By contrast, the dynamic VRP does consider changing travel times; however, it requires technology such as wireless systems, GPS, and short-range signals, and routing and planning are infeasible without it. A fixed travel cost function is not realistic in today's volatile and uncertain transportation environment. The time-dependent VRP (TDVRP) was therefore introduced by Cooke and Halsey (1966), who extended the idea of the shortest path between nodes in a network from fixed to variable times, although they did not consider the case of multiple vehicles. In the TDVRP, the travel cost function is assumed to change according to the time of day [5]. This assumption enables us to consider speed variations on the arcs, which can help a roadway user avoid congestion [6, 7]. This extension of the VRP was proposed at an early stage of VRP research; however, due to its modelling and solution complexity, it was not studied until the past decade. Malandraki and Daskin (1992) proposed heuristic neighborhood search algorithms to solve the TDVRP and the time-dependent travelling salesman problem (TDTSP), suggesting a step function to capture travel time variations throughout the planning horizon.
Most researchers have considered travel times at discrete levels (Table 1). Although a step function for travel times and speeds simulates real-world conditions better than a fixed travel time (or speed) value, further development is needed to represent the actual real-life speed variations and changes in traffic flow on roadways [8, 9]. Other approaches to incorporating parameter variations include dynamic and stochastic vehicle routing applications that focus on service time and demand at the nodes [10], models considering speed variations and dynamic travel times [11-13], and stochastic and time-varying network applications [14, 15]. For further information on static and dynamic vehicle routing problems (VRP), please refer to [4, 10].
Table 1. Research on Time-dependent Vehicle Routing Problem (TDVRP)

Authors | Model Type | Research Description | Solution Approach
[16] | Basic | Travel time variation is ignored. Vehicle load and traffic congestion in peak hours are considered. | Integer programming
[11] | Discrete travel times and objective function | MILP with a discrete step function for travel times. | Branch & cut and greedy neighborhood search
[17] | Discrete travel times and objective function | Modelling based on travel times according to the dispatching time | All alternative solutions studied on small case studies of up to 5 nodes
[5] | Discrete travel times and objective function | Modelling based on travel times according to the dispatching time | Tabu search on Solomon benchmark instances
[18] | Discrete travel times and objective function | Modelling based on travel times according to the dispatching time | Local search
[19] | Discrete travel times and objective function | Modelling based on travel times according to the dispatching time | Ant colony optimization
To the best of the authors' knowledge, traffic safety, in terms of crash risk on roadways, has not previously been incorporated into graph-theoretic transportation network optimization; this study is therefore an important step towards filling that gap. In this paper, we propose a two-phase vehicle routing and scheduling optimization model that identifies the safest routes, as a substitute for classical objectives such as shortest distance or travel time, by (1) avoiding recurring congestion and (2) selecting routes with a lower probability of crash occurrences and of the non-recurring congestion those crashes cause.
2. Mathematical Modelling
The proposed modeling approach has two phases as seen in Fig. 1. We will discuss these phases
in the following subsections.
2.1. Phase 1: Routing graph modelling
In this phase, we formulate a mixed-integer programming model that takes the dynamic speed variations on a graph of roadway networks into account, according to the time of day. The probability of a crash as a function of speed is calculated from three years of traffic crash records on each roadway segment and intersection. In most locations, the probability of a crash increases as traffic density increases and speed decreases [20-22].
Fig. 1. A schematic representation of modelling approach: (a) Roadway network (b) Result of Phase 1: routes identified
(c) Result of Phase 2: route schedules determined, including departure time and the travel speed at each node
Therefore, the first graph model identifies the routing of a fleet and the sequence of nodes on the safest feasible paths. Several constraints, such as hard and soft time windows at each node, capacity, operating hours, and the number of vehicles, are introduced to ensure fast, high-quality service. The model takes real-time speed data as input. However, in case the user has access only to traffic flow data and not speed data, we also propose a methodology to derive the speed variation with respect to the hour of the day based on (a) the available traffic flow values and (b) the queuing model concept (namely M/G/1), which captures the stochasticity of speed variations and travel times more accurately; see Section 2.3 for this methodology. Thus, in our first model, travel times are variable and are a function of speed. The objective function consists of two main components: (1) the crash probability on each segment according to the time of day, and (2) the normalized Travel Time Index (TTI) [23], which is a function of travel time. In summary, the two components of the objective function allow one to (a) increase trip safety by choosing segments with lower crash risk, and (b) avoid recurring and non-recurring congested segments. Fig. 1a shows the initial network, and Fig. 1b indicates the routes identified as a result of Phase 1.
The first model is defined on a Hamiltonian graph 𝐺 = (𝑉, 𝐴), in which 𝑉 = (𝑣0, … , 𝑣𝑛+1) is the set of nodes; nodes 𝑣0 and 𝑣𝑛+1 refer to the central depot, and (𝑣1, … , 𝑣𝑛) are the customer nodes. 𝐴 = ((𝑖, 𝑗): 𝑖, 𝑗 ∈ 𝑉, 𝑖 ≠ 𝑗) is the set of arcs. 𝐾 denotes the number of vehicles ready at the depot at the start, K = (1, … , k). Moreover, 𝑞𝑖 is the non-negative demand at node i, the maximum load of vehicle k is 𝑄𝑘, and 𝑠𝑖 is the non-negative service time at node i; service time and demand at the central depot are zero (𝑠0 = 0, 𝑞0 = 0). Each customer node has a service time window [ei, li], and 𝐿 is the latest allowed time in the planning horizon. Each arc is associated with a fixed distance dij.
We incorporate three time-dependent parameters in the model: (1) the departure time from node i, denoted p_ik; (2) the travel time between nodes i and j in hours, t_ijk(p_ik), which is a function of p_ik; and (3) the travel time index TT_ijk(p_ik) from i to j for vehicle k when the vehicle leaves node i at time p_ik. Finally, the crash probability on arc (i, j) at departure time p_ik is denoted ξ_ijk(p_ik), a positive value with a maximum of 1.
The first decision variable of the model is x_ijk, a binary variable that takes the value 1 if vehicle k leaves node i for node j, and 0 otherwise. w_ik denotes the net load that vehicle k carries after leaving node i, and a_ik and p_ik track the times at which vehicle k enters and leaves node i, respectively.
The optimization model for the first phase is presented as follows:

Objectives:

minimize 1 − ∏_{k=1}^{K} ∏_{(i,j)∈V} [1 − ξ_ijk(p_ik)]^{x_ijk}    (1)

minimize ∑_{k=1}^{K} ∑_{(i,j)∈V} TT_ijk(p_ik) x_ijk    (2)

Subject to:

∑_{k=1}^{K} ∑_{j=1}^{n+1} x_ijk = 1,  ∀ i ∈ {1, …, n}    (3)

∑_{k=1}^{K} ∑_{j=1}^{n} x_0jk ≤ K    (4)

∑_{k=1}^{K} ∑_{j=1}^{n} x_0jk = ∑_{k=1}^{K} ∑_{i=1}^{n} x_{i,n+1,k}    (5)

∑_{i=0}^{n} x_ifk − ∑_{j=1}^{n+1} x_fjk = 0,  ∀ k ∈ K, f ∈ {1, …, n}    (6)

x_{i0k} + x_{n+1,i,k} = 0,  ∀ k ∈ K, ∀ i ∈ {1, …, n}    (7)

∑_{i=1}^{n} q_i ∑_{j=1, j≠i}^{n+1} x_ijk ≤ Q_k,  ∀ k ∈ K    (8)

e_i ∑_{j=1}^{n+1} x_ijk ≤ a_ik ≤ l_i ∑_{j=1}^{n+1} x_ijk,  ∀ i ∈ {0, 1, …, n+1}, ∀ k ∈ K    (9)

a_ik + s_i + t_ijk(p_ik) ≤ a_jk × x_ijk + (1 − x_ijk) × L,  ∀ i, j ∈ {0, 1, …, n+1}, ∀ k ∈ K    (10)

a_ik + s_i ≤ p_ik ≤ L − t_{i,n+1},  ∀ k ∈ K, ∀ i ∈ {0, 1, …, n}    (11)

a_ik ≥ 0,  ∀ i ∈ {0, 1, …, n+1}, ∀ k ∈ K    (12)

x_ijk ∈ {0, 1},  ∀ (i, j) ∈ A, ∀ k ∈ K    (13)

w_ik ≥ 0,  ∀ i ∈ {0, 1, …, n+1}, ∀ k ∈ K    (14)
Objective function (1) minimizes the total crash probability over all routes, and objective function (2) minimizes the total travel time index (TTI). The travel time index TT_ijk(p_ik) is defined as the travel time at time p_ik divided by the free-flow travel time, as commonly used in the literature [24]. With this index, a routing schema that minimizes the TTI does not necessarily minimize travel time: an optimal routing plan may select a longer route in order to avoid congested roadways, roadways with lower speed limits, or a combination of both. In fact, objective (2) is formulated to provide a more uniform driving pattern with fewer cars on the roadways, even if it requires longer travel in terms of distance and time.
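To make the two objectives concrete, the following sketch evaluates (1) and (2) for a fixed route assignment, using hypothetical crash probabilities ξ and TTI values (not data from the case study):

```python
def crash_objective(routes, xi):
    """Objective (1): 1 minus the product over all used arcs of
    (1 - crash probability), i.e. the chance of at least one crash."""
    prob_safe = 1.0
    for route in routes:
        for i, j in zip(route, route[1:]):
            prob_safe *= (1.0 - xi[(i, j)])
    return 1.0 - prob_safe

def tti_objective(routes, tti):
    """Objective (2): total travel time index over all used arcs."""
    return sum(tti[(i, j)] for route in routes for i, j in zip(route, route[1:]))

# one vehicle, depot 0 -> 1 -> 2 -> 0, hypothetical hourly values
xi  = {(0, 1): 0.01, (1, 2): 0.02, (2, 0): 0.015}
tti = {(0, 1): 1.10, (1, 2): 1.35, (2, 0): 1.05}
routes = [[0, 1, 2, 0]]
print(round(crash_objective(routes, xi), 6))  # 1 - 0.99*0.98*0.985 = 0.044353
print(round(tti_objective(routes, tti), 2))   # 3.5
```

Note that the exponent x_ijk in (1) simply switches an arc's factor on or off; iterating over the arcs of the chosen routes is the equivalent computation.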
Constraint sets (3)-(7) are graph construction constraints, defined as follows. Each customer node must be visited exactly once (3), and a maximum of K vehicles can be used for the routing plan (4); this allows the model to utilize the fleet partially and keep some vehicles inactive in the depot. Dispatched vehicles must return to the depot after serving (5). When a vehicle enters a customer node, it leaves after serving (6). Nodes v0 and v_{n+1} are defined for fleet dispatch from and return to the depot (7). Finally, vehicle capacities cannot be exceeded (8). Constraint sets (9)-(11) are timing constraints: service must be performed within the pre-defined time windows at each customer node (9); constraint (10) tracks the arrival/departure times at each node and guarantees that the routing schedule does not exceed the latest planning time; and constraint (11) allows vehicles to wait at nodes in order to avoid traffic congestion. As with the parameters in the objective function, the travel time t_ijk(p_ik) is a function of the time of day and the average speed at that time. Subtours are one of the main difficulties encountered in solving VRPs; however, unlike traditional VRP formulations, we do not provide explicit subtour elimination constraints, because in the safe VRP formulation presented herein subtours are eliminated through the collaboration of constraints (10) and (11).
Please also note that we consider all 24 hours of the day in the model. Let T denote the set of hours of the day, T = {T1, …, Tm}, where m = 24. Each T_l has an associated speed h_l, where l is between 1 and m, and the earliest and latest moments of each time interval are denoted t^l_min and t^l_max. Depending on the length of an arc, the hour of day, and the speed at that hour, a vehicle may face different levels of traffic flow and consequently speed variations; in other words, a vehicle may traverse one arc during one or more time intervals. We therefore associate with each arc a set of time periods t_ijk(p_ik) = {t^l_ijk, t^{l+1}_ijk, …} and travel speeds S_ijk(p_ik) = {S^l_ijk, S^{l+1}_ijk, …}. This is used in the proposed model, and our algorithm is capable of incorporating this concept.
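The mechanism behind an arc spanning several hourly intervals can be sketched as follows (hypothetical speeds; a simplified stand-in for the model's t_ijk(p_ik) computation):

```python
def arc_travel_time(depart, dist, hourly_speed):
    """Travel time (hours) on an arc of length `dist` (miles) when
    departing at clock time `depart` (hours): the speed changes at each
    hour boundary, hourly_speed[h] being the speed during hour h."""
    t, remaining = depart, dist
    while remaining > 1e-12:
        h = int(t) % 24
        boundary = int(t) + 1               # next hour boundary
        speed = hourly_speed[h]
        reachable = speed * (boundary - t)  # miles drivable before the boundary
        if reachable >= remaining:
            t += remaining / speed
            remaining = 0.0
        else:
            remaining -= reachable
            t = float(boundary)
    return t - depart

# depart at 7:30 on a 20-mile arc; 30 mph during hour 7, 60 mph afterwards
speeds = [60] * 24
speeds[7] = 30
print(arc_travel_time(7.5, 20.0, speeds))  # 0.5 h at 30 mph (15 mi) + 5 mi at 60 mph
```

Here the first half hour covers 15 miles and the remaining 5 miles are driven at 60 mph, giving 0.5 + 5/60 ≈ 0.583 hours; a fixed-speed model would miss this interval crossing.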
We compare the results of the proposed approach to the classical network optimization objectives of minimum distance and minimum travel time, defined respectively in (15) and (16):

minimize ∑_{k=1}^{K} ∑_{(i,j)∈V} d_ijk x_ijk    (15)

minimize ∑_{k=1}^{K} ∑_{(i,j)∈V} [s_i + t_ijk(p_ik)] x_ijk    (16)
2.2. Phase 2: Speed and departure time scheduling
In the route scheduling literature, including ship routing and scheduling, air cargo, and other transit scheduling models, routes are assumed to be predetermined, and optimization approaches search for the optimal speeds and schedules [25-27]. Phase 2 of the proposed approach is inspired by this type of model: based on the graph and routes constructed in Phase 1, it solves a second optimization model that considers each route as an independent transit path (a fixed route with a fixed node sequence) [25], determines the service timing to avoid congestion by rescheduling the departure time of each vehicle from each node, and finds the optimal speed on each arc. Fig. 1c illustrates the scheduling decisions made in Phase 2.
We employ the concept of the shortest path for speed and departure time optimization [25]. Each node in Fig. 1c is expanded into several nodes corresponding to different scenarios (arrival times at the node). In other words, the arrival times (within the time window) at each node are discretized, and our aim is to find the minimum-cost path on the resulting directed acyclic graph for each route. The arrival time at a node depends on the departure time from the previous node and the speed on the arc connecting the pair of nodes; hence, the shortest (minimum-cost) path yields the optimal departure time and speed values. Each discretized arrival-time scenario is represented as N_is, where i is the node number and s the scenario number (N_{i,s+1} ≥ N_{i,s}). Every arc ((i, s), (i+1, p)) in the shortest path graph connects discretized arrival-time scenario s at node i to scenario p at node i+1, with cost c_{(i,s),(i+1,p)}. For each node in route k, m denotes the number of discretization points. The variable x_{(i,s),(i+1,p)} takes the value 1 if arc ((i, s), (i+1, p)) is used, and 0 otherwise. The mathematical formulation of the model proposed in Phase 2 is as follows:
Objective:

minimize ∑_{i=1}^{n−1} ∑_{s=1}^{m} ∑_{p=1}^{m} c_{(i,s),(i+1,p)} x_{(i,s),(i+1,p)}    (17)

Subject to:

∑_{s∈N_i} ∑_{p=1}^{m} x_{(i,s),(i+1,p)} = 1,  i = 1, …, n−1    (18)

∑_{s=1}^{m} x_{(i−1,s),(i,p)} = ∑_{s=1}^{m} x_{(i,s),(i+1,p)},  i = 1, …, n, p = 1, …, m    (19)

x_{(i,s),(i+1,p)} ∈ {0, 1},  i = 1, …, n−1, s = 1, …, m, p = 1, …, m    (20)
The Phase 2 objective function (17) seeks the shortest path, i.e., the minimal cost; in our model this means minimizing crash risk and congestion. Constraint set (18) guarantees that all transit nodes (the nodes on the route) are served, and flow conservation in the acyclic shortest-path graph is enforced through constraint set (19). The Phase 2 optimization model is run for each route obtained from Phase 1. As discussed in the following sections, our proposed algorithm solves Phase 1 and then Phase 2 iteratively and consecutively in each iteration to find an optimal routing and scheduling plan.
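Because the expanded graph is layered and acyclic, the minimum-cost path can be found with a simple forward dynamic program; the sketch below uses hypothetical arc costs with m = 2 scenarios per node:

```python
def schedule_route(costs):
    """Dynamic program over the layered arrival-time DAG of Phase 2.
    costs[i] is an m x m matrix: costs[i][s][p] is the (hypothetical)
    cost c_{(i,s),(i+1,p)} of moving from scenario s at route node i
    to scenario p at route node i+1. Returns the minimum total cost."""
    m = len(costs[0])
    best = [0.0] * m                      # best cost to reach each scenario of the first node
    for layer in costs:                   # one layer per consecutive node pair
        best = [min(best[s] + layer[s][p] for s in range(m)) for p in range(m)]
    return min(best)

# a 3-node route with m = 2 arrival-time scenarios per node
costs = [
    [[1, 4], [2, 1]],   # node 1 -> node 2
    [[3, 1], [5, 2]],   # node 2 -> node 3
]
print(schedule_route(costs))  # → 2
```

Each entry of `best` plays the role of the partial shortest-path label, so the run time is O(n·m²), linear in the number of arcs of the expanded DAG.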
2.3. Queuing-based dynamic speed prediction
As discussed in Sections 2.1 and 2.2, variable speed values are crucial to the dynamic logic of our models. Numerous agencies and companies collect such real-time speed data (Inrix, Total Traffic, etc.), and navigation devices use these data to guide users; the proposed models can therefore be fed to such navigation and routing devices to plan safer routes. However, real-time speed (or travel time) data are not always available, and an average speed value on each arc cannot represent real congestion or traffic conditions. Many researchers have used a step function to estimate speeds and travel times, but studies have shown that it cannot efficiently represent real-world patterns [8, 9, 28]. Therefore, in this study, a queueing model is used to estimate speeds, and consequently travel times, on arcs. Queue modelling requires the traffic flow on every path. We calculate the speed on every path at each hour of the day using the well-known fundamental relation of traffic flow theory [29, 30], which relates the traffic flow (F_ijy), traffic density (K_ijy), and speed (S_ijy) between nodes i and j in hour y:

F_ijy = K_ijy × S_ijy    (21)
The queueing parameters are listed in Table 2. We split a roadway segment into multiple segments of length 1/K_jam, where K_jam is the maximum traffic density (this is the length a vehicle occupies on a path) [31]. Each segment is treated as a service station at which cars arrive at rate λ and are served at rate μ. We first formulated an M/M/1 model, but its speed prediction performance was not satisfactory. Since the distribution of inter-arrival times has been shown to follow a Gamma family distribution [9, 32], we kept the arrival process Poisson and changed the service time to a general distribution (M/G/1). The effective speed is then obtained by dividing the segment length (1/K_jam) by the total time spent in the system (W). The nominal speed can be taken as the posted speed limit, the posted speed limit plus 5, the 85th percentile of speed observations, the free-flow speed, or any other speed level depending on the characteristics of the site; close attention is needed here, as this choice can influence drivers' choice of speed and consequently the system (for further information on the choice of speed, please refer to [33]). In this study we use the posted speed limit. The unitless relative speed is then the actual (effective) speed S_ijy divided by the nominal speed.
Table 2. M/G/1 queueing parameters

Parameter | Definition | Unit
K | Traffic density | Vehicles per mile
Kjam | Maximum traffic density | Vehicles per mile
S | Effective speed | Miles per hour
SN | Nominal speed | Miles per hour
R | Relative speed | ---
F | Traffic flow | Vehicles per hour
W | Total time spent in the system | Hours
λ | Arrival rate | Vehicles per hour
μ | Service rate | Vehicles per hour
ρ | Traffic intensity (ρ = λ/μ) | ---
Vandaele et al. (2010) formulated the waiting time and relative speed for the general case and several special cases; here we detail the M/G/1 formulation. The service time in this model is generally distributed with mean 1/μ and standard deviation σ; the expected service rate is μ = S_N × K_jam.

Lemma 1. The total waiting time for the M/G/1 queuing system is

W = 1/(S_N K_jam) + (ρ² + S_N² K² σ²) / (2 S_N K (1 − ρ))    (22)

Proof. For the general case, W = (1/K_jam)/S; for the M/M/1 special case, W = 1/(μ − λ) = 1/[S_N (K_jam − K)]. Combining Little's theorem with the Pollaczek-Khinchine formula for the average number of cars in the system, and substituting for λ and μ, yields the total waiting time in the system [34].
Using Equation (22), the speed and relative speed are calculated as

S = 2 S_N (K_jam − K) / [2 K_jam + K(β² − 1)],   R = S / S_N = 2(1 − ρ) / [2 + ρ(β² − 1)]    (23)

where β is the coefficient of variation of the service time, calculated as β = σ S_N K_jam. Finally, the flow-density-speed function is obtained by substituting (21) into (23):

f(S, F) = 2 K_jam S² + [F(β² − 1) − 2 K_jam × S_N] S + 2 F × S_N = 0    (24)
The shape of the flow-density-speed curve changes with the value assigned to β; for moderate variability in the service time, β = 1 is reasonable. In the case study section we assume β = 1, although the methodology applies to other values of β as well. After these curves are created, we select the threshold speed from the curve and calculate dynamic travel times (see the Case Study section for the selection of speeds from the curves). The travel time values are stored in a hyper-matrix and fed to the models according to the time of day. The next section discusses how the proposed algorithm solves the models using these data.
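Since (24) is quadratic in S, the effective speed for a given hourly flow can be obtained in closed form; the sketch below uses hypothetical arc parameters and, as an assumption, keeps the larger root as the uncongested branch of the curve:

```python
import math

def effective_speed(flow, k_jam, s_nominal, beta=1.0):
    """Solve the flow-density-speed relation (Eq. 24),
    2*Kjam*S^2 + [F*(beta^2 - 1) - 2*Kjam*SN]*S + 2*F*SN = 0,
    for the effective speed S; the larger root is taken as the
    uncongested branch (an assumption of this sketch)."""
    a = 2.0 * k_jam
    b = flow * (beta ** 2 - 1.0) - 2.0 * k_jam * s_nominal
    c = 2.0 * flow * s_nominal
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None  # demand exceeds what the arc can carry
    return (-b + math.sqrt(disc)) / (2.0 * a)

# hypothetical arc: jam density 200 veh/mi, 60 mph nominal, 1000 veh/h flow
s = effective_speed(1000.0, 200.0, 60.0)
print(round(s, 2))  # 54.49 mph, i.e. relative speed R = S/SN ≈ 0.91
```

With β = 1 the relation reduces to S² − S_N S + F S_N / K_jam = 0, so a real root exists only up to F = S_N K_jam / 4; beyond that the function returns None, which mirrors an arc operating past its capacity.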
3. Solution Approach
VRP is a NP-Hard problem and by a linear growth in the size of problem, the calculation time and
complexity of model increases exponentially. Therefore, heuristic and meta-heuristic algorithms are
developed to tackle the complex problems. In our VRP formulation the first component of the
objective function is nonlinear. Our first model is also not deterministic, and the parameters change
according to the time of day. As such, it is not feasible to solve them through optimality using exact
algorithms. Hence, in this study, in order to achieve a fair tradeoff between the computation time and
solution accuracy, we propose to solve the aforementioned problem using a hybrid algorithm, which
combines a novel heuristic algorithm and a sophisticated meta-heuristic technique. A schematic
approach of the solution approach is depicted in Fig. 2. For the example given in Fig. 2, our proposed
initial solution generator algorithm divides the plane into five slices, with the center of D and the
angle of 2𝜋/5. Each slice is further divided into two sub-slices. Therefore, there will be ten slices of
equal angle. Our algorithm starts from the closest node to D, and adds the nodes in the slice 2𝑖 − 1
to vehicle 𝑖 with the order of closest node to D to the furthest one, in terms of direct distance. For
even slices, the procedure is the same, but the order of nodes is from the furthest to the closest node
to depot. Finally, the last node of region 2𝑖 − 1 is joined to the first node of 2𝑖, a 2-opt neighborhood
search is conducted on each of 5 slices, and the initial solution is obtained as in Fig. 2-a. The pseudocode of this algorithm is provided in Fig. 3 for further detail.
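The slice construction above can be sketched compactly as follows; this is a simplified reading of the heuristic (no 2-opt pass, no capacity checks, hypothetical coordinates), intended only to illustrate the odd near-to-far / even far-to-near ordering:

```python
import math

def polar_initial_routes(depot, customers, k):
    """Sketch of the polar initial-solution heuristic: split the plane
    into 2k equal slices around the depot, order nodes near-to-far in
    odd slices and far-to-near in even slices, then join slice 2i-1
    with slice 2i to form the route of vehicle i."""
    def slice_of(p):
        ang = math.atan2(p[1] - depot[1], p[0] - depot[0]) % (2 * math.pi)
        return int(ang // (math.pi / k))        # 2k slices of angle pi/k
    def dist(p):
        return math.hypot(p[0] - depot[0], p[1] - depot[1])
    routes = []
    for i in range(k):
        odd  = sorted((p for p in customers if slice_of(p) == 2 * i), key=dist)
        even = sorted((p for p in customers if slice_of(p) == 2 * i + 1),
                      key=dist, reverse=True)
        routes.append(odd + even)
    return routes

# one vehicle: upper half-plane near->far, then lower half-plane far->near
routes = polar_initial_routes((0, 0), [(1, 1), (-2, 2), (-1, -1), (2, -2)], k=1)
print(routes)  # [[(1, 1), (-2, 2), (2, -2), (-1, -1)]]
```

The reversal in even slices makes the concatenated tour sweep outwards and then back towards the depot, which keeps consecutive nodes geographically close before the 2-opt refinement.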
Fig. 2. A schematic representation of solution approach (a) Polar coordinates-based heuristic to generate the initial solution
(b) Meta-heuristic to solve Phase 1 and Phase 2 problems
An enhanced simulated annealing (SA) algorithm is the meta-heuristic employed in this research. In SA, introduced by Kirkpatrick [35], the acceptance probability of an inferior solution is exp(−Δf/t), where Δf denotes the objective gap between the current solution and its neighbour, and t is the temperature variable. Equations (25) and (26) state this Boltzmann criterion [35]:

Acceptance probability of s′ (against s) = exp(−Δf/t_k) if Δf ≥ 0; 1 otherwise    (25)

Δf = f(s′) − f(s)    (26)

We define the cooling factor in Equation (27); with this α, starting from T_0, the temperature reaches T_f at the end of the iterations of the outer loop of SA. For the local neighborhood search procedure, we introduce six categories of heuristics in SA: (1) insertion, (2) swap, (3) 2-opt, (4) 3-opt, (5) reversion, and (6) split [36]; the first two each have two sub-heuristics depending on the number of nodes selected for insertion or swapping.

α = (T_f / T_0)^(1 / max. iterations)    (27)
SA ordinarily starts from a random initial solution, and the final solution quality depends on that initial solution. Therefore, to obtain a good initial feasible solution, we use the heuristic algorithm developed by the authors [13]: a greedy local neighborhood search that works with the polar coordinates of the nodes, with the depot as the pole, and expands the search range along the radius and azimuth successively (Fig. 2a, Fig. 3). The initial solution is then fed to the meta-heuristic, which improves the solution quality at each iteration, leading to near-optimal solutions.
Algorithm: Polar coordinates-based local search
Input:  k, ε, n, max(d)
Output: permutation of nodes on the k-th vehicle
  k: number of active vehicles
  ε: initial angle
  n: search expansion step (n concentric circles)
  max(d): maximum direct distance of customer nodes from the depot

do initialization
    input the network of nodes and the depot
    divide the plane into k equal slices
for i in {1, …, k}
    if i == 1 do
        lower limit angle(i) = ε
        upper limit angle(i) = (2π/k) + ε
    else do
        lower limit angle(i) = ε + (2π/k)(i − 1)
        upper limit angle(i) = ε + (2π/k) i
    end
    do split each slice with its bisection line
end
temp_odd  = {empty}
temp_even = {empty}
for i in {1, 3, …, 2k − 1}                       (1)
    for j in {1, …, max(d)/n}
        increase local search region (0–j) along
            a. azimuth
            b. radius
        do assign nodes in search region i to temp_odd[(i + 1)/2]
end
for i in {2, 4, …, 2k}                           (2)
    for j in {1, …, max(d)/n}
        increase local search region (0–j) along
            a. azimuth
            b. radius
        do assign nodes in search region i to temp_even[i/2]
end
concatenate permutations [(1), (2)]

Fig. 3. Pseudo code for the heuristic initial solution generator: polar coordinates-based greedy local neighborhood search
4. Model Evaluation on Benchmark Problems
We generate several small, medium and large scale instances employing the Solomon logic as
well as its extant benchmark instances for the proposed VRP with time windows model evaluation.
This benchmark set is one of the most commonly used benchmark instances in VRP in the context
of time windows. For further research on Solomon’s methodology, please refer to [37]. In detail,
demand, service time, time windows, and the position of nodes and depot is modelled according to
the Solomon’s benchmark networks. Crash probabilities, travel time indices and speeds on are
generated according to a step function of three levels and five intervals. Note that crash probability
and travel time index values increase during rush hours whereas the speed drops. Next, we inject a
moderate uniformly distributed noise to differentiate these parameters on different arcs. In other
words, this noise is introduced to the data to reflect the real-world stochasticity more accurately than
a deterministic step function. This noise does not interfere with the time-dependent nature of the
model and is only introduced to randomize the values from one time interval to another. The amount
and the direction of the noise is simulated based on our observation on the real world case. Fig. 4
shows an example of this approach for one arc. Dashed line in Fig. 4 is the initial step function for
the parameters whereas the solid line is the noisy distributions. Noise is introduced to all 24 hours;
however, the proportion of the noise is randomly chosen. Fig. 4a shows high noise proportion for
crash probability whereas it is low for TTI, as seen in Fig. 4b. Since TTI and speed are correlated,
the proportion of noise introduced for these two parameters on each arc is similar.
Fig. 4. Variations in Crash Probability, TTI and Speed Introduced as Noisy Step Functions. Dashed Line Is the Step
Function before Introducing the Noise, and Solid Line is the Noisy Data (Input to the Test Problems).
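The noisy step-function generation described above can be sketched as follows (a minimal stdlib sketch; the level values, breakpoints and noise fraction are illustrative, not the paper's calibrated values):

```python
import random

def noisy_step_profile(levels, breakpoints, noise_frac=0.1, seed=0):
    """Build a 24-hour parameter profile from a step function and perturb each
    hour with uniform noise. `levels` are the step values and `breakpoints`
    the hours at which the level changes; the noise fraction controls the
    proportion of randomization per hour."""
    rng = random.Random(seed)
    profile = []
    for hour in range(24):
        # find which step interval this hour falls into
        lvl = sum(1 for b in breakpoints if hour >= b)
        base = levels[min(lvl, len(levels) - 1)]
        profile.append(base * (1 + rng.uniform(-noise_frac, noise_frac)))
    return profile
```

With rush-hour breakpoints (e.g. 7, 9, 16, 19) and alternating levels, this yields a TTI- or crash-probability-style profile that rises in the peaks while each hour's value differs from arc to arc.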
For parameter tuning, we employed a Latin hypercube sampling design. A total of 30 input
parameter configurations is created for each scenario, and each is tested 5 times in the algorithm,
with the minimum value returned as the objective function value. Inferences are performed with
a Gaussian process regression, namely Kriging, using the JMP statistical analysis software [38, 39].
The most accurate parameter tuning is selected among all Kriging outputs as the best-performing
of the 30 configurations. All experiments are conducted on an Intel Core i7-5500U 2.4
GHz processor with 8 GB RAM on a Windows 10 Pro platform. Table 3 shows the performance of our algorithm.
Although the results for the main objective function are also presented in Table 3 as a reference, our
main objective is to compare the minimum travelled distance obtained from our model to the best
lower bounds obtained by Solomon [37].
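The tuning design described above can be sketched with a stdlib-only Latin hypercube sampler (the stratify-then-shuffle construction below is the standard LHS recipe; the bounds and sample count are illustrative, and the Kriging surrogate fitted in JMP is not reproduced here):

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube sample: each parameter's range is split into n_samples
    equal strata, one uniform point is drawn per stratum, and the strata are
    shuffled independently per dimension so every configuration covers a
    distinct stratum in every parameter."""
    rng = random.Random(seed)
    cols = []
    for lo, hi in bounds:
        pts = [lo + (hi - lo) * (i + rng.random()) / n_samples
               for i in range(n_samples)]
        rng.shuffle(pts)
        cols.append(pts)
    return list(zip(*cols))  # n_samples parameter configurations
```

For example, `latin_hypercube(30, [(1.0, 10.0), (0.01, 0.1)])` produces 30 (initial temperature, cooling rate)-style configurations whose first coordinates hit all 30 strata of [1, 10] exactly once.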
Table 3. Model Validation and Algorithm Performance Assessment. R Refers to Benchmark Series R (Randomly and
Uniformly Distributed Nodes). The First 4 Rows Are Generated Instances; Rows 5 to 10 Are Instances Taken from Solomon's
Test Problem Dataset. Abbreviations: Shortest Path (SP), Number of Vehicles (#V), Travelled Distance (TD), Computational
Time in seconds (T), Objective Function Value (OFV), Objective Function Gap in percent (OG), Computational Time Gap in
percent (TG).

Instance | Exact/LB SP | Exact/LB #V | Basic SA OFV | #V | T   | Augmented SA OFV | TD     | #V | T   | OG    | TG
R10      | --          | --          | 1.62         | 2  | 272 | 1.62             | 244.4  | 2  | 65  | 0.0   | -318
R25      | --          | --          | 1.59         | 2  | 322 | 1.59             | 311.0  | 2  | 121 | 0.0   | -166
R50      | --          | --          | 1.66         | 4  | 419 | 1.58             | 973.5  | 3  | 207 | -5.1  | -102
R80      | --          | --          | 1.66         | 7  | 601 | 1.60             | 1366.5 | 7  | 359 | -3.7  | -67
R101     | 1645.7      | 19          | 1.71         | 20 | 982 | 1.59             | 2462.6 | 20 | 542 | -7.5  | -81
R105     | 1377.1      | 14          | 1.73         | 20 | 949 | 1.66             | 2004.2 | 20 | 530 | -4.2  | -79
R109     | 1194.7      | 11          | 1.72         | 20 | 927 | 1.63             | 2044.7 | 20 | 561 | -5.5  | -65
R201     | 1252.3      | 4           | 1.72         | 19 | 846 | 1.67             | 2515.0 | 15 | 474 | -3.0  | -78
R205     | 994.4       | 3           | 1.74         | 18 | 872 | 1.70             | 1243.5 | 14 | 465 | -2.4  | -88
R209     | 909.1       | 3           | 1.33         | 19 | 829 | 1.10             | 1196.4 | 12 | 438 | -20.9 | -89
The first four rows are test problems generated using Solomon's approach, with 10, 25, 50
and 80 nodes, respectively. Fleet sizes for these instances are 2, 3, 5 and 12 vehicles, respectively. The
time windows are hard on the upper bound and soft on the lower bound: early arrival is allowed,
but service cannot start before the lower bound, whereas late arrival at a node is not allowed. Due to
the NP-complete nature of the models, solving these generated instances to optimality is not feasible;
therefore, their exact/lower-bound cells are left blank. Benchmark test problems in the R100 series
have hard time windows, while those in the R200 series have semi-relaxed time windows. The
dispatching time is set to 7:00 AM. In terms of computational time, we observe that the proposed
algorithm solves the test problems in a relatively short period of time. Note that the travelled distance
returned by our approach is considerably higher than the lower bound. This observation is not
counter-intuitive, since the objective in this paper is the safest path rather than the shortest path. In the
following sections, we discuss the trade-off between safe routes and travelled distance. The
computation time results also indicate that the model finds high-quality results within an acceptable
time even for large and complex network problems. We compared the results of our proposed
algorithm to those obtained from the basic SA algorithm as described by Kirkpatrick [35]; it has
been shown in the literature that SA performs well for similar models [4, 40]. Our proposed approach
improved the solution quality by 6 percent on average, while the basic SA required 70 to 320 percent
more computation time.
5. Case Study
The safe vehicle routing approach proposed in this paper is applicable in many practical situations,
including routing of dial-a-ride services for the aging population, transportation of sensitive goods or
dangerous substances, and ambulance routing. In this section we illustrate the application of the
proposed models on a real-world case study. We study the ambulance routing problem and consider
the following four hospitals in the City of Miami, to be served with one vehicle:
0. University of Miami Hospital (Central Depot)
1. The Miami Medical Center
2. North Shore Medical Center
3. Mount Sinai Medical Center
Fig. 5 shows the locations of the hospitals (nodes), and gives the graph definition of the model
along with the distance matrix. We choose the material to be transported as blood units in this case
study. These blood units are to be dispatched from the main depot to other three demand nodes. Our
objective is to select an optimal route and departure schedule that minimize both the risk of
congestion and the risk of crash. Each demand node has time windows and certain demand, and
vehicles stop at each node to serve them. Table 4 lists the parameter values used in the problem.
In Fig. 5, the distances are city-block shortest distances.
For the sake of simplicity, we transform our real-world network into a graph (Fig. 5-right). Arcs
between the depot and demand points (1, 2, 3) are major roadways including primary arterials and
interstate roadways. Shortest distance arcs (4, 5, 6) are shown on Fig. 5-left between each pair of
demand points, which are mostly collectors and local roadway segments.
O-D distance matrix (miles)
       0        1        2        3
0    0.000    6.898    5.500    6.402
1    6.898    0.000   10.202   12.003
2    5.500   10.202    0.000    8.800
3    6.402   12.003    8.800    0.000
Fig. 5. (Left) Real-World Network for Safe Routing Study – Miami. (Right) Network Transformed to Graph and Urban
Block Distance Matrix
In the transformation of the city map (Fig. 5-left) to a graph (Fig. 5-right), the Hamiltonian graph
condition is violated. In other words, although each node can be visited only once in a VRP graph, we
assume in our formulation that a vehicle may use a roadway more than once to reach an unserved node.
For instance, suppose that the vehicle has served node 3 and is now at node 1. The
vehicle's final destination is node 2, and the only possible option is arc 4 under the Hamiltonian
graph representation. In the real world, however, a vehicle may also use arc 1 and thus
reach node 2 through arcs 1 and 2 without entering the main depot. We address this issue with
Proposition 1 below. The idea is similar to the concept of VRP with satellite facilities
introduced by Bard et al. [41] and to the alternative fuel stations of Erdoğan and Miller-Hooks [42].
Proposition 1: An arc can be used more than once in a VRP network.
Proof: For this purpose, multiple visits to a node must be allowed. Let us augment the current
graph to a new one, G′ = (V′, E′), with new dummy vertices for the main depot, 𝑊 = {𝑣𝑛+2 }, where
the new vertex set is defined as 𝑉 ′ = 𝑉 ∪ 𝑊. The number of dummy vertices associated with vn+2,
denoted m, is set to the number of allowed visits. m should be small enough to limit the network size, but large
enough to enable full utilization of the roadway network.
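Proposition 1's augmentation can be sketched as follows (a minimal illustration with hypothetical vertex names; dummy depot copies inherit the depot's arcs so a tour may pass through the depot location without terminating):

```python
def augment_with_dummy_depots(V, E, depot, m):
    """Augment graph G=(V,E) with m dummy copies of the depot, in the spirit
    of Proposition 1: each dummy vertex inherits all arcs incident to the
    depot, allowing a vehicle to reuse roadway arcs through the depot
    location without ending its tour."""
    V2 = list(V)
    E2 = set(E)
    for j in range(m):
        d = f"{depot}_dummy{j}"         # hypothetical naming scheme
        V2.append(d)
        for (u, v) in E:
            if u == depot:
                E2.add((d, v))
            if v == depot:
                E2.add((u, d))
    return V2, E2
```

Choosing m trades network size against routing flexibility, exactly as noted in the proof.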
Table 4. Parameter Values for Nodes in Case Study

Parameter  | Description                          | Value
s_i        | Service time at node i               | {0.1, 0.1, 0.1}
q_i        | Demand at node i                     | {100, 120, 80}
Q_i        | Capacity of vehicle i                | 300
[e_i, l_i] | Time windows on node i               | [e_i = 0, l_i = 1.3]
L          | Latest departure time                | # of scenario - 1
m          | Dummy vertices associated with depot | 2
In order to formulate the objective functions of Eq. (1), we use hourly flow data and average
hourly speed data collected at telemetric traffic monitoring stations (TTMS). For all arcs
that have a TTMS station, we obtained three years of hourly flow data, from 2010 to 2012,
and used all data points to generate the traffic flow, density and speed relationship diagrams.
For arcs with no flow data at hand, we performed a simulation based on nearby roadways where
flow data are available. Researchers are encouraged to use predictive methods and crash analytics to
estimate the parameters of the objective function [43].
Fig. 6a and Fig. 6d show the average hourly flow on Arc 3 in the eastbound and westbound
directions for the selected three years. Following Eqs. (21)-(24), we create the flow, density
and speed relationship diagrams depicted in Fig. 6b, Fig. 6c, Fig. 6e and Fig. 6f. The nominal speed
(SN) is set equal to the free-flow speed on each arc/direction. Assuming 𝛽 = 1 and maximizing
Eq. (24) with respect to 𝑠 by setting its derivative to zero, we obtain the maximum traffic flow for
the M/G/1 model; the problem then reduces to the well-known case of a linear relationship between
speed and density:

𝐹𝑚𝑎𝑥 = (𝑆𝑁 × 𝐾𝑗𝑎𝑚) / 4    (28)
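For β = 1 the speed-density relation is linear (the Greenshields model), so Eq. (28) can be verified directly by maximizing the flow over density:

```latex
% Linear speed--density: s = S_N (1 - k/K_{jam}); flow F = k s.
F(k) = S_N\, k \left(1 - \frac{k}{K_{jam}}\right), \qquad
\frac{dF}{dk} = S_N\left(1 - \frac{2k}{K_{jam}}\right) = 0
\;\Rightarrow\; k^{*} = \frac{K_{jam}}{2},
\qquad
F_{max} = S_N \cdot \frac{K_{jam}}{2}\left(1 - \frac{1}{2}\right)
        = \frac{S_N \times K_{jam}}{4}.
```

The maximizing density is half the jam density, at which the speed equals half the free-flow speed.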
For each arc and direction, the highest flow value (assuming normal environmental and traffic
conditions) is set to 𝐹𝑚𝑎𝑥 . From Eq. (28), we calculate the jam density and create the flow, speed and
density relationship diagrams shown in Fig. 6. As shown in Fig. 6a and Fig. 6b, each flow value
corresponds to two speed values, one in the congested and one in the uncongested regime. Depending on the
classification of the roadway, we set our thresholds between the 0.854 and 0.901 quantiles of flow (see
[44] for default values) to determine the level of congestion. Congested speed values are those
corresponding to flow values above the threshold, and uncongested speed values are those
corresponding to flow values below it. Fig. 7 illustrates the performance
of our approach and the M/G/1 queuing model in simulating the speeds at every hour. We observe that
the estimated speeds show good concordance with the real-life data. We repeat the procedure for all
arcs and directions, and feed this data into the optimization models, where the speed data are used
to calculate the travel times and TTI.
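The two-regime speed recovery can be sketched by solving the flow-speed quadratic implied by the linear speed-density relation (a sketch under the β = 1 assumption; the numeric values in the usage example are illustrative):

```python
import math

def speeds_for_flow(F, S_N, K_jam):
    """Under the linear speed-density relation (beta = 1), flow satisfies
    F = K_jam * s * (1 - s / S_N), so each flow below F_max = S_N*K_jam/4
    corresponds to two speeds: a congested branch (below S_N/2) and an
    uncongested branch (above S_N/2)."""
    disc = S_N * S_N - 4.0 * F * S_N / K_jam
    if disc < 0:
        raise ValueError("flow exceeds F_max for this arc")
    root = math.sqrt(disc)
    congested = (S_N - root) / 2.0
    uncongested = (S_N + root) / 2.0
    return congested, uncongested
```

At F = F_max the two branches coincide at S_N/2; the quantile threshold on flow then decides which branch a given hourly observation is assigned to.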
Fig. 6. An Example of Flow Analysis in The Case Study on Arc 3 West and East Directions
Fig. 7. An Instance of Speed Simulation on Arcs Using M/G/1 Queuing Approach on Arc 3 (East and West)
To determine the crash probability on each arc, historical crash records were obtained from the Traffic
Safety Office of the Florida Department of Transportation (FDOT) for the years 2010 through 2012.
Through an extensive GIS analysis, we classify the crashes on each arc by direction and
aggregate the crash counts into hourly intervals. For example, Fig. 8 shows the hourly crash characteristics
on arc 3 for both directions. For some arcs, depending on the roadway class, the crash count increases
during rush hours; for others, this increase is negligible. These differences are reflected in
the routing plans obtained by the optimization models. Since arc lengths are not equal, we normalize
the hourly crash counts by arc length to obtain the number of crashes per mile for each arc. The
normalized crash counts are divided by the total number of hours in the three years to calculate the crash
probabilities. Finally, we scale the probabilities so that the normalized TTI and the crash probabilities
have the same order of magnitude.
Fig. 8. Hourly Traffic Crash Patterns on Arc 3 (East and West)
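The normalization described above can be sketched as follows (a minimal sketch; the `scale` parameter stands in for the magnitude-matching factor mentioned in the text):

```python
def hourly_crash_probability(crash_counts_by_hour, arc_length_mi, years=3, scale=1.0):
    """Convert per-clock-hour crash counts on an arc (summed over the study
    period) into per-mile hourly crash probabilities: normalize by arc
    length, divide by the number of times each clock hour was observed over
    `years`, then apply a common scaling factor so the probabilities share
    the TTI's order of magnitude."""
    hours_observed = years * 365  # each clock hour occurs ~365 times a year
    return [scale * c / arc_length_mi / hours_observed
            for c in crash_counts_by_hour]
```

For a 2-mile arc, 219 crashes observed in a given clock hour over three years yields a per-mile hourly probability of 0.1 before scaling.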
A total of 24 scenarios are considered here. Starting from 12 AM, a new scenario is created every
hour thereafter for the next 24 hours. We conduct an exhaustive enumeration over all scenarios for
the case study in order to check the performance of the proposed augmented SA approach, which was
able to solve both models to global optimality in less than two seconds. The following parameter
values were selected by the proposed parameter tuning:
Maximum number of iterations: 10
Initial temperature: 10
Final temperature: 0.01
Maximum number of iterations per temperature: 5
Number of population admitted to next iteration: 4
Cooling factor: Boltzmann index
The proposed optimization approach minimizes the weighted sum of the functions given in Eq.
(1) and Eq. (2) to find the safest route with the least congestion. We compare this solution to the safest
route and least congested route solutions obtained, respectively, by minimizing Eq. (1) and Eq.
(2) separately. In addition, we compare the optimal solution to the following: (a) the fastest route
(minimum travel time), obtained by using Eq. (16) as the objective function, and (b) the shortest route
(minimum travel distance), obtained by minimizing Eq. (15). Table 5 gives the minimum travel time
routes. It can be seen that the slowest route occurs in the 5 PM interval, at 61.20 min. The minimum
travel distance route does not depend on the time of day: the minimum-distance solutions are 3-5-4-1
or 1-4-5-3, with a total travelled distance of 32.30 miles.
Table 6 summarizes the routes obtained from the proposed method for each time interval. In Table
6, the optimal routes are enumerated by arc permutation (not by nodes). Our modelling approach
is node-based optimization; however, we use the arc numbers for better readability and easier
tracking of the routes. The minimum distance is achieved through the routing plans 3-5-4-1 or
1-4-5-3, with a total travelled distance of 32.3009 miles. The results with objective function (16),
where only the travel time is minimized, indicate that when the speed is close to the free-flow speed,
routing is planned on interstate highways (arcs 1, 2 and 3), while in other hours the routes change (Table
5). During rush hours specifically, routing is performed through a combination of major and
local/connector roadways. Routing with the minimum travel time coincides with routing with the
minimum distance during congested hours. Finally, the crash risk cost gap indicates that the minimum
travel time routing plan differs considerably from the safest-path routing plan.
The columns of Table 6 present the output routing plans of three models: "Weighted Sum
of Objectives" is the weighted sum of Eq. (1) and Eq. (2), "Crash Risk Minimization" is associated with
Eq. (1) only, and "Congestion Avoidance" with Eq. (2).
Table 5. Minimum travel time routes

Start Time            | Route       | OFV (min.)  | Crash Risk Cost Gap | Traversed Distance Gap (vs. min distance)
12:00 AM - 6:00 AM    | 1-1-2-2-3-3 | 41.02       | >0                  | 5.299
7:00 AM               | 1-6-3-2-2   | 46.76       | 0.149               | 4.002
8:00 AM               | 1-6-3-2-2   | 47.29       | 0.154               | 4.002
9:00 AM - 1:00 PM     | 1-1-2-2-3-3 | 44.22-44.95 | >0                  | 5.299
2:00 PM               | 1-1-3-5-2   | 52.13       | 0.305               | 2.196
3:00 PM               | 3-5-4-1     | 61.01       | 0.198               | 0.000
4:00 PM               | 3-5-4-1     | 60.94       | 0.210               | 0.000
5:00 PM               | 3-5-4-1     | 61.20       | 0.153               | 0.000
6:00 PM               | 3-5-4-1     | 60.95       | 0.191               | 0.000
7:00 PM - 11:00 PM    | 1-1-2-2-3-3 | 43.81       | 0.282               | 5.299
With the Congestion Avoidance objective, the optimal routes show a uniform pattern in the AM and
PM hours: from 10 PM until 11 AM, 1-4-5-3 is selected as the optimal route, and from 12 PM to 9 PM,
3-5-4-1 is selected. During the AM rush hours, the TT Gap increases to 19.09,
probably because the speed drops below the free-flow value. However, the optimal
routes lead to no extra travelled mileage (the TD Gap is 0 for all time periods). Moreover, the objective
function value during the PM peak hours, the busiest and most congested time of day in Miami, is
still within an acceptable range (less than a 0.15 increase in the total TTI). This indicates that congestion
avoidance can be achieved without sacrificing the minimum distance. Nevertheless, this objective
function increases the total travel time by 30%. This observation indicates that while avoiding
congestion may offer roadway users less congestion and higher speed levels, assuming it consequently
yields faster routing is not necessarily accurate. Yet this objective function can help minimize
the number of acceleration and deceleration movements and the frustration of being stuck in traffic jams,
and consequently provides a more pleasant travelling experience, even though the duration is longer than
usual. The crash risk minimization plan shows erratic changes at night. However, route 2-4-6-3
appears to be the safest route to take during both rush-hour periods.
In the main objective function (safe and least congested routing), the impact of crash risk
minimization is more dominant than that of the TTI during the midday (9 AM-2 PM) and night (7 PM-11 PM) hours.
This is because of the reduced congestion probability at those hours. During rush hours (7-8 AM
and 3-6 PM), on the other hand, the routing is influenced mostly by TTI minimization. During the
remaining times (12-5 AM), the two components of the objective function have equal impact, and the
solution is a trade-off between the two objectives. It can also be seen that safe routing increases the
travelled distance by up to 5.6% (from Table 6, the maximum travelled distance gap for the weighted
objective function is 1.806, which is 5.6% of the minimum-distance value of 32.30 miles) and the total
travel time by up to 31% (from Table 6, the maximum travel time gap for the weighted-sum objective
function is 18.98, which is 31% of the minimum travel time of 61.20 min).
Whether these numbers justify the feasibility of this approach in real-world applications
will mainly depend on policy makers and stakeholders. The type of business plays an important role in
this decision making as well. For transporting vulnerable populations, hazardous materials,
and medical substances, among others, safety can be of higher priority than time and
distance. Safe path selection can be a critical addition to navigation devices, as a substitute for
conventional minimum travel time routes.
Table 6. Case Study Routing and Scheduling Results. OFV: Objective Function Value. TT Gap: Travel Time Increase Compared to Routing with Minimum Travel Time. CR Gap: Increase in Crash
Risk Compared to Routing with Minimum Crash Probability. TD Gap: Extra Travelled Mileage Compared to Routing with Minimum Travelled Distance. Routes are numbered according to arcs.
Computational time is less than 200 milliseconds for all scenarios.

Start Time | Weighted Sum of Objectives (Route; OFV; TT Gap; CR Gap; TD Gap) | Crash Risk Minimization (Route; OFV; TT Gap; TD Gap) | Congestion Avoidance (Route; OFV; TT Gap; CR Gap; TD Gap)
12:00 AM | 1-6-5-2; 1.575; 11.63; 0.000; 0.900 | 1-6-5-2; 0.608; 11.63; 0.900     | 1-4-5-3; 1.00; 19.09; 0.038; 0.00
1:00 AM  | 2-5-6-1; 1.542; 11.63; 0.014; 0.900 | 2-2-3-6-1; 0.561; 3.50; 4.002    | 1-4-5-3; 1.00; 19.09; 0.089; 0.00
2:00 AM  | 1-6-5-2; 1.470; 11.63; 0.012; 0.900 | 1-1-3-5-2; 0.491; 8.14; 2.196    | 1-4-5-3; 1.00; 19.09; 0.062; 0.00
3:00 AM  | 2-5-6-1; 1.509; 11.63; 0.087; 0.900 | 1-1-2-2-3-3; 0.455; 0.00; 5.299  | 1-4-5-3; 1.00; 19.09; 0.130; 0.00
4:00 AM  | 2-5-6-1; 1.548; 11.63; 0.000; 0.900 | 2-5-6-1; 0.580; 11.63; 0.900     | 1-4-5-3; 1.00; 19.09; 0.057; 0.00
5:00 AM  | 2-5-6-1; 1.529; 11.63; 0.022; 0.900 | 1-1-2-5-3; 0.540; 8.14; 2.196    | 1-4-5-3; 1.00; 19.09; 0.063; 0.00
6:00 AM  | 2-4-6-3; 1.590; 13.24; 0.000; 1.806 | 2-4-6-3; 0.612; 13.24; 1.806     | 1-4-5-3; 1.00; 17.62; 0.106; 0.00
7:00 AM  | 1-4-5-3; 1.719; 13.72; 0.000; 0.000 | 1-4-5-3; 0.739; 13.72; 0.000     | 1-4-5-3; 1.05; 13.72; 0.000; 0.00
8:00 AM  | 1-4-5-3; 1.963; 13.64; 0.125; 0.000 | 3-6-4-2; 0.843; 25.33; 1.806     | 1-4-5-3; 1.12; 13.64; 0.125; 0.00
9:00 AM  | 2-4-6-3; 1.741; 11.93; 0.000; 1.806 | 2-4-6-3; 0.720; 11.93; 1.806     | 1-4-5-3; 1.10; 15.85; 0.079; 0.00
10:00 AM | 2-4-6-3; 1.709; 12.21; 0.000; 1.806 | 2-4-6-3; 0.693; 12.21; 1.806     | 1-4-5-3; 1.10; 16.28; 0.065; 0.00
11:00 AM | 3-6-4-2; 1.677; 12.15; 0.000; 1.806 | 3-6-4-2; 0.654; 12.15; 1.806     | 1-4-5-3; 1.12; 16.23; 0.184; 0.00
12:00 PM | 2-4-6-3; 1.690; 11.78; 0.000; 1.806 | 2-4-6-3; 0.663; 11.78; 1.806     | 3-5-4-1; 1.13; 15.86; 0.155; 0.00
1:00 PM  | 3-6-4-2; 1.716; 11.62; 0.000; 1.806 | 3-6-4-2; 0.688; 11.62; 1.806     | 3-5-4-1; 1.14; 15.70; 0.130; 0.00
2:00 PM  | 3-6-4-2; 1.705; 4.98; 0.000; 1.806  | 3-6-4-2; 0.674; 4.98; 1.806      | 3-5-4-1; 1.13; 8.95; 0.269; 0.00
3:00 PM  | 3-5-4-1; 1.925; 0.00; 0.198; 0.000  | 3-6-4-2; 0.729; 4.93; 1.806      | 3-5-4-1; 1.12; 0.00; 0.198; 0.00
4:00 PM  | 3-5-4-1; 1.995; 0.00; 0.210; 0.000  | 2-4-6-3; 0.790; 22.19; 1.806     | 3-5-4-1; 1.11; 0.00; 0.210; 0.00
5:00 PM  | 3-5-4-1; 1.944; 0.00; 0.153; 0.000  | 2-4-6-3; 0.788; 22.10; 1.806     | 3-5-4-1; 1.15; 0.00; 0.153; 0.00
6:00 PM  | 3-5-4-1; 1.947; 0.00; 0.191; 0.000  | 2-4-6-3; 0.762; 12.53; 1.806     | 3-5-4-1; 1.12; 0.00; 0.191; 0.00
7:00 PM  | 2-4-6-3; 1.665; 12.68; 0.000; 1.806 | 2-4-6-3; 0.659; 12.68; 1.806     | 3-5-4-1; 1.05; 16.69; 0.228; 0.00
8:00 PM  | 3-6-4-2; 1.590; 13.66; 0.000; 1.806 | 3-6-4-2; 0.611; 13.66; 1.806     | 3-5-4-1; 1.01; 18.08; 0.070; 0.00
9:00 PM  | 3-5-4-1; 1.557; 18.67; 0.002; 0.000 | 2-5-6-1; 0.587; 11.40; 0.900     | 3-5-4-1; 1.00; 18.67; 0.002; 0.00
10:00 PM | 1-4-5-3; 1.603; 18.98; 0.000; 0.000 | 1-4-5-3; 0.636; 18.98; 0.000     | 1-4-5-3; 1.00; 18.98; 0.000; 0.00
11:00 PM | 1-6-5-2; 1.595; 11.63; 0.000; 0.900 | 1-6-5-2; 0.627; 11.63; 0.900     | 1-4-5-3; 1.00; 19.09; 0.058; 0.00
6. Conclusion
We propose a two-phase, time-dependent safe vehicle routing and scheduling optimization model that identifies the
safest and least congested routes, as a substitute for traditional objectives given in the literature such as shortest
distance or travel time, through (1) avoiding recurring congestion, and (2) selecting routes that have a lower
probability of crash occurrences and of the non-recurring congestion caused by those crashes. The paper introduces the idea
of using crash risk and travel time delay, through the crash probability and the travel time index (TTI), in the vehicle routing
problem. In the first phase, our model identifies the routing of a fleet and the sequence of nodes on the safest feasible
paths. The second phase reschedules the departure times of each vehicle from each node and adjusts the optimal speed
on each arc. A modified simulated annealing (SA) algorithm is formulated to solve the NP-hard optimization problem.
Results show that SA is capable of solving complex models iteratively in a considerably short computational time. We
also introduce a new approach for estimating the speeds on arcs without real-life speed data via a queuing model. We
apply the proposed models to a small real-world case study in the City of Miami. Routing schemes under several
conditions are evaluated for this case study: (1) minimizing traffic delay (maximum congestion avoidance),
(2) minimizing traffic crash risk, and (3) the combination of the two. Using these results, we compare
the travel times and traversed distances obtained from the proposed models to the classic objectives of travel time and
distance minimization.
The results showed that safe routing can be achieved with no or only a slight increase in travelled distance. The effect
of crash risk minimization outweighs the TTI at night and during midday hours, providing safer travel for roadway users.
However, during the PM rush hours, the routing is influenced mostly by TTI minimization, especially due to the lower
level of service (LOS). In some cases, the optimal route differs from either single-objective route, being a safer route
obtained through a trade-off between both objectives. We suggest the proposed safe routing approach to policy makers
and planners, since safer routing can be achieved with the proposed models for a slight increase in travelled distance or
travel time. Traffic accidents are costly, and route planning through safer routes can result in major savings for fleet
owners as well as insurance companies. The knowledge obtained from this approach can successfully contribute to the
development of more reliable safety-focused transportation plans, as the solution of the model points to specific
situations where routes can be selected based on minimizing crash risk rather than only travel time, which will, in turn,
help improve safety for roadway users. Some transportation activities, such as hazardous material handling and
emergency transportation among others, require a higher level of transportation safety. The concepts proposed in this
paper can be implemented in other disciplines of transportation and logistics, such as path finding and network design.
Formulating the models in a multi-objective format to study safe paths would be an interesting future work, as would
evaluating the actual performance of safe routing in GPS devices for navigation. In addition, if the majority of vehicles
on a network take the approach proposed in this paper, rerouting may shift traffic to other network segments; therefore,
researchers can study safety rerouting at the network level and in an aggregate setting. In this paper, historical crash
data were used to estimate the risk factor. One can use prediction techniques that account for unobserved heterogeneity
(please see [45]) to estimate the risk factor in the future. Finally, the concept of safety in routing can be studied using
other approaches, such as user equilibrium modelling, dynamic programming, and dynamic traffic assignment, among
others.
Acknowledgements
This project was supported by United States Department of Transportation grant DTRT13-G-UTC42, and
administered by the Center for Accessibility and Safety for an Aging Population (ASAP) at the Florida State
University (FSU), Florida A&M University (FAMU), and University of North Florida (UNF). We also thank the
Florida Department of Transportation and National Oceanic and Atmospheric Administration for providing the data.
The opinions, results, and findings expressed in this manuscript are those of the authors and do not necessarily
represent the views of the United States Department of Transportation, The Florida Department of Transportation,
The National Oceanic and Atmospheric Administration, The Center for Accessibility and Safety for an Aging
Population, the Florida State University, the Florida A&M University, or the University of North Florida.
References
[1] ECMT, "Managing urban traffic congestion," European Conference of Ministers of Transport, 2007.
[2] G. B. Dantzig and J. H. Ramser, "The truck dispatching problem," Management Science, vol. 6, pp. 80-91, 1959.
[3] G. Clarke and J. W. Wright, "Scheduling of vehicles from a central depot to a number of delivery points," Operations Research, vol. 12, pp. 568-581, 1964.
[4] P. Toth and D. Vigo, Vehicle Routing: Problems, Methods, and Applications, vol. 18, SIAM, 2014.
[5] S. Ichoua, M. Gendreau, and J.-Y. Potvin, "Vehicle dispatching with time-dependent travel times," European Journal of Operational Research, vol. 144, pp. 379-396, 2003.
[6] C. Lecluyse, K. Sörensen, and H. Peremans, "A network-consistent time-dependent travel time layer for routing optimization problems," European Journal of Operational Research, vol. 226, pp. 395-413, 2013.
[7] A. L. Kok, E. Hans, and J. Schutten, "Vehicle routing under time-dependent travel times: the impact of congestion avoidance," Computers & Operations Research, vol. 39, pp. 910-918, 2012.
[8] T. Van Woensel, L. Kerbache, H. Peremans, and N. Vandaele, "Vehicle routing with dynamic travel times: A queueing approach," European Journal of Operational Research, vol. 186, pp. 990-1007, 2008.
[9] T. Van Woensel and N. Vandaele, "Modeling traffic flows with queueing models: a review," Asia-Pacific Journal of Operational Research, vol. 24, pp. 435-461, 2007.
[10] V. Pillac, M. Gendreau, C. Guéret, and A. L. Medaglia, "A review of dynamic vehicle routing problems," European Journal of Operational Research, vol. 225, pp. 1-11, 2013.
[11] C. Malandraki and M. S. Daskin, "Time dependent vehicle routing problems: Formulations, properties and heuristic algorithms," Transportation Science, vol. 26, pp. 185-200, 1992.
[12] M. Figliozzi, "Vehicle routing problem for emissions minimization," Transportation Research Record: Journal of the Transportation Research Board, pp. 1-7, 2010.
[13] A. Omidvar and R. Tavakkoli-Moghaddam, "Sustainable vehicle routing: Strategies for congestion management and refueling scheduling," in Energy Conference and Exhibition (ENERGYCON), 2012 IEEE International, 2012, pp. 1089-1094.
[14] A. K. Ziliaskopoulos and H. S. Mahmassani, "Time-dependent, shortest-path algorithm for real-time intelligent vehicle highway system applications," Transportation Research Record, pp. 94-94, 1993.
[15] E. Miller-Hooks and H. Mahmassani, "Optimal routing of hazardous materials in stochastic, time-varying transportation networks," Transportation Research Record: Journal of the Transportation Research Board, pp. 143-151, 1998.
[16] G. G. Brown, C. J. Ellis, G. W. Graves, and D. Ronen, "Real-time, wide area dispatch of Mobil tank trucks," Interfaces, vol. 17, pp. 107-120, 1987.
[17] A. V. Hill and W. Benton, "Modelling intra-city time-dependent travel speeds for vehicle scheduling problems," Journal of the Operational Research Society, pp. 343-351, 1992.
[18] H. Hashimoto, M. Yagiura, and T. Ibaraki, "An iterated local search algorithm for the time-dependent vehicle routing problem with time windows," Discrete Optimization, vol. 5, pp. 434-456, 2008.
[19] A. V. Donati, R. Montemanni, N. Casagrande, A. E. Rizzoli, and L. M. Gambardella, "Time dependent vehicle routing problem with a multi ant colony system," European Journal of Operational Research, vol. 185, pp. 1174-1191, 2008.
[20] L. Blincoe, T. R. Miller, E. Zaloshnja, and B. A. Lawrence, "The economic and societal impact of motor vehicle crashes, 2010 (Revised)," 2015.
[21] A. Omidvar, O. A. Vanli, E. E. Ozguven, R. Moses, A. Barrett, A. Kocatepe, et al., "Effect of Traffic Patterns on the Frequency of Aging-Driver-Involved Highway Crashes: A Case Study on the Interstate-95 in Florida," presented at the Transportation Research Board 95th Annual Meeting, 2016.
[22] A. Omidvar, E. Ozguven, O. Vanli, and R. Moses, "Understanding the factors affecting the frequency and severity of aging population-involved crashes in Florida," Advances in Transportation Studies, 2016.
[23] T. Lomax and R. Margiotta, Selecting Travel Reliability Measures, the Institute, 2003.
[24] A. D. May, Traffic Flow Fundamentals, 1990.
[25] K. Fagerholt, "Ship scheduling with soft time windows: An optimisation based approach," European Journal of Operational Research, vol. 131, pp. 559-571, 2001.
[26] K. Fagerholt, G. Laporte, and I. Norstad, "Reducing fuel emissions by optimizing speed on shipping routes," Journal of the Operational Research Society, vol. 61, pp. 523-529, 2010.
[27] M. Christiansen, K. Fagerholt, and D. Ronen, "Ship routing and scheduling: Status and perspectives," Transportation Science, vol. 38, pp. 1-18, 2004.
[28] N. Vandaele, T. Van Woensel, and A. Verbruggen, "A queueing based traffic flow model," Transportation Research Part D: Transport and Environment, vol. 5, pp. 121-135, 2000.
[29] C. Daganzo, Fundamentals of Transportation and Traffic Operations, vol. 30, Pergamon, Oxford, 1997.
[30] L. Elefteriadou, An Introduction to Traffic Flow Theory, Springer, 2014.
[31] D. Heidemann, "A queueing theory approach to speed-flow-density relationships," in International Symposium on Transportation and Traffic Theory, 1996, pp. 103-118.
[32] A. S. Al-Ghamdi, "Analysis of time headways on urban roads: case study from Riyadh," Journal of Transportation Engineering, vol. 127, pp. 289-294, 2001.
[33] P. C. Anastasopoulos and F. L. Mannering, "The effect of speed limits on drivers' choice of speed: a random parameters seemingly unrelated equations approach," Analytic Methods in Accident Research, vol. 10, pp. 1-11, 2016.
[34] F. S. Hillier, "Introduction to operations research," 1967.
[35] S. Kirkpatrick and M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, pp. 671-680, 1983.
[36] A. Amini, R. Tavakkoli-Moghaddam, and A. Omidvar, "Cross-docking truck scheduling with the arrival times for inbound trucks and the learning effect for unloading/loading processes," Production & Manufacturing Research, vol. 2, pp. 784-804, 2014.
[37] M. M. Solomon, "Algorithms for the vehicle routing and scheduling problems with time window constraints," Operations Research, vol. 35, pp. 254-265, 1987.
[38] R. H. Myers, D. C. Montgomery, and C. M. Anderson-Cook, Response Surface Methodology: Process and Product Optimization Using Designed Experiments, John Wiley & Sons, 2016.
[39] D. J. MacKay, "Introduction to Gaussian processes," NATO ASI Series F Computer and Systems Sciences, vol. 168, pp. 133-166, 1998.
[40] I. H. Osman, "Metastrategy simulated annealing and tabu search algorithms for the vehicle routing problem," Annals of Operations Research, vol. 41, pp. 421-451, 1993.
[41] J. F. Bard, L. Huang, P. Jaillet, and M. Dror, "A decomposition approach to the inventory routing problem with satellite facilities," Transportation Science, vol. 32, pp. 189-203, 1998.
[42] S. Erdoğan and E. Miller-Hooks, "A green vehicle routing problem," Transportation Research Part E: Logistics and Transportation Review, vol. 48, pp. 100-114, 2012.
[43] F. L. Mannering and C. R. Bhat, "Analytic methods in accident research: Methodological frontier and future
directions," Analytic methods in accident research, vol. 1, pp. 1-22, 2014.
J. Zegeer, M. Blogg, K. Nguyen, and M. Vandehey, "Default values for highway capacity and level-ofservice analyses," Transportation Research Record: Journal of the Transportation Research Board, pp. 3543, 2008.
F. L. Mannering, V. Shankar, and C. R. Bhat, "Unobserved heterogeneity and the statistical analysis of
highway accident data," Analytic methods in accident research, vol. 11, pp. 1-16, 2016.
| 3 |
arXiv:1506.05893v1 [] 19 Jun 2015
On Systematic Testing for Execution-Time Analysis
Daniel Bundala
UC Berkeley
Email: [email protected]
Sanjit A. Seshia
UC Berkeley
Email: [email protected]
Abstract—Given a program and a time deadline, does the
program finish before the deadline when executed on a given
platform? With the requirement to produce a test case when such
a violation can occur, we refer to this problem as the worst-case
execution-time testing (WCETT) problem.
In this paper, we present an approach for solving the WCETT
problem for loop-free programs by timing the execution of a
program on a small number of carefully calculated inputs. We
then create a sequence of integer linear programs, the solutions
of which encode the best timing model consistent with the measurements. By solving the programs we can find the worst-case
input as well as estimate the execution time of any other input. Our
solution is more accurate than previous approaches and, unlike
previous work, by increasing the number of measurements we
can produce WCETT bounds of any desired accuracy.
Timing of a program depends on the properties of the platform
it executes on. We further show how our approach can be used
to quantify the timing repeatability of the underlying platform.
I. INTRODUCTION
Execution-time analysis is central to the design and verification of real-time embedded systems. In particular, over the
last few decades, much work has been done on estimating
the worst-case execution time (WCET) (see, e.g. [1], [2], [3]).
Most of the work on this topic has centered on techniques
for finding upper and lower bounds on the execution time
of programs on particular platforms. Execution time analysis
is a challenging problem due to the interaction of a large space of program paths with the
complexity of the underlying platform (see, e.g., [4], [5]). Thus,
WCET estimates can sometimes be either too pessimistic (due
to conservative platform modeling) or too optimistic (due to
unmodeled features of the platform).
The above challenges for WCET analysis can limit its
applicability in certain settings. One such problem is to verify,
given a program P , a target platform H, and a deadline d,
whether P can violate deadline d when executed on H —
with the requirement to produce a test case when such a
violation can occur. We refer to this problem as the worst-case execution-time testing (WCETT) problem.
Tools that compute conservative upper bounds on execution
time have two limitations in addressing this WCETT problem:
(i) if the bound is bigger than d, one does not know whether
the bound is too loose or whether P can really violate d,
and (ii) such tools typically aggregate states for efficiency
and hence do not produce counterexamples. Moreover, such
tools rely on having a fairly detailed timing model of the
hardware platform (processor, memory hierarchy, etc.). In
some industrial settings, due to IP issues, hardware details are
not readily available, making the task much harder for timing
analysis (see e.g, this NASA report for more details [6]); in
such settings, one needs an approach to timing analysis that
can work with a “black-box” platform.
In this paper, we present an approach to systematically test
a program’s timing behavior on a given hardware platform.
Our approach can be used to solve the WCETT problem,
in that it not only predicts the execution time of the worst-case (longest) program path, but also produces a suitable test
case. It can also be adapted to produce the top K longest
paths for any given K. Additionally, the timing estimate for
a program path comes with a “guard band” or “error bound”
characterizing the approximation in the estimate — in all our
experiments, the true value was close to the estimate and well
within the guard band.
Our approach builds upon prior work on the GameTime
system [7], [8]. GameTime allows one to predict the execution
time of any program path without running it by measuring a
small sample of “basis paths” and learning a timing model of
the platform based on those measurements.
The advantage of the GameTime approach is that the
platform timing model is automatically learned from end-to-end path measurements, and thus it is easy to apply to
any platform. However, the accuracy of GameTime's estimates
(the guard bands) depends on the knowledge of a platform
parameter µmax which bounds the cumulative variation in the
timing of instructions along a program path from a baseline
value.
For example, a load instruction might take just 1 cycle if the
value is in a register, but several 10s of cycles if it is in main
memory and not cached. The parameter µmax can be hard to
pre-compute based on documentation of the processor's ISA
or even its implementation, if available.
Our approach shares GameTime’s ease of portability to any
platform, and like it, it is also suitable for black-box platforms.
However, rather than depending on knowledge of µmax , we
show how one can compute the guard bands using an integer
linear programming formulation. Experimental results show
that our approach can be more accurate than the original
GameTime algorithm [8], at a small extra computational cost.
Moreover, our algorithm is tunable: depending on the desired
accuracy specified by a user, the algorithm measures more
paths and yields a more precise estimate, possibly measuring all
paths if perfect accuracy is requested. Finally, we also show
how to estimate the parameter µmax .
int f(int x) {
    if (x % 2 == 0) {
        if (x & 101) {
            x++;
        } else {
            x += 7;
        }
    }
    return x;
}
Fig. 1: Source code (top) and its corresponding control flow
graph (bottom)
II. PRELIMINARIES
Our solution to the problem is an extension of the
measurement-based GameTime approach [11]. We now
present the model used in [11] as well as in this paper.
A. Model
To decide whether a program can exceed a given time
limit d, it suffices to decide whether the worst input exceeds
the limit d. However, recall that without any restrictions on the
program, e.g., when the program contains unbounded loops or recursion, even determining whether the program terminates, let alone
the number of steps it performs, is undecidable. Therefore,
we consider only deterministic programs with bounded loops
and no recursion. We did not find this limitation restrictive,
as reactive controllers are often already written with this
limitation in mind.
Given a computer program, one can associate with it the
control-flow graph (CFG) in the usual way (Figure 1); vertices
representing locations and edges the basic blocks (straight-line
code fragments) of the program. Since we assume that the
loops are bounded and there is no recursion, the loops can
be unrolled and all function calls inlined. Thus, the resulting
CFG is always a directed acyclic graph (DAG).
Given a DAG G = (V, E), with the set of vertices V , set
of edges E, we designate two vertices: source s and sink t
corresponding to the entry and exit points of the program,
respectively.
For a vertex v of the graph, we use in(v) to denote the set of
incoming edges to v and out(v) to denote the set of outgoing
edges from v.
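As a concrete sketch (our own toy example, not the paper's implementation), a DAG together with its in(v) and out(v) maps and its source-to-sink paths can be represented directly:

```python
from collections import defaultdict

# A tiny diamond-shaped DAG standing in for an unrolled CFG; the node and
# edge names are invented for illustration.
edges = [("s", "a"), ("s", "b"), ("a", "t"), ("b", "t")]

def in_out(edge_list):
    """Build the in(v) and out(v) edge maps used later by the flow constraints."""
    inc, out = defaultdict(list), defaultdict(list)
    for u, v in edge_list:
        out[u].append((u, v))
        inc[v].append((u, v))
    return inc, out

def paths(out, v, sink):
    """Enumerate all paths from v to the sink as edge lists (DAG, so finite)."""
    if v == sink:
        yield []
    for e in out[v]:
        for rest in paths(out, e[1], sink):
            yield [e] + rest

inc, out = in_out(edges)
all_paths = list(paths(out, "s", "t"))  # the diamond has two s-t paths
```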
Following [11], to model the execution times of a program, we
associate with each edge e ∈ E a cost we. The cost we models
the (baseline) execution time of the particular statement the
edge e corresponds to.
As described in the introduction, we measure only execution
times of entire source-to-sink paths and not of individual statements. Given a source-to-sink path x, the baseline execution
time of the path x is Σ_{e∈x} we, where the sum is over all
edges present in x. However, due to caching, branch misses,
etc., the actual execution time differs from the baseline, and thus
the execution time (the length of the path) can be modeled as

  wx = Σ_{e∈x} we + dx
where the term dx denotes the variation from the baseline
value. The term dx is a function of the input and the context the
program runs in. This is known as the bounded path-dependent
repeatability condition [12].
The length of path x is denoted wx . Observe that different
inputs corresponding to the same path in general take different
time to execute. However, we assume that |dx| ≤ µx for some
µx ∈ R. We denote the maximum maxx µx by µmax. The
value µmax is a measure of timing repeatability. If µmax = 0
then the system is perfectly time repeatable. In general, the
larger the value of µmax , the less repeatable the timing of the
system is.
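As a minimal sketch of this timing model (the edge weights and the variation term below are invented, not measured):

```python
# Hypothetical baseline edge costs w_e, in cycles.
w = {"e1": 3, "e2": 5, "e3": 2}

def length(path, d_x=0.0):
    """Length of a path: baseline sum of edge costs plus the variation d_x."""
    return sum(w[e] for e in path) + d_x

baseline = length(["e1", "e3"])           # 3 + 2 = 5
observed = length(["e1", "e3"], d_x=1.5)  # caching etc. add a variation of 1.5

# mu_max bounds |d_x| over all paths; mu_max == 0 means perfect repeatability.
mu_max = abs(observed - baseline)
```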
The aim of this paper is to find a path x such that wx
is maximal. The algorithm in [11], as well as ours, does not
require knowledge of the µx's or of µmax to find the worst-case execution input. However, the accuracy of the algorithms
depends on µmax, as that is inherent to the timing of the
underlying hardware.
Our algorithm carefully synthesizes a collection of inputs
on which it runs the given program and measures the times
it takes to execute each of them. Then, using these measurements, it estimates the length of the longest path.
Formally, the pair (x, wx ) consisting of a path x and its
length wx is called a measurement. We denote the length of
the longest path by wM .
To summarize, throughout the paper we use the following
notation.
• G - the underlying DAG
• S - the set of measured paths
• M = {(xi, li)} - the set of measurements, consisting of pairs of a path xi ∈ S and its observed length li
It was shown in [11] how, using only |E| measurements
of source-to-sink paths, to find an input corresponding to a
path of length at least wM − 2|E|µmax . In particular, if the
longest path is longer than the second longest path by at least
2|E|µmax , the algorithm in [11] in fact finds the longest path.
Thus, we say that the “accuracy” of the algorithm is 2|E|µmax .
In this paper, we show how to (i) improve the accuracy without increasing the number of measurements, (ii) improve the accuracy even further by increasing the number of measurements, and (iii) estimate the timing repeatability of the underlying platform (as captured by µmax).

Our algorithm, like the algorithm in [11], measures the lengths of some paths and then estimates the lengths of other paths based on those measurements.¹ Note that as long as not all the lengths of all the paths are measured, some inaccuracy in the estimates is unavoidable. Consider, for example, the graph in Figure 2 and assume the graph consists of n "diamonds". Clearly, there are 2^n source-to-sink paths in the graph.

Fig. 2: DAG with exponentially many paths

Assume that we = 1 for each edge and µx = 0 for all paths x except for one path y, for which µy = µmax > 0. Now, suppose we measure the lengths of some collection of paths S. As long as S does not contain y, the length of every observed path is 2n. Hence, any length wy of y in the interval [2n − µmax, 2n + µmax] is consistent with the measurements. Therefore, in the worst case, the best achievable estimate of the length wy can always be at least µmax from the real answer.

We now briefly describe the algorithm in [11]; we skip the standard technical details such as CFG extraction or how to find an input corresponding to a given path, and focus only on how to extract the longest path.

Let m be the number of edges in E. Then, by numbering the edges in E, one can think of each path x as a vector px in R^m such that

  px(i) = 1 if the ith edge is used in x, and px(i) = 0 otherwise.

Now, given two paths x and y, one can define the linear combination a·x + b·y for a, b ∈ R in the natural (component-wise) way. Thus, one can think of paths as points in an m-dimensional vector space over R. In particular, it was shown in [11] that there is always a basis B of at most m source-to-sink paths such that each source-to-sink path x can be written as a linear combination of paths from B:

  px = Σ_{b∈B} cb · pb

where the cb's are coefficients in R. Moreover, using the theory of 2-barycentric spanners [13], it was shown that B can be chosen in such a way that for any path x, it always holds that |cb| ≤ 2 for every b ∈ B. The paths in B are called the basis paths, as they suffice to express any other path.

Fig. 3: Basis paths for the DAG in Figure 2. For example, the path that always takes the bottom path through each diamond can be expressed as x1 + x2 + x3 − 2·x0

Now, if the path px is written as px = Σ_{b∈B} cb · pb, then its estimated (baseline) length is

  wx = Σ_{b∈B} cb · wb

where the wb's are the measured lengths of the basis paths.

The algorithm thus runs the program on the inputs corresponding to the basis paths in order to measure the length wb for each b ∈ B. Moreover, it was shown in [11] how, by encoding the problem as an integer-linear-program (ILP) instance, to find a path X such that the corresponding estimated length wX = Σ_{b∈B} cb · wb is maximized.²

Consider the accuracy of the estimated length of pX. By construction, pX = Σ_{b∈B} cb · pb. Hence,

  Σ_{e∈X} we = Σ_{b∈B} cb Σ_{e∈b} we.

Further, for b ∈ B, we have wb = Σ_{e∈b} we + db. Hence,

  wX − Σ_{b∈B} cb · wb = Σ_{e∈X} we + dX − Σ_{b∈B} cb (Σ_{e∈b} we + db)
    = Σ_{e∈X} we − Σ_{b∈B} cb Σ_{e∈b} we + dX − Σ_{b∈B} cb · db
    = dX − Σ_{b∈B} cb · db
    ≤ (2|B| + 1)µmax,

since |dX| ≤ µmax and |cb| ≤ 2 for every b ∈ B.

Thus, the algorithm in [11] finds the longest path (and the corresponding input) only up to the error term (2|B| + 1)µmax, under certain assumptions outlined in [11]. A challenge, as noted earlier, is that it is not easy to estimate the value of µmax. Consider the first four columns in Table V. The second column in the table shows the lengths of the longest path as estimated by the algorithm in [11]. However, the third column shows the actual length of the path that is measured when the program is executed on the corresponding inputs. Notice that in some cases the prediction does not match the measured time. Also, the algorithm in [11] does not provide any error bounds on its estimates, making the predicted values less useful.

In this paper we show how, given exactly the same set of measurements as in [11], we can find a tighter estimate, and how to incorporate additional measurements to obtain even tighter bounds. In fact, for the benchmarks given in Table V, we not only obtain more accurate predictions of running time, we can also give error bounds for our estimates.

¹ In general, however, it holds that the more paths are included in S, the better the estimate of the longest path.
² In case the resulting path is infeasible in the program, a constraint is added into the ILP and the ILP is solved again.
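The linear-combination identity of Figure 3 can be checked mechanically. The sketch below encodes each path of a 3-diamond DAG as a 0/1 vector over the edges; the edge naming is our own convention, not the paper's:

```python
# Each diamond i has a top branch (edges "t{i}a", "t{i}b") and a bottom
# branch ("b{i}a", "b{i}b"); this 12-edge encoding of Figure 2 is invented.
edges = [f"{side}{i}{half}" for i in range(3) for side in "tb" for half in "ab"]
idx = {e: j for j, e in enumerate(edges)}

def vec(choices):
    """0/1 incidence vector of the path taking choices[i] in diamond i."""
    p = [0] * len(edges)
    for i, side in enumerate(choices):
        p[idx[f"{side}{i}a"]] = p[idx[f"{side}{i}b"]] = 1
    return p

x0 = vec("ttt")                      # all-top path
x1, x2, x3 = vec("btt"), vec("tbt"), vec("ttb")

# Figure 3: the all-bottom path equals x1 + x2 + x3 - 2*x0, component-wise.
bottom = [a + b + c - 2 * d for a, b, c, d in zip(x1, x2, x3, x0)]
```

Note that every coefficient stays within [-2, 2], as guaranteed by the 2-barycentric-spanner construction.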
III. A LGORITHM
A. Overview
We now give an overview of our algorithm. Recall that the
problem studied can be stated as follows: given a DAG
with source s and sink t, find the longest source-to-sink path,
where path lengths are modeled as described in
the Preliminaries, Section II.
The algorithm in [11] expresses every path as a linear
combination of basis paths, using their lengths to estimate
the lengths of the paths not measured. Intuitively, if two paths
overlap (they share common edges) then knowing the length of
one provides some information about the length of the other.
Even basis paths with zero coefficient in the linear combination
can provide information about the length of an estimated path.
In our algorithm, we write integer linear programs (ILPs),
with one constraint per measurement, looking for the longest
path with the edge weights consistent with the measurements
and the µx. Even though the µx are not observable, we show how to
obtain a consistent bound on µmax from the measurements.
B. Path Extraction
In this section we assume that we have a set of measurements M, consisting of pairs (x, lx ) where x is a path and lx is
the measured length of x and we show how to find the longest
path consistent with the measurements. In Section III-D we
then show how to actually calculate the set S. To make the
notation consistent with [11], we call the measured paths the
basis paths, even if they do not necessarily form a basis in
the underlying vector space as was the case in [11].
Suppose, for the moment, that the value of µmax is known
and equal to D ∈ R. Then the following problem encodes
the existence of individual edge weights (we) such that the
cumulative sum Σ_{e∈xi} we along each measured path is
consistent with its measured length; that is, the measured value
differs by at most D from the cumulative sum.

Problem 1. Input: DAG G, a set of measurements M and D ∈ R
  max  len(path)
  s.t. li − D ≤ Σ_{e∈xi} we ≤ li + D   for each measurement (xi, li) ∈ M
  vars: we ≥ 0 for each edge e
Where max len(path) expresses the length of the longest
cumulative sum along some source-to-sink path in the graph.
We now turn this problem into an ILP by expressing the
existence of a path as follows:
Problem 2. Input: DAG G, a set of measurements M and D ∈ R
  max  Σ_{e∈E} pe
  s.t. li − D ≤ Σ_{e∈xi} we ≤ li + D   for each measurement (xi, li) ∈ M
       Σ_{e∈out(s)} be = 1
       Σ_{e∈in(t)} be = 1
       Σ_{e∈in(v)} be = Σ_{e∈out(v)} be   for each vertex v ∉ {s, t}
       pe ≤ we   for each edge e
       pe ≤ M · be   for each edge e
  vars: for each edge e: we ≥ 0, be ∈ {0, 1}, pe ≥ 0

where M ∈ R is a constant larger than any potential we. In the implementation, we take M to be the largest li in the set of measurements M plus one.
In the above ILP (Problem 2), the Boolean variables be specify
which edges of the graph are present in the extremal source-to-sink path, and the pe's equal be · we. Thus, Σ_{e∈E} pe denotes
the length of the extremal source-to-sink path.
The existence of a path is encoded by the constraints
specifying that there is a flow from the source to the sink. That
is, that exactly one edge from the source has be = 1, exactly
one edge to the sink has be = 1 and that for all intermediate
vertices, the number of incoming edges to that vertex with
be = 1 equals the number of outgoing edges from that vertex
with be = 1.
Further, for each edge e ∈ E, we use the variable pe to
denote the product pe = be · we. As be ∈ {0, 1}, we
have pe ≤ we. Also, the constraints pe ≤ M · be ensure that
if be = 0 then pe = 0. On the other hand, if be = 1 then
the constraints imply only that 0 ≤ pe ≤ we. Finally, note that the
objective function is to maximize Σ_{e∈E} pe. Hence, if be = 1
then the optimal value for pe is we. Hence, in the
optimal solution, if be = 1 then pe = we = 1 · we = be · we,
as desired.
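Once concrete edge weights are fixed, the extremal path that the flow constraints select is just the maximum-weight s-t path in the DAG. The sketch below finds it by dynamic programming over invented weights; it is an illustration of what the objective computes, not the ILP of Problem 2 itself:

```python
from functools import lru_cache

# Toy DAG with fixed (invented) edge weights w_e, as adjacency lists of
# (successor, weight) pairs.
succ = {"s": [("a", 3), ("b", 1)], "a": [("t", 2)], "b": [("t", 7)], "t": []}

@lru_cache(maxsize=None)
def longest(v):
    """Return (length, path) of the longest path from v to the sink 't'."""
    if v == "t":
        return 0, ("t",)
    # Tuples compare by length first, so max picks the heaviest extension.
    return max((w + longest(u)[0], (v,) + longest(u)[1]) for u, w in succ[v])

length, path = longest("s")  # s -> b -> t, of length 1 + 7 = 8
```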
Recall that for a measurement (xi, li) it holds that li = Σ_{e∈xi} we + d, where |d| ≤ µxi ≤ µmax. Thus, in general,
D needs to be large enough to ensure that Problem 2 is feasible.
By the assumption, taking D = µmax yields a feasible ILP.
Lemma 1. Problem 2 is feasible for D = µmax.

However, the value of µmax is neither directly observable nor known, as it depends on the actual hardware the program is running on. We now show how to obtain a valid D yielding a feasible ILP in Problem 2. Later we show under what circumstances the solution of the resulting ILP gives the correct longest path.

Consider the following LP.³

Problem 3. Input: DAG G and a set of measurements M
  min  µ
  s.t. li − µ ≤ Σ_{e∈xi} we ≤ li + µ   for each measurement (xi, li) ∈ M
  vars: we ≥ 0 for each edge e; µ ≥ 0

Intuitively, the problem above finds the least value of D for which Problem 2 is feasible, i.e., the least D consistent with the given set of measurements M. Formally, we have:

Theorem 2. Let p(µ) be the solution of Problem 3. Then taking D = p(µ) in Problem 2 yields a feasible ILP.

Proof: First, note that Problem 3 always has a solution, e.g., take we = 0 and µ = maxi li.

Notice that, assuming there is at least one source-to-sink path, the only possible way for Problem 2 to be infeasible is that D is small enough that the constraints li − D ≤ Σ_{e∈xi} we ≤ li + D are inconsistent.

However, by the construction of Problem 3, p(µ) satisfies li − p(µ) ≤ Σ_{e∈xi} we ≤ li + p(µ) for every path xi. The result now immediately follows.

Note that, by construction, taking µ = µmax in Problem 3 is feasible. Hence, D ≤ µmax, as D is the least value consistent with the measurements.

Notice that the solution of Problem 3 can be used as a measure of the timing repeatability of the underlying hardware platform. In the case of perfect timing repeatability, that is, if each edge (each statement of the underlying program) always took exactly the same time to execute (regardless of concrete cache hits, misses, branch predictions, etc.) and dx = 0 for every measured path, then the solution of Problem 3 would be 0. Conversely, the larger the solution of Problem 3, the bigger the discrepancy between different measurements.

To measure the effect of timing repeatability on the computed value of D, we have taken measurements for the set of benchmarks used to evaluate our tool (Section IV-B) and randomly perturbed the measured execution times. We have perturbed each measurement randomly by up to 10%, 25% and 50%. Table I shows the calculated values of D. As expected, the larger the perturbation, the larger the calculated value of D.

³ Note that Problem 3 is a linear program and not an integer linear program.

TABLE I: Different values of D obtained in the benchmarks by perturbing the measurements.

Benchmark     |     0% |    10% |    25% |    50%
altitude      |   57.0 |   66.1 |   87.8 |  126.5
stabilisation |  343.2 |  371.3 |  807.1 | 1107.2
automotive    | 1116.0 | 1281.3 | 1486.9 | 2961.3
cctask        |   73.9 |  110.6 |  150.9 |  270.6
irobot        |   37.2 |   95.9 |  288.8 |  552.8
sm            |    0.1 |   23.2 |  117.4 |  216.8
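On a tiny invented instance, the LP of Problem 3 can be mimicked with a brute-force grid search over edge weights; the least µ consistent with two repeated, disagreeing runs of the same path is half their gap. This is a sketch only, not the LP solver used in the tool:

```python
import itertools

# Toy instance of Problem 3 (values invented): two runs of the one-edge
# path ("e1",) disagree by 4, so no edge weights are consistent with both
# unless mu >= 2.  A coarse grid search stands in for the LP solver.
measurements = [(("e1",), 10.0), (("e1",), 14.0), (("e1", "e2"), 20.0)]
edge_names = ["e1", "e2"]

def mu_for(weights):
    """Least mu making these edge weights consistent with all measurements."""
    w = dict(zip(edge_names, weights))
    return max(abs(sum(w[e] for e in p) - l) for p, l in measurements)

grid = [x / 2 for x in range(61)]  # candidate weights 0.0, 0.5, ..., 30.0
best_mu = min(mu_for(ws) for ws in itertools.product(grid, repeat=2))
# best_mu is attained e.g. at w(e1) = 12, w(e2) = 8
```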
C. Optimality
The solution of Problem 2 is the best estimate of the longest
path that is consistent with measurements M. We now show
how good the estimate is.
Consider the solution of Problem 2 with D equal to the
solution of Problem 3. For each edge e, let p(we ) denote the
value of the variable we in the solution, and let τ be the path
corresponding to the solution of Problem 2. Denote the length
of τ in the solution by p(len(τ )). We now show how much
p(len(τ )) differs from the actual length of τ . Specifically, we
shall show that the goodness of the estimate of the length of
τ is related to the following ILP4 .
Problem 4. Input: DAG G and a set of measurements M
  max  |len(path)|
  s.t. −1 ≤ Σ_{e∈xi} we ≤ +1   for each measurement (xi, li) ∈ M
  vars: we for each edge e
The existence of a path and the length of the path is
expressed in the above ILP in exactly the same way as was
done in Problem 2. Note that the above ILP is always feasible
with |len(path)| at least 1; one solution is to set we = 1 for
one edge outgoing from the source and set we = 0 for all other
edges. Further, note that Problem 4 depends only on the graph
and the set of the measured basis paths; it is independent of
the (measured) lengths of the paths. In fact, we can show that
as long as some path does not appear in the measurements
M, the solution of the above ILP is strictly greater than 1.
Theorem 3. Let G be a DAG, M a set of measurements and
π a source-to-sink path in G such that π is not present in M.
Then the solution of Problem 4 is strictly greater than 1.
Proof: We give a satisfying assignment to the variables we
in Problem 4 such that len(π) > 1.

Specifically, let ei be the first edge of π, that is, the edge of π leaving the source of G. Further, let D = {(u, v) | u ∈ π, (u, v) ∉ π}
be the set of edges branching off π, i.e., edges whose initial vertex lies on π but which do not themselves belong to π. Then the
assignment to the weights we is as follows:

  we = 1 + 1/|E|  if e = ei,
  we = −1/|E|     if e ∈ D,
  we = 0          otherwise.

⁴ To find the absolute value |len(path)| we solve two linear programs: one with the objective function max len(path) and one with the objective function max −len(path).
Note that, with this assignment to the we's, the length of π
equals len(π) = 1 + 1/|E| > 1. Now, consider any other path τ
used in the measurements M. In particular, τ ≠ π. There are |E|
edges in G and the weight we associated with each edge e is
at least −1/|E|. Hence, len(τ) ≥ |E| × (−1/|E|) = −1.

Now, if τ does not include ei, that is, ei ∉ τ, then len(τ) ≤ 0,
as wei is the only positive weight. If τ includes ei, that is,
ei ∈ τ, then τ necessarily contains at least one edge from
D, as τ is different from π; hence len(τ) ≤ 1 + 1/|E| − 1/|E| = 1. In any case,
−1 ≤ len(τ) ≤ 1 as required, and thus we have given a valid
assignment to the we's with len(π) > 1.
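The proof's weight assignment can be sanity-checked on a small example. Below, a 2-diamond DAG of our own construction, with π the all-bottom path and the three remaining paths playing the role of the measured paths:

```python
# Sanity check of the weight assignment from the proof of Theorem 3 on a
# 2-diamond DAG (invented example).  pi is the all-bottom path, which is
# missing from the measured paths.
E = ["t0", "b0", "t1", "b1"]               # top/bottom edge of each diamond
all_paths = {"tt": ["t0", "t1"], "tb": ["t0", "b1"],
             "bt": ["b0", "t1"], "bb": ["b0", "b1"]}
pi = all_paths["bb"]
measured = [p for name, p in all_paths.items() if name != "bb"]

e_i = pi[0]                                 # first edge of pi, leaving the source
D = {"t0", "t1"}                            # edges branching off pi

w = {e: 0.0 for e in E}
w[e_i] = 1 + 1 / len(E)                     # 1 + 1/|E| = 1.25
for e in D:
    w[e] = -1 / len(E)                      # -1/|E| = -0.25

def length(p):
    return sum(w[e] for e in p)
# len(pi) = 1.25 > 1, while every measured path stays within [-1, 1].
```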
Recall that the set S denotes the set of paths occurring in the
measurements M. Let r(we) and r(µx) be the real values of
we for each edge e and µx for each path x ∈ S. Then for each
edge e the expression |p(we) − r(we)| denotes the difference
between the calculated value of we and the actual value of
we. Analogously, the expression extends to entire paths: for
a path x we have p(wx) = Σ_{e∈x} p(we). Now, the difference
for the worst path can be bounded as follows.
Theorem 4. Let k be the solution of Problem 4. Then
|p(len(τ )) − r(len(τ ))| ≤ 2kµmax
Proof: Note that, by construction, r(we) and r(µx) are a
solution of Problem 3. Hence, for every (xi, li) ∈ M it holds
that

  li − µmax ≤ li − r(µxi) ≤ Σ_{e∈xi} r(we) ≤ li + r(µxi) ≤ li + µmax.

Recall that D ≤ µmax and that p(we) is a solution of
Problem 2. Hence, for every (xi, li) ∈ M it holds that:

  li − µmax ≤ li − D ≤ Σ_{e∈xi} p(we) ≤ li + D ≤ li + µmax.

Hence, by subtracting the last two inequalities from each
other, we have for any basis path xi ∈ S that:

  −2µmax ≤ Σ_{e∈xi} (p(we) − r(we)) ≤ 2µmax.

Now, dividing by 2µmax we have:

  −1 ≤ Σ_{e∈xi} (p(we) − r(we)) / (2µmax) ≤ 1

for any basis path xi ∈ S.

Thus, the above inequality implies that taking

  we = (p(we) − r(we)) / (2µmax)

is a (not necessarily optimal) solution of Problem 4. Since k
is the length of the longest path achievable in Problem 4, it
follows that for any path x (not necessarily in the basis), we
have

  −k ≤ Σ_{e∈x} (p(we) − r(we)) / (2µmax) ≤ k.
TABLE II: Comparison of the accuracy in the longest path
extraction between our algorithm and the one in [11]. For our
accuracy, we take 2·k where k is the solution of Problem 4.
For [11] we take 2·(# basis paths).

Benchmark     | # Basis Paths | [11] Accuracy | Our Accuracy
altitude      |             6 |            12 |         10.0
stabilisation |            10 |            20 |         16.4
automotive    |            13 |            26 |         14.0
cctask        |            18 |            36 |         34.0
irobot        |            21 |            42 |         18.0
sm            |            69 |           138 |         48.6
By rearranging, we have

  −2kµmax ≤ Σ_{e∈x} (p(we) − r(we)) ≤ 2kµmax.

In other words, the calculated length differs from the real
length by at most 2kµmax, as desired.
Recall that for the algorithm in [11] the difference between the
estimated and the actual length is at most 2|E|µmax, whereas for our
algorithm it is 2kµmax, where k is the solution of Problem 4.
Observe that the dependence in the error term on µmax is
unavoidable as µmax is inherent to the timing properties of
the underlying platform.
For comparison, we have generated the same basis as in [11]
and calculated the corresponding k’s for several benchmarks.
Table II summarizes the results (see Table IV for the description of benchmarks). As can be seen from the table, when
using the same set of measurements, our method gives more
accurate estimates than the one in [11].
Furthermore, recall that in Problem 3 we calculate the
best (lower) bound D on µmax consistent with the given
measurements. Together with the above theorem, this gives
“error bounds” to the estimate in Problem 2. Specifically, if
the length of the longest path computed in Problem 2 is T,
then any measured length of the path consistent
with the measurements is within T ± (2k × D). However,
note that this is only the best bound deducible from the
measurements since D ≤ µmax . Since µmax is not directly
observable and we assume no nontrivial bound on µmax , the
length of the path cannot be bounded more accurately without
actually measuring the path.
The above analysis applies to the extraction of the single
longest path. Now, suppose that instead of extracting just one
longest path, we want to extract the K longest paths. To that
end, we iterate the above procedure and, whenever a path is
extracted, we add a constraint eliminating the path from the
solution of Problem 2 and then solve the updated ILP. For a
path x, the constraint eliminating it from the solution space of
Problem 2 is Σ_{e∈x} be < |x|. The constraint specifies that not
all the edges along x can be taken together in the solution.
As the predicted length and the measured length differ,
it may happen (e.g., Table V) that the path predicted to be the longest is, when measured, not
actually the longest. Thus, to find the longest path, we may
need to iterate the above process, generating paths with
ever smaller predicted lengths and stopping whenever the current
estimate differs by more than (2k × D) from the longest
estimate.
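A brute-force sketch of this iteration on a toy graph of our own (the real algorithm re-solves the ILP with the exclusion constraint Σ_{e∈x} be < |x| instead of enumerating paths):

```python
# Top-K extraction by repeated "find longest, then exclude it" on a toy
# DAG with invented weights, given as {node: {successor: edge_weight}}.
succ = {"s": {"a": 4, "b": 3}, "a": {"t": 1}, "b": {"t": 5}, "t": {}}

def all_paths(v="s", acc=("s",), w=0):
    """Yield (length, path) for every s-t path (the graph is a DAG)."""
    if v == "t":
        yield w, acc
    for u, cost in succ[v].items():
        yield from all_paths(u, acc + (u,), w + cost)

def top_k(k):
    excluded, result = set(), []
    for _ in range(k):
        candidates = [(w, p) for w, p in all_paths() if p not in excluded]
        if not candidates:
            break
        w, p = max(candidates)   # longest remaining path
        excluded.add(p)          # stands in for the ILP exclusion constraint
        result.append((w, p))
    return result
```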
TABLE III: Number of paths generated by Algorithm 1 to
reach the desired accuracy (second column). Third column
shows the accuracy (solution of Problem 4) of the generated
set of paths
Benchmark
D. Basis Computation
The algorithm (Problem 2) to estimate the longest path
depends on the set of measurements M of the basis paths S.
In this section we show how to calculate such a set of paths.
In general, arbitrary set of paths can be used as basis paths.
For example, we have shown in Table III that using the set of
paths used in [11], we are able to get more accurate estimates
than those obtained in [11].
Recall that (Theorem 4) the accuracy of the solution of
Problem 2 is tightly coupled with the solution of Problem 4.
This leads to a tunable algorithm, which depending on the
desired accuracy of the predictions, calculates a set of paths
to be used in Problem 2.
Specifically, given a desired accuracy A ∈ R, A ≥ 1, we
want to find a set of feasible paths S such that the solution of
Problem 4 is at most A. We implemented a simple iterative
algorithm (Algorithm 1) that finds such a set by repeatedly
extending the set of paths by the path corresponding to the
solution of Problem 4.
In the algorithm, if the longest extracted path is infeasible
in the underlying program, we add a constraint into the ILP
(Problem 4) prohibiting the path. That is, if the longest path is
infeasible and τ is the unsatisfiable core of the longest path5,
then we add a constraint that not all the b_e’s corresponding to
the edges used in τ are set to 1, i.e., ∑_{e∈τ} b_e < |τ|, where
|τ| denotes the number of edges in τ. Then we solve
the updated ILP.
S ← ∅
while (Solution of Problem 4 with paths S) > A do
    x ← longest path in Problem 4
    if x is feasible then
        S ← S ∪ {x}
    else
        Add a constraint prohibiting x
    end
end
return S
Algorithm 1: Iterative algorithm for basis computation
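The control flow of Algorithm 1 can be sketched as below. This is only a hedged skeleton: `solve_problem4`, its toy accuracy model, and the path names are illustrative stand-ins — the real algorithm solves an ILP for Problem 4 and consults an SMT solver for feasibility.

```python
# Stand-ins for the ILP and SMT calls, faked on a tiny path universe.
ALL_PATHS = ["p1", "p2", "p3", "p4"]   # hypothetical path names
FEASIBLE = {"p1", "p2", "p4"}          # pretend p3 is infeasible

def solve_problem4(S, prohibited):
    """Toy model of Problem 4: the accuracy is 1 plus the number of
    paths not yet covered; also return such a path as the witness."""
    remaining = [p for p in ALL_PATHS if p not in S and p not in prohibited]
    if not remaining:
        return 1.0, None
    return 1.0 + len(remaining), remaining[0]

def compute_basis(A):
    """Algorithm 1: grow S until the solution of Problem 4 is <= A."""
    S, prohibited = set(), set()
    accuracy, x = solve_problem4(S, prohibited)
    while accuracy > A:
        if x in FEASIBLE:
            S.add(x)              # S <- S ∪ {x}
        else:
            prohibited.add(x)     # constraint prohibiting x
        accuracy, x = solve_problem4(S, prohibited)
    return S

basis = compute_basis(1.0)
```

With A = 1 the loop keeps adding paths (and prohibiting infeasible ones) until every feasible path is covered, mirroring the termination argument of Theorem 5.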
Theorem 5. If A ≥ 1 then Algorithm 1 terminates with a
set P of paths such that the solution of Problem 4 with paths
P is at most A.
Proof: Note that each constraint in Problem 4 limits the
length of some path to (at most) 1. In particular, if S contains
all the paths in the graph then the solution of Problem 4
equals 1.
5 Minimal set of edges that cannot be taken together as identified by an
SMT solver
Benchmark      Desired k  Actual k  # Basis Paths  Time(s)
altitude       10         5.0       7              0.03
altitude       5          5.0       7              0.03
altitude       2          1.0       10             1.66
stabilisation  10         7.0       11             0.10
stabilisation  5          4.7       12             0.96
stabilisation  2          2.0       40             22.78
automotive     10         7.0       14             0.14
automotive     5          5.0       14             0.89
automotive     2          2.0       30             27.60
cctask         10         9.0       19             0.20
cctask         5          5.0       25             4.10
cctask         2          2.0       76             42.91
irobot         10         9.0       22             0.50
irobot         5          5.0       34             20.13
irobot         2          2.0       118            182.42
sm             22         21.8      70             328.14
sm             18         18.0      73             7089.04
sm             15         14.5      77             10311.49
Further, if the algorithm finds some path x to be the longest
in some iteration then the length of x in all subsequent
iterations will be at most 1 as x ∈ S. Therefore, as long as the
solution is greater than 1, the longest path found is different
from all the paths found in the previous iterations.
Also, if the path is infeasible, then a constraint is added
that prevents the path occurring in any subsequent iterations.
It follows from these considerations that the algorithm keeps
adding new paths in each iteration and eventually terminates.
By construction, the solution of Problem 4 with the set of
paths S is at most A.
In the extreme case of A = 1, it immediately follows from
Theorem 3 that Algorithm 1 returns all feasible paths in the
underlying graph.
We have implemented the above iterative algorithm and
evaluated it on several case studies. Table III summarizes the
number of paths generated by the algorithm for the given
accuracy k as well as the running time required to find those
paths.
We have observed that the basis computation took a substantial
part of the total running time. However, the basis-computation
algorithm (Algorithm 1) need not start with
S = ∅ and works correctly for any initial collection of paths S.
Therefore, as an optimization, we first compute the initial
set of paths S using the original algorithm from [11], which
computes the 2-barycentric spanner of the underlying DAG.
Only then do we proceed with the iterative algorithm to find a
set of paths with the desired accuracy.
Figure 4 shows the performance of the iterative algorithm
on two benchmarks. The decreasing (blue) line shows how
the accuracy k decreases with each path added to the basis.
The increasing (red) line shows the time needed to find
each path. The figure shows only the performance after the
precomputation of the 2-barycentric spanner.
Fig. 4: Performance of Algorithm 1. The decreasing (blue)
line shows the length k of the path x computed in line 3. The
increasing (red) line shows the time taken to perform each
iteration.
[Plots omitted: panels (a) cctask and (b) irobot, each showing
the accuracy k and the per-iteration time in seconds against the
number of basis paths.]
IV. EVALUATION
A. Implementation
The algorithm to identify the longest path (up to accuracy k)
in a given program P is shown in Algorithm 2.
We now briefly describe the implementation of the main stages
of Algorithm 2. Our implementation is built on top of [11].
See [11] for further discussion of technical details.
CFG extraction The procedure begins by building the
CFG G of the given program. The vertices correspond to
program locations and the edges to individual statements. The
extraction is built on top of the CIL front end for C [14]. Note
that all ILPs (Problems 2, 3 and 4) introduce a variable for
each edge of G
Extract CFG G from the program P
S ← basis with accuracy at most k (Algorithm 1)
D ← Solution of Problem 3
τ ← Solution of Problem 2 with paths S and D
return τ and its estimated length
Algorithm 2: Algorithm to find the worst-case execution
time. Input: Program P, accuracy k
and that each problem optimizes for the length of a source-to-sink
path. Thus, if some edge is always followed by another
one (the in- and out-degree of the joining vertex is one), then
the edges can be merged into a single one6 without changing
the solution of the ILP problems. Therefore, we preprocess G
by merging edges that lie on a single path into a single edge.
This reduces the number of variables used and improves the
performance.
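The merging preprocessing can be sketched as follows. The edge-list representation and the function name are assumptions for illustration, not the paper's implementation: vertices whose in- and out-degree are both one are spliced out, and their two incident edges are combined into one whose weight is the sum.

```python
from collections import defaultdict

def merge_chains(edges):
    """Collapse every vertex with in-degree == out-degree == 1.
    edges: list of (u, v, w) triples; returns a reduced edge list.
    Per the text, the ILP solutions are unchanged by this merge."""
    indeg, outdeg = defaultdict(int), defaultdict(int)
    for u, v, _ in edges:
        outdeg[u] += 1
        indeg[v] += 1
    edges = list(edges)
    changed = True
    while changed:
        changed = False
        for mid in list(indeg):
            if indeg[mid] == 1 and outdeg[mid] == 1:
                a, _, w1 = next(e for e in edges if e[1] == mid)
                _, b, w2 = next(e for e in edges if e[0] == mid)
                # Splice mid out; the neighbours' degrees are unchanged,
                # so the cached degree counts stay valid.
                edges = [e for e in edges if mid not in (e[0], e[1])]
                edges.append((a, b, w1 + w2))
                del indeg[mid], outdeg[mid]
                changed = True
                break
    return edges

reduced = merge_chains([("s", "a", 1), ("a", "b", 2),
                        ("b", "t", 3), ("s", "t", 10)])
```

Here the chain s→a→b→t collapses into a single edge s→t of weight 6, leaving two parallel s→t edges that the per-edge ILP variables handle directly.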
Basis computation The basis is computed as described in
Section III-D, Algorithm 1. We use the Gurobi solver [15] to
solve the individual ILPs and use the 2-barycentric spanner as
the initial set S. Solving the ILPs was the main bottleneck
of our approach.
Input generation Each path through the CFG G corresponds
to a sequence of operations in the program and hence
to a conjunction of statements. For a given path, we use the
Z3 SMT solver [16] to find an input that corresponds to
the given path, or to prove that the path is infeasible and no
corresponding input exists. In the experiments, SMT solving
was fairly fast.
Longest-path extraction We solve Problem 2 using the Gurobi
ILP solver [15]. If the extracted path π is infeasible, we add
a constraint (∑_{e∈π} b_e < |π|) eliminating the path from the
solution space and solve the new problem. Similarly, we solve for
the K longest paths: we add a constraint prohibiting the extracted
path and solve the resulting ILP again, repeating until we
successfully generate K feasible paths.
Recall that the calculated length of the path is accurate only
up to the precision 2k × D. Thus, we can repeat the process
until the length of the extracted path is outside of the range
for the longest extracted path. However, in practice we did
not find this necessary and found the longest path after a few
iterations.
Path-Length Measurement To measure the execution time
of the given program on a given input, we create a C program
where we set the input to the given values and then run it using
a cycle-accurate simulator of the PTARM processor [17].
B. Benchmarks
We have evaluated our algorithm on several benchmarks
and compared it with the algorithm in [11]. We used the same
benchmarks as in [11], as well as benchmarks from control
tasks in robotic and automotive settings. The benchmarks
in [11] come from the Mälardalen benchmark suite [18] and the
6 For example, in Figure 2, every diamond can be replaced by two edges,
one edge for the top half and one edge for the bottom half.
TABLE IV: Number of nodes, edges and paths in the CFGs
extracted from the benchmarks
Benchmark      # Nodes  # Edges  # Paths
altitude       36       40       11
stabilisation  64       72       216
automotive     88       100      506
cctask         102      118      657
irobot         170      195      8136
sm             452      523      33,755,520
PapaBench suite [19]. The authors of [11] chose implementations
of actual real-time systems (as opposed to hand-crafted
ones) that have several paths and are of various sizes, but do not
require automatic estimation of loop bounds.
Since we assume all programs contain only bounded loops
and no recursion, we preprocessed the programs by unrolling
the loops and inlining the functions where necessary. Table IV
summarizes properties of the benchmarks used.
Table V shows the lengths of five longest paths as found by
our algorithm and the one in [11]. The first half of the table
shows the results as obtained by the algorithm in [11]. The
second half shows the results as obtained (together with the
“error bounds”) by our algorithm (Algorithm 2).
Note that in each case the longest path returned by our
algorithm is never shorter than the longest path found by [11].
In half of the cases, our algorithm is able to find a path longer
than the one in [11]. Also notice that the actual measured
length is always within the computed “error bounds”.
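That containment claim is simple arithmetic on the published numbers; using a few (predicted, k, D, measured) rows taken from Table V:

```python
def within_bounds(predicted, k, D, measured):
    """True if measured lies in [predicted - k*D, predicted + k*D]."""
    return abs(measured - predicted) <= k * D

# Rows from Table V: (predicted, k, D, measured)
rows = [
    (909, 1.0, 57.0, 867),        # altitude, longest path
    (11338, 2.0, 1116.0, 11515),  # automotive, 4th path
    (1448, 2.0, 37.0, 1464),      # irobot, 5th path
]
checks = [within_bounds(*row) for row in rows]
```

For instance, on the altitude row the measured 867 cycles differs from the predicted 909 by 42, well inside the 1.0 × 57.0 bound.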
The biggest benchmark, sm, is a collection of nested switch-case
logic setting state variables, with minimal computation
otherwise. Hence, there is a large number of paths
(33,755,520) in the CFG, yet, as expected, the computed D
is small.
V. CONCLUSION
In this paper, we have addressed the problem of estimating
the worst-case timing of a program via systematic testing
on the target platform. Our approach not only generates an
estimate of worst-case timing, but also produces test cases
showing how that timing is exhibited on the platform. Our
approach improves the accuracy of the previously published
GameTime approach, while also providing error bounds on the
estimate.
Note that our approach can be adapted to produce timing
estimates along arbitrary program paths. In order to do this,
one can fix the variables b_e in Problem 2 suitably. Thus, we can
also estimate the longest execution of a given path that is
consistent with the measurements.
In the paper we have analyzed the timing behavior of a
given program. However, instead of measuring cycles we can
measure energy consumption of the program executions. The
same techniques can then be applied to find the input consuming
the most energy. In general, the approach presented in this
paper can also be extended to other quantitative properties of
the program and is not limited only to WCET analysis.
REFERENCES
[1] Y.-T. S. Li and S. Malik, Performance Analysis of Real-Time Embedded
Software. Kluwer Academic, 1999.
[2] Reinhard Wilhelm et al., “The Determination of Worst-Case Execution
Times—Overview of the Methods and Survey of Tools,” ACM Transactions on Embedded Computing Systems (TECS), 2007.
[3] E. A. Lee and S. A. Seshia, Introduction to Embedded Systems: A Cyber-Physical Systems Approach, 1st ed. http://leeseshia.org, 2011.
[4] E. A. Lee, “Computing foundations and practice for cyber-physical
systems: A preliminary report,” University of California at Berkeley,
Tech. Rep. UCB/EECS-2007-72, May 2007.
[5] R. Kirner and P. Puschner, “Obstacles in worst-case execution time
analysis,” in ISORC, 2008, pp. 333–339.
[6] NASA Engineering and Safety Center, “NASA report on Toyota unintended acceleration investigation, Appendix A: Software,” http://www.nhtsa.gov/staticfiles/nvs/pdf/NASA FR Appendix A Software.pdf.
[7] S. A. Seshia and A. Rakhlin, “Game-theoretic timing analysis,” in
Proc. IEEE/ACM International Conference on Computer-Aided Design
(ICCAD), 2008, pp. 575–582.
[8] ——, “Quantitative analysis of systems using game-theoretic learning,”
ACM Transactions on Embedded Computing Systems (TECS), 2012.
[9] Y.-T. S. Li and S. Malik, “Performance analysis of embedded software
using implicit path enumeration,” in Proceedings of the 32Nd Annual
ACM/IEEE Design Automation Conference, ser. DAC ’95. New
York, NY, USA: ACM, 1995, pp. 456–461. [Online]. Available:
http://doi.acm.org/10.1145/217474.217570
[10] R. Wilhelm, “Determining bounds on execution times.” 2009.
[11] S. A. Seshia and A. Rakhlin, “Game-theoretic timing analysis,”
in Proceedings of the 2008 IEEE/ACM International Conference
on Computer-Aided Design, ser. ICCAD ’08. Piscataway, NJ,
USA: IEEE Press, 2008, pp. 575–582. [Online]. Available: http:
//dl.acm.org/citation.cfm?id=1509456.1509584
[12] Z. Wasson, “Analyzing data-dependent timing and timing repeatability
with gametime,” Master’s thesis, EECS Department, University of
California, Berkeley, May 2014. [Online]. Available: http://www.eecs.
berkeley.edu/Pubs/TechRpts/2014/EECS-2014-132.html
[13] B. Awerbuch and R. D. Kleinberg, “Adaptive routing with end-to-end feedback: Distributed learning and geometric approaches,” in
Proceedings of the Thirty-sixth Annual ACM Symposium on Theory of
Computing, ser. STOC ’04. New York, NY, USA: ACM, 2004, pp. 45–
53. [Online]. Available: http://doi.acm.org/10.1145/1007352.1007367
[14] George Necula et al., “CIL - infrastructure for C program analysis and
transformation,” http://manju.cs.berkeley.edu/cil/.
[15] Gurobi Optimization, Inc., “Gurobi optimizer reference manual,” 2015.
[Online]. Available: http://www.gurobi.com
[16] L. De Moura and N. Bjørner, “Z3: An efficient SMT solver,” in
Proceedings of the Theory and Practice of Software, 14th International
Conference on Tools and Algorithms for the Construction and
Analysis of Systems, ser. TACAS’08/ETAPS’08. Berlin, Heidelberg:
Springer-Verlag, 2008, pp. 337–340. [Online]. Available: http://dl.acm.
org/citation.cfm?id=1792734.1792766
[17] Center for Hybrid and Embedded Software, UC Berkeley, “The PTARM
simulator,” http://chess.eecs.berkeley.edu/pret/.
[18] Mälardalen WCET Research Group, “The Mälardalen benchmark suite,”
http://www.mrtc.mdh.se/projects/wcet/benchmarks.html.
[19] F. Nemer, H. Cass, P. Sainrat, J.-P. Bahsoun, and M. D. Michiel,
“PapaBench: a free real-time benchmark,” in WCET ’06, 2006.
TABLE V: Comparison of the results produced by the algorithm presented in this paper and in [11] for generating the top five
longest paths for a set of benchmarks. Column Predicted shows the length predicted by each algorithm. Column Measured
shows the running time measured when run on the corresponding input. For our algorithm, we give “error bounds” of the form
k × D. The column Time shows the time it takes to generate the estimates and test cases. The largest measured value for each
benchmark is shown in bold.
Benchmark      GameTime [11]                    Our Algorithm
               Predicted  Measured  Time(s)     Predicted              Measured  Time(s)
altitude       867        867       11.7        909 ± 1.0 × 57.0       867       14.4
               789        789                   815 ± 1.0 × 57.0       758
               776        751                   732 ± 1.0 × 57.0       789
               659        763                   719 ± 1.0 × 57.0       737
               581        581                   638 ± 1.0 × 57.0       581
stabilisation  4387       3697      26.4        4303 ± 2.0 × 343.0     3599      77.2
               4293       4036                  4302 ± 2.0 × 343.0     4046
               4290       3516                  4285 ± 2.0 × 343.0     3944
               4286       3242                  4284 ± 2.0 × 343.0     3516
               4196       3612                  4248 ± 2.0 × 343.0     3697
automotive     13595      8106      47.0        11824 ± 2.0 × 1116.0   10982     93.8
               11614      9902                  11696 ± 2.0 × 1116.0   10657
               11515      11515                 11424 ± 2.0 × 1116.0   10577
               11361      5010                  11338 ± 2.0 × 1116.0   11515
               11243      11138                 9830 ± 2.0 × 1116.0    9263
cctask         991        808       29.4        870 ± 2.0 × 73.0       861       138.4
               972        605                   869 ± 2.0 × 73.0       858
               943        852                   866 ± 2.0 × 73.0       897
               936        848                   865 ± 2.0 × 73.0       897
               924        821                   861 ± 2.0 × 73.0       873
irobot         1462       1430      50.9        1451 ± 2.0 × 37.0      1406      269.0
               1459       1463                  1450 ± 2.0 × 37.0      1411
               1457       1418                  1449 ± 2.0 × 37.0      1411
               1454       1451                  1448 ± 2.0 × 37.0      1411
               1454       1463                  1448 ± 2.0 × 37.0      1464
sm             2553       2550      211.0       2552 ± 21.81 × 0.2     2550      3290.2
               2551       2550                  2551 ± 21.81 × 0.2     2550
               2536       2537                  2536 ± 21.81 × 0.2     2537
               2534       2537                  2536 ± 21.81 × 0.2     2537
               2532       2537                  2531 ± 21.81 × 0.2     2537
| 6 |